
Part 1
Setting the Stage


Introduction to Part 1

Alfred V. Aho

The telecommunications/information infrastructure in the United States has been evolving steadily for more than a century. Today, sweeping changes are taking place in the underlying technology, in the structure of the industry, and in how people are using the new infrastructure to redefine how the nation's business is conducted.

The two keynote papers in this section of the report set the stage by outlining the salient features of the present infrastructure and examining the forces that have led to the technology, industry, and regulatory policies that we have today.

Robert Lucky discusses how the nation's communications infrastructure evolved technically to its current form. He outlines the major scientific and engineering developments in the evolution of the public switched telecommunications network and looks at the current technical and business forces shaping tomorrow's network. Of particular interest are Lucky's comments contrasting the legislative and regulatory policies that have guided the creation of today's telephone network with the rather chaotic policies of the popular and rapidly growing Internet.

Charles Firestone outlines the regulatory paradigms that have molded the current information infrastructure and goes on to suggest why these paradigms might be inadequate for tomorrow's infrastructure. He looks at the current infrastructure from three perspectives—the production, electronic distribution, and reception of information—and proposes broad goals based on democratic values for the regulatory policies of the different segments of the new information infrastructure.

The four papers that follow examine the use of the telecommunications infrastructure in several key application areas, identify major obstacles to the fullest use of the emerging infrastructure, and discuss how the infrastructure and concomitant regulatory policies need to evolve to maximize the benefits to the nation.

Colin Crook discusses how the banking and financial services industries rely on the telecommunications infrastructure to serve their customers on a global basis. He notes that immense sums of money are moved electronically on a daily basis around the world and that the banking industry cannot survive without a reliable worldwide communications network. He underscores the importance of an advanced public telecommunications/information infrastructure to the nation's continued economic growth and global competitiveness.

Edward Shortliffe notes that the use of computers and communications is not as advanced in health care as in the banking industry. He gives examples of how the use of information technology can both enhance the quality of health care and reduce waste. He stresses the importance of demonstration projects to help prove the cost-effectiveness and benefits of the new technology to the health care industry.


Robert Pearlman notes that, at present, K-12 education lacks an effective information infrastructure at all levels—national, state, school district, and school site. He presents numerous examples of how new learning activities and educational services on an information superhighway have the potential for improving education. He outlines the major barriers that need to be overcome to create a suitable information infrastructure for effective K-12 schooling in the 21st century.

In the final paper, Clifford Lynch looks at the future role of libraries in providing access to information resources via a national information infrastructure. He examines the benefits and barriers to universal access to electronic information. He notes that a ubiquitous information infrastructure will cause major changes in the entire publishing industry and that intellectual property rights to information remain a major unresolved issue.


The Evolution of the Telecommunications Infrastructure

Robert L. Lucky

I have been asked to talk about the telecommunications infrastructure—how we got here, where we are, and where we are going. I don't think I am going to talk quite so much about where we are going but rather about the problems. I will discuss what stops us from going further, and then I will make some observations about the future networking environment.

First, I will review the history of how we got where we are in the digitization of telecommunications today, and then I would like to address two separate issues that focus on the problems and the opportunities in the infrastructure today. The first issue is the bottleneck in local loop access, which is where I think the challenge really is. Then I will discuss two forces that together are changing the paradigm for communications—what I think of as the "packetizing" of communications. The two forces that are doing this are asynchronous transfer mode (ATM) in the telecommunications community and the Internet from the computer community. The Internet will be the focus of many of the subsequent talks in this session, since it seems to be the building block for the national information infrastructure (NII).

HOW THE TELECOMMUNICATIONS NETWORK BECAME DIGITAL

It is not as if someone decided that there should be an NII, and it has taken 100 years to build it. The history of the NII is quite a tangled story.

First, there was the Big Bang that created the universe, and then Bell invented the telephone. If you read Bell's patent, it actually says that you will use a voltage proportional to the air pressure in speech. Speech is, after all, analog, so it makes a great deal of sense that you need an analog signal to carry it. In the end Bell's patent is all about analog. Since that invention, it has taken us over 100 years to get away from the idea of analog.

A long time went by and the country became wired. In 1939, Reeves of ITT invented pulse code modulation (PCM), but it was 22 years before it became a part of the telephone plant, because nobody understood why it was a good thing to do. Even in the early 1960s a lot of people did not understand why digitizing something that was inherently analog was a good idea. No one had thought of an information infrastructure. Computer communications was not a big deal. The world was run by voice. But since then the telephone network has been digitized. As it happened, this was accomplished for the purposes of voice, not for computer communication.

So in 1960 we had analog voice and it fit in a 4-kilohertz channel, and we stacked about 12 of these together like AM radio and sent it over an open wire. That was the way communication was done. But if you take this 4-kilohertz voice channel and digitize it, it is 10 times bigger in bandwidth. Why do we do that? That seems like a really dumb idea. But you gain something, and that is the renewability of the digital form. When it is analog and you accumulate distortion and noise comes along, it is like Humpty Dumpty—you can't put it back together again. You suffer these degradations quietly.

The digits can always be reconstructed, and so in exchange for widening the bandwidth, you get the ability to renew it, to regenerate it, so that you do not have to go 1,000 miles while trying to keep the distortion and noise manageable. You only have to go about 1 mile, and then you can regenerate it—clean it up and start afresh. So that was the whole concept of PCM, this periodic regeneration. It takes a lot more bandwidth, but now you can stack a lot more voice signals, because you don't have to go very far.
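
To put rough numbers on the digitization just described, here is a back-of-the-envelope sketch in Python using the standard telephony PCM parameters (8,000 samples per second, 8 bits per sample), which the talk does not spell out; the bandwidth expansion factor depends on the line coding, so treat it as an order-of-magnitude figure.

```python
# Back-of-the-envelope PCM arithmetic, using standard telephony parameters
# (these are the textbook numbers, not figures quoted from the talk).

analog_channel_hz = 4_000                 # 4-kilohertz analog voice channel
sample_rate_hz = 2 * analog_channel_hz    # Nyquist rate: 8,000 samples per second
bits_per_sample = 8                       # 8-bit companded PCM

pcm_bit_rate = sample_rate_hz * bits_per_sample
print(f"PCM bit rate per voice channel: {pcm_bit_rate} bit/s")   # 64,000 bit/s

# The digital stream occupies on the order of ten times the spectrum of the
# 4-kHz analog channel (the exact factor depends on the line coding): more
# bandwidth in exchange for clean regeneration at each repeater.
print(f"Bit rate / analog bandwidth: {pcm_bit_rate / analog_channel_hz:.0f}")
```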

The reason that the network is transformed is not necessarily to make it more capable but to make it cheaper. PCM made transmission less expensive, since 24 voice channels could be carried, whereas in the previous analog systems only 6 voice channels could be carried. So digital carriers went into metropolitan areas starting in the early 1960s.
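
The 24-channel figure corresponds to the T-1 digital carrier. As an illustration, the widely documented T-1 frame arithmetic (not detailed in the talk) shows how 24 PCM voice channels add up to a 1.544-megabit-per-second line:

```python
# Standard T-1 (DS-1) frame arithmetic, included as an illustration of the
# well-known carrier format rather than anything derived from the talk.

channels = 24               # voice channels multiplexed onto one T-1
bits_per_channel = 8        # one 8-bit PCM sample per channel per frame
framing_bits = 1            # one framing bit per frame

bits_per_frame = channels * bits_per_channel + framing_bits   # 193 bits
frames_per_second = 8_000   # one frame every 125 microseconds

line_rate = bits_per_frame * frames_per_second
print(f"T-1 line rate: {line_rate / 1e6:.3f} Mbit/s")         # 1.544 Mbit/s
```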

At that time we had digital carriers starting to link the analog switches in the bowels of the network. More and more the situation was that digits were coming into switches designed to switch analog signals. It was necessary to change the digits to analog in order to switch them. Engineers were skeptical. Why not just reshuffle the bits around to switch them?

THE ADVENT OF DIGITAL SWITCHING

So the first digital switches were designed. The electronic switching system (ESS) number four went out into the center of the network where there were lots of bits and all you had to do was time shifting to effect switching. That seemed natural because all the inputs were digital anyway. It was some time before we got around to the idea that maybe the local switches could be digital, too, because the problem in the local switches is that, unlike the tandem switches, they have essentially all of their input signals in analog form.

I was personally working on digital switching in 1976. I did not start that work, but I was put in charge of it at that time. I remember a meeting in 1977 with the vice president in charge of switching development at AT&T. It was a very memorable meeting: we were going to sell him the idea that the next local switch should be digital.

We demonstrated a research prototype of a digital local switch. We tried to explain why the next switch in development should be digital like ours, but we failed. They said to us that anything we could do with digits, they could do with analog and it would be cheaper—and they were right on both scores. Where we were all wrong was that it was going to become cheaper to do it digitally, and if we had had the foresight we would have seen that intelligence and processing would be getting cheaper and cheaper in the future. But even though we all knew intellectually that transistor costs were steadily shrinking, we failed to realize the impact that this would have on our products. Only a few short years later those digital local switches did everything the analog switches did, and they did more, and they did it more cheaply.

The engineers who developed the analog switches pointed to their little mechanical relays. They made those by the millions for pennies apiece, and those relays were able to switch an entire analog channel. Why would anyone change the signals to digits? So the development of analog switching went ahead at that time, but what happened quickly was the advent of competition enabled by the new digital technology. There was a window of opportunity where the competition could come in and build digital switches, so AT&T was soon forced to build a digital switch of its own, even though it had a big factory that made those nice little mechanical relays.

The next major event was that fiber came along in 1981. Optical fiber was inherently digital, in the sense that you really could not at that time send analog signals over optical fiber without significant distortion. Cable TV is doing it now, but in 1980 we did not know how to send analog signals without getting cross-talk between channels. So from the start optical fiber was considered digital, and we began putting in fiber systems because they were cheaper. You could go further without regenerating signals than was possible with other transmission media, so even though the huge capacity of optical systems was not needed at first, it went in because it was a cheaper system. It went in quickly beginning in 1981, and in the late 1980s AT&T wrote off its entire analog plant. The network was declared to be digital.

This was incredible because in 1984—at divestiture—we at AT&T believed that nobody could challenge us. It had taken us 100 years to build the telephone plant the way it was. Who could duplicate that? But what happened was that in the next few years AT&T built a whole new network, and so did at least two other companies! And, in fact, just a few years ago, you would not have guessed that the company with the third most fiber in the country was a gas company—Wiltel, or the Williams Gas Company. So everybody could build a network. All of a sudden it was cheap to build a long-distance network and a digital one, inherently digital because of the fiber.

In the area of digital networking we have been working for many years on integrated services digital network (ISDN). We all trade stories about when we all went to our first ISDN meeting. I said, "Well, I went to one 25 years ago," and he said, "27," so he had me—that kind of thing. ISDN is one of those things that still may happen, but in the meantime we have another revolution coming along beyond the basic digitization of the network—we have the packetizing of communication and ATM and the Internet.

Internet began growing in the 1970s, and now we think of it as exploding. Between Internet and ATM something is happening out there that is doing away with our fundamental concept for wired communications. First we did away with analog, and then we had streams of bits, but now we are doing away with the idea of a connection itself. Instead of a circuit with a continuous channel connecting sender and receiver, we have packets floating around disjointedly in the network, shuttling between switching nodes as they seek their separate destinations. This packetization transforms the notion of communication in ways that I don't think we have really come to grips with yet.

Where we stand in 1993 is this. All interexchange transmission is digital and optical. So is undersea transmission, which is currently the strongest traffic growth area: between nations we have increasing digital capability and much cheaper prices. The majority of the switches in the network are now digital—both the local and the tandem switches. But there is a very important point here. In these digital switches a voice channel is equated with a 64-kilobits-per-second stream. It is not as if there were an infinite reservoir to do multimedia switching and high-bandwidth applications, because the channel equals 64 kilobits per second in these switches. They are not broadband switches in terms of either capacity or flexibility.

Another important conceptual revision that has occurred during the digital revolution has been one involving network intelligence. The ESS number five, AT&T's local switch, has a 10-million instructions per second (MIPS) processor as its central intelligence. That was a big processor at the time the switch was designed, but now the switch finds itself connected to 100-MIPS processors on many of its input lines. The bulk of intelligence has migrated to the periphery, and the balance of the intelligence has been seeping out of the network. I always think of Ross Perot's giant sucking noise or, as George Gilder wrote, the network as a centrifuge for intelligence. This has been a direct outgrowth of the personal computer (PC) revolution.

THE BOTTLENECK: LOCAL LOOP ACCESS

Let us turn now to loop access, where I have said that the bottleneck exists. Loop access is still analog, and it is expensive. The access network represents about 80 percent of the total investment in a network. That is where the crunch is, and it is also the hardest to change. There is a huge economic flywheel to overcome in changing the access plant, whereas the backbone, as we have seen, can be redone in a matter of a few years at moderate expense.

There are many things happening in the loop today, but I see no silver bullet here. People always say there ought to be some invention that is going to make it cheap to get from the home into the network, but the problem is that in the loop there is no sharing of cost among subscribers. In the end you are on your own, and there is no magic invention that makes this individual access cheap.

Today everybody is trying to bring broadband access into the home, and everybody is motivated to do this. The telephone companies want a new source of growth revenue, because their present business is not considered a good one for the future. The growth rate is very small and it is regulated, and so they see the opportunity in broadband services, in video services, and in information services, and they are naturally attracted to these potential businesses. On the other hand, the cable television companies are coming from a place where they want to get into information services and into telephony services. So from the perspective of the telephone companies, not only is the conventional telephone business seen as unattractive, but there is also competition coming into it, which makes it even less attractive.

There are many alternatives for putting broadband service into the home. There are many different architectures and a number of possible media. Broadband can be carried into the home by optical fiber, by coaxial cable, by wireless, and even by the copper wire pairs that are currently used for voice telephony.

The possibilities for putting fiber into homes are really a matter of economics. The different architectural configurations differ mainly in what parts of the distribution network are shared by how many people. There is fiber to the home, fiber to the curb, fiber to the pedestal, and fiber to the whatever! The fact is that when we do economic studies of all these, they don't seem to be all that different. It costs about $1,100 to put a plain old telephone service (POTS) line into a home and about $500 more to add broadband access to that POTS line. You can study the component costs of all these different architectures, but it just does not seem to make a lot of difference. It is going to be expensive on some scale to wire the country with optical fiber.

The current estimate is that it would be on the order of $25 billion to wire the United States. This is incremental spending over the next 15 years to add optical fiber broadband. Moreover, we are probably going to do that not only once, but twice, or maybe even three times. I picture standing on the roof of my house and watching them all come at me. Now we hear that even the electric power utilities are thinking of wiring the country with fiber. It is a curious thing that we are going to pay for this several times, but it is called competition, and the belief is that competition will serve the consumer better than would a single regulated utility.
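
Purely as an illustration, and not a reconstruction of the underlying study, combining the rough $25 billion national figure with the $500-per-home broadband increment quoted above gives a sense of the scale being assumed:

```python
# Illustrative arithmetic only: the assumption that the national total is
# simply (homes upgraded) x (per-home increment) is mine, not the study's.

national_estimate = 25e9         # ~$25 billion incremental over ~15 years
per_home_broadband_add = 500     # ~$500 to add broadband to an existing POTS line

homes_implied = national_estimate / per_home_broadband_add
print(f"Implied homes upgraded: {homes_implied / 1e6:.0f} million")   # ~50 million

years = 15
print(f"Average spend per year: ${national_estimate / years / 1e9:.2f} billion")
```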

Because the investment in fiber is so large, it will take years to wire the country. Figure 1 shows the penetration of fiber in offices and the feeder and distribution plants by year, and the corresponding penetration of copper. You can see the kind of curves you get here and the range of possible times that people foresee for getting fiber out to the home. If you look at about a 50 percent penetration rate, it is somewhere between the year, say, 2001 and 2010. This longish interval seems inconsistent with what we would like to have for the information revolution, but that is what is happening right now with the economic cycle running its normal course.

The telephone company is not the only one that wants to bring fiber to your home. The cable people have the same idea, and Figure 2 shows the kind of architecture they have. They have a head end with a satellite dish and a video-on-demand player, or more generally a multimedia server. They have fiber in the feeder portion of their plant, and they have plans for personal communication system (PCS) ports to collect wireless signals. They have their own broadband architecture that will be in place, and, of course, we see all the business alliances that are going on between these companies right now. We read about these events in the paper every morning.


FIGURE 1 Evolution of fiber in the local exchange network. Broad solid line, fiber; thin solid line, copper. CATV, cable television. Courtesy of Bellcore.

FIGURE 2 Fiber/coaxial bus cable broadband network. VOD, video on demand; PCS, personal communication system. Courtesy of Bellcore.


Everybody thinks that the money will be in the provision of content and that the actual distribution will be a commodity that will be relatively uninteresting.

If you look at the cable companies versus the telephone companies, cable has the advantage of lower labor costs and simpler service and operations. Their plant is a great deal simpler than that of the telephone companies. They have wider bandwidth with their current system and lower upgrade costs. They are also used to throwing away their plant and renewing it periodically. Disadvantages of cable franchises have more to do with financial situations, spending, and capital availability.

THE PACKETIZING OF COMMUNICATIONS: ATM

Asynchronous transfer mode (ATM) is a standard for the packetizing of communications, where all information will be carried in fixed 53-byte cells, each a 48-byte payload with a 5-byte header. This fixed cell size is a compromise: it is too small for file transfers and too big for voice. It is a miraculous agreement that we do it this way and that, if everybody does it this way, we can build an inexpensive broadband infrastructure using these little one-size-fits-all cells. But it seems to be taking hold, and both the computer and communications industries are avidly designing and building the new packet infrastructure.

ATM has an ingenious concept called the virtual channel that predesignates the flow for a particular logical stream of packets. You can make a connection and say that all the following cells on this channel have to take this particular path, so that you can send continuous signals like speech and video. ATM also integrates multimedia and is not dependent on medium and speed.
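
As a concrete sketch of the fixed cell and the virtual channel, the following toy code chops a byte stream into 53-byte cells tagged with a virtual path and virtual channel identifier. The field names and widths follow the standard ATM UNI cell format rather than anything stated in the talk, so treat the details as assumptions.

```python
from dataclasses import dataclass

CELL_PAYLOAD_BYTES = 48   # fixed payload size
CELL_HEADER_BYTES = 5     # fixed header size (53-byte cell in total)

@dataclass
class AtmCell:
    """Toy ATM cell; field widths follow the standard UNI format (assumed)."""
    vpi: int         # virtual path identifier, 8 bits at the user interface
    vci: int         # virtual channel identifier, 16 bits
    payload: bytes   # always exactly 48 bytes, padded if necessary

    def __post_init__(self):
        if len(self.payload) > CELL_PAYLOAD_BYTES:
            raise ValueError("payload exceeds 48 bytes")
        self.payload = self.payload.ljust(CELL_PAYLOAD_BYTES, b"\x00")

def packetize(stream: bytes, vpi: int, vci: int) -> list:
    """Chop an arbitrary byte stream into cells on one virtual channel,
    so every cell of the stream follows the same predesignated path."""
    return [AtmCell(vpi, vci, stream[i:i + CELL_PAYLOAD_BYTES])
            for i in range(0, len(stream), CELL_PAYLOAD_BYTES)]

cells = packetize(b"x" * 1000, vpi=1, vci=42)
print(len(cells), "cells of", CELL_HEADER_BYTES + CELL_PAYLOAD_BYTES, "bytes each")
```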

Since ATM local area networks (LANs) are wired in tree-like "stars" with centralized switching, their capacity scales with bandwidth and user population, unlike the bus-structured LANs that we have today. Moreover, ATM integrates multimedia, is an international standard, and offers an open-ended growth path in the sense that you can upgrade the speed of the system while conforming to the standard. I think it is a beautiful idea, as an integrating influence and as a transforming influence in telecommunications.

THE NEW INFRASTRUCTURE

Now let us turn to the other force—Internet. The Internet will be the focus of a lot of our discussions, because when people talk about the NII, the consensus is growing that the Internet is a model of what that infrastructure might be. The Internet is doubling in size every year. But whether it can continue to grow, of course, is an issue that remains to be discussed, and there is much that can be debated about that.

To describe the Internet, let me personalize my own connection. I am on a Bellcore corporate LAN, and my Internet address is rlucky@bellcore.com. At Bellcore we have a T-1 connection, 1.5 megabits per second, into a midlevel company, JVNC-net at Princeton. Typically, these midlevels are small companies, making little profit. They might work out of a computer center at a university and then graduate to an independent location, but basically it is that kind of affair. They start with a group of graduate students, buy some routers, and connect to the national backbone that has been subsidized by the National Science Foundation.


THE USER VIEW OF INTERNET ECONOMICS

As for the user view of the Internet, here is what I see. I pay about $35,000 for routers and for the support of them in my computing environment at Bellcore. But people say that we have to have this computer environment anyway, so let's not count that against Internet. That is an interesting issue actually. I don't think we should get hung up on any government subsidies relating to Internet operations, because if we removed all the subsidies it really would not change my cost for the Internet all that much.

All we have to do is take the LANs we have right now, the LAN network that interconnects them locally, and add a router (actually a couple of routers) and then lease a T-1 line for $7,000 a year from New Jersey Bell. That puts us into JVNC-net. We pay $37,000 a year to JVNC-net for access to the national backbone network. For this amount of expenditure about 3,000 people have access to Internet at our company—all they can use, all they want.
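
Taking those figures at face value, a rough per-user calculation (my arithmetic, not Lucky's) shows why the service looks nearly free from inside the company:

```python
# Rough per-user arithmetic based on the figures quoted above; whether the
# $35,000 router figure is an annual cost is not stated, so both views are shown.

router_and_support = 35_000   # routers and their support in the local environment
t1_lease = 7_000              # leased T-1 line to the midlevel network, per year
midlevel_fee = 37_000         # JVNC-net backbone access fee, per year
users = 3_000                 # people at the company with Internet access

network_fees_only = t1_lease + midlevel_fee
everything = network_fees_only + router_and_support

print(f"Per user, network fees only: ${network_fees_only / users:.2f} per year")
print(f"Per user, counting routers:  ${everything / users:.2f} per year")
```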

A friend recently gave me a model for Internet pricing based on pricing at restaurants. One model is the a la carte model; that is, you pay for every dish you get. The second model is the all-you-can-eat model, where you pay a fixed price and eat all you want. But there is a third model that seems more applicable to Internet. If you ask your kids about restaurant prices, they will say that it does not matter—parents pay. Perhaps this is more like Internet! The pricing to the user of Internet presents a baffling dilemma that I cannot untangle. It looks like it is almost free.

Internet is a new kind of model for communications, new in the sense that while obviously it has been around for a while, it is very different from the telecommunications infrastructure. If I look at it, what I see inside Internet is basically nothing! All the complexity has been pushed to the outside of the network. The traditional telecommunications approach has big switches, lots of people, equipment, and software, taking care of the inside of the network. In the Internet model, you do not go through the big switches—it just has routers. Cisco, Wellfleet, and others have prospered with this new business. Typically, a router might cost about $50,000. This is very different from the cost of a large telecommunications switch, although they are not truly comparable in function.

With a central network chiefly provisioned by routers, all the complexities push to the outside. Support people in local areas provide such services as address resolution and directory maintenance at what we will consider to be the periphery of the network. The network is just a fast packet-routing fabric that switches you to whatever you need.

THE CONTRAST IN PHILOSOPHY BETWEEN THE INTERNET AND TELECOMMUNICATIONS

The contrasting approaches and philosophies are as follows. The telecommunications network was designed to be interoperable for voice. The basic unit is a voice channel. It is interoperable anywhere in the world. You can connect any voice channel with any other voice channel anywhere. Intelligence is in the network, and the provisioning of the network is constrained by regulatory agencies and policies.

Internet, by contrast, is interoperable for data. The basic unit is the Internet protocol (IP) packet. That is in its own way analogous to the voice channel; it is the meeting place. If you want to exchange messages on Internet, you meet at the IP level. In Internet the intelligence is at the periphery, and there have not yet been any significant regulatory constraints that have governed the building and operation of the network.
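
To make the "meeting place" concrete: the IP packet of this era is the IPv4 datagram, whose 20-byte header is a published standard. The sketch below renders that standard header in code for illustration; nothing in it comes from the talk itself.

```python
import struct

# Minimal sketch of the standard 20-byte IPv4 header, the common format that
# makes the IP packet the "meeting place"; illustration only, not from the talk.

def ipv4_header(src: str, dst: str, payload_len: int, proto: int = 17) -> bytes:
    version_ihl = (4 << 4) | 5                    # IPv4, 5 x 32-bit header words
    total_length = 20 + payload_len
    ttl = 64
    src_bytes = bytes(int(octet) for octet in src.split("."))
    dst_bytes = bytes(int(octet) for octet in dst.split("."))
    # Identification, flags/fragment offset, and checksum are left at zero here;
    # a real stack fills them in.
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, total_length,
                       0, 0,
                       ttl, proto, 0,
                       src_bytes, dst_bytes)

header = ipv4_header("192.0.2.1", "198.51.100.7", payload_len=512)
print(len(header), "header bytes;", 2**32, "possible 32-bit addresses")
```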

Pricing is a very important issue right now in Internet. The telecommunications approach to pricing is usage based, with distance, time, and bandwidth parameters. Additionally, there are settlements between companies when a signal traverses several different domains. The argument that the telecommunications people use is that pricing regulates fairness of use. They feel that without some usage-based pricing, people will not use the network services fairly.

In Internet the policy has been to have a fixed connection charge, which depends on pipe size and almost nothing else. There are no settlements between companies, and there is no usage-based charge. The Internet fanatics would say that billing costs more than it is worth and that people use the network fairly anyway. That has worked so far, but it remains to be seen, as the Internet grows up—and especially as multimedia go on it—whether that simple pricing philosophy will continue to function properly.
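
A toy comparison of the two billing philosophies may make the contrast sharper; every rate constant below is invented for illustration and bears no relation to any actual tariff.

```python
# Toy comparison of the two billing philosophies; all rate constants here are
# invented for illustration and bear no relation to actual tariffs.

def usage_based_charge(minutes: float, miles: float, kilobits_per_s: float) -> float:
    """Telecommunications style: price scales with time, distance, and bandwidth."""
    return 0.05 * minutes + 0.001 * minutes * miles + 0.0002 * minutes * kilobits_per_s

def flat_rate_charge(pipe_mbps: float) -> float:
    """Internet style: a fixed monthly charge that depends only on pipe size."""
    monthly_tiers = {1.5: 3_000.0, 45.0: 30_000.0}   # hypothetical T-1 / T-3 tiers
    return monthly_tiers[pipe_mbps]

# Two customers on the same T-1 pay the same flat rate no matter how much they
# send, while their usage-based bills differ by orders of magnitude.
print(flat_rate_charge(1.5), flat_rate_charge(1.5))
print(usage_based_charge(100, 500, 64), usage_based_charge(10_000, 500, 64))
```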

Before I conclude I would like to introduce an issue that bothers me as a member of the telecommunications community. There was an article in the Wall Street Journal that said people are now bypassing the telecommunications network by sending faxes over the Internet. Here is the situation: if you send a fax over the Internet, it really appears to be free. You go to the telephone on your desk, and if you send the same fax, it costs you a couple of dollars. Why is this? This paradox sums up the idea that Internet really is almost a free good. Is the difference in price for the fax because Internet has found a cheaper way to build a telecommunications network? Or is it because Internet is a glitch on the regulatory system, that it is cream skimming and that it evades all the regulation and all the shared costs of the network? I can tell you honestly that I don't understand this question. I have had economists and engineers digging down, and I keep trying to think there will be bedrock down there somewhere. I cannot find it. The problem is anytime an engineer like me tries to understand pricing, I have the idea that it ought to have something to do with cost.

Unfortunately, that is not really the case. The biggest single difference we were able to find between the cost of a fax on Internet and on the telephone network was that in the cost of the business telephone there is a large subsidy for residential service built into the tariff. In traditional telecommunications the relationship between the cost of providing a service and the pricing for that service is sometimes seemingly arbitrary. Furthermore, the actual cost of provision may involve a number of rather arbitrary assumptions and in any case is only weakly dependent on technology. At base, telecommunications is a service business.

The arresting fact about Internet is its growth rate of 100 percent a year. There are approximately 2 million host computers connected to it. It is about 51 percent commercial usage. Moreover, it violates the rules of life—someone must be in charge, someone must pay. Very troublesome. Yet it is a worldwide testbed for distributed information services, and it is forming the prototype information infrastructure of our time.

MULTIMEDIA AND THE FUTURE OF THE INTERNET

As we look to how the Internet might have to evolve in the near future, we see that the IP protocol that has grown up with Internet is going to have to change in some small but fundamental way. As it stands now, the basic unit is the connectionless data packet, and that does not really support video and voice on Internet in spite of the fact that, as we all know, those services are being sent experimentally on Internet right now. However, they only work when there is very little traffic on the network. If the traffic grows, voice and video cannot really be sent on Internet because it presently uses statistical multiplexing of packets, so they don't arrive regularly spaced at the edges of the network. So the future IP protocol has to have some kind of flow mechanism put into it to assure regularity of arrival of certain packets. Furthermore, the new protocol must support more addresses than are presently possible. But these problems are fixable. Security is also often cited as a problem, but that too can be fixed.
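
A small, entirely illustrative simulation (the traffic model and all numbers are invented) shows why statistically multiplexed packets arrive irregularly spaced, which is the problem a flow or reservation mechanism in a future IP would have to address.

```python
import random

# Entirely illustrative simulation: a source emits a packet every 20 ms, and
# each packet picks up a random queueing delay whose mean grows with network load.

random.seed(1)

def arrival_times(load: float, n: int = 20, interval_ms: float = 20.0):
    times = []
    for i in range(n):
        queueing_delay_ms = random.expovariate(1.0 / (10.0 * load))  # mean 10*load ms
        times.append(i * interval_ms + queueing_delay_ms)
    return times

def jitter(times):
    gaps = [later - earlier for earlier, later in zip(times, times[1:])]
    return max(gaps) - min(gaps)

print(f"Inter-arrival jitter at light load: {jitter(arrival_times(0.1)):5.1f} ms")
print(f"Inter-arrival jitter at heavy load: {jitter(arrival_times(0.9)):5.1f} ms")
```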

Communications people are betting that the future traffic will include a lot of multimedia. That means a great deal of additional capacity will be required, as well as control mechanisms appropriate to the new media. ATM is the answer being propagated by telecommunications companies, but Internet people say ATM does not really matter, because it sits at a lower level than TCP/IP.

Meanwhile, the computer industry is building new LANs that take ATM directly to the desktop. It now seems that we are headed to a situation where we will have ATM in the local LANs, but to be internetworked that ATM will be converted to TCP/IP to enter the routers that serve as the gateways to the network beyond. From there, however, it might again be converted to ATM in order to traverse the highways of the network. Similarly, on the other half of its voyage, it would again be converted back to IP and then to ATM to reach the distant desktop. That seems like a lot of conversions, but they will be easy in the immediate future and may well be the price of interoperability.

The prize in the future is control of multimedia—whether telecommunications controls multimedia through ATM or whether Internet is able to gear itself up, change its protocol, and capture this new kind of traffic. Will people use the Internet for voice five years from now, and if so, will that take any significant traffic away from the network? Those are interesting questions that I think we can debate.


The Search for the Holy Paradigm: Regulating the Information Infrastructure in the 21st Century

Charles M. Firestone

The thrust of this paper is that the electronic information infrastructure, which I define as the production, electronic distribution, and reception of information, has undergone two major regulatory paradigms and is ready for a third. The first is the rather legalistic reaction to monopoly and oligopoly and is manifest in the period of antitrust and regulatory enforcement under the Communications Act, culminating in the late 1960s to early 1970s. The second is the competitive period with its commensurate deregulatory paradigm, which has been present in communications regulation since the late 1970s. These overlapping stages find expression in the regulation of many different aspects of the information infrastructure.

My thesis is that these prior models are inadequate, by themselves, for the complexities of the 1990s and beyond. While I do not specify a definitive new paradigm, I suggest that we might look to scientific or ecological models in beginning to sort out ways for the government to interact, in a dynamic democratic process, with the coevolving technologies, applications, and regulatory environments of the communications and information world: the international information environment within which the national information infrastructure will wind. In that sense, I call for a vision of "sustainable democracy," wherein technological developments preserve and enhance present and future democratic institutions and values.

DEFINITION OF THE INFORMATION INFRASTRUCTURE

The information infrastructure can be broadly or narrowly defined. For the purposes of this paper, I take a broader outlook that corresponds to the nature of the communications process, namely, the production, distribution, and reception of information in electronic form. Thus, while I will allude to print and newspaper, the focus will be on the following elements of electronic communications:

  • Production of information in film, video, audio, text, or digital formats;

  • Distribution media, most notably telephony, broadcasting, cable, other uses of electronic media, and, for our purposes here, storage media such as computer disks, compact disks, and video and audio tape; and

  • Reception processes and technologies such as customer premises equipment in telephony, videocassettes and television equipment, satellite dishes, cable television equipment, and computers.


Others might prefer to distinguish only between software and hardware, between content and conduit, or between information and communications.1 While such an organization is a plausible one, I favor the tripartite approach because it places attention on an increasingly important but often overlooked area of regulation, that of reception. As First Amendment cases move toward greater editorial autonomy by the creators, producers, and even the distributors of information, attention will have to be focused on reception for filtering and literacy concerns (Aufderheid, 1993). I address this in greater detail below.

REGULATION OF THE INFRASTRUCTURE

Regulation of the information infrastructure can also be usefully analyzed from this three-pronged approach. The law of production is associated with information law, with intellectual property concerns, First Amendment cases, and issues of content. Governmental regulatory policy in this area has also been affected by the expenditure of billions of dollars in governmental and public investment in programming for public broadcasting, research and development grants, creation of libraries, and software generation by the government.

Regulation of telecommunications has centered, however, on the distribution media. For many years this took the form of categorizing the form of media—telephony, broadcasting, or print—and placing that medium in the proper regulatory mold, namely, common carrier, trusteeship, or private. The main function of government was to make the electronic media usable, to allocate and assign uses and users to the particular media, and then to regulate the licensees in the public interest.2 The major intention of this paper is to describe the past, present, and future paradigms of such regulation. At the same time, I hope to point out other levers of governmental policy, including investment in technology, or other nonregulatory influences.

Regulation of the receiver or reception process is relatively unusual, but important nevertheless. One familiar area was the early regulation of cable television, which was a technology intended to enhance reception of over-the-air television stations but was regulated to ensure that, at first, it did not undermine the economics of the broadcast television system.3 Most significant in recent forms of regulation of reception is the area of privacy, which, among other rights, protects the receiver's right to be free from intrusion at the reception end of a communication.4

Goals of Regulation

In each case the aim of regulation was to effectuate or bring about a set of goals for society. I find it useful to categorize these goals into five basic democratic values, as follows.5

Liberty

Foremost among the Bill of Rights is the First Amendment guarantee against abridgment of freedom of expression (speech, press, association, petition, and religion). Beyond this trump card of protection, certain regulations have sought to promote First Amendment values of editorial autonomy,6 diversity of information sources,7 and access to the channels of public communication.8 A long-standing debate centers on the proper legal attention that should be paid to the underlying values of the First Amendment, as opposed to the bare dictates of the amendment, that is, is it a sword as well as a shield?

A second liberty, implicit in the Bill of Rights but explicit in state constitutions and in legislation, is the right to privacy, individual dignity, and autonomy. At the federal level this has been expressed in the form of sector-by-sector legislative codes; at the state level privacy is protected in common law and statutes.9

A third liberty, which is not often expressed as such, is that of ownership of property. In the world of communications this usually takes the form of a bundle of intangible or intellectual property rights that attach to original works of authorship or invention under the Constitution's copyright and patent clause. Traditionally, individuals have been licensed as trustees but not owners of the electromagnetic spectrum. However, this approach could change with the auctioning of spectrum. Furthermore, the application of intellectual property concepts to new forms of information has raised many new and vexing problems for future regulators.

Equity

A countervailing value to liberty is that of equality. In the field of communications this takes the form, most prominently, of universal service. That is, the Communications Act states, as one of its seminal purposes, the goal of making available to "all the people" a worldwide communications system.10 Today, the issue of universal service is foremost in the minds of regulators and legislators, who see the need for an evolving definition of which communications and information services should be available to all. Meanwhile, traditional schemes for assuring universal service are dissipating.

Another issue that comes within the equity value is that of information equity—is the gap between rich and poor extending to the information haves and have-nots? Are the technologies that have such potential to bridge the gap between rich and poor instead going to widen it? Governments at all levels have established libraries to provide all citizens with access to a broad array of information resources. The obvious question for the future is the level and extent that information resources will be made available in nonprint formats, and through electronic connections to the home, rather than by extensive collections at libraries.11

Community

The mass media, long attacked for monolithic mediocrity, is also seen as a key cohesive force in a community. The Federal Communications Commission (FCC) has included, as a basic tenet, the concept of localism as a touchstone for licensing and regulating broadcasting stations.12 This includes a preference for locals obtaining the licenses initially and for stations, once licensed, to serve the local needs and interests of the service area.13

Similarly, franchising authorities often grant cable television franchises to locals, who are thought to be more attuned to the needs of the local community. Whoever receives the franchises, furthermore, usually has requirements to include public, educational, and governmental access channels.14 At one time, the FCC required cable operators to include local origination cablecasting channels on their systems.15 The access channels, as well as some public broadcasting and other cable channels, might be viewed as electronic commons or public space for the local community.

At the same time, on a national scale, the broadcast media serve as one of the country's most significant and binding cultural forces. The broadcast media provide common experiences and a larger, more cohesive national community for a country that is increasingly multicultural.

A third manifestation of community values is the advent of public broadcasting—governmental and public financial support of noncommercial fare. Some of that programming is aimed at better serving local communities. This aspect of "community" raises a broader issue for the future: What is the role of government financing of programming, information resources, and distribution facilities?16

Finally, the whole concept of community is changing in the electronic world of networks and targeted communications. New electronic communities are being formed around the interests of individuals rather than geographic boundaries. This, again, raises profound questions about the impact of modern communications technologies on the concept of community.

Efficiency

The Communications Act specifically refers to the need for an "efficient" communications system by wire and radio.17 Generally, economic regulation values efficiency as its ultimate goal and fairness secondarily. Much of the communications regulatory scheme is premised on making the infrastructure and the pricing system efficient. Title II common carrier regulation seeks to assure that telecommunications facilities serve the entire country at rates that are fair, affordable, and comparable to those that would result from competition.18

In the wireless realm, governmental spectrum allocation is premised on the tenet that, if unregulated, spectrum users would injuriously and therefore inefficiently interfere with each other.19 More broadly, the government considers how communications and information sectors can maximize the citizenry's economic welfare, within the country and on a more global scale. Recently, the United States has been concerned with its competitive position in the world economy. The promotion of an efficient communications and information infrastructure supports that concern and interest.

Participatory Access

The communications system needs to be not only efficient but also workable and accessible to ordinary citizens. Accessibility is an undercurrent in many elements of communications regulation, from the creation of free over-the-air broadcasting stations, to access channels on cable, to common carrier regulation of telephony and its progeny. Indeed, access could be an organizing principle for regulation, in terms of facilities, costs, and, as will be explained later, literacy (Reuben-Cooke, 1993).

Finally, the regulatory scheme seeks to enable citizens to participate in the process on a fair and equal basis. The concepts of due process, sunshine for governmental agencies, and appellate review are all designed to facilitate citizens' fair access to, and participation in, the process. New communications technologies will likely enhance these goals.

Whatever regulatory scheme is employed, it will have to consider and balance the above basic values and goals to fashion a system of greatest benefit to the public. The following sections will describe how these goals were addressed in the regulatory paradigms of yesterday and today, along with how they might be considered and treated in the future.

REGULATORY PARADIGMS

At each level of the communications process—production, distribution, and reception—there have been two paradigmatic stages of regulation, one of scarcity and one of apparent abundance and competition. At the scarcity stage, usually reached after an initial period of skirmishing among pioneers for position, the regulation has taken the general form of governmental intervention in order to promote the broader public interest.20 At the abundant or competitive stage, which overlaps the earlier stage, there has been a reversal, a deregulation to promote greater efficiency (Kellogg et al., 1992). In each case the paradigm is a regulatory religion, at times demanding faith on the part of the believers. My thesis is that neither of the past regulatory schemes is sufficient alone to address the evolving complexities of the new communications infrastructure and that a new regulatory paradigm, a new religion, will be needed for the global and digital convergence of the telecommunications future.

Stage 1: Scarcity

The overriding characteristic of the first stage, "scarcity," is that the various levels and industries associated with the information infrastructure were marked by large, centralized, monolithic, top-down, command-and-control enterprises; relatively passive receivers; the formation of monopolies or oligopolies, often of a vertical (end-to-end) nature; and a certain potential for anticompetitive practices. These could be symbolized by the early stages of the major movie studios, AT&T, the broadcast networks, and IBM.

In most if not all cases the tendency of the law was for the government to regulate the monopoly industry in the public interest, either through application of antitrust laws or the regulatory apparatus of the Communications Act. Usually, this resulted in the removal or regulation of bottlenecks, to try to approximate the efficiencies of competition where none existed. The Communications Act also sought to foster the values of equality (in the form of universal service) and community (in the form of localism) under the general regulatory standard of the "public interest, convenience, and necessity."

Production

The production of information has been the most widespread, abundant, and competitive of all of the elements of the infrastructure. In almost every case the production industries have not been subject to regulation, for First Amendment and other historical reasons. Rather, they are subject to normal commercial, antitrust, intellectual property, and First Amendment laws. Nevertheless, where economic oligopolies arose at earlier stages of the film, telephone technology, and computer equipment industries, the antitrust laws were used to address the alleged monopolization. Although the antitrust cases had only mixed success, that was the accepted method for governmental activity in this area. In the area of intellectual property, the law imposed a form of regulation through fair use and compulsory licensing prescriptions. In the lone industry subject to federal agency regulation—broadcast programming—the FCC imposed both antitrust-like and content-related remedies.

Motion Pictures. In the 1930s and 1940s the major motion picture studios found that vertical integration could enhance their ability to control their products from inception to exhibition. (Some studios, such as Warner Brothers, were formed by theater chains.) The Justice Department, however, alleged that the studios were monopolizing the industry by both their structure and certain anticompetitive practices.

In the Paramount consent decree of 1948,21 the major, vertically integrated motion picture production studios and distribution organizations of the time agreed not to monopolize the production, distribution, and exhibition sectors of the motion picture industry and to divest their theater chains. Thus, a bottleneck at one level (e.g., major movies or control of theaters) could not be used to create scarcities and affect competition at another. This case is instructive to our inquiry, then, for its principle: to assure competition at each level of the communications process. Today, the analogy would require competition (or openness) at the production, distribution, and reception levels of all communications.

Telephone Equipment.22 Shortly after the Justice Department litigated the Paramount case, it brought an action against AT&T and its Western Electric subsidiary for using its patents to monopolize the telephone equipment business.23 In an antitrust consent decree—the original "Final Judgment"—AT&T agreed to segregate the company into strictly regulated businesses, forsaking competitive businesses.24 Western Electric's equipment would henceforth be sold only to Bell companies. Significantly, it also required that the patents be licensed to other manufacturers, a form of compulsory licensing.25

Broadcast Programming. The broadcasting networks were regulated by the FCC and thus were subject to a broader "public interest" standard. During the 1930s and into the 1940s, NBC controlled both a Red and a Blue network, with only modest competition from CBS. The parent network imposed a set of requirements on its affiliates to take what was offered. After a long inquiry, the FCC (1) ordered NBC to divest one network (thereby creating ABC), (2) restricted certain abusive network practices that took discretion away from local affiliates, and (3) prohibited local stations from affiliating with more than one network.26

Broadcast Content Controls. Content controls were imposed on broadcasters from the inception of the Radio Act of 1927 and the Communications Act of 1934. These included the requirement that a station afford equal opportunities to legally qualified candidates for public office to "use" the facilities27 and restrictions against certain illegal activities such as obscenity.28 After first prohibiting editorializing,29 the FCC imposed a Fairness Doctrine, which encouraged stations to air controversial issues of public importance and to present contrasting viewpoints from responsible spokespeople on such issues.30 Furthermore, at the height of content regulation, the late 1960s and early 1970s, the FCC's regulations extended to changes of formats,31 to programming designed to respond to the needs and interests of the local community,32 and programs on public affairs.33

Copyright. The most common form of "regulation" of the production of programming and other information is through governmentally protected ownership rights, in the form of intellectual property laws. In the field of communications the most prevalent legal issues are in the area of copyright, which relates to original works of authorship. Here, the law provides one with a monopoly over the use and distribution of the work for a limited term.34

But even in the area of intangible ownership, as in real property law, there are limits to the rights of the owner. Thus, the public may have limited rights to use or copy a work for public interest reasons,35 just as it has rights of access to certain properties (e.g., private beachfronts in California) for public purposes. This area of "fair use" is vague and complicated, but even in the pure ownership of intellectual property there is a certain amount of regulation of a monopoly for public benefit.

Similarly, there are certain situations where the copyright owner is entitled to compensation for use of the work but cannot restrict to whom and where the use is extended. Thus, "compulsory licensing" has been extended to cable operators and jukebox owners in order for them to access certain works that are otherwise too cumbersome to license on an ad hoc basis.36 (There are other instances of cumbersome licensing where compulsory licensing is not extended but, rather, private licensing agencies are employed. The compulsory licensing provisions of the Copyright Act are mainly the result of political compromises.)

Computer Manufacturing. In the second and third quarters of this century, IBM dominated the field of computer manufacturing. The IBM mainframe epitomizes the centralized, top-down, monolithic enterprise of the scarcity stage of regulation. The Justice Department, however, was unsuccessful in its antitrust suit against IBM and dropped the suit on the same day that AT&T agreed to the Modified Final Judgment.37 Nevertheless there is a history of private antitrust suits that did affect, and in a sense "regulate," the computer industry.38 More significant, however, was the research and development work in various government agencies, particularly the Defense Department, that led to many of the advances in computer hardware and software.

Distribution Media

In contrast to the production entities and businesses, the electronic distribution media during the 1920s through the 1970s were characterized by direct regulatory control of centralized gatekeepers under the public interest standard. Here the concepts of scarcity had two distinct meanings and regulatory approaches: common carriage, which was applied to telephony, and public trusteeship, which was applied to broadcasting. These concepts are explained below.

Telephony. In the field of telephony the building of facilities and the operation of a telephone network were thought to have natural monopoly characteristics—large barriers to entry due to high initial capital costs and increasing economies of scale. From the time of Theodore Vail's 1909 use of the words "universal service" to explain his plan for exchanging governmental protection and regulation for the extension of service to all at fair rates,39 the Bell system's end-to-end monopoly had social equity implications. Regulation of that monopoly as common carriers under a broad public interest standard would maintain an efficient, fair, reliable, accessible, and affordable telephone system for the country.

Under the Communications Act, common carriers have been required to apply for permission to enter or exit a market.40 They may not discriminate among their customers; they must provide access to all at reasonable rates, which are reviewed by the government; and they must file tariffs of their rates and practices.41 States generally paralleled this approach for intrastate communications.42 So long as there was a monopoly carrier, the rates could be averaged among all the customers of a given area so that high-cost areas could be subsidized within the rate system by ratepayers from high-volume areas. This regulatory deal, which promised stability, growth, universal service, and restraint against competitors, reached its zenith in the late 1960s and early 1970s.43

Satellites. The use of satellites for communications common carriage derived from Arthur Clarke's model of covering the world with three geostationary orbiting satellites (Clarke, 1945).44 The United States instigated the founding of INTELSAT, an international consortium of countries formed to foster international communications. The INTELSAT agreement contemplates a monopoly carrier to handle the domestic traffic to and from the INTELSAT satellites.45 This vision of a regulated monopoly was perfectly consistent with the prevailing paradigm of the time. It followed, as well, the tremendous federal government investment in a space communications system. Thus, during the 1960s and early 1970s, COMSAT was the American monopoly satellite space segment provider.

Broadcasting. In the use of the electromagnetic spectrum it was long perceived as necessary to have the government allocate uses and users in order to avoid destructive interference on each frequency. For example, in the mid-1920s the broadcasters themselves called for federal regulation of the AM airwaves (Barnouw, 1975). Congress responded with the Radio Act of 1927 and the successor Communications Act of 1934, regulatory schemes whereby access to the airwaves was limited by law. Only the relative few who were licensed would be allowed to broadcast. These licensees of "scarce" frequencies, however, would be deemed trustees for the public and therefore regulated according to the "public interest, convenience, and necessity."46

This regulatory scheme grew through the 1960s to incorporate various requirements on each broadcaster to serve its local community. It required stations to provide access to candidates for federal elective office;47 to provide equal opportunities to opposing candidates for the use of the station's facilities ("equal time");48 to provide its lowest unit charge for political advertising;49 to air contrasting viewpoints on controversial issues of public importance;50 to ascertain the local needs and interests and to design programming aimed at meeting those ascertained needs,51 including those of minority audiences;52 to air news and public affairs programs;53 to have some prime time available for nonnetwork programs;54 to refrain from airing obscenity,55 indecent language,56 lotteries,57 fraudulent programming,58 or too many commercials;59 and to concern themselves with many other regulations considered to be in the public interest.

These requirements were far more stringent than those applicable to newspapers, under the theory that as licensees of scarce radio frequencies, broadcasters must operate under a regime where the public's interest is paramount.60 In the newspaper business, on the other hand, the editorial discretion of the publisher is the primary legal consideration.61 While the scarcity theory has been seriously questioned by many, it is still the applicable law today.

Cable Television. As cable television grew from the hilltops in rural areas to the wiring of towns and cities, local jurisdictions were called upon to franchise cable operators' uses of the streets and rights of way. These franchises were often exclusive, whether de jure or de facto. Once an operator obtained a franchise, it was only rarely that a locality granted another franchise for the same area, even if competitors knocked on the door. Again, like the monopoly distribution media described above, franchising authorities exerted certain regulatory controls over the franchisees.62 At first these included rate regulation; later, and particularly after rate regulation was preempted at the federal level,63 franchising authorities imposed other public interest obligations on the cable operators, which at times took the form of payments to the city for noncommunications-related activities.

Private Carriage. A third type of media entity gained prominence during the 1970s—private users of the spectrum and, eventually, private wire-based facilities providers. While spectrum users had to obtain licenses and were still subject to public interest regulation, the burdens were minimal and technical in nature. Primarily, these entities were treated under the private model of regulation, left to use the medium for the purpose for which it was granted (i.e., a trucking company could obtain frequencies for two-way radiotelephone communications but could not use them for nonbusiness purposes and definitely could not "broadcast" on them).64

Storage Media. Records, tapes, and disks, like print media, have been subject to the "private" model of regulation—application of antitrust and other laws generally applicable to all businesses.65 While the government does not directly regulate, it can potentially impact the production and distribution of these media through large or strategic purchases of selected titles. Thus far, however, government purchases of these media are mainly through normal library procedures.66

Reception

At the scarcity stage, the law of reception was primarily concerned with the integrity of the respective systems of communication. With respect to telephony, AT&T retained through its tariffs and its regulatory deal the ability to restrict any foreign attachment to its facilities. This went so far as to exclude plastic book covers for the yellow pages directory and cups placed on handsets to prevent others from hearing a conversation. (For the latter device, called the "hush-a-phone," an appellate court reversed the FCC's enforcement of AT&T's tariffs, determining that an attachment that is privately beneficial and not publicly detrimental could be added.)67

A second major battleground during the 1950s and 1960s was the use of community antenna television (CATV) systems to improve reception of nearby television stations. The FCC flip-flopped in its attitude toward CATV but eventually held that it needed to be regulated as ancillary to the commission's power over broadcasting.68 In 1968 the Supreme Court held that cable systems were enhancers of the reception mechanism for television, rather than rebroadcasters, for the purposes of the copyright laws.69 The commission then proceeded to regulate these reception enhancers so as to prevent the importation of certain distant television stations and the undue fragmentation of the audiences of local stations serving a particular area.70

Other regulatory decisions affected reception, such as the FCC's decisions selecting a particular system for color television71 and Congress's requirement that television sets be capable of receiving UHF channels.72 The color television episode was an example of the government setting a standard: some in the industry won, some lost, but the determination was made by the government.

Summary of Stage 1

In sum, the period of the 1920s to the 1970s was one of regulation in the telecommunications business according to the touchstone of the Communications Act—the "public interest, convenience, and necessity." The major question, in each case, was, What was the "public interest"? This standard, according to the Supreme Court, was a supple instrument for guiding the industries in periods of rapid technological change.73 Nevertheless, it was admittedly vague. Thus, the agenda was essentially set by the lawyers, who would argue as to how far this vague and lofty standard should or could extend before it exceeded the bounds of regulatory authority or the limits of the First Amendment.

In addition, the antitrust laws were used to restrict economic abuses and to promote access to bottlenecks. Through consent decrees, the Justice Department and courts served as a kind of regulator of the movie and telephone equipment industries. Analogous regulation also appeared in other contexts, such as the fair use doctrine in copyright.

Finally, state, local, and private parties enforced other applicable regulatory regimes. Governments had many levers to pull and buttons to push in shaping the form and substance of the communications infrastructure. These included direct regulation, governmental investments, research grants, low-interest loans, tax incentives, and even the bully pulpit.

Despite the logic and general agreement that the public interest was the paramount consideration, this general regulatory regime began to unravel in the 1970s (with indications of problems well before that). Scarcity was questioned both technologically and philosophically. A deregulatory mood swept the country, symbolized by the election of Jimmy Carter as President. Regulated industries, such as trucking and airlines, as well as telecommunications, were seen as less efficient than they would be under competition. There was a sense that a new approach would serve the country's regulatory interests better than a cumbersome, bureaucratic, and failing system. But most particularly, there were significant technological breakthroughs in microwave, broadband communications, and computerization that challenged the natural monopolies, oligopolies, and technological underpinnings of scarcity.

Stage 2: Abundance and Competition

In each of the affected industrial sectors under review, the key industrial elements became abundant, competitive, or both, creating the need for a new regulatory paradigm. Mostly, this process became manifest in the 1970s, strong in the 1980s, and perhaps excessive by the 1990s. I will look at each sector individually, pointing out, where appropriate, the parallels among industrial sectors.

Production

Compared to the distribution media, the production elements of the communications process were essentially competitive even during the "scarcity" stage. But by the 1970s, developments in these industries created new levels of competition and commensurate regulatory responses.

Motion Pictures. We begin with the motion picture industry. The Paramount decree was adopted just as television became a consumer product. With the steady increase in viewership of television, the film industry suffered box office declines for a while. But as the studios (1) sold movies to television as a new window for their products and (2) moved into the production of television programming, they saw a resurgence in their businesses (Owen and Wildman, 1992).74 With more outlets (theater and television), there arose more production entities. Talent agencies, independent producers, and syndicators all added levels of production entities for both film and television. As new windows arose, with pay television, pay cable, and videocassette tapes, still more entities entered the businesses, many of which also produced film, television, and tape. The rise of the independent television and film producer created an increasingly competitive environment for production. This in turn led to a reintegration at the edges of the new media. Paramount and MCA jointly owned the USA cable network. Other motion picture companies began to own other outlets, such as the Disney Channel. 20th Century Fox was heavily involved in film, television, and videotape production. CBS tried the film business for a while but was unsuccessful. Vestron, an early leader in the videotape business, also tried but failed in the film business.

The Premiere case75 of the early 1980s was the final vestige of Stage 1 antitrust regulation. The major studios, buoyed by the trend toward reintegration and the governmental attitude of relaxed enforcement of antitrust laws, entered into an arrangement whereby the studios would provide their best movies to their jointly owned and operated Premiere pay cable network. The court determined that the arrangement was a prima facie violation of the antitrust laws. The fledgling network was disbanded, but the film and television production entities have since become more and more integrated. The Paramount decree has become all but a dead letter, and economic regulation has become anathema in the film industry.

For whatever reason, the expanded world of television appears to be producing a diet of more violent and sexual programming. Public reaction to this has taken the form of calls for content controls to minimize violent programming. But even here the call for regulation has recognized a competitive paradigm in production rather than Stage 1-type content controls at the production level. For violent programming the calls have been to rate programs, a measure aimed at enhancing consumer information instead of restricting output. Another possibility is to require a chip in the receiving apparatus so that the receiving home can edit objectionable programming. These measures, then, are aimed at enhancing competition and markets, consumer choice, and the freedom of individuals to have access to a diverse array of products.

Copyright. The production of information enhanced by the copyright laws has also seen greater competition and reduced regulation in recent times. Fair use applications of the copyright laws became less available after passage of the 1976 Copyright Act amendments, which set forth a set of guidelines for the doctrine and expanded the bundle of rights contained in an author's claim to ownership of his or her original works of expression.76 This has resulted in greater use of contracts, a classic marketplace mechanism, as a means for determining the rights and responsibilities of owners and users of copyrightable material. In the field of cable television, one of the mainstays of compulsory license, cable's carriage of local television signals, was altered by new provisions requiring cable operators to obtain "retransmission consent" from the owner of those signals.77

Computers. The paradigm of competition also manifested itself in the unregulated computer industry. IBM faced competition not only from other manufacturers of mainframes and large top-down-type computers but also from clone manufacturers, Apple, and others. As computer technology became smaller and cheaper, moving from the minicomputer to the microcomputer, competition grew in all aspects and levels of the industry. Indeed, the industry is a metaphor for the whole era in telecommunications regulation: increasing competition, less regulation.

Distribution Media

The heavily regulated distribution media saw the most dramatic paradigm shift in economic structure and regulation.

Telephony. First, two sets of decisions beginning in the late 1960s signaled the decomposition of AT&T's monopoly. The decisions freeing the terminal equipment market from regulation are described below in the "Reception" section. The other decision was to allow competition to AT&T in the intercity, long-distance markets.78 This determination was essentially inevitable as a result of the advent of microwave and satellite transmission technologies. Microwaves allowed signals to be transported cheaply by line-of-sight relays over large distances. The long lines of old became short waves. Costs declined rapidly, and innovative entrepreneurs such as William McGowan of MCI saw some openings.

Long Distance. After the FCC allowed private entities to use microwave frequencies above 890 MHz to build their own private transmission facilities,79 AT&T filed tariffs setting discounts for large users. But these were inflexible tariffs, and MCI sought to customize transmission bandwidth for large customers over its own competitive facilities. In 1969 the FCC allowed MCI to build a line from Chicago to St. Louis.80 Later, it determined that competitors need not show the need for a new service in order to obtain a license as a Specialized Microwave Common Carrier to compete with AT&T for long-distance business.81 MCI eventually won the ability to offer switched long-distance service,82 and others, such as Southern Pacific Railroad, followed with national long-distance networks of their own (which became Sprint). The flood of competition meant not only an open, competitive market for large users, who could pick and choose telecommunications providers and the types of service offerings they might want, but also competition and lower rates for the average subscriber—a sacrosanct market for the internal cross-subsidies contained in the regulatory regime of the scarcity stage.

Satellites. Domestic satellites, in the same period of the early 1970s, won an "Open Skies" decision by the FCC.83 As long as they were legally, technically, and financially qualified, private entities could own and operate a communications satellite. Here, the government again moved to the competitive paradigm. The domestic satellite business would be a business, competitive with wireline services but free of many of the regulatory hamstrings inherent in the common carriage of old. The 22,400 miles to and from the satellite that each signal had to travel made distances on Earth insignificant. "Long distance" became a misnomer. The satellite carriers, who were already competitive in the sense of the Open Skies decision, also found increasing competition—first from resellers of satellite transponders, who were deregulated by the FCC,84 and then from decisions allowing private competition to INTELSAT on the international stage.85

Local Loop. As the interexchange and terminal equipment parts of the telecommunications network became competitive and deregulated, the last frontier for competition was the heavily regulated local loop. Even here, competition has emerged in recent years (Entman, 1992). Two cellular franchises were granted for each market, one to local wireline carriers and one to competitors.86 Now, after many years of building, selling, and trading, culminating in AT&T's purchase of the McCaw systems, the cellular business is not only competitive but also most formidable. Newer competitors are on the horizon, whether they come in the form of a third cellular license in each market, specialized mobile services, new Personal Communications Services (PCSs),87 or other, even newer technologies.

Furthermore, the reduced costs of fiber optic transmission, and FCC orders requiring interconnection to local exchange facilities,88 have also made alternative carriers at the local level an economically viable and burgeoning business. As cable television systems enter the local telecommunications business, with fiber backbones and broadband connections to 65 percent of the country, and as PCSs become a reality, these trends are only going to continue. Thus, even the heavily regulated local loop is beginning to enter the next stage of rapid competition and deregulation.

Television, Cable Television, and Multichannel Distribution. The competition phenomenon hit equally hard in the television delivery media. The number of local television stations steadily increased over the last four decades. This, coupled with the advent of cable television as a common delivery mechanism, has resulted in half the country's viewers having at least 20 channels available to them and over 40 percent with at least 30 channels.89 With the increasing number of signals (over 80 national networks90), the FCC felt less obligated to regulate each one closely. Indeed, former FCC Chairman Mark Fowler called television a "toaster with pictures" and proceeded to "unregulate" as much as he could during the Reagan administration. This attitude received a healthy boost from the Cable Act of 1984, which deregulated cable to a great degree.

Cable television's exclusive franchises have also been subject to several new competitors and challenges. Wireless cable (a collection of multipoint microwave distribution channels) has finally entered the fray, and direct broadcast satellites, long heralded but nonexistent until now, will at long last pose still another new threat to the multichannel delivery monopoly held by cable operators. Meanwhile, significant legal challenges to the local franchising monopoly are being waged both by upstart cable operators91 and telephone companies.92 Here the First Amendment, which was written as a prohibition against government infringements of freedom of communication, is being used by industrial companies to forge a wedge into a distribution business, with broad-ranging consequences.93

Reception

Reception devices and processes are becoming more important, for regulatory purposes, during the deregulatory era. The more responsibilities and choices that can be shifted to the consumer or reception end of the process, the less regulation is necessary or appropriate at the production, transmission, or distribution ends. Thus, the FCC and others are looking to create greater choice and control at the user's discretion. This thirst to shift the regulatory burden to the consumer is being aided by advances in technology that allow for greater user control and targeting.

Terminal Equipment. The FCC broke AT&T's monopolization of terminal equipment on the Bell System in the Carterfone case in 1968.94 There, relying on an obscure precedent from the 1950s, the commission allowed Carterfone to interconnect its device to the Bell network on the grounds that it would not be physically detrimental to the network and could enhance consumers' choices. So long as it was privately beneficial without being publicly detrimental, the foreign attachment could not be excluded.

This decision opened up the terminal equipment market and laid the groundwork for the commission's decision in Computer Inquiry II.95 There the agency declared that customer premises equipment was to be competitive and therefore completely deregulated. The sole area of regulation was the commission's equipment registration rules, aimed at preventing harmful interference from such devices.96 The ensuing terminal equipment business is now an open and competitive one, subject to only minimal regulations.

The AT&T divestiture proceeding in the mid-1980s prevented the divested Bell operating companies from entering into the equipment manufacturing business,97 but this preclusion is now actively under review by Congress and the courts.

Privacy. The law of privacy has grown from a law review article in 1890 (Brandeis and Warren, 1890), to private tort law in the states, to state constitutional and statutory provisions,98 to, eventually in the 1970s, a federal approach.99 The federal laws, however, tend to follow industrial sectors, such as credit-reporting businesses,100 financial records,101 cable television,102 and others.103 The United States has thus far rejected the broad across-the-board regulatory approach that certain European countries have adopted (Reidenberg, 1992a,b). Typically, these federal laws will restrict the industrial company from maintaining certain files beyond a useful time period and will afford the subject individual rights to ascertain what information is collected and maintained, to correct errors, and to have the information used only for the purposes for which it was collected, unless permission is granted for other uses.

Thus, in addition to their place in the constellation of human and (implicit) constitutional rights, privacy protections are one means of assuring individuals that their use of the information infrastructure will be secure. They have always involved balancing against other rights and interests and therefore entail a kind of regulation of the use of information.104 While the timetable for regulation and deregulation is later than with the distribution media, there is discussion today of moving to a marketplace solution to some of the privacy issues in telecommunications.105

Reception of Indecent Programs and Messages. The law of indecency took shape in the late 1970s under a Supreme Court decision that established a nonscarcity basis for regulation. In FCC v. Pacifica Foundation,106 the court held that the nature of the broadcast medium, intruding as it does into the home, allowed the federal government to impose "indecency"107 restrictions on broadcast transmissions that are more stringent than those for print media.

But when prosecutors wanted to apply the same standards to cable transmissions, the courts took a contrary, more relaxed approach. Because subscribers brought the signals into the home voluntarily, and because electronic lockboxes and other means of blocking signals were available to subscribers, the lower courts overturned regulatory attempts to block transmission of indecent programming over the newer technologies.108

Summary of Stage 2

In sum, spurred by technological innovations that expanded the opportunities to communicate over multiple paths, the second regulatory paradigm saw a perceived if not actual end of scarcity, an abundance of channels, frequencies, bandwidth, and competitors. The rationale for specialized economic regulation for the telecommunications industry became weaker and weaker, while the ideological winds of deregulation blew stronger and stronger. The result, beginning in the late 1970s and early 1980s, has been a period of strong deregulation in almost all telecommunications sectors, increasingly intense competition, and new economic relationships.

Nevertheless, the revolution of the newer competitive paradigm has not been absolute. Strong vestiges109 of the first paradigm remain in the regulation of television (new requirements for educational children's television), reregulation of the cable television industry, continued maintenance of the Modified Final Judgment in restricting Bell operating companies from entering certain lines of business, extensive regulations imposed on those companies to assure that competitors can interconnect into the local exchange, protections of privacy interests, and calls for a renewed definition of the public interest in the field of communications regulation and policy.

Why can't the legislators get it straight and move all the way into the new paradigm? Some would fault special interest groups, industries each wanting their fair advantage, consumer groups wanting goods and services at unreasonably low rates. I would suggest something different. The thesis of this paper is that each paradigm properly has vestiges that remain during the next stage. But in this case the competitive paradigm is only a temporary stage, a kind of adolescence with certain traits that will continue throughout life.

The technological, political, economic, sociological, and scientific trends of the new age of complexity require a broader, more holistic approach to the subject matter. The second paradigm of competition more carefully met the goals of efficiency and liberty. The agenda, set by the economists, cleared away the underbrush of "externalities" that are not resolved by economic solutions. But in the 1990s this approach is simply too shallow for the issues and problems ahead. Most particularly, it does not resolve distributional problems adequately and slights the values of equality, community, and participation. As we perfect the solutions of the second paradigm of abundance and competition, we are already late for a new-age train moving out of the station carrying a variety of new trends and perspectives as its passengers.

THE NEW COMPLEXITY

In considering the applicability of the earlier regulatory schemes, one needs to explore the environment for which the regulations were designed. As I have detailed above, the lawyerly governmental intervention approach to regulation contained in the Communications Act of the New Deal was established in an era of scarcity of telecommunications resources and players. The public interest standard was designed to address goals of community, equity, and efficiency, while at the same time respecting liberties of communications.

As the environment changed, as communications channels became more plentiful, and as many new entrants came to offer communications services, the regulatory regime changed to a procompetitive, deregulatory milieu. This served primarily the goals of efficiency (as economists asked, What is competitive? What is efficient?), and liberty (freeing individuals to act as they chose, moving toward a laissez-faire approach).

Today, however, the environment is more rugged and complex than envisioned by the legal or economic theories of the prior regulatory ages. While remnants of the past paradigms will remain with us, and rightfully so, this new environment calls for a different approach to the government's interaction with the information infrastructure.

Technological Trends

Foremost among the drivers of change in this century are the tremendous advances in technology. The technological trends that moved us from the first to the second paradigms were essentially those that made communications faster, shorter, cheaper, and better. These were the continual improvements in the processes of communication at the production, distribution, and reception stages. Transistors, microprocessors, microwaves, and even lasers and fiber optics all contributed to these advances.

The more recent technological trends toward digitization and convergence, however, are creating a difference in kind more than of degree. By breaking voice, video, and data all into 1s and 0s, digital technologies are breaking down the ability to segment communications into neat regulatory cubbyholes. As Nicholas Negroponte suggests, eventually all transmissions will be bit radiation, which can be shaped at the reception end into the desired format (Negroponte, 1993).

This digitization, then, is creating a convergence of the production, distribution, and reception levels of the communications process. At the production stage, the movement to multimedia will further interrelate voice, video, and data. At the distribution level, these digital bits can be packaged and sent through any pipes and decoded at the other end perfectly. And at the reception end, the converged component television-cable-computer will produce the product in the form desired by the user.

A second change in kind rather than degree is around the corner: the movement to broadband interactivity. That is, the telephone and computer networks have always been interactive, but emerging information services and interactive television are creating a new function at the reception end of broadband telecommunications: user/receiver interaction and video return transmission. Even the smallest user, the individual in his/her home, can send movies upstream.

This interactivity creates a new dynamism in the nature of information and its relation to the communications process.

That is, in the earlier stages, power and intelligence resided first at the center and moved downstream. They then moved steadily toward the extremities, to the point that intelligence in the nodes, in the terminal equipment, or even off-network is now common. As we approach Stage 3, the power and intelligence are turning upstream. The user has control and ownership of the remote control, the telephone, the television set, and the computer terminal. Congress has turned its attention now to the new set-top boxes that will interface with the next generation of video on demand. And I would suggest that the consumer's ability to own or lease inside wiring will and should extend to outside wiring—that is, the broadband drop to the curb. (Still at issue, however, is the interface between the drop and the fibers at the pedestal.)

As competing fibers to the curb become economically feasible, these electronic driveways demonstrate this new bidirectionality of power in the network. Similarly, software agents, as well as filters, send intelligence in both directions, adding multidirectionality and complexity to the communications networks of the next era.

Economic Trends

These technological developments and other economic trends are creating two overriding phenomena: (1) the commoditization of information (pay-per-minute "transactional television" and information services on computer networks) and (2) the paradoxical convergence and fragmentation of economic entities.

Communications networks are narrowing the continuum between advertising and consumption. That is, if today an entity advertises on television, it expects results at the store, or possibly by a telephone call. Home shopping networks and computerized shopping on on-line networks are moving the locus of the transaction to the telecommunications network. As interactive broadband networks are created, with video on demand, the ability to advertise and sell in the same process will be heightened.

Producers and distributors of information try to find multiple revenue streams for the supply of information. For example, programmers seek advertising and subscriber revenues for the same program, as cable television networks and commercial computer networks such as Prodigy currently do. Cable television is tending toward a la carte network ordering and greater use of the pay-per-view channels; Prodigy has moved to a basic rate with add-on charges for extra services. Strategic alliances, acquisitions, and business marriages raise fears of new levels of concentration and vertical integration; at the same time, successful enterprises are arising from lean entrepreneurial ventures and cottage businesses.

Users will also likely increase their demand for communications services in many ways. For example, Edward Shortliffe in his paper, "The Changing Nature of Telecommunications and the Information Infrastructure for Health Care," in this volume, highlights the increased demand likely from health care institutions as a result of changing policy initiatives in that sector.

Sociological and Organizational Trends

The convergence and fragmentation trends, in fact, are more than economic. They occur in broader contexts and account, I believe, for another force that warrants close scrutiny.

It is clear that the general populace is less trusting of its traditional intermediary institutions such as political parties and candidates, journalists, educators, religious leaders, and many other mediating institutions in the current social environment. Along with that distrust comes a technology that allows the citizen to bypass many of those intermediaries. With the new transactional telecommunications, one can work, learn, shop, bank, pray, and even vote at home. The ability to bypass the traditional gatekeeper, to "disintermediate," coupled with increasing information overload, creates new pressures on older intermediary institutions to alter their gatekeeping functions by either adapting to the new information environment or becoming obsolete and giving way to new ones.

That is, there will remain a need for effective new or relegitimized old intermediaries. These neointermediaries will have to act as agents, filters, and integrators. They will connect people and information across space and time, serving the functions of knowledge navigation, information analysis, and system integration. These functions will move in all directions, working for the individual in an information-abundant world and as analyzers, information differentiators, and audience integrators for the entity trying to reach or target individuals. Changes in the functions of mediation may be the most significant of the nontechnological trends with respect to impact on the communications and information milieu.

Political Trends

Despite the movement against politics as usual, against taxes, and against big government, citizens are demanding greater meaning from their governments and political institutions. There is a longing on both the left and the right for a return to traditional democratic values and, with some, to orthodoxy. Some would characterize our current era's "green movement" and its attention to sustainable development as an indication of the need to recede from the model of the marketplace as the operative religion of the world's political and economic spheres.

In the communications regulatory realm, passage of the cable television reregulatory legislation over the veto of then President Bush, regardless of how one looks at the merits of the act, was an indication that purely deregulatory, competitive solutions were no longer politically sufficient for the country's problems. In the communications field, I would suggest, the regulatory regime will have to address other core values besides efficiency. These would include liberty, equity, community, and participatory access, if not others as well. The methods for doing so, on the other hand, will most likely have to be different from either of the preceding paradigms.

Scientific Trends

While one might consider scientific trends in the same breath as technological, they are not the same. The new sciences of chaos and complexity, drawing on teachings from quantum physics, modern evolutionary theory, ecology, and some of the newer thinking in economics, could be instructive to our thinking about new paradigms in regulation.110

One interesting approach to these topics is some recent work on coevolving complex adaptive systems. I will not expand on that here other than to suggest that in our exploration for a new paradigm for regulating the information infrastructure, we should consider what that model might teach us.111 Specifically, it recognizes that there is a need to look at the crude whole rather than individual specialties in a vacuum. It suggests that systems coevolve, affecting each other as each adapts to the others. It recognizes that adaptive systems have memories, methods for prediction, feedback systems, and abilities to change to adapt to new conditions. It requires freedom for chaotic behavior but does not necessarily rely on invisible hands. Indeed, it bears mentioning that human organizations differ significantly from biological ones in their ability to apply intentionality: one can impact how something evolves in a number of ways.112

The science of complexity also addresses self-organizing systems. In the communications world the emergence of the Internet is an interesting example of a self-organizing, complex, adaptive system and one engendering a great deal of attention as a possible model for future networks.

The science of ecology is also forming the basis for broader thinking about human activity,113 for example, the green movement, social ecology, and political references to sustainable development. In the next section I suggest some of the major issues we must address in the government's interaction with the production, distribution, and reception levels of the communications process. Perhaps they will show the way to a new communications landscape, conducive to a coevolution of technology, applications, and regulations to "sustain" our democratic value system.

Attention to scientific models is understandable in light of the current period of convergence and upheaval. The traditional boundaries of technology, of functionality, and of regulatory regimes are dissolving. As Monroe Price suggests, "Convergence constitutes the abolition of given categories and the invention, perhaps, of new ones. That [is] what scientists … have done: question the existing categories and reconceptualize the world unburdened by the musty thinking of the scholastics."114 These mergers, combinations, and convergences may transform the communications process and the nature of its regulation.

At the same time, we need to keep in mind our own ideology, a faith in the democratic process. Thus, as policymakers look at the evolving environment surrounding the information infrastructure, they will also want to act so as to foster democratic values.

TOWARD A NEW REGULATORY PARADIGM FOR TELECOMMUNICATIONS REGULATION

Much of each of the regulatory schemes of Stages 1 and 2 is still with us today. Indeed, one proponent of the competitive paradigm suggests that the current regime is more regulatory than the past, in an effort to manage the transition to competition.115 In any event, there is both extensive regulation currently and a strong consensus to move to a more competitive environment for the production, delivery, and reception of communications in the future. The question is this: Is the newer competitive paradigm the goal that has yet to be achieved, as Kellogg et al. (1992) suggest, or do we want to return to a more public-interest-oriented regulation, as others long for? Or should the country turn to a still newer approach to govern our most important political commerce, the delivery of information?

To answer that question, to find the regulatory religion of tomorrow, we have to have a vision not of the highway but of where the highway is headed. We need a better concept of the environment through which the highway is winding and of where it leads. We need to complement our technological visions with scenarios of our society and of our democracy.

Furthermore, to have a vision of the society, we need to understand the nature of the coevolving systems surrounding the communications processes, which include, at the least, interactions among (1) the new technologies, (2) applications of those technologies, and (3) regulations impacting them. As each interacts, changes, and evolves, it affects each of the others in sometimes unpredictable ways.

For example, in the area of advanced or high-definition television, the technologies advanced during the 1980s from an analog model to a digital one. The applications for the technology thereby changed from better pictures to a much more flexible, scalable television, to multiple-channel delivery, and to other uses converging television with computing. Furthermore, the regulatory process (e.g., spectrum allocation and standards setting) also evolved as the changes in technology and applications took shape. Finally, the technology progressed in a certain direction when the FCC specified its criteria for the new advanced television system. Each of these areas, then, is impacted by the other two and by other influences as well.

It is important, therefore, to recognize this very dynamic process and to adopt intelligent policies, in that light, to foster societal and democratic values. Policymakers in the governmental, business, and nonprofit worlds can then design buttons and levers to adjust the landscape so that these sectors grow and adapt in the general form desired.

Governmental policy tools can come in the form of financial and tax incentives; research and development grants; subsidies on the demand and/or supply side; governmental loans and purchases;116 standards setting; direct structural or behavioral regulation at the federal, state, or local levels; and perhaps new forms or methods for creating a proper atmosphere for democratic applications within the communications and information sectors.

These policies will certainly have to be flexible to allow the communications networks to self-organize, evolve, and adapt to new policies (such as health care initiatives) and other new demands. They will have to reflect the movement to a new regulatory stage, from scarcity to abundance, and now to complexity.

MAJOR ISSUES AHEAD

If the agenda for regulatory Stage 1 was essentially set by the lawyers (what is in the public interest?) and Stage 2 by the economists (what is efficient?), to whom do we turn for setting the agenda of Stage 3? My suggestion is that the scientists will help in providing models for analyzing the environment but not in setting the agenda for regulatory action. The new agenda setters for Stage 3 will likely be the political scientists (what is democratic?). That is, while regulators will want to allow growth and development in every sector of the telecommunications/information infrastructure, they will also need to judge and address them according to traditional democratic values. Like the environmental movement, in the face of rapid growth in the communications and information sectors, the new information environmentalists will want to assure "sustainable democracy," or the retention of democratic values and nondiminution of our most important human and information resources today and for future generations.

While this paper cannot suggest the specific policies to adopt at this time, it can suggest some of the underlying values and approaches we have traditionally wanted in an information infrastructure to serve broader democratic goals. Corresponding to the values of efficiency, liberty, equality, community, and participatory access, the approaches might include:

  • Creation of an efficient, reliable, adequate, flexible, and accessible infrastructure;

  • Promotion of communications and privacy rights and responsibilities;

  • Recognition of intellectual property interests;

  • Concern for equality and equity, particularly with respect to an evolving concept of universal service; and

  • Maintenance of public space for community discourse and as a public resource, with diverse and equitable access to that space.

Following are some key issues for consideration in designing a new approach to foster these goals and values in the networked telecomputing society of the next decade. Under each element of the communications process, I briefly address what I consider to be the three most significant issues in creating a communications landscape to sustain a democratic value system.

Production

The three most significant issues in the production realm are (1) the regime for intellectual property, (2) the use of government to enforce content controls, and (3) the creation of public space and governmental support for programming and software.

Intellectual Property

The first issue is perhaps the most vexing of the legal issues of the new age. It places the needs and interests of creators, authors, and inventors to protect and exploit their works, on the one hand, against the interest of society in general to have access to a wide diversity of information, on the other. This tension, it could be argued, places the values of liberty, property, and efficiency against those of equality, community, and participation. With digitization and convergence, the issues become blurred in several respects.

First, copyright is premised on reducing an expression to tangibility. The new uses of the information infrastructure enable works to be fluid. Potentially, they can be manipulated, reordered, and commingled from diverse sources. Intellectual property laws of the future will have to contend with the need for attribution of authorship, compensation of partial authors, and the inability to control distribution channels of a work.

Second, multimedia programs are information gluttons, calling for thousands of works in order to be true and effective. For example, should a historical multimedia program contain only the writings and pictures that are available through the public domain or through contractual agreement? If so, as the current copyright law would appear to require, the program itself may have to distort the nature of the historical point. For example, recent disputes over the ownership of Martin Luther King, Jr.'s, papers could result in a multimedia program on the civil rights movement being largely devoid of Dr. King's original works. While such a result is expected in the current world of intellectual property rights, it could distort the teaching tools of the next generation of students.

These and other similar issues in intellectual property suggest that authors and major users will have to work out some form of group or agency licensing, technological attribution system, compulsory licensing, or other method of compensating authors, perhaps without their complete control of the distribution of their works.

Content Controls

With respect to content controls, the proliferation of programming would suggest that some new programming reaching the market will be beneath the current standards of good taste and decency (in terms of sexual content or violence). With the consumer's ability to control programming on the receiving end, however, the need to suppress speech at the production end should be minimized. Thus, with the abundance of sources at the production end and additional distributional outlets, there are likely to be greater amounts of objectionable programming. But with the attendant movement toward deregulation and less scarcity, there is less the government can or will do about it. On the other hand, there should be more options available to the consumer to control information at the terminal end.

Public Space and Information

The third issue is the need for and willingness of the government to support communities (both geographic communities and communities of interest) by the creation of public spaces and substantive products (programming, software) for consumption within that public space. This issue is also thought of as the next manifestation of the public broadcasting, or public telecommunications, concept.

Traditionally, libraries allowed the public access to information that was otherwise available only to those who could afford to purchase each book. What are the libraries in the future age of commoditized information bits? How does one connect to them, as opposed to accessing the physical collections of public book libraries? Who pays and where does the information get delivered?

The future of the public broadcasting system is also an issue. As the distribution systems explode, will the market fulfill the programming needs heretofore met by public broadcasting? How do we measure market failures for public service programming? Should programming be subsidized? If so, which kinds? Who creates it? For what media? The current public broadcasting system is, of course, centered on the broadcast medium, where broadcasters are also producers. What should the future public telecommunications system look like, at what cost, and to whom? To what extent should the system extend beyond the broadcasting/video media?

Distribution

The issues for the distribution system revolve around the creation of an adequate communications infrastructure, the ability to gain access to the distribution system, and the impact that the infrastructure has on communities. The first issue addresses the value of efficiency, which today is usually equated with policies for competition. These policies, however, will also have to consider the values of equality and community, for the public is unlikely to tolerate a system that does not allow all to participate or that atomizes the country to the detriment of our community institutions.

Infrastructure

In the expanding global information environment, the United States already has a varied, diverse, and complex communications system that combines switched narrowband, broadband, wire, fiber, and radio paths to the end user. Moving from this system to the vision of a nationwide (indeed, worldwide) switched broadband interactive video-on-demand system will take significant planning, patience, and determination. No doubt the U.S. communications and information highway is headed in that direction with only limited government involvement. But the public will have to take an active role in helping to shape significant elements of that distribution system. The regulatory principles of free flow of information, interconnection, access, diversity, and interoperability appear to be central to the democratic development and efficient evolution of the new international information environment. At this point in time, the evolution of the Internet—perhaps the most interesting and significant development of the 1990s in this field—has seen a push by government in the initial research and development, in subsidies, and, arguably, in creating incentives, but little in the way of direct regulation. As the government begins to pull away from its subsidization, and as access to the Internet becomes more prevalent and important, these issues should become more acute and visible.

Universal Service

Furthermore, social and governmental institutions will have to determine what "universal service" means beyond regular dial tone. How should that definition change over time and in different environments? What items should be considered essential and subject to controls to assure that most people have access? The public will have to determine how much it wants to invest directly in a telecommunications system that should (but may not) benefit the country economically and socially. What do we want to see the system do, such as provide educational, health care, citizenship, and cultural services? To whom and where should those services be delivered, and at what cost to whom?

Community

How do we design a system that will allow all citizens a form of editorial autonomy in the transmission and reception of communications and still have a society that recognizes value in community (however defined), in cohesiveness, in participatory democracy, and in equity? The network of networks comprising the information infrastructure will coevolve with advances in telecommunications technologies and with the software applications for which the distribution system is used. The government's very difficult role will be to steer these applications in a positive direction while still allowing the flexibility for growth, adaptability, and sustainability.

Reception

Finally, we come to the issues associated with the reception end of the communications process. Here we might focus on the individual's interest in liberty and participation. That suggests creating an environment that empowers the individual to use the infrastructure, with the attendant benefit that the infrastructure will advance as more individuals become literate in and comfortable with it.

User Control

The new environment surely will promote the ability of users to exert control over the communications and information coming into the terminal location. Although the equipment industry is essentially private and subject only to competitive and safety restraints, regulation may be appropriate to ensure that users have a choice of equipment and of content over the network. This may take the form of violence chips, touch-tone phones, or Signaling System 7-compatible equipment. More importantly, it may take the form of protecting against dominance of the set-top device that allows one to navigate the myriad information sources available on the new electronic networks, or even of extending residential users' ownership of wiring to the curb.

Privacy

Similarly, government will likely need to remain involved in the protection of individuals' privacy interests. But the extent of those privacy rights (as opposed to commodified privacy protections) is unclear. What is the bundle of privacy rights that must be guaranteed to every individual? Does it change in different environments? Who bears the costs in each situation?

Information Literacy

Finally, in order for the individual to be able to use any of the telecommunications goods and services, he or she will need to be information literate, that is, able to access, analyze, and produce information for specific outcomes (Aufderheide, 1993). Access to the communications process will have to include access in terms of literacy as well as of facilities and cost. This translates into a need for literacy education and awareness, along with the development of user-friendly technologies, both of which could be encouraged by government grants and purchases.

CONCLUSION

In conclusion, technological, economic, political, scientific, and organizational changes are ushering in a new age of complexity. The self-organizing interconnectedness of the Internet, the reversal of the direction of power in the network, and the convergence and dissolution of boundaries are bringing about a difference in kind more than one of degree.

The future regulatory system for the nation's information infrastructure will need to go beyond the regulatory-deregulatory paradigms of the past and present. It will need to arrive at a new conception of the roles of governments in the evolution of the communications and information environment.

The new regime might look to ecological or other scientific models that allow the various players the flexibility to evolve and adapt to one another, that encourage or allow the creation of neo-intermediaries, and that relate to the outside world. More specifically, technologies, applications, and regulations will all coevolve to form the new information and communications landscape of tomorrow. Governments will be charged with ensuring that that landscape retains its democratic properties for current and future generations.

As U.S. society faces this process, it will have to balance the traditional values that are inherent in a democracy's political system: liberty, equality, community, efficiency, and participatory access. Unless policymakers assess and address that which is undemocratic in the new information environment, these values could be endangered. Under a new paradigm that fosters these strong values within the evolving international information environment, we can best attain a fair, efficient, and sustainable balance of private and public goals.

ACKNOWLEDGMENTS

The author acknowledges and thanks Marjory Blumenthal, Catherine Clark, Cameron Graham, Monroe Price, and Tracy Westen for their constructive comments and editorial suggestions on earlier drafts of this paper.

REFERENCES

AT&T Annual Report, 1909.

Aufderheide, Patricia. 1993. Media Literacy: A Report of the National Leadership Conference on Media Literacy, the Aspen Institute Wye Center, Queenstown, Maryland, December 7–9, 1992. Communications and Society Program, Aspen Institute, Washington, D.C.


Barnouw, Erik. 1975. Tube of Plenty: The Evolution of American Television. Oxford University Press, New York. (Second edition, 1990.)


Bollier, D. 1993. The Information Evaluation: How New Information Technologies Are Spurring Complex Patterns of Change. Aspen Institute, Washington, D.C.

Bollier, D. 1994. The Promise and Perils of Emerging Information Technologies: A Report on the Second Annual Roundtable on Information Technology. Aspen Institute, Washington, D.C.

Brandeis, Louis D., and Samuel D. Warren. 1890. "The Right to Privacy," Harvard Law Review 4:193.

Brenner, D. 1988. "Cable Television and the Freedom of Expression," Duke Law Journal 329.

Brenner, D. 1991. "What About Privacy in Universal Telephone Service?" Universal Telephone Service: Ready for the 21st Century? Annual Review of Institute for Information Studies (Northern Telecom Inc. and the Aspen Institute), Queenstown, Md.

Brenner, Daniel L. 1992. Law and Regulation of Common Carriers in the Communications Industry. Westview Press, Boulder, Colo.

Bruce, Robert R., Jeffrey P. Cunard, and Mark D. Director. 1986. From Telecommunications to Electronic Services: A Global Spectrum of Definitions, Boundary Lines, and Structures. Butterworths, Boston.

Chiron, Stuart Z., and Lise A. Rehberg. 1986. "Fostering Competition in International Communications," Federal Communications Law Journal 38(1):1–57.

Clarke, A. 1945. "Extraterrestrial Relays: Can Rocket Stations Give Worldwide Radio Coverage?" Wireless World, Vol. LI(Jan.-Dec.).

Cleveland, Harlan. 1993. Birth of a New World: An Open Moment for International Leadership. Jossey-Bass, San Francisco.


Dordick, Herbert S. 1986. Understanding Modern Telecommunications. McGraw-Hill, New York.

Duggan, E. 1992. "The Future and Public Broadcasting," The Aspen Quarterly 4(3):14ff.


Entman, Robert M. 1992. Competition at the Local Loop: Policies and Issues. Aspen Institute, Washington, D.C.


Geller, Henry. 1991. Fiber Optics: An Opportunity for a New Policy? Annenberg Washington Program, Communications Policy Studies, Northwestern University, Evanston, Ill.


Information Infrastructure Task Force (IITF). 1993. The National Information Infrastructure: Agenda for Action. Information Infrastructure Task Force, Washington, D.C., September 15.


Kellogg, Michael K., John Thorne, and Peter W. Huber. 1992. Federal Telecommunications Law. Little, Brown, Boston.

Krasnow, Erwin G., Lawrence D. Longley, and Herbert A. Terry. 1982. The Politics of Broadcast Regulation. 3rd Ed. St. Martin's Press, New York.


Lawrence, John Shelton, and Bernard Timberg. 1989. Fair Use and Free Inquiry: Copyright Law and the New Media. 2nd Ed. Ablex Publishing Corporation, Norwood, N.J.

Levy, Steven. 1992. Artificial Life: The Quest for a New Creation. Pantheon Books, New York.

Lewin, Roger. 1992. Complexity: Life at the Edge of Chaos. Macmillan Publishing Company, New York.


Marx, Gary T. 1991. "Privacy & Technology," Whole Earth Review (Winter, 73):90–95.

McCarthy, J. Thomas. 1991. The Rights of Publicity and Privacy. C. Boardman, New York.


National Cable Television Association (NCTA). 1993. National Cable Television Developments. Washington, D.C., November.

National Telecommunications and Information Administration (NTIA). 1988. NTIA Telecom 2000: Charting the Course for a New Century. U.S. Government Printing Office, Washington, D.C.

Negroponte, N. 1993. "The Bit Police: Will the FCC Regulate Licenses to Radiate Bits?" Wired May-June, p. 112.

Nimmer, Melville B., and David Nimmer. 1978. Nimmer on Copyright: A Treatise on the Law of Literary, Musical and Artistic Property, and the Protection of Ideas. 1993 supplement. Matthew Bender, New York.


O'Toole, James. 1993. The Executive's Compass: Business and the Good Society. Oxford University Press, New York.

Owen, Bruce, and Steven S. Wildman. 1992. Video Economics. Harvard University Press, Cambridge, Mass.


Pepper, Robert. 1993. "Broadcasting Policies in a Multichannel Marketplace," Television for the 21st Century: The Next Wave, Charles Firestone, ed. Aspen Institute, Washington, D.C.

Price, M. 1990. "Congress, Free Speech, and Cable Legislation: An Introduction," Cardozo Journal of Entertainment and Communications Law 8:225.

Prigogine, I., and Isabelle Stengers. 1984. Order Out of Chaos: Man's New Dialogue with Nature. Bantam Books, New York.


Reidenberg, J. 1992a. "The Privacy Obstacle Course: Hurdling Barriers to Transnational Financial Services," Fordham Law Review 60:S137.

Reidenberg, Joel R. 1992b. "Privacy in the Information Economy: A Fortress or Frontier for Individual Rights?" Federal Communications Law Journal 44(2):195–243.

Reuben-Cooke, W. 1993. "Rethinking Legal and Policy Paradigms for Television in the 21st Century," Television for the 21st Century: The Next Wave, Charles Firestone, ed. Aspen Institute, Washington, D.C.


Schmidt, Benno C., Jr. 1976. Freedom of the Press vs. Public Access. Praeger, New York.

Soma, John. 1983. Computer Technology and the Law. McGraw-Hill, New York.


Twentieth Century Fund Task Force on Public Television. 1993. Quality Time? The Report of the Twentieth Century Fund Task Force on Public Television. Twentieth Century Fund Press, New York.


Vogel, Harold L. 1990. Entertainment Industry Economics: A Guide for Financial Analysis. 2nd Ed. Cambridge University Press, New York.


Waldrop, M. Mitchell. 1992. Complexity: The Emerging Science at the Edge of Order and Chaos. Simon & Schuster, New York.

NOTES

1.  

See, for example, Bruce et al. (1986), which suggests several dichotomies such as basic versus enhanced, and facilities versus services. See also IITF (1993) (hereinafter, Agenda for Action), which suggests a broader definition that includes regulatory standards and people, as well as hardware, information, and applications.

2.  

See, for example, National Broadcasting Company v. United States, 319 U.S. 190 (1943) (Communications Act of 1934 is constitutional; FCC is more than just a traffic cop of the airways).

3.  

See HBO v. Federal Communications Commission, 567 F.2d 9 (D.C.Cir.), cert. denied, 434 U.S. 829 (1977).


4.  

Actually, the issue of privacy enters at each stage of the communications process. Privacy as the disclosure of intimate facts, for example, is properly viewed as a production-level issue and, to a certain extent, privacy could also be considered a distributional issue. For simplicity, however, this paper considers privacy issues as part of "reception".

5.  

Four of these values come from the work of James O'Toole (see particularly, O'Toole, 1993). The fifth value, participation, is the author's.

6.  

See, for example, 47 U.S.C. 326; Fairness Doctrine Report, 102 FCC2d 143 (1985).

7.  

See National Citizens Committee for Broadcasting v. Federal Communications Commission, 436 U.S. 775 (1978).

8.  

See, for example, 47 U.S.C. 312(a)(7), 315(a); Cable Access Rules, 47 U.S.C. §§531-32.

9.  

See Brenner (1991) and Marx (1991), pp. 90-95.

10.  

47 U.S.C. 151.

11.  

See Statement of Senator Robert Kerrey, U.S. Senate, introducing S. 626, "The Electronic Library Act of 1993," March 22, 1993, in Congressional Record.

12.  

Primer on Ascertainment of Community Problems by Broadcast Applicants, 27 FCC2d 650 (1971).

13.  

Henry v. Federal Communications Commission, 302 F.2d 191 (D.C.Cir.), cert. denied, 371 U.S. 821 (1962).

14.  

See Federal Communications Commission v. Midwest Video Corporation, 440 U.S. 689 (1979).

15.  

United States v. Midwest Video Corporation, 406 U.S. 649 (1972).

16.  

See generally, Twentieth Century Fund Task Force on Public Television (1993) and Duggan (1992).

17.  

47 U.S.C. 151.

18.  

Communications Act of 1934, Title II, 47 U.S.C. §§201-14.

19.  

National Broadcasting Company v. United States, 319 U.S. 190 (1943).

20.  

Communications Act of 1934.

21.  

See United States v. Paramount Pictures, 334 U.S. 131 (1948).

22.  

Telephone equipment, at the periphery of the communications process, could be used in the initiation or reception of communications messages. I consider it in the "Production" section of this part of the paper, because it is used first in the initiation of messages. The same is true of computer manufacturing, discussed below.

23.  

Complaint, United States v. Western Electric Company, No. 17-49 (D.N.J. Jan. 14, 1949). See Kellogg et al. (1992) at §4.3.

24.  

United States v. Western Electric Company, 1956 Trade Cas. (CCH) ¶68,246 (D.N.J. 1956), discussed in Kellogg et al. (1992) at §4.3.

25.  

In the later "Modified Final Judgment," agreeing to the divestiture of the Bell Operating Companies (BOCs) from AT&T, the manufacturing of equipment went with the competitive business and was forbidden fruit to the BOCs. See United States v. AT&T, 552 F.Supp. 131 (D.D.C. 1982), aff'd sub nom. Maryland v. United States, 460 U.S. 1001 (1983).

26.  

National Broadcasting Company v. United States, 319 U.S. 190 (1943).

27.  

47 U.S.C. 315(a).

28.  

18 U.S.C. 1464.

29.  

Mayflower Broadcasting Corporation, 8 FCC 333 (1940).

30.  

Report on Editorializing by Broadcast Licensees, 13 FCC 1246 (1949).

31.  

Committee to Save WEFM v. Federal Communications Commission, 506 F.2d 246 (D.C.Cir. 1974)(en banc). But see Federal Communications Commission v. WNCN Listeners Guild, 450 U.S. 582 (1981)(upholding FCC deregulation of format changes).

32.  

Primer on Ascertainment of Community Problems by Broadcast Applicants, 27 FCC2d 650 (1971); Office of Communication of the United Church of Christ v. Federal Communications Commission, 359 F.2d 994 (D.C.Cir. 1966).

33.  

See Community Access v. Federal Communications Commission, 737 F.2d 74 (D.C.Cir. 1984); West Coast Media (KDIG) v Federal Communications Commission, 695 F.2d 617 (D.C.Cir. 1982), cert. denied, 464 U.S. 816 (1983) (failure to meet promises in public affairs and news).

34.  

17 U.S.C. 102ff. See generally, Sony Corporation of America v. Universal City Studios, 464 U.S. 417 (1984); Nimmer and Nimmer (1978; 1993 Supp.).

35.  

17 U.S.C. 107.

36.  

17 U.S.C. 111.

37.  

IBM Corporation, 687 F.2d 591 (2d Cir. 1982); Soma (1983) at §4.03; Kellogg et al. (1992) at §4.5, n. 7.

38.  

See Soma (1983) at §§4.02-4.18.

39.  

See AT&T (1909); Kellogg et al. (1992) at §1.3.

40.  

47 U.S.C. 214.


41.  

47 U.S.C. 201-214; Brenner (1992), pp. 15-18.

42.  

See Louisiana Public Service Commission v. Federal Communications Commission, 476 U.S. 355 (1986); Kellogg et al. (1992) at §§2.7.2 and 2.11.

43.  

See MTS and WATS Market Structure, Report and Order, 2 FCC Rcd. 2953 (1987); Brenner (1992), Chapter 9.

44.  

See Dordick (1986), Chapter 4.

45.  

INTELSAT Agreement.

46.  

See Red Lion Broadcasting Corporation v. Federal Communications Commission, 395 U.S. 367 (1969); National Broadcasting Company v. United States, 319 U.S. 190 (1943). But see Federal Communications Commission v. League of Women Voters of California, 468 U.S. 364, n. 11 (1984).

47.  

47 U.S.C. 312(a)(7).

48.  

47 U.S.C. 315(a).

49.  

47 U.S.C. 315(b).

50.  

Report on Editorializing, supra.

51.  

Primer on Ascertainment of Community Problems by Broadcast Applicants, 27 FCC2d 650 (1971).

52.  

Black Broadcasting Coalition of Richmond v. Federal Communications Commission.

53.  

See Community Access v. Federal Communications Commission, 737 F.2d 74 (D.C.Cir. 1984); West Coast Media (KDIG) v. Federal Communications Commission, 695 F.2d 617 (D.C.Cir. 1982), cert. denied, 464 U.S. 816 (1983) (failure to meet promises in public affairs and news).

54.  

Prime Time Access Rule, 47 CFR §73.658(k).

55.  

18 U.S.C. 1464.

56.  

Federal Communications Commission v. Pacifica Foundation, 438 U.S. 726 (1978).

57.  

18 U.S.C. §1304 (transmission of lottery information prohibited), §1307 (state lotteries exempted). See also United States v. Edge Broadcasting Company, 113 S.Ct. 2696 (1993).

58.  

18 U.S.C. §1343 (transmission of fraud prohibited).

59.  

See Citizens Communications Center v. Federal Communications Commission, 447 F.2d 1201 (D.C.Cir. 1971) at n. 9. See generally, Krasnow et al. (1982), Chapter 7.

60.  

Red Lion Broadcasting Corporation v. Federal Communications Commission, 395 U.S. 367 (1969).

61.  

Miami Herald v. Tornillo, 418 U.S. 241 (1974). For a discussion of the contrast between Tornillo and Red Lion, see Schmidt (1976).

62.  

City of Los Angeles v. Preferred Communications, 476 U.S. 488 (1986).

63.  

Cable Act of 1984, 47 U.S.C. §§521ff.

64.  

See 47 CFR §21.1ff; National Association of Regulatory and Utility Commissioners v. Federal Communications Commission, 525 F.2d 630 (D.C.Cir.), cert. denied, 425 U.S. 992 (1976).

65.  

See generally, Geller (1991).

66.  

Purchases by libraries may affect the production of certain books, tapes, and disks in that without this market certain academic works might not be published. Both the 1993 National Performance Review and the Agenda for Action (IITF, 1993) recognized the federal government's ability to impact the directions of the infrastructure through its purchasing power.

67.  

Hush-A-Phone v. United States, 238 F.2d 266 (D.C.Cir. 1957).

68.  

United States v. Southwestern Cable Company, 392 U.S. 157 (1968).

69.  

Fortnightly Corp. v. United Artists Television, 392 U.S. 390 (1968); TelePrompTer v. CBS, 415 U.S. 394 (1974).

70.  

See generally, Malrite T.V. of New York v. Federal Communications Commission, 652 F.2d 1140 (2d Cir. 1981), cert. denied, 454 U.S. 1143 (1982) (affirming later deregulation).

71.  

See Barnouw (1975), p. 100.

72.  

All Channel Receiver Act, 47 U.S.C. 330. See Krasnow et al. (1982), Chapter 6.

73.  

Federal Communications Commission v. Pottsville Broadcasting Company, 309 U.S. 134 (1940).

74.  

See also Vogel (1990), Chapter 1, which shows increasing share of nontheatrical revenues.

75.  

United States v. Columbia Pictures Industry, 507 F.Supp. 412 (S.D.N.Y. 1980), aff'd F.2d (2d Cir. 1981).

76.  

Copyright Act of 1976, Public Law No. 94-553, 90 Stat. 2541, Oct. 19, 1976, at new Section 107, 17 U.S.C. §107. See Lawrence and Timberg (1989).

77.  

Cable Television Consumer Protection and Competition Act of 1992 (Public Law 102-385), Section 6, 47 U.S.C. §325 (b).

78.  

Specialized Microwave Common Carriers, 29 FCC 2d 870 (1971), aff'd sub nom. Washington Utilities and Transportation Commission v. Federal Communications Commission, 512 F.2d 1157 (9th Cir.), cert. denied, 423 U.S. 826 (1975).


79.  

Above 890 MHz, 27 FCC 359 (1959), recon. denied, 29 FCC 825 (1960).

80.  

Microwave Communications, Inc., 18 FCC 2d 953 (1969).

81.  

Specialized Microwave Common Carriers, 29 FCC 2d 870 (1971), aff'd sub nom. Washington Utilities and Transportation Commission v. Federal Communications Commission, 512 F.2d 1157 (9th Cir.), cert. denied, 423 U.S. 826 (1975).

82.  

MCI Telecommunications Corporation v. Federal Communications Commission, 561 F.2d 365 (D.C. Cir. 1977), cert. denied, 434 U.S. 1040 (1978); MCI v. FCC, 580 F.2d 590 (D.C.Cir.), cert. denied, 434 U.S. 790 (1978).

83.  

Establishment of Domestic Communications-Satellite Facilities by Non-Governmental Entities, 22 FCC 2d 86 (1970) ("Open Skies"); Establishment of Domestic Communications-Satellite Facilities by Non-Governmental Entities, 35 FCC 2d 844, 37 FCC 2d 184, recon. denied, 38 FCC 2d 665 (1972), aff'd sub nom. Network Project v. Federal Communications Commission, 511 F.2d 786 (D.C.Cir. 1975). See generally, Note, "The Satellite Competition Debate: An Analysis of FCC Policy and an Argument in Support of Open Competition," Syracuse Law Review 40:867 (1989), cited in Kellogg et al. (1992) at §12.3.3, n. 60.

84.  

Resale and Shared Use, 62 FCC 2d 588 (1977), aff'd sub nom. AT&T v. Federal Communications Commission, 572 F.2d 17 (2d Cir.), cert. denied, 439 U.S. 875 (1978).

85.  

Establishment of Satellite Systems Providing International Communications, 101 FCC 2d 1046 (1985). See Chiron and Rehberg (1986) and Kellogg et al. (1992) at §§15.3.3 -15.3.4.

86.  

Use of Bands 825-845 MHz and 870-890 MHz for Cellular Communications Systems, 86 FCC 2d 469 (1981).

87.  

Personal Communications Service (FCC General Docket 90-314, Sept. 23, 1993).

88.  

Expanded Interconnection with Local Telephone Company Facilities, CC Docket No. 91-141, Report and Order and Notice of Proposed Rulemaking, released October 19, 1992.

89.  

See NCTA (1993), p. 11-A. See generally, Pepper (1993).

90.  

National Cable Television Association (1993), p. 7-A (78 national cable video networks as of December 31, 1992). Added to those are the four national television broadcast networks: ABC, CBS, Fox, and NBC. Several new cable video networks were begun in 1993, including The Food Network and FX-TV.

91.  

City of Los Angeles v. Preferred Communications, supra.

92.  

The Chesapeake and Potomac Telephone Company of Virginia v. United States, Civil Case No. 92-1751-A (E.D. Va.) (Slip Op. of Judge T.S. Ellis III, August 24, 1993).

93.  

See, for example, Price (1990). Professor Price questions whether the First Amendment is being extended too far in these kinds of cases. See generally, Brenner (1988).

94.  

Carterfone v. AT&T, 13 FCC 2d 420, recon. denied, 14 FCC 2d 571 (1969).

95.  

Computer Inquiry II, 77 FCC 2d 384 (1980), recon. FCC 2d 445 (1981), aff'd in part sub nom. Computer & Communications Industry Association v. Federal Communications Commission, 693 F.2d 198 (D.C. Cir. 1982), cert. denied, 461 U.S. 938 (1983).

96.  

Id. See 47 C.F.R., Part 68. See New or Revised Classes of Interstate and Foreign Message Toll Telephone Service (MTS) and Wide Area Telephone Service (WATS), First Report and Order, 56 FCC 2d 593 (1975).

97.  

In the later "Modified Final Judgment," agreeing to the divestiture of the Bell Operating Companies (BOCs) from AT&T, the manufacturing of equipment went with the competitive business and was forbidden fruit to the BOCs. See United States v. AT&T, 552 F.Supp. 131 (D.D.C. 1982), aff'd sub nom. Maryland v. United States, 460 U.S. 1001 (1983).

98.  

For example, New York Civil Rights Law, §§50-51. See also, McCarthy (1991), pp. 6-4 and 6-5.

99.  

See Brenner (1991). See also NTIA (1988), pp. 135-140.

100.  

Fair Credit Reporting Act, 15 U.S.C. 1681.

101.  

Right to Financial Privacy Act, 12 U.S.C. 3401.

102.  

Cable Communications Policy Act of 1984, 47 U.S.C. 551.

103.  

For example, Family Educational Rights and Privacy Act, 20 U.S.C. 1232g; Video Privacy Protection Act, 18 U.S.C. 2710.

104.  

In that sense, the issues of privacy are not strictly within the "reception" category, but for ease of reading, the discussion of privacy is primarily in this section and not spread throughout the paper.

105.  

See, for example, Brenner (1991).

106.  

Federal Communications Commission v. Pacifica Foundation, 438 U.S. 726 (1978).

107.  

Indecency differed from obscenity in that it had to meet only one test—to be patently offensive to community values (for the nation as a whole, according to the FCC)—whereas obscenity had a three-part test that included an appeal to prurient interests and a finding that the work, taken as a whole, lacked serious literary, artistic, political, or scientific value.


108.  

See, for example, Cruz v. Ferre, 755 F.2d 1415 (11th Cir. 1985). The courts also restricted federal approaches to regulating "indecent" content of sexually oriented 900 numbers. See, for example, Sable Communications of California v. Federal Communications Commission, 492 U.S. 115 (1989), but the courts have allowed regulations in this area that provided for access codes, scrambling of messages, and use of credit cards. See Carlin Communications v. Federal Communications Commission, 837 F.2d 546 (D.C.Cir.), cert. denied, 488 U.S. 924 (1988). Nevertheless, the courts have not sided entirely with vendors on this matter, and some restrictive regulations, even on carriers, were allowed [Mountain States].

109.  

Some could conclude that these vestiges are simply problems of transition, or regulatory noise. In light of the later discussion, I would suggest that often they are expressions of values so strongly held that even the newer paradigm would not release the regulatory need for more than a competitive response.

110.  

For descriptions of the movement toward a science of complexity, or on self-organization, see, for example, Prigogine and Stengers (1984), Lewin (1992), Levy (1992), and Waldrop (1992).

111.  

See Bollier (1993), pp. 4-6.

112.  

See, for example, Bollier (1994).

113.  

See, for example, Cleveland (1993).

114.  

Letter of Monroe Price to author, September 9, 1993.

115.  

Kellogg et al. (1992) at §3.

116.  

The workshop for which this paper was written explored various investment and incentive options. See particularly, Walter S. Baer, "Government Investment in Telecommunications Infrastructure," in this volume. For another thorough listing of government incentives, see IITF (1993).


Current and Future Uses of Information Networks

Colin Crook

This paper describes the uses of information networks, today and in the future. Citicorp is used as a specific example, but many of the comments are applicable across the financial and banking communities.

BACKGROUND

Citicorp offers banking and financial services in more than 90 countries. Its products and services serve both individual consumers (approximately 40 million) and corporations on a global basis. Information technology is fundamental to the operation of the bank. Indeed, money can be viewed as an ideal information technology product. In most instances, money exists as "bits" on computer storage disks, computer processors, networks, and so forth. The sheer volume of money moved each week across the bank's systems approximates the U.S. gross domestic product.

Central to the bank's structure is its communications network. The developments in information technology and distributed computing emphasize even more the increasing role of the network. The bank serves its consumer and corporate customers via electronic network means (customer automatic teller machines, or ATMs, and corporate mainframe connections). The network provides immense flexibility to the bank in serving its customers; locating its assets, independent of geography; delivering products globally; connecting with competitors and partners; and using other companies' resources to assist the bank.

The bank's networks in the United States are very extensive, providing voice, video, and data communications. The global reach of the bank requires that the extensive U.S. network be an integral part of the bank's global network. The communications network is rapidly becoming the bank's common infrastructure. This enormously complex network, which is actually a combination of many national network systems, must be viewed in the context of global networks. Today, however, no single company can yet offer a truly global networking capability. Citicorp can be viewed as an acid test of true end-to-end service delivery across the world.

THE IMPORTANCE OF INFORMATION NETWORKS

The bank's goal of total relationship banking requires that customers anywhere in the United States be able to access a customer service person via telephone. In turn, this service person must be able to access all of the customers' information, regardless of location. For example, a service center in San Antonio offers Spanish and English language support for the whole United States, with plans in place to support overseas operations in the future. The service person can access customer information in New York, Nevada, and South Dakota, where computer centers exist. The network is fundamental to this capability. A network-based infrastructure would also allow the shifting of business functions to locations that can provide operating efficiencies. The bank is transferring its European credit card processing activities to Nevada. That will require routinely moving large amounts of information across the network between Europe and the United States, and doing so in a secure way.

In terms of the bank's basic operation, its networks are already essential to its very survival—the bank cannot function without the network. However, the full impact of contemporary technology on both the bank's own systems and the overall economy has not yet been felt! Indeed, early evidence and experience indicate that the impact may be profound, affecting both the very structure of the bank and the fundamental behavior of its customers.

A GOOD ECONOMIC BET

The banking industry contains large inefficiencies, and the same may well be true of other industries. Citicorp's push for a network-based infrastructure assumes that telecommunications is a good economic bet. We believe, in fact, that networking information technology is a very good economic bet.

However, companies cannot exploit networking technology unless it is fundamentally integrated into the way they actually do business. Doing so involves a systems approach that requires rethinking the entire business.

In addition, considerable technological innovation is taking place. The creative use of these developments could represent significant competitive threats. However, these developments could also provide the bank with the capability to fundamentally restructure its business processes as well as permit dramatic improvements in serving customers. In examining the likely developments in technology, several themes appear to be evolving that will affect the bank.

LIKELY DEVELOPMENTS

Customer Interfaces

The emergence of an increasing panoply of innovative consumer and business information devices (screen phones, personal digital assistants, smart cards, etc.) represents opportunities for new products and services. In addition, some form of payment system will be needed in order to support the wide range of services being sold. These new devices will be connected via networks to the service providers.

In essence, network usage, which used to be dominated by the plain old telephone, will rise dramatically through the increasing use of innovative information devices. Already, a considerable amount of financial activity is carried out by electronic means, such as ATMs. Therefore, financial and banking companies will seek to exploit the network to provide more effective consumer and business access to their services via the multitude of new information devices. Customers will probably wish to use electronic means to transact their business with the bank.

Bank Structure

In addition to providing customer access, the network facilitates and enables significant new ways to structure enterprises. Issues of both time and geography can be handled more effectively. For example, the processing of information can be handled at specific points around the world and, via use of the network, information can be moved quickly and efficiently. Citicorp already processes information in the United States for non-U.S. operations of the company. This trend will continue.

As companies operate globally, their needs must be handled 24 hours a day. For example, Citicorp is providing 24-hour-a-day service coverage from the United States in support of its global customers' cash management needs. A new awareness of systemic risk and of potential threats to the bank's operations leads the bank to exploit the network to provide geographic and time diversity and to allocate risk. Provided that the network remains available, the bank's operations can continue despite calamitous events (fire, bombs, natural disasters, etc.). The ability of the network to facilitate the operating integrity of the company, enable efficiency, and eliminate unneeded redundancy will grow in importance as the company grows. The network must have the needed characteristics of capacity, resiliency, and reliability.

The Network as a Market

Electronic commerce is increasing rapidly as companies restructure their business processes and exploit new thinking about how companies can work together more effectively. The U.S. gross domestic product of approximately $5.6 trillion per year [as of 1993] is supported by around 100 billion financial transactions (cash, credit cards, etc.). Interestingly, the telecommunications network itself, in order to operate, uses approximately 100 billion financial transactions each year. Already the information economy accounts for as many financial transactions as the entire U.S. gross domestic product!
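As a rough check on these figures (the calculation is my own illustration, not a number from the paper), the implied average transaction behind the gross domestic product is small:

    \[
      \frac{\$5.6 \times 10^{12} \text{ per year}}{100 \times 10^{9} \text{ transactions per year}} \approx \$56 \text{ per transaction}.
    \]

If these figures are roughly right, the dollar volume is concentrated in relatively few large wholesale transfers, while the transaction count that the network must carry is driven by vast numbers of small payments.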

As electronic commerce increases, the financial transaction volume will rise exponentially. Electronic commerce carried out across networks will represent new challenges requiring considerable innovation. Already trillions of dollars are transferred by the bank every week—and most of this activity is performed not by people but by machines. Such volumes mandate a high level of bank control over access to its network-based applications. In a network-based business environment, protecting the application by authenticating application users is critical because of the extreme difficulty in protecting the network itself. The network will have literally millions and millions of customers—individuals, companies, public institutions. Because customers tend to do whatever they want, the key is to build the right kind of infrastructure that can support the needs of the individual customer.
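The principle of authenticating users at the application layer, rather than trusting the underlying network, can be illustrated with a minimal sketch. The shared-secret scheme below is a generic illustration only, not a description of Citicorp's actual mechanism; all names, keys, and field layouts are hypothetical.

    import hashlib
    import hmac
    import json
    import time

    # Hypothetical key store; in practice keys would be issued and held out of band.
    SHARED_SECRETS = {"customer-42": b"key-issued-out-of-band"}

    def sign_request(user_id, payload, key):
        """Client side: attach the user id, a timestamp, and an HMAC over the message."""
        body = json.dumps(payload, sort_keys=True)
        msg = "%s|%d|%s" % (user_id, int(time.time()), body)
        tag = hmac.new(key, msg.encode(), hashlib.sha256).hexdigest()
        return {"user": user_id, "msg": msg, "tag": tag}

    def verify_request(request, max_age_s=300):
        """Application side: authenticate the caller before acting, without trusting the network."""
        key = SHARED_SECRETS.get(request["user"])
        if key is None:
            return False
        expected = hmac.new(key, request["msg"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, request["tag"]):
            return False
        _, timestamp, _ = request["msg"].split("|", 2)
        return time.time() - int(timestamp) <= max_age_s  # reject stale (replayed) requests

    # A transfer instruction is acted on only if it verifies.
    request = sign_request("customer-42", {"action": "transfer", "amount": 125.00},
                           SHARED_SECRETS["customer-42"])
    assert verify_request(request)

The point of the sketch is only that access control attaches to the application and its users; the network in between is treated as untrusted.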

Exponential rises in electronic commerce will also lead to pressures for new legal and regulatory thinking. Without such new thinking, business and commerce could be inhibited. Intellectual property issues are already proving to be thorny ones.

The Network Structure

A contemporary view regards the network as an information infrastructure with an array of essential network services that facilitate the flow of customer products and services across it. If the network is viewed as just bandwidth, it becomes very difficult to use. The services inside the network, such as security, directories, and so forth, are essential to its use by intelligent devices or software applications.


As enterprises become more network-based, they are seeking to have other companies supply capabilities that permit the enterprise to focus on those processes that create and deliver value to their customers. The network facilitates the creation of the so-called virtual enterprise. Here, directories are critical to permitting several companies to appear to be a single enterprise. Directories are central to the network.
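The directory idea can be pictured with a small sketch: a mapping from enterprise-wide service names to whichever partner currently provides each capability, so that callers see a single virtual enterprise. The service names, companies, and addresses below are hypothetical and purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class Endpoint:
        company: str   # which partner actually provides the capability
        address: str   # network address of the service (hypothetical)

    # The directory presents one logical enterprise over several cooperating companies.
    DIRECTORY = {
        "card-processing": Endpoint("ProcessorCo", "proc.example.net:7001"),
        "customer-service": Endpoint("HomeBank", "svc.example.net:7002"),
        "statement-print": Endpoint("PrintPartner", "print.example.net:7003"),
    }

    def resolve(service):
        """Look up which partner, at which address, currently provides a named service."""
        try:
            return DIRECTORY[service]
        except KeyError:
            raise LookupError("no provider registered for service '%s'" % service)

    # Callers address the logical service, not the partner company behind it.
    print(resolve("card-processing"))

Re-pointing a directory entry changes which company does the work without callers noticing, which is essentially the flexibility described above.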

Citicorp's future strategies anticipate the emergence of the virtual enterprise, and work is already in hand to move in this direction. The future network structure of the bank will be a combination of public and private networks. Wherever possible, the public network will be exploited. Additionally, the bank will be a combination of the capabilities of several companies, all existing within an integrated information network infrastructure. This vision of the bank will not be static. It will be constantly changing, according to customer and business needs.

Critical to the successful evolution of these ideas is the emergence of common infrastructural services that all companies can utilize. One example is a common, secure, trusted billing and payment infrastructure that many institutions can use. Such an infrastructure would address issues of auditability, fraud prevention, user authentication, secure access control, and so forth. It is believed that progressive economies will develop such common, network-based information capabilities for users and suppliers, obviating the need to develop multiple proprietary networks that add no customer value.


The Changing Nature of Telecommunications and the Information Infrastructure for Health Care

Edward H. Shortliffe

INTRODUCTION

Discussions at this CSTB workshop and in other forums are confirming the notion, often voiced in medical computing circles, that the use of computers and communications for health care is roughly a decade behind routine applications of the technologies for much of the rest of society. The details about Citicorp provided by Colin Crook in this volume, however, suggest that the gap is closer to two decades! Much is happening in the medical computing world, but the story is different from that for many other segments of society, and here I will try to explain some of the reasons for those differences.

It would be inaccurate to give the impression that the medical community has been oblivious to the potential role of computers and networking in biomedicine. In fact, in the early 1970s the first node on the ARPANET that was not a defense-funded resource (either through U.S. Department of Defense grants to academic institutions or direct military support) was the National Institutes of Health-funded SUMEX-AIM resource at Stanford University's School of Medicine, a machine dedicated to biomedical applications of artificial intelligence research. With the subsequent addition of many more medically oriented nodes to the national network, a small but active segment of the biomedical community grew up with the ARPANET during its transition to the Internet of today.

Furthermore, it is revealing to look back to a prophetic 10-year-old document that was produced by a long-range planning panel for the National Library of Medicine (NLM) in 1986. Shortly after the new director of the NLM took office in 1985, he brought together people who could help devise a grand plan for what the institution should be doing to prepare for the decades ahead. The resulting report (NLM, 1986) dealt with the future of medical "informatics" and noted that "widely disseminated medical information systems will require high-bandwidth communications to allow access to the computational, data, and information resources needed for health care and research" (p. 60). An explicit goal mentioned in the report was that "by the end of the next decade [presumably 1996], there will be a national computer network for use by the entire biomedical community, both clinical and research professionals. The network will have advanced electronic mail features as well as capabilities for large file transfer, remote computer log-in, and transmitted graphics protocols. It will either be part of the larger national network … or will have gateways to other federally sponsored networks" (p. 65). The report includes much discussion of the ARPANET, as the larger Internet was still called in 1986, as a model for the national network that would facilitate a variety of applications in biomedicine and health care delivery.

NOTE: Support of this work under grants LM-05305 and LM-05208 from the National Library of Medicine is gratefully acknowledged. Portions of this paper are adapted from a presentation given by the author at the 1993 annual meeting of the American Clinical and Climatological Association held in Sea Island, Georgia (Shortliffe, 1994). Many of the topics in this paper were also discussed at "National Information Infrastructure for Health Care," a workshop sponsored by the Computer Science and Telecommunications Board and the Institute of Medicine on October 5–6, 1993, in Washington, D.C.

Despite this explicit call for the biomedical community to embrace the potential role of a national communications infrastructure, little happened in the intervening years. The NLM expressed an interest in pursuing the topic, but it needed incremental funding to do much, and the rationale for new efforts was extremely hard to sell to the mainstream medical community and hence to the Congress. Yet since the passage of the High Performance Computing and Communications Act and the election of an administration with particular interests in both the national information infrastructure and health reform, we have seen an awakening of interest among leaders who had previously viewed computers and telecommunications for health care as an esoteric topic.

STIMULI TO CHANGE

Certain key forces are driving such changes in awareness and interest. First, the shift to managed care and capitation is changing dramatically the requirement for communication among the parties involved in health care. Insurers are demanding a basis for making comparisons among providers (both institutional and individual), and suddenly new kinds of clinical data need to be collected, communicated, and collated. Pressures to develop and manage such comparative clinical data did not exist in the past to the same extent.

Second, proposed health care reform legislation, and the resulting high-profile discussions of health care financing and organization, are reinforcing and broadening the pressures on health care institutions and providers. When President Clinton introduced his health reform proposals to the Congress on September 22, 1993, he referred explicitly to the opportunities for increased efficiency, technology assessment, and cost savings offered by information technology. Computing and communications technologies have emerged as key elements in the strategic plan for eliminating waste in the health care system. Clearly, such an impact will be easier to claim than to achieve, but the expectations do help explain the sudden shift in interest among health planners.

There are other prominent examples of a growing societal awareness of the potential role for the national information infrastructure (NII) in supporting health reform. In April 1993, the Computer Systems Policy Project (CSPP), composed of senior executives from the major vendors of computer systems, released a report on the relevance of the NII to health care (CSPP, 1993). During that same month, a report commissioned by the U.S. Department of Health and Human Services, and developed by a panel formed by the American Hospital Association, was released (AHA/DHHS, 1993). Although the committee that drafted the report had been convened to look at issues in the creation of computer-based patient records, it soon chose to address more broadly the issues of information infrastructure required to support the notion of computerized individual patient records. This broader view is especially valid when one begins to envision longitudinal medical records that are tied to a mobile patient rather than to a single provider's office or a hospital.

For those who have been interested in medical informatics and biomedical communications for some time, now is an exciting period. Key decisionmakers are listening and becoming very enthusiastic about seeing profound changes, both in the health care system itself and in the creation and use of an underlying information infrastructure. Until recently, the role of regional and national communications in support of health care has been a largely grass-roots activity, with limited shared national vision and leadership. Some of the most successful experiments have been in the area of "telemedicine," in which, for example, electronic communications have been used to provide
consultation by specialists to physicians in rural, inner-city, or other isolated locations. As the Internet increases in its capacity, it will be able to accommodate the kinds of voice and video transmissions that are crucial for this kind of telemedicine activity. Constraining progress, however, have been frequently voiced concerns regarding risks to the privacy and security of clinical information. Such concerns have led many health care institutions to resist exploring modern networking technologies, both within their own walls and when considering linkage to outside networks in their communities or beyond (IOM, 1994).

RECOGNIZING THE NEED FOR IMPROVED CLINICAL DATA SYSTEMS

Until 1993, there was essentially no federal involvement in defining the role of the information infrastructure as it relates to the delivery of health care. Beginning in the late 1980s, largely through the activities of the NLM, we did see the involvement of the health sector in discussions of how the NII might support research and education in biomedicine. Unfortunately, there has been little or no knowledge of the existing NII, nor an understanding of its implications, among the leaders in the health care industry. Those few hospitals that are connected to the Internet are mostly academic institutions that have sought such connections through their main university campuses. Recently, the NLM initiated a grants program to encourage more hospitals to institute Internet connections and to begin to explore the ways in which national networking could support their clinical mission.

Obstacles to more effective use of the existing NII in health care, and to an informed anticipation of how emerging communications and computing technologies will affect health care, are largely logistical, political, and financial, rather than technological. About two years ago I was asked to give a talk at a conference on gigabit networking. My message was that, for the present, we can largely ignore biomedical gigabit networking issues and simply work to make better use of the technologies that we have today. That is not to say biomedicine could not do more with gigabit speeds in the future, but that is not the major need at present.

One way to make progress in dealing with the logistical, political, and financial barriers to acceptance of computing and communications technology has been to demonstrate the relevance of such technology to cost savings and to health reform. We are beginning to see data in the literature that demonstrate how computing holds the promise of impressive economies. One recent report, from physicians at the Regenstrief Institute at the Indiana University School of Medicine, describes a well-designed controlled clinical trial at Wishard Memorial Hospital in Indianapolis in which physicians who used computers to order tests and to receive reminders generated more than $800 in savings per patient stay compared with providers who did not use the technology (Tierney et al., 1993). As the article notes, extrapolation of these effects suggests "savings of more than $3 million in charges annually for this hospital's medicine service and potentially tens of billions of dollars nationwide" (p. 379). The problem is that a large, complex system such as the one built at Wishard Memorial Hospital and connected to the Regenstrief Medical Record System over the past 20 years cannot simply be duplicated for implementation at another hospital. In the absence of standards for systems integration and data sharing among institutions, transporting a highly tuned technology from one hospital to another can be next to impossible.
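The extrapolation follows from simple arithmetic; the nationwide admission count below is an illustrative assumption of my own, not a figure from the study:

    \[
      \frac{\$3{,}000{,}000 \text{ per year}}{\$800 \text{ per stay}} \approx 3{,}750 \text{ stays per year on the medicine service};
    \]
    \[
      \$800 \text{ per stay} \times \left(\text{roughly } 3 \times 10^{7} \text{ U.S. hospital admissions per year}\right) \approx \$2.4 \times 10^{10} \text{ nationwide}.
    \]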

Data such as those from the Indiana study are clearly needed to demonstrate the value of interinstitutional network connections and the role that the NII could play. Such fiscal data make hospital CEOs pay attention, and they have much to do with a reassessment of how hospital data systems need to be designed and, especially, how they might become more clinical in their emphasis, departing from a traditional administrative and financial orientation.

Also driving the need for more clinically oriented data systems is our current lack of data to gauge the quality or cost of health care. Employers and insurers are increasingly choosing to
contract with the hospital that can show the highest quality at the lowest cost; if an institution charges more, it must be able to show that the higher cost is associated with higher quality. If provider institutions lack the data systems that allow them to demonstrate improved outcomes over competing hospitals, they may increasingly find that they are unable to win the managed care contracts required to keep their beds full. Subjective impressions that one hospital is "better" than another hold little sway with an employer or insurer that is fighting frantically to control health care costs.

There are generally no community-wide databases that store information on providers and patients, although there are a few experiments to develop regional health databases (IOM, 1994). When there are pooled data for regions, they tend not to record patient-centered information on topics such as consumer satisfaction or functional outcomes after treatment. Similarly, there is generally no reasonable way to determine whether doctors are performing safe, appropriate, and effective care, despite demands that we begin to develop the kind of data sets that allow such information to be released.

We clearly need better data collection methods than we have today. For example, clinical data sets derived from insurance claims and depending on voluntary submission of diagnostic and outcomes information by providers are often rendered useless by inconsistent compliance and information of questionable accuracy. Several recent studies regarding the health of our nation are based upon data submitted to Medicare on claims forms. Clinical information on such claims provides a limited proxy for medical reality. This helps explain the Medicare system's recent pressure for the creation of a Uniform Clinical Data Set (Audet and Scott, 1993). We clearly need better ways of collecting comprehensive clinical data than to depend on insurance claims submissions. This is one of the many justifications for the recent push to see the creation of computer-based patient records (IOM, 1991).

ATTRACTING PHYSICIANS TO INFORMATION MANAGEMENT TOOLS

A colleague and I recently argued that clinicians are inherently "horizontal" users of information technology (Greenes and Shortliffe, 1990). By this we mean that they require a critical mass of functionality from the system they are using before routine use of the computer will be viewed as worthwhile to them. If the computer is useful only occasionally, say for one or two patients per day, then the inertia involved in going to the machine will typically prevent the effective use of that technology. But if the computer provides functionality that is useful for essentially every patient seen, and if that use is as good as or better than the manual methods previously available, then it is reasonable to assume that physicians will begin to turn to the computer for support. It is also important to recognize that physicians seek help with the noxious tasks associated with data management but are not interested in having computers infringe on valued tasks. Furthermore, they require intuitive interfaces that require little or no training (similar, for example, to the training required to use the telephone or, more topically, an automated teller machine).

Part of this critical mass of functionality will be made available not within the physician's local environment but via the NII. Imagine the model shown in Figure 3, in which providers are linked, either directly or through the hospitals at which they practice, to research and public health databases or to national repositories of patient data. When people become sick away from home, there would be tremendous utility in a system that permitted authorized health workers, say in an emergency room, both to identify them and to access key clinical data about their medical problems, allergies, medications, and the like from a centralized data resource provided via the network. Physicians could also be provided with access to third-party payers, to Medicaid/Medicare, to the Food and Drug Administration, to the Centers for Disease Control, to medical schools and their continuing education activities, to a variety of vendors such as pharmaceutical companies or software companies, to information resources such as Medline, and to patients themselves. If the network provides health workers, patients, and the public with access to all these kinds of information sources, as well as two-way communication that allows retrieval of data plus submission of information (e.g., claims), one can begin to imagine the appeal and acceptability of the NII and its health care role. There has been remarkably little planning regarding the implementation of these kinds of connectivity, but I believe that Figure 3 demonstrates the potential for a major impact on health care.

FIGURE 3 A view of the resources that could be made available to practitioners, patients, and the public via a national information infrastructure for health care.
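To make the kind of two-way connectivity sketched in Figure 3 a little more concrete, the fragment below is a minimal, purely illustrative sketch in Python of the interaction described above: an authorized emergency room clinician retrieving a patient's key clinical data from a centralized repository and then submitting a claim back over the same network. The repository, the identifiers, and the function names are hypothetical illustrations of the concept, not a description of any existing system.

```python
# Illustrative sketch only: a hypothetical centralized clinical repository
# reachable over the NII, supporting both retrieval and submission.

# A toy "national repository" keyed by a hypothetical patient identifier.
REPOSITORY = {
    "patient-0001": {
        "problems": ["type 2 diabetes"],
        "allergies": ["penicillin"],
        "medications": ["metformin"],
    }
}

CLAIMS = []  # stands in for a payer's claims intake system


def retrieve_summary(patient_id: str, requester_authorized: bool) -> dict:
    """Return key clinical data for a patient, but only to authorized users."""
    if not requester_authorized:
        raise PermissionError("access denied: authorization required")
    return REPOSITORY[patient_id]


def submit_claim(patient_id: str, diagnosis: str, amount: float) -> int:
    """Submit a claim over the network; returns a claim number."""
    CLAIMS.append({"patient": patient_id, "diagnosis": diagnosis, "amount": amount})
    return len(CLAIMS)


if __name__ == "__main__":
    # An emergency room far from home looks up the traveling patient ...
    summary = retrieve_summary("patient-0001", requester_authorized=True)
    print("Allergies:", summary["allergies"])
    # ... and later submits a claim through the same infrastructure.
    print("Claim number:", submit_claim("patient-0001", "dehydration", 450.00))
```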

EXAMPLE USES OF THE NII FOR HEALTH AND HEALTH CARE

A workshop sponsored jointly by the Computer Science and Telecommunications Board of the National Research Council and by the Institute of Medicine ("National Information Infrastructure for Health Care," October 5–6, 1993) offered a variety of possible uses of a national information infrastructure for health and health care. I close by summarizing them here since they follow naturally from the discussion above:

  • Information distribution and access, including bibliographic-retrieval software for searching the medical literature;

  • Population databases (regional, state, and national) with specialized interfaces that allow retrieval of subsets of patients meeting particular search criteria;

  • Access to longitudinal, sharable, standardized health records for individual patients, particularly important for providing continuity of care for our highly mobile society;

  • Telemedicine, especially to provide enhanced care and information access in underserved areas such as in rural regions or inner cities;

  • Personal health information systems, which would provide individuals, whether sick or healthy, with educational materials plus a personally maintained health database;

  • Databases for research and outcomes assessment, as previously described;

  • Systems to handle billing, finance, reimbursement, and eligibility determination;

  • Multimedia communication and videoconferencing;

  • Implementation of practice guidelines and outcomes management advice with specialized software at the point of care that allows access to the individual guidelines that may be available locally or over the network; and

  • Submission of clinical reports to federal agencies, such as reportable disease information to the Centers for Disease Control or adverse drug reaction reporting to the Food and Drug Administration.

CONCLUSION

The future world that I have described here (assuming appropriate safeguards to protect patient privacy and confidentiality of data) offers a set of features that many observers believe would not only be acceptable to practitioners but would also enhance their practices while helping to reduce some of the waste in our current health care system. Whereas other technologies have played a role in escalating the cost of health care in this country, there is reason to believe that computing technology, coupled with a standardized communications infrastructure, could actually eliminate waste and reduce the total health bill. To achieve these goals, however, various enabling activities are required. Among these are the need for improved national leadership and a greater understanding of the federal role in guiding the development of standards, the education of practitioners and others regarding the role of the NII for health care, the creation of incentives, and attention to how the health care system should reimburse those who use the information infrastructure in support of health care delivery. Several observers have noted the need for preemptive federal legislation in the areas of privacy and confidentiality in particular, but also to deal with authentication of electronic signatures to assure their acceptability for legal documentation. The goal of developing centralized longitudinal lifelong medical records that could be accessed by providers (when authorized to do so by individual patients; see IOM, 1994) requires that we address the need for national patient and provider identifiers while balancing that need against the civil liberties issues involved in linking health care identifiers with other identifiers used in our society.

There is clearly a major need for training and education, not only of health professionals but also of the public. In addition, we need a new cadre of medical computing professionals who not only understand the technological issues involved but also have practical experience in clinical settings (if not formal training in one of the health professions) so that they are knowledgeable about the realities of health care practice, ethics, and financing and can incorporate resulting sensitivities into the systems that they design and build. Finally, there is a major need for demonstration projects to help prove the technology's cost-effectiveness and its impact on the quality of care. One complicating factor in the evolution of this field has been the difficulty in developing demonstration projects with sufficient scope, penetration, and generalizability to assure that they can provide meaningful data regarding the technology's potential impact on cost and quality.


REFERENCES

American Hospital Association (AHA)/U.S. Department of Health and Human Services (DHHS). 1993. Toward a National Health Information Infrastructure. Work Group on Computerization of Patient Records. American Hospital Association, Chicago, April.

Audet, A., and H. Scott. 1993. "The Uniform Clinical Data Set: An Evaluation of the Proposed National Database for Medicare's Quality Review Program," Annals of Internal Medicine 119:1209–1213.


Computer Systems Policy Project (CSPP). 1993. Information Technology's Contribution to Healthcare Reform. Computer Systems Policy Project, Washington, D.C., April.


Greenes, R.A., and E.H. Shortliffe. 1990. "Medical Informatics: An Emerging Academic Discipline and Institutional Priority," Journal of the American Medical Association 263:1114–1120.


Institute of Medicine (IOM). 1991. The Computer-Based Patient Record: An Essential Technology for Health Care, R. Dick and E. Steen, eds. National Academy Press, Washington, D.C.

Institute of Medicine (IOM). 1994. Health Data in the Information Age: Use, Disclosure, and Privacy, M.S. Donaldson and K.N. Lohr, eds. National Academy Press, Washington, D.C.


National Library of Medicine (NLM). 1986. Long Range Plan on Medical Informatics. National Library of Medicine, Washington, D.C., December.


Shortliffe, E.H. 1994. "Clinical Information Systems in the Era of Managed Care," Transactions of the American Clinical and Climatological Association, Vol. 105, pp. 203–215.


Tierney, William M., Michael E. Miller, J. Marc Overhage, and Clement J. McDonald. 1993. "Physician Inpatient Order Writing on Microcomputer Workstations: Effects on Resource Utilization," Journal of the American Medical Association 269(3):379–383.


Can K-12 Education Drive on the Information Superhighway?

Robert Pearlman

If they build it, will we come? If government, the telephone and telecommunications companies, and the cable industry join to develop the backbone of the information highway and its local access ramps, will schools and school districts invest in the local telecommunications infrastructure that will ensure universal participation by the nation's over 40 million K-12 students and their teachers?

While the government's goal to "extend the 'universal service' concept to ensure that information resources are available to all at affordable prices" (IITF, 1993) may be a reasonable short-term policy for federal government action, it is at best only a first step toward the more appropriate goal of universal participation (Bolt, Beranek, and Newman, 1993) on the information superhighway by the nation's students and teachers.

The universal service goal, which borrows an analogy from telephone service, means that governments use regulation to require private companies with regional monopolies to provide the public with access to minimal services at affordable prices. Still, after 100 years of telephone service and over 60 years of regulation, there are few telephones in schools today. Few school districts in the country have seen the educational and communication services on the telephone network that would justify both the ongoing service costs and the up-front investment in a local school-based telephone infrastructure that would ensure universal participation by students and teachers. Many more factors than access will be needed to justify an investment in a computer-based telecommunications infrastructure that provides the pathway to universal participation on the information superhighway.

The national debate on education today stresses as its goals not just access to education but high standards of what students know and can do. It is active participation on the information superhighway that helps students develop the planning, interpersonal, informational, technological, and communication skills required of the knowledge-based citizens and workers of the 21st century. If such skills are the goal of long-term federal policy for K-12 education, then universal participation is the appropriate strategic goal of federal policy.

Neither federal nor state government action can assure K-12 education both universal access to and widespread student utilization of the information superhighway. Smart regulation and investment can, however, encourage organizational change in schools and the emergence of new educational service providers on the information superhighway. These new services will, in turn, provide the incentive for local communities to invest in the development of an adequate and sufficient local school and school district telecommunications infrastructure.

Government goals, at both the federal and state levels, must be to ensure K-12 access to the future national information infrastructure (NII), to provide equity for all students, and to support the development of information resources for education. Such goals, however, will not be realized
unless public and private initiatives combine to enable the development of a real economy of schools and learning enterprises on the information superhighway.

K-12 education is a totally different sector, economically, from banking, health, or libraries. Though educators today know well the effective learning activities that can take place on the information highways, there remain many roadblocks to significant K-12 traffic. K-12 is still very much a cottage industry of some 100,000 disparate school units, not easily subject to economic integration or rationalization. While banking, health care, libraries, and higher education have all invested substantially in data networks, K-12 lacks a telecommunications infrastructure at all levels—national, state, school district, and, most important, at the school site. While other economic sectors provide obvious markets that the NII will facilitate, such as money transfer, medical information and report sharing, information resources, research, and video on demand, K-12 opportunities will not emerge without structural changes in school organization and the parallel development of new educational enterprises that utilize the new telecommunications potential.

What will be the triggering event for schools and school districts to reallocate current expenditures and invest new dollars to equip their students and teachers with the vehicles, training, local roads, and on-line ramps to the superhighway? Can government investment and regulatory activity be designed to spur the organizational changes in schooling and encourage the growth of new educational service providers that, in turn, can combine to exploit the educational potential of the future information superhighway?

NO MYSTERY AS TO THE EFFECTIVE APPLICATIONS OF NEW COMMUNICATIONS TECHNOLOGIES

There is no mystery today among educators as to what the effective and powerful applications of communications technology in education are. Using technologies such as computers, CD-ROMs, videodiscs, phones, cable, broadcast, satellite, local area networks, and wide-area "internetworking," students and teachers today in exemplary technology-using schools can do their work, access information, communicate via electronic mail with each other and with mentors, engage in professional collaboration and student collaborative project work, go on electronic field trips, create virtual learning communities, and receive and use courses and minicourses from any number of educational service providers. Some exemplary uses include the following:

  • In July 1990 a teacher named Nikolai Yakolev got on a train with five other teachers and 20 students in Moscow and traveled 25 hours north to Kem, a town at the rim of the Arctic Circle. They went there because that was the best place on earth that year to view the solar eclipse. There, with only some remnants of the Red Army and some black flies as companions, they established a scientific plot and collected data before, during, and after the eclipse on the behavior of the animals and plants and the changes in light and temperature. They collected this data with probes connected to a portable personal computer (PC) and used Kem's only phone line to transmit the data to a computer at the Technical Education Research Centers (TERC) in Cambridge, Massachusetts, where it was stored and made available for schools throughout the world to compare with data from the forthcoming 1991 solar eclipse in Hawaii.

    This project, conducted by the Moscow-based Institute of New Technologies (INT), was a forerunner to the Global Lab Project. Boris Berenfeld, INT's associate director, subsequently came to the United States to work at TERC, where he now directs the Global Lab Project, a project highlighted recently in the Clinton administration's report, The National Information Infrastructure: Agenda for Action (IITF, 1993). Today, students in the Global Lab Project's 101 schools in 27 states and 17 countries establish environmental monitoring stations to study climate change, monitor pollutants, and measure radiation. Schools then share the data over the network with each other and with scientists to gain a global perspective on environmental problems.

  • Students at McKinley High School in Hawaii gave a presentation in Japanese to a real audience of students in Kanazawa, Japan, in real time. After they finished this hour-long presentation, the class in Japan gave its own presentation in English. Both presentations were transmitted inexpensively by video telephones through a project called Teleclass International. Students at each site prepared for their presentations for a few months in advance. The fact that they were able to link up with their peers in real time and see their faces made an enormous difference in the whole learning effort.

  • A group of 25 teachers met in Indiana in the summer of 1993 for a week-long school restructuring institute. They were researching exemplary nationwide school restructuring efforts and developing a resource report. Using a "Pic-Tel" videoconferencing system, they held daily two-hour seminars with school restructuring experts in Massachusetts, Rhode Island, California, and elsewhere. The external experts were able to talk with them and to transmit video showing scenes of innovative schools throughout the country.

  • Students in the Midwest have worked on scientific projects and have communicated via audio, video, data, and pictures over the Internet with scientists at various universities and in industry. The project, COVIS or Collaborative Visualization, uses scientific visualization software to enable students and consulting scientists to share and display their data and, at the same time, in a video window, engage in a real-time, two-way active discussion.

  • Via satellite links in museums, schools, and universities and through cable, over 1.5 million students and teachers from around the world have participated in real science through interactive video and audio links with scientists from the Jason project. They have observed scientific expeditions and communicated with practicing scientists, in real time, in the Mediterranean Sea and at the Galapagos Islands, the bottom of Lake Ontario, the site of the wreck of the Titanic, and the coast of Baja California. The Jason V Classroom Network planned to air Expedition Planet Earth Belize via satellite and the Mind Extension University cable network in the spring of 1994.

  • Students throughout the country watch daily 12- to 15-minute news programs through CNN Newsroom and Whittle's Channel One Program. CNN Newsroom schools are able to receive a daily textual curriculum guide via any kind of telecommunications service or via the cable line. In addition, the cable industry's Cable in the Classroom organization provides educational use of cable programs throughout the United States. Over that same cable line, using a special modem, students are able to receive an Associated Press-like service called Xpress Xchange, which gives them news, in a steady data stream, from all the major news agencies of the world. They can capture whatever stories they preselect by subject into their computer and save them to disk. To facilitate a debate about what is going on in Somalia, for example, they would collect stories from AP, Reuters, TASS, Xinhua, Agence France Presse, and so forth. The teams from the school would take different national perspectives in the debate.

  • Students take satellite and cable courses in foreign languages, mathematics and science, and the humanities. Distance learning, usually with one-way video and two-way audio but sometimes also with two-way video, provides courses not available to schools because of geographical isolation or economic limitation (OTA, 1989). Many of the organizations delivering these courses are Star School grantees, such as the Satellite Educational Resources Consortium, TI-IN United Star Market, and the Massachusetts Corporation for Educational Telecommunications (MCET). MCET also delivers curriculum modules that support teaching via satellite, Picture Tel, videodisk, and videotape.

  • Students in Fairfax County, Virginia, have taken electronic field trips to Germany since the fall of the Berlin Wall, to China, to Wales, to the countries of the former Soviet Union, and to NASA (National Aeronautics and Space Administration) space centers. These live, interactive visits use video and satellite communications, telephones, computers, and cable for local distribution. Fairfax has shared these visits with schools around the country.

These new learning activities and educational services on the information superhighway have expanded significantly over the past several years. Over 1.5 million students around the world have participated in the Jason project. Over 14,000 schools watch the daily 15-minute news program, CNN Newsroom, and 10,000 schools receive Whittle's Channel One. Together, these news programs reach nearly 25 percent of U.S. schools. According to the U.S. Department of Education, distance learning is now used in 6,000 schools for 20,000 elementary and 20,000 high school students. The TI-IN network, transmitting video via satellite from Texas, reaches 1,100 schools in 40 states (Bruder, 1991). MCET reaches 37 states with its curriculum modules.
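The "nearly 25 percent" figure follows directly from the school counts just cited and the roughly 100,000 U.S. schools used as a working figure elsewhere in this paper. The quick check below is an illustrative calculation only, and it assumes no overlap between the two audiences.

```python
# Quick check of the reach figure, assuming no overlap between the two audiences.
cnn_newsroom_schools = 14_000
channel_one_schools = 10_000
total_us_schools = 100_000        # the paper's working figure for K-12 schools

share = (cnn_newsroom_schools + channel_one_schools) / total_us_schools
print(f"{share:.0%}")             # prints "24%", i.e., "nearly 25 percent"
```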

These applications, whether carried on phone lines, cable lines, broadcast, satellite, or through the Internet, are the wonderful side of these communications technologies. The unfortunate characteristic that unites these examples, however, is that few have demonstrated independent commercial viability. Customers for these services include schools and state agencies. Most of the schools that receive these services pay a user fee that is not sufficient to cover costs. States, as the governmental unit responsible for schools, add support, as do foundations and federal government agencies like the National Science Foundation or the U.S. Department of Education.

The emerging industry of educational service providers is frail and fragmented. The real market, from schools, school districts, and states, is too limited today to support the range and quality of services that schools need. The Midlands Consortium, one of the early Star Schools grantees, shut down after the Star Schools money ended. Schools are not paying customers for the daily news programs. They receive the CNN Newsroom program, which is subsidized by Turner Broadcasting, by paying minimal or no cable use fees, and the Channel One news program by agreeing to have students watch two minutes of advertising. There are other factors, however, including the current organization of schools and the lack of a telecommunications infrastructure at school sites and school districts, that impede these developments. State and federal support will, of course, continue to be necessary to encourage this emerging industry, but we will have to look much more closely at the K-12 industry to see how market niches might emerge to foster the development of the new educational service providers.

SNAGS, BARRIERS, AND ROADBLOCKS ON THE INFORMATION SUPERHIGHWAY

Imagine what the information superhighway looks like to a teacher in one of our 100,000 K-12 schools in the United States. First, most teachers don't even know about it. Few have external phone lines, and most school district business offices will not approve the open-ended purchase orders needed for phone service. Some teachers have phone lines but have outmoded Apple IIe's or early IBM PCs as workstations, with interfaces that cannot support the newer software for easy navigation on the Internet. Some teachers have a phone line, a computer with an effective interface,
a modem, and a connection, through a local university, to the Internet. But while they can communicate with colleagues in Moscow, they can do little with the largely unconnected colleagues, students, and parents in their own district. Few teachers in the country use computers that are on local area networks (LANs) connected to wide area networks (WANs), as is common in higher education, business, and the research community.

Market Data Retrieval reports that 49 percent of school buildings have LANs. Most of these, however, are relegated to a single lab for limited instructional purposes, such as drill and practice or remediation. Occasionally, an entire building is wired for administrative purposes, including the reporting of attendance data and grades. While a survey by the California Technology Project found that 10.6 percent of 400 schools surveyed had LANs connected by modem to WANs, further research showed these connections were for administrative purposes and that there were "no connections between instructional LANs and instructional uses of wide area networks" (Newman et al., 1992).

Typically, schools and groups of teachers who might want to develop the local infrastructure for school networking do not have budget control at the school site level. They have difficulty paying for the phone lines. The barrier to putting the nation's 100,000 schools onto the Internet is not the capacity of the Internet, but the lack of local school and school district infrastructure and the difficulty in any locality of financing the end-user equipment and the LANs that represent the vehicles and local access roads to the information superhighway.

Many states are currently developing plans to give all schools access to the Internet. In most cases this means dial-in access by single computers and modems and use of data but not video communication. Some states, such as Iowa, plan a more extensive fiber optic backbone with county points of presence to which local schools can connect, permitting both data and video communications. This is the kind of infrastructure states should build. This still leaves, however, two tasks for local school districts. First, districts will have to run the fiber to the "curb" of the local school and, second, wire the school and build the in-school voice, data, and video distribution system. For all members of a school community to "drive" on the information superhighway, schools will need their own local telecommunications infrastructure, including an Internet server and router, cable and satellite connections, and internal voice, data, and video distribution, all of which requires significant investment. The most extensive wiring of schools in the country, by Whittle's Channel One news program, has, unfortunately, done little to meet this need. Channel One schools get a fixed satellite dish, receiving equipment, 19-inch monitors in every room, and a limited video distribution network that can send only two signals out to classrooms and receive from only one satellite. This is much too constraining, is not the real video distribution that schools need, and does nothing to set up a school-wide data communications network.

The costs of providing real access to all U.S. students on the future NII are significant. Educational researcher Henry Jay Becker estimates the annual personnel, hardware, and software costs at nearly $2,000 per student for developing expertise in technology use among teachers and providing students with a learning environment characterized by project-based learning, gathering information from diverse sources, and electronic communication with "students all over the world, with scientists engaged in real world inquiry, and with databases of enormous magnitude" (Becker, 1993).

That is nearly one-third the current annual cost per student in most U.S. school districts and would amount to approximately $90 billion in additional costs annually for all the nation's schools. Such an investment is unlikely to happen, except in wealthy districts or in schools and districts where there is a clear understanding that the up-front investment will yield real and rapid dividends, such as better and more appropriate student outcomes and economies in the costs of schooling.
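The $90 billion figure is simply Becker's per-student estimate scaled up to national enrollment. The rough check below assumes an enrollment of about 45 million students, consistent with the "over 40 million" cited at the start of this paper; the exact total depends on the enrollment figure used.

```python
# Rough check of the national cost implied by Becker's (1993) per-student estimate.
cost_per_student = 2_000      # dollars per student per year (Becker's figure)
enrollment = 45_000_000       # assumed; the paper cites "over 40 million" K-12 students

additional_annual_cost = cost_per_student * enrollment
print(f"${additional_annual_cost / 1e9:.0f} billion per year")   # about $90 billion

# "Nearly one-third the current annual cost per student" implies per-pupil
# spending on the order of three times Becker's figure.
print(f"Implied current per-pupil spending: about ${3 * cost_per_student:,}")
```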

The Telecommunications Industry and K-12 Education

In the real economy, building the home-school, school-community, and wide-area connections will require partnerships between the phone companies, cable companies, the schools, and the states to plan the local and regional infrastructures.

Cable shows how the real economy works. The spread of cable to U.S. schools has little to do with the schools as a current market for cable products. Instead, schools are the beneficiaries of a struggle for public policy leverage and regulatory advantage between the cable industry, the regional Bell operating companies, and the other telephone and telecommunication companies (telcos). Displays of public spirit and partnerships are the currency in this battle. To its credit, the cable industry competes by providing excellent educational programming to schools and, through Cable in the Classroom, an industry organization, obtains and identifies program rights for recording and use in schools. Cable in the Classroom and CNN have also done a wonderful job of encouraging regional and local cable multiple system operators to wire schools and their classrooms, not just run a cable drop to the school door, as was the practice until recently.

In the distance learning sector, there are many good examples of programming services that are encouraged by the federal government's Star Schools initiatives, but few, if any, of these are real commercial successes. Federal and state support is critical to nurture the development of these service providers.

By providing video equipment to schools in exchange for students watching two minutes a day of commercial time, Whittle's Channel One program tapped an economic niche by giving advertisers access to a well-defined teenager audience. For this service, advertisers pay $157,000 a half-minute to reach 8 million students, yielding Channel One a daily gross of $628,000 (Kubey, 1993). This has been very controversial in school districts and states all over the country, but it did demonstrate that there was a small niche in the marketplace. It also shows the limit of that niche, as there is hardly any room in the school schedule to expand that kind of advertiser-supported educational programming into schools.
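The daily gross cited from Kubey (1993) is consistent with two minutes of advertising sold in half-minute slots, as the short calculation below illustrates; the slot structure is an assumption inferred from the figures in the text.

```python
# Arithmetic behind the Channel One figures cited from Kubey (1993).
rate_per_half_minute = 157_000   # dollars per half-minute of advertising
slots_per_day = 4                # two minutes of ads = four half-minute slots (assumed)

daily_gross = rate_per_half_minute * slots_per_day
print(f"${daily_gross:,} per day")   # $628,000 per day, matching the figure in the text
```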

Many companies are now eyeing the potential of the NII as the delivery system for future electronic and video educational services such as customized curriculum, thematic units, customized textbooks, courses, modules, and electronic field trips. Some of these companies come out of the educational technology sector, like the Educational Management Group, but most represent new alliances of companies in the publishing, printing, cable, and telecommunications sectors.

The Organization of Schooling

"Informating" is organizing to exploit information technology.

—Shoshana Zuboff, In the Age of the Smart Machine (1988)

Besides the substantial cost of building the local school telecommunications infrastructure, there is also the problem of the organization of schooling. Schools are not really organized to exploit this new world. Schools and school districts need to change profoundly in order to exploit these new technologies, to "informate." For students of today to become the citizen knowledge workers of tomorrow's global economy, and for students to acquire the high-performance skills set forth in the SCANS1 report, What Work Requires of Schools (SCANS, 1991), schools will have, in a sense, 24-hour school days and 365-day years, operating around the clock by telecommunications.

Education in what James Mecklenburger calls a global village school (GVS) would be appropriate to both today's and tomorrow's world. According to Mecklenburger, "A GVS will be
concerned round-the-clock for learning. A school building will not define the school; a building may be the 'headquarters' of a community of learners but not the sole location. Electronically, through voice mail and various networking schemes, there are many ways even today that home, school, and other institutions near and far can be intimately connected …" (Mecklenburger, 1993). Changing schooling requires the changing of all elements of school practice and organization simultaneously, including "school organization, curriculum, assessment, technology, and the learning environments" (Pearlman, 1993).

Students have to be the real workers in education, the ones who actually produce and do the work. There is a very good vision of how schools might be organized in British management expert Charles Handy's book, The Age of Unreason (1989). Handy describes how the "shamrock organization," or what others are calling a virtual corporation, would operate in education. It should be stressed that "virtual corporations" are not synonymous with "virtual reality"; instead, they are real corporations that are becoming leaner, more engaged in their core activity, and outsourcing much of their work to other companies or to consultants. It is in this sense that the "virtual school" may emerge, functioning, as Handy says, "as a shamrock with a core activity and everything else contracted out or done part time by a flexible labor force. The core activity would be primarily one of educational manager, devising an appropriate educational program for each child and arranging for its delivery." Much of that delivery could be on the future NII.

K-12 education is in much the same place today as the service industry was in the mid-1980s. Despite some of the great capabilities available in schools today (including single desktop machines with associated communication links), critical mass is nowhere in sight. Schools and school districts in the United States are dotted with technology-using teacher pioneers, but widespread utilization and its associated productivity gains remain distant and elusive. The school-site communications infrastructure is not in place and, even if it were, schools are not organized to exploit it.

If schools are to change in the direction of shamrock organizations, they will need to outsource those learning activities not provided at the school site. This means that new educational service providers will have to arise locally, regionally, nationally, and internationally to deliver courses, minicourses, project activities, curriculum, materials, and apprenticeships to meet the range of learning needs and learning styles in the shamrock school. Enabling this new industry of education providers is, of course, the potential of the information superhighway, or NII.

K-12 Education and the Real Economy

Nobody heard the story on the news wires that the Los Angeles Unified School System bought out the San Francisco Unified School District. No one heard about the Whittle bid for majority control of the New York City Public Schools. Nor the merger of the Washington, D.C. Public Schools with those of Fairfax County, Virginia, and Montgomery County, Maryland. But everyone has heard about the rash of mergers in the health industry, in banking, in telecommunications, and in many other sectors of the global economy. No one heard about any of these kinds of economic developments in K-12 education because it is a distinct kind of economic and organizational entity, not easily subject to economic rationalization. While schools are governed by local, state, and national entities, in no sense is any of these a system in the way that banking enterprises are integral national or global economic units. Chris Whittle's stalled efforts to establish a national network of Edison schools did aspire to such systemic economic and organizational integration. Despite its failure to raise sufficient investment funds, the Edison project does raise the proper question of how, whether private or public, to put together a network of schools, whether in one location or around the country, that actually work together and produce some savings by sharing curriculum, programming, and teachers through a national delivery system of educational services.


The Clinton administration's NII initiative aims to loosen the regulatory noose around phone and cable companies in order to get the private sector to build the information superhighway, yet retain regulation as a way to ensure a "public right of way on the information superhighway" for schools, libraries, and museums (New York Times, 1993).

Schools, of course, will need much more than a "public right of way." "Universal access" to phone service has not led to phones in schools nor to significant investment in teacher and student workstations, school and district LANs, Internet nodes, and training among K-12 schools and school districts, as it has in higher education and in the research community. The "public right of way" to the information superhighway only deals with a portion of the costs. What will be the triggering event for schools, school districts, and local communities to reallocate current expenditures and invest new dollars to equip their students and teachers with the vehicles, training, local roads, and on-line ramps to the superhighway?

GETTING TO CRITICAL MASS: BUILDING THE 21ST-CENTURY INFRASTRUCTURE FOR SCHOOLING

The first person with a telephone gets little productive use if his or her neighbors and family are without phones. The first person with a car in a remote region doesn't get much local benefit until the neighbors also acquire cars and local society and government build roads for the local infrastructure. So, too, in schools, things don't take off until most teachers and most children are using technology to do their work, when they can communicate with each other and with parents and community-based mentors as easily as pioneer technology-using teachers and students can today share data and reflections with counterparts in Moscow on the Global Lab Project. Schools today lack the critical mass of skilled information highway "drivers" that will lead to organizational changes in schooling and to a new industry of educational service providers. Besides a lack of physical infrastructure in schools, school districts, and states in terms of local-area and wide-area networks, another serious problem is that schools today are filled with outmoded technology equipment. More than half of the computers in schools today are five or more years old. According to Quality Education Data (QED), less than 30 percent of the computer equipment in schools can support graphical user interfaces (GUIs) such as Windows or Macintosh. The GUI is critical to today's applications, whether in the creation of multimedia documents or in "internetworking" around the globe.

To realize schools for the 21st century requires that state and federal governments develop the policies and investment that will spur local school and school district investment in local school network infrastructure and that will assist schools in the process of envisioning, reorganizing, and redesigning themselves. Local communities will not make the enormous local investment in a school telecommunications infrastructure unless there is a clear public understanding of its benefits. Despite the acquisition of some 4 million microcomputers in U.S. schools, technology has not resulted in many economies in the cost of schooling. In K-12 education, about 80 percent of school budgets have to do with personnel, mostly teachers. Some private companies that are making a business of managing schools are realizing economies on custodial or food service personnel, but significant economies in schooling would require reductions in teacher personnel. This has not happened as yet, even in technologically rich schools.

The only way to realize such economies is by totally reorganizing schooling and making a significant up-front investment in communications technology. What would it take for schools to work effectively with fewer teachers? Students would have to be able to work much more on their own, with teacher advisors managing their education with the support of external mentors and service providers, as in Handy's shamrock organization. To support this, schools would have to establish up front a communications infrastructure with a local area network in the school connected to the
Internet and with connections throughout a wired local community. With that kind of structure in place, it is possible to imagine that significant economies could occur in school personnel. Schools today, however, face the dilemma of investing significant funds up front and incurring ongoing costs for a communications infrastructure before sufficient cost-efficient and effective educational services are available on the information highway.

This, of course, is the critical mass, or chicken-and-egg, problem. Without widespread local school infrastructures, no one will invest in creating new educational service companies. Without widespread and cost-efficient educational services, local school districts won't invest in local school infrastructures. Solving the critical mass problem requires the progressive and simultaneous development of:

  • The information superhighway and the "public right of way" (highway and on-ramps);

  • A new industry of educational service providers that deliver distance learning, curriculum, educational resources, project-based learning activities, and so forth (program);

  • A reorganized system of schooling and schools (organization); and

  • The local school telecommunications infrastructure (local roads and vehicles).

There are many attempts today to reinvent and redesign the American school. These include efforts by school districts to build new kinds of schools (Pearlman, 1993), the efforts of the design teams sponsored by the New American Schools Development Corporation, schools launched by charter school legislation in many states, and private efforts like the Edison project. All of these attempts focus on the question of how to develop schools that have student outcomes that are both better and more appropriate for the 21st century, that are more economical, and that are each tied to a system of schools, whether in one location or around the country, that actually work together and produce some savings by sharing curriculum, resources, teachers, project activities, and so forth.

Today, K-12 education, despite the governance of school districts and state systems, is effectively a cottage industry with 51 state units, 15,000 school district units, and 100,000 school units that exhibit little or no economic integration. These school districts and state systems will have to evolve in a way that goes far beyond their current governance and regulatory role to use the new information infrastructure to provide services and bring about efficiencies in the way that a global corporation like Citibank or a large health maintenance organization utilizes networks to rationalize and economize on the delivery of services.

As a nation, we will need many experiments over the next several years in the new design of schooling and the utilization of telecommunications network services in order to understand how to exploit their potential. These experiments will come from both government investment and private and public partnerships.

Although the current Internet owes much of its development to prior governmental support, the NII of the near future will be largely a private venture and will swamp the current Internet in size and power. Universal digital access and the "public right of way" for schools, libraries, and museums will come about less through government investment than through private development.

The key sectors of the telecommunications industry—cable and the telcos—are actively engaged with school partnerships in their contest for public policy leverage. Smart schools, school districts, and states will use these partnerships to show what can be done effectively and productively in K-12 education with an advanced communications infrastructure.


GOVERNMENT INVESTMENT AND REGULATION TOOLS TO GET K-12 TO CRITICAL MASS

Government has a key role in enabling the development of an information superhighway that fosters the simultaneous development of new school organizations and the educational service providers to serve them. But federal or state investment can never be of sufficient scale to provide local schools and school districts nationwide with the school-site workstations, LANs, training, and network nodes needed to make them full partners on the information superhighway.

Federal and state policy initiatives should be aimed at overcoming the critical mass problem so that local schools and school districts see the benefits of investing in the local infrastructure. Today, few classrooms have phones. Tomorrow's schools will have digital links not just between Denver and Moscow but also, and more importantly, between the local school and its own community, parents, students, and teachers, so that students will be able to communicate with teachers, parents, community mentors, and international mentors and resources and share messages, ideas, data, and multimedia student work. To raise the funds needed to build the local infrastructure, local schools and school districts will have to understand the benefits of these information-age skills and activities and be able to persuade the local citizenry to invest, through special bonds and appropriations, in such a capability.

Federal and state government policy can support these developments. Government can be a customer of new services and can invest directly in, and promote through grants, the following:

  • Public educational information resources;

  • Long-range planning by states, school districts, and schools;

  • Staff development;

  • Software and interface development for internetworking;

  • Project-based learning modules;

  • New educational enterprises; and

  • Research on the effectiveness of the new learning activities and enterprises.

The federal government can increase existing support through direct NSF or National Telecommunications and Information Administration grants, through defense conversion funds or the High Performance Computing and Communications Initiative, the Star Schools program, and the U.S. Department of Education. On the regulatory side, tariffs could be adjusted and targeted subsidies established to give school consumers less costly access to the information highway. The state of Georgia gives schools the same below-tariff rate as state agencies and, together with a surcharge of less than 1 cent, has generated $35 million annually for an education technology trust fund (Kessler, 1993).

Government must recognize that the most significant investment will come from the private sector, when private interests coincide with the public interest, and not because private sector subsidies are specifically required by regulatory rules. Linkage requirements that oblige builders of the information superhighway to give schools a "public right of way" can, however, be used smartly to foster win-win public-private partnerships between schools and the telephone or cable companies. K-12 education also needs to develop partnerships with other public service sectors, including libraries, museums, social services, and health care; to press for government action; and to lobby effectively with the telecommunications and cable companies.

Building the information superhighway, the Internet, and the future digital NII is not enough. That is just infrastructure. The key development, besides national, state, and local infrastructure, is the availability of quality educational programs and services.

Few technologies have made any impact on the organization of K-12 education, whether radio, broadcast TV, satellite TV, cable TV, or computers. Accompanying these developments, however, has been a growing number of public and private, but free, programming services of increasing quality. With the development of the NII and "shamrock schools," one can envision the parallel development of local, regional, and national program providers that are part of the real economy, that is, neither publicly supported nor advertiser supported but instead paid for through service fees by schools that have been able to reorganize and reallocate their expenditures. These services would include direct instruction courses, minicourses and modules, thematic blocks for project-based learning, and an array of curriculum materials and information resources for school and home use. They could also provide the basis for companies that manage networks of public or charter schools dispersed throughout the country.

A Seat at the Table

Like other economic sectors, K-12 education is faced with a critical mass problem. While commercial viability of NII-based services is evident in such sectors as banking, financial services, and new video-on-demand services, K-12 education will require substantial state and federal support to nurture the development of school-site telecommunications infrastructures and new educational services. K-12 education needs a seat at the table in national efforts to develop the NII. To be effective, K-12 education will have to speak with a more unified voice as a sector and in coalition with other public service sectors. The federal government could aid that process by financing national teleconferences for the education sector to learn about the issues surrounding the NII and build its collective voice.

State and federal governments are critical players in promoting universal access to and widespread student utilization of the future NII. Through investment and regulation, states and the federal government can promote the new school organizations, the new educational enterprises, and the local school-site telecommunications infrastructures that will make the information superhighway a roadway for today's and tomorrow's students.

REFERENCES

Becker, Henry Jay. 1993. "A Truly Empowering Technology-rich Education—How Much Will It Cost?" Educational IRM Quarterly (Fall).

Bolt, Beranek, and Newman Inc. 1993. Getting the NII to School: A Roadmap to Universal Participation. Bolt, Beranek, and Newman, Cambridge, Mass., December.

Bruder, Isabelle. 1991. "A Guide to Distance Learning," Electronic Learning (November/December).


Handy, Charles. 1989. The Age of Unreason. Harvard University Press, Cambridge, Mass.


Information Infrastructure Task Force (IITF). 1993. The National Information Infrastructure: Agenda for Action. Information Infrastructure Task Force, Washington, D.C., September 15.


Kessler, Glenn. 1993. "Treat Schools as State Agencies for Telephone Rates," Inventing Tomorrow's Schools (May/June).

Kubey, Robert. 1993. "Whittling the School Day Away," Education Week (December 1).


Mecklenburger, James A. 1993. "To Start a Dialogue: The Next Generation of America's Schools," Phi Kappa Phi Journal 73(4):42–45.


Newman, Dennis, Susan Bernstein, and Paul A. Reese. 1992. Local Infrastructure for School Networking: Current Models and Prospects, BBN Report No. 7726. BBN Systems and Technologies, Cambridge, Mass., April.

New York Times. 1993. "Gore Views the Data Highway," December 22.


Office of Technology Assessment (OTA). 1989. Linking for Learning: A New Course for Education, OTA-SET-439. U.S. Government Printing Office, Washington, D.C., November.


Pearlman, Robert. 1993. "Designing the New American Schools," Communications of the ACM 36(5):46–49.


Secretary's Commission on Achieving Necessary Skills (SCANS). 1991. What Work Requires of Schools. U.S. Department of Labor, Washington, D.C.


Zuboff, Shoshana. 1988. In the Age of the Smart Machine: The Future of Work and Power. Basic Books, New York.

NOTE

1. SCANS (1991) defines a framework that can serve as the foundation for the establishment of education goals and objectives for future U.S. citizens. Whether they work on a shop floor or in an executive suite, they will need to master the following competencies and foundation skills:

COMPETENCIES:

1. Resources: identifies, organizes, plans, and allocates resources.

2. Interpersonal: works with others.

3. Information: acquires and uses information.

4. Systems: understands complex interrelationships.

5. Technology: works with a variety of technologies.

A THREE-PART FOUNDATION:

• Basic skills: reads, writes, performs arithmetic and mathematical operations, listens, and speaks.

• Thinking skills: thinks creatively, makes decisions, solves problems, visualizes, knows how to learn, and reasons.

• Personal qualities: displays responsibility, self-esteem, sociability, self-management, integrity, and honesty.


Future Roles of Libraries in Citizen Access to Information Resources through the National Information Infrastructure

Clifford A. Lynch

INTRODUCTION

This paper critically examines several popular assumptions about the national information infrastructure (NII) vision that I believe are not entirely consistent with the evidence available to date and current trends. These assumptions include the following:

  • Ensuring universal access to the NII and the resulting benefits are largely synonymous with ensuring universal connectivity, as has been the case with the electrical power grid or the telephone system. If such universal connectivity can be provided, immediate benefits in terms of improved access to information resources and services will follow.

  • The NII will greatly increase the public's (free) access to information. The public will access digital libraries that will supplant the functions of traditional (physical) libraries. The digital libraries on the NII will be enormous storehouses of information that will include (but not be limited to) much of the existing literature of the world. Forward-looking traditional libraries are already transforming themselves into such electronic storehouses.

  • In the new environment of the NII, libraries will continue (and expand) their current role as the key providers of information to the public (regardless of economic status). Libraries will be the mechanism through which we will balance the increased commodification of information with the public's need to have access to information.

  • Libraries will benefit greatly from the NII. The new technology will facilitate efficiencies in resource sharing that will help to alleviate the budget crises facing libraries of all types and will improve the quality of library service nationwide, in part by reducing geographically based inequities among libraries.

UNIVERSAL ACCESS IN THE NETWORKED INFORMATION CONTEXT

The NII enterprise has two components. One component is the upgrading of the existing national telecommunications infrastructure to incorporate very high speed computer-communications facilities that will efficiently (and hopefully cheaply) move large amounts of digital information, including digital video. This is the realm of integrated services digital network, asynchronous transfer mode, fiber to the curb and the home, "video dial-tone," and similar technologies. The issues are
numerous and are well covered elsewhere in this volume; they include such basic public policy concerns as how to encourage the rapid deployment of the infrastructure on a broad basis, how ubiquitous the new infrastructure must be, and regulatory issues related to competition and monopoly.

The second component of the NII initiative is the set of applications that will be enabled by this upgraded telecommunications infrastructure. These applications must drive and justify the initiative: the new applications will engage and excite the public to support the effort. Indeed, the initiative has been justified with visions of a citizenry informed and empowered by greatly improved access to information and students—be they children in rural areas or adults engaged in lifelong learning activities—exploiting ubiquitous access to the world's literature to facilitate their learning. These visions have been eloquently sketched not only by the research and education community but also by political leaders such as Vice President Gore. They are indeed worthy goals for public policy, and, technically, the visions are challenging but not unrealistic. From social, legal, political, and economic perspectives, however, they are much more problematic. The difficulties are not widely understood and have not been well addressed.

Curiously, services such as person-to-person electronic mail—which are quite similar to the services offered by voice telephony, and for which we do have some understanding of demand, usage, and economic basis—are not often cited as justifications for moving forward with development of the NII, either by analogy to current phone service as a universal citizen right or as an example of a clearly viable commercial service that could be enabled by a ubiquitous NII. Perhaps they are not enough: not sufficiently compelling in their impact, not exciting enough as a potential marketplace, not challenging enough in terms of the base telecommunications technology necessary to support them.

The new telecommunications infrastructure will enable other information-content-intensive applications that may create entire new business sectors. These are more compelling to the corporate world and are not often mentioned as public policy justifications—in particular, the creation of a marketing paradise on a previously unimagined scale. Such enterprises will succeed or fail based primarily on the marketplace; their success or failure is not a public policy issue. My opinion, which is somewhat at odds with the view represented by the enormous investments currently being made in the alliances and acquisitions involving regional Bell operating companies, media companies, and cable television companies, is that the marketplace viability of many of these commercial services (and the time frame in which they will become profitable) is still uncertain.

In framing access to information services and resources through the NII as a public policy issue, we are fundamentally in conflict about what we are trying to accomplish. We welcome the economic growth that sales of information access through the new networks are likely to represent. For those who can pay, it seems clear that the NII can only increase the range of information that is accessible for purchase, since it will add to all of the current sources of information, and the NII forms a hospitable environment for a wide range of fundamentally new types of information content and information services.

But as a society we have beliefs about the rights of citizens to have access to a wide range of information and the importance of such access in contexts like education. The NII is particularly appealing because it offers a technological environment that can expand such public access to information to support the visions of greatly improved education discussed earlier and can facilitate equal access to knowledge by citizens in rural as well as urban areas. In today's world we welcome and support the diversity represented by libraries, bookstores, broadcast television and radio, and printed newspapers and magazines sold both at newsstands and by subscription. While all of these enterprises compete with each other to a limited extent, they have come to coexist fairly comfortably (though with continuing minor conflicts around the boundaries). In a new world of widespread distribution of electronic information through networks (under business terms we can only guess at today), it is unclear that all these institutions will continue to maintain independent existences. They
may converge and coalesce in complex, competitive ways. With the exception of libraries, all of the other information providers and distributors mentioned above are basically profit-oriented enterprises; even exceptions such as scholarly societies compete with public and research libraries, which usually are publicly supported nonprofit organizations. The trajectory of competition and convergence among the for-profit enterprises is primarily a business question (though not without some public policy implications in terms of, for example, ensuring diversity). The future role of libraries in the NII environment may represent a conflict of public- and private-sector interests. The stakes in this conflict are potentially high, not only in impact on private-sector revenues but in terms of our future society.

The conflict is not just over direct revenue—it is over the library's purchasing of information and giving it to its user community, thus depriving the information provider of the opportunity to collect revenue from sales of the information. The library is also preventing the information provider from establishing a relationship with each purchaser, which can facilitate the collection of marketing and demographic data (valuable in its own right). The library continues to provide its patrons with privacy through anonymity in a marketplace that is increasingly moving toward the massive collection of data about the behavior and interests of customers.

Beyond the admittedly compelling visions of public good that are being presented to justify the NII program, our government seems to be establishing some fundamental policy objectives for its development, such as very broad access to this new infrastructure. The need for "universal" access to redress the increasing gap between the information "haves" and "have-nots" has become an issue. To some extent this universal service objective makes explicit the universal service goals that have been evolving for voice telephone service over the past few decades and extends the entitlement for basic telephone service to an entitlement not only to NII connectivity but also to some poorly defined set of information services on the future network. The discussion of policies and implementation strategies for the NII to address the universal service objective has really focused on how to achieve large-scale connectivity; it has not addressed what, if anything, this connectivity will provide to the user other than an opportunity to access whatever services may be generally available on the NII, on whatever economic terms these services may be offered (which may include free services supported by advertising and some set of free or very low cost "public access" services).

The NII is something completely new. In our quest to gain insight into the development of this infrastructure and to continue to inform our policymaking and planning, we have studied historical analogies. These include the telephone system, the electric power grid, and various transportation systems such as highways and railroads. These analogies are often confusing and misleading because they fail to capture the essential character of computer-communications networks, such as the Internet, as the new medium of communication and information dissemination. If applications for the new telecommunications infrastructure are to be driven by information, one must carefully evaluate the societal and legal structures that have developed around ownership of and access to information resources. I think that access to information services and resources on the NII will quickly come to dominate issues about connectivity in the universal service debate.

Consider the implications of universal access to voice telephony. Roughly speaking, such universal access means that everyone has a phone, can receive calls, and perhaps can make local calls. Access to additional services is based on the ability of the end user to pay. Universal access does not guarantee anything other than the most limited enjoyment of the intellectual property of others. Universal access to the services that we expect of the future NII (and indeed the services we have used to justify its development) is fundamentally different and goes beyond simple connectivity. It involves universal access to information resources, which (except for public domain and other freely accessible information resources) are someone's intellectual property. The widely ignored precedent here is advertiser-supported broadcasting (both television and radio), which provides the public with access to a rather limited set of information resources and artistic works under a very
peculiar "contract" that requires the public to endure a certain number of minutes of advertising (or pledge drives in the case of public broadcasting) along with the viewed content.

Indeed, in the NII context we will be challenged as a society to define a base level of information resources that we believe must be available to all members of our society, regardless of the ability to pay. Access to some of these resources will probably be subsidized by advertisers; others will be made available directly by local, state, and federal governments; still others will be provided by a range of nonprofit groups, including, perhaps, some future "Corporation for Public Network Information" and its local "Public Network Information" affiliates. Access to those resources not provided through other means will have to be subsidized by government, just as it subsidizes public libraries to provide access to a good deal of information not available through other mechanisms.

LIBRARY ROLES IN UNIVERSAL ACCESS TO ELECTRONIC INFORMATION

Libraries defend intellectual freedom, the right to privacy in accessing information, and the public's right to have access to a wide range of information and viewpoints. Over the past century libraries have been effective advocates and defenders of these values. Today libraries successfully provide access to a very broad and diverse base of information. Individual units of information such as books are fairly inexpensive, and a wide range of materials can be acquired for a library's collection. The interlibrary loan system essentially permits a library to offer its patrons a range of information that goes far beyond locally held materials, which greatly expands the scope of materials available to every library's users. Ensuring unlimited public access to a given electronic information resource, on the other hand, appears to be a very expensive proposition, and for reasons to be described later in this paper, it appears that such electronic resources will not be able to be shared among libraries through interlibrary loans. As we look toward the issues of information access on the NII, one danger is that the goal will be reformed from an assurance that the public has access to a broad range of information to a definition of specific information resources that the public must be able to access. Such subsidized access may be limited to a few selected electronic information resources in much the same manner as school systems today acquire textbooks and the government creates and develops reports on specific topics of public interest. In the future the government may develop and mount electronic information resources or contract for these resources from the private sector for public use. In the future, support of wide-ranging intellectual curiosity and open-ended exploration of diverse viewpoints may no longer be part of the publicly supported program. One of the most troubling possibilities is that the government will select a base set of information resources to be guaranteed to the public, rather than simply subsidizing some amount of access to the broad base of "commercially published" information.

To be sure, government-subsidized information sources will not be the only voices to be heard for free on the NII. This information will be complemented by a wide range of information offered by various public service organizations and indeed by any organization with a message to deliver and the funds to support the delivery of that message. (Organizations will include advertisers; the difference between advertisers and public service organizations providing information grows more and more blurred.) Further, the cost to an individual or organization offering information to the public through the NII should be much lower than the analogous costs for print distribution or dissemination of information through the broadcast media today. But a reliance on these various free sources of information does not seem to be a sufficient substitute for the broad access to published literature provided by libraries today. Libraries not only provide access to a wide range of current information but they also conserve and provide access to a cultural and historical record. Those providing "free" information through the network will neither maintain such a record nor guard its integrity, accuracy, and balance.

Ensuring access to an appropriate range of information resources and services is not the only issue, although it may be the most difficult long-term problem. Connectivity issues must be addressed as well. People will need to invest in technology to access the NII, but many will be unable or unwilling to make such investments. Libraries may be a natural place to make available the information technology the public needs to use the network, at least at some minimum level of functionality. I also anticipate the development of a number of personalized direct information distribution services that assume their users have personal "information appliances" of some type. The ability to exploit these new types of information services will be limited in a public access environment. Such an environment will support today's user-driven searching type of information access but not the emerging model that includes software agents continually working on a user's behalf to ferret out and sift information. Users in rural areas may face continued problems with network access or with the cost of that access, depending on how various policy decisions shape network deployment and economics. And training and user support issues, particularly for those who have not grown up with the technology, will be of major importance. Meaningful universal access will require a major education and outreach program. Such a program will require not only increased emphasis on information technology literacy as part of the basic K-12 curriculum but also a means of reaching the adult population.

LIBRARIES, INTELLECTUAL PROPERTY, AND ELECTRONIC INFORMATION IN A NETWORKED ENVIRONMENT

The majority of information has never been available for free use. Someone owns it. It is worth noting that even for works that are hundreds of years old and out of copyright, publishers have used a great deal of ingenuity and have gone to a great deal of trouble and expense to be in a position to claim ownership (through edited editions, translations, packaging of out-of-copyright works with newly commissioned introductions and notes, and so on). And there is a substantial body of information that is created or compiled by the government at various levels that is "owned" by the public. I have no doubt that the Internet today and the NII tomorrow will tremendously facilitate public access to these important information resources. Legislative initiatives and projects by the national libraries and federal agencies are making many of these information resources available today. But there is also a great reluctance to see the government expand its role as a creator and compiler of information very far. I would speculate that there would be little support for a government-sponsored encyclopedia to compete with the private products available today. Most of the information that the government collects, creates, or compiles is closely related to its operations or to specific legislative mandates and public policy objectives.

For most of the 20th century, libraries have provided the public with "free" access to a great deal of this privately owned information. (More precisely, the public has funded libraries; these libraries then acquire information and make it available to the public without charge.) The basic enabling mechanisms have been the copyright law and the doctrine of first sale: The library purchases a copyrighted work. While the library cannot copy it, it is free to loan it to anyone (including patrons and other libraries). The impact of the role of libraries on the revenue to rights-holders has been limited by the fact that only one person can borrow (read) a copy of a work at a time, plus the fact that material purchased by a given library is primarily accessible only to those in fairly close geographic proximity to that library. While in theory the work is available nationwide through interlibrary loan, realistically the delays and inconvenience of moving material physically from one library to another have been such that use of library materials, just like use of the libraries themselves, has had a strong geographic bias. It is only someone with a very strongly felt and typically rather specialized information need who will pursue interlibrary loan from a public
library to a research library, and research library to research library interlibrary loans are typically in support of a relatively small community of scholars.

Further, the cost of travel to a distant library has historically been much greater than the cost of acquiring a given unit of information. With the exception of a few scholars who needed to use particularly rare, fragile, or valuable material that did not travel through the interlibrary loan system, people would not travel far to use material in a library. Again, this served to limit the economic effects of libraries providing access to material on the rights-holders.

We are already seeing this set of assumptions change with electronic information resources. In many cases the cost of gaining access to electronic databases is so high that users will travel to a research library that offers such databases to its user community (which typically includes walk-in patrons as well as the institution's own members, whether in the library or accessing the resources remotely) and use them there rather than pay the prices charged to the general public. We still have a very limited understanding of how this shift is really changing revenues to rights-holders; it is unclear to what extent users are physically traveling to libraries with electronic information resources to avoid paying, as opposed to making the journey to get access to a resource that they simply could not afford under any other circumstances.

Congress has also played a role in making sure that information is available to the public. The copyright laws, following the constitutional mandate that Congress "promote the progress of science and useful arts," have crafted a balance between the need to reward rights-holders for creating intellectual property and the needs of society to make use of this property. The elements of this balance include limited duration of copyrights and specific exemptions for fair use, teaching, and preservation of the nation's intellectual heritage under certain circumstances. Some of these specific provisions in the copyright law, as supported by the interpretation of the courts, recognize that there are differences between information published primarily for mass markets and information published primarily to support scholarly communications.

In an environment of networked access and electronic information this entire framework changes. Electronic information is virtually never sold; it is licensed, and its use is regulated by a contract between the rights-holder and the user that typically limits access to the information to specific uses (and perhaps even for specific purposes), to a specific, defined, limited user community and for a limited duration of time.

Two of the major factors in determining the price of a license for a given resource are the size of a library's user community and (sometimes) the maximum number of patrons that will be permitted to access the resource concurrently. Costs are high; a system like the University of California pays about $100,000 a year for the right to provide its user community with access to a large citation database or the full text of a few hundred periodicals. In many cases the electronic resources are unique offerings (and critical to scholarship in a given discipline), and there is little competition to drive prices down. Many of the public policy compromises that characterize copyright law are not usually accommodated in licenses for electronic information resources. No one knows how such compromises and the broader objective of the free flow of scholarly information can be effectively implemented in the electronic environment without unacceptable damage to the revenues of the rights-holder, barring large-scale, systemic restructuring of the scholarly publishing system. With mass market information (which includes such things as newspapers, which are also important to the scholarly community), there is even less experience and less clarity regarding terms and impact of licenses for electronic information. Very little of this mass market material has been licensed electronically to institutions such as libraries except in stand-alone formats such as CD-ROM disks.
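
To make the two pricing factors named above concrete, the following sketch computes a hypothetical annual license cost from the size of the authorized user community and a concurrent-use cap. The pricing formula and all of the figures except the roughly $100,000-a-year example cited in the text are illustrative assumptions, not actual publisher terms.

```python
# Toy model of the two license-pricing factors discussed in the text:
# community size and a cap on concurrent users. Every number except the
# ~$100,000/year reference point is a hypothetical assumption.

def annual_license_cost(community_size, concurrent_cap,
                        base_fee=20_000, per_member=0.25, per_seat=500):
    """Base fee plus charges that grow with the size of the authorized
    user community and with the number of concurrent seats."""
    return base_fee + per_member * community_size + per_seat * concurrent_cap

# A large university system: a big but bounded community, modest seat cap.
campus = annual_license_cost(community_size=300_000, concurrent_cap=16)
print(f"university system:     ${campus:,.0f}/year")   # about $103,000

# A public library that must cover an entire county's residents.
county = annual_license_cost(community_size=900_000, concurrent_cap=16)
print(f"county public library: ${county:,.0f}/year")   # about $253,000
```

Under any model of this general shape, a library whose community is defined as everyone in a county or city faces a far larger bill than a campus serving a bounded population, which anticipates the pressure on public libraries discussed later in this paper.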

A few other points should be stressed. Libraries cannot begin wholesale digitization programs to convert their existing print collections into electronic form; it is not permitted under existing copyright law for material that is still under copyright, and most of the most heavily used information is still under copyright. Rights-holders are under no obligation to make their information
available to libraries in electronic form under any terms (regardless of whether the library already holds a print copy). Availability is much more of an issue for electronic information than it is for print: if a publisher makes a print product generally available, it is difficult to prevent a library from acquiring it under the same terms it is offered to the general public. (Differential pricing for libraries—particularly of journal subscriptions—has been imposed by publishers for a number of years; the ethics and legality of this practice remain a very vexed topic, although the practice is now widely accepted. It is hard to distinguish where differential pricing ends and a refusal to license to libraries begins; a library subscription that is 3 times the cost of an individual subscription may still be affordable, but a library subscription priced at 100 times the cost of an individual subscription likely represents what is in effect a refusal to make the material available to libraries.) In many cases, rights-holders are making a comfortable profit from print publications and perhaps from some limited direct electronic licensing to specific users (such as the business community). They have little, if any, incentive to make their information available electronically to libraries; and if they are willing to do so at all it may be at a premium of many times the cost of print. Libraries, already strapped for funds to continue any sort of acquisitions program, may be unable to afford or at least cost justify an investment in many publishers' electronic information, particularly if it is already available and affordable in print form. And the publisher has little incentive to lower the price because the print profit is already sizable. Also, publishers are being cautious about making electronic information available because pricing patterns are not well established, even for backlists or journal backfiles. They may not make retrospective files available to libraries at all, or only at very high costs. Libraries, concerned with their ability to maintain current acquisitions, will likely assign a very low priority to funding relicensing of materials in electronic formats that they already own in print or can get through interlibrary loan.

Further, any license agreements that a library may be able to negotiate typically do not accommodate the present interlibrary loan system. Information licensed by one library cannot be shared with another library's patron community. This is particularly frustrating since technically electronic information can be moved much more swiftly and economically than printed materials through an interlibrary loan system, thus promising improved service for library patrons nationwide and reduced costs for libraries that are increasingly relying on the interlibrary loan system to compensate for their dwindling local purchasing power. In some ways the interlibrary loan model does not even make sense for electronic information resources—technically, any patron at any library should be able to access resources at another library across the network without involving an interlibrary loan at all. Through the ability of networks to erase the accidental constraints of geography, anyone in the world has potential access to electronic information offered by a given library on an equally convenient basis. Electronic library collections are freed from their geographic anchors by networking technology. But facilitating such access, which clearly runs counter to a rights-holder's interests in maximizing revenue, is prevented by license. Indeed, part of the motivation to shift from sale to license has been specifically to address this issue. But although some license terms could work, to an extent, in an interlibrary loan environment (such as ensuring that only a fixed maximum number of people have access to an information resource at one time), the information providers have not been enthusiastic about experimenting with such agreements. What motivation do they have, after all?
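
The parenthetical license term above, under which no more than a fixed number of people may use a resource at one time, can be expressed as a simple concurrency gate. The sketch below is purely illustrative; the class name and interface are assumptions, not any actual library system's or vendor's API.

```python
import threading

class ConcurrentUseLicense:
    """A licensed resource that at most `max_concurrent` patrons may use at once."""

    def __init__(self, resource, max_concurrent):
        self.resource = resource
        self._seats = threading.BoundedSemaphore(max_concurrent)

    def open_session(self):
        """Grant access only if a licensed seat is free; otherwise refuse."""
        return self._seats.acquire(blocking=False)

    def close_session(self):
        """Release a seat when the patron is done."""
        self._seats.release()

backfile = ConcurrentUseLicense("full-text journal backfile", max_concurrent=2)
print(backfile.open_session())  # True  -- a patron at library A takes a seat
print(backfile.open_session())  # True  -- a patron at library B takes the last seat
print(backfile.open_session())  # False -- a patron at library C must wait
backfile.close_session()        # the first patron finishes; a seat frees up
```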

In a networked environment libraries are being forced to be much more explicit in defining their user communities as part of their license negotiations. For a research library it is typically phrased "the faculty, students, and staff of University X"; for a public library it might be "the residents of township Y or county Z." These definitions can get cumbersome. Are affiliated hospitals included in the case of a university library; are people included who work for businesses in a city but don't reside there? There is still some recognition of public access, though this tends to be structured around geographically based traditions. For example, many libraries still propose to include as part of their user community anyone who is physically present at the library, regardless of
who they are and how far they have traveled. Many rights-holders have been willing to accommodate this kind of limited public access provision. But such definitions ignore—indeed run directly counter to—many of the real advantages of the networked environment.

THREATS TO PUBLIC LIBRARIES IN THE NETWORKED ENVIRONMENT

The public libraries have always been on the front lines in making information accessible to the general public. Typically, the research library community has supported them in this effort and has also done a great deal to make information directly available to the public, though preferably through the interlibrary loan system rather than by large-scale direct service to the general public. In the new world of the NII it is natural to try to continue to keep the library community in this role and indeed to call for an expanded role particularly for public libraries in providing access to electronic information, both that provided by the government (building, for example, on the precedent established by the Federal Depository Library Program) and that made available on the network by commercial and noncommercial rights-holders and information providers.

Of all the types of libraries, however, public libraries may be in the deepest trouble. Their budgets have been slashed nationwide over the past decade, leading to massive curtailments in services. Unlike research libraries or corporate special libraries that are typically service units within their parent organizations and viewed as providing essential services to those organizations, public libraries are sometimes valued as providing a service to a community that is relatively peripheral compared to such services as public safety. Because of the very large size of their user communities, public libraries often face large costs for licensing electronic information resources. Because they tend to provide a great deal of access to mass market rather than more purely scholarly information, they are considered by some information providers as potential competitors to the information providers' efforts to market information (particularly electronic information) directly to the public, and prices for licenses are set accordingly. In order to meet the challenges of the information age they will need to make massive investments in both information technology and staff training. It is not simply a matter of ensuring that the network reaches public libraries, but also that the public libraries have a sufficient installed base of hardware, software, and sufficient competent staff to make use of these network connections as an effective way of providing access to information for their patrons.

The basic funding mechanisms for public libraries are also likely to come into serious question in the networked information environment. Historically, most public libraries have drawn their support from their geographically defined user community through various forms of taxes. In a time when any person in the world can potentially use any public library through the network, geography no longer defines the user community for a public library, and thus geographically based funding may no longer make sense as a means of supporting these libraries.

There is a great irony here. In the print world each community funded its local libraries, and these libraries served as access points to the national library collections through the interlibrary loan system. In the networked environment, collections are local to a specific library and its user community, and each library can compete equally with all other libraries on the network for patrons who would join their user community (and presumably pay for such membership as a means of funding the library). Commercial organizations are also free to appear on the network to compete with libraries for patrons. Given the opportunity to aggregate a critical mass of interested customers for information in any imaginable specialized area because of the national and international scope of the network, a wide range of newly economically viable niche commercial information providers may evolve on the network, much as such providers exist as specialty publications and shops that do most of their business by mail order. It is interesting to speculate about the potential effects of users "joining" the library or libraries of their choice through the network. Many geographic locales
that currently fund libraries are quite diverse, economically, demographically, and culturally. One wonders if such diversity will continue to be found in the electronic communities that fund electronic libraries or whether we will see a much greater homogeneity among the patrons of a given "library" on the network.

Realistically, another aspect of this problem is that the economically disadvantaged or those uncomfortable with technology are least likely to own the network connections and information technology needed to access libraries over the network. They will continue to visit the library physically and to use it as a place where they can obtain not only access to information generally but also the information technology necessary to use electronic information resources on the network (which might include important community resources, such as listings of employment and educational opportunities or information about social services). The more prosperous, technologically literate people may have abandoned the geographically local public library for remote electronic libraries on the network, thus eroding the community commitment to continue to support the local library.

DIGITAL LIBRARIES AND LIBRARY SERVICES

I am not fond of the term "digital library." The term "library" is used to refer to at least three things: a collection, the building that houses that collection, and the organization responsible for it all. As organizations, libraries acquire, organize, provide access to, and preserve information. These are the primary functions of a library, though in many cases they have been augmented with more extensive responsibilities for training and teaching, for providing or facilitating access to social services, or for managing certain types of government or institutional information resources. When one considers libraries as organizations, it is clear that they deal with information in all formats—print, microforms, sound recordings, video, and electronic information resources. From this perspective, the term "digital library" doesn't make much sense and provides a very one-sided view of the mission and operation of a library.

It is certainly true that an increasing part of the collection of information that many libraries acquire, organize, provide access to, and preserve will be in electronic form, and as this proportion increases it will have wide-ranging effects on all aspects of library planning, operation, policy, and funding. How quickly this transition will move and how soon a critical mass of information necessary to serve various classes of library users will actually be available in electronic form are subject to considerable debate. I think that it will take longer than many people believe. There are a number of obvious situations in which much of the primary data in various areas is available and heavily used in electronic form, including space sciences, remote sensing, molecular biology, law, and parts of the financial industries. In some cases these data can only be meaningfully stored and used in electronic forms. But even in many of these areas much of the journal literature (as opposed to primary information) is still available only in print. Already there are massive data and information archives available on the network, containing everything from planetary exploration data to microcomputer software. While these are often referred to (rather grandly) as digital libraries, they are really not, in many cases, part of any actual library's collection and are not really managed in the way in which a library would manage them. In a number of cases they are actually the volunteer efforts of a few interested and energetic individuals, and there is no institutional commitment to maintain the resources.

It is important to recognize how little of the existing print literature base is currently available through libraries in electronic form, even to the limited and well-specified user communities that are typically served by the still relatively well funded academic research libraries. Libraries have been investing heavily in information technology over the past two decades, to be sure, but most of this investment has been spent on modernizing existing systems rather than acquiring
electronic content. The most visible results of this program of investment are the on-line catalogs of various libraries that can be freely accessed across the Internet today (although they often contain a mix of databases, such as the catalog of the library's holdings, which are available to anyone, and other licensed information resources, such as databases of journal citations or full text, which are limited to the library's direct user community). These information retrieval systems allow a user to find out what print-based information a library holds; while they are very heavily used, their end result is normally citations to printed works rather than electronic primary information. Even a system such as the University of California, which operates a sophisticated system-wide information retrieval system and had been aggressively investing in electronic information resources for some years, only offers access to a few hundred journals and virtually no books in electronic format. It holds active journal subscriptions to over 100,000 journals and its on-line catalog contains entries for over 11 million books. (In many cases the journal articles are only available to the UC community a month or more after print publication; in most cases the electronic form of the article does not include graphs, equations, or illustrations.) The conversion of the major research libraries to electronic form is years away and is less a technological problem than a business and legal issue.

Interestingly, it seems likely that the conversion of critical masses of information to electronic form may occur first for specific, often modest communities that are willing to pay substantially for access to the information resources and that are willing to pay personally (or at least organizationally), rather than through the support of an intermediary agency like a library. In these situations it also seems fairly common for individual users to be willing to pay for access to information transactionally (i.e., by the article retrieved or viewed or by the connect hour) rather than under the flat-fee license model favored by libraries. In many cases primary rights-holders feel much more comfortable with an arrangement that gives them income for each use of the material, rather than trying to decide "what it's worth" in advance as they would have to do in a flat-rate contract agreement. Indeed, this conversion of information resources to electronic form is already well advanced in the legal community. This area has moved quickly because it seems that the user community is not price sensitive and because of some unusual situations with regard to concentrated ownership of much of the key material by corporations that are both rights-holders and providers of access services directly. And, as has already been discussed, it may prove impractical or even impossible, for a variety of reasons, for libraries to provide much patron access to these electronic information resources once they are established. Certainly this has been the case in areas such as law (with the exception of law school libraries, which benefit from special arrangements intended to familiarize future graduates with the electronic resources).
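
A small break-even calculation makes the contrast between transactional and flat-fee pricing plain. All of the numbers below are assumptions chosen only to show the shape of the trade-off; they are not drawn from any actual contract.

```python
flat_fee_per_year = 100_000   # assumed institutional flat-fee license
price_per_article = 15.00     # assumed per-article (transactional) charge

breakeven = flat_fee_per_year / price_per_article
print(f"flat fee is the cheaper option above ~{breakeven:,.0f} uses per year")

# Below the break-even point the rights-holder earns more per use under
# transactional pricing, one reason a publisher may prefer being paid per use
# to guessing "what it's worth" in advance under a flat-rate agreement.
for uses_per_year in (2_000, 10_000, 50_000):
    transactional = uses_per_year * price_per_article
    print(f"{uses_per_year:>6,} uses: transactional ${transactional:>9,.0f}"
          f"  vs  flat ${flat_fee_per_year:,.0f}")
```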

It is important to recognize that these commercial information providers are not libraries, though they may offer access to immense databases that represent key resources in a given discipline. They acquire and mount information to make a profit and remove it if it does not generate sufficient revenue. They will not preserve little-used information just because it is an important part of the historical or cultural record. And many of these providers have followed a marketing strategy that emphasizes sales to a limited market at very high prices rather than a much larger volume market at much lower prices per unit of information.

In considering the conversion of the existing print base to electronic form, one basic issue is the negotiation of licenses with the rights-holders. In most disciplines there are many publishers who contribute to the print base. The notion of a major research library negotiating and managing contracts with thousands of publishers is clearly absurd, as is the notion of negotiating a contract for a $100/year journal. (It will cost much more than that to negotiate the contract, in the absence of a "standard" contract that all parties could just accept unmodified. Such standard contracts do not exist today, and there has been little success in developing one, despite the efforts of such organizations as the Coalition for Networked Information.) A number of companies have begun to act as rights aggregators for libraries: University Microfilms Inc. and Information Access Company, for example, will license to libraries full-text databases containing material that they
have, in turn, licensed from the primary publishers. But coverage of these full-text databases is limited, and the material is not sufficiently timely to substitute for print subscriptions (or electronic access directly from the publisher). Until libraries can acquire electronic information under some framework that matches the simplicity and uniformity that characterizes the acquisition of most printed material today, the growth of electronic library collections will be slow.

The limited amount of library collections available in electronic form is already beginning to have effects that cause concern. Network-based access to electronic information is unquestionably convenient, and users tend to use electronic information resources that are available without consideration that they may be incomplete or may not represent the highest quality or most current work on a given subject. To some users, electronic library collections are already being viewed as defining the literature in a given area; these users have been too quick to embrace the transition to electronic formats.

In the next decade we will be dealing with a mixed transitional environment for libraries, where some parts of the collection exist in electronic formats but a large part continues to be available only on paper. This will greatly increase the complexity of managing the provisions for library services, defining priorities, and developing policies and strategies to position libraries to function effectively in their role as information providers to the public both in today's geographically bound, print-based world and in the future environment of electronic information on the NII.

CONCLUSION: THE BROADER CONTEXT OF INFORMATION PUBLISHING ON THE NII

It is important to recognize that libraries are part of an enormously complex system of information providers and consumers that also includes publishers, government at all levels, scholars and researchers, the general public, scholarly societies, individual authors, various rights-brokers, and the business community. Even the much more constrained and relatively homogeneous scholarly communications and publishing system is vastly complex. Libraries play an integral part, but only a part. There may be an expanding division between the world of mass market information, much of which is bought and sold as a commodity in a commercial environment, and scholarly information, which is primarily produced and consumed by nonprofit institutions that place great value on the free flow of information (although there are currently many for-profit organizations involved in the scholarly publishing system).

There are vast changes taking place throughout this entire system as a result of the possibilities created by networks and information technology. Organizations are reassessing their roles: scholarly societies that once viewed themselves very much like commercial publishers are now rethinking their relationships with their disciplinary knowledge base, their authors, their readers, and the libraries that have traditionally purchased much of their output. Individual scholars are exploring the possibilities of various types of network-based publishing outside the framework established by traditional publishers, and in some cases also outside the customary social constructs such as peer review. Indeed, processes such as publication and citation that were relatively well understood in the print world and that are central to numerous areas in our society are still very poorly defined or understood in the network context.

Libraries are still basically oriented toward artifact-based collections and toward print. They are struggling to respond to the new variations on publishing and information distribution that are evolving in the networked environment and to determine what their roles should be with regard to these developments. Considering that libraries have played a relatively minor role in anything involved in television or radio in this regard, I would suggest caution in predicting their roles in a system of information creation, distribution, and use that is being transfigured by the introduction of networks and information technology based solely on the effects of these technologies on existing
library operations and services rather than on the system as a whole. I do not believe that libraries are going to disappear, but they may change substantially in their roles, missions, and funding in the coming years. Certainly there is going to be a growing demand for properly trained, innovative, technologically literate librarians in many new roles in the networked information environment.

It is clear that the visions of information access through the NII are attractive and represent worthy societal goals. In many regards they build on the tradition of public libraries and their essential role in our society as it has developed during the 20th century. But without outright subsidy of access to electronic information or major changes in the framework that determines the terms and conditions under which the public has access to such information, these visions of the NII's potential may be difficult to achieve, and it may be unrealistic to assume that libraries have the resources to step up to these challenges. It seems clear that libraries have a number of potential—perhaps central—roles to play in implementing the public policy objectives that have been articulated for the NII. But they may not be the route to achieve all of these objectives, and in any event their roles and efforts will need to be complemented by investments from other sources, such as the federal government, in the development of public access information resources. We need to be very clear about who the user communities are, who is to be subsidized, what is being subsidized, and how that subsidy will be financed.

Discussion

ROBERT KAHN: Bob Lucky, if your model of the Internet being free is really going to hold for the rest of the country, how are we going to get it into the 70 million homes in America at $37,000 plus $7,000 each?

ROBERT LUCKY: I don't know. That is where the cost is going to be. The model is really based on business use, where we pay an average of $5 per person per month for everything that we can associate with the Internet, while the telephone might be closer to $100 when you throw in the long-distance charges and so forth. But in trying to get them into homes, particularly if there is a separate line involved, you face this enormous cost and bottleneck of that access, and so I simply do not know.

PAUL MOCKAPETRIS: I was struck by your comments about the difference between the phone company model and the Internet model. You said that the local loop is what is expensive and the long distance is not, in the phone company case. In the Internet case, you said you pay for access but then the Internet is free, so it seems like those two are the same model in the sense that the local loop is the thing that is dominating the cost, and I was just curious whether I was wrong or not.

LUCKY: I am not sure I understand, but it is sort of the same form as Bob Kahn's question. The costs are in that access. In business you have a model that is able to share that cost among many people because you have one T-1 line into a big building and then you have the local area network where most of the costs are. But most of us associate those costs in the local area with our computing environment and not the communication. I spend millions of dollars keeping up all the local area networks and all the people that run them and so forth, but I say, I have to do that for computers anyway, and the communication in the Internet just comes out for free from that.

A totally different model applies for the home because it is one on one and I don't know how you would do that unless you overlay it on your voice. I mean, if you share that line and you have a flat rate, then again you could conceivably have Internet access that is rather cheap.
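
The cost-sharing arithmetic Lucky describes can be illustrated in a few lines. The line costs and user counts below are assumed for illustration; only the figure of roughly $5 per person per month for business access appears in Lucky's earlier answer.

```python
# Business access: one shared line, many users behind it (assumed numbers).
t1_line_per_month = 1_500     # assumed monthly cost of a shared T-1 into a building
employees_sharing = 300       # assumed number of people behind that one line
print(f"business: ${t1_line_per_month / employees_sharing:.2f} per person per month")  # $5.00

# Home access: a separate line serves a single household, so nothing is shared.
separate_home_line = 30       # assumed monthly cost of a dedicated line to one home
print(f"home:     ${separate_home_line:.2f} per person per month")                     # $30.00
```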

MOCKAPETRIS: I have one other question. A lot of people have different metrics about what part of the Internet is commercial and what part is not. What is the right way to count? We have numbers about how many hosts are registered that are not entirely sound in some ways—sort of like sampling license plates and seeing how many are commercial and how many are government. What would be the appropriate way to think about that? Packets or packet miles?

LUCKY: I do not know; we do not know. I do not think you can count packets and distinguish which is which right now, so again I am sorry not to be able to answer these questions. In fact, no one knows how many Internet users there are really. People say 20 million, but all we know for sure is the number of registered hosts. I have no idea, even in our company; I said 3,300
people. We have 3,300 addresses, but we think maybe only 1,200 are active. We really do not have good statistics on a lot of the stuff.

GEORGE GILDER: How many hosts?

LUCKY: About 2 million hosts.

GILDER: The concept of universal service confuses me. It took 50 years and about a trillion bucks to get universal service, something like that. To proceed forward with some requirement for universal service would stifle all progress. You cannot instantly create universal service. Universal service is something that can happen when you get sufficient volume so that incremental costs are very low. You have to start as an elite service almost necessarily. I wonder, when you speak of universal service, just what kind of requirement is meant or how that can be defined.

CHARLES FIRESTONE: I think that what you are going to see is a dynamic concept of universal service. You are right—something that is new and innovative but eventually becomes necessary or heavily penetrated might become universal service in a future year.

One of the big issues that we are going to be facing is to figure out what should be included in the bundle of universal service (just like we need to figure out what should be included in the bundle of privacy rights), and this, I think, right now is just a connection, a dial-tone connection. One of the issues, for example, is touch-tone telephone service. Is that something that in some states is considered universal service? People have to have access to touch-tone or they have to have a touch-tone connection into their homes, because otherwise they cannot avail themselves of the services I think you have in mind.

But eventually there may be some governmental service—I am thinking down the line—some access to local information, your community. I know, for example, that Santa Monica has had the PEN [Public Electronic Network] system. Now, universal service may only mean having free access to the Santa Monica City Council or their local network—an ability to access without having to pay extra for it. But it is something that has to be dynamic and I am sure will be.

The other question is the application to libraries. What is the library equivalent in the electronic era? How do you connect into information, and what information should be a public resource as opposed to something that is available strictly on a pay-by-the-drink or per-bit basis? That is something that I think we as a society are going to have to come to grips with.

MICHAEL ROBERTS: There is a lot of clamor these days for improving the security of the network, especially the Internet, so I think Colin Crook has got it right. Especially in a network that has to have universal access, you assume the network is at some level insecure and you secure the applications. The question is, for public-sector areas that are critical, such as health care, libraries, and also intellectual property, what sort of process should we be thinking about from a policy standpoint in securing those applications? Is it possible, for instance, to have the sort of certification of applications that the financial people can do very privately and behind closed doors applied to private-sector applications? Is it possible to apply product liability sorts of considerations to that area even in the public sector?

EDWARD SHORTLIFFE: I have some thoughts about it. One of the intriguing problems about privacy and confidentiality of health care data is that the issue is not necessarily the security of an application per se but what somebody who has access to the data does with them. In other words, there is a potential for abuse by people who are authorized to access patient data. That is why legal remedies with criminal penalties are required when someone misuses privileged medical information to which they have access. There are no national standards in this area at present. As a result, one of the big emphases I have heard in the medical area is that we need to begin to introduce some uniform penalties, probably with preemptive federal legislation about the misuse of data to which people do have valid rights of access.

As for the more generic issue of trying to prevent people from breaking into data sets, instituting appropriate certification of software has a valid role. What you are talking about is simply having databases out on the network with lots of different applications that could access them. Of course, you run into the question of what should be the nature of the varied access methods that you could use to get to those data sets, independent of specific applications that may have been written to access them.
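
A minimal sketch may help make the "secure the applications, not the network" point concrete. It assumes a hypothetical patient-record service in which the application itself checks each user's authorization and writes an audit record of every access attempt, so that misuse by people with valid access can at least be detected afterward, which is the concern Shortliffe raises above. All of the names here (PatientRecordService, its roles and fields) are invented for illustration and are not drawn from any system discussed at the workshop.

    # Hypothetical illustration of "secure the application, not the network":
    # the service enforces per-user authorization and keeps an audit trail,
    # rather than relying on the transport being trustworthy.
    import datetime

    class PatientRecordService:
        def __init__(self):
            self._records = {}   # record_id -> dict of fields
            self._roles = {}     # user_id -> set of permitted actions
            self.audit_log = []  # every access attempt, allowed or not

        def add_user(self, user_id, actions):
            self._roles[user_id] = set(actions)

        def add_record(self, record_id, fields):
            self._records[record_id] = fields

        def read_record(self, user_id, record_id, purpose):
            allowed = "read" in self._roles.get(user_id, set())
            # Log the attempt either way; the audit trail is what makes
            # later misuse by authorized users detectable.
            self.audit_log.append({
                "time": datetime.datetime.utcnow().isoformat(),
                "user": user_id,
                "record": record_id,
                "purpose": purpose,
                "allowed": allowed,
            })
            if not allowed:
                raise PermissionError(f"{user_id} may not read records")
            return self._records[record_id]

    # Example use: an authorized reading is served, and every attempt is logged.
    svc = PatientRecordService()
    svc.add_user("dr_jones", {"read"})
    svc.add_record("pt-001", {"diagnosis": "hypertension"})
    print(svc.read_record("dr_jones", "pt-001", purpose="treatment"))
    print(svc.audit_log)

The point of the sketch is only that the authorization checks and the audit trail live in the application layer, where they remain meaningful even if the underlying network is assumed to be insecure.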

MARY JO DEERING: I have a question for Ted Shortliffe, whose clear vision I have always appreciated, but also for people in the audience on both the engineering and the content side. It picks up on something that Charlie Firestone said this morning about the process of disintermediation that is going on in society with regard to information. The same thing is going on in health, actually, and it is a process parallel to the deinstitutionalization of health care. Health care reform is really going to continue the pressure in those directions, shifting the emphasis away from hospitals and high-end acute care providers and toward primary care, preventive medicine, and home care.

My request to Ted specifically concerns that wonderful sketch [Figure 3], where you actually began to paint what the health information infrastructure would look like. It did not really include the linkages that would reflect that new reality in health care, the connections that would be necessary among nonhospital institutions, nonspecialty providers, and perhaps the consumer. I think we would all like to see what that would look like.

SHORTLIFFE: You are absolutely right. As you know, we had a meeting on the subject of the NII and health care last week that the National Research Council sponsored ["National Information Infrastructure for Health Care," October 5–6, 1993] at which I think the message was driven home loud and clear. I personally am in primary care, and therefore I am sensitive to it as well. Good point.

KAHN: I have a two-part question, one part for Cliff Lynch and one part for Ted Shortliffe. I know, Cliff, that you are very well aware of the importance not only of accessing information but also of delineating what you can do with it once you get it: whether you can copy it, distribute it, or make derivative works, and all the other things that are typically covered under copyright. In your talk you really did not get into that, and I am wondering what your own views are as to how that particular aspect of library development is going to proceed. How are we going to know what we can do with this stuff?

CLIFFORD LYNCH: I think that this issue of what you can do with information is already a major problem. As soon as you move into a mode where you are licensing information rather than purchasing it, and licensing information from different sources with different contractual provisions—and, just to add to the fun, do not necessarily have a standard sort of taxonomy of things you can do—you are into a situation where it is very difficult to know what you can do beyond reading the information once and then purging it from memory and never printing it.

I think this is a particularly troublesome trend as we start thinking about much more intelligent ways of handling information. As people start building personal databases and using intelligent agents, "knowbots," and things of that nature that correlate information from multiple sources and refine it, these licensing restrictions may be a major barrier to making intelligent use of information. It is something I am really concerned about. Another dimension of this is multimedia. I keep hearing about multimedia, but at least on bad days I think the only ones who will be able to afford it are groups like major motion picture studios, because only they can afford enough lawyer time to clear the rights.
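
Lynch's point about the missing taxonomy of "things you can do" can be made concrete with a small, purely illustrative sketch: if each licensed source carried a machine-readable statement of permitted uses, an agent or knowbot could check it before printing, excerpting, or combining items. The vocabulary ("view", "print", "excerpt", "redistribute") and the source names below are invented for illustration; no such standard existed at the time of this discussion.

    # Illustrative only: machine-readable usage rights per licensed source,
    # checked before an agent acts on an item from that source.
    LICENSES = {
        "journal_a": {"view", "print", "excerpt"},
        "newswire_b": {"view"},                        # read once, no reuse
        "image_bank_c": {"view", "excerpt", "redistribute"},
    }

    def permitted(source, action):
        """Return True if the license for `source` allows `action`."""
        return action in LICENSES.get(source, set())

    def compile_report(items):
        """Combine excerpts from several sources, skipping any whose
        license does not allow excerpting."""
        included, skipped = [], []
        for source, text in items:
            (included if permitted(source, "excerpt") else skipped).append(source)
        return included, skipped

    used, blocked = compile_report([("journal_a", "some text"),
                                    ("newswire_b", "other text")])
    print("excerpted from:", used)                 # ['journal_a']
    print("license forbids excerpting:", blocked)  # ['newswire_b']

Without an agreed taxonomy of this kind, each source's contractual terms have to be interpreted by hand, which is exactly the barrier to automated, intelligent use of information that Lynch describes.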

ROBERT PEARLMAN: This issue is very, very big in the education sector, because one of the things kids are doing these days is making multimedia reports, combinations of video and text and graphics, and they just grab images and text from everywhere. It is a tremendous ethical problem.

But what has happened is that an industry is developing now, and CD-ROMs are out that basically provide images that you can use. In other words, an industry is growing up that says, "Sure, National Geographic will sell it to you at a big price, but we will give you similar images that you can buy and use for practically nothing," so there may be this kind of development in other sectors as well as competing sources of information.

KAHN: We have heard from Bob Lucky that the Internet, to certain kinds of businesses, essentially looks almost free, and from Bob Pearlman that to the educational community it is virtually uneconomic at the moment. The other part of my question, for Ted Shortliffe, is where the technology fits in the case of the medical community. I think we heard from Ted that there is a strong motivation on the part of physicians in the hospitals to gain access to this technology, and the medical profession itself, and I believe you, could provide a good, strong justification for that. In fact, I see it coming. The question I have is whether there is any economic justification for the end patient, namely the user of the health care system, to have some coupling to the Internet and, if so, at what level of capability.

SHORTLIFFE: This relates to Mary Jo Deering's question. First, when you look at what the health care field is spending on computing right now, the amount of money it would take for any given institution to hook up to the Internet is minuscule compared to what it is spending overall on information technology, so I don't see cost as a barrier per se. The barrier is more the perceived benefit and how you actually would make use of the national network, given the lack of standards for connectivity and data sharing.

The same kind of argument could be made as we begin to see the evolution of new health care delivery plans. It may well become economically beneficial for health care providers to pay for linkages into the home in order to give patients access to information that would keep them from becoming more expensive users of the health care system. A lot of visits to doctors are unnecessary. If people only had easier access to the kind of information they need, you can imagine the impact on overuse of certain kinds of facilities.

The problem with that approach is that you are hypothesizing not only the availability of the technology but also a level of education among end-user patients and other users of health care which right now is unlikely. The biggest users of health care are the people least likely to have the facilities in their homes at present and the ones least likely to have the education that would allow them to make optimal use of such technology. So we are talking about a major social undertaking to enable patients in their homes, and people who are not patients but simply need access to health information, to make good use of the kind of information that might be made available.

KAHN: Is that a practical suggestion you just put out, that the doctors or hospitals literally pick up that mantle somehow?

SHORTLIFFE: It is not going to be the doctors. If you think the doctors are in charge of the health care system, you are a few years behind. It is going to be health plans as they begin to look at how they can compete for patients in large areas, especially in the big metropolitan areas. How this will play out in more rural areas is another matter that I think is a great worry to people, because the emphasis tends to be on the competitive marketplaces around the big cities. However, the health plans will pay for these technologies if they find it is to their competitive advantage to do so.

LINDA ROBERTS: I am trying to find the common thread in all that we have heard this morning. It strikes me that in most cases what you have been talking about is doing better what we already do. But I think there really is an opportunity to do better things in every one of the sectors that has been talked about this morning. I am thinking about the library as one example. We really have public libraries as a compromise that, if I understand it, was reached between the publishers and communities that really wanted everybody to have their own library.

There were people who had libraries and who did not need public libraries, and they were the exception rather than the rule. What is so interesting about what could happen in the future is that there really could be a much more decentralized system of libraries. Everybody could have their own library in their own home, and what is even more fascinating about some of the things that are happening in education, when you talk to teachers, is that a lot of the material that could be in these libraries does not necessarily have to be controlled and produced by the publishers out there.

So it seems to me that one of the things we really have to think about for the future is what we haven't been doing that we ought to be doing and that might create new opportunities for business and industry, but might also create new opportunities for learning more broadly.

VINTON CERF: With respect to something Bob Pearlman said, that somehow it is not economic for the schools to be part of the network environment, I am a little puzzled, because I assume that we spend, as a country, a great deal of money on education; it is not as if there are zero dollars out there. A lot of dollars are spent on education, so obviously a trade-off is being made between the utility of being part of a network environment and spending the money on other things. Maybe you can help us understand a little bit about how the educational dollar is spent. Is it mostly for personnel? Do we find networking not so useful until all of the teachers are trained in its use, as opposed to throwing equipment into the schools and expecting somebody to do something useful with it without adequate software, and so on?

I want to suggest that we are like parents of teenagers when the kids are out late and we do not know where they are and we're worried. There are a thousand different possible things that could have happened to them even though it is probably the case that most of those things are mutually exclusive, and yet we still worry about all thousand of them.

We worry about how this technology will finally reach critical mass, will finally get to the point where there are enough people who have access to it for good, sound economic reasons that we can start doing some of the things that we have heard about this morning. I would like to suggest that we actually do not know yet which of the things will trigger the regular availability of all of the technology. The computer scientists got it first because they had to have the stuff to write programs. Then they got to do all these other neat things with the networks. Bob, what in your mind is the triggering event for making this economically interesting for the schools?

PEARLMAN: This, of course, is a very complicated question. If you look closely at technology in U.S. schools, it has not resulted in many economies. For instance, in the mid-1980s some companies were trying to market what were called integrated learning systems in the schools, on the basis that they would actually save on the use of teachers. In general they did not, and teachers are where the real economies are.

About 80 percent of school budgets have to do with personnel, so the only way to save money in any reasonable way is by saving on personnel. Some are saving on, say, custodial or food service personnel, but in the main you have to save on teaching personnel, and the only way you really get at that is by totally reorganizing schooling. I was associated with one of the new American school design awardee groups in Cambridge for the last year and a half. We came up with a design that we think in the long run is going to save money, but it requires quite an up-front investment in communications technology.

For our kids to be able to work much more on their own, with teacher advisors managing their affairs and with mentors, our design had to establish the infrastructure up front: a local area network in the school, connected to the Internet, with connections out into the community, which meant that the community had to be wired properly. All of these up-front costs had to be borne by somebody.

With that kind of structure in place, we felt that all sorts of economies could occur in school personnel, but we had to test the proposition. The problem is that from the point of view of a school that exists right now, we are only talking about an additional add-on cost. I don't mean that people really cannot afford to get onto a network; it is just hard for them to justify it. There are no real economies nationally. You get on the Internet, and what do you see? You are really just bringing in more programs—more kinds of things to look at, more curriculum possibly. But you are not really enabling a saving at that site.

What we really need is some real, serious innovation in the organization of schooling. That may take a lot of forms, and the charter school development may help. I am not a partisan of Whittle's efforts, but I am not happy that he cannot make it in the new design world. In fact, I would like to see some private player actually try to do it, because the question is how to put together the package, how to develop a school that is more economical when it is tied to a system—meaning many, many schools, whether in one location or around the country, that actually work together and produce some savings because they are sharing curriculum, curriculum costs, development costs, and teachers through the media. That is what has to happen.

Although we have things called school districts, or states that pretend to be school districts, they don't really pursue efficiencies the way a corporation like Citibank does when it looks at how to make a network that really makes everything blend together. So my answer is only that we are going to have to have lots of experiments and new designs for schooling over the next several years in order to exploit information technology properly.

CAROL HENDERSON: I want to pick up on an earlier question by Bob Kahn and tie together a couple of strands we heard this morning by connecting the health care sector and the library sector. We often hear about patient records and telemedicine, but I do not know that it is widely realized how often people go from the doctor's office to the public library to try to get information about what they have just heard, what their child has been diagnosed with, or what they are looking at in terms of caring for their aged parents down the line, and so on.

When you look at the kinds of questions they ask, they really want to mine the medical field's information, but they want to do it, in a sense, outside the medical field because they want to know whether there are alternative kinds of treatment and what the literature says about this drug that they are supposed to be taking if they are also taking something else.

A neutral source of information about information is something I think is very valuable, and perhaps we should not give it up too easily. Access for patients, or for people before they become patients, to preventive health care information and to information about their conditions is something that libraries can be a big help with.

I think perhaps we also passed over the idea of libraries as community institutions and as information providers. It is often the libraries that have mounted databases not just about their collections but also about community information and referral sources. Libraries are often the central source of information on where in a community you go for various government and social services across agencies, as well as institutions that, for instance, reach out to newer immigrant groups and help get them into the mainstream.
