4 Free Speech

The right of free speech enjoyed by Americans is rooted in the First Amendment, which states that "Congress shall make no law … abridging the freedom of speech. …" Nevertheless, the right to free speech is not entirely unfettered, and one's ability to say whatever one likes can be legally limited under certain circumstances that depend on the nature of the speech and the communications medium in which that speech is expressed. The electronic environment, which gives every user access to a large audience and a virtually unlimited supply of information, poses particular challenges concerning free speech. This chapter summarizes a discussion of two free speech scenarios that were examined by a panel at CSTB's February 1993 forum.

NOTE: This chapter, and the three chapters following it, are based on the discussions held at the February 1993 forum described in the preface. As noted in the preface, the forum was intended to raise issues related to and associated with the rights and responsibilities of participants in networked communities as they arose in discussions of various hypothetical scenarios. Thus, Chapters 4 through 7 collectively have a more descriptive than analytical quality.
SCENARIO 1: EXPLICIT PHOTOS ON A UNIVERSITY NETWORK

A large state university serves as a network "hub" for the state's high schools. The university itself is networked, with every faculty member, staff member, and student having a network computer on his or her desk. The university also is connected to the Internet. A student electronically scans pictures of men and women in various sexual poses.

Issue: The Law as the Ultimate Authority

The university needs to consider how its policies are consistent with the law, because a state university exists in a jurisdiction that probably has indecency and obscenity laws, according to Allan Adler, a lawyer with Cohn and Marks. "We do not voluntarily submit ourselves to the law. It is the reality in which we live," he said. Adler acknowledged, however, the common practical desire to reach consensus about behavior or conduct through negotiated policies and agreements. This is a typical approach in situations involving a communications medium, he said, where legal resolutions tend to be expensive and most parties get more than they bargained for in terms of restrictions on future conduct. Clearly, he said, there must be a set of social norms, whether defined by policy, contract, or law, and there must be some authority that enforces those norms; the ultimate authority is the law. Assuming that the university in Scenario 1 communicated the network ground rules to users, their usage of the network would imply assent, Adler said. Assent is required in both contract and criminal law, by individuals in the former case and by society as a whole in the latter. That is, individuals must be notified of regulations that affect their conduct so that they have a fair opportunity to comply; if they do not assent to compliance, then they cannot fairly be held responsible for complying, he explained.
Responding to a suggestion that users could enforce rules themselves by employing screening devices, Adler argued that defamatory or fraudulent information would be difficult to identify and filter out informally, and the law would need to step in. For example, an individual about whom a defamatory story had been written might wish to prevent others from seeing the story, and thus would have to
persuade an unknown universe of individuals to screen stories about him—clearly an imposing if not impossible task. "If the user is a participant in a system where either the user or some third party is defamed or has sustained damage to their reputation that affects them outside of the immediate electronic network on which they are operating, they are going to want some remedy for that," Adler said.1

1 An alternative to legal intervention might be to require that any message about a person leave behind an electronic trail detailing where it was sent from, so that the person defamed can seek—and the remorseful author can send—a retraction or apology.

As to whether the student, if disconnected, has any First Amendment rights, Larry Lessig, an assistant professor of constitutional law and contracts at the University of Chicago, noted that such rights would apply at a state university (though not necessarily at private universities). However, Lessig also contended that all universities habitually regulate speech. "If a student comes into my classroom and wants to talk about pornography when I want to talk about contracts, I tell the student, 'I'm going to fail you.' Now that is speech regulation," he asserted. Michael Godwin, staff counsel for the Electronic Frontier Foundation, concurred that the classroom is highly regulated and such regulation is appropriate based on the purpose of the classroom, but he raised the question of whether the university's electronic forum was more like a conversation on the block (in which freedom of speech guarantees do obtain) than like a lecture in a classroom. Lessig warned that the legal community has few tools to make sense of behavior on electronic networks, in part because judges, lawyers, and legislators have little or no experience with that world. "I think the point is that we have very little understanding of how these principles that seem fundamental to us, like free speech, can apply in these various different worlds," he said.

Issue: The Need to Establish Rules and Educate Users

If the university is only now responding to the problems posed by Scenario 1, then it is already too late, according to Reid W. Crawford, legal advisor and interim vice president for external affairs at Iowa State University. He noted that networking issues should be considered within the university community before such problems arise, and any concerns should be shared with the connected high schools. He also said that discussions should begin very early in the process
with university trustees, regents, legislators, the faculty senate, women's groups, civil liberties groups, and other concerned parties. According to Crawford, this scenario would be an important public relations concern, because the university supports the network with public funds. Thus, he said, the matter should be handled preemptively through a quiet and nonpublic political and regulatory process managed by the university, involving consultation with the various constituencies in the university community. Crawford continued, "That is not the only step, but it's the first step that has to be taken so that you can deal with issues in a rational setting. Because if you think of it strictly as a legal issue, you can make all the arguments you want to about pornography, Playboy, Playgirl, or getting into the harder-core and illegal pornography. But if you cannot recognize the public relations issues, I don't think you will ever get to the substantive legal issues." Since he made these remarks at the forum in 1993, Crawford's point has been underscored by scores of articles in the public press about "sex and the information superhighway."2 Lessig suggested that if users of a university network have an understanding that any subject is permissible, then it may be appropriate to have some technical means for segregating topics.3 This idea was seconded by Murray Turoff, who noted that New Jersey Institute of Technology electronic forums supported the discussion of very controversial subjects in private conferences that are advertised in a directory. An affirmative choice must be made by the student before access to a private conference is granted. 
2 See, for example, Amy Harmon, "The 'Seedy' Side of CD-ROMs," Los Angeles Times, November 29, 1993, p. A-1; John Schwartz, "Caution: Children at Play on Information Highway," Washington Post, November 28, 1993, pp. A-1 and A-26.

3 This is an approach employed by America OnLine, which offers message boards and conferences for different topics. At the same time, the library community believes that any official scheme for "segregating" or labeling violates intellectual freedom. For example, the American Library Association's "Statements on Labeling" says that "labeling is the practice of describing or designating materials by affixing a prejudicial label and/or segregating them by a prejudicial system. The American Library Association opposes these means of predisposing people's attitudes toward library materials for the following reasons: Labeling is an attempt to prejudice attitudes and as such, it is a censor's tool. …"

Carrying forward the theme of individual responsibility for enforcement, David Hughes, a freedom-of-speech advocate and managing partner of Old Colorado City Communications, suggested that users who wished to screen out material they deem offensive could use a technological filter (akin to "Caller ID" for telephones), assuming
such a device could be developed for electronic traffic.4 This technological solution would be the equivalent of a porch—i.e., it would allow offensive messages to reach the door but not to enter the house. Still, other technical approaches for screening undesirable material may not be viable in the electronic environment. For example, broadcast media have often resorted to segregating material not intended for children by broadcasting such material late at night; in an electronic environment in which the material is available 24 hours per day, such an approach becomes more difficult to implement. The first concern of George Perry, vice president and general counsel for the Prodigy Services Company, with regard to Scenario 1 was whether any laws were broken; he was not sure, noting that the answer might depend on the nature of the pictures. Therefore, the real question becomes the nature of the relationship among the user, the provider, and, perhaps, the victim, he said. Perry did not see a freedom-of-speech issue in the scenario, arguing that speech has never been entirely free. Given that electronic networks are a new medium with very few commonly accepted rules of behavior, Perry emphasized the need for providers to establish them. "I think it is absolutely critical that operators of these systems, whether they are universities or commercial operations, establish their rules. … I don't think that every network in which you express yourself has to have the same rules, but you've got to have some rules; otherwise the place is just going to crumble." A view that went farther was espoused by Hughes. He contended that the primary responsibility in handling the explicit photographs lay with the individual and that the university had no responsibility. If the university responded to the incident by disconnecting the student from the network, then the student should respond by demanding his or her freedom of electronic speech, Hughes asserted. 
4 The technological feasibility of an electronic filter is not at all a given, although some prototypes are being developed. It is easy to imagine a filter that would delete or suppress text-based messages that contained, for example, certain four-letter words; it would be far more difficult, however, to develop a filter that could screen out bit-mapped graphic images of humans in sexual poses. (One could, of course, screen authors' names and subject keywords.) Moreover, any such filter would require a user to specify with some precision what he or she found offensive, a feat not all individuals could accomplish. Example-based learning techniques may enable a program to derive a filter based on a few samples of undesirable material, but whether such techniques are feasible and applicable on a large operational scale is the subject of considerable technical argument.
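The text-based screening that footnote 4 describes as "easy to imagine" can be sketched in a few lines. This is a minimal illustration only, with a hypothetical blocked-word list and message format; as the footnote notes, each user would have to supply his or her own list, and nothing like this can screen bit-mapped images.

```python
# Minimal sketch of the keyword-based message filter described in
# footnote 4. The blocked-word list and the sample messages are
# hypothetical; real systems would need far more sophistication.

def make_filter(blocked_words):
    """Return a predicate that flags messages containing any blocked word."""
    blocked = {w.lower() for w in blocked_words}

    def is_offensive(message: str) -> bool:
        # Lowercase, split on whitespace, and strip trailing punctuation
        # before comparing each word against the user's blocked set.
        words = message.lower().split()
        return any(w.strip('.,!?;:"\'') in blocked for w in words)

    return is_offensive

screen = make_filter(["expletive"])
inbox = ["Market update for XYZ Corp.", "This is expletive content."]
visible = [m for m in inbox if not screen(m)]  # only the first message survives
```

A filter like this runs on the recipient's side, so it acts as the "porch" described above: the message arrives but is never shown.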
He based this argument on the premise that the digitized photographs constitute speech rather than property, and that freedom of speech is the overriding principle. Methods of handling problems on electronic networks should evolve from that foundation, Hughes argued, rather than from university policies or social norms developed for other communications media. In Hughes's view, the university's only responsibility in Scenario 1 is to educate students and the public. He asserted that members of the public who lack experience with electronic networks—including parents and newspaper reporters—are unable to make sound, objective judgments about such situations. On his own network, Hughes educates and influences his user population to adhere to an ethic. "We discuss that ethic, and out comes socially responsible behavior—not by imposing external authority on them. I never delete, on my system, a piece of mail—never, no matter how obtrusive. I handle it technologically sometimes [by masking the appearance of the message] so that it is free speech, and we have a discussion to the point of responsible group behavior." Others argue that masking the appearance of the message is itself a form of censorship. In closing discussion of Scenario 1, moderator Henry Perritt noted that panel members generally agreed on the importance of establishing rules under which electronic forums operate. Rules need not be the same from forum to forum, as Perry pointed out, but they need to be explicit and give important consideration to the views of stakeholders—operators and users. Perritt pointed out that the disagreements were over the extent (if any) to which other mechanisms were needed to enforce those rules. 
SCENARIO 2: NEGATIVE COMMENTS HARM A THIRD PARTY

In an investment forum bulletin board hosted by a commercial network service provider, several users are discussing the merits of investing in XYZ Corp., a "penny stock" whose price can fluctuate widely on relatively small trading volume. John, a regular user of the bulletin board, has gained some credibility with other users for his stock picks. He posts a note on the bulletin board that says: "I was heavy in this stock 4 months ago, but sold most of my holdings last month. The company is out of cash, and sales are in the tank. Inside management is waiting for the stock to go up a quarter point
to dump some big positions." The next day XYZ's shares fall precipitously on heavy trading.

Issue: Provider Responsibility and Liability

Lessig argued that because the First Amendment applies less forcefully in the context of commercial speech, it may be better to first ask to what extent it is reasonable to make individuals (in this case, the provider) seek out and screen information to avoid harming others. Lessig outlined two points relevant to the problem. First, he offered the analogy of electronic networks functioning as the National Enquirer of cyberspace, in that no one could reasonably rely on anything that was said. Thus, "the person has no claim they had been harmed, because they shouldn't have relied upon [the information], and nobody should have relied upon it." That is not to say, however, that speakers should be immune from liability, Lessig added. It is only when individuals feel responsible for their words and actions that others begin to give them credibility, he noted. Thus, if the network is to gain credibility there must be some responsibility.

Adler was disturbed by the National Enquirer analogy, arguing that credibility is essential if electronic networks are to evolve into the marketplace of the future. Trying to ward off liability by warning users to distrust the network is the wrong approach, he said; users must feel comfortable knowing that they do not routinely risk any type of injury and that injuries, when sustained, can be redressed. Moreover, the First Amendment has been linked to Oliver Wendell Holmes's notion that the marketplace is where truth will prevail through a free competition of ideas, Adler added.5

5 Specifically, Justice Holmes wrote that "when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution. It is an experiment, as all life is an experiment." See Abrams v. United States, 250 U.S. 616, 630 (1919).

Lessig's second point was that what is reasonable liability depends primarily on technology. If there were a simple way to search the bulletin board for harmful information, then common law courts
might find it reasonable to impose that duty on providers, he said.6 However, emphasizing his earlier point, Lessig pointed out that common law is made by judges, who may be unfamiliar with electronic networks. "The biggest and, I think, most frightening thing about regulation in this area is that the people doing the regulation have no experience at all," he said. "They have no conception of what cyberspace looks like or even feels like."

6 If the cost of searching the bulletin board for harmful postings were less than the value of the damage caused by information likely to be on the bulletin board, common law would hold the bulletin board owner liable for not undertaking a search if damaging information were indeed found on the board. (More formally, the comparison would be between the cost of the search and the expected value of the damage associated with harmful information, i.e., the product of the likelihood that the information will be found on the board and the damage it would cause if it were indeed found.) See Richard Posner, Economic Analysis of Law, 4th Ed., Little, Brown, Boston, Mass., 1992, pp. 163-169.

There was some dispute on this point. Marc Rotenberg, a privacy advocate with Computer Professionals for Social Responsibility, suggested that Cubby Inc. v. CompuServe Inc. (described in Chapter 3) was a landmark free speech case that demonstrated that judges are beginning to understand electronic networks. Rotenberg sympathized with CompuServe's argument that it acted as a distributor, not a publisher, and that it did not know and had no reason to know of the statements in question. The court agreed, emphasizing the First Amendment and saying that CompuServe deserved the same protection as a traditional news vendor. Rotenberg was most enthusiastic about the Cubby decision, calling it "wonderful" and "sensible." He said the court appeared to be promoting electronic networks as an information resource by limiting but not eliminating liability for providers. Moderator Perritt suggested that even if judges didn't understand the new technology, they could be educated about specialized subjects through the testimony of expert witnesses and amicus briefs and that such approaches could be encouraged for cases involving electronic networks.

Others, however, warned that the decision establishes troubling precedents. Adler was disturbed by the finding that CompuServe was not responsible because it had little or no editorial control. In fact, the provider did have the technological ability to exercise control but chose not to do so, placing that responsibility on a contractor, Adler noted; he further wondered whether publishers should be allowed to pass liability on to a contractor simply by declaring themselves to be distributors and thus lowering their liability. "So when
we are talking about rights and responsibilities in this world, one of the things we have to consider is, do the responsibilities flow from whether or not you actually have the capability, whether or not it is feasible for you to exercise control, or whether or not you choose to place yourself in that position of exercising control?" In addition, rights and responsibilities flow from a social decision regarding whether it is beneficial to grant individuals the rights or to saddle them with the responsibilities. The central problem with the Cubby decision, Adler said, is lack of clarity, or failure to distinguish among the various electronic services and formats. Adler said he feared that some readers of the decision would conclude that solicitation of criminal activity, defamation, and other crimes or torts could be carried out on electronic networks without liability. Hughes said he was frightened by the notion that a provider's capability to review materials translates into an obligation to do so. It is the provider that defines its services and thus its obligations, he argued. (Of course, a provider never has complete freedom, as it is subject to the laws of the jurisdiction in which it operates.) Hughes said further that even a single system may have multiple roles. For example, the Prodigy Services Company is a publisher, which implies some review of content; but the service also carries free speech, for which it should not be held accountable, according to Hughes. The point regarding multiple roles was reinforced by Davis Foulger from IBM, who argued that different types of computer-mediated communications (e.g., electronic newsletters, conferences, and mail) may carry different types of responsibility. Electronic newsletters, he suggested, may be entirely analogous to print-based newsletters, with all of the liabilities of the latter carrying over to the former, whereas an unmoderated conferencing forum may carry fewer responsibilities. 
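The cost comparison Lessig invoked (footnote 6) is, in effect, the common law negligence calculus: a duty to search arises when the burden of searching is less than the expected damage. A minimal sketch follows; all figures are hypothetical, chosen only to illustrate the comparison.

```python
# Sketch of the cost-benefit test in footnote 6: a bulletin board owner
# might be held liable for not searching when the cost of the search is
# less than the expected value of the damage (probability that harmful
# information is on the board times the damage it would cause).
# All numbers below are hypothetical.

def duty_to_search(search_cost: float,
                   p_harmful: float,
                   damage_if_harmful: float) -> bool:
    """True when expected damage exceeds the cost of searching."""
    expected_damage = p_harmful * damage_if_harmful
    return search_cost < expected_damage

# A cheap automated scan versus a prohibitively costly manual review:
cheap_scan = duty_to_search(search_cost=1_000, p_harmful=0.10,
                            damage_if_harmful=100_000)      # 1,000 < 10,000
manual_review = duty_to_search(search_cost=500_000, p_harmful=0.10,
                               damage_if_harmful=100_000)   # 500,000 > 10,000
```

On these hypothetical numbers a duty would attach to the cheap scan but not to the manual review, which is why Lessig tied reasonable liability to the state of the technology.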
Given the flexibility offered by electronic networks to define the type of communication, Hughes and Foulger agreed that the self-definition ought to be the element that defines liability; for example, in the case of a commercial network whose contractual agreement with users declares that the network owns all data on its system, the network should be subject to all of the legal mechanisms used to hold an individual liable for that data. Hughes went on to note a further complication to the self-definition process: the extensive interconnection between networks means that a given network may be unable to control the input to it. Hughes asked: "To what extent am I accountable for what someone else says on another system that happens to be displayed on mine?" Godwin argued that Cubby posed a "knew" or "should have known"
standard of liability for defamation, and he thought that the decision was proper. But Godwin also argued that the case does not imply that liability results from a complete failure to exercise discretion or a decision not to exercise discretion. He noted that "Cubby is based on [Smith v. California],7 [in which] bookstores were held to be not responsible for monitoring and not responsible for the specific contents of their books [even though] bookstores can exercise discretion about what they carry." He further argued that a broad kind of discretion need not necessarily result in liability; such a freedom from liability would be important to forum operators, who need to be able to shape the character of their forums but who also want to avoid liability for the specific contents of those forums.

7 In this decision (361 U.S. 147 (1959)) the U.S. Supreme Court held that a distributor's lack of editorial control precluded states from holding the distributor strictly liable for publication contents (A.J. Sassan, "Cubby Inc. v. CompuServe Inc.—Comparing Apples to Oranges: The Need for a New Media Classification," Software Law Journal, Vol. V, 1992).

Providers were satisfied with the Cubby decision but said it leaves some questions unanswered. Perry, while pleased that CompuServe was not held liable, said a provider's obligation remains unclear. For instance, if a system is large, with perhaps 50 million messages posted each year, how far should the operator have to go in investigating allegations about a piece of information? "Do I have an obligation to go find that thing in the 50 million [notes]? … Once I find it, do I have an obligation to go out and discover the facts as to whether or not what the individual was saying was true …?" Perry argued that, in the scenario he just outlined, both a publisher and a distributor have an obligation to determine the facts. This responsibility is clearer in the case of a publisher, he noted; in the Cubby case, the court found that if a distributor is notified of a problem, then it, too, faces some liability unless it takes action. Moreover, he warned that "it is very dangerous to make the publisher/distributor distinction when it comes particularly to commercial operators, because in fact they are a wide spectrum of different kinds of beasts. At one end they may very well be publishers. On another end they may be pure distributors. On another end they may be none of either. So I think it's dangerous for us to take those preexisting analogies and try to apply the law that we have today that applies to those areas, to this new medium." In general, liability protection models from other media may not be appropriate on electronic networks, Adler said, noting that the
distributor model suggests that "the best way to avoid any possible liability is to exercise the least control. [But] I'm not sure that is socially responsible." Rule by law is not necessarily so bad, he said, pointing out that, historically, citizens have not objected to laws per se, but rather to the arbitrary exercise of law—law without participation and consent. As to how liability should be applied, Adler emphasized the distinction between a common carrier and a service provider: the former is virtually immune from liability because it is legally required to provide equal access to all users without editorial control over content, whereas the latter can be held responsible if it is notified of a problem and does nothing to eliminate the continuing harm. It is not clear which definition applies in Scenario 2.

Issue: User Responsibility and Liability

Liability for defamation is a critical issue in the electronic environment. In common law, the question of defamation rested on the truth or falsity of a statement about an individual. However, in light of First Amendment considerations, the Supreme Court has focused on the degree of fault that can be attributed to the speaker, Adler said. Of course, if the speaker (or poster) of a defamatory message is truly anonymous (i.e., if it is genuinely impossible to determine the identity of the speaker with certainty, as might be the case if the message originated on another network, for example), then the matter ends there, and no party can be held liable. True anonymity currently is rare on electronic networks.8 In many more cases, the true identities of speakers are confidential (i.e., the identities of speakers are withheld as a matter of policy on the part of the service provider, although the provider does in fact know these true identities).

8 Anonymous use of electronic networks is nevertheless expected to increase. Even today, there are so-called "anonymous remailers" that accept e-mail messages and forward them to their intended recipients stripped of any identifying information.

In such cases, Adler maintained, "the question is whether the service provider is willing to accept the liability for the harm that is caused by maintaining the promise of confidentiality, or whether … there is a balance … that says there are compelling interests which outweigh the values of the promise of confidentiality and require disclosure." But Perry and Hughes took a somewhat different tack. Perry argued that a user's liability on a bulletin board ought to be the same
as in any other circumstance. Hughes agreed, saying that John's note in Scenario 2 was an individual act of irresponsible speech, for which the provider was not responsible. Both suggested that under the circumstances of Scenario 2, the provider should not have to bear the liability to which Adler referred. Hughes further argued that when identities are kept confidential as a policy choice on the part of the provider, a complainant should seek the assistance of law enforcement authorities and show probable cause for issuance of a search warrant (in a criminal case) or a subpoena (in a civil case) that would compel the provider to disclose the sender's identity. In this case, the decision concerning whether to divulge the speaker's identity does not rest with the provider but with law enforcement authorities. If the provider in Scenario 2 maintained the confidentiality of the originator of the communication, thus shielding the only potential defendant, then the provider still would not be obligated to report a violation to law enforcement authorities, Hughes said, citing the Electronic Communications Privacy Act of 1986 (Public Law 99-508). However, he said, the provider has some fundamental ethical responsibility. "Some things are right and some things are wrong," Hughes argued.

DISCUSSION AND COMMON THEMES

The balance between free speech and other values is tested regularly, and the regulation of speech comes in many forms. Even in academia, usually regarded as the bastion of free speech, users sometimes are banned from networks. Carl Kadie, a graduate student in computer science at the University of Illinois and moderator of the academic-freedom mailing list on the Internet, cited the example of an Iowa State University student being expelled from the campus network for copying materials from an erotic forum into an open forum meant for discussion of newsgroups and newsgroup policy.
The expulsion was lifted after protests from Internet and campus network users, but access to the erotic forum remains restricted. Kadie went on to argue that because many universities are state universities and because many parts of the Internet are owned or leased by federal or state government, lawsuits could be filed in these cases based on the First Amendment,9 although the law is weaker for cases involving library policy on selection (or, more to the point, exclusion) of electronic resources. "I think many of these arguments have to be won or lost on the moral argument and [by] appealing to freedom of expression—academic freedom—and can't depend so much on legal protections," Kadie said.

9 The First Amendment applies only to public forums, which can result from government funding or a dedication to public use (e.g., an airport or a shopping mall, even if privately funded), and different types of speech may have different levels of protection.

Still, users generally can be more outspoken on university networks than anywhere else. Kadie expressed the hope that those who determine rules for behavior on electronic networks will "learn from the experience (and, hopefully, wisdom) codified in long-standing academic policies and principles. … I don't think academia should be the only forum for free speech. … I would hope that technical solutions such as 'kill files'10 and the ability to create new forums [as is done in Santa Monica] would be enough to have people [on nonacademic networks] regulate themselves." It may be difficult to transfer this principle of openness to other arenas. For instance, corporate executives may want to control postings of material relevant to company business. Economic considerations also may argue for the regulation of speech at times. This is particularly true for commercial information providers, as clients demand to be insulated from certain types of content, noted Perry. Perry said, "There are different environments in which you have to deal with the same set of problems, maybe with a different view and a different historical and traditional background." These differences in perspective can lead to conflicting views about what speech is or is not appropriate for public viewing or exposure. Finally, it was argued that free speech was not possible when one was denied access to the electronic environment. As Sara Kiesler noted, "The purest form of censorship is absence of access. If you can't have access to a network at all, then you are completely censored from that forum. Therefore, we have to know who has access, and, even for those who have physical access, who's being driven off."
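The "kill files" Kadie mentioned (footnote 10) are among the simplest self-regulation mechanisms: each reader keeps a list of accounts whose messages are dropped before display. A minimal sketch follows; the account names and message format are hypothetical.

```python
# Minimal sketch of a "kill file" (see footnote 10): a per-reader list
# of network accounts whose messages are deleted automatically from the
# set of messages shown to that reader. Accounts and messages here are
# hypothetical.

kill_file = {"troll@example.edu"}  # accounts this reader has chosen to silence

messages = [
    {"from": "alice@example.edu", "body": "Conference agenda posted."},
    {"from": "troll@example.edu", "body": "Inflammatory rant."},
]

# Filtering happens on the reader's side: nothing is removed from the
# network itself, so other users still see the silenced account's posts.
shown = [m for m in messages if m["from"] not in kill_file]
```

Because the filter is applied only at display time by the individual reader, it suppresses nothing for anyone else, which is why such tools are often offered as an alternative to centralized censorship.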
10 A "kill file" is a list of users (i.e., network accounts) whose messages are to be deleted automatically from the set of messages shown to the owner of the kill file.

There was broad agreement among panel participants that free speech is not an absolute right to be exercised under all circumstances. The relevant issue for free speech is what circumstances justify the uses of which mechanisms to discourage and/or suppress certain types of speech. The nature of the circumstances in which free speech should be discouraged is a matter of political and social debate, but it is clear that policymakers have a variety of such mechanisms at their disposal to discourage or suppress expression:
Educate and persuade: At a minimum, most people agree that an important step involves persuasion and education, relying on voluntary means to dissuade people from saying things that are arguably harmful, objectionable, or offensive to others.

Rely on contractual provisions: Someone agrees, as a condition of use, to abide by regulations regarding the content of speech.

Weigh political considerations: An institution may wish to weigh how it will be seen in the eyes of its relevant public in determining the nature of its response.

Rely on market mechanisms: Grass-roots pressure on suppliers of information services often forces change because suppliers fear losing the business of those complaining.

Explicitly rely on First Amendment freedoms: A state university may have far fewer options for regulating speech because it can be regarded as an arm of government.

How are these issues different in the electronic networked environment? Lessig asserted that

in ordinary life, social norms are created in a context where other things besides speech are going on, things such as exclusion, anger, and the impact of local geographies. … [These are the] sorts of things that help the process of norm creation in a speech context. [But] what makes electronic networks so difficult from the perspective of creating and molding norms is that the interactive human behavior on these networks is mostly if not entirely pure speech. From the constitutional perspective, this is the first environment in which society has had to face the problem of creating and changing norms when the only thing it is doing is trying to regulate speech.

The steering committee generally concurred with this assessment, concluding that

Networks offer a greater degree of anonymity than is possible for speakers under other circumstances.

Networks enable communications to very large audiences at relatively low cost as compared to traditional media.
Networks are a relatively new medium for communications, and there are few precedents and little experience to guide the behavior of individuals using this medium.

As a result,

Speakers are less familiar with a sense of appropriateness and ethics here (treating a big megaphone as though it were a smaller one); and

Policymakers are less confident in this domain.