12

Technology-Based Tools for Users

This chapter discusses tools for the end user, here the party responsible for making decisions on behalf of the child or youth in question. Thus, the "end user" may be a parent in the case of a home or family, a maker of school policy (e.g., a school principal or a school district), a maker of policy for a public library (or library system), or some other similar individual or body. The focus on tools for end users is important because such tools are intended to empower end users by providing a wide range of options for the children in their care, more or less regardless of what other Internet stakeholders do or do not do. (The only exception concerns instant help, in which the user is the child seeking help.) Table 12.1 provides a preview of this chapter.

12.1 FILTERING AND CONTENT-LIMITED ACCESS

Filters are at the center of the debate over protecting children and youth from inappropriate sexually explicit material on the Internet. A good filter allows a specific filtering policy to be implemented with accuracy, has enough flexibility, and can be implemented with a minimum of undesirable side effects. Box 12.1 describes the dimensions of choice that GetNetWise identifies for filters.

12.1.1 What Is Filtering and Content-Limited Access?

Today, Internet access is largely unrestricted.

TABLE 12.1 Technology-Based Tools for the End User

Filter
  Function: Block "inappropriate" access to prespecified content; typically blocks specific Web pages, may also block generic access to instant messages, e-mail, and chat rooms.
  One illustrative advantage: Can be configured to deny access to substantial amounts of adult-oriented sexually explicit material from commercial Web sites.
  One illustrative disadvantage: In typical (default) configuration, generally denies access to substantial amounts of Web material that is not adult-oriented and sexually explicit.
  Voluntary versus involuntary exposure: Protects against both deliberate and inadvertent exposure for sites that are explicitly blocked; can be circumvented under some circumstances.

Content-limited access
  Function: Allow access only to content and/or services previously determined to be appropriate.
  One illustrative advantage: Provides high confidence that all accessible material conforms to the acceptability standards of the access provider.
  One illustrative disadvantage: May be excessively limiting for those with broader information needs than those served by the access provider.
  Voluntary versus involuntary exposure: Very low possibility of deliberate or inadvertent exposure, given that all material is explicitly vetted.

Labeling of content
  Function: Enable users to make informed decisions about content prior to actual access.
  One illustrative advantage: Separates content characterization (e.g., sexually explicit or not) from decisions to block; multiple content raters can be used.
  One illustrative disadvantage: Effectiveness depends on broad acceptance of a common labeling framework.
  Voluntary versus involuntary exposure: Likelihood of exposure depends on accuracy of labels given by labeling party.

Monitoring with individual identification
  Function: Examine a child's actions by an adult supervisor in real time or after the fact.
  One illustrative advantage: Rarely prevents child from reaching appropriate material that might have been mistakenly flagged as inappropriate.
  One illustrative disadvantage: Potential loss of privacy zone for child.
  Voluntary versus involuntary exposure: Warnings can help to deter deliberate exposure; ineffective against inadvertent exposure.

Monitoring without individual identification
  Function: Watch the collective actions of a group (e.g., a school) without identifying individuals.
  One illustrative advantage: Can provide useful information about whether or not acceptable use policies are being followed.
  One illustrative disadvantage: Does not enable individual accountability for irresponsible actions.
  Voluntary versus involuntary exposure: Warnings can help to deter deliberate exposure; less effective against inadvertent exposure.

Spam-controlling tools
  Function: Inhibit unsolicited e-mail containing sexually explicit material (or links to such material) from entering child's mailbox.
  One illustrative advantage: Can reduce the volume of inappropriate e-mails significantly.
  One illustrative disadvantage: Among users concerned about losing personalized e-mail, reduced tolerance for false positives that block genuinely personal e-mails incorrectly identified as spam.
  Voluntary versus involuntary exposure: Mostly relevant to inadvertent exposure (i.e., unsought commercial e-mail containing sexually explicit material).

Instant help
  Function: Provide immediate help when needed from an adult.
  One illustrative advantage: Provides guidance for child when it is likely to be most effective, i.e., at time of need.
  One illustrative disadvantage: Requires responsive infrastructure of helpers.
  Voluntary versus involuntary exposure: Mostly relevant to inadvertent exposure.

NOTE: The "end user" is generally the adult supervisor who makes decisions on behalf of a child. (This is true in all cases except for instant help, in which the user is the child seeking help.)

That is, a user who does not take some explicit action to limit the content to which he or she is exposed has access to any content that the Internet provides through Web pages, e-mail, chat rooms, and the like. This report uses the term "filter" to refer to a system or service that limits in some way the content to which users may be exposed. The vast majority of filters block access to content on specific Web sites (though these sites may be specified as a class). Other filters also block access on the basis of keywords appearing either in a user's query to a search engine or contained in the about-to-be-displayed Web site.1 Some filters provide the capability to block more broadly, so that an individual may be denied access to other common Internet services, such as interactive services (e.g., e-mail, chat rooms), Usenet newsgroups, file downloading, peer-to-peer connections, or even e-commerce with credit card usage.

Users who wish to use a filter have a number of technical options:

Client-side filters. Filters can be installed on the devices (today, desktop and laptop personal computers) that serve as the Internet access point for the end user. Client-side systems are installed as local software, in the same way that any other software is locally installed, except that standard uninstallation (which would disable the filter) can be done only by someone with the appropriate password. The party with the appropriate password is also generally responsible for configuring the profile of the system for those they are seeking to protect from inappropriate materials. A common personal use is a client-side filter installed at home by a parent wishing to protect children from inappropriate material. Client-side filtering is also feasible in an environment in which only some access points in a local area network must be filtered (for example, in a library attempting to segregate "children's areas" from areas for all patrons).

Content-limited Internet service providers. As a feature of their offerings, a number of ISPs provide Internet access only to a certain subset of Internet content. Content-limited ISPs are most likely to be used by organizations and families in which the information needs of the children involved are fairly predictable.

1The actual content of a list of such keywords is usually held as proprietary information by the vendor of the filter. However, such lists include a variety of "four-letter words" associated with sex, reproduction, and excretory functions, words such as "sex," "naked," and so on. Other words that might be singled out for inclusion include "bomb," "bondage," "fetish," "spunk," "voyeurism," "babe," "erotica," "gay rights," "Nazi," "pot," "white power," "girls," and "hard-core pornography." (These examples are taken from Lisa Guernsey, 1999, "Sticks and Stones Can Hurt, But Bad Words Pay," New York Times, April 9.) Also, a more sophisticated form of filtering is based on the analysis of word combinations and phrases, proximity of certain keywords to certain other keywords, the presence of various URLs, and so on. In some cases, text-based analysis may also be combined with the analysis of images on the Web page in question. As a rule, tools based on this more sophisticated filtering are not broadly marketed today.

Today, such dedicated ISPs are a niche market and typically have subscriber bases in the thousands to tens of thousands. All subscribers, which are often families and less often institutions, are subject to the same restrictions.

Some content-limited ISPs, intended for use by children, make available only a very narrow range of content that has been explicitly vetted for appropriateness and safety. Thus, all accessible Web pages have been viewed and assessed for content that is developmentally appropriate, educational, and entertaining. (This approach is known as "white listing"; all content from sources not on a white list is disallowed,2 as discussed in Section 2.3.1.) Chat rooms and bulletin boards are monitored for appropriate content, and those violating rules of chatting or message posting are disinvited, forcibly if necessary. E-mail and instant messages (IMs) can be received only from specified parties and/or other users of the system. Other content-limited ISPs, intended for use by both children and adults, allow access to all content that is not explicitly designated by the ISP as inappropriate. Monitoring and limits on e-mail are less strict or non-existent.

Some services allow multiple "login names" or "screen names." A screen name is an online identity, similar to a CB radio "handle." Each online session uses a single screen name, and families can choose not to give the adult or "administrative" screen name password to youth. An online account may have multiple screen names, and a user with appropriate privileges (usually associated with paying for the master account) can create arbitrary screen names at will for himself or someone else on his account as long as those names are not already in use. With each name can be associated unrestricted access or more limited access to online content (which may include both Internet and proprietary content). In the case of America Online (AOL), a parent can identify the age of the child for whom he or she is setting up a screen name. AOL then puts into place default limitations based on the age of the child, which the parent can then adjust if necessary. Such limitations might include, for example, Web access only to age-appropriate content or to everything except explicitly mature themes, receipt of e-mail only without file attachments or embedded pictures, and access only to chat rooms intended for children (or no chat room access at all).

2Note that sources on a white list can be specified in advance or identified as appropriate because a source contains one or several "good words" that may be found on a "good-word" list. For an example of the latter, see Gio Wiederhold, Michel Bilello, Vatsala Sarathy, and XioaLei Qian, "A Security Mediator for Health Care Information," pp. 120-124 in Proceedings of the 1996 American Medical Informatics Association Conference, Washington, D.C., October.

Server-side filters. Server-side filtering is useful in institutional settings in which users at all access points within the institution's purview must conform to the access policy defined by the institution. Server-side filtering might be used by a school district that provides Internet service to all schools in the district or a library system that provides Internet service to all libraries in the system. Server-side filters are located on systems other than the client.3 An institution may contract with an ISP to implement its filtering policy, or it may install a filter in the server that manages a local area network (e.g., that of a school district or a library system).4 (Note that there are no fundamental technological impediments to having filtering policies that differentiate between schools within a district, so that a high school might operate under a policy different from that for an elementary school. Such differentiation is simply a matter of cost.)

Search engine filters. In a special class of server-side filters are those that are today part of major search engines such as Google, AltaVista, and so on. Each of these search engines has the capability of enabling an Internet safety filter (hereafter "filtered search engine"). When activated by the user, these filters do not return links to inappropriate content found in a search, but they also do not block access to specifically named Web sites (so that a user knowing the URL of a Web site containing inappropriate sexually explicit material could access it).5 Other search engines are explicitly designed for use by children. For example, Lycos and Yahoo both offer a special kids-oriented version of their general-purpose search engine that restricts the universe of a search to child-appropriate areas. (This is the white-list approach.)

3The distinction between server-side filtering and content-limited ISPs is not a technical one, because content-limited ISPs use server-side filters. Rather, the point is that server-side filtering provides a degree of institutional customization that is not possible with content-limited ISPs, which tend to offer one-size-fits-all filtering policies.

4The use of server-side filters may degrade performance. In particular, a server-based filter may rely on proxy servers that are unable to take advantage of the caching techniques that are often used by major Internet providers to speed the retrieval of commonly requested pages. Such a filter would be forced to retrieve information from its host server and take whatever performance hit that might entail. In other cases, performance is improved because without irrelevant material taking up space in the cache, retrieval of relevant material is faster.

5In practice, a responsible adult would set the filtering provision to the "on" setting and save the configuration. Thereafter, other requests on that client to the search engine would encounter the "on" setting. The setting can also be turned off through entering a password known only to the individual who initially set it (a possible problem if that person is the teenager in the household who manages the family's information technology).

Filters can be used to block certain incoming inappropriate information (an application that is the most common use of filters), to block access to certain Internet services (e.g., file downloads), or to block selected outgoing information. All technology-enforced methods for blocking access to inappropriate information require a determination that certain pieces of content are inappropriate.6 Content can be deemed inappropriate on the basis of the methods discussed in Section 2.3.1.

Many filter vendors establish lists of "suspect" Web sites (compiled as a list of specific URLs and/or IP addresses) deemed sources of inappropriate content.7 The number of such sites may range from several hundred thousand to 2 million. In addition, many vendors establish lists of keywords (typically hundreds of words) that represent inappropriate content. Far fewer employ image analysis or statistical techniques to analyze text.

In addition, techniques for textual and image analysis can be used to identify and block e-mail containing inappropriate content and for blocking outgoing content as well. For example, the technology that identifies inappropriate content by searching for keywords can also prevent those words (or some set of them) from being used in e-mail messages or in chat rooms. (In this case, the adult supervisor can augment the keyword list to include certain phrases that should not appear, such as specific addresses or phone numbers.)

6Note that content that is transmitted through certain channels such as attachments to e-mail, videoconferences, instant messages, or peer-to-peer networking (in a Gnutella-like arrangement) is very difficult (arguably impossible) to block selectively, though a filter can block all interaction through these channels. Moreover, to the extent that the content of traffic is determined interactively, neither labeling nor site lists are likely to provide a sufficient basis. The reason is that interactive sources, almost by definition, can support a variety of different types of interaction, the best example of which is an online friend with whom one may exchange sports trivia, conversation about school homework, and inappropriate sexually explicit material. Only real-time content recognition has a chance of filtering such content.

7Note also that the list of blocked sites often includes sites that could help users circumvent the basic filtering. Thus, sites providing information on how to circumvent filters are often included on the list, and a number of filters block sites that allow language translation (Seth Finkelstein and Lee Tien, 2001, "Blacklisting Bytes," white paper submitted to the committee) or access to Web archives (Seth Finkelstein, 2002, The Pre-Slipped Slope: Censorware vs. the Wayback Machine Web Archive).
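The list-based approach just described can be sketched in a few lines of code. The example below is purely illustrative and not drawn from any vendor's product: the blocked host, the keyword list, and the supervisor-added phrases are hypothetical placeholders, and a real filter would work with far larger lists and more sophisticated text analysis.

    # Illustrative sketch only: a URL blocklist combined with a keyword/phrase
    # list that an adult supervisor can extend, as described in the text above.
    # All entries here are hypothetical placeholders.
    from urllib.parse import urlparse

    BLOCKED_HOSTS = {"blocked-example.test"}              # stands in for a vendor URL/IP list
    KEYWORDS = {"keyword-a", "keyword-b"}                 # stands in for a vendor keyword list
    SUPERVISOR_PHRASES = {"123 main street", "555-0100"}  # phrases added by a supervisor

    def is_blocked(url, page_text=""):
        """Return True if the URL or the page text matches any list."""
        host = (urlparse(url).hostname or "").lower()
        if host in BLOCKED_HOSTS:
            return True
        text = page_text.lower()
        if any(word in text for word in KEYWORDS):
            return True
        return any(phrase in text for phrase in SUPERVISOR_PHRASES)

    # Example: is_blocked("http://blocked-example.test/page") returns True.

The same matching logic, applied to outgoing text rather than incoming pages, corresponds to the outbound blocking of keywords in e-mail and chat described above.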

Filters are perhaps the most widely deployed of all technological tools intended to protect children from exposure to inappropriate material. The majority of schools have deployed filters,8 while around 25 percent of libraries filter at least some workstations.9 Through AOL's parental controls (Box 12.2), a substantial number of Internet-using children enjoy the benefits and endure the costs of filtering. However, as a percentage of all children using the Internet, the fraction whose Internet access is filtered apart from school usage is small.10

It is noteworthy that filters are increasingly common in corporate and business settings and thus affect the Internet use of adults.11 Many companies, driven primarily by concerns about productivity and time wasted on non-business Internet activities and about the possible creation of hostile work environments and the consequent liability, use filters to prevent inappropriate use of company IT facilities.12

12.1.2 How Well Does Filtering Work?

Denying access to inappropriate material through technological means, filters are intended to protect against both inadvertent and deliberate access.

8According to the National Center for Education Statistics, nearly three-fourths of all schools use blocking or filtering software. See A. Cattagni and E. Farris. 2001. Internet Access in U.S. Public Schools and Classrooms: 1994-2000. NCES 2001-071. Office of Educational Research and Improvement, U.S. Department of Education, Washington, D.C.

9By contrast, around 57 percent of public libraries do not filter Internet access on any workstation, while about 21 percent filter access on some workstations. About 21 percent filter all workstations. See Norman Oder. 2002. "The New Wariness," The Library Journal, January 15.

10A survey conducted by Family PC magazine in August 2001 found that of 600 families surveyed, 26 percent used parental controls of some kind. About 7 percent of those using parental controls (about 1.8 percent of the total) used off-the-shelf store-bought filtering packages. The rest used filtering offered by an Internet service provider. (This study is not available in print, because it was scheduled for publication in October 2001, and Ziff Davis, the publisher of Family PC, terminated the magazine before that issue was printed.)

11For example, a survey taken by the American Management Association in 2001 found that 38 percent of the firms responding do use blocking software to prevent Internet connections to unauthorized or inappropriate sites. Seventy-eight percent of the responding firms restricted access to "adult" sites with explicit sexual content, though it is not clear how the remaining 40 percent are enforcing such restrictions. (The survey suggests that they are doing it by actively monitoring Internet use.) See American Management Association. 2001. 2001 AMA Survey, Workplace Monitoring and Surveillance: Policies and Practices.

12Potential overlap between the business market and the school and library filtering market raises the following operational concern: a blocked category may be defined by a vendor so that it is appropriate in a business environment, but that definition may not be appropriate in a school or library context. For example, information about sexually transmitted diseases, safe sex practices, and pregnancy may not be necessary in most business environments (and hence an employer may have a legitimate business reason for blocking such information), but many would argue that older students using school facilities should not be blocked from receiving such information.

However, as discussed in Section 2.3.1, all filters are subject to overblocking (false positives, in which filters block some appropriate material from the user) and underblocking (false negatives, in which filters pass some inappropriate material to the user). While the issue of underblocking and overblocking should not, in and of itself, rule out filters as a useful tool, the extent of underblocking and overblocking is a significant factor in understanding and deciding about the use of filters.13

There is no agreed-upon methodology for measuring a filter's effectiveness, as might be indicated by an overblocking rate and an underblocking rate (discussed in Section 2.3.1).14 Filter vendors sometimes provide estimates of overblock and underblock rates, but without knowing the methodology underlying these estimates, the cautious user must be concerned that the methodology is selected to minimize these rates. (The discussion in Box 2.7 illustrates some of the problems in estimating these rates. Note further that the lists of blocked Web pages change constantly, with both additions and subtractions made regularly.)

Underblocking results from several factors:

New material appears on the Internet constantly, and the contents of given Web pages sometimes change. When content changes, the judging parties must revisit the sources responsible for the content they provide frequently enough to ensure that inappropriate information does not suddenly appear on a previously trusted source or that the inappropriate material remains on the Web pages in question. Technology is available that can indicate if a page has changed (thus flagging it for human assessment), but not to tell if it continues to be developmentally and educationally appropriate. Vendors of filtering systems generally provide updates from time to time, but there is inevitably a lag between the time inappropriate material first appears and the time that item is entered into the list of blocked sites. (Content-based filtering systems are not subject to this particular problem.)

The algorithms (i.e., the computational techniques) used to identify inappropriate material are imperfect. For example, the emergence of

13Note also that legal challenges brought against the mandated use of filters in institutional settings have relied significantly on the existence of underblocking and overblocking as inherent flaws in the technology that make filters unsuitable for such use.

14For "bake-offs" comparing Internet filters, see Christopher D. Hunter, 2000, "Internet Filter Effectiveness: Testing Over- and Underinclusive Blocking Decisions of Four Popular Filters," Social Science Computer Review 18 (2, Summer); Karen J. Bannan, 2001, "Clean It Up," PC Magazine, September 25; and "Digital Chaperones for Kids," Consumer Reports, March 2001. For a critique of the Consumer Reports analysis, see David Burt, 2001, "Filtering Advocate Responds to Consumer Reports Article," February 14.

actions is everything, and it is the nature of the existing parent-child relationship, the intent of the monitoring process, and how it is carried out that counts in understanding its ultimate implications.

Is the monitoring of children and adolescents a step in the erosion of privacy for all citizens? Many people believe that it is, and point to other measures that they find equally disturbing: employers who track the behavior of their employees, and commercial online sites that track the behavior and clicks of those who pass through them. The monitoring of children raises special concerns because of the fear that a younger generation will grow up never having known a world in which they had rights to privacy, and thus never realizing what rights they might have lost. Others argue that such fears are overplayed, pointing to social and commercial benefits of increased customization of information delivery and an assumed lack of government interest in the affairs of ordinary people, as well as the fact that schools are expected to act in loco parentis with respect to the students in their care. Indeed, some believe that it would be a positive development in society if adults in all venues felt some responsibility for looking after the welfare of children and for supervising children when they are in a position to do so.45 Resolving this question is beyond the scope of this study, but noting the question raised by monitoring of children is certainly not.46

12.2.8 Findings on Monitoring

1. Monitoring that warns when exposure to inappropriate material may occur is an alternative to filtering and eliminates the problem of overblocking associated with filtering.

2. Overt monitoring in concert with explicit discussion and education may help children develop their own sense of what is or is not appropriate behavior. Monitoring coupled primarily with punishment is much less likely to instill in children such an internal sense. In general, the simple presence of monitoring equipment and capabilities (or even the assertion of such capabilities) may create a change in behavior, though the change in behavior is likely to be restricted to the situation in which monitoring occurs.

3. Because human intervention is required on a continuing basis, monitoring is more resource-intensive than filtering. For the same reason, monitoring is more likely to be construed as a violation of privacy than are other techniques that simply block access.

45In this instance, there are debates about the role of technology in supervising children vis-a-vis an in-person adult doing so.

46A current CSTB study on privacy in the information age will address these issues.

4. Covert monitoring leads to an entirely different psychological dynamic between responsible adult and child than does overt monitoring. (Furthermore, because people habituate to warnings, children may respond to overt monitoring as though it were covert, i.e., more negatively.)

12.3 TOOLS FOR CONTROLLING OR LIMITING "SPAM"

"Spam," e-mail that is similar to the "junk mail" that an individual receives through the post office in the brick-and-mortar world, is sent unsolicited and indiscriminately to anyone with a known e-mail address. E-mail addresses can be purchased in bulk, just as regular mailing lists can be purchased: a typical rate for buying e-mail addresses is 5 million addresses for $50. Alternatively, e-mail addresses can be found by an e-mail address "harvester." (See Box 12.6 for more details.)

Spam refers to any form of unsolicited e-mail a person might receive, some of which might be sent by publishers of adult-content Web sites. A typical spam message with sexual content would contain some "come-on" words and a link to an adult-oriented Web site, but would in general arrive without images. Policy issues associated with spam are addressed in Chapter 10.

12.3.1 What Are Technologies for Controlling Spam?

Technologies for controlling spam fall into two categories: tools that seek to conceal the e-mail address (because if an e-mail address is not known, spam cannot be sent to it) and tools that manage spam once it has been received. Whether an individual can implement such tools varies with the ISP and/or e-mail service used.

To conceal e-mail addresses with certain ISPs, one can create different login names. For example, online services such as AOL enable a user to create more than one login name that can serve as an e-mail address. An individual can thus use this special login name for activities that might result in spam (e.g., for participating in a chat room). This special name becomes the attractor for spam, and mail received at that address can be deleted at will or even refused. A number of services (some free, some not) automate this process, enabling users to create special-purpose addresses that can be turned off or discarded at will. In addition, e-mail systems may allow a user to disallow all e-mail except e-mail from a specified list of preferred addresses and/or domain names.

To manage spam that does reach the user's mailbox, a number of tools are available. Most of these tools depend on the ISP or the use of an e-mail program with filtering capabilities (e.g., Eudora, Netscape Messenger, Microsoft Outlook). Spam e-mail can be identified and blocked on the basis of:

Content. Content-based analysis examines the text, images, and attachments of e-mails to determine their character. (The technology for content-based analysis is much like that for content-based filtering, as described in Section 2.3.1.)

Source. E-mail being received from a particular e-mail address or a particular domain name may be defined as spam after a few examples have been received. AOL mail controls are based in part on identifying certain sources as spam sources.

Addressees. For the most part, spam mail is not explicitly addressed to a specific individual. Instead, a spam e-mail is addressed to a large number of people in the "blind copy" field.

(On the other hand, "blind copies" (bcc: foo@example.com) sent to an individual and e-mail sent through mailing lists to which an individual has subscribed also make use of the hidden address technique.) Mail filters (e.g., one based on "procmail," a mail processing system for Unix and some other platforms) can check and file or delete an e-mail if it arrived at the user's location via blind copy addressing. (Steps can be taken to set up exceptions for mailing list messages and bcc: messages from legitimate correspondents.)

Users can also take a number of procedural measures. For example, Web sites often ask for information from the user. By inserting false information (e.g., indicating an income lower than is actually true), it is sometimes possible to avoid marketing attacks based on demographic information consistent with the group being targeted by the marketers.

Internet service providers also take measures to limit spam. For example, AOL limits how fast other users can repeatedly enter and exit chat rooms, because a pattern of repeatedly and rapidly entering and exiting chat rooms can also be an indication that someone is harvesting e-mail addresses. Most ISPs also have lists of known spammers from which they refuse to carry traffic.

12.3.2 How Well Do Spam-Controlling Technologies Work?

Spam-control technologies for dealing with e-mail that has arrived in one's mailbox suffer from the same underblocking and overblocking issues that are discussed in Section 12.1.2. One important issue is that spam often contains links to inappropriate sexually explicit material rather than the actual material itself, and no content-screening spam-controlling tool known to the committee scans the content for links that may be embedded in an e-mail.

That said, some spam-controlling technologies are highly effective against spammers. Those that restrict the e-mail that can be received to a given set of senders are very effective (i.e., do not accept mail unless the sender is on a list of permissible senders or from specific domains). On the other hand, they also sharply restrict the universe of potential contacts, so much so that a user may fail to receive desired e-mail. (For example, a friend who changes his or her sending e-mail address will not be able to reach someone who has identified a white list of preferred senders.)

ISP-based or e-mail-service-based spam filters are partially effective. For example, the researcher mentioned in Box 12.6 found that the spam filter on one popular free e-mail service reduced the volume of spam by about 60 percent, though it still passed more than one message per day. Spam filters that are based on content analysis techniques have all of the problems with false positives and false negatives that Web filters have.
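The sender-list and content-based approaches described in this section can be combined, as the hedged sketch below illustrates. The addresses and keywords are hypothetical placeholders, and the link-extraction step is included only to show the kind of screening that, as noted above, the content-screening tools known to the committee did not perform.

    # Illustrative sketch: a white list of permitted senders plus simple
    # content checks. Addresses and keywords are hypothetical placeholders.
    import re

    ALLOWED_SENDERS = {"friend@example.com"}      # user-maintained white list
    SPAM_KEYWORDS = {"keyword-a", "keyword-b"}    # stands in for a content keyword list
    LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

    def classify_message(sender, subject, body):
        if sender.lower() in ALLOWED_SENDERS:
            return "deliver"                      # mail from listed senders always passes
        text = (subject + " " + body).lower()
        if any(word in text for word in SPAM_KEYWORDS):
            return "spam"
        if LINK_PATTERN.findall(body):
            return "review"                       # embedded links could be checked against a blocklist
        return "deliver"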

12.3.3 Who Decides What Is Spam?

Some spam filters have preconfigured lists of known spammers. But in general, it is the user who must decide what is spam and what is not. Of course, the difficulty, especially for a child, is to recognize spam without opening the e-mail. In some cases, it is easy to recognize from the header or subject line. But many spam messages reveal themselves only when they are opened. (Note also that one person's spam is another person's service or information. Unsolicited notices for ski vacations or material on a local political candidate may be useful to some individuals and useless to others.)

12.3.4 How Flexible and Usable Are Products for Controlling Spam?

Because many ISPs filter out spam for most users, users of those ISPs need not take any action at all to reject spam. However, when spam leaks through the ISP filters (or if e-mail is not filtered for spam at all), as is typical of most e-mail, the user must take action.

Note that unsolicited e-mail, and the resources and attention it consumes, is not limited to sexually explicit e-mail for youth. It would be reasonable to assume that the number of parties sending unsolicited e-mail, the frequency with which they send it, and the volume that they send will all increase. Therefore, approaches to this problem are likely to be developed, regardless of the concerns about youth and sexually explicit material. However, this can easily turn into another race: as better spam-discriminating technologies are invented, alternative ways of wrapping the unsolicited e-mail are invented, and the cycle continues.

12.3.5 What Are the Costs and Infrastructure Required for Using Spam-Control Products?

Spam can be controlled in a variety of locations. When control is located at the receiver, locally installed software spam filters can help to process and eliminate spam. Conceptually, the cost of local spam filters is similar to that for content filters. However, spam filters must be integrated into software for processing e-mail in general.

Control points based in the network (or an ISP) are both more complex and more comprehensive. Some ISPs have extensive capabilities for detecting the sending of spam mail (e.g., by monitoring the volume of e-mail sent in a given time interval), preventing the "harvesting" of e-mail addresses, and so on, and developing these capabilities entails substantial effort.
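The volume-monitoring capability mentioned above can be illustrated with a simple sliding-window check; the window length and threshold below are arbitrary assumptions, not values used by any actual ISP.

    # Illustrative sketch of ISP-side volume monitoring: flag an account that
    # sends more than a threshold number of messages within a time window.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60          # assumed observation window
    MAX_PER_WINDOW = 100         # assumed threshold for "suspicious" volume

    send_times = defaultdict(deque)

    def record_outgoing(account, now=None):
        """Record one outgoing message; return True if the account exceeds the threshold."""
        now = time.time() if now is None else now
        times = send_times[account]
        times.append(now)
        while times and now - times[0] > WINDOW_SECONDS:
            times.popleft()
        return len(times) > MAX_PER_WINDOW

A similar check on how rapidly a user enters and exits chat rooms would correspond to the harvesting detection described earlier.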

Individual organizations often incur cost in configuring software and servers to stop spam from going through their own internal networks. Such efforts are often undertaken to help manage internal bandwidth more effectively.

Finally, there may be costs incurred for infrastructure that may be needed to support legislative efforts to curb spam. For example, one method for dealing with junk mail and phone telemarketers is to establish a clearinghouse where people can register their names, addresses, and phone numbers. But the effectiveness of this approach is based on the fact that it is in the marketer's self-interest to refrain from wasting phone and mail effort and time on people unlikely to buy. Because sending spam is so much cheaper than mail and phone calls, a similar approach is unlikely to work effectively without some kind of legal cause of action that can be taken against those who ignore the clearinghouse. (Policy-based solutions are discussed in Chapter 9.)

12.3.6 What Does the Future Hold for Spam-Controlling Systems?

There has been an acceleration of commercial organizations introducing their messages into schools, although almost always after signing an agreement with the school board (which agreement usually includes new funds flowing to the school to supplement the budget). However, schools may wish to install some mail filtering before the marketing department of some soft-drink manufacturer decides to send e-mail to students just before lunch, promoting its product while also, to prevent uproar, giving "the spelling word of the day," "the math hint of the day," or whatever. It is easier for the school district to add another item to the spam filter than to have its lawyer sue the sender of the e-mails. As in the case of age verification technologies, expanded use of "mail deflection" beyond issues of sexually inappropriate material may warrant the trouble of installing spam-controlling systems.

12.3.7 What Are the Implications of Using Spam-Controlling Systems?

As described in Chapter 9, legislative efforts to curb spam do have societal implications.

12.3.8 Findings on Spam-Controlling Technologies

1. Spam-controlling technologies generally do not allow differentiation between different kinds of spam (e.g., hate speech versus inappropriate sexually explicit material). Rather, they seek to identify spam of any nature.

2. Spam-controlling technologies that filter objectionable e-mail have more or less the same screening properties that filters have. That is, they do block some amount of objectionable content (though they do not generally screen for links, which are often transmitted in lieu of actual explicit content). However, they are likely to be somewhat less effective than filters at preventing such e-mail from being passed to the user because users are likely to be more apprehensive about losing e-mail that is directed toward them than about missing useful Web sites, and thus would be more concerned about false positives.

3. Behavioral and procedural approaches to avoiding spam (rather than filtering it) have at least as much potential as spam-controlling technologies to reduce the effect of spam. However, using such approaches adds somewhat to the inconveniences associated with Internet use.

12.4 INSTANT HELP

The technologies discussed in Sections 12.1 and 12.2 are intended to prevent the exposure of children to inappropriate material. Instant help is a tool to deal with exposure after the fact.

12.4.1 What Is Instant Help?

The philosophy underlying instant help is that from time to time children will inevitably encounter upsetting things online: inappropriate material, spam mail containing links to inappropriate sexually explicit material, sexual solicitations, and so on. When something upsetting happens, it would be helpful for a responsible adult to be able to respond promptly. An "instant help" function would enable the minor to alert such an adult so that appropriate action could ensue, and it could provide another channel to law enforcement through which threats, solicitations, and obscene materials or child pornography could be reported.

To the best of the committee's knowledge, there are no commercially available tools that provide instant help. But current technology could easily support an instant help function. For example, a secure one-click call for help47 could be:

47Security for this "one-click" button is an important element of help: the functionality of the button must not be disabled, as it is in "mousetrapping" (when the "back" button sends the user to a new adult-oriented Web site).

On ISPs, an "always-on-top" button that would connect the user directly to a trained respondent;

On Internet browsers, a "plug-in" represented by a button on the toolbar;

On search engines, an icon that would always be displayed on the results page;

On e-mail readers, an "I object to this e-mail" button on the toolbar;

On a computer desktop, a button that activates an application that allows a remote helper to take over control of the user's machine and view the screen.

These buttons might be as simple as an icon for the CyberTipline (CTL) that would serve as an easily accessible channel for the public to use in reporting child pornography. The CTL icon has proven to be an effective tool in reporting obscene material or child pornography because it is user-friendly and is the most direct method to report such images to the appropriate law enforcement authority. Because the CTL icon was built for the sole purpose of interfacing with the public to facilitate reporting to law enforcement of computer-assisted crimes against children, it is more effective than other mechanisms for such reporting.

Depending on the context of the technology through which the user is coming into contact with inappropriate content or interactions, a wide range of functionality is possible once the button is clicked. For example, "instant help" on a browser or an ISP could immediately connect the user to a helper who provides assistance. To provide context, an image of the screen could be transmitted to the helper. Such assistance might be most useful if the user encounters a solicitor or inappropriate conversation in a chat room or an instant message. Or, if a user encounters inappropriate material, the last several Web pages viewed could be shared with the helper, who could assist the user in whatever action he or she wished to take (e.g., sending URLs to the CyberTipline). For Internet access on a LAN, instant help could be configured to summon assistance from a responsible adult within the LAN, such as a teacher or a librarian.

Instant help would be applicable in both home and institutional contexts. Implementing instant help functionality must be undertaken by service providers and technology vendors. But such functionality is not helpful without a human infrastructure to assist those seeking help; the human infrastructure may be provided by the ISP, a parent, a school, a library, or even an expanded National Center for Missing and Exploited Children (NCMEC).
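Although no such tool existed when the committee wrote, the basic flow of an instant help button is straightforward to sketch. Everything in the example below (the helper address, the payload format, and the delivery function) is an assumption introduced for illustration, not a description of any existing product.

    # Hypothetical sketch of an instant help handler: one click packages recent
    # context and forwards it to a helper configured by a parent, school, or library.
    import json
    import time

    HELPER_ADDRESS = "helper@school.example"    # assumed; set by the responsible adult

    def on_instant_help_click(recent_urls, note=""):
        report = {
            "timestamp": time.time(),
            "recent_urls": list(recent_urls)[-5:],   # the last several pages viewed
            "note": note,                            # optional description from the child
        }
        deliver_to_helper(HELPER_ADDRESS, json.dumps(report))

    def deliver_to_helper(address, payload):
        # Placeholder: a real implementation might use e-mail, an instant message,
        # or a direct connection to an ISP help desk or the CyberTipline.
        print("would send to", address + ":", payload)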

12.4.2 How Well Might Instant Help Work?

By providing assistance to the minor, instant help could potentially reduce the negative impact that may result from exposure to inappropriate material or experiences. Such exposure can come from either deliberate or inadvertent access, but in practice instant help is more likely to be useful in the case of inadvertent access. Instant help obviously does not enable the user to avoid inappropriate material but does provide a means for trying to cope with it. It also provides opportunities to educate children about how to avoid inappropriate material or experiences in the future, and it might lead to the creation of more civil norms online. It provides immediate assistance in the case of aggressive solicitations and harassment. Finally, it might lead to greater law enforcement activity if the materials involved are obscene or constitute child pornography.

Metrics of effectiveness that indicate the extent to which children are not exposed to inappropriate materials do not apply to instant help. Rather, effectiveness is better assessed on the basis of the quality of the assistance that helpers can provide and the responsiveness of the instant help function. Assistance that arrives 20 minutes after the user has pressed the instant help button is obviously much less helpful than if it arrives in 5 seconds, and of course, human helpers must be trained to handle a wide variety of situations.

A specialist trained to provide this kind of help to youth, or a peer with special training, could potentially be more effective than the child's own parent or teacher or librarian. However, because this approach has never been implemented on a wide scale, staffing needs for instant help centers are difficult to assess. In many urban areas, crisis intervention hotlines exist (focused on helping people subject to domestic abuse, feeling suicidal, struggling with substance abuse addictions, and so on), but none known to the committee give training to their volunteer staffs concerning children's exposure to sexually explicit material on the Internet.

12.4.3 Who Decides What Is Inappropriate?

Unlike other tools, the locus of decision making in the context of instant help rests with the minor. The minor decides what is upsetting and determines the situations in which he or she needs help.

12.4.4 How Flexible and Usable Is Instant Help?

The purpose of an instant help function is to ensure that something can be done with very little difficulty. Thus, the flexibility and usability of an instant help function are paramount.

For example, individual parents, libraries, or schools could customize who is contacted when the instant help button is pressed. Thus, a family with strong religious ties could set instant help to alert helpers from a group associated with their religious tradition, while a school district could set the instant help button so that in the elementary school, a message went to a staffer in that building and, in the middle school, to a staffer in the middle school building. This is in some sense analogous to the national phone emergency number 911 going to a local 911 dispatch center based on the exchange from which 911 was dialed.

12.4.5 What Are the Costs and Infrastructure Required for Instant Help?

The infrastructure and institutional cooperation needed for instant help to work successfully are considerable. Vendors must be willing to use precious screen space to provide instant help buttons. The infrastructure of helpers must be developed and deployed. For an ISP, such an infrastructure might well be expensive; for the NCMEC or law enforcement agencies, it would be very expensive. But for a school or library (or even for responsible adult guardians), the infrastructure of helpers may already be in place.48

The costs are roughly proportional to the size of the helper infrastructure; helpers (who could be volunteers) must be trained in how to respond to a call for help.

Note also that a skilled adult predator or even adolescents bent on mischief could create a flood of diversionary instant help requests so that the responding individuals would become backlogged, during which time the predator could attempt to continue an interaction with a young person. Thus, some mechanism for protection from "flooding attacks" would be needed by any responding center that serves a large number of anonymous end users or devices.

48It is true that in schools or libraries a child should be able to request help from these individuals without instant help features. The primary advantage of clicking an instant help icon is that it can be done privately and without drawing attention from other users.
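One way a responding center might guard against the flooding attacks described above is to limit how many requests any single device or account can open in a short period. The sketch below uses a token bucket with arbitrary capacity and refill values; it is an illustration of the idea, not a recommendation of specific limits.

    # Illustrative per-client rate limiter for an instant help response center.
    # Capacity and refill rate are assumed values.
    import time

    class TokenBucket:
        def __init__(self, capacity=3, refill_per_second=0.05):
            self.capacity = capacity
            self.refill_per_second = refill_per_second
            self.tokens = float(capacity)
            self.last = time.time()

        def allow(self):
            now = time.time()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_second)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False             # excess requests can be queued for later review

    buckets = {}                     # one bucket per requesting device or account

    def accept_help_request(client_id):
        return buckets.setdefault(client_id, TokenBucket()).allow()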

12.4.6 What Does the Future Hold for Instant Help?

To the committee's knowledge, instant help functionality has not been implemented anywhere, and it remains to be seen whether children would actually use it if and when they are confronted with inappropriate material or experiences that upset them. Thus, some small-scale piloting of the concept to evaluate how it might work is likely to be very helpful before any major effort to implement instant help is considered.

12.4.7 What Are the Implications of Using Instant Help?

A potential downside of a "low-cost" implementation that would require the child to describe the material and how he or she got there is that the child might be forced to focus more on the inappropriate material, perhaps causing at least discomfort to a child who may be better off if transferred back to appropriate activities as soon as possible. Such a negative outcome could be avoided if the inappropriate material could be automatically transmitted to the helper. (Since the material may well not be present on the child's screen when he or she contacts the helper, the automatic display of material might have to retrieve the last several screens; this may be a difficult technical task under some circumstances.)

In cases where a new type of offensive material or communication begins to occur for the first time on the Internet, the first instant help response center to identify this new material could share that information with schools and parents, other instant help response centers, youth (as warnings), or even filtering vendors. In that sense, instant help might harness all youth who use it to improve the monitoring of the Internet for new offensive material or communication. Dissemination of the insights of the staff of an instant help center should be considered a networked response, as opposed to the response of assisting a child when requested. The Internet technical community has experience with networked response in the CERT system to propagate information about worms, security holes, and the like.

12.4.8 Findings on Instant Help

As the committee is unaware of any implementation of instant help that fits the description above, there are no findings to report.