
Behavioral and Social Science: 50 Years of Discovery (1986)

Chapter: Some Developments in Research on Language Behavior

Suggested Citation:"Some Developments in Research on Language Behavior." National Research Council. 1986. Behavioral and Social Science: 50 Years of Discovery. Washington, DC: The National Academies Press. doi: 10.17226/611.


Some Developments in Research on Language Behavior

MICHAEL STUDDERT-KENNEDY

INTRODUCTION

Fifty years ago the study of language was largely a descriptive endeavor, grounded in the traditions of nineteenth century European philology. The object of study, as proposed by de Saussure in a famous course of lectures at the University of Geneva (1906-1911), was langue, language as a system, a cultural institution, rather than parole, language as spoken and heard by individuals. In 1933 historical linguists were describing and comparing the world's languages, tracing their family relations, and reconstructing the protolanguages from which they had sprung (Lehmann, 1973). Structural linguists were developing objective procedures for analyzing the sound patterns and syntax of a language, according to well-defined, systematic principles (e.g., Bloomfield, 1933). Students of dialect were applying such procedures to construct atlases of dialect geography (Kurath, 1939), while anthropological linguists were applying them to American Indian, African, Asian, Polynesian, and many other languages (Lehmann, 1973). The work goes on. From it we are coming to understand the origins of language diversity: not only how languages change over time and space but also how they and their dialects act as forces of social cohesion and differentiation (e.g., Labov, 1972). However, the unfolding of the descriptive tradition and the development of new methods and theories in the field of sociolinguistics are not my concerns in this chapter. My concern, rather, is with a view of language that has emerged from a more diverse tradition. For like the taxonomic studies of Linnaeus in botany and of his followers in zoology, the great labor of language description and classification has provided the raw material for a broader science, stemming from the work of seventeenth century grammarians and of such nineteenth century figures as the German physicist Hermann von Helmholtz, the French neurologist Paul Broca, and the English phonetician Henry Sweet. The several strands that their works represent have come together over the past 30 to 40 years to form the basis of a new science of language, focusing on the individual, rather than on the social and cultural, linguistic system. Since the new focus is essentially biological, a biological analogy may be helpful. It is as though we shifted from describing and classifying the distinctive flight patterns of the world's eight or nine thousand species of birds to analyzing the basic principles of individual flight as they must be instantiated in the anatomy and physiology of every hummingbird and condor. Thus, this new science of language asks: What is language as a category of individual behavior? How does it differ from other systems of animal communication? What do individuals know when they know a language? What cognitive, perceptual, and motor capacities must they have to speak, hear, and understand a language? How do these capacities derive from their biophysical structures, that is, from human anatomy and physiology? What is the course of their ontogenetic development? And so on. Such questions hardly fall within the province of a single discipline. The new field is markedly interdisciplinary and addresses questions of practical application as readily as questions of pure theory or knowledge.
Linguistics, anthropology, psychology, biology, neuropsychology, neurology, and communications engineering all contribute to the field, and their research has implications for workers in many areas of social import: doctors and therapists treating stroke victims, surgeons operating on the brain, applied engineers working on human-machine communication, teachers of second languages, of reading, and of the deaf and otherwise language-handicapped. The origins of the new science are an object lesson in the interplay between basic and applied research, and between research and theory. To understand this, we must begin by briefly examining the nature of language and the properties that make it unique as a system of communication.

The Structure of Language

If we compare language with other animal communication systems, we are struck by its breadth of reference. The signals of other animals form a closed set with specific, invariant meanings (Wilson, 1975). The ultrasonic squeaks of a young lemming denote alarm; the swinging steps and lifted tail of the male baboon summon his troop to follow; the "song" of the male white-crowned sparrow informs his fellows of his species, sex, local origin, personal identity, and readiness to breed or fight. Even the elaborate "dance" of the honeybee merely conveys information about the direction, distance, and quality of a nectar trove. But language can convey information about many more matters than these. In fact, it is the peculiar property of language to set no limit on the meanings it can carry. How does language achieve this openness, or productivity? There are several key features to its design (Hockett, 1960). Here we note two. First, language is learned: it develops under the control of an open rather than a closed genetic program (Mayr, 1974). Transmission of the code from one generation to the next is therefore discontinuous; each individual recreates the system for himself. There is ample room here for creative variation, probably a central factor in the evolution of language and in the constant processes of change that all languages undergo (e.g., Kiparsky, 1968; Locke, 1983; Slobin, 1980). One incidental consequence of this freedom is that the universal properties of language (whatever they may be) are largely masked by the surface variety of the several thousand languages, and their many dialects, now spoken in the world. Second, and more crucially, language has two hierarchically related levels of structure. One level, that of sound pattern, permits the growth of a large lexicon; the other level, that of syntax, permits the formation of an infinitely large set of utterances. A similar combinatorial principle underlies the structure of both levels. Consider, first, the fact that a six-year-old, middle-class American child typically has a recognition vocabulary of some 8,000 root words, some 14,000 words in all (Templin, 1957). Most of these have been learned in the previous four years, at a rate of about five or six roots a day.
As an adult, the child may come to have a vocabulary of well over 150,000 words (Seashore and Erickson, 1940). How is it possible to produce and perceive so many distinct signals? The achievement evidently rests on the evolution in our hominid ancestors of a combinatorial principle by which a small set of meaningless elements (phonemes, or consonants and vowels) is repeatedly sampled, and the samples permuted, to form a very large set of meaningful elements (morphemes, words). Most languages have between 20 and 100 phonemes; English has about 40, depending on dialect. The phonemes themselves are formed from an even smaller set of movements, or gestures, made by jaw, lips, tongue, velum (soft palate), and larynx. Thus, the combinatorial principle was a biologically unique development that provided "a kind of impedance match between an open-ended set of meaningful symbols and a decidedly limited set of signaling devices" (Studdert-Kennedy and Lane, 1980; cf. Cooper, 1972; Liberman et al., 1967). We may note, incidentally, that a large lexicon is not peculiar to complex, literate societies: even so-called primitive human groups may deploy a considerable lexicon. For example, the Hanunoo, a stone age people of the Philippines, have nearly three thousand words for the flora and fauna of their world (Levi-Strauss, 1966). Of course, a large lexicon is not a language. Many languages have relatively small lexicons, and in everyday speech we may draw habitually on no more than a few thousand words (Miller, 1951). To put words to linguistic use, we must combine them in particular ways. Every language has a set of rules and devices, its syntax, for grouping words into phrases, clauses, and sentences. Among the various devices that a language may use for predicating properties of objects and events, and for specifying their relations (who does what to whom) are word order and inflection (case, gender, and number affixes for nouns, pronouns, adjectives; person, tense, mood, and voice affixes for verbs). An important distinction is also made in all languages between open-class words with distinct meanings (nouns, verbs, adjectives, etc.) and closed-class or function words (conjunctions, articles, verbal auxiliaries, enclitics, e.g., the particle "not" in "cannot") that have no fixed meaning in themselves but serve the purely syntactic function of indicating relations between words in a sentence or sequence of sentences. Here again, then, a combinatorial principle is invoked: a finite set of rules and devices is repeatedly sampled and applied to produce an infinite set of utterances. I should note that many of the facts about language summarily described above are already framed from the new viewpoint that has developed in the past 40 years. Let us now turn back the clock and consider the early vicissitudes of three areas of applied research that contributed to this development.
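Before turning to those applications, the arithmetic behind the combinatorial principle is worth making concrete. The short Python sketch below uses the chapter's rough figure of 40 phonemes for English; the word lengths and the free combination of phonemes are simplifying assumptions (real phonotactics rule many sequences out, so the counts are upper bounds):

```python
# Illustrative arithmetic for the combinatorial principle: a small inventory
# of meaningless elements (phonemes) yields a space of possible forms far
# larger than any lexicon. Phonotactic constraints are ignored, so these
# counts are upper bounds.

def possible_forms(inventory: int, max_len: int) -> int:
    """Count all phoneme strings of length 1 through max_len."""
    return sum(inventory ** k for k in range(1, max_len + 1))

for max_len in (3, 5, 7):
    print(max_len, possible_forms(40, max_len))
# Even at length 3 the count (65,640) dwarfs a six-year-old's 14,000 words.
```

Even this crude upper bound shows why a two-level design pays: a signaling repertoire of a few dozen gestures is enough to individuate a lexicon of any realistic size.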
Three Areas of Applied Research in Language

In the burst of technological enthusiasm that followed World War II, federal money flowed into three related areas of language study: automatic machine translation, automatic speech recognition, and automatic reading machines for the blind. A considerable research effort was mounted in all three areas during the late 1940s and early 1950s, but surprisingly little headway was made. The reason for this, as will become clear below, was that all three enterprises were launched under the shield of a behaviorist theory according to which complex behaviors could be properly described as chained sequences of stimuli and responses. The initial assumption underlying attempts at machine translation was that this task entailed little more than transposing words (or morphemes) from one language into another, following a simple left-to-right sequence.
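The word-for-word scheme can be caricatured in a few lines of Python. The lexicon below is invented for illustration (rough French-like glosses, not a real bilingual dictionary); the point is only that one stored counterpart per word cannot select among context-dependent senses:

```python
# A caricature of naive word-for-word machine translation. The lexicon is a
# hypothetical toy: each English word gets exactly one stored counterpart,
# so context-dependent senses and agreement are necessarily lost.

lexicon = {
    "high": "haut",          # fits the spatial sense only
    "mountain": "montagne",
    "hopes": "espoirs",
}

def translate_left_to_right(sentence: str) -> str:
    """Replace each word by its single stored counterpart, left to right."""
    return " ".join(lexicon.get(word, f"<{word}?>") for word in sentence.split())

print(translate_left_to_right("high mountain"))  # "haut montagne": agreement lost
print(translate_left_to_right("high hopes"))     # "haut espoirs": wrong sense of "high"
```

A real translation would need to choose a different rendering of high in each phrase, which is exactly the context sensitivity a left-to-right lookup cannot supply.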

If this were so, we might store a sizable lexicon of matched Russian, say, and English words in a computer and execute translation by instructing the computer to type out the English counterpart of each Russian word typed in. Unfortunately, both semantic and syntactic stumbling blocks lie in the path. The range of meanings, literal and metaphorical, that one language assigns to a word (say, English high, as in "high mountain," "high pitch," "high hopes," "high horse," "high-stepping," and "high on drugs") may be quite different from the range assigned by another language; and the particular meaning to be assigned will be determined by context, that is, by meanings already assigned to some in principle unspecifiable sequence of preceding words. Moreover, the syntactic devices for grouping words into phrases, phrases into clauses, and clauses into sentences may be quite different in different languages. This is strikingly obvious when we compare a heavily inflected language, such as Russian, with a lightly inflected language with a more rigid word order, such as English. Oettinger (1972) amusingly illustrates the general difficulties with two simple sentences, immediately intelligible to an English speaker, but a source of knotty problems in both phrase structure and word meaning to a computer programmed for left-to-right lexical assignment: Time flies like an arrow, and Fruit flies like a banana. From such observations, it gradually became clear that we would make little progress in machine translation without a deeper understanding of syntax and of its relation to meaning. The initial assumption underlying attempts at automatic speech recognition was similar to that for machine translation and equally in error (cf. Reddy, 1975). The assumption was that the task entailed little more than specifying the invariant acoustic properties associated with each consonant and vowel, in a simple left-to-right sequence.
One would then construct an acoustic filter to pass those properties but no others, and control the appropriate key on a printer by means of the output from each filter. Unfortunately, stumbling blocks lie in this path also. A large body of research has demonstrated that speech is not a simple left-to-right sequence of discrete and invariant alphabetic segments, such as we see on a printed page (e.g., Fant, 1962; Joos, 1948; Liberman et al., 1967). The reason for this, as we shall see shortly, is that we do not speak phoneme by phoneme, or even syllable by syllable. At each instant our articulators are engaged in executing patterns of movement that correspond to several neighboring phonemes, including those in neighboring syllables. The result of this shingled pattern of movement is, of course, a shingled pattern of sound. Even more extreme variation may be found when we examine the acoustic structure of the same syllable spoken with different stress or at different rates or by different speakers. From such observations it gradually became clear that we would make little progress in automatic speech recognition without a deeper understanding of how the acoustic structure of the speech signal specifies the linguistic structure of the message. Finally, the initial assumption underlying attempts to construct a reading machine for the blind was closely related to that for automatic speech recognition and again in error (Cooper et al., 1984). A reading machine is a device that scans print and uses its contours to control an acoustic signal. It was supposed that, given an adequate device for optical recognition of letters on a page, one need only assign a distinctive auditory pattern to each letter, to be keyed by the optical reader and recorded on tape or played in real time to a listener: a sort of auditory Braille. Once again there were stumbling blocks, but this time they were perceptual. We normally speak and listen to English at a rate of some 150 words per minute (wpm), that is, roughly 5 to 6 syllables or 10 to 15 phonemes per second. Ten to 15 discrete sounds per second is close to the resolving power of the ear (20 elements per second merge perceptually into a low-pitched buzz). Not surprisingly, despite valiant and ingenious attempts to improve the acoustic array, even the most practiced listeners were unable to follow a substitute code at rates much beyond that of skilled Morse code receivers, namely some 10 to 15 words per minute, a rate intolerably slow for any extended use. From this work, it gradually became clear that the only acceptable output from a reading machine would be speech itself. This conclusion was one of many that spurred development of speech synthesis by artificial talking machines in following years (Cooper and Borst, 1952; Fant, 1973; Flanagan, 1983; Mattingly, 1968, 1974). The conclusion also raised theoretical questions.
For example: Why can we successfully transpose speech into a visual alphabet, using another sensory modality, if we cannot successfully transpose it within its "natural" modality of sound? Why is speech so much more effective than other acoustic signals? Is there some peculiar, perhaps biologically ordained, relation between speech and the structure of language? We will return to these questions below. I have not recounted these three failures of applied research missions to argue that money and effort spent on them were wasted. On the contrary, initial failure spurred researchers to revised efforts, and valuable progress has since been made. Reading machines for the blind, using an artificial speech output, have been developed and are already installed in large libraries (Cooper et al., 1984). There now exist automatic speech recognition devices that recognize vocabularies of roughly a thousand words, spoken in limited contexts by a few different speakers (Levinson and Liberman, 1981). Scientific texts with well-defined vocabularies can now be roughly translated by machine, then rendered into acceptable English by an informed human editor. These advances have largely come about by virtue of brute computational force and technological ingenuity, rather than through real gains in our understanding of language. This is not because we have made no gains, for as we shall see shortly, we surely have. However, none of the devices that speak, listen, or understand actually speaks, listens, or understands according to known principles of human speech and language. For example, a speech synthesizer is the functional equivalent of a human speaker to the extent that it produces intelligible speech. But it obviously does so by quite different means than those that humans use: none of its inorganic components correspond to the biophysical structures of larynx, tongue, velum, lips, and jaw. Instead, a synthesizer simulates speech by means of a complex system of tuned electronic circuits, and resembles a speaker somewhat as, say, a crane resembles a human lifting a weight. We are still deeply ignorant of the physiological controls by which a speaker precisely coordinates the actions of larynx, tongue, and lips to produce even a single syllable. In short, the main scientific value of the early work I have described was to reveal the astonishing complexity of speech and language, and the inadequacy of earlier theories to account for it. One important effect of the initial failures was therefore to prepare the ground for a theoretical revolution in linguistics (and psychology) that began to take hold in the late 1950s.

THE GENERATIVE REVOLUTION IN LINGUISTICS

The publication in 1957 of Noam Chomsky's Syntactic Structures began a revolution in linguistics that has been sustained and developed by many subsequent works (e.g., Chomsky, 1965, 1972, 1975, 1980; Chomsky and Halle, 1968). To describe the course of this revolution is well beyond the scope of this chapter.
However, the impact of Chomsky's writings on fields outside linguistics (philosophy, psychology, and biology, for example) and their importance for the emerging science of language have been so great that some brief exposition of at least their nontechnical aspects is essential. I should emphasize that Chomsky's work has by no means gone unchallenged (e.g., Givon, 1979; Hockett, 1968; Katz, 1981). My intent in what follows is not to present a brief in its defense, but simply to sketch a bare outline of the most influential body of work in modern linguistics. The central goal of Chomsky's work has been to formalize, with mathematical rigor and precision, the properties of a successful grammar. He defines a grammar as "a device of some sort for producing the sentences of the language under analysis" (Chomsky, 1957, p. 11). A grammar, in Chomsky's view, is not concerned either with the meaning of a sentence or with the physical structures (sounds, script, manual signs) that convey it. The grammar, or syntax, of a language is a purely formal system for arranging the words (or morphemes) of a sentence into a pattern that a native speaker would judge to be grammatically correct or at least acceptable. In Syntactic Structures, Chomsky compared three types of grammar: finite-state, phrase-structure, and transformational grammars. A finite-state grammar generates sentences in a left-to-right fashion: given the first word, each successive word is a function of the immediately preceding word. (Such a model is, of course, precisely that adopted by B.F. Skinner in his Verbal Behavior (1957), a dernier cri in behaviorism, published in the same year as the "premier cri" of the new linguistics.) Chomsky (1956) proved mathematically, as work on machine translation had suggested empirically, that a simple left-to-right grammar can never suffice as the grammar of a natural language. The reason, stated nontechnically, is that there may exist dependencies between words that are not adjacent, and an indefinite number of phrases containing other nonadjacent dependencies may bracket the original pair. Thus, in the sentence, Anyone who eats the fruit is damned, anyone and is damned are interdependent. We can, in principle, continue to add bracketing interdependencies indefinitely, as in Whoever believes that anyone who eats the fruit is damned is wrong, and Whoever denies that whoever believes that anyone who eats the fruit is damned is wrong is right. In practice, we seldom construct such sentences. However, the recursive principle that they illustrate is crucial to every language. The principle permits us to extend our communicative reach by embedding one sentence within another. For example, even a four-year-old child may combine We picked an apple and I want an apple for supper into the utterance I want the apple we picked for supper. Thus, the child embeds an adjectival phrase, we picked (= that we picked, with the relative pronoun deleted), to capture two related sentences in a single utterance (cf. Limber, 1973).
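The bracketing pattern in these examples can be generated mechanically, which makes the nonadjacent dependencies easy to see. In the minimal Python sketch below, the clause frames are taken directly from the example sentences; each level of embedding pushes anyone and is damned one clause further apart, so no rule keyed to adjacent words can capture the dependency:

```python
# Generate the chapter's center-embedded examples by recursion. Each level
# wraps the current sentence in one more bracketing clause, separating the
# innermost subject from its predicate by an unbounded distance.

def embed(depth: int) -> str:
    """Wrap the base sentence in `depth` alternating bracketing clauses."""
    sentence = "anyone who eats the fruit is damned"
    frames = [
        "whoever believes that {} is wrong",
        "whoever denies that {} is right",
    ]
    for level in range(depth):
        sentence = frames[level % 2].format(sentence)
    return sentence

print(embed(1))  # whoever believes that anyone who eats the fruit is damned is wrong
print(embed(2))  # adds "whoever denies that ... is right" around the result
```

A finite-state generator, which conditions each word only on its predecessor, has no memory for the open brackets and so cannot guarantee that every "whoever" is eventually paired with its "is wrong" or "is right."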
Chomsky goes on to consider how we might formulate an alternative and more powerful grammar, based on the traditional constituent analysis of sentences into "parts of speech." Constituent analysis takes advantage of the fact that the words of any language (or an equivalent set of words and affixes) can be grouped into categories (such as noun, pronoun, verb, adjective, adverb, preposition, conjunction, article) and that only certain sequences of these categories form acceptable phrases, clauses, and sentences. By grouping grammatical categories into permissible sequences, we can arrive at what Chomsky terms a phrase-structure grammar. Such a grammar is "a finite set . . . of initial strings and a finite set . . . of 'instruction formulas' of the form X → Y interpreted: 'rewrite X as Y' " (Chomsky, 1957, p. 29). Figure 1 illustrates a standard parsing diagram of the utterance, The woman ate the apple, in a form familiar to us from grammar school (above), and as a set of "rewrite rules" from which the parsing diagram can be generated (below).

216 MICHAEL STUDDERT-KENNEDY

Parsing Diagram

Sentence
├── Noun Phrase
│   ├── Article: the
│   └── Noun: woman
└── Verb Phrase
    ├── Verb: ate
    └── Noun Phrase
        ├── Article: the
        └── Noun: apple

Rewrite Rules

(1) Sentence → Noun Phrase + Verb Phrase
(2) Noun Phrase → Article + Noun
(3) Verb Phrase → Verb + Noun Phrase
(4) Article → {the, a}
(5) Noun → {woman, apple, . . .}
(6) Verb → {ate, seized}

FIGURE 1 Above, a parsing diagram dividing the sentence The woman ate the apple into its constituents. Below, a set of rewrite rules that will generate any sentence having the constituent structure shown above.

Notice, incidentally, that rewrite rules are indifferent to meaning. They will generate anomalous utterances such as The chocolate loved the clock no less readily than The woman ate the apple. Moreover, many native speakers would be willing to accept such anomalous utterances as grammatically correct, even though they have no meaning. This hints at the possibility that syntactic capacity might be autonomous, a relatively independent component of the language faculty. This is a matter to which we will return below.

An important point about a set of rewrite rules is that it specifies the grouping of words necessary to correct understanding of a sentence. The sentence Let's have some good bread and wine is ambiguous until we know whether the adjective good modifies only bread or both bread and wine. The distinction may seem trivial. But, in fact, the example shows that we

are sensitive (or can be made sensitive) to an ambiguity that could not have arisen from any difference in the words themselves or in their sequence. Rather, the origin of the ambiguity lies in our uncertainty as to how the words should be grouped, that is, as to their phrase structure. A correct (or incorrect) interpretation of their meaning therefore depends on the listener (and a fortiori the speaker) being able to assign an abstract phrase structure to the sequence of words.

Whether a complete grammar of English, or any other natural language, could be written as a set of phrase-structure rules is not clear. In any event, Chomsky argues in Syntactic Structures that such a grammar would be unnecessarily repetitive and complex, since it does not capture a native speaker's intuition that certain classes of sentence are structurally related. For example, the active sentence Eve ate the apple and the passive sentence The apple was eaten by Eve could both be generated by an appropriate set of phrase-structure rules, but the rules would be different for active sentences than for their passive counterparts. Surely, the argument runs, it would be "simpler" if the grammar somehow acknowledged their structural relation by deriving both sentences from a common underlying "deep structure." The derivation would be accomplished by a series of steps or "transformations" whose functions are to delete, modify, or change the order of the base constituents Eve, ate, apple.

An important aspect of transformations is that they are structure dependent; that is, they depend on the analysis of a sentence into its structural components, or constituents. For example, to transform such a declarative sentence as The man is in the garden into its associated interrogative Is the man in the garden?, a simple left-to-right rule would be: "Move the first occurrence of is to the front."
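The contrast between this linear rule and a structure-dependent alternative can be sketched in a few lines of Python. The sketch is illustrative only: the subject noun phrase boundary is supplied by hand, whereas a real grammar would have to derive it.

```python
def linear_question(words):
    """Naive, structure-blind rule: move the first 'is' to the front."""
    i = words.index("is")
    return ["Is"] + words[:i] + words[i + 1:]

def structural_question(words, subject_end):
    """Structure-dependent rule: move the first 'is' that FOLLOWS
    the subject noun phrase. The phrase boundary (subject_end) is
    supplied by hand here; deriving it is the parser's job."""
    i = words.index("is", subject_end)
    return ["Is"] + words[:i] + words[i + 1:]

simple = "the man is in the garden".split()
hard = "the man who is tall is in the garden".split()

print(" ".join(linear_question(simple)))       # Is the man in the garden
print(" ".join(linear_question(hard)))         # Is the man who tall is in the garden
print(" ".join(structural_question(hard, 5)))  # Is the man who is tall in the garden
```

The linear rule works only until a relative clause puts an earlier is inside the subject; the structural rule, which must know where the noun phrase ends, succeeds on both.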
However, the rule would not then serve for such a sentence as The man who is tall is in the garden, since it would yield Is the man who tall is in the garden? The rule must therefore be something like: "Find the first occurrence of is following the first noun phrase, and move it to the front" (Chomsky, 1975, pp. 30-31). Thus, a transformational grammar, no less than a phrase-structure grammar, presupposes analysis of an utterance into its grammatical (or phrasal) constituents. We may note, in passing, that children learning a language never produce sentences such as Is the man who tall is in the garden? Rather, their errors suggest that, even in their earliest attempts to frame a complex sentence, they draw on a capacity to recognize the structural components of an utterance.

However, here we should be cautious. Chomsky has repeatedly emphasized that ". . . a generative grammar is not a model for a speaker or hearer" (1965, p. 9), not a model of psychological processes presumed to be going on as we speak and listen. The word "generative" is perhaps misleading

in this regard. Certainly, experimental psychologists during the 1960s devoted much ingenuity and effort to testing the psychological reality of transformations (for reviews, see Cairns and Cairns, 1976; Fodor et al., 1974; Foss and Hakes, 1978). But the net outcome of this work was to demonstrate the force of Chomsky's distinction between formal descriptions of a language and the strategies that speakers and listeners deploy in communicating with each other (cf. Bever, 1970).

At first glance, the distinction might seem to be precisely that between langue and parole, drawn by de Saussure. However, for de Saussure, langue, the system of language, "exists only by virtue of a sort of contract signed by the members of a community" (de Saussure, 1966, p. 14): it is a kind of formal artifice or convention, maintained by social processes of which individuals may be quite unaware. By contrast, for Chomsky the "generative grammar [of a language] attempts to specify what the speaker actually knows" (1965, p. 8). What a speaker knows, his competence in Chomsky's terminology, is attested to by "intuitive" judgments of grammaticality. What a speaker does, performance (parole), is linguistic competence filtered through the indecisions, memory lapses, false starts, stammerings, and the "thousand natural [nonlinguistic] shocks that flesh is heir to." Thus, even though a theory of grammar is not a theory of psychological process, it is a theory of individual linguistic capacity. In Chomsky's view, the task of linguistics is to describe the structure of language much as an anatomist might describe the structure of the human hand. The complementary role of psychology in language research is to describe language function and its course of behavioral development in the individual, while physiology, neurology, and psychoneurology chart its underlying structures and mechanisms.
Whether this sharp distinction between language as a formal object and language as a mode of biological function can, or should, be maintained is an open question. What is clear, however, is that it was from a rigorous analysis of the formal properties of syntax (and later of phonology: see Chomsky and Halle, 1968) that Chomsky was led to view language as an autonomous system, distinct from other cognitive systems of the human mind (cf. Fodor, 1982; Pylyshyn, 1980). His writings during the late 1950s and 1960s brought an exhilarating breath of fresh air to psychologists interested in language, because they offered an escape from the stifling behavioristic impasse, already noted by Lashley (1951) and others (e.g., Miller et al., 1960). The result was an explosion of research in the psychology of language, with a strong emphasis on its biological underpinnings. Whatever one's view of generative grammar, it is fair to say that almost every area of language study over the past 25 years has been touched, directly or indirectly, whether into action or into reaction, by Chomsky's work. This will be obvious from the following selective review of research in four major areas: acoustic phonetics, American Sign Language (ASL), brain specialization for language, and language development in children.

Acoustic Phonetics

We begin with audible speech, partly because we are then following the course of development, both in the species and the individual, from the bottom up; partly because it is in this area, where we are dealing with observable, physical processes, that the most dramatic progress has been made; and partly because we have come to realize in recent years that the physical medium of language places fundamental constraints on its surface structure. To understand this we must know something of the way speech is produced.

The Source-filter Theory of Speech Production

The source-filter theory, first proposed by Johannes Müller in 1848, has been elaborated in the past 50 years, notably at the University of Tokyo (Chiba and Kajiyama, 1941), the Royal Institute of Technology in Stockholm (Fant, 1960, 1973) and, in this country, the Massachusetts Institute of Technology (Stevens and House, 1955, 1961) and Bell Telephone Laboratories (Flanagan, 1983). As a result of this work, we are now able to specify accurately the possible acoustic outputs of any vocal tract, animal or human.

When we speak, we drive air from our lungs through the pharynx, mouth, teeth, lips, and, sometimes, nose. The sound source is usually either the "voice" produced by rapid pulsing of the vocal cords (as in the final sounds of be and do), the hiss of air blown through a narrow constriction (as in the initial and final sounds of safe and thrush), or both (as in the final sounds of leave and bees).
The resonant filter is the vocal tract, its air set into vibration by the flow of air from the lungs, much as we produce sound from a bottle or a wind instrument by blowing air across its top. To some large degree, linguistic information (that is, consonants and vowels) is conveyed by systematic variations in the configuration of the vocal tract. For example, if we lower the tongue and move it back toward the pharynx, we set up a pattern of resonances (known as formants) corresponding to the vowel [a]. If we raise the tongue forward toward the gums, we set up resonances for the vowel [i]. Finally, if we raise the tongue backward toward the soft palate, we set up resonances for the vowel [u]. These three sounds are the most distinct vowels, both articulatorily and acoustically, that the human vocal tract can produce, and all known languages use at least two of them.
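The filter side of the theory can be made concrete with the textbook idealization of the vocal tract as a uniform tube, closed at the glottis and open at the lips, whose resonances fall at odd quarter-wavelength frequencies, F_n = (2n - 1)c / 4L. A minimal sketch, with the caveat that the tract length and sound speed below are standard approximations and that a real tract is not uniform (the tongue maneuvers just described exploit exactly that nonuniformity):

```python
def uniform_tube_formants(length_cm, n_formants=3, c=35000.0):
    """Resonances of a uniform tube closed at one end (the glottis)
    and open at the other (the lips): F_n = (2n - 1) * c / (4 * L),
    where c is the speed of sound in warm, moist air (cm/s)."""
    return [(2 * n - 1) * c / (4.0 * length_cm)
            for n in range(1, n_formants + 1)]

# A 17.5 cm tract (a common adult approximation) gives formants near
# 500, 1500, and 2500 Hz, close to those of a neutral schwa vowel.
print(uniform_tube_formants(17.5))
```

Raising or lowering the tongue perturbs this uniform shape, shifting the formants away from these neutral values; the distinct formant patterns of [a], [i], and [u] are the extremes of such perturbations.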

[We may note in passing that Lieberman and his colleagues (Lieberman and Crelin, 1971; Lieberman et al., 1972) have used the source-filter theory of speech production to demonstrate that these vowels lie outside the range of sounds that could be produced either by an adult chimpanzee or by a newborn human infant. The reason for this is that the larynx in both chimpanzee and infant is high in the throat, restricting the range of possible tongue movements. An advantage of the high larynx for the infant is that it provides an arrangement of the oral tract such that, like other mammals, the infant can suck through its mouth and breathe through its nose at the same time. Over the first six months of life, the infant's larynx lowers, a special swallowing reflex develops to prevent food from entering the lungs, and the infant becomes capable of producing the vowels of the language spoken around it. The lowered larynx seems to be one of several adaptations of the vocal apparatus that have suited it for speaking as well as for eating and breathing.]

Of course, we do not speak only in vowels. Rather, we speak in runs of syllables, alternately constricting the vocal tract to form consonants, opening it to form vowels. (This repeated opening and closing of the tract produces the rises and falls of amplitude that are the basis of speech rhythm and poetic meter.) What is of interest, as we have already remarked, is that the tract configurations appropriate to particular consonants and vowels do not follow each other in linear sequence. At any instant, each articulator is executing a complex pattern of movement, of which the spatiotemporal coordinates reflect the influence of several neighboring segments. Readers may test this by slowly uttering, for example, the words cool and keel.
They will find that the position of the tongue on the palate during closure for the initial consonant, [k], is slightly further back for the first word than for the second. The result of this interleaving is that, at any instant, the sound is conveying information about more than one phonetic segment, and that each phonetic segment draws information from more than one piece of sound: an obvious problem for automated speech recognition. Unfortunately, we cannot, as was at one time hoped, escape from this predicament by building a machine to recognize syllables, because similar interactions between phonetic segments occur across syllable boundaries. We see all this quite clearly if we examine a sound spectrogram.

The Sound Spectrograph

The sound spectrograph was developed at Bell Telephone Laboratories during World War II, to provide a visible display of the acoustic spectrum of speech as it changes over time. Originally, it was hoped that the device would enable deaf persons to use the telephone (Potter et al., 1947), but this proved impracticable because spectrograms are formidably difficult to read (but see Cole et al., 1980).
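What the spectrograph computes can be approximated digitally by a short-time Fourier analysis: slice the waveform into brief overlapping windows, take each slice's spectrum, and display magnitude (darkness) over time and frequency. A minimal NumPy sketch, with illustrative window and hop sizes:

```python
import numpy as np

def spectrogram(signal, sample_rate, win=160, hop=40):
    """Short-time Fourier magnitudes: rows are time frames, columns
    are frequency bins, mirroring the spectrograph's display of
    time by frequency by darkness. Window (win) and hop sizes are
    illustrative, not the spectrograph's actual analysis bandwidth."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# A pure 1 kHz tone should concentrate its energy in a single band.
sr = 8000
t = np.arange(sr) / sr                      # one second of signal
spec = spectrogram(np.sin(2 * np.pi * 1000 * t), sr)
peak_hz = spec.mean(axis=0).argmax() * sr / 160
print(peak_hz)                              # 1000.0
```

A steady tone yields one dark horizontal band; speech, by contrast, shows several bands (formants) whose frequencies move continuously as the vocal tract changes shape.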

Figure 2 is a spectrogram of the utterance She began to read her book. Frequency on the ordinate is plotted against time on the abscissa. Variations in relative amplitude appear as variations in the darkness of the pattern. The dark bars correspond to formants, that is, to resonant peaks in the vocal tract resonance function. Scattered patches, as at the beginning, correspond to the noise of fricatives, e.g., [f], [s], and stop consonants, e.g., [p], [t]. A series of vertical lines has been drawn, dividing the spectrogram into discrete acoustic segments. There are 25 of these segments, even though the utterance consists of only 17 phonetic segments and 7 syllables. Some of these acoustic segments correspond more or less directly to phonetic segments: thus, segments 1 and 2 correspond to the two sounds of she. Segment 3, on the other hand, corresponds to the first three sounds of began, segments 11 and 12 to the first sound of to, segment 23 to the first two sounds of book.

The sound spectrograph revealed, for the first time, the astonishing variability of the speech signal both within and across speakers. It was also the basis for the first systematic studies of speech perception, from which we have learned which aspects of the signal carry crucial phonetic information. These studies, in turn, provided the basis for the development of speech synthesis. Thus, artificial talking machines, now being used in reading machines for the blind and in a variety of human-machine communication systems, rest squarely on the shoulders of the spectrograph.

Speech Perception

Early work in speech perception was largely guided by the demands of telephonic communication. Its aim was to estimate how much distortion (by filtering, noise, peak-clipping, and so on) could be imposed on the signal without seriously reducing its intelligibility (Licklider and Miller, 1951; Miller, 1951).
Two general conclusions from this work were surprising and important. First, speech is so resistant to distortion that we can throw away large parts of the signal without reducing its intelligibility. Second, intelligibility does not depend on naturalness. These two facts made it possible to learn a great deal about the important information-bearing elements in speech by stripping it down to its minimal cues.

Work of this kind was first undertaken at Haskins Laboratories in New York during the 1950s, as part of a program to develop a suitable output for a reading machine. The key research tool was the Pattern Playback, developed by F.S. Cooper (Cooper, 1950; Cooper and Borst, 1952) to reconvert the visual pattern of a spectrogram into sound. The pattern, painted on a moving acetate belt, reflects frequency-modulated light to a photocell that drives a speaker. Figure 3 illustrates an early spectrogram and its stylized copy. If the copy is passed through the playback, it produces an intelligible version of the utterance to catch pink salmon. The utterance

[FIGURE 2 Spectrogram of the utterance She began to read her book: frequency (ordinate) plotted against time (abscissa), with vertical lines dividing the pattern into 25 discrete acoustic segments.]

[FIGURE 3 Above, a spectrogram of the utterance To catch pink salmon. Below, a stylized copy of the spectrogram, sufficient to regenerate the utterance if played on the Pattern Playback.]

sounds unnatural, partly because the formant bandwidths have been sharply reduced, partly because it is spoken in a monotone.

The playback made it possible for experimenters to manipulate the speech signal systematically, by pruning, deleting, or exaggerating portions of the spectrographic pattern until they had determined the minimal cues for any particular utterance (Liberman, 1957; Liberman et al., 1959). With this device, and with its successors at Haskins and elsewhere, a body of knowledge was built up, sufficient for synthesis by rule of relatively high-quality speech (Fant, 1960, 1968; Flanagan, 1983; Mattingly, 1974).

Several reviews of the perceptual implications of this work have been published (Darwin, 1976; Liberman et al., 1967; Liberman and Studdert-Kennedy, 1978; Studdert-Kennedy, 1974, 1976), and I will not review them here. However, two facts deserve note. First, the cues for a given phonetic segment (that is, for a particular consonant or vowel) vary markedly as a function of context. Figure 4 displays spectrograms of the naturally spoken syllables [did] and [dud]. We know from synthetic speech that a main cue to the initial [d] lies in changes in the second formant after onset. Notice that the second formant rises before [i], falls before [u], and that the rising and falling patterns are precisely reversed for the final [d]. Yet all are heard as [d]. Moreover, if these patterns or their synthetic versions are removed from context and presented to listeners for judgments, they are no longer heard as

[FIGURE 4 Spectrograms of naturally spoken [did] (deed) and [dud] (dood). The acoustic information specifying the alveolar place of articulation of the initial and final consonants is primarily carried by the second formant, centered around 2 kHz for [did] and slightly below 1 kHz for [dud]. Note that this formant forms a parabola, concave downwards in [did], concave upwards in [dud]. Despite this difference, both patterns are heard as beginning and ending with [d].]

[d], nor are they heard as invariant. Rather, they are heard as rising and falling tones (Liberman et al., 1967). In other words, different acoustic patterns are heard as different in a nonspeech context but as the same in a speech context. This is merely one of dozens of such examples.

The second fact of note is that, despite the apparent lack of discrete phonetic segments in the signal, listeners have little difficulty in learning to find segments: so little, in fact, that a segmental representation of speech is the basis of the alphabet.

The interpretation of these facts is still a matter of controversy (e.g., Cole and Scott, 1974; Ladefoged, 1980; Stevens, 1975), and I will not pursue the matter here. However, it is worth noting that such findings gave rise to the hypothesis that humans have evolved a specialized perceptual mechanism for speech, distinct from, though dependent on, their general auditory system (Liberman, 1970, 1982; Liberman and Studdert-Kennedy, 1978; Liberman et al., 1967). The hypothesis has received substantial support from many dozens of studies of dichotic listening over the past 20 years (e.g., Kimura, 1961, 1967; Shankweiler and Studdert-Kennedy, 1967; Studdert-Kennedy and Shankweiler, 1970; for a review, see Porter and Hughes, 1983). The conclusion from this work, and from studies of patients with separated cerebral hemispheres (see section below on brain specialization for language), is that the left hemisphere of most normal right-handed individuals is specialized not only for speaking (as has been known for many years from studies of brain-damaged patients), but also for perceiving speech.
Specifically, there is now good reason to believe that "while the general auditory system common to both hemispheres is equipped to extract the auditory parameters of a speech signal, the dominant [i.e., left] hemisphere may be specialized for the extraction of linguistic features from these parameters" (Studdert-Kennedy and Shankweiler, 1970, p. 579). An important implication of this conclusion is that speech forms an integral part of the left-hemisphere language system discussed below. With this in mind, let us turn to recent work on American Sign Language, which draws on a different perceptuomotor system from that of spoken language.

AMERICAN SIGN LANGUAGE

Speech is the natural medium of language. Specialized structures and functions have evolved for spoken communication: vocal tract morphology; lip, jaw, and tongue innervation; mechanisms of breath control (Lenneberg, 1967); and perhaps even (as I have just suggested) matching perceptual mechanisms. But is there any further specialization for language? Is language an autonomous system, distinct from other cognitive systems, as Chomsky has argued?

An opportunity to address this question has arisen in recent years from an unexpected quarter: sign languages of the deaf. Until some 20 years ago, it was commonly believed that sign languages of the deaf and of other social groups, such as American Plains Indians and Australian aborigines, were either more or less impoverished hybrids of conventional iconic gesture and impromptu pantomime, or artificial systems based, like reading and writing, on a specific spoken language. Artificial systems, such as Signed English and Paget-Gorman, are indeed used in many schools for the deaf: their signs refer to letters (finger-spelling) or higher-order linguistic units (words, morphemes), and their syntax follows that of the base language. However, there are other signed languages, not based on any spoken language, with their own independent lexicons and syntactic systems. The most extensively studied of these is American Sign Language (ASL), the first language of over 100,000 deaf individuals and, according to Mayberry (1978), the fourth most common language (after English, Spanish, and Italian) in the United States.

Modern ASL stems from a French-based sign language introduced into the United States by Thomas Gallaudet in 1817. (According to Stokoe [1974], ASL signers today find French SL more intelligible than British SL, a nice demonstration that ASL is independent of English.) Thus, the original language was in fact based on a spoken language. However, over the past 165 years it has developed among the deaf into an independent sign language. Structural analysis of ASL was first undertaken by Stokoe (1960), and in 1965 he and his colleagues, Casterline and Croneberg (Stokoe et al., 1965), published A Dictionary of American Sign Language on Linguistic Principles, containing a description and English gloss of nearly 2,500 signs.
The dictionary used minimal pair analysis to show that signs contrasted along three independent dimensions: hand configuration, place of articulation, and movement. For example, signs for APPLE and JEALOUS contrast in hand configuration; signs for SUMMER and UGLY contrast in place of articulation; signs for CHAIR and TRAIN contrast in movement (Klima and Bellugi, 1979, p. 42). Stokoe and his colleagues isolated 55 "cheremes" or primes, analogous to the phonemes of a spoken language: 19 for hand configuration, 12 for place of articulation, and 24 for movement. Thus, they demonstrated that ASL has a sublexical structure, analogous to the phonological structure of a spoken language.

ASL also has a second level of structure, a grammar or syntax. This has been demonstrated in an extensive program of research at the Salk Institute for Biological Studies in La Jolla, California, over the past 10 years (Klima and Bellugi, 1979). I will not attempt to review this work in any detail, but several points deserve note. First, ASL has a rule-governed system of

compounding, by which signs may be combined to form a new sign different in meaning from its components. The process is analogous to that by which, in English, hard and hat, say, are combined to form hardhat, meaning a construction worker. Thus, the lexicon of ASL can be expanded by rule, not simply by iconic invention.

Second, ASL has an elaborate system of inflections by which it modulates the meaning of a word. For example, in English, changes in aspectual meaning (that is, distinctions in the onset, duration, frequency, recurrence, permanence, or intensity of an event) are indicated by concatenating morphemes. We may say, he is quiet, he became quiet, he used to be quiet, he tends to be quiet, and so on. All these meanings are conveyed in ASL by distinct modulations of the root sign's movement. In the root sign for QUIET the hands move straight down from the mouth, while for TENDS TO BE QUIET they move down forming a circle. Similarly, related nouns and verbs are also distinguished by movements, while verbs are inflected by movement modulation for person, number, reciprocal action, and aspect.

Third, ASL has a spatial (rather than a temporal) syntax. Nouns introduced into a discourse are assigned arbitrary reference points in a horizontal plane in front of the signer. These points then serve to index grammatical relations among referents: verb signs are executed with a movement between two points, or across several points, to indicate subject and object. Thus, a grammatical function variously served in spoken language by word order, case markers, verb inflections, and pronouns is fulfilled in ASL by a spatial device.

Finally, ASL has a variety of syntactic devices that make use of the face. Liddell (1978) has shown that a relative clause ("The apple that Eve offered tempted him") may be marked by tilting back the head, raising the eyebrows, and tensing the upper lip for the duration of the clause.
Baker and Padden (1978) describe gestures of the face and head that mark the juncture of conditional clauses ("If you eat the fruit, you will be punished"). In short, though structural analysis of ASL is far from complete, it is evident that the language has a dual pattern of form and syntax, fully analogous to that of a spoken language.

Nonetheless, there are differences. The main structural difference between ASL and English was illustrated by Klima and Bellugi (1979) in a comparison of their rates of communication. The times taken to tell a story in the two languages were almost exactly equal. Yet the speaker used two to three times as many words as the signer used signs. The reason for the discrepancy, already hinted at, lies in the temporal distribution of information. Speech, for the most part, develops its patterns in time, sequentially, while ASL develops its patterns both simultaneously, in space, and sequentially. The difference is evidently due to the difference in the perceptual modalities addressed. Sign, addressed

to the eye, is free to package information in parallel; speech, addressed to the ear, is forced into a serial mode. What is interesting, of course, is that despite constraints of modality, the two languages convey information at roughly the same rate. This suggests that they may be operating under the same temporal constraints of cognition.

What, finally, are the implications of this work for the study of speech and language? Evidently, the dual structure of language is not a mere consequence of perceptuomotor modality but a reflection of cognitive requirements. Whether these cognitive requirements are linguistic rather than general is still not clear. Differently put, we still do not know whether the relation between signed and spoken language is one of analogy or homology. If the two systems prove to be homologous, that is, if they prove to draw on the same neural structures and organization, we will have strong evidence that language is a distinct cognitive faculty. However, if they do not draw on the same underlying neural organization, we might suppose that linguistic structure is purely functional, the adventitious consequence of a cognitively complex animal's attempt to communicate its thought. Studies of sign-language breakdown due to brain injury, discussed below, are therefore of unusual interest and importance.

BRAIN SPECIALIZATION FOR LANGUAGE

Most of our knowledge of brain specialization for language comes from those "experiments of nature" in which some more or less circumscribed lesion (due to stroke, epilepsy, congenital malformation, gunshot wounds, and so on) proves to be correlated with some more or less circumscribed cognitive or linguistic deficit (for a brief account of modern brain-scanning techniques, see Benson, 1983, and references therein).
Recently, our sources of knowledge have been expanded by use of brain stimulation, preparatory to surgery under local anesthesia (Ojemann, 1983, and references therein), and by studies of so-called "split-brain" patients whose cerebral hemispheres have been separated surgically for relief of epilepsy (see below). Some degree of concordance between patterns of brain localization in normal and abnormal individuals has been established by experiments on normals in which visual or auditory input is confined, or more clearly delivered, to one hemisphere rather than the other (Moscovitch, 1983).

Evidence From Studies of Aphasia

The term aphasia refers to some impairment in language function, whether of comprehension, production, or both, due to some more or less well localized damage to the brain. Systematic study of aphasia goes back well

over a hundred years, and the literature of the subject is vast (for reviews, see, for example, Goodglass and Geschwind, 1976; Hecaen and Albert, 1978; Lesser, 1978; Luria, 1966, 1970). The most that can be done here is to hint at one area in which linguistics (that is, formal language description) has begun to affect aphasia studies.

Until recently, the standard framework for describing aphasic symptoms was that of the language modalities: speaking, listening, reading, and writing, or, more generally, the dimensions of expression and reception. These are still the dimensions of the major test batteries used to diagnose aphasia, such as the Boston Diagnostic Aphasia Examination (Goodglass and Kaplan, 1972). An important assumption, underlying any attempt at diagnosis, is that damage to a particular region of the brain has particular, not general, effects on language function. The assumption has strong empirical support and has led to the isolation of two (among several other) broad types of aphasia, nonfluent and fluent, respectively associated with damage to the left cerebral hemisphere in an anterior region around the third frontal convolution (Broca's area) and a posterior region around the superior temporal convolution (Wernicke's area). Broca's area lies close to the motor strip of the cortex (in fact, close to that portion of the strip associated with motor control of the jaw, lips, and tongue), while Wernicke's area surrounds the primary auditory region.
In accord with this anatomical dissociation, a Broca's aphasic (that is, an individual with damage to Broca's area) has been classically found to be nonfluent: having good comprehension but awkward speech, characterized by pauses, difficulties in word-finding, and distorted articulation; utterances are described as "telegrammatic," consisting of simple, declarative sentences, relying on nouns and uninflected verbs, omitting grammatical morphemes or function words. By contrast, a Wernicke's aphasic has been found to have poor comprehension, even of single words, but fluent speech, composed of inappropriate or nonexistent (though phonologically correct) words, often inappropriately inflected and/or out of order. Notice that these descriptions are still couched in terms of input and output (that is, modalities of behavior) rather than in linguistic terms.

The idea that linguistic theory should be brought to bear on aphasia, and attempts made to characterize deficits in terms of overarching linguistic function, has been proposed a number of times in the past (e.g., Jakobson, 1941; Pick, 1913). But only recently (again, partly under the influence of Chomsky's view of language as an autonomous system, composed of autonomous syntactic and phonological subsystems) has the idea begun to receive widespread attention. The general hypothesis of the studies described below is that language breaks down along linguistic rather than modal lines of demarcation.

230 MICHAEL STUDDERT-KENNEDY

We will focus mainly on the hypothesis that syntactic competence is discretely and coherently represented in Broca's area of the left frontal lobe. If this is so, the clinical impression that Broca's aphasics have good comprehension, despite their agrammatic speech (and, incidentally, writing), must be in error. More careful testing should reveal deficits in their comprehension, also.

Caramazza and Zurif (1976) tested this hypothesis with three types of sentences: (1) simple declarative sentences in which semantic constraints might permit decoding without appeal to syntax (The apple that the boy is eating is red); (2) so-called reversible sentences that require knowledge of syntactic relations for decoding (The boy that the girl is chasing is tall); and (3) implausible, though grammatically correct, sentences (The boy that the dog is patting is fat). The sentences were presented orally, and patients were asked to choose which of two pictures represented the meaning of the sentence. The incorrect alternative showed either a subject-object reversal or an action different from that specified by the verb.

Broca's aphasics performed very well on simple declarative sentences and on sentences with strong semantic constraints (as when the incorrect alternative depicted the wrong action). On reversible plausible and implausible sentences (when the incorrect alternative depicted a subject-object reversal) the patients' performance was at chance. Caramazza and Zurif (1976) concluded that the clinical impression of good comprehension in Broca's aphasics was due to their ability to draw on semantic and pragmatic constraints to understand sentences despite their inability to process syntax.
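The claim that performance "was at chance" on a two-alternative picture-choice task can be made concrete with a simple binomial calculation: under pure guessing, each trial is a coin flip. The sketch below uses invented scores purely for illustration; it is not the analysis Caramazza and Zurif report.

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of scoring k or better by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical scores on 20 two-choice trials (chance level = 10/20).
p_semantic = binom_tail(18, 20)    # 18/20 on semantically constrained sentences
p_reversible = binom_tail(11, 20)  # 11/20 on reversible sentences

print(p_semantic < 0.001)   # True: far above what guessing would produce
print(p_reversible > 0.05)  # True: consistent with guessing
```

A score of 18/20 would arise by guessing with probability of about 0.0002, while 11/20 would arise about 41 percent of the time, so only the first departs reliably from chance.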
Other studies have shown that Broca's aphasics have difficulty in parsing a sentence into its grammatical constituents (von Stockert, 1972); cannot use articles to assign appropriate reference in understanding a sentence (Goodenough et al., 1977); and cannot, in general, access closed-class grammatical morphemes (Zurif and Blumstein, 1978). These studies are not without their critics (e.g., Linebarger et al., 1983), nor is the general claim that aphasic breakdown is typically (or, indeed, ever) along purely linguistic lines (Studdert-Kennedy, 1983, pp. 193-194): the locus and extent of brain damage in aphasia is largely a matter of chance, and it is rare that language alone is affected. However, we have other sources of evidence to test the hypothesis that syntax is represented in the brain as a functionally discrete subsystem.

Evidence From Split-brain Studies

One source of evidence is the split-brain patient whose cerebral hemispheres have been separated surgically for relief of epilepsy. The condition permits an investigator to assess the cognitive and linguistic capacities of each hemisphere separately. Zaidel (1978) has devised a contact lens, opaque on either the nasal or temporal side, that can be used (profiting from decussation of the optic pathways) to ensure that visual information, though freely scanned, is delivered to a single hemisphere. A variety of written verbal materials (nonsense syllables, words, sentences of varying length and complexity) and pictures can then be used to test the capacities of the isolated hemispheres. For example, the sentences The fish is eating or The fish are eating can be presented to a single hemisphere, together with appropriate alternative pictures, to test the hemisphere's capacity to understand written verbal auxiliaries (is, are) (Zaidel, 1983). Similarly, pictures of various objects belonging to different classes (fruit, furniture, vehicles, etc.) might be presented to a single hemisphere to test the hemisphere's capacity to categorize.

The number of available subjects is, of course, limited. But the conclusions from studies of four split-brain patients are remarkably consistent (Zaidel, 1978, 1980, 1983). In general, each hemisphere seems to have "a complete cognitive system with its own perception, memory, language, and cognitive abilities, but with a unique profile of competencies: good on some abilities, poor on others" (Zaidel, 1980, p. 318). Of particular interest in the present context is the finding that, although the right hemisphere cannot speak, it has a sizable auditory and reading lexicon. However, unlike the left hemisphere, the right cannot read new (nonsense or unknown) words or recognize words for which it has no semantic interpretation. Similarly, the right hemisphere cannot group pictures of objects on the basis of rhyme (e.g., nail, male). Evidently, phonological analysis is the prerogative of the left hemisphere. The syntactic capacity of the right hemisphere is also limited.
The hemisphere can recognize verbal auxiliaries (see above), but has difficulty in discriminating inflections (The fish eat versus The fish eats). Similarly, the right hemisphere can recognize and interpret nouns, adjectives, and certain prepositions, but has difficulty with the English infinitive marker to. These findings on closed-class morphemes mesh to a degree with the deficits of Broca's aphasics, described above. Not surprisingly, the right hemisphere's capacity to understand sentences is sharply reduced: it cannot deal with sentences longer than about three words.

On the evidence of these studies, then, the right hemisphere has essentially no phonological capacity and only a limited syntactic capacity. Unfortunately, the evidence on syntactic capacity is equivocal because all these split-brain patients have had epilepsy since early childhood. Brain disorders are known to lead to reorganization and redistribution of function, particularly in childhood (Lenneberg, 1967; Dennis, 1983). We cannot therefore be sure that such syntactic capacity as the right hemisphere displays does not reflect compensation for left hemisphere deficiencies, induced by epilepsy.

Evidence From Studies of ASL "Aphasia"

Studies of normally hearing, brain-damaged patients have established a double dissociation of brain locus and function in right-handed individuals: the left cerebral hemisphere is specialized for language, the right hemisphere for visual-spatial functions (as revealed, for example, by tests requiring a subject to copy a drawing, assemble wooden blocks into a pattern, or discriminate between photographs of unfamiliar faces). As we have seen, ASL is an autonomous linguistic system with a dual structure analogous to that of spoken language, on the one hand; yet, on the other, it encodes its meanings in visual-spatial rather than auditory-temporal patterns. How then should we expect brain damage to affect the language of a native ASL signer?

The answer bears directly on our understanding of the basis of brain specialization for language. For if language loss in ASL aphasia follows damage to the right hemisphere, we may infer that language is drawn to the hemisphere controlling its perceptuomotor channel of communication. But if language loss follows damage to the left hemisphere, we may infer that the neural structure of that hemisphere is, in some sense, matched to the structure of language, whatever its modality. Language might then be seen as a distinct cognitive faculty, sufficiently abstract in its descriptive predicates to encompass both speaking and signing.

Recent studies at the Salk Institute, the first systematic and linguistically motivated studies of ASL aphasia on record, support the second hypothesis. Moreover, the forms of ASL breakdown vary with locus of lesion in a fashion strikingly similar to certain forms of spoken-language breakdown. Bellugi, Poizner, and Klima (1983) describe three patients, all of whom are native ASL signers and display normal visual-spatial capacity for nonlanguage functions.
Their symptoms, resulting from strokes, divide readily into the two broad classes noted above for spoken language: two patients are fluent, one is nonfluent.

The two fluent patients display quite different symptoms, coordinated with different areas of damage to the left hemisphere. The deficits of one patient (PD) are primarily grammatical; the deficits of the other (KL) are primarily lexical. PD has extensive subcortical damage from below Broca's area in the frontal lobe through the parietal to the temporal lobe, abutting Wernicke's area. PD produces basically normal root signs, but displays an abundance of semantic and grammatical paraphasias. He produces many semantically displaced signs (e.g., EARTH for ROOM, BED for CHAIR, DAUGHTER for WIFE). More strikingly, he often modulates an appropriate root form with an inappropriate or nonsensical inflection. Finally (despite his normal nonlanguage visual-spatial capacity), his spatial syntax is severely disordered: he misuses or avoids spatial indexing (the equivalent of pronominal function, as noted above), and overuses nouns.

The second fluent patient, KL, has more limited damage, extending in a strip across the left parietal lobe. Her deficits, though relatively mild, are almost the reverse of PD's. First, she avoids nouns and overuses pronouns (spatial indexing). Second, she tends to make formational errors in root signs, producing nonsense items by substituting incorrect hand configurations, places of articulation, or movements. Thus, these two fluent patients display almost complementary deficits, breaking along linguistic fault lines, as it were, between lexicon and grammar.

The third patient (GD) is nonfluent. She has massive damage over most of the left frontal lobe, including Broca's area. She produces individual signs correctly (with her nondominant hand, due to paralysis of the right side of her body), and can repeat a test series of signs rapidly and accurately, so that her deficits are not simply motoric. Yet her spontaneous signing invites description by just those epithets that characterize a Broca's aphasic. Her utterances are slow, effortful, short, and agrammatic, largely made up of open-class items. She omits all grammatical formatives, including inflections, morphological modulations, and most spatial indices. In short, this patient, too, displays a peculiarly linguistic rather than a general cognitive pattern of breakdown.

From this brief review of brain specialization for language we may draw several conclusions.
First, language breakdown seems to follow rough linguistic lines of demarcation, indicating that phonology (or patterns of sign formation) and syntax may be supported by separable neural subsystems within the left hemisphere. Second, left hemisphere specialization does not rest on a particular sensorimotor channel. Rather, the hemisphere supports general linguistic functions, common to both spoken and signed language. Thus, despite the left hemisphere's innate predisposition for speech (see section below on language acquisition), its initial neural organization is sufficiently plastic to admit quite different language forms (cf. Neville, 1980; Neville et al., 1982). At the same time, we still do not know enough about the anatomy and physiology of the brain to be sure that areas important for particular functions in spoken language precisely correspond to areas important for analogous functions in signed language: the issue of analogy versus homology is not yet closed.

Several further cautions should be noted. It is not yet clear (either from linguistic theory or from behavioral evidence) that syntax and phonology constitute homogeneous functions: some aspects of syntax and phonology may be separable from some aspects of language, others may not (Dennis, 1983). Second, it is even less clear that we should expect a coherent function, once specified, to be discretely and coherently localized in the brain. In looking for correspondences between one level of description (linguistic) and another level (neurological), we may be guilty of the "first-order isomorphism fallacy" that caused the downfall of phrenology and faculty psychology. The error would be analogous to that of someone who expected a single function of an automobile, say, acceleration, to be discretely and coherently localized in the engine. In fact, of course, the mechanism underlying acceleration is distributed over gears, fuel pump, carburetor, pistons, and so on. Perhaps syntactic and phonological functions emerge, like acceleration, from the coordinated actions of disparate parts.

LANGUAGE ACQUISITION

As many as 5 percent of American children suffer from some form of delayed or disordered language development, and many more join the ranks of the illiterate. Moreover, there is growing evidence that the capacity to read depends in large part on normal development of the primary language processes of speaking and listening (Crain and Shankweiler, in press). Scientific understanding of development is therefore of broad pediatric and educational interest. In the first instance, the work may simply permit us to establish reliable norms, based on a sound understanding of what language acquisition entails. Later, we may hope, the work should lead to more effective therapeutic intervention than is now available.

No area of language study has been more strongly affected by Chomsky's work than language acquisition. Indeed, it is fair to say that until Chomsky's writings began to be widely disseminated among psychologists, in the early 1960s, the field did not exist.
The few psychologists who considered the matter at all (e.g., Mowrer, 1960; Skinner, 1957) assumed that language learning would be subsumed under the general learning theory that behaviorists were striving to develop. Yet today the field has grown to such depth and complexity that a recent volume on the state of the art (Wanner and Gleitman, 1982) lists some 900 references, over half of them published in the last 10 years. The most that I can hope to do here is sketch some of the reasons for this phenomenal growth. What did Chomsky say that aroused such interest? What questions are researchers trying to answer?

Language development is a central issue in Chomsky's thought (e.g., 1965, 1972, 1980), bearing directly on the natural categories of the human mind. The issue arises from four assumptions. First, any grammar sufficient to generate the sentences of a natural language is a complex "system of many . . . rules of . . . different types organized in accordance with certain fixed principles of ordering and applicability and containing a certain fixed substructure" (1972, p. 75). Second, the descriptive predicates of this system (grammatical categories, phonological classes) are not commensurate with those of any other known system in the world or in the mind. Third, the data available to the child in the speech of others is "meager and degenerate." Fourth, no known theory of learning, least of all a stimulus-response reinforcement theory of the kind scathingly criticized by Chomsky in his review (1959) of Skinner's Verbal Behavior (1957), is adequate to account for a child's learning a language. Chomsky (1972) therefore assigns to the mind an innate property, a schema constituting the "universal grammar" to which every language must conform. The schema is highly restrictive, so that the child's search for the grammar of the language it is learning will not be impossibly long.

Chomsky (1972) then divides the research task into three parts. First is the linguist's task: to define the essential properties of human language, the schema or universal grammar. Second is the psychologist's task of determining the minimal conditions that will trigger the child's innate linguistic mechanisms. The third task, closely related to the second, arises from the assumption that most of the utterances a child hears are not well formed. How then is the child to know which utterances to accept as evidence of the grammar it is searching for and which utterances to reject? The third task is therefore to discover the nature of the relation between a set of data and a potential grammar, sufficient to validate the grammar as a theory of the language being learned.

The proposition that language is an innate faculty of the human mind has a long history in Western thought from Plato to Darwin. The proposition is logically independent of any particular theory of language structure.
Indeed, the entire enterprise of generative grammar might fail, yet leave the claim of innateness untouched. Certainly Chomsky's linguistic theories have been, and continue to be, a rich source of hypothesis and experiment in studies of language acquisition. However, his principal achievement in this area has been to force recognition that the learning of a language is an extraordinarily complex process with profound implications for the nature of mind. He has formulated the problem of language learning more precisely than ever before, spelling out its logical prerequisites in a fashion that promises to lead, given appropriate research, to a more precise specification of the innate "knowledge" that a child must bring to bear if it is ever to learn a language at all.

As we have noted, Chomsky's challenge precipitated a vast quantity of research. The first need was for data, for systematic descriptions of how language actually develops. Work initially concentrated on syntactic development (e.g., Brown, 1973), but in the past dozen years has expanded to include phonology (e.g., Yeni-Komshian et al., 1980), semantics (e.g., Carey, 1982; MacNamara, 1982), and pragmatics (e.g., Bates and MacWhinney, 1982). As data have accumulated it has become possible to answer many questions and, of course, to ask many more.

When does language development begin? Can we isolate reliable stages of development across children? Do the same stages occur in different language environments? Is the input to the child truly "meager and degenerate"? Is the child really constructing a grammar? Is the process passive, or must the child actively engage itself? What is the role of imitation? Do we have to posit innate proclivities? If so, are they indeed purely linguistic? And so on.

To see the force of these questions, we must have a sense of the complexity of the task that faces a child learning its native language. From our discussion of the problems of speech perception and automatic speech recognition, it will be obvious that we have much to learn about how the infant discovers invariant phonetic and lexical segments in the speech signal. We still do not know how the infant learns the basic sound pattern of a language during its first two years of life and comes to speak its first few dozen words. But let us set these puzzles aside and go straight to early syntax, where the bulk of child language research has been concentrated. The goal of this work has been to infer from a child's utterances (performance) what it "knows" (competence) about grammar and the meanings encoded by grammar, at each stage of its development.

Consider, as an example, the sentence cited above, I want the apple we picked for supper, a sentence comfortably within the competence of a four-year-old child. What must a child know to produce such a sentence?
We will look at three aspects of its structure to illustrate the basis of Chomsky's claim that grammatical categories do not map in any simple way onto the categories of general cognition.

(1) Word order. A child who utters the sentence evidently knows the standard subject-verb-object (SVO) order of English and so says, I want the apple. The child does not say, as (transposing into English) a Turkish or Japanese child might say, I the apple want (SOV) or The apple I want (OSV). Presumably, the English-speaking child has long since learned that Adam loves Eve does not mean the same as Eve loves Adam. A Turkish or Japanese child, on the other hand, would have learned that uncertainties, due to variable word order, as to the underlying relations expressed in a sentence (who does what to whom) are resolved by attaching appropriate suffixes to subject and object (Slobin, 1982).

So far, the mapping between grammar and world, in the three languages, would seem to be arbitrary but direct. However, we are given pause by another phrase in our example, the apple we picked (= the apple that we picked). Here, in an object relative clause, the order of subject (we) and object (apple) is reversed, and the verb (picked) appears at the end, giving OSV. The switch from SVO (we picked that) to OSV (that we picked) is obligatory in English object relative clauses. Notice that, to apply this rule, a child cannot draw on any knowledge of the world; rather, it must (in some sense) know the grammatical structure of the sentence. We have here, then, another example of structure dependence, noted above in our discussion of interrogatives.

(2) Use of the article. The child says, I want the apple, not I want an apple. Of course, if many apples had been picked, an apple would have been correct. The distinction between definite and indefinite articles seems natural to an English speaker. To a speaker of Russian, Chinese, or other languages in which articles are not used, the distinction might seem tiresome and unnecessary. In fact, rules for use of articles in English are complex and, with respect to the aspects of the world that they encode, seemingly arbitrary. Yet the rules are learned by the third or fourth year of life (Brown, 1973, p. 271).

(3) Noun phrases. As a final example, consider the noun phrase the apple we picked. These four words (article + noun + adjectival phrase) form the grammatical object of the sentence. A child who utters them must already know the general rule for constructing noun phrases in English: the adjective goes before the noun (the red apple), not, as in French, after the noun (la pomme rouge). However, there is an exception to the rule: if the adjective is itself a phrase (that is, a relative clause: that we picked), the adjective must follow the noun (the apple we picked, not the we picked apple).
Once again, the child reveals in its utterance knowledge of a rule of English grammar that cannot be derived from knowledge of the world. In short, there are solid grounds for believing that language structure (both at the level of sound pattern, or phonology, and at the level of syntax) may be sui generis. With this in mind let us briefly review some of what we know about the course of development, with particular attention to the questions with which we began.

The infant is biologically prepared to distinguish speech from nonspeech at, or very soon after, birth. A double dissociation of the left cerebral hemisphere for perceiving speech and of the right hemisphere for perceiving nonspeech sounds within days of birth has been demonstrated both electrophysiologically (e.g., Molfese, 1977) and behaviorally (e.g., Segalowitz and Chapman, 1980). Further, dozens of experiments in the past 10 years have shown that infants, in their first six months of life, can discriminate virtually any adult speech contrast from any language on which they are tested (e.g., [b] versus [p], [d] versus [g], [m] versus [n]) (Aslin et al., 1983; Eimas, 1982). There is also evidence that infants begin to recognize the function of such contrasts, to distinguish words in the surrounding language, during the second half of their first year (Werker, 1982). (For fuller review, see Studdert-Kennedy, 1985.)

In terms of sound production, Oller (1980) has described a regular progression from simple phonation (0-1 months) through canonical babbling (7-10 months) to so-called variegated babbling (11-12 months). The phonetic inventory of babbled sounds is strikingly similar across many languages and even across hearing and deaf infants up to the end of the first year (Locke, 1983). These similarities argue for a universal, rather than language-specific, course of articulatory development. However, around the end of the twelfth month, when the child produces its first words, the influence of the surrounding language becomes evident. From this point on, universals become increasingly difficult to discern, because whatever universals there may be are masked by surface diversity among languages. In this respect, the development of language differs from the development of, say, sensorimotor intelligence or mathematical ability (cf. Gelman and Brown, this volume). Nonetheless, we can already trace some regularities across children within a language and, to some lesser extent, across languages.

The most heavily studied stage of early syntactic development, in both English and some half-dozen other languages, is the so-called two-morpheme stage. Brown (1973) divides early development into five stages on the basis of mean length of utterance (MLU), measured in terms of the number of morphemes in an utterance. The stages are "not . . . true stages in Piaget's sense" (Brown, 1973, p. 58), but convenient, roughly equidistant points from MLU = 2.00 through MLU = 4.00.
The measure provides an index of language development independent of a child's chronological age. Of interest in the present context is that no purely grammatical description of Stage I (MLU = 2.00, with an upper bound of 5.00) has been found satisfactory. Instead, the data are best described by a "rich interpretation," assigning a meaning or function to an utterance on the basis of the context in which it occurs. Brown lists eleven meanings for Stage I constructions, including: naming, recurrence (more cup), nonexistence (all gone egg), agent and action (Mommy go), agent and object (Daddy key), action and location (sit chair), entity and location (Baby table), possessor and possession (Daddy chair), entity and attribute (yellow block). Brown (1973) proposes that these meanings "derive from sensorimotor intelligence, in Piaget's sense . . . [and] probably are universal in humankind but not . . . innate" (p. 201).
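Brown's MLU index, described above, is simple to compute once utterances have been segmented into morphemes. A minimal sketch (the pre-segmented sample is invented; real transcripts require Brown's detailed counting rules):

```python
def mlu(utterances):
    """Mean length of utterance: average number of morphemes per utterance."""
    return sum(len(u) for u in utterances) / len(utterances)

# Hypothetical Stage I sample, already segmented into morphemes.
sample = [
    ["more", "cup"],          # recurrence
    ["all", "gone", "egg"],   # nonexistence
    ["Mommy", "go"],          # agent and action
    ["Daddy", "chair"],       # possessor and possession
]
print(mlu(sample))  # 9 morphemes / 4 utterances = 2.25
```

A child whose transcript averaged 2.25 morphemes per utterance would thus fall near the Stage I midpoint of 2.00.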

We should emphasize that these Stage I patterns reflect semantic, not grammatical, relations even though they may be necessary precursors to the grammatical relations that develop during Stage II (MLU = 2.50, with an upper bound of 7.00). Brown (1973) traced the emergence of 14 grammatical morphemes in three Stage II English-speaking children. The morphemes included: prepositions (in, on), present progressive (I am playing), past regular (jumped), past irregular (broke), plural -s, possessive -s, third person -s (he jumps), and others. The remarkable finding was that all three children acquired the morphemes in roughly the same order (with rank order correlations between pairs of children of 0.86 or more). This result was confirmed in a study of 21 English-speaking children by de Villiers and de Villiers (1973). However, unlike the meanings and functions of Stage I, the more or less invariant order of morpheme acquisition of Stage II has not been confirmed for languages other than English.

Perhaps we should not expect that it will be. Languages differ, as we have seen, in the grammatical devices that they use to mark relations within a sentence. The devices used by one language to express a particular grammatical relation may be, in some uncertain sense, "easier" to learn than the devices used by another language for the same grammatical relation. Slobin (1982) has compared the ages at which four equivalent grammatical constructions are learned in Turkish, Italian, Serbo-Croatian, and English. In each case, the Turkish children developed more rapidly than the other children. If these results are valid and not mere sampling error, the "studies suggest that Turkish is close to an ideal language for early acquisition" (Slobin, 1982, p. 145).
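The rank order correlations used to compare acquisition orders across children are Spearman coefficients computed over the morphemes' acquisition ranks. A toy illustration with invented ranks for five morphemes (Brown's actual study tracked 14):

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two rankings without ties."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n**2 - 1))

# Hypothetical acquisition ranks of five morphemes for two children.
child_1 = [1, 2, 3, 4, 5]  # e.g., -ing, in, on, plural -s, possessive -s
child_2 = [2, 1, 3, 5, 4]  # nearly the same order, two adjacent swaps
print(spearman_rho(child_1, child_2))  # 1 - 6*4/(5*24) = 0.8
```

Even two adjacent swaps in a five-item ordering leave the coefficient at 0.8, so the values of 0.86 or more across Brown's three children indicate nearly identical acquisition orders.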
Unless we suppose that Turkish parents are more attentive to their children's language than Italian, Serbo-Croatian, and English parents, we may take this result as further evidence that "selection pressures" (reinforcement) have little role to play in language learning. Brown and Hanlon (1970) showed some years ago that parents tend to correct the pronunciation and truth value, rather than the syntax, of their children's speech. Indeed, one of the puzzles of language development is why children improve at all. At each stage, the child's speech seems sufficient to satisfy its needs. Neither reinforcement nor imitation of adult speech suffices to explain the improvement. Early speech is replete with forms that the child has presumably never heard: two sheeps, we goed, mine boot. These errors reflect not imitation, but over-generalization of rules for forming plurals, past tenses, and possessive adjectives.

We come then to a guiding assumption of much current research: Learning a first language entails active search for language-specific grammatical patterns (or rules) to express universal cognitive functions. The child may be helped in this by the relative "transparency" (Slobin, 1980) of the speech addressed to it, either because the language itself, like Turkish, is transparent and/or because adult speech to the child is conspicuously well formed. Several studies (e.g., Newport et al., 1977) have shown that the speech addressed to children tends not to be "degenerate." Yet the speech may be "meager" in the sense that relatively few instances suffice to trigger recognition of a pattern (Roeper, 1982). Such rapid learning would seem to require a system specialized for discovering distinctive patterns of sound and syntax in any language to which a child is exposed.

Finally, it is worth remarking that all normal children do learn a language, just as they learn to walk. Western societies acknowledge this in their attitude to children who fail: we regard them as handicapped or defective, and we arrange clinics and therapeutic settings to help them. As Dale (1976) has remarked, we do not do the same for children who cannot learn to play the piano, do long division, or ride a bicycle. Of course, children vary in intelligence, but not until I.Q. drops below about 50 do language difficulties begin to appear (Lenneberg, 1967). Children at a given level of maturation also vary in how much they talk, what they talk about, and how many words they know. Where they vary little, it seems, is in their grasp of the basic principles of the language system: its sound structure and syntax.

CONCLUSION

The past 50 years have seen a vast increase in our knowledge of the biological foundations of language. Rather than attempt even a sampling of the issues raised by the research we have reviewed, let me end by emphasizing a point with which I began: the interplay between basic and applied research, and between research and theory.
The advances have come about partly through technological innovations, permitting, for example, physical analysis of the acoustic structure of speech and precise localization of brain abnormalities; partly through methodological gains in the experimental analysis of behavior; and partly through growing social concern with the blind, the deaf, and the otherwise language-handicapped. Yet these scattered elements would still be scattered had they not been brought together by a theoretical shift from description to explanation.

Perhaps the most striking aspect of the development is its unpredictability. Fifty years ago no one would have predicted that formal study of syntax would offer a theoretical framework for basic research in language acquisition, now a thriving area of modern experimental psychology, with important implications for treatment of the language-handicapped. No one would have predicted that applied research on reading machines for the blind would contribute to basic research in human phonetic capacity, lending experimental support to the formal linguistic claim of the independence of phonology and syntax. Nor, finally, would anyone have predicted that basic psycholinguistic research in American Sign Language would provide a unique approach to the understanding of brain organization for language and to testing the hypothesis, derived from linguistic theory, that language is a distinct faculty of the human mind.

Presumably, continued research in the areas we have reviewed and in related areas that we have not (such as the acquisition of reading, the motor control and coordination of articulatory action, and second language learning) will consolidate our view of language as an autonomous system of nested subsystems (phonology, syntax). Beyond this lies the further task of unfolding the language system, tracing its evolutionary and ontogenetic origins in the nonlinguistic systems that surround it and from which, in the last analysis, it must derive. We would be rash to speculate on the diverse areas of research and theory that will contribute to this development.

* * *

I thank Ignatius Mattingly for comments and advice.

REFERENCES

Aslin, R.N., Pisoni, D.B., and Jusczyk, P.W.
1983 Auditory development and speech perception in infancy. In M.M. Haith and J.J. Campos, eds., Infancy and the Biology of Development. Vol. II: Carmichael's Manual of Child Psychology. 4th ed. New York: John Wiley and Sons.

Baker, C., and Padden, C.A.
1978 Focusing on the nonmanual components of American Sign Language. Pp. 27-58 in P. Siple, ed., Understanding Language Through Sign Language Research. New York: Academic Press.

Bates, E., and MacWhinney, B.
1982 Functionalist approaches to grammar. Pp. 173-218 in E. Wanner and L.R. Gleitman, eds., Language Acquisition: The State of the Art. New York: Cambridge University Press.

Bellugi, U., Poizner, H., and Klima, E.S.
1983 Brain organization for language: clues from sign aphasia. Human Neurobiology 2:155-170.

Benson, D.F.
1983 Cerebral metabolism. Pp. 205-211 in M. Studdert-Kennedy, ed., Psychobiology of Language. Cambridge: MIT Press.

Bever, T.G.
1970 The cognitive basis for linguistic studies. In J.R. Hayes, ed., Cognition and the Development of Language. New York: John Wiley and Sons.

Bloomfield, L.
1933 Language. New York: Holt.

Brown, R.
1973 A First Language: The Early Stages. Cambridge: Harvard University Press.

Brown, R., and Hanlon, C.
1970 Derivational complexity and order of acquisition in child speech. In J.R. Hayes, ed., Cognition and the Development of Language. New York: John Wiley and Sons.

Cairns, H.S., and Cairns, C.E.
1976 Psycholinguistics. New York: Holt, Rinehart and Winston.

Caramazza, A., and Zurif, E.B.
1976 Comprehension of complex sentences in children and aphasics: a test of the regression hypothesis. Pp. 145-161 in A. Caramazza and E.B. Zurif, eds., Language Acquisition and Language Breakdown. Baltimore: Johns Hopkins University Press.

Carey, S.
1982 Semantic development: the state of the art. In E. Wanner and L. Gleitman, eds., Language Acquisition: The State of the Art. New York: Cambridge University Press.

Chiba, T., and Kajiyama, M.
1941 The Vowel: Its Nature and Structure. Tokyo: Tokyo-Kaiseikan.

Chomsky, N.
1956 Three models for the description of language. IRE Transactions on Information Theory IT-2:113-124.
1957 Syntactic Structures. The Hague: Mouton.
1959 Review of Verbal Behavior by B.F. Skinner. Language 35:26-58.
1965 Aspects of the Theory of Syntax. Cambridge: MIT Press.
1972 Language and Mind. New York: Harcourt Brace Jovanovich (revised edition).
1975 Reflections on Language. New York: Random House.
1980 Rules and representations. The Behavioral and Brain Sciences 3:1-62.

Chomsky, N., and Halle, M.
1968 The Sound Pattern of English. New York: Harper and Row.

Cole, R.A., and Scott, B.
1974 Toward a theory of speech perception. Psychological Review 81:348-374.

Cole, R.A., Rudnicky, A., Reddy, R., and Zue, V.W.
1980 Speech as patterns on paper. In R.A. Cole, ed., Perception and Production of Fluent Speech. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Cooper, F.S.
1950 Spectrum analysis. Journal of the Acoustical Society of America 22:761-762.
1972 How is language conveyed by speech? Pp. 25-45 in J.F. Kavanagh and I.G. Mattingly, eds., Language by Ear and by Eye: The Relationships Between Speech and Reading. Cambridge: MIT Press.

Cooper, F.S., and Borst, J.M.
1952 Some experiments on the perception of synthetic speech sounds. Journal of the Acoustical Society of America 24:597-606.

Cooper, F.S., Gaitenby, J., and Nye, P.W.
1984 Evolution of reading machines for the blind: Haskins Laboratories' research as a case history. Journal of Rehabilitation Research and Development 21:51-87.

Crain, S., and Shankweiler, D.
In press Reading acquisition and language acquisition. In A. Davison, G. Green, and G. Herman, eds., Critical Approaches to Readability: Theoretical Bases of Linguistic Complexity. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Dale, P.S.
1976 Language Development. 2nd ed. New York: Holt, Rinehart and Winston.

Darwin, C.J.
1976 The perception of speech. In E.C. Carterette and M.P. Friedman, eds., Handbook of Perception. Vol. 7: Language and Speech. New York: Academic Press.

Dennis, M.
1983 Syntax in brain-injured children. Pp. 195-202 in M. Studdert-Kennedy, ed., Psychobiology of Language. Cambridge: MIT Press.

de Saussure, F.
1966 Course in General Linguistics (Translated by Wade Baskin). New York: McGraw-Hill.

de Villiers, J.G., and de Villiers, P.A.
1973 A cross-sectional study of the acquisition of grammatical morphemes. Journal of Psycholinguistic Research 2:267-278.

Eimas, P.D.
1982 Speech perception: a view of the initial state and perceptual mechanisms. Pp. 339-360 in J. Mehler, E.C.T. Walker, and M. Garrett, eds., Perspectives on Mental Representation. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Fant, G.
1960 Acoustic Theory of Speech Production. The Hague: Mouton.
1962 Descriptive analysis of the acoustic aspects of speech. Logos 5:3-17.
1968 Analysis and synthesis of speech processes. Pp. 173-277 in B. Malmberg, ed., Manual of Phonetics. Amsterdam: North-Holland.
1973 Descriptive analysis of the acoustic aspects of speech. Chapter 2 in Speech Sounds and Features. Cambridge: MIT Press.

Flanagan, J.L.
1983 Speech Analysis, Synthesis and Perception. Heidelberg: Springer-Verlag.

Fodor, J.
1982 Modularity of Mind. Cambridge: MIT Press.

Fodor, J.A., Bever, T.G., and Garrett, M.F.
1974 The Psychology of Language. New York: McGraw-Hill.

Foss, D.J., and Hakes, D.T.
1978 Psycholinguistics: An Introduction to the Psychology of Language. Englewood Cliffs, N.J.: Prentice-Hall.

Givon, T.
1979 On Understanding Grammar. New York: Academic Press.

Goodenough, C., Zurif, E.B., and Weintraub, S.
1977 Aphasics' attention to grammatical morphemes. Language and Speech 20:11-20.

Goodglass, H., and Geschwind, N.
1976 Language disturbance (aphasia). In E.C. Carterette and M.P. Friedman, eds., Handbook of Perception. Vol. 7. New York: Academic Press.

Goodglass, H., and Kaplan, E.
1972 The Assessment of Aphasia and Related Disorders. Philadelphia: Lea and Febiger.

Hecaen, H., and Albert, M.L.
1978 Human Neuropsychology. New York: John Wiley and Sons.

Hockett, C.F.
1960 The origin of speech. Scientific American 203:89-96.
1968 The State of the Art. The Hague: Mouton.

Jakobson, R.
1941 Kindersprache, Aphasie, und Allgemeine Lautgesetze. Stockholm: Almqvist and Wiksell.

Joos, M.
1948 Acoustic phonetics. Language Monograph 23(24):Supplement.

Katz, J.J.
1981 Language and Other Abstract Objects. Totowa, N.J.: Rowman and Littlefield.

Kimura, D.
1961 Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology 15:166-171.

1967 Functional asymmetry of the brain in dichotic listening. Cortex 3:163-178.

Kiparsky, P.
1968 Linguistic universals and linguistic change. Pp. 171-202 in E. Bach and R. Harms, eds., Universals in Linguistic Theory. New York: Holt, Rinehart and Winston.

Klima, E.S., and Bellugi, U.
1979 The Signs of Language. Cambridge: Harvard University Press.

Kurath, H.
1939 Handbook of the Linguistic Geography of New England (With the collaboration of Marcus L. Hansen, Julia Bloch, and Bernard Bloch). Providence, R.I.: Brown University Press.

Labov, W.
1972 Sociolinguistic Patterns. Philadelphia: University of Pennsylvania Press.

Ladefoged, P.
1980 What are linguistic sounds made of? Language 56:485-502.

Lashley, K.S.
1951 The problem of serial order in behavior. Pp. 112-136 in L.A. Jeffress, ed., Cerebral Mechanisms in Behavior. New York: John Wiley and Sons.

Lehmann, W.P.
1973 Historical Linguistics. New York: Holt, Rinehart and Winston.

Lenneberg, E.H.
1967 Biological Foundations of Language. New York: John Wiley and Sons.

Lesser, R.
1978 Linguistic Investigations of Aphasia. New York: Elsevier.

Levinson, S.E., and Liberman, M.Y.
1981 Speech recognition by computer. Scientific American, April.

Levi-Strauss, C.
1966 The Savage Mind. Chicago: University of Chicago Press.

Liberman, A.M.
1957 Some results of research on speech perception. Journal of the Acoustical Society of America 29:117-123.
1970 The grammars of speech and language. Cognitive Psychology 1:301-323.
1982 On finding that speech is special. American Psychologist 37:148-167.

Liberman, A.M., and Studdert-Kennedy, M.
1978 Phonetic perception. Pp. 143-178 in R. Held, H.W. Leibowitz, and H.-L. Teuber, eds., Handbook of Sensory Physiology. Vol. VIII: Perception. New York: Springer-Verlag.

Liberman, A.M., Cooper, F.S., Shankweiler, D., and Studdert-Kennedy, M.
1967 Perception of the speech code. Psychological Review 74:431-461.

Liberman, A.M., Ingemann, F., Lisker, L., Delattre, P.C., and Cooper, F.S.
1959 Minimal rules for synthesizing speech. Journal of the Acoustical Society of America 31:1490-1499.

Licklider, J.C.R., and Miller, G.
1951 The perception of speech. In S.S. Stevens, ed., Handbook of Experimental Psychology. New York: John Wiley and Sons.

Liddell, S.K.
1978 Nonmanual signals and relative clauses in American Sign Language. Pp. 59-90 in P. Siple, ed., Understanding Language Through Sign Language Research. New York: Academic Press.

Lieberman, P., and Crelin, E.S.
1971 On the speech of Neanderthal man. Linguistic Inquiry 2:203-222.

Lieberman, P., Crelin, E.S., and Klatt, D.H.
1972 Phonetic ability and related anatomy of the newborn, adult human, Neanderthal man, and the chimpanzee. American Anthropologist 74:287-307.

Limber, J.
1973 The genesis of complex sentences. In T.E. Moore, ed., Cognitive Development and the Acquisition of Language. New York: Academic Press.

Linebarger, M.C., Schwartz, M.F., and Saffran, E.M.
1983 Sensitivity to grammatical structure in so-called agrammatic aphasics. Cognition 13:361-392.

Locke, J.
1983 Phonological Acquisition and Change. New York: Academic Press.

Luria, A.R.
1966 Higher Cortical Functions in Man. New York: Basic Books.
1970 Traumatic Aphasia. The Hague: Mouton.

MacNamara, J.
1982 Names for Things. Cambridge: MIT Press.

Mattingly, I.G.
1968 Experimental methods for speech synthesis by rule. IEEE Transactions on Audio and Electroacoustics AU-16:198-202.
1974 Speech synthesis for phonetic and phonological models. Pp. 2451-2487 in T.A. Sebeok, ed., Current Trends in Linguistics. Vol. 12. The Hague: Mouton.

Mayberry, R.I.
1978 Manual communication. In H. Davis and S.R. Silverman, eds., Hearing and Deafness. 4th ed. New York: Holt, Rinehart and Winston.

Mayr, E.
1974 Behavior programs and evolutionary strategies. American Scientist 62:650-659.

Miller, G.A.
1951 Language and Communication. New York: McGraw-Hill.

Miller, G.A., Galanter, E., and Pribram, K.H.
1960 Plans and the Structure of Behavior. New York: Henry Holt and Company, Inc.

Molfese, D.L.
1977 Infant cerebral asymmetry. In S.J. Segalowitz and F.A. Gruber, eds., Language Development and Neurological Theory. New York: Academic Press.

Moscovitch, M.
1983 Stages of processing and hemispheric differences in language in the normal subject. Pp. 88-104 in M. Studdert-Kennedy, ed., Psychobiology of Language. Cambridge: MIT Press.

Mowrer, O.H.
1960 Learning Theory and the Symbolic Processes. New York: John Wiley and Sons.

Muller, J.
1848 The Physiology of the Senses, Voice and Muscular Motion with the Mental Faculties (Translated by W. Baly). New York: Walton and Maberly.

Neville, H.J.
1980 Event-related potentials in neuropsychological studies of language. Brain and Language 11:300-318.

Neville, H.J., Kutas, M., and Schmidt, A.
1982 Event-related potential studies of cerebral specialization during reading. II. Studies of congenitally deaf adults. Brain and Language 16:316-337.

Newport, E.L., Gleitman, H., and Gleitman, L.R.
1977 Mother, I'd rather do it myself: some effects and non-effects of maternal speech style. In C. Snow and C. Ferguson, eds., Talking to Children: Language Input and Acquisition. Cambridge, England: Cambridge University Press.

Oettinger, A.
1972 The semantic wall. In E.E. David and P.B. Denes, eds., Human Communication: A Unified View. New York: McGraw-Hill.

Ojemann, G.A.
1983 Brain organization for language from the perspective of electrical stimulation mapping. The Behavioral and Brain Sciences 6:218-219.

Oller, D.K.
1980 The emergence of the sounds of speech in infancy. Pp. 93-112 in G.H. Yeni-Komshian, J.F. Kavanagh, and C.A. Ferguson, eds., Child Phonology. Vol. 1: Production. New York: Academic Press.

Pick, A.
1913 Die Agrammatischen Sprachstorungen. Berlin: Springer.

Porter, R.J., Jr., and Hughes, L.F.
1983 Dichotic listening to CV's: method, interpretation and application. In J. Hellige, ed., Cerebral Hemispheric Asymmetry: Method, Theory, and Application. Praeger Science Publishers: University of Southern California Press.

Potter, R.K., Kopp, G.A., and Green, H.C.
1947 Visible Speech. New York: D. Van Nostrand Co., Inc.

Pylyshyn, Z.W.
1980 Computation and cognition: issues in the foundations of cognitive science. The Behavioral and Brain Sciences 3:111-169.

Reddy, D.R.
1975 Speech Recognition: Invited Papers Presented at the 1974 IEEE Symposium. New York: Academic Press.

Roeper, T.
1982 The role of universals in the acquisition of gerunds. In E. Wanner and L.R. Gleitman, eds., Language Acquisition: The State of the Art. New York: Cambridge University Press.

Seashore, R.H., and Erickson, L.D.
1940 The measurement of individual differences in general English vocabularies. Journal of Educational Psychology 31:14-38.

Segalowitz, S.J., and Chapman, J.S.
1980 Cerebral asymmetry for speech in neonates: a behavioral measure. Brain and Language 9:281-288.

Shankweiler, D., and Studdert-Kennedy, M.
1967 Identification of consonants and vowels presented to the left and right ears. Quarterly Journal of Experimental Psychology 19:59-63.

Skinner, B.F.
1957 Verbal Behavior. New York: Appleton-Century-Crofts.

Slobin, D.I.
1980 The repeated path between transparency and opacity in language. In U. Bellugi and M. Studdert-Kennedy, eds., Signed and Spoken Language: Biological Constraints on Linguistic Form. Weinheim: Verlag Chemie.
1982 Universal and particular in the acquisition of language. In E. Wanner and L.R. Gleitman, eds., Language Acquisition: The State of the Art. New York: Cambridge University Press.

Stevens, K.N.
1975 The potential role of property detectors in the perception of consonants. Pp. 303-330 in G. Fant and M.A.A. Tatham, eds., Auditory Analysis and Perception of Speech. New York: Academic Press.

Stevens, K.N., and House, A.S.
1955 Development of a quantitative description of vowel articulation. Journal of the Acoustical Society of America 27:484-493.
1961 An acoustical theory of vowel production and some of its implications. Journal of Speech and Hearing Research 4:303-320.

Stokoe, W.C., Jr.
1960 Sign language structure. Studies in Linguistics: Occasional Papers 8. Buffalo: Buffalo University Press.
1974 Classification and description of sign languages. Pp. 345-371 in T.A. Sebeok, ed., Current Trends in Linguistics. Vol. 12. The Hague: Mouton.

Stokoe, W.C., Jr., Casterline, D.C., and Croneberg, C.G.
1965 A Dictionary of American Sign Language. Washington, D.C.: Gallaudet College Press.

Studdert-Kennedy, M.
1974 The perception of speech. In T.A. Sebeok, ed., Current Trends in Linguistics. Vol. 12. The Hague: Mouton.
1976 Speech perception. Pp. 243-293 in N.J. Lass, ed., Contemporary Issues in Experimental Phonetics. New York: Academic Press.
1983 (ed.) Psychobiology of Language. Cambridge: MIT Press.
1985 Sources of variability in early speech development. In J.S. Perkell and D.H. Klatt, eds., Invariance and Variability of Speech Processes. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Studdert-Kennedy, M., and Lane, H.
1980 Clues from the differences between signed and spoken language. Pp. 29-39 in U. Bellugi and M. Studdert-Kennedy, eds., Signed and Spoken Language: Biological Constraints on Linguistic Form. Deerfield Park, Fla.: Verlag Chemie.

Studdert-Kennedy, M., and Shankweiler, D.P.
1970 Hemispheric specialization for speech perception. Journal of the Acoustical Society of America 48:579-594.

Templin, M.
1957 Certain Language Skills of Children. Minneapolis: University of Minnesota Press.

Von Stockert, T.
1972 Recognition of syntactic structure in aphasic patients. Cortex 8:323-335.

Wanner, E., and Gleitman, L.R., eds.
1982 Language Acquisition: The State of the Art. New York: Cambridge University Press.

Werker, J.F.
1982 The Development of Cross-Language Speech Perception: The Effect of Age, Experience and Context on Perceptual Organization. Unpublished Ph.D. dissertation, University of British Columbia.

Wilson, E.O.
1975 Sociobiology. Cambridge: The Belknap Press.

Yeni-Komshian, G.H., Kavanagh, J.F., and Ferguson, C.A., eds.
1980 Child Phonology. Vols. 1 and 2. New York: Academic Press.

Zaidel, E.
1978 Lexical organization in the right hemisphere. Pp. 177-197 in P.A. Buser and A. Rougeul-Buser, eds., Cerebral Correlates of Conscious Experience. Amsterdam: Elsevier/North-Holland Biomedical Press.
1980 Clues from hemispheric specialization. In U. Bellugi and M. Studdert-Kennedy, eds., Signed and Spoken Language: Biological Constraints on Linguistic Form. Weinheim: Verlag Chemie.
1983 On multiple representations of the lexicon in the brain: the case of two hemispheres. Pp. 105-125 in M. Studdert-Kennedy, ed., Psychobiology of Language. Cambridge: MIT Press.

Zurif, E.B., and Blumstein, S.E.
1978 Language and the brain. In M. Halle, J. Bresnan, and G.A. Miller, eds., Linguistic Theory and Psychological Reality. Cambridge: MIT Press.
