Psycholinguistics

Language

Cuneiform is one of the first known forms of written language, but spoken language is believed to predate writing by tens of thousands of years at least.

Language is a term most commonly used to refer to so-called "natural languages", the forms of communication considered peculiar to humankind. By extension, the term also refers to the type of human thought process which creates and uses language. Essential to both meanings is the systematic creation, maintenance and use of systems of symbols, each referring to concepts different from themselves.

The most obvious manifestations are spoken languages, such as English or Chinese. The English word "language", for example, derives ultimately from lingua, the Latin word for tongue, and "tongue" can still be used in English to refer to spoken language. But there are also written languages and other systems of visual symbols, such as sign languages. Although some other animals make use of quite sophisticated communicative systems, sometimes casually referred to as animal language, none of these are known to make use of all of the properties that linguists use to define language in the strict sense.

When discussed more technically as a general phenomenon, "language" always implies a particular type of human thought that can be present even when communication is not the result, and this way of thinking is also sometimes treated as indistinguishable from language itself.

In Western philosophy, for example, language has long been closely associated with reason, which is also a uniquely human way of using symbols. In Ancient Greek philosophical terminology, the same word, logos, was used as a term for both language (or speech) and reason, and the philosopher Thomas Hobbes used the English word "speech" so that it could similarly refer to reason, as will be discussed below.

Properties of language

A set of commonly accepted signs (indices, icons or symbols) is only one feature of language; all languages must define (i) the structural relationships between these signs in a system of grammar, (ii) the context wherein the signs are used (pragmatics) and (iii) the content specific to a given context, i.e. the signs' meaning (semantics). Rules of grammar are one of the characteristics sometimes said to distinguish language from other forms of communication. They allow a finite set of signs to be manipulated to create a potentially infinite number of grammatical utterances.
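To make the finite-rules, infinite-output point concrete, here is a minimal sketch in Python of a toy context-free grammar with one recursive rule. The grammar, vocabulary, and depth cap are invented for illustration and are not drawn from any particular linguistic analysis.

import random

# A toy context-free grammar: finite rules, unbounded output.
# The recursive rule NP -> NP PP is what makes the language infinite.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["NP", "PP"]],
    "VP": [["V", "NP"]],
    "PP": [["P", "NP"]],
    "N":  [["dog"], ["cat"], ["telescope"]],
    "V":  [["saw"], ["chased"]],
    "P":  [["with"], ["near"]],
}

def generate(symbol="S", depth=0, max_depth=5):
    """Expand a symbol by randomly choosing one of its rewrite rules."""
    if symbol not in GRAMMAR:          # terminal word
        return [symbol]
    rules = GRAMMAR[symbol]
    if depth >= max_depth:             # cap recursion so the demo terminates
        rules = [r for r in rules if symbol not in r] or rules
    rule = random.choice(rules)
    return [w for part in rule for w in generate(part, depth + 1, max_depth)]

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(generate()))
    # e.g. "the dog saw the cat near the telescope"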

Another property of language is that its symbols are arbitrary. Any concept or grammatical rule can be mapped onto a symbol. In other words, most languages make use of sound, but the combinations of sounds used do not have any necessary, inherent meaning; they are merely an agreed-upon convention used to represent a certain thing by the users of that language. For instance, there is nothing about the Spanish word nada itself that forces Spanish speakers to convey the idea of "nothing". Another set of sounds (for example, the English word nothing) could equally be used to represent the same concept, but all Spanish speakers have acquired or learned to associate this meaning with this particular sound pattern. For Slovenian, Croatian, Serbian or Bosnian speakers, on the other hand, nada means something else: "hope".
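The conventional nature of the form-meaning pairing can be made concrete with a trivial Python sketch using invented miniature lexicons that mirror the nada example above: the same sound pattern maps to different concepts, or to nothing at all, depending only on which community's convention is consulted.

# Arbitrariness of the sign: the same form maps to different concepts
# depending only on the conventions of the speech community.
# (Tiny hand-made example data, mirroring the "nada" example above.)
lexicons = {
    "Spanish":  {"nada": "nothing"},
    "Croatian": {"nada": "hope"},
    "English":  {"nothing": "nothing", "hope": "hope"},
}

def meaning(language, form):
    return lexicons.get(language, {}).get(form, "<no convention for this form>")

for lang in ("Spanish", "Croatian", "English"):
    print(f"{lang:9s} 'nada' -> {meaning(lang, 'nada')}")
# Spanish   'nada' -> nothing
# Croatian  'nada' -> hope
# English   'nada' -> <no convention for this form>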

This arbitrariness even applies to words with an onomatopoetic dimension (i.e. words that to some extent simulate the sound of the token referred to). For example, several animal names (e.g. cuckoo, whip-poor-will, katydid) are derived from sounds the respective animal makes, but these forms did not have to be chosen for these meanings. Non-onomatopoetic words can stand just as easily for the same meaning. For instance, the katydid is called a "bush cricket" in British English, a term that bears no relation to the sound the animal makes. In time, onomatopoetic words can also change in form, losing their mimetic status. Onomatopoetic words may resemble the sounds of their referents, but the pairing of form and meaning remains a convention rather than a necessity, so such words do not violate arbitrariness.

Origin of language

Even before the theory of evolution made discussion of more animal-like human ancestors commonplace, philosophical and scientific speculation about early language was frequent throughout history. In modern Western philosophy, speculation by authors such as Thomas Hobbes and later Jean-Jacques Rousseau led to the Académie française declaring the subject off-limits. The origin of language is of great interest to philosophers because language is such an essential characteristic of human life. In classical Greek philosophy such inquiry was approached by considering the nature of things, in this case human nature. Aristotle, for example, treated humans as creatures with reason and language by their intrinsic nature, related to their natural propensity to be "political" and to dwell in city-state communities (Greek: poleis).[1]

Hobbes, followed by John Locke and others, claimed that language is an extension of the "speech" which humans have within themselves, which in a sense takes the classical view that reason is one of the most primary characteristics of human nature. Others have argued the opposite: that reason developed out of the need for more complex communication. Rousseau, despite writing[2] before the publication of Darwin's theory of evolution, claimed that there had once been humans with neither language nor reason, and that they developed language first rather than reason; he explicitly described the development of both as a mixed blessing, with many negative characteristics.

Since the arrival of Darwin, the subject has been approached more often by scientists than by philosophers. For example, the neurologist Terrence Deacon, in his book The Symbolic Species, has argued that reason and language "coevolved." Merlin Donald sees language as a later development building upon what he refers to as mimetic culture,[3] emphasizing that this coevolution depended upon the interactions of many individuals. He writes:

"A shared communicative culture, with sharing of mental representations to some degree, must have come first, before language, creating a social environment in which language would have been useful and adaptive."[4]

The specific causes of the natural selection that led to language are, however, still the subject of much speculation. A common theme, which goes right back to Aristotle, is that many theories place the gains to be had from language and/or reason mainly in the area of increasingly sophisticated social structures.

In more recent times a theory of mirror neurons has emerged in relation to language. Ramachandran[5] has gone so far as to claim that "mirror neurons will do for psychology what DNA did for biology: they will provide a unifying framework and help explain a host of mental abilities that have hitherto remained mysterious and inaccessible to experiments". Mirror neurons are located in the human inferior frontal cortex and superior parietal lobe, and are unique in that they fire when completing an action and also when witnessing an actor performing the same action. Various studies have proposed a theory of mirror neurons related to language development.[6][7][8]

The study of language

Linguistics

Linguistics is the scientific study of language, encompassing a number of sub-fields. At the core of theoretical linguistics are the study of language structure (grammar) and the study of meaning (semantics). The first of these encompasses morphology (the formation and composition of words), syntax (the rules that determine how words combine into phrases and sentences) and phonology (the study of sound systems and abstract sound units). Phonetics is a related branch of linguistics concerned with the actual properties of speech sounds (phones), non-speech sounds, and how they are produced and perceived.

Theoretical linguistics is mostly concerned with developing models of linguistic knowledge. The fields that are generally considered as the core of theoretical linguistics are syntax, phonology, morphology, and semantics. Applied linguistics attempts to put linguistic theories into practice through areas like translation, stylistics, literary criticism and theory, discourse analysis, speech therapy, speech pathology and foreign language teaching.

History

Main article: History of linguistics

The historical record of linguistics begins in India with Pāṇini, the 5th-century BCE grammarian who formulated 3,959 rules of Sanskrit morphology, known as the Aṣṭādhyāyī, and with Tolkāppiyar, the 2nd-century BCE grammarian of the Tamil work Tolkāppiyam.[9] Pāṇini's grammar is highly systematized and technical. Inherent in its analytic approach are the concepts of the phoneme, the morpheme, and the root; Western linguists only recognized the phoneme some two millennia later. Tolkāppiyar's work is perhaps the first to describe articulatory phonetics for a language. Its classification of the alphabet into consonants and vowels, and of elements such as nouns and verbs into classes, were also breakthroughs at the time. In the Middle East, the Persian linguist Sibawayh made a detailed and professional description of Arabic in 760 CE in his monumental work Al-kitab fi al-nahw (The Book on Grammar), bringing many linguistic aspects of language to light. In his book, he distinguished phonetics from phonology.

Later in the West, the success of science, mathematics, and other formal systems in the 20th century led many to attempt a formalization of the study of language as a "semantic code". This resulted in the academic discipline of linguistics, the founding of which is attributed to Ferdinand de Saussure. In the 20th century, substantial contributions to the understanding of language came from Ferdinand de Saussure, Hjelmslev, Émile Benveniste and Roman Jakobson, whose work is characterized as being highly systematic.[10]

Human languages

Main article: Natural language

Some of the areas of the brain involved in language processing: Broca's area (blue), Wernicke's area (green), supramarginal gyrus (yellow), angular gyrus (orange), primary auditory cortex (pink)

Human languages are usually referred to as natural languages, and the science of studying them falls under the purview of linguistics. A common progression for natural languages is that they are considered to be first spoken, then written, and then an understanding and explanation of their grammar is attempted.

Languages live, die, move from place to place, and change with time. Any language that ceases to change or develop is categorized as a dead language. Conversely, any language that is in a continuous state of change is known as a living language or modern language.

Making a principled distinction between one language and another is usually impossible.[11] For instance, there are a few dialects of German similar to some dialects of Dutch. The transition between languages within the same language family is sometimes gradual (see dialect continuum).

Some like to make parallels with biology, where it is not possible to make a well-defined distinction between one species and the next. In either case, the ultimate difficulty may stem from the interactions between languages and populations. (See Dialect or August Schleicher for a longer discussion.)

The concepts of Ausbausprache, Abstandsprache and Dachsprache are used to make finer distinctions about the degrees of difference between languages or dialects.

Artificial languages

Constructed languages

Some individuals and groups have constructed their own artificial languages, for practical, experimental, personal, or ideological reasons. International auxiliary languages are generally constructed languages that strive to be easier to learn than natural languages; other constructed languages strive to be more logical ("loglangs") than natural languages; a prominent example of this is Lojban.

Some writers, such as J. R. R. Tolkien, have created fantasy languages for literary, artistic or personal reasons. The fantasy language of the Klingon race from the Star Trek series has in recent years been developed by fans, complete with a vocabulary and grammar.

Constructed languages are not necessarily restricted to the properties shared by natural languages.

The ISO 639 standard also includes identifiers that denote constructed (or artificial) languages. In order to qualify for inclusion, a language must have a literature and must be designed for the purpose of human communication. Reconstructed languages and computer programming languages are specifically excluded.

International auxiliary languages

Main article: International auxiliary language

Some languages, most of them constructed, are meant specifically for communication between people of different nationalities or language groups, as an easy-to-learn second language. Several of these languages have been constructed by individuals or groups. Natural, pre-existing languages may also be used in this way; their developers merely catalogued and standardized their vocabulary and identified their grammatical rules. These languages are called naturalistic. One such language, Latino Sine Flexione, is a simplified form of Latin. Two others, Occidental and Novial, were drawn from several Western languages.

To date, the most successful auxiliary language is Esperanto, invented by the Polish ophthalmologist L. L. Zamenhof. It has a relatively large community of speakers, roughly estimated at about 2 million worldwide, and a large body of literature and songs, and it is the only known constructed language to have native speakers, such as the Hungarian-born American businessman George Soros. Other auxiliary languages with a relatively large number of speakers and a sizable literature are Interlingua and Ido.

Controlled languages

Main article: Controlled natural language

Controlled natural languages are subsets of natural languages whose grammars and dictionaries have been restricted in order to reduce or eliminate both ambiguity and complexity. The purpose behind the development and implementation of a controlled natural language typically is to aid non-native speakers of a natural language in understanding it, or to ease computer processing of a natural language. An example of a widely used controlled natural language is Simplified English, which was originally developed for aerospace industry maintenance manuals.

Formal languages

Mathematics and computer science use artificial entities called formal languages (including programming languages and markup languages, and some that are more theoretical in nature). These often take the form of character strings, produced by a combination of a formal grammar and semantics of arbitrary complexity.
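As a concrete illustration of a formal language, the Python sketch below recognizes the standard textbook language consisting of n letters "a" followed by n letters "b"; membership is fully determined by the definition, with none of the ambiguity of a natural language. The example is a generic one from formal language theory, not taken from this text.

def in_anbn(s: str) -> bool:
    """Recognize the classic formal language { a^n b^n : n >= 0 }."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

for w in ["", "ab", "aabb", "aab", "ba"]:
    print(f"{w!r:8s} -> {in_anbn(w)}")
# '' -> True, 'ab' -> True, 'aabb' -> True, 'aab' -> False, 'ba' -> False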

Programming languages

A programming language is an extreme case of a formal language that can be used to control the behavior of a machine, particularly a computer, to perform specific tasks.[12] Programming languages are defined using syntactic and semantic rules, to determine structure and meaning respectively.

Programming languages are used to facilitate communication about the task of organizing and manipulating information, and to express algorithms precisely. Some authors restrict the term "programming language" to those languages that can express all possible algorithms; sometimes the term "computer language" is used for artificial languages that are more limited.
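For instance, the short Python function below expresses Euclid's greatest-common-divisor algorithm precisely enough for a machine to execute it; the choice of algorithm is a standard illustration rather than anything specific to this article.

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, stated precisely enough for a machine to execute."""
    while b != 0:
        a, b = b, a % b          # replace (a, b) with (b, a mod b) until b is 0
    return abs(a)

assert gcd(48, 36) == 12
assert gcd(17, 5) == 1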

Animal communication

The term "animal languages" is often used for non-human systems of communication. Linguists do not consider these to be "language", but describe them as animal communication, because the interaction between animals in such communication is fundamentally different in its underlying principles from human language. Nevertheless, some scholars have tried to disprove this mainstream premise through experiments on training chimpanzees to talk. Karl von Frisch received the Nobel Prize in 1973 for his proof of the language and dialects of the bees.[13] Current research indicates that signalling codes are the most fundamental precondition for every coordination within and between cells, tissues, organs and organisms of all organismic kingdoms. All of these signalling codes follow combinatorial (syntactic), context-sensitive (pragmatic) and content-specific (semantic) rules. In contrast to linguists, biolinguistics and biosemiotics consider these codes to be real languages.[14]In several publicized instances, non-human animals have been taught to understand certain features of human language. Chimpanzees, gorillas, and orangutans have been taught hand signs based on American Sign Language. The African Grey Parrot, which possesses the ability to mimic human speech with a high degree of accuracy, is suspected of having sufficient intelligence to comprehend some of the speech it mimics. Most species of parrot, despite expert mimicry, are believed to have no linguistic comprehension at all.

While proponents of animal communication systems have debated levels of semantics, these systems have not been found to have anything approaching human language syntax.

Learning

Learning is acquiring new knowledge, behaviors, skills, values, preferences or understanding, and may involve synthesizing different types of information. The ability to learn is possessed by humans, animals and some machines. Progress over time tends to follow learning curves.

Human learning may occur as part of education or personal development. It may be goal-oriented and may be aided by motivation. The study of how learning occurs is part of neuropsychology, educational psychology, learning theory, and pedagogy.

Learning may occur as a result of habituation or classical conditioning, seen in many animal species, or as a result of more complex activities such as play, seen only in relatively intelligent animals[1][2] and humans. Learning may occur consciously or without conscious awareness. There is evidence for human behavioral learning prenatally, in which habituation has been observed as early as 32 weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for learning and memory to occur very early in development.[3]

Play has been approached by several theorists as the first form of learning. Children play, experiment with the world, learn the rules, and learn to interact. Vygotsky argues that play is pivotal for children's development, since children make meaning of their environment through play.

Types of learning

Simple non-associative learning

Habituation

In psychology, habituation is an example of non-associative learning in which there is a progressive diminution of behavioral response probability with repetition of a stimulus. It is another form of integration. An animal first responds to a stimulus, but if it is neither rewarding nor harmful the animal reduces subsequent responses. One example of this can be seen in small songbirds: if a stuffed owl (or similar predator) is put into the cage, the birds initially react to it as though it were a real predator, but soon react less, showing habituation. If another stuffed owl is introduced (or the same one is removed and re-introduced), the birds react to it again as though it were a predator, demonstrating that only a very specific stimulus is habituated to (namely, one particular unmoving owl in one place). Habituation has been shown in essentially every species of animal, including the large protozoan Stentor coeruleus.[4]

Sensitization

Sensitization is an example of non-associative learning in which the progressive amplification of a response follows repeated administrations of a stimulus (Bell et al., 1995). An everyday example of this mechanism is the repeated tonic stimulation of peripheral nerves that occurs if a person rubs his arm continuously. After a while, this stimulation creates a warm sensation that eventually turns painful. The pain is the result of the progressively amplified synaptic response of the peripheral nerves, warning the person that the stimulation is harmful. Sensitization is thought to underlie both adaptive and maladaptive learning processes in the organism.
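The two opposing dynamics can be sketched with simple repeated updates: a response that shrinks with each presentation of an innocuous stimulus (habituation) and one that grows with each presentation of a noxious stimulus (sensitization). The multiplicative rates and the cap in this Python sketch are arbitrary illustration values, not parameters from any study.

# Toy simulation of non-associative learning.
# Habituation: response to a repeated, inconsequential stimulus decays.
# Sensitization: response to a repeated, noxious stimulus is amplified.
# Rates (0.7, 1.3) and the cap are arbitrary illustration values.

def habituate(response, rate=0.7, trials=8):
    out = []
    for _ in range(trials):
        out.append(round(response, 3))
        response *= rate                       # progressive diminution
    return out

def sensitize(response, rate=1.3, trials=8, cap=10.0):
    out = []
    for _ in range(trials):
        out.append(round(response, 3))
        response = min(response * rate, cap)   # progressive amplification
    return out

print("habituation  :", habituate(1.0))
print("sensitization:", sensitize(1.0))
# A novel stimulus (the second stuffed owl) would reset the habituated
# response to its starting value: dishabituation is stimulus-specific.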

Associative learning

Associative learning is the process by which an element is learned through association with a separate, pre-occurring element.

Operant conditioning

Operant conditioning is the use of consequences to modify the occurrence and form of behavior. Operant conditioning is distinguished from Pavlovian conditioning in that operant conditioning deals with the modification of voluntary behavior. Discrimination learning is a major form of operant conditioning. One form of it is called errorless learning.

Classical conditioning

The typical paradigm for classical conditioning involves repeatedly pairing an unconditioned stimulus (which unfailingly evokes a particular response) with another, previously neutral stimulus (which does not normally evoke the response). Following conditioning, the response occurs both to the unconditioned stimulus and to the other, unrelated stimulus (now referred to as the "conditioned stimulus"). The response to the conditioned stimulus is termed a conditioned response.
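One standard way to formalize this pairing procedure is the Rescorla-Wagner rule, under which the associative strength of the conditioned stimulus moves toward the level supported by the unconditioned stimulus on every trial, in proportion to the prediction error. The rule is a well-known model from the conditioning literature rather than something described in this text, and the learning rate and trial counts below are arbitrary.

def rescorla_wagner(trials, alpha_beta=0.3, lam=1.0):
    """Associative strength V of the conditioned stimulus (CS) after each trial.

    trials: sequence of booleans, True if the unconditioned stimulus (US)
    follows the CS on that trial.
    """
    V, history = 0.0, []
    for us_present in trials:
        target = lam if us_present else 0.0
        V += alpha_beta * (target - V)   # update proportional to prediction error
        history.append(round(V, 3))
    return history

# Six CS-US pairings (acquisition) followed by four CS-alone trials (extinction):
# V rises toward the asymptote, then falls when the US is omitted.
print(rescorla_wagner([True] * 6 + [False] * 4))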

Imprinting

Main article: Imprinting (psychology)

Imprinting is the term used in psychology and ethology to describe any kind of phase-sensitive learning (learning occurring at a particular age or a particular life stage) that is rapid and apparently independent of the consequences of behavior. It was first used to describe situations in which an animal or person learns the characteristics of some stimulus, which is therefore said to be "imprinted" onto the subject.

Observational learning

Main article: Observational learning

The most common human learning process is imitation: one's personal repetition of an observed behaviour, such as a dance. Humans can copy three types of information simultaneously: the demonstrator's goals, actions, and environmental outcomes (results; see Emulation (observational learning)). By copying these types of information, (most) infants tune into their surrounding culture.

Play

Main article: Play (activity)

Play generally describes behavior which has no particular end in itself but improves performance in similar situations in the future. This is seen in a wide variety of vertebrates besides humans, but is mostly limited to mammals and birds. Cats are known to play with a ball of string when young, which gives them experience with catching prey. Besides inanimate objects, animals may play with other members of their own species or other animals, such as orcas playing with seals they have caught. Play involves a significant cost to animals, such as increased vulnerability to predators and the risk of injury and possibly infection. It also consumes energy, so there must be significant benefits associated with play for it to have evolved. Play is generally seen in younger animals, suggesting a link with learning. However, it may also have other benefits not associated directly with learning, for example improving physical fitness.

Enculturation

Enculturation is the process by which a person learns the requirements of the culture by which he or she is surrounded, and acquires values and behaviours that are appropriate or necessary in that culture.[5] The influences which, as part of this process, limit, direct or shape the individual, whether deliberately or not, include parents, other adults, and peers.[5] If successful, enculturation results in competence in the language, values and rituals of the culture.[5]

Multimedia learning

Multimedia learning is learning in which the learner uses multimedia learning environments (Mayer, 2001). This type of learning relies on dual-coding theory (Paivio, 1971).

e-Learning and Augmented Learning

Electronic learning or e-learning is a general term used to refer to Internet-based, networked, computer-enhanced learning. A specific and increasingly widespread form of e-learning is mobile learning (m-learning), which uses mobile telecommunication equipment such as cellular phones.

When a learner interacts with the e-learning environment, it is called augmented learning. By adapting to the needs of individuals, context-driven instruction can be dynamically tailored to the learner's natural environment. Augmented digital content may include text, images, video and audio (music and voice). By personalizing instruction, augmented learning has been shown to improve learning performance for a lifetime.[6]

Rote learning

Rote learning is a technique which avoids understanding the inner complexities and inferences of the subject being learned and instead focuses on memorizing the material so that it can be recalled by the learner exactly the way it was read or heard. The major practice involved in rote learning is learning by repetition, based on the idea that one will be able to recall the meaning of the material more quickly the more it is repeated. Rote learning is used in diverse areas, from mathematics to music to religion. Although it has been criticized by some schools of thought, rote learning is a necessity in many situations.

Informal learning

Informal learning occurs through the experience of day-to-day situations (for example, one learns to look ahead while walking because of the danger inherent in not paying attention to where one is going). It is learning from life: during a meal at the table with parents, through play, through exploring.

Formal learning

A depiction of the world's oldest university, the University of Bologna, Italy

Formal learning is learning that takes place within a teacher-student relationship, such as in a school system.

Nonformal learning

Nonformal learning is organized learning outside the formal learning system, for example learning by coming together with people with similar interests and exchanging viewpoints, in clubs, in (international) youth organizations, or in workshops.

Non-formal learning and combined approaches

The educational system may use a combination of formal, informal, and non-formal learning methods. The UN and EU recognize these different forms of learning (cf. links below). In some schools, students can earn points that count in the formal learning system if work is completed in informal learning circuits. They may be given time to assist with international youth workshops and training courses, on the condition that they prepare, contribute, share, and can prove that this offered valuable new insights, helped them acquire new skills, or gave them experience in organizing, teaching, etc.

In order to learn a skill, such as solving a Rubik's cube quickly, several factors come into play at once:

Directions help one learn the patterns of solving a Rubik's cube

Practicing the moves repeatedly and for extended time helps with "muscle memory" and therefore speed

Thinking critically about moves helps find shortcuts, which in turn helps to speed up future attempts.

The Rubik's cube's six colors help anchor solving it within the head.

Occasionally revisiting the cube helps prevent negative learning or loss of skill.

Tangential Learning

Tangential learning is the process by which some people will self-educate about a topic if it is exposed to them in the context of something they already enjoy.

Domains of Learning

The three domains[7] of learning are:

Cognitive: such as learning to recall facts, to analyze, and to solve a problem;

Psychomotor: such as learning to perform the correct steps in a dance, learning to swim, learning to ride a bicycle, or learning to drive a car; and

Affective: such as learning how to like someone, "to hate sin", to love one's country (patriotism), to worship God, or to move on after a failed relationship.

These domains are not mutually exclusive. For example, in learning to play chess, the person will have to learn the rules of the game (cognitive domain); but he also has to learn how to set up the chess pieces on the chessboard and also how to properly hold and move a chess piece (psychomotor). Furthermore, later in the game the person may even learn to love the game itself, value its applications in life, and appreciate its history (affective domain).

Behaviorism

Behaviorism or behaviourism, also called the learning perspective (where any physical action is a behavior), is a philosophy of psychology based on the proposition that all things which organisms do, including acting, thinking and feeling, can and should be regarded as behaviors.[1] This school of psychology maintains that behaviors as such can be described scientifically without recourse either to internal physiological events or to hypothetical constructs such as the mind.[2] Behaviorism comprises the position that all theories should have observational correlates but that there are no philosophical differences between publicly observable processes (such as actions) and privately observable processes (such as thinking and feeling).[3]

From early psychology in the 19th century, the behaviorist school of thought ran concurrently and shared commonalities with the psychoanalytic and Gestalt movements in psychology into the 20th century, but it also differed from the mental philosophy of the Gestalt psychologists in critical ways. Its main influences were Ivan Pavlov, who investigated classical conditioning; Edward Lee Thorndike; John B. Watson, who rejected introspective methods and sought to restrict psychology to experimental methods; and B. F. Skinner, who conducted research on operant conditioning.[3] In the second half of the twentieth century, behaviorism was largely eclipsed as a result of the cognitive revolution.

Versions

There is no classification generally agreed upon, but some titles given to the various branches of behaviorism include:

Classical: The behaviorism of Watson; the objective study of behavior; no mental life, no internal states; thought is covert speech.

Radical: Skinner's behaviorism; considered radical since it expands behavioral principles to processes within the organism, in contrast to methodological behaviorism; not mechanistic or reductionist; hypothetical (mentalistic) internal states are not considered causes of behavior, and phenomena must be observable at least to the individual experiencing them. Willard Van Orman Quine used many of radical behaviorism's ideas in his study of knowing and language.

Teleological: Post-Skinnerian, purposive, close to microeconomics.

Theoretical: Post-Skinnerian, accepts observable internal states ("within the skin" once meant "unobservable", but with modern technology we are not so constrained); dynamic, but eclectic in choice of theoretical structures, emphasizes parsimony.

Biological: Post-Skinnerian, centered on perceptual and motor modules of behavior, theory of behavior systems.

Two popular subtypes are Neo-behaviorism (Hullian and post-Hullian; theoretical, group data, not dynamic, physiological) and Purposive behaviorism (Tolman's behavioristic anticipation of cognitive psychology).

B.F. Skinner and radical behaviorism

Skinner, who carried out experimental work mainly in comparative psychology from the 1930s to the 1950s but remained behaviorism's best-known theorist and exponent virtually until his death in 1990, developed a distinct kind of behaviorist philosophy, which came to be called radical behaviorism. He is credited with having founded a new version of psychological science, which has come to be called behavior analysis or the experimental analysis of behavior, after variations on the subtitle of his 1938 work The Behavior of Organisms: An Experimental Analysis of Behavior.

Definition

B. F. Skinner was influential in defining radical behaviorism, a philosophy codifying the basis of his school of research (named the experimental analysis of behavior, or EAB). While EAB differs from other approaches to behavioral research on numerous methodological and theoretical points, radical behaviorism departs from methodological behaviorism most notably in accepting the treatment of feelings, states of mind and introspection as existent and scientifically treatable. This is done by identifying them as something non-dualistic, and here Skinner takes a divide-and-conquer approach, with some instances being identified with bodily conditions or behavior, and others getting a more extended 'analysis' in terms of behavior. However, radical behaviorism stops short of identifying feelings as causes of behavior.[1] Among other points of difference were a rejection of the reflex as a model of all behavior and a defense of a science of behavior complementary to but independent of physiology. Radical behaviorism has considerable overlap with other Western philosophical positions such as American pragmatism.[4]

Experimental and conceptual innovations

This essentially philosophical position gained strength from the success of Skinner's early experimental work with rats and pigeons, summarized in his books The Behavior of Organisms[5] and Schedules of Reinforcement.[6] Of particular importance was his concept of the operant response, of which the canonical example was the rat's lever-press. In contrast with the idea of a physiological or reflex response, an operant is a class of structurally distinct but functionally equivalent responses. For example, while a rat might press a lever with its left paw or its right paw or its tail, all of these responses operate on the world in the same way and have a common consequence. Operants are often thought of as species of responses, where the individuals differ but the class coheres in its function (shared consequences for operants, reproductive success for species). This is a clear distinction between Skinner's theory and S-R theory.

Skinner's empirical work expanded on earlier research on trial-and-error learning by researchers such as Thorndike and Guthrie, with both conceptual reformulations (Thorndike's notion of a stimulus-response 'association' or 'connection' was abandoned) and methodological ones (the use of the 'free operant', so called because the animal was now permitted to respond at its own rate rather than in a series of trials determined by the experimenter's procedures). With this method, Skinner carried out substantial experimental work on the effects of different schedules and rates of reinforcement on the rates of operant responses made by rats and pigeons. He achieved remarkable success in training animals to perform unexpected responses, to emit large numbers of responses, and to demonstrate many empirical regularities at the purely behavioral level. This lent some credibility to his conceptual analysis. It is largely his conceptual analysis that made his work much more rigorous than that of his peers, a point which can be seen clearly in his seminal work Are Theories of Learning Necessary?, in which he criticizes what he viewed to be theoretical weaknesses then common in the study of psychology. An important descendant of the experimental analysis of behavior is the Society for Quantitative Analysis of Behavior.[7]
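A rough sense of what a "schedule of reinforcement" means can be given with a small simulation. The Python sketch below contrasts a fixed-ratio schedule, which reinforces every fifth response, with a variable-ratio schedule, which reinforces each response with probability one in five; the schedule names are standard terms from the operant literature, while the numbers and the response stream are invented for illustration.

import random

def fixed_ratio(n_responses, ratio=5):
    """Reinforce every `ratio`-th response (FR schedule)."""
    return [(i + 1) % ratio == 0 for i in range(n_responses)]

def variable_ratio(n_responses, mean_ratio=5):
    """Reinforce each response with probability 1/mean_ratio (VR schedule)."""
    return [random.random() < 1.0 / mean_ratio for _ in range(n_responses)]

random.seed(0)
fr = fixed_ratio(100)
vr = variable_ratio(100)
print("FR-5 reinforcers per 100 responses:", sum(fr))
print("VR-5 reinforcers per 100 responses:", sum(vr))
# Both schedules deliver roughly the same amount of reinforcement, but the
# VR schedule makes the next reinforcer unpredictable, which in practice
# sustains higher and steadier response rates.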

Relation to language

As Skinner turned from experimental work to concentrate on the philosophical underpinnings of a science of behavior, his attention turned to human language with Verbal Behavior[8] and other language-related publications.[9] Verbal Behavior laid out a vocabulary and theory for the functional analysis of verbal behavior, and was strongly criticized in a review by Noam Chomsky.[10] Skinner did not respond in detail but claimed that Chomsky failed to understand his ideas,[11] and the disagreements between the two and the theories involved have been further discussed.[12][13]

HYPERLINK "http://en.wikipedia.org/wiki/Behaviorism" \l "cite_note-pmid2103585-12" \o "" [13]What was important for a behaviorist's analysis of human behavior was not language acquisition so much as the interaction between language and overt behavior. In an essay republished in his 1969 book Contingencies of Reinforcement,[14] Skinner took the view that humans could construct linguistic stimuli that would then acquire control over their behavior in the same way that external stimuli could. The possibility of such "instructional control" over behavior meant that contingencies of reinforcement would not always produce the same effects on human behavior as they reliably do in other animals. The focus of a radical behaviorist analysis of human behavior therefore shifted to an attempt to understand the interaction between instructional control and contingency control, and also to understand the behavioral processes that determine what instructions are constructed and what control they acquire over behavior.

Molar versus molecular behaviorism

Skinner's view of behavior is most often characterized as a "molecular" view of behavior; that is, each behavior can be decomposed into atomistic parts or molecules. This view is inaccurate when one considers his complete description of behavior as delineated in the 1981 article Selection by Consequences and many other works. Skinner claims that a complete account of behavior involves an understanding of selection history at three levels: biology (the natural selection or phylogeny of the animal); behavior (the reinforcement history or ontogeny of the behavioral repertoire of the animal); and, for some species, culture (the cultural practices of the social group to which the animal belongs). This whole organism, with all those histories, then interacts with its environment. He often described even his own behavior as a product of his phylogenetic history and his reinforcement history (which includes the learning of cultural practices) interacting with the environment at the moment. Molar behaviorists, such as Howard Rachlin, argue that behavior cannot be understood by focusing on events in the moment. That is, they argue that a behavior is best understood in terms of the ultimate causes in its history, and that molecular behaviorists commit a fallacy by inventing a fictitious proximal cause for behavior. Molar behaviorists argue that standard molecular constructs such as "associative strength" are such fictitious proximal causes, which simply take the place of molar variables such as rate of reinforcement.[15] Thus, a molar behaviorist would define a behavior such as loving someone as exhibiting a pattern of loving behavior over time; there is no known proximal cause of loving behavior, only a history of behaviors (of which the current behavior might be an example) that can be summarized as love. Molecular behaviorists use notions from melioration theory, negative power function discounting, or additive versions of negative power function discounting.[16]
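The "negative power function discounting" mentioned above belongs to a family of quantitative, molar-level models in which the value of a reinforcer declines with its delay. The Python sketch below shows two generic discounting forms of this kind; the exact equations used in the cited work may differ, and the parameter values here are arbitrary illustration values.

# Two generic delay-discounting forms of the kind used in quantitative,
# molar behavior analysis: the value of a reward declines with its delay.
# Parameter values (k, s) are arbitrary illustration values.

def hyperbolic(amount, delay, k=0.5):
    return amount / (1.0 + k * delay)

def negative_power(amount, delay, s=0.8):
    # Shifted by 1 so that an immediate reward (delay=0) keeps its full value.
    return amount * (1.0 + delay) ** (-s)

for d in (0, 1, 5, 20):
    print(f"delay={d:2d}  hyperbolic={hyperbolic(10, d):5.2f}  "
          f"power={negative_power(10, d):5.2f}")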

Behaviorism in philosophy

Behaviorism is a psychological movement that can be contrasted with philosophy of mind. The basic premise of radical behaviorism is that the study of behavior should be a natural science, such as chemistry or physics, without any reference to hypothetical inner states of organisms as causes for their behavior. A modern example of such analysis would be Fantino and colleagues' work on behavioral approaches to reasoning.[17] Other varieties, such as theoretical behaviorism, permit internal states, but do not require them to be mental or to have any relation to subjective experience. Behaviorism takes a functional view of behavior.

There are points of view within analytic philosophy that have called themselves, or have been called by others, behaviorist. In logical behaviorism (as held, e.g., by Rudolf Carnap and Carl Hempel), the meaning of a psychological statement is its verification conditions, which consist of performed overt behavior. W. V. Quine made use of a type of behaviorism, influenced by some of Skinner's ideas, in his own work on language. Gilbert Ryle defended a distinct strain of philosophical behaviorism, sketched in his book The Concept of Mind. Ryle's central claim was that instances of dualism frequently represented "category mistakes", and hence that they were really misunderstandings of the use of ordinary language. Daniel Dennett likewise acknowledges himself to be a type of behaviorist.[18]

It is sometimes argued that Ludwig Wittgenstein defended a behaviorist position, but while there are important relations between his thought and behaviorism, the claim that he was a behaviorist is quite controversial (cf. the "beetle in a box" argument). The mathematician Alan Turing is also sometimes considered a behaviorist, but he himself did not make this identification.

21st Century behavior analysis

As of 2007, modern-day behaviorism, known as "behavior analysis," is a thriving field. The Association for Behavior Analysis International (ABAI) currently has 32 state and regional chapters within the United States. Approximately 30 additional chapters have also developed throughout Europe, Asia, South America, and Australia. In addition to the 34 annual conferences ABAI has held in the United States and Canada, it will hold its fifth international conference in Norway in 2009.

The interests among behavior analysts today are wide-ranging, as a review of the 30 Special Interest Groups (SIGs) within ABAI indicates. Such interests include everything from developmental disabilities and autism to cultural psychology, clinical psychology, and Organizational Behavior Management (OBM; behavior-analytic I/O psychology). OBM has developed a particularly strong following within behavior analysis, as evidenced by the formation of the OBM Network and the influential Journal of Organizational Behavior Management (JOBM), recently rated the third-highest-impact journal in applied psychology by ISI.

Modern behavior analysis has also witnessed a massive resurgence in research and applications related to language and cognition, with the development of Relational Frame Theory (RFT), described as a "post-Skinnerian account of language and cognition".[19] RFT also forms the empirical basis for the highly successful and data-driven Acceptance and Commitment Therapy (ACT). In fact, researchers and practitioners in RFT/ACT have become sufficiently prominent that they have formed their own specialized organization, the Association for Contextual Behavioral Science (ACBS).

Some of the current prominent behavior-analytic journals include the Journal of Applied Behavior Analysis (JABA), the Journal of the Experimental Analysis of Behavior (JEAB), the Journal of Organizational Behavior Management (JOBM), Behavior and Social Issues (BSI), and the Psychological Record. Currently, the U.S. has 14 ABAI-accredited MA and PhD programs for comprehensive study in behavior analysis.

Babbling

Babbling (also called twaddling) is a stage in child language acquisition during which an infant appears to be experimenting with uttering the sounds of language but does not yet produce any recognizable words. (Crucially, the larynx or voicebox, originally high in the throat to let the baby breathe while swallowing, descends during the first year of life, allowing a pharynx to develop and all the sounds of human speech to be formed.[1]) Babbling begins at approximately 5 to 7 months of age, when a baby's noises begin to sound like phonemes. Infants begin to produce recognizable words usually around 12 months, though babbling may continue for some time afterward.

Types of babbling

There are two types of babbling. Most people are familiar with the characteristic sounds made during babbling, namely reduplicative and variegated babbling. The former consists of repeated syllables, such as /ba/, e.g. 'ba-ba-ba-ba-ba-ba-ba', whereas variegated babbling consists of a mix of syllables, e.g. 'ka-da-bu-ba-mi-doy-doy-doy'. The consonants that babbling infants produce tend to be any of the following: /p, b, t, m, d, n, k, g, s, h, w, j/. The following consonants tend to be produced infrequently during phonological development: /f, v, θ, ð, ʃ, tʃ, dʒ, l, r, ŋ/. The complex nature of the sounds that developing children produce makes them difficult to categorize, but the above rules tend to hold true regardless of the language to which children are exposed.[2]
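The contrast between reduplicated and variegated babbling can be made concrete with a toy generator that draws consonants from the "frequently produced" set listed above. The CV syllable shape, the vowel set, and the syllable counts are simplifying assumptions made only for this illustration.

import random

# Toy generator contrasting the two babbling types described above.
# Consonant set = the "frequently produced" consonants from the text;
# CV syllable shape and vowel set are simplifying assumptions.
CONSONANTS = list("pbtmdnkgshwj")
VOWELS = ["a", "i", "u", "o"]

def reduplicated(n_syllables=6):
    syl = random.choice(CONSONANTS) + random.choice(VOWELS)
    return "-".join([syl] * n_syllables)          # the same syllable repeated

def variegated(n_syllables=6):
    return "-".join(random.choice(CONSONANTS) + random.choice(VOWELS)
                    for _ in range(n_syllables))  # a mix of syllables

random.seed(1)
print("reduplicated:", reduplicated())   # e.g. "ba-ba-ba-ba-ba-ba"
print("variegated:  ", variegated())     # e.g. "ka-du-bo-mi-ga-na"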

Babbling in nonhuman species

Human babies engage in babbling as a sort of vocal play, which occurs in a few other primate species, all of which belong to the family Callitrichidae (marmosets and tamarins) and are cooperative breeders.

Sarah Blaffer Hrdy writes, "...marmoset and tamarin babies also babble. It may be that the infants of cooperative breeders are specially equipped to communicate with caretakers. This is not to say that babbling is not an important part of learning to talk, only to question which came first: babbling so as to develop into a talker, or a predisposition to evolve into a talker because, among cooperative breeders, babies that babble are better tended and more likely to survive."[3]

Babbling in humans

Terrence W. Deacon suggests that human infants are not generally excited or upset when babbling, because they will babble spontaneously and incessantly only when emotionally calm. Deacon adds, "It is the first sign that human vocal motor output is at least partially under the control of the cortical motor system because babbling is basically vocal mimicry that happens in correspondence to the maturation of the cortical motor output pathways in the human brain."

Steven Pinker compares a child babbling to a person fiddling with a complex hi-fi system in an attempt to understand what the controls do. Most babbling consists of a small number of sounds, which suggests the child is preparing the basic sounds necessary to speak the language to which it is exposed.

Infants who are deaf also show vocal babbling, suggesting that early babbling arises from inherent human tendencies to use the vocal articulators in particular ways during early language acquisition. If they are exposed to sign language, they will babble with their hands at approximately the same time vocal babbling appears, although their sign production generally appears a few months before word production does in hearing children.

At 0-4 months, babies gurgle and coo (vowel sounds such as "oooh" and "aah"). At 4-6 months, babies may start to babble (adding consonants: "gaga," "dada"). At 6-12 months of age, babies typically babble and enjoy vocal play as they experiment with a range of sounds. At 12-18 months, toddlers begin to use sound in a meaningful way. They utter one-syllable words, make sounds imitating cars and planes, and say things like "uh oh." Toddlers also understand the meaning of some words they cannot yet say. They may also use one word to represent a whole sentence. For example, "Juice" may mean "Mother, I would like some juice," "You are drinking juice," or "Oh look, there is juice in the cup." At 18-24 months, toddlers repeat words and can link words into short sentences. They use approximately 50 words but can understand many more. They may use short sentences, such as "She go bye bye" and "What you doing?" They may also use familiar words incorrectly; for example, a child with a pet dog might describe all large furry animals as "doggie."

According to Menn and Stoel-Gammon in The Development of Language, this early period of prelinguistic vocalization can be divided into five stages. Stage one is crying, stage two is cooing, stage three is vocal play, and stage four is canonical babbling. The fifth and final stage is conversational babbling, also known as the "jargon stage" (usually occurring by about ten months of age). The jargon stage is defined as pre-linguistic vocalization in which infants use adult-like stress and intonation.[4]

Hence, babbling occurs during the first year of life if the child is developing normally. As the baby grows and changes, his or her vocalizations change as well. Babies use these vocalizations to communicate. They commence vocal development by crying, progress to loud yelling noises, and finally produce speech.

Children who can't babble for some physiological reason, such as having a breathing tube in their throat, do subsequently acquire normal pronunciation, but their speech development is significantly delayed.[5]

COGNITION

1.1 Definition of Cognitive

Let's start with a simple working definition of the word cognitive which will be sufficient for our purposes: having to do with how the mind works.

1.2 History

Cognitive linguistics arose during the 1970s essentially as a reaction to three things: (1) dissatisfaction with the existing linguistics paradigm of the time, Noam Chomsky's generative grammar, due to the inability of generative grammar to provide explanations for an increasing number of problem examples and observations about language, especially when it was applied to non-Indo-European languages; (2) the failed attempts by Chomskian-trained linguists to create a generative semantics, i.e., to extend Chomsky's theory of generative grammar into the realm of semantics; and (3) the pioneering work on human categorization done by the psychologist Eleanor Rosch, whose evidence strongly suggested that the subconscious human mind creates categories in ways previously unsuspected (although work by the philosopher Ludwig Wittgenstein had foreshadowed Rosch's findings, e.g., Wittgenstein's classic analysis of the German word Spiel [English "game"]).

The first linguists to formally pursue a new non-Chomskian approach to linguistics were Charles Fillmore at UC Berkeley and Ronald Langacker at UC San Diego. Langacker, a former Chomskian, became so fed up with all the exceptions that had to be made in generative grammar as he explored the subtleties of language that he finally concluded Chomsky's theories must simply be wrong. Rather than try to fix generative grammar, he decided to sit down and re-think linguistics from scratch, irrespective of any theory, with the following guiding principles: that language is a direct reflection of the workings of the human mind, and that any theory of grammar and semantics must be consistent with the way the human mind functions and the way the human brain physically manifests the processes of thinking and conceptualization. He began publishing a series of papers on his new ideas in the 1970s, closely followed by George Lakoff, Leonard Talmy, Gilles Fauconnier, Fillmore and others. Langacker eventually encapsulated all his ideas in the monumental two-volume work Foundations of Cognitive Grammar, published in 1987 and 1991. It is generally perceived that the publication of this work, along with Lakoff and Johnson's Metaphors We Live By in 1980 and Lakoff's Women, Fire, and Dangerous Things in 1987, established cognitive linguistics on a solid academic footing, which has now led to worldwide acceptance of the new paradigm as nearly co-equal with (and in many universities now surpassing) Chomsky's generative grammar.

While cognitive linguistics was originally defined in terms of a rebellion against Chomsky's theories, in the last decade, cognitive linguistics has matured to be considered a fully autonomous linguistic paradigm in its own right. Nevertheless, for beginners, it is still convenient to introduce cognitive linguistics in comparative terms to Chomsky's theory of generative grammar.

1.3 Comparison with Chomsky's Generative Grammar

Chomsky, whose theories evolved during the late 1950s through the 1970s to replace the previous structuralist and behaviorist models of language, believes the structure of language is determined by an innate, autonomous formal system of rules (analogous to the predicate calculus for those of you who've been trained in formal logic, but much more intricate and sophisticated). This formal system of rules, called universal grammar (UG), is inherent within the human brain at birth and is largely devoid of any association with meaning. This UG is also independent of other human cognitive faculties, i.e., it operates on its own within the brain, independent of any other non-linguistic cognitive processes.

Cognitive linguists, on the other hand, believe the structure of language is a direct reflection of human cognitive processes, and that there is no independent language faculty like UG in the brain. If there is, cognitive linguists generally believe it will eventually be found to be ultimately rooted in the general processes of human cognition itself (i.e., not peculiar to the phenomenon of language alone). The cognitivists believe that the grammatical structures of language are directly associated with the way people conceptualize (i.e., think about and understand) any given situation in the world. Syntax, morphology, even phonology are conceptual in nature, i.e., they are merely input and output of those cognitive processes within the human mind that govern speaking and understanding. This idea is generally encapsulated in a phrase coined by Ronald Langacker and often repeated by cognitive linguists: grammar is conceptualization.

The other big difference between Chomsky and the cognitivists is where knowledge of language in general comes from. Chomsky argues that infants know how to put language components together innately (because of their reliance on the UG), i.e., they do not (solely) rely on having to hear how to put words together correctly (i.e., syntax) from listening to their family and other sources such as television. Chomsky believes evidence exists to support this notion in his famous poverty of the stimulus argument: children in general are too good at learning language too quickly, i.e., they do not get exposed to a sufficiently large corpus of language stimuli/data to figure out so quickly how their native language works, and therefore they must have an innate faculty (the UG) to subconsciously tell them about things like syntactic relations (e.g., case morphology), tenses, aspect, clause structure, and grammatical transformations such as active-into-passive voice.

The cognitivists, on the other hand, reject the poverty of the stimulus argument entirely. They firmly believe that knowledge of language comes strictly from language use: infants learn language through listening, observation, pattern recognition and pattern matching, imitation, and trial-and-error attempts at the grammatical rules of their native language. The reason Junior first says "Mommy drink" before he says "Mommy, I want a drink" is simply that the former is easier and therefore gets tried out and used first, while the more sophisticated (and correct) structure of the latter gets learned and used later on. In other words, language gets learned just like anything else gets learned; there is nothing special about it that differentiates it from other cognitive processes. The human infant uses the same store of cognitive tools and processes to learn and use language as he uses to learn anything else. Cognition is cognition. Learning is learning. Pattern recognition and matching is pattern recognition and matching; imitation and practice is imitation and practice, whether you are learning your native language, learning to ride a bicycle, or learning to select and put on clothes.
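To make the usage-based idea more concrete, here is a minimal, purely illustrative sketch (not drawn from any cognitive-linguistics model; the utterances and names are invented) of how a preference for word order could emerge from nothing but the statistics of heard speech:

```python
# A minimal, illustrative sketch: word-order preferences emerging purely from the
# statistics of heard utterances, with no built-in grammar (utterances are invented).
from collections import Counter
from itertools import permutations

heard = [
    "mommy drink juice",
    "daddy drink milk",
    "mommy read book",
    "daddy read book",
]

# Crude pattern extraction: count adjacent word pairs (bigrams) in the input.
bigrams = Counter()
for utterance in heard:
    words = utterance.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

def preferred_order(words):
    """Pick the ordering of `words` whose bigrams are most familiar from the input."""
    def familiarity(seq):
        return sum(bigrams[(a, b)] for a, b in zip(seq, seq[1:]))
    return " ".join(max(permutations(words), key=familiarity))

# A novel combination the "learner" has never heard as a whole:
print(preferred_order(["juice", "drink", "daddy"]))  # -> "daddy drink juice"
```

The point of the sketch is only that no rule like "subject before verb before object" is represented anywhere; the preference falls out of pattern matching over the input, which is roughly the cognitivists' claim about grammatical knowledge in general.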

1.4 Focus on the Relationship Between Semantics and Syntax

Because cognitive linguists believe that grammar is conceptualization (see 1.3 above), the core area of study within cognitive linguistics to date has been semantics and morpho-semantics and the way these components of language determine syntax (the way words are put together to create grammatically acceptable phrases and sentences). While cognitive linguists fully believe that the cognitive paradigm extends to more nuts-and-bolts units of language such as phonology and morphology, little work has yet been done in these areas during the brief quarter-century the paradigm has existed. (The analogy is to look at the whole car first and drive it, THEN start taking it apart, rather than starting with the pieces and parts and trying to put them together.) Going in the other direction, beyond syntax into pragmatics and discourse analysis, many cognitive linguists believe that these two areas of linguistics don't really exist as separate fields. Rather, as cognitive analysis of language delves more deeply over time, the usage of language in everyday contexts -- the realm of pragmatics and discourse analysis -- will simply be found to rest on the same semantically driven rules of language structure that contextless or "normal" language structures rest upon. In other words, while linguists normally study the structure of sentences like "I have to urinate" rather than the semantically equivalent colloquial version "I gotta go", the production of the two sentences by living, breathing English speakers is nevertheless analyzable by the same kinds of semantically driven, context-filled rules, INCLUDING rules governing the speaker's very choice of one sentence over the other. This area of language wouldn't be touched by a Chomskian with a ten-foot pole, whereas to a cognitivist, if people say it, it's fair game for linguistic analysis. Indeed, linguistics can't be considered complete until linguists understand why a person chooses to say something one way as opposed to the other. Needless to say, under this view of the scope of linguistics, the science of linguistics is still almost in its infancy.

Language acquisition device

The Language Acquisition Device (LAD) is a postulated "organ" of the brain that is supposed to function as a congenital device for learning symbolic language (i.e., language acquisition). First proposed by Noam Chomsky, the LAD concept is a component of the nativist theory of language that dominates contemporary formal linguistics and asserts that humans are born with the instinct or "innate facility" for acquiring language.

Chomsky motivated the LAD hypothesis by what he perceived as the intractable complexity of language acquisition, citing the notion of "infinite use of finite means" proposed by Wilhelm von Humboldt. At the time it was conceived (1957-1965), the LAD concept stood in strict contrast to B.F. Skinner's behavioral psychology, which emphasized principles of learning theory such as classical and operant conditioning and imitation over biological predisposition. The interactionist theory of Jerome Bruner and Jean Piaget later emphasized the importance of the interaction between biological and social (nature and nurture) aspects of language acquisition.

Chomsky (1965) set out an innate language schema which provides the basis for the child's acquisition of a language. The acquisition process takes place despite the limited nature of the primary linguistic data (PLD, the input signals received) and the degenerate nature of that data (frequent incorrect usage, utterances of partial sentences). Given this poverty of the stimulus, a language acquisition model requires a number of components. Firstly, the child must have a technique for representing input signals and, secondly, a way of representing structural information about them. Thirdly, there must be some initial delimitation of the class of possible language structure hypotheses. Fourthly, the child requires a method for determining what each of these hypotheses implies with respect to each sentence. Finally, an additional method is needed by which the child can select the hypothesis that is compatible with the PLD.

Equipped with this endowment, first language learning is explained as the work of a Language Acquisition Device progressing through the following stages:

1. The device searches the class of language structure hypotheses and selects those compatible with input signals and structural information drawn from the PLD.

2. The device then tests compatibility using its knowledge of what each hypothesis implies for the sentences.

3. One hypothesis or grammar is selected as being compatible with the PLD.

4. This grammar provides the device with a method of interpreting sentences (by virtue of its capacity for internally representing structural information and applying the grammar to sentences).

Through this process the device constructs a theory of the language of which the PLD are a sample. Chomsky argues that in this way, the child comes to know a great deal more than she has learned, acquiring a knowledge of language, which "goes far beyond the presented primary linguistic data and is in no sense an 'inductive generalization' from these data."
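As a rough illustration of the hypothesis-testing procedure just described, the four stages can be caricatured as selecting whichever candidate grammar best fits the PLD. The sketch below uses invented toy "grammars" and data, not Chomsky's actual formalism:

```python
# A sketch of the stages above with invented toy "grammars" and data (not Chomsky's
# formalism): delimit candidate grammars, test each against the PLD, select one.

# Initial delimitation of the hypothesis space -- here, just basic word orders.
CANDIDATE_GRAMMARS = {
    "SVO": ("subject", "verb", "object"),
    "SOV": ("subject", "object", "verb"),
    "VSO": ("verb", "subject", "object"),
}

# The PLD: heard utterances, pre-analysed into crude role sequences.
PLD = [
    ("subject", "verb", "object"),   # "Mommy drinks juice"
    ("subject", "verb", "object"),   # "Daddy reads books"
    ("subject", "verb"),             # a partial, "degenerate" utterance
]

def compatible(grammar, utterance):
    """Stages 1-2: does the utterance respect the order this grammar predicts?"""
    expected = [role for role in grammar if role in utterance]
    return list(utterance) == expected

def select_grammar(pld):
    """Stage 3: choose the candidate compatible with the most of the PLD."""
    return max(CANDIDATE_GRAMMARS.items(),
               key=lambda item: sum(compatible(item[1], u) for u in pld))

name, grammar = select_grammar(PLD)
print(name)  # -> "SVO"; stage 4: the selected grammar then interprets new sentences
```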

In some views of language acquisition, the LAD is thought to become unavailable after a certain age (the critical period hypothesis), i.e., it is subject to maturational constraints.

Chomsky has gradually abandoned the LAD in favour of a parameter-setting model of language acquisition (principles and parameters).

A neurological argument in favour of the Language Acquisition Device comes from expressive aphasia and the agrammatism arising from damage to a brain region called Broca's area (a location that seems to perform some of the functions the LAD should have). For example, in the following passage, a Broca's aphasic patient is trying to explain how he came to the hospital for dental surgery: "Yes... ah... Monday... er... Dad and Peter H... (his own name), and Dad.... er... hospital... and ah... Wednesday... Wednesday, nine o'clock... and oh... Thursday... ten o'clock, ah doctors... two... an' doctors... and er... teeth... yah."

Steven Pinker, in The Language Instinct (1994), popularised the related view that language acquisition reflects an innate, biologically based capacity of the human brain.

Stages of first language acquisition in children

In nearly all cases, children's language development follows a predictable sequence. However, there is a great deal of variation in the age at which children reach a given milestone. Furthermore, each child's development is usually characterized by gradual acquisition of particular abilities: thus "correct" use of English verbal inflection will emerge over a period of a year or more, starting from a stage where verbal inflections are always left out, and ending in a stage where they are nearly always used correctly.

There are also many different ways to characterize the developmental sequence. On the production side, one way to name the stages is as follows, focusing primarily on the unfolding of lexical and syntactic knowledge:

Stage | Typical age | Description
Babbling | 6-8 months | Repetitive CV patterns
One-word (holophrastic) stage (better: one-morpheme or one-unit stage) | 9-18 months | Single open-class words or word stems
Two-word stage | 18-24 months | "Mini-sentences" with simple semantic relations
Telegraphic stage or early multiword stage (better: multi-morpheme stage) | 24-30 months | "Telegraphic" sentence structures of lexical rather than functional or grammatical morphemes
Later multiword stage | 30+ months | Grammatical or functional structures emerge

Vocalizations in the first year of life

At birth, the infant vocal tract is in some ways more like that of an ape than that of an adult human.

In particular, the tip of the velum reaches or overlaps with the tip of the epiglottis. As the infant grows, the tract gradually reshapes itself into the adult pattern.

During the first two months of life, infant vocalizations are mainly expressions of discomfort (crying and fussing), along with sounds produced as a by-product of reflexive or vegetative actions such as coughing, sucking, swallowing and burping. There are some nonreflexive, nondistress sounds produced with a lowered velum and a closed or nearly closed mouth, giving the impression of a syllabic nasal or a nasalized vowel.

During the period from about 2-4 months, infants begin making "comfort sounds", typically in response to pleasurable interaction with a caregiver. The earliest comfort sounds may be grunts or sighs, with later versions being more vowel-like "coos". The vocal tract is held in a fixed position. Initially comfort sounds are brief and produced in isolation, but later appear in series separated by glottal stops. Laughter appears around 4 months.

During the period from 4-7 months, infants typically engage in "vocal play", manipulating pitch (to produce "squeals" and "growls"), loudness (producing "yells"), and also manipulating tract closures to produce friction noises, nasal murmurs, "raspberries" and "snorts".

At about seven months, "canonical babbling" appears: infants start to make extended sounds that are chopped up rhythmically by oral articulations into syllable-like sequences, opening and closing their jaws, lips and tongue. The sounds produced are heard mostly as stop-like and glide-like. Fricatives, affricates and liquids are more rarely heard, and clusters are even rarer. Vowels tend to be low and open, at least in the beginning.

Repeated sequences are often produced, such as [bababa] or [nanana], as well as "variegated" sequences in which the characteristics of the consonant-like articulations are varied. The variegated sequences are initially rare and become more common later on.

Both vocal play and babbling are produced more often in interactions with caregivers, but infants will also produce them when they are alone.

No other animal does anything like babbling. It has often been hypothesized that vocal play and babbling have the function of "practicing" speech-like gestures, helping the infant to gain control of the motor systems involved, and to learn the acoustical consequences of different gestures.

One word (holophrastic) stage

At about ten months, infants start to utter recognizable words. Some word-like vocalizations that do not correlate well with words in the local language may consistently be used by particular infants to express particular emotional states: one infant is reported to have used a particular vocalization to express pleasure, and another is said to have used one to express "distress or discomfort". For the most part, recognizable words are used in a context that seems to involve naming: "duck" while the child hits a toy duck off the edge of the bath; "sweep" while the child sweeps with a broom; "car" while the child looks out of the living room window at cars moving on the street below; "papa" when the child hears the doorbell.

Young children often use words in ways that are too narrow or too broad: "bottle" used only for plastic bottles; "teddy" used only for a particular bear; "dog" used for lambs, cats, and cows as well as dogs; "kick" used for pushing and for wing-flapping as well as for kicking. These underextensions and overextensions develop and change over time in an individual child's usage.

Perception vs. production

Clever experiments have shown that most infants can give evidence (for instance, by gaze direction) of understanding some words at the age of 4-9 months, often even before babbling begins. In fact, the development of phonological abilities begins even earlier. Newborns can distinguish speech from non-speech, and can also distinguish among speech sounds (e.g. [t] vs. [d] or [t] vs. [k]); within a couple of months of birth, infants can distinguish speech in their native language from speech in other languages.

Early linguistic interaction with mothers, fathers and other caregivers is almost certainly important in establishing and consolidating these early abilities, long before the child is giving any indication of language abilities.

Rate of vocabulary development

In the beginning, infants add active vocabulary somewhat gradually. Here are measures of active vocabulary development in two studies. The Nelson study was based on diaries kept by mothers of all of their children's utterances, while the Fenson study is based on asking mothers to check words on a list to indicate which they think their child produces.

Milestone | Nelson 1973 (18 children) | Fenson 1993 (1,789 children)
10 words | 15 months (range 13-19) | 13 months (range 8-16)
50 words | 20 months (range 14-24) | 17 months (range 10-24)
Vocabulary at 24 months | 186 words (range 28-436) | 310 words (range 41-668)

There is often a spurt of vocabulary acquisition during the second year. Early words are acquired at a rate of 1-3 per week (as measured by production diaries); in many cases the rate may suddenly increase to 8-10 new words per week, after 40 or so words have been learned. However, some children show a more steady rate of acquisition during these early stages. The rate of vocabulary acquisition definitely does accelerate in the third year and beyond: a plausible estimate would be an average of 10 words a day during pre-school and elementary school years.
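As a back-of-envelope illustration of what that estimate implies (the age span below is an assumption for illustration, not a figure from the studies above):

```python
# Back-of-envelope arithmetic: what "10 words a day" implies over the school years
# (the age span is an assumption for illustration, not data from the studies above).
words_per_day = 10
days_per_year = 365
years = 13 - 3                      # roughly pre-school through elementary school

estimated_vocabulary = words_per_day * days_per_year * years
print(estimated_vocabulary)         # 36500 -- on the order of tens of thousands of words
```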

Perception vs. production again

Benedict (1979) asked mothers to keep a diary indicating not only what words children produced, but what words they gave evidence of understanding. Her results indicate that at the time when children were producing 10 words, they were estimated to understand 60 words; and there was an average gap of five months between the time when a child understood 50 words and the time when (s)he produced 50 words.

All of these methods (maternal diaries and checklists) probably tend to underestimate the number of words young children actually know something about, although they may also overestimate the number of words to which children attribute adult-like meanings.

Combining words: the emergence of syntax

During the second year, word combinations begin to appear. Novel combinations (where we can be sure that the result is not being treated as a single word) appear sporadically as early as 14 months. At 18 months, 11% of parents say that their child is often combining words, and 46% say that (s)he is sometimes combining words. By 25 months, almost all children are sometimes combining words, but about 20% are still not doing so "often."

Early multi-unit utterances

In some cases, early multiple-unit utterances can be seen as concatenations of individual naming actions that might just as well have occurred alone: "mommy" and "hat" might be combined as "mommy hat"; "shirt" and "wet" might be combined as "shirt wet". However, these combinations tend to occur in an order that is appropriate for the language being learned:

1. Doggy bark

2. Ken water (for "Ken is drinking water")

3. Hit doggy

Some combinations with certain closed-class morphemes begin to occur as well: "my turn", "in there", etc. However, these are closed-class words, such as pronouns and prepositions, whose semantic content in its own right is not too different from that of open-class words. The more purely grammatical morphemes -- verbal inflections and verbal auxiliaries, nominal determiners, complementizers etc. -- are typically absent.

Since the earliest multi-unit utterances are almost always two morphemes long -- two being the first number after one! -- this period is sometimes called the "two-word stage". Quite soon, however, children begin sometimes producing utterances with more than two elements, and it is not clear that the period in which most utterances have either one or two lexical elements should really be treated as a separate stage.

In the early multi-word stage, children who are asked to repeat sentences may simply leave out the determiners, modals and verbal auxiliaries, verbal inflections, etc., and often pronouns as well. The same pattern can be seen in their own spontaneous utterances:

1. "I can see a cow" repeated as "See cow" (Eve at 25 months)

2. "The doggy will bite" repeated as "Doggy bite" (Adam at 28 months)

3. Kathryn no like celery (Kathryn at 22 months)

4. Baby doll ride truck (Allison at 22 months)

5. Pig say oink (Claire at 25 months)

6. Want lady get chocolate (Daniel at 23 months)

7. "Where does Daddy go?" repeated as "Daddy go?" (Daniel at 23 months)

8. "Car going?" to mean "Where is the car going?" (Jem at 21 months)

The pattern of leaving out most grammatical/functional morphemes is called "telegraphic", and so people also sometimes refer to the early multi-word stage as the "telegraphic stage".

Acquisition of grammatical elements and the corresponding structures

At about the age of two, children first begin to use grammatical elements. In English, this includes finite auxiliaries ("is", "was"), verbal tense and agreement affixes ("-ed" and "-s"), nominative pronouns ("I", "she"), complementizers ("that", "where"), and determiners ("the", "a"). The process is usually a somewhat gradual one, in which the more telegraphic patterns alternate with adult or adult-like forms, sometimes in adjacent utterances:

1. She's gone. Her gone school. (Domenico at 24 months)

2. He's kicking a beach ball. Her climbing up the ladder there. (Jem at 24 months).

3. I teasing Mummy. I'm teasing Mummy. (Holly at 24 months)

4. I having this. I'm having 'nana. (Olivia at 27 months).

5. I'm having this little one. Me'll have that. (Betty at 30 months).

6. Mummy haven't finished yet, has she? (Olivia at 36 months).

Over the next year to year and a half, sentences get longer, grammatical elements are less often omitted and less often inserted incorrectly, and multiple-clause sentences become more common.

Perception vs. production again

Several studies have shown that children who regularly omit grammatical elements in their own speech nevertheless expect these elements in what they hear from adults, in the sense that their sentence comprehension suffers if the grammatical elements are missing.

Progress backwards

Often morphological inflections include a regular case ("walk/walked", "open/opened") and some irregular or exceptional cases ("go/went", "throw/threw", "hold/held"). In the beginning, such words will be used in their root form. As inflections first start being added, both regular and irregular patterns are found. At a certain point, it is common for children to over-generalize the regular case, producing forms like "bringed", "goed"; "foots", "mouses", etc. At this stage, the child's speech may actually become less correct by adult standards than it was earlier, because of over-regularization.

This over-regularization, like most other aspects of children's developing grammar, is typically resistant to correction:

CHILD: My teacher holded the baby rabbits and we patted them.

ADULT: Did you say your teacher held the baby rabbits?

CHILD: Yes.

ADULT: What did you say she did?

CHILD: She holded the baby rabbits and we patted them.

ADULT: Did you say she held them tightly?

CHILD: No, she holded them loosely.
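A toy sketch (not a model from the acquisition literature) makes it clear why forms like "goed" and "holded" appear at exactly this stage: the productive "-ed" rule is in place before all the irregular past forms have been memorised, so the rule fills the gaps:

```python
# A toy illustration of over-regularization: once the productive "-ed" rule is learned,
# it gets applied even to verbs whose irregular past forms are not yet (reliably) stored.

# Exceptions this hypothetical child has memorised so far -- "went" and "held" not yet.
memorised_irregulars = {"throw": "threw"}

def child_past_tense(verb):
    """Use a stored irregular form if one is known; otherwise apply the regular rule."""
    if verb in memorised_irregulars:
        return memorised_irregulars[verb]
    return verb + "ed"              # the over-applied regular rule

for verb in ["walk", "open", "throw", "go", "hold"]:
    print(verb, "->", child_past_tense(verb))
# walk -> walked, open -> opened, throw -> threw, go -> goed, hold -> holded
```

On this picture, the child's speech temporarily becomes less adult-like precisely because a genuinely productive rule has been acquired, which is why the errors resist correction.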

Critical Period Hypothesis

The Critical Period Hypothesis refers to a long-standing debate in linguistics and language acquisition over the extent to which the ability to acquire language is biologically linked to age. The hypothesis claims that there is an ideal 'window' of time to acquire language in a linguistically rich environment, after which this is no longer possible.

The Critical Period Hypothesis states that the first few years of life constitute the crucial time in which an individual can acquire a first language if presented with adequate stimuli. If language input does not occur until after this time, the individual will never achieve a full command of language, especially of grammatical systems.

The evidence for such a period is limited, and support stems largely from theoretical arguments and analogies to other critical periods in biology such as visual development, but nonetheless is widely accepted. The nature of this phenomenon, however, has been one of the most fiercely debated issues in psycholinguistics and cognitive science in general for decades. Some writers have suggested a "sensitive" or "optimal" period rather than a critical one; others dispute the causes (physical maturation, cognitive factors). The duration of the period also varies greatly in different accounts. Steven Pinker, in his book The Language Instinct, states that acquisition of a normal language is guaranteed for children up to the age of six, is steadily compromised from then until shortly after puberty, and is rare thereafter (Pinker 1994, p. 298).

In second language acquisition, the strongest evidence for the critical period hypothesis is in the study of accent, where most older learners do not reach a native-like level. However, under certain conditions, native-like accent has been observed, suggesting that accent is affected by multiple factors, such as identity and motivation, rather than a critical period biological constraint (Moyer, 1999; Bongaerts et al., 1995; Young-Scholten, 2002).

History

The Critical Period Hypothesis was first proposed by Montreal neurologist Wilder Penfield and co-author Lamar Roberts in a 1959 paper Speech and Brain Mechanisms, and was popularised by Eric Lenneberg in 1967 with Biological Foundations of Language. Lenneberg proposed brain lateralisation at puberty as the mechanism that closes down the brain's ability to acquire language, though this has since been widely disputed. Other notable proponents of the Critical Period Hypothesis include Noam Chomsky.

Lenneberg claimed that if no language is learned before the end of this critical period, which he placed around puberty, it could never be learned in a normal and fully functional sense. This claim became known as the "critical period hypothesis."

An interesting example of this is the case of Genie, also known as "The Wild Child". A thirteen-year-old victim of lifelong child abuse, Genie was discovered in her home on November 4th, 1970, strapped to a potty chair and wearing diapers. She appeared to be entirely without language. Her father had judged her retarded at birth and had chosen to isolate her, and so she had remained until her discovery.

It was an ideal (albeit horrifying) opportunity to test the theory that a nurturing environment could somehow make up for a total lack of language past the age of 12. She was unable to acquire language completely, although the degree to which she acquired language is disputed.[1] Detractors of the "Critical Period Hypothesis" point out that in this example and others like it (see Feral children), the child is hardly growing up in a nurturing environment, and that the lack of language acquisition in later life may be due to the results of a generally abusive environment rather than being specifically due to a lack of exposure to language.

A more up-to-date view of the Critical Period Hypothesis is represented by the University of Maryland, College Park instructor Robert DeKeyser. DeKeyser argues that although it is true that there is a critical period, this does not mean that adults cannot learn a second language perfectly, at least on the syntactic level. DeKeyser talks about the role of language aptitude as opposed to the critical period.

Second language acquisition

The theory has often been extended to a critical period for second language acquisition, although this is much less widely accepted. Certainly, older learners of a second language rarely achieve the native-like fluency that younger learners display, despite often progressing faster than children in the initial stages. David Singleton (1995) states that in learning a second language "younger = better in the long run", but points out that there are many exceptions, noting that five percent of adult bilinguals master a second language even though they begin learning it when they are well into adulthood, long after any critical period has presumably come to a close.

While the window for learning a second language never completely closes, certain linguistic aspects appear to be more affected by the age of the learner than others. For example, adult second-language learners nearly always retain an immediately identifiable foreign accent, including some who display perfect grammar (Oyama 1976). Some writers have suggested a younger critical age for learning phonology than for syntax. Singleton (1995) reports that there is no critical period for learning vocabulary in a second language. Robertson (2002) observed that factors other than age may be even more significant in successful second-language learning, such as personal motivation, anxiety, input and output skills, settings and time commitment.

On reviewing the published material, Bialystok and Hakuta (1994) conclude that second-language learning is not necessarily subject to biological critical periods, but "on average, there is a continuous decline in ability [to learn] with age."

Experimental and observational studies


How children acquire native language (L1) and the relevance of this to foreign language (L2) learning has long been debated. Although evidence for L2 learning ability declining with age is controversial, a common notion is that children learn L2s easily, whilst older learners rarely achieve fluency. This assumption stems from critical period (CP) ideas. A CP was popularised by Eric Lenneberg in 1967 for L1 acquisition, but considerable interest now surrounds age effects on second language acquisition (SLA). SLA theories explain learning processes and suggest causal factors for a possible CP for SLA, mainly attempting to explain apparent differences in the language aptitudes of children and adults by distinct learning routes, and clarifying them through psychological mechanisms. Research explores these ideas and hypotheses, but results are varied: some studies demonstrate that pre-pubescent children acquire language easily, some that older learners have the advantage, whilst others focus on the existence of a CP for SLA. Recent studies (e.g. Mayberry and Lock, 2003) have recognised that certain aspects of SLA may be affected by age, whilst others remain intact. Much of the work reviewed below investigates whether the capacity for vocabulary acquisition in particular decreases with age.

A review of SLA theories and their explanations for age-related differences is necessary before considering empirical studies. The most reductionist theories are those of Penfield and Roberts (1959) and Lenneberg (1967), which stem from L1 and brain damage studies; children who suffer impairment before puberty typically recover and (re-)develop normal language, whereas adults rarely recover fully, and often do not regain verbal abilities beyond the point reached five months after impairment. Both theories agree that children have a neurological advantage in learning languages, and that puberty correlates with a turning point in ability. They assert that language acquisition occurs primarily, possibly exclusively, during childhood, as the brain loses plasticity after a certain age. It then becomes rigid and fixed, and loses the ability for adaptation and reorganisation, rendering language (re-)learning difficult. Penfield and Roberts (1959) claim children under nine can learn up to three languages: early exposure to different languages activates a reflex in the brain allowing them to switch between languages without confusion or translation into L1 (Penfield, 1964). Lenneberg (1967) asserts that if no language is learned by puberty, it cannot be learned in a normal, functional sense. He also supports Penfield and Roberts' (1959) proposal of neurological mechanisms responsible for maturational change in language learning abilities. This, Lenneberg maintains, coincides with brain lateralisation and left-hemispherical specialisation for language around age thirteen: infants' motor and linguistic skills develop simultaneously, but by age thirteen the functions of the cerebral hemispheres separate and become set, making language acquisition extremely difficult (Lenneberg, 1967).

Cases of deaf and feral children provide evidence for a biologically determined CP for L1. Feral children are those not exposed to language in infancy/childhood due to being brought up in the wild, in isolation and/or confinement. A classic example is 'Genie', who was deprived of social interaction from birth until discovered aged thirteen (post-pubescent). She was completely without language, and after seven years of rehabilitation still lacked linguistic competence. Another case is Isabelle, who was incarcerated with her deaf-mute mother until the age of six and a half (pre-pubescent). She also had no language skills, but, unlike Genie, quickly acquired normal language abilities through systematic specialist training.

Such studies are, however, problematic; isolation can result in general retardation and emotional disturbances, which may confound conclusions drawn about language abilities. Studies of deaf children learning American Sign Language (ASL) have fewer methodological weaknesses. Newport and Supalla (1987) studied ASL acquisition in deaf children differing in age of exposure; few were exposed to ASL from birth, and most first learned it at school.

Results showed a linear decline in performance with increasing age of exposure; those exposed to ASL from birth performed best, and late learners worst, on all production and comprehension tests. Their study thus provides direct evidence for language learning ability decreasing with age, but it does not directly confirm Lenneberg's CP hypothesis, as even the oldest children, the late learners, were exposed to ASL by age four and had therefore not reached puberty, the proposed end of the CP. In addition, the declines were shown to be linear, with no sudden drop-off of ability at a certain age, as would be predicted by a strong CP hypothesis. That the later-exposed children nonetheless performed significantly worse suggests the CP may end earlier than originally postulated.

Other work has challenged the biological approach; Krashen (1975) reanalysed clinical data used as evidence and concluded that cerebral specialisation occurs much earlier than Lenneberg calculated. Therefore, if a CP exists, it does not coincide with lateralisation. Despite concerns with Lenneberg's original evidence and the dissociation of lateralisation from the language CP idea, however, the concept of a CP remains a viable hypothesis, which later work has better explained and substantiated.

Contrary to biological views, behavioural approaches assert that languages are learned as any other behaviour, through conditioning. Skinner (1957) details how operant conditioning forms connections with the environment through interaction and, alongside O. Hobart Mowrer (1960), applies the ideas to language acquisition. Mowrer hypothesises that languages are acquired through rewarded imitation of language models; the model must have an emotional link to the learner (e.g. parent, spouse), as imitation then brings pleasant feelings which function as positive reinforcement. Because new connections between behaviour and the environment are formed and reformed throughout life, it is possible to gain new skills, including language(s), at any age.

To accommodate observed language learning differences between children and adults, Felix (1985) suggests that children, whose brains create countless new connections daily, may handle the language learning process more effectively than adults do. This assumption, however, remains untested and is not a reliable explanation for children's aptitude for L2 learning. A problem with the behaviourist approach is its assumption that all learning, verbal and non-verbal, occurs through the same processes. A more general problem is that, as Pinker (1995) notes, almost every sentence anybody voices is an original combination of words, never previously uttered; therefore a language cannot consist only of word combinations learned through repetition and conditioning. The brain must contain innate means of creating an endless number of grammatical sentences from a limited vocabulary. This is precisely what Chomsky (1965) argues with his proposition of a Universal Grammar (UG).
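A toy phrase-structure grammar (illustrative only; the rules and lexicon below are invented, not Chomsky's or Pinker's own examples) makes the combinatorial point concrete: a handful of finite rules over a small vocabulary already yields a very large, and with recursion an unbounded, set of novel grammatical sentences:

```python
# A toy phrase-structure grammar (rules and lexicon invented for illustration):
# a few finite rules generate indefinitely many novel sentences.
import random

grammar = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],   # the second option is recursive via PP
    "VP":  [["V", "NP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["child"], ["dog"], ["ball"]],
    "V":   [["sees"], ["chases"]],
    "P":   [["near"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Randomly expand a symbol into words; recursion makes the sentence set unbounded."""
    if symbol not in grammar:          # terminal word
        return [symbol]
    options = grammar[symbol]
    if depth >= max_depth:             # cap recursion so the demo always terminates
        options = [options[0]]
    expansion = random.choice(options)
    return [word for part in expansion for word in generate(part, depth + 1, max_depth)]

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the child chases a dog near the ball" -- never heard verbatim, yet grammatical
```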

Chomsky (1965) asserts that environmental factors must be relatively unimportant for language emergence, as so many different factors surround children acquiring L1. Instead, Chomsky claims language learners possess innate principles building a language acquisition device (LAD) in the brain. These principles