
Phonological Construction

By :

Jerusman Marbun

Grade :

Semester IV (Four)

Department :

Pendidikan Bahasa Inggris

Course Title/ Subject :

Phonology

Lecturer :

Subono Hutagalung, S.Pd

SEKOLAH TINGGI KEGURUAN DAN ILMU PENDIDIKAN

(STKIP) BARUS TAPANULI TENGAH

T.A 2013/2014


TABLE OF CONTENTS

PREFACE

BACKGROUND

TABLE OF CONTENTS

CHAPTER I

A. Cognitive Phonology
   Introduction
   Knowledge of Linguistic Sounds

CHAPTER II

B. Phonological Constructions
   Groups of Sounds
   The Internal Structure of the Syllable

CHAPTER III

C. The Role of Abstraction in Constructing Phonological Structure
   The Origins of Subsyllabic Structure
   Cross-Linguistic Variation in Subsyllabic Structure
   Subsyllabic Structure in Korean and Japanese

CHAPTER IV

D. Phonology III: Syllable Structure, Stress
   Syllable Structure I (Universal)
   Vowels Are Nuclear (English)
   Syllable Structure II
   Stress

CONCLUSION

SUGGESTION

REFERENCES

PREFACE

Praise be to God, the Cherisher and Sustainer of the worlds, who has been giving His blessing and mercy to the writer to complete the paper entitled "Phonological Construction". This paper is submitted to fulfil one of the requirements for a college degree in the English Study Faculty at STKIP Barus Tapanuli Tengah.

In finishing this paper, the writer gives his regards and thanks to the people who have given guidance and helped him to finish it.

Finally, the writer realizes that there may be unintended errors in this paper. He welcomes suggestions from all readers to improve its content, so that it may serve as a good example for future papers.

Barus, April 2014

Writer,


BACKGROUND

This paper is arranged as an introduction to phonological construction of the sort taught in the first year of an English language programme. Students on such courses can struggle with phonetics and phonology; it is sometimes difficult to see past the new symbols and terminology, and the apparent assumption that we can immediately become consciously aware of movements of the vocal organs which we have been making almost automatically for the last eighteen or more years. This paper attempts to show why we need to know about phonetics and phonology if we are interested in language and our knowledge of it, as well as introducing the main units and concepts we require to describe speech sounds accurately.

In presenting the details of phonology, I have also chosen to use verbal descriptions rather than diagrams and pictures in most cases. The reason for this is that we need to learn to use our own intuitions, and this is helped by encouraging us to introspect and think about our own vocal organs, rather than seeing disembodied pictures of structures which do not seem to belong to us at all.

Our hope is that a thorough grounding in the basics will help us approach more abstract theoretical and metatheoretical issues in more advanced courses with a greater understanding of what the theories intend to do and to achieve, and with more chance of evaluating competing models realistically.


CHAPTER 1

A. Cognitive Phonology

1. Introduction

Phonology is usually thought of as the study of the ‘sound systems’ of languages. In this article the writer will make an attempt to explain what that means for him and why he refers to the approach that he favours as ‘cognitive’. Frankly, the writer has no idea how phonology could ever be anything but cognitive. However, there is a certain view that explanation in this domain must be crucially built upon our understanding of how sounds are produced and perceived. The writer does not dispute that insight into linguistic sound systems can be derived from such understanding. The point is that some fundamental properties of sound systems cannot be understood in this way, but rather must come from theories about the cognitive representations that underlie sound systems. Hence ‘cognitive phonology’. As we will see, phonology, being cognitive, is not fully encapsulated in the mind, as there must also be a system for externalizing the phonological representations.

In section 2 the writer first discusses what it means to say someone knows linguistic sound events. Section 3 argues for a strong parallelism between grammatical components (phonology, syntax and semantics) and contains a brief excursion on the evolution of language. Sections 4 and 5 home in on the phonological component, discussing static (i.e. phonotactic) and dynamic (i.e. allomorphy) phenomena, respectively. In section 6, the writer argues that even though the phonological system is non-derivational and constraint-based, two levels (in different planes) are necessary. Section 7 offers some conclusions.

This article can be read as an attempt to explain to interested parties (linguists who are not, or not primarily, phonologists, scholars in other fields, colleagues, friends, relatives) what it is that the writer is studying and why he holds certain beliefs or guiding ideas about this subject. Hence the writer does not presuppose much prior knowledge of phonology, although the exposition becomes increasingly involved and complex, all of which, the writer hopes, is not offending Hans den Besten, to whom this article is offered as a token of both esteem for him as a linguist and friendship as a colleague.


2. Knowledge of linguistic sounds

Humans do not appear to be very good at communicating telepathically,

i.e. by reading the thoughts in each other’s minds. To travel from one mind to the

other, thoughts need carriers that are perceptible. Most human languages make use

of carriers that are audible, although communication (taken as a broader concept)

also uses a great deal of visual information, notably in the gestures and ‘body

languages’ that humans display when they express their thoughts. In addition, sign

languages, as used in Deaf communities, are human languages that are totally

based on visual forms.

Other sensory channels (smell, taste and touch) play a much smaller role in

human communication (while being quite significant among other animal

species), although tactile communication can reach quite sophisticated levels,

even as an alternative to spoken or signed language. (See Finnegan 2002 for a

comprehensive overview of the different channels of communication.) I will

confine the discussion here to that part of communicative acts that is taken care of

by spoken language. Human languages, then, contain an inventory of ‘sound

events’ that are conventionally (and largely arbitrarily) linked to ‘meanings’ or

‘concepts’ that constitute the building blocks out of which we construct our

thoughts.

By stringing these sound events together we construct complex words and

sentences that represent our hidden thoughts. If the listener knows the

conventional linkage between sound events and their meanings, as well as the

grammatical rules for decomposing complex expressions that encode the complex

thoughts of the speaker, communication can proceed successfully. What does it

mean to say that the language user knows the sound events that are linked to

meanings? Clearly, sound events as such (i.e. as physical acoustic events) do not

form part of cognition. There is no part of the brain that literally contains an

inventory of acoustic events that somehow can be released upon command.

Rather, humans produce sounds every time they speak. Sounds are produced by

specific actions or movements of certain body parts. How this works in detail is

studied under the heading of ‘articulatory phonetics’. Speakers apparently know

these specific movements (as appropriate for achieving certain acoustic targets)

and it might therefore be assumed that it is this (largely unconscious) knowledge

(rather than the sounds themselves) that forms part of human cognition. But this

cannot be enough.


Language users do not recognize sounds by visually identifying the

movements that produce them (although such visual information is used in speech

recognition when available). If ‘speech’ recognition was totally based on the

visual perception of articulatory movements, spoken languages would be sign

languages! Rather, they identify the sounds as such (which is why we can talk over the

telephone or listen to the radio). In other words, language users have a mental,

perceptual (i.e. a psycho-acoustic) image of the sounds that allows them to parse

the acoustic speech signal into units that can be matched with words or

meaningful parts of complex words. It would seem, then, that knowledge of sound

events has two aspects, an articulatory plan and a perceptual or psycho-acoustic

image. It is a widely accepted view that the knowledge of sound events that

correspond to morphemes and words is quite specific, in that it takes the form of a

mental representation that is compositional (rather than holistic). This means that

the cognitive representation is built out of smaller parts or ‘atoms’ that are in

themselves meaningless.

At one time it was believed that the atoms corresponded roughly to

complete slices of the acoustic signal and the atoms were thus called

phonemes (since X-emes are fundamental units in the component that deals with

X). Subsequent research revealed, however, that phonemes themselves consist of

smaller parts, and these were called features.

There has been a long debate on the question as to whether the articulatory

aspects or the perceptual aspects of features are more fundamental. For a revealing

discussion, I refer to Fowler and Galantucci (2002). In their view, the articulatory

plan is fundamental. The articulatory basis of features is also clearly present in

most work on feature theory since Chomsky and Halle (1968), especially in its

‘feature geometric’ descendants (cf. Halle 1983, 2003). A consequence of taking

articulation as basic is that in speech perception, listeners must be assumed to

crucially draw on their articulatory knowledge. The idea is that perceived stimuli

are internally linked to the corresponding articulatory plan which then can be

matched with the articulatory-based phonological form of lexical items. This

proposal is known as the ‘motor theory of speech perception’ advocated by

Liberman and Mattingly (1985). (In a way, it is claimed then that hearers mentally

‘see’ the articulatory movements that underlie the acoustic events.) This theory

forms the basis of the “articulatory phonology” model (cf. Browman & Goldstein

1986).


Others (among them, Roman Jakobson) believe that the perceptual image

is fundamental, mainly because of its presumed close relationship to the acoustic

aspect of sounds, which is shared by speaker (through feedback) and listener. See

Harris (1994) as well as Anderson and Ewen (1987) for a defense of this position.

I will not enter into this debate here, and keep a neutral stand on the issue, in that I

will assume that the cognitive representation of sounds simply has two layers, an

articulatory plan and a perceptual image. In fact, my own views on the true atoms

of phonology involve the postulation of just two units which can hardly be said

to have any phonetic definition, articulatory or perceptual. (See van der Hulst

2000, in press d, in prep a.)

In any event, it seems safe to conclude that knowledge of sound events

involves compositional mental representations of these events, and that makes the

‘sound side’ of linguistic signs as abstract and cognitive as the meaning side. Just

like (word) meaning can be thought of in terms of concepts that stand in some

relationship to real world objects, phonology must also be understood as a

conceptual system that stands in some sort of relationship to real world objects

(namely certain types of noises, or certain types of behavior, i.e. articulatory

behavior); see van der Hulst (in press b) for a more detailed discussion. But wait,

says the philosopher. We know that not all word meanings or concepts that

constitute them correspond to something in the real world. And I was hoping that

he would say that because if he is right (and I think we all agree that he is,

although the case can more easily be made for compositional conceptual

structures than for the ‘atomic’ concepts), then there is no reason to assume that

all phonological concepts stand in a direct referential relationship to some real

world noise or behavior. I will return to this point below when we will see that a

phonology that is limited to concepts that correlate with phonetic events fails to

provide insight into many phonological phenomena.

According to some, the phonological primes (i.e. features) are hard-wired

as part of an innate human language capacity, rather than being ‘constructed’ from

scratch in the course of cognitive development and language acquisition. This

view must rely on some sort of evolutionary development whereby skills to make

certain kinds of sounds become hard-wired, presumably because of adaptive

advantages of some sort (more on that below).


However, if this is so, and if spoken language ‘phonetics’ has determined

the nature of the primes in the evolutionary past, how can this same endowment

be helpful to a deaf person? Clearly, it cannot, and I am assuming here without

discussion that sign languages are true human languages that, as far as we know

now, share all essential structural properties with spoken languages. As a

consequence, deaf people either must construct their phonology in some other

way (perhaps using some non-specialized general cognitive ability that allows

them to construct a phonology-like conceptual system) or they have no

compositional conceptual system comparable to phonology at all, which implies

that conceptual representations of articulatory movements and perceptible forms

of signs are stored in the lexicon holistically. The first option is logically

consistent, although it predicts differences between the course of acquisition of

the phonologies of spoken and signed languages.

There appears to be little support for any significant differences. If

anything, the contrary is true. Striking similarities between the acquisition course

of languages in both modalities have been put forward as ‘evidence’ for the claim

that sign languages and spoken languages are both natural human languages,

stemming from the same innate capacity (Klima and Bellugi 1979, Emmorey

2002, Meier, Cormier and Quinto-Pozos 2002). The second option (sign languages

have no phonology) flies in the face of the results that installed the idea that sign

languages are natural languages in the first place. Stokoe’s seminal work (Stokoe

1960) led to the recognition of the fact that sign languages, in fact, have a phonology. This finding led to an explosion of work on sign languages,

especially American Sign Language (Klima and Bellugi 1979, Emmorey 2002,

Fischer and van der Hulst 2003; see van der Hulst (1993, 1995, 1996, 2000), van

der Kooij (2002) for detailed discussion of phonological compositionality in signs

and many references to current work).

The conclusion must be that phonological categories are constructed in the

course of language acquisition. Elsewhere I propose that the innate language

faculty (or some more general faculty) provides a universal mechanism for

constructing the set of primitives and I specify the properties of that mechanism in

some detail (van der Hulst 1993, 2000, to appear d, in prep a).


CHAPTER 2

B. Phonological Construction

1. Groups of sounds

The fact that sounds do not come out of our mouths one segment at a time

has been known for a long time. In fact, many people have argued that segments

per se actually only exist as abstractions, because it appears that the smallest piece

of sound we can comfortably produce is larger than a single segment. If you think

about it, when you try to say a consonant, and particularly a stop, you are most

likely to say [tʰә] or [gә], and it’s really very hard to say [g] in isolation.

Furthermore, although the Roman alphabet (which is the basis for most

writing systems in the world) expresses segments directly, many other writing

systems don’t, but instead express units consisting of (at least) a consonant and a

vowel together, and may also include a following consonant. In the Japanese

syllabaries hiragana and katakana, and the Sanskrit and other writing systems

based on the Brahmi script, each symbol stands for a consonant and a vowel in an indissoluble unit. The term used to describe this suprasegmental unit is the well-known term syllable.

There has been an enormous controversy about the status of this unit, with

opinions varying between those, on the one hand, who argued that it was an

unnecessary addition to the inventory of linguistic units (this was most strongly

argued for in Chomsky & Halle (1968), but other American Structuralists also felt

this way) and others, on the other hand, who have argued that segments are

inventions of linguists spoiled by the unnatural Roman writing system, and that

syllables are the smallest real linguistic units. One of the problems with the

concept of the syllable is that it is not a purely physical entity (like a vowel or a

consonant), in the sense that it does not correspond to any single physical gesture

of the articulators, nor to any single stretch of sound. This has led phoneticians to

be skeptical of its existence – one cannot see syllables in a spectrogram, nor in a

waveform. Similarly, syllables do not appear in x-ray films of people speaking. Of

course, for cognitive linguists (and in fact, for most linguists of any stripe)

language is not a physical event in any case but a cognitive one, so that the

inability to define a purely physical unit like the syllable is not a drawback.


Normally the easiest part of thinking about syllables is recognizing them

in words. If we take a word like anthropology we can easily see that it has five

syllables, and when we learn to spell (at least in English) we learn how to divide

the word into syllables: an-thro-pol-o-gy. Of course, things are rarely that simple.

The syllable division rules that we learned in school were orthographic rules –

rules for dividing the spelling of words. For example, we divide sitting as sit-ting,

but, of course, there aren’t two phonetic /t/’s in the word, only two spelled ⟨t⟩’s.

One of the facts that we find about syllables is that while we often know how

many syllables there are in the word, and can say the word syllable by syllable, we

find some difficulty in saying exactly where one syllable ends and the other

begins. Try saying satire one syllable at a time and you will happily say [sæ.tʰaɪr]. On the other hand, if you divide our earlier example sitting you might be less comfortable with [sɪ.tʰɪŋ]. Perhaps you said [sɪt.tʰɪŋ], with a double [t], reflecting the spelling, curiously enough. But of course neither pronunciation reflects the undivided pronunciation of the word, which is without any kind of [t]: [sɪɾɪŋ]. The problem is that we can’t say either [*sɪɾ.ɪŋ] or [*sɪ.ɾɪŋ].

So where does the syllable divide? Later we’ll see that both before and

after as well as in the middle of the flap have been proposed as answers to the

question. Most linguists argue that syllables are organizational units of sound.

That means that they unite individual segments into constituents, much the same

way as words are organized into phrases, and phrases into clauses and so on.

Thus, it is argued, syllables have a hierarchical structure. This organizing

principle recurs not only in syntax (where clauses are made up of phrases which

are made up of words) but also in morphology, where words are made up of

suffixes and prefixes: deny (a verb) suffixed with -able makes a new adjective, and then -ity can be added to make a new noun out of the new adjective:

deniability.

The traditional pieces of the syllable are the onset and the rhyme

(sometimes spelled rime). The onset consists of every segment up to but not

including the vowel, while the rhyme includes the rest of the syllable. The term

rhyme is, of course, familiar from poetry,

1. The period is normally used to divide syllables. Other symbols that are found

in the literature are the dollar sign $, and more complex notations representing

tree structure. We will discuss these below.


(1) Hickory dickory dock
    The mouse ran up the clock

Notice that it is only the vowel plus all following segments that must match –

what precedes is irrelevant.

The rhyme is divided into the nucleus and the coda. In the most simple

terms, the nucleus is the vowel and the remaining consonants are the coda. In the

above example, the nucleus is /ɑ/, the coda is /k/. What complicates the matter in

English is that both the nucleus and the coda can be rather more complex (as can,

for that matter, the onset). In a word like tasks there are three consonants in the

coda, and in sixths there are four: [ksθs]. English, incidentally, is relatively

unusual in permitting so many consonants in the coda. Four is near the maximum permitted in the languages of the world, and in fact, in many languages the limit is zero. A well-known language that has such a limit is Hawai‘ian. If you think of

Hawai‘ian words (you probably know half a dozen) you will notice that every

consonant is followed by a vowel:

(2) Ho.no.lu.lu Ha.le.a.ka.la Ki.la.u.e.a Ma.u.na Ke.a

We can classify the phonology of languages with respect to exactly what

they allow in their codas, with languages like Hawai‘ian representing one extreme

(zero consonants in the coda), and languages like English (up to four) representing

the other extreme. Other languages appear part way along this continuum. One

interesting set of restrictions is not on the number but on the kind of consonant

permitted in the coda. For example, in Japanese, there can be one consonant in

the coda, but there are heavy restrictions on what that consonant can be. It

can be either a nasal (Hon.da, Shim.bun) or a consonant identical to the consonant

following: tep.pan, Nip.pon, gak.kai. No other consonants are permitted in the

coda. Mandarin permits only nasals /n/ and /ŋ/, and in some dialects, /r/.
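As an illustration, the coda restrictions just described can be written out as a small sketch in Python. This is an editorial addition rather than part of the cited literature, and the romanized segment labels are simplified assumptions, not full phoneme inventories:

    # A sketch of the coda restrictions described above, using simplified,
    # romanized segment labels rather than full phoneme inventories.

    NASALS = {"m", "n", "ng"}

    def coda_ok(language, coda, next_onset=None):
        """Is this coda (a list of segments) permitted in the given language?
        next_onset is the first consonant of the following syllable,
        needed for the Japanese geminate condition."""
        if language == "Hawaiian":
            return len(coda) == 0                        # no coda consonants at all
        if language == "Mandarin":
            return len(coda) == 0 or (len(coda) == 1 and coda[0] in {"n", "ng"})
        if language == "Japanese":
            if len(coda) == 0:
                return True
            return len(coda) == 1 and (coda[0] in NASALS or coda[0] == next_onset)
        if language == "English":
            return len(coda) <= 4                        # e.g. sixths
        raise ValueError("unknown language")

    print(coda_ok("Hawaiian", []))            # True  (Ho.no.lu.lu)
    print(coda_ok("Japanese", ["n"]))         # True  (Hon.da)
    print(coda_ok("Japanese", ["p"], "p"))    # True  (Nip.pon: geminate)
    print(coda_ok("Japanese", ["s"], "t"))    # False
    print(coda_ok("Mandarin", ["ng"]))        # True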

In general, languages make no restrictions on what can occur in nuclei,

except that it should be a vowel. Two vowels in a nucleus constitute one

definition of a diphthong, although defining a diphthong this way doesn’t

distinguish between rising and falling diphthongs (the /ju/ in cute vs the /aI/

in bite). The one complication involving nuclei involves whether things other

than vowels can occur in the nucleus. In some languages, such as English and

German, nasals can occupy the nucleus in unstressed syllables: hidden [hɪdn̩], German geben [gebm̩].


In other languages the restriction is not quite so strict.

In Czech /r/ can be a stressed nucleus: the city of Brno [ˈbr̩no]. A

language that has been the subject of much recent inquiry is Imdlawn Tashlhiyt

Berber (spoken in North Africa) where apparently any consonant can be the

nucleus of a syllable. Some famous examples are /txznt/ meaning ‘you stored’,

which is syllabified [tx̩.zn̩t] and /tftkt/ (‘you suffered a strain’), which comes out [tf̩.tk̩t] (Prince & Smolensky 1994). This is, of course, a highly unusual case –

languages greatly prefer relatively open segments to count as the center of

syllables.

As I mentioned above, a common assumption is that the parts of a syllable have a

hierarchical structure:

(3)           σ
            /   \
        Onset    Rhyme
                /     \
           Nucleus    Coda
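This hierarchical grouping can also be pictured as a nested data structure. The following Python sketch is an editorial illustration (the segment spellings are simplified assumptions); it shows how rhyming, as in (1), depends only on the rhyme constituent and ignores the onset:

    # The onset/rhyme/nucleus/coda grouping of (3) as a nested structure.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Syllable:
        onset: List[str] = field(default_factory=list)
        nucleus: List[str] = field(default_factory=list)
        coda: List[str] = field(default_factory=list)

        @property
        def rhyme(self):
            # the rhyme is the nucleus plus the coda, to the exclusion of the onset
            return self.nucleus + self.coda

    # 'dock' and 'clock' from (1): same rhyme, different onsets
    dock = Syllable(onset=["d"], nucleus=["a"], coda=["k"])
    clock = Syllable(onset=["k", "l"], nucleus=["a"], coda=["k"])
    print(dock.rhyme == clock.rhyme)    # True: the words rhyme
    print(dock.onset == clock.onset)    # False: onsets are irrelevant to rhyming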

The reasons for this assumption include the fact that many languages have

rhyme (in the poetic sense) as a linguistic device. Other languages have

alliteration, which involves identity among Onsets. Old English was famous for

this device. In any line of Old English poetry there had to be at least two stressed

words with the same onset in one of their syllables (one in each half of the line):

(4) Stræt wæs stanfah, stig wisode

gúmum ætgædere. Gúðbyrne scan

However, there doesn’t seem to be any language whose poetry requires

identity between words which are identical in Onset+Nucleus while ignoring the

coda.

In addition, there are many cases in languages where there are restrictions

between the nucleus and the coda. For example, in English /ŋ/ forbids a preceding

tense vowel: [*iŋ, *eŋ] etc. However, onsets normally do not set restrictions on

what follows – any consonant in English can be followed by any vowel.


2. The internal structure of the syllable

One of the main reasons for arguing that we need a syllable as a linguistic

unit is that many phonological patterns appear to pay attention to syllable

structure. For example, it is simplest to describe the operation of aspiration in

English by making it sensitive to syllable structure. Consider the following data:

(5)  pie      [pʰaɪ]           spy       [spaɪ]
     pacific  [pʰә.ˈsɪ.fɪk]    specific  [spә.ˈsɪ.fɪk]
     appear   [ә.pʰir]         aspire    [ә.spaɪr]

Note that whether the /p/ begins an accented or unaccented syllable, as long as it

is at the beginning of the onset it is aspirated. If something else precedes it (and,

of course, that something could only be an /s/) in the onset it is unaspirated.
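Stated procedurally, the rule refers only to position within the onset. The short Python sketch below is an editorial illustration under that assumption (the appended "h" simply stands in for the aspiration diacritic):

    # English /p t k/ are aspirated when onset-initial; after /s/ they are not.
    def aspirate(onset):
        """onset: list of segments; returns the onset with aspiration marked."""
        marked = []
        for i, seg in enumerate(onset):
            if seg in {"p", "t", "k"} and i == 0:
                marked.append(seg + "h")     # onset-initial voiceless stop
            else:
                marked.append(seg)
        return marked

    print(aspirate(["p"]))        # ['ph']      pie, appear
    print(aspirate(["s", "p"]))   # ['s', 'p']  spy, aspire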

Similarly, in German, obstruents are devoiced syllable-finally:

(6)  abgehen      [ˈap.ge.әn]       ‘walk away’
     wegfahren    [ˈvεk.fɑr.әn]     ‘drive away’
     Burgmeister  [ˈbʊrk.mai.stәr]  ‘mayor’

One additional issue that has been extensively discussed in the

phonological literature deals with how many consonants are permitted to

occur in onsets and codas. We have already seen the limiting case above –

none, along with the fact that many languages permit only limited kinds of

(single) consonants in codas. Every language permits at least one consonant in

onsets. In fact, there are languages which forbid empty onsets – Arabic requires

that every syllable begin with a consonant, although some have argued that

syllable initial glottal stop is simply an empty consonant fulfilling the function of

being an onset.


CHAPTER 3

C. The Role of Abstraction in Constructing Phonological Structure

1. The Origins of Subsyllabic Structure

Syllables play a critical role in many domains of language processing,

including speech perception (Álvarez, Carreiras, & Perea, 2004), production

(Cholin, Levelt, & Schiller, 2006; Laganaro & Alario, 2006) and short-term

memory (Nimmo & Roodenrys, 2002). In this work, we focus on the internal

structure of syllabic units.

We review evidence that subsyllabic structure varies cross-linguistically.

English and other Indo-European languages group together vowels and codas

into a unit (‘rime’) distinct from the onset (Kessler & Treiman, 1997). For

example, the English syllable for ‘cat’ /kæt/ is argued to be composed of onset /k/

and rime /æt/. In contrast, Korean (Yoon & Derwing, 2001) and Japanese

(Katada, 1990) group onsets and vowels into a unit (‘body’) distinct from the coda (such that the Korean syllable for ‘feather’ /kis/ is decomposed as body

/ki/ vs. coda /s/).
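The two groupings can be made explicit with a small illustrative sketch (an editorial addition; the transcriptions are the simplified ones used above):

    # Two competing ways of grouping a CVC syllable.
    def onset_rime(c1, v, c2):
        return (c1, (v, c2))          # English-type: onset + (nucleus, coda)

    def body_coda(c1, v, c2):
        return ((c1, v), c2)          # Korean/Japanese-type: (onset, nucleus) + coda

    print(onset_rime("k", "ae", "t"))   # ('k', ('ae', 't'))  English 'cat': rime /aet/
    print(body_coda("k", "i", "s"))     # (('k', 'i'), 's')   Korean 'feather': body /ki/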

In this work, we argue that speakers construct these structures to

account for associations between segments that are not attributable to other,

more general, phonological patterns. We first review research showing that

in both Korean and English the distribution of segments provides robust

statistical cues to their contrasting subsyllabic structures; furthermore,

speakers are sensitive to these statistical cues (Lee & Goldrick, 2008). We then

present an analysis of a new, larger lexicon of Korean. This shows that robust

cues to subsyllabic structure are not found when considering representations

that incorporate the influence of more general phonological patterns. This is

consistent with the claim that learners postulate subsyllabic structures only after

correcting for other potential sources of phonological patterns.


2. Cross-Linguistic Variation in Subsyllabic Structure

A variety of types of behavioral data suggest that subsyllabic structures differ across languages. We focus on two prominent subtypes involving alternate groupings of the nucleus and margin positions: onset-rime (dividing a CVC syllable into C-VC) vs. body-coda (CV-C).

Subsyllabic Structure in Indo-European Languages. Indo-European languages, specifically both Germanic (Dutch, English, German) and Romance (French) languages, have been argued to group together the nucleus and coda into a unit (rime) to the exclusion of the onset. The presence of this rime unit has been used to account for the greater prevalence of phonotactic restrictions governing

vowel-coda vs. onset-vowel sequences (e.g., Dutch: Martensen, Maris, &

Dijkstra, 2000; English: Kessler & Treiman, 1997; French: Perruchet &

Peereman, 2004). In metalinguistic tasks, English speakers judge syllables

sharing the vowel and coda (but not onset) to be more similar than

syllables sharing the onset and vowel (but not coda; Yoon & Derwing,

1994). Finally, spontaneously occurring speech errors (e.g., MacKay, 1972)

as well as those produced in short-term memory tasks (e.g., Treiman & Danis,

1988) are more likely to involve the simultaneous misordering of the vowel and

coda (i.e., the rime unit) vs. the onset and vowel.

3. Subsyllabic Structure in Korean and Japanese

Similar evidence has been used to argue that Korean and Japanese

have a different subsyllabic structure, grouping the vowel and onset together to

the exclusion of the coda. With respect to metalinguistic tasks, Katada (1990)

documents a Japanese language game that involves reuse of onset-vowel

sequences, excluding any following codas. Yoon & Derwing (2001) show that

in contrast to the English pattern discussed above, Korean speakers judge

syllables sharing the onset and vowel (but not coda) to be more similar than those

sharing only the vowel and coda. Kureta, Fushimi, & Tatsumi (2006) find

that Japanese speakers use onset-vowel sequences to prepare responses in an

implicit priming paradigm (in contrast, Dutch speakers can rely on singleton

onsets; Meyer, 1991). As discussed in more detail below, Korean speech errors

are more likely to involve simultaneous misorderings of onsets and vowels

than vowels and codas (Lee & Goldrick, 2008).


CHAPTER 4

D. Phonology III: Syllable Structure, Stress

Words consist of syllables. The structure of syllables is determined partly

by universal and partly by language-specific principles. In particular we shall

discuss the role of the sonoricity hierarchy in organising the syllabic structure, and

the principle of maximal onset.

Utterances are not mere strings of sounds. They are structured into units

larger than sounds. A central unit is the syllable. Words consist of one or several

syllables. Syllables in English begin typically with some consonants. Then comes

a vowel or a diphthong and then some consonants again. The first set of

consonants is the onset, the group of vowels the nucleus and the second group of

consonants the coda. The combination of nucleus and coda is called rhyme. So,

syllables have the following structure:

(75) [onset [nucleus coda]]

For example, /strength/ is a word consisting of a single syllable:

(76)  [stɹ [ԑ ŋθ]]
      Onset:    [s], [t], [ɹ]
      Nucleus:  [ԑ]
      Coda:     [ŋ], [θ]
      Rhyme:    [ԑ ŋθ]

Thus, the onset consists of three consonants: [s], [t] and [ɹ], the nucleus consists

just of [ԑ], and the coda has [ŋ] and [θ]. We shall begin with some fundamental

principles. The first concerns the structure of the syllable.

a. Syllable Structure I (Universal)

Every syllable has a nonempty nucleus. Both coda and onset may

however be empty.

A syllable which has an empty coda is called open. Examples of open syllables

are /a/ [e] (onset is empty), /see/ [si] (onset is nonempty). A syllable that is

not open is closed. Examples are /in/ [ɪn] (onset empty) and /sit/ [sɪt] (onset

nonempty). The second principle identifies the nuclei for English.


b. Vowels are Nuclear (English)

A nucleus can only contain a vowel or a diphthong.

This principle is not entirely without problems, and that is the reason

that we shall look below at a somewhat more general principle.

The main problem is the unclear status of some sounds, for

example [ɚ]. They are in between a vowel and consonant, and

indeed sometimes end up in onset position (see above) and

sometimes in nuclear position, for example in /bird/ [bɚd].

The division into syllables is clearly felt by any speaker,

although there sometimes is hesitation as to exactly how to divide a

word into syllables. Consider the word /atmosphere/. Is the /s/ part

of the second syllable or part of the third? The answer is not

straightforward. In particular the stridents (that is, the sounds [s], [ʃ]) enjoy a special status. Some claim that they are extrasyllabic

(not part of any syllable at all), some maintain that they are

ambisyllabic (they belong to both syllables). We shall not go into

that here.

The existence of rhymes can be attested by looking at

verses (which also explains the terminology): words that rhyme do

not need to end in the same syllable, they only need to end in the

same rhyme: /fun/ – /run/ – /spun/ – /shun/. Also, the coda is the

domain of a rule that affects many languages: For example, in

English and Hungarian, within the coda the obstruents must either

all be voiced or unvoiced; in German and Russian, all obstruents in

coda must be voiceless. (Here is an interesting problem caused

among other by nasals. Nasals are standardly voiced. Now try to

find out what is happening in this case by pronouncing words with

a sequence nasal+voiceless stop in coda, such as /hump/, /stunt/,

/Frank/.) Germanic verse in the Middle Ages used a rhyming

technique where the onsets of the rhyming words had to be the

same. (This is also called alliteration. It allowed to rhyme two

words of the same stem; German had a lot of Umlaut and ablaut,

that is to say, it had a lot of root vowel change making it

impossible to use the same word to rhyme with itself (say /run/ –

/ran/). It is worthwhile to remain with the notion of the domain of a

rule.


Many phonological constraints are seen as conditions that concern

two adjacent sounds. When these sounds come into contact, they

undergo change to a smaller or greater extent, for some sound

combinations are more easily pronounceable than others. We have

discussed sandhi at length in Lecture 4. For example, the Latin

word /in/ ‘in’ is a verbal prefix, which changes in writing (and

therefore in pronunciation) to /im/ when it precedes a labial

(/impedire/). Somewhat more radical is the change from [ml] to

[mpl] to avoid the awkward combination [ml] (the word

/templum/ derives from /temlom/, with /tem/ being the root,

meaning ‘to cut’). There is an influential theory in phonology, autosegmental phonology, which assumes that phonological features are organized on different scores (tiers) and can spread to adjacent segments independently from each other.

Table 10: The Sonoricity Hierarchy

dark vowels [a], [o]
> mid vowels [æ], [œ]
> high vowels [i], [y]
> r-sounds [r], [ɹ]
> nasals, laterals [m], [n], [l]
> vd. fricatives [z], [ӡ]
> vd. plosives [b], [d]
> vl. fricatives [s], [ʃ]
> vl. plosives [p], [t]

Think for example of the

feature [±voiced]. The condition on the coda in English is expressed by saying

that the feature [±voiced] spreads along the coda. Clearly, we cannot allow the

feature to spread indiscriminately, otherwise the total utterance is affected.

Rather, the spreading is blocked by certain constituent boundaries; these can be

the coda, onset, nucleus, rhyme, syllable, foot or the word. To turn that on its head:

the fact that features are blocked indicates that we are facing a constituent

boundary. So, voicing harmony indicates that English has a coda.
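The coda condition just described can be checked mechanically. The sketch below is an editorial illustration with simplified obstruent sets (nasals are not obstruents, so clusters like the /mp/ of /hump/ are unaffected):

    # English/Hungarian coda condition: obstruents in a coda agree in voicing.
    VOICED = {"b", "d", "g", "z", "v", "dZ"}
    VOICELESS = {"p", "t", "k", "s", "f", "T", "S"}

    def coda_voicing_ok(coda):
        obstruents = [s for s in coda if s in VOICED | VOICELESS]
        return (all(s in VOICED for s in obstruents)
                or all(s in VOICELESS for s in obstruents))

    print(coda_voicing_ok(["k", "s", "T", "s"]))   # True:  'sixths'
    print(coda_voicing_ok(["g", "z"]))             # True:  'dogs'
    print(coda_voicing_ok(["g", "s"]))             # False: voicing disagrees
    print(coda_voicing_ok(["m", "p"]))             # True:  'hump' (nasal is ignored)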


The nucleus is the element that bears the stress. We have said that in

English it is a vowel, but this applies only to careful speech. In general this need

not be so. Consider the standard pronunciation of /beaten/: [ˈbi:tʰn̩] (with a syllabic [n]). To my ears the division is into two syllables: [bi:] and [tʰn̩]. (In German this is certainly so; the verb /retten/ is pronounced [ˈʀԑtʰn̩]. The [n] must therefore occupy the nucleus of the second syllable.) There are more languages

like this. (Slavic languages are full of consonant clusters and syllables that do not

contain a vowel. Consider the island /Krk/, for example.) In general, phonologists

have posited the following conditions on syllable structure. The sounds are ranked along a so-called sonoricity hierarchy, which is

shown in Table 10 (vd. = voiced, vl. = voiceless). The syllable is organized as

follows.

c. Syllable Structure II

Within a syllable the sonoricity strictly increases and then decreases again. It is highest in the nucleus.

This means that a syllable must contain at least one sound which is

at least as sonorous as all the others in the syllable. It is called the

sonoricity peak and is found in the nucleus. Thus, in the onset

consonants must be organized such that the sonority rises, while in

the coda it is the reverse. The conditions say nothing about the

nucleus. In fact, some diphthongs are increasing ([ɪǝ] as in the British English pronunciation of /here/), others are decreasing ([aɪ], [oɪ]). This explains why the phonotactic conditions are opposite at

the beginning of the syllable than at the end. You can end a

syllable in [ɹt], but you cannot begin it that way. You can start a

syllable by [tɹ], but you cannot end it that way (if you try to make up

words with [tɹ], automatically, [ɹ] or even [tɹ] will be counted as

part of the following syllable). Let me briefly note why diphthongs

are not considered problematic in English: it is maintained that the

second part is actually a glide (not a vowel), and so would have to

be part of the coda. Thus, /right/ would have the following

structure:


(77)  /right/: [ɹ a j t]
      Onset: [ɹ]    Nucleus: [a]    Coda: [j t]

The sonoricity of [j] is lower than that of [a], so it is not nuclear.
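The condition in Syllable Structure II can be stated as a check over a numeric version of the sonoricity hierarchy in Table 10. The sketch below is an editorial illustration with assumed rank numbers (only the ranking matters); note that it rejects [st]-onsets, which English nevertheless tolerates, exactly the problem discussed next:

    # Sonority must rise through the onset, peak in the nucleus, fall through the coda.
    SONORITY = {
        "a": 9, "o": 9,           # dark vowels
        "ae": 8, "oe": 8,         # mid vowels
        "i": 7, "y": 7, "j": 7,   # high vowels (and the glide [j])
        "r": 6,                   # r-sounds
        "m": 5, "n": 5, "l": 5,   # nasals, laterals
        "z": 4, "v": 4,           # voiced fricatives
        "b": 3, "d": 3, "g": 3,   # voiced plosives
        "s": 2, "f": 2,           # voiceless fricatives
        "p": 1, "t": 1, "k": 1,   # voiceless plosives
    }

    def well_formed(onset, nucleus, coda):
        seq = [SONORITY[x] for x in onset + nucleus + coda]
        n_on, n_nuc = len(onset), len(nucleus)
        rises = all(a < b for a, b in zip(seq[:n_on], seq[1:n_on + 1]))
        falls = all(a > b for a, b in zip(seq[n_on + n_nuc - 1:-1], seq[n_on + n_nuc:]))
        peak_in_nucleus = max(seq[n_on:n_on + n_nuc]) == max(seq)
        return rises and falls and peak_in_nucleus

    print(well_formed(["t", "r"], ["a"], ["j", "t"]))   # True:  'right' as in (77)
    print(well_formed(["r", "t"], ["a"], []))           # False: [rt] cannot begin a syllable
    print(well_formed(["s", "t", "r"], ["a"], []))      # False: [st(r)] violates the hierarchy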

A moment’s reflection now shows why the position of

stridents is problematic: the sequence [ts] is the only legitimate

onset according to the sonoricity hierarchy, [st] is ruled out.

Unfortunately, both are attested in English, with [ts] only occurring

in non-native words (e.g. /tse-tse/, /tsunami/). There are

phonologists who even believe that [s] is part of its own little

structure here (‘extrasyllabic’). In fact, there are languages which

prohibit this sequence; Spanish is a case in point. Spanish avoids

words that start with [st]. Moreover, Spanish speakers like to add

[e] in front of words that do and are therefore difficult for them to pronounce. Rather than say [stɹenӡ] (/strange/) they will say [es.tɹenӡ]. The effect of this maneuver is that [s] is now part of the coda of an added syllable, and the onset is reduced (in their speech) to just [tɹ], or, in their pronunciation, most likely [tr]. Notice that

this means that Spanish speakers apply the phonetic rules of

Spanish to English, because if they applied the English rules they

would still end up with the onset [stɹ] (see below). French is

similar, but French speakers are somehow less prone to add the

vowel. (French has gone through the following sequence: from [st]

to [est] to [et]. Compare the word /étoile/ ‘star’, which derives from

Latin /stella/ ‘star’.)


d. Stress

Syllables are not the largest phonological unit. They are

themselves organized into larger units. A group of two, sometimes

three syllables is called a foot. A foot contains one syllable that is

more prominent than the others in the same foot. Feet are grouped

into higher units, where again one is more prominent than the

others, and so on. Prominence is marked by stress. There are

various ways to give prominence to a syllable. Ancient Greek is

said to have marked prominence by pitch (the stressed syllable was

about a fifth higher, that is, 3/2 of the frequency of an unstressed syllable).

Other languages (like German) use loudness. Other languages use

a combination of the two (Swedish). Within a given word there is

one syllable that is the most prominent. In IPA it is marked by a

preceding [ˈ]. We say that it carries primary stress. Languages

differ with respect to the placement of primary stress. Finnish and

Hungarian place the stress on the first syllable, French on the last.

Latin put the stress on the last but one (penultimate), if it was long

(that is to say, had a long vowel or was closed); otherwise, if there was a syllable that preceded it (the antepenultimate), then that syllable

got primary stress. Thus we had pe.re.gri.nus (‘foreign’) with stress

on the penultimate (gri) since the vowel was long, but in.fe.ri.or

with stress on the antepenultimate /fe/ since the /i/ in the

penultimate was short. (Obviously, monosyllabic words had the

stress on the last syllable.) Sanskrit was said to have free stress,

that is to say, stress was free to fall anywhere in the word.
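The Latin rule just described is mechanical enough to state as a short sketch (an editorial illustration; each syllable is simply annotated with whether it counts as long):

    # Classical Latin: stress the penult if it is long, otherwise the antepenult.
    def latin_stress(syllables, long_flags):
        """Return the index of the stressed syllable.
        long_flags[i] is True if syllables[i] has a long vowel or is closed."""
        n = len(syllables)
        if n <= 2:
            return 0                 # monosyllables; with two syllables the penult is first
        return n - 2 if long_flags[n - 2] else n - 3

    # pe.re.gri.nus: long penult 'gri' is stressed
    print(latin_stress(["pe", "re", "gri", "nus"], [False, False, True, False]))  # 2
    # in.fe.ri.or: short penult 'ri', so the antepenult 'fe' is stressed
    print(latin_stress(["in", "fe", "ri", "or"], [True, False, False, False]))    # 1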

Typically, within a foot the syllables like to follow in a

specific pattern. If the foot has two syllables, it consists either of an unstressed syllable followed by a stressed one (iambic metre), or vice

versa (trochaic metre).


Sometimes a foot carries three syllables (a stressed followed by

two unstressed ones, a dactylus). So, if the word has more than

three syllables, there will be a syllable that is more prominent than

its neighbours but not carrying main stress. You may try this with

the word /antepenultimate/. You will find that the first syllable is

more prominent than the second but less than the fourth. We say

that it carries secondary stress: [ˌantԑpǝnˈʌltɪmԑt]. Or [ǝˌsɪmǝˈleɪʃn̩].

The so-called metrical stress theory tries to account for stress as follows. The syllables are each represented by a cross (×). This is Layer 0 stress. Then, in a sequence of cycles, syllables get assigned more crosses. The more crosses, the heavier the syllable. The number of crosses is believed to correspond to the absolute weight of a syllable. So, a word that has a syllable of weight 3 (three crosses) is less prominent than one with a syllable of weight 4. Let’s take

(78)  Layer 0   ×     ×     ×     ×     ×
                ǝ     sɪ    mǝ    leɪ   ʃn̩

We have five syllables. Some syllables get extra crosses. The

syllable [sɪ] carries primary stress in /assimilate/. Primary stress is

always marked in the lexicon, and this mark tells us that the

syllable must get a cross. Further, heavy syllables get an additional

cross.


A syllable counts as heavy in English if it has a coda or a

diphthong or long vowel. So, [leɪ] gets an extra cross. [ʃn̩] is not

heavy since the [n] is nuclear. So this is now the situation at Layer

1:

(79)  Layer 1         ×           ×
      Layer 0   ×     ×     ×     ×     ×
                ǝ     sɪ    mǝ    leɪ   ʃn̩

Next, the nominalisation introduces main stress on the fourth

syllable. So this syllable gets main stress and is therefore assigned

another cross. The result is this:

(80)  Layer 2                     ×
      Layer 1         ×           ×
      Layer 0   ×     ×     ×     ×     ×
                ǝ     sɪ    mǝ    leɪ   ʃn̩

If larger units are considered, there are more cycles. The word

/maintain/ for example has this representation by itself:

(81)  Layer 2         ×
      Layer 1   ×     ×
      Layer 0   ×     ×
                meɪn  teɪn

To get this representation, all we have to know is where the

primary stress falls. Both syllables are heavy and therefore get an

extra cross at Layer 1. Then the main syllable gets a cross at Layer

2. Now, if the two are put together, a decision must be made which

of the two words is more prominent. It is the second, and this is

therefore what we get:


(82)  Layer 3                                       ×
      Layer 2         ×                             ×
      Layer 1   ×     ×           ×                 ×
      Layer 0   ×     ×     ×     ×     ×     ×     ×
                meɪn  teɪn  ǝ     sɪ    mǝ    leɪ   ʃn̩
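The cross-assignment procedure for a single word can be summarized in a short sketch (an editorial illustration; the syllable spellings are simplified, and the higher layers for combined words such as (82) are not included):

    # Layer 0: one cross per syllable; Layer 1: heavy or lexically stressed
    # syllables; Layer 2: the syllable carrying main stress after morphology.
    def stress_grid(syllables, heavy, lexical_stress, main_stress):
        """Return the number of crosses per syllable (the column heights)."""
        grid = [1] * len(syllables)              # Layer 0
        for i in range(len(syllables)):
            if i in heavy or i == lexical_stress:
                grid[i] += 1                     # Layer 1
            if i == main_stress:
                grid[i] += 1                     # Layer 2
        return grid

    # /assimilation/: 'leI' is heavy (diphthong), 'sI' is lexically stressed
    # (as in /assimilate/), and -ation puts main stress on 'leI'.
    print(stress_grid(["@", "sI", "m@", "leI", "sn"],
                      heavy={3}, lexical_stress=1, main_stress=3))
    # [1, 2, 1, 3, 1]  -- the column heights in (80)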

Notice that the stress is governed by a number of heterogeneous

factors. The first is the weight of the syllable; this decides about

Layer 1 stress. Then there is the position of the main stress (which

in English must to a large extent be learned - equivalently, it must

explicitly be given in the representation, unlike syllable structure).

Third, it depends on the way in which the word is embedded into

larger units (so syntactic criteria play a role here). Also,

morphological formation rules can change the location of the main

stress! For example, the suffix -(a)tion attracts stress ([kǝmˈbaɪn] and [ˌkǝmbɪˈneɪʃn̩]), as does the suffix -ee (as in /employee/), but -ment does not ([ˈgavɚn] and [ˈgavɚnment]). The suffix -al does move the accent without attracting it ([ˈænԑkdot] versus [ˌænԑkˈdotǝl]).

Finally, we mention a problem concerning the

representations that keeps coming up. It is said that certain

syllables cannot receive stress because they contain a vowel that

cannot be stressed (for example, schwa: [ǝ]). On the other hand, we

can also say that a vowel is schwa because it is unstressed. Take,

for example, the pair [ˈɹiǝlaɪz] and [ˌɹiǝlɪˈzeɪʃn̩]. When the stress

shifts, the realisation of /i/ changes. So, is it rather the stress that

changes and makes the vowel change quality or does the vowel

change and make the stress shift? Often, these problems find no

satisfactory answer.


In this particular example it seems that the stress shift is

first, and it induces the vowel change. It is known that unstressed

vowels undergo reduction in time. The reason why French stress is

always on the last syllable is because it inherited the stress pattern

from Latin, but the syllables following the stressed syllable

eventually got lost. Here the stress was given and it drove the

development.


1. CONCLUSION

Modeling the phonological construction of speech is a critical issue in speech recognition. In this paper, we report our recent development of an overlapping-feature-based phonological model that represents long-span contextual

dependency in speech acoustics. In this model, high-level linguistic constraints are

incorporated in automatic construction of the patterns of feature overlapping and

of the hidden Markov model (HMM) states induced by such patterns. The main

linguistic information explored includes groups of sounds, universal syllable structure, the internal structure of the syllable, nuclear vowels, and word stress. A consistent computational framework developed for the construction of the feature-based model is presented, and the major components of the model are described.

Experimental results on the use of the overlapping-feature model in an HMM-

based system for speech recognition show improvements over the conventional

triphone-based phonological model.

2. SUGGESTION

The aim of this study is that all readers be able to explore phonological construction comprehensibly, and to examine whether sentence comprehension is affected by limitations of phonological construction and, if so, which types of sentences are affected. More specifically, readers are encouraged to develop the sentences discussed here further, so that students remain motivated to participate.


REFERENCES

Hulst, H. van der. 2003. Cognitive Phonology. University of Connecticut.

Nathan, Geoffrey S. 2008. Phonology: A Cognitive Grammar Introduction (Cognitive Linguistics in Practice [CLiP] 3). Amsterdam & Philadelphia: John Benjamins.

Goldrick, Matthew. 2016. The Role of Abstraction in Constructing Phonological Structure. Northwestern University, Evanston, IL 60208, USA.

Kracht, Marcus. Introduction to Linguistics. Los Angeles, CA 90095-1543.