
Analysis of voice impairment in aphasia after stroke-underlying neuroanatomical substrates

Mile Vuković a,⇑, Radmila Sujić b, Mirjana Petrović-Lazić a, Nick Miller c, Dejan Milutinović d, Snežana Babac b, Irena Vuković a

a University of Belgrade, Faculty of Special Education and Rehabilitation, Belgrade, Serbia
b "Zvezdara" Clinical and Hospital Centre, Belgrade, Serbia
c Institute of Health and Society, Speech Language Sciences, University of Newcastle, UK
d Galenika Pharmaceuticals R&D Institute, Belgrade, Serbia


Article history: Accepted 24 June 2012; Available online 3 August 2012

Keywords: Voice; Phonation; Aphasia; Motor speech disorders; Acoustic; Perceptual

0093-934X/$ - see front matter © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.bandl.2012.06.008

⇑ Corresponding author. Address: University of Belgrade, Faculty of Special Education and Rehabilitation, Visokog Stevana 2, Belgrade, Serbia. E-mail addresses: [email protected], milevuk@open.telekom.rs (M. Vuković).

Abstract

Phonation is a fundamental feature of human communication. Control of phonation in the context of speech-language disturbances has traditionally been considered a characteristic of lesions to subcortical structures and pathways. Evidence suggests, however, that cortical lesions may also implicate phonation.

We carried out acoustic and perceptual analyses of the phonation of /a/ in 60 males with aphasia (20 Wernicke's, 20 Broca's, 20 subcortical aphasia) and 20 males matched in age with no neurological or speech-language disturbances.

All groups with aphasia were significantly more impaired on the majority of acoustic and perceptual measures as compared with the control speakers. Within the subjects with aphasia, subjects with subcortical aphasia were more impaired on most measures compared to subjects with Broca's aphasia, and they, in turn, were more impaired than those with Wernicke's aphasia.

Lesions in regions involved in sound production–perception result in dysfunction of the entire neurocognitive system of articulation–phonological language processing.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

The standard source-filter model of vocal production assumes the existence of a vocal "source" and subsequent filtering of the source sound. Sub-glottal pressure from the lungs pushes air through the approximated vocal cords to create vibration in the airstream. The ensuing vocal note is "filtered" through a series of approximated or occluded articulators in the oral and nasal cavities, to select certain resonant frequencies in that wave. Through this process, speech sounds are generated. Phonation encompasses segmental aspects of speech, suprasegmental processes in intonation (Ladd, 1996) and lexical tone (Yip, 2002). Whereas some consonants can be generated in a voiceless fashion, all vowels and most consonants require phonation. The majority of the speech stream, therefore, is phonated.
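For reference, the source-filter relation sketched above is commonly written in the frequency domain as the product of a glottal source spectrum, a vocal-tract transfer function and a lip-radiation characteristic. The formulation below is the standard textbook one, not an equation taken from this article.

```latex
% Standard frequency-domain statement of the source-filter model
% (textbook formulation, not an equation given in this article):
%   P(f) : radiated speech spectrum
%   S(f) : glottal source spectrum (vocal-fold vibration)
%   T(f) : vocal-tract transfer function (resonances selected by the articulators)
%   R(f) : radiation characteristic at the lips
P(f) = S(f)\, T(f)\, R(f)
```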

Phonation for speech is a learned motor activity, controlled by the central nervous system. Vocal learning can be defined as acquisition of the ability to modify the acoustic structure of produced sounds to create contrasts in keeping with the system of contrasts in the ambient language. Vocal learning is different from auditory learning, which is the ability to make associations with sounds heard, though clearly vocal learning depends upon auditory learning. Vocal learning gives humans the ability to imitate speech sounds heard individually and sequentially, and modify them through auditory feedback. This makes vocal learning one of the most critical behavioral substrates for spoken human language (Brown et al., 2009; D'Ausilio, Bufalari, et al., 2011).

Phonation is not peculiar to humans. Many animal species are capable of producing voice, but, in contrast with humans, they do not speak. While most, if not all, vertebrates are capable of auditory learning, few are capable of vocal learning. There are three distantly related groups of mammals (humans, bats, and cetaceans) and three distantly related groups of birds (parrots, hummingbirds, and songbirds) with the capacity for vocal learning (Janik & Slater, 1997; Nottebohm, 1972). There is evidence from recent studies supporting vocal learning in seals (Sanvito, Galimberti, & Miller, 2007) and elephants (Poole, Tyack, Stoeger-Horwath, & Watwood, 2005). However, it appears that only vocal learners (songbirds, parrots, hummingbirds, and humans) have telencephalic regions which control vocal behavior (Jarvis & Mello, 2000; Jürgens, 2002).

1.1. Brain areas involved in voice production

Learning to control voice and speech is gradual, but subsequently becomes automatic. Learning engages control across broad cerebral regions, such as prefrontal cortex, Broca's area, primary and associative auditory areas, and subcortical structures. Within subcortical regions the basal ganglia (caudate, putamen, globus pallidus, substantia nigra, subthalamic nucleus, ventral tegmental area, and nucleus accumbens) play a role. The striatum (caudate and putamen) receives its major afferents from cortex (excitatory glutamatergic projections) and substantia nigra pars compacta (dopaminergic projections). The basal ganglia also influence the thalamus. The basal ganglia are known to be responsible for maintaining static muscle contraction, the underlying framework for voluntary skilled movements and the regulation of amplitude, velocity, and the initiation of movements (DeLong & Wichmann, 2009; Grillner, Hellgren, et al., 2005; Spencer & Rogers, 2005). Damage to the basal ganglia may result in hypokinetic or hyperkinetic changes to speech and voice.

The caudate nucleus gathers and integrates data from the associative cortex (Anderson et al., 2004). It is responsible for acquisition of primitive motor activities. When voice skills are sufficiently developed, they become automatic, and the putamen, which controls automatic voice production, is subsequently engaged. The putamen and caudate receive massive input from the cortex and perform further amplification of activated signals, while suppressing weaker ones. In this way signals are amplified and noise is reduced in favor of central information. The processed information is then transferred to cortical areas via non-specific and specific thalamic nuclei. Cortical structures are developed through further use, to achieve immediate and accurate integration and analysis of various elements of voice and speech. In this sense, development of vocalization is paralleled by development of the central structures which control voice production.

Increasing empirical evidence from studies of voice impairment associated with lesions of specific structures opens new opportunities to correlate anatomic structures with (aspects of) voice production more clearly. Voice disorders may be caused by disturbed phonation, due in turn to morphological changes in the larynx (congenital malformations, traumatic, inflammatory or neoplastic conditions) but also to some neurological diseases. Amongst neurological diseases, voice disorders have been studied extensively in Parkinson's disease, where patients present with weak, hoarse and/or monotonous voice (Liotti, Ramig, et al., 2003; Sewall, Jiang, & Ford, 2006). Voice characteristics are altered in Huntington's disease (HD) (Velasco García, Cobeta, et al., 2011). Multiple sclerosis (MS) also features several acoustic and perceptual anomalies of voice (Feijo et al., 2004; Hartelius, Buder, & Strand, 1997; Merson & Rolnick, 1998).

1.2. Voice production in stroke

Stroke is also associated with disruption to vocal cord functioning. This is typically associated with brainstem or other subcortical lesions (upper motor neuron, pontine and cerebellar lesions) (Duffy & Folger, 1996; Thompson & Murdoch, 1995). Changes in voice quality are reported both in bilateral and unilateral lesions, though some prosodic elements can be more impaired in left-sided lesions (Urban et al., 2006). Rigueiro-Veloso, Pego-Reigosa, Branas-Fernández, Martínez-Vázquez, and Cortés-Laino (1997) found that dysphonia was the most common symptom in 25 patients with Wallenberg's syndrome. Merati et al. (2005) described isolated vocal fold paralysis as an uncommon manifestation of stroke.

As is apparent from the above, whilst acoustic characteristics of voice have been extensively studied in subcortical and degenerative neurological conditions, in keeping with the designation of aphasia as a linguistic breakdown in which voice plays no part, voice status in aphasia has received only sporadic attention. Nevertheless, clinical observations and patient and relative reports indicate that people with aphasia may experience altered voice quality or even complete loss of ability to vocalize (Baum & Kim, 1997; Hoole, Schroter Morasch, & Ziegler, 1997; Katz, 1988; Kurowski, Hazen, & Blumstein, 2003; Marshall, Gandour, et al., 1988; Sieron, Westphal, et al., 1995).

Given the indication that aphasia may be associated with changes to voice production and control, the aim of this study was to further elucidate the acoustic voice parameters of patients with aphasias associated with cortical and subcortical lesions in the left hemisphere. Specifically, we aimed to analyze the acoustic characteristics of voice in relation to differing sites of brain lesions in people with aphasia and thereby address the basis of phonation disorders in different aphasia types. In that way we intend to contribute to discussion on the cognitive mechanisms and anatomical systems which may underlie the control of phonation and speech production.

2. Method

2.1. Participants

Sixty male, right-handed patients with aphasia aged from 33 to 61 years (mean: 54.9) joined the study following their voluntary informed consent, obtained according to hospital ethics committee agreed procedures (based on the Helsinki Declaration).

Inclusion was limited to those who had suffered a single cerebrovascular accident resulting in a localized infarct, which was verified on CT or MRI; were at least 3 months post onset; and had hearing within normal limits for their age (assessed using pure tone audiometry). Potential participants were not considered if they had any laryngeal pathology (assessed via indirect laryngoscopy); demonstrated oral cavity anomalies; or had prior speech or neurological impairments. Patients with dysarthria and apraxia of speech were excluded based on oral motor assessments and apraxia batteries (Dabul, 2000; Wertz, LaPointe, & Rosenbek, 1984). All participants were native speakers of Serbian and had no particular musical education. Bearing in mind that smoking can influence voice, all participants in the aphasic and control groups were non-smokers. Participants were divided into three groups according to type of aphasia: Wernicke's, Broca's and subcortical aphasia. Aphasia was diagnosed on the basis of results of the Boston Diagnostic Aphasia Examination (Goodglass & Kaplan, 1983).

The control group comprised twenty males aged 34–61 years (mean 55) with no vocal complaints, laryngeal pathology, hearing disorder, any kind of communication disturbance, or history of neurological damage. All were native speakers of Serbian, non-smokers, and had no particular musical education.

2.2. Procedure

All participants were assessed to determine the presence and type of aphasia. They were then examined by a laryngologist to rule out evidence of laryngeal pathology. Following these examinations a voice recording was made. Subjects were seated in a quiet room. The microphone (Sennheiser E825S) was placed 5.0 cm from the speaker's mouth. Participants were instructed to sustain the vowel /a/ at their optimal pitch level several times. Before the voice recording commenced, all patients, including those with Wernicke's aphasia, understood the instructions completely.

As individuals became used to this and vocal production was relaxed and habitual, the recording started. Each individual repeated the same sustained vowel /a/ at their habitual pitch and loudness level at least three times. This was recorded digitally and saved to file. The token with the median Fo (fundamental frequency) value was taken for later analysis.
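As an illustration only (the study used the Kay Elemetrics MDVP/CSL system, not custom code), the token-selection step, keeping the repetition whose estimated Fo is the median across the recorded tokens, could be sketched in Python with a crude autocorrelation-based Fo estimate; the function names below are hypothetical.

```python
import numpy as np

def estimate_f0_autocorr(signal, sr, fmin=60.0, fmax=400.0):
    """Rough autocorrelation-based F0 estimate for a sustained vowel.
    Illustrative only; not the algorithm used by the MDVP software."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, lags >= 0
    lag_min = int(sr / fmax)                           # shortest plausible period (samples)
    lag_max = int(sr / fmin)                           # longest plausible period (samples)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return sr / lag

def pick_median_f0_token(tokens, sr):
    """Return the repetition whose F0 estimate is the median across tokens."""
    f0s = [estimate_f0_autocorr(t, sr) for t in tokens]
    median_index = int(np.argsort(f0s)[len(f0s) // 2])
    return tokens[median_index]
```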

Acoustic analyses were conducted employing the Multi-Dimensional Voice Program (Model 4300, Kay Elemetrics Corp., Lincoln Park, NJ) and Computerized Speech Lab hardware. The following voice parameters were examined, taking the period from onset to offset of phonation:


Table 1. Age of the aphasic groups and control subjects.

                     Control   Broca   Wernicke   Subcortical
Sample size (n)      20        20      20         20
Age (years), mean    55.0      54.8    55.42      53.42
Std. dev.            4.06      4.75    4.62       4.40

Table 2. Site of lesion (brain CT or MRI).

Broca's aphasia. The inclusion criteria for the patients with Broca's aphasia were lesions of the precentral cortical regions that comprised typical Broca's area (BA44) and surrounding areas (BA45, BA6, BA12, BA46, BA47). Exclusion criteria were the presence of multiple subcortical lacunas and infarcts of subcortical areas, and/or lesion of the postcentral cortical regions. The distribution of main lesions in Broca's aphasics was as follows:
- 7 patients with lesions in BA44, BA45
- 3 patients with lesions in BA44, BA45, BA12, BA6
- 5 patients with lesions in BA44, BA45, BA47
- 3 patients with lesions in BA45, BA46, BA47
- 1 patient with lesions in BA6, BA44
- 1 patient with lesions in BA45, BA47

Wernicke's aphasia. Inclusion criteria for the patients with Wernicke's aphasia were lesions of the postcentral cortical regions that comprised typical Wernicke's area (BA22) and surrounding areas (BA42, BA40, BA39, BA37). Exclusion criteria were the presence of multiple subcortical lacunes and infarcts of subcortical areas, and/or lesion of the precentral cortical regions. The distribution of main lesions in Wernicke's aphasics was as follows:
- 8 patients with lesions in BA22, BA42
- 6 patients with lesions in BA22, BA37, BA39
- 6 patients with lesions in BA22, BA40, BA39

Subcortical aphasia. Inclusion criteria for the patients with subcortical aphasia were lesions which included the putamen, thalamus, globus pallidus, head of caudate, periventricular white matter, capsula externa and capsula interna. Exclusion criteria were the presence of infarcts of cortical areas. The distribution of main lesions in patients with subcortical aphasia was as follows:
- 6 patients with lesions in periventricular white matter
- 3 patients with lesions of the head of caudate
- 3 patients with lesions of the putamen
- 2 patients with lesions of the thalamus
- 4 patients with lesions of the capsula interna
- 1 patient with a lesion of the globus pallidus
- 1 patient with a lesion of the capsula externa


1. Frequency Alteration Parameters: Fo – Average Fundamental Frequency (Hz), the average of all extracted period-to-period fundamental frequency values; this parameter is a measure of the basic frequency. Jita – Absolute Jitter (ms); this parameter represents period-to-period variability of the basic frequency. Jitt – Jitter Percent (%); a relative measure of short-lasting cyclic voice irregularities, expressed in percent. RAP – Relative Average Perturbation (%); a relative estimate of the period-to-period variability of the pitch within the analyzed voice sample, with a smoothing factor of 3 periods; it defines short perturbations of the basic frequency period. PPQ – Pitch Perturbation Quotient (%); indicates the perturbation coefficient of the basic frequency period and reflects a short-lasting (cycle-to-cycle, with a 5-period equalization factor) irregularity of this period.

2. Amplitude Alteration Parameters: APQ – Amplitude Perturbation Quotient (%); a relative measure of the period-to-period variability of the peak-to-peak amplitude within the analyzed voice sample, with smoothing over 11 periods; this parameter reflects short-lasting amplitude disturbances well. ShdB – Shimmer in dB; related to irregular intensity, i.e., variation of the voice signal amplitude; it is measured during maximum vocal phonation, and its value is expressed in decibels. Shim – Shimmer Percent (%); also reflects irregularity of intensity, i.e., voice signal amplitude variation; it is sensitive to amplitude variations occurring between successive peak periods.

3. Noise and tremor estimate parameters: NHR – Noise-to-Harmonic Ratio; an average quotient of the noise spectral energy to the harmonic spectral energy in the frequency range 70–4200 Hz; it is essentially a general estimate of the noise present in the analyzed signal. VTI – Voice Turbulence Index; measures the relative level of inharmonic high-frequency energy and mostly correlates with turbulence caused by incomplete closure of the vocal cords or their looseness. (Simplified versions of several of these measures are sketched in the code example below.)
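To make these definitions concrete, the sketch below computes simplified textbook versions of three of the measures (Jitt, RAP and ShdB) from sequences of extracted cycle periods and peak-to-peak amplitudes. The MDVP implementations may differ in smoothing and other details, so this is illustrative rather than a reproduction of the software used in the study.

```python
import numpy as np

def jitter_percent(periods):
    """Jitt (%): mean absolute difference between consecutive glottal periods,
    relative to the mean period (simplified textbook definition)."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)

def rap_percent(periods):
    """RAP (%): deviation of each period from a 3-period moving average,
    relative to the mean period (simplified Relative Average Perturbation)."""
    p = np.asarray(periods, dtype=float)
    smoothed = (p[:-2] + p[1:-1] + p[2:]) / 3.0
    return 100.0 * np.mean(np.abs(p[1:-1] - smoothed)) / np.mean(p)

def shimmer_db(amplitudes):
    """ShdB: mean absolute difference, in dB, between consecutive peak-to-peak
    amplitudes (simplified definition)."""
    a = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(20.0 * np.log10(a[1:] / a[:-1])))
```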

Voice recordings were also perceptually evaluated. The tokens employed for acoustic analysis were heard by two speech-language pathologists experienced in perceptual voice analysis and blind to speaker group, but with information on gender and age. A shortened version of the GRBAS scale, i.e., the GRB scale, consisting of G (grade), R (roughness), and B (breathiness) factors, was used, with raters working independently of each other. These features were assessed on a four-point scale (0 = normal/absent, 1 = slight, 2 = moderate, 3 = severe).

2.3. Statistical analysis

The significance of age differences was determined using the t-test for independent samples. Acoustic parameters were compared using the non-parametric Mann–Whitney test (Z statistic). The choice of the non-parametric Mann–Whitney test was based upon the lack of an assumption of normal distribution of the acoustic parameters in the underlying studied population. The chi-square test was used for comparison of categorical variables across groups for the perceptual voice evaluation. At this stage no corrections of the significance level (e.g. Bonferroni correction) were made for pairwise comparisons, since the study was exploratory.
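A minimal sketch of these comparisons, assuming SciPy (the statistics software actually used is not stated in the paper), might look as follows; the numeric arrays are illustrative, except for the GRB counts, which are taken from Table 7.

```python
from scipy.stats import mannwhitneyu, chi2_contingency

# Illustrative values for one acoustic parameter (e.g. Jitt %) in two groups;
# the real per-subject data are not published in the article.
jitt_broca = [6.1, 7.0, 5.8, 6.9, 6.5]
jitt_control = [0.4, 0.6, 0.5, 0.7, 0.5]

# Two-sided Mann-Whitney U test, as used for the acoustic parameters.
u_stat, p_value = mannwhitneyu(jitt_broca, jitt_control, alternative="two-sided")

# Chi-square test on a contingency table of perceptual ratings
# (rows: groups, columns: rating categories 0-3).
# Counts taken from Table 7 (G ratings, control vs. Broca's aphasia).
grb_counts = [[19, 1, 0, 0],
              [7, 8, 3, 2]]
chi2, p, dof, expected = chi2_contingency(grb_counts)
```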

3. Results

3.1. Demographic details

Details of the participant groups appear in Table 1. There were no differences in age between the groups with Broca's and Wernicke's aphasia (t = -0.193; df = 14; p = .203), Broca's and subcortical aphasia (t = -0.299; p = .097), Wernicke's and subcortical aphasia (t = 1.944; p = .876), Broca's and the control group (t = -0.500; p = .146), Wernicke's and the control group (t = -1.589; p = .287), or between those with subcortical aphasia and the control group (t = -1.200; p = .123).

Table 2 indicates the sites of lesion associated with the different aphasia types. No subcortical extension of the lesions for patients with Broca's and Wernicke's aphasia was discerned.

3.2. Acoustic voice measures

Tables 3–5 display the results for the voice measures and comparisons between the different groups. Results show statistically significant or highly statistically significant differences between the different groups of people with aphasia and individuals in the control group.

Differences between people with Broca's aphasia and the control group, as well as between those with subcortical aphasia and the control group, were greater than between those with Wernicke's aphasia and the control group.


Table 3. Mean values of voice parameters in subjects with Wernicke's, Broca's and subcortical aphasia compared with the control group (Z: Mann–Whitney test vs. the control group).

Parameter   Control (n=20)     Wernicke's aphasia (n=20)     Broca's aphasia (n=20)        Subcortical aphasia (n=20)
            Mean     SD        Mean     SD      Z            Mean     SD      Z            Mean     SD      Z
Fo          127.08   10.74     162.67   28.78   -3.173*      182.570  28.78   -3.418**     187.335  30.12   -3.250**
Jita        45.56    5.02      82.05    12.29   -3.873**     102.051  23.78   -3.416**     157.032  12.05   -3.156**
Jitt%       0.56     0.44      3.59     2.98    -3.281*      6.591    4.98    -3.428**     8.292    0.52    -3.240**
RAP%        0.80     0.56      4.40     2.90    -3.879**     6.404    2.90    -3.426**     8.945    0.29    -3.233**
PPQ%        0.96     1.63      5.41     3.34    -3.884**     7.480    3.40    -3.420**     3.840    0.46    -3.140*
ShdB        0.16     0.20      1.46     0.26    -3.883**     2.984    0.26    -3.418**     3.185    0.10    -3.154**
Shim        1.209    1.49      6.94     0.95    -3.277*      8.944    1.95    -3.320**     11.960   0.79    -3.340**
APQ         1.489    2.07      7.75     1.65    -3.871**     9.747    1.65    -3.420**     11.784   1.14    -3.708**
NHR         0.270    0.14      0.45     0.16    -3.545**     1.451    1.16    -3.383**     2.264    1.16    -3.354**
VTI         0.090    0.13      0.22     0.38    -3.412*      0.590    0.16    -3.812**     1.108    0.26    -3.720**

* p < 0.05. ** p < 0.001.

Table 4. Mean values of voice parameters in subjects with Broca's and subcortical aphasia compared with the Wernicke's aphasia group (Z: Mann–Whitney test vs. the Wernicke's aphasia group).

Parameter   Wernicke's aphasia (n=20)   Broca's aphasia (n=20)        Subcortical aphasia (n=20)
            Mean     SD                 Mean     SD      Z            Mean     SD      Z
Fo          162.678  28.78              182.570  28.78   -1.490       187.335  30.12   -3.250**
Jita        82.051   12.29              102.051  23.78   -2.487*      157.032  12.05   -3.156**
Jitt%       3.591    2.98               6.591    4.98    -3.505**     8.292    0.52    -3.240**
RAP%        4.404    2.90               6.404    2.90    -2.591*      8.945    0.29    -3.233**
PPQ%        5.408    3.34               7.480    3.40    -2.228*      3.840    0.46    -3.140*
ShdB        1.464    0.26               2.984    0.26    -2.230*      3.185    0.10    -3.154**
Shim        6.944    0.95               8.944    1.95    -2.041*      11.960   0.79    -3.340**
APQ         7.75     1.65               9.747    1.65    -1.859*      11.784   1.14    -3.708**
NHR         0.451    0.16               1.451    1.16    -2.521*      2.264    1.16    -3.354**
VTI         0.218    0.38               0.590    0.16    -2.153*      1.108    0.26    -3.720**

* p < 0.05. ** p < 0.001.

Table 5. Mean values of voice parameters in speakers with Broca's aphasia compared with the subcortical aphasia group.

Parameter   Broca's aphasia (n=20)   Subcortical aphasia (n=20)
            Mean     SD              Mean     SD      Mann–Whitney Z
Fo          182.570  28.78           187.335  30.12   -1.874
Jita        102.051  23.78           157.032  12.05   -1.227
Jitt%       6.591    4.98            8.292    0.52    -3.428**
RAP%        6.404    2.90            8.945    0.29    -3.359**
PPQ%        7.480    3.40            3.840    0.46    -3.358**
ShdB        2.984    0.26            3.185    0.10    -3.556**
Shim        8.944    1.95            11.960   0.79    -3.750**
APQ         9.747    1.65            11.784   1.14    -3.617**
NHR         1.451    1.16            2.264    1.16    -3.876**
VTI         0.590    0.16            1.108    0.26    -0.809*

* p < 0.05. ** p < 0.001.


These data suggest more significant changes in voice quality in aphasia caused by anterior cortical lesions and in subcortical aphasia than in aphasia due to lesions of posterior cortical areas.

Comparisons of results of patients with Broca's vs. Wernicke's aphasia show significant differences in all voice parameters except for Fo. In other words, patients with Broca's aphasia have significantly more altered voice quality than people with Wernicke's aphasia.

Comparison of results for Broca's vs. subcortical aphasia indicated highly significant differences for most parameters. Significance was not found for values of Average Fundamental Frequency (Fo) and Absolute Jitter.

3.3. Perceptual voice evaluation

Perceptual voice evaluation, according to the GRB scale, revealed significantly (p < 0.01) poorer ratings of G, R and B in participants with Broca's and subcortical aphasia compared to the speakers with Wernicke's aphasia (Table 6). Statistical comparison of the perceptual analysis parameters showed significant (p < 0.01) differences for all three aphasic subject groups compared with the control group on the tested parameters (Table 7).

4. Discussion

The main purpose of this study was to assess for and compare possible differences in acoustic characteristics of voice in speakers with different types of aphasia.


Table 6. Perceptual voice evaluation (GRB scale) in patients with Broca's and subcortical aphasia compared to those with Wernicke's aphasia (number of patients per rating, 0–3).

      Wernicke's aphasia    Broca's aphasia               Subcortical aphasia
      0   1   2   3         0   1   2   3    p            0   1   2   3    p
G     10  8   2   0         7   8   3   2    p < 0.01     0   10  8   2    p < 0.01
R     10  9   1   0         2   8   8   2    p < 0.01     0   8   10  2    p < 0.01
B     14  5   1   0         5   12  2   1    p < 0.01     4   9   5   2    p < 0.01

Table 7. Perceptual voice evaluation (GRB scale) in patients with Wernicke's aphasia, patients with Broca's aphasia and patients with subcortical aphasia compared to the control group (number of participants per rating, 0–3).

      Control group     Wernicke's aphasia       Broca's aphasia          Subcortical aphasia
      0   1   2   3     0   1   2   3   p        0   1   2   3   p        0   1   2   3   p
G     19  1   0   0     10  8   2   0   <0.01    7   8   3   2   <0.01    0   10  8   2   <0.01
R     19  1   0   0     10  9   1   0   <0.01    2   8   8   2   <0.01    0   8   10  2   <0.01
B     20  0   0   0     14  5   1   0   <0.01    5   12  2   1   <0.01    4   9   5   2   <0.01


Our results show that vocal parameters are impaired in all groups with aphasia that we examined. Higher mean values for all parameters were present in participants with aphasia compared to speakers without stroke. Changes appeared greatest in association with subcortical aphasia, less in Broca's aphasia, and least in association with Wernicke's aphasia.

Increased values of jitter and shimmer and of the other studied frequency and amplitude estimate parameters are evidence of disturbed periodicity of vocal fold vibrations and are associated with perception of dysphonia (Titze & Liang, 1993). Differences in values for the harmonic-to-noise ratio compared with controls suggest that there is an insufficiency of glottal closure. Both these findings are suggestive of laryngeal pathology. Considering that we excluded anyone with evidence of laryngeal pathology, one can assume the observed differences are linked to lesions of cortical and subcortical structures which are traditionally thought of as primarily involved in speech and/or language processing. This finding implicitly suggests that purely motor-mechanical bucco-pharyngeal activity depends on the intactness of certain cortical and subcortical structures or networks, and that, given the differences between the groups with aphasia, these different areas may play different roles, or contribute to different degrees, in these activities.

In a way the least surprising finding is that voice changes should be found with subcortical lesions, where the alterations to muscle tone, power and coordination commonly found in lesions of these areas impact vocal cord stiffness and resistance and the coordination between respiration and phonation. Lesions of subcortical structures (thalamus, striatum, globus pallidus) have, however, also been linked to speech disorders.

In the current study we excluded people with dysarthria or apraxia of speech, and the 20 males with subcortical lesions here had only aphasia. Aphasia was manifested in various combinations of symptoms, with the dominant symptoms being those of language production. Significantly greater impairments of acoustic voice parameters in these patients compared with the other aphasia groups indicate that subcortical nuclei and their interconnections play an important role in the phonation process.

Although aphasias associated with subcortical lesions have not been extensively studied compared to aphasia from cortical damage, empirical data show that they occur with lesions of subcortical nuclei and white matter. There are aphasias associated with lesions of the globus pallidus (Strub, 1989). The fact that this can occur suggests some link with a striatal vocal area in humans. The globus pallidus can also show activation during speaking (Wise, Greene, Buchel, & Scott, 1999). It has also been demonstrated that damage to anterior portions of the human thalamus leads to aphasia (Graff-Radford, Damasio, Yamada, Eslinger, & Damasio, 1985). Thalamic lesions can lead to temporary muteness followed by aphasic deficits that are sometimes greater than after lesions to the anterior striatum or premotor cortex. This greater deficit may arise because there is further convergence of inputs from the striatum to the globus pallidus and then from the globus pallidus to the thalamus (Beiser, Hua, & Houk, 1997).

Functional neuroimaging and lesion studies have frequently reported thalamic and putamen activation during speech production. The putamen is more important for articulation than phonation (Brown et al., 2009). Many studies have shown activity in the putamen during lip movement, tongue movement, and voluntary swallowing (Corfield et al., 1999; Martin et al., 2004). One problem with this interpretation is that damage to the basal ganglia circuit gives rise to marked dysphonia (hypophonia; problems in initiation of phonation) in addition to articulatory problems (Merati et al., 2005). This would seem to suggest that the basal ganglia do play a direct role in processes underlying phonation. Activation in the left putamen and thalamus also increased when subjects monitored verbal output during syllable production (Riecker, Wildgruber, Dogil, Grodd, & Ackermann, 2002). Nevertheless, putamen and thalamic activation do not always co-occur, as illustrated by observations that increased speech production rate increases thalamus activation but not putamen activation (Riecker, Kassubek, Groschel, Grodd, & Ackermann, 2006; Riecker et al., 2005).

One must of course emphasize the importance of connections between cortical regions, particularly frontal regions, and subcortical structures for adequate phonation and speech production. This conclusion is based upon the fact that subcortical structures, besides their participation in procedural memory and the generation of voice patterns, lie at the intersection of numerous sensory, premotor and motor pathways, as well as pathways from limbic system structures.

Our results indicate that subjects with Broca's aphasia have significant voice impairment measured instrumentally compared with the control group, as well as compared to speakers with Wernicke's aphasia (except Fo). The more severe impairment of voice in patients with Broca's aphasia compared with those with Wernicke's aphasia can be explained by the fact that in Broca's aphasia lesions are in the area which represents the larynx and regulates the expiratory muscles (the "larynx motor cortex").

It has been demonstrated that people with anterior aphasia have deficits in laryngeal control for voicing (Baum, Blumstein, Naeser, & Palumbo, 1990), deficits of laryngeal timing or coordination with the supralaryngeal vocal tract, and lower than normal amplitudes of glottal excitation (Kurowski et al., 2003). These results suggest that the speech production disturbances of patients with anterior aphasia lie not at the 'higher' stages of speech production such as phoneme selection or planning, but rather in articulatory implementation, and relate to laryngeal control resulting in dysfunction in both attaining and sustaining normal amplitudes of glottal excitation. Classical approaches have generally characterized phonetic/articulatory impairments as occurring in patients with lesions involving both cortical and subcortical structures that are suprasylvian and anterior to the central sulcus (Damasio, 1991).

Earlier empirical evidence has demonstrated that the cortex regulates laryngeal activity, but only recent research has led to the characterization of a somatotopic representation of the larynx in the human motor cortex (Brown, Ngan, & Liotti, 2008). Related work has shown that this same general region contains a representation of the expiratory muscles as well (Loucks, Poletto, Simonyan, Reynolds, & Ludlow, 2007; Simonyan et al., 2007). This area is, actually, very similar to that which Murphy et al. (1997) associated with speech breathing. Recently, this area has been designated the "larynx motor cortex" (Brown et al., 2009). The two major components that comprise the vocal source hence appear to be in close proximity in the motor cortex, and this may reflect a unique cortical-level type of respiratory/phonatory coupling specific to human vocalization. For almost all other species, this coupling occurs in the brainstem alone (Jürgens, 2002). Given that fMRI studies showed that the larynx motor cortex was activated comparably by vocal and non-vocal laryngeal tasks (i.e., vocal-fold adduction alone), this area would seem a good candidate for being a regulator of complex human vocalizations such as speaking and singing (Brown et al., 2009).

Voice quality changes in the Wernicke's aphasia patients in our study suggest that posterior areas are also involved in the phonation process. Why speakers with Wernicke's aphasia should show voice disturbances is less obvious. Mapping of the neural circuits supporting the various language levels has been the object of extensive research. The superior temporal gyrus (STG) and superior temporal sulcus (STS) are critical for phonological processing, as indicated by evidence from a variety of sources (Binder et al., 1994; Hickok & Poeppel, 2007; Indefrey & Levelt, 2004). Although many authors consider this system to be strongly left dominant, both lesion and imaging evidence suggest a bilateral organization.

Data from Wada procedures (Hickok et al., 2008) indicate that the right hemisphere alone is capable of good auditory comprehension at the single word level, and that when errors occur they are more often semantic than phonological. Even acute deactivation of the entire left hemisphere in patients undergoing Wada procedures, which produces complete speech arrest, leaves speech sound perception relatively intact (phonemic error rate <10%). This observation confirms the finding that phonological processes in speech recognition are bilaterally organized. Research suggests speech production is extensively influenced by perception and that auditory inputs can produce rapid and automatic effects on speech production. For example, when a speaker hears an error in his speech, he almost immediately corrects it automatically (Burnett, Senner, & Larson, 1997; Houde & Jordan, 1998). Adult-onset deafness is associated with articulatory decline (Waldstein, 1989), indicating that proper auditory feedback is important in maintaining articulatory tuning.

Given this perspective one may speculate that the individuals with Wernicke's area lesions had difficulties in correcting voice production, i.e., they felt no need to correct their voice as they failed to notice deviations which were objectively confirmed. Of course this still fails to establish why there should be voice changes in the first place. All patients with Wernicke's aphasia included in this study had a temporal lobe lesion including the classical Wernicke's area, which was verified by CT or MRI. These lesions caused impairment of the bidirectional function of the "phonological loop", manifested in deficient word repetition on the BDAE, during which these patients often made phonemic paraphasias which they failed to notice (Vukovic, 2008, 2011).

An area termed Spt, the Sylvian parietal–temporal area, in the left posterior planum temporale region, which has been argued to support sensory-motor integration for speech, is included in the speech-related network. Area Spt responds during the perception as well as the production of speech (Buchsbaum, Hickok, & Humphries, 2001; Buchsbaum et al., 2005; Hickok, Buchsbaum, Humphries, & Muftuler, 2003). Area Spt provides a critical node in the proposed sensory-motor network for the vocal tract articulators. It is located in the planum temporale (PT) region in the left hemisphere. Since the left PT is larger than the right PT in most individuals, this region has long been associated with speech functions. The right PT responds better to tones than to speech (Binder, Frost, Hammeke, Rao, & Cox, 1996). Spt has distinct cell populations, some sensory-weighted and some motor-weighted. Spt activity is tightly correlated with activity in frontal speech-production related areas, such as the pars opercularis, BA44 (Buchsbaum et al., 2001). Moreover, cortex in the posterior portion of the planum temporale (area Spt) has a cytoarchitectonic structure similar to BA44. Spt is a sensory-motor integration area for vocal tract actions (Hickok, Okada, & Serences, 2009; Pa & Hickok, 2008), and is therefore placed in the context of a network of sensory-motor integration areas in the posterior parietal and temporal/parietal cortex, which receive multisensory input, supporting the notion that Spt performs sensory guidance of speech production, including voice.

The PT has also been found to be activated by a range of non-speech stimuli, including aspects of spatial hearing (Smith, Okada, Saberi, & Hickok, 2004; Warren & Griffiths, 2003; Warren, Zielinski, Green, Rauschecker, & Griffiths, 2002). These data indicate multi-functionality of the PT, but it is noteworthy that the PT is functionally segregated. Data suggest a clear separation between spatial hearing-related functions on the one hand and sensory-motor functions (i.e., Spt) on the other. In functional MRI studies that map both speech-related sensory-motor activity and spatial hearing activity within the same subjects, these activations have been found to be distinct, with sensory-motor function more posterior in the PT.

Partially overlapping, but also partially distinct, neural circuits are involved in the processing of acoustic information in speech recognition and speech production. Speech recognition is primarily based upon neural circuits in the superior temporal lobes bilaterally, whereas speech production (and related processes such as verbal short-term memory) relies on a fronto-parietal-temporal circuit that is left hemisphere dominant.

This divergence of processing streams is consistent with the fact that auditory/phonological information plays a role in (i) accessing lexical-semantic representations on the one hand and (ii) driving motor-speech articulation on the other. As lexical-semantic and motor-speech systems involve very different types of representations and processing mechanisms, it stands to reason that divergent pathways should underlie the interface with auditory/phonological networks. The Dual Stream Model represents this dual interface requirement with respect to auditory/phonological processing (Hickok & Poeppel, 2004, 2007). This model proposes that a ventral stream, which involves structures in the superior and middle portions of the temporal lobe, is involved in processing speech signals for comprehension, whereas a dorsal stream, which involves area Spt and the posterior frontal lobe, is involved in translating speech signals into articulatory representations in the frontal lobe (Scott & Johnsrude, 2003; Warren, Wise, & Warren, 2005; Wise et al., 2001). In that way, the dorsal stream is involved in mapping sound to articulation, and the ventral stream in mapping sound to meaning.

This also suggests that it is likely that a spatially-related processing system co-exists with the sensory-motor integration system, while being distinct from it. The model also proposes that the ventral stream is bilaterally organized, although with important computational differences between the two hemispheres. Thus, the ventral stream itself comprises parallel processing streams. This would explain why unilateral temporal lobe damage is not followed by a substantial speech recognition deficit. However, the dorsal stream is strongly left-dominant, and that explains why dorsal temporal and frontal lesions result in prominent production deficits. Sublexical repetition of speech is subserved by a dorsal pathway, connecting the superior temporal lobe and premotor cortices in the frontal lobe via the arcuate and superior longitudinal fasciculus.

By contrast, higher-level language comprehension is mediated by a ventral pathway connecting the middle temporal lobe and the ventrolateral prefrontal cortex via the external capsule. Consequently, the dorsal route, which was traditionally considered to be the major language pathway, is mainly engaged in sensory-motor mapping of sound to articulation, whereas linguistic processing of sound to meaning requires temporo-frontal interaction transmitted via the ventral route (Saur et al., 2008). Furthermore, this network is activated more when listening to speech than to noise (Zheng, Munhall, & Johnsrude, 2010). These facts speak in favor of a large-scale network architecture rather than a modular organization of language in the left hemisphere (Vigneau et al., 2006). The nature of the language deficits found in subjects with Wernicke's aphasia indicates a dysfunction of the ventral system. However, the acoustic changes of voice found in our study indicate the existence of possible dorsal system dysfunction in cases of Wernicke's aphasia, i.e., impairment of speech-phonation characteristics which are performed by the dorsal system.

5. Conclusions

The results above show that there is phonation impairment in anterior cortical (Broca's) aphasia, posterior cortical (Wernicke's) aphasia and subcortical aphasia. Accordingly, one may assume that a lesion of any of the areas resulting in aphasia may give rise to voice-production deficiency. That is, a lesion in regions involved in sound production–perception results in dysfunction of the entire neurocognitive system of articulation–phonological language processing. However, in keeping with a view that sees functional differentiation within the system, the nature of disruption is not equal at different points in the network. There are differences in severity, or likelihood of occurrence, of voice disturbances. Lesions of the input auditory pathway (Wernicke's area) impair voice production to the least extent, while impairment is more pronounced when lesions are in structures which regulate articulation (Broca's area) and even more so in association with subcortical lesions. However, we also argued that the underlying basis of voice dysfunction differs according to site of lesion.

The current study examined only production of a sustained /a:/ sound at habitual loudness. Deeper appreciation of the role of varying sites is likely to be gained if analyses encompass production of sustained /a:/ at varying pitch and loudness levels and in non-habitual modes (whispering; shouting). This will be the subject of later studies.

Acknowledgment

This research study was supported by the Ministry of Education and Science of the Republic of Serbia under Project No. 179068, "Evaluation of Treatment of Acquired Speech and Language Disorders".

References

Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111(4), 1036–1060.

Baum, S., Blumstein, S., Naeser, M., & Palumbo, C. (1990). Temporal dimensions of consonant and vowel production: An acoustic and CT scan analysis of aphasic speech. Brain and Language, 39, 33–56.

Baum, S. R., & Kim, J. A. (1997). Compensation for jaw fixation by aphasic patients. Brain and Language, 56, 354–376.

Beiser, D. G., Hua, S. E., & Houk, J. C. (1997). Network models of the basal ganglia [Review]. Current Opinion in Neurobiology, 7, 185–190.

Binder, J. R., Frost, J. A., Hammeke, T. A., Rao, S. M., & Cox, R. W. (1996). Function of the left planum temporale in auditory and linguistic processing. Brain, 119, 1239–1247.

Binder, J. R., Rao, S. M., Hammeke, T. A., Yetkin, F. Z., Frost, J. A., Bandettini, P. A., et al. (1994). Effects of stimulus rate on signal response during functional magnetic resonance imaging of auditory cortex. Cognitive Brain Research, 2(1), 31–38.

Brown, S., Laird, A. R., Pfordresher, P. Q., Thelen, S. M., Turkeltaub, P., & Liotti, M. (2009). The somatotopy of speech: Phonation and articulation in the human motor cortex. Brain and Cognition, 70(1), 31–41.

Brown, S., Ngan, E., & Liotti, M. (2008). A larynx area in the human motor cortex. Cerebral Cortex, 18, 837–845.

Buchsbaum, B., Hickok, G., & Humphries, C. (2001). Role of left posterior superior temporal gyrus in phonological processing for speech perception and production. Cognitive Science, 25, 663–678.

Buchsbaum, B. R., Olsen, R. K., Koch, P. F., Kohn, P., Kippenhan, J. S., & Berman, K. F. (2005). Reading, hearing, and the planum temporale. Neuroimage, 24, 444–454.

Burnett, T. A., Senner, J. E., & Larson, C. R. (1997). Voice F0 responses to pitch-shifted auditory feedback: A preliminary study. Journal of Voice, 11, 202–211.

Corfield, D. R., Murphy, K., Josephs, O., Fink, G. R., Frackowiak, R. S., Guz, A., et al. (1999). Cortical and subcortical control of tongue movement in humans: A functional neuroimaging study using fMRI. Journal of Applied Physiology, 86, 1468–1477.

Dabul, B. L. (2000). Apraxia battery for adults (2nd ed.) (ABA-2). Pro-Ed.

Damasio, H. (1991). Neuroanatomical correlates of the aphasias. In M. T. Sarno (Ed.), Acquired aphasia (2nd ed.). New York: Academic Press.

D'Ausilio, A., Bufalari, I., et al. (2011). Vocal pitch discrimination in the motor system. Brain and Language, 118(1–2), 9–14.

DeLong, M., & Wichmann, T. (2009). Update on models of basal ganglia function and dysfunction. Parkinsonism & Related Disorders, 15(Suppl. 3), S237–S240.

Duffy, J. R., & Folger, W. N. (1996). Dysarthria associated with unilateral central nervous system lesions: A retrospective study. Journal of Medical Speech-Language Pathology, 4, 57–70.

Feijo, A. V., Parente, M. A., Behlau, M., Haussen, S., De Veccino, M. C., & de Faria Martignago, B. C. (2004). Acoustic analysis of voice in multiple sclerosis patients. Journal of Voice, 18, 341–347.

Goodglass, H., & Kaplan, E. (1983). The assessment of aphasia and related disorders. Philadelphia: Lea & Febiger.

Graff-Radford, N. R., Damasio, H., Yamada, T., Eslinger, P. J., & Damasio, A. R. (1985). Nonhaemorrhagic thalamic infarction: Clinical, neuropsychological and electrophysiological findings in four anatomical groups defined by computerized tomography. Brain, 108, 485–516.

Grillner, S., Hellgren, J., et al. (2005). Mechanisms for selection of basic motor programs – roles for the striatum and pallidum. Trends in Neurosciences, 28(7), 364–370.

Hartelius, L., Buder, E. H., & Strand, E. A. (1997). Long-term phonatory instability in individuals with multiple sclerosis. Journal of Speech and Hearing Research, 40, 1056–1072.

Hickok, G., Buchsbaum, B., Humphries, C., & Muftuler, T. (2003). Auditory-motor interaction revealed by fMRI: Speech, music, and working memory in area Spt. Journal of Cognitive Neuroscience, 15, 673–682.

Hickok, G., Okada, K., Barr, W., Pa, J., Rogalsky, C., Donnelly, K., et al. (2008). Bilateral capacity for speech sound processing in auditory comprehension: Evidence from Wada procedures. Brain and Language, 107(3), 179–184.

Hickok, G., Okada, K., & Serences, J. T. (2009). Area Spt in the human planum temporale supports sensory-motor integration for speech processing. Journal of Neurophysiology, 101, 2725–2732.

Hickok, G., & Poeppel, D. (2004). Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition, 92, 67–99.

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402.

Hoole, P., Schroter Morasch, H., & Ziegler, W. (1997). Patterns of laryngeal apraxia in two patients with Broca's aphasia. Clinical Linguistics & Phonetics, 11(6), 429–442.

Houde, J. F., & Jordan, M. I. (1998). Sensorimotor adaptation in speech production. Science, 279, 1213–1216.

Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1–2), 101–144.

Janik, V. M., & Slater, P. J. B. (1997). Vocal learning in mammals. Advances in the Study of Behavior, 26, 59–99.

Jarvis, E. D., & Mello, C. V. (2000). Molecular mapping of brain areas involved in parrot vocal communication. Journal of Comparative Neurology, 419, 1–31.

Jürgens, U. (2002). Neural pathways underlying vocal control. Neuroscience & Biobehavioral Reviews, 26, 235–258.

Katz, W. F. (1988). Anticipatory coarticulation in aphasia: Acoustic and perceptual data. Brain and Language, 35, 340–368.

Kurowski, K., Hazen, E., & Blumstein, S. (2003). The nature of speech production impairments in anterior aphasics: An acoustic analysis of voicing in fricative consonants. Brain and Language, 84, 353–371.

Ladd, D. R. (1996). Intonational phonology. Cambridge: Cambridge University Press.

Liotti, M., Ramig, L. O., et al. (2003). Hypophonia in Parkinson's disease: Neural correlates of voice treatment revealed by PET. Neurology, 60(3), 432–440.

Loucks, T. M., Poletto, C. J., Simonyan, K., Reynolds, C. L., & Ludlow, C. L. (2007). Human brain activation during phonation and exhalation: Common volitional control for two upper airway functions. Neuroimage, 36(1), 131–143.

Marshall, R. C., Gandour, J., et al. (1988). Selective impairment of phonation: A case study. Brain and Language, 35(2), 313–339.

Martin, R. E., MacIntosh, B. J., Smith, R. C., Barr, A. M., Stevens, T. K., Gati, J. S., et al. (2004). Cerebral areas processing swallowing and tongue movement are overlapping but distinct: A functional magnetic resonance imaging study. Journal of Neurophysiology, 92, 2428–2443.

Merati, A., Heman-Ackah, Y. D., Abaza, M., Altman, K. W., Sulica, L., & Belamowicz, S. (2005). Common movement disorders affecting the larynx: A report from the Neurolaryngology Committee of the AAO-HNS. Otolaryngology – Head and Neck Surgery, 133, 654–665.

Merson, R. M., & Rolnick, M. I. (1998). Speech-language pathology and dysphagia in multiple sclerosis. Physical Medicine and Rehabilitation Clinics of North America, 9(3), 631–642.

Murphy, K., Corfield, D. R., Guz, A., Fink, G. R., Wise, R. J., Harrison, J., et al. (1997). Cerebral areas associated with motor control of speech in humans. Journal of Applied Physiology, 83, 1438–1447.

Nottebohm, F. (1972). The origins of vocal learning. The American Naturalist, 106, 116–140.

Pa, J., & Hickok, G. (2008). A parietal-temporal sensory-motor integration area for the human vocal tract: Evidence from an fMRI study of skilled musicians. Neuropsychologia, 46, 362–368.

Poole, J. H., Tyack, P. L., Stoeger-Horwath, A. S., & Watwood, S. (2005). Elephants are capable of vocal learning. Nature, 434, 455–456.

Riecker, A., Kassubek, J., Groschel, K., Grodd, W., & Ackermann, H. (2006). The cerebral control of speech tempo: Opposite relationship between speaking rate and BOLD signal changes at striatal and cerebellar structures. Neuroimage, 29, 46–53.

Riecker, A., Mathiak, K., Wildgruber, D., Erb, M., Hertrich, I., Grodd, W., et al. (2005). fMRI reveals two distinct cerebral networks subserving speech motor control. Neurology, 64, 700–706.

Riecker, A., Wildgruber, D., Dogil, G., Grodd, W., & Ackermann, H. (2002). Hemispheric lateralization effects of rhythm implementation during syllable repetitions: An fMRI study. Neuroimage, 16, 169–176.

Rigueiro-Veloso, M. T., Pego-Reigosa, R., Branas-Fernández, F., Martínez-Vázquez, F., & Cortés-Laino, J. A. (1997). Sindrome de Wallenberg: revision de 25 casos [Wallenberg's syndrome: A review of 25 cases]. Revista de Neurología, 25, 1561–1564.

Sanvito, S., Galimberti, F., & Miller, E. H. (2007). Vocal signalling in male southern elephant seals is honest but imprecise. Animal Behaviour, 73, 287–299.

Saur, D., Kreher, B. W., Schnell, S., Kümmerer, D., Kellmeyer, P., Vry, M. S., et al. (2008). Ventral and dorsal pathways for language. Proceedings of the National Academy of Sciences USA, 105, 18035–18040.

Scott, S. K., & Johnsrude, I. S. (2003). The neuroanatomical and functional organization of speech perception. Trends in Neurosciences, 26(2), 100–107.

Sewall, G. K., Jiang, J., & Ford, C. N. (2006). Clinical evaluation of Parkinson's-related dysphonia. The Laryngoscope, 116(10), 1740–1744.

Sieron, J. K. P., Westphal, K. P., et al. (1995). Apraxia of the larynx. Folia Phoniatrica et Logopaedica, 47(1), 33–38.

Simonyan, K., Saad, Z. S., Torrey, M., Loucks, J., Poletto, C. J., & Ludlow, C. L. (2007). Functional neuroanatomy of human voluntary cough and sniff production. Neuroimage, 37(2), 401–409.

Smith, K. R., Okada, K., Saberi, K., & Hickok, G. (2004). Human cortical auditory motion areas are not motion selective. NeuroReport, 15, 1523–1526.

Spencer, K. A., & Rogers, M. A. (2005). Speech motor programming in hypokinetic and ataxic dysarthria. Brain and Language, 94(3), 347–366.

Strub, R. L. (1989). Frontal lobe syndrome in a patient with bilateral globus pallidus lesions. Archives of Neurology, 46, 1024–1027.

Thompson, E. C., & Murdoch, B. E. (1995). Interpreting the physiological bases of dysarthria from perceptual analyses: An examination of subjects with UMN type dysarthria. Australian Journal of Human Communication Disorders, 23(1), 1–23.

Titze, I. R., & Liang, H. (1993). Comparison of Fo extraction methods for high-precision voice perturbation measurements. Journal of Speech and Hearing Research, 36(6), 1120.

Urban, P. P., Rolke, R., Wicht, S., Keilmann, A., Stoeter, P., Hopf, H. C., et al. (2006). Left-hemispheric dominance for articulation: A prospective study on acute ischaemic dysarthria at different localizations. Brain, 129(3), 767–777.

Velasco García, M. J., Cobeta, I., et al. (2011). Acoustic analysis of voice in Huntington's disease patients. Journal of Voice, 25(2), 208–217.

Vigneau, M., Beaucousin, V., Hervé, P. Y., Duffau, H., Crivello, F., Houdé, O., et al. (2006). Meta-analyzing left hemisphere language areas: Phonology, semantics, and sentence processing. Neuroimage, 30(4), 1414–1432.

Vukovic, M. (2008). Treatment of aphasia. Beograd: Univerzitet u Beogradu – Fakultet za specijalnu edukaciju i rehabilitaciju / University of Belgrade, Faculty of Special Education and Rehabilitation (in Serbian).

Vukovic, M. (2011). Aphasiology (3rd ed.). Beograd: Univerzitet u Beogradu – Fakultet za specijalnu edukaciju i rehabilitaciju (in Serbian).

Waldstein, R. (1989). Effects of postlingual deafness on speech production: Implications for the role of auditory feedback. Journal of the Acoustical Society of America, 99, 2099–2144.

Warren, J. D., & Griffiths, T. D. (2003). Distinct mechanisms for processing spatial sequences and pitch sequences in the human auditory brain. Journal of Neuroscience, 23(13), 5799–5804.

Warren, J. E., Wise, R. J., & Warren, J. D. (2005). Sounds do-able: Auditory-motor transformations and the posterior temporal plane. Trends in Neurosciences, 28(12), 636–643.

Warren, J. D., Zielinski, B. A., Green, G. G., Rauschecker, J. P., & Griffiths, T. D. (2002). Perception of sound-source motion by the human brain. Neuron, 34, 139–148.

Wertz, R. T., LaPointe, L. L., & Rosenbek, J. C. (1984). Apraxia of speech in adults: The disorder and its management. Orlando: Grune & Stratton.

Wise, R. J., Greene, J., Buchel, C., & Scott, S. K. (1999). Brain regions involved in articulation. Lancet, 353, 1057–1061.

Wise, R. J., Scott, S. K., Blank, S. C., Mummery, C. J., Murphy, K., & Warburton, E. A. (2001). Separate neural subsystems within Wernicke's area. Brain, 124, 83–95.

Yip, M. J. W. (2002). Tone. Cambridge: Cambridge University Press.

Zheng, Z. Z., Munhall, K. G., & Johnsrude, I. S. (2010). Functional overlap between regions involved in speech perception and in monitoring one's own voice during speech production. Journal of Cognitive Neuroscience, 22(8), 1770–1781.