
J Am Acad Audiol 18:358–379 (2007)

*Army Audiology and Speech Center, Walter Reed Army Medical Center, Washington, DC; †Auditory Research Laboratory, GN ReSound North America, Glenview, IL

Brian E. Walden, Ph.D., Army Audiology and Speech Center, Walter Reed Army Medical Center, 6900 Georgia Ave, NW, Washington, DC 20307-5001; Phone: 202-782-8601; Fax: 202-782-9227; E-mail: [email protected]

Parts of the work reported here were presented at AudiologyNOW!, April 2006, Minneapolis, MN.

This work was sponsored by GN Hearing Care Corporation, North America, Chicago, IL, through a Cooperative Research and Development Agreement with the Clinical Investigation Regulatory Office, United States Army Medical Department, Fort Sam Houston, TX.

The Robustness of Hearing Aid Microphone Preferences in Everyday Listening Environments

Brian E. Walden*, Rauna K. Surr*, Mary T. Cord*, Ken W. Grant*, Van Summers*, Andrew B. Dittberner†

Abstract

Automatic directionality algorithms currently implemented in hearing aids assume that hearing-impaired persons with similar hearing losses will prefer the same microphone processing mode in a specific everyday listening environment. The purpose of this study was to evaluate the robustness of microphone preferences in everyday listening. Two hearing-impaired persons made microphone preference judgments (omnidirectional preferred, directional preferred, no preference) in a variety of everyday listening situations. Simultaneously, these acoustic environments were recorded through the omnidirectional and directional microphone processing modes. The acoustic recordings were later presented in a laboratory setting for microphone preferences to the original two listeners and other listeners who differed in hearing ability and experience with directional microphone processing. The original two listeners were able to replicate their live microphone preferences in the laboratory with a high degree of accuracy. This suggests that the basis of the original live microphone preferences was largely represented in the acoustic recordings. Other hearing-impaired and normal-hearing participants who listened to the environmental recordings also accurately replicated the original live omnidirectional preferences; however, directional preferences were not as robust across the listeners. When the laboratory rating did not replicate the live directional microphone preference, listeners almost always expressed no preference for either microphone mode. Hence, a preference for omnidirectional processing was rarely expressed by any of the participants to recorded sites where directional processing had been preferred as a live judgment, and vice versa. These results are interpreted to provide little basis for customizing automatic directionality algorithms for individual patients. The implications of these findings for hearing aid design are discussed.

Key Words: Directional microphones, everyday listening environments, hearing aids, microphone preference

Abbreviations: AD = automatic directionality; BTE = behind the ear; DA = directional advantage; DIR = directional; DNR = digital noise reduction; HI = hearing impaired; IEEE = Institute of Electrical and Electronic Engineers; NH = normal hearing; OMNI = omnidirectional; SNR = signal-to-noise ratio; WDRC = wide dynamic range compression


Traditionally, hearing aids have been fit to the hearing loss of the patient, without regard to his or her everyday listening environments. Prescriptive formulas, for example, may use the patient's pure-tone thresholds and loudness judgments to calculate target gain and frequency response. Once set, these amplification parameters are applied to all listening. The advent of multimemory hearing aids allowed different amplification characteristics to be applied in different listening environments. Hence, one set of amplification characteristics might be used when listening to speech in quiet, another set in noisy listening situations, and a third when listening to music. However, such multimemory hearing aids require the user to switch manually among the various memories as the listening environment changes. Manually switchable omnidirectional/directional hearing aids are a version of this technology. Here, the user switches between the two microphone modes, depending upon whether omnidirectional or directional sound processing seems preferable.



One of the primary advantages of advanced-technology digital hearing aids is the potential for the signal processing to change with the acoustic characteristics of the listening situation. These devices are capable of automatically altering amplification characteristics based on an acoustic analysis of the listening environment. Among the earliest examples of this interactive technology, implemented in analog hearing aids, were output-limiting compression and adaptive filtering. Here, the gain and frequency response of the device could be altered based primarily on the input level. More recently, increasingly sophisticated signal processing algorithms have been developed that alter amplification parameters based on more complex analyses of the acoustic environment. Most of these algorithms are designed to deal with the interfering effects of background noise. Predominant among these are digital noise reduction (DNR) and automatic directionality (AD). In DNR, the gain is selectively reduced across the frequency spectrum when noise is present. In AD, omnidirectional or directional microphone processing is activated based on, among other factors, the presence of background noise in the environment. Although manufacturers have achieved some success with their DNR and AD circuitry, the efficacy of these automatic processing schemes, especially in everyday listening, has not been definitively established to date.
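The decision logic of commercial AD algorithms is proprietary and varies by manufacturer; purely as an illustration of the kind of rule such an algorithm might apply, the sketch below switches to directional processing when a hypothetical input-level estimate and signal-to-noise estimate suggest a noisy environment, with a small hysteresis to avoid rapid toggling. The thresholds, estimators, and hysteresis value are assumptions for illustration only, not any manufacturer's implementation.

```python
# Illustrative sketch of a simple automatic directionality (AD) decision rule.
# The thresholds and the level/SNR estimates are hypothetical; commercial
# algorithms use proprietary, more elaborate acoustic analyses.

def choose_microphone_mode(input_level_db: float,
                           estimated_snr_db: float,
                           current_mode: str = "OMNI",
                           level_threshold_db: float = 60.0,
                           snr_threshold_db: float = 10.0,
                           hysteresis_db: float = 3.0) -> str:
    """Return "DIR" when the environment looks noisy enough to favor
    directional processing, otherwise "OMNI". Hysteresis keeps the mode
    from toggling on small fluctuations."""
    if current_mode == "OMNI":
        # Switch to directional only when noise is clearly present.
        if (input_level_db > level_threshold_db and
                estimated_snr_db < snr_threshold_db):
            return "DIR"
        return "OMNI"
    else:
        # Switch back to omnidirectional only when the environment is
        # clearly quiet again (thresholds offset by the hysteresis).
        if (input_level_db < level_threshold_db - hysteresis_db or
                estimated_snr_db > snr_threshold_db + hysteresis_db):
            return "OMNI"
        return "DIR"

# Example: a 72 dB SPL input with an estimated 4 dB SNR would trigger DIR.
print(choose_microphone_mode(input_level_db=72.0, estimated_snr_db=4.0))
```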

Manufacturers have taken differing signal processing approaches to both DNR and AD. Algorithms differ from manufacturer to manufacturer depending on assumptions about the optimal amplification characteristics in a particular listening environment, as well as the capabilities of the signal processors. For the most part, however, these algorithms assume that the preferred signal processing in a particular acoustic environment is relatively constant across hearing-impaired (HI) listeners. For example, AD algorithms that switch microphone modes (based on an acoustic analysis of the listening environment) assume that directional processing will be preferred in a particular everyday listening environment by virtually all hearing-aid users. This assumption, however, appears to be largely untested.

Walden et al (2004) examined preferences for omnidirectional and directional microphone processing in a group of HI persons across a large number of everyday listening situations. Although listeners tended to prefer one processing mode to the other in similar everyday listening situations, substantial individual variability was observed. In any case, these data do not provide a definitive test of the assumption that different listeners have similar microphone preferences because, although the listening environments that were compared in that study were similar in their general characteristics, microphone preferences were not obtained by different patients in identical listening environments. It remains possible that the signal processing (e.g., microphone mode) preferred by one listener in a specific listening environment may not be preferred by other listeners.

If there is substantial individual variability in the preferred signal processing for a given listening situation, the same processing algorithm cannot be effectively applied to all hearing aid users. Obviously, this adds considerably to the complexity of developing successful adaptive (automatic) hearing aid signal processing. Given substantial variability, it may be necessary to program signal processing algorithms, such as DNR and AD, to the requirements of the individual patient. This might be accomplished via learning algorithms that take input from the hearing aid wearer during normal daily use to determine preferred signal processing characteristics across a variety of everyday listening environments (Walden et al, 2005). However, simpler solutions to automatic signal processing strategies such as DNR and AD can be pursued if the same signal processing is generally preferred across listeners for a given acoustic environment.

Another factor adding to the complexity of effective advanced signal processing algorithms is the influence of nonacoustic factors in listener preferences. For example, AD algorithms assume that the basis of the individual's preference for omnidirectional versus directional processing is manifest in the acoustic input. However, nonacoustic factors can be important. For example, if a listener wishes to attend to a talker who is located to the side or behind, and to ignore a talker who is in front, directional processing (which might otherwise be preferred) will serve as a detriment to hearing the desired talker. Clearly, the listener's intent is not represented in the acoustic input.

Yet another issue that could complicate automatic signal processing in hearing aids, which is related to the general issue of individual variability in listener preferences, is how and where the signal should be sampled. The acoustic signal that is delivered to the patient's tympanic membrane can differ from the output of the digital signal processor due to earmold and ear canal effects. If these acoustic alterations of the DSP output are sufficient to change listener preferences, effective adaptive signal processing may require that the acoustic output of the hearing aid be sampled and delivered back to the digital signal processor, resulting in further design complications.

This study is a preliminary exploration of the issues discussed above, focusing on the robustness of omnidirectional versus directional microphone preferences in everyday listening. Specifically, we wished to determine the extent to which microphone preferences from everyday listening environments are stable across listeners, and whether the recorded output of the hearing aid signal processor can be used to replicate actual live microphone preferences obtained in everyday listening. Two HI listeners who were highly experienced with the hearing aid and test methodologies used in this study made microphone preference judgments in everyday listening situations while wearing a manually switchable omnidirectional/directional hearing aid. Simultaneously, the output of the digital signal processor (sampled prior to transduction by the receiver) was recorded through each microphone mode. These recordings were later presented in a laboratory setting to the original listeners, as well as other groups of HI and normal-hearing (NH) listeners, to obtain microphone preferences. The original live microphone preferences were compared to those obtained by the different listening groups to determine the robustness of the preferences across a range of everyday listening situations.

METHOD

Participants

Participants included 30 persons with impaired hearing, recruited from the patient population of the Army Audiology and Speech Center, and 10 normal-hearing participants. All HI participants had bilateral, generally symmetric hearing impairments that fell within the fitting range of the test hearing aid used in the study. The general requirements for participation were sensorineural hearing loss (cochlear site of lesion), which was verified by differences between air- and bone-conduction thresholds of 10 dB or less and by normal tympanograms (Type A; Jerger, 1970); presence of ipsilateral acoustic reflexes for a 1000 Hz tone; and unaided monosyllabic word-recognition ability in quiet (NU-6) of 50% or better in each ear at a comfortable listening level. Although all testing in this study was performed unilaterally, pure-tone thresholds were within 30 dB between ears at each test frequency from .25 to 6 kHz, with the three-frequency average (.5, 1, 2 kHz) differing between ears by no more than 13 dB for any participant.

All HI participants reported regular use of binaural hearing aids for a minimum of four hours per day. Furthermore, if their own hearing aids had a manually switchable directional microphone option, they reported regular, selective use of each microphone option in everyday listening. Finally, all HI participants were screened with a test of speech recognition in noise (see below) while wearing the test instrument to make sure that they obtained a directional advantage (DA) of at least 15%.

Two HI persons from the 17 participants in Walden et al (2004) were selected to serve as field raters of microphone preferences. Their selection was based, in part, on a close match of their hearing thresholds to the mean audiogram of the participants in that study, which was used to program the test hearing aid in this study. Further, they had considerable experience with the test hearing aid and methodologies used in this study and were regarded as highly cooperative and reliable subjects based on their participation in the earlier study. Each field rater participated in three phases of the study, which required a total of four clinic visits. First, each identified two everyday listening environments in which he preferred omnidirectional microphone processing, two in which he preferred directional processing, and two in which he had no preference. Hence, live field microphone ratings were obtained in a total of 12 everyday listening situations (2 environments X 3 microphone preferences X 2 field raters). These preferences were obtained with the test instrument programmed to the mean audiogram of the 17 participants (34 ears) in the earlier study, with a minor (-3 dB) gain reduction for both field raters based on their hearing threshold at 1 kHz. Secondly, simultaneous with their field microphone preferences, the field raters took part in recordings of the 12 acoustic environments through the test hearing aid. Details of the recording procedures are described below. Finally, approximately three months later, both individuals provided microphone preference ratings in a laboratory setting to the (edited) recordings from the field environments.

Field Rater 1 was a 76-year-old man who had used hearing aids for about 12 years and reported an average 12 hours of daily use. He had a mild-to-moderate, gradually sloping, sensorineural hearing loss with word recognition in quiet in the test ear of 88% (NU-6 test). Field Rater 2 was an 85-year-old man. He had used hearing aids for about five years and reported 17 hours of daily use. He also had a gradually sloping, sensorineural hearing loss with word recognition in the test ear of 96%. The right ear was selected as the test ear for Field Rater 1 and the left ear for Field Rater 2. Figure 1 shows the audiogram for the test ear of each field rater, as well as the mean bilateral audiogram for the 17 participants (34 ears) in the earlier study. As can be seen, the three pure-tone audiograms are relatively similar in configuration and degree of loss.

The other 28 HI participants, as well as the 10 NH participants, took part only in the laboratory portion of the study, which was completed in one session. The HI participants were divided into three groups. The first consisted of eight of the remaining 15 participants in Walden et al (2004). This group will be referred to as the "Cohort" group. All but one were men. At the conclusion of the earlier study, they had been fit bilaterally with the ITE (in-the-ear) version of the hearing aid used in the current study. They had been wearing these instruments on a daily basis for a minimum of 18 months prior to their enrollment in this study, which used the BTE (behind-the-ear) version of their hearing aids. As a result of their participation in the earlier study, these individuals had extensive experience with the test instrument, had demonstrated a DA ≥15% with this hearing aid in laboratory testing, and had extensive experience making omnidirectional/directional microphone preference judgments in everyday listening situations.

The remaining 20 HI participants were experienced hearing aid wearers who had not previously served in a hearing aid study. The second group, hereafter the "Omni" group, consisted of nine men and one woman. They had been fit bilaterally with hearing aids having only omnidirectional microphones (i.e., no directional option) within the three years preceding their enrollment in this study. The third group, hereafter the "Dir" group, consisted of ten men who had been fit bilaterally with manually switchable omnidirectional/directional hearing aids within the three years preceding their enrollment and who routinely used both microphone modes in daily living. Finally, the 10 NH participants (3 men, 7 women) had air-conduction thresholds within normal limits (≤15 dB HL) from .25 to 6 kHz and reported no abnormal hearing symptoms.

Figure 1. Pure-tone audiogram in test ear of each field rater and mean audiogram of participants in Walden et al (2004).

Laboratory testing was administered unilaterally. For the HI participants, the test ear was selected depending on which ear provided the closest match to the mean audiogram used to program the test hearing aid. Often, little difference existed between ears. In these cases, participants were allowed to select their preferred ear, as were the NH participants. The field raters listened through the ear that had been selected for the live field testing. For the remaining 38 participants, 23 listened through their right ear and 15 through their left ear. The mean audiograms (test ears) of the participants in the Cohort, Omni, and Dir groups are shown in Figure 2. Also shown (lines without symbols) are the threshold minima and the maxima for these 28 HI participants at each test frequency. All audiograms fit within the fitting range of the test hearing aid. On average, the participants in the three HI groups had moderate-to-severe, gradually sloping hearing losses comparable to those of the field raters. Audiometric thresholds for the three HI groups were compared using a two-way ANOVA with group and frequency as factors. The results showed a significant group effect (F = 12.99, p < 0.001) and, as expected, a significant frequency effect (F = 50.86, p < 0.001). However, the group X frequency interaction was not significant (F = 1.41, p = 0.14). The Tukey HSD procedure indicated that all three means were significantly different from one another. The Cohort group had the poorest hearing and the Dir group the best hearing, particularly in the low-frequency region, among the three HI groups.
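As an illustration of this type of analysis, the sketch below runs a two-way (group X frequency) ANOVA and a Tukey HSD comparison on long-format threshold data; the toy threshold values, column names, and choice of the statsmodels library are assumptions, since the article does not identify the analysis software that was used.

```python
# Sketch of the group x frequency ANOVA and Tukey HSD comparison on
# audiometric thresholds. Data values and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format table: one row per participant x test frequency (toy data).
df = pd.DataFrame({
    "group":     ["Cohort", "Cohort", "Omni", "Omni", "Dir", "Dir"] * 4,
    "freq_hz":   [250, 1000, 250, 1000, 250, 1000] * 4,
    "threshold": [45, 55, 35, 50, 30, 45, 50, 60, 40, 55, 35, 50,
                  40, 50, 30, 45, 25, 40, 55, 65, 45, 60, 40, 55],
})

# Two-way ANOVA with group, frequency, and their interaction.
model = ols("threshold ~ C(group) * C(freq_hz)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD pairwise comparison of the three group means.
print(pairwise_tukeyhsd(df["threshold"], df["group"]))
```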

The three HI participant groups were also compared for speech recognition scores. Mean NU-6 word recognition in quiet under earphones (test ear) at a comfortable listening level was 79.8% (SD = 15.5, range = 56–100) for the Cohort group, 86.6% (SD = 9.7, range = 72–100) for the Omni group, and 86.6% (SD = 9.7, range = 72–100) for the Dir group. A one-way ANOVA revealed that the groups were not significantly different (F = 0.95; p = 0.40) on this measure.

Figure 2. Mean audiogram for the test ear of participants in each HI group. The lines without symbols show the minimum and maximum pure-tone threshold obtained at each test frequency across the HI participants.

Demographic data for the four participant groups are presented in Table 1. The mean age of the three HI groups did not differ significantly, although the NH group was, on average, significantly younger than each of the three HI groups (ANOVA: F = 34.60, p < 0.001). A one-way ANOVA comparing total hearing aid use (in years) revealed a statistically significant difference among the groups (F = 4.66, p < 0.05). Comparison of the means (Tukey HSD procedure) indicated that the Cohort group had used hearing aids for a significantly longer period of time on average than the other two groups, which probably reflects their greater and, presumably, more long-standing hearing losses. Total hearing aid use did not differ significantly between the Omni and Dir groups. Finally, a statistical comparison of daily hearing aid use (hours per day) revealed a significant group effect (F = 4.44, p < 0.05). The Omni group used their hearing aids significantly less on a daily basis than the other two groups, although the Cohort and Dir groups did not differ significantly. One might expect that the Dir group, with better hearing, would use their hearing aids less than the Omni group. However, it is possible that the choice of directional hearing aids led to greater hearing aid satisfaction and use, as has been reported by Kochkin (2003).

Table 1. Demographic Data (mean and range) for the Four Groups of Participants

Group    Age (years)     Hearing Aid Use (years)    Daily Use (hours)
Cohort   74.5 (61–85)    14.3 (5–29)                15.6 (14–17)
Omni     72.7 (60–82)    7.3 (2–14)                 10.8 (4–16)
Dir      72.7 (63–78)    6.5 (3–10)                 13.3 (8–18)
NH       41.3 (27–66)    NA                         NA

Test Hearing Aid

A GN ReSound Canta 770D BTE hearing aid was used in the study. It is a multiband, multimemory digital hearing aid with variable wide dynamic range compression (WDRC) and no user-adjustable volume control. Directionality is achieved electronically via a two-microphone system and includes an adaptive polar response option. Although this option was activated throughout the study, the Canta 770D defaults to a hypercardioid polar response in relatively diffuse background noise, which appears typical of most everyday noisy listening situations (Walden et al, 2004). Unvented, custom earmolds were made for the two field raters and were used during the field testing. For the remaining 28 HI participants who did not take part in the field portion of the study, the test hearing aid was used only for screening DA, and the device was coupled to the ear via a foam ear tip.

The first two memories of the test hearing aid were programmed with the manufacturer's "basic program." The standard omnidirectional microphone mode was programmed into one of the two memories and the directional mode into the other (hereafter referred to as the "OMNI" and "DIR" programs, respectively) in counterbalanced order across the participants. A push button on the back of the instrument allows the wearer to switch between programs. The appropriate number of audible tones informs the user whether program 1 or program 2 has been activated.

The hearing aid was fit according to the manufacturer's "Audiogram Plus" fitting algorithm with full compensation for the normal low-frequency roll-off in the directional mode ("Max Boost"). No noise reduction was used in either program, and digital feedback suppression was not required in any of the fittings.

Participants were not informed of the specific nature of the signal processing provided by the two programs; that is, they were not told that one of the programs provided omnidirectional and the other directional microphone processing. Rather, they were simply told that the two programs were "different ways of processing sound."

As previously noted, the bilateral average audiogram of the 17 participants (34 ears) in the previous study (Walden et al, 2004) was used to program the study instrument (Figure 1). However, because thresholds at 1000 Hz for the 30 HI participants ranged from 25 to 75 dB HL, overall gain was adjusted to prescribed gain at 1000 Hz based on each participant's audiogram in the test ear. As a result, the gain adjustment at 1000 Hz ranged from -10 to 15 dB, although the compression settings for the various frequency bands were held constant across the 30 HI participants. Because the two field raters had the same threshold at 1000 Hz, the field measures were made with the identical hearing aid program.

Figure 3. Mean 2-cc coupler gain of the test instrument for three input levels (50, 65, 80 dB SPL) in the OMNI mode and for a 65 dB SPL input in the DIR mode.

Mean frequency responses (for the 30 fittings) for three input levels of speech-weighted noise (50, 65, and 80 dB SPL) are shown in Figure 3. The 2-cc coupler measurements (Fonix 6500-CX) were made in the OMNI mode and show the characteristic decrease in gain with increasing input level of WDRC. The mean frequency response was also obtained in the DIR mode with the 65 dB SPL input (displayed in Figure 3 as one of the two middle and overlapping curves) to assure that the frequency-gain characteristics of the OMNI and DIR programs were equalized. As is apparent, the mean frequency responses of the two programs were nearly identical. At the end of the study, the coupler responses of the hearing aid in the OMNI mode and in the DIR mode were compared using the method suggested by Frye (2006) to verify directionality. The results indicated that the average directional effect in the .5–6 kHz range was 5.9 dB.
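The averaging step behind a figure of this kind is straightforward; the sketch below computes a mean OMNI-minus-DIR coupler-gain difference over 0.5–6 kHz, assuming the per-frequency directional effect is taken as that difference. The frequencies and gain values are hypothetical, and the sketch is not a substitute for the measurement procedure described by Frye (2006).

```python
# Minimal sketch of averaging a per-frequency OMNI-minus-DIR coupler
# difference over 0.5-6 kHz. The frequencies and gain values below are
# hypothetical; they do not reproduce the measurements reported here.
freqs_hz = [500, 1000, 2000, 3000, 4000, 6000]
omni_gain_db = [28.0, 32.0, 35.0, 33.0, 30.0, 26.0]   # hypothetical
dir_gain_db = [23.0, 26.0, 29.0, 27.0, 24.0, 19.0]   # hypothetical

differences = [o - d for o, d in zip(omni_gain_db, dir_gain_db)]
average_directional_effect = sum(differences) / len(differences)
print(f"Average directional effect: {average_directional_effect:.1f} dB")
```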

Screening for Directional Advantage

A test of speech recognition was administered to verify that each of the 30 HI participants obtained a DA of at least 15% under laboratory test conditions. The test hearing aid was fit to the test ear using a foam ear tip, and the opposite ear was plugged with a foam earplug. With the listener seated comfortably in a sound-treated test booth, IEEE sentences (Institute of Electrical and Electronic Engineers, 1969) were presented at 65 dBA from an ear-level loudspeaker at 0˚ azimuth. Each sentence contains five key words that form the basis for scoring (percent correct). Simultaneously, uncorrelated speech-shaped noise was introduced from three ear-level loudspeakers at 90˚, 180˚, and 270˚ azimuths with a combined level of 65 dBA, thereby creating a 0 dB signal-to-noise ratio at the location of the participant's head. The participant was asked to repeat each word of the sentence and to guess when unsure. Two ten-sentence lists were presented for the omnidirectional mode and two for the directional mode (100 key words each), with the order of presentation of the two test conditions counterbalanced in ten-sentence lists across participants. If the performance level of the listener was below 20% for the first list of 50 test words in either mode, the signal-to-noise ratio was adjusted to +3 or +6 dB by reducing the noise level, and testing began anew with different test lists. The DA was calculated as the percent of words correctly recognized for the omnidirectional mode subtracted from the percent of words correctly recognized for the directional mode.
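In other words, the DA is a simple difference between directional and omnidirectional percent-correct scores. The sketch below computes it from key-word counts and applies the 15% enrollment criterion; the counts shown are illustrative, not data from the study.

```python
# Directional advantage (DA) as defined above: percent of key words correct
# in the directional mode minus percent correct in the omnidirectional mode.
# The counts below are illustrative.

def directional_advantage(correct_dir: int, correct_omni: int,
                          total_keywords: int = 100) -> float:
    """Return the DA in percentage points."""
    pct_dir = 100.0 * correct_dir / total_keywords
    pct_omni = 100.0 * correct_omni / total_keywords
    return pct_dir - pct_omni

da = directional_advantage(correct_dir=74, correct_omni=52)
meets_criterion = da >= 15.0   # enrollment screening criterion
print(f"DA = {da:.0f}%, meets 15% criterion: {meets_criterion}")
```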

Identification of Field Recording Sites

During their second visit, the two field raters were fit unilaterally with the test hearing aid using their custom earmolds. The opposite ear was occluded with a foam earplug. Each field rater was required to identify six noisy everyday listening environments, two in which he distinctly preferred omnidirectional processing, two in which he distinctly preferred directional processing, and two in which he had no clear preference for either microphone mode. Each field rater walked around the Walter Reed campus in search of appropriate listening environments, accompanied by two of the investigators. As criteria for inclusion, noise was present in every listening environment, and speech was the primary signal. Given that these field judgments were to serve as "gold standard" microphone preferences for the laboratory ratings, great care was taken to assure that the field microphone preferences were constant and unequivocal. Acoustically stable listening environments were required to allow thorough comparisons between the two microphone modes, both in the field and for the laboratory ratings. To facilitate this, one or more of seven adults (four men and three women) variously accompanied the field rater and investigators to serve as the primary talker in each listening environment, reciting narrative speech. Identification of appropriate listening situations was largely by trial and error as the primary talker and/or the field rater slowly moved around in the listening environment until one of the three preferences (Program 1, Program 2, No Preference) clearly emerged. Location and distance of the primary talker relative to the listener were the most distinct variables determining microphone preferences. The level of the background noise (e.g., fans, crowds of people, music from a radio, sounds of nature, traffic) varied considerably across the different environments.

Once a stable environment was located and the listener had indicated his microphone preference, a 2 min recording was made through the hearing aid. During this recording period, the distance and orientation between the primary talker and the field rater were held constant. Details of the recording method are provided below. At the end of the recording, the field rater was asked to confirm that the microphone preference had not changed, and a modified Hearing Aid Use Log (Walden et al, 2004) was filled out by one of the investigators, documenting the field rater's microphone preference and describing the listening environment using a checklist format (see Appendix 1).

Table 2 describes the talker location and distance, and background noise location and perceived loudness, for the 12 listening environments. The first two listening environments listed under each microphone preference category are those of Field Rater 1, and the last two are from Field Rater 2. In general, there was relatively good agreement between the two field raters regarding the characteristics of OMNI-preferred and DIR-preferred listening environments. In each of the four OMNI-preferred sites, the primary talker was located to the side or behind the listener and relatively near (<10 ft). Background noise was located all around or behind the listener, and its perceived level varied from soft to loud.

As expected, the primary talker was in front of the field rater for the four DIR-preferred sites. Further, except for the conference room, the talker was located close to the listener (<3 ft). The background noise was perceived as being all around, with the exception of the conference room, where the noise source (i.e., an electric fan) was located behind the listener. The perceived loudness of the background noise varied across these four sites.

In contrast to the OMNI-preferred and DIR-preferred sites, the four no-preference listening environments were more variable in their characteristics across the two field raters. For Field Rater 1, the primary talker was located to the side and relatively close to the listener in both no-preference sites (i.e., break room, outdoor garden), although the characteristics of the noise varied between these two settings. The outdoor garden that yielded an OMNI-preferred rating for Field Rater 1 also yielded a no-preference rating for this listener. A distinct preference for omnidirectional processing resulted when the talker was located directly behind and several feet (but not more than 10 ft) from Field Rater 1. However, when the talker was located to the side and very close (<3 ft), there was no preference for either microphone mode. For Field Rater 2, the primary talker and noise sources were located in front of the listener in both no-preference environments (i.e., classroom, conference room), with a talker-to-listener distance of several feet.

Table 2. Characteristics of the 12 Everyday Listening Environments, Including Location and Distance of the Primary Talker, and Location and Loudness of the Background Noise

Recording Site                     Perceived Loudness of Background Noise

OMNI-Preferred Sites
  Outdoor Garden                   Moderate
  Car (passenger)                  Soft
  Office (10' x 12')               Moderate
  Hospital Cafeteria               Loud

DIR-Preferred Sites
  Conference Room (14' x 18')      Moderate
  Hospital Cafeteria               Soft
  Outside of Hospital              Soft
  Hospital Lobby                   Loud

No-Preference Sites
  Lunch Room (13' x 15')           Soft
  Outdoor Garden                   Moderate
  Classroom (19' x 25')            Loud
  Conference Room (14' x 18')      Soft

Acoustic Recordings

The test hearing aid was modified by the manufacturer to allow the electrical output signal (i.e., before transduction by the hearing aid receiver) to be accessed directly via an electrical connector attached to the outside of the hearing aid case. Recordings of the acoustic environment were made by feeding the pulse-code modulated output of the signal processor to a demodulator. The demodulated signal was fed to a preamplifier and then to the analog-to-digital converter of a notebook computer.

During the 2 min digital recording of each listening environment, the field rater switched between the two microphone modes in 10 sec intervals at the direction of one of the investigators. The one- or two-tone signal produced by the hearing aid as each microphone mode was activated, which identified the processing mode for each 10 sec sample, was also recorded.

Preparation of Laboratory Test Materials

The 2 min recordings of each listening environment consisted of six 10 sec samples of omnidirectional processing interleaved with six 10 sec samples of directional processing. These recordings of the 12 everyday listening environments were edited for use in all subsequent listening tasks. From each of the original 2 min recordings, six overlapping 40 sec samples were produced, thereby creating a total of 72 listening items. The six samples of each environment began at different places in the 2 min recordings, such that three started with 10 sec of omnidirectional processing and three began with 10 sec of directional processing. Each 40 sec sample contained two 10 sec samples of omnidirectional processing alternating with two 10 sec samples of directional processing. The tones produced by the hearing aid that indicate which processing mode was activated were digitally moved to correspond with the order of the samples in a specific 40 sec test item. That is to say, a given 40 sec test item did not always begin with omnidirectional or with directional processing, but the tone sequence was always 1-2-1-2.
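As a sketch of this editing step, the code below cuts a 2 min recording (twelve alternating 10 sec segments) into six overlapping 40 sec items, three beginning on a segment recorded in one microphone mode and three on a segment recorded in the other. The sample rate and the choice of start points are assumptions; the actual test items also carried the repositioned mode-indicator tones described above.

```python
# Sketch of cutting a 2 min recording (twelve alternating 10 sec segments)
# into six overlapping 40 sec test items. Sample rate and the choice of
# start segments are assumptions for illustration.
import numpy as np

FS = 16000                 # assumed sample rate (Hz)
SEG_SEC = 10               # each microphone-mode segment is 10 sec
SEGS_PER_ITEM = 4          # a 40 sec item spans four consecutive segments

def make_test_items(recording: np.ndarray, first_mode: str = "OMNI"):
    """Return six overlapping 40 sec items with their starting mode.
    Items starting at even segment indices begin with `first_mode`;
    odd indices begin with the other mode."""
    seg_len = FS * SEG_SEC
    other = "DIR" if first_mode == "OMNI" else "OMNI"
    items = []
    for start_seg in range(6):                     # start segments 0..5
        start = start_seg * seg_len
        item = recording[start:start + SEGS_PER_ITEM * seg_len]
        start_mode = first_mode if start_seg % 2 == 0 else other
        items.append((start_mode, item))
    return items

# Example with a silent placeholder recording of 120 sec.
two_minute_recording = np.zeros(FS * 120, dtype=np.float32)
items = make_test_items(two_minute_recording)
print([(mode, len(x) / FS) for mode, x in items])   # six items of 40.0 sec each
```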

Laboratory Preference Ratings

Each of the 40 participants listened monaurally to the edited recordings of the listening environments in a sound-treated booth via an insert receiver. Initially, a comfortable listening level was established for each participant, as follows: Concatenated IEEE sentences recorded through the test hearing aid mounted on KEMAR (Burkhard and Sachs, 1975) were presented in quiet through the insert receiver to the listener's test ear. A comfort level was established using a bracketing procedure in 5 dB steps. The participant periodically indicated his or her perceived loudness of the concatenated sentences using a seven-category loudness scale adapted from the contour test of the IHAFF (Independent Hearing Aid Fitting Forum) protocol (Valente and Van Vliet, 1997). The comfort level determined by this procedure was used to present a few practice items (not included in the 72 test items), as well as the actual 72-item test.
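A minimal sketch of a 5 dB bracketing routine of this general kind is shown below. The category labels follow a seven-point contour-style loudness scale, but the starting level and stopping rule are assumptions rather than the exact clinical procedure used in the study.

```python
# Illustrative 5 dB bracketing routine for finding a comfortable listening
# level. The loudness categories follow a seven-point contour-type scale;
# the starting level and stopping rule are assumptions for illustration.

CATEGORIES = ["very soft", "soft", "comfortable but slightly soft",
              "comfortable", "comfortable but slightly loud",
              "loud but OK", "uncomfortably loud"]

def bracket_comfort_level(get_rating, start_db: float = 60.0,
                          step_db: float = 5.0, max_trials: int = 20) -> float:
    """Adjust the presentation level in 5 dB steps until the listener
    rates it "comfortable". `get_rating(level_db)` returns one of CATEGORIES."""
    level = start_db
    for _ in range(max_trials):
        rating = get_rating(level)
        if rating == "comfortable":
            return level
        # Raise the level for soft ratings, lower it for loud ratings.
        if CATEGORIES.index(rating) < CATEGORIES.index("comfortable"):
            level += step_db
        else:
            level -= step_db
    return level

# Example with a simulated listener who finds 75 dB comfortable.
simulated = lambda db: "comfortable" if abs(db - 75) < 2.5 else (
    "soft" if db < 75 else "loud but OK")
print(bracket_comfort_level(simulated))   # -> 75.0
```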

The 72 items were presented randomly, without replacement, to each participant. For each item, participants were required to indicate whether they preferred sample 1 (one tone), sample 2 (two tones), or had no preference. Typically, the listeners responded after the second or the third 10 sec portion of the test item; that is, they seldom needed the full 40 sec sample to make a preference judgment. The software identified which sample (1 or 2) was associated with which microphone mode in each 40 sec sample and recorded the participant's responses accordingly. Once a preference response was made to a given item, the test advanced to the next item. Testing time was about 90 min, which included one or two rest periods.
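The presentation logic amounts to shuffling the 72 items once and mapping each response (sample 1, sample 2, or no preference) back to the microphone mode it refers to. The sketch below illustrates that bookkeeping; the item structure and function names are illustrative and do not reproduce the study software.

```python
# Sketch of the 72-item presentation and scoring logic: items are shuffled
# once (random order without replacement), and each "sample 1"/"sample 2"
# response is mapped back to the microphone mode heard in that sample.
import random

# Each item records which mode the one-tone sample (sample 1) contained.
items = [{"site": site, "sample1_mode": mode}
         for site in range(12)
         for mode in ("OMNI", "DIR")
         for _ in range(3)]          # 12 sites x 6 items = 72 items

random.shuffle(items)                # random order, no replacement

def score_response(item, response: str) -> str:
    """Translate 'sample1'/'sample2'/'none' into OMNI/DIR/no preference."""
    if response == "none":
        return "no preference"
    if response == "sample1":
        return item["sample1_mode"]
    return "DIR" if item["sample1_mode"] == "OMNI" else "OMNI"

# Example: the listener prefers sample 2 of the first presented item.
first = items[0]
print(first["site"], score_response(first, "sample2"))
```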

RESULTS

Directional Advantage

As noted earlier, HI participants were required to obtain a DA of 15% or greater with the test hearing aid at 0 dB, +3 dB, or +6 dB SNR (signal-to-noise ratio), as a criterion for enrollment in the study. Five of the participants passed the screening test at 0 dB SNR, 12 passed at +3 dB SNR, and the remaining 13 participants passed at +6 dB SNR. Field Rater 1 obtained a DA of 15% (+3 dB SNR), and the DA for Field Rater 2 was 26% (+6 dB SNR). For the HI groups, the mean DA was 38.4% for the Cohort group (SD: 14.4; range: 19–64), 33.3% for the Omni group (SD: 12.0; range: 18–65), and 31.0% for the Dir group (SD: 9.2; range: 20–47). A one-way ANOVA indicated that the mean DA did not differ significantly among the three HI groups (F = 0.83, p = 0.45).

Laboratory Preference Ratings

The primary purpose of this study was to determine the extent to which OMNI versus DIR microphone preferences in everyday listening situations are similar across HI persons. Ideally, this would be evaluated by comparing the microphone preferences of a large number of listeners in identical live everyday listening environments. As a practical matter, such a study is difficult to conduct because of the individual and dynamic nature of most everyday listening situations. Not only are everyday listening environments largely unique to an individual, the specific acoustic characteristics of a given listening environment often change considerably over time as, for example, the talker and listener move. This problem was addressed in this study by making acoustic recordings of a variety of everyday listening environments and using those recordings, presented in a laboratory setting, to evaluate the similarity of microphone preferences across different listeners. Obviously, this approach assumes that the essential determinants of microphone preferences in everyday listening are present in the acoustic signal (captured by the recordings). Notably, the same assumption underlies automatic directionality algorithms. To assess this assumption, the laboratory preference ratings of the two field raters were compared to their actual live microphone preferences.

Intrasubject Agreement

Figure 4 shows the distribution of OMNI-preferred, DIR-preferred, and no-preference ratings (expressed as percentages) obtained from the two field raters during laboratory testing, organized according to their preference categories in the field. Chance performance is 33.3%. Although these data are pooled across specific listening situations in each preference category and combined for the two field raters, they reflect only the degree of agreement of each field rater's laboratory preferences with his own field ratings. Depicted, therefore, is the distribution of 24 laboratory preference ratings within each field preference category (6 samples X 2 sites X 2 field raters). Recall that approximately three months passed between when the field raters made their live preference judgments and when they participated in the laboratory testing. This, combined with the multiple items presented from each field environment and the random association of the microphone modes with the one or two tones indicating sample order within items on the laboratory test, makes it highly unlikely that memory played a role in the relatively high intrasubject agreement observed. As can be seen, the vast majority of the laboratory ratings given by the field raters agreed with their field preferences. However, the level of agreement varied with the preference category. For OMNI-preferred sites, 23 of the 24 laboratory ratings (95.8%) agreed with the field preference, whereas 19 of the 24 laboratory ratings (79.2%) agreed with the field preference for the DIR-preferred sites. A no-preference rating was given to the other five test items from the DIR-preferred site recordings. For the no-preference sites, 16 of the laboratory ratings (66.7%) agreed with the field preference, with the remaining eight ratings being distributed between OMNI and DIR preferences.

Figure 4. Agreement between the laboratory and field microphone preference ratings for the field raters (intrasubject agreement), pooled across specific listening situations in each preference category.

Figure 5 displays the intrasubject agreement separately for each of the four everyday listening environments in each preference category. It should be noted that the data for each environment is the distribution of only six laboratory ratings, so caution should be used in interpreting these data. Nevertheless, it appears that the level of agreement between the laboratory ratings and field preferences differed somewhat among the specific listening environments. For six of the 12 sites, there was perfect agreement between the field and laboratory preference, and for three additional sites, five of the six laboratory ratings agreed with the field preferences. For the remaining three sites, however, good agreement was not observed. Notably, two of these were DIR-preferred sites; specifically, the conference room (Field Rater 1) and outside a building (Field Rater 2). Nevertheless, the intrasubject comparisons (Figures 4 and 5) generally suggest that the determining factors in the OMNI and DIR preferences in the everyday listening environments were largely captured in the acoustic recordings, although this was less true for the DIR preferences than for the OMNI preferences. With this as a frame of reference, it is possible to consider the primary purpose of the study; that is, the consistency of microphone preferences across listeners.

Figure 5. Agreement between the laboratory and field microphone preference ratings for the field raters (intrasubject agreement) for the specific listening situations in each preference category.

Intersubject Agreement

Figure 6 shows the distribution of OMNI-preferred, DIR-preferred, and no-preference ratings for the remaining 38 participants, organized according to (field rater) preference category and pooled across specific listening situations within each category. Depicted, therefore, is the distribution of 912 laboratory preference ratings (6 samples X 4 sites X 38 participants) within each field preference category. Overall, these 38 participants tended to agree with one another (and with the field raters' live preferences) in their laboratory preferences. Again, however, the degree of agreement varied with the field preference category, with best agreement (90.4%) being achieved for the OMNI-preferred environments. For the DIR-preferred and no-preference sites, agreement was 57.3% and 52.6%, respectively.

Figure 6. Agreement between the laboratory and field microphone preference ratings for 38 participants (intersubject agreement), pooled across specific listening situations in each preference category.

A comparison of the intrasubject agreement data in Figure 4 with the intersubject agreement data in Figure 6 suggests generally similar trends. This observation was confirmed by separate 3 X 2 chi square analyses comparing the data of these two figures for each field preference category. Results were χ2 = 1.18 (p = 0.55), 4.80 (p = 0.09), and 2.00 (p = 0.37) for the OMNI-preferred, DIR-preferred, and no-preference sites, respectively. Field preferences for OMNI-preferred sites appeared to be highly replicable in the laboratory, both within and across listeners, but less so for the DIR-preferred and no-preference sites. Although the general pattern of results for the DIR-preferred and no-preference sites was similar between the field raters and the other 38 participants, as substantiated by the chi square analyses, the degree of agreement between the field preferences and laboratory ratings for the DIR-preferred sites dropped from nearly 80% for the intrasubject comparisons (Figure 4) to 57% for the intersubject comparisons (Figure 6).
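Each of these tests is a 3 X 2 chi-square on the counts of OMNI, DIR, and no-preference laboratory ratings for the intrasubject (24 ratings per category) versus intersubject (912 ratings per category) data. The sketch below shows the computation with scipy; the cell counts entered are placeholders, not the study's actual frequencies.

```python
# Sketch of one 3 x 2 chi-square comparison of laboratory rating
# distributions (OMNI / DIR / no preference) for the intrasubject (n = 24)
# versus intersubject (n = 912) data. The cell counts are placeholders,
# not the frequencies reported in the article.
from scipy.stats import chi2_contingency

observed = [
    #  intrasubject, intersubject
    [22, 820],   # OMNI-preferred ratings (placeholder counts)
    [1,  40],    # DIR-preferred ratings (placeholder counts)
    [1,  52],    # no-preference ratings (placeholder counts)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```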

Figure 7 displays intersubject agreement for each of the four listening situations in each preference category. Hence, the distribution of 228 laboratory preference ratings (six samples X 38 participants) is depicted for each of the four sites within each panel. Although generally similar patterns of laboratory preference ratings were observed for the four field sites within each preference category, quite different patterns of findings were obtained for the three preference categories. It is evident in the first panel that the laboratory ratings showed excellent agreement (>95%) with the field preference for three of the four OMNI-preferred sites. For the recordings made in a car, agreement was 75.4%, with the other laboratory ratings being divided between DIR-preferred and no preference. These data, therefore, generally support the conclusion that OMNI microphone preferences are highly robust across listeners and listening environments.

Figure 7. Agreement between the laboratory and field microphone preference ratings for 38 participants (intersubject agreement), for the specific listening situations in each preference category.

For the DIR-preferred sites (second panel), the laboratory ratings showed substantially less agreement, ranging from 43.4% (outside a building) to 65.4% (cafeteria). Laboratory ratings that were not in agreement with the DIR field preference were almost always no preference; that is to say, OMNI processing was rarely preferred in the laboratory for a DIR-preferred field site. Notably, for one of the DIR-preferred field sites (outside a building), the majority of the laboratory ratings did not agree with the field preference.

Results for the no-preference field sites (third panel) revealed that a no-preference rating was the most frequent response across all four sites. However, DIR preferences were often obtained in the laboratory for each of these sites and, to a lesser extent, OMNI preferences.

Participant Groups

The data in Figures 6 and 7 suggest that preferences for omnidirectional processing are highly robust across listeners and everyday listening environments. However, this appears to be less true for directional processing. Rather, the field raters' DIR preferences in everyday listening situations were not consistently replicated by the other 38 participants in the laboratory. Often a no-preference rating was assigned to the recorded samples of the DIR-preferred sites. As suggested earlier, part of this inconsistency may be because the acoustic recordings did not always fully reflect why directional processing had been preferred by the field rater in a particular listening environment. However, it is also possible that some of the inconsistency between the laboratory ratings and the field preferences is attributable to participant differences, such as possible effects of hearing loss and experience with hearing aids. This possibility was explored by comparing results for the four participant groups. Recall that the Cohort group, on average, had the greatest amount of hearing loss and the longest history of hearing aid use compared to the other groups. In addition, both the Cohort and Dir groups had experience with directional hearing aids in everyday living, in contrast to the Omni group, who had no experience with this type of signal processing prior to their participation in this study. The data for each of the four groups are shown in Figure 8, pooled across listening sites.

Figure 8. Agreement between the laboratory and field microphone preference ratings pooled across specific listening situations in each preference category for each of four participant groups.

Overall, the Cohort and the NH groups appeared to replicate the field preferences most consistently. Not unexpectedly, the OMNI-preferred field preferences were highly replicable in the laboratory for all four participant groups, consistent with earlier representations of the data. This was confirmed by a 3 × 4 chi-square analysis of the data for the OMNI-preferred sites, which revealed no significant group differences (χ² = 8.37, p = 0.21). The DIR field preferences were less consistently replicated in the laboratory for all groups, but particularly for the Dir group, where no-preference ratings were the most frequent response. A 3 × 4 chi-square analysis of the data for DIR-preferred sites revealed that the group differences were highly significant (χ² = 34.36, p < 0.001). Similarly, the response patterns for the no-preference sites were significantly different (χ² = 167.48, p < 0.001) among the participant groups. Although the NH group replicated the no-preference field preferences with a relatively high degree of consistency (76.3%), the Omni and Dir groups more frequently preferred the directional processing for these listening environments based on the laboratory recordings.
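Each of these 3 × 4 analyses compares the distribution of laboratory ratings (OMNI, DIR, no preference) across the four participant groups within one field preference category. A sketch of such an analysis with scipy is given below; the cell counts in the contingency table are invented for illustration only, since the paper reports only the resulting χ² statistics.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3 x 4 contingency table for one field preference category:
# rows = laboratory rating (OMNI, DIR, no preference),
# columns = participant group (Cohort, Omni, Dir, NH).
table = np.array([
    [10,  8,  6, 12],   # OMNI ratings
    [60, 45, 38, 70],   # DIR ratings
    [26, 43, 52, 32],   # no-preference ratings
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```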

Differences among the four participant groups are summarized in Figure 9, which shows the percent agreement between the laboratory rating and the field preference for each of the four groups. These data were compared using a two-way ANOVA with participant group and field preference category as factors. As expected, the main effect for preference category was highly significant (F = 41.48, p < 0.001). Agreement between the field and laboratory preferences was significantly higher for the OMNI-preferred sites. However, the main effect for group was not significant (F = 2.24, p = 0.08). Although, overall, the four groups did not differ significantly from one another, a significant group × preference category interaction was observed (F = 2.55, p < 0.05). As noted earlier, there appeared to be some tendency for the Cohort and NH groups to show slightly greater agreement with the field ratings than the Omni and Dir groups, although this was mainly attributable to the data for the no-preference field sites.
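A two-way ANOVA of this form can be run on a per-participant agreement table; the sketch below uses statsmodels, with a hypothetical data frame in which each row holds one participant's percent agreement for one field preference category. The values, group sizes, and column names are placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical per-participant percent-agreement data:
# one row per participant x field preference category.
df = pd.DataFrame({
    "group":     ["Cohort", "Cohort", "Omni", "Omni", "Dir", "Dir", "NH", "NH"] * 3,
    "category":  ["OMNI"] * 8 + ["DIR"] * 8 + ["NONE"] * 8,
    "agreement": [98, 95, 96, 92, 97, 93, 99, 96,    # OMNI-preferred sites
                  70, 62, 55, 48, 40, 45, 68, 60,    # DIR-preferred sites
                  72, 65, 38, 42, 35, 40, 80, 74],   # no-preference sites
})

# Two-way ANOVA with participant group, preference category, and their interaction.
model = ols("agreement ~ C(group) * C(category)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```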

Given that the Cohort group, on average, had the most hearing loss of the participant groups and the NH group, by definition, had the least, it seems unlikely that hearing loss, per se, explains the group differences observed. However, excluding the NH participants, the average hearing losses of the participant groups did not differ greatly (see Figure 2), thereby limiting the potential influence of this variable. Perhaps the most obvious audiometric difference between the three HI participant groups was that the Cohort group had, on average, flatter audiograms than the Omni and Dir groups. Recall that the field recordings were made with the hearing aid that was programmed based on the mean audiogram of the 17 participants in Walden et al (2004), from which the two field raters and the eight members of the Cohort group were selected. As shown in Figure 1, these individuals had relatively flat audiometric configurations, whereas the average audiograms of the Omni and Dir participant groups were more sloping (see Figure 2). It is likely, therefore, that the amplification parameters programmed into the hearing aid (e.g., compression ratios) were less appropriate for the Omni and Dir groups than for the Cohort group. This suggests that audiometric configuration could have contributed to the degree of agreement with the field ratings. That the NH participants had, by definition, flat audiometric configurations and, with the Cohort group, showed more consistent agreement with the field preferences may support this interpretation as well. This possibility was explored further in the following analysis.

Figure 9. Agreement of laboratory ratings with the field preferences for each participant group.

Although the slope of the mean audiograms differed slightly among the three HI participant groups, there was considerable overlap among individual participants across the three groups. The possible influence of audiometric configuration, therefore, was further explored by selecting the three participants from each of the HI groups with the best hearing thresholds at 500 Hz (n = 9) and the three participants with the poorest thresholds at 500 Hz (n = 9). The two newly formed groups will be referred to as the "Sloping" and "Flat" groups, and their mean audiograms are shown in Figure 10. If the audiometric configuration and/or the appropriateness of the amplification parameters to the participant's hearing loss had a significant effect on the degree of agreement of the laboratory ratings with the field preferences, one would expect the Sloping group to show less agreement with the field preferences than the Flat group. However, this was not observed. The percent agreement of the laboratory ratings with the field preferences for the Sloping and Flat groups is shown in Figure 11. Agreement between field preferences and laboratory ratings for the two groups was not significantly different (χ² = 1.14, p = 0.57), and, if anything, these findings are in the opposite direction to that hypothesized. Hence, these data provide little support for the notion that audiometric configuration was a significant influence on the level of agreement of the laboratory ratings with the field preferences.
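The regrouping described above amounts to ranking the HI participants by their 500 Hz threshold within each group and taking the three best and the three poorest from each. A minimal sketch is shown below; the participant IDs, group sizes, and threshold values are invented for illustration.

```python
import pandas as pd

# Hypothetical 500 Hz thresholds (dB HL) for the HI participants;
# group sizes here are illustrative, not the study's actual group sizes.
thresholds = pd.DataFrame({
    "participant": [f"P{i:02d}" for i in range(1, 31)],
    "group": ["Cohort"] * 10 + ["Omni"] * 10 + ["Dir"] * 10,
    "thr_500Hz": [40, 35, 45, 50, 30, 42, 38, 47, 44, 36,
                  25, 20, 30, 45, 15, 35, 28, 22, 40, 33,
                  18, 27, 32, 24, 38, 21, 29, 35, 26, 31],
})

# "Sloping" = best (lowest) 500 Hz thresholds, three per HI group (n = 9);
# "Flat" = poorest (highest) 500 Hz thresholds, three per HI group (n = 9).
sloping = thresholds.sort_values("thr_500Hz").groupby("group").head(3)
flat = thresholds.sort_values("thr_500Hz", ascending=False).groupby("group").head(3)

print("Sloping group:\n", sloping, "\n\nFlat group:\n", flat)
```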

DISCUSSION

This study was motivated by a general interest in the development of effective automatic directionality algorithms in hearing aids. Such an algorithm would automatically switch to the preferred microphone processing mode (omnidirectional or directional) based on an analysis of the acoustic environment. Although beyond the scope of this discussion, there are a number of technical and other problems that complicate the design of algorithms that can consistently make correct decisions regarding the preferred microphone mode in everyday listening. One of these potential difficulties is the possibility that HI patients who are audiometrically similar are, nevertheless, quite individual in their preferred microphone mode in everyday listening. To explore this issue, the extent to which microphone preferences based on recordings of everyday listening environments agreed with live microphone preferences in those environments was compared across HI listeners.

Intrasubject Agreement

As noted earlier, the most direct way to explore the robustness of microphone preferences in everyday listening situations would be to have several HI persons make microphone preference judgments in identical listening situations. However, this is nearly impossible because of the dynamic and individual nature of everyday listening environments. The approach used in this study was to record everyday listening environments through the two microphone modes and have listeners make preference judgments based on the recordings. This assumes that the basis for microphone preferences in everyday listening situations is largely contained in the acoustic input. That the field raters were able to replicate their live microphone preferences in the laboratory with a relatively high degree of accuracy suggests that this assumption is valid, at least to a first approximation.

Figure 10. Mean audiogram for the three participants in each HI group with the best pure-tone thresholds at 500 Hz (Sloping group) and for the three participants in each group with the poorest thresholds at 500 Hz (Flat group). The bars indicate one standard deviation.

Figure 11. Agreement of laboratory ratings with field preferences for the Sloping and the Flat groups for each preference category.

Clearly, however, this was truer for omnidirectional microphone preferences than for directional preferences. Omnidirectional processing was preferred in the laboratory by the field raters for >95% of the recorded items from the OMNI-preferred sites. About 80% of the directional field preferences were replicated in the laboratory. The implication is that whatever factor(s) make omnidirectional processing preferable in a listening environment are well represented in the acoustic input.

A characteristic common to each of the four OMNI-preferred sites was that the primary signal (talker) was not located in front of the listener. In contrast, the signal was in front for all four DIR-preferred sites. Taken together, this suggests that the relative intelligibility of the primary talker in the two processing modes was the basis for the preference judgments, consistent with the findings of Walden et al (2005). Given that directional processing is designed to improve the SNR of sounds coming from the front by reducing the level of sounds originating from the sides and/or back, it is reasonable to assume that a signal of interest coming from the side or behind the listener may be more intelligible in the omnidirectional mode. However, as important as the location of the signal source appears, this factor alone is not sufficient to create a distinct preference for omnidirectional or directional processing in everyday listening situations. The field raters readily identified everyday listening situations where they had no preference for either microphone mode, consistent with the findings of Surr et al (2002) and Walden et al (2004). In all of these field studies, HI listeners frequently expressed no preference for either omnidirectional or directional processing in everyday listening situations where the signal of interest was coming from the front, as well as from the side or behind the listener. Although talker location may not be the sufficient determinant, it is nevertheless possible that the relative intelligibility of the talker of interest is the determining factor in microphone preferences in everyday listening. It is not always the case that speech signals located in front of the listener in a noisy listening environment will be more intelligible in the directional mode, due to other environmental factors that compromise the directional processing, such as talker-to-listener distance and reverberation in the listening environment (Ricketts and Hornsby, 2003). When directional microphones are unable to improve the SNR noticeably due to such environmental factors, the listener may have no preference for either processing mode.

If it is to be argued that the relative intelligibility of the signal of interest through the two processing modes is the primary determinant in microphone preference, some explanation is required for why omnidirectional field preferences were consistently replicated in the laboratory by the field raters, whereas these participants were less consistent in replicating their directional field preferences when listening to the acoustic recordings. That is to say, if the omnidirectional mode was consistently preferred both in the field and in the laboratory recordings when omnidirectional processing provided the better intelligibility, should this not be true for directional processing as well? For the most part, this was true in that nearly 80% of the field preferences for directional processing were replicated in the laboratory recordings. Still, there was not as much agreement by the field raters between their field and laboratory preferences for the DIR-preferred sites as for the OMNI-preferred sites.

Intelligibility may still provide the primary explanation for microphone preferences if we consider the possible influence of visual cues. When the talker (i.e., signal of interest) is not in front of the listener, visual speech cues (speechreading) are not available. Thus, the relevant information concerning the speech signal is restricted to acoustic cues, which are captured by the recordings. In contrast, when the talker is in front, the listener can use visual cues to complement the auditory input. In this case, intelligibility is affected both by the acoustic signal and by nonacoustic factors not captured in the recordings. It is well known that there is a nonlinear relationship between SNR and the benefit of visual cues to speech recognition (Sumby and Pollack, 1954). Small changes in the SNR that are barely noticeable through audition alone can result in substantial changes in speech intelligibility when combined with visual cues (Grant and Braida, 1991). It is possible that the directional processing provided only a minimal improvement in SNR in some of the DIR-preferred sites included in this study, possibly due to the effects of talker-to-listener distance, reverberation, and/or less than optimal spatial separation of the signal and noise sources. Nevertheless, a small improvement in SNR, when combined with visual speech cues available on the talker's face, may have resulted in a noticeable improvement in overall intelligibility in the field. However, because the recordings from the DIR-preferred sites contain only the auditory information, small changes in SNR may have been minimally noticeable in the laboratory testing, making no preference for either microphone mode more likely. The possible influence of visual cues on patient preference and acceptance of directional microphone processing, as well as on hearing aid benefit in general, is largely unknown and deserves investigation (Walden and Grant, 1993; Walden et al, 2001).
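The nonlinear interaction between SNR and visual benefit can be illustrated with a toy psychometric model: a small SNR improvement that barely moves an auditory-only intelligibility function can produce a much larger change once visual cues shift the listener to a steeper region of the function. The logistic functions and parameter values below are purely illustrative assumptions, not fits to this study's data or to Sumby and Pollack's (1954) results.

```python
import math

def logistic_intelligibility(snr_db, midpoint_db, slope=0.5):
    """Toy psychometric function: proportion correct vs. SNR (dB)."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - midpoint_db)))

# Illustrative assumption: visual cues shift the effective midpoint by -6 dB,
# i.e., the same intelligibility is reached at a 6 dB poorer acoustic SNR.
def audio_only(snr_db):
    return logistic_intelligibility(snr_db, midpoint_db=0.0)

def audiovisual(snr_db):
    return logistic_intelligibility(snr_db, midpoint_db=-6.0)

# A hypothetical 2 dB directional improvement at an unfavorable SNR (-8 dB):
for label, fn in [("audio only", audio_only), ("audio + visual", audiovisual)]:
    before, after = fn(-8.0), fn(-6.0)
    print(f"{label}: {before:.2f} -> {after:.2f} "
          f"(gain = {100 * (after - before):.1f} percentage points)")
```

Under these assumed parameters, the same 2 dB improvement yields only a few percentage points auditorily but a much larger audiovisual gain, which is the pattern the argument above relies on.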

Intersubject Agreement

As already noted, automatic signal processing algorithms such as digital noise reduction and automatic directionality, as they are currently implemented, assume that the same processing mode will be preferred by virtually all HI listeners in a particular listening environment. Once the amplification parameters (e.g., gain, compression, frequency response) have been determined and the hearing aid fit to the patient, the automatic algorithm will sample the acoustic input and apply a decision rule to determine which processing mode will be activated. For the most part, these decision rules are applied independent of listener variables; that is, they are based solely on an analysis of the acoustic input. Consequently, it must be assumed that the preferred signal processing in a particular acoustic environment is relatively constant across HI listeners with roughly comparable hearing losses. The results of this study of microphone preferences suggest that this assumption may be only partially true. Clearly, preferences for omnidirectional processing appear to be highly robust. If a given HI listener strongly prefers omnidirectional processing in a particular noisy listening environment, it is quite likely that other HI listeners will also prefer this processing mode. This is less the case, however, for directional processing. In this study, the field preferences for directional processing were replicated only about 57% of the time in the laboratory across all participants. This is substantially less than the nearly 80% agreement that was observed for the field raters. Hence, it appears that one cannot assume with a high degree of certainty that different listeners will all prefer directional processing in the same listening environment. However, it is notable that, when laboratory ratings did not agree with field preferences for directional processing, listeners almost always expressed no preference for either processing mode. These data suggest, therefore, that it is unlikely that some listeners will strongly prefer omnidirectional processing and others will strongly prefer directional processing in the same listening environment.

If we can interpret no preference to mean that it does not matter to the listener which microphone processing mode is activated, this would appear to minimize the practical implications of the lack of perfect agreement between the field and laboratory ratings. Generally, switchable omnidirectional/directional hearing aids must be in one processing mode or the other. There were very few instances in this study where omnidirectional processing was preferred in a live everyday environment and directional processing was preferred based on the acoustic recording of that environment, or vice versa. If it really does not matter which of the two microphone modes is activated when the listener has no preference for either, the individual differences in microphone preferences observed in this study may not have serious implications for automatic directionality algorithms that are based solely on an analysis of the acoustic input. This possibility could be examined by eliminating the no-preference response category in future studies.

The data of this study do not allow a precise explanation of why preferences for directional processing appear less robust than preferences for omnidirectional processing. Factors such as degree of hearing loss, audiometric configuration, experience with directional microphones, and overall hearing aid use did not seem highly influential in this regard. There was some evidence that the level of agreement between the laboratory ratings and the field directional preferences varied somewhat with the DIR-preferred site (see second panel of Figure 7). It may be that the influence of visual cues already discussed could have contributed to this lack of agreement across the participants. Assuming one or more of the live directional preferences of the field raters were influenced by visual cues, this influence would not be captured in the acoustic recordings of those environmental sites. As was the case for the intrasubject data discussed above, this would likely lead to more no-preference ratings in the laboratory for those sites by all participants, despite the unequivocal preference for directional processing by the field rater in the actual listening situation. This explanation requires considerably more study before being accepted. However, its plausibility is perhaps increased by the fact that the two DIR-preferred sites for which the field raters showed the least agreement between their laboratory and live field ratings (conference room, outside a building) were also the two sites for which the other 38 participants showed the least agreement.

An additional factor that may have influenced intersubject agreement for DIR preferences is individual differences in directional benefit and the SNR(s) at which this benefit is optimal. Recall that every HI participant had to obtain a DA of at least 15% to be enrolled in the study, and three SNRs were used to achieve this minimum criterion. The actual DA achieved ranged from 15% to 65% across the 30 HI participants. Walden et al (2005) demonstrated that individual HI listeners can achieve their maximum DA at different SNRs. It is likely that participants derived varying amounts of directional benefit in each of the 12 everyday listening environments (SNRs) included in the study. At the extreme, if a participant failed to derive a directional advantage at a given SNR where other participants benefited significantly from directional processing, that individual would be unlikely to show a strong preference for the directional mode. As a preliminary exploration of this issue, DA was compared with the percentage of DIR preferences across the DIR-preferred sites for the 30 HI participants. No significant relationship (r = -0.15, p = 0.46) was observed between these two variables, suggesting that the magnitude of the DA obtained in the laboratory was not a factor in the degree of agreement between the laboratory DIR preferences and the field preferences. This is in general agreement with the findings of Cord et al (2004), who observed that success with directional microphone hearing aids is not predicted by the magnitude of the DA obtained in clinical testing. However, this finding is limited by the use of different SNRs to obtain estimates of DA and the likelihood that SNR varied considerably across the 12 everyday listening environments. A more systematic examination of this issue is required before patient directional benefit, as measured by DA, can be eliminated as a potential factor in the robustness of directional microphone preferences.
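This preliminary comparison is a simple correlation between each HI participant's laboratory directional advantage and the proportion of DIR ratings that participant gave to the DIR-preferred site recordings. A sketch with scipy is shown below; the two arrays are hypothetical placeholders for the 30 participants' values, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical values for 30 HI participants:
# directional advantage (percentage points) measured in the laboratory, and
# percentage of DIR ratings given to recordings of the DIR-preferred sites.
directional_advantage = rng.uniform(15, 65, size=30)
percent_dir_preferences = rng.uniform(20, 100, size=30)

r, p = pearsonr(directional_advantage, percent_dir_preferences)
print(f"Pearson r = {r:.2f}, p = {p:.2f}")
```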

Implications for Hearing Aid Design

The results of this study provide only a partial answer to whether or not individual patient differences must be considered in the development of effective automatic signal processing algorithms. It appears that omnidirectional microphone preferences in daily listening are highly robust. Although this appears less true for directional preferences, it is the case that an omnidirectional preference was almost never expressed in the laboratory for a DIR-preferred site recording. It is probably reasonable to conclude that these data do not provide strong evidence that AD algorithms must be customized to reflect individual patient differences, although these data do not rule out this possibility altogether.

Beyond the issue of individual differences in microphone preferences, the results of this study appear to have additional implications for the development of effective AD algorithms, as well as for the potential benefits of directional microphone processing in general. Recall that all of the everyday listening environments in this study included background noise. Consistent with earlier field studies from this laboratory (Surr et al, 2002; Walden et al, 2004), it is clear that there are many noisy listening situations encountered in everyday living in which omnidirectional processing will be preferred by the listener. Hence, a simple AD decision rule that activates directional processing whenever noise is detected in the acoustic input will inevitably lead to errors in selecting the preferred microphone processing mode. Similarly, for manually switchable omnidirectional/directional hearing aids, a strategy in which the patient switches to the directional mode whenever noise is present will also lead to errors. Although the presence of noise (spatially separated from the signal source) is a necessary characteristic for directional benefit, other conditions must also exist in the environment related to the location and distance of the talker relative to the listener and/or the amount of reverberation present.

If one adopts a "scene analysis" approach to AD (see Walden et al, 2005, for a discussion of scene analysis versus direct comparison approaches to AD), it is clear that there are multiple characteristics of the listening environment that must be detected in the acoustic input in order for the algorithm to select the preferred microphone mode consistently, the presence of noise being but one. On the other hand, if a "direct comparison" approach is adopted in which the relative fidelity (e.g., SNR) of the two processing modes is compared as the basis for microphone selection, the specific (physical) characteristics of the listening environment need not be determined. However, the data of this study suggest that the relative fidelity of the acoustic input through the two microphone modes may not always definitively determine the preferred microphone mode in everyday listening. Perhaps more precisely, and in contrast to laboratory findings, the magnitude of the intelligibility difference (Walden et al, 2005) and the fidelity difference (Grant et al, forthcoming) through the two processing modes may not be monotonically related to the probability of a preference for one microphone mode or the other, due to the possible influence of visual cues and/or possible individual differences in how aggressively a patient wishes directional processing to be applied in everyday listening. Another potential limitation of direct comparison approaches to AD (and of many scene analysis strategies, as well) is that such strategies cannot account for listener intent; that is, which "signal" in the environment is of interest. As long as the hearing aid wearer turns to face the signal of interest, this potential problem will be minimized. However, when the signal of interest is not in front of the listener, which occurs often in everyday listening (Walden et al, 2004), direct comparison approaches to AD will be susceptible to errors in selecting the preferred microphone mode.
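To make the contrast concrete, the sketch below compares a naive noise-triggered rule with a hypothetical scene-analysis-style rule that also weighs an estimated SNR improvement and the estimated direction of the dominant talker. The feature names, thresholds, and decision logic are illustrative assumptions only; they are not the algorithm evaluated in this study or any manufacturer's implementation.

```python
from dataclasses import dataclass

@dataclass
class AcousticScene:
    """Hypothetical per-frame estimates an AD algorithm might compute."""
    noise_level_db: float         # estimated background noise level (dB SPL)
    estimated_snr_gain_db: float  # estimated SNR improvement of DIR over OMNI
    talker_azimuth_deg: float     # estimated direction of dominant talker (0 = front)

def naive_rule(scene: AcousticScene) -> str:
    # Switch to directional whenever noise is detected; the results above
    # imply this will often conflict with listener preference.
    return "DIR" if scene.noise_level_db > 55.0 else "OMNI"

def scene_analysis_rule(scene: AcousticScene) -> str:
    # Require noise, a worthwhile estimated SNR gain, and a roughly frontal talker.
    if (scene.noise_level_db > 55.0
            and scene.estimated_snr_gain_db >= 3.0
            and abs(scene.talker_azimuth_deg) <= 45.0):
        return "DIR"
    return "OMNI"

# Example: noisy restaurant, but the talker sits beside the listener.
scene = AcousticScene(noise_level_db=68.0, estimated_snr_gain_db=1.0,
                      talker_azimuth_deg=90.0)
print(naive_rule(scene), scene_analysis_rule(scene))  # DIR vs. OMNI
```

Even this richer rule cannot infer listener intent (which talker is of interest), which is the limitation noted above for both approaches.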

Whether one adopts a scene analysis or a direct comparison approach to AD, it is necessary for the hearing aid to sample and analyze the acoustic input at some point in the signal processing. The most convenient point is at the digital signal processor. This is the approach taken by most, if not all, implementations of AD to date. However, sampling the acoustic input at this point does not account for the spectrum shaping and other signal processing that occurs within the receiver, earmold, and ear canal of the hearing aid wearer. Recall that the recordings of the everyday listening environments in this study were made at the output of the digital signal processor, before transduction by the receiver. That the field raters were able to replicate their live microphone preferences from the laboratory recordings with a high degree of accuracy (essentially perfect accuracy for the OMNI-preferred sites) suggests that it is not necessary to consider receiver, earmold, and/or ear canal influences in automatic signal processing algorithms; that is, the input signal can be sampled and analyzed at the digital signal processor stage. Although alterations to the signal will occur as a result of receiver transduction and earmold and ear canal acoustics, there is little reason to believe that these changes would be large enough to result in a reversal of the preferred processing mode in a particular listening environment.

Acknowledgments. Local monitoring of this study was provided by the Department of Clinical Investigation, Walter Reed Army Medical Center, Washington, DC, under Work Unit No. 04-25017. All persons included in this study volunteered to participate and provided informed consent. The opinions and assertions presented are the private views of the authors and are not to be construed as official or as necessarily reflecting the views of the Department of the Army, the Department of Defense, or GN ReSound.


REFERENCES

Burkhard MD, Sachs RM. (1975) Anthropometric manikin for acoustic research. J Acoust Soc Am 58:214–222.

Cord MT, Surr RK, Walden BE, Dyrlund O. (2004) Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. J Am Acad Audiol 15:353–364.

Frye GJ. (2006) How to verify directional hearing aids in the office. Hear Rev 13(1):48–49, 52, 54.

Grant KW, Braida LD. (1991) Evaluating the articulation index for audiovisual input. J Acoust Soc Am 89:2952–2960.

Grant KW, Elhilali M, Shamma SA, Walden BE, Surr RK, Cord MT, Summers V. (Forthcoming) An objective measure for selecting microphone modes in OMNI/DIR hearing-aid circuits. Ear Hear.

Institute of Electrical and Electronic Engineers. (1969) IEEE recommended practice for speech quality measures. IEEE Trans Audio Electroacoustics 17(3):225–246.

Jerger J. (1970) Clinical experience with impedance audiometry. Arch Otolaryngol 92:311–324.

Kochkin S. (2003) MarkeTrak VI: on the issue of value: hearing aid benefit, price, satisfaction, and brand repurchase rates. Hear Rev 10(2):12, 14, 16, 18, 20, 22–24, 26.

Ricketts TA, Hornsby WY. (2003) Distance and reverberation effects on directional benefit. Ear Hear 24:472–484.

Sumby WH, Pollack I. (1954) Visual contribution to speech intelligibility in noise. J Acoust Soc Am 26:212–215.

Surr RK, Walden BE, Cord MT, Olson L. (2002) Influence of environmental factors on hearing aid microphone preferences. J Am Acad Audiol 13:308–322.

Valente M, Van Vliet D. (1997) The Independent Hearing Aid Fitting Forum (IHAFF) protocol. Trends Amplif 2:6–35.

Walden BE, Grant KW. (1993) Research needs in rehabilitative audiology. In: Alpiner JG, McCarthy PA, eds. Rehabilitative Audiology: Children and Adults. 2nd ed. Baltimore: Williams and Wilkins, 500–528.

Walden BE, Grant KW, Cord MT. (2001) Effects of amplification and speechreading on consonant recognition by persons with impaired hearing. Ear Hear 22:333–341.

Walden BE, Surr RK, Cord MT, Dyrlund O. (2004) Predicting hearing aid microphone preference in everyday listening. J Am Acad Audiol 15:365–396.

Walden BE, Surr RK, Grant KW, Summers WV, Cord MT, Dyrlund O. (2005) Effect of signal-to-noise ratio on directional microphone benefit and preference. J Am Acad Audiol 16:662–676.


Appendix 1. Hearing Aid Use Log for Recording

Name:___________________________________ Time:__________ Date:___________ Clinician: ____________

For this person OMNI is: ___ Program 1 ___ Program 2

1. Describe the listening situation: _______________________________________________________________

2. Program preference: ___ Program 1 ___ No Preference ___ Program 2

3. Where was the primary talker/sound located? ___ Front ___ Side ___ Back

4. How far was the primary talker/sound? ___ Less than 3 feet ___ 3–10 feet ___ More than 10 feet

5. Was noticeable background noise present? ___ Yes ___ No

6. If yes, was the noise? ___ Soft ___ Moderate ___ Loud

7. Describe the noise: _______________________________________________________________

8. After the recording, program preference: ___ Program 1 ___ No Preference ___ Program 2

9. Reasons for preference: _______________________________________________________________

