
DEVELOPMENT ARTICLE

Sound’s use in instructional software to enhance learning: a theory-to-practice content analysis

M. J. Bishop · Tonya B. Amankwatia · Ward Mitchell Cates

Education Tech Research Dev (2008) 56:467–486
DOI 10.1007/s11423-006-9032-3
Published online: 27 February 2007
© Association for Educational Communications and Technology 2007

Abstract  Sound may hold great promise for instructional software by supporting learning in a variety of ways. Conceptual and preconceptual barriers, however, still appear to prevent software designers from using sound more effectively in their instructional products. Interface books seldom discuss the use of sound and, when they do, it is most often simple verbatim narration of on-screen text. This content analysis of 12 award-winning instructional software products indicated that, while sound is being incorporated into many learning environments, many instructional designers are using sound only for literal information conveyance and are not yet exploring how to exploit the associative potential of music, sound effects, and narration to help learners process the material under study more deeply.

Keywords  Human audition · Instructional design · Learning environments · Use of sound

M. J. Bishop · T. B. Amankwatia · W. M. Cates
College of Education, Teaching, Learning, and Technology, Lehigh University,
111 Research Drive, Room A109, Bethlehem, PA 18015, USA
e-mail: [email protected]

Introduction

For those who are not hearing impaired, auditory information participates fundamentally in the development of knowledge (McAdams & Bigand, 1993).

Music’s artistry and affect concisely convey setting and mood (Bigand, 1993), environmental sounds help us to interpret the world around us (Deutsch, 1986), and pauses and intonational speech patterns provide syntactic cues not easily conveyed through text (Carr, 1986). Given these facts, it seems that sound may hold great promise for instructional software. Sounds can gain and focus learners’ attention, reduce distraction from competing stimuli, engage interest over time, and make learning environments more tangible and emotionally arousing (Kohfeld, 1971; Posner, Nissen, & Klein, 1976; Thomas & Johnston, 1984). Sounds can help learners condense, elaborate upon, and organize details about their surroundings, helping them see interconnections among new pieces of information (Harmon, 1988; Perkins, 1983; Winn, 1993; Yost, 1993). Sounds can provide a familiar context that may help learners relate incoming information to existing knowledge (Gaver, 1993a, 1993b).

Unfortunately, software designers seeking theoretical and conceptual direction for using sound to facilitate learning will find, as Barron and Kysilka (1993) did more than a decade ago, that there are still very few guidelines available. Older well-known instructional and interface design books contained sparse information on theoretical uses of sound for instruction (see Bickford, 1997; Hannafin & Peck, 1988; Jonassen, 1988; Mandel, 1997; The Windows interface guidelines for software design, 1995). And it appears newer texts have added little additional guidance on sound’s use. For example, Designing Effective Instruction (Morrison, Ross, & Kemp, 2001) included a section on designing with graphics but nothing on sound. A still newer text, The Essential Guide to User Interface Design: An Introduction to GUI Design Principles and Techniques (Galitz, 2002), dedicated only 4 of 730 pages to sound’s instructional use, addressing only sound’s potential role in supplying verbal redundancy and facilitating dual coding. And, while Clark and Mayer’s e-Learning and the Science of Instruction (2003) devoted 29 of 293 pages to sound’s use, the only sound type considered in 25 of those pages was speech. The remaining four pages focused on avoiding the use of "extraneous" background music and environmental sound effects, without suggesting ways in which non-speech sounds might be used to enhance learning. Generally, the authors of instructional design guidelines seem to recommend that sound’s major function—other than supplying occasional bells and whistles to gain attention—should be either to narrate screen text or to provide stand-alone audio examples (like a musical performance or an historical speech).

Further compounding the problem for designers, argued Barron (2003), are inconclusive research findings on the effectiveness of multi-channel communication and cue summation (see Barron, 1995; Barron & Atkins, 1994; Barron & Kysilka, 1993; Lauret, 1998; Nasser & McEwen, 1976; Severin, 1967a, 1967b; Van Mondfrans & Travers, 1964). And, while the literature has offered a plethora of relevant media-comparison studies on the use of sound in computerized instruction, these studies fall short of organizing and synthesizing various findings in a way that suggests practical guidelines for designing with sound (see, for example, Koroghlanian & Klein, 2000; Mayer & Anderson, 1991; Mayer & Moreno, 1998; Moreno & Mayer, 2000; Newby, Cook, & Merrill, 1988). While some multimedia research studies have suggested or inferred strategies for integrating sound generally, they offer few systematic design guidelines for incorporating sound—particularly music and sound effects—in ways that might improve learning (see, for example, Deatherage, 1972; Hereford & Winn, 1994; Lee & Owens, 2000; Najjar, 1998). According to Clark and Mayer (2003), more research is needed to determine whether sound might contribute more to learning than simply helping to reduce visual processing load when necessary.

Recognizing the need to provide a more complete picture of sound’s instructional potential, the first author created a framework for thinking systematically about designing instruction with sound (see Table 1; Bishop, 2000; Bishop & Cates, 2001). Combining information-processing and communication theories, the framework’s nine cells supply strategies for how narration, sound effects, and music might be used more effectively within the "instructional communication system" to facilitate information processing (see acquisition, processing, and retrieval columns) at each level of learning (see selection, analysis, and synthesis rows). Following the cells vertically down the information-processing columns, the framework anticipates deepening attentional, organizational, and relational difficulties at each subsequent phase of learning (top to bottom). When tracing the cells horizontally across the learning phases, the framework similarly anticipates waning interest, curiosity, and engagement at each deeper level of processing (left to right).

Table 1  Sound-use design strategies framework

Selection (Interested)
  Acquisition:  1. Use sound to gain attention: employ novel, bizarre, and humorous auditory stimuli
  Processing:   2. Use sound to isolate information: group or simplify content information conveyed to help learners isolate and disambiguate message stimuli
  Retrieval:    3. Use sound to tie into previous knowledge: recall learners’ memories and evoke existing schemas

Analysis (Curious)
  Acquisition:  4. Use sound to focus attention: alert learners to content points by showing them where to exert information-processing effort
  Processing:   5. Use sound to organize information: help learners differentiate among content points and create a systematic auditory syntax for categorizing main ideas
  Retrieval:    6. Use sound to build upon existing knowledge: situate the learning within real-life or metaphorical scenarios

Synthesis (Engaged)
  Acquisition:  7. Use sound to hold attention over time: immerse learners by making them feel the content is relevant, by helping to make it more tangible, and by bolstering learner confidence
  Processing:   8. Use sound to elaborate upon information: supplement the content by supplying auditory images and mental models
  Retrieval:    9. Use sound to prepare knowledge for later use: help learners transfer knowledge to new learning situations by building useful additions to overall knowledge structures

Column roles: Acquisition = Attend (content support); Processing = Organize (context support); Retrieval = Relate (construct support)

Thus, when one traces the first, selection-level row of cells horizontally across the information-processing stages, the framework suggests that learner interest may be captured by an instructional message that employs sound to gain attention with novelty (cell 1), to isolate information through increased salience (cell 2), and to tie into previous knowledge by evoking existing schemas (cell 3). Similarly, learner curiosity might be aroused using sound to focus attention by pointing out where to exert information-processing effort (cell 4), to organize information by differentiating between content points and main ideas (cell 5), and to build upon existing knowledge by situating the material under study within real-life or metaphorical scenarios (cell 6). Likewise, a learner’s level of engagement might be increased using sounds to hold attention over time by making the lesson more relevant (cell 7), to elaborate upon information by supplying auditory images and mental models (cell 8), and to prepare knowledge for later use by providing additional knowledge structures that might be useful in subsequent learning (cell 9). When designed systematically into the instruction in this way, sound might supplement instructional messages with the additional content, context, and construct support necessary to overcome many of the acquisition, processing, and retrieval problems one might encounter while learning.
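Because the framework is, at bottom, a 3 x 3 matrix, designers who work in code may find it easiest to treat it as a small lookup table. The sketch below is our own illustration, not something the framework prescribes: the strategy summaries paraphrase Table 1, and the `strategy` helper is a hypothetical name.

```python
# A minimal sketch (ours, not the authors'): Table 1's nine sound-use
# strategies as a lookup keyed by (learning level, processing phase).
PHASES = ("acquisition", "processing", "retrieval")   # content/context/construct support
LEVELS = ("selection", "analysis", "synthesis")       # interest/curiosity/engagement

FRAMEWORK = {
    ("selection", "acquisition"): "1. Gain attention (novel, bizarre, humorous stimuli)",
    ("selection", "processing"):  "2. Isolate information (group/simplify message stimuli)",
    ("selection", "retrieval"):   "3. Tie into previous knowledge (evoke existing schemas)",
    ("analysis",  "acquisition"): "4. Focus attention (show where to exert processing effort)",
    ("analysis",  "processing"):  "5. Organize information (systematic auditory syntax)",
    ("analysis",  "retrieval"):   "6. Build on existing knowledge (real-life/metaphorical scenarios)",
    ("synthesis", "acquisition"): "7. Hold attention over time (relevance, tangibility, confidence)",
    ("synthesis", "processing"):  "8. Elaborate upon information (auditory images, mental models)",
    ("synthesis", "retrieval"):   "9. Prepare knowledge for later use (transferable structures)",
}

def strategy(level: str, phase: str) -> str:
    """Return the framework cell for a given learning level and processing phase."""
    return FRAMEWORK[(level, phase)]

if __name__ == "__main__":
    print(strategy("analysis", "processing"))  # -> cell 5
```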

While this model may make sense intuitively, it was based on theory, not on direct observations or empirical research. Britt (1997) maintained that—through simplification, explicitness, and reformulation—a theory-based model provides an effective way to sort out the chaos of systems that are too complex to deal with directly. Because explicit systems models can show the repeating patterns and relationships among the parts, they can help one understand the true complexity of the problem or situation. Sinha and Kuszta (1983) and Salisbury (1996) argued that to be useful, however, a model must represent all of the system’s components and the relationships between them simply enough to be understandable. The model must reduce complexity and ambiguity sufficiently so as to make analysis and the prediction of system behavior possible. But simplifying real-world complexity poses a dilemma. If a model is too simplistic, the relationship of the model to its real-world counterpart becomes tenuous. When this occurs, predictions of system behavior based on the model can be grossly inaccurate. Thus, Fiske (1990) reminded us that we cannot expect to model a system precisely. Modeling is not about precision but, instead, about tentatively determining which things are important to consider in capturing the essence of the system. A model cannot provide final answers. As knowledge is accumulated and relevant areas of the modeled system are clarified, the model is almost always modified or superseded.

In order to begin testing the assumptions made in the model, we reasoned that it should be compared against what is known about "current practice" (Cates, 1985). We hoped this examination might help to explicate the model and to identify more specific ways sound might be used to improve the effectiveness and the efficiency of the instructional communication system. The purpose of this study, therefore, was to examine the current instructional role of sound in educational software and to determine whether the strategies proposed by the theoretical framework are being reflected in practice. The research questions guiding this study were:

1. What is sound’s current role in instructional software?
2. To what extent are the instructional strategies from the framework currently employed in instructional software, if at all?
3. Is sound currently being used to enhance learning in ways that fall outside of the framework?

Methodology

To answer these questions, we employed both qualitative and quantitative content analysis methodologies. Content analyses are commonly used in educational research to describe specific characteristics of written, visual, and auditory instructional materials (Gall, Gall, & Borg, 1999; Krippendorff, 1980). The steps in a content analysis typically involve: (a) developing instruments for recording the verbal or symbolic content; (b) deciding on the sampling plan to be used; (c) training the coder(s) to consistently apply the coding scheme; and (d) collecting and analyzing the data (Ary, Jacobs, & Razavieh, 2002). Below, we discuss each of these steps within the context of this study.

Recording instruments

In order to ensure that data would be collected and coded systematically, we developed three forms along with detailed procedures for recording and coding both qualitative and quantitative data (Potter & James, 1999). Each of the forms used in this study and detailed procedures for how we used them are available at http://www.lehigh.edu/~mjba/SoundCA/.

On our Background Information Form we documented details such as the program’s name and version number, system requirements, publisher, publication date, pricing information, delivery method (CD or Web-based), package contents (if applicable), and any supplementary materials that either came with the software or were required outside of the software. In addition, this form was used to record publishers’ statements, when available, about targeted audience, goals and objectives, and necessary prerequisite skills. Lastly, we used this form to track the operating system used to review the software and whether it was reviewed using a student, teacher, or administrator login.


Our more open-ended Narrative Description Form documented the start and stop times for the review, the software and lesson title, and lesson topics and objectives. It also supplied ample room for a descriptive narrative overview. This form was used to record rich, thick descriptions of the software and to document any additional instructional strategies for sound’s use discovered during the observations (Altheide, 1996).

Lastly, our nine-page Frequency Form provided a checklist-style means of making detailed observations for each cell of the nine-cell framework (Potter & James, 1999). Working together, we derived indicators for what each of the sound-design strategies within each cell might look like for each of the three sound types (music, sound effects, and narration). The form supplied check boxes next to each indicator that allowed our observer (the second author) to record its presence in the software. This form also provided space to record any additional indicators or any additional strategies that might be found during the observations.

We practiced using prototype forms on titles outside the sample to check for problems and to assure that each instance in which a strategy was employed could be easily and reliably recorded. Based on those practice sessions, we made only a few minor formatting revisions to the forms.

Sampling plan

In addition to developing coding procedures, content analysis requires a plan for sampling the media to be studied. We used criterion-based purposeful sampling (Patton, 2001) to select a small sample of "information-rich cases" that would produce valid results with less time and effort (Berelson, 1952). We chose software from our observer’s area of expertise, K-12 language arts, because some consider purposeful sampling a more effective technique when the researcher is familiar with all the elements of the study, including the content and nature of the audience (Budd, Thorp, & Donohew, 1967). In addition, we reasoned that software aimed at a younger audience was likely to incorporate all three types of sound more extensively, which would give us a better picture of the state of the art in sound design practice.

In order to assure that the software reviewed was of high quality and actually being used by educators, we drew our sample from the Comprehensive Courseware 2003–2004 winning software programs selected by Quality Education Data, Inc. (2003). Quality Education Data selected the four winning instructional units based on responses from 72% of the 446 district-level technology coordinators surveyed about their comprehensive courseware purchases for 2003 and projections for 2004. Because the programs selected by Quality Education Data were made up of multiple lessons, we decided to limit the number of lessons observed to 12—three from each of the four instructional units—that represented a maximum variation of grade levels and language arts topics. Table 2 lists our sample of four instructional units, the lesson titles chosen for observation, and the targeted grade levels.


Coder training

In addition to addressing concerns over coding and sampling procedures, single-observer content analysis also requires that the coder be trained for stability (content coded consistently over time) and accuracy (content coded in conformity with a standard) (Weber, 1985). Working together, we practiced using the finalized observational forms, including checking the accuracy of classifications. Furthermore, to check for stability, we revisited one practice lesson after a few weeks’ interval, randomly selected parts of all 12 sample lessons after a 3-month interval, and one entire lesson after a 5-month interval. Our observer also maintained a research journal that documented her practice reviews for future training purposes involving multiple coders.

Data collection/analysis

This study employed an observer–participant method for collecting data (Merriam, 1998). While working through the 12 lessons as a student might, our observer collected qualitative and quantitative data through simultaneous and deferred recording, a practice Best and Kahn (1993) recommended to minimize observation memory lapses and address interpretation objectivity concerns. In this way, our observer initially worked through each lesson and, using the Narrative Description Form, transcribed and described in detail every sound she heard as well as every other aspect of the software, including navigation scheme, screen layout, and learner interactions. After our observer proofread and, when necessary, revised descriptions for clarity and completeness, the forms were printed so that every sound could be identified, color coded according to sound type (music, sound effects, and narration), and assigned a unique numeric code. Music (green) was identified as any auditory message that incorporated instrumental or vocal tones in a structured and sustained manner. Sound effects (red) were any artificially created or enhanced sounds not involving speech or music. Human or synthetic speech was categorized as narration (yellow).

Table 2  Lessons analyzed from the four award-winning instructional units

Instructional unit (and company)                      Lessons analyzed                                 Targeted grade levels
SuccessMaker (Pearson Digital Learning)               Reading Readiness (2000)                         PreK–K
                                                      Initial Reading (2000)                           K–2nd
                                                      My Friend Leslie (2000)                          3rd–6th
Cornerstone (Riverdeep)                               Capitalization L5 (2000)                         3rd–4th
                                                      Spelling L7 (2000)                               3rd–4th
                                                      Usage L10 (2000)                                 3rd–4th
Read 180: Show Me the Money! (Scholastic)             Bogus Bills (2004)                               6th–12th
                                                      Fighting Forgery (2000)                          6th–12th
                                                      Making Money (2000)                              6th–12th
PLATO Web Learning Network (Plato Learning Systems)   Keeping Pronouns Consistent (2000)               7th–9th
                                                      Locating What’s Important in Literature (2000)   7th–9th
                                                      Using Examples to Clarify Your Ideas (2000)      7th–9th

During the next pass through each lesson, our observer used the Frequency Form to preliminarily record observations of sounds that fit or did not fit within a particular sound-use strategy from the framework. Using the narrative descriptions, our observer then augmented the preliminary Frequency Form observations by specifying which existing or new indicator was represented by the sound event and by transferring impressions and comments about relevant contextual observations that surrounded the sound event. If a coded sound fit an indicator, its number was assigned to that cell. If it fit the analytic construct of the cell but not a predetermined indicator, our observer added a new indicator and assigned the sound’s number to that cell. If a sound fit more than one strategy, she put it in each appropriate cell and circled it. If a coded sound did not fit within any cell, it was recorded on the back of the form and described.
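For readers who want the tallying logic in one place, the following sketch is our own reconstruction of the bookkeeping just described, not the authors’ actual instruments: the `SoundEvent` record and `tally` helper are hypothetical names, but the rules mirror the coding procedure (one count per cell, with sounds that fit several strategies recorded in each cell they fit).

```python
# A minimal sketch (assumed data shapes, not the authors' actual forms):
# each coded sound event carries a lesson, a sound type, and the framework
# cell(s) it was judged to fit; absolute frequencies are tallied from there.
from collections import Counter
from dataclasses import dataclass, field

SOUND_TYPES = ("music", "sound_effect", "narration")

@dataclass
class SoundEvent:
    lesson: str
    sound_type: str                                   # one of SOUND_TYPES
    cells: list[int] = field(default_factory=list)    # framework cells 1-9 (may be several)

def tally(events: list[SoundEvent]) -> Counter:
    """Count events per (cell, sound_type); an event fitting several cells counts in each."""
    counts: Counter = Counter()
    for event in events:
        for cell in event.cells:
            counts[(cell, event.sound_type)] += 1
    return counts

if __name__ == "__main__":
    sample = [
        SoundEvent("Bogus Bills", "narration", [1]),
        SoundEvent("Bogus Bills", "sound_effect", [1, 7]),  # "circled": fits two strategies
        SoundEvent("Bogus Bills", "music", [5]),
    ]
    print(tally(sample))
```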

Three months later, our observer made a final pass through a random segment of every lesson to confirm and validate her observations. Given the relatively few changes she made as a result of this final pass, we determined she had achieved stability in her observations (Patton, 2001).

Findings

In this way, sound’s current instructional role was observed and documented on over 150 forms that comprised 3,026 coded instances of narration, sound effects, and music. Each of those instances fell within the theoretical framework’s strategies. Once all the observations were made, the number of coded sound events from the form-based observations for each strategy and sound type was tabulated to yield absolute frequencies.

Summary of findings by sound type

Table 3 supplies sound event totals and subtotals by lesson, unit, and sound type. As can be seen, all 12 of the lessons analyzed used at least one type of sound (sound effects, narration, or music). Lessons from the READ 180: Show Me the Money unit made the most extensive use of sound. In particular, Fighting Forgery from the Show Me the Money unit had the most sound events by far, with 44.55% of the total sound events across all lessons (1,348 out of 3,026). Further, the Show Me the Money unit also used the greatest variety of sound types within its lessons (music = 5.06%, sound effects = 27.54%, narration = 67.40%) and the lowest relative percentage of narration events (67.40%, as compared to SuccessMaker = 91.64%, Cornerstone = 91.06%, PLATO = 100%). Across all the lessons observed, music was used least frequently (115 times, or 3.80% of the total sound events), sound effects more frequently (619, or 20.46%), and narration most frequently (2,292, or 75.74%).
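The relative frequencies quoted above and in Table 3 follow directly from the absolute counts. As a quick worked check (our arithmetic, using the published totals as inputs): Fighting Forgery’s 1,348 events out of 3,026 is 1,348 / 3,026 ≈ 44.55%, and narration’s 2,292 events out of 3,026 is ≈ 75.74%. The short sketch below reproduces Table 3’s "% within lesson" and "% of total" columns from raw counts; the variable names are our own.

```python
# A minimal sketch reproducing Table 3's percentage columns from raw counts.
# Counts below are the published Bogus Bills figures (music, fx, narration).
lesson_counts = {"music": 21, "sound_effect": 103, "narration": 322}
grand_totals = {"music": 115, "sound_effect": 619, "narration": 2292}

lesson_total = sum(lesson_counts.values())    # 446 events in this lesson
grand_total = sum(grand_totals.values())      # 3,026 events in the whole sample

for stype, n in lesson_counts.items():
    within = 100 * n / lesson_total           # "% within lesson"
    of_total = 100 * n / grand_totals[stype]  # "% of total" for that sound type
    print(f"{stype:>12}: {within:5.2f}% within lesson, {of_total:5.2f}% of type total")
# narration -> 72.20% within lesson, 14.05% of narration total (matches Table 3)
```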


Table 3  Sound event totals and subtotals by lesson, unit, and sound type
(M = music, Fx = sound effects, N = narration, T = total)

Unit / Lesson                       Measure              M       Fx       N        T
Read 180: Show Me the Money!
  Bogus Bills                       Frequency           21      103      322      446
                                    % within lesson   4.71    23.09    72.20   100.00
                                    % of total       18.26    16.64    14.05    14.74
  Fighting Forgery                  Frequency           71      358      919     1348
                                    % within lesson   5.27    26.56    68.18   100.00
                                    % of total       61.74    57.84    40.10    44.55
  Making Money                      Frequency           12      105      144      261
                                    % within lesson   4.60    40.23    55.17   100.00
                                    % of total       10.43    16.96     6.28     8.63
  Unit totals                       Frequency          104      566     1385     2055
                                    % within lesson   5.06    27.54    67.40   100.00
                                    % of total       90.43    91.44    60.43    67.91
Cornerstone
  Capitalization L5                 Frequency            3        9       65       77
                                    % within lesson   3.90    11.69    84.42   100.00
                                    % of total        2.61     1.45     2.84     2.54
  Spelling L7                       Frequency            2        8      249      259
                                    % within lesson   0.77     3.09    96.14   100.00
                                    % of total        1.74     1.29    10.86     8.56
  Usage L10                         Frequency            5       12       83      100
                                    % within lesson   5.00    12.00    83.00   100.00
                                    % of total        4.35     1.94     3.62     3.30
  Unit totals                       Frequency           10       29      397      436
                                    % within lesson   2.29     6.65    91.06   100.00
                                    % of total        8.70     4.68    17.32    14.41
Success Maker
  Reading Readiness                 Frequency            0        0       26       26
                                    % within lesson   0.00     0.00   100.00   100.00
                                    % of total        0.00     0.00     1.13     0.86
  Initial Reading                   Frequency            0        0       23       23
                                    % within lesson   0.00     0.00   100.00   100.00
                                    % of total        0.00     0.00     1.00     0.76
  My Friend Leslie                  Frequency            1       24      225      250
                                    % within lesson   0.40     9.60    90.00   100.00
                                    % of total        0.87     3.88     9.82     8.26
  Unit totals                       Frequency            1       24      274      299
                                    % within lesson   0.33     8.03    91.64   100.00
                                    % of total        0.87     3.88    11.95     9.88
PLATO Web Learning Network
  Keeping Pronouns Consistent       Frequency            0        0       10       10
                                    % within lesson   0.00     0.00   100.00   100.00
                                    % of total        0.00     0.00     0.44     0.33
  Locating What’s Important
  in Literature                     Frequency            0        0      130      130
                                    % within lesson   0.00     0.00   100.00   100.00
                                    % of total        0.00     0.00     5.67     4.30
  Using Examples to Clarify
  Your Ideas                        Frequency            0        0       96       96
                                    % within lesson   0.00     0.00   100.00   100.00
                                    % of total        0.00     0.00     4.19     3.17
  Unit totals                       Frequency            0        0      236      236
                                    % within lesson   0.00     0.00   100.00   100.00
                                    % of total        0.00     0.00    10.30     7.80
Total                               Frequency          115      619    2,292    3,026
                                    % of total        3.80    20.46    75.74   100.00


Use of music

In every case, music was the least used sound type: of the 12 lessons, only 7 used music at all. Of those seven, in the lesson that used music most, Fighting Forgery (61.74% of the sample total), music still comprised only 5.27% of the total number of sound events. Further, the few times music was used in the observed lessons, it was implemented in a very limited way. For example, the My Friend Leslie lesson utilized music only during the introduction; the three lessons from the Cornerstone unit played music only during the final assessment activity. Only the three lessons from the Show Me the Money unit employed music systematically, by playing upbeat musical themes with a reggae beat in the background of videos and during major screen transitions. This is reflected in the fact that the Show Me the Money unit made up 90.43% of the sample’s music events total (104 of 115).

Use of sound effects

Like music, sound effects were used in only 7 of the 12 lessons observed, with the Fighting Forgery lesson once again leading the way (57.84% of the sample total; see Table 3). However, in terms of the relative frequency of sound effects events within a lesson as compared to music and narration, Making Money was highest, with 40.23% of its lesson total (105 out of 261). Like music, sound effects’ use was also limited—typically relegated only to gaining attention and accompanying congratulatory feedback for interactive items. In this case, however, My Friend Leslie distinguished itself from the other lessons by using sound effects during screen transitions as well. And, as before, the three Show Me the Money lessons had the highest and most varied utilization of sound effects, once again scoring the highest unit totals at 91.44% of the sample’s sound effects events total (566 of 619).

Use of narration

All 12 lessons in the sample used some form of narration, with 5 of the 12 lessons using narration exclusively (100% of total sound events within lesson). As was the case for music and sound effects, Fighting Forgery also had the highest percentage of narration use overall (40.10% of the sample total). Unlike music and sound effects, narration was not limited to just certain parts of lessons but was widely implemented throughout the instruction, either with or without textual redundancy. Narration delivered content and concept explanations, explained how to use the interface, solicited learner input and skill demonstrations, provided feedback, reviewed learning strategies, and encouraged their future use. Once again, the Show Me the Money unit scored highest across the sample for the frequency of narration events at 60.43% (1,385 of 2,292).


Summary of findings by design strategy employed

Table 4 reports absolute and relative frequency totals for the entire sample organized by the design strategy employed from the nine-cell sound-use framework. For every strategy, narration was the most commonly implemented sound type. Of the nine cells, sound was implemented most extensively to gain attention (Cell 1, 32.2% of all sound events), followed by hold attention (Cell 7, 17.7%), then tie into previous knowledge (Cell 3, 15.1%). Music was used most often to organize information (Cell 5, 47.8% of all music events), followed by direct attention (Cell 1, 37.4%), then hold attention (Cell 7, 9.6%). Sound effects were used almost exclusively for attentional purposes, with 56.5% of all sound effects events aimed at directing attention (Cell 1) and 35.4% aimed at holding attention (Cell 7). Sound effects were also used to some extent to help organize information (Cell 5, 6.5%). Narration was used primarily to direct learners’ attention (Cell 1, 25.3% of all narration events); however, it was also used quite frequently to tie into previous knowledge (Cell 3, 19.9%).

Use of sound to support acquisition, processing, and retrieval

Looking down each of the columns in Table 4, one finds that the large majority of sound events were aimed at supporting content acquisition by directing, focusing, and holding attention. Of all 3,026 sound events observed, 1,823 (60.2%) fell into this category, with the largest proportion of these content acquisition support sounds being narration (1,185, or 65.0% of all sounds in the column). Interestingly, sound was used less extensively to support context processing (547, or 18.1% of total sound events) than it was to support construct retrieval (656, or 21.7% of total). That said, the use of sound to support retrieval was almost entirely dominated by narration (655, or 99.8% of all sounds in the column), whereas music and sound effects did play a larger role in the support of processing (55, or 10.1%, and 40, or 7.3%, respectively).

Use of sound to facilitate selection, analysis, and synthesis

As one examines the rows of Table 4, it appears sound was used most extensively to facilitate selection by helping learners direct attention, isolate information, and tie into previous knowledge. Among all sound events across the 12 lessons, 1,585 (52.4%) facilitated selection; and, once again, the largest proportion of these selection sounds was narration (1,191, or 75.1% of all sounds in the row). The frequency with which sound was used to facilitate analysis and synthesis was almost equal (721 and 720, respectively, or 23.8% of all sounds in each row). Here again, the distribution of sound types in these categories is interesting in that there was a higher percentage of sound effects used to facilitate synthesis than at any other level of learning (219, or 30.4%, as compared to 22.1% for selection and 6.9% for analysis).


Table 4  Absolute and relative frequencies across lessons organized by sound-use design strategies
(M = music, Fx = sound effects, N = narration, T = total)

Selection      Acquisition (content support)   Processing (context support)    Retrieval (construct support)      Subtotal
               1. Direct attention             2. Isolate information          3. Tie into previous knowledge
                  M    Fx     N     T             M    Fx     N     T             M    Fx     N     T                M     Fx      N      T
  Frequency      43   350   580   973             0     0   155   155             1     0   456   457               44    350  1,191  1,585
  % of row      4.4  36.0  59.6 100.0           0.0   0.0 100.0 100.0           0.2   0.0  99.8 100.0              2.8   22.1   75.1  100.0
  % of total   37.4  56.5  25.3  32.2           0.0   0.0   6.8   5.1           0.9   0.0  19.9  15.1             38.3   56.5   52.0   52.4

Analysis       4. Focus attention              5. Organize information         6. Build upon existing knowledge
  Frequency       5    10   300   315            55    40   207   302             0     0   104   104               60     50    611    721
  % of row      1.6   3.2  95.2 100.0          18.2  13.2  68.5 100.0           0.0   0.0 100.0 100.0              8.3    6.9   84.7  100.0
  % of total    4.3   1.6  13.1  10.4          47.8   6.5   9.0  10.0           0.0   0.0   4.5   3.4             52.2    8.1   26.7   23.8

Synthesis      7. Hold attention               8. Elaborate upon information   9. Prepare knowledge for later use
  Frequency      11   219   305   535             0     0    90    90             0     0    95    95               11    219    490    720
  % of row      2.1  40.9  57.0 100.0           0.0   0.0 100.0 100.0           0.0   0.0 100.0 100.0              1.5   30.4   68.1  100.0
  % of total    9.6  35.4  13.3  17.7           0.0   0.0   3.9   3.0           0.0   0.0   4.1   3.1              9.6   35.4   21.4   23.8

Subtotal                                                                                                         Totals
  Frequency      59   579 1,185 1,823            55    40   452   547             1     0   655   656              115    619  2,292  3,026
  % of row      3.2  31.8  65.0 100.0          10.1   7.3  82.6 100.0           0.2   0.0  99.8 100.0              3.8   20.5   75.7  100.0
  % of total   51.3  93.5  51.7  60.2          47.8   6.5  19.7  18.1           0.9   0.0  28.6  21.7            100.0  100.0  100.0  100.0


Limitations of the study

This descriptive study explored sound’s current use by employing content analysis techniques to examine a fairly homogeneous sample of instructional software from only four different software companies. Because the examination did not involve learners, we cannot confirm the effectiveness of the sound-use implementation strategies with regard to how much and what kinds of audio support improve student learning. Further, the inferences made here about sound’s intended purpose within these lessons were based on our observer’s subjective understanding and experience and not on consultations with the software designers. All these facts about the present study clearly limit the generalizability of our findings and any conclusions that we might draw from them.

Despite these limitations, however, we think this examination can serve as a critical first step toward the examination of audio in instructional technologies. The findings might help to elucidate the current state of practice in sound’s use and reveal ways sound’s role might be expanded to improve the effectiveness of instructional software. So, rather than make assertions that arguably cannot be supported by the evidence from this study, in the next sections we will use the findings to begin "coaxing out" some of the potential barriers and opportunities for sound’s use in computerized instructional materials.

Barriers to sound’s use

Based on our observations of the lessons in this sample, it appears that sound was not being used to support learning very extensively beyond the most basic information-processing and instructional communication levels. Further, when designers did incorporate sound into the instruction, they mostly just used narration to "sonify" what might have otherwise been done with text. Like the authors of instructional design guidelines, it appears designers of computerized instruction may not be thinking about how sound might be incorporated more systematically to enhance learning—other than gaining attention or narrating instructional events that used to be handled by screen text. Table 5 illustrates the limited ways in which the designers of the software in our sample used music and sound effects to support learning.

In sharp contrast, computer game developers have been aggressively integrating sound into their applications for some time. When Creative Labs’ SoundBlaster card was first released in 1990, it came equipped with a joystick port and was bundled with several audio-enhanced games, suggesting the close link between sound and gaming (Creative Labs, 1998). Shortly after that, games like Myst (1994), Doom (1994), and The 7th Guest (1992) began employing large audio production teams to design high-quality environmental sound effects that are used extensively. LucasArts’ The Dig (1995) combined environmental sound effects with eerie Wagnerian background music to make the interface believable and to arouse emotions. Berkeley Systems’ You Don’t Know Jack games (1995, 1996, 1997, 1998) relied almost entirely upon snappy, quick-witted, and occasionally randomly generated speech. More recently, games have included soundtracks that are programmed to adjust to the situation in which the user finds him- or herself. For example, Midway’s Rise & Fall: Civilizations at War (2006) plays context-sensitive music depending on what’s happening in the action, while EA Sports’ NCAA Football 07 (2006) incorporates particularly raucous crowd sound effects during rivalry games. These and other computer games incorporate sound comprehensively to enhance the experience. Why, then, is sound not being used as extensively in instructional software to enhance learning?

Our preconceptions about the way a tool works can limit the way we think about using it to solve problems (Wertheimer, 1959). For example, Maier (1930, 1931) asked participants in a series of experiments to tie together two strings that were hanging from the ceiling. Participants quickly discovered that while they held onto one of the strings, they could not reach the other. The solution was to tie an object to one of the strings and then to swing the now-weighted string toward the other. Maier handed participants a pair of pliers, hinting that the tool could be used to solve the problem. He found that participants who could not envision the pliers as anything other than a gripping tool could only think to use the tool to extend their reach, an unsuccessful approach.

Similarly, the limitations of an older technology can define the way we think to use a new, less limited technology (Divesta & Walls, 1967). Apparently, software designers are not immune to this "functional fixedness." For example, Cates (1998) proposed that screen designs based on the impediments faced by early Web adopters continue to influence developers’ concepts of how Web pages should look and function today, despite significant advances in the capabilities of HTML. It seems that once design components like text, graphics, and sound have been assigned functions, their roles can become "fixed" in the designer’s mind, regardless of advances in the technology. Cooper (1995) maintained that years of annoying, internal speaker-generated, corrective feedback "beeps" that coldly announce the user’s failure have so stigmatized computer sounds that most developers wrongly believe using sound is undesirable and should no longer be considered as part of interface design. It is also possible, however, that sound continues to be relegated in the software interface to error messages, self-contained examples, and screen-text narration because, as was the case with the pliers, few people can see how to use it otherwise.

Table 5  Ranking of strategies used by sound type

Rank  Music                            Sound effects              Narration
1     5. Organize information          1. Direct attention        1. Direct attention
2     1. Direct attention              7. Hold attention          3. Tie into previous knowledge
3     7. Hold attention                5. Organize information    7. Hold attention
4     4. Focus attention               4. Focus attention         4. Focus attention
5     3. Tie into previous knowledge                              5. Organize information
6                                                                 2. Isolate information
7                                                                 6. Build upon existing knowledge
8                                                                 9. Prepare knowledge for later use
9                                                                 8. Elaborate upon information

Some other ways to think about using sound in instruction

If a picture can be used to tell a thousand words, then why can’t a sound be used to represent content? Or depict a context? Or illustrate a construct? Sounds are such an important part of the way we make sense of the world around us that they are rife with associations that can be exploited. In fact, it may be that the type of sound used (music, sound effect, or narration) is less important than the kind of listening it encourages (Gaver, 1989, 1993b; Gaver, Smith, & O’Shea, 1991).

According to Zettl (1990), we tend to listen to some sounds—like a speech or the sound of a telephone ringing—as sounds per se. These "literal sounds" convey to us a specific, literal meaning and also mentally refer us to the sound-producing source. Given this, literal sounds can be very effective ways to communicate information in a learning environment; and, as our findings indicate, most instructional designers relegate sound to this literal, information-delivery function. Further, because literal sounds can be either source-connected (the sound-producing source is seen while the sound is heard) or source-disconnected (the sound-producing source is not seen or is "off camera" while the sound is heard), they can be used by designers of instructional technologies to enhance the "bandwidth" of information that can be delivered simultaneously. For example, as Clark and Mayer (2003) have suggested, when the visual load of an interface gets too great or when there is not enough screen space to include all the text needed to describe a concept, a designer might instead consider using narration accompanied by graphics or animations.

But Zettl (1990) suggested that communicating information is just one of the major functions that sound can play. Because some sounds are highly image-evoking and schema-activating—like a favorite song from high school or a cartoon’s "falling-off-a-cliff" slide whistle—they can also be particularly good at establishing a location, time, or situation (outer orientation) and creating a mood, energy, or structure (inner orientation). This phenomenon is, perhaps, most clearly evidenced by the language we use to describe these sorts of sounds: "a happy song," "a droning train whistle," "a far-away voice," and the like. Unlike literal sounds, these "non-literal" sounds are deliberately source-disconnected and are intentionally designed and implemented to evoke images, abstractions, and even emotions that refer us mentally to more than just the sound-producing source. According to Norman (2004), the success of any design is as much influenced by our affective reaction to it as it is by cognition.


In the real world, we regularly depend on information we cull from literal sound to cross streets, answer phones, diagnose car trouble, pour liquids, and the like (Bregman, 1993; Deutsch, 1986; McAdams, 1993). Because sound can be extremely useful for communicating information, argued Mountford and Gaver (1990), it is a natural channel for enhancing human–computer interactions. This, however, does not mean designers must necessarily use sounds that are realistic, such as a "click" sound to accompany a button click. In fact, unlike the real world, interactions with objects in an interface do not make any noise at all until someone has chosen and inserted a specific sound. So, as long as interface sounds enhance realism by being expected and consonant (Laurel, 1986, 1993), interactions with that button can make any literal or non-literal sound the designer wishes. Why not take the opportunity to reinforce the lesson’s content?

For example, in an information literacy course that covers the concept "relevance," one might consider using an eating "chomp" sound for relevant resources and a spitting-out "ptewie" sound for irrelevant resources. These sounds might then be varied in order to elaborate on the concept further, so that resources that are clearly relevant might make a satisfying "munch" sound, whereas sources that are relevant but need to be "softened up a bit" to suit the audience of the paper in which the resource will be used might make a teeth-shattering "crunch" sound. Used consistently in this way, such an auditory syntax of metaphorical sounds might build a mental model of how concepts fit together. Later echoes of these sounds might then help learners transfer prior learnings to new situations.
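To make the idea of an auditory syntax concrete, here is a brief sketch of our own (the file names, judgment labels, and playback helper are hypothetical illustrations, not taken from any product we reviewed): each relevance judgment maps consistently to a metaphorical "eating" sound, so the same metaphor recurs wherever the concept does.

```python
# A minimal sketch (ours) of a consistent auditory syntax for a lesson on
# source relevance: each judgment maps to a metaphorical "eating" sound.
# File names and the playback call are hypothetical placeholders.
RELEVANCE_CUES = {
    "clearly_relevant": "munch.wav",    # satisfying: resource fits as-is
    "needs_softening":  "crunch.wav",   # teeth-shattering: relevant but needs work
    "irrelevant":       "ptewie.wav",   # spit out: resource rejected
}

def cue_for(judgment: str) -> str:
    """Return the sound file that consistently signals a relevance judgment."""
    return RELEVANCE_CUES[judgment]

def on_resource_rated(judgment: str, play=print) -> None:
    # In a real interface, `play` would hand the file to an audio backend;
    # printing here keeps the sketch self-contained and runnable.
    play(f"playing {cue_for(judgment)}")

if __name__ == "__main__":
    on_resource_rated("clearly_relevant")   # -> playing munch.wav
    on_resource_rated("irrelevant")         # -> playing ptewie.wav
```

The design point is the consistency, not the particular files: because the same cue always accompanies the same judgment, later echoes of the cue can evoke the concept even in new contexts.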

Conclusion

Sound already plays an important role in many instructional settings. Barker (1986) pointed out that there are several academic content areas where sound is necessary for learning (for example, early reading, modern language teaching, and music instruction). More than 40 years ago, Flanders (1963) observed that two-thirds of typical classroom time is spent with someone speaking, with two-thirds of the speaking done by teachers—and little has changed since then (Mayer, 2001). But many classroom sounds are sounds per se that may or may not evoke additional associations. As new technologies like podcasting, GarageBand, and Web-based audio conferencing continue to make sound easier to create and incorporate in learning environments, instructional designers should think beyond using sound only for literal information conveyance and begin exploring how to exploit the associative potential of music, sound effects, and narration to help learners process the material under study more deeply.

Of course, employing sound in ways that contribute more directly to learning may well leave out a portion of the learning population—those with hearing impairments. But rather than continue to bypass the great potential sound has to offer most learners, we believe the design challenge should be, instead, to provide the hearing impaired with similar benefits. While these issues are outside the scope of this discussion, similarly accommodating learners with hearing impairments likely calls for creating some analogous syntax that conveys what we hope sound will bring to the instruction, but in a different modality. Clearly the challenge to all instructional designers is to provide the greatest benefit to the most learners in the most efficacious way for each individual.

References

Altheide, D. L. (1996). Qualitative media analysis. Thousand Oaks: Sage.
Ary, D., Jacobs, L. C., & Razavieh, A. (2002). Introduction to research in education (6th edn). Stamford: Wadsworth/Thomson Learning.
Barker, P. (1986). A practical introduction to authoring for computer-assisted instruction. Part 6: Interactive audio. British Journal of Educational Technology, 17, 110–128.
Barron, A. E. (1995). Digital audio in multimedia. Educational Media International, 32(4), 190–193.
Barron, A. E. (2003). Audio in multimedia learning: Principles and practice. Paper presented at the Association for Educational Communication and Technology, Anaheim, CA.
Barron, A. E., & Atkins, D. (1994). Audio instruction in multimedia education: Is textual redundancy important? Journal of Educational Multimedia and Hypermedia, 3(3–4), 295–306.
Barron, A. E., & Kysilka, M. L. (1993). The effectiveness of digital audio in computer-based training. Journal of Research on Computing in Education, 25, 277–289.
Berelson, B. (1952). Content analysis in communication research. Glencoe: Free Press.
Best, J. W., & Kahn, J. V. (1993). Research in education. Boston: Allyn and Bacon.
Bickford, P. (1997). Interface design: The art of developing easy-to-use software. Boston: AP Professional.
Bigand, E. (1993). Contributions of music to research on human auditory cognition. In S. McAdams & E. Bigand (Eds.), Thinking in sound (pp. 231–277). New York: Oxford University.
Bishop, M. J. (2000). The systematic use of sound in multimedia instruction to enhance learning. Dissertation Abstracts International, 61(07), 2669.
Bishop, M. J., & Cates, W. M. (2001). Theoretical foundations for sound’s use in multimedia instruction to enhance learning. Educational Technology Research & Development, 49(3), 5–22.
Bregman, A. (1993). Auditory scene analysis: Hearing in complex environments. In S. McAdams & E. Bigand (Eds.), Thinking in sound (pp. 10–36). New York: Oxford University.
Budd, R. W., Thorp, R. K., & Donohew, L. (1967). Content analysis of communications. New York: Macmillan.
Capitalization L5 Days, Months, and Holidays. (2000). Computer software. Columbia: Achievement Technologies.
Carr, T. (1986). Perceiving visual language. In K. Boff, L. Kaufman, & J. Thomas (Eds.), Handbook of perception and human performance (Vol. 2, pp. 29.21–29.82). New York: Wiley.
Cates, W. M. (1985). A practical guide to educational research. Englewood Cliffs: Prentice Hall.
Cates, W. M. (1998). Deja vu all over again: Considering instructional design and the World Wide Web. Paper presented at the annual convention of the Association for Educational Communications and Technology.
Clark, R. C., & Mayer, R. E. (2003). e-Learning and the science of instruction. San Francisco: Pfeiffer.
Cooper, A. (1995). About face: The essentials of user interface design. Foster City: IDG Books.
Creative Labs. (1998). Milestones [On-line] from http://www.creativeLabs.com/corporate/about_creative/milestones/mile1998.html.
Deatherage, B. H. (1972). Auditory and other sensory forms of information presentation. In H. P. VanCott & R. G. Kinnkade (Eds.), Human engineering guide to equipment design (2nd edn) (pp. 123–160). Washington: U.S. Government Printing Office.
Deutsch, D. (1986). Auditory pattern recognition. In K. Boff, L. Kaufman, & J. Thomas (Eds.), Handbook of perception and human performance (Vol. 2, pp. 32.31–32.49). New York: Wiley.
Divesta, F. J., & Walls, R. T. (1967). Transfer of object-function in problem solving. American Educational Research Journal, 4, 207–216.
Doom. (1994). Computer software. New York: GT Interactive.
Flanders, N. (1963). Intent, action and feedback: A preparation for teaching. Journal of Teacher Education, 14, 251–260.
Galitz, W. O. (2002). The essential guide to user interface design: An introduction to GUI design principles and techniques. New York: Wiley.
Gall, J. P., Gall, M. D., & Borg, W. R. (1999). Applying educational research: A practical guide (4th edn). New York: Longman.
Gaver, W. (1989). The SonicFinder: An interface that uses auditory icons. Human-Computer Interaction, 4, 67–94.
Gaver, W. (1993a). Synthesizing auditory icons. In INTERCHI ’93: Conference on human factors and computing systems (pp. 228–235). Reading: Addison-Wesley.
Gaver, W. (1993b). What in the world do we hear? An ecological approach to auditory source perception. Ecological Psychology, 5, 1–29.
Gaver, W., Smith, R., & O’Shea, T. (1991). Effective sounds in complex systems: The ARKola simulation. In Proceedings of the SIGCHI ’91 conference on human factors in computing systems: Reaching through technology (pp. 85–90).
Hannafin, M. J., & Peck, K. (1988). The design, development, and evaluation of instructional software. New York: Macmillan.
Harmon, R. (1988). Film producing: Low-budget films that sell. Hollywood: Samuel French Trade.
Hereford, J., & Winn, W. (1994). Non-speech sound in human-computer interaction: A review and design guidelines. Journal of Educational Computing Research, 11, 211–233.
Initial Reading. (2000). Computer software. Mesa: Pearson.
Jonassen, D. H. (Ed.). (1988). Instructional designs for microcomputer courseware. Hillsdale: Erlbaum.
Keeping Pronouns Consistent. (2004). Computer software. Bloomington: PLATO Learning.
Kohfeld, D. L. (1971). Simple reaction time as a function of stimulus intensity in decibels of light and sound. Journal of Experimental Psychology, 88, 251–257.
Koroghlanian, C., & Klein, J. D. (2000). The use of audio and computer-based instruction. Paper presented at the Association for Educational Communications and Technology, Denver, CO.
Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills: Sage.
Laurel, B. (1986). Interface as mimesis. In D. A. Norman & S. W. Draper (Eds.), User centered system design: New perspectives on human-computer interaction (pp. 68–85). Hillsdale: Lawrence Erlbaum.
Laurel, B. (1993). Computers as theatre. Reading: Addison-Wesley.
Lauret, D. T. (1998). The auditory display in interactive courseware: Moving human factors into computer education. Dissertation Abstracts International, 59(07), 2459 (UMI No. 9841947).
Lee, W. W., & Owens, D. L. (2000). Multimedia-based instructional design. San Francisco: Jossey-Bass/Pfeiffer.
Locating What’s Important in Literature. (2004). Computer software. Bloomington: PLATO Learning.
Maier, N. R. F. (1930). Reasoning in humans. I: On direction. Journal of Comparative Psychology, 10, 115–143.
Maier, N. R. F. (1931). Reasoning in humans. II: The solution of a problem and its appearance in consciousness. Journal of Comparative Psychology, 12, 181–194.
Mandel, T. (1997). The elements of user interface design. New York: Wiley.
Mayer, R. E. (2001). Multimedia learning. Cambridge: Cambridge University Press.
Mayer, R. E., & Anderson, R. B. (1991). Animations need narrations: An experimental test of a dual-coding hypothesis. Journal of Educational Psychology, 83, 484–490.
Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. Journal of Educational Psychology, 90, 312–320.
McAdams, S. (1993). Recognition of sound sources and events. In S. McAdams & E. Bigand (Eds.), Thinking in sound (pp. 146–198). New York: Oxford University.
McAdams, S., & Bigand, E. (1993). Introduction to auditory cognition. In S. McAdams & E. Bigand (Eds.), Thinking in sound (pp. 1–9). New York: Oxford University.
Merriam, S. B. (1998). Qualitative research and case study applications in education (Rev. edn). San Francisco: Jossey-Bass.
Moreno, R., & Mayer, R. E. (2000). A coherence effect in multimedia learning: The case for minimizing irrelevant sounds in the design of multimedia instructional messages. Journal of Educational Psychology, 92, 117–125.
Morrison, G. R., Ross, S. M., & Kemp, J. E. (2001). Designing effective instruction (3rd edn). New York: Wiley.
Mountford, J., & Gaver, W. (1990). Talking and listening to computers. In B. Laurel (Ed.), The art of human-computer interface design (pp. 319–334). Reading: Addison-Wesley.
Myst. (1994). Computer software. Novato: Broderbund.
Najjar, L. J. (1998). Principles of educational multimedia user interface design. Human Factors and Ergonomics Society, 40(5), 311–324.
Nasser, D. L., & McEwen, W. J. (1976). The impact of alternative media channels: Recall and involvement with messages. AV Communication Review, 24(3), 263–272.
NCAA Football 07. (2006). Computer software. Irvine: EA Sports.
Newby, T. J., Cook, J. A., & Merrill, P. F. (1988). Visual mediational instruction: Reducing interference within visual and aural multiple-discrimination tasks. Journal of Educational Psychology, 80, 40–45.
Norman, D. A. (2004). Emotional design: Why we love (or hate) everyday things. New York: Basic Books.
Patton, M. Q. (2001). Qualitative evaluation and research methods. Thousand Oaks: Sage.
Perkins, M. (1983). Sensing the world. Indianapolis: Hackett.
Posner, M. I., Nissen, M. J., & Klein, R. M. (1976). Visual dominance: An information-processing account of its origins and significance. Psychological Review, 83, 157–171.
Potter, W., & James, W. (1999). Rethinking validity and reliability in content analysis. Journal of Applied Communications Research, 27(3), 258–285.
Quality Education Data, Inc. (2003). Market news. Retrieved November 6, 2003, from http://www.qeddata.com/newstechleader03.htm.
Reading Adventures: My Friend Leslie. (2000). Computer software. Mesa: Pearson.
Reading Readiness. (2000). Computer software. Mesa: Pearson.
Rise & Fall: Civilizations at War. (2006). Computer software. San Diego: Midway.
Severin, W. J. (1967a). Another look at cue summation. AV Communication Review, 15, 233–245.
Severin, W. J. (1967b). The effectiveness of relevant pictures in multiple-channel communications. AV Communication Review, 15, 386–401.
Show Me the Money! Bogus Bills. (2004). Computer software. New York: Scholastic.
Show Me the Money! Fighting Forgery. (2004). Computer software. New York: Scholastic.
Show Me the Money! Making Money. (2004). Computer software. New York: Scholastic.
Spelling L7 Diphthongs and Less Common Vowel Diagraphs. (2000). Computer software. Columbia: Achievement Technologies.
The 7th Guest. (1992). Computer software. Irvine: Virgin Games.
The Dig. (1995). Computer software. San Rafael: LucasArts.
The Windows interface guidelines for software design. (1995). Redmond: Microsoft.
Thomas, F., & Johnston, R. (1984). The Disney sounds. In W. Rawls (Ed.), Disney animation: The illusion of life (pp. 145–161). New York: Abbeville.
Usage: L10 Double Negatives. (2000). Computer software. Columbia: Achievement Technologies.
Using Examples to Clarify Your Ideas. (2004). Computer software. Bloomington: PLATO Learning.
Van Mondfrans, A. P., & Travers, R. M. W. (1964). Learning of redundant materials presented through two sensory modalities. Perceptual and Motor Skills, 19, 743–751.
Weber, R. P. (1985). Basic content analysis. Beverly Hills: Sage.
Wertheimer, M. (1959). Productive thinking (enlarged edn). New York: Harper and Row.
Winn, W. D. (1993). Perception principles. In M. Fleming & W. H. Levie (Eds.), Instructional message design: Principles from the behavioral and cognitive sciences (2nd edn) (pp. 55–126). Englewood Cliffs: Educational Technology.
Yost, W. A. (1993). Overview: Psychoacoustics. In W. A. Yost, A. N. Popper, & R. R. Fay (Eds.), Human psychophysics (pp. 1–12). New York: Springer.
You Don’t Know Jack. (1995). Computer software. Berkeley: Berkeley Systems.
You Don’t Know Jack Volume 2. (1996). Computer software. Berkeley: Berkeley Systems.
You Don’t Know Jack Volume 3. (1997). Computer software. Berkeley: Berkeley Systems.
You Don’t Know Jack Volume 4: The Ride. (1998). Computer software. Berkeley: Berkeley Systems.
Zettl, H. (1990). Sight, sound, motion: Applied media aesthetics (2nd edn). Belmont: Wadsworth.

M. J. (Mary Jean) Bishop is an assistant professor in the Teaching, Learning, and Technology program at Lehigh University.

Tonya B. Amankwatia is a doctoral student in the Teaching, Learning, and Technology program at Lehigh University.

Ward Mitchell Cates is Interim Associate Dean for Lehigh’s College of Education and Professor in the Teaching, Learning, and Technology program.
