
20 PERVASIVE computing 1536-1268/02/$17.00 © 2002 IEEE

Application Design for Wearable and Context-Aware Computers

Pervasive or ubiquitous computing succeeds when it helps people access and use information on or off the job. Even at work, many people don't have desks, or spend most of their day on the run. A mobile system must therefore make information readily available at any place and time. The computing system should also be aware of the user's context so that it can respond appropriately to users' cognitive and social state and anticipate their needs.

Carnegie Mellon's Wearable Computers project (www.wearablegroup.org) is developing small-footprint computing systems that people can carry or wear to interact with computer-augmented environments.1 We've designed and built more than two dozen wearable computers over the past decade and have field-tested most of these. The application domains range from inspection, maintenance, manufacturing, and navigation to on-the-move collaboration, position sensing, and real-time speech recognition and language translation.

We've developed a taxonomy of problem-solving capabilities for wearable and context-aware computers, derived from our iterative design methodology with a wide variety of end users, mainly mobile workers. Here we illustrate the taxonomy with wearable systems whose capabilities range from basic stored-information retrieval to synchronous or asynchronous collaboration to context-aware platforms with proactive assistants. Example evaluation methods show how user tests can quantify wearable systems' effectiveness.

Wearable computing challenges

Today's computer systems aren't as effective as they could be because they distract users in many ways and can easily overwhelm users with data. Effective human–computer interaction design therefore requires interfaces that preserve human attention and avoid information overload.

By bringing wearable computers into a growing number of application areas, we've improved their interfaces to better meet this need. The family tree of CMU wearable computers (see Figure 1), modeled after the US National Science Foundation family tree of early computers, classifies wearable computers into application categories and presents their development over the past decade. Each name represents a distinct wearable computer placed under its corresponding application domain. The four starred designs (VuMan 3, MoCCA, Digital Ink, and Promera) have earned international design awards.

Figure 2 shows CMU wearable computers, ranging from proof of concept, to customer-driven systems based on a task specification, to visionary designs predicting wearable computers' future form and functionality.

We use techniques such as user-centered design, rapid prototyping, and in-field evaluation to identify and refine paradigms that will prove useful across many applications.2,3 These paradigms build

To address mobile-application design challenges, the authors created four user interface models that map problem-solving capabilities to application requirements.

Asim Smailagic and Daniel Siewiorek, Institute for Complex Engineered Systems and Human Computer Interaction Institute, Carnegie Mellon University

WEARABLE COMPUTING

on the notion that wearable computers should merge the user's information with his or her workspace, blending seamlessly with the user's existing environment and providing as little distraction as possible. This requirement often leads to replacements for the traditional desktop paradigm, which generally requires a fixed physical relationship between the user and devices such as a keyboard and mouse. Identifying effective interaction modalities for wearable computers and accurately modeling user tasks in software are among the most significant challenges in wearable-system design.

Table 1 shows that a new user interface takes about 10 years to become widely deployed. Defining and refining user interface models requires extensive lab and end-user experimentation to filter out technology bugs, reduce costs, and adapt applications to the new user interfaces. Speech and handwriting recognition are approaching mainstream status after years of development. In the near future, position sensing, eye tracking, and stereographic audio and visual output will enhance 3D virtual reality information, but these technologies won't be fully developed and deployed for at least another decade.

Mobile system design principles

Mobile systems must balance resource availability with portability and usability. We've identified four design principles for mobile systems.

User interface model. What metaphors can we use for mobile information access—what is the next "desktop" or "spreadsheet"? These metaphors typically take over a decade to develop (for example, the desktop metaphor originated in the early 1970s at Xerox PARC, and more than a decade passed before it was widely available to consumers). Extensive experimentation with end users is required to define and refine these user interface models.

OCTOBER–DECEMBER 2002 PERVASIVE computing 21

Figure 1. Family tree of Carnegie Mellon wearable computers (F15, F16, F18, and C130 are aircraft programs). [Figure: a family tree spanning 1970–2001 that places each system (VuMan 1–3, Navigator 1 and 2, TIA-0, TIA-P, MoCCA, Metronaut, Frogman, Spot, Digital Ink, Promera, and others) under application domains such as manufacturing, plant operation, maintenance, navigation, medical, and language translation, rooted in enabling technologies including microprocessors, speech recognition, miniature displays, and wireless communications.]

Input/output modalities. Several decades of computer science research on modalities mimicking the human brain's I/O capabilities haven't yet produced acceptable accuracy and ease of use. Many current modalities require extensive training periods, and their inaccuracies frustrate users. Most of these modalities also require extensive computing resources not available in lightweight, low-power wearable computers. New, easy-to-use input devices such as CMU's dial, developed for list-oriented applications,4 could prove useful.

Matching capabilities with application requirements. Many mobile systems attempt to pack as much capacity and performance in as small a package as possible. However, users don't always need high-end capabilities to complete a task. Enhancements such as full-color graphics not only require substantial resources but also might compromise ease of use by generating information overload. Interface design and evaluation should focus on the most effective information access means and resist the temptation to provide extra capabilities simply because they're available.

Quick interface evaluation. Currently, human–computer interface evaluation requires elaborate procedures with many subjects. Such evaluations might take months and don't provide an appropriate reference for interface design. New evaluation techniques should especially focus on decreasing human errors and frustration.

22 PERVASIVE computing http://computer.org/pervasive

Figure 2. Ten years of wearable computing at Carnegie Mellon (1991–2001). [Figure: a 1991–2001 timeline grouping systems into exploratory systems design (e.g., VuMan 1 and 2, Navigator 1, Spot, Itsy/Cue), customer-driven systems design (e.g., VuMan 3, Navigator 2, TIA-0, TIA-P, OSCAR, MIA, MoCCA, Smart Modules, IBM Wearable for aircraft maintenance), and visionary design and research (e.g., Sprout, Streetware, Folio, Promera, Digital Ink, tactile display, design for wearability).]

TABLE 1
User interface evolution.

Input, output, information                                                  Introduction   Mainstream acceptance
Keyboard, alphanumeric display, text                                        1965           1975
Keyboard and mouse, graphic display, icons                                  1980           1990
Handwriting, speech recognition, speech synthesis, multimodal               1995           2005
Position sensing, eye tracking, stereo audio and video, 3D virtual reality  2010           2020

Functionalities

Over the past decade, in building wearable computers for diverse application areas, we've observed that several functionalities prove useful across multiple applications. These functionalities form the basis for our four user interface models, each with unique user interfaces, I/O modalities, and capabilities.

Procedures: text and graphics. Maintenance and plant operation applications draw on a large volume of information that changes slowly. For example, even simple aircraft might have more than 100,000 manual pages. Operational changes and upgrades, however, make half of these pages obsolete every six months for even mature aircraft. Rather than distribute CD-ROMs to each maintenance person and risk a procedure being performed on the basis of obsolete information, maintenance facilities usually maintain a centralized database that personnel consult for the relevant manual sections. The centralized information base can change as needed.

Also, as manufacturing becomes more customized, no two aircraft on an assembly line are identical—they might belong to different airlines or be configured for different missions. When manufacturing or maintenance personnel arrive for a day's work, they receive a list of job orders describing the tasks and including documentation such as text and schematic drawings. Because it's centrally maintained, even if this information changes daily or hourly, workers still get accurate information.
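As a sketch of the centralized-information approach just described, a hypothetical store can always serve the latest revision of a procedure, so workers never act on an obsolete page. The class and method names below are illustrative assumptions, not part of any system described here:

```python
from dataclasses import dataclass, field


@dataclass
class ProcedureStore:
    """Hypothetical central store of maintenance procedures.

    Workers always fetch the current revision, so an update made
    hours ago is reflected in the next job order they pull.
    """
    # procedure id -> (revision number, document text)
    _docs: dict = field(default_factory=dict)

    def publish(self, proc_id: str, text: str) -> int:
        """Store a new revision and return its revision number."""
        rev, _ = self._docs.get(proc_id, (0, ""))
        self._docs[proc_id] = (rev + 1, text)
        return rev + 1

    def fetch(self, proc_id: str):
        """Return (revision, text) for the current revision."""
        return self._docs[proc_id]


store = ProcedureStore()
store.publish("wing-inspection", "Torque bolts to 40 Nm.")
store.publish("wing-inspection", "Torque bolts to 45 Nm.")  # later update
rev, text = store.fetch("wing-inspection")  # always the newest revision
```

Contrast this with the CD-ROM distribution the article warns against, where each worker's copy silently ages.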

Master–apprentice help desk. Sometimes a worker requires assistance from experienced personnel. Historically, apprenticeship programs let novices learn by observing and working with experienced workers. More recently, help desks have evolved to provide audio and visual access to experienced people for help with problem solving.

Team maintenance and collaboration. The help desk can service many field workers simultaneously. Today, downsizing and productivity improvement efforts compel even geographically distributed teams to pool their knowledge to solve immediate problems. In an extension of the help desk idea, a team of field service engineers, police, firefighters, and others trying to resolve an emergency situation must have reliable access to information that will change minute by minute or even second by second.

Context-aware collaboration with a proactive assistant. Distractions pose even more of a problem in mobile environments than in desktop environments because mobile users often must continue walking, driving, or taking part in other real-world interactions. A ubiquitous computing environment that minimizes distraction should therefore include a context-aware system able to "read" its user's state and surroundings and modify its behavior on the basis of this information. The system can also act as a proactive assistant by linking information such as location and schedule derived from many contexts, making decisions, and anticipating user needs. Mobile computers that can exploit contextual information will significantly reduce demands on human attention.
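A proactive rule that links location and schedule, as described above, might look like the following minimal sketch. The inputs, room names, and thresholds are hypothetical assumptions for illustration, not part of the CMU systems:

```python
def assistant_action(location, next_event, minutes_until):
    """Hypothetical proactive-assistant rule combining two context
    sources: the user's location and their calendar."""
    if next_event is None:
        return "idle"
    if minutes_until <= 0:
        # The event has started: hold non-urgent notifications to
        # avoid distracting the user mid-meeting.
        return "suppress notifications"
    if minutes_until <= 10 and next_event["room"] != location:
        # Anticipate the user's need: remind them to start walking.
        return "remind: head to " + next_event["room"]
    return "idle"
```

The point of the sketch is that neither sensor alone suffices; only the combination (elsewhere and a meeting soon) justifies interrupting the user.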

Example systems

How do these four application paradigms address the design principles of user interface, I/O modalities, and functional capability requirements? We primarily use CMU wearable computer systems to illustrate the paradigms (see Figure 3), but other organizations' systems also use these models.

For the procedures model, we look at VuMan 3, which provides text-based inspection of heavy military vehicles.4 Other applicable examples include Navigator 2, which assists graphical-based inspection of Boeing aircraft,1 and Georgia Tech's wearable computer for quality assurance inspection in food-processing plants.5

We illustrate the master–apprentice help desk paradigm using TIA-P (Tactical Information Assistant Prototype), used for CMU's C-130 help desk.6 Netman also lets field technicians and office-based experts collaborate in real time using audio and video.7

For team collaboration, we look at MoCCA (Mobile Communication and Computing Architecture), which supports collaboration of geographically distributed field engineers.8 Another system in this model, Land Warrior (www.fas.org/man/dod-101/sys/land/land-warrior.htm), is an integrated infantry soldier system for close combat designed to avoid information overload. (See "The Evolution of Army Wearable Computers" article in this issue.)

To illustrate context-aware collaboration we discuss a context-aware cell phone, a part of the contextual car and driver interface that proactively helps drivers manage information and communication. Another example, Touring Machine, combines 3D augmented-reality graphics with mobile computing to help users navigate while traveling.9 Synthetic-assistant technology developed at CMU6,10 lets a computer-modeled expert interact conversationally, provide advice, read procedures, and answer questions.

Table 2 summarizes the four user interface paradigms with respect to the first design principle and I/O modalities, and presents each model's knowledge sources.

Table 3 evaluates how the user interface models deploy the third design principle, ability to fulfill requirements. For example, the master–apprentice model employs static and synchronous expert functionality. Figure 4 depicts the four problem-solving capabilities in a state diagram.

Procedures: Prestored text and graphics

Navigator 2 assists Boeing inspectors with the sheet metal inspection of a military aircraft. An average 36-hour inspection identifies about 100 defects. The user begins by selecting an aircraft body region and proceeds with object inspection, manual information referencing, archival observation storage, and status recording. During inspection, the field of interest narrows from major features such as the plane's left



TABLE 2
Input/output modalities and information sources for interface models.

User interface model            Data representation   Knowledge source                                            Input                   Output
Prestored procedures: text      Text                  Task-specific, prestored procedures, menu selection, input  Buttons                 Alphanumeric
Prestored procedures: graphics  Bitmap                Task-specific, prestored procedures, menu selection, input  Mouse                   Graphical user interface
Master–apprentice help desk     Speech synthesis      Archival                                                    Speech                  Multimedia
Team collaboration              Pictures              Team "corporate" memory                                     Archival data           Group collaboration
Synthetic collaboration         3D animation          Real-time physical and social context                       Contextual information  Proactive, context-appropriate

Figure 3. Wearable computer platform examples. [Figure: a chronology (1993–2001) matching each paradigm to a platform: VuMan 3 (procedures, text), Navigator 2 (procedures, graphics), TIA-P (master–apprentice), MoCCA (team maintenance), and GM/CMU Companion (context-aware).]

wing or right tail to specific details such as individual cockpit windowpanes or aircraft body polygons (see Figure 5). The user records the area each defect covers and its type using a "how malfunctioned" code such as corroded, cracked, or missing. To maximize usability, the system lets the user select each item or control simply by speaking its name or, for more complicated phonemes, a designated numeral. Boeing aircraft inspectors at McClellan Air Force Base in California have praised this 2D selection method, which specifies defect locations on a planar region, and the overall user interface design.
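The defect-recording step described above (a region, a 2D location, an area, a "how malfunctioned" code, and a severity, as in Figure 5) can be modeled as a small record type. The field names and validation here are illustrative assumptions, not Navigator 2's actual data format:

```python
from dataclasses import dataclass

# "How malfunctioned" codes as listed in the inspection interface.
HOW_MAL = {"corroded", "cracked", "dented", "deteriorated",
           "loose", "missing", "worn", "other"}


@dataclass
class DefectRecord:
    """One logged defect; a hypothetical sketch, not Navigator 2's schema."""
    region: str          # e.g. "left wing"
    location_2d: tuple   # (x, y) on the selected planar region
    area: float          # extent of the defect
    how_mal: str         # one of the HOW_MAL codes
    severity: str        # "minor" | "major" | "critical"

    def __post_init__(self):
        if self.how_mal not in HOW_MAL:
            raise ValueError(f"unknown how-mal code: {self.how_mal!r}")
        if self.severity not in {"minor", "major", "critical"}:
            raise ValueError(f"unknown severity: {self.severity!r}")


defect = DefectRecord("left wing", (270, 265), 1.5, "corroded", "minor")
```

Restricting input to a fixed code set mirrors the article's point about speaking a name or numeral: a small vocabulary keeps selection fast and recognition reliable.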

Master–apprentice (live expert) help desk

The C-130 project uses collaboration to facilitate training and increase the number of trainees per trainer. Remotely located trainers teach inexperienced users to perform a cockpit inspection. The trainee loads the inspection procedures and performs the inspection. A desktop system manages the normal job order process and lets instructors observe trainee behavior. In collaboration, the instructor looks over the trainee's shoulder (through a small video camera attached to the top of the trainee's head-mounted display) and offers advice. In addition to a two-way audio channel, the instructor can provide advice using a cursor to indicate areas on a captured video image shared through a whiteboard. The instructor manages the sharing session and whiteboard; the trainee can only observe the whiteboard.

The synchronous communication bubble in Figure 4 shows the master–apprentice paradigm's capability. Synchronous communication facilitates answering questions such as, "Where is the object?" (annotating a captured image), "How do I do this?" (audio guidance through prestored material), and "What does the test result mean?" (audio discussion). This model also uses the static and prestored capability.
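The asymmetric sharing just described, where the instructor controls the session and the trainee can only observe, can be sketched as a simple role check. The class and its method names are hypothetical, not the C-130 system's implementation:

```python
class SharedWhiteboard:
    """Hypothetical sketch of the asymmetric whiteboard session:
    the instructor shares and annotates, the trainee only views."""

    def __init__(self):
        self.image = None
        self.annotations = []

    def share_image(self, role, image):
        """Instructor shares a captured video frame, clearing old marks."""
        self._require_instructor(role)
        self.image, self.annotations = image, []

    def annotate(self, role, x, y, note):
        """Instructor points at an area of the captured image."""
        self._require_instructor(role)
        self.annotations.append((x, y, note))

    def view(self):
        # Any participant, including the trainee, may observe.
        return self.image, list(self.annotations)

    @staticmethod
    def _require_instructor(role):
        if role != "instructor":
            raise PermissionError("only the instructor controls the session")


wb = SharedWhiteboard()
wb.share_image("instructor", "captured-frame-001")
wb.annotate("instructor", 120, 80, "check this latch")
```

The one-way control channel keeps the trainee's interface minimal, which matches the article's emphasis on limiting distraction for the mobile worker.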

Team collaboration

MoCCA supports a group of geographically distributed field service engineers.8 The FSEs spend 30 to 40 percent of their time driving to customer sites, and half of what they service is third-party equipment for which they might not have written documentation. MoCCA developers therefore sought to provide a system that let FSEs access information and get advice from other FSEs while commuting or at customer sites. The system supports synchronous and asynchronous collaboration (see Figure 4) for both voice and digitized information.

User interviews yielded another challenge: FSEs wanted laptop computer functionality, including a larger color display, with an operational cycle of at least eight hours. The system had to be very light, preferably less than one pound, and able to access several legacy databases. Further discussions with the FSEs indicated that they most often used text-oriented databases and only rarely accessed graphical


TABLE 3
User interface models and problem-solving capabilities.

User interface model         Static  Synchronous expert  Asynchronous expert  Proactive assistant
Prestored procedures         Yes     No                  No                   No
Master–apprentice help desk  Yes     Yes                 No                   No
Team collaboration           Yes     Yes                 Yes                  No
Synthetic collaboration      Yes     Yes                 Yes                  Yes

Figure 4. State diagram of problem-solving capabilities. [Figure: four states (static/prestored procedures, synchronous reference to a human expert, asynchronous reference, and proactive assistant), each annotated with example capabilities: selecting a region, storing observations, and recording status; shared whiteboard, pointer, and audio for "Where is the object?", "How to do it", and "What does the test result mean?"; availability and contact state (available, busy, off-duty) with mode (pager, phone, whiteboard); audio bulletin boards, tips, and shared to-do lists; and a "remote" context-aware synthetic expert that reads procedures, answers questions, and draws on location-specific information, schedule, and a cognitive model.]

databases. The system's architecture combined a lightweight alphanumeric satellite computer with a base unit that FSEs could carry into any customer site and gain instant access to the global infrastructure.

MoCCA's asynchronous capabilities for team problem solving (see Figure 4) include audio bulletin boards and tips for shared collaboration space between remote FSEs and their colleagues. The audio bulletin board resembles a storehouse of audio clips describing problems that FSEs encounter on the job. Each "trouble" topic contains a list of audio responses from other FSEs with possible solutions. Figure 6 shows the integrated user interface that starts with the call list, list of available FSEs, and information about the incoming service request.
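The audio bulletin board's structure, with the four operations shown in Figure 6 (make topic, list topic, add message, delete topic), might be modeled as follows. This class is an illustrative sketch under those assumptions, not MoCCA's code:

```python
from collections import defaultdict


class AudioBBoard:
    """Hypothetical model of an audio bulletin board: each 'trouble'
    topic holds audio responses contributed by other FSEs."""

    def __init__(self):
        # topic -> list of (author, audio clip reference)
        self._topics = defaultdict(list)

    def make_topic(self, topic):
        """Create an empty topic if it doesn't already exist."""
        self._topics[topic]

    def add_message(self, topic, author, clip_ref):
        """Append another FSE's audio response to the topic."""
        self._topics[topic].append((author, clip_ref))

    def list_topic(self, topic):
        """Return the responses posted under a topic."""
        return list(self._topics[topic])

    def delete_topic(self, topic):
        """Remove a resolved topic and its responses."""
        self._topics.pop(topic, None)


board = AudioBBoard()
board.make_topic("printer jams intermittently")
board.add_message("printer jams intermittently", "fse-12", "clip-0042.wav")
```

Because posting and listening are decoupled in time, the board supports exactly the asynchronous collaboration the paradigm calls for.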

Context-aware collaboration

We designed a context-aware cell phone application to give the remote caller feedback on the current context of the person being called. It uses time (via a calendar), location, and audio environment sensing and interpretation to derive the callee's context. It derives location from a GPS unit on the car (some newer Ericsson and Motorola cell phones also have GPS capability).

If it determines that the callee is driving, the system must let the caller interact with the driver as if he or she were a passenger in the car. For example, in a particularly difficult driving situation with a high cognitive load (such as passing a truck on a downhill curve at night in the rain), a passenger would be sensitive to the situation and suspend conversation until the driving situation has passed. With contemporary cell phones, however, the caller is unaware of the driver's context and will continue talking, perhaps causing the driver to enter a state of cognitive overload.
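A rough sketch of how the three context sources named above (calendar, GPS-derived motion, and audio environment) might be fused into caller feedback follows. The thresholds, parameter names, and return strings are assumptions for illustration, not the deployed design:

```python
def caller_feedback(calendar_busy, speed_mph, cabin_noise_db):
    """Hypothetical fusion rule deriving the callee's context and
    returning what the remote caller should be told."""
    if speed_mph > 10:
        # GPS suggests the callee is driving: ask the caller to hold
        # rather than let them talk through a demanding maneuver.
        return "please hold"
    if calendar_busy or cabin_noise_db > 70:
        # Calendar or a noisy audio environment implies the callee
        # is occupied; give the caller that feedback.
        return "callee busy"
    return "ring through"
```

The design choice, consistent with the article, is that the system signals the human caller rather than blocking the call outright, letting the caller behave like a considerate passenger.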

Evaluation

We used laboratory prototypes to evaluate four applications' performance for each user interface model. Metrics included time on task and time required to achieve high accuracy.

Prestored procedures

We field-tested MoCCA at the Digital Equipment Corporation facility in Forest Hills, Pennsylvania. Five FSEs performed a set of typical troubleshooting and repair operations on computing equipment including printers, motherboards, and networks. Our prototype saved FSEs considerable time—35 to 40 percent over the system they normally use (see Figure 7). The FSEs used our system for the first time during these tests, and we would expect greater efficiency with continued use. MoCCA also let FSEs immediately fix some problems that otherwise would have required return trips to find and bring back manuals.

Master–apprentice help desk

For this paradigm, we measured performance on a bicycle repair task when working alone compared to working with expert guidance. Workers and experts communicated through a video link and an audio connection. Study participants consisted of 60 CMU students (69 percent male) and two bicycle repair experts. Participants used a help desk collaborative system with a head-mounted display and a small CCD camera mounted on the display.

We found that workers performed substantially better with collaborative help, and we used a repeated-measures analysis of variance test11 to examine the results' statistical significance. Workers with remote expert guidance took, on average, half the time to complete the repair tasks as those working solo (7.5 versus 16.5 minutes, respectively; p < .001). They also performed higher-quality repairs (79 percent of quality points for the collaborative condition versus 51 percent for the solo condition; p < .001). While access to an expert dramatically improved performance, having better communication tools did not improve the number of tasks completed, the average time per completed task, or performance quality. In particular, neither video (comparison of full-duplex audio/video with full-duplex audio/no video) nor full-duplex audio (comparison of the full-duplex audio/video condition with half-duplex audio/video) helped workers perform more tasks, perform tasks more quickly, or perform them better.12



Figure 5. Sample user interface screen for static and prestored information. [Figure: a Navigator 2 screen showing an Aircraft Body main menu with numbered defect-location zones on a planar region, a "How-Mal" defect-type menu (Corroded, Cracked, Dented, Deteriorated, Loose, Missing, Worn, Other), severity choices (Minor, Major, Critical), and controls such as Complete, Remove, Skin, and Manual.]

Team collaboration

Idealink provides a virtual space for groups to manipulate and share observations about graphical objects related to their work task.13 Asynchronous audio tags let users record an audio explanation or annotation of a particular object or procedure. The system records and archives each session, making the knowledge contained within them available for later reference.

We designed an experiment to compare how effectively users communicated concepts with Idealink versus a traditional whiteboard. Eight groups of four CMU


Figure 6. The Mobile Communication and Computing Architecture integrated interface. [Figure: an annotated screenshot showing the call list and call list queries, detailed call information, field service engineer information and availability selection, a dialer, pager messages and sending of pager messages, tips and tip filtering, call logging, and cell phone access to the voice bulletin board (1. make topic, 2. list topic, 3. add message, 4. delete topic).]

Figure 7. Improvement in problem solving with prestored knowledge versus traditional approach. [Figure: a bar chart of time in minutes (0–90) for six problem types (printer troubleshooting, printer repair, motherboard troubleshooting, motherboard repair, network troubleshooting, network repair), comparing the traditional approach with MoCCA.]

students from different majors performed a group problem-solving task to design a remote control for a stereo system; four groups used Idealink, and the other four used a standard whiteboard. We videotaped all sessions and examined the recordings to determine how frequently users from each group requested clarification (see Table 4). We observed two event types. Explicit communications offered explicit verbal references to a particular object or region of the drawing area ("Look at the box on the left"). Implicit communication indirectly referred to an object or drawing area region ("Look at that one"). Errors occurred when a team member misunderstood another team member's reference. Idealink not only reduced the number of communications but also reduced the number of communication errors by providing smaller regions in which collaborators could more easily focus their attention.
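The session-coding scheme just described (explicit references, implicit references, and errors) reduces each videotaped session to per-type counts. The helper below is an illustrative sketch of that tally, not the study's actual analysis code:

```python
from collections import Counter

# Event types used when coding the videotaped sessions.
EVENT_TYPES = {"explicit", "implicit", "error"}


def tally_events(events):
    """Count coded events for one session, rejecting unknown labels
    so that coding mistakes surface immediately."""
    counts = Counter(events)
    unknown = set(counts) - EVENT_TYPES
    if unknown:
        raise ValueError(f"unrecognized event types: {sorted(unknown)}")
    return counts


# A hypothetical coded session, not data from the study.
session = ["explicit", "implicit", "explicit", "error", "implicit", "explicit"]
counts = tally_events(session)
```

Comparing such per-session counts across the Idealink and whiteboard groups is what supports the claim that Idealink reduced both communications and errors.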

Context-aware collaboration

We designed two experiments to test the hypothesis that a context-aware cell phone could change caller and driver behaviors. Experiment 1 tested whether remote cell-phone callers would slow or stop their conversation with a driver when signaled. Experiment 2 tested whether a driver's performance while speaking on a cell phone would be improved by slowing or stopping the remote callers' conversation.

We asked 24 participants to role-play a person seeking to rent an apartment. Each participant made successive cell phone calls while driving to three "landlords," played by the experimenter. We gave participants a list of questions to ask the landlord about each apartment (for example, how many bedrooms the apartment had). At a prespecified point in each call, the landlord would unexpectedly pause for 10 seconds. Results showed that the callers spoke fewer than half the number of sentences during the pause when they were sent a signal compared to when the driver remained silent. The spoken message, "The person you have called is busy; please hold," was the most effective signal.

The second experiment used a driving simulator composed from a virtual-reality authoring environment that let users navigate a vehicle through a test track. Before beginning the experiment, the 20 participants practiced using the driving simulator until they said they felt comfortable. Participants then completed one circuit on the driving simulator under each of three conditions: control (no phone call), call without pause, and call with pause. Results showed that talking on the cell phone caused people to crash more (6.8 crashes) compared to driving without a call (3.55 crashes). Inducing pauses during the call caused the driver to crash less (3.65 crashes) when using the cell phone.

Our results show that a driver using a context-aware cell phone that interrupts the caller during dangerous driving conditions could make driving while talking on the cell phone safer. We are building a context-aware cell phone that will recognize these conditions and induce such a pause to effectively direct the driver's attention to the driving task.

To effectively integrate wearable computers into ubiquitous computing environments, we must address several important challenges. How do we develop social and cognitive application models? How do we integrate input from multiple sensors and map it to users' social and cognitive states? How do we anticipate user needs? How do we interact with users? Our four paradigms break these challenges into manageable design and evaluation tasks to ensure that applications developed for specific domains best meet users' needs.

Our future work will focus on developing a virtual coach deployed as a wearable augmented-cognition platform and software application. This system will monitor users' cognitive load, assess cognitive performance online, and route tasks to underloaded users. Providing users with immediate suggestions for cognitive augmentation and arbitrating the redeployment of resources will further enhance performance.
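At its core, routing tasks to underloaded users is a load-aware dispatch problem. A minimal sketch, assuming each user carries a scalar cognitive-load estimate in [0, 1] (the data structure, capacity threshold, and names are hypothetical, not the virtual coach's actual design):

```python
from typing import Dict, Optional

def route_task(task_name: str, load_estimates: Dict[str, float]) -> Optional[str]:
    """Pick the user with the lowest estimated cognitive load.

    Returns None when even the least-loaded user is near capacity,
    signaling that the task should be deferred or renegotiated.
    The 0.8 capacity threshold is an illustrative placeholder.
    """
    if not load_estimates:
        return None
    candidate = min(load_estimates, key=load_estimates.get)
    return candidate if load_estimates[candidate] <= 0.8 else None
```

The interesting research questions lie upstream of this function: producing trustworthy online load estimates from physiological and behavioral sensors, which is exactly what the planned platform is meant to address.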

ACKNOWLEDGMENTS

We thank the US National Science Foundation, DARPA, and the Pennsylvania Infrastructure Technology Alliance (PITA) for funding support.

REFERENCES

1. A. Smailagic and D.P. Siewiorek, "The CMU Mobile Computers: A New Generation of Computer Systems," Proc. IEEE COMPCON 94, IEEE Press, Piscataway, N.J., 1994, pp. 467–473.

2. A. Smailagic et al., "Benchmarking an Interdisciplinary Concurrent Design Methodology for Electronic/Mechanical Design," Proc. 32nd ACM/IEEE Design Automation Conf. (DAC 95), ACM Press, New York, 1995, pp. 514–519.

3. D.P. Siewiorek, A. Smailagic, and J.C. Lee, "An Interdisciplinary Concurrent Design Methodology as Applied to the Navigator Wearable Computer System," J. Computer and Software Eng., vol. 2, no. 3, Oct. 1994, pp. 259–292.

4. A. Smailagic et al., "Very Rapid Prototyping of Wearable Computers: A Case Study of Custom versus Off-the-Shelf Design Methodologies," J. Design Automation for Embedded Systems, vol. 3, nos. 2–3, Mar. 1998, pp. 217–230.

5. L. Najjar, J.C. Thompson, and J.J. Ockerman, "A Wearable Computer for Quality Assurance in a Food-Processing Plant," Proc. 1st Int'l Symp. Wearable Computers, IEEE CS Press, Los Alamitos, Calif., 1997, pp. 163–164.

6. A. Smailagic, "An Evaluation of Audio-Centric CMU Wearable Computers," ACM J. Special Topics in Mobile Networking, vol. 6, no. 2, 1998, pp. 59–68.

TABLE 4
Idealink user evaluation.

Collaboration event              Whiteboard   Idealink
Explicit communication events        54          33
  Errors and difficulties             3           0
Implicit communication events        40           7
  Errors and difficulties             2           0

7. M. Bauer et al., "A Collaborative Wearable System with Remote Sensing," Proc. 2nd Int'l Symp. Wearable Computers, IEEE CS Press, Los Alamitos, Calif., 1998, pp. 10–17.

8. A. Smailagic et al., "MoCCA: A Mobile Communication and Computing Architecture," Proc. 3rd Int'l Symp. Wearable Computers, IEEE CS Press, Los Alamitos, Calif., 1999, pp. 64–71.

9. S. Feiner, B. MacIntyre, and T. Höllerer, "A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment," Proc. 1st Int'l Symp. Wearable Computers, IEEE CS Press, Los Alamitos, Calif., 1997, pp. 74–81.

10. D. Marinelli and S.M. Stevens, "Synthetic Interviews: The Art of Creating a 'Dyad' between Humans and Machine-Based Characters," Proc. 4th IEEE Workshop Interactive Voice Technology for Telecommunications Applications, IEEE Press, Piscataway, N.J., 1998.

11. E.R. Girden, ANOVA Repeated Measures, Sage Publications, Thousand Oaks, Calif., 1992.

12. R.E. Kraut, S.R. Fussell, and J. Siegel, "Situational Awareness and Conversational Grounding in Collaborative Bicycle Repair," submitted for publication, 2001; www.andrew.cmu.edu/user/sfussell/Manuscripts_Folder/Bicycle_Repair.pdf.

13. D. Garlan et al., "Project Aura: Toward Distraction-Free Pervasive Computing," IEEE Pervasive Computing, vol. 1, no. 2, Apr.–Jun. 2002, pp. 22–32.

the AUTHORS

Asim Smailagic is the director of the Laboratory for Interactive Computer Systems and Wearable Computers and a research professor at the Institute for Complex Engineered Systems at Carnegie Mellon University's College of Engineering. His research interests include wearable computers, mobile and pervasive computing, reliability, and audio and visual interfaces. He served as program committee chair for the 2002 IEEE Computer Society Symposium on VLSI. Contact him at Inst. for Complex Engineered Systems, Carnegie Mellon Univ., 5000 Forbes Ave., Pittsburgh, PA 15213; [email protected]; www-2.cs.cmu.edu/~asim.

Daniel Siewiorek is the director of the Human–Computer Interaction Institute and a professor of electrical and computer engineering at Carnegie Mellon University. His research interests include wearable computing, fault-tolerant computing, and reliability. Contact him at the Human–Computer Interaction Inst., Carnegie Mellon Univ., 5000 Forbes Ave., Pittsburgh, PA 15213-3891; [email protected]; www-2.cs.cmu.edu/~dps.
