
Mediascapes: Context-Aware Multimedia Experiences

IEEE MultiMedia, July–September 2007. 1070-986X/07/$25.00 © 2007 IEEE. Published by the IEEE Computer Society.

Multimedia at Work
Editor: Qibin Sun, Institute for Infocomm Research

Stuart P. Stenton, Richard Hull, Patrick M. Goddi, Josephine E. Reid, Ben J. Clayton, Tom J. Melamed, and Susie Wee
Hewlett-Packard Laboratories

The IT industry boasts its longstanding mantra to deliver "anything, anytime, anywhere."

Here we describe research addressing the next generation of mobility technology, which will deliver "the right experience in the right moment."

The maturing field of pervasive computing yields the technology and the challenges described here. We focus on rich interactive mobile experiences triggered by context information available from the users, their environment, and a wealth of context-enabled content.

We call such applications mediascapes.

Recent developments

For more than a decade, scientists have demonstrated the potential value of the combination of portable computing, embedded sensors, and pervasive networking.1-5 Two recent developments have catalyzed activity in the subarea of Location-Based Services (LBS):

❚ the availability of GPS sensors in consumer devices, primarily for satellite navigation, and

❚ the integration of Geographical Information Systems (GIS) technology into the Web through map and satellite image interfaces.

Satellite navigation systems deliver a location to the user and a route to get there. Computer scientists have applied this technology to deliver other types of experiences, such as guided information tours (see http://www.gocarsf.com/) and location-based games (see http://www.pacmanhattan.com/). Now that we have maps capable of interfacing with GIS content, we can accelerate the process of getting digital content connected to locations through physical-world metadata. We think of LBS as the first generation, or a subcategory, of the broader context-based mediascapes.

Mediascapes and mscape

Mediascapes infuse the landscape of our everyday environment with digital content and services. They deliver compelling user experiences when the user interacts with the physical world. They hail from a world of sensor-enabled, sometimes network-connected devices accessing context-coded information and services. Simply put, a mediascape plays multimedia content (image, audio, or video) on a mobile device in response to context triggers. Users can employ any sensors to provide context and trigger multimedia, including location sensors, infrared (IR) beacons, radio frequency identification (RFID) tags, motion sensors, heart-rate monitors, and other biomonitors.

Until recently, technology specialists built applications for consumption in a managed environment (for example, at a conference or demonstration event), for research, or as a commercial rental. A technology we call mscape enables a broader range of designers from the creative industries to build and explore the possibilities of context-based interactive media applications.

Editor's Note

With today's abundance of captured and created graphics, images, audio, and video in hand, can we make full use of this media to explore new experiences and applications? Along these lines, HP Labs has developed a prototype mediascapes technology called mscape. A mediascape is a context-aware multimedia experience that triggers multimedia content based on your context, such as your physical location. Although similar concepts have been proposed piecemeal before, mediascapes offer the user some totally new experiences. Want to know more details? Follow me into the world of mediascapes.

—Qibin Sun

Anyone can create, distribute, share, and play mediascapes. The mscape technology makes it easy to create and play context-aware multimedia experiences. It has an easy-to-use authoring tool that lets people create their own mediascapes by importing content and using simple logical rules to combine sensed events with media playback. A scripting language that ties together the media content, sensor trigger events, and context logic conveys the result.
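The article doesn't show the scripting syntax itself, but the core idea, binding a sensed event to a media action, can be sketched in plain Python. Everything here (the `Region` class, `on_enter`, the crude distance math) is illustrative, not the actual mscape language:

```python
# Hypothetical sketch of the kind of rule an mscape script encodes:
# a sensed event (a GPS reading entering a region) fires a media action.
# Names and the flat-earth distance check are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    lat: float
    lon: float
    radius_m: float

    def contains(self, lat: float, lon: float) -> bool:
        # Crude planar distance check, adequate for small regions.
        dlat = (lat - self.lat) * 111_000   # metres per degree latitude
        dlon = (lon - self.lon) * 78_000    # rough value at mid-latitudes
        return (dlat**2 + dlon**2) ** 0.5 <= self.radius_m

rules = []  # (region, media file) pairs: the "context logic"

def on_enter(region: Region, media_file: str):
    rules.append((region, media_file))

def gps_update(lat: float, lon: float):
    """Called for each sensor reading; fires the first matching rule."""
    for region, media_file in rules:
        if region.contains(lat, lon):
            return f"play {media_file}"
    return None

on_enter(Region("fountain", 51.454, -2.588, 20), "fountain_story.mp3")
print(gps_update(51.454, -2.588))  # -> play fountain_story.mp3
```

In the real tool this rule is authored graphically by drawing the region on a map and dropping the media file onto it.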

The results of public trials and user research have guided the development of this scripting language.6 These studies showed developers how to refine the toolkit and shaped significant decisions about the direction of the research. A publishing platform, available at http://www.mscapers.com, makes mscape technology accessible through a Web portal that lets people share and distribute mediascapes. It also provides a place where the emerging community of mediascape builders can share, experience, and develop best-practice guidelines for design in the new medium.

Mediascape researchers have studied the user experience since the project's inception, collecting user data with every pilot and deployment. Each public pilot added to our knowledge of the medium, including how physical and digital experiences could be effectively fused and the different ways people want to explore its potential.

Mobile Bristol

The HP Mediascape project originated from collaboration among HP, the University of Bristol, and The Appliance Studio, with matching funding from the UK Government's Department of Trade and Industry. During the life of this project, called Mobile Bristol (see http://www.mobilebristol.com), researchers carried out a number of public and educational trials and created a prototype authoring toolkit.7 This early prototype, the Mobile Bristol Toolkit, was available for download on the Mobile Bristol Web site. Over 1,000 downloads worldwide have fueled the emergence of a mediascape design community in Europe, the US, and Canada.

After the completion of the Mobile Bristol project, the HP team redesigned and built a new toolkit from the ground up, continuing to use public trials as guides. They created two new Web sites. The first, designed by Futurelab (see http://www.createascape.org), accessed a limited version of the toolkit for use in schools. The second, the mscapers publishing portal, let users create, download, and share mediascapes. To date, artists, filmmakers, broadcasters, educationalists, students, authors, and researchers have all created mediascapes.

The mediascape experience

A mediascape experience is media-rich, context-aware, physical, and mobile, and it can be social or personal as well. The media used can include images, video, audio, and Flash interactions. What makes a mediascape experience different from other rich media experiences delivered on mobile devices is the logic that specifies its relevance to the physical situation, that is, a person's context. For example, if the person walks into a specific space, the device triggers the media content according to the logic assigned to that space. This logic may specify a behavior that depends on the number of times a person has entered the space. The first time a person enters a space, the device may trigger a longer audio description of what they can do there, but on each subsequent visit the device may trigger a shorter audio stream with different content. Just like meeting a nonplayer character (NPC) in a video game, a shared context begins to evolve.
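The visit-count behavior described above amounts to a small piece of state attached to each space. A minimal sketch in Python (the counter and the clip filenames are made up for illustration):

```python
# Sketch of visit-count-dependent triggering: a long introduction on the
# first entry, a shorter clip on revisits. Filenames are illustrative.

from collections import defaultdict

visits = defaultdict(int)  # per-region entry counter (the shared context)

def entered(region: str) -> str:
    """Return the clip to play for this entry into the region."""
    visits[region] += 1
    if visits[region] == 1:
        return f"{region}_intro_long.mp3"
    return f"{region}_revisit_short.mp3"

print(entered("courtyard"))  # first visit  -> courtyard_intro_long.mp3
print(entered("courtyard"))  # second visit -> courtyard_revisit_short.mp3
```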

The simplest way to approximate a mediascape experience, in the absence of sensing capabilities, is for the audience to self-report by manually triggering the delivery of media. Applications like pod tours, guides (see http://www.alcatraz.us/), Urban Tapestries,8 and Yellow Arrow (see http://yellowarrow.net/index2.php) all work in this way. Google's My Maps could also be used to create this form of simple application.

Cheap, low-power sensors are steadily becoming more widely available. The Nintendo Wii and Sony PlayStation 3 successfully use accelerometers. Some laptops also use them to shut down the hard drive if the machine is dropped, minimizing the damage when it hits the floor. Technologists have created digital compasses the size of a postage stamp, and the medical field contributes a whole host of biosensors.

Additional Resources

The following additional resources might be of interest to the reader:

❚ The latest mscape toolkit and mediascapes are available at http://www.mscapers.com.

❚ The schools' mediascape kit is available at http://www.createascape.org.

❚ The early Mobile Bristol Toolkit is available at http://www.mobilebristol.com.

The challenges for architects of this new medium include the following:

❚ How do they make sense of the data from these sensors?

❚ How do they combine data from these sensors?

❚ What are the most appropriate abstractions of sensor data for designers?

❚ What are the semantics of contextual events that will allow situations to be described and identified as a combination of sensor triggers?

Mediascapes can provide a social experience that brings people together and forms communities within a broader audience. On 2 July 2005, The Washington Post described Yellow Arrow as "geographical blogging," referring to the way the world can be tagged with personal experience and commentary. In this way, communities can communicate through and about their physical environment. Inhabitants or visitors to a space can access the social history and presence of a locale as they pass through.

As mediascapes become more integrated with communications such as instant messaging, chat rooms, voice, and video, developers face challenges in integrating these modalities as triggers and media feeds in a mediascape and in making sure the network prioritizes the different types of traffic accordingly to maintain the experience.

The mscape authoring tool

For an authoring tool to fulfill the potential of mediascapes as a new medium, it must support the following:

❚ an extensible language for describing context,

❚ the specification of context events and consequences,

❚ a representation of contextual state,

❚ the storage and management of media files,

❚ an authoring interface that allows nonprogrammers to explore new genres of mediascape, and

❚ an emulator for testing the contextual states and consequences.

As the complexity of sensed activity and context states increases, the challenge will be to keep the authoring interface simple and accessible. Authors with a broad range of skills and perspectives must be able to explore new applications and genres of mediascapes for the medium to realize its full potential. Creators of such technology rarely also originate its popular and emergent uses.

For location-based mediascapes, the mscapers authoring tool has a graphical user interface that lets authors specify regions where content is to be triggered. The tool allows authors to import media content, including images, video, and audio. The author can then drop media content onto the regions where it should be played (see Figure 1).

The tool also allows event-based logic to determine how the media should be played. It allows the creator to specify an action on entering a region, such as "play video," and on exiting the region, such as "stop video." In the case of audio, authors can fade files in or out, loop them, or play them to the end.

Figure 1. The mscape authoring tool allows people to create location-based mediascapes. It is available for download at http://www.mscapers.com.

Furthermore, the tool supports state variables that allow the author to specify actions based on state. This, for example, allows the author to specify playing one media clip when a user first enters a region and a different media clip when the user re-enters the region over the course of the session. State variables can also be employed across regions. This allows the author to specify logic such as playing a media clip when a person enters a region, but only if the person has visited another region first.
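The cross-region logic (play a clip in one region only if another was visited first) reduces to a shared state variable. A minimal illustrative sketch, not the mscape syntax; region names and filenames are invented:

```python
# Sketch of a cross-region state variable: entering region B triggers a
# clip only if region A has been visited first in this session.

state = {"visited_a": False}  # state variable shared across regions

def on_enter_region(region: str):
    """Return the clip to play on entering a region, or None."""
    if region == "A":
        state["visited_a"] = True
        return "a_intro.mp3"
    if region == "B" and state["visited_a"]:
        return "b_payoff.mp3"  # only plays once A has been visited
    return None

print(on_enter_region("B"))  # None: entering B before A stays silent
print(on_enter_region("A"))  # a_intro.mp3
print(on_enter_region("B"))  # b_payoff.mp3
```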

Advanced developers can extend the authoring tool to add drivers for any sensing device. The programmers built the authoring platform on top of an extensible plug-in architecture that allows the addition of new sensors and the specification of new contexts.
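The article doesn't document the plug-in API, but one way such a sensor plug-in architecture might look is a common driver interface plus a registry that the runtime polls. All names here (`SensorDriver`, `poll`, `register`) are hypothetical:

```python
# Illustrative sensor plug-in architecture: each driver implements a
# common interface and registers itself; the runtime polls all drivers
# for context events. Not the actual mscape driver API.

from abc import ABC, abstractmethod

class SensorDriver(ABC):
    @abstractmethod
    def poll(self):
        """Return a context event dict, or None if nothing new."""

_drivers = []  # registry of installed sensor plug-ins

def register(driver: SensorDriver):
    _drivers.append(driver)

class HeartRateDriver(SensorDriver):
    """A toy driver that replays canned heart-rate readings."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def poll(self):
        bpm = next(self._readings, None)
        return {"type": "heart_rate", "bpm": bpm} if bpm else None

register(HeartRateDriver([72, 95]))

def pump():
    """Collect one round of events from all registered drivers."""
    return [e for d in _drivers for e in [d.poll()] if e]

print(pump())  # [{'type': 'heart_rate', 'bpm': 72}]
```

A new sensor type then only needs a new `SensorDriver` subclass; the context logic downstream stays unchanged.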

The mscape software client

A software player of mediascapes created with the mscape authoring tool requires two things of the handheld device: it must run the Windows Mobile operating system, and it must connect to the required sensors. The sensor a mediascape most likely requires is GPS. Some devices have integrated GPS, but plug-in or Bluetooth-connected GPS receivers will also work.

Users can download the mscape player and existing mediascapes from http://www.mscapers.com. The mscape player supports both the playing and the authoring of mediascapes. Some authors tie mediascapes to a specific location where the experience depends deeply on the fusion of digital content with landmarks in the physical surroundings. Others choose to make portable mediascapes that are less dependent on specific physical locations and might only require an open space. Users can load these onto a player without specific location information and roll them out like a digital canvas in a suitable space.

Media format and scripting language

The format of the new medium contains a scripting language that draws upon a file store of content comprising a number of media types: HTML; MP3 or WAV audio; JPEG or GIF images; MPEG or WMV video; and SWF Flash interactions. The scripting language pulls together the data from the sensors and holds the logic that connects this data to the delivery of media fragments from the content store. The author can choose the method of media delivery and the location of the content depending on the demands of the application.

Playing modes

Many mediascape applications can be held on removable storage cards, such as Secure Digital (SD) memory cards. This method of storage and delivery works well for applications that the user can download from a PC before leaving the house or office, or over a wireless network, and that don't require frequent updates to provide timely information. Figure 2 shows a basic mediascape client that has user input, on-device sensors, stored media on the device, and scripting logic that ties these together into the mediascape experience.

A mediascape client can also use beacons to trigger media content (see Figure 3). For example, an author can place an IR beacon with an ID in a specific location; a mediascape client with an IR sensor can then be swiped past the beacon to trigger a media event. In addition, the beacon itself may have sensors to sense the context of the specific location and convey the sensed information to the mediascape client through the IR interface or over the network.
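Beacon triggering boils down to mapping a decoded beacon ID to a media event; a practical client would also debounce repeated sightings of the same beacon. A hedged sketch, where the IDs, filenames, and the 30-second window are invented for illustration:

```python
# Sketch of beacon-triggered playback with simple debouncing, so a
# client lingering near a beacon doesn't retrigger the same clip.
# Beacon IDs, filenames, and the debounce window are assumptions.

BEACON_MEDIA = {0x01: "guard_warning.wav", 0x02: "room_history.mp3"}
DEBOUNCE_S = 30.0
_last_seen = {}  # beacon ID -> time of last accepted sighting

def on_beacon(beacon_id, now):
    """Called each time the IR sensor decodes a beacon ID."""
    media = BEACON_MEDIA.get(beacon_id)
    if media is None:
        return None  # unknown beacon
    if now - _last_seen.get(beacon_id, float("-inf")) < DEBOUNCE_S:
        return None  # seen too recently; suppress the retrigger
    _last_seen[beacon_id] = now
    return media

print(on_beacon(0x02, now=0.0))   # room_history.mp3
print(on_beacon(0x02, now=10.0))  # None (debounced)
print(on_beacon(0x02, now=45.0))  # room_history.mp3
```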

In future implementations, mediascapes can also be built with a client-server architecture using streaming media over a wireless network. This is particularly useful if the mediascape content needs frequent updating because it reflects rapidly changing or time-based information, or the concurrent actions of others (as in a multiplayer game). Figure 4 shows the resulting system. The mediascape client may have media on the device itself. In addition, the device could interact with a media content server that sends updated media to the client.

Figure 2. A basic mediascape client has user input, on-device sensors, stored media on the device, and scripting logic that ties these together into the mediascape experience.

Figure 3. Mediascape clients can also use networked sensors, local sensors, and beacons to trigger media events.

Figure 4. Mediascapes can be built with a client-server architecture where the media can be stored on a media content server and streamed over a wireless network to the mediascape client.

Multiplayer games can operate in different modes (see Figure 5). They can run in a peer-to-peer mode where the mediascape clients interact with each other directly through networking connections. Alternatively, multiplayer mediascapes can interact through a multiplayer server (see Figure 6), which holds the logic for the mediascape experience.

It is also useful to consider the mobile device's capabilities and the network bandwidth. With enough network bandwidth, the mediascape could adopt an in-network rendering model (as Figure 7 shows), where a machine in the network renders the player's view and streams the resulting video to the player through a regular video streaming connection. This way, the device only needs to decode a video stream (for example, an MPEG) rather than rendering full graphics.

On the other hand, if the device has sufficient computing capabilities but little network bandwidth, it could go into a mode where users send commands over the network, but the device itself renders the game through the software client.

Delivering mediascapes with these capabilities will require contextual intelligence in the network and will place heavy demands on the network's ability to manage demands for its bandwidth.

So far we've considered portable sensors connected to the handheld device and the delivery of media via that same device. We've also developed mediascapes that use IR beacons to provide location-specific context. Data from sensors in the environment or in the network infrastructure could trigger the media in future mediascapes and then deliver it to output devices present in the user's location, such as public displays, home high-fidelity systems, or cinemas.

Building a community of practice

This budding field has the potential to grow into a major revolution in multimedia, spreading into different formats across various industries. However, as with all technologies at this delicate formative stage, we must build a strong community to foster that growth.

Growing experience and design expertise

Designing mediascapes requires a new set of skills, techniques, and artistry. The merging of virtual content with physical space extends the boundaries of classic human–computer interaction. For some applications, the art of good mediascape design lies in the right choice of media. In some situations, the new medium demands using more audio to augment the visual nature of the physical environment: an audience could lose a balanced fusion of the physical and digital if they spend most of their time looking at a mobile device's screen. The new medium is still in its early stages, and new design guidelines emerge from each exploratory application.

A computer screen with a mouse and keyboard is no longer the only form of interaction. In the physical world of mobile applications, where movement can trigger different media, metaphors such as the desktop don't make sense.

We need to establish a new set of design guidelines and interaction styles, which requires new skills to evolve among the authoring community. Though it certainly won't happen overnight, a prolonged learning curve could lead to failure of the technology's adoption. To secure the future of the new medium, a wide spectrum of potential authors need to get their creative hands on the required design skills and authoring capabilities.

Figure 5. Multiplayer mediascapes can be implemented in a peer-to-peer mode where mediascape clients interact with each other directly through network connections to provide inputs to the mediascape logic.

Figure 6. Multiplayer mediascapes can also be built with a client-server architecture. In this case, mediascape clients interact with a multiplayer server to perform the logic for the mediascape experience.


Fostering the emergence of use

When the Lumière brothers invented the cinematograph, the first moving picture camera, they didn't conceive of today's multibillion-dollar film industry. They thought they had invented a device for capturing and reviewing moving photographs. Sometime later, entrepreneurs with a different perspective recognized the value of delivering narrative as an engaging experience.

Many technologies prove more valuable than their original purposes suggested. Technologists developed mediascape technology with this principle in mind. For this reason, developers produced platforms and tools for creating, sharing, and experiencing mediascapes and have made these tools available to the public. Once developers make the technology available within its initially targeted markets, the lasting value emerges over time, and its adoption accelerates.

Publishing platform and Web portal

Just as blogging and podcasting platforms have stimulated rapid growth in the number of digital media authors and created demand from enthusiastic consumers, a mediascape publishing platform has the potential to do the same. The mscapers Web portal attempts to build a mediascape community of authors and consumers by providing access to creation tools and a means of broader distribution. Users can browse through existing mediascapes and download them to their handheld device via their PC.

The publishing platform allows authors to upload mediascapes they've created using the authoring tool or one of the Web template wizards, which make it easier for users to create their first portable mediascape. Consumers and authors can rate and discuss the mediascapes published on the site, sharing best practices and creative solutions through forums, as well as posting design guidelines on a community wiki.

Deployments

To date, users have applied mediascapes in education, games, art, guided tours, social narratives, and time travel (historical reconstructions). The early adopters have used the embryonic technology in the areas of arts, education, gaming, and broadcasting. These individuals and organizations possess the creative skills and motivation to explore new forms of expression. Other sources describe installations9-11 including Riot, Savannah, CitiTag, and Yosemite (see http://video.telecomtv.com/hp/MscapeWIP3.wmv).

Here we describe three mediascape installations, tested and evaluated in public trials: experimental installations in the Tower of London and across three city blocks of San Francisco (see http://userwww.sfsu.edu/~plevine/projects/mediascape/mediascape.html), and finally a game that sports scientists and game designers created, called 'Ere be Dragons (see http://lansdown.mdx.ac.uk/people/stephen/dragons/index.html), which has toured Europe, Singapore, and the US. 'Ere be Dragons was the first mediascape to deliver interactive media based on location and heart rate (see http://www.ipsi.fraunhofer.de/ambiente/pergames2006/final/PG_Davis_Dragons.pdf).

Tower of London: Entertaining and educating the younger visitors

Creators of the Tower of London experiment (see Figure 8) aimed to engage the younger visitors in the history of the tower. In this game, visitors would help past prisoners escape from the tower in the manner that they actually escaped. The developers deployed the pilot for one week during the school vacations. They imported a map of the tower into the authoring tool. They used two sensors: GPS for absolute location and a short-range radio beacon, carried by the Yeomen guards, for proximity detection. If a player got too close to a guard while helping a prisoner escape, the prisoner was caught and the player was sentenced to virtual years in the tower.

Figure 7. Mediascapes can be experienced on a standard streaming media client by using a network rendering model that renders the mediascape on the server and then encodes and streams the resulting video to the streaming media client. Context can be provided by the client or by the infrastructure.

Figure 8. A mediascape game was built for the Tower of London and deployed in a pilot for visitors to experience. Yeomen guards played a role in the game by carrying short-range radio beacons that were used for proximity detection by nearby mediascape clients.

Questionnaires completed after playing the game suggested the mediascape had met its goals. The players enjoyed the game, especially avoiding the warders. They could name the prisoners and their escape routes, and they were happy to play the game for well over an hour, much longer than they would tolerate a traditional audio tour.

Scape the Hood: Three blocks of San Francisco's Mission District

As a flagship event at the 10th Annual Digital Storytelling Festival, storytellers from San Francisco State University, the KQED public broadcasting station, Hewlett-Packard, and the local community created a mediascape in three parts (see Figure 9). Each part covered a city block and aimed to enliven the neighborhood with stories of its inhabitants, present and past.

The creators used only GPS. The first block covered the activities of the local artist community. Street art enriches this area, and local artists added digital stories to augment the murals around the streets and to describe the history of the area and the community.

The second block took the visitor on a time traveler's journey back to when the Ohlone tribes inhabited the marshlands of what is now the Mission District. Ambient sounds and tribal stories stripped away the layers of concrete to reveal the grasslands that were once there.

The final block took another time shift, but of a much shorter span. The mediascape played recordings of the ambience and stories of a regular Saturday morning flea market for visitors to experience throughout the rest of the week, when the space was just an empty parking lot. Users find this part of the mediascape all the more poignant, as the lot has subsequently been built on. However, around the perimeter of the new apartment block, visitors can still sample the stories of a community that used to meet once a week to trade and socialize, and whose presence, like that of the Ohlone tribes, is only available to the time-traveling mediascaper.

'Ere be Dragons: Digital gardening for the physically active

The universities of Middlesex and Nottingham collaborated with a game company called Active Ingredient to create 'Ere be Dragons. The Middlesex University researchers wanted to create a video game that encouraged people to exercise in the physical world. They used GPS and heart-rate monitors.

This is a terra-forming game in which users spend 20 minutes moving around the streets of a city (any city; see Figure 10a). As they do so, the game creates a virtual landscape in the shape of the city's street network (see Figure 10b). It gives users feedback on their heart rate and its proximity to their ideal rate. As users get closer to their ideal heart rate, the game creates a more luxuriant virtual landscape. If they're moving too slowly, they get desert. If they're going too fast, they get thick woodland. Users score points for the quality of the landscape on their return.
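The heart-rate mapping can be pictured as banding the reading around a target rate. The band width and the 70%-of-estimated-maximum target below are assumptions for illustration; the game as described specifies only the slow/ideal/fast outcomes:

```python
# Illustrative sketch of a heart-rate-to-landscape mapping. The target
# formula (70% of 220 - age) and the ±10 bpm band are assumed, not taken
# from the 'Ere be Dragons design.

def ideal_rate(age: int) -> float:
    # Common rule of thumb: ~70% of estimated maximum heart rate.
    return 0.7 * (220 - age)

def terrain(bpm: float, age: int, tolerance: float = 10.0) -> str:
    target = ideal_rate(age)
    if bpm < target - tolerance:
        return "desert"           # moving too slowly
    if bpm > target + tolerance:
        return "thick woodland"   # pushing too hard
    return "lush grassland"       # in the ideal band: best score

age = 30  # ideal rate is 133 bpm for this age
print(terrain(100, age))  # desert
print(terrain(133, age))  # lush grassland
print(terrain(170, age))  # thick woodland
```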

To add a social component to the game, competitors can claim your landscape by traveling over it with a higher heart rate. If they do this before you return to base, they steal your points for the land they’ve claimed.
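The feedback and claiming rules described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the actual game: the function names, terrain labels, and the 5-beats-per-minute tolerance are all assumptions chosen to show the shape of the mechanic.

```python
# Hypothetical sketch of the 'Ere be Dragons feedback rules. The closer a
# player's heart rate is to their ideal rate, the more luxuriant the terrain
# generated for the street cell they just traversed. All names and
# thresholds here are illustrative, not taken from the actual game.

def terrain_for(heart_rate: int, ideal_rate: int) -> str:
    """Map heart-rate proximity to a terrain type for one map cell."""
    delta = heart_rate - ideal_rate
    if abs(delta) <= 5:          # near the ideal rate: lush landscape
        return "meadow"
    if delta < 0:                # moving too slowly
        return "desert"
    return "woodland"            # pushing too hard

def can_claim(owner_rate: int, challenger_rate: int) -> bool:
    """A competitor claims a cell by crossing it with a higher heart rate."""
    return challenger_rate > owner_rate
```

Scoring would then sum a quality value per cell on the player's return, with claimed cells credited to the challenger instead.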

Like the Tower of London game, this exemplifies going beyond simply using GPS location as a context trigger. The game meets its goal of encouraging the right level of exercise, and it has gained popularity during its tour around the world.

Figure 9. A mediascape called Scape the Hood was built and deployed in a San Francisco neighborhood as part of the Digital Storytelling Festival to create a mediascape experience with the present and past inhabitants of the neighborhood.


Challenges and future work

On the input side, developers have used sensors that include GPS, IR and RF beacons, RFID tags, digital compasses, and heart-rate monitors. Integrating new sensors’ data into the mscape scripting language is relatively easy. As new sensors become available, technologists can create new plug-ins. The real challenge lies in defining the semantics of the new contexts these sensors reveal, along with creating authoring interfaces that make describing and using these new contexts easy for a broad range of skill sets. In the future, we foresee the possibility of using sensing components in a mobile operator’s network, such as location servers, presence servers, and group-list management servers.
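A sensor plug-in of the kind described above might look like the following sketch. The class names and the publish/subscribe event model are assumptions made for illustration; they are not the actual mscape plug-in API.

```python
# Hypothetical plug-in interface for exposing a new sensor's readings to a
# mediascape scripting layer. A plug-in publishes named context values, and
# the scripting layer subscribes to them to evaluate triggers. All names
# here are illustrative, not the real mscape API.

from typing import Any, Callable, Dict, List

class SensorPlugin:
    """Base class: a plug-in publishes named context values as they change."""

    def __init__(self) -> None:
        self._listeners: List[Callable[[str, Any], None]] = []

    def subscribe(self, listener: Callable[[str, Any], None]) -> None:
        self._listeners.append(listener)

    def publish(self, name: str, value: Any) -> None:
        for listener in self._listeners:
            listener(name, value)

class HeartRatePlugin(SensorPlugin):
    """Wraps a heart-rate monitor and publishes 'heart_rate' context events."""

    def on_reading(self, bpm: int) -> None:
        self.publish("heart_rate", bpm)

# The scripting layer keeps a context dictionary that triggers can query.
context: Dict[str, Any] = {}
hr = HeartRatePlugin()
hr.subscribe(lambda name, value: context.update({name: value}))
hr.on_reading(132)
```

Adding a new sensor then means writing one subclass; the harder problem, as noted above, is deciding what the published values mean to an author.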

On the output side, developers are challenged to make mediascape players available to the widest number of people, so anyone can experience the new medium on an everyday basis. This means delivering the player across a wide range of handheld formats and creating mediascapes that tune themselves to the sensors available to the player, gracefully degrading as the number of recommended sensors decreases. MM
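One way to realize this graceful degradation is for the player to advertise the sensors it actually has and for the mediascape to fall back to the richest experience variant it can still support. The variant names and sensor sets below are illustrative assumptions, not part of any shipped player.

```python
# Hypothetical sketch of sensor-based graceful degradation: variants are
# listed richest first, each with the sensors it requires, and the player
# picks the first one whose requirements are met. Names are illustrative.

from typing import List, Set, Tuple

PREFERRED_VARIANTS: List[Tuple[Set[str], str]] = [
    ({"gps", "compass", "heart_rate"}, "full"),
    ({"gps", "compass"}, "directional"),
    ({"gps"}, "location_only"),
    (set(), "manual_triggers"),   # no sensors: user taps hotspots by hand
]

def pick_variant(available: Set[str]) -> str:
    """Choose the richest experience variant whose sensor needs are all met."""
    for required, variant in PREFERRED_VARIANTS:
        if required <= available:   # subset test: all required sensors present
            return variant
    return "manual_triggers"
```

Because the final fallback requires no sensors at all, every device gets some version of the experience.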

References

1. T. Kindberg and J. Barton, “A Web-Based Nomadic Computing System,” Computer Networks, vol. 35, no. 4, 2001, pp. 443-456.

2. H.W. Gellersen, A. Schmidt, and M. Beigl, “Multi-Sensor Context-Awareness in Mobile Devices and Smart Artifacts,” Mobile Networks and Applications (MONET), Springer Netherlands, 2002, pp. 341-351.

3. G.D. Abowd et al., “Prototypes and Paratypes: Mixed Methods for Designing Mobile and Ubiquitous Computing Applications,” IEEE Pervasive Computing, vol. 4, no. 4, 2005, pp. 67-73.

4. E.J. Selker and W. Burleson, “Context Aware Design and Interaction in Computing Systems,” IBM Systems J., vol. 39, nos. 3-4, 2000, pp. 617-632.

5. C. Randell and H.L. Muller, “The Well Mannered Wearable Computer,” Personal and Ubiquitous Computing, vol. 6, no. 1, 2002, pp. 31-36.

6. J. Reid et al., “Parallel Worlds: Immersion in Location-Based Experiences,” Proc. SIGCHI Conf. Human Factors in Computing Systems, ACM Press, 2005, pp. 1733-1736.

7. R. Hull, B. Clayton, and T. Melamed, “Rapid Authoring of Mediascapes,” Ubiquitous Computing: 6th Int’l Conf. (UbiComp), Springer Berlin/Heidelberg, 2004, pp. 125-142.

8. G. Lane, “Urban Tapestries: Wireless Networking, Public Authoring, and Social Knowledge,” Proc. 1st Int’l Conf. Appliance Design, Springer-Verlag, 2003, pp. 18-23.

9. M. Blythe et al., “Interdisciplinary Criticism: Analysing the Experience of Riot! A Location-Sensitive Digital Narrative,” Behaviour and Information Technology, vol. 25, no. 2, 2006, pp. 127-139.

10. K. Facer et al., “Savannah: Mobile Gaming and Learning?” J. Computer Assisted Learning, vol. 20, 2004, pp. 399-409.

11. Y. Vogiazou et al., “Design for Emergence: Experiments with a Mixed Reality Urban Playground Game,” Personal and Ubiquitous Computing, vol. 11, no. 1, 2007, pp. 45-58.

Readers may contact Susie Wee at [email protected].


Figure 10. The ‘Ere be Dragons mediascape game was designed to provide a physical experience that encourages healthy exercise using (a) GPS and (b) heart-rate monitors.
