
    Literature Review in Learning with Tangible Technologies

    REPORT 12: FUTURELAB SERIES

    Claire O'Malley, Learning Sciences Research Institute, School of Psychology, University of Nottingham
    Danae Stanton Fraser, Department of Psychology, University of Bath

    ACKNOWLEDGEMENTS

    We would like to thank Keri Facer for invaluable comments and encouragement in the preparation of this review. We also gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC) and the European Commission (4th and 5th Framework Programmes) in funding some of the work reviewed here (eg KidStory, Shape, Equator).

    ABOUT FUTURELAB

    Futurelab is passionate about transforming the way people learn. Tapping into the huge potential offered by digital and other technologies, we are developing innovative learning resources and practices that support new approaches to education for the 21st century.

    Working in partnership with industry, policy and practice, Futurelab:

    incubates new ideas, taking them from the lab to the classroom

    offers hard evidence and practical advice to support the design and use of innovative learning tools

    communicates the latest thinking and practice in educational ICT

    provides the space for experimentation and the exchange of ideas between the creative, technology and education sectors.

    A not-for-profit organisation, Futurelab is committed to sharing the lessons learnt from our research and development in order to inform positive change to educational policy and practice.

    FOREWORD

    When we think of digital technologies in schools, we tend to think of computers, keyboards, sometimes laptops, and more recently whiteboards and data projectors. These tools are becoming part of the familiar educational landscape. Outside the walls of the classroom, however, there are significant changes in how we think about digital technologies or, to be more precise, how we don't think about them, as they disappear into our clothes, our fridges, our cars and our city streets. This disappearing technology, blended seamlessly into the everyday objects of our lives, has become known as 'ubiquitous computing'. Which leads us to ask the question: what would a school look like in which the technology disappeared seamlessly into the everyday objects and artefacts of the classroom?

    This review is an attempt to explore this question. It maps out the recent technological developments in the field, discusses evidence from educational research and psychology, and provides an overview of a wide range of challenging projects that have attempted to use such disappearing computers (or 'tangible interfaces') in education – from digitally augmented paper, to toys that remember the ways in which a child moves them, to playmats that record and play back children's stories. The review challenges us to think differently about our future visions for educational technology, and begins to map out a framework within which we can ask how best we might use these new approaches to computing for learning.

    As always, we are keen to hear your comments on this review: [email protected]

    Keri Facer
    Learning Research Director
    Futurelab


    CONTENTS:

    EXECUTIVE SUMMARY

    SECTION 1: INTRODUCTION

    SECTION 2: TANGIBLE USER INTERFACES: NEW FORMS OF INTERACTIVITY

    SECTION 3: WHY MIGHT TANGIBLE TECHNOLOGIES BENEFIT LEARNING?

    SECTION 4: CASE STUDIES OF LEARNING WITH TANGIBLES

    SECTION 5: SUMMARY AND IMPLICATIONS FOR FUTURE RESEARCH, APPLICATIONS, POLICY AND PRACTICE

    GLOSSARY

    BIBLIOGRAPHY


    EXECUTIVE SUMMARY

    The computer is now a familiar object in most schools in the UK today. However, outside schools different approaches to interacting with digital information and representations are emerging. These can be considered under the term 'tangible interfaces', which attempts to overcome the difference between the ways we input and control information and the ways this information is represented. These tangible interfaces may be of significant benefit to education by enabling, in particular, younger children to play with actual physical objects augmented with computing power. Tangible technologies are part of a wider body of developing technology known as 'ubiquitous computing', in which computing technology is so embedded in the world that it disappears.

    Tangible technologies differ in terms of the behaviour of control devices and resulting digital effects. A contrast is made between input devices where the form of user control is arbitrary and has no special behavioural meaning with respect to output (eg using a generic tool like a mouse to interact with the output on a screen), and input devices which have a close correspondence in behavioural meaning between input and output (eg using a stylus to draw a line directly on a tablet or touchscreen). The form of such mappings may result in one-to-many relations between input and output (as in arbitrary relations between a mouse, joystick or trackpad and various digital effects on a screen), or one-to-one relations (as in the use of special purpose transducers where each device has one function).

    Tangibles also differ in terms of the degree of metaphorical relationship between the physical and digital representation. They can range from being completely analogous, in the case of physical devices resembling their digital counterparts, to having no analogy at all. They also differ in terms of the role of the control device, irrespective of its behaviour and representational mapping. So, for example, a control device might play the role of a container of digital information, a representational token of a digital referent, or a generic tool representing some computational function. Finally, these technologies differ in terms of degree of embodiment. This means the degree of attention paid to the control device as opposed to that which it represents; completely embodied systems are where the user's primary focus is on the object being manipulated rather than the tool being used to manipulate the object. This can be more or less affected by the extent of the metaphor used in mapping between the control device and the resulting effects.

    In general there are at least two senses in which tangible user interfaces strive to achieve 'really direct manipulation':

    1 In the mappings between the behaviour of the tool (physical or digital) which the user uses to engage with the object of interest.

    2 In the mappings between the meaning or semantics of the representing world (eg the control device) and the represented world (eg the resulting output).

    Research from psychology and education suggests that there can be real benefits for learning from tangible interfaces. Such technologies bring physical activity and


    active manipulation of objects to the forefront of learning. Research has shown that, with careful design of the activities themselves, children (older as well as younger) can solve problems and perform in symbol manipulation tasks with concrete physical objects when they fail to perform as well using more abstract representations. The point is not that the objects are concrete and therefore somehow easier to understand, but that physical activity itself helps to build representational mappings that serve to underpin later more symbolically mediated activity, after practice and the resulting explicitation of sensorimotor representations.

    However, other research has shown that it is important to build in activities that support children in reflecting upon the representational mappings themselves. This work suggests that focusing children's attention on symbols as objects may make it harder for them to reason with symbols as representations. Some researchers argue for cycling between what they call 'expressive' and 'exploratory' modes of learning with tangibles.

    A number of examples of tangible technologies are presented in this review. These are discussed under four headings: digitally augmented paper, physical objects as icons ('phicons'), digital manipulatives and sensors/probes. The reason for treating digital paper as a distinct category is to make the point that, rather than ICT replacing paper and book-based learning activities, these can be enhanced by a range of digital activities, from very simple use of cheap barcodes printed on sticky labels and attached to paper, to the much more sophisticated use of video tracing in the augmented reality examples. However, even this latter technology is accessible for current UK schools, without the need for special head-mounted displays to view the augmentation. For example, webcams and projectors can be used with the augmented reality software to overlay videos and animations on top of physical objects.

    Although many of the technologies reviewed in the section on phicons involve fairly sophisticated systems, once again, it is possible to imagine quite simple and cheap solutions. For example, barcode tags and tag readers are relatively inexpensive. Tags can be embedded in a variety of objects. The only slightly complex aspect is that some degree of programming is necessary to make use of the data generated by the tag reader.
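    To give a sense of how little programming is actually needed: many inexpensive USB barcode readers present themselves as keyboards, delivering each scanned code as a line of text, so the 'glue' code can be as small as a lookup table from tag IDs to classroom actions. The sketch below is a hypothetical illustration only – the tag IDs and action names are invented, not taken from any product mentioned in this review.

```python
# Hypothetical mapping from barcode tag IDs to classroom actions
# (IDs and action names are invented for illustration).
TAG_ACTIONS = {
    "0451023": "play_audio:peter_rabbit_page_3",
    "0451024": "show_animation:planet_mars",
}

def handle_scan(tag_id: str) -> str:
    """Return the action bound to a scanned tag, or a fallback for unknown tags."""
    # Readers that emulate keyboards often append a newline; strip it first.
    return TAG_ACTIONS.get(tag_id.strip(), "unknown_tag")

print(handle_scan("0451023"))  # prints play_audio:peter_rabbit_page_3
```

    In a real classroom deployment the lookup would be driven by whatever input stream the reader provides, but the principle – scanned ID in, digital activity out – stays this simple.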

    Finally, implications are drawn for future research, applications and practice. Although there is evidence that, for example, physical action with concrete objects can support learning, its benefits depend on particular relationships between action and prior knowledge. Its benefits also depend on particular forms of representational mapping between physical and digital objects. For example, there is quite strong evidence suggesting that, particularly with young children, if physical objects are made too realistic they can actually prevent children learning about what the objects represent.

    Other research reviewed here has also suggested that so-called 'really direct manipulation' may not be ideal for learning applications, where often the goal is to encourage the learner to reflect and abstract. This is borne out by research showing that transparent or really easy-to-use interfaces sometimes lead to less effective problem solving. This is not an


    argument that interfaces for learning should be made difficult to use – the point is to channel the learner's attention and effort towards the goal or target of the learning activity, not to allow the interface to get in the way. In the same vein, Papert argued that allowing children to construct their own interface (ie build robots or write programs) focused the child's attention on making their implicit knowledge explicit. Other researchers support this idea and argue that effective learning should involve both expressive activity, where the tangible represents or embodies the learner's behaviour (physically or digitally), and exploratory activity, where the learner explores the model embodied in the tangible interface.

    1 INTRODUCTION

    1.1 AIMS AND OUTLINE OF THE REVIEW

    The aims of this review are to:

    introduce the concept of tangible computing and related concepts such as augmented reality and ubiquitous computing

    contrast tangible interfaces with current graphical user interfaces

    outline the new forms of interactivity made possible by tangible user interfaces

    outline some of the reasons, based on research in psychology and education, why tangible technologies might provide benefits for learning

    present some examples of tangible user interfaces for learning

    use these examples to illustrate claims for the benefits of tangible interfaces for learning

    provide a preliminary evaluation of the benefits and limitations of tangible technologies for learning.

    We do not wish to argue that tangible technologies are superior to other current and accepted uses of ICT for learning. We wish to open up the mind of the reader to new possibilities of enhancing teaching and learning with technology. Some of these possibilities are achievable with relatively simple and cheap technologies (eg barcodes). Others are still in the early stages of development and involve more sophisticated uses of video-based image analysis or robotics. The point is not the sophistication of the technologies but the innovative forms of interactivity they enable


    and, it is hoped, the new possibilities for learning that they provide.

    We have tried in this review to outline the interactional capabilities afforded by tangible technology by analysing the properties of forms of action and representation embodied in their use. We have also reviewed some of the relevant research from developmental psychology and education in order to draw out implications for the potential benefits and limitations of these approaches to learning and teaching. Although many of the educational examples given here refer to use by young children, several also involve children of secondary school age. In general we feel that the interactional and educational principles involved can be applied to learning in a wide variety of age groups and contexts.

    1.2 BEYOND THE SCHOOL COMPUTER

    Few nowadays would disagree that information and communication technology (ICT) is important for learning and teaching, whether the argument is vocational (it's important for young people to become skilled or literate in ICT to prepare them for life beyond school) or techno-romantic (ICT provides powerful learning experiences not easily achieved through other means – see Simon 1987; Underwood and Underwood 1990; Wegerif 2002). There is no doubt that there have been tremendous strides in the take-up and use of ICT in both formal school and informal home settings in recent years, and that there have been many positive impacts on children's learning (see Cox et al 2004a; Cox et al 2004b; Harrison et al 2002). However, developments in the use of ICT in schools tend to lag behind developments in other areas of life (the workplace, home). At the turn of the 21st century, the use of computers in schools largely concerns the use of desktop (increasingly laptop) machines, whilst outside of school the use of computers consists increasingly of personal mobile devices which bear little resemblance to those desktop machines of the 1980s.

    Technology is now increasingly embedded in our everyday lives. For example, when we go shopping, barcodes are used by supermarkets to calculate the price of our goods. Microchips embedded in our credit cards and magnetic strips embedded in our loyalty cards are used by supermarkets and banks (as well as other agencies we may not know about) to debit our bank accounts, do stock control, predict our spending patterns and so on. Sensors in public places such as the street, stores and our workplace are used to record our activities (eg reading car number plates, detecting our entrance and exit via radio frequency ID tags or pressure or motion sensors in buildings). We communicate by mobile phone and our location can be identified by doing so. We use remote control devices to control our entertainment systems, the opening and closing of our garages or other doors. Our homes are controlled by microcomputers in our washing machines, burglar alarms, lighting, heating and so on.

    This is as much true for school-aged children as it is for adults: the average home with teenagers owns 3.5 mobile phones (survey by ukclubculture, reported in The Observer, 17 October 2004); teenagers prefer to buy their music online rather than buying CDs; 80% of teenagers access the internet from


    home. Young people's experience with technology outside of school is far more sophisticated than the grey/beige box on the desk with its monitor, mouse and keyboard.

    It is probably safe to say that most schools in the UK (both primary and secondary) now have fairly well-stocked ICT suites or laboratories, where banks of PCs and monitors line the walls or sit on desks in rows. It is less likely that they occupy central stage in the classroom where core subject teaching and learning takes place, but many schools are now adopting laptops and so this may be gradually changing. A few schools may even be experimenting with PDAs. Many schools use interactive whiteboards or smartboards. However, despite the changes in technology, the organisation of teaching and learning practice is relatively unchanged from pre-ICT days – in fact one might be tempted to argue that interactive whiteboards are popular with teachers largely because they can easily incorporate their use into existing practices of face-to-face whole class teaching. The smartboard is the new blackboard, just as the PC/laptop/palmtop is the new slate (or paper). There are occasional examples of innovative uses of technology, but the vast majority of use of technology in learning arguably hasn't changed very much since the early days of the BBC Micro in the early 1980s.

    However, outside schools there has been a growing trend for technology to move beyond the traditional model of the desktop computer. In the field of human-computer interaction (HCI), traditional desktop metaphors are now being supplanted by research into the radically different relationships between real physical environments and digital worlds¹. Outside the school we are beginning to see the development of what is becoming known as 'tangible interfaces'.

    1.3 TANGIBLES: FROM GRAPHICAL USER INTERFACES (GUIs) TO TANGIBLE USER INTERFACES (TUIs)

    It's probably useful at this point to introduce a brief history of the term 'tangibles'.

    Digital spaces have traditionally been manipulated with simple input devices such as the keyboard and mouse, which are used to control and manipulate (usually visual) representations displayed on output devices such as monitors, whiteboards or head-mounted displays. Tangible interfaces (as they have become known) attempt to remove this input-output distinction and try to open up new possibilities for interaction that blend the physical and digital worlds (Ullmer and Ishii 2000).

    When reviewing the background to TUIs most people tend to refer to a paper by Hiroshi Ishii and Brygg Ullmer of MIT Media Lab, published in 1997 (Ishii and Ullmer 1997). They coined the phrase 'tangible bits' as:

    "an attempt to bridge the gap between cyberspace and the physical environment by making digital information (bits) tangible." (Ishii and Ullmer 1997, p235)

    In using the term 'bits' they are making a deliberate pun – we use the term 'bits' to


    ¹ See for example www.equator.ac.uk

    refer to physical things, but in computer science the term 'bits' refers also to digital things (ie binary digits). The phrase 'tangible bits' therefore attempts to consider both digital and physical things in the same way.

    Tangible interfaces emphasise touch and physicality in both input and output. Often tangible interfaces are closely coupled to the physical representation of actual objects (such as buildings in an urban planning application, or bricks in a children's block building task). Applications range from rehabilitation – for example, tangible technologies that enable individuals to practise making a hot drink following a stroke (Edmans et al 2004) – to object-based interfaces running over shared distributed spaces, creating the illusion that users are interacting with shared physical objects (Brave et al 1998).

    The key issue to bear in mind in terms of the difference between typical desktop computer systems and tangible computing is that in typical desktop computer systems (so-called graphical user interfaces or GUIs) the mapping between the manipulation of the physical input device (eg the point and click of the mouse) and the resulting digital representation on the output device (the screen) is relatively indirect and loosely coupled. You are making one sort of movement to have a very different movement represented on screen. For example, if I use a mouse to select a menu item in a word processing application, I move the mouse on a horizontal surface (my physical desktop) in 2D in order to control a graphical pointer (the cursor) on the screen. The input is physical but the output is digital. In the case of a desktop computer, the mapping between input and output is fairly obviously decoupled because I have to move the mouse in 2D on a horizontal surface, yet the output appears on a vertical 2D plane. However, even if I use a stylus on a touchscreen (tablet or PDA), there is still a sense of decoupling of input and output because the output is in a different representational form to the input – ie the input is physical but the output is digital.
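    Just how loose this coupling is can be sketched in a few lines of code: desktop systems commonly apply a velocity-dependent transfer function ('pointer acceleration') between mouse movement and cursor movement, so the digital effect is not even a fixed scaling of the physical action. The function below is an invented toy illustration of the idea, not any real operating system's mapping.

```python
# Toy illustration of the indirect physical-to-digital mapping in a GUI:
# the same hand movement produces a different cursor movement depending
# on how fast the mouse is moved (a crude 'pointer acceleration').
def cursor_delta(counts: int, speed: float, base_gain: float = 2.0) -> float:
    """Map physical mouse movement (sensor counts) to on-screen pixels."""
    gain = base_gain * (1.0 + min(speed, 4.0))  # gain rises with speed, capped
    return counts * gain

print(cursor_delta(10, 0.0))  # slow movement: 20.0 pixels
print(cursor_delta(10, 3.0))  # fast movement: 80.0 pixels for the same counts
```

    A tangible interface has no analogue of this transfer function: moving the abacus bead IS the change in the representation, which is exactly the blending the next paragraph describes.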

    In contrast, tangible user interfaces (TUIs) provide a much closer coupling between the physical and digital, to the extent that the distinction between input and output becomes increasingly blurred. For example, when using an abacus, there is no distinction between inputting information and its representation – this sort of blending is what is envisaged by tangible computing.

    1.4 DEFINING SOME KEY CONCEPTS: TANGIBLE COMPUTING, UBIQUITOUS COMPUTING AND AUGMENTED REALITY

    Tangible computing is part of a wider concept known as ubiquitous computing. The vision of ubiquitous computing is usually attributed to Mark Weiser, late of Xerox PARC (Palo Alto Research Center). Weiser published his vision for the future of computing in a pioneering article in Scientific American in 1991 (Weiser 1991). In it he talked about a vision where the digital world blends into the physical world and becomes so much part of the background of our consciousness that it disappears. He draws an analogy with print – text is a form of symbolic representation that is completely ubiquitous and pervasive in our physical environment (eg on


    billboards, signs, in shop windows etc). As long as we are skilled users of text, it takes no effort at all to scan the environment and process the information. The text just disappears into the background and we engage with the content it represents, effortlessly. Contrast this with the experience of visiting a foreign country where not only do you not know the language, but the alphabet is completely unfamiliar to you – eg an English visitor in China or Saudi Arabia – suddenly the text is not only virtually impossible to decipher, it actually looks very strange to see the world plastered with what seem to you like hieroglyphs (which, of course, for the Chinese or Arabic reader, are invisible). So, the vision of ubiquitous computing is that computing will become so embedded in the world that we don't notice it – it disappears. This has already happened today to some extent. Computers are embedded in light switches, cars, ovens, telephones, doorways, wristwatches.

    In Weiser's vision, ubiquitous computing is 'calm technology' (Weiser and Brown 1996). By this, he means that instead of occupying the centre of the user's attention all the time, technology moves seamlessly and without effort on our part between occupying the periphery of our attention and occupying the centre. Ishii and Ullmer (1997) also make a distinction in ubiquitous computing between the foreground or centre of the user's attention and the background or periphery of their attention. They talk about the need to enable users both to grasp and manipulate foreground digital information using physical objects, and to provide peripheral awareness of information available in the ambient environment. The first case is what most people refer to nowadays as tangible computing; the second case is what is generally referred to as ambient media (or ambient intelligence, augmented space and so on). Both are part of the ubiquitous computing vision, but this review will focus only on the first case.

    A third related concept in tangible computing is augmented reality (AR). Whereas in virtual reality (VR) the goal is often to immerse the user in a computational world, in AR the physical world is augmented with digital information. Paul Milgram coined the phrase 'mixed reality' to refer to a continuum between the real world and virtual reality (Milgram and Kishino 1994). AR is where, for example, video images of real scenes are overlaid with 3D graphics. In augmented virtuality, displays of a 3D virtual world (possibly immersive) are overlaid with video feeds from the real world.


    Fig 1: The continuum of mixed reality environments (from Milgram and Kishino 1994). The Virtuality Continuum (VC) runs from the Real Environment, through Augmented Reality (AR) and Augmented Virtuality (AV), to the Virtual Environment; everything between the two extremes is Mixed Reality (MR).

    As we shall see, many examples of so-called tangible computing have employed AR techniques and there are some interesting educational applications. Generally, the distinction between ubiquitous computing (or 'ubicomp'), tangible technology, augmented reality and ambient media has been blurred – or at least the areas overlap considerably. Dourish (2001), for example, refers to them all as tangible computing.

    One of the earliest examples of tangible computing was Bricks, a 'graspable user interface' proposed by Fitzmaurice, Ishii and Buxton (Fitzmaurice et al 1995). This consisted of bricks (like LEGO bricks) which could be attached to virtual objects, thus making the virtual objects physically graspable.

    Fitzmaurice et al cite the following properties of graspable user interfaces. Amongst other things, these interfaces:

    allow for more parallel input specification by the user, thereby improving the expressiveness or the communication capacity with the computer

    take advantage of well developed, everyday, prehensile skills for physical object manipulations (cf MacKenzie and Iberall 1994) and spatial reasoning

    externalise traditionally internal computer representations

    afford multi-person, collaborative use.

    We will return to this list of features of tangible technologies a little later on to consider their benefits or affordances for learning.

    The sheer range of applications means it is out of this report's scope to adequately describe all tangible interface examples. In this review, we focus specifically on research in the area of tangible interfaces for learning.

    1.5 WHY MIGHT TANGIBLES BE OF BENEFIT FOR LEARNING?

    Historically children have played individually and collaboratively with physical items such as building blocks, shape puzzles and jigsaws, and have been encouraged to play with physical objects to learn a variety of skills. Montessori (1912) observed that young children were intensely attracted to sensory development apparatus. She observed that children used materials spontaneously, independently, repeatedly and with deep concentration. Montessori believed that playing with physical objects enabled children to engage in self-directed, purposeful activity. She advocated children's play with physical manipulatives as tools for development².

    Resnick extended the tangible interface concept for the educational domain in the term 'digital manipulatives' (Resnick et al 1998), which he defined as familiar physical items with added computational power which were aimed at enhancing children's learning. Here we discuss physical and tangible interfaces – physical in that the interaction is based on movement, and tangible in that objects are to be touched and grasped.

    Figures 2 and 3 show two examples of recent applications of tangible technologies for educational use.


    ² www.montessori-ami.org/ami.htm


    Fig 2: These images show the MagiPlanet application developed at the Human Interface Technology Laboratory New Zealand³ (Billinghurst 2002).

    Imagine a Year 5 teacher is trying to help her class understand planetary motion. They are standing around in a circle looking down on a table on which are marked nine orbital paths around the Sun. The children have to place cards depicting each of the planets in its correct orbit. As they do, they can see overlaid onto the cards a 3D animation of that planet spinning on its own axis and orbiting around the Sun. They can pick up the card and rotate it to see the surface of the planet in more detail, or its moons.

    In Figure 2 the physical table is augmented with overlaid 3D animations that are tied to particular physical locations. These 3D animations provide visualisations appropriate to the meaning of the gestures used by the children to interact with the display. Instead of observing a pre-set animation, or using a mouse to direct an animation, the child's physical gestures themselves control the movement displayed. This example is called MagiPlanet and is one of a range of applications of AR developed by Mark Billinghurst and his group at the Human Interface Technology Laboratory New Zealand (Billinghurst 2002). The application of this technology for learning in UK classrooms is currently being explored by Adrian Woolard and his colleagues at the BBC's Creative R&D, New Media and Technology Unit⁴ (Woolard 2004).

    Figure 3 illustrates a tangible technology called I/O Brush, developed by Kimiko Ryokai, Stefan Marti and Hiroshi Ishii of MIT Media Lab's Tangible Media Group (Ryokai et al 2004). It is showcased at BETT 2005⁵ as part of Futurelab's Innovations exhibition.

    Tangibles have been reported as having the potential for providing innovative ways for children to play and learn, through novel forms of interacting and discovering and the capacity to bring the playfulness back into learning (Price et al 2003). Dourish (2001) discusses the potential of tangible bits, where the digital world of information is coupled with novel arrangements of electronically-embedded physical objects, providing different forms of user interaction and system behaviour, in contrast with the standard desktop.

    ³ www.hitlabnz.org/index.php?page=proj-magiplanet
    ⁴ www.becta.org.uk/etseminars/presentations/presentation.cfm?seminar_id=29&section=7_1&presentation_id=68&id=2608
    ⁵ www.bettshow.co.uk

    Familiar objects such as building bricks, balls and puzzles are physically manipulated to make changes in an associated digital world, capitalising on people's familiarity with their way of interacting in the physical world (Ishii and Ullmer 1997). In so doing, it is assumed that more degrees of freedom are provided


    ⁶ http://web.media.mit.edu/~kimiko/iobrush/

    Fig 3: These images show the I/O Brush system developed by Kimiko Ryokai, Stefan Marti and Hiroshi Ishii of MIT Media Lab's Tangible Media Group⁶ (Ryokai et al 2004).

    Imagine a reception class is creating a story together. Their teacher has been reading from a storybook about Peter Rabbit during literacy hour over the past few weeks. Today they are trying to create an animated version of the story to show in assembly. They are using special paintbrushes which they can sweep over the picture of Peter Rabbit in the storybook. One child wants to draw a picture of Peter Rabbit hopping. He places the brush over the picture of Peter Rabbit and as he does so he makes little hopping motions with his hand. Then he paints the brush over the display screen that the class are working with, and as he does a picture of the rabbit appears, hopping with the same movements that the child made when he painted over the storybook. The special paintbrush has picked up the image, with its colours, together with the movements made by the child and transferred these physical attributes to a digital animation.

  • for exploring, manipulating and reflectingupon the behaviour of artefacts and theireffects in the digital world. In relation tolearning, such tangibles are thought toprovide different kinds of opportunities for reasoning about the world throughdiscovery and participation (Soloway et al1994; Tapscott 1998). Tangible-mediatedlearning also has the potential to allowchildren to combine and recombine theknown and familiar in new and unfamiliarways encouraging creativity and reflection(Price et al 2003).

    1.6 SUMMARY

While the computer is now a familiar object in most schools in the UK today, outside schools different approaches to interacting with digital information and representations are emerging. These can be considered under the term 'tangible interfaces', which attempt to overcome the difference between the ways we input and control information and the ways this information is represented. These tangible interfaces may be of significant benefit to education by enabling, in particular, younger children to play with actual physical objects augmented with computing power. Tangible technologies are part of a wider body of developing technology known as 'ubiquitous computing', in which computing technology is so embedded in the world that it 'disappears'. The next section of this review will discuss the key attributes of these technologies in more detail before moving on, in Section 3, to discuss their potential educational applications.

    2 TANGIBLE USER INTERFACES: NEW FORMS OF INTERACTIVITY

In this section of the review we discuss the different ways in which researchers in the field have classified types of tangible user interfaces. Each of the frameworks we describe emphasises slightly different properties of these interfaces, which present some potentially interesting ways to think about the educational possibilities of tangible technologies.

The conceptual frameworks we describe classify and analyse the properties of tangible technologies in terms of the human-computer interface. They share several characteristics, but differ in the type and number of dimensions and in the degree to which they restrict or relax the space of possible technologies. All of them, albeit in different ways, incorporate a notion of representational mapping, ranging from tight coupling (iconic or indexical) to loose coupling (symbolic, arbitrary), whether in terms of the referents of an operation (physical or digital) or in terms of the nature of the referring expression (operation or manipulation) and its effects.

2.1 DIRECT MANIPULATION INTERFACES

Prior to the mid-80s, interaction with computers consisted of typing special commands on a keyboard, which resulted in the output of alphanumeric characters on a teleprinter or monitor, usually restricted to 24 rows of around 80 characters per line. In the mid-80s there was something of a revolution in personal computing. Instead of the previous


command-line interfaces and keyboards, technologies were developed which enabled users to interact much more directly, via a pointing device (the mouse) and a graphical user interface employing the now familiar desktop system with windows, icons and menus. The earliest example of such systems was the Xerox Star office application (Smith et al 1982), the precursor to the Microsoft Windows and Apple Macintosh operating systems.

Researchers in HCI called such systems 'direct manipulation' interfaces (Hutchins et al 1986; Shneiderman 1983). Hutchins et al (1986) analysed the properties of so-called direct manipulation interfaces and argued that there were two components to the impression the user could form about the 'directness' of manipulating the interface: the degree to which input actions map onto output displays in an interactional or articulatory sense (psychologists refer to this as 'stimulus-response compatibility'), and the degree to which it is possible to express one's intentions or goals with the input language (semantic directness).

Articulatory directness refers to the extent to which the behaviour of an input action (eg moving the mouse) maps directly or otherwise onto the effects on the display (eg the cursor moving from one position to another). In other words, a system has articulatory directness to the extent that the form of the manipulation of the input device mirrors the corresponding behaviour on the display. This is a similar concept to stimulus-response compatibility (SRC) in psychology. An example of SRC in the physical world is where the motion of turning a car steering wheel maps onto the physical act of the car turning in the appropriate direction.

Semantic directness, in Hutchins et al's analysis, referred to the extent to which the meaning of a digital representation (eg an icon representing a wastebasket) mapped onto its physical referent in the real world (ie a real wastebasket).

In terms of tangible user interfaces, similar analyses have been carried out concerning the mapping between (i) the form of physical controls and their effects and (ii) the meaning of representations and the objects to which they refer. We outline these analyses below.

    2.2 CONTROL AND REPRESENTATION

In this section we consider the extension of Hutchins et al's analysis of direct manipulation to the design of tangible user interfaces.

As argued earlier, in traditional human-computer interfaces (ie graphical user interfaces or GUIs), a distinction is typically made between input and output. For example, the mouse is an input device and the screen is the output device. In tangible user interfaces this distinction disappears, in two senses:

1 In tangible interfaces, the device that controls the effects that the user wants to achieve may be at one and the same time both input and output.

2 In GUIs the input is normally physical and the output is normally digital, but in tangible user interfaces there can be a variety of mappings of digital-to-physical representations, and vice versa. Ullmer and Ishii explain this by analogy with one of the earliest forms of computer, the abacus:


"In particular, a key point to note is that the abacus is not an input device. The abacus makes no distinction between input and output. Instead, the beads, rods, and frame of the abacus serve as manipulable physical representations of abstract numerical values and operations. Simultaneously, these component artefacts also serve as physical controls for directly manipulating their underlying associations." (Ullmer and Ishii 2000)

The term 'physical representation' is important here, because Ullmer and Ishii wish to argue that it is the representational significance of a tangible device that makes it different from a mouse, for example, which has little representational significance (ie a mouse isn't meant to 'mean' anything). They explain the difference by describing an interface for urban planning where, instead of icons on a screen controlled by a mouse, physical models of buildings are used as physical representations of actual buildings: instead of manipulating digital representations, you are manipulating the models (physical representations), which then also manipulate a digital display (Underkoffler and Ishii 1999).

The key issue here is that instead of there being a distinction between the controlling device (the mouse) and the representation (images of a city on screen), in this sort of interaction the control device (physical models of buildings) is also a physical representation, which is tightly coupled with an additional digital representation.

Ullmer and Ishii point out several characteristics of this sort of approach:

• physical representations are coupled to digital information

• physical representations embody mechanisms for interactive control

• physical representations are perceptually coupled to actively mediated digital representations

• the physical state of tangibles embodies key aspects of the digital state of the system.

Koleva et al (2003) develop Ullmer and Ishii's approach a bit further. They distinguish between different types of tangible interfaces in terms of degree of 'coherence': ie whether the physical and the digital artefacts are seen as one common object that exists in both the physical and digital worlds, or whether they are seen as separate but temporally interlinked objects. The weakest level of coherence in their scheme is seen in general-purpose tools, where one physical object may be used to manipulate any number of digital objects. An example is the mouse, which controls several different functions (eg menus, scrollbars, windows, check boxes) at different times. In computer science terms, these are referred to as 'time-multiplexed' devices (Fitzmaurice et al 1995). In contrast, one can have physical objects as input devices where each object is dedicated to performing one and only one function: these are referred to as 'space-multiplexed' devices. An example in tangible computing is the use of physical buildings in the Urp urban planning application of Underkoffler and Ishii (op cit).
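The time- versus space-multiplexed distinction can be made concrete with a small sketch. The class and method names below are our own illustration, not drawn from Fitzmaurice et al:

```python
# Illustrative sketch (our own naming): a time-multiplexed device is
# re-bound to different digital objects over time, while a
# space-multiplexed device is permanently dedicated to one object.

class TimeMultiplexedDevice:
    """One physical object (eg a mouse) controls many digital objects, one at a time."""
    def __init__(self, name):
        self.name = name
        self.target = None            # current digital object; changes over time

    def acquire(self, digital_object):
        self.target = digital_object  # eg clicking a scrollbar, then a menu

    def manipulate(self, action):
        return f"{self.name} -> {self.target}: {action}"


class SpaceMultiplexedDevice:
    """One physical object permanently bound to one digital object (eg a model building in Urp)."""
    def __init__(self, name, digital_object):
        self.name = name
        self.target = digital_object  # fixed for the lifetime of the device

    def manipulate(self, action):
        return f"{self.name} -> {self.target}: {action}"


mouse = TimeMultiplexedDevice("mouse")
mouse.acquire("scrollbar")
print(mouse.manipulate("drag"))       # mouse -> scrollbar: drag
mouse.acquire("menu")
print(mouse.manipulate("click"))      # same device, new target

building = SpaceMultiplexedDevice("model building", "digital building on map")
print(building.manipulate("rotate"))  # always the same target
```

The design point is that the space-multiplexed device carries its target in its very construction, which is what allows the physical object to serve as a representation and not merely a controller.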

Space-multiplexed devices are what one might call tightly coupled in terms of mappings between physical and digital representations, whereas time-multiplexed devices are less tightly coupled. Unless one allows for this range of interface types, there would be a very small class of


devices or interfaces that could count as tangible and, in fact, many examples which claim to employ tangible user interfaces for learning do not lie on the strict, tightly coupled end of this continuum.

The strongest level of coherence (physical-digital coupling) in the scheme of Koleva et al is where there is the illusion that the physical and digital representations are the same object. An example in the world of tangible computing is the Illuminating Clay system (Piper et al 2002), where a piece of clay can be manipulated physically and, as a result, a display projected onto its surface is altered correspondingly.

A second kind of distinction which Koleva et al make is in the nature (meaning) of the representational mappings between physical and digital. In the case of a literal or one-to-one mapping, any actions with the physical devices should correspond exactly to the behaviour of the digital object. In the case of a transformational mapping there is no such direct correspondence (eg placing a physical object on a particular location might trigger an animation).
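The two kinds of mapping can be illustrated with a minimal sketch. The function names and the 'hotspot' behaviour are our own invented examples, not from Koleva et al:

```python
# Sketch of literal (one-to-one) vs transformational mappings.
# The names and the hotspot behaviour are our own illustration.

def literal_mapping(physical_rotation_deg):
    """Literal: rotating the physical object rotates its digital twin identically."""
    return {"digital_rotation_deg": physical_rotation_deg}

def transformational_mapping(physical_event):
    """Transformational: an action triggers a qualitatively different digital effect."""
    if physical_event == "placed_on_hotspot":
        return "play_animation"
    return "no_effect"

print(literal_mapping(45))                            # {'digital_rotation_deg': 45}
print(transformational_mapping("placed_on_hotspot"))  # play_animation
```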

    2.3 CONTAINERS, TOKENS AND TOOLS

Another classification of tangible technologies is provided by Holmquist et al (1999). They distinguish between 'containers', 'tokens' and 'tools'. They define containers as generic objects that can be associated with any type of digital information. For example, Rekimoto (1997) demonstrated how a pen could be used as a container to 'pick-and-drop' digital information between computers, analogous to the 'drag-and-drop' facility for a single computer screen. However, even though containers have some physical analogy in their functionality, they do not necessarily reflect physically that function nor their use: they lack natural affordances in the Gibsonian sense (Gibson 1979; Norman 1988, 1999).

In contrast, tokens are objects that physically resemble the information they represent in some way. For example, in the metaDESK system (Ullmer and Ishii 1997)7, a set of objects was designed to physically resemble buildings on a digital map. By moving the physical buildings around on the physical desktop, relevant portions of the map were displayed.

Finally, in Holmquist et al's analysis, tools are defined as physical objects which are used as representations of computational functions. Examples include the Bricks system referred to at the beginning of this review (Fitzmaurice et al 1995), where the bricks are used as handles to manipulate digital information. Another example is the use of torches or flashlights to interact with digital displays, as in the Storytent system8 (Green et al 2002), described in more detail in Section 4.

    2.4 EMBODIMENT AND METAPHOR

Fishkin (2004) provides a two-dimensional classification involving the dimensions 'embodiment' and 'metaphor'. By


7 http://tangible.media.mit.edu/projects/metaDESK/metaDESK.html
8 http://machen.mrl.nott.ac.uk/Challenges/Devices/torches.htm

embodiment, Fishkin means the extent to which the users are attending to an object while they manipulate it. In terms of Weiser's notion of ubiquitous computing discussed previously (Weiser and Brown 1996), this is the dimension of central-peripheral attentional focus. At one extreme of Fishkin's dimension of embodiment, the output device is the same as the input device: examples include the Tribble system9 (Lifton et al 2003), which consists of a sensate 'skin' which can react to being touched or stroked by changing lights on its surface, vibrating, etc. Another similar example is Super Cilia Skin10 (Raffle et al 2003). It consists of computer-controlled actuators that are attached to an elastic membrane. The actuators change their physical orientation to represent information in a visual and tactile form. Just as clay will deform if pressed, these surfaces act both as input and output devices.

Slightly less tight coupling between input and output in Fishkin's taxonomy is seen in cases where the output takes place proximal, or near, to the input device. Examples include the Bricks system (Fitzmaurice et al 1995) and a system called I/O Brush11 (Ryokai et al 2004), referred to in the introduction to this review. This consists of a paintbrush which has various sensors for bristles, allowing the user, amongst other things, to pick up the properties of an object (eg its colour) and 'paint' them onto a surface.

In the third case of embodiment, the output is around the user (eg in the form of audio output): what Ullmer and Ishii refer to as non-graspable or 'ambient' media (Ullmer and Ishii 2000). Finally, the output can be distant from the input (eg on another screen).

Fishkin argues that as embodiment increases, the cognitive distance between input and output decreases. He suggests that if it is important to maintain a cognitive dissimilarity between input and output objects, then the degree of embodiment should be decreased. This may be important in learning applications, for example if the aim is to encourage the learner to reflect on the relationships between the two.

The second dimension in Fishkin's framework is metaphor. In terms of tangible interfaces this means the extent to which the user's actions are analogous to the real-world effect of similar actions. At one end of this continuum the tangible manipulation has no analogy to the resulting effect: an extreme example is a command-line interface. An example Fishkin cites from tangible interfaces is the Bit Ball system (Resnick et al 1998), where squeezing a ball alters audio output.

In other systems an analogy is made between the physical appearance (visual, auditory, tactile) of the device and its corresponding appearance in the real world. In terms of GUIs this corresponds to the appearance of icons representing physical documents, for example. However, the actions performed on/with such icons have no analogy with the real world (eg crumpling paper). In terms of TUIs, examples are where physical cubes are used as input devices, where the particular picture on a face of a cube determines an operation (Camarata et al 2002).


9 http://web.media.mit.edu/~lifton/Tribble/
10 http://tangible.media.mit.edu/projects/Super_Cilia_Skin/Super_Cilia_Skin.htm
11 http://tangible.media.mit.edu/projects/iobrush/iobrush.htm

In contrast, other systems have analogies between manipulations of the tangible device and the resulting operations (as opposed to operands). Yet other systems combine analogies between the referents in the physical and digital worlds and the referring expressions or manipulations. For example, in GUIs the drag-and-drop interface (eg dragging a document icon into the wastebasket) involves analogies concerning both the appearance of the objects (document, wastebasket) and the physical operation of dragging and dropping an object from the desktop into the wastebasket. An example in terms of TUIs is the Urp system described earlier (Underkoffler and Ishii 1999), where physical input devices resembling buildings are used to move buildings around a map in the virtual world.

Finally, Fishkin describes the other extreme of this continuum, where the digital system maps completely onto the physical system metaphorically. He refers to this as 'really direct manipulation' (Fishkin et al 2000). An example from GUIs is the use of a stylus or light pen to interact with a touchscreen or tablet (pen-tablet interfaces), where writing on the screen results in altering the document directly. An example from TUIs that he gives is the Illuminating Clay system (Piper et al, op cit).
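Fishkin's two dimensions can be treated as ordered scales on which the systems mentioned above each occupy a cell. The tabulation below is our own reading of the examples in this review, with level labels paraphrased from Fishkin's taxonomy; it is a sketch, not a definitive placement:

```python
# A rough tabulation of Fishkin's two-dimensional taxonomy.
# Levels run from weakest to strongest; the placements are our own
# reading of the examples discussed in this review.

EMBODIMENT = ["distant", "environmental", "nearby", "full"]   # where the output occurs
METAPHOR   = ["none", "noun", "verb", "noun+verb", "full"]    # strength of real-world analogy

systems = {
    "command-line interface": ("distant", "none"),
    "Bit Ball":               ("full",    "none"),
    "Bricks":                 ("nearby",  "verb"),
    "Urp":                    ("nearby",  "noun+verb"),
    "Illuminating Clay":      ("full",    "full"),
}

def at_least_as_direct(a, b):
    """True if system a is at least as embodied AND as metaphorical as system b."""
    ea, ma = systems[a]
    eb, mb = systems[b]
    return (EMBODIMENT.index(ea) >= EMBODIMENT.index(eb)
            and METAPHOR.index(ma) >= METAPHOR.index(mb))

print(at_least_as_direct("Illuminating Clay", "command-line interface"))  # True
print(at_least_as_direct("Urp", "Bit Ball"))  # False: Bit Ball is more embodied
```

Note that the two dimensions are genuinely independent: Bit Ball is fully embodied but low on metaphor, while Urp is strong on metaphor but only 'nearby' in embodiment, so neither dominates the other.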

    2.5 SUMMARY

The dimensions highlighted in these frameworks are:

• The behaviour of the control devices and the resulting effects: a contrast is made between input devices where the form of user control is arbitrary and has no special behavioural meaning with respect to output (eg using a generic tool like a mouse to interact with the output on a screen), and input devices which have a close correspondence in behavioural meaning between input and output (eg using a stylus to draw a line directly on a tablet or touchscreen). This is what Hutchins et al (1986) refer to as articulatory correspondence. The form of such mappings may result in one-to-many relations between input and output (as in arbitrary relations between a mouse, joystick or trackpad and various digital effects on a screen), or one-to-one relations (as in the use of special-purpose transducers where each device has one function).

• The semantics of the physical-digital representational mappings: this refers to the degree of metaphor between the physical and digital representation. It can range from a complete analogy, in the case of physical devices resembling their digital counterparts, to no analogy at all.

• The role of the control device: this refers to the general properties of the control device, irrespective of its behaviour and representational mapping. So, for example, a control device might play the role of a container of digital information, a representational token of a digital referent, or a generic tool representing some computational function.

• The degree of attention paid to the control device as opposed to that which it represents: completely embodied systems are those where the user's primary focus is on the object being manipulated rather than the tool being used to manipulate the object. This can be more or less affected by the extent of the metaphor used in mapping between the control device and the resulting effects.


Of course, none of these dimensions or analytic frameworks is entirely independent, nor are they complete. The point is that there are at least two senses in which tangible user interfaces strive to achieve 'really direct' manipulation:

1 In the mappings between the behaviour of the tool (physical or digital) which the user uses to engage with the object of interest.

2 In the mappings between the meaning or semantics of the representing world (eg the control device) and the represented world (eg the resulting output).

    3 WHY MIGHT TANGIBLETECHNOLOGIES BENEFIT LEARNING?

In this section we consider some of the research evidence from developmental psychology and education which might inform the design and use of tangibles as new forms of control systems, and in terms of the learning possibilities implied by the physical-digital representational mappings discussed in the previous section.

    3.1 THE ROLE OF PHYSICAL ACTION IN LEARNING

It is commonly believed that physical action is important in learning, and there is a good deal of research evidence in psychology to support this. Piaget and Bruner showed that children can often solve problems when given concrete materials to work with before they can solve them symbolically: for example, pouring water back and forth between wide and narrow containers eventually helps young children discover that volume is conserved (Bruner et al 1966; Martin and Schwartz in press; Piaget 1953). Physical movement can enhance thinking and learning: for example, Rieser (Rieser et al 1994) has shown how locomotion supports children in categorisation and recall in tasks of perspective taking and spatial imagery, even when they typically fail to perform in symbolic versions of such tasks.

So evidence suggests that young children (and indeed adults) can in some senses 'know' things without being able to express their understanding through verbal language, or without being able to reflect on what they know in an explicit sense.


This is supported by research by Goldin-Meadow (2003), who has shown over an extensive set of studies how gesture supports thinking and learning. Church and Goldin-Meadow (1986) studied 5-8 year-olds' explanations of Piagetian conservation tasks and showed that some children's gestures were mismatched with their verbal explanations. For example, they might say that a tall thin container has a large volume because it's taller, but their gesture would indicate the width of the container. Children who produced these kinds of gestures tended to be those who benefited most from instruction or experimentation. The argument is that their gestures reflected an implicit or tacit understanding which wasn't open to being expressed in language. Goldin-Meadow also argues that gestures accompanying speech can, in some circumstances, reduce the cognitive load on the part of the language producer and, in other circumstances, facilitate parallel processing of information (Goldin-Meadow 2003).

Research has also shown that touching objects helps young children in learning to count: not just in order to keep track of what they are counting, but in developing one-to-one correspondences between number words and item tags (Alibali and DiRusso 1999). Martin and Schwartz (in press) studied 9 and 10 year-olds learning to solve fraction problems using physical manipulatives (pie wedges). They found that children could solve fraction problems by moving physical materials, even though they often couldn't solve the same problems in their heads, even when shown a picture of the materials. However, action itself was not enough: its benefits depended on particular relationships between action and prior knowledge.

Recent neuroscientific research suggests that some kinds of visuo-spatial transformations (eg mental rotation tasks, object recognition, imagery) are interconnected with motor processes and possibly driven by the motor system (Wexler 2002). Neuroscientific research also suggests that the neural areas activated during finger-counting, which is a developmental strategy for learning calculation skills, eventually come to underpin numerical manipulation skills in adults (Goswami 2004).

3.2 THE USE OF PHYSICAL MANIPULATIVES IN TEACHING AND LEARNING

There is a long history of the use of manipulatives (physical objects) in teaching, especially in mathematics and especially in early years education (eg Dienes 1964; Montessori 1917). There are several arguments concerning why manipulation of concrete objects helps in learning abstract mathematical concepts. One is what Chao et al (2000) call the 'mental tool' view: the idea that the benefit of physical materials comes from using the mental images formed during exposure to the materials. Such mental images of the physical manipulations can then guide and constrain problem solving (Stigler 1984).

An alternative view is what Chao et al call the 'abstraction' view: the idea that the value of physical materials lies in learners' abilities to abstract the relation of interest from a variety of concrete instances (Bruner 1966; Dienes 1964). However, the findings concerning the effectiveness of manipulatives over traditional methods of instruction are inconsistent, and where


they are seen to be effective it is usually after extensive instruction and practice (Uttal et al 1997).

Uttal et al (op cit) are critical of the simple argument that manipulatives make dealing with abstractions easier because they are concrete, familiar and non-symbolic. They argue that at least part of the difficulty children may have with using manipulatives stems from the need to interpret the manipulative as a representation of something else. Children need to see and understand the relations between the manipulatives and other forms of mathematical expression: in other words, children need to see that the manipulative can have a symbolic function, that it can 'stand for' something.

Understanding symbolic functions is not an all-or-nothing accomplishment developmentally. Children as young as 18 months to 2 years can be said to understand that a concrete object can have some symbolic function, as seen, for example, in their use of pretence (eg Leslie 1987). However, the ability to reason with systems of symbolic relations in order to understand particular mathematical ideas (eg to use Dienes blocks in solving mathematical problems) requires a good deal of training and experience. DeLoache has carried out extensive research on the development of 'symbol-mindedness' in young children (DeLoache 2004) and has shown that it develops gradually between around 10 months of age and 3 years, and that even when children might be able to distinguish between an object as a symbol (eg a photograph, a miniature toy chair) and its referent (that which the photograph depicts, a real chair), they sometimes make bizarre errors, such as trying to grasp the photograph in the book (DeLoache et al 1998; Pierroutsakos and DeLoache 2003) or trying to sit in the miniature chair (DeLoache et al 2004).

    3.3 REPRESENTATIONAL MAPPINGS

In Section 2 we outlined Ullmer and Ishii's analysis of the mappings between physical representations and digital information. They introduced a concept they call 'phicons' (physical icons). In using this term they are careful to emphasise the representational properties of icons and to distinguish them from symbols. In HCI usage generally, the term 'icon' has come to stand for any kind of pictorial representation on a screen. Ullmer and Ishii use the term phicon to refer to an icon in the Peircean sense. The philosopher Peirce distinguished between indices, icons and symbols (Project 1998), where an icon shares representational properties in common with the object it represents, a symbol has an arbitrary relationship to that which it signifies, and an index represents not by virtue of sharing representational properties but literally by standing for, or pointing to, in a real sense, that which it signifies (eg a proper name is an index in this sense).

This is an interesting distinction to make in terms of the affordances of tangible interfaces for learning. Bruner made the distinction between enactive, iconic and symbolic forms of representation in his theory of learning (1966). He argued that children move through this sequence of forms of representation in learning concepts in a given domain. For example, a child might start by being able to group objects together according to a number of dimensions (eg size, colour, shape): they are enacting their understanding of


classification. If they can draw pictures of these groupings they are displaying an iconic understanding. If they can use verbal labels to represent the categories they are displaying a symbolic understanding. Iconic representations have a one-to-one correspondence with the objects they represent, and they have some resemblance (in representational terms) to that which they represent. They serve as bridging analogies (cf Brown and Clement 1989) between an implicit sensorimotor representation (in Piagetian terms) and an explicit or articulated symbolic representation.

Piaget's developmental theory involved the transformation of initially sensorimotor representations (from simple reflexes to more and more controlled repetition of sensorimotor actions to achieve effects in the world), to symbolic manipulations (ranging from simple single operations to coordinated multiple operations, but operating on concrete external representations), to fully-fledged formal operations (involving cognitive manipulation of complex symbol systems, as in hypothetico-deductive reasoning). Each transition through Piagetian stages of cognitive development involved increasing explicitation of representations, gradually rendering them more and more open to conscious, reflective cognitive manipulation. Similarly, Bruner's theory of development (1966) also involved transitions from implicit or sensorimotor representations (which he terms enactive) to gradually more explicit and symbolic representations. The distinction between Piaget's and Bruner's theories, however, lies in the degree to which such changes are domain-general (for Piaget they were; for Bruner each new domain of learning involved transitions through these stages) and age-related or independent.

More recent theories of cognitive development also involve progressive explicitation of representational systems: for example, Karmiloff-Smith's theory of representational re-description (1992) describes transitions in children's representations from implicit to more explicit forms. In her theory, children start out learning in a domain (eg drawing, mathematics) by operating on solely external representations (similar to Bruner's enactive phase). At this level they have learned highly specific procedures that are not open to revision, conscious inspection or verbal report, and are triggered by specified environmental conditions. Gradually these representations become more flexible and can be changed internally (for example, children gradually learn to generalise procedures and appear to operate with internal rules for producing behaviour). Finally, the child is able to reflect upon his or her representations and is able, for example, to transfer rules or procedures across domains or modes of representation.

The representational transformations described by Piaget, Bruner and Karmiloff-Smith are also mirrored, but in the reverse sequence, in another highly influential theory of learning: that of John Anderson. Anderson's theory of skill acquisition (Anderson 1993; Anderson and Lebiere 1998) describes the transitions from initial explicit or declarative representations that involve conscious awareness and control (learning facts or condition-action rules) to procedures which become more automated and implicit with practice. An example is learning to drive, where rules of behaviour (eg depressing the clutch and moving the gear lever into position) are initially, for the novice driver, a matter of effortful recall and control and where it is


difficult to carry out several actions in parallel, to the almost unconscious carrying out of highly routine procedures in the expert driver (who can now move into gear, steer the car, carry on a conversation with a passenger and monitor other traffic).

The analysis of systems of representational mapping presented in the previous section on tangible interfaces has, at least on the face of it, some parallel with the analysis of representational transformations in theories of cognitive development and learning. It is interesting to try to draw some inferences for the design of tangible technologies for learning. One of the most influential educational researchers to attempt this is Seymour Papert.

    3.4 MAPPING FROM PHYSICAL TO DIGITAL REPRESENTATIONS: THE ARGUMENT FOR LEARNING

    Papert's theory of learning with technology was heavily influenced by his mentor, Jean Piaget. Papert is famous for the development of the Logo programming environment for children, which launched a whole paradigm of educational computing referred to, in his term, as 'constructionism' (Papert 1980)12. Papert's observation was that young children have a deep and implicit spatial knowledge based on their own personal sensorimotor experience of moving through a three-dimensional world. He argued that by giving children a spatial metaphor (manipulating another body, such as a physical robot or an on-screen cursor), they might gradually develop increasingly more explicit representations of control structures for achieving effects such as creating graphical representations on a screen. The Logo programming environment gave children a simple language and control structure for operating a 'turtle' (the name given to the on-screen cursor or physical robot). For example, the Logo command sequence:

    TO SQUARE :STEPS
      REPEAT 4 [FORWARD :STEPS RIGHT 90]
    END

    results either in a cursor tracing out a square on-screen or a robot moving around in a square on the floor (McNerney 2004). The idea was that this body-centred or 'turtle' geometry (Abelson and diSessa 1981) would enable children to learn geometric concepts more easily than previous, more abstract teaching methods. McNerney puts it as follows:

    "rather than immediately asking a child to describe how she might instruct the turtle to draw a square shape, she is asked to explore the creation of the shape by moving her body; that is, actually 'walking the square'. Through trial and error, the child soon learns that walking a few steps, turning right 90°, and repeating this four times makes a square. In the process, she might be asked to notice whether she ended up about where she started. After this exercise, the child is better prepared to program the turtle to do the same thing… In the end, the child is rewarded by watching the turtle move around the floor in the same way as she acted out the procedure before." (McNerney 2004)
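    The observation that the child 'ended up about where she started' is exactly the property that makes the square procedure work: four repetitions of 'forward, turn right 90°' return the walker to her starting position and heading. A minimal position-tracking sketch (ours, not from McNerney) checks this:

    ```python
    import math

    # Minimal sketch (illustrative only) of the 'walking the square'
    # insight: four repeats of "forward :STEPS, right 90" bring the
    # walker back to where she started, facing the same way.
    def walk_square(steps):
        x, y, heading = 0.0, 0.0, 0.0      # start at origin, facing 'up'
        for _ in range(4):                 # REPEAT 4 [...]
            x += steps * math.sin(math.radians(heading))  # FORWARD :STEPS
            y += steps * math.cos(math.radians(heading))
            heading = (heading + 90) % 360                # RIGHT 90
        return x, y, heading

    x, y, heading = walk_square(100)
    assert abs(x) < 1e-9 and abs(y) < 1e-9 and heading == 0.0
    ```

    Python's own `turtle` module, itself a descendant of Logo, provides `forward` and `right` commands that make the correspondence with the Logo procedure above direct.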


    SECTION 3

    WHY MIGHT TANGIBLE TECHNOLOGIES BENEFIT LEARNING?

    12 The term 'constructionism' is based on Piaget's approach of 'constructivism', the idea being that children develop knowledge through action in the world (rather than, in terms of the preceding behaviourist learning theories, through more passive response to external stimuli). By emphasising the construction of knowledge, Papert was stating that children learn effectively by literally constructing knowledge for themselves through physical manipulation of the environment.

    Papert outlined a number of principles for learning through such activities. These included the idea that otherwise implicit reasoning could be made explicit (ie the child's own sensorimotor representations of moving themselves through space had to be translated into explicit, conscious and verbalisable instructions for moving another body through space); that the child's own reasoning and its consequences were therefore rendered visible to themselves and others; that this fostered planning and problem-solving skills; and that errors in the child's reasoning (literally, 'bugs' in programming terms) were made explicit and the means for 'debugging' such errors were made available, all leading to the development of metacognitive skills.

    In earlier sections of this review we saw how various frameworks offered for understanding tangible computing referred to close coupling of physical and digital representations, the ideal case being 'really direct manipulation', where the user feels as if they are interacting directly with the object of interest rather than some symbolic representation of the object (Fishkin et al 2000). Papert's vision also involves close coupling of the physical and digital. However, crucial to Papert's arguments about the advantages of Logo, turtle geometry and 'body-centered geometry' (McNerney 2004) is the notion of reflection by the learner. The point about turtle geometry is not just that it is 'natural' for the learner to draw upon their own experience of moving their body through space, but that the act of instructing another body (whether it is a robot or a screen-based turtle) to produce the same behaviour renders the learner's knowledge explicit.

    So 'really direct' manipulation may not be the best for learning. For learning, the goal of interface design may not always be to render the interface transparent; sometimes opacity may be desirable if it makes the learner reflect upon their actions (O'Malley 1992). There are at least three layers of representational mapping to be considered in learning environments:

    • the representation of the learning domain (eg symbol systems representing mathematical concepts)

    • the representation of the learning activity (eg manipulating Dienes blocks on a screen)

    • the representation embodied in the tools themselves (eg the use of a mouse to manipulate on-screen blocks versus physically moving blocks around in the real world).

    When designing interfaces for learning, the main goal may not be the speed and ease of creation of a product: when the educational activity is embedded within the task, one may not want to minimise the cognitive load involved. Research by Golightly and Gilmore (1997) found that a more complex interface produced more effective problem solving compared to an easier-to-use interface. They argue that, for learning and problem solving, the rules of 'transparent' direct manipulation interface design may need to be broken. However, designs should reduce the learner's cognitive load for performing non-content-related tasks, in order to enable learners to allocate cognitive resources to understanding the educational content of the learning task. A similar point is made by Marshall et al (2003) in their analysis of tangible interfaces for learning.


    Marshall et al discuss two forms of tangible technologies for learning: expressive and exploratory. They draw upon a concept from the philosopher Heidegger, termed 'readiness-to-hand' (Heidegger 1996), later applied to human-computer interface design by Winograd and Flores (1986) and more recently to tangible interfaces by Dourish (2001). The concept refers to the way that when we are working with a tool (eg a hammer) we focus not on the tool itself but on the task for which we are using the tool (ie hammering in a nail). The contrast to readiness-to-hand is 'present-at-hand', ie focusing on the tool itself. Building on Ackermann's work (1996, 1999), they argue that effective learning involves being able to reflect upon, objectify and reason about the domain, not just to act within it. Marshall et al's notion of expressive tangible systems refers to technologies which enable learners to create their own external representations, as in Resnick's digital manipulatives (Resnick et al 1998), derived from the tradition of Papert and Logo programming. They argue that by making their own understanding of a topic explicit, learners can make inconsistencies, conflicting beliefs and incorrect assumptions clearly visible.

    Marshall et al contrast these expressive forms of technologies with exploratory tangible systems, where learners focus on the way in which the system works rather than on the external representations they are building. Again, this is in keeping with Papert's arguments about the value of 'debugging' for learning. When their program doesn't run or the robot doesn't work in the way in which they expected, learners are forced to step back and reflect upon the program itself or the technology itself to try and understand why errors or 'bugs' have occurred. In so doing, they become more reflective about the technology they are using. This line of argument is also in keeping with a situated learning perspective, where opportunities for learning occur when there are breakdowns in readiness-to-hand (Lave and Wenger 1991).

    Marshall et al suggest that effective learning stems from cycling between the two modes of ready-to-hand (ie using the system to accomplish some task) and present-at-hand (ie focusing on the system itself). They also suggest two kinds of activity for using tangibles in learning: expressive activity, where the tangible represents or embodies the learner's behaviour (physically or digitally), and exploratory activity, where the learner explores the model embodied in the tangible interface that is provided by the designer (rather than by themselves). The latter can be carried out either in some practical sense (eg figuring out how the technology works, such as the physical robot in LEGO/Logo; Martin and Resnick 1993), or in what they call a theoretical sense (eg reasoning about the representational system embodied in the model, such as the programming language in the case of systems like Logo).

    A study by Sedig et al (2001) provides some support for the claims made by Marshall et al. The study examined the role of interface manipulation style on reflective cognition and concept learning. Three versions of a tangrams puzzle (geometrically shaped pieces that must be arranged to fit a target outline) were designed:

    • Direct Object Manipulation (DOM) interface, in which the user manipulates the visual representation of the objects


    • Direct Concept Manipulation (DCM) interface, in which the user manipulates the visual representation of the transformation being applied to the objects

    • Reflective Direct Concept Manipulation (RDCM) interface, in which the DCM approach is used with scaffolding.

    Results from forty-four 11 and 12 year-olds showed that the RDCM group learned significantly more than the DCM group, who in turn learned significantly more than the DOM group.

    Finally, research by Judy DeLoache and colleagues on the development of children's understanding of, and reasoning with, symbols provides another note of caution in drawing implications for learning from the physical-digital representational mappings in tangibles. DeLoache et al (1998) argue that it cannot be assumed that even the most 'obvious' or iconic of symbols will automatically be interpreted by children as a representation of something other than itself. We might be tempted to assume that little learning is required for highly realistic models or photographs, ie symbols that closely resemble their referents. Even very young infants can recognise information from pictures. However, DeLoache et al argue that perceiving similarity between a picture and a referent is not the same as understanding the nature of the picture. She and her colleagues have shown over a number of years of research that problems in understanding symbols continue well beyond infancy and into childhood.

    For example, young children (2.5 years) have problems understanding the use of a miniature scale model of a room as a model. They understand perfectly well that it is a model and can find objects hidden in the miniature room. However, they have great difficulty in using this model as a clue to finding a corresponding real object hidden in an identical real room (DeLoache 1987). DeLoache argues that the problem stems from the dual nature of representations in such tasks. While the models serve as representations of another space, they are also objects in themselves, ie a miniature room in which things can be hidden. Young children find it difficult, she argues, to simultaneously reason about the model as a representation and as an interesting object, partly because it is highly salient and attractive as an object in its own right (DeLoache et al op cit). She tested this theory by having children use photographs of the real room rather than the scale model, and showed that children found that task much easier. This and other research leads DeLoache and colleagues to caution against assuming that highly realistic representations somehow make the task of mapping from the symbol to the referent easier. In fact, they argue, it can have the opposite effect to that which is desired. They argue that the task for young children in learning to use symbols is to realise that the properties of the symbols themselves are less important than what they represent. So making symbols into highly salient concrete objects may make their meaning less, not more, clear to young children.

    Such cautions may also apply to the use of manipulatives with older children. For example, Hughes (1986) found that 5-7 year-olds had difficulties using small blocks to represent a single number or simple addition problems. The difficulties the children had suggested that they didn't really understand how the blocks were supposed to relate to the numbers and the problem. DeLoache et al (op cit) make a very interesting comment on the difference between the ways in which manipulatives are used in mathematics teaching in Japan and in North America:

    "In Japan, where students excel in mathematics, a single, small set of manipulatives is used throughout the elementary school years. Because the objects are used repeatedly in various contexts, they presumably become less interesting as things in themselves. Moreover, children become accustomed to using the manipulatives to represent different kinds of mathematics problems. For these reasons, they are not faced with the necessity of treating an object simultaneously as something interesting in its own right and a representation of something else. In contrast, American teachers use a variety of objects in a variety of contexts. This practice may have the unexpected consequence of focusing children's attention on the objects rather than on what the objects represent." (DeLoache et al 1998)

    3.5 SUMMARY

    In this section we have reviewed research from psychology and education that suggests that there can be real benefits for learning from tangible interfaces. Such technologies bring physical activity and active manipulation of objects to the forefront of learning. Research has shown that, with careful design of the activities themselves, children (older as well as younger) can solve problems and perform in symbol manipulation tasks with concrete physical objects when they fail to perform as well using more abstract representations. The point is not that the objects are concrete and therefore somehow easier to understand, but that physical activity itself helps to build representational mappings that serve to underpin later, more symbolically mediated activity, after practice and the resulting explicitation of sensorimotor representations.

    However, other research has shown that it is important to build in activities that support children in reflecting upon the representational mappings themselves. DeLoache's work suggests that focusing children's attention on symbols as objects may make it harder for them to reason with symbols as representations. Marshall et al (2003) argue for cycling between what they call 'expressive' and 'exploratory' modes of learning with tangibles.


    4 CASE STUDIES OF LEARNING WITH TANGIBLES

    In this section we present case studies of educational applications of tangible technologies. We summarise the projects by highlighting the tangibility of the application and its educational potential. Where the technology has been evaluated, we report the outcomes. The studies presented here do not give complete coverage of the field, but rather indicate current directions in the full range of applications.

    4.1 DIGITALLY AUGMENTED PAPER AND BOOKS

    In some examples of educational applications of tangibles, 'everyday technology' such as paper, books and other physical displays has been augmented digitally.

    For example, Listen Reader (Back et al 2001) is an interactive children's storybook which has a soundtrack triggered by moving one's hands over the book. The book has RFID tags embedded within it which sense which page is open, and additional sensors (capacitive field sensors) which measure an individual's position in relation to the book and adjust the sound accordingly.
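    The sensing logic of such a book can be sketched in a few lines. The tag IDs, page mapping and cue names below are invented for illustration; Listen Reader's actual implementation is not described at this level in the report:

    ```python
    # Hypothetical sketch of RFID-plus-proximity sensing in an
    # augmented storybook: a tag identifies the open page, and a
    # capacitive reading of hand distance scales the soundtrack volume.
    PAGE_TAGS = {"tag-07f3": 1, "tag-0a41": 2}     # RFID tag -> page number
    SOUNDTRACK = {1: "forest-ambience", 2: "river-theme"}

    def on_sensor_reading(tag_id, hand_distance_cm):
        """Choose a cue for the open page; nearer hands mean louder sound."""
        page = PAGE_TAGS.get(tag_id)
        if page is None:
            return None                            # unknown tag: stay silent
        volume = max(0.0, 1.0 - hand_distance_cm / 50.0)
        return SOUNDTRACK[page], round(volume, 2)

    assert on_sensor_reading("tag-07f3", 10) == ("forest-ambience", 0.8)
    ```

    The design point the sketch makes is that the physical book itself carries no computation: the tags and field sensors simply report state, and the mapping to digital effects lives elsewhere.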

    A more familiar, commercially available system is LeapPad13, which is also a digitally augmented book. Provided with the book is a pen that enables words and letters to be read out when a child touches the surface of each page. The book can also be placed in 'writing' mode and children can write at their own pace. A whole library of activities and stories is available for various domains including reading, vocabulary, mathematics and science. The company claims that the educational programs available for this system are based on research findings in education. For example, their Language First! Program was designed based on findings from language acquisition research.

    A recent EU-funded research and development project called Paper++14 was aimed at providing digital augmentation of paper with multimedia effects. The project used real paper together with special transparent and conductive inks that could be detected with a special sensing device (a 'wand'). This made the whole surface of the paper active, without involving complex technology embedded in the paper itself. The project focused on the use of paper in educational settings and developed technologies to support applications such as children's books, materials for students, brochures and guides. Naturalistic studies were conducted of children in classrooms working with paper, in which the researchers examined how subtle actions involving paper form an integral part of the process of collaboration between children. The researchers also compared the behaviour of children working with a CD-ROM, an ordinary book, and an augmented paper prototype, to examine how the different media affected children's working practices. They found that the paper enabled a more fluid and balanced sharing of tasks between the children (Luff et al 2003).


    13 www.leapfrog.com
    14 www.paperplusplus.net

    SECTION 4

    CASE STUDIES OF LEARNING WITH TANGIBLES

    In the KidStory project15 we developed various tangible technologies aimed at supporting young children's storytelling (O'Malley and Stanton 2002; Stanton et al 2002; Stanton et al 2001b). For example, children's physical drawings on paper were tagged with barcodes, which enabled them to use a barcode reader to physically navigate between episodes in their virtual story displayed on a large screen, using the KidPad storytelling software (Druin et al 1997).

    In the Shape project a mixed reality experience was developed to aid children to discover, reason and reflect about historical places and events (Stanton et al 2003). The experience involved a paper-based 'history hunt' around Nottingham Castle, searching for clues about historical events which took place there. The clues involved children making drawings or rubbings on paper at a variety of locations. The paper was then electronically tagged and used to interact with a virtual reality projection on a display called the Storytent (see Fig 4). A Radio Frequency ID (RFID) tag reader was embedded in a turntable inside the tent. When placed on the turntable, each paper clue revealed an historic 3D environment of the castle from the location at which the clue was found. Additionally, 'secret writing' revealed the story of a character at this location by the use of an ultra-violet light placed over the turntable. The aim was to help visitors learn about the historical significance of particular locations in and around the medieval castle (no longer visible because it was burned down) by explicitly linking their physical exploration of the castle (using the paper clues) with a virtual reality simulation of the castle as it was in medieval times.

    A final example of augmented paper is Billinghurst's MagicBook16 (Billinghurst 2002; Billinghurst and Kato 2002). This employs augmented reality techniques to project a 3D visual display as the user turns the page. The system involves the user wearing a pseudo see-through head-mounted display (eg glasses). The MagicBook contains special symbols called fiducial markers, which provide a registration identity and spatial transformation derived using a small video camera mounted on the display. As the user turns the page to reveal the marker, they see an image overlaid onto the page of the book. The system also enables collaboration with another user wearing identical equipment. The same technology was used in the MagiPlanet application illustrated at the beginning of this review.


    15 www.sics.se/kidstory
    16 www.hitl.washington.edu/magicbook/

    Fig 4: Child and adult in the Storytent with close-up of turntable (inset).

    4.2 PHICONS: THE USE OF PHYSICAL OBJECTS AS DIGITAL ICONS

    The previous section gave examples of the use of paper and books which were augmented by a range of technologies (eg barcodes, RFID tags, video-based augmented reality) to trigger digital effects. In this section we review examples of the use of other physical objects, such as toys, blocks and physical tags, to trigger digital effects.

    In recent years a range of digital toys has been designed aimed at very young children. One example is the suite of Microsoft ActiMates, starting in 1997 with Barney the purple dinosaur. The CACHET project (Luckin et al 2003) explored the use of these types of interactive toys (in this case, DW and Arthur) in supporting collaboration. The toys are aimed at 4-7 year-olds and contain embedded sensors that are activated by children manipulating parts of the toy. The toys can also be linked to a PC and interact with game software to give help within the game. Luckin et al compared the use of the software with help on-screen versus help from the tangible interface, and found that the presence of the toy increased the children's interactions with one another and with the facilitator.

    Storymat (Ryokai and Cassell 1999) is a soft interactive play mat that records and replays children's stories. The aim is to support collaborative storytelling even in the absence of other children. The space is designed to encourage children to tell stories using soft toys. These stories are then replayed, whereby a moving projection of the toy is projected over the mat, accompanied by the associated audio. Thus children can activate stories other children have told, and also edit them and create their own stories.

    In the KidStory project referred to earlier, RFID tags were embedded in toys used as story characters. When the props were moved near to the tag reader, this triggered corresponding events involving the characters on the screen (Fig 5).

    The Tangible Viewpoints system (Mazalek et al 2002) uses physical objects to navigate through a multiple-viewpoint story. When an object (shaped like a chess pawn) is placed on the interaction surface, the story segments associated with its character's point of view are projected around it in the form of images or text.

    Chromarium is a mixed reality activity space that uses tangibles to help children aged 5-7 years experiment and learn about colour mixing (Rogers et al 2002)17.


    Fig 5: RFID tags were embedded in physical story props and characters and used to navigate digital stories in the KidStory project.

    17 www.equator.ac.uk/Projects/Digitalplay/Chromarium.htm

    A number of different ways of mixing colour were explored, using a variety of physical and digital tools. For example, one activity allowed them to combine colours using physical blocks, with different colours on each face. By placing two blocks together, children could elicit the combined colour and digital animations on an adjacent screen. Children were intuitively able to interact with the coloured cubes, combining and recombining colours with immediate visual feedback. Another activity enabled children to use software tools, disguised as paintbrushes, on a digital interactive surface. Here children could drag and drop different coloured digital discs and see the resultant mixes. A third activity again allowed children to use the digital interactive surface, but this time their interactions with a digital image triggered a phy