
Literature Review in Learning with Tangible Technologies

REPORT 12:

FUTURELAB SERIES

Claire O’Malley, Learning Sciences Research Institute, School of Psychology, University of Nottingham
Danae Stanton Fraser, Department of Psychology, University of Bath

FOREWORD

When we think of digital technologies in schools, we tend to think of computers, keyboards, sometimes laptops, and more recently whiteboards and data projectors. These tools are becoming part of the familiar educational landscape. Outside the walls of the classroom, however, there are significant changes in how we think about digital technologies – or, to be more precise, how we don’t think about them, as they disappear into our clothes, our fridges, our cars and our city streets. This disappearing technology, blended seamlessly into the everyday objects of our lives, has become known as ‘ubiquitous computing’. Which leads us to ask the question: what would a school look like in which the technology disappeared seamlessly into the everyday objects and artefacts of the classroom?

This review is an attempt to explore this question. It maps out the recent technological developments in the field, discusses evidence from educational research and psychology, and provides an overview of a wide range of challenging projects that have attempted to use such ‘disappearing computers’ (or tangible interfaces) in education – from digitally augmented paper, to toys that remember the ways in which a child moves them, to playmats that record and play back children’s stories. The review challenges us to think differently about our future visions for educational technology, and begins to map out a framework within which we can ask how best we might use these new approaches to computing for learning.

As always, we are keen to hear your comments on this review at [email protected]

Keri Facer
Learning Research Director
Futurelab


CONTENTS:

EXECUTIVE SUMMARY

SECTION 1: INTRODUCTION

SECTION 2: TANGIBLE USER INTERFACES: NEW FORMS OF INTERACTIVITY

SECTION 3: WHY MIGHT TANGIBLE TECHNOLOGIES BENEFIT LEARNING?

SECTION 4: CASE STUDIES OF LEARNING WITH TANGIBLES

SECTION 5: SUMMARY AND IMPLICATIONS FOR FUTURE RESEARCH, APPLICATIONS, POLICY AND PRACTICE

GLOSSARY

BIBLIOGRAPHY


EXECUTIVE SUMMARY

The computer is now a familiar object in most schools in the UK today. However, outside schools different approaches to interacting with digital information and representations are emerging. These can be considered under the term ‘tangible interfaces’, which attempts to overcome the difference between the ways we input and control information and the ways this information is represented. These ‘tangible interfaces’ may be of significant benefit to education by enabling, in particular, younger children to play with actual physical objects augmented with computing power. Tangible technologies are part of a wider body of developing technology known as ‘ubiquitous computing’, in which computing technology is so embedded in the world that it ‘disappears’.

Tangible technologies differ in terms of the behaviour of control devices and resulting digital effects. A contrast is made between input devices where the form of user control is arbitrary and has no special behavioural meaning with respect to output (eg using a generic tool like a mouse to interact with the output on a screen), and input devices which have a close correspondence in behavioural meaning between input and output (eg using a stylus to draw a line directly on a tablet or touchscreen). The form of such mappings may result in one-to-many relations between input and output (as in arbitrary relations between a mouse, joystick or trackpad and various digital effects on a screen), or one-to-one relations (as in the use of special purpose transducers where each device has one function). Tangibles also differ in terms of the degree of metaphorical relationship between the physical and digital representation. They can range from being completely analogous, in the case of physical devices resembling their digital counterparts, to having no analogy at all. They also differ in terms of the role of the control device, irrespective of its behaviour and representational mapping. So, for example, a control device might play the role of a container of digital information, a representational token of a digital referent, or a generic tool representing some computational function. Finally, these technologies differ in terms of degree of ‘embodiment’. This means the degree of attention paid to the control device as opposed to that which it represents; completely embodied systems are where the user’s primary focus is on the object being manipulated rather than the tool being used to manipulate the object. This can be more or less affected by the extent of the metaphor used in mapping between the control device and the resulting effects.
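The dimensions described above – the role of the control device, the tightness of its input-output mapping, the degree of metaphor and the degree of embodiment – can be made concrete as a small classification scheme. The following is a minimal sketch, not drawn from the review itself, and the example devices and numeric metaphor scale are invented for illustration:

```python
# A minimal sketch (not from the review) encoding the dimensions along
# which tangibles are said to differ. The example devices and the 0-1
# metaphor scale are illustrative assumptions, not an established scheme.
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    CONTAINER = "container of digital information"
    TOKEN = "representational token of a digital referent"
    TOOL = "generic tool for a computational function"

class Mapping(Enum):
    ONE_TO_ONE = "special-purpose transducer with one function"
    ONE_TO_MANY = "arbitrary relation to many digital effects"

@dataclass
class Tangible:
    name: str
    role: Role
    mapping: Mapping
    metaphor: float   # 0.0 = no analogy to its digital referent, 1.0 = fully analogous
    embodied: bool    # is attention on the object represented, not the tool?

# A mouse is a generic tool with an arbitrary one-to-many mapping and no metaphor;
# a block-shaped phicon is a token whose form closely resembles its referent.
mouse = Tangible("mouse", Role.TOOL, Mapping.ONE_TO_MANY, 0.0, False)
block = Tangible("building-block phicon", Role.TOKEN, Mapping.ONE_TO_ONE, 0.9, True)
```

Classifying a candidate device along these axes is one way to reason about where it sits between a conventional GUI input device and a fully embodied tangible.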

In general there are at least two senses in which tangible user interfaces strive to achieve ‘really direct manipulation’:

1 In the mappings between the behaviour of the tool (physical or digital) which the user uses to engage with the object of interest.

2 In the mappings between the meaning or semantics of the representing world (eg the control device) and the represented world (eg the resulting output).

Research from psychology and education suggests that there can be real benefits for learning from tangible interfaces. Such technologies bring physical activity and active manipulation of objects to the forefront of learning. Research has shown that, with careful design of the activities themselves, children (older as well as younger) can solve problems and perform in symbol manipulation tasks with concrete physical objects when they fail to perform as well using more abstract representations. The point is not that the objects are concrete and therefore somehow ‘easier’ to understand, but that physical activity itself helps to build representational mappings that serve to underpin later more symbolically mediated activity after practice and the resulting ‘explicitation’ of sensorimotor representations.

However, other research has shown that it is important to build in activities that support children in reflecting upon the representational mappings themselves. This work suggests that focusing children’s attention on symbols as objects may make it harder for them to reason with symbols as representations. Some researchers argue for cycling between what they call ‘expressive’ and ‘exploratory’ modes of learning with tangibles.

A number of examples of tangible technologies are presented in this review. These are discussed under four headings: digitally augmented paper, physical objects as icons (phicons), digital manipulatives, and sensors/probes. The reason for treating digital paper as a distinct category is to make the point that, rather than ICT replacing paper and book-based learning activities, these can be enhanced by a range of digital activities, from very simple use of cheap barcodes printed on sticky labels and attached to paper, to the much more sophisticated use of video tracking in the augmented reality examples. However, even this latter technology is accessible to current UK schools, without the need for special head-mounted displays to view the augmentation. For example, webcams and projectors can be used with the augmented reality software to overlay videos and animations on top of physical objects.

Although many of the technologies reviewed in the section on ‘phicons’ involve fairly sophisticated systems, once again, it is possible to imagine quite simple and cheap solutions. For example, barcode tags and tag readers are relatively inexpensive. Tags can be embedded in a variety of objects. The only slightly complex aspect is that some degree of programming is necessary to make use of the data generated by the tag reader.
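The programming involved can be modest. Many inexpensive USB barcode readers behave as a ‘keyboard wedge’, typing the scanned code followed by a newline, so handling a scan reduces to reading a line and dispatching on its value. The following minimal sketch illustrates the idea; the tag IDs and classroom actions are invented for illustration:

```python
# Minimal sketch: a keyboard-wedge barcode reader delivers each scan as
# a line of text. Mapping tag IDs to actions is then a simple lookup.
# The tag IDs and action names below are invented examples.

ACTIONS = {
    "0001": "play recorded story",
    "0002": "show planet animation",
    "0003": "start quiz",
}

def handle_scan(code: str) -> str:
    """Look up a scanned tag ID and return the action it triggers."""
    code = code.strip()  # drop the newline the reader appends
    return ACTIONS.get(code, f"unknown tag: {code}")

# Simulated scans, as they would arrive line by line from the reader:
for scan in ["0001\n", "0042\n"]:
    print(handle_scan(scan))
```

In a real classroom application the loop would read from the reader’s input stream rather than a fixed list, but the dispatch logic is unchanged.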

Finally, implications are drawn for future research, applications and practice. Although there is evidence that, for example, physical action with concrete objects can support learning, its benefits depend on particular relationships between action and prior knowledge. Its benefits also depend on particular forms of representational mapping between physical and digital objects. For example, there is quite strong evidence suggesting that, particularly with young children, if physical objects are made too ‘realistic’ they can actually prevent children learning about what the objects represent.

Other research reviewed here has also suggested that so-called ‘really direct manipulation’ may not be ideal for learning applications, where often the goal is to encourage the learner to reflect and abstract. This is borne out by research showing that ‘transparent’ or really easy-to-use interfaces sometimes lead to less effective problem solving. This is not an argument that interfaces for learning should be made difficult to use – the point is to channel the learner’s attention and effort towards the goal or target of the learning activity, not to allow the interface to get in the way. In the same vein, Papert argued that allowing children to construct their own ‘interface’ (ie build robots or write programs) focused the child’s attention on making their implicit knowledge explicit. Other researchers support this idea and argue that effective learning should involve both expressive activity, where the tangible represents or embodies the learner’s behaviour (physically or digitally), and exploratory activity, where the learner explores the model embodied in the tangible interface.

1 INTRODUCTION

1.1 AIMS AND OUTLINE OF THE REVIEW

The aims of this review are to:

• introduce the concept of tangible computing and related concepts such as augmented reality and ubiquitous computing

• contrast tangible interfaces with current graphical user interfaces

• outline the new forms of interactivity made possible by tangible user interfaces

• outline some of the reasons, based on research in psychology and education, why tangible technologies might provide benefits for learning

• present some examples of tangible user interfaces for learning

• use these examples to illustrate claims for the benefits of tangible interfaces for learning

• provide a preliminary evaluation of the benefits and limitations of tangible technologies for learning.

We do not wish to argue that tangible technologies are superior to other current and accepted uses of ICT for learning. We wish to open up the mind of the reader to new possibilities of enhancing teaching and learning with technology. Some of these possibilities are achievable with relatively simple and cheap technologies (eg barcodes). Others are still in the early stages of development and involve more sophisticated uses of video-based image analysis or robotics. The point is not the sophistication of the technologies but the innovative forms of interactivity they enable and, it is hoped, the new possibilities for learning that they provide.

We have tried in this review to outline the interactional capabilities afforded by tangible technology by analysing the properties of forms of action and representation embodied in their use. We have also reviewed some of the relevant research from developmental psychology and education in order to draw out implications for the potential benefits and limitations of these approaches to learning and teaching. Although many of the educational examples given here refer to use by young children, several also involve children of secondary school age. In general we feel that the interactional and educational principles involved can be applied to learning in a wide variety of age groups and contexts.

1.2 BEYOND THE SCHOOL COMPUTER

Few nowadays would disagree that information and communication technology (ICT) is important for learning and teaching, whether the argument is vocational (it’s important for young people to become skilled or ‘literate’ in ICT to prepare them for life beyond school) or ‘techno-romantic’ (ICT provides powerful learning experiences not easily achieved through other means – see Simon 1987; Underwood and Underwood 1990; Wegerif 2002). There is no doubt that there have been tremendous strides in the take-up and use of ICT in both formal school and informal home settings in recent years, and that there have been many positive impacts on children’s learning (see Cox et al 2004a; Cox et al 2004b; Harrison et al 2002). However, developments in the use of ICT in schools tend to lag behind developments in other areas of life (the workplace, home). At the turn of the 21st century, the use of computers in schools largely concerns the use of desktop (increasingly laptop) machines, whilst outside of school the use of computers consists increasingly of personal mobile devices which bear little resemblance to those desktop machines of the 1980s.

Technology is now increasingly embedded in our everyday lives. For example, when we go shopping, barcodes are used by supermarkets to calculate the price of our goods. Microchips embedded in our credit cards and magnetic strips embedded in our loyalty cards are used by supermarkets and banks (as well as other agencies we may not know about) to debit our bank accounts, do stock control, predict our spending patterns and so on. Sensors in public places such as the street, stores and our workplace are used to record our activities (eg reading car number plates, detecting our entrance and exit via radio frequency ID tags or pressure or motion sensors in buildings). We communicate by mobile phone and our location can be identified by doing so. We use remote control devices to control our entertainment systems, the opening and closing of our garages or other doors. Our homes are controlled by microcomputers in our washing machines, burglar alarms, lighting, heating and so on.

This is as much true for school-aged children as it is for adults: the average home with teenagers contains 3.5 mobile phones (survey by ukclubculture, reported in The Observer, 17 October 2004); teenagers prefer to buy their music online rather than buying CDs; 80% of teenagers access the internet from home. Young people’s experience with technology outside of school is far more sophisticated than the grey/beige box on the desk with its monitor, mouse and keyboard.

It is probably safe to say that most schools in the UK (both primary and secondary) now have fairly well-stocked ICT suites or laboratories, where banks of PCs and monitors line the walls or sit on desks in rows. It is less likely that they occupy centre stage in the classroom where core subject teaching and learning takes place, but many schools are now adopting laptops and so this may be gradually changing. A few schools may even be experimenting with PDAs. Many schools use interactive whiteboards or ‘smartboards’. However, despite the changes in technology, the organisation of teaching and learning practice is relatively unchanged from pre-ICT days – in fact one might be tempted to argue that interactive whiteboards are popular with teachers largely because they can easily incorporate their use into existing practices of face-to-face whole-class teaching. The smartboard is the new blackboard, just as the PC/laptop/palmtop is the new slate (or paper). There are occasional examples of innovative uses of technology, but the vast majority of use of technology in learning arguably hasn’t changed very much since the early days of the BBC Micro in the early 1980s.

However, outside schools there has been a growing trend for technology to move beyond the traditional model of the desktop computer. In the field of human-computer interaction (HCI), traditional desktop metaphors are now being supplanted by research into the radically different relationships between real physical environments and digital worlds1. Outside the school we are beginning to see the development of what is becoming known as ‘tangible’ interfaces.

1.3 TANGIBLES: FROM GRAPHICAL USER INTERFACES (GUIs) TO TANGIBLE USER INTERFACES (TUIs)

It’s probably useful at this point to introduce a brief history of the term ‘tangibles’.

Digital spaces have traditionally been manipulated with simple input devices such as the keyboard and mouse, which are used to control and manipulate (usually visual) representations displayed on output devices such as monitors, whiteboards or head-mounted displays. ‘Tangible interfaces’ (as they have become known) attempt to remove this input-output distinction and try to open up new possibilities for interaction that blend the physical and digital worlds (Ullmer and Ishii 2000).

When reviewing the background to TUIs, most people tend to refer to a paper by Hiroshi Ishii and Brygg Ullmer of MIT Media Lab, published in 1997 (Ishii and Ullmer 1997). They coined the phrase ‘tangible bits’ as:

“…an attempt to bridge the gap between cyberspace and the physical environment by making digital information (bits) tangible.” (Ishii and Ullmer 1997, p235)

In using the term ‘bits’ they are making a deliberate pun – we use the term bits to refer to physical things, but in computer science, the term bits refers also to digital ‘things’ (ie binary digits). The phrase ‘tangible bits’ therefore attempts to consider both digital and physical things in the same way.

1 See for example www.equator.ac.uk

Tangible interfaces emphasise touch and physicality in both input and output. Often tangible interfaces are closely coupled to the physical representation of actual objects (such as buildings in an urban planning application, or bricks in a children’s block-building task). Applications range from rehabilitation, for example, tangible technologies that enable individuals to practise making a hot drink following a stroke (Edmans et al 2004), to object-based interfaces running over shared distributed spaces, creating the illusion that users are interacting with shared physical objects (Brave et al 1998).

The key issue to bear in mind in terms of the difference between typical desktop computer systems and tangible computing is that in typical desktop computer systems (so-called graphical user interfaces or GUIs) the mapping between the manipulation of the physical input device (eg the point and click of the mouse) and the resulting digital representation on the output device (the screen) is relatively indirect and loosely coupled. You are making one sort of movement to have a very different movement represented on screen. For example, if I use a mouse to select a menu item in a word processing application, I move the mouse on a horizontal surface (my physical desktop) in 2D in order to control a graphical pointer (the cursor) on the screen. The input is physical but the output is digital. In the case of a desktop computer, the mapping between input and output is fairly obviously decoupled because I have to move the mouse in 2D on a horizontal surface, yet the output appears on a vertical 2D plane. However, even if I use a stylus on a touchscreen (tablet or PDA), there is still a sense of decoupling of input and output because the output is in a different representational form to the input – ie the input is physical but the output is digital.

In contrast, tangible user interfaces (TUIs) provide a much closer coupling between the physical and digital – to the extent that the distinction between input and output becomes increasingly blurred. For example, when using an abacus, there is no distinction between ‘inputting’ information and its representation – this sort of blending is what is envisaged by tangible computing.
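The indirection of the GUI mapping can be made explicit in code. The following minimal sketch, with an invented gain value and screen size, shows how a mouse’s relative 2D deltas on a horizontal surface are transformed by software convention into cursor motion on a vertical screen; the relation between the physical action and its digital effect is arbitrary, which is exactly what the abacus lacks:

```python
# A minimal sketch (not from the review) of a GUI's indirect mapping:
# the mouse reports relative deltas, and software applies an arbitrary
# gain and an axis re-mapping before the cursor moves. The gain and
# screen size are illustrative assumptions.

GAIN = 2.0                      # pointer "acceleration": a pure software convention
SCREEN_W, SCREEN_H = 1920, 1080

def move_cursor(cursor, delta):
    """Map a physical mouse delta (dx, dy) to a new on-screen cursor position.

    The same physical movement could equally drive a scrollbar, a game
    camera or a menu - the coupling between action and effect is loose.
    """
    x, y = cursor
    dx, dy = delta
    nx = min(max(x + GAIN * dx, 0), SCREEN_W)   # clamp to screen bounds
    ny = min(max(y + GAIN * dy, 0), SCREEN_H)
    return (nx, ny)
```

A one-to-one tangible transducer has no such intermediate function: the physical state of the object simply is the representation.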

1.4 DEFINING SOME KEY CONCEPTS: TANGIBLE COMPUTING, UBIQUITOUS COMPUTING AND AUGMENTED REALITY

Tangible computing is part of a wider concept known as ‘ubiquitous computing’. The vision of ubiquitous computing is usually attributed to Mark Weiser, late of Xerox PARC (Palo Alto Research Center). Weiser published his vision for the future of computing in a pioneering article in Scientific American in 1991 (Weiser 1991). In it he talked about a vision where the digital world blends into the physical world and becomes so much part of the background of our consciousness that it disappears. He draws an analogy with print – text is a form of symbolic representation that is completely ubiquitous and pervasive in our physical environment (eg on billboards, signs, in shop windows etc). As long as we are skilled ‘users’ of text, it takes no effort at all to scan the environment and process the information. The text just disappears into the background and we engage with the content it represents, effortlessly. Contrast this with the experience of visiting a foreign country where not only do you not know the language, but the alphabet is completely unfamiliar to you – eg an English visitor in China or Saudi Arabia – suddenly the text is not only virtually impossible to decipher, it actually looks very strange to see the world plastered with what seem to you like hieroglyphs (which, of course, for the Chinese or Arabic reader, are ‘invisible’). So, the vision of ubiquitous computing is that computing will become so embedded in the world that we don’t notice it; it disappears. This has already happened today to some extent. Computers are embedded in light switches, cars, ovens, telephones, doorways, wristwatches.

In Weiser’s vision, ubiquitous computing is ‘calm technology’ (Weiser and Brown 1996). By this, he means that instead of occupying the centre of the user’s attention all the time, technology moves seamlessly and without effort on our part between occupying the periphery of our attention and occupying the centre. Ishii and Ullmer (1997) also make a distinction in ubiquitous computing between the foreground or centre of the user’s attention and the background or periphery of their attention. They talk about the need to enable users both to ‘grasp and manipulate’ foreground digital information using physical objects, and to provide peripheral awareness of information available in the ambient environment. The first case is what most people refer to nowadays as tangible computing; the second case is what is generally referred to as ‘ambient media’ (or ambient intelligence, augmented space and so on). Both are part of the ubiquitous computing vision, but this review will focus only on the first case.

A third related concept in tangible computing is augmented reality (AR). Whereas in virtual reality (VR) the goal is often to immerse the user in a computational world, in AR the physical world is augmented with digital information. Paul Milgram coined the phrase ‘mixed reality’ to refer to a continuum between the real world and virtual reality (Milgram and Kishino 1994). AR is where, for example, video images of real scenes are overlaid with 3D graphics. In augmented virtuality, displays of a 3D virtual world (possibly immersive) are overlaid with video feeds from the real world.

Fig 1: The continuum of mixed reality environments (from Milgram and Kishino 1994). The virtuality continuum (VC) runs from the real environment at one end to a fully virtual environment at the other; everything in between is mixed reality (MR), with augmented reality (AR) nearer the real end and augmented virtuality (AV) nearer the virtual end:

Real Environment – Augmented Reality (AR) – Augmented Virtuality (AV) – Virtual Environment

As we shall see, many examples of so-called tangible computing have employed AR techniques and there are some interesting educational applications. Generally, the distinction between ubiquitous computing (or ‘ubicomp’), tangible technology, augmented reality and ‘ambient media’ has been blurred, or at least the areas overlap considerably. Dourish (2001), for example, refers to them all as tangible computing.

One of the earliest examples of tangible computing was Bricks, a ‘graspable user interface’ proposed by Fitzmaurice, Ishii and Buxton (Fitzmaurice et al 1995). This consisted of bricks (like LEGO bricks) which could be ‘attached’ to virtual objects, thus making the virtual objects physically graspable.

Fitzmaurice et al cite the following properties of graspable user interfaces. Amongst other things, these interfaces:

• allow for more parallel input specification by the user, thereby improving the expressiveness or the communication capacity with the computer

• take advantage of well-developed, everyday, prehensile skills for physical object manipulations (cf MacKenzie and Iberall 1994) and spatial reasoning

• externalise traditionally internal computer representations

• afford multi-person, collaborative use.

We will return to this list of features of tangible technologies a little later on to consider their benefits or affordances for learning.

The sheer range of applications means it is beyond the scope of this report to describe all tangible interface examples adequately. In this review, we focus specifically on research in the area of tangible interfaces for learning.

1.5 WHY MIGHT TANGIBLES BE OF BENEFIT FOR LEARNING?

Historically, children have played individually and collaboratively with physical items such as building blocks, shape puzzles and jigsaws, and have been encouraged to play with physical objects to learn a variety of skills. Montessori (1912) observed that young children were intensely attracted to sensory development apparatus. She observed that children used materials spontaneously, independently, repeatedly and with deep concentration. Montessori believed that playing with physical objects enabled children to engage in self-directed, purposeful activity. She advocated children’s play with physical manipulatives as tools for development2.

Resnick extended the tangible interface concept for the educational domain in the term ‘digital manipulatives’ (Resnick et al 1998), which he defined as familiar physical items with added computational power, aimed at enhancing children’s learning. Here we discuss physical and tangible interfaces – physical in that the interaction is based on movement, and tangible in that objects are to be touched and grasped.

Figures 2 and 3 show two examples of recent applications of tangible technologies for educational use.


2 www.montessori-ami.org/ami.htm


Fig 2: These images show the MagiPlanet application developed at the Human Interface Technology Laboratory New Zealand3 (Billinghurst 2002).

Imagine a Year 5 teacher is trying to help her class understand planetary motion. They are standing around in a circle looking down on a table on which are marked nine orbital paths around the Sun. The children have to place cards depicting each of the planets in its correct orbit. As they do, they can see overlaid onto the cards a 3D animation of that planet spinning on its own axis and orbiting around the Sun. They can pick up the card and rotate it to see the surface of the planet in more detail, or its moons.

In Figure 2 the physical table is augmented with overlaid 3D animations that are tied to particular physical locations. These 3D animations provide visualisations appropriate to the meaning of the gestures used by the children to interact with the display. Instead of observing a pre-set animation, or using a mouse to direct an animation, the child’s physical gestures themselves control the movement displayed. This example is called MagiPlanet and is one of a range of applications of AR developed by Mark Billinghurst and his group at the Human Interface Technology Laboratory New Zealand (Billinghurst 2002). The application of this technology for learning in UK classrooms is currently being explored by Adrian Woolard and his colleagues at the BBC’s Creative R&D, New Media and Technology Unit4 (Woolard 2004).

Figure 3 illustrates a tangible technology called ‘I/O Brush’, developed by Kimiko Ryokai, Stefan Marti and Hiroshi Ishii of MIT Media Lab’s Tangible Media Group (Ryokai et al 2004). It is showcased at BETT 20055 as part of Futurelab’s Innovations exhibition.

Tangibles have been reported as having the potential for providing innovative ways for children to play and learn, through novel forms of interacting and discovering, and the capacity to bring the playfulness back into learning (Price et al 2003). Dourish (2001) discusses the potential of ‘tangible bits’, where the digital world of information is coupled with novel arrangements of electronically-embedded physical objects, providing different forms of user interaction and system behaviour, in contrast with the standard desktop.

3 www.hitlabnz.org/index.php?page=proj-magiplanet
4 www.becta.org.uk/etseminars/presentations/presentation.cfm?seminar_id=29&section=7_1&presentation_id=68&id=2608
5 www.bettshow.co.uk

Fig 3: These images show the ‘I/O Brush’ system developed by Kimiko Ryokai, Stefan Marti and Hiroshi Ishii of MIT Media Lab’s Tangible Media Group6 (Ryokai et al 2004).

Imagine a reception class is creating a story together. Their teacher has been reading from a storybook about Peter Rabbit during literacy hour over the past few weeks. Today they are trying to create an animated version of the story to show in assembly. They are using special paintbrushes which they can sweep over the picture of Peter Rabbit in the storybook. One child wants to draw a picture of Peter Rabbit hopping. He places the brush over the picture of Peter Rabbit and as he does so he makes little hopping motions with his hand. Then he paints the brush over the display screen that the class are working with and as he does a picture of the rabbit appears, hopping with the same movements that the child made when he painted over the storybook. The special paintbrush has ‘picked up’ the image, with its colours, together with the movements made by the child, and transferred these physical attributes to a digital animation.

6 http://web.media.mit.edu/~kimiko/iobrush/

Familiar objects such as building bricks, balls and puzzles are physically manipulated to make changes in an associated digital world, capitalising on people’s familiarity with their way of interacting in the physical world (Ishii and Ullmer 1997). In so doing, it is assumed that more degrees of freedom are provided for exploring, manipulating and reflecting upon the behaviour of artefacts and their effects in the digital world. In relation to learning, such tangibles are thought to provide different kinds of opportunities for reasoning about the world through discovery and participation (Soloway et al 1994; Tapscott 1998). Tangible-mediated learning also has the potential to allow children to combine and recombine the known and familiar in new and unfamiliar ways, encouraging creativity and reflection (Price et al 2003).

1.6 SUMMARY

While the computer is now a familiar object in most schools in the UK today, outside schools different approaches to interacting with digital information and representations are emerging. These can be considered under the term ‘tangible interfaces’, which attempt to overcome the difference between the ways we input and control information and the ways this information is represented. These ‘tangible interfaces’ may be of significant benefit to education by enabling, in particular, younger children to play with actual physical objects augmented with computing power. Tangible technologies are part of a wider body of developing technology known as ‘ubiquitous computing’, in which computing technology is so embedded in the world that it ‘disappears’. The next section of this review will discuss the key attributes of these technologies in more detail before moving on, in Section 3, to discuss their potential educational applications.

2 TANGIBLE USER INTERFACES: NEW FORMS OF INTERACTIVITY

In this section of the review we discuss the different ways in which researchers in the field have classified types of tangible user interfaces. Each of the frameworks we describe emphasises slightly different properties of these interfaces, which present some potentially interesting ways to think about the educational possibilities of tangible technologies.

The conceptual frameworks we describe classify and analyse the properties of tangible technologies in terms of the human-computer interface. They share several characteristics, but differ in the type and number of dimensions and in the degree to which they restrict or relax the space of possible technologies. All of them, albeit in different ways, incorporate a notion of representational mapping, ranging from tight coupling (iconic or indexical) to loose coupling (symbolic, arbitrary), whether in terms of the referents of an operation (physical or digital) or in terms of the nature of the referring expression (operation or manipulation) and its effects.

2.1 DIRECT MANIPULATION INTERFACES

Prior to the mid-80s, interaction with computers consisted of typing special commands on a keyboard, which resulted in the output of alphanumeric characters on a teleprinter or monitor, usually restricted to 24 rows of around 80 characters per line. In the mid-80s there was something of a revolution in personal computing. Instead of the previous


command-line interfaces and keyboards, technologies were developed which enabled users to interact much more directly via a pointing device (the mouse) and a graphical user interface employing the now familiar desktop system with windows, icons and menus. The earliest example of such systems was the Star office application (Smith et al 1982), the precursor to the Microsoft Windows and Apple Macintosh operating systems.

Researchers in HCI called such systems ‘direct manipulation’ interfaces (Hutchins et al 1986; Shneiderman 1983). Hutchins et al (1986) analysed the properties of so-called direct manipulation interfaces and argued that there were two components to the impression the user could form about the directness of manipulating the interface: the degree to which input actions map onto output displays in an interactional or articulatory sense (psychologists refer to this as ‘stimulus-response compatibility’), and the degree to which it is possible to express one’s intentions or goals with the input language (semantic directness).

Articulatory directness refers to the extent to which the behaviour of an input action (eg moving the mouse) maps directly or otherwise onto the effects on the display (eg the cursor moving from one position to another). In other words, a system has articulatory directness to the extent that the form of the manipulation of the input device mirrors the corresponding behaviour on the display. This is a similar concept to stimulus-response compatibility (SRC) in psychology. An example of SRC in the physical world is where the motion of turning a car steering wheel maps onto the physical act of the car turning in the appropriate direction.

Semantic directness in Hutchins et al’s analysis referred to the extent to which the meaning of a digital representation (eg an icon representing a wastebasket) mapped onto its physical referent in the real world (ie a real wastebasket).

In terms of tangible user interfaces, similar analyses have been carried out concerning the mapping between (i) the form of physical controls and their effects and (ii) the meaning of representations and the objects to which they refer. We outline these analyses below.

2.2 CONTROL AND REPRESENTATION

In this section we consider the extension of Hutchins et al’s analysis of direct manipulation to the design of tangible user interfaces.

As argued earlier, in traditional human-computer interfaces (ie graphical user interfaces or GUIs), a distinction is typically made between input and output. For example, the mouse is an input device and the screen is the output device. In tangible user interfaces this distinction disappears, in two senses:

1 In tangible interfaces the device that controls the effects that the user wants to achieve may be at one and the same time both input and output.

2 In GUIs the input is normally physical and the output is normally digital, but in tangible user interfaces there can be a variety of mappings of digital-to-physical representations, and vice versa. Ullmer and Ishii explain this by analogy with one of the earliest forms of computer, the abacus:


“In particular, a key point to note is that the abacus is not an input device. The abacus makes no distinction between ‘input’ and ‘output’. Instead, the beads, rods, and frame of the abacus serve as manipulable physical representations of abstract numerical values and operations. Simultaneously, these component artefacts also serve as physical controls for directly manipulating their underlying associations.” (Ullmer and Ishii 2000)

The term physical representation is important here because Ullmer and Ishii wish to argue that it is the representational significance of a tangible device that makes it different to a mouse, for example, which has little representational significance (ie a mouse isn’t meant to ‘mean’ anything). They explain the difference by describing an interface for urban planning where, instead of icons on a screen controlled by a mouse, physical models of buildings are used as physical representations of actual buildings – instead of manipulating digital representations, you are manipulating the models (physical representations), which then also manipulate a digital display (Underkoffler and Ishii 1999).

The key issue here is that instead of there being a distinction between the controlling device (the mouse) and the representation (images of a city on screen), in this sort of interaction the control device (physical models of buildings) is also a physical representation which is tightly coupled with an additional digital representation.

Ullmer and Ishii point out several characteristics of this sort of approach:

• physical representations are coupled to digital information

• physical representations embody mechanisms for interactive control

• physical representations are perceptually coupled to actively mediated digital representations

• the physical state of tangibles embodies key aspects of the digital state of the system.

Koleva et al (2003) develop Ullmer and Ishii’s approach a bit further. They distinguish between different types of tangible interfaces in terms of ‘degree of coherence’ – ie whether the physical and the digital artefacts are seen as one common object that exists in both the physical and digital worlds, or whether they are seen as separate but temporally interlinked objects. The weakest level of coherence in their scheme is seen in general-purpose tools where one physical object may be used to manipulate any number of digital objects. An example is the mouse, which controls several different functions (eg menus, scrollbars, windows, check boxes) at different times. In computer science terms, these are referred to as ‘time-multiplexed’ devices (Fitzmaurice et al 1995). In contrast, one can have physical objects as input devices where each object is dedicated to performing one and only one function – these are referred to as space-multiplexed devices. An example in tangible computing is the use of physical ‘buildings’ in the Urp urban planning application of Underkoffler and Ishii (op cit).
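The time-multiplexed/space-multiplexed distinction can be made concrete with a small sketch (ours, not drawn from any of the systems cited; the class names are invented for illustration). A time-multiplexed device is rebound to different digital functions over time, whereas a space-multiplexed object is permanently dedicated to one function:

```python
class TimeMultiplexedDevice:
    """One physical device rebound to many digital functions over time,
    like a mouse driving menus, scrollbars and windows in turn."""
    def __init__(self):
        self.current_function = None

    def bind(self, function):
        # Rebinding replaces the device's previous digital role.
        self.current_function = function

    def act(self):
        return self.current_function()


class SpaceMultiplexedDevice:
    """One physical object permanently dedicated to a single function,
    like a model building in the Urp urban planning application."""
    def __init__(self, function):
        self._function = function  # fixed for the life of the object

    def act(self):
        return self._function()


mouse = TimeMultiplexedDevice()
mouse.bind(lambda: "scroll the window")
print(mouse.act())
mouse.bind(lambda: "open a menu")   # same device, new role
print(mouse.act())

building = SpaceMultiplexedDevice(lambda: "reposition building on map")
print(building.act())               # only ever performs its one function
```

In this sense the space-multiplexed object is ‘tightly coupled’ to its digital referent: the binding is part of what the object is, not a role it temporarily plays.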

Space-multiplexed devices are what one might call ‘tightly coupled’ in terms of mappings between physical and digital representations, whereas time-multiplexed devices are less tightly coupled. Unless one allows for this range of interface types, there would be a very small class of


devices or interfaces that could count as tangible, and, in fact, many examples which claim to employ tangible user interfaces for learning do not lie at the strict, tightly coupled end of this continuum.

The strongest level of coherence (physical-digital coupling) in the scheme of Koleva et al is where there is the illusion that the physical and digital representations are the same object. An example in the world of tangible computing is the Illuminating Clay system (Piper et al 2002), where a piece of ‘clay’ can be manipulated physically and as a result a display projected onto its surface is altered correspondingly.

A second kind of distinction which Koleva et al make is in the nature (meaning) of the representational mappings between physical and digital. In the case of a literal or one-to-one mapping, any actions with the physical devices should correspond exactly to the behaviour of the digital object. In the case of a transformational mapping there is no such direct correspondence (eg placing an object on a particular location might trigger an animation).

2.3 CONTAINERS, TOKENS AND TOOLS

Another classification of tangible technologies is provided by Holmquist et al (1999). They distinguish between containers, tokens, and tools. They define containers as generic objects that can be associated with any type of digital information. For example, Rekimoto

demonstrated how a pen could be used as a container to ‘pick-and-drop’ digital information between computers, analogous to the ‘drag-and-drop’ facility for a single computer screen (Rekimoto 1997). However, even though containers have some physical analogy in their functionality, they do not necessarily reflect physically that function nor their use – they lack natural ‘affordances’ in the Gibsonian sense (Gibson 1979; Norman 1988, 1999).

In contrast, tokens are objects that physically resemble the information they represent in some way. For example, in the metaDESK system (Ullmer and Ishii 1997)7, a set of objects were designed to physically resemble buildings on a digital map. By moving the physical buildings around on the physical desktop, relevant portions of the map were displayed.

Finally, in Holmquist et al’s analysis, tools are defined as physical objects which are used as representations of computational functions. Examples include the Bricks system referred to at the beginning of this review (Fitzmaurice et al 1995), where the bricks are used as ‘handles’ to manipulate digital information. Another example is the use of torches or flashlights to interact with digital displays, as in the Storytent system8 (Green et al 2002), described in more detail in Section 4.

2.4 EMBODIMENT AND METAPHOR

Fishkin (2004) provides a two-dimensional classification involving the dimensions ‘embodiment’ and ‘metaphor’. By


7 http://tangible.media.mit.edu/projects/metaDESK/metaDESK.html
8 http://machen.mrl.nott.ac.uk/Challenges/Devices/torches.htm

embodiment Fishkin means the extent to which the users are attending to an object while they manipulate it. In terms of Weiser’s notion of ubiquitous computing discussed previously (Weiser and Brown 1996), this is the dimension of central-peripheral attentional focus. At one extreme of Fishkin’s dimension of embodiment, the output device is the same as the input device – examples include the Tribble system9 (Lifton et al 2003), which consists of a ‘sensate skin’ which can react to being touched or stroked by changing lights on its surface, vibrating, etc. Another similar example is ‘Super Cilia Skin’10

(Raffle et al 2003). It consists of computer-controlled actuators that are attached to an elastic membrane. The actuators change their physical orientation to represent information in a visual and tactile form. Just as clay deforms when pressed, these surfaces act both as input and output devices.

Slightly less tight coupling between input and output in Fishkin’s taxonomy is seen in cases where the output takes place proximal or near to the input device. Examples include the Bricks system (Fitzmaurice et al 1995) and a system called I/O Brush11 (Ryokai et al 2004), referred to in the introduction to this review. This consists of a paintbrush which has various sensors for bristles which allow the user, amongst other things, to ‘pick up’ the properties of an object (eg its colour) and ‘paint’ them onto a surface.

In the third case of embodiment the output is ‘around’ the user (eg in the form of audio output) – what Ullmer and Ishii refer to as ‘non-graspable’ or ambient (Ullmer and

Ishii 2000). Finally, the output can be distant from the input (eg on another screen).

Fishkin argues that as embodiment increases, the ‘cognitive distance’ between input and output decreases. He suggests that if it is important to maintain a ‘cognitive dissimilarity’ between input and output ‘objects’ then the degree of embodiment should be decreased. This may be important in learning applications, for example if the aim is to encourage the learner to reflect on the relationships between the two.

The second dimension in Fishkin’s framework is metaphor. In terms of tangible interfaces this means the extent to which the user’s actions are analogous to the real-world effect of similar actions. At one end of this continuum the tangible manipulation has no analogy to the resulting effect – an extreme example is a command-line interface. An example he cites from tangible interfaces is the Bit Ball system (Resnick et al 1998), where squeezing a ball alters audio output.

In other systems an analogy is made to the physical appearance (visual, auditory, tactile) of the device and its corresponding appearance in the real world. In terms of GUIs this corresponds to the appearance of icons representing physical documents, for example. However, the actions performed on/with such icons have no analogy with the real world (eg crumpling paper). In terms of TUIs, examples are where physical cubes are used as input devices and the particular picture on a face of a cube determines an operation (Camarata et al 2002).


9 http://web.media.mit.edu/~lifton/Tribble/
10 http://tangible.media.mit.edu/projects/Super_Cilia_Skin/Super_Cilia_Skin.htm
11 http://tangible.media.mit.edu/projects/iobrush/iobrush.htm

In contrast, other systems have analogies between manipulations of the tangible device and the resulting operations (as opposed to operands). Yet other systems combine analogies between the referents in the physical and digital worlds and the referring expressions or manipulations. For example, in GUIs the ‘drag-and-drop’ interface (eg dragging a document icon into the wastebasket) involves analogies concerning both the appearance of the objects (document, wastebasket) and the physical operation of dragging and dropping an object from the desktop into the wastebasket. An example in terms of TUIs is the Urp system described earlier (Underkoffler and Ishii 1999), where physical input devices resembling buildings are used to move buildings around a map in the virtual world.

Finally, Fishkin describes the other extreme of this continuum, where the digital system maps completely onto the physical system metaphorically. He refers to this as ‘really direct manipulation’ (Fishkin et al 2000). An example from GUIs is the use of a stylus or light pen to interact with a touchscreen or tablet (pen-tablet interfaces), where writing on the screen results in altering the document directly. An example from TUIs that he gives is the Illuminating Clay system (Piper et al op cit).

2.5 SUMMARY

The dimensions highlighted in these frameworks are:

• The behaviour of the control devices and the resulting effects: a contrast is made between input devices where the form of user control is arbitrary and has no special behavioural meaning with

respect to output (eg using a generic tool like a mouse to interact with the output on a screen), and input devices which have a close correspondence in behavioural meaning between input and output (eg using a stylus to draw a line directly on a tablet or touchscreen). This is what Hutchins et al (1986) refer to as articulatory correspondence. The form of such mappings may result in one-to-many relations between input and output (as in arbitrary relations between a mouse, joystick or trackpad and various digital effects on a screen), or one-to-one relations (as in the use of special purpose transducers where each device has one function).

• The semantics of the physical-digital representational mappings: this refers to the degree of metaphor between the physical and digital representation. It can range from being a complete analogy, in the case of physical devices resembling their digital counterparts, to no analogy at all.

• The role of the control device: this refers to the general properties of the control device, irrespective of its behaviour and representational mapping. So, for example, a control device might play the role of a container of digital information, a representational token of a digital referent, or a generic tool representing some computational function.

• The degree of attention paid to the control device as opposed to that which it represents: completely embodied systems are where the user’s primary focus is on the object being manipulated rather than the tool being used to manipulate the object. This can be more or less affected by the extent of the metaphor used in mapping between the control device and the resulting effects.


Of course none of these dimensions or analytic frameworks are entirely independent, nor are they complete. The point is that there are at least two senses in which tangible user interfaces strive to achieve ‘really direct manipulation’:

1 In the mappings between the behaviour of the tool (physical or digital) which the user uses to engage with the object of interest and the resulting effects.

2 In the mappings between the meaning or semantics of the representing world (eg the control device) and the represented world (eg the resulting output).

3 WHY MIGHT TANGIBLE TECHNOLOGIES BENEFIT LEARNING?

In this section we consider some of the research evidence from developmental psychology and education which might inform the design and use of tangibles as new forms of control systems, and in terms of the learning possibilities implied by the physical-digital representational mappings we have discussed in the previous section.

3.1 THE ROLE OF PHYSICAL ACTION IN LEARNING

It is commonly believed that physical action is important in learning, and there is a good deal of research evidence in psychology to support this. Piaget and Bruner showed that children can often solve problems when given concrete materials to work with before they can solve them symbolically – for example, pouring water back and forth between wide and narrow containers eventually helps young children discover that volume is conserved (Bruner et al 1966; Martin and Schwartz in press; Piaget 1953). Physical movement can enhance thinking and learning – for example, Rieser (Rieser et al 1994) has shown how locomotion supports children in categorisation and recall in tasks of perspective taking and spatial imagery, even when they typically fail to perform in symbolic versions of such tasks.

So evidence suggests that young children (and indeed adults) can in some senses ‘know’ things without being able to express their understanding through verbal language or without being able to reflect on what they know in an explicit sense.


This is supported by research by Goldin-Meadow (2003), who has shown over an extensive set of studies how gesture supports thinking and learning. Church and Goldin-Meadow (1986) studied 5-8 year-olds’ explanations of Piagetian conservation tasks and showed that some children’s gestures were mismatched with their verbal explanations. For example, they might say that a tall thin container has a large volume because it’s taller, but their gesture would indicate the width of the container. Children who produced these kinds of gestures tended to be those who benefited most from instruction or experimentation. The argument is that their gestures reflected an implicit or tacit understanding which wasn’t open to being expressed in language. Goldin-Meadow also argues that gestures accompanying speech can, in some circumstances, reduce the cognitive load on the part of the language producer and, in other circumstances, facilitate parallel processing of information (Goldin-Meadow 2003).

Research has also shown that touching objects helps young children in learning to count – not just in order to keep track of what they are counting, but in developing one-to-one correspondences between number words and item tags (Alibali and DiRusso 1999). Martin and Schwartz (in press) studied 9 and 10 year-olds learning to solve fraction problems using physical manipulatives (pie wedges). They found that children could solve fraction problems by moving physical materials, even though they often couldn’t solve the same problems in their heads, even when shown a picture of the materials. However, action itself was not enough – its benefits depended on particular relationships between action and prior knowledge.

Recent neuroscientific research suggests that some kinds of visuo-spatial transformations (eg mental rotation tasks, object recognition, imagery) are interconnected with motor processes and possibly driven by the motor system (Wexler 2002). Neuroscientific research also suggests that the neural areas activated during finger-counting, which is a developmental strategy for learning calculation skills, eventually come to underpin numerical manipulation skills in adults (Goswami 2004).

3.2 THE USE OF PHYSICAL MANIPULATIVES IN TEACHING AND LEARNING

There is a long history of the use of manipulatives (physical objects) in teaching, especially in mathematics and especially in early years education (eg Dienes 1964; Montessori 1917). There are several arguments concerning why manipulation of concrete objects helps in learning abstract mathematics concepts. One is what Chao et al (2000) call the ‘mental tool view’ – the idea that the benefit of physical materials comes from using the mental images formed during exposure to the materials. Such mental images of the physical manipulations can then guide and constrain problem solving (Stigler 1984).

An alternative view is what Chao et al call the ‘abstraction view’ – the idea that the value of physical materials lies in learners’ abilities to abstract the relation of interest from a variety of concrete instances (Bruner 1966; Dienes 1964). However, the findings concerning the effectiveness of manipulatives over traditional methods of instruction are inconsistent, and where


they are seen to be effective it is usually after extensive instruction and practice (Uttal et al 1997).

Uttal et al (op cit) are critical of the simple argument that manipulatives make dealing with abstractions easier because they are concrete, familiar and non-symbolic. They argue that at least part of the difficulty children may have with using manipulatives stems from the need to interpret the manipulative as a representation of something else. Children need to see and understand the relations between the manipulatives and other forms of mathematical expression – in other words, children need to see that the manipulative can have a symbolic function, that it can stand for something.

Understanding symbolic functions is not an all-or-nothing accomplishment developmentally. Children as young as 18 months to 2 years can be said to understand that a concrete object can have some symbolic function, as seen, for example, in their use of pretence (eg Leslie 1987). However, the ability to reason with systems of symbolic relations in order to understand particular mathematical ideas (eg to use Dienes blocks in solving mathematical problems) requires a good deal of training and experience. DeLoache has carried out extensive research on the development of ‘symbol-mindedness’ in young children (DeLoache 2004) and has shown that it develops gradually between around 10 months of age and 3 years, and that even when children might be able to distinguish between an object as a symbol (eg a photograph, a miniature toy chair) and its referent (that which the photograph depicts, a real chair), they sometimes make bizarre errors such as trying to grasp the photograph in the book

(DeLoache et al 1998; Pierroutsakos and DeLoache 2003) or trying to sit in the chair (DeLoache et al 2004).

3.3 REPRESENTATIONAL MAPPINGS

In Section 2 we outlined Ullmer and Ishii’s analysis of the mappings between physical representations and digital information. They introduced a concept they call ‘phicons’ (physical icons). In using this term they are careful to emphasise the representational properties of icons and distinguish them from symbols. In HCI usage generally, the term icon has come to stand for any kind of pictorial representation on a screen. Ullmer and Ishii use the term phicon to refer to an icon in the Peircean sense. The philosopher Peirce distinguished between indices, icons and symbols (Project 1998), where an icon shares representational properties in common with the object it represents, a symbol has an arbitrary relationship to that which it signifies, and an index represents not by virtue of sharing representational properties but literally by standing for or pointing to, in a real sense, that which it signifies (eg a proper name is an index in this sense).

This is an interesting distinction to make in terms of the affordances of tangible interfaces for learning. Bruner made the distinction between ‘enactive’, ‘iconic’ and ‘symbolic’ forms of representation in his theory of learning (1966). He argued that children move through this sequence of forms of representation in learning concepts in a given domain. For example, a child might start by being able to group objects together according to a number of dimensions (eg size, colour, shape) – they are enacting their understanding of


classification. If they can draw pictures of these groupings they are displaying an iconic understanding. If they can use verbal labels to represent the categories they are displaying a symbolic understanding. Iconic representations have a one-to-one correspondence with the objects they represent and they have some resemblance (in representational terms) to that which they represent. They serve as bridging analogies (cf Brown and Clement 1989) between an implicit sensorimotor representation (in Piagetian terms) and an explicit or articulated symbolic representation.

Piaget’s developmental theory involved the transformation of initially sensorimotor representations (from simple reflexes to more and more controlled repetition of sensorimotor actions to achieve effects in the world), to symbolic manipulations (ranging from simple single operations to coordinated multiple operations, but operating on concrete external representations), to fully-fledged formal operations (involving cognitive manipulation of complex symbol systems, as in hypothetico-deductive reasoning). Each transition through Piagetian stages of cognitive development involved increasing explicitation of representations, gradually rendering them more and more open to conscious, reflective cognitive manipulation. Similarly, Bruner’s theory of development (1966) also involved transitions from implicit or sensorimotor representations (which he terms ‘enactive’) to gradually more explicit and symbolic representations. The distinction, however, between Piaget’s and Bruner’s theories lies in the degree to which such changes are domain-general (for Piaget they were; for Bruner each new domain of learning involved transitions through these stages) and age-related or independent.

More recent theories from cognitive development also involve progressive ‘explicitation’ of representational systems – for example, Karmiloff-Smith’s theory of representational re-description (1992) describes transitions in children’s representations from implicit to more explicit forms. In her theory children start out learning in a domain (eg drawing, mathematics) by operating solely on external representations (similar to Bruner’s enactive phase). At this level they have learned highly specific procedures that are not open to revision, conscious inspection or verbal report and are triggered by specified environmental conditions. Gradually these representations become more flexible and can be changed internally (for example, children gradually learn to generalise procedures and appear to operate with internal rules for producing behaviour). Finally the child is able to reflect upon his or her representations and is able, for example, to transfer rules or procedures across domains or modes of representation.

The representational transformations described by Piaget, Bruner and Karmiloff-Smith are also mirrored in another highly influential theory of learning – that of John Anderson – but in the reverse sequence. Anderson’s theory of skill acquisition (Anderson 1993; Anderson and Lebiere 1998) describes the transitions from initial explicit or declarative representations that involve conscious awareness and control (learning facts or condition-action rules) to procedures which become more automated and implicit with practice. An example is in learning to drive, where rules of behaviour (eg depressing the clutch and moving the gear lever into position) are initially, for the novice driver, a matter of effortful recall and control and where it is


difficult to carry out several actions in parallel, to the almost unconscious carrying out of highly routine procedures in the expert driver (who can now move into gear, steer the car, carry on a conversation with a passenger and monitor other traffic).

The analysis of systems of representational mapping presented in the previous section on tangible interfaces has, at least on the face of it, some parallel with the analysis of representational transformations in theories of cognitive development and learning. It is interesting to try and draw some inferences for the design of tangible technologies for learning. One of the most influential educational researchers to attempt this is Seymour Papert.

3.4 MAPPING FROM PHYSICAL TO DIGITAL REPRESENTATIONS: THE ARGUMENT FOR LEARNING

Papert’s theory of learning with technology was heavily influenced by his mentor, Jean Piaget. Papert is famous for the development of the Logo programming environment for children, which launched a whole paradigm of educational computing referred to, in his term, as ‘constructionism’ (Papert 1980)12. Papert’s observation was that young children have a deep and implicit spatial knowledge based on their own personal sensorimotor experience of moving through a three-dimensional world. He argued that by giving children a spatial metaphor through manipulating another body (a physical robot or an on-screen cursor), they might gradually develop increasingly

explicit representations of control structures for achieving effects such as creating graphical representations on a screen. The Logo programming environment gave children a simple language and control structure for operating a ‘turtle’ (the name given to the on-screen cursor or physical robot). For example, the Logo command sequence:

TO SQUARE :STEPS
REPEAT 4 [FORWARD :STEPS RIGHT 90]
END

results either in a cursor tracing out a square on-screen or a robot moving around in a square on the floor (McNerney 2004). The idea was that this body-centred or ‘turtle geometry’ (Abelson and diSessa 1981) would enable children to learn geometric concepts more easily than previous, more abstract teaching methods. McNerney puts it as follows:

“…rather than immediately asking a child to describe how she might instruct the turtle to draw a square shape, she is asked to explore the creation of the shape by moving her body; that is, actually ‘walking the square’. Through trial and error, the child soon learns that walking a few steps, turning right 90°, and repeating this four times makes a square. In the process, she might be asked to notice whether she ended up about where she started. After this exercise, the child is better prepared to program the turtle to do the same thing… In the end, the child is rewarded by watching the turtle move around the floor in the same way as she acted out the procedure before.” (McNerney 2004)
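The logic of ‘walking the square’ can be checked with a short simulation (a minimal sketch of turtle geometry in Python, not part of Logo itself; the function name is ours). Repeating FORWARD and RIGHT 90 four times brings the turtle back to its starting point, having turned through a full 360°:

```python
import math

def walk_square(steps):
    """Trace the path of the Logo SQUARE procedure:
    REPEAT 4 [FORWARD :STEPS RIGHT 90]."""
    x, y, heading = 0.0, 0.0, 90.0   # start at the origin, facing 'up'
    path = [(x, y)]
    for _ in range(4):
        rad = math.radians(heading)
        x += steps * math.cos(rad)   # FORWARD :STEPS
        y += steps * math.sin(rad)
        heading -= 90.0              # RIGHT 90 (a clockwise turn)
        path.append((round(x, 6), round(y, 6)))
    return path

corners = walk_square(100)
print(corners)
# Four equal sides and four right turns close the shape: the final
# point coincides with the origin where the turtle started.
```

This is exactly the noticing step McNerney describes – whether the child ‘ended up about where she started’ – expressed as a computation rather than a walk.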



SECTION 3

WHY MIGHT TANGIBLE TECHNOLOGIES BENEFIT LEARNING?

12 The term ‘constructionism’ is based on Piaget’s approach of ‘constructivism’ – the idea being that children develop knowledge through action in the world (rather than, in terms of the preceding behaviourist learning theories, through more passive response to external stimuli). By emphasising the construction of knowledge, Papert was stating that children learn effectively by literally constructing knowledge for themselves through physical manipulation of the environment.

Papert outlined a number of principles for learning through such activities. These included the idea that otherwise implicit reasoning could be made explicit (ie the child’s own sensorimotor representations of moving themselves through space had to be translated into explicit, conscious and verbalisable instructions for moving another body through space); that the child’s own reasoning and its consequences were therefore rendered visible to themselves and others; that this fostered planning and problem solving skills; and that errors in the child’s reasoning (literally, ‘bugs’ in programming terms) were therefore made explicit and the means for ‘debugging’ such errors were made available, all leading to the development of metacognitive skills.

In earlier sections of this review we saw how various frameworks offered for understanding tangible computing referred to close coupling of physical and digital representations – the ideal case being ‘really direct manipulation’, where the user feels as if they are interacting directly with the object of interest rather than some symbolic representation of the object (Fishkin et al 2000). Papert’s vision also involves close coupling of the physical and digital. However, crucial to Papert’s arguments about the advantages of Logo, turtle geometry and ‘body-centered geometry’ (McNerney 2004) is the notion of reflection by the learner. The point about turtle geometry is not just that it is ‘natural’ for the learner to draw upon their own experience of moving their body through space, but that the act of instructing another body (whether it is a robot or a screen-based turtle) to produce the same behaviour renders the learner’s knowledge explicit.

So ‘really direct manipulation’ may not be the best for learning. For learning, the goal of interface design may not always be to render the interface ‘transparent’ – sometimes ‘opacity’ may be desirable if it makes the learner reflect upon their actions (O’Malley 1992). There are at least three layers of representational mapping to be considered in learning environments:

• the representation of the learning domain (eg symbol systems representing mathematical concepts)

• the representation of the learning activity (eg manipulating Dienes blocks on a screen)

• the representation embodied in the tools themselves (eg the use of a mouse to manipulate on-screen blocks versus physically moving blocks around in the real world).

When designing interfaces for learning, the main goal may not be the speed and ease of creation of a product – when the educational activity is embedded within the task, one may not want to minimise the cognitive load involved. Research by Golightly and Gilmore (1997) found that a more complex interface produced more effective problem solving compared to an easier-to-use interface. They argue that, for learning and problem solving, the rules of transparent direct manipulation interface design may need to be broken. However, designs should reduce the learner’s cognitive load for performing non-content related tasks, in order to enable learners to allocate cognitive resources to understanding the educational content of the learning task. A similar point is made by Marshall et al (2003) in their analysis of tangible interfaces for learning.



Marshall et al discuss two forms of tangible technologies for learning: expressive and exploratory. They draw upon a concept from the philosopher Heidegger, termed ‘readiness-to-hand’ (Heidegger 1996), later applied to human-computer interface design by Winograd and Flores (1986) and more recently to tangible interfaces by Dourish (2001). The concept refers to the way that when we are working with a tool (eg a hammer) we focus not on the tool itself but on the task for which we are using the tool (ie hammering in a nail). The contrast to ‘readiness-to-hand’ is ‘present-at-hand’ – ie focusing on the tool itself. Building on Ackermann’s work (1996, 1999), they argue that effective learning involves being able to reflect upon, objectify and reason about the domain, not just to act within it. Marshall et al’s notion of ‘expressive’ tangible systems refers to technologies which enable learners to create their own external representations – as in Resnick’s ‘digital manipulatives’ (Resnick et al 1998), derived from the tradition of Papert and Logo programming. They argue that by making their own understanding of a topic explicit, learners can make inconsistencies, conflicting beliefs and incorrect assumptions clearly visible.

Marshall et al contrast these expressive forms of technologies with exploratory tangible systems, where learners focus on the way in which the system works rather than on the external representations they are building. Again, this is in keeping with Papert’s arguments about the value of ‘debugging’ for learning. When their program doesn’t run or the robot doesn’t work in the way in which they expected, learners are forced to step back and reflect upon the program itself or the technology itself to try and understand why errors or bugs have occurred. In so doing, they become more reflective about the technology they are using. This line of argument is also in keeping with a situated learning perspective, where opportunities for learning occur when there are ‘breakdowns’ in ‘readiness-to-hand’ (Lave and Wenger 1991).

Marshall et al suggest that effective learning stems from cycling between the two modes of ‘ready-to-hand’ (ie using the system to accomplish some task) and ‘present-at-hand’ (ie focusing on the system itself). They also suggest two kinds of activity for using tangibles in learning: expressive activity, where the tangible represents or embodies the learner’s behaviour (physically or digitally), and exploratory activity, where the learner explores the model embodied in the tangible interface that is provided by the designer (rather than by themselves). The latter can be carried out either in some practical sense (eg figuring out how the technology works, such as the physical robot in LEGO/Logo – Martin and Resnick 1993), or in what they call a theoretical sense (eg reasoning about the representational system embodied in the model, such as the programming language in the case of systems like Logo).

A study by Sedig et al (2001) provides some support for the claims made by Marshall et al. The study examined the role of interface manipulation style on reflective cognition and concept learning. Three versions of a tangrams puzzle (geometrically shaped pieces that must be arranged to fit a target outline) were designed:

• Direct Object Manipulation (DOM) interface, in which the user manipulates the visual representation of the objects




• Direct Concept Manipulation (DCM) interface, in which the user manipulates the visual representation of the transformation being applied to the objects

• Reflective Direct Concept Manipulation (RDCM) interface, in which the DCM approach is used with scaffolding.

Results from 44 11- and 12-year-olds showed that the RDCM group learned significantly more than the DCM group, who in turn learned significantly more than the DOM group.

Finally, research by Judy DeLoache and colleagues on the development of children’s understanding of and reasoning with symbols provides another note of caution in drawing implications for learning from the physical-digital representational mappings in tangibles. DeLoache et al (1998) argue that it cannot be assumed that even the most ‘obvious’ or iconic of symbols will automatically be interpreted by children as a representation of something other than itself. We might be tempted to assume that little learning is required for highly realistic models or photographs – ie symbols that closely resemble their referents. Even very young infants can recognise information from pictures. However, DeLoache et al argue that perceiving similarity between a picture and a referent is not the same as understanding the nature of the picture. She and her colleagues have shown over a number of years of research that problems in understanding symbols continue well beyond infancy and into childhood.

For example, young children (2.5 years) have problems understanding the use of a miniature scale model of a room as a model. They understand perfectly well that it is a model and can find objects hidden in the miniature room. However, they have great difficulty in using this model as a clue to finding a corresponding real object hidden in an identical real room (DeLoache 1987). DeLoache argues that the problem stems from the dual nature of representations in such tasks. While the models serve as representations of another space, they are also objects in themselves – ie a miniature room in which things can be hidden. Young children find it difficult, she argues, to simultaneously reason about the model as a representation and as an interesting object, partly because it is highly salient and attractive as an object in its own right (DeLoache et al op cit). She tested this theory by having children use photographs of the real room rather than the scale model and showed that children found that task much easier. This and other research leads DeLoache and colleagues to issue caution in assuming that highly realistic representations somehow make the task of mapping from the symbol to the referent easier. In fact, they argue, it can have the opposite effect to that which is desired. They argue that the task for young children in learning to use symbols is to realise that the properties of the symbols themselves are less important than what they represent. So making symbols into highly salient concrete objects may make their meaning less, not more, clear to young children.

Such cautions may also apply to the use of manipulatives with older children. For example, Hughes (1986) found that 5-7 year-olds had difficulties using small blocks to represent single numbers or simple addition problems. The difficulties the children had suggested that they didn’t really understand how the blocks were supposed to relate to the numbers and the problem. DeLoache et al (op cit) make a very interesting comment on the difference between the ways in which manipulatives are used in mathematics teaching in Japan and in North America:

“In Japan, where students excel in mathematics, a single, small set of manipulatives is used throughout the elementary school years. Because the objects are used repeatedly in various contexts, they presumably become less interesting as things in themselves. Moreover, children become accustomed to using the manipulatives to represent different kinds of mathematics problems. For these reasons, they are not faced with the necessity of treating an object simultaneously as something interesting in its own right and a representation of something else. In contrast, American teachers use a variety of objects in a variety of contexts. This practice may have the unexpected consequence of focusing children’s attention on the objects rather than on what the objects represent.” (DeLoache et al 1998)

3.5 SUMMARY

In this section we have reviewed research from psychology and education that suggests that there can be real benefits for learning from tangible interfaces. Such technologies bring physical activity and active manipulation of objects to the forefront of learning. Research has shown that, with careful design of the activities themselves, children (older as well as younger) can solve problems and perform in symbol manipulation tasks with concrete physical objects when they fail to perform as well using more abstract representations. The point is not that the objects are concrete and therefore somehow ‘easier’ to understand, but that physical activity itself helps to build representational mappings that serve to underpin later more symbolically mediated activity, after practice and the resulting ‘explicitation’ of sensorimotor representations.

However, other research has shown that it is important to build in activities that support children in reflecting upon the representational mappings themselves. DeLoache’s work suggests that focusing children’s attention on symbols as objects may make it harder for them to reason with symbols as representations. Marshall et al (2003) argue for cycling between what they call ‘expressive’ and ‘exploratory’ modes of learning with tangibles.




4 CASE STUDIES OF LEARNING WITH TANGIBLES

In this section we present case studies of educational applications of tangible technologies. We summarise the projects by highlighting the tangibility of the application and its educational potential. Where the technology has been evaluated, we report the outcomes. The studies presented here do not give complete coverage of the field but rather indicate current directions in the full range of applications.

4.1 DIGITALLY AUGMENTED PAPER AND BOOKS

In some examples of educational applications of tangibles, ‘everyday technology’ such as paper, books and other physical displays has been augmented digitally.

For example, Listen Reader (Back et al 2001) is an interactive children’s storybook which has a soundtrack triggered by moving one’s hands over the book. The book has RFID tags embedded within it which sense which page is open, and additional sensors (capacitive field sensors) which measure an individual’s position in relation to the book and adjust the sound accordingly.
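The core sensing logic described above amounts to a mapping from sensed tag to page to soundtrack. The sketch below is purely illustrative (the tag IDs, file names and function name are invented, not from Back et al); it shows the lookup a system like this might perform when a tag is detected.

```python
# Hypothetical tag-to-page and page-to-soundtrack tables; the real
# Listen Reader hardware and identifiers are not documented here.
PAGE_BY_TAG = {"tag-03f1": 1, "tag-9a2c": 2}
SOUNDTRACK_BY_PAGE = {1: "forest-ambience.wav", 2: "river-crossing.wav"}

def on_tag_detected(tag_id):
    """Map a sensed RFID tag to its page's soundtrack (None if unknown)."""
    page = PAGE_BY_TAG.get(tag_id)
    return SOUNDTRACK_BY_PAGE.get(page)

assert on_tag_detected("tag-9a2c") == "river-crossing.wav"
assert on_tag_detected("tag-0000") is None   # unrecognised tag: no sound
```

The capacitive sensors would then modulate playback volume or mix; that continuous layer is omitted from this sketch.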

A more familiar, commercially available system is LeapPad®13, which is also a digitally augmented book. Provided with the book is a pen that enables words and letters to be read out when a child touches the surface of each page. The book can also be placed in writing mode and children can write at their own pace. A whole library of activities and stories is available for various domains including reading, vocabulary, mathematics and science. The company claims that the educational programs available for this system are based on research findings in education. For example, their Language First! Program was designed based on findings from language acquisition research.

A recent EU-funded research and development project called Paper++14 was aimed at providing digital augmentation of paper with multimedia effects. The project used real paper together with special transparent and conductive inks that could be detected with a special sensing device (a ‘wand’). This made the whole surface of the paper active, without involving complex technology embedded in the paper itself. The project focused on the use of paper in educational settings and developed technologies to support applications such as children’s books, materials for students, brochures and guides. Naturalistic studies were conducted of children in classrooms working with paper, in which the researchers examined how subtle actions involving paper form an integral part of the process of collaboration between children. The researchers also compared the behaviour of children working with a CD-ROM, an ordinary book, and an augmented paper prototype, to examine how the different media affected children’s working practices. They found that the paper enabled a more fluid and balanced sharing of tasks between the children (Luff et al 2003).



13 www.leapfrog.com 14 www.paperplusplus.net


In the KidStory project15 we developed various tangible technologies aimed at supporting young children’s storytelling (O’Malley and Stanton 2002; Stanton et al 2002; Stanton et al 2001b). For example, children’s physical drawings on paper were tagged with barcodes, which enabled them to use a barcode reader to physically navigate between episodes in their virtual story displayed on a large screen, using the KidPad storytelling software (Druin et al 1997).

In the Shape project a mixed reality experience was developed to help children discover, reason and reflect about historical places and events (Stanton et al 2003). The experience involved a paper-based ‘history hunt’ around Nottingham Castle, searching for clues about historical events which took place there. The clues involved children making drawings or rubbings on paper at a variety of locations. The paper was then electronically tagged and used to interact with a virtual reality projection on a display called the Storytent (see Fig 4). A Radio Frequency ID (RFID) tag reader was embedded in a turntable inside the tent. When placed on the turntable, each paper clue revealed an historic 3D environment of the castle from the location at which the clue was found. Additionally, ‘secret writing’ revealed the story of a character at this location by the use of an ultra-violet light placed over the turntable. The aim was to help visitors learn about the historical significance of particular locations in and around the medieval castle (no longer visible because it was burned down) by explicitly linking their physical exploration of the castle (using the paper clues) with a virtual reality simulation of the castle as it was in medieval times.

A final example of augmented paper is Billinghurst’s MagicBook16 (Billinghurst 2002; Billinghurst and Kato 2002). This employs augmented reality techniques to project a 3D visual display as the user turns the page. The system involves the user wearing a pseudo see-through head-mounted display (eg glasses). The MagicBook contains special symbols called ‘fiducial markers’ which provide a registration identity and spatial transformation, derived using a small video camera mounted on the display. As the user turns the page to reveal the marker, they see an image overlaid onto the page of the book. The system also enables collaboration with another user wearing identical equipment. The same technology was used in the MagiPlanet application illustrated at the beginning of this review.




15 www.sics.se/kidstory 16 www.hitl.washington.edu/magicbook/

Fig 4: Child and adult in the Storytent with close-up of turntable (inset).

4.2 PHICONS: THE USE OF PHYSICAL OBJECTS AS DIGITAL ICONS

The previous section gave examples of the use of paper and books which were augmented by a range of technologies (eg barcodes, RFID tags, video-based augmented reality) to trigger digital effects. In this section we review examples of the use of other physical objects, such as toys, blocks and physical tags, to trigger digital effects.

In recent years a range of digital toys have been designed aimed at very young children. One example is the suite of Microsoft ActiMates™, starting in 1997 with Barney™ the purple dinosaur. The CACHET project (Luckin et al 2003) explored the use of these types of interactive toys (in this case, DW™ and Arthur™) in supporting collaboration. The toys are aimed at 4-7 year-olds and contain embedded sensors that are activated by children manipulating parts of the toy. The toys can also be linked to a PC and interact with game software to give help within the game. Luckin et al compared the use of the software with help on-screen versus help from the tangible interface, and found that the presence of the toy increased the children’s interactions with one another and with the facilitator.

Storymat (Ryokai and Cassell 1999) is a soft interactive play mat that records and replays children’s stories. The aim is to support collaborative storytelling even in the absence of other children. The space is designed to encourage children to tell stories using soft toys. These stories are then replayed, whereby a moving projection of the toy is projected over the mat, accompanied by the associated audio. Thus children can activate stories other children have told, and also edit them and create their own stories.

In the KidStory project referred to earlier, RFID tags were embedded in toys used as story characters. When the props were moved near to the tag reader, this triggered corresponding events involving the characters on the screen (Fig 5).

The Tangible Viewpoints system (Mazalek et al 2002) uses physical objects to navigate through a multiple-viewpoint story. When an object (shaped like a chess pawn) is placed on the interaction surface, the story segments associated with its character’s point of view are projected around it in the form of images or text.

Chromarium is a mixed reality activity space that uses tangibles to help children aged 5-7 years experiment and learn about colour mixing (Rogers et al 2002)17.


Fig 5: RFID tags were embedded in physical story props and characters and used to navigate digital stories in the KidStory project.

17 www.equator.ac.uk/Projects/Digitalplay/Chromarium.htm

A number of different ways of mixing colour were explored, using a variety of physical and digital tools. For example, one activity allowed children to combine colours using physical blocks, with different colours on each face. By placing two blocks together, children could elicit the combined colour and digital animations on an adjacent screen. Children were intuitively able to interact with the coloured cubes, combining and recombining colours with immediate visual feedback. Another activity enabled children to use software tools, disguised as paintbrushes, on a digital interactive surface. Here children could drag and drop different coloured digital discs and see the resultant mixes. A third activity again allowed children to use the digital interactive surface, but this time their interactions with a digital image triggered a physical movement on an adjacent toy windmill. In their comparison of different types of these mappings between digital and physical representations, Rogers et al (2002) found the coupling of a familiar physical action with an unfamiliar digital effect to be effective in causing children to talk about and reflect upon their experience. The activities that enabled reversibility of colour mixing and immediate feedback were found to support more reflective activity; in particular, the physical blocks provided a wider variety of physical manipulation and encouraged children to explore.
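The block activity pairs a physical action (putting two faces together) with a computed digital effect (the combined colour). Rogers et al do not publish their mixing model; as a hedged sketch, the simplest plausible mapping is a per-channel blend of the two face colours, as below (function and colour names are invented for illustration).

```python
# Hypothetical sketch of Chromarium-style colour combination: the actual
# mixing model used in the project is not described in this review, so
# two block faces are blended here by averaging their RGB channels.
def mix(rgb_a, rgb_b):
    """Blend two face colours; returns the averaged RGB triple."""
    return tuple((a + b) // 2 for a, b in zip(rgb_a, rgb_b))

RED, BLUE, YELLOW = (255, 0, 0), (0, 0, 255), (255, 255, 0)

assert mix(RED, BLUE) == (127, 0, 127)     # a purple
assert mix(RED, YELLOW) == (255, 127, 0)   # an orange
```

Immediate feedback and reversibility fall out naturally from such a mapping: the children can separate the blocks and recombine them, and the displayed mix updates each time.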

4.3 DIGITAL MANIPULATIVES

The examples in the previous sections involved the use of physical objects to trigger digital augmentations. In this section we focus on a number of examples of the use of physical devices which have computational properties embedded in them.

Resnick’s group at MIT’s Media Lab have built on Papert’s pioneering work with Logo to develop a suite of tangible technologies dubbed ‘digital manipulatives’ (Resnick et al 1998). The aim was to enable children to explore mathematical and scientific concepts through direct manipulation of physical objects. Digital manipulatives are computationally enhanced versions of toys, enabling dynamics and systems to be explored. This work began with a collaboration with the LEGO toy company to create the LEGO/Logo robotics construction kit (Resnick 1993), by which children can create Logo programs to control LEGO assemblies. The kit consists of motors and sensors, and children use Logo programs to control the items they build.

The next generation of these digital manipulatives was Programmable Bricks (P-Bricks) (Resnick et al 1996). These bricks contain output ports for controlling motors and lights, and input ports for receiving information from sensors (eg light, touch, temperature). As with LEGO/Logo, programs are written in Logo and downloaded to the P-Brick, which then contains the program and is autonomous. These approaches are commercially available under the LEGO Mindstorms brand18.

Crickets (Resnick et al 1998) were another extension, but this time much smaller and with more powerful processors and a two-way infrared capability. These Crickets have been used, together with an accelerometer and coloured LEDs, to




18 http://mindstorms.lego.com

create the BitBall – a transparent rubbery ball. BitBalls have been used in scientific investigations with undergraduates and children in learning kinematics. For example, BitBalls can be thrown up into the air and can store acceleration data which can then be viewed graphically on a screen. Children as young as 10 years old have used programmable bricks and Crickets to build and program robots to exhibit chosen behaviours. Crickets have been used by children to create their own scientific instruments to carry out investigations (Resnick et al 2000). For example, one girl built a bird feeder with a sensor attached, so that when a bird landed to feed, the sensor triggered a photo of the bird to be taken. The girl could then see all the birds that had visited the feeder while she was away.

Curlybot (Frei et al 2000) is a palm-sized, dome-shaped, two-wheeled toy that can record and replay physical motion. A child presses a button to record the movement of Curlybot and then presses a button to indicate recording has finished. The replay facility then enables the child to replay the motion, and Curlybot repeats the action at the same speed and with the same pauses and motion. Curlybot can also be used with a pen attached, to leave a trail as it moves. The authors highlight Curlybot as an educational tool encompassing geometric design, gesture and narrative.
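The essential mechanism here, which Raffle et al later call ‘kinetic memory’, is a log of motion commands that can be played back verbatim, pauses included. The following is a hedged sketch of that idea, not Curlybot’s actual firmware; the class and its interface are invented for illustration.

```python
# Illustrative record-and-replay of motion: the device logs each motor
# command (duration and left/right wheel speeds) while recording, then
# replays the identical sequence, preserving speeds and pauses.
class MotionRecorder:
    def __init__(self):
        self.log = []          # list of (duration_s, left_speed, right_speed)
        self.recording = False

    def start(self):
        self.recording = True
        self.log.clear()

    def move(self, duration_s, left, right):
        if self.recording:
            self.log.append((duration_s, left, right))

    def stop(self):
        self.recording = False

    def replay(self):
        # Replaying yields the same commands at the same speeds and pauses.
        return list(self.log)

bot = MotionRecorder()
bot.start()
bot.move(1.0, 50, 50)    # straight ahead
bot.move(0.5, 0, 0)      # pause
bot.move(1.0, 50, -50)   # spin on the spot
bot.stop()
assert bot.replay() == [(1.0, 50, 50), (0.5, 0, 0), (1.0, 50, -50)]
```

The same log structure also explains the pen-trail feature: replaying the commands retraces exactly the path the child originally gestured.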

More recently, Raffle et al produced Topobo (Raffle et al 2004), a 3D ‘constructive assembly system’ which is aimed at helping children to understand the behaviours of complex systems. Topobo is embedded with kinetic memory – the ability to record and play back physical motion. By snapping together a combination of static and motorised components, children can assemble dynamic biomorphic figures, such as animals, animate them by pushing, pulling and twisting them, and observe the system play back those motions. The designers argue that Topobo can be used to help children learn about dynamic systems.

The Telltale system19 is a technology-enhanced language toy which is said to aid children in literacy development (Ananny 2002). It consists of a caterpillar-like structure; children can record a short audio clip in each segment of the body and can rearrange the segments to alter the story. Ananny found that Telltale’s segmented interface enabled children to produce longer and more linguistically elaborate stories than with a non-segmented interface.

StoryBeads (Barry et al 2000) are tangible/wearable computing elements designed to enable the construction of stories by allowing users to trade stories through pieces of images and text. The beads communicate by infrared and can beam items from bead to bead, or can be traded in order to construct and share narratives.

Triangles (Gorbet et al 1998) were used as a tangible interface for exploring stories in a non-linear manner. Triangles are interconnecting physical shapes with magnetic edges. Each triangle contains a microprocessor which can identify when it is connected to another. Triangles have been used for a range of applications including storytelling. Audio, video and still images can be captured and stored within the triangles. The three-sided nature of



19 http://web.media.mit.edu/~ananny/thesis.html

triangles allows the user to take different paths through the story space. The software records users’ interactions and enables the triggering of time-sensitive events.

Thinking Tags (Borovoy et al 1996; Colella 2000; Resnick and Wilensky 1997) are electronic badges which employ similar technology to communicate data between themselves via infrared. They have been used to create participatory simulations of the spread of a virus (Colella 2000). The technology has also been used to create genetics simulations for understanding inheritance (MacKinnon et al 2002).
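The virus simulation works because each badge carries a simple infection state that can be exchanged when two wearers meet. The sketch below illustrates that dynamic in software; it is not Colella’s published protocol, and the parameters, function names and transmission rule are all invented for illustration.

```python
import random

# Hedged sketch of a Thinking Tags-style participatory simulation: each
# badge holds an infection flag; when two badges 'meet' (as in an infrared
# exchange), infection may pass between them with some probability.
def meet(tags, a, b, p_transmit, rng):
    """A meeting between tags a and b; infection may spread either way."""
    if tags[a] != tags[b] and rng.random() < p_transmit:
        tags[a] = tags[b] = True

def run_party(n_tags, n_meetings, p_transmit=0.5, seed=0):
    rng = random.Random(seed)       # seeded for a reproducible run
    tags = [False] * n_tags
    tags[0] = True                  # one hidden 'patient zero'
    for _ in range(n_meetings):
        a, b = rng.sample(range(n_tags), 2)
        meet(tags, a, b, p_transmit, rng)
    return sum(tags)                # number infected at the end

assert run_party(n_tags=20, n_meetings=0) == 1        # no meetings, no spread
assert 1 <= run_party(n_tags=20, n_meetings=100) <= 20
```

In the classroom version, the learners themselves are the simulation: afterwards they reason backwards from the badge states to work out who ‘patient zero’ must have been.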

The Ambient Wood project (Price et al 2003; Rogers et al in press; Rogers et al 2002a; Rogers et al 2002b) consisted of groups of children using mobile technologies outdoors. The aim was to create a novel learning experience to support scientific enquiry about the biological processes taking place in a wood. Pairs of children explored a woodland area using two styles of mobile device that presented digital information triggered by the immediate surrounding environment, and allowed further data to be captured according to the children’s exploration. Children then used a tangible interface consisting of tokens made from the images they had received within the wood, in order to reflect on their findings. Within a den in the woodland, these physical tokens were used as a reflection tool. Combinations of the tokens caused digital reactions on screen concerning the relationships within the habitat. At a later date, back in the classroom, children used the tokens to build a food web based on the items and relationships they had learned about in the wood (see Fig 6).

Another example of using tangibles for storytelling was a project called Hunting the Snark – a collaborative adventure game designed within the Equator project20. The Snark was based on the elusive fictitious creature in Lewis Carroll’s poem (Carroll 1962/1876). In this game children were given different technologies to try to discover something about the Snark’s appearance, personality and preferences. In one example children were given toy food made from coloured clay, in which were embedded RFID tags. When they dropped these tags into a ‘well’ (a physical chute containing a tag reader), the display at the bottom of the well showed a short video clip of the Snark’s smile or grimace (depending on whether it liked the food or not) and they heard a corresponding sound of laughter or disgust. The aim was for children to collect instances of the Snark’s preferences in order to build up categories of its feeding behaviour. Another tangible technology used in the game involved a PDA and ultra-sonic tracking to create a Snooper.



20 www.equator.ac.uk

Fig 6: RFID tags as tokens representing flora and fauna found in the Ambient Wood, being used to interact with a table-top display back in the classroom.

As the children moved the Snooper around the room they could discover hidden food tokens which they could then try out in the well. The PDA displayed an image of the food token on the display as it was moved near to the location of the hidden tag. These and other technologies were evaluated with 6 and 7 year-olds hunting the Snark in pairs (Price et al 2003).

4.4 SENSORS AND DIGITAL PROBES

This section describes a range of tangible devices which are based on physical tools which act as sensors or probes of the environment (eg light, colour, moisture).

A device for tangible storytelling involves the use of ordinary flashlights or torches to interact with a display. For example, the Storytent (Green et al 2002) used flashlights to manipulate objects on a tent which had a 3D virtual world displayed on it. The same technology was used in a different application within the Shape project mentioned earlier, to trigger audio clips on the wall of the underground caves in Nottingham Castle, enabling users to play sequences of such audio clips to hear about the history of the caves as it happened at the very location where they shone the flashlight (Ghali et al 2003)21. A third example of the use of flashlights as tangible technologies was to interact with a digital Sandpit – a floor projection showing virtual ‘sand’. As users collaboratively shone their flashlights on the sand, it burned a virtual hole through the surface to reveal images beneath, telling the history of Nottingham Castle (Fraser et al 2003).

Another example of using physical tools to interact with digital objects comes from the KidStory project mentioned earlier, in which pressure pads on the floor were used to create a 'magic carpet' for children to navigate collaboratively through their story in the 3D virtual world (Stanton et al 2001a).

I/O Brush (Ryokai et al 2004) is a drawing tool for young children aged 4 and over. The I/O Brush is modelled on a physical paintbrush but in addition has a small video camera with lights and touch sensors embedded inside it. The aim is to get the children thinking about colours and textures in the way an artist might.



Fig 7: A torch being used to interact with a display in the Shape project.


Children can move the brush over any physical surface and pick up colours and textures and then draw with them on canvas. The authors found that when the I/O Brush was used in a kindergarten class, children talked explicitly about patterns and features available in their environment.

The SENSE project (Tallyn et al 2004) has been exploring the potential of sensor technologies to support a hands-on approach to learning science in the school classroom. Children at two participating schools designed and used their own pollution sensors within their local environment. The technology consisted of a PDA and carbon monoxide pollution sensor. The sensor was coloured differently on each side so that the direction in which the sensor was facing would be evident when children later inspected video data of the sensor in use. Children captured their own sensor data using this device, which was downloaded to additional visualisation technologies to help them analyse their data and to understand it in the context of similar data gathered by scientists carrying out related research (Equator Environmental e-Science project; www.equator.ac.uk).
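The capture-then-download pipeline described here – log readings in the field, then export them for analysis tools – can be sketched very simply. The field names, timestamps and values below are invented for illustration; this is not the SENSE project's actual software:

```python
import csv
import io

# Hypothetical sketch of a capture-then-analyse pipeline of the kind
# described: readings are logged on the handheld device, then exported
# as CSV for download into visualisation tools. All values are invented.

def log_reading(log, timestamp, co_ppm, facing):
    """Record one carbon-monoxide reading with the sensor's orientation."""
    log.append({"timestamp": timestamp, "co_ppm": co_ppm, "facing": facing})

def export_csv(log):
    """Serialise the log so it can be handed to analysis/visualisation tools."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["timestamp", "co_ppm", "facing"])
    writer.writeheader()
    writer.writerows(log)
    return buf.getvalue()

readings = []
log_reading(readings, "10:01", 4.2, "roadside")
log_reading(readings, "10:06", 1.1, "playground")
print(export_csv(readings))
```

Recording the sensor's orientation alongside each value mirrors the project's coloured-sides design: the metadata needed to interpret a reading is captured at the moment of collection, not reconstructed afterwards.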

In the Ambient Wood project mentioned earlier (Price et al 2003; Rogers et al in press; Rogers et al 2002a; Rogers et al 2002b) groups of children used mobile technologies (PDAs) outdoors to support scientific enquiry about the biological processes taking place in a wood. One of the devices used (a probe tool) contained sensors enabling measurement of the light and moisture levels within the wood. A small screen was also provided which displayed the readings using graphical

visualisations. Analysis of the patterns of interaction amongst the children showed that the technologies encouraged active exploration, the generation of ideas and the testing of hypotheses about the ecology of the woodland.
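One way to picture the probe's on-screen feedback is a raw reading mapped onto a simple graphical scale. A hedged sketch, with scale limits, bar width and channel labels invented rather than taken from the Ambient Wood system:

```python
# Hypothetical sketch of turning a probe reading into a simple on-screen
# bar, in the spirit of the graphical visualisations described.
# Scale limits (0-100) and labels are invented for illustration.

def reading_bar(value, lo, hi, width=20):
    """Render a sensor reading as a text bar between lo and hi."""
    value = max(lo, min(hi, value))          # clamp to the scale
    filled = round((value - lo) / (hi - lo) * width)
    return "[" + "#" * filled + "-" * (width - filled) + "]"

def describe(light, moisture):
    """Show both probe channels (light and moisture) side by side."""
    return (f"light    {reading_bar(light, 0, 100)} {light}%\n"
            f"moisture {reading_bar(moisture, 0, 100)} {moisture}%")

print(describe(80, 25))
```

Showing both channels at once is the point: children can notice, for instance, that shady spots tend to be damper, which is exactly the kind of hypothesis the evaluation found the technology encouraging.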

4.5 SUMMARY

We have very briefly reviewed a number of examples of tangible technologies which have been designed to support learning activities. We have discussed these under four headings: digitally augmented paper, physical objects as icons (phicons), digital manipulatives and sensors/probes.

The reason for treating digital paper as a distinct category is to make the point that, rather than ICT replacing paper and book-based learning activities, they can be enhanced by a range of digital activities, from very simple use of cheap barcodes printed on sticky labels and attached to paper, to the much more sophisticated use of video tracing in the augmented reality examples. However, as Adrian Woolard and his team at the BBC have shown, real classrooms can use AR technology very effectively, without the need for special head-mounted displays to view the augmentation. He and his team have worked with teachers using webcams and projectors to overlay videos and animations on top of physical objects. Even something as simple as projecting a display on top of paper, or on a tabletop with physical objects, could be used as an effective technique for getting children to see the relationships between objects and representations they create and manipulate in the physical world, and what happens in the computational world.





Although many of the technologies reviewed in the section on 'phicons' involved fairly sophisticated systems, once again, it's possible to imagine quite simple and cheap solutions. For example, barcode tags and tag readers are relatively inexpensive. Tags can be embedded in a variety of objects – we have even experimented with baking them in clay for the Equator Snark project – they still worked! The only slightly complex aspect is some programming to make use of the data generated by the tag reader.
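That 'slightly complex' programming step mostly amounts to turning each scan into an application event. Many inexpensive barcode readers behave as keyboards, emitting the scanned code followed by a newline, so a classroom program can simply map each line to some content. A hedged sketch – the codes and content strings are invented examples, not part of any of the projects reviewed:

```python
# Hypothetical sketch of handling tag/barcode reader input. Many cheap
# readers act as keyboards, emitting the code plus a newline, so the
# program just maps each scanned line to an action. Codes are invented.

ACTIONS = {
    "OAK-LEAF-01": "show oak-leaf fact card",
    "FUNGUS-07": "play fungus audio clip",
}

def handle_scan(line):
    """Map one scanned line to the classroom content it should trigger."""
    code = line.strip()
    return ACTIONS.get(code, f"no content registered for {code!r}")

def run(stream):
    """Process a stream of scans (eg sys.stdin in a real deployment)."""
    return [handle_scan(line) for line in stream]

print(run(["OAK-LEAF-01\n", "FUNGUS-07\n", "XYZ\n"]))
```

Because the reader looks like ordinary keyboard input, no special drivers are needed; the whole integration is the small dictionary of codes, which a teacher or pupil could edit directly.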

Digital manipulatives have been used over a number of years in classrooms in the form of Logo and Roamer, for example. The more advanced toolkits available with such systems as the Mindstorms robotics kits, with their various kinds of sensors, could be used in teaching and learning with older age groups and across the curriculum.

Finally, sensors and digital probes have also had a long history of use in UK classrooms, especially in science, in the form of data-logging systems. The increasing accessibility of PDAs and technologies such as GPS make it feasible to consider more mobile and ubiquitous use of sensor-based systems. Teachers could even think about using children's own mobile phones as 'probes' to log information and share it in the classroom, especially with the availability of camera phones.

5 SUMMARY AND IMPLICATIONS FOR FUTURE RESEARCH, APPLICATIONS, POLICY AND PRACTICE

In Section 1 of this review we introduced the concepts of tangible interfaces, ubiquitous computing and augmented reality. We also provided two specific examples of tangible technology: the MagiPlanet system, which uses augmented reality to overlay video on physical objects, and the I/O Brush system, which uses a physical object to 'pick up' properties of the world and interact with digital representations of them.

We then considered in more detail in Section 2 the various interactional properties of tangible technology for control of representations and for mappings between forms of representations (physical and digital). Although researchers have used different terminologies and structured the design space in different ways, each of the frameworks presented in Section 2 (Ullmer and Ishii's concepts of control and representation, Holmquist et al's concepts of containers, tokens and tools, Fishkin's concepts of embodiment and metaphor) is an attempt to deal explicitly with the interaction metaphors embodied in (i) the manipulation (operation) of tangible devices and (ii) the mappings between forms (appearance) of physical and digital representations.

In Section 3 we reviewed some of the research evidence from psychology and education concerning claims made for physical manipulatives. For example, people have argued that physical manipulatives are beneficial for learning because:




• physical action is important in learning – children can demonstrate knowledge in their physical actions (eg gesture) even though they cannot talk about that knowledge

• concrete objects are important in learning – eg children can often solve problems when given concrete materials to work with even though they cannot solve them symbolically or even when they cannot solve them 'in their heads'

• physical materials give rise to mental images which can then guide and constrain future problem solving in the absence of the physical materials

• learners can abstract symbolic relations from a variety of concrete instances

• physical objects that are familiar are more easily understood by children than more symbolic entities.

In addition, the following claims have been made for tangible interfaces and digital manipulatives:

• they allow for parallel input (eg two hands), improving the expressiveness or the communication capacity with the computer

• they take advantage of well-developed motor skills for physical object manipulations and spatial reasoning

• they externalise traditionally internal computer representations

• they afford multi-person, collaborative use

• physical representations embody a greater variety of mechanisms for interactive control

• physical representations are perceptually coupled to actively mediated digital representations

• the physical state of the tangible embodies key aspects of the digital state of the system.

5.1 IMPLICATIONS FOR DESIGN AND USE

Although there is evidence that, for example, physical action with concrete objects can support learning, its benefits depend on particular relationships between action and prior knowledge. Its benefits also depend on particular forms of representational mapping between physical and digital objects. For example, there is quite strong evidence suggesting that, particularly with young children, if physical objects are made too 'realistic' they can actually prevent children learning about what the objects represent.

Other research reviewed here has also suggested that so-called 'really direct manipulation' (Fishkin et al 2000) may not be ideal for learning applications, where often the goal is to encourage the learner to reflect and abstract. This is borne out by research showing that 'transparent' or really easy-to-use interfaces sometimes lead to less effective problem solving. Sometimes more effort is required for learning to occur, not less. However, it would be wrong to conclude that interfaces for learning should be made difficult to use – the point is to channel the learner's attention and effort towards the goal or target of the learning activity, not to allow the interface to get in the way. In the same vein, Papert argued that allowing children to construct their own 'interface' (ie build robots or write programs) focused the child's attention on making their implicit knowledge explicit. Others (Marshall et al 2003) support this idea and argue that




effective learning should involve both expressive activity, where the tangible represents or embodies the learner's behaviour (physically or digitally), and exploratory activity, where the learner explores the model embodied in the tangible interface.

Finally, some implications for design and use of tangibles could be drawn from research on multiple representations in cognitive science. Ainsworth (Ainsworth 1999; Ainsworth et al 2002; Ainsworth and VanLabeke 2004) has carried out research on the use of multiple representations (eg text, graphics, animations) across a number of domains. She argues that the use of more than one representation can support learning in several different ways. For example, multiple representations can present complementary information, or they can constrain interpretations of each other, or they can support the identification of invariant aspects common to all representations and support abstraction. The research on the design of tangibles for learning has not so far drawn on research on multiple representations, but it is clearly potentially relevant and important.

5.2 IMPLICATIONS FOR POLICY AND PRACTICE

Many of the technologies reviewed here are still in early prototype form and not readily available for the classroom. However some of them are – Logo and Roamer have been used for many years now in primary classrooms – and some of them are potentially available now with a bit of programming effort (eg barcodes,

RFID tags). Even the augmented reality software is available as a public domain toolkit (www.hitl.washington.edu/research/shared_space/download/) and can be used together with ordinary cameras or webcams and projectors to create effective augmented reality demonstrations for children – as seen in the innovative work by Adrian Woolard and colleagues at the BBC Creative R&D. Data-logging and sensors have been used for many years in secondary school science. New portable technologies such as PDAs extend the flexibility of such systems to create potentially powerful mobile learning environments for use outside of the classroom.

Even if the technologies aren’t yet available, the pedagogy underlying these approaches can be used as a source for ideas in thinking about using ICT in teaching and learning various subjects. Especially in the early years there is a need to recognise that younger children may not understand the representational significance of the objects they are asked to work with in learning activities (whether computer-based or not). They need scaffolding in reasoning about the relationship between the properties of an object (or symbol on the screen) and the more abstract properties they represent.

It is hoped that teachers might also take inspiration from the whole idea of technology-enhanced learning moving beyond the desktop or classroom computer by, for example, making links between ICT-based activities and other more physical activities. For example, interactive whiteboards are appealing because teachers can use them in familiar ways for whole-class teaching. But they



have much more potential than just being slightly enhanced versions of PowerPoint. Children could be encouraged to interact actively with whiteboard sessions by collecting their own data for presentation (eg via mobile phones, digital cameras). Physical displays around the classroom might be linked to digital displays by, for example, camera phones or digital cameras. Children could be encouraged to simulate screen-based activities with physical models, either on paper or using 3D objects. Digital displays could be projected in more imaginative ways than just the traditional vertical screen. For example, projecting onto the floor would enable children to sit in a circle around the shared screen. The display could be combined with the use of physical objects and, using some 'wizardry', teachers might make some changes to the display by manipulating the mouse or keyboard 'behind-the-scenes'. Displays could also be projected onto tabletops, over, say, paper which children are working with. Such arrangements not only enable the links between computer-based and non-computer-based representations to be made more explicit for children, but the social or collaborative arrangement also changes in interesting ways.

In terms of policy, ICT is already integrated into teaching across the curriculum, but more encouragement might be given to teachers in terms of incorporating everyday technologies (eg mobile phones) into classroom-based ICT activities (rather than seeing them as competing with school-based activities). The importance of physical (kinaesthetic) and multisensory activities is also increasingly recognised, especially in the primary years. However this development is happening without regard to the use of ICT. This review provides

some suggestions, it is hoped, for seeing how physical and digital activities could be more integrated.

More research is needed on the benefits of tangibles for learning – so far the evaluation of these technologies has been rather scarce. More effort is also needed to translate some of the research prototypes into technologies and toolkits that teachers can use without too much technical knowledge. Teachers also need training in the use of non-traditional forms of ICT and how to incorporate them into teaching across the curriculum. And finally, new forms of assessment are needed to reflect the potential of these new technologies for learning.




GLOSSARY

Affordance a term first introduced by JJ Gibson, who took an ecological approach to the study of perception. The basic concept is that certain physical properties of objects (eg the surface of a table) 'naturally' suggest what they can be used for (eg supporting objects placed upon them)

Ambient intelligence a term often used to refer to computation that is embedded in the world around us. When the term 'intelligent' is used, it usually means automatic. An example is the use of light, heat or motion sensors in a room that may control heating or lighting depending on who is in the room or what they are doing

Ambient media similar to ambient intelligence, this refers to a range of media (eg sound, light) that can be used to automatically detect changes in the environment and alter other aspects of the environment accordingly

AR (Augmented Reality) this term refers to the use of digital information to augment physical interaction. An example is the projection of video displays onto 3D physical objects

Articulatory directness a concept from an analysis of 'direct manipulation' interfaces by Hutchins, Hollan and Norman (1986) which refers to the degree of directness between the behaviour of an input device and the resulting output

Augmented spaces this term refers to the digital augmentation (eg by video or audio projections) of physical environments (eg a physical desktop)

Augmented virtuality the use of, for example, video projections of physical spaces into 3D virtual environments

Barcode reader a device used to detect a barcode symbol

Body-centred geometry a concept from Papert's vision of Logo where children are taught to reason about geometry from their own actions in the physical world (eg turning in a circle)

Breakdown a concept borrowed from Heidegger referring to a break between paying attention to the object of a tool (eg the nail one is hammering) and the tool itself (eg the hammer)

Bridging analogy the use of an analogy to bridge between one representation and another, especially where the mapping between the two representations is not obvious to the learner – from Brown and Clement (1989)

Bug an error in a computer program

Calm technology a concept from Mark Weiser (1991) referring to the idea of technology disappearing into the background of the user's attentional focus

Command-line interface a term used to refer to computer systems where a programming language is needed to interact with the system – usually used to refer to systems before the advent of graphical user interfaces

Constructionism a term usually attributed to Seymour Papert referring to the educational benefits of the learner constructing a physical representation of an abstract concept, as in the Logo programming environment (see also constructivism)

Constructivism a concept usually attributed to Jean Piaget referring to the idea that learners actively construct their own knowledge (as opposed to passively assimilating knowledge from another)



Container in the context of tangible user interfaces this means the use of a control device to 'contain' some digital properties, as in 'pick-and-drop' interfaces

Debugging the process of discovering and eradicating errors in computer programs (see 'bug')

Declarative representation usually refers to the way in which factual knowledge (knowing that) is stored in human memory. Used in contrast to procedural knowledge (knowing how)

Digital manipulative a term coined by Mitchel Resnick and colleagues (1998) to refer to the digital augmentation of physical manipulatives such as Dienes blocks

Digital toys physical toys that can be manipulated to produce digital effects (eg make sounds, trigger animations on a computer screen)

Direct manipulation a concept coined in the mid-80s by Ben Shneiderman and others to refer to the use of physical input devices such as mice to interact with graphical displays. Also refers to a theory concerning direct physical-digital mappings in human-computer interaction (Hutchins et al 1986)

Drag-and-drop a term used to refer to an interface technique where one uses an input device (usually a mouse) to select and 'drag' a screen-based object from one location to another

Embodiment a term used in tangible computing to refer to the degree to which an interaction technique (eg manipulation of a digital object) carries with it an impression of being a physical (as opposed to digital) activity

Enactive a term used by Jerome Bruner to refer to a sensorimotor representation – eg a child who is physically counting out from an array of objects is said to be enacting the process of counting

Explicitation a term used to refer to the transformation of an implicit or tacit representation (eg procedural knowledge of counting) to one that is open to verbalisation and reflection

Fiducial markers special graphical symbols overlaid on physical displays (eg paper) that are used by vision-based systems to register location and identity in order to project augmented reality displays

GUI Graphical User Interface

HCI Human-Computer Interaction – an interdisciplinary research field focused on the design and evaluation of ICT systems

Head-mounted display a projection system worn by a user – can be a helmet that completely immerses the user or a see-through display (eg glasses) that allows the user to see both the real world and the projected world at the same time

ICT Information and Communication Technology

Input device a physical device used to interact with a computer system

Interactive whiteboard a display that uses a touchscreen for direct input with a stylus or other pointing device (also referred to as a 'smartboard')

LED Light Emitting Diode (eg the display elements on a digital watch)

Manipulative a term used to refer to the use of physical objects such as Dienes blocks in teaching mathematics



Mixed reality a term used to refer to the merging of physical and digital representations

Output device usually refers to a screen or monitor in traditional graphical user interfaces, but can also refer to audio speakers, force-feedback devices, tactile displays

Palmtop a small hand-held device usually with a touchscreen display that can be manipulated with a stylus (see PDA)

PDA Personal Digital Assistant (eg Palm, iPaq)

Pick-and-drop an interaction technique where information from one computer system is 'picked up' by a device (eg a pen or stylus) and 'dropped onto' another computer system – by analogy with 'drag-and-drop' techniques

Prehensile skills grasping skills needed to hold and manipulate objects with the hands

Present-at-hand a term borrowed from the philosopher Heidegger to refer to an attentional focus on the tool itself (eg a hammer) rather than the object of interest (eg the nail)

Readiness-to-hand a term borrowed from the philosopher Heidegger to refer to an attentional focus on the object of interest (eg the nail) rather than the tool being used (eg a hammer) (see 'present-at-hand')

RFID tags Radio Frequency Identification tags – eg security tags on store goods

Semantic directness a term used by Hutchins, Hollan and Norman (1986) to refer to the degree to which an interface metaphor (eg a wastebasket icon) resembles the operation the user has in mind (eg deleting a document)

Sensorimotor representation a term derived from Piaget's theory of intellectual development referring to a non-symbolic cognitive representation based in low-level perception and action rather than 'higher-level' cognition

Space-multiplexed with space-multiplexed input, each function to be controlled has a dedicated transducer, each occupying its own space. For example, a car has a brake, clutch, accelerator, steering wheel and gear stick, which are distinct, dedicated transducers each controlling a single specific task (see 'time-multiplexed')

Stimulus-response compatibility the extent to which what people perceive is consistent with the actions they need to take – eg the extent to which the arrangement of 'up' and 'down' arrow keys on a keyboard is spatially compatible with their meaning or effects on the screen rather than being side-by-side

Tangible bits a term used to refer to tangible (physical, graspable) digital information

Tangible computing a term used to refer to tangible (physical, graspable) computing

Tangible media a term used to refer to tangible (physical, graspable) media

Techno-romantic the view that some uses of ICT can transform learning in ways not possible without technology

Time-multiplexed time-multiplexed input uses one device to control different functions at different points in time. For instance, the mouse uses time-multiplexing as it controls functions as diverse as menu selection, navigation using the scroll widgets, pointing, and activating 'buttons' (see 'space-multiplexed')


Token a term used in tangible user interfaces to refer to a physical representation (instantiation) of a digital object

Touchscreen a screen which can detect physical contact (eg the touch of a finger) with its surface

Transducer something that transforms one form of information into another

Transparent interface used to refer to interfaces that are so easy to use they are 'obvious' – usually contrasted with 'opaque' or hard-to-use interfaces

TUI Tangible User Interface (see 'GUI – Graphical User Interface')

Turtle a term used in Logo programming to refer to a (usually screen-based) cursor that the learner 'programs' to move around. It can also be a physical robot, as in the Roamer system

Turtle geometry the use of Logo to create (and teach about) geometric shapes

Ubiquitous computing a term used to refer to technology that has become so embedded in the everyday world it 'disappears' – usually contrasted with desktop computing

Ultrasonic tracking the use of ultrasound to track the position of objects

Virtual reality 3D graphical animations usually running in a completely seamless 3D 'world' or 'environment'. VR systems can be desktop-based or immersive. Immersive VR may involve the wearing of a head-mounted display or the user may be within a physical 'cave' where the 3D virtual world is projected onto the walls, floor and ceiling

Wearable computing technology (eg RFID tags) that is worn on the person or sewn into the fabric of clothing

BIBLIOGRAPHY

Abelson, H and diSessa, A (1981). Turtle Geometry: The Computer as a Medium for Exploring Mathematics. Cambridge, MA: MIT Press

Ackermann, E (1996). Perspective-taking and object construction: two keys to learning, in: Y Kafai and M Resnick (eds), Constructionism in Practice: Designing, Thinking and Learning in a Digital World (pp25-35). Mahwah, NJ: Lawrence Erlbaum Associates

Ackermann, E (1999). Pretense, models and machines, in: J Bliss, P Light and R Saljo (eds), Learning Sites: Social and Technological Contexts for Learning (pp144-154). Elsevier

Ainsworth, S (1999). The function of multiple representations. Computers & Education, 33, 131-152

Ainsworth, S, Bibby, P and Wood, D (2002). Examining the effects of different multiple representational systems in learning primary mathematics. Journal of the Learning Sciences, 11, 25-61

Ainsworth, S and VanLabeke, N (2004). Multiple forms of dynamic representation. Learning and Instruction, 14, 241-255

Alibali, MW and DiRusso, AA (1999). The function of gesture in learning to count: more than keeping track. Cognitive Development, 14(1), 37-56

Ananny, M (2002). Supporting children's collaborative authoring: practicing written literacy while composing oral texts. Proceedings of the Conference on Computer Support for Collaborative Learning, Boulder, CO, pp595-596

Anderson, JR (1993). Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum Associates



Anderson, JR and Lebiere, C (1998). The Atomic Components of Thought. Mahwah, NJ: Lawrence Erlbaum Associates

Back, M, Cohen, J, Gold, R, Harrison, S and Minneman, S (2001). Listen Reader: an electronically augmented paper-based book. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'01), Seattle, WA, ACM Press, pp23-29

Barry, B, Davenport, G and McGuire, D (2000). Storybeads: a wearable for story construction and trade. Proceedings of the IEEE International Workshop on Networked Appliances

Billinghurst, M (2002). Augmented reality in education. New Horizons for Learning

Billinghurst, M and Kato, H (2002). Collaborative augmented reality. Communications of the ACM, 45(7), 64-70

Borovoy, R, McDonald, M, Martin, F and Resnick, M (1996). Things that blink: computationally augmented name tags. IBM Systems Journal, 35(3), 488-495

Brave, S, Ishii, H and Dahley, A (1998). Tangible interfaces for remote collaboration and communication. Proceedings of the Conference on Computer Supported Cooperative Work (CSCW'98), ACM Press

Brown, D and Clement, J (1989). Overcoming misconceptions via analogical reasoning: factors influencing understanding in a teaching experiment. Instructional Science, 18, 237-261

Bruner, J (1966). Toward a Theory of Instruction. New York: WW Norton

Bruner, JS, Olver, RR and Greenfield, PM (1966). Studies in Cognitive Growth. New York: John Wiley & Sons

Camarata, K, Do, EY, Gross, M and Johnson, BR (2002). Navigational blocks: tangible navigation of digital information. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'02 Extended Abstracts), Seattle, WA, pp752-754

Carroll, L (1962/1876). The Hunting of the Snark. Bramhall House

Chao, SJ, Stigler, JW and Woodward, JA (2000). The effects of physical materials on kindergartners' learning of number concepts. Cognition and Instruction, 18(3), 285-316

Church, RB and Goldin-Meadow, S (1986). The mismatch between gesture and speech as an index of transitional knowledge. Cognition, 23, 43-71

Colella, V (2000). Participatory simulations: building collaborative understanding through immersive dynamic modelling. Journal of the Learning Sciences, 9(4), 471-500

Cox, M, Abbott, C, Webb, M, Blakeley, B, Beauchamp, T and Rhodes, V (2004). A Review of the Research Literature Relating to ICT and Attainment. DfES/Becta

Cox, M, Webb, M, Abbott, C, Blakeley, B, Beauchamp, T and Rhodes, V (2004). A Review of the Research Evidence Relating to ICT Pedagogy. DfES/Becta

DeLoache, JS (1987). Rapid change in the symbolic functioning of very young children. Science, 238, 1556-1557

DeLoache, JS (2004). Becoming symbol-minded. Trends in Cognitive Sciences, 8(2), 66-70

DeLoache, JS, Pierroutsakos, SL, Uttal, DH, Rosengren, KS and Gottlieb, A (1998). Grasping the nature of pictures. Psychological Science, 9(3), 205-210


DeLoache, JS, Uttal, DH and Pierroutsakos, SL (1998). The development of early symbolization: educational implications. Learning and Instruction, 8(4), 325-339

DeLoache, JS, Uttal, DH and Rosengren, KS (2004). Scale errors offer evidence for a perception-action dissociation early in life. Science, 304, 1027-1029

Dienes, ZP (1964). Building Up Mathematics (2nd ed). London: Hutchinson International

Dourish, P (2001). Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT Press

Druin, A, Stewart, J, Proft, D, Bederson, B and Hollan, J (1997). KidPad: a design collaboration between children, technologists and educators. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'97), Atlanta, GA, ACM Press, pp463-470

Edmans, JA, Gladman, J, Walker, M, Sunderland, A, Porter, A and Stanton Fraser, D (2004). Mixed reality environments in stroke rehabilitation: development of rehabilitation tools. Proceedings of the 4th International Conference on Disability, Virtual Reality and Associated Technologies, Oxford

Fishkin, KP (2004). A taxonomy for and analysis of tangible interfaces. Personal and Ubiquitous Computing, 8(5), 347-358

Fishkin, KP, Gujar, AU, Harrison, BL, Moran, TP and Want, R (2000). Embodied user interfaces for really direct manipulation. Communications of the ACM, 43(9), 74-80

Fitzmaurice, GW, Ishii, H and Buxton, W (1995). Bricks: laying the foundations for graspable user interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'95), Denver, CO, ACM Press, pp442-449

Fraser, M, Stanton, D, Ng, K, Benford, S, O'Malley, C, Bowers, J et al (2003). Assembling history: achieving coherent experiences with diverse technologies. Proceedings of the European Conference on Computer Supported Cooperative Work (ECSCW 2003)

Frei, P, Su, V, Mikhak, B and Ishii, H (2000). Curlybot: designing a new class of computational toys. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'00), The Hague, Netherlands, ACM Press, pp129-136

Ghali, A, Bayomi, S, Green, J, Pridmore, T and Benford, S (2003). Vision-mediated interaction with the Nottingham caves. Proceedings of the SPIE Conference on Computational Imaging

Gibson, JJ (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin

Goldin-Meadow, S (2003). Hearing Gesture: How Our Hands Help Us Think

Golightly, D and Gilmore, DJ (1997). Breaking the rules of direct manipulation. Proceedings of Human-Computer Interaction – INTERACT '97, Chapman & Hall, pp156-163

Gorbet, MG, Orth, M and Ishii, H (1998). Triangles: tangible interface for manipulation and exploration of digital information topography. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Los Angeles, CA, ACM Press/Addison-Wesley, pp49-56

Goswami, U (2004). Neuroscience and education. British Journal of Educational Psychology, 74, 1-14

BIBLIOGRAPHY

Green, J, Schnädelbach, H, Koleva, B, Benford, S, Pridmore, T and Medina, K (2002). Camping in the digital wilderness: tents and flashlights as interfaces to virtual worlds. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’02), Minneapolis, Minnesota, ACM Press, pp780-781

Harrison, C, Comber, C, Fisher, T, Haw, K, Lewin, C, Lunzer, E et al (2002). ImpaCT2: The Impact of Information and Communication Technologies on Pupil Learning and Attainment (ICT in Schools Research and Evaluation Series No 7). DfES/Becta

Heidegger, M (1996). Being and Time. Albany, New York: State University of New York Press

Holmquist, LE, Redström, J and Ljungstrand, P (1999). Token-based access to digital information. Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing (HUC’99), Springer, pp234-245

Hughes, M (1986). Children and Number: Difficulties in Learning Mathematics. Oxford: Blackwell

Hutchins, E, Hollan, JD and Norman, DA (1986). Direct manipulation interfaces, in: DA Norman and SW Draper (eds), User Centered System Design (pp87-124). Hillsdale, NJ: Lawrence Erlbaum Associates

Ishii, H and Ullmer, B (1997). Tangible bits: towards seamless interfaces between people, bits and atoms. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’97), pp234-241

Karmiloff-Smith, A (1992). Beyond Modularity. Cambridge, MA: MIT Press

Koleva, B, Benford, S, Ng, KH and Rodden, T (2003). A framework for tangible user interfaces. Proceedings of the Physical Interaction (PI03) Workshop on Real World User Interfaces, Mobile HCI Conference 2003, Udine, Italy

Lave, J and Wenger, E (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press

Leslie, AM (1987). Pretense and representation: the origins of ‘theory of mind’. Psychological Review, 94, 412-426

Lifton, J, Broxton, M and Paradiso, J (2003). Distributed sensor networks as sensate skin. Proceedings of the 2nd IEEE International Conference on Sensors (Sensors 2003), Toronto, Canada, IEEE, pp743-747

Luckin, R, Connolly, D, Plowman, L and Airey, S (2003). Children’s interactions with interactive toy technology. Journal of Computer Assisted Learning, 19, 165-176

Luff, P, Tallyn, E, Sellen, A, Heath, C, Frohlich, D and Murphy, R (2003). User Studies, Content Provider Studies and Design Concepts (No IST-2000-26130/D4&5)

MacKenzie, CL and Iberall, T (1994). The Grasping Hand. Amsterdam: North-Holland, Elsevier Science

MacKinnon, KA, Yoon, S and Andrews, G (2002). Using ‘Thinking Tags’ to improve understanding in science: a genetics simulation. Proceedings of the Conference on Computer Support for Collaborative Learning (CSCL 2002), Boulder, CO

Marshall, P, Price, S and Rogers, Y (2003). Conceptualising tangibles to support learning. Proceedings of Interaction Design and Children (IDC’03), Preston, UK, ACM Press

Martin, F and Resnick, M (1993). LEGO/Logo and electronic bricks: creating a scienceland for children, in: D Ferguson (ed), Advanced Educational Technologies for Mathematics and Science. New York: Springer

Martin, T and Schwartz, DL (in press). Physically distributed learning: adapting and reinterpreting physical environments in the development of fraction concepts. Cognitive Science

Mazalek, A, Davenport, G and Ishii, H (2002). Tangible viewpoints: a physical approach to multimedia stories. Proceedings of the ACM Multimedia Conference, Juan-les-Pins, France, ACM Press

McNerney, T (2004). From turtles to tangible programming bricks: explorations in physical language design. Personal and Ubiquitous Computing, 8(5), 326-337

Milgram, P and Kishino, F (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information Systems, E77-D(12)

Montessori, M (1912). The Montessori Method. New York: Frederick A Stokes

Montessori, M (1917). The Advanced Montessori Method. New York: Frederick A Stokes

Norman, DA (1988). The Psychology of Everyday Things. New York: Basic Books

Norman, DA (1999). Affordances, conventions and design. Interactions, 6(3), 38-43

O’Malley, C and Stanton, D (2002). Tangible technologies for collaborative storytelling. Proceedings of the 1st European Conference on Mobile and Contextual Learning (MLearn 2002), pp3-7

O’Malley, C (1992). Designing computer systems to support peer learning. European Journal of Psychology of Education, 7(4), 339-352

Papert, S (1980). Mindstorms: Children, Computers and Powerful Ideas. Brighton: Harvester

Piaget, J (1953). How children form mathematical concepts. Scientific American, 189(5), 74-79

Pierroutsakos, SL and DeLoache, JS (2003). Infants’ manual exploration of pictured objects varying in realism. Infancy, 4, 141-156

Piper, B, Ratti, C and Ishii, H (2002). Illuminating Clay: a 3-D tangible interface for landscape analysis. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’02), Minneapolis, Minnesota, USA, ACM Press, pp355-362

Price, S, Rogers, Y, Scaife, M, Stanton, D and Neale, H (2003). Using ‘tangibles’ to promote novel forms of playful learning. Interacting with Computers, 15(2), 169-185

Peirce Edition Project (ed) (1998). The Essential Peirce: Selected Philosophical Writings, Vol 2 (1893-1913). Bloomington and Indianapolis: Indiana University Press

Raffle, H, Joachim, MW and Tichenor, J (2003). Super Cilia Skin: an interactive membrane. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI2003 Extended Abstracts), ACM Press, pp808-809

Raffle, HS, Parkes, AJ and Ishii, H (2004). Topobo: a constructive assembly system with kinetic memory. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’04), Vienna, Austria, ACM Press, pp647-654

Rekimoto, J (1997). Pick-and-drop: a direct manipulation technique for multiple computer environments. Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology (UIST’97), ACM Press, pp31-39

Resnick, M (1993). Behavior construction kits. Communications of the ACM, 36(7), 65-71

Resnick, M, Berg, R and Eisenberg, M (2000). Beyond black boxes: bringing transparency and aesthetics back to scientific investigation. Journal of the Learning Sciences, 9(1), 7-30

Resnick, M, Martin, F, Sargent, R and Silverman, B (1996). Programmable bricks: toys to think with. IBM Systems Journal, 35(3), 443-452

Resnick, M, Martin, F, Berg, R, Borovoy, R, Colella, V, Kramer, K et al (1998). Digital manipulatives: new toys to think with. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’98), pp281-287

Resnick, M and Wilensky, U (1997). Diving into complexity: developing probabilistic decentralized thinking through role-playing activities. Journal of the Learning Sciences, 7(2)

Rieser, JJ, Garing, AE and Young, MF (1994). Imagery, action and young children’s spatial orientation: it’s not being there that counts, it’s what one has in mind. Child Development, 65(5), 1262-1278

Rogers, Y, Price, S, Randell, C, Stanton Fraser, D, Weal, M and Fitzpatrick, G (in press). Ubi-learning: integrating indoor and outdoor learning experiences. Communications of the ACM

Rogers, Y, Scaife, M, Gabrielli, S, Smith, H and Harris, E (2002). A conceptual framework for mixed reality environments: designing novel learning activities for young children. Presence: Teleoperators and Virtual Environments, 11(6), 677-686

Rogers, Y, Scaife, M, Muller, H, Randell, C, Moss, A, Taylor, I et al (2002). Things aren’t what they seem to be: innovation through technology inspiration. Proceedings of Designing Interactive Systems, London, 25-28 June, pp373-379

Ryokai, K, Marti, S and Ishii, H (2004). I/O Brush: drawing with everyday objects as ink. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’04), Vienna, Austria, ACM Press, pp303-310

Ryokai, K and Cassell, J (1999). StoryMat: a play space for collaborative storytelling. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’99), Pittsburgh, PA, ACM Press, pp272-273

Sedig, K, Klawe, M and Westrom, M (2001). The role of interface manipulation style and scaffolding on cognition and concept learning in learnware. ACM Transactions on Computer-Human Interaction (ToCHI), 8(1), 34-59

Shneiderman, B (1983). Direct manipulation: a step beyond programming languages. IEEE Computer, 16(8), 57-69

Simon, T (1987). Claims for LOGO: what should we believe and why? In J Rutkowska and C Crook (eds), Computers, Cognition and Development (pp115-133). Chichester: John Wiley & Sons Ltd

Smith, DC, Irby, CH, Kimball, RB, Verplank, WH and Harslem, EF (1982). Designing the Star user interface. Byte, 7(4), 242-282

Soloway, E, Guzdial, M and Hay, K (1994). Learner-centered design: the next challenge for HCI. ACM Interactions

Stanton, D, Bayon, V, Abnett, C, Cobb, S and O’Malley, C (2002). The effects of tangible interfaces on children’s collaborative behaviour. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’02)

Stanton, D, Bayon, V, Neale, H, Benford, S, Cobb, S, Ingram, R et al (2001a). Classroom collaboration in the design of tangible interfaces for storytelling. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Seattle, Washington, ACM Press, pp482-489

Stanton, D, Bayon, V, Neale, H, Benford, S, Cobb, S, Ingram, R et al (2001b). Classroom collaboration in the design of tangible interfaces for storytelling. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI’01), pp482-489

Stanton, D, O’Malley, C, Ng, KH, Fraser, M and Benford, S (2003). Situating historical events through mixed reality: adult-child interactions in the Storytent, in: B Wasson, S Ludvigsen and U Hoppe (eds), Designing for Change in Networked Learning Environments (Vol 2, pp293-302)

Stigler, JW (1984). Mental abacus: the effect of abacus training on Chinese children’s mental calculation. Cognitive Psychology, 16(2), 145-176

Tallyn, E, Stanton, D, Benford, S, Rowland, D, Kirk, D, Paxton, M et al (2004). Introducing eScience to the classroom. Proceedings of the UK e-Science All Hands Meeting, EPSRC, pp1027-1029

Tapscott, D (1998). Growing Up Digital: The Rise of the Next Generation. New York: McGraw Hill

Ullmer, B and Ishii, H (1997). The metaDESK: models and prototypes for tangible user interfaces. Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology (UIST’97), ACM Press, pp223-232

Ullmer, B and Ishii, H (2000). Emerging frameworks for tangible user interfaces. IBM Systems Journal, 39(3&4), 915-931

Underkoffler, J and Ishii, H (1999). Urp: a luminous-tangible workbench for urban planning and design. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’99), ACM Press, pp386-393

Underwood, JDM and Underwood, G (1990). Computers and Learning: Helping Children Acquire Thinking Skills (softback ed). Oxford: Basil Blackwell

Uttal, DH, Scudder, KV and DeLoache, JS (1997). Manipulatives as symbols: a new perspective on the use of concrete objects to teach mathematics. Journal of Applied Developmental Psychology, 18(1), 37-54

Wegerif, R (2002). Literature Review in Thinking Skills, Technology and Learning (No 2). Futurelab

Weiser, M (1991). The computer for the 21st century. Scientific American, 265(3), 94-104

Weiser, M and Brown, JS (1996). The Coming of Age of Calm Technology. Retrieved 16 September 2004, from www.ubiq.com/hypertext/weiser/acmfuture2endnote.htm

Wexler, M (2002). More to 3-D vision than meets the eye. Trends in Cognitive Sciences, 6(12)

Winograd, T and Flores, F (1986). Understanding Computers and Cognition. Norwood, NJ: Ablex

Woolard, A (2004). Evaluating augmented reality for the future classroom. Proceedings of the Expert Technology Seminar: Educational Environments of Tomorrow. Becta

Acknowledgements

We would like to thank Keri Facer for invaluable comments and encouragement in the preparation of this review. We also gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC) and the European Commission (4th and 5th Framework Programmes) in funding some of the work reviewed here (eg KidStory, Shape, Equator).

About Futurelab

Futurelab is passionate about transforming the way people learn. Tapping into the huge potential offered by digital and other technologies, we are developing innovative learning resources and practices that support new approaches to education for the 21st century.

Working in partnership with industry, policy and practice, Futurelab:

• incubates new ideas, taking them from the lab to the classroom
• offers hard evidence and practical advice to support the design and use of innovative learning tools
• communicates the latest thinking and practice in educational ICT
• provides the space for experimentation and the exchange of ideas between the creative, technology and education sectors.

A not-for-profit organisation, Futurelab is committed to sharing the lessons learnt from our research and development in order to inform positive change to educational policy and practice.

Futurelab
1 Canons Road
Harbourside
Bristol BS1 5UH
United Kingdom

tel +44 (0)117 915 8200
fax +44 (0)117 915
[email protected]

www.futurelab.org.uk

Registered charity 1113051

Creative Commons

© Futurelab 2006. All rights reserved; Futurelab has an open access policy which encourages circulation of our work, including this report, under certain copyright conditions - however, please ensure that Futurelab is acknowledged. For full details of our Creative Commons licence, go to www.futurelab.org.uk/open_access.htm

Disclaimer

These reviews have been published to present useful and timely information and to stimulate thinking and debate. It should be recognised that the opinions expressed in this document are personal to the authors and should not be taken to reflect the views of Futurelab. Futurelab does not guarantee the accuracy of the information or opinion contained within the review.

This publication is available to download from the Futurelab website – www.futurelab.org.uk/research/lit_reviews.htm

Also from Futurelab:

Literature Reviews and Research Reports
Written by leading academics, these publications provide comprehensive surveys of research and practice in a range of different fields.

Handbooks
Drawing on Futurelab's in-house R&D programme as well as projects from around the world, these handbooks offer practical advice and guidance to support the design and development of new approaches to education.

Opening Education Series
Focusing on emergent ideas in education and technology, this series of publications opens up new areas for debate and discussion.

We encourage the use and circulation of the text content of these publications, which are available to download from the Futurelab website – www.futurelab.org.uk/research. For full details of our open access policy, go to www.futurelab.org.uk/open_access.htm.

FUTURELAB SERIES

REPORT 12

ISBN: 0-9548594-2-1
Futurelab © 2004