Evaluating Mobile Learning



DESCRIPTION

Presentation given at Mobile Learning Early Researcher Symposium, Learning Lab, University of Wolverhampton.

TRANSCRIPT

Page 1: Evaluating Mobile Learning


Mobile Learning Evaluation

Giasemi Vavoula

University of Leicester

Page 2: Evaluating Mobile Learning


Overview

Evaluation (session) in context

Evaluation context

Part 1: What do we evaluate? (a framework)

Part 2: How do we evaluate it? (methods and tools)

Part 3: Practical & ethical considerations

Identifying assumptions

Page 3: Evaluating Mobile Learning


Evaluation (session) in Context

Evaluation

Research

Publishing

Ethics

Theorising

Page 4: Evaluating Mobile Learning


Evaluation Context

Part 1. What do we evaluate?

M3 Evaluation Framework (Vavoula & Sharples 2009)

Part 2. How do we evaluate it?

Methods and tools

Case study of evaluation methods and tools within M3 Framework

Part 3. Practical & Ethical considerations

Who evaluates and who is evaluated

Where

When

Page 5: Evaluating Mobile Learning


Part 1. What do we evaluate?

technology

experience

institutional practice

personal practice

Page 6: Evaluating Mobile Learning


Part 1. M3 Evaluation at three levels

Micro level: user’s experience of the technology

Usability

Utility of functions

Meso level: user’s learning/educational experience

Cognitive learning

Breakthroughs

Breakdowns

Macro level: impact on institutional & personal learning/teaching practice

Appropriation of new technology: unexpected and envisaged use

New practices – further requirements

Page 7: Evaluating Mobile Learning


Part 1. M3 Evaluation in three stages

User’s expectations (data collection)

User’s actual experience (data collection)

Expectations – reality gaps (data analysis)

Page 8: Evaluating Mobile Learning


Part 1. Evaluation at 3 levels, in 3 stages, throughout project lifecycle

[Diagram: evaluation at the micro, meso and macro levels mapped onto the project lifecycle (analyse requirements – design – implement – deploy). Notes on the diagram: the technology must be robust enough to support a full user trial, and deployed long enough to assess impact.]

Page 9: Evaluating Mobile Learning


Part 2. How do we evaluate?

Typical process:

Collect data

Analyse data

Answer/refine research questions

Page 10: Evaluating Mobile Learning


Part 2. Case Study: Myartspace

Hand out phones

Explore museum

Recap learning task etc.

Log on / phone training

Share / present

Example gallery

Collect

Page 11: Evaluating Mobile Learning


Part 2. Case study in the greater scheme of things

Settings compared (left to right, with vagueness increasing): traditional classroom | museum school visit | general museum visit | mobile

Learning tools: familiar, set | familiar, set | unpredictable | unpredictable
Learning method + activities: pre-determined | pre-determined | unknown – some idea | unknown
Learning objectives + outcomes: pre-set, external | pre-set, external | unknown | unknown
Social setting: fixed | known | unpredictable | unpredictable
Location + space layout: fixed | known but not standard | known but not standard | unpredictable

Page 12: Evaluating Mobile Learning


Part 2. Collect data @ all levels

Data sources

Stage 1 – expectations
• Design heuristics
• System documentation
• Experience documentation
• Promotion materials
• Minutes of project meetings
• Project proposal
• Press coverage
• Scoping study / literature review
• Stakeholder/user interviews & focus groups
• …

Stage 2 – reality
• Evaluation outcomes / requirements specification
• User observations
• Stakeholder/user interviews & focus groups
• User questionnaires
• User-created artifacts
• Stakeholder consultation workshops
• Heuristic evaluation
• …
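The framework can be read as a matrix of data sources indexed by level and stage. As a minimal sketch in Python, one might index an evaluation plan this way; the source names are taken from the lists above, but the particular allocation shown is illustrative, not the case study's actual plan:

    from collections import defaultdict

    plan = defaultdict(list)   # (level, stage) -> data sources

    plan[("micro", "expectations")] += ["design heuristics", "system documentation"]
    plan[("micro", "reality")] += ["heuristic evaluation reports", "user observations"]
    plan[("meso", "expectations")] += ["teacher interviews", "student questionnaires"]
    plan[("meso", "reality")] += ["lesson observations", "student focus groups"]
    plan[("macro", "expectations")] += ["project proposal", "promotion materials"]
    plan[("macro", "reality")] += ["press coverage", "stakeholder interviews"]

    # Stage 3 (gap analysis) at each level draws on both of the stages above
    for level in ("micro", "meso", "macro"):
        sources = plan[(level, "expectations")] + plan[(level, "reality")]
        print(f"{level} gap analysis draws on: {', '.join(sources)}")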

Page 13: Evaluating Mobile Learning


Part 2. Collect data @ micro level

Method: Heuristic Evaluation

Collect data re expectations

Established design heuristics

Collect data re reality

Experts undertaking heuristic evaluation

Analyse gaps

Analysis of expert reports and production of (re)design recommendations

Page 14: Evaluating Mobile Learning


Part 2. Collect data @ micro level

Method: Technical Testing

Collect data re expectations

Data supplied by system requirements

Collect data re reality

System performance test outcomes

Analyse gaps

Comparison of performance data against requirements
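As a hedged sketch of what this gap analysis can look like in practice: compare the worst observed value for each metric against the stated requirement. The metrics, thresholds and measurements below are invented for illustration and are not from the Myartspace requirements:

    requirements = {               # metric -> maximum acceptable value (seconds)
        "photo_upload_seconds": 10.0,
        "login_seconds": 5.0,
    }

    measurements = {               # metric -> observed values from test runs
        "photo_upload_seconds": [7.2, 9.8, 12.4],
        "login_seconds": [3.1, 4.0, 4.6],
    }

    for metric, limit in requirements.items():
        worst = max(measurements[metric])
        verdict = "meets requirement" if worst <= limit else "gap: exceeds requirement"
        print(f"{metric}: worst observed {worst}s vs limit {limit}s -> {verdict}")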

Page 15: Evaluating Mobile Learning


Part 2. Collect data @ micro level

Method: Full-scale user trial

Collect data re expectations

Examine system documentation (Teacher’s Pack and Lesson Plans, online help) for descriptions of functionality

Interview teacher prior to lesson to assess level of knowledge and expectations for functionality

Observe training sessions at museum and school to document how functionality is described to teachers/students.

Student questionnaires regarding expectations of system functionality in forthcoming lesson

Collect data re reality

Observe lesson to establish actual teacher and student experience of functionality

Interview teacher after the lesson to clarify experience of functionality

Questionnaire and focus groups with students after the lesson to capture experience of functionality

Analyse gaps

Capture expectations-reality gaps in terms of user experience of functionality through

• reflective interpretation of documentation analysis in the light of observations
• interviews and focus groups with teachers/students
• critical incident analysis with students

Page 16: Evaluating Mobile Learning


Part 2. Collect data @ meso level

Method: Full-scale user trial

Collect data re expectations

Analyse description of educational experience based on Teacher’s Pack and Lesson Plans

Interview teachers and museum educators prior to lessons about what they have planned for the students’ learning experience

Observe teachers and museum educators while presenting learning experience to students in the classroom/museum

Student questionnaires regarding expectations of learning experience in forthcoming lesson

Collect data re reality

Observe educational experience in museum/classroom

• Note critical incidents that show new forms of learning or educational interaction
• Note breakdowns

Interviews/focus groups with teachers, museum educators, students on educational experience in museum/classroom

Analyse gaps

Capture expectations-reality gaps in terms of educational experience through

• reflective interpretation of documentation analysis and observations
• interviews/focus groups with teachers, students, museum educators
• critical incident analysis with students

Page 17: Evaluating Mobile Learning


Part 2. Collect data @ macro level

Method: Full-scale user trial

Collect data re expectations

Analyse descriptions in service promotion materials, original proposal, minutes of early project meetings

Interviews with stakeholders to elicit initial expectations for impact of service

Collect data re reality

Review of press coverage and interviews with stakeholders to document impact/transformations effected by the service

Analyse gaps

Reflective analysis of expectations-reality gaps in terms of service impact

Page 18: Evaluating Mobile Learning


Part 2. Collect data @ all levels

Method: Various (requirements analysis)

Collect data re expectations

Scoping study of previous projects and related recommendations

Consultation workshop on ‘User Experience’ to establish requirements

Collect data re reality

Data supplied by evaluation analysis

Analyse gaps

Workshop to finalise educational and user requirements

Revisions of requirements in light of evaluation findings

Page 19: Evaluating Mobile Learning


Part 2. Example of data analysis

Items collected (class total / group average):
Photographs: 364 / 33
Sounds: 121 / 11
Written comments: 77 / 7
Collected objects: 75 / 7
TOTAL: 637 / 58

“A student can effectively process 5-10 items during a single post-visit lesson”

“It has a code – I want to take my own picture”

“How will I know what this photo is about?”

“Expect to be able to record what pictures are of”
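A rough arithmetic reading of the figures above, as a sketch only: the number of groups is inferred from the totals rather than reported on the slide.

    class_total = 637            # items collected by the whole class (table above)
    group_average = 58           # average items collected per group (table above)
    processable = (5, 10)        # items a student can effectively process per post-visit lesson

    implied_groups = round(class_total / group_average)   # roughly 11 groups
    lessons_low = group_average / processable[1]          # if 10 items fit in one lesson
    lessons_high = group_average / processable[0]         # if only 5 items fit in one lesson

    print(f"Implied number of groups: about {implied_groups}")
    print(f"Post-visit lessons needed per group: {lessons_low:.1f} to {lessons_high:.1f}")

On these assumptions a group’s collection would take several post-visit lessons to work through, which is the gap the later analysis slides pick up.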


Page 21: Evaluating Mobile Learning


Part 2. Example of data analysis

[Diagram tracing findings across the micro, meso and macro levels: creating and collecting items is quick and easy; children enjoy the creativity and sense of ownership in creating their own content; the system does not support annotating collected items; frustration / confusion; read label into the phone after each photo; change to system to support photo annotation.]


Page 23: Evaluating Mobile Learning


Part 2. Example of data analysis

[Diagram tracing findings across the micro, meso and macro levels: creating and collecting items is quick and easy; decomposing collected content takes longer; teachers change their practice to do more than one post-visit lesson; enforce upper limit on number of collected items; make website simpler and quicker to use; educate students to regulate collecting.]

Page 24: Evaluating Mobile Learning


Part 3. Beyond the case study

Settings compared (left to right, with vagueness increasing): traditional classroom | museum school visit | general museum visit | mobile

Learning tools: familiar, set | familiar, set | unpredictable | unpredictable
Learning method + activities: pre-determined | pre-determined | unknown – some idea | unknown
Learning objectives + outcomes: pre-set, external | pre-set, external | unknown | unknown
Social setting: fixed | known | unpredictable | unpredictable
Location + space layout: fixed | known but not standard | known but not standard | unpredictable

Page 25: Evaluating Mobile Learning


Part 3. More to consider

Practical and ethical considerations of:

Where, when and how do we collect data

Who evaluates and who is evaluated

Whatever happened to learning outcomes?

Page 26: Evaluating Mobile Learning


Part 3. Where / when / how

Page 27: Evaluating Mobile Learning


Part 3. Where / when / how

Roto et al., 2004

Page 28: Evaluating Mobile Learning


Part 3. Where / when / how

Technology-based solutions

ASL MobileEye eye tracker (Wessel et al. 2007; Mayr et al. 2009)

Constraints (other than the obvious…):

Limited temporal and spatial accuracy (short fixations may be missed; tricky to calibrate fixation distance)

Laborious data analysis

Can’t infer cognitive processes…
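A small sketch of why limited temporal accuracy matters for analysis: with a roughly 30 Hz mobile eye tracker, fixations spanning only a couple of samples are unreliable and tend to be dropped. The sampling rate, threshold and fixation records below are hypothetical and are not the ASL MobileEye toolchain:

    SAMPLE_MS = 33        # approx. sampling interval of a 30 Hz mobile eye tracker (assumed)
    MIN_SAMPLES = 3       # fixations spanning fewer samples than this are treated as unreliable

    # (start_ms, duration_ms, target) -- hypothetical fixation records
    fixations = [(0, 40, "exhibit label"), (120, 250, "object"), (500, 60, "peer")]

    kept = [f for f in fixations if f[1] >= MIN_SAMPLES * SAMPLE_MS]
    missed = [f for f in fixations if f[1] < MIN_SAMPLES * SAMPLE_MS]

    print("kept:", kept)        # longer fixations survive
    print("missed:", missed)    # short glances fall below the temporal resolution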

Page 29: Evaluating Mobile Learning


Part 3. Where / when / how

‘Cooperative Inquiry’-based solutions (Hsi 2008)

Learner accounts (diaries, questionnaires, post-interviews, attitude surveys)

Constraints:

Accuracy of recall

Post-rationalisation

Concern about projected image

Fragmentation of learning

Page 30: Evaluating Mobile Learning


Part 3. Where / when / how

Triangulation is ever so important: mixed methods

Validate, and also

Capture different perspectives on: video, audio, observation notes, learner-created artifacts, screenshots, interview transcripts…

Constraints: synchronisation, and converting into meaningful narratives

Smith et al., 2007
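A minimal sketch of the synchronisation problem: records from different sources carry their own clocks and have to be normalised to a common timeline before they can be woven into a narrative. The clock offsets and records below are invented for illustration:

    clock_offset_s = {"video": 0, "observer_notes": -12, "phone_log": 4}   # drift per source

    records = [   # (source, source-clock timestamp in seconds, event)
        ("video", 305, "group gathers at a display case"),
        ("observer_notes", 320, "students discuss which object to photograph"),
        ("phone_log", 298, "photo captured, object code entered"),
    ]

    # normalise every record to the reference (video) clock, then sort into one timeline
    timeline = sorted((t - clock_offset_s[src], src, event) for src, t, event in records)
    for t, src, event in timeline:
        print(f"{t:>4}s  [{src}]  {event}")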

Page 31: Evaluating Mobile Learning


Part 3. Where / when / how

Mobile technology translates (most often) to personal technology

Are learners willing to be monitored? How much of their privacy will they unveil? What if they’re under-age?

Is it OK to monitor everything? How much do we really need to know?

Even if they agree, is it easy to safeguard personal data? What are best dissemination practices?

Will users cooperate in practice? E.g. synchronise as and when needed?

Page 32: Evaluating Mobile Learning


Part 3. Who evaluates / is evaluated

“Will users cooperate in practice?”

Users/participants as co-researchers
• Who defines the agenda?
• Ethics?
• Capacity?
• Commitment?

Page 33: Evaluating Mobile Learning


Part 3. Learning outcomes

Assessing learning processes and outcomes

Classroom: well-established assessment methods (essays, open-book exam, unseen exam, multiple-choice test)

• Formative assessment: provide feedback on progress
• Summative assessment: judge achievement

– Measure of teaching success
– Measure of learning effectiveness

(Boud 1995)

– Reliability? Validity? (Knight 2001)

Informal/Mobile: elusive, highly personal learning outcomes…

• When, what and how learning occurs is not pre-determined
• Sometimes not even post-determined…

Page 34: Evaluating Mobile Learning


E.g. Museum learning…

Studies that measure knowledge gains give inconclusive results, reporting a variable amount and nature of cognitive learning

Rennie & McClafferty 1995

Knowledge gains are hard to achieve during a short visit in an unfamiliar context

Gail Donald 1991

The main conceptual gains are in consolidating/ reinforcing previous knowledge, not acquiring new knowledge

Falk 2004

“Measurements of specific impacts with the traditional tools of experimental design are often inappropriate for the confounding variability of informal settings, making the result of such assessment often disappointing or insignificant” (Bitgood et al. 1994)

“Each visitor has a unique experience”

(Rennie & McClafferty 1996)

“Whilst many studies use performance on assessment as a proxy for learning, this remains problematic for several reasons. Perhaps most importantly, it is assumed that what has been learnt can be performed; that there is a correlation between learning and assessment. This is evidently not the case.”

(Oliver & Harvey 2002)

Page 35: Evaluating Mobile Learning


Part 3. Learning outcomes

Learner perceptions

Attitudes towards the technology

Enjoyment of experience

Watch for processes which indicate that learning may be happening

showing responsibility for and initiating own learning (e.g. by writing, drawing, or taking photos by choice; deciding where and when to move)

being actively involved in learning (e.g. by absorbed, close examination of resources; or persevering with a task)

making links and transferring ideas and skills (e.g. by comparing evidence)

sharing learning with experts and peers (e.g. by talking and gesturing; or asking each other questions)

Griffin & Symington, 1998

Assess learner-created artifacts: online media they create, personal reflective accounts such as blogs and e-portfolios, logs of interactions with and through the technology

Longitudinal studies

Validated attitude measurement scales needed

Critical incident analysis may be helpful – but outcomes need to be triangulated

What makes a good blog? Assessment standards still to be agreed…

New research mindsets
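One hedged example of working with interaction logs: tallying learner-created artifacts per student as a rough indicator of self-initiated activity, to be triangulated with observations and interviews rather than read as a measure of learning on its own. The log format and records are hypothetical:

    from collections import Counter

    log = [   # (student_id, action) -- invented interaction records
        ("s01", "photo"), ("s01", "audio_note"), ("s01", "photo"),
        ("s02", "photo"), ("s02", "text_comment"),
    ]

    per_student = Counter(log)   # counts each (student, action) pair

    for (student, action), n in sorted(per_student.items()):
        print(student, action, n)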

Page 36: Evaluating Mobile Learning


Conclusion

Notice the assumptions:

Mobile learning happens in discrete, time-bound episodes

(Mobile) learning is clearly distinguishable from other forms of human activity

More?...

Page 37: Evaluating Mobile Learning


Related publications

1. Vavoula, G., Pachler, N., and Kukulska-Hulme, A. (Eds.) (2009). Researching Mobile Learning: Frameworks, methods and research designs. Peter Lang.

2. Vavoula, G., Sharples, M. (2009). Meeting the Challenges in Evaluating Mobile Learning: A 3-level Evaluation Framework. International Journal of Mobile and Blended Learning, 1(2), pp. 54-75.
