Learnersourcing: Improving Learning with Collective Learner Activity

Juho Kim (MIT CSAIL), Learnersourcing: Improving Learning with Collective Learner Activity


TRANSCRIPT

Learnersourcing

Juho Kim (MIT CSAIL)
Learnersourcing: Improving Learning with Collective Learner Activity

1

Video learning at scale

Video has emerged as a primary medium for online learning, and millions of learners today are watching videos on web platforms such as Khan Academy, Coursera, edX, and YouTube.

2

Video enables learning at scale

Videos are great in that they can be accessed any time from anywhere, and learners can watch at their own pace. Also, once a video is published, millions of learners can learn from the same material. It's an efficient and scalable way to deliver education.

3

[Chart: # of learners (one / hundreds / millions), contrasting scalable delivery with scalable learning]

[1 min]

But does it necessarily mean that video is a better medium for learning? Unfortunately, I would argue that the answer is no, at least not at the moment. Let's compare video learning against some of its competitors.

4

In-person learning: Direct learner-instructor interaction

Effective pedagogy

In in-person learning, like 1:1 private tutoring or classroom lectures, the learner and the instructor interact directly. This direct link enables many effective instructional strategies, so that learners can stay engaged while instructors provide immediate feedback and adapt their instruction.

5

Video learning: Mediated learner-instructor interaction

Video interfaces are limiting.

In video learning, however, there is no direct channel between the learner and the instructor. The video interface is in the middle. While this asynchronous interaction makes scalable delivery possible, there are trade-offs involved. Many of the good ingredients in 1:1 learning are missing. // most video interfaces are not really designed to support learning.

As a result, many video learners resort to passive and isolated viewing. The video interface doesn't really adapt to the learner, and finding information and navigating to specific points inside a video is also difficult.

Video search has been a notoriously difficult problem.

I think these limitations come from two major problems that most video interfaces have. First, video interfaces don't know who the learners are and how they are watching the video. Second, video interfaces don't know what the content is and how to make it more helpful for learners.

===

Human computation involves computational problems: it replaces computers with humans on tasks that are hard for computers but easy for humans, or has them work together to make the task easy.

Crowdsourcing replaces human workers with members of the public: "outsourcing it to an undefined, generally large group of people in the form of an open call" (Jeff Howe).

Human computation is a different perspective on how humans and computers interact. It is different in that the human participation is determined by the computational framework: humans are helping computers in the computation, and crowd intelligence is augmenting the computational power that the computer has.

6

Challenges in video learning at scale: no information about learners, no information about content, lack of interactivity

For video learning to truly scale, I think it's essential that video interfaces understand the learners' engagement and understand the content better.

Understanding learner engagement is hard because the interfaces cannot reach beyond the screen to really know what learners are doing out in the wild.

Understanding content is hard because it requires understanding the domain, learning objective, and presentational approach, each of which is a challenging problem on its own.

=== Yet we hope to be able to analyze learners' video usage patterns and watching behaviors (interaction pattern & behavior analysis).

And we hope to be able to mine, index, and summarize the video (video mining, indexing, summarization).

9

Challenges in video learning at scale
Understand learners' engagement: data mining, sensemaking & analytics
Understand video content: information extraction, computer vision & natural language processing
Support interactivity: UI design, social computing, learning science

10

Data-Driven Approach: use data from learner interaction to understand and improve learning

second-by-second process tracking → data-driven content & UI updates

To address these challenges in video learning, I take a data-driven approach in my research. I use data generated from learners' interaction with video content to understand and improve learning.

The fact that we have a large group of learners watching the same video provides some unique opportunities. We now have a way to track the learning process at a fine-grained level, second by second and click by click, and to use this understanding to improve the content and video interfaces.

What becomes important here are the tools to collect, process, and present this large-scale learning interaction data.

11

Learnersourcing: crowdsourcing with learners as a crowd

too much focus on what learners DO. I'm telling my trick already. There's no "wow, this is a neat solution to the problem," because I'm giving away the suspense too easily. When I present my solution, people should be reacting, "oh, wow, that's a neat solution."

=== And my secret recipe is learnersourcing.

As the name implies, learnersourcing is crowdsourcing with learners as a crowd.

As we know, crowdsourcing has offered a new solution to many computationally difficult problems by introducing human intelligence as a building block.

While crowdsourcing often issues an open call to an undefined crowd, learnersourcing uses a specialized crowd: learners who are inherently motivated and naturally engaged in their learning. This difference lets learnersourcing tackle unique problems that computers or general crowdsourcing cannot easily solve.

And the idea is that the byproduct of learners' natural learning activities can be used to dynamically improve the future learners' experience.

===

Natural interaction with content, or design special interactions to collect useful input from learners

12

Learnersourcing: crowdsourcing with learners as a crowd (inherently motivated, naturally engaged)

13

Learnersourcing: crowdsourcing with learners as a crowd (inherently motivated, naturally engaged)

Learners' collective learning activities dynamically improve content & UI for future learners.

14

Learners watch videos. UI provides social navigation & recommendation.

System analyzes interaction traces for hot spots.
[Learner3879, Video327, play, 35.6]
[Learner3879, Video327, pause, 47.2]

Video player adapts to collective learner engagement

[5 min]

Let me give you some concrete examples of what I mean by learnersourcing: a 30-second tour. What if the video understood how learners watch it and adapted?

Here's a video player that adapts to learners' watching behavior. As the learner watches a video, the clickstream log is analyzed by the system for meaningful patterns. The system finds hot spots in the video where many learners were feeling confused. Based on this information, the video interface dynamically improves various features for future learners.

15
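As an illustration of the kind of hot-spot analysis described above (a sketch, not LectureScape's actual pipeline), clickstream events in the slide's [learner, video, event, time] format can be binned along the video timeline, and bins where several learners pause become candidate hot spots. The event data, bin width, and threshold here are all invented:

```python
from collections import Counter

# Hypothetical event tuples in the format shown on the slide:
# (learner_id, video_id, event_type, time_in_seconds)
events = [
    ("Learner3879", "Video327", "play", 35.6),
    ("Learner3879", "Video327", "pause", 47.2),
    ("Learner0412", "Video327", "pause", 47.9),
    ("Learner1177", "Video327", "pause", 46.5),
    ("Learner2230", "Video327", "play", 12.0),
]

def hot_spots(events, video_id, event_type="pause", bin_size=5, min_count=2):
    """Bin events into fixed-width time bins and return (bin_start, count)
    for bins where at least `min_count` learners acted -- a crude notion
    of a collective hot spot."""
    bins = Counter()
    for learner, vid, etype, t in events:
        if vid == video_id and etype == event_type:
            bins[int(t // bin_size)] += 1
    return sorted((b * bin_size, n) for b, n in bins.items() if n >= min_count)

print(hot_spots(events, "Video327"))  # -> [(45, 3)]
```

Three different learners pausing in the 45-50s bin is what surfaces it; a single learner's pause never would.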

LectureScape [UIST 2014]: video player adapts to collective learner behavior

I implemented this idea in a video player called LectureScape. I'll talk about it in more detail later in the talk.

16

Learners are prompted to summarize video sections. UI presents a video outline.

System coordinates learner tasks for a final summary. "What's the overall goal of the section you just watched?"

Video player coordinates learners to generate a video outline

Let's look at another example. This time, the video player adds a new learning activity for video viewers. Learners are asked to summarize the section that they just watched. The system collects summary labels from multiple learners and finalizes them into a video outline. The UI then adds this outline next to the video player for future learners.

17
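The coordination step above can be sketched as a simple aggregation: collect candidate labels per section and keep the most frequent one. This is only a minimal stand-in for the multi-stage propose-and-vote workflow the talk describes; the section boundaries and labels are invented:

```python
from collections import Counter

# Hypothetical summary labels submitted by different learners for two
# video sections, keyed by (start_sec, end_sec); purely illustrative.
labels_by_section = {
    (0, 120): ["install dependencies", "install dependencies", "setup"],
    (120, 300): ["write the query", "write the query"],
}

def build_outline(labels_by_section):
    """Pick the most frequently submitted label per section, in time
    order, to form a video outline."""
    outline = []
    for (start, end), labels in sorted(labels_by_section.items()):
        best, _ = Counter(labels).most_common(1)[0]
        outline.append((start, end, best))
    return outline

print(build_outline(labels_by_section))
# -> [(0, 120, 'install dependencies'), (120, 300, 'write the query')]
```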

Crowdy [CSCW 2015]: video player coordinates learners to generate a video outline

Some colleagues and I implemented this idea in a video player called Crowdy. I'll also talk about it in more detail later in the talk.

18

Two types of learnersourcing
Passive: track what learners are doing
Active: ask learners to engage in activities

I categorize learnersourcing into two models.

In passive learnersourcing, I use existing data streams coming from learners' video watching activity, and process and analyze them into meaningful information for future learners.

In active learnersourcing, on the other hand, I design new learning activities that benefit learners AND contribute data that can be used to improve the learning experience for students.

19

Learnersourcing applications for educational videos

ToolScape [CHI 2014], Interaction Peaks [L@S 2014], LectureScape [UIST 2014], RIMES [CHI 2015], Mudslide [CHI 2015], Crowdy [CSCW 2015]

In my research, I've designed, built, and studied various active and passive learnersourcing applications to support video learning at scale. I'll cover most of them at least briefly later.

20

Learnersourcing requires a multi-disciplinary approach
Crowdsourcing: quality control, task design, large-scale input management
Social computing: incentive design, sense of community among learners
UI design: data-driven & dynamic interaction techniques
Video content analysis: computer vision, natural language processing
Learning science: pedagogically useful activity, theoretical background

Learnersourcing

Collectively add content via pedagogically useful activity; dynamically improve content & UI with learner data

Enhance content navigation. Create a sense of learning with others. Improve engagement & learning.

[7 min]

Essentially, in learnersourcing, I strive to establish a feedback loop between the learner and the system. As the learner watches the video naturally or answers some prompt that is pedagogically meaningful, the byproducts of these activities are processed by the system to produce a meaningful outcome. Then the system uses this outcome to dynamically improve content and UI for future learners.

Hopefully by the end of the talk, you'll be convinced that, in large-scale video learning environments, interfaces powered by learnersourcing can enhance content navigation, create a sense of learning with others, and ultimately improve learning.

22

Thesis statement: In large-scale video learning environments, interfaces powered by learnersourcing can enhance content navigation, create a sense of learning with others, and improve engagement and learning.


23

I. Passive learnersourcing (MOOC videos)
Video player clickstream analysis [L@S 2014a, L@S 2014b]
Data-driven content navigation [UIST 2014a, UIST 2014b]

II. Active learnersourcing (how-to videos)
Step-by-step information [CHI 2014]
Summary of steps [CSCW 2015]

24

I. Passive learnersourcing (MOOC videos)
Video player clickstream analysis [L@S 2014a, L@S 2014b]
Data-driven content navigation [UIST 2014a, UIST 2014b]

II. Active learnersourcing (how-to videos)
Step-by-step information [CHI 2014]
Summary of steps [CSCW 2015]

25

Video lectures in MOOCs

There are more than 6,000 MOOCs now, some of which are from this very school and some from faculty in this room.

A MOOC often includes tens to hundreds of video clips, and research shows that students taking a MOOC spend a majority of their time watching videos.

But MOOC instructors often don't have a good sense of how learners are using the videos.

26

Classrooms: rich, natural interaction data

armgov on Flickr | CC by-nc-saMaria Fleischmann / Worldbank on Flickr | CC by-nc-nd

Love Krittaya | public domain

unknown author | from pc4all.co.kr

Traditional classrooms, on the other hand, provide natural interaction data. Instructors in a classroom can visually check students' engagement. Students might be engaged and paying attention, confused with a question, or bored and falling asleep. Or it might be the entire class that falls asleep.

=== relate to the current room and how I can adapt dynamically

While online videos provide access any time from anywhere, they disconnect the interaction channel between instructor and students. One problem of videos is that interaction between student and instructor, and between student and student, is lost.

27

liquidnight on Flickr | CC by-nc-sa

For instructors recording videos for MOOCs, it's like they are talking to a wall. Instructors have no way to see if students are engaged, confused, or bored. Students also have no way to express their reaction, or to see how other students are watching the video. This seriously restricts our understanding of how students learn with the videos, and limits the video learning experience itself.

=== // But a MOOC video experience is not like a classroom experience. // There's a disconnect between instructors and students, in both time and location.

28

First MOOC-scale video interaction analysis
Data source: 4 edX courses (fall 2012)
Domains: computer science, statistics, chemistry
Video events: start, end, play, pause, jump

Learners: 127,839 | Videos: 862 | Mean video length: 7:46 | Processed video events: 39.3M

29

Factors affecting video engagement
Shorter videos: significant drop after 6 minutes
Informal shots over studio production: more personal feel helps
Tablet drawing tutorials over slides: continuous visual flow helps

How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos. Philip J. Guo, Juho Kim, Rob Rubin. Learning at Scale 2014.
Metric: session length

We first looked at a few major factors that might affect learners' engagement with video. Performing a post-hoc analysis with average session length as the metric, we found the factors above.

These are useful guidelines for instructors and video editors, and when we interviewed a video editor at edX, he enjoyed the fact that much of what he knew from his experience and intuition had been confirmed.

We then wanted to look at how learners navigate inside a video.

30

How do learners navigate videos?

Watch sequentially

Pause

Re-watch

Skip / Skim

Search

Watch sequentially: common when you watch for the first time

Pause: you might be confused, pace is too fast

Return: revisit important concept

Skip: search for a specific part, or quickly review

We've seen many such examples in the data.

31

Collective interaction traces

[Figure: interaction traces of Learner #1 through Learner #7888 along video time]

It might be far-fetched to draw any conclusion from one student's data, but what if you had thousands of students' data?

===

If every student watches it differently, why would combining them all be helpful? Clarify why having the pattern is useful.

32

Collective interaction traces into interaction patterns

[Figure: interaction events over video time]

=== Now that we've looked at why data matters and how data can be used, let me tell you about the specific dataset we used for our analysis.

33

Interaction peaks: temporal peaks in the number of interaction events, where a significant number of learners show similar interaction patterns

Understanding In-Video Dropouts and Interaction Peaks in Online Lecture Videos. Juho Kim, Philip J. Guo, Daniel T. Seaton, Piotr Mitros, Krzysztof Z. Gajos, Robert C. Miller. Learning at Scale 2014.

These hot spots in the video may indicate points of confusion or importance: "give me more time" / "I'm not tracking/following" and "this is important".

=== Re-watching peak: focusing on non-sequential sessions. Play peak: points of interest. They often correlate, but not always.

34
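One simple way to operationalize "a significant number of learners show similar interaction patterns" is to smooth the per-second event counts and flag seconds that rise well above the overall mean. This is an illustrative sketch with made-up counts and thresholds, not the peak-detection method used in the paper:

```python
# Hypothetical per-second interaction event counts for one video; the
# spike around seconds 5-7 represents a candidate interaction peak.
counts = [3, 4, 3, 5, 4, 18, 22, 19, 4, 3, 2, 4, 3, 3, 4]

def find_peaks(counts, window=3, z=2.0):
    """Return indices whose moving-average count exceeds mean + z*std."""
    half = window // 2
    smoothed = []
    for i in range(len(counts)):
        seg = counts[max(0, i - half):i + half + 1]  # clipped at edges
        smoothed.append(sum(seg) / len(seg))
    mean = sum(smoothed) / len(smoothed)
    std = (sum((s - mean) ** 2 for s in smoothed) / len(smoothed)) ** 0.5
    return [i for i, s in enumerate(smoothed) if s > mean + z * std]

print(find_peaks(counts))  # -> [6]
```

Smoothing first keeps a single learner's stray click from registering as a peak; the z threshold controls how "significant" a collective spike must be.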

What causes an interaction peak?
Video interaction log data
Video content analysis: visual content (video frames), verbal content (transcript)

Observation: visual / topical transitions in the video often coincide with a peak.

Qualitative coding

----- Meeting Notes (2/24/14 16:16) ----- one presentation style to another: code example, cut, talking head.

36

Returning to content

[Figure: # of play button clicks over video time]

37

Beginning of new material

[Figure: # of play button clicks over video time]

Data-Driven Video Interaction Techniques

Video interfaces that adapt to collective learner interaction patterns

Remember that our motivation for looking at this data was to find ways to use it to improve students' learning experience. Let's see how the data analysis I presented so far can be used to achieve this goal.

----- Meeting Notes (10/7/14 17:52) ----- analysis is useful for instructors and video editors, but can we more directly impact students' learning experience? What if we use this data to dynamically change the way the video player works?

39

Data-driven video interaction techniques: use interaction peaks to draw learners' attention, support diverse navigational needs, and create a sense of learning with others

40

LectureScape: lecture video player powered by collective watching data

Data-driven interaction techniques for improving navigation of educational videos. Juho Kim, Philip J. Guo, Carrie J. Cai, Shang-Wen (Daniel) Li, Krzysztof Z. Gajos, Robert C. Miller. UIST 2014.

41

I'd like to focus on just a few of them.

42

Where did other learners find it confusing / important?
Which search result should I look at?
I want a quick overview of this clip.
I want to see that previous slide.

Timeline | Search | Summarization

Rollercoaster Timeline: embedded visualization of collective interactions

Roller coaster

Phantom cursor

Visual & physical emphasis on interaction peaks. Read wear [Hill et al., 1992], semantic pointing [Blanch et al., 2004], pseudo-haptic feedback [Lécuyer et al., 2004].

It is an example of control-display ratio adaptation [5, 16, 37], dynamically changing the ratio between physical cursor movement and on-screen cursor movement.

The faster the dragging, the weaker the friction. We achieve this effect by temporarily hiding the real cursor, and replacing it with a phantom cursor that moves slower than the real cursor within peak ranges (Figure 3). The idea of enlarging the motor space...

49
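A minimal sketch of this control-display gain idea, assuming a 1-D timeline cursor, hypothetical peak ranges, and a fixed slow-down factor. LectureScape's actual implementation additionally weakens the friction as dragging speeds up; that refinement is omitted here:

```python
# Hypothetical interaction-peak ranges on the video timeline (seconds)
# and an assumed control-display gain applied inside a peak.
PEAK_RANGES = [(40.0, 50.0), (82.0, 88.0)]
SLOW_GAIN = 0.4   # phantom cursor moves at 40% of physical speed in a peak

def in_peak(t):
    return any(lo <= t <= hi for lo, hi in PEAK_RANGES)

def drag(position, physical_delta):
    """Advance the timeline cursor by a physical mouse delta, applying a
    reduced control-display gain while inside a peak range, so peaks
    occupy a larger motor space and are harder to skip past."""
    gain = SLOW_GAIN if in_peak(position) else 1.0
    return position + physical_delta * gain

pos = 39.0
for _ in range(5):        # five equal physical drag steps
    pos = drag(pos, 1.0)
print(round(pos, 2))      # -> 41.6: the cursor "sticks" inside the peak
```

Outside a peak the same five steps would cover 5 seconds of video; inside, they cover only 2, which is the pseudo-haptic "friction" the slide describes.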

Related work: read wear [Hill et al., 1992], semantic pointing [Blanch et al., 2004], pseudo-haptic feedback [Lécuyer et al., 2004]

Personal Traces: visualize parts of a video watched

Personal trace

TimelineSearchSummarization

In-video search: match visualization & highlights

Enhanced video search

Word cloud

current section | previous section | next section

Timeline | Search | Summarization

Visual clip highlights

Interaction data + frame processing

highlights

Bookmarking

bookmarking

pinning

Pinning: Automatic side-by-side view

Pinned slide | Video stream

Evaluation goalsQ1: How do users navigate lecture videos with LectureScape?

Q2: How do users interpret interaction data presented in LectureScape?

Q3: Are LectureScape's features usable and learnable?

66

Where did other learners find it confusing / important?
I want a quick overview of this clip.
I want to see that previous slide.

visual and physical feedback | extract highlight frames | determine frames to pin

Lab study: 12 edX & on-campus students; LectureScape vs. baseline interface

Navigation & learning tasks
Visual search: find a slide where the instructor displays on-screen examples of the singleton operation.
Problem search: if the step size in an approximation method decreases, does the code run faster or slower?
Summarization: write down the main points of a video in three minutes.

----- Meeting Notes (1/28/15 18:46) ----- simulated navigation tasks? Simulated tasks that people are likely to encounter. Separate navigation vs. learning. What is within subjects? Say what UI evaluation is normally like: are you good at teaching UI concepts to the audience?

68

Diverse navigation patterns with LectureScape: more non-linear jumps in navigation, more navigation options (rollercoaster timeline, phantom cursor, highlight summary, pinning)

[LectureScape] gives you more options. It personalizes the strategy I can use in the task.

Removed socially-ranked search

69

Interaction data give a sense of learning together. Interaction peaks matched participants' points of confusion (8/12) and importance (6/12). "It's not like cold-watching. It feels like watching with other students."

[interaction data] makes it seem more classroom-y, as in you can compare yourself to how other students are learning and what they need to repeat.

Where did other learners find it confusing / important?
Which search result should I look at?
I want a quick overview of this clip.
I want to see that previous slide.

visual and physical feedback | interaction frequency as weight | extract highlight frames | determine frames to pin

Higher perceived speed and efficiency. No significant difference in search task completion time. Perception of task performance.

Live deployment: edX integration in progress; real-time data processing; improved learning / engagement?

Extending this preliminary lab evaluation, I'm currently collaborating with a team at edX to integrate the LectureScape player into the platform. edX instructors will have an option to opt in to use LectureScape as the default video player in our live deployment. Real classes, streaming data, the cold start problem, adaptive interfaces.

74

Summary of passive learnersourcing
Unobtrusive, adaptive use of interaction data
Analysis of MOOC-scale video clickstream data
LectureScape: video player powered by learners' collective watching behavior
Data-driven interaction techniques for social navigation

75

I. Passive learnersourcing (MOOC videos)
Video player clickstream analysis [L@S 2014a, L@S 2014b]
Data-driven content navigation [UIST 2014a, UIST 2014b]

II. Active learnersourcing (how-to videos)
Step-by-step information [CHI 2014]
Summary of steps [CSCW 2015]

We talk about millions of learners on MOOCs, but it's also common for a single how-to video to have millions of viewers.

76

How-to videos online

You can learn how to do almost anything online nowadays by watching how-to videos, including cooking, applying Photoshop filters, assembling furniture, and applying makeup.

77

Navigating how-to videos is hard: find, repeat, skip

How do you go back to a step that you missed the first time? You have to use the timeline slider to make imprecise estimates.

=== Q: Why videos in the first place?
A: 1) A formative study at Adobe last year showed that it is a preference issue. There are certain types of people who learn better with videos, and they turn to videos as much as possible, while others use static, step-by-step HTML tutorials.

2) Video captures physical procedures. Step-by-step tutorials sometimes skip important steps.

Limits in navigation affect the learning experience and turn people away from watching video.

78

How-to videos contain a step-by-step solution structure

Apply gradient map

They have a specific structure, which is that they contain step-by-step instructions.

What are some properties of how-to videos that we can leverage to improve the learning experience?

79

Completeness & detail of instructions [Eiriksdottir and Catrambone, 2011]
Proactive & random access in instructional videos [Zhang et al., 2006]
Interactivity: stopping, starting, and replaying [Tversky et al., 2002]
Subgoals: a group of steps representing task structures [Catrambone, 1994, 1998]

Seeing and interacting with solution structure helps learning

80

Learning with solution structure helps

Combining the lessons from the literature, we can conclude that seeing and interacting with the solution structure helps.

81

Ask a question: what are some steps you can think of? It may be hard to remember and understand what each step does.

83

Improving how-to video learning
Seeing the solution: extract steps + subgoals at scale
Interacting with the solution: UI for solution structure navigation

ToolScape: step-aware video player

Crowdsourcing Step-by-Step Information Extraction to Enhance Existing How-to Videos. Juho Kim, Phu Nguyen, Sarah Weir, Philip J. Guo, Robert C. Miller, & Krzysztof Z. Gajos. CHI 2014. Best of CHI Honorable Mention.

ToolScape adds an interactive timeline that lets learners click each tool or work-in-progress image to repeat, jump, or skip steps in the workflow.

86

work-in-progress images | parts with no visual progress | step labels & links

Wadsworth constant

87

Study: Photoshop design tasks
12 novice Photoshop users
manually annotated videos

88

Baseline

ToolScape

Participants felt more confident about their design skills with ToolScape.

Self-efficacy gain: four 7-point Likert scale questions; Mann-Whitney U test (Z=2.06, p < 0.05), ToolScape > Baseline

1-way ANOVA: F(2, 226)=3.6, p < 0.05, partial η²=0.03; Crowdy vs. Baseline: p < 0.05, Cohen's d = 0.38; Expert vs. Baseline: p < 0.05, Cohen's d = 0.35. Error bar: standard error.
1-way ANOVA: F(2, 226)=4.8, p < 0.01, partial η²=0.04; Crowdy vs. Baseline: p < 0.05, Cohen's d = 0.38; Expert vs. Baseline: p < 0.01, Cohen's d = 0.45. Error bar: standard error.

Pretest + Posttest scores were not different across conditions.

One-way ANOVA: p > 0.05Error bar: Standard error

Crowdy didnt add additional workload.Questions on mental demand, physical demand, temporal demand, performance, effort, and frustration

7-point Likert scale (1: low workload, 7: high workload). One-way ANOVA: p > 0.05 (error bars: standard error)

Study 2: Subgoal labeling quality
- ~50 web programming + statistics videos
- Classroom + live website deployment
- ~1,000 participating users (out of ~2,500 visitors)

922 subgoals created, 966 upvotes, 527 upvotes

131

Evaluation: live deployment
- 25-day deployment
- 15 web programming videos
- 150 participating users (out of 1,268 visitors)
- Stage 1: 109 subgoal labels created; Stage 2: 109 upvotes; Stage 3: 13 upvotes
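The staged workflow above (learners propose subgoal labels, later learners upvote them) can be aggregated with a simple vote tally that promotes the top-voted label once it clears a threshold. A hypothetical sketch; the threshold and data shapes are assumptions, not Crowdy's actual pipeline:

```python
from collections import Counter, defaultdict

def aggregate_subgoals(proposals, votes, min_votes=2):
    """proposals: (section_id, label) pairs from stage 1.
    votes: (section_id, label) upvote pairs from stage 2.
    Returns the winning label per section, once it has >= min_votes."""
    tally = defaultdict(Counter)
    for section, label in proposals:
        tally[section][label] += 0   # register the proposal at zero votes
    for section, label in votes:
        tally[section][label] += 1
    promoted = {}
    for section, counts in tally.items():
        label, n = counts.most_common(1)[0]
        if n >= min_votes:
            promoted[section] = label
    return promoted
```

Sections whose best label hasn't cleared the threshold simply stay open for more voting.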


132

Subgoal quality evaluation
- Analyzed 4 most popular videos
- 4 external raters compared expert vs. learner subgoals

14 out of 17 learner labels were rated matching or better than expert labels.

The majority of learner-generated subgoals were comparable in quality to expert-generated ones.135

Interview with learners & creator

"I was more... attentive to watching, to trying to understand what exactly am I watching."
"Having pop-up questions means the viewer has to be paying attention."
"The choices... made me feel as though I was on the same wavelength still."136

Crowdy's video learning model: in-video quiz + note-taking

Designing learnersourcing activities: engaging & pedagogically meaningful tasks whose byproducts yield useful information

- Summarize, Compare, Inspect [Crowdy]
- Record your own explanation. [RIMES]
- Where is this lecture most confusing? Why? [Mudslide]

Crowdy: learning activity design. This shows a pattern in my learnersourcing research: design a meaningful activity, and find a way to create something useful.138

RIMES: Interactive exercises embedded into lecture videos
RIMES: Embedding Interactive Multimedia Exercises in Lecture Videos. Juho Kim, Elena L. Glassman, Andrés Monroy-Hernández, Meredith Ringel Morris. CHI 2015.


Gallery of submitted responses

Mudslide: Spatially anchored confusion via learnersourcing
Mudslide: A Spatially Anchored Census of Student Confusion for Online Lecture Videos. Elena L. Glassman, Juho Kim, Andrés Monroy-Hernández, Meredith Ringel Morris. CHI 2015. Best of CHI Honorable Mention.

Learnersourcing design principles

Crowdsourcing:
- simple and concrete task
- quality control
- data collection
- microscopic, focused task
- cost: money

Learnersourcing:
- pedagogically meaningful task
- incentive design
- learning + data collection
- overall contribution visible
- cost: learners' time & effort

Summary of active learnersourcing
- Techniques for extracting solution structure from existing videos
- Video UIs for learning with steps & subgoals
- Studies on learning benefits + label quality
- Learnersourcing activity design: engaging & pedagogically meaningful tasks whose byproducts yield useful information

143

Future research agenda

Now I'll briefly mention some of my future research ideas, and wrap up.144

Richer learner responses

Learnersourcing research agenda

Many learnersourcing applications that I presented today deal with clickstream data or simple prompts. I think there are exciting opportunities in broadening the scope of learnersourcing to other learning contexts beyond videos.
- A programming IDE that learnersources multiple implementations of the same function?
- A graphical design tool that learnersources multiple visual assets?
- A writing tool that learnersources multiple expressions and phrases?

With these ideas, we can make existing creativity support tools more social and interactive.

===

145


- Large-scale corpus of annotated videos
- Multiple learning paths
- Deep search, browsing, recommendation


I've shown how we can add annotations to existing videos with learnersourcing workflows. Now that we have the enabling technology, what if we had thousands of videos fully annotated and summarized? What interesting applications could we build?

Now that videos are indexed at the step level, you could look at 100 different ways to perform a step and see which approaches are more or less common. Based on this data, we can also make search, browsing, and recommendation work at the step level.

=== Similar for more conceptual lecture videos, where students' alternative explanations can be indexed, browsed, etc.146
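Step-level search like this could be backed by a small inverted index from words in step labels to (video, timestamp) pairs, so results land inside videos rather than at their start. A toy sketch under assumed data shapes, not a real system's schema:

```python
from collections import defaultdict

def build_step_index(annotated_videos):
    """annotated_videos: {video_id: [(time_seconds, step_label), ...]}.
    Returns an inverted index: word -> list of (video_id, time) hits."""
    index = defaultdict(list)
    for video_id, steps in annotated_videos.items():
        for time, label in steps:
            for word in label.lower().split():
                index[word].append((video_id, time))
    return index

def search_steps(index, query):
    """Single-word lookup; a real system would tokenize, stem, and rank."""
    return sorted(set(index.get(query.lower(), [])))
```

The same index could drive step-level recommendation ("learners who watched this step also watched...") by joining hits with clickstream data.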


Completely learnersourced course
- Course created, taught, improved entirely by learners


We can also push the boundary of learnersourcing by expanding the role of learners. What if the entire set of course materials were created, taught, and improved by learners? With large-scale explanations, feedback, and improvements that are all learnersourced, learners take a more active role in learning, and future learners can choose the best set of resources that work for them.147

Generalizing learnersourcing: community-guided planning, discussion, decision making, collaborative work

- Conference planning [UIST 2013, CHI 2014, HCOMP 2013, HCOMP 2014]
- Civic engagement [CHI 2015, CHI 2015 EA]

Many social domains suffer from the same limitations that video learning does: members are passive and isolated, and there is a lack of channels for individuals to express their input and contribute in a meaningful way.

Learnersourcing presents a model where micro-contributions from members of a community can make a difference. This conceptual idea can generalize to various social domains such as open government, nutrition, healthcare, and accessibility, just to name a few.

And the goal is to support more community-driven planning, discussion, decision making, and creative processes.

=== A lot of community practices and civic issues matter to members, but the decision process and complicated structures are often not accessible to them. The conceptual idea behind learnersourcing presents a technique for engaging community members in the process through natural, voluntary, intrinsically motivated activities.148

Leveraging large-scale interaction data
Data-driven support:
- Usability issue detection
- Adaptation & personalization
- Social interaction

Pattern understanding:
- Creative process
- Learning process
- Group communication
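As one concrete example of data-driven support from interaction data, per-second play-event counts can be scanned for spikes where many learners re-watch the same moment. A crude, illustrative threshold rule, not the published Interaction Peaks method:

```python
def interaction_peaks(play_counts, factor=2.0):
    """play_counts[t] = number of play events starting at second t.
    Flag local maxima whose count exceeds factor x the mean count.
    Both the factor and the 3-second local-maximum window are
    arbitrary choices for this sketch."""
    if not play_counts:
        return []
    mean = sum(play_counts) / len(play_counts)
    threshold = factor * mean
    return [t for t, c in enumerate(play_counts)
            if c > threshold
            and c >= max(play_counts[max(0, t - 1):t + 2])]
```

Flagged seconds could then be surfaced in the video UI (e.g., as highlights on the timeline) or fed into usability analysis of confusing passages.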


Learnersourcing

- collectively add content via pedagogically useful activity
- dynamically improve content & UI with learner data

- enhance content navigation
- create a sense of learning with others
- improve engagement & learning

150

Learning at scale: Does learning scale?
(y-axis: learning benefit per learner; x-axis: # of learners)

I started this talk by asking how we can make video learning really scale.151

Learning at scale: Does learning scale?
(y-axis: learning benefit per learner; x-axis: # of learners)

I think, unfortunately, this is where we are: the learning experience becomes worse while delivery of content scales up.

=== As the number of learners increases, fully supporting the good ingredients of 1:1 tutoring becomes harder and harder.

152

Learning at scale research: Enable the good parts of in-person learning, at scale
(y-axis: learning benefit per learner; x-axis: # of learners)

Many researchers interested in learning at scale are working hard to enable the good components of in-person learning in online settings. Learnersourcing is one such attempt at creating online learning environments that truly scale.153

Vision for learnersourcing
Interactive, collaborative, data-driven online education
(y-axis: learning benefit per learner; x-axis: # of learners)

But with the unique opportunities we have because of the scale, the data, and the video medium, I believe we can be more ambitious and provide an even better learning experience than in-person learning. Learnersourcing is a step toward this vision, supporting more interactive, collaborative, and data-driven learning.

=== "Beyond being there": technology for distance work takes advantage of the new medium. This kind of logic can be applied in my intro.154

Contributions
Learnersourcing: support video learning at scale
- UIs: Novel video interfaces & data-driven interaction techniques powered by large-scale learning interaction data
- Workflows: Techniques for inferring learner engagement from clickstream data, and extracting semantic information from educational videos
- Evaluation studies: Studies measuring pedagogical benefits, resulting data quality, and learners' qualitative experiences

Methods for collecting and processing large-scale data from learners

155


Learnersourcing: Improving Learning with Collective Learner ActivityJuho Kim | MIT CSAIL | [email protected] | juhokim.com

ToolScape, Interaction Peaks, LectureScape, RIMES, Crowdy

Mudslide

157