A Standard for Augmented Reality Learning Experience Models (ARLEM)
Fridolin Wild1), Christine Perey2), Paul Lefrere3)
1) The Open University, UK 2) Perey Research and Consulting, CH 3) CCA, UK
The traditional route to knowledge.
Photo: Simon Q (flickr)
The Codrington Library, All Souls College, Oxford University
[Figure: annotated workplace photo labelling materials (e.g. yarn 76/2 710), machines and machine parts, occupational health and safety, and set-up parameters]
Its practical application.
Experience.
Mend the dissociative gap.
Photo: Marco Leo (flickr)
Embedding knowledge into experience
Creating Interoperability for AR learning experiences
The Cost of Integration
Studies show:
– 30% of the time in software development projects is spent on interface design and implementation (Schwinn & Winter, 2005)
– 35% to 60% of the IT budget is spent on the development and maintenance of interfaces (Ruh et al., 2001)
– Rising heterogeneity and integration demand (Klesse et al., 2005)
Status Quo in Learning Technology
– Plethora of (standard) software: C4LPT lists over 2,000 tools
– Existing learning object / activity standards lack reality support
– Multi-device orchestration (think wearables!)
=> Enterprises and institutions face interoperability problems
Interoperability
is a property that emerges when distinct information systems (subsystems) cooperatively exchange data in such a way that they facilitate the successful accomplishment of an overarching task.
Wild & Sobernig (2005)
Dissociating Interoperability
(modified from Kosanke, 2005)
ARLEM conceptual model
World Knowledge
Activity Knowledge
http://bit.ly/arlem-input
The Activity Model
“find the spray gun nozzle size 13”
– Messaging in the real-time presence channel and tracking to xAPI
– onEnter/onExit chaining of actions and other activations/deactivations
– Styling (cascading) of viewports and UI elements
– Constraint modeling: specify validation conditions and model workflow branching
(Examples: smart player; search widget)
http://bit.ly/arlem-input
The Workplace Model
– The ‘tangibles’: specific persons, places, things
– The ‘configurables’: devices (styling), apps + widgets
– The ‘triggers’: markers trigger overlays; overlays trigger human action
– Overlay ‘primitives’: enable re-use of e.g. graphical overlays
http://bit.ly/arlem-input
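To make the structure concrete, a minimal workplace model skeleton could look as follows. This is a sketch inferred from the element paths used later in this deck (resources/tangibles/things and resources/triggers/detectables); the places, persons, and configurables branches are assumptions derived from the bullets above, not normative ARLEM syntax.

<workplace id="myworkplace" name="Demo workshop">
  <resources>
    <tangibles>
      <things> <!-- specific things; see the <thing> example later --> </things>
      <places> <!-- assumed branch for places --> </places>
      <persons> <!-- assumed branch for persons --> </persons>
    </tangibles>
    <configurables>
      <!-- assumed branch: devices (styling), apps + widgets -->
    </configurables>
    <triggers>
      <detectables> <!-- markers and image targets; see below --> </detectables>
    </triggers>
  </resources>
</workplace>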
Action steps
<action id="start" viewport="actions" type="actions"></action>
Instructions for action
<instruction><![CDATA[ <h1>Assembly of a simple cabinet</h1> <p>Point to the cabinet to start…</p>]]></instruction>
Defining flow: Entry, Exit, Trigger
<enter removeSelf="false"></enter>
<exit>
  <activate type="actions" viewport="actions" id="step2"/>
  <deactivate type="actions" viewport="actions" id="start"/>
</exit>
<triggers>
  <trigger type="click" viewport="actions" id="start"/>
</triggers>
– On enter: nothing (for now)
– On exit: activate ‘step2’; remove the dialogue box ‘start’
– Trigger: this action step shall be exited by ‘clicking’ on the dialogue box
Sample script

<activity id="assembly" name="Assembly of cabinet" language="english" workplace="http://crunch.kmi.open.ac.uk/people/~jmartin/data/workplace-AIDIMA.xml" start="start">
  <action id="start" viewport="actions" type="actions">
    <enter removeSelf="false"></enter>
    <exit>
      <activate type="actions" viewport="actions" id="step2"/>
      <deactivate type="actions" viewport="actions" id="start"/>
    </exit>
    <triggers>
      <trigger type="click" viewport="actions" id="start"/>
    </triggers>
    <instruction><![CDATA[<h1>Assembly of a simple cabinet</h1><p>Point to the cabinet to start ... </p>]]></instruction>
  </action>
  <action id="step2" viewport="actions" type="actions">
    <enter></enter>
    <exit removeSelf="true"></exit>
    <triggers>
      <trigger type="click" viewport="actions" id="step2"/>
    </triggers>
    <instruction><![CDATA[<h1>step2</h1><p>do this and that.</p>]]></instruction>
  </action>
</activity>
Working with ‘tangibles’
– Utilise the computer vision engine to detect things/places/people (= tangibles)
– Define the tangibles in the workplace model
– Then activate (or deactivate) what shall be visible and relevant in each action step
In the workplace model
We open the workplace model and define a new thing (under resources/tangibles/things):
<thing id="board1" name="Cabinet" urn="/tellme/object/cabinet1" detectable="001">
  <pois>
    <poi id="leftside" x-offset="-0.5" y-offset="0" z-offset="0.1"/>
    <poi id="default" x-offset="0" y-offset="0" z-offset="0"/>
  </pois>
</thing>
– The id is what we will reference.
– The detectable specifies which marker (or sensor state) will be bound to the thing.
– A poi (point of interest) specifies a location relative to the centre of the marker (x = y = z = 0: centre).
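Assuming the same offset convention, a mirrored point of interest on the right-hand side of the cabinet would be defined like this (the id ‘rightside’ is a hypothetical example, not from the deck):

<poi id="rightside" x-offset="0.5" y-offset="0" z-offset="0.1"/> <!-- hypothetical: mirrors 'leftside' across the marker centre -->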
Triggers and tangibles
If you add a tangible trigger (for ‘stareGaze’ navigation), a target icon will be overlaid, rotating in yellow and turning green when the stare duration (3 secs) has been reached:
<trigger type="detect" id="board1" duration="3"/>
Markers and pre-trained markers
– Markers must be defined in the workplace model.
– It is possible to provide pre-trained markers (and their PDF file to print), named, e.g., 001 to 050.
– Markers shall be specified via their id in the computer vision engine (under resources/triggers/detectables):

<detectable id="001" sensor="engine" type="marker"/>
<detectable id="myid" sensor="engine" type="image_target" url="myurl.org/marker.zip"/>
Activates and deactivates
Now we have defined a thing called ‘board1’ and tied it to the marker 001.
We can refer to it from the activity script: we can, e.g., activate pictogram overlays for the verbs of handling and motion:
<activate tangible="board1" predicate="point" poi="leftside" option="down"/>
<activity id="assembly" name="Assembly of cabinet" language="english" workplace="http://crunch.kmi.open.ac.uk/people/~jmartin/data/workplace-AIDIMA.xml" start="start">
  <action id="start" viewport="actions" type="actions">
    <enter removeSelf="false">
      <activate tangible="board1" predicate="point" poi="leftside" option="down"/>
      <activate tangible="board1" predicate="addlabel" poi="default" option="touchme"/>
    </enter>
    <exit>
      <deactivate tangible="board1" predicate="point" poi="leftside"/>
      <deactivate tangible="board1" predicate="addlabel" poi="default"/>
      <activate type="actions" viewport="actions" id="step2"/>
      <deactivate type="actions" viewport="actions" id="start"/>
    </exit>
    <triggers>
      <trigger type="click" viewport="actions" id="start"/>
    </triggers>
    <instruction><![CDATA[<h1>Assembly of a simple cabinet</h1><p>Point to the cabinet to start ... </p>]]></instruction>
  </action>
  <action id="step2" viewport="actions" type="actions">
    <enter></enter>
    <exit removeSelf="true"></exit>
    <triggers>
      <trigger type="click" viewport="actions" id="step2"/>
    </triggers>
    <instruction><![CDATA[<h1>step2</h1><p>do this and that.</p>]]></instruction>
  </action>
</activity>
– Display an arrow pointing downwards at the point of interest ‘leftside’.
– Display a label ‘touchme’ at the centre of the marker.
– Remove both visual overlays when this action step is exited.
Non-normed overlays

<activate tangible="board1" predicate="add3dmodel" poi="leftside" option="augmentation"/>

<augmentations>
  <augmentation id="cube" scale="1" y_angle="180.0" url="http://myurl.org/cube.unity3d"/>
</augmentations>

<activate tangible="board1" predicate="addvideo" option="http://myurl.org/myvideo.mp4"/>
<activate tangible="board1" predicate="addimage" option="http://myurl.org/myimage.png"/>
Normed overlays – verb primitives
All verbs need the ‘id’ of the tangible, some of them take a ‘poi’ as input, and a few have ‘options’:
– ‘point’: poi + option = up, upperleft, left, lowerleft, down, lowerright, right, upperright
– ‘assemble’, ‘disassemble’
– ‘close’
– ‘cut’: poi
– ‘drill’: poi
– ‘inspect’: poi
– ‘lift’, ‘lower’, ‘lubricate’
– ‘measure’: poi
– ‘open’, ‘pack’, ‘paint’, ‘plug’
– ‘rotate-cw’, ‘rotate-ccw’: poi
– ‘screw’: poi
– ‘unfasten’: poi
– ‘unpack’, ‘unplug’
– ‘unscrew’: poi
– ‘forbid’, ‘allow’, ‘pick’, ‘place’
Viitaniemi et al. (2014): Deliverable D4.2, TELLME consortium
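For illustration, following the activate syntax shown earlier, two of these verb primitives could be invoked like this (the poi ‘leftside’ is reused from the workplace model above):

<activate tangible="board1" predicate="drill" poi="leftside"/> <!-- drill pictogram at the point of interest 'leftside' -->
<activate tangible="board1" predicate="allow"/> <!-- 'allow' takes the tangible id only -->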
Warning signs
Add an enter activation:
<activate tangible="board1" poi="leftside" warning="p030"/>
…
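By symmetry with the overlay examples above, a matching exit deactivation might look like this (an assumption; the deck does not show it):

<deactivate tangible="board1" poi="leftside" warning="p030"/> <!-- assumed: removes the warning sign when the action step is exited -->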
Demo video: https://vimeo.com/138182933
Next virtual meeting
See http://arlem.kmi.open.ac.uk
October 12, 2015: 8:00 PDT / 11:00 EDT / 16:00 BST / 17:00 CEST / 24:00 KT
The END