Human-Computer Interaction Lecture 3: Text and gesture interaction


TRANSCRIPT

Page 1: Human-Computer Interaction Lecture 3: Text and gesture interaction

Human-Computer Interaction

Lecture 3: Text and gesture interaction

Page 2: Human-Computer Interaction Lecture 3: Text and gesture interaction

VISUAL DESIGN EXERCISE REVIEW

Page 3: Human-Computer Interaction Lecture 3: Text and gesture interaction

• Visit Guardian data blog

• Browse “Data A-Z”

• Propose new visualisation

Page 4: Human-Computer Interaction Lecture 3: Text and gesture interaction

Graphic resources / Correspondence / Design uses

Marks: shape, orientation, size, texture, saturation, colour, line
– Correspondence: literal (visual imitation of physical features); mapping (quantity, relative scale); conventional (arbitrary)
– Design uses: mark position; identify category (shape, texture, colour); indicate direction (orientation, line); express magnitude (saturation, size, length); simple symbols and colour codes

Symbols: geometric elements, letter forms, logos and icons, picture elements, connective elements
– Correspondence: topological (linking); depictive (pictorial conventions); figurative (metonym, visual puns); connotative (professional and cultural association); acquired (specialist literacies)
– Design uses: texts and symbolic calculi; diagram elements; branding; visual rhetoric; definition of regions

Regions: alignment grids, borders and frames, area fills, white space, Gestalt integration
– Correspondence: containment; separation; framing (composition, photography); layering
– Design uses: identifying shared membership; segregating or nesting multiple surface conventions in panels; accommodating labels, captions or legends

Surfaces: the plane, the material object on which the marks are imposed (paper, stone), mounting, orientation and display context, display medium
– Correspondence: literal (map); Euclidean (scale and angle); metrical (quantitative axes); juxtaposed or ordered (regions, catalogues); image-schematic; embodied/situated
– Design uses: typographic layouts; graphs and charts; relational diagrams; visual interfaces; secondary notations; signs and displays

Page 5: Human-Computer Interaction Lecture 3: Text and gesture interaction

HISTORIC INTERFACES

Page 6: Human-Computer Interaction Lecture 3: Text and gesture interaction

Control panels

• The earliest computers were treated like scientific instruments
– often controlled by physical reconfiguring
– processes monitored via lamps and CRTs
– usability considerations are those of machinery

Page 7: Human-Computer Interaction Lecture 3: Text and gesture interaction

Algebraic languages

• Programmers worked away from the machine, creating paper tapes
– the “interface” was that of mathematicians working at a desk
– Formula Translation (FORTRAN) was a great improvement in usability

DIMENSION A(11)

READ A

2 DO 3,8,11 J=1,11

3 I=11-J

Y=SQRT(ABS(A(I+1)))+5*A(I+1)**3

IF (400>=Y) 8,4

4 PRINT I,999.

GOTO 2

8 PRINT I,Y

11 STOP

Page 8: Human-Computer Interaction Lecture 3: Text and gesture interaction

Data files

• Punch-cards were already established for commercial data processing
– keypunch operators
– programmers carried boxes of cards
– interaction consists of filing and paperwork procedures

Page 9: Human-Computer Interaction Lecture 3: Text and gesture interaction

Command lines

• Teletypes allowed direct interaction, but:
– local typing often echoed back to paper
– computer waited to respond after each line

• Command/response, or dialogue paradigm

OBEY

YESSIR

Page 10: Human-Computer Interaction Lecture 3: Text and gesture interaction

Line editors

• Editing a file via command dialogue
– user must establish context for commands
– user must maintain a mental model of current state
– user’s memory constraints require extra actions to confirm state

10p

quick brown foz

.s/foz/fox/

+

?

9p

ggggg

.d


Page 11: Human-Computer Interaction Lecture 3: Text and gesture interaction

WYSIWYG

• “Glass teletypes”
– so-called because no need for paper; instead used “scrolling” up the screen

• The full-screen editor
– control codes allowed cursor positioning
– allows editing text within the display context
– user can see the product being worked on

• “What You See Is What You Get”

Page 12: Human-Computer Interaction Lecture 3: Text and gesture interaction

Modeless interaction

• Early full-screen editors (e.g. vi) were like previewer front ends for line editors.
– Commands issued from separate command line
– Modal interaction confusing and unpredictable

• In modeless editors (e.g. emacs)
– Given keystroke has the same effect in any context

Page 13: Human-Computer Interaction Lecture 3: Text and gesture interaction

Menus

• What commands can I perform (line editor)?

• Better to list them, allowing the user to choose
– menu screen, or “pop-up” if a full screen is available

afb21$ ex
Entering Ex mode. Type "visual" to get out.
:help
"help.txt" [readonly] 1185 lines, 55790 characters
:

Page 14: Human-Computer Interaction Lecture 3: Text and gesture interaction

Pointing devices

• How to select from a full screen menu without many cursor movements?
– Tab to position
– Light pens
• point directly at a place on the screen
– Joystick
• directional motion
– Mouse
• near keyboard (unlike light pen), while giving positional control (unlike joystick)

Page 15: Human-Computer Interaction Lecture 3: Text and gesture interaction

Graphical displays

• Originally for technical applications:
– draughting
– schematics

• Allowed preview of graphic operations
– often toggled between text (control) mode and graphic (output) mode

Page 16: Human-Computer Interaction Lecture 3: Text and gesture interaction

Icons and windows

• Largely developed at Xerox Palo Alto Research Centre (PARC)
– Star and Alto projects
– bitmapped displays

• Multiple contexts shown by frames.

• Pictures used to represent abstract entities.

Page 17: Human-Computer Interaction Lecture 3: Text and gesture interaction

DIRECT MANIPULATION

Page 18: Human-Computer Interaction Lecture 3: Text and gesture interaction

Direct manipulation

• WIMP: window / icon / menu / pointer
• Rapidly being superseded by touch devices
• Most radical change in WIMP:

– Command is not unit of interaction

– Object of interest is unit of interaction (Sutherland, Smith)

Page 19: Human-Computer Interaction Lecture 3: Text and gesture interaction

Direct manipulation

• Described by Shneiderman:
– objects of interest continuously visible
– operations by physical actions, not commands
– actions rapid, incremental, reversible
– effect of actions immediately visible
– basic commands for novices, more for experts

• Unfortunately these things are not true of all “graphical user interfaces”.

Page 20: Human-Computer Interaction Lecture 3: Text and gesture interaction

Graphics without directness

• Object of analysis isn’t visible
• Effect of controls isn’t visible
• All functions are presented to novices

Page 21: Human-Computer Interaction Lecture 3: Text and gesture interaction

Even less direct!

Page 22: Human-Computer Interaction Lecture 3: Text and gesture interaction
Page 23: Human-Computer Interaction Lecture 3: Text and gesture interaction

Fitts’ law

• Fitts’ law models the speed-accuracy trade-off in pointing

T = a + b · ID,  where ID = log2(2D / W)
(D is the distance to the target, W is the target width; a and b are empirically fitted constants)

• A model allows prediction and optimisation
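As a rough illustration (not from the slides), the prediction can be written as a couple of lines of Python; the intercept a and slope b are device-dependent constants that would normally be fitted from pointing data, so the values below are placeholders.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) using the classic Fitts formulation.

    ID = log2(2D / W); a and b are placeholder coefficients here and
    would be fitted to a particular device and user population.
    """
    return a + b * math.log2(2 * distance / width)

# Halving the distance (or doubling the width) removes one bit of ID,
# i.e. saves b seconds of predicted movement time.
print(fitts_time(distance=110, width=26))
print(fitts_time(distance=55, width=26))
```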

Page 24: Human-Computer Interaction Lecture 3: Text and gesture interaction

GOMS: Goals, Operators, Methods, Selection

• Goals: what is the user trying to do?
• Operators: what actions must they take?
– Home hands on keyboard or mouse
– Key press & release (tapping keyboard or mouse button)
– Point using mouse/lightpen etc.
• Methods: what have they learned in the past?
• Selection: how will they choose what to do?
– Mental preparation

Page 25: Human-Computer Interaction Lecture 3: Text and gesture interaction

Aim is to predict speed of interaction

• Which is faster? Make a word bold using
a) Keys only
b) Font dialog

Page 26: Human-Computer Interaction Lecture 3: Text and gesture interaction

Keys-only method

<shift> held down + seven arrow-key presses (the specific keys are shown as key images on the original slide), then <ctrl> + b

Page 27: Human-Computer Interaction Lecture 3: Text and gesture interaction

Keys-only method

• Mental preparation: M
• Home on keyboard: H
• Mental preparation: M
• Hold down shift: K
• Press arrow key (×7; the keys are shown as images on the slide): K ×7
• Release shift: K
• Mental preparation: M
• Hold down control: K
• Press b: K
• Release control: K

Page 28: Human-Computer Interaction Lecture 3: Text and gesture interaction

Keys-only method

• 1 occurrence of H
• 3 occurrences of M
• 12 occurrences of K

Total: 0.40 + 1.35 × 3 + 0.28 × 12 = 7.81 seconds
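A minimal sketch of the same keystroke-level sum in Python, using the operator times quoted on the slide (H = 0.40 s, M = 1.35 s, K = 0.28 s):

```python
# Keystroke-Level Model operator times from the slide (seconds).
H = 0.40   # home hands on keyboard or mouse
M = 1.35   # mental preparation
K = 0.28   # key press or release

# Keys-only method: 1 H, 3 M, 12 K.
keys_only = 1 * H + 3 * M + 12 * K
print(f"Keys-only method: {keys_only:.2f} s")   # 7.81 s
```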

Page 29: Human-Computer Interaction Lecture 3: Text and gesture interaction

Font dialog method

Mouse action sequence (annotated screenshots on the slide): click, drag; release, move; click, move; release; move, click; move, click

Page 30: Human-Computer Interaction Lecture 3: Text and gesture interaction

Motion times from Fitts’ law

• From start of “The” to end of “cat” (t ~ k log (A/W)):– distance 110 pixels, target width 26 pixels, t = 0.88 s

• From end of “cat” to Format item on menu bar:– distance 97 pixels, target width 25 pixels, t = 0.85 s

• Down to the Font item on the Format menu:– distance 23 pixels, target width 26 pixels, t = 0.34 s

• To the “bold” entry in the font dialog:– distance 268 pixels, target width 16 pixels, t = 1.53 s

• From “bold” to the OK button in the font dialog:– distance 305 pixels, target width 20 pixels, t = 1.49 s
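The slide quotes t ~ k log(A/W), but its five numbers also agree closely with the Shannon formulation t = k · log2(D/W + 1) with k ≈ 0.37 s per bit; the coefficient below is inferred by back-fitting the slide’s own figures rather than stated in the lecture, so treat this as a sketch.

```python
import math

# Distances and target widths (pixels) quoted on the slide.
motions = [(110, 26), (97, 25), (23, 26), (268, 16), (305, 20)]

k = 0.37  # seconds per bit, inferred from the slide's own numbers

for d, w in motions:
    t = k * math.log2(d / w + 1)   # Shannon formulation of Fitts' law
    print(f"D = {d:3d}px  W = {w:2d}px  t = {t:.2f} s")
# Prints approximately 0.88, 0.85, 0.34, 1.54, 1.49 seconds.
```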

Page 31: Human-Computer Interaction Lecture 3: Text and gesture interaction

Font dialog method

• Mental preparation: M
• Reach for mouse: H
• Point to “The”: P
• Click: K
• Drag past “cat”: P
• Release: K
• Mental preparation: M
• Point to menu bar: P
• Click: K
• Drag to “Font”: P
• Release: K
• Mental preparation: M
• Move to “bold”: P
• Click: K
• Release: K
• Mental preparation: M
• Move to “OK”: P
• Click: K

Page 32: Human-Computer Interaction Lecture 3: Text and gesture interaction

Font dialog method

• 1 occurrence of H
• 4 occurrences of M
• 7 occurrences of K
• 6 mouse motions P

• Total for dialog method:
0.40 + 1.35 × 4 + 0.28 × 7 + (1.1 + 0.88 + 0.85 + 0.34 + 1.53 + 1.49) = 13.95 seconds

• Total for keyboard method:
7.81 seconds
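The same comparison as a short Python check (a sketch, not part of the slides); the six pointing times are those used in the slide’s sum — the five Fitts-law estimates from the previous page plus a 1.1 s motion.

```python
# KLM operator times (seconds) from the slides.
H, M, K = 0.40, 1.35, 0.28

# Six pointing motions for the font-dialog method (seconds), as summed on the slide.
pointing = [1.1, 0.88, 0.85, 0.34, 1.53, 1.49]

dialog_method = 1 * H + 4 * M + 7 * K + sum(pointing)
keys_only_method = 1 * H + 3 * M + 12 * K

print(f"Font dialog method: {dialog_method:.2f} s")     # 13.95 s
print(f"Keys-only method:   {keys_only_method:.2f} s")  # 7.81 s
```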

Page 33: Human-Computer Interaction Lecture 3: Text and gesture interaction

GESTURES

Page 34: Human-Computer Interaction Lecture 3: Text and gesture interaction
Page 35: Human-Computer Interaction Lecture 3: Text and gesture interaction
Page 36: Human-Computer Interaction Lecture 3: Text and gesture interaction
Page 37: Human-Computer Interaction Lecture 3: Text and gesture interaction

Take another look …

Page 38: Human-Computer Interaction Lecture 3: Text and gesture interaction

Goldberg – 1915

Page 39: Human-Computer Interaction Lecture 3: Text and gesture interaction
Page 40: Human-Computer Interaction Lecture 3: Text and gesture interaction

Dimond – 1957

Page 41: Human-Computer Interaction Lecture 3: Text and gesture interaction

Apple Newton – 1987

Page 42: Human-Computer Interaction Lecture 3: Text and gesture interaction

Palm Pilot – 1996

Page 43: Human-Computer Interaction Lecture 3: Text and gesture interaction

Dasher – 2000

Page 44: Human-Computer Interaction Lecture 3: Text and gesture interaction

“BREAKING” FITTS’ LAW

Page 45: Human-Computer Interaction Lecture 3: Text and gesture interaction

Recall Fitts’ law: movement time T depends on the distance D to the target and the target width W.

• Minimize D – minimize the average distance the pen or finger travels across the keyboard

• Maximize W – maximize the size of the user’s currently intended key (impossible?)

Page 46: Human-Computer Interaction Lecture 3: Text and gesture interaction

Minimising D with optimized keyboard layouts

[Two optimised keyboard layouts shown on the slide, with frequently used letters clustered around the centre (and around the Space key) to reduce travel distance.]
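To make “minimise D” concrete, here is a small sketch (not from the lecture): it scores a layout by the average distance travelled between consecutive keys, weighted by how often each letter pair occurs. The key coordinates and pair counts below are made-up placeholders; a real layout optimiser would search over many layouts to minimise this score.

```python
import math

# Hypothetical key centres (x, y) for a few letters of some candidate layout.
key_pos = {"t": (0, 0), "h": (1, 0), "e": (1, 1), "a": (0, 1), "n": (2, 1)}

# Hypothetical counts of consecutive letter pairs taken from a text corpus.
pair_counts = {("t", "h"): 50, ("h", "e"): 45, ("e", "a"): 10, ("a", "n"): 20}

def average_travel(positions, pairs):
    """Frequency-weighted mean distance the finger or pen moves per keystroke."""
    total = sum(pairs.values())
    return sum(n * math.dist(positions[a], positions[b])
               for (a, b), n in pairs.items()) / total

print(f"Expected travel per keystroke: {average_travel(key_pos, pair_counts):.2f}")
```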

Page 47: Human-Computer Interaction Lecture 3: Text and gesture interaction

The design context (Per Ola Kristensson)

• Minimize two interdependent variables:
– Time: mean time for users to achieve the task
– Error: mean error rate during the task

• Other design considerations:
– Is this technique easy to learn?
– Is this technique designed for all users or a subset of them?
– Does this technique rely on expensive high-quality sensors?
– Is this technique fun to use, or terribly tedious and boring?
– Does this technique depend on other expensive resources (such as high-quality language models)?
– Is this technique easy to integrate with the rest of the product system?

Page 48: Human-Computer Interaction Lecture 3: Text and gesture interaction

Can we do better?

• Optimized keyboards are hard to learn
• …and still bound by an inherent Fitts’ law speed–accuracy tradeoff
• …and users find them tedious

Page 49: Human-Computer Interaction Lecture 3: Text and gesture interaction

Case study: the “gesture keyboard”

Page 50: Human-Computer Interaction Lecture 3: Text and gesture interaction
Page 51: Human-Computer Interaction Lecture 3: Text and gesture interaction

The ShapeWriter gesture keyboard

• The vast majority of key combinations do not form valid words

• Let’s collect the ones that do into a large lexicon (> 60,000 words)

• Then create shapes on the keyboard for all words in the lexicon
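A toy sketch of that idea in Python (assumptions: made-up key coordinates, a three-word lexicon, and straight-line “ideal” shapes; the real ShapeWriter recogniser uses much richer shape and language models). Each lexicon word gets a template polyline through its letters’ key centres, and an input gesture is matched to the nearest template.

```python
import math

# Hypothetical key centres for a few QWERTY keys (arbitrary units).
KEYS = {"t": (4.5, 0.0), "h": (5.5, 1.0), "e": (2.5, 0.0),
        "o": (8.5, 0.0), "n": (6.0, 2.0)}

LEXICON = ["the", "then", "one"]   # toy lexicon; the slide quotes > 60,000 words

def resample(points, n=32):
    """Resample a polyline to n evenly spaced points along its length."""
    d = [0.0]                                  # cumulative arc length at each vertex
    for p, q in zip(points, points[1:]):
        d.append(d[-1] + math.dist(p, q))
    total = d[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(points) - 2 and d[j + 1] < target:
            j += 1
        seg = (d[j + 1] - d[j]) or 1.0
        t = (target - d[j]) / seg
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def template(word):
    """Ideal shape for a word: the polyline through its letters' key centres."""
    return resample([KEYS[c] for c in word])

def recognise(gesture):
    """Return the lexicon word whose template is closest to the input gesture."""
    g = resample(gesture)
    cost = lambda word: sum(math.dist(a, b) for a, b in zip(g, template(word)))
    return min(LEXICON, key=cost)

# A sloppy trace that roughly passes over t, h, e should come out as "the".
print(recognise([(4.4, 0.1), (5.4, 0.9), (2.6, 0.2)]))
```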

Page 52: Human-Computer Interaction Lecture 3: Text and gesture interaction

Words form patterns on the layout

Page 53: Human-Computer Interaction Lecture 3: Text and gesture interaction

Do users actually reach automaticity?

Page 54: Human-Computer Interaction Lecture 3: Text and gesture interaction

Do users actually reach automaticity?

Page 55: Human-Computer Interaction Lecture 3: Text and gesture interaction

Do novice users get it?

[Chart: entry rate (wpm) against practice (minutes) for novice users, comparing a thumb keyboard with continuous shape writing on a QWERTY layout. Fitted power-law curves: y = 18.663x^0.3061 (R² = 0.8121) and y = 33.84x^0.0212 (R² = 0.1754).]

Page 56: Human-Computer Interaction Lecture 3: Text and gesture interaction

Speed and accuracy

[Charts: entry rate (wpm) and proportion of time spent correcting errors over sessions 1–10, comparing a software keyboard with handwriting recognition.]

Page 57: Human-Computer Interaction Lecture 3: Text and gesture interaction
Page 58: Human-Computer Interaction Lecture 3: Text and gesture interaction

SPEECH, ERRORS AND DEIXIS

Page 59: Human-Computer Interaction Lecture 3: Text and gesture interaction

(videos, if time)

http://www.inference.phy.cam.ac.uk/kv227/videos/