Preference-based Search with Suggestions

Paolo Viappiani
Artificial Intelligence Lab (LIA), Ecole Polytechnique Federale de Lausanne (EPFL)
Visit at SRI International, August 9, 2007


Page 2

Overview

1. Preference-based search

2. Example-critiquing with Suggestions

3. Experimental results

4. Scalability and Implementation

5. Adaptive strategies

Page 3

Traditional commerce vs. electronic commerce

• Electronic commerce
  – Human-computer interactions
  – User interfaces
  – Saves time
  – Fixed interaction
  – No third dimension

• Traditional commerce
  – Human interactions
  – General outlook of the possibilities
  – Shop assistants increase the customer's awareness and enable serendipitous discoveries
  – Requires a long time and physical displacement

Page 4

User involvement

[Diagram: a spectrum of approaches by user involvement and knowledge — database queries, mixed-initiative systems, implicit recommender systems]

Page 5

Form-filling

• Example: actual scenario with a travel website (July 5th, 2006)
• The user wants to travel from Geneva to Dublin
• Return flight
• Preferences:
  – Outbound flight: arrive by 5pm
  – Inbound flight: arrive by 3pm
  – (Cheapest)

Page 6

(The user's reasoning while filling the form:)
"Swiss will be cheaper."
"To be there at 5pm, I should leave around noon."
"To arrive back at 3pm, I should leave in the morning."

Page 7

Example-based tools

Several proposed systems:
– FindMe (Burke et al. '97)
– SmartClient (Pu & Faltings '00)
– ExpertClerk (Shimazu '01)

The user expresses preferences as critiques on displayed examples; this feedback directs the next search cycle.

Motivation: users' preferences are often constructed while considering specific examples (Payne et al. '93; Slovic '95).

Interaction cycle:
1. Initial preferences
2. The system shows k examples
3. The user critiques the examples
4. The user picks the final choice
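The cycle above can be mocked in a few lines. This is a minimal sketch of my own; the options, the penalty functions, and all names are illustrative stand-ins, not the system described in the talk:

```python
# Minimal mock of the example-critiquing cycle: rank options by the total
# penalty of the stated preferences, show the top k, add a critique, repeat.
options = [
    {"name": "o1", "price": 250, "kg": 3.5},
    {"name": "o2", "price": 300, "kg": 2.4},
    {"name": "o3", "price": 400, "kg": 3.8},
]

def top_k(prefs, k=2):
    # rank options by the total penalty of all stated preferences; show the best k
    return sorted(options, key=lambda o: sum(p(o) for p in prefs))[:k]

# 1. initial preference: cheaper is better (normalized penalty)
prefs = [lambda o: o["price"] / 1000]
cycle1 = [o["name"] for o in top_k(prefs)]   # 2. the system shows k examples

# 3. the user critiques the examples: "I'd prefer something under 3 kg"
prefs.append(lambda o: 0.0 if o["kg"] < 3 else 1.0)
cycle2 = [o["name"] for o in top_k(prefs)]   # 4. the user picks from the new examples

print(cycle1, cycle2)
```

After the critique the ranking changes: the light but slightly more expensive option moves to the top, which is exactly the incremental refinement the cycle is meant to support.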

Page 8

Prominence effect

Choosing and matching give different results.

Program | Expected number of casualties | Cost   | % preferred (group 1) | % preferred (group 2)
X       | 500                           | (55 M) | 67                    | 4
Y       | 570                           | 12 M   | 33                    | 96

The cost of program X is unknown for group 2.
Preferences for group 1 are assessed by choosing; preferences for group 2 are acquired by matching, i.e. asking for the cost of program X at which the two programs would be equally preferred.

The "safety problem" experiment, 600 subjects (Tversky & Slovic, 1998).

Page 9

Anchoring effect

• Users are biased towards what is shown to them (Tversky 1974)
• Example: three laptops that all weigh around 3-4 kg; the user might never consider a lighter model
• Metaphor: local optimum. When all options look similar, the motivation to state additional preferences is low.

Page 10

Example-critiquing with Suggestions

Page 11

Suggestions

• Others have also recognized the need to help users consider potentially neglected attributes:
  – Show extreme examples (Linden '97)
  – Show diverse examples (Smyth & McGinty '03, McSherry '02)
• Problems:
  – Extremes might be unrealistic
  – Too many to choose from
  – Diversity does not mean interesting
  – Might introduce an even worse bias

Page 12

Model-based Suggestions

• We show suggestions that:
  – are based on the current preference model and its possible extensions
  – optimally stimulate preference expression
• Metaphor of active learning

Page 13

The lookahead principle

• Suggestions should not be optimal under the current preference model, but should have a high likelihood of becoming optimal when an additional preference is added.
• Implemented with Pareto-optimality, to avoid sensitivity to numerical errors.
• Display options that have a high probability of becoming Pareto-optimal.
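As a minimal sketch of the Pareto machinery this principle relies on (the penalty vectors below are invented for illustration; lower is better on every dimension):

```python
def dominates(a, b):
    """a Pareto-dominates b: a is no worse on every dimension and strictly
    better on at least one (penalty vectors, lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_optimal(options):
    """The options that no other option dominates."""
    return [o for o in options
            if not any(dominates(p, o) for p in options if p is not o)]

opts = [(1, 3), (2, 2), (3, 1), (3, 3)]
print(pareto_optimal(opts))   # (3, 3) is dominated, e.g. by (2, 2)
```

Checking every option against every other is what makes the naive computation quadratic — the scalability slides later relax exactly this step.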

Page 14

The model

• Preferences are order relations.
• A distribution over the possible "missing" preferences is used for suggestions.
• Discrete domains — a probability for each preference order:

  Preference order          Probability
  Furnished > unfurnished   0.60
  Unfurnished > furnished   0.40

• Continuous domains — a distribution p(θ) over the reference value θ of a penalty function on the attribute values ("I prefer the attribute to be less than θ").
• Suggestions are effective even with a uniform distribution (user studies).

Page 15

Probability of optimality Popt

• Pareto dominance is a partial order.
• To become Pareto-optimal, an option has to be better than all of its dominators w.r.t. a new preference.
• Let H(θ, o, o') = 1 if c(θ, o') > c(θ, o) and 0 otherwise (o is better than o' under the preference with parameter θ, where c is the penalty function).
• For a new preference on attribute a_i, integrate over the possible preferences and require o to be better than all dominators O^>:

  P_{a_i}(o, O^>) = ∫ p(θ) · Π_{o' ∈ O^>} H(θ, o, o') dθ

• Combining over the attributes:

  Popt(o) = 1 − Π_{a_i} (1 − P_{a_i}(o, O^>))
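This computation can be sketched by Monte Carlo sampling. The data and the threshold model below are my own toy construction (uniform prior over θ, 0/1 penalties), not the talk's implementation:

```python
import random

# Toy flight data: the stated preference is "lower fare is better".
# A hidden preference is modelled as "arrival <= theta", theta uniform on [8, 19].
options = {
    "o1": {"fare": 250, "arrival": 14.0},
    "o2": {"fare": 300, "arrival": 9.0},
    "o3": {"fare": 350, "arrival": 17.5},
}

def dominators(name):
    # With only the fare preference stated, the dominators of an option
    # are simply the options with a strictly lower fare.
    return [o for o in options.values() if o["fare"] < options[name]["fare"]]

def p_attr(opt, dom, attr, samples=10_000, lo=8.0, hi=19.0):
    # P_ai(o, O>): probability that a sampled preference "attr <= theta"
    # makes o strictly better than every dominator.
    if not dom:
        return 1.0
    hits = 0
    for _ in range(samples):
        theta = random.uniform(lo, hi)
        c = lambda o: 0.0 if o[attr] <= theta else 1.0   # penalty c(theta, o)
        if all(c(opt) < c(d) for d in dom):
            hits += 1
    return hits / samples

def p_opt(name, attrs=("arrival",)):
    # Popt(o) = 1 - prod_i (1 - P_ai(o, O>))
    dom = dominators(name)
    prod = 1.0
    for a in attrs:
        prod *= 1.0 - p_attr(options[name], dom, a)
    return 1.0 - prod

random.seed(0)
for name in options:
    print(name, round(p_opt(name), 2))
```

Here o1 has no dominators (Popt is trivially 1, so it is a candidate, not a suggestion), o2 is the useful suggestion (any θ between 9:00 and 14:00 makes it beat o1), and o3 can never escape its dominators under this model.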

Page 16

Example

[Screenshot of the example-critiquing interface: the stated preferences (PRICE < 500, DIST_UNIV < 10), the CANDIDATES, and the SUGGESTIONS]

Page 17

The user has to select a flight among a set of options; 4 attributes: fare, arrival time, departure airport, airline.

• Initial preference: lowest price → O1 is the highest ranked.
• Other (hidden) preferences: arrive by 12:00; leave from City airport.
⇒ O4 is the best compromise (TARGET option).

Option | Fare (a1) | Arrival (a2) | Airport (a3) | Airline (a4)
O1     | 250       | 14:00        | INT          | B
O2     | 300       | 9:00         | INT          | A
O3     | 350       | 17:30        | CITY         | B
O4     | 400       | 12:30        | CITY         | B
O5     | 550       | 18:30        | CITY         | B
O6     | 600       | 8:30         | CITY         | A

Page 18

The model-based suggestion strategy ranks the options according to P, the likelihood of becoming optimal when new preferences are stated:

  Popt(o) = 1 − Π_{a_i} (1 − P_{a_i}(o, O^>))

Option | Fare (a1) | Arrival (a2) | δ2   | Airport (a3) | δ3  | Airline (a4) | δ4  | P
O1     | 250       | 14:00        | -    | INT          | -   | B            | -   | -
O2     | 300       | 9:00         | 0.5  | INT          | 0   | A            | 0.5 | 0.437
O3     | 350       | 17:30        | 0.35 | CITY         | 0.5 | B            | 0   | 0.381
O4     | 400       | 12:30        | 0    | CITY         | 0   | B            | 0   | 0
O5     | 550       | 18:30        | 0.1  | CITY         | 0   | B            | 0   | 0.05
O6     | 600       | 8:30         | 0.05 | CITY         | 0   | A            | 0   | 0.025

• O1 is the best option w.r.t. the current model.
• O2 and O3 are the best suggestions to stimulate preferences over the other attributes.
• Extreme/diversity strategies would select O5 or O6.
• O4, the real best option, becomes highest ranked once the hidden preferences are considered.

Page 19

Evaluation with Simulations

Page 20

User Studies

Two versions of the tool:
– C: showing only Candidates at each interaction
– C+S: showing Candidates and Suggestions

Main objectives:
1. Decision accuracy: the percentage of times the user succeeded in finding the target
2. User effort: the task time a user takes to make the choice

Between-group / within-group experiments

Page 21

Hypotheses

1. Model-based suggestions → more complete preference models
2. Model-based suggestions → more accurate decisions
3. More complete preference models → more accurate decisions (1+2)
4. Question/answering → incorrect preference models and inaccurate decisions
5. Most preferences are stated when the user sees an additional opportunity, i.e. most critiques are positive reactions to the displayed options


Page 23

H1. Model-based suggestions lead to more complete preference models

• Interface C+S (3 candidates, 3 suggestions) vs. interface C
  – Users of the C+S interface stated more preferences
  – Incremental addition of preferences during use
• Interface C+S vs. interface C showing 6 candidates
  – When suggestions are present, users state more preferences (5.8 versus 4.8)

Page 24

Online user study

• FlatFinder was hosted on the laboratory server for one year
• Results were collected in log files; several hundred users
  – "Incomplete" interactions were filtered out

                          C      C + random options   C+S
Critiquing cycles         2.89         2.75           3.00
Initial preferences       2.39         2.72           2.23
Incremental preferences   0.64         0.88           1.46

Page 25

Decision accuracy & user effort

                        Form filling   Form & revisions   Example-critiquing   EC + suggestions
Decision accuracy (%)        25               35                 45                  75
Time                        2:45             5:30               8:09                7:39
Cycles                      1.0              2.2                5.6                 6.3

H2: Model-based suggestions lead to more accurate decisions.
H4: Question/answering leads to inaccurate decisions.

Page 26

H3. More complete preference models lead to more accurate decisions

• Users who found their target stated more preferences (5.57) than users who did not (4.88)
• More preference revisions → higher decision accuracy
  – People who found their targets made more revisions: 6.9 as opposed to 4.5, statistically significant (p=0.0439)
• Mediation analysis
  – The increase in accuracy is not only because the preference model is more complete

Within-group experiments: difference in the number of preferences between the two uses of the interface vs. difference in accuracy.

                  Δ|P| ≤ 0   Δ|P| > 0
Target found        0.45       0.83
Still not found     0.55       0.17

Page 27

H4. Question/answering leads to incorrect preferences and inaccurate decisions

• Form-filling is not effective
  – Only 25% decision accuracy
  – Incorrect means-objectives
• Average of 7.5 preferences
  – Stated before having considered any of the available options
  – Even after revisions, preferences were not retracted
• Example-critiquing:
  – Users begin with an average of only 2.7 preferences
  – They add an average of 2.6 to reach 5.3; 50% of the preferences were added during the interaction
  – Results suggest that volunteered preferences are more accurate

Page 28

Other observations

• Price
  – When suggestions are present, users were willing to pay 7% more
  – Within-group experiments: the majority of times the user switched choice, the last choice was more expensive
• Weights (attribute importance levels)
  – No correlation between the number of changes and the interface type

Page 29

Subjective evaluation

• We asked questions at the end of the interaction
• Example-critiquing is easier to use and more enjoyable, and makes users more confident about their choice, than form-filling
  – Results confirmed in the within-group experiments
• Suggestions do not make example-critiquing significantly easier to use or more enjoyable

Page 30

Preferred interface (within-group experiments)

[Bar chart, 0%–60%: fraction of users preferring form-filling, example-critiquing, example-critiquing with suggestions, or having no opinion, in two comparisons — example-critiquing with suggestions versus form-filling, and example-critiquing with versus without suggestions]

Page 31

H5. Most preferences are stated when the user sees an additional opportunity

Critique type              % of critiques
Positive critiques              79%
Fully positive critiques        63%
Pareto critiques                47%
Utilitarian critiques           36%
Negative critiques              21%

Page 32

Scalability and Implementation

Page 33

Large databases

Relaxation of the look-ahead principle. Goal: overcome the quadratic complexity of matching each option with its dominators.
1. Select suggestions from the top-k options [top-suggestions]
2. Replace Pareto-optimality with utility dominance [utilitarian-suggestions]
3. Assume the dominating options are a fixed number at the top [top-escape suggestions]

Page 34

Top suggestions

Suggestions are not evenly distributed; they are often at the top.

[Plot: the fraction of the database required to guarantee that respectively 50% and 80% of the suggestions are in the top positions — approximately O(n^1.2) for the given k]

Page 35

Top-escape suggestions

• Consider a few top options
• Maximize the probability Pesc of breaking the dominance with the top options
• Advantage: constant number of comparisons per option
• Problem: the suggestions might not be good enough under the existing preferences
  – High Popt → high Pesc
  – But not always the converse

Page 36

Configurable problems

• Configurable products consist of many parts
  – Constraint Satisfaction Problems (CSPs)
  – The constraints represent the feasible assignments
  – Preferences are soft constraints
• We need to generate
  – Candidate solutions: branch-and-bound techniques
  – Suggestions: top-escape strategy
• Suggestions are generated by solving a single optimization problem
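A toy branch-and-bound search over a weighted CSP can illustrate the candidate-generation step. The components, constraints, and penalty values below are my own illustrative construction, not the talk's system:

```python
# Toy weighted CSP: hard constraints define feasible configurations, soft
# constraints encode preferences as penalties, and branch and bound finds the
# minimum-penalty feasible configuration.
domains = {"cpu": ["fast", "slow"], "fan": ["big", "small"], "case": ["mini", "tower"]}

def feasible(assign):
    # hard constraints: a fast cpu needs a big fan; a big fan needs a tower case
    if assign.get("cpu") == "fast" and assign.get("fan") == "small":
        return False
    if assign.get("case") == "mini" and assign.get("fan") == "big":
        return False
    return True

# soft constraints: each preference charges a penalty; unassigned variables
# default to the preferred value, so the partial cost is an admissible bound
penalties = [
    lambda a: 0.0 if a.get("cpu", "fast") == "fast" else 1.0,   # prefer a fast cpu
    lambda a: 0.0 if a.get("case", "mini") == "mini" else 0.4,  # prefer a mini case
]

def cost(assign):
    return sum(p(assign) for p in penalties)

def branch_and_bound(assign, variables, best):
    _, incumbent_cost = best
    if not feasible(assign) or cost(assign) >= incumbent_cost:
        return best                        # infeasible, or pruned by the bound
    if not variables:
        return dict(assign), cost(assign)  # new incumbent
    var, rest = variables[0], variables[1:]
    for value in domains[var]:
        assign[var] = value
        best = branch_and_bound(assign, rest, best)
        del assign[var]
    return best

best_assign, best_cost = branch_and_bound({}, list(domains), ({}, float("inf")))
print(best_assign, best_cost)
```

The pruning relies on the partial cost being a lower bound on any completion, which holds here because assigning a variable can only keep or increase each penalty term.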

Page 37

Preference-based search in configurable catalogs

– Hard constraints → the feasible configurations
– Soft constraints → the preferences that are known
– Distribution of soft constraints → the unknown preferences

Page 38

Top-escape for CSPs

[Diagram] The CSP plus the known preferences are solved by branch and bound (B&B) on the soft CSP, yielding the top option s_top. An auxiliary weighted CSP is then built — same variables and hard constraints; for each variable v_i, the soft constraint gives each value d the weight prob(d > v_i(s_top)) under the distribution π_i. Solving it with B&B yields the top-escape suggestions.

Page 39

Relation between Pesc and Popt

[Diagram: the three cases in which s1 escapes s0. In the current situation s1 is dominated; after the new preference, s1 may end up dominated or Pareto-optimal (P.O.), depending on how it compares with s0 and the Pareto-optimal options s*]

Page 40

Relation between Pesc and Popt

We can express the probability of becoming optimal in terms of the probability of escaping the top solution:

  Popt = Pesc(s_top) · P(no s* ∈ S*: s* > s1 | s1 > s_top)

Top-escape suggestions have a high probability Pesc of escaping the top options, but they do not necessarily become Pareto-optimal.

Approximation: solve a WCSP that retrieves the S* that maximize the contribution.

Page 41

Iterative strategy

Algorithm:
– Start from a top-escape suggestion s1
– Repeat:
  • Generate a new set of dominators D* by solving a new auxiliary WCSP
    – The solutions of this problem are the dominators that contribute most in the formula of Popt
  • Calculate an approximate value of Popt, considering D* as the dominators
  • Consider the options in D* as possible suggestions
  • Stop when Popt does not increase anymore

Auxiliary weighted CSP:
– Hard constraints: dominate s1
– Soft constraints: the probability of being better than s1 when s1 is better than s0

Page 42

Adaptive strategies

Page 43

Learning by observing the user

• We want to improve suggestions by considering:
  1. A prior distribution derived from previous users
  2. Adaptation by learning from the user's responses
• Adaptive question-answering strategy: only ask questions that have impact
  – Chajewska (2000) chooses questions to maximize the value of information (VOI)
  – Boutilier (2002) considers the value of future questions

Page 44

Generation of Adaptive Suggestions

[Diagram: the user states preferences; the system keeps the current preferences and a probability distribution over the "missing" preferences, updates the distribution after each response, and generates adaptive suggestions from both]

Page 45

Bayesian update

Displayed options (example):
– 1200, Apartment, Furnished, Subway
– 900, Room, Not furnished, Subway

IF NO REACTION, the probability of the preference Trans = Subway is decreased:

  p(pref | critique) = p(critique | pref) · p(pref) / p(critique)

Predicates:
– State: the user expresses the preference
– Critique: the user has this preference?

The likelihood depends on the displayed options. Assumption: > Popt
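The update can be sketched numerically. The likelihood values below are assumed numbers for illustration, not from the talk:

```python
def bayes_update(prior, p_obs_given_pref, p_obs_given_no_pref):
    """p(pref | observation) by Bayes' rule."""
    evidence = p_obs_given_pref * prior + p_obs_given_no_pref * (1 - prior)
    return p_obs_given_pref * prior / evidence

# Hidden preference "Trans = Subway", prior 0.5. Assumption: a user who holds
# the preference reacts to a displayed subway option with probability 0.7,
# a user who does not with probability 0.1.
prior = 0.5
posterior_if_reaction = bayes_update(prior, 0.7, 0.1)      # reaction observed
posterior_if_no_reaction = bayes_update(prior, 0.3, 0.9)   # no reaction: complements
print(round(posterior_if_reaction, 3), round(posterior_if_no_reaction, 3))
```

With no reaction the posterior drops from 0.5 to 0.25, matching the slide's "IF NO REACTION … decreased"; a reaction raises it to 0.875.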

Page 46

Evaluation of Adaptive Suggestions

Simulations:
– Number of preferences discovered according to the lookahead principle
– Adaptive model-based suggestions perform even better than simple model-based suggestions

[Plot: preference discovery (0%–100%) vs. number of shown suggestions (1–5), comparing adaptive suggestions, model-based suggestions, diversity, and extremes]

Page 47

Conclusions and Contributions

• The emergence of e-commerce creates a need for personalized technologies
• Preference-based search: inefficiency of current web tools
  – Form filling achieves only 25% accuracy due to means-objectives
• Example critiquing: incremental preference acquisition
  – An interaction paradigm that avoids means-objectives
  – Increases user awareness; preferences are constructed
  – Any critique, at any time: the user states only the preferences they are sure about
  – The need for suggestions to avoid the anchoring effect

Page 48

Contributions/2

• Look-ahead principle
  – Preferences are stated when an opportunity is identified
• Model-based suggestions
  – Metaphor of active learning
  – Effectively stimulate users to express accurate preferences
  – Dramatic increase of decision accuracy, up to 70%
• Scalability
  – Large datasets: relaxation of the look-ahead principle
  – Configurable products: retrieve suggestions by solving a single optimization problem
• Adaptive suggestions
  – Inference about user behavior