Navigation Aided Retrieval Shashank Pandit & Christopher Olston Carnegie Mellon & Yahoo



TRANSCRIPT

Page 1: Navigation Aided Retrieval

Navigation Aided Retrieval

Shashank Pandit & Christopher Olston

Carnegie Mellon & Yahoo

Page 2: Navigation Aided Retrieval

Search & Navigation Trends

Users often search and then supplement the search by extensively navigating beyond the search page to locate relevant information.

Why?
- Query formulation problems
- Open-ended search tasks
- Preference for orienteering

Page 3: Navigation Aided Retrieval

Search & Navigation Trends

User behaviour in IR tasks is often not fully exploited by search engines:
- Content-based – words
- PageRank – in- and out-links for popularity
- Collaborative – clicks on results

Search engines do not examine these navigation patterns (though the authors fail to mention SearchGuide – Coyle et al. – which does).

Page 4: Navigation Aided Retrieval

NAR – Navigation Aided Retrieval

A new retrieval paradigm that incorporates post-query user navigation as an explicit component – NAR.

A query is seen as a means to identify starting points for further navigation by users.

The starting points are presented to the user in a result list, and they permit easy navigation to many documents which match the user's query.

Page 5: Navigation Aided Retrieval
Page 6: Navigation Aided Retrieval

NAR with Organic Structure

Structure naturally present in pre-existing web documents.

Advantages:
- Human oversight – human-generated categories etc.
- Familiar user interface – list of documents (i.e. result list)
- Single view of document collection
- Robust implementation – no semantic knowledge required

Page 7: Navigation Aided Retrieval

The model

D – set of documents in the corpus; T – user's search task; S_T – answer set for task T; Q_T – set of valid queries for task T

Query submodel – belief distribution over the answer set given a query: the likelihood that a document d solves the task (relevance).

Navigation submodel – the likelihood that a user starting at a particular document will be able to navigate (under guidance) to a document that solves the task.

Page 8: Navigation Aided Retrieval

Conventional probabilistic IR Model

No outward navigation considered

Probability of solving the task depends on whether there is a document in the document collection which solves the task

Probability of the document solving a task is based on its “relevance” to the query

Page 9: Navigation Aided Retrieval

Navigation-Conscious Model

Considers browsing as part of the search task

Query submodel – any probabilistic IR relevance ranking model

Navigation submodel – stochastic model of user navigation, WUFIS (Chi et al.)

Page 10: Navigation Aided Retrieval

WUFIS

W(N, d1, d2) – probability that a user with need N will navigate from d1 to d2.

- Scent is provided by anchor and surrounding text.
- The probability of a link being followed is related to how well a user's need matches the scent – similarity between weighted vectors of need terms and scent terms.
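The slides give only the idea of scent matching, not WUFIS's exact formula. A minimal sketch under stated assumptions: raw term counts as vector weights, cosine similarity between need and scent, and similarities normalised into a choice distribution over a page's outgoing links. The function name and input shapes are illustrative, not from the paper.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def link_follow_probs(need_terms, links):
    """Estimate, for each outgoing link, the probability that a user with
    the given information need follows it. The need and each link's scent
    (anchor plus surrounding text) are bags of words; similarities are
    normalised over all outgoing links.
    `links` maps link id -> list of scent terms (assumed shape)."""
    need_vec = Counter(need_terms)
    scores = {lid: cosine(need_vec, Counter(scent))
              for lid, scent in links.items()}
    total = sum(scores.values())
    if total == 0:
        # No scent matches at all: assume a uniform choice over links.
        n = len(links)
        return {lid: 1.0 / n for lid in links}
    return {lid: s / total for lid, s in scores.items()}
```

A link whose anchor text shares more terms with the user's need receives a proportionally higher follow probability.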

Page 11: Navigation Aided Retrieval

Final Model

Document's starting point score = Query submodel × Navigation submodel:

score(d, q) = Σ_{d' ∈ D} R(d', q) · W(N(d'), d, d')

Page 12: Navigation Aided Retrieval

Volant - Prototype

Page 13: Navigation Aided Retrieval

Volant - Preprocessing

Content Engine
R(d, q) – estimated by the Okapi BM25 scoring function

Connectivity Engine
Estimates the probability of a user with need N(d2) navigating from d1 to d2, starting with link dw.

Dijkstra's algorithm is used to generate tuples ⟨ d1, d2, dw, W(N(d2), d1, d2) ⟩
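The slides name Dijkstra's algorithm but not the edge weights. A plausible sketch, assuming the connectivity engine scores the most probable navigation path and that dw is its first hop: running Dijkstra over -log(p) edge weights turns the maximum-product path into a standard shortest-path problem. The function name and graph shape are assumptions for illustration.

```python
import heapq
import math

def best_paths(graph, src):
    """Most-probable navigation paths from `src` to every reachable page.
    `graph` maps page -> {neighbour: link-follow probability}.
    Returns page -> (path probability, first hop), where the first hop
    plays the role of d_w in Volant's tuples."""
    dist = {src: 0.0}          # -log probability of the best path found so far
    first_hop = {src: None}    # first link taken on that path
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, p in graph.get(u, {}).items():
            nd = d - math.log(p)  # multiplying probabilities = adding -logs
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                first_hop[v] = v if u == src else first_hop[u]
                heapq.heappush(heap, (nd, v))
    return {v: (math.exp(-d), first_hop[v]) for v, d in dist.items()}
```

For example, with links a→b (0.5), a→c (0.4), b→c (0.9), the indirect path a→b→c (probability 0.45) beats the direct link to c, so c's recorded first hop is b.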

Page 14: Navigation Aided Retrieval

Volant – Starting points

Query entered -> ranked list of starting points

1. Retrieve from the content engine all documents d' that are relevant to the query.

2. For each document d' retrieved in step 1, retrieve from the connectivity engine all documents d for which W(N(d'), d, d') > 0.

3. For each unique d, compute the starting point score.

4. Sort in decreasing order of starting point score
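The four steps above can be sketched as follows. The engine interfaces (`content_engine`, `connectivity_engine`) are assumed dictionary-returning callables standing in for Volant's actual components:

```python
def rank_starting_points(query, content_engine, connectivity_engine):
    """Rank documents as navigation starting points (steps 1-4 above).
    Assumed interfaces:
      content_engine(query) -> {d': R(d', q)} relevance scores;
      connectivity_engine(d') -> {d: W(N(d'), d, d')} navigation
      probabilities into d' (only entries with W > 0)."""
    scores = {}
    # Step 1: all documents d' relevant to the query.
    relevant = content_engine(query)
    for d_prime, rel in relevant.items():
        # Step 2: all d that can navigate to d' (W > 0).
        for d, w in connectivity_engine(d_prime).items():
            # Step 3: accumulate score(d, q) = sum over d' of R(d', q) * W.
            scores[d] = scores.get(d, 0.0) + rel * w
    # Step 4: sort in decreasing order of starting point score.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

A hub page that reaches several relevant documents can outscore any single relevant document, which is exactly the behaviour the model is after.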

Page 15: Navigation Aided Retrieval

Volant – Navigation Guidance

When a user is navigating, Volant intercepts the document and highlights links that lead to documents relevant to their query q.

1. Retrieve from the content engine all documents d' that are relevant to q.

2. For each d' retrieved, get from the connectivity engine the documents d that can lead to d', i.e. W(N(d'), d, d') > 0.

3. For each tuple retrieved in step 2, highlight the links that point to dw.
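The three guidance steps above can be sketched as a filter over the links on the page being viewed. The engine interfaces are assumptions mirroring the earlier tuple format, not Volant's real API:

```python
def links_to_highlight(query, current_doc_links, content_engine,
                       connectivity_engine):
    """Return the links on the current page worth highlighting: those whose
    target d_w is the first hop of a probable path to a query-relevant
    document. Assumed interfaces:
      content_engine(q) -> {d': relevance};
      connectivity_engine(d') -> iterable of (d, d_w, W) tuples, W > 0."""
    targets = set()
    for d_prime in content_engine(query):                # step 1
        for d, d_w, w in connectivity_engine(d_prime):   # step 2
            targets.add(d_w)                             # step 3: first hops
    return [link for link in current_doc_links if link in targets]
```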

Page 16: Navigation Aided Retrieval

Evaluation

Hypotheses

1. In query-only scenarios, Volant does not perform significantly worse than conventional approaches.

2. In combined query/navigation scenarios, Volant selects high-quality starting points.

3. In a significant fraction of query/navigation scenarios, the best organic starting point is of higher quality than one that can be synthesized using existing techniques.

Page 17: Navigation Aided Retrieval

Search Task Test Sets

Navigation-prone scenarios are difficult to predict, so the Simplified Clarity Score was used to determine a set of ambiguous and unambiguous queries.

Unambiguous – the 20 search tasks with highest clarity from TREC 2000

Ambiguous – 48 randomly selected tasks from TREC 2003
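The slides do not spell out the Simplified Clarity Score; a sketch following He & Ounis's definition, which the name suggests: the KL divergence between the query's maximum-likelihood language model and the collection model. The input shapes are assumptions for illustration.

```python
import math
from collections import Counter

def simplified_clarity_score(query_terms, collection_freq, collection_size):
    """Simplified Clarity Score: higher values indicate a less ambiguous
    query (its term distribution diverges more from the collection's).
    `collection_freq` maps term -> occurrences in the collection;
    `collection_size` is the total number of term occurrences."""
    counts = Counter(query_terms)
    qlen = len(query_terms)
    score = 0.0
    for term, c in counts.items():
        p_q = c / qlen                                   # P(w | query)
        p_coll = collection_freq.get(term, 0) / collection_size  # P(w | coll)
        if p_coll > 0:
            score += p_q * math.log2(p_q / p_coll)       # KL contribution
    return score
```

A query made of rare, specific terms scores high (unambiguous); one made of very common terms scores low (ambiguous), which is how the test sets above were separated.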

Page 18: Navigation Aided Retrieval

Performance on Unambiguous Queries

Mean Average Precision

No significant difference. Why? Relevant documents tended not to be siblings or close cousins, so Volant deemed that the best starting points were the documents themselves.

Page 19: Navigation Aided Retrieval

Performance on Ambiguous Queries

User study – 48 judges assessed the suitability of documents as starting points.

30 starting points were generated per task:
- 10 from the TREC 2003 winner (CSIRO)
- 10 from Volant with user guidance
- 10 from Volant without user guidance (the same starting points as the first 10 Volant)

Page 20: Navigation Aided Retrieval

Performance on Ambiguous Queries

Rating criteria:
- Breadth – spectrum of people, different interests
- Accessibility – how easy to navigate and find info
- Appeal – presentation of material
- Usefulness – would people be able to complete their task from this point

Each judge spent 5 hours on their task.

Page 21: Navigation Aided Retrieval

Results

Page 22: Navigation Aided Retrieval

Summary & Future Work

Effectiveness – responds to users and positions them at a suitable starting point for their task, then guides them to further information in a query-driven fashion.

Relationship to conventional IR – generalizes the conventional probabilistic IR model and succeeds in scenarios where IR techniques fail, e.g. ambiguous queries.

Page 23: Navigation Aided Retrieval

Discussion

Cold Start Problem

Scalability

Bias in Evaluation