Research on Recommender Systems: Beyond Ratings and Lists
Denis Parra, Ph.D. in Information Sciences
Assistant Professor, Computer Science Department, School of Engineering, Pontificia Universidad Católica de Chile
Tuesday, November 11th, 2014
Outline
• Personal Introduction
• Quick Overview of Recommender Systems
• My Work on Recommender Systems
  – Tag-Based Recommendation
  – Implicit Feedback (time allowing…)
  – Visual Interactive Interfaces
• Summary & Current & Future Work
Nov 11th 2014 D.Parra ~ JCC 2014 ~ Invited Talk 2
Personal Introduction
• I’m from Valdivia!
• There are many reasons to love Valdivia
The City
The Sports
The Animals
Personal Introduction
• B.Eng. and professional title of Civil Engineer in Informatics from Universidad Austral de Chile (2004), Valdivia, Chile
• Ph.D. in Information Sciences at University of Pittsburgh (2013), Pittsburgh, PA, USA
INTRODUCTION: Recommender Systems
* Danboard (Danbo): Amazon’s cardboard robot; in these slides it represents a recommender system
Recommender Systems (RecSys)
Systems that help (groups of) people to find relevant items in a crowded item or information space (McNee et al. 2006)
Why do we care about RecSys?
• RecSys have gained popularity as more and more domains and applications require people to make decisions among large sets of items.
A lil’ bit of History
• The first recommender systems were built in the early ’90s (Tapestry, GroupLens, Ringo)
• Online contests, such as the Netflix Prize (2006–2009), drew attention to recommender systems beyond Computer Science
The Recommendation Problem
• The recommendation problem has most often been framed as rating prediction:
• How good is my prediction?
         Item 1   Item 2   …   Item m
User 1     1        5              4
User 2     5        1              ?
…
User n     2        5              ?

Predict the missing "?" entries!
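Prediction quality in this framing is usually scored with RMSE over the held-out ratings. A minimal sketch (the predictions and ratings below are illustrative, not taken from the slide):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error over the rated (non-missing) cells."""
    pairs = [(p, a) for p, a in zip(predicted, actual) if a is not None]
    return math.sqrt(sum((p - a) ** 2 for p, a in pairs) / len(pairs))

# Hypothetical predictions vs. held-out ratings on a 1-5 scale;
# unknown cells ("?") are passed as None and skipped.
predictions = [4.5, 1.2, 3.8]
ratings = [5, 1, 4]
print(round(rmse(predictions, ratings), 4))  # lower is better
```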
Recommendation Methods
• Without covering all possible methods, the two most common ways of classifying recommender algorithms are:
Classification 1:
– Collaborative Filtering
– Content-based Filtering
– Hybrid
Classification 2:
– Memory-based
– Model-based
Collaborative Filtering (User-based KNN)
• Step 1: Finding Similar Users (Pearson Corr.)
[Figure: a small user–item rating matrix comparing the active user’s ratings with those of user_1, user_2, and user_3]
The user–user similarity is the Pearson correlation over co-rated items:

$$\mathrm{sim}(u,n)=\frac{\sum_{i\in CR_{u,n}}(r_{u,i}-\bar r_u)(r_{n,i}-\bar r_n)}{\sqrt{\sum_{i\in CR_{u,n}}(r_{u,i}-\bar r_u)^2}\;\sqrt{\sum_{i\in CR_{u,n}}(r_{n,i}-\bar r_n)^2}}$$

where CR_{u,n} is the set of items co-rated by users u and n. For the example matrix, the similarities to the active user are:

user_1: 0.4472136
user_2: 0.49236596
user_3: -0.91520863
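Step 1 (user similarity) can be sketched as follows. This is a minimal illustration with made-up rating profiles, not the exact matrix from the slide:

```python
import math

def pearson(u, v):
    """Pearson correlation over the items co-rated by users u and v.

    u, v: dicts mapping item id -> rating."""
    common = [i for i in u if i in v]
    if len(common) < 2:
        return 0.0
    mu = sum(u[i] for i in common) / len(common)
    mv = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    den = math.sqrt(sum((u[i] - mu) ** 2 for i in common)
                    * sum((v[i] - mv) ** 2 for i in common))
    return num / den if den else 0.0

# Hypothetical rating profiles (item -> rating)
active = {"a": 5, "b": 4, "c": 1}
other = {"a": 4, "b": 5, "c": 2}
print(round(pearson(active, other), 3))  # close to 1 = similar taste
```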
Collaborative Filtering (User-based KNN)
• Step 2: Ranking the items to recommend
[Figure: ratings of the active user, user_1, and user_2 on Item 1, Item 2, and Item 3; the unseen items are ranked for the active user]
The predicted rating is the active user’s mean plus the similarity-weighted deviations of the neighbors:

$$\mathrm{pred}(u,i)=\bar r_u+\frac{\sum_{n\in \mathrm{neighbors}(u)}\mathrm{userSim}(u,n)\,(r_{n,i}-\bar r_n)}{\sum_{n\in \mathrm{neighbors}(u)}\mathrm{userSim}(u,n)}$$
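Step 2 can be sketched as below; normalizing by the sum of absolute similarities is a common variant of the formula. The numbers are hypothetical:

```python
def predict(active_mean, neighbors):
    """pred(u, i): the active user's mean rating plus the similarity-weighted
    deviation of each neighbor's rating of item i from that neighbor's mean.

    neighbors: list of (similarity, neighbor_rating_of_i, neighbor_mean)."""
    den = sum(abs(s) for s, _, _ in neighbors)
    if den == 0:
        return active_mean
    num = sum(s * (r - m) for s, r, m in neighbors)
    return active_mean + num / den

# Hypothetical: two neighbors with similarities 0.8 and 0.5
print(round(predict(3.5, [(0.8, 5, 4.0), (0.5, 2, 3.0)]), 3))
```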
Pros/Cons of CF
PROS:
• Very simple to implement
• Content-agnostic
• More accurate than other techniques, such as content-based filtering
CONS:
• Sparsity
• Cold start
• New items
Content-Based Filtering
• Can be traced back to techniques from IR, where the user profile represents a query.
user_profile = {w_1, w_2, …, w_n}, weighted using TF-IDF
Doc_1 = {w_1, w_2, …, w_n}
Doc_2 = {w_1, w_2, …, w_n}
Doc_3 = {w_1, w_2, …, w_n}
Doc_n = {w_1, w_2, …, w_n}
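Matching a TF-IDF user profile against documents typically uses cosine similarity; a minimal sketch with made-up term weights:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical TF-IDF weights
user_profile = {"recsys": 0.9, "tags": 0.4}
doc_1 = {"recsys": 0.7, "evaluation": 0.5}
doc_2 = {"databases": 0.8}

# Rank documents by similarity to the profile (acting as the query)
scores = [("doc_1", round(cosine(user_profile, doc_1), 3)),
          ("doc_2", round(cosine(user_profile, doc_2), 3))]
print(sorted(scores, key=lambda p: -p[1]))
```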
PROS/CONS of Content-Based Filtering
PROS:
• New items can be matched without previous feedback
• It can also exploit techniques such as LSA or LDA
• It can use semantic data (ConceptNet, WordNet, etc.)
CONS:
• Less accurate than collaborative filtering
• Tends toward overspecialization
Hybridization
• Combine the previous methods to overcome their weaknesses (Burke, 2002)
C2. Model/Memory Classification
• Memory-based methods use the whole dataset at both training and prediction time; user-based and item-based CF are examples.
• Model-based methods build a model during training and use only that model at prediction time, which makes prediction much faster and more scalable.
Model-based: Matrix Factorization
[Figure: the rating matrix is factorized into a latent vector for each user and a latent vector for each item (SVD ~ Singular Value Decomposition)]
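A common way to learn the latent vectors is stochastic gradient descent on the observed ratings (the SVD-style approach popularized during the Netflix Prize). A self-contained sketch; the hyperparameters are illustrative assumptions:

```python
import random

def factorize(ratings, k=2, steps=3000, lr=0.01, reg=0.05, seed=0):
    """Learn latent user/item vectors by SGD over observed
    (user, item, rating) triples, with L2 regularization."""
    rnd = random.Random(seed)
    users = {u for u, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    P = {u: [rnd.uniform(-0.1, 0.1) for _ in range(k)] for u in users}
    Q = {i: [rnd.uniform(-0.1, 0.1) for _ in range(k)] for i in items}
    for _ in range(steps):
        for u, i, r in ratings:
            err = r - sum(pu * qi for pu, qi in zip(P[u], Q[i]))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# Hypothetical tiny dataset; the predicted rating is the dot product
P, Q = factorize([("u1", "i1", 5), ("u1", "i2", 1), ("u2", "i1", 4)])
print(sum(p * q for p, q in zip(P["u1"], Q["i1"])))  # approximates 5
```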
PROS/CONS of MF and latent factor models
PROS:
• So far, state of the art in terms of accuracy (these methods won the Netflix Prize)
• Performance-wise, the best option nowadays: slow at training time, O((m+n)^3), compared to O(m^2 n) for correlation, but linear at prediction time, O(m+n)
CONS:
• Recommendations are opaque: how do you explain that certain “latent factors” produced the recommendation?
Rethinking the Recommendation Problem
• Ratings are scarce: we need to exploit other sources of user preference
• User-centric recommendation takes the problem beyond ratings and ranked lists: evaluate user engagement and satisfaction, not only RMSE
• Several other dimensions to consider in the evaluation: novelty of the results, diversity, coverage (user and catalog), serendipity
• Study the effect of interface characteristics: user control, explainability
My Take on RecSys Research
My Work on RecSys
• Traditional RecSys: accurate prediction and Top-N algorithms
• In my research I have contributed to RecSys by:
  – Utilizing other sources of user preference (social tags)
  – Exploiting implicit feedback for recommendation and mapping it to explicit feedback
  – Studying user-centric evaluation: the effect of user controllability on user satisfaction in interactive interfaces
• And nowadays: studying whether virtual worlds are a good proxy for real-world recommendation tasks
This is not only my work :)
• Dr. Peter Brusilovsky University of Pittsburgh, PA, USA
• Dr. Alexander Troussov IBM Dublin and TCD, Ireland
• Dr. Xavier Amatriain TID / Netflix, CA, USA
• Dr. Christoph Trattner NTNU, Norway
• Dr. Katrien Verbert KU Leuven, Belgium
• Dr. Leandro Balby-Marinho UFCG, Brazil
TAG-BASED RECOMMENDATION
Tag-based Recommendation
• D. Parra, P. Brusilovsky. Improving Collaborative Filtering in Social Tagging Systems for the Recommendation of Scientific Articles. Web Intelligence 2010, Toronto, Canada
• D. Parra, P. Brusilovsky. Collaborative Filtering for Social Tagging Systems: an Experiment with CiteULike. ACM Recsys 2009, New York, NY, USA
Motivation
• Ratings are scarce. Find another source of user preference: social tagging systems
[Figure: a tagging instance links a User, a Resource, and Tags]
A Folksonomy
• When a user u adds an item i using one or more tags t1, …, tn, there is a tagging instance.
• The collection of tagging instances produces a folksonomy.
Applying CF over the Folksonomy
• First step: calculate user similarity
• Second step: incorporate the number of raters to rank the items (NwCF)
Traditional CF: Pearson correlation over ratings
Tag-based CF: BM25 over social tags
Tag-based CF
[Figure: the active user’s tag profile is treated as the query, and other users’ tag profiles (Doc_1, Doc_2, Doc_3) are ranked against it with BM25]
Okapi BM25
BM25: We obtain the similarity between users (neighbors) using their set of tags as “documents” and performing an Okapi BM25 (probabilistic IR model) Retrieval Status Value calculation.
NwCF weights the prediction by the number of raters of the item:

$$\mathrm{pred}'(u,i)=\log_{10}(1+nbr(i))\cdot \mathrm{pred}(u,i)$$

and the user similarity is the BM25 Retrieval Status Value:

$$\mathrm{sim}(u,v)=RSV=\sum_{t\in q} IDF_t\cdot\frac{(k_1+1)\,tf_{td}}{k_1\left((1-b)+b\,L_d/L_{ave}\right)+tf_{td}}\cdot\frac{(k_3+1)\,tf_{tq}}{k_3+tf_{tq}}$$

where tf_td is the tag frequency in the neighbor (v) profile and tf_tq is the tag frequency in the active user (u) profile.
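The RSV can be sketched directly as below. The `idf` table and `avg_len` would come from the whole folksonomy, and the k1, b, k3 defaults are the usual IR textbook values, not necessarily the ones used in the papers:

```python
def bm25_sim(user_tags, neighbor_tags, idf, avg_len, k1=1.2, b=0.75, k3=1.2):
    """Okapi BM25 RSV: the neighbor's tag profile plays the role of the
    "document" and the active user's tag profile the "query".

    user_tags, neighbor_tags: dicts tag -> frequency; idf: dict tag -> IDF."""
    d_len = sum(neighbor_tags.values())  # "document" length
    score = 0.0
    for t, tf_q in user_tags.items():
        tf_d = neighbor_tags.get(t, 0)
        if tf_d == 0:
            continue  # tag absent from the neighbor's profile
        doc_part = ((k1 + 1) * tf_d) / (k1 * ((1 - b) + b * d_len / avg_len) + tf_d)
        query_part = ((k3 + 1) * tf_q) / (k3 + tf_q)
        score += idf.get(t, 0.0) * doc_part * query_part
    return score

# Hypothetical tag profiles and IDF values
idf = {"recsys": 1.5, "tags": 0.8}
print(bm25_sim({"recsys": 2, "tags": 1}, {"recsys": 3}, idf, avg_len=3.0))
```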
Evaluation
Phase 2 dataset (full crawl):
  # users: 5,849
  # articles: 574,907
  # tags: 139,993
  # tagging instances: 2,337,571

After the filtering process:
  # users: 784
  # items: 26,599
  # tags: 26,009
  # posts: 71,413
  # annotations: 218,930
  avg # items per user: 91
  avg # users per item: 2.68
  avg # tags per user: 88.02
  avg # users per tag: 2.65
  avg # tags per item: 7.07
  avg # items per tag: 7.23

• Crawled over 38 days during June–July 2009
Cross-validation
• Train–validation–test sets, 10-fold cross-validation
• Training used to obtain the parameter K: neighborhood size
• One run of the experiment: ~12 hours
Results & Statistical Significance
• BM25 is intended to bring in more neighbors, at the cost of more noise (neighbors that are not so similar)
• NwCF helps to decrease noise, so it was natural to combine the two approaches
               CCF       NwCF      BM25+CCF   BM25+NwCF
MAP@10         0.12875   0.1432*   0.1876**   0.1942***
K (neigh.)     20        22        21         29
Ucov           81.12%    81.12%    99.23%     99.23%

Significance over the baseline: * p < 0.236, ** p < 0.033, *** p < 0.001
Take-aways
• We can exploit tags as a source of user similarity in recommendation algorithms
• Tag-based (BM25) similarity is an alternative to Pearson correlation for calculating user similarity in social tagging systems
• Incorporating the number of raters helped to decrease the noise produced by items with too few ratings
IMPLICIT FEEDBACK
Work with Xavier Amatriain
Implicit-Feedback
• These slides are based on two articles:
  – Parra-Santander, D., & Amatriain, X. (2011). Walk the Talk: Analyzing the Relation between Implicit and Explicit Feedback for Preference Elicitation. Proceedings of UMAP 2011, Girona, Spain.
  – Parra, D., Karatzoglou, A., Amatriain, X., & Yavuz, I. (2011). Implicit Feedback Recommendation via Implicit-to-Explicit Ordinal Logistic Regression Mapping. Proceedings of the CARS Workshop, Chicago, IL, USA.
Introduction (1/2)
• Most recommender system approaches rely on explicit information from users, but…
• Explicit feedback is scarce: people are not especially eager to rate or to provide personal info
• Implicit feedback is less scarce, but (Hu et al., 2008):
  – There is no negative feedback … and what if you watch a TV program just once or twice?
  – It is noisy … though explicit feedback is also noisy (Amatriain et al., 2009)
  – It captures confidence rather than preference … we aim to map implicit feedback to preference (our main goal)
  – It lacks evaluation metrics … but if we can map implicit to explicit feedback, we can have a comparable evaluation
Introduction (2/2)
• Is it possible to map implicit behavior to explicit preference (ratings)?
• Which variables best account for the number of times a user listens to online albums? [Baltrunas & Amatriain, CARS workshop at RecSys 2009]
• OUR APPROACH: a study with Last.fm users
  – Part I: ask users to rate 100 albums (how to sample them?)
  – Part II: build a model mapping the collected implicit feedback and context to explicit feedback
Walk the Talk (2011)
For each user we retrieved the albums they listened to during the last 7 days, 3 months, 6 months, year, and overall.
For each album in the list we obtained: # user plays (in each period), # global listeners, and # global plays.
Walk the Talk - 2
• Requirements: at least 18 years old, more than 5,000 scrobbles
Quantization of Data for Sampling
• What items should they rate? Item (album) sampling:
  – Implicit Feedback (IF): the user’s playcount for a given album, rescaled to [1–3] (3 = more listened to)
  – Global Popularity (GP): the global playcount over all users for a given album, rescaled to [1–3] (3 = more listened to)
  – Recentness (R): time elapsed since the user last played a given album, rescaled to [1–3] (3 = listened to more recently)
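One simple way to perform such a [1–3] quantization is by rank tertiles; a sketch under that assumption (the exact quantization used in the study may differ):

```python
def tertile_scale(values):
    """Map raw values to a 1-3 scale by rank tertiles (3 = highest third)."""
    ranked = sorted(range(len(values)), key=lambda i: values[i])
    scores = [0] * len(values)
    third = len(values) / 3
    for rank, idx in enumerate(ranked):
        scores[idx] = min(3, int(rank // third) + 1)
    return scores

playcounts = [3, 120, 18, 45, 7, 300]  # hypothetical per-album playcounts
print(tertile_scale(playcounts))  # → [1, 3, 2, 2, 1, 3]
```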
4 Regression Analysis
Models compared:
M1: implicit feedback
M2: implicit feedback & recentness
M3: implicit feedback, recentness, global popularity
M4: interaction of implicit feedback & recentness

• Including recentness increases R² by more than 10% [M1 → M2]
• Including GP increases R², though not by much compared to RE + IF [M1 → M3]
• Excluding GP but including the interaction between IF and RE improves the share of variance of the DV explained by the regression model [M2 → M4]
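The model comparison can be sketched by fitting nested OLS models and comparing R². The data below is synthetic (generated with an interaction effect) purely to illustrate the mechanics, not the study’s data:

```python
import numpy as np

def r_squared(X, y):
    """Fit OLS y ~ X (with intercept) and return the in-sample R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

rng = np.random.default_rng(0)
if_ = rng.integers(1, 4, 200).astype(float)  # quantized implicit feedback
re = rng.integers(1, 4, 200).astype(float)   # quantized recentness
# Synthetic ratings with a real interaction effect plus noise
rating = 1 + 0.6 * if_ + 0.4 * re + 0.3 * if_ * re + rng.normal(0, 0.5, 200)

m1 = r_squared(if_.reshape(-1, 1), rating)                    # M1: IF only
m4 = r_squared(np.column_stack([if_, re, if_ * re]), rating)  # M4: with interaction
print(m4 > m1)  # the richer model explains more variance
```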
4.1 Regression Analysis
• We tested conclusions of regression analysis by predicting the score, checking RMSE in 10-fold cross validation.
• Results of regression analysis are supported.
Model                                                    RMSE1    RMSE2
User average                                             1.5308   1.1051
M1: Implicit feedback                                    1.4206   1.0402
M2: Implicit feedback + recentness                       1.4136   1.034
M3: Implicit feedback + recentness + global popularity   1.4130   1.0338
M4: Interaction of implicit feedback × recentness        1.4127   1.0332
Conclusions of Part I
• Using a linear model, implicit feedback and recentness can help to predict explicit feedback (in the form of ratings)
• Global popularity does not show a significant improvement in the prediction task
• Our model can help to relate implicit and explicit feedback, making it possible to evaluate and compare explicit- and implicit-feedback recommender systems
Part II: Extension of Walk the Talk
• Implicit Feedback Recommendation via Implicit-to-Explicit OLR Mapping (RecSys 2011, CARS Workshop)
  – Consider ratings as ordinal variables
  – Use mixed models to account for the non-independence of observations
  – Compare with a state-of-the-art implicit-feedback algorithm
Recalling the 1st study (5/5)
• Prediction of ratings by multiple linear regression, evaluated with RMSE.
• Results showed that implicit feedback (the user’s play count for an album) and recentness (how recently the album was listened to) were important factors, while global popularity had a weaker effect.
• Results also showed that listening style (whether the user preferred single tracks, whole CDs, or either) was an important factor, while the remaining variables were not.
... but
• Linear Regression didn’t account for the nested nature of ratings
• And ratings were treated as continuous, when they are actually ordinal.
[Figure: each user’s ratings form their own nested group of observations, from User 1 through User n]
So, Ordinal Logistic Regression!
• Actually, mixed-effects ordinal multinomial logistic regression
• Mixed effects: account for the nested nature of ratings
• We obtain a distribution over ratings (ordinal multinomial) for each (user, item) pair and predict the rating using its expected value
• … and we can compare the inferred ratings with a method that uses implicit information (playcounts) directly for recommendation (Hu, Koren et al., 2008)
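Turning the per-(user, item) ordinal distribution into a predicted rating is just an expected value; a minimal sketch with a made-up distribution:

```python
def expected_rating(probs):
    """Expected rating given P(rating = r) for r = 1..len(probs)."""
    assert abs(sum(probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    return sum(r * p for r, p in enumerate(probs, start=1))

# Hypothetical distribution from the ordinal model for one (user, item) pair
print(expected_rating([0.05, 0.10, 0.20, 0.40, 0.25]))  # a value between 1 and 5
```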
Ordinal Regression for Mapping
• Model
• Predicted value
Datasets
• D1: users, albums, if, re, gp, ratings, demographics/consumption
• D2: users, albums, if, re, gp, NO RATINGS.
Results
Conclusions & Current Work
Problems / Challenges:
1. Ground truth: how many playcounts imply relevance? → sensitivity analysis needed
2. Quantization of playcounts (implicit feedback), recentness, and overall number of listeners of an album (global popularity) to a [1–3] scale vs. raw playcounts → modify and compare
3. Additional/alternative metrics for evaluation [MAP and nDCG were used in the paper]
VISUALIZATION + USER CONTROLLABILITY
Part of this work with Katrien Verbert
Visualization & User Controllability
• Motivation: can user controllability and explainability improve user engagement and satisfaction with a recommender system?
• Specific research question: how can intersections of the contexts of relevance (of recommendation algorithms) best be represented to improve the user experience with the recommender?
The Concept of Controllability
MovieLens: an example of a traditional recommendation list
Research Platform
• The studies were conducted using Conference Navigator, a Conference Support System
• Our goal was recommending conference talks
[Screenshot: Conference Navigator — Program, Proceedings, Author List, Recommendations]
http://halley.exp.sis.pitt.edu/cn3/
Hybrid RecSys: Visualizing Intersections
• Clustermap vs. Venn diagram
[Figure: side-by-side comparison of the Clustermap and Venn diagram visualizations]
TalkExplorer – IUI 2013
• Adaptation of the Aduna visualization to CN
• Main research question: does fusion (intersection) of contexts of relevance improve the user experience?
TalkExplorer - I
Entities: tags, recommender agents, users
TalkExplorer - II
• Canvas area: intersections of different entities
[Figure: clusters of talks associated with only one entity (a recommender or a user) and clusters at the intersection of several entities]
TalkExplorer - III
Items: talks explored by the user
Our Assumptions
• Items that are relevant in more than one aspect could be more valuable to users
• Displaying multiple aspects of relevance visually is important for users during item exploration
TalkExplorer Studies I & II
• Study I
– Controlled experiment: users were asked to discover relevant talks by exploring the three types of entities: tags, recommender agents, and users.
– Conducted at Hypertext and UMAP 2012 (21 users)
– Subjects familiar with visualizations and RecSys
• Study II
– Field study: users were left free to explore the interface.
– Conducted at LAK 2012 and ECTEL 2013 (18 users)
– Subjects familiar with visualizations, but not much with RecSys
![Page 70: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/70.jpg)
Evaluation: Intersections & Effectiveness
• What do we call an “intersection”?
• We used the number of explorations on intersections and their effectiveness, defined as:

Effectiveness = |bookmarked items| / |intersections explored|
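As a minimal sketch, the metric can be computed from an exploration log as below (the log format and values here are hypothetical, not the actual study data):

```python
# Sketch of the effectiveness metric: bookmarked items divided by
# intersections explored. Hypothetical exploration log, for illustration.

def effectiveness(explorations):
    """explorations: one dict per explored intersection, where
    'bookmarked' counts the items bookmarked from that intersection."""
    explored = len(explorations)
    bookmarked = sum(e["bookmarked"] for e in explorations)
    return bookmarked / explored if explored else 0.0

log = [
    {"entities": ("tag", "user"), "bookmarked": 1},
    {"entities": ("recommender",), "bookmarked": 0},
    {"entities": ("tag", "recommender", "user"), "bookmarked": 2},
    {"entities": ("tag",), "bookmarked": 1},
]
print(effectiveness(log))  # 4 bookmarks / 4 explored intersections = 1.0
```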
![Page 71: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/71.jpg)
Results of Studies I & II
• Effectiveness increases with intersections of more entities
• Effectiveness wasn’t affected in the field study (study 2)
• … but exploration distribution was affected
![Page 72: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/72.jpg)
SETFUSION: VENN DIAGRAM FOR USER-CONTROLLABLE INTERFACE
![Page 73: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/73.jpg)
SetFusion – IUI 2014
![Page 74: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/74.jpg)
SetFusion - I
Traditional ranked list: papers sorted by relevance. It combines 3 recommendation approaches.
![Page 75: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/75.jpg)
SetFusion - II
• Sliders: allow the user to control the importance of each data source or recommendation method
• Interactive Venn diagram: allows the user to inspect and filter the recommended papers. Actions available:
– Filter the item list by clicking on an area
– Highlight a paper by mousing over a circle
– Scroll to a paper by clicking on a circle
– Indicate bookmarked papers
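A minimal sketch of these two mechanisms — slider-weighted fusion of method scores, and selecting a Venn-diagram region for filtering. Method names (A, B, C), items, and scores are hypothetical; this is not the actual SetFusion implementation:

```python
# Sketch of SetFusion-style controllable hybridization.
# Hypothetical methods and scores, for illustration only.

def fuse(scores_by_method, weights):
    """Combine per-method relevance scores using user-set slider weights;
    return item ids ranked by fused score (highest first)."""
    fused = {}
    for method, scores in scores_by_method.items():
        w = weights.get(method, 0.0)
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)

def venn_region(scores_by_method, methods):
    """Items recommended by exactly the given set of methods — one
    region of the Venn diagram, usable as a click-to-filter area."""
    sets = {m: set(s) for m, s in scores_by_method.items()}
    region = set.intersection(*(sets[m] for m in methods))
    for m, items in sets.items():
        if m not in methods:
            region -= items  # exclude items also covered by other methods
    return region

scores = {
    "A": {"p1": 0.9, "p2": 0.4},
    "B": {"p2": 0.8, "p3": 0.6},
    "C": {"p1": 0.5, "p3": 0.7},
}
print(fuse(scores, {"A": 1.0, "B": 0.5, "C": 0.2}))  # ['p1', 'p2', 'p3']
print(venn_region(scores, {"A", "C"}))               # {'p1'}
```

Moving a slider simply changes one weight and re-runs the fusion, which is what makes the hybrid controllable by the user.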
![Page 76: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/76.jpg)
SetFusion – UMAP 2012
• Field study: users were free to explore the interface
– ~50% (50 users) tried the SetFusion recommender
– 28% (14 users) bookmarked at least one paper
– On average, users explored 14.9 talks and bookmarked 7.36
| Method | A | AB | ABC | AC | B | BC | C |
|---|---|---|---|---|---|---|---|
| Bookmarks | 15 | 7 | 9 | 26 | 18 | 4 | 17 |
| Share | 16% | 7% | 9% | 27% | 19% | 4% | 18% |

Distribution of bookmarks per method or combination of methods
![Page 77: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/77.jpg)
TalkExplorer vs. SetFusion
• Comparing distributions of explorations: in studies 1 and 2 with TalkExplorer we observed an important change in the distribution of explorations.
![Page 78: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/78.jpg)
TalkExplorer vs. SetFusion
• Comparing distributions of explorations across the field studies:
– In TalkExplorer, 84% of the explorations of intersections were performed over clusters of 1 item
– In SetFusion it was only 52%, compared to 48% (18% + 30%) over multiple intersections; the difference is not statistically significant
![Page 79: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/79.jpg)
Summary & Conclusions
• We presented two implementations of visual interactive interfaces that tackle exploration in a recommendation setting
• We showed that intersections of several contexts of relevance help to discover relevant items
• The visual paradigm used can have a strong effect on user behavior: we need to keep working on visual representations that promote exploration without increasing the cognitive load on users
![Page 80: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/80.jpg)
Limitations & Future Work
• Apply our approach to other domains (fusion of data sources or recommendation algorithms)
• For SetFusion, find alternatives to scale the approach to more than 3 sets; potential alternatives: clustering and radial sets
• Consider other factors that interact with user satisfaction: controllability by itself vs. a minimum level of accuracy
![Page 81: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/81.jpg)
More Details on SetFusion?
• Effect of other variables: gender, age, experience in the domain, or familiarity with the system
• Check our upcoming IJHCS paper, “User-controllable Personalization: A Case Study with SetFusion”: a controlled laboratory study of SetFusion versus a traditional ranked list
![Page 82: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/82.jpg)
CONCLUSIONS (& CURRENT) & FUTURE WORK
![Page 83: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/83.jpg)
Challenges in Recommender Systems
• Recommendation to groups
• Cross-domain recommendation
• User-centric evaluation
• Interactive interfaces and visualization
• Improve evaluation for comparison (P. Campos of U. of Bio-Bio on doing fair evaluations considering time)
• ML: active learning, multi-armed bandits (exploration/exploitation)
• Prevent the “filter bubble”
• Make algorithms resistant to attacks
![Page 84: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/84.jpg)
Are Virtual Worlds Good Proxies for the Real World?
• Why? We have a Second Life dataset with 3 connected dimensions of information: social network, marketplace, and virtual world
• 2 ongoing projects: entrepreneurship and LBSN
![Page 85: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/85.jpg)
Entrepreneurship
• Can we predict whether a user will create a store and how successful she/he will be? Literature in this area is extremely scarce.
• Data: social network + marketplace
• Collaborators: James Gaskin (SEM, causal models; BYU, USA), Stephen Zhang (entrepreneurship; PUC Chile), Christoph Trattner (social networks; NTNU, Norway)
![Page 86: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/86.jpg)
Location-Based Social Networks (LBSN)
• How similar are the patterns of mobility in the real world and the virtual world?
• Data: social network + virtual world
• Collaborators: Christoph Trattner (social networks; NTNU, Norway), Leandro Balby-Marinho (LBSN and RecSys; UFCG, Brasil)
![Page 87: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/87.jpg)
Other RecSys Activities
• I am part of the Program Committee of the 2015 RecSys Challenge. Don’t miss it!
» Is the user going to buy items in this session? Yes|No
» If yes, which items are going to be bought?
• I am also part of the team creating the upcoming RecSys Forum (like SIGIR Forum). Coming soon! (Alan Said, Cataldo Musto, Alejandro Bellogin, etc.)
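The two challenge tasks above can be illustrated with a toy sketch (hypothetical session format, feature names, and thresholds; a real entry would train models on the challenge data rather than use hand-set rules):

```python
# Toy sketch of the two session-based prediction tasks.
# Session format and thresholds are hypothetical, for illustration only.

def session_features(clicks):
    """clicks: list of (item_id, dwell_seconds) for one browsing session."""
    return {
        "n_clicks": len(clicks),
        "n_items": len({i for i, _ in clicks}),
        "max_dwell": max((d for _, d in clicks), default=0),
    }

def will_buy(clicks, min_clicks=3, min_dwell=30):
    """Task 1 (toy rule): is the user going to buy in this session?"""
    f = session_features(clicks)
    return f["n_clicks"] >= min_clicks and f["max_dwell"] >= min_dwell

def likely_items(clicks, min_dwell=30):
    """Task 2 (toy rule): which items might be bought — here,
    the ones the user dwelled on the longest."""
    return {i for i, d in clicks if d >= min_dwell}

session = [("i1", 12), ("i2", 45), ("i1", 33), ("i3", 5)]
print(will_buy(session))      # True: 4 clicks, max dwell 45s
print(likely_items(session))  # items with dwell >= 30s: i1 and i2
```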
![Page 88: Research on Recommender Systems: Beyond Ratings and Lists](https://reader036.vdocuments.site/reader036/viewer/2022081404/55943f141a28abf15b8b470f/html5/thumbnails/88.jpg)
THANKS! [email protected]