Optimizing Average Precision (Ranking) Incorporating High-Order Information



DESCRIPTION

Learning to Rank using High-Order Information. Puneet K. Dokania (1), A. Behl (2), C. V. Jawahar (2), M. Pawan Kumar (1). (1) Ecole Centrale Paris and INRIA Saclay, France; (2) IIIT Hyderabad, India.

TRANSCRIPT


Optimizing Average Precision (Ranking) Incorporating High-Order Information

Aim

Motivations and Challenges

High-Order Information

Action inside the bounding box? Context helps.

HOB-SVM

Encoding high-order information (joint feature map):

Parameter learning:

Ranking: sort the differences of max-marginal scores to obtain a single score per sample (see the sketch after this block).
Max-marginals capture high-order information.
Dynamic graph cuts allow fast computation of the max-marginals.
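The slide's scoring equation did not survive extraction, so the following is a minimal sketch of the ranking rule just described: each sample receives a single score equal to the difference of its two max-marginals, and samples are sorted by that score. The function name `rank_by_max_marginals` and the `max_marginal` callback are illustrative stand-ins of my own; in the poster the max-marginals come from dynamic graph cuts.

```python
def rank_by_max_marginals(samples, max_marginal):
    """Order samples by the difference of their max-marginal scores.

    `max_marginal(i, label)` is a hypothetical callback returning the best
    joint score over all labelings in which sample i is fixed to `label`
    (the poster computes these with dynamic graph cuts; any oracle works here).
    """
    scores = {i: max_marginal(i, +1) - max_marginal(i, -1) for i in samples}
    # A larger difference means stronger evidence the sample is positive,
    # so such samples are ranked first.
    return sorted(samples, key=lambda i: scores[i], reverse=True)
```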

HOAP-SVM

Encodes ranking and high-order information (AP-SVM + HOB-SVM):

Parameter learning:

Non-convex -> difference of convex functions -> CCCP (a generic CCCP sketch follows this block).
Dynamic graph cuts for a fast upper bound.
Ranking: sort scores.
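For readers unfamiliar with the acronym, the block below spells out the generic difference-of-convex / CCCP recipe that the line above refers to. This is the textbook update written with assumed symbols u, v, and w_t; it is not the poster's exact objective.

```latex
% Generic difference-of-convex / CCCP recipe (assumed form, not the poster's exact objective):
f(w) = u(w) - v(w), \qquad u,\,v \text{ convex}
% At iteration t, linearize the concave part -v at the current iterate w_t
% and solve the resulting convex upper bound:
w_{t+1} = \arg\min_{w}\; u(w) - \nabla v(w_t)^{\top} w
% Each such convex subproblem does not increase f, so the iterates
% converge to a local optimum of the non-convex objective.
```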

Results: Action Classification

Methods compared: SVM, AP-SVM, HOB-SVM, HOAP-SVM.

Problem formulation: given an image and a bounding box in the image, predict the action being performed in the bounding box.
Dataset: PASCAL VOC 2011, 10 action classes, 4846 images (2424 trainval + 2422 test images).
Features: POSELET + GIST.
High-order information: persons in the same image are likely to perform the same action, so bounding boxes belonging to the same image are connected (a grouping sketch follows).
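As a small illustration of the last point, the sketch below groups hypothetical bounding boxes by their image and connects every within-image pair. The variable names and the example boxes are made up for illustration; they are not from the dataset.

```python
from collections import defaultdict

# Hypothetical box records: (image_id, box_id). The high-order term on the
# slide connects all person boxes that come from the same image, so we group
# boxes by image and emit every within-image pair as an edge.
boxes = [("img_001", 0), ("img_001", 1), ("img_001", 2), ("img_002", 0)]

by_image = defaultdict(list)
for image_id, box_id in boxes:
    by_image[image_id].append((image_id, box_id))

edges = [
    (a, b)
    for group in by_image.values()
    for i, a in enumerate(group)
    for b in group[i + 1:]
]
print(edges)  # three edges inside img_001, none for the lone box of img_002
```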

Conclusions

Learning to Rank using High-Order Information
Puneet K. Dokania (1), A. Behl (2), C. V. Jawahar (2), M. Pawan Kumar (1)
(1) Ecole Centrale Paris and INRIA Saclay, France; (2) IIIT Hyderabad, India

Average Precision Optimization

Example: two predictors can have the same accuracy (accuracy = 1) yet very different AP (AP = 1 versus AP = 0.55); see the sketch below.
AP is the most commonly used evaluation metric.
The AP loss depends on the ranking of the samples.
Optimizing the 0-1 loss may therefore lead to a suboptimal AP.
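One way to read the slide's figure is a multi-class setting where every argmax prediction is correct (accuracy = 1) while the ranking induced by one class's scores is still imperfect, so that class's AP drops. The code below is a minimal, made-up Python illustration of that effect; the sample scores and the resulting 0.83 AP are my own invention, not the slide's numbers.

```python
import numpy as np

def average_precision(labels, scores):
    """AP for one class: mean of precision@k taken at the ranks of the positives."""
    order = np.argsort(-scores)            # rank samples by descending class score
    labels = labels[order]
    hits = np.cumsum(labels)               # positives retrieved up to each rank
    precision_at_pos = hits[labels == 1] / (np.flatnonzero(labels == 1) + 1.0)
    return precision_at_pos.mean()

# Hypothetical 3-sample, 2-class example: every argmax prediction is correct
# (multi-class accuracy = 1), yet the ranking by class-0 scores is imperfect,
# so AP for class 0 is below 1.
true_class = np.array([0, 0, 1])
scores = np.array([
    [0.9, 0.1],   # class-0 sample, ranked 1st for class 0
    [0.3, 0.2],   # class-0 sample, ranked 3rd for class 0
    [0.6, 0.7],   # class-1 sample whose class-0 score beats the 2nd positive
])
accuracy = (scores.argmax(axis=1) == true_class).mean()
ap_class0 = average_precision((true_class == 0).astype(int), scores[:, 0])
print(accuracy)   # 1.0
print(ap_class0)  # (1/1 + 2/3) / 2 = 0.83 < 1 despite perfect accuracy
```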

Notations

Samples:
Labels:
Set of positive samples:
Set of negative samples:
Ranking matrix:

Loss function:
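The symbol definitions were part of the slide graphics and are missing from the transcript. The block below fills them in with the standard AP-SVM notation, which is an assumption on my part, consistent with the AP loss named above.

```latex
% Assumed (standard AP-SVM) notation for the items listed above:
\mathbf{X} = \{x_1,\dots,x_n\}                                   % samples
\mathbf{Y} = \{y_1,\dots,y_n\},\quad y_i \in \{+1,-1\}           % labels
\mathcal{P} = \{i : y_i = +1\},\qquad \mathcal{N} = \{i : y_i = -1\}    % positive / negative sets
R_{ij} \in \{+1,-1\},\quad i \in \mathcal{P},\; j \in \mathcal{N}       % ranking matrix: +1 iff i is ranked above j
\Delta(\mathbf{R}^{*},\mathbf{R}) = 1 - \mathrm{AP}(\mathbf{R}^{*},\mathbf{R})  % AP loss w.r.t. the ground-truth ranking R*
```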

AP-SVM

Key idea: uses SSVM to encode the ranking (joint score):

Parameter learning:
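The joint score and learning problem were likewise equations on the slide. A hedged reconstruction that follows the standard AP-SVM formulation (a structured SVM with margin rescaling) is:

```latex
% Joint score of a ranking R (standard AP-SVM form, assumed here):
S(\mathbf{X},\mathbf{R};w) = \frac{1}{|\mathcal{P}||\mathcal{N}|}
    \sum_{i \in \mathcal{P}} \sum_{j \in \mathcal{N}} R_{ij}\, w^{\top}\bigl(\psi(x_i)-\psi(x_j)\bigr)

% Structured-SVM learning with margin rescaling and the AP loss:
\min_{w}\; \frac{\lambda}{2}\lVert w\rVert^{2}
    + \max_{\mathbf{R}}\Bigl[\Delta(\mathbf{R}^{*},\mathbf{R})
    + S(\mathbf{X},\mathbf{R};w) - S(\mathbf{X},\mathbf{R}^{*};w)\Bigr]
```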

Ranking: sort the sample scores.

Optimizes AP (a measure of ranking quality).

Optimization: convex. Cutting plane -> most violated constraint found greedily in O(|P||N|).

HOB-SVM: incorporates high-order information, but optimizes a decomposable loss.
For example, persons in the same image are likely to have the same action.

HOB-SVM: ranking? Use max-marginals.

HOAP-SVM: incorporates high-order information and optimizes an AP-based loss.
Joint score similar to AP-SVM; sample scores similar to HOB-SVM (max-marginals).

Methods:

Method     Loss          High-Order Information   Ranking   Objective
SVM        0-1           No                        Yes       Convex
AP-SVM     AP based      No                        Yes       Convex
HOB-SVM    Decomposable  Yes                       Yes       Convex
HOAP-SVM   AP based      Yes                       Yes       Non-convex (difference of convex)

AP does not decompose: high-order information + ranking -> no existing method. AP-SVM, in particular, uses no high-order information.

Results

Paired t-test:
HOB-SVM is better than SVM in 6 action classes.
HOB-SVM is not better than AP-SVM.
HOAP-SVM is better than SVM in 6 action classes.
HOAP-SVM is better than AP-SVM in 4 action classes.

Code and data: http://cvn.ecp.fr/projects/ranking-highorder/
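The comparisons above rest on paired t-tests over per-class APs. The sketch below shows how such a test can be run with SciPy; the AP values are placeholder numbers invented for illustration, not the authors' reported results.

```python
from scipy import stats

# Placeholder per-class AP values for two methods on the 10 PASCAL VOC action
# classes -- purely illustrative, NOT the numbers reported by the authors.
ap_svm      = [0.60, 0.55, 0.70, 0.48, 0.52, 0.66, 0.58, 0.61, 0.50, 0.63]
ap_hoap_svm = [0.63, 0.57, 0.71, 0.50, 0.51, 0.69, 0.60, 0.64, 0.52, 0.65]

# Paired t-test: the same classes are evaluated under both methods,
# so we test whether the per-class AP differences are centered at zero.
t_stat, p_value = stats.ttest_rel(ap_hoap_svm, ap_svm)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```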
