Probabilistic Scene Grammars: A General-Purpose Framework for Scene Understanding
Jeroen Chua
Advisor: Pedro Felzenszwalb
Why a general-purpose framework?
• Improvements can boost performance on many scene understanding tasks simultaneously
• Less engineering/research work for future scene understanding tasks
• Scene understanding tasks inform each other
• Scientifically interesting: can all (most) of scene understanding be understood as the same fundamental problem?
This thesis
• Motivation for a general scene understanding framework
• Background/related work
• Representation for general scene understanding tasks
• Efficient approximate inference algorithm
• Notes on other possible inference algorithms
• Connections to related work
• Learning algorithm to estimate model parameters
• Experimental evaluation
• Extensions for larger/more complex tasks
• Directions for future research
This talk
• Motivation for a general scene understanding framework
• Background/related work
• Representation for general scene understanding tasks
• Efficient approximate inference algorithm
• Learning algorithm to estimate model parameters
• Experimental evaluation
• Extensions for larger/more complex tasks
• Directions for future research
Contextual information is often useful
Context helps for _______________
Context helps for object recognition
Context helps for contour detection
Context helps for image segmentation
Related work
• Contextual information
  • Oliva and Torralba [1,2], Efros [3]
  • Empirical studies [4]
  • Gestalt theory [5]
• Motivation: Probabilistic Programming Languages [6,7,8]
[1] “Modelling the shape of the scene: a holistic representation of the spatial envelope”, IJCV 2001
[2] “The role of context in object recognition”, Trends in Cognitive Sciences, 2007
[3] “Unsupervised visual representation learning by context prediction”, ICCV 2015
[4] “An empirical study of context in object detection”, CVPR 2009
[5] Vision science: Photons to phenomenology, volume 1. MIT press, 1999
• Modelling
  • Pictorial Structures [9]
  • DPM [10]
  • “Markov backbone” model [11]
• Probabilistic inference for compositional models
  • Markov-chain Monte Carlo
  • Heuristics
[6] “Picture: A probabilistic programming language for scene perception”, CVPR 2015
[7] “Edward: A library for probabilistic modeling, inference, and criticism”, arXiv 2016
[8] “The Design and Implementation of Probabilistic Programming Languages”, http://dippl.org 2014
[9] “Pictorial Structures for object recognition”, IJCV 2005
[10] “Object detection with grammar models”, NIPS 2011
[11] “Context and hierarchy in a probabilistic image model”, CVPR 2006
This talk
• Motivation for a general scene understanding framework
• Background/related work
• Representation for general scene understanding tasks
• Efficient approximate inference algorithm
• Learning algorithm to estimate model parameters
• Experimental evaluation
• Extensions for larger/more complex tasks
• Directions for future research
The Probabilistic Scene Grammar Framework
[Pipeline diagram: Probabilistic Scene Grammar → Factor graph → Inference → Learning (approximate EM algorithm), with factor-graph icons showing a variable X attached to factors f1, f2]
Probabilistic Scene Grammars
• Context-free stochastic grammar
  • Symbols, e.g. {Face, Eye, Conversation}
  • Pose space, e.g. {location, orientation, scale}
  • Production rules, e.g. Face → {Eye, Eye, Nose, Mouth}
  • Production rule probabilities
• Geometric relationships between objects/parts
• “Self-rooting” parameter
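To make these ingredients concrete, here is a minimal sketch of how a grammar like the face example below could be written down in Python. The class and field names are illustrative assumptions, not the thesis implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    lhs: str        # symbol being expanded
    rhs: tuple      # symbols produced; () means the symbol expands to nothing
    prob: float     # production rule probability
    spatial: str    # type of spatial distribution placing children around the parent

# Symbols and a discrete pose space (here just 2-D grid locations).
symbols = ("Face", "Eye", "Nose", "Mouth", "Eyelashes")
pose_space = [(x, y) for x in range(64) for y in range(64)]

rules = (
    Rule("Face", ("Eye", "Eye", "Nose", "Mouth"), 1.0, "uniform"),
    Rule("Eye", ("Eyelashes",), 0.5, "independent-bernoullis"),
    Rule("Eye", (), 0.5, "none"),
)

# "Self-rooting" parameters: the probability that each symbol appears in the
# scene spontaneously, rather than being generated as a part of another object.
self_rooting = {"Face": 1e-3, "Eye": 1e-5, "Nose": 1e-5,
                "Mouth": 1e-5, "Eyelashes": 1e-6}
```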
Generative example: Faces
Symbols: Face (F), Eye (E), Nose (N), Mouth (M), Eyelashes (L)

Rule             P(rule)   Spatial distribution type
F → E, E, N, M   1.0       Uniform
E → L            0.5       Indep. Bernoullis, 50%
E → ∅            0.5       -
N → ∅            1.0       -
M → ∅            1.0       -
L → ∅            1.0       -
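Read generatively, the table above describes a sampler. The sketch below draws one scene from this face grammar; the part offsets are invented stand-ins for the learned geometric relationships.

```python
import random

# Hypothetical child offsets; in the model these come from the learned
# geometric (spatial) distributions relating parent and child poses.
PART_OFFSETS = {"Eye": [(-2, -2), (2, -2)], "Nose": [(0, 0)], "Mouth": [(0, 2)]}

def expand(symbol, pose, scene):
    """Recursively expand one (symbol, pose) instance of the scene."""
    scene.append((symbol, pose))
    x, y = pose
    if symbol == "Face":
        # F -> E, E, N, M with probability 1.0
        for part, offsets in PART_OFFSETS.items():
            for dx, dy in offsets:
                expand(part, (x + dx, y + dy), scene)
    elif symbol == "Eye" and random.random() < 0.5:
        # E -> L with probability 0.5; otherwise E -> nothing
        expand("Eyelashes", (x, y), scene)
    # N, M, and L always expand to nothing.

scene = []
expand("Face", (10, 10), scene)
print(scene)   # e.g. [('Face', (10, 10)), ('Eye', (8, 8)), ...]
```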
Probabilistic Scene Grammar Specification
[Figure: grammar specifications for the contour detection, face localization, and binary image segmentation grammars]
Probabilistic Scene Grammar Samples
[Figure: samples from the contour detection, face localization, and binary image segmentation grammars]
The Probabilistic Scene Grammar Framework
[Pipeline diagram, “Factor graph” stage: Probabilistic Scene Grammar → Factor graph → Inference → Learning (approximate EM algorithm)]
Factor Graphs
Encodes a distribution over random variables
[Diagram: a factor graph with variables X1, X2, X3 and factors f1, f2]
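In symbols (this is the standard factor-graph definition, not anything specific to the thesis): a factor graph over variables $x_1, \dots, x_n$ with factors $f_a$ encodes

$$p(x_1, \dots, x_n) = \frac{1}{Z} \prod_a f_a(x_{\partial a}), \qquad Z = \sum_{x} \prod_a f_a(x_{\partial a}),$$

where $x_{\partial a}$ denotes the variables attached to factor $a$. In the pictured graph, each of $f_1$ and $f_2$ multiplies in a local function of only the variables it touches among $X_1, X_2, X_3$.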
Distributions of the PSG
Three types of distributions:
1) Categorical: which production rule to apply, and which child pose to choose?
   (e.g. E → L vs. E → ∅, each with probability 0.5)
2) Independent Bernoullis: which children to choose?
3) Leaky-or: is the object present?
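A minimal sketch of the leaky-or, assuming the standard noisy-or-with-leak form (the exact parameterization in the thesis may differ): the object stays absent only if the leak and every active parent all fail to generate it.

```python
def leaky_or_prob(parent_causes, leak):
    """P(X = 1 | parents): parent_causes[i] is the probability that active
    parent i generates X, and `leak` lets X appear with no parent at all
    (the "self-rooting" behaviour). X is off only if everything fails."""
    p_off = 1.0 - leak
    for p in parent_causes:
        p_off *= 1.0 - p
    return 1.0 - p_off

# Two candidate parent faces, each generating this eye with probability 0.9,
# plus a small leak:
print(leaky_or_prob([0.9, 0.9], leak=0.001))   # -> 0.99001
```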
Framework: As a Factor Graph
Binary random variables:
• X: presence/absence of object
• Ri: choose rule i
• Cj: create child j
Factors:
• f1: leaky-or factor
• f2: selection factor
• f3: Bernoullis factor
• fd: data model factor
[Diagram: X attached to f1 (connections with super-parts), fd, and f2; f2 selects among rule variables R1, R2; R2 attached via f3 to child variables C1, C2, C3 (connections with subparts)]
Framework: As a Factor Graph
• Symbols: Face, Eye, Eyelashes
• 1-D pose spaces
• Spatial neighbourhoods:
  • Face(y) → Eye(y'), y' ∈ {y-1, y, y+1}
  • Eye(y) → Eyelashes(y'), y' ∈ {y-1, y, y+1}
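A rough sketch of how this tiny example unrolls into a graph: one binary variable per (symbol, pose), with parent-child connections restricted to the spatial neighbourhoods. Counting incidences this way is illustrative; the real factor graph also has the rule and child variables shown above.

```python
POSES = range(10)            # a 1-D pose space
NEIGHBOURHOOD = (-1, 0, 1)   # y' = {y-1, y, y+1}

variables = [(sym, y) for sym in ("Face", "Eye", "Eyelashes") for y in POSES]

connections = []
for y in POSES:
    for dy in NEIGHBOURHOOD:
        if 0 <= y + dy < len(POSES):
            connections.append((("Face", y), ("Eye", y + dy)))
            connections.append((("Eye", y), ("Eyelashes", y + dy)))

print(len(variables), "variables,", len(connections), "parent-child connections")
# -> 30 variables, 56 parent-child connections
```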
The Probabilistic Scene Grammar Framework
[Pipeline diagram, “Inference” stage: Probabilistic Scene Grammar → Factor graph → Inference → Learning (approximate EM algorithm)]
Approximate Inference
• Given an image, what objects are in the image and what are their parts?
• Run Loopy Belief Propagation on the factor graph
  • Compute (approximate) posterior quantities, p(· | Image)
  • In general: exponential time. Our case: linear time.
[Diagram: a factor f attached to variables X1, …, XN]
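The linear-time claim is what makes this practical: a factor attached to N binary variables naively requires summing over all 2^N joint states, but for the factor types above the sum collapses into running products. The sketch below shows the collapse for a plain OR factor; the thesis derives analogous linear-time messages for its leaky-or, selection, and Bernoullis factors.

```python
def or_factor_message_to_output(incoming):
    """Belief-propagation message from a deterministic OR factor to its output
    X, where X = 1 iff at least one input C_i = 1. `incoming` holds (m0, m1)
    messages from each C_i. Instead of enumerating 2^N input configurations,
    accumulate two products in O(N)."""
    all_states, all_zero = 1.0, 1.0
    for m0, m1 in incoming:
        all_states *= m0 + m1   # sum over both states of C_i
        all_zero *= m0          # only the configurations with every C_i = 0
    return all_zero, all_states - all_zero   # (mu(X=0), mu(X=1))

print(or_factor_message_to_output([(0.5, 0.5), (0.9, 0.1)]))   # (0.45, 0.55)
```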
The Probabilistic Scene Grammar Framework
[Pipeline diagram, “Learning” stage: Probabilistic Scene Grammar → Factor graph → Inference → Learning (approximate EM algorithm)]
Learning: Approximate EM algorithm
• Goal: estimate model parameters
  • Rule probabilities
  • Geometric relationships
  • Leaky-or parameter
• Expectation-step: exact posterior quantities are intractable to compute, so use the approximate posteriors computed by Loopy Belief Propagation
• Maximization-step: sums involving those posterior quantities (see the sketch below)
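Schematically, the M-step for rule probabilities is a ratio of expected counts, with the expectations read off the approximate BP marginals. The function below is a sketch under that assumption; the data structures are invented for illustration.

```python
def m_step_rule_probs(rule_marginals, rules_by_symbol):
    """rule_marginals[(rule, pose)]: approximate posterior probability (from
    loopy BP) that `rule` was applied at `pose`. The new probability of each
    rule is its expected usage count, normalized over the rules that share a
    left-hand-side symbol."""
    new_probs = {}
    for lhs, rules in rules_by_symbol.items():
        counts = {r: sum(p for (rr, _pose), p in rule_marginals.items() if rr == r)
                  for r in rules}
        total = sum(counts.values())
        for r in rules:
            new_probs[r] = counts[r] / total if total > 0 else 1.0 / len(rules)
    return new_probs

# E -> L with expected count 1.5, E -> nothing with 0.5  =>  0.75 / 0.25
marginals = {(("E", "L"), 0): 0.9, (("E", "L"), 1): 0.6, (("E", ""), 0): 0.5}
print(m_step_rule_probs(marginals, {"E": [("E", "L"), ("E", "")]}))
```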
This talk
• Motivation for a general scene understanding framework
• Background/related work
• Representation for general scene understanding tasks
• Efficient approximate inference algorithm
• Learning algorithm to estimate model parameters
• Experimental evaluation
• Extensions for larger/more complex tasks
• Directions for future research
PSG framework is competitive with some specialized frameworks
Application: contour detection
• Dataset:
  • Ground truth: human-drawn object boundary contours from the Berkeley Segmentation Dataset [1]
  • B(x,y): binary value indicating whether this pixel belongs to a contour
  • D(x,y): pixel intensity
  • Data model: D(x,y) ~ N(μ_{B(x,y)}, σ) (see the likelihood sketch below)
[1] Arbelaez, Maire, Fowlkes, Malik. “Contour Detection and hierarchical image segmentation”, PAMI 2011.
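Under this data model, each pixel contributes an independent Gaussian likelihood whose mean depends on whether the pixel lies on a contour. A sketch follows; the particular μ and σ values are made up, whereas in the thesis they are estimated model parameters.

```python
import numpy as np

def pixel_log_likelihood(D, B, mu=(0.8, 0.2), sigma=0.1):
    """Per-pixel log-likelihood under D(x,y) ~ N(mu_{B(x,y)}, sigma):
    off-contour pixels (B=0) are modelled as bright, contour pixels (B=1)
    as dark. D and B are equal-shaped arrays."""
    means = np.where(B == 1, mu[1], mu[0])
    return -0.5 * ((D - means) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
```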
• Model:
  • Simple grammar model (next slides)
  • Factor graph contains ~50M edges
• Training:
  • Model parameters estimated using the approximate EM algorithm
  • 200 training images, 200 test images
Generating a curve
At each step, choose: continue the contour, change orientation, or STOP.
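A sketch of this curve-generating process, with made-up continue/turn/stop probabilities standing in for the learned rule probabilities:

```python
import math, random

def sample_curve(x, y, theta, p_continue=0.85, p_turn=0.10):
    """Grow a contour from (x, y) with heading theta. At each step: continue
    with the same orientation, change orientation, or STOP (remaining mass)."""
    points = [(round(x), round(y))]
    while True:
        u = random.random()
        if u < p_continue:
            pass                                              # continue contour
        elif u < p_continue + p_turn:
            theta += random.choice((-1, 1)) * math.pi / 8     # change orientation
        else:
            return points                                     # STOP
        x, y = x + math.cos(theta), y + math.sin(theta)
        points.append((round(x), round(y)))

print(sample_curve(0, 0, 0.0))
```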
[Figure: ground truth, image data, P(B = 1 | Image), and inferred contours]
Contour detection: Comparison
1-level FOP and 4-level FOP from: “Multiscale Field of Patterns”, Felzenszwalb, Oberlin. NIPS 2014.
Context is important!
The PSG framework is competitive with Field-of-Patterns (FOP)
Application: binary image segmentation
• Dataset:
  • Ground truth: binary leaf masks [1]
  • B(x,y): binary value indicating whether this pixel belongs to a leaf
  • D(x,y): pixel intensity
  • Data model: D(x,y) ~ N(μ_{B(x,y)}, σ), as in the contour grammar
[1] Söderkvist. “Computer vision classification of leaves from Swedish trees”, Master’s thesis 2001.
Simple process to generate a binary mask
“Grow” a foreground; red/black pixels are part of the foreground.
1) Choose a pixel to be part of the foreground and colour it red.
2) Pick a red pixel and change its colour to black. Select some of its neighbours to be part of the foreground, if they are not already, and colour them red.
3) Repeat step 2 until no red pixels remain.
(a sketch of this process follows below)
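A sketch of the growth process just described; the neighbour-inclusion probability is an invented placeholder for the learned rule probabilities:

```python
import random

def grow_mask(width, height, seed, p_neighbour=0.5):
    """Grow a binary foreground mask. Red pixels form the active frontier;
    each is turned black while recruiting some of its 4-neighbours (colouring
    them red) with probability p_neighbour. Stops when no red pixels remain."""
    foreground, red = {seed}, [seed]
    while red:
        x, y = red.pop(random.randrange(len(red)))   # red pixel -> black
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in foreground
                    and random.random() < p_neighbour):
                foreground.add((nx, ny))             # colour the neighbour red
                red.append((nx, ny))
    return foreground

mask = grow_mask(64, 64, seed=(32, 32))
print(len(mask), "foreground pixels")
```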
• Models:
  • Simple segmentation grammar (previous slides)
  • More complex segmentation grammar
• Training:
  • Model parameters estimated using the approximate EM algorithm
  • 50 training images, 25 test images
[Figure: ground truth, image data, simple grammar result, complex grammar result]
Binary image segmentation: Comparison
1-level FOP and 4-level FOP from: “Multiscale Field of Patterns”, Felzenszwalb, Oberlin. NIPS 2014.
Context is important!
The complex segmentation grammar is competitive with Field-of-Patterns (FOP)
Application: Face localization
Face localization: Single-face Dataset
• Labelled Faces in the Wild [1]
• Manually annotated 300 images with bounding box information for face, left eye, right eye, nose, mouth
• 200 training images, 100 test images
[1] Huang et al., “Labeled faces in the wild: A database for studying face recognition in unconstrained environments.”, Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
PSG Face Grammar
• Symbols: Face (F), Left eye (L), Right eye (R), Nose (N), Mouth (M)
• Pose space: (x, y, scale)
• Mechanism to help suppress false positives
• Geometric model learned from labelled data
• Data model: calibrated HOG filter scores (one possible calibration is sketched below)
• Factor graph has ~3M edges
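The slides do not say how the HOG scores are calibrated; one standard option (an assumption here, not necessarily the thesis method) is Platt-style logistic calibration fit on held-out detections:

```python
import numpy as np

def calibrate_scores(raw_scores, a, b):
    """Map raw HOG filter scores to probabilities with a logistic function;
    (a, b) would be fit on validation data. This is one common choice of
    calibration, used here purely for illustration."""
    return 1.0 / (1.0 + np.exp(-(a * np.asarray(raw_scores) + b)))

print(calibrate_scores([-1.0, 0.0, 2.0], a=1.5, b=0.0))
```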
Face localization: Baselines
• HOG filter scores: calibrated HOG filter scores only
• Pictorial Structures (“Pictorial Structures for object recognition”, IJCV 2005)
  • Fast, exact inference
  • Assumes one object of each type per scene
[Figure: ground truth, HOG filters, PSG Face Grammar, Pictorial Structures]
Face Localization: Performance comparison
Area under the precision-recall curve
Face localization: Family Portraits Dataset
• Dataset collected from the Internet
• Manually annotated 40 images with bounding box information
• Average of 5.9 faces per image
• Same model as the one trained on LFW, but with more scales
• Use all 40 images for testing
[Figure: ground truth, PSG Face Grammar, Pictorial Structures]
Face Localization: Performance Comparison
Area under the precision-recall curve
This talk
• Motivation for a general scene understanding framework
• Background/related work
• Representation for general scene understanding tasks
• Efficient approximate inference algorithm
• Learning algorithm to estimate model parameters
• Experimental evaluation
• Extensions for larger/more complex tasks
• Directions for future research
Scaling up
• Bigger images! More objects! Larger grammars!
Not so fast …
[Diagram repeated from earlier: variables X, R1, R2, C1–C3; factors f1 (leaky-or), f2 (selection), f3 (Bernoullis), fd (data model); connections with super-parts and subparts]
Graph has too many edges!
• Most edges are inter-object
• Example: the eye of a particular face
  (15 × 15) × 5 × 5 × 5 = 28,125 edges
  (size of image region) × (# orientations) × (# scales) × (# aspect ratios)
• This is for a single face, and a single eye!
• Key issue: the size of each distribution’s support
Reducing the number of factor graph edges
• Reducing the total support of the probability distributions reduces the number of factor graph edges
• Approximate an N-D distribution as N one-dimensional distributions
• Decompose a 1-D Uniform distribution into a set of Categorical distributions

Approximating an N-D distribution
Where to put the eye? 1 of 25 choices (cost: 25 edges) vs. x-coordinate: 1 of 5 choices and y-coordinate: 1 of 5 choices (cost: 10 edges)
[Figure: original 2-D distribution and its factorized approximation]
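The edge-count arithmetic behind the figure: a joint choice over a grid costs the product of the dimension sizes, while separate per-dimension choices cost their sum.

```python
import math

dims = (5, 5)                       # where to put the eye: a 5 x 5 region
print(math.prod(dims), sum(dims))   # 25 edges jointly vs. 10 edges factored

# The same arithmetic for the earlier eye example, if every dimension of
# (15 x 15 region) x orientations x scales x aspect ratios is split out:
dims = (15, 15, 5, 5, 5)
print(math.prod(dims), sum(dims))   # 28125 vs. 45
```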
Decomposing a Uniform distribution
• The decomposition can be phrased as a series of convolutions: find N and distributions p_i such that p_1 * p_2 * … * p_N = f and ∑_i |p_i|_0 is minimized
• The minimum value of ∑_i |p_i|_0 equals the sum of the prime factorization of |f|_0
• A construction algorithm for the p_i’s is given in the thesis
Decomposing a Uniform distribution on the set {0, …, 99}
|f|_0 = 100 = 2 · 2 · 5 · 5, so the total support size is 2 + 2 + 5 + 5 = 14
(a verification of this decomposition is sketched below)
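One valid decomposition uses mixed-radix “digits”: the uniform distribution on {0, …, 99} is the convolution of four spike trains with support sizes 2, 2, 5, 5 (total 14). The construction below is a sketch consistent with the claim above; the thesis gives the general algorithm.

```python
import numpy as np

def spikes(k, step):
    """A categorical distribution with k equal spikes spaced `step` apart;
    its support size is k."""
    p = np.zeros((k - 1) * step + 1)
    p[::step] = 1.0 / k
    return p

# |f|_0 = 100 = 2 * 2 * 5 * 5, so the minimum total support is 2+2+5+5 = 14.
f, step = np.array([1.0]), 1
for k in (2, 2, 5, 5):
    f = np.convolve(f, spikes(k, step))
    step *= k

assert f.size == 100 and np.allclose(f, 1 / 100)   # uniform on {0, ..., 99}
```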
Applying edge reductions

Original (625 edges):
Rule          Spatial distribution   Size of region
Face → Nose   Uniform                [25, 25]

After approximating the 2-D distribution (50 edges):
Rule            Spatial distribution   Size of region
Face → Nose-Y   Uniform                [1, 25]
Nose-Y → Nose   Uniform                [25, 1]

After decomposing the 1-D Uniform distributions (20 edges):
Rule               Spatial distribution   Size of region
Face → Nose-Y1     Uniform                [1, 5]
Nose-Y1 → Nose-Y   Uniform                [1, 5]
Nose-Y → Nose-Y2   Uniform                [5, 1]
Nose-Y2 → Nose     Uniform                [5, 1]
This talk
• Motivation for a general scene understanding framework
• Background/related work
• Representation for general scene understanding tasks
• Efficient approximate inference algorithm
• Learning algorithm to estimate model parameters
• Experimental evaluation
• Extensions for larger/more complex tasks
• Directions for future research
Directions for Future Research
• More scene understanding tasks
• Integration with Deep Learning
• Grammar learning
This talk
• Motivation for a general scene understanding framework
• Background/related work
• Representation for general scene understanding tasks
• Efficient approximate inference algorithm
• Learning algorithm to estimate model parameters
• Experimental evaluation
• Extensions for larger/more complex tasks
• Directions for future research
Thanks!
Backup slides start
Face localization without a face data model
[Table: area under the precision-recall curve on the Family Portraits dataset, comparing the PSG Face Grammar with a “Faceless” Grammar]
Face localization: 0-1 Face Dataset
• Labelled Faces in the Wild [1] + images from VOC2012 [2] without faces
• 200 training images, 200 test images
• 100 test images have one face, 100 test images have no faces
[1] Huang et al., “Labeled faces in the wild: A database for studying face recognition in unconstrained environments.”, Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
[2] Everingham et al., “The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results”, 2012.
Face localization: 0-1 Face Dataset
Area under the precision-recall curve for the 0-1 Face Dataset:

Model                  Face   Left eye   Right eye   Nose   Mouth   Average
Pictorial Structures   0.86   0.94       0.86        0.81   0.84    0.86
PSG Face Grammar       1.00   0.98       0.95        0.99   0.93    0.97
Area under the precision-recall curve for the Single-Face Dataset:

Model                  Face   Left eye   Right eye   Nose   Mouth   Average
Pictorial Structures   1.00   0.97       0.93        0.98   0.90    0.96
PSG Face Grammar       1.00   0.98       0.92        0.98   0.92    0.96
Decomposing a Uniform distribution with prime support
Search over partitions. Dynamic programming?