Object Detection
• Overview
• Viola-Jones
• Dalal-Triggs
• Deformable models
• Deep learning
Recap: Viola-Jones sliding window detector
Fast detection through two mechanisms:
• Quickly eliminate unlikely windows
• Use features that are fast to compute
Viola and Jones. Rapid Object Detection using a Boosted Cascade of Simple Features (2001).
Cascade for Fast Detection
[Cascade diagram: example windows flow through Stage 1 (H1(x) > t1?), Stage 2 (H2(x) > t2?), …, Stage N (HN(x) > tN?). A "Yes" at each stage passes the window to the next stage; a "No" at any stage rejects it immediately; windows that clear Stage N pass.]
• Choose threshold for low false negative rate
• Fast classifiers early in cascade
• Slow classifiers later, but most examples don’t get there
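The cascade logic can be sketched as follows (a minimal illustration; the `H` functions and thresholds stand in for the boosted stage classifiers):

```python
def cascade_classify(x, stages):
    """Run a window x through the cascade. stages is a list of (classifier,
    threshold) pairs; a window is accepted only if every stage's score exceeds
    its threshold, and rejected as soon as one stage fails."""
    for H, t in stages:
        if H(x) <= t:
            return False  # rejected: no later (slower) stage is evaluated
    return True           # passed every stage
```

Because cheap stages come first and most windows are rejected early, the expensive classifiers run on only a small fraction of windows.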
Features that are fast to compute
• “Haar-like features”
– Differences of sums of intensity
– Thousands, computed at various positions and scales within detection window
[Figure: two-rectangle features (-1/+1 regions), three-rectangle features, etc.]
Integral Images
• ii = cumsum(cumsum(im, 1), 2)
• ii(x, y) = sum of the values in the grey region (everything above and to the left of (x, y))
• SUM within rectangle D is ii(4) - ii(2) - ii(3) + ii(1)
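A minimal sketch of the integral-image trick (NumPy; rectangle corners are assumed inclusive):

```python
import numpy as np

def integral_image(im):
    # ii(r, c) = sum of all pixels above and to the left of (r, c), inclusive
    return np.cumsum(np.cumsum(im, axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of im[r0:r1+1, c0:c1+1] via four lookups:
    D = ii(4) - ii(2) - ii(3) + ii(1)."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

Any rectangular sum, and hence any Haar-like feature, costs a constant number of lookups regardless of rectangle size.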
Feature selection with Adaboost
• Create a large pool of features (180K)
• Select features that are discriminative and work well together
– “Weak learner” = feature + threshold + parity
– Choose weak learner that minimizes error on the weighted training set
– Reweight
Viola Jones Results
MIT + CMU face dataset
Speed = 15 FPS (in 2001)
Object Detection
• Overview
• Viola-Jones
• Dalal-Triggs
• Deformable models
• Deep learning
Example: Dalal-Triggs pedestrian detector
1. Extract fixed-sized (64x128 pixel) window at each position and scale
2. Compute HOG (histogram of gradient) features within each window
3. Score the window with a linear SVM classifier
4. Perform non-maxima suppression to remove overlapping detections with lower scores

Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005
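Step 4 can be sketched as greedy IoU-based non-maxima suppression (a common modern variant and a simplification; Dalal-Triggs actually fuse detections with a mean-shift scheme):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(detections, iou_thresh=0.5):
    """detections: list of (score, box). Keep the highest-scoring detection,
    drop lower-scoring ones that overlap it too much, and repeat."""
    keep = []
    for score, box in sorted(detections, key=lambda d: -d[0]):
        if all(iou(box, kept_box) < iou_thresh for _, kept_box in keep):
            keep.append((score, box))
    return keep
```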
Slides by Pete Barnum Navneet Dalal and Bill Triggs, Histograms of Oriented Gradients for Human Detection, CVPR05
• Color space: tested RGB, LAB, and grayscale; RGB and LAB perform slightly better than grayscale
• Gamma normalization and compression: square root or log; very slightly better performance than no adjustment
• Gradient computation: tested uncentered, centered, cubic-corrected, diagonal, and Sobel masks; the simple centered mask [-1, 0, 1] outperforms the rest
• Histogram of gradient orientations
– Votes weighted by magnitude
– Bilinear interpolation between cells
– Orientation: 9 bins (for unsigned angles, 0-180 degrees)
– Histograms in k x k pixel cells
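A simplified sketch of the per-cell histograms (bilinear interpolation between bins and cells is omitted for brevity; cell size and bin count follow the slide's defaults):

```python
import numpy as np

def hog_cell_histograms(im, cell=8, nbins=9):
    """Magnitude-weighted histograms of unsigned gradient orientation
    (0-180 degrees), one histogram per cell."""
    gy, gx = np.gradient(im.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    rows, cols = im.shape[0] // cell, im.shape[1] // cell
    hist = np.zeros((rows, cols, nbins))
    bin_width = 180.0 / nbins
    for r in range(rows):
        for c in range(cols):
            a = ang[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            m = mag[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            idx = np.minimum((a / bin_width).astype(int), nbins - 1)
            for b in range(nbins):
                hist[r, c, b] = m[idx == b].sum()  # vote weighted by magnitude
    return hist
```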
Normalize with respect to surrounding cells
# features = 15 x 7 x 9 x 4 = 3780
(15 x 7 cells) x (9 orientations) x (4 normalizations by neighboring cells)
Original Formulation
[Figure: positive SVM weights (pos w) and negative SVM weights (neg w) visualized over the pedestrian template]
Pedestrian detection with HOG
• Train a pedestrian template using a linear support vector machine
• At test time, convolve feature map with template
• Find local maxima of response
• For multi-scale detection, repeat over multiple levels of a HOG pyramid
N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005
[Figure: template, HOG feature map, detector response map]
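The test-time scoring step ("convolve feature map with template") can be sketched as a dense dot product at every window position (brute force for clarity; real implementations vectorize or use FFT):

```python
import numpy as np

def score_map(feature_map, template):
    """Linear SVM score w·x at every valid window position.
    feature_map: (H, W, D) HOG features; template: (h, w, D) learned weights."""
    H, W, _ = feature_map.shape
    h, w, _ = template.shape
    scores = np.empty((H - h + 1, W - w + 1))
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            scores[r, c] = np.sum(feature_map[r:r + h, c:c + w] * template)
    return scores
```

Detections are the local maxima of the score map above a threshold, pruned by non-maxima suppression; for multiple scales, the same scoring runs on each level of the HOG pyramid.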
Something to think about…
• Sliding window detectors work
– very well for faces
– fairly well for cars and pedestrians
– badly for cats and dogs
• Why are some classes easier than others?
Strengths and Weaknesses of Statistical Template Approach
Strengths
• Works very well for non-deformable objects with canonical orientations: faces, cars, pedestrians
• Fast detection

Weaknesses
• Does not work well for highly deformable objects or “stuff”
• Not robust to occlusion
• Requires lots of training data
Tricks of the trade
• Details in feature computation really matter
– E.g., normalization in Dalal-Triggs improves detection rate by 27% at fixed false positive rate
• Template size
– Typical choice is size of smallest detectable object
• “Jittering” to create synthetic positive examples
– Create slightly rotated, translated, scaled, mirrored versions as extra positive examples
• Bootstrapping to get hard negative examples
1. Randomly sample negative examples
2. Train detector
3. Sample negative examples that score > -1
4. Repeat until all high-scoring negative examples fit in memory
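The bootstrapping loop above can be sketched as follows (`train_fn` and `score_fn` are placeholders for SVM training and window scoring; the > -1 cutoff is the SVM margin from step 3):

```python
import numpy as np

def mine_hard_negatives(pos, neg_pool, train_fn, score_fn, n_init=100, rounds=3):
    """Start from a random negative sample, then repeatedly retrain and add
    negatives the current detector scores above the margin (score > -1)."""
    rng = np.random.default_rng(0)
    chosen = set(rng.choice(len(neg_pool), size=min(n_init, len(neg_pool)),
                            replace=False).tolist())
    model = None
    for _ in range(rounds):
        model = train_fn(pos, [neg_pool[i] for i in chosen])
        hard = {i for i in range(len(neg_pool))
                if i not in chosen and score_fn(model, neg_pool[i]) > -1}
        if not hard:
            break  # no new high-scoring negatives left
        chosen |= hard
    return model, sorted(chosen)
```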
Things to remember
• Sliding window for search
• Features based on differences of intensity (gradient, wavelet, etc.)
– Excellent results require careful feature design
• Boosting for feature selection
• Integral images, cascade for speed
• Bootstrapping to deal with many, many negative examples
Many slides from Lana Lazebnik based on P. Felzenszwalb
Generic object detection with deformable part-based models
Challenge: Generic object detection
Histograms of oriented gradients (HOG)
• Partition image into blocks at multiple scales and compute histogram of gradient orientations in each block
N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005
[Figure: HOG visualizations at 10x10 and 20x20 cells. Image credit: N. Snavely]
Are we done?
• Single rigid template usually not enough to represent a category
• Many objects (e.g. humans) are articulated, or have parts that can vary in configuration
• Many object categories look very different from different viewpoints, or from instance to instance
Slide by N. Snavely
Discriminative part-based models
P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, Object Detection with Discriminatively Trained Part Based Models, PAMI 32(9), 2010
[Figure: root filter, part filters, deformation weights]
Discriminative part-based models
Multiple components
Discriminative part-based models
Object hypothesis
• Multiscale model: the resolution of part filters is twice the resolution of the root
Scoring an object hypothesis
• The score of a hypothesis is the sum of filter scores minus the sum of deformation costs
score(p_0, ..., p_n) = sum_{i=0}^{n} F_i · φ(H, p_i) - sum_{i=1}^{n} d_i · (dx_i, dy_i, dx_i^2, dy_i^2)

– F_i: filters
– φ(H, p_i): subwindow features
– d_i: deformation weights
– (dx_i, dy_i, dx_i^2, dy_i^2): displacements
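The hypothesis score can be sketched directly from this formula (toy feature map; placements and anchors are in feature-map coordinates, and the root's deformation entry is unused since only parts pay a deformation cost):

```python
import numpy as np

def hypothesis_score(filters, defw, anchors, placements, features):
    """sum_i F_i · phi(H, p_i)  minus  sum_{i>=1} d_i · (dx, dy, dx^2, dy^2),
    where (dx, dy) is part i's displacement from its anchor position."""
    score = 0.0
    for i, (F, (x, y)) in enumerate(zip(filters, placements)):
        h, w, _ = F.shape
        score += np.sum(F * features[y:y + h, x:x + w])  # filter response
        if i >= 1:  # deformation cost applies to parts, not the root
            dx, dy = x - anchors[i][0], y - anchors[i][1]
            score -= np.dot(defw[i], [dx, dy, dx * dx, dy * dy])
    return score
```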
Scoring an object hypothesis
• The score of a hypothesis is the sum of filter scores minus the sum of deformation costs, written as a single dot product:

score(z) = w · Ψ(H, z)

– w: concatenation of the filter weights (F_i) and deformation weights (d_i)
– Ψ(H, z): concatenation of the subwindow features φ(H, p_i) and displacements (dx_i, dy_i, dx_i^2, dy_i^2)
Detection
• Define the score of each root filter location as the score given the best part placements:

score(p_0) = max_{p_1, ..., p_n} score(p_0, ..., p_n)
Detection
• Define the score of each root filter location as the score given the best part placements:

score(p_0) = max_{p_1, ..., p_n} score(p_0, ..., p_n)

• Efficient computation: generalized distance transforms
• For each “default” part location, find the score of the “best” displacement:

R_i(x, y) = max_{dx, dy} [ F_i · φ(H, (x + dx, y + dy)) - d_i · (dx, dy, dx^2, dy^2) ]

[Figure: head filter and its deformation cost]
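The per-part maximization can be sketched by brute force (the generalized distance transform computes the same R_i in time linear in the number of locations, rather than searching displacements explicitly):

```python
import numpy as np

def part_response(resp, defw, max_disp=2):
    """R_i(x, y) = max_{dx,dy} [resp(x+dx, y+dy) - d_i·(dx, dy, dx^2, dy^2)]:
    the best filter response reachable from each default location,
    discounted by the deformation cost of the displacement."""
    H, W = resp.shape
    R = np.full((H, W), -np.inf)
    for y in range(H):
        for x in range(W):
            for dy in range(-max_disp, max_disp + 1):
                for dx in range(-max_disp, max_disp + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        cost = np.dot(defw, [dx, dy, dx * dx, dy * dy])
                        R[y, x] = max(R[y, x], resp[yy, xx] - cost)
    return R
```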
Detection
• Define the score of each root filter location as the score given the best part placements:

score(p_0) = max_{p_1, ..., p_n} score(p_0, ..., p_n)

• Efficient computation: generalized distance transforms
• For each “default” part location, find the score of the “best” displacement:

R_i(x, y) = max_{dx, dy} [ F_i · φ(H, (x + dx, y + dy)) - d_i · (dx, dy, dx^2, dy^2) ]

[Figure: head filter, head filter responses, distance transform]
Detection
Detection result
Training
• Training data consists of images with labeled bounding boxes
• Need to learn the filters and deformation parameters
Training
• Our classifier has the form

f_w(x) = max_z w · Φ(x, z)

• w are model parameters, z are latent hypotheses
• Latent SVM training: initialize w and iterate:
– Fix w and find the best z for each training example (detection)
– Fix z and solve for w (standard SVM training)
• Issue: too many negative examples
– Do “data mining” to find “hard” negatives
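The alternation can be sketched as coordinate descent (`phi`, `candidates`, and `svm_fit` are placeholders for the joint feature map, the set of latent values per example, and an off-the-shelf linear SVM solver):

```python
import numpy as np

def latent_svm_train(X, y, phi, candidates, svm_fit, w0, iters=5):
    """Alternate between (1) fixing w and picking the best latent z for each
    example (detection), and (2) fixing the z's and fitting a standard SVM."""
    w = w0
    for _ in range(iters):
        feats = np.array([
            phi(x, max(candidates(x), key=lambda z: np.dot(w, phi(x, z))))
            for x in X])          # step 1: best z given current w
        w = svm_fit(feats, y)     # step 2: new w given the chosen z's
    return w
```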
Car model
Component 1
Component 2
Car detections
Person model
Person detections
Cat model
Cat detections
Bottle model
More detections
Quantitative results (PASCAL 2008)
• 7 systems competed in the 2008 challenge
• Out of 20 classes, first place in 7 classes and second place in 8 classes

[Figure: precision-recall curves for bicycles, person, and bird, with the proposed approach highlighted]
Detection state of the art
Object detection system overview. Our system (1) takes an input image, (2) extracts around 2000 bottom-up region proposals, (3) computes features for each proposal using a large convolutional neural network (CNN), and then (4) classifies each region using class-specific linear SVMs. R-CNN achieves a mean average precision (mAP) of 53.7% on PASCAL VOC 2010. For comparison, Uijlings et al. (2013) report 35.1% mAP using the same region proposals, but with a spatial pyramid and bag-of-visual-words approach. The popular deformable part models perform at 33.4%.
R. Girshick, J. Donahue, T. Darrell, and J. Malik, Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, CVPR 2014, to appear.