DIP PPT: Object Recognition
TRANSCRIPT
Object Recognition
By A. Sravya
Given some knowledge of how certain objects may appear and an image of a scene possibly containing those objects, report which objects are present in the scene and where.
Object recognition problem
Image panoramas
Image watermarking
Global robot localization
Face detection
Optical character recognition
Manufacturing quality control
Content-based image indexing
Object counting and monitoring
Automated vehicle parking systems
Visual positioning and tracking
Video stabilization
Applications
Pattern or object: an arrangement of descriptors (features)
Pattern class: Family of patterns that share some common properties
Pattern Recognition: Techniques for assigning patterns to their respective classes
Common pattern arrangements:
1. Vectors (for quantitative descriptors)
2. Strings
3. Trees (for structural descriptors)
Approaches to pattern recognition:
1. Decision-theoretic (uses quantitative descriptors)
2. Structural (uses qualitative descriptors)
Introduction
A pattern vector takes the form $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$, where $x_i$ represents the ith descriptor and n is the number of descriptors associated with the pattern.
Example: Consider 3 types of iris flowers: setosa, virginica and versicolor. Each flower is described by its petal length and width. Therefore the pattern vector is $\mathbf{x} = (x_1, x_2)^T$, where $x_1$ is the petal length and $x_2$ is the petal width.
Patterns and pattern classes: vector example
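To make the idea concrete, here is a minimal sketch (not from the original slides) of packing such measurements into pattern vectors with NumPy; the numeric values are hypothetical petal measurements in cm:

```python
import numpy as np

# Each pattern vector x = (x1, x2)^T holds (petal length, petal width).
# The values below are hypothetical measurements in cm.
setosa     = np.array([1.4, 0.2])
versicolor = np.array([4.5, 1.5])
virginica  = np.array([6.0, 2.5])

print(setosa.shape)  # (2,) -> n = 2 descriptors per pattern
```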
Here is another example of pattern vector generation.
In this case, we are interested in different types of noisy shapes.
Patterns and pattern classes: another vector example
Recognition problems in which class membership is determined not only by quantitative measures of each feature but also by the spatial relationships between the features are solved by the structural approach
Example: Fingerprint recognition
Strings
String descriptions generate patterns of objects whose structure is based on relatively simple connectivity of primitives, usually associated with boundary shape.
Strings and trees
String of symbols: w = …abababab…
String example
Tree descriptors are more powerful than strings. Most hierarchical ordering schemes lead to tree structures.
Example:
Trees
Recognition based on Decision-Theoretic Methods
Based on the use of decision functions d(x). Here we find W decision functions $d_1(\mathbf{x}), d_2(\mathbf{x}), \ldots, d_W(\mathbf{x})$ with the property that, if a pattern x belongs to class $\omega_i$, then
$d_i(\mathbf{x}) > d_j(\mathbf{x}) \qquad j = 1, 2, \ldots, W;\; j \neq i \qquad (1)$
The decision boundary separating class $\omega_i$ and $\omega_j$ is given by
$d_i(\mathbf{x}) = d_j(\mathbf{x})$, or $d_{ij}(\mathbf{x}) = d_i(\mathbf{x}) - d_j(\mathbf{x}) = 0$
Now the objective is to develop various approaches for finding decision functions that satisfy Eq. (1).
Here we represent each class by a prototype pattern vector
An unknown pattern is assigned to the class to which it is closest in terms of a predefined metric
The two approaches are:
1. Minimum distance classifier (calculates the Euclidean distance)
2. Matching by correlation
Decision-theoretic methods: matching
Minimum distance classifier
Prototype pattern vector: $\mathbf{m}_j = \frac{1}{N_j} \sum_{\mathbf{x} \in \omega_j} \mathbf{x}, \quad j = 1, 2, \ldots, W$, where $N_j$ is the number of pattern vectors from class $\omega_j$
Calculate the Euclidean distance between the unknown vector and each prototype vector: $D_j(\mathbf{x}) = \lVert \mathbf{x} - \mathbf{m}_j \rVert$
The distance measure is the decision function: assign x to class $\omega_i$ if $D_i(\mathbf{x})$ is the smallest distance. Equivalently, compute $d_j(\mathbf{x}) = \mathbf{x}^T \mathbf{m}_j - \frac{1}{2} \mathbf{m}_j^T \mathbf{m}_j$ and assign x to the class whose decision function yields the largest numerical value.
Contd..
The decision boundary between classes $\omega_i$ and $\omega_j$ is
$d_{ij}(\mathbf{x}) = \mathbf{x}^T(\mathbf{m}_i - \mathbf{m}_j) - \frac{1}{2}(\mathbf{m}_i - \mathbf{m}_j)^T(\mathbf{m}_i + \mathbf{m}_j) = 0$
which is the perpendicular bisector of the line segment joining $\mathbf{m}_i$ and $\mathbf{m}_j$.
If $d_{ij}(\mathbf{x}) > 0$, then x belongs to $\omega_i$; if $d_{ij}(\mathbf{x}) < 0$, then x belongs to $\omega_j$.
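The following is a minimal sketch of the minimum distance classifier described above, assuming the prototypes (class means) are already known; the vectors and values are hypothetical:

```python
import numpy as np

def min_distance_classifier(x, prototypes):
    """Assign x to the class whose decision function
    d_j(x) = x^T m_j - 0.5 * m_j^T m_j is largest
    (equivalent to the smallest Euclidean distance)."""
    scores = [x @ m - 0.5 * (m @ m) for m in prototypes]
    return int(np.argmax(scores))

# Hypothetical prototype (mean) vectors for two classes
m1 = np.array([4.3, 1.3])
m2 = np.array([1.5, 0.3])

x = np.array([1.4, 0.2])
print(min_distance_classifier(x, [m1, m2]))  # -> 1, i.e. the class of m2
```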
Example
Correlation is used for finding matches of a subimage w(x, y) of size J × K within an image f(x, y) of size M × N
Correlation between w(x,y) and f(x,y) is given by
Matching by correlation
$c(x, y) = \sum_s \sum_t f(s, t)\, w(x + s, y + t)$
for $x = 0, 1, 2, \ldots, M - 1$ and $y = 0, 1, 2, \ldots, N - 1$
The maximum values of c(x, y) indicate the positions where w best matches f
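A direct (unoptimized) sketch of this correlation, using NumPy and a simplified index convention in which the peak lands at the template's top-left position; the images are synthetic:

```python
import numpy as np

def correlate(f, w):
    """c(x, y) = sum_s sum_t f(x + s, y + t) * w(s, t), evaluated
    for every displacement where w lies fully inside f."""
    M, N = f.shape
    J, K = w.shape
    c = np.zeros((M - J + 1, N - K + 1))
    for x in range(M - J + 1):
        for y in range(N - K + 1):
            c[x, y] = np.sum(f[x:x + J, y:y + K] * w)
    return c

f = np.random.rand(64, 64)     # synthetic scene image
w = f[20:28, 30:38].copy()     # template cut out of the scene
c = correlate(f, w)
print(np.unravel_index(np.argmax(c), c.shape))  # typically (20, 30)
```

In practice the normalized correlation coefficient is usually preferred over this raw sum, since raw correlation is sensitive to changes in image brightness.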
Contd..
This is a probabilistic approach to pattern recognition
Average loss: $r_j(\mathbf{x}) = \sum_{k=1}^{W} L_{kj}\, p(\omega_k \mid \mathbf{x})$, where $L_{kj}$ is the loss incurred for assigning a pattern from class $\omega_k$ to class $\omega_j$
Optimum statistical classifiers
The classifier that minimizes the total average loss is called the Bayes classifier
Optimum statistical classifiers
The Bayes classifier assigns an unknown pattern x to class $\omega_i$ if $r_i(\mathbf{x}) < r_j(\mathbf{x})$ for $j = 1, 2, \ldots, W;\; j \neq i$
A loss of 0 is assigned to a correct decision and a loss of 1 to an incorrect decision (the 0-1 loss function)
Optimum statistical classifiers
This is further simplified to $r_j(\mathbf{x}) = p(\mathbf{x}) - p(\mathbf{x} \mid \omega_j)\, P(\omega_j)$
Finally, $d_j(\mathbf{x}) = p(\mathbf{x} \mid \omega_j)\, P(\omega_j), \quad j = 1, 2, \ldots, W$ … the Bayes decision function (BDF)
BDF depends on the pdfs of the patterns in each class and the probability of occurrence of each class
Sample patterns are assigned to each class and then necessary parameters are estimated
The most commonly used form for $p(\mathbf{x} \mid \omega_j)$ is the Gaussian pdf
Bayesian classifier for Gaussian pattern classes
The Bayes decision function for Gaussian pattern classes (here n = 1 and W = 2) is
$d_j(x) = p(x \mid \omega_j)\, P(\omega_j) = \frac{1}{\sqrt{2\pi}\,\sigma_j}\, e^{-\frac{(x - m_j)^2}{2\sigma_j^2}}\, P(\omega_j), \quad j = 1, 2$
Bayesian classifier for Gaussian pattern classes
In the n-dimensional case, the Gaussian density of the vectors in class $\omega_j$ is
$p(\mathbf{x} \mid \omega_j) = \frac{1}{(2\pi)^{n/2} \lvert \mathbf{C}_j \rvert^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x} - \mathbf{m}_j)^T \mathbf{C}_j^{-1} (\mathbf{x} - \mathbf{m}_j)}$
where $\mathbf{m}_j$ is the mean vector and $\mathbf{C}_j$ the covariance matrix of class $\omega_j$
The Bayes decision function for Gaussian pattern classes under the 0-1 loss function reduces to
$d_j(\mathbf{x}) = \ln P(\omega_j) - \frac{1}{2} \ln \lvert \mathbf{C}_j \rvert - \frac{1}{2} (\mathbf{x} - \mathbf{m}_j)^T \mathbf{C}_j^{-1} (\mathbf{x} - \mathbf{m}_j)$
Bayesian classifier for Gaussian pattern classes
The BDF reduces to the minimum distance classifier if:
1. The pattern classes are Gaussian
2. All covariance matrices are equal to the identity matrix
3. All classes are equally likely to occur
Therefore the minimum distance classifier is optimum in the Bayes sense if the above conditions are satisfied.
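A minimal sketch of this Gaussian Bayes decision function under the 0-1 loss, with hypothetical means, covariances and priors:

```python
import numpy as np

def bayes_decision(x, means, covs, priors):
    """d_j(x) = ln P(w_j) - 0.5*ln|C_j| - 0.5*(x-m_j)^T C_j^{-1} (x-m_j);
    the pattern is assigned to the class with the largest d_j(x)."""
    scores = []
    for m, C, P in zip(means, covs, priors):
        diff = x - m
        d = (np.log(P)
             - 0.5 * np.log(np.linalg.det(C))
             - 0.5 * diff @ np.linalg.inv(C) @ diff)
        scores.append(d)
    return int(np.argmax(scores))

# Hypothetical two-class problem in n = 2 dimensions
means  = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs   = [np.eye(2), 2.0 * np.eye(2)]
priors = [0.5, 0.5]
print(bayes_decision(np.array([2.5, 2.8]), means, covs, priors))  # -> 1
```

With identity covariance matrices and equal priors, this reduces to the minimum distance classifier, as stated above.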
Neural Networks
Neural network: information processing paradigm inspired by biological nervous systems, such as our brain
Structure: large number of highly interconnected processing elements (neurons) working together
Neurons are arranged in layers
Neural Networks
Each neuron within the network is usually a simple processing unit which takes one or more inputs and produces an output. At each neuron, every input has an associated weight which modifies the strength of that input. The neuron sums all the weighted inputs and calculates the output.
Neurons: Elemental nonlinear computing elements
We use these networks to adaptively develop the coefficients of decision functions via successive presentations of training sets of patterns
Training patterns: sample patterns used to estimate desired parameters
Training set: a set of such patterns from each class
Learning or training: the process by which a training set is used to obtain decision functions
Perceptron model: the basic model of a neuron; perceptrons are learning machines
NN contd..
Perceptron for two pattern classes: the response is $O = +1$ if $d(\mathbf{x}) = \sum_{i=1}^{n} w_i x_i + w_{n+1} > 0$, and $O = -1$ otherwise
Another way: using augmented pattern vectors $\mathbf{y} = (x_1, \ldots, x_n, 1)^T$, the decision function becomes $d(\mathbf{y}) = \mathbf{w}^T \mathbf{y}$
Training algorithms: linearly separable classes
If at the kth training step $\mathbf{y}(k) \in \omega_1$ and $\mathbf{w}^T(k)\,\mathbf{y}(k) \leq 0$, then $\mathbf{w}(k+1) = \mathbf{w}(k) + c\,\mathbf{y}(k)$
If $\mathbf{y}(k) \in \omega_2$ and $\mathbf{w}^T(k)\,\mathbf{y}(k) \geq 0$, then $\mathbf{w}(k+1) = \mathbf{w}(k) - c\,\mathbf{y}(k)$
Otherwise, $\mathbf{w}(k+1) = \mathbf{w}(k)$, where c is a positive correction increment
This algorithm makes a change in w only if the pattern being considered at the kth step in the training sequence is misclassified
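A minimal sketch of this fixed-increment perceptron rule for two linearly separable classes; the data and labels are hypothetical, with labels +1 for ω1 and -1 for ω2 so both update cases collapse into one line:

```python
import numpy as np

def train_perceptron(patterns, labels, c=1.0, max_epochs=100):
    """w(k+1) = w(k) + c*y(k) for misclassified w1 patterns and
    w(k+1) = w(k) - c*y(k) for misclassified w2 patterns;
    w is unchanged for correctly classified patterns."""
    y = np.hstack([patterns, np.ones((len(patterns), 1))])  # augment with 1
    w = np.zeros(y.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for yk, label in zip(y, labels):
            if label * (w @ yk) <= 0:     # misclassified at this step
                w += c * label * yk       # correct the weight vector
                errors += 1
        if errors == 0:                   # converged on separable data
            break
    return w

X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 2.0]])
labels = np.array([1, 1, -1, -1])
print(train_perceptron(X, labels))
```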
This method minimizes the error between the actual and the desired response
Training algorithms: nonseparable classes
From the gradient descent algorithm, the delta (Widrow-Hoff) weight update is
$\mathbf{w}(k+1) = \mathbf{w}(k) + \alpha\,[\,r(k) - \mathbf{w}^T(k)\,\mathbf{y}(k)\,]\,\mathbf{y}(k)$
where $\alpha > 0$ and $r(k)$ is the desired response
Training algorithms: nonseparable classes
Changing the weights reduces the error by a factor of $\alpha\,\lVert \mathbf{y}(k) \rVert^2$
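A minimal sketch of this delta (Widrow-Hoff/LMS) rule; the data and desired responses are hypothetical:

```python
import numpy as np

def train_lms(patterns, responses, alpha=0.1, epochs=50):
    """w(k+1) = w(k) + alpha*(r(k) - w(k)^T y(k))*y(k).
    Applicable whether or not the classes are separable, since it
    minimizes the squared error between actual and desired response."""
    y = np.hstack([patterns, np.ones((len(patterns), 1))])  # augment with 1
    w = np.zeros(y.shape[1])
    for _ in range(epochs):
        for yk, r in zip(y, responses):
            e = r - w @ yk          # error for this pattern
            w += alpha * e * yk     # update reduces it by alpha*||yk||^2
    return w

X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 2.0]])
r = np.array([1.0, 1.0, -1.0, -1.0])  # desired responses
print(train_lms(X, r))
```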
Multilayer feedforward neural networks
We focus on decision functions for multiclass pattern recognition problems, independent of whether the classes are separable or not.
Activation element is a sigmoid function
Multilayer feedforward neural networks
The input to the activation element of each node in layer J is $I_j = \sum_i w_{ji} O_i$, where the $O_i$ are the outputs of the preceding layer
The outputs of layer K are $O_k = h_k(I_k)$
Finally, the sigmoid activation function is $h_j(I_j) = \frac{1}{1 + e^{-(I_j + \theta_j)/\theta_0}}$
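A minimal sketch of this forward pass, assuming one weight matrix per layer and the sigmoid form above with $\theta_j = 0$ and $\theta_0 = 1$; the network size and weights are hypothetical:

```python
import numpy as np

def sigmoid(I):
    """h(I) = 1 / (1 + exp(-I)), i.e. theta_j = 0 and theta_0 = 1."""
    return 1.0 / (1.0 + np.exp(-I))

def forward(x, weight_matrices):
    """Each layer computes I_j = sum_i w_ji * O_i (a matrix-vector
    product), then passes I through the sigmoid to get its outputs."""
    O = x
    for W in weight_matrices:
        O = sigmoid(W @ O)
    return O

# Hypothetical 2-3-2 network with random weights
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((2, 3))]
print(forward(np.array([0.5, -0.2]), weights))
```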
We begin by concentrating on the output layer. The process starts with an arbitrary set of weights throughout the network. The generalized delta rule has two basic phases:
Phase 1: A training vector is propagated through the layers to compute the output $O_j$ for each node. The outputs $O_q$ of the nodes in the output layer are then compared against their desired responses $r_q$ to generate the error terms $\delta_q$.
Phase 2: A backward pass through the network during which the appropriate error signal is passed to each node and the corresponding weight changes are made.
Training by back propagation
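A minimal sketch of the two-phase generalized delta rule for one hidden layer, trained on XOR as a hypothetical toy problem (learning rate, network size and epoch count are arbitrary choices; a different seed may need more epochs):

```python
import numpy as np

def sigmoid(I):
    return 1.0 / (1.0 + np.exp(-I))

def train_backprop(X, R, hidden=4, alpha=0.5, epochs=10000, seed=0):
    rng = np.random.default_rng(seed)
    Xa = np.hstack([X, np.ones((len(X), 1))])            # bias input
    W1 = rng.standard_normal((hidden, Xa.shape[1]))      # input -> hidden
    W2 = rng.standard_normal((R.shape[1], hidden + 1))   # hidden -> output
    for _ in range(epochs):
        for x, r in zip(Xa, R):
            # Phase 1: forward pass to compute every node's output
            Oh = np.append(sigmoid(W1 @ x), 1.0)         # hidden outputs + bias
            Oq = sigmoid(W2 @ Oh)                        # output-layer outputs
            # Phase 2: backward pass of the error terms
            dq = (r - Oq) * Oq * (1 - Oq)                # delta_q at the output
            dh = (W2.T @ dq)[:-1] * Oh[:-1] * (1 - Oh[:-1])  # hidden deltas
            W2 += alpha * np.outer(dq, Oh)               # weight corrections
            W1 += alpha * np.outer(dh, x)
    return W1, W2

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
R = np.array([[0.], [1.], [1.], [0.]])                   # XOR targets
W1, W2 = train_backprop(X, R)
for x in np.hstack([X, np.ones((4, 1))]):
    print(x[:2], sigmoid(W2 @ np.append(sigmoid(W1 @ x), 1.0)))
```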
Example
Performance of a neural network as a function of noise level
Improvement in performance by increasing the number of training patterns
Face recognition
Thank you