optical flow
Optical flow Or where do the pixels move?
Alon Gat
Problem Definition
Given: two or more frames of an image sequence
Wanted: the displacement field between two consecutive frames - the optical flow
Visualize Optical Flow
Vector plot: subsample the vector field and use arrows for visualization
Color plot: visualize direction as color and magnitude as brightness
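Such a color plot can be sketched, for instance, by mapping flow direction to hue and flow magnitude to brightness. The following minimal numpy sketch (the helper name flow_to_hsv is illustrative, not from the slides) builds the HSV encoding; conversion to RGB for display is omitted:

```python
import numpy as np

def flow_to_hsv(u, v):
    """Encode a flow field as HSV: hue = direction, value = magnitude.

    u, v: 2-D arrays of horizontal/vertical displacements.
    Returns an (H, W, 3) float array with channels in [0, 1].
    Illustrative sketch; RGB conversion for display is omitted.
    """
    angle = np.arctan2(v, u)                       # direction in (-pi, pi]
    mag = np.sqrt(u**2 + v**2)
    hsv = np.zeros(u.shape + (3,))
    hsv[..., 0] = (angle + np.pi) / (2 * np.pi)    # hue in [0, 1]
    hsv[..., 1] = 1.0                              # full saturation
    hsv[..., 2] = mag / (mag.max() + 1e-8)         # brightness = magnitude
    return hsv
```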
What is optical flow good for?
Extraction of Motion Information
• robot navigation / driver assistance
• surveillance / tracking
• action recognition
Processing of Image Sequences
• video compression
• ego-motion compensation
Related Correspondence Problems
• stereo reconstruction
• structure-from-motion
• medical image registration
How to estimate pixel motion from two images?
• Find pixel correspondences: given a pixel in img1, look for nearby pixels of the same color in img2
• Key assumptions:
  – color constancy: a point in img1 looks “the same” in img2 (for grayscale images, this is brightness constancy)
  – small motion: points do not move very far
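The correspondence search described above can be sketched as a sum-of-squared-differences patch comparison over a small search window. The helper below (match_pixel is an illustrative name, not from the slides) relies on exactly the two key assumptions, brightness constancy and small motion:

```python
import numpy as np

def match_pixel(img1, img2, y, x, patch=2, search=3):
    """Find where pixel (y, x) of img1 moved to in img2 by comparing
    small patches (sum of squared differences) within a search radius.
    Assumes brightness constancy and small motion. Illustrative only.
    """
    p = img1[y-patch:y+patch+1, x-patch:x+patch+1]
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            q = img2[y+dy-patch:y+dy+patch+1, x+dx-patch:x+dx+patch+1]
            ssd = np.sum((p - q) ** 2)
            if ssd < best:
                best, best_dv = ssd, (dy, dx)
    return best_dv
```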
Brightness Constancy Assumption
Assume the brightness of a patch remains the same in both images:
    I(x + u·δt, y + v·δt, t + δt) = I(x, y, t)
A point at (x, y) at time t moves to (x + u·δt, y + v·δt) at time t + δt.
Displacement: (x, y) → (x + u·δt, y + v·δt)
Optical flow: the vector field (u, v)
The Linearized Brightness Constancy Assumption
Idea: if u and v are small and I is sufficiently smooth, one may linearize the constancy assumption via a first-order Taylor expansion around the point (x, y, t):
    I(x + u·δt, y + v·δt, t + δt) ≈ I(x, y, t) + Ix·u·δt + Iy·v·δt + It·δt
which gives the optical flow constraint
    Ix·u + Iy·v + It = 0
Here the image derivatives Ix, Iy, It are known; the flow (u, v) is unknown.
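As a quick numerical sanity check (not part of the slides), one can translate a smooth synthetic pattern by a small subpixel amount and verify that the linearized constraint Ix·u + Iy·v + It ≈ 0 holds; all names below are illustrative:

```python
import numpy as np

# Verify Ix*u + Iy*v + It ~ 0 for a smooth pattern moved by a small (u, v).
n = 64
ys, xs = np.mgrid[0:n, 0:n].astype(float)
f = lambda x, y: np.sin(0.2 * x) * np.cos(0.15 * y)   # smooth test pattern

u_true, v_true = 0.3, -0.2            # small motion (pixels per frame)
I0 = f(xs, ys)
I1 = f(xs - u_true, ys - v_true)      # pattern translated by (u, v)

# Central differences for the spatial derivatives, frame difference for It
Ix = (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1)) / 2
Iy = (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0)) / 2
It = I1 - I0

residual = Ix * u_true + Iy * v_true + It
print(np.abs(residual[2:-2, 2:-2]).max())   # small: the linearization holds
```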
Smoothness Constraint
Meaning: neighboring pixels in the image have similar velocities - in other words, nearby pixels move together. This is expressed by the smoothness term
    es = ∫∫ (ux² + uy²) + (vx² + vy²) dx dy
We seek the flow (u, v) that minimizes:
Horn & Schunck Algorithm
E(u(x,y), v(x,y)) = ∫∫ (Ix·u + Iy·v + It)² + α²·((ux² + uy²) + (vx² + vy²)) dx dy
• data term (brightness constancy) - penalizes deviations from the constancy assumption
• smoothness term - penalizes deviations from smoothness of the solution
• regularization parameter α - determines the degree of smoothness
Output – the optical flow!
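The discrete analogue of this functional can be sketched as follows (hs_energy is an illustrative name; flow derivatives are taken with simple forward differences and boundary handling is simplified):

```python
import numpy as np

def hs_energy(u, v, Ix, Iy, It, alpha):
    """Discrete Horn & Schunck energy: data term + alpha^2 * smoothness term.
    u, v: candidate flow fields; Ix, Iy, It: image derivatives.
    Sketch only; flow derivatives via forward differences."""
    data = (Ix * u + Iy * v + It) ** 2
    ux, uy = np.diff(u, axis=1), np.diff(u, axis=0)
    vx, vy = np.diff(v, axis=1), np.diff(v, axis=0)
    smooth = (ux**2).sum() + (uy**2).sum() + (vx**2).sum() + (vy**2).sum()
    return data.sum() + alpha**2 * smooth
```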
Smoothing
Idea: in order to reduce the influence of noise and outliers, we convolve I0 with a Gaussian of mean μ = 0 and standard deviation σ:
    I(x, y, t) = Gσ * I0(x, y, t)
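A minimal numpy-only sketch of this pre-smoothing step (separable Gaussian convolution with reflect padding; the 3σ truncation radius and the helper name are assumptions, not from the slides):

```python
import numpy as np

def gaussian_smooth(img, sigma):
    """Pre-smooth an image with a zero-mean Gaussian of std sigma,
    applied separably along rows and then columns (reflect padding).
    Illustrative sketch; kernel truncated at radius 3*sigma."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                       # normalize so flat regions are preserved
    pad = np.pad(img, r, mode="reflect")
    # convolve rows, then columns
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, pad)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)
    return out
```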
Horn & Schunck Algorithm
Calculus of Variations
According to the calculus of variations, a minimizer of
    E(u, v) = ∫∫ F(x, y, u, v, ux, uy, vx, vy) dx dy
must fulfill the Euler-Lagrange equations
    Fu - d/dx Fux - d/dy Fuy = 0
    Fv - d/dx Fvx - d/dy Fvy = 0
which, in general, form a highly nonlinear system of equations…
For the Horn & Schunck functional, the Euler-Lagrange equations read
    0 = Ix·(Ix·u + Iy·v + It) - α²·(uxx + uyy)
    0 = Iy·(Ix·u + Iy·v + It) - α²·(vxx + vyy)
or, equivalently,
    Δu = (1/α²)·Ix·(Ix·u + Iy·v + It)
    Δv = (1/α²)·Iy·(Ix·u + Iy·v + It)
So we linearize again!
Horn & Schunck Algorithm
The flow derivatives are here discretized via finite differences, e.g. central differences
    ux ≈ (u(i+1,j) - u(i-1,j)) / 2h,    uy ≈ (u(i,j+1) - u(i,j-1)) / 2h
and the Laplacian via the difference between the local neighborhood average and the central value. This turns the Euler-Lagrange equations into a
linear system of equations
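The central-difference discretization can be written, for instance, as the following numpy sketch (function names are illustrative; h is the grid spacing):

```python
import numpy as np

def central_dx(u, h=1.0):
    """Central difference in x: (u(i+1,j) - u(i-1,j)) / 2h."""
    return (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)

def central_dy(u, h=1.0):
    """Central difference in y: (u(i,j+1) - u(i,j-1)) / 2h."""
    return (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * h)
```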
Horn & Schunck Algorithm
At each pixel (k, l), with the Laplacian discretized as Δu ≈ ū - u (where ū is the local neighborhood average), the Euler-Lagrange equations become
    Ix·(Ix·u(k,l) + Iy·v(k,l) + It) - α²·(ū(k,l) - u(k,l)) = 0
    Iy·(Ix·u(k,l) + Iy·v(k,l) + It) - α²·(v̄(k,l) - v(k,l)) = 0
i.e. a 2×2 linear system per pixel:
    [ α² + Ix²    Ix·Iy    ] [ u(k,l) ]   [ α²·ū(k,l) - Ix·It ]
    [ Ix·Iy       α² + Iy² ] [ v(k,l) ] = [ α²·v̄(k,l) - Iy·It ]
Horn & Schunck Algorithm
Solving this per-pixel linear system iteratively (Jacobi iteration) gives the
Update Rule:
    u(k,l)^(n+1) = ū(k,l)^(n) - Ix·(Ix·ū(k,l)^(n) + Iy·v̄(k,l)^(n) + It) / (α² + Ix² + Iy²)
    v(k,l)^(n+1) = v̄(k,l)^(n) - Iy·(Ix·ū(k,l)^(n) + Iy·v̄(k,l)^(n) + It) / (α² + Ix² + Iy²)
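Putting the pieces together, the iteration can be sketched in a few lines of numpy (a minimal, unoptimized illustration, not the presenter's implementation; the derivative and averaging stencils are simplified, with wrap-around boundaries):

```python
import numpy as np

def horn_schunck(I0, I1, alpha=1.0, n_iter=100):
    """Minimal Horn & Schunck sketch using the Jacobi update rule.
    I0, I1: consecutive grayscale frames (2-D float arrays).
    Returns the estimated flow (u, v). Illustrative only."""
    Ix = (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1)) / 2
    Iy = (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0)) / 2
    It = I1 - I0
    u = np.zeros_like(I0)
    v = np.zeros_like(I0)

    def avg(w):
        # 4-neighbour average, used as the local mean u_bar / v_bar
        return (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                np.roll(w, 1, 1) + np.roll(w, -1, 1)) / 4

    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        common = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ub - Ix * common
        v = vb - Iy * common
    return u, v
```

In practice the regularization weight α and the number of iterations must be chosen per sequence; small α trusts the data term more, large α yields smoother flow.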
Examples
Output
Problems with the method:
• Hard to find boundaries.
• Two approximations - less accurate.
So what are we trying to do?
First, instead of approximating the brightness constancy by a first-order Taylor expansion, we add one more order.
Second, and more important, instead of solving the Euler-Lagrange equations (which needed to be linearized), we write the functional as n×m equations and minimize it with standard minimization methods (gradient descent, quasi-Newton, and others).
1. Mathematica for symbolic calculations
We used the Mathematica Symbolic Toolbox for MATLAB, Version 2.0 (http://library.wolfram.com/infocenter/MathSource/5344/), where P are the image derivatives and u, v is the optical flow.
Quasi-Newton iteration: the problem was that computing the inverse of a non-sparse matrix takes a long time - and then multiplying two non-sparse matrices…
Gradient descent: none of the above problems, but a linear convergence rate, and convergence to a local minimum.
2. Matlab for numeric calculations
To speed things up (moving a looooooong string from Matlab to Mathematica takes a while), we identified the constant parts of the functional's gradient and compute them in Matlab.
for i = 2:imageSizeN-1
    for j = 2:imageSizeM-1
        gradC(i,j) = gradC(i,j) + alpha*(2*(-c(i-1,j)+c(i,j)) + 2*(-c(i,j-1)+c(i,j)) ...
            - 2*alpha*(-c(i,j)+c(i,j+1)) - 2*alpha*(-c(i,j)+c(i+1,j)) ...
            + 4*exp((-1+c(i,j)^2+s(i,j)^2)^2)*c(i,j)*(-1+c(i,j)^2+s(i,j)^2) ...
            + 2*(Ix(i,j)*m(i,j)+Ixz(i,j)*m(i,j)+2*Ixx(i,j)*c(i,j)*m(i,j)^2+Ixy(i,j)*m(i,j)^2*s(i,j)) ...
            *(Iz(i,j)+Izz(i,j)+Ix(i,j)*c(i,j)*m(i,j)+Ixz(i,j)*c(i,j)*m(i,j)+Ixx(i,j)*c(i,j)^2*m(i,j)^2 ...
            +Iy(i,j)*m(i,j)*s(i,j)+Iyz(i,j)*m(i,j)*s(i,j)+Ixy(i,j)*c(i,j)*m(i,j)^2*s(i,j)+Iyy(i,j)*m(i,j)^2*s(i,j)^2));
        gradS(i,j) = gradS(i,j) + 4*exp((-1+c(i,j)^2+s(i,j)^2)^2)*s(i,j)*(-1+c(i,j)^2+s(i,j)^2) ...
            + 2*(Iy(i,j)*m(i,j)+Iyz(i,j)*m(i,j)+Ixy(i,j)*c(i,j)*m(i,j)^2+2*Iyy(i,j)*m(i,j)^2*s(i,j)) ...
            *(Iz(i,j)+Izz(i,j)+Ix(i,j)*c(i,j)*m(i,j)+Ixz(i,j)*c(i,j)*m(i,j)+Ixx(i,j)*c(i,j)^2*m(i,j)^2 ...
            +Iy(i,j)*m(i,j)*s(i,j)+Iyz(i,j)*m(i,j)*s(i,j)+Ixy(i,j)*c(i,j)*m(i,j)^2*s(i,j)+Iyy(i,j)*m(i,j)^2*s(i,j)^2) ...
            + alpha*(2*(-s(i-1,j)+s(i,j))+2*(-s(i,j-1)+s(i,j))-2*alpha*(-s(i,j)+s(i,j+1))-2*alpha*(-s(i,j)+s(i+1,j)));
        gradM(i,j) = gradM(i,j) + beta*(2*(-m(i-1,j)+m(i,j))+2*(-m(i,j-1)+m(i,j))) ...
            - 2*beta*(-m(i,j)+m(i,j+1)) - 2*beta*(-m(i,j)+m(i+1,j)) ...
            + 2*(Ix(i,j)*c(i,j)+Ixz(i,j)*c(i,j) ...
            + 2*(Ixx(i,j)*c(i,j)^2*m(i,j)+Iy(i,j)*s(i,j)+Iyz(i,j)*s(i,j)+2*Ixy(i,j)*c(i,j)*m(i,j)*s(i,j)+2*Iyy(i,j)*m(i,j)*s(i,j)^2) ...
            *(Iz(i,j)+Izz(i,j)+Ix(i,j)*c(i,j)*m(i,j)+Ixz(i,j)*c(i,j)*m(i,j)+Ixx(i,j)*c(i,j)^2*m(i,j)^2 ...
            +Iy(i,j)*m(i,j)*s(i,j)+Iyz(i,j)*m(i,j)*s(i,j)+Ixy(i,j)*c(i,j)*m(i,j)^2*s(i,j)+Iyy(i,j)*m(i,j)^2*s(i,j)^2));
    end
end
3. Leaving Mathematica
Gradient Descent
All methods need a couple of unknown parameters, which have to be selected by an educated guess. (Condor to the rescue)
All the image derivatives and gradients are calculated in a linear manner.
So what is there to optimize?
First results
Takes about 45 min (high-end algorithms take around 20 sec…)
The Ground Truth.
Questions?