Computing SIFT Features



Centre for Mathematical Sciences, February 2010

Computer Vision, Laboratory session 2

In the first laboratory you downloaded datorseende.zip from the homepage:

http://www.maths.lth.se/matematiklth/vision/datorseende

Remember to run startup.m to get the correct paths. The files you are going to use can be found in the subdirectory lab2. This directory contains two images, a.jpg and b.jpg. Feel free to use other images if you want to. To relieve you of some of the tedious typing, we have collected the code in this document in a script called lab2_cheats.m.

In this session you will use the library VLFeat to find SIFT points and generate the descriptors. You will then have to write your own routines for matching the descriptors to each other and estimating a transformation between the two images.

First you will have to download and start VLFeat. Go to http://www.vlfeat.org/download.html and extract the binary package to a directory of your choice, e.g. H:\vlfeat. Then start Matlab, go to the H:\vlfeat\toolbox subdirectory and run vl_setup. Now you should see the following message:

** Welcome to the VLFeat Toolbox **

You will now be able to use VLFeat throughout this Matlab session.

Computing and displaying SIFT features

Now load the two images in Matlab and display them:

A = imread('a.jpg');

B = imread('b.jpg');

figure(1);

imshow(A);

figure(2);

imshow(B);

The images are partly overlapping. The goal is to place them on top of each other. Use VLFeat to compute SIFT features for both images:

[fA dA] = vl_sift( single(rgb2gray(A)) );

[fB dB] = vl_sift( single(rgb2gray(B)) );



size(dA)

size(dB)

The SIFT descriptors for the two images are contained in the columns of dA and dB, respectively. The descriptors are by default 128-dimensional.

Question 1. How many SIFT features did you find for the two images, respectively?

VLFeat contains routines for visualizing the computed SIFT features. Try executing

figure(1);

perm = randperm(size(fA,2)); %Permute the order randomly

sel = perm(1:50); %Select 50 random keypoints

h1 = vl_plotframe(fA(:,sel));

h2 = vl_plotframe(fA(:,sel));

set(h1,'color','k','linewidth',3);

set(h2,'color','y','linewidth',2);

%Also plot the histograms

h3 = vl_plotsiftdescriptor(dA(:,sel),fA(:,sel));

set(h3,'color','g');

The green frames show roughly how much of the image was used to create the feature descriptor.

Matching of SIFT descriptors

We now have a large number of 128-dimensional vectors that we would like to match against each other. We use the regular Euclidean distance, i.e. the distance between descriptor i from image A and descriptor j from image B is norm(dA(:,i) - dB(:,j)). For each descriptor in image A you should calculate the distance to all descriptors in image B, and if

    (distance to the second closest) / (distance to the closest) >= 1.5

the descriptor from A is matched to the closest one from B; otherwise the descriptor in A is not matched at all. This criterion avoids having too many false matches for points in image A which are not present in image B.

You should store your matches as rows in a matrix. Start by creating matches = zeros(0,2);

and whenever a match is found, add it to the matrix with matches = [matches; i j];.
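The matching loop might look like the following sketch: a straightforward, unoptimized implementation of the ratio test above (the variable names are only suggestions; bsxfun subtracts dA(:,i) from every column of dB):

```matlab
matches = zeros(0,2);
for i = 1:size(dA,2)
    %Euclidean distance from descriptor i in A to every descriptor in B
    d = sqrt(sum(bsxfun(@minus, double(dB), double(dA(:,i))).^2, 1));
    [dsorted, order] = sort(d);
    %Ratio test: accept only clearly unambiguous matches
    if dsorted(2)/dsorted(1) >= 1.5
        matches = [matches; i order(1)];
    end
end
```

Note that vl_sift returns the descriptors as uint8, so the cast to double is needed before computing distances.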

When you have computed the matches, verify the first 20 matches with the following code:

nmatch = size(matches,1);

XA = fA(1,:); %X coordinates for all keypoints from image A

YA = fA(2,:); %Y coordinates for all keypoints from image A



XB = fB(1,:); %X coordinates for all keypoints from image B

YB = fB(2,:); %Y coordinates for all keypoints from image B

I = matches(:,1); %indices for A's matched points

J = matches(:,2); %indices for B's matched points

figure(3);

for ind = 1:20 % view the first 20 matches

i = I(ind);

j = J(ind);

subplot(2,1,1);

imshow(A);

hold on

plot(XA(i),YA(i),'r*');

subplot(2,1,2);

imshow(B);

hold on

plot(XB(j),YB(j),'r*');

pause

end

Question 2. How many of the first 20 matches looked correct?

Homography Estimation

Now you should find a homography describing the transformation between the two images. A homography is described by

    [ xt ]   [ h11 h12 h13 ] [ xo ]
    [ yt ] = [ h21 h22 h23 ] [ yo ]
    [ wt ]   [ h31 h32 h33 ] [ wo ]

Since the homography matrix is only defined up to scale, it has 8 degrees of freedom and thus requires 4 point correspondences to estimate. You have been provided with a Matlab routine called estimate_homography to do this. It uses the linear method discussed in the lectures.

Because not all correspondences are correct, you need to use RANSAC to find a set of good correspondences (inliers). This is done by repeatedly choosing 4 matches at random, estimating the homography from these point pairs, and counting how many of the other matches agree. A match between A = (xA, yA) and B = (xB, yB) is said to agree with the transformation if the transformed point H(A) = (x'A, y'A) is close to B, e.g. the error

    sqrt( (x'A - xB)^2 + (y'A - yB)^2 )

is less than 5 pixels.

Write a routine to estimate a homography. Feel free to use the skeleton below if you want to.



best_ninliers = 0;

best_i = [];

best_j = [];

for iter = 1:100

%Pick 4 random indices

rperm = randperm(nmatch);

index = rperm(1:4);

i = I(index);

j = J(index);

%Estimate Homography from these points

H = ....

%Calculate error (a vector of length nmatch)

err = ...

%Count how many of the matches agree

inliers = err < 5;

ninliers = sum(inliers);

if ninliers > best_ninliers

best_ninliers = ninliers;

best_i = I(inliers);

best_j = J(inliers);

end

end

best_ninliers
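For reference, the err line in the skeleton can be filled in along these lines. This sketch assumes estimate_homography returns a 3x3 matrix H mapping homogeneous coordinates in A to coordinates in B; check the direction your own routine uses:

```matlab
pA = [XA(I); YA(I); ones(1,nmatch)]; %A's matched points, homogeneous
pT = H * pA;                         %transform into image B
xT = pT(1,:)./pT(3,:);               %dehomogenize
yT = pT(2,:)./pT(3,:);
err = sqrt((xT - XB(J)).^2 + (yT - YB(J)).^2); %pixel error per match
```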

Question 3. How many inliers did you find?

Image Stitching

Now that you have a set of inliers, the images can be placed on top of each other with the following code:

%Transform B to the coordinate system of A

TFORM = cp2tform( [XB(best_j)’ YB(best_j)’], ...

[XA(best_i)’ YA(best_i)’] ,...

'projective');

[B2 Xdata Ydata] = imtransform(B,TFORM);

%Make B2 larger to fit A

B2(1,900,1) = 0;

%Place images on top of each other

i=ceil(-Ydata(1));

j=ceil(-Xdata(1));



B2(i:i+size(A,1)-1,j:j+size(A,2)-1,:) = A;

figure(4);

imshow(B2);

Question 4. Does the result look good?

If you used your own images you might have to adjust the indexing to be able to actually display A and B2 in the same image. First display them separately to see if B has been transformed correctly.
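A slightly more general placement, replacing the hardcoded B2(1,900,1) = 0 above, is sketched below. It assumes A's top-left corner lands inside the output canvas (i.e. i, j >= 1) and that A and B2 have the same number of color channels:

```matlab
i = ceil(-Ydata(1));
j = ceil(-Xdata(1));
%Grow the canvas so both the transformed B and A fit
h = max(size(B2,1), i + size(A,1) - 1);
w = max(size(B2,2), j + size(A,2) - 1);
B2(h, w, size(A,3)) = 0; %zero-pad by indexing past the end
B2(i:i+size(A,1)-1, j:j+size(A,2)-1, :) = A;
figure(4);
imshow(B2);
```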

This is how panoramas are created automatically. A good task for the 3 hp project course next study period would be to improve the method presented here to handle more images and blend the images to make the borders invisible.
