Digital Image Analysis - Edge-Line Detection


Page 1: Digital Image Analysis - Edge-Line Detection

Edge/Line Detection

Page 2: Digital Image Analysis - Edge-Line Detection

Edge/Line Detection

Edge detection operators are often implemented with convolution masks, and most are based on discrete approximations to differential operators

Differential operations measure the rate of change in a function, in this case, the image brightness function
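As a minimal sketch of this idea (the brightness values below are invented for illustration), the discrete first difference of a 1-D brightness profile stands in for the derivative, and the single large difference marks the edge:

```python
import numpy as np

# A hypothetical 1-D brightness profile with a step (an ideal edge) after index 3
brightness = np.array([10, 10, 10, 10, 100, 100, 100, 100], dtype=float)

# Discrete approximation to the first derivative: differences between neighbors.
# The single large value marks where the brightness changes rapidly, i.e. the edge.
rate_of_change = np.diff(brightness)
print(rate_of_change)  # [ 0.  0.  0. 90.  0.  0.  0.]
```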

Page 3: Digital Image Analysis - Edge-Line Detection

A large change in image brightness over a short spatial distance indicates the presence of an edge

Some edge detection operators return orientation information (information about the direction of the edge), while others only return information about the existence of an edge at each point

Page 4: Digital Image Analysis - Edge-Line Detection

Edge detection methods are used as a first step in the line detection process

Edge detection is also used to find complex object boundaries by marking potential edge points corresponding to places in an image where rapid changes in brightness occur

Page 5: Digital Image Analysis - Edge-Line Detection

After the edge points have been marked, they can be merged to form lines and object outlines

The edge is where the sudden change occurs, and a line or curve is a continuous collection of edge points along a certain direction

The Hough transform is used for line finding, but can be extended to find arbitrary shapes

Page 6: Digital Image Analysis - Edge-Line Detection

Figure 4.2-1: Edges and lines are perpendicular

The line shown here is vertical and the edge direction is horizontal. In this case the transition from black to white occurs along a row; this is the edge direction, while the line itself runs vertically, along a column.

Page 7: Digital Image Analysis - Edge-Line Detection

Preprocessing of the image is required to eliminate, or at least minimize, noise effects

There is a tradeoff between sensitivity and accuracy in edge detection

The parameters that we can set to control how sensitive the edge detector is include the size of the edge detection mask and the value of the gray level threshold

Page 8: Digital Image Analysis - Edge-Line Detection

A larger mask or a higher gray level threshold will tend to reduce noise effects, but may result in a loss of valid edge points

Edge detection operators are based on the idea that edge information in an image is found by looking at the relationship a pixel has with its neighbors

If a pixel's gray level value is similar to those around it, there is probably not an edge at that point

Page 9: Digital Image Analysis - Edge-Line Detection

Figure 4.2-2: Noise in images requires tradeoffs between sensitivity and accuracy for edge detectors

a) Noisy image

b) Edge detector too sensitive, many edge points found that are attributable to noise

Page 10: Digital Image Analysis - Edge-Line Detection

Figure 4.2-2: Noise in images requires tradeoffs between sensitivity and accuracy for edge detectors (contd)

c) Edge detector not sensitive enough, loss of valid edge points

d) Reasonable result obtained by compromise between sensitivity and accuracy, may mitigate noise via postprocessing (original lizard photo courtesy of Mark Zuke)

Page 11: Digital Image Analysis - Edge-Line Detection

However, if a pixel has neighbors with widely varying gray levels, it may represent an edge point

An edge can also be defined by a discontinuity in gray level values

Edges may exist anywhere and be defined by color, texture, shadow, etc., and may not necessarily separate real world objects

Page 12: Digital Image Analysis - Edge-Line Detection

Figure 4.2-3: Image objects may be parts of real objects

a) Butterfly image (original photo courtesy of Mark Zuke)

b) Butterfly after edge detection, note that image objects are separated by color, or gray level, changes

Page 13: Digital Image Analysis - Edge-Line Detection

Figure 4.2-3: Image objects may be parts of real objects (contd)

c) Image of objects in kitchen corner

Page 14: Digital Image Analysis - Edge-Line Detection

Figure 4.2-3: Image objects may be parts of real objects (contd)

d) Image after edge detection, note that some image objects are created by reflections in the image due to lighting conditions and object properties

Page 15: Digital Image Analysis - Edge-Line Detection

A real edge in an image tends to change slowly, compared to the ideal edge model which is abrupt

This gradual change in real edges is a minor form of blurring caused by the imaging device, the lenses, or the lighting, and is typical for real-world (as opposed to computer generated) images

Page 16: Digital Image Analysis - Edge-Line Detection
Page 17: Digital Image Analysis - Edge-Line Detection

Gradient Operators

Gradient operators are based on the idea of using the first or second derivative of the gray level function as an edge detector

The first derivative will mark edge points, with steeper gray level changes providing stronger edge points (larger magnitudes)

Page 18: Digital Image Analysis - Edge-Line Detection
Page 19: Digital Image Analysis - Edge-Line Detection

The second derivative returns two impulses, one on either side of the edge, which allows us to measure edge location to sub-pixel accuracy

Sub-pixel accuracy refers to the fact that the zero-crossing may be at a fractional pixel distance, for example halfway between two pixels, so we could say the edge is at, for instance, c = 75.5
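As a hedged worked example (the response values are invented for illustration): if the second-derivative response is +40 at column c = 75 and -40 at column c = 76, linear interpolation places the zero-crossing at c = 75 + 40/(40 + 40) = 75.5, i.e. halfway between the two pixels.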

Page 20: Digital Image Analysis - Edge-Line Detection

Roberts operator:

A simple approximation to the first derivative

Marks edge points only; it does not return any information about the edge orientation

Simplest of the edge detection operators and will work best with binary images

Page 21: Digital Image Analysis - Edge-Line Detection

There are two forms of the Roberts operator:

1. The square root of the sum of the squares of the differences between diagonally adjacent pixels

2. The sum of the magnitudes (absolute values) of the differences between diagonally adjacent pixels

The second form of the equation is often used in practice due to its computational efficiency
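A minimal sketch of both forms, assuming the common diagonal-difference definition over each 2x2 neighborhood (the exact pixel indexing convention varies between texts):

```python
import numpy as np

def roberts(img):
    """Apply both forms of the Roberts operator to a 2-D grayscale array."""
    img = img.astype(float)
    # The two diagonal differences in each 2x2 neighborhood
    d1 = img[:-1, :-1] - img[1:, 1:]   # I(r, c) - I(r+1, c+1)
    d2 = img[1:, :-1] - img[:-1, 1:]   # I(r+1, c) - I(r, c+1)
    form1 = np.sqrt(d1**2 + d2**2)     # square-root form
    form2 = np.abs(d1) + np.abs(d2)    # magnitude-sum form (cheaper to compute)
    return form1, form2
```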

Page 22: Digital Image Analysis - Edge-Line Detection

Sobel operator:

Approximates the gradient by using a row and a column mask, which approximates the first derivative in each direction

The Sobel edge detection masks find edges in both the horizontal and vertical directions, and then combine this information into a single metric

Page 23: Digital Image Analysis - Edge-Line Detection

The Sobel masks are as follows:

VERTICAL EDGE    HORIZONTAL EDGE

Page 24: Digital Image Analysis - Edge-Line Detection

The Sobel masks are each convolved with the image

At each pixel location there are two numbers:

1. s1, corresponding to the result from the vertical edge mask, and

2. s2, from the horizontal edge mask

Page 25: Digital Image Analysis - Edge-Line Detection

s1 and s2 are used to compute two metrics, the edge magnitude and the edge direction, defined as follows:

1. EDGE MAGNITUDE: √(s1² + s2²)

2. EDGE DIRECTION: tan⁻¹(s1/s2)
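A minimal sketch using the standard 3x3 Sobel masks (which mask is labeled "vertical" versus "horizontal" varies by convention; here the vertical edge mask responds to gray level changes across columns):

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel masks
SOBEL_VERTICAL = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=float)      # vertical edge mask
SOBEL_HORIZONTAL = np.array([[-1, -2, -1],
                             [ 0,  0,  0],
                             [ 1,  2,  1]], dtype=float)  # horizontal edge mask

def sobel(img):
    """Return (edge magnitude, edge direction) from the two Sobel mask responses."""
    img = img.astype(float)
    s1 = convolve(img, SOBEL_VERTICAL)    # response of the vertical edge mask
    s2 = convolve(img, SOBEL_HORIZONTAL)  # response of the horizontal edge mask
    magnitude = np.sqrt(s1**2 + s2**2)
    direction = np.arctan2(s1, s2)        # tan^-1(s1/s2), robust to s2 = 0
    return magnitude, direction
```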

Page 26: Digital Image Analysis - Edge-Line Detection

Prewitt operator:

Approximates the gradient by using a row and a column mask, which approximates the first derivative in each direction

The Prewitt edge detection masks look for edges in both the horizontal and vertical directions, and then combine this information into a single metric

Page 27: Digital Image Analysis - Edge-Line Detection

The Prewitt masks are as follows:

VERTICAL EDGE    HORIZONTAL EDGE

Page 28: Digital Image Analysis - Edge-Line Detection

The Prewitt masks are each convolved with the image

At each pixel location there are two numbers:

1. p1, corresponding to the result from the vertical edge mask, and

2. p2, from the horizontal edge mask

Page 29: Digital Image Analysis - Edge-Line Detection

p1 and p2 are used to compute two metrics, the edge magnitude and the edge direction, defined as follows:

1. EDGE MAGNITUDE: √(p1² + p2²)

2. EDGE DIRECTION: tan⁻¹(p1/p2)
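The Prewitt computation follows the Sobel sketch above exactly, with only the mask coefficients changed; the standard masks are:

```python
import numpy as np

# Standard 3x3 Prewitt masks: same structure as the Sobel masks,
# but every nonzero coefficient is 1 (no extra weight on the center row/column)
PREWITT_VERTICAL = np.array([[-1, 0, 1],
                             [-1, 0, 1],
                             [-1, 0, 1]], dtype=float)
PREWITT_HORIZONTAL = np.array([[-1, -1, -1],
                               [ 0,  0,  0],
                               [ 1,  1,  1]], dtype=float)
```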

Page 30: Digital Image Analysis - Edge-Line Detection

The Prewitt is simpler to calculate than the Sobel, since the only coefficients are 1’s, which makes it easier to implement in hardware

However, the Sobel is defined to place emphasis on the pixels closer to the mask center, which may be desirable for some applications

Page 31: Digital Image Analysis - Edge-Line Detection

Laplacian operators:

These are two-dimensional discrete approximations to the second derivative

Implemented by applying one of the following convolution masks:
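The masks on this slide were not captured in the transcript; as a hedged sketch, two commonly used Laplacian-type edge detection masks (the 4-neighbor and 8-neighbor second-derivative approximations; the slide's own masks and sign convention may differ) are:

```python
import numpy as np

# Two commonly used 3x3 Laplacian-type masks for edge detection
LAPLACIAN_4 = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]], dtype=float)   # 4-neighbor approximation
LAPLACIAN_8 = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)   # 8-neighbor approximation
```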

Page 32: Digital Image Analysis - Edge-Line Detection

The Laplacian masks are rotationally symmetric, which means edges at all orientations contribute to the result

Applied by selecting one mask and convolving it with the image

The sign of the result (positive or negative) from two adjacent pixel locations provides directional information, and tells us which side of the edge is brighter

Page 33: Digital Image Analysis - Edge-Line Detection

These masks differ from the Laplacian-type previously described in that the center coefficients have been decreased by one, as we are trying to find edges, and are not interested in the image itself

If we increase the center coefficient by one, it is equivalent to adding the original image to the edge detected image

Page 34: Digital Image Analysis - Edge-Line Detection

Compass Masks:

The Kirsch and Robinson edge detection masks are called compass masks since they are defined by taking a single mask and rotating it to the eight major compass orientations: North, Northwest, West, Southwest, South, Southeast, East, and Northeast

Page 35: Digital Image Analysis - Edge-Line Detection

The Kirsch compass masks are defined as follows:
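The masks themselves were not captured in the transcript; a minimal sketch, assuming one commonly published Kirsch base mask and generating the other seven by rotating the outer ring (which compass direction each index corresponds to depends on the convention used), together with the max-response rule described on the next slide:

```python
import numpy as np
from scipy.ndimage import convolve

def rotate_mask(mask):
    """Rotate a 3x3 compass mask by 45 degrees by shifting its outer ring one step."""
    ring_idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    ring = [mask[r, c] for r, c in ring_idx]
    ring = ring[-1:] + ring[:-1]                  # shift the ring by one position
    out = mask.copy()
    for (r, c), v in zip(ring_idx, ring):
        out[r, c] = v
    return out

# One commonly published Kirsch base mask; rotations give the eight masks k0..k7
k0 = np.array([[-3, -3, 5],
               [-3,  0, 5],
               [-3, -3, 5]], dtype=float)
kirsch_masks = [k0]
for _ in range(7):
    kirsch_masks.append(rotate_mask(kirsch_masks[-1]))

def compass_edges(img, masks):
    """Edge magnitude = max response over all masks; direction = index of that mask."""
    img = img.astype(float)
    responses = np.stack([convolve(img, m) for m in masks])
    magnitude = responses.max(axis=0)
    direction = responses.argmax(axis=0)          # which compass mask won at each pixel
    return magnitude, direction
```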

Page 36: Digital Image Analysis - Edge-Line Detection

The edge magnitude is defined as the maximum value found by the convolution of each of the masks with the image

The edge direction is defined by the mask that produces the maximum magnitude

For instance, k0 corresponds to a horizontal edge, whereas k5 corresponds to a diagonal edge in the Northeast/Southwest direction

Page 37: Digital Image Analysis - Edge-Line Detection

The Robinson compass masks are used in a manner similar to the Kirsch masks

They are easier to implement, as they rely only on coefficients of 0, 1, and 2, and are symmetrical about their directional axis (the axis with the zeros, which corresponds to the line direction)

Page 38: Digital Image Analysis - Edge-Line Detection

The Robinson masks are as follows:

Page 39: Digital Image Analysis - Edge-Line Detection

The edge magnitude is defined as the maximum value found by the convolution of each of the masks with the image

The edge direction is defined by the mask that produces the maximum magnitude

Any of the edge detection masks can be extended by rotating them in a manner like the compass masks, which allows us to extract explicit information about edges in any direction

Page 40: Digital Image Analysis - Edge-Line Detection

Frei-Chen masks

They form a complete set of basis vectors, which means any 3x3 subimage can be represented as a weighted sum of the nine Frei-Chen masks

The weights are found by projecting the subimage onto each basis vector

Page 41: Digital Image Analysis - Edge-Line Detection

The projection process is similar to the convolution process in that both overlay the mask on the image, multiply coincident terms, and sum the results – a vector inner product

The Frei-Chen masks can be grouped into a set of four masks for an edge subspace, four masks for a line subspace, and one mask for an average subspace

Page 42: Digital Image Analysis - Edge-Line Detection
Page 43: Digital Image Analysis - Edge-Line Detection
Page 44: Digital Image Analysis - Edge-Line Detection

To get our image back we multiply the weights (projection values) by the basis vectors (the Frei-Chen masks)

This illustrates what is meant by a complete set of basis vectors allowing us to represent a subimage by a weighted sum

Page 45: Digital Image Analysis - Edge-Line Detection

To use the Frei-Chen masks for edge detection, select a particular subspace of interest and find the relative projection of the image onto the particular subspace

The set {e} consists of the masks of interest

The (Is, fk) notation refers to the process of taking the vector inner product of the subimage Is with the mask fk

The advantage of this method is that we can select particular edge or line masks of interest, and consider the projection of those masks only
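A minimal sketch of the projection idea, assuming the commonly published Frei-Chen basis (four edge masks, four line masks, one average mask); the relative projection onto a chosen subspace {e} is the square root of the ratio of the subspace energy to the total projection energy:

```python
import numpy as np

s2 = np.sqrt(2.0)

# The nine Frei-Chen masks as commonly published: f1-f4 span the edge subspace,
# f5-f8 the line subspace, and f9 the average subspace
FREI_CHEN = [
    np.array([[1,  s2,  1], [0,  0,  0], [-1, -s2, -1]]) / (2 * s2),   # f1 edge
    np.array([[1,   0, -1], [s2, 0, -s2], [1,   0, -1]]) / (2 * s2),   # f2 edge
    np.array([[0,  -1, s2], [1,  0, -1], [-s2,  1,  0]]) / (2 * s2),   # f3 edge
    np.array([[s2, -1,  0], [-1, 0,  1], [0,   1, -s2]]) / (2 * s2),   # f4 edge
    np.array([[0,  1,  0], [-1, 0, -1], [0,  1,  0]]) / 2.0,           # f5 line
    np.array([[-1, 0,  1], [0,  0,  0], [1,  0, -1]]) / 2.0,           # f6 line
    np.array([[1, -2,  1], [-2, 4, -2], [1, -2,  1]]) / 6.0,           # f7 line
    np.array([[-2, 1, -2], [1,  4,  1], [-2, 1, -2]]) / 6.0,           # f8 line
    np.ones((3, 3)) / 3.0,                                             # f9 average
]

def relative_projection(subimage, subspace=(0, 1, 2, 3)):
    """Relative projection of a 3x3 subimage onto a chosen Frei-Chen subspace.

    Values near 1 mean the subimage lies mostly in that subspace (for the
    default edge subspace, it looks like an edge).
    """
    Is = np.asarray(subimage, dtype=float)
    weights = [np.sum(Is * fk) for fk in FREI_CHEN]        # the (Is, fk) inner products
    total = sum(w * w for w in weights)
    in_subspace = sum(weights[k] ** 2 for k in subspace)
    return np.sqrt(in_subspace / total) if total > 0 else 0.0
```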

Page 46: Digital Image Analysis - Edge-Line Detection
Page 47: Digital Image Analysis - Edge-Line Detection

Edge Detector Performance

Objective and subjective evaluations can be useful

Objective metrics allow us to compare different techniques with fixed analytical methods

Subjective methods often have unpredictable results

Page 48: Digital Image Analysis - Edge-Line Detection

To develop a performance metric for edge detection operators, we need to consider the types of errors that can occur and define what constitutes success

Success criteria used in the development of the Canny algorithm:

Detection – find all real edges, no false edges
Localization – edges found in the correct location
Single response – no multiple edges found for a single edge

Page 49: Digital Image Analysis - Edge-Line Detection

Pratt’s Figure of Merit (FOM):

Pratt first considered the types of errors that can occur with edge detection

Types of errors:

1. Missing valid edge points

2. Classifying noise as valid edge points

3. Smearing of edges

If these errors do not occur, we can say that we have achieved success

Page 50: Digital Image Analysis - Edge-Line Detection

Figure 4.2-10: Errors in Edge Detection

a) Original image

b) Missed edge points

c) Noise misclassified as edge points

d) Smeared edge

Page 51: Digital Image Analysis - Edge-Line Detection

The Pratt FOM is defined as follows:
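The formula itself was not captured in this transcript; the standard definition, with I_I the number of ideal edge points, I_F the number of found edge points, d_i the distance from the i-th found edge point to the nearest ideal edge point, and α a scaling constant (typically 1/9), is:

FOM = [1 / max(I_I, I_F)] · Σ (for i = 1 to I_F) 1 / (1 + α·d_i²)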

Page 52: Digital Image Analysis - Edge-Line Detection

For this metric, FOM will be 1 for a perfect edge

Normalizing to the maximum of the ideal and found edge points guarantees a penalty for smeared edges or missing edge points

In general, this metric assigns a better rating to smeared edges than to offset or missing edges

Page 53: Digital Image Analysis - Edge-Line Detection

The distance measure can be defined in one of three ways:

1. City block distance (4-connectivity): d = |r1 - r2| + |c1 - c2|

2. Chessboard distance (8-connectivity): d = max(|r1 - r2|, |c1 - c2|)

3. Euclidean distance (physical distance): d = √((r1 - r2)² + (c1 - c2)²)
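A minimal sketch of the FOM computation, assuming binary ideal and found edge maps, the Euclidean distance to the nearest ideal edge point, and α = 1/9 (scipy's distance transform is used as a convenience):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(ideal_edges, found_edges, alpha=1.0 / 9.0):
    """Pratt Figure of Merit between two binary edge maps (True/1 = edge point)."""
    ideal = np.asarray(ideal_edges, dtype=bool)
    found = np.asarray(found_edges, dtype=bool)
    n_ideal, n_found = ideal.sum(), found.sum()
    if n_ideal == 0 or n_found == 0:
        return 0.0
    # Euclidean distance from every pixel to the nearest ideal edge point
    dist_to_ideal = distance_transform_edt(~ideal)
    d = dist_to_ideal[found]                       # distance for each found edge point
    return float(np.sum(1.0 / (1.0 + alpha * d**2)) / max(n_ideal, n_found))
```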

Page 54: Digital Image Analysis - Edge-Line Detection
Page 55: Digital Image Analysis - Edge-Line Detection
Page 56: Digital Image Analysis - Edge-Line Detection

Figure 4.2-11: Pratt Figure of Merit

a) Original test image

b) Test image with added Gaussian noise with a variance of 25

c) Test image with added Gaussian noise, variance 100

d) A 16x16 subimage cropped from image (c), enlarged to show that the edge is not as easy to find at the pixel level

Note: The original test image has a gray level of 127 on the left and 102 on the right

Page 57: Digital Image Analysis - Edge-Line Detection

Figure 4.2-12: Pratt Figure of Merit Images

a) Gaussian noise, variance of 50

b) A 16x16 subimage cropped

c) Roberts FOM = 0.4977

d) Sobel FOM = 0.853

e) Kirsch FOM = 0.851

f) Canny FOM = 0.963

Page 58: Digital Image Analysis - Edge-Line Detection

Figure 4.2-12: Pratt Figure of Merit Images

a) Gaussian noise, variance of 100

b) A 16x16 subimage cropped

c) Roberts FOM = 0.1936

d) Sobel FOM = 0.470

e) Kirsch FOM = 0.640

f) Canny FOM = 0.956

Page 59: Digital Image Analysis - Edge-Line Detection

Figure 4.2-13: Edge Detection Examples

a) Original image

b) Roberts operator

c) Sobel operator

d) Prewitt operator

Page 60: Digital Image Analysis - Edge-Line Detection

Figure 4.2-13: Edge Detection Examples (contd)

e) Laplacian operator

f) Kirsch operator

g) Robinson operator

Page 61: Digital Image Analysis - Edge-Line Detection

Edge detector results are not as good when noise is added to the image

To mitigate noise effects, the image can be preprocessed with mean, or averaging, spatial filters

Additionally, the size of the edge detection masks can be extended for noise mitigation

Page 62: Digital Image Analysis - Edge-Line Detection

An example of this method is to extend the Prewitt edge mask as follows:

Can be rotated and used like the Prewitt for edge magnitude and direction

Called boxcar operators and can be extended; 7x7, 9x9 and 11x11 are typical
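The extended mask on this slide was not captured; a sketch of the boxcar idea, assuming the straightforward extension of the Prewitt vertical edge mask (for 7x7, each row is [-1 -1 -1 0 1 1 1]):

```python
import numpy as np

def boxcar_prewitt(size=7):
    """Build an extended (boxcar) Prewitt vertical-edge mask of odd size."""
    assert size % 2 == 1, "mask size must be odd"
    half = size // 2
    mask = np.zeros((size, size))
    mask[:, :half] = -1       # left half of each row
    mask[:, half + 1:] = 1    # right half of each row; center column stays 0
    return mask
```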

Page 63: Digital Image Analysis - Edge-Line Detection

The Sobel operator can be extended in a similar manner:

Can be rotated and used for edge magnitude and direction like the 3x3 Sobel

Page 64: Digital Image Analysis - Edge-Line Detection

Truncated pyramid operator can be obtained by approximating a linear distribution:

This operator provides weights that decrease from the center pixel, which will smooth the result in a more natural manner

Used like the Sobel for magnitude and direction
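The mask itself was not captured in the transcript; one plausible 7x7 vertical-edge version of the truncated pyramid under the linear-weighting idea described above (the exact coefficients vary by reference) is:

```python
import numpy as np

# A 7x7 truncated pyramid mask of the general form described above:
# coefficient magnitudes ramp up linearly toward the center and then truncate.
# Exact values vary by reference; this is one plausible choice.
TRUNCATED_PYRAMID_7 = np.array([
    [-1, -1, -1, 0, 1, 1, 1],
    [-1, -2, -2, 0, 2, 2, 1],
    [-1, -2, -3, 0, 3, 2, 1],
    [-1, -2, -3, 0, 3, 2, 1],
    [-1, -2, -3, 0, 3, 2, 1],
    [-1, -2, -2, 0, 2, 2, 1],
    [-1, -1, -1, 0, 1, 1, 1],
], dtype=float)
```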

Page 65: Digital Image Analysis - Edge-Line Detection

Figure 4.2-14: Edge Detection examples – Noise

a) Original image

b) Image with added noise

Page 66: Digital Image Analysis - Edge-Line Detection

Figure 4.2-14: Edge Detection examples – Noise (contd)

c) Sobel with a 3x3 mask

d) Sobel with a 7x7 mask

Page 67: Digital Image Analysis - Edge-Line Detection

Figure 4.2-14: Edge Detection examples – Noise (contd)

e) Prewitt with a 3x3 mask

f) Prewitt with a 7x7 mask

Page 68: Digital Image Analysis - Edge-Line Detection

Figure 4.2-14: Edge Detection examples – Noise (contd)

g) Result from applying a threshold to the 3x3 Prewitt

h) Result from applying a threshold to the 7x7 Prewitt

Page 69: Digital Image Analysis - Edge-Line Detection

Figure 4.2-14: Edge Detection examples – Noise (contd)

i) Truncated pyramid with a 7x7 mask

j) Results from applying a threshold to the 7x7 truncated pyramid

Page 70: Digital Image Analysis - Edge-Line Detection

Figure 4.2-15: Advanced Edge Detectors with Noisy Images

a) Original image with salt-and-pepper noise added with a probability of 3% each

b) Canny results, parameters: % Low Threshold = 1, % High Threshold = 2, Variance = 2

Page 71: Digital Image Analysis - Edge-Line Detection

Figure 4.2-15: Advanced Edge Detectors with Noisy Images (contd)

c) Frei-Chen results, parameters: Gaussian2 prefilter, max(edge,line), post-threshold = 190

d) Shen-Castan results, parameters: % Low Threshold = 1, % High Threshold = 2, Smooth factor = 0.9, Window size = 7, Thin Factor = 1

Page 72: Digital Image Analysis - Edge-Line Detection

Figure 4.2-15: Advanced Edge Detectors with Noisy Images (contd)

e) Original image with zero-mean Gaussian noise with a variance of 200 added

f) Canny results, parameters: % Low Threshold = 1, % High Threshold = 1, Variance = 0.8

Page 73: Digital Image Analysis - Edge-Line Detection

Figure 4.2-15: Advanced Edge Detectors with Noisy Images (contd)

g) Frei-Chen results, parameters: Gaussian2 prefilter, max(edge,line), post-threshold = 70

h) Shen-Castan results, parameters: % Low Threshold = 1, % High Threshold = 2, Smooth factor = 0.9, Window size = 7, Thin Factor = 1

Page 74: Digital Image Analysis - Edge-Line Detection

Hough Transform

Designed specifically to find lines, where a line is a collection of edge points that are adjacent and have the same direction

The Hough transform takes a collection of n edge points and efficiently finds all the lines on which the edge points lie

Without the Hough algorithm it takes about n³ comparisons to compare all points to all lines

Page 75: Digital Image Analysis - Edge-Line Detection

The advantage of the Hough transform is that it provides parameters to reduce the search time for finding lines, with a given set of edge points

The parameters allow for the quantization of the line search space

These parameters can be adjusted based on application requirements

Page 76: Digital Image Analysis - Edge-Line Detection

The normal (perpendicular) representation of a line is given as:

ρ = r cos(θ) + c sin(θ)

Page 77: Digital Image Analysis - Edge-Line Detection

Figure 4.2-17: Hough space

The Hough transform works by quantizing ρ and θ

Page 78: Digital Image Analysis - Edge-Line Detection

The algorithm consists of three primary steps:

1. Define the desired quantization of ρ and θ and create an accumulator array A(ρ, θ), initialized to zero

2. For each marked edge point (r, c), compute ρ = r cos(θ) + c sin(θ) for each quantized value of θ and increment the corresponding accumulator cell

3. Examine the accumulator counts; quantization blocks with many hits correspond to lines in the image
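A minimal sketch of the accumulator step, assuming a binary edge image, a 1-degree angle quantization, and a ρ quantization controlled by delta_rho (the names here are illustrative, not CVIPtools parameters):

```python
import numpy as np

def hough_accumulate(edge_img, delta_rho=1.0):
    """Vote in quantized (rho, theta) space for every marked edge point.

    Peaks in the returned accumulator correspond to lines
    rho = r*cos(theta) + c*sin(theta) in the image.
    """
    rows, cols = edge_img.shape
    thetas = np.deg2rad(np.arange(0, 180))              # 1-degree quantization
    rho_max = np.hypot(rows, cols)
    rhos = np.arange(-rho_max, rho_max, delta_rho)      # rho quantization
    accumulator = np.zeros((len(rhos), len(thetas)), dtype=int)

    edge_r, edge_c = np.nonzero(edge_img)               # the marked edge points
    for r, c in zip(edge_r, edge_c):
        rho = r * np.cos(thetas) + c * np.sin(thetas)   # one rho for each theta
        rho_idx = np.digitize(rho, rhos) - 1            # which rho block it falls in
        accumulator[rho_idx, np.arange(len(thetas))] += 1
    return accumulator, rhos, thetas
```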

Page 79: Digital Image Analysis - Edge-Line Detection
Page 80: Digital Image Analysis - Edge-Line Detection

When this process is completed, the number of hits in each block corresponds to the number of pixels on the line as defined by the values of ρ and θ in that block

The advantage of large quantization blocks is that the search time is reduced, but the price paid is less line resolution in the image space

Page 81: Digital Image Analysis - Edge-Line Detection
Page 82: Digital Image Analysis - Edge-Line Detection

A threshold is selected and the quantization blocks that contain more points than the threshold are examined

Next, line continuity is considered by searching for gaps in the line by finding the distance between points on the line (remember the points on a line correspond to points recorded in the block)

When this process is completed, the lines are marked in the output image

Page 83: Digital Image Analysis - Edge-Line Detection

A more advanced post-processing algorithm is implemented in CVIPtools with the Hough transform

The algorithm works as follows:

1. Perform the Hough transform on the input image containing marked edge points, which we will call image1. The result, image2, is an image in Hough space quantized by the parameter delta length (ρ) and delta angle (fixed at one degree in CVIPtools)

Page 84: Digital Image Analysis - Edge-Line Detection

2. Threshold image2 by using the parameter line pixels, which is the minimum number of pixels in a line (in one quantization box in Hough space), and do the inverse Hough transform. This result, image3, is a mask image with lines found in the input image at the specified angle(s), illustrated in Figure 4.2-20c. Note that these lines span the entire image

3. Perform a logical operation, image1 AND image3. The result is image4, see Figure 4.2-20d

Page 85: Digital Image Analysis - Edge-Line Detection

4. Apply an edge linking process to image4 to connect line segments; specifically we implemented a snake eating algorithm. This works as follows:

a) A line segment is considered to be a snake. It can eat another snake within connect distance along line angles, and becomes longer (see Figure 4.2-20e). This will connect disjoint line segments

b) If a snake is too small, less than segment length, it will become extinct. This will remove small segments. The output from the snake eating algorithm is the final result, illustrated in Figure 4.2-20f