
Edges and Contours – Chapter 7

Visual perception

• We don’t need to see all the color detail to recognize the scene content of an image

• That is, some data provides critical information for recognition, while other data only makes the image look “good”

Visual perception

• Sometimes we see things that are not really there!!!

Kanizsa Triangle (and variants)

Edges

• Edges (single points) and contours (chains of edges) play a dominant role in (various) biological vision systems

– Edges are spatial positions in the image where the intensity changes along some orientation (direction)

– The larger the change in intensity, the stronger the edge

– The basis of edge detection is the first derivative of the image intensity “function”

First derivative – continuous f(x)

• Slope of the line tangent to the function at a point

$f'(x) = \frac{df}{dx}(x)$

First derivative – discrete f(u)

• Slope of the line joining the two points adjacent to the selected point, $u-1$ and $u+1$

$f'(u) \approx \frac{f(u+1) - f(u-1)}{2}$
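A minimal NumPy sketch of this central-difference estimate (not part of the original slides; the function name and example signal are illustrative):

```python
import numpy as np

def first_derivative(f):
    """Central-difference estimate f'(u) ~ (f(u+1) - f(u-1)) / 2.

    Only interior samples are estimated; the two border samples are
    left at 0 because they lack a neighbor on one side.
    """
    f = np.asarray(f, dtype=float)
    df = np.zeros_like(f)
    df[1:-1] = (f[2:] - f[:-2]) / 2.0
    return df

# Example: a ramp has a constant slope of 1 in its interior.
print(first_derivative([0, 1, 2, 3, 4, 5]))   # [0. 1. 1. 1. 1. 0.]
```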

Discrete edge detection

• Formulated as two partial derivatives

– Horizontal gradients yield vertical edges

– Vertical gradients yield horizontal edges

– Upon detection we can learn the magnitude (strength) and orientation of the edge

• More in a minute…

$\frac{\partial I}{\partial u}(u,v) \qquad \frac{\partial I}{\partial v}(u,v)$
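A small sketch of these two partial derivatives using central differences, assuming the image is a 2-D NumPy array indexed as I[v, u] (names and test image are illustrative):

```python
import numpy as np

def partial_derivatives(I):
    """Central-difference estimates of dI/du (horizontal) and dI/dv (vertical).

    A horizontal gradient dI/du responds to vertical edges;
    a vertical gradient dI/dv responds to horizontal edges.
    """
    I = np.asarray(I, dtype=float)
    dI_du = np.zeros_like(I)
    dI_dv = np.zeros_like(I)
    dI_du[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2.0   # along u (columns)
    dI_dv[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2.0   # along v (rows)
    return dI_du, dI_dv

# A vertical step edge shows up only in dI/du.
I = np.zeros((5, 6)); I[:, 3:] = 1.0
dI_du, dI_dv = partial_derivatives(I)
```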

NOTE

• In the following images, only the positive magnitude edges are shown

• This is an artifact of the ImageJ Process->Filters->Convolve… command

• Implemented as an edge operator, the code would have to compensate for this

Detecting edges – sharp image

[Figure: the original image, its vertical edges from the horizontal difference kernel $[-0.5 \;\; 0 \;\; 0.5]$, and its horizontal edges from the same kernel applied vertically, $[-0.5 \;\; 0 \;\; 0.5]^T$]

Detecting edges – blurry image

[Figure: the same difference kernels applied to a blurred version of the image]

The problem…

• Localized (small neighborhood) detectors are susceptible to noise

The solution

• Extend the neighborhood covered by the filter
– Make the filter 2-dimensional

• Perform a smoothing step prior to the derivative
– Since the operators are linear filters, we can combine the smoothing and derivative operations into a single convolution
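A small sketch of that last point, using an illustrative 3-tap box smoother and a central-difference kernel with scipy.ndimage.convolve; because convolution is linear and associative, the two passes collapse into one:

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative 1-D kernels: a 3-tap box smoother and a central difference.
smooth = np.array([1.0, 1.0, 1.0]) / 3.0
deriv  = np.array([-0.5, 0.0, 0.5])

# Pre-combine the two kernels into a single 5-tap kernel.
combined = np.convolve(smooth, deriv)        # [-1/6, -1/6, 0, 1/6, 1/6]

rng = np.random.default_rng(0)
I = rng.random((8, 8))

# Smoothing then differentiating vs. one pass with the combined kernel.
two_pass = convolve(convolve(I, smooth[None, :], mode='nearest'),
                    deriv[None, :], mode='nearest')
one_pass = convolve(I, combined[None, :], mode='nearest')

# Away from the image border the two results agree.
print(np.allclose(two_pass[:, 2:-2], one_pass[:, 2:-2]))   # True
```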

Edge operator

• The following edge operators produce two results
– A “magnitude” edge map (image)
– An “orientation” edge map (image)

$E(u,v) = \sqrt{D_x^2(u,v) + D_y^2(u,v)}$

$\Phi(u,v) = \tan^{-1}\!\left(\frac{D_y(u,v)}{D_x(u,v)}\right)$
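A minimal sketch of these two maps, assuming the gradient images D_x and D_y have already been computed (for instance with the Prewitt or Sobel kernels that follow); np.arctan2 is used instead of a literal tan⁻¹ so that D_x = 0 is handled safely:

```python
import numpy as np

def edge_maps(Dx, Dy):
    """Magnitude E(u,v) = sqrt(Dx^2 + Dy^2) and orientation Phi(u,v)."""
    E = np.hypot(Dx, Dy)          # edge strength
    Phi = np.arctan2(Dy, Dx)      # edge orientation in radians, (-pi, pi]
    return E, Phi
```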

Prewitt operator

• 3x3 neighborhood

• Equivalent to averaging followed by derivative
– Note that these are convolutions, not matrix multiplications

$H_x^P = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \qquad H_y^P = \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$

$H_x^P = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} * \begin{bmatrix} -1 & 0 & 1 \end{bmatrix} \qquad H_y^P = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} * \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}$
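A sketch of the Prewitt operator built from the kernels above, using scipy.ndimage.convolve (which performs a true convolution, matching the note above); the helper name and test image are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt kernels, applied by convolution (not matrix multiplication).
HPx = np.array([[-1, 0, 1],
                [-1, 0, 1],
                [-1, 0, 1]], dtype=float)
HPy = np.array([[-1, -1, -1],
                [ 0,  0,  0],
                [ 1,  1,  1]], dtype=float)

def prewitt(I):
    """Return the two Prewitt gradient images D_x and D_y."""
    I = np.asarray(I, dtype=float)
    return convolve(I, HPx, mode='nearest'), convolve(I, HPy, mode='nearest')

# A vertical step edge responds in D_x but not in D_y.
I = np.zeros((5, 6)); I[:, 3:] = 1.0
Dx, Dy = prewitt(I)
```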

Prewitt – sharp image

Prewitt – blurry image

Prewitt – noisy image

• Clearly this is not a good solution…what went wrong?
– The smoothing just smeared out the noise

• How could you fix it?
– Perform non-linear noise removal first

Prewitt magnitude and direction

Sobel operator

• 3x3 neighborhood

• Equivalent to averaging followed by derivative
– Note that these are convolutions, not matrix multiplications

– Same as Prewitt but the center row/column is weighted heavier

$H_x^S = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad H_y^S = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$

$H_x^S = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} * \begin{bmatrix} -1 & 0 & 1 \end{bmatrix} \qquad H_y^S = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} * \begin{bmatrix} 1 & 2 & 1 \end{bmatrix}$
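A small sketch of the separability claim: the 3x3 Sobel x-kernel is the outer product of a smoothing column [1 2 1]^T and a derivative row [-1 0 1], so it can be applied as two cheap 1-D convolutions (scipy.ndimage assumed):

```python
import numpy as np
from scipy.ndimage import convolve

smooth_col = np.array([[1.0], [2.0], [1.0]])   # vertical averaging
deriv_row  = np.array([[-1.0, 0.0, 1.0]])      # horizontal derivative

# The outer product reproduces the full 3x3 Sobel x-kernel.
HSx = smooth_col @ deriv_row                   # [[-1 0 1], [-2 0 2], [-1 0 1]]

rng = np.random.default_rng(1)
I = rng.random((6, 6))

# One 2-D convolution vs. two 1-D convolutions.
full      = convolve(I, HSx, mode='nearest')
separable = convolve(convolve(I, smooth_col, mode='nearest'),
                     deriv_row, mode='nearest')
print(np.allclose(full[1:-1, 1:-1], separable[1:-1, 1:-1]))   # True
```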

Sobel – sharp image

Sobel – blurry image

Sobel – noisy image

• Clearly this is not a good solution…what went wrong?
– The smoothing just smeared out the noise

• How could you fix it?
– Perform non-linear noise removal first

Sobel magnitude and direction

• Still not good…how could we fix this now?

• Using the direction information (lots of randomly oriented, non-homogeneous directions) can help to eliminate edges due to noise
– This is a “higher level” (intelligent) function

Roberts operator

• Looks for diagonal gradients rather than horizontal/vertical

• Everything else is similar to Prewitt and Sobel operators

$H_1^R = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \qquad H_2^R = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$
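A minimal sketch of the Roberts cross, computing the two diagonal differences directly with array slicing rather than 2x2 convolutions (the same result up to sign, which the magnitude ignores); names are illustrative:

```python
import numpy as np

def roberts(I):
    """Diagonal differences on each 2x2 neighborhood and their magnitude.

    D1 ~ I[v, u+1] - I[v+1, u]     (one diagonal)
    D2 ~ I[v+1, u+1] - I[v, u]     (the other diagonal)
    The output is one sample smaller than the input in each direction.
    """
    I = np.asarray(I, dtype=float)
    D1 = I[:-1, 1:] - I[1:, :-1]
    D2 = I[1:, 1:] - I[:-1, :-1]
    E = np.hypot(D1, D2)          # edge magnitude, as before
    return D1, D2, E
```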

Roberts magnitude and direction

Compass operators

• An alternative to computing edge orientation as an estimate derived from two oriented filters (horizontal and vertical)

• Compass operators employ multiple oriented filters

• The two most famous are
– Kirsch
– Nevatia-Babu

Kirsch Filter

• Eight 3x3 kernels
– Theoretically, eight convolutions must be performed
– Realistically, only four convolutions are computed; the other four are merely sign changes

• The kernel that produces the maximum response is deemed the winner (see the sketch after the kernels below)
– Choose its magnitude
– Choose its direction

Kirsch filter kernels

Vertical edges:

$\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix}$

L-R diagonal edges:

$\begin{bmatrix} -2 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 2 \end{bmatrix} \qquad \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -2 \end{bmatrix}$

R-L diagonal edges:

$\begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{bmatrix} \qquad \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}$

Horizontal edges:

$\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \qquad \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$
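A sketch of the winner-take-all compass step using the four base kernels above; since the remaining four kernels are just sign changes, taking the absolute response covers all eight with four convolutions (scipy.ndimage assumed; the kernel grouping is illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

# Four base kernels; the other four are their negations.
BASE = [
    np.array([[-1,  0, 1], [-2, 0, 2], [-1,  0, 1]], float),    # vertical edges
    np.array([[-2, -1, 0], [-1, 0, 1], [ 0,  1, 2]], float),    # L-R diagonal
    np.array([[ 0,  1, 2], [-1, 0, 1], [-2, -1, 0]], float),    # R-L diagonal
    np.array([[-1, -2, -1], [0, 0, 0], [ 1,  2, 1]], float),    # horizontal edges
]

def compass_edges(I):
    """Winner-take-all over the oriented kernels.

    The absolute value of each response also covers the negated twin
    kernel, so four convolutions stand in for all eight.
    """
    I = np.asarray(I, dtype=float)
    responses = np.stack([convolve(I, H, mode='nearest') for H in BASE])
    magnitude = np.abs(responses).max(axis=0)      # strongest response wins
    direction = np.abs(responses).argmax(axis=0)   # index of the winning kernel
    return magnitude, direction
```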

Kirsch filter

Nevatia-Babu Filter

• Twelve 5x5 kernels
– Theoretically, twelve convolutions must be performed
– Oriented in increments of approximately 30°
– Realistically, only six convolutions are computed; the other six are merely sign changes

• The kernel that produces the maximum response is deemed the winner
– Choose its magnitude
– Choose its direction

Nevatia-Babu filter
