
Image Analysis, Random Fields and Dynamic MCMC

By Marc Sobel

A field!!!!!!!!

Random Fields

A random field (RF) consists of a collection of points P = {p} and neighborhoods {N_p} of those points (the neighborhood N_p does not contain p). The field imposes 'label' values f = {f[p]} on the points. We use the notation f[S] for the label values imposed on a set S. Random fields have one central property, which is closely related to the Markov property:

$$P\big(f[p] \,\big|\, f[P \setminus p]\big) = P\big(f[p] \,\big|\, f[N_p]\big)$$
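
As a concrete illustration (a sketch, not from the slides): on a 2-d pixel grid, a common choice of neighborhood N_p is the set of four nearest pixels. The helper below, with illustrative names, builds exactly that neighborhood system.

```python
# Sketch: a 4-nearest-neighbor system {N_p} on an H x W pixel grid.
# The function and variable names are illustrative, not from the slides.

def grid_neighbors(H, W):
    """Map each point p = (i, j) to its neighborhood N_p (p itself excluded)."""
    N = {}
    for i in range(H):
        for j in range(W):
            N[(i, j)] = [(i + di, j + dj)
                         for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                         if 0 <= i + di < H and 0 <= j + dj < W]
    return N

N = grid_neighbors(4, 4)
print(N[(0, 0)])  # corner point: 2 neighbors
print(N[(2, 2)])  # interior point: 4 neighbors
```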

Reasoning: Hammersley-Clifford Theorem

Under certain assumptions (positivity), and assuming the points can be enumerated as p_1, ..., p_N, we have that (so the distribution can be generated from these conditionals):

$$\frac{P\big(f[p_1],\ldots,f[p_N]\big)}{P\big(f^*[p_1],\ldots,f^*[p_N]\big)}
= \prod_{i=1}^{N} \frac{P\big(f[p_i] \,\big|\, f[p_1],\ldots,f[p_{i-1}],\, f^*[p_{i+1}],\ldots,f^*[p_N]\big)}{P\big(f^*[p_i] \,\big|\, f[p_1],\ldots,f[p_{i-1}],\, f^*[p_{i+1}],\ldots,f^*[p_N]\big)}$$

where f* is any fixed reference configuration.

Gibbs Random Field

Gibbs random fields are characterized by

$$P(f) = \frac{\exp\{-U(f)/T\}}{Z}; \qquad Z = \sum_{f} \exp\{-U(f)/T\}$$

where

$$U(f) = \sum_{c \in C} V_c(f)$$

and the c ∈ C are cliques, with each clique potential V_c(f) depending only on the labels f[c]. Cliques are contained in neighborhoods: c ⊂ N_p ∪ {p}.

For example, if the cliques c = {c_1, c_2} are all pairs of neighboring points, we could put

$$V_c(f) = \begin{cases} 1 & \text{if } f[c_1] \neq f[c_2] \\ -1 & \text{otherwise} \end{cases} \qquad \text{(autologistic model)}$$
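
A minimal sketch of the energy above for the pairwise autologistic case, assuming ±1 labels on a grid with horizontal and vertical pair cliques (names are illustrative):

```python
import numpy as np

def autologistic_energy(f):
    """U(f) = sum over pair cliques c of V_c(f), where V_c = +1 if the two
    labels in the clique differ and -1 if they agree (labels are +/-1)."""
    U = 0.0
    H, W = f.shape
    for i in range(H):
        for j in range(W):
            if i + 1 < H:
                U += 1.0 if f[i, j] != f[i + 1, j] else -1.0
            if j + 1 < W:
                U += 1.0 if f[i, j] != f[i, j + 1] else -1.0
    return U

f = np.random.choice([-1, 1], size=(8, 8))
print(autologistic_energy(f))
```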

Gibbs = Markov!

Under Gibbs, conditioning reduces to conditioning on neighborhoods:

$$P\big(f[p] \,\big|\, f[P\setminus p]\big)
= \frac{\exp\Big\{-\tfrac{1}{T}\sum_{c:\, p\in c} V_c(f)\Big\}\,\exp\Big\{-\tfrac{1}{T}\sum_{c:\, p\notin c} V_c(f)\Big\}}
       {\sum_{f'[p]}\exp\Big\{-\tfrac{1}{T}\sum_{c:\, p\in c} V_c(f')\Big\}\,\exp\Big\{-\tfrac{1}{T}\sum_{c:\, p\notin c} V_c(f')\Big\}}$$

(the sum in the denominator runs over configurations f' that agree with f except possibly at p). But the term

$$\exp\Big\{-\tfrac{1}{T}\sum_{c:\, p\notin c} V_c(f)\Big\}$$

does not involve f[p], so it cancels in numerator and denominator, giving the result

$$P\big(f[p] \,\big|\, f[P\setminus p]\big) = P\big(f[p] \,\big|\, f[N_p]\big)
= \frac{\exp\Big\{-\tfrac{1}{T}\sum_{c:\, p\in c} V_c(f)\Big\}}{\sum_{f'[p]}\exp\Big\{-\tfrac{1}{T}\sum_{c:\, p\in c} V_c(f')\Big\}}$$
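
The cancellation is what makes single-site updates cheap: the conditional at p only needs the cliques that contain p. A hedged sketch for the pairwise autologistic case (the function name and 4-neighborhood choice are assumptions):

```python
import numpy as np

def local_conditional(f, p, labels, T=1.0):
    """P(f[p] = l | f[N_p]) for the pairwise autologistic potential:
    only the cliques containing p survive the cancellation."""
    i, j = p
    H, W = f.shape
    nbrs = [(i + di, j + dj)
            for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
            if 0 <= i + di < H and 0 <= j + dj < W]
    energies = []
    for l in labels:
        # sum of V_c over the cliques c that contain p, with f[p] set to l
        energies.append(sum(1.0 if l != f[q] else -1.0 for q in nbrs))
    w = np.exp(-np.array(energies) / T)
    return w / w.sum()

f = np.random.choice([-1, 1], size=(8, 8))
print(local_conditional(f, (3, 3), labels=[-1, 1]))
```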

Examples of Random Fields

Automodels: all cliques have one or two members.

Autobinomial models (how to build a k-color map): labels are 0, 1, ..., k, and neighborhoods are of size M. The conditional distribution of a label, given that t of its neighbors carry the value s, is binomial:

$$P\big(f[p]=s \,\big|\, \#\{f[N_p]=s\}=t\big) = \binom{k}{s}\,\theta_p^{\,s}\,(1-\theta_p)^{\,k-s}; \qquad \theta_p = \frac{\exp\{t\}}{1+\exp\{t\}}$$

Autologistic model (i.e., the model which imposes energy 1 when contiguous elements are different and -1 otherwise):

$$V_c(f) = \begin{cases} 1 & \text{if } f[c_1] \neq f[c_2] \\ -1 & \text{otherwise} \end{cases} \qquad \text{(autologistic model)}$$
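
A short sketch that samples one site by reading the autobinomial conditional above literally: for each candidate color s, the count t of neighbors already carrying s sets θ_p = e^t / (1 + e^t), and the candidate gets weight C(k, s) θ_p^s (1 − θ_p)^{k−s}. Both this literal reading and the names below are assumptions.

```python
import numpy as np
from math import comb, exp

def sample_autobinomial_site(neighbor_labels, k, rng):
    """Draw one label in {0, ..., k}, weighting each candidate s by the
    binomial expression with theta_p = e^t / (1 + e^t), t = #{neighbors == s}."""
    weights = []
    for s in range(k + 1):
        t = sum(1 for q in neighbor_labels if q == s)
        theta = exp(t) / (1.0 + exp(t))
        weights.append(comb(k, s) * theta ** s * (1.0 - theta) ** (k - s))
    w = np.array(weights)
    return rng.choice(k + 1, p=w / w.sum())

rng = np.random.default_rng(0)
print(sample_autobinomial_site([2, 2, 0, 3], k=3, rng=rng))
```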

A Metropolis Hastings update for autologistic field models

1) Propose a flip at a randomly selected point p.

2) The move probability is:

$$\text{Probmove} = 1 \wedge \frac{\exp\Big\{\tfrac{1}{T}\,f^{*}[p]\sum_{p' \in N_p} f[p']\Big\}}{\exp\Big\{\tfrac{1}{T}\,f[p]\sum_{p' \in N_p} f[p']\Big\}}
= 1 \wedge \exp\Big\{\tfrac{2}{T}\, f^{*}[p]\sum_{p' \in N_p} f[p']\Big\}$$

Here f*[p] = -f[p] is the proposed (flipped) value at p (labels are ±1), and the sums run over the neighbors of p.
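
A sketch of steps 1) and 2) for ±1 labels with 4-neighborhoods (helper names are illustrative):

```python
import numpy as np

def flip_move_prob(f, p, T):
    """Probmove = 1 ^ exp{(2/T) * f_new[p] * sum of neighboring labels},
    for a proposed flip f[p] -> -f[p] in the +/-1 autologistic model."""
    i, j = p
    H, W = f.shape
    nbr_sum = sum(f[i + di, j + dj]
                  for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                  if 0 <= i + di < H and 0 <= j + dj < W)
    return min(1.0, np.exp(2.0 * (-f[i, j]) * nbr_sum / T))

f = np.random.choice([-1, 1], size=(8, 8))
print(flip_move_prob(f, (3, 3), T=1.0))
```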

The 1-d Autologistic

The 1-d autologistic update is:

$$\text{Probmove} = 1 \wedge \exp\Big\{\tfrac{2}{T}\, f^{*}[p]\,\big(f[p-1] + f[p+1]\big)\Big\}$$

The effect of the prior is to smooth out the results.

The 2-d autologistic or Ising Model

In a 2-d setting we update using:

$$\text{Probmove} = 1 \wedge \exp\Big\{\tfrac{2}{T}\, f^{*}[p_1,p_2] \sum_{\substack{\delta_1,\delta_2 \in \{0,\pm 1\} \\ (\delta_1,\delta_2)\neq(0,0)}} f[p_1+\delta_1,\, p_2+\delta_2]\Big\}$$
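
Putting the update into a sampler (a sketch assuming ±1 labels, 4-neighborhoods, and single-site flips at uniformly chosen points; running it at a high versus a low temperature corresponds to the 'more islands' versus 'fewer islands' behavior described on the temperature slides below):

```python
import numpy as np

def ising_mh(H, W, T, n_steps, seed=0):
    """Metropolis-Hastings for the 2-d autologistic (Ising) field: propose a flip
    at a random point and accept with probability 1 ^ exp{(2/T)*f_new*nbr_sum}."""
    rng = np.random.default_rng(seed)
    f = rng.choice([-1, 1], size=(H, W))
    for _ in range(n_steps):
        i, j = rng.integers(H), rng.integers(W)
        nbr_sum = sum(f[i + di, j + dj]
                      for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                      if 0 <= i + di < H and 0 <= j + dj < W)
        log_ratio = 2.0 * (-f[i, j]) * nbr_sum / T
        if log_ratio >= 0 or rng.random() < np.exp(log_ratio):
            f[i, j] = -f[i, j]
    return f

# Lower temperature favors agreement with neighbors, producing fewer 'islands'.
hot = ising_mh(32, 32, T=5.0, n_steps=10000)
cold = ising_mh(32, 32, T=0.005, n_steps=10000)
print(hot.mean(), cold.mean())
```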

Example: The Ising Model: Each rectangle below is a field configuration f: black=1 and white=-1. Color results from multiple label values

Extensions: Segmentation

Start with a map Y (over a 2-d grid). Assume we would like to distinguish which points in the map are important and which are background.

Devise an Ising field model prior which captures the important points of the map and downweights the others. E.g.,

$$U(X, Y) = \sum_{(i,j)} X_{i,j}\big(X_{i+1,j} + X_{i,j+1}\big) + \frac{1}{T}\sum_{(i,j)} Y_{i,j}\,X_{i,j}$$
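
A hedged sketch of evaluating this segmentation energy for a candidate labeling X given the map Y (sums truncated at the grid boundary; the names are illustrative):

```python
import numpy as np

def segmentation_energy(X, Y, T):
    """U(X, Y) = sum_{i,j} X[i,j]*(X[i+1,j] + X[i,j+1]) + (1/T)*sum_{i,j} Y[i,j]*X[i,j]."""
    interaction = np.sum(X[:-1, :] * X[1:, :]) + np.sum(X[:, :-1] * X[:, 1:])
    external = np.sum(Y * X) / T
    return interaction + external

Y = np.random.randn(16, 16)               # the observed map
X = np.random.choice([-1, 1], (16, 16))   # a candidate labeling: important vs. background
print(segmentation_energy(X, Y, T=1.0))
```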

Extensions (concluded)

So the potential being minimized contains a 'magnetic field' term (the first term) and an 'external field' term (the second term).

Other extensions are to line processes, image reconstruction, and texture representation.

Random Field Priors: The Ising model or autologistic model (Metropolis Hastings updates): Temperature = 5, at time t = 10000. Note the presence of more 'islands'.

Random Field Priors: The Ising model (Metropolis Hastings updates): Temperature = .005. Note the presence of fewer islands.

Generalized Ising models: Mean Field Equation

The energy is:

$$E(x) = \sum_{i \sim j} x_i x_j - 3\sum_j x_j$$

What is the 'impact' of this prior? Use mean field equations to get the closest possible prior (in KLD) which makes the field points mutually independent.

Generalized Field: Note the Swiss Cheese aspect. Temperature=10.

Mean Field Equation

The mean field equation minimizes

$$\mathrm{KLD}(Q \,\|\, F) = E_Q\!\left[\log \frac{Q(X)}{F(X)}\right]$$

over distributions Q which make the points mutually independent. For the generalized field model, the mean field equation is:

$$a[p] = \tanh\!\left\{\frac{1}{T}\Big(3 - \sum_{p' \in N_p} a[p']\Big)\right\}$$
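
A sketch of solving the mean field equation above by damped fixed-point iteration on a grid with 4-neighborhoods; the damping, grid size, and iteration count are illustrative choices, not from the slides.

```python
import numpy as np

def mean_field(H, W, T, n_iter=200):
    """Fixed-point iteration for a[p] = tanh{(1/T) * (3 - sum of neighboring a[p'])}."""
    a = np.zeros((H, W))
    for _ in range(n_iter):
        nbr_sum = np.zeros_like(a)
        nbr_sum[1:, :] += a[:-1, :]
        nbr_sum[:-1, :] += a[1:, :]
        nbr_sum[:, 1:] += a[:, :-1]
        nbr_sum[:, :-1] += a[:, 1:]
        a = 0.5 * a + 0.5 * np.tanh((3.0 - nbr_sum) / T)  # damped update
    return a

a = mean_field(32, 32, T=10.0)
print(a.mean(), a.min(), a.max())
```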

Mean Field Approximation to the General Ising Field at temperature T=10. We simulate from the mean field prior. We retain the Swiss cheese but lose the islands.

Gaussian Processes

Autonormal models: if the labels are real numbers (i.e., we are trying to build a picture with many different grey levels):

$$f[p] \,\big|\, f[N_p] \;\sim\; N\!\left(\mu_p + \sum_{p' \in N_p} \beta_{p,p'}\big(f[p'] - \mu_{p'}\big),\; \sigma^2\right)$$

Gaussian Processes

For Gaussian processes, the covariance satisfies

$$\mathrm{Cov}\big(f[p], f[p']\big) = \sum_{p''} \beta_{p,p''}\,\mathrm{Cov}\big(f[p''], f[p']\big) + \sigma^2\,\delta_{p,p'}.$$

This gives the Yule-Walker equation COV = B·COV + σ²I, or COV⁻¹ = (I − B)/σ². So the likelihood is given by:

Gaussian Processes

The likelihood is Gaussian with mean μ and inverse covariance matrix (I − B)/σ²:

$$f \;\propto\; \exp\left\{-\frac{(f-\mu)'(I-B)(f-\mu)}{2\sigma^2}\right\}$$

Example: assume a likelihood centered at i + j, and assume a Gaussian process prior:

$$\text{observe } f[i,j] = i + j + z_{i,j}, \quad z_{i,j} \sim N(0,1); \qquad \beta_{p,p'} = 1/8 \text{ for } p' \in N_p; \qquad \sigma^2 = .01$$

Posterior Distribution for the Gaussian Model

$$\mu[p] \,\big|\, F, \mu[N_p] \;\sim\; N\!\left(\frac{\tfrac{1}{8}\sum_{p' \in N_p}\mu[p'] + \sigma^2 F(p)}{1 + \sigma^2},\;\; \frac{\sigma^2}{1 + \sigma^2}\right)$$

Gaussian field at time t=20,000 with conditional prior variance=.01. Mesh is over a realization of μ. Note how smooth the mesh is:

Maximum A Posteriori Estimates

$$\hat{\mu}[p] = \frac{\tfrac{1}{8}\sum_{p' \in N_p}\hat{\mu}[p'] + \sigma^2 F(p)}{1 + \sigma^2}$$
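
A sketch of computing the MAP field by sweeping the update above over the grid, using the example's settings (data centered at i + j, weights 1/8 over 8-point neighborhoods, σ² = .01); the boundary handling and sweep count are illustrative choices.

```python
import numpy as np

H, W, sigma2 = 32, 32, 0.01
ii, jj = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
rng = np.random.default_rng(0)
F = ii + jj + rng.standard_normal((H, W))   # observed data, centered at i + j

offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
mu = F.copy()                               # initialize the field at the data
for _ in range(100):                        # Gauss-Seidel style sweeps of the update
    for i in range(H):
        for j in range(W):
            prior_mean = sum(mu[i + di, j + dj] for di, dj in offsets
                             if 0 <= i + di < H and 0 <= j + dj < W) / 8.0
            mu[i, j] = (prior_mean + sigma2 * F[i, j]) / (1.0 + sigma2)

# With sigma^2 = .01 the prior dominates, so the fitted surface is heavily
# smoothed toward neighborhood averages (and shrunk near the grid boundary,
# where fewer than 8 neighbors contribute to the 1/8-weighted sum).
print(mu[:4, :4].round(2))
print(F[:4, :4].round(2))
```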

MAP Estimator with prior variance =.5

Maximum A Posteriori Estimate with prior variance = .01

Smoothness Priors

Suppose we observe data

$$d[i,j] = f[i,j] + z_{i,j}$$

with one of the priors

$$\pi_1(f) \propto \exp\left\{-\frac{1}{2\sigma^2}\sum_{i,j}\Big[\big(f_{i+1,j}-f_{i,j}\big)^2 + \big(f_{i,j+1}-f_{i,j}\big)^2\Big]\right\};$$

$$\pi_2(f) \propto \exp\left\{-\frac{1}{2\sigma^2}\sum_{i,j}\big(f_{i+1,j}+f_{i-1,j}+f_{i,j+1}+f_{i,j-1}-4f_{i,j}\big)^2\right\}$$
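
A small sketch of the two (negative log-)prior energies, evaluated on a grid (the π2 curvature term is computed at interior points only; names are illustrative):

```python
import numpy as np

def first_difference_energy(f, sigma2=1.0):
    """Energy of pi_1: sum of squared first differences in both grid directions."""
    return (np.sum((f[1:, :] - f[:-1, :]) ** 2) +
            np.sum((f[:, 1:] - f[:, :-1]) ** 2)) / (2.0 * sigma2)

def curvature_energy(f, sigma2=1.0):
    """Energy of pi_2: sum of squared discrete Laplacians over interior points."""
    lap = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
           - 4.0 * f[1:-1, 1:-1])
    return np.sum(lap ** 2) / (2.0 * sigma2)

ii, jj = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
plane = (ii + jj).astype(float)
# A tilted plane has a nonzero 'derivative' penalty but zero 'curvature' penalty.
print(first_difference_energy(plane), curvature_energy(plane))
```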

Smoothness priors

The smoothness prior π1 has the effect of imposing a small ‘derivative’ on the field.

The smoothness prior π2 has the effect of imposing a small curvature on the field.

Smoothness Priors

Smoothness priors have the same kind of impact as choosing a function which minimizes the 'loss'

$$L(f, d) = \sum_i \big(d_i - f(i)\big)^2 + \lambda \int \big(f''(t)\big)^2\, dt$$

Assume the likelihood is Gaussian, centered at the field values:

Data = -5 below 50 and Data=5 above 50. Conditional prior variance is .5

Data = -5 below 50 and Data=5 above 50. Conditional prior variance is .005
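
A hedged sketch of the kind of experiment shown in the two figures above: 1-d step data (−5 below index 50, +5 from 50 on, one reading of the captions), smoothed under the first-difference prior π1 with unit observation variance. The solver just sweeps coordinate updates of the posterior mode; the names and settings other than the two prior variances are illustrative.

```python
import numpy as np

def smooth_step_data(tau2, n=100, n_sweeps=2000):
    """Posterior-mode smoothing of step data under a first-difference prior
    with conditional prior variance tau2 and unit observation variance."""
    d = np.where(np.arange(n) < 50, -5.0, 5.0)   # data: -5 below 50, +5 above
    f = d.copy()
    for _ in range(n_sweeps):
        for i in range(n):
            nbrs = [f[j] for j in (i - 1, i + 1) if 0 <= j < n]
            # minimize (d_i - f_i)^2 / 2 + sum over neighbors (f_i - f_j)^2 / (2 tau2)
            f[i] = (d[i] + sum(nbrs) / tau2) / (1.0 + len(nbrs) / tau2)
    return f

print(smooth_step_data(0.5)[45:55].round(2))     # larger prior variance: the jump survives
print(smooth_step_data(0.005)[45:55].round(2))   # tiny prior variance: the jump is flattened
```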
