Journal of Electronic Imaging 14(2), 023017 (Apr–Jun 2005)
Snakes for tracking via generalized deterministic annealing
Scott T. Acton
University of Virginia
Department of Electrical and Computer Engineering
351 McCormick Road
Charlottesville, Virginia 22904
E-mail: [email protected]
Abstract. An implementation for parametric snakes used for object tracking is proposed via generalized deterministic annealing (GDA). Given an arbitrary energy functional that quantifies the quality of the contour solution, GDA computes the snake position by approximating the solution given by stochastic simulated annealing. First, the Markov chain representing the solution space for the snake position is broken into N smaller, local Markov chains representing the position of each discrete snake sample. At each annealing temperature, GDA directly approximates the stationary distribution of the local Markov chains using a mean field approximation for neighboring snake sample positions, and the final distribution reveals the solution. In contrast to the typical implementation via gradient descent, annealing methods can avoid suboptimal local solutions and can be used to compute snakes that are effective in the presence of severe noise and distant initial positions. Unlike simulated annealing, GDA does not utilize random moves to slowly locate a high quality solution and is thus appropriate for time critical applications. In the paper, synthetic experiments (on 231 images) are provided that compare the edge localization performance of snakes computed by GDA, simulated annealing and gradient descent for conditions of varying noise and varying initial snake position. The effectiveness of GDA is also demonstrated in a challenging real-data application (on 910 images) in which white blood cells are tracked from video microscopy. © 2005 SPIE and IS&T. [DOI: 10.1117/1.1900744]
1 Introduction
Parametric snakes, pioneered by Kass, Witkin and Terzopoulos in 1987,1 have proven to be effective tools for image analysis and have enjoyed a remarkable impact in the medical image analysis area2–5 in particular. A snake is an active contour that deforms itself based on external forces such as the image intensity gradient and on internal forces such as contour smoothness.
1.1 Snakes via Gradient Descent
Typically, a snake energy functional is formed that encapsulates the desirable properties; then the principles of variational calculus6 are applied to derive Euler equations that are satisfied at local minima in the energy functional. The Euler equations are then discretized and used to update the positions of discrete points/samples in the parametric snake. The updates essentially provide a gradient descent on the energy surface that always makes changes reflecting the direction of maximal energy reduction locally. This traditional gradient descent approach1 does have merit. First, it is inexpensive computationally and may be implemented using standard linear algebra techniques. Second, there indeed exist situations where the locally optimal, not the globally optimal, snake configuration is desired. Consider the application of delineating tumors from a radiograph. First, the radiologist may coarsely delineate a tumor by hand and use a snake to refine and smooth the boundary. If a global optimization technique were employed, then the snake may select a different tumor, violating the radiologist's selection. In this case, and in other cases where the initial snake is always near the desired boundary, local solutions may be adequate.

Paper 03146 received Oct. 24, 2003; revised manuscript received Aug. 11, 2004; accepted for publication Aug. 17, 2004; published online May 12, 2005. 1017-9909/2005/$22.00 © 2005 SPIE and IS&T.
In contrast, if the initial snake is not proximal to the desired contour, then poor image quality (noise) and lack of local features (in which the snake cannot "see" the true contour since it does not exist in the local neighborhood) may render the gradient descent solution ineffective.
1.2 Snakes via Dynamic Programming
Therefore, alternate methods for minimizing the snake energy have been investigated. For snake paradigms that can be broken down into multistage decision processes, dynamic programming solutions are possible.7–9 Using hard constraints, the admissible snake movements are determined. Then, the locally optimal configuration within this set of admissible movements is selected. As the dynamic programming method uses energy differences, the authors of Ref. 7 make the argument that the dynamic programming solution is less vulnerable to instability due to the computation of higher order derivatives. (The traditional variational solution of Ref. 1 requires computation of fourth order derivatives, for example.) Another advantage is the ability to enforce hard constraints that preclude undesirable configurations such as loops in the snake contour. On the negative side, dynamic programming does not guarantee an optimal solution, and for some snake energy functionals it may be impossible to break the optimization down into stages.10 As noted by Grzeszczuk and Levin, the dynamic
Acton: Snakes for tracking . . .
programming approach also precludes the use of global parameters such as the snake perimeter or the enclosed area in the computation of the snake.10
1.3 Snakes via Simulated Annealing
Whereas gradient descent and other approaches such as the Hopfield network11 are deterministic methods, stochastic methods such as simulated annealing12,13 (SA) may also be used to minimize the snake energy. Such methods make random moves that may include moves to higher energy states. The method holds the advantage of finding optimal solutions at a high (usually prohibitive) cost. Friedland and Rosenfeld14 used an energy functional to recognize certain shapes, and the energy functional was minimized using a practical-time "fast" SA algorithm. Their work bears similarity to snake-based methods, as their energy functional contains both external energy (an edge detector) and internal energy (smoothness) terms.
In Ref. 10, "Brownian strings" derived by simulated annealing are introduced for image segmentation. With this approach, a contour energy is formed that enacts a stochastic version of region growing. The energy contains terms that maximize the probability of the contour coinciding with an edge and terms that control the geometric characteristics of the boundary. The method uses training (from previous segmentation results) to determine the possible "cracks," which are the possible edges in the boundary. A contribution of this work is the design of a "move generator," which is used to perturb the boundary into permissible neighboring configurations. The move generator is essentially the generation function needed for simulated annealing. Comparing their method to snakes, Grzeszczuk and Levin cite the added control their method provides for preserving contour topology. Snakes, on the other hand, when unconstrained, may lead to self-overlap. In contrast, the deterministic annealing approach presented in this paper allows "illegal" configurations including self-intersection, but depends on the snake energy functional and the minimization technique to converge to a meaningful "legal" configuration.
Similar to Ref. 10, Storvik utilizes stochastic SA to compute the boundary of a homogeneous image region.15 The contour is moved based on information from a binary classification of the image data such that the contour attempts to enclose pixels conforming to a certain graylevel distribution. Moves are made by adding or deleting pixels from the interior region, similar to the traditional region growing process. It is shown that optimality can be achieved by implementation via SA, albeit at a high cost. (One example required 100,000,000 iterations.)
Of the energy minimization techniques mentioned, only SA holds the potential for deriving optimal snake configurations in the case of nonconvex energy functionals. And, due to the conflicting constraints (internal energy versus external energy terms), the snake energy functional is almost certainly nonconvex. So, if one is to implement the snake via SA, one must utilize randomly generated moves to sample minima in the energy functional. At each temperature in the annealing process, the Markov chain representing the solution space converges in distribution (meaning that a stationary distribution is reached for each annealing temperature). Many moves are "wasted" in the
sense that they do not lead to an improved snake position and may include illegal snake configurations such as loops or kinks in the contour. To guarantee convergence to the global minimum, millions of these stochastic jumps must be applied, which may preclude real-time applications.
1.4 Organization of the Paper
This paper focuses on a snake implementation based on generalized deterministic annealing (GDA), which yields high quality solutions with improvements in efficiency over SA. As such, the main contribution of the paper is the formulation of the active contour technique in a deterministic annealing framework that can be used for object tracking. In Sec. 2, the requisite background on stochastic SA, the derivation of the GDA method for parametric snakes, and the accompanying implementation of GDA for snakes are presented. The paper uses a radial snake model to demonstrate the GDA implementation, which is appropriate for cell tracking but may be inappropriate for objects with more complex geometries. Note that the method presented in this paper is not intended for application to geometric snakes or level set methods such as Refs. 16 and 17. Section 3 will give results for both synthetic and real data and will provide comparisons to snakes computed by gradient descent and SA.
2 Theory
An alternative to the standard gradient descent method of computing snake movement for tasks such as image segmentation and object tracking is proposed. The solution proposed in this paper exploits GDA to enact snake movement. Instead of making millions of random moves at each annealing temperature in order to reach a stationary distribution, as in the case of SA, GDA directly computes the stationary distribution of local Markov chains at each temperature in a deterministic manner. Here, each snake has N discrete points (the snake samples). Each local Markov chain represents the solution space (the set of possible positions) for that sample. GDA retains the "hill climbing" ability that allows SA to escape local minima at high temperatures, but does not require an exorbitant number of updates to reach steady state (convergence) at each annealing temperature. And, also like SA, GDA becomes a greedy local search algorithm at lower temperatures, guaranteeing convergence.
GDA tracks the probability of a given solution and therefore does not have explicit intermediate solutions. In the process of approximating the stochastic SA stationary distribution (the probabilities for each state in a local Markov chain), GDA uses a mean field approximation (the expected position of the individual snake sample positions) to update the different solution probabilities. Unlike other binary mean field annealing methods, GDA allows multistate solutions and does not require binary "neuron" multiplexing (hence, the generalized in generalized deterministic annealing). GDA was first defined by Acton and Bovik in Ref. 18 for image enhancement and has been used for image restoration.19 This study represents the first use of deterministic annealing for active contours/snakes.
In order to define GDA for snakes, the relevant basics of SA must first be limned.
2.1 Background: Simulated Annealing
In the stochastic SA optimization process, the solution space Ω is represented by a Markov chain. In this case, consider one snake C, which corresponds to one state in the Markov chain. For a Markovian C, it is required that the probability P(C = C_j) > 0 ∀ C_j ∈ Ω and that new solutions are generated only within a neighborhood N(C_j). Then the Markov chain representing the solutions for C can be modeled by the Gibbs distribution13
P(C = C_j) = (1/Z) exp[−E(C_j)/T],  (1)

where the partition function is given by

Z = Σ_{C_i ∈ Ω} exp[−E(C_i)/T],  (2)
and T is the annealing temperature and E is the energy functional that quantifies the solution quality (lower E is preferable in our convention). At high values of T, the SA Markov chain has a uniform distribution in which all solutions are equally likely. If annealed properly12 according to a slow logarithmic temperature decrease T(t) ≥ T_0/log(1+t) for iteration t, the SA Markov chain will converge to a uniform distribution over the global minima in the energy functional E. Typically, a fast geometric schedule12 is employed in practical implementations, where T(t) = T(t−1)·τ and τ is a reduction factor slightly less than 1.
Moves in SA are first generated and then accepted or rejected based on the energy change and temperature. The probability of generating a move from one solution C_1 to another neighboring solution C_2 is given by the generation function. For a uniformly distributed generation function, the generation function is given by G(C_1, C_2) = 1/|N(C_1)|, where |N(C_1)| is the cardinality of the neighborhood of solution C_1. Other distributions, such as the Gaussian, are possible. In fact, the SA implementation of Sec. 3 utilizes a Gaussian distribution with a fixed pixel variance as the generation function.
After a potential neighboring solution is generated, the solution is accepted or rejected based on the acceptance function12

A(C_1, C_2, T) = 1 / (1 + exp{[E(C_2) − E(C_1)]/T}).  (3)
If one were to implement SA on a computer, the acceptance function value A would be computed and then a uniformly distributed random variable (RV) with range [0,1] could be used to decide acceptance. If the RV has a value equal to or less than A, then the move is accepted.
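This generate-and-accept step can be sketched as follows (an illustrative Python sketch, not the paper's code; the quadratic test energy and the Gaussian spread `sigma` are assumptions made for demonstration):

```python
import math
import random

def sa_acceptance(dE, T):
    """Sigmoidal acceptance function of Eq. (3): probability of
    accepting a move whose energy change is dE at temperature T."""
    x = max(-700.0, min(700.0, dE / T))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(x))

def sa_step(r, energy, T, sigma=1.0, rng=random):
    """One SA move for a single snake-sample radius r: draw a candidate
    from a Gaussian generation function, then accept or reject it by
    comparing the acceptance value with a uniform RV."""
    r_new = r + rng.gauss(0.0, sigma)
    dE = energy(r_new) - energy(r)
    if rng.random() <= sa_acceptance(dE, T):
        return r_new  # accepted (uphill moves are possible at high T)
    return r          # rejected

# Example: repeated moves against a quadratic energy with minimum at r = 10
rng = random.Random(0)
r = 5.0
for _ in range(300):
    r = sa_step(r, lambda x: (x - 10.0) ** 2, T=0.05, sigma=1.0, rng=rng)
```

At low temperature the uphill acceptance probability collapses toward zero, so the walk settles near the energy minimum.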
Essentially, the SA algorithm is started at a temperature where "bad" moves (moves to states of higher energy) are likely. At each temperature, the SA algorithm makes enough moves to achieve the stationary distribution. Theoretically, SA needs O(K^N) moves to achieve this stationary distribution at each temperature, where N is the number of variables (snake samples in our study) and K is the number of potential solutions for each variable (snake sample
positions). A more practical rule of thumb is the O(KN) iterations suggested by Ref. 12, so that the number of stochastic moves at each temperature is at least equal to the cardinality of one solution neighborhood. The SA algorithm can be halted when a temperature is reached at which the SA algorithm behaves as a greedy algorithm and is unable to escape local minima.
2.2 Generalized Deterministic Annealing
In this section, GDA is adopted as a tool for computing the positions of the N discrete samples in the snake contour. Instead of making random moves to sample the solution space, a deterministic method that tracks the probabilities of each possible snake configuration is applied. The basic strategy of GDA is threefold.
1. Subdivide the solution space Markov chain into local Markov chains that represent local variables (the position of each snake sample, in this case).

2. Approximate the stationary distribution of the local Markov chains at each annealing temperature using the SA transition probabilities.

3. When updating the stationary distribution of a local Markov chain, utilize the mean field approximation (the expected value) for the state of neighboring local Markov chains.
So, the SA dynamics are used to compute P(C_1, C_2, T), the probability of moving from solution C_1 to solution C_2 at temperature T. Given a generation function G(C_1, C_2) and an acceptance function A(C_1, C_2, T), the transition probabilities are given by

P(C_1, C_2, T) = G(C_1, C_2) A(C_1, C_2, T)  ∀ C_2 ≠ C_1;  (4)

or, in the case of a self-transition (where the move results in no change in snake position),

P(C_1, C_1, T) = 1 − Σ_{C_2 ≠ C_1} P(C_1, C_2, T).  (5)
Note that it is assumed that all solutions have nonzero probability, that any solution is reachable (can be reached by a finite series of moves in the Markov chain), and that the generation function is symmetric: G(C_1, C_2) = G(C_2, C_1).
Given an initial distribution over the solution space, p_0, one can use (4) and (5) to recursively compute a stationary distribution for a fixed annealing temperature T. However, since evaluating this distribution over the solution space of all snakes would be as expensive as an exhaustive search itself, the problem should be localized. For one GDA snake, N local Markov chains representing the positions of the N samples of the discretized contour are created. So, consider a fixed center (x, y); the sample positions in the snake are denoted by the polar (r, θ) position with respect to the center (x, y). (Note that the center does not change when evolving the GDA snake.) Each of the N snake samples can be indexed (parameterized) by the angle θ. Then, a local Markov chain tracks the distance C(θ) = r of the curve at angle θ from the initial center position. For each snake
sample, K positions are allowed. Note that the (r, θ) representation used here is employed for the sake of notational convenience and because it is well matched with the application of tracking quasicircular objects, as demonstrated in Sec. 3. Alternatively, a local solution space containing a finite number of Cartesian pairs is also possible (and has been implemented) for the GDA snake. The attraction of the radial (r, θ) model lies in the simplicity of the energy terms (as only one variable for each snake sample is computed), in the avoidance of reparameterization due to bunching/spreading of adjacent contour samples, and in the match for the application of delineating cell boundaries. The radial model also avoids exploring illegal solutions such as self-intersections and loops. A limitation of the radial model is that it assumes "star-shaped" boundaries.
To evaluate the probability of a transition within a local Markov chain, evaluation of the change in the complete snake energy functional is not required. Instead, only the change in the local energy functional, ΔE_θ(r_1, r_2), is computed. Actual examples of ΔE_θ(r_1, r_2) are provided in Sec. 3.3 for the cell tracking application. The local energy functional contains only the terms that depend on C(θ), viz. the snake sample position at angle θ with respect to the fixed center (x, y). Thus, ΔE_θ(r_1, r_2) is the change in energy induced by changing C(θ) from r_1 to r_2. Taking the sigmoidal acceptance function of (3) results in
A_θ(r_1, r_2, T) = 1 / (1 + exp{[E_θ(r_2) − E_θ(r_1)]/T}) = 1 / (1 + exp[ΔE_θ(r_1, r_2)/T])  (6)
for each local Markov chain.

So, in GDA, one needs to estimate the stationary distribution p. One element, p_θ(r_1), represents the probability of the snake sample at angle θ having a distance of r_1 from the contour center. The probabilities (simulating one stochastic move via SA) can be updated using the transition probabilities

p_θ^{t+1}(r_2) ← Σ_{r_1 ≠ r_2, r_1 ∈ R} P_θ(r_1, r_2, T) p_θ^t(r_1) + p_θ^t(r_2) [1 − Σ_{r_3 ≠ r_2, r_3 ∈ R} P_θ(r_2, r_3, T)],  (7)
where t denotes the tth iterative update, and R denotes the K-member set of possible radial positions at sample θ. In (7), P_θ(r_1, r_2, T) represents the transition probability within the local Markov chain at sample θ for a transition from r_1 to r_2 at temperature T.
Using (4) and (5), assuming the sigmoidal acceptance function (for which A_θ(r_1, r_2, T) = 1 − A_θ(r_2, r_1, T)) and assuming a uniform generation function, we obtain
p_θ^{t+1}(r_2) ← (1/K) Σ_{r_1 ∈ R} A_θ(r_1, r_2, T) [p_θ^t(r_1) + p_θ^t(r_2)],  (8)
which is the GDA update function.

So, only ΔE_θ(r_1, r_2) needs to be computed in order to update the probabilities for each state. Unfortunately, the local Markov chains are not independent. In fact, they are
locally dependent in general, since the samples of a given snake are dependent upon each other. And in GDA, since the probabilities of the various snake sample positions are being tracked, the positions of neighboring samples are not known explicitly.
Therefore, GDA utilizes a mean field approximation.18,20
To evaluate the local energy functional ΔE_θ(r_1, r_2) for a given local Markov chain, the remaining local Markov chains are taken at the mean field position, i.e., the expected value. The mean field position is computed by taking a weighted average (weighted by p_θ^t) of the K possible positions corresponding to the K local Markov chain states. The mean field approximation for the position of the snake at angle θ is
C̃(θ) = round[ Σ_{r_1 ∈ R} r_1 p_θ^t(r_1) ],  (9)
after the tth update using (8), where round(x) rounds x to the nearest integer, ⌊x + 0.5⌋. While the distribution of each local Markov chain is being computed, the position C(θ) is unknown, so the mean field approximation C̃(θ) is used.
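The update (8) and the mean field estimate (9) are straightforward to implement for a single local Markov chain. The following is a minimal Python sketch (illustrative, not the paper's code; the table `dE` of local energy differences is supplied by the caller):

```python
import math

def gda_update(p, dE, T):
    """One GDA update, Eq. (8), of a single local Markov chain.
    p  : list of K state probabilities p_theta^t(r) over the radii R
    dE : K x K table with dE[i][j] = DeltaE_theta(r_i, r_j)
    T  : annealing temperature
    Returns the updated distribution p_theta^{t+1}."""
    K = len(p)
    def A(i, j):  # sigmoidal acceptance of Eq. (6), clamped for overflow
        x = max(-700.0, min(700.0, dE[i][j] / T))
        return 1.0 / (1.0 + math.exp(x))
    return [sum(A(i, j) * (p[i] + p[j]) for i in range(K)) / K
            for j in range(K)]

def mean_field(p, R):
    """Mean field position of Eq. (9): the probability-weighted average
    of the K candidate radii, rounded to the nearest integer."""
    return math.floor(sum(r * pr for r, pr in zip(R, p)) + 0.5)

# Toy run: three states with energies E, iterated at a low temperature
E = [3.0, 1.0, 2.0]
dE = [[E[j] - E[i] for j in range(3)] for i in range(3)]
p = [1.0 / 3.0] * 3
for _ in range(60):
    p = gda_update(p, dE, T=0.1)
```

Because A(i, j) + A(j, i) = 1 and A(i, i) = 1/2, the update preserves Σ p = 1, and at low T the probability mass concentrates on the minimum-energy state.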
2.3 Implementation
2.3.1 Annealing schedule
The approach to annealing with GDA is essentially similar to that of the practical SA algorithm detailed in Ref. 12. An initial and final temperature are computed. Then, at each annealing temperature, a predetermined number of updates (in this paper, K updates for local Markov chains of length K) is implemented. So, each of the N snake samples has a local Markov chain of length K that is updated K times per temperature using (8). Alternatively, one can track the convergence of the GDA distribution and require uniform convergence within X% (typically, X = 5 or less).
The main parameters in the annealing process are the initial and final temperatures. The acceptance function of (6) can be exploited to determine these temperatures. If an initial temperature is chosen that is too low, the algorithm will be unable to escape local minima and will behave in the manner of gradient descent. So, a reasonable percentage of these bad (positive energy change) moves should be accepted. Likewise, if the final temperature is too high, the GDA process may be halted before it converges to a local minimum in the energy. So, the probability of an increase in the energy should be very low at the final temperature.
Since the energy functionals and the weights used in the energy functionals will change for different snake implementations, the initial and final temperatures for an arbitrary functional should be derived. Assume that the minimum (nonzero) positive energy change ΔE_θ(r_1, r_2) associated with one move (one change in one snake sample position) is a, and the maximum energy change is b. By algebraic manipulation of (6), if the worst (highest energy) change of b should be accepted with probability p_1, then
T_init ≥ b / ln[(1 − p_1)/p_1].  (10)
The "rule of thumb" used in our experiments (for p_1 ≈ 0.25) is that the initial temperature T_init should be set equal to the maximum energy change b.
If the probability for an uphill (positive energy change) move at the lowest temperature is p_2 (for SA and GDA, this probability can never be zero), then the final temperature T_final is bounded by
T_final ≤ a / ln[(1 − p_2)/p_2].  (11)
The basic rule used in our experiments (for p_2 ≈ 0.05) is that the final temperature T_final should be set equal to one third of the minimum energy change a.
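Under the stated choices of p_1 and p_2, the bounds (10) and (11) reduce to simple formulas (an illustrative Python sketch; a and b denote the minimum and maximum positive local energy changes, as above):

```python
import math

def annealing_temperatures(a, b, p1=0.25, p2=0.05):
    """Temperature bounds of Eqs. (10) and (11): accept the worst
    uphill change b with probability p1 at T_init, and accept any
    uphill change with probability at most p2 at T_final."""
    T_init = b / math.log((1.0 - p1) / p1)
    T_final = a / math.log((1.0 - p2) / p2)
    return T_init, T_final

# Example with a = 1 and b = 10
Ti, Tf = annealing_temperatures(1.0, 10.0)
```

With p_1 ≈ 0.25, ln 3 ≈ 1.1, so T_init ≈ b; with p_2 ≈ 0.05, ln 19 ≈ 2.9, so T_final ≈ a/3, matching the rules of thumb above.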
Now, to complete the GDA implementation, the energy changes ΔE_θ(r_1, r_2) used in the GDA update function (8) need to be computed. The special case of using a snake to find and track cells is used to illustrate this procedure.
2.3.2 An example energy functional and computation of local energy differences
To demonstrate the implementation of the GDA snake, the snake is applied to the delineation and tracking of a cell boundary in a sequence of images. In this application, a smooth snake that overlaps with the ridge of high gradient magnitude surrounding the cell is desired. Also, since the cells are nearly circular and are of a known scale, shape and size constraints are added to the problem. The resulting energy functional is similar to the functional used in Ref.
22 (implemented by gradient descent), with a few exceptions that will be noted. This energy functional contains five terms corresponding to constraints for tension, rigidity, correspondence with the gradient magnitude (external energy), shape, and size. Each energy term will be described individually, and the local energy change function needed for GDA will be given.
The first three energy terms (tension, rigidity, and external energy) are the typical energy terms used in any parametric snake such as in Ref. 1. Essentially, the tension and rigidity terms are manifested as first and second derivatives of the curve with respect to the curve parameter. To solve for the snake position using gradient descent, the variational analysis yields second and fourth order derivatives in the snake update equations. The fourth order terms require special caution and typically lead to implicit, rather than explicit, update schemes. A drawback of the typical smoothness terms (i.e., the tension and rigidity terms) is that the absolute minimization of these energy terms leads to a snake that reduces to a single point. For implementation, this means that the external energy term (that enforces conformity with the image edges) must counteract the smoothness terms in order to avoid a "vanishing" snake. So, for the (r, θ) implementation presented here, these energy terms are altered so that the first and second derivatives of the distance from the center (the r portion) are penalized. The tension, or first derivative, term is given in discrete form as
E_tension = Σ_{θ=θ_0}^{θ_{N−1}} {C[(θ + 2π/N) mod 2π] − C(θ)}²,  (12)
where the summation is computed for the N samples of the snake in angular steps of 2π/N. The energy difference term, needed for the GDA update Eq. (8), is then
ΔE_θ^tension(r_1, r_2) = α({r_2 − C[(θ − 2π/N) mod 2π]}² + {C[(θ + 2π/N) mod 2π] − r_2}² − {r_1 − C[(θ − 2π/N) mod 2π]}² − {C[(θ + 2π/N) mod 2π] − r_1}²).  (13)
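The closed form (13) can be checked against a direct evaluation of (12), since moving one sample changes only the two terms of the sum that involve C(θ). A short Python sketch (illustrative; the five-sample snake is made up for the check, and the weight α is taken as 1):

```python
def e_tension(C):
    """Tension energy of Eq. (12): C is a list of N radii sampled at
    angular steps of 2*pi/N (indices taken modulo N)."""
    N = len(C)
    return sum((C[(k + 1) % N] - C[k]) ** 2 for k in range(N))

def d_e_tension(C, k, r1, r2, alpha=1.0):
    """Tension energy change of Eq. (13) when sample k moves from
    radius r1 to r2, with the neighboring samples held fixed."""
    N = len(C)
    prev, nxt = C[(k - 1) % N], C[(k + 1) % N]
    return alpha * ((r2 - prev) ** 2 + (nxt - r2) ** 2
                    - (r1 - prev) ** 2 - (nxt - r1) ** 2)

# The closed form agrees with the brute-force difference of Eq. (12)
C = [10.0, 11.0, 9.0, 12.0, 10.5]
C2 = list(C)
C2[2] = 13.0
assert abs(d_e_tension(C, 2, C[2], 13.0)
           - (e_tension(C2) - e_tension(C))) < 1e-9
```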
So, (13) gives the change in tension energy for changing the snake position at angle θ from r_1 to r_2. The second snake smoothness term, called the rigidity term here, can be written as
E_rigid = Σ_{θ=θ_0}^{θ_{N−1}} {C[(θ − 2π/N) mod 2π] − 2C(θ) + C[(θ + 2π/N) mod 2π]}².  (14)
Then, the energy difference term used in the GDA update is
ΔE_θ^rigid(r_1, r_2) = ({C[(θ − 4π/N) mod 2π] − 2C[(θ − 2π/N) mod 2π] + r_2}²
  + {C[(θ − 2π/N) mod 2π] − 2r_2 + C[(θ + 2π/N) mod 2π]}²
  + {r_2 − 2C[(θ + 2π/N) mod 2π] + C[(θ + 4π/N) mod 2π]}²
  − {C[(θ − 4π/N) mod 2π] − 2C[(θ − 2π/N) mod 2π] + r_1}²
  − {C[(θ − 2π/N) mod 2π] − 2r_1 + C[(θ + 2π/N) mod 2π]}²
  − {r_1 − 2C[(θ + 2π/N) mod 2π] + C[(θ + 4π/N) mod 2π]}²).  (15)
With GDA, the tension and rigidity terms presented here do not bias the snake toward contracting. Also note that although energy terms such as (15) contain higher order differences, these higher order differences cannot lead to an unstable or unbounded solution, as with the explicit solution of the traditional parametric snake. The GDA solution is bound to the range prescribed by the local Markov chain topology and is not sensitive to a time step parameter.
Probably the most important term in a snake energy functional is the external force that guides the snake to coexist with image edges. Typically, this force is realized by maximizing the contour integral of the gradient magnitude for the contour specified by the snake. If I[C(θ), θ] is the image intensity at the polar (snake) position [C(θ), θ], then
E_ext = −Σ_{θ=θ_0}^{θ_{N−1}} |∇I[C(θ), θ]|,  (16)
and the corresponding energy difference needed for the update (8) is

ΔE_θ^ext(r_1, r_2) = |∇I(r_1, θ)| − |∇I(r_2, θ)|.  (17)
It is worth noting here that the difficulty in locating a distant object using a gradient descent approach is caused by (16). Unless the initial position of the snake is very close (within a few pixels) to the desired boundary, the snake computed by gradient descent will not be attracted to the boundary. Due to this dilemma, modifications to (16) that extend the influence of the boundaries have been explored. Xu and Prince,21 for example, constructed a method of diffusing the gradient vectors that guide the snake to the boundary. Of course, regularization of the external force through simply smoothing the gradient magnitude is possible (and would improve the results obtained by the gradient descent approach), but is implemented at the cost of edge localization and oversmoothing of detailed image features. In this paper, I take an alternate approach in leaving the external energy unmodified but modifying the optimization technique in order to evaluate potential snake positions not in the immediate neighborhood of the initial snake.
The experiments presented in Sec. 3 compare snakes computed by GDA, SA, and gradient descent in capturing a
cell boundary. For this application, additional energy constraints tailored to the approximately known size and shape of the cells are required. The shape constraint, as introduced in Ref. 22 for implementation via gradient descent, is written as
E_shape = Σ_{θ=θ_0}^{θ_{N−1}} [C(θ) − r̄]²,  (18)
where r̄ = Σ_{θ=θ_0}^{θ_{N−1}} C(θ)/N is referred to as the average radius. In a circular shaped object, each C(θ) would be close to r̄. The accompanying energy difference term is then
ΔE_θ^shape(r_1, r_2) = (r_2 − r̄)² − (r_1 − r̄)².  (19)
Note that while E_shape and E_tension both share minimum energy solutions when the contour is a circle, E_shape introduces the "global" knowledge of the average radius. Without E_shape, it may be observed that a gradual drift in the radial length from point to point leads to distortion of the circular shape.
For implementation by gradient descent, the shape constraint of (18) is successful, since the initial snake is always close to the final snake. But, for SA and GDA, where radical moves are possible at high temperatures, (18) allows illegal configurations such as a "C"-shaped contour that has doubled over itself. To remedy this problem, the shape constraint is modified as follows:
E_shape = Σ_{θ=θ_0}^{θ_{N−1}} {C(θ) + C̃[(θ + π) mod 2π] − 2r̄}²,  (20)
where C(θ) + C̃[(θ + π) mod 2π] measures the diameter with respect to a given snake sample at θ. The energy difference term used in SA and GDA is then
ΔE_θ^shape(r_1, r_2) = {r_2 + C̃[(θ + π) mod 2π] − d̄}² − {r_1 + C̃[(θ + π) mod 2π] − d̄}²
  = r_2² − r_1² + 2C̃[(θ + π) mod 2π](r_2 − r_1) − 2d̄(r_2 − r_1),  (21)

where d̄ = 2r̄ is the expected diameter.
For many applications, such as the cell tracking application used in Sec. 3, the approximate size of the desired object is known a priori. In such circumstances, given an expected radius ρ, a size constraint can be implemented as follows:
E_size = Σ_{θ=θ_0}^{θ_{N−1}} [C(θ) − ρ]².  (22)
The energy difference for changing a snake position from r_1 to r_2 is

ΔE_θ^size(r_1, r_2) = (r_2 − ρ)² − (r_1 − ρ)².  (23)
For the update of a GDA local Markov chain representing a given snake sample position, the energy difference terms described in this section [(13), (15), (17), (21), (23)] are weighted and summed, and then used as the input to (8).
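For the two constraint terms that need no image data, the differences (21) and (23) are one-liners. The sketch below (illustrative Python, not the paper's code; `c_opp` stands for C̃[(θ + π) mod 2π] and `dbar` for the expected diameter 2r̄) also confirms that the expanded form of (21) matches the difference of squares from which it was derived:

```python
def d_e_size(r1, r2, rho):
    """Size energy change of Eq. (23) for expected radius rho."""
    return (r2 - rho) ** 2 - (r1 - rho) ** 2

def d_e_shape(r1, r2, c_opp, dbar):
    """Modified shape energy change of Eq. (21); c_opp is the mean
    field radius of the diametrically opposite sample and dbar is
    the expected diameter."""
    return (r2 ** 2 - r1 ** 2
            + 2.0 * c_opp * (r2 - r1) - 2.0 * dbar * (r2 - r1))

# Expanded form of (21) equals the direct difference of squared terms
r1, r2, c_opp, dbar = 9.0, 11.0, 10.0, 20.0
direct = (r2 + c_opp - dbar) ** 2 - (r1 + c_opp - dbar) ** 2
assert abs(d_e_shape(r1, r2, c_opp, dbar) - direct) < 1e-9
```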
2.3.3 Algorithm flow
For the cell detection and tracking application, the GDA snake algorithm flow is described by the following eight steps.
Step 1. Set K, the number of possible radii at each angle θ, and fix R, the set of possible radii.
Step 2. Select T_init and T_final for the annealing schedule using (10) and (11) for a given energy functional.
Step 3. Obtain frame f of the F frame video sequence. For the first frame, initialize the center position (x, y) manually. After the first frame, the initial center position can be determined by the center of mass of the final snake computed for the previous frame.
Step 4. Initialize the K-length stationary distribution estimate for each of the N contour samples to the trivial state: p_θ^0(·) = 1/K.
Repeat Steps 5–7 K times.
Step 5. Compute the mean field estimate for the position of each of the N contour samples using (9).
Step 6. For each of the N contour samples, compute the K² possible energy changes using a linear combination of (13), (15), (17), (21), and (23). Note that only (1/2)K² energy changes need to be computed in reality, since ΔE_θ(r_1, r_2) = −ΔE_θ(r_2, r_1).
Step 7. Using the energy changes computed in Step 6, compute the K elements of the stationary distribution estimate for each of the N samples using (8).
Step 8. Reduce the temperature geometrically:Tnew
5tTold . If Tnew,Tfinal , then stop. Else, return to step 5.These eight steps are depicted in the flowchart of Fig
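The step sequence can be sketched in outline. This is a schematic reconstruction rather than the author's implementation: the mean field estimate of Eq. (9) is stood in for by the expected radius under each current distribution, the update of Eq. (8) by a Gibbs-style normalization of the candidate energies, and the full set of energy terms by a single placeholder function:

```python
import numpy as np

def gda_snake(energy, N=64, K=17, T_init=10.0, T_final=0.01, tau=0.9):
    """Schematic GDA snake over N angular samples with K candidate radii.

    energy(u, r, mean_r) is a placeholder per-sample energy; the paper
    instead combines the weighted difference terms of Eqs. (13), (15),
    (17), (21), and (23). Returns one radius per angular sample."""
    radii = np.arange(1, K + 1, dtype=float)    # Step 1: candidate set R
    p = np.full((N, K), 1.0 / K)                # Step 4: uniform estimates
    T = T_init                                  # Step 2: schedule start
    while T >= T_final:
        mean_r = p @ radii                      # Step 5: mean field radius
        for u in range(N):                      # Steps 6-7, per sample
            # Energy of each candidate, neighbors held at mean field.
            E = np.array([energy(u, r, mean_r) for r in radii])
            w = np.exp(-(E - E.min()) / T)      # Gibbs-style weights
            p[u] = w / w.sum()                  # distribution estimate
        T *= tau                                # Step 8: geometric cooling
    return radii[p.argmax(axis=1)]              # final snake radii

# Toy energy pulling every sample toward radius 9, with a weak coupling
# to the mean field position of the next angular neighbor.
def toy_energy(u, r, mean_r):
    N = len(mean_r)
    return (r - 9.0)**2 + 0.1 * (r - mean_r[(u + 1) % N])**2

print(gda_snake(toy_energy))   # radii settle at 9 for this toy energy
```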
2.3.4 Computational cost
In the implementation of the gradient descent snake, the snake is allowed to converge to a local minimum. In the case of SA and GDA, I attempt to "equalize" the computational expense. Given K possible positions for each snake sample in GDA, each GDA update effectively evaluates K² moves. With both GDA and SA, the main cost lies in evaluating the energy difference incurred by a potential move. In one GDA update, (1/2)K² such computations are required [noting that ΔE_u(r_1, r_2) = -ΔE_u(r_2, r_1)]. So, a GDA update would be roughly equivalent in expense to (1/2)K² simulated annealing moves. Obviously, if SA were allowed to have infinitely many moves at each temperature, the SA result would equal or exceed the solution quality given by GDA. The experiments presented in Sec. 3 restrict the number of moves taken by SA to demonstrate the difference in quality for a fixed, reasonable computational expense.
As discussed in Ref. 18, the number of updates required at a fixed temperature for a practical implementation of SA is O(NK) updates; the ideal implementation requires a computational cost on the order of exhaustive search, O(K^N). Of course, for both SA and GDA, convergence depends on the temperature T (and upon the maximum possible energy change for a single contour sample change, b). Given that the local Markov chains are finite, irreducible, and aperiodic, one can model the reduction in total
Fig. 1 GDA tracking algorithm flowchart.
Table 1 Pratt figure of merit for synthetic experiments in which a circular target was captured by a snake using gradient descent (grad.), SA, and GDA. The first column in the table provides the distance of the initial snake center from the center of the circle, in terms of the circle radius. The additive Gaussian distributed noise is increased from a normalized variance of 0 [signal-to-noise ratio (SNR) = ∞] to a normalized variance of 0.2 (SNR = 3 dB). Each cell lists grad./SA/GDA.

Distance (radii) | SNR=∞ | SNR=13 dB | SNR=10 dB | SNR=8 dB | SNR=7 dB | SNR=6 dB | SNR=5 dB | SNR=4.5 dB | SNR=4 dB | SNR=3.5 dB | SNR=3 dB
0.0 | 0.92/0.96/0.89 | 0.92/0.94/0.89 | 0.92/0.96/0.90 | 0.93/0.95/0.90 | 0.92/0.96/0.89 | 0.94/0.96/0.90 | 0.92/0.97/0.87 | 0.96/0.96/0.86 | 0.95/0.96/0.86 | 0.93/0.93/0.84 | 0.95/0.97/0.79
0.1 | 0.92/0.96/0.89 | 0.93/0.94/0.89 | 0.92/0.92/0.88 | 0.93/0.94/0.85 | 0.96/0.96/0.89 | 0.96/0.93/0.88 | 0.95/0.92/0.89 | 0.95/0.93/0.92 | 0.95/0.96/0.83 | 0.97/0.95/0.85 | 0.96/0.92/0.85
0.2 | 0.93/0.94/0.89 | 0.93/0.96/0.89 | 0.93/0.92/0.90 | 0.93/0.96/0.85 | 0.94/0.96/0.86 | 0.92/0.93/0.89 | 0.93/0.92/0.86 | 0.94/0.95/0.89 | 0.95/0.95/0.89 | 0.92/0.97/0.87 | 0.95/0.93/0.86
0.3 | 0.92/0.93/0.89 | 0.91/0.92/0.89 | 0.92/0.95/0.86 | 0.92/0.96/0.85 | 0.93/0.90/0.88 | 0.93/0.97/0.88 | 0.94/0.93/0.83 | 0.93/0.96/0.91 | 0.94/0.89/0.91 | 0.95/0.96/0.84 | 0.94/0.95/0.85
0.4 | 0.93/0.94/0.89 | 0.91/0.93/0.89 | 0.94/0.92/0.90 | 0.93/0.96/0.89 | 0.95/0.95/0.85 | 0.93/0.95/0.88 | 0.94/0.94/0.88 | 0.95/0.91/0.92 | 0.93/0.95/0.69 | 0.95/0.94/0.90 | 0.53/0.91/0.90
0.5 | 0.93/0.92/0.89 | 0.93/0.87/0.89 | 0.92/0.93/0.86 | 0.94/0.92/0.88 | 0.92/0.91/0.88 | 0.77/0.88/0.89 | 0.59/0.89/0.93 | 0.97/0.71/0.85 | 0.69/0.88/0.67 | 0.52/0.67/0.76 | 0.97/0.71/0.89
0.6 | 0.92/0.98/0.89 | 0.94/0.91/0.89 | 0.95/0.88/0.90 | 0.65/0.81/0.85 | 0.94/0.88/0.85 | 0.70/0.62/0.85 | 0.48/0.90/0.89 | 0.45/0.62/0.86 | 0.96/0.64/0.90 | 0.92/0.69/0.85 | 0.84/0.74/0.88
0.7 | 0.96/0.91/0.89 | 0.94/0.81/0.89 | 0.95/0.64/0.85 | 0.58/0.62/0.89 | 0.76/0.69/0.88 | 0.56/0.59/0.89 | 0.50/0.65/0.89 | 0.50/0.67/0.84 | 0.46/0.59/0.88 | 0.96/0.59/0.85 | 0.89/0.55/0.88
0.8 | 0.71/0.65/0.89 | 0.51/0.66/0.89 | 0.48/0.60/0.85 | 0.53/0.52/0.86 | 0.38/0.54/0.86 | 0.60/0.53/0.90 | 0.96/0.53/0.92 | 0.34/0.49/0.86 | 0.50/0.50/0.90 | 0.52/0.46/0.84 | 0.46/0.52/0.89
0.9 | 0.56/0.52/0.89 | 0.42/0.46/0.89 | 0.50/0.59/0.89 | 0.38/0.51/0.88 | 0.93/0.55/0.92 | 0.36/0.48/0.88 | 0.43/0.45/0.85 | 0.43/0.44/0.85 | 0.62/0.47/0.85 | 0.34/0.42/0.89 | 0.39/0.49/0.86
1.0 | 0.45/0.47/0.89 | 0.39/0.47/0.90 | 0.58/0.43/0.89 | 0.33/0.43/0.91 | 0.48/0.41/0.87 | 0.44/0.39/0.90 | 0.36/0.40/0.74 | 0.49/0.42/0.87 | 0.33/0.40/0.91 | 0.29/0.42/0.90 | 0.62/0.45/0.84
1.1 | 0.38/0.40/0.89 | 0.39/0.41/0.89 | 0.47/0.39/0.89 | 0.45/0.42/0.86 | 0.43/0.37/0.86 | 0.35/0.37/0.93 | 0.31/0.35/0.90 | 0.42/0.37/0.89 | 0.42/0.34/0.92 | 0.44/0.40/0.85 | 0.42/0.37/0.89
1.2 | 0.32/0.38/0.89 | 0.34/0.32/0.89 | 0.52/0.34/0.90 | 0.28/0.32/0.89 | 0.30/0.32/0.88 | 0.39/0.34/0.85 | 0.30/0.36/0.85 | 0.40/0.35/0.88 | 0.29/0.37/0.90 | 0.36/0.33/0.85 | 0.36/0.32/0.78
1.3 | 0.31/0.33/0.89 | 0.32/0.30/0.89 | 0.30/0.31/0.91 | 0.29/0.32/0.89 | 0.35/0.33/0.85 | 0.33/0.31/0.89 | 0.31/0.30/0.86 | 0.28/0.33/0.88 | 0.33/0.31/0.85 | 0.28/0.32/0.90 | 0.29/0.32/0.85
1.4 | 0.29/0.31/0.89 | 0.27/0.31/0.89 | 0.27/0.32/0.86 | 0.30/0.32/0.89 | 0.32/0.30/0.89 | 0.28/0.31/0.89 | 0.32/0.32/0.86 | 0.32/0.29/0.85 | 0.29/0.29/0.86 | 0.28/0.30/0.91 | 0.28/0.26/0.84
1.5 | 0.29/0.30/0.89 | 0.29/0.28/0.89 | 0.27/0.29/0.89 | 0.29/0.28/0.89 | 0.29/0.31/0.90 | 0.31/0.27/0.85 | 0.29/0.28/0.86 | 0.29/0.28/0.90 | 0.28/0.27/0.82 | 0.31/0.28/0.85 | 0.29/0.29/0.87
1.6 | 0.29/0.28/0.89 | 0.27/0.34/0.89 | 0.30/0.33/0.89 | 0.30/0.28/0.93 | 0.29/0.28/0.90 | 0.31/0.30/0.85 | 0.27/0.29/0.84 | 0.29/0.28/0.87 | 0.28/0.28/0.86 | 0.28/0.30/0.87 | 0.29/0.30/0.90
1.7 | 0.28/0.32/0.89 | 0.28/0.33/0.89 | 0.27/0.30/0.89 | 0.28/0.32/0.90 | 0.27/0.33/0.90 | 0.28/0.33/0.86 | 0.27/0.30/0.86 | 0.30/0.31/0.86 | 0.29/0.29/0.92 | 0.29/0.27/0.67 | 0.30/0.29/0.85
1.8 | 0.28/0.33/0.89 | 0.32/0.32/0.89 | 0.27/0.35/0.91 | 0.28/0.31/0.88 | 0.31/0.33/0.85 | 0.27/0.29/0.85 | 0.28/0.31/0.89 | 0.28/0.27/0.89 | 0.30/0.31/0.86 | 0.29/0.32/0.87 | 0.28/0.31/0.85
1.9 | 0.28/0.29/0.89 | 0.28/0.30/0.91 | 0.27/0.32/0.85 | 0.29/0.33/0.85 | 0.27/0.32/0.89 | 0.28/0.29/0.86 | 0.29/0.30/0.86 | 0.27/0.32/0.88 | 0.28/0.28/0.85 | 0.30/0.32/0.85 | 0.26/0.30/0.90
2.0 | 0.28/0.35/0.89 | 0.27/0.34/0.89 | 0.28/0.27/0.89 | 0.24/0.29/0.86 | 0.30/0.29/0.90 | 0.30/0.33/0.88 | 0.29/0.28/0.85 | 0.28/0.29/0.85 | 0.29/0.30/0.87 | 0.22/0.28/0.88 | 0.30/0.29/0.91
Average | 0.58/0.59/0.89 | 0.56/0.58/0.89 | 0.58/0.57/0.88 | 0.53/0.56/0.88 | 0.58/0.57/0.88 | 0.53/0.54/0.88 | 0.52/0.55/0.87 | 0.53/0.53/0.88 | 0.54/0.54/0.86 | 0.54/0.53/0.85 | 0.55/0.53/0.86
Fig. 2 Results from the noise-free experiments showing Pratt figure of merit vs distance from target for grad., SA, and GDA. The distance is measured in radii, so that a distance of 1.0 means that the initial snake was one radius from the circular target center.
variation (the error in the estimate) exponentially. If D_u(t) = (1/K) Σ_{∀r_1∈R} |p_u^t(r_1) - p_u^f(r_1)| is the variation of the current stationary distribution estimate p with respect to the final stationary distribution p^f for sample u at iteration t, then r(T) = D_u(t)/D_u(t-1) ≈ [(K-2)/2K] exp(-T/b) + 1/2 (Ref. 18). To compute the number of iterations required, M can be given such that {[(K-2)/2K] exp(-T/b) + 1/2}^M is less than some upper bound, say 1/K. In this case, a worst-
Fig. 3 Results from the noisy synthetic experiment (SNR = 3 dB) showing Pratt figure of merit vs distance from target for grad., SA, and GDA. The distance is measured in radii, so that a distance of 1.0 means that the initial snake was one radius from the circular target center.
Fig. 4 Example images from the synthetic experiments in which the initial snake position (with respect to the center) was 0.75 cell radii away from the cell center. In this case, the images are corrupted with additive Gaussian noise with 20% normalized noise variance [for 8 bit grayscale imagery, this means that the variance is 0.2(255) = 51]. (a)-(c) Initial, intermediate, and final snakes using gradient descent. (d)-(f) Initial, intermediate, and final snakes using simulated annealing. (g)-(i) Initial, intermediate, and final snakes using generalized deterministic annealing.
case convergence is given by M ≈ {ln(1/K)/ln[(K-1)/K] + 1/2}, which behaves as O[K log(K)] [using the approximation that ln(1 - 1/K) ≈ -1/K for small 1/K]. So, in problems where K < N, the GDA approach is more expedient than the O(K^N) SA convergence and the O(NK) compromise for a practical implementation of SA.
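This worst-case count can be checked numerically; a small sketch (ignoring the additive 1/2) showing that ln(1/K)/ln[(K-1)/K] tracks K ln K:

```python
import math

def worst_case_iters(K):
    """Smallest M with ((K-1)/K)**M < 1/K; (K-1)/K is the worst-case
    per-iteration contraction factor r(T)."""
    return math.log(1.0 / K) / math.log((K - 1) / K)

def approx_iters(K):
    """K*ln(K) approximation, via ln(1 - 1/K) ~ -1/K."""
    return K * math.log(K)

# Exact count and the K ln K approximation agree closely for modest K.
for K in (17, 64, 256):
    print(K, round(worst_case_iters(K), 1), round(approx_iters(K), 1))
```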
3 Results
The results from 231 single-image synthetic experiments are provided that demonstrate the edge localization given in image segmentation by the GDA snake for varying initial snake positions and varying amounts of noise. Ten tracking experiments on real data sets of 91 frames each (3 s of video at 30 fps) provide 910 images that reveal the ability of the GDA snake to capture a moving object. Both the synthetic and real data experiments have the goal of delineating a cell boundary.
3.1 Experiments with Synthetic Data
The synthetic experiments are set up as follows. First, an ideal cell is created (with known radius ρ) using a filled circle of constant intensity of 10. The background has an initial intensity of 0, before white Gaussian noise of variance v is added. The variance v is normalized by the maximum intensity of 255, so that v = 0.2 would correspond to a variance of 51 intensity levels. In the experiments, the normalized variance v is varied from 0.0 to 0.2 in increments of 0.02. For each method (gradient descent, SA, and GDA), the initial snake is placed (centered) a distance of dρ away from the true cell center. The distance factor d is varied from 0 to 2.0 in increments of 0.1. So, testing three methods (gradient descent, SA, and GDA) on 11 different intensities of noise and 21 different initial positions yields 693 experiments.
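For reference, the SNR values quoted in Table 1 can be reproduced from the normalized variance, taking SNR as 10 log10 of the squared cell intensity over the noise variance (this SNR definition is an assumption that matches the tabulated values):

```python
import math

CELL_INTENSITY = 10.0   # filled circle intensity on a 0 background
MAX_INTENSITY = 255.0   # normalization for the noise variance

def snr_db(v_norm):
    """SNR in dB for normalized noise variance v_norm, assumed here as
    10*log10(signal^2 / variance)."""
    variance = v_norm * MAX_INTENSITY   # e.g., v = 0.2 -> 51 levels
    return 10.0 * math.log10(CELL_INTENSITY**2 / variance)

print(round(snr_db(0.20), 1))   # harshest level in Table 1, about 3 dB
print(round(snr_db(0.02), 1))   # mildest noisy level, about 13 dB
```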
The success of the synthetic experiments is evaluated using Pratt's figure of merit (Ref. 24) for edge localization. The Pratt figure of merit is given by

FOM = [1/max{N̂, N_ideal}] Σ_{i=1}^{N̂} 1/(1 + d_i²/9),   (24)
where N̂ is the number of edge pixels in the segmentation, N_ideal is the number of edge pixels surrounding the synthetic ideal cell, and d_i is the distance between the ith segmented edge pixel and the nearest ideal edge pixel (Ref. 24). The Pratt figure of merit is given for each method on each level of noise and initial position in Table 1. Figures 2 and 3 show the Pratt figure of merit for the three methods in the extreme cases of no noise (Fig. 2) and noise with 20% normalized variance (Fig. 3). The results for the GDA snake are consistently high for all amounts of image degradation and the entire range of starting positions. Although a figure of merit of one would indicate a "perfect" segmentation, GDA yields a maximum of 0.92 due to the rounding error of allowing the GDA snake to exist only at integer-valued locations in the image. Over all experiments, the average figure of merit for GDA is 0.89, while for SA and gradient descent, the average is 0.59 for both.

Fig. 5 Example images from the synthetic experiments in which the initial snake position (with respect to the center) was 2 cell radii (or one diameter) away from the cell center. In this case, the images are corrupted with additive Gaussian noise with 20% normalized noise variance [for 8 bit grayscale imagery, this means that the variance is 0.2(255) = 51]. (a)-(c) Initial, intermediate, and final snakes using gradient descent. (d)-(f) Initial, intermediate, and final snakes using simulated annealing. (g)-(i) Initial, intermediate, and final snakes using generalized deterministic annealing.

To obtain an idea of how each experiment progressed, Figs. 4 and 5 show images with the snakes superimposed in their initial position, intermediate position, and final position. Figure 4 gives the images corresponding to the synthetic experiment with noise of 20% normalized variance and a starting position of 0.75 radii from the actual center. In Fig. 5, the noise is the same, but the starting position is more aggressive (2 radii from the actual center). Note that the poor results obtained by gradient descent could be improved by regularization or prefiltering. The noise level (20% normalized variance) is sufficient to stop the gradient descent snake at a suboptimal local configuration. I chose to evaluate the algorithms without prefiltering so as to compare the optimization procedures instead of image filtering procedures.
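Eq. (24) can be implemented directly from edge-pixel lists; a sketch (not the author's code), with the scaling constant 1/9 as in the text:

```python
import math

def pratt_fom(segmented, ideal):
    """Pratt figure of merit, Eq. (24): segmented and ideal are lists of
    (x, y) edge-pixel coordinates; d_i is the distance from the i-th
    segmented pixel to the nearest ideal pixel."""
    total = 0.0
    for (x, y) in segmented:
        d2 = min((x - xi)**2 + (y - yi)**2 for (xi, yi) in ideal)
        total += 1.0 / (1.0 + d2 / 9.0)
    return total / max(len(segmented), len(ideal))

ideal = [(i, 0) for i in range(10)]      # a horizontal ideal edge
perfect = pratt_fom(ideal, ideal)        # exact match gives 1.0
shifted = pratt_fom([(i, 1) for i in range(10)], ideal)  # off by 1 pixel
print(perfect, round(shifted, 3))
```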
In Figs. 4 and 5, the SA algorithm did not utilize a sufficient number of moves to find the more global minimum in energy in the presence of local minima due to noise and partial contours. As discussed in Sec. 2.3.4, the results for SA are obtained by using the equivalent number of moves as expended with GDA. If the number of moves associated with SA were increased by two orders of magnitude, the SA results in Figs. 4 and 5 would match or exceed the quality of the results given by GDA. Also, it is important to note that SA is stochastic, so the results shown in Figs. 4 and 5 are only examples of results given by SA with the given parameters.
The same energy functional (described in Sec. 2.3) is utilized by SA, GDA, and gradient descent. Also, the weighting of energy terms is fixed. The weights applied to tension, rigidity, external energy, shape, and size are 6, 1, 20, 15, and 10, respectively. These weights represent empirical choices and are not optimal in any sense. The goal of the experiments is the comparison of the three implementations for a fixed energy functional and fixed set of weights. For all three methods, the number of samples is N = 64. The number of possible solutions for GDA is set so that the sample can be located from 1 pixel away from the initial center to 2ρ pixels away from the initial center (where ρ is the expected radius of the cell), which equates to K = 17 in both the synthetic and real experiments.
As to computational expense, the three algorithms are compared using a Matlab (Mathworks, Natick, MA) simulation on a Pentium IV with 1 GB of RAM. For gradient descent, one update of the N samples required approximately 0.1 s of processing time. For SA, the evaluation of a single move for N samples also required approximately 0.1 s. With GDA, the evaluation of K² moves for each of the N samples needed 1.3 s of processing time with K = 17. To evaluate the same number of moves through the solution space, SA requires 14.5 s, which is 11 times more expensive than GDA (where GDA gives superior solution quality). For real-time implementation (at frame rate), the GDA approach is promising. Recent experiments in our laboratory with cell tracking implemented on a Mercury AdapDev multiprocessor indicate that a 100× speed-up is possible.
3.2 Experiments with Real Data
A similar approach is taken with the real data tracking experiments. Here, leukocytes (white blood cells) are observed in vivo as they move through postcapillary venules. For the study of the inflammatory response and of anti-inflammatory drugs, many laboratories record the position of these leukocytes manually in order to compute such features as leukocyte velocity, which is an indicator of the level of leukocyte activation (Ref. 23). In ten videos of 91 frames each (3 s at 30 fps), the center of the cell is identified in each frame manually [which is the current method of finding "ground truth" (Ref. 23)]. Then, snakes computed via gradient descent, SA, and GDA are used to track the cell contour. The same energy functional and weights used in the synthetic experiments are applied in each of the three optimization techniques. In each 91 frame experiment, the initial position in the first frame is set to be equal to the initial position identified in manual analysis. In subsequent frames, the snake computed on the previous frame is used as the initial snake.

Fig. 6 Original frame 17 in a video sequence (upper left) (note the blur due to motion); tracking result using GDA (upper right); tracking result using SA (lower left); tracking result using gradient descent (lower right).
The real data experiments provide challenges to the snake-based tracking technique that are not tested in the synthetic experiments. In the real data, the leukocytes are not perfect circles, and thus the shape term in the energy functional is not perfectly satisfied; a similar observation can be made about the varying leukocyte size. Furthermore, the real data present the problems of blurry imagery, partial occlusion, and inhomogeneous interior intensity profiles. And, the real microscopic imagery tests the tracking technique in the presence of clutter, where strong edges from other cells and from the vessel boundary can attract the snakes away from their intended targets.
Sample frames from a typical 91 frame sequence are shown in Figs. 6 and 7. The tracking methods based on GDA, SA, and gradient descent successfully capture the leukocyte boundary in the first 16 frames. At the time at which frame 17 is acquired (see Fig. 6), motion blur occurs due to motion (respiratory) of the subject. This motion thwarts the tracking algorithm based on gradient descent (see Fig. 7). However, the SA and GDA snakes are able to reacquire the cell after the abrupt movement (see Fig. 7).
For each of the 910 frames tracked by the gradient descent snake, the SA snake, and the GDA snake, the centroid of the snake is compared to the center identified manually. I then compute the root mean squared error (RMSE) in
Fig. 7 Original frame 20 in a video sequence (upper left); tracking result using GDA (upper right); tracking result using SA (lower left); tracking result using gradient descent (lower right).
Table 2 RMSE in microns for ten 91 frame cell tracking experiments.
Algorithm 1 2 3 4 5 6 7 8 9 10 Overall RMSE
Gradient Descent 3.06 0.78 1.14 1.73 2.40 3.39 3.88 0.50 0.73 3.00 2.37
Simulated Annealing 9.75 1.07 1.55 2.63 1.20 1.01 4.52 1.35 1.50 7.80 4.40
Generalized Deterministic Annealing 0.74 0.77 0.63 1.72 0.88 0.96 3.95 0.49 0.90 0.83 1.53
position for each sequence. These RMSE values (in microns) are listed in Table 2. The overall RMSE for the GDA snake is 1.53 μm, compared to 2.37 μm for gradient descent and 4.40 μm for SA. For a time critical application such as tracking, GDA provides both high quality results and low computational expense.
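The per-sequence error metric can be sketched as follows, with hypothetical centroid data standing in for the tracked and manually identified centers:

```python
import math

def rmse(tracked, manual):
    """Root mean squared centroid error over a sequence of (x, y) pairs."""
    se = [(tx - mx)**2 + (ty - my)**2
          for (tx, ty), (mx, my) in zip(tracked, manual)]
    return math.sqrt(sum(se) / len(se))

# Toy 91-frame sequence: tracked centers offset from the manual ground
# truth by 1 unit in x on every frame.
manual = [(float(i), 0.0) for i in range(91)]
tracked = [(float(i) + 1.0, 0.0) for i in range(91)]
print(rmse(tracked, manual))   # 1.0
```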
So, the synthetic experiments show the robustness of the GDA in terms of varying initial positions and varying amounts of image degradation. The real experiments show that the GDA snake is efficacious for a difficult in vivo cell tracking problem.
4 Conclusions
In summary, GDA is a deterministic method for approximating the stationary distribution of the SA Markov chain. This paper develops and demonstrates the use of GDA for computing the position of a parametric snake and provides an application of the GDA-driven snake to cell tracking. GDA holds several advantages over the traditional gradient descent approach. GDA can escape local minima, while gradient descent seeks the closest local minimum in the energy functional, which may lead to inferior solutions in the case of noisy imagery. GDA is able to simultaneously evaluate multiple snake positions, while gradient descent only "sees" the spatial neighbors to the current position, which is problematic in the case where the initial snake is distant from the final desired position. In contrast to stochastic methods such as SA, GDA is repeatable and is appropriate for real-time applications such as tracking objects in video.
Acknowledgments
The author would like to thank Gang Dong for his assistance with the experiments. This work is supported in part by NIH HL68510 and in part by the Whitaker Foundation.
References
1. M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," Int. J. Comput. Vis. 1, 321-331 (1987).
2. I. N. Bankman, T. Nizialek, I. Simon, O. B. Gatewood, I. N. Weinberg, and W. R. Brody, "Segmentation algorithms for detecting microcalcifications in mammograms," IEEE Trans. Inf. Technol. Biomed. 1, 141-149 (1997).
3. A. Blake and M. Isard, Active Contours: The Application of Techniques from Graphics, Vision, Control Theory and Statistics to Visual Tracking of Shapes in Motion, Springer, Berlin (1998).
4. S. Lobregt and M. A. Viergever, "A discrete dynamic contour model," IEEE Trans. Med. Imaging 14, 12-24 (1995).
5. I. Mikic, S. Krucinski, and J. D. Thomas, "Segmentation and tracking in echocardiographic sequences: active contours guided by optical flow estimates," IEEE Trans. Med. Imaging 17, 274-284 (1998).
6. J. L. Troutman, Variational Calculus with Elementary Convexity, Springer, New York (1983).
7. A. A. Amini, S. Tehrani, and T. E. Weymouth, "Using dynamic programming for minimizing the energy of active contours in the presence of hard constraints," IEEE Int. Conf. on Computer Vision, pp. 95-99 (1988).
8. A. A. Amini, T. E. Weymouth, and R. C. Jain, "Using dynamic programming for solving variational problems in vision," IEEE Trans. Pattern Anal. Mach. Intell. 12(9), 855-867 (1990).
9. B. S. Morse, W. A. Barrett, J. K. Udupa, and R. P. Burton, "Trainable optimal boundary finding using two-dimensional dynamic programming," Univ. of Pennsylvania Technical Report No. MIPG180 (1991).
10. R. P. Grzeszczuk and D. N. Levin, "Brownian strings: Segmenting images with stochastically deformable contours," IEEE Trans. Pattern Anal. Mach. Intell. 19, 1100-1114 (1997).
11. C. T. Tsai, Y. N. Sun, and P. C. Chung, "Minimising the energy of active contour model using a Hopfield network," IEE Proc. E: Comput. Digit. Tech. 140, 297-303 (1993).
12. E. H. L. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing, Wiley, New York (1987).
13. S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 721-741 (1984).
14. N. S. Friedland and A. Rosenfeld, "Compact object recognition using energy-function-based optimization," IEEE Trans. Pattern Anal. Mach. Intell. 14, 770-777 (1992).
15. G. Storvik, "A Bayesian approach to dynamic contours through stochastic sampling and simulated annealing," IEEE Trans. Pattern Anal. Mach. Intell. 16, 976-986 (1994).
16. T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Trans. Image Process. 10, 266-277 (2001).
17. R. Goldenberg, R. Kimmel, E. Rivlin, and M. Rudzsky, "Fast geodesic active contours," IEEE Trans. Image Process. 10, 1467-1475 (2001).
18. S. T. Acton and A. C. Bovik, "Generalized deterministic annealing," IEEE Trans. Neural Netw. 7, 686-699 (1996).
19. S. T. Acton and A. C. Bovik, "Piecewise and local image models for regularized image restoration using cross validation," IEEE Trans. Image Process. 8, 652-665 (1999).
20. G. Bilbro, R. Mann, T. K. Miller, W. E. Snyder, D. E. Van den Bout, and M. White, "Optimization by mean field annealing," in Advances in Neural Information Processing Systems 1, D. S. Touretzky, ed., Morgan Kaufmann, San Mateo, CA (1989).
21. C. Xu and J. L. Prince, "Snakes, shapes, and gradient vector flow," IEEE Trans. Image Process. 7, 359-369 (1998).
22. N. Ray, S. T. Acton, and K. Ley, "Tracking leukocytes in vivo with shape and size constrained active contours," IEEE Trans. Med. Imaging 21, 1222-1235 (2002).
23. S. T. Acton, K. Wethmar, and K. Ley, "Automatic tracking of leukocytes in vivo," Microvasc. Res. 63, 139-148 (2002).
24. W. K. Pratt, Digital Image Processing, Wiley, New York (1977).
Scott T. Acton graduated from OaktonHigh School in Vienna, Virginia. He re-ceived his BS degree in electrical engi-neering from Virginia Tech, Blacksburg in1988 as a Virginia Scholar. He received hisMS degree in electrical and computer engi-neering and his PhD degree in electricaland computer engineering from the Univer-sity of Texas at Austin in 1990 and 1993,respectively, as a member of the Labora-tory for Vision Systems. He has worked in
industry for AT&T, Oakton, VA, the MITRE Corporation, McLean, VA,and Motorola, Inc., Phoenix, AZ and in academia for OklahomaState University, Stillwater. For his research in tracking, Dr. Actonwas given an ARO Young Investigator Award. He received the Hal-liburton Outstanding Young Faculty Award in 1998. In 1997, he wasnamed the Eta Kappa Nu Outstanding Young Electrical Engineer—anational award that has been given annually since 1936. At the Uni-versity of Virginia, Charlottesville, he was named the OutstandingNew Teacher in 2002, elected a Faculty Fellow in 2003, and holdsthe Walter N. Munster chair in electrical and computer engineeringand biomedical engineering. He is the recipient of a Whitaker Foun-dation Biomedical Engineering Research Grant for work in cell de-tection and tracking. Dr. Acton served as Associate Editor for theIEEE Transactions on Image Processing and as Associate Editor forthe IEEE Signal Processing Letters. He is the 2004 Technical Pro-gram Chair and the 2006 General Chair for the Asilomar Conferenceon Signals, Systems and Computers. His research interests includeanisotropic diffusion, active contours, biomedical segmentationproblems, biomedical tracking problems, and war.