

IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 31, NO. 3, MARCH 2012 843

Tracking Pylorus in Ultrasonic Image Sequences With Edge-Based Optical Flow

Chaojie Chen, Yuanyuan Wang*, Senior Member, IEEE, Jinhua Yu, Zhuyu Zhou, Li Shen, and Yaqing Chen

Abstract—Tracking pylorus in ultrasonic image sequences is an important step in the analysis of duodenogastric reflux (DGR). We propose a joint prediction and segmentation method (JPS) which combines optical flow with active contour to track pylorus. The goal of the proposed method is to improve the pyloric tracking accuracy by taking account of not only the connection information among edge points but also the spatio-temporal information among consecutive frames. The proposed method is compared with four other tracking methods by using both synthetic and real ultrasonic image sequences. Several numerical indexes: Hausdorff distance (HD), average distance (AD), mean edge distance (MED), and edge curvature (EC) have been calculated to evaluate the performance of each method. JPS achieves the minimum distance metrics (HD, AD, and MED) and a smaller EC. The experimental results indicate that JPS gives a better tracking performance than the others, showing the best agreement with the gold curves while keeping the smoothness of the result.

Index Terms—Edge tracking, optical flow, pyloric tracking, the snake.

I. INTRODUCTION

DUODENOGASTRIC reflux (DGR) is a symptom in which contents of the duodenum incorrectly flow from the duodenum to the stomach due to the defective valve function of pylorus, the connection between duodenum and stomach. DGR is a pathophysiological phenomenon. Physiological DGR may happen to anyone, with a small reflux volume. Pathological DGR, on the other hand, has a larger reflux volume and occurs more frequently. Pathological DGR is thought to have a close relationship with gastritis, gastric ulcer, gastric cancer, etc. [1].

Ultrasonography has been widely used in clinical gastrointestinal examination with the benefits of good temporal resolution, noninvasiveness, no radiation, and low cost. With the ultrasonic probe positioned at the level of the transpyloric plane, the antrum, pylorus, and proximal duodenum can be observed simultaneously to see if DGR occurs [2], [3]. Current diagnosis of DGR depends on clinicians checking the ultrasonic video frame by frame and then computing some evaluation indexes. The diagnosis process is tedious, time-consuming, and subject to observer variability. It is meaningful to develop an automatic or semi-automatic computer-aided method to help detect and evaluate DGR.

As there may be other reflux in the stomach and duodenum because of their peristalsis, only the reflux passing through pylorus can be classified as DGR. Correctly locating pylorus, which can avoid mistakenly classifying local reflux as DGR, is thus critical in diagnosing DGR. There are two important indicators in evaluating the severity of DGR. One is the volume of the flow refluxed through pylorus. The other is the distance of the refluxed flow from pylorus. Since estimations of both volume and distance need the position of pylorus, tracking pylorus acts as a crucial step in analyzing DGR quantitatively. Accurate tracking of pylorus will provide meaningful pathophysiologic information and benefit an accurate diagnosis.

Manuscript received November 25, 2011; accepted January 06, 2012. Date of publication January 12, 2012; date of current version March 02, 2012. This work was supported in part by the National Natural Science Foundation of China under Grant 10974035 and Grant 81101049, and in part by the Program of Shanghai Subject Chief Scientist (No. 10XD1400600). Asterisk indicates corresponding author.
C. Chen and J. Yu are with the Department of Electronic Engineering, Fudan University, Shanghai 200433, China.
*Y. Wang is with the Department of Electronic Engineering, Fudan University, Shanghai 200433, China (e-mail: [email protected]).
Z. Zhou, L. Shen, and Y. Chen are with the Department of Ultrasound Diagnose, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai 200092, China (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TMI.2012.2183884

Tracking movable objects in video is an interesting and

challenging topic in medical signal processing. Many works have been published, which can be simply classified into two categories: region-based tracking methods and boundary-based tracking methods.

Region-based tracking methods make use of the information

from local regions, which share similar intensity or texture properties. In many cases, they do not have a specific requirement on initial conditions. One group of region-based tracking methods uses segmentation techniques and extracts the motion by intensity analysis over the region. Based on level sets, Belaid et al. [4] presented a new method which combined phase-based geodesic active contours with a new local maximum-likelihood region term. In [5], Rousson et al. proposed a level set method for shape-driven object extraction. They used a voxelwise probabilistic level set formulation to account for the prior knowledge of the shape, and then introduced this prior knowledge as an energetic constraint into the segmentation process. Another group of region-based tracking methods estimates the motion directly. The representatives of this group are block matching and optical flow. Block matching methods perform the tracking through a matching process according to a similarity metric on the visual representation space. The authors of [6] used the sum of absolute differences (SAD) to track ultrasonic speckle patterns. The blood flow velocity and angle in carotid arteries could then be determined. Korstanje et al. [7] devised a dedicated 2-D multi-kernel block-matching scheme with subpixel motion estimation to handle large displacements over long sequences. They

0278-0062/$31.00 © 2012 IEEE



utilized this block-matching algorithm to quantify tendon displacement without anatomical landmarks. Optical flow estimation, which was first proposed by Horn and Schunck [8], is another important technique in motion analysis. It has been used in many medical applications. Yaacobi et al. [9] suggested using an iterative optical flow algorithm to track the motion of the left ventricle (LV) wall. Yang et al. [10] regarded the phase as a property of the tissue which remains constant over a cardiac cycle. In their work, two phase images rather than brightness ones were used as sources of the optical flow to calculate the displacement flow. The displacement flow was then used to track the LV. To track lung tumors [11], Xu et al. proposed a method which utilized the optical flow information to estimate the tumor centroid displacement between frames as well as the initial tumor contour. The final contour could be determined by fine-tuning the initial contour using a template matching algorithm within a small search range.

Boundary-based tracking algorithms, on the other hand,

make use of deformable contour models to track boundaries. The deformable contour models, like the snake [12]–[14], deformable templates [15], [16], and active shape/appearance models [17]–[19], are used as descriptions of the moving targets. Among state-of-the-art methods, Somkantha et al. [13] proposed a snake-based technique whose edge map was derived from texture features so that even ill-defined edges in noisy images could be detected. To provide robust tracking of the vessel lumen, Guerrero et al. [16] proposed a modified Star–Kalman algorithm based on an elliptical model. Recently, many studies have taken spatio-temporal information into consideration to get a better tracking performance. For instance, Fang et al. [14] proposed a method which incorporated the temporal information into the active contour function to solve the dropout and speckle noise problems encountered in detecting the inner heart wall boundary. By separating the contour into strong and weak segments with a Gaussian mixture model (GMM), the temporal information was incorporated into the external energy of the active contour function accordingly to recover the missing boundary and strengthen the weak segments. Boundary-based methods have to deal with issues such as nonrigid objects, local deformations, and changes of topology, which cannot be addressed efficiently. Multiple dynamic models have been used to improve shape tracking in the presence of abrupt shape and motion changes. Jacinto et al. [18] addressed object tracking in ultrasound images using multiple dynamic models to track the evolution of the LV boundary; invalid observations (outliers) were modeled as well to reduce their influence on the shape estimation and make the algorithm robust.
One of the benefits of these boundary-based tracking algorithms is that the contour models are built on flexible curves, which have internal stiffness properties that keep the regularity of the shape. Boundary-based features can provide reliable information on the shape of the targets and for the motion tracking. The limitation is that they lack the region information from the image and may lose their stability when the boundary information is not strong enough.

From the above-mentioned paradigms, we can see that the combination of different methods has already become a trend to improve the tracking performance. In [20], the watershed transformation was performed on the morphological image to localize the region of interest. The dynamic directional gradient vector flow field was then utilized dynamically to deform the contour onto the endocardial boundary of the LV. Riha et al. [21] proposed a method which automatically monitored the artery border tissue movement using the optical flow technique, and then fit an ellipse model to these monitoring points to get the most suitable artery shape in every frame. Mansouri et al. [22] estimated the motion with a local search step for the best visual correspondences. The result of such a step was used to determine the speed of the flow that performed tracking within a level set formulation. Eslami et al. [23] constructed an energy equation including Mumford–Shah segmentation and optical flow based dense motion estimation. The minimizer of the equation could provide an optimum motion field and edge set by considering both spatial and temporal discontinuities. The authors of [24] used the Hough transform to initialize an active-contour based methodology in an attempt to accurately detect the carotid artery wall from ultrasound images.

As far as we know, there is little literature which has been

reported on tracking the motion of pylorus in ultrasonic image sequences. Tracking pylorus confronts some common problems in ultrasonic image analysis, such as a high noise level, shadows, signal dropout, and so on. Furthermore, tracking the dynamic movement of pylorus is much more complicated since there exists variable interference in the region of interest. The movement of pylorus is not as regular as that of the heart or vessels, and few shape models or statistical information is available. Because of the pyloric motion, the observing plane may tilt and the pyloric region in view may become obscure. Although the clinicians can adjust the placement of the detector in time and try their best to keep the target clear, there are still some frames where the target is indistinguishable. This phenomenon is occasionally observed, especially in pyloric closures, which influences the visibility of the object and poses a great challenge for current tracking methods.

In this paper, an edge-based optical flow method is proposed

to track the pyloric position. This new method combines optical flow with active contour by considering the optical flow constant constraint and the target's features simultaneously. The proposed method is actually applied to track the gastroduodenal walls around pylorus. By analyzing the shape of the gastroduodenal walls, the location of pylorus can be determined. More details are given in Section III-A. The remainder of this paper is organized as follows. Section II introduces the background of the proposed method. Section III describes the proposed method and gives the corresponding numerical solution. In Section IV, we compare the proposed method with other tracking methods on synthetic and real ultrasonic image sequences. Sections V and VI give the discussion and conclusion, respectively.

II. BACKGROUNDS

A. Optical Flow

Optical flow relies on the assumption that the temporal intensity change is due to the motion only. It can be used to estimate the dense motion field between successive frames. Let I(x, y, t) denote the image intensity at time t at spatial location (x, y). According to the constant intensity



assumption, i.e., that the intensity of the same object is unchanged in successive frames, we have

I(x + dx, y + dy, t + dt) = I(x, y, t)    (1)

Applying a first-order Taylor expansion while omitting the higher-order terms, it is easy to get

I(x, y, t) + (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt = I(x, y, t)    (2)

or

(∂I/∂x)(dx/dt) + (∂I/∂y)(dy/dt) + ∂I/∂t = 0    (3)

Denoting u = dx/dt, v = dy/dt, I_x = ∂I/∂x, I_y = ∂I/∂y, and I_t = ∂I/∂t, (3) can be rewritten as

I_x u + I_y v + I_t = 0    (4)

Equation (4) is the famous basic equation of optical flow. As there are two unknown variables (u and v) within one equation, optical flow is inherently an aperture problem. To solve the aperture problem, the existing optical flow methods can be classified into local and global categories [8], [25]–[34]. Global methods introduce a global smoothness regularization, resulting in a dense motion field. However, they are sensitive to noise. The representative method was proposed by Horn and Schunck (HS) [8], which combines the optical flow constraint (4) with a global smoothness term to constrain the estimated motion field (u, v). This method estimates the motion field by minimizing the energy function below

E_HS = ∬ [ (I_x u + I_y v + I_t)² + α² (|∇u|² + |∇v|²) ] dx dy    (5)
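As a concrete illustration of the HS approach, the energy above can be minimized with the classical fixed-point scheme in which each velocity component is updated from its local average, corrected by the optical flow residual. The sketch below is illustrative only (function names, parameter choices, and the synthetic usage are ours, not the paper's):

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Estimate a dense flow field (u, v) between two frames with the
    classical Horn-Schunck iteration. I1, I2: 2-D float arrays."""
    Iy, Ix = np.gradient(I1)          # spatial derivatives (rows, cols)
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def local_mean(f):
        # 4-neighbour average approximates the local mean flow
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        ub, vb = local_mean(u), local_mean(v)
        common = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ub - Ix * common
        v = vb - Iy * common
    return u, v
```

On a smooth synthetic pair of frames, a few hundred iterations recover a subpixel translation reasonably well; on noisy ultrasonic data the noise sensitivity noted in the text applies.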

Local methods apply the optical flow constraint in a small neighborhood, resulting in a nondense motion field. They are relatively robust to noise. For example, the method suggested by Lucas–Kanade (LK) [25] considers a weighted least-squares fit of (4) in each local region by minimizing

E_LK = Σ_{x∈Ω} W²(x) (I_x u + I_y v + I_t)²    (6)

where x denotes a pixel in the window Ω and W(x) denotes a weighting function that gives more weight to the constraints at the center of the neighborhood than to those at the periphery.

Many improved state-of-the-art optical flow methods combine techniques from other applications to improve the accuracy or decrease the computational load. For example, Brox et al. [30], [31] apply segmentation and matching before the optical flow. Bruhn et al. [32] combine local and global optical flow methods to get a dense motion field while remaining robust to noise. There are also optical flow methods incorporating phase [33] and statistical [34] information.
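The LK weighted least-squares fit of (6) reduces, at each point, to a 2 × 2 linear system assembled over the local window. A minimal sketch in our notation (the Gaussian weighting is one common choice for W, not necessarily the paper's):

```python
import numpy as np

def lucas_kanade(I1, I2, center, half=5):
    """Estimate the flow (u, v) at one point by a weighted least-squares
    fit of the optical flow constraint in a (2*half+1)^2 window."""
    Iy, Ix = np.gradient(I1)
    It = I2 - I1
    r0, c0 = center
    sl = (slice(r0 - half, r0 + half + 1), slice(c0 - half, c0 + half + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    # Gaussian weights: more influence for constraints near the window center
    g = np.exp(-(np.arange(-half, half + 1)**2) / (2.0 * (half / 2.0)**2))
    w = np.outer(g, g).ravel()
    A = np.stack([ix, iy], axis=1)
    W = np.diag(w**2)
    # Solve the normal equations (A^T W^2 A) [u v]^T = -A^T W^2 I_t
    return np.linalg.solve(A.T @ W @ A, -A.T @ W @ it)
```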

B. Active Contour

Parametric active contour model, or the snake, was first proposed by Kass et al. [12]. It is widely used in segmentation and tracking. Given a proper initial curve, the active contour can move to the desired object boundary under the influence of its energy terms. Define a set of points v(s) = (x(s), y(s)), where v represents the active contour points and s is the normalized arc length along the contour, s ∈ [0, 1]. The energy function of the active contour method is defined as

E_snake = ∫₀¹ [ E_int(v(s)) + E_ext(v(s)) ] ds    (7)

where E_snake represents the total energy of the contour, which is composed of the internal energy term E_int and the external energy term E_ext. E_int constrains the smoothness and stiffness of the contour and is defined as

E_int = (1/2) ( α|v_s|² + β|v_ss|² )    (8)

where v_s and v_ss denote dv/ds and d²v/ds², respectively; α is the smoothness constraint weight and β is the stiffness constraint weight.

The external energy E_ext is the energy that pulls the active contour toward the desired boundary. It was originally defined as the negative of the image gradient intensity

E_ext = −|∇(G_σ ∗ I)|²    (9)

where ∇ denotes the gradient operator, G_σ is a 2-D Gaussian filter with standard deviation σ, and I is the image.

The external force is derived from the external energy and is defined as the force which attracts the snake to edges

F_ext = −∇E_ext    (10)
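On a discretized contour, the internal energy (8) is commonly approximated with finite differences of the point coordinates. A small sketch under that standard discretization (our code, not taken from the paper):

```python
import numpy as np

def internal_energy(pts, alpha=0.1, beta=0.05, closed=False):
    """Discrete snake internal energy: 0.5 * sum(alpha*|v_s|^2 + beta*|v_ss|^2),
    with first/second differences approximating dv/ds and d^2 v/ds^2.
    pts: (n, 2) array of contour points."""
    if closed:
        nxt = np.roll(pts, -1, axis=0)
        prv = np.roll(pts, 1, axis=0)
        vs = nxt - pts
        vss = nxt - 2 * pts + prv
    else:
        vs = np.diff(pts, axis=0)        # n-1 first differences
        vss = np.diff(pts, n=2, axis=0)  # n-2 second differences
    return 0.5 * (alpha * (vs**2).sum() + beta * (vss**2).sum())
```

A straight, evenly spaced contour has zero bending energy, while any kink raises the total, which is what keeps the snake regular.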

Much research has been dedicated to the design of improved external forces that give the active contour a large capture range and the ability to capture concavities. New external forces such as the balloon force [35], the gradient vector flow (GVF) [36], [37], and the vector field convolution (VFC) [38] have already been proposed.

We have chosen GVF as the external force for its good edge-detection ability. The GVF is calculated by minimizing the energy function

E_GVF = ∬ [ μ(u_x² + u_y² + v_x² + v_y²) + |∇f|² |V − ∇f|² ] dx dy    (11)

where μ is a parameter controlling the degree of smoothness of the vector field and f is the edge map. In this paper, μ is 0.02 and the edge map is obtained by applying a Canny operator. In order to preserve weak edges and avoid over-smoothing, the standard deviation of the Gaussian filter in the Canny detector is chosen to be 1.5. Instead of fixed thresholds, we use dynamic adaptive thresholds and determine them automatically by analyzing the overall gradient histogram. In general, the high threshold is determined by the gradient value corresponding to 70% of the pixels in the cumulative histogram, while 0.4 of this gradient value is used for the low threshold [46].

Fig. 1. Illustration of edge-based pyloric tracking and DGR detection. The brown lines indicate the upper and lower walls; the blue line indicates the closest place between them and is considered as the center position of pylorus. DGR is supposed to be detected at the closest region.
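The adaptive threshold rule just described (high threshold at the 70% point of the cumulative gradient histogram, low threshold at 0.4 of that value) can be sketched as follows; this is our reading of the rule, and the function name is illustrative:

```python
import numpy as np

def canny_thresholds(grad_mag, frac=0.70, ratio=0.40):
    """Adaptive Canny thresholds: the high threshold is the gradient value
    below which `frac` of the pixels fall in the cumulative histogram;
    the low threshold is `ratio` times the high one."""
    high = np.percentile(grad_mag, 100.0 * frac)
    return ratio * high, high
```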

III. METHODOLOGY

A. Basic Idea

Unlike the well-studied LV tracking in echocardiography, where the target has a distinguishable and closed boundary, pylorus is a physiological notion which connects duodenum and stomach. It is hard to tell its boundary precisely. Though achieving good performance in echocardiograms, region-based tracking such as level sets can hardly be applied to pyloric tracking.

From ultrasonic images, we can observe that there are two protuberant gastroduodenal walls in the view (here we call them the upper and lower walls). Pylorus can be seen as a valve consisting of two fragments of the protuberant gastroduodenal walls. According to this physiological structure, the upper wall and the lower wall are closer at pylorus than at other nearby positions. Thus, by tracking the upper and lower walls simultaneously, the closest region between them can be found, and it is supposed to lie in the pyloric region as well. In other words, the closest region indicates the location of pylorus, providing critical information for detecting and quantitatively analyzing DGR. Pylorus can be tracked by this feature, as shown in Fig. 1. By tracking the upper and lower walls, the closest place between them in each frame can be located and is considered as the center position of pylorus. This region can be used to detect DGR.

Optical flows show a good performance in nonrigid motion

tracking and can provide pixel-wise motion estimation. However, due to the high noise level in ultrasonic images, the estimation error is much larger than that achieved in videos of natural images. The error may accumulate, and if it grows large enough, optical flow will finally lose the target over a large number of frames. Increasing the frame rate is usually an effective way to alleviate the tracking error in natural image sequences. For object tracking in ultrasonic video, however, increasing the frame rate may not bring considerable improvement since strong noise appears in every frame. Furthermore, the frame rate range is determined by the specific commercial ultrasound machine. Moreover, optical flows have weak constraints on the spatial connection among target points, so the tracking contours obtained by optical flow usually exhibit a zigzag pattern. As the snakes can produce smooth contours, they are widely used in outlining and tracking regions of interest in images. Accurate initialization is always required to keep the snake from being trapped in local minima, especially in a noisy environment.

It is noticed that these two methods can be complementary to each other. Some studies have proposed to combine optical flow with the snake [39]–[41]. The basic idea of these studies is to first estimate the initial contour using optical flow, and then converge the initial contour to the desired edge by the snake. The combination of these two techniques alleviates the intrinsic error-accumulation problem of optical flow and performs well in many applications. However, it should also be noticed that this combination does not improve the accuracy of optical flow. Especially in a noisy environment, errors of the motion estimation could lead to detection failure. In ultrasonic image sequences, it can also be observed that even with a good initialization given by optical flow, the snake may fail to converge to the target edge when there is edge leaking or strong interference from other edges nearby.
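The pylorus-localization rule of Section III-A (the closest place between the tracked upper and lower walls marks the pyloric center, Fig. 1) can be sketched as a closest-pair search between the two contours; the function name and the midpoint convention are our assumptions:

```python
import numpy as np

def pylorus_center(upper, lower):
    """Locate the pyloric center as the midpoint of the closest point pair
    between the tracked upper and lower gastroduodenal walls.
    upper: (n, 2) array, lower: (m, 2) array of contour points."""
    # Pairwise distances between all points of the two contours
    d = np.linalg.norm(upper[:, None, :] - lower[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return 0.5 * (upper[i] + lower[j]), d[i, j]
```

The returned gap width is also the quantity one would monitor per frame, since the region around this midpoint is where DGR is expected to be detected.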

B. Joint Prediction and Segmentation Method (JPS)

It is assumed that if the features of the target are taken into consideration during the tracking, optical flow can provide a more accurate estimation of motion. In this paper, the proposed method is designed to incorporate optical flow with the snake in a more fundamental way. A new function for joint motion estimation and segmentation is developed, which simultaneously considers the optical flow constraint, the spatial connection relationships among the edge points, and the gradient feature to help improve the accuracy of optical flow and get a smooth result.

As the target to be tracked is an edge, there is no need to calculate the whole motion field. A local optical flow is preferred. Though the LK method has been widely used in ultrasonic image tracking, it is difficult to add the spatial connection information into the least-squares fitting process. To incorporate extra information, an energy function like that of the HS method is applied. We noticed that it is not reasonable to apply the smoothness constraint of the motion field over the whole image, where multiple moving objects may exist. Doing so brings in estimation error, especially on the boundary. In previous studies, segmentation has been applied to constrain the application region of this smoothness constraint [30], [31]. In this paper, instead, we apply the smoothness constraint only on the tracked edge, and assume the motion changes smoothly along the edge. In this way, the segmentation information is incorporated naturally. Recall that s is the normalized arc length along the contour; u_s and v_s are the derivatives of u and v with respect to s. The smoothness constraint can be defined as |u_s|² + |v_s|², which can be seen as a localization of the traditional smoothness term. It only needs to cope with a finite number of edge points and considers the motion smoothness along edges.
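The localized smoothness term |u_s|² + |v_s|² penalizes velocity variation only along the tracked edge rather than over the whole image. In discrete form it is simply a sum of squared first differences of the per-point velocities (a sketch in our notation):

```python
import numpy as np

def edge_motion_smoothness(u, v):
    """Localized smoothness penalty: squared first differences of the
    velocity components along the contour (|u_s|^2 + |v_s|^2), summed over
    the edge points only, instead of a smoothness term over the whole image."""
    return float((np.diff(u)**2).sum() + (np.diff(v)**2).sum())
```

A rigidly translating edge (constant velocity along the contour) incurs zero penalty, while velocities that oscillate from point to point are penalized, which suppresses the zigzag contours noted earlier.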



The localized optical flow energy function is given by

E_of = ∫₀¹ [ λ₁(|u_s|² + |v_s|²) + λ₂(I_x u + I_y v + I_t)² ] ds    (12)

where λ₁ controls the weight of the motion smoothness constraint and λ₂ determines the weight of the optical flow information.

Suppose the location of a target at time t is already known and the target is an edge; the tracking result of this target at t + Δt should still be the edge, as the target characteristic is supposed to stay consistent. In other words, the tracking result should satisfy the constraints for the edge. It is well known that if a contour is on the edge, it minimizes the energy function (7), which considers the smoothness and stiffness constraints of the edge as well as the gradient feature. So the tracking result at t + Δt should minimize the energy function of the snake as follows:

E_sk = ∫₀¹ [ E_int(v(s, t+Δt)) + E_ext(v(s, t+Δt)) ] ds    (13)

where

v(s, t+Δt) = v(s, t) + w(s) Δt,   w(s) = (u(s), v(s))    (14)

Considering all the constraints mentioned above, the energy function E_JPS for the joint segmentation and motion estimation is defined as

E_JPS = E_of + E_sk    (15)

By minimizing (15), the estimation of the motion on edges can be achieved.
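Putting the pieces together, a discrete evaluation of the joint energy (15) sums the optical flow residual at the contour points, the along-edge motion smoothness, and the snake terms of the displaced contour. The sketch below is a plausible reading of the formulation, with illustrative weights and names rather than the paper's actual values:

```python
import numpy as np

def jps_energy(pts, u, v, Ix, Iy, It, ext, lam1=1.0, lam2=1.0,
               alpha=0.1, beta=0.05):
    """Discrete joint energy: flow residual at the contour points +
    along-edge motion smoothness + internal/external snake energy of the
    displaced contour. Ix, Iy, It, ext are image-sized arrays; pts is
    (n, 2) in (row, col); u, v are per-point column/row velocities."""
    r = pts[:, 0].astype(int)
    c = pts[:, 1].astype(int)
    # Optical flow constraint residual sampled on the edge points
    flow_res = lam2 * ((Ix[r, c] * u + Iy[r, c] * v + It[r, c])**2).sum()
    # Motion smoothness along the edge only
    smooth = lam1 * ((np.diff(u)**2).sum() + (np.diff(v)**2).sum())
    # Snake terms evaluated on the displaced contour at t + dt
    moved = pts + np.stack([v, u], axis=1)
    vs = np.diff(moved, axis=0)
    vss = np.diff(moved, n=2, axis=0)
    e_int = 0.5 * (alpha * (vs**2).sum() + beta * (vss**2).sum())
    mi = moved.round().astype(int)
    mi[:, 0] = mi[:, 0].clip(0, ext.shape[0] - 1)
    mi[:, 1] = mi[:, 1].clip(0, ext.shape[1] - 1)
    e_ext = ext[mi[:, 0], mi[:, 1]].sum()
    return flow_res + smooth + e_int + e_ext
```

Minimizing such a sum jointly over the per-point velocities is what couples the prediction (optical flow) with the segmentation (snake) instead of running them as two separate stages.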

C. Numerical Solution

To get the numerical solution, we first discretize the contour into n points and represent them in spatial coordinates. As a result, we get the contour point set {(x_i, y_i) | i = 1, …, n}, where the subscript i denotes the point index on the contour. Then we denote the velocity of point i as (u_i, v_i) and form the velocity vector accordingly. Recalling (14), we can know the location of a contour point at t + Δt with the knowledge of its location at t and the corresponding velocity vector. By bringing (14) into (15) and performing the numerical differentiation, we get the discretized version of (15) as follows:

(16)

where

(17)

The time variable is omitted, as all variables appearing in (17) are at the same time instant. Applying the Euler equations with respect to u_i, we have

(18)

We can get n equations in the form of (18) and write these equations in matrix form as follows:

(19)

where U = (u₁, …, u_n)ᵀ and V = (v₁, …, v_n)ᵀ are the velocity vectors. Denote I = (I₁, …, I_n)ᵀ as the intensity vector of the contour points; I_x, I_y, and I_t are defined in a similar way

then

(20)

The coefficient terms are n × n matrices while the right-hand side is an n × 1 vector. Applying the gradient descent method, a middle equation for U is obtained

(21)

(22)

where k is the iteration number and τ is a step size.



Rearranging (22), we have

(23)

Similarly, applying the Euler equations with respect to v_i, another middle equation for V is obtained

(24)

Combining (23) and (24), we have

(25)

where , , .

Equation (25) can be solved by matrix inversion

(26)

Applying (26) iteratively until the convergence condition is satisfied, the tracking task is achieved.

Here is the threshold used in this paper. It is observed that 5–40 iterations are needed for this threshold.
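Although the matrices in (25) and (26) depend on energy terms not reproduced above, the overall procedure has the classic semi-implicit snake form: invert the (constant) system matrix once, then repeatedly apply it to the current external term until the update falls below the threshold. A minimal sketch under that assumption, with a hypothetical `external_force` standing in for the image-dependent right-hand side:

```python
import numpy as np

def snake_iterate(x0, A, external_force, gamma=1.0, eps=1e-6, max_iter=200):
    """Generic semi-implicit contour update in the spirit of (26):
    x_{k+1} = (A + gamma*I)^{-1} (gamma*x_k + f(x_k)),
    iterated until ||x_{k+1} - x_k|| < eps (the convergence test of the text)."""
    n = len(x0)
    inv = np.linalg.inv(A + gamma * np.eye(n))  # invert once, reuse every step
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = inv @ (gamma * x + external_force(x))
        if np.linalg.norm(x_new - x) < eps:
            return x_new, k
        x = x_new
    return x, max_iter
```

With A set to the banded smoothness/stiffness matrix of the snake, an iteration budget of a few dozen steps is consistent with the 5–40 iterations reported above.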

D. Pyloric Tracking

As the targets to be tracked are unclosed edges, the boundaries should be treated carefully, and boundary information is needed. Prediction should be applied to estimate the motion of the terminal points, and the weight parameters on the terminal points need to be changed accordingly (following the unclosed snake method). We used the LK method for the prediction because of its fast processing speed and good accuracy. A series of window sizes, ranging from three to seventeen, was tested on ultrasonic images. It was observed that as the window size increased, the tracking performance of LK first improved gradually and then decreased when the window became too large. It was also observed that a larger window size may bring more robustness against noise. According to the experimental results, we chose the optimal LK window size of eleven by eleven. Remeshing is required to accommodate the nonrigid deformation: it deletes vertices that are too close to each other (distance less than half the threshold) and interpolates new vertices between vertices that are too far from each other (distance larger than twice the threshold). The threshold is set to three pixels in this paper. Before computation, each image sequence was preprocessed by convolution with a Gaussian filter, which helps decrease the intensity fluctuation and improves the estimation of the intensity derivatives [47].
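The remeshing rule above can be sketched as a single pass over the ordered contour points; the function name and the one-pass simplification are ours, not the paper's:

```python
import numpy as np

def remesh(points, threshold=3.0):
    """One pass of the remeshing rule from the text: drop a vertex closer than
    threshold/2 to the previously kept vertex, and insert a midpoint before a
    vertex farther than 2*threshold from it. `points` is an (n, 2) array of
    vertices ordered along the (unclosed) contour."""
    kept = [points[0]]
    for p in points[1:]:
        d = np.linalg.norm(p - kept[-1])
        if d < threshold / 2:        # too close: merge into the previous vertex
            continue
        if d > 2 * threshold:        # too far: interpolate a midpoint first
            kept.append((kept[-1] + p) / 2)
        kept.append(p)
    return np.array(kept)
```

In practice the pass can be repeated until no pair of neighboring vertices violates either bound.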

IV. EXPERIMENTS AND RESULTS

In this section, we compare our method (JPS) with the method proposed by Jayabalan et al. [39] (denoted JAY for clarity), the GVF snake, and the optical flow methods (HS, LK) in synthetic and real ultrasonic image sequences.

Fig. 2. (a) Echogenicity model. (b) Spatial motion model. (c), (d) First and 18th frame of the synthetic ultrasonic sequence.

We use the following four metrics to evaluate the results of the compared methods: Hausdorff distance (HD), average distance (AD) [42], mean edge distance (MED) [43], and edge curvature (EC). The first three are distance metrics, which compare the output of the methods with the reference contours and indicate the difference between the two curves; the smaller these distance metrics, the better the tracking accuracy. The last one quantitatively describes the smoothness of the contours: the contour is smoother if EC is smaller.

Let two sets of points be obtained by sampling the estimated contour and the reference contour. The smallest distance from a point to a curve is defined as

(27)

The Hausdorff distance between the two sets is the largest distance from a point to the other set

(28)

The average distance (AD) between the two sets is defined by

(29)

MED is defined as follows:

(30)

Curvature is defined pointwise along the contour; EC is the average curvature of the edge points

(31)
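The three distance metrics can be sketched directly from sampled point sets. Since (27)–(30) are not fully reproduced above, the symmetric forms below follow one common convention for HD and AD [42], with MED taken as the one-sided mean distance to the reference [43]; EC is omitted because the curvature formula of (31) is elided:

```python
import numpy as np

def _min_dists(A, B):
    """For each point in A, the smallest Euclidean distance to any point of B
    (eq. (27) with the curve sampled as a point set). A: (n, 2), B: (m, 2)."""
    diff = A[:, None, :] - B[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

def hausdorff(A, B):
    """HD: the largest point-to-set distance, taken over both directions."""
    return max(_min_dists(A, B).max(), _min_dists(B, A).max())

def average_distance(A, B):
    """AD: the mean point-to-set distance, symmetrized over both sets."""
    return 0.5 * (_min_dists(A, B).mean() + _min_dists(B, A).mean())

def mean_edge_distance(est, ref):
    """MED: mean distance from the estimated contour points to the reference."""
    return _min_dists(est, ref).mean()
```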

It should be noticed that in this case there exist two targets; we take the average of each metric over the two targets for conciseness. As the targets are unclosed and it is difficult to tell exactly where the terminals should be located, it is not appropriate to compare the tracking results with the gold line directly. To avoid estimating the terminals, we only considered the edge points within the pyloric region and chose the edge on the pyloric boundary accordingly. The pyloric region can simply be taken as the common region of all the contours and the corresponding gold line.

In experimental pyloric tracking, it is observed that when the smallest distance from a point to the gold line is larger than five pixels, the target has very likely been lost. As a result, the edge points should stay within this distance error, especially the ones in the closest region.

Fig. 3. Tracking comparison in synthetic images. (a)–(d) Results of the 9th, 19th, 29th, 39th frame, respectively. The red lines denote the targets; the yellow lines denote the tracking results.

A. Comparison in Synthetic Images

A sequence of synthetic images was created to test the performance of the compared methods. We utilized the method proposed by Yu et al. [44] to generate log-compressed synthetic images. It is assumed that the imaging system has a linear and space-invariant point spread function (PSF). The simulated ultrasonic system has a transducer frequency of 4 MHz, with PSF dimensions of 0.378 mm in the axial and lateral directions. The resulting sample size was 0.27 mm/pixel. An example of the ultrasonic image simulation is shown in Fig. 2. Two semicircles were used to represent the upper and lower walls of the pyloric region. We also created an extra edge near the lower wall to introduce interference and make the situation more complicated.

The pyloric motion is observed to be very complex: it is not spatially homogeneous and differs at every instant of the motion cycle. We followed the design idea of Ledesma-Carbayo et al. [45] and propose the simplified motion model shown in (32), which can be described analytically as a separable model in time and space

(32)

where the first factor is the spatial term and the second is the temporal term; the arguments are the frame time and the spatial coordinate. We created the spatial term as shown in Fig. 2(b): the axial component is maximum on the target edge and decreases exponentially as the distance to the edge increases. An equally increasing lateral component, ranging from 0.4 to 0.6 pixels, is imposed along the rightward direction. Unlike the simulation of the carotid and the heart, for which existing temporal models can be used, no temporal model has been proposed for pyloric motion. As a result, we suggested a simplified temporal model defined over the number of frames in a cycle of the movement. Here, each individual point has a sinusoidally varying velocity in time. This motion model can simulate the nonrigid pyloric closure and departure. The maximum displacement in each image is less than two pixels; the choice of such displacements is derived from a rough estimation of pyloric tissue motion.

TABLE I
QUANTITATIVE COMPARISON IN SYNTHETIC SEQUENCE

Fig. 4. AD of each frame in a synthetic sequence.

We created a sequence of ultrasonic images according to the motion model as follows. First, an initial scatter map was generated from the echogenicity model shown in Fig. 2(a). The known motion field was then applied to the scatter map. The ultrasonic image sequence was computed by applying the simulation process to the scatter map sequence frame by frame. Two examples of the synthetic sequence are given in Fig. 2(c) and (d). We generated a sequence of 40 frames simulating one cycle of the pyloric movement. We also corrupted the sequence with different levels of additive Gaussian noise to explore the behavior of the compared methods under controlled noise [45]. The Gaussian noise increases the uncertainty of the image intensity, making the imaging condition more complicated.

From the qualitative frame-by-frame comparison, it was observed that JPS performed the best tracking of the upper and lower walls; its results were smooth and on the targets all the time. When tracking the upper wall, the result of JAY was observed to be attracted to some nearby interference. The GVF snake was easily attracted by the interference edge and could

hardly keep consistent with the targets. The HS and LK methods were able to track the pyloric movement; however, their results could hardly stay smooth during the tracking process. Examples are given in Fig. 3, which shows the tracking results of the 9th, 19th, 29th, and 39th frames, respectively.

Fig. 5. AD of different methods for different SNRs of the synthetic images.

TABLE II
PARAMETERS USED IN THE EXPERIMENT

Table I gives the quantitative metrics of the compared methods in a synthetic sequence. JPS achieved the minimum distance metrics, which indicated its good correlation with the

targets. The EC of JPS, JAY, and the GVF snake were small, which meant their results were smooth. On the other hand, the EC of LK and HS were relatively large, indicating that their results were jagged. The quantitative metrics were in accordance with the qualitative observations.

Fig. 6. Tracking comparison in four ultrasonic image sequences. (a)–(d) Last frame from different sequences. The red lines denote the gold line and the yellow lines denote the tracking results.

Fig. 4 shows the frame-by-frame change of AD in one

sequence. It was observed that JPS always achieved a low AD (smaller than 1.2 pixels) in each frame, which indicated that the contours were always close to the targets and that the JPS method was able to perform good tracking. This experiment also demonstrated that the JPS method outperforms the others in tracking accuracy, especially as the number of processed frames increases.

To test the robustness of the methods, the compared methods were applied to sequences with SNRs varying from 43.50 to 7.66 dB. The robustness of the methods in the presence of additive noise is illustrated in Fig. 5. The average AD of the JPS method over these sequences was 0.9 pixels, while the averages for LK, HS, JAY, and the GVF snake were 1.3, 1.4, 1.1, and 3.2 pixels, respectively. JPS kept the smallest AD among these sequences. This experiment demonstrated the robustness of JPS in the case of a simple ultrasonic imaging model.
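The controlled-noise experiment can be reproduced by choosing the noise variance from a target SNR. The paper does not state its exact SNR definition; the sketch below assumes the common power-ratio convention SNR = 10 log10(P_signal / P_noise):

```python
import numpy as np

def corrupt_with_snr(image, snr_db, seed=None):
    """Add zero-mean Gaussian noise so the corrupted image has approximately
    the requested SNR in dB (signal power over noise power)."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(np.asarray(image, dtype=float) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return image + rng.normal(0.0, np.sqrt(p_noise), np.shape(image))
```

Sweeping `snr_db` over a range such as 43.50 down to 7.66 dB yields a family of sequences like the one used for Fig. 5.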

B. Comparison in Real Images

The real ultrasonic image sequences were obtained from a B-mode ultrasonic video from Xinhua Hospital, affiliated to Shanghai Jiaotong University School of Medicine. The video was captured at the level of the transpyloric plane by a Siemens SEQUOIA 512. The ultrasonic probe operated at 4.0 MHz and the frame rate was 27 frames/s. The video was saved in .avi format for offline analysis. The frames were originally 576 × 768 pixels in size, with an image resolution of 0.27 mm/pixel. To remove trivial information such as labels and to reduce the computation time, each frame was cropped to 380 × 480 pixels around the region of interest before image processing.

TABLE III
QUANTITATIVE METRICS COMPARISON IN ELEVEN SEQUENCES

The details of the parameters used in the real-image experiments

are listed in Table II. The parameters of the tracking methods were fixed across the sequences.

As no ground truth was available in the real ultrasound image sequences, we obtained the validation measures by comparison with manual tracking produced by experts. Four experienced experts in ultrasonic image processing were invited to delineate the target curves manually. The gold lines were generated by averaging the experts' results using the method proposed by Chalana et al. [42].

Eleven ultrasonic image sequences picked from different stages of the pyloric motion were used here. To observe the accumulation error of each method, the size of each sequence was chosen to be 100 frames, which was long enough for error accumulation and made the differences among the compared methods obvious. The tracking procedure needs only one initial contour model for the whole image sequence, and all the compared methods shared the same initial contour, which was given manually in the first frame. We evaluated the tracking results of the different methods in the last frame of each sequence. The pictures in Fig. 6 come from the last frames of four sequences and show the tracking results of the different methods together with the gold line. From the qualitative observation, the JPS method showed good tracking performance: its results were close to the gold line and stayed smooth. JAY was less consistent with the gold line than JPS. It was observed that JAY and HS suffered a severe deviation, as shown in Fig. 6(b). The results of the GVF snake were easily trapped by the interference edge and could not stay consistent with the gold line in most cases. The results of the LK and HS methods were jagged in some sequences.

Table III gives the metrics of the compared tracking methods

with mean errors and standard deviations. From the table, we can see that JPS achieved the minimum HD, which indicates that the resulting edges of JPS had the least deviation from the gold line. Achieving the minimum AD and MED indicates that the results of JPS have good correlation with the gold line. It was observed that in most cases the biggest deviations of JPS occurred near the terminals, while the other parts of the contour stayed close to the gold line, especially in the target region, which we define as the closest region between the two edges and use to locate the place where DGR is detected. The optical flow methods (HS and LK) had large EC, and their results were jagged due to different levels of error accumulation. JAY had smooth results with a small EC; however, its distance metrics were larger than those of JPS, which indicates that its tracking accuracy was worse. The GVF snake performed the worst among the compared methods, with the largest distance metrics.

TABLE IV
STATISTICAL SIGNIFICANCE BETWEEN JPS AND THE OTHER METHODS

We used the standard statistical analysis tool SPSS (Statistical

Product and Service Solutions) to compute the statistical significance of these results with the paired Student's t-test. This is a reasonable choice because the variances of the error measures are similar regardless of the methodology, and the error distributions are roughly Gaussian. Details of the p-values between JPS and the other methods are given in Table IV. In most cases the p-values are less than 0.05; the exception is MED, for which the p-value with respect to LK is 0.106. Recall that the p-value indicates whether the averages of two sample sets differ significantly (the null hypothesis is that the means of two normal distributions are equal). We can conclude that the JPS method differs with statistical significance from the other methods in terms of HD, AD, and EC. In terms of MED, JPS has statistical significance with respect to HS, JAY, and the GVF snake, while there is no statistical significance between JPS and LK.

To get better insight into the tracking process, the experts gave the gold line every five frames in one sequence. We analyzed the results of the compared methods according to the gold line. Frame-by-frame checking is not necessary due to the slow motion of the pylorus. Several snapshots of the tracking process are shown in Fig. 7. The tracking results of the 19th, 49th, 79th, and 99th frames are given, together with the gold line. Similar tracking performance could be observed, and JPS completed the tracking task better than the others. The results of JPS were quite close to the gold line while staying smooth.
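The paper ran the paired Student's t-test in SPSS; the statistic itself is simple to sketch. For eleven sequences the test has n − 1 = 10 degrees of freedom, so the two-tailed 5% critical value is about 2.228. The function below computes the paired t statistic from two per-sequence error vectors (the inputs here are illustrative, not the paper's measurements):

```python
import math

def paired_t_statistic(a, b):
    """Paired (dependent-samples) Student's t statistic:
    t = mean(d) / (std(d, ddof=1) / sqrt(n)) with d_i = a_i - b_i.
    The samples are paired because both methods are scored on the
    same sequences."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # unbiased variance
    return mean / math.sqrt(var / n)
```

A |t| above the critical value (equivalently, p < 0.05) rejects the equal-means null hypothesis, matching the criterion used for Table IV.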


Fig. 7. Tracking comparison in an ultrasonic image sequence. (a)–(d) Results of the 19th, 49th, 79th, and 99th frame, respectively. The red lines denote the gold line; the yellow lines denote the tracking results.

As the number of frames increased, JPS always kept consistent with the gold line, with little error accumulation.

The evaluation results in one sequence are given in Table V,

providing more detailed tracking performance evaluations of each method. JPS achieved the minimum distance metrics, as expected. The performances of LK and HS were similar: their results were close to the gold line, but contour smoothness could hardly be obtained. JAY had much smoother results; however, its tracking accuracy was lower than that of JPS. The GVF snake achieved the worst tracking accuracy; in many frames its result could be considered a tracking failure, when a failure is defined by a pixel-distance threshold.

The tracking methods were implemented in Matlab and were

executed on a 1.79-GHz Intel Core 2 Duo CPU. Table VI gives the time cost (mean ± std) of each method. The JPS method improves the tracking accuracy, while the increase in time cost is not significant; this cost is acceptable for the offline processing of a large number of frames.

Some suggestions on parameter selection are given below.

When the tracked target may undergo sharp deformation, the velocity smoothness constraint should be set to a smaller value (determined experimentally) to allow for larger deformation, and the weight of the optical flow term should be increased accordingly. When no sharp deformation exists, this parameter can be set to a larger value (again determined experimentally) to help keep the contour smooth, and the corresponding weight can be set to a smaller value to improve the edge-detecting ability.

V. DISCUSSION

It has been demonstrated that the JPS method can achieve good tracking performance for nonrigid motion targets. The tracked edges correlate with the true edges as determined by the experts. The JPS method is robust to noise and to the variable motions of the target, while the other compared methods may exhibit a lack of smoothness, poor ability to track nonrigid motion, or sensitivity to noise. For example, the optical flow methods (HS and LK) can perform good tracking in most of the sequences where the influence of noise is not severe and the motion amplitude of the tracked target is small, as shown in Fig. 6(d). However, they are sensitive to noise (Fig. 5) and produce larger errors when estimating faster movement. There is little spatial constraint among their location estimations, and different levels of error accumulation can make the results jagged; such error accumulation can be seen in the results of Fig. 6(b). The JAY method, though it obtains much smoother results by applying the snake, cannot address the error accumulation problem efficiently. In noisy ultrasonic image sequences, when a good initialization is not available, the tracking accuracy of JAY decreases, especially when tracking irregularly shaped objects. The application of the snake may even enlarge the error and distract the contour from the target, as shown in Fig. 6(c). It can also be observed that the contour of JAY may fail to converge to the target edge when a local minimum exists [Fig. 3(b)]. The GVF snake is quite easily trapped by interference edges and shows the worst tracking accuracy.

TABLE V
QUANTITATIVE METRICS COMPARISON IN ONE SEQUENCE

TABLE VI
TIME COST COMPARISON

It is observed that the JPS method shows a similar tracking

performance to the optical flow methods in many sequences where the environmental noise is not severe, so that the optical flow methods can also achieve good estimation. However, the results of JPS are smoother than those of the optical flow methods, which makes JPS well suited to tracking nonrigid deformable edges in ultrasonic images. JPS properly handles the tracking of nonrigid motion targets in noisy ultrasonic images by considering spatio-temporal information: the estimation of the targets' motion rests not only on the intensity constraint of optical flow, but also on the spatial connection relationships among the resulting targets and the features of the targets, which makes the estimation more accurate. To track targets with large inter-frame motion, warping is suggested; in pyloric tracking, however, it is not necessary.

A drawback of this tracking method is its sensitivity to the initial location: a good initialization is required for successful tracking. It is suggested to perform the initialization in frames where the edges are distinguishable.

As the gradient of intensity is not the best feature for the edges in pyloric ultrasonic images, more robust edge features should be utilized to help the edge detection. Improvements are underway.

VI. CONCLUSION

In this work, an edge-based optical flow method, which combines a localized optical flow method with an active contour, is proposed to track the nonrigid motion of the pylorus in ultrasonic image sequences. The motion estimation is based on spatio-temporal information, which reduces the accumulation error of optical flow by also considering the resulting edge information. The proposed method achieves more stable tracking performance than the other tracking methods. The experiments show that the proposed method handles nonrigid deformable edge tracking properly and is robust to noise.

REFERENCES

[1] M. Fein, J. Maroske, and K. H. Fuchs, "Importance of duodenogastric reflux in gastro-oesophageal reflux disease," Br. J. Surg., vol. 93, no. 12, pp. 1475–1482, Dec. 2006.

[2] J. Fujimura, K. Haruma, J. Hata, H. Yamanaka, K. Sumii, and G. Kajiyama, "Quantitation of duodenogastric reflux and antral motility by color Doppler ultrasonography: Study in healthy volunteers and patients with gastric ulcer," Scand. J. Gastroenterol., vol. 29, no. 10, pp. 897–902, Oct. 1994.

[3] P. M. King, R. D. Adam, A. Pryde, W. N. McDicken, and R. C. Heading, "Relationships of human antroduodenal motility and transpyloric fluid movement: Non-invasive observations with real-time ultrasound," Gut, vol. 25, pp. 1384–1391, 1984.

[4] A. Belaid, D. Boukerroui, Y. Maingourd, and J. F. Lerallut, "Implicit active contours for ultrasound images segmentation driven by phase information and local maximum likelihood," in Proc. 2011 IEEE Int. Symp. Biomed. Imag.: From Nano to Macro, 2011, pp. 630–635.

[5] M. Rousson and N. Paragios, "Prior knowledge, level set representations & visual grouping," Int. J. Comput. Vis., vol. 76, no. 3, pp. 231–243, 2008.

[6] C. M. Gallippi, L. N. Bohs, M. Anderson, A. Congdon, and G. E. Trahey, "Lateral blood velocity measurement in the carotid artery via speckle tracking," in Proc. 2001 IEEE Ultrason. Symp., 2001, vol. 2, pp. 1451–1455.


[7] J. W. H. Korstanje, R. W. Selles, H. J. Stam, S. E. R. Hovius, and J. G. Bosch, "Development and validation of ultrasound speckle tracking to quantify tendon displacement," J. Biomechan., vol. 43, no. 7, pp. 1373–1379, May 2010.

[8] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artif. Intell., vol. 17, no. 1–3, pp. 185–203, Aug. 1981.

[9] M. Yaacobi, N. L. Cohen, and H. Guterman, "Simultaneous left atrium volume tracking from echocardiographic movies," in Proc. IEEE 25th Convention Electrical Electron. Eng. Israel, Dec. 2008, pp. 403–407.

[10] X. Yang and K. Murase, "A multi-scale phase-based optical flow method for motion tracking of left ventricle," in Proc. 2010 4th Int. Conf. Bioinformat. Biomed. Eng. (ICBBE), Jun. 2010, pp. 1–4.

[11] Q. Xu, R. J. Hamilton, R. A. Schowengerdt, B. Alexander, and S. B. Jiang, "Lung tumor tracking in fluoroscopic video based on optical flow," Med. Phys., vol. 35, no. 12, pp. 5351–5359, Nov. 2008.

[12] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," Int. J. Comput. Vis., vol. 1, no. 4, pp. 321–331, 1988.

[13] K. Somkantha, N. Theera-Umpon, and S. Auephanwiriyakul, "Boundary detection in medical images using edge following algorithm based on intensity gradient and texture gradient features," IEEE Trans. Biomed. Eng., vol. 58, no. 3, pp. 567–573, Mar. 2011.

[14] W. Fang, K. L. Chan, S. Fu, and S. M. Krishnan, "Incorporating temporal information into active contour method for detecting heart wall boundary from echocardiographic image sequence," Computerized Med. Imag. Graphics, vol. 32, no. 7, pp. 590–600, Oct. 2008.

[15] P. Lipson, A. Yuille, D. Okeefe, J. Cavanaugh, J. Taaffe, and D. Rosenthal, "Deformable templates for feature extraction from medical images," in Proc. 1st Eur. Conf. Comput. Vis., 1990, vol. 427/1990, pp. 413–417.

[16] J. Guerrero, S. E. Salcudean, J. A. McEwen, B. A. Masri, and S. Nicolaou, "Real-time vessel segmentation and tracking for ultrasound imaging applications," IEEE Trans. Med. Imag., vol. 26, no. 8, pp. 1079–1090, Aug. 2007.

[17] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models—Their training and application," Comput. Vis. Image Understand., vol. 61, no. 1, pp. 38–59, 1995.

[18] J. C. Nascimento and J. S. Marques, "Robust shape tracking with multiple models in ultrasound images," IEEE Trans. Image Process., vol. 17, no. 3, pp. 392–406, Mar. 2008.

[19] A. Roussos, A. Katsamanis, and P. Maragos, "Tongue tracking in ultrasound images with active appearance models," in Proc. 2009 16th IEEE Int. Conf. Image Process. (ICIP), 2009, pp. 1733–1736.

[20] Kirthika and S. W. Foo, "Automated endocardial boundary detection using dynamic directional gradient vector flow snakes," in Proc. Int. Conf. Biomed. Pharmaceutical Eng. (ICBPE), Dec. 2006, pp. 62–66.

[21] K. Riha and I. Potucek, "The sequential detection of artery sectional area using optical flow technique," in Proc. 8th WSEAS Int. Conf. Circuits, Syst., Electron., Control Signal Process., 2009, pp. 222–226.

[22] A. R. Mansouri, "Region tracking via level set PDEs without motion computation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 947–961, Jul. 2002.

[23] A. Eslami, M. Jahed, and T. Preusser, "Joint edge detection and motion estimation of cardiac MR image sequence by a phase field method," Comput. Biol. Med., vol. 40, no. 1, pp. 21–28, Jan. 2010.

[24] J. Stoitsis, S. Golemati, S. Kendros, and K. S. Nikita, "Automated detection of the carotid artery wall in B-mode ultrasound images using active contours initialized by the Hough transform," in Proc. 30th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Aug. 2008, pp. 3146–3149.

[25] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. Imag. Understand. Workshop, 1981, pp. 121–130.

[26] M. J. Black and P. Anandan, "A framework for the robust estimation of optical flow," in Proc. 4th Int. Conf. Comput. Vis., Berlin, Germany, May 1993, pp. 231–236.

[27] T. Gautama and M. A. V. Hulle, "A phase-based approach to the estimation of the optical flow field using spatial filtering," IEEE Trans. Neural Netw., vol. 13, no. 5, pp. 1127–1136, Sep. 2002.

[28] K. Rajpoot, V. Grau, J. A. Noble, H. Becher, and C. Szmigielski, "The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking," Med. Image Anal., vol. 15, no. 4, pp. 514–528, Aug. 2011.

[29] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, "Performance of optical flow techniques," Int. J. Comput. Vis., vol. 12, no. 1, pp. 43–77, 1994.

[30] T. Brox, C. Bregler, and J. Malik, "Large displacement optical flow," in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2009, pp. 41–48.

[31] S. Liao and B. Liu, "An edge-based approach to improve optical flow algorithm," in Proc. 3rd Int. Conf. Adv. Comput. Theory Eng. (ICACTE), Aug. 2010, vol. 6, pp. 45–51.

[32] A. Bruhn, J. Weickert, and C. Schnörr, "Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods," Int. J. Comput. Vis., vol. 61, no. 3, pp. 211–231, 2005.

[33] D. Fleet and A. D. Jepson, "Computation of component image velocity from local phase information," Int. J. Comput. Vis., vol. 5, no. 1, pp. 77–104, 1990.

[34] M. J. Black and P. Anandan, "The robust estimation of multiple motions: Parametric and piecewise smooth flow fields," Comput. Vis. Image Understand., vol. 63, no. 1, pp. 75–104, Jan. 1996.

[35] L. D. Cohen, "On active contour models and balloons," CVGIP: Image Understand., vol. 53, no. 2, pp. 211–218, Mar. 1991.

[36] C. Y. Xu and J. L. Prince, "Snakes, shapes, and gradient vector flow," IEEE Trans. Image Process., vol. 7, no. 3, pp. 359–369, Mar. 1998.

[37] C. Y. Xu and J. L. Prince, "Generalized gradient vector flow external forces for active contours," Signal Process., vol. 71, no. 2, pp. 131–139, Dec. 1998.

[38] B. Li and S. T. Acton, "Active contour external force using vector field convolution for image segmentation," IEEE Trans. Image Process., vol. 16, no. 8, pp. 2096–2106, Aug. 2007.

[39] E. Jayabalan, A. Krishnan, and R. Pugazendi, "Non rigid object tracking in aerial videos by combined snake and optical flow technique," in Comput. Graphics, Imag. Visualizat., Aug. 2007, pp. 388–396.

[40] E. Jayabalan and A. Krishnan, "Object detection and tracking in videos using snake and optical flow approach," Commun. Comput. Inf. Sci., vol. 142, no. 2, pp. 299–301, 2011.

[41] K. Takaya, "Tracking a video object with the active contour (snake) predicted by the optical flow," in Canad. Conf. Electrical Comput. Eng. (CCECE), May 2008, pp. 369–372.

[42] V. Chalana and Y. Kim, "A methodology for evaluation of boundary detection algorithms on medical images," IEEE Trans. Med. Imag., vol. 16, no. 5, pp. 642–652, Oct. 1997.

[43] A. Belaid, D. Boukerroui, Y. Maingourd, and J. F. Lerallut, "Phase-based level set segmentation of ultrasound images," IEEE Trans. Inf. Technol. Biomed., vol. 15, no. 1, pp. 138–147, Jan. 2011.

[44] Y. Yu and S. T. Acton, "Speckle reducing anisotropic diffusion," IEEE Trans. Image Process., vol. 11, no. 11, pp. 1260–1270, Nov. 2002.

[45] M. J. Ledesma-Carbayo, J. Kybic, A. Santos, M. Sühling, P. Hunziker, and M. Unser, "Spatio-temporal nonrigid registration for ultrasound cardiac motion estimation," IEEE Trans. Med. Imag., vol. 24, no. 9, pp. 1113–1125, Sep. 2005.

[46] J. W. Lu, C. G. Wang, J. C. Ren, X. H. Yua, and Y. Lu, "Detecting puny target in infrared images based on edge characteristics," in Intell. Syst. Design Appl., Oct. 2006, pp. 283–288.

[47] P. Baraldi, A. Sarti, C. Lamberti, A. Prandini, and F. Sgallari, "Evaluation of differential optical flow techniques on synthesized echo images," IEEE Trans. Biomed. Eng., vol. 43, no. 3, pp. 259–272, Mar. 1996.