
A Novel High Breakdown M-estimator for Visual Data Segmentation

Reza Hoseinnezhad
Swinburne University of Technology
Victoria, Australia
[email protected]

Alireza Bab-Hadiashar
Swinburne University of Technology
Victoria, Australia
[email protected]

Abstract

Most robust estimators designed to solve computer vision problems use random sampling to optimize their objective functions. Since the random sampling process is patently blind and computationally cumbersome, other searches of the parameter space, using techniques such as the Nelder-Mead Simplex or gradient search (particularly in combination with PbM-estimators), have also been proposed. In this paper, we introduce a novel high breakdown M-estimator with a differentiable objective function for which a closed-form updating formula is mathematically derived (similar to redescending M-estimators) and used to search the parameter space. The resulting M-estimator has a high breakdown point and is called the High Breakdown M-estimator (HBM). We show that this objective function can be optimized using an iterative reweighted least squares regression similar to redescending M-estimators. The closed mathematical form of HBM and its guaranteed stability, combined with its high breakdown point and fast convergence speed, make this estimator an outstanding choice for segmentation of multi-structural data. A number of experiments using both synthetic and real data have been conducted to benchmark the performance of the proposed estimator, both in terms of accurate segmentation of numerous structures in the data and in terms of convergence speed. Moreover, the computational times of HBM, ASSC, MSSE and PbM are compared on the same computing platform, and the results show that HBM significantly outperforms the aforementioned techniques.

1. Introduction

Since the introduction of RANSAC [3] a quarter of a century ago, several high breakdown robust estimators have been specially designed to solve computer vision problems (e.g. RESC [14], ALKS [6], MSSE [1], ASSC [13] and Projection based M-estimators [2, 8, 9, 10], also called PbM). All such estimators include three main steps:


* Optimization: Searching the parameter space to find the parameter values which optimize the objective function of the estimator.

* Segmentation: Extracting an inlier-outlier dichotomy using the parameters given by the searching process.

* Refinement: Updating the parameter estimates with a least-squares fit to the extracted inliers.

The robust estimators reported in computer vision so far mainly differ in their objective functions and the way they extract an inlier-outlier dichotomy.

For the objective function optimization, almost all robust estimators (except PbM) use random sampling. The main reason is that the objective functions used in those high breakdown robust estimators are non-differentiable, so optimization methods based on gradients and iterative reweighted least-squares regressions (as in redescending M-estimators¹) cannot be employed.

Random sampling is a random search scheme in the sample space for the best elemental subset (p-tuple) that gives rise to the parameter values which optimize the objective function. An elemental subset is a subset of p data samples (p is the dimension of the parameter space) that defines a full rank system of equations from which a model candidate can be computed. If N elemental subsets are randomly selected, then with a probability of:

$P_{\mathrm{success}} = 1 - \left[1 - \epsilon^p\right]^N \qquad (1)$

at least one of them is a good elemental subset (i.e. all its samples belong to the inlier structure), where $\epsilon$ is the ratio of inlier samples. Thus, for a given success probability $P_{\mathrm{success}}$, at least:

$N \geq \frac{\log(1 - P_{\mathrm{success}})}{\log(1 - \epsilon^p)} \qquad (2)$

¹It is important to note that redescending M-estimators do not have high breakdown points and cannot be efficiently employed to solve visual data segmentation problems, particularly with several data structures.


elemental subsets should be randomly examined.
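To make equation (2) concrete, the sketch below (an illustration of ours, not code from the paper) evaluates the bound; the example values ε = 0.2 and p = 4 are assumptions chosen to mimic a multi-structural problem with a small worst-case inlier ratio.

```python
import math

def required_samples(eps: float, p: int, p_success: float = 0.99) -> int:
    """Lower bound (2) on the number N of random elemental subsets."""
    return math.ceil(math.log(1.0 - p_success) / math.log(1.0 - eps ** p))

# With 20% inliers and a 4-dimensional parameter space, thousands of
# p-tuples must be examined to reach a 99% success probability:
print(required_samples(eps=0.20, p=4))   # -> 2876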

Two important observations are highlighted here. Firstly, the value of N given by equation (2) is a lower bound, as it implies that any elemental subset which contains only inliers provides a suitable model candidate. This assumption is not always true, especially if the measurement noise is significant [ ]. Secondly, for cases involving multi-structural data, the above minimum number of random p-tuples can be substantial, and the computational load of segmentation would be too high for real-time (or near real-time) applications. It is important to note that the inlier ratio is not known a priori, and in equation (2), ε should be taken as the smallest possible ratio of inliers in the application.

The number of required elemental subsets can be significantly reduced when information regarding the reliability of the data points is available (either provided by the user or derived from the data through an auxiliary estimation scheme). Guided sampling techniques choose the elemental subsets by directing the samples toward the points having higher probabilities of being inliers [11, 12]. However, in most visual data segmentation problems, sufficiently reliable information to guide the sampling is not available [ ].

An alternative approach to random sampling, proposed as an optimization strategy for PbM, is to use techniques like the Nelder-Mead Simplex search [ , ]. Simplex is a heuristic search technique and is highly sensitive to its initial search point in parameter space. Therefore, a substantial number of initializations is commonly required to guarantee that the global minimum (or maximum) of the objective function is found by the Simplex search. Subbarao and Meer [9, 10] have proposed using a local search (based on the first-order conjugate gradient method on the Grassmann manifold of parameter vectors $\theta \in \mathbb{R}^p$ satisfying $\theta^T \theta = 1$) in the neighborhood of each elemental subset. However, since the objective function of the PbM estimator is not differentiable, the dependence of the $\alpha$ parameter (in the common errors-in-variables regression model, as explained later in this paper) on the parameter vector $\theta$ has to be ignored. Therefore, the procedure of local optimization needs to be repeated for several elemental subsets.

In this paper, we introduce a new high breakdown estimator with a differentiable objective function that can be optimized through an iterative reweighted least squares regression scheme. Since redescending M-estimators employ similar continuous updating formulas for their search scheme, we call the new technique the High Breakdown M-estimator, or HBM estimator for short. Our studies show that the proposed technique can segment structures with population ratios of less than 20% significantly faster than other modern high breakdown techniques.

2. High Breakdown M-estimator

Consider a vision problem that involves segmentation of several data structures. From each structure, $n_i$ measurement samples denoted by $\{y_i;\ i = 1, \ldots, n_i\}$ are available, and each sample $y_i \in \mathbb{R}^p$ is corrupted with independent and identically distributed (i.i.d.) noise:

$y_i = y_{i0} + \delta y_i; \quad \delta y_i \sim GI(0, \sigma^2 I_p) \qquad (3)$

where $y_{i0}$ is the true value of $y_i$, $GI(\cdot)$ stands for a general symmetric distribution of independent measurement noise samples, and $\sigma$ is the unknown scale of noise. Usually the noise distribution is assumed to be normal; however, the measurement noise does not necessarily have to be normally distributed. Indeed, characterizing the distribution by its first two central moments in equation (3) implies the normality assumption, as only a normal distribution can be uniquely characterized this way.

Each data structure can be modeled by the following linear errors-in-variables (EIV) regression model:

$y_{i0}^T \theta - \alpha = 0; \quad i = 1, \ldots, n_i \qquad (4)$

where $\theta \in \mathbb{R}^p$ and $\alpha$ are the model parameters yet to be estimated for each structure, and the following constraints are imposed to eliminate the ambiguity of the model parameters being defined up to a multiplicative constant:

$\|\theta\| = 1; \quad \alpha > 0. \qquad (5)$

Since the proposed HBM estimator does not calculate the $\theta$ and $\alpha$ estimates separately, we augment those parameters and rewrite the model as below:

$x_{i0}^T \Theta = 0; \quad i = 1, \ldots, n_i \qquad (6)$

where $x_{i0} = [1\ \ y_{i0}^T]^T$ and $\Theta = [\alpha\ \ {-\theta}^T]^T$. Thus, the measurements are denoted by $x_i = [1\ \ y_i^T]^T$ and we slightly modify the constraints (5) as shown below:

$\Theta(1) > 0; \quad \|\Theta\| = 1. \qquad (7)$

For a given parameter estimate $\hat\Theta$, each data sample $x_i$ corresponds to an algebraic distance $r_i = x_i^T \hat\Theta$. With traditional regression models, these distances are called residuals, and we also use this popular term in this paper. In the least k-th order statistics (LkOS) estimator, the objective function is the k-th order statistic of the squared residuals:

$J_{LkOS}(\Theta) = r_{k:n}^2 \qquad (8)$

where $n$ is the total number of available data samples. The order $k$ is given by $k = \lceil \epsilon n \rceil$, where $\epsilon$ is the minimum possible ratio of inliers in the application. The breakdown point of the LkOS estimator can be higher than 50%. More precisely,



provided there is a moderate number of samples in the target structure, the breakdown point is $(1 - \epsilon) \times 100\%$. The objective function (8) is usually optimized using random sampling.
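For concreteness, the LkOS objective (8) can be evaluated as in the sketch below (our own illustration; the names are assumptions), given the augmented data matrix X whose rows are the samples $x_i^T$ of equation (6):

```python
import numpy as np

def lkos_objective(Theta, X, eps):
    """k-th order statistic of the squared residuals (equation (8)),
    with k = ceil(eps * n) and rows of X being the augmented samples."""
    r = X @ Theta                     # algebraic distances r_i = x_i^T Theta
    z = np.sort(r ** 2)               # squared residuals in ascending order
    k = int(np.ceil(eps * len(z)))
    return z[k - 1]                   # z_{k:n} (1-based order statistic)
```

Random sampling optimizes this value over candidates generated from elemental subsets; it is this non-smooth sample quantile that HBM replaces with a differentiable functional form.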

In the proposed HBM estimator, the functional form of the k-th order statistic of the squared residuals is chosen as the objective function. For a given parameter estimate $\hat\Theta$, the squared residuals $\{z_i = r_i^2;\ i = 1, \ldots, n\}$ have a statistical distribution that can be estimated by the following kernel density estimator:

$\hat f_\Theta(z) = \frac{1}{nh} \sum_{i=1}^{n} K\left(\frac{z - z_i}{h}\right) \qquad (9)$

where $K(\cdot)$ is a kernel function with the following properties:

$\int_{-\infty}^{+\infty} K(u)\, du = 1 \qquad (10)$

$K(u) = K(-u) > 0 \qquad (11)$

$K(u_1) > K(u_2) \quad \text{for } |u_1| < |u_2| \qquad (12)$
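As a minimal sketch of ours (assuming a Gaussian kernel, which satisfies (10)-(12); function names are our own), the density estimate (9) can be written as:

```python
import numpy as np

def gaussian_kernel(u):
    """K(u) = exp(-u^2/2)/sqrt(2*pi): unit integral, symmetric, and
    decreasing in |u|, i.e. properties (10)-(12)."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def density_estimate(z_eval, z, h):
    """Kernel density estimate (9) of the squared residuals z_i,
    evaluated at the points z_eval, with bandwidth h (equation (13))."""
    u = (np.asarray(z_eval)[:, None] - np.asarray(z)[None, :]) / h
    return gaussian_kernel(u).mean(axis=1) / h
```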

and $h$ is the kernel bandwidth. The value of the bandwidth has a weak influence on the result of the M-estimation [ ], and we use the following formula to calculate it, based on a median of absolute differences (MAD) estimate [ , , ]:

$h = n^{-1/5}\, \mathrm{med}_i \left| z_i - \mathrm{med}_j\, z_j \right|. \qquad (13)$

The objective function of the HBM estimator is given by:

$J_{HBM}(\Theta) = \hat z_\epsilon = F_\Theta^{-1}(\epsilon) \qquad (14)$

where $F_\Theta^{-1}(\cdot)$ is the inverse cumulative distribution function (inverse CDF) of the squared residuals. The CDF of the squared residuals is the following differentiable function of $\Theta$:

$F_\Theta(z) = \frac{1}{nh} \sum_{i=1}^{n} \int_{-\infty}^{z} K\left(\frac{\alpha - z_i}{h}\right) d\alpha. \qquad (15)$

Therefore, the inverse CDF is also differentiable and can be optimized by solving the following equation:

$\left. \frac{\partial F_\Theta^{-1}(\epsilon)}{\partial \Theta} \right|_{\Theta = \hat\Theta} = \left. \frac{\partial \hat z_\epsilon}{\partial \Theta} \right|_{\Theta = \hat\Theta} = 0. \qquad (16)$

For any parameter estimate we have:

$\epsilon = F_\Theta(J_{LkOS}) = F_\Theta(\hat z_\epsilon). \qquad (17)$

By differentiating both sides of (17), the following equation is derived:

$0 = \frac{1}{nh} \sum_{i=1}^{n} \left[ K\left(\frac{\hat z_\epsilon - z_i}{h}\right) \frac{\partial \hat z_\epsilon}{\partial \Theta} - \frac{1}{h} \frac{\partial z_i}{\partial \Theta} \int_{-\infty}^{\hat z_\epsilon} K'\left(\frac{\alpha - z_i}{h}\right) d\alpha \right]. \qquad (18)$

To optimize the objective function, the condition (16) should be satisfied. Thus, in the above equation we replace the term $\partial \hat z_\epsilon / \partial \Theta$ with zero:

$0 = \frac{1}{nh^2} \sum_{i=1}^{n} \frac{\partial z_i}{\partial \Theta} \int_{-\infty}^{\hat z_\epsilon} K'\left(\frac{\alpha - z_i}{h}\right) d\alpha = \frac{1}{nh} \sum_{i=1}^{n} \frac{\partial z_i}{\partial \Theta} \left[ K\left(\frac{\hat z_\epsilon - z_i}{h}\right) - K(-\infty) \right]. \qquad (19)$

The dependence of the bandwidth on the parameter estimates has been ignored in the above derivations, as the bandwidth given by equation (13) does not substantially vary with $\Theta$ ($\partial h / \partial \Theta$ is small), and the size of the bandwidth (and therefore its variations) does not substantially affect the performance of the estimator [ , , , , ]. From the kernel properties (10)-(12) we have $K(-\infty) = 0$.

By replacing the term $\partial z_i / \partial \Theta$ with $2 r_i x_i$, the following equation is derived:

$\sum_{i=1}^{n} K\left(\frac{\hat z_\epsilon - r_i^2}{h}\right) r_i\, x_i = 0. \qquad (20)$

As is the case for redescending M-estimators, the above equation can be iteratively solved by updating the parameters through iterative reweighted least squares regression on the data with the following weights:

$w_i = K\left(\frac{\hat z_\epsilon - r_i^2}{h}\right). \qquad (21)$

Provided there is a moderate number of data samples, the functional form of the k-th order statistic, $\hat z_\epsilon$, can be approximated with its sample value:

$\hat z_\epsilon \approx z_{k:n}. \qquad (22)$

This is equivalent to an M-estimator with the objective function $\sum_i \rho\left(\frac{z_{k:n} - z_i}{h}\right)$, where $\rho(\cdot)$ is proportional to the integral of the chosen kernel function. For example, for a Gaussian kernel $K(u) = \exp(-u^2/2)$, $\rho(u)$ is proportional to $\Phi(u)$, where $\Phi(\cdot)$ is the CDF of the standard normal distribution. Figure 1 shows the $\rho(\cdot)$ function plotted versus the sample residuals. It is important to note that, in contrast to redescending M-estimators, $\rho(\cdot)$ does not merely depend on $r$ but also on the k-th order statistic of all the squared residuals.


Figure 1. The ρ(r) functions for Gaussian kernels (HBM and LkOS curves), plotted versus the residuals r.
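To make the smoothed objective tangible, here is a minimal sketch of ours (not the authors' code) that evaluates $J_{HBM} = F_\Theta^{-1}(\epsilon)$ for a Gaussian kernel: each integral in (15) becomes a standard normal CDF, and the quantile is obtained by root finding. The bracketing interval and the use of SciPy are our assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def hbm_objective(Theta, X, eps):
    """J_HBM(Theta) = F^{-1}(eps): the eps-quantile of the kernel-smoothed
    CDF of the squared residuals (equations (13)-(15), Gaussian kernel)."""
    z = (X @ Theta) ** 2                        # squared residuals z_i
    h = len(z) ** (-0.2) * np.median(np.abs(z - np.median(z)))  # eq. (13)
    F = lambda t: np.mean(norm.cdf((t - z) / h))                # eq. (15)
    lo, hi = z.min() - 5 * h, z.max() + 5 * h   # F(lo) ~ 0, F(hi) ~ 1
    return brentq(lambda t: F(t) - eps, lo, hi)
```

Because F is smooth and monotone in its argument, the inversion is well behaved; the differentiability in Θ is what permits the closed-form updating formula derived above.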

HBM is therefore not a redescending M-estimator in the strict sense. However, this distinction is the main reason for its high breakdown point: as shown in the derivation above, the iterative optimization of its objective function minimizes the functional form of the k-th order statistic of the squared residuals, which gives the estimator a high breakdown point.

Like other M-estimators, the reweighted least squares procedure may be trapped in a local minimum of the objective function and should therefore be repeated with different random initializations. However, the required number of repetitions is substantially smaller than the number of random samples required by random sampling techniques. A similar idea of multiple random initializations is also implemented in the PbM estimator (with conjugate gradient search [9, 10]), but its required number of initializations is several times larger than in HBM, as evidenced by our simulation results (see Section 3). It is important to note that the algorithm shown in Figure 2 is only the optimization part of the estimator; the inlier-outlier dichotomy is extracted afterwards using the parameter estimates it returns.

Figure 2. Optimization algorithm of the HBM estimator. The legible steps of the listing read:
1- Repeat the following steps N times:
2- Choose an elemental subset (p-tuple) by random sampling [...]
5- Sort the residuals, find ẑ_k and calculate the weights w_i using equation (21); reorder the data according to their corresponding residuals [...]
6- Calculate the singular value decomposition of the weighted data matrix [...]
7- [Take the vector] corresponding to the smallest [singular] value, with sign s = +1 for v_1 > 0 and -1 [otherwise]
8- [If the change in the estimate] is larger than a given threshold, then [iterate again]; otherwise save Θ and its [k-th order statistic]
9- The parameter estimate is the saved estimate that gives the smallest k-th order statistic of the squared residuals.
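The core reweighted iteration of Figure 2 can be sketched as below. This is our interpretation of equations (13) and (20)-(22) together with the listing above, not the authors' code: the Gaussian kernel gives the weights (21), the weighted homogeneous least-squares step is solved by SVD, and the sign convention enforces constraint (7); the iteration cap, tolerance and bandwidth floor are assumptions.

```python
import numpy as np

def hbm_irls(X, eps, Theta0, max_iter=50, tol=1e-8):
    """Reweighted least-squares iteration for HBM (one random start).
    X: n x (p+1) augmented data matrix with rows [1, y_i^T]."""
    n = X.shape[0]
    k = int(np.ceil(eps * n))
    Theta = Theta0 / np.linalg.norm(Theta0)
    for _ in range(max_iter):
        z = (X @ Theta) ** 2                                   # squared residuals
        h = n ** (-0.2) * np.median(np.abs(z - np.median(z)))  # eq. (13)
        h = max(h, 1e-12)                                      # numerical guard (ours)
        z_k = np.sort(z)[k - 1]                                # sample quantile, eq. (22)
        w = np.exp(-0.5 * ((z_k - z) / h) ** 2)                # weights (21), Gaussian K
        # weighted homogeneous LS: minimize ||sqrt(w) * (X @ Theta)|| subject to
        # ||Theta|| = 1  ->  smallest right singular vector
        _, _, Vt = np.linalg.svd(np.sqrt(w)[:, None] * X)
        Theta_new = Vt[-1]
        if Theta_new[0] < 0:                                   # enforce Theta(1) > 0, (7)
            Theta_new = -Theta_new
        if np.linalg.norm(Theta_new - Theta) < tol:
            break
        Theta = Theta_new
    return Theta

def random_start(X, rng):
    """Initial estimate from a random elemental subset (step 2 of Figure 2):
    the nullspace direction of p rows of X."""
    p = X.shape[1] - 1
    rows = X[rng.choice(len(X), size=p, replace=False)]
    return np.linalg.svd(rows)[2][-1]
```

As in steps 1 and 9 of Figure 2, this routine would be restarted from several random elemental subsets, keeping the estimate with the smallest k-th order statistic of the squared residuals.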

Figure 3. A snapshot of the synthetic data simulations, and the number of random initializations required by HBM and PbM.

3. Experimental Results

We have examined the performance of the HBM estimator through extensive comparative case studies involving synthetic data and real 3-D range data segmentations. In the synthetic experiments, numerous two-dimensional datasets were generated, each containing a linear structure with a given population ratio among gross outliers uniformly distributed in [-1, +1] x [-4, +4]. Then for each dataset, the linear structure was segmented using HBM and PbM (with the conjugate gradient local search scheme) and the success rate (the ratio of successful segmentations) was recorded. The segmentation results showed a high level of consistency, even in cases involving close data structures. A snapshot of the synthetic data and the segmented line is shown in Figure 3.
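The synthetic setup just described can be reproduced along the following lines; this is a sketch under stated assumptions (the true line, the inlier noise level and the sample counts are ours; the paper specifies only the outlier region and the population ratios):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n=500, inlier_ratio=0.2, sigma=0.05):
    """One linear structure among gross outliers uniform in [-1,1] x [-4,4]."""
    n_in = int(inlier_ratio * n)
    x = rng.uniform(-1.0, 1.0, n_in)
    y = 2.0 * x + 0.5 + rng.normal(0.0, sigma, n_in)  # assumed true line
    inliers = np.column_stack([x, y])
    outliers = np.column_stack([rng.uniform(-1.0, 1.0, n - n_in),
                                rng.uniform(-4.0, 4.0, n - n_in)])
    return np.vstack([inliers, outliers])

data = make_dataset()
X = np.hstack([np.ones((len(data), 1)), data])  # augmented rows [1, y_i^T]
```

A run is counted as successful when the estimated line matches the true structure, and the success rate is averaged over many such datasets.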


Figure 5. (a) A picture of the scene and the results of range segmentation by ASSC (random sampling), PbM (random sampling), MSSE (random sampling) and PbM (conjugate gradient).


Figure 4. The success rate of segmentation versus the number of initialisations for the HBM and PbM estimators.

We also compared the processing time of HBM with those of MSSE, ASSC and PbM for the range data segmentation task. For comparison purposes, all computations were performed on the same computing platform. Each estimator was run once using random sampling and once with the conjugate gradient search, and the measured processing times are listed in Table 1. In all experiments, HBM was considerably faster.

Table 1. Comparative processing times of the estimators for range data segmentation.

Figure 5 shows a picture of the scene and the range segmentation results. We also compared HBM with PbM and MSSE for solving the fundamental matrix estimation problem. The cost of computation by the RANSAC-based techniques is also


substantially higher, as a large number of random samples is required to solve this problem. The results of our study again show that HBM is substantially faster than the other two techniques, while the segmentation performances of the three estimators (in terms of small estimation error and correct number of inliers) are comparable.

4. Conclusions

A computationally efficient high breakdown robust estimator, called the HBM estimator, was introduced for solving multi-structural data segmentation problems encountered in various computer vision applications. The HBM estimator has a novel differentiable objective function for which a closed-form updating formula can be mathematically derived (similar to redescending M-estimators) and used to optimize its objective function. The resulting M-estimator has a high breakdown point, as it minimizes the functional form of the k-th smallest squared residual. We have mathematically proved that optimization of this objective function can be achieved by solving a weighted least squares problem. Thus, instead of minimizing the single k-th order statistic of the squared residuals (as in LkOS estimators), this estimator minimizes a smoothed window of the residuals around the k-th order statistic of the squared residuals.

The closed mathematical form of HBM and its guaranteed stability (theoretically supported by the stability properties of redescending M-estimators), combined with its high breakdown point (evidenced by the LkOS and ALKS estimators) and its fast convergence speed, make this estimator an excellent choice for solving the problem of segmenting multi-structural data.

A number of experiments, using both synthetic and real data, have been conducted to benchmark the performance of the proposed estimator. The computational times of HBM, MSSE, ASSC and PbM were compared using the same computing platform (CPU, memory, software, etc.), and the results show that HBM outperforms the aforementioned techniques.

Acknowledgement

This research was supported by the Australian Research Council and Pacifica Group Technologies (PGT) through the ARC Linkage Project grant LP0561923.

References

[1] A. Bab-Hadiashar and D. Suter. Robust segmentation of visual data using ranked unbiased scale estimator. ROBOTICA, 17:649-660, 1999.

[2] H. Chen and P. Meer. Robust regression with Projection based M-estimators. In Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV'03), pages 878-885, Nice, France, 2003.

[3] M. Fischler and R. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. Assoc. Comp. Mach., 24(6):381-395, 1981.

[4] R. Hoseinnezhad and A. Bab-Hadiashar. High breakdown M-estimator: A fast robust estimator for computer vision applications. Technical report, Swinburne University of Technology, Victoria, Australia, July 2007.

[5] R. Hoseinnezhad, A. Bab-Hadiashar, and D. Suter. Finite sample bias of robust scale estimators in computer vision problems. In Lecture Notes in Computer Science (LNCS), No. 4291 (International Symposium on Visual Computing, ISVC'06), pages 445-454, Lake Tahoe, Nevada, USA, November 2006. Springer.

[6] K. Lee, P. Meer, and R. Park. Robust adaptive segmentation of range images. IEEE Trans. PAMI, 20(2):200-205, 1998.

[7] P. Meer. Robust techniques for computer vision. In G. Medioni and S. Kang, editors, Emerging Topics in Computer Vision, chapter 3, pages 107-190. Prentice Hall, 2004.

[8] R. Subbarao and P. Meer. Heteroscedastic Projection based M-estimators. In Workshop on Empirical Evaluation Methods in Computer Vision (in conjunction with CVPR'05), pages 38-44, San Diego, CA, 2005.

[9] R. Subbarao and P. Meer. Beyond RANSAC: User independent robust regression. In Workshop on 25 Years of RANSAC (in conjunction with CVPR'06), pages 101-108, New York, NY, 2006.

[10] R. Subbarao and P. Meer. Subspace estimation using Projection based M-estimators over Grassman manifolds. In 9th European Conference on Computer Vision (ECCV'06), pages 301-312, Graz, Austria, May 2006. Springer.

[11] B. Tordoff and D. Murray. Guided sampling and consensus for motion estimation. In 7th European Conference on Computer Vision (ECCV'02), pages 82-96, Copenhagen, Denmark, May 2002.

[12] P. Torr and C. Davidson. IMPSAC: Synthesis of importance sampling and random sample consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(3):354-364, 2003.

[13] H. Wang and D. Suter. Robust adaptive-scale parametric model estimation for computer vision. IEEE Trans. PAMI, 26(11):1459-1474, 2004.

[14] X. Yu, T. Bui, and A. Krzyzak. Robust estimation for range image segmentation and reconstruction. IEEE Trans. PAMI, 16(5):530-538, 1994.

