


ELSEVIER Signal Processing 41 (1995) 309-338

Optimal stack filters under rank selection and structural constraints

P. Kuosmanen *, J. Astola
Signal Processing Laboratory, Tampere University of Technology, P.O. Box 553, FIN-33101 Tampere, Finland

Received 21 January 1994; revised 8 July 1994

Abstract

A new expression for the moments about the origin of the output of stack filtered data is derived in this paper. This expression is based on the A and M vectors, containing the well-known coefficients A_i of stack filters and the numbers M(Φ, γ, N, i) defined in this paper. The noise attenuation capability of any stack filter can now be calculated using the A and M vector parameters in the new expression. The connection between the coefficients A_i and the so-called rank selection probabilities r_i is reviewed, and new constraints for stack filters, called rank selection constraints, are defined. The major contribution of the paper is the development of an extension of the optimality theory for stack filters presented by Yang et al. and Yin. This theory is based on the expression for the moments about the origin of the output, and combines noise attenuation, rank selection constraints, and structural constraints on the filter's behaviour. For self-dual stack filters it is proved that the optimal stack filter which achieves the best noise attenuation subject to rank selection and structural constraints can usually be obtained in closed form. An algorithm for finding this form is given, and several design examples in which this algorithm is used are presented.


* Corresponding author. Tel: +358-31-3161580; Fax: +358-31-3161857; E-mail: [email protected]

0165-1684/95/$09.50 © 1995 Elsevier Science B.V. All rights reserved. SSDI 0165-1684(94)00106-5



Keywords: Stack filters; Rank selection constraints; Structural constraints; Optimization

1. Introduction

Rank-order-based filters have received considerable attention during recent years, and many different types of them have been developed. Stack filters are a large class of nonlinear rank-order-based filters including all rank order filters, standard median and weighted median filters, some morphological filters, and a number of other nonlinear filters [18].

Basically, there are two different approaches to the design of rank-order-based filters. One might be called the estimation approach, while the other might be called the structural approach. The first approach relies on statistical descriptions of the filters. Recently, Prasad et al. [16] defined the concept of rank selection probabilities for stack filters. If they are known, the output distribution and output moments for i.i.d. input follow easily. Therefore, rank selection probabilities describe the statistical behaviour of stack filters completely, and by imposing constraints on the rank selection probabilities, the filters satisfying these constraints will have the desired output distribution characteristics. In [10] a simple connection is derived between certain quantities (called A_i), from which the output distribution follows as well, and the rank selection probabilities. In the structural approach, the goal is to find a filter which preserves those shapes that are part of the signal while removing those that are part of the noise.

In this paper, a new approach to finding optimal stack filters is developed. It combines the estimation and structural approaches: the aim is to seek an optimal stack filter which produces the best noise attenuation and, at the same time, satisfies given rank selection and structural constraints. Based on an expression for the moments about the origin of the output of stack filters presented in this paper, we prove that the optimal stack filter, which achieves the best noise attenuation subject to rank selection and structural constraints, can usually be obtained in closed form. We also present an algorithm for finding this closed form, together with several design examples illustrating its use.

This paper is organized as follows. Section 2 briefly reviews the basic concepts of stack filters. In Section 3, the connection between rank selection probabilities and the coefficients A_i is established. Section 4 is devoted to the statistical analysis of stack filters: a new expression for the moments about the origin of the output of stack filters is presented, based on the coefficients A_i and the numbers M(Φ, γ, N, i) defined in this paper, and the properties of the numbers M(Φ, γ, N, i) are analyzed. The theory of optimal stack filtering under rank selection and structural constraints, together with design examples, is developed in Section 5. Section 6 contains some conclusions.


2. Stack filters

Stack filters are a class of nonlinear filters, first introduced by Wendt et al. [18]. Stack filters perform well in many situations where linear filters fail, and they have been used in many applications, cf. e.g. [15].

Definition 2.1. The real unit step function, denoted by u(·), is defined by

u(α) = 1 if α ≥ 0, and u(α) = 0 otherwise, (2.1)

and the thresholding function at level β, denoted by T_β(·), is defined by

T_β(α) = 1 if α ≥ β, and T_β(α) = 0 otherwise. (2.2)

The class of stack filters is relatively large and includes all ranked-order operators. The key to the analysis of stack filters comes from their definition by threshold decomposition, which we now take into consideration [18,19].

Consider a vector x = (X_1, X_2, ..., X_N), where X_i ∈ {0, 1, ..., M−1}. The threshold decomposition of x amounts to decomposing it into M−1 binary vectors x^1, x^2, ..., x^{M−1} defined by

x_k^m = T_m(X_k) = 1 if X_k ≥ m, and x_k^m = 0 otherwise. (2.3)

Thus, each binary vector x^m is obtained by thresholding the input vector at level m, for 1 ≤ m ≤ M−1. The k-th element x_k^m of a binary vector takes on the value 1 whenever the element X_k of the input vector is greater than or equal to the level m.

It is important to note that this thresholding process can also be applied to all vectors that are quantized to a finite number of arbitrary levels.

The original multi-valued vector can be reconstructed from its binary vectors by

x = Σ_{m=1}^{M−1} x^m, (2.4)

or, equivalently,

X_k = Σ_{m=1}^{M−1} x_k^m. (2.5)

A Boolean function f(·) is called a positive Boolean function if it can be written as a Boolean expression that contains only uncomplemented input variables. For a positive Boolean function f(·) it holds that

f(x) ≥ f(y) whenever x ≥ y. (2.6)

Property (2.6) of binary vectors is called the stacking property.

Definition 2.2. The stack filter S_f(·) is defined by a positive Boolean function f(·) as follows:

S_f(x) = Σ_{m=1}^{M−1} f(x^m). (2.7)


Thus, filtering a vector x with a stack filter S_g(·) based on the positive Boolean function g(·) is equivalent to decomposing x into binary vectors x^m, 1 ≤ m ≤ M−1, by thresholding, filtering each threshold level with the binary filter g(·), and reconstructing the output vector as the sum (2.7).

By denoting

g^{−1}(1) = {x ∈ {0,1}^N | g(x) = 1}, (2.8)

we notice by using (2.6) that for positive Boolean functions it holds that if x ∈ g^{−1}(1) and y ≥ x, then y ∈ g^{−1}(1).

The integer-domain stack filter corresponding to a positive Boolean function can be expressed by replacing AND and OR with MIN and MAX, respectively [18]. For example, the three-point median filter over integer variables X, Y and Z is a stack filter defined by the positive Boolean function f(x, y, z) = xy + xz + yz, i.e.,

MED{X, Y, Z} = MAX{MIN{X, Y}, MIN{X, Z}, MIN{Y, Z}}. (2.9)
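The threshold-decomposition mechanism of (2.3)-(2.9) can be sketched in a few lines of Python (an illustration of ours, not part of the paper; all function names are ours):

```python
from itertools import product

def threshold_decompose(x, M):
    """Decompose an M-valued vector into M-1 binary vectors (Eq. (2.3))."""
    return [[1 if xi >= m else 0 for xi in x] for m in range(1, M)]

def stack_filter(x, f, M):
    """Stack filter (2.7): threshold, filter each level with f, and sum."""
    return sum(f(xm) for xm in threshold_decompose(x, M))

# Positive Boolean function of the 3-point median: f(x, y, z) = xy + xz + yz.
def f_med3(b):
    x, y, z = b
    return (x & y) | (x & z) | (y & z)

# The stack filter defined by f_med3 reproduces the integer median (2.9)
# on every 3-point input over the levels {0, ..., M-1}.
M = 8
for x in product(range(M), repeat=3):
    assert stack_filter(x, f_med3, M) == sorted(x)[1]
```

The exhaustive check at the end exercises exactly the equivalence stated after (2.7): thresholding, binary filtering and summing give the same result as the integer-domain MIN/MAX expression.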

The output distribution of a stack filter can be expressed using the following proposition from Yli-Harja et al. [19].

Proposition 2.3. Let the input values X_b, b ∈ B, in the window B of a stack filter S_f(·) be independent, identically distributed random variables having a common distribution function Φ(t), and let the positive Boolean function that defines the stack filter be f(·). Then the distribution function Ψ(t) of the output of the stack filter S_f(·) is

Ψ(t) = Σ_{i=0}^{|B|} A_i (1 − Φ(t))^i Φ(t)^{|B|−i}, (2.10)

where the numbers A_i are defined by

A_i = |{x : f(x) = 0, w_H(x) = i}| (2.11)

and w_H(x) denotes the number of 1's in x, i.e., its Hamming weight.

The analysis of the statistical properties of stack filters in this paper rests on the numbers A_i and Proposition 2.3. The following proposition gives bounds for the numbers A_i.

Proposition 2.4. The numbers A_i satisfy

0 ≤ A_i ≤ C(N, i), i = 1, ..., N, (2.12)

where C(N, i) denotes the binomial coefficient "N choose i".
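For small window sizes the numbers A_i of (2.11) can be computed by direct enumeration of {0,1}^N; a minimal sketch (Python, ours; the 3-point median is used as the test case):

```python
from itertools import product
from math import comb

def a_coefficients(f, N):
    """A_i = |{x in {0,1}^N : f(x) = 0, w_H(x) = i}|  (Eq. (2.11))."""
    A = [0] * (N + 1)
    for x in product((0, 1), repeat=N):
        if f(x) == 0:
            A[sum(x)] += 1          # sum(x) is the Hamming weight w_H(x)
    return A

def f_med3(b):
    x, y, z = b
    return (x & y) | (x & z) | (y & z)

A = a_coefficients(f_med3, 3)
print(A)   # [1, 3, 0, 0] for the 3-point median
assert all(0 <= A[i] <= comb(3, i) for i in range(4))        # bound (2.12)
assert all(A[i] + A[3 - i] == comb(3, i) for i in range(4))  # self-duality, cf. (2.15)
```

The last assertion anticipates Proposition 2.7 below: the median is self-dual, so its coefficients satisfy A_i + A_{N−i} = C(N, i).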

Definition 2.5. The dual g^D(x) of a Boolean function g(x) is defined by the relation

g^D(x) = ¬g(x̄), (2.13)

where x̄ denotes the componentwise complement of x and ¬ denotes complementation.

Definition 2.6. A Boolean function g(x) is self-dual iff

g(x) = g^D(x). (2.14)

A stack filter which is defined by a self-dual positive Boolean function is called a self-dual filter.

The following proposition establishes an interesting and useful interrelation between the coefficients A_i of self-dual stack filters.


Proposition 2.7. For a self-dual stack filter with window size N, we have

A_i + A_{N−i} = C(N, i), i = 1, ..., N. (2.15)

Proof. According to the properties of the duals of Boolean functions, it is true that if x ∈ {u : f(u) = 1, w_H(u) = i}, then x̄ ∉ {u : f(u) = 1, w_H(u) = N − i}, and vice versa. This gives (2.15). □

3. Rank selection probabilities and stack filters

In the following, the inputs of the stack filter are assumed to be i.i.d. with probability density function φ(·) and distribution function Φ(·). Prasad et al. [16] defined so-called rank selection probabilities as follows.

Definition 3.1. Let Y be the output of a stack filter defined by a positive Boolean function f(·), and let X_(1) ≤ X_(2) ≤ ... ≤ X_(N) denote the order statistics of the samples in the filter window. Then the i-th Rank Selection Probability (RSP) is denoted by P[Y = X_(i)], 1 ≤ i ≤ N, and is the probability that the output Y equals X_(i).

Definition 3.2. The Rank Selection Probability Vector is the row vector r = (r_1, ..., r_N), where r_i = P[Y = X_(i)], 1 ≤ i ≤ N.

In [10] Kuosmanen et al. derived a simple connection between the coefficients A_i and the rank selection probabilities r_i. This connection is stated in the following proposition.

Proposition 3.3. The j-th rank selection probability of a stack filter with window size N is given by

r_j = A_{N−j}/C(N, N−j) − A_{N−j+1}/C(N, N−j+1), j = 1, ..., N. (3.1)

From Proposition 3.3 we obtain the following corollary.
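Formula (3.1) is straightforward to apply numerically; an illustrative Python helper (names ours, not from the paper), applied to the 3-point median, whose A vector is (1, 3, 0, 0):

```python
from math import comb

def rank_selection_probabilities(A, N):
    """r_j of Eq. (3.1); A = (A_0, ..., A_N)."""
    return [A[N - j] / comb(N, N - j) - A[N - j + 1] / comb(N, N - j + 1)
            for j in range(1, N + 1)]

# The 3-point median always outputs the middle sample, so r = (0, 1, 0):
r = rank_selection_probabilities((1, 3, 0, 0), 3)
print(r)   # [0.0, 1.0, 0.0]
```

The result matches the intuition that the median is the rank-2 selector in a window of 3.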

Corollary 3.4. The coefficients A_i of a stack filter S_f(·) of window size N and rank selection vector r = (r_1, ..., r_N) satisfy

A_{i+1} = C(N, i+1) [A_i/C(N, i) − r_{N−i}] = ((N−i)/(i+1)) A_i − C(N, i+1) r_{N−i}, i = 0, 1, ..., N−1. (3.2)

By using Corollary 3.4 successively we obtain the following corollary.

Corollary 3.5. The coefficients A_i of a stack filter S_f(·) of window size N and rank selection vector r = (r_1, ..., r_N) satisfy

A_{j+k} = C(N, j+k) [A_j/C(N, j) − Σ_{i=N−j−k+1}^{N−j} r_i], 1 ≤ k ≤ N−j. (3.3)

As r_{N−i} ≥ 0, we obtain from Corollary 3.4 the following inequality for the coefficients A_i.


Proposition 3.6. The coefficients A_i of a stack filter S_f(·) of window size N satisfy

A_{i+1} ≤ ((N−i)/(i+1)) A_i, i = 0, 1, ..., N−1. (3.4)

Since

(N−i)/(i+1) ≤ 1 when i ≥ (N−1)/2, (3.5)

Proposition 3.6 implies the following corollary.

Corollary 3.7. The coefficients A_i of a stack filter S_f(·) of window size N satisfy

A_{i+1} ≤ A_i when i ≥ (N−1)/2. (3.6)

As A_{j+k} ≥ 0, we obtain from Corollary 3.5 the following inequality for the coefficients A_i.

Corollary 3.8. The coefficients A_i of a stack filter S_f(·) of window size N and rank selection vector r = (r_1, ..., r_N) satisfy

C(N, j) Σ_{i=N−j−k+1}^{N−j} r_i ≤ A_j, 1 ≤ k ≤ N−j. (3.7)

In particular, when k = 1, we have

C(N, j) r_{N−j} ≤ A_j, (3.8)

and when k = N−j, we have

C(N, j) Σ_{i=1}^{N−j} r_i ≤ A_j. (3.9)

4. Output moments of stack filters

In this section we derive a method to calculate the output moments of stack filters by using the coefficients A_i.

Let the input values X_b, b ∈ B, in the window B (|B| = N) of a stack filter S_f(·) be independent, identically distributed random variables having a common distribution function Φ(t). Then the γ-th order moment about the origin of the output of a stack filter can be expressed as

μ'_γ = E{Y^γ} = Σ_{i=0}^{N−1} A_i M(Φ, γ, N, i), (4.1)

where

M(Φ, γ, N, i) = ∫_{−∞}^{∞} x^γ (d/dx)[(1 − Φ(x))^i Φ(x)^{N−i}] dx, i = 0, 1, ..., N−1. (4.2)


By using the output moments about the origin we easily obtain the output central moments, denoted by μ_γ = E{(Y − E{Y})^γ}; for example, the second-order central output moment equals

μ_2 = Σ_{i=0}^{N−1} A_i M(Φ, 2, N, i) − (Σ_{i=0}^{N−1} A_i M(Φ, 1, N, i))^2. (4.3)
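As a numerical illustration of (4.1)-(4.3) (a sketch of ours, not part of the paper; the integrals in (4.2) are approximated by trapezoidal quadrature for N(0,1) input, and the 3-point median with A = (1, 3, 0) is used as the example):

```python
from math import erf, exp, pi, sqrt

phi = lambda x: exp(-x * x / 2) / sqrt(2 * pi)   # N(0,1) density
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))     # N(0,1) distribution function

def M(gamma, N, i, lo=-8.0, hi=8.0, n=20000):
    """M(Phi, gamma, N, i) of Eq. (4.2), by trapezoidal integration."""
    h = (hi - lo) / n
    def g(x):  # x^gamma * d/dx[(1 - Phi)^i Phi^(N-i)]
        F, f = Phi(x), phi(x)
        d = f * ((N - i) * (1 - F) ** i * F ** (N - i - 1)
                 - (i * (1 - F) ** (i - 1) * F ** (N - i) if i else 0.0))
        return x ** gamma * d
    ys = [g(lo + k * h) for k in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Second central moment (4.3) of the 3-point median (A = (1, 3, 0)):
A = (1, 3, 0)
mean = sum(a * M(1, 3, i) for i, a in enumerate(A))
mu2 = sum(a * M(2, 3, i) for i, a in enumerate(A)) - mean ** 2
print(round(mu2, 4))   # ~0.4487, the variance of the median of 3 N(0,1) samples
```

The value agrees with the classical variance of the sample median of three standard normal variates, which corroborates (4.3).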

4.1. Noise attenuation of stack filters

The second-order central output moment is quite often used as a measure of the noise attenuation capability of a filter: it quantifies the spread of the output samples with respect to their mean value. Eq. (4.3) gives an expression for the second-order central output moment, in which M(·) is a function of the input distribution Φ, the window size N and the index i. In the following, the nature of the numbers M(Φ, γ, N, i) is studied. Henceforward we assume that the input distribution Φ(t) is symmetric with respect to its mean μ, which is assumed to exist. We also assume that the set Q = {t : φ(t) > 0} is a union of a countable number of disjoint intervals of positive measure. This means that if φ(t) ≠ 0, then there exists an interval I_t ⊂ (−∞, ∞) of positive measure such that t ∈ I_t and φ(u) > 0 for all u ∈ I_t, and that there are countably many intervals I_t. Without loss of generality, we assume

μ = 0. (4.4)

Therefore,

Φ(t) = 1 − Φ(−t), (4.5)

which implies

φ(t) = φ(−t). (4.6)

Proposition 4.1. The numbers M(Φ, γ, N, i) satisfy the recurrence formula

M(Φ, γ, N, i) = M(Φ, γ, N−1, i−1) − M(Φ, γ, N, i−1), 1 ≤ i < N, (4.7)

with initial values

M(Φ, γ, N, 0) = ∫_{−∞}^{∞} x^γ (d/dx)(Φ(x)^N) dx, 0 < N. (4.8)

Proof. The proof follows directly from Eq. (4.2) by substituting

(1 − Φ(x))^i Φ(x)^{N−i} = (1 − Φ(x))^{i−1} Φ(x)^{N−i} − (1 − Φ(x))^{i−1} Φ(x)^{N−i+1}. □ (4.9)

Proposition 4.2. The numbers M(Φ, γ, N, i) satisfy

M(Φ, γ, N, i) = Σ_{j=0}^{i} C(i, j) (−1)^{i−j} M(Φ, γ, N−j, 0), 0 ≤ i < N. (4.10)


Proof. Expanding (1 − Φ(x))^i by the binomial theorem,

M(Φ, γ, N, i) = ∫_{−∞}^{∞} x^γ (d/dx)[(1 − Φ(x))^i Φ(x)^{N−i}] dx
= Σ_{j=0}^{i} C(i, j) (−1)^{i−j} ∫_{−∞}^{∞} x^γ (d/dx)(Φ(x)^{N−j}) dx
= Σ_{j=0}^{i} C(i, j) (−1)^{i−j} M(Φ, γ, N−j, 0). □ (4.11)

It is clear that the formula for the numbers M(Φ, γ, N, i) given in (4.10) can also be found by solving the recurrence formula in (4.7). This is shown in the following remark.

Remark 4.3. A recurrence formula of the form L(n, i) = L(n−1, i−1) − L(n, i−1) can, for 0 ≤ i ≤ n, be recast into the form

L(n, i) = Σ_{j=0}^{i} C(i, j) (−1)^{i−j} L(n−j, 0). (4.12)

(For i > n, additional boundary terms involving the values L(0, ·) appear; this case is not needed here, since in (4.10) we have i < N.)

Proposition 4.4. The numbers M(Φ, γ, N, i) satisfy the following symmetry property:

M(Φ, γ, N, N−i) = (−1)^{γ+1} M(Φ, γ, N, i). (4.13)

Proof. Because by (4.5), Φ(−x) = 1 − Φ(x), we have

(1 − Φ(−x))^i Φ(−x)^{N−i} = Φ(x)^i (1 − Φ(x))^{N−i}. (4.14)

Thus,

M(Φ, γ, N, N−i) = ∫_{−∞}^{∞} x^γ (d/dx)[(1 − Φ(x))^{N−i} Φ(x)^i] dx
= ∫_{−∞}^{∞} x^γ (d/dx)[(1 − Φ(−x))^i Φ(−x)^{N−i}] dx
= (−1)^{γ+1} ∫_{−∞}^{∞} x^γ (d/dx)[(1 − Φ(x))^i Φ(x)^{N−i}] dx
= (−1)^{γ+1} M(Φ, γ, N, i), (4.15)

where the third equality follows from the substitution x → −x. □

F’roposition 4.5. Zf y is odd, then

M(@,y,N,i) {z 8 $ir=?sz (4.16)

Page 9: Optimal stack filters under rank selection and structural constraints

R Kuosnumen, J. Astola/Signal Processing 41 (1995) 309-338 317

and if y is even, then

=O ifi=N/2, >O ifi=OorN/2<i<N, < 0 othenvise.

(4.17)

Proof. It is enough to prove that Proposition 4.5 holds for 0 ≤ i ≤ N/2, because then it holds also for N/2 < i < N by Proposition 4.4. Let us denote

z_{N,i}(x) = (1 − Φ(x))^i Φ(x)^{N−i}, (4.18)

when (4.2) equals

M(Φ, γ, N, i) = ∫_{−∞}^{∞} x^γ (d/dx) z_{N,i}(x) dx. (4.19)

Assume first that i = 0. Then

M(Φ, γ, N, 0) = N ∫_{−∞}^{∞} x^γ Φ(x)^{N−1} φ(x) dx
= N ∫_{−∞}^{0} φ(x) x^γ Φ(x)^{N−1} dx + N ∫_{0}^{∞} φ(x) x^γ Φ(x)^{N−1} dx
= N ∫_{0}^{∞} φ(−x) (−x)^γ Φ(−x)^{N−1} dx + N ∫_{0}^{∞} φ(x) x^γ Φ(x)^{N−1} dx
= N ∫_{0}^{∞} φ(x) x^γ (Φ(x)^{N−1} + (−1)^γ Φ(−x)^{N−1}) dx. (4.20)

Since Φ(x) is an increasing function, it holds that

0 ≤ Φ(x)^{N−1} + (−1)^γ Φ(−x)^{N−1} for all x ≥ 0. (4.21)

Clearly,

0 ≤ φ(x) x^γ for all x ≥ 0. (4.22)

If 0 < φ(x) x^γ, 0 < x, then φ(x) > 0 and thus also 0 < Φ(x)^{N−1} + (−1)^γ Φ(−x)^{N−1}. Because of our assumptions on the function φ(x), there exists at least one interval I_k ⊂ (0, ∞) of positive measure where φ(x) > 0 for all x ∈ I_k; thus

M(Φ, γ, N, 0) > 0. (4.23)

When i > 0 we rewrite (4.19), using x^γ = ∫_0^{x^γ} dy, as

M(Φ, γ, N, i) = ∫_{−∞}^{0} (∫_{0}^{x^γ} dy) (d/dx) z_{N,i}(x) dx + ∫_{0}^{∞} (∫_{0}^{x^γ} dy) (d/dx) z_{N,i}(x) dx. (4.24)


Now, we obtain by changing the order of integration

∫_{−∞}^{0} (∫_{0}^{x^γ} dy) (d/dx) z_{N,i}(x) dx = (−1)^γ ∫_{0}^{∞} z_{N,i}(−y^{1/γ}) dy (4.25)

and

∫_{0}^{∞} (∫_{0}^{x^γ} dy) (d/dx) z_{N,i}(x) dx = −∫_{0}^{∞} z_{N,i}(y^{1/γ}) dy. (4.26)

This gives

M(Φ, γ, N, i) = ∫_{0}^{∞} ((−1)^γ z_{N,i}(−y^{1/γ}) − z_{N,i}(y^{1/γ})) dy. (4.27)

As Φ(x) is assumed to be symmetric,

z_{N,i}(−x) = z_{N,N−i}(x), (4.28)

and so

M(Φ, γ, N, i) = ∫_{0}^{∞} ((−1)^γ z_{N,N−i}(y^{1/γ}) − z_{N,i}(y^{1/γ})) dy = ∫_{0}^{∞} γ y^{γ−1} ((−1)^γ z_{N,N−i}(y) − z_{N,i}(y)) dy. (4.29)

Because z_{N,i}(x) ≥ 0 and z_{N,N−i}(x) ≥ 0 for all x ∈ ℝ, M(Φ, γ, N, i) ≤ 0 when γ is odd. Our assumptions imply that there exists at least one interval I_k ⊂ (0, ∞) of positive measure where φ(x) > 0 for all x ∈ I_k. This implies that for all members of I_k (except possibly its right endpoint) z_{N,i}(x) > 0 and z_{N,N−i}(x) > 0. From this we can conclude that M(Φ, γ, N, i) < 0 when γ is odd.

Assume then that γ is even. If i = N/2, then N−i = i and thus z_{N,i}(y) ≡ z_{N,N−i}(y), which implies M(Φ, γ, N, i) = 0. If i < N−i, we obtain

z_{N,N−i}(y) − z_{N,i}(y) = Φ(y)^i (1 − Φ(y))^i [(1 − Φ(y))^{N−2i} − Φ(y)^{N−2i}] ≤ 0, y ≥ 0, (4.30)

where the strict inequality holds on an interval of (0, ∞) of positive measure. Therefore M(Φ, γ, N, i) < 0. Similarly, if i > N−i, we obtain M(Φ, γ, N, i) > 0. □

Proposition 4.6. If γ is odd, then

M(Φ, γ, N, i) < M(Φ, γ, N, i+1), 0 < i < N/2 − 1,
M(Φ, γ, N, i) > M(Φ, γ, N, i+1), i = 0 or N/2 < i < N−1, (4.31)

and if γ is even, then

M(Φ, γ, N, i) < M(Φ, γ, N, i+1), 0 < i < N−1,
M(Φ, γ, N, i) > M(Φ, γ, N, i+1), i = 0, N−1. (4.32)

Proof. Again it is enough to prove Proposition 4.6 for 0 ≤ i < N/2, because then by Proposition 4.4 the assertion holds for N/2 ≤ i < N. We prove only the case when γ is even; the case of odd γ can be proven in a similar manner. If i = 0 we have by Proposition 4.5 that M(Φ, γ, N, i+1) < 0 < M(Φ, γ, N, i). Otherwise, by (4.29),

M(Φ, γ, N, i) − M(Φ, γ, N, i+1) = ∫_{0}^{∞} γ y^{γ−1} (z_{N,N−i}(y) − z_{N,i}(y) − z_{N,N−i−1}(y) + z_{N,i+1}(y)) dy, (4.33)

where

z_{N,N−i}(y) − z_{N,i}(y) − z_{N,N−i−1}(y) + z_{N,i+1}(y)
= (1 − Φ(y))^{N−i} Φ(y)^i − (1 − Φ(y))^i Φ(y)^{N−i} − (1 − Φ(y))^{N−i−1} Φ(y)^{i+1} + (1 − Φ(y))^{i+1} Φ(y)^{N−i−1}
= (1 − Φ(y))^{N−i−1} Φ(y)^i (1 − 2Φ(y)) + (1 − Φ(y))^i Φ(y)^{N−i−1} (1 − 2Φ(y))
= (1 − 2Φ(y)) [(1 − Φ(y))^{N−i−1} Φ(y)^i + (1 − Φ(y))^i Φ(y)^{N−i−1}]. (4.34)

Now, (1 − Φ(y))^{N−i−1} Φ(y)^i + (1 − Φ(y))^i Φ(y)^{N−i−1} ≥ 0 and 1 − 2Φ(y) ≤ 0 for all y ≥ 0. As there exists at least one interval I_k ⊆ (0, ∞) of positive measure where φ(x) > 0 for all x ∈ I_k, it holds that (1 − 2Φ(y)) [(1 − Φ(y))^{N−i−1} Φ(y)^i + (1 − Φ(y))^i Φ(y)^{N−i−1}] < 0 for all y ∈ I_k except possibly the right endpoint of I_k. Therefore, M(Φ, γ, N, i) − M(Φ, γ, N, i+1) < 0, i.e., M(Φ, γ, N, i) < M(Φ, γ, N, i+1). □

The following proposition gives a lower bound for M(Φ, 2, N, i+1) in terms of M(Φ, 2, N, i).

Proposition 4.7. Let Φ(t) be a distribution function such that its density function φ(t) satisfies the following conditions:
(i) φ(t) = φ(−t) for all t ∈ ℝ;
(ii) φ(t) is piecewise twice differentiable and its first derivative φ'(t) satisfies

φ'(t) ≥ 0 if t < 0, and φ'(t) ≤ 0 if t > 0. (4.35)

Then

M(Φ, 2, N, i+1) ≥ ((i+1)/(N−i)) M(Φ, 2, N, i) for all 0 < i < N−1. (4.36)

Proof. Because of our assumptions, for every t with φ(t) ≠ 0 there exists an interval I_t ⊂ (−∞, ∞) of positive measure such that t ∈ I_t and φ(u) > 0 for all u ∈ I_t. Let Q = {t : φ(t) > 0} = ∪_{k=1}^{∞} I_{t_k}. Then Φ^{−1}(t) is a piecewise strictly increasing function Φ^{−1}: [0, 1] → Q, i.e.,

(Φ^{−1})'(t) = 1/φ(Φ^{−1}(t)) > 0, t ∈ ]0, 1[. (4.37)

Now,

(N−i) M(Φ, 2, N, i+1) − (i+1) M(Φ, 2, N, i)
= ∫_{−∞}^{∞} x² (d/dx)[(N−i)(1 − Φ(x))^{i+1} Φ(x)^{N−i−1} − (i+1)(1 − Φ(x))^i Φ(x)^{N−i}] dx
= ∫_{0}^{1} (Φ^{−1}(y))² (d/dy)[(N−i)(1 − y)^{i+1} y^{N−i−1} − (i+1)(1 − y)^i y^{N−i}] dy
= ∫_{0}^{1} (Φ^{−1}(y))² (d²/dy²)[(1 − y)^{i+1} y^{N−i}] dy, (4.38)

where the second equality follows from the substitution y = Φ(x).

By denoting u(y) = (Φ^{−1}(y))² and f(y) = (1 − y)^{i+1} y^{N−i}, (4.38) can be rewritten as

(N−i) M(Φ, 2, N, i+1) − (i+1) M(Φ, 2, N, i)
= ∫_{0}^{1} u(y) f''(y) dy = u(y) f'(y) |_{0}^{1} − ∫_{0}^{1} u'(y) f'(y) dy
= −u'(y) f(y) |_{0}^{1} + ∫_{0}^{1} u''(y) f(y) dy = ∫_{0}^{1} u''(y) f(y) dy. (4.39)

By differentiating u(y) twice and substituting (4.37) we obtain

u''(y) = 2 (φ(Φ^{−1}(y)) − Φ^{−1}(y) φ'(Φ^{−1}(y))) / φ³(Φ^{−1}(y)). (4.40)

By substituting t = Φ^{−1}(y), (4.40) gives

u''(y) = 2 (φ(t) − t φ'(t)) / φ³(t). (4.41)

Now, φ'(t) ≥ 0 when t < 0 and φ'(t) ≤ 0 when t > 0. Thus, t φ'(t) ≤ 0. But as Φ^{−1}(y) is a mapping from [0, 1] to the set Q, φ(t) > 0 for all such t. This implies that u''(y) > 0 for all y ∈ [0, 1]. From this and the positivity of f(y) we can conclude that

∫_{0}^{1} u''(y) f(y) dy > 0. (4.42)

This completes our proof. □

When we write (4.36) in the form

M(Φ, 2, N, i+1) ≥ (C(N, i)/C(N, i+1)) M(Φ, 2, N, i) (4.43)

and notice that A_i ≤ C(N, i), we can observe that if the aim is to minimize the second-order moment about the origin, the smaller values of i play a more important role than the larger values of i.
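The sign pattern (4.17), the symmetry (4.13) and the bound (4.36) can all be checked numerically; an illustrative Python sketch of ours for N = 9 and N(0,1) input (the quadrature helper is an assumption, not the paper's method):

```python
from math import erf, exp, pi, sqrt

phi = lambda x: exp(-x * x / 2) / sqrt(2 * pi)   # N(0,1) density
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))     # N(0,1) distribution function

def M(gamma, N, i, lo=-8.0, hi=8.0, n=20000):
    """M(Phi, gamma, N, i) of Eq. (4.2), by trapezoidal integration."""
    h = (hi - lo) / n
    def g(x):
        F, f = Phi(x), phi(x)
        d = f * ((N - i) * (1 - F) ** i * F ** (N - i - 1)
                 - (i * (1 - F) ** (i - 1) * F ** (N - i) if i else 0.0))
        return x ** gamma * d
    ys = [g(lo + k * h) for k in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

N = 9
Ms = [M(2, N, i) for i in range(N)]
# Sign pattern (4.17), even gamma: M > 0 at i = 0, M < 0 for 0 < i < N/2,
# and M > 0 for N/2 < i < N.
assert Ms[0] > 0 and all(m < 0 for m in Ms[1:5]) and all(m > 0 for m in Ms[5:])
# Symmetry (4.13), even gamma: M(Phi, 2, N, N-i) = -M(Phi, 2, N, i).
assert all(abs(Ms[N - i] + Ms[i]) < 1e-6 for i in range(1, N))
# Bound (4.36): (N-i) M(i+1) >= (i+1) M(i) for 0 < i < N-1.
assert all((N - i) * Ms[i + 1] >= (i + 1) * Ms[i] - 1e-9 for i in range(1, N - 1))
```

All three assertions pass, matching the propositions above; note that (4.36) is checked only on the range 0 < i < N−1 for which it is stated.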

Now, we return to the calculation of the second-order central output moment of a stack filter S_f(·). Let

A = (A_0, A_1, ..., A_{N−1}) (4.44)

denote the A vector of a stack filter S_f(·) and let

M_γ = (M(Φ, γ, N, 0), M(Φ, γ, N, 1), ..., M(Φ, γ, N, N−1)) (4.45)

denote the M_γ vector. Then (4.3) can be rewritten as

μ_2 = A M_2^T − (A M_1^T)^2, (4.46)

where T denotes matrix transpose. In the case of self-dual filters, (4.46) reduces to

μ_2 = A M_2^T. (4.47)

4.2. Calculation of the numbers M(Φ, γ, N, i) for the normal, Laplace and uniform distributions

By Proposition 4.2 the numbers M(Φ, γ, N, i) can be calculated by the sum (4.10). Thus, all the numbers M(Φ, γ, N, i) can be found if the numbers M(Φ, γ, j, 0), N−i ≤ j ≤ N, are known. Clearly, by the definition of the numbers M(Φ, γ, j, 0) in (4.2), they are the moments of the j-th order statistic of j i.i.d. variates, each with distribution function Φ(·), i.e., the moments of the maximum of the j samples. In this subsection we consider the numbers M(Φ, γ, j, 0) for three commonly used noise distributions: normal, Laplace and uniform, each with zero mean and unit variance. We denote these distributions by N(0,1), L(0,1) and U(0,1), respectively.

The following proposition is straightforward to prove.

Proposition 4.8. Let Φ(x) be the distribution function of the uniform distribution U(0,1). Then it holds that

M(Φ, 1, j, 0) = ∫_{−∞}^{∞} x (d/dx)(Φ(x)^j) dx = √3 (j − 1)/(j + 1) (4.48)

and

M(Φ, 2, j, 0) = ∫_{−∞}^{∞} x² (d/dx)(Φ(x)^j) dx = 3 (j² − j + 2)/((j + 1)(j + 2)). (4.49)
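The closed forms (4.48)-(4.49) can be cross-checked against Table 1 below; a minimal sketch (Python; helper names ours):

```python
from math import sqrt

def M1_uniform(j):
    """M(Phi, 1, j, 0) of (4.48): mean of the maximum of j U(0,1)-law samples."""
    return sqrt(3) * (j - 1) / (j + 1)

def M2_uniform(j):
    """M(Phi, 2, j, 0) of (4.49): second moment of that maximum."""
    return 3 * (j * j - j + 2) / ((j + 1) * (j + 2))

# Cross-check against Table 1 (rows j = 3 and j = 14).
assert abs(M1_uniform(3) - 0.86602540) < 1e-7
assert abs(M2_uniform(14) - 2.30000000) < 1e-9
```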

In [9] the following proposition is proved for the Laplace distribution.

Proposition 4.9. Let Φ(x) be the distribution function of the Laplace distribution L(0,1). Then it holds that

M(Φ, 1, j, 0) = ∫_{−∞}^{∞} x (d/dx)(Φ(x)^j) dx = (1/√2) (Σ_{k=1}^{j} (−1)^{k+1} C(j, k)/(k 2^k) − 1/(j 2^j)) (4.50)

and

M(Φ, 2, j, 0) = ∫_{−∞}^{∞} x² (d/dx)(Φ(x)^j) dx = Σ_{k=1}^{j} (−1)^{k+1} C(j, k)/(k² 2^k) + 1/(j² 2^j). (4.51)
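The Laplace moments of (4.50)-(4.51) can likewise be verified against the tabulated values; a sketch (Python; the implementations below are our reading of the closed forms for the mean and second moment of the maximum of j unit-variance Laplace variates, checked against Table 1):

```python
from math import comb, sqrt

def M1_laplace(j):
    """M(Phi, 1, j, 0) of (4.50) for the unit-variance Laplace distribution."""
    s = sum((-1) ** (k + 1) * comb(j, k) / (k * 2 ** k) for k in range(1, j + 1))
    return (s - 1 / (j * 2 ** j)) / sqrt(2)

def M2_laplace(j):
    """M(Phi, 2, j, 0) of (4.51)."""
    s = sum((-1) ** (k + 1) * comb(j, k) / (k * k * 2 ** k) for k in range(1, j + 1))
    return s + 1 / (j * j * 2 ** j)

# Cross-check against Table 1 (rows j = 2 and j = 3).
assert abs(M1_laplace(2) - 0.53033009) < 1e-7
assert abs(M2_laplace(3) - 1.34027778) < 1e-7
```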

The values M(Φ, γ, j, 0) are extreme cases of the expected values and variances of order statistics. Thus, the normal case has also received considerable theoretical interest, cf. e.g. [5,13,14]. In [13,14] the expected values and variances of order statistics in samples from a normal parent distribution are tabulated. The values were computed using a Gauss-Legendre quadrature technique [6,17] to 25 decimal places for sample sizes 1, ..., 50.

In Table 1 the values of the numbers M(Φ, γ, j, 0) are given for γ = 1, 2 and 1 ≤ j ≤ 25, for the normal, Laplace and uniform distributions with zero mean and unit variance.


Table 1
The numbers M(Φ, γ, j, 0), γ = 1, 2, for the N(0,1), L(0,1) and U(0,1) distributions

         N(0,1)                   L(0,1)                   U(0,1)
 j    γ=1         γ=2          γ=1         γ=2          γ=1         γ=2
 1    0.00000000  1.00000000   0.00000000  1.00000000   0.00000000  1.00000000
 2    0.56418958  1.00000000   0.53033009  1.00000000   0.57735027  1.00000000
 3    0.84628438  1.27566445   0.79549512  1.34027778   0.86602540  1.20000000
 4    1.02937537  1.55132890   0.97963752  1.68055556   1.03923048  1.40000000
 5    1.16296447  1.80002044   1.12326858  1.99685764   1.15470054  1.57142857
 6    1.26720636  2.02173907   1.24185628  2.28918403   1.23717915  1.71428571
 7    1.35217838  2.22030414   1.34313460  2.56042304   1.29903811  1.83333333
 8    1.42360031  2.39953497   1.43162159  2.81346301   1.34715063  1.93333333
 9    1.48501316  2.56261742   1.51022737  3.05075954   1.38564065  2.01818182
10    1.53875273  2.71210379   1.58095340  3.27433553   1.41713248  2.09090909
11    1.58643635  2.85002774   1.64524211  3.48585417   1.44337567  2.15384615
12    1.62922764  2.97801909   1.70417029  3.68669192   1.46558145  2.20879121
13    1.66799018  3.09739661   1.75856423  3.87799824   1.48461498  2.25714286
14    1.70338155  3.20923882   1.80907233  4.06074204   1.50111070  2.30000000
15    1.73591344  3.31443706   1.85621299  4.23574744   1.51554446  2.33823529
16    1.76599139  3.41373541   1.90040725  4.40372128   1.52828012  2.37254902
17    1.79394198  3.50776083   1.94200181  4.56527453   1.53960072  2.40350877
18    1.82003188  3.59704617   1.98128554  4.72093902   1.54972967  2.43157895
19    1.84448151  3.68204785   2.01850169  4.87118073   1.55884573  2.45714286
20    1.86747506  3.76315971   2.05385703  5.01641035   1.56709359  2.48051948
21    1.88916791  3.84072385   2.08752879  5.15699185   1.57459164  2.50197628
22    1.90969232  3.91503925   2.11967000  5.29324940   1.58143769  2.52173913
23    1.92916171  3.98636868   2.15041378  5.42547307   1.58771324  2.54000000
24    1.94767407  4.05494432   2.17987656  5.55392352   1.59348674  2.55692308
25    1.96531461  4.12097229   2.20816083  5.67883596   1.59881613  2.57264957

4.3. Comparison of noise attenuation of stack filters using A and M vectors

For i.i.d. input signals having the distribution function Φ, one can calculate the M_γ and A vectors corresponding to each filter and compare the variances of different filters. The M_γ vectors can be easily calculated by using the values in Table 1 and Proposition 4.2.

Example 4.10. Compare the noise reduction of the following two stack filters with A vectors

A(S_1) = (1, 9, 36, 84, 126, 0, 0, 0, 0)    (4.52)

and

A(S_2) = (1, 9, 0, 0, 0, 0, 0, 0, 0),    (4.53)

for the normal distribution N(0, 1). By using the values in Table 1 and Proposition 4.2 we obtain

M_1 = (1.48501, -0.06141, -0.01001, -0.00354, -0.00218, -0.00218, -0.00354, -0.01001, -0.06141),    (4.54)

M_2 = (2.56262, -0.16308, -0.01615, -0.00319, -0.00063, 0.00063, 0.00319, 0.01615, 0.16308),    (4.55)


and so

A(S_1)M_2^T - (A(S_1)M_1^T)^2 < A(S_2)M_2^T - (A(S_2)M_1^T)^2.    (4.56)

This indicates that S_1 has better noise attenuation than S_2 for normally distributed noise. Similar comparisons for other distributions can easily be done.
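The comparison of Example 4.10 can be checked numerically. The sketch below (helper name ours) hardcodes the A vectors (4.52)-(4.53) and the M vectors (4.54)-(4.55) and evaluates the output variance A M_2^T - (A M_1^T)^2:

```python
# A vectors from Example 4.10 (window size N = 9); S1 is the 9-point median.
A1 = [1, 9, 36, 84, 126, 0, 0, 0, 0]
A2 = [1, 9, 0, 0, 0, 0, 0, 0, 0]

# M_gamma vectors for N(0,1) taken from (4.54) and (4.55).
M1 = [1.48501, -0.06141, -0.01001, -0.00354, -0.00218, -0.00218,
      -0.00354, -0.01001, -0.06141]
M2 = [2.56262, -0.16308, -0.01615, -0.00319, -0.00063, 0.00063,
      0.00319, 0.01615, 0.16308]

def output_variance(A, M1, M2):
    """Variance of the stack filter output: A*M2^T - (A*M1^T)^2."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(A, M2) - dot(A, M1) ** 2

var_S1 = output_variance(A1, M1, M2)   # 9-point median, about 0.166
var_S2 = output_variance(A2, M1, M2)   # about 0.226
```

The inequality var_S1 < var_S2 confirms (4.56).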

It may also be possible to compare the noise attenuation of different self-dual stack filters with the same window size even though the noise distribution is not known [20]. As by the equation in (2.15) the coefficient A_i determines A_{N-i} for all 1 ≤ i ≤ ⌊(N - 1)/2⌋, we can use truncated A vectors instead of A vectors, where in the truncated vector only the coefficients A_0, A_1, ..., A_⌊(N-1)/2⌋ are stored, i.e.,

A' = (A_0, A_1, ..., A_⌊(N-1)/2⌋).    (4.57)

Proposition 4.5 implies that if for the truncated A vectors A'(S_1) and A'(S_2) it holds

A'(S_1) ≥ A'(S_2),    (4.58)

where ≥ denotes the partial ordering of vectors ((x_1, x_2, ..., x_n) ≥ (y_1, y_2, ..., y_n) if and only if x_i ≥ y_i for all i ∈ {1, 2, ..., n}), then

σ^2(S_1) ≤ σ^2(S_2)    (4.59)

for any symmetric noise distribution.

Example 4.11. Compare the noise reduction of two self-dual stack filters with window size 9 and truncated A vectors

A'(S_1) = (1, 9, 36, 84, 120)    (4.60)

and

A'(S_2) = (1, 9, 35, 77, 105).    (4.61)

As A'(S_1) ≥ A'(S_2), S_1 has better noise reduction than S_2 for any symmetric noise distribution.

If the truncated A vectors are incomparable, the noise reduction of two self-dual stack filters cannot be compared without knowing the corresponding M_γ vectors.
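The truncated-vector test is a simple componentwise comparison; a minimal sketch (function name ours):

```python
def dominates(a, b):
    """Componentwise partial order on truncated A vectors: a >= b everywhere."""
    return all(x >= y for x, y in zip(a, b))

# Truncated A vectors of Example 4.11 (window size 9).
A1t = (1, 9, 36, 84, 120)
A2t = (1, 9, 35, 77, 105)

better = dominates(A1t, A2t)   # S1 is at least as good for any symmetric noise
comparable = dominates(A1t, A2t) or dominates(A2t, A1t)
```

If neither vector dominates the other, the filters are incomparable and the M_γ vectors are needed.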

5. Optimal stack filters under constraints

In this section we develop a new method for finding the optimal stack filter in some specified sense, using the results presented in the previous sections. A theory with a similar goal has been developed in [3,4,8], as well as adaptive algorithms [11].

There are several optimality criteria commonly used in filtering. Some are classic, e.g., the mean square error, the mean absolute error and the minimax error, while others are more recent, e.g., a set of structural constraints on the filter behaviour and associative memory [8,22]. The last two are intimately related to the theory of root signal sets, which, in simple terms, define the "passband" of nonlinear filters.


5.1. Problem formulation

Assume that the input x_i of a stack filter S_f(·) with window length N is a constant signal s plus additive white noise n_i, that is,

x_i = s + n_i,    (5.1)

where i stands for the i-th sample. We denote the N samples inside the filter window by X_1, X_2, ..., X_N. The output ŝ = S_f(X_1, X_2, ..., X_N) of the stack filter S_f(·) is an estimate of s. One optimization criterion is based on the minimization of the mean square error defined as follows:

E{(s - ŝ)^2}.    (5.2)

Since s in (5.2) is a constant signal we can recast E{(s - ŝ)^2} as

E{(s - ŝ)^2} = A M_2^T - (A M_1^T)^2,    (5.3)

or, in the case of self-dual filters, (5.3) can be recast as

E{(s - ŝ)^2} = A M_2^T,    (5.4)

where, as stated before, the vectors M_γ are independent of the filter, while the vector A can be understood as a function of the filter. In the optimization process we aim at finding the vector A = (A_0, A_1, ..., A_{N-1}) minimizing (5.3) or (5.4).

We write (5.4) as

E{(s - ŝ)^2} = A_med M_2^T - A^0 M_2^T + A^1 M_2^T,    (5.5)

where A_med is the A vector of the standard N-point median filter, that is,

A_med = (1, C(N, 1), C(N, 2), ..., C(N, ⌊(N-1)/2⌋), 0, 0, ..., 0),    (5.6)

and the vectors A^0 = (0, A_1^0, A_2^0, ..., A^0_⌊(N-1)/2⌋, 0, 0, ..., 0) and A^1 = (0, 0, ..., 0, A^1_⌊(N+1)/2⌋, A^1_⌊(N+3)/2⌋, ..., A^1_{N-1}) satisfy

A_i^0 = A^1_{N-i},  1 ≤ i ≤ N - 1.    (5.7)

Now, A_med M_2^T is independent of the filter and therefore it is a constant which can be left out in the optimization. Thus, the optimization of self-dual filters is reduced to minimizing

-A^0 M_2^T + A^1 M_2^T.    (5.8)

Proposition 4.4 together with Eq. (5.7) implies -A_i^0 M(Φ, 2, N, i) = A^1_{N-i} M(Φ, 2, N, N-i) for 1 ≤ i ≤ ⌊(N+1)/2⌋. Thus, (5.8) is further reduced to minimizing

A^0' M_2'^T,    (5.9)

where A^0' and M_2' are the truncated A^0 and M_2 vectors given by

A^0' = (0, A_1^0, A_2^0, ..., A^0_⌊(N-1)/2⌋)    (5.10)

and

M_2' = (M(Φ, 2, N, 0), M(Φ, 2, N, 1), ..., M(Φ, 2, N, ⌊(N-1)/2⌋)),    (5.11)

respectively.


The coefficient A_i^0 gives the number of binary vectors of Hamming weight i, 1 ≤ i ≤ ⌊(N - 1)/2⌋, which the stack filter maps to 1, that is,

A_i^0 + A_i = C(N, i).    (5.12)

In earlier sections we have given some basic conditions that the A_i have to satisfy. These conditions are listed below and are called Basic Constraints (BC).

(BC1): A_i ∈ Z_+.

(BC2): A_0 = 1, A_N = 0.

(BC3): 0 ≤ A_i ≤ C(N, i), i = 1, 2, ..., N.

(BC4): A_{i+1} ≤ [(N - i)/(i + 1)] A_i, i = 0, 1, ..., N - 1.

The constraint (BC2) guarantees that the resulting optimal stack filter is not defined by either of the trivial positive Boolean functions f(x) = 0 for all x ∈ {0, 1}^N or f(x) = 1 for all x ∈ {0, 1}^N.

We also have a special constraint for self-dual filters:

(SDC): If it is required that the filter is self-dual, then

A_i = C(N, i) - A_{N-i},  i = 0, 1, ..., N.

Definition 5.1. A point is called a feasible point if it satisfies all constraints. A set of constraints is called feasible if the set of feasible points is non-empty. The set of feasible points is called the solution space of the constraints.

Definition 5.2. A set of constraints is called irredundant if its solution space changes when any single constraint is left out.

It is a priori clear that any set of constraints can be reduced to an irredundant form.

5.2. Rank selection constraints

In optimization it is sometimes natural to give constraints on the rank selection probabilities; for example, the smallest and largest values in the window are not allowed to be the output, or the probability of the median sample being the output must be greater than 0.5. Such requirements induce new constraints on the coefficients A_i, called Rank Selection Constraints (RSC). In the following we present some basic rank selection constraints.


(RSC1): If it is required that

r_1 = r_2 = ... = r_k = 0,    (5.13)

then

A_{N-1} = A_{N-2} = ... = A_{N-k} = 0.    (5.14)

Proof. According to (3.1) and (BC2)

r_1 = A_{N-1}/C(N, N-1) - A_N/C(N, N) = A_{N-1}/N.    (5.15)

Thus, the requirement r_1 = 0 implies A_{N-1} = 0 and similarly A_{N-1} = 0 implies r_1 = 0. Now,

r_2 = A_{N-2}/C(N, N-2) - A_{N-1}/C(N, N-1) = A_{N-2}/C(N, N-2),    (5.16)

and so r_2 = 0 implies A_{N-2} = 0 and A_{N-2} = 0 implies r_2 = 0. By continuing this process we notice that (5.14) is a necessary and sufficient condition for (5.13) to hold. □

(RSC2): If it is required that

r_k = r_{k+1} = ... = r_N = 0,    (5.17)

then

A_j = C(N, j),  j = 1, 2, ..., N - k + 1.    (5.18)

Proof. According to (3.1) and (BC2)

r_N = A_0/C(N, 0) - A_1/C(N, 1) = 1 - A_1/N.    (5.19)

Now, r_N = 0 implies A_1 = C(N, 1) = N and A_1 = N implies r_N = 0. Next we obtain, by assuming A_1 = N,

r_{N-1} = A_1/C(N, 1) - A_2/C(N, 2) = 1 - A_2/C(N, 2).    (5.20)

Thus, r_{N-1} = 0 implies A_2 = C(N, 2) and vice versa. By continuing this process we notice that (5.18) is a necessary and sufficient condition for (5.17) to hold. □

Before presenting the third rank selection constraint we fix some notation. For a moment, let ρ denote any one of the binary relations >, ≥, =, ≤, <. The constraint below pairs each relation imposed on a rank selection probability with the corresponding relation on the coefficients A_i; the same relation ρ appears on both sides of the equivalence.

(RSC3): If it is required that for some i, 1 ≤ i ≤ N, and a ∈ [0, 1] it holds

r_i ρ a,    (5.21)

then

A_{N-i} ρ [(N - i + 1)/i] A_{N-i+1} + C(N, N-i) a.    (5.22)

Proof. The necessity of the condition (5.22) follows directly from Corollary 3.4. Assume that (5.22) holds; then by (3.1)

r_i = A_{N-i}/C(N, N-i) - A_{N-i+1}/C(N, N-i+1) ρ a,    (5.23)

which proves the sufficiency of the condition (5.22). □

Other rank selection constraints can be given by using the propositions and corollaries in Section 3.

5.3. Structural constraints

It is also possible to optimize stack filters under a pre-specified set of so-called structural constraints. The goal of the structural constraints is to preserve some desired signal details, e.g. pulses in 1-D signals or lines in images, and to remove undesired signal patterns. The structural constraints consist of a list of different structures to be preserved, deleted or modified. Since stack filters obey the threshold decomposition, the structural constraints need to be considered only in the context of binary signals. That is, they can be specified by a set of binary vectors and their outputs. We divide these binary vectors into two subsets, type 1 constraints and type 0 constraints.

Definition 5.3. A binary vector which is specified by the structural constraints is called a type 1 constraint if its output is one; otherwise it is called a type 0 constraint.

We denote the set of all type 1 constraints by Γ_1 = {x_1, x_2, ..., x_p} and the set of all type 0 constraints by Γ_0 = {y_1, y_2, ..., y_q}. Obviously, the total set of structural constraints is the union of these subsets, denoted by Γ = Γ_1 ∪ Γ_0. The following proposition gives a necessary condition for checking whether Γ is feasible or not.

Proposition 5.4. A necessary condition for the vectors of Γ_1 and Γ_0 to be feasible is that every pair of vectors x ∈ Γ_1 and y ∈ Γ_0 satisfies x ≰ y.

Proof. If x ≤ y, then f(x) = 1 implies by (2.6) that f(y) = 1, which is in contradiction with y ∈ Γ_0. □
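The necessary condition of Proposition 5.4 can be checked by a nested componentwise comparison; a minimal sketch with hypothetical constraint sets for a window of length 3 (all names ours):

```python
def leq(x, y):
    """Componentwise order on binary vectors: x <= y."""
    return all(a <= b for a, b in zip(x, y))

def feasible(gamma1, gamma0):
    """Necessary condition of Proposition 5.4: no x in Gamma_1 lies below
    some y in Gamma_0 (otherwise stacking would force f(y) = 1)."""
    return all(not leq(x, y) for x in gamma1 for y in gamma0)

# Hypothetical structural constraints (illustration only).
g1 = [(1, 1, 0)]              # vectors that must be mapped to 1
g0 = [(1, 0, 0), (0, 1, 0)]   # vectors that must be mapped to 0

ok = feasible(g1, g0)             # (1,1,0) lies below neither vector in g0
bad = feasible(g1, [(1, 1, 1)])   # (1,1,0) <= (1,1,1): infeasible
```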


If the filter is assumed to be self-dual, then it must hold that for each x ∈ Γ_1 we have x̄ ∈ Γ_0, and for each y ∈ Γ_0 we have ȳ ∈ Γ_1.

A set of structural constraints, if feasible, is easy to reduce to its irredundant form, which is unique. In all future references to Γ we assume that it has been reduced to its irredundant form.

Structural constraints induce two new constraints for the coefficients Ai:

(SC1): Let the number of vectors x ∈ Γ_1 with w_H(x) = i be γ_i^(1) for all 1 ≤ i ≤ N - 1. Then

A_i ≤ C(N, i) - γ_i^(1),  1 ≤ i ≤ N - 1.    (5.24)

(SC2): Let the number of vectors y ∈ Γ_0 with w_H(y) = i be γ_i^(0) for all 1 ≤ i ≤ N - 1. Then

A_i ≥ γ_i^(0),  1 ≤ i ≤ N - 1.    (5.25)

5.4. Optimization under constraints

In this section we state the optimization problem of stack filters in a new form which suits our purposes. In the optimization there can be constraints which the optimal filter must satisfy. We have already studied two different kinds of constraints: rank selection constraints and structural constraints. Rank selection constraints determine some output characteristics that the optimal filter ought to have; in other words, by rank selection constraints we limit our search to a set of stack filters with some common statistical description. Structural constraints, on the other hand, describe certain structures which must be deleted or preserved. Thus, structural constraints do not directly require strict statistical behaviour of the optimal filter. In our approach we combine both types of constraints through their relation to the coefficients A_i established in the earlier sections.

The problem of finding the coefficient vector A of the optimal stack filter under rank selection and/or structural constraints is stated as follows:

(i) The optimal filter is not required to be self-dual:

Minimize  A M_2^T - (A M_1^T)^2    (5.26)
subject to  (BC1)-(BC4) basic constraints,
            (possible) rank selection constraints,
            (possible) structural constraints.

(ii) The optimal filter is required to be self-dual:

Minimize  A M_2^T    (5.27)
subject to  (BC1)-(BC4) basic constraints,
            (SDC) self-duality constraint,
            (possible) rank selection constraints,
            (possible) structural constraints.

For the optimization the following remark is essential.

Remark 5.5. The basic constraints, the self-duality constraint, the structural constraints, and all sensible rank selection constraints are such that decreasing A_i does not increase the possible maximum value of A_{i+1}. This is due to the nature of the basic and self-duality constraints and the fact that constraints of the form A_{i+1} ≤ a - bA_i, b > 0, or A_{i+1} ≤ (c/A_i) + d (or modifications/combinations of these) cannot be obtained from any sensible rank selection or structural constraint.


Example 5.6. Finding the coefficients A_i of the optimal stack filter of window length N = 5 under the rank selection constraints

r_1 = r_5 = 0    (5.28)

and

r_3 ≤ 0.5,    (5.29)

where the input samples are i.i.d. from the N(0, 1) distribution. Constraint (5.28) gives the equalities

A_4 = 0,  A_1 = 5,    (5.30)

and by (RSC3) constraint (5.29) yields the inequality

A_3 ≥ A_2 - 5.    (5.31)

Now, by (5.30), (5.31) and the basic constraints the optimal coefficients are found by minimizing

(1, 5, A_2, A_3, 0) M_2^T - [(1, 5, A_2, A_3, 0) M_1^T]^2
    = 0.55657 + 0.02697(A_3 - A_2) - (0.49501 - 0.04950(A_2 + A_3))^2

subject to

0 ≤ A_2 ≤ 10,  A_2 - 5 ≤ A_3 ≤ A_2.    (5.32)

By using nonlinear programming the first solution is found to be

A = (1, 5, 10, 10, 0),    (5.33)

which corresponds to the rank selection probability vector

r = (0, 1, 0, 0, 0),    (5.34)

and the other solution is found to be

A = (1, 5, 0, 0, 0),    (5.35)

which corresponds to the rank selection probability vector

r = (0, 0, 0, 1, 0).    (5.36)

Thus, the optimal filters are simply the second and fourth OS filters.
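The minimization of Example 5.6 is small enough for exhaustive search. The sketch below (names ours) scans the integer region (5.32) with the numerical objective quoted above; because the printed constants are rounded, the two optimal corners are only an approximate tie:

```python
def objective(A2, A3):
    """MSE objective of Example 5.6, constants as printed in the paper."""
    return 0.55657 + 0.02697 * (A3 - A2) - (0.49501 - 0.04950 * (A2 + A3)) ** 2

candidates = [(A2, A3)
              for A2 in range(0, 11)                     # 0 <= A2 <= 10
              for A3 in range(max(0, A2 - 5), A2 + 1)]   # A2 - 5 <= A3 <= A2
best = min(candidates, key=lambda p: objective(*p))
```

The minimum lands on one of the corners (A_2, A_3) = (0, 0) or (10, 10), i.e. the fourth or second OS filter, with nearly equal objective values.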

Example 5.7. Finding the coefficients A_i of the optimal self-dual stack filter of window length N = 7 under the rank selection constraint

r_7 > r_6,    (5.37)

where the input samples are i.i.d. from the U(0, 1) distribution. The self-duality constraint gives

A_i = C(7, i) - A_{7-i},  i = 1, 2, 3.    (5.38)

Now, the rank selection constraint differs from the given constraints (RSC1)-(RSC3). By (3.1) and constraint (5.37),

1 - A_1/7 > A_1/7 - A_2/21,    (5.39)

that is,

21 + A_2 > 6A_1.    (5.40)

As by (5.38) r_1 = r_7 and r_2 = r_6, we have r_1 > r_2, which gives by (3.1)

6A_6 > A_5.    (5.41)

Now, the set of constraints {(5.38), (5.40), (5.41)} is not irredundant, because (5.38) and (5.40) together imply (5.41), and (5.38) and (5.41) together imply (5.40). Therefore, we can leave either the constraint (5.40) or (5.41) out.

By (BC1)-(BC4), (SDC) and (5.41) the optimal coefficients A_i are found by minimizing

(1, A_1, A_2, A_3, A_4, A_5, A_6) (1.83333, -0.11905, -0.02381, -0.00476, 0.00476, 0.02381, 0.11905)^T    (5.42)

subject to

0 ≤ A_1 ≤ 7,  0 ≤ A_2 ≤ 3A_1,  0 ≤ A_3 ≤ (5/3)A_2,
A_4 ≤ A_3,  A_5 ≤ (3/5)A_4,  A_6 ≤ (1/3)A_5,
A_2 > 6A_1 - 21,  A_6 = 7 - A_1,  A_5 = 21 - A_2,  A_4 = 35 - A_3.

By using integer programming the solution is found to be

A = (1, 6, 18, 30, 5, 3, 1).    (5.43)

Unfortunately, this is not the right answer, because if A_1 = 6 there is one binary vector with Hamming weight 1 such that g(x) = 1, and thus there are at least six binary vectors with Hamming weight 2 such that g(x) = 1, so that A_2 ≤ 15. This illustrates that the constraint (BC4) is not sufficient, that is to say, it is too loose. Finding an exact condition for the relation between A_i and A_{i+1} is still an open problem. Because of this, the results obtained by direct minimization are not necessarily correct and we have to seek a better method.

Another serious problem is the necessity of using nonlinear programming in the case of filters that are not self-dual.

A third problem is often that of finding a stack filter corresponding to the optimal vector A that was found. Note that the filter is seldom unique.


In the following sections we solve the above problems for self-dual filters by giving a greedy algorithm for finding an optimal self-dual stack filter satisfying the given constraints. Before presenting that algorithm we represent the optimization situation in terms of lattice theory in order to clarify the idea of our algorithm.

5.5. Lattice-theoretic representation of the optimization problem for self-dual filters

A partially ordered set (poset) is a set with a binary relation which is reflexive, antisymmetric and transitive. For example, B^N = {(x_1, x_2, ..., x_N) : x_i ∈ {0, 1}}, with x ≤ y ⟺ x_i ≤ y_i, i = 1, 2, ..., N, is a poset. A lattice is a poset L in which every pair of elements has a least upper bound and a greatest lower bound. For example, B^N is a lattice. A lower ideal of L is a subset P of L such that if x ∈ P and y ≤ x, then y ∈ P. An upper ideal of L is a subset Q of L such that if x ∈ Q and y ≥ x, then y ∈ Q. Let y_1, y_2, ..., y_k ∈ L; then P = {x ∈ L : ∃ y_i, i = 1, 2, ..., k : x ≤ y_i} is called the lower ideal generated by y_1, y_2, ..., y_k, and similarly Q = {x ∈ L : ∃ y_i, i = 1, 2, ..., k : x ≥ y_i} is called the upper ideal generated by y_1, y_2, ..., y_k. Let A ⊆ L; then x ∈ A is called a maximal element of A if y ∈ A, y ≥ x implies x = y; a minimal element of A is defined similarly. It is clear that an upper ideal is generated by its minimal elements and a lower ideal by its maximal elements.

If in the optimization there are no constraints, then A^0' = (0, 0, ..., 0) minimizes A^0' M_2'^T and therefore the A vector of the optimal filter equals the A vector of the median filter given in (5.6). In this case all vectors with Hamming weight less than (N + 1)/2 have been mapped to 0 and all other vectors have been mapped to 1. When there are constraints, for some vectors x, w_H(x) < N/2, it holds f(x) = 1 and, because of the self-duality, f(x̄) = 0, where f(·) is the positive Boolean function defining the optimal self-dual stack filter. Let Ω_1 be the upper ideal generated by these vectors x and Ω_0 be the lower ideal generated by the vectors x̄, that is,

Ω_1 = {x ∈ B^N : w_H(x) ≤ N/2} ∩ f^{-1}(1)    (5.44)

and

Ω_0 = {x ∈ B^N : w_H(x) ≥ N/2} ∩ f^{-1}(0).    (5.45)

Thus, Ω_1 ⊆ f^{-1}(1) and Ω_0 ⊆ f^{-1}(0). In order that there exists such a Boolean function f(·) (or a stack filter) with Ω_0 and Ω_1, it must hold that Ω_0 ∩ Ω_1 = ∅. If the constraints are such that it is possible to form the sets Ω_0 and Ω_1 in such a way that Ω_0 ∩ Ω_1 = ∅, the constraints are feasible.

Furthermore, we denote Ω_med = {x ∈ B^N : w_H(x) ≥ N/2}. Then the filter S_f(·) can be described by

f^{-1}(1) = (Ω_1 ∪ Ω_med) ∩ (B^N \ Ω_0).    (5.46)

Eq. (5.46), expressed as a Boolean function, yields

f(x) = (f_med(x) + Σ_{i=1}^{p} f_i(x)) Π_{j=1}^{p} f_j^D(x),    (5.47)

where f_med(·) is the Boolean function of the N-point standard median filter, f_i(·) is the elementary conjunction of an element of Ω_1, f_i^D(·) is its dual (an elementary disjunction), and p is the number of elements in the set Ω_1.

We can reduce (5.47) to a simpler form, as we need only sum over the minimal elements of Ω_1 instead of summing over all members of Ω_1. As proved in [19], the complements of the minimal elements of Ω_1 are the maximal elements of Ω_0.

The idea of our algorithm is to choose the minimal elements of Ω_1 in such a way that the constraints are satisfied and the cardinalities of the sets Ω_1 and Ω_0 are kept as small as possible, by adding new minimal elements only to the levels where necessary (by the i-th level we mean the vectors x with w_H(x) = i) and as few of them as possible.


By choosing as few elements as possible we leave the maximum amount of freedom for the next levels, because we have maximized the number of vectors which may or may not belong to the sets Ω_1 and Ω_0 at the next levels. This freedom usually makes it easier to satisfy the given constraints.

5.6. An algorithm to optimize over self-dual filters under constraints

Now we are ready to present a new algorithm for the optimization of self-dual filters under constraints.

Algorithm 1. Finding an optimal self-dual stack filter under given constraints. Let the Boolean variables be indexed as x_1, x_2, ..., x_N.

Step 1. Find the indices 1 ≤ i ≤ ⌊(N - 1)/2⌋ of the coefficients A_i for which there are constraints other than the basic constraints and the self-duality constraint. Let these indices be i_1 < i_2 < ... < i_r.

Step 2. Let A_i = C(N, i) for 1 ≤ i < i_1.

Step 3. For the index i_1 find the possible maximum value A'_{i_1} under the given constraints. Let A_{i_1} = A'_{i_1} and d = C(N, i_1) - A'_{i_1}. Now, d gives the number of binary vectors of Hamming weight i_1 which should be minimal elements of the upper ideal Ω_1 at level i_1.

Step 4. Choose d binary vectors x_1, x_2, ..., x_d to be minimal elements of the upper ideal Ω_1. The vectors should satisfy x_i ≰ x̄_j for all i ≠ j, and must not violate any structural constraint. If there are structural constraints that require specific vectors x, w_H(x) = i_1, to be in Ω_1, they are chosen. We discuss in more detail how the optimal choice of these vectors can be done in Section 5.7. Let g(·) equal the sum of the elementary conjunctions of the chosen vectors.

Steps 5-8 are repeated for each k, 2 ≤ k ≤ r, in numerical order.

Step 5. For each i_{k-1} < i < i_k find the maximum value of A_i allowed by the Boolean function g(·), by finding the cardinality of the set {y : g(y) = 0, w_H(y) = i}. Denote that maximum value by A^g_i. In the calculation of A^g_i it is possible to use the inclusion-exclusion principle. For i_{k-1} < i < i_k let A_i = A^g_i; that is, at the levels where there are no constraints, minimal elements are not added.

Step 6. For the index i_k find the possible maximum value A'_{i_k} under the given constraints. If the constraints do not limit A_{i_k} from above and A^g_{i_k} satisfies the constraints concerning A_{i_k}, then let A'_{i_k} = A^g_{i_k}. Let d = A^g_{i_k} - A'_{i_k}. Now, d gives the number of binary vectors of Hamming weight i_k which should be new minimal elements of the upper ideal Ω_1 at level i_k.

Step 7. If d < 0 or d is not defined, then there does not exist a self-dual filter satisfying the given constraints. Otherwise let A_{i_k} = A'_{i_k}.

Step 8. Choose d binary vectors x_1, x_2, ..., x_d to be new minimal elements of the upper ideal Ω_1. A chosen vector x_i has to satisfy g(x_i) = 0, g^D(x_i) = 1 and x_i ≰ x̄_j for all 1 ≤ j ≤ d, j ≠ i; otherwise either x_i would already belong to the set Ω_1 or Ω_0 ∩ Ω_1 ≠ ∅. Again, the chosen vectors must not be forbidden by any structural constraint, and if there are structural constraints that require specific vectors x, w_H(x) = i_k, to be in Ω_1, they are chosen. If there are no d binary vectors satisfying the above requirements, then there does not exist a self-dual filter satisfying the given constraints. The optimal choice of these vectors is discussed in Section 5.7. Let g'(·) equal the sum of the elementary conjunctions of the chosen vectors and let g(·) = g(·) + g'(·).

When all the indices 1 ≤ i ≤ i_r have been run through, an optimal stack filter is defined by the Boolean function

f(x) = (f_med(x) + g(x)) g^D(x).    (5.48)

An optimal filter has the following interesting properties.

Corollary 5.8. An optimal self-dual stack filter is optimal for any noise having a symmetric density function.


Proof. This corollary follows since the choice of the vectors is independent of the underlying noise distribution and the properties of the numbers M(Φ, γ, N, i) given in Section 4 hold for all symmetric density functions. □

Corollary 5.9. The optimal filter is not necessarily unique; by choosing different vectors we would have obtained another optimal filter. In fact, every permutation of the variables x_1, x_2, ..., x_N in the Boolean function in (5.48) gives an optimal filter, if there are no specific vectors which need to be in Ω_1 or Ω_0 by structural constraints.

Example 5.10. Consider a self-dual stack filter (N = 7) with two constraints: r_6 ≥ 2/21 and r_5 ≤ 2/10. Let the Boolean variables be indexed by x_1, x_2, ..., x_7.

Step 1. The constraint r_6 ≥ 2/21 gives by (RSC3) a constraint for A_2: A_2 ≤ 3A_1 - 2, and the constraint r_5 ≤ 2/10 gives a constraint for A_3: A_3 ≥ (5/3)A_2 - 7. Thus, in this example i_1 = 2 and i_2 = 3.

Step 2. A_1 = 7.

Step 3. The constraint A_2 ≤ 3A_1 - 2 gives A'_2 = 19. Thus, A_2 = 19 and d = 2.

Step 4. We choose the vectors x_1 = (1, 1, 0, 0, 0, 0, 0) and x_2 = (0, 1, 1, 0, 0, 0, 0); then g(x) = x_1x_2 + x_2x_3.

Step 5. There are five vectors y_i of weight 3 such that y_i > x_1, and five vectors y_j of weight 3 such that y_j > x_2, of which the vector y = (1, 1, 1, 0, 0, 0, 0) satisfies both y > x_1 and y > x_2. Thus, A^g_3 = C(7, 3) - 2·5 + 1 = 26.

Step 6. Now, the constraint A_3 ≥ (5/3)A_2 - 7 gives A_3 > 24, which does not limit A_3 from above, and A^g_3 satisfies A^g_3 > 24. This gives d = 0.

Step 7. A_3 = 26.

Thus, an optimal self-dual stack filter (N = 7) satisfying r_6 ≥ 2/21 and r_5 ≤ 2/10 is defined by the Boolean function

f(x) = (f_med(x) + x_1x_2 + x_2x_3)(x_1 + x_2)(x_2 + x_3).    (5.49)
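The filter of Example 5.10 can be verified exhaustively over all 2^7 binary vectors: the sketch below (names ours) checks self-duality, recomputes the coefficients A_i as the counts of weight-i vectors mapped to 0, and evaluates the rank selection probabilities via (3.1):

```python
from itertools import product
from fractions import Fraction
from math import comb

def f(x):
    """Positive Boolean function (5.49) for N = 7."""
    med = int(sum(x) >= 4)   # binary 7-point median: at least 4 ones
    return (med | (x[0] & x[1]) | (x[1] & x[2])) & (x[0] | x[1]) & (x[1] | x[2])

N = 7
A = [0] * (N + 1)
self_dual = True
for x in product((0, 1), repeat=N):
    xbar = tuple(1 - b for b in x)
    self_dual = self_dual and f(x) == 1 - f(xbar)   # f(x) = 1 - f(complement)
    if f(x) == 0:
        A[sum(x)] += 1          # A_i counts the weight-i vectors mapped to 0

# Rank selection probabilities r_1, ..., r_7 via (3.1).
r = [Fraction(A[N - j], comb(N, N - j)) - Fraction(A[N - j + 1], comb(N, N - j + 1))
     for j in range(1, N + 1)]
```

The run confirms A = (1, 7, 19, 26, 9, 2, 0, 0), that the filter is self-dual, and that r_6 = 2/21 and r_5 ≤ 2/10, i.e. both constraints of the example hold.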

As already mentioned, any stack filter can be implemented for multi-level signals by replacing the AND and OR operations with MIN and MAX operations, respectively. This gives

Corollary 5.11. For real-valued signals, the optimal stack filter defined by the positive Boolean function given in (5.47) is a composition of the median filter and a set of maximum and minimum filters:

S_f(X) = MIN{MAX{MED{X}, S_1(X)}, S_2(X)},    (5.50)

where

S_1(X) = MAX{S_{f_1}(X), ..., S_{f_p}(X)},    (5.51)

S_2(X) = MIN{S_{f_1^D}(X), ..., S_{f_p^D}(X)},    (5.52)

S_{f_i}(X), i = 1, 2, ..., p, are stack filters for real-valued signals corresponding to the Boolean functions f_i(x), and, likewise, S_{f_i^D}(X), i = 1, 2, ..., p, correspond to the Boolean functions f_i^D(x).

By Corollary 5.11 it is clear that the optimal self-dual stack filters defined in this paper are multistage rank-order-based filters. The idea of multistage filtering has been used to generate detail-preserving rank-order-based filters, cf. e.g. [1,12]. It has been shown that these filters perform well when applied to image processing [1]. However, the design of these filters is not based on any optimality criterion. The optimal stack filters derived in this paper produce the best noise attenuation while satisfying some pre-specified constraints.

Example 5.10 continued. Now, an optimal stack filter which achieves the best noise attenuation and, at the same time, satisfies the rank selection constraints is, for real-valued input signals,

S_f(X) = MIN{MAX{MED{X_1, ..., X_7}, MIN{X_1, X_2}, MIN{X_2, X_3}}, MAX{X_1, X_2}, MAX{X_2, X_3}}.    (5.53)
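The equivalence between the multilevel MIN/MAX composition of Corollary 5.11 and the threshold decomposition can be tested numerically for this filter. In the sketch below (names ours), the output is computed both ways: as the MIN/MAX composition and as the largest threshold whose binary slice is mapped to 1 by the positive Boolean function (5.49):

```python
import random

def f(x):
    """Positive Boolean function (5.49) for N = 7."""
    med = int(sum(x) >= 4)
    return (med | (x[0] & x[1]) | (x[1] & x[2])) & (x[0] | x[1]) & (x[1] | x[2])

def median(v):
    return sorted(v)[len(v) // 2]   # middle of 7 samples

def multilevel(X):
    """Real-valued filter of Corollary 5.11 for the function (5.49)."""
    s1 = max(min(X[0], X[1]), min(X[1], X[2]))   # MAX of MINs, (5.51)
    s2 = min(max(X[0], X[1]), max(X[1], X[2]))   # MIN of MAXs, (5.52)
    return min(max(median(X), s1), s2)           # (5.50)

def by_threshold_decomposition(X):
    """Stack filter output: the largest threshold t (over the sample values)
    whose binary slice (X_i >= t) is mapped to 1 by f."""
    return max(t for t in X if f(tuple(int(v >= t) for v in X)))
```

Both routes agree on every input window, as the threshold decomposition theory guarantees.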


Now, we return to Example 5.7, which we were not able to solve earlier.

Example 5.7 continued. In this example we have already found one constraint which is neither a basic constraint nor the self-duality constraint, namely 21 + A_2 > 6A_1. Even though it may appear that this constraint does not bind A_1 at all, it actually does: because A_2 ≤ 21, it is not possible that A_1 = 7. From this we notice an important fact: constraints of the kind A_{i+1} + a > bA_i can in fact bind A_i, and they should be checked. This can be done as follows. As by (BC4) A_2 ≤ 3A_1 and by the constraint 21 + A_2 > 6A_1, it holds 6A_1 - 21 < 3A_1, from which we obtain A_1 < 7, which is a constraint for A_1. Now, by using also this constraint in Algorithm 1 we easily obtain that 5 is the maximum value of A_1 which satisfies the constraints 21 + A_2 > 6A_1 and A_1 < 7, when A_2 = 10. This means that d ≥ 2. But when we choose a vector x_1, w_H(x_1) = 1, then for all other vectors y, w_H(y) = 1, it holds y ≤ x̄_1. Thus, it is impossible to choose more than one vector satisfying the conditions in Step 4 of Algorithm 1. This means that there does not exist a self-dual stack filter satisfying r_1 > r_2. This can also be proved for windows of other sizes.

Proposition 5.12. There does not exist a self-dual stack filter satisfying r_N > r_{N-1} or, equivalently, r_1 > r_2.

Proof. By (3.1) and r_N > r_{N-1},

1 - A_1/N > A_1/N - 2A_2/(N(N - 1)),    (5.54)

or equivalently

2(N - 1)A_1 < 2A_2 + N(N - 1).    (5.55)

If A_1 = N in (5.55), then A_2 > C(N, 2), which is a contradiction to (2.12). If A_1 = N - 1 in (5.55), then by (5.55) A_2 > (N - 1)(N - 2)/2. On the other hand, since A_1 = N - 1 we have A_2 ≤ C(N, 2) - (N - 1) = (N - 1)(N - 2)/2, which again gives a contradiction. This means that in order for (5.55) to hold, the coefficient A_1 must satisfy A_1 < N - 1. But this is impossible, because when we choose a vector x_1, w_H(x_1) = 1, then for all other vectors y, w_H(y) = 1, it holds y ≤ x̄_1 and the other vectors cannot belong to Ω_1 anymore as they are members of Ω_0. □

Remark 5.13. A slight modification of Algorithm 1 gives a tool for finding a self-dual stack filter corresponding to a given rank selection vector r, if such a filter exists. The coefficient vector A is found from r by the inverse formula of (3.1) [10]:

A_{N-k} = C(N, N-k) Σ_{i=1}^{k} r_i,  k = 1, 2, ..., N.    (5.56)

Then in Algorithm 1 we have a constraint for every A_i given by (5.56).
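A sketch of the inverse formula (5.56) (function name ours); with r = (0, 1, 0, 0, 0) it recovers the coefficient vector (5.33) of Example 5.6:

```python
from math import comb
from fractions import Fraction

def A_from_r(r):
    """Coefficients from rank selection probabilities via the inverse of
    (3.1): A_{N-k} = C(N, N-k) * (r_1 + ... + r_k), k = 1, ..., N."""
    N = len(r)
    A = [0] * (N + 1)            # A_N = 0 by (BC2)
    total = Fraction(0)
    for k in range(1, N + 1):
        total += Fraction(r[k - 1])
        A[N - k] = comb(N, N - k) * total
    return A                      # A_0 comes out as 1 when the r_i sum to 1
```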

5.7. On the optimal choice of the minimal elements of Ω_1

In Steps 4 and 8 of Algorithm 1 the task is to choose d binary vectors x_1, x_2, ..., x_d to be minimal elements of the upper ideal Ω_1. If Ω_1 = ∅, that is, we have not yet added any minimal elements to Ω_1, the chosen vectors should satisfy x_i ≰ x̄_j for all i ≠ j; otherwise Ω_0 ∩ Ω_1 ≠ ∅. If Ω_1 ≠ ∅, then in addition to the requirement that x_i ≰ x̄_j for all i ≠ j, a chosen vector x_i has to satisfy g(x_i) = 0 and g^D(x_i) = 1, where g(·) is the disjunction of the elementary conjunctions of the vectors that belong to Ω_1. Also, if there are structural constraints, it must hold for all chosen vectors x_i that x_i ≰ y, where y ∈ Γ_0. In addition, the vectors must be chosen in such a way that all vectors in Γ_1 are chosen to Ω_1.

We need the following properties of binary vectors.

Proposition 5.14. For two vectors x = (x_1, x_2, ..., x_N) and y = (y_1, y_2, ..., y_N) it holds x ≰ ȳ and y ≰ x̄ if and only if there exists an entry i such that x_i = y_i = 1.

Proof. Assume first that there exists an i such that x_i = y_i = 1. This means that for x̄ = (x̄_1, x̄_2, ..., x̄_N) and ȳ = (ȳ_1, ȳ_2, ..., ȳ_N) it holds x̄_i = ȳ_i = 0, and thus x ≰ ȳ and y ≰ x̄. Assume then that x ≰ ȳ and y ≰ x̄. As y ≰ x̄, there must exist an i such that y_i = 1 and x̄_i = 0. But then x_i = 1, so there exists an i such that x_i = y_i = 1. □
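Proposition 5.14 is easy to confirm exhaustively for small N; the helper functions below use our own naming.

```python
from itertools import product

def leq(a, b):
    # componentwise partial order on binary vectors: a <= b
    return all(u <= v for u, v in zip(a, b))

def comp(a):
    # componentwise complement a-bar
    return tuple(1 - u for u in a)

def condition_5_14(x, y):
    # the condition of Proposition 5.14: x is not below the complement
    # of y, and y is not below the complement of x
    return (not leq(x, comp(y))) and (not leq(y, comp(x)))
```

An exhaustive scan over all pairs of binary vectors of length 5 confirms that the condition holds exactly when x and y share a 1 in some position.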

Proposition 5.15. The maximum cardinality of a set Θ of binary vectors of length N and Hamming weight i, 1 ≤ i ≤ ⌊(N-1)/2⌋, such that for every pair of vectors x, y ∈ Θ it holds x ≰ ȳ and y ≰ x̄, is $\binom{N-1}{i-1}$. The set of maximal cardinality is formed by constructing all binary vectors of Hamming weight i with x_j = 1 for some fixed 1 ≤ j ≤ N.

Two proofs of Proposition 5.15 can be found in [2]. The original statement appeared in a paper by Erdős, Ko and Rado [7], which has turned out to be a milestone in the theory of extremal set systems.
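The Erdős–Ko–Rado bound of Proposition 5.15 can likewise be confirmed by brute force for small parameters. The function below is our own illustration; it is exponential in the number of weight-i vectors and therefore usable only for tiny N.

```python
from itertools import combinations

def max_pairwise_intersecting(N, i):
    # Brute force: size of the largest family of weight-i binary vectors
    # of length N in which every two members share a position with a 1.
    # Each vector is represented as the i-subset of {0, ..., N-1} where
    # its ones lie; sharing a 1 is then a non-empty set intersection.
    sets = [frozenset(c) for c in combinations(range(N), i)]
    best = 0
    # enumerate all subfamilies via bitmasks
    for mask in range(1 << len(sets)):
        fam = [s for b, s in enumerate(sets) if mask >> b & 1]
        if all(a & b for a, b in combinations(fam, 2)):
            best = max(best, len(fam))
    return best
```

For weight i = 2 and N = 5 or N = 6 the maximum is C(N-1, 1), achieved by the "star" of all pairs through one fixed index, in agreement with Proposition 5.15.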

By Proposition 5.15, if we want to guarantee that we can choose as many vectors as necessary to Ω_1, we should fix an index j and let x_j = 1 in all chosen vectors. By doing this, Proposition 5.14 implies that Ω_1 ∩ Ω̄_1 = ∅, as it should be. Usually, if there are no structural constraints, it is sensible to choose the index corresponding to the sample that we are estimating. The chosen index is called the principal index.

Another problem is how to choose the vectors in level k in such a way that the levels l, l > k, contain as few vectors as possible. It is clear that for every vector x_i in level k there exist N - k vectors y_j in level k + 1 such that x_i ≤ y_j. If we choose a vector x_i to be a member of Ω_1, then by (2.6) all these N - k vectors in level k + 1 are members of Ω_1 as well. This means that we should choose the vectors in level k in such a way that we obtain the same vectors as often as possible when we form the N - k vectors corresponding to each of the chosen vectors; otherwise the number of level-(k + 1) vectors forced into Ω_1 is not minimized.

By the distance of the vectors x = (x_1, x_2, ..., x_N) and y = (y_1, y_2, ..., y_N) we mean the number of indexes i where x_i ≠ y_i. It is clear that the distance of two vectors in the same level is at least 2. If we choose two vectors x and y from level k with distance two, there is exactly one vector z in level k + 1 such that x ≤ z and y ≤ z, namely the vector z = (x_1 + y_1, x_2 + y_2, ..., x_N + y_N), where + denotes OR. It is therefore wise to choose a vector x that has the maximum number of vectors y in the same level, already in Ω_1, at distance 2 from x. If there is more than one such vector, some tie-breaking rule must be applied. In this case, if there is an index in which the vectors in Ω_1 seldom have a 1, we break the tie in favor of a vector that has a 1 in this index, if such a vector exists. Often it is recommended that the vectors be chosen, if possible, to be symmetric around the principal index.
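The selection scheme described above can be sketched as a greedy procedure. The function below is a simplified illustration of our own (it ignores the structural constraints and the symmetry tie-break): every chosen weight-k vector has a 1 at the principal index, as Proposition 5.15 suggests, and each new vector is picked to maximize the number of already chosen vectors at Hamming distance 2, so that the chosen vectors share as many weight-(k + 1) upper neighbours as possible.

```python
from itertools import combinations

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def choose_minimal_elements(N, k, d, principal):
    # greedy sketch: choose d weight-k vectors, all with a 1 at the
    # principal index, preferring candidates with many distance-2
    # neighbours among the vectors already chosen
    candidates = []
    for ones in combinations(range(N), k):
        if principal in ones:
            candidates.append(tuple(1 if i in ones else 0 for i in range(N)))
    chosen = []
    while len(chosen) < d and candidates:
        best = max(candidates,
                   key=lambda v: sum(hamming(v, c) == 2 for c in chosen))
        chosen.append(best)
        candidates.remove(best)
    return chosen

def upper_neighbours(vs, N):
    # distinct weight-(k+1) vectors forced into the upper ideal
    out = set()
    for v in vs:
        for i in range(N):
            if v[i] == 0:
                out.add(v[:i] + (1,) + v[i + 1:])
    return out
```

With N = 7, k = 2 and three chosen vectors through the principal index, the chosen vectors force only 12 distinct level-3 vectors into Ω_1, instead of the 15 that three vectors with no shared upper neighbours would force.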

5.8. Some examples of the optimal filters under constraints

In this section we give some examples illustrating the use of Algorithm 1 in different situations, together with an important proposition which makes finding the optimal self-dual filter easy in the case where there are only structural constraints.


Example 5.16. In the 1-D case it is often desirable to preserve pulses of length 2. (By a pulse of length L we mean any run of L 0's or L 1's.) Assume that the real-valued input of the stack filter is X = (X_{-K}, ..., X_{-1}, X_0, X_1, ..., X_K) and the binary-valued input is x = (x_{-K}, ..., x_{-1}, x_0, x_1, ..., x_K). Now we have the following structural constraints:

$$\Gamma_1 = \{(\underbrace{0,\ldots,0}_{K-1\ \text{times}},1,1,\underbrace{0,\ldots,0}_{K\ \text{times}}),\ (\underbrace{0,\ldots,0}_{K\ \text{times}},1,1,\underbrace{0,\ldots,0}_{K-1\ \text{times}})\}, \qquad (5.57)$$

$$\Gamma_0 = \{(\underbrace{1,\ldots,1}_{K-1\ \text{times}},0,0,\underbrace{1,\ldots,1}_{K\ \text{times}}),\ (\underbrace{1,\ldots,1}_{K\ \text{times}},0,0,\underbrace{1,\ldots,1}_{K-1\ \text{times}})\}. \qquad (5.58)$$

The filter is now a self-dual stack filter, because x ∈ Γ_1 implies x̄ ∈ Γ_0, and x ∈ Γ_0 implies x̄ ∈ Γ_1. Thus, the optimal stack filter can be obtained by using Algorithm 1 with the constraints A_2 ≤ $\binom{N}{2}$ - 2 and A_{N-2} ≥ 2. In Step 4 the two chosen vectors are naturally the vectors in Γ_1. Algorithm 1 gives that the optimal stack filter which preserves pulses of length 2 is, for binary-valued input signals,

$$f(x) = (f_{\mathrm{med}}(x) + x_{-1}x_0 + x_0x_1)(x_{-1} + x_0)(x_0 + x_1). \qquad (5.59)$$

Thus, according to Corollary 5.8, the optimal stack filter which preserves pulses of length 2 is, for real-valued input signals,

$$S_f(X) = \mathrm{MIN}\{\mathrm{MAX}\{\mathrm{MED}\{X\},\mathrm{MIN}\{X_{-1},X_0\},\mathrm{MIN}\{X_0,X_1\}\},\mathrm{MAX}\{X_{-1},X_0\},\mathrm{MAX}\{X_0,X_1\}\}. \qquad (5.60)$$

An interesting observation is that this optimal filter can be proved to be the optimal weighted median filter given in [ 201.
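For K = 2 (a 5-point window) the filter (5.60) can be written out directly. The following sketch applies it to the interior of a 1-D signal and contrasts it with the plain 5-point median; the boundary handling (leaving edge samples untouched) is our own simplification.

```python
# The real-valued filter (5.60) for window length N = 5 (K = 2):
# MED is the standard 5-point median, and X_{-1}, X_0, X_1 are the
# three central samples of the sliding window.
def median5(w):
    return sorted(w)[2]

def pulse_preserving_filter(window):
    xm1, x0, x1 = window[1], window[2], window[3]
    inner = max(median5(window), min(xm1, x0), min(x0, x1))
    return min(inner, max(xm1, x0), max(x0, x1))

def apply_filter(signal, filt, N=5):
    K = N // 2
    # interior samples only; boundary samples are left untouched here
    out = list(signal)
    for i in range(K, len(signal) - K):
        out[i] = filt(signal[i - K:i + K + 1])
    return out
```

On a signal containing a length-2 pulse, this filter reproduces the pulse exactly while still removing an isolated impulse, whereas the plain median removes both.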

When there are only structural constraints, the Boolean function of the optimal self-dual stack filter is easily obtained by the following proposition.

Proposition 5.17 ([21]). Assume that there are only structural constraints and that the set of structural constraints Γ is in its irredundant form. Then the Boolean function corresponding to the optimal self-dual stack filter equals

$$f(x) = \left( \sum_{i=1}^{p} f_i(x) + f_{\mathrm{med}}(x) \right) \prod_{j=1}^{p} f_j^D(x), \qquad (5.61)$$

where f_med(·) is the Boolean function of the N-point standard median filter, f_i(·) is the elementary conjunction of an element of Γ_1, f_j^D(·) is the dual of f_j(·), and p is the number of elements in the set Γ_1.

Proof. The idea of Algorithm 1 is to add as few minimal elements as possible. If there are only structural constraints, the vectors in Γ_1 are the only minimal elements that have to be added. This proves Proposition 5.17. □
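Proposition 5.17 suggests a direct construction: form the disjunction of the elementary conjunctions of the Γ_1 vectors together with f_med, and multiply by their duals. The sketch below (helper names are ours) builds f this way; for the length-2 pulse constraints of Example 5.16 with N = 5 it coincides with (5.59) and is self-dual.

```python
from itertools import product

def stack_from_constraints(gamma1, N, fmed):
    # Build the Boolean function of (5.61):
    # f(x) = (sum of elementary conjunctions + f_med(x)) * product of
    # their duals. The conjunction of a vector in Gamma_1 is the AND of
    # the variables at its 1-positions; its dual is the OR over the
    # same positions.
    supports = [[i for i in range(N) if g[i] == 1] for g in gamma1]
    def f(x):
        conj = any(all(x[i] for i in s) for s in supports)
        dual = all(any(x[i] for i in s) for s in supports)
        return int((fmed(x) or conj) and dual)
    return f

N = 5
fmed = lambda x: int(sum(x) >= 3)
gamma1 = [(0, 1, 1, 0, 0), (0, 0, 1, 1, 0)]   # length-2 pulses at the centre
f = stack_from_constraints(gamma1, N, fmed)
```

Over all 32 binary inputs, f agrees with the closed form (5.59), preserves both Γ_1 vectors, and satisfies the self-duality condition f(x) + f(x̄) = 1.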

Example 5.18. Consider a 2-D 3 × 3 window where the structures to be preserved are the four "basic" straight lines. In the following the structural constraints are presented before indexing the variables:

$$\Gamma_1 = \left\{ \begin{pmatrix} 0&1&0\\ 0&1&0\\ 0&1&0 \end{pmatrix}, \begin{pmatrix} 0&0&0\\ 1&1&1\\ 0&0&0 \end{pmatrix}, \begin{pmatrix} 0&0&1\\ 0&1&0\\ 1&0&0 \end{pmatrix}, \begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{pmatrix} \right\}, \qquad (5.62)$$

$$\Gamma_0 = \left\{ \begin{pmatrix} 1&0&1\\ 1&0&1\\ 1&0&1 \end{pmatrix}, \begin{pmatrix} 1&1&1\\ 0&0&0\\ 1&1&1 \end{pmatrix}, \begin{pmatrix} 1&1&0\\ 1&0&1\\ 0&1&1 \end{pmatrix}, \begin{pmatrix} 0&1&1\\ 1&0&1\\ 1&1&0 \end{pmatrix} \right\}. \qquad (5.63)$$

When we index the variables in the following manner

$$\begin{matrix} (-1,1)&(0,1)&(1,1)\\ (-1,0)&(0,0)&(1,0)\\ (-1,-1)&(0,-1)&(1,-1) \end{matrix} \qquad (5.64)$$

we obtain the Boolean function defining the optimal filter by Proposition 5.17 as

$$\begin{aligned} f(x) = {} & (f_{\mathrm{med}}(x) + x_{(0,1)}x_{(0,0)}x_{(0,-1)} + x_{(-1,0)}x_{(0,0)}x_{(1,0)} + x_{(-1,-1)}x_{(0,0)}x_{(1,1)} + x_{(-1,1)}x_{(0,0)}x_{(1,-1)}) \\ & \times (x_{(0,1)} + x_{(0,0)} + x_{(0,-1)})(x_{(-1,0)} + x_{(0,0)} + x_{(1,0)}) \\ & \times (x_{(-1,-1)} + x_{(0,0)} + x_{(1,1)})(x_{(-1,1)} + x_{(0,0)} + x_{(1,-1)}). \end{aligned} \qquad (5.65)$$
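The 2-D filter (5.65) can be checked in the same way as the 1-D case. The sketch below encodes the four line masks with the indexing (5.64); the dictionary representation of the window is our own convention.

```python
from itertools import product

# Window cells are indexed (dx, dy) as in (5.64), top row first.
CELLS = [(dx, dy) for dy in (1, 0, -1) for dx in (-1, 0, 1)]
LINES = [
    [(0, 1), (0, 0), (0, -1)],      # vertical line
    [(-1, 0), (0, 0), (1, 0)],      # horizontal line
    [(-1, -1), (0, 0), (1, 1)],     # diagonal line
    [(-1, 1), (0, 0), (1, -1)],     # anti-diagonal line
]

def f(window):
    # the Boolean function (5.65); window maps (dx, dy) -> 0/1
    fmed = sum(window.values()) >= 5          # 9-point median
    conj = any(all(window[c] for c in line) for line in LINES)
    dual = all(any(window[c] for c in line) for line in LINES)
    return int((fmed or conj) and dual)
```

The filter maps each line mask of Γ_1 to 1 and each complement mask of Γ_0 to 0, and it is self-dual over all 512 binary windows, since all four lines pass through the centre cell (0, 0).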

Example 5.19. Let the length of the input window be 7 and let the only structures specified to be preserved be the highly oscillating signals, that is,

$$\Gamma_1 = \{(0,1,0,1,0,1,0)\}, \qquad (5.66)$$

$$\Gamma_0 = \{(1,0,1,0,1,0,1)\}. \qquad (5.67)$$

Assume also that the filter is required to be self-dual and that there is one rank selection constraint: r_6 ≥ 0.05. The structural constraints give by (SC1) and (SC2) the following constraints for the coefficients A_i: A_3 ≤ 34 and A_4 ≥ 1. From the rank selection constraint r_6 ≥ 0.05 we obtain by using (RSC3) A_2 ≤ 3A_1 - 2. We index the variables of the Boolean functions in this example as (x_{-3}, x_{-2}, x_{-1}, x_0, x_1, x_2, x_3). When we use Algorithm 1 for this problem, in Step 4 we have to choose two binary vectors to be minimal elements of Ω_1. The structural constraints forbid all vectors x satisfying x ≤ (1,0,1,0,1,0,1) from being in Ω_1 and force all vectors y, y ≥ (0,1,0,1,0,1,0), to be in Ω_1. By choosing x_1 = (0,1,0,1,0,0,0), also (0,1,0,1,0,1,0) ∈ Ω_1, and so all vectors y ≥ (0,1,0,1,0,1,0) are in Ω_1. We choose the other vector, by using the scheme given in Section 5.7, to be symmetric to x_1, that is, x_2 = (0,0,0,1,0,1,0). This gives g(x) = x_{-2}x_0 + x_0x_2. Now all constraints have been satisfied and therefore the Boolean function of the optimal filter equals

$$f(x) = (f_{\mathrm{med}}(x) + x_{-2}x_0 + x_0x_2)(x_{-2} + x_0)(x_0 + x_2). \qquad (5.68)$$
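All three kinds of constraints in this example can be verified numerically for the filter (5.68). The sketch below evaluates its Boolean function and computes the exact rank selection vector by enumerating all 7! rank assignments; this is our own verification, not part of Algorithm 1.

```python
from itertools import product, permutations

# The filter (5.68) on a 7-sample window; the window (x_{-3}, ..., x_3)
# is represented as a tuple with positions 0..6, so x_{-2} = x[1],
# x_0 = x[3] and x_2 = x[5].
def f(x):
    fmed = sum(x) >= 4                         # 7-point median
    g = (x[1] and x[3]) or (x[3] and x[5])     # x_{-2} x_0 + x_0 x_2
    return int((fmed or g) and (x[1] or x[3]) and (x[3] or x[5]))

def rank_selection(f, N=7):
    # exact r_1, ..., r_N via threshold decomposition over all rankings
    counts = [0] * N
    for ranks in permutations(range(1, N + 1)):
        out = max(k for k in range(1, N + 1)
                  if f(tuple(1 if r >= k else 0 for r in ranks)))
        counts[out - 1] += 1
    return [c / sum(counts) for c in counts]
```

The enumeration confirms that the oscillation (0,1,0,1,0,1,0) is preserved, its complement is mapped to 0, the filter is self-dual, and the rank selection constraint r_6 ≥ 0.05 holds (the exact value is r_6 = 2/21 ≈ 0.095, since the only weight-2 on-set vectors are the two minimal elements).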

6. Conclusions

Stack filters have received a great deal of attention during recent years, and both their deterministic and statistical properties have been investigated. It is possible to design stack filters to meet certain statistical and structural requirements. In this paper we developed a new theory of optimal stack filtering under rank selection and structural constraints. This theory provides the designer with an algorithm for finding an optimal stack filter which produces the best noise attenuation and, at the same time, satisfies the pre-defined rank selection constraints and structural constraints. The algorithm is especially suited to situations where training signals are unavailable.

References

[1] G.R. Arce and R.E. Foster, "Detail preserving rank-order based filters for image processing", IEEE Trans. Acoust. Speech Signal Process., Vol. ASSP-37, January 1988, pp. 83-98.

[2] B. Bollobás, Combinatorics: Set Systems, Hypergraphs, Families of Vectors and Combinatorial Probability, Cambridge University Press, Cambridge, 1986.

[3] E.J. Coyle, J.-H. Lin and M. Gabbouj, "Optimal stack filtering and the estimation and structural approaches to image processing", IEEE Trans. Acoust. Speech Signal Process., Vol. 37, No. 12, December 1989, pp. 2037-2066.

[4] E.J. Coyle and J.-H. Lin, "Stack filters and the mean absolute error criterion", IEEE Trans. Acoust. Speech Signal Process., Vol. 36, No. 1, January 1988, pp. 1244-1254.

[5] H. David, Order Statistics, Wiley, New York, 1981.

[6] P.J. Davis and P. Rabinowitz, Methods of Numerical Integration, Academic Press, New York, 1984.

[7] P. Erdős, C. Ko and R. Rado, "Intersection theorems for systems of finite sets", Q. J. Math. Oxford (2), Vol. 12, 1961, pp. 313-320.

[8] M. Gabbouj and E.J. Coyle, "Minimum mean absolute error stack filtering with structural constraints and goals", IEEE Trans. Acoust. Speech Signal Process., Vol. 38, No. 6, June 1990, pp. 955-968.

[9] L. Koskinen and J. Astola, "Asymptotic behaviour of morphological filters", J. Math. Imaging Vision, to appear.

[10] P. Kuosmanen, J. Astola and S. Agaian, "On rank selection probabilities", IEEE Trans. Signal Process., Vol. 42, No. 11, November 1994, pp. 3255-3258.

[11] J.-H. Lin, T.M. Sellke and E.J. Coyle, "Adaptive stack filtering under the mean absolute error criterion", IEEE Trans. Acoust. Speech Signal Process., Vol. 38, No. 6, June 1990, pp. 938-954.

[12] A. Nieminen, P. Heinonen and Y. Neuvo, "A new class of detail preserving filters for image processing", IEEE Trans. Pattern Anal. Mach. Intell., Vol. PAMI-9, January 1987, pp. 74-90.

[13] R.S. Parrish, "Computing variances and covariances of normal order statistics", Commun. Statist. Simulation Computation, Vol. 21, No. 1, 1992, pp. 71-102.

[14] R.S. Parrish, "Computing expected values of normal order statistics", Commun. Statist. Simulation Computation, Vol. 21, No. 1, 1992, pp. 57-70.

[15] I. Pitas and A.N. Venetsanopoulos, Nonlinear Digital Filters, Kluwer, Boston, MA, 1990.

[16] M.K. Prasad and Y.H. Lee, "Stack filters and selection probabilities", Proc. 1990 Internat. Symp. on Circuits and Systems, New Orleans, LA, 1-3 May 1990, pp. 1747-1750.

[17] A.H. Stroud and D. Secrest, Gaussian Integration Formulas, Prentice-Hall, Englewood Cliffs, NJ, 1969.

[18] P. Wendt, E.J. Coyle and N.C. Gallagher, "Stack filters", IEEE Trans. Acoust. Speech Signal Process., Vol. 34, August 1986, pp. 898-911.

[19] O. Yli-Harja, J. Astola and Y. Neuvo, "Analysis of the properties of median and weighted median filters using threshold logic and stack filter representation", IEEE Trans. Signal Process., Vol. 39, February 1991, pp. 395-410.

[20] R. Yang, L. Yin, M. Gabbouj, J. Astola and Y. Neuvo, "Optimal weighted median filters under the structural constraints", IEEE Trans. Signal Process., submitted December 1992.

[21] L. Yin, "Optimal stack filters under the structural constraints", IEEE Trans. Signal Process., submitted November 1993.

[22] P.-T. Yu and E.J. Coyle, "On the existence and design of the best stack filter based on associative memory", IEEE Trans. Circuits Systems, Vol. 39, No. 3, March 1992, pp. 171-184.