

944 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 53, NO. 4, APRIL 2006

Complete Stability of Cellular Neural Networks With Time-Varying Delays

Zhigang Zeng and Jun Wang, Senior Member, IEEE

Abstract—In this paper, the complete stability of cellular neural networks with time-varying delays is analyzed using the induction method and the contraction mapping principle. Delay-dependent and delay-independent conditions are obtained for locally stable equilibrium points to be located anywhere, which differ from the existing results on complete stability, where the existence of equilibrium points in the saturation region is necessary for complete stability and locally stable equilibrium points can be in the saturation region only. In addition, some existing stability results in the literature are special cases of a new result herein. Simulation results are also discussed using two illustrative examples.

Index Terms—Cellular neural networks (CNNs), complete stability, contraction mapping principle, time-varying delays, mathematical induction.

I. INTRODUCTION

IN RECENT years, cellular neural networks (CNNs) have been one of the most investigated paradigms for neural information processing. In a wide range of applications, CNNs are required to exhibit a large number of stable equilibrium points [1]–[11] instead of a single globally stable equilibrium point. If each trajectory of a neurodynamic system converges toward an equilibrium point (a stationary state), possibly within a set of many equilibrium points, then the neurodynamic system is called completely stable or multistable [12]–[21].

The CNNs with opposite-sign templates have been successfully applied to connected component detection (CCD) in feature extraction. It is known that CNNs with nonsymmetric templates exhibit various dynamical phenomena such as periodic orbits or chaotic attractors. The complete stability of this type of CNNs has been studied in [1]–[5].

Despite the apparent simplicity of CNNs with symmetric templates, there are fundamental and somewhat unexpected difficulties in analyzing their complete stability using the classic LaSalle principle. Some recent studies [10], [11] introduced a new method to analyze the complete stability of symmetric CNNs.

Manuscript received October 19, 2004; revised January 25, 2005 and April 25, 2005. This work was supported by the Hong Kong Research Grants Council under Grant CUHK4165/03E, by the Natural Science Foundation of China under Grant 60405002, and by the China Postdoctoral Science Foundation under Grant 2004035579. This paper was recommended by Associate Editor C. T. Lin.

Z. G. Zeng is with the School of Automation, Wuhan University of Technology, Wuhan 430070, China, and also with the Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong (e-mail: [email protected]).

J. Wang is with the Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong (e-mail: [email protected]).

Digital Object Identifier 10.1109/TCSI.2005.859616

Delayed CNNs (DCNNs) have found interesting applications in different areas such as classification of patterns and reconstruction of moving images. In these applications, it is essential that the DCNNs involved are completely stable. DCNNs may become unstable or exhibit periodic oscillations [6].

So far, only a few conditions are available for ascertaining the complete stability of DCNNs with nonsymmetric templates [12]–[16]. Recently, many new sufficient conditions for testing the exponential stability of recurrent neural networks with time delays have been proposed [17], [18], [20], [21]. Note that global stability implies complete stability, but not vice versa.

In addition, the Lyapunov method is used in most of the existing studies concerned with criteria for the global stability of neural networks. For the analysis of complete stability, the Lyapunov method is no longer effective because of the multiplicity of attractors.

In this paper, the complete stability of DCNNs is analyzed using the induction method and the contraction mapping principle. Two sufficient conditions are obtained that allow locally stable equilibrium points to be located anywhere. In addition, some existing stability results in the literature are special cases of a new result herein.

The remaining part of this paper consists of five sections. In Section II, relevant background information is given. In Sections III and IV, delay-dependent and delay-independent conditions are proven, respectively, using the induction method and the contraction mapping principle. In Section V, two illustrative examples are provided with simulation results. Finally, concluding remarks are given in Section VI.

II. BACKGROUND INFORMATION

Consider the DCNN governed by the following normalized equations:

$$\dot{x}(t) = -x(t) + A f(x(t)) + B f(x(t - \tau(t))) + u \qquad (1)$$

where $x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n$ is the state vector, $A = (a_{ij})_{n \times n}$ and $B = (b_{ij})_{n \times n}$ are connection weight matrices, $u \in \mathbb{R}^n$ is the external input vector, $f(x) = (f(x_1), \ldots, f(x_n))^T$ with $f(s) = (|s + 1| - |s - 1|)/2$, and $\tau(t)$ is the time-varying delay that satisfies $0 \le \tau(t) \le \tau$ and $\dot{\tau}(t) \le \mu < 1$ ($\tau$ and $\mu$ are constant) for all $t \ge 0$.
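As an aside, model (1) can be explored numerically. The sketch below is a minimal forward-Euler integrator assuming the standard piecewise-linear CNN output $f(s) = (|s+1| - |s-1|)/2$, a constant delay, and a constant initial history; the function name and all parameters are illustrative, not from the paper.

```python
import numpy as np

def f(x):
    # Standard piecewise-linear CNN output: f(s) = (|s + 1| - |s - 1|) / 2
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate_dcnn(A, B, u, tau, x0, T=30.0, dt=0.01):
    """Forward-Euler integration of dx/dt = -x + A f(x(t)) + B f(x(t - tau)) + u,
    starting from the constant initial history x(s) = x0 for s in [-tau, 0]."""
    n = len(u)
    steps = int(T / dt)
    d = max(1, int(round(tau / dt)))          # delay measured in time steps
    X = np.empty((steps + d + 1, n))
    X[: d + 1] = np.asarray(x0, dtype=float)  # constant initial history
    for k in range(d, d + steps):
        X[k + 1] = X[k] + dt * (-X[k] + A @ f(X[k]) + B @ f(X[k - d]) + u)
    return X[-1]
```

For instance, with $A = 2I$, $B = 0$, and $u = 0$, a trajectory started at $(0.5, -0.5)$ settles at the saturated equilibrium $(2, -2)$.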

1057-7122/$20.00 © 2006 IEEE


ZENG AND WANG: COMPLETE STABILITY OF CNNs 945

In particular, when $\tau(t) \equiv 0$, the DCNN degenerates into a CNN. By extending the CNN model, some recent studies [10], [11] considered

(2)

where the activation is a continuous, nondecreasing, and bounded piecewise-linear function.

Let $C([-\tau, 0]; \mathbb{R}^n)$ be the space of continuous functions mapping $[-\tau, 0]$ into $\mathbb{R}^n$, with norm defined by $\|\phi\| = \sup_{-\tau \le s \le 0} |\phi(s)|$, where $|\cdot|$ denotes the vector norm on $\mathbb{R}^n$. The initial condition of DCNN (1) is assumed to be $x(s) = \phi(s)$ for $s \in [-\tau, 0]$, where $\phi \in C([-\tau, 0]; \mathbb{R}^n)$. Denote by $x(t; \phi)$ the state of DCNN (1) with initial condition $\phi$; that is, $x(t; \phi)$ is continuous, satisfies (1) for $t \ge 0$, and $x(s; \phi) = \phi(s)$ for $s \in [-\tau, 0]$.

For checking the complete stability of neural networks with symmetric templates, the following two theorems exist in the literature.

Forti–Tesi Theorem [10]: The CNN described by dynamic equation (1) is completely stable if the connection weight matrix is symmetric.

Forti Theorem [11]: The neural network described by dynamic equation (2) is completely stable if the connection weight matrix is symmetric.

For checking the complete stability of DCNNs (or CNNs) with nonsymmetric templates, the following four theorems exist in the literature.

Gilli Theorem [12]: The DCNN described by dynamic equation (1) is completely stable if the matrix is row-sum dominant, where

Takahashi–Chua Theorem [13]: The CNN described by dynamic equation (1) is completely stable if the comparison matrix of is a nonsingular M-matrix, where denotes the identity matrix.

Takahashi Theorem [14]: The DCNN described by dynamic equation (1) is completely stable if is a nonsingular M-matrix.

Takahashi–Nishi Theorem [15]: The DCNN described by dynamic equation (1) is completely stable if is a nonsingular M-matrix, where

It is important to note that under the condition , stable equilibrium points can only be in the saturation region . Therefore, the existence of an equilibrium point in the saturation region is a necessary condition for complete stability of a DCNN in most existing results. When , stable equilibrium points can rest in the region . The result on complete stability for is still lacking.

Zeng–Wang–Liao Theorem [16]: The DCNN described by dynamic equation (1) is globally exponentially stable (hence also completely stable), if is a nonsingular M-matrix, where

All of the above sufficient conditions are delay independent. In addition, the external inputs are cancelled in the above existing reports. Since the locations of the equilibria of neural networks also depend on the inputs, ignoring the external inputs in stability analysis may lose important information.
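The dependence of equilibria on the inputs can be made concrete: in a fully saturated region where every output equals $+1$, any equilibrium of (1) must satisfy $x = (A + B)\mathbf{1} + u$, so the equilibrium location moves one-for-one with the external input. A small numeric sketch, with invented matrices (not from any example in the paper):

```python
import numpy as np

# Hypothetical templates -- for illustration only:
A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[0.1, 0.0], [0.0, 0.1]])
ones = np.ones(2)                      # saturated output vector f(x) = (1, 1)

def saturated_equilibrium(u):
    """Candidate equilibrium of (1) when both outputs saturate at +1:
    0 = -x + A f(x) + B f(x) + u  =>  x = (A + B) @ 1 + u."""
    x = (A + B) @ ones + u
    valid = bool(np.all(x >= 1.0))     # must actually lie in the saturation region
    return x, valid

x0, ok0 = saturated_equilibrium(np.zeros(2))
x1, ok1 = saturated_equilibrium(np.array([0.5, -0.3]))
```

Here the input $(0.5, -0.3)$ shifts the saturated equilibrium from $(2.1, 2.1)$ to $(2.6, 1.8)$, illustrating why inputs cannot be ignored when locating equilibria.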

III. DELAY-DEPENDENT CONDITION

Denote five index sets: . In this subsection, we always assume . Let

Theorem 1: If is a nonsingular M-matrix, then DCNN (1) is completely stable.
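The hypothesis of Theorem 1 can be checked mechanically. The sketch below tests the nonsingular M-matrix property via the standard characterization: a Z-matrix (all off-diagonal entries nonpositive) is a nonsingular M-matrix if and only if all of its leading principal minors are positive. The function name is illustrative.

```python
import numpy as np

def is_nonsingular_m_matrix(M, tol=1e-12):
    """True iff M is a Z-matrix whose leading principal minors are all positive,
    i.e., a nonsingular M-matrix."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    off_diagonal = M - np.diag(np.diag(M))
    if np.any(off_diagonal > tol):    # off-diagonal entries must be <= 0
        return False
    # Z-matrix + positive leading principal minors <=> nonsingular M-matrix
    return all(np.linalg.det(M[:k, :k]) > tol for k in range(1, n + 1))
```

For example, $\begin{pmatrix}2 & -1\\ -1 & 2\end{pmatrix}$ passes, while $\begin{pmatrix}1 & -2\\ -2 & 1\end{pmatrix}$ fails because its determinant is negative.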

A. Procedure for Proofs

The new method for proving the complete stability in this paper includes the following three steps.

The first step: using mathematical induction, we will prove that for the state of DCNN (1), there exists a sequence of numbers such that , or , or .

The second step: using mathematical induction and the contraction mapping principle, we will prove that, under the given conditions, when .

The third step: using the comparative method, we will prove that, under the given conditions, for the state of DCNN (1) and a sufficiently small , there exist a vector and an integer such that . Hence, under the given conditions, DCNN (1) is completely stable.
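The contraction mapping principle invoked in the second step is the Banach fixed-point theorem: a map with Lipschitz constant strictly less than one on a complete metric space has a unique fixed point, and iterating the map from any starting point converges to it. A minimal one-dimensional sketch (names illustrative):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=10_000):
    """Banach fixed-point iteration x_{k+1} = g(x_k); converges when g is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge; g may not be a contraction")

# g(x) = 0.5 * cos(x) is a contraction on R with Lipschitz constant 1/2
x_star = fixed_point(lambda x: 0.5 * math.cos(x), x0=0.0)
```

The returned `x_star` satisfies the fixed-point equation $x = \tfrac{1}{2}\cos x$ to within the tolerance, regardless of the starting point.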


B. Range of Activation Function

In the first step of the proof, the range of the activation function needs to be determined. In this subsection, we will determine the range of the activation function in four lemmas and two propositions by using the induction method.

For , let

when , let

For an integer , let

(4)

(5)

(6)

(7)

where (8)–(11), shown at the bottom of the page, hold. When , we obtain (12) and (13), shown at the bottom of the next page.

Lemma 1: For , there exists such that , one of the following three cases holds:

(14)

(15)

Proof: See Appendix.

Lemma 2: For , there exists such that , one of the following three cases holds:

(16)

(17)

(8)

(9)

(10)

(11)


Proof: See Appendix.

Lemma 3: For , there exists such that , one of the following three cases holds:

(18)

(19)

Proof: See Appendix.

Lemma 4: For , there exists such that , one of the following three cases holds:

(20)

(21)

Proof: See Appendix.

By using the induction method, from Lemmas 1–4, for an integer , we have the following proposition.

Proposition 1: For an integer , there exists such that , one of the following three cases holds:

(22)

(23)

Proposition 2: There exists such that

or

Proof: , if , from (1)

Hence, is monotone increasing. Thus, there exists such that .

, if , from (1)

Hence, is monotone decreasing. Thus, there exists such that .

C. Convergence of Sequence

Lemma 5: If is a nonsingular M-matrix, then there exist such that when , and .

Proof: See Appendix.

D. Proof of Theorem 1

Proof: According to Propositions 1 and 2, for any state of DCNN (1), there exist an integer and such that

From (22) and Lemma 5, for any sufficiently small , there exist an integer and a vector such that . From , we have

Hence

Similarly, from , we have

(12)

(13)


Let ; then . Hence, DCNN (1) is completely stable.

IV. DELAY-INDEPENDENT CONDITION

Denote two more index sets: . In this subsection, we assume . For , let

For an integer , let

(24)

(25)

where (26)–(29), shown at the bottom of the page, hold.

Proposition 3: For an integer , there exists such that , one of the following three cases holds:

Proof: In the proofs of Lemmas 1–4, if , and are replaced by , and (or ), respectively, then Lemmas 1–4 still hold. By using the induction method, Proposition 3 holds.

Let

(30)

Theorem 2: If is a nonsingular M-matrix, then DCNN (1) is completely stable.

By using Propositions 2 and 3, Theorem 2 can be proven similarly to the proof of Theorem 1.

Remark 1: If , then a stable equilibrium point can rest in the saturation region only (i.e., ). Therefore, the existence of an equilibrium point in the saturation region is necessary for the complete stability conditions of DCNNs in the existing results. However, since Theorem 2 allows that for some , according to Theorem 2, locally stable equilibrium points can be located anywhere.

(26)

(27)

(28)

(29)


Fig. 1. Transient behavior of x in Example 1.

Remark 2: When and is an empty set, Theorem 2 and the Takahashi Theorem are identical. Hence, the Takahashi Theorem is a special case of Theorem 2. Since a CNN can be regarded as a special case of a DCNN, the Takahashi–Chua Theorem corresponds to the special case of Theorem 2 (when ). Because a row-sum dominant matrix with nonpositive off-diagonal elements is an M-matrix, Theorem 2 is also a generalization of the Gilli Theorem.

Remark 3: When is an empty set, Theorem 2 and the Takahashi–Nishi Theorem are identical. Hence, the Takahashi–Nishi Theorem is a special case of Theorem 2. When , is an identity matrix. Hence, is a nonsingular M-matrix. According to Theorem 2, DCNN (1) is completely stable. In this case, may be equal to 1. Moreover, if , DCNN (1) is completely stable without any other conditions.

V. NUMERICAL EXAMPLES

Example 1: Consider a DCNN, where

Choose . From (3)

which is a nonsingular M-matrix. According to Theorem 1, this DCNN is completely stable.

Since and is empty, Theorem 2 cannot be used to ascertain the complete stability of this DCNN. The results in [10], [11] can deal with the case that . But since the DCNN in this example has a time delay, the conditions in [10], [11] cannot be used to ascertain its complete stability. In addition, since , the conditions in [12]–[15] cannot be used to ascertain the complete stability of this DCNN. Simulation results are depicted in Figs. 1–4, where all the trajectories from 36 random initial points converge to one of two equilibrium points at and ; i.e., the DCNN is bistable.

Fig. 2. Transient behavior of x in Example 1.

Fig. 3. Transient behavior of (t; x ; x ) in Example 1.
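Since the example's templates are not reproduced in this transcript, the following sketch only mimics the shape of the experiment: it integrates a hypothetical two-neuron DCNN (the matrices `A`, `B`, input `u`, and delay below are invented, not the paper's) from 36 random initial points and checks that every trajectory settles into a small set of saturated stable equilibria.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # CNN output function

# Hypothetical templates -- NOT the matrices used in Example 1:
A = np.array([[2.0, 0.1], [0.1, 2.0]])
B = np.array([[0.1, 0.0], [0.0, 0.1]])
u = np.zeros(2)
tau, dt, steps = 1.0, 0.01, 4000
d = int(round(tau / dt))               # delay measured in time steps

finals = []
for _ in range(36):                    # 36 random initial points, as in Example 1
    X = np.empty((steps + d + 1, 2))
    X[: d + 1] = rng.uniform(-4.0, 4.0, 2)   # constant random initial history
    for k in range(d, d + steps):      # forward-Euler integration of (1)
        X[k + 1] = X[k] + dt * (-X[k] + A @ f(X[k]) + B @ f(X[k - d]) + u)
    finals.append(tuple(X[-1].round(2)))

# every trajectory should end at one of a few saturated equilibria
assert len(set(finals)) <= 4
```

With these invented templates the network is multistable rather than necessarily bistable; the point is only that every trajectory converges to some equilibrium, which is what complete stability asserts.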

Example 2: Consider a DCNN, where

Choose . From (30)

which is a nonsingular M-matrix. According to Theorem 2, this DCNN is completely stable. However, since , the conditions in [12]–[15] cannot be used to ascertain the complete stability of this DCNN. Since is asymmetric, the conditions in [10], [11] cannot be used to ascertain the complete stability of this DCNN. Furthermore, since , Theorem 1 cannot be used to ascertain the complete stability of this DCNN. Simulation results are depicted in Figs. 5–7, where all the trajectories from 36 random initial points converge to one of two stable equilibrium points and .

Fig. 4. Transient behavior of (x ; x ) in Example 1.

Fig. 5. Transient behavior of x in Example 2.

VI. CONCLUDING REMARKS

In this paper, we have presented analytical results on the complete stability of CNNs with time-varying delays. Using the induction method and the contraction mapping principle, delay-dependent and delay-independent criteria are presented to characterize complete stability; these criteria allow locally stable equilibrium points to be located anywhere, and they differ from and extend the existing results on complete stability.

Fig. 6. Transient behavior of x in Example 2.

Fig. 7. Transient behavior of x in Example 2.

APPENDIX

Proof of Lemma 5: From (8)–(13), we have

(31)

When , from (24) and , we have


Similarly, from (24) and (or ),

When , hence, from (4) and (5)

(32)

When , hence, from (6) and (7)

(33)

Because , if , then . When , we have

If , then ; if , then . Hence, when , we have .

In addition, when . Similarly, when , we have ; when , we have ; when , we have .

Similar to the proofs of (32) and (33),

By using the induction method, , and are monotone increasing on , and are monotone decreasing on .

In addition, since is a nonsingular M-matrix, there exist such that

Let . Then implies that . Let

Let

For any integer and , from (4) and (5), we obtain (34), shown at the bottom of the next page. Similarly, for any integer and , from (6) and (7)

(35)

From (34) and (35)

(36)

In addition, when . The monotonicity of and (36) imply that there exist such that when , and .

Lemma 6: For , if there exists such that

(37)

then when , for

(38)

If there exists such that

(39)

then when , for

(40)

Proof: Let . If , then for . Hence

(41)

where . From (1) and , for

From (37), . From and (41),

i.e., (38) holds. Similarly, (40) holds.


Lemma 7: For , if there exists such that

(42)

then when , there exists such that.

If there exists such that

(43)

then when , there exists such that.

Proof: If , then when , from (1)

Hence, is monotone increasing. Thus, there exists such that .

(34)


If , then when , from (1)

Hence, is monotone decreasing. Thus, there exists such that .

According to Lemmas 6 and 7, in the following proofs of Lemmas 1 and 2, for , we always assume that (38) and (40) hold.

Proof of Lemma 1: , if , then holds obviously.

If , then for , when , from (1), (40), and

Hence , and when , is monotone increasing. Thus, there exists such that ; i.e., (15) holds.

, if , then holds obviously.

If , then for , when , from (1), (38), and

Hence, , and when , is monotone decreasing. Thus, there exists such that ; i.e., (15) holds.

By summarizing the above proof, Lemma 1 is proven.

Proof of Lemma 2: , if , then holds obviously.

If , then for , when , from (1), (40), and

Hence, when , is monotone decreasing. Thus, there exists such that .

, if , then holds obviously.

If , then for , when , from (1), (38), and

Hence, when , is monotone increasing. Thus, there exists such that .

By summarizing the above proof, Lemma 2 is proven.

According to Lemmas 1 and 2, for ,

(14)–(17) hold. Hence

Consequently, if in Lemmas 6 and 7 is replaced by , then (38) and (40) still hold.

Proof of Lemma 3: , if , then holds obviously.

If , then when , from (1), (40), and

Hence, when , is monotone increasing. Thus, there exists such that ; i.e., (19) holds.

, if , then holds obviously.

If , then when , from (1), (38), and

Hence, when , is monotone decreasing. Thus, there exists such that ; i.e., (19) holds.

By summarizing the above proof, Lemma 3 is proven.

Proof of Lemma 4: , if , then holds obviously.

According to Lemmas 1 and 2, for , (14)–(17) hold. Hence

If , then when , from (1), (40), and

Hence, when , is monotone decreasing. Thus, there exists such that .

, if , then holds obviously.

If , then when , from (1), (38), and


Hence, when , is monotone increasing. Thus, there exists such that .

By summarizing the above proof, Lemma 4 is proven.

Assume that when , Proposition 1 holds. Then for

Consequently, if in Lemmas 6 and 7 is replaced by , then (38) and (40) still hold. This implies that Proposition 1 holds for . By induction, Proposition 1 holds for .

REFERENCES

[1] L. O. Chua and T. Roska, "Stability of a class of nonreciprocal cellular neural networks," IEEE Trans. Circuits Syst., vol. 37, no. 12, pp. 1520–1527, Dec. 1990.

[2] F. Zou and J. A. Nossek, "Stability of cellular neural networks with opposite-sign templates," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 38, no. 6, pp. 675–677, Jun. 1991.

[3] M. P. Joy and V. Tavsanoglu, "A new parameter range for the stability of opposite-sign cellular neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 40, no. 2, pp. 204–207, Feb. 1993.

[4] P. P. Civalleri and M. Gilli, "Practical stability criteria for cellular neural networks," Electron. Lett., vol. 33, pp. 970–971, 1997.

[5] C. W. Wu and L. O. Chua, "A more rigorous proof of complete stability of cellular neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 44, no. 2, pp. 370–371, Mar. 1997.

[6] P. P. Civalleri, M. Gilli, and L. Pandolfi, "On stability of cellular neural networks with delay," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 40, no. 1, pp. 157–165, Jan. 1993.

[7] F. C. Sun, Z. Q. Sun, and P. Y. Woo, "Stable neural-network-based adaptive control for sampled-data nonlinear systems," IEEE Trans. Neural Netw., vol. 9, pp. 956–968, 1998.

[8] G. D. Sandre, "Stability of 1-D-CNNs with Dirichlet boundary conditions and global propagation dynamics," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 47, no. 7, pp. 785–792, Jul. 2000.

[9] M. D. Marco, M. Forti, and A. Tesi, "Existence and characterization of limit cycles in nearly symmetric neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 49, no. 7, pp. 979–992, Jul. 2002.

[10] M. Forti and A. Tesi, "A new method to analyze complete stability of PWL cellular neural networks," Int. J. Bifurc. Chaos, vol. 11, pp. 655–676, 2001.

[11] M. Forti, "Some extensions of a new method to analyze complete stability of neural networks," IEEE Trans. Neural Netw., vol. 13, pp. 1230–1238, 2002.

[12] M. Gilli, "Stability of cellular neural networks and delayed cellular neural networks with nonpositive templates and nonmonotonic output functions," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 41, no. 5, pp. 518–528, May 1994.

[13] N. Takahashi and L. O. Chua, "On the complete stability of nonsymmetric cellular neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 45, no. 7, pp. 754–758, Jul. 1998.

[14] N. Takahashi, "A new sufficient condition for complete stability of cellular neural networks with delay," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 47, pp. 793–799, 2000.

[15] N. Takahashi and T. Nishi, "A generalization of some complete stability conditions for cellular neural networks with delay," IEICE Trans. Fund., vol. E85-A, no. 9, pp. 2044–2051, 2002.

[16] Z. G. Zeng, J. Wang, and X. X. Liao, "Global exponential stability of a general class of recurrent neural networks with time-varying delays," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 50, no. 7, pp. 1353–1358, Jul. 2003.

[17] C. C. Hwang, C. J. Cheng, and T. L. Liao, "Globally exponential stability of generalized Cohen–Grossberg neural networks with delays," Phys. Lett. A, vol. 319, pp. 157–166, 2003.

[18] Z. Liu, A. Chen, J. Cao, and L. Huang, "Existence and global exponential stability of periodic solution for BAM neural networks with periodic and time-varying delays," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 50, no. 6, pp. 1162–1173, Jun. 2003.

[19] X. Li, L. Huang, and J. Wu, "Further results on the stability of delayed cellular neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 50, no. 7, pp. 1239–1242, Jul. 2003.

[20] V. Singh, "A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks," IEEE Trans. Neural Netw., vol. 15, pp. 223–225, 2004.

[21] Y. Zhang and K. K. Tan, "Multistability of discrete-time recurrent neural networks with unsaturating piecewise linear activation functions," IEEE Trans. Neural Netw., vol. 15, pp. 329–336, 2004.

Zhigang Zeng received the B.S. degree in mathematics from Hubei Normal University, Huangshi, China, the M.S. degree in ecological mathematics from Hubei University, Hubei, China, and the Ph.D. degree in systems analysis and integration from Huazhong University of Science and Technology, Wuhan, China, in 1993, 1996, and 2003, respectively.

He is a Postdoctoral Research Fellow in the Department of Automation and Computer-Aided Engineering at The Chinese University of Hong Kong. He is also a Professor in the School of Automation, Wuhan University of Technology, China. His current research interests include neural networks and stability analysis of dynamic systems.

Jun Wang (S’89–M’90–SM’93) received the B.S. degree in electrical engineering and the M.S. degree in systems engineering from Dalian University of Technology, Dalian, China, and the Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, OH.

He is a Professor in the Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Hong Kong. He was an Associate Professor at the University of North Dakota, Grand Forks, until 1995. His current research interests include neural networks and their engineering applications.

Dr. Wang is an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS, the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—B: CYBERNETICS, and the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—C: APPLICATIONS AND REVIEWS.