Applied Mathematics and Computation 183 (2006) 1214–1219
Global exponential stability of cellular neural networks with variable delays
Ju H. Park
Robust Control and Nonlinear Dynamics Laboratory, Department of Electrical Engineering, Yeungnam University,
214-1 Dae-Dong, Kyongsan 712-749, Republic of Korea
Abstract
For cellular neural networks with time-varying delays, the problems of determining exponential stability and estimating the exponential convergence rate are investigated by employing the Lyapunov–Krasovskii functional and linear matrix inequality (LMI) technique. A novel stability criterion, which gives information on the delay-dependent property, is derived. Two examples are given to demonstrate the effectiveness of the obtained results.
© 2006 Elsevier Inc. All rights reserved.
Keywords: Neural networks; Exponential stability; LMI; Time-varying delays; Lyapunov–Krasovskii functional
1. Introduction
Time delay is commonly encountered in biological and artificial neural networks [1–6], and its existence is frequently a source of oscillation and instability [7–10]. Therefore, the stability analysis of delayed neural networks has become a focused topic of theoretical and practical importance. This stability issue has also gained increasing attention for its essential role in signal processing, image processing, pattern classification, associative memories, fixed-point computation, and so on. Recently, using various analysis methods, many criteria for the global stability of neural networks with constant or time-varying delays have been presented [11–16].
As is well known, fast convergence of a system is essential for real-time computation, and the exponential convergence rate is generally used to measure the speed of neural computations. Thus, global exponential stability of neural networks with or without delays has also been investigated [17–20] in recent years.
In this paper, we study the exponential stability and estimate the exponential convergence rates of neural networks with time-varying delays. Lyapunov–Krasovskii functionals and the linear matrix inequality (LMI) approach are combined to investigate the problem. A novel delay-dependent criterion is presented in terms of an LMI. The advantage of the proposed approach is that the resulting stability criterion can be used efficiently
doi:10.1016/j.amc.2006.06.046
E-mail address: [email protected]
via existing numerical convex optimization algorithms, such as the interior-point algorithms for solving LMIs [21].
Throughout the paper, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{n \times m}$ is the set of all $n \times m$ real matrices. $I$ denotes the identity matrix of appropriate dimensions. $\|x\|$ denotes the Euclidean norm of a vector $x$. $\lambda_M(\cdot)$ and $\lambda_m(\cdot)$ denote the largest and smallest eigenvalues of a given matrix, respectively. The symbol $\ast$ denotes the elements below the main diagonal of a symmetric block matrix, and $\mathrm{diag}\{\cdot\}$ denotes a block diagonal matrix. For symmetric matrices $X$ and $Y$, the notation $X > Y$ (respectively, $X \ge Y$) means that the matrix $X - Y$ is positive definite (respectively, nonnegative definite).
2. Main results
Consider a continuous-time neural network with time-varying delays described by the following state equations:
$$\dot y_i(t) = -a_i y_i(t) + \sum_{j=1}^{n} w_{ij} f_j(y_j(t)) + \sum_{j=1}^{n} w^1_{ij} f_j(y_j(t - h(t))) + b_i, \quad i = 1, 2, \ldots, n, \qquad (1)$$
or equivalently
$$\dot y(t) = -A y(t) + W f(y(t)) + W_1 f(y(t - h(t))) + b, \qquad (2)$$
where $y(t) = [y_1(t), \ldots, y_n(t)]^T \in \mathbb{R}^n$ is the neuron state vector, $f(y(t)) = [f_1(y_1(t)), \ldots, f_n(y_n(t))]^T \in \mathbb{R}^n$ is the vector of activation functions, $f(y(t - h(t))) = [f_1(y_1(t - h(t))), \ldots, f_n(y_n(t - h(t)))]^T \in \mathbb{R}^n$, $b = [b_1, \ldots, b_n]^T$ is a constant input vector, $A = \mathrm{diag}(a_i)$ is a positive diagonal matrix, $W = (w_{ij})_{n \times n}$ and $W_1 = (w^1_{ij})_{n \times n}$ are the interconnection matrices representing the weight coefficients of the neurons, and the time delay $h(t)$ is a bounded nonnegative function satisfying $0 \le h(t) \le \bar h$; it is assumed that $\dot h(t) \le h_d < 1$.
In this paper, it is assumed that each activation function $f_i$ is nondecreasing, bounded, and globally Lipschitz; that is,
$$0 \le \frac{f_i(\xi_1) - f_i(\xi_2)}{\xi_1 - \xi_2} \le \sigma_i, \quad i = 1, 2, \ldots, n. \qquad (3)$$
Then, by the well-known Brouwer fixed-point theorem [3], one can easily prove that there exists at least one equilibrium point of Eq. (2).
For simplicity in the stability analysis of system (2), we apply the transformation
$$x(\cdot) = y(\cdot) - y^*,$$
where $y^* = (y_1^*, y_2^*, \ldots, y_n^*)^T$ is an equilibrium point of Eq. (2). Under this transformation, system (2) becomes
$$\dot x(t) = -A x(t) + W g(x(t)) + W_1 g(x(t - h(t))), \qquad (4)$$
where $x(t) = [x_1(t)\; x_2(t)\; \cdots\; x_n(t)]^T \in \mathbb{R}^n$ is the state vector of the transformed system, $g(x) = [g_1(x_1), \ldots, g_n(x_n)]^T$, and $g_j(x_j(t)) = f_j(x_j(t) + y_j^*) - f_j(y_j^*)$ with $g_j(0) = 0$ for all $j$. Note that each activation function $g_i(\cdot)$ satisfies the following sector condition:
$$0 \le \frac{g_i(\xi_1) - g_i(\xi_2)}{\xi_1 - \xi_2} \le \sigma_i, \quad i = 1, 2, \ldots, n. \qquad (5)$$
The following definition and lemma will be used in the main result.
Definition 1. The origin of system (4) is said to be exponentially stable if there exist a positive constant $\lambda$ and $c(\lambda) > 0$ such that
$$\|x(t)\| \le c(\lambda) e^{-\lambda t} \sup_{-\bar h \le \theta \le 0} \|x(\theta)\| \quad \forall t > 0,$$
where $\lambda$ is called the convergence rate (or degree) of exponential stability.
Lemma 1 [20]. Suppose that (3) holds; then
$$\int_v^u [g_i(s) - g_i(v)]\, ds \le [u - v][g_i(u) - g_i(v)], \quad i = 1, 2, \ldots, n.$$
We now present a new criterion for exponential stability with rate $\lambda$ of system (4).
Theorem 1. For given $0 \le h(t) \le \bar h$, $\dot h(t) \le h_d < 1$, and $\Sigma = \mathrm{diag}\{\sigma_1, \sigma_2, \ldots, \sigma_n\}$, the equilibrium point of (4) is globally exponentially stable with convergence rate $\lambda$ if there exist positive definite matrices $P$, $Q$, $R$ and a positive diagonal matrix $D = \mathrm{diag}\{d_1, d_2, \ldots, d_n\}$ satisfying the following LMI:
$$\Xi = \begin{bmatrix} \Pi_1 & PW + 4\lambda D & PW_1 \\ \ast & \Pi_2 & DW_1 \\ \ast & \ast & \Pi_3 \end{bmatrix} < 0, \qquad (6)$$
where
$$\Pi_1 = 2\lambda P - PA - A^T P + R,$$
$$\Pi_2 = DW + W^T D + Q - 2DA\Sigma^{-1},$$
$$\Pi_3 = -(1 - h_d)\, e^{-2\lambda \bar h} (Q + \Sigma^{-1} R \Sigma^{-1}).$$
Proof. Consider the Lyapunov functional candidate
$$V = e^{2\lambda t} x^T(t) P x(t) + 2\sum_{i=1}^{n} d_i e^{2\lambda t} \int_0^{x_i(t)} g_i(s)\, ds + \int_{t-h(t)}^{t} e^{2\lambda s} g^T(x(s)) Q g(x(s))\, ds + \int_{t-h(t)}^{t} e^{2\lambda s} x^T(s) R x(s)\, ds. \qquad (7)$$
Let
$$V_1 = e^{2\lambda t} x^T(t) P x(t), \qquad (8)$$
$$V_2 = 2\sum_{i=1}^{n} d_i e^{2\lambda t} \int_0^{x_i(t)} g_i(s)\, ds, \qquad (9)$$
$$V_3 = \int_{t-h(t)}^{t} e^{2\lambda s} g^T(x(s)) Q g(x(s))\, ds, \qquad (10)$$
$$V_4 = \int_{t-h(t)}^{t} e^{2\lambda s} x^T(s) R x(s)\, ds. \qquad (11)$$
Now, let us calculate the time derivatives of the $V_i$ along the trajectory of (4). First, the derivative of $V_1$ is
$$\dot V_1 = 2\lambda e^{2\lambda t} x^T(t) P x(t) + 2 e^{2\lambda t} x^T(t) P \dot x(t) = 2\lambda e^{2\lambda t} x^T(t) P x(t) + 2 e^{2\lambda t} x^T(t) P(-A x(t) + W g(x(t)) + W_1 g(x(t - h(t)))). \qquad (12)$$
Second, we bound $\dot V_2$ as
$$\begin{aligned} \dot V_2 &= 2\sum_{i=1}^{n} d_i e^{2\lambda t}\left[2\lambda \int_0^{x_i(t)} g_i(s)\, ds + g_i(x_i(t))\,\dot x_i(t)\right] \\ &= \sum_{i=1}^{n} 4\lambda d_i e^{2\lambda t} \int_0^{x_i(t)} g_i(s)\, ds + 2 e^{2\lambda t} g^T(x(t)) D \dot x(t) \\ &\le e^{2\lambda t}\{4\lambda g^T(x(t)) D x(t) - 2 g^T(x(t)) D A x(t) + 2 g^T(x(t)) D W g(x(t)) + 2 g^T(x(t)) D W_1 g(x(t - h(t)))\}, \end{aligned} \qquad (13)$$
where Lemma 1 is utilized. Third, $\dot V_3$ is bounded as follows:
$$\begin{aligned} \dot V_3 &= e^{2\lambda t} g^T(x(t)) Q g(x(t)) - (1 - \dot h(t))\, e^{2\lambda(t - h(t))} g^T(x(t - h(t))) Q g(x(t - h(t))) \\ &\le e^{2\lambda t} g^T(x(t)) Q g(x(t)) - (1 - h_d)\, e^{2\lambda t} e^{-2\lambda\bar h} g^T(x(t - h(t))) Q g(x(t - h(t))). \end{aligned} \qquad (14)$$
Finally, we have
$$\begin{aligned} \dot V_4 &= e^{2\lambda t} x^T(t) R x(t) - (1 - \dot h(t))\, e^{2\lambda(t - h(t))} x^T(t - h(t)) R x(t - h(t)) \\ &\le e^{2\lambda t} x^T(t) R x(t) - (1 - h_d)\, e^{2\lambda t} e^{-2\lambda\bar h} x^T(t - h(t)) R x(t - h(t)). \end{aligned} \qquad (15)$$
Thus, it follows that
$$\begin{aligned} \dot V \le {} & 2\lambda e^{2\lambda t} x^T(t) P x(t) + 2 e^{2\lambda t} x^T(t) P(-A x(t) + W g(x(t)) + W_1 g(x(t - h(t)))) \\ & + e^{2\lambda t}\{4\lambda g^T(x(t)) D x(t) - 2 g^T(x(t)) D A x(t) + 2 g^T(x(t)) D W g(x(t)) + 2 g^T(x(t)) D W_1 g(x(t - h(t))) + g^T(x(t)) Q g(x(t))\} \\ & - (1 - h_d)\, e^{2\lambda t} e^{-2\lambda\bar h} g^T(x(t - h(t))) Q g(x(t - h(t))) + e^{2\lambda t} x^T(t) R x(t) \\ & - (1 - h_d)\, e^{2\lambda t} e^{-2\lambda\bar h} x^T(t - h(t)) R x(t - h(t)). \end{aligned} \qquad (16)$$
Here, note that
$$-2 g^T(x(t)) D A x(t) \le -2 g^T(x(t)) D A \Sigma^{-1} g(x(t)),$$
$$-e^{-2\lambda\bar h} x^T(t - h(t)) R x(t - h(t)) \le -e^{-2\lambda\bar h} g^T(x(t - h(t))) \Sigma^{-1} R \Sigma^{-1} g(x(t - h(t))). \qquad (17)$$
Then, by using (17), it can be shown that
$$\begin{aligned} \dot V \le e^{2\lambda t}\{ & x^T(t)[2\lambda P - PA - A^T P + R]\, x(t) + 2 x^T(t)(PW + 4\lambda D)\, g(x(t)) + 2 x^T(t) P W_1 g(x(t - h(t))) \\ & + g^T(x(t))[DW + W^T D + Q - 2DA\Sigma^{-1}]\, g(x(t)) + 2 g^T(x(t)) D W_1 g(x(t - h(t))) \\ & - (1 - h_d)\, g^T(x(t - h(t)))[e^{-2\lambda\bar h} Q + e^{-2\lambda\bar h}\Sigma^{-1} R \Sigma^{-1}]\, g(x(t - h(t)))\} = e^{2\lambda t}\,\xi^T(t)\, \Xi\, \xi(t), \end{aligned} \qquad (18)$$
where
$$\xi(t) = [x^T(t) \;\; g^T(x(t)) \;\; g^T(x(t - h(t)))]^T.$$
Since the matrix $\Xi$ given in (6) is negative definite, we have $\dot V \le 0$; it follows that
$$V(t) \le V(0).$$
Since
$$\begin{aligned} V(0) &= x^T(0) P x(0) + 2\sum_{i=1}^{n} d_i \int_0^{x_i(0)} g_i(s)\, ds + \int_{-h(0)}^{0} e^{2\lambda s} g^T(x(s)) Q g(x(s))\, ds + \int_{-h(0)}^{0} e^{2\lambda s} x^T(s) R x(s)\, ds \\ &\le \lambda_M(P)\|\phi\|^2 + 2 d_M \sigma_M \|\phi\|^2 + (\lambda_M(Q)\sigma_M^2 + \lambda_M(R)) \int_{-h(0)}^{0} e^{2\lambda s} x^T(s) x(s)\, ds \\ &\le \lambda_M(P)\|\phi\|^2 + 2 d_M \sigma_M \|\phi\|^2 + (\lambda_M(Q)\sigma_M^2 + \lambda_M(R)) \int_{-\bar h}^{0} e^{2\lambda s} x^T(s) x(s)\, ds \\ &\le \lambda_M(P)\|\phi\|^2 + 2 d_M \sigma_M \|\phi\|^2 + (\lambda_M(Q)\sigma_M^2 + \lambda_M(R)) \|\phi\|^2 \int_{-\bar h}^{0} e^{2\lambda s}\, ds \\ &= \left[\lambda_M(P) + 2 d_M \sigma_M + (\lambda_M(Q)\sigma_M^2 + \lambda_M(R)) \frac{1 - e^{-2\lambda\bar h}}{2\lambda}\right] \|\phi\|^2, \end{aligned} \qquad (19)$$
where $d_M = \max_i d_i$, $\sigma_M = \max_i \sigma_i$, and $\|\phi\| = \sup_{-\bar h \le \theta \le 0} \|x(\theta)\|$. Also, we have
$$V \ge e^{2\lambda t} \lambda_m(P) \|x(t)\|^2,$$
which implies that
$$e^{2\lambda t} \lambda_m(P) \|x(t)\|^2 \le \left[\lambda_M(P) + 2 d_M \sigma_M + (\lambda_M(Q)\sigma_M^2 + \lambda_M(R)) \frac{1 - e^{-2\lambda\bar h}}{2\lambda}\right] \|\phi\|^2.$$
It follows that
$$\|x(t)\| \le c(\lambda)\, e^{-\lambda t},$$
where
$$c(\lambda) = \left[\frac{\lambda_M(P) + 2 d_M \sigma_M + (\lambda_M(Q)\sigma_M^2 + \lambda_M(R)) \frac{1 - e^{-2\lambda\bar h}}{2\lambda}}{\lambda_m(P)}\right]^{1/2} \|\phi\|.$$
Thus, by Definition 1, the proof is completed. $\square$
Remark 1. The criterion given in Theorem 1 is dependent on the time delay. It is well known that the delay-dependent criteria are less conservative than delay-independent criteria when the delay is small.
Remark 2. By iteratively solving the LMI given in Theorem 1 with respect to $\bar h$, one can find the maximum allowable upper bound $\bar h$ of the time delay $h(t)$ guaranteeing exponential stability of system (4).
Remark 3. From Theorem 1, one can determine the maximum $\lambda$ such that system (4) is exponentially stable. This requires solving the following optimization problem:
$$\text{maximize } \lambda \quad \text{subject to LMI (6)}.$$
This means that system (4) is exponentially stable whenever $\lambda \le \bar\lambda$, where $\bar\lambda$ is the maximal value of $\lambda$ in the optimization problem.
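The optimization in Remark 3 is quasi-convex in $\lambda$: for each fixed $\lambda$, (6) is a genuine LMI in $P$, $Q$, $R$, $D$ and can be checked by any SDP solver, so the maximal decay rate can be located by bisection. A minimal sketch, in which `feasible` is a hypothetical, solver-backed feasibility oracle (not part of the paper) and the bracket and tolerance are illustrative:

```python
def max_decay_rate(feasible, lam_lo=0.0, lam_hi=2.0, tol=1e-4):
    """Largest lambda for which `feasible(lambda)` holds, by bisection.

    `feasible` is assumed to be a solver-backed oracle checking LMI (6)
    for a fixed decay rate lambda. Feasibility is assumed monotone:
    true below some threshold lambda_bar, false above it.
    """
    if not feasible(lam_lo):
        return None                      # infeasible even at the lower end
    for _ in range(60):                  # enlarge the bracket if needed
        if not feasible(lam_hi):
            break
        lam_lo, lam_hi = lam_hi, 2.0 * lam_hi
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if feasible(mid):
            lam_lo = mid                 # keep the feasible side
        else:
            lam_hi = mid
    return lam_lo
```

With a stand-in oracle that is feasible exactly for `lam <= 0.557` (mimicking Example 2 below), the routine converges to that threshold.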
Remark 4. The LMI solutions of Theorem 1 can be obtained by solving an eigenvalue problem with respect to the solution variables, which is a convex optimization problem [21]. In this paper, we utilize MATLAB's LMI Control Toolbox [22], which implements interior-point algorithms. These algorithms are significantly faster than classical convex optimization algorithms [21].
The following examples show that our delay-dependent stability result is less conservative than those in the literature.
Example 1. Consider the DCNN (4) with constant delay $h$ studied in [15,16], with
$$A = \begin{bmatrix} 0.7 & 0 \\ 0 & 0.7 \end{bmatrix}, \quad W = 0, \quad W_1 = \begin{bmatrix} 0.1 & 0.1 \\ 0.3 & 0.3 \end{bmatrix}, \qquad (20)$$
and the activation function $g_i(x) = 0.5(|x + 1| - |x - 1|)$.
Since the works [15,16] present delay-dependent criteria only for the asymptotic stability of DCNNs, they cannot be applied to check the exponential stability of system (20). According to those works, the maximum allowable bounds $\bar h$ guaranteeing asymptotic stability of system (20) are $\bar h = 0.3638$ [16] and $\bar h = 0.45$ [15], respectively. On the other hand, our delay-dependent exponential stability criterion in Theorem 1 gives $\bar h = 9.7101$ for decay rate $\lambda = 0.05$ and $\bar h = 4.0546$ for $\lambda = 0.1$. This means that, for this example, our criterion is less conservative than the existing delay-dependent criteria [15,16].
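The predicted decay can be observed directly by simulating Example 1 as a delay differential equation. The sketch below uses a forward-Euler scheme with a history buffer; the delay $h = 1.0$ (well inside the reported bound $\bar h = 9.7101$ for $\lambda = 0.05$), the step size, horizon, and initial history are illustrative choices, not values from the paper:

```python
import numpy as np

# Example-1 data: x'(t) = -A x(t) + W1 g(x(t - h)), with W = 0.
A  = np.diag([0.7, 0.7])
W1 = np.array([[0.1, 0.1], [0.3, 0.3]])
g  = lambda x: 0.5 * (np.abs(x + 1) - np.abs(x - 1))  # saturation activation

dt, h, T = 0.01, 1.0, 40.0
d = int(round(h / dt))                   # delay expressed in steps
n = int(round(T / dt))
x = np.zeros((n + d + 1, 2))
x[: d + 1] = np.array([0.8, -0.5])       # constant initial history on [-h, 0]

for k in range(d, n + d):
    # forward-Euler step of the delayed dynamics
    x[k + 1] = x[k] + dt * (-A @ x[k] + W1 @ g(x[k - d]))

print(np.linalg.norm(x[-1]))             # norm of the final state
```

The trajectory decays toward the origin, consistent with exponential stability at this delay; the final norm is orders of magnitude below the initial one.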
Example 2. Consider the following cellular neural network [20]:
$$\dot y(t) = -A y(t) + W f(y(t)) + W_1 f(y(t - h(t))) + b, \qquad (21)$$
where
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad W = \begin{bmatrix} -0.1 & 0.1 \\ 0.1 & -0.1 \end{bmatrix}, \quad W_1 = \begin{bmatrix} -0.1 & 0.2 \\ 0.2 & 0.1 \end{bmatrix}, \quad f_i(y) = 0.5(|y + 1| - |y - 1|) \;\; (i = 1, 2),$$
and $h(t) = \frac{1}{2}\sin^2 t$. It is obvious that in this case $\bar h = h_d = 0.5$.
By applying the result of Zhang et al. [20], namely
$$(h + 1)I - 2(1 - \lambda)I + W + W^T + \frac{e^{2\lambda h}}{1 - h_d} W_1^T W_1 < 0,$$
the maximum exponential convergence rate is $\bar\lambda = 0.19$, while Theorem 1 of this paper gives $\bar\lambda = 0.557$, which shows that our criterion is less conservative than the one in [20]. For reference, when $\lambda = 0.557$, the LMI solutions of Theorem 1 are as follows:
$$P = \begin{bmatrix} 48.4850 & 46.8275 \\ 46.8275 & 78.9255 \end{bmatrix}, \quad Q = \begin{bmatrix} 0.0791 & -0.4240 \\ -0.4240 & 2.7512 \end{bmatrix},$$
$$R = \begin{bmatrix} 15.6673 & 17.4569 \\ 17.4569 & 35.0636 \end{bmatrix}, \quad D = \begin{bmatrix} 2.0358 & 0 \\ 0 & 1.2627 \end{bmatrix}.$$
3. Concluding remarks
In this paper, the problems of exponential stability and exponential convergence rate for neural networks with time-varying delays have been studied. The exponential stability criterion obtained in this paper, which depends on the size of the time delay, is derived within the Lyapunov–Krasovskii functional and LMI framework. Numerical examples have shown that the novel criterion is less conservative than those reported in the literature.
References
[1] H. Zhao, Global stability of bidirectional associative memory neural networks with distributed delays, Phys. Lett. A 297 (2002) 182–190.
[2] Y. Li, Global exponential stability of BAM neural networks with delays and impulses, Chaos Solitons Fractals 24 (2005) 279–285.
[3] J. Cao, Global asymptotic stability of neural networks with transmission delays, Int. J. Syst. Sci. 31 (2000) 1313–1316.
[4] A. Chen, L. Huang, J. Cao, Existence and stability of almost periodic solution for BAM neural networks with delays, Appl. Math. Comput. 137 (2003) 177–193.
[5] J. Liang, J. Cao, Exponential stability of continuous-time and discrete-time bidirectional associative memory networks with delays, Chaos Solitons Fractals 22 (2004) 773–785.
[6] L. Huang, C. Huang, B. Liu, Dynamics of a class of cellular neural networks with time-varying delays, Phys. Lett. A 345 (2005) 330–334.
[7] J.H. Park, Delay-dependent criterion for guaranteed cost control of neutral delay systems, J. Optim. Theory Appl. 124 (2005) 491–502.
[8] J. Hale, S.M. Verduyn Lunel, Introduction to Functional Differential Equations, Springer-Verlag, New York, 1993.
[9] J.H. Park, LMI optimization approach to asymptotic stability of certain neutral delay differential equation with time-varying coefficients, Appl. Math. Comput. 160 (2005) 335–361.
[10] J.H. Park, Robust stabilization for dynamic systems with multiple time-varying delays and nonlinear uncertainties, J. Optim. Theory Appl. 108 (2001) 155–174.
[11] J. Cao, D. Zhou, Stability analysis of delayed cellular neural networks, Neural Networks 11 (1998) 1601–1605.
[12] S. Arik, Global asymptotic stability of a larger class of neural networks with constant time delay, Phys. Lett. A 311 (2003) 504–511.
[13] J.H. Park, A novel criterion for global asymptotic stability of BAM neural networks with time delays, Chaos Solitons Fractals 29 (2006) 446–453.
[14] S. Xu, J. Lam, D.W.C. Ho, Y. Zou, Novel global asymptotic stability criteria for delayed cellular neural networks, IEEE Trans. Circ. Syst. II, Express Briefs 52 (2005) 349–353.
[15] Q. Zhang, X. Wei, J. Xu, Global asymptotic stability of Hopfield neural networks with transmission delays, Phys. Lett. A 318 (2003) 399–405.
[16] A. Chen, J. Cao, L. Huang, An estimation of upperbound of delays for global asymptotic stability of delayed Hopfield neural networks, IEEE Trans. Circ. Syst. I 49 (2002) 1028–1032.
[17] X. Liao, G. Chen, E. Sanchez, Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach, Neural Networks 15 (2002) 855–866.
[18] E. Yucel, S. Arik, New exponential stability results for delayed neural networks with time varying delays, Physica D 191 (2004) 314–322.
[19] X. Huang, J. Cao, D. Huang, LMI-based approach for delay-dependent exponential stability analysis of BAM neural networks, Chaos Solitons Fractals 24 (2005) 885–898.
[20] Q. Zhang, X. Wei, J. Xu, Delay-dependent exponential stability of cellular neural networks with time-varying delays, Chaos Solitons Fractals 23 (2005) 1363–1369.
[21] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in Systems and Control Theory, SIAM, Philadelphia, 1994.
[22] P. Gahinet, A. Nemirovski, A. Laub, M. Chilali, LMI Control Toolbox User's Guide, The MathWorks, Massachusetts, 1995.