ARTICLE IN PRESS
Neurocomputing 72 (2009) 1065– 1070
0925-2312/$ - see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2008.03.006
⁎ Corresponding author. Tel.: +86 13782533365; fax: +86 373 3040081.
E-mail address: [email protected] (Y. Chen).
Novel delay-dependent stability criteria of neural networks with time-varying delay

Yonggang Chen a,⁎, Yuanyuan Wu b

a Department of Mathematics, Henan Institute of Science and Technology, Xinxiang 453003, China
b Research Institute of Automation, Southeast University, Nanjing 210096, China
a r t i c l e i n f o
Article history:
Received 16 December 2007
Received in revised form
23 March 2008
Accepted 27 March 2008
Communicated by T. Heskes
Available online 22 April 2008
Keywords:
Stability
Neural networks
Time-varying delay
Linear matrix inequalities (LMIs)
Abstract

In this paper, the delay-dependent stability is investigated for neural networks with a time-varying delay. By using the augmented Lyapunov functional method and by resorting to a novel method for estimating the upper bound of the derivative of augmented Lyapunov functionals, less conservative asymptotic stability criteria are derived in terms of linear matrix inequalities (LMIs). Two numerical examples are presented to show the effectiveness and the less conservativeness of the proposed method.

© 2008 Elsevier B.V. All rights reserved.
1. Introduction
Nowadays, neural networks are widely used in signal processing, image processing, pattern classification, associative memories, the solution of certain optimization problems, and so on. Some of these applications require that the equilibrium points of the designed networks be stable. On the other hand, time delays often occur in practical neural networks, and their existence may lead to instability, oscillation, and poor performance. Therefore, the asymptotic stability analysis of neural networks with time delays has received great attention during the past years, and a number of remarkable results have been reported [1–13,15–21]. The obtained stability criteria can be classified into two types, namely delay-independent criteria [1–3,17–20,5,6,9] and delay-dependent criteria [15,21,16,4,7,8,10–13]. It is well known that delay-dependent stability criteria are generally less conservative than delay-independent ones, especially when the size of the delay is small.
Recently, several kinds of methods have been used to analyze the stability of neural networks with time-varying delays [16,4,7,8,10–13]. In [7], several less conservative delay-dependent stability criteria are presented by considering additional useful terms when estimating the upper bound of the derivative of Lyapunov functionals. In [12], by using the new augmented
Lyapunov functional method and by retaining the additional useful terms, new delay-dependent stability criteria are obtained which improve some existing results [21,4,10,13,7].
In this paper, we consider the delay-dependent stability for a class of neural networks with time-varying delay. By constructing an augmented Lyapunov functional, and by resorting to a new technique for estimating the upper bound of the derivative of Lyapunov functionals, novel delay-dependent stability criteria are established in terms of LMIs. Finally, two numerical examples are presented to show that our results are less conservative than some existing ones.
Notation: Throughout this paper, the superscript "T" stands for the transpose of a matrix. $\mathbb{R}^n$ and $\mathbb{R}^{n\times n}$ denote the $n$-dimensional Euclidean space and the set of all $n\times n$ real matrices, respectively. For a real symmetric matrix $P$, $P>0$ ($P\geqslant 0$) denotes $P$ being positive definite (positive semi-definite). $I$ is used to denote an identity matrix with proper dimension. Matrices, if not explicitly stated, are assumed to have compatible dimensions. The symmetric terms in a symmetric matrix are denoted by $\ast$.
2. Problem formulation
The dynamic behavior of a continuous-time neural network with time-varying delay can be described as follows:
$$\dot{y}(t) = -Cy(t) + Af(y(t)) + Bf(y(t-d(t))) + J, \qquad (1)$$
where $y(t)=[y_1(t),y_2(t),\ldots,y_n(t)]^{\mathrm T}\in\mathbb{R}^n$ is the neuron state vector, $f(y(\cdot))=[f_1(y_1(\cdot)),f_2(y_2(\cdot)),\ldots,f_n(y_n(\cdot))]^{\mathrm T}\in\mathbb{R}^n$ is the activation function, and $J=[J_1,J_2,\ldots,J_n]^{\mathrm T}\in\mathbb{R}^n$ is a constant input vector. $C=\operatorname{diag}\{c_1,c_2,\ldots,c_n\}$ is a diagonal matrix with $c_i>0$, and $A$ and $B$ are the connection weight matrix and the delayed connection weight matrix, respectively. The function $d(t)$ denotes the time-varying delay, and throughout this paper the following two cases of time-varying delay are considered.
Case 1: $d(t)$ is a differentiable function satisfying
$$0\leqslant d(t)\leqslant h<\infty,\quad \dot d(t)\leqslant \mu<\infty,\quad \forall t\geqslant 0.$$
Case 2: $d(t)$ is a continuous function satisfying
$$0\leqslant d(t)\leqslant h<\infty,\quad \forall t\geqslant 0,$$
where $\mu$ and $h$ are constants.

In this paper, we assume that each activation function $f_i(\cdot)$, $i=1,2,\ldots,n$, satisfies the following inequalities:
$$0\leqslant \frac{f_i(z_1)-f_i(z_2)}{z_1-z_2}\leqslant k_i,\quad i=1,2,\ldots,n, \qquad (2)$$
where $k_i$, $i=1,2,\ldots,n$, are positive scalars.

Assume that $y^{\ast}=[y_1^{\ast},y_2^{\ast},\ldots,y_n^{\ast}]^{\mathrm T}$ is an equilibrium point of Eq. (1). By the coordinate transformation $x(\cdot)=y(\cdot)-y^{\ast}$, system (1) can be transformed into the following system:
$$\dot{x}(t) = -Cx(t) + Ag(x(t)) + Bg(x(t-d(t))), \qquad (3)$$
where $x(t)=[x_1(t),x_2(t),\ldots,x_n(t)]^{\mathrm T}\in\mathbb{R}^n$ is the state vector of the transformed system, $g(x(\cdot))=[g_1(x_1(\cdot)),g_2(x_2(\cdot)),\ldots,g_n(x_n(\cdot))]^{\mathrm T}\in\mathbb{R}^n$, and $g_i(x_i(t))=f_i(x_i(t)+y_i^{\ast})-f_i(y_i^{\ast})$, $i=1,2,\ldots,n$. According to inequality (2), we can obtain
$$g_i^2(x_i(t))\leqslant k_i x_i(t)g_i(x_i(t))\leqslant k_i^2 x_i^2(t),\quad i=1,2,\ldots,n. \qquad (4)$$
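Condition (4) follows directly from the slope bound (2) applied to the shifted nonlinearity. As a quick numerical sanity check, one can verify (4) on a grid; here the activation $f=\tanh$ with $k=1$ and the equilibrium component $y^{\ast}$ are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Assumed example activation: f(z) = tanh(z) satisfies (2) with k = 1,
# since 0 <= (tanh(z1) - tanh(z2))/(z1 - z2) <= 1 for all z1 != z2.
k = 1.0
y_star = 0.7  # hypothetical equilibrium component y_i^*

def g(x):
    """Shifted activation g(x) = f(x + y*) - f(y*), so that g(0) = 0."""
    return np.tanh(x + y_star) - np.tanh(y_star)

# Check inequality (4): g(x)^2 <= k*x*g(x) <= k^2*x^2 on a test grid.
xs = np.linspace(-5.0, 5.0, 2001)
xs = xs[xs != 0.0]          # (4) holds trivially at x = 0
gx = g(xs)
assert np.all(gx**2 <= k * xs * gx + 1e-12)
assert np.all(k * xs * gx <= k**2 * xs**2 + 1e-12)
print("sector condition (4) holds on the test grid")
```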
From the above analysis, we can see that the stability problem of system (1) at the equilibrium $y^{\ast}$ is changed into the stability problem of the zero solution of system (3). Therefore, in the following we will be devoted to the stability analysis of system (3).

Before giving the main results, we first introduce the following lemmas.
Lemma 1 (Mahmoud [14]). For any real vectors $a$, $b$ and any matrix $Q>0$ with appropriate dimensions, it follows that
$$2a^{\mathrm T}b\leqslant a^{\mathrm T}Qa+b^{\mathrm T}Q^{-1}b.$$
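Lemma 1 is the standard completion-of-squares bound. A quick numerical sanity check with random data (the dimension is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random Q > 0 and random vectors a, b.
G = rng.standard_normal((n, n))
Q = G @ G.T + n * np.eye(n)   # positive definite by construction
a = rng.standard_normal(n)
b = rng.standard_normal(n)

lhs = 2 * a @ b
rhs = a @ Q @ a + b @ np.linalg.solve(Q, b)  # a^T Q a + b^T Q^{-1} b
assert lhs <= rhs + 1e-9
print("Lemma 1 inequality verified for random data")
```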
Lemma 2. For any matrices $Q_{11}>0$, $Q_{22}>0$, $Q_{12}$ with
$$Q=\begin{bmatrix} Q_{11} & Q_{12}\\ Q_{12}^{\mathrm T} & Q_{22}\end{bmatrix}>0,$$
any matrices $M_i,N_i$ $(i=1,2,\ldots,14)$, and any scalar $0\leqslant d(t)\leqslant h$, the following inequalities hold:
$$-\int_{t-d(t)}^{t}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds\leqslant \xi^{\mathrm T}(t)\Gamma_1\xi(t)+d(t)\,\xi^{\mathrm T}(t)M^{\mathrm T}Q^{-1}M\xi(t), \qquad (5)$$
$$-\int_{t-h}^{t-d(t)}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds\leqslant \xi^{\mathrm T}(t)\Gamma_2\xi(t)+(h-d(t))\,\xi^{\mathrm T}(t)N^{\mathrm T}Q^{-1}N\xi(t), \qquad (6)$$
where
$$\Gamma_1=\begin{bmatrix}
M_8+M_8^{\mathrm T} & -M_8+M_9^{\mathrm T} & M_{10}^{\mathrm T} & M_1+M_{11}^{\mathrm T} & M_{12}^{\mathrm T} & M_{13}^{\mathrm T} & M_{14}^{\mathrm T}\\
\ast & -M_9-M_9^{\mathrm T} & -M_{10}^{\mathrm T} & M_2-M_{11}^{\mathrm T} & -M_{12}^{\mathrm T} & -M_{13}^{\mathrm T} & -M_{14}^{\mathrm T}\\
\ast & \ast & 0 & M_3 & 0 & 0 & 0\\
\ast & \ast & \ast & M_4+M_4^{\mathrm T} & M_5^{\mathrm T} & M_6^{\mathrm T} & M_7^{\mathrm T}\\
\ast & \ast & \ast & \ast & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & 0
\end{bmatrix},$$
$$\Gamma_2=\begin{bmatrix}
0 & N_8 & -N_8 & 0 & N_1 & 0 & 0\\
\ast & N_9+N_9^{\mathrm T} & -N_9+N_{10}^{\mathrm T} & N_{11}^{\mathrm T} & N_2+N_{12}^{\mathrm T} & N_{13}^{\mathrm T} & N_{14}^{\mathrm T}\\
\ast & \ast & -N_{10}-N_{10}^{\mathrm T} & -N_{11}^{\mathrm T} & N_3-N_{12}^{\mathrm T} & -N_{13}^{\mathrm T} & -N_{14}^{\mathrm T}\\
\ast & \ast & \ast & 0 & N_4 & 0 & 0\\
\ast & \ast & \ast & \ast & N_5+N_5^{\mathrm T} & N_6^{\mathrm T} & N_7^{\mathrm T}\\
\ast & \ast & \ast & \ast & \ast & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & 0
\end{bmatrix},$$
$$M=\begin{bmatrix} M_1^{\mathrm T} & M_2^{\mathrm T} & M_3^{\mathrm T} & M_4^{\mathrm T} & M_5^{\mathrm T} & M_6^{\mathrm T} & M_7^{\mathrm T}\\ M_8^{\mathrm T} & M_9^{\mathrm T} & M_{10}^{\mathrm T} & M_{11}^{\mathrm T} & M_{12}^{\mathrm T} & M_{13}^{\mathrm T} & M_{14}^{\mathrm T}\end{bmatrix},$$
$$N=\begin{bmatrix} N_1^{\mathrm T} & N_2^{\mathrm T} & N_3^{\mathrm T} & N_4^{\mathrm T} & N_5^{\mathrm T} & N_6^{\mathrm T} & N_7^{\mathrm T}\\ N_8^{\mathrm T} & N_9^{\mathrm T} & N_{10}^{\mathrm T} & N_{11}^{\mathrm T} & N_{12}^{\mathrm T} & N_{13}^{\mathrm T} & N_{14}^{\mathrm T}\end{bmatrix},$$
$$\xi(t)=\left[x^{\mathrm T}(t)\;\; x^{\mathrm T}(t-d(t))\;\; x^{\mathrm T}(t-h)\;\; \Bigl(\int_{t-d(t)}^{t}x(s)\,ds\Bigr)^{\mathrm T}\;\; \Bigl(\int_{t-h}^{t-d(t)}x(s)\,ds\Bigr)^{\mathrm T}\;\; g^{\mathrm T}(x(t))\;\; g^{\mathrm T}(x(t-d(t)))\right]^{\mathrm T},$$
$$\zeta(s)=[x^{\mathrm T}(s)\;\;\dot{x}^{\mathrm T}(s)]^{\mathrm T}.$$
Proof. By Lemma 1, we have
$$\begin{aligned}
-\int_{t-d(t)}^{t}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds
&\leqslant 2\int_{t-d(t)}^{t}\zeta^{\mathrm T}(s)M\xi(t)\,ds+\int_{t-d(t)}^{t}\xi^{\mathrm T}(t)M^{\mathrm T}Q^{-1}M\xi(t)\,ds\\
&=2\left[\Bigl(\int_{t-d(t)}^{t}x(s)\,ds\Bigr)^{\mathrm T}\;\; x^{\mathrm T}(t)-x^{\mathrm T}(t-d(t))\right]M\xi(t)+d(t)\,\xi^{\mathrm T}(t)M^{\mathrm T}Q^{-1}M\xi(t)\\
&=2\xi^{\mathrm T}(t)F_1M\xi(t)+d(t)\,\xi^{\mathrm T}(t)M^{\mathrm T}Q^{-1}M\xi(t)\\
&=\xi^{\mathrm T}(t)\Gamma_1\xi(t)+d(t)\,\xi^{\mathrm T}(t)M^{\mathrm T}Q^{-1}M\xi(t),
\end{aligned}$$
$$\begin{aligned}
-\int_{t-h}^{t-d(t)}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds
&\leqslant 2\int_{t-h}^{t-d(t)}\zeta^{\mathrm T}(s)N\xi(t)\,ds+\int_{t-h}^{t-d(t)}\xi^{\mathrm T}(t)N^{\mathrm T}Q^{-1}N\xi(t)\,ds\\
&=2\left[\Bigl(\int_{t-h}^{t-d(t)}x(s)\,ds\Bigr)^{\mathrm T}\;\; x^{\mathrm T}(t-d(t))-x^{\mathrm T}(t-h)\right]N\xi(t)+(h-d(t))\,\xi^{\mathrm T}(t)N^{\mathrm T}Q^{-1}N\xi(t)\\
&=2\xi^{\mathrm T}(t)F_2N\xi(t)+(h-d(t))\,\xi^{\mathrm T}(t)N^{\mathrm T}Q^{-1}N\xi(t)\\
&=\xi^{\mathrm T}(t)\Gamma_2\xi(t)+(h-d(t))\,\xi^{\mathrm T}(t)N^{\mathrm T}Q^{-1}N\xi(t),
\end{aligned}$$
where
$$F_1=\begin{bmatrix} 0 & 0 & 0 & I & 0 & 0 & 0\\ I & -I & 0 & 0 & 0 & 0 & 0\end{bmatrix}^{\mathrm T},\qquad
F_2=\begin{bmatrix} 0 & 0 & 0 & 0 & I & 0 & 0\\ 0 & I & -I & 0 & 0 & 0 & 0\end{bmatrix}^{\mathrm T}.$$
This completes the proof. □
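The proof hinges on the pointwise completion-of-squares bound $-\zeta^{\mathrm T}Q\zeta\leqslant 2\zeta^{\mathrm T}M\xi+\xi^{\mathrm T}M^{\mathrm T}Q^{-1}M\xi$, which is then integrated over $s$. A numerical sketch of that pointwise step with random data (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 4, 14   # illustrative sizes: zeta in R^p, xi in R^q

G = rng.standard_normal((p, p))
Q = G @ G.T + p * np.eye(p)        # Q > 0
M = rng.standard_normal((p, q))    # free slack matrix as in Lemma 2
zeta = rng.standard_normal(p)
xi = rng.standard_normal(q)

# 0 <= (Q zeta + M xi)^T Q^{-1} (Q zeta + M xi) expands to exactly this bound.
lhs = -(zeta @ Q @ zeta)
rhs = 2 * zeta @ M @ xi + xi @ M.T @ np.linalg.solve(Q, M @ xi)
assert lhs <= rhs + 1e-9
print("pointwise bound behind (5)/(6) verified")
```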
Remark 1. The integral inequalities (5) and (6) are inspired by Lemma 1 in [22], and can be used effectively to estimate the upper bound of the derivative of augmented Lyapunov functionals.
3. Main results
In this section, we present new delay-dependent stability criteria for system (3). For a time-varying delay $d(t)$ satisfying Case 1, we have the following result.
Theorem 1. For a given diagonal matrix $K=\operatorname{diag}\{k_1,k_2,\ldots,k_n\}$ and constants $h\geqslant 0$, $\mu\geqslant 0$, under a time-varying delay $d(t)$ satisfying Case 1, system (3) is asymptotically stable if there exist matrices $P_{11}>0$, $P_{22}>0$, $Q_{11}>0$, $Q_{22}>0$, $P_{12}$, $Q_{12}$, $R_i>0$ $(i=1,2,3)$, $S>0$, $Z>0$, $M_i,N_i$ $(i=1,2,\ldots,14)$ with
$$P=\begin{bmatrix} P_{11} & P_{12}\\ P_{12}^{\mathrm T} & P_{22}\end{bmatrix}>0,\qquad Q=\begin{bmatrix} Q_{11} & Q_{12}\\ Q_{12}^{\mathrm T} & Q_{22}\end{bmatrix}>0,$$
and diagonal matrices $L\geqslant 0$, $D_1\geqslant 0$, $D_2\geqslant 0$ such that the following linear matrix inequalities (LMIs) hold:
$$\begin{bmatrix} \Omega & h\Xi_1^{\mathrm T}\\ h\Xi_1 & -hQ\end{bmatrix}<0, \qquad (7)$$
$$\begin{bmatrix} \Omega & h\Xi_2^{\mathrm T}\\ h\Xi_2 & -hQ\end{bmatrix}<0, \qquad (8)$$
where
$$\Omega=\begin{bmatrix}
\Omega_{11} & \Omega_{12} & \Omega_{13} & \Omega_{14} & \Omega_{15} & \Omega_{16} & \Omega_{17} & -hC^{\mathrm T}Q_{22} & \sqrt{\mu}P_{12} & 0\\
\ast & \Omega_{22} & \Omega_{23} & \Omega_{24} & \Omega_{25} & \Omega_{26} & \Omega_{27} & 0 & 0 & \sqrt{\mu}P_{22}^{\mathrm T}\\
\ast & \ast & \Omega_{33} & \Omega_{34} & \Omega_{35} & -N_{13}^{\mathrm T} & -N_{14}^{\mathrm T} & 0 & 0 & 0\\
\ast & \ast & \ast & \Omega_{44} & \Omega_{45} & \Omega_{46} & \Omega_{47} & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \Omega_{55} & N_6^{\mathrm T} & N_7^{\mathrm T} & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \Omega_{66} & LB & hA^{\mathrm T}Q_{22} & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \Omega_{77} & hB^{\mathrm T}Q_{22} & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & -hQ_{22} & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & -S & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & -Z
\end{bmatrix},$$
$$\Xi_1=\begin{bmatrix} M_1^{\mathrm T} & M_2^{\mathrm T} & M_3^{\mathrm T} & M_4^{\mathrm T} & M_5^{\mathrm T} & M_6^{\mathrm T} & M_7^{\mathrm T} & 0 & 0 & 0\\ M_8^{\mathrm T} & M_9^{\mathrm T} & M_{10}^{\mathrm T} & M_{11}^{\mathrm T} & M_{12}^{\mathrm T} & M_{13}^{\mathrm T} & M_{14}^{\mathrm T} & 0 & 0 & 0\end{bmatrix},$$
$$\Xi_2=\begin{bmatrix} N_1^{\mathrm T} & N_2^{\mathrm T} & N_3^{\mathrm T} & N_4^{\mathrm T} & N_5^{\mathrm T} & N_6^{\mathrm T} & N_7^{\mathrm T} & 0 & 0 & 0\\ N_8^{\mathrm T} & N_9^{\mathrm T} & N_{10}^{\mathrm T} & N_{11}^{\mathrm T} & N_{12}^{\mathrm T} & N_{13}^{\mathrm T} & N_{14}^{\mathrm T} & 0 & 0 & 0\end{bmatrix},$$
with
$$\begin{aligned}
\Omega_{11}&=-P_{11}C-C^{\mathrm T}P_{11}^{\mathrm T}+P_{12}+P_{12}^{\mathrm T}+h(Q_{11}-Q_{12}C-C^{\mathrm T}Q_{12}^{\mathrm T})+R_1+R_2+M_8+M_8^{\mathrm T},\\
\Omega_{12}&=-P_{12}-M_8+M_9^{\mathrm T}+N_8,\\
\Omega_{13}&=M_{10}^{\mathrm T}-N_8,\\
\Omega_{14}&=P_{22}^{\mathrm T}-C^{\mathrm T}P_{12}+M_1+M_{11}^{\mathrm T},\\
\Omega_{15}&=M_{12}^{\mathrm T}+N_1,\\
\Omega_{16}&=P_{11}A+hQ_{12}A+M_{13}^{\mathrm T}+KD_1-C^{\mathrm T}L^{\mathrm T},\\
\Omega_{17}&=P_{11}B+hQ_{12}B+M_{14}^{\mathrm T},\\
\Omega_{22}&=-(1-\mu)R_1+\mu S-M_9-M_9^{\mathrm T}+N_9+N_9^{\mathrm T},\\
\Omega_{23}&=-M_{10}^{\mathrm T}-N_9+N_{10}^{\mathrm T},\\
\Omega_{24}&=-P_{22}^{\mathrm T}+M_2-M_{11}^{\mathrm T}+N_{11}^{\mathrm T},\\
\Omega_{25}&=-M_{12}^{\mathrm T}+N_2+N_{12}^{\mathrm T},\\
\Omega_{26}&=-M_{13}^{\mathrm T}+N_{13}^{\mathrm T},\\
\Omega_{27}&=-M_{14}^{\mathrm T}+N_{14}^{\mathrm T}+KD_2,\\
\Omega_{33}&=-N_{10}-N_{10}^{\mathrm T}-R_2,\\
\Omega_{34}&=M_3-N_{11}^{\mathrm T},\\
\Omega_{35}&=N_3-N_{12}^{\mathrm T},\\
\Omega_{44}&=M_4+M_4^{\mathrm T}+\mu Z,\\
\Omega_{45}&=M_5^{\mathrm T}+N_4,\\
\Omega_{46}&=M_6^{\mathrm T}+P_{12}^{\mathrm T}A,\\
\Omega_{47}&=M_7^{\mathrm T}+P_{12}^{\mathrm T}B,\\
\Omega_{55}&=N_5+N_5^{\mathrm T},\\
\Omega_{66}&=R_3-2D_1+LA+A^{\mathrm T}L^{\mathrm T},\\
\Omega_{77}&=-(1-\mu)R_3-2D_2.
\end{aligned}$$
Proof. We consider the following Lyapunov–Krasovskii functional:
$$V(t)=V_1(t)+V_2(t)+V_3(t)+V_4(t), \qquad (9)$$
where
$$V_1(t)=\omega^{\mathrm T}(t)P\omega(t),$$
$$V_2(t)=\int_{-h}^{0}\int_{t+\theta}^{t}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds\,d\theta,$$
$$V_3(t)=2\sum_{i=1}^{n}l_i\int_{0}^{x_i(t)}g_i(s)\,ds,$$
$$V_4(t)=\int_{t-d(t)}^{t}x^{\mathrm T}(s)R_1x(s)\,ds+\int_{t-h}^{t}x^{\mathrm T}(s)R_2x(s)\,ds+\int_{t-d(t)}^{t}g^{\mathrm T}(x(s))R_3g(x(s))\,ds,$$
and
$$P=\begin{bmatrix} P_{11} & P_{12}\\ P_{12}^{\mathrm T} & P_{22}\end{bmatrix}>0,\qquad Q=\begin{bmatrix} Q_{11} & Q_{12}\\ Q_{12}^{\mathrm T} & Q_{22}\end{bmatrix}>0,$$
$$R_i>0\;(i=1,2,3),\qquad L=\operatorname{diag}\{l_1,l_2,\ldots,l_n\}\geqslant 0,$$
$$\omega(t)=\left[x^{\mathrm T}(t)\;\;\Bigl(\int_{t-d(t)}^{t}x(s)\,ds\Bigr)^{\mathrm T}\right]^{\mathrm T},\qquad \zeta(s)=[x^{\mathrm T}(s)\;\;\dot{x}^{\mathrm T}(s)]^{\mathrm T}.$$
Differentiating $V(t)$ with respect to $t$ along system (3) gives
$$\begin{aligned}
\dot V_1(t)&=2\begin{bmatrix} x(t)\\ \int_{t-d(t)}^{t}x(s)\,ds\end{bmatrix}^{\mathrm T}\begin{bmatrix} P_{11} & P_{12}\\ P_{12}^{\mathrm T} & P_{22}\end{bmatrix}\begin{bmatrix} -Cx(t)+Ag(x(t))+Bg(x(t-d(t)))\\ x(t)-(1-\dot d(t))x(t-d(t))\end{bmatrix}\\
&=-2x^{\mathrm T}(t)P_{11}Cx(t)+2x^{\mathrm T}(t)P_{11}Ag(x(t))+2x^{\mathrm T}(t)P_{11}Bg(x(t-d(t)))\\
&\quad-2x^{\mathrm T}(t)C^{\mathrm T}P_{12}\int_{t-d(t)}^{t}x(s)\,ds+2\Bigl(\int_{t-d(t)}^{t}x(s)\,ds\Bigr)^{\mathrm T}P_{12}^{\mathrm T}Bg(x(t-d(t)))\\
&\quad+2\Bigl(\int_{t-d(t)}^{t}x(s)\,ds\Bigr)^{\mathrm T}P_{12}^{\mathrm T}Ag(x(t))+2x^{\mathrm T}(t)P_{12}x(t)-2x^{\mathrm T}(t)P_{12}x(t-d(t))\\
&\quad+2x^{\mathrm T}(t)P_{22}^{\mathrm T}\int_{t-d(t)}^{t}x(s)\,ds-2x^{\mathrm T}(t-d(t))P_{22}^{\mathrm T}\int_{t-d(t)}^{t}x(s)\,ds\\
&\quad+2\dot d(t)x^{\mathrm T}(t)P_{12}x(t-d(t))+2\dot d(t)x^{\mathrm T}(t-d(t))P_{22}^{\mathrm T}\int_{t-d(t)}^{t}x(s)\,ds, \qquad (10)
\end{aligned}$$
$$\begin{aligned}
\dot V_2(t)&=h\zeta^{\mathrm T}(t)Q\zeta(t)-\int_{t-h}^{t}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds\\
&=h\zeta^{\mathrm T}(t)Q\zeta(t)-\int_{t-d(t)}^{t}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds-\int_{t-h}^{t-d(t)}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds\\
&\leqslant hx^{\mathrm T}(t)Q_{11}x(t)+2hx^{\mathrm T}(t)Q_{12}[-Cx(t)+Ag(x(t))+Bg(x(t-d(t)))]\\
&\quad+h[-Cx(t)+Ag(x(t))+Bg(x(t-d(t)))]^{\mathrm T}Q_{22}[-Cx(t)+Ag(x(t))+Bg(x(t-d(t)))]\\
&\quad+\xi^{\mathrm T}(t)\Gamma_1\xi(t)+d(t)\,\xi^{\mathrm T}(t)M^{\mathrm T}Q^{-1}M\xi(t)+\xi^{\mathrm T}(t)\Gamma_2\xi(t)+(h-d(t))\,\xi^{\mathrm T}(t)N^{\mathrm T}Q^{-1}N\xi(t), \qquad (11)
\end{aligned}$$
$$\dot V_3(t)=2\sum_{i=1}^{n}l_ig_i(x_i(t))\dot x_i(t)=2g^{\mathrm T}(x(t))L[-Cx(t)+Ag(x(t))+Bg(x(t-d(t)))], \qquad (12)$$
$$\begin{aligned}
\dot V_4(t)&=x^{\mathrm T}(t)(R_1+R_2)x(t)+g^{\mathrm T}(x(t))R_3g(x(t))-(1-\dot d(t))x^{\mathrm T}(t-d(t))R_1x(t-d(t))\\
&\quad-x^{\mathrm T}(t-h)R_2x(t-h)-(1-\dot d(t))g^{\mathrm T}(x(t-d(t)))R_3g(x(t-d(t)))\\
&\leqslant x^{\mathrm T}(t)(R_1+R_2)x(t)+g^{\mathrm T}(x(t))R_3g(x(t))-(1-\mu)x^{\mathrm T}(t-d(t))R_1x(t-d(t))\\
&\quad-x^{\mathrm T}(t-h)R_2x(t-h)-(1-\mu)g^{\mathrm T}(x(t-d(t)))R_3g(x(t-d(t))), \qquad (13)
\end{aligned}$$
where Lemma 2 is utilized in (11) and $L=\operatorname{diag}\{l_1,l_2,\ldots,l_n\}$. For some matrices $S>0$, $Z>0$, the following inequalities are true:
$$2\dot d(t)x^{\mathrm T}(t)P_{12}x(t-d(t))\leqslant \mu x^{\mathrm T}(t)P_{12}S^{-1}P_{12}^{\mathrm T}x(t)+\mu x^{\mathrm T}(t-d(t))Sx(t-d(t)), \qquad (14)$$
$$2\dot d(t)x^{\mathrm T}(t-d(t))P_{22}^{\mathrm T}\int_{t-d(t)}^{t}x(s)\,ds\leqslant \mu x^{\mathrm T}(t-d(t))P_{22}^{\mathrm T}Z^{-1}P_{22}x(t-d(t))+\mu\Bigl(\int_{t-d(t)}^{t}x(s)\,ds\Bigr)^{\mathrm T}Z\int_{t-d(t)}^{t}x(s)\,ds. \qquad (15)$$
By inequalities (4), it is known that there exist diagonal matrices $D_1\geqslant 0$ and $D_2\geqslant 0$ such that the following inequalities hold:
$$x^{\mathrm T}(t)KD_1g(x(t))-g^{\mathrm T}(x(t))D_1g(x(t))\geqslant 0, \qquad (16)$$
$$x^{\mathrm T}(t-d(t))KD_2g(x(t-d(t)))-g^{\mathrm T}(x(t-d(t)))D_2g(x(t-d(t)))\geqslant 0, \qquad (17)$$
where $K=\operatorname{diag}\{k_1,k_2,\ldots,k_n\}$. By considering (9)–(17), we eventually obtain
$$\begin{aligned}
\dot V(t)&\leqslant \dot V_1(t)+\dot V_2(t)+\dot V_3(t)+\dot V_4(t)\\
&\quad+2[x^{\mathrm T}(t)KD_1g(x(t))-g^{\mathrm T}(x(t))D_1g(x(t))]\\
&\quad+2[x^{\mathrm T}(t-d(t))KD_2g(x(t-d(t)))-g^{\mathrm T}(x(t-d(t)))D_2g(x(t-d(t)))]\\
&\leqslant \xi^{\mathrm T}(t)\bigl[\Omega_0+h\rho^{\mathrm T}Q_{22}^{-1}\rho+d(t)M^{\mathrm T}Q^{-1}M+(h-d(t))N^{\mathrm T}Q^{-1}N\bigr]\xi(t), \qquad (18)
\end{aligned}$$
where
$$\Omega_0=\begin{bmatrix}
\Omega_{11}+\mu P_{12}S^{-1}P_{12}^{\mathrm T} & \Omega_{12} & \Omega_{13} & \Omega_{14} & \Omega_{15} & \Omega_{16} & \Omega_{17}\\
\ast & \Omega_{22}+\mu P_{22}^{\mathrm T}Z^{-1}P_{22} & \Omega_{23} & \Omega_{24} & \Omega_{25} & \Omega_{26} & \Omega_{27}\\
\ast & \ast & \Omega_{33} & \Omega_{34} & \Omega_{35} & -N_{13}^{\mathrm T} & -N_{14}^{\mathrm T}\\
\ast & \ast & \ast & \Omega_{44} & \Omega_{45} & \Omega_{46} & \Omega_{47}\\
\ast & \ast & \ast & \ast & \Omega_{55} & N_6^{\mathrm T} & N_7^{\mathrm T}\\
\ast & \ast & \ast & \ast & \ast & \Omega_{66} & LB\\
\ast & \ast & \ast & \ast & \ast & \ast & \Omega_{77}
\end{bmatrix},$$
$$\rho=[-Q_{22}C\;\;0\;\;0\;\;0\;\;0\;\;Q_{22}A\;\;Q_{22}B],$$
and the $\Omega_{ij}$ are defined in Theorem 1. Note that
$$d(t)M^{\mathrm T}Q^{-1}M+(h-d(t))N^{\mathrm T}Q^{-1}N=d(t)\bigl(M^{\mathrm T}Q^{-1}M-N^{\mathrm T}Q^{-1}N\bigr)+hN^{\mathrm T}Q^{-1}N$$
is affine in $d(t)$, and hence is bounded by $hM^{\mathrm T}Q^{-1}M$ and $hN^{\mathrm T}Q^{-1}N$, attained at $d(t)=h$ and $d(t)=0$, respectively. Thus, if $\Omega_0+h\rho^{\mathrm T}Q_{22}^{-1}\rho+hM^{\mathrm T}Q^{-1}M<0$ and $\Omega_0+h\rho^{\mathrm T}Q_{22}^{-1}\rho+hN^{\mathrm T}Q^{-1}N<0$, then $\dot V(t)<0$, which implies that system (3) is asymptotically stable. By the Schur complement, inequalities (7) and (8) are equivalent to $\Omega_0+h\rho^{\mathrm T}Q_{22}^{-1}\rho+hM^{\mathrm T}Q^{-1}M<0$ and $\Omega_0+h\rho^{\mathrm T}Q_{22}^{-1}\rho+hN^{\mathrm T}Q^{-1}N<0$, respectively. This completes the proof. □
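The Schur-complement equivalence used at the end of the proof can be checked numerically. In this sketch (all names and dimensions are our own illustrative choices), Ω is constructed so that the reduced condition Ω + hXᵀQ⁻¹X < 0 holds, and the block LMI is verified to be negative definite as well:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3
h = 2.0

G = rng.standard_normal((m, m))
Q = G @ G.T + m * np.eye(m)          # Q > 0
X = rng.standard_normal((m, n))
# Choose Omega so that Omega + h X^T Q^{-1} X = -I < 0.
Omega = -h * X.T @ np.linalg.solve(Q, X) - np.eye(n)

# Block matrix of the form appearing in LMIs (7)/(8).
blk = np.block([[Omega, h * X.T], [h * X, -h * Q]])

def is_neg_def(A):
    """Negative definiteness via eigenvalues of the symmetrized matrix."""
    return bool(np.all(np.linalg.eigvalsh((A + A.T) / 2) < 0))

# Schur complement: blk < 0  <=>  Omega + h X^T Q^{-1} X < 0 (given Q > 0).
reduced = Omega + h * X.T @ np.linalg.solve(Q, X)
assert is_neg_def(blk) == is_neg_def(reduced)
print("block LMI and Schur-complement form agree")
```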
Remark 2. In this paper, in order to reduce the conservativeness, $-\int_{t-h}^{t}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds$ is divided into two parts, that is, $-\int_{t-d(t)}^{t}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds$ and $-\int_{t-h}^{t-d(t)}\zeta^{\mathrm T}(s)Q\zeta(s)\,ds$, and the newly established inequalities (5) and (6) are used to estimate their upper bounds.
Remark 3. Note that $d(t)M^{\mathrm T}Q^{-1}M+(h-d(t))N^{\mathrm T}Q^{-1}N$ is not simply enlarged to $hM^{\mathrm T}Q^{-1}M+hN^{\mathrm T}Q^{-1}N$, but is instead estimated by the two less conservative bounds $hM^{\mathrm T}Q^{-1}M$ and $hN^{\mathrm T}Q^{-1}N$. This estimation method is effective in reducing the conservativeness, as illustrated by the numerical examples, and was not used in some existing literature [11,7,12].
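The endpoint argument behind Remark 3 is simply that a quadratic form that is affine in $d(t)$ attains its maximum over $[0,h]$ at an endpoint. A small numerical illustration, where A and B stand in for $M^{\mathrm T}Q^{-1}M$ and $N^{\mathrm T}Q^{-1}N$ (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, h = 5, 1.5

def random_psd(n, rng):
    G = rng.standard_normal((n, n))
    return G @ G.T  # positive semi-definite by construction

A = random_psd(n, rng)   # stands in for M^T Q^{-1} M
B = random_psd(n, rng)   # stands in for N^T Q^{-1} N
x = rng.standard_normal(n)

def phi(d):
    """phi(d) = x^T [d*A + (h-d)*B] x is affine in d."""
    return x @ (d * A + (h - d) * B) @ x

# An affine function on [0, h] is maximized at d = 0 or d = h.
endpoint_max = max(phi(0.0), phi(h))
for d in np.linspace(0.0, h, 101):
    assert phi(d) <= endpoint_max + 1e-9
print("interior values never exceed the endpoint bound")
```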
When the time-varying delay $d(t)$ satisfies Case 2, we choose the following Lyapunov–Krasovskii functional:
$$\tilde V(t)=x^{\mathrm T}(t)P_{11}x(t)+V_2(t)+V_3(t)+\int_{t-h}^{t}x^{\mathrm T}(s)R_2x(s)\,ds, \qquad (19)$$
where $V_2(t)$ and $V_3(t)$ are defined in (9). Following the proof of Theorem 1, the next theorem can be easily obtained.
Table 1
Maximum allowable delay bounds for Example 1

μ                   0.1      0.5      0.9      Unknown μ
[10,13]             3.2775   2.1502   1.3164   1.2598
[7]                 3.2793   2.2245   1.5847   1.5444
Theorems 1 and 2    3.3428   2.5421   2.0867   2.0389
Table 2
Maximum allowable delay bounds for Example 2

μ                   0.8      0.9      Unknown μ
[10,13]             1.2281   0.8636   0.8298
[7]                 1.6831   1.1493   1.0880
Theorems 1 and 2    2.3534   1.6050   1.5103
Theorem 2. For a given diagonal matrix $K=\operatorname{diag}\{k_1,k_2,\ldots,k_n\}$ and a constant $h\geqslant 0$, under a time-varying delay $d(t)$ satisfying Case 2, system (3) is asymptotically stable if there exist matrices $P_{11}>0$, $Q_{11}>0$, $Q_{22}>0$, $Q_{12}$, $R_2>0$, $M_i,N_i$ $(i=1,2,\ldots,14)$ with
$$Q=\begin{bmatrix} Q_{11} & Q_{12}\\ Q_{12}^{\mathrm T} & Q_{22}\end{bmatrix}>0,$$
and diagonal matrices $L\geqslant 0$, $D_1\geqslant 0$, $D_2\geqslant 0$ such that the following LMIs hold:
$$\begin{bmatrix} \Omega & h\Xi_1^{\mathrm T}\\ h\Xi_1 & -hQ\end{bmatrix}<0, \qquad (20)$$
$$\begin{bmatrix} \Omega & h\Xi_2^{\mathrm T}\\ h\Xi_2 & -hQ\end{bmatrix}<0, \qquad (21)$$
where
$$\Omega=\begin{bmatrix}
\Omega_{11} & \Omega_{12} & M_{10}^{\mathrm T}-N_8 & M_1+M_{11}^{\mathrm T} & M_{12}^{\mathrm T}+N_1 & \Omega_{16} & \Omega_{17} & -hC^{\mathrm T}Q_{22}\\
\ast & \Omega_{22} & \Omega_{23} & \Omega_{24} & \Omega_{25} & \Omega_{26} & \Omega_{27} & 0\\
\ast & \ast & \Omega_{33} & M_3-N_{11}^{\mathrm T} & N_3-N_{12}^{\mathrm T} & -N_{13}^{\mathrm T} & -N_{14}^{\mathrm T} & 0\\
\ast & \ast & \ast & M_4+M_4^{\mathrm T} & M_5^{\mathrm T}+N_4 & M_6^{\mathrm T} & M_7^{\mathrm T} & 0\\
\ast & \ast & \ast & \ast & N_5+N_5^{\mathrm T} & N_6^{\mathrm T} & N_7^{\mathrm T} & 0\\
\ast & \ast & \ast & \ast & \ast & \Omega_{66} & LB & hA^{\mathrm T}Q_{22}\\
\ast & \ast & \ast & \ast & \ast & \ast & -2D_2 & hB^{\mathrm T}Q_{22}\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & -hQ_{22}
\end{bmatrix},$$
$$\Xi_1=\begin{bmatrix} M_1^{\mathrm T} & M_2^{\mathrm T} & M_3^{\mathrm T} & M_4^{\mathrm T} & M_5^{\mathrm T} & M_6^{\mathrm T} & M_7^{\mathrm T} & 0\\ M_8^{\mathrm T} & M_9^{\mathrm T} & M_{10}^{\mathrm T} & M_{11}^{\mathrm T} & M_{12}^{\mathrm T} & M_{13}^{\mathrm T} & M_{14}^{\mathrm T} & 0\end{bmatrix},$$
$$\Xi_2=\begin{bmatrix} N_1^{\mathrm T} & N_2^{\mathrm T} & N_3^{\mathrm T} & N_4^{\mathrm T} & N_5^{\mathrm T} & N_6^{\mathrm T} & N_7^{\mathrm T} & 0\\ N_8^{\mathrm T} & N_9^{\mathrm T} & N_{10}^{\mathrm T} & N_{11}^{\mathrm T} & N_{12}^{\mathrm T} & N_{13}^{\mathrm T} & N_{14}^{\mathrm T} & 0\end{bmatrix},$$
with
$$\begin{aligned}
\Omega_{11}&=-P_{11}C-C^{\mathrm T}P_{11}^{\mathrm T}+h(Q_{11}-Q_{12}C-C^{\mathrm T}Q_{12}^{\mathrm T})+R_2+M_8+M_8^{\mathrm T},\\
\Omega_{12}&=-M_8+M_9^{\mathrm T}+N_8,\\
\Omega_{22}&=-M_9-M_9^{\mathrm T}+N_9+N_9^{\mathrm T},\\
\Omega_{24}&=M_2-M_{11}^{\mathrm T}+N_{11}^{\mathrm T},\\
\Omega_{66}&=-2D_1+LA+A^{\mathrm T}L^{\mathrm T},
\end{aligned}$$
and the other $\Omega_{ij}$ are defined in Theorem 1.
4. Numerical examples
Example 1 (Liu and Chen [13]). Consider the neural network (3) with the following parameters:
$$C=\operatorname{diag}\{1.2769,\;0.6231,\;0.9230,\;0.4480\},$$
$$A=\begin{bmatrix}
-0.0373 & 0.4852 & -0.3351 & 0.2336\\
-1.6033 & 0.5988 & -0.3224 & 1.2352\\
0.3394 & -0.0860 & -0.3824 & -0.5785\\
-0.1311 & 0.3253 & -0.9534 & -0.5015
\end{bmatrix},$$
$$B=\begin{bmatrix}
0.8674 & -1.2405 & -0.5325 & 0.0220\\
0.0474 & -0.9164 & 0.0360 & 0.9816\\
1.8495 & 2.6117 & -0.3788 & 0.8428\\
-2.0413 & 0.5179 & 1.1734 & -0.2775
\end{bmatrix},$$
$$k_1=0.1137,\quad k_2=0.1279,\quad k_3=0.7994,\quad k_4=0.2368.$$
For this example, it can be checked that Theorem 1 in [1], Theorem 2 in [18], Theorem 1 in [9], and the stability condition in [15] are not satisfied, which means that they fail to conclude whether this system is asymptotically stable or not. However, when $\mu=0$, applying the results in [21,4,7,8,10–13] to this system yields the maximum allowable delay bounds $h=1.4224$ [21], $1.9321$ [4], $3.5841$ [8,10,13,11,7], and $3.5891$ [12], respectively, while by using Theorem 1 in this paper we obtain the larger upper bound $h=3.6237$. When the delay is time-varying, the delay bounds $h$ obtained for different $\mu$ by using Theorems 1 and 2 are listed in Table 1. For comparison, Table 1 also lists the delay bounds derived by the results in [10,13,7]. For this example, it is obvious that our results are less conservative than those in [1,18,15,21,4,7–13].
Example 2 (Hua et al. [10]). Consider the neural network (3) with time-varying delay and the following parameters:
$$C=\begin{bmatrix} 2 & 0\\ 0 & 2\end{bmatrix},\qquad A=\begin{bmatrix} 1 & 1\\ -1 & -1\end{bmatrix},\qquad B=\begin{bmatrix} 0.88 & 1\\ 1 & 1\end{bmatrix},\qquad k_1=0.4,\quad k_2=0.8.$$
For this example, when the time-varying delay $d(t)$ is not differentiable, the results in [4–6,9] are not applicable. However, we can obtain the delay bound $h=1.5103$ by using Theorem 2. A detailed comparison with the results in [10,13,7] is given in Table 2. For this example, it is seen that our results improve some existing results [5,6,9,4,10,13,7].
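As a sanity check on the conclusion of Theorem 2, one can simulate system (3) with the Example 2 data. The following rough forward-Euler sketch uses a delay $d(t)=h|\sin t|$ and activations $g_i(z)=k_i\tanh(z)$, which are our own illustrative choices (not from the paper): this $d(t)$ is continuous but not differentiable (Case 2), and each $g_i$ satisfies the sector condition (4) with slope bound $k_i$:

```python
import numpy as np

# Example 2 data (Hua et al. [10]).
C = np.diag([2.0, 2.0])
A = np.array([[1.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.88, 1.0], [1.0, 1.0]])
k = np.array([0.4, 0.8])
h = 1.5  # just below the bound h = 1.5103 obtained from Theorem 2

def g(x):
    # Assumed activation: g_i(z) = k_i * tanh(z) satisfies (4).
    return k * np.tanh(x)

dt = 1e-3
buf_len = int(h / dt) + 2                 # history buffer covering [t - h, t]
hist = [np.array([1.0, -0.8])] * buf_len  # arbitrary constant initial history

for i in range(int(80.0 / dt)):           # forward Euler integration
    x = hist[-1]
    d = h * abs(np.sin(i * dt))           # Case 2 delay: continuous, not differentiable
    lag = min(int(round(d / dt)), buf_len - 1)
    x_d = hist[-1 - lag]                  # delayed state x(t - d(t))
    hist.append(x + dt * (-C @ x + A @ g(x) + B @ g(x_d)))
    hist.pop(0)

# Final state norm; expected to be near zero, consistent with asymptotic stability.
print(np.linalg.norm(hist[-1]))
```

The discretization step and horizon are arbitrary; a dedicated delay-differential-equation integrator would be more accurate, but this crude scheme already illustrates the certified convergence.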
5. Conclusion
This paper has considered the asymptotic stability of neural networks with time-varying delay. By using the augmented Lyapunov functional method and by resorting to a new technique for estimating the upper bound of the derivative of the augmented Lyapunov functional, we obtain less conservative stability criteria for two cases of time-varying delays in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are given to show the effectiveness and benefits of the proposed method.
References
[1] S. Arik, Global asymptotic stability of a larger class of delayed cellular neural networks with constant delay, Phys. Lett. A 311 (2003) 504–511.
[2] J. Cao, J. Wang, Exponential stability and periodic oscillatory solution in BAM networks with delays, IEEE Trans. Neural Networks 13 (2002) 457–463.
[3] T. Chen, L. Rong, Delay-independent stability analysis of Cohen–Grossberg neural networks, Phys. Lett. A 317 (2003) 436–449.
[4] H.J. Cho, J.H. Park, Novel delay-dependent robust stability criterion of delayed cellular neural networks, Chaos, Solitons & Fractals 32 (2007) 1194–1200.
[5] T. Ensari, S. Arik, Global stability of a class of neural networks with time-varying delays, IEEE Trans. Circuits Syst. II, Exp. Briefs 52 (2005) 126–130.
[6] T. Ensari, S. Arik, Global stability analysis of neural networks with multiple time-varying delays, IEEE Trans. Autom. Control 50 (2005) 1781–1785.
[7] Y. He, G.P. Liu, D. Rees, New delay-dependent stability criteria for neural networks with time-varying delay, IEEE Trans. Neural Networks 18 (2007) 310–314.
[8] Y. He, Q.G. Wang, M. Wu, LMI-based stability criteria for neural networks with multiple time-varying delays, Physica D 212 (2005) 126–136.
[9] Y. He, Q.G. Wang, W.X. Zheng, Global robust stability for delayed neural networks with polytopic type uncertainties, Chaos, Solitons & Fractals 26 (2005) 1349–1354.
[10] C.C. Hua, C.N. Long, X.P. Guan, New results on stability analysis of neural networks with time-varying delays, Phys. Lett. A 352 (2006) 335–340.
[11] T. Li, L. Guo, C. Lin, C.Y. Sun, Robust stability for neural networks with time-varying delays and linear fractional uncertainties, Neurocomputing 71 (2007) 421–427.
[12] T. Li, L. Guo, C. Lin, C.Y. Sun, New results on global asymptotic stability analysis for neural networks with time-varying delays, Nonlinear Anal.: Real World Appl. (2007), doi:10.1016/j.nonrwa.2007.08.025.
[13] H.L. Liu, G.H. Chen, Delay-dependent stability for neural networks with time-varying delay, Chaos, Solitons & Fractals 33 (2007) 171–177.
[14] M.S. Mahmoud, Resilient Control of Uncertain Dynamical Systems, Springer, Berlin, 2004.
[15] J.H. Park, A new stability analysis of delayed cellular neural networks, Appl. Math. Comput. 181 (2006) 200–205.
[16] J.H. Park, H.J. Cho, A delay-dependent asymptotic stability criterion of cellular neural networks with time-varying discrete and distributed delays, Chaos, Solitons & Fractals 33 (2007) 436–442.
[17] V. Singh, A novel global robust stability criterion for delayed neural networks, Phys. Lett. A 337 (2005) 369–373.
[18] V. Singh, Simplified LMI condition for global asymptotic stability of delayed neural networks, Chaos, Solitons & Fractals 29 (2006) 470–473.
[19] Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and distributed delays, Phys. Lett. A 345 (2005) 299–308.
[20] S. Xu, Y. Chu, J. Lu, New results on global exponential stability of recurrent neural networks with time-varying delays, Phys. Lett. A 352 (2006) 371–379.
[21] S. Xu, J. Lam, D.W.C. Ho, Y. Zou, Novel global asymptotic stability criteria for delayed cellular neural networks, IEEE Trans. Circuits Syst. II, Exp. Briefs 52 (2005) 349–353.
[22] X.M. Zhang, M. Wu, J.H. She, Y. He, Delay-dependent stabilization of linear systems with time-varying state and input delays, Automatica 41 (2005) 1405–1412.
Yonggang Chen was born in 1981. He received the M.S. degree in control theory from Henan Normal University in 2006. He is currently a teacher at the Henan Institute of Science and Technology, PR China. His research interests include time-delay systems, neural networks, and robust control.

Yuanyuan Wu was born in 1982. She received the M.S. degree in control theory from Henan Normal University in 2006. She is now pursuing her Ph.D. degree at the Research Institute of Automation, Southeast University, China. Her research interests include nonlinear systems and neural networks.