Research Article
A One-Layer Recurrent Neural Network for Solving Pseudoconvex Optimization with Box Set Constraints
Huaiqin Wu, Rong Yao, Ruoxia Li, and Xiaowei Zhang
Department of Applied Mathematics, Yanshan University, Qinhuangdao 066001, China
Correspondence should be addressed to Rong Yao; yaorong304@163.com
Received 5 December 2013; Accepted 18 January 2014; Published 27 February 2014
Academic Editor: Wei Bian
Copyright © 2014 Huaiqin Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A one-layer recurrent neural network is developed to solve pseudoconvex optimization with box constraints. Compared with the existing neural networks for solving pseudoconvex optimization, the proposed neural network has a wider domain for implementation. Based on Lyapunov stability theory, the proposed neural network is proved to be stable in the sense of Lyapunov. By applying Clarke's nonsmooth analysis technique, the finite-time convergence of the state to the feasible region defined by the constraint conditions is also addressed. Illustrative examples further show the correctness of the theoretical results.
1 Introduction
It is well known that nonlinear optimization problems arise in a broad variety of scientific and engineering applications, including optimal control, structural design, image and signal processing, and robot control. Many nonlinear programming problems have a time-varying nature and have to be solved in real time. One promising approach to solving nonlinear programming problems in real time is to employ recurrent neural networks based on circuit implementation.
In the past two decades, neural networks for optimization have been studied extensively, and many good results have been obtained in the literature; see [1–19] and the references therein. In particular, Liang and Wang developed a recurrent neural network for solving nonlinear optimization with a continuously differentiable objective function and bound constraints in [4]. A projection neural network was proposed for solving nondifferentiable nonlinear programming problems by Xia et al. in [20]. In [9, 19], Xue and Bian developed subgradient-based neural networks for solving nonsmooth optimization problems with a convex or nonconvex objective function.
It should be noticed that many nonlinear programming problems can be formulated as nonconvex optimization problems and, among nonconvex programs, pseudoconvex programs, as a special case, are found to be more prevalent than other nonconvex programs. Pseudoconvex optimization problems have many applications in practice, such as fractional programming, computer vision, and production planning. Very recently, Liu et al. presented a one-layer recurrent neural network for solving pseudoconvex optimization subject to linear equality constraints in [1]. Hu and Wang proposed a recurrent neural network for solving pseudoconvex variational inequalities in [10]. Qin et al. proposed a new one-layer recurrent neural network for nonsmooth pseudoconvex optimization in [21].
Motivated by the works above, our objective in this paper is to develop a one-layer recurrent neural network for solving the pseudoconvex optimization problem subject to a box set constraint. The proposed network model is an improvement of the neural network model presented in [10]. To the best of our knowledge, there are few works treating the pseudoconvex optimization problem with a box set constraint.
For convenience, some notations are introduced as follows. R denotes the set of real numbers, R^n denotes the n-dimensional Euclidean space, and R^{m×n} denotes the set of all m × n real matrices. For any matrix A, A > 0 (A < 0) means that A is positive definite (negative definite), A^{-1} denotes the inverse of A, A^T denotes the transpose of A, and λ_max(A) and λ_min(A) denote the maximum and minimum eigenvalues
Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2014, Article ID 283092, 8 pages, http://dx.doi.org/10.1155/2014/283092
of A, respectively. Given vectors x = (x_1, …, x_n)^T, y = (y_1, …, y_n)^T ∈ R^n, ‖x‖ = (Σ_{i=1}^n x_i^2)^{1/2} and x^T y = Σ_{i=1}^n x_i y_i. ‖A‖ denotes the 2-norm of A; that is, ‖A‖ = √λ(A^T A), where λ(A^T A) denotes the spectral radius of A^T A. ẋ(t) denotes the derivative of x(t).

Given a set C ⊂ R^n, K[C] denotes the closure of the convex hull of C.
Let V: R^n → R be a locally Lipschitz continuous function. Clarke's generalized gradient of V at x is defined by

∂V(x) = K[{lim_{i→∞} ∇V(x_i) : lim_{i→∞} x_i = x, x_i ∈ R^n \ (Ω_V ∪ M)}],   (1)

where Ω_V ⊂ R^n is the set of Lebesgue measure zero on which ∇V does not exist, and M ⊂ R^n is an arbitrary set with measure zero. The set-valued map G(·) is said to have a closed (convex, compact) image if, for each x ∈ E, G(x) is closed (convex, compact).
The remainder of this paper is organized as follows. In Section 2, the related preliminary knowledge is given, and the problem formulation and the neural network model are described. In Section 3, the stability in the sense of Lyapunov and the finite-time convergence of the proposed neural network are proved. In Section 4, illustrative examples are given to show the effectiveness and the performance of the proposed neural network. Some conclusions are drawn in Section 5.
2 Model Description and Preliminaries
In this section, a one-layer recurrent neural network model is developed to solve pseudoconvex optimization with box constraints. Some definitions and properties concerning set-valued maps and nonsmooth analysis are also introduced.
Definition 1 (set-valued map). Suppose that to each point x of a set E ⊆ R^n there corresponds a nonempty set F(x) ⊂ R^n. Then x → F(x) is said to be a set-valued map from E to R^n.
Definition 2 (locally Lipschitz function). A function φ: R^n → R is called Lipschitz near x_0 if and only if there exist ε, δ > 0 such that, for any x_1, x_2 ∈ B(x_0, δ), |φ(x_1) − φ(x_2)| ≤ ε‖x_1 − x_2‖, where B(x_0, δ) = {x : ‖x − x_0‖ < δ}. The function φ: R^n → R is said to be locally Lipschitz in R^n if it is Lipschitz near any point x ∈ R^n.
Definition 3 (regularity). A function φ: R^n → R which is locally Lipschitz near x ∈ R^n is said to be regular at x if, for any direction v ∈ R^n, the one-sided directional derivative φ′(x; v) = lim_{ξ→0+} (φ(x + ξv) − φ(x))/ξ exists and φ°(x; v) = φ′(x; v), where φ°(x; v) denotes Clarke's generalized directional derivative. The function φ is said to be regular in R^n if it is regular at any x ∈ R^n.
Definition 4. A regular function f: R^n → R is said to be pseudoconvex on a set Ω if, for all x, y ∈ Ω with x ≠ y and all γ(x) ∈ ∂f(x),

γ(x)^T (y − x) ≥ 0 ⟹ f(y) ≥ f(x).   (2)
Definition 5. A function F: R^n → R^n is said to be pseudomonotone on a set Ω if, for all x, x′ ∈ Ω with x ≠ x′,

F(x)^T (x′ − x) ≥ 0 ⟹ F(x′)^T (x′ − x) ≥ 0.   (3)
Consider the following optimization problem with box set constraints:

min f(x)
subject to d ≤ Bx ≤ h,   (4)

where x, d, h ∈ R^n and B ∈ R^{n×n} is nonsingular. Substituting Bx with z, problem (4) can be transformed into the following problem:

min f(B^{-1} z)
subject to d ≤ z ≤ h.   (5)
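The substitution z = Bx that takes (4) to (5) is easy to sanity-check numerically; a minimal numpy sketch (the matrix B, the bounds, and all names here are illustrative, not data from the paper's examples):

```python
import numpy as np

# Illustrative nonsingular B and box bounds (not from the paper's examples).
B = np.array([[2.0, 1.0], [0.0, 1.0]])
d = np.array([-1.0, -1.0])
h = np.array([1.0, 1.0])

def feasible_x(x):
    """Original constraint of (4): d <= B x <= h."""
    z = B @ x
    return bool(np.all(d <= z) and np.all(z <= h))

def feasible_z(z):
    """After substituting z = B x, the constraint of (5) is a plain box."""
    return bool(np.all(d <= z) and np.all(z <= h))

x = np.array([0.25, 0.5])
z = B @ x                       # forward transform z = B x
x_back = np.linalg.solve(B, z)  # recover x = B^{-1} z without forming B^{-1}
assert feasible_x(x) == feasible_z(z)
assert np.allclose(x_back, x)
```

Solving the linear system instead of explicitly inverting B is only a numerical nicety; the analysis in the paper works with B^{-1} directly.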
Let

D(z) = Σ_{i=1}^n d(z_i),   (6)

where d(z_i), i = 1, 2, …, n, is defined as

d(z_i) = { z_i − h_i, if z_i ≥ h_i;  0, if d_i < z_i < h_i;  d_i − z_i, if z_i ≤ d_i }.   (7)

Obviously, d(z_i) ≥ 0 and D(z) ≥ 0.
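The penalty (6)-(7) measures the total violation of the box constraints and vanishes exactly on the feasible region; a direct transcription (function names are mine):

```python
import numpy as np

def d_component(z_i, lo, hi):
    """Componentwise penalty (7): distance of z_i to the interval [lo, hi]."""
    if z_i >= hi:
        return z_i - hi
    if z_i <= lo:
        return lo - z_i
    return 0.0

def D(z, d, h):
    """Penalty (6): sum of componentwise violations; D(z) = 0 iff d <= z <= h."""
    return sum(d_component(zi, lo, hi) for zi, lo, hi in zip(z, d, h))

d = np.array([-2.0, -1.0, -6.0])  # lower bounds borrowed from Example 1
h = np.array([2.0, 2.0, 6.0])     # upper bounds borrowed from Example 1
print(D(np.array([0.0, 0.0, 0.0]), d, h))   # inside the box -> 0.0
print(D(np.array([3.0, -2.0, 0.0]), d, h))  # violates twice -> 2.0
```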
Throughout this paper, the following assumptions on the optimization problem (4) are made.

(A1) The objective function f(x) of problem (4) is pseudoconvex, regular, and locally Lipschitz continuous.

(A2) ∂f(x) is bounded; that is,

sup_{γ(x)∈∂f(x)} ‖γ(x)‖ ≤ l_f,   (8)

where l_f > 0 is a constant.
In the following, we develop a one-layer recurrent neural network for solving problem (4). The dynamic equation of the proposed neural network model is described by the differential inclusion system

dz/dt ∈ −∂f(B^{-1} z) − μ K[g_{[d,h]}](z),   (9)
[Figure 1: Architecture of the neural network model (9).]
where μ is a nonnegative constant, ∂f(B^{-1}z) = B^{-1}∂f(x), and g_{[d,h]} is a discontinuous function with its components defined as

g_i^{[d,h]}(s) = { 1, if s > h_i;  0, if d_i < s < h_i;  −1, if s < d_i },

K[g_i^{[d,h]}](s) = { 1, if s > h_i;  [0, 1], if s = h_i;  0, if d_i < s < h_i;  [−1, 0], if s = d_i;  −1, if s < d_i }.   (10)

The architecture of the proposed neural network model (9) is depicted in Figure 1.
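Away from the switching points s = d_i and s = h_i, the set-valued activation in (10) is single-valued, so a numerical implementation can use the pointwise map below; the value 0 returned at the breakpoints is one admissible selection from [−1, 0] or [0, 1] (the bounds of Example 2 are reused purely for illustration):

```python
import numpy as np

def g_box(s, d, h):
    """Pointwise selection of K[g_[d,h]] in (10): +1 above h_i, -1 below d_i, 0 inside.
    At s_i == h_i (resp. s_i == d_i) any value in [0, 1] (resp. [-1, 0]) is
    admissible; this sketch chooses 0 there."""
    s, d, h = np.asarray(s, float), np.asarray(d, float), np.asarray(h, float)
    out = np.zeros_like(s)
    out[s > h] = 1.0
    out[s < d] = -1.0
    return out

d = np.array([1.0, 1.0, 3.0])  # bounds from Example 2
h = np.array([2.0, 4.0, 5.0])
print(g_box([0.0, 3.0, 6.0], d, h))  # componentwise: -1 (below), 0 (inside), 1 (above)
```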
Definition 6. z ∈ R^n is said to be an equilibrium point of the differential inclusion system (9) if

0 ∈ B^{-1} ∂f(x) + μ K[g_{[d,h]}](z);   (11)

that is, there exist

γ ∈ ∂f(x),  ξ ∈ K[g_{[d,h]}](z)   (12)

such that

B^{-1} γ + μ ξ = 0,   (13)

where x = B^{-1} z.
Definition 7. A function z(·): [0, T] → R^n is said to be a solution of the system (9) with initial condition z(0) = z_0 if z(·) is absolutely continuous on [0, T] and, for almost all t ∈ [0, T],

dz(t)/dt ∈ −∂f(B^{-1} z(t)) − μ K[g_{[d,h]}](z(t)).   (14)

Equivalently, there exist measurable functions γ(x) ∈ ∂f(x) and ξ(z) ∈ K[g_{[d,h]}](z) such that

ż(t) = −B^{-1} γ(B^{-1} z(t)) − μ ξ(z(t)).   (15)
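For a differentiable objective, a solution in the sense of Definition 7 can be approximated by a forward-Euler discretization of (15). The sketch below uses an illustrative smooth convex (hence pseudoconvex) objective; the step size, horizon, and all problem data are my assumptions, not the paper's:

```python
import numpy as np

def simulate(grad_f, B, d, h, z0, mu, dt=1e-3, steps=5000):
    """Forward-Euler discretization of (15):
    z_{k+1} = z_k + dt * ( -B^{-1} grad_f(B^{-1} z_k) - mu * xi(z_k) ),
    where xi is a single-valued selection of K[g_[d,h]]."""
    Binv = np.linalg.inv(B)
    z = np.asarray(z0, float).copy()
    for _ in range(steps):
        xi = np.where(z > h, 1.0, np.where(z < d, -1.0, 0.0))
        z += dt * (-Binv @ grad_f(Binv @ z) - mu * xi)
    return z

# Illustrative data: f(x) = ||x||^2 / 2 (so grad_f(x) = x), B = I.
B = np.eye(2)
d = np.array([1.0, 1.0]); h = np.array([2.0, 2.0])
z = simulate(lambda x: x, B, d, h, z0=[4.0, -3.0], mu=5.0)
print(z)  # the infeasible start is driven into (a small neighborhood of) [d, h]
```

The discretization chatters within O(dt) of the box boundary where the true solution slides, which is the usual behavior of explicit schemes on differential inclusions.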
Definition 8. Suppose that B ⊂ R^n is a nonempty closed convex set. The normal cone to the set B at z ∈ B is defined as N_B(z) = {v ∈ R^n : v^T (z − y) ≥ 0 for all y ∈ B}.
Lemma 9 (see [22]). If φ_i: R^n → R, i = 1, 2, …, m, are regular at x, then ∂(Σ_{i=1}^m φ_i(x)) = Σ_{i=1}^m ∂φ_i(x).
Lemma 10 (see [22]). If V: R^n → R is a regular function at x and x(·): R → R^n is differentiable at t and Lipschitz near t, then dV(x(t))/dt = ⟨ξ, ẋ(t)⟩ for all ξ ∈ ∂V(x(t)).
Lemma 11 (see [22]). If B_1, B_2 ⊂ R^n are closed convex sets and satisfy 0 ∈ int(B_1 − B_2), then, for any z ∈ B_1 ∩ B_2, N_{B_1 ∩ B_2}(z) = N_{B_1}(z) + N_{B_2}(z).
Lemma 12 (see [22]). If f is locally Lipschitz near z and attains a minimum over Ω at z, then 0 ∈ ∂f(z) + N_Ω(z).
Set Ω = {z ∈ R^n : d ≤ z ≤ h}. Let s ∈ int(Ω); then there exists a constant r > 0 such that Ω ⊆ B(s, r), where int(·) denotes the interior of the set Ω and B(s, r) = {z ∈ R^n : ‖z − s‖ ≤ r}. It is easy to verify the following lemma.

Lemma 13. For any z ∈ R^n \ Ω and ξ ∈ K[g_{[d,h]}](z), (z − s)^T ξ > ω, where ω = min_{1≤i≤n}{h_i − s_i, s_i − d_i} and s_i is the ith element of s ∈ int(Ω).
3 Main Results
In this section, the main results concerning the convergence and optimality conditions of the proposed neural network are addressed.
Theorem 14. Suppose that assumptions (A1) and (A2) hold. Let z_0 ∈ B(s, r). If μ > (r l_f / ω)‖B^{-1}‖, then the solution z(t) of the network system (9) with initial condition z(0) = z_0 satisfies z(t) ∈ B(s, r).
Proof. Set

ρ(t) = (1/2)‖z(t) − s‖².   (16)

By Lemma 10 and (15), evaluating the derivative of ρ(t) along the trajectory of the system (9) gives

dρ(t)/dt = ż(t)^T (z(t) − s)
= [−B^{-1} γ(x(t)) − μ ξ(z(t))]^T (z(t) − s)
= −[B^{-1} γ(x(t))]^T (z(t) − s) − μ ξ(z(t))^T (z(t) − s)
≤ ‖B^{-1}‖ ‖z(t) − s‖ ‖γ(x(t))‖ − μ (z(t) − s)^T ξ(z(t)).   (17)
If z(t) ∈ Ω, it follows directly that z(t) ∈ B(s, r). If z(t) ∈ B(s, r) \ Ω, according to Lemma 13, one gets that (z(t) − s)^T ξ(z(t)) > ω. Thus we have

dρ(t)/dt < ‖B^{-1}‖ ‖z(t) − s‖ ‖γ(x(t))‖ − μω ≤ r l_f ‖B^{-1}‖ − μω.   (18)

If μ > (r l_f / ω)‖B^{-1}‖, then dρ(t)/dt < 0; this means that z(t) ∈ B(s, r). If not so, the state z(t) leaves B(s, r) at some time t_1, and when t = t_1 we have ‖z(t_1) − s‖ = r. This implies that (dρ(t)/dt)|_{t=t_1} ≥ 0, which is a contradiction.

As a result, if μ > (r l_f / ω)‖B^{-1}‖, then, for any z_0 ∈ B(s, r), the state z(t) of the network system (9) with initial condition z(0) = z_0 satisfies z(t) ∈ B(s, r). This completes the proof.
Theorem 15. Suppose that assumptions (A1) and (A2) hold. If μ > l_f ‖B^{-1}‖, then the solution of neural network system (9) with initial condition z(0) = z_0 ∈ B(s, r) converges to the feasible region Ω in finite time T, where T = t* = D(z(0)) / (√n (μ − l_f ‖B^{-1}‖)), and stays there thereafter.
Proof. According to the definition of D(z), D(z) is a convex function on R^n. By Lemma 10, it follows that

dD(z)/dt = ζ(t)^T (dz(t)/dt)  for all ζ(t) ∈ ∂D(z).   (19)

Noting that ∂D(z) = K[g_{[d,h]}](z), we can obtain by (19) that, for all z ∈ B(s, r) \ Ω, there exist γ(x) ∈ ∂f(x) and ξ(t) ∈ K[g_{[d,h]}](z) such that

dD(z)/dt = ξ(t)^T ż(t)
= ξ(t)^T (−B^{-1} γ(x) − μ ξ(t))
= −ξ(t)^T B^{-1} γ(x) − μ ‖ξ(t)‖²
≤ ‖ξ(t)‖ ‖B^{-1}‖ ‖γ(x)‖ − μ ‖ξ(t)‖²
= ‖ξ(t)‖ [‖B^{-1}‖ ‖γ(x)‖ − μ ‖ξ(t)‖].   (20)

Since z ∈ R^n \ Ω and ξ(t) ∈ K[g_{[d,h]}](z), at least one of the components of ξ(t) is −1 or 1, so ‖ξ(t)‖ ≥ 1. Noting that ‖ξ(t)‖ ≤ √n, we have

dD(z)/dt ≤ √n [l_f ‖B^{-1}‖ − μ].   (21)

Let α = √n (μ − l_f ‖B^{-1}‖). If μ > l_f ‖B^{-1}‖, then α > 0 and

dD(z)/dt ≤ −α.   (22)

Integrating (22) from 0 to t, we can obtain

D(z(t)) ≤ D(z(0)) − α t.   (23)
Let t* = (1/α) D(z(0)). By (23), D(z(t*)) = 0; that is, d_i ≤ z_i(t*) ≤ h_i for each i. This shows that the state trajectory of neural network (9) with initial condition z(0) = z_0 reaches Ω in finite time T = t* = D(z(0)) / (√n (μ − l_f ‖B^{-1}‖)).

Next we prove that, when t ≥ t*, the trajectory stays in Ω after reaching Ω. If this is not true, then there exists t_1 > t* such that the trajectory leaves Ω at t_1, and there exists t_2 > t_1 such that, for t ∈ (t_1, t_2), z(t) ∈ R^n \ Ω. By integrating (22) from t_1 to t_2, it follows that

∫_{t_1}^{t_2} (dD(z(t))/dt) dt = D(z(t_2)) − D(z(t_1)) ≤ −α (t_2 − t_1) < 0.   (24)

Due to D(z(t_1)) = 0, D(z(t_2)) < 0. By the definition of D(z(t)), D(z(t)) ≥ 0 for any t ∈ [0, ∞), which contradicts the result above. The proof is completed.
Theorem 16. Suppose that assumptions (A1) and (A2) hold. If μ > max{l_f ‖B^{-1}‖, (r l_f / ω)‖B^{-1}‖}, then the equilibrium point of the neural network system (9) is an optimal solution of problem (4), and vice versa.
Proof. Denote by z* an equilibrium point of the neural network system (9); then there exist γ* ∈ ∂f(x*) and ξ* ∈ K[g_{[d,h]}](z*) such that

B^{-1} γ* + μ ξ* = 0,   (25)

where x* = B^{-1} z*. By Theorem 15, z* ∈ Ω; hence ξ* = 0 and, by (25), γ* = 0. We can get the following projection formulation:

z* = φ_{[d,h]}(z* − B^{-1} γ*),   (26)

where φ_{[d,h]} = (φ_{[d_1,h_1]}(y_1), …, φ_{[d_n,h_n]}(y_n)) with φ_{[d_i,h_i]}(y_i), i = 1, …, n, defined as

φ_{[d_i,h_i]}(y_i) = { h_i, if y_i ≥ h_i;  y_i, if d_i < y_i < h_i;  d_i, if y_i ≤ d_i }.   (27)
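Componentwise, the projection (27) is exactly a clamp of y_i onto [d_i, h_i]; with numpy this is a one-liner (the sample bounds are those of d and h in Example 1):

```python
import numpy as np

def phi_box(y, d, h):
    """Projection (27): clamp each component of y to [d_i, h_i]."""
    return np.clip(y, d, h)

d = np.array([-2.0, -1.0, -6.0])
h = np.array([2.0, 2.0, 6.0])
print(phi_box(np.array([3.0, 0.5, -9.0]), d, h))  # componentwise: 2, 0.5, -6
# A point already in the box is a fixed point of the projection:
print(phi_box(np.zeros(3), d, h))  # componentwise: 0, 0, 0
```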
By the well-known projection theorem [17], (26) is equivalent to the following variational inequality:

(B^{-1} γ*)^T (z − z*) ≥ 0  for all z ∈ Ω.   (28)

Since f(x) is pseudoconvex, f(B^{-1} z) is pseudoconvex on Ω. By (28), we can obtain that f(B^{-1} z) ≥ f(B^{-1} z*). This shows that z* is a minimum point of f(B^{-1} z) over Ω.
Next we prove the reverse side. Denote by z* an optimal solution of the problem; then z* ∈ [d, h]. Since z* is a minimum point of f(B^{-1} z) over the feasible region Ω, according to Lemma 12 it follows that

0 ∈ ∂f(B^{-1} z*) + N_Ω(z*).   (29)

From (29), it follows that there exists η* ∈ N_Ω(z*) such that B^{-1} γ* = −η* ∈ B^{-1} ∂f(x*) and ‖η*‖ ≤ l_f ‖B^{-1}‖. Noting that N_Ω(z*) = {ν ξ* : ν ≥ 0, ξ* ∈ K[g_{[d,h]}](z*), and at least one ξ*_i is 1 or −1}, there exist β ≥ 0 and ξ* ∈ K[g_{[d,h]}](z*) such that β ξ* ∈ N_Ω(z*) and η* = β ξ*.

In the following, we prove that η* ∈ μ K[g_{[d,h]}](z*). We claim that β ≤ μ. If not, then β > μ. Since (z* − s)^T ξ* = Σ_{i=1}^n (z*_i − s_i) ξ*_i ≥ ω, we have

(z* − s)^T η* = β (z* − s)^T ξ* ≥ ω β ≥ μ ω.   (30)

Thus ‖η*‖ ≥ μω / ‖z* − s‖ > μω / r. By the condition of Theorem 16, μ > max{l_f ‖B^{-1}‖, (r l_f / ω)‖B^{-1}‖}. Hence ‖η*‖ > l_f ‖B^{-1}‖, which contradicts ‖η*‖ ≤ l_f ‖B^{-1}‖. This implies that β ξ* ∈ μ K[g_{[d,h]}](z*); that is, 0 ∈ ∂f(B^{-1} z*) + μ K[g_{[d,h]}](z*), which means that z* is an equilibrium point of the neural network system (9). This completes the proof.
Theorem 17. Suppose that assumptions (A1) and (A2) hold. If μ > max{l_f ‖B^{-1}‖, (r l_f / ω)‖B^{-1}‖}, then the equilibrium point of the neural network system (9) is stable in the sense of Lyapunov.
Proof. Denote by z* an equilibrium point of the neural network system (9); that is,

0 ∈ ∂f(B^{-1} z*) + μ K[g_{[d,h]}](z*).   (31)

By Theorem 16, we get that z* is an optimal solution of problem (4); that is, f(B^{-1} z) ≥ f(B^{-1} z*) for z ∈ Ω. By Theorem 15, the trajectory z(t) with initial condition z(0) = z_0 ∈ B(s, r) converges to the feasible region Ω in finite time T = t* = D(z(0)) / (√n (μ − l_f ‖B^{-1}‖)) and will remain in Ω forever; that is, for all t ≥ t*, z(t) ∈ Ω. Let

V_1(z) = f(B^{-1} z) + μ D(z).   (32)

Since z* is a minimum point of f(B^{-1} z) on Ω, we can get that V_1(z) ≥ V_1(z*) for all z ∈ Ω.

Consider the following Lyapunov function:

V(z) = V_1(z) − V_1(z*) + (1/2)‖z − z*‖².   (33)

Obviously, from (33), V(z) ≥ (1/2)‖z − z*‖² and

∂V(z) = ∂V_1(z) + z − z*.   (34)

Evaluating the derivative of V along the trajectory of the system (9) gives

dV(z(t))/dt = ξ(t)^T (dz(t)/dt)  for all ξ(t) ∈ ∂V(z(t)).   (35)

Since ż(t) ∈ −∂V_1(z(t)), by (34) we can set ξ(t) = −ż(t) + z − z*. Hence

dV(z(t))/dt = (−ż(t) + z − z*)^T ż(t)
= −‖ż(t)‖² + (z − z*)^T ż(t)
≤ sup_{θ∈∂V_1(z)} (−‖θ‖²) + sup_{θ∈∂V_1(z)} [−(z − z*)^T θ]
= −inf_{θ∈∂V_1(z)} (‖θ‖²) − inf_{θ∈∂V_1(z)} (z − z*)^T θ.   (36)
For any θ ∈ ∂V_1(z), there exist γ ∈ ∂f(x) and ξ ∈ K[g_{[d,h]}](z) such that θ = B^{-1} γ + μ ξ. Since f(x) is pseudoconvex on Ω, ∂f(x) is pseudomonotone on Ω. From the proof of Theorem 16, for any z ∈ Ω, (z − z*)^T B^{-1} γ ≥ 0. By the definition of g_{[d,h]}(z), (z − z*)^T ξ ≥ 0. Hence (z − z*)^T θ ≥ 0. This implies that

dV(z(t))/dt ≤ −inf_{θ∈∂V_1(z)} ‖θ‖² ≤ 0.   (37)

Inequality (37) shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.
4 Numerical Examples
In this section, two examples are given to illustrate the effectiveness of the proposed approach for solving pseudoconvex optimization problems.
Example 1. Consider the quadratic fractional optimization problem

min f(x) = (x^T Q x + a^T x + a_0) / (c^T x + c_0)
subject to d ≤ Bx ≤ h,   (38)

where Q is an n × n matrix, a, c ∈ R^n, and a_0, c_0 ∈ R. Here we choose n = 3,

Q = [5 −1 2; −1 5 −1; 2 −1 3],  B = [2 1 1; 11 −16 11; 11 −92 −10],
a = (1, −2, −2)^T,  c = (2, 1, −1)^T,  d = (−2, −1, −6)^T,  h = (2, 2, 6)^T,
a_0 = −2,  c_0 = 4,   (39)

where the matrix rows are separated by semicolons.
It is easily verified that Q is symmetric and positive definite, and consequently f is pseudoconvex on Ω = {x : d ≤ Bx ≤ h}. The proposed neural network (9) is capable of solving this problem. Obviously, neural network (9) associated with (38) can be described as

B ẋ(t) = −B^{-1} ∇f(x) − μ ξ(B x(t)),   (40)
where

∇f(x) = (2Qx + a) / (c^T x + c_0) − ((x^T Q x + a^T x + a_0) / (c^T x + c_0)²) c,
ξ(B x(t)) ∈ K[g_{[d,h]}](B x(t)).   (41)
Let x̄ = (−1, −1, −1)^T and s = B x̄ ∈ int(Ω); then we have ω = 3. Moreover, the restricted region [d, h] ⊂ B(s, r) with r = 2√3. An upper bound of ∂f(x) is estimated as l_f = 3.
[Figure 2: Time-domain behavior of the state variables x_1, x_2, and x_3 with initial point x_0 = (0, 0, 0.3).]
[Figure 3: Time-domain behavior of the state variables x_1, x_2, and x_3 with initial point x_0 = (−0.2, 0, 0.4).]
Then the design parameter μ is estimated as μ > 10.32. Let μ = 11 in the simulation.

We have simulated the dynamical behavior of the neural network by using mathematical software with μ = 11. Figures 2, 3, and 4 display the state trajectories of this neural network with different initial values, which show that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectory is stable in the sense of Lyapunov.
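The gradient formula (41) can be checked numerically against central differences with the data of (39) as printed (the test point and tolerances are my choices; some matrix entries in (39) may have lost decimal points in typesetting, but that does not affect this check of the formula itself):

```python
import numpy as np

# Data of Example 1 as printed in (39).
Q = np.array([[5.0, -1.0, 2.0], [-1.0, 5.0, -1.0], [2.0, -1.0, 3.0]])
a = np.array([1.0, -2.0, -2.0]); c = np.array([2.0, 1.0, -1.0])
a0, c0 = -2.0, 4.0

def f(x):
    """Quadratic fractional objective (38)."""
    return (x @ Q @ x + a @ x + a0) / (c @ x + c0)

def grad_f(x):
    """Gradient (41) of the objective (38)."""
    den = c @ x + c0
    return (2 * Q @ x + a) / den - ((x @ Q @ x + a @ x + a0) / den**2) * c

# Central-difference check of (41) at a point where the denominator is nonzero.
x = np.array([0.3, -0.2, 0.1])
eps = 1e-6
num_grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(3)])
assert np.allclose(grad_f(x), num_grad, atol=1e-6)
print(grad_f(np.zeros(3)))  # analytic gradient at the origin
```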
Example 2. Consider the following pseudoconvex optimization problem:

min f(x) = (x_1 − x_3)² + (x_2 − x_3)²
subject to d ≤ Bx ≤ h,   (42)
[Figure 4: Time-domain behavior of the state variables x_1, x_2, and x_3 with initial point x_0 = (0.1, 0, 0.3).]
where

B = [1 2 6; 0 1 2; 2 3 1],  d = (1, 1, 3)^T,  h = (2, 4, 5)^T,   (43)

with the matrix rows separated by semicolons.
In this problem, the objective function f(x) is pseudoconvex. Thus the proposed neural network is suitable for solving the problem in this case. Neural network (9) associated with (42) can be described as

B ẋ(t) = −B^{-1} ∇f(x) − μ ξ(B x(t)),   (44)
where

∇f(x) = (2(x_1 − x_3), 2(x_2 − x_3), −2(x_1 + x_2 − 2x_3))^T,
ξ(B x(t)) ∈ K[g_{[d,h]}](B x(t)).   (45)
Let x̄ = (0, 1, 0)^T and s = B x̄ ∈ int(Ω); then we have ω = 2. Moreover, the restricted region [d, h] ⊂ B(s, r) with r = √13. An upper bound of ∂f(x) is estimated as l_f = 4√6. Then the design parameter μ is estimated as μ > 29.4. Let μ = 30 in the simulation.
Figures 5 and 6 display the state trajectories of this neural network with different initial values. It can be seen that these trajectories converge to the feasible region in finite time as well. This is in accordance with the conclusion of Theorem 15. It can be verified that the trajectory is stable in the sense of Lyapunov.
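A rough forward-Euler simulation of the z-dynamics of (9) for Example 2 reproduces the finite-time entry into the feasible box predicted by Theorem 15 (the integration scheme, step size, and starting point are my choices, not the paper's):

```python
import numpy as np

# Data of Example 2, see (42)-(45); mu = 30 as chosen in the text.
B = np.array([[1.0, 2.0, 6.0], [0.0, 1.0, 2.0], [2.0, 3.0, 1.0]])
d = np.array([1.0, 1.0, 3.0]); h = np.array([2.0, 4.0, 5.0])
mu = 30.0
Binv = np.linalg.inv(B)

def grad_f(x):
    """Gradient (45) of f(x) = (x1 - x3)^2 + (x2 - x3)^2."""
    return np.array([2*(x[0]-x[2]), 2*(x[1]-x[2]), -2*(x[0]+x[1]-2*x[2])])

def violation(z):
    """Penalty D(z) of (6)-(7): total distance to the box [d, h]."""
    return float(np.sum(np.maximum(z - h, 0.0) + np.maximum(d - z, 0.0)))

# Forward-Euler integration of the z-dynamics (9) from an infeasible start.
z = np.zeros(3)
dt, steps = 1e-4, 10000
for _ in range(steps):
    xi = np.where(z > h, 1.0, np.where(z < d, -1.0, 0.0))  # selection of K[g]
    z += dt * (-Binv @ grad_f(Binv @ z) - mu * xi)
print(z, violation(z))  # final state and its remaining box violation
```

The large gain mu dominates the drift term at the box boundary, so after the finite-time transient the discrete state chatters within O(dt) of Ω.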
5 Conclusion
In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization with box constraints. The neural network system model has been described by a differential inclusion system. The constructed recurrent neural network has been proved to be stable in the sense of Lyapunov. The conditions which ensure
[Figure 5: Time-domain behavior of the state variables x_1, x_2, and x_3 with initial point x_0 = (5.4, 6.5, −1.5).]
[Figure 6: Time-domain behavior of the state variables x_1, x_2, and x_3 with initial point x_0 = (−16.3, 19.6, −2.3).]
the finite-time convergence of the state to the feasible region have been obtained. The proposed neural network can be applied to a wide variety of optimization problems in engineering applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the Natural Science Foundation of Hebei Province of China (A2011203103) and the Hebei Province Education Foundation of China (2009157).
References

[1] Q. Liu, Z. Guo, and J. Wang, "A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization," Neural Networks, vol. 26, pp. 99–109, 2012.
[2] W. Lu and T. Chen, "Dynamical behaviors of delayed neural network systems with discontinuous activation functions," Neural Computation, vol. 18, no. 3, pp. 683–708, 2006.
[3] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
[4] X.-B. Liang and J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints," IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1251–1262, 2000.
[5] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
[6] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
[7] S. Qin and X. Xue, "Global exponential stability and global convergence in finite time of neural networks with discontinuous activations," Neural Processing Letters, vol. 29, no. 3, pp. 189–204, 2009.
[8] S. Effati, A. Ghomashi, and A. R. Nazemi, "Application of projection neural network in solving convex programming problems," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1103–1114, 2007.
[9] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
[10] X. Hu and J. Wang, "A recurrent neural network for solving a class of general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 3, pp. 528–539, 2007.
[11] X. Hu and J. Wang, "Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1487–1499, 2006.
[12] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I, vol. 40, no. 9, pp. 613–618, 1993.
[13] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, no. 4, pp. 629–641, 1994.
[14] J. Wang, "Primal and dual assignment networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 784–790, 1997.
[15] J. Wang, "Primal and dual neural networks for shortest-path routing," IEEE Transactions on Systems, Man, and Cybernetics A, vol. 28, no. 6, pp. 864–869, 1998.
[16] W. Bian and X. Xue, "Neural network for solving constrained convex optimization problems with global attractivity," IEEE Transactions on Circuits and Systems I, vol. 60, no. 3, pp. 710–723, 2013.
[17] W. Bian and X. Xue, "A dynamical approach to constrained nonsmooth convex minimization problem coupling with penalty function method in Hilbert space," Numerical Functional Analysis and Optimization, vol. 31, no. 11, pp. 1221–1253, 2010.
[18] Y. Xia and J. Wang, "A one-layer recurrent neural network for support vector machine learning," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 34, no. 2, pp. 1261–1269, 2004.
[19] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
[20] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
[21] S. Qin, W. Bian, and X. Xue, "A new one-layer recurrent neural network for non-smooth pseudoconvex optimization," Neurocomputing, vol. 120, pp. 655–662, 2013.
[22] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, NY, USA, 1983.
of $A$, respectively. Given vectors $x = (x_1, \ldots, x_n)^T$, $y = (y_1, \ldots, y_n)^T \in \mathbb{R}^n$, $\|x\| = (\sum_{i=1}^{n} x_i^2)^{1/2}$ and $x^T y = \sum_{i=1}^{n} x_i y_i$. $\|A\|$ denotes the 2-norm of $A$; that is, $\|A\| = \sqrt{\lambda(A^T A)}$, where $\lambda(A^T A)$ denotes the spectral radius of $A^T A$. $\dot{x}(t)$ denotes the derivative of $x(t)$.

Given a set $C \subset \mathbb{R}^n$, $K[C]$ denotes the closure of the convex hull of $C$.

Let $V: \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function. Clarke's generalized gradient of $V$ at $x$ is defined by

$\partial V(x) = K[\{\lim_{i\to\infty} \nabla V(x_i) : \lim_{i\to\infty} x_i = x,\ x_i \in \mathbb{R}^n \setminus (\Omega_V \cup \mathcal{M})\}]$,  (1)

where $\Omega_V \subset \mathbb{R}^n$ is the set of Lebesgue measure zero on which $\nabla V$ does not exist, and $\mathcal{M} \subset \mathbb{R}^n$ is an arbitrary set with measure zero. The set-valued map $G(\cdot)$ is said to have a closed (convex, compact) image if, for each $x \in E$, $G(x)$ is closed (convex, compact).
The remainder of this paper is organized as follows. In Section 2, the related preliminary knowledge is given, and the problem formulation and the neural network model are described. In Section 3, the stability in the sense of Lyapunov and the finite-time convergence of the proposed neural network are proved. In Section 4, illustrative examples are given to show the effectiveness and the performance of the proposed neural network. Some conclusions are drawn in Section 5.
2 Model Description and Preliminaries
In this section, a one-layer recurrent neural network model is developed to solve pseudoconvex optimization with box constraints. Some definitions and properties concerning set-valued maps and nonsmooth analysis are also introduced.
Definition 1 (set-valued map). Suppose that to each point $x$ of a set $E \subseteq \mathbb{R}^n$ there corresponds a nonempty set $F(x) \subset \mathbb{R}^n$. Then $x \mapsto F(x)$ is said to be a set-valued map from $E$ to $\mathbb{R}^n$.
Definition 2 (locally Lipschitz function). A function $\phi: \mathbb{R}^n \to \mathbb{R}$ is called Lipschitz near $x_0$ if and only if there exist $\varepsilon, \delta > 0$ such that for any $x_1, x_2 \in B(x_0, \delta)$, $|\phi(x_1) - \phi(x_2)| \le \varepsilon\|x_1 - x_2\|$, where $B(x_0, \delta) = \{x : \|x - x_0\| < \delta\}$. The function $\phi: \mathbb{R}^n \to \mathbb{R}$ is said to be locally Lipschitz in $\mathbb{R}^n$ if it is Lipschitz near any point $x \in \mathbb{R}^n$.
Definition 3 (regularity). A function $\phi: \mathbb{R}^n \to \mathbb{R}$ which is locally Lipschitz near $x \in \mathbb{R}^n$ is said to be regular at $x$ if the one-sided directional derivative exists for every direction $v \in \mathbb{R}^n$, given by $\phi'(x; v) = \lim_{\xi \to 0^+} (\phi(x + \xi v) - \phi(x))/\xi$, and $\phi^0(x; v) = \phi'(x; v)$, where $\phi^0(x; v)$ denotes Clarke's generalized directional derivative. The function $\phi$ is said to be regular in $\mathbb{R}^n$ if it is regular at every $x \in \mathbb{R}^n$.
Definition 4. A regular function $f: \mathbb{R}^n \to \mathbb{R}$ is said to be pseudoconvex on a set $\Omega$ if, for all $x, y \in \Omega$ with $x \ne y$ and all $\gamma(x) \in \partial f(x)$, we have

$\gamma(x)^T (y - x) \ge 0 \Longrightarrow f(y) \ge f(x)$.  (2)
Definition 5. A function $F: \mathbb{R}^n \to \mathbb{R}^n$ is said to be pseudomonotone on a set $\Omega$ if, for all $x, x' \in \Omega$ with $x \ne x'$, we have

$F(x)^T (x' - x) \ge 0 \Longrightarrow F(x')^T (x' - x) \ge 0$.  (3)
Consider the following optimization problem with a box set constraint:

min $f(x)$
subject to $d \le Bx \le h$,  (4)

where $x, d, h \in \mathbb{R}^n$ and $B \in \mathbb{R}^{n \times n}$ is nonsingular. Substituting $z$ for $Bx$, the problem (4) can be transformed into the following problem:

min $f(B^{-1}z)$
subject to $d \le z \le h$.  (5)
Let

$D(z) = \sum_{i=1}^{n} d(z_i)$,  (6)

where $d(z_i)$ is defined as

$d(z_i) = \begin{cases} z_i - h_i, & z_i \ge h_i, \\ 0, & d_i < z_i < h_i, \\ d_i - z_i, & z_i \le d_i. \end{cases}$  (7)

Obviously, $d(z_i) \ge 0$ and $D(z) \ge 0$.
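As an illustrative sketch (assuming NumPy; not part of the original text), the penalty term (6)–(7) is the componentwise $\ell_1$-distance to the box $[d, h]$:

```python
import numpy as np

def D(z, d, h):
    """Penalty (6)-(7): D(z) = sum_i d(z_i), zero exactly on the box [d, h]
    and equal to the l1-distance to the box outside of it."""
    z, d, h = (np.asarray(v, dtype=float) for v in (z, d, h))
    return float(np.sum(np.maximum(z - h, 0.0) + np.maximum(d - z, 0.0)))

d = [-2.0, -1.0, -6.0]; h = [2.0, 2.0, 6.0]   # the box used in Example 1 below
print(D([0.0, 0.0, 0.0], d, h))   # 0.0 (interior point)
print(D([3.0, 0.0, 0.0], d, h))   # 1.0 (one unit above h_1)
```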
Throughout this paper, the following assumptions on the optimization problem (4) are made:

(A1) The objective function $f(x)$ of the problem (4) is pseudoconvex, regular, and locally Lipschitz continuous.

(A2) $\partial f(x)$ is bounded; that is,

$\sup_{\gamma(x) \in \partial f(x)} \|\gamma(x)\| \le l_f$,  (8)

where $l_f > 0$ is a constant.
In the following, we develop a one-layer recurrent neural network for solving the problem (4). The dynamic equation of the proposed neural network model is described by the differential inclusion system

$\dfrac{dz}{dt} \in -\partial f(B^{-1}z) - \mu K[g_{[d,h]}](z)$.  (9)
[Figure 1: Architecture of the neural network model (9).]
where $\mu$ is a nonnegative constant, $\partial f(B^{-1}z) = B^{-1}\partial f(x)$, and $g_{[d,h]}$ is a discontinuous function with its components defined as

$g^i_{[d,h]}(s) = \begin{cases} 1, & s > h_i, \\ 0, & d_i < s < h_i, \\ -1, & s < d_i, \end{cases} \qquad K[g^i_{[d,h]}](s) = \begin{cases} 1, & s > h_i, \\ [0, 1], & s = h_i, \\ 0, & d_i < s < h_i, \\ [-1, 0], & s = d_i, \\ -1, & s < d_i. \end{cases}$  (10)
The architecture of the proposed neural network system model (9) is depicted in Figure 1.
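For illustration (an added sketch, not from the original), the discontinuous activation in (10) and its convexified counterpart can be written componentwise, with $K[g^i_{[d,h]}]$ returned as an interval:

```python
def g_i(s, d_i, h_i):
    """Discontinuous activation g^i_{[d,h]}(s) of (10)."""
    if s > h_i:
        return 1.0
    if s < d_i:
        return -1.0
    return 0.0

def K_g_i(s, d_i, h_i):
    """Convex closure K[g^i_{[d,h]}](s) of (10), as an interval (lo, hi)."""
    if s > h_i:
        return (1.0, 1.0)
    if s == h_i:
        return (0.0, 1.0)
    if s == d_i:
        return (-1.0, 0.0)
    if s < d_i:
        return (-1.0, -1.0)
    return (0.0, 0.0)

print(g_i(2.5, 1.0, 2.0), K_g_i(2.0, 1.0, 2.0))  # 1.0 (0.0, 1.0)
```

At the two breakpoints $s = d_i$ and $s = h_i$ the map is interval-valued, which is exactly what makes (9) a differential inclusion rather than an ordinary differential equation.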
Definition 6. $z \in \mathbb{R}^n$ is said to be an equilibrium point of the differential inclusion system (9) if

$0 \in B^{-1}\partial f(x) + \mu K[g_{[d,h]}](z)$;  (11)

that is, there exist

$\gamma \in \partial f(x)$, $\quad \xi \in K[g_{[d,h]}](z)$  (12)

such that

$B^{-1}\gamma + \mu\xi = 0$,  (13)

where $x = B^{-1}z$.
Definition 7. A function $z(\cdot): [0, T] \to \mathbb{R}^n$ is said to be a solution of the system (9) with initial condition $z(0) = z_0$ if $z(\cdot)$ is absolutely continuous on $[0, T]$ and, for almost all $t \in [0, T]$,

$\dfrac{dz(t)}{dt} \in -\partial f(B^{-1}z(t)) - \mu K[g_{[d,h]}](z(t))$.  (14)

Equivalently, there exist measurable functions $\gamma(x) \in \partial f(x)$ and $\xi(z) \in K[g_{[d,h]}](z)$ such that

$\dot{z}(t) = -B^{-1}\gamma(B^{-1}z(t)) - \mu\xi(z(t))$.  (15)
Definition 8. Suppose that $B \subset \mathbb{R}^n$ is a nonempty closed convex set. The normal cone to the set $B$ at $z \in B$ is defined as $N_B(z) = \{v \in \mathbb{R}^n : v^T(z - y) \ge 0 \text{ for all } y \in B\}$.
Lemma 9 (see [22]). If $\phi_i: \mathbb{R}^n \to \mathbb{R}$, $i = 1, 2, \ldots, m$, are regular at $x$, then $\partial(\sum_{i=1}^{m} \phi_i(x)) = \sum_{i=1}^{m} \partial\phi_i(x)$.
Lemma 10 (see [22]). If $V: \mathbb{R}^n \to \mathbb{R}$ is a regular function at $x$ and $x(\cdot): \mathbb{R} \to \mathbb{R}^n$ is differentiable at $t$ and Lipschitz near $t$, then $dV(x(t))/dt = \langle \xi, \dot{x}(t) \rangle$ for all $\xi \in \partial V(x(t))$.
Lemma 11 (see [22]). If $B_1, B_2 \subset \mathbb{R}^n$ are closed convex sets and satisfy $0 \in \operatorname{int}(B_1 - B_2)$, then for any $z \in B_1 \cap B_2$, $N_{B_1 \cap B_2}(z) = N_{B_1}(z) + N_{B_2}(z)$.
Lemma 12 (see [22]). If $f$ is locally Lipschitz near $z$ and attains a minimum over $\Omega$ at $z$, then $0 \in \partial f(z) + N_\Omega(z)$.
Set $\Omega = \{z \in \mathbb{R}^n : d \le z \le h\}$, and let $s \in \operatorname{int}(\Omega)$; then there exists a constant $r > 0$ such that $\Omega \subseteq B(s, r)$, where $\operatorname{int}(\cdot)$ denotes the interior of the set and $B(s, r) = \{z \in \mathbb{R}^n : \|z - s\| \le r\}$. It is easy to verify the following lemma.
Lemma 13. For any $z \in \mathbb{R}^n \setminus \Omega$ and $\xi \in K[g_{[d,h]}](z)$, $(z - s)^T\xi > \omega$, where $\omega = \min_{1 \le i \le n}\{h_i - s_i, s_i - d_i\}$ and $s_i$ is the $i$th element of $s \in \operatorname{int}(\Omega)$.
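Lemma 13 can be spot-checked numerically. The following sketch (with illustrative box data, not from the paper) samples points outside $\Omega$ and verifies $(z - s)^T\xi > \omega$ for the almost-everywhere selection $\xi = g_{[d,h]}(z)$:

```python
import numpy as np

d = np.array([-1.0, -2.0]); h = np.array([3.0, 2.0])   # illustrative box
s = np.array([1.0, 0.0])                               # interior point of Omega
omega = min(np.min(h - s), np.min(s - d))              # omega = 2.0 here

def g(z):
    # a.e. selection from K[g_[d,h]]: +1 above h_i, -1 below d_i, 0 inside
    return np.where(z > h, 1.0, 0.0) + np.where(z < d, -1.0, 0.0)

rng = np.random.default_rng(0)
violations = 0
for _ in range(1000):
    z = rng.uniform(-6.0, 6.0, size=2)
    if np.all((z > d) & (z < h)):          # skip points inside Omega
        continue
    if (z - s) @ g(z) <= omega - 1e-9:     # Lemma 13 would be violated
        violations += 1
print(violations)  # 0
```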
3 Main Results
In this section, the main results concerning the convergence and optimality conditions of the proposed neural network are addressed.
Theorem 14. Suppose that the assumptions (A1) and (A2) hold, and let $z_0 \in B(s, r)$. If $\mu > (r l_f/\omega)\|B^{-1}\|$, then the solution $z(t)$ of the network system (9) with initial condition $z(0) = z_0$ satisfies $z(t) \in B(s, r)$.
Proof. Set

$\rho(t) = \frac{1}{2}\|z(t) - s\|^2$.  (16)

By Lemma 10 and (15), evaluating the derivative of $\rho(t)$ along the trajectory of the system (9) gives

$\dfrac{d\rho(t)}{dt} = \dot{z}(t)^T(z(t) - s) = [-B^{-1}\gamma(x(t)) - \mu\xi(z(t))]^T(z(t) - s) = -[B^{-1}\gamma(x(t))]^T(z(t) - s) - \mu\xi(z(t))^T(z(t) - s) \le \|B^{-1}\|\,\|z(t) - s\|\,\|\gamma(x(t))\| - \mu(z(t) - s)^T\xi(z(t))$.  (17)

If $z(t) \in \Omega$, it follows directly that $z(t) \in B(s, r)$. If $z(t) \in B(s, r) \setminus \Omega$, according to Lemma 13 one gets $(z(t) - s)^T\xi(z(t)) > \omega$. Thus we have

$\dfrac{d\rho(t)}{dt} < \|B^{-1}\|\,\|z(t) - s\|\,\|\gamma(x(t))\| - \mu\omega \le r l_f\|B^{-1}\| - \mu\omega$.  (18)

If $\mu > (r l_f/\omega)\|B^{-1}\|$, then $d\rho(t)/dt < 0$, which means that $z(t) \in B(s, r)$. If not, the state $z(t)$ leaves $B(s, r)$ at some time $t_1$, and at $t = t_1$ we have $\|z(t_1) - s\| = r$. This implies that $(d\rho(t)/dt)|_{t=t_1} \ge 0$, which is a contradiction.

As a result, if $\mu > (r l_f/\omega)\|B^{-1}\|$, then for any $z_0 \in B(s, r)$, the state $z(t)$ of the network system (9) with initial condition $z(0) = z_0$ satisfies $z(t) \in B(s, r)$. This completes the proof.
Theorem 15. Suppose that assumptions (A1) and (A2) hold. If $\mu > l_f\|B^{-1}\|$, then the solution of the neural network system (9) with initial condition $z(0) = z_0 \in B(s, r)$ converges to the feasible region $\Omega$ in finite time $T = t^* = D(z(0))/(\sqrt{n}(\mu - l_f\|B^{-1}\|))$ and stays there thereafter.
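The finite-time bound of Theorem 15 is straightforward to evaluate. The sketch below (illustrative toy data, not from the paper) computes $T = D(z(0))/(\sqrt{n}(\mu - l_f\|B^{-1}\|))$:

```python
import numpy as np

def reach_time_bound(z0, d, h, mu, l_f, B):
    """Upper bound T of Theorem 15 on the time to reach the box [d, h]."""
    z0, d, h = (np.asarray(v, dtype=float) for v in (z0, d, h))
    n = z0.size
    # D(z(0)) from (6)-(7)
    D0 = float(np.sum(np.maximum(z0 - h, 0.0) + np.maximum(d - z0, 0.0)))
    # alpha = sqrt(n) * (mu - l_f * ||B^{-1}||), spectral norm for ||.||
    alpha = np.sqrt(n) * (mu - l_f * np.linalg.norm(np.linalg.inv(B), 2))
    assert alpha > 0, "Theorem 15 requires mu > l_f * ||B^{-1}||"
    return D0 / alpha

# toy data: B = I, l_f = 1, mu = 2, start two units above the box [0, 1]
T = reach_time_bound([3.0], [0.0], [1.0], mu=2.0, l_f=1.0, B=np.eye(1))
print(T)   # D(z0) = 2, alpha = sqrt(1)*(2 - 1) = 1, so T = 2.0
```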
Proof. According to the definition of $D(z)$, $D(z)$ is a convex function on $\mathbb{R}^n$. By Lemma 10, it follows that

$\dfrac{dD(z)}{dt} = \zeta^T(t)\dfrac{dz(t)}{dt}, \quad \forall \zeta(t) \in \partial D(z)$.  (19)

Noting that $\partial D(z) = K[g_{[d,h]}](z)$, we can obtain by (19) that, for all $z \in B(s, r) \setminus \Omega$, there exist $\gamma(x) \in \partial f(x)$ and $\xi(t) \in K[g_{[d,h]}](z)$ such that

$\dfrac{dD(z)}{dt} = \xi^T(t)\dot{z}(t) = \xi^T(t)(-B^{-1}\gamma(x) - \mu\xi(t)) = -\xi^T(t)B^{-1}\gamma(x) - \mu\|\xi(t)\|^2 \le \|\xi(t)\|\,\|B^{-1}\|\,\|\gamma(x)\| - \mu\|\xi(t)\|^2 \le \|\xi(t)\|\,[\|B^{-1}\|\,\|\gamma(x)\| - \mu\|\xi(t)\|]$.  (20)
Since $z \in \mathbb{R}^n \setminus \Omega$ and $\xi(t) \in K[g_{[d,h]}](z)$, at least one of the components of $\xi(t)$ is $-1$ or $1$, so $\|\xi(t)\| \ge 1$. Noting that $\|\xi(t)\| \le \sqrt{n}$, we have

$\dfrac{dD(z)}{dt} \le \sqrt{n}\,[l_f\|B^{-1}\| - \mu]$.  (21)

Let $\alpha = \sqrt{n}(\mu - l_f\|B^{-1}\|)$. If $\mu > l_f\|B^{-1}\|$, then $\alpha > 0$ and

$\dfrac{dD(z)}{dt} \le -\alpha$.  (22)

Integrating (22) from $0$ to $t$, we can obtain

$D(z(t)) \le D(z(0)) - \alpha t$.  (23)

Let $t^* = (1/\alpha)D(z(0))$. By (23), $D(z(t^*)) = 0$; that is, $d_i \le z_i(t^*) \le h_i$ for each $i$. This shows that the state trajectory of the neural network (9) with initial condition $z(0) = z_0$ reaches $\Omega$ in finite time $T = t^* = D(z(0))/(\sqrt{n}(\mu - l_f\|B^{-1}\|))$.

Next we prove that, when $t \ge t^*$, the trajectory stays in $\Omega$ after reaching $\Omega$. If this is not true, then there exists $t_1 > t^*$ such that the trajectory leaves $\Omega$ at $t_1$, and there exists $t_2 > t_1$ such that for $t \in (t_1, t_2)$, $z(t) \in \mathbb{R}^n \setminus \Omega$. By integrating (22) from $t_1$ to $t_2$, it follows that

$\int_{t_1}^{t_2} \dfrac{dD(z(t))}{dt}\,dt = D(z(t_2)) - D(z(t_1)) \le -\alpha(t_2 - t_1) < 0$.  (24)

Since $D(z(t_1)) = 0$, this gives $D(z(t_2)) < 0$. But by the definition of $D(z(t))$, $D(z(t)) \ge 0$ for any $t \in [0, \infty)$, which contradicts the result above. The proof is completed.
Theorem 16. Suppose that assumptions (A1) and (A2) hold. If $\mu > \max\{l_f\|B^{-1}\|, (r l_f/\omega)\|B^{-1}\|\}$, then the equilibrium point of the neural network system (9) is an optimal solution of the problem (4), and vice versa.
Proof. Denote by $z^*$ an equilibrium point of the neural network system (9); then there exist $\gamma^* \in \partial f(x^*)$ and $\xi^* \in K[g_{[d,h]}](z^*)$ such that

$B^{-1}\gamma^* + \mu\xi^* = 0$,  (25)

where $x^* = B^{-1}z^*$. By Theorem 15, $z^* \in \Omega$; hence $\xi^* = 0$, and by (25), $\gamma^* = 0$. We can get the following projection formulation:

$z^* = \phi_{[d,h]}(z^* - B^{-1}\gamma^*)$,  (26)

where $\phi_{[d,h]} = (\phi_{[d_1,h_1]}(y_1), \ldots, \phi_{[d_n,h_n]}(y_n))$, with $\phi_{[d_i,h_i]}(y_i)$, $i = 1, \ldots, n$, defined as

$\phi_{[d_i,h_i]}(y_i) = \begin{cases} h_i, & y_i \ge h_i, \\ y_i, & d_i < y_i < h_i, \\ d_i, & y_i \le d_i. \end{cases}$  (27)
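As an aside (an added sketch, not part of the original proof), the componentwise projection (27) is an ordinary clamp onto $[d, h]$, assuming NumPy:

```python
import numpy as np

def phi_box(y, d, h):
    """Componentwise projection phi_{[d,h]} of (27): clamp y onto [d, h]."""
    return np.clip(np.asarray(y, dtype=float), d, h)

d = [-2.0, -1.0, -6.0]; h = [2.0, 2.0, 6.0]
print(phi_box([3.0, -5.0, 0.5], d, h).tolist())   # [2.0, -1.0, 0.5]
```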
By the well-known projection theorem [17], (26) is equivalent to the following variational inequality:

$(B^{-1}\gamma^*)^T(z - z^*) \ge 0, \quad \forall z \in \Omega$.  (28)

Since $f(x)$ is pseudoconvex, $f(B^{-1}z)$ is pseudoconvex on $\Omega$. By (28), we can obtain that $f(B^{-1}z) \ge f(B^{-1}z^*)$. This shows that $z^*$ is a minimum point of $f(B^{-1}z)$ over $\Omega$.
Next we prove the converse. Denote by $z^*$ an optimal solution of the problem; then $z^* \in [d, h]$. Since $z^*$ is a minimum point of $f(B^{-1}z)$ over the feasible region $\Omega$, according to Lemma 12 it follows that

$0 \in \partial f(B^{-1}z^*) + N_\Omega(z^*)$.  (29)

From (29), it follows that there exists $\eta^* \in N_\Omega(z^*)$ with $B^{-1}\gamma^* = -\eta^* \in B^{-1}\partial f(x^*)$ and $\|\eta^*\| \le l_f\|B^{-1}\|$. Noting that $N_\Omega(z^*) = \{v\xi^* : v \ge 0,\ \xi^* \in K[g_{[d,h]}](z^*),\ \text{and at least one } \xi^*_i \text{ is } 1 \text{ or } -1\}$, there exist $\beta \ge 0$ and $\xi^* \in K[g_{[d,h]}](z^*)$ such that $\beta\xi^* \in N_\Omega(z^*)$ and $\eta^* = \beta\xi^*$.

In the following, we prove that $\eta^* \in \mu K[g_{[d,h]}](z^*)$. We claim that $\beta \le \mu$. If not, then $\beta > \mu$. Since $(z^* - s)^T\xi^* = \sum_{i=1}^{n}(z^*_i - s_i)\xi_i \ge \omega$, we have

$(z^* - s)^T\eta^* = \beta(z^* - s)^T\xi^* \ge \omega\beta > \mu\omega$.  (30)

Thus $\|\eta^*\| \ge \mu\omega/\|z^* - s\| \ge \mu\omega/r$. By the condition of Theorem 16, $\mu > \max\{l_f\|B^{-1}\|, (r l_f/\omega)\|B^{-1}\|\}$; hence $\|\eta^*\| > l_f\|B^{-1}\|$, which contradicts $\|\eta^*\| \le l_f\|B^{-1}\|$. This implies that $\beta\xi^* \in \mu K[g_{[d,h]}](z^*)$; that is, $0 \in \partial f(B^{-1}z^*) + \mu K[g_{[d,h]}](z^*)$, which means that $z^*$ is an equilibrium point of the neural network system (9). This completes the proof.
Theorem 17. Suppose that assumptions (A1) and (A2) hold. If $\mu > \max\{l_f\|B^{-1}\|, (r l_f/\omega)\|B^{-1}\|\}$, then the equilibrium point of the neural network system (9) is stable in the sense of Lyapunov.
Proof. Denote by $z^*$ an equilibrium point of the neural network system (9); that is,

$0 \in \partial f(B^{-1}z^*) + \mu K[g_{[d,h]}](z^*)$.  (31)

By Theorem 16, $z^*$ is an optimal solution of the problem (4); that is, $f(B^{-1}z) \ge f(B^{-1}z^*)$ for $z \in \Omega$. By Theorem 15, the trajectory $z(t)$ with initial condition $z(0) = z_0 \in B(s, r)$ converges to the feasible region $\Omega$ in finite time $T = t^* = D(z(0))/(\sqrt{n}(\mu - l_f\|B^{-1}\|))$ and remains in $\Omega$ thereafter; that is, $z(t) \in \Omega$ for all $t \ge t^*$. Let

$V_1(z) = f(B^{-1}z) + \mu D(z)$.  (32)

Since $z^*$ is a minimum point of $f(B^{-1}z)$ on $\Omega$, we can get that $V_1(z) \ge V_1(z^*)$ for all $z \in \Omega$.

Consider the following Lyapunov function:

$V(z) = V_1(z) - V_1(z^*) + \frac{1}{2}\|z - z^*\|^2$.  (33)

Obviously, from (33), $V(z) \ge (1/2)\|z - z^*\|^2$ and

$\partial V(z) = \partial V_1(z) + z - z^*$.  (34)

Evaluating the derivative of $V$ along the trajectory of the system (9) gives

$\dfrac{dV(z(t))}{dt} = \xi^T(t)\dfrac{dz(t)}{dt}, \quad \forall \xi(t) \in \partial V(z(t))$.  (35)

Since $\dot{z}(t) \in -\partial V_1(z(t))$, by (34) we can set $\xi(t) = -\dot{z}(t) + z - z^*$. Hence

$\dfrac{dV(z(t))}{dt} = (-\dot{z}(t) + z - z^*)^T\dot{z}(t) = -\|\dot{z}(t)\|^2 + (z - z^*)^T\dot{z}(t) \le \sup_{\theta \in \partial V_1(z)}(-\|\theta\|^2) + \sup_{\theta \in \partial V_1(z)}[-(z - z^*)^T\theta] = -\inf_{\theta \in \partial V_1(z)}(\|\theta\|^2) - \inf_{\theta \in \partial V_1(z)}(z - z^*)^T\theta$.  (36)

For any $\theta \in \partial V_1(z)$, there exist $\gamma \in \partial f(x)$ and $\xi \in K[g_{[d,h]}](z)$ such that $\theta = B^{-1}\gamma + \mu\xi$. Since $f(x)$ is pseudoconvex on $\Omega$, $\partial f(x)$ is pseudomonotone on $\Omega$. From the proof of Theorem 16, for any $z \in \Omega$, $(z - z^*)^T B^{-1}\gamma \ge 0$. By the definition of $g_{[d,h]}(z)$, $(z - z^*)^T\xi \ge 0$. Hence $(z - z^*)^T\theta \ge 0$. This implies that

$\dfrac{dV(z(t))}{dt} \le -\inf_{\theta \in \partial V_1(z)}\|\theta\|^2 \le 0$.  (37)

Inequality (37) shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.
4 Numerical Examples
In this section, two examples are given to illustrate the effectiveness of the proposed approach for solving pseudoconvex optimization problems.
Example 1. Consider the quadratic fractional optimization problem

min $f(x) = \dfrac{x^T Q x + a^T x + a_0}{c^T x + c_0}$
subject to $d \le Bx \le h$,  (38)

where $Q$ is an $n \times n$ matrix, $a, c \in \mathbb{R}^n$, and $a_0, c_0 \in \mathbb{R}$. Here we choose $n = 3$,

$Q = \begin{pmatrix} 5 & -1 & 2 \\ -1 & 5 & -1 \\ 2 & -1 & 3 \end{pmatrix}$, $\quad B = \begin{pmatrix} 2 & 1 & 1 \\ 11 & -16 & 11 \\ 11 & -92 & -10 \end{pmatrix}$,

$a = \begin{pmatrix} 1 \\ -2 \\ -2 \end{pmatrix}$, $\quad c = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}$, $\quad d = \begin{pmatrix} -2 \\ -1 \\ -6 \end{pmatrix}$, $\quad h = \begin{pmatrix} 2 \\ 2 \\ 6 \end{pmatrix}$, $\quad a_0 = -2$, $\quad c_0 = 4$.  (39)
It is easily verified that $Q$ is symmetric and positive definite, and consequently $f$ is pseudoconvex on $\Omega = \{x : d \le Bx \le h\}$; the proposed neural network in (9) is therefore capable of solving this problem. Obviously, the neural network (9) associated with (38) can be described as

$B\dot{x}(t) = -B^{-1}\nabla f(x(t)) - \mu\xi(Bx(t))$,  (40)
where

$\nabla f(x) = \dfrac{2Qx + a}{c^T x + c_0} - \dfrac{x^T Q x + a^T x + a_0}{(c^T x + c_0)^2}\,c$, $\quad \xi(Bx(t)) \in K[g_{[d,h]}](Bx(t))$.  (41)
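As a sanity check on (41) (an added illustrative sketch, not part of the original text), the gradient of the quadratic fractional objective can be compared against central finite differences at an arbitrary test point:

```python
import numpy as np

Q = np.array([[5., -1., 2.], [-1., 5., -1.], [2., -1., 3.]])
a = np.array([1., -2., -2.]); c = np.array([2., 1., -1.])
a0, c0 = -2.0, 4.0

def f(x):
    return (x @ Q @ x + a @ x + a0) / (c @ x + c0)

def grad_f(x):
    # quotient rule, formula (41); Q symmetric gives grad(x^T Q x) = 2 Q x
    denom = c @ x + c0
    return (2 * Q @ x + a) / denom - ((x @ Q @ x + a @ x + a0) / denom**2) * c

x = np.array([0.1, -0.3, 0.2]); eps = 1e-6   # arbitrary test point
fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(3)])
print(bool(np.allclose(fd, grad_f(x), atol=1e-5)))   # True
```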
Let $x = (-1, -1, -1)^T$ and $s = Bx \in \operatorname{int}(\Omega)$; then we have $\omega = 3$. Moreover, the restricted region $[d, h] \subset B(s, r)$ with $r = 2\sqrt{3}$. An upper bound of $\|\partial f(x)\|$ is estimated as $l_f = 3$.
[Figure 2: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (0, 0, 0.3)^T$.]
[Figure 3: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (-0.2, 0, 0.4)^T$.]
Then the design parameter $\mu$ is estimated as $\mu > 10.32$; let $\mu = 11$ in the simulation.

We have simulated the dynamical behavior of the neural network using mathematical software with $\mu = 11$. Figures 2, 3, and 4 display the state trajectories of this neural network with different initial values, which show that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectory is stable in the sense of Lyapunov.
Example 2. Consider the following pseudoconvex optimization problem:

min $f(x) = (x_1 - x_3)^2 + (x_2 - x_3)^2$
subject to $d \le Bx \le h$,  (42)
[Figure 4: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (0.1, 0, 0.3)^T$.]
where

$B = \begin{pmatrix} 1 & 2 & 6 \\ 0 & 1 & 2 \\ 2 & 3 & 1 \end{pmatrix}$, $\quad d = \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix}$, $\quad h = \begin{pmatrix} 2 \\ 4 \\ 5 \end{pmatrix}$.  (43)
In this problem, the objective function $f(x)$ is pseudoconvex; thus the proposed neural network is suitable for solving the problem in this case. The neural network (9) associated with (42) can be described as

$B\dot{x}(t) = -B^{-1}\nabla f(x(t)) - \mu\xi(Bx(t))$,  (44)

where

$\nabla f(x) = \begin{pmatrix} 2(x_1 - x_3) \\ 2(x_2 - x_3) \\ -2(x_1 + x_2 - 2x_3) \end{pmatrix}$, $\quad \xi(Bx(t)) \in K[g_{[d,h]}](Bx(t))$.  (45)
Let $x = (0, 1, 0)^T$ and $s = Bx \in \operatorname{int}(\Omega)$; then we have $\omega = 2$. Moreover, the restricted region $[d, h] \subset B(s, r)$ with $r = \sqrt{13}$. An upper bound of $\|\partial f(x)\|$ is estimated as $l_f = 4\sqrt{6}$. Then the design parameter $\mu$ is estimated as $\mu > 29.4$; let $\mu = 30$ in the simulation.
Figures 5 and 6 display the state trajectories of this neural network with different initial values. It can be seen that these trajectories converge to the feasible region in finite time as well, which is in accordance with the conclusion of Theorem 15. It can also be verified that the trajectory is stable in the sense of Lyapunov.
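The simulation can be reproduced with a simple forward-Euler discretization of (44) in the $z = Bx$ coordinates; the sketch below is an illustrative reconstruction (the step size, horizon, and the decimal reading of the Figure 5 initial point are assumptions, not from the paper), and it checks only that the penalty $D(z)$ is driven essentially to zero, i.e., that the state reaches a neighborhood of $\Omega$:

```python
import numpy as np

# data of Example 2, (42)-(43), with mu = 30 as in the text
B = np.array([[1., 2., 6.], [0., 1., 2.], [2., 3., 1.]])
d = np.array([1., 1., 3.]); h = np.array([2., 4., 5.]); mu = 30.0

def grad_f(x):     # gradient (45)
    return np.array([2*(x[0] - x[2]), 2*(x[1] - x[2]), -2*(x[0] + x[1] - 2*x[2])])

def xi(z):         # a.e. selection from K[g_[d,h]](z)
    return np.where(z > h, 1.0, 0.0) + np.where(z < d, -1.0, 0.0)

def D(z):          # penalty (6)-(7)
    return float(np.sum(np.maximum(z - h, 0.0) + np.maximum(d - z, 0.0)))

Binv = np.linalg.inv(B)
x0 = np.array([5.4, 6.5, -1.5])   # assumed reading of the Figure 5 initial point
z = B @ x0
dt = 1e-4
for _ in range(30000):            # integrate dz/dt = -B^{-1} grad f - mu*xi, t in [0, 3]
    z = z - dt * (Binv @ grad_f(Binv @ z) + mu * xi(z))
print(D(z) < 0.5)                 # the state has reached (a small neighborhood of) Omega
```

Because the right-hand side is discontinuous, the explicit Euler iterate chatters in a thin boundary layer around $\Omega$ instead of settling exactly on it, which is why the check uses a tolerance rather than exact feasibility.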
5 Conclusion
In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization with box constraints. The neural network system model has been described by a differential inclusion system. The constructed recurrent neural network has been proved to be stable in the sense of Lyapunov. The conditions which ensure
00 01 02 03 04 05
0
5
10
15
20
minus5
minus10
minus15
minus01
t
x1[t]
x2[t]
x3[t]
x1 x2 x3(t)
Figure 5 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (54 65 minus15)
00 01 02 03 04 05
0
5
10
15
minus5
minus10
minus15
minus01
t
x1[t]
x2[t]
x3[t]
x1 x2 x3(t)
Figure 6 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (minus163 196 minus23)
the finite time state convergence to the feasible region havebeen obtained The proposed neural network can be used ina wide variety to solve a lot of optimization problem in theengineering application
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This work was supported by the Natural Science Foundationof Hebei Province of China (A2011203103) and the HebeiProvince Education Foundation of China (2009157)
8 Mathematical Problems in Engineering
References
[1] Q Liu Z Guo and J Wang ldquoA one-layer recurrent neural net-work for constrained pseudoconvex optimization and its appli-cation for dynamic portfolio optimizationrdquo Neural Networksvol 26 pp 99ndash109 2012
[2] W Lu andTChen ldquoDynamical behaviors of delayed neural net-work systems with discontinuous activation functionsrdquo NeuralComputation vol 18 no 3 pp 683ndash708 2006
[3] Y Xia and J Wang ldquoA recurrent neural network for solvingnonlinear convex programs subject to linear constraintsrdquo IEEETransactions on Neural Networks vol 16 no 2 pp 379ndash3862005
[4] X-B Liang and J Wang ldquoA recurrent neural network for non-linear optimization with a continuously differentiable objectivefunction and bound constraintsrdquo IEEE Transactions on NeuralNetworks vol 11 no 6 pp 1251ndash1262 2000
[5] X Xue and W Bian ldquoA project neural network for solving de-generate convex quadratic programrdquo Neurocomputing vol 70no 13ndash15 pp 2449ndash2459 2007
[6] Y Xia H Leung and J Wang ldquoA projection neural networkand its application to constrained optimization problemsrdquo IEEETransactions on Circuits and Systems I vol 49 no 4 pp 447ndash458 2002
[7] S Qin and X Xue ldquoGlobal exponential stability and global con-vergence in finite time of neural networks with discontinuousactivationsrdquoNeural Processing Letters vol 29 no 3 pp 189ndash2042009
[8] S Effati A Ghomashi and A R Nazemi ldquoApplication ofprojection neural network in solving convex programmingproblemsrdquo Applied Mathematics and Computation vol 188 no2 pp 1103ndash1114 2007
[9] X Xue and W Bian ldquoSubgradient-based neural networks fornonsmooth convex optimization problemsrdquo IEEE Transactionson Circuits and Systems I vol 55 no 8 pp 2378ndash2391 2008
[10] X Hu and J Wang ldquoA recurrent neural network for solving aclass of general variational inequalitiesrdquo IEEE Transactions onSystemsMan andCybernetics B vol 37 no 3 pp 528ndash539 2007
[11] X Hu and J Wang ldquoSolving pseudomonotone variationalinequalities and pseudoconvex optimization problems usingthe projection neural networkrdquo IEEE Transactions on NeuralNetworks vol 17 no 6 pp 1487ndash1499 2006
[12] J Wang ldquoAnalysis and design of a recurrent neural networkfor linear programmingrdquo IEEE Transactions on Circuits andSystems I vol 40 no 9 pp 613ndash618 1993
[13] J Wang ldquoA deterministic annealing neural network for convexprogrammingrdquoNeural Networks vol 7 no 4 pp 629ndash641 1994
[14] J Wang ldquoPrimal and dual assignment networksrdquo IEEE Trans-actions on Neural Networks vol 8 no 3 pp 784ndash790 1997
[15] J Wang ldquoPrimal and dual neural networks for shortest-pathroutingrdquo IEEE Transactions on Systems Man and CyberneticsA vol 28 no 6 pp 864ndash869 1998
[16] W Bian and X Xue ldquoNeural network for solving constrainedconvex optimization problems with global attractivityrdquo IEEETransactions on Circuits and Systems I vol 60 no 3 pp 710ndash723 2013
[17] W Bian andX Xue ldquoA dynamical approach to constrained non-smooth convex minimization problem coupling with penaltyfunction method in Hilbert spacerdquo Numerical Functional Anal-ysis and Optimization vol 31 no 11 pp 1221ndash1253 2010
[18] Y Xia and J Wang ldquoA one-layer recurrent neural networkfor support vector machine learningrdquo IEEE Transactions onSystems Man and Cybernetics B vol 34 no 2 pp 1261ndash12692004
[19] X Xue and W Bian ldquoSubgradient-based neural networks fornonsmooth convex optimization problemsrdquo IEEE Transactionson Circuits and Systems I vol 55 no 8 pp 2378ndash2391 2008
[20] Y Xia H Leung and J Wang ldquoA projection neural networkand its application to constrained optimization problemsrdquo IEEETransactions on Circuits and Systems I vol 49 no 4 pp 447ndash458 2002
[21] S Qin W Bian and X Xue ldquoA new one-layer recurrentneural network for non-smooth pseudoconvex optimizationrdquoNeurocomputing vol 120 pp 655ndash662 2013
[22] F H ClarkeOptimization andNonsmooth AnalysisWiley NewYork NY USA 1983
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Mathematical Problems in Engineering 3
[Figure 1: Architecture of the neural network model (9).]
where $\mu$ is a nonnegative constant, $\partial f(B^{-1}z) = B^{-1}\partial f(x)$, and $g_{[d,h]}$ is a discontinuous function whose components are defined as

$$g^i_{[d,h]}(s) = \begin{cases} 1, & s > h_i, \\ 0, & d_i < s < h_i, \\ -1, & s < d_i, \end{cases} \qquad K[g^i_{[d,h]}](s) = \begin{cases} 1, & s > h_i, \\ [0,1], & s = h_i, \\ 0, & d_i < s < h_i, \\ [-1,0], & s = d_i, \\ -1, & s < d_i. \end{cases} \tag{10}$$
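The piecewise definition in (10) translates directly into code. The following is a minimal sketch; the function names and the representation of the set-valued map $K[g_{[d,h]}]$ as one interval per component are our own choices, not from the paper.

```python
import numpy as np

def g_box(s, d, h):
    """Discontinuous activation g_[d,h] of (10), componentwise:
    1 above the box, -1 below it, 0 strictly inside."""
    s, d, h = (np.asarray(v, dtype=float) for v in (s, d, h))
    return np.where(s > h, 1.0, np.where(s < d, -1.0, 0.0))

def K_g_box(s, d, h, tol=1e-12):
    """Set-valued map K[g_[d,h]] of (10), returned as one interval
    (lo_i, hi_i) per component; away from the box boundary the
    interval collapses to a single value."""
    intervals = []
    for si, di, hi in zip(s, d, h):
        if si > hi + tol:
            intervals.append((1.0, 1.0))
        elif abs(si - hi) <= tol:
            intervals.append((0.0, 1.0))   # on the upper face
        elif si < di - tol:
            intervals.append((-1.0, -1.0))
        elif abs(si - di) <= tol:
            intervals.append((-1.0, 0.0))  # on the lower face
        else:
            intervals.append((0.0, 0.0))   # strictly inside the box
    return intervals

# One component above, one strictly inside, one below the box [-1, 1]^3.
active = g_box([3.0, 0.0, -3.0], [-1.0] * 3, [1.0] * 3)
# Components on the upper face, inside, and on the lower face.
sets = K_g_box([1.0, 0.0, -1.0], [-1.0] * 3, [1.0] * 3)
```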
The architecture of the proposed neural network model (9) is depicted in Figure 1.
Definition 6. $z \in R^n$ is said to be an equilibrium point of the differential inclusion system (9) if

$$0 \in B^{-1}\partial f(x) + \mu K[g_{[d,h]}](z); \tag{11}$$

that is, there exist

$$\gamma \in \partial f(x), \qquad \xi \in K[g_{[d,h]}](z) \tag{12}$$

such that

$$B^{-1}\gamma + \mu\xi = 0, \tag{13}$$

where $x = B^{-1}z$.
Definition 7. A function $z(\cdot): [0,T] \to R^n$ is said to be a solution of system (9) with initial condition $z(0) = z_0$ if $z(\cdot)$ is absolutely continuous on $[0,T]$ and, for almost all $t \in [0,T]$,

$$\frac{dz(t)}{dt} \in -\partial f(B^{-1}z(t)) - \mu K[g_{[d,h]}](z(t)). \tag{14}$$

Equivalently, there exist measurable functions $\gamma(x) \in \partial f(x)$ and $\xi(z) \in K[g_{[d,h]}](z)$ such that

$$\dot{z}(t) = -B^{-1}\gamma(B^{-1}z(t)) - \mu\xi(z(t)). \tag{15}$$
Definition 8. Suppose that $B \subset R^n$ is a nonempty closed convex set. The normal cone to the set $B$ at $z \in B$ is defined as $N_B(z) = \{v \in R^n : v^T(z - y) \ge 0,\ \forall y \in B\}$.
Lemma 9 (see [22]). If $\phi_i: R^n \to R$, $i = 1, 2, \ldots, m$, are regular at $x$, then $\partial(\sum_{i=1}^m \phi_i(x)) = \sum_{i=1}^m \partial\phi_i(x)$.

Lemma 10 (see [22]). If $V: R^n \to R$ is a regular function at $x$ and $x(\cdot): R \to R^n$ is differentiable at $t$ and Lipschitz near $t$, then $dV(x(t))/dt = \langle \xi, \dot{x}(t) \rangle$ for all $\xi \in \partial V(x(t))$.

Lemma 11 (see [22]). If $B_1, B_2 \subset R^n$ are closed convex sets and satisfy $0 \in \mathrm{int}(B_1 - B_2)$, then for any $z \in B_1 \cap B_2$, $N_{B_1 \cap B_2}(z) = N_{B_1}(z) + N_{B_2}(z)$.

Lemma 12 (see [22]). If $f$ is locally Lipschitz near $z$ and attains a minimum over $\Omega$ at $z$, then $0 \in \partial f(z) + N_\Omega(z)$.
Set $\Omega = \{z \in R^n : d \le z \le h\}$. Let $s \in \mathrm{int}(\Omega)$; then there exists a constant $r > 0$ such that $\Omega \subseteq B(s,r)$, where $\mathrm{int}(\cdot)$ denotes the interior of the set $\Omega$ and $B(s,r) = \{z \in R^n : \|z - s\| \le r\}$. It is easy to verify the following lemma.

Lemma 13. For any $z \in R^n \setminus \Omega$ and $\xi \in K[g_{[d,h]}](z)$, $(z - s)^T \xi > \omega$, where $\omega = \min_{1 \le i \le n}\{h_i - s_i, s_i - d_i\}$ and $s_i$ is the $i$th element of $s \in \mathrm{int}(\Omega)$.
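Lemma 13 is easy to probe numerically. The sketch below uses an arbitrarily chosen box, interior point, and exterior point (none of these values come from the paper) and checks the inequality $(z - s)^T\xi > \omega$ with the natural single-valued selection from $K[g_{[d,h]}]$.

```python
import numpy as np

# Box constraints d <= z <= h and a strictly interior point s (illustrative values).
d = np.array([-1.0, 0.0, 2.0])
h = np.array([1.0, 3.0, 5.0])
s = np.array([0.0, 1.0, 3.0])

# omega = min_i { h_i - s_i, s_i - d_i }, the margin of s to the box faces.
omega = np.min(np.minimum(h - s, s - d))

def xi_of(z):
    """A selection from K[g_[d,h]](z): +1 above the box, -1 below, 0 inside."""
    return np.where(z > h, 1.0, np.where(z < d, -1.0, 0.0))

# z violates the first component from above and the third from below, so z is outside Omega.
z = np.array([2.5, 1.0, 1.0])
lhs = (z - s) @ xi_of(z)   # should exceed omega by Lemma 13
```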
3. Main Results

In this section, the main results concerning the convergence and optimality conditions of the proposed neural network are presented.
Theorem 14. Suppose that assumptions $(A_1)$ and $(A_2)$ hold. Let $z_0 \in B(s,r)$. If $\mu > (r l_f/\omega)\|B^{-1}\|$, then the solution $z(t)$ of the network system (9) with initial condition $z(0) = z_0$ satisfies $z(t) \in B(s,r)$.
Proof. Set

$$\rho(t) = \frac{1}{2}\|z(t) - s\|^2. \tag{16}$$

By Lemma 10 and (15), evaluating the derivative of $\rho(t)$ along the trajectory of system (9) gives

$$\begin{aligned} \frac{d\rho(t)}{dt} &= \dot{z}(t)^T (z(t) - s) \\ &= [-B^{-1}\gamma(x(t)) - \mu\xi(z(t))]^T (z(t) - s) \\ &= -[B^{-1}\gamma(x(t))]^T (z(t) - s) - \mu\, \xi(z(t))^T (z(t) - s) \\ &\le \|B^{-1}\|\, \|z(t) - s\|\, \|\gamma(x(t))\| - \mu (z(t) - s)^T \xi(z(t)). \end{aligned} \tag{17}$$

If $z(t) \in \Omega$, it follows directly that $z(t) \in B(s,r)$. If $z(t) \in B(s,r) \setminus \Omega$, according to Lemma 13 one gets $(z(t) - s)^T \xi(z(t)) > \omega$. Thus we have

$$\frac{d\rho(t)}{dt} < \|B^{-1}\|\, \|z(t) - s\|\, \|\gamma(x(t))\| - \mu\omega \le r l_f \|B^{-1}\| - \mu\omega. \tag{18}$$

If $\mu > (r l_f/\omega)\|B^{-1}\|$, then $d\rho(t)/dt < 0$, which means that $z(t) \in B(s,r)$. Otherwise, the state $z(t)$ would leave $B(s,r)$ at some time $t_1$ with $\|z(t_1) - s\| = r$, which implies $(d\rho(t)/dt)|_{t = t_1} \ge 0$, a contradiction.

As a result, if $\mu > (r l_f/\omega)\|B^{-1}\|$, then for any $z_0 \in B(s,r)$ the state $z(t)$ of the network system (9) with initial condition $z(0) = z_0$ satisfies $z(t) \in B(s,r)$. This completes the proof.
Theorem 15. Suppose that assumptions $(A_1)$ and $(A_2)$ hold. If $\mu > l_f\|B^{-1}\|$, then the solution of the neural network system (9) with initial condition $z(0) = z_0 \in B(s,r)$ converges to the feasible region $\Omega$ in finite time $T = t^* = D(z(0))/[\sqrt{n}(\mu - l_f\|B^{-1}\|)]$ and stays there thereafter.
Proof. According to the definition of $D(z)$, $D(z)$ is a convex function on $R^n$. By Lemma 10, it follows that

$$\frac{dD(z)}{dt} = \zeta^T(t)\, \frac{dz(t)}{dt}, \qquad \forall \zeta(t) \in \partial D(z). \tag{19}$$

Noting that $\partial D(z) = K[g_{[d,h]}](z)$, we can obtain by (19) that for all $z \in B(s,r) \setminus \Omega$ there exist $\gamma(x) \in \partial f(x)$ and $\xi(t) \in K[g_{[d,h]}](z)$ such that

$$\begin{aligned} \frac{dD(z)}{dt} &= \xi^T(t)\,\dot{z}(t) = \xi^T(t)\,(-B^{-1}\gamma(x) - \mu\xi(t)) \\ &= -\xi^T(t)\, B^{-1}\gamma(x) - \mu\|\xi(t)\|^2 \\ &\le \|\xi(t)\|\, \|B^{-1}\|\, \|\gamma(x)\| - \mu\|\xi(t)\|^2 \\ &\le \|\xi(t)\|\, [\|B^{-1}\|\, \|\gamma(x)\| - \mu\|\xi(t)\|]. \end{aligned} \tag{20}$$

Since $z \in R^n \setminus \Omega$ and $\xi(t) \in K[g_{[d,h]}](z)$, at least one of the components of $\xi(t)$ is $-1$ or $1$, so $\|\xi(t)\| \ge 1$. Noting that $\|\xi(t)\| \le \sqrt{n}$, we have

$$\frac{dD(z)}{dt} \le \sqrt{n}\,[l_f\|B^{-1}\| - \mu]. \tag{21}$$

Let $\alpha = \sqrt{n}(\mu - l_f\|B^{-1}\|)$. If $\mu > l_f\|B^{-1}\|$, then $\alpha > 0$ and

$$\frac{dD(z)}{dt} \le -\alpha. \tag{22}$$

Integrating (22) from $0$ to $t$, we can obtain

$$D(z(t)) \le D(z(0)) - \alpha t. \tag{23}$$
Let $t^* = (1/\alpha)D(z(0))$. By (23), $D(z(t^*)) = 0$; that is, $d_i \le z_i(t^*) \le h_i$ for each $i$. This shows that the state trajectory of neural network (9) with initial condition $z(0) = z_0$ reaches $\Omega$ in finite time $T = t^* = D(z(0))/[\sqrt{n}(\mu - l_f\|B^{-1}\|)]$.

Next we prove that the trajectory stays in $\Omega$ for all $t \ge t^*$, that is, after reaching $\Omega$. If this is not true, then there exists $t_1 > t^*$ such that the trajectory leaves $\Omega$ at $t_1$, and there exists $t_2 > t_1$ such that $z(t) \in R^n \setminus \Omega$ for $t \in (t_1, t_2)$.

Integrating (22) from $t_1$ to $t_2$, it follows that

$$\int_{t_1}^{t_2} \frac{dD(z(t))}{dt}\,dt = D(z(t_2)) - D(z(t_1)) \le -\alpha(t_2 - t_1) < 0. \tag{24}$$

Since $D(z(t_1)) = 0$, this gives $D(z(t_2)) < 0$. By the definition of $D(z(t))$, however, $D(z(t)) \ge 0$ for any $t \in [0, \infty)$, which contradicts the result above. The proof is completed.
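The finite-time bound of Theorem 15 is simple arithmetic once the constants are known. The following sketch evaluates it; all numerical values are illustrative placeholders, not taken from the paper.

```python
import math

def convergence_time(D0, n, mu, l_f, norm_Binv):
    """Upper bound T = D(z(0)) / (sqrt(n) * (mu - l_f * ||B^-1||))
    from Theorem 15; requires mu > l_f * ||B^-1||."""
    alpha = math.sqrt(n) * (mu - l_f * norm_Binv)
    if alpha <= 0:
        raise ValueError("need mu > l_f * ||B^-1||")
    return D0 / alpha

# Illustrative values: D(z(0)) = 2.9, n = 3, mu = 30, l_f = 4*sqrt(6), ||B^-1|| = 2.
T = convergence_time(2.9, 3, 30.0, 4 * math.sqrt(6), 2.0)
```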
Theorem 16. Suppose that assumptions $(A_1)$ and $(A_2)$ hold. If $\mu > \max\{l_f\|B^{-1}\|, (r l_f/\omega)\|B^{-1}\|\}$, then the equilibrium point of the neural network system (9) is an optimal solution of problem (4), and vice versa.
Proof. Denote by $z^*$ an equilibrium point of the neural network system (9); then there exist $\gamma^* \in \partial f(x^*)$ and $\xi^* \in K[g_{[d,h]}](z^*)$ such that

$$B^{-1}\gamma^* + \mu\xi^* = 0, \tag{25}$$

where $x^* = B^{-1}z^*$. By Theorem 15, $z^* \in \Omega$; hence $\xi^* = 0$, and by (25), $\gamma^* = 0$. We can get the following projection formulation:

$$z^* = \phi_{[d,h]}(z^* - B^{-1}\gamma^*), \tag{26}$$

where $\phi_{[d,h]} = (\phi_{[d_1,h_1]}(y_1), \ldots, \phi_{[d_n,h_n]}(y_n))$ with $\phi_{[d_i,h_i]}(y_i)$, $i = 1, \ldots, n$, defined as

$$\phi_{[d_i,h_i]}(y_i) = \begin{cases} h_i, & y_i \ge h_i, \\ y_i, & d_i < y_i < h_i, \\ d_i, & y_i \le d_i. \end{cases} \tag{27}$$

By the well-known projection theorem [17], (26) is equivalent to the following variational inequality:

$$(B^{-1}\gamma^*)^T (z - z^*) \ge 0, \qquad \forall z \in \Omega. \tag{28}$$

Since $f(x)$ is pseudoconvex, $f(B^{-1}z)$ is pseudoconvex on $\Omega$. By (28), we can obtain that $f(B^{-1}z) \ge f(B^{-1}z^*)$. This shows that $z^*$ is a minimum point of $f(B^{-1}z)$ over $\Omega$.

Next we prove the converse. Denote by $z^*$ an optimal solution of the problem; then $z^* \in [d,h]$. Since $z^*$ is a minimum point of $f(B^{-1}z)$ over the feasible region $\Omega$, according to Lemma 12 it follows that

$$0 \in \partial f(B^{-1}z^*) + N_\Omega(z^*). \tag{29}$$

From (29), it follows that there exists $\eta^* \in N_\Omega(z^*)$ with $B^{-1}\gamma^* = -\eta^* \in B^{-1}\partial f(x^*)$ and $\|\eta^*\| \le l_f\|B^{-1}\|$. Noting that $N_\Omega(z^*) = \{v\xi^* : v \ge 0,\ \xi^* \in K[g_{[d,h]}](z^*)\}$ and at least one $\xi^*_i$ is $1$ or $-1$, there exist $\beta \ge 0$ and $\xi^* \in K[g_{[d,h]}](z^*)$ such that $\beta\xi^* \in N_\Omega(z^*)$ and $\eta^* = \beta\xi^*$.

In the following, we prove that $\eta^* \in \mu K[g_{[d,h]}](z^*)$. We claim that $\beta \le \mu$. If not, then $\beta > \mu$. Since $(z^* - s)^T\xi^* = \sum_{i=1}^n (z^*_i - s_i)\xi^*_i \ge \omega$, we have

$$(z^* - s)^T \eta^* = \beta (z^* - s)^T \xi^* \ge \omega\beta > \mu\omega. \tag{30}$$

Thus $\|\eta^*\| \ge \mu\omega/\|z^* - s\| \ge \mu\omega/r$. By the condition of Theorem 16, $\mu > \max\{l_f\|B^{-1}\|, (r l_f/\omega)\|B^{-1}\|\}$; hence $\|\eta^*\| > l_f\|B^{-1}\|$, which contradicts $\|\eta^*\| \le l_f\|B^{-1}\|$. This implies that $\beta\xi^* \in \mu K[g_{[d,h]}](z^*)$; that is, $0 \in \partial F(z^*) + \mu K[g_{[d,h]}](z^*)$, which means that $z^*$ is an equilibrium point of the neural network system (9). This completes the proof.
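The projection operator $\phi_{[d,h]}$ in (27) is exactly the componentwise clip of a point onto the box $[d,h]$. A minimal sketch (the box values are illustrative):

```python
import numpy as np

def phi_box(y, d, h):
    """Componentwise projection phi_[d,h] of (27): clip y onto the box [d, h]."""
    return np.clip(y, d, h)

d = np.array([1.0, 1.0, 3.0])
h = np.array([2.0, 4.0, 5.0])
# Components below, inside, and above the box are mapped to d_i, y_i, and h_i.
projected = phi_box(np.array([0.0, 2.5, 9.0]), d, h)
```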
Theorem 17. Suppose that assumptions $(A_1)$ and $(A_2)$ hold. If $\mu > \max\{l_f\|B^{-1}\|, (r l_f/\omega)\|B^{-1}\|\}$, then the equilibrium point of the neural network system (9) is stable in the sense of Lyapunov.
Proof. Denote by $z^*$ an equilibrium point of the neural network system (9); that is,

$$0 \in \partial F(z^*) + \mu K[g_{[d,h]}](z^*). \tag{31}$$

By Theorem 16, $z^*$ is an optimal solution of problem (4); that is, $f(B^{-1}z) \ge f(B^{-1}z^*)$ for $z \in \Omega$. By Theorem 15, the trajectory $z(t)$ with initial condition $z(0) = z_0 \in B(s,r)$ converges to the feasible region $\Omega$ in finite time $T = t^* = D(z(0))/[\sqrt{n}(\mu - l_f\|B^{-1}\|)]$ and remains in $\Omega$ forever; that is, $z(t) \in \Omega$ for all $t \ge t^*$. Let

$$V_1(z) = f(B^{-1}z) + \mu D(z). \tag{32}$$

Since $z^*$ is a minimum point of $f(B^{-1}z)$ on $\Omega$, we can get that $V_1(z) \ge V_1(z^*)$ for all $z \in \Omega$.

Consider the following Lyapunov function:

$$V(z) = V_1(z) - V_1(z^*) + \frac{1}{2}\|z - z^*\|^2. \tag{33}$$

Obviously, from (33), $V(z) \ge (1/2)\|z - z^*\|^2$ and

$$\partial V(z) = \partial V_1(z) + z - z^*. \tag{34}$$

Evaluating the derivative of $V$ along the trajectory of system (9) gives

$$\frac{dV(z(t))}{dt} = \xi^T(t)\,\frac{dz(t)}{dt}, \qquad \forall \xi(t) \in \partial V(z(t)). \tag{35}$$

Since $\dot{z}(t) \in -\partial V_1(z(t))$, by (34) we can set $\xi(t) = -\dot{z}(t) + z - z^*$. Hence

$$\begin{aligned} \frac{dV(z(t))}{dt} &= (-\dot{z}(t) + z - z^*)^T \dot{z}(t) \\ &= -\|\dot{z}(t)\|^2 + (z - z^*)^T\dot{z}(t) \\ &\le \sup_{\theta \in \partial V_1(z)} (-\|\theta\|^2) + \sup_{\theta \in \partial V_1(z)} [-(z - z^*)^T\theta] \\ &= -\inf_{\theta \in \partial V_1(z)} \|\theta\|^2 - \inf_{\theta \in \partial V_1(z)} (z - z^*)^T\theta. \end{aligned} \tag{36}$$
For any $\theta \in \partial V_1(z)$, there exist $\gamma \in \partial f(x)$ and $\xi \in K[g_{[d,h]}](z)$ such that $\theta = B^{-1}\gamma + \mu\xi$. Since $f(x)$ is pseudoconvex on $\Omega$, $\partial f(x)$ is pseudomonotone on $\Omega$. From the proof of Theorem 16, for any $z \in \Omega$, $(z - z^*)^T B^{-1}\gamma \ge 0$. By the definition of $g_{[d,h]}(z)$, $(z - z^*)^T\xi \ge 0$. Hence $(z - z^*)^T\theta \ge 0$. This implies that

$$\frac{dV(z(t))}{dt} \le -\inf_{\theta \in \partial V_1(z)} \|\theta\|^2 \le 0. \tag{37}$$

Inequality (37) shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.
4. Numerical Examples

In this section, two examples are given to illustrate the effectiveness of the proposed approach for solving pseudoconvex optimization problems.
Example 1. Consider the quadratic fractional optimization problem

$$\begin{aligned} \min\ & f(x) = \frac{x^T Q x + a^T x + a_0}{c^T x + c_0} \\ \text{subject to}\ & d \le Bx \le h, \end{aligned} \tag{38}$$

where $Q$ is an $n \times n$ matrix, $a, c \in R^n$, and $a_0, c_0 \in R$. Here we choose $n = 3$,

$$Q = \begin{pmatrix} 5 & -1 & 2 \\ -1 & 5 & -1 \\ 2 & -1 & 3 \end{pmatrix}, \quad B = \begin{pmatrix} 2 & 1 & 1 \\ 11 & -16 & 11 \\ 11 & -92 & -10 \end{pmatrix}, \quad a = \begin{pmatrix} 1 \\ -2 \\ -2 \end{pmatrix}, \quad c = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}, \quad d = \begin{pmatrix} -2 \\ -1 \\ -6 \end{pmatrix}, \quad h = \begin{pmatrix} 2 \\ 2 \\ 6 \end{pmatrix}, \quad a_0 = -2, \quad c_0 = 4. \tag{39}$$
It is easily verified that $Q$ is symmetric and positive definite, and consequently $f$ is pseudoconvex on $\Omega = \{x : d \le Bx \le h\}$. The proposed neural network (9) is therefore capable of solving this problem. Obviously, neural network (9) associated with (38) can be described as

$$B\dot{x}(t) = -B^{-1}\nabla f(x) - \mu\xi(Bx(t)), \tag{40}$$

where

$$\nabla f(x) = \frac{2Qx + a}{c^T x + c_0} - \frac{x^T Q x + a^T x + a_0}{(c^T x + c_0)^2}\, c, \qquad \xi(Bx(t)) \in K[g_{[d,h]}](Bx(t)). \tag{41}$$
Let $x = (-1, -1, -1)^T$ and $s = Bx \in \mathrm{int}(\Omega)$; then we have $\omega = 3$. Moreover, the restricted region satisfies $[d,h] \subset B(s,r)$ with $r = 2\sqrt{3}$. An upper bound of $\partial f(x)$ is estimated as $l_f = 3$.
[Figure 2: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (0, 0, 0.3)$.]

[Figure 3: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (-0.2, 0, 0.4)$.]
Then the design parameter $\mu$ is estimated as $\mu > 10.32$; we let $\mu = 11$ in the simulation.

We have simulated the dynamical behavior of the neural network using mathematical software with $\mu = 11$. Figures 2, 3, and 4 display the state trajectories of this neural network with different initial values, which shows that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectory is stable in the sense of Lyapunov.
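The gradient formula (41) can be checked numerically against central finite differences. The sketch below uses the data $Q$, $a$, $c$, $a_0$, $c_0$ from (39); the test point and step size are our own illustrative choices, picked so that the denominator $c^T x + c_0$ is nonzero.

```python
import numpy as np

Q = np.array([[5.0, -1.0, 2.0], [-1.0, 5.0, -1.0], [2.0, -1.0, 3.0]])
a = np.array([1.0, -2.0, -2.0])
c = np.array([2.0, 1.0, -1.0])
a0, c0 = -2.0, 4.0

def f(x):
    """Quadratic fractional objective of (38)."""
    return (x @ Q @ x + a @ x + a0) / (c @ x + c0)

def grad_f(x):
    """Analytic gradient (41), obtained from the quotient rule (Q symmetric)."""
    den = c @ x + c0
    return (2 * Q @ x + a) / den - (x @ Q @ x + a @ x + a0) / den**2 * c

# Central finite-difference check at an illustrative point with den != 0.
x = np.array([0.3, -0.2, 0.1])
eps = 1e-6
num = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(3)])
assert np.allclose(num, grad_f(x), atol=1e-5)
```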
Example 2. Consider the following pseudoconvex optimization problem:

$$\begin{aligned} \min\ & f(x) = (x_1 - x_3)^2 + (x_2 - x_3)^2 \\ \text{subject to}\ & d \le Bx \le h. \end{aligned} \tag{42}$$
[Figure 4: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (0.1, 0, 0.3)$.]
Here

$$B = \begin{pmatrix} 1 & 2 & 6 \\ 0 & 1 & 2 \\ 2 & 3 & 1 \end{pmatrix}, \qquad d = \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix}, \qquad h = \begin{pmatrix} 2 \\ 4 \\ 5 \end{pmatrix}. \tag{43}$$
In this problem, the objective function $f(x)$ is pseudoconvex; thus the proposed neural network is suitable for solving the problem in this case. Neural network (9) associated with (42) can be described as

$$B\dot{x}(t) = -B^{-1}\nabla f(x) - \mu\xi(Bx(t)), \tag{44}$$

where

$$\nabla f(x) = \begin{pmatrix} 2(x_1 - x_3) \\ 2(x_2 - x_3) \\ -2(x_1 + x_2 - 2x_3) \end{pmatrix}, \qquad \xi(Bx(t)) \in K[g_{[d,h]}](Bx(t)). \tag{45}$$
Let $x = (0, 1, 0)^T$ and $s = Bx \in \mathrm{int}(\Omega)$; then we have $\omega = 2$. Moreover, the restricted region satisfies $[d,h] \subset B(s,r)$ with $r = \sqrt{13}$. An upper bound of $\partial f(x)$ is estimated as $l_f = 4\sqrt{6}$. Then the design parameter $\mu$ is estimated as $\mu > 29.4$; we let $\mu = 30$ in the simulation.
Figures 5 and 6 display the state trajectories of this neural network with different initial values. It can be seen that these trajectories converge to the feasible region in finite time as well, in accordance with the conclusion of Theorem 15. It can also be verified that the trajectory is stable in the sense of Lyapunov.
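The dynamics (44) can be simulated in the $z = Bx$ coordinates, where (9) reads $\dot{z} = -B^{-1}\nabla f(B^{-1}z) - \mu\,\xi(z)$. The following forward-Euler sketch uses the data of (43) and $\mu = 30$; the step size, horizon, single-valued selection $\xi$, and initial point are illustrative choices, not the paper's simulation setup.

```python
import numpy as np

B = np.array([[1.0, 2.0, 6.0], [0.0, 1.0, 2.0], [2.0, 3.0, 1.0]])
d = np.array([1.0, 1.0, 3.0])
h = np.array([2.0, 4.0, 5.0])
Binv = np.linalg.inv(B)
mu = 30.0

def grad_f(x):
    """Gradient (45) of f(x) = (x1 - x3)^2 + (x2 - x3)^2."""
    return np.array([2 * (x[0] - x[2]),
                     2 * (x[1] - x[2]),
                     -2 * (x[0] + x[1] - 2 * x[2])])

def xi(z):
    """A single-valued selection from K[g_[d,h]](z)."""
    return np.where(z > h, 1.0, np.where(z < d, -1.0, 0.0))

def violation(z):
    """Total box-constraint violation; zero iff z lies in [d, h]."""
    return np.sum(np.maximum(z - h, 0.0) + np.maximum(d - z, 0.0))

# Forward Euler on dz/dt = -B^{-1} grad_f(B^{-1} z) - mu * xi(z).
z = B @ np.array([0.1, 0.0, 0.3])   # initial state z(0) = B x_0 (illustrative x_0)
v0 = violation(z)
dt = 1e-3
for _ in range(3000):
    z = z + dt * (-Binv @ grad_f(Binv @ z) - mu * xi(z))
```

With the penalty term dominating the drift, the constraint violation shrinks along the trajectory, mirroring the finite-time convergence of Theorem 15 (up to Euler discretization chatter near the boundary).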
[Figure 5: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (5.4, 6.5, -1.5)$.]

[Figure 6: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (-16.3, 19.6, -2.3)$.]

5. Conclusion

In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization with box constraints. The neural network model has been described by a differential inclusion system. The constructed recurrent neural network has been proved to be stable in the sense of Lyapunov, and conditions which ensure finite-time convergence of the state to the feasible region have been obtained. The proposed neural network can be applied to a wide variety of optimization problems in engineering.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of Hebei Province of China (A2011203103) and the Hebei Province Education Foundation of China (2009157).
References

[1] Q. Liu, Z. Guo, and J. Wang, "A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization," Neural Networks, vol. 26, pp. 99–109, 2012.
[2] W. Lu and T. Chen, "Dynamical behaviors of delayed neural network systems with discontinuous activation functions," Neural Computation, vol. 18, no. 3, pp. 683–708, 2006.
[3] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
[4] X.-B. Liang and J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints," IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1251–1262, 2000.
[5] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
[6] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
[7] S. Qin and X. Xue, "Global exponential stability and global convergence in finite time of neural networks with discontinuous activations," Neural Processing Letters, vol. 29, no. 3, pp. 189–204, 2009.
[8] S. Effati, A. Ghomashi, and A. R. Nazemi, "Application of projection neural network in solving convex programming problems," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1103–1114, 2007.
[9] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
[10] X. Hu and J. Wang, "A recurrent neural network for solving a class of general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 3, pp. 528–539, 2007.
[11] X. Hu and J. Wang, "Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1487–1499, 2006.
[12] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I, vol. 40, no. 9, pp. 613–618, 1993.
[13] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, no. 4, pp. 629–641, 1994.
[14] J. Wang, "Primal and dual assignment networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 784–790, 1997.
[15] J. Wang, "Primal and dual neural networks for shortest-path routing," IEEE Transactions on Systems, Man, and Cybernetics A, vol. 28, no. 6, pp. 864–869, 1998.
[16] W. Bian and X. Xue, "Neural network for solving constrained convex optimization problems with global attractivity," IEEE Transactions on Circuits and Systems I, vol. 60, no. 3, pp. 710–723, 2013.
[17] W. Bian and X. Xue, "A dynamical approach to constrained nonsmooth convex minimization problem coupling with penalty function method in Hilbert space," Numerical Functional Analysis and Optimization, vol. 31, no. 11, pp. 1221–1253, 2010.
[18] Y. Xia and J. Wang, "A one-layer recurrent neural network for support vector machine learning," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 34, no. 2, pp. 1261–1269, 2004.
[19] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
[20] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
[21] S. Qin, W. Bian, and X. Xue, "A new one-layer recurrent neural network for non-smooth pseudoconvex optimization," Neurocomputing, vol. 120, pp. 655–662, 2013.
[22] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, NY, USA, 1983.
4 Mathematical Problems in Engineering
Definition 8. Suppose that $B \subset \mathbb{R}^n$ is a nonempty closed convex set. The normal cone to the set $B$ at $z \in B$ is defined as $N_B(z) = \{v \in \mathbb{R}^n : v^T(z - y) \ge 0 \text{ for all } y \in B\}$.
Lemma 9 (see [22]). If $\phi_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, 2, \ldots, m$, are regular at $x$, then $\partial(\sum_{i=1}^{m}\phi_i(x)) = \sum_{i=1}^{m}\partial\phi_i(x)$.
Lemma 10 (see [22]). If $V : \mathbb{R}^n \to \mathbb{R}$ is a regular function at $x$, and $x(\cdot) : \mathbb{R} \to \mathbb{R}^n$ is differentiable at $t$ and Lipschitz near $t$, then $dV(x(t))/dt = \langle \xi, \dot{x}(t)\rangle$ for all $\xi \in \partial V(x(t))$.
Lemma 11 (see [22]). If $B_1, B_2 \subset \mathbb{R}^n$ are closed convex sets satisfying $0 \in \operatorname{int}(B_1 - B_2)$, then for any $z \in B_1 \cap B_2$, $N_{B_1 \cap B_2}(z) = N_{B_1}(z) + N_{B_2}(z)$.
Lemma 12 (see [22]). If $f$ is locally Lipschitz near $z$ and attains a minimum over $\Omega$ at $z$, then $0 \in \partial f(z) + N_\Omega(z)$.
Set $\Omega = \{z \in \mathbb{R}^n : d \le z \le h\}$ and let $s \in \operatorname{int}(\Omega)$; then there exists a constant $r > 0$ such that $\Omega \subseteq B(s, r)$, where $\operatorname{int}(\cdot)$ denotes the interior of a set and $B(s, r) = \{z \in \mathbb{R}^n : \|z - s\| \le r\}$. It is easy to verify the following lemma.

Lemma 13. For any $z \in \mathbb{R}^n \setminus \Omega$ and $\xi \in K[g_{[d,h]}](z)$, $(z - s)^T\xi > \omega$, where $\omega = \min_{1 \le i \le n}\{h_i - s_i,\ s_i - d_i\}$ and $s_i$ is the $i$th element of $s \in \operatorname{int}(\Omega)$.
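The constants in Lemma 13 are easy to evaluate numerically. The sketch below (illustrative values only, not taken from the examples in Section 4) computes $\omega$ and one admissible radius $r$ with $\Omega \subseteq B(s, r)$:

```python
import numpy as np

# Hypothetical box [d, h] and interior point s (illustrative values only).
d = np.array([-2.0, -1.0, -6.0])
h = np.array([ 2.0,  2.0,  6.0])
s = np.array([ 0.0,  0.5,  1.0])  # any s with d < s < h

# omega = min_i { h_i - s_i, s_i - d_i } as in Lemma 13.
omega = np.minimum(h - s, s - d).min()

# One valid radius r with Omega contained in B(s, r): the distance from s
# to the farthest corner of the box.
r = np.linalg.norm(np.maximum(h - s, s - d))

print(omega, r)  # omega = 1.5 for these values
```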
3. Main Results

In this section, the main results on the convergence and optimality conditions of the proposed neural network are presented.
Theorem 14. Suppose that assumptions (A1) and (A2) hold, and let $z_0 \in B(s, r)$. If $\mu > (r l_f/\omega)\|B^{-1}\|$, then the solution $z(t)$ of the network system (9) with initial condition $z(0) = z_0$ satisfies $z(t) \in B(s, r)$ for all $t \ge 0$.
Proof. Set
\[
\rho(t) = \frac{1}{2}\|z(t) - s\|^2. \tag{16}
\]
By Lemma 10 and (15), evaluating the derivative of $\rho(t)$ along the trajectory of the system (9) gives
\[
\begin{aligned}
\frac{d\rho(t)}{dt} &= \dot{z}(t)^T (z(t) - s) \\
&= [-B^{-1}\gamma(x(t)) - \mu\xi(z(t))]^T (z(t) - s) \\
&= -[B^{-1}\gamma(x(t))]^T (z(t) - s) - \mu\,\xi(z(t))^T (z(t) - s) \\
&\le \|B^{-1}\|\,\|z(t) - s\|\,\|\gamma(x(t))\| - \mu (z(t) - s)^T \xi(z(t)). \tag{17}
\end{aligned}
\]
If $z(t) \in \Omega$, it follows directly that $z(t) \in B(s, r)$. If $z(t) \in B(s, r) \setminus \Omega$, then according to Lemma 13 one gets $(z(t) - s)^T \xi(z(t)) > \omega$. Thus we have
\[
\frac{d\rho(t)}{dt} < \|B^{-1}\|\,\|z(t) - s\|\,\|\gamma(x(t))\| - \mu\omega \le r l_f \|B^{-1}\| - \mu\omega. \tag{18}
\]
If $\mu > (r l_f/\omega)\|B^{-1}\|$, then $d\rho(t)/dt < 0$, which means that $z(t) \in B(s, r)$. Otherwise, suppose the state $z(t)$ leaves $B(s, r)$ at time $t_1$; then at $t = t_1$ we have $\|z(t_1) - s\| = r$, which implies $(d\rho(t)/dt)|_{t = t_1} \ge 0$, a contradiction.

As a result, if $\mu > (r l_f/\omega)\|B^{-1}\|$, then for any $z_0 \in B(s, r)$ the state $z(t)$ of the network system (9) with initial condition $z(0) = z_0$ satisfies $z(t) \in B(s, r)$. This completes the proof.
Theorem 15. Suppose that assumptions (A1) and (A2) hold. If $\mu > l_f\|B^{-1}\|$, then the solution of the neural network system (9) with initial condition $z(0) = z_0 \in B(s, r)$ converges to the feasible region $\Omega$ in finite time $T$, where $T = t^* = D(z(0))/[\sqrt{n}(\mu - l_f\|B^{-1}\|)]$, and stays there thereafter.
Proof. According to the definition of $D(z)$, $D(z)$ is a convex function on $\mathbb{R}^n$. By Lemma 10, it follows that
\[
\frac{dD(z)}{dt} = \zeta^T(t)\,\frac{dz(t)}{dt}, \quad \forall \zeta(t) \in \partial D(z). \tag{19}
\]
Noting that $\partial D(z) = K[g_{[d,h]}](z)$, we can obtain by (19) that for all $z \in B(s, r) \setminus \Omega$ there exist $\gamma(x) \in \partial f(x)$ and $\xi(t) \in K[g_{[d,h]}](z)$ such that
\[
\begin{aligned}
\frac{dD(z)}{dt} &= \xi^T(t)\,\dot{z}(t) \\
&= \xi^T(t)\,(-B^{-1}\gamma(x) - \mu\xi(t)) \\
&= -\xi^T(t)\,B^{-1}\gamma(x) - \mu\|\xi(t)\|^2 \\
&\le \|\xi(t)\|\,\|B^{-1}\|\,\|\gamma(x)\| - \mu\|\xi(t)\|^2 \\
&= \|\xi(t)\|\,[\|B^{-1}\|\,\|\gamma(x)\| - \mu\|\xi(t)\|]. \tag{20}
\end{aligned}
\]
Since $z \in \mathbb{R}^n \setminus \Omega$ and $\xi(t) \in K[g_{[d,h]}](z)$, at least one of the components of $\xi(t)$ is $-1$ or $1$, so $\|\xi(t)\| \ge 1$. Noting that $\|\xi(t)\| \le \sqrt{n}$, we have
\[
\frac{dD(z)}{dt} \le \sqrt{n}\,[\,l_f\|B^{-1}\| - \mu\,]. \tag{21}
\]
Let $\alpha = \sqrt{n}(\mu - l_f\|B^{-1}\|)$. If $\mu > l_f\|B^{-1}\|$, then $\alpha > 0$ and
\[
\frac{dD(z)}{dt} \le -\alpha. \tag{22}
\]
Integrating (22) from $0$ to $t$, we can obtain
\[
D(z(t)) \le D(z(0)) - \alpha t. \tag{23}
\]
Let $t^* = (1/\alpha)D(z(0))$. By (23), $D(z(t^*)) = 0$; that is, $d_i \le z_i(t^*) \le h_i$ for $i = 1, \ldots, n$. This shows that the state trajectory of the neural network (9) with initial condition $z(0) = z_0$ reaches $\Omega$ in finite time $T = t^* = D(z(0))/[\sqrt{n}(\mu - l_f\|B^{-1}\|)]$.

Next we prove that for $t \ge t^*$ the trajectory stays in $\Omega$ after reaching $\Omega$. If this is not true, then there exists $t_1 > t^*$ such that the trajectory leaves $\Omega$ at $t_1$, and there exists $t_2 > t_1$ such that $z(t) \in \mathbb{R}^n \setminus \Omega$ for $t \in (t_1, t_2)$.

Integrating (22) from $t_1$ to $t_2$, it follows that
\[
\int_{t_1}^{t_2} \frac{dD(z(t))}{dt}\,dt = D(z(t_2)) - D(z(t_1)) \le -\alpha(t_2 - t_1) < 0. \tag{24}
\]
Since $D(z(t_1)) = 0$, this gives $D(z(t_2)) < 0$. By the definition of $D(z(t))$, $D(z(t)) \ge 0$ for any $t \in [0, \infty)$, which contradicts the result above. The proof is completed.
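For concreteness, the reaching-time bound $T = D(z(0))/[\sqrt{n}(\mu - l_f\|B^{-1}\|)]$ can be evaluated directly; every number below is a hypothetical placeholder for $D(z(0))$, $l_f$, and $\|B^{-1}\|$, chosen only to illustrate the formula:

```python
import math

# Hypothetical values (for illustration only).
n = 3
D0 = 1.3          # D(z(0)): initial value of the penalty function D
lf = 3.0          # assumed bound on the subgradients of f
Binv_norm = 0.9   # assumed value of ||B^{-1}||
mu = 11.0         # design parameter; mu > lf * Binv_norm is required

alpha = math.sqrt(n) * (mu - lf * Binv_norm)  # decay rate of D(z(t)), cf. (22)
T = D0 / alpha                                # guaranteed reaching time t*
print(T)
```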
Theorem 16. Suppose that assumptions (A1) and (A2) hold. If $\mu > \max\{l_f\|B^{-1}\|,\ (r l_f/\omega)\|B^{-1}\|\}$, then the equilibrium point of the neural network system (9) is an optimal solution of the problem (4), and vice versa.
Proof. Denote by $z^*$ an equilibrium point of the neural network system (9); then there exist $\gamma^* \in \partial f(x^*)$ and $\xi^* \in K[g_{[d,h]}](z^*)$ such that
\[
B^{-1}\gamma^* + \mu\xi^* = 0, \tag{25}
\]
where $x^* = B^{-1}z^*$. By Theorem 15, $z^* \in \Omega$; hence $\xi^* = 0$, and by (25), $\gamma^* = 0$. We can get the following projection formulation:
\[
z^* = \phi_{[d,h]}(z^* - B^{-1}\gamma^*), \tag{26}
\]
where $\phi_{[d,h]}(y) = (\phi_{[d_1,h_1]}(y_1), \ldots, \phi_{[d_n,h_n]}(y_n))$ with $\phi_{[d_i,h_i]}(y_i)$, $i = 1, \ldots, n$, defined as
\[
\phi_{[d_i,h_i]}(y_i) =
\begin{cases}
h_i, & y_i \ge h_i, \\
y_i, & d_i < y_i < h_i, \\
d_i, & y_i \le d_i.
\end{cases} \tag{27}
\]
By the well-known projection theorem [17], (26) is equivalent to the following variational inequality:
\[
(B^{-1}\gamma^*)^T (z - z^*) \ge 0, \quad \forall z \in \Omega. \tag{28}
\]
Since $f(x)$ is pseudoconvex, $f(B^{-1}z)$ is pseudoconvex on $\Omega$. By (28) we can obtain that $f(B^{-1}z) \ge f(B^{-1}z^*)$ for all $z \in \Omega$. This shows that $z^*$ is a minimum point of $f(B^{-1}z)$ over $\Omega$.

Next we prove the converse. Denote by $z^*$ an optimal solution of the problem; then $z^* \in [d, h]$. Since $z^*$ is a minimum point of $f(B^{-1}z)$ over the feasible region $\Omega$, according to Lemma 12 it follows that
\[
0 \in \partial f(B^{-1}z^*) + N_\Omega(z^*). \tag{29}
\]
From (29) it follows that there exists $\eta^* \in N_\Omega(z^*)$ such that $B^{-1}\gamma^* = -\eta^* \in B^{-1}\partial f(x^*)$ and $\|\eta^*\| \le l_f\|B^{-1}\|$. Noting that $N_\Omega(z^*) = \{v\xi^* : v \ge 0,\ \xi^* \in K[g_{[d,h]}](z^*),$ and at least one $\xi^*_i$ is $1$ or $-1\}$, there exist $\beta \ge 0$ and $\xi^* \in K[g_{[d,h]}](z^*)$ such that $\beta\xi^* \in N_\Omega(z^*)$ and $\eta^* = \beta\xi^*$.

In the following we prove that $\eta^* \in \mu K[g_{[d,h]}](z^*)$; that is, $\beta \le \mu$. If not, then $\beta > \mu$. Since $(z^* - s)^T\xi^* = \sum_{i=1}^{n}(z^*_i - s_i)\xi^*_i \ge \omega$, we have
\[
(z^* - s)^T\eta^* = \beta(z^* - s)^T\xi^* \ge \omega\beta > \mu\omega. \tag{30}
\]
Thus $\|\eta^*\| \ge \mu\omega/\|z^* - s\| \ge \mu\omega/r$. By the condition of Theorem 16, $\mu > \max\{l_f\|B^{-1}\|,\ (r l_f/\omega)\|B^{-1}\|\}$; hence $\|\eta^*\| > l_f\|B^{-1}\|$, which contradicts $\|\eta^*\| \le l_f\|B^{-1}\|$. This implies that $\beta\xi^* \in \mu K[g_{[d,h]}](z^*)$; that is, $0 \in \partial F(z^*) + \mu K[g_{[d,h]}](z^*)$, which means that $z^*$ is an equilibrium point of the neural network system (9). This completes the proof.
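The componentwise operator in (27) is the ordinary projection onto the box $[d, h]$ and amounts to a clip; a minimal sketch (the sample vectors below are illustrative):

```python
import numpy as np

def box_projection(y, d, h):
    """Componentwise projection phi_[d,h] of (27): clip each y_i to [d_i, h_i]."""
    return np.minimum(np.maximum(y, d), h)

# Example: project a point lying outside the box onto it.
d = np.array([-2.0, -1.0, -6.0])
h = np.array([ 2.0,  2.0,  6.0])
y = np.array([ 3.0,  0.5, -7.0])
print(box_projection(y, d, h))  # projects to (2.0, 0.5, -6.0)
```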
Theorem 17. Suppose that assumptions (A1) and (A2) hold. If $\mu > \max\{l_f\|B^{-1}\|,\ (r l_f/\omega)\|B^{-1}\|\}$, then the equilibrium point of the neural network system (9) is stable in the sense of Lyapunov.
Proof. Denote by $z^*$ an equilibrium point of the neural network system (9); that is,
\[
0 \in \partial F(z^*) + \mu K[g_{[d,h]}](z^*). \tag{31}
\]
By Theorem 16, $z^*$ is an optimal solution of the problem (4); that is, $f(B^{-1}z) \ge f(B^{-1}z^*)$ for all $z \in \Omega$. By Theorem 15, the trajectory $z(t)$ with initial condition $z(0) = z_0 \in B(s, r)$ converges to the feasible region $\Omega$ in finite time $T = t^* = D(z(0))/[\sqrt{n}(\mu - l_f\|B^{-1}\|)]$ and remains in $\Omega$ thereafter; that is, $z(t) \in \Omega$ for all $t \ge t^*$. Let
\[
V_1(z) = f(B^{-1}z) + \mu D(z). \tag{32}
\]
Since $z^*$ is a minimum point of $f(B^{-1}z)$ on $\Omega$, we can get that $V_1(z) \ge V_1(z^*)$ for all $z \in \Omega$.

Consider the following Lyapunov function:
\[
V(z) = V_1(z) - V_1(z^*) + \frac{1}{2}\|z - z^*\|^2. \tag{33}
\]
Obviously, from (33), $V(z) \ge \frac{1}{2}\|z - z^*\|^2$ and
\[
\partial V(z) = \partial V_1(z) + z - z^*. \tag{34}
\]
Evaluating the derivative of $V$ along the trajectory of the system (9) gives
\[
\frac{dV(z(t))}{dt} = \xi^T(t)\,\frac{dz(t)}{dt}, \quad \forall \xi(t) \in \partial V(z(t)). \tag{35}
\]
Since $\dot{z}(t) \in -\partial V_1(z(t))$, by (34) we can set $\xi(t) = -\dot{z}(t) + z - z^*$. Hence
\[
\begin{aligned}
\frac{dV(z(t))}{dt} &= (-\dot{z}(t) + z - z^*)^T\,\dot{z}(t) \\
&= -\|\dot{z}(t)\|^2 + (z - z^*)^T\dot{z}(t) \\
&\le \sup_{\theta \in \partial V_1(z)}(-\|\theta\|^2) + \sup_{\theta \in \partial V_1(z)}[-(z - z^*)^T\theta] \\
&= -\inf_{\theta \in \partial V_1(z)}\|\theta\|^2 - \inf_{\theta \in \partial V_1(z)}(z - z^*)^T\theta. \tag{36}
\end{aligned}
\]
For any $\theta \in \partial V_1(z)$ there exist $\gamma \in \partial f(x)$ and $\xi \in K[g_{[d,h]}](z)$ such that $\theta = B^{-1}\gamma + \mu\xi$. Since $f(x)$ is pseudoconvex on $\Omega$, $\partial f(x)$ is pseudomonotone on $\Omega$. From the proof of Theorem 16, $(z - z^*)^T B^{-1}\gamma \ge 0$ for any $z \in \Omega$. By the definition of $g_{[d,h]}(z)$, $(z - z^*)^T\xi \ge 0$. Hence $(z - z^*)^T\theta \ge 0$. This implies that
\[
\frac{dV(z(t))}{dt} \le -\inf_{\theta \in \partial V_1(z)}\|\theta\|^2 \le 0. \tag{37}
\]
Inequality (37) shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.
4. Numerical Examples

In this section, two examples are given to illustrate the effectiveness of the proposed approach for solving pseudoconvex optimization problems.
Example 1. Consider the quadratic fractional optimization problem
\[
\begin{aligned}
\min\quad & f(x) = \frac{x^T Q x + a^T x + a_0}{c^T x + c_0} \\
\text{subject to}\quad & d \le Bx \le h,
\end{aligned} \tag{38}
\]
where $Q$ is an $n \times n$ matrix, $a, c \in \mathbb{R}^n$, and $a_0, c_0 \in \mathbb{R}$. Here we choose $n = 3$,
\[
Q = \begin{pmatrix} 5 & -1 & 2 \\ -1 & 5 & -1 \\ 2 & -1 & 3 \end{pmatrix}, \qquad
B = \begin{pmatrix} 2 & 1 & 1 \\ 11 & -16 & 11 \\ 11 & -92 & -10 \end{pmatrix},
\]
\[
a = \begin{pmatrix} 1 \\ -2 \\ -2 \end{pmatrix}, \qquad
c = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}, \qquad
d = \begin{pmatrix} -2 \\ -1 \\ -6 \end{pmatrix}, \qquad
h = \begin{pmatrix} 2 \\ 2 \\ 6 \end{pmatrix}, \qquad
a_0 = -2, \quad c_0 = 4. \tag{39}
\]
It is easily verified that $Q$ is symmetric and positive definite, and consequently $f$ is pseudoconvex on $\Omega = \{x : d \le Bx \le h\}$. The proposed neural network (9) is therefore capable of solving this problem. Obviously, the neural network (9) associated with (38) can be described as
\[
B\dot{x}(t) = -B^{-1}\nabla f(x) - \mu\xi(Bx(t)), \tag{40}
\]
where
\[
\nabla f(x) = \frac{2Qx + a}{c^T x + c_0} - \frac{x^T Q x + a^T x + a_0}{(c^T x + c_0)^2}\,c, \qquad
\xi(Bx(t)) \in K[g_{[d,h]}](Bx(t)). \tag{41}
\]
Let $\bar{x} = (-1, -1, -1)^T$ and $s = B\bar{x} \in \operatorname{int}(\Omega)$; then we have $\omega = 3$. Moreover, the restricted region $[d, h] \subset B(s, r)$ with $r = 2\sqrt{3}$. An upper bound of $\|\partial f(x)\|$ is estimated as $l_f = 3$.
Figure 2: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (0, 0, 0.3)$.
Figure 3: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (-0.2, 0, 0.4)$.
Then the design parameter $\mu$ is estimated as $\mu > 10.32$; let $\mu = 11$ in the simulation.

We have simulated the dynamical behavior of the neural network with $\mu = 11$ using mathematical software. Figures 2, 3, and 4 display the state trajectories of this neural network for different initial values, showing that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectories are stable in the sense of Lyapunov.
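As a rough check of Theorem 15 on this example, network (40) can be integrated with a simple forward-Euler scheme. The step size, horizon, initial point, and the particular single-valued selection of $\xi$ used below are our own choices; this is an illustrative sketch, not the authors' simulation code:

```python
import numpy as np

# Data of Example 1, as given in (39).
Q = np.array([[5., -1., 2.], [-1., 5., -1.], [2., -1., 3.]])
B = np.array([[2., 1., 1.], [11., -16., 11.], [11., -92., -10.]])
a = np.array([1., -2., -2.])
c = np.array([2., 1., -1.])
d = np.array([-2., -1., -6.])
h = np.array([2., 2., 6.])
a0, c0 = -2.0, 4.0
mu = 11.0

def grad_f(x):
    # Gradient (41) of the quadratic fractional objective (38).
    den = c @ x + c0
    num = x @ Q @ x + a @ x + a0
    return (2.0 * Q @ x + a) / den - (num / den ** 2) * c

def xi(z):
    # One selection from K[g_[d,h]](z): +1 above the box, -1 below, 0 inside.
    return (z > h).astype(float) - (z < d).astype(float)

Binv = np.linalg.inv(B)
x0 = np.array([0.0, 0.0, 0.3])   # initial point used for Figure 2
z = B @ x0                       # network state z = Bx
dt = 1e-4
for _ in range(5000):            # forward Euler up to t = 0.5
    z = z + dt * (-Binv @ grad_f(Binv @ z) - mu * xi(z))

# The discretized state should have reached the box [d, h],
# up to chattering of order mu * dt at the boundary.
print(np.all(z >= d - 1e-2), np.all(z <= h + 1e-2))
```

Up to the $O(\mu\,\Delta t)$ chattering that a fixed-step discretization of a discontinuous right-hand side produces at the boundary, the state enters the box in finite time, mirroring the behavior in Figures 2-4.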
Example 2. Consider the following pseudoconvex optimization problem:
\[
\begin{aligned}
\min\quad & f(x) = (x_1 - x_3)^2 + (x_2 - x_3)^2 \\
\text{subject to}\quad & d \le Bx \le h,
\end{aligned} \tag{42}
\]
Figure 4: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (0.1, 0, 0.3)$.
where
\[
B = \begin{pmatrix} 1 & 2 & 6 \\ 0 & 1 & 2 \\ 2 & 3 & 1 \end{pmatrix}, \qquad
d = \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix}, \qquad
h = \begin{pmatrix} 2 \\ 4 \\ 5 \end{pmatrix}. \tag{43}
\]
In this problem the objective function $f(x)$ is pseudoconvex; thus the proposed neural network is suitable for solving the problem in this case. The neural network (9) associated with (42) can be described as
\[
B\dot{x}(t) = -B^{-1}\nabla f(x) - \mu\xi(Bx(t)), \tag{44}
\]
where
\[
\nabla f(x) = \begin{pmatrix} 2(x_1 - x_3) \\ 2(x_2 - x_3) \\ -2(x_1 + x_2 - 2x_3) \end{pmatrix}, \qquad
\xi(Bx(t)) \in K[g_{[d,h]}](Bx(t)). \tag{45}
\]
Let $\bar{x} = (0, 1, 0)^T$ and $s = B\bar{x} \in \operatorname{int}(\Omega)$; then we have $\omega = 2$. Moreover, the restricted region $[d, h] \subset B(s, r)$ with $r = \sqrt{13}$. An upper bound of $\|\partial f(x)\|$ is estimated as $l_f = 4\sqrt{6}$. Then the design parameter $\mu$ is estimated as $\mu > 29.4$; let $\mu = 30$ in the simulation.
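The pseudoconvexity claim for (42) can be checked directly: $f$ is in fact convex, since its constant Hessian is positive semidefinite, and every convex function is pseudoconvex. A quick numerical confirmation (illustrative):

```python
import numpy as np

# Hessian of f(x) = (x1 - x3)^2 + (x2 - x3)^2 is constant:
H = 2.0 * np.array([[ 1.,  0., -1.],
                    [ 0.,  1., -1.],
                    [-1., -1.,  2.]])
eigs = np.linalg.eigvalsh(H)   # ascending eigenvalues
print(eigs)                    # all >= 0, so f is convex, hence pseudoconvex
```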
Figures 5 and 6 display the state trajectories of this neural network for different initial values. It can be seen that these trajectories converge to the feasible region in finite time as well, in accordance with the conclusion of Theorem 15, and it can be verified that the trajectories are stable in the sense of Lyapunov.
5. Conclusion

In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization with box constraints. The neural network model has been described by a differential inclusion system, and the constructed recurrent neural network has been proved to be stable in the sense of Lyapunov. The conditions which ensure
Figure 5: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (5.4, 6.5, -1.5)$.
Figure 6: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (-1.63, 1.96, -2.3)$.
the finite-time state convergence to the feasible region have been obtained. The proposed neural network can be applied to a wide variety of optimization problems arising in engineering applications.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This work was supported by the Natural Science Foundation of Hebei Province of China (A2011203103) and the Hebei Province Education Foundation of China (2009157).
References

[1] Q. Liu, Z. Guo, and J. Wang, "A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization," Neural Networks, vol. 26, pp. 99-109, 2012.
[2] W. Lu and T. Chen, "Dynamical behaviors of delayed neural network systems with discontinuous activation functions," Neural Computation, vol. 18, no. 3, pp. 683-708, 2006.
[3] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379-386, 2005.
[4] X.-B. Liang and J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints," IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1251-1262, 2000.
[5] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13-15, pp. 2449-2459, 2007.
[6] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447-458, 2002.
[7] S. Qin and X. Xue, "Global exponential stability and global convergence in finite time of neural networks with discontinuous activations," Neural Processing Letters, vol. 29, no. 3, pp. 189-204, 2009.
[8] S. Effati, A. Ghomashi, and A. R. Nazemi, "Application of projection neural network in solving convex programming problems," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1103-1114, 2007.
[9] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378-2391, 2008.
[10] X. Hu and J. Wang, "A recurrent neural network for solving a class of general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 3, pp. 528-539, 2007.
[11] X. Hu and J. Wang, "Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1487-1499, 2006.
[12] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I, vol. 40, no. 9, pp. 613-618, 1993.
[13] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, no. 4, pp. 629-641, 1994.
[14] J. Wang, "Primal and dual assignment networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 784-790, 1997.
[15] J. Wang, "Primal and dual neural networks for shortest-path routing," IEEE Transactions on Systems, Man, and Cybernetics A, vol. 28, no. 6, pp. 864-869, 1998.
[16] W. Bian and X. Xue, "Neural network for solving constrained convex optimization problems with global attractivity," IEEE Transactions on Circuits and Systems I, vol. 60, no. 3, pp. 710-723, 2013.
[17] W. Bian and X. Xue, "A dynamical approach to constrained nonsmooth convex minimization problem coupling with penalty function method in Hilbert space," Numerical Functional Analysis and Optimization, vol. 31, no. 11, pp. 1221-1253, 2010.
[18] Y. Xia and J. Wang, "A one-layer recurrent neural network for support vector machine learning," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 34, no. 2, pp. 1261-1269, 2004.
[19] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378-2391, 2008.
[20] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447-458, 2002.
[21] S. Qin, W. Bian, and X. Xue, "A new one-layer recurrent neural network for non-smooth pseudoconvex optimization," Neurocomputing, vol. 120, pp. 655-662, 2013.
[22] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, NY, USA, 1983.
Mathematical Problems in Engineering 5
Let 119905lowast= (1120572)119863(119911(0)) By (23)119863(119911(119905
lowast)) = 0 that is 119911
119894(119905lowast) =
ℎ119894or 119911119894(119905lowast) = 119889119894 This shows that the state trajectory of neural
network (9) with initial condition 119911(0) = 1199110reaches Ω in
finite time 119879 = 119905lowast= 119863(119911(0))radic119899(120583 minus 119897
119891119861minus1
)Next we prove that when 119905 ge 119905
lowast the trajectory stays inΩ
after reaching Ω If this is not true then there exists 1199051gt 119905lowast
such that the trajectory leavesΩ at 1199051 and there exists 119905
2gt 1199051
such that for 119905 isin (1199051 1199052) 119911(119905) isin 119877119899 Ω
By integrating (22) from 1199051to 1199052 it follows that
int
1199052
1199051
119889119863 (119911 (119905))
119889119905
119889119905 = 119863 (119911 (1199052)) minus 119863 (119911 (119905
1))
le minus120572 (1199052minus 1199051) lt 0
(24)
Due to 119863(119911(1199051)) = 0 119863(119911(119905
2)) lt 0 By the definition of
119863(119911(119905)) 119863(119911(119905)) ge 0 for any 119905 isin [0infin] which contradictsthe result above The proof is completed
Theorem 16 Suppose that assumptions (A1) and (A
2) hold If
120583 gt max119897119891119861minus1
(119903119897119891120596)119861
minus1
then the equilibrium pointof the neural network system (9) is an optimal solution of theproblem (4) and vice versa
Proof Denote 119911lowast as an equilibrium point of the neural
network system (9) then there exist 120574lowast isin 120597119891(119909lowast
) 120585lowast isin
119870[119892[119889ℎ]
](119911lowast
) such that
119861minus1
120574lowast
+ 120583120585lowast
= 0 (25)
where 120574lowast = 119861minus1119911lowast By Theorem 15 119911lowast isin Ω hence 120585lowast = 0 By(25) 120574lowast = 0We can get the following projection formulation
119911lowast
= 120601[119889ℎ]
(119911lowast
minus 119861minus1
120574lowast
) (26)
where 120601[119889ℎ]
= (120601[1198891ℎ1](1199101) 120601
[119889119899ℎ119899](119910119899)) with 120601
[119889119894ℎ119894](119910119894)
119894 = 1 119899 defined as
120601[119889119894ℎ119894](119910119894) =
ℎ119894 119910119894ge ℎ119894
119910119894 119889119894lt 119910119894lt ℎ119894
119889119894 119910119894le 119889119894
(27)
By the well-known projection theorem [17] (26) is equivalentto the following variational inequality
(119861minus1
120574lowast
)
119879
(119911 minus 119911lowast
) ge 0 forall119911 isin Ω (28)
Since 119891(119909) is pseudoconvex 119891(119861minus1119911) is pseudoconvex on ΩBy (28) we can obtain that 119891(119861minus1119911) ge 119891(119861
minus1
119911lowast
) This showsthat 119911lowast is a minimum point of 119891(119861minus1119911) overΩ
Next, we prove the reverse side. Denote $z^*$ as an optimal solution of the problem; then $z^* \in [d, h]$. Since $z^*$ is a minimum point of $f(B^{-1}z)$ over the feasible region $\Omega$, according to Lemma 12 it follows that

$$0 \in B^{-1}\partial f(B^{-1}z^*) + N_\Omega(z^*). \quad (29)$$

From (29), it follows that there exists $\eta^* \in N_\Omega(z^*)$ with $B^{-1}\gamma^* = -\eta^* \in B^{-1}\partial f(x^*)$ and $\|\eta^*\| \le l_f\|B^{-1}\|$. Noting that $N_\Omega(z^*) = \{\nu\xi^* : \nu \ge 0,\ \xi^* \in K[g_{[d,h]}](z^*),\ \text{and at least one } \xi_i^* \text{ is } 1 \text{ or } -1\}$, there exist $\beta \ge 0$ and $\xi^* \in K[g_{[d,h]}](z^*)$ such that $\beta\xi^* \in N_\Omega(z^*)$ and $\eta^* = \beta\xi^*$.

In the following, we prove that $\eta^* \in \mu K[g_{[d,h]}](z^*)$; that is, we claim that $\beta \le \mu$. If not, then $\beta > \mu$. Since $(z^* - s)^T \xi^* = \sum_{i=1}^n (z_i^* - s_i)\xi_i^* \ge \omega$, we have

$$(z^* - s)^T \eta^* = \beta (z^* - s)^T \xi^* \ge \omega\beta > \mu\omega. \quad (30)$$

Thus $\|\eta^*\| > \mu\omega/\|z^* - s\| \ge \mu\omega/r$. By the condition of Theorem 16, $\mu > \max\{l_f\|B^{-1}\|,\ (r l_f/\omega)\|B^{-1}\|\}$; hence $\|\eta^*\| > l_f\|B^{-1}\|$, which contradicts $\|\eta^*\| \le l_f\|B^{-1}\|$. This implies that $\beta\xi^* \in \mu K[g_{[d,h]}](z^*)$; that is, $0 \in \partial F(z^*) + \mu K[g_{[d,h]}](z^*)$, which means that $z^*$ is an equilibrium point of the neural network system (9). This completes the proof.
Theorem 17. Suppose that assumptions (A1) and (A2) hold. If $\mu > \max\{l_f\|B^{-1}\|,\ (r l_f/\omega)\|B^{-1}\|\}$, then the equilibrium point of the neural network system (9) is stable in the sense of Lyapunov.
Proof. Denote $z^*$ as an equilibrium point of the neural network system (9); that is,

$$0 \in \partial F(z^*) + \mu K[g_{[d,h]}](z^*). \quad (31)$$

By Theorem 16, $z^*$ is an optimal solution of the problem (4); that is, $f(B^{-1}z) \ge f(B^{-1}z^*)$ for all $z \in \Omega$. By Theorem 15, the trajectory $z(t)$ with initial condition $z(0) = z_0 \in B(s, r)$ converges to the feasible region $\Omega$ in finite time $T = t^* = D(z(0))/[\sqrt{n}\,(\mu - l_f\|B^{-1}\|)]$ and remains in $\Omega$ forever; that is, $z(t) \in \Omega$ for all $t \ge t^*$. Let

$$V_1(z) = f(B^{-1}z) + \mu D(z). \quad (32)$$

Since $z^*$ is a minimum point of $f(B^{-1}z)$ on $\Omega$, we can get that $V_1(z) \ge V_1(z^*)$ for all $z \in \Omega$.

Consider the following Lyapunov function:

$$V(z) = V_1(z) - V_1(z^*) + \frac{1}{2}\|z - z^*\|^2. \quad (33)$$

Obviously, from (33), $V(z) \ge (1/2)\|z - z^*\|^2$ and

$$\partial V(z) = \partial V_1(z) + z - z^*. \quad (34)$$

Evaluating the derivative of $V$ along the trajectory of the system (9) gives

$$\frac{dV(z(t))}{dt} = \xi^T(t)\,\frac{dz(t)}{dt}, \quad \forall \xi(t) \in \partial V(z(t)). \quad (35)$$

Since $\dot z(t) \in -\partial V_1(z(t))$, by (34) we can set $\xi(t) = -\dot z(t) + z - z^*$. Hence

$$\frac{dV(z(t))}{dt} = (-\dot z(t) + z - z^*)^T \dot z(t) = -\|\dot z(t)\|^2 + (z - z^*)^T \dot z(t) \le \sup_{\theta \in \partial V_1(z)} (-\|\theta\|^2) + \sup_{\theta \in \partial V_1(z)} \left[-(z - z^*)^T \theta\right] = -\inf_{\theta \in \partial V_1(z)} \|\theta\|^2 - \inf_{\theta \in \partial V_1(z)} (z - z^*)^T \theta. \quad (36)$$
6 Mathematical Problems in Engineering
For any $\theta \in \partial V_1(z)$, there exist $\gamma \in \partial f(x)$ and $\xi \in K[g_{[d,h]}](z)$ such that $\theta = B^{-1}\gamma + \mu\xi$. Since $f(x)$ is pseudoconvex on $\Omega$, $\partial f(x)$ is pseudomonotone on $\Omega$. From the proof of Theorem 16, for any $z \in \Omega$, $(z - z^*)^T B^{-1}\gamma \ge 0$. By the definition of $g_{[d,h]}(z)$, $(z - z^*)^T \xi \ge 0$. Hence $(z - z^*)^T \theta \ge 0$. This implies that

$$\frac{dV(z(t))}{dt} \le -\inf_{\theta \in \partial V_1(z)} \|\theta\|^2 \le 0. \quad (37)$$

Equation (37) shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.
4. Numerical Examples

In this section, two examples are given to illustrate the effectiveness of the proposed approach for solving pseudoconvex optimization problems.
Example 1. Consider the quadratic fractional optimization problem

$$\min\ f(x) = \frac{x^T Q x + a^T x + a_0}{c^T x + c_0} \quad \text{subject to } d \le Bx \le h, \quad (38)$$
where $Q$ is an $n \times n$ matrix, $a, c \in R^n$, and $a_0, c_0 \in R$. Here we choose $n = 3$,

$$Q = \begin{pmatrix} 5 & -1 & 2 \\ -1 & 5 & -1 \\ 2 & -1 & 3 \end{pmatrix}, \quad B = \begin{pmatrix} 2 & 1 & 1 \\ 11 & -16 & 11 \\ 11 & -92 & -10 \end{pmatrix},$$

$$a = \begin{pmatrix} 1 \\ -2 \\ -2 \end{pmatrix}, \quad c = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}, \quad d = \begin{pmatrix} -2 \\ -1 \\ -6 \end{pmatrix}, \quad h = \begin{pmatrix} 2 \\ 2 \\ 6 \end{pmatrix}, \quad a_0 = -2, \quad c_0 = 4. \quad (39)$$
It is easily verified that $Q$ is symmetric and positive definite, and consequently $f$ is pseudoconvex on $\Omega = \{x : d \le Bx \le h\}$. The proposed neural network in (9) is therefore capable of solving this problem. Obviously, the neural network (9) associated with (38) can be described as

$$B\dot x(t) = -B^{-1}\nabla f(x) - \mu\xi(Bx(t)), \quad (40)$$
where

$$\nabla f(x) = \frac{2Qx + a}{c^T x + c_0} - \frac{x^T Q x + a^T x + a_0}{(c^T x + c_0)^2}\,c, \qquad \xi(Bx(t)) \in K[g_{[d,h]}](Bx(t)). \quad (41)$$
Let $x = (-1, -1, -1)^T$ and $s = Bx \in \operatorname{int}(\Omega)$; then we have $\omega = 3$. Moreover, the restricted region $[d, h] \subset B(s, r)$ with $r = 2\sqrt{3}$. An upper bound of $\|\partial f(x)\|$ is estimated as $l_f = 3$.
[Figure 2: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (0, 0, 0.3)$.]
[Figure 3: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (-0.2, 0, 0.4)$.]
Then the design parameter $\mu$ is estimated as $\mu > 10.32$; we let $\mu = 11$ in the simulation.

We have simulated the dynamical behavior of the neural network using mathematical software with $\mu = 11$. Figures 2, 3, and 4 display the state trajectories of this neural network with different initial values, which show that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectories are stable in the sense of Lyapunov.
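As a quick sanity check on the gradient formula (41), one can compare it against a central finite difference at the interior point $x = (-1, -1, -1)^T$ used above. This is an illustrative sketch using only the data $Q$, $a$, $c$, $a_0$, $c_0$ from (39):

```python
import numpy as np

# Data from (39); only Q, a, c, a0, c0 are needed for the gradient check.
Q = np.array([[5.0, -1.0, 2.0], [-1.0, 5.0, -1.0], [2.0, -1.0, 3.0]])
a = np.array([1.0, -2.0, -2.0])
c = np.array([2.0, 1.0, -1.0])
a0, c0 = -2.0, 4.0

def f(x):
    # Quadratic fractional objective (38)
    return (x @ Q @ x + a @ x + a0) / (c @ x + c0)

def grad_f(x):
    # Quotient rule, matching (41)
    den = c @ x + c0
    return (2.0 * Q @ x + a) / den - f(x) / den * c

# Compare (41) with a central finite difference at x = (-1, -1, -1)
x = np.array([-1.0, -1.0, -1.0])
eps = 1e-6
numeric = np.array([(f(x + eps * np.eye(3)[i]) - f(x - eps * np.eye(3)[i])) / (2 * eps)
                    for i in range(3)])
assert np.allclose(grad_f(x), numeric, atol=1e-5)
```

At this point the denominator $c^T x + c_0 = 2 > 0$, so the formula is well defined.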
Example 2. Consider the following pseudoconvex optimization problem:

$$\min\ f(x) = (x_1 - x_3)^2 + (x_2 - x_3)^2 \quad \text{subject to } d \le Bx \le h, \quad (42)$$
[Figure 4: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (0.1, 0, 0.3)$.]
where

$$B = \begin{pmatrix} 1 & 2 & 6 \\ 0 & 1 & 2 \\ 2 & 3 & 1 \end{pmatrix}, \quad d = \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix}, \quad h = \begin{pmatrix} 2 \\ 4 \\ 5 \end{pmatrix}. \quad (43)$$
In this problem, the objective function $f(x)$ is pseudoconvex; thus the proposed neural network is suitable for solving the problem in this case. The neural network (9) associated with (42) can be described as

$$B\dot x(t) = -B^{-1}\nabla f(x) - \mu\xi(Bx(t)), \quad (44)$$
where

$$\nabla f(x) = \begin{pmatrix} 2(x_1 - x_3) \\ 2(x_2 - x_3) \\ -2(x_1 + x_2 - 2x_3) \end{pmatrix}, \qquad \xi(Bx(t)) \in K[g_{[d,h]}](Bx(t)). \quad (45)$$
Let $x = (0, 1, 0)^T$ and $s = Bx \in \operatorname{int}(\Omega)$; then we have $\omega = 2$. Moreover, the restricted region $[d, h] \subset B(s, r)$ with $r = \sqrt{13}$. An upper bound of $\|\partial f(x)\|$ is estimated as $l_f = 4\sqrt{6}$. Then the design parameter $\mu$ is estimated as $\mu > 29.4$; we let $\mu = 30$ in the simulation.

Figures 5 and 6 display the state trajectories of this neural network with different initial values. It can be seen that these trajectories converge to the feasible region in finite time as well, in accordance with the conclusion of Theorem 15. It can also be verified that the trajectories are stable in the sense of Lyapunov.
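The qualitative behavior can be reproduced with a simple forward-Euler discretization of the dynamics (44). The sketch below uses the data of (43) and $\mu = 30$; the single-valued selection of $K[g_{[d,h]}]$, the step size, the horizon, and the initial state are illustrative choices, not taken from the paper:

```python
import numpy as np

# Data of Example 2, from (43); mu = 30 as in the simulation.
B = np.array([[1.0, 2.0, 6.0], [0.0, 1.0, 2.0], [2.0, 3.0, 1.0]])
d = np.array([1.0, 1.0, 3.0])
h = np.array([2.0, 4.0, 5.0])
Binv = np.linalg.inv(B)
mu = 30.0

def grad_f(x):
    # Gradient of f(x) = (x1 - x3)^2 + (x2 - x3)^2, matching (45)
    return np.array([2.0 * (x[0] - x[2]),
                     2.0 * (x[1] - x[2]),
                     -2.0 * (x[0] + x[1] - 2.0 * x[2])])

def xi(z):
    # A single-valued selection from K[g_[d,h]](z): +1 above the box,
    # -1 below it, 0 inside (set-valued only on the boundary)
    return (z > h).astype(float) - (z < d).astype(float)

# Forward-Euler integration of z'(t) = -B^{-1} grad_f(x) - mu xi(z), z = Bx,
# from an illustrative initial state inside the box [d, h]
z = np.array([1.5, 2.0, 4.0])
dt = 1e-4
for _ in range(20000):          # integrate over t in [0, 2]
    x = Binv @ z
    z = z + dt * (-Binv @ grad_f(x) - mu * xi(z))

# The state stays in (a small neighborhood of) the feasible region,
# up to the chattering induced by the explicit discretization.
assert np.all(z >= d - 0.05) and np.all(z <= h + 0.05)
```

With $\mu$ larger than the gradient drift, out-of-box components are pulled back at a net negative rate, so the discretized trajectory slides along the box boundary with chatter of order $\mu\,\Delta t$; a smaller step size tightens the band around $\Omega$.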
5. Conclusion

In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization with box constraints. The neural network model has been described by a differential inclusion system, and the constructed recurrent neural network has been proved to be stable in the sense of Lyapunov. The conditions which ensure
[Figure 5: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (5.4, 6.5, -1.5)$.]
[Figure 6: Time-domain behavior of the state variables $x_1$, $x_2$, and $x_3$ with initial point $x_0 = (-16.3, 19.6, -2.3)$.]
the finite-time state convergence to the feasible region have been obtained. The proposed neural network can be applied to a wide variety of optimization problems arising in engineering applications.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the Natural Science Foundation of Hebei Province of China (A2011203103) and the Hebei Province Education Foundation of China (2009157).
References
[1] Q. Liu, Z. Guo, and J. Wang, "A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization," Neural Networks, vol. 26, pp. 99–109, 2012.
[2] W. Lu and T. Chen, "Dynamical behaviors of delayed neural network systems with discontinuous activation functions," Neural Computation, vol. 18, no. 3, pp. 683–708, 2006.
[3] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
[4] X.-B. Liang and J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints," IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1251–1262, 2000.
[5] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
[6] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
[7] S. Qin and X. Xue, "Global exponential stability and global convergence in finite time of neural networks with discontinuous activations," Neural Processing Letters, vol. 29, no. 3, pp. 189–204, 2009.
[8] S. Effati, A. Ghomashi, and A. R. Nazemi, "Application of projection neural network in solving convex programming problems," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1103–1114, 2007.
[9] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
[10] X. Hu and J. Wang, "A recurrent neural network for solving a class of general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 3, pp. 528–539, 2007.
[11] X. Hu and J. Wang, "Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1487–1499, 2006.
[12] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I, vol. 40, no. 9, pp. 613–618, 1993.
[13] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, no. 4, pp. 629–641, 1994.
[14] J. Wang, "Primal and dual assignment networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 784–790, 1997.
[15] J. Wang, "Primal and dual neural networks for shortest-path routing," IEEE Transactions on Systems, Man, and Cybernetics A, vol. 28, no. 6, pp. 864–869, 1998.
[16] W. Bian and X. Xue, "Neural network for solving constrained convex optimization problems with global attractivity," IEEE Transactions on Circuits and Systems I, vol. 60, no. 3, pp. 710–723, 2013.
[17] W. Bian and X. Xue, "A dynamical approach to constrained nonsmooth convex minimization problem coupling with penalty function method in Hilbert space," Numerical Functional Analysis and Optimization, vol. 31, no. 11, pp. 1221–1253, 2010.
[18] Y. Xia and J. Wang, "A one-layer recurrent neural network for support vector machine learning," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 34, no. 2, pp. 1261–1269, 2004.
[19] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
[20] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
[21] S. Qin, W. Bian, and X. Xue, "A new one-layer recurrent neural network for non-smooth pseudoconvex optimization," Neurocomputing, vol. 120, pp. 655–662, 2013.
[22] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, NY, USA, 1983.
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
6 Mathematical Problems in Engineering
For any 120579 isin 1205971198811(119911) there exists 120574 isin 120597119891(119909) 120585 isin 119870[119892
[119889ℎ]](119911)
such that 120579 = 119861minus1
120574 + 120583120585 Since 119891(119909) is pseudoconvexon Ω 120597119891(119909) is pseudomonotone on Ω From the proof ofTheorem 16 for any 119911 isin Ω (119911 minus 119911
lowast
)119879
119861minus1
120574 ge 0 By thedefinition of 119892
[119889ℎ](119911) (119911 minus 119911lowast)119879120585 ge 0 Hence (119911 minus 119911lowast)119879120579 ge 0
This implies that
119889119881 (119911 (119905))
119889119905
le minus inf120579isin1205971198811(119911)
1205792
le 0 (37)
Equation (37) shows that the neural network system (9) isstable in the sense of Lyapunov The proof is complete
4 Numerical Examples
In this section two examples will be given to illustrate theeffectiveness of the proposed approach for solving the pseu-doconvex optimization problem
Example 1 Consider the quadratic fractional optimizationproblem
min 119891 (119909) =
119909119879
119876119909 + 119886119879
119909 + 1198860
119888119879119909 + 1198880
subject to 119889 le 119861119909 le ℎ
(38)
where119876 is an 119899 times 119899matrix 119886 119888 isin 119877119899 and 1198860 1198880isin 119877 Here we
choose 119899 = 3
119876 = (
5 minus1 2
minus1 5 minus1
2 minus1 3
) 119861 = (
2 1 1
11 minus16 11
11 minus92 minus10
)
119886 = (
1
minus2
minus2
) 119888 = (
2
1
minus1
) 119889 = (
minus2
minus1
minus6
)
ℎ = (
2
2
6
) 1198860= minus2 119888
0= 4
(39)
It is easily verified that 119876 is symmetric and positivedefinite and consequently is pseudoconvex onΩ = 119889 le 119861119909 le
ℎ The proposed neural network in (9) is capable of solvingthis problemObviously neural network (9) associated to (38)can be described as
119861 (119905) = minus119861minus1
nabla119891 (119909) minus 120583120585 (119861119909 (119905)) (40)
where
nabla119891 (119909) =
2119876119909 + 119886
119888119879119909 + 1198880
minus
119909119879
119876119909 + 119886119879
119909 + 1198860
(119888119879119909 + 1198880)2
119888
120585 (119861119909 (119905)) isin 119870 [119892[119889ℎ]
] (119861119909 (119905))
(41)
Let 119909 = minus1 minus1 minus1 119904 = 119861119909 isin int(Ω) then we have120596 = 3 Moreover the restricted region [119889 ℎ] sub 119861(119904 119903) with119903 = 2radic3 An upper bound of 120597119891(119909) is estimated as 119897
119891= 3
0 5 10 15
0
1
2
minus1
minus2
t
x1[t]
x2[t]
x3[t]
Figure 2 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (0 0 03)
0 5 10 15
0
1
2
3
minus1
minus2
t
x1[t]
x2[t]
x3[t]
Figure 3 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (minus02 0 04)
Then the designed parameter 120583 is estimated as 120583 gt 1032 Let120583 = 11 in the simulation
We have simulated the dynamical behavior of the neuralnetwork by using the mathematical software when 120583 = 11Figures 2 3 and 4 display the state trajectories of this neuralnetwork with different initial values which shows that thestate variables converge to the feasible region in finite timeThis is in accordance with the conclusion of Theorem 15Meanwhile it can be seen that the trajectory is stable in thesense of Lyapunov
Example 2 Consider the following pseudoconvex optimiza-tion
min 119891 (119909) = (1199091minus 1199093)2
+ (1199092minus 1199093)2
subject to 119889 le 119861119909 le ℎ
(42)
Mathematical Problems in Engineering 7
0 5 10 15 20 25 30
0
1
2
3
minus1
minus2
t
x1[t]
x2[t]
x3[t]
Figure 4 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (01 0 03)
where
119861 = (
1 2 6
0 1 2
2 3 1
) 119889 = (
1
1
3
) ℎ = (
2
4
5
) (43)
In this problem the objective function 119891(119909) is pseudo-convexThus the proposed neural network is suitable for solv-ing the problem in this case Neural network (9) associated to(42) can be described as
119861 (119905) = minus119861minus1
nabla119891 (119909) minus 120583120585 (119861119909 (119905)) (44)
where
nabla119891 (119909) = (
2 (1199091minus 1199093)
2 (1199092minus 1199093)
minus2 (1199091+ 1199092minus 21199093)
)
120585 (119861119909 (119905)) isin 119870 [119892[119889ℎ]
] (119861119909 (119905))
(45)
Let 119909 = 0 1 0 119904 = 119861119909 isin int(Ω) then we have 120596 = 2Moreover the restricted region [119889 ℎ] sub 119861(119904 119903) with 119903 = radic13An upper bound of 120597119891(119909) is estimated as 119897
119891= 4radic6 Then the
designed parameter 120583 is estimated as 120583 gt 294 Let 120583 = 30 inthe simulation
Figures 5 and 6 display the state trajectories of this neuralnetwork with different initial values It can be seen that thesetrajectories converge to the feasible region in finite time aswellThis is in accordance with the conclusion ofTheorem 15It can be verified that the trajectory is stable in the sense ofLyapunov
5 Conclusion
In this paper a one-layer recurrent neural network hasbeen presented for solving pseudoconvex optimization withbox constraint The neural network system model has beendescribed with a differential inclusion system The con-structed recurrent neural network has been proved to bestable in the sense of LyapunovThe conditions which ensure
00 01 02 03 04 05
0
5
10
15
20
minus5
minus10
minus15
minus01
t
x1[t]
x2[t]
x3[t]
x1 x2 x3(t)
Figure 5 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (54 65 minus15)
00 01 02 03 04 05
0
5
10
15
minus5
minus10
minus15
minus01
t
x1[t]
x2[t]
x3[t]
x1 x2 x3(t)
Figure 6 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (minus163 196 minus23)
the finite time state convergence to the feasible region havebeen obtained The proposed neural network can be used ina wide variety to solve a lot of optimization problem in theengineering application
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This work was supported by the Natural Science Foundationof Hebei Province of China (A2011203103) and the HebeiProvince Education Foundation of China (2009157)
8 Mathematical Problems in Engineering
References
[1] Q Liu Z Guo and J Wang ldquoA one-layer recurrent neural net-work for constrained pseudoconvex optimization and its appli-cation for dynamic portfolio optimizationrdquo Neural Networksvol 26 pp 99ndash109 2012
[2] W Lu andTChen ldquoDynamical behaviors of delayed neural net-work systems with discontinuous activation functionsrdquo NeuralComputation vol 18 no 3 pp 683ndash708 2006
[3] Y Xia and J Wang ldquoA recurrent neural network for solvingnonlinear convex programs subject to linear constraintsrdquo IEEETransactions on Neural Networks vol 16 no 2 pp 379ndash3862005
[4] X-B Liang and J Wang ldquoA recurrent neural network for non-linear optimization with a continuously differentiable objectivefunction and bound constraintsrdquo IEEE Transactions on NeuralNetworks vol 11 no 6 pp 1251ndash1262 2000
[5] X Xue and W Bian ldquoA project neural network for solving de-generate convex quadratic programrdquo Neurocomputing vol 70no 13ndash15 pp 2449ndash2459 2007
[6] Y Xia H Leung and J Wang ldquoA projection neural networkand its application to constrained optimization problemsrdquo IEEETransactions on Circuits and Systems I vol 49 no 4 pp 447ndash458 2002
[7] S Qin and X Xue ldquoGlobal exponential stability and global con-vergence in finite time of neural networks with discontinuousactivationsrdquoNeural Processing Letters vol 29 no 3 pp 189ndash2042009
[8] S Effati A Ghomashi and A R Nazemi ldquoApplication ofprojection neural network in solving convex programmingproblemsrdquo Applied Mathematics and Computation vol 188 no2 pp 1103ndash1114 2007
[9] X Xue and W Bian ldquoSubgradient-based neural networks fornonsmooth convex optimization problemsrdquo IEEE Transactionson Circuits and Systems I vol 55 no 8 pp 2378ndash2391 2008
[10] X Hu and J Wang ldquoA recurrent neural network for solving aclass of general variational inequalitiesrdquo IEEE Transactions onSystemsMan andCybernetics B vol 37 no 3 pp 528ndash539 2007
[11] X Hu and J Wang ldquoSolving pseudomonotone variationalinequalities and pseudoconvex optimization problems usingthe projection neural networkrdquo IEEE Transactions on NeuralNetworks vol 17 no 6 pp 1487ndash1499 2006
[12] J Wang ldquoAnalysis and design of a recurrent neural networkfor linear programmingrdquo IEEE Transactions on Circuits andSystems I vol 40 no 9 pp 613ndash618 1993
[13] J Wang ldquoA deterministic annealing neural network for convexprogrammingrdquoNeural Networks vol 7 no 4 pp 629ndash641 1994
[14] J Wang ldquoPrimal and dual assignment networksrdquo IEEE Trans-actions on Neural Networks vol 8 no 3 pp 784ndash790 1997
[15] J Wang ldquoPrimal and dual neural networks for shortest-pathroutingrdquo IEEE Transactions on Systems Man and CyberneticsA vol 28 no 6 pp 864ndash869 1998
[16] W Bian and X Xue ldquoNeural network for solving constrainedconvex optimization problems with global attractivityrdquo IEEETransactions on Circuits and Systems I vol 60 no 3 pp 710ndash723 2013
[17] W Bian andX Xue ldquoA dynamical approach to constrained non-smooth convex minimization problem coupling with penaltyfunction method in Hilbert spacerdquo Numerical Functional Anal-ysis and Optimization vol 31 no 11 pp 1221ndash1253 2010
[18] Y Xia and J Wang ldquoA one-layer recurrent neural networkfor support vector machine learningrdquo IEEE Transactions onSystems Man and Cybernetics B vol 34 no 2 pp 1261ndash12692004
[19] X Xue and W Bian ldquoSubgradient-based neural networks fornonsmooth convex optimization problemsrdquo IEEE Transactionson Circuits and Systems I vol 55 no 8 pp 2378ndash2391 2008
[20] Y Xia H Leung and J Wang ldquoA projection neural networkand its application to constrained optimization problemsrdquo IEEETransactions on Circuits and Systems I vol 49 no 4 pp 447ndash458 2002
[21] S Qin W Bian and X Xue ldquoA new one-layer recurrentneural network for non-smooth pseudoconvex optimizationrdquoNeurocomputing vol 120 pp 655ndash662 2013
[22] F H ClarkeOptimization andNonsmooth AnalysisWiley NewYork NY USA 1983
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Mathematical Problems in Engineering 7
0 5 10 15 20 25 30
0
1
2
3
minus1
minus2
t
x1[t]
x2[t]
x3[t]
Figure 4 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (01 0 03)
where
119861 = (
1 2 6
0 1 2
2 3 1
) 119889 = (
1
1
3
) ℎ = (
2
4
5
) (43)
In this problem the objective function 119891(119909) is pseudo-convexThus the proposed neural network is suitable for solv-ing the problem in this case Neural network (9) associated to(42) can be described as
119861 (119905) = minus119861minus1
nabla119891 (119909) minus 120583120585 (119861119909 (119905)) (44)
where
nabla119891 (119909) = (
2 (1199091minus 1199093)
2 (1199092minus 1199093)
minus2 (1199091+ 1199092minus 21199093)
)
120585 (119861119909 (119905)) isin 119870 [119892[119889ℎ]
] (119861119909 (119905))
(45)
Let 119909 = 0 1 0 119904 = 119861119909 isin int(Ω) then we have 120596 = 2Moreover the restricted region [119889 ℎ] sub 119861(119904 119903) with 119903 = radic13An upper bound of 120597119891(119909) is estimated as 119897
119891= 4radic6 Then the
designed parameter 120583 is estimated as 120583 gt 294 Let 120583 = 30 inthe simulation
Figures 5 and 6 display the state trajectories of this neuralnetwork with different initial values It can be seen that thesetrajectories converge to the feasible region in finite time aswellThis is in accordance with the conclusion ofTheorem 15It can be verified that the trajectory is stable in the sense ofLyapunov
5 Conclusion
In this paper a one-layer recurrent neural network hasbeen presented for solving pseudoconvex optimization withbox constraint The neural network system model has beendescribed with a differential inclusion system The con-structed recurrent neural network has been proved to bestable in the sense of LyapunovThe conditions which ensure
00 01 02 03 04 05
0
5
10
15
20
minus5
minus10
minus15
minus01
t
x1[t]
x2[t]
x3[t]
x1 x2 x3(t)
Figure 5 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (54 65 minus15)
00 01 02 03 04 05
0
5
10
15
minus5
minus10
minus15
minus01
t
x1[t]
x2[t]
x3[t]
x1 x2 x3(t)
Figure 6 Time-domain behavior of the state variables 1199091 1199092 and
1199093with initial point 119909
0= (minus163 196 minus23)
the finite time state convergence to the feasible region havebeen obtained The proposed neural network can be used ina wide variety to solve a lot of optimization problem in theengineering application
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
This work was supported by the Natural Science Foundationof Hebei Province of China (A2011203103) and the HebeiProvince Education Foundation of China (2009157)
8 Mathematical Problems in Engineering
References
[1] Q. Liu, Z. Guo, and J. Wang, "A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization," Neural Networks, vol. 26, pp. 99–109, 2012.
[2] W. Lu and T. Chen, "Dynamical behaviors of delayed neural network systems with discontinuous activation functions," Neural Computation, vol. 18, no. 3, pp. 683–708, 2006.
[3] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
[4] X.-B. Liang and J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints," IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1251–1262, 2000.
[5] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
[6] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
[7] S. Qin and X. Xue, "Global exponential stability and global convergence in finite time of neural networks with discontinuous activations," Neural Processing Letters, vol. 29, no. 3, pp. 189–204, 2009.
[8] S. Effati, A. Ghomashi, and A. R. Nazemi, "Application of projection neural network in solving convex programming problems," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1103–1114, 2007.
[9] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
[10] X. Hu and J. Wang, "A recurrent neural network for solving a class of general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 3, pp. 528–539, 2007.
[11] X. Hu and J. Wang, "Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1487–1499, 2006.
[12] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I, vol. 40, no. 9, pp. 613–618, 1993.
[13] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, no. 4, pp. 629–641, 1994.
[14] J. Wang, "Primal and dual assignment networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 784–790, 1997.
[15] J. Wang, "Primal and dual neural networks for shortest-path routing," IEEE Transactions on Systems, Man, and Cybernetics A, vol. 28, no. 6, pp. 864–869, 1998.
[16] W. Bian and X. Xue, "Neural network for solving constrained convex optimization problems with global attractivity," IEEE Transactions on Circuits and Systems I, vol. 60, no. 3, pp. 710–723, 2013.
[17] W. Bian and X. Xue, "A dynamical approach to constrained nonsmooth convex minimization problem coupling with penalty function method in Hilbert space," Numerical Functional Analysis and Optimization, vol. 31, no. 11, pp. 1221–1253, 2010.
[18] Y. Xia and J. Wang, "A one-layer recurrent neural network for support vector machine learning," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 34, no. 2, pp. 1261–1269, 2004.
[19] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
[20] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
[21] S. Qin, W. Bian, and X. Xue, "A new one-layer recurrent neural network for non-smooth pseudoconvex optimization," Neurocomputing, vol. 120, pp. 655–662, 2013.
[22] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, NY, USA, 1983.