Research Article
Extension of Modified Polak-Ribière-Polyak Conjugate Gradient Method to Linear Equality Constraints Minimization Problems

Zhifeng Dai^{1,2}

^1 College of Business, Central South University, Hunan 410083, China
^2 College of Mathematics and Computational Science, Changsha University of Science and Technology, Hunan 410114, China

Correspondence should be addressed to Zhifeng Dai; zhifengdai823@163.com

Received 19 March 2014; Revised 1 July 2014; Accepted 21 July 2014; Published 6 August 2014

Academic Editor: Daniele Bertaccini

Copyright © 2014 Zhifeng Dai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Combining the Rosen gradient projection method with the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method, we propose a two-term PRP conjugate gradient projection method for solving linear equality constrained optimization problems. The proposed method possesses two attractive properties: (1) the search direction it generates is a feasible descent direction, so the iterates remain feasible; (2) the sequence of function values is decreasing. Under some mild conditions, we show that the method is globally convergent with an Armijo-type line search. Preliminary numerical results show that the proposed method is promising.
1. Introduction
In this paper, we consider the following linear equality constrained optimization problem:

$$\min\; f(x) \quad \text{s.t.}\; Ax = b, \tag{1}$$

where $f:\mathbb{R}^n \to \mathbb{R}$ is a smooth function and $A$ is an $m \times n$ matrix of rank $m$ ($m \le n$). The feasible region $D$ and the feasible direction set $S$ are defined, respectively, as

$$D = \{x \in \mathbb{R}^n \mid Ax = b\}, \qquad S = \{d \in \mathbb{R}^n \mid Ad = 0\}. \tag{2}$$
Taking the negative gradient as a search direction ($d_k = -\nabla f(x_k)$) is a natural way of solving unconstrained optimization problems. However, this approach does not work for constrained problems, since the gradient may not be a feasible direction. A basic technique to overcome this difficulty was initiated by Rosen [1] in 1960. To obtain a feasible search direction, Rosen projected the gradient onto the feasible region; that is,

$$d = -P\nabla f(x_k), \qquad P = I - A^T(AA^T)^{-1}A. \tag{3}$$

The convergence of Rosen's gradient projection method was proved by Du; see [2–4].
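For readers implementing (3), a minimal numerical sketch (the constraint matrix $A$ and gradient vector $g$ below are made-up illustrative data, not from the paper):

```python
import numpy as np

# Rosen's projected gradient (3): d = -P grad f(x), P = I - A^T (A A^T)^{-1} A.
# A and g are made-up illustrative data.
A = np.array([[1.0, 2.0, 3.0]])      # one equality constraint, n = 3
g = np.array([4.0, -1.0, 2.0])       # stand-in for grad f(x_k)

P = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A
d = -P @ g

print(np.allclose(A @ d, 0.0))       # True: d lies in S = {d : Ad = 0}
print(g @ d <= 0.0)                  # True: d is a descent direction
```

Forming $P$ explicitly is fine for small $n$; Section 4.1 describes a normal-equations alternative for larger problems.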
In fact, Rosen's gradient projection method is an extension of the steepest descent method. It is well known that the steepest descent method is prone to zig-zagging, especially when the graph of $f(x)$ has an "elongated" form. To overcome the zig-zagging, we use the conjugate gradient method to modify the projection direction.
It is well known that nonlinear conjugate gradient methods such as the Polak-Ribière-Polyak (PRP) method [5, 6] are very efficient for large-scale unconstrained optimization problems due to their simplicity and low storage. However, the PRP method does not necessarily satisfy the descent condition $g_k^T d_k \le -c\|g_k\|^2$, $c > 0$.
Recently, Cheng [7] proposed a two-term modified PRP method (called TMPRP), in which the direction $d_k$ is given by

$$d_k = \begin{cases} -g_k, & \text{if } k = 0,\\[2pt] -g_k + \beta_k^{\mathrm{PRP}}\Bigl(I - \dfrac{g_k g_k^T}{\|g_k\|^2}\Bigr) d_{k-1}, & \text{if } k \ge 1. \end{cases} \tag{4}$$
Hindawi Publishing Corporation, Abstract and Applied Analysis, Volume 2014, Article ID 921364, 9 pages. http://dx.doi.org/10.1155/2014/921364
An attractive property of the TMPRP method is that the direction generated by the method satisfies

$$g_k^T d_k = -\|g_k\|^2, \tag{5}$$

which is independent of any line search. The numerical results presented in Cheng [7] show some potential advantage of the TMPRP method. In fact, we can easily rewrite the direction $d_k$ in (4) in the three-term form

$$d_k = \begin{cases} -g_k, & \text{if } k = 0,\\[2pt] -g_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \beta_k^{\mathrm{PRP}} \dfrac{g_k^T d_{k-1}}{\|g_k\|^2} g_k, & \text{if } k \ge 1. \end{cases} \tag{6}$$
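Identity (5) and the equivalence of (4) and (6) are purely algebraic, so they can be spot-checked numerically; the sketch below uses random made-up data and an arbitrary value standing in for $\beta_k^{\mathrm{PRP}}$:

```python
import numpy as np

# Spot-check of property (5): the TMPRP direction satisfies g^T d = -||g||^2
# for any beta and any previous direction (random made-up data).
rng = np.random.default_rng(0)
g, d_prev = rng.standard_normal(5), rng.standard_normal(5)
beta = 0.7                                   # arbitrary stand-in for beta^PRP

d_two_term = -g + beta * (np.eye(5) - np.outer(g, g) / (g @ g)) @ d_prev  # (4)
d_three_term = -g + beta * d_prev - beta * (g @ d_prev) / (g @ g) * g     # (6)

print(np.allclose(d_two_term, d_three_term))   # True: (4) and (6) agree
print(np.isclose(g @ d_two_term, -(g @ g)))    # True: property (5)
```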
In the past few years, researchers have paid increasing attention to conjugate gradient methods and their applications. Among others, we mention here the works [8–29].
Some researchers have also paid attention to equality constrained problems. Martínez et al. [30] proposed a spectral gradient method for linearly constrained optimization, in which the search direction $d_k \in \mathbb{R}^n$ is the unique solution of

$$\min\; \frac{1}{2} d^T B_k d + \nabla f(x)^T d \quad \text{s.t.}\; Ad = 0. \tag{7}$$

In this algorithm, $B_k$ can be computed by a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation, and the whole process is combined with a nonmonotone line search strategy.
C. Li and D. H. Li [31] proposed a feasible Fletcher-Reeves conjugate gradient method for solving linear equality constrained optimization problems with exact line search. Their idea is to use the original Fletcher-Reeves conjugate gradient method to modify the Zoutendijk direction. The Zoutendijk direction is the feasible steepest descent direction; it is a solution of the problem

$$\min\; \nabla f(x)^T d \quad \text{s.t.}\; Ad = 0,\; \|d\| \le 1. \tag{8}$$
Li et al. [32] also extended the modified Fletcher-Reeves conjugate gradient method of Zhang et al. [33] to linear equality constrained optimization problems, combining it with the Zoutendijk feasible direction method. Under some mild conditions, Li et al. [32] showed that the proposed method with Armijo-type line search is globally convergent.

In this paper, we extend the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] to the linear equality constrained optimization problem (1), combining it with the Rosen gradient projection method [1]. Under some mild conditions, we show that the resulting method is globally convergent with an Armijo-type line search.
The rest of this paper is organized as follows. In Section 2, we propose the algorithm and prove that its search direction is a feasible descent direction. In Section 3, we prove the global convergence of the proposed method. In Section 4, we give some improvements of the algorithm. In Section 5, we report some numerical results for the proposed method.
2. Algorithm and the Feasible Descent Direction

In this section, we propose a two-term Polak-Ribière-Polyak conjugate gradient projection method for solving the linear equality constrained optimization problem (1). The proposed method is a combination of the well-known Rosen gradient projection method and the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7].
The iterative process of the proposed method is given by

$$x_{k+1} = x_k + \alpha_k d_k, \tag{9}$$

where the search direction $d_k$ is defined by

$$d_k = \begin{cases} -Pg_k, & \text{if } k = 0,\\[2pt] -Pg_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \beta_k^{\mathrm{PRP}} \dfrac{g_k^T d_{k-1}}{\|Pg_k\|^2} Pg_k, & \text{if } k \ge 1, \end{cases} \tag{10}$$

with

$$P = I - A^T(AA^T)^{-1}A, \quad g_k = \nabla f(x_k), \quad y_{k-1} = g_k - g_{k-1}, \quad \beta_k^{\mathrm{PRP}} = \frac{(Pg_k)^T y_{k-1}}{\|Pg_{k-1}\|^2}, \tag{11}$$

and $\alpha_k > 0$ is a steplength obtained by a line search.
For convenience, we call the method defined by (9) and (10) the EMPRP method. We now prove that the direction $d_k$ defined by (10) and (11) is a feasible descent direction of $f$ at $x_k$.

Theorem 1. Suppose that $x_k \in D$, $P = I - A^T(AA^T)^{-1}A$, and $d_k$ is defined by (10). If $d_k \ne 0$, then $d_k$ is a feasible descent direction of $f$ at $x_k$.
Proof. From (10) and the definition of $P$, we have

$$\begin{aligned} g_k^T d_k &= g_k^T\Bigl(-Pg_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \beta_k^{\mathrm{PRP}} \frac{g_k^T d_{k-1}}{\|Pg_k\|^2} Pg_k\Bigr)\\ &= -\|Pg_k\|^2 + \beta_k^{\mathrm{PRP}} g_k^T d_{k-1} - \beta_k^{\mathrm{PRP}} \frac{g_k^T d_{k-1}}{\|Pg_k\|^2}\,\|Pg_k\|^2\\ &= -\|Pg_k\|^2. \end{aligned} \tag{12}$$

This implies that $d_k$ is a descent direction of $f$ at $x_k$.

In what follows, we show that $d_k$ is a feasible direction at $x_k$. From the definition of $P$, we have

$$A(Pg_k) = A\bigl(I - A^T(AA^T)^{-1}A\bigr)g_k = 0. \tag{13}$$
It follows from (10) and (13) that

$$Ad_k = A\Bigl(-Pg_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \beta_k^{\mathrm{PRP}} \frac{g_k^T d_{k-1}}{\|Pg_k\|^2} Pg_k\Bigr) = \beta_k^{\mathrm{PRP}} A d_{k-1}. \tag{14}$$

When $k = 0$, we have

$$Ad_0 = -APg_0 = 0. \tag{15}$$

It follows from (14) and (15) by induction that $Ad_k = 0$ for all $k$; that is, $d_k$ is a feasible direction.
In the remainder of this paper, we always assume that $f(x)$ satisfies the following assumptions.

Assumption A. (i) The level set

$$\Omega = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\} \tag{16}$$

is bounded.

(ii) The function $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and bounded from below, and its gradient is Lipschitz continuous on an open ball $N$ containing $\Omega$; that is, there is a constant $L > 0$ such that

$$\|g(x) - g(y)\| \le L\|x - y\|, \quad \forall x, y \in N. \tag{17}$$
Since $\{f(x_k)\}$ is decreasing, it is clear that the sequence $\{x_k\}$ generated by Algorithm 2 is contained in $\Omega$. In addition, we get from Assumption A that there is a constant $\gamma > 0$ such that

$$\|g(x)\| \le \gamma, \quad \forall x \in \Omega. \tag{18}$$

Since the matrix $P$ is a projection matrix, it is reasonable to assume that there is a constant $C > 0$ such that

$$\|Pg(x)\| \le C, \quad \forall x \in \Omega. \tag{19}$$
We state the steps of the algorithm as follows.

Algorithm 2 (EMPRP method with Armijo-type line search)

Step 0. Choose an initial point $x_0 \in D$ and $\varepsilon > 0$. Let $k = 0$.

Step 1. Compute $d_k$ by (10), where $\beta_k$ is computed by (11).

Step 2. If $\|Pg_k\| \le \varepsilon$, stop; else go to Step 3.

Step 3. Given $\delta \in (0, 1/2)$ and $\rho \in (0, 1)$, determine a stepsize $\alpha_k = \max\{\rho^j \mid j = 0, 1, 2, \ldots\}$ satisfying

$$f(x_k + \alpha_k d_k) \le f(x_k) - \delta \alpha_k^2 \|d_k\|^2. \tag{20}$$

Step 4. Let $x_{k+1} = x_k + \alpha_k d_k$ and $k = k + 1$. Go to Step 1.
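As an illustration, the steps above can be sketched in Python (the paper's experiments used Matlab; this NumPy version is our own minimal reading of Algorithm 2, not the author's code), applied to problem HS28 from Section 5:

```python
import numpy as np

def emprp(f, grad, A, x0, eps=1e-5, delta=0.02, rho=0.3, max_iter=10**5):
    """Minimal sketch of Algorithm 2 (EMPRP); x0 must satisfy A @ x0 = b."""
    n = A.shape[1]
    P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)   # projection of (3)
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    Pg = P @ g
    d = -Pg                                             # d_0 of (10)
    for _ in range(max_iter):
        if np.linalg.norm(Pg) <= eps:                   # Step 2
            break
        alpha, fx = 1.0, f(x)                           # Step 3: condition (20)
        while f(x + alpha * d) > fx - delta * alpha**2 * (d @ d):
            alpha *= rho
        x = x + alpha * d                               # Step 4
        g_new = grad(x)
        Pg_new = P @ g_new
        if np.linalg.norm(Pg_new) <= eps:
            break
        beta = (Pg_new @ (g_new - g)) / (Pg @ Pg)       # beta^PRP of (11)
        d = (-Pg_new + beta * d                         # direction (10)
             - beta * (g_new @ d) / (Pg_new @ Pg_new) * Pg_new)
        g, Pg = g_new, Pg_new
    return x

# Problem HS28: min (x1+x2)^2 + (x2+x3)^2  s.t.  x1 + 2 x2 + 3 x3 = 1
f = lambda x: (x[0] + x[1])**2 + (x[1] + x[2])**2
grad = lambda x: np.array([2 * (x[0] + x[1]),
                           2 * (x[0] + x[1]) + 2 * (x[1] + x[2]),
                           2 * (x[1] + x[2])])
A = np.array([[1.0, 2.0, 3.0]])
x = emprp(f, grad, A, [-4.0, 1.0, 1.0])
print(x)   # converges toward the optimum (0.5, -0.5, 0.5)
```

Note that this sketch seeds the backtracking with $\alpha = 1$; Section 4.2 proposes a better initial stepsize.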
3. Global Convergence

In what follows, we establish the global convergence of the EMPRP method for general nonlinear objective functions. We first give some important lemmas for the EMPRP method.
Lemma 3. Suppose that $x_k \in D$, $P = I - A^T(AA^T)^{-1}A$, and $d_k$ is defined by (10) and (11). Then

$$(Pg_k)^T d_k = g_k^T d_k = -\|Pg_k\|^2, \qquad (Pg_{k+1})^T d_k = g_{k+1}^T d_k.$$

Proof. By the definition of $d_k$ and $Ad_k = 0$, we have

$$(Pg_k)^T d_k = g_k^T\bigl(I - A^T(AA^T)^{-1}A\bigr) d_k = g_k^T d_k - g_k^T A^T(AA^T)^{-1} A d_k = g_k^T d_k. \tag{21}$$

It follows from (12) and (21) that

$$(Pg_k)^T d_k = g_k^T d_k = -\|Pg_k\|^2. \tag{22}$$

On the other hand, we also have

$$(Pg_{k+1})^T d_k = g_{k+1}^T\bigl(I - A^T(AA^T)^{-1}A\bigr) d_k = g_{k+1}^T d_k - g_{k+1}^T A^T(AA^T)^{-1} A d_k = g_{k+1}^T d_k. \tag{23}$$
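Lemma 3 rests on $Ad_k = 0$ together with the symmetry and idempotence of $P$; a quick numerical spot-check with random made-up data:

```python
import numpy as np

# For any d with A d = 0, (P g)^T d = g^T d, since P is the orthogonal
# projector onto null(A) (symmetric and idempotent).  Random made-up data.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))
P = np.eye(5) - A.T @ np.linalg.solve(A @ A.T, A)
d = P @ rng.standard_normal(5)          # a feasible direction: A d = 0
g = rng.standard_normal(5)

print(np.allclose(A @ d, 0.0))          # True
print(np.isclose((P @ g) @ d, g @ d))   # True: the identity of Lemma 3
```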
Lemma 4. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. If there exists a constant $\varepsilon > 0$ such that

$$\|Pg_k\| \ge \varepsilon, \quad \forall k, \tag{24}$$

then there exists a constant $M > 0$ such that

$$\|d_k\| \le M, \quad \forall k. \tag{25}$$

Proof. It follows from (20) that the function value sequence $\{f(x_k)\}$ is decreasing. We also have from (20) that

$$\sum_{k=1}^{\infty} \delta \alpha_k^2 \|d_k\|^2 < +\infty, \tag{26}$$

as $f$ is bounded from below. In particular, we have

$$\lim_{k \to \infty} \alpha_k \|d_k\| = 0. \tag{27}$$

By the definition of $d_k$, we get from (17), (19), and (24) that

$$\begin{aligned} \|d_k\| &\le \|Pg_k\| + \frac{\|Pg_k\|\,\|y_{k-1}\|\,\|d_{k-1}\|}{\|Pg_{k-1}\|^2} + \frac{\|Pg_k\|\,\|y_{k-1}\|}{\|Pg_{k-1}\|^2} \cdot \frac{\|g_k\|\,\|d_{k-1}\|}{\|Pg_k\|^2}\,\|Pg_k\|\\ &\le C + \frac{CL\alpha_{k-1}\|d_{k-1}\|}{\varepsilon^2}\,\|d_{k-1}\| + \frac{\gamma L \alpha_{k-1}\|d_{k-1}\|}{\varepsilon^2}\,\|d_{k-1}\|\\ &\le C + \Bigl(\frac{CL}{\varepsilon^2} + \frac{\gamma L}{\varepsilon^2}\Bigr)\alpha_{k-1}\|d_{k-1}\|^2. \end{aligned} \tag{28}$$

Since $\alpha_{k-1}\|d_{k-1}\| \to 0$ as $k \to \infty$, there exist a constant $r \in (0, 1)$ and an integer $k_0$ such that the following inequality holds for all $k \ge k_0$:

$$\Bigl(\frac{CL}{\varepsilon^2} + \frac{\gamma L}{\varepsilon^2}\Bigr)\alpha_{k-1}\|d_{k-1}\| \le r. \tag{29}$$
Hence, for any $k > k_0$,

$$\|d_k\| \le C + r\|d_{k-1}\| \le C\bigl(1 + r + r^2 + \cdots + r^{k-k_0-1}\bigr) + r^{k-k_0}\|d_{k_0}\| \le \frac{C}{1-r} + \|d_{k_0}\|. \tag{30}$$

Letting $M = \max\{\|d_1\|, \ldots, \|d_{k_0}\|,\; C/(1-r) + \|d_{k_0}\|\}$, we obtain (25).
We now establish the global convergence of the EMPRP method for general nonlinear objective functions.

Theorem 5. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. Then

$$\liminf_{k \to \infty} \|Pg_k\| = 0. \tag{31}$$

Proof. Suppose, to the contrary, that $\liminf_{k \to \infty} \|Pg_k\| \ne 0$. Then there exists a constant $\varepsilon > 0$ such that

$$\|Pg_k\| > \varepsilon, \quad \forall k \ge 0. \tag{32}$$

We now prove (31) by considering the following two cases.

Case (i): $\liminf_{k \to \infty} \alpha_k > 0$. We get from (22) and (27) that $\liminf_{k \to \infty} \|Pg_k\| = 0$. This contradicts assumption (32).

Case (ii): $\liminf_{k \to \infty} \alpha_k = 0$. Then there is an infinite index set $K$ such that

$$\lim_{k \in K,\, k \to \infty} \alpha_k = 0. \tag{33}$$

When $k \in K$ is sufficiently large, by the line search condition, $\rho^{-1}\alpha_k$ does not satisfy inequality (20). This means

$$f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) > -\delta \rho^{-2}\alpha_k^2 \|d_k\|^2. \tag{34}$$

By the mean value theorem and inequality (17), there is a $t_k \in (0, 1)$ such that $x_k + t_k \rho^{-1}\alpha_k d_k \in N$ and

$$\begin{aligned} f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) &= \rho^{-1}\alpha_k\, g\bigl(x_k + t_k \rho^{-1}\alpha_k d_k\bigr)^T d_k\\ &= \rho^{-1}\alpha_k g_k^T d_k + \rho^{-1}\alpha_k \bigl(g(x_k + t_k \rho^{-1}\alpha_k d_k) - g_k\bigr)^T d_k\\ &\le \rho^{-1}\alpha_k g_k^T d_k + \rho^{-1}\alpha_k \bigl\|g(x_k + t_k \rho^{-1}\alpha_k d_k) - g_k\bigr\|\,\|d_k\|\\ &\le \rho^{-1}\alpha_k g_k^T d_k + L\rho^{-2}\alpha_k^2 \|d_k\|^2. \end{aligned} \tag{35}$$

Substituting the last inequality into (34), for all $k \in K$ sufficiently large we have from (12) that

$$\|Pg_k\|^2 = -g_k^T d_k \le \rho^{-1}(L + \delta)\alpha_k \|d_k\|^2. \tag{36}$$

Since $\{\|d_k\|\}$ is bounded and $\lim_{k \in K,\, k \to \infty} \alpha_k = 0$, the last inequality implies

$$\lim_{k \in K,\, k \to \infty} \|Pg_k\| = 0. \tag{37}$$

This also yields a contradiction. The proof is then complete.
4. Improvement of Algorithm 2

In this section, we propose techniques for improving the efficiency of Algorithm 2 in practical computation, concerning the computation of projections and the stepsize of the Armijo-type line search.

4.1. Computing the Projection. In this paper, as in Gould et al. [34], instead of computing a basis for the null space of the matrix $A$, we work directly with the matrix of constraint gradients, computing projections by normal equations. As the computation of the projection is a key step in the proposed method, following Gould et al. [34], the projection can be computed in the following alternative way.
Let

$$g^+ = -Pg, \qquad P = I - A^T(AA^T)^{-1}A. \tag{38}$$

We can express this as

$$g^+ = -g + A^T v, \tag{39}$$

where $v$ is the solution of

$$AA^T v = Ag. \tag{40}$$

Note that (40) is a normal equation. Since $A$ is an $m \times n$ matrix of rank $m$ ($m \le n$), $AA^T$ is a symmetric positive definite matrix. We use the Doolittle (LU) factorization of $AA^T$ to solve (40).

Since the matrix $A$ is constant, the factorization of $AA^T$ needs to be carried out only once, at the beginning of the iterative process. Using a Doolittle factorization $AA^T = LU$, (40) can be solved in the form

$$L(Uv) = Ag \iff Ly = Ag, \quad Uv = y, \tag{41}$$

where $L$, $U$ are the Doolittle factors of $AA^T$.
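A sketch of (38)–(41) in code (made-up $A$ and $g$; SciPy's pivoted LU routines stand in for the Doolittle factorization, computed once and reused):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Projection by normal equations: g+ = -g + A^T v with (A A^T) v = A g, (39)-(40).
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0, 3.0]])     # made-up m = 2, n = 4, rank 2
lu, piv = lu_factor(A @ A.T)             # factor A A^T once, before iterating

def projected_neg_gradient(g):
    v = lu_solve((lu, piv), A @ g)       # solve the normal equation (40)
    return -g + A.T @ v                  # (39); equals -P g

g = np.array([1.0, -2.0, 0.5, 3.0])
g_plus = projected_neg_gradient(g)
print(np.allclose(A @ g_plus, 0.0))      # True: the result is a feasible direction
```

Since $AA^T$ is symmetric positive definite, a Cholesky factorization would serve equally well here.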
4.2. Computing the Stepsize. A drawback of the Armijo line search is the choice of the initial stepsize $\alpha_k$. If $\alpha_k$ is too large, then the procedure needs many more function evaluations; if $\alpha_k$ is too small, then the efficiency of the algorithm decreases. Therefore, we should choose an adequate initial stepsize $\alpha_k$ at each iteration. In what follows, we propose a way to generate the initial stepsize.

We first estimate the stepsize determined by the exact line search. Suppose for the moment that $f$ is twice continuously differentiable. We denote by $G(x)$ the Hessian of $f$ at $x$ and abbreviate $G(x_k)$ as $G_k$. Notice that the exact line search stepsize $\alpha_k$ satisfies

$$-g_k^T d_k = \bigl(g(x_k + \alpha_k d_k) - g_k\bigr)^T d_k \approx \alpha_k d_k^T G_k d_k. \tag{42}$$

This shows that the scalar $\delta_k \triangleq -g_k^T d_k / (d_k^T G_k d_k)$ is an estimate of $\alpha_k$. To avoid the computation of second derivatives, we further estimate $\alpha_k$ by

$$\gamma_k = \frac{-\epsilon_k\, g_k^T d_k}{d_k^T\bigl(g(x_k + \epsilon_k d_k) - g_k\bigr)}, \tag{43}$$

where the positive sequence $\{\epsilon_k\}$ satisfies $\epsilon_k \to 0$ as $k \to \infty$. We let the initial stepsize of the Armijo line search be an approximation to

$$\delta_k \triangleq \frac{-g_k^T d_k}{d_k^T G_k d_k} \approx \frac{-\epsilon_k\, g_k^T d_k}{d_k^T\bigl(g(x_k + \epsilon_k d_k) - g_k\bigr)} \triangleq \gamma_k. \tag{44}$$

It is not difficult to see that if $\epsilon_k$ and $\|d_k\|$ are sufficiently small, then $\delta_k$ and $\gamma_k$ are good estimates of $\alpha_k$.
To improve the efficiency of the EMPRP method in practical computation, we therefore utilize the following line search process.

Line Search Process. If the inequality

$$f(x_k + |\gamma_k| d_k) \le f(x_k) - \delta |\gamma_k|^2 \|d_k\|^2 \tag{45}$$

holds, then we let $\alpha_k = |\gamma_k|$. Otherwise, we let $\alpha_k$ be the largest scalar in the set $\{|\gamma_k|\rho^i,\ i = 0, 1, \ldots\}$ such that inequality (45) is satisfied.
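A sketch of this line search process ((43) together with (45)); the quadratic test function below is made up to illustrate that $\gamma_k$ recovers the exact stepsize in the quadratic case:

```python
import numpy as np

def armijo_with_initial_stepsize(f, grad, x, d, eps_k=1e-6, delta=0.02, rho=0.3):
    """Backtracking from the finite-difference estimate gamma_k of (43)."""
    g = grad(x)
    denom = d @ (grad(x + eps_k * d) - g)      # approximates eps_k * d^T G_k d
    gamma = -eps_k * (g @ d) / denom           # gamma_k of (43)
    alpha = abs(gamma)
    while f(x + alpha * d) > f(x) - delta * alpha**2 * (d @ d):   # condition (45)
        alpha *= rho
    return alpha

# Made-up quadratic f(x) = 0.5 x^T Q x: gamma_k equals the exact stepsize.
Q = np.diag([1.0, 4.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
x = np.array([1.0, 1.0])
d = -grad(x)
alpha = armijo_with_initial_stepsize(f, grad, x, d)
print(np.isclose(alpha, 17.0 / 65.0))          # True: the exact minimizer along d
```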
5. Numerical Experiments

This section reports some numerical experiments. First, we test the EMPRP method and compare it with the Rosen gradient projection method [1] on low dimensional problems. Second, we test the EMPRP method and compare it with the spectral gradient method of Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method of Li et al. [32] on high dimensional problems. In the line search process, we set $\epsilon_k = 10^{-6}$, $\rho = 0.3$, and $\delta = 0.02$.

The methods in the tables have the following meanings:

(i) "EMPRP" stands for the EMPRP method with the Armijo-type line search (20).
(ii) "ROSE" stands for the Rosen gradient projection method [1] with the Armijo-type line search (20); that is, in Algorithm 2 the direction is $d_k = -Pg_k$.
(iii) "SPCG" stands for the spectral gradient method with the nonmonotone line search of Martínez et al. [30], where $M = 10$, $P = 2$.
(iv) "FFR" stands for the feasible modified Fletcher-Reeves conjugate gradient method with the Armijo-type line search of Li et al. [32].

We stop the iteration if the condition $\|Pg_k\| \le \varepsilon$ is satisfied, where $\varepsilon = 10^{-5}$. If the iteration number exceeds $10^5$, we also stop the iteration and count it as a failure. All of the algorithms are coded in Matlab 7.0 and run on a personal computer with a 2.0 GHz CPU.
5.1. Numerical Comparison of EMPRP and ROSE. We test the performance of the EMPRP and ROSE methods on the following test problems with the given initial points. The results are listed in Table 1, where $n$ stands for the dimension of the tested problem and $n_c$ stands for the number of constraints. We report the following results: the CPU time Time (in seconds), the number of iterations Iter, the number of gradient evaluations Geval, and the number of function evaluations Feval.
Problem 1 (HS28 [35]). The function HS28 in [35] is defined as follows:

$$f(x) = (x_1 + x_2)^2 + (x_2 + x_3)^2 \quad \text{s.t.}\; x_1 + 2x_2 + 3x_3 - 1 = 0, \tag{46}$$

with the initial point $x_0 = (-4, 1, 1)$. The optimal solution is $x^* = (0.5, -0.5, 0.5)$ with optimal function value $f(x^*) = 0$.
Problem 2 (HS48 [35]). The function HS48 in [35] is defined as follows:

$$f(x) = (x_1 - 1)^2 + (x_2 - x_3)^2 + (x_4 - x_5)^2 \quad \text{s.t.}\; x_1 + x_2 + x_3 + x_4 + x_5 - 5 = 0,\quad x_3 + 2(x_4 + x_5) + 3 = 0, \tag{47}$$

with the initial point $x_0 = (3, 5, -3, 2, -2)$. The optimal solution is $x^* = (1, 1, 1, 1, 1)$ with optimal function value $f(x^*) = 0$.
Problem 3 (HS49 [35]). The function HS49 in [35] is defined as follows:

$$f(x) = (x_1 - x_2)^2 + (x_3 - 1)^2 + (x_4 - 1)^4 + (x_5 - 1)^6 \quad \text{s.t.}\; x_1 + x_2 + x_3 + 4x_4 - 7 = 0,\quad x_3 + 5x_5 - 6 = 0, \tag{48}$$

with the initial point $x_0 = (10, 7, 2, -3, 0.8)$. The optimal solution is $x^* = (1, 1, 1, 1, 1)$ with optimal function value $f(x^*) = 0$.
Problem 4 (HS50 [35]). The function HS50 in [35] is defined as follows:

$$f(x) = (x_1 - x_2)^2 + (x_2 - x_3)^2 + (x_3 - x_4)^2 + (x_4 - x_5)^2 \quad \text{s.t.}\; x_1 + 2x_2 + 3x_3 - 6 = 0,\quad x_2 + 2x_3 + 3x_4 - 6 = 0,\quad x_3 + 2x_4 + 3x_5 - 6 = 0, \tag{49}$$

with the initial point $x_0 = (35, -31, 11, 5, -5)$. The optimal solution is $x^* = (1, 1, 1, 1, 1)$ with optimal function value $f(x^*) = 0$. Moreover, we extend the dimension of function HS50 [35] to 10 and 20 (HS50E1 and HS50E2 in Table 1), with the initial point $x_0 = (35, -31, 11, \ldots)$. The optimal solution is $x^* = (1, 1, \ldots, 1)$ with optimal function value $f(x^*) = 0$.
Problem 5 (HS51 [35]). The function HS51 in [35] is defined as follows:

$$f(x) = (x_1 - x_2)^2 + (x_2 + x_3 - 2)^2 + (x_4 - 1)^2 + (x_5 - 1)^2 \quad \text{s.t.}\; x_1 + 3x_2 - 4 = 0,\quad x_3 + x_4 - 2x_5 = 0,\quad x_2 - x_5 = 0, \tag{50}$$
Table 1: Test results for Problems 1–5 with given initial points.

                        EMPRP                            ROSE
Name     n    n_c    Time     Iter  Geval  Feval     Time     Iter  Geval  Feval
HS28     3    1      0.0156   20    22     21        0.0468   71    72     143
HS48     5    2      0.0156   26    28     27        0.0468   65    66     122
HS49     5    2      0.0156   29    30     30        0.0780   193   194    336
HS50     5    3      0.0156   22    23     23        0.0624   76    77     139
HS51     5    3      0.0156   15    16     16        0.0624   80    81     148
HS52     5    3      0.0156   30    31     31        0.0312   70    71     141
HS50E1   10   8      0.0156   16    17     17        0.0312   63    64     127
HS50E2   20   18     0.0156   15    16     16        0.0936   63    64     127
Table 2: Test results for Problem 6 with given initial points.

                           EMPRP              FFR                SPCG
k      n     n_c       Time      Iter     Time      Iter     Time      Iter
50     99    49        0.0568    73       0.0660    80       0.0256    70
100    199   99        0.1048    103      0.1060    120      0.0756    90
200    399   199       0.0868    108      0.2660    123      0.0904    97
300    599   299       0.2252    121      0.4260    126      0.7504    123
400    799   399       0.3142    121      0.5150    126      0.9804    153
500    999   499       0.4045    138      0.7030    146      1.4320    176
1000   1999  999       2.6043    138      2.1560    146      3.3468    198
2000   3999  1999      8.2608    138      6.8280    146      9.1266    216
3000   5999  2999      22.4075   138      15.1720   146      28.2177   276
4000   7999  3999      40.1268   138      35.0460   146      56.0604   324
5000   9999  4999      105.6252  138      85.0460   146      128.1504  476
with the initial point $x_0 = (2.5, 0.5, 2, -1, 0.5)$. The optimal solution is $x^* = (1, 1, 1, 1, 1)$ with optimal function value $f(x^*) = 0$.
From Table 1, we can see that the EMPRP method performs better than the Rosen gradient projection method [1], which implies that the EMPRP method can improve the computational efficiency of the Rosen gradient projection method for solving linear equality constrained optimization problems.
5.2. Numerical Comparison of EMPRP, FFR, and SPCG. In this subsection, we test the EMPRP method and compare it with the spectral gradient method (SPCG) of Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method (FFR) of Li et al. [32] on the following high dimensional problems with given initial points. The results are listed in Tables 2, 3, 4, 5, and 6. We report the CPU time Time (in seconds) and the number of iterations Iter.
Problem 6. Given a positive integer $k$, the function is defined as follows:

$$f(x) = \frac{1}{2}\sum_{i=1}^{k-2}\bigl(x_{k+i+1} - x_{k+i}\bigr)^2 \quad \text{s.t.}\; x_{k+i} - x_{i+1} + x_i = i, \quad i = 1, \ldots, k-1, \tag{51}$$

with the initial point $x_0 = (1, 2, \ldots, k, 2, 3, \ldots, k)$. The optimal function value is $f(x^*) = 0$. This problem comes from Martínez et al. [30].
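A sketch of how the constraint data of Problem 6 can be assembled (our own construction from the statement above; `problem6` is an illustrative name, not from the paper):

```python
import numpy as np

# Problem 6 for a given k: n = 2k - 1 variables, k - 1 linear constraints
# x_{k+i} - x_{i+1} + x_i = i, and the stated initial point x0.
def problem6(k):
    n = 2 * k - 1
    A = np.zeros((k - 1, n))
    b = np.zeros(k - 1)
    for i in range(1, k):                 # constraint index i = 1, ..., k-1
        A[i - 1, k + i - 1] = 1.0         # coefficient of x_{k+i}
        A[i - 1, i] = -1.0                # coefficient of x_{i+1}
        A[i - 1, i - 1] = 1.0             # coefficient of x_i
        b[i - 1] = i
    x0 = np.concatenate([np.arange(1, k + 1), np.arange(2, k + 1)]).astype(float)
    return A, b, x0

A, b, x0 = problem6(50)
print(np.allclose(A @ x0, b))             # True: the stated x0 is feasible
```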
Problem 7. Given a positive integer $n$, the function is defined as follows:

$$f(x) = \sum_{i=1}^{n} \cos\Bigl(2\pi x_i \sin\frac{\pi}{20}\Bigr) \quad \text{s.t.}\; x_i - x_{i+1} = 0.4, \quad i = 1, \ldots, n-1, \tag{52}$$

with the initial point $x_0 = (1, 0.6, 0.2, \ldots, 1 - 0.4(n-1))$. This problem comes from Asaadi [36] and is called MAD6.
Problem 8. Given a positive integer $k$, the function is defined as follows:

$$f(x) = \frac{1}{2}\sum_{i=1}^{k-2}\bigl(x_{k+i+1} - x_{k+i}\bigr)^4 \quad \text{s.t.}\; x_{k+i} - x_{i+1} + x_i = i, \quad i = 1, \ldots, k-1, \tag{53}$$

with the initial point $x_0 = (1, 2, \ldots, k, 2, 3, \ldots, k)$. The optimal function value is $f(x^*) = 0$.
Table 3: Test results for Problem 7 with given initial points.

                     EMPRP              FFR                SPCG
n      n_c       Time      Iter     Time      Iter     Time      Iter
200    99        0.1468    83       0.1679    94       0.1286    79
400    199       0.2608    94       0.2860    103      0.2266    90
600    299       0.6252    121      0.8422    132      0.7504    133
800    399       1.4075    123      1.6510    138      1.4177    149
1000   499       3.280     141      2.7190    146      2.8790    168
2000   999       8.4045    189      4.6202    202      5.1280    190
3000   1499      12.0042   219      9.7030    248      10.7864   260
4000   1999      17.2608   254      13.4255   288      19.1266   316
5000   2499      20.6252   321      19.0940   330      29.1504   383
Table 4: Test results for Problem 8 with given initial points.

                           EMPRP              FFR                SPCG
k      n     n_c       Time      Iter     Time      Iter     Time      Iter
50     99    49        0.0668    78       0.0750    83       0.0286    75
100    199   99        0.0988    113      0.1462    130      0.0826    106
200    399   199       0.1978    123      0.2960    133      0.1864    124
300    599   299       0.3436    128      0.5260    136      0.3504    156
400    799   399       0.4142    134      0.7143    136      0.5804    186
500    999   499       0.6039    152      0.9030    158      0.9331    228
1000   1999  999       2.9065    152      2.9570    158      3.1260    268
2000   3999  1999      8.3408    152      7.8280    158      8.2266    298
3000   5999  2999      19.6086   152      17.4350   158      19.8672   330
4000   7999  3999      46.3268   152      38.1460   158      56.1256   368
5000   9999  4999      75.3972   152      58.3867   158      125.7680  398
Table 5: Test results for Problem 9 with given initial points.

                     EMPRP           FFR             SPCG
k      n      n_c    Time     Iter   Time     Iter   Time      Iter
50     99     49     0.0568    65    0.0640    70    0.0266     62
100    199    99     0.0868   109    0.1362   125    0.0726    103
200    399    199    0.2110   120    0.2862   128    0.2064    124
300    599    299    0.3250   127    0.5062   132    0.4524    160
400    799    399    0.4543   138    0.7456   139    0.6804    192
500    999    499    0.6246   156    0.9268   155    1.0876    258
1000   1999   999    3.9245   156    3.4268   162    3.4560    279
2000   3999   1999   9.3404   156    8.6240   162    8.6280    320
3000   5999   2999   20.6125  156    19.6548  162    20.8656   358
4000   7999   3999   48.2890  156    44.4330  162    57.7680   386
5000   9999   4999   95.4680  156    83.8650  162    128.8760  420
Problem 9. Given a positive integer $k$, the function is defined as follows:

$$f(x) = \sum_{i=1}^{k-2} \left[ 100 \left(x_{k+i+1} - x_{k+i}\right)^2 + \left(1 - x_{k+i}\right)^2 \right],$$
$$\text{s.t. } x_{k+i} - x_{i+1} + x_i = i, \quad i = 1, \dots, k-1, \tag{54}$$

with the initial point $x_0 = (1, 2, \dots, k, 2, 3, \dots, k)$. The optimal function value is $f(x^*) = 0$.
Problem 10. Given a positive integer $n$, the function is defined as follows:

$$f(x) = \sum_{i=1}^{n} \cos^2\left(2\pi x_i \sin\frac{\pi}{20}\right),$$
$$\text{s.t. } x_i - x_{i+1} = 0.4, \quad i = 1, \dots, n-1, \tag{55}$$

with the initial point $x_0 = (1, 0.6, 0.2, \dots, 1 - 0.4(n-1))$.
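For concreteness, the data for a problem of this form (objective, constraint matrix $A$, right-hand side $b$, and the feasible initial point) can be assembled in a few lines. The sketch below is our own illustration of Problem 10; the function and variable names are ours, not the paper's.

```python
import numpy as np

def problem10(n):
    """Build Problem 10: objective f, constraints A x = b, and initial point x0."""
    def f(x):
        # f(x) = sum_i cos^2(2*pi*x_i * sin(pi/20))
        return float(np.sum(np.cos(2 * np.pi * x * np.sin(np.pi / 20)) ** 2))
    # constraints x_i - x_{i+1} = 0.4, i = 1, ..., n-1
    A = np.zeros((n - 1, n))
    for i in range(n - 1):
        A[i, i], A[i, i + 1] = 1.0, -1.0
    b = 0.4 * np.ones(n - 1)
    x0 = 1.0 - 0.4 * np.arange(n)   # (1, 0.6, 0.2, ..., 1 - 0.4(n-1))
    return f, A, b, x0

f, A, b, x0 = problem10(6)
assert np.allclose(A @ x0, b)       # the stated initial point is feasible
```

The feasibility check confirms that the prescribed initial point satisfies the linear constraints exactly, as required by the algorithm.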
Table 6: Test results for Problem 10 with given initial points.

              EMPRP           FFR             SPCG
n      n_c    Time     Iter   Time     Iter   Time     Iter
200    99     0.1842    88    0.1984    98    0.1488    92
400    199    0.2908   104    0.3260   113    0.2348   100
600    299    0.8256   132    0.9422   142    0.9804   182
800    399    1.6078   136    1.7512   148    2.6177   259
1000   499    2.9801   148    2.8192   156    3.1790   268
2000   999    7.8045   199    5.1202   212    8.1288   320
3000   1499   12.8042  248    9.9035   256    14.7862  360
4000   1999   19.2432  304    14.4258  298    22.1268  386
5000   2499   29.6846  382    20.1932  378    32.1422  393
From Tables 2–6 we can see that the EMPRP method and the FFR method in [32] perform better than the SPCG method of Martínez et al. [30] for solving large-scale linear equality constrained optimization problems, since the EMPRP and FFR methods are both first-order methods, whereas the SPCG method of Martínez et al. [30] must compute $B_k$ by a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. However, since the EMPRP method also needs to compute a projection, the FFR method in [32] performs better than the EMPRP method as the test problems become large.
6. Conclusions

In this paper we propose a new conjugate gradient projection method for solving the linear equality constrained optimization problem (1), which combines the two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] with the Rosen projection method. The proposed method can also be regarded as an extension of the recently developed two-term modified PRP conjugate gradient method of Cheng [7]. Under some mild conditions, we show that it is globally convergent with the Armijo-type line search.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The author thanks the anonymous referees for their valuable comments and suggestions, which improved the presentation. This work is supported by NSF of China Grants 11301041, 11371154, 71371065, and 71371195, Ministry of Education Humanities and Social Sciences Project Grant 12YJC790027, a project funded by the China Postdoctoral Science Foundation, and the Natural Science Foundation of Hunan Province.
References
[1] J. B. Rosen, "The gradient projection method for nonlinear programming. Part I. Linear constraints," SIAM Journal on Applied Mathematics, vol. 8, no. 1, pp. 181–217, 1960.
[2] D. Z. Du and X. S. Zhang, "A convergence theorem of Rosen's gradient projection method," Mathematical Programming, vol. 36, no. 2, pp. 135–144, 1986.
[3] D. Du, "Remarks on the convergence of Rosen's gradient projection method," Acta Mathematicae Applicatae Sinica, vol. 3, no. 2, pp. 270–279, 1987.
[4] D. Z. Du and X. S. Zhang, "Global convergence of Rosen's gradient projection method," Mathematical Programming, vol. 44, no. 1, pp. 357–366, 1989.
[5] E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 16, pp. 35–43, 1969.
[6] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
[7] W. Cheng, "A two-term PRP-based descent method," Numerical Functional Analysis and Optimization, vol. 28, no. 11, pp. 1217–1230, 2007.
[8] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
[9] G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523–539, 2007.
[10] N. Andrei, "Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," European Journal of Operational Research, vol. 204, no. 3, pp. 410–420, 2010.
[11] G. Yu, L. Guan, and W. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optimization Methods and Software, vol. 23, no. 2, pp. 275–293, 2008.
[12] G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
[13] Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
[14] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
[15] Z. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927–936, 2011.
[16] Y.-Y. Chen and S.-Q. Du, "Nonlinear conjugate gradient methods with Wolfe type line search," Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
[17] S. Y. Liu, Y. Y. Huang, and H. W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, Article ID 305643, 12 pages, 2014.
[18] Z. F. Dai, D. H. Li, and F. H. Wen, "Robust conditional value-at-risk optimization for asymmetrically distributed asset returns," Pacific Journal of Optimization, vol. 8, no. 3, pp. 429–445, 2012.
[19] C. Huang, C. Peng, X. Chen, and F. Wen, "Dynamics analysis of a class of delayed economic model," Abstract and Applied Analysis, vol. 2013, Article ID 962738, 12 pages, 2013.
[20] C. Huang, X. Gong, X. Chen, and F. Wen, "Measuring and forecasting volatility in Chinese stock market using HAR-CJ-M model," Abstract and Applied Analysis, vol. 2013, Article ID 143194, 13 pages, 2013.
[21] G. Qin, C. Huang, Y. Xie, and F. Wen, "Asymptotic behavior for third-order quasi-linear differential equations," Advances in Difference Equations, vol. 30, no. 13, pp. 305–312, 2013.
[22] C. Huang, H. Kuang, X. Chen, and F. Wen, "An LMI approach for dynamics of switched cellular neural networks with mixed delays," Abstract and Applied Analysis, vol. 2013, Article ID 870486, 8 pages, 2013.
[23] Q. F. Cui, Z. G. Wang, X. Chen, and F. Wen, "Sufficient conditions for non-Bazilevič functions," Abstract and Applied Analysis, vol. 2013, Article ID 154912, 4 pages, 2013.
[24] F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360–371, 2009.
[25] F. Wen and Z. Liu, "A copula-based correlation measure and its application in Chinese stock market," International Journal of Information Technology & Decision Making, vol. 8, no. 4, pp. 787–801, 2009.
[26] F. Wen, Z. He, and X. Chen, "Investors' risk preference characteristics and conditional skewness," Mathematical Problems in Engineering, vol. 2014, Article ID 814965, 14 pages, 2014.
[27] F. Wen, X. Gong, Y. Chao, and X. Chen, "The effects of prior outcomes on risky choice: evidence from the stock market," Mathematical Problems in Engineering, vol. 2014, Article ID 272518, 8 pages, 2014.
[28] F. Wen, Z. He, X. Gong, and A. Liu, "Investors' risk preference characteristics based on different reference point," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 158386, 9 pages, 2014.
[29] Z. Dai and F. Wen, "Robust CVaR-based portfolio optimization under a general affine data perturbation uncertainty set," Journal of Computational Analysis and Applications, vol. 16, no. 1, pp. 93–103, 2014.
[30] J. M. Martínez, E. A. Pilotta, and M. Raydan, "Spectral gradient methods for linearly constrained optimization," Journal of Optimization Theory and Applications, vol. 125, no. 3, pp. 629–651, 2005.
[31] C. Li and D. H. Li, "An extension of the Fletcher-Reeves method to linear equality constrained optimization problem," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10909–10914, 2013.
[32] C. Li, L. Fang, and X. Cui, "A feasible Fletcher-Reeves method to linear equality constrained optimization problem," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '10), pp. 30–33, December 2010.
[33] L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561–572, 2006.
[34] N. I. M. Gould, M. E. Hribar, and J. Nocedal, "On the solution of equality constrained quadratic programming problems arising in optimization," SIAM Journal on Scientific Computing, vol. 23, no. 4, pp. 1376–1395, 2001.
[35] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Springer, New York, NY, USA, 1981.
[36] J. Asaadi, "A computational comparison of some non-linear programs," Mathematical Programming, vol. 4, no. 1, pp. 144–154, 1973.
An attractive property of the TMPRP method is that the direction generated by the method satisfies

$$g_k^T d_k = -\|g_k\|^2, \tag{5}$$

which is independent of any line search. The presented numerical results show some potential advantage of the TMPRP method of Cheng [7]. In fact, we can easily rewrite the above direction $d_k$ in (4) in a three-term form:

$$d_k = \begin{cases} -g_k, & \text{if } k = 0, \\ -g_k + \beta_k^{PRP} d_{k-1} - \beta_k^{PRP} \dfrac{g_k^T d_{k-1}}{\|g_k\|^2} g_k, & \text{if } k \ge 1. \end{cases} \tag{6}$$
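The identity (5) is easy to verify numerically for the three-term form (6). The following sketch uses toy random vectors of our own (not data from the paper) and checks that the descent identity holds exactly, whatever the previous direction was:

```python
import numpy as np

rng = np.random.default_rng(0)
g_prev, g, d_prev = rng.standard_normal((3, 5))   # toy g_{k-1}, g_k, d_{k-1}

beta = g @ (g - g_prev) / (g_prev @ g_prev)       # a PRP-type parameter
# three-term direction (6)
d = -g + beta * d_prev - beta * (g @ d_prev) / (g @ g) * g

# descent identity (5): g_k^T d_k = -||g_k||^2, independent of the line search
assert np.isclose(g @ d, -(g @ g))
```

The last two terms cancel in the inner product with $g$, which is exactly why (5) holds independently of the line search.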
In the past few years, researchers have paid increasing attention to conjugate gradient methods and their applications; among others, we mention the works [8–29].

Some researchers have also paid attention to equality constrained problems. Martínez et al. [30] proposed a spectral gradient method for linearly constrained optimization in which the search direction $d_k \in R^n$ is the unique solution of

$$\min \frac{1}{2} d^T B_k d + \nabla f(x_k)^T d, \quad \text{s.t. } Ad = 0. \tag{7}$$

In this algorithm, $B_k$ can be computed by a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation, and the whole process is combined with a nonmonotone line search strategy.
C. Li and D. H. Li [31] proposed a feasible Fletcher-Reeves conjugate gradient method for solving linear equality constrained optimization problems with exact line search. Their idea is to use the original Fletcher-Reeves conjugate gradient method to modify the Zoutendijk direction. The Zoutendijk direction is the feasible steepest descent direction; it is a solution of the following problem:

$$\min \nabla f(x)^T d, \quad \text{s.t. } Ad = 0, \ \|d\| \le 1. \tag{8}$$
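For the Euclidean norm, problem (8) has the closed-form solution $d = -Pg / \|Pg\|$, the normalized negative projected gradient; this is a standard observation, added here as our own aside rather than a claim of the paper. A quick numerical check with toy data:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 5))                      # toy full-row-rank constraint matrix
P = np.eye(5) - A.T @ np.linalg.solve(A @ A.T, A)    # projector onto null(A)
g = rng.standard_normal(5)

# closed-form minimizer of (8) for the 2-norm: steepest feasible descent direction
d = -P @ g / np.linalg.norm(P @ g)

assert np.allclose(A @ d, 0)                         # feasible
assert np.isclose(np.linalg.norm(d), 1.0)            # on the unit sphere
```

Over the null space of $A$, the linear objective $\nabla f(x)^T d = (Pg)^T d$ is minimized on the unit ball by the unit vector opposite to $Pg$, which is what the two assertions confirm.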
Li et al. [32] also extended the modified Fletcher-Reeves conjugate gradient method of Zhang et al. [33] to solve linear equality constrained optimization problems, combined with the Zoutendijk feasible direction method. Under some mild conditions, Li et al. [32] showed that the proposed method with Armijo-type line search is globally convergent.

In this paper, we extend the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] to solve the linear equality constrained optimization problem (1), combining it with the Rosen gradient projection method [1]. Under some mild conditions, we show that the resulting method is globally convergent with Armijo-type line search.

The rest of this paper is organized as follows. In Section 2 we propose the algorithm and prove that it generates feasible descent directions. In Section 3 we prove the global convergence of the proposed method. In Section 4 we give some improvements of the algorithm. In Section 5 we report some numerical results for the proposed method.
2. Algorithm and the Feasible Descent Direction

In this section we propose a two-term Polak-Ribière-Polyak conjugate gradient projection method for solving the linear equality constrained optimization problem (1). The proposed method is a combination of the well-known Rosen gradient projection method and the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7].
The iterative process of the proposed method is given by

$$x_{k+1} = x_k + \alpha_k d_k, \tag{9}$$

where the search direction $d_k$ is defined by

$$d_k = \begin{cases} -P g_k, & \text{if } k = 0, \\ -P g_k + \beta_k^{PRP} d_{k-1} - \beta_k^{PRP} \dfrac{g_k^T d_{k-1}}{\|P g_k\|^2} P g_k, & \text{if } k \ge 1, \end{cases} \tag{10}$$

where

$$P = I - A^T (A A^T)^{-1} A, \quad g_k = \nabla f(x_k), \quad y_{k-1} = g_k - g_{k-1}, \quad \beta_k^{PRP} = \frac{(P g_k)^T y_{k-1}}{\|P g_{k-1}\|^2}, \tag{11}$$

and $\alpha_k > 0$ is a steplength obtained by a line search.
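The direction (10)-(11) can be rendered in a few lines of NumPy; the sketch below is our own minimal illustration (names and toy data are ours), and it forms $P$ explicitly only for clarity, whereas Section 4.1 computes the projection via normal equations instead:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 6))                      # toy constraint matrix, full row rank
P = np.eye(6) - A.T @ np.linalg.solve(A @ A.T, A)    # projector onto null(A)

def emprp_direction(g, g_prev, d_prev):
    """Search direction (10)-(11) of the EMPRP method (our naming)."""
    Pg, Pg_prev = P @ g, P @ g_prev
    beta = Pg @ (g - g_prev) / (Pg_prev @ Pg_prev)   # beta_k^PRP of (11)
    return -Pg + beta * d_prev - beta * (g @ d_prev) / (Pg @ Pg) * Pg

g_prev, g = rng.standard_normal((2, 6))              # toy gradients
d_prev = -P @ g_prev                                 # a previous feasible direction
d = emprp_direction(g, g_prev, d_prev)

Pg = P @ g
assert np.allclose(A @ d, 0)                         # feasibility (cf. Theorem 1)
assert np.isclose(g @ d, -(Pg @ Pg))                 # descent: g^T d = -||Pg||^2
```

The two assertions numerically anticipate Theorem 1 below: the direction stays in the null space of $A$ and satisfies $g_k^T d_k = -\|P g_k\|^2$.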
For convenience, we call the method defined by (9)-(11) the EMPRP method. We now prove that the direction $d_k$ defined by (10) and (11) is a feasible descent direction of $f$ at $x_k$.
Theorem 1. Suppose that $x_k \in D$, $P = I - A^T(AA^T)^{-1}A$, and $d_k$ is defined by (10). If $d_k \neq 0$, then $d_k$ is a feasible descent direction of $f$ at $x_k$.

Proof. From (9), (10), and the definition of $P$ (which is symmetric and idempotent, so $g_k^T P g_k = \|P g_k\|^2$), we have

$$g_k^T d_k = g_k^T \left( -P g_k + \beta_k^{PRP} d_{k-1} - \beta_k^{PRP} \frac{g_k^T d_{k-1}}{\|P g_k\|^2} P g_k \right) = -\|P g_k\|^2 + \beta_k^{PRP} g_k^T d_{k-1} - \beta_k^{PRP} \frac{g_k^T d_{k-1}}{\|P g_k\|^2} \|P g_k\|^2 = -\|P g_k\|^2. \tag{12}$$

This implies that $d_k$ provides a descent direction of $f$ at $x_k$.

In what follows, we show that $d_k$ is a feasible direction at $x_k$. From the definition of $P$, we have

$$A (P g_k) = A \left( I - A^T (A A^T)^{-1} A \right) g_k = 0. \tag{13}$$
It follows from (10) and (13) that

$$A d_k = A \left( -P g_k + \beta_k^{PRP} d_{k-1} - \beta_k^{PRP} \frac{g_k^T d_{k-1}}{\|P g_k\|^2} P g_k \right) = \beta_k^{PRP} A d_{k-1}. \tag{14}$$

When $k = 0$, we have

$$A d_0 = -A P g_0 = 0. \tag{15}$$

It follows easily from (14) and (15) that $A d_k = 0$ for all $k$; that is, $d_k$ is a feasible direction.
In the remainder of this paper, we always assume that $f(x)$ satisfies the following assumptions.

Assumption A. (i) The level set

$$\Omega = \{ x \in R^n \mid f(x) \le f(x_0) \} \tag{16}$$

is bounded.

(ii) The function $f: R^n \to R$ is continuously differentiable and bounded from below, and its gradient is Lipschitz continuous on an open ball $N$ containing $\Omega$; that is, there is a constant $L > 0$ such that

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N. \tag{17}$$

Since $\{f(x_k)\}$ is decreasing, it is clear that the sequence $\{x_k\}$ generated by Algorithm 2 is contained in $\Omega$. In addition, we get from Assumption A that there is a constant $\gamma > 0$ such that

$$\|g(x)\| \le \gamma, \quad \forall x \in \Omega. \tag{18}$$

Since the matrix $P$ is a projection matrix, it is reasonable to assume that there is a constant $C > 0$ such that

$$\|P g(x)\| \le C, \quad \forall x \in \Omega. \tag{19}$$
We state the steps of the algorithm as follows.

Algorithm 2 (EMPRP method with Armijo-type line search).

Step 0. Choose an initial point $x_0 \in D$ and $\varepsilon > 0$. Let $k = 0$.

Step 1. Compute $d_k$ by (10), where $\beta_k$ is computed by (11).

Step 2. If $\|P g_k\| \le \varepsilon$, stop; else go to Step 3.

Step 3. Given $\delta \in (0, 1/2)$ and $\rho \in (0, 1)$, determine a stepsize $\alpha_k = \max \{ \rho^j \mid j = 0, 1, 2, \dots \}$ satisfying

$$f(x_k + \alpha_k d_k) \le f(x_k) - \delta \alpha_k^2 \|d_k\|^2. \tag{20}$$

Step 4. Let $x_{k+1} = x_k + \alpha_k d_k$ and $k = k + 1$. Go to Step 1.
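The steps above can be sketched end to end. The implementation below is our own minimal rendering of Algorithm 2 (not the authors' Matlab code), run on a toy equality constrained quadratic of our own choosing:

```python
import numpy as np

def emprp(f, grad, A, x0, eps=1e-6, delta=0.02, rho=0.3, max_iter=500):
    """Minimal sketch of Algorithm 2: EMPRP with Armijo-type line search (20)."""
    P = np.eye(len(x0)) - A.T @ np.linalg.solve(A @ A.T, A)
    x, g = x0.astype(float), grad(x0)
    d = -P @ g                                           # k = 0 branch of (10)
    for _ in range(max_iter):
        Pg = P @ g
        if np.linalg.norm(Pg) <= eps:                    # Step 2
            break
        alpha = 1.0                                      # Step 3: backtracking on rho^j
        while f(x + alpha * d) > f(x) - delta * alpha**2 * (d @ d):
            alpha *= rho
        x = x + alpha * d                                # Step 4
        g_new = grad(x)
        Pg_new = P @ g_new
        beta = Pg_new @ (g_new - g) / (Pg @ Pg)          # beta_{k+1}^PRP of (11)
        d = -Pg_new + beta * d - beta * (g_new @ d) / (Pg_new @ Pg_new) * Pg_new
        g = g_new
    return x

# toy problem (ours): min ||x - c||^2  s.t.  x_1 + x_2 + x_3 = 1
c = np.array([1.0, 2.0, 3.0])
A = np.ones((1, 3))
f = lambda x: float(np.sum((x - c) ** 2))
grad = lambda x: 2 * (x - c)
x = emprp(f, grad, A, np.array([1.0, 0.0, 0.0]))
assert np.allclose(A @ x, 1.0)                           # iterates stay feasible
```

Since the initial point is feasible and every direction satisfies $A d_k = 0$, all iterates remain feasible, which is what the final assertion checks.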
3. Global Convergence

In what follows, we establish the global convergence theorem of the EMPRP method for general nonlinear objective functions. We first give some important lemmas for the EMPRP method.
Lemma 3. Suppose that $x_k \in D$, $P = I - A^T(AA^T)^{-1}A$, and $d_k$ is defined by (10) and (11). Then

$$(P g_k)^T d_k = g_k^T d_k = -\|P g_k\|^2, \quad (P g_{k+1})^T d_k = g_{k+1}^T d_k.$$

Proof. By the definition of $d_k$ and the fact that $A d_k = 0$, we have

$$(P g_k)^T d_k = g_k^T \left( I - A^T (A A^T)^{-1} A \right) d_k = g_k^T d_k - g_k^T A^T (A A^T)^{-1} A d_k = g_k^T d_k. \tag{21}$$

It follows from (12) and (21) that

$$(P g_k)^T d_k = g_k^T d_k = -\|P g_k\|^2. \tag{22}$$

On the other hand, we also have

$$(P g_{k+1})^T d_k = g_{k+1}^T \left( I - A^T (A A^T)^{-1} A \right) d_k = g_{k+1}^T d_k - g_{k+1}^T A^T (A A^T)^{-1} A d_k = g_{k+1}^T d_k. \tag{23}$$
Lemma 4. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. If there exists a constant $\varepsilon > 0$ such that

$$\|P g_k\| \ge \varepsilon, \quad \forall k, \tag{24}$$

then there exists a constant $M > 0$ such that

$$\|d_k\| \le M, \quad \forall k. \tag{25}$$

Proof. It follows from (20) that the function value sequence $\{f(x_k)\}$ is decreasing. We also have from (20) that

$$\sum_{k=0}^{\infty} \delta \alpha_k^2 \|d_k\|^2 < +\infty, \tag{26}$$

as $f$ is bounded from below. In particular, we have

$$\lim_{k \to \infty} \alpha_k \|d_k\| = 0. \tag{27}$$

By the definition of $d_k$, we get from (17), (19), and (24) that

$$\|d_k\| \le \|P g_k\| + \frac{\|P g_k\| \|y_{k-1}\|}{\|P g_{k-1}\|^2} \|d_{k-1}\| + \frac{\|y_{k-1}\| \|g_k\| \|d_{k-1}\|}{\|P g_{k-1}\|^2} \le C + \frac{CL}{\varepsilon^2} \alpha_{k-1} \|d_{k-1}\|^2 + \frac{\gamma L}{\varepsilon^2} \alpha_{k-1} \|d_{k-1}\|^2 = C + \left( \frac{CL}{\varepsilon^2} + \frac{\gamma L}{\varepsilon^2} \right) \alpha_{k-1} \|d_{k-1}\|^2. \tag{28}$$

Since $\alpha_{k-1} \|d_{k-1}\| \to 0$ as $k \to \infty$, there exist a constant $r \in (0, 1)$ and an integer $k_0$ such that the following inequality holds for all $k \ge k_0$:

$$\left( \frac{CL}{\varepsilon^2} + \frac{\gamma L}{\varepsilon^2} \right) \alpha_{k-1} \|d_{k-1}\| \le r. \tag{29}$$
Hence, for any $k > k_0$,

$$\|d_k\| \le C + r \|d_{k-1}\| \le C \left(1 + r + r^2 + \cdots + r^{k-k_0-1}\right) + r^{k-k_0} \|d_{k_0}\| \le \frac{C}{1-r} + \|d_{k_0}\|. \tag{30}$$

Letting $M = \max \{ \|d_0\|, \|d_1\|, \dots, \|d_{k_0}\|, C/(1-r) + \|d_{k_0}\| \}$, we obtain (25).
We now establish the global convergence theorem of theEMPRP method for general nonlinear objective functions
Theorem 5. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. Then

$$\liminf_{k \to \infty} \|P g_k\| = 0. \tag{31}$$

Proof. Suppose, on the contrary, that (31) does not hold. Then there exists a constant $\varepsilon > 0$ such that

$$\|P g_k\| > \varepsilon, \quad \forall k \ge 0. \tag{32}$$

We now prove (31) by considering the following two cases.

Case (i): $\liminf_{k \to \infty} \alpha_k > 0$. We get from (22) and (27) that $\liminf_{k \to \infty} \|P g_k\| = 0$. This contradicts assumption (32).
Case (ii): $\liminf_{k \to \infty} \alpha_k = 0$. Then there is an infinite index set $K$ such that

$$\lim_{k \in K, \, k \to \infty} \alpha_k = 0. \tag{33}$$

When $k \in K$ is sufficiently large, by the line search rule, $\rho^{-1} \alpha_k$ does not satisfy inequality (20); that is,

$$f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) > -\delta \rho^{-2} \alpha_k^2 \|d_k\|^2. \tag{34}$$

By the mean-value theorem and inequality (17), there is a $t_k \in (0, 1)$ such that $x_k + t_k \rho^{-1} \alpha_k d_k \in N$ and

$$f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) = \rho^{-1} \alpha_k g\left(x_k + t_k \rho^{-1} \alpha_k d_k\right)^T d_k = \rho^{-1} \alpha_k g_k^T d_k + \rho^{-1} \alpha_k \left( g\left(x_k + t_k \rho^{-1} \alpha_k d_k\right) - g_k \right)^T d_k \le \rho^{-1} \alpha_k g_k^T d_k + \rho^{-1} \alpha_k \left\| g\left(x_k + t_k \rho^{-1} \alpha_k d_k\right) - g_k \right\| \|d_k\| \le \rho^{-1} \alpha_k g_k^T d_k + L \rho^{-2} \alpha_k^2 \|d_k\|^2. \tag{35}$$

Substituting the last inequality into (34), for all $k \in K$ sufficiently large we have from (12) that

$$\|P g_k\|^2 = -g_k^T d_k \le \rho^{-1} (L + \delta) \alpha_k \|d_k\|^2. \tag{36}$$

Since $\{\|d_k\|\}$ is bounded and $\lim_{k \in K, \, k \to \infty} \alpha_k = 0$, the last inequality implies

$$\lim_{k \in K, \, k \to \infty} \|P g_k\| = 0. \tag{37}$$

This also yields a contradiction. The proof is then complete.
4. Improvement of Algorithm 2

In this section we propose techniques for improving the efficiency of Algorithm 2 in practical computation, concerning the computation of projections and the stepsize of the Armijo-type line search.
4.1. Computing the Projection. In this paper, as in Gould et al. [34], instead of computing a basis for the null space of the matrix $A$, we work directly with the matrix of constraint gradients, computing projections by normal equations. As the computation of the projection is a key step in the proposed method, following Gould et al. [34] this projection can be computed in an alternative way.

Let

$$g^+ = -P g, \quad P = I - A^T (A A^T)^{-1} A. \tag{38}$$

We can express this as

$$g^+ = -g + A^T v, \tag{39}$$

where $v$ is the solution of

$$A A^T v = A g. \tag{40}$$

Note that (40) is the normal equation. Since $A$ is an $m \times n$ matrix of rank $m$ ($m \le n$), $A A^T$ is a symmetric positive definite matrix. We use the Doolittle (LU) factorization of $A A^T$ to solve (40).

Since the matrix $A$ is constant, the factorization of $A A^T$ needs to be carried out only once, at the beginning of the iterative process. Using a Doolittle factorization $A A^T = L U$, (40) can be solved by the two triangular systems

$$L y = A g, \quad U v = y, \tag{41}$$

where $L$ and $U$ are the Doolittle factors of $A A^T$.
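This scheme (factor once, reuse at every iteration) can be sketched as follows. The paper factors $AA^T$ with Doolittle LU; since $AA^T$ is symmetric positive definite, the sketch below substitutes the closely related Cholesky factorization (our choice, for a NumPy-only example), which plays the same role of two triangular solves per projection:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 8))              # toy m x n constraint matrix, rank m

# factor AA^T once, before the iteration starts (Cholesky: AA^T = L L^T)
L = np.linalg.cholesky(A @ A.T)

def neg_projected_gradient(g):
    """g+ = -Pg via (39)-(40): solve (AA^T) v = A g with the cached factor."""
    y = np.linalg.solve(L, A @ g)            # forward substitution:  L y = A g
    v = np.linalg.solve(L.T, y)              # backward substitution: L^T v = y
    return -g + A.T @ v                      # (39)

g = rng.standard_normal(8)
gplus = neg_projected_gradient(g)
assert np.allclose(A @ gplus, 0)             # projected direction lies in null(A)
```

`np.linalg.solve` does not exploit triangularity, so a production version might use a dedicated triangular solver, but the structure of the computation is the same.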
4.2. Computing the Stepsize. The drawback of the Armijo line search is the choice of the initial stepsize $\alpha_k$. If $\alpha_k$ is too large, the procedure needs many more function evaluations; if $\alpha_k$ is too small, the efficiency of the algorithm decreases. Therefore we should choose an adequate initial stepsize $\alpha_k$ at each iteration. In what follows we propose a way to generate the initial stepsize.

We first estimate the stepsize determined by the exact line search. Suppose for the moment that $f$ is twice continuously differentiable. We denote by $G(x)$ the Hessian of $f$ at $x$ and abbreviate $G(x_k)$ as $G_k$. Notice that the exact line search stepsize $\alpha_k$ satisfies

$$-g_k^T d_k = \left( g(x_k + \alpha_k d_k) - g_k \right)^T d_k \approx \alpha_k d_k^T G_k d_k. \tag{42}$$

This shows that the scalar $\delta_k \triangleq -g_k^T d_k / (d_k^T G_k d_k)$ is an estimate of $\alpha_k$. To avoid the computation of second derivatives, we further estimate $\alpha_k$ by letting

$$\gamma_k = \frac{-\epsilon_k g_k^T d_k}{d_k^T \left( g(x_k + \epsilon_k d_k) - g_k \right)}, \tag{43}$$

where the positive sequence $\{\epsilon_k\}$ satisfies $\epsilon_k \to 0$ as $k \to \infty$.

Let the initial stepsize of the Armijo line search be an approximation to

$$\delta_k \triangleq \frac{-g_k^T d_k}{d_k^T G_k d_k} \approx \frac{-\epsilon_k g_k^T d_k}{d_k^T \left( g(x_k + \epsilon_k d_k) - g_k \right)} \triangleq \gamma_k. \tag{44}$$

It is not difficult to see that if $\epsilon_k$ and $\|d_k\|$ are sufficiently small, then $\delta_k$ and $\gamma_k$ are good estimates of $\alpha_k$.
Line Search Process If inequality
119891 (119909119896+10038161003816100381610038161205741198961003816100381610038161003816 119889119896) le 119891 (119909119896) + 120575
1003816100381610038161003816120574119896100381610038161003816100381621003817100381710038171003817119889119896
10038171003817100381710038172
(45)
holds then we let 120572119896= |120574119896| Otherwise we let 120572
119896be the largest
scalar in the set |120574119896|120588119894 119894 = 0 1 such that inequality (45)
is satisfied
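The stepsize rule (43)-(45) can be sketched in a few lines. The following Python sketch (an illustration of ours, not the paper's Matlab code) uses a simple quadratic stand-in objective $f(x) = \sum_i x_i^2$ and the parameter values $\epsilon_k = 10^{-6}$, $\delta = 0.02$, $\rho = 0.3$ from the experiments in Section 5:

```python
# Sketch of the initial-stepsize rule (43)-(45); the objective here is a
# simple quadratic stand-in, and delta/rho follow the paper's settings.
DELTA, RHO, EPS_K = 0.02, 0.3, 1e-6

def f(x):                      # stand-in objective f(x) = sum x_i^2
    return sum(t * t for t in x)

def g(x):                      # its gradient
    return [2.0 * t for t in x]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def initial_stepsize(x, d):
    """gamma_k from (43): finite-difference estimate of the exact stepsize."""
    gx = g(x)
    x_eps = [a + EPS_K * b for a, b in zip(x, d)]
    denom = dot(d, [a - b for a, b in zip(g(x_eps), gx)])
    return -EPS_K * dot(gx, d) / denom

def line_search(x, d):
    """Accept alpha = |gamma_k| if (45) holds, else backtrack by rho."""
    alpha = abs(initial_stepsize(x, d))
    fx, nd2 = f(x), dot(d, d)
    while f([a + alpha * b for a, b in zip(x, d)]) > fx - DELTA * alpha ** 2 * nd2:
        alpha *= RHO
    return alpha

x = [4.0, -2.0]
d = [-t for t in g(x)]         # steepest descent direction for the sketch
alpha = line_search(x, d)
```

For a quadratic objective the finite difference in (43) is exact, so here $\gamma_k$ equals the exact line search stepsize $0.5$ and (45) accepts it immediately; no backtracking is needed.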
5. Numerical Experiments

This section reports some numerical experiments. First, we test the EMPRP method and compare it with the Rosen gradient projection method in [1] on low-dimensional problems. Second, we test the EMPRP method and compare it with the spectral gradient method in Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method in Li et al. [32] on high-dimensional problems. In the line search process we set $\epsilon_k = 10^{-6}$, $\rho = 0.3$, and $\delta = 0.02$.

The methods in the tables have the following meanings:

(i) "EMPRP" stands for the EMPRP method with the Armijo-type line search (20).

(ii) "ROSE" stands for the Rosen gradient projection method in [1] with the Armijo-type line search (20); that is, in Algorithm 2 the direction is $d_k = -Pg_k$.

(iii) "SPCG" stands for the spectral gradient method with the nonmonotone line search in Martínez et al. [30], where $M = 10$, $P = 2$.

(iv) "FFR" stands for the feasible modified Fletcher-Reeves conjugate gradient method with the Armijo-type line search in Li et al. [32].

We stop the iteration if the condition $\|Pg_k\| \le \varepsilon$ is satisfied, where $\varepsilon = 10^{-5}$. If the iteration number exceeds $10^5$, we also stop the iteration and count the run as a failure. All of the algorithms are coded in Matlab 7.0 and run on a personal computer with a 2.0 GHz CPU.
5.1. Numerical Comparison of EMPRP and ROSE. We test the performance of the EMPRP and ROSE methods on the following test problems with given initial points. The results are listed in Table 1, where $n$ stands for the dimension of the tested problem and $n_c$ stands for the number of constraints. We report the following results: the CPU time Time (in seconds), the number of iterations Iter, the number of gradient evaluations Geval, and the number of function evaluations Feval.
Problem 1 (HS28 [35]). The function HS28 in [35] is defined as follows:

$$f(x) = (x_1 + x_2)^2 + (x_2 + x_3)^2$$
$$\text{s.t. } x_1 + 2x_2 + 3x_3 - 1 = 0, \qquad (46)$$

with the initial point $x_0 = (-4, 1, 1)$. The optimal solution is $x^* = (0.5, -0.5, 0.5)$ with optimal function value $f(x^*) = 0$.
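As a quick sanity check, HS28 and its reported data can be written down directly; a small Python sketch (ours, assuming the formulation above):

```python
# HS28: f(x) = (x1 + x2)^2 + (x2 + x3)^2,  s.t.  x1 + 2*x2 + 3*x3 = 1.
def f_hs28(x):
    return (x[0] + x[1]) ** 2 + (x[1] + x[2]) ** 2

def feasible(x, tol=1e-12):
    return abs(x[0] + 2 * x[1] + 3 * x[2] - 1) <= tol

x0 = (-4.0, 1.0, 1.0)          # initial point from the paper
x_star = (0.5, -0.5, 0.5)      # reported optimal solution

print(feasible(x0), feasible(x_star), f_hs28(x_star))   # True True 0.0
```

Both the initial point and the reported solution satisfy the constraint, and $f(x^*) = 0$ as stated.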
Problem 2 (HS48 [35]). The function HS48 in [35] is defined as follows:

$$f(x) = (x_1 - 1)^2 + (x_2 - x_3)^2 + (x_4 - x_5)^2$$
$$\text{s.t. } x_1 + x_2 + x_3 + x_4 + x_5 - 5 = 0, \quad x_3 - 2(x_4 + x_5) + 3 = 0, \qquad (47)$$

with the initial point $x_0 = (3, 5, -3, 2, -2)$. The optimal solution is $x^* = (1, 1, 1, 1, 1)$ with optimal function value $f(x^*) = 0$.
Problem 3 (HS49 [35]). The function HS49 in [35] is defined as follows:

$$f(x) = (x_1 - x_2)^2 + (x_3 - 1)^2 + (x_4 - 1)^4 + (x_5 - 1)^6$$
$$\text{s.t. } x_1 + x_2 + x_3 + 4x_4 - 7 = 0, \quad x_3 + 5x_5 - 6 = 0, \qquad (48)$$

with the initial point $x_0 = (10, 7, 2, -3, 0.8)$. The optimal solution is $x^* = (1, 1, 1, 1, 1)$ with optimal function value $f(x^*) = 0$.
Problem 4 (HS50 [35]). The function HS50 in [35] is defined as follows:

$$f(x) = (x_1 - x_2)^2 + (x_2 - x_3)^2 + (x_3 - x_4)^2 + (x_4 - x_5)^2$$
$$\text{s.t. } x_1 + 2x_2 + 3x_3 - 6 = 0, \quad x_2 + 2x_3 + 3x_4 - 6 = 0, \quad x_3 + 2x_4 + 3x_5 - 6 = 0, \qquad (49)$$

with the initial point $x_0 = (35, -31, 11, 5, -5)$. The optimal solution is $x^* = (1, 1, 1, 1, 1)$ with optimal function value $f(x^*) = 0$. Moreover, we extend the dimension of the function HS50 [35] to 10 and 20, with the initial point $x_0 = (35, -31, 11, \ldots)$. The optimal solution is $x^* = (1, 1, \ldots, 1)$ with optimal function value $f(x^*) = 0$.
Problem 5 (HS51 [35]). The function HS51 in [35] is defined as follows:

$$f(x) = (x_1 - x_2)^2 + (x_2 + x_3 - 2)^2 + (x_4 - 1)^2 + (x_5 - 1)^2$$
$$\text{s.t. } x_1 + 3x_2 - 4 = 0, \quad x_3 + x_4 - 2x_5 = 0, \quad x_2 - x_5 = 0, \qquad (50)$$
Table 1: Test results for Problems 1-5 with given initial points.

Name     n    n_c    EMPRP                          ROSE
                     Time     Iter  Geval  Feval    Time     Iter  Geval  Feval
HS28     3    1      0.0156   20    22     21       0.0468   71    72     143
HS48     5    2      0.0156   26    28     27       0.0468   65    66     122
HS49     5    2      0.0156   29    30     30       0.0780   193   194    336
HS50     5    3      0.0156   22    23     23       0.0624   76    77     139
HS51     5    3      0.0156   15    16     16       0.0624   80    81     148
HS52     5    3      0.0156   30    31     31       0.0312   70    71     141
HS50E1   10   8      0.0156   16    17     17       0.0312   63    64     127
HS50E2   20   18     0.0156   15    16     16       0.0936   63    64     127
Table 2: Test results for Problem 6 with given initial points.

k      n      n_c     EMPRP             FFR               SPCG
                      Time      Iter    Time      Iter    Time      Iter
50     99     49      0.0568    73      0.0660    80      0.0256    70
100    199    99      0.1048    103     0.1060    120     0.0756    90
200    399    199     0.0868    108     0.2660    123     0.0904    97
300    599    299     0.2252    121     0.4260    126     0.7504    123
400    799    399     0.3142    121     0.5150    126     0.9804    153
500    999    499     0.4045    138     0.7030    146     1.4320    176
1000   1999   999     2.6043    138     2.1560    146     3.3468    198
2000   3999   1999    8.2608    138     6.8280    146     9.1266    216
3000   5999   2999    22.4075   138     15.1720   146     28.2177   276
4000   7999   3999    40.1268   138     35.0460   146     56.0604   324
5000   9999   4999    105.6252  138     85.0460   146     128.1504  476
with the initial point $x_0 = (2.5, 0.5, 2, -1, 0.5)$. The optimal solution is $x^* = (1, 1, 1, 1, 1)$ with optimal function value $f(x^*) = 0$.
From Table 1, we can see that the EMPRP method performs better than the Rosen gradient projection method in [1], which implies that the EMPRP method can improve the computational efficiency of the Rosen gradient projection method for solving linear equality constrained optimization problems.
5.2. Numerical Comparison of EMPRP, FFR, and SPCG. In this subsection, we test the EMPRP method and compare it with the spectral gradient method (called SPCG) in Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method (called FFR) in Li et al. [32] on the following high-dimensional problems with given initial points. The results are listed in Tables 2, 3, 4, 5, and 6. We report the following results: the CPU time Time (in seconds) and the number of iterations Iter.
Problem 6. Given a positive integer $k$, the function is defined as follows:

$$f(x) = \frac{1}{2}\sum_{i=1}^{k-2}\bigl(x_{k+i+1} - x_{k+i}\bigr)^2$$
$$\text{s.t. } x_{k+i} - x_{i+1} + x_i = i, \quad i = 1, \ldots, k-1, \qquad (51)$$

with the initial point $x_0 = (1, 2, \ldots, k, 2, 3, \ldots, k)$. The optimal function value is $f(x^*) = 0$. This problem comes from Martínez et al. [30].
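The structure of Problem 6 is easy to check programmatically. The sketch below (an illustration of ours, not from the paper) builds the constraint matrix $A$ and right-hand side $b$ for a given $k$ and confirms that the stated initial point is feasible, i.e., $Ax_0 = b$:

```python
def problem6_constraints(k):
    """Build A ((k-1) x (2k-1)) and b for x_{k+i} - x_{i+1} + x_i = i."""
    n = 2 * k - 1
    A = [[0.0] * n for _ in range(k - 1)]
    b = []
    for i in range(1, k):              # constraint index i = 1, ..., k-1
        row = A[i - 1]
        row[k + i - 1] = 1.0           # coefficient of x_{k+i} (0-based column)
        row[i] = -1.0                  # coefficient of -x_{i+1}
        row[i - 1] += 1.0              # coefficient of +x_i
        b.append(float(i))
    return A, b

def x0(k):
    # initial point (1, 2, ..., k, 2, 3, ..., k) from the paper
    return [float(j) for j in range(1, k + 1)] + [float(j) for j in range(2, k + 1)]

k = 50
A, b = problem6_constraints(k)
x = x0(k)
residual = max(abs(sum(a * t for a, t in zip(row, x)) - bi)
               for row, bi in zip(A, b))
```

For $k = 50$ this gives $n = 99$ and $n_c = 49$, matching the first row of Table 2, and the residual is exactly zero: the initial point satisfies $x_{k+i} - x_{i+1} + x_i = (i+1) - (i+1) + i = i$.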
Problem 7. Given a positive integer $n$, the function is defined as follows:

$$f(x) = \sum_{i=1}^{n}\cos\Bigl(2\pi x_i \sin\frac{\pi}{20}\Bigr)$$
$$\text{s.t. } x_i - x_{i+1} = 0.4, \quad i = 1, \ldots, n-1, \qquad (52)$$

with the initial point $x_0 = (1, 0.6, 0.2, \ldots, 1 - 0.4(n-1))$. This problem comes from Asaadi [36] and is called MAD6.
Problem 8. Given a positive integer $k$, the function is defined as follows:

$$f(x) = \frac{1}{2}\sum_{i=1}^{k-2}\bigl(x_{k+i+1} - x_{k+i}\bigr)^4$$
$$\text{s.t. } x_{k+i} - x_{i+1} + x_i = i, \quad i = 1, \ldots, k-1, \qquad (53)$$

with the initial point $x_0 = (1, 2, \ldots, k, 2, 3, \ldots, k)$. The optimal function value is $f(x^*) = 0$.
Table 3: Test results for Problem 7 with given initial points.

n      n_c     EMPRP             FFR               SPCG
               Time      Iter    Time      Iter    Time      Iter
200    99      0.1468    83      0.1679    94      0.1286    79
400    199     0.2608    94      0.2860    103     0.2266    90
600    299     0.6252    121     0.8422    132     0.7504    133
800    399     1.4075    123     1.6510    138     1.4177    149
1000   499     3.280     141     2.7190    146     2.8790    168
2000   999     8.4045    189     4.6202    202     5.1280    190
3000   1499    12.0042   219     9.7030    248     10.7864   260
4000   1999    17.2608   254     13.4255   288     19.1266   316
5000   2499    20.6252   321     19.0940   330     29.1504   383
Table 4: Test results for Problem 8 with given initial points.

k      n      n_c     EMPRP             FFR               SPCG
                      Time      Iter    Time      Iter    Time      Iter
50     99     49      0.0668    78      0.0750    83      0.0286    75
100    199    99      0.0988    113     0.1462    130     0.0826    106
200    399    199     0.1978    123     0.2960    133     0.1864    124
300    599    299     0.3436    128     0.5260    136     0.3504    156
400    799    399     0.4142    134     0.7143    136     0.5804    186
500    999    499     0.6039    152     0.9030    158     0.9331    228
1000   1999   999     2.9065    152     2.9570    158     3.1260    268
2000   3999   1999    8.3408    152     7.8280    158     8.2266    298
3000   5999   2999    19.6086   152     17.4350   158     19.8672   330
4000   7999   3999    46.3268   152     38.1460   158     56.1256   368
5000   9999   4999    75.3972   152     58.3867   158     125.7680  398
Table 5: Test results for Problem 9 with given initial points.

k      n      n_c     EMPRP             FFR               SPCG
                      Time      Iter    Time      Iter    Time      Iter
50     99     49      0.0568    65      0.0640    70      0.0266    62
100    199    99      0.0868    109     0.1362    125     0.0726    103
200    399    199     0.2110    120     0.2862    128     0.2064    124
300    599    299     0.3250    127     0.5062    132     0.4524    160
400    799    399     0.4543    138     0.7456    139     0.6804    192
500    999    499     0.6246    156     0.9268    155     1.0876    258
1000   1999   999     3.9245    156     3.4268    162     3.4560    279
2000   3999   1999    9.3404    156     8.6240    162     8.6280    320
3000   5999   2999    20.6125   156     19.6548   162     20.8656   358
4000   7999   3999    48.2890   156     44.4330   162     57.7680   386
5000   9999   4999    95.4680   156     83.8650   162     128.8760  420
Problem 9. Given a positive integer $k$, the function is defined as follows:

$$f(x) = \sum_{i=1}^{k-2}\Bigl[100\bigl(x_{k+i+1} - x_{k+i}\bigr)^2 + \bigl(1 - x_{k+i}\bigr)^2\Bigr]$$
$$\text{s.t. } x_{k+i} - x_{i+1} + x_i = i, \quad i = 1, \ldots, k-1, \qquad (54)$$

with the initial point $x_0 = (1, 2, \ldots, k, 2, 3, \ldots, k)$. The optimal function value is $f(x^*) = 0$.
Problem 10. Given a positive integer $n$, the function is defined as follows:

$$f(x) = \sum_{i=1}^{n}\cos^2\Bigl(2\pi x_i \sin\frac{\pi}{20}\Bigr)$$
$$\text{s.t. } x_i - x_{i+1} = 0.4, \quad i = 1, \ldots, n-1, \qquad (55)$$

with the initial point $x_0 = (1, 0.6, 0.2, \ldots, 1 - 0.4(n-1))$.
Table 6: Test results for Problem 10 with given initial points.

n      n_c     EMPRP             FFR               SPCG
               Time      Iter    Time      Iter    Time      Iter
200    99      0.1842    88      0.1984    98      0.1488    92
400    199     0.2908    104     0.3260    113     0.2348    100
600    299     0.8256    132     0.9422    142     0.9804    182
800    399     1.6078    136     1.7512    148     2.6177    259
1000   499     2.9801    148     2.8192    156     3.1790    268
2000   999     7.8045    199     5.1202    212     8.1288    320
3000   1499    12.8042   248     9.9035    256     14.7862   360
4000   1999    19.2432   304     14.4258   298     22.1268   386
5000   2499    29.6846   382     20.1932   378     32.1422   393
From Tables 2-6, we can see that the EMPRP method and the FFR method in [32] perform better than the SPCG method in Martínez et al. [30] for solving large-scale linear equality constrained optimization problems. The EMPRP and FFR methods are both first-order methods, whereas the SPCG method in Martínez et al. [30] needs to compute $B_k$ with a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. However, as the EMPRP method also needs to compute a projection, the FFR method in [32] performs better than the EMPRP method when the test problem becomes large.
6. Conclusions

In this paper, we propose a new conjugate gradient projection method for solving the linear equality constrained optimization problem (1), which combines the two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method in Cheng [7] with the Rosen projection method. The proposed method can also be regarded as an extension of the recently developed two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method in Cheng [7]. Under some mild conditions, we show that it is globally convergent with the Armijo-type line search.
Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

The author thanks the anonymous referees for their valuable comments and suggestions, which improved the presentation. This work is supported by NSF of China Grants 11301041, 11371154, 71371065, and 71371195; Ministry of Education Humanities and Social Sciences Project Grant 12YJC790027; a project funded by the China Postdoctoral Science Foundation; and the Natural Science Foundation of Hunan Province.
References
[1] J. B. Rosen, "The gradient projection method for nonlinear programming. Part I. Linear constraints," SIAM Journal on Applied Mathematics, vol. 8, no. 1, pp. 181-217, 1960.
[2] D. Z. Du and X. S. Zhang, "A convergence theorem of Rosen's gradient projection method," Mathematical Programming, vol. 36, no. 2, pp. 135-144, 1986.
[3] D. Du, "Remarks on the convergence of Rosen's gradient projection method," Acta Mathematicae Applicatae Sinica, vol. 3, no. 2, pp. 270-279, 1987.
[4] D. Z. Du and X. S. Zhang, "Global convergence of Rosen's gradient projection method," Mathematical Programming, vol. 44, no. 1, pp. 357-366, 1989.
[5] E. Polak and G. Ribière, "Note sur la convergence de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, vol. 16, pp. 35-43, 1969.
[6] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94-112, 1969.
[7] W. Cheng, "A two-term PRP-based descent method," Numerical Functional Analysis and Optimization, vol. 28, no. 11, pp. 1217-1230, 2007.
[8] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170-192, 2005.
[9] G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523-539, 2007.
[10] N. Andrei, "Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," European Journal of Operational Research, vol. 204, no. 3, pp. 410-420, 2010.
[11] G. Yu, L. Guan, and W. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optimization Methods and Software, vol. 23, no. 2, pp. 275-293, 2008.
[12] G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11-21, 2009.
[13] Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625-1635, 2011.
[14] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310-319, 2013.
[15] Z. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927-936, 2011.
[16] Y.-Y. Chen and S.-Q. Du, "Nonlinear conjugate gradient methods with Wolfe type line search," Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
[17] S. Y. Liu, Y. Y. Huang, and H. W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, Article ID 305643, 12 pages, 2014.
[18] Z. F. Dai, D. H. Li, and F. H. Wen, "Robust conditional value-at-risk optimization for asymmetrically distributed asset returns," Pacific Journal of Optimization, vol. 8, no. 3, pp. 429-445, 2012.
[19] C. Huang, C. Peng, X. Chen, and F. Wen, "Dynamics analysis of a class of delayed economic model," Abstract and Applied Analysis, vol. 2013, Article ID 962738, 12 pages, 2013.
[20] C. Huang, X. Gong, X. Chen, and F. Wen, "Measuring and forecasting volatility in Chinese stock market using HAR-CJ-M model," Abstract and Applied Analysis, vol. 2013, Article ID 143194, 13 pages, 2013.
[21] G. Qin, C. Huang, Y. Xie, and F. Wen, "Asymptotic behavior for third-order quasi-linear differential equations," Advances in Difference Equations, vol. 30, no. 13, pp. 305-312, 2013.
[22] C. Huang, H. Kuang, X. Chen, and F. Wen, "An LMI approach for dynamics of switched cellular neural networks with mixed delays," Abstract and Applied Analysis, vol. 2013, Article ID 870486, 8 pages, 2013.
[23] Q. F. Cui, Z. G. Wang, X. Chen, and F. Wen, "Sufficient conditions for non-Bazilevič functions," Abstract and Applied Analysis, vol. 2013, Article ID 154912, 4 pages, 2013.
[24] F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360-371, 2009.
[25] F. Wen and Z. Liu, "A copula-based correlation measure and its application in Chinese stock market," International Journal of Information Technology & Decision Making, vol. 8, no. 4, pp. 787-801, 2009.
[26] F. Wen, Z. He, and X. Chen, "Investors' risk preference characteristics and conditional skewness," Mathematical Problems in Engineering, vol. 2014, Article ID 814965, 14 pages, 2014.
[27] F. Wen, X. Gong, Y. Chao, and X. Chen, "The effects of prior outcomes on risky choice: evidence from the stock market," Mathematical Problems in Engineering, vol. 2014, Article ID 272518, 8 pages, 2014.
[28] F. Wen, Z. He, X. Gong, and A. Liu, "Investors' risk preference characteristics based on different reference point," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 158386, 9 pages, 2014.
[29] Z. Dai and F. Wen, "Robust CVaR-based portfolio optimization under a general affine data perturbation uncertainty set," Journal of Computational Analysis and Applications, vol. 16, no. 1, pp. 93-103, 2014.
[30] J. M. Martínez, E. A. Pilotta, and M. Raydan, "Spectral gradient methods for linearly constrained optimization," Journal of Optimization Theory and Applications, vol. 125, no. 3, pp. 629-651, 2005.
[31] C. Li and D. H. Li, "An extension of the Fletcher-Reeves method to linear equality constrained optimization problem," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10909-10914, 2013.
[32] C. Li, L. Fang, and X. Cui, "A feasible Fletcher-Reeves method to linear equality constrained optimization problem," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '10), pp. 30-33, December 2010.
[33] L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561-572, 2006.
[34] N. I. M. Gould, M. E. Hribar, and J. Nocedal, "On the solution of equality constrained quadratic programming problems arising in optimization," SIAM Journal on Scientific Computing, vol. 23, no. 4, pp. 1376-1395, 2001.
[35] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Springer, New York, NY, USA, 1981.
[36] J. Asaadi, "A computational comparison of some non-linear programs," Mathematical Programming, vol. 4, no. 1, pp. 144-154, 1973.
It follows from (10) and (13) that

$$Ad_k = A\Bigl(-Pg_k + \beta_k^{PRP} d_{k-1} - \beta_k^{PRP}\frac{g_k^T d_{k-1}}{\|Pg_k\|^2}Pg_k\Bigr) = \beta_k^{PRP} Ad_{k-1}. \qquad (14)$$

When $k = 0$, we have

$$Ad_0 = -APg_0 = 0. \qquad (15)$$

It is easy to get from (14) and (15) that $Ad_k = 0$ is satisfied for all $k$; that is, $d_k$ is a feasible direction.
In the remainder of this paper, we always assume that $f(x)$ satisfies the following assumptions.

Assumption A. (i) The level set

$$\Omega = \{x \in R^n \mid f(x) \le f(x_0)\} \qquad (16)$$

is bounded.

(ii) The function $f: R^n \to R$ is continuously differentiable and bounded from below. Its gradient is Lipschitz continuous on an open ball $N$ containing $\Omega$; that is, there is a constant $L > 0$ such that

$$\|g(x) - g(y)\| \le L\|x - y\|, \quad \forall x, y \in N. \qquad (17)$$

Since $\{f(x_k)\}$ is decreasing, it is clear that the sequence $\{x_k\}$ generated by Algorithm 2 is contained in $\Omega$. In addition, we get from Assumption A that there is a constant $\gamma > 0$ such that

$$\|g(x)\| \le \gamma, \quad \forall x \in \Omega. \qquad (18)$$

Since the matrix $P$ is a projection matrix, it is reasonable to assume that there is a constant $C > 0$ such that

$$\|Pg(x)\| \le C, \quad \forall x \in \Omega. \qquad (19)$$
We state the steps of the algorithm as follows.

Algorithm 2 (EMPRP method with Armijo-type line search)

Step 0. Choose an initial point $x_0 \in D$ and $\varepsilon > 0$. Let $k = 0$.

Step 1. Compute $d_k$ by (10), where $\beta_k$ is computed by (11).

Step 2. If $\|Pg_k\| \le \varepsilon$, stop; else go to Step 3.

Step 3. Given $\delta \in (0, 1/2)$ and $\rho \in (0, 1)$, determine a stepsize $\alpha_k = \max\{\rho^j \mid j = 0, 1, 2, \ldots\}$ satisfying

$$f(x_k + \alpha_k d_k) \le f(x_k) - \delta\alpha_k^2\|d_k\|^2. \qquad (20)$$

Step 4. Let $x_{k+1} = x_k + \alpha_k d_k$ and $k = k + 1$. Go to Step 1.
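The steps above can be sketched in Python (the paper's own experiments use Matlab). The direction update below implements $d_k = -Pg_k + \beta_k d_{k-1} - \beta_k (g_k^T d_{k-1}/\|Pg_k\|^2)Pg_k$ as in (14), with a PRP-type $\beta_k$ computed from projected gradients; since the exact formulas (10)-(11) appear earlier in the paper, this $\beta_k$ is our reading of the construction and should be treated as an assumption. HS28 (Problem 1 of Section 5) serves as the test case:

```python
# EMPRP sketch on HS28: min (x1+x2)^2 + (x2+x3)^2  s.t.  x1 + 2*x2 + 3*x3 = 1.
# The PRP-type beta below (projected gradients) is an assumed reading of (11).
a = (1.0, 2.0, 3.0)                        # constraint row A (m = 1 here)
aa = sum(t * t for t in a)                 # A A^T, a scalar for one constraint

def f(x):
    return (x[0] + x[1]) ** 2 + (x[1] + x[2]) ** 2

def grad(x):
    u, v = 2.0 * (x[0] + x[1]), 2.0 * (x[1] + x[2])
    return [u, u + v, v]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def proj(g):                               # P g = g - A^T (A A^T)^{-1} A g
    s = dot(a, g) / aa
    return [gi - s * ai for gi, ai in zip(g, a)]

def emprp(x, eps=1e-8, delta=0.02, rho=0.3, max_iter=5000):
    pg = proj(grad(x))
    d = [-t for t in pg]                   # d_0 = -P g_0
    for _ in range(max_iter):
        if dot(pg, pg) ** 0.5 <= eps:      # Step 2: stop on small ||P g_k||
            break
        # Step 3: Armijo-type line search (20)
        alpha, fx, nd2 = 1.0, f(x), dot(d, d)
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fx - delta * alpha ** 2 * nd2:
            alpha *= rho
        x = [xi + alpha * di for xi, di in zip(x, d)]   # Step 4
        g_new = grad(x)
        pg_new = proj(g_new)
        if dot(pg_new, pg_new) ** 0.5 <= eps:
            break
        y = [p - q for p, q in zip(pg_new, pg)]
        beta = dot(pg_new, y) / dot(pg, pg)             # assumed PRP-type beta
        # d = -Pg + beta*d - beta*(g^T d / ||Pg||^2)*Pg, grouped as in (14)
        coef = 1.0 + beta * dot(g_new, d) / dot(pg_new, pg_new)
        d = [beta * di - coef * p for di, p in zip(d, pg_new)]
        pg = pg_new
    return x

x = emprp([-4.0, 1.0, 1.0])                # paper reports x* = (0.5, -0.5, 0.5)
```

By construction $Ad_k = 0$ at every step, so all iterates remain feasible.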
3. Global Convergence

In what follows, we establish the global convergence theorem of the EMPRP method for general nonlinear objective functions. We first give some important lemmas for the EMPRP method.

Lemma 3. Suppose that $x_k \in D$, $P = I - A^T(AA^T)^{-1}A$, and $d_k$ is defined by (10) and (11). Then we have $(Pg_k)^T d_k = g_k^T d_k = -\|Pg_k\|^2$ and $(Pg_{k+1})^T d_k = g_{k+1}^T d_k$.

Proof. By the definition of $d_k$, we have

$$(Pg_k)^T d_k = g_k^T\bigl(I - A^T(AA^T)^{-1}A\bigr)d_k = g_k^T d_k - g_k^T A^T(AA^T)^{-1}Ad_k = g_k^T d_k. \qquad (21)$$

It follows from (12) and (21) that

$$(Pg_k)^T d_k = g_k^T d_k = -\|Pg_k\|^2. \qquad (22)$$

On the other hand, we also have

$$(Pg_{k+1})^T d_k = g_{k+1}^T\bigl(I - A^T(AA^T)^{-1}A\bigr)d_k = g_{k+1}^T d_k - g_{k+1}^T A^T(AA^T)^{-1}Ad_k = g_{k+1}^T d_k. \qquad (23)$$
Lemma 4. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. If there exists a constant $\varepsilon > 0$ such that

$$\|Pg_k\| \ge \varepsilon, \quad \forall k, \qquad (24)$$

then there exists a constant $M > 0$ such that

$$\|d_k\| \le M, \quad \forall k. \qquad (25)$$

Proof. It follows from (20) that the function value sequence $\{f(x_k)\}$ is decreasing. We also have from (20) that

$$\sum_{k=1}^{\infty} \delta\alpha_k^2\|d_k\|^2 < +\infty, \qquad (26)$$

as $f$ is bounded from below. In particular, we have

$$\lim_{k\to\infty} \alpha_k\|d_k\| = 0. \qquad (27)$$

By the definition of $d_k$, we get from (17), (19), and (24) that

$$\|d_k\| \le \|Pg_k\| + \frac{\|Pg_k\|\,\|y_{k-1}\|\,\|d_{k-1}\|}{\|Pg_{k-1}\|^2} + \frac{\|Pg_k\|\,\|y_{k-1}\|}{\|Pg_{k-1}\|^2}\cdot\frac{\|g_k\|\,\|d_{k-1}\|}{\|Pg_k\|^2}\|Pg_k\|$$
$$\le C + \frac{CL\alpha_{k-1}\|d_{k-1}\|}{\varepsilon^2}\|d_{k-1}\| + \frac{\gamma L\alpha_{k-1}\|d_{k-1}\|}{\varepsilon^2}\|d_{k-1}\| \le C + \Bigl(\frac{CL}{\varepsilon^2} + \frac{\gamma L}{\varepsilon^2}\Bigr)\alpha_{k-1}\|d_{k-1}\|^2. \qquad (28)$$

Since $\alpha_{k-1}\|d_{k-1}\| \to 0$ as $k \to \infty$, there exist a constant $r \in (0, 1)$ and an integer $k_0$ such that the following inequality holds for all $k \ge k_0$:

$$\Bigl(\frac{CL}{\varepsilon^2} + \frac{\gamma L}{\varepsilon^2}\Bigr)\alpha_{k-1}\|d_{k-1}\| \le r. \qquad (29)$$

Hence, we have for any $k > k_0$

$$\|d_k\| \le C + r\|d_{k-1}\| \le C\bigl(1 + r + r^2 + \cdots + r^{k-k_0-1}\bigr) + r^{k-k_0}\|d_{k_0}\| \le \frac{C}{1-r} + \|d_{k_0}\|. \qquad (30)$$

Letting $M = \max\{\|d_1\|, \ldots, \|d_{k_0}\|, C/(1-r) + \|d_{k_0}\|\}$, we get (25).
We now establish the global convergence theorem of the EMPRP method for general nonlinear objective functions.

Theorem 5. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. Then we have

$$\liminf_{k\to\infty} \|Pg_k\| = 0. \qquad (31)$$

Proof. Suppose, on the contrary, that (31) does not hold. Then there exists a constant $\varepsilon > 0$ such that

$$\|Pg_k\| > \varepsilon, \quad \forall k \ge 0. \qquad (32)$$

We now prove (31) by considering the following two cases.

Case (i): $\liminf_{k\to\infty}\alpha_k > 0$. We get from (22) and (27) that $\liminf_{k\to\infty}\|Pg_k\| = 0$. This contradicts assumption (32).

Case (ii): $\liminf_{k\to\infty}\alpha_k = 0$. That is, there is an infinite index set $K$ such that

$$\lim_{k\in K,\,k\to\infty} \alpha_k = 0. \qquad (33)$$

When $k \in K$ is sufficiently large, by the line search condition, $\rho^{-1}\alpha_k$ does not satisfy inequality (20). This means

$$f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) > -\delta\rho^{-2}\alpha_k^2\|d_k\|^2. \qquad (34)$$

By the mean-value theorem and inequality (17), there is a $t_k \in (0, 1)$ such that $x_k + t_k\rho^{-1}\alpha_k d_k \in N$ and

$$f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) = \rho^{-1}\alpha_k g\bigl(x_k + t_k\rho^{-1}\alpha_k d_k\bigr)^T d_k$$
$$= \rho^{-1}\alpha_k g_k^T d_k + \rho^{-1}\alpha_k\bigl(g(x_k + t_k\rho^{-1}\alpha_k d_k) - g_k\bigr)^T d_k$$
$$\le \rho^{-1}\alpha_k g_k^T d_k + \rho^{-1}\alpha_k\bigl\|g(x_k + t_k\rho^{-1}\alpha_k d_k) - g_k\bigr\|\,\|d_k\| \le \rho^{-1}\alpha_k g_k^T d_k + L\rho^{-2}\alpha_k^2\|d_k\|^2. \qquad (35)$$

Substituting the last inequality into (34), for all $k \in K$ sufficiently large, we have from (12) that

$$\|Pg_k\|^2 = -g_k^T d_k \le \rho^{-1}(L + \delta)\alpha_k\|d_k\|^2. \qquad (36)$$

Since $\{d_k\}$ is bounded and $\lim_{k\in K,\,k\to\infty}\alpha_k = 0$, the last inequality implies

$$\lim_{k\in K,\,k\to\infty} \|Pg_k\| = 0. \qquad (37)$$

This also yields a contradiction. The proof is then complete.
4 Improvement for Algorithm 2
In this section we propose techniques for improving theefficiency of Algorithm 2 in practical computation which isabout computing projections and the stepsize of the Armijo-type line search
41 Computing Projection In this paper as in Gould et al[34] instead of computing a basis for null space of matrix119860 we choose to work directly with the matrix of constraintgradients computing projections by normal equations As thecomputation of the projection is a key step in the proposedmethod following Gould et al [34] this projection can becomputed in an alternative way
Let

    g⁺ = −P g,   P = I − A^T (A A^T)^{−1} A.   (38)

We can express this as

    g⁺ = −g + A^T v,   (39)

where v is the solution of

    A A^T v = A g.   (40)

Note that (40) is a normal equation. Since A is an m × n matrix of rank m (m ≤ n), A A^T is a symmetric positive definite matrix. We use the Doolittle (LU) factorization of A A^T to solve (40).

Because the matrix A is constant, the factorization of A A^T needs to be carried out only once, at the beginning of the iterative process. Using a Doolittle factorization of A A^T, (40) can be solved in the following form:

    L (U^T v) = A g  ⟺  L y = A g,  U^T v = y,   (41)

where L, U are the Doolittle factors of A A^T.
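As an illustration (not part of the original paper), the projection scheme of (39)–(41) can be sketched in NumPy/SciPy: factor A A^T once, then each projection costs only two triangular solves and two matrix-vector products. The function name `make_projector` is an assumption for this sketch; SciPy's general LU routines stand in for the Doolittle factorization.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def make_projector(A):
    """Factor A A^T once; A is m x n with full row rank m <= n."""
    lu_piv = lu_factor(A @ A.T)          # LU factorization, computed a single time
    def project(g):
        # Solve A A^T v = A g, then P g = g - A^T v  (cf. (39)-(41))
        v = lu_solve(lu_piv, A @ g)
        return g - A.T @ v
    return project

# Constraint of Problem 1 (HS28): x1 + 2x2 + 3x3 = 1
A = np.array([[1.0, 2.0, 3.0]])
project = make_projector(A)
g = np.array([1.0, 1.0, 1.0])
Pg = project(g)
print((A @ Pg)[0])   # ~0: the projected gradient lies in the null space of A
```

Reusing the stored factors across iterations is exactly why the paper stresses that the factorization "needs to be carried out only once."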
4.2 Computing Stepsize. A drawback of the Armijo line search is the choice of the initial stepsize α_k. If α_k is too large, the procedure needs many more function evaluations; if α_k is too small, the efficiency of the related algorithm decreases. Therefore we should choose an adequate initial stepsize α_k at each iteration. In what follows we propose a way to generate the initial stepsize.

We first estimate the stepsize determined by the exact line search. Suppose for the moment that f is twice continuously differentiable. We denote by G(x) the Hessian of f at x and abbreviate G(x_k) as G_k. Notice that the exact line search stepsize α_k satisfies

    −g_k^T d_k = (g(x_k + α_k d_k) − g_k)^T d_k ≈ α_k d_k^T G_k d_k.   (42)

This shows that the scalar δ_k ≜ −g_k^T d_k / (d_k^T G_k d_k) is an estimate of α_k. To avoid the computation of the second derivative, we further estimate α_k by letting

    γ_k = −ϵ_k g_k^T d_k / (d_k^T (g(x_k + ϵ_k d_k) − g_k)),   (43)
Abstract and Applied Analysis 5
where the positive sequence {ϵ_k} satisfies ϵ_k → 0 as k → ∞. Let the initial stepsize of the Armijo line search be an approximation to

    δ_k ≜ −g_k^T d_k / (d_k^T G_k d_k) ≈ −ϵ_k g_k^T d_k / (d_k^T (g(x_k + ϵ_k d_k) − g_k)) ≜ γ_k.   (44)

It is not difficult to see that if ϵ_k and ‖d_k‖ are sufficiently small, then δ_k and γ_k are good estimates of α_k.
So, to improve the efficiency of the EMPRP method in practical computation, we utilize the following line search process.

Line Search Process. If the inequality

    f(x_k + |γ_k| d_k) ≤ f(x_k) − δ |γ_k|² ‖d_k‖²   (45)

holds, then we let α_k = |γ_k|. Otherwise, we let α_k be the largest scalar in the set {|γ_k| ρ^i, i = 0, 1, …} such that inequality (45) is satisfied.
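A minimal sketch of this stepsize strategy (an illustration, not the paper's code): the finite-difference quotient (43) supplies the starting trial step, which is then reduced by backtracking until the Armijo-type condition of the form f(x_k + α d_k) ≤ f(x_k) − δ α² ‖d_k‖² (cf. (34)) holds. The function name `armijo_with_estimated_step` is assumed; the constants δ = 0.02, ρ = 0.3, ϵ_k = 10⁻⁶ are the values used in the paper's experiments.

```python
import numpy as np

def armijo_with_estimated_step(f, grad, x, d, eps_k=1e-6, delta=0.02, rho=0.3, max_back=50):
    """Armijo-type backtracking started from the stepsize estimate gamma_k of (43)."""
    g = grad(x)
    gd = g @ d
    # finite-difference curvature: d^T (g(x + eps d) - g) approximates eps * d^T G d
    denom = d @ (grad(x + eps_k * d) - g)
    gamma = abs(-eps_k * gd / denom) if denom != 0 else 1.0
    fx = f(x)
    alpha = gamma
    for _ in range(max_back):
        # Armijo-type condition: f(x + a d) <= f(x) - delta * a^2 * ||d||^2
        if f(x + alpha * d) <= fx - delta * alpha**2 * (d @ d):
            return alpha
        alpha *= rho           # backtrack: next candidate is gamma * rho^i
    return alpha

# usage: steepest-descent direction on a simple quadratic, where G = I and the
# exact step along -g equals 1, so gamma recovers it
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x0 = np.array([1.0, -2.0])
alpha = armijo_with_estimated_step(f, grad, x0, -grad(x0))   # alpha is approximately 1
```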
5 Numerical Experiments
This section reports some numerical experiments. First, we test the EMPRP method and compare it with the Rosen gradient projection method in [1] on low dimensional problems. Second, we test the EMPRP method and compare it with the spectral gradient method in Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method in Li et al. [32] on large dimensional problems. In the line search process we set ϵ_k = 10^{−6}, ρ = 0.3, δ = 0.02.
The methods in the tables have the following meanings
(i) "EMPRP" stands for the EMPRP method with the Armijo-type line search (20).
(ii) "ROSE" stands for the Rosen gradient projection method in [1] with the Armijo-type line search (20); that is, in Algorithm 2 the direction is d_k = −P g_k.
(iii) "SPCG" stands for the spectral gradient method with the nonmonotone line search in Martínez et al. [30], where M = 10, P = 2.
(iv) "FFR" stands for the feasible modified Fletcher-Reeves conjugate gradient method with the Armijo-type line search in Li et al. [32].
We stop the iteration if the condition ‖P g_k‖ ≤ ε is satisfied, where ε = 10^{−5}. If the iteration number exceeds 10^5, we also stop the iteration and declare a failure. All of the algorithms are coded in Matlab 7.0 and run on a personal computer with a 2.0 GHz CPU processor.
5.1 Numerical Comparison of EMPRP and ROSE. We test the performance of the EMPRP and ROSE methods on the following test problems with given initial points. The results are listed in Table 1, where n stands for the dimension of the tested problem and n_c stands for the number of constraints. We report the following results: the CPU time Time (in seconds), the number of iterations Iter, the number of gradient evaluations Geval, and the number of function evaluations Feval.
Problem 1 (HS28 [35]). The function HS28 in [35] is defined as follows:

    f(x) = (x_1 + x_2)² + (x_2 + x_3)²
    s.t. x_1 + 2x_2 + 3x_3 − 1 = 0,   (46)

with the initial point x_0 = (−4, 1, 1). The optimal solution is x* = (0.5, −0.5, 0.5) with optimal function value f(x*) = 0.
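As a quick sanity check (an illustration added here, not part of the paper), the stated optimum of Problem 1 can be verified numerically: it is feasible for the constraint and attains the stated objective value.

```python
import numpy as np

# Problem 1 (HS28): f(x) = (x1 + x2)^2 + (x2 + x3)^2,  s.t. x1 + 2x2 + 3x3 = 1
f = lambda x: (x[0] + x[1])**2 + (x[1] + x[2])**2
A = np.array([[1.0, 2.0, 3.0]])
b = np.array([1.0])

x_star = np.array([0.5, -0.5, 0.5])
print(A @ x_star - b)   # [0.] : x* satisfies the equality constraint
print(f(x_star))        # 0.0  : the stated optimal function value
```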
Problem 2 (HS48 [35]). The function HS48 in [35] is defined as follows:

    f(x) = (x_1 − 1)² + (x_2 − x_3)² + (x_4 − x_5)²
    s.t. x_1 + x_2 + x_3 + x_4 + x_5 − 5 = 0,
         x_3 − 2(x_4 + x_5) + 3 = 0,   (47)

with the initial point x_0 = (3, 5, −3, 2, −2). The optimal solution is x* = (1, 1, 1, 1, 1) with optimal function value f(x*) = 0.
Problem 3 (HS49 [35]). The function HS49 in [35] is defined as follows:

    f(x) = (x_1 − x_2)² + (x_3 − 1)² + (x_4 − 1)⁴ + (x_5 − 1)⁶
    s.t. x_1 + x_2 + x_3 + 4x_4 − 7 = 0,
         x_3 + 5x_5 − 6 = 0,   (48)

with the initial point x_0 = (10, 7, 2, −3, 0.8). The optimal solution is x* = (1, 1, 1, 1, 1) with optimal function value f(x*) = 0.
Problem 4 (HS50 [35]). The function HS50 in [35] is defined as follows:

    f(x) = (x_1 − x_2)² + (x_2 − x_3)² + (x_3 − x_4)² + (x_4 − x_5)²
    s.t. x_1 + 2x_2 + 3x_3 − 6 = 0,
         x_2 + 2x_3 + 3x_4 − 6 = 0,
         x_3 + 2x_4 + 3x_5 − 6 = 0,   (49)

with the initial point x_0 = (35, −31, 11, 5, −5). The optimal solution is x* = (1, 1, 1, 1, 1) with optimal function value f(x*) = 0. Moreover, we extend the dimension of function HS50 [35] to 10 and 20 (HS50E1 and HS50E2 in Table 1), with the initial point x_0 = (35, −31, 11, …). The optimal solution is x* = (1, 1, …, 1) with optimal function value f(x*) = 0.
Problem 5 (HS51 [35]). The function HS51 in [35] is defined as follows:

    f(x) = (x_1 − x_2)² + (x_2 + x_3 − 2)² + (x_4 − 1)² + (x_5 − 1)²
    s.t. x_1 + 3x_2 − 4 = 0,
         x_3 + x_4 − 2x_5 = 0,
         x_2 − x_5 = 0,   (50)
Table 1: Test results for Problems 1–5 with given initial points.

Name     n   n_c | EMPRP: Time   Iter  Geval  Feval | ROSE: Time   Iter  Geval  Feval
HS28     3   1   | 0.0156   20   22    21   | 0.0468   71   72    143
HS48     5   2   | 0.0156   26   28    27   | 0.0468   65   66    122
HS49     5   2   | 0.0156   29   30    30   | 0.0780  193  194    336
HS50     5   3   | 0.0156   22   23    23   | 0.0624   76   77    139
HS51     5   3   | 0.0156   15   16    16   | 0.0624   80   81    148
HS52     5   3   | 0.0156   30   31    31   | 0.0312   70   71    141
HS50E1  10   8   | 0.0156   16   17    17   | 0.0312   63   64    127
HS50E2  20  18   | 0.0156   15   16    16   | 0.0936   63   64    127
Table 2: Test results for Problem 6 with given initial points.

k      n     n_c  | EMPRP: Time   Iter | FFR: Time   Iter | SPCG: Time   Iter
50     99    49   | 0.0568    73 | 0.0660    80 | 0.0256    70
100    199   99   | 0.1048   103 | 0.1060   120 | 0.0756    90
200    399   199  | 0.0868   108 | 0.2660   123 | 0.0904    97
300    599   299  | 0.2252   121 | 0.4260   126 | 0.7504   123
400    799   399  | 0.3142   121 | 0.5150   126 | 0.9804   153
500    999   499  | 0.4045   138 | 0.7030   146 | 1.4320   176
1000   1999  999  | 2.6043   138 | 2.1560   146 | 3.3468   198
2000   3999  1999 | 8.2608   138 | 6.8280   146 | 9.1266   216
3000   5999  2999 | 22.4075  138 | 15.1720  146 | 28.2177  276
4000   7999  3999 | 40.1268  138 | 35.0460  146 | 56.0604  324
5000   9999  4999 | 105.6252 138 | 85.0460  146 | 128.1504 476
with the initial point x_0 = (2.5, 0.5, 2, −1, 0.5). The optimal solution is x* = (1, 1, 1, 1, 1) with optimal function value f(x*) = 0.

From Table 1 we can see that the EMPRP method performs better than the Rosen gradient projection method in [1], which implies that the EMPRP method can improve the computational efficiency of the Rosen gradient projection method for solving linear equality constrained optimization problems.
5.2 Numerical Comparison of EMPRP, FFR, and SPCG. In this subsection we test the EMPRP method and compare it with the spectral gradient method (called SPCG) in Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method (called FFR) in Li et al. [32] on the following large dimensional problems with given initial points. The results are listed in Tables 2, 3, 4, 5, and 6. We report the following results: the CPU time Time (in seconds) and the number of iterations Iter.
Problem 6. Given a positive integer k, the function is defined as follows:

    f(x) = (1/2) Σ_{i=1}^{k−2} (x_{k+i+1} − x_{k+i})²
    s.t. x_{k+i} − x_{i+1} + x_i = i,  i = 1, …, k − 1,   (51)

with the initial point x_0 = (1, 2, …, k, 2, 3, …, k). The optimal function value is f(x*) = 0. This problem comes from Martínez et al. [30].
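For concreteness (an illustration added here, not from the paper), Problem 6's constraint matrix A and right-hand side b can be assembled for any k; the helper name `problem6` is an assumption of this sketch. The given initial point is feasible, as the check below confirms.

```python
import numpy as np

def problem6(k):
    """Objective and constraint data for Problem 6: n = 2k - 1 variables, k - 1 constraints."""
    n = 2 * k - 1
    A = np.zeros((k - 1, n))
    b = np.zeros(k - 1)
    for i in range(1, k):              # constraint i: x_{k+i} - x_{i+1} + x_i = i
        A[i - 1, k + i - 1] = 1.0      # x_{k+i}  (0-based index k+i-1)
        A[i - 1, i] = -1.0             # x_{i+1}
        A[i - 1, i - 1] = 1.0          # x_i
        b[i - 1] = i
    def f(x):                          # 0.5 * sum_{i=1}^{k-2} (x_{k+i+1} - x_{k+i})^2
        tail = x[k:]                   # (x_{k+1}, ..., x_{2k-1})
        return 0.5 * np.sum((tail[1:] - tail[:-1]) ** 2)
    x0 = np.concatenate([np.arange(1.0, k + 1), np.arange(2.0, k + 1)])
    return f, A, b, x0

f, A, b, x0 = problem6(5)
print(np.allclose(A @ x0, b))   # True: the given initial point is feasible
```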
Problem 7. Given a positive integer n, the function is defined as follows:

    f(x) = Σ_{i=1}^{n} cos(2π x_i · sin(π/20))
    s.t. x_i − x_{i+1} = 0.4,  i = 1, …, n − 1,   (52)

with the initial point x_0 = (1, 0.6, 0.2, …, 1 − 0.4(n − 1)). This problem comes from Asaadi [36] and is called MAD6.
Problem 8. Given a positive integer k, the function is defined as follows:

    f(x) = (1/2) Σ_{i=1}^{k−2} (x_{k+i+1} − x_{k+i})⁴
    s.t. x_{k+i} − x_{i+1} + x_i = i,  i = 1, …, k − 1,   (53)

with the initial point x_0 = (1, 2, …, k, 2, 3, …, k). The optimal function value is f(x*) = 0.
Table 3: Test results for Problem 7 with given initial points.

n      n_c  | EMPRP: Time   Iter | FFR: Time   Iter | SPCG: Time   Iter
200    99   | 0.1468    83 | 0.1679    94 | 0.1286    79
400    199  | 0.2608    94 | 0.2860   103 | 0.2266    90
600    299  | 0.6252   121 | 0.8422   132 | 0.7504   133
800    399  | 1.4075   123 | 1.6510   138 | 1.4177   149
1000   499  | 3.280    141 | 2.7190   146 | 2.8790   168
2000   999  | 8.4045   189 | 4.6202   202 | 5.1280   190
3000   1499 | 12.0042  219 | 9.7030   248 | 10.7864  260
4000   1999 | 17.2608  254 | 13.4255  288 | 19.1266  316
5000   2499 | 20.6252  321 | 19.0940  330 | 29.1504  383
Table 4: Test results for Problem 8 with given initial points.

k      n     n_c  | EMPRP: Time   Iter | FFR: Time   Iter | SPCG: Time   Iter
50     99    49   | 0.0668    78 | 0.0750    83 | 0.0286    75
100    199   99   | 0.0988   113 | 0.1462   130 | 0.0826   106
200    399   199  | 0.1978   123 | 0.2960   133 | 0.1864   124
300    599   299  | 0.3436   128 | 0.5260   136 | 0.3504   156
400    799   399  | 0.4142   134 | 0.7143   136 | 0.5804   186
500    999   499  | 0.6039   152 | 0.9030   158 | 0.9331   228
1000   1999  999  | 2.9065   152 | 2.9570   158 | 3.1260   268
2000   3999  1999 | 8.3408   152 | 7.8280   158 | 8.2266   298
3000   5999  2999 | 19.6086  152 | 17.4350  158 | 19.8672  330
4000   7999  3999 | 46.3268  152 | 38.1460  158 | 56.1256  368
5000   9999  4999 | 75.3972  152 | 58.3867  158 | 125.7680 398
Table 5: Test results for Problem 9 with given initial points.

k      n     n_c  | EMPRP: Time   Iter | FFR: Time   Iter | SPCG: Time   Iter
50     99    49   | 0.0568    65 | 0.0640    70 | 0.0266    62
100    199   99   | 0.0868   109 | 0.1362   125 | 0.0726   103
200    399   199  | 0.2110   120 | 0.2862   128 | 0.2064   124
300    599   299  | 0.3250   127 | 0.5062   132 | 0.4524   160
400    799   399  | 0.4543   138 | 0.7456   139 | 0.6804   192
500    999   499  | 0.6246   156 | 0.9268   155 | 1.0876   258
1000   1999  999  | 3.9245   156 | 3.4268   162 | 3.4560   279
2000   3999  1999 | 9.3404   156 | 8.6240   162 | 8.6280   320
3000   5999  2999 | 20.6125  156 | 19.6548  162 | 20.8656  358
4000   7999  3999 | 48.2890  156 | 44.4330  162 | 57.7680  386
5000   9999  4999 | 95.4680  156 | 83.8650  162 | 128.8760 420
Problem 9. Given a positive integer k, the function is defined as follows:

    f(x) = Σ_{i=1}^{k−2} [100 (x_{k+i+1} − x_{k+i})² + (1 − x_{k+i})²]
    s.t. x_{k+i} − x_{i+1} + x_i = i,  i = 1, …, k − 1,   (54)

with the initial point x_0 = (1, 2, …, k, 2, 3, …, k). The optimal function value is f(x*) = 0.
Problem 10. Given a positive integer n, the function is defined as follows:

    f(x) = Σ_{i=1}^{n} cos²(2π x_i · sin(π/20))
    s.t. x_i − x_{i+1} = 0.4,  i = 1, …, n − 1,   (55)

with the initial point x_0 = (1, 0.6, 0.2, …, 1 − 0.4(n − 1)).
Table 6: Test results for Problem 10 with given initial points.

n      n_c  | EMPRP: Time   Iter | FFR: Time   Iter | SPCG: Time   Iter
200    99   | 0.1842    88 | 0.1984    98 | 0.1488    92
400    199  | 0.2908   104 | 0.3260   113 | 0.2348   100
600    299  | 0.8256   132 | 0.9422   142 | 0.9804   182
800    399  | 1.6078   136 | 1.7512   148 | 2.6177   259
1000   499  | 2.9801   148 | 2.8192   156 | 3.1790   268
2000   999  | 7.8045   199 | 5.1202   212 | 8.1288   320
3000   1499 | 12.8042  248 | 9.9035   256 | 14.7862  360
4000   1999 | 19.2432  304 | 14.4258  298 | 22.1268  386
5000   2499 | 29.6846  382 | 20.1932  378 | 32.1422  393
From Tables 2–6 we can see that the EMPRP method and the FFR method in [32] perform better than the SPCG method in Martínez et al. [30] for solving large-scale linear equality constrained optimization problems, as the EMPRP and FFR methods are both first order methods, whereas the SPCG method in Martínez et al. [30] needs to compute B_k with a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. However, as the EMPRP method also needs to compute a projection, the FFR method in [32] performs better than the EMPRP method when the test problem becomes large.
6 Conclusions
In this paper we propose a new conjugate gradient projection method for solving the linear equality constrained optimization problem (1), which combines the two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method in Cheng [7] with the Rosen projection method. The proposed method can also be regarded as an extension of the recently developed two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method in Cheng [7]. Under some mild conditions we show that it is globally convergent with the Armijo-type line search.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The author thanks the anonymous referees for their valuable comments and suggestions that improved the presentation. This work is supported by the NSF of China (Grants 11301041, 11371154, 71371065, and 71371195), the Ministry of Education Humanities and Social Sciences Project (Grant 12YJC790027), a project funded by the China Postdoctoral Science Foundation, and the Natural Science Foundation of Hunan Province.
References
[1] J. B. Rosen, "The gradient projection method for nonlinear programming. Part I. Linear constraints," SIAM Journal on Applied Mathematics, vol. 8, no. 1, pp. 181-217, 1960.
[2] D. Z. Du and X. S. Zhang, "A convergence theorem of Rosen's gradient projection method," Mathematical Programming, vol. 36, no. 2, pp. 135-144, 1986.
[3] D. Du, "Remarks on the convergence of Rosen's gradient projection method," Acta Mathematicae Applicatae Sinica, vol. 3, no. 2, pp. 270-279, 1987.
[4] D. Z. Du and X. S. Zhang, "Global convergence of Rosen's gradient projection method," Mathematical Programming, vol. 44, no. 1, pp. 357-366, 1989.
[5] E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, vol. 16, pp. 35-43, 1969.
[6] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94-112, 1969.
[7] W. Cheng, "A two-term PRP-based descent method," Numerical Functional Analysis and Optimization, vol. 28, no. 11, pp. 1217-1230, 2007.
[8] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170-192, 2005.
[9] G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523-539, 2007.
[10] N. Andrei, "Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," European Journal of Operational Research, vol. 204, no. 3, pp. 410-420, 2010.
[11] G. Yu, L. Guan, and W. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optimization Methods and Software, vol. 23, no. 2, pp. 275-293, 2008.
[12] G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11-21, 2009.
[13] Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625-1635, 2011.
[14] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310-319, 2013.
[15] Z. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927-936, 2011.
[16] Y.-Y. Chen and S.-Q. Du, "Nonlinear conjugate gradient methods with Wolfe type line search," Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
[17] S. Y. Liu, Y. Y. Huang, and H. W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, Article ID 305643, 12 pages, 2014.
[18] Z. F. Dai, D. H. Li, and F. H. Wen, "Robust conditional value-at-risk optimization for asymmetrically distributed asset returns," Pacific Journal of Optimization, vol. 8, no. 3, pp. 429-445, 2012.
[19] C. Huang, C. Peng, X. Chen, and F. Wen, "Dynamics analysis of a class of delayed economic model," Abstract and Applied Analysis, vol. 2013, Article ID 962738, 12 pages, 2013.
[20] C. Huang, X. Gong, X. Chen, and F. Wen, "Measuring and forecasting volatility in Chinese stock market using HAR-CJ-M model," Abstract and Applied Analysis, vol. 2013, Article ID 143194, 13 pages, 2013.
[21] G. Qin, C. Huang, Y. Xie, and F. Wen, "Asymptotic behavior for third-order quasi-linear differential equations," Advances in Difference Equations, vol. 30, no. 13, pp. 305-312, 2013.
[22] C. Huang, H. Kuang, X. Chen, and F. Wen, "An LMI approach for dynamics of switched cellular neural networks with mixed delays," Abstract and Applied Analysis, vol. 2013, Article ID 870486, 8 pages, 2013.
[23] Q. F. Cui, Z. G. Wang, X. Chen, and F. Wen, "Sufficient conditions for non-Bazilevič functions," Abstract and Applied Analysis, vol. 2013, Article ID 154912, 4 pages, 2013.
[24] F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360-371, 2009.
[25] F. Wen and Z. Liu, "A copula-based correlation measure and its application in Chinese stock market," International Journal of Information Technology & Decision Making, vol. 8, no. 4, pp. 787-801, 2009.
[26] F. Wen, Z. He, and X. Chen, "Investors' risk preference characteristics and conditional skewness," Mathematical Problems in Engineering, vol. 2014, Article ID 814965, 14 pages, 2014.
[27] F. Wen, X. Gong, Y. Chao, and X. Chen, "The effects of prior outcomes on risky choice: evidence from the stock market," Mathematical Problems in Engineering, vol. 2014, Article ID 272518, 8 pages, 2014.
[28] F. Wen, Z. He, X. Gong, and A. Liu, "Investors' risk preference characteristics based on different reference point," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 158386, 9 pages, 2014.
[29] Z. Dai and F. Wen, "Robust CVaR-based portfolio optimization under a general affine data perturbation uncertainty set," Journal of Computational Analysis and Applications, vol. 16, no. 1, pp. 93-103, 2014.
[30] J. M. Martínez, E. A. Pilotta, and M. Raydan, "Spectral gradient methods for linearly constrained optimization," Journal of Optimization Theory and Applications, vol. 125, no. 3, pp. 629-651, 2005.
[31] C. Li and D. H. Li, "An extension of the Fletcher-Reeves method to linear equality constrained optimization problem," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10909-10914, 2013.
[32] C. Li, L. Fang, and X. Cui, "A feasible Fletcher-Reeves method to linear equality constrained optimization problem," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '10), pp. 30-33, December 2010.
[33] L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561-572, 2006.
[34] N. I. M. Gould, M. E. Hribar, and J. Nocedal, "On the solution of equality constrained quadratic programming problems arising in optimization," SIAM Journal on Scientific Computing, vol. 23, no. 4, pp. 1376-1395, 2001.
[35] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Springer, New York, NY, USA, 1981.
[36] J. Asaadi, "A computational comparison of some non-linear programs," Mathematical Programming, vol. 4, no. 1, pp. 144-154, 1973.
Substituting the last inequality into (34) for all 119896 isin 119870sufficiently large we have from (12) that
100381710038171003817100381711987511989211989610038171003817100381710038172
= minus119892119879119896119889119896le 120588minus1 (119871 + 120575) 120572
119896
100381710038171003817100381711988911989610038171003817100381710038172
(36)
Since 119889119896 is bounded and lim
119896rarrinfin120572119896= 0 the last inequality
implies
lim119896isin119870119896rarrinfin
10038171003817100381710038171198751198921198961003817100381710038171003817 = 0 (37)
This also yields a contradiction The proof is then complete
4 Improvement for Algorithm 2
In this section we propose techniques for improving theefficiency of Algorithm 2 in practical computation which isabout computing projections and the stepsize of the Armijo-type line search
41 Computing Projection In this paper as in Gould et al[34] instead of computing a basis for null space of matrix119860 we choose to work directly with the matrix of constraintgradients computing projections by normal equations As thecomputation of the projection is a key step in the proposedmethod following Gould et al [34] this projection can becomputed in an alternative way
Let
119892+ = minus119875119892 119875 = 119868 minus 119860119879(119860119860119879)minus1
119860 (38)
We can express this as
119892+ = minus119892 + 119860119879V (39)
where V is the solution of
119860119860119879V = 119860119892 (40)
Noting that (40) is the normal equation Since 119860 is a 119898 times 119899matrix of rank119898(119898 le 119899) 119860119860119879 is symmetric positive definitematrixWe use the Doolittle (called LU) factorization of119860119860119879to solve (40)
For the matrix 119860 is constant the factorization of 119860119860119879needs to be carried out only once at the beginning of theiterative process Using a Doolittle factorization of119860119860119879 (40)can be computed in the following form
119871 (119880119879V) = 119860119892 lArrrArr 119871119910 = 119860119892
119880119879V = 119910(41)
where 119871 119880 is the Doolittle factor of 119860119860119879
42 Computing Stepsize The drawback of the Armijo linesearch is how to choose the initial stepsize 120572
119896 If 120572119896is too
large then the procedure needs to call much more functionevaluations If 120572
119896is too small then the efficiency of related
algorithm will be decreased Therefore we should choose anadequate initial stepsize 120572
119896at each iteration In what follows
we propose a way to generate the initial stepsizeWe first estimate the stepsize determined by the exact line
search Support at the moment that 119891 is twice continuouslydifferentiable We denote by 119866(119909) the Hessian of 119891 at 119909 andabbreviate 119866(119909
119896) as 119866
119896 Notice that the exact line search
stepsize 120572119896satisfies
minus119892119879119896119889119896= (119892 (119909
119896+ 120572119896119889119896) minus 119892119896)119879
119889119896asymp 120572119896119889119879119896119866119896119889119896 (42)
This shows that scalar 120575119896≜ minus119892119879119896119889119896119889119879119896119866119896119889119896is an estimation
to 120572119896 To avoid the computation of the second derivative we
further estimate 120572119896by letting
120574119896=
minus120598119896119892119879119896119889119896
119889119879119896(119892 (119909119896+ 120598119896119889119896) minus 119892119896) (43)
Abstract and Applied Analysis 5
where positive sequence satisfies 120598119896 rarr 0 as 119896 rarr infin
Let the initial stepsize of the Armijo line search be anapproximation to
120575119896≜minus119892119879119896119889119896
119889119879119896119866119896119889119896
asympminus120598119896119892119879119896119889119896
119889119879119896(119892 (119909119896+ 120598119896119889119896) minus 119892119896)≜ 120574119896 (44)
It is not difficult to see that if 120598119896and 119889
119896 are sufficiently small
then 120575119896and 120574119896are good estimation to 120572
119896
So to improve the efficiency of EMPRP method inpractical computation we utilize the following line searchprocess
Line Search Process If inequality
119891 (119909119896+10038161003816100381610038161205741198961003816100381610038161003816 119889119896) le 119891 (119909119896) + 120575
1003816100381610038161003816120574119896100381610038161003816100381621003817100381710038171003817119889119896
10038171003817100381710038172
(45)
holds then we let 120572119896= |120574119896| Otherwise we let 120572
119896be the largest
scalar in the set |120574119896|120588119894 119894 = 0 1 such that inequality (45)
is satisfied
5 Numerical Experiments
This section reports some numerical experiments Firstly wetest the EMPRP method and compare it with the Rose gradi-ent projection method in [1] on low dimensional problemsSecondly we test the EMPRP method and compare it withthe spectral gradient method in Martınez et al [30] and thefeasible Fletcher-Reeves conjugate gradientmethod in Li et al[32] on large dimensional problems In the line searchprocess we set 120598
119896= 10minus6 120588 = 03 120575 = 002
The methods in the tables have the following meanings
(i) ldquoEMPRPrdquo stands for the EMPRP method with theArmijio-type line search (20)
(ii) ldquoROSErdquo stands for Rose gradient projection methodin [1] with the Armijio-type line search (20) That isin Algorithm 2 the direction 119889
119896= minus119875119892
119896
(iii) ldquoSPGrdquo stands for the spectral gradient method withthe nonmonotone line search in Martınez et al [30]where119872 = 10 119875 = 2
(iv) ldquoFFRrdquo stands for the feasible modified Fletcher-Reeves conjugate gradient method with the Armijio-type line search in Li et al [32]
We stop the iteration if the condition 119875119892119896 le 120576 is
satisfied where 120576 = 10minus5 If the iteration number exceeds105 we also stop the iteration Then we call it failure All ofthe algorithms are coded in Matlab 70 and run on a personalcomputer with a 20GHZ CPU processor
5.1. Numerical Comparison of EMPRP and ROSE. We test the performance of the EMPRP and ROSE methods on the following test problems with given initial points. The results are listed in Table 1, where n stands for the dimension of the tested problem and n_c stands for the number of constraints. We report the following results: the CPU time Time (in seconds), the number of iterations Iter, the number of gradient evaluations Geval, and the number of function evaluations Feval.
Problem 1 (HS28 [35]). The function HS28 in [35] is defined as follows:

f(x) = (x_1 + x_2)^2 + (x_2 + x_3)^2
s.t. x_1 + 2x_2 + 3x_3 − 1 = 0,   (46)

with the initial point x_0 = (−4, 1, 1). The optimal solution is x* = (0.5, −0.5, 0.5) and the optimal function value is f(x*) = 0.
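A quick numeric sanity check of the stated data (our own check, not part of the paper's experiments): the initial point is feasible and the stated optimum satisfies the constraint with f(x*) = 0.

```python
import numpy as np

# Problem 1 (HS28): objective, constraint A x = b, initial point, optimum.
f = lambda x: (x[0] + x[1])**2 + (x[1] + x[2])**2
A, b = np.array([[1.0, 2.0, 3.0]]), np.array([1.0])
x0 = np.array([-4.0, 1.0, 1.0])         # feasible: -4 + 2 + 3 = 1
x_star = np.array([0.5, -0.5, 0.5])     # f(x_star) = 0
```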
Problem 2 (HS48 [35]). The function HS48 in [35] is defined as follows:

f(x) = (x_1 − 1)^2 + (x_2 − x_3)^2 + (x_4 − x_5)^2
s.t. x_1 + x_2 + x_3 + x_4 + x_5 − 5 = 0,
     x_3 − 2(x_4 + x_5) + 3 = 0,   (47)

with the initial point x_0 = (3, 5, −3, 2, −2). The optimal solution is x* = (1, 1, 1, 1, 1) and the optimal function value is f(x*) = 0.
Problem 3 (HS49 [35]). The function HS49 in [35] is defined as follows:

f(x) = (x_1 − x_2)^2 + (x_3 − 1)^2 + (x_4 − 1)^4 + (x_5 − 1)^6
s.t. x_1 + x_2 + x_3 + 4x_4 − 7 = 0,
     x_3 + 5x_5 − 6 = 0,   (48)

with the initial point x_0 = (10, 7, 2, −3, 0.8). The optimal solution is x* = (1, 1, 1, 1, 1) and the optimal function value is f(x*) = 0.
Problem 4 (HS50 [35]). The function HS50 in [35] is defined as follows:

f(x) = (x_1 − x_2)^2 + (x_2 − x_3)^2 + (x_3 − x_4)^2 + (x_4 − x_5)^2
s.t. x_1 + 2x_2 + 3x_3 − 6 = 0,
     x_2 + 2x_3 + 3x_4 − 6 = 0,
     x_3 + 2x_4 + 3x_5 − 6 = 0,   (49)

with the initial point x_0 = (35, −31, 11, 5, −5). The optimal solution is x* = (1, 1, 1, 1, 1) and the optimal function value is f(x*) = 0. Moreover, we extend the dimension of function HS50 [35] to 10 and 20 (HS50E1 and HS50E2 in Table 1) with the initial point x_0 = (35, −31, 11, ...). The optimal solution is x* = (1, 1, ..., 1) and the optimal function value is f(x*) = 0.
Problem 5 (HS51 [35]). The function HS51 in [35] is defined as follows:

f(x) = (x_1 − x_2)^2 + (x_2 + x_3 − 2)^2 + (x_4 − 1)^2 + (x_5 − 1)^2
s.t. x_1 + 3x_2 − 4 = 0,
     x_3 + x_4 − 2x_5 = 0,
     x_2 − x_5 = 0,   (50)
6 Abstract and Applied Analysis
Table 1: Test results for Problems 1–5 with given initial points.

                  |           EMPRP            |            ROSE
Name    n   n_c   |  Time    Iter  Geval Feval |  Time    Iter  Geval Feval
HS28    3   1     | 0.0156    20    22    21   | 0.0468    71    72   143
HS48    5   2     | 0.0156    26    28    27   | 0.0468    65    66   122
HS49    5   2     | 0.0156    29    30    30   | 0.0780   193   194   336
HS50    5   3     | 0.0156    22    23    23   | 0.0624    76    77   139
HS51    5   3     | 0.0156    15    16    16   | 0.0624    80    81   148
HS52    5   3     | 0.0156    30    31    31   | 0.0312    70    71   141
HS50E1  10  8     | 0.0156    16    17    17   | 0.0312    63    64   127
HS50E2  20  18    | 0.0156    15    16    16   | 0.0936    63    64   127
Table 2: Test results for Problem 6 with given initial points.

k      n     n_c   |  EMPRP: Time   Iter |  FFR: Time   Iter |  SPCG: Time   Iter
50     99    49    |   0.0568        73  |   0.0660      80  |   0.0256       70
100    199   99    |   0.1048       103  |   0.1060     120  |   0.0756       90
200    399   199   |   0.0868       108  |   0.2660     123  |   0.0904       97
300    599   299   |   0.2252       121  |   0.4260     126  |   0.7504      123
400    799   399   |   0.3142       121  |   0.5150     126  |   0.9804      153
500    999   499   |   0.4045       138  |   0.7030     146  |   1.4320      176
1000   1999  999   |   2.6043       138  |   2.1560     146  |   3.3468      198
2000   3999  1999  |   8.2608       138  |   6.8280     146  |   9.1266      216
3000   5999  2999  |  22.4075       138  |  15.1720     146  |  28.2177      276
4000   7999  3999  |  40.1268       138  |  35.0460     146  |  56.0604      324
5000   9999  4999  | 105.6252       138  |  85.0460     146  | 128.1504      476
with the initial point x_0 = (2.5, 0.5, 2, −1, 0.5). The optimal solution is x* = (1, 1, 1, 1, 1) and the optimal function value is f(x*) = 0.
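The stated optima of Problems 2–5 can be checked numerically in one pass. The encodings below are our reading of the objectives and constraint systems above (a sanity check, not the paper's code); each stated optimum x* = (1, ..., 1) is feasible and attains f(x*) = 0.

```python
import numpy as np

ones = np.ones(5)
problems = {  # name: (objective, A, b) with constraints written as A x = b
    "HS48": (lambda x: (x[0]-1)**2 + (x[1]-x[2])**2 + (x[3]-x[4])**2,
             np.array([[1., 1., 1., 1., 1.], [0., 0., 1., -2., -2.]]),
             np.array([5., -3.])),
    "HS49": (lambda x: (x[0]-x[1])**2 + (x[2]-1)**2 + (x[3]-1)**4 + (x[4]-1)**6,
             np.array([[1., 1., 1., 4., 0.], [0., 0., 1., 0., 5.]]),
             np.array([7., 6.])),
    "HS50": (lambda x: (x[0]-x[1])**2 + (x[1]-x[2])**2 + (x[2]-x[3])**2 + (x[3]-x[4])**2,
             np.array([[1., 2., 3., 0., 0.], [0., 1., 2., 3., 0.], [0., 0., 1., 2., 3.]]),
             np.array([6., 6., 6.])),
    "HS51": (lambda x: (x[0]-x[1])**2 + (x[1]+x[2]-2)**2 + (x[3]-1)**2 + (x[4]-1)**2,
             np.array([[1., 3., 0., 0., 0.], [0., 0., 1., 1., -2.], [0., 1., 0., 0., -1.]]),
             np.array([4., 0., 0.])),
}
checks = {name: bool(abs(fn(ones)) < 1e-15 and np.allclose(A @ ones, b))
          for name, (fn, A, b) in problems.items()}
```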
From Table 1 we can see that the EMPRP method performs better than the Rosen gradient projection method in [1], which implies that the EMPRP method can improve the computational efficiency of the Rosen gradient projection method for solving linear equality constrained optimization problems.
5.2. Numerical Comparison of EMPRP, FFR, and SPCG. In this subsection we test the EMPRP method and compare it with the spectral gradient method (called SPCG) in Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method (called FFR) in Li et al. [32] on the following high-dimensional problems with given initial points. The results are listed in Tables 2, 3, 4, 5, and 6. We report the following results: the CPU time Time (in seconds) and the number of iterations Iter.
Problem 6. Given a positive integer k, the function is defined as follows:

f(x) = (1/2) Σ_{i=1}^{k−2} (x_{k+i+1} − x_{k+i})^2
s.t. x_{k+i} − x_{i+1} + x_i = i, i = 1, ..., k − 1,   (51)

with the initial point x_0 = (1, 2, ..., k, 2, 3, ..., k). The optimal function value is f(x*) = 0. This problem comes from Martínez et al. [30].
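The constraint system of Problem 6 (shared by Problems 8 and 9 below) is easy to assemble. In the sketch below, the n = 2k − 1 variable layout is our reading of the initial point x_0 = (1, 2, ..., k, 2, 3, ..., k), which then turns out to be feasible by construction; the helper name is ours.

```python
import numpy as np

def problem6_constraints(k):
    """A x = b encoding x_{k+i} - x_{i+1} + x_i = i for i = 1..k-1
    (1-based indices, as in the text) over n = 2k - 1 variables,
    together with the initial point given in the text."""
    n = 2 * k - 1
    A = np.zeros((k - 1, n))
    b = np.zeros(k - 1)
    for i in range(1, k):
        A[i - 1, k + i - 1] += 1.0   # + x_{k+i}
        A[i - 1, i] -= 1.0           # - x_{i+1}
        A[i - 1, i - 1] += 1.0       # + x_i
        b[i - 1] = float(i)
    x0 = np.concatenate([np.arange(1, k + 1), np.arange(2, k + 1)]).astype(float)
    return A, b, x0

A, b, x0 = problem6_constraints(50)   # the k = 50 row of Table 2: n = 99, n_c = 49
```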
Problem 7. Given a positive integer n, the function is defined as follows:

f(x) = Σ_{i=1}^{n} cos(2π x_i · sin(π/20))
s.t. x_i − x_{i+1} = 0.4, i = 1, ..., n − 1,   (52)

with the initial point x_0 = (1, 0.6, 0.2, ..., 1 − 0.4(n − 1)). This problem comes from Asaadi [36] and is called MAD6.
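Problem 7's data in the same style (our helper, under the same reading of the formulas; note that the given initial point satisfies the constraints x_i − x_{i+1} = 0.4 exactly):

```python
import numpy as np

def mad6_data(n):
    """Objective and initial point of Problem 7 (MAD6):
    f(x) = sum_i cos(2*pi*x_i*sin(pi/20)), x0_i = 1 - 0.4*(i-1)."""
    c = np.sin(np.pi / 20.0)
    f = lambda x: float(np.sum(np.cos(2.0 * np.pi * x * c)))
    x0 = 1.0 - 0.4 * np.arange(n)
    return f, x0

f, x0 = mad6_data(200)   # the n = 200 row of Table 3
```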
Problem 8. Given a positive integer k, the function is defined as follows:

f(x) = (1/2) Σ_{i=1}^{k−2} (x_{k+i+1} − x_{k+i})^4
s.t. x_{k+i} − x_{i+1} + x_i = i, i = 1, ..., k − 1,   (53)

with the initial point x_0 = (1, 2, ..., k, 2, 3, ..., k). The optimal function value is f(x*) = 0.
Table 3: Test results for Problem 7 with given initial points.

n      n_c   |  EMPRP: Time   Iter |  FFR: Time   Iter |  SPCG: Time   Iter
200    99    |   0.1468        83  |   0.1679      94  |   0.1286       79
400    199   |   0.2608        94  |   0.2860     103  |   0.2266       90
600    299   |   0.6252       121  |   0.8422     132  |   0.7504      133
800    399   |   1.4075       123  |   1.6510     138  |   1.4177      149
1000   499   |   3.280        141  |   2.7190     146  |   2.8790      168
2000   999   |   8.4045       189  |   4.6202     202  |   5.1280      190
3000   1499  |  12.0042       219  |   9.7030     248  |  10.7864      260
4000   1999  |  17.2608       254  |  13.4255     288  |  19.1266      316
5000   2499  |  20.6252       321  |  19.0940     330  |  29.1504      383
Table 4: Test results for Problem 8 with given initial points.

k      n     n_c   |  EMPRP: Time   Iter |  FFR: Time   Iter |  SPCG: Time   Iter
50     99    49    |   0.0668        78  |   0.0750      83  |   0.0286       75
100    199   99    |   0.0988       113  |   0.1462     130  |   0.0826      106
200    399   199   |   0.1978       123  |   0.2960     133  |   0.1864      124
300    599   299   |   0.3436       128  |   0.5260     136  |   0.3504      156
400    799   399   |   0.4142       134  |   0.7143     136  |   0.5804      186
500    999   499   |   0.6039       152  |   0.9030     158  |   0.9331      228
1000   1999  999   |   2.9065       152  |   2.9570     158  |   3.1260      268
2000   3999  1999  |   8.3408       152  |   7.8280     158  |   8.2266      298
3000   5999  2999  |  19.6086       152  |  17.4350     158  |  19.8672      330
4000   7999  3999  |  46.3268       152  |  38.1460     158  |  56.1256      368
5000   9999  4999  |  75.3972       152  |  58.3867     158  | 125.7680      398
Table 5: Test results for Problem 9 with given initial points.

k      n     n_c   |  EMPRP: Time   Iter |  FFR: Time   Iter |  SPCG: Time   Iter
50     99    49    |   0.0568        65  |   0.0640      70  |   0.0266       62
100    199   99    |   0.0868       109  |   0.1362     125  |   0.0726      103
200    399   199   |   0.2110       120  |   0.2862     128  |   0.2064      124
300    599   299   |   0.3250       127  |   0.5062     132  |   0.4524      160
400    799   399   |   0.4543       138  |   0.7456     139  |   0.6804      192
500    999   499   |   0.6246       156  |   0.9268     155  |   1.0876      258
1000   1999  999   |   3.9245       156  |   3.4268     162  |   3.4560      279
2000   3999  1999  |   9.3404       156  |   8.6240     162  |   8.6280      320
3000   5999  2999  |  20.6125       156  |  19.6548     162  |  20.8656      358
4000   7999  3999  |  48.2890       156  |  44.4330     162  |  57.7680      386
5000   9999  4999  |  95.4680       156  |  83.8650     162  | 128.8760      420
Problem 9. Given a positive integer k, the function is defined as follows:

f(x) = Σ_{i=1}^{k−2} [100(x_{k+i+1} − x_{k+i})^2 + (1 − x_{k+i})^2]
s.t. x_{k+i} − x_{i+1} + x_i = i, i = 1, ..., k − 1,   (54)

with the initial point x_0 = (1, 2, ..., k, 2, 3, ..., k). The optimal function value is f(x*) = 0.
Problem 10. Given a positive integer n, the function is defined as follows:

f(x) = Σ_{i=1}^{n} cos^2(2π x_i · sin(π/20))
s.t. x_i − x_{i+1} = 0.4, i = 1, ..., n − 1,   (55)

with the initial point x_0 = (1, 0.6, 0.2, ..., 1 − 0.4(n − 1)).
Table 6: Test results for Problem 10 with given initial points.

n      n_c   |  EMPRP: Time   Iter |  FFR: Time   Iter |  SPCG: Time   Iter
200    99    |   0.1842        88  |   0.1984      98  |   0.1488       92
400    199   |   0.2908       104  |   0.3260     113  |   0.2348      100
600    299   |   0.8256       132  |   0.9422     142  |   0.9804      182
800    399   |   1.6078       136  |   1.7512     148  |   2.6177      259
1000   499   |   2.9801       148  |   2.8192     156  |   3.1790      268
2000   999   |   7.8045       199  |   5.1202     212  |   8.1288      320
3000   1499  |  12.8042       248  |   9.9035     256  |  14.7862      360
4000   1999  |  19.2432       304  |  14.4258     298  |  22.1268      386
5000   2499  |  29.6846       382  |  20.1932     378  |  32.1422      393
From Tables 2–6 we can see that the EMPRP method and the FFR method in [32] perform better than the SPCG method in Martínez et al. [30] for solving large-scale linear equality constrained optimization problems, since the EMPRP and FFR methods are both first-order methods, whereas the SPCG method in Martínez et al. [30] needs to compute B_k with a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. However, since the EMPRP method also needs to compute a projection, the FFR method in [32] performs better than the EMPRP method as the test problems become large.
6. Conclusions

In this paper we propose a new conjugate gradient projection method for solving the linear equality constrained optimization problem (1), which combines the two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method in Cheng [7] with the Rosen projection method. The proposed method can also be regarded as an extension of the recently developed two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method in Cheng [7]. Under some mild conditions we show that it is globally convergent with the Armijo-type line search.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The author thanks the anonymous referees for their valuable comments and suggestions that improved the presentation. This work is supported by the NSF of China (Grants 11301041, 11371154, 71371065, and 71371195), the Ministry of Education Humanities and Social Sciences Project (Grant 12YJC790027), a project funded by the China Postdoctoral Science Foundation, and the Natural Science Foundation of Hunan Province.
References
[1] J. B. Rosen, "The gradient projection method for nonlinear programming. Part I. Linear constraints," SIAM Journal on Applied Mathematics, vol. 8, no. 1, pp. 181–217, 1960.
[2] D. Z. Du and X. S. Zhang, "A convergence theorem of Rosen's gradient projection method," Mathematical Programming, vol. 36, no. 2, pp. 135–144, 1986.
[3] D. Du, "Remarks on the convergence of Rosen's gradient projection method," Acta Mathematicae Applicatae Sinica, vol. 3, no. 2, pp. 270–279, 1987.
[4] D. Z. Du and X. S. Zhang, "Global convergence of Rosen's gradient projection method," Mathematical Programming, vol. 44, no. 1, pp. 357–366, 1989.
[5] E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, vol. 16, pp. 35–43, 1969.
[6] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
[7] W. Cheng, "A two-term PRP-based descent method," Numerical Functional Analysis and Optimization, vol. 28, no. 11, pp. 1217–1230, 2007.
[8] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
[9] G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523–539, 2007.
[10] N. Andrei, "Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," European Journal of Operational Research, vol. 204, no. 3, pp. 410–420, 2010.
[11] G. Yu, L. Guan, and W. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optimization Methods and Software, vol. 23, no. 2, pp. 275–293, 2008.
[12] G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
[13] Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
[14] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
[15] Z. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927–936, 2011.
[16] Y.-Y. Chen and S.-Q. Du, "Nonlinear conjugate gradient methods with Wolfe type line search," Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
[17] S. Y. Liu, Y. Y. Huang, and H. W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, Article ID 305643, 12 pages, 2014.
[18] Z. F. Dai, D. H. Li, and F. H. Wen, "Robust conditional value-at-risk optimization for asymmetrically distributed asset returns," Pacific Journal of Optimization, vol. 8, no. 3, pp. 429–445, 2012.
[19] C. Huang, C. Peng, X. Chen, and F. Wen, "Dynamics analysis of a class of delayed economic model," Abstract and Applied Analysis, vol. 2013, Article ID 962738, 12 pages, 2013.
[20] C. Huang, X. Gong, X. Chen, and F. Wen, "Measuring and forecasting volatility in Chinese stock market using HAR-CJ-M model," Abstract and Applied Analysis, vol. 2013, Article ID 143194, 13 pages, 2013.
[21] G. Qin, C. Huang, Y. Xie, and F. Wen, "Asymptotic behavior for third-order quasi-linear differential equations," Advances in Difference Equations, vol. 30, no. 13, pp. 305–312, 2013.
[22] C. Huang, H. Kuang, X. Chen, and F. Wen, "An LMI approach for dynamics of switched cellular neural networks with mixed delays," Abstract and Applied Analysis, vol. 2013, Article ID 870486, 8 pages, 2013.
[23] Q. F. Cui, Z. G. Wang, X. Chen, and F. Wen, "Sufficient conditions for non-Bazilevič functions," Abstract and Applied Analysis, vol. 2013, Article ID 154912, 4 pages, 2013.
[24] F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360–371, 2009.
[25] F. Wen and Z. Liu, "A copula-based correlation measure and its application in Chinese stock market," International Journal of Information Technology & Decision Making, vol. 8, no. 4, pp. 787–801, 2009.
[26] F. Wen, Z. He, and X. Chen, "Investors' risk preference characteristics and conditional skewness," Mathematical Problems in Engineering, vol. 2014, Article ID 814965, 14 pages, 2014.
[27] F. Wen, X. Gong, Y. Chao, and X. Chen, "The effects of prior outcomes on risky choice: evidence from the stock market," Mathematical Problems in Engineering, vol. 2014, Article ID 272518, 8 pages, 2014.
[28] F. Wen, Z. He, X. Gong, and A. Liu, "Investors' risk preference characteristics based on different reference point," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 158386, 9 pages, 2014.
[29] Z. Dai and F. Wen, "Robust CVaR-based portfolio optimization under a general affine data perturbation uncertainty set," Journal of Computational Analysis and Applications, vol. 16, no. 1, pp. 93–103, 2014.
[30] J. M. Martínez, E. A. Pilotta, and M. Raydan, "Spectral gradient methods for linearly constrained optimization," Journal of Optimization Theory and Applications, vol. 125, no. 3, pp. 629–651, 2005.
[31] C. Li and D. H. Li, "An extension of the Fletcher-Reeves method to linear equality constrained optimization problem," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10909–10914, 2013.
[32] C. Li, L. Fang, and X. Cui, "A feasible Fletcher-Reeves method to linear equality constrained optimization problem," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '10), pp. 30–33, December 2010.
[33] L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561–572, 2006.
[34] N. I. M. Gould, M. E. Hribar, and J. Nocedal, "On the solution of equality constrained quadratic programming problems arising in optimization," SIAM Journal on Scientific Computing, vol. 23, no. 4, pp. 1376–1395, 2001.
[35] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Springer, New York, NY, USA, 1981.
[36] J. Asaadi, "A computational comparison of some non-linear programs," Mathematical Programming, vol. 4, no. 1, pp. 144–154, 1973.
[15] Z Dai ldquoTwo modified HS type conjugate gradient methodsfor unconstrained optimization problemsrdquo Nonlinear AnalysisTheory Methods ampApplications vol 74 no 3 pp 927ndash936 2011
[16] Y-Y Chen and S-Q Du ldquoNonlinear conjugate gradient meth-ods with Wolfe type line searchrdquo Abstract and Applied Analysisvol 2013 Article ID 742815 5 pages 2013
[17] S Y Liu Y Y Huang and H W Jiao ldquoSufficient descent con-jugate gradient methods for solving convex constrained non-linear monotone equationsrdquo Abstract and Applied Analysis vol2014 Article ID 305643 12 pages 2014
[18] Z F Dai D H Li and F HWen ldquoRobust conditional value-at-risk optimization for asymmetrically distributed asset returnsrdquoPacific Journal of Optimization vol 8 no 3 pp 429ndash445 2012
[19] C Huang C Peng X Chen and F Wen ldquoDynamics analysisof a class of delayed economic modelrdquo Abstract and AppliedAnalysis vol 2013 Article ID 962738 12 pages 2013
[20] C Huang X Gong X Chen and F Wen ldquoMeasuring andforecasting volatility in Chinese stock market using HAR-CJ-Mmodelrdquo Abstract and Applied Analysis vol 2013 Article ID143194 13 pages 2013
[21] G Qin C Huang Y Xie and F Wen ldquoAsymptotic behavior forthird-order quasi-linear differential equationsrdquo Advances inDifference Equations vol 30 no 13 pp 305ndash312 2013
[22] C Huang H Kuang X Chen and F Wen ldquoAn LMI approachfor dynamics of switched cellular neural networks with mixeddelaysrdquo Abstract and Applied Analysis vol 2013 Article ID870486 8 pages 2013
[23] Q F Cui Z G Wang X Chen and F Wen ldquoSufficient condi-tions for non-Bazilevic functionsrdquo Abstract and Applied Anal-ysis vol 2013 Article ID 154912 4 pages 2013
[24] F Wen and X Yang ldquoSkewness of return distribution andcoefficient of risk premiumrdquo Journal of Systems Science amp Com-plexity vol 22 no 3 pp 360ndash371 2009
[25] F Wen and Z Liu ldquoA copula-based correlation measure andits application in Chinese stock marketrdquo International Journalof Information Technology amp Decision Making vol 8 no 4 pp787ndash801 2009
[26] F Wen Z He and X Chen ldquoInvestorsrsquo risk preference charac-teristics and conditional skewnessrdquo Mathematical Problems inEngineering vol 2014 Article ID 814965 14 pages 2014
[27] F Wen X Gong Y Chao and X Chen ldquoThe effects of prioroutcomes on risky choice evidence from the stock marketrdquoMathematical Problems in Engineering vol 2014 Article ID272518 8 pages 2014
[28] F Wen Z He X Gong and A Liu ldquoInvestorsrsquo risk preferencecharacteristics based on different reference pointrdquo DiscreteDynamics in Nature and Society vol 2014 Article ID 1583869 pages 2014
[29] Z Dai and F Wen ldquoRobust CVaR-based portfolio optimizationunder a genal a ne data perturbation uncertainty setrdquo Journal ofComputational Analysis and Applications vol 16 no 1 pp 93ndash103 2014
[30] J M Martınez E A Pilotta and M Raydan ldquoSpectral gradientmethods for linearly constrained optimizationrdquo Journal of
Optimization Theory and Applications vol 125 no 3 pp 629ndash651 2005
[31] C Li andD H Li ldquoAn extension of the Fletcher-Reeves methodto linear equality constrained optimization problemrdquo AppliedMathematics andComputation vol 219 no 23 pp 10909ndash109142013
[32] C Li L Fang and X Cui ldquoA feasible fletcher-reeves method tolinear equality constrained optimization problemrdquo in Proceed-ings of the International Conference on Apperceiving Computingand Intelligence Analysis (ICACIA rsquo10) pp 30ndash33 December2010
[33] L Zhang W J Zhou and D H Li ldquoGlobal convergence ofa modified Fletcher-Reeves conjugate gradient method withArmijo-type line searchrdquo Numerische Mathematik vol 104 no2 pp 561ndash572 2006
[34] N IM GouldM E Hribar and J Nocedal ldquoOn the solution ofequality constrained quadratic programming problems arisingin optimizationrdquo SIAM Journal on Scientific Computing vol 23no 4 pp 1376ndash1395 2001
[35] W Hock and K Schittkowski Test examples for nonlinear pro-gramming codes LectureNotes in Economics andMathematicalSystems Springer New York NY USA 1981
[36] J Asaadi ldquoA computational comparison of some non-linearprogramsrdquo Mathematical Programming vol 4 no 1 pp 144ndash154 1973
6 Abstract and Applied Analysis
Table 1: Test results for Problems 1–5 with given initial points.

Name      n    nc     EMPRP                              ROSE
                      Time     Iter  Geval  Feval        Time     Iter  Geval  Feval
HS28      3    1      0.0156   20    22     21           0.0468   71    72     143
HS48      5    2      0.0156   26    28     27           0.0468   65    66     122
HS49      5    2      0.0156   29    30     30           0.0780   193   194    336
HS50      5    3      0.0156   22    23     23           0.0624   76    77     139
HS51      5    3      0.0156   15    16     16           0.0624   80    81     148
HS52      5    3      0.0156   30    31     31           0.0312   70    71     141
HS50E1    10   8      0.0156   16    17     17           0.0312   63    64     127
HS50E2    20   18     0.0156   15    16     16           0.0936   63    64     127
Table 2: Test results for Problem 6 with given initial points.

k      n      nc     EMPRP              FFR                SPCG
                     Time      Iter     Time      Iter     Time      Iter
50     99     49     0.0568    73       0.0660    80       0.0256    70
100    199    99     0.1048    103      0.1060    120      0.0756    90
200    399    199    0.0868    108      0.2660    123      0.0904    97
300    599    299    0.2252    121      0.4260    126      0.7504    123
400    799    399    0.3142    121      0.5150    126      0.9804    153
500    999    499    0.4045    138      0.7030    146      1.4320    176
1000   1999   999    2.6043    138      2.1560    146      3.3468    198
2000   3999   1999   8.2608    138      6.8280    146      9.1266    216
3000   5999   2999   22.4075   138      15.1720   146      28.2177   276
4000   7999   3999   40.1268   138      35.0460   146      56.0604   324
5000   9999   4999   105.6252  138      85.0460   146      128.1504  476
with the initial point x_0 = (2.5, 0.5, 2, −1, 0.5). The optimal solution is x* = (1, 1, 1, 1, 1), and the optimal function value is f(x*) = 0.
From Table 1 we can see that the EMPRP method performs better than the Rosen gradient projection method in [1], which implies that the EMPRP method can improve the computational efficiency of the Rosen gradient projection method for solving linear equality constrained optimization problems.
5.2. Numerical Comparison of EMPRP, FFR, and SPCG. In this subsection we test the EMPRP method and compare it with the spectral gradient method (called SPCG) in Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method (called FFR) in Li et al. [32] on the following large-dimensional problems with given initial points. The results are listed in Tables 2, 3, 4, 5, and 6. We report the CPU time Time (in seconds) and the number of iterations Iter.
Problem 6. Given a positive integer k, the function is defined as follows:

f(x) = (1/2) ∑_{i=1}^{k−2} (x_{k+i+1} − x_{k+i})^2,
s.t.  x_{k+i} − x_{i+1} + x_i = i,   i = 1, …, k − 1,   (51)

with the initial point x_0 = (1, 2, …, k, 2, 3, …, k). The optimal function value is f(x*) = 0. This problem comes from Martínez et al. [30].
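For concreteness, Problem 6 can be assembled in code as follows. This is a minimal NumPy sketch under our own 0-based indexing; the helper name `problem6` is ours, not from the paper.

```python
import numpy as np

def problem6(k):
    """Problem 6: n = 2k - 1 variables, k - 1 linear equality constraints.

    f(x) = 0.5 * sum_{i=1}^{k-2} (x_{k+i+1} - x_{k+i})^2
    s.t.  x_{k+i} - x_{i+1} + x_i = i,  i = 1, ..., k - 1
    (1-based indices as in the paper; shifted to 0-based below).
    """
    n = 2 * k - 1

    def f(x):
        # 1-based x_{k+1}, ..., x_{2k-1} are x[k], ..., x[n-1] in 0-based terms
        d = x[k + 1:n] - x[k:n - 1]          # differences for i = 1, ..., k-2
        return 0.5 * np.dot(d, d)

    def grad(x):
        g = np.zeros(n)
        d = x[k + 1:n] - x[k:n - 1]
        g[k + 1:n] += d                      # d f / d x_{k+i+1}
        g[k:n - 1] -= d                      # d f / d x_{k+i}
        return g

    # constraint rows: x_{k+i} - x_{i+1} + x_i = i
    A = np.zeros((k - 1, n))
    b = np.zeros(k - 1)
    for i in range(1, k):
        A[i - 1, k + i - 1] = 1.0            # coefficient of x_{k+i}
        A[i - 1, i] = -1.0                   # coefficient of x_{i+1}
        A[i - 1, i - 1] = 1.0                # coefficient of x_i
        b[i - 1] = i

    x0 = np.concatenate([np.arange(1, k + 1), np.arange(2, k + 1)]).astype(float)
    return f, grad, A, b, x0
```

The initial point is feasible by construction: for each i, x_{k+i} − x_{i+1} + x_i = (i + 1) − (i + 1) + i = i.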
Problem 7. Given a positive integer n, the function is defined as follows:

f(x) = ∑_{i=1}^{n} cos(2π x_i sin(π/20)),
s.t.  x_i − x_{i+1} = 0.4,   i = 1, …, n − 1,   (52)

with the initial point x_0 = (1, 0.6, 0.2, …, 1 − 0.4(n − 1)). This problem comes from Asaadi [36] and is called MAD6.
Problem 8. Given a positive integer k, the function is defined as follows:

f(x) = (1/2) ∑_{i=1}^{k−2} (x_{k+i+1} − x_{k+i})^4,
s.t.  x_{k+i} − x_{i+1} + x_i = i,   i = 1, …, k − 1,   (53)

with the initial point x_0 = (1, 2, …, k, 2, 3, …, k). The optimal function value is f(x*) = 0.
Table 3: Test results for Problem 7 with given initial points.

n      nc     EMPRP              FFR                SPCG
              Time      Iter     Time      Iter     Time      Iter
200    99     0.1468    83       0.1679    94       0.1286    79
400    199    0.2608    94       0.2860    103      0.2266    90
600    299    0.6252    121      0.8422    132      0.7504    133
800    399    1.4075    123      1.6510    138      1.4177    149
1000   499    3.280     141      2.7190    146      2.8790    168
2000   999    8.4045    189      4.6202    202      5.1280    190
3000   1499   12.0042   219      9.7030    248      10.7864   260
4000   1999   17.2608   254      13.4255   288      19.1266   316
5000   2499   20.6252   321      19.0940   330      29.1504   383
Table 4: Test results for Problem 8 with given initial points.

k      n      nc     EMPRP              FFR                SPCG
                     Time      Iter     Time      Iter     Time      Iter
50     99     49     0.0668    78       0.0750    83       0.0286    75
100    199    99     0.0988    113      0.1462    130      0.0826    106
200    399    199    0.1978    123      0.2960    133      0.1864    124
300    599    299    0.3436    128      0.5260    136      0.3504    156
400    799    399    0.4142    134      0.7143    136      0.5804    186
500    999    499    0.6039    152      0.9030    158      0.9331    228
1000   1999   999    2.9065    152      2.9570    158      3.1260    268
2000   3999   1999   8.3408    152      7.8280    158      8.2266    298
3000   5999   2999   19.6086   152      17.4350   158      19.8672   330
4000   7999   3999   46.3268   152      38.1460   158      56.1256   368
5000   9999   4999   75.3972   152      58.3867   158      125.7680  398
Table 5: Test results for Problem 9 with given initial points.

k      n      nc     EMPRP              FFR                SPCG
                     Time      Iter     Time      Iter     Time      Iter
50     99     49     0.0568    65       0.0640    70       0.0266    62
100    199    99     0.0868    109      0.1362    125      0.0726    103
200    399    199    0.2110    120      0.2862    128      0.2064    124
300    599    299    0.3250    127      0.5062    132      0.4524    160
400    799    399    0.4543    138      0.7456    139      0.6804    192
500    999    499    0.6246    156      0.9268    155      1.0876    258
1000   1999   999    3.9245    156      3.4268    162      3.4560    279
2000   3999   1999   9.3404    156      8.6240    162      8.6280    320
3000   5999   2999   20.6125   156      19.6548   162      20.8656   358
4000   7999   3999   48.2890   156      44.4330   162      57.7680   386
5000   9999   4999   95.4680   156      83.8650   162      128.8760  420
Problem 9. Given a positive integer k, the function is defined as follows:

f(x) = ∑_{i=1}^{k−2} [100(x_{k+i+1} − x_{k+i})^2 + (1 − x_{k+i})^2],
s.t.  x_{k+i} − x_{i+1} + x_i = i,   i = 1, …, k − 1,   (54)

with the initial point x_0 = (1, 2, …, k, 2, 3, …, k). The optimal function value is f(x*) = 0.
Problem 10. Given a positive integer n, the function is defined as follows:

f(x) = ∑_{i=1}^{n} cos^2(2π x_i sin(π/20)),
s.t.  x_i − x_{i+1} = 0.4,   i = 1, …, n − 1,   (55)

with the initial point x_0 = (1, 0.6, 0.2, …, 1 − 0.4(n − 1)).
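All five test problems share the same structure: a smooth objective, a sparse constraint matrix A, and a feasible starting point. The kind of feasible-descent loop compared in Tables 2–6 can be sketched as follows. This is illustrative only (projected steepest descent with an Armijo-type backtracking line search), not the paper's exact EMPRP implementation; the function name and parameters are ours.

```python
import numpy as np

def feasible_descent(f, grad, A, x0, tol=1e-8, max_iter=1000, rho=0.5, sigma=1e-4):
    """Projected steepest descent with Armijo-type backtracking.

    Illustrative skeleton only: the search direction is the projected
    negative gradient, so every iterate stays on {x : A x = A x0}.
    """
    x = x0.astype(float)
    AAT = A @ A.T                         # m x m; A assumed full row rank
    for _ in range(max_iter):
        g = grad(x)
        # d = -P g with P = I - A^T (A A^T)^{-1} A, hence A d = 0
        d = -(g - A.T @ np.linalg.solve(AAT, A @ g))
        if np.linalg.norm(d) <= tol:
            break
        t, fx, slope = 1.0, f(x), sigma * np.dot(g, d)
        # Armijo condition; the floor on t guards against a stalled backtrack
        while f(x + t * d) > fx + t * slope and t > 1e-16:
            t *= rho
        x = x + t * d
    return x

# Tiny check: min ||x||^2 subject to x1 + x2 = 1 has solution (0.5, 0.5).
A = np.array([[1.0, 1.0]])
x = feasible_descent(lambda x: x @ x, lambda x: 2.0 * x, A, np.array([1.0, 0.0]))
```

Because d lies in the null space of A, feasibility is maintained exactly at every iterate, matching the feasible-point property the proposed method is designed to have.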
Table 6: Test results for Problem 10 with given initial points.

n      nc     EMPRP              FFR                SPCG
              Time      Iter     Time      Iter     Time      Iter
200    99     0.1842    88       0.1984    98       0.1488    92
400    199    0.2908    104      0.3260    113      0.2348    100
600    299    0.8256    132      0.9422    142      0.9804    182
800    399    1.6078    136      1.7512    148      2.6177    259
1000   499    2.9801    148      2.8192    156      3.1790    268
2000   999    7.8045    199      5.1202    212      8.1288    320
3000   1499   12.8042   248      9.9035    256      14.7862   360
4000   1999   19.2432   304      14.4258   298      22.1268   386
5000   2499   29.6846   382      20.1932   378      32.1422   393
From Tables 2–6 we can see that the EMPRP method and the FFR method in [32] perform better than the SPCG method in Martínez et al. [30] for solving large-scale linear equality constrained optimization problems, as the EMPRP method and the FFR method are both first-order methods, whereas the SPCG method in Martínez et al. [30] needs to compute B_k with a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. However, as the EMPRP method also needs to compute a projection, the FFR method in [32] performs better than the EMPRP method when the test problems become large.
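The projection cost mentioned above is the usual null-space projection P = I − Aᵀ(AAᵀ)⁻¹A. A hedged sketch (our own function name, assuming A has full row rank) shows that the per-iteration cost reduces to solving one m × m linear system rather than forming P explicitly:

```python
import numpy as np

def projected_direction(A, g):
    """Return d = -P g, where P = I - A^T (A A^T)^{-1} A projects onto
    the feasible direction set S = {d : A d = 0}.

    Solving the m x m system (A A^T) y = A g is the per-iteration cost
    the text refers to; A is assumed to have full row rank.
    """
    y = np.linalg.solve(A @ A.T, A @ g)
    return -(g - A.T @ y)
```

For any gradient g, the returned d satisfies A d = 0 and gᵀd = −‖Pg‖² ≤ 0, so it is a feasible descent direction whenever Pg ≠ 0.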
6. Conclusions

In this paper we propose a new conjugate gradient projection method for solving the linear equality constrained optimization problem (1), which combines the two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] with the Rosen projection method. The proposed method can also be regarded as an extension of the recently developed two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7]. Under some mild conditions, we show that it is globally convergent with the Armijo-type line search.
Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

The author thanks the anonymous referees for their valuable comments and suggestions, which improved the presentation. This work is supported by the NSF of China (Grants 11301041, 11371154, 71371065, and 71371195), the Ministry of Education Humanities and Social Sciences Project (Grant 12YJC790027), a project funded by the China Postdoctoral Science Foundation, and the Natural Science Foundation of Hunan Province.
References

[1] J. B. Rosen, "The gradient projection method for nonlinear programming. Part I. Linear constraints," SIAM Journal on Applied Mathematics, vol. 8, no. 1, pp. 181–217, 1960.
[2] D. Z. Du and X. S. Zhang, "A convergence theorem of Rosen's gradient projection method," Mathematical Programming, vol. 36, no. 2, pp. 135–144, 1986.
[3] D. Du, "Remarks on the convergence of Rosen's gradient projection method," Acta Mathematicae Applicatae Sinica, vol. 3, no. 2, pp. 270–279, 1987.
[4] D. Z. Du and X. S. Zhang, "Global convergence of Rosen's gradient projection method," Mathematical Programming, vol. 44, no. 1, pp. 357–366, 1989.
[5] E. Polak and G. Ribière, "Note sur la convergence de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, vol. 16, pp. 35–43, 1969.
[6] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
[7] W. Cheng, "A two-term PRP-based descent method," Numerical Functional Analysis and Optimization, vol. 28, no. 11, pp. 1217–1230, 2007.
[8] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
[9] G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523–539, 2007.
[10] N. Andrei, "Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," European Journal of Operational Research, vol. 204, no. 3, pp. 410–420, 2010.
[11] G. Yu, L. Guan, and W. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optimization Methods and Software, vol. 23, no. 2, pp. 275–293, 2008.
[12] G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
[13] Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
[14] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
[15] Z. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927–936, 2011.
[16] Y.-Y. Chen and S.-Q. Du, "Nonlinear conjugate gradient methods with Wolfe type line search," Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
[17] S. Y. Liu, Y. Y. Huang, and H. W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, Article ID 305643, 12 pages, 2014.
[18] Z. F. Dai, D. H. Li, and F. H. Wen, "Robust conditional value-at-risk optimization for asymmetrically distributed asset returns," Pacific Journal of Optimization, vol. 8, no. 3, pp. 429–445, 2012.
[19] C. Huang, C. Peng, X. Chen, and F. Wen, "Dynamics analysis of a class of delayed economic model," Abstract and Applied Analysis, vol. 2013, Article ID 962738, 12 pages, 2013.
[20] C. Huang, X. Gong, X. Chen, and F. Wen, "Measuring and forecasting volatility in Chinese stock market using HAR-CJ-M model," Abstract and Applied Analysis, vol. 2013, Article ID 143194, 13 pages, 2013.
[21] G. Qin, C. Huang, Y. Xie, and F. Wen, "Asymptotic behavior for third-order quasi-linear differential equations," Advances in Difference Equations, vol. 30, no. 13, pp. 305–312, 2013.
[22] C. Huang, H. Kuang, X. Chen, and F. Wen, "An LMI approach for dynamics of switched cellular neural networks with mixed delays," Abstract and Applied Analysis, vol. 2013, Article ID 870486, 8 pages, 2013.
[23] Q. F. Cui, Z. G. Wang, X. Chen, and F. Wen, "Sufficient conditions for non-Bazilevič functions," Abstract and Applied Analysis, vol. 2013, Article ID 154912, 4 pages, 2013.
[24] F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360–371, 2009.
[25] F. Wen and Z. Liu, "A copula-based correlation measure and its application in Chinese stock market," International Journal of Information Technology & Decision Making, vol. 8, no. 4, pp. 787–801, 2009.
[26] F. Wen, Z. He, and X. Chen, "Investors' risk preference characteristics and conditional skewness," Mathematical Problems in Engineering, vol. 2014, Article ID 814965, 14 pages, 2014.
[27] F. Wen, X. Gong, Y. Chao, and X. Chen, "The effects of prior outcomes on risky choice: evidence from the stock market," Mathematical Problems in Engineering, vol. 2014, Article ID 272518, 8 pages, 2014.
[28] F. Wen, Z. He, X. Gong, and A. Liu, "Investors' risk preference characteristics based on different reference point," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 158386, 9 pages, 2014.
[29] Z. Dai and F. Wen, "Robust CVaR-based portfolio optimization under a general affine data perturbation uncertainty set," Journal of Computational Analysis and Applications, vol. 16, no. 1, pp. 93–103, 2014.
[30] J. M. Martínez, E. A. Pilotta, and M. Raydan, "Spectral gradient methods for linearly constrained optimization," Journal of Optimization Theory and Applications, vol. 125, no. 3, pp. 629–651, 2005.
[31] C. Li and D. H. Li, "An extension of the Fletcher-Reeves method to linear equality constrained optimization problem," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10909–10914, 2013.
[32] C. Li, L. Fang, and X. Cui, "A feasible Fletcher-Reeves method to linear equality constrained optimization problem," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '10), pp. 30–33, December 2010.
[33] L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561–572, 2006.
[34] N. I. M. Gould, M. E. Hribar, and J. Nocedal, "On the solution of equality constrained quadratic programming problems arising in optimization," SIAM Journal on Scientific Computing, vol. 23, no. 4, pp. 1376–1395, 2001.
[35] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Springer, New York, NY, USA, 1981.
[36] J. Asaadi, "A computational comparison of some non-linear programs," Mathematical Programming, vol. 4, no. 1, pp. 144–154, 1973.
[29] Z Dai and F Wen ldquoRobust CVaR-based portfolio optimizationunder a genal a ne data perturbation uncertainty setrdquo Journal ofComputational Analysis and Applications vol 16 no 1 pp 93ndash103 2014
[30] J M Martınez E A Pilotta and M Raydan ldquoSpectral gradientmethods for linearly constrained optimizationrdquo Journal of
Optimization Theory and Applications vol 125 no 3 pp 629ndash651 2005
[31] C Li andD H Li ldquoAn extension of the Fletcher-Reeves methodto linear equality constrained optimization problemrdquo AppliedMathematics andComputation vol 219 no 23 pp 10909ndash109142013
[32] C Li L Fang and X Cui ldquoA feasible fletcher-reeves method tolinear equality constrained optimization problemrdquo in Proceed-ings of the International Conference on Apperceiving Computingand Intelligence Analysis (ICACIA rsquo10) pp 30ndash33 December2010
[33] L Zhang W J Zhou and D H Li ldquoGlobal convergence ofa modified Fletcher-Reeves conjugate gradient method withArmijo-type line searchrdquo Numerische Mathematik vol 104 no2 pp 561ndash572 2006
[34] N IM GouldM E Hribar and J Nocedal ldquoOn the solution ofequality constrained quadratic programming problems arisingin optimizationrdquo SIAM Journal on Scientific Computing vol 23no 4 pp 1376ndash1395 2001
[35] W Hock and K Schittkowski Test examples for nonlinear pro-gramming codes LectureNotes in Economics andMathematicalSystems Springer New York NY USA 1981
[36] J Asaadi ldquoA computational comparison of some non-linearprogramsrdquo Mathematical Programming vol 4 no 1 pp 144ndash154 1973
Table 6: Test results for Problem 10 with given initial points.

n      n_c     EMPRP            FFR              SPCG
               Time     Iter    Time     Iter    Time     Iter
200    99      0.1842   88      0.1984   98      0.1488   92
400    199     0.2908   104     0.3260   113     0.2348   100
600    299     0.8256   132     0.9422   142     0.9804   182
800    399     1.6078   136     1.7512   148     2.6177   259
1000   499     2.9801   148     2.8192   156     3.1790   268
2000   999     7.8045   199     5.1202   212     8.1288   320
3000   1499    12.8042  248     9.9035   256     14.7862  360
4000   1999    19.2432  304     14.4258  298     22.1268  386
5000   2499    29.6846  382     20.1932  378     32.1422  393
From Tables 2-6 we can see that the EMPRP method and the FFR method in [32] perform better than the SPCG method of Martínez et al. [30] for solving large-scale linear equality constrained optimization problems. This is because the EMPRP and FFR methods are both first-order methods, whereas the SPCG method in Martínez et al. [30] must compute a matrix B_k by a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. However, since the EMPRP method also needs to compute a projection, the FFR method in [32] performs better than the EMPRP method as the test problems become large.
6 Conclusions
In this paper, we propose a new conjugate gradient projection method for solving the linear equality constrained optimization problem (1), which combines the two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] with the Rosen projection method. The proposed method can also be regarded as an extension of the recently developed two-term modified PRP conjugate gradient method of Cheng [7]. Under some mild conditions, we show that it is globally convergent with the Armijo-type line search.
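To make the construction concrete, the following is a minimal Python sketch of this kind of projected conjugate gradient iteration, under stated assumptions: the Rosen projection P = I - A^T (A A^T)^{-1} A is formed explicitly (a practical code would factorize A A^T instead), the two-term direction uses a common form of the modified PRP update, and all function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def rosen_projection(A):
    """Orthogonal projector onto null(A): P = I - A^T (A A^T)^{-1} A."""
    n = A.shape[1]
    return np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)

def emprp(f, grad, A, b, x0, max_iter=500, tol=1e-8, rho=0.5, delta=1e-4):
    """Sketch of a projected two-term PRP method with Armijo-type backtracking.
    Assumes the starting point is feasible (A x0 = b); since every search
    direction d satisfies A d = 0, all iterates remain feasible."""
    P = rosen_projection(A)
    x = x0.astype(float).copy()
    g = P @ grad(x)                      # projected gradient
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo-type line search: accept t with f(x + t d) <= f(x) + delta*t*g^T d
        t, fx, gd = 1.0, f(x), g @ d
        while f(x + t * d) > fx + delta * t * gd:
            t *= rho
        x_new = x + t * d
        g_new = P @ grad(x_new)
        if np.linalg.norm(g_new) < tol:  # converged; avoids 0/0 below
            x, g = x_new, g_new
            break
        # two-term modified PRP direction; by construction g^T d = -||g||^2,
        # so d is always a descent direction and the backtracking terminates
        beta = g_new @ (g_new - g) / (g @ g)
        d = -(1.0 + beta * (g_new @ d) / (g_new @ g_new)) * g_new + beta * d
        x, g = x_new, g_new
    return x
```

For example, minimizing f(x) = (1/2)||x||^2 subject to sum(x) = 1 from the feasible point e_1 returns the point with all components equal to 1/n, the projection of the origin onto the constraint set.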
Conflict of Interests
The author declares that there is no conflict of interestsregarding the publication of this paper
Acknowledgments
The author thanks the anonymous referees for their valuable comments and suggestions, which improved the presentation. This work is supported by the NSF of China (Grants 11301041, 11371154, 71371065, and 71371195), the Ministry of Education Humanities and Social Sciences Project (Grant 12YJC790027), a project funded by the China Postdoctoral Science Foundation, and the Natural Science Foundation of Hunan Province.
References
[1] J. B. Rosen, "The gradient projection method for nonlinear programming. Part I. Linear constraints," SIAM Journal on Applied Mathematics, vol. 8, no. 1, pp. 181-217, 1960.
[2] D. Z. Du and X. S. Zhang, "A convergence theorem of Rosen's gradient projection method," Mathematical Programming, vol. 36, no. 2, pp. 135-144, 1986.
[3] D. Du, "Remarks on the convergence of Rosen's gradient projection method," Acta Mathematicae Applicatae Sinica, vol. 3, no. 2, pp. 270-279, 1987.
[4] D. Z. Du and X. S. Zhang, "Global convergence of Rosen's gradient projection method," Mathematical Programming, vol. 44, no. 1, pp. 357-366, 1989.
[5] E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, vol. 16, pp. 35-43, 1969.
[6] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94-112, 1969.
[7] W. Cheng, "A two-term PRP-based descent method," Numerical Functional Analysis and Optimization, vol. 28, no. 11, pp. 1217-1230, 2007.
[8] W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170-192, 2005.
[9] G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523-539, 2007.
[10] N. Andrei, "Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," European Journal of Operational Research, vol. 204, no. 3, pp. 410-420, 2010.
[11] G. Yu, L. Guan, and W. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optimization Methods and Software, vol. 23, no. 2, pp. 275-293, 2008.
[12] G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11-21, 2009.
[13] Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625-1635, 2011.
[14] Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310-319, 2013.
[15] Z. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927-936, 2011.
[16] Y.-Y. Chen and S.-Q. Du, "Nonlinear conjugate gradient methods with Wolfe type line search," Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
[17] S. Y. Liu, Y. Y. Huang, and H. W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, Article ID 305643, 12 pages, 2014.
[18] Z. F. Dai, D. H. Li, and F. H. Wen, "Robust conditional value-at-risk optimization for asymmetrically distributed asset returns," Pacific Journal of Optimization, vol. 8, no. 3, pp. 429-445, 2012.
[19] C. Huang, C. Peng, X. Chen, and F. Wen, "Dynamics analysis of a class of delayed economic model," Abstract and Applied Analysis, vol. 2013, Article ID 962738, 12 pages, 2013.
[20] C. Huang, X. Gong, X. Chen, and F. Wen, "Measuring and forecasting volatility in Chinese stock market using HAR-CJ-M model," Abstract and Applied Analysis, vol. 2013, Article ID 143194, 13 pages, 2013.
[21] G. Qin, C. Huang, Y. Xie, and F. Wen, "Asymptotic behavior for third-order quasi-linear differential equations," Advances in Difference Equations, vol. 30, no. 13, pp. 305-312, 2013.
[22] C. Huang, H. Kuang, X. Chen, and F. Wen, "An LMI approach for dynamics of switched cellular neural networks with mixed delays," Abstract and Applied Analysis, vol. 2013, Article ID 870486, 8 pages, 2013.
[23] Q. F. Cui, Z. G. Wang, X. Chen, and F. Wen, "Sufficient conditions for non-Bazilevič functions," Abstract and Applied Analysis, vol. 2013, Article ID 154912, 4 pages, 2013.
[24] F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360-371, 2009.
[25] F. Wen and Z. Liu, "A copula-based correlation measure and its application in Chinese stock market," International Journal of Information Technology & Decision Making, vol. 8, no. 4, pp. 787-801, 2009.
[26] F. Wen, Z. He, and X. Chen, "Investors' risk preference characteristics and conditional skewness," Mathematical Problems in Engineering, vol. 2014, Article ID 814965, 14 pages, 2014.
[27] F. Wen, X. Gong, Y. Chao, and X. Chen, "The effects of prior outcomes on risky choice: evidence from the stock market," Mathematical Problems in Engineering, vol. 2014, Article ID 272518, 8 pages, 2014.
[28] F. Wen, Z. He, X. Gong, and A. Liu, "Investors' risk preference characteristics based on different reference point," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 158386, 9 pages, 2014.
[29] Z. Dai and F. Wen, "Robust CVaR-based portfolio optimization under a general affine data perturbation uncertainty set," Journal of Computational Analysis and Applications, vol. 16, no. 1, pp. 93-103, 2014.
[30] J. M. Martínez, E. A. Pilotta, and M. Raydan, "Spectral gradient methods for linearly constrained optimization," Journal of Optimization Theory and Applications, vol. 125, no. 3, pp. 629-651, 2005.
[31] C. Li and D. H. Li, "An extension of the Fletcher-Reeves method to linear equality constrained optimization problem," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10909-10914, 2013.
[32] C. Li, L. Fang, and X. Cui, "A feasible Fletcher-Reeves method to linear equality constrained optimization problem," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '10), pp. 30-33, December 2010.
[33] L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561-572, 2006.
[34] N. I. M. Gould, M. E. Hribar, and J. Nocedal, "On the solution of equality constrained quadratic programming problems arising in optimization," SIAM Journal on Scientific Computing, vol. 23, no. 4, pp. 1376-1395, 2001.
[35] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Springer, New York, NY, USA, 1981.
[36] J. Asaadi, "A computational comparison of some non-linear programs," Mathematical Programming, vol. 4, no. 1, pp. 144-154, 1973.