
A NEW STEP-SIZE SKILL FOR SOLVING A CLASS OF NONLINEAR PROJECTION EQUATIONS

Sun Defeng (Institute of Applied Mathematics, Academia Sinica, Beijing, China)

Abstract

In this paper, a new step-size skill for a projection and contraction method [10] for linear programming is generalized to an iterative method [22] for solving nonlinear projection equations. For linear programming, our scheme is the same as that of [10]. For the complementarity problem and related problems, we give an improved algorithm by combining the new step-size skill with ALGORITHM B discussed in [22]. Numerical results are provided.

1. Introduction

In [11], an iterative projection and contraction (PC) method for linear complementarity problems was proposed. In practice the algorithm behaves effectively, but in theory the step-size cannot be proved to be bounded away from zero, so no statement can be made about the rate of convergence. Although a variant of the prime PC algorithm with constant step-size for linear programming has a linear convergence rate [9], it converges much more slowly in practice. In [10], He proposed a new step-size rule for the prime PC algorithm for linear programming such that the resulting algorithm has a globally linear convergence property, and showed that the new algorithm works better in practice than the prime PC algorithm. In this paper, we introduce a new step-size skill for a projection and contraction method for nonlinear complementarity problems and their extensions [22]. To obtain this, we first make a slight modification of the prime algorithm in [22], and then give the new step-size rule. For linear programming, our ALGORITHM C discussed in this paper is the same as that of [10]. For the complementarity problem and related problems, we give ALGORITHM D by combining ALGORITHM C of this paper with ALGORITHM B of [22]. From both the theoretical and the computational viewpoint, ALGORITHM D is satisfactory.

Assume that the mapping F : X ⊂ R^n → R^n is continuous and X is a closed convex subset of R^n. We will consider the solution of the following projection equation:

    P_X[x − F(x)] = x,    (1.1)

where for any y ∈ R^n,

    P_X(y) = argmin{ ||x − y|| : x ∈ X }.    (1.2)

Here ||·|| denotes the l_2-norm of R^n or its induced matrix norm on R^{n×n}. Linear programming, the nonlinear complementarity problem and the nonlinear variational inequality problem can all be cast as special cases of (1.1); see [3] for a proof. For any β > 0, define

    e_X(x, β) = x − P_X[x − βF(x)].    (1.3)


Without causing any confusion, we will use e(x, β) to represent e_X(x, β). It is easy to see that x is a solution of (1.1) if and only if e(x, β) = 0 for some (or any) β > 0. Denote

    X* = { x ∈ X | x is a solution of (1.1) }.
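When X is a box, which is the case used for ALGORITHM B and ALGORITHM D in Section 3 and in all of the numerical examples, the projection (1.2) is just componentwise clipping and e(x, β) in (1.3) is cheap to evaluate. The following small sketch (an illustration added here, not code from the paper) checks the equivalence stated just above on a two-dimensional example of our own choosing.

```python
import numpy as np

def project_box(y, l, u):
    """P_X(y) for the box X = {x : l <= x <= u}: componentwise clipping."""
    return np.clip(y, l, u)

def residual(x, beta, F, l, u):
    """e_X(x, beta) = x - P_X[x - beta * F(x)], cf. (1.3)."""
    return x - project_box(x - beta * F(x), l, u)

# Illustrative map (not from the paper): F(x) = x + 1 on X = [0, 2]^2.
# Its unique solution is x* = (0, 0), which sits on the boundary of X,
# and e(x*, beta) = 0 for every beta > 0, as stated above.
F = lambda x: x + 1.0
l, u = np.zeros(2), 2.0 * np.ones(2)
x_star = np.zeros(2)
for beta in (0.1, 1.0, 10.0):
    assert np.allclose(residual(x_star, beta, F, l, u), 0.0)
```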

Definition 1. The mapping F : R^n → R^n is said to
(a) be monotone over a set X if

    [F(x) − F(y)]^T (x − y) ≥ 0,  for all x, y ∈ X;    (1.5)

(b) be pseudomonotone over X if

    F(y)^T (x − y) ≥ 0 implies F(x)^T (x − y) ≥ 0,  for all x, y ∈ X.    (1.6)
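For the affine maps F(x) = Dx + c used in Examples 1 and 2, the inequality in (a) reduces to (x − y)^T D (x − y) ≥ 0, so monotonicity over R^n is exactly positive semidefiniteness of the symmetric part of D. A quick numerical illustration (with an arbitrary small matrix of our own choosing, not one of the test matrices):

```python
import numpy as np

# For F(x) = D x + c we have [F(x) - F(y)]^T (x - y) = (x - y)^T D (x - y),
# so F is monotone over R^n iff the symmetric part (D + D^T)/2 is positive semidefinite.
D = np.array([[4.0, -2.0],
              [1.0,  4.0]])                  # illustrative nonsymmetric matrix, not from the paper
print(np.linalg.eigvalsh(0.5 * (D + D.T)))   # eigenvalues of the symmetric part: all >= 0

rng = np.random.default_rng(0)
for _ in range(1000):                        # spot-check Definition 1(a) directly
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert (D @ (x - y)) @ (x - y) >= -1e-12
```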

2. Basic Preliminaries

Throughout this paper, we assume that X is a nonempty closed convex subset of R^n and F(x) is continuous over X.

Lemma 1 [18]. If F(x) is continuous over a nonempty compact convex set Y, then there exists y* ∈ Y such that

    F(y*)^T (y − y*) ≥ 0,  for all y ∈ Y.

Lemma 2 [23]. For the projection operator P_X(·), we have
(i) when y ∈ X, [z − P_X(z)]^T [y − P_X(z)] ≤ 0, for all z ∈ R^n;    (2.1)
(ii) ||P_X(z) − P_X(y)|| ≤ ||z − y||, for all y, z ∈ R^n.    (2.2)

Lemma 3 [2, 5]. Given x ∈ R^n and d ∈ R^n, the function θ defined by θ(β) = ||x − P_X[x − βd]||/β is antitone (nonincreasing) in β > 0.
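The display defining θ in Lemma 3 is not fully legible here; the property from [2, 5] that is used later (in the proof of Theorem 5) is that, for fixed x and d, the quotient ||x − P_X[x − βd]||/β is nonincreasing in β > 0. The following sketch (our own check, assuming a box X) verifies this numerically.

```python
import numpy as np

def project_box(y, l, u):
    return np.clip(y, l, u)

# theta(beta) = ||x - P_X[x - beta * d]|| / beta should be nonincreasing in beta
# (the property of Lemma 3 used later); we simply test it on a random box example.
rng = np.random.default_rng(1)
l, u = -np.ones(5), np.ones(5)
x = project_box(rng.normal(size=5), l, u)      # a point of X
d = rng.normal(size=5)
betas = np.logspace(-3, 3, 200)
theta = [np.linalg.norm(x - project_box(x - b * d, l, u)) / b for b in betas]
assert all(a >= b - 1e-12 for a, b in zip(theta, theta[1:]))
```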

Choose an arbitrary constant η ∈ (0, 1) (e.g., η = 1/2). When x ∈ X \ X*, define

    η(x) = { max{ η, 1 − t(x)/||e(x, 1)||² },  if t(x) > 0,
           { 1,                                 otherwise,

where t(x) = [F(x) − F(P_X[x − F(x)])]^T e(x, 1). For any x ∈ X and β > 0, define

    ψ(x, β) = ||e(x, β)||² / β.    (2.6)

From (i) of Lemma 2, taking z = x − βF(x) and y = x, we have

    F(x)^T e(x, β) ≥ ||e(x, β)||²/β = ψ(x, β).    (2.7)

By (2.5)–(2.7), and noticing that for any x ∈ X \ X*, η(x) ∈ [η, 1], we have


Theorem 1. Let φ(x, β) and ψ(x, β) be defined as in (2.5) and (2.6), respectively. Then for any β > 0,
(i) φ(x, β) ≥ ψ(x, β), for all x ∈ X;
(ii) x ∈ X and φ(x, β) = 0 iff x ∈ X and ψ(x, β) = 0 iff x ∈ X*.

For x ∈ X \ X*, define

    s(x) = { … ,
           { 1,  otherwise,

where t(x) = [F(x) − F(P_X[x − F(x)])]^T e(x, 1). It is easy to see that 0 < s(x) ≤ 1.

Theorem 2. Suppose that F(x) is continuous over X and η ∈ (0, 1). If S ⊂ X \ X* is a compact set, then there exists a positive constant δ (≤ 1) such that for all x ∈ S and β ∈ (0, δ], when s(x) < 1, we have

Proof. Note that for any x ∈ X \ X* with s(x) < 1, we have

which, together with the definition of η(x), means that η(x) = η. The rest of the proof is similar to that of Theorem 2.2 in [22].

In [22], we proposed a projection and contraction method (ALGORITHM A) for solving nonlinear projection equations.

ALGORITHM A

Given x^0 ∈ X, positive constants s ∈ (0, +∞), η and α ∈ (0, 1), and 0 < A_1 ≤ A_2 < 2 (in [22] we just take A_1 = A_2).
For k = 0, 1, ..., if x^k ∉ X*, then do
1. Determine β_k = s α^{m_k}, where m_k is the smallest integer m such that

holds.
2. Calculate g(x^k, β_k) := F(P_X[x^k − β_k F(x^k)]).
3. Calculate

    α_k = η φ(x^k, β_k) / ||g(x^k, β_k)||².

4. Take γ_k ∈ [A_1, A_2] (in [22] we just take γ_k = γ = A_1 = A_2) and set

When X is of the form X = {x ∈ R^n | l ≤ x ≤ u}, where l and u are two vectors of {R ∪ {∞}}^n, we gave an improvement of ALGORITHM A, called ALGORITHM B, in [22]. As a comparison to ALGORITHM D, the iterative form of ALGORITHM B will be listed in the last part of Section 3.


3. Algorithms and Convergence Properties

If we set

    g(x, β) = F(P_X[x − βF(x)]),   β > 0,    (3.1)

then we have

Theorem 3. Suppose that F(x) is continuous and pseudomonotone over X. If X* ≠ ∅ and there exists a positive number β such that (2.9) holds for some x ∈ X \ X*, then

    (x − x*)^T g(x, β) ≥ φ(x, β) − [1 − η(x)] ψ(x, β),   for all x* ∈ X*.    (3.2)

Proof. Since X* ≠ ∅, from [3] we know that for any x* ∈ X* and y ∈ X we have

    F(x*)^T (y − x*) ≥ 0,

which, together with the pseudomonotonicity of F(x), means

    F(y)^T (y − x*) ≥ 0,  for all y ∈ X.    (3.3)

Therefore,

    (x − x*)^T g(x, β) = (x − x*)^T F(P_X[x − βF(x)])
        = e(x, β)^T F(P_X[x − βF(x)]) + {P_X[x − βF(x)] − x*}^T F(P_X[x − βF(x)])
        ≥ e(x, β)^T F(P_X[x − βF(x)])     (using (3.3))
        = −[F(x) − F(P_X[x − βF(x)])]^T e(x, β) + F(x)^T e(x, β)
        ≥ [η(x) − 1] ψ(x, β) + F(x)^T e(x, β),    (3.4)

where the last inequality follows from (2.9). Therefore, (3.2) follows.

Now, we state our algorithm with the new step-size rule.

ALGORITHM C

Given x^0 ∈ X, positive constants η, α ∈ (0, 1) and 0 < A_1 ≤ A_2 < 2.
For k = 0, 1, ..., if x^k ∉ X*, then do
1. Calculate η(x^k) and s(x^k). If s(x^k) = 1, let β_k = 1; otherwise determine β_k = s(x^k) α^{m_k}, where m_k is the smallest nonnegative integer m such that (3.5) holds.
2. Calculate g(x^k, β_k) by (3.1).
3. Calculate

4. Take γ_k ∈ [A_1, A_2] and set


Remark 1. Theorem 2 ensures that β_k can be obtained in a finite number of trials if s(x^k) < 1. When s(x^k) = 1, (3.5) holds for m = 0.

Remark 2. When F(x) = Dx + c and D is a skew-symmetric matrix (i.e., D^T = −D), we have η(x^k) = s(x^k) = 1, which results in β_k = 1 for each step. So for linear programming (when translated into an equivalent linear complementarity problem), our ALGORITHM C is the same as that of [10].
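Several displays of ALGORITHM C (the acceptance test (3.5), the formula for α_k, and the update of x^{k+1}), as well as the display defining φ(x, β), are not legible in this copy. The sketch below is therefore only a rough illustration of a projection–contraction iteration of this general type: it assumes φ(x, β) = F(x)^T e(x, β), the test β[F(x) − F(P_X[x − βF(x)])]^T e(x, β) ≤ (1 − η)||e(x, β)||², the step α_k = η φ(x^k, β_k)/||g(x^k, β_k)||² as in step 3 of ALGORITHM A, and a relaxed projected update; it also starts the backtracking at β = 1 rather than at s(x^k). These choices are consistent with (2.7), Theorem 3 and Remark 2, but they are assumptions, not the paper's exact formulas.

```python
import numpy as np

def residual(x, beta, F, proj):
    """e(x, beta) = x - P_X[x - beta * F(x)]  (1.3)."""
    return x - proj(x - beta * F(x))

def pc_sketch(F, proj, x0, eta=0.5, alpha=0.5, gamma=1.0,
              tol=1e-12, max_iter=10_000, max_backtracks=60):
    """Rough sketch of a projection-contraction iteration in the spirit of
    ALGORITHM C; phi, the acceptance test, and the update rule are ASSUMED
    (see the remarks above), not taken verbatim from the paper."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        e1 = residual(x, 1.0, F, proj)
        if np.dot(F(x), e1) <= tol:             # assumed stopping test, phi(x, 1) <= eps^2
            return x, k
        beta = 1.0                              # Armijo-type search: beta = alpha**m
        for _ in range(max_backtracks):
            e = residual(x, beta, F, proj)
            y = proj(x - beta * F(x))           # trial point P_X[x - beta * F(x)]
            if beta * np.dot(F(x) - F(y), e) <= (1.0 - eta) * np.dot(e, e):
                break                           # assumed acceptance test, cf. (2.9)
            beta *= alpha
        g = F(y)                                # g(x, beta) = F(P_X[x - beta F(x)])  (3.1)
        if np.dot(g, g) == 0.0:                 # then y itself already solves (1.1)
            return y, k
        phi = np.dot(F(x), e)                   # assumed phi(x, beta) = F(x)^T e(x, beta)
        step = eta * phi / np.dot(g, g)         # cf. step 3 of ALGORITHM A
        x = proj(x - gamma * step * beta * g)   # assumed relaxed update, gamma in (0, 2)
    return x, max_iter
```

With a box X, the routine project_box from the earlier sketch can be passed as proj; a usage example with the parameter choices of Section 4 is given there.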

Theorem 4. Suppose that F(x) is continuous and pseudomonotone over X. If X* ≠ ∅, then for any x* ∈ X*, the sequence {x^k} generated by ALGORITHM C satisfies

Proof. From (i) of Lemma 2, we have


Therefore,

    ||x^{k+1} − x*||² ≤ ||x^k − x*||² − 2γ_k β_k (x^k − x*)^T g(x^k, β_k)
        − … ||e(x^k, β_k) − β_k[F(x^k) − F(P_X[x^k − β_k F(x^k)])]||² + … .

Hence, from Theorem 3 and the above formula, we have

    ||x^{k+1} − x*||² ≤ ||x^k − x*||² − 2γ_k β_k { φ(x^k, β_k) − [1 − η(x^k)] ψ(x^k, β_k) } − … .


After rearrangement, we have

    ||x^{k+1} − x*||² ≤ ||x^k − x*||² − … η(x^k) ψ(x^k, β_k),

which, together with (i) of Lemma 2, means

which proves (3.10).

Define

    dist(x, X*) = inf{ ||x − x*|| : x* ∈ X* }.    (3.11)

Since (3.10) holds for any x* ∈ X*, then from Theorem 4,

i.e., the sequence {x^k} is Fejér-monotone relative to X*.

Theorem 5. If the conditions of Theorem 4 hold, then there exists x* ∈ X* such that x^k → x* as k → ∞.

Proof. For the sake of simplicity, we take γ_k ≡ 1. Let x* ∈ X*. It is easy to see that every Fejér-monotone sequence is bounded. Suppose that

then {x^k} ⊂ S = { x ∈ X : … ≤ dist(x, X*), ||x − x*|| ≤ ||x^0 − x*|| } and S is a compact set. Since S ⊂ X \ X* is a compact set, from Theorem 2 there exists a positive number δ (≤ 1) such that for all x ∈ S with s(x) < 1 and β ∈ (0, δ], (2.9) holds. Hence for each k with s(x^k) < 1, we have

    β_k ≥ min( αδ, s(x^k) ).    (3.14)


From the definition of s(x^k), we know that if s(x^k) < 1, then

    [F(x^k) − F(P_X[x^k − F(x^k)])]^T e(x^k, 1) > 0  and  η(x^k) = η,

and

Since {x^k} is bounded and F(x) is continuous over X, there exists a positive constant M such that

    ||F(x^k)||, ||F(P_X[x^k − F(x^k)])|| ≤ M.    (3.16)

From the continuity of F and {x^k} ⊂ S ⊂ X \ X*, we know that there exists a positive constant δ_0 such that

    ||e(x^k, 1)|| ≥ δ_0.    (3.17)

From (3.15)–(3.17), for each k with s(x^k) < 1 we have

Substituting (3.18) into (3.14) gives

    β_k ≥ min{ αδ, (1 − η) δ_0 / (2M) }

for all k such that s(x^k) < 1. From ALGORITHM C we know that if s(x^k) = 1, then β_k = 1. Hence there exists a positive constant δ′ (≤ 1) such that

    1 ≥ β_k ≥ δ′ > 0,  for all k.    (3.21)

From the continuity of F(x) and the boundedness of S,

    sup_k ||e(x^k, β_k) − β_k[F(x^k) − F(P_X[x^k − β_k F(x^k)])]|| < ∞.    (3.22)

Combining (3.17), (3.21)–(3.22) and Lemma 3, we have

    inf_k { η(x^k) β_k φ(x^k, β_k) ||e(x^k, β_k)||² / ||e(x^k, β_k) − β_k[F(x^k) − F(P_X[x^k − β_k F(x^k)])]||² }
        ≥ inf_k { η² … ||e(x^k, 1)||⁴ / ||e(x^k, β_k) − β_k[F(x^k) − F(P_X[x^k − β_k F(x^k)])]||² } = ε_0 > 0.    (3.23)

From (3.13) there exists an integer k_0 > 0 such that for all k ≥ k_0,


On the other hand, (3.12), (3.23) and (3.24) give

which contradicts (3.13). Therefore,

    lim_{k→∞} dist(x^k, X*) = 0.    (3.25)

From (3.25) and (3.12) there exists x̄* ∈ X* such that

    x^k → x̄*  as  k → ∞.

When X is of the following form

    X = { x ∈ R^n | l ≤ x ≤ u },

where l and u are two vectors of {R ∪ {∞}}^n, we can give a modification of ALGORITHM C. For any x ∈ X and β > 0, denote

    N(x, β) = { i | (x_i = l_i and (g(x, β))_i ≥ 0) or (x_i = u_i and (g(x, β))_i ≤ 0) },
    B(x, β) = {1, ..., n} \ N(x, β).

Denote g_N(x, β) and g_B(x, β) as follows:

    (g_N(x, β))_i = (g(x, β))_i if i ∈ N(x, β), and 0 otherwise;
    (g_B(x, β))_i = (g(x, β))_i if i ∈ B(x, β), and 0 otherwise.    (3.28)
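Under the reading of (3.28) given above (the components of g are zeroed out on N(x, β), where x sits at a bound and g points out of the box, and kept on the complementary set B(x, β)), the two pieces are straightforward to form; the helper below is our own illustration.

```python
import numpy as np

def split_g(x, g, l, u):
    """Split g(x, beta) into (g_N, g_B) as in (3.28), under the reading that
    N(x, beta) = {i : (x_i = l_i and g_i >= 0) or (x_i = u_i and g_i <= 0)}
    and B(x, beta) is its complement."""
    in_N = ((x == l) & (g >= 0)) | ((x == u) & (g <= 0))
    g_N = np.where(in_N, g, 0.0)
    g_B = np.where(in_N, 0.0, g)
    return g_N, g_B

# At the lower bound in the first coordinate with g pointing out of the box,
# that component goes into g_N and is dropped from g_B:
x = np.array([0.0, 0.5])
g = np.array([2.0, -1.0])
print(split_g(x, g, np.zeros(2), np.ones(2)))   # (array([2., 0.]), array([ 0., -1.]))
```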

As a comparison, we first list ALGORITHM B [22].

ALGORITHM B (an improvement of ALGORITHM A)

Given x^0 ∈ X, positive constants s ∈ (0, +∞), η and α ∈ (0, 1), and 0 < A_1 ≤ A_2 < 2 (in [22] we just take A_1 = A_2).
For k = 0, 1, ..., if x^k ∉ X*, then do
1. This step is the same as step 1 of ALGORITHM A.
2. Calculate g(x^k, β_k) and g_B(x^k, β_k) by (3.1) and (3.28), respectively.
3. Calculate

    α_k = η φ(x^k, β_k) / ||g_B(x^k, β_k)||².

4. Take γ_k ∈ [A_1, A_2] (in [22] we just take γ_k = γ = A_1 = A_2) and set

Now we describe ALGORITHM D.

ALGORITHM D (an improvement of ALGORITHM C)

Given x^0 ∈ X, positive constants η, α ∈ (0, 1) and 0 < A_1 ≤ A_2 < 2.
1. This step is the same as step 1 of ALGORITHM C.
2. Calculate g(x^k, β_k) and g_B(x^k, β_k) by (3.1) and (3.28), respectively.


3. Calculate

4. Take γ_k ∈ [A_1, A_2] and set

4. Numerical Experiments

In the following examples, we take η = α = 0.5 and A_1 = A_2 = 1.95. (Numerical results show that when γ_k approaches 2 the resulting algorithms behave better. This phenomenon was also observed by He [10] for solving linear programming; the reason may be that some uncertain terms are lost by enlarging the inequalities.) We use φ(x, 1) ≤ ε² as the stopping criterion, where ε is a small nonnegative number. In practice, we use ALGORITHM D instead of ALGORITHM C, although it is reported in [10] that for linear programming ALGORITHM C behaves better than the prime algorithm. When using ALGORITHM B of [22], we use s(x^k) and η(x^k) in place of s and η, respectively. "ALGORITHM B" and "ALGORITHM D" will be abbreviated as Alg. B and Alg. D, respectively. For Alg. B and Alg. D, the computing cost of each (outer) iteration is nearly the same, and an inner iteration takes about half of the computing cost of an outer iteration. So the efficiency of Alg. B and Alg. D can be measured by the sum of the number of outer iterations and half of the number of inner iterations. Numerical results show that both Alg. B and Alg. D behave effectively, and Alg. D behaves slightly better than Alg. B does. One point that should be stressed is that the step-size in Alg. D can be proved to be bounded away from zero under a local Lipschitz condition on the mapping F, while no such conclusion holds for Alg. B. So both in practice and in theory, Alg. D is an appropriate choice. As a referee pointed out, it is easy to construct a small example that makes the present algorithms converge very slowly. Nevertheless, we did not find a more effective method under the conditions given in this paper.
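As a usage illustration only (the matrices D and vectors c of Examples 1 and 2 are not reproduced legibly here), the sketch given after Remark 2 can be run with the parameter choices of this section, η = α = 0.5 and γ_k ≡ 1.95, on a small stand-in affine problem over [0, 1]^n; the map below is our own monotone test map, not one of the paper's examples.

```python
import numpy as np
# pc_sketch is the illustrative routine given after Remark 2 above.

n = 10
D = np.eye(n) + np.diag(np.full(n - 1, -0.5), k=1)   # nonsymmetric, with positive definite symmetric part
c = -np.ones(n)
F = lambda x: D @ x + c
l, u = np.zeros(n), np.ones(n)
proj = lambda y: np.clip(y, l, u)

x, iters = pc_sketch(F, proj, x0=np.zeros(n),
                     eta=0.5, alpha=0.5, gamma=1.95, tol=n * 1e-14)
print(iters, np.linalg.norm(x - proj(x - F(x))))     # ||e(x, 1)|| at the returned point
```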

Example 1. This example is discussed in [1, 22]. F(x) = Dx + c, where c is a vector and D is a nonsymmetric matrix of the form

X = [l, u], where l = (0, ..., 0) and u = (1, ..., 1). We take ε² = n · 10^{−14}, where n is the dimension of the problem.


Table 1. Results for Example 1 with starting point (0, ..., 0)

Algorithm   Number of iterations (left) and number of inner iterations (right)
              n=10      n=50      n=100     n=200     n=500
Alg. B       18   8     5   7    11  18     9  20    12   …
Alg. D       18   9    19  10    15   4    16   6    13   …

Example 2. This example is a linear complementarity problem discussed in [1, 22]. F(x) = Dx + c, where c = (−1, ..., −1) and

We take ε² = n · 10^{−16}, where n is the dimension of the problem.

Table 2. Results for Example 2 with starting point (0, ..., 0)

Algorithm   Number of iterations (left) and number of inner iterations (right)
              n=10      n=50      n=100     n=200     n=500
Alg. B        …
Alg. D        …

Example 3. This example is a 4-dimensional nonlinear complementarity problem [14, 22]. We take ε² = 10^{−16}. For the starting point (0, 0, 0, 0), the number of iterations and the number of inner iterations for Alg. B are 63 and 5, respectively, and for Alg. D are 64 and 5, respectively.


Acknowledgement. Special thanks are due to Prof. Jiye Han and Dr. Bingsheng He for their guidance and help. I also thank two anonymous referees for very helpful comments and suggestions.


References

[1] Byong-hun Ahn, Iterative methods for linear complementarity problems with upperbounds and lowerbounds, Math. Prog., 26 (1983), 295-315.
[2] P. H. Calamai and J. J. Moré, Projected gradient methods for linearly constrained problems, Math. Prog., 39 (1987), 93-116.
[3] B. C. Eaves, On the basic theorem of complementarity, Math. Prog., 1 (1971), 68-75.
[4] M. Fukushima, Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems, Math. Prog., 53 (1992), 99-110.
[5] E. M. Gafni and D. P. Bertsekas, Two-metric projection methods for constrained optimization, SIAM J. Contr. Opti., 22 (1984), 936-964.
[6] P. T. Harker and J.-S. Pang, A damped-Newton method for the linear complementarity problem, in E. L. Allgower and K. Georg, eds., Computational Solutions of Nonlinear Systems of Equations, Lectures in Applied Mathematics, Vol. 26 (American Mathematical Society, Providence, RI, 1990), 265-284.
[7] P. T. Harker and J.-S. Pang, Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications, Math. Prog., 48 (1990), 161-220.
[8] P. T. Harker and B. Xiao, Newton's method for the nonlinear complementarity problem: A B-differentiable equation approach, Math. Prog., 48 (1990), 339-357.
[9] B. S. He, On a class of iterative projection and contraction methods for linear programming, JOTA, 78 (1993).
[10] B. S. He, Further developments in an iterative projection and contraction method for linear programming, J. Comp. Math., 11 (1993), 350-364.
[11] B. S. He, A projection and contraction method for a class of linear complementarity problems and its applications in convex quadratic programming, Appl. Math. Opti., 25 (1992), 247-262.
[12] S. Karamardian, Generalized complementarity problems, JOTA, 8 (1971), 161-168.
[13] S. Karamardian and S. Schaible, Seven kinds of monotone maps, JOTA, 66 (1990), 37-46.
[14] M. Kojima and S. Shindo, Extensions of Newton and quasi-Newton methods to systems of PC^1 equations, J. Oper. Res. Society of Japan, 29 (1986), 352-374.
[15] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekonomika i Matematicheskie Metody, 12 (1976), 747-756.
[16] C. E. Lemke, On complementary pivot theory, in Mathematics of the Decision Sciences, G. B. Dantzig and A. F. Veinott (eds.), 1968.
[17] L. Mathiesen, An algorithm based on a sequence of linear complementarity problems applied to a Walrasian equilibrium model: An example, Math. Prog., 37 (1987), 1-18.
[18] J. J. Moré, Coercivity conditions in nonlinear complementarity problems, SIAM Review, 16 (1974), 1-16.
[19] K. G. Murty, Linear Complementarity, Linear and Nonlinear Programming (Heldermann, Berlin, 1988).
[20] J. S. Pang and D. Chan, Iterative methods for variational and complementarity problems, Math. Prog., 24 (1982), 284-313.
[21] D. Sun, An iterative method for solving variational inequality problems and complementarity problems, Numerical Mathematics, A Journal of Chinese Universities, 16 (1994), 145-153.
[22] D. Sun, A projection and contraction method for the nonlinear complementarity problem and its extensions, Mathematica Numerica Sinica, 16 (1994), 183-194.
[23] E. H. Zarantonello, Projections on convex sets in Hilbert space and spectral theory, in E. H. Zarantonello, ed., Contributions to Nonlinear Functional Analysis (Academic Press, New York, 1971).