
Differential and Integral Equations, Volume 8, Number 1, January 1995, pp. 1–18.

MAXIMUM PRINCIPLE FOR STATE-CONSTRAINED OPTIMAL CONTROL PROBLEMS GOVERNED BY QUASILINEAR ELLIPTIC EQUATIONS*

Eduardo Casas
Departamento de Matemática Aplicada y Ciencias de la Computación
Universidad de Cantabria, 39005-Santander, Spain

Jiongmin Yong
Department of Mathematics, Fudan University, Shanghai 200433, China

(Submitted by: Roger Temam)

Abstract. In this paper, the authors study an optimal control problem for quasilinear elliptic partial differential equations with pointwise state constraints. Weak and strong optimality conditions of Pontryagin maximum principle type are derived. In proving these results, we penalize the state constraints and use, respectively, the Ekeland variational principle and an exact penalization method.

1. Introduction. In this paper, our aim is to prove Pontryagin's principle for pointwise state-constrained optimal control problems governed by very general quasilinear elliptic equations. The control is distributed and takes values in a bounded subset, not necessarily convex, of some Euclidean space. The cost functional is of Lagrange type.

Standard results on optimal control problems for linear elliptic equations with a convex control set and a convex functional can be found in [16]. In [1, 5], the results were extended to linear or semilinear equations with state constraints. In the framework of semilinear elliptic equations, the Pontryagin type principle was first proved in [2] for problems without state constraints; later, in [3, 4] and [21], different approaches were used to deal with the state-constrained case. In this paper, we improve the techniques of [3, 4] and [21] so that the extension of the results to quasilinear equations becomes possible. We remove the weak stability assumption made in [4] for the weak version of Pontryagin's principle (see §4). We also obtain the strong version of Pontryagin's principle, which was not carried out in [21]. As in [3, 4], to prove this strong principle, we assume a stability condition for the optimal cost functional with respect to small perturbations of the feasible state set. This leads to an exact penalization of the state constraint. The penalty functional used here is different from that in [3, 4], which allows us to shorten the proof.

Let us mention some other papers related to the present one. In [6], optimal control of quasilinear elliptic equations without state constraints was considered; for the evolution case in finite and infinite dimensions, see [10, 13, 18] and the references cited therein.

This paper is organized as follows. In §2, the optimal control problem is formulated and the state equation is studied. §3 is devoted to the derivation of the variation along given feasible pairs, which is needed to deal with the case of a not necessarily convex control set. The approach followed in this section is based on the method used in [12]. In §§4 and 5, we obtain the weak and strong Pontryagin maximum principles.

Received November 1992, in revised form June 1993.
*This work was completed while the authors were visiting the IMA, University of Minnesota, USA, and partially supported by the IMA. The first author was also partially supported by Dirección General de Investigación Científica y Técnica (Madrid) and the second by the NSF of China under Grant 19131050 and the Fok Ying Tung Education Foundation.

AMS Subject Classifications: 49K20, 35J65, 35J85.



2. Formulation of the problem. This section is devoted to a formulation of the control problem which will be studied in this paper. Our state equation is as follows:
$$
\begin{cases}
-\nabla\cdot a(x,\nabla y(x))=f(x,y(x),u(x)), & \text{in }\Omega,\\
y\big|_{\partial\Omega}=0.
\end{cases}
\tag{2.1}
$$
In what follows, we always assume that $\Omega$ is a bounded region in $\mathbb{R}^n$ with a $C^{1,\gamma}$ boundary $\partial\Omega$, for some $\gamma>0$, and that $U$ is a bounded measurable set in some Euclidean space. We use $|\cdot|$ as the norm of vectors in Euclidean spaces or of matrices, which can be identified from the context. Also, we let $\langle\cdot,\cdot\rangle$ denote the inner products or dualities in possibly different spaces. For any measurable set $S\subset\mathbb{R}^n$, we use $|S|$ to denote the Lebesgue measure of the set $S$.

We make the following assumptions.

(A1) The function $a:\bar\Omega\times\mathbb{R}^n\to\mathbb{R}^n$ is continuous. For each $x\in\bar\Omega$, $a(x,\cdot)$ is differentiable and $a_\zeta(\cdot,\cdot)$ is continuous (we use $\zeta$ as the dummy argument for $\nabla y$). Moreover, there exist constants $\alpha>1$, $0<\beta\le1$, $\Lambda\ge\lambda>0$ and $\kappa>0$, such that for all $x,\bar x\in\bar\Omega$, $\zeta,\xi\in\mathbb{R}^n$,
$$
\lambda(\kappa+|\zeta|)^{\alpha-2}|\xi|^2\le\langle a_\zeta(x,\zeta)\xi,\xi\rangle,\qquad |a_\zeta(x,\zeta)|\le\Lambda(\kappa+|\zeta|)^{\alpha-2},\tag{2.2}
$$
$$
|a(x,\zeta)-a(\bar x,\zeta)|\le\Lambda(1+|\zeta|)^{\alpha-1}|x-\bar x|^{\beta}.\tag{2.3}
$$
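One standard example (recorded here only for orientation; it is not taken from the paper) is the regularized $\alpha$-Laplacian
$$
a(x,\zeta)=(\kappa^2+|\zeta|^2)^{\frac{\alpha-2}{2}}\,\zeta,\qquad \kappa>0,\ \alpha>1,
$$
for which (2.3) holds trivially, since $a$ does not depend on $x$, and a direct computation of $a_\zeta$ shows that (2.2) holds with constants $\lambda,\Lambda$ depending only on $\alpha$.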

(A2) The function $f:\Omega\times\mathbb{R}\times U\to\mathbb{R}$ has the following properties: $f(\cdot,y,u)$ is measurable on $\Omega$, $f(x,\cdot,u)$ is in $C^1(\mathbb{R})$, with $f(x,\cdot,\cdot)$ and $f_y(x,\cdot,\cdot)$ continuous on $\mathbb{R}\times U$. Moreover,
$$
f_y(x,y,u)\le0,\qquad\forall(x,y,u)\in\Omega\times\mathbb{R}\times U,\tag{2.4}
$$
and for any $R>0$ there exists an $M_R>0$, such that
$$
|f(x,y,u)|+|f_y(x,y,u)|\le M_R,\qquad\forall(x,u)\in\Omega\times U,\ |y|\le R.\tag{2.5}
$$

Next, we set
$$
\mathcal U=\{u:\Omega\to U:\ u\text{ is measurable}\}.
$$
Any element $u\in\mathcal U$ is referred to as a control. In what follows, we will denote by $C_0(\bar\Omega)$ the set of all continuous functions on $\bar\Omega$ which vanish on $\partial\Omega$, and by $C^{1,\sigma}(\bar\Omega)$ the set of all continuously differentiable functions on $\bar\Omega$ whose first order partial derivatives are Hölder continuous with exponent $\sigma\in(0,1)$. Now, we state the following basic result.

Proposition 2.1. Let (A1)–(A2) hold. Then, for any $u\in\mathcal U$, there exists a unique $y\equiv y(\cdot;u)\in C^{1,\sigma}(\bar\Omega)\cap C_0(\bar\Omega)$ solving (2.1), for some $\sigma\in(0,\min\{\beta,\gamma\})$. Furthermore, there exists a constant $C>0$, independent of $u\in\mathcal U$, such that
$$
\|y(\cdot;u)\|_{C^{1,\sigma}(\bar\Omega)}\le C,\qquad\forall u\in\mathcal U.\tag{2.6}
$$


Sketch of the Proof. First of all, we truncate $f$: for any $m>0$, let
$$
f_m(x,y,u)=
\begin{cases}
f(x,y,u), & \text{if }|y|\le m,\\
f(x,-m,u), & \text{if }y<-m,\\
f(x,m,u), & \text{if }y>m.
\end{cases}
\tag{2.7}
$$
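Let us note in passing (an observation that follows at once from (2.5) with $R=m$) that the truncated nonlinearity is globally bounded,
$$
|f_m(x,y,u)|\le M_m,\qquad\forall(x,y,u)\in\Omega\times\mathbb{R}\times U,
$$
which is what makes the truncated problem below tractable.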

Then, we consider the following truncated problem:
$$
\begin{cases}
-\nabla\cdot a(x,\nabla y(x))=f_m(x,y(x),u(x)), & \text{in }\Omega,\\
y\big|_{\partial\Omega}=0.
\end{cases}
\tag{2.8}
$$

By [15], we know that (2.8) admits a unique solution $y_m\in W_0^{1,\alpha}(\Omega)$. Then, as in [19], we are able to show that there exists a constant $C>0$, independent of $m$ and $u\in\mathcal U$, such that
$$
\|y_m(\cdot;u)\|_{L^\infty(\Omega)}\le C,\qquad\forall m>0,\ u\in\mathcal U.\tag{2.9}
$$
Consequently, for $m>C$, we obtain that $y_m=y$ is a solution of (2.1). Thus, by [14], we obtain that in fact this $y$ is in $C^{1,\sigma}(\bar\Omega)$, for some $\sigma\in(0,\min\{\beta,\gamma\})$, and the estimate (2.6) holds. Finally, the uniqueness follows immediately from the coercivity of the operator (see (2.2) and (2.4)). □

In what follows, any pair $(y,u)\in(C^{1,\sigma}(\bar\Omega)\cap C_0(\bar\Omega))\times\mathcal U$ satisfying (2.1) is called a feasible pair, and we refer to the corresponding $y$ and $u$ as a feasible state and control, respectively. Clearly, under (A1)–(A2), $\mathcal U$ coincides with the set of all feasible controls, and to each feasible control $u\in\mathcal U$ there corresponds a unique feasible state. Now, we let $f^0:\Omega\times\mathbb{R}\times U\to\mathbb{R}$ be a given function. We make the following assumption on this function:

(A3) The function $f^0(\cdot,y,u)$ is measurable on $\Omega$, $f^0(x,\cdot,u)$ is in $C^1(\mathbb{R})$, with $f^0(x,\cdot,\cdot)$ and $f^0_y(x,\cdot,\cdot)$ continuous on $\mathbb{R}\times U$. Furthermore, for any $R>0$, there exists a function $\varphi_R\in L^1(\Omega)$, such that
$$
|f^0(x,y,u)|+|f^0_y(x,y,u)|\le\varphi_R(x),\qquad\forall(x,u)\in\Omega\times U,\ |y|\le R.\tag{2.10}
$$
It is easy to see that under (A1)–(A3), for any $u\in\mathcal U$, the following functional is well-defined:
$$
J(u)=\int_\Omega f^0(x,y(x),u(x))\,dx.\tag{2.11}
$$
This functional is referred to as the cost functional. Next, we introduce another map $g:\bar\Omega\times\mathbb{R}\to\mathbb{R}$. We assume the following:

(A4) The map $g$ is continuous, $g_y(\cdot,\cdot)$ exists and is also continuous on $\bar\Omega\times\mathbb{R}$. Moreover, we assume that
$$
g(x,0)=0,\qquad\forall x\in\partial\Omega.\tag{2.12}
$$

Assumption (2.12) can be relaxed; see Remark 4.2.

From the above, we know that under (A1)–(A2), for any $u\in\mathcal U$, the corresponding feasible state $y$ is in $C^{1,\sigma}(\bar\Omega)$. Thus, we may talk about the state constraint of the form
$$
g(x,y(x))\le\delta,\qquad\forall x\in\bar\Omega,\tag{2.13}
$$
where $\delta>0$ is given. Of course, for any given $u\in\mathcal U$, the corresponding state $y$ does not necessarily satisfy the constraint (2.13). We refer to any feasible pair $(y,u)$ satisfying (2.13) as an admissible pair and to the corresponding $y$ and $u$ as an admissible state and control, respectively. We denote the set of all admissible controls by $\mathcal U_\delta$, indicating the dependence on $\delta$ by the subscript. Now, our optimal control problem can be stated as follows.


Problem $(P_\delta)$. Under (A1)–(A4), find a control $\bar u\in\mathcal U_\delta$, such that
$$
J(\bar u)=\inf_{u\in\mathcal U_\delta}J(u).\tag{2.14}
$$
Any admissible control $\bar u$ satisfying (2.14) is called an optimal control, the corresponding state $\bar y$ is called an optimal state, and the pair $(\bar y,\bar u)$ is referred to as an optimal pair.
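For orientation, a simple instance of (A4) (ours, not discussed in the paper) is $g(x,y)=y$: then (2.12) holds trivially and the constraint (2.13) becomes the pointwise bound $y(x)\le\delta$ on $\bar\Omega$.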

3. Variation along given feasible pairs. In deriving necessary conditions for optimal pairs, one needs to make certain perturbations of the control, and the corresponding variations of the state and of the cost functional need to be determined. This section is devoted to such a determination. We note that since the control domain is not necessarily convex, the perturbation of the control is restricted to be of "spike" type. This makes the computation somewhat technical. Our basic idea here is taken from [12] and [13, 21].

For any feasible pair $(y,u)$, we define
$$
\begin{cases}
a_{ij}(x)=a_{i,\zeta_j}(x,\nabla y(x)), & 1\le i,j\le n,\\
a_0(x)=-f_y(x,y(x),u(x)),\\
c(x)=f^0_y(x,y(x),u(x)),
\end{cases}
\tag{3.1}
$$
and, given $v\in\mathcal U$,
$$
\begin{cases}
h(x)=f(x,y(x),v(x))-f(x,y(x),u(x)),\\
h^0(x)=f^0(x,y(x),v(x))-f^0(x,y(x),u(x)).
\end{cases}
\tag{3.2}
$$
Set
$$
Az(x)\equiv-\sum_{i,j=1}^n\partial_{x_i}\big(a_{ij}(x)\,\partial_{x_j}z(x)\big)+a_0(x)z(x).\tag{3.3}
$$

Since $y\in C^{1,\sigma}(\bar\Omega)$ with the estimate (2.6), by (2.2) and (2.4) we see that the following hold: for some constants $\Lambda_0\ge\lambda_0>0$, independent of $u\in\mathcal U$,
$$
\lambda_0|\xi|^2\le\sum_{i,j=1}^n a_{ij}(x)\xi_i\xi_j\le\Lambda_0|\xi|^2,\qquad\forall\xi\in\mathbb{R}^n,\ x\in\bar\Omega,\tag{3.4}
$$
$$
a_0(x)\ge0,\qquad\text{a.e. }x\in\Omega.\tag{3.5}
$$
Moreover, each $a_{ij}$ is in $C(\bar\Omega)$ and the modulus of continuity of $a_{ij}$ is uniform in $u\in\mathcal U$. Now, we consider the problem
$$
\begin{cases}
Az(x)=h(x), & \text{in }\Omega,\\
z\big|_{\partial\Omega}=0.
\end{cases}
\tag{3.6}
$$
Clearly, since $h\in L^\infty(\Omega)$, this problem admits a unique solution $z\in W_0^{1,p}(\Omega)\cap C_0(\bar\Omega)$ for every $p>1$; see for instance [17].

Our main result of this section is the following.


Theorem 3.1. Let $(y,u)$ be a given feasible pair and let $v\in\mathcal U$ be fixed. Then, for any $\rho\in(0,1)$, there exists a measurable set $E_\rho\subset\Omega$, with the property that
$$
|E_\rho|=\rho|\Omega|,\tag{3.7}
$$
such that if we define $u_\rho$ by
$$
u_\rho(x)=
\begin{cases}
u(x), & \text{if }x\in\Omega\setminus E_\rho,\\
v(x), & \text{if }x\in E_\rho,
\end{cases}
\tag{3.8}
$$
and we let $y_\rho$ be the state corresponding to $u_\rho$, then
$$
y_\rho=y+\rho z+r_\rho,\qquad\lim_{\rho\to0}\frac1\rho\|r_\rho\|_{W^{1,p}(\Omega)}=0,\tag{3.9}
$$
and
$$
J(u_\rho)=J(u)+\rho z^0+r^0_\rho,\qquad\lim_{\rho\to0}\frac1\rho|r^0_\rho|=0,\tag{3.10}
$$
where $z$ is the solution of (3.6), $p$ is any number in $[1,\infty)$ and $z^0$ is given by
$$
z^0=\int_\Omega\big[f^0_y(x,y(x),u(x))z(x)+f^0(x,y(x),v(x))-f^0(x,y(x),u(x))\big]\,dx.\tag{3.11}
$$

To prove the above result, we need some lemmas. First, we recall the so-called Ekeland distance. For any $u,v\in\mathcal U$, we let
$$
d(u,v)=|\{x\in\Omega:\ u(x)\ne v(x)\}|.\tag{3.12}
$$
It is standard that $(\mathcal U,d(\cdot,\cdot))$ is a complete metric space (see [9]). Note also that the spike perturbation (3.8) satisfies $d(u_\rho,u)\le|E_\rho|=\rho|\Omega|$; this elementary bound is how the distance $d$ will enter the estimates of §4. Our first lemma concerns the continuity of the state $y(\cdot;u)$ with respect to the control $u$.

Lemma 3.2. Let $u,\bar u\in\mathcal U$ and let $y,\bar y$ be the corresponding states. Then,
$$
\|y-\bar y\|_{W^{1,p}(\Omega)}\le
\begin{cases}
C_p\,d(u,\bar u)^{\frac{n+p}{np}}, & \text{if }p>\frac{n}{n-1},\\[2pt]
C_{p,q}\,d(u,\bar u)^{1/q},\ \forall q>1, & \text{if }p=\frac{n}{n-1},\\[2pt]
C_p\,d(u,\bar u), & \text{if }1\le p<\frac{n}{n-1},
\end{cases}
\tag{3.13}
$$
with the constants $C_p$ and $C_{p,q}$ independent of $u$ and $\bar u$.

Proof. We denote
$$
\begin{cases}
\bar a_{ij}(x)=\int_0^1 a_{i,\zeta_j}\big(x,\nabla\bar y(x)+\tau\nabla(y(x)-\bar y(x))\big)\,d\tau, & 1\le i,j\le n,\\[4pt]
\bar a_0(x)=-\int_0^1 f_y\big(x,\bar y(x)+\tau(y(x)-\bar y(x)),u(x)\big)\,d\tau.
\end{cases}
\tag{3.14}
$$
Then, we define
$$
\bar Az(x)\equiv-\sum_{i,j=1}^n\partial_{x_i}\big(\bar a_{ij}(x)\,\partial_{x_j}z(x)\big)+\bar a_0(x)z(x).\tag{3.15}
$$


From (2.1), we see that $y-\bar y$ satisfies
$$
\begin{cases}
\bar A(y(x)-\bar y(x))=f(x,\bar y(x),u(x))-f(x,\bar y(x),\bar u(x)), & \text{in }\Omega,\\
(y-\bar y)\big|_{\partial\Omega}=0.
\end{cases}
\tag{3.16}
$$
By the $L^p$ estimates for elliptic equations in divergence form (see [17]), we obtain
$$
\|y-\bar y\|_{W^{1,p}(\Omega)}\le C\,\|f(\cdot,\bar y,u)-f(\cdot,\bar y,\bar u)\|_{W^{-1,p}(\Omega)}.\tag{3.17}
$$

By the Sobolev embeddings, we have
$$
\begin{cases}
L^{\frac{np}{n+p}}(\Omega)\hookrightarrow W^{-1,p}(\Omega), & \text{for }p>\frac{n}{n-1},\\[2pt]
L^q(\Omega)\hookrightarrow W^{-1,p}(\Omega), & \text{for }p=\frac{n}{n-1},\ \forall q>1,\\[2pt]
L^1(\Omega)\hookrightarrow W^{-1,p}(\Omega), & \text{for }1\le p<\frac{n}{n-1}.
\end{cases}
\tag{3.18}
$$
This, together with (3.17) and (2.5), gives (3.13). In the above, we should note that $y$ and $\bar y$ are bounded in $C^{1,\sigma}(\bar\Omega)$ and that the constant in the $L^p$ estimate only depends on the modulus of continuity of the leading coefficients, the ellipticity constant, the bounds of the coefficients and the domain. Thus, the constants appearing in (3.13) are independent of the controls $u$ and $\bar u$. □
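To spell out the first case of (3.13): for $p>\frac{n}{n-1}$, the right-hand side of (3.16) vanishes where $u=\bar u$ and is bounded by $2M_R$ (with $R$ the uniform $L^\infty$ bound coming from (2.6)), so (3.17) and (3.18) give
$$
\|y-\bar y\|_{W^{1,p}(\Omega)}\le C\,\|f(\cdot,\bar y,u)-f(\cdot,\bar y,\bar u)\|_{L^{\frac{np}{n+p}}(\Omega)}
\le 2CM_R\,|\{u\ne\bar u\}|^{\frac{n+p}{np}}=C_p\,d(u,\bar u)^{\frac{n+p}{np}};
$$
the other two cases are analogous.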

Our next lemma is essential in this paper.

Lemma 3.3. Let $p>n$, $b_0\in L^1(\Omega)$ and $b\in L^p(\Omega)$. For any $\rho\in(0,1)$, let
$$
\mathcal E_\rho=\{E\subset\Omega:\ E\text{ is measurable with }|E|=\rho|\Omega|\}.\tag{3.19}
$$
Then,
$$
\inf_{E\in\mathcal E_\rho}\Big\{\Big|\int_\Omega\big(1-\tfrac1\rho\chi_E(x)\big)b_0(x)\,dx\Big|
+\big\|\big(1-\tfrac1\rho\chi_E\big)b\big\|_{W^{-1,p}(\Omega)}\Big\}=0.\tag{3.20}
$$

Proof. We let $\Gamma$ be the kernel of the Newtonian potential,
$$
\Gamma(x)=
\begin{cases}
\dfrac{1}{n(2-n)\omega_n}\,|x|^{2-n}, & n\ge3,\\[6pt]
\dfrac{1}{2\pi}\log|x|, & n=2,
\end{cases}
\tag{3.21}
$$
with $\omega_n$ being the volume of the unit ball in $\mathbb{R}^n$. Then, we let $E\in\mathcal E_\rho$ be undetermined and set
$$
V^\rho(x)=\int_\Omega\Gamma(x-\xi)\big(1-\tfrac1\rho\chi_E(\xi)\big)b(\xi)\,d\xi,\qquad x\in\Omega.\tag{3.22}
$$
We know that (see [11])
$$
\Delta V^\rho(x)=\big(1-\tfrac1\rho\chi_E(x)\big)b(x),\qquad x\in\Omega.\tag{3.23}
$$

Then, for any $\varphi\in W_0^{1,p'}(\Omega)$ ($p'=\frac{p}{p-1}<\frac{n}{n-1}$), we have
$$
\begin{aligned}
\int_\Omega\big(1-\tfrac1\rho\chi_E(x)\big)b(x)\varphi(x)\,dx
&=\int_\Omega\Delta V^\rho(x)\,\varphi(x)\,dx\\
&=-\int_\Omega\nabla V^\rho(x)\cdot\nabla\varphi(x)\,dx
\le\|\nabla V^\rho\|_{L^p(\Omega)}\|\varphi\|_{W^{1,p'}(\Omega)}.
\end{aligned}
\tag{3.24}
$$


Thus,
$$
\big\|\big(1-\tfrac1\rho\chi_E\big)b\big\|_{W^{-1,p}(\Omega)}\le\|\nabla V^\rho\|_{L^p(\Omega)}.\tag{3.25}
$$
Next, we estimate $\|\nabla V^\rho\|_{L^p(\Omega)}$. To this end, we denote
$$
\theta(x,\xi)=\nabla_x\Gamma(x-\xi)\,b(\xi),\qquad x,\xi\in\Omega.\tag{3.26}
$$
Then, we have
$$
\lim_{x'\to x}\int_\Omega|\theta(x',\xi)-\theta(x,\xi)|\,d\xi
\le\lim_{x'\to x}\Big(\int_\Omega|\nabla_x\Gamma(x'-\xi)-\nabla_x\Gamma(x-\xi)|^{p'}d\xi\Big)^{1/p'}\|b\|_{L^p(\Omega)}=0.\tag{3.27}
$$
Hence, for any $\varepsilon>0$, there exists a finite set $\{x_k:\ 1\le k\le k_\varepsilon\}\subset\Omega$ such that for any $x\in\Omega$, there exists an $x_k$ with the property that
$$
\int_\Omega|\theta(x,\xi)-\theta(x_k,\xi)|\,d\xi<\varepsilon.\tag{3.28}
$$

Next, we set
$$
\Theta(\xi)=
\begin{pmatrix}
b_0(\xi)\\ \theta(x_1,\xi)\\ \theta(x_2,\xi)\\ \vdots\\ \theta(x_{k_\varepsilon},\xi)
\end{pmatrix},\qquad\xi\in\Omega.\tag{3.29}
$$
Clearly, $\Theta\in L^1(\Omega;\mathbb{R}^{nk_\varepsilon+1})$. We can find a simple function
$$
\tilde\Theta(\xi)=\sum_{i=1}^\ell\Theta_i\,\chi_{F_i}(\xi),\qquad\xi\in\Omega,\ \Theta_i\in\mathbb{R}^{nk_\varepsilon+1},\tag{3.30}
$$
with the $F_i$'s mutually disjoint and $\Omega=\bigcup_{i=1}^\ell F_i$, such that
$$
\int_\Omega|\Theta(\xi)-\tilde\Theta(\xi)|\,d\xi<\varepsilon.\tag{3.31}
$$
Then, we take $E^i_\rho\subset F_i$, such that
$$
|E^i_\rho|=\rho|F_i|,\qquad 1\le i\le\ell,\tag{3.32}
$$
and we define
$$
E_\rho=\bigcup_{i=1}^\ell E^i_\rho.\tag{3.33}
$$
Clearly, $E_\rho\in\mathcal E_\rho$. Also, by the above construction, we have
$$
\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(\xi)\big)\tilde\Theta(\xi)\,d\xi=0.\tag{3.34}
$$


Now, we take $E=E_\rho$ in (3.22). By Lemma 3.4, which will be proved below, we know that
$$
\nabla V^\rho(x)=\int_\Omega\nabla_x\Gamma(x-\xi)\big(1-\tfrac1\rho\chi_{E_\rho}(\xi)\big)b(\xi)\,d\xi
=\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(\xi)\big)\theta(x,\xi)\,d\xi.\tag{3.35}
$$
Thus, for any $x\in\Omega$, we let $x_k$ satisfy (3.28). It follows that
$$
\begin{aligned}
|\nabla V^\rho(x)|&=\Big|\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(\xi)\big)\theta(x,\xi)\,d\xi\Big|\\
&\le\Big|\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(\xi)\big)\big(\theta(x,\xi)-\theta(x_k,\xi)\big)\,d\xi\Big|
+\Big|\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(\xi)\big)\theta(x_k,\xi)\,d\xi\Big|\\
&\le\big(1+\tfrac1\rho\big)\varepsilon+\big(1+\tfrac1\rho\big)\int_\Omega|\Theta(\xi)-\tilde\Theta(\xi)|\,d\xi
+\Big|\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(\xi)\big)\tilde\Theta(\xi)\,d\xi\Big|
\le2\big(1+\tfrac1\rho\big)\varepsilon.
\end{aligned}
\tag{3.36}
$$
Furthermore, recalling the definitions of $\Theta(\xi)$ and of $E_\rho$, we also have
$$
\Big|\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(\xi)\big)b_0(\xi)\,d\xi\Big|<\big(1+\tfrac1\rho\big)\varepsilon.\tag{3.37}
$$
Combining (3.25), (3.36) and (3.37), we obtain (3.20), since $\varepsilon>0$ is arbitrary. □

In the above proof, we have used the following result.

Lemma 3.4. Let $w$ be the Newtonian potential given by
$$
w(x)=\int_\Omega\Gamma(x-\xi)f(\xi)\,d\xi,\qquad x\in\Omega,\tag{3.38}
$$
with $\Gamma$ given by (3.21) and $f\in L^p(\Omega)$. Then, for the case $p>n/2$, there exists a constant $C$ depending only on the domain $\Omega$, $n$ and $p$, such that
$$
\|w\|_{W^{2,p}(\Omega)}\le C\|f\|_{L^p(\Omega)}.\tag{3.39}
$$
Furthermore, if $p>n$, then
$$
\nabla w(x)=\int_\Omega\nabla_x\Gamma(x-\xi)f(\xi)\,d\xi,\qquad x\in\Omega.\tag{3.40}
$$

Proof. First of all, by [11, p. 230], for any $f\in L^p(\Omega)$ with $1<p<\infty$, the Newtonian potential $w$ defined by (3.38) is in $W^{2,p}(\Omega)$ and satisfies
$$
\Delta w(x)=f(x),\qquad x\in\Omega,\tag{3.41}
$$
and
$$
\|D^2w\|_{L^p(\Omega)}\le C\|f\|_{L^p(\Omega)},\tag{3.42}
$$
with $C$ depending only on $n$ and $p$. On the other hand, we know that there exists a constant $C_1>0$, such that
$$
\|w\|_{W^{2,p}(\Omega)}\le C_1\big[\|w\|_{L^p(\Omega)}+\|D^2w\|_{L^p(\Omega)}\big].\tag{3.43}
$$


Thus, by (3.42), for the case $p>n/2$, to get the estimate (3.39) it suffices to estimate $\|w\|_{L^p(\Omega)}$. Since $\Omega$ is bounded, we may find $r>0$ such that
$$
\Omega\subset B_r(x)\equiv\{\xi\in\mathbb{R}^n:\ |\xi-x|\le r\},\qquad\forall x\in\Omega.\tag{3.44}
$$
Then, we have
$$
\begin{aligned}
\|w\|_{L^p(\Omega)}&=\Big(\int_\Omega\Big|\int_\Omega\Gamma(x-\xi)f(\xi)\,d\xi\Big|^p dx\Big)^{1/p}\\
&\le\Big\{\int_\Omega\Big(\int_{B_r(x)}|\Gamma(x-\xi)|^{p'}d\xi\Big)^{p/p'}dx\Big\}^{1/p}\|f\|_{L^p(\Omega)}\\
&=\|\Gamma\|_{L^{p'}(B_r(0))}\,|\Omega|^{1/p}\,\|f\|_{L^p(\Omega)}\le C_2\|f\|_{L^p(\Omega)}.
\end{aligned}
\tag{3.45}
$$

Here, we have used the fact that $p>n/2$ implies $\Gamma\in L^{p'}(B_r(0))$. Thus, (3.39) follows. Finally, for the case $p>n$, we let $\{f_k\}_{k=1}^\infty\subset\mathcal D(\Omega)$ be a sequence converging to $f$ in $L^p(\Omega)$ and let
$$
w_k(x)=\int_\Omega\Gamma(x-\xi)f_k(\xi)\,d\xi,\qquad x\in\Omega.\tag{3.46}
$$
From (3.39), we see that $w_k\to w$ strongly in $W^{2,p}(\Omega)$. On the other hand, from [11], we know that
$$
\partial_{x_i}w_k(x)=\int_\Omega\partial_{x_i}\Gamma(x-\xi)f_k(\xi)\,d\xi,\qquad x\in\Omega.\tag{3.47}
$$
Since $p>n$, $\partial_{x_i}\Gamma\in L^{p'}(B_r(0))$ for all $r>0$ (note that $p'<\frac{n}{n-1}$). Therefore, we may pass to the limit in (3.47) to get the desired result. □

By using the results of [20], one can actually show that the conclusions of Lemma 3.4 hold for any $1<p<\infty$. Now, we are ready to prove Theorem 3.1.

Proof of Theorem 3.1. It is enough to prove the theorem for $p>n$. For any $\rho\in(0,1)$, by Lemma 3.3, we can find an $E_\rho\in\mathcal E_\rho$, such that

$$
\Big|\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(x)\big)h^0(x)\,dx\Big|
+\big\|\big(1-\tfrac1\rho\chi_{E_\rho}\big)h\big\|_{W^{-1,p}(\Omega)}\le\rho,\tag{3.48}
$$
where $h^0$ and $h$ are given by (3.2). Let $u_\rho$ be defined by (3.8) and let $y_\rho$ be the corresponding state. Let us set
$$
z_\rho(x)=\frac{y_\rho(x)-y(x)}{\rho},\qquad x\in\Omega.\tag{3.49}
$$
Then, $z_\rho$ satisfies
$$
-\sum_{i,j=1}^n\partial_{x_i}\big(a^\rho_{ij}(x)\partial_{x_j}z_\rho(x)\big)+a^\rho_0(x)z_\rho(x)=\frac1\rho\chi_{E_\rho}(x)h(x),\qquad z_\rho\big|_{\partial\Omega}=0,\tag{3.50}
$$
where
$$
\begin{cases}
a^\rho_{ij}(x)=\int_0^1 a_{i,\zeta_j}\big(x,\nabla y(x)+\tau\nabla(y_\rho(x)-y(x))\big)\,d\tau, & 1\le i,j\le n,\\[4pt]
a^\rho_0(x)=-\int_0^1 f_y\big(x,y(x)+\tau(y_\rho(x)-y(x)),u_\rho(x)\big)\,d\tau.
\end{cases}
\tag{3.51}
$$


Clearly, by (2.6), Lemma 3.2 and (A1)–(A2), we see that
$$
\begin{cases}
a^\rho_{ij}(x)\to a_{ij}(x)\equiv a_{i,\zeta_j}(x,\nabla y(x)), & \text{in }C(\bar\Omega),\\
a^\rho_0(x)\to a_0(x)\equiv-f_y(x,y(x),u(x)), & \text{in }L^p(\Omega).
\end{cases}
\tag{3.52}
$$
By recalling $z$, the solution of (3.6), we have
$$
\begin{cases}
\begin{aligned}
-\sum_{i,j=1}^n\partial_{x_i}\big(a^\rho_{ij}(x)\partial_{x_j}(z_\rho(x)-z(x))\big)&+a^\rho_0(x)\big(z_\rho(x)-z(x)\big)\\
&=\sum_{i,j=1}^n\partial_{x_i}\big((a^\rho_{ij}(x)-a_{ij}(x))\partial_{x_j}z(x)\big)\\
&\quad-\big(a^\rho_0(x)-a_0(x)\big)z(x)-\big(1-\tfrac1\rho\chi_{E_\rho}(x)\big)h(x),
\end{aligned}\\
(z_\rho-z)\big|_{\partial\Omega}=0.
\end{cases}
\tag{3.53}
$$

By the results of [17] (see the remark below), we have
$$
\begin{aligned}
\frac{\|r_\rho\|_{W^{1,p}(\Omega)}}{\rho}&=\|z_\rho-z\|_{W^{1,p}(\Omega)}\\
&\le C\Big\{\sum_{i,j=1}^n\big\|(a^\rho_{ij}-a_{ij})\partial_{x_j}z\big\|_{L^p(\Omega)}
+\big\|(a^\rho_0-a_0)z\big\|_{W^{-1,p}(\Omega)}
+\big\|\big(1-\tfrac1\rho\chi_{E_\rho}\big)h\big\|_{W^{-1,p}(\Omega)}\Big\}\\
&\le C\Big\{\sum_{i,j=1}^n\|a^\rho_{ij}-a_{ij}\|_{L^\infty(\Omega)}\|z\|_{W^{1,p}(\Omega)}
+\big\|(a^\rho_0-a_0)z\big\|_{L^p(\Omega)}+\rho\Big\}=o(1).
\end{aligned}
\tag{3.54}
$$
This proves (3.9) for $p>n$. Finally, we define $z^0$ as in (3.11) and let
$$
r^0_\rho=J(u_\rho)-J(u)-\rho z^0.
$$

Then, using (3.48) and (3.52), we have
$$
\begin{aligned}
\frac1\rho|r^0_\rho|&=\Big|\frac{J(u_\rho)-J(u)}{\rho}-z^0\Big|\\
&\le\Big|\int_\Omega\Big[\int_0^1 f^0_y\big(x,y(x)+\tau(y_\rho(x)-y(x)),u_\rho(x)\big)\,d\tau\,z_\rho(x)
-f^0_y(x,y(x),u(x))z(x)\Big]dx\Big|\\
&\quad+\Big|\int_\Omega\big(1-\tfrac1\rho\chi_{E_\rho}(x)\big)h^0(x)\,dx\Big|=o(1).
\end{aligned}
\tag{3.55}
$$
This proves (3.10). □

Remark 3.5. In [17] the $W^{1,p}$-regularity used in (3.54) was proved for $p>n/(n-1)$ if $n>2$, and for $p\ge2$ if $n=2$. In particular, this result is true for all $p\ge2$. Then, by a duality argument, we can conclude that the results hold for all $p\in(1,+\infty)$.

4. Weak Pontryagin maximum principle. In this section, we present a Pontryagin type maximum principle for optimal controls of our Problem $(P_\delta)$. We denote by $M(\bar\Omega)$ the space of all real Borel measures on $\bar\Omega$. Our main result of this section is the following.


Theorem 4.1. Let (A1)–(A4) hold. Let $(\bar y,\bar u)$ be an optimal pair. Then, there exist $\psi^0\le0$, $\psi\in W_0^{1,p'}(\Omega)$ with $p'<n/(n-1)$, and $\mu\in M(\bar\Omega)$, such that
$$
|\psi^0|+\|\mu\|_{M(\bar\Omega)}>0,\tag{4.1}
$$
$$
\begin{cases}
\begin{aligned}
-\sum_{i,j=1}^n\partial_{x_j}\big(a_{i,\zeta_j}(x,\nabla\bar y(x))\,\partial_{x_i}\psi(x)\big)
&=f_y(x,\bar y(x),\bar u(x))\psi(x)\\
&\quad+\psi^0f^0_y(x,\bar y(x),\bar u(x))+g_y(x,\bar y(x))\mu,\quad\text{in }\Omega,
\end{aligned}\\
\psi\big|_{\partial\Omega}=0,
\end{cases}
\tag{4.2}
$$
$$
\int_{\bar\Omega}\big(\eta(x)-g(x,\bar y(x))\big)\,d\mu(x)\ge0,\qquad\forall\eta\in C_0(\bar\Omega)\text{ with }\eta(x)\le\delta,\ \forall x\in\bar\Omega,\tag{4.3}
$$
$$
H(x,\bar y(x),\bar u(x),\psi^0,\psi(x))=\max_{v\in U}H(x,\bar y(x),v,\psi^0,\psi(x)),\qquad\text{a.e. }x\in\Omega,\tag{4.4}
$$
where the Hamiltonian $H$ is given by
$$
H(x,y,u,\psi^0,\psi)=\psi^0f^0(x,y,u)+\psi f(x,y,u),\qquad\forall(x,y,u,\psi^0,\psi)\in\Omega\times\mathbb{R}\times U\times\mathbb{R}\times\mathbb{R}.\tag{4.5}
$$
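To illustrate (4.4)–(4.5) on a simple special case (ours, not part of the theorem): if $f(x,y,u)=f_1(x,y)+u$ and $f^0(x,y,u)=f^0_1(x,y)+\ell(x)u$ with $U=[-1,1]$, then $H$ is affine in $u$ and (4.4) forces the bang–bang behaviour
$$
\bar u(x)=\operatorname{sign}\big(\psi^0\ell(x)+\psi(x)\big)\qquad\text{for a.e. }x\text{ with }\psi^0\ell(x)+\psi(x)\ne0.
$$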

Before proving the above theorem, let us give some preliminaries. Since $C_0(\bar\Omega)$ is a separable Banach space, by [8, p. 167] we know that there exists a norm, denoted by $|\cdot|_0$, which is equivalent to the norm $\|\cdot\|_{C_0(\bar\Omega)}$ and such that the dual of $(C_0(\bar\Omega),|\cdot|_0)$ is strictly convex. It is clear that any element $\mu\in(C_0(\bar\Omega),|\cdot|_0)^*$ can still be identified with an element of $M(\bar\Omega)$, such that
$$
\langle\mu,\eta\rangle=\int_{\bar\Omega}\eta(x)\,d\mu(x),\qquad\forall\eta\in C_0(\bar\Omega).\tag{4.6}
$$
In the rest of this section, whenever we write $C_0(\bar\Omega)$, its norm is always taken to be the above $|\cdot|_0$, its dual space is still identified with $M(\bar\Omega)$, and the corresponding dual norm is denoted by $|\cdot|_*$. Now, we define
$$
Q=\{\eta\in C_0(\bar\Omega):\ \eta(x)\le\delta,\ \forall x\in\bar\Omega\}.\tag{4.7}
$$
Clearly, $Q$ is convex, closed and has a nonempty interior in $C_0(\bar\Omega)$. Let
$$
d_Q(\eta)=\inf_{\bar\eta\in Q}|\eta-\bar\eta|_0,\qquad\forall\eta\in C_0(\bar\Omega).\tag{4.8}
$$
Then, $d_Q:C_0(\bar\Omega)\to\mathbb{R}$ is convex and Lipschitz continuous (with Lipschitz constant $1$). From [7], we know that Clarke's generalized gradient, denoted by $\partial d_Q$, which in this case coincides with the subdifferential in the sense of convex analysis, is convex and weak$^*$-compact. Therefore, given $\xi\in\partial d_Q(\eta)$, we have that
$$
\langle\xi,\bar\eta-\eta\rangle+d_Q(\eta)\le d_Q(\bar\eta),\qquad\forall\bar\eta\in C_0(\bar\Omega).
$$
From this relation, it is easy to deduce that $|\xi|_*\le1$, the identity $|\xi|_*=1$ being true whenever $\eta\notin Q$; see [13]. Since $(M(\bar\Omega),|\cdot|_*)$ is strictly convex, $\partial d_Q(\eta)$ is a singleton for every $\eta\notin Q$.


Furthermore, $d_Q:C_0(\bar\Omega)\to\mathbb{R}$ is Gâteaux differentiable at every point $\eta\notin Q$ and $\{\nabla d_Q(\eta)\}=\partial d_Q(\eta)$; hence
$$
|\nabla d_Q(\eta)|_*=1,\qquad\forall\eta\notin Q.\tag{4.9}
$$
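Let us also record an elementary inequality that will be used in (5.8) below: since $|\cdot|_0$ is equivalent to $\|\cdot\|_{C_0(\bar\Omega)}$, there is a constant $C_0>0$ such that
$$
d_Q(\eta)\ge C_0\,\big\|(\eta-\delta)^+\big\|_{C_0(\bar\Omega)},\qquad\forall\eta\in C_0(\bar\Omega),
$$
because the infimum in (4.8), computed in the supremum norm, is attained at $\bar\eta=\min\{\eta,\delta\}\in Q$ and equals $\|(\eta-\delta)^+\|_{C_0(\bar\Omega)}$.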

Now, we are ready to give a proof of Theorem 4.1.

Proof of Theorem 4.1. Let $(\bar y,\bar u)$ be an optimal pair. For any $u\in\mathcal U$, let $y(\cdot;u)$ be the corresponding state, emphasizing its dependence on the control. For any $\varepsilon>0$, we define
$$
J_\varepsilon(u)=\Big\{\big[(J(u)-J(\bar u)+\varepsilon)^+\big]^2+d_Q\big(g(\cdot,y(\cdot;u))\big)^2\Big\}^{1/2}.\tag{4.10}
$$
Clearly, this functional is continuous on the (complete) metric space $(\mathcal U,d)$. Also, we have
$$
J_\varepsilon(u)>0,\qquad\forall u\in\mathcal U,\tag{4.11}
$$
$$
J_\varepsilon(\bar u)=\varepsilon\le\inf_{\mathcal U}J_\varepsilon(u)+\varepsilon.\tag{4.12}
$$
Hence, by Ekeland's variational principle ([7, 9]), we can find a $u_\varepsilon\in\mathcal U$, such that
$$
d(\bar u,u_\varepsilon)\le\sqrt\varepsilon,\tag{4.13}
$$
$$
J_\varepsilon(u_\varepsilon)\le J_\varepsilon(\bar u),\tag{4.14}
$$
$$
J_\varepsilon(u)-J_\varepsilon(u_\varepsilon)\ge-\sqrt\varepsilon\,d(u,u_\varepsilon),\qquad\forall u\in\mathcal U.\tag{4.15}
$$

We let $v\in\mathcal U$ and $\varepsilon>0$ be fixed and let $y_\varepsilon=y(\cdot;u_\varepsilon)$. Set
$$
\begin{cases}
a^\varepsilon_{ij}(x)=a_{i,\zeta_j}(x,\nabla y_\varepsilon(x)), & 1\le i,j\le n,\\
a^\varepsilon_0(x)=-f_y(x,y_\varepsilon(x),u_\varepsilon(x)),
\end{cases}
\tag{4.16}
$$
and
$$
\begin{cases}
h^\varepsilon(x)=f(x,y_\varepsilon(x),v(x))-f(x,y_\varepsilon(x),u_\varepsilon(x)),\\
h^{0,\varepsilon}(x)=f^0(x,y_\varepsilon(x),v(x))-f^0(x,y_\varepsilon(x),u_\varepsilon(x)).
\end{cases}
\tag{4.17}
$$
Let $A_\varepsilon$ be the elliptic differential operator with the coefficients given by (4.16),
$$
A_\varepsilon z(x)\equiv-\sum_{i,j=1}^n\partial_{x_i}\big(a^\varepsilon_{ij}(x)\partial_{x_j}z(x)\big)+a^\varepsilon_0(x)z(x).\tag{4.18}
$$
Then, (3.4)–(3.5) hold for this $A_\varepsilon$, and the leading coefficients $a^\varepsilon_{ij}(x)$ are uniformly continuous in $\bar\Omega$, independently of $\varepsilon$. We consider the problem
$$
\begin{cases}
A_\varepsilon z_\varepsilon(x)=h^\varepsilon(x), & \text{in }\Omega,\\
z_\varepsilon\big|_{\partial\Omega}=0.
\end{cases}
\tag{4.19}
$$

We know that the above problem admits a unique solution $z_\varepsilon\in W_0^{1,p}(\Omega)$ for all $p>1$. By Theorem 3.1, we know that for any $\rho\in(0,1)$, there exists an $E^\varepsilon_\rho\subset\Omega$, with the property $|E^\varepsilon_\rho|=\rho|\Omega|$, such that if we define
$$
u^\varepsilon_\rho(x)=
\begin{cases}
u_\varepsilon(x), & \text{if }x\in\Omega\setminus E^\varepsilon_\rho,\\
v(x), & \text{if }x\in E^\varepsilon_\rho,
\end{cases}
\tag{4.20}
$$


and let $y^\varepsilon_\rho=y(\cdot;u^\varepsilon_\rho)$ be the corresponding state, then
$$
y^\varepsilon_\rho=y_\varepsilon+\rho z_\varepsilon+r^\varepsilon_\rho,\qquad J(u^\varepsilon_\rho)=J(u_\varepsilon)+\rho z^{0,\varepsilon}+r^{0,\varepsilon}_\rho,\tag{4.21}
$$
where $z_\varepsilon$ is the solution of (4.19),
$$
z^{0,\varepsilon}=\int_\Omega\big[f^0_y(x,y_\varepsilon(x),u_\varepsilon(x))z_\varepsilon(x)+h^{0,\varepsilon}(x)\big]\,dx,\tag{4.22}
$$
and for any $p\in[1,\infty)$,
$$
\lim_{\rho\to0}\frac1\rho\|r^\varepsilon_\rho\|_{W^{1,p}(\Omega)}=\lim_{\rho\to0}\frac1\rho|r^{0,\varepsilon}_\rho|=0.\tag{4.23}
$$
Now, we take $u=u^\varepsilon_\rho$ in (4.15). Then, it follows that
$$
\begin{aligned}
-\sqrt\varepsilon\,|\Omega|&\le\frac{J_\varepsilon(u^\varepsilon_\rho)-J_\varepsilon(u_\varepsilon)}{\rho}\\
&=\frac{1}{J_\varepsilon(u^\varepsilon_\rho)+J_\varepsilon(u_\varepsilon)}\cdot\frac1\rho
\Big\{\big[(J(u^\varepsilon_\rho)-J(\bar u)+\varepsilon)^+\big]^2-\big[(J(u_\varepsilon)-J(\bar u)+\varepsilon)^+\big]^2
+d_Q\big(g(\cdot,y^\varepsilon_\rho)\big)^2-d_Q\big(g(\cdot,y_\varepsilon)\big)^2\Big\}\\
&\longrightarrow\frac{(J(u_\varepsilon)-J(\bar u)+\varepsilon)^+}{J_\varepsilon(u_\varepsilon)}\,z^{0,\varepsilon}
+\Big\langle\frac{d_Q(g(\cdot,y_\varepsilon))\,\xi_\varepsilon}{J_\varepsilon(u_\varepsilon)},\ g_y(\cdot,y_\varepsilon)z_\varepsilon\Big\rangle,\qquad(\rho\to0),
\end{aligned}
\tag{4.24}
$$
where
$$
\xi_\varepsilon=
\begin{cases}
\nabla d_Q(g(\cdot,y_\varepsilon)), & \text{if }g(\cdot,y_\varepsilon)\notin Q,\\
0, & \text{if }g(\cdot,y_\varepsilon)\in Q.
\end{cases}
\tag{4.25}
$$

Next, we define $(\varphi^{0,\varepsilon},\varphi^\varepsilon)\in[0,1]\times M(\bar\Omega)$ as
$$
\varphi^{0,\varepsilon}=\frac{(J(u_\varepsilon)-J(\bar u)+\varepsilon)^+}{J_\varepsilon(u_\varepsilon)},\qquad
\varphi^\varepsilon=\frac{d_Q(g(\cdot,y_\varepsilon))\,\xi_\varepsilon}{J_\varepsilon(u_\varepsilon)}.\tag{4.26}
$$
Then we see that (4.24) can be written as
$$
-\sqrt\varepsilon\,|\Omega|\le\varphi^{0,\varepsilon}z^{0,\varepsilon}+\int_{\bar\Omega}g_y(x,y_\varepsilon(x))z_\varepsilon(x)\,d\varphi^\varepsilon(x),\tag{4.27}
$$
and from (4.9) and (4.10), we have
$$
|\varphi^{0,\varepsilon}|^2+|\varphi^\varepsilon|_*^2=1.\tag{4.28}
$$
On the other hand, by the definition of the subdifferential, we have
$$
\langle\varphi^\varepsilon,\eta-g(\cdot,y_\varepsilon)\rangle=\int_{\bar\Omega}\big(\eta(x)-g(x,y_\varepsilon(x))\big)\,d\varphi^\varepsilon(x)\le0,\tag{4.29}
$$
for all $\eta\in C_0(\bar\Omega)$ with $\eta(x)\le\delta$, $\forall x\in\bar\Omega$. Next, by (4.13) and Lemma 3.2, we have
$$
\|y_\varepsilon-\bar y\|_{W^{1,p}(\Omega)}\to0,\qquad(\varepsilon\to0),\tag{4.30}
$$


and therefore, recalling that $p>n$, it follows that $y_\varepsilon\to\bar y$ in $C_0(\bar\Omega)$. Thus, (4.29) implies
$$
\int_{\bar\Omega}\big(\eta-g(\cdot,\bar y)\big)\,d\varphi^\varepsilon(x)\le|g(\cdot,y_\varepsilon)-g(\cdot,\bar y)|_0\equiv\delta_\varepsilon\to0,\qquad(\varepsilon\to0),\tag{4.31}
$$
for all $\eta\in C_0(\bar\Omega)$ with $\eta(x)\le\delta$, $\forall x\in\bar\Omega$. By extracting a subsequence, still denoted by itself, one has
$$
\varphi^{0,\varepsilon}\to\varphi^0,\qquad\varphi^\varepsilon\overset{*}{\rightharpoonup}\varphi.\tag{4.32}
$$
Let $C>0$ satisfy $\|\cdot\|_{C_0(\bar\Omega)}\le C|\cdot|_0$. Let us fix an element $\eta_0\in C_0(\bar\Omega)$ and a positive real number $r$ satisfying $\eta_0(x)<\delta-r$, $\forall x\in\bar\Omega$. Taking $\eta(x)=\eta_0(x)+\tilde\eta(x)$, with $|\tilde\eta|_0\le r/C$, in (4.31), we obtain
$$
\int_{\bar\Omega}\tilde\eta(x)\,d\varphi^\varepsilon(x)\le\int_{\bar\Omega}\big(g(x,\bar y(x))-\eta_0(x)\big)\,d\varphi^\varepsilon(x)+\delta_\varepsilon,\qquad\forall\,|\tilde\eta|_0\le r/C.\tag{4.33}
$$
Taking the supremum of the left-hand side, it follows that
$$
\frac rC\,|\varphi^\varepsilon|_*\le\int_{\bar\Omega}\big(g(x,\bar y(x))-\eta_0(x)\big)\,d\varphi^\varepsilon(x)+\delta_\varepsilon.
$$
Then, by (4.28), (4.31) and (4.32), we obtain
$$
\Big(\frac Cr\int_{\bar\Omega}\big(g(x,\bar y(x))-\eta_0(x)\big)\,d\varphi(x)\Big)^2+|\varphi^0|^2
\ge\lim_{\varepsilon\to0}\big[|\varphi^\varepsilon|_*^2+|\varphi^{0,\varepsilon}|^2\big]=1.\tag{4.34}
$$

On the other hand, from (4.30) we have
$$
\begin{cases}
z_\varepsilon\to z, & \text{in }W^{1,p}(\Omega),\\
z^{0,\varepsilon}\to z^0,
\end{cases}
\qquad(\varepsilon\to0),\tag{4.35}
$$
where $z$ is the solution of the variational system
$$
\begin{cases}
\begin{aligned}
-\sum_{i,j=1}^n\partial_{x_i}\big(a_{i,\zeta_j}(x,\nabla\bar y(x))\partial_{x_j}z(x)\big)
&=f_y(x,\bar y(x),\bar u(x))z(x)\\
&\quad+f(x,\bar y(x),v(x))-f(x,\bar y(x),\bar u(x)),\quad\text{in }\Omega,
\end{aligned}\\
z\big|_{\partial\Omega}=0,
\end{cases}
\tag{4.36}
$$
and
$$
z^0=\int_\Omega f^0_y(x,\bar y(x),\bar u(x))z(x)\,dx+\int_\Omega\big[f^0(x,\bar y(x),v(x))-f^0(x,\bar y(x),\bar u(x))\big]\,dx.\tag{4.37}
$$
We note that the solution $z$ of (4.36) and the quantity $z^0$ defined by (4.37) depend on the choice of $v\in\mathcal U$. Thus, we denote them by $z(\cdot;v)$ and $z^0(v)$, respectively. Then, taking limits in (4.27), we obtain
$$
\varphi^0z^0(v)+\langle\varphi,g_y(\cdot,\bar y)z(\cdot;v)\rangle\ge0,\qquad\forall v\in\mathcal U.\tag{4.38}
$$
Now, we let
$$
\psi^0=-\varphi^0\in[-1,0],\qquad\mu=-\varphi.\tag{4.39}
$$


Then, (4.1) follows from (4.34). Also, we obtain (4.3) by taking limits in (4.31) (along the above-mentioned subsequence). Furthermore,
$$
\psi^0z^0(v)+\langle\mu,g_y(\cdot,\bar y)z(\cdot;v)\rangle\le0,\qquad\forall v\in\mathcal U.\tag{4.40}
$$
Since $\mu\in M(\bar\Omega)\subset W^{-1,p'}(\Omega)$, we know ([17]) that (4.2) admits a unique solution $\psi\in W_0^{1,p'}(\Omega)$. By some direct computation, we can reduce (4.40) to
$$
\begin{aligned}
\int_\Omega\Big\{\psi^0\big[f^0(x,\bar y(x),\bar u(x))-f^0(x,\bar y(x),v(x))\big]
+\psi(x)\big[f(x,\bar y(x),\bar u(x))-f(x,\bar y(x),v(x))\big]\Big\}\,dx&\\
\equiv\int_\Omega\big[H(x,\bar y(x),\bar u(x),\psi^0,\psi(x))-H(x,\bar y(x),v(x),\psi^0,\psi(x))\big]\,dx&\ \ge0,\qquad\forall v\in\mathcal U.
\end{aligned}
\tag{4.41}
$$
We take a countable dense subset $\{u_k\}_{k\ge1}\subset U$. For each $u_k$, there exists a measurable set $\Omega_k\subset\Omega$ with $|\Omega_k|=|\Omega|$, such that $\Omega_k$ consists of all Lebesgue points of the function $H(x,\bar y(x),\bar u(x),\psi^0,\psi(x))-H(x,\bar y(x),u_k,\psi^0,\psi(x))$. Then, for any $x_0\in\Omega_k$ and any small enough $r>0$ (with $B_r(x_0)\equiv\{x\in\mathbb{R}^n:\ |x-x_0|\le r\}\subset\Omega$), we take $v$ in (4.41) as
$$
v(x)=
\begin{cases}
\bar u(x), & \text{if }x\in\Omega\setminus B_r(x_0),\\
u_k, & \text{if }x\in B_r(x_0).
\end{cases}
\tag{4.42}
$$

Then, (4.41) reads
$$
\int_{B_r(x_0)}\big[H(x,\bar y(x),\bar u(x),\psi^0,\psi(x))-H(x,\bar y(x),u_k,\psi^0,\psi(x))\big]\,dx\ge0,\tag{4.43}
$$
for all small enough $r>0$. Dividing by $|B_r(x_0)|$ and sending $r\to0$, we obtain
$$
H(x_0,\bar y(x_0),\bar u(x_0),\psi^0,\psi(x_0))\ge H(x_0,\bar y(x_0),u_k,\psi^0,\psi(x_0)),\tag{4.44}
$$
for all $x_0\in\Omega_k$, $k\ge1$. Then, by the continuity of $f^0(x,y,u)$ and $f(x,y,u)$ in $u$ and the density of the countable set $\{u_k\}_{k\ge1}$, we obtain (4.4). □

Remark 4.2. We note that from (4.3) it follows that $\mu$ is a nonpositive measure and
$$
\operatorname{supp}\mu\subset\{x\in\bar\Omega:\ g(x,\bar y(x))=\delta\}.\tag{4.45}
$$
Also, let us mention that assumption (2.12) can be removed. If we assume that
$$
g(x,0)<\delta,\qquad\forall x\in\partial\Omega,\tag{4.46}
$$
then the state constraint is inactive on the boundary of $\Omega$, so (4.45) is still true and Theorem 4.1 remains valid. Finally, in the case where
$$
\sup_{x\in\partial\Omega}g(x,0)=\delta,\tag{4.47}
$$
the state constraint may be active on the boundary, and consequently the support of the Lagrange multiplier $\mu$ may intersect the boundary of $\Omega$. In this case, the adjoint state equation should be modified.


Remark 4.3. As in [4], we may relax the continuity of the functions $f$ and $f^0$ in the control variable $u$. One such interesting case is that in which the functions $f$ and $f^0$ are given by
$$
f(x,y,u)=f_1(x,y,u)+f_2(x,u),\qquad f^0(x,y,u)=f^0_1(x,y,u)+f^0_2(x,u),\tag{4.48}
$$
with $f_1$ and $f^0_1$ satisfying (A2)–(A3), and $f_2$ and $f^0_2$ merely measurable and bounded. Our results remain true in such a case. Some other cases are also possible; we omit the details here (see [4]).

5. Strong Pontryagin maximum principle. In this section we are going to prove that Theorem 4.1 holds with $\psi^0=-1$ if Problem $(P_\delta)$ is stable in a certain sense, which we make precise in the following definition.

Definition 5.1. We say that problem $(P_\delta)$ is strongly stable on the right if there exist $\varepsilon>0$ and $C>0$ such that
$$
\inf(P_\delta)-\inf(P_{\delta'})\le C(\delta'-\delta),\qquad\forall\delta'\in[\delta,\delta+\varepsilon].\tag{5.1}
$$
This concept was used by Bonnans and Casas [4] to derive the Pontryagin principle in a qualified form, that is to say, the same conditions (4.2)–(4.4) but with the parameter $\psi^0=-1$. Here we will prove that the result stated in [4] for semilinear elliptic equations still holds for quasilinear elliptic equations. The approach we use here is simpler than that given in [4]. The key point in the proof of this principle is that it is possible to make an exact penalization of the state constraint provided the control problem is strongly stable on the right. Therefore, the first question to consider is whether this assumption is satisfied frequently or not. Fortunately, most problems are stable; more precisely:

Proposition 5.2. Let $\delta_0$ be a real number such that $(P_{\delta_0})$ has at least one admissible control. Then, for every $\delta\ge\delta_0$, except at most a set of zero Lebesgue measure, the problem $(P_\delta)$ is strongly stable on the right.
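A heuristic way to see this (a sketch under our reading, not the argument given in the literature) is the following: the value function $v(\delta')=\inf(P_{\delta'})$ is nonincreasing in $\delta'$, since enlarging $\delta'$ enlarges $\mathcal U_{\delta'}$; hence $v$ is differentiable at almost every $\delta'\ge\delta_0$, and at any point $\delta$ of differentiability,
$$
\inf(P_\delta)-\inf(P_{\delta'})=-v'(\delta)(\delta'-\delta)+o(\delta'-\delta)\le\big(|v'(\delta)|+1\big)(\delta'-\delta)
$$
for all $\delta'>\delta$ sufficiently close to $\delta$, which is precisely (5.1).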

See [4] for the proof of this proposition. Now we can state the main result of this section.

Theorem 5.3. Under assumptions (A1)–(A4), and provided that $(P_\delta)$ is strongly stable on the right, there exist $\psi\in W_0^{1,p'}(\Omega)$, with $p'<n/(n-1)$, and $\mu\in M(\bar\Omega)$ such that (4.2)–(4.4) hold with $\psi^0=-1$.

Before proving this theorem we need some lemmas. Let $Q$ and $d_Q$ be defined as in the previous section.

Lemma 5.4. Let us assume that $(P_\delta)$ is strongly stable on the right. Then there exists a $q>0$ such that $\bar u$ is a solution in $(\mathcal U,d)$ of the penalized problem
$$
\min_{u\in\mathcal U}\ J_q(u)=J(u)+q\,d_Q\big(g(\cdot,y(\cdot;u))\big).\tag{5.2}
$$

Proof. Suppose the contrary. Then for each $k>0$, there exists a $u_k\in\mathcal U$, such that
$$
J(u_k)+k\,d_Q(g(\cdot,y_k))<J(\bar u),\qquad\forall k>0,\tag{5.3}
$$
where $y_k$ is the feasible state corresponding to $u_k$. Then, we see that
$$
0<d_Q(g(\cdot,y_k))\to0,\qquad k\to\infty.\tag{5.4}
$$


Since each $g(\cdot,y_k)\notin Q$, we have
$$
\delta_k\equiv\max_{x\in\bar\Omega}g(x,y_k(x))>\delta,\qquad\forall k>0,\tag{5.5}
$$
and, by (5.4),
$$
\lim_{k\to\infty}\delta_k=\delta.\tag{5.6}
$$
Then, by the strong stability, there is a constant $C>0$ such that
$$
\inf(P_\delta)-\inf(P_{\delta_k})\le C(\delta_k-\delta),\qquad\forall k>0.\tag{5.7}
$$
However, this together with (5.3) yields
$$
C(\delta_k-\delta)\ge J(\bar u)-J(u_k)>k\,d_Q(g(\cdot,y_k))
\ge kC_0\big\|\big(g(\cdot,y_k)-\delta\big)^+\big\|_{C_0(\bar\Omega)}=kC_0(\delta_k-\delta),\qquad\forall k>0.\tag{5.8}
$$
This is a contradiction. □

Since $J_q$ is not Gâteaux differentiable at $\bar u$, we need to modify this functional slightly.

Lemma 5.5. Let $\varepsilon>0$ and consider the problem
$$
(P_{\delta,\varepsilon})\qquad\min_{u\in\mathcal U}\ J_{q,\varepsilon}(u)=J(u)+q\big\{d_Q\big(g(\cdot,y(\cdot;u))\big)^2+\varepsilon^2\big\}^{1/2}.\tag{5.9}
$$
Then the following identity holds:
$$
\lim_{\varepsilon\to0}\inf(P_{\delta,\varepsilon})=\inf_{u\in\mathcal U}J_q(u).\tag{5.10}
$$
Proof. It is an immediate consequence of the inequality
$$
J_q(u)\le J_{q,\varepsilon}(u)\le J_q(u)+q\varepsilon.\qquad\square\tag{5.11}
$$

Now, we present a proof of the main result of this section.

Proof of Theorem 5.3. Lemmas 5.4 and 5.5 imply that $\bar u$ is a $\delta_\varepsilon^2$-solution of $(P_{\delta,\varepsilon})$, with $\delta_\varepsilon\searrow0$ as $\varepsilon\searrow0$, that is,
$$
J_{q,\varepsilon}(\bar u)\le\inf(P_{\delta,\varepsilon})+\delta_\varepsilon^2.\tag{5.12}
$$
Then we can apply Ekeland's variational principle again and deduce the existence of an element $u_\varepsilon\in\mathcal U$, such that
$$
d(\bar u,u_\varepsilon)\le\delta_\varepsilon,\qquad J_{q,\varepsilon}(u_\varepsilon)\le J_{q,\varepsilon}(\bar u),\tag{5.13}
$$
and
$$
J_{q,\varepsilon}(u)-J_{q,\varepsilon}(u_\varepsilon)\ge-\delta_\varepsilon\,d(u,u_\varepsilon),\qquad\forall u\in\mathcal U.\tag{5.14}
$$
Now we can argue as in the proof of Theorem 4.1 and replace (4.27) by
$$
-\delta_\varepsilon|\Omega|\le\lim_{\rho\to0}\frac{J_{q,\varepsilon}(u^\varepsilon_\rho)-J_{q,\varepsilon}(u_\varepsilon)}{\rho}
=z^{0,\varepsilon}+\langle\varphi^\varepsilon,g_y(\cdot,y_\varepsilon)z_\varepsilon\rangle,\tag{5.15}
$$


where the element $\varphi^\varepsilon\in M(\bar\Omega)$ is given by
$$
\varphi^\varepsilon=
\begin{cases}
q\,\dfrac{d_Q(g(\cdot,y_\varepsilon))}{\{d_Q(g(\cdot,y_\varepsilon))^2+\varepsilon^2\}^{1/2}}\,\nabla d_Q(g(\cdot,y_\varepsilon)), & \text{if }g(\cdot,y_\varepsilon)\notin Q,\\[8pt]
0, & \text{if }g(\cdot,y_\varepsilon)\in Q.
\end{cases}
\tag{5.16}
$$
Therefore, we have $|\varphi^\varepsilon|_*\le q$. Now we can take a subsequence that converges weakly$^*$ to an element $\varphi\in M(\bar\Omega)$. The rest is as in the proof of Theorem 4.1, taking $\varphi^{0,\varepsilon}=1$. □

Acknowledgment. The authors thank Professor Luis A. Fernández of the University of Cantabria for his comments and suggestions on this paper.

REFERENCES

[1] J. F. Bonnans and E. Casas, Optimal control of semilinear multistate systems with state constraints, SIAM J. Control Optim., 27 (1989), 446–455.
[2] J. F. Bonnans and E. Casas, Un principe de Pontryagine pour le contrôle des systèmes semilinéaires elliptiques, J. Diff. Equ., 90 (1991), 288–303.
[3] J. F. Bonnans and E. Casas, A boundary Pontryagin's principle for the optimal control of state-constrained elliptic systems, Int. Ser. Numer. Math., 107 (1992), 241–249.
[4] J. F. Bonnans and E. Casas, An extension of Pontryagin's principle for state-constrained optimal control of semilinear elliptic equations and variational inequalities, SIAM J. Control Optim., to appear.
[5] E. Casas, Control of an elliptic problem with pointwise state constraints, SIAM J. Control Optim., 24 (1986), 1309–1318.
[6] E. Casas and L. A. Fernández, Distributed control of systems governed by a general class of quasilinear elliptic equations, J. Diff. Equ., 104 (1993), 20–47.
[7] F. H. Clarke, "Optimization and Nonsmooth Analysis," Wiley, New York, 1983.
[8] J. Diestel, "Geometry of Banach Spaces — Selected Topics," Lecture Notes in Math. No. 485, Springer-Verlag, Berlin, 1975.
[9] I. Ekeland, Nonconvex minimization problems, Bull. Amer. Math. Soc. (New Series), 1 (1979), 443–474.
[10] H. O. Fattorini and H. Frankowska, Necessary conditions for infinite dimensional control problems, Math. Control Signals Systems, 4 (1991), 41–67.
[11] D. Gilbarg and N. S. Trudinger, "Elliptic Partial Differential Equations of Second Order," 2nd Edition, Springer-Verlag, 1983.
[12] X. Li, Vector-valued measure and the necessary conditions for the optimal control problems of linear systems, Proc. IFAC 3rd Symposium on Control of Distributed Parameter Systems, Toulouse, France, 1982.
[13] X. Li and J. Yong, Necessary conditions of optimal control for distributed parameter systems, SIAM J. Control Optim., 29 (1991), 895–908.
[14] G. M. Lieberman, Boundary regularity for solutions of degenerate elliptic equations, Nonlinear Anal. TMA, 12 (1988), 1203–1219.
[15] J. L. Lions, "Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires," Dunod, Paris, 1969.
[16] J. L. Lions, "Optimal Control of Systems Governed by Partial Differential Equations," Springer-Verlag, New York, 1971.
[17] C. B. Morrey, Jr., "Multiple Integrals in the Calculus of Variations," Springer-Verlag, 1966.
[18] L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze and E. F. Mischenko, "Mathematical Theory of Optimal Processes," Wiley, New York, 1962.
[19] G. Stampacchia, Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus, Ann. Inst. Fourier Grenoble, 15 (1965), 189–258.
[20] E. M. Stein, "Singular Integrals and Differentiability Properties of Functions," Princeton Univ. Press, Princeton, N.J., 1970.
[21] J. Yong, Pontryagin maximum principle for semilinear second order elliptic partial differential equations and variational inequalities with state constraints, Differential Integral Equations, 5 (1992), 1307–1334.