
DOI: 10.1007/s00245-005-0834-1

Appl Math Optim 53:1–29 (2006)

© 2005 Springer Science+Business Media, Inc.

Lyapunov Stabilizability of Controlled Diffusions via a Superoptimality Principle for Viscosity Solutions∗

Annalisa Cesaroni

Annalisa Cesaroni, Dipartimento di Matematica P. e A., Università di Padova, via Belzoni 7, 35131 Padova, Italy. [email protected]

Abstract. We prove optimality principles for semicontinuous bounded viscosity solutions of Hamilton–Jacobi–Bellman equations. In particular, we provide a representation formula for viscosity supersolutions as value functions of suitable obstacle control problems. This result is applied to extend the Lyapunov direct method for stability to controlled Itô stochastic differential equations. We define the appropriate concept of Lyapunov function to study stochastic open loop stabilizability in probability and local and global asymptotic stabilizability (or asymptotic controllability). Finally, we illustrate the theory with some examples.

Key Words. Controlled degenerate diffusion, Hamilton–Jacobi–Bellman inequalities, Viscosity solutions, Dynamic programming, Superoptimality principles, Obstacle problem, Stochastic control, Stability in probability, Asymptotic stability.

AMS Classification. 49L25, 93E15, 93D05, 93D20.

1. Introduction

We consider an $N$-dimensional stochastic differential equation
\[
dX_t = f(X_t)\,dt + \sigma(X_t)\,dW_t,
\]
where $W_t$ is a standard $M$-dimensional Brownian motion. Since the sixties, a stochastic Lyapunov method for the analysis of the qualitative properties of the solutions of stochastic differential equations, in analogy to the deterministic Lyapunov method, was developed. The main contributions are due to Has'minskii (see the monograph [20] and the references therein) and Kushner (see the monographs [25] and [27]). They introduced the notions of stability in probability and asymptotic stability in probability. Stability in probability means that the probability that the trajectory leaves a given neighborhood of the equilibrium decreases to zero as the initial datum approaches the equilibrium. If, moreover, the trajectory asymptotically approaches the equilibrium with probability converging to one as the initial datum approaches the equilibrium, the system is asymptotically stable in probability. Finally, if for every initial datum the trajectory asymptotically approaches the equilibrium almost surely, the system is asymptotically stable in the large.

∗ This research was partially supported by the Young-Researcher-Project CPDG034579 financed by the University of Padova.

The assumption of stability in probability implies that at equilibrium both the drift and the diffusion of the stochastic system have to vanish (for controlled systems, that there is at least one constant control $\alpha\in A$ such that $f(\cdot,\alpha)$ and $\sigma(\cdot,\alpha)$ vanish at equilibrium). So the equilibrium is preserved in the presence of noise. This condition excludes linear systems with additive nonvanishing noise; nevertheless, there are classes of stochastic systems (both controlled and uncontrolled) which satisfy this condition and whose stability properties are interesting. We refer to stochastic differential systems which model population dynamics subject to environmental noise, such as stochastic Lotka–Volterra models (see, for example, [21], [30] and the references therein), to stochastic oscillators (see [30] and the references therein), or to deterministic systems driven by a stochastic force (when one wants to stabilize only the state variable).

The stochastic analogs of deterministic Lyapunov functions are twice continuously differentiable functions $V$ which are positive definite and proper and satisfy the infinitesimal decrease condition
\[
-DV(x)\cdot f(x) - \operatorname{trace}[a(x)D^{2}V(x)] \ \ge\ l(x), \tag{1}
\]
with $l\ge0$ for mere Lyapunov stability and $l>0$ for $x\ne0$ for asymptotic stability, where $a := \sigma\sigma^{T}/2$. By the Dynkin formula, this differential inequality implies that the stochastic process $V(X_t)$, where $X_t$ is the solution of the stochastic differential equation starting from $x$, is a positive supermartingale. This translates into the stochastic setting the requirement that the Lyapunov function decrease along the trajectories of the dynamical system. There is a large literature on this kind of stochastic stability: we refer to the cited monographs and to [30], see also the references therein. We also recall the work of Florchinger [18], [19] and Deng et al. [13] on feedback stabilization of controlled stochastic differential equations by the Lyapunov function method.
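The supermartingale property behind (1) is easy to probe numerically. The following sketch (our illustration, not from the paper) simulates the scalar linear SDE $dX_t=-X_t\,dt+sX_t\,dW_t$, for which $V(x)=x^2$ satisfies (1) with $l(x)=(2-s^2)x^2\ge0$ whenever $s^2<2$, and checks that the Monte Carlo estimate of $E[V(X_t)]$ is decreasing.

```python
import numpy as np

# Illustrative sketch (our example, not from the paper): for the scalar SDE
#   dX_t = -X_t dt + s X_t dW_t,  with s^2 < 2,
# V(x) = x^2 satisfies (1) with l(x) = (2 - s^2) x^2 >= 0, so V(X_t)
# should be a positive supermartingale: t -> E[V(X_t)] nonincreasing.

def euler_maruyama(x0, drift, diffusion, T=1.0, dt=1e-3, n_paths=20_000, seed=0):
    """Simulate n_paths independent Euler-Maruyama trajectories on [0, T]."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    X = np.empty((n_steps + 1, n_paths))
    X[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X[k + 1] = X[k] + drift(X[k]) * dt + diffusion(X[k]) * dW
    return X

s = 0.5
X = euler_maruyama(1.0, drift=lambda x: -x, diffusion=lambda x: s * x)
EV = (X**2).mean(axis=1)  # Monte Carlo estimate of E[V(X_t)] on the time grid
# For this linear SDE, E[X_t^2] = exp((-2 + s^2) t) exactly, so the curve
# should decay roughly like exp(-1.75 t).
print(EV[0], EV[-1])
```

For this linear equation the decay rate is known in closed form, so the simulation doubles as a consistency check on the discretization.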

In this paper we extend the Lyapunov method for stochastic differential equations essentially in two directions. First, we consider controlled stochastic differential equations in $\mathbb{R}^N$,
\[
dX_t = f(X_t,\alpha_t)\,dt + \sigma(X_t,\alpha_t)\,dW_t;
\]
moreover, we allow the Lyapunov functions to be merely lower semicontinuous. The nonexistence of smooth Lyapunov functions is well known in the deterministic case; see [2] for stable uncontrolled systems and the surveys [35] and [2] for asymptotically stable controlled systems. Also in the stochastic case, the assumption of smoothness for Lyapunov functions is not necessary and would limit considerably the applicability of the theory and the possibility of getting a complete Lyapunov characterization of stabilizability in probability by means of a converse theorem. Here we give an example of an uncontrolled degenerate diffusion process that is stable in probability but does not admit a smooth Lyapunov function (Example 3 in Section 7). Kushner proved in [26] a characterization of asymptotic uniform stochastic stability (for uncontrolled systems) by means of merely continuous Lyapunov functions (here, however, the infinitesimal decrease condition is given not by a differential inequality but in terms of the weak generator of the process). For stability in probability, Has'minskii provided a $C^2$ Lyapunov function under the assumption of strict nondegeneracy of the diffusion; this result cannot be extended to possibly degenerate diffusions. Converse theorems in the controlled case appear in the Ph.D. thesis of the author [9]. In particular, we prove there that the existence of a local Lyapunov function is also necessary for stability in probability: if the system (CSDE) in Section 2 is uniformly asymptotically stabilizable in probability, then there exists a local strict Lyapunov function, which is continuous.

We then define a Lyapunov function for stability in probability as a lower semicontinuous (LSC), positive definite, proper function $V$, continuous at $0$, satisfying in the viscosity sense the differential Hamilton–Jacobi–Bellman inequality
\[
\max_{\alpha\in A}\{-DV(x)\cdot f(x,\alpha) - \operatorname{trace}[a(x,\alpha)D^{2}V(x)]\} \ \ge\ l(x), \tag{2}
\]
and we call it a strict Lyapunov function if $l>0$ off $0$. Our main results are the natural extensions to controlled diffusions of the first and second Lyapunov theorems:

the existence of a local Lyapunov function implies the (stochastic open loop) stabilizability in probability of (CSDE); a strict Lyapunov function implies the (stochastic open loop) asymptotic stabilizability in probability.

The same proof provides the global versions as well: if $V$ satisfies (2) in $\mathbb{R}^N\setminus\{0\}$, then (CSDE) is also (stochastic open loop) Lagrange stabilizable, i.e., it has the property of uniform boundedness of trajectories, and if $V$ is strict, then the system is (stochastic open loop) asymptotically stabilizable in the large. We also give sufficient conditions for the stability of viable (controlled invariant) sets more general than an equilibrium point.
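To make condition (2) concrete, here is a small worked example (ours, not taken from the paper), for the scalar controlled SDE $dX_t=\alpha_t X_t\,dt+kX_t\,dW_t$ with control set $A=[-1,1]$ and a fixed constant $k$ with $k^2<2$:

```latex
% Worked example (ours): V(x) = x^2 as a strict Lyapunov function for
%   dX_t = \alpha_t X_t\,dt + k X_t\,dW_t,  A = [-1,1],  k^2 < 2.
% Here DV(x) = 2x, D^2V(x) = 2, and a(x,\alpha) = k^2 x^2/2, so condition (2) reads
\[
\max_{\alpha\in[-1,1]}\bigl\{-2x\cdot\alpha x - k^{2}x^{2}\bigr\}
  \;=\; 2x^{2} - k^{2}x^{2} \;=\; (2-k^{2})\,x^{2} \;=:\; l(x),
\]
% with the maximum attained at \alpha = -1.  Since l(x) > 0 for x \ne 0,
% V is a strict Lyapunov function, and the maximizing constant control
% \alpha \equiv -1 is a stabilizing open loop control.
```

The computation just evaluates the two terms of (2) for this particular $f$, $\sigma$ and $V$ and maximizes the affine function of $\alpha$ over the interval $[-1,1]$.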

The main tool to prove these results is a superoptimality principle for lower semicontinuous bounded viscosity supersolutions $V$ of the Hamilton–Jacobi–Bellman equation (2). A similar approach has been exploited in the deterministic case by Barron and Jensen (see [8]) for globally asymptotically stable systems affected by disturbances and by Soravia [38], [37] for stable systems with competitive controls. Precisely, we prove that

every bounded LSC viscosity supersolution $V$ of (2) can be represented as
\[
V(x) = \inf_{\alpha}\,\sup_{t\ge0}\, E_x\Bigl[V(X^{\alpha}_t) + \int_0^t l(X^{\alpha}_s)\,ds\Bigr]. \tag{3}
\]

This representation formula is important in its own right, since it refers to Hamilton–Jacobi–Bellman equations for which uniqueness of solutions is not expected. In particular, this formula permits us to characterize the value function
\[
V(x) = \inf_{\alpha}\, E_x \int_0^{+\infty} l(X^{\alpha}_s)\,ds,
\]
when it is well defined, as the minimal nonnegative LSC viscosity supersolution of (2).

Our proof adapts to the second-order case the arguments used by Soravia in [38] and [39], where he provides, in the general context of differential games, a representation formula (a superoptimality principle which holds as an equality) for supersolutions of first-order Isaacs equations. The main difficulty in the adaptation is the passage from deterministic to stochastic dynamics and the use of stochastic controls. For the definition of the classes of controls and for the compactness and measurable selection results we use, we refer mainly to the article by Haussmann and Lepeltier [22] (see also the article by El Karoui et al. [15] and the book by Stroock and Varadhan [40, Chapter 12]). For related results on the existence of optimal controls for stochastic problems we refer to the article by Kushner [28].

There is a large literature on dynamic programming and superoptimality and suboptimality principles for viscosity solutions of second-order Hamilton–Jacobi–Bellman equations, starting from the papers by Lions [29] (see also the books [16] and [23]). We recall here the recent work by Soner and Touzi on dynamic programming for stochastic target problems [32], [33]. We refer also to the paper by Swiech [41] on sub- and superoptimality principles for value functions of stochastic differential games (see also the paper by Fleming and Souganidis [17]).

In the last section we present a simple application of our Lyapunov method. We consider an asymptotically controllable deterministic system and we study under which conditions it remains stable if we add a stochastic perturbation to it. By the Lyapunov characterization of asymptotic controllability provided by Clarke et al. [11] and Rifford [31], we know that the unperturbed system admits a semiconcave Lyapunov function (except possibly at equilibrium). We obtain that the system remains stable under a small intensity condition on the diffusion matrix $\sigma$, depending on the semiconcavity constant of the Lyapunov function and on the qualitative properties of the stable trajectories of the deterministic system.

We conclude with some additional references. We recall that there are other notions of stochastic stability. Kozin introduced the exponential almost sure stability of an uncontrolled stochastic system. Stability in the mean square and p-stability were studied by means of Lyapunov functions (we refer to the monograph [20]). In the controlled case, in previous papers Bardi and the author (see [4] and [3], see also [1]) characterized by means of appropriate Lyapunov functions the almost sure stabilizability of stochastic differential equations. This is a stronger notion of stochastic stability, never verified for nondegenerate processes. Indeed, a system is almost surely stabilizable if it behaves as a deterministic stabilizable system and remains almost surely in a neighborhood of the equilibrium point. Turning to deterministic controlled systems, a complete Lyapunov characterization of asymptotic stabilizability (called asymptotic controllability) has been proved by Sontag and Sussmann (see the articles [34] and [36] and the review paper [35]). The infinitesimal decrease condition of the Lyapunov function along the trajectories of the system is expressed in terms of Dini directional derivatives, contingent directional derivatives and proximal subgradients. There is a large literature on the


stabilization of deterministic controlled systems by the Lyapunov function method: we refer to the monograph [2] and to the papers [10] and [31], see also the references therein.

The paper is organized as follows. In Section 2 we introduce the stochastic control problems and recall the definitions and basic properties of the controls we use. Section 3 is devoted to the proof of the representation formula (3). Section 4 contains the definitions of stabilizability in probability, asymptotic stabilizability and Lyapunov functions; in Section 5 we apply the results of Section 3 to prove local and global versions of the Lyapunov theorems. In Section 6 we introduce the notion of a controlled attractor and discuss the generalization of the direct Lyapunov method to the stabilization of sets. Finally, in Section 7 we present some examples illustrating the theory.

2. Stochastic Control Setting

In this section we introduce the stochastic control problem and recall the definitions and basic properties of the controls we use.

We consider a controlled Itô stochastic differential equation
\[
\text{(CSDE)}\qquad
\begin{cases}
dX_t = f(X_t,\alpha_t)\,dt + \sigma(X_t,\alpha_t)\,dB_t, & t>0,\\
X_0 = x.
\end{cases}
\]
We assume that $\alpha_t$ takes values in a given compact set $A\subseteq\mathbb{R}^M$, and that $f$ and $\sigma$ are continuous functions defined on $\mathbb{R}^N\times A$, taking values, respectively, in $\mathbb{R}^N$ and in the space of $N\times M$ matrices, and satisfying, for all $x,y\in\mathbb{R}^N$ and all $\alpha\in A$,
\[
|f(x,\alpha)-f(y,\alpha)| + \|\sigma(x,\alpha)-\sigma(y,\alpha)\| \ \le\ C|x-y|. \tag{4}
\]

We define
\[
a(x,\alpha) := \tfrac{1}{2}\,\sigma(x,\alpha)\sigma(x,\alpha)^{T}
\]
and assume that
\[
\{(a(x,\alpha), f(x,\alpha)) : \alpha\in A\} \ \text{is convex for all } x\in\mathbb{R}^N. \tag{5}
\]

We recall here the definition of the admissible controls that we allow in our control problems. For precise definitions we refer to [22] and [15] (see also the references therein). Actually, in these articles the problem is formulated in terms of solutions of the martingale problem, but it is also shown that there is an equivalent formulation in terms of solutions of (CSDE).

We relax the control problem by using weak controls, that is, by admitting all the weak solutions of (CSDE). We have not assigned a priori a probability space $(\Omega,\mathcal{F},\mathcal{F}_t,P)$ with its filtration, so when we introduce a control we mean that we are at the same time also choosing a probability space and a standard Brownian motion $B_t$ on this space. Actually, under hypothesis (4) it can be shown that the space of strong controls is not empty and that, under suitable assumptions on the cost functional (essentially lower semicontinuity with respect to the $x$ variable), the strong problem and the weak problem have the same value [15, Theorem 4.11].


Definition 1 (Strict Controls, Definition 2.2 of [22]). For every initial datum $x\in\mathbb{R}^N$, a strict control is a progressively measurable $A$-valued process $(\alpha_t)_{t\ge0}$ such that there exists an $\mathbb{R}^N$-valued, right continuous, almost surely continuous, progressively measurable solution $X^{\alpha}_t$ of (CSDE) (see also Definition 1.4 of [15]). We denote by $\mathcal{A}_x$ the set of strict controls for $x\in\mathbb{R}^N$.

The class of strict controls can be embedded, as in the deterministic case, into a larger class of admissible controls. We denote by $\mathcal{M}(A)$ the set of probability measures on $A$ endowed with the topology of weak convergence. We note that it is a separable metric space.

Definition 2 (Relaxed Controls, Definition 3.2 of [22]). For every initial datum $x\in\mathbb{R}^N$, a relaxed control is a progressively measurable $\mathcal{M}(A)$-valued process $(\mu_t)_{t\ge0}$ such that there exists an $\mathbb{R}^N$-valued, right continuous, almost surely continuous, progressively measurable solution $X^{\mu}_t$ of (CSDE) (see also Definition 2.4 of [15]). We denote by $\mathcal{M}_x$ the set of relaxed controls for $x\in\mathbb{R}^N$.

We now choose a canonical probability space for our control problem. By means of this canonical space we can give a formulation of the optimization problem in a convex compact setting. The most natural canonical space for the strict control problem seems to be the space of the trajectories $X_\cdot$ of (CSDE). It is the space $\mathcal{C}$ of continuous functions from $[0,+\infty)$ to $\mathbb{R}^N$ with its natural filtration. In order to give a control on this space, it is sufficient to specify the probability measure on $\mathcal{C}$ (which is the law of the process $X_\cdot$) and the progressively measurable function $\alpha$. Rather than working with this canonical space, we consider the space of trajectories $(X_\cdot,\mu_\cdot)$ for $\mu$ a relaxed control. Let $\mathcal{V}$ be the space of measurable functions from $[0,+\infty)$ to $\mathcal{M}(A)$ with its canonical filtration. We denote by $\mathcal{M}(\mathcal{V})$ the set of probability measures on $\mathcal{V}$ endowed with the stable topology (a topology introduced by Jacod and Memin; for a precise definition see [22, Section 3.10] and the references therein). The canonical space for the relaxed control problem will be the product space $\mathcal{C}\times\mathcal{V}$ with the product filtration. We call a relaxed control defined on this canonical space a canonic relaxed control or control rule (see Definition 3.12 of [22] and also Definition 3.2 of [15]). In order to identify a canonic relaxed control, it is sufficient to specify the probability measure on the space $\mathcal{C}\times\mathcal{V}$: canonic relaxed controls can be considered as measures on the canonical space.

In the following we consider a cost functional
\[
J(x,\alpha) = \sup_{t\ge0}\, E_x\Bigl[V(X_t) + \int_0^t l(X_s)\,ds\Bigr],
\]
where $l$ is a continuous, nonnegative function and $V$ is an LSC, nonnegative function. The functional $j(x,t,\alpha) = E_x[V(X_t) + \int_0^t l(X_s)\,ds]$ satisfies, for every $x$ and $t$, the lower semicontinuity assumptions required in [22] on the cost functional. Then, since the supremum of LSC maps is LSC, the functional $J(x,\alpha)$ also satisfies the same lower semicontinuity assumptions. We list here the results obtained in [22] that we are going to use. The crucial assumption for all of them, besides the right choice of the class of admissible controls and the lower semicontinuity of the cost functional, is the convexity assumption (5).
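The cost functional $J$ is straightforward to estimate by Monte Carlo for a fixed control. The sketch below (our invented example, not from the paper) does this for a scalar controlled SDE under a constant control; the system and the functions $V$, $l$ are chosen so that the bracket inside the expectation has constant mean, making the answer checkable.

```python
import numpy as np

# Monte Carlo sketch (our example, not from the paper) of the cost functional
#   J(x, a) = sup_{t >= 0} E_x[ V(X_t) + int_0^t l(X_s) ds ]
# for the scalar controlled SDE dX = a X dt + k X dW under the constant
# control a = -1, with V(x) = x^2 and l(x) = (2 - k^2) x^2.  For this choice
# Ito's formula gives d/dt E[V(X_t)] = (2a + k^2) E[X_t^2] = -E[l(X_t)],
# so the bracket has constant expectation and J(x, -1) should equal V(x).

def estimate_J(x0, a=-1.0, k=0.5, T=3.0, dt=1e-3, n_paths=10_000, seed=1):
    rng = np.random.default_rng(seed)
    l = lambda x: (2.0 - k**2) * x**2
    X = np.full(n_paths, float(x0))
    running = np.zeros(n_paths)      # pathwise int_0^t l(X_s) ds
    best = (X**2).mean()             # the t = 0 term is E[V(X_0)] = V(x0)
    for _ in range(int(round(T / dt))):
        running += l(X) * dt
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + a * X * dt + k * X * dW
        best = max(best, (X**2 + running).mean())
    return best

Jhat = estimate_J(1.0)
print(Jhat)  # should be close to V(1) = 1
```

Taking the maximum of noisy sample means over the time grid slightly overestimates the supremum, so the estimate sits a little above the theoretical value $V(x_0)$.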


The class of control rules is the class on which it is possible to formulate a dynamic programming principle and to show the existence of an optimal control. The key result is Proposition 5.2 of [22]:

for every initial datum $x$, the class of optimal control rules admissible for $x$ is convex and compact.

We have the following theorem stating the existence of an optimal control.

Theorem 3 (Theorem 4.7 and Corollary 4.8 of [22]). Under the convexity assumption (5) and the other assumptions listed above, for every initial datum $x\in\mathbb{R}^N$ there exists an optimal control rule for the control problem
\[
\inf_{\alpha}\, J(x,\alpha).
\]
Moreover, the infimum of the cost functional computed over the class of control rules coincides with the infimum of the cost functional computed over the class of strict controls:
\[
\inf_{\alpha\in\mathcal{A}_x} J(x,\alpha) = \inf_{\alpha\in\mathcal{M}_x} J(x,\alpha). \tag{6}
\]
In particular, the optimal control can be chosen strict.

The two crucial properties of the control space needed to get a dynamic programming principle are stability under measurable selection and stability under concatenation (see [33]). They are satisfied by the class of control rules. We consider a measurable set-valued map from $\mathbb{R}^N$ to the space of probability measures on the canonical space $\mathcal{C}\times\mathcal{V}$, with convex compact values. Then, by a standard measurable selection theorem (see Theorem 5.3 of [15]), this map has a measurable selector. In Lemma 5.5 of [22] (see also Chapter 12 of [40] and Theorems 6.3 and 6.4 of [15]) it is proved that this measurable selector is an admissible control rule. Moreover, in Lemma 5.8 of [22] (see also Theorem 6.2 of [15]) it is shown that if we take an admissible control and at some later stopping time switch to an $\varepsilon$-optimal control from then on, the concatenated object is still admissible.

Finally, we observe that all these results remain valid if we consider, instead of the trajectories of (CSDE) in $\mathbb{R}^N$, the trajectories of this system stopped at the exit time from a given open set (see [22]).

3. Superoptimality Principles

In this section we prove a representation formula for bounded LSC viscosity supersolutions of Hamilton–Jacobi–Bellman equations. We adapt to the second-order case the proof of optimality principles for viscosity supersolutions of first-order Hamilton–Jacobi equations given by Soravia in [38] and [39]. This requires the use of stochastic control instead of deterministic control. The representation formula is obtained by introducing a suitable sequence of obstacle problems, solved in a viscosity sense by $V$. The technical core of the result is Lemma 5, which proves a suboptimality principle for the min-max value functions $L^{\lambda,k}$. The conclusion comes from a uniqueness result for viscosity solutions of such problems and from an approximation procedure. We give both global and local versions of the result.

We consider the following Hamilton–Jacobi–Bellman equation:
\[
\max_{a\in A}\{-f(x,a)\cdot DV(x) - \operatorname{trace}[a(x,a)D^{2}V(x)]\} - l(x) = 0, \tag{7}
\]
where $l\colon\mathbb{R}^N\to\mathbb{R}$ is a nonnegative bounded continuous function.

Theorem 4 (Representation Formula for Viscosity Supersolutions). Consider a bounded LSC function $V\colon\mathbb{R}^N\to\mathbb{R}$. If $V$ is a viscosity supersolution of the Hamilton–Jacobi–Bellman equation (7) in $\mathbb{R}^N$, then it can be represented as
\[
V(x) = \inf_{\alpha}\,\sup_{t\ge0}\, E_x\Bigl[V(X^{\alpha}_t) + \int_0^t l(X^{\alpha}_s)\,ds\Bigr], \tag{8}
\]
where the infimum is taken over all strict admissible controls.

Proof. Without loss of generality we can reduce to the case $V\ge0$ by an appropriate translation. Since $V$ is LSC, bounded and nonnegative, we can consider an increasing sequence of continuous, nonnegative, bounded functions $V_k$ such that
\[
V(x) = \sup_{k\ge0} V_k(x) \quad\text{for every } x\in\mathbb{R}^N.
\]
If $V$ is continuous, we choose $V_k = V$ for every $k$. Now, for every $k\ge0$, we introduce the following obstacle problem in $\mathbb{R}^N$ with unknown $W$ and obstacle $V_k$:
\[
\min\Bigl\{\lambda W(x) + \max_{a\in A}\bigl[-f(x,a)\cdot DW(x) - \operatorname{trace}[a(x,a)D^{2}W(x)]\bigr] - l(x),\ W(x) - V_k(x)\Bigr\} = 0. \tag{9}
\]
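The claim made next, that $V$ is a supersolution of (9), follows from a one-line check, which we spell out (our elaboration, not written out in the paper):

```latex
% Elaboration (ours): a bounded LSC supersolution V >= 0 of (7) is a
% supersolution of the obstacle problem (9) for every \lambda, k >= 0.
% At a minimum point x of V - \varphi, for a smooth test function \varphi,
% the supersolution property for (7) gives
\[
\max_{a\in A}\{-f(x,a)\cdot D\varphi(x)-\operatorname{trace}[a(x,a)D^{2}\varphi(x)]\}-l(x)\ \ge\ 0,
\]
% and, since \lambda \ge 0, V(x) \ge 0 and V_k \le V by construction,
\[
\min\Bigl\{\lambda V(x)+\max_{a\in A}\{\cdots\}-l(x),\ V(x)-V_k(x)\Bigr\}
  \ \ge\ \min\{0,\,0\}\ =\ 0 .
\]
```

Both terms inside the minimum are nonnegative, which is exactly the supersolution inequality for (9).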

Obviously $V$ is a bounded LSC viscosity supersolution of the problem (9) for every $\lambda\ge0$ and every $k\ge0$. For $\lambda>0$ fixed, define
\[
L^{\lambda,k}(x) = \inf_{\alpha}\,\sup_{t\ge0}\, E_x\Bigl[e^{-\lambda t}V_k(X^{\alpha}_t) + \int_0^t l(X^{\alpha}_s)e^{-\lambda s}\,ds\Bigr].
\]
The plan of the proof is the following. We define the upper semicontinuous envelope of $L^{\lambda,k}$,
\[
\bar L^{\lambda,k}(x) := \inf\{v(x) \mid v\ge L^{\lambda,k} \text{ in } \mathbb{R}^N,\ v \text{ continuous}\}.
\]

We show that $\bar L^{\lambda,k}$ is a bounded (upper semicontinuous) viscosity subsolution of the obstacle problem (9). Then, by the comparison principle for bounded discontinuous viscosity solutions of Isaacs equations, we get that $L^{\lambda,k}(x)\le V(x)$ for every $\lambda>0$. From this we can conclude, sending $\lambda$ to $0$ and $k$ to $+\infty$, that $V$ satisfies the superoptimality principle (8).

By the definition and the boundedness of $V_k$, we get that $L^{\lambda,k}$ is bounded. We want to prove that its upper semicontinuous envelope $\bar L^{\lambda,k}$ is a viscosity subsolution of the obstacle problem (9). To get this result it is sufficient to check that $\bar L^{\lambda,k}$ is a viscosity subsolution of the Hamilton–Jacobi–Bellman equation
\[
\lambda L(x) + \max_{a\in A}\{-f(x,a)\cdot DL(x) - \operatorname{trace}[a(x,a)D^{2}L(x)]\} - l(x) = 0 \tag{10}
\]
at the points $x$ where $\bar L^{\lambda,k}(x) > V_k(x)$. This can be obtained by standard methods in the theory of viscosity solutions if we prove a local suboptimality principle for $\bar L^{\lambda,k}$ at such points $x$ (see [12] and [16]).

First we need the following technical lemma, whose proof we postpone to the end of the section.

Lemma 5. If $\bar L^{\lambda,k}(x) > V_k(x)$, then there exist a sequence $x_n\to x$ with $L^{\lambda,k}(x_n)\to \bar L^{\lambda,k}(x)$ and $L^{\lambda,k}(x_n) > V_k(x_n)$, and $\varepsilon>0$, such that
\[
L^{\lambda,k}(x_n) \ \le\ \inf_{\alpha}\, E_{x_n}\Bigl[e^{-\lambda t}\bar L^{\lambda,k}(X^{\alpha}_n(t)) + \int_0^t l(X^{\alpha}_n(s))e^{-\lambda s}\,ds\Bigr] \tag{11}
\]
for $t\le\varepsilon$ and $|x_n - x|\le\varepsilon$.

This is a local suboptimality principle. By a standard argument in the theory of viscosity solutions (for the detailed argument see, for example, [6], see also [16]), this inequality implies that $\bar L^{\lambda,k}$ is a viscosity subsolution of (10) at the points $x$ such that $\bar L^{\lambda,k}(x) > V_k(x)$. From this we deduce that $\bar L^{\lambda,k}$ is a viscosity subsolution of (9).

By the comparison principle obtained for Isaacs operators and bounded discontinuous viscosity solutions by Ishii in Theorem 7.3 of [24], we get that $V(x)\ge \bar L^{\lambda,k}(x)$ for every $\lambda>0$ and $k\ge0$. In particular, for $T>0$ and $k$ fixed, we get
\[
\begin{aligned}
V(x) &\ge \lim_{\lambda\to0} L^{\lambda,k}(x) \ge \lim_{\lambda\to0}\, \inf_{\alpha}\,\sup_{t\in[0,T]}\, E_x\Bigl[e^{-\lambda t}V_k(X^{\alpha}_t) + \int_0^t l(X^{\alpha}_s)e^{-\lambda s}\,ds\Bigr]\\
&\ge \lim_{\lambda\to0}\, e^{-\lambda T}\, \inf_{\alpha}\,\sup_{t\in[0,T]}\, E_x\Bigl[V_k(X^{\alpha}_t) + \int_0^t l(X^{\alpha}_s)\,ds\Bigr].
\end{aligned}
\]

Therefore, for every $T>0$ and $k\ge0$,
\[
V(x) \ \ge\ \inf_{\alpha}\,\sup_{t\in[0,T]}\, E_x\Bigl[V_k(X^{\alpha}_t) + \int_0^t l(X^{\alpha}_s)\,ds\Bigr].
\]

Now we want to pass to the limit as $k\to+\infty$. For $x$ fixed and every $k\ge0$ we consider an admissible control $\alpha^k$ such that
\[
V(x) + \frac1k \ \ge\ \sup_{t\in[0,T]}\, E_x\Bigl[V_k(X^{\alpha^k}_t) + \int_0^t l(X^{\alpha^k}_s)\,ds\Bigr]. \tag{12}
\]

By the definitions recalled in Section 2 we can associate to each couple $(X^{\alpha^k}_\cdot, \alpha^k)$ a control rule $P_k$. By the compactness of the space of control rules, we can extract a subsequence of control rules, which we continue to denote by $P_k$, that converges to some control rule $P$ in the stable topology. This control rule is the measure of a trajectory $(X_\cdot,\mu)$ of (CSDE) driven by a relaxed control $\mu$. Since convergence in the stable topology implies in particular weak convergence of the measures $P_k$ to the measure $P$, we get immediately that, for every $t\in[0,T]$,
\[
\lim_{k\to+\infty} E_x\int_0^t l(X^{\alpha^k}_s)\,ds = E_x\int_0^t l(X_s)\,ds,
\]
where the expected values on the left- and right-hand sides are computed, respectively, with the measures $P_k$ and $P$.

Recalling now that $V = \sup_k V_k$, where the $V_k$ are continuous functions, it is easy to show that $V$ can be obtained as
\[
V(x) = \liminf_{k\to+\infty,\ y\to x} V_k(y) := \sup_{\delta}\,\inf\Bigl\{V_k(y) \;\Bigl|\; |x-y|\le\delta,\ k\ge\frac1\delta\Bigr\}.
\]

Moreover, since convergence in the stable topology also implies convergence in probability of $X^{\alpha^k}_t$ to $X_t$, for $t\in[0,T]$ fixed we can extract a subsequence $X^{\alpha^k}_t$ which converges to $X_t$ almost surely with respect to the measure $P$. Then, along this subsequence,
\[
\liminf_{k\to+\infty} V_k(X^{\alpha^k}_t) \ \ge\ \liminf_{k\to+\infty,\ y\to X_t} V_k(y) \ \ge\ V(X_t), \qquad P\text{-almost surely}.
\]
By the Fatou lemma and the definition of stable convergence we deduce that, for each $t\in[0,T]$, along a subsequence,
\[
\liminf_{k\to+\infty} E\,V_k(X^{\alpha^k}_t) \ \ge\ E\,V(X_t),
\]
where the expected values are computed, respectively, with the measures $P_k$ and $P$.

To summarize, for every $t\in[0,T]$ we get, from (12),
\[
V(x) \ \ge\ \lim_{k\to+\infty} E_x\Bigl[V_k(X^{\alpha^k}_t) + \int_0^t l(X^{\alpha^k}_s)\,ds\Bigr] \ \ge\ E_x\Bigl[V(X_t) + \int_0^t l(X_s)\,ds\Bigr].
\]

So for every $T>0$ there exists a control rule for which
\[
V(x) \ \ge\ \sup_{t\in[0,T]}\, E_x\Bigl[V(X_t) + \int_0^t l(X_s)\,ds\Bigr].
\]
Now, by statement (6) in Theorem 3, we obtain
\[
V(x) \ \ge\ \inf_{\alpha}\,\sup_{t\in[0,T]}\, E_x\Bigl[V(X^{\alpha}_t) + \int_0^t l(X^{\alpha}_s)\,ds\Bigr]. \tag{13}
\]

Now it remains only to let $T\to+\infty$. For $\varepsilon>0$, consider an $\varepsilon/2$-optimal control $\alpha$ for (13): in particular, it gives $V(x)+\varepsilon/2 \ge E_x[V(X^{\alpha}_T) + \int_0^T l(X^{\alpha}_s)\,ds]$. We consider now the set-valued map $x\to R(x) := \{\varepsilon/2^2\text{-optimal control rules }\gamma\text{ for (13)}\}$. By the results recalled in Section 2 (see [22]), we can extract a measurable selection of this map. So the map $\omega \to X^{\alpha}_T(\omega) \to R(X^{\alpha}_T(\omega))$ also has a measurable selection, which we denote by $\beta$. We observe that, by the properties of control rules recalled in Section 2, $\beta$ is an admissible control rule. We get that
\[
\begin{aligned}
E_x V(X^{\alpha}_T) + \frac{\varepsilon}{2^2} &\ge E_x \sup_{t\in[T,2T]} E_{X^{\alpha}_T}\Bigl[V(X^{\beta}_t) + \int_T^t l(X^{\beta}_s)\,ds\Bigr]\\
&\ge \sup_{t\in[T,2T]} E_x\Bigl[V(X^{\beta}_t) + \int_T^t l(X^{\beta}_s)\,ds\Bigr].
\end{aligned}
\]

Moreover, the control rule obtained by concatenating this selected control with $\alpha$ is still an admissible control rule. Therefore we obtain
\[
\begin{aligned}
V(x) + \frac{\varepsilon}{2} + \frac{\varepsilon}{2^2} &\ge E_x\Bigl[V(X^{\alpha}_T) + \frac{\varepsilon}{2^2} + \int_0^T l(X^{\alpha}_s)\,ds\Bigr]\\
&\ge \sup_{t\in[T,2T]} E_x\Bigl[V(X^{\beta}_t) + \int_0^T l(X^{\alpha}_s)\,ds + \int_T^t l(X^{\beta}_s)\,ds\Bigr].
\end{aligned}
\]

Now we consider an $\varepsilon/2^3$-optimal control $\gamma$ for $V(X^{\beta}_{2T})$ and conclude as above that
\[
V(x) + \frac{\varepsilon}{2} + \frac{\varepsilon}{2^2} + \frac{\varepsilon}{2^3} \ \ge\ \sup_{t\in[2T,3T]} E_x\Bigl[V(X^{\gamma}_t) + \int_0^{2T} l(X^{\beta}_s)\,ds + \int_{2T}^t l(X^{\gamma}_s)\,ds\Bigr].
\]

We can then proceed recursively and conclude by induction that we can construct an admissible control rule $P$ such that
\[
V(x) + \varepsilon \ \ge\ \sup_{t\ge0}\, E_x\Bigl[V(X_t) + \int_0^t l(X_s)\,ds\Bigr].
\]
By statement (6) in Theorem 3, and recalling that $\varepsilon$ is arbitrary, we obtain
\[
V(x) \ \ge\ \inf_{\alpha}\,\sup_{t\ge0}\, E_x\Bigl[V(X_t) + \int_0^t l(X_s)\,ds\Bigr] \ \ge\ V(x), \tag{14}
\]
where the last inequality follows by taking $t=0$ in the supremum, since $E_x V(X_0) = V(x)$ and $l\ge0$. This is the desired formula.

We give here the proof of the technical Lemma 5.

Proof of Lemma 5. If the statement were not true, then for every sequence $x_m\to x$ with $L^{\lambda,k}(x_m)\to \bar L^{\lambda,k}(x)$ and $L^{\lambda,k}(x_m) > V_k(x_m)$, and for every $n>0$, we could find $t_n\le 1/n$ and $x_{m_n}$ with $|x_{m_n}-x|\le 1/n$ such that
\[
L^{\lambda,k}(x_{m_n}) \ >\ \inf_{\beta}\, E_{x_{m_n}}\Bigl[e^{-\lambda t_n}\bar L^{\lambda,k}(X^{\beta}_{m_n}(t_n)) + \int_0^{t_n} l(X^{\beta}_{m_n}(s))e^{-\lambda s}\,ds\Bigr]. \tag{15}
\]


We restrict to this subsequence $x_{m_n}$, which we denote by $x_n$ for simplicity. By the definition of $L^{\lambda,k}$, for every $\varepsilon_n>0$ and every control $\alpha$ there exists $T(\varepsilon_n,\alpha)$ such that
\[
L^{\lambda,k}(x_n) - \varepsilon_n \ <\ E_{x_n}\Bigl[e^{-\lambda T(\varepsilon_n,\alpha)}V_k(X^{\alpha}_n(T(\varepsilon_n,\alpha))) + \int_0^{T(\varepsilon_n,\alpha)} l(X^{\alpha}_n(s))e^{-\lambda s}\,ds\Bigr]. \tag{16}
\]

By inequality (15), we can choose a sequence $\varepsilon_n\to0$ and controls $\beta^n$ for which
\[
L^{\lambda,k}(x_n) - 2\varepsilon_n \ \ge\ E_{x_n}\Bigl[e^{-\lambda t_n}\bar L^{\lambda,k}(X^{\beta^n}_n(t_n)) + \int_0^{t_n} l(X^{\beta^n}_n(s))e^{-\lambda s}\,ds\Bigr]. \tag{17}
\]

Therefore, for every control $\alpha$, we obtain from inequalities (16) and (17)
\[
\begin{aligned}
E_{x_n}&\Bigl[e^{-\lambda t_n}\bar L^{\lambda,k}(X^{\beta^n}_n(t_n)) + \int_0^{t_n} l(X^{\beta^n}_n(s))e^{-\lambda s}\,ds\Bigr] + \varepsilon_n
\ \le\ L^{\lambda,k}(x_n) - \varepsilon_n\\
&<\ E_{x_n}\Bigl[e^{-\lambda T(\varepsilon_n,\alpha)}V_k(X^{\alpha}_n(T(\varepsilon_n,\alpha))) + \int_0^{T(\varepsilon_n,\alpha)} l(X^{\alpha}_n(s))e^{-\lambda s}\,ds\Bigr].
\end{aligned}
\]

We now claim that for every $n$ there exists an $\alpha$ such that $T(\varepsilon_n,\alpha)\le t_n$. Assume by contradiction that there exists $N$ such that $T(\varepsilon_N,\alpha) > t_N$ for every admissible $\alpha$; in particular, $T(\varepsilon_N,\alpha^N) > t_N$ for every control $\alpha^N$ which coincides with $\beta^N$ for $t\le t_N$. By the previous inequality we get
\[
\begin{aligned}
E_{x_N}&\bar L^{\lambda,k}(X^{\beta^N}_N(t_N)) + \varepsilon_N\\
&<\ E_{x_N}\Bigl[e^{-\lambda(T(\varepsilon_N,\alpha^N)-t_N)}V_k(X^{\alpha^N}_N(T(\varepsilon_N,\alpha^N))) + \int_{t_N}^{T(\varepsilon_N,\alpha^N)} l(X^{\alpha^N}_N(s))e^{-\lambda(s-t_N)}\,ds\Bigr]. \tag{18}
\end{aligned}
\]

We consider the set-valued map
\[
x \to R(x) := \Bigl\{\gamma \text{ control rule} \;\Bigl|\; L^{\lambda,k}(x) + \frac{\varepsilon_N}{2} \ge \sup_{t\ge0}\, E_x\Bigl[e^{-\lambda t}V_k(X^{\gamma}_t) + \int_0^t l(X^{\gamma}_s)e^{-\lambda s}\,ds\Bigr]\Bigr\},
\]
i.e., the set of $\varepsilon_N/2$-optimal control rules for $L^{\lambda,k}$ at $x$.

By the results recalled in Section 2 (see [22]), we can extract a measurable selection of this map. So the map $\omega \to X^{\beta^N}_N(t_N)(\omega) \to R(X^{\beta^N}_N(t_N)(\omega))$ also has a measurable selection, which we denote by $\gamma^N$. We observe that, by the properties of control rules recalled in Section 2, $\gamma^N$ is an admissible control rule, and the concatenation of $\gamma^N$ with $\beta^N$ is still an admissible control rule $P_N$, which is the measure associated with the couple $(X_N(\cdot),\mu^N)$. This gives
\[
E_{x_N} L^{\lambda,k}(X^{\beta^N}_N(t_N)) + \frac{\varepsilon_N}{2} \ \ge\ \sup_{t\ge t_N}\, E_{x_N}\Bigl[e^{-\lambda t}V_k(X_N(t)) + \int_{t_N}^t l(X_N(s))e^{-\lambda s}\,ds\Bigr]. \tag{19}
\]

Lyapunov Stabilizability of Controlled Diffusions 13

Recalling that $\bar L_{\lambda,k} \ge L_{\lambda,k}$ and concatenating inequalities (18) and (19), we obtain
\[
\begin{aligned}
\sup_{t \ge t_N} E_{x_N}\Big[ e^{-\lambda t} V_k(X_N(t)) + \int_{t_N}^{t} l(X_N(s))\, e^{-\lambda s}\,ds \Big] + \frac{\varepsilon_N}{2}
\le E_{x_N}\Big[ e^{-\lambda T(\varepsilon_N,\mu_N)}\, V_k\big(X_N(T(\varepsilon_N,\mu_N))\big) + \int_{t_N}^{T(\varepsilon_N,\mu_N)} l(X_N(s))\, e^{-\lambda s}\,ds \Big].
\end{aligned}
\]

This is not possible if, as we assumed, $T(\varepsilon_N,\mu_N) \ge t_N$: we get a contradiction. Therefore for every $n$ there exists an admissible control rule $P_n$ such that $T(\varepsilon_n,\mu_n) \le t_n$; choosing $\alpha = \mu_n$ in inequality (16) we get
\[
L_{\lambda,k}(x_n) - \varepsilon_n \le E_{x_n}\Big[ e^{-\lambda T(\varepsilon_n,\mu_n)}\, V_k\big(X_n(T(\varepsilon_n,\mu_n))\big) + \int_0^{T(\varepsilon_n,\mu_n)} l(X_n(s))\, e^{-\lambda s}\,ds \Big].
\]

For every $n$ we set $A_n = \{\omega \mid X_n(T(\varepsilon_n,\mu_n)) \in B(x_n, 1/\sqrt[4]{n})\}$ and $B_n = \Omega \setminus A_n$. Since for every $n$ the trajectory is a Markov process, and the drift and the diffusion of this control problem are equi-Lipschitz and equi-bounded on compact sets with respect to the control, it is possible to show (we refer to pp. 284--285 of [14] for the proof) that $P_n(B_n) \le K(1+|x_n|^2)^{3/2}(1/n)^{3/8}$, where $K$ depends on the constant $C$ in (4). Therefore we get
\[
\begin{aligned}
L_{\lambda,k}(x_n) - \frac{\varepsilon_n}{2}
&< \int_{A_n}\Big[ V_k\big(X_n(T(\varepsilon_n,\mu_n))\big) + \int_0^{T(\varepsilon_n,\mu_n)} l(X_n(s))\, e^{-\lambda s}\,ds \Big]\,dP_n \\
&\quad + \int_{B_n}\Big[ V_k\big(X_n(T(\varepsilon_n,\mu_n))\big) + \int_0^{T(\varepsilon_n,\mu_n)} l(X_n(s))\, e^{-\lambda s}\,ds \Big]\,dP_n \\
&\le \Big[ \sup_{B(x,2/\sqrt n)} V_k(y) + \sup_{B(x,2/\sqrt n)} l(y)\, t_n \Big] + K\big(1+|x_n|^2\big)^{3/2}\Big(\frac1n\Big)^{3/8}.
\end{aligned}
\]

From this, since $V_k$ is continuous and $t_n \le 1/n$, letting $n \to +\infty$ we deduce
\[
\bar L_{\lambda,k}(x) \le V_k(x),
\]
in contradiction with our assumption.

Remark. The previous result can be proved in more general situations: consider a bounded, nonnegative, LSC viscosity supersolution $V\colon \mathbb R^N \to \mathbb R$ of
\[
\max_{a \in A}\big\{ -f(x,a)\cdot DV(x) - \operatorname{trace}[a(x,a)D^2V(x)] \big\} + k(x)V(x) \ge l(x),
\]
where $k\colon \mathbb R^N \to \mathbb R$ is a Lipschitz continuous nonnegative function. The proof of Theorem 4 applies directly and we obtain the representation formula
\[
V(x) = \inf_{\alpha}\sup_{t \ge 0} E_x\Big[ V(X^{\alpha}_t)\, e^{-\int_0^t k(X^{\alpha}_s)\,ds} + \int_0^t l(X^{\alpha}_s)\, e^{-\int_0^s k(X^{\alpha}_u)\,du}\,ds \Big].
\]

We can also prove a localized version of Theorem 4.

Corollary 6. Consider an open set $O \subseteq \mathbb R^N$. For every $\delta > 0$ consider the set $O_\delta := \{x \in O \mid d(x,\partial O) > \delta\}$, denote by $\tau^{\alpha}_{\delta}$ the stopping time at which the sample path of the process $X^{\alpha}_t$ reaches the boundary $\partial O_\delta$, and denote by $\tau^{\alpha}_{\delta}(t)$ the minimum of $\tau^{\alpha}_{\delta}$ and $t$. Assume that $V\colon O \to \mathbb R$ is a bounded nonnegative function. If $V$ is an LSC viscosity supersolution of the Hamilton--Jacobi--Bellman equation (7) in $O$, then it can be represented, for every $\delta$ and every $x \in O_\delta$, as
\[
V(x) = \inf_{\alpha}\sup_{t \ge 0} E_x\Big[ V\big(X^{\alpha}_{\tau^{\alpha}_{\delta}(t)}\big) + \int_0^{\tau^{\alpha}_{\delta}(t)} l(X^{\alpha}_s)\,ds \Big].
\]

Proof. We fix $\delta > 0$ and a smooth cutoff function $0 \le \xi \le 1$ such that $\xi(x) = 0$ for $x \in \mathbb R^N \setminus O$ and $\xi(x) = 1$ for $x \in O_\delta$. We consider the controlled stochastic differential equation in $\mathbb R^N$:
\[
(\mathrm{CSDE})' \qquad
\begin{cases}
dX_t = f(X_t,\alpha_t)\,\xi^2(X_t)\,dt + \sigma(X_t,\alpha_t)\,\xi(X_t)\,dB_t, & t > 0,\\
X_0 = x.
\end{cases}
\]
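A cutoff function $\xi$ of the required type is easy to construct explicitly. The following Python sketch is our own illustration (the paper does not prescribe a particular profile): it builds a $C^\infty$ function that vanishes outside $O$ and equals $1$ on $O_\delta$, given a signed distance function for $O$.

```python
import math

def smooth_step(t):
    """C-infinity transition: 0 for t <= 0, 1 for t >= 1, strictly between otherwise."""
    def g(s):
        return math.exp(-1.0 / s) if s > 0 else 0.0
    return g(t) / (g(t) + g(1.0 - t))

def make_cutoff(signed_dist, delta):
    """Cutoff xi for an open set O: xi = 0 outside O and xi = 1 on
    O_delta = {signed_dist > delta}; signed_dist is positive inside O."""
    def xi(x):
        return smooth_step(signed_dist(x) / delta)
    return xi

# Illustrative choice: O = open unit ball in R^2, signed distance 1 - |x|.
xi = make_cutoff(lambda x: 1.0 - math.hypot(*x), delta=0.2)
```

Any smooth profile with these two plateaus works equally well; only the values $0$ outside $O$ and $1$ on $O_\delta$ matter for the proof.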

Observe that for $x \in O_\delta$ the solution $(X')^{\alpha}$ to $(\mathrm{CSDE})'$ coincides a.s. with the solution $X^{\alpha}$ to (CSDE) up to the time $\tau^{\alpha}_{\delta}$. We define the process $X^{\alpha}_{\tau^{\alpha}_{\delta}(t)}$ obtained by stopping the process $(X')^{\alpha}_t$ at the instant it reaches the boundary of $O_\delta$: it has an Ito stochastic differential and is still a strong Markov process with continuous trajectories (see, for example, Lemma 3.3.1 of [20] and the references therein).

We extend $V$ outside $O$ as a bounded nonnegative LSC function, which we continue to denote by $V$. It is then immediate to show that $V$ is a viscosity supersolution in $\mathbb R^N$ of the equation
\[
\max_{a \in A}\big\{ -f(x,a)\,\xi^2(x)\cdot DV(x) - \operatorname{trace}[a(x,a)\,\xi^2(x)\,D^2V(x)] \big\} - l(x)\,\xi^2(x) = 0. \tag{20}
\]

We can apply Theorem 4 to V . Indeed, it is sufficient to define

Lλ,k(x) =

infα supt≥0 Ex

[e−λτ

αδ(t)Vk(Xα

ταδ(t))

+ ∫ t0 l(Xα

ταδ(s))ξ

2(Xαταδ(s))e

−λταδ(t) ds

]in Oδ,

Vk(x) in RN\Oδ.

We can repeat the proof of Theorem 4 (all the results in [22] also hold for the stopped process $Y^{\alpha}$), and we get that $\bar L_{\lambda,k}$ is a viscosity supersolution of the obstacle problem (9) in $\mathbb R^N$. Repeating the same arguments of Theorem 4, we then get that $V$ satisfies the following representation formula for $x \in O_\delta$:
\[
V(x) = \inf_{\alpha}\sup_{t \ge 0} E_x\Big[ V\big(X^{\alpha}_{\tau^{\alpha}_{\delta}(t)}\big) + \int_0^{\tau^{\alpha}_{\delta}(t)} l(X^{\alpha}_s)\,ds \Big].
\]

Remark (Minimal Nonnegative Solution). These representation formulas for viscosity solutions are interesting in their own right, as we pointed out in the Introduction: indeed they apply to Hamilton--Jacobi--Bellman equations for which there is no comparison principle, and hence no uniqueness of solutions. We consider the following Hamilton--Jacobi--Bellman equation in $\mathbb R^N$:
\[
\max_{a \in A}\big\{ -f(x,a)\cdot DV(x) - \operatorname{trace}[a(x,a)D^2V(x)] \big\} = l(x)
\]
with $l \ge 0$: since the constant function $U \equiv 0$ is always a subsolution, it is interesting to characterize the minimal nonnegative supersolution.

From a control point of view, the natural candidate is the value function of the infinite-horizon control problem with running cost $l$,
\[
V_\infty(x) = \inf_{\alpha} E_x \int_0^{+\infty} l(X^{\alpha}_s)\,ds.
\]
If $V_\infty$ is well defined and bounded, then it is possible to show that it is LSC, by an argument based on the properties of the class of admissible relaxed controls. Moreover, by standard methods in the theory of viscosity solutions (see [17] and [12]), it is possible to show that $V_\infty$ is a viscosity supersolution of the previous Hamilton--Jacobi--Bellman equation. In this case an easy application of the previous theorems gives that every bounded, nonnegative viscosity supersolution $V$ of the Hamilton--Jacobi--Bellman equation in $\mathbb R^N$ satisfies
\[
V(x) \ge V_\infty(x);
\]
therefore $V_\infty$ is the minimal nonnegative viscosity supersolution of the equation.
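For a concrete, uncontrolled illustration of $V_\infty$ (a toy example of our own, not taken from the paper): in dimension $N=1$, take $f(x) = -x$, $\sigma(x) = \sigma x$ and $l(x) = x^2$. Ito's formula gives $\frac{d}{dt}E[X_t^2] = (\sigma^2-2)E[X_t^2]$, hence $V_\infty(x) = x^2/(2-\sigma^2)$ whenever $\sigma^2 < 2$. A crude Euler--Maruyama estimate reproduces this value:

```python
import numpy as np

def v_infinity_mc(x0, sigma, T=10.0, dt=0.01, n_paths=4000, seed=0):
    """Monte Carlo estimate of V_inf(x0) = E int_0^inf l(X_s) ds with l(x) = x^2,
    for dX = -X dt + sigma X dB (Euler-Maruyama, horizon truncated at T)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    cost = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        cost += x**2 * dt                      # accumulate the running cost
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        x += -x * dt + sigma * x * dB          # one Euler-Maruyama step
    return cost.mean()

sigma = 0.5
est = v_infinity_mc(1.0, sigma)
exact = 1.0 / (2.0 - sigma**2)   # = x0^2 / (2 - sigma^2) with x0 = 1
```

The truncation at $T$ is harmless here because $E[X_t^2]$ decays exponentially when $\sigma^2 < 2$.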

Remark (Representation Formula for Viscosity Subsolutions). The counterpart of Theorem 4 for viscosity subsolutions is straightforward from classical suboptimality principles: let $U\colon \mathbb R^N \to \mathbb R$ be an upper semicontinuous bounded viscosity subsolution of the Hamilton--Jacobi--Bellman equation
\[
\max_{a \in A}\big\{ -f(x,a)\cdot DU(x) - \operatorname{trace}[a(x,a)D^2U(x)] \big\} \le l(x);
\]
then the function $U$ can be represented as
\[
U(x) = \inf_{\alpha}\inf_{t \ge 0} E_x\Big[ U(X^{\alpha}_t) + \int_0^t l(X^{\alpha}_s)\,ds \Big].
\]

4. Stability in Probability and Lyapunov Functions

We begin this section with the notions of Lyapunov and asymptotic stability in probability. They were introduced by Has'minskii and Kushner (see [20] and [25]) for uncontrolled stochastic differential equations. We present their natural extension to the case of controlled diffusions.

Definition 7 (Stabilizability in Probability). The controlled system (CSDE) is (stochastic open loop) stabilizable in probability at the origin if for all $\varepsilon, k > 0$ there exists $\delta > 0$ such that for every $|x| \le \delta$ there exists a control $\alpha_. \in \mathcal A_x$ such that the corresponding trajectory $X_.$ verifies
\[
P_x\Big( \sup_{t \ge 0} |X_t| \ge k \Big) \le \varepsilon.
\]
This is equivalent to requiring that, for every positive $k$,
\[
\lim_{x \to 0}\, \inf_{\alpha}\, P_x\Big( \sup_{t \ge 0} |X^{\alpha}_t| \ge k \Big) = 0.
\]
The system is (stochastic open loop) Lagrange stabilizable in probability, or has the property of uniform boundedness of trajectories, if for each $\varepsilon > 0$ and $R > 0$ there is $S > 0$ such that, for any initial point $x$ with $|x| \le R$,
\[
\inf_{\alpha}\, P_x\Big( \sup_{t \ge 0} |X^{\alpha}_t| \ge S \Big) \le \varepsilon.
\]
This is equivalent to requiring that, for every $R > 0$,
\[
\lim_{S \to +\infty}\, \sup_{|x| \le R}\, \inf_{\alpha}\, P_x\Big( \sup_{t \ge 0} |X^{\alpha}_t| \ge S \Big) = 0.
\]

Remark. Stabilizability in probability implies that the origin is a controlled equilibrium of (CSDE), i.e.,
\[
\exists\, \alpha \in A\colon \quad f(0,\alpha) = 0, \quad \sigma(0,\alpha) = 0.
\]
In fact, for any $\varepsilon > 0$ and fixed $k > 0$, the definition gives an admissible control such that the corresponding trajectory starting at the origin satisfies $P(\sup_{t \ge 0} |X_t| \ge k) \le \varepsilon$, so
\[
E_0 \int_0^{+\infty} l(|X_t|)\, e^{-\lambda t}\,dt \le \frac{\varepsilon}{\lambda}
\]
for any $\lambda > 0$ and any real function $l$ such that $0 \le l(r) \le 1$ for every $r$ and $l(r) = 0$ for $r \le k$. Then $\inf_{\alpha_. \in \mathcal A_0} E_0 \int_0^{+\infty} l(|X_t|)\, e^{-\lambda t}\,dt = 0$. Theorem 3 implies that the infimum is attained: therefore for any $k > 0$ there is a minimizing control which produces a trajectory satisfying $|X_t| \le k$ a.s. for all $t \ge 0$. So $\inf_{\alpha_. \in \mathcal A_0} E_0 \int_0^{+\infty} |X_t|\, e^{-\lambda t}\,dt = 0$ for any $\lambda > 0$. Again Theorem 3 implies that the infimum is attained, and the minimizing control produces a trajectory satisfying $|X_t| = 0$ a.s. for all $t \ge 0$. The conclusion follows from standard properties of stochastic differential equations.


Regarding Lagrange stabilizability, we observe that, using standard properties of diffusions under the regularity assumptions (4), it is possible to prove (see [14] and [20]) that, for every fixed $T > 0$ and $R > 0$,
\[
\lim_{S \to +\infty}\, \sup_{|x| \le R}\, \inf_{\alpha}\, P_x\Big( \sup_{0 \le t \le T} |X^{\alpha}_t| \ge S \Big) = 0.
\]
Nevertheless, Lagrange stabilizability is a stronger condition, since it requires that
\[
\lim_{S \to +\infty}\, \sup_{|x| \le R}\, \inf_{\alpha}\, \sup_{T \ge 0}\, P_x\Big( \sup_{0 \le t \le T} |X^{\alpha}_t| \ge S \Big) = 0.
\]

A controlled diffusion is said to be asymptotically stabilizable (or controllable) in probability if the equilibrium point is not only stabilizable but also attracting for the system, locally around the equilibrium.

Definition 8 (Asymptotic Stabilizability in Probability). The controlled system is locally asymptotically stabilizable in probability (or asymptotically controllable in probability) at the origin if for all $\varepsilon, k > 0$ there exists $\delta > 0$ such that for every $|x| \le \delta$ there exists a control $\alpha_. \in \mathcal A_x$ such that the corresponding trajectory $X_.$ verifies
\[
P_x\Big( \sup_{t \ge 0} |X_t| \ge k \Big) \le \varepsilon
\qquad\text{and}\qquad
P_x\Big( \limsup_{t \to +\infty} |X_t| > 0 \Big) \le \varepsilon.
\]
This is equivalent to requiring that, for all $k > 0$,
\[
\lim_{x \to 0}\, \inf_{\alpha} \Big[ P_x\Big( \sup_{t \ge 0} |X^{\alpha}_t| \ge k \Big) + P_x\Big( \limsup_{t \to +\infty} |X^{\alpha}_t| > 0 \Big) \Big] = 0.
\]

There is a global version of the previous stability notion:

Definition 9 (Asymptotic Stabilizability in the Large). The controlled system is asymptotically stabilizable in the large at the origin if it is Lyapunov stabilizable in probability around the equilibrium and, for every $x \in \mathbb R^N$,
\[
\inf_{\alpha}\, P_x\Big( \limsup_{t \to +\infty} |X^{\alpha}_t| > 0 \Big) = 0.
\]
This means that for every $\varepsilon > 0$ and every initial datum $x$ we can choose an admissible control in $\mathcal A_x$ which drives the trajectory to the equilibrium with probability greater than $1-\varepsilon$.

Next we give the appropriate definition of control Lyapunov functions for the studyof the stochastic stabilities defined above.


Definition 10 (Lyapunov Function). Let $O \subseteq \mathbb R^N$ be a bounded open set containing the origin. A function $V\colon O \to \mathbb R$ is a local Lyapunov function for (CSDE) if it satisfies the following conditions:

(i) it is LSC and continuous at the origin;
(ii) it is positive definite, i.e., $V(0) = 0$ and $V(x) > 0$ for all $x \ne 0$;
(iii) it is bounded;
(iv) it is a viscosity supersolution of the equation
\[
\max_{\alpha \in A}\big\{ -DV(x)\cdot f(x,\alpha) - \operatorname{trace}[a(x,\alpha)D^2V(x)] \big\} = 0 \quad\text{in } O. \tag{21}
\]

We introduce the notion of a strict Lyapunov function in both the local and the global setting.

Definition 11 (Local Strict Lyapunov Function). Let $O \subseteq \mathbb R^N$ be a bounded open set containing the origin. A function $V\colon O \to \mathbb R$ is a local strict Lyapunov function for (CSDE) if it satisfies conditions (i)--(iii) of the previous definition and

(iv$'$) it is a viscosity supersolution of the equation
\[
\max_{\alpha \in A}\big\{ -DV(x)\cdot f(x,\alpha) - \operatorname{trace}[a(x,\alpha)D^2V(x)] \big\} = l(x) \quad\text{in } O, \tag{22}
\]
where $l\colon O \to \mathbb R$ is a positive definite, bounded and uniformly continuous function.

Definition 12 (Global Strict Lyapunov Function). Let $O \subseteq \mathbb R^N$ be an open set containing the origin. A function $V\colon O \to \mathbb R$ is a global strict Lyapunov function for (CSDE) if it satisfies the following conditions:

(i) it is LSC and continuous at the origin;
(ii) it is positive definite, i.e., $V(0) = 0$ and $V(x) > 0$ for all $x \ne 0$;
(iii) it is proper, i.e., $\lim_{x \to \partial O} V(x) = +\infty$ or, equivalently, its level sets $\{x \mid V(x) \le \mu\}$ are bounded;
(iv) it is a viscosity supersolution of the equation
\[
\max_{\alpha \in A}\big\{ -DV(x)\cdot f(x,\alpha) - \operatorname{trace}[a(x,\alpha)D^2V(x)] \big\} = l(x) \quad\text{in } O, \tag{23}
\]
where $l\colon O \to \mathbb R$ is a positive definite uniformly continuous function.
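A one-dimensional sanity check of Definition 12 may help (a toy example of our own; we write $a(x) = \tfrac12\sigma^2 x^2$, assuming the convention $a = \tfrac12\sigma\sigma^T$ of Section 2):

```latex
% System: dX_t = -X_t\,dt + \sigma X_t\,dB_t (no control), O = \mathbb{R},
% candidate V(x) = x^2, so DV(x) = 2x and D^2V(x) = 2. Then
\[
  -DV(x)\,f(x) - a(x)\,D^2V(x) \;=\; 2x^2 - \sigma^2 x^2 \;=\; (2-\sigma^2)\,x^2 .
\]
% If \sigma^2 < 2 this is \ge l(x) := (2-\sigma^2)\,x^2/(1+x^2), a positive
% definite, bounded and uniformly continuous function, so V is a viscosity
% supersolution of (23); V is also LSC, positive definite and proper, hence
% a global strict Lyapunov function in the sense of Definition 12.
```

We divide by $1+x^2$ in the choice of $l$ only because $x^2$ itself is not uniformly continuous on the whole line.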

5. Direct Lyapunov Theorems

In this section we develop a direct Lyapunov method for the study of the stabilizability in probability of controlled diffusions, in both the local and the global setting. For the uncontrolled case, the extension of the Lyapunov second method to stochastic systems is due independently to Has'minskii and Kushner (see the monographs [20] and [25], and the references therein for earlier related results).

The main tool in the proof of the Lyapunov theorems is the representation formula for viscosity solutions obtained in Section 3.

Theorem 13 (Stabilizability in Probability). Assume conditions (4) and (5) and the existence of a local Lyapunov function $V$ in the open set $O$. Then:

(i) the system is stabilizable in probability;
(ii) if in addition the Lyapunov function is global, then the system is also Lagrange stabilizable in probability.

Proof. We start by proving (i). We fix $k > 0$ such that $B_k \subset O$, fix $\varepsilon > 0$ and define $\eta = \varepsilon \min_{|y| \ge k} V(y)$. We denote by $\tau^{\alpha}_k(x)$ the first exit time of the trajectory $X^{\alpha}_t$ from the open ball $B_k$ centered at the origin with radius $k$. By the continuity of $V$ at the origin we can find $\theta > 0$ such that $V(x) \le \eta/2$ whenever $|x| \le \theta$. The superoptimality principle in Corollary 6 gives, for $|x| \le \theta \wedge k$,
\[
\eta/2 \ge V(x) = \inf_{\alpha}\sup_{t \ge 0} E_x V\big(X_{t \wedge \tau^{\alpha}_k(x)}\big).
\]
We now choose an $\eta/2$-optimal control $\alpha \in \mathcal A_x$ for the previous control problem, denote by $X_t$ the corresponding trajectory, stopped at the exit time from $O$, and get, for every $t \ge 0$,
\[
\eta \ge E_x V\big(X_{t \wedge \tau^{\alpha}_k}\big) \ge \int_{\{\sup_{0 \le s \le t} |X_s| \ge k\}} V\big(X_{\tau^{\alpha}_k}\big) \ge P\Big( \sup_{0 \le s \le t} |X_s| \ge k \Big)\, \min_{|y| \ge k} V(y).
\]
Letting $t \to +\infty$, we obtain the following bound on the probability that the trajectory $X_t$ leaves the ball $B_k$:
\[
P\Big( \sup_{t \ge 0} |X_t| \ge k \Big) \le \frac{\eta}{\min_{|y| \ge k} V(y)} = \varepsilon.
\]

This proves the stabilizability in probability.

We now pass to (ii). Repeating the argument above we get, for every $k > 0$,
\[
\inf_{\alpha}\, P\Big( \sup_{t \ge 0} |X^{\alpha}_t| \ge k \Big) \le \frac{V(x)}{\min_{|y| \ge k} V(y)}.
\]
This implies the Lagrange stabilizability: indeed, given $R > 0$ and $\varepsilon > 0$, we choose $k$ such that
\[
\frac{\max_{|y| \le R} V(y)}{\min_{|y| \ge k} V(y)} \le \varepsilon.
\]
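The exit-probability bound in this proof can be observed numerically. A sketch, with an illustrative system of our own choosing: for the driftless scalar equation $dX = \sigma X\,dB$, the function $V(x) = |x|$ is a (non-strict) Lyapunov function, $\min_{|y|\ge k} V = k$, and the bound reads $P(\sup_t |X_t| \ge k) \le |x|/k$; a truncated Euler--Maruyama simulation stays below it.

```python
import numpy as np

def exit_prob_mc(x0, k, sigma=1.0, T=5.0, dt=0.005, n_paths=4000, seed=1):
    """Empirical P(sup_{t<=T} |X_t| >= k) for dX = sigma X dB, X_0 = x0,
    via Euler-Maruyama (the sup is truncated at the horizon T)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(int(T / dt)):
        x += sigma * x * rng.normal(0.0, np.sqrt(dt), n_paths)
        hit |= np.abs(x) >= k          # record paths that have left B_k
    return hit.mean()

x0, k = 0.2, 1.0
emp = exit_prob_mc(x0, k)
bound = x0 / k    # V(x0) / min_{|y|>=k} V(y) with V(x) = |x|
```

For this particular system the bound is in fact sharp over an infinite horizon, by optional stopping for the nonnegative martingale $X_t$; truncation and discretization only lower the empirical frequency.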

In case the system admits a strict Lyapunov function, we prove that there exists a control which not only stabilizes the diffusion in probability but also drives it asymptotically to the equilibrium. We obtain the result using standard martingale inequalities; in the uncontrolled case, a similar proof of asymptotic stability was given in [13] (see also [30]).


Theorem 14 (Asymptotic Stabilizability). Assume conditions (4) and (5) and the existence of a local strict Lyapunov function in an open set $O$. Then:

(i) the system (CSDE) is locally asymptotically stabilizable in probability;
(ii) if the strict Lyapunov function is global, then the system (CSDE) is asymptotically stabilizable in the large.

Proof. We start by proving (i). For every $k > 0$ such that $B_k \subset O$ we get, by Corollary 6, that the function $V$ satisfies, for $x \in B_k$, the following superoptimality principle:
\[
V(x) = \inf_{\alpha}\sup_{t \ge 0} E_x\Big[ V\big(X^{\alpha}_{\tau^{\alpha}_k(t)}\big) + \int_0^{\tau^{\alpha}_k(t)} l(X^{\alpha}_s)\,ds \Big], \tag{24}
\]
where the trajectories are stopped at the exit time from $B_k$. By Theorem 3 there exists an optimal control $\alpha \in \mathcal A_x$ for this value problem. We denote by $X_.$ the corresponding trajectory and by $\tau$ the exit time from the open ball $B_k$. Repeating the proof of Theorem 13, we get the stabilizability in probability:
\[
P_x(\tau < +\infty) = P_x\Big( \sup_{t \ge 0} |X_t| \ge k \Big) \le \frac{V(x)}{\min_{|y| \ge k} V(y)}.
\]

We set $B(x) = \{\omega \mid |X_t(\omega)| \le k \ \forall t \ge 0\}$. By the previous estimate, $P_x(B(x)) = P_x(\tau = +\infty) \ge 1 - V(x)/(\min_{|y| \ge k} V(y))$.

We claim that $l(X_t(\omega)) \to 0$ as $t \to +\infty$ for almost all $\omega \in B(x)$. From this, using the positive definiteness of the function $l$, we can deduce that
\[
P_x\Big( \limsup_{t \to +\infty} |X_t| > 0 \Big) \le \frac{V(x)}{\min_{|y| \ge k} V(y)},
\]
which gives, by the continuity of $V$ at the origin, the asymptotic stabilizability in probability.

We assume by contradiction that the claim is not true: then there exist $\varepsilon > 0$, a subset $\Omega_\varepsilon \subseteq B(x)$ with $P(\Omega_\varepsilon) > 0$ and, for every $\omega \in \Omega_\varepsilon$, a sequence $t_n(\omega) \to +\infty$ such that $l(X_{t_n}(\omega)) > \varepsilon$. We define
\[
F(k) := \max_{|x| \le k,\ \alpha \in A} |f(x,\alpha)|, \qquad \Sigma(k) := \max_{|x| \le k,\ \alpha \in A} \|\sigma(x,\alpha)\|.
\]

We denote by $\tau(s)$ the minimum of $\tau$ and $s$ and compute, for $t \ge 0$ fixed,
\[
\begin{aligned}
E\Big\{ \sup_{t \le s \le t+h} |X_{\tau(s)} - X_{\tau(t)}|^2 \Big\}
&= E \sup_{t \le s \le t+h} \Big| \int_{\tau(t)}^{\tau(s)} f(X_u,\alpha_u)\,du + \int_{\tau(t)}^{\tau(s)} \sigma(X_u,\alpha_u)\,dB_u \Big|^2 \\
&\le 2\,E \sup_{t \le s \le t+h} \Big| \int_{\tau(t)}^{\tau(s)} f(X_u,\alpha_u)\,du \Big|^2 + 2\,E \sup_{t \le s \le t+h} \Big| \int_{\tau(t)}^{\tau(s)} \sigma(X_u,\alpha_u)\,dB_u \Big|^2 \\
&\le 2F^2(k)h^2 + 2\,E \sup_{t \le s \le t+h} \Big| \int_{\tau(t)}^{\tau(s)} \sigma(X_u,\alpha_u)\,dB_u \Big|^2.
\end{aligned}
\]

By Theorem 3.4 in [14] (the process $|\int_t^s \sigma(X_u,\alpha_u)\,dB_u|$ is a positive semimartingale) we get
\[
\begin{aligned}
E\Big\{ \sup_{t \le s \le t+h} |X_{\tau(s)} - X_{\tau(t)}|^2 \Big\}
&\le 2F^2(k)h^2 + 8 \sup_{t \le s \le t+h} E \Big| \int_{\tau(t)}^{\tau(s)} \sigma(X_u,\alpha_u)\,dB_u \Big|^2 \\
&\le 2F^2(k)h^2 + 8 \sup_{t \le s \le t+h} E \Big\{ \int_{\tau(t)}^{\tau(s)} |\sigma(X_u,\alpha_u)|^2\,du \Big\}
\le 2F^2(k)h^2 + 8\Sigma^2(k)h.
\end{aligned}
\]
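The factor 8 in this estimate is twice the constant 4 of Doob's $L^2$ maximal inequality, $E \sup_{s \le t} |M_s|^2 \le 4E|M_t|^2$, applied to the martingale part. For Brownian motion itself the inequality is easy to check by simulation (a sketch of our own):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 5000, 400, 0.0025      # horizon t = 1.0
increments = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
paths = np.cumsum(increments, axis=1)          # B_s sampled on a time grid

lhs = np.mean(np.max(paths**2, axis=1))        # estimate of E sup_{s<=1} B_s^2
rhs = 4.0 * np.mean(paths[:, -1]**2)           # Doob bound: 4 E B_1^2
```

Pathwise, the running maximum of $B_s^2$ dominates the terminal value, so the estimates are ordered as $E B_1^2 \le E\sup_{s\le1}B_s^2 \le 4EB_1^2$.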

Then the Chebyshev inequality gives, for every $t \ge 0$ fixed,
\[
P\Big( \sup_{t \le s \le t+h} |X_{\tau(s)} - X_{\tau(t)}| > r \Big) \le \frac{2F^2(k)h^2 + 8\Sigma^2(k)h}{r^2}. \tag{25}
\]

Since $l$ is continuous, we can fix $\delta$ such that $|l(x) - l(y)| \le \varepsilon/2$ whenever $|x - y| \le \delta$ and $|x|, |y| \le k$; we compute
\[
P\Big( \sup_{t \le s \le t+h} |l(X_{\tau(s)}) - l(X_{\tau(t)})| \le \frac{\varepsilon}{2} \Big)
\ge P\Big( \sup_{t \le s \le t+h} |X_{\tau(s)} - X_{\tau(t)}| \le \delta \Big)
\ge 1 - \frac{2F^2(k)h^2 + 8\Sigma^2(k)h}{\delta^2}.
\]
We choose $h$ such that $0 < (2F^2(k)h^2 + 8\Sigma^2(k)h)/\delta^2 \le P_x(\Omega_\varepsilon) - r$ for some $r > 0$, so that, for every $t \ge 0$,
\[
P\Big( \sup_{t \le s \le t+h} |l(X_{\tau(s)}) - l(X_{\tau(t)})| \le \frac{\varepsilon}{2} \Big) \ge 1 + r - P_x(\Omega_\varepsilon). \tag{26}
\]

From (24), letting $t \to +\infty$, we get
\[
\begin{aligned}
V(x) &\ge \int_{B(x)} \int_0^{+\infty} l(X_s)\,ds \ge \int_{\Omega_\varepsilon} \int_0^{+\infty} l(X_s)\,ds \ge \int_{\Omega_\varepsilon} \sum_n \int_{t_n}^{t_n+h} l(X_s)\,ds \\
&\ge \int_{\Omega_\varepsilon} \sum_n h \inf_{[t_n(\omega),\, t_n(\omega)+h]} l(X_t(\omega)) \ge h \sum_n \int_{\Omega_\varepsilon} \inf_{[t_n(\omega),\, t_n(\omega)+h]} l(X_t(\omega)) \\
&\ge \frac{h\varepsilon}{2} \sum_n P\Big[ \Big( \sup_{t_n \le s \le t_n+h} |l(X_s) - l(X_{t_n})| \le \frac{\varepsilon}{2} \Big) \cap \Omega_\varepsilon \Big] \ge h \sum_n \frac{\varepsilon}{2}\, r = +\infty,
\end{aligned}
\]

where the last inequalities are obtained using (26) and the strong Markov property of the stopped process $X_{\tau_k(\cdot)}$. This gives a contradiction: hence, for every $\varepsilon > 0$, $P(\Omega_\varepsilon) = 0$. We have proved that $l(X_t) \to 0$ as $t \to +\infty$ for almost all $\omega \in B(x)$; now the positive definiteness of $l$ implies that
\[
P_x\Big\{ \lim_{t \to +\infty} |X_t| = 0 \Big\} = P_x(B(x)) \ge 1 - \frac{V(x)}{\min_{|y| \ge k} V(y)}.
\]

We now prove statement (ii). If $O$ coincides with the whole space then, arguing as above, we get that for every $k > 0$ and $x \in B_k$ there exists a control $\alpha^k$ such that the corresponding trajectory $X^k$ verifies
\[
P_x\Big( \limsup_{t \to +\infty} l(X^k_t) > 0 \Big) \le \frac{V(x)}{\min_{|y| \ge k} V(y)}.
\]
Using the properness of the function $V$ and letting $k \to +\infty$, we get that, for every $x \in O$,
\[
\inf_{\alpha}\, P_x\Big( \limsup_{t \to +\infty} l(X^{\alpha}_t) > 0 \Big) = 0, \tag{27}
\]
which gives, by the positive definiteness of the function $l$, the asymptotic stabilizability in the large.

Remark (Uniform Asymptotic Stabilizability in Probability). The existence of a Lyapunov function implies a stronger asymptotic stability of the system, which we call uniform asymptotic stabilizability. Moreover, we will show in a forthcoming paper that uniform asymptotic stabilizability can be completely characterized in terms of strict Lyapunov functions.

The system (CSDE) is uniformly asymptotically stabilizable in probability in $O$ if for every $x \in O$ there exists $\alpha \in \mathcal A_x$ such that, for every $k > 0$,
\[
\lim_{x \to 0} P\Big( \sup_{t \ge 0} |X_t| \ge k \Big) = 0, \qquad \sup_{x \in O} T^{\alpha}_x(O \setminus B_k) < +\infty,
\]
where $T^{\alpha}_x(O \setminus B_k)$ is the expected time spent by the trajectory $X$ in the set $O \setminus B_k$.


The fact that the existence of a Lyapunov function implies uniform asymptotic stabilizability follows very easily from the representation formula for the function $V$ and the positive definiteness of the function $l$:
\[
V(x) \ge E_x \int_0^{\tau} l(X_s)\,ds \ge T^{\alpha}_x(O \setminus B_r) \inf_{y \in O \setminus B_r} l(y),
\]
which implies
\[
\sup_{x \in O} T^{\alpha}_x(O \setminus B_r) \le \frac{\sup_{x \in O} V(x)}{\inf_{y \in O \setminus B_r} l(y)} = \frac{\|V\|_\infty}{\inf_{y \in O \setminus B_r} l(y)}.
\]
The proof that uniform asymptotic stability implies asymptotic stability (in particular, that for every initial datum there exists a control asymptotically driving the trajectory to the origin almost surely) is an argument based on continuity properties of the trajectories of (CSDE) of the type (25) proved in Theorem 14.

6. Attractors

Next we extend the results of Section 4 to study the stabilizability of general closed sets $M \subseteq \mathbb R^N$. We denote by $d(x,M)$ the distance between a point $x \in \mathbb R^N$ and the set $M$.

We recall that a closed set $M$ is viable with respect to a stochastic controlled dynamical system if for every $x \in M$ there exists an admissible control such that the corresponding trajectory remains almost surely in $M$.

Definition 15 (Stabilizability in Probability at $M$). A closed set $M$ is stabilizable in probability for (CSDE) if for all $\varepsilon, k > 0$ there exists $\delta > 0$ such that, for every $x$ at distance less than $\delta$ from $M$, there exists an admissible control $\alpha$ such that the corresponding trajectory $X_.$ verifies
\[
P_x\Big( \sup_{t \ge 0} d(X_t, M) \ge k \Big) \le \varepsilon.
\]

Remark. We observe that if $M$ is stabilizable in probability according to the previous definition, then in particular it is viable. In fact, for every fixed $\varepsilon > 0$, the definition gives, for $x \in M$, $\inf_{\alpha} E_x \int_0^{+\infty} e^{-\lambda t} k_\varepsilon(X_t)\,dt = 0$ for any $\lambda > 0$ and any smooth function $k_\varepsilon$ which is nonnegative, bounded and null on the points at distance less than $\varepsilon$ from $M$. By Theorem 3 the infimum is attained; therefore for every $\varepsilon > 0$ there is a control $\alpha \in \mathcal A_x$ whose corresponding trajectory stays almost surely at distance less than $\varepsilon$ from $M$. In particular, for every $\lambda > 0$, $\inf_{\alpha} E_x \int_0^{+\infty} e^{-\lambda t}\, d(X_t, M)\,dt = 0$. Therefore, again by Theorem 3, there exists for every $x \in M$ a minimizing control whose corresponding trajectory stays in $M$ almost surely for all $t \ge 0$.

A geometric characterization of the viability of closed sets with respect to a controlled stochastic differential equation has been given in [5] (see also the references therein). According to this characterization, we note that the stabilizability in probability of the set $M$ implies that the diffusion has to degenerate on its boundary: for every $x \in \partial M$ there exists $\alpha \in A$ such that $\sigma(x,\alpha)\cdot p = 0$ for every generalized normal vector $p$ to $M$ at $x$.

We introduce the notion of controlled attractiveness: when the system is uncontrolled, it coincides with the standard notion of pathwise forward attractiveness (see [20]).

Definition 16 (Controlled Attractor). The set $M$ is a controlled attractor for the system (CSDE) in the open set $O \subseteq \mathbb R^N$ if, for every initial datum $x \in O$,
\[
\inf_{\alpha}\, P_x\Big( \limsup_{t \to +\infty} d(X^{\alpha}_t, M) > 0 \Big) = 0.
\]
This means that for every $\varepsilon > 0$ there exists $\alpha \in \mathcal A_x$ such that the corresponding trajectory asymptotically approaches the set $M$ with probability at least $1-\varepsilon$.

The set $O$ is called the domain of attraction of $M$; if it coincides with $\mathbb R^N$, the set $M$ is a global attractor.

Remark. We consider a function $V\colon \mathbb R^N \to \mathbb R$ which satisfies the conditions of Definition 12 of a global strict Lyapunov function, with the only difference that the function $l$ is assumed to be merely nonnegative. The proof of Theorem 14 can be repeated in this case: we obtain that, for every $x \in \mathbb R^N$,
\[
\inf_{\alpha}\, P_x\Big( \limsup_{t \to +\infty} l(X^{\alpha}_t) > 0 \Big) = 0. \tag{28}
\]
We introduce the set $L := \{y \mid l(y) = 0\}$. From (28) we get that, for every $x \in \mathbb R^N$,
\[
\inf_{\alpha}\, P_x\Big( \limsup_{t \to +\infty} d(X^{\alpha}_t, L) > 0 \Big) = 0,
\]
which means that $L$ is a controlled global attractor for the system. Results of this kind for uncontrolled diffusion processes can be found in [30] and [13]. The earlier paper of Kushner [27] also studies a stochastic version of the La Salle invariance principle, namely that the omega limit set of the process is an invariant subset of $L$ in a suitable sense.

We can generalize the notion of a control Lyapunov function in order to study the attractiveness and the stabilizability of a set $M$.

Definition 17 (Control $M$-Lyapunov Function). Let $M \subseteq \mathbb R^N$ be a closed set and let $O \subseteq \mathbb R^N$ be an open set containing $M$. A function $V\colon O \to [0,+\infty)$ is a control $M$-Lyapunov function for (CSDE) if it satisfies the following conditions:

(i) it is LSC and continuous at every $x \in \partial M$;
(ii) it is $M$-positive definite, i.e., $V(x) > 0$ for $x \notin M$ and $V(x) = 0$ for $x \in M$;
(iii) it is $M$-proper, i.e., its level sets $\{x \mid V(x) \le \mu\}$ are bounded;
(iv) it is a viscosity supersolution of the equation
\[
\max_{\alpha \in A}\big\{ -DV(x)\cdot f(x,\alpha) - \operatorname{trace}[a(x,\alpha)D^2V(x)] \big\} \ge l(x), \qquad x \in O.
\]

If $l(x) \ge 0$ then $V$ is a control $M$-Lyapunov function, and if $l$ is an $M$-positive definite, Lipschitz continuous and bounded function then $V$ is a strict control $M$-Lyapunov function.

We can therefore prove, for a set $M$, results very similar to those for an equilibrium point.

Theorem 18. If the system (CSDE) admits a control $M$-Lyapunov function $V$, then the set $M$ is stabilizable in probability; if moreover the function $V$ is a strict control $M$-Lyapunov function, then the set $M$ is a controlled attractor for the system, with domain of attraction equal to $O$.

Proof. One can repeat the proofs given in Theorems 13 and 14, since the function $V$ satisfies a superoptimality principle and the level sets of $V$ form a local basis of neighborhoods of $M$.

7. Examples

In this section we present some very simple examples illustrating the theory.

The first example concerns stochastic perturbations of stabilizable systems. We apply the Lyapunov theorems to show that an asymptotically controllable deterministic dynamical system remains stabilizable, or asymptotically stabilizable, in probability if we perturb it with a white noise of small enough intensity. The idea of the proof is that if the stochastic perturbation is small enough, then a Lyapunov function for the deterministic system remains a Lyapunov function for the stochastic one.

Example 1. We consider a deterministic controlled system in $\mathbb R^N$:
\[
\dot X_t = f(X_t,\alpha_t), \tag{29}
\]
where $f(x,a)$ is Lipschitz continuous and locally bounded in $x$, uniformly with respect to $a$, and the control $\alpha$ is a measurable function taking values in a compact space $A$. We assume that the system is globally asymptotically (open loop) stabilizable at the origin, i.e., asymptotically controllable in the terminology of deterministic systems [35], [36]. By the converse Lyapunov theorem [34], [36], there exists a continuous control Lyapunov function for the system, i.e., for some positive definite continuous function $L$ there exists a proper, positive definite function $V$ satisfying, in $\mathbb R^N$,
\[
\max_{\alpha \in A}\{ -f(x,\alpha)\cdot DV \} \ge L(x) \tag{30}
\]
in the viscosity sense. Moreover, we can choose the function $V$ to be semiconcave away from the origin, as proved by Rifford in [31]. This means that for every $\delta > 0$ there exists a semiconcavity constant $C_\delta > 0$ such that the function
\[
V(x) - \frac{C_\delta}{2}|x|^2
\]
is concave in $\mathbb R^N \setminus B_\delta$. The semiconcavity constant $C_\delta$ is an upper bound on the second derivatives of the function (to be intended in the sense of distributions). In particular, by the definition of semiconcavity, for every $|x| > \delta$ and every $(p,X) \in J^{2,-}V(x) := \{(q,Y) \in \mathbb R^N \times S(N) : V(y) \ge V(x) + q\cdot(y-x) + \tfrac12 (y-x)\cdot Y(y-x) + o(|y-x|^2) \text{ as } y \to x\}$ (see [12]), we have
\[
C_\delta I_N - X \ge 0. \tag{31}
\]

We study under which conditions the system remains asymptotically or Lyapunov stabilizable if we perturb it with a white noise. We consider the perturbed system
\[
dX_t = f(X_t,\alpha_t)\,dt + \sigma(X_t,\alpha_t)\,dB_t,
\]
where $(B_t)_t$ is an $M$-dimensional Brownian motion and the function $\sigma(x,a)$ is Lipschitz continuous in $x$, uniformly with respect to $a$, and takes values in the space of $N \times M$ real matrices.

By the semiconcavity inequality (31) and by (30), we get, for every $|x| > \delta$ and every $(p,X) \in J^{2,-}V(x)$,
\[
\begin{aligned}
\max_{\alpha \in A}\{ -f(x,\alpha)\cdot p - \operatorname{trace} a(x,\alpha)X \}
&\ge \max_{\alpha \in A}\{ -f(x,\alpha)\cdot p \} - \max_{\alpha \in A}\{ \operatorname{trace} a(x,\alpha)X \} \\
&\ge L(x) - C_\delta \max_{\alpha \in A}\{ \operatorname{trace} a(x,\alpha) \}.
\end{aligned}
\]
Therefore, if the diffusion $\sigma$ satisfies the small-intensity condition
\[
\operatorname{trace} a(x,\alpha) \le \frac{L(x)}{C_\delta}, \qquad \forall \alpha \in A,\ \forall |x| > \delta,
\]
we can conclude that the function $V$ is a control Lyapunov function for the stochastic system, and then, according to Theorem 13, the system is both Lyapunov and Lagrange stabilizable in probability.

If, moreover, for every $\delta > 0$,
\[
\operatorname{trace} a(x,\alpha) < \frac{L(x)}{C_\delta}, \qquad \forall \alpha \in A,\ \forall |x| > \delta,
\]
it is possible to construct a positive definite, Lipschitz continuous function $l$ such that $V$ is a viscosity supersolution of
\[
\max_{a \in A}\big\{ -f(x,a)\cdot DV(x) - \operatorname{trace} a(x,a)D^2V(x) \big\} \ge l(x),
\]
and then, by Theorem 14, the system is asymptotically stabilizable in the large at the equilibrium.

A similar result can be obtained in the case of locally asymptotically controllable systems.

In the next example we give conditions for a radial function to be a Lyapunov function for stability in probability.

Example 2. In this example we consider, as candidate Lyapunov function for the general controlled system (CSDE), the function $V(x) = |x|^\gamma$ for some $\gamma > 0$, and study under which conditions the system is stabilizable.

We compute
\[
DV(x) = \gamma |x|^{\gamma-2} x, \qquad D^2V(x) = \gamma |x|^{\gamma-2} I + \gamma(\gamma-2)|x|^{\gamma-4} (x\cdot x^T).
\]
Therefore
\[
\begin{aligned}
\max_{a \in A}\big\{ -f(x,a)\cdot DV(x) - \operatorname{trace} a(x,a)D^2V(x) \big\}
&= \gamma |x|^{\gamma-2} \max_{a \in A}\big\{ -f(x,a)\cdot x - \operatorname{trace} a(x,a) - (\gamma-2)|x|^{-2} \operatorname{trace}[a(x,a)(x\cdot x^T)] \big\} \\
&= \gamma |x|^{\gamma-2} \max_{a \in A}\big\{ -f(x,a)\cdot x - \operatorname{trace} a(x,a) - (\gamma-2)|x|^{-2} |\sigma(x,a)^T\cdot x|^2 \big\}.
\end{aligned}
\]

If $\gamma \le 2$, this shows that $V$ is a Lyapunov function for the system provided that for every $x$ there exists $\alpha \in A$ such that $f(x,\alpha)\cdot x + \operatorname{trace} a(x,\alpha) \le 0$. Since $\operatorname{trace} a(x,\alpha) \ge 0$ for every $\alpha$, the radial component of the drift $f$ must be everywhere nonpositive for some $\alpha \in A$; in particular, it must be negative, to compensate for the destabilizing role of the diffusion, whenever $\operatorname{trace} a(x,\alpha)$ is nonnull.
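The gradient and Hessian formulas used above are easy to confirm by finite differences (a quick numerical sketch of our own):

```python
import numpy as np

gamma = 1.5

def V(x):
    return np.linalg.norm(x) ** gamma

def DV(x):
    r = np.linalg.norm(x)
    return gamma * r ** (gamma - 2) * x

def D2V(x):
    r = np.linalg.norm(x)
    return (gamma * r ** (gamma - 2) * np.eye(len(x))
            + gamma * (gamma - 2) * r ** (gamma - 4) * np.outer(x, x))

x = np.array([0.6, -0.8, 1.2])
h = 1e-5
# Central finite differences for the gradient and the Hessian of V at x.
grad_fd = np.array([(V(x + h*e) - V(x - h*e)) / (2*h) for e in np.eye(3)])
hess_fd = np.array([[(V(x + h*ei + h*ej) - V(x + h*ei - h*ej)
                      - V(x - h*ei + h*ej) + V(x - h*ei - h*ej)) / (4*h*h)
                     for ej in np.eye(3)] for ei in np.eye(3)])
```

The check is of course only valid away from the origin, where $|x|^\gamma$ is smooth.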

We end with an example of an uncontrolled system that does not admit a smooth Lyapunov function but has an LSC Lyapunov function. Our example is a variant of a deterministic one by Krasovskii; see [2].

Example 3. Consider the stochastic system in $\mathbb R^3$:
\[
(\mathrm S) \qquad
\begin{cases}
dX_t = (1 + f(Z_t))\, Y_t\,dt,\\
dY_t = \big[ -(1 + f(Z_t))\, X_t + Y_t (X_t^2 + Y_t^2)^3 \sin^2\!\big(\pi/(X_t^2 + Y_t^2)\big) \big]\,dt,\\
dZ_t = b Z_t\,dt + \sigma Z_t\,dB_t,
\end{cases}
\]

where $B_t$ is a one-dimensional Brownian motion, $b$ and $\sigma$ are nonzero constants such that $\sigma^2 - 2b > 0$, and $f$ satisfies $f(0) = 0$ and the hypotheses for the existence and uniqueness of solutions to (S). The system is stable in probability around the equilibrium point $(0,0,0)$. We observe that if we choose as initial data $(x_0, y_0, 0)$, the trajectory is a.s. $(X_t, Y_t, 0)$ for $t \ge 0$, where $(X_t, Y_t)$ satisfies
\[
(\mathrm{RS}) \qquad
\begin{cases}
\dot X_t = Y_t,\\
\dot Y_t = -X_t + Y_t (X_t^2 + Y_t^2)^3 \sin^2\!\big(\pi/(X_t^2 + Y_t^2)\big).
\end{cases}
\]

Then if $V(\cdot,\cdot,\cdot)$ is a Lyapunov function for the system (S), $V(\cdot,\cdot,0)$ has to be a Lyapunov function for (RS). The system (RS) does not admit a continuous Lyapunov function: the circles $\{x^2 + y^2 = 1/n\}$ are limit cycles, and between two circles the trajectories go from the internal towards the external circle (see [2]). Every Lyapunov function for (RS) is then LSC and coincides with
\[
W_c(x,y) := c_n \qquad\text{for}\quad \frac1n < x^2 + y^2 \le \frac1{n-1}
\]
for some sequence $c$ of positive numbers $c_n$ decreasing to $0$ as $n \to +\infty$.

We now construct a Lyapunov function for the system (S). We choose $k$ such that $0 < k < 1 - 2b/\sigma^2$ and fix a sequence $c$ as above: it is easy to verify that the function $V(x,y,z) = W_c(x,y) + |z|^k$ is a local Lyapunov function for the system (S) on $\{(x,y,z) : x^2 + y^2 < 1,\ z \in (-1,1)\}$ according to Definition 10.
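The staircase function $W_c$ can be written down explicitly; a small sketch (the particular choice $c_n = 2^{-n}$ is ours, any positive sequence decreasing to $0$ would do):

```python
import math

def W_c(x, y, c=lambda n: 2.0 ** (-n)):
    """Lower semicontinuous Lyapunov candidate for (RS): constant value c_n
    on the annulus 1/n < x^2 + y^2 <= 1/(n-1), decreasing to 0 at the origin."""
    s = x * x + y * y
    if s == 0.0:
        return 0.0
    n = math.floor(1.0 / s) + 1   # the unique n with 1/n < s <= 1/(n-1)
    return c(n)
```

The jumps of $W_c$ sit exactly on the limit cycles $\{x^2+y^2 = 1/n\}$, which is why a continuous Lyapunov function cannot exist while this LSC one does.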

References

1. J.P. Aubin, G. Da Prato: Stochastic Lyapunov method, NoDEA 2 (1995), 511–525.

2. A. Bacciotti, L. Rosier: Liapunov Functions and Stability in Control Theory, Lecture Notes in Control and Information Sciences 267, Springer-Verlag, New York, 2001.

3. M. Bardi, A. Cesaroni: Viscosity Lyapunov functions for almost sure stability of degenerate diffusions, in Elliptic and Parabolic Problems, Rolduc and Gaeta, 2001, J. Bemelmans et al., eds., pp. 322–331, World Scientific, Singapore, 2002.

4. M. Bardi, A. Cesaroni: Almost sure stabilizability of controlled degenerate diffusions, Preprint no. 19, Dip. di Mat. Univ. di Padova, to appear in SIAM J. Control Optim.

5. M. Bardi, R. Jensen: A geometric characterization of viable sets for controlled degenerate diffusions, Set-Valued Anal. 10(2–3) (2002), 129–141.

6. G. Barles, J. Burdeau: The Dirichlet problem for semilinear second-order degenerate elliptic equations and applications to stochastic exit time control problems, Comm. Partial Differential Equations 20(1–2) (1995), 129–178.

7. E.N. Barron: Viscosity solutions and analysis in L∞, in Nonlinear Analysis, Differential Equations and Control (Montreal, QC, 1998), pp. 1–60, NATO Sci. Ser. C Math. Phys. Sci. 528, Kluwer, Dordrecht, 1999.

8. E.N. Barron, R. Jensen: Lyapunov stability using minimum distance control, Nonlinear Anal. 43(7) (2001), 923–936.

9. A. Cesaroni: Stability properties of controlled diffusion processes via viscosity methods, Ph.D. thesis, University of Padova, Padova, 2004.

10. F.H. Clarke, Yu.S. Ledyaev, E.D. Sontag, A.I. Subbotin: Asymptotic controllability implies feedback stabilization, IEEE Trans. Automat. Control 42 (1997), 1394–1407.

11. F.H. Clarke, Yu.S. Ledyaev, L. Rifford, R.J. Stern: Feedback stabilization and Lyapunov functions, SIAM J. Control Optim. 39(1) (2000), 25–48.

12. M.G. Crandall, H. Ishii, P.-L. Lions: User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. 27 (1992), 1–67.

13. H. Deng, M. Krstic, R.J. Williams: Stabilization of stochastic nonlinear systems driven by noise of unknown covariance, IEEE Trans. Automat. Control 46(8) (2001), 1237–1253.

14. J.L. Doob: Stochastic Processes, Wiley, New York, 1953.

15. N. El Karoui, D. Huu Nguyen, M. Jeanblanc-Picqué: Compactification methods in the control of degenerate diffusions: existence of an optimal control, Stochastics 20 (1987), 169–219.

16. W.H. Fleming, H.M. Soner: Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York, 1993.

17. W.H. Fleming, P.E. Souganidis: On the existence of value functions of two-player, zero-sum stochastic differential games, Indiana Univ. Math. J. 38(2) (1989), 293–314.

18. P. Florchinger: Lyapunov-like techniques for stochastic stability, SIAM J. Control Optim. 33(4) (1995), 1151–1169.

19. P. Florchinger: A stochastic Jurdjevic–Quinn theorem, SIAM J. Control Optim. 41(1) (2002), 83–88.

20. R.Z. Has'minskii: Stochastic Stability of Differential Equations, Sijthoff and Noordhoff, Alphen aan den Rijn, 1980.

21. R.Z. Has'minskii, F.C. Klebaner: Long term behavior of solutions of the Lotka–Volterra system under small random perturbations, Ann. Appl. Probab. 11(3) (2001), 952–963.

22. U.G. Haussmann, J.P. Lepeltier: On the existence of optimal controls, SIAM J. Control Optim. 28 (1990), 851–902.

23. N. Ikeda, S. Watanabe: Stochastic Differential Equations and Diffusion Processes, North-Holland, Amsterdam, 1981.

24. H. Ishii: On uniqueness and existence of viscosity solutions of fully nonlinear second-order elliptic PDEs, Comm. Pure Appl. Math. 42(1) (1989), 15–45.

25. H.J. Kushner: Stochastic Stability and Control, Academic Press, New York, 1967.

26. H.J. Kushner: Converse theorems for stochastic Liapunov functions, SIAM J. Control Optim. 5 (1967), 228–233.

27. H.J. Kushner: Stochastic stability, in Stability of Stochastic Dynamical Systems (Proc. Internat. Sympos., Univ. Warwick, Coventry, 1972), pp. 97–124, Lecture Notes in Mathematics, Vol. 294, Springer-Verlag, Berlin, 1972.

28. H.J. Kushner: Existence of optimal controls for variance control, in Stochastic Analysis, Control, Optimization and Applications, pp. 421–437, Systems Control Found. Appl., Birkhäuser Boston, Boston, MA, 1999.

29. P.-L. Lions: Optimal control of diffusion processes and Hamilton–Jacobi–Bellman equations. Part 1: The dynamic programming principle and applications. Part 2: Viscosity solutions and uniqueness, Comm. Partial Differential Equations 8 (1983), 1101–1174 and 1229–1276.

30. X. Mao: Exponential Stability of Stochastic Differential Equations, Marcel Dekker, New York, 1994.

31. L. Rifford: Existence of Lipschitz and semiconcave control-Lyapunov functions, SIAM J. Control Optim. 39(4) (2000), 1043–1064.

32. H.M. Soner, N. Touzi: Stochastic target problems, dynamic programming, and viscosity solutions, SIAM J. Control Optim. 41(2) (2002), 404–424.

33. H.M. Soner, N. Touzi: Dynamic programming for stochastic target problems and geometric flows, J. Eur. Math. Soc. (JEMS) 4(3) (2002), 201–236.

34. E.D. Sontag: A Lyapunov-like characterization of asymptotic controllability, SIAM J. Control Optim. 21(3) (1983), 462–471.

35. E.D. Sontag: Stability and stabilization: discontinuities and the effect of disturbances, in Nonlinear Analysis, Differential Equations and Control (Montreal, QC, 1998), F.H. Clarke and R.J. Stern, eds., pp. 551–598, Kluwer, Dordrecht, 1999.

36. E.D. Sontag, H.J. Sussmann: Non-smooth control Lyapunov functions, Proc. IEEE Conf. Decision and Control, New Orleans, Dec. 1995, pp. 2799–2805.

37. P. Soravia: Pursuit–evasion problems and viscosity solutions of Isaacs equations, SIAM J. Control Optim. 31(3) (1993), 604–623.

38. P. Soravia: Stability of dynamical systems with competitive controls: the degenerate case, J. Math. Anal. Appl. 191 (1995), 428–449.

39. P. Soravia: Optimality principles and representation formulas for viscosity solutions of Hamilton–Jacobi equations. I. Equations of unbounded and degenerate control problems without uniqueness, Adv. Differential Equations 4(2) (1999), 275–296.

40. D. Stroock, S.R.S. Varadhan: Multidimensional Diffusion Processes, Springer-Verlag, New York, 1979.

41. A. Swiech: Another approach to the existence of value functions of stochastic differential games, J. Math. Anal. Appl. 204(3) (1996), 884–897.

Accepted 1 May 2005. Online publication 16 September 2005.