Computation of Lyapunov functions for smooth nonlinear systems using convex optimization




Automatica 36 (2000) 1617–1626

Computation of Lyapunov functions for smooth nonlinear systems using convex optimization

Tor A. Johansen*
Department of Engineering Cybernetics, Norwegian University of Science and Technology, N-7491 Trondheim, Norway

Received 12 February 1999; revised 20 December 1999; received in final form 29 March 2000

Linearly parameterized non-quadratic Lyapunov functions for smooth nonlinear systems are computed numerically using large-scale linear or quadratic programming.

Abstract

It is shown that for smooth nonlinear systems, conditions for the existence of a Lyapunov function that guarantees uniform exponential stability can be formulated as linear inequalities defined pointwise in the state space when assuming a general linearly parameterized class of smooth non-quadratic Lyapunov-function candidates. Hence, computation of the Lyapunov function involves the solution of a convex large-scale optimization problem using linear or quadratic programming. The optimization criterion can for example be selected to find a Lyapunov function which predicts a fast decay rate or a large region of attraction. Analysis of the tradeoff between accuracy and computational complexity, as well as the possible conservativeness of the procedure, is given particular attention. The procedure is illustrated using numerical examples. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Nonlinear systems; Linear programming; Quadratic programming; Lyapunov functions

1. Introduction

This work describes a procedure for computing a Lyapunov function (if one exists) for the equilibrium point x = 0 for the class of nonautonomous nonlinear systems

ẋ = f(x, θ),   (1)

where x ∈ Rⁿ is the state vector, θ ∈ R^d is a possibly time-varying parameter vector, f(0, θ) = 0 for all θ, and f is smooth. Conditions for uniform exponential stability of the equilibrium of system (1) are considered. In other words, let X⁰ ⊂ Rⁿ be a compact and connected region of the state space such that the origin is an interior point of X⁰, and Θ ⊂ R^d a compact region of the parameter space. A Lyapunov function is sought that guarantees that if x(0) is in the region of attraction X ⊂ X⁰ then

This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor C. Canudas de Wit under the direction of Editor H. K. Khalil.

* Fax: +47-73-59-43-99. E-mail address: tor.arne.johansen@itk.ntnu.no (T.A. Johansen).

x(t) → 0 as t → ∞ at an exponential rate for any parameter trajectory that satisfies θ(t) ∈ Θ for all t. No knowledge of θ is assumed to be available.

This paper presents a computational approach for searching for a Lyapunov function. By assuming a linear parameterization of the set of Lyapunov-function candidates, the existence of a Lyapunov function leads to two linear inequalities that must hold for every x ∈ X⁰ and θ ∈ Θ. By discretizing the compact sets X⁰ and Θ, the possible Lyapunov functions within the selected class of candidates are characterized (approximately, due to the discretization) by a finite number of linear inequalities.

Introducing a parameterized Lyapunov function is certainly not a new idea. Lyapunov functions of polynomial form were suggested in Zelentsovsky (1994). The problem of absolute stability for systems with nonlinearities that satisfy certain sector conditions has been studied using quadratic Lyapunov functions (Kamenetskii & Pyatnitskii, 1987; Pyatnitskii & Skordodinskii, 1987) as well as piecewise quadratic Lyapunov functions (Molchanov & Pyatnitskii, 1986; Molchanov, 1987). Recently, the class of LPV (linear parameter-varying) systems of the form ẋ = A(θ)x has been studied extensively using quadratic Lyapunov functions (Boyd, Ghaoui,



Feron & Balakrishnan, 1994) or θ-dependent quasi-quadratic Lyapunov functions (Gahinet, Apkarian & Chilali, 1996; Wu, Yang, Packard & Becker, 1996; Watanabe, Uchida & Fujita, 1996) that can be determined by convex optimization by solving linear matrix inequalities (LMIs). However, embedding a class of nonlinear systems into the LPV framework will lead to conservativeness since the nonlinear structure is not fully utilized in the Lyapunov function. In Johansson and Rantzer (1998) and Petterson and Lennartson (1997) it is shown how piecewise quadratic Lyapunov functions can be characterized by LMIs for general classes of nonlinear and hybrid systems. A constructive algorithm where a polyhedral region of attraction is built sequentially using a norm-based Lyapunov function was suggested in Brayton and Tong (1980). Extensions to piecewise linear systems were given in Ohta, Imanishi, Gong and Haneda (1993). Polyhedral Lyapunov functions might be computed using linear programming as discussed in Blanchini (1995), and smoothed polyhedral functions were considered in Blanchini and Miani (1996). Furthermore, another class of piecewise linear Lyapunov functions was suggested in Julian, Guivant and Desages (1999), where it was also shown that the resulting problem can be solved by linear programming.

The main contribution of the present work is the utilization of a flexible and general smooth parameterization of the Lyapunov-function candidates that does not introduce significant conservativeness and allows the problem to be reduced to a convex optimization problem involving linear inequality constraints at each point in the state space. By discretization of the state space this leads to a computational procedure based on linear or quadratic programming that is efficient for systems of sufficiently low order. Furthermore, the method is flexible in the sense that it allows various additional objectives to be optimized, such as maximizing the decay rate, maximizing the region of attraction or minimizing the complexity of the Lyapunov function. Finally, the procedure is accompanied by an analysis of the effect of discretization.

The remainder of this paper is organized as follows. In Section 2 a quite general linearly parameterized set of Lyapunov functions is introduced, and an infinite number of linear inequality conditions on the parameters that ensure uniform exponential stability are derived. A procedure for reducing the infinite number of linear inequalities to a finite number is described in Section 3, and a numerical example is included to illustrate the computational procedure. The effect of this approximation in terms of accuracy is analysed in Section 4. Computational complexity and possible conservativeness of the approach are also discussed. Some concluding remarks are given in Section 5.

Some notation: For a vector x ∈ Rⁿ and any p ≥ 1 the p-norm is defined by ||x||_p = (Σ_{i=1}^n |x_i|^p)^{1/p}. In the limit, ||x||_∞ = max_i |x_i|. For a matrix A we define the induced norm ||A||_p = max_{||x||_p = 1} ||Ax||_p. For a square matrix A, σ̲(A) and σ̄(A) denote its smallest and largest singular values, respectively, and ||A||₂ = σ̄(A). The closed ε-ball is defined as B_ε(z⁰) = {z ∈ Rⁿ : ||z − z⁰||₂ ≤ ε} for any ε > 0. The L_p(Z) space with Z ⊂ Rⁿ and p ≥ 1 is the completion of the space of all continuous functions f : Z → R^m with (finite) norm ||f||_p = (∫_{z∈Z} ||f(z)||_p^p dz)^{1/p}. In the limit, ||f||_∞ = sup_{z∈Z} ||f(z)||_∞. A Sobolev p-norm on Z is defined for any function g ∈ L_p(Z) with absolutely continuous first derivative as ||g||_{1,p} = ||g||_p + ||dg/dx||_p. For a compact set X, the set ∂X denotes its boundary.

2. Convex characterization of uniform exponential stability

In this section a linearly parameterized set of Lyapunov functions is introduced. Conditions for uniform exponential stability are derived. Due to the linear parameterization, they appear as an infinite set of linear inequalities.

2.1. Parameterization of the Lyapunov function candidates

Consider a Lyapunov function candidate V : X⁰ → R of the form

V(x) = xᵀ P(x) x,   (2)

where the matrix-valued function P : X⁰ → R^{n×n} is defined by the following linear parameterization:

P(x) = Σ_{i=1}^N P_i ρ_i(x),   (3)

where ρ_i : X⁰ → R are smooth basis functions for all i = 1, 2, …, N and P₁, P₂, …, P_N are parameter matrices. The attention is restricted to positive semi-definite basis functions that form a partition of unity:

Σ_{i=1}^N ρ_i(x) = 1 for all x ∈ X⁰.   (4)

For later reference, the set of functions V defined by (2)–(4) and fixed basis functions ρ₁, ρ₂, …, ρ_N is denoted by

𝒱_N = { x ↦ Σ_{i=1}^N xᵀ P_i x ρ_i(x) | P₁, P₂, …, P_N ∈ R^{n×n} }.   (5)

In the procedure developed below, we will make no further assumptions on the set of basis functions. However, as we shall see later, in order to argue something about the nonconservativeness of the approach, one may assume that they are selected from a complete basis (in a Sobolev norm). Also notice that the P_i-matrices are not restricted to be symmetric in general, although symmetry may be useful to reduce the number of parameters without significantly reducing the representational power of the Lyapunov-function candidate parameterization. Before we proceed, we would like to introduce the class of smooth locally quadratic Lyapunov function candidates generated by normalized local Gaussian basis functions as an example of a useful class of basis functions.

Example (Gaussian basis functions). Consider basis functions defined as

ρ_i(x) = μ_i(x) / Σ_{j=1}^N μ_j(x),   (6)

μ_i(x) = exp( −(1/2) Σ_{k=1}^n (x_k − x̄_{i,k})² / s_{i,k}² ),   (7)

where x̄_{i,k} and s_{i,k} are parameters of the basis functions. Eq. (6) normalizes the functions μ₁, μ₂, …, μ_N such that (4) is always satisfied. Taking into account the form of these basis functions, they can be viewed as providing a smooth interpolation between N locally quadratic Lyapunov functions xᵀP_i x, each having a certain subset of X⁰ where it is active. The parameters x̄_{i,k} roughly define the location of the basis functions in the set X⁰, while the parameters s_{i,k} define the degree of smoothness of the basis functions. Thus, the set x̄₁, …, x̄_N should be selected to cover X⁰ in a reasonable manner, and a typical value for s_{i,k} is half of the average distance between x̄_{i,k} and its neighbouring points x̄_{j,k} for j ≠ i. Such basis functions are used extensively for interpolating multiple locally linear models and controllers (see e.g. Murray-Smith & Johansen, 1997). The resulting set of Lyapunov-function candidates is obviously related to piecewise quadratic Lyapunov functions as used in Johansson and Rantzer (1998) and Petterson and Lennartson (1997). A fundamental difference is that in the present approach the Lyapunov function candidates are smooth due to the smooth interpolation, while in Johansson and Rantzer (1998) continuity is achieved by requiring additional constraints on the matrices P₁, P₂, …, P_N to be fulfilled, and in Petterson and Lennartson (1997) the Lyapunov function may be discontinuous.
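To make the example concrete, the following is a minimal numerical sketch of the normalized Gaussian basis functions (6)–(7) and the resulting candidate (2)–(3). It is not part of the original paper; the function names, the NumPy implementation, and the placeholder identity matrices are illustrative assumptions, with the centres and widths borrowed from the N = 4 parameterization used later in Section 3.3.

```python
import numpy as np

def gaussian_basis(x, centers, widths):
    """Normalized Gaussian basis functions rho_i(x), Eqs. (6)-(7).

    x:       state vector, shape (n,)
    centers: basis-function centres x_bar_i, shape (N, n)
    widths:  smoothness parameters s_{i,k}, shape (N, n)
    Returns rho(x), shape (N,), summing to one (partition of unity, Eq. (4)).
    """
    mu = np.exp(-0.5 * np.sum(((x - centers) / widths) ** 2, axis=1))
    return mu / np.sum(mu)

def lyapunov_candidate(x, P_list, centers, widths):
    """V(x) = sum_i rho_i(x) x^T P_i x, Eqs. (2)-(3)."""
    rho = gaussian_basis(x, centers, widths)
    return sum(r * x @ P @ x for r, P in zip(rho, P_list))

# Example: the N = 4 parameterization used later in Section 3.3
centers = np.array([[-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5], [0.5, 0.5]])
widths = np.full((4, 2), 0.5)
P_list = [np.eye(2) for _ in range(4)]   # placeholder parameter matrices
print(lyapunov_candidate(np.array([0.3, -0.2]), P_list, centers, widths))
```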

2.2. Infinite linear inequalities

Notice that all V ∈ 𝒱_N are smooth and satisfy V(0) = 0. To ensure that V is a Lyapunov function candidate it must satisfy V(x) > 0 for all x ∈ X⁰ \ {0}. Here the alternative (but somewhat stricter) condition is applied:

V(x) ≥ γ₁ ||x||₂²  for all x ∈ X⁰   (8)

for some (typically small) γ₁ > 0. In order for V to be a Lyapunov function, its time derivative along all trajectories in X⁰ \ {0} must be negative. This time derivative is given by the function L : X⁰ × Θ → R,

V̇ = L(x, θ) = fᵀ(x, θ) (dV/dx)(x),   (9)

where (2) and (3) give

(dV/dx)(x) = Σ_{i=1}^N ( xᵀP_i x (dρ_i/dx)(x) + (P_iᵀ x + P_i x) ρ_i(x) ).   (10)

Eqs. (9) and (10) lead to

L(x, θ) = Σ_{i=1}^N ( ( fᵀ(x, θ) (dρ_i/dx)(x) ) xᵀP_i x + fᵀ(x, θ)(P_iᵀ + P_i) x ρ_i(x) ).   (11)

A condition for uniform exponential stability of the equilibrium is now that there exist matrices P₁, P₂, …, P_N such that

L(x, θ) ≤ −γ V(x)  for all x ∈ X⁰ and θ ∈ Θ   (12)

for some constant γ > 0. The following theorem proves that conditions (8) and (12) on P₁, P₂, …, P_N ensure that V is indeed a Lyapunov function.

Theorem 1. Let X⁰ be a compact and connected set. Suppose V(x) ≥ γ₁||x||₂² for all x ∈ X⁰ where γ₁ > 0, and that there exists a scalar γ > 0 such that L(x, θ) ≤ −γV(x) for all x ∈ X⁰ and θ ∈ Θ. Then for all parameter trajectories θ(t) ∈ Θ and initial conditions x(0) ∈ X ⊂ X⁰, the equilibrium point is uniformly exponentially stable, i.e.

||x(t)||₂ ≤ √(γ₂/γ₁) ||x(0)||₂ e^{−γt/2},   (13)

where γ₂ = max_i σ̄(P_i). The region of attraction X is estimated by

X = { x ∈ X⁰ | V(x) ≤ inf_{ξ ∈ ∂X⁰} V(ξ) }.   (14)

Proof. Since L(x, θ) < 0 on X⁰ it follows that x(0) ∈ X guarantees x(t) ∈ X⁰ for all t ≥ 0. Now α₁(||x||₂) = γ₁||x||₂² ≤ V(x) ≤ γ₂||x||₂² = α₂(||x||₂) and (d/dt)V(x) ≤ −γV(x) = −α₃(||x||₂). It is clear that α₁, α₂ and α₃ are class K functions and the result follows from Theorem 4.1 and Corollary 4.2 of Khalil (1992). □

Notice that L is in general a nonlinear function of x and θ, but a linear function of the parameters of V (the elements of the matrices P₁, P₂, …, P_N). In other words, it can be represented in the form L(x, θ) = pᵀ l(x, θ), where the function l : X⁰ × Θ → R^m does not depend on the parameter vector p ∈ R^m defined by p = (P₁^{1,1}, P₁^{1,2}, …, P₁^{1,n}, P₁^{2,1}, P₁^{2,2}, …, P_N^{n,n})ᵀ, and P_i^{j,k} is the (j, k)-element of the P_i matrix. This parameter vector has m = Nn² elements, and the function l can be easily derived from (11). Furthermore, V has a similar linear parametric representation V(x) = pᵀ v(x), where the function v : X⁰ → R^m can be derived easily from (2) and (3). Hence, conditions (8) and (12) for exponential stability can be written as linear inequalities in the parameters p:

pᵀ v(x) ≥ γ₁||x||₂²  for all x ∈ X⁰,   (15)

pᵀ (l(x, θ) + γ v(x)) ≤ 0  for all x ∈ X⁰ and θ ∈ Θ.   (16)

Constraints (15) and (16) are state- and parameter-dependent, which implies that there is an infinite number of them. This leads to a so-called semi-infinite programming problem (e.g. Tanaka, Fukushima & Ibaraki, 1988; Polak, 1997). In Section 3, finite discretizations of the state and parameter spaces are introduced in order to reduce this infinite number of linear inequalities to a finite number of linear inequalities at the cost of an approximation. The effect of this approximation is analysed in Section 4.1 and related to some characteristic parameters of the system.
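The paper only states that l and v can be derived from (11) and (2)–(3); the sketch below is one hedged way of evaluating them numerically. The helper names (basis_and_gradient, v_vector, l_vector), the finite-difference gradient of the basis functions, and the row-wise stacking of the P_i entries into p are illustrative assumptions, not part of the original.

```python
import numpy as np

def basis_and_gradient(x, rho, eps=1e-6):
    """Evaluate rho(x) (shape (N,)) and a central finite-difference estimate of
    d rho / dx (shape (N, n)); a numerical stand-in for the analytic gradients."""
    n = x.size
    r0 = rho(x)
    grad = np.zeros((r0.size, n))
    for k in range(n):
        dx = np.zeros(n)
        dx[k] = eps
        grad[:, k] = (rho(x + dx) - rho(x - dx)) / (2 * eps)
    return r0, grad

def v_vector(x, rho):
    """v(x) with V(x) = p^T v(x): the coefficient of P_i[j,k] is rho_i(x) x_j x_k."""
    r0, _ = basis_and_gradient(x, rho)
    return np.concatenate([r * np.outer(x, x).ravel() for r in r0])

def l_vector(x, theta, f, rho):
    """l(x, theta) with L(x, theta) = p^T l(x, theta), following Eq. (11)."""
    r0, grad = basis_and_gradient(x, rho)
    fx = f(x, theta)
    pieces = []
    for i, r in enumerate(r0):
        # (f^T drho_i/dx) * x x^T  +  rho_i * (x f^T + f x^T): coefficients of P_i
        M = (fx @ grad[i]) * np.outer(x, x) + r * (np.outer(x, fx) + np.outer(fx, x))
        pieces.append(M.ravel())
    return np.concatenate(pieces)
```

With these helpers, V(x) = p · v_vector(x, rho) and L(x, θ) = p · l_vector(x, θ, f, rho) for any stacked parameter vector p.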

2.3. Convex objective functions and constraints

The linear inequalities (15) and (16) characterize a convex subset (the intersection of the half-spaces defined by the linear inequalities) of the parameter space R^m of V that defines Lyapunov functions for system (1). Assuming this set is nonempty (at least one Lyapunov function exists), then even for very simple parameterizations of 𝒱_N (i.e. small N) there will typically exist an infinite number of Lyapunov functions. Additional objectives may thus be specified in order to find a Lyapunov function with some desirable properties. The set of Lyapunov functions will depend on the required decay rate γ, and the set of Lyapunov functions constrained by (15) and (16) will in general shrink as γ increases, until the point where no Lyapunov function within the class 𝒱_N exists. Typically, one will attempt to find the "best" Lyapunov function in some sense, for example, the one with the largest region of attraction or the fastest decay rate. It is also frequently desirable to find the simplest possible Lyapunov function. This can be achieved within this framework by specifying a convex objective function that should be minimized subject to constraints (15) and (16). Below, we formulate linear and quadratic objective functions corresponding to these objectives. They can be selected individually or combined in a multi-objective optimization.

2.3.1. Simple Lyapunov function objective

The objective of finding the simplest possible Lyapunov function can be formulated as follows. With the selected parameterization 𝒱_N, it is natural to think of the set of quadratic Lyapunov functions as the simplest possible ones, corresponding to a constant function P, i.e. P₁ = P₂ = ⋯ = P_N. Hence, a natural objective would be to seek matrices P₁, P₂, …, P_N that are as similar as possible. Mathematically, this is captured by the objective

φ̃(p) = Σ_{i=1}^N Σ_{l=1}^N Σ_{j=1}^n Σ_{k=1}^n (P_i^{j,k} − P_l^{j,k})² w_{i,l},   (17)

where w_{i,l} is some positive weight that in the simplest case is equal to one for all (i, l), but may in general be tuned to reflect the topology of the basis functions. It is easy to see that (17) can be written in the quadratic form φ̃(p) = pᵀQp for some positive definite matrix Q. Hence, this objective leads to a convex quadratic programming problem.
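One possible construction of the matrix Q with φ̃(p) = pᵀQp is sketched below, under the assumption that p stacks the matrices P₁, …, P_N row-wise (as in Section 2.2); the function name and default unit weights are illustrative.

```python
import numpy as np

def simplicity_Q(N, n, w=None):
    """Positive semi-definite Q with phi_tilde(p) = p^T Q p for objective (17).

    p stacks the matrices row-wise: p = (P_1[0,0], P_1[0,1], ..., P_N[n-1,n-1]).
    w is an (N, N) weight matrix; defaults to all ones.
    """
    if w is None:
        w = np.ones((N, N))
    m = N * n * n
    Q = np.zeros((m, m))
    I = np.eye(n * n)
    for i in range(N):
        for l in range(N):
            si = slice(i * n * n, (i + 1) * n * n)
            sl = slice(l * n * n, (l + 1) * n * n)
            # (P_i[j,k] - P_l[j,k])^2 contributes +w to the (i,i) and (l,l) blocks
            # and -w to the (i,l) and (l,i) blocks of Q.
            Q[si, si] += w[i, l] * I
            Q[sl, sl] += w[i, l] * I
            Q[si, sl] -= w[i, l] * I
            Q[sl, si] -= w[i, l] * I
    return Q
```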

2.3.2. Average decay rate objective and constraints

The objective of determining the Lyapunov function with the least conservative bound on the average decay rate can be formulated as maximization of γ > 0, which can be implemented as a simple line search subject to feasibility of constraints (15) and (16). Specification of a fixed γ is similar to a constraint on the acceptable minimum decay rate predicted by the Lyapunov function.

An alternative is to minimize the average value of V̇ over X⁰ × Θ, namely

φ̃(p) = ∫_{X⁰×Θ} L(x, θ) dx dθ,   (18)

i.e. the linear objective φ̃(p) = pᵀc where c = ∫_{X⁰×Θ} (l(x, θ) + γ v(x)) dx dθ. This leads to a linear programming problem.
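As a hedged illustration, the objective vector c above can be approximated by a plain grid average (a Riemann-sum stand-in for the integral, up to a constant volume factor); it reuses the hypothetical l_vector and v_vector helpers sketched earlier.

```python
import numpy as np

def average_decay_objective(grid_x, grid_theta, f, rho, gamma):
    """Grid approximation of c = int (l(x, theta) + gamma v(x)) dx dtheta,
    so that minimizing p^T c minimizes the average V-dot over X0 x Theta."""
    terms = [l_vector(x, th, f, rho) + gamma * v_vector(x, rho)
             for x in grid_x for th in grid_theta]
    return np.mean(terms, axis=0)
```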

2.3.3. Region of attraction objective and constraints

Suppose we impose the following linear constraints:

pᵀ v(x) ≥ 1 for all x ∈ X⁰ \ X_b,

pᵀ v(x) ≤ 1 for all x ∈ X_a,

where X_a ⊂ X_b ⊂ X⁰. The motivation is that V is enforced to have a level curve V(x) = 1 in X_b \ X_a, which means that the Lyapunov function is required to predict a region of attraction that contains X_a. The regions X_a and X_b can be defined freely. The region of attraction can be maximized by letting X⁰ \ X_b and X_b \ X_a become small. If X_a and X_b have simple geometries such as balls or hyper-rectangles, this can be implemented easily by a line search for the maximum size of X_a and X_b when X⁰ is kept fixed.
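A short sketch of how these two families of constraints could be stacked for an LP/QP solver in the form A p ≤ b is given below; the grids over X⁰ \ X_b and X_a and the function name are assumptions, and v_vector is the hypothetical helper from the earlier sketch.

```python
import numpy as np

def region_constraints(grid_outer, grid_inner, rho):
    """Stack p^T v(x) >= 1 on X0 \\ X_b (grid_outer) and p^T v(x) <= 1 on X_a
    (grid_inner) as A p <= b."""
    A_rows, b = [], []
    for x in grid_outer:                 # p^T v(x) >= 1  ->  -v(x)^T p <= -1
        A_rows.append(-v_vector(x, rho))
        b.append(-1.0)
    for x in grid_inner:                 # p^T v(x) <= 1
        A_rows.append(v_vector(x, rho))
        b.append(1.0)
    return np.array(A_rows), np.array(b)
```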

3. Computational procedure

In the previous section, Lyapunov functions were characterized by an infinite number of linear inequalities. In this section they are reduced to a finite number by discretization of the state and parameter sets in order to make the approach computationally feasible.


3.1. Finite linear inequalities

The state- and parameter-dependent linear inequalities (15) and (16) define an infinite number of linear inequalities in the finite number of parameters p. A finite number of linear inequalities results from discretization of the compact sets X⁰ and Θ by defining finite sets X⁰_d and Θ_d containing points where the following constraints on the parameters p are imposed for some α > 0:

pᵀ v(x) ≥ γ₁||x||₂²

and

pᵀ (l(x, θ) + α v(x)) ≤ 0  for all (x, θ) ∈ X⁰_d × Θ_d.   (19)

These inequalities are stacked in matrices as follows:

Ṽ p ≥ c̃,   (20)

(L̃ + α V̆) p ≤ 0,   (21)

where the rows of Ṽ correspond to vᵀ(x) for each x ∈ X⁰_d, the elements of c̃ are γ₁||x||₂² for each x ∈ X⁰_d, the rows of L̃ correspond to lᵀ(x, θ) for each (x, θ) ∈ X⁰_d × Θ_d, and the rows of V̆ correspond to vᵀ(x) for each (x, θ) ∈ X⁰_d × Θ_d. Additional linear constraints may be due to other objectives, such as a required region of attraction as discussed in Section 2.3. Hence, (20) and (21) define a finite number of linear constraints in a finite number of variables, which are computationally feasible using standard linear or quadratic programming (e.g. Luenberger, 1989).

Because of the finite discretization, the existence of a parameter vector p ∈ R^m that satisfies (20) and (21) is not sufficient to guarantee that V is a Lyapunov function. It must be checked that there exists a γ > 0 such that L(x, θ) ≤ −γV(x) and V(x) > 0 hold for all x ∈ X⁰ \ {0} and θ ∈ Θ. Then it can be argued that x(0) ∈ X guarantees uniform exponential stability of the origin. In practice, these conditions must be checked on a sufficiently dense finite set of checking points X⁰_c × Θ_c ⊂ X⁰ × Θ. If they do not hold, the density of the set of design points X⁰_d and the parameterization of P should be made finer, and the above procedure iterated. Theoretical bounds and guidelines for selecting the number of points in X⁰_d × Θ_d and X⁰_c × Θ_c are given in Section 4.
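The following sketch assembles the discretized constraints (20)–(21) as a single system A p ≤ b and checks feasibility with a phase-1 style linear program. It is an illustrative assumption of how this step could look in SciPy (the paper itself only prescribes standard linear or quadratic programming), and it reuses the hypothetical v_vector and l_vector helpers from Section 2.2.

```python
import numpy as np
from scipy.optimize import linprog

def stack_inequalities(grid_x, grid_xtheta, f, rho, alpha, c1):
    """Assemble (20)-(21) over the design grids as one system A p <= b."""
    A_rows, b = [], []
    for x in grid_x:                       # p^T v(x) >= c1 ||x||_2^2
        A_rows.append(-v_vector(x, rho))
        b.append(-c1 * float(x @ x))
    for x, th in grid_xtheta:              # p^T (l(x, th) + alpha v(x)) <= 0
        A_rows.append(l_vector(x, th, f, rho) + alpha * v_vector(x, rho))
        b.append(0.0)
    return np.array(A_rows), np.array(b)

def feasible_p(A, b):
    """Feasibility check: minimize the zero objective subject to A p <= b."""
    m = A.shape[1]
    res = linprog(c=np.zeros(m), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * m, method="highs")
    return res.x if res.success else None
```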

3.2. Procedure

A computational procedure for searching for a Lyapunov function is:

Input data: The system function f, a compact and connected set X⁰ that contains the origin as an interior equilibrium point, and a compact set Θ.

Step 1: Select a set of basis functions ρ₁, ρ₂, …, ρ_N for 𝒱_N.

Step 2: Select finite sets X⁰_d ⊂ X⁰ and Θ_d ⊂ Θ, and possibly a minimum region of attraction X_a ⊂ X_b ⊂ X⁰.

Step 3: Solve the convex optimization problem of determining a feasible (or maximum) α > 0 while minimizing one of the linear or quadratic objectives (17) or (18) subject to the linear constraints (20) and (21) with respect to p (there may be additional constraints if a minimum region of attraction is specified). If no solution was found, go to either Step 1 or Step 2.

Step 4: Generate "sufficiently dense" but finite checking sets X_c ⊂ X⁰ and Θ_c ⊂ Θ.

Step 5: If L(x, θ) ≤ −γV(x) does not hold on X_c × Θ_c, or V(x) > 0 does not hold on X_c, for some γ > 0, go to either Step 1 or Step 2.

Output data: If the procedure converges, a Lyapunov function has been found, and the region of attraction X can be estimated according to (14).

Note that the computational efficiency of this iterative procedure might be improved if the refinement of the discretization and parameterization of the Lyapunov function at the next iteration is constructed such that the optimal parameters from the previous iteration can be applied to initialize the next linear or quadratic program (Polak, 1997).
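Step 3 above leaves the choice of α open; one simple hedged realization is a line search over an increasing grid of candidate values, stopping at the first infeasible one. The sketch reuses the stack_inequalities and feasible_p helpers assumed earlier.

```python
def max_decay_rate(alphas, grid_x, grid_xtheta, f, rho, c1):
    """Step 3 as a simple line search: return the largest alpha in the given
    (increasing) list of candidates for which (20)-(21) remain feasible."""
    best = None
    for alpha in alphas:
        A, b = stack_inequalities(grid_x, grid_xtheta, f, rho, alpha, c1)
        p = feasible_p(A, b)
        if p is None:
            break          # the feasible set only shrinks as alpha increases
        best = (alpha, p)
    return best
```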

3.3. Numerical example

Consider the autonomous nonlinear system defined by

f(x) = ( −3x₁ + x₂ ,  2x₁² / (0.3 + (x₂ + 0.4)(x₂ − 0.6)) − 2x₂ )ᵀ.   (22)
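For reproducibility, a direct NumPy transcription of (22) as reconstructed above is given below; the function name is arbitrary, and the fractional second component reflects the reading of the garbled original adopted here.

```python
import numpy as np

def f22(x):
    """Vector field of the autonomous example system (22), as reconstructed above."""
    x1, x2 = x
    return np.array([-3.0 * x1 + x2,
                     2.0 * x1**2 / (0.3 + (x2 + 0.4) * (x2 - 0.6)) - 2.0 * x2])
```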

We apply normalized local basis functions of the form (6) and (7) and define the subset of the state space X⁰ = [−1, 1] × [−1, 1] ⊂ R². Consider the following three alternative parameterizations of the Lyapunov function candidates:

(i) 𝒫₁: Quadratic Lyapunov function candidates, N = 1. The basis function parameters are x̄₁ = (0, 0)ᵀ and s_{1,k} = 1.

(ii) 𝒫₄: Non-quadratic Lyapunov function candidates composed from four locally quadratic functions (symmetrically located in X⁰), N = 4. The basis function parameters are x̄₁ = (−1/2, −1/2)ᵀ, x̄₂ = (1/2, −1/2)ᵀ, x̄₃ = (−1/2, 1/2)ᵀ, x̄₄ = (1/2, 1/2)ᵀ and s_{i,k} = 1/2 for all i, k.

(iii) 𝒫₉: Non-quadratic Lyapunov function candidates composed from nine locally quadratic functions (symmetrically located in X⁰), N = 9. The basis function parameters are x̄₁ = (−2/3, −2/3)ᵀ, x̄₂ = (0, −2/3)ᵀ, x̄₃ = (2/3, −2/3)ᵀ, x̄₄ = (−2/3, 0)ᵀ, x̄₅ = (0, 0)ᵀ, x̄₆ = (2/3, 0)ᵀ, x̄₇ = (−2/3, 2/3)ᵀ, x̄₈ = (0, 2/3)ᵀ, x̄₉ = (2/3, 2/3)ᵀ, and s_{i,k} = 1/3 for all i, k.

In all cases below we uniformly discretize X⁰ with 441 points where the constraints are imposed, which leads to problems of reasonable computational complexity, with 882 inequalities (or up to 1323 inequalities if constraints on the region of attraction are also included) in 4, 16 and 36 variables, respectively.

Table 1
Summary of the properties of the computed Lyapunov functions for the autonomous system (22)ᵃ

Number of basis functions | Objective | Result
N = 1 | Decay rate | α = 1.43, r = 0.36
N = 4 | Decay rate | α = 3.17, r = 0.20
N = 9 | Decay rate | α = 5.06, r = 0.30
N = 1 | Region of attraction | α = 1, r = 0.42
N = 4 | Region of attraction | α = 1, r = 0.52
N = 9 | Region of attraction | α = 1, r = 0.85

ᵃ The parameter α is the bound on the decay rate, while r is the radius of the largest square [−r, r] × [−r, r] ⊂ X⁰ containing the region of attraction X.

Fig. 1. Computed Lyapunov functions for the autonomous system (22). The left column shows Lyapunov functions where the decay rate is maximized. The right column shows Lyapunov functions where the region of attraction is maximized. The first row is with parameterization 𝒫₁, the second row with 𝒫₄, and the third row with 𝒫₉. Notice that the arrows only indicate the direction of the vector field f(x), since they are normalized to have the same length (for improved presentation).

First, assume we seek a Lyapunov function where the objective is to predict the maximum decay rate. This is implemented by maximizing the parameter α subject to feasibility of the exponential stability conditions. In addition, the Lyapunov function complexity measure (17) is minimized (for each fixed α). The results are summarized in Table 1, and the level curves of the Lyapunov functions are shown in Fig. 1. Observe that the predicted decay rate increases considerably as the number of parameters in the Lyapunov function candidate parameterization increases. It can also be observed that the predicted region of attraction typically decreases when the predicted decay rate increases, which is not surprising since it is not an objective in this case. The example illustrates that a more complex Lyapunov function gives a less conservative prediction of the decay rate.

Table 2
Summary of the properties of the computed Lyapunov functions for the nonautonomous system (23)ᵃ

Number of basis functions | Objective | Result
N = 1 | Decay rate | α = 0.56, r = 0.30
N = 4 | Decay rate | α = 2.48, r = 0.16
N = 9 | Decay rate | α = 5.50, r = 0.13
N = 1 | Region of attraction | α = 1, r = 0
N = 4 | Region of attraction | α = 1, r = 0.30
N = 9 | Region of attraction | α = 1, r = 0.70

ᵃ The parameter α is the bound on the decay rate, while r is the radius of the largest square [−r, r] × [−r, r] ⊂ X⁰ containing the region of attraction X.

Second, suppose we want to find a Lyapunov function where the objective is to predict the maximum region of attraction within X⁰ = [−1, 1] × [−1, 1]. This is implemented by defining X_a to be a square X_a = [−r, r] × [−r, r] within the square X⁰, and X_b = [−0.95, 0.95] × [−0.95, 0.95]. In addition, the Lyapunov function complexity measure (17) is minimized (for each fixed r), and a minimum decay rate α = 1 is required. The results are summarized in Table 1, and the level curves of the Lyapunov functions are shown in Fig. 1. We observe that the predicted region of attraction increases considerably as the number of parameters in the Lyapunov function candidate parameterization increases. Hence, the example illustrates that a more complex Lyapunov function gives a less conservative prediction of the region of attraction.

Finally, consider the nonautonomous nonlinear system defined by

f(x, θ) = ( −3(1 + θ)x₁ + x₂ ,  2(1 − θ)x₁² / (0.3 + (x₂ + 0.4)(x₂ − 0.6)) − 2x₂ )ᵀ,   (23)

where θ(t) ∈ [−0.2, 0.2] is a time-varying unknown parameter. Note that in the nominal case with θ = 0, this corresponds to the autonomous system (22) considered above. Now we seek a Lyapunov function that proves exponential stability of (23) for all trajectories of θ(t) ∈ [−0.2, 0.2]. We apply the same three alternative Lyapunov function parameterizations 𝒫₁, 𝒫₄ and 𝒫₉ as above, and in addition we introduce a discretization along the θ-dimension such that the number of inequalities is 4410 (or up to 6615 if constraints on the region of attraction are added). The results are summarized in Table 2, and the level curves of the Lyapunov functions are shown in Fig. 2. As expected, due to the uncertain parameter θ the regions of attraction and predicted decay rates are somewhat smaller than in the nominal case. As in the nominal case, a more complex Lyapunov function gives a less conservative prediction of the region of attraction or decay rate. Observe that for N = 1 and α = 1 no (quadratic) Lyapunov function satisfying the constraints exists.
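A corresponding sketch of the uncertain vector field (23) and one possible discretization of Θ = [−0.2, 0.2] follows. The ten-point θ grid is an assumption; it is merely consistent with the inequality count reported above (4410 = 441 state points × 10 parameter values).

```python
import numpy as np

def f23(x, theta):
    """Vector field of the nonautonomous example system (23); theta in [-0.2, 0.2]."""
    x1, x2 = x
    return np.array([-3.0 * (1.0 + theta) * x1 + x2,
                     2.0 * (1.0 - theta) * x1**2
                     / (0.3 + (x2 + 0.4) * (x2 - 0.6)) - 2.0 * x2])

# A possible discretization of the parameter set Theta = [-0.2, 0.2]
theta_grid = np.linspace(-0.2, 0.2, 10)
```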

4. Accuracy vs. computational efficiency tradeoff

The computational procedure described above is not an algorithm, in the sense that several of the steps require further specification. In general, very dense uniform grids of design and checking points should be avoided, in particular in high-dimensional state and parameter spaces, since they will make the problem computationally intractable due to a large number of inequalities.

In this section the required granularity of points in the sets X⁰_d × Θ_d and X⁰_c × Θ_c is investigated theoretically, and some bounds are derived. These can be useful in order to generate sets of points that guarantee the existence of a Lyapunov function despite the finite discretization, and also in order to understand which factors influence the required granularity.

4.1. The effect of discretization

Important information about the required granularity of design and checking points in the state and parameter spaces can be determined by analyzing the complexity of f over different regions in the state subset X⁰. The idea is that if f is a highly nonlinear function in some regions of the state or parameter spaces, a useful heuristic may be to allow large variations in P in these regions, and to increase the density of design and checking points in these regions. Define the checking set granularity function ε : X⁰ × Θ → R,

ε(x, θ) = inf_{(ξ, ζ) ∈ X⁰_c × Θ_c} ||(x, θ) − (ξ, ζ)||₂.   (24)

The usefulness of the above-mentioned heuristic can be seen theoretically by assuming f, ρ_i and dρ_i/dx to be bounded and locally Lipschitz functions in the sense that for every (x₁, θ₁), (x₂, θ₂) ∈ B_{ε(x,θ)}((x, θ)) there exist bounded functions L_f : X⁰ × Θ → R, L_ρ : X⁰ → R and L_ρ′ : X⁰ → R that satisfy

||f(x₁, θ₁) − f(x₂, θ₂)||₂ ≤ L_f(x, θ) ||(x₁, θ₁) − (x₂, θ₂)||₂,   (25)

|ρ_i(x₁) − ρ_i(x₂)| ≤ L_ρ(x) ||x₁ − x₂||₂,   (26)

||(dρ_i/dx)(x₁) − (dρ_i/dx)(x₂)||₂ ≤ L_ρ′(x) ||x₁ − x₂||₂,   (27)

and define K₁(x, θ) = sup_{(ξ,ζ) ∈ B_{ε(x,θ)}((x,θ))} ||f(ξ, ζ)||₂, K₂(x) = sup_{ξ ∈ B_{ε(x)}(x)} ||(dρ_i/dx)(ξ)||₂, P̄ = max_i σ̄(P_i), and X̄⁰ = sup_{x,ξ ∈ X⁰} ||x − ξ||₂.
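Before stating the theorem, note that the granularity function (24) is simply a nearest-neighbour distance to the checking set, so it is cheap to evaluate numerically. The sketch below is an illustrative implementation using a k-d tree; the grid construction and function name are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def granularity(points_check, points_eval):
    """Checking-set granularity eps of Eq. (24): for each point of points_eval,
    the Euclidean distance to the nearest point of the checking set."""
    tree = cKDTree(points_check)          # checking set X0_c x Theta_c
    dist, _ = tree.query(points_eval)     # nearest-neighbour distances
    return dist

# Example: 21 x 21 checking grid on [-1, 1]^2 for an autonomous system (no theta)
g = np.linspace(-1.0, 1.0, 21)
check = np.array([[a, b] for a in g for b in g])
print(granularity(check, np.array([[0.05, 0.05]])))   # roughly 0.07 for this grid
```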

Fig. 2. Computed Lyapunov functions for the nonautonomous system (23). The left column shows Lyapunov functions where the decay rate is maximized. The right column shows Lyapunov functions where the region of attraction is maximized. The first row is with parameterization 𝒫₁, the second row with 𝒫₄, and the third row with 𝒫₉. The arrows indicate the direction of the vector field f(x, θ) for θ ∈ {−0.2, 0, 0.2}. Note that the vectors are normalized to have the same length (for improved presentation).

Theorem 2. Suppose X⁰ and Θ are compact sets, V(x) > 0 for all x ∈ X⁰ \ {0}, and f, ρ_i and dρ_i/dx are bounded and locally Lipschitz functions. Let γ > 0 be given and suppose there exists an α > γ > 0 such that for all (ξ, ζ) ∈ X⁰_c × Θ_c

L(ξ, ζ) ≤ −α V(ξ).   (28)

Assume the checking grid granularity ε(x, θ) is so fine that

ε(x, θ) ≤ (α − γ) V(x) / Q(x, θ),   (29)

where

Q(x, θ) = K₁(x, θ) P̄ (N X̄⁰ (L_ρ′(x) X̄⁰ + 2K₂(x) + 2L_ρ(x)) + 2) + X̄⁰ P̄ (2 + N K₂(x) X̄⁰) L_f(x, θ) + α P̄ X̄⁰ (2 + N L_ρ(x) X̄⁰).   (30)

Then for all x ∈ X⁰ and θ ∈ Θ

L(x, θ) ≤ −γ V(x).   (31)

Proof. Let (x, θ) ∈ X⁰ × Θ and (ξ, ζ) ∈ (X⁰_c × Θ_c) ∩ B_{ε(x,θ)}((x, θ)) be arbitrary. From (28) we get

L(x, θ) = L(ξ, ζ) + (L(x, θ) − L(ξ, ζ))   (32)

  ≤ −α V(x) + α (V(x) − V(ξ)) + (L(x, θ) − L(ξ, ζ))   (33)

  = −γ V(x) − (α − γ) V(x) + α (V(x) − V(ξ)) + (L(x, θ) − L(ξ, ζ)).   (34)

It can be verified directly that the definition of Q implies

L(x, θ) ≤ −γ V(x) − (α − γ) V(x) + Q(x, θ) ||(x, θ) − (ξ, ζ)||₂   (35)

and from (24) it is clear that

L(x, θ) ≤ −γ V(x) − (α − γ) V(x) + Q(x, θ) ε(x, θ) ≤ −γ V(x),

where the last inequality follows from (29). □

This theorem relates the required local checking set granularity to local complexity measures (local Lipschitz constants) of the system function and the basis functions of the Lyapunov function. The above result only concerns the discretization of the condition V̇ ≤ 0. Discretization of the additional condition V(x) ≥ γ₁||x||₂² does usually not impose any requirements on the checking set granularity, since V(x) is increasing with increasing ||x||₂ and it is only required that V(x) > 0 for x ∈ X⁰ \ {0}.

The lower bound on the required checking set granularity given by the right-hand side of (29) is typically of limited practical usefulness. The reason for this is twofold. First, the involved parameters are typically hard to compute, and second, the bound is typically too conservative. The main importance of Theorem 2 is therefore that it proves the existence of a lower bound and also that it illustrates how the lower bound depends on the characteristic parameters of the problem, such as Lipschitz constants and the complexity of the Lyapunov function parameterization.

4.2. Parameterization of the Lyapunov function candidates, revisited

The parameterization of the set of Lyapunov function candidates 𝒱_N is of importance in order to determine a tight Lyapunov function with a computationally efficient procedure. In order to argue that this parameterization does not introduce significant conservativeness, it is convenient if the basis functions are selected from a set of functions that guarantees that any smooth Lyapunov function candidate and its gradient can be approximated to arbitrary accuracy on X⁰. By selecting the basis functions from a basis that is complete in a Sobolev norm this can be satisfied:

Theorem 3. Suppose X⁰ is a compact set and 0 ∈ X⁰. Let W : X⁰ → R be an arbitrary smooth Lyapunov function candidate, i.e. W(0) = 0, W(x) > 0 for any x ∈ X⁰ \ {0}. Suppose the set of functions G(X⁰) defines a complete basis for the set of smooth functions in the Sobolev p-norm defined on X⁰, where p ≥ 1 is arbitrary. Then for any δ > 0 there exist basis functions ρ₁, ρ₂, …, ρ_N ∈ G(X⁰) and matrices P₁, P₂, …, P_N such that V(x) = Σ_{i=1}^N xᵀ P_i x ρ_i(x) satisfies

||W − V||_p ≤ δ,   ||dW/dx − dV/dx||_p ≤ δ.

Proof. Taylor's theorem (see Abrahamson, Marsden & Ratiu, 1988, Theorem 2.4.15 for a general version of this) gives

W(x) = W(0) + (dW/dx)(0) x + xᵀ P̃(x) x,   (36)

P̃(x) = ∫₀¹ (1 − s) ( (d²W/dx²)(sx) − (d²W/dx²)(0) ) ds + (d²W/dx²)(0).   (37)

Recall that W(0) = 0 and (dW/dx)(0) = 0 since x = 0 is a minimum for W. Hence,

W(x) − V(x) = xᵀ (P̃(x) − P(x)) x   (38)

and

(dW/dx)(x) − (dV/dx)(x) = (P̃(x) − P(x))ᵀ x + ( (d/dx)(xᵀ P̃(x)) − (d/dx)(xᵀ P(x)) ).   (39)

The result follows from (38) and (39), the compactness of X⁰ and the completeness of G(X⁰), since P̃ is smooth due to the smoothness of W (cf. (37)). □

Notice that the set of smooth locally quadratic Lyapunov function candidates with Gaussian basis functions introduced in Section 2.1 can be shown to be complete in the Sobolev norm (cf. Rovatti, 1996).

5. Concluding remarks

A numerical procedure for the computation of Lyapunov functions for smooth nonlinear systems using linear or quadratic programming has been suggested. The advantages of the approach are its flexibility with respect to the parameterization of the smooth Lyapunov function candidates in terms of basis functions, and its flexibility to impose additional objectives and constraints on the Lyapunov functions. The applicability of the procedure is somewhat limited by the fact that the number of linear inequalities characterizing the Lyapunov functions will grow exponentially with both the dimension of the state space and the required accuracy if one applies regular grids of design and checking points. In order to help reduce the size of the numerical problem, it is shown that the design and checking points should be concentrated in regions of the state space where the nonlinearities are most pronounced. A lower bound on the required granularity of the checking point set is characterized in terms of regularity properties of the system function and the basis functions in the Lyapunov function parameterization. For the purpose of checking stability, the conservativeness of the approach is due to the finite parameterization of the set of Lyapunov function candidates and the finite discretization of the state space.


Acknowledgements

This work was sponsored by the European Commission under the ESPRIT Long Term Research project 28104 H2C.

References

Abrahamson, R., Marsden, J. E., & Ratiu, T. (1988). Manifolds, tensor analysis and applications (2nd ed.). New York: Springer.

Blanchini, F. (1995). Nonquadratic Lyapunov functions for robust control. Automatica, 31, 451–461.

Blanchini, F., & Miani, S. (1996). A new class of universal Lyapunov functions for the control of uncertain linear systems. Proceedings of the 35th conference on decision and control, Kobe, Japan (pp. 1027–1032).

Boyd, S., El Ghaoui, L., Feron, E., & Balakrishnan, V. (1994). Linear matrix inequalities in system and control theory. Philadelphia, PA: SIAM.

Brayton, R. K., & Tong, C. H. (1980). Constructive stability and asymptotic stability of dynamical systems. IEEE Transactions on Circuits and Systems, 27, 1121–1130.

Gahinet, P., Apkarian, P., & Chilali, M. (1996). Affine parameter-dependent Lyapunov functions and real parametric uncertainty. IEEE Transactions on Automatic Control, 41, 436–442.

Johansson, M., & Rantzer, A. (1998). Computation of piecewise quadratic Lyapunov functions for hybrid systems. IEEE Transactions on Automatic Control, 43, 555–559.

Julian, P., Guivant, J., & Desages, A. (1999). A parameterization of piecewise linear Lyapunov functions via linear programming. International Journal of Control, 72, 702–715.

Kamenetskii, V. A., & Pyatnitskii, E. S. (1987). Gradient method of constructing Lyapunov functions in problems of absolute stability. Automation and Remote Control, 48, 1–9.

Khalil, H. K. (1992). Nonlinear systems. New York: Macmillan.

Luenberger, D. G. (1989). Introduction to linear and nonlinear programming. Reading, MA: Addison-Wesley.

Molchanov, A. P. (1987). Lyapunov functions for nonlinear discrete-time control systems. Automation and Remote Control, 48, 728–736.

Molchanov, A. P., & Pyatnitskii, E. S. (1986). Lyapunov functions that specify necessary and sufficient conditions of absolute stability of nonlinear nonstationary control systems. Parts I–III. Automation and Remote Control, 47, 344–354, 443–451, 620–630.

Murray-Smith, R., & Johansen, T. A. (1997). Multiple model approaches to modelling and control. London: Taylor & Francis.

Ohta, Y., Imanishi, H., Gong, L., & Haneda, H. (1993). Computer generated Lyapunov functions for a class of nonlinear systems. IEEE Transactions on Circuits and Systems, 40, 343–354.

Petterson, S., & Lennartson, B. (1997). Exponential stability analysis of nonlinear systems using LMIs. In Proceedings of the IEEE conference on decision and control, San Diego.

Polak, E. (1997). Optimization: algorithms and consistent approximation. New York: Springer.

Pyatnitskii, E. S., & Skordodinskii, V. I. (1987). A criterion of absolute stability of nonlinear sampled-data control systems in the form of numerical procedures. Automation and Remote Control, 48, 1190–1198.

Rovatti, R. (1996). Takagi–Sugeno models as approximators in Sobolev norms: the SISO case. Proceedings of the IEEE Conference on Fuzzy Systems, New Orleans (pp. 1060–1066).

Tanaka, Y., Fukushima, M., & Ibaraki, T. (1988). A comparative study of several semi-infinite nonlinear programming algorithms. European Journal of Operational Research, 36, 92–100.

Watanabe, R., Uchida, K., & Fujita, M. (1996). A new LMI approach to analysis of linear systems with scheduling parameter: reduction to finite number of LMI conditions. In Proceedings of the 35th conference on decision and control, Kobe (pp. 1663–1665).

Wu, F., Yang, X. H., Packard, A., & Becker, G. (1996). Induced L2-norm control for LPV systems with bounded parameter variation rates. International Journal on Nonlinear and Robust Control, 6, 983–998.

Zelentsovsky, A. L. (1994). Nonquadratic Lyapunov function for robust stability analysis of linear uncertain systems. IEEE Transactions on Automatic Control, 39, 135–138.

Tor A. Johansen is an Associate Professor at the Department of Engineering Cybernetics at the Norwegian University of Science and Technology, Trondheim, Norway. In 1990 he was at the Norwegian Defence Research Establishment at Kjeller. He received his Dr. Ing. degree in electrical and computer engineering from the Norwegian University of Science and Technology, Trondheim, in 1994. During 1992 he was a research visitor at the University of Southern California. From 1995 to 1997 he was a research engineer with SINTEF Electronics and Cybernetics. He serves as an associate editor of Automatica and IEEE Transactions on Fuzzy Systems, and is a member of the IEEE Technical Committee on Fuzzy Systems and the IFAC Technical Committee on Neural and Fuzzy Systems. His research interests include hybrid control, constrained control, optimization based control, multiple model methods, fuzzy control, nonlinear system identification, and industrial applications of systems engineering and real-time control.
