
Nonlinear Physical Systems

Spectral Analysis, Stability and Bifurcations

Edited by Oleg N. Kirillov and Dmitry E. Pelinovsky

Series Editor Noël Challamel

ISTE WILEY


First published 2014 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

Library of Congress Control Number: 2013950133

© ISTE Ltd 2014. The rights of Oleg N. Kirillov and Dmitry E. Pelinovsky to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

ISTE Ltd
27-37 St George's Road
London SW19 4EU
UK

www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.wiley.com



British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN: 978-1-84821-420-0


Printed and bound in Great Britain by CPI Group (UK) Ltd., Croydon, Surrey CR0 4YY



Chapter 16

Stability Optimization for Polynomialsand Matrices

Suppose that the coefficients of a monic polynomial or the entries of a square matrix depend affinely on parameters, and consider the problem of minimizing the root radius (maximum of the moduli of the roots) or root abscissa (maximum of their real parts) in the polynomial case, and the spectral radius or spectral abscissa in the matrix case. These functions are not convex and they are typically not locally Lipschitz near minimizers. We first address polynomials, for which some remarkable analytical results are available in one special case, and then consider the more general case of matrices, focusing on the static output feedback (SOF) problem arising in control of linear dynamical systems. We also briefly discuss some spectral radius optimization problems arising in the analysis of the transient behavior of a Markov chain and the design of smooth surfaces using subdivision algorithms.

16.1. Optimization of roots of polynomials

Optimization of roots of polynomials can arise in many contexts, but perhaps the most important application area is feedback control in the frequency domain [DOR 99]. Consider the problem

$$\min_{p \in P} \max \{ \operatorname{Re} \lambda : p(\lambda) = 0, \ \lambda \in \mathbb{C} \}$$

Chapter written by Michael L. OVERTON.


where

This arises in maximizing, over all linear feedback controllers of order two, the asymptotic decay rate for a two-mass-spring dynamical system with one input (an actuator positioning the first mass) and one output (the measured position of the second mass) [HEN 06]. The polynomials in P are those that are admissible as the denominators of the relevant rational closed-loop system transfer function, and their roots must be in the left half of the complex plane for the system to be stable. Note that P is a set of monic polynomials with degree 6 whose coefficients depend affinely on five parameters. In [HEN 06], a formulation was given of a polynomial in P of the form $(\lambda - \lambda_0)^6$, with just one distinct negative real root $\lambda_0$ of multiplicity 6, and its local optimality (the property that no nearby polynomial in P has all its roots to the left of $\lambda_0$) was established. It was subsequently discovered that the solution constructed in [HEN 06] is globally optimal (no polynomial in P has all its roots to the left of $\lambda_0$). This fact is a special case of a remarkable general theory that we summarize below, extracting the main results from a recent paper by the author and others [BLO 12]. As noted there, it turns out that part of this general theory was first given in a PhD thesis by Raymond Chen [CHE 79b].

16.1.1. Root optimization over a polynomial family with a single affine constraint

Every monic polynomial of degree n may be represented by a point in $\mathbb{C}^n$, representing the coefficients of the monomials $\lambda^{n-1}, \dots, \lambda, 1$. Let r denote the root radius of such a polynomial:

$$r(p) = \max \{ |\lambda| : p(\lambda) = 0, \ \lambda \in \mathbb{C} \}.$$

The polynomial p is said to be Schur stable if $r(p) < 1$. Let a denote the root abscissa:

$$a(p) = \max \{ \operatorname{Re}(\lambda) : p(\lambda) = 0, \ \lambda \in \mathbb{C} \}.$$

The polynomial p is said to be Hurwitz stable if $a(p) < 0$.
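Both quantities are easy to compute once the roots are in hand. A minimal pure-Python sketch (the function names are ours, and we use Durand-Kerner simultaneous iteration rather than the companion-matrix eigenvalue approach described later in section 16.1.7):

```python
def poly_eval(coeffs, x):
    # Horner evaluation; coeffs = [1, a1, ..., an] represents the monic
    # polynomial lambda^n + a1 lambda^(n-1) + ... + an
    v = 0j
    for c in coeffs:
        v = v * x + c
    return v

def poly_roots(coeffs, iters=200):
    # Durand-Kerner iteration: refine all n root estimates at once,
    # starting from the standard distinct points (0.4 + 0.9j)^k
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            den = 1 + 0j
            for j, s in enumerate(roots):
                if i != j:
                    den *= r - s
            new.append(r - poly_eval(coeffs, r) / den)
        roots = new
    return roots

def root_radius(coeffs):
    # r(p): maximum modulus of a root of p
    return max(abs(r) for r in poly_roots(coeffs))

def root_abscissa(coeffs):
    # a(p): maximum real part of a root of p
    return max(r.real for r in poly_roots(coeffs))
```

For $p(\lambda) = \lambda^2 + 1.5\lambda - 1 = (\lambda - 0.5)(\lambda + 2)$, this gives $r(p) = 2$ (not Schur stable) and $a(p) = 0.5$ (not Hurwitz stable).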

As functions of the polynomial coefficients, the radius r and abscissa a are not convex. They are continuous, but not Lipschitz continuous near a polynomial p with a multiple root whose modulus or real part, respectively, equals r(p) or a(p). So, in general, global minimization of the radius or abscissa over an affine family of monic


polynomials, pushing the roots as far as possible toward the origin or left in the complex plane, seems hard. Indeed, variations on the question of whether a given polynomial family contains one that is stable (has roots inside the unit circle or in the left-half plane) have been studied for decades [BLO 95]. But if an affine family of monic polynomials of degree n has n - 1 free parameters, this question can be answered efficiently. Equivalently, there is a single affine constraint on the coefficients.

16.1.2. The root radius

We begin by optimizing the root radius over real coefficients. Informally, we want to push all the roots as close to zero as possible. By compactness, an optimal polynomial must exist; the following result states an explicit form for the solution.

THEOREM 16.1.- [BLO 12, theorem 1]- Let $b_0, b_1, \dots, b_n$ be real scalars (with $b_1, \dots, b_n$ not all zero) and consider the affine family

$$P = \Big\{ \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n : b_0 + \sum_{j=1}^{n} b_j a_j = 0, \ a_i \in \mathbb{R} \Big\}.$$

The optimization problem

$$r^* := \inf_{p \in P} r(p)$$

has a globally optimal solution of the form

$$p^*(\lambda) = (\lambda - \gamma)^{n-k} (\lambda + \gamma)^{k} \in P$$

for some integer k with $0 \le k \le n$, where $\gamma = r^*$.

This leads immediately to an algorithm for computing k and $\gamma$:

COROLLARY 16.1.- [BLO 12, corollary 2]- Let $\gamma$ be the globally optimal value whose existence is asserted in theorem 16.1 and consider the set

$$\Xi = \{ r \in \mathbb{R} : g_k(r) = 0 \text{ for some } k \in \{0, 1, \dots, n\} \},$$

where


and $(v_0, \dots, v_n)$ is the convolution of the vectors

$$\Big( \binom{n-k}{0}, \binom{n-k}{1}, \dots, \binom{n-k}{n-k} \Big) \quad \text{and} \quad \Big( \binom{k}{0}, -\binom{k}{1}, \dots, (-1)^k \binom{k}{k} \Big)$$

for $k = 0, \dots, n$. Then $-\gamma$ and $\gamma$ are elements of $\Xi$ with smallest magnitude.
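The convolution above is cheap to form. A small pure-Python sketch (the helper name is ours), which also makes plain that $(v_0, \dots, v_n)$ are just the coefficients of $(1+x)^{n-k}(1-x)^k$:

```python
from math import comb

def binomial_convolution(n, k):
    # Convolve (C(n-k,0), ..., C(n-k,n-k)) with
    # (C(k,0), -C(k,1), ..., (-1)^k C(k,k)); the result lists the
    # coefficients of (1+x)^(n-k) * (1-x)^k in increasing powers of x.
    a = [comb(n - k, j) for j in range(n - k + 1)]
    b = [(-1) ** j * comb(k, j) for j in range(k + 1)]
    v = [0] * (n + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            v[i + j] += ai * bj
    return v
```

For n = 3, k = 1 the result is [1, 1, -1, -1], the coefficients of $(1+x)^2(1-x)$.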

Now, we consider the same problem with complex coefficients.

THEOREM 16.2.- [BLO 12, theorem 6]- Let $b_0, b_1, \dots, b_n$ be complex scalars (with $b_1, \dots, b_n$ not all zero) and consider the affine family

$$P = \Big\{ \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n : b_0 + \sum_{j=1}^{n} b_j a_j = 0, \ a_i \in \mathbb{C} \Big\}.$$

The optimization problem

$$r^* := \inf_{p \in P} r(p)$$

has an optimal solution of the form

$$p^*(\lambda) = (\lambda - \gamma)^n \in P$$

with $-\gamma$ given by a root of smallest magnitude of the polynomial

$$h(\lambda) = b_n \lambda^n + \binom{n}{1} b_{n-1} \lambda^{n-1} + \cdots + \binom{n}{1} b_1 \lambda + b_0, \quad \text{that is,} \quad h(\lambda) = \sum_{j=0}^{n} \binom{n}{j} b_j \lambda^j.$$

Compared to the case of real coefficients, this result is somewhat simpler to state: there is an optimal polynomial all of whose roots coincide at one point $\gamma$, compared to two real roots $\pm\gamma$ in the real case. However, somewhat surprisingly, the proof in the complex case is much more complicated than in the real case, requiring an elaborate induction on n (see [BLO 12, Appendix A] for details).


16.1.3. The root abscissa

Now, we consider root abscissa optimization in the real case: instead of trying to push all the roots as close to zero as possible, we wish to push them as far as possible into the left half-plane. Consequently, we no longer have a compact domain for the coefficients, and hence there is no guarantee that an optimal polynomial will exist. The next result states exactly when it exists and what form it has.

THEOREM 16.3.- [BLO 12, theorem 7]- Let $b_0, b_1, \dots, b_n$ be real scalars (with $b_1, \dots, b_n$ not all zero) and consider the affine family

$$P = \Big\{ \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n : b_0 + \sum_{j=1}^{n} b_j a_j = 0, \ a_i \in \mathbb{R} \Big\}.$$

Let $k = \max\{ j : b_j \neq 0 \}$ and define the polynomial of degree k

$$h(\lambda) = \sum_{j=0}^{k} \binom{n}{j} b_j \lambda^j.$$

The optimization problem

$$a^* := \inf_{p \in P} a(p)$$

has the infimal value

$$a^* = \min \{ \beta \in \mathbb{R} : h^{(i)}(-\beta) = 0 \text{ for some } i \in \{0, \dots, k-1\} \},$$

where $h^{(i)}$ is the $i$th derivative of h. Furthermore, the optimal value $a^*$ is attained by a minimizing polynomial $p^*$ if and only if $-a^*$ is a root of h (as opposed to one of its derivatives), and in this case, we can take

$$p^*(\lambda) = (\lambda - \gamma)^n \in P$$

with $\gamma = a^*$.

The formula for the optimal value was given in [CHE 79b], as was the characterization of the optimal polynomial when $-a^*$ is a root of h. The fact that there is no optimal polynomial when $-a^*$ is a root of one of the higher derivatives of h was established in [BLO 12]. In this case, we may consider instead the problem of finding a sequence of polynomials whose abscissa converges to the optimal value. The following result displays a specific structure for such a sequence:


THEOREM 16.4.- [BLO 12, theorem 13]- Assume that $-a^*$ is not a root of h. Let $\ell$ be the smallest integer $i \in \{1, \dots, k-1\}$ for which $-a^*$ is a root of $h^{(i)}$. Then, for all sufficiently small $\varepsilon > 0$, there exists a real scalar $M_\varepsilon$ for which

$$p_\varepsilon(\lambda) := (\lambda - M_\varepsilon)^m (\lambda - (a^* + \varepsilon))^{n-m} \in P,$$

where $m = \ell$ or $\ell + 1$, and $M_\varepsilon \to -\infty$ as $\varepsilon \to 0$.

Thus, as in the real radius case, two roots play a role, but only one is finite. There is a formula for $M_\varepsilon$, but it is too complicated to include here and depends on whether $m = \ell$ or $m = \ell + 1$. In practice, it is a bad idea to make $\varepsilon$ too small: then $|M_\varepsilon|$ becomes large.

We observed that, in the real case, the optimal value is not attained when oneof the derivatives of h has a real root to the right of the rightmost real root of h.However, it is not possible that a derivative of h has a complex root to the right of therightmost complex root of h. This follows immediately from the Gauss-Lucas theorem[BUR 05b, MAR 66]. Consequently, we might guess that the optimal abscissa valueis always attained in the complex case. Indeed, this is the case, as we now state.

THEOREM 16.5.- [BLO 12, theorem 14]- Let $b_0, b_1, \dots, b_n$ be complex scalars (with $b_1, \dots, b_n$ not all zero) and consider the affine family

$$P = \Big\{ \lambda^n + a_1 \lambda^{n-1} + \cdots + a_{n-1} \lambda + a_n : b_0 + \sum_{j=1}^{n} b_j a_j = 0, \ a_i \in \mathbb{C} \Big\}.$$

The optimization problem

$$a^* := \inf_{p \in P} a(p)$$

has an optimal solution of the form

$$p^*(\lambda) = (\lambda - \gamma)^n \in P$$

with $-\gamma$ given by a root with largest real part of the polynomial

$$h(\lambda) = b_n \lambda^n + \binom{n}{1} b_{n-1} \lambda^{n-1} + \cdots + \binom{n}{1} b_1 \lambda + b_0.$$


Figure 16.1. Optimal radius value for $\lambda^3 + a_1\lambda^2 + a_2\lambda + a_3$ subject to the constraint $b_0 + a_1 + a_2 + a_3 = 0$, for $b_0 \in [-2, 4]$. The solid curve shows the optimal value when the coefficients $a_i$ are required to be real while the dashed curve shows the optimal value when the $a_i$ are allowed to be complex

16.1.4. Examples


EXAMPLE 16.1.- [BLO 12, BUR 01c]- Consider the problem of minimizing the root abscissa a over the monic polynomials of degree n with the constraint that the coefficients of $\lambda^{n-1}$ and $\lambda^{n-2}$ must sum to zero. The polynomial h defined in theorems 16.3 and 16.5 is given by

$$h(\lambda) = \binom{n}{2} \lambda^2 + n \lambda.$$

The roots of h are 0 and $-2/(n-1)$, and the only root of the derivative $h^{(1)}$ is $-1/(n-1)$, so $a^* = 0$ and $\lambda^n$ is a global optimizer. Theorem 16.3 proves global optimality over $a_i \in \mathbb{R}$ and theorem 16.5 proves global optimality over $a_i \in \mathbb{C}$. Thus, no polynomial with the constraint $a_1 + a_2 = 0$ can be Hurwitz stable.
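The computation in example 16.1 is simple enough to script directly: with $b_1 = b_2 = 1$ and all other $b_j = 0$ we have $k = 2$, so the candidate values $\beta$ in theorem 16.3 come from the real roots of h and of its first derivative. A sketch (the function name is ours):

```python
from math import comb

def abscissa_opt_example(n):
    # Example 16.1: h(l) = C(n,2) l^2 + n l. The candidates beta
    # satisfy h(-beta) = 0 or h'(-beta) = 0 (i in {0, 1} in the
    # infimal-value formula of theorem 16.3); a* is the smallest.
    c2 = comb(n, 2)
    h_roots = [0.0, -n / c2]      # roots of h: 0 and -2/(n-1)
    dh_roots = [-n / (2 * c2)]    # root of h': -1/(n-1)
    return min(-r for r in h_roots + dh_roots)
```

For every $n \ge 2$ this returns $a^* = 0$, confirming that $\lambda^n$ is optimal and that no polynomial in the family is Hurwitz stable.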

EXAMPLE 16.2.- Consider the problem of minimizing the root radius r over the monic polynomials of degree n with the constraint that $b_0 + a_1 + \cdots + a_n = 0$, where $b_0 \in \mathbb{R}$. When $b_0 = 0$, all the coefficients can be taken to be zero, so the optimal polynomial is $p^* = \lambda^n$ with $r^* = 0$. When $b_0 = 1$ and the coefficients are restricted to be real, we find that the set $\Xi$ defined in corollary 16.1 is just $\{-1, 1\}$, so $r^* = 1$. When $b_0 = 1$ and the coefficients $a_i$ are allowed to be complex, the polynomial h defined in theorem 16.2 is $h(\lambda) = (\lambda + 1)^n$, so its only root is $-1$ and again the optimal value is $r^* = 1$, with the optimal polynomial $p^* = (\lambda - 1)^n$. For $b_0 < 1$, the optimal value drops rapidly toward zero in both the real and complex cases, but for


$b_0 > 1$, the behavior is quite different: in the complex case, $r^*$ drops rapidly before turning around and increasing again, while in the real case, $r^*$ increases steadily as $b_0$ increases. Figure 16.1 shows the optimal value for $n = 3$ and $b_0 \in [-2, 4]$. The solid curve shows the optimal value in the real case and the dashed curve shows the optimal value in the complex case. Thus, for all $b_0$ near to but not equal to one, there is a Schur stable polynomial whose coefficients sum to one if they are allowed to be complex, but if they are required to be real, no Schur stable polynomial exists when $b_0 \ge 1$.

There is a significant class of frequency domain stabilization problems that can be solved using theorem 16.3, as explained in [RAN 89]; see also [BLO 12, section I]. These include the two-mass-spring stabilization example mentioned in section 16.1. Some other examples may be found in [BLO 12, section IV]. More may readily be explored using a publicly available MATLAB code implementing the constructive algorithms implicit in the theorems above.¹

We should emphasize that multiple roots are very sensitive to perturbation. More specifically, a random perturbation of size $\varepsilon$ to the coefficients of a polynomial with a non-zero root with multiplicity k moves the corresponding roots by $O(\varepsilon^{1/k})$. In practice, we might want to locally optimize a more robust measure of stability, as mentioned in section 16.3. Also, the monomial basis is a poor choice numerically unless the polynomial has very small degree. Nonetheless, the optimal value can be computed accurately even if n is quite large.
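The $O(\varepsilon^{1/k})$ growth is easy to see in the simplest case: perturbing the constant coefficient of $\lambda^k$ (a root of multiplicity k at zero) by $\varepsilon$ gives $\lambda^k - \varepsilon$, whose roots are the k-th roots of $\varepsilon$. A quick sketch of this special case:

```python
import cmath

def perturbed_roots(eps, k=4):
    # Roots of l^k - eps = 0: k equally spaced points of modulus
    # eps**(1/k), so a coefficient perturbation of size eps moves the
    # multiple root at zero by eps**(1/k), not by O(eps).
    return [eps ** (1 / k) * cmath.exp(2j * cmath.pi * j / k)
            for j in range(k)]
```

With eps = 1e-8 and k = 4, every root has modulus 1e-2: a coefficient change near the level of rounding noise moves the roots by a full percent.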

16.1.5. Polynomial root optimization with several affine constraints

A natural question is whether the results given above extend to cases where there are several affine constraints on the coefficients. Figure 16.2 shows contour plots of r for two randomly generated monic polynomial families with coefficients depending on two real parameters. In Figure 16.2(a) [BLO 10], the polynomials have degree $n = 3$, so affine dependence on two parameters is equivalent to imposing a single affine constraint on the coefficients, and hence the global minimizer can be computed explicitly by theorem 16.1. The global minimizer $(\lambda - \gamma_1)^3$, where $\gamma_1 \approx 0.541$, occurs on the right side of the plot. There is also a local minimizer, $(\lambda - \gamma_2)^3$, with $\gamma_2 \approx -0.567$, on the left side. Both $-\gamma_1$ and $-\gamma_2$ are roots of the polynomial $g_0$ defined in corollary 16.1 and so both are elements of $\Xi$, and indeed, it is generally true that all local minimizers must have roots $\pm\gamma \in \Xi$, and so elements of $\Xi$ often (though not always) correspond to local minimizers. In contrast, in Figure 16.2(b), the polynomials have degree $n = 4$, so affine dependence on two parameters is equivalent to two affine constraints, and the theorems of the previous section are not applicable. Again global and local minimizers appear on the right and left, respectively, but these were approximated by numerical optimization; more on

1 www.cs.nyu.edu/overton/software/affpoly/.


this below. Both panels clearly illustrate both the non-convexity and non-Lipschitzbehavior of r, particularly around the global and local minimizers.

Figure 16.2. Contour plot of the root radius r for two randomly generated monic polynomial families with coefficients depending affinely on two real parameters $x_1, x_2$. For each, the global minimizer occurs on the right side of the plot and a local minimizer occurs on the left. a) A cubic family for which the minimizers were obtained using theorem 16.1. b) A quartic family for which minimizers were approximated numerically


Let us define the active roots of a polynomial p as those whose modulus equals r(p) or real part equals a(p), depending on the context; the others are said to be inactive. We know from the results of section 16.1.2 that when there is one affine constraint, the root radius always has a global minimizer with no inactive roots and in fact with at most two distinct active roots (one in the complex case). In the case of the quartic family plotted in the right panel, for which there are implicitly two affine constraints on the coefficients, the global minimizer that was found numerically is $(\lambda - \zeta_1)^3 (\lambda - \zeta_2)$ where $\zeta_1 \approx -0.983$ and $\zeta_2 \approx 0.905$, with one triple active root and one inactive root. The local minimizer that was found apparently has the form $(\lambda - \zeta_3)^2 (\lambda - \bar{\zeta}_3)^2$, with $\zeta_3 \approx -0.053 + 1.07i$, so it has a double conjugate pair of active roots with no inactive roots. The global minimizer is Schur stable but the local minimizer is not.

Now consider the general problem of minimizing r subject to s affine constraints on the coefficients. Based on extensive numerical experiments, we conjecture that there always exists a global minimizer with at most $s - 1$ inactive roots. However, there does not seem to be a useful bound on the number of possible distinct active roots. Thus, computing global optimizers appears to be difficult.


16.1.6. Variational analysis of the root radius and abscissa

Given a polynomial p that is a candidate for a local minimizer of r or a, discovered through numerical optimization or in some other way, how may we verify that it is locally optimal, or at least a stationary point in a sense that is suitable for non-smooth functions? When the coefficients are parametrized by two real variables, we can be reasonably confident simply by looking at a contour plot, but this is not possible for many variables. Standard optimality conditions such as those used for constrained optimization, based on Lagrange multipliers [NOC 06], or for convex optimization, based on subgradients and other concepts from convex analysis [ROC 70], are not applicable to polynomial root optimization problems. However, there is a whole body of work known as non-smooth analysis or variational analysis [CLA 83, ROC 98, BOR 05] that is applicable. Following the development in [ROC 98], formulas for subgradients of the root abscissa a, valid even at polynomials with multiple active roots where a is not Lipschitz, were presented in [BUR 01b], where a key result was proved: a is globally subdifferentially regular. Subsequently, this work was refined in [BUR 05b] and extended to more general max functions of the roots, including the root radius r, in [BUR 12]. These results can, in principle, be used to establish necessary or sufficient conditions for a candidate polynomial to be a local minimizer of the radius r or abscissa a even in the presence of multiple active roots, exploiting a chain rule from non-smooth analysis to take account of the affine parametrization; an abscissa example is worked out in detail in [BUR 06b]. We write "in principle" above, because if the optimization problem or the candidate minimizer is not sufficiently simple, or the candidate minimizer is not known exactly, establishing these conditions is complicated or impossible. We will mention a far less rigorous but more practical alternative approach to assessing approximate local optimality in section 16.2.3.

16.1.7. Computing the root radius and abscissa

In MATLAB, roots of a given polynomial $p(\lambda) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_n$ are computed numerically by determining the eigenvalues of the corresponding companion matrix

$$M(p) = \begin{bmatrix} -a_1 & 1 & & \\ -a_2 & & \ddots & \\ \vdots & & & 1 \\ -a_n & 0 & \cdots & 0 \end{bmatrix},$$

the rationale being that the method used to solve the eigenvalue problem is highlyreliable and efficient. So, in our root optimization experiments, we computed the root
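The correspondence between p and M(p) can be checked directly, since $\det(\lambda I - M(p)) = p(\lambda)$. A self-contained sketch in Python rather than MATLAB (helper names are ours; the determinant uses Gaussian elimination with partial pivoting):

```python
def companion(a):
    # M(p) for monic p(l) = l^n + a[0] l^(n-1) + ... + a[n-1]:
    # first column -a_i, ones on the superdiagonal, as displayed above
    n = len(a)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][0] = -a[i]
        if i + 1 < n:
            M[i][i + 1] = 1.0
    return M

def det(A):
    # determinant via Gaussian elimination with partial pivoting
    A = [row[:] for row in A]
    n = len(A)
    d = 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        if A[piv][c] == 0.0:
            return 0.0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= f * A[c][j]
    return d

def char_poly_at(a, lam):
    # evaluates det(lam*I - M(p)), which equals p(lam)
    M = companion(a)
    n = len(a)
    return det([[(lam if i == j else 0.0) - M[i][j]
                 for j in range(n)] for i in range(n)])
```

For $p(\lambda) = \lambda^2 + 1.5\lambda - 1$, `char_poly_at([1.5, -1.0], 2.0)` returns $p(2) = 6$.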


Page 13: SpectralAnalysis, Stability andBifurcations Systems

iscissa

a local minimizer of r or a~ther way, how may we verif;

t III a sense that is suitable formetrized by two real variablesa contour plot, but this is not

ditions such as those used forliers [NOe 06], or for convexncepts from convex analysisimization problems. Howeveranalysis or variational analysi~rllowing the development insa a, valid even at polynomialswere presented in [BUR OIb],entially regular. Subsequently,nore general max functions ofeseresults can, in principle , bea candidate polynomial to be ahe presence of multiple actives to take account of the affinedetail in [BUR 06b]. We writeern or the candidate minimizertizer is not known exactly,ble. We will mention a far lessassessing approximate local

An + alAn- 1 + " .+ an arealues of the corresponding

eigenvalue problem is highlyiments, we computed the root

Stability Optimization for Polynomials and Matrices 361

radius r(p) or root abscissa a(p) by computing the spectral radius or abscissa of M(p). The gradients needed for numerical optimization were then obtained by differentiating the spectral radius or abscissa (more on this below), applying the ordinary chain rule, taking account of the companion matrix structure, to yield gradients in the polynomial coefficient space, and applying the chain rule again to recover gradients in the parameter space. Although the gradients are valid only at polynomials with just one active root (or conjugate pair of active roots in the real case) with multiplicity one, appropriate numerical methods using these gradients approximate local minimizers of r and a with multiple active roots surprisingly effectively, as will be discussed further in the next section.

16.2. Optimization of eigenvalues of matrices

Now let us turn our attention to eigenvalues of $n \times n$ matrices. Let $\rho : \mathbb{C}^{n \times n} \to \mathbb{R}$ define the spectral radius:

$$\rho(X) = \max \{ |\lambda| : \det(\lambda I - X) = 0, \ \lambda \in \mathbb{C} \}$$

and $a : \mathbb{C}^{n \times n} \to \mathbb{R}$ the spectral abscissa:

a (X ) = max {Re(A) : det(AI - X) = 0, A E C} .

An eigenvalue of X is said to be active if its modulus (real part) equals the spectral radius (abscissa). The spectral functions $\rho$ and a are not convex and are not Lipschitz near a matrix with an active multiple eigenvalue. We say that an eigenvalue is non-derogatory if it is associated with only one right eigenvector. Equivalently, it is associated with a single block in the Jordan form, whose dimension is its multiplicity. Of those matrices with an eigenvalue with given multiplicity, those for which the eigenvalue is non-derogatory are most generic [ARN 71].
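For intuition, the spectral radius of a matrix with a single strictly dominant eigenvalue can be estimated without any library support by power iteration. A rough sketch (the helper name is ours; matrices are plain Python lists, and the method is only reliable when one eigenvalue strictly dominates in modulus):

```python
def spectral_radius_estimate(X, iters=500):
    # power iteration: for X with a unique eigenvalue of largest
    # modulus, the max-norm growth factor of X^k v tends to rho(X)
    n = len(X)
    v = [1.0] * n
    rho = 0.0
    for _ in range(iters):
        w = [sum(X[i][j] * v[j] for j in range(n)) for i in range(n)]
        rho = max(abs(x) for x in w)
        if rho == 0.0:
            return 0.0
        v = [x / rho for x in w]
    return rho
```

For the upper triangular matrix X = [[2, 1], [0, 0.5]] the estimate converges to $\rho(X) = 2$.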

The spectral radius and abscissa of a matrix X are the root radius and abscissa of the characteristic polynomial $\det(\lambda I - X)$, but the results of section 16.1 do not extend to the more general case of an affine family of $n \times n$ matrices depending on $n - 1$ parameters. For example, consider the matrix family

$$A(\xi) = \begin{bmatrix} \xi & 1 \\ -1 & \xi \end{bmatrix}.$$

This matrix depends affinely on a single parameter $\xi \in \mathbb{R}$, but the coefficients of its characteristic polynomial $\det(\lambda I - A(\xi))$ do not, so the results of the previous section do not apply. The minimal spectral radius of $A(\xi)$ is attained by $\xi = 0$, for which the eigenvalues are $\pm i$.


16.2.1. Static output feedback

As control feedback problems in the frequency domain are a source of applications for polynomial root optimization problems, so control feedback problems in state space are a source of applications for eigenvalue optimization problems. To be specific, let us consider the static output feedback (SOF) stabilization problem. Given the discrete-time dynamical system with control input and measured output

x_{k+1} = F x_k + G u_k,    y_k = H x_k

where F ∈ R^{n×n}, G ∈ R^{n×p}, H ∈ R^{m×n}, the SOF problem is to find a controller K ∈ R^{p×m} so that, setting u = Ky, all solutions of

x_{k+1} = (F + GKH) x_k

converge to zero, that is all eigenvalues of F + GKH are inside the unit disk, or prove that this is not possible. The analogous continuous-time system is

ẋ = Fx + Gu,    y = Hx

and then the SOF problem is to find K so that the eigenvalues of F + GKH are in the left half-plane. In general, SOF is a major open problem in control [BLO 95, KIM 94]. However, if m = n and H is non-singular (without loss of generality, we can take H = I, the identity matrix, so that y = x), the SOF problem greatly simplifies and becomes the state feedback stabilization problem. Then, provided the pair (F, G) is controllable, a property that holds generically, a standard "pole placement" procedure [WON 85] defines K so that the eigenvalues of F + GK have any desired values, so stabilization is trivial. In fact, stabilization via pole placement extends generically to the case n < mp [WAN 96, WIL 97] (and to n = mp in the complex case [HER 77]).
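A minimal sketch of pole placement in the single-input state feedback case, using Ackermann's formula to place all eigenvalues of F + GK at the origin (the code and names are ours, not from the chapter, and assume (F, G) controllable):

```python
import numpy as np

def ackermann_place_at_zero(F, G):
    """Single-input pole placement (Ackermann's formula), choosing K so that
    all eigenvalues of F + G K are at the origin. Assumes (F, G) controllable."""
    n = F.shape[0]
    # Controllability matrix [G, FG, ..., F^{n-1} G]
    C = np.hstack([np.linalg.matrix_power(F, j) @ G for j in range(n)])
    e_last = np.zeros(n); e_last[-1] = 1.0
    # Desired characteristic polynomial q(lambda) = lambda^n, so q(F) = F^n
    return -(e_last @ np.linalg.inv(C)) @ np.linalg.matrix_power(F, n)

F = np.array([[1.0, 1.0], [0.0, 1.0]])
G = np.array([[0.0], [1.0]])
K = ackermann_place_at_zero(F, G)
closed = F + G @ K.reshape(1, -1)
print(max(abs(np.linalg.eigvals(closed))))   # ~0: deadbeat, spectral radius zero
```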

When n > mp, the SOF problem seems hard. A natural approach is to formulate the stabilization problem as an eigenvalue optimization problem: minimize either the spectral radius ρ or spectral abscissa α over the affine matrix parametrization F + GKH for K ∈ R^{p×m}. But, as pointed out by Chen [CHE 79b, p. 4], [CHE 79a, p. 412], there is an interesting simplification in the case p = 1, i.e. when the dynamical system has a single control input, which depends on the following lemma due to Cauchy [HOR 13, equation [0.8.5.11]].

LEMMA 16.1.– Let A ∈ C^{n×n}, b ∈ C^{n×1} and c ∈ C^{1×n}. Then,

det(A + bc) = det(A) + c adj(A) b


Stability Optimization for Polynomials and Matrices 363

where adj is variously known as the classical adjoint, the adjugate or the transposed cofactor matrix.
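The lemma is easy to verify numerically for an invertible A, using the identity adj(A) = det(A) inv(A), valid when A is non-singular (the snippet is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 1))
c = rng.standard_normal((1, 4))

adjA = np.linalg.det(A) * np.linalg.inv(A)   # adj(A) = det(A) inv(A) for non-singular A

lhs = np.linalg.det(A + b @ c)
rhs = np.linalg.det(A) + (c @ adjA @ b).item()
print(abs(lhs - rhs))                        # ~0
```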

It follows that when p = 1, substituting λI − F for A, G for b and −KH for c, one has²

det(λI − (F + GKH)) = det(λI − F) − KH adj(λI − F) G. [16.1]


The entries in adj(λI − F) are cofactors of λI − F, which are polynomials in λ with degree n − 1. So, when p = 1, the coefficients of the characteristic polynomial of F + GKH depend affinely on the m entries of the row vector K ∈ R^{1×m}, and hence optimizing the spectral radius or abscissa reduces to an affinely parametrized polynomial root radius or root abscissa optimization problem. Then, if we further assume m = n − 1 (so that the dynamical system has n − 1 outputs), it follows that the SOF problem can be solved explicitly using the results given in sections 16.1.2 and 16.1.3. Indeed, this was a key motivation for Chen's PhD thesis [CHE 79b]. However, in general, optimizing the spectral abscissa or radius of F + GKH over K ∈ R^{p×m} does not reduce to polynomial root optimization, so we tackle the eigenvalue optimization problem directly.
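The affine dependence on K when p = 1 can be checked numerically: along any direction in K-space, second differences of the characteristic polynomial coefficients vanish. An illustrative sketch (ours, with randomly generated data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
F = rng.standard_normal((n, n))
G = rng.standard_normal((n, 1))     # p = 1: single control input
H = rng.standard_normal((m, n))

def charpoly_coeffs(K):
    return np.poly(F + G @ K @ H)   # coefficients of det(lambda*I - (F + GKH))

K0 = rng.standard_normal((1, m))
D = rng.standard_normal((1, m))
# Affine dependence on K => second difference along any direction is zero
second_diff = charpoly_coeffs(K0 + D) - 2 * charpoly_coeffs(K0) + charpoly_coeffs(K0 - D)
print(np.max(np.abs(second_diff)))  # ~0 when p = 1
```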

16.2.2. Numerical methods for non-smooth optimization

In developing numerical methods suitable for searching for local minima of non-smooth functions, we take the following viewpoint: although the optimization objective function may not be differentiable or even Lipschitz at minimizers, we evaluate it at only a finite number of points and we can reasonably expect it to be differentiable at all of them. Indeed, any locally Lipschitz function is differentiable almost everywhere on its domain, and this is also true of the non-locally-Lipschitz but semi-algebraic functions ρ, α, r and a. However, the standard method of steepest descent (choosing each new iterate by moving along the direction of the negative gradient from the current iterate) fails badly on non-smooth functions, typically generating iterates that converge to a point where the function is not differentiable but that is nowhere near a local minimizer.

To be able to approximate local minimizers of non-smooth functions, a method must be able to exploit the gradient information obtained at several points, not just at one point. One such method, the gradient sampling algorithm [BUR 05a], has a very satisfactory convergence theory for locally Lipschitz functions, although this has not been extended to non-locally-Lipschitz functions. This was the method that we used in our first numerical experiments with eigenvalue optimization [BUR 02], but the

2. There is a sign error in Chen's version of this equation.


computational cost is substantial. Another method, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton algorithm with an Armijo-Wolfe line search, is a standard workhorse for minimizing smooth functions even in the absence of convexity, but it has not generally been considered suitable for functions that are non-differentiable, let alone non-Lipschitz, at their local minimizers. However, although the theoretical properties of the BFGS method in this context are not well understood, in practice we have found that it is an efficient and reliable method for identifying local minimizers of non-smooth functions, particularly in the locally Lipschitz case [LEW 13]. Our MATLAB code HANSO combines the gradient sampling and BFGS methods together in one package.³

Both methods require repeated evaluation of the objective function and its gradient at a sequence of points. As already noted, we take the view that with high probability, the function will be differentiable at all these points, and hence the gradient will be well defined, but in any case, using floating-point computer arithmetic it makes little sense to try to determine whether or not a function is differentiable at a given point. In the case of the spectral radius ρ, the matrix at which it is being evaluated will normally have only one active eigenvalue (or one complex conjugate pair of active eigenvalues in the real case), with multiplicity one, implying that ρ is differentiable. If there happens to be a tie for the maximum value, we take the view that it is broken arbitrarily; when ρ is evaluated again at other nearby points, eventually the tie will be broken differently, providing more gradient information.

Thus, in minimizing the spectral radius ρ or the spectral abscissa α of X(K) = F + GKH over the parameter matrix K ∈ R^{p×m}, all we need to do is to provide a routine to compute ρ(X) or α(X) together with its gradient with respect to X ∈ R^{n×n} at a given matrix X = X(K) = F + GKH, and then use the ordinary chain rule to obtain the gradient of ρ(X(K)) or α(X(K)) with respect to the parameter matrix K at a given K. Assume that only one real eigenvalue or conjugate pair of eigenvalues of X is active, with multiplicity one. Since X(K) is real for all real K, we can confine our attention to eigenvalues with non-negative imaginary part. Let the active eigenvalue and its right and left eigenvectors be denoted by μ, v and u respectively, that is Xv = μv and u*X = μu*, normalized so that u*v = 1. Then, exploiting well-known formulas [HOR 85, theorem 6.3.12] we have

∇α(X) = Re(uv*)    and    ∇ρ(X) = Re((μ̄/|μ|) uv*).

Under the assumption that μ is simple, the normalization u*v = 1 is always possible, but if μ is close to a multiple eigenvalue, the resulting gradient norm ‖u‖‖v‖ may be arbitrarily large. This is to be expected, because α and ρ are not Lipschitz near a matrix with an active multiple eigenvalue.

3. www.cs.nyu.edu/overton/software/hanso
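The gradient formula for the spectral abscissa can be verified against a central finite difference at a matrix with a simple active eigenvalue. A sketch (ours; NumPy returns right eigenvectors only, so the left eigenvector is obtained from X^T):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))

vals, V = np.linalg.eig(X)                   # right eigenvectors
k = np.argmax(vals.real)                     # active eigenvalue for the abscissa
mu, v = vals[k], V[:, k]
wvals, W = np.linalg.eig(X.T)                # left eigenvectors of X, via X^T
w = W[:, np.argmin(abs(wvals - mu))]
u = (w / (w @ v)).conj()                     # normalized so that u^* v = 1

grad_alpha = np.real(np.outer(u, v.conj()))  # grad alpha(X) = Re(u v^*)

D = rng.standard_normal((4, 4))              # random direction
h = 1e-6
alpha = lambda Y: max(np.linalg.eigvals(Y).real)
fd = (alpha(X + h * D) - alpha(X - h * D)) / (2 * h)
print(abs(fd - np.sum(grad_alpha * D)))      # ~0 for a simple active eigenvalue
```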


16.2.3. Numerical results for some SOF problems

We define a class of SOF problems as follows. Let F be a Grcar matrix [WRI 02] scaled by 1/2, defined as a Toeplitz matrix whose non-zeros consist of the number 0.5 on the main diagonal and first three superdiagonals, and the number −0.5 on the first subdiagonal. We start by setting its order n = 8 and p = 1, with G = [1, ..., 1]^T, and consider m ranging from 0 to 8, setting H to the matrix whose rows are the first m rows of the identity matrix. When m = 0, the matrices H and K are empty, so we interpret X = F + GKH as X = F. The top half of Figure 16.3 shows eigenvalues in the complex plane as small circles. The first panel, for m = 0, shows the eigenvalues of F. The other panels show, for each value of m between 1 and 8, the eigenvalues of a matrix X obtained from numerical minimization of the spectral radius of X = F + GKH over K ∈ R^{1×m}. The optimization was conducted by initializing the BFGS method at 100 different starting points whose entries were randomly generated from the standard normal distribution, taking up to 1,000 BFGS iterations for each run, with early termination when the line search failed to obtain a reduction in ρ. Then, X was defined as the final matrix with the smallest spectral radius over all 100 runs. The large circle on each panel has its center at the origin and radius ρ(X), so the small circles superimposed on the large circle show the active eigenvalues of X while those inside it correspond to inactive eigenvalues.
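The scaled Grcar matrix is easy to construct (a sketch, ours; we only assert that ρ(F) exceeds one, consistent with the need for stabilizing feedback):

```python
import numpy as np

def grcar(n, scale=0.5):
    """Scaled Grcar matrix: Toeplitz, with `scale` on the main diagonal and
    first three superdiagonals, and -`scale` on the first subdiagonal."""
    F = np.zeros((n, n))
    for k in range(4):                        # diagonals 0, 1, 2, 3
        F += scale * np.diag(np.ones(n - k), k)
    F += -scale * np.diag(np.ones(n - 1), -1)
    return F

F = grcar(8)
print(F[0, :5])                               # [0.5 0.5 0.5 0.5 0. ]
print(max(abs(np.linalg.eigvals(F))))         # rho(F) > 1: unstable before feedback
```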

In the first panel, we see that F has just one conjugate pair of active eigenvalues, and in the second panel, that after optimizing ρ over K ∈ R, there are two conjugate pairs of active eigenvalues whose modulus is slightly reduced below ρ(F). In the next panel, we see that by optimizing over K ∈ R^{1×2}, we are able to obtain a Schur stable matrix, with a real active eigenvalue as well as two active conjugate pairs. For m = 3, we see active multiple eigenvalues for the first time. Of course, these are not exactly multiple eigenvalues, but we observe one active conjugate pair of double eigenvalues coinciding up to plotting accuracy as well as another active conjugate pair of simple eigenvalues. Note that since non-derogatory eigenvalues are the most generic, the multiple eigenvalues obtained from optimizing spectral functions are expected to be non-derogatory, unless some special structure, such as matrix symmetry, is present. That is not the case here.

For m = 4, we observe an active conjugate pair of triple eigenvalues. For m = 5, we see a double active conjugate pair and a triple active real eigenvalue, leaving just one inactive eigenvalue. For m = 6, we have one quadruple active conjugate pair, with no inactive eigenvalues. As explained in section 16.2.1, since p = 1, these SOF spectral radius optimization problems are actually equivalent to root radius optimization problems with n − m affine constraints on the polynomial coefficients. Therefore, according to the conjecture in section 16.1.5, for each m, there should exist an optimal solution with at most n − m − 1 inactive roots. As n = 8, Figure 16.3 supports this conjecture, displaying 4, 3, 2, 2, 1 and 0 inactive roots for m = 1, 2, 3, 4, 5 and 6 respectively.


[Figure 16.3. Static output feedback via spectral radius optimization with n = 8, p = 1. Top: optimized eigenvalues. Bottom: sorted final spectral radius values for 100 runs of BFGS.]


The scaling of the axes changes for each m, but for m = 7, we have a dramatic change in scaling, with all 8 eigenvalues nearly coalescing to one real eigenvalue of multiplicity 8. In this case, the eigenvalue optimization problem is equivalent to a root radius optimization problem with just one affine constraint, whose solution can be computed explicitly by the results in section 16.1.2. Applying corollary 16.1, we find that the optimal polynomial is indeed of the form (λ − γ)⁸, and the asterisk marked on the panel for m = 7 marks the location of the multiple root γ. Finally, for m = 8, SOF reduces to the state feedback problem, and since the pair (F, G) is controllable, this can be solved by placing all the eigenvalues at the origin, so the optimal spectral radius is zero, as marked by the asterisk.


For both m = 7 and m = 8, we observe that BFGS is unable to reduce the spectral radius to close to its known optimal value. Instead, for m = 7, we find that the final eigenvalues are gathered on an arc of a circle with a radius somewhat greater than the optimal value of the spectral radius, and for m = 8, although the spectral radius is reduced by another 50% compared to m = 7, it is still not very small. These results are not surprising given the sensitivity of non-derogatory eigenvalues with such high multiplicity. For example, suppose X were an upper Jordan block of order 8 with a zero eigenvalue perturbed by placing the entry 10⁻⁸ in the lower left corner; then, the eigenvalues of X would lie equally spaced around the circle with radius 0.1. From this point of view, the results achieved by BFGS are remarkably good.
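The perturbed Jordan block example can be reproduced directly (ours, using NumPy): a perturbation of size 10⁻⁸ moves all eight eigenvalues to the circle of radius (10⁻⁸)^{1/8} = 0.1.

```python
import numpy as np

n = 8
J = np.diag(np.ones(n - 1), 1)   # upper Jordan block of order 8, eigenvalue 0
J[-1, 0] = 1e-8                  # tiny perturbation in the lower left corner

eigs = np.linalg.eigvals(J)
print(sorted(abs(eigs)))         # all moduli ~ (1e-8)^(1/8) = 0.1
```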

This naturally raises the question: how close to optimal are the results for m = 1 through m = 6? Although we do not know the optimal values in these cases, we can get a good idea by looking at the lower half of Figure 16.3. The first panel (m = 0) simply displays the constant value ρ(F). Each subsequent panel (m = 1 through m = 8) displays the sorted final spectral radius values found by the 100 randomly initiated runs of the BFGS algorithm applied to minimize ρ(F + GKH) over K ∈ R^{1×m}. For m = 1, we find that all 100 final values are the same at least to plotting accuracy. For m = 2, two locally optimal values are found: the lower one is less than one, indicating Schur stability, but the other is substantially greater than ρ(F). For m = 3, we see that several different locally optimal values were found by BFGS, each one corresponding to a "plateau" in the plotted values. The one found most frequently is the second smallest. It seems a reasonable guess that the smallest is globally optimal in this case. We see plateaus for m = 4, 5 and 6 too, indicating the existence of several locally minimal values, but, especially for m = 6, there is no clear plateau at the left end of the plot, indicating that the globally minimal value is not well approximated. Again, this is not surprising, given the sensitivity of the multiple eigenvalues that were found in these cases. For m = 7, we see no convincing plateaus, but nonetheless, the irregularity of the optimal value curve at the top right may indicate approximate convergence to local minimizers for some runs. Finally, for m = 8 we see no plateaus at all, which is to be expected since the pole placement theorem implies that minimizers of ρ must have all eigenvalues equal to zero.


[Figure 16.4. Static output feedback via spectral radius optimization with n = 15, p = 2. Top: optimized eigenvalues. Bottom: sorted final spectral radius values for 100 runs of BFGS.]


Now let us consider a larger problem, setting F to the Grcar matrix of order n = 15 scaled by 0.5 and setting G to the 15 × 2 matrix with the first column [1, 1, ..., 1]^T and the second column [1, −1, 1, ..., −1, 1]^T. Equation [16.1] no longer applies, so the eigenvalue optimization problem no longer reduces to a polynomial root optimization problem with affine constraints on the coefficients. The top part of Figure 16.4 shows the eigenvalues obtained by using BFGS to minimize the spectral radius, for m ranging from 1 to 8. Every panel indicates the presence of active multiple eigenvalues, even for m = 1, for which there is an active double conjugate pair. For m = 8, we know that the optimal spectral radius is zero for generic (F, G, H) as mp > n. Turning to the sorted final function values in the lower half of Figure 16.4, there is just one locally minimal value for m = 1, but there are several clear plateaus of locally optimal values for m = 2. However, BFGS is unable to approximate the optimal values accurately for m ≥ 3, no doubt because of the high eigenvalue multiplicities, so it is hard to judge how many local minimizers were approximated.


16.2.4. The Diaconis-Holmes-Neal Markov chain

An eigenvalue optimization problem with a very different character is provided by the analysis of a non-reversible Markov chain for Monte Carlo simulation [DIA 00]. For ξ ∈ [0, 1], the associated doubly stochastic transition matrix A(ξ) ∈ R^{2n×2n} is

[2n × 2n doubly stochastic matrix A(ξ), with non-zero entries ξ and 1 − ξ; the display did not survive extraction.]

The rate of convergence of the chain is determined by

ρ̂(A(ξ)) = max {|λ| : det(λI − A(ξ)) = 0, λ ∈ C, λ ≠ 1}.

Diaconis et al. [DIA 00] showed that for ξ = 1/n, the non-reversible Markov chain defined by this transition matrix reaches a stationary state in O(n) steps, compared to O(n²) steps for a similar reversible chain. (Non-reversible chains have non-symmetric transition matrices, while reversible chains have symmetric transition matrices.)


We call this quantity the reduced spectral radius, as it is the spectral radius after the eigenvalue one is removed. We call an eigenvalue active if its modulus equals the reduced spectral radius and inactive if it is smaller (the eigenvalue one is neither active nor inactive). Using formulas for the eigenvalues of A(ξ) given in [DIA 00], it is easy to prove that ρ̂(A(ξ)) is minimized over ξ ∈ [0, 1] by

ξ̂ = sin(π/n) / (1 + sin(π/n)) > 1/n.

For ξ < ξ̂, there are n − 1 active conjugate pairs and one inactive eigenvalue. For ξ = ξ̂, two conjugate pairs coalesce to two double real non-derogatory eigenvalues, resulting in two active double real eigenvalues and n − 3 active simple conjugate pairs. For ξ > ξ̂, the double eigenvalues each split into two real eigenvalues, increasing ρ̂ by a term proportional to |ξ − ξ̂|^{1/2}, with two active simple eigenvalues. The eigenvalues of A(ξ) are shown in Figure 16.5(a), along with a circle centered at the origin with radius ρ̂(A(ξ)), for n = 10 and for ξ = 1/n < ξ̂, ξ = ξ̂ and ξ > ξ̂. Plots of ρ̂(A(ξ)) are shown in Figure 16.5(b) for ξ ∈ [0, 1] for both n = 10 and n = 100.

[Figure 16.5. a) Eigenvalues of the 2n × 2n transition matrix A(ξ) for n = 10, along with circles of radius ρ̂(A(ξ)) centered at the origin. Small circles: eigenvalues when ξ = 1/n (with n − 1 active complex conjugate pairs). Asterisks: eigenvalues when ξ = ξ̂ (two conjugate pairs have coalesced to double real eigenvalues). Small squares: eigenvalues when ξ > ξ̂ (the double eigenvalues have each split into a real pair, with ρ̂ increasing rapidly). b) Plots of ρ̂(A(ξ)) for ξ ∈ [0, 1], for n = 10 and n = 100. Note that ρ̂(A(ξ̂)) < ρ̂(A(1/n)) and that ρ̂(A(ξ̂)) approaches one as n increases.]

It is not surprising that with one free parameter, we can only make one pair of eigenvalues coalesce. So, let us change the problem to have multiple parameters: for x = [x_1, ..., x_n]^T, x_j ∈ [0, 1], j = 1, ..., n, we define the transition matrix A(x) ∈ R^{2n×2n} as

[2n × 2n doubly stochastic matrix A(x), with non-zero entries x_j and 1 − x_j in place of the ξ and 1 − ξ of A(ξ); the display did not survive extraction.]

This matrix is still doubly stochastic. With n free parameters instead of one, can we further reduce ρ̂ by allowing coalescence of more eigenvalues?

Somewhat surprisingly, it seems that the answer is no. The vector x̂ = [ξ̂, ..., ξ̂]^T appears to be optimal. This statement is based on two arguments. The first argument is numerical: when we applied the gradient sampling algorithm [BUR 05a] mentioned above, initialized at randomly generated points near x̂, we repeatedly obtained convergence to x̂. The second argument is theoretical, using variational analysis. We have established in [GAD 07] that x̂ satisfies a necessary condition for local optimality, and, furthermore, that if we remove some redundancy in the parametrization by setting x_j = x_{n−1−j} for j = 1, 2, ..., ⌊(n − 1)/2⌋ and x_{n−1} = x_n, and make some assumptions that seem reasonable, we find that x̂ satisfies a sufficient condition for local optimality. These optimality conditions are not standard as the reduced spectral radius is not differentiable, in fact not even Lipschitz at x̂, because of the presence of the active double eigenvalues. The analysis of this example is quite complicated, and so we do not give further details here, but we give some references. The analysis in [GAD 07] is based on the variational analysis of spectral functions given in [BUR 01a], which is in turn built on the variational analysis of the polynomial root abscissa and root radius already mentioned in section 16.1.6, as well as subsequent work by Lewis [LEW 03]. An accessible introduction to this subject may be found in [BUR 01c]. This variational analysis is applicable to the Markov chain problem because the active eigenvalues at the candidate minimizer are all non-derogatory (two double real eigenvalues and n − 3 simple conjugate pairs).

16.2.5. Active derogatory eigenvalues


Although non-derogatory eigenvalues are the most generic, the structure present in a matrix family may lead to local optimizers with active derogatory eigenvalues. Two reduced spectral radius optimization problems of this sort, arising in the design of surface subdivision schemes with applications in computer graphics, are studied in [GRU 11, Chapters 2 and 4]. In both problems, several of the largest eigenvalues of a matrix family A(x) are fixed and the largest of the moduli of the remaining eigenvalues is to be minimized. In one of these examples, the optimal matrix A(x) has one active zero eigenvalue associated with three Jordan blocks, respectively,

Page 24: SpectralAnalysis, Stability andBifurcations Systems

372 Nonlinear Physical Systems

having order 2, 1 and 1. In the second example, A(x) again apparently has one activezero eigenvalue but with four Jordan blocks , respectively, having order 5, 3, 2 and 2.In derogatory cases like this, variational analysis of the spectral abscissa or spectralradius becomes very difficult even for simple examples. The special case with twoJordan blocks with order 2 and 1, respectively, is analyzed in detail in [GRU 13].
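The Jordan structure of such a zero eigenvalue can be read off from matrix ranks: the number of Jordan blocks for the eigenvalue 0 is n - rank(A), and the number of blocks of order at least two is rank(A) - rank(A²). A small numpy sketch, using an illustrative nilpotent matrix with blocks of orders 2, 1 and 1 (not the actual matrix from [GRU 11]):

```python
import numpy as np

# Illustrative matrix with a zero eigenvalue having three Jordan
# blocks of orders 2, 1 and 1: the direct sum J2 + [0] + [0].
J2 = np.array([[0.0, 1.0], [0.0, 0.0]])  # one 2x2 nilpotent Jordan block
A = np.block([
    [J2,               np.zeros((2, 2))],
    [np.zeros((2, 2)), np.zeros((2, 2))],
])

rank = np.linalg.matrix_rank
n = A.shape[0]
num_blocks = n - rank(A)                  # geometric multiplicity of 0
blocks_of_order_ge_2 = rank(A) - rank(A @ A)
print(num_blocks, blocks_of_order_ge_2)   # 3 blocks, one of order >= 2
print("derogatory:", num_blocks > 1)      # more than one block => derogatory
```

An eigenvalue is non-derogatory exactly when it has a single Jordan block, so num_blocks > 1 here flags the derogatory case that makes the variational analysis hard.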

16.3. Concluding remarks

We have seen that the optimization problems discussed in this chapter typically lead to polynomials with multiple roots or matrices with non-derogatory multiple eigenvalues, and we have observed that the higher their multiplicity, the more sensitive these multiple roots or eigenvalues are to small perturbations; furthermore, computing these minimizers numerically is difficult. Instead of optimizing eigenvalues, we could consider optimizing pseudospectra. The ε-pseudospectrum of a matrix A, denoted σ_ε(A), is the set of points in the complex plane that are eigenvalues of matrices within ε in norm of the given matrix [TRE 05]. Fortunately, for the 2-norm, there is a more convenient equivalent definition using the singular value decomposition: σ_ε(A) is the set of points z ∈ ℂ for which the smallest singular value of A - zI is no greater than ε. Then, it is natural to define the pseudospectral radius ρ_ε(A) and pseudospectral abscissa α_ε(A), respectively, as the largest of the moduli and the largest of the real parts of the points in σ_ε(A). Algorithms to compute these functions are given in [MEN 05, BUR 03b]. It turns out that these functions are locally Lipschitz with respect to both A and ε [LEW 08, GUR 12], although the pseudospectrum itself is not. For this reason, neglecting the cost of computing the pseudospectral radius and abscissa, local optimization of these functions is less difficult than for the spectral radius and abscissa: gradient sampling is known to converge to non-smooth stationary points in the locally Lipschitz case, and while no such result is known for BFGS, the boundedness of the gradients in the Lipschitz case allows for the use of a practical termination condition described in [LEW 13, section 6.3], instead of running the BFGS iteration until the line search fails or a maximum iteration count is exceeded, as described in section 16.2.3.
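The singular value characterization gives an easy, if very inefficient, way to approximate these quantities on a grid. The following numpy sketch estimates the pseudospectral abscissa this way (the function names and brute-force search are ours for illustration; the criss-cross algorithms of [MEN 05, BUR 03b] are what one would use in practice):

```python
import numpy as np

def smin(A, z):
    """Smallest singular value of A - zI."""
    return np.linalg.svd(A - z * np.eye(A.shape[0]), compute_uv=False)[-1]

def pseudo_abscissa(A, eps, re, im, m=121):
    """Crude grid estimate of the eps-pseudospectral abscissa:
    the largest Re(z) over grid points z with smin(A - zI) <= eps."""
    best = -np.inf
    for x in np.linspace(re[0], re[1], m):
        for y in np.linspace(im[0], im[1], m):
            if smin(A, x + 1j * y) <= eps:
                best = max(best, x)
    return best

# A nonnormal matrix whose only eigenvalue is -1 (a defective double
# eigenvalue): its 0.1-pseudospectrum nevertheless reaches well to the
# right of -1, reflecting the eigenvalue's sensitivity.
A = np.array([[-1.0, 5.0], [0.0, -1.0]])
print(pseudo_abscissa(A, 0.1, (-3.0, 3.0), (-3.0, 3.0)))
```

For a normal matrix the 0.1-pseudospectral abscissa would be the spectral abscissa plus 0.1; the gap between the two measures is precisely what robust stability optimization cares about.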

See [BUR 03a] for some examples of optimizing the pseudospectral abscissa for some SOF problems arising in applications. In [BUR 02], we used a different approach, replacing the spectral abscissa by a robust alternative that exploits the well-known Lyapunov characterization of stability, but this required solving nonconvex optimization problems in a much larger variable space. Finally, we mention that, using many of the ideas discussed in this chapter, we have built a software package for solving practical problems that arise in controller design [BUR 06a].⁴

4 www.cs.nyu.edu/overton/software/hifoo.

Stability Optimization for Polynomials and Matrices 373

16.4. Acknowledgments

The author thanks O. Kirillov for the kind invitation to the Banff workshop where this chapter was presented. He also thanks J. Willems for providing references to papers on generic solutions to the SOF problem when mp < n. Financial support from the U.S. National Science Foundation under Grant DMS-1317205 is also gratefully acknowledged.

16.5. Bibliography

[ARN 71] ARNOLD V.I., "On matrices depending on parameters", Russian Mathematical Surveys, vol. 26, pp. 29-43, 1971.

[BLO 95] BLONDEL V., GEVERS M., LINDQUIST A., "Survey on the state of systems and control", European Journal of Control, vol. 1, pp. 5-23, 1995.

[BLO 10] BLONDEL V.D., GURBUZBALABAN M., MEGRETSKI A., OVERTON M.L., "Explicit solutions for root optimization of a polynomial family", 49th IEEE Conference on Decision and Control (CDC), IEEE, pp. 485-488, 2010.

[BLO 12] BLONDEL V.D., GURBUZBALABAN M., MEGRETSKI A., OVERTON M.L., "Explicit solutions for root optimization of a polynomial family with one affine constraint", IEEE Transactions on Automatic Control, vol. 57, pp. 3078-3089, 2012.

[BOR 05] BORWEIN J.M., LEWIS A.S., Convex Analysis and Nonlinear Optimization: Theory and Examples, 2nd ed., Springer, New York, 2005.

[BUR 01a] BURKE J.V., OVERTON M.L., "Variational analysis of non-Lipschitz spectral functions", Mathematical Programming, vol. 90, pp. 317-352, 2001.

[BUR 01b] BURKE J.V., OVERTON M.L., "Variational analysis of the abscissa map for polynomials", SIAM Journal on Control and Optimization, vol. 39, pp. 1651-1676, 2001.

[BUR 01c] BURKE J.V., LEWIS A.S., OVERTON M.L., "Optimizing matrix stability", Proceedings of the American Mathematical Society, vol. 129, pp. 1635-1642, 2001.

[BUR 02] BURKE J.V., LEWIS A.S., OVERTON M.L., "Two numerical methods for optimizing matrix stability", Linear Algebra and its Applications, vol. 351-352, pp. 117-145, 2002.

[BUR 03a] BURKE J.V., LEWIS A.S., OVERTON M.L., "A nonsmooth, nonconvex optimization approach to robust stabilization by static output feedback and low-order controllers", Fourth IFAC Symposium on Robust Control Design, Milan, 2003.

[BUR 03b] BURKE J.V., LEWIS A.S., OVERTON M.L., "Robust stability and a criss-cross algorithm for pseudospectra", IMA Journal of Numerical Analysis, vol. 23, pp. 359-375, 2003.

[BUR 05a] BURKE J.V., LEWIS A.S., OVERTON M.L., "A robust gradient sampling algorithm for nonsmooth, nonconvex optimization", SIAM Journal on Optimization, vol. 15, pp. 751-779, 2005.


[BUR 05b] BURKE J.V., LEWIS A.S., OVERTON M.L., "Variational analysis of functions of the roots of polynomials", Mathematical Programming, vol. 104, pp. 263-292, 2005.

[BUR 06a] BURKE J.V., HENRION D., LEWIS A.S., OVERTON M.L., "HIFOO - a MATLAB package for fixed-order controller design and H∞ optimization", Fifth IFAC Symposium on Robust Control Design, Toulouse, 2006.

[BUR 06b] BURKE J.V., HENRION D., LEWIS A.S., OVERTON M.L., "Stabilization via nonsmooth, nonconvex optimization", IEEE Transactions on Automatic Control, vol. 51, pp. 1760-1769, 2006.

[BUR 12] BURKE J.V., EATON J., "On the subdifferential regularity of max root functions for polynomials", Nonlinear Analysis: Theory, Methods and Applications, vol. 75, pp. 1168-1187, 2012.

[CHE 79a] CHEN R., "On the problem of direct output feedback stabilization", Proceedings of MTNS, pp. 412-414, 1979.

[CHE 79b] CHEN R., Output feedback stabilization of linear systems, PhD Thesis, University of Florida, 1979.

[CLA 83] CLARKE F.H., Optimization and Nonsmooth Analysis, John Wiley, New York, 1983. Reprinted by SIAM, Philadelphia, PA, 1990.

[DIA 00] DIACONIS P., HOLMES S., NEAL R.M., "Analysis of a nonreversible Markov chain sampler", Annals of Applied Probability, vol. 10, pp. 726-752, 2000.

[DOR 99] DORATO P., Analytic Feedback System Design: An Interpolation Approach, Thomson Publishing Services, UK, 1999.

[GAD 07] GADE K.K., OVERTON M.L., "Optimizing the asymptotic convergence rate of the Diaconis-Holmes-Neal sampler", Advances in Applied Mathematics, vol. 38, no. 3, pp. 382-403, 2007.

[GUR 12] GURBUZBALABAN M., OVERTON M.L., "On Nesterov's nonsmooth Chebyshev-Rosenbrock functions", Nonlinear Analysis: Theory, Methods and Applications, vol. 75, pp. 1282-1289, 2012.

[GRU 11] GRUNDEL S., Eigenvalue optimization in C² subdivision and boundary subdivision, PhD Thesis, New York University, 2011. Available at http://cs.nyu.edu/overton/phdtheses/sara.pdf.

[GRU 13] GRUNDEL S., OVERTON M.L., "Variational analysis of the spectral abscissa at a matrix with a nongeneric multiple eigenvalue", Set-Valued and Variational Analysis, 2013.

[HER 77] HERMANN R., MARTIN C.F., "Applications of algebraic geometry to system theory, Part I", IEEE Transactions on Automatic Control, vol. 22, pp. 19-25, 1977.

[HEN 06] HENRION D., OVERTON M.L., Maximizing the closed loop asymptotic decay rate for the two-mass-spring control problem, Technical Report 06342, LAAS-CNRS, March 2006. Available at http://homepages.laas.fr/henrion/Papers/massspring.pdf.

[HOR 85] HORN R.A., JOHNSON C.R., Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.


[HOR 13] HORN R.A., JOHNSON C.R., Matrix Analysis, 2nd ed., Cambridge University Press, Cambridge, UK, 2013.

[KIM 94] KIMURA H., "Pole assignment by output feedback: a longstanding open problem", 33rd IEEE Conference on Decision and Control (CDC), IEEE, pp. 2101-2105, 1994.

[LEW 03] LEWIS A.S., "The mathematics of eigenvalue optimization", Mathematical Programming, vol. 97, nos. 1-2, Ser. B, pp. 155-176, ISMP, Copenhagen, 2003.

[LEW 08] LEWIS A.S., PANG C.H.J., "Variational analysis of pseudospectra", SIAM Journal on Optimization, vol. 19, pp. 1048-1072, 2008.

[LEW 13] LEWIS A.S., OVERTON M.L., "Nonsmooth optimization via quasi-Newton methods", Mathematical Programming, vol. 141, pp. 135-163, 2013.

[MAR 66] MARDEN M., Geometry of Polynomials, American Mathematical Society, 1966.

[MEN 05] MENGI E., OVERTON M.L., "Algorithms for the computation of the pseudospectral radius and the numerical radius of a matrix", IMA Journal of Numerical Analysis, vol. 25, pp. 648-669, 2005.

[NOC 06] NOCEDAL J., WRIGHT S.J., Numerical Optimization, 2nd ed., Springer, New York, 2006.

[RAN 89] RANTZER A., "Equivalence between stability of partial realizations and feedback stabilization - applications to reduced order stabilization", Linear Algebra and its Applications, vol. 122/123/124, pp. 641-653, 1989.

[ROC 70] ROCKAFELLAR R.T., Convex Analysis, Princeton University Press, Princeton, 1970.

[ROC 98] ROCKAFELLAR R.T., WETS R.J.B., Variational Analysis, Springer, New York, 1998.

[TRE 05] TREFETHEN L.N., EMBREE M., Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators, Princeton University Press, 2005.

[WAN 96] WANG X.A., "Grassmannian, central projection, and output feedback pole assignment of linear systems", IEEE Transactions on Automatic Control, vol. 41, no. 6, pp. 786-794, 1996.

[WIL 97] WILLEMS J.C., "Generic eigenvalue assignability by real memoryless output feedback made simple", in PAULRAJ A., ROYCHOWDHURY V., SCHAPER C.D. (eds), Communications, Computation, Control and Signal Processing, Kluwer, pp. 343-354, 1997.

[WON 85] WONHAM W.M., Linear Multivariable Control: A Geometric Approach, 3rd ed., Springer-Verlag, 1985.

[WRI 02] WRIGHT T.G., "EigTool: a graphical tool for nonsymmetric eigenproblems", 2002. Available at http://web.comlab.ox.ac.uk/pseudospectra/eigtool/.