
Chapter 1

ANALYSIS OF LINEAR SYSTEMS IN STATE SPACE FORM

This course focuses on the state space approach to the analysis and design of control systems. The idea of state of a system dates back to classical physics. Roughly speaking, the state of a system is that quantity which, together with knowledge of future inputs to the system, determines the future behaviour of the system. For example, in mechanics, knowledge of the current position and velocity (equivalently, momentum) of a particle, and the future forces acting on it, determines the future trajectory of the particle. Thus, the position and the velocity together qualify as the state of the particle.

In the case of a continuous-time linear system, its state space representation corresponds to a system of first order differential equations describing its behaviour. The fact that this description fits the idea of state will become evident after we discuss the solution of linear systems of differential equations.

To illustrate how we can obtain state equations from simple differential equation models, consider the second order linear differential equation

$$\ddot{y}(t) = u(t) \qquad (1.1)$$

This is basically Newton's law $F = ma$ for a particle moving on a line, where we have used $u$ to denote $\frac{F}{m}$. It also corresponds to the transfer function $\frac{1}{s^2}$ from $u$ to $y$. Let

$$x_1(t) = y(t)$$
$$x_2(t) = \dot{y}(t)$$

Set $x^T(t) = [x_1(t)\ x_2(t)]$. Then we can write

$$\frac{dx(t)}{dt} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} x(t) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(t) \qquad (1.2)$$

which is a system of first order linear differential equations. The above construction can be readily extended to higher order linear differential equations of the form

$$y^{(n)}(t) + a_1 y^{(n-1)}(t) + \cdots + a_{n-1}\dot{y}(t) + a_n y(t) = u(t)$$

Define $x(t) = \begin{bmatrix} y & \frac{dy}{dt} & \cdots & \frac{d^{n-1}y}{dt^{n-1}} \end{bmatrix}^T$. It is straightforward to check that $x(t)$ satisfies the differential equation

$$\frac{dx(t)}{dt} = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \\ -a_n & -a_{n-1} & \cdots & -a_2 & -a_1 \end{pmatrix} x(t) + \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} u(t)$$


On the other hand, it is not quite obvious how we should define the state $x$ for a differential equation of the form

$$\ddot{y}(t) + 3\dot{y}(t) + 2y(t) = \dot{u}(t) + 4u(t) \qquad (1.3)$$

The general problem of connecting state space representations with transfer function representations will be discussed later in this chapter. We shall only consider differential equations with constant coefficients in this course, although many results can be extended to differential equations whose coefficients vary with time.

Once we have obtained a state equation, we need to solve it to determine the state. For example, in the case of Newton's law (1.1), solving the corresponding state equation (1.2) gives the position and velocity of the particle. We now turn our attention, in the next few sections, to the solution of state equations.

    1.1 The Transition Matrix for Linear Differential Equations

We begin by discussing the solution of the linear constant coefficient homogeneous differential equation

$$\dot{x}(t) = Ax(t), \quad x(t) \in \mathbb{R}^n, \quad x(0) = x_0 \qquad (1.4)$$

Very often, for simplicity, we shall suppress the time dependence in the notation. We shall now develop a systematic method for solving (1.4) using a matrix-valued function called the transition matrix.

It is known from the theory of ordinary differential equations that a linear system of differential equations of the form

$$\dot{x}(t) = Ax(t), \quad t \geq 0, \quad x(0) = x_0 \qquad (1.5)$$

has a unique solution passing through $x_0$ at $t = 0$. To obtain the explicit solution for $x$, recall that the scalar linear differential equation

$$\dot{y} = ay, \quad y(0) = y_0$$

has the solution

$$y(t) = e^{at}y_0$$

where the scalar exponential function $e^{at}$ has the infinite series expansion

$$e^{at} = \sum_{k=0}^{\infty} \frac{a^k t^k}{k!} \qquad (1.6)$$

and satisfies

$$\frac{d}{dt}e^{at} = ae^{at}$$

By analogy, let us define the matrix exponential

$$e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}$$


where $A^0$ is defined to be $I$, the identity matrix (analogous to $a^0 = 1$). We can then define a function $e^{At}$, which we call the transition matrix of (1.4), by

$$e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} \qquad (1.7)$$

The infinite sum can be shown to converge always, and its derivative can be evaluated by differentiating the infinite sum term by term. The analogy with the solution of the scalar equation suggests that $e^{At}x_0$ is the natural candidate for the solution of (1.4).
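As a sanity check on the definition, the series (1.7) can be summed numerically and compared against a library implementation. The sketch below truncates the series after a fixed number of terms (the helper name expm_series is ours); scipy.linalg.expm is the robust routine one would use in practice:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, terms=30):
    """Approximate e^{At} by truncating the defining series (1.7).
    Adequate for small ||At||; not a numerically robust general method."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A * (t / k)   # builds A^k t^k / k! incrementally
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # an arbitrary test matrix
print(np.allclose(expm_series(A, 0.5), expm(A * 0.5)))  # True
```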

    1.2 Properties of the Transition Matrix and Solution of Linear ODEs

We now prove several useful properties of the transition matrix.

(Property 1) $\quad e^{At}\big|_{t=0} = I \qquad (1.8)$

    Follows immediately from the definition of the transition matrix (1.7).

(Property 2) $\quad \dfrac{d}{dt}e^{At} = Ae^{At} \qquad (1.9)$

To prove this, differentiate the infinite series in (1.7) term by term to get

$$\frac{d}{dt}e^{At} = \frac{d}{dt}\sum_{k=0}^{\infty} \frac{A^k t^k}{k!} = \sum_{k=1}^{\infty} \frac{A^k t^{k-1}}{(k-1)!} = A\sum_{k=0}^{\infty} \frac{A^k t^k}{k!} = Ae^{At}$$

Properties (1) and (2) together can be considered the defining equations of the transition matrix. In other words, $e^{At}$ is the unique solution of the linear matrix differential equation

$$\frac{dF}{dt} = AF, \quad \text{with } F(0) = I \qquad (1.10)$$

To see this, note that (1.10) is a linear differential equation. Hence there exists a unique solution to the initial value problem. Properties (1) and (2) show that $e^{At}$ is a solution to (1.10). By uniqueness, $e^{At}$ is the only solution. This shows (1.10) can be considered the defining equation for $e^{At}$.

(Property 3) If $A$ and $B$ commute, i.e. $AB = BA$, then

$$e^{(A+B)t} = e^{At}e^{Bt}$$

Proof: Consider $e^{At}e^{Bt}$:

$$\frac{d}{dt}(e^{At}e^{Bt}) = Ae^{At}e^{Bt} + e^{At}Be^{Bt} \qquad (1.11)$$


If $A$ and $B$ commute, $e^{At}B = Be^{At}$ so that the R.H.S. of (1.11) becomes $(A+B)e^{At}e^{Bt}$. Furthermore, at $t = 0$, $e^{At}e^{Bt} = I$. Hence $e^{At}e^{Bt}$ satisfies the same differential equation as well as initial condition as $e^{(A+B)t}$. By uniqueness, they must be equal.

Setting $t = 1$ gives

$$e^{A+B} = e^A e^B \quad \text{whenever } AB = BA$$

In particular, since $At$ commutes with $As$ for all $t$, $s$, we have

$$e^{A(t+s)} = e^{At}e^{As} \quad \forall\, t, s$$

This is often referred to as the semigroup property.

(Property 4) $\quad (e^{At})^{-1} = e^{-At}$

Proof: We have

$$e^{At}e^{-At} = e^{A(t-t)} = I$$

Since $e^{At}$ is a square matrix, this shows that (Property 4) is true. Thus, $e^{At}$ is invertible for all $t$.

Using properties (1) and (2), we immediately verify that

$$x(t) = e^{At}x_0 \qquad (1.12)$$

satisfies the differential equation

$$\dot{x}(t) = Ax(t) \qquad (1.13)$$

and the initial condition $x(0) = x_0$. Hence it is the unique solution.

More generally,

$$x(t) = e^{A(t-t_0)}x_0, \quad t \geq t_0 \qquad (1.14)$$

is the unique solution to (1.13) with initial condition $x(t_0) = x_0$.

1.3 Computing $e^{At}$: Linear Algebraic Methods

The above properties of $e^{At}$ do not provide a direct way for its computation. In this section, we develop linear algebraic methods for computing $e^{At}$.

Let us first consider the special case where $A$ is a diagonal matrix given by

$$A = \begin{pmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{pmatrix}$$

The $\lambda_i$'s are not necessarily distinct and they may be complex, provided we interpret (1.13) as a differential equation in $\mathbb{C}^n$. Since $A$ is diagonal,

$$A^k = \begin{pmatrix} \lambda_1^k & & & 0 \\ & \lambda_2^k & & \\ & & \ddots & \\ 0 & & & \lambda_n^k \end{pmatrix}$$


If we directly evaluate the sum of the infinite series in the definition of $e^{At}$ in (1.7), we find for example that the $(1,1)$ entry of $e^{At}$ is given by

$$\sum_{k=0}^{\infty} \frac{\lambda_1^k t^k}{k!} = e^{\lambda_1 t}$$

Hence, in the diagonal case,

$$e^{At} = \begin{pmatrix} e^{\lambda_1 t} & & & 0 \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ 0 & & & e^{\lambda_n t} \end{pmatrix} \qquad (1.15)$$

We can also interpret this result by examining the structure of the state equation. In this case (1.13) is completely decoupled into $n$ differential equations

$$\dot{x}_i(t) = \lambda_i x_i(t), \quad i = 1, \ldots, n \qquad (1.16)$$

so that

$$x_i(t) = e^{\lambda_i t}x_{0i} \qquad (1.17)$$

where $x_{0i}$ is the $i$th component of $x_0$. It follows that

$$x(t) = \begin{pmatrix} e^{\lambda_1 t} & & & 0 \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ 0 & & & e^{\lambda_n t} \end{pmatrix} x_0 \qquad (1.18)$$

Since $x_0$ is arbitrary, $e^{At}$ must be given by (1.15).

We now extend the above results to a general $A$. We examine 2 cases.

    Case I: A Diagonalizable

By definition, $A$ diagonalizable means there exists a nonsingular matrix $P$ (in general complex) such that $P^{-1}AP$ is a diagonal matrix, say

$$P^{-1}AP = \Lambda = \begin{pmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{pmatrix} \qquad (1.19)$$

where the $\lambda_i$'s are the eigenvalues of the $A$ matrix. Then

$$e^{At} = e^{P\Lambda P^{-1}t} \qquad (1.20)$$

Now notice that for any matrix $B$ and nonsingular matrix $P$,

$$(PBP^{-1})^2 = PBP^{-1}PBP^{-1} = PB^2P^{-1}$$

so that in general

$$(PBP^{-1})^k = PB^kP^{-1} \qquad (1.21)$$

We then have the following:

$$e^{PBP^{-1}t} = Pe^{Bt}P^{-1} \quad \text{for any } n \times n \text{ matrix } B \qquad (1.22)$$


Proof:

$$e^{PBP^{-1}t} = \sum_{k=0}^{\infty} \frac{(PBP^{-1})^k t^k}{k!} = \sum_{k=0}^{\infty} P\frac{B^k t^k}{k!}P^{-1} = Pe^{Bt}P^{-1}$$

Combining (1.15), (1.20) and (1.22), we find that whenever $A$ is diagonalizable,

$$e^{At} = P\begin{pmatrix} e^{\lambda_1 t} & & & 0 \\ & e^{\lambda_2 t} & & \\ & & \ddots & \\ 0 & & & e^{\lambda_n t} \end{pmatrix}P^{-1} \qquad (1.23)$$

From linear algebra, we know that a matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. Each of the following two conditions is sufficient, though not necessary, to guarantee diagonalizability:

    (i) A has distinct eigenvalues

    (ii) A is a symmetric matrix

In these two cases, $A$ has a complete set of $n$ independent eigenvectors $v_1, v_2, \ldots, v_n$, with corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. Write

$$P = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \qquad (1.24)$$

Then

$$P^{-1}AP = \Lambda = \begin{pmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{pmatrix} \qquad (1.25)$$

The equations (1.24), (1.25), and (1.23) together provide a linear algebraic method for computing $e^{At}$. From (1.23), we also see that the dynamic behaviour of $e^{At}$ is completely characterized by the behaviour of $e^{\lambda_i t}$, $i = 1, 2, \ldots, n$.
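The recipe (1.24), (1.25), (1.23) translates directly into a few lines of numpy. The sketch below (our helper name expm_by_diagonalization) assumes $A$ is diagonalizable and checks the result against scipy's expm on the matrix of Example 1 that follows:

```python
import numpy as np
from scipy.linalg import expm

def expm_by_diagonalization(A, t):
    """e^{At} = P diag(e^{lambda_i t}) P^{-1}, valid when A is diagonalizable.
    Eigenvalues may be complex; the result is real when A is real."""
    lam, P = np.linalg.eig(A)            # columns of P are eigenvectors
    D = np.diag(np.exp(lam * t))
    return (P @ D @ np.linalg.inv(P)).real

A = np.array([[1.0, 2.0], [-1.0, 4.0]])  # Example 1 below: eigenvalues 2 and 3
print(np.allclose(expm_by_diagonalization(A, 0.7), expm(A * 0.7)))  # True
```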

    Example 1:

$$A = \begin{pmatrix} 1 & 2 \\ -1 & 4 \end{pmatrix}$$

The characteristic equation is given by

$$\det(sI - A) = s^2 - 5s + 6 = 0$$

giving eigenvalues 2, 3. For the eigenvalue 2, its eigenvector $v$ satisfies

$$(2I - A)v = \begin{pmatrix} 1 & -2 \\ 1 & -2 \end{pmatrix}v = 0$$


We can choose $v = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$. For the eigenvalue 3, its eigenvector $w$ satisfies

$$(3I - A)w = \begin{pmatrix} 2 & -2 \\ 1 & -1 \end{pmatrix}w = 0$$

We can choose $w = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. The diagonalizing matrix $P$ is given by

$$P = \begin{bmatrix} v & w \end{bmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \quad P^{-1} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$$

Finally

$$e^{At} = Pe^{\Lambda t}P^{-1} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} e^{2t} & 0 \\ 0 & e^{3t} \end{pmatrix}\begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 2e^{2t} - e^{3t} & -2e^{2t} + 2e^{3t} \\ e^{2t} - e^{3t} & -e^{2t} + 2e^{3t} \end{pmatrix}$$

Example 2:

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 0 & -1 \end{pmatrix}$$

Owing to the form of $A$, we see right away that the eigenvalues are 1, 2 and $-1$, so that

$$\Lambda = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -1 \end{pmatrix}$$

The eigenvector corresponding to the eigenvalue 2 is given by $\begin{bmatrix} 0 & 1 & 0 \end{bmatrix}^T$, while that of $-1$ is $\begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T$. To find the eigenvector for the eigenvalue 1, we have

$$\begin{pmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 0 & -1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}$$

so that $\begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix}$ is an eigenvector.

    is an eigenvector. The diagonalizing matrix P is then

    P =

    2 0 02 1 0

    1 0 1

    P1 =

    12 0 01 1 0

    12 0 1


$$e^{At} = \begin{pmatrix} 2 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} e^t & 0 & 0 \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{-t} \end{pmatrix}\begin{pmatrix} \frac{1}{2} & 0 & 0 \\ 1 & 1 & 0 \\ -\frac{1}{2} & 0 & 1 \end{pmatrix} = \begin{pmatrix} e^t & 0 & 0 \\ -e^t + e^{2t} & e^{2t} & 0 \\ \frac{1}{2}(e^t - e^{-t}) & 0 & e^{-t} \end{pmatrix}$$

Modal Decomposition:

We can give a dynamical interpretation to the above results. Suppose $A$ has distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ so that it has a set of linearly independent eigenvectors $v_1, \ldots, v_n$. If $x_0 = v_j$, then

$$x(t) = e^{At}x_0 = e^{At}v_j = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!}v_j = \sum_{k=0}^{\infty} \frac{\lambda_j^k t^k}{k!}v_j = e^{\lambda_j t}v_j$$

This means that if we start along an eigenvector of $A$, the solution $x$ will stay in the direction of the eigenvector, with length being stretched or shrunk by $e^{\lambda_j t}$. In general, because the $v_i$'s are linearly independent, an arbitrary initial condition $x_0$ can be expressed as

$$x_0 = \sum_{j=1}^{n} \alpha_j v_j = P\alpha \qquad (1.26)$$

where

$$P = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \quad \text{and} \quad \alpha = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix}$$

Using this representation, we have

$$x(t) = e^{At}x_0 = e^{At}\sum_{j=1}^{n} \alpha_j v_j = \sum_{j=1}^{n} \alpha_j e^{At}v_j = \sum_{j=1}^{n} \alpha_j e^{\lambda_j t}v_j \qquad (1.27)$$

    so that x(t) is expressible as a (time-varying) linear combination of the eigenvectors of A.

We can connect this representation of $x(t)$ with the representation of $e^{At}$ in (1.23). Note that the $P$ in (1.26) is the diagonalizing matrix for $A$, i.e.

$$P^{-1}AP = \Lambda$$

and

$$e^{At}x_0 = Pe^{\Lambda t}P^{-1}x_0 = Pe^{\Lambda t}\alpha = P\begin{pmatrix} \alpha_1 e^{\lambda_1 t} \\ \alpha_2 e^{\lambda_2 t} \\ \vdots \\ \alpha_n e^{\lambda_n t} \end{pmatrix} = \sum_j \alpha_j e^{\lambda_j t}v_j$$


the same result as in (1.27). The representation (1.27) of the solution $x(t)$ in terms of the eigenvectors of $A$ is often called the modal decomposition of $x(t)$.
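The modal decomposition is easy to reproduce numerically: expand $x_0$ in eigenvector coordinates ($\alpha = P^{-1}x_0$) and superpose the modes. A sketch, using the matrix of Example 2 and an arbitrary $x_0$ of our choosing:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [1.0, 0.0, -1.0]])   # Example 2: eigenvalues 1, 2, -1
lam, P = np.linalg.eig(A)          # columns v_j, eigenvalues lambda_j
x0 = np.array([1.0, 2.0, 3.0])

alpha = np.linalg.solve(P, x0)     # x0 = sum_j alpha_j v_j, i.e. alpha = P^{-1} x0
t = 0.4
x_modal = sum(alpha[j] * np.exp(lam[j] * t) * P[:, j] for j in range(3))

print(np.allclose(x_modal.real, expm(A * t) @ x0))  # True
```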

    Case II: A not Diagonalizable

If $A$ is not diagonalizable, then the above procedure cannot be carried out. If $A$ does not have distinct eigenvalues, it is in general not diagonalizable since it may not have $n$ linearly independent eigenvectors. For example, the matrix

$$A_{nl} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$

which arises in the state space representation of Newton's law, has 0 as a repeated eigenvalue and is not diagonalizable. On the other hand, note that this $A_{nl}$ satisfies $A_{nl}^2 = 0$, so that we can just sum the infinite series

$$e^{A_{nl}t} = I + A_{nl}t = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$$

The general situation is quite complicated, and we shall focus on the $3 \times 3$ case for illustration. Let us consider the following $3 \times 3$ matrix $A$ which has the eigenvalue $\lambda$ with multiplicity 3:

$$A = \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix} \qquad (1.28)$$

Write $A = \lambda I + N$ where

$$N = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \qquad (1.29)$$

Direct calculation shows that

$$N^2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

and

$$N^3 = 0$$

A matrix $B$ satisfying $B^k = 0$ for some $k \geq 1$ is called nilpotent. The smallest integer $k$ such that $B^k = 0$ is called the index of nilpotence. Thus the matrix $N$ in (1.29) is nilpotent with index 3. But then the transition matrix $e^{Nt}$ is easily evaluated to be

$$e^{Nt} = \sum_{j=0}^{2} \frac{N^j t^j}{j!} = \begin{pmatrix} 1 & t & \frac{t^2}{2!} \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} \qquad (1.30)$$

Once we understand the $3 \times 3$ case, it is not difficult to check that an $l \times l$ matrix of the form

$$N = \begin{pmatrix} 0 & 1 & & 0 \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ 0 & & & 0 \end{pmatrix}$$

i.e., 1's along the superdiagonal, and 0 everywhere else, is nilpotent and satisfies $N^l = 0$.


To compute $e^{At}$ when $A$ is of the form (1.28), recall Property 3 of matrix exponentials:

If $A$ and $B$ commute, i.e. $AB = BA$, then $e^{(A+B)t} = e^{At}e^{Bt}$.

If $A$ is of the form (1.28), then since $\lambda I$ commutes with $N$, we can write, using Property 3 and (1.30),

$$e^{At} = e^{(\lambda I + N)t} = e^{(\lambda I)t}e^{Nt} = (e^{\lambda t}I)(e^{Nt}) = e^{\lambda t}\begin{pmatrix} 1 & t & \frac{t^2}{2!} \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} \qquad (1.31)$$

We are now in a position to sketch the general structure of the matrix exponential. More details are given in the Appendix to this chapter. The idea is to transform a general $A$ using a nonsingular matrix $P$ into

$$P^{-1}AP = J = \begin{pmatrix} J_1 & & & 0 \\ & J_2 & & \\ & & \ddots & \\ 0 & & & J_k \end{pmatrix}$$

where each $J_i = \lambda_i I + N_i$, with $N_i$ nilpotent. $J$ is called the Jordan form of $A$. With the block diagonal structure, we can verify

$$J^m = \begin{pmatrix} J_1^m & & & 0 \\ & J_2^m & & \\ & & \ddots & \\ 0 & & & J_k^m \end{pmatrix}$$

By applying the definition of the transition matrix, we obtain

$$e^{Jt} = \begin{pmatrix} e^{J_1 t} & & & 0 \\ & e^{J_2 t} & & \\ & & \ddots & \\ 0 & & & e^{J_k t} \end{pmatrix} \qquad (1.32)$$

Each $e^{J_i t}$ can be evaluated by the method described above. Finally,

$$e^{At} = Pe^{Jt}P^{-1} \qquad (1.33)$$

The above procedure in principle enables us to evaluate $e^{At}$, the only problem being to find the matrix $P$ which transforms $A$ either to diagonal or Jordan form. In general, this can be a very tedious task (see the Appendix to Chapter 1 for an explanation of how it can be done). The above formula is most useful as a means for studying the qualitative dependence of $e^{At}$ on $t$, and will be particularly important in our discussion of stability.

We remark that the above results hold regardless of whether the $\lambda_i$'s are real or complex. In the latter case, we shall take the underlying vector space to be complex and the results then go through without modification.

1.4 Computing $e^{At}$: Laplace Transform Method

Another method of evaluating $e^{At}$ analytically is to use Laplace transforms.


If we let $G(t) = e^{At}$, then the Laplace transform of $G(t)$, denoted by $\hat{G}(s)$, satisfies

$$s\hat{G}(s) = A\hat{G}(s) + I$$

or

$$\hat{G}(s) = (sI - A)^{-1} \qquad (1.34)$$

If we denote the inverse Laplace transform operation by $\mathcal{L}^{-1}$,

$$e^{At} = \mathcal{L}^{-1}[(sI - A)^{-1}] \qquad (1.35)$$

The Laplace transform method thus involves 2 steps:

(a) Find $(sI - A)^{-1}$. The entries of $(sI - A)^{-1}$ will be strictly proper rational functions.

(b) Determine the inverse Laplace transform for each entry of $(sI - A)^{-1}$. This is usually accomplished by doing a partial fractions expansion and using the fact that $\mathcal{L}^{-1}\left[\frac{1}{s-\lambda}\right] = e^{\lambda t}$ (or looking up a mathematical table!).

Determining $(sI - A)^{-1}$ using the cofactor expansion method from linear algebra can be quite tedious. Here we give an alternative method. Let $\det(sI - A) = s^n + p_1 s^{n-1} + \cdots + p_n = p(s)$. Write

$$(sI - A)^{-1} = \frac{B(s)}{p(s)} = \frac{s^{n-1}B_1 + s^{n-2}B_2 + \cdots + B_n}{p(s)} \qquad (1.36)$$

Then the $B_i$ matrices can be determined recursively as follows:

$$B_1 = I$$
$$B_{k+1} = AB_k + p_k I, \quad 1 \leq k \leq n-1 \qquad (1.37)$$

To illustrate the Laplace transform method for evaluating $e^{At}$, again consider

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 0 & -1 \end{pmatrix}$$

$$p(s) = \det\begin{pmatrix} s-1 & 0 & 0 \\ -1 & s-2 & 0 \\ -1 & 0 & s+1 \end{pmatrix} = (s-1)(s-2)(s+1) = s^3 - 2s^2 - s + 2$$

The matrix polynomial $B(s)$ can then be determined using the recursive procedure (1.37):

$$B_2 = A - 2I = \begin{pmatrix} -1 & 0 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & -3 \end{pmatrix}$$

$$B_3 = AB_2 - I = \begin{pmatrix} -2 & 0 & 0 \\ 1 & -1 & 0 \\ -2 & 0 & 2 \end{pmatrix}$$


giving

$$B(s) = \begin{pmatrix} s^2 - s - 2 & 0 & 0 \\ s + 1 & s^2 - 1 & 0 \\ s - 2 & 0 & s^2 - 3s + 2 \end{pmatrix}$$

Putting into (1.36) and doing a partial fractions expansion, we obtain

$$(sI - A)^{-1} = \frac{1}{(s-1)(2)}\begin{pmatrix} 2 & 0 & 0 \\ -2 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} + \frac{1}{(s-2)(3)}\begin{pmatrix} 0 & 0 & 0 \\ 3 & 3 & 0 \\ 0 & 0 & 0 \end{pmatrix} + \frac{1}{(s+1)(6)}\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -3 & 0 & 6 \end{pmatrix}$$

$$e^{At} = \begin{pmatrix} e^t & 0 & 0 \\ -e^t & 0 & 0 \\ \frac{1}{2}e^t & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ e^{2t} & e^{2t} & 0 \\ 0 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -\frac{1}{2}e^{-t} & 0 & e^{-t} \end{pmatrix} = \begin{pmatrix} e^t & 0 & 0 \\ -e^t + e^{2t} & e^{2t} & 0 \\ \frac{1}{2}(e^t - e^{-t}) & 0 & e^{-t} \end{pmatrix}$$

the same result as before.

As a second example, let $A = \begin{pmatrix} \sigma & \omega \\ -\omega & \sigma \end{pmatrix}$. Then

$$A = \begin{pmatrix} \sigma & 0 \\ 0 & \sigma \end{pmatrix} + \begin{pmatrix} 0 & \omega \\ -\omega & 0 \end{pmatrix}$$

But

$$\exp\left[\begin{pmatrix} 0 & \omega \\ -\omega & 0 \end{pmatrix}t\right] = \mathcal{L}^{-1}\begin{pmatrix} s & -\omega \\ \omega & s \end{pmatrix}^{-1}$$

where $\mathcal{L}^{-1}$ is the inverse Laplace transform operator

$$= \mathcal{L}^{-1}\begin{pmatrix} \frac{s}{s^2+\omega^2} & \frac{\omega}{s^2+\omega^2} \\ \frac{-\omega}{s^2+\omega^2} & \frac{s}{s^2+\omega^2} \end{pmatrix} = \begin{pmatrix} \cos\omega t & \sin\omega t \\ -\sin\omega t & \cos\omega t \end{pmatrix}$$

$$e^{At} = e^{(\sigma I)t}\begin{pmatrix} \cos\omega t & \sin\omega t \\ -\sin\omega t & \cos\omega t \end{pmatrix} = \begin{pmatrix} e^{\sigma t}\cos\omega t & e^{\sigma t}\sin\omega t \\ -e^{\sigma t}\sin\omega t & e^{\sigma t}\cos\omega t \end{pmatrix}$$

In this and the previous sections, we have described analytical procedures for computing $e^{At}$. Of course, when the dimension $n$ is large, these procedures would be virtually impossible to carry out. In general, we must resort to numerical techniques. Numerically stable and efficient methods of evaluating the matrix exponential can be found in the research literature.

    1.5 Differential Equations with Inputs and Outputs

The solution of the homogeneous equation (1.4) can be easily generalized to differential equations with inputs. Consider the equation

$$\dot{x}(t) = Ax(t) + Bu(t), \quad x(0) = x_0 \qquad (1.38)$$


where $u$ is a piecewise continuous $\mathbb{R}^m$-valued function. Define a function $z(t) = e^{-At}x(t)$. Then $z(t)$ satisfies

$$\dot{z}(t) = -e^{-At}Ax(t) + e^{-At}Ax(t) + e^{-At}Bu(t) = e^{-At}Bu(t)$$

Since the above equation does not depend on $z(t)$ on the right hand side, it can be directly integrated to give

$$z(t) = z(0) + \int_0^t e^{-As}Bu(s)\,ds = x(0) + \int_0^t e^{-As}Bu(s)\,ds$$

Hence

$$x(t) = e^{At}z(t) = e^{At}x_0 + \int_0^t e^{A(t-s)}Bu(s)\,ds \qquad (1.39)$$

More generally, we have

$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^t e^{A(t-s)}Bu(s)\,ds \qquad (1.40)$$

This is the variation of parameters formula for solving (1.38).
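Formula (1.39) can be checked against direct numerical integration. The sketch below evaluates the convolution integral with a simple trapezoid rule; the particular $A$, $B$, input, and tolerances are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: np.sin(t)            # any piecewise continuous input will do

def x_formula(t, steps=2001):
    """Evaluate (1.39), doing the convolution integral by the trapezoid rule."""
    grid = np.linspace(0.0, t, steps)
    vals = np.array([expm(A * (t - s)) @ B @ [u(s)] for s in grid])
    h = grid[1] - grid[0]
    integral = (vals[:-1] + vals[1:]).sum(axis=0) * h / 2
    return expm(A * t) @ x0 + integral

sol = solve_ivp(lambda t, x: A @ x + B @ [u(t)], (0.0, 2.0), x0,
                rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], x_formula(2.0), atol=1e-4))  # True
```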

We now consider linear systems with inputs and outputs:

$$\dot{x}(t) = Ax(t) + Bu(t), \quad x(0) = x_0 \qquad (1.41)$$

$$y(t) = Cx(t) + Du(t) \qquad (1.42)$$

where the output $y$ is a piecewise continuous $\mathbb{R}^p$-valued function. Using (1.39) in (1.42) gives the solution immediately:

$$y(t) = Ce^{At}x_0 + \int_0^t Ce^{A(t-s)}Bu(s)\,ds + Du(t) \qquad (1.43)$$

In the case $x_0 = 0$ and $D = 0$, we find

$$y(t) = \int_0^t Ce^{A(t-s)}Bu(s)\,ds$$

which is of the form $y(t) = \int_0^t h(t-s)u(s)\,ds$, a convolution integral. The function $h(t) = Ce^{At}B$ is called the impulse response of the system. If we allow generalized functions, we can incorporate a nonzero $D$ term as

$$h(t) = Ce^{At}B + D\delta(t) \qquad (1.44)$$

where $\delta(t)$ is the Dirac $\delta$-function. Noting that the Laplace transform of $e^{At}$ is $(sI - A)^{-1}$, the transfer function of the linear system from $u$ to $y$, which is the Laplace transform of the impulse response, is given by

$$H(s) = C(sI - A)^{-1}B + D \qquad (1.45)$$

The entries of $H(s)$ are proper rational functions, i.e., ratios of polynomials with degree of numerator $\leq$ degree of denominator. If $D = 0$ (no direct transmission from $u$ to $y$), the entries are strictly proper. This is an important property of transfer functions arising from a state space representation of the form given by (1.41), (1.42).

    As an example, consider the standard circuit below.


[Figure: a series circuit driven by a voltage source $u$, containing an inductor $L$, a resistor $R$, and a capacitor $C$; $x_1$ is the voltage across the capacitor and $x_2$ the current through the inductor.]

The choice of $x_1$ to denote the voltage across the capacitor and $x_2$ the current through the inductor gives a state description for the circuit. The equations are

$$C\dot{x}_1 = x_2 \qquad (1.46)$$

$$L\dot{x}_2 + Rx_2 + x_1 = u \qquad (1.47)$$

Converting into standard form, we have

$$\dot{x} = \begin{pmatrix} 0 & \frac{1}{C} \\ -\frac{1}{L} & -\frac{R}{L} \end{pmatrix}x + \begin{pmatrix} 0 \\ \frac{1}{L} \end{pmatrix}u \qquad (1.48)$$

Take $C = 0.5$, $L = 1$, $R = 3$. The circuit state equation satisfies

$$\dot{x} = \begin{pmatrix} 0 & 2 \\ -1 & -3 \end{pmatrix}x + \begin{pmatrix} 0 \\ 1 \end{pmatrix}u = Ax + Bu \qquad (1.49)$$

We have

$$(sI - A)^{-1} = \begin{pmatrix} s & -2 \\ 1 & s+3 \end{pmatrix}^{-1} = \frac{1}{s^2 + 3s + 2}\begin{pmatrix} s+3 & 2 \\ -1 & s \end{pmatrix} = \begin{pmatrix} \frac{2}{s+1} - \frac{1}{s+2} & \frac{2}{s+1} - \frac{2}{s+2} \\ -\frac{1}{s+1} + \frac{1}{s+2} & -\frac{1}{s+1} + \frac{2}{s+2} \end{pmatrix}$$

Upon inversion, we get

$$e^{At} = \begin{pmatrix} 2e^{-t} - e^{-2t} & 2e^{-t} - 2e^{-2t} \\ -e^{-t} + e^{-2t} & -e^{-t} + 2e^{-2t} \end{pmatrix} \qquad (1.50)$$

If we are interested in the voltage across the capacitor as the output, we then have

$$y = x_1 = \begin{bmatrix} 1 & 0 \end{bmatrix}x$$

The impulse response from the input voltage source to the output voltage is given by

$$h(t) = Ce^{At}B = 2e^{-t} - 2e^{-2t}$$


If we have a constant input $u = 1$, the output voltage using (1.39), assuming initial rest ($x_0 = 0$), is given by

$$y(t) = \int_0^t \left[2e^{-(t-s)} - 2e^{-2(t-s)}\right]ds = 2(1 - e^{-t}) - (1 - e^{-2t})$$

Armed with (1.39) and (1.43), we can see why (1.38) deserves the name of state equation. The solutions for $x(t)$ and $y(t)$ from (1.39) and (1.43) show that indeed knowledge of the current state ($x_0$) and future inputs $u(t)$, $t \geq 0$, determines the future states $x(t)$, $t \geq 0$, and outputs $y(t)$, $t \geq 0$.

    1.6 Stability

Let us consider the unforced state equation first:

$$\dot{x} = Ax, \quad x(0) = x_0.$$

The vector $x(t) \to 0$ as $t \to \infty$ for every $x_0$ if and only if all the eigenvalues of $A$ lie in the open left half-plane.

Idea of proof: From (1.33),

$$x(t) = e^{At}x_0 = Pe^{Jt}P^{-1}x_0$$

The entries in the matrix $e^{Jt}$ are of the form $e^{\lambda_i t}p(t)$ where $p(t)$ is a polynomial in $t$, and $\lambda_i$ is an eigenvalue of $A$. These entries converge to 0 if and only if $\mathrm{Re}\,\lambda_i < 0$. Thus

$$x(t) \to 0 \quad \forall x_0 \iff \text{all eigenvalues of } A \text{ lie in } \{\lambda : \mathrm{Re}\,\lambda < 0\}$$

Now let us look at the full system model:

$$\dot{x} = Ax + Bu$$
$$y = Cx + Du.$$

The transfer matrix from $u$ to $y$ is

$$H(s) = C(sI - A)^{-1}B + D.$$

We can write this as

$$H(s) = \frac{1}{\det(sI - A)}C\,\mathrm{adj}(sI - A)\,B + D.$$

Notice that the elements of the matrix $\mathrm{adj}(sI - A)$ are all polynomials in $s$; consequently, they have no poles. Notice also that $\det(sI - A)$ is the characteristic polynomial of $A$. We can therefore conclude from the preceding equation that

$$\{\text{eigenvalues of } A\} \supseteq \{\text{poles of } H(s)\}.$$

Hence, if all the eigenvalues of $A$ are in the open left half-plane, then $H(s)$ is a stable transfer matrix. The converse is not necessarily true.


    Example:

$$A = \begin{pmatrix} -1 & 0 \\ 0 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = 0$$

$$H(s) = \frac{1}{s+1}.$$

So the transfer function has stable poles, but the system is obviously unstable. Any initial condition $x_0$ with a nonzero second component will cause $x(t)$ to grow without bound.

    A more subtle example is the following

    Example:

$$A = \begin{pmatrix} 0 & 1 & 1 \\ -2 & -2 & 0 \\ 2 & 1 & -1 \end{pmatrix}, \quad B = \begin{pmatrix} -1 & 0 \\ 2 & 1 \\ -1 & -1 \end{pmatrix}$$

$$C = \begin{bmatrix} 2 & 2 & 1 \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 0 \end{bmatrix}$$

$$\{\text{eigenvalues of } A\} = \{0, -1, -2\}$$

$$H(s) = \frac{1}{\det(sI - A)}C\,\mathrm{adj}(sI - A)\,B$$

$$= \frac{1}{s^3 + 3s^2 + 2s}\begin{bmatrix} 2 & 2 & 1 \end{bmatrix}\begin{pmatrix} s^2 + 3s + 2 & s + 2 & s + 2 \\ -2s - 2 & s^2 + s - 2 & -2 \\ 2s + 2 & s + 2 & s^2 + 2s + 2 \end{pmatrix}\begin{pmatrix} -1 & 0 \\ 2 & 1 \\ -1 & -1 \end{pmatrix}$$

$$= \frac{1}{s^3 + 3s^2 + 2s}\begin{bmatrix} 2s^2 + 4s + 2 & 2s^2 + 5s + 2 & s^2 + 4s + 2 \end{bmatrix}\begin{pmatrix} -1 & 0 \\ 2 & 1 \\ -1 & -1 \end{pmatrix}$$

$$= \frac{1}{s^3 + 3s^2 + 2s}\begin{bmatrix} s^2 + 2s & s^2 + s \end{bmatrix} = \begin{bmatrix} \dfrac{s+2}{s^2 + 3s + 2} & \dfrac{s+1}{s^2 + 3s + 2} \end{bmatrix}$$

Thus $\{\text{poles of } H(s)\} = \{-1, -2\}$. Hence the eigenvalue of $A$ at $\lambda = 0$ does not appear as a pole of $H(s)$. Note that there is a cancellation of a factor of $s$ in the numerator and denominator of $H(s)$. This corresponds to a pole-zero cancellation in the determination of $H(s)$, and indicates that something internal in the system cannot be seen from the input-output behaviour.

    1.7 Transfer Function to State Space

We have already seen, in Section 1.5, how we can determine the impulse response and the transfer function of a linear time-invariant system from its state space description. In this section, we discuss the converse,


harder problem of determining a state space description of a linear time-invariant system from its transfer function. This is also called the realization problem in control theory. We shall only solve the single-input single-output case, as the general multivariable problem is much harder and beyond the scope of this course.

Suppose we are given the scalar-valued transfer function $H(s)$ of a single-input single-output linear time-invariant system. We assume $H(s)$ is a proper rational function; otherwise we cannot find a state space representation of the form (1.41), (1.42) for $H(s)$. We also assume that there are no common factors in the numerator and denominator of $H(s)$. After long division, if necessary, we can write

$$H(s) = d + \frac{q(s)}{p(s)}$$

where degree of $q(s)$ < degree of $p(s)$. The constant $d$ corresponds to the direct transmission term from $u$ to $y$, so the realization problem reduces to finding a triple $(c^T, A, b)$ such that

$$c^T(sI - A)^{-1}b = \frac{q(s)}{p(s)}$$

so that the transfer function $H(s)$ is given by

$$H(s) = c^T(sI - A)^{-1}b + d = \tilde{H}(s) + d$$

Note that we have used the notation $c^T$ for the $1 \times n$ matrix $C$, in keeping with our normal convention that lower case letters denote column vectors.

We now give 2 solutions to the realization problem. Let

$$\tilde{H}(s) = \frac{q(s)}{p(s)}$$

with

$$p(s) = s^n + p_1 s^{n-1} + \cdots + p_n$$
$$q(s) = q_1 s^{n-1} + q_2 s^{n-2} + \cdots + q_n$$

Then the following $(c^T, A, b)$ triple realizes $\frac{q(s)}{p(s)}$:

$$A = \begin{pmatrix} 0 & \cdots & \cdots & 0 & -p_n \\ 1 & 0 & & \vdots & -p_{n-1} \\ & \ddots & \ddots & & \vdots \\ & & \ddots & 0 & \vdots \\ 0 & & & 1 & -p_1 \end{pmatrix}, \quad b = \begin{pmatrix} q_n \\ \vdots \\ q_1 \end{pmatrix}$$

$$c^T = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}$$

To see this, let

$$\gamma^T(s) = \begin{bmatrix} \gamma_1(s) & \gamma_2(s) & \ldots & \gamma_n(s) \end{bmatrix} = c^T(sI - A)^{-1}$$

Then

$$\gamma^T(s)(sI - A) = c^T \qquad (1.51)$$

Writing out (1.51) in component form, we have

$$s\gamma_1(s) - \gamma_2(s) = 0$$
$$s\gamma_2(s) - \gamma_3(s) = 0$$
$$\vdots$$
$$s\gamma_{n-1}(s) - \gamma_n(s) = 0$$
$$s\gamma_n(s) + \sum_{i=1}^{n} \gamma_{n-i+1}(s)p_i = 1$$

Successively solving these equations, we find

$$\gamma_k(s) = s^{k-1}\gamma_1(s), \quad 2 \leq k \leq n$$

and

$$s^n\gamma_1(s) + \sum_{i=1}^{n} p_i s^{n-i}\gamma_1(s) = 1 \qquad (1.52)$$

so that

$$\gamma_1(s) = \frac{1}{p(s)}$$

Hence

$$\gamma^T(s) = \frac{1}{p(s)}\begin{bmatrix} 1 & s & s^2 & \cdots & s^{n-1} \end{bmatrix} \qquad (1.53)$$

and

$$c^T(sI - A)^{-1}b = \gamma^T(s)b = \frac{q(s)}{p(s)}$$

The resulting system is often said to be in observable canonical form. We shall discuss the meaning of the word observable in control theory later in the course.

As an example, we can now write down the state space representation for the equation given in (1.3). The transfer function of (1.3) is

$$H(s) = \frac{s+4}{s^2 + 3s + 2}$$

The observable representation is therefore given by

$$\dot{x} = \begin{pmatrix} 0 & -2 \\ 1 & -3 \end{pmatrix}x + \begin{pmatrix} 4 \\ 1 \end{pmatrix}u$$

$$y = \begin{bmatrix} 0 & 1 \end{bmatrix}x$$


Now note that because $\tilde{H}(s)$ is scalar-valued, $\tilde{H}(s) = \tilde{H}^T(s)$. We can immediately conclude that the following $(c_1^T, A_1, b_1)$ triple also realizes $\frac{q(s)}{p(s)}$:

$$A_1 = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \\ -p_n & -p_{n-1} & \cdots & -p_2 & -p_1 \end{pmatrix}, \quad b_1 = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}$$

$$c_1^T = \begin{bmatrix} q_n & \ldots & q_1 \end{bmatrix}$$

so that $H(s)$ can also be written as $H(s) = d + c_1^T(sI - A_1)^{-1}b_1$.

The resulting system is said to be in controllable canonical form. We shall discuss the meaning of the word controllable in the next chapter. The controllable and observable canonical forms will be used later for control design.

    1.8 Linearization

So far, we have only considered systems described by linear constant coefficient differential equations. Many systems are nonlinear, with dynamics described by nonlinear differential equations of the form

$$\dot{x} = f(x, u) \qquad (1.54)$$

where $f$ has $n$ components, each of which is a continuously differentiable function of $x_1, \ldots, x_n, u_1, \ldots, u_m$. Suppose the constant vectors $(x_p, u_p)$ correspond to a desired operating point, also called an equilibrium point, for the system, i.e.

$$f(x_p, u_p) = 0$$

If the initial condition $x_0 = x_p$, then by using $u(t) = u_p$ for all $t$, $x(t) = x_p$ for all $t$, i.e. the desired operating point is maintained. However, if $x_0 \neq x_p$, then to satisfy (1.54), $(x, u)$ will be different from $(x_p, u_p)$. Nevertheless, if $x_0$ is close to $x_p$, then by continuity, we expect $x$ and $u$ to be close to $x_p$ and $u_p$ as well. Let

$$x = x_p + \delta x$$
$$u = u_p + \delta u$$

Taylor's series expansion shows that

$$f_i(x, u) = f_i(x_p + \delta x, u_p + \delta u) = \begin{bmatrix} \frac{\partial f_i}{\partial x_1} & \frac{\partial f_i}{\partial x_2} & \cdots & \frac{\partial f_i}{\partial x_n} \end{bmatrix}_{(x_p, u_p)}\delta x + \begin{bmatrix} \frac{\partial f_i}{\partial u_1} & \frac{\partial f_i}{\partial u_2} & \cdots & \frac{\partial f_i}{\partial u_m} \end{bmatrix}_{(x_p, u_p)}\delta u + \text{h.o.t.}$$


If we drop higher order terms (h.o.t.) involving $\delta x_j$ and $\delta u_k$, we obtain a linear differential equation for $\delta x$:

$$\frac{d}{dt}\delta x = A\,\delta x + B\,\delta u \qquad (1.55)$$

where $[A]_{jk} = \frac{\partial f_j}{\partial x_k}(x_p, u_p)$ and $[B]_{jk} = \frac{\partial f_j}{\partial u_k}(x_p, u_p)$. $A$ is called the Jacobian matrix of $f$ with respect to $x$, sometimes simply written as $\frac{\partial f}{\partial x}(x_p, u_p)$. Similarly, $B = \frac{\partial f}{\partial u}(x_p, u_p)$.

The resulting system (1.55) is called the linearized system of the nonlinear system (1.54) about the operating point $(x_p, u_p)$. The process is called linearization of the nonlinear system about $(x_p, u_p)$. Keeping the original system near the operating point can often be achieved by keeping the linearized system near 0.

    Example:

Consider a pendulum subjected to an applied torque. The normalized equation of motion is

$$\ddot{\theta} + \frac{g}{l}\sin\theta = u \qquad (1.56)$$

where $g$ is the gravitational constant, and $l$ is the length of the pendulum. Let $x = \begin{bmatrix} \theta & \dot{\theta} \end{bmatrix}^T$. We have the following nonlinear differential equation:

$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = -\frac{g}{l}\sin x_1 + u \qquad (1.57)$$

We recognize

$$f(x, u) = \begin{pmatrix} x_2 \\ -\frac{g}{l}\sin x_1 + u \end{pmatrix}$$

Clearly $(x, u) = (0, 0)$ is an equilibrium point. The linearized system is given by

$$\dot{x} = \begin{pmatrix} 0 & 1 \\ -\frac{g}{l} & 0 \end{pmatrix}x + \begin{pmatrix} 0 \\ 1 \end{pmatrix}u = A_0 x + Bu$$

The system matrix $A_0$ has purely imaginary eigenvalues, corresponding to harmonic motion. Note, however, that $(x_1, x_2, u) = (\pi, 0, 0)$ is also an equilibrium point, corresponding to the pendulum in the inverted vertical position. The linearized system is then given by

$$\dot{x} = \begin{pmatrix} 0 & 1 \\ \frac{g}{l} & 0 \end{pmatrix}x + \begin{pmatrix} 0 \\ 1 \end{pmatrix}u = A_\pi x + Bu$$

The system matrix $A_\pi$ has an unstable eigenvalue, and the behaviour of the linearized system about the 2 equilibrium points is quite different.


    Appendix to Chapter 1: Jordan Forms

This appendix provides more details on the Jordan form of a matrix and its use in computing the transition matrix. The results are deep and complicated, and we cannot give a complete exposition here. Standard textbooks on linear algebra should be consulted for a more thorough treatment.

When a matrix $A$ has repeated eigenvalues, it is usually not diagonalizable. However, there always exists a nonsingular matrix $P$ such that

$$P^{-1}AP = J = \begin{pmatrix} J_1 & & & 0 \\ & J_2 & & \\ & & \ddots & \\ 0 & & & J_k \end{pmatrix} \qquad (A.1)$$

where $J_i$ is of the form

$$J_i = \begin{pmatrix} \lambda_i & 1 & & & 0 \\ & \lambda_i & 1 & & \\ & & \ddots & \ddots & \\ & & & & 1 \\ 0 & & & & \lambda_i \end{pmatrix}, \quad \text{an } m_i \times m_i \text{ matrix} \qquad (A.2)$$

The $\lambda_i$'s need not be all distinct, but $\sum_{i=1}^{k} m_i = n$. The special form on the right hand side of (A.1) is called the Jordan form of $A$, and $J_i$ is the Jordan block associated with eigenvalue $\lambda_i$. Note that if $m_i = 1$ for all $i$, i.e., each Jordan block is $1 \times 1$ and $k = n$, the Jordan form specializes to the diagonal form.

There are immediately some natural questions which arise in connection with the Jordan form: for an eigenvalue $\lambda_i$ with multiplicity $n_i$, how many Jordan blocks can there be associated with $\lambda_i$, and how do we determine the size of the various blocks? Also, how do we determine the nonsingular $P$ which transforms $A$ into Jordan form?

First recall that the nullspace of $A$, denoted by $N(A)$, is the set of vectors $x$ such that $Ax = 0$. $N(A)$ is a subspace and has a dimension. If $\lambda$ is an eigenvalue of $A$, the dimension of $N(A - \lambda I)$ is called the geometric multiplicity of $\lambda$. The geometric multiplicity of $\lambda$ corresponds to the number of independent eigenvectors associated with $\lambda$.

Fact: Suppose $\lambda$ is an eigenvalue of $A$. The number of Jordan blocks associated with $\lambda$ is equal to the geometric multiplicity of $\lambda$.

The number of times $\lambda$ is repeated as a root of the characteristic equation $\det(sI - A) = 0$ is called its algebraic multiplicity (commonly simplified to just multiplicity). The geometric multiplicity of $\lambda$ is always less than or equal to its algebraic multiplicity. If the geometric multiplicity is strictly less than the algebraic multiplicity, $\lambda$ has generalized eigenvectors. A generalized eigenvector is a nonzero vector $v$ such that for some positive integer $k$, $(A - \lambda I)^k v = 0$, but $(A - \lambda I)^{k-1}v \neq 0$. Such a vector $v$ defines a chain of generalized eigenvectors $\{v_1, v_2, \ldots, v_k\}$ through the equations $v_k = v$, $v_{k-1} = (A - \lambda I)v_k$, $v_{k-2} = (A - \lambda I)v_{k-1}$, $\ldots$, $v_1 = (A - \lambda I)v_2$. Note that $(A - \lambda I)v_1 = (A - \lambda I)^k v = 0$ so that $v_1$ is an eigenvector. This set of equations is often written as

$$Av_2 = \lambda v_2 + v_1$$
$$Av_3 = \lambda v_3 + v_2$$
$$\vdots$$
$$Av_k = \lambda v_k + v_{k-1}$$

It can be shown that the generalized eigenvectors associated with an eigenvalue are linearly independent. The nonsingular matrix $P$ which transforms $A$ into its Jordan form is constructed from the set of eigenvectors and generalized eigenvectors.

The complete procedure for determining all the generalized eigenvectors, though too complex to describe precisely here, can be well-illustrated by an example.

    Example:

    Consider the matrix A given by

$$A = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{pmatrix}$$

$$\det(sI - A) = s^3(s + 3)$$

so that the eigenvalues are $-3$, and $0$ with algebraic multiplicity 3. Let us first determine the geometric multiplicity of the eigenvalue 0. The equation

$$Aw = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix} = 0 \qquad (A.3)$$

gives $w_3 = 0$, $w_4 = 0$, and $w_1$, $w_2$ arbitrary. Thus the geometric multiplicity of 0 is 2, i.e., there are 2 Jordan blocks associated with 0. Since 0 has algebraic multiplicity 3, this means there are 2 eigenvectors and 1 generalized eigenvector. To determine the generalized eigenvector, we solve the homogeneous equation $A^2v = 0$ for solutions $v$ such that $Av \neq 0$. Now

$$A^2 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{pmatrix}\begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{pmatrix} = \begin{pmatrix} 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \\ 0 & 0 & 3 & -3 \\ 0 & 0 & -6 & 6 \end{pmatrix}$$


It is readily seen that

$$v = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix}$$

solves $A^2v = 0$ and

$$Av = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}$$

so that we have the chain of generalized eigenvectors

$$v_2 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix}, \quad v_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}$$

with $v_1$ an eigenvector. From (A.3), a second independent eigenvector is $w = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^T$. This completes the determination of the eigenvectors and generalized eigenvectors associated with 0. Finally the eigenvector for the eigenvalue $-3$ is given by the solution of

$$(A + 3I)z = \begin{pmatrix} 3 & 0 & 1 & 0 \\ 0 & 3 & 0 & 1 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & 2 & 1 \end{pmatrix}z = 0$$

yielding $z = \begin{bmatrix} -1 & 2 & 3 & -6 \end{bmatrix}^T$.

    The nonsingular matrix bringing A to Jordan form is then given by

    P = w v1 v2 z =

    1 1 0 1

    0 1 0 2

    0 0 1 30 0 1 6

    Note that the chain of generalized eigenvectors v1, v2 go together. One can verify that

$$P^{-1} = \begin{pmatrix} 1 & -1 & \frac{1}{3} & -\frac{1}{3} \\ 0 & 1 & -\frac{2}{9} & \frac{2}{9} \\ 0 & 0 & \frac{2}{3} & \frac{1}{3} \\ 0 & 0 & \frac{1}{9} & -\frac{1}{9} \end{pmatrix}$$


and

$$P^{-1}AP = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3 \end{pmatrix} \qquad (A.4)$$

The ordering of the Jordan blocks is not unique. If we choose

$$M = \begin{bmatrix} v_1 & v_2 & w & z \end{bmatrix} = \begin{pmatrix} 1 & 0 & 1 & -1 \\ 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 1 & 0 & -6 \end{pmatrix}$$

then

$$M^{-1}AM = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3 \end{pmatrix} \qquad (A.5)$$

although it is common to choose the blocks ordered in decreasing size, i.e., that given in (A.5).

Once we have transformed $A$ into its Jordan form (A.1), we can readily determine the transition matrix $e^{At}$. As before, we have

$$A^m = (PJP^{-1})^m = PJ^mP^{-1}$$

By the block diagonal nature of $J$, we readily see

$$J^m = \begin{pmatrix} J_1^m & & & 0 \\ & J_2^m & & \\ & & \ddots & \\ 0 & & & J_k^m \end{pmatrix}$$

Thus,

$$e^{At} = P\begin{pmatrix} e^{J_1 t} & & & 0 \\ & e^{J_2 t} & & \\ & & \ddots & \\ 0 & & & e^{J_k t} \end{pmatrix}P^{-1}$$

Since the $m_i \times m_i$ block $J_i = \lambda_i I + N_i$, using the methods of Section 1.3, we can immediately write down

$$e^{J_i t} = e^{\lambda_i t}\begin{pmatrix} 1 & t & \cdots & \frac{t^{m_i - 1}}{(m_i - 1)!} \\ & \ddots & \ddots & \vdots \\ & & \ddots & t \\ 0 & & & 1 \end{pmatrix}$$

We now have a complete linear algebraic procedure for determining $e^{At}$ when $A$ is not diagonalizable. For

$$A = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 2 & -2 \end{pmatrix}$$


and using the $M$ matrix for transformation to Jordan form, we have

$$e^{At} = M\begin{pmatrix} 1 & t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{-3t} \end{pmatrix}M^{-1}$$

$$= \begin{pmatrix} 1 & 0 & \frac{1}{9} + \frac{2}{3}t - \frac{1}{9}e^{-3t} & -\frac{1}{9} + \frac{1}{3}t + \frac{1}{9}e^{-3t} \\ 0 & 1 & -\frac{2}{9} + \frac{2}{3}t + \frac{2}{9}e^{-3t} & \frac{2}{9} + \frac{1}{3}t - \frac{2}{9}e^{-3t} \\ 0 & 0 & \frac{2}{3} + \frac{1}{3}e^{-3t} & \frac{1}{3} - \frac{1}{3}e^{-3t} \\ 0 & 0 & \frac{2}{3} - \frac{2}{3}e^{-3t} & \frac{1}{3} + \frac{2}{3}e^{-3t} \end{pmatrix}$$