
LINEAR ALGEBRA & MATRICES

C.T. Abdallah, M. Ariola, F. Amato

January 21, 2006

Contents

1 Review of Matrix Algebra
  1.1 Determinants, Minors, and Cofactors
  1.2 Rank, Trace, and Inverse
  1.3 Elementary Operations and Matrices
2 Vectors and Linear (Vector) Spaces
  2.1 Linear Equations
3 Eigenvalues and Eigenvectors
  3.1 Jordan Forms
4 Inner Products & Norms
  4.1 Inner Products
  4.2 Norms
5 Applications of Linear Algebra
  5.1 The Cayley-Hamilton Theorem
  5.2 Solving State Equations
  5.3 Matrix Decompositions
  5.4 Bilinear Forms and Sign-definite Matrices
    5.4.1 Definite Matrices
  5.5 Matrix Inversion Lemmas
6 MATLAB Commands

These notes deal with the study of linear algebra and matrices. Linear algebra plays an important role in the subareas of signal processing, control systems, communications, and more broadly in the study of systems. The notes rely heavily on [1, 2, 3, 4].

    1 Review of Matrix Algebra

We start this chapter by introducing matrices and their algebra. Notationally, matrices are denoted by capital letters A, M, and are rectangular arrays of elements. Such elements are referred to as scalars and denoted by lowercase letters a, b, etc. Note that the scalars are not necessarily real or complex constants: they may be real or complex numbers, polynomials, or general functions.

Example 1 Consider the matrix

    A = [ 1         2      3    4         5     6
          2×10^5    1      0.9  7.2×10^4  0.17  4.96×10^3
          0.012338  11.72  2.6  8.7×10^4  −31   22
          0         0      1    0         0     0
          0         0      0    0         0     30 ]

This is a 5 × 6 matrix with real entries. On the other hand,

    P(s) = [ s+1           s+3
             s^2 + 3s + 2  s^2 + 5s + 4 ]

is a 2 × 2 matrix of polynomials in the variable s. Finally,

    [ 1 + j   j  0
      1 + 3j  5  2j ]

is a 2 × 3 complex matrix.

A matrix which has m rows and n columns is said to be m × n. A matrix may be denoted by A = [a_{ij}], where i = 1, …, m and j = 1, …, n.

Example 2 The matrix A in Example 1 has 5 rows and 6 columns. The element a_{11} = 1 while a_{35} = −31.

A scalar is a 1 × 1 matrix. If m = n the matrix is said to be square. If m = 1 the matrix is a row matrix (or vector). If n = 1 the matrix is a column matrix (or vector). If A is square, then we define the trace of A as the sum of its diagonal elements, i.e.

    Trace(A) = Σ_{i=1}^n a_{ii}    (1)

Example 3 Consider P(s) of Example 1; then Trace(P(s)) = s^2 + 6s + 5.

Two matrices A and B are equal, written A = B, if and only if A has the same number of rows and columns as B and a_{ij} = b_{ij} for all i, j. Two matrices A and B that have the same numbers of rows and columns may be added or subtracted element by element, i.e.

    C = [c_{ij}] = A ± B  ⟺  c_{ij} = a_{ij} ± b_{ij}  for all i, j    (2)

The multiplication of 2 matrices A and B to obtain C = AB may be performed if and only if A has the same number of columns as B has rows. In fact,

    C = AB  ⟺  c_{ij} = Σ_{k=1}^n a_{ik} b_{kj}    (3)

The matrix C is m × q if A is m × n and B is n × q. Note that BA may not even be defined and, in general, BA is not equal to AB even when both products are defined. The matrix of all zeros is the null matrix, and the square matrix A with a_{ii} = 1 and a_{ij} = 0 for i ≠ j is the identity matrix. The identity matrix is denoted by I. Note that AI = IA = A, assuming A is n × n, as is I.

The following properties of matrix algebra are easily verified:

1. A + B = B + A

2. A + (B + C) = (A + B) + C

3. α(A + B) = αA + αB for all scalars α

4. αA = Aα for all scalars α

5. A(BC) = (AB)C

6. A(B + C) = AB + AC

7. (B + C)A = BA + CA

Let A = [a_{ij}] be an m × n matrix. The transpose of A is denoted by A^T = [a_{ji}] and is the n × m matrix obtained by interchanging rows and columns. The matrix A is symmetric if A = A^T, and skew-symmetric if A = −A^T. Also, it can be seen that (AB)^T = B^T A^T. In the case where A contains complex elements, we let Ā be the conjugate of A, whose elements are the conjugates of those of A, and write A* = Ā^T for the conjugate transpose. Matrices satisfying A = A* are Hermitian and those satisfying A = −A* are skew-Hermitian.

1.1 Determinants, Minors, and Cofactors

In this section we will consider square matrices only. The determinant of a square matrix A, denoted by det(A) or |A|, is a scalar-valued function of A and is given as follows for n = 1, 2, 3:

1. n = 1: |A| = a_{11}

2. n = 2: |A| = a_{11} a_{22} − a_{12} a_{21}

3. n = 3: |A| = a_{11}(a_{22} a_{33} − a_{23} a_{32}) + a_{12}(a_{23} a_{31} − a_{21} a_{33}) + a_{13}(a_{21} a_{32} − a_{22} a_{31})

Standard methods apply for calculating determinants. The determinant of the null matrix is 0, that of an identity matrix is 1, and that of a triangular or diagonal matrix is the product of all diagonal elements. Each element a_{ij} in an n × n square matrix A has associated with it a minor M_{ij}, obtained as the determinant of the (n−1) × (n−1) matrix resulting from deleting the ith row and the jth column. From these minors, we can obtain the cofactors, given by C_{ij} = (−1)^{i+j} M_{ij}.

Example 4 Given

    A = [ 2  4  1
          3  0  2
          2  0  3 ]

Then M_{12} = 5, C_{12} = −5, M_{32} = 1, C_{32} = −1.

Some useful properties of determinants are listed below:

1. Let A and B be n × n matrices; then |AB| = |A| · |B|.

2. |A| = |A^T|

3. If A contains a row or a column of zeros, then |A| = 0.

4. If any row (or column) of A is a linear combination of other rows (or columns), then |A| = 0.

5. If we interchange any 2 rows (or columns) of a matrix A, the determinant of the resulting matrix is −|A|.

6. If we multiply a row (or a column) of a matrix A by a scalar α, the determinant of the resulting matrix is α|A|.

7. Any multiple of a row (or a column) may be added to any other row (or column) of a matrix A without changing the value of the determinant.

    1.2 Rank, Trace, and Inverse

The rank of an m × n matrix A, denoted by r_A = rank(A), is the size of the largest nonzero determinant that can be formed from A. Note that r_A ≤ min{m, n}. If A is square and if r_A = n, then A is nonsingular. If r_A is the rank of A and r_B is the rank of B, and if C = AB, then 0 ≤ r_C ≤ min{r_A, r_B}. The trace of a square matrix A was defined earlier as Tr(A) = Σ_{i=1}^n a_{ii}.

Example 5 Let us look at the rank of

    A = [ 2  4  1
          3  0  2
          2  0  3 ]

It is easy to see that r_A = 3. Now consider the matrix

    B = [ 2  0
          3  0
          1  1 ]

whose rank is r_B = 2, and form

    C = AB = [ 17  1
               8   2
               7   3 ]

Then rank(C) = 2 ≤ min{r_A, r_B}.

If A and B are conformable square matrices, Tr(A + B) = Tr(A) + Tr(B) and Tr(AB) = Tr(BA). Also, Tr(A) = Tr(A^T).

Example 6 Show that Tr(ABC) = Tr(B^T A^T C^T). First, write Tr(ABC) = Tr(CAB); then Tr(CAB) = Tr([CAB]^T) = Tr(B^T A^T C^T), thus proven.

Note that, in general, rank(A + B) ≠ rank(A) + rank(B) and Tr(AB) ≠ Tr(A) Tr(B).

Example 7 Let

    A = [ 2  4  1          B = [ 1  0  0
          3  0  2                0  0  0
          2  0  3 ];             0  0  0 ]

and note that r_A = 3, r_B = 1, while r_{A+B} = 3. Also, note that Tr(AB) = 2 while Tr(A) Tr(B) = 5 · 1 = 5.

Next, we define the inverse of a square, nonsingular matrix A as the square matrix B of the same dimensions such that

    AB = BA = I

The inverse is denoted by A^{-1} and may be found by

    A^{-1} = C^T / |A|

where C is the matrix of cofactors C_{ij} of A. Note that for the inverse to exist, |A| must be nonzero, which is equivalent to saying that A is nonsingular. We also write C^T = Adjoint(A).

The following property holds:

    (AB)^{-1} = B^{-1} A^{-1}

assuming of course that A and B are compatible and both invertible.
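As a small illustration, the cofactor formula can be checked numerically in MATLAB (cf. Section 6). The following minimal sketch builds the matrix of cofactors of the matrix of Example 4 and compares the adjoint-based inverse with inv:

    % Inverse via A^{-1} = Adjoint(A)/|A|, checked against inv()
    A = [2 4 1; 3 0 2; 2 0 3];
    n = size(A,1);
    C = zeros(n);                            % matrix of cofactors
    for i = 1:n
        for j = 1:n
            M = A; M(i,:) = []; M(:,j) = []; % delete row i and column j
            C(i,j) = (-1)^(i+j)*det(M);      % cofactor C_ij
        end
    end
    Ainv = C'/det(A);                        % transpose of cofactors over |A|
    norm(Ainv - inv(A))                      % zero up to rounding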

1.3 Elementary Operations and Matrices

The basic operations, called elementary operations, are as follows:

1. Interchange any 2 rows (or columns)

2. Multiply any row (or column) by a scalar α

3. Multiply any row (or column) by a scalar α and add the resulting row (or column) to any other row (or column)

Note that each of these elementary operations may be represented by a postmultiplication for column operations (or a premultiplication for row operations) by an elementary matrix, which is nonsingular.

Example 8 Let

    A = [ 2  1  0
          0  2  1
          1  1  1 ]

Then let us use the following elementary operations:

1. Exchange rows 1 and 3

2. Multiply row 1 by −2 and add it to row 3

3. Multiply row 2 by 1/2 and add it to row 3

4. Multiply row 2 by 1/2

5. Multiply row 3 by −2/3

6. Multiply column 1 by −1 and add it to column 2

7. Multiply column 1 by −1 and add it to column 3

8. Multiply column 2 by −1/2 and add it to column 3

The end result is

    B = [ 1  0  0
          0  1  0
          0  0  1 ]
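The first two operations of Example 8, written as premultiplications by elementary matrices, can be sketched in MATLAB as follows (the remaining operations are built the same way):

    A  = [2 1 0; 0 2 1; 1 1 1];
    E1 = eye(3); E1([1 3],:) = E1([3 1],:);  % exchange rows 1 and 3
    E2 = eye(3); E2(3,1) = -2;               % -2 times row 1 added to row 3
    E2*E1*A                                  % partially reduced matrix
    % each elementary matrix is nonsingular, so the rank is unchanged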

Example 9 Given the following polynomial matrix

    P(s) = [ s^2  0
             0    s^2
             1    s+1 ]

the following row operations are performed:

1. interchange rows 3 and 1;

2. multiply row 1 by −s^2 and add it to row 3;

3. multiply row 2 by s+1 and add it to row 3.

This corresponds to the following multiplications by the elementary matrices given:

1.
    P_1(s) = [ 1    s+1        [ 0  0  1
               0    s^2    =     0  1  0    P(s)
               s^2  0 ]          1  0  0 ]

2.
    P_2(s) = [ 1  s+1             [ 1     0  0
               0  s^2         =     0     1  0    P_1(s)
               0  −s^2(s+1) ]       −s^2  0  1 ]

3.
    P_3(s) = [ 1  s+1       [ 1  0    0
               0  s^2   =     0  1    0    P_2(s)
               0  0 ]         0  s+1  1 ]

In general, any matrix of rank r can be reduced via row and column operations to one of the following normal forms:

    I_r,   [I_r | 0],   [I_r | 0]^T,   or   [ I_r  0
                                              0    0 ]    (4)

Next, we discuss vector spaces or, as they are sometimes called, linear spaces.

2 Vectors and Linear (Vector) Spaces

In most of our applications, we need to deal with real and complex linear vector spaces, which are defined subsequently.

Definition 1 A real linear vector space (resp. complex linear vector space) is a set V, equipped with 2 binary operations, the addition (+) and the scalar multiplication (·), such that

1. x + y = y + x, for all x, y ∈ V

2. x + (y + z) = (x + y) + z, for all x, y, z ∈ V

3. There is an element 0_V in V such that x + 0_V = 0_V + x = x, for all x ∈ V

4. For each x ∈ V, there exists an element −x ∈ V such that x + (−x) = (−x) + x = 0_V

5. For all scalars r_1, r_2 ∈ R (resp. c_1, c_2 ∈ C) and each x ∈ V, we have r_1·(r_2·x) = (r_1 r_2)·x (resp. c_1·(c_2·x) = (c_1 c_2)·x)

6. For each r ∈ R (resp. c ∈ C) and all x_1, x_2 ∈ V, r·(x_1 + x_2) = r·x_1 + r·x_2 (resp. c·(x_1 + x_2) = c·x_1 + c·x_2)

7. For all scalars r_1, r_2 ∈ R (resp. c_1, c_2 ∈ C) and each x ∈ V, we have (r_1 + r_2)·x = r_1·x + r_2·x (resp. (c_1 + c_2)·x = c_1·x + c_2·x)

8. For each x ∈ V, we have 1·x = x, where 1 is the unity in R (resp. in C).

Example 10 The following are linear vector spaces with the associated scalar fields: R^n with R, C^n with C.

    Definition 2 A subset M of a vector space V is a subspace if it is a linear vectorspace in its own right. One necessary condition for M to be a subspace is that itcontains the zero vector.

Let x_1, …, x_n be some vectors in a linear space X defined over a field F. In these notes we shall consider either the real field R or the complex field C. The span of the vectors x_1, …, x_n over the field F is defined as

    Span_F{x_1, …, x_n} := {x ∈ X : x = Σ_{i=1}^n a_i x_i, a_i ∈ F}.

The vectors x_1, …, x_n are said to be linearly independent if

    Σ_{i=1}^n a_i x_i = 0  ⟹  a_i = 0, i = 1, …, n

otherwise they are linearly dependent. A set of vectors x_1, …, x_m is a basis of a linear space X if the vectors are linearly independent and their span is equal to X. In this case the linear space X is said to have finite dimension m.

Note that any set of vectors containing the zero vector is linearly dependent. Also, if {x_i}, i = 1, …, n is linearly dependent, adding a new vector x_{n+1} will not make the new set linearly independent. Finally, if the set {x_i} is linearly dependent, then at least one of the vectors in the set may be written as a linear combination of the others.

How do we test for linear dependence/independence? Given a set of n vectors {x_i}, each having m components, form the m × n matrix A which has the vectors x_i as its columns. Note that if n > m then the set of vectors has to be linearly dependent. Therefore, let us consider the case where n ≤ m. Form the n × n matrix G = A^T A and check if it is nonsingular: if |G| ≠ 0, then {x_i} is linearly independent; otherwise {x_i} is linearly dependent.
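A minimal MATLAB sketch of this Gram-matrix test, on three illustrative vectors chosen so that x_3 = x_1 + 2x_2:

    x1 = [1; 2; 0];  x2 = [0; 1; 1];  x3 = [1; 4; 2];  % dependent by construction
    A  = [x1 x2 x3];
    G  = A'*A;              % n-by-n Gram matrix
    det(G)                  % zero (up to rounding): the set is dependent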

Geometrically, linear independence may be explained in R^n. Consider the plane R^2 and suppose we have 2 vectors x_1 and x_2. If the 2 vectors are linearly dependent, then x_2 = a x_1, i.e. both vectors lie along the same direction. If they are linearly independent, then they form the 2 sides of a parallelogram. Therefore, in the case of linear dependence, the parallelogram degenerates to a single line. We can equip a vector space with many functions. One of them is the inner product, which takes two vectors in V to a scalar either in R or in C; another one is the norm of a vector, which takes a vector in V to a nonnegative value in R.

Given two linear spaces X and Y over the same field F, a function A : X → Y is a linear transformation if A(ax + by) = aA(x) + bA(y), for all a, b ∈ F. Let A be a linear transformation A : X → X and Y a linear subspace Y ⊂ X. The subspace Y is said to be A-invariant if

    A(y) ∈ Y,  for all y ∈ Y.

Given a linear transformation, we define the range or image of A as the subspace of Y

    Range(A) := {y ∈ Y : y = A(x), x ∈ X},

and the kernel or null-space of A as the subspace of X

    N(A) := Ker(A) := {x ∈ X : A(x) = 0}.

Given X = C^m and Y = C^n, an n × m matrix A = [a_{ij}] defines an example of a linear transformation, x ↦ Ax.


Example 11 The set {e_i}, i = 1, …, n is linearly independent in R^n. On the other hand, the 2 vectors x_1^T = [1 2], x_2^T = [2 4] are linearly dependent in R^2.

Example 12 Let X be the vector space of all vectors x = [x_1 … x_n]^T such that all components are equal. Then X is spanned by the vector of all 1's; therefore dim(X) = 1. On the other hand, if X is the vector space of all polynomials of degree n−1 or less, a basis is {1, t, t^2, …, t^{n−1}}, which makes dim(X) = n.

    2.1 Linear Equations

In many control problems, we have to deal with a set of simultaneous linear algebraic equations

    a_{11} x_1 + a_{12} x_2 + ⋯ + a_{1n} x_n = y_1
    a_{21} x_1 + a_{22} x_2 + ⋯ + a_{2n} x_n = y_2
                    ⋮
    a_{m1} x_1 + a_{m2} x_2 + ⋯ + a_{mn} x_n = y_m    (5)

or in matrix notation

    Ax = y    (6)

for an m × n matrix A, an n × 1 vector x and an m × 1 vector y. The problem is to find the solution vector x, given A and y. Three cases might take place:

1. No solution exists

2. A unique solution exists

3. An infinite number of solutions exist

Now, looking back at the linear equation Ax = y, it is obvious that y should be in R(A), the range of A, for a solution x to exist. In other words, if we form the augmented matrix

    W = [A | y]    (7)

then rank(W) = rank(A) is a necessary and sufficient condition for the existence of at least one solution. Now, if z ∈ N(A), and if x is any solution to Ax = y, then x + z is also a solution. Therefore, for a unique solution to exist, we need N(A) = {0}. That will require that the columns of A form a basis of R(A), i.e. that there will be n of them and that they will be linearly independent, and of dimension n. Then the A matrix is invertible and x = A^{-1} y is the unique solution.
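A minimal MATLAB sketch of this rank test, on an illustrative consistent system:

    A = [1 2; 2 4; 0 1];    % 3-by-2 matrix of rank 2
    y = [1; 2; 3];
    W = [A y];              % augmented matrix
    if rank(W) == rank(A)
        x = A\y             % a solution; unique here since N(A) = {0}
    else
        disp('no solution: y is not in the range of A')
    end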

3 Eigenvalues and Eigenvectors

Let A be an n × n matrix, and denote the corresponding identity matrix by I. Then, let x_i be a nonzero vector in R^n and λ_i be a scalar such that

    A x_i = λ_i x_i    (8)

Then λ_i is an eigenvalue of A and x_i is the corresponding eigenvector. There will be n eigenvalues of A (some of which may be repeated). In order to find the eigenvalues of A we rewrite the previous equation:

    (A − λ_i I) x_i = 0    (9)

Noting that x_i cannot be the zero vector, and recalling the conditions on the existence of solutions of linear equations, we see that we have to require

    det(A − λ_i I) = |A − λ_i I| = 0    (10)

We thus obtain an nth-degree polynomial which, when set to zero, gives the characteristic equation

    |A − λI| = Δ(λ) = 0
             = (−λ)^n + c_{n−1} λ^{n−1} + ⋯ + c_1 λ + c_0
             = (−1)^n (λ − λ_1)^{m_1} (λ − λ_2)^{m_2} ⋯ (λ − λ_p)^{m_p}    (11)

Therefore, λ_i is an eigenvalue of A of algebraic multiplicity m_i. One can then show that

    Tr(A) = Σ_{i=1}^n λ_i = (−1)^{n+1} c_{n−1}

    |A| = Π_{i=1}^n λ_i = c_0    (12)

Also, if A is real and λ_i is an eigenvalue, then so is its complex conjugate.

How do we determine eigenvectors? We distinguish 2 cases:

1. All eigenvalues are distinct: In this case, we first find Adj(A − λI) with λ as a parameter. Then, successively substituting each eigenvalue λ_i and selecting any nonzero column gives all the eigenvectors.

Example 13 Let

    A = [ 0    1    0
          0    0    1
          −18  −27  −10 ]

then solve

    |A − λI| = −λ^3 − 10λ^2 − 27λ − 18 = 0

which has 3 solutions, λ_1 = −1, λ_2 = −3, λ_3 = −6. Now let us solve for the eigenvectors using the suggested method:

    Adj(A − λI) = [ λ^2 + 10λ + 27   λ + 10      1
                    −18              λ^2 + 10λ   λ
                    −18λ             −27λ − 18   λ^2 ]

so that for λ_1 = −1 we can see that column 1 gives

    x_1 = [18  −18  18]^T

Similarly, x_2 = [7  −21  63]^T and x_3 = [1  −6  36]^T. There is another method of obtaining the eigenvectors from the definition, by actually solving A x_i = λ_i x_i.

Note that once all eigenvectors are obtained, we can arrange them in an n × n modal matrix M = [x_1 x_2 ⋯ x_n]. Note that the eigenvectors are not unique, since if x_i is an eigenvector, then so is αx_i for any nonzero scalar α.
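For comparison, a minimal MATLAB sketch for Example 13; eig returns unit-norm eigenvectors, so each column is a rescaling of the ones found above:

    A = [0 1 0; 0 0 1; -18 -27 -10];
    [V, D] = eig(A);
    lam = diag(D);                  % eigenvalues -1, -3, -6 (in some order)
    [~, k] = min(abs(lam + 1));     % locate lambda = -1
    V(:,k) ./ [18; -18; 18]         % constant ratio: same eigenvector, rescaled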

2. Some eigenvalues are repeated: In this case, a full set of independent eigenvectors may or may not exist. Suppose λ_i is an eigenvalue with an algebraic multiplicity m_i. Then, the dimension of the null space of A − λ_i I, which is also the number of linearly independent eigenvectors associated with λ_i, is the geometric multiplicity q_i of the eigenvalue λ_i. We can distinguish 3 cases:

(a) Fully degenerate case q_i = m_i: In this case there will be q_i independent solutions to (A − λ_i I)x_i = 0 rather than just one.

Example 14 Given the matrix

    A = [ 10/3  −1  1   1/3
          0     4   0   0
          2/3   1   3   −1/3
          2/3   1   −1  11/3 ]

its characteristic equation is

    Δ(λ) = λ^4 − 14λ^3 + 72λ^2 − 160λ + 128 = 0

There are 4 roots, λ_1 = 2, λ_2 = λ_3 = λ_4 = 4. We can find the eigenvector associated with λ_1 as x_1 = [1  0  −1  −1]^T. Then, we find

    (4I − A)x = [ 2/3   1   −1  −1/3
                  0     0   0   0
                  −2/3  −1  1   1/3
                  −2/3  −1  1   1/3 ] x = 0

There are an infinite number of solutions, 3 of which are x_2 = [1 0 0 2]^T, x_3 = [0 1 1 0]^T, x_4 = [0 1 0 3]^T.

Note that now if x_1, …, x_m are eigenvectors for eigenvalue λ_i, then so is any nonzero y = Σ_{i=1}^m α_i x_i.

(b) Simple degenerate case q_i = 1: Here we can find the first eigenvector by the usual means; then we have m_i − 1 generalized eigenvectors. These are obtained as

    A x_1 = λ_i x_1
    (A − λ_i I) x_2 = x_1
            ⋮
    (A − λ_i I) x_{m_i} = x_{m_i − 1}    (13)

Note that all x_i found this way are linearly independent.

Example 15

    A = [ 0   1    0    0
          0   0    1    0
          0   0    0    1
          −8  −20  −18  −7 ]

its characteristic polynomial is

    Δ(λ) = λ^4 + 7λ^3 + 18λ^2 + 20λ + 8 = 0

so λ_1 = −1, λ_2 = λ_3 = λ_4 = −2. The eigenvector of λ_1 is easily found to be x_1 = [1  −1  1  −1]^T. On the other hand, one eigenvector of λ_2 is found to be x_2 = [−0.125  0.25  −0.5  1]^T; then x_2 = (A + 2I)x_3 leads to x_3 = [−0.1875  0.25  −0.25  0]^T; also, x_3 = (A + 2I)x_4 so that x_4 = [−0.1875  0.1875  −0.125  0]^T.

(c) General case 1 ≤ q_i ≤ m_i: Here, a general top-down method should be used. We solve the problem by writing

    (A − λ_i I) x_1 = 0
    (A − λ_i I) x_2 = x_1  ⟹  (A − λ_i I)^2 x_2 = (A − λ_i I) x_1 = 0
    (A − λ_i I) x_3 = x_2  ⟹  (A − λ_i I)^2 x_3 = (A − λ_i I) x_2 = x_1 ≠ 0,
                               (A − λ_i I)^3 x_3 = (A − λ_i I) x_1 = 0    (14)

This approach continues until we reach the index k_i of λ_i. This index is found as the smallest integer k such that

    rank (A − λ_i I)^k = n − m_i    (15)

The index indicates the length of the longest chain of eigenvectors and generalized eigenvectors associated with λ_i.

Example 16

    A = [ 0  0  1  0
          0  0  0  1
          0  0  0  0
          0  0  0  0 ]

its characteristic polynomial is

    Δ(λ) = λ^4 = 0

so λ_1 = 0 with m_1 = 4. Next, form A − λ_1 I = A, and determine that rank(A − λ_1 I) = 2. There are then 2 eigenvectors and 2 generalized eigenvectors. The question is whether we have one eigenvector-generalized-eigenvector chain of length 3, or 2 chains of length 2. To check that, note that n − m_1 = 0, and that rank (A − λ_1 I)^2 = 0; therefore, the index is k_1 = 2. This then guarantees that the longest chain has length 2, making the length of the other chain 2 as well. First, consider (A − λ_1 I)^2 x = 0. Any vector satisfies this equation, but only 4 vectors are linearly independent. Let us choose x_1 = [1 0 0 0]^T. Is this an eigenvector or a generalized eigenvector? It is an eigenvector since (A − λ_1 I)x_1 = 0. Similarly, x_2 = [0 1 0 0]^T is an eigenvector. On the other hand, x_3 = [0 0 1 0]^T is a generalized eigenvector since (A − λ_1 I)x_3 = x_1 ≠ 0. Similarly, x_4 = [0 0 0 1]^T is a generalized eigenvector associated with x_2.

Example 17 Let

    A = [ 4  1  −2
          0  2  0
          0  1  2 ].

The eigenvalues of A are λ_1 = 4, λ_{2,3} = 2, and the two correspondent eigenvectors are x_1 = (1  0  0)^T, x_2 = (1  0  1)^T. Therefore we have three eigenvalues but only two linearly independent eigenvectors. We call generalized eigenvector of A the vector x_3 = (1  1  1)^T such that

    A x_3 = 2 x_3 + x_2.

The vector x_3 is special in the sense that x_2 and x_3 together span a two-dimensional A-invariant subspace.

In summary, each n × n matrix has n eigenvalues and n linearly independent vectors, either eigenvectors or generalized eigenvectors.

To summarize, given the characteristic polynomial

    p(λ) := det(λI − A) = λ^n + a_1 λ^{n−1} + … + a_n,    (16)

the roots λ_1, …, λ_n of p(λ) = 0 are the eigenvalues of A. The set Λ = {λ_1, …, λ_n} is called the spectrum of A. The spectral radius of A is defined as

    ρ(A) := max_{i=1,…,n} |λ_i|

where |·| is the modulus of the argument; thus ρ(A) is the radius of the smallest circle in the complex plane, centered at the origin, that includes all the eigenvalues. As described earlier, the non-null vectors x_i such that A x_i = λ_i x_i are the (right-)eigenvectors of A. Similarly, y ≠ 0 is a left-eigenvector of A if y*A = λy*. In general a matrix A has at least one eigenvector. It is easy to see that if x is an eigenvector of A, Span{x} is an A-invariant subspace.

In general, let us suppose that a matrix A has an eigenvalue λ of multiplicity r but with only one correspondent eigenvector. Then we can define r − 1 generalized eigenvectors in the following way:

    A x_1 = λ x_1
    A x_2 = λ x_2 + x_1
        ⋮
    A x_r = λ x_r + x_{r−1}.

The eigenvector and the r − 1 generalized eigenvectors together span an r-dimensional A-invariant subspace.

3.1 Jordan Forms

Based on the analysis above, we can produce the Jordan form of any n × n matrix. In fact, if we have n different eigenvalues, we can find all eigenvectors x_i such that A x_i = λ_i x_i; then let M = [x_1 x_2 ⋯ x_n] and Λ = diag(λ_i). This leads to AM = MΛ. We know that M^{-1} exists since all eigenvectors are independent, and therefore Λ = M^{-1} A M. The Jordan form of A is then Λ. If the eigenvectors are orthonormal, M^{-1} = M^T. In the case that q_i = m_i, we proceed the same way to obtain J = Λ. On the other hand, if q_i = 1, then using the eigenvectors and generalized eigenvectors as columns of M will lead to J = diag[J_1 J_2 ⋯ J_p], where each J_i is m_i × m_i and has the following form:

    J_i = [ λ_i  1    0    ⋯    0
            0    λ_i  1    ⋯    0
            ⋮               ⋱
            0    0    ⋯    λ_i  1
            0    0    ⋯    0    λ_i ]

Finally, the general case will have Jordan blocks, each of size k_i, as shown in the examples below.

Example 18 Let

    A = [ 0    1    0
          0    0    1
          −18  −27  −10 ]

the eigenvalues are λ_1 = −1, λ_2 = −3, λ_3 = −6. The eigenvectors are

    x_1 = [1  −1  1]^T

and, similarly, x_2 = [1  −3  9]^T and x_3 = [1  −6  36]^T. Then,

    M = [ 1   1   1
          −1  −3  −6
          1   9   36 ]

then

    M^{-1} = [ 1.8  0.9     0.1
               −1   −1.167  −0.167
               0.2  0.2667  0.0667 ]

then Λ = M^{-1} A M = diag(−1, −3, −6).
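A minimal numerical check of Example 18 in MATLAB:

    A = [0 1 0; 0 0 1; -18 -27 -10];
    M = [1 1 1; -1 -3 -6; 1 9 36];
    M\(A*M)                    % diag(-1, -3, -6) up to rounding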

Example 19 Given the matrix of Example 14,

    A = [ 10/3  −1  1   1/3
          0     4   0   0
          2/3   1   3   −1/3
          2/3   1   −1  11/3 ]

with characteristic equation

    Δ(λ) = λ^4 − 14λ^3 + 72λ^2 − 160λ + 128 = 0

there are 4 roots, λ_1 = 2, λ_2 = λ_3 = λ_4 = 4. We can find the eigenvector associated with λ_1 as x_1 = [1 0 −1 −1]^T. Then we find x_2 = [1 0 0 2]^T, x_3 = [0 1 1 0]^T, x_4 = [0 1 0 3]^T. Therefore,

    M = [ 1   1  0  0
          0   0  1  1
          −1  0  1  0
          −1  2  0  3 ]

then

    M^{-1} = [ 1/3   1/2   −1/2  −1/6
               2/3   −1/2  1/2   1/6
               1/3   1/2   1/2   −1/6
               −1/3  1/2   −1/2  1/6 ]

then

    J = [ 2  0  0  0
          0  4  0  0
          0  0  4  0
          0  0  0  4 ]

Example 20 Now consider again

    A = [ 0   1    0    0
          0   0    1    0
          0   0    0    1
          −8  −20  −18  −7 ]

its characteristic polynomial is

    Δ(λ) = λ^4 + 7λ^3 + 18λ^2 + 20λ + 8 = 0

so λ_1 = −1, λ_2 = λ_3 = λ_4 = −2. Find eigenvectors and generalized eigenvectors as in Example 15 and form

    M = [ 1   −1/8  −0.1875  −0.1875
          −1  1/4   0.25     0.1875
          1   −1/2  −0.25    −0.125
          −1  1     0        0 ]

Then,

    J = [ −1  0   0   0
          0   −2  1   0
          0   0   −2  1
          0   0   0   −2 ]

Example 21

    A = [ 0  0  1  0
          0  0  0  1
          0  0  0  0
          0  0  0  0 ]

its characteristic polynomial is

    Δ(λ) = λ^4 = 0

so λ_1 = 0 with m_1 = 4. Ordering the two chains found in Example 16 as (x_1, x_3) and (x_2, x_4), we form

    M = [ 1  0  0  0
          0  0  1  0
          0  1  0  0
          0  0  0  1 ]

therefore making

    J = [ 0  1  0  0
          0  0  0  0
          0  0  0  1
          0  0  0  0 ]
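Jordan forms such as the ones above can also be obtained directly; the following sketch assumes the Symbolic Math Toolbox, where jordan returns a transformation matrix V and the form J:

    A = [0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0];
    [V, J] = jordan(A);        % A*V = V*J; J consists of two 2-by-2 blocks
    J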

4 Inner Products & Norms

4.1 Inner Products

An inner product is an operation between two vectors of a vector space which will allow us to define geometric concepts such as orthogonality, Fourier series, etc. The following defines an inner product.

Definition 3 An inner product defined over a vector space V is a function <·,·> from V × V to F, where F is either R or C, such that for all x, y, z ∈ V:

1. <x,y> = <y,x>*, where * denotes the complex conjugate

2. <x, y+z> = <x,y> + <x,z>

3. <x, αy> = α<x,y>, for all α ∈ F

4. <x,x> ≥ 0, where the 0 occurs only for x = 0_V

Example 22 The usual dot product in R^n is an inner product, i.e. <x,y> = x^T y = Σ_{i=1}^n x_i y_i.

    4.2 Norms

A norm is a generalization of the ideas of distance and length. As stability theory is usually concerned with the size of some vectors and matrices, we give here a brief description of some norms that will be used in these notes. We will consider first the norms of vectors defined on a vector space X with the associated scalar field of real numbers R.

Let X be a linear space on a field F. A function ||·|| : X → R is called a norm if it satisfies the following properties:

1. ||x|| ≥ 0, for all x ∈ X

2. ||x|| = 0 ⟺ x = 0

3. ||ax|| = |a| ||x||, for all a ∈ F, x ∈ X

4. ||x + y|| ≤ ||x|| + ||y||, for all x, y ∈ X

Let now X = C^n. For a vector x = (x_1 … x_n)^T in C^n the p-norm is defined as

    ||x||_p := ( Σ_{i=1}^n |x_i|^p )^{1/p},  p ≥ 1.    (17)

When p = 1, 2, ∞, we have the three important norms

    ||x||_1 := Σ_{i=1}^n |x_i|

    ||x||_2 := √( Σ_{i=1}^n |x_i|^2 )

    ||x||_∞ := max_i |x_i|.

In a Euclidean space R^n, the 2-norm is the usual Euclidean norm.

Example 23 Consider the vector

    x = [ 1
          −2
          2 ]

Then ||x||_1 = 5, ||x||_2 = 3 and ||x||_∞ = 2.
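These values can be reproduced with the norm command (cf. Section 6):

    x = [1; -2; 2];
    [norm(x,1), norm(x,2), norm(x,inf)]    % returns 5, 3, 2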

Let us now consider the norms for a matrix A ∈ C^{m×n}. First let us define the induced p-norms

    ||A||_p := sup_{x ≠ 0} ||Ax||_p / ||x||_p,  p ≥ 1.

These norms are induced by the p-norms on vectors. For p = 1, 2, ∞, there exist explicit formulae for the induced p-norms:

    ||A||_1 := max_{1≤j≤n} Σ_{i=1}^m |a_{ij}|    (maximum absolute column sum)

    ||A||_2 := √( λ_max(A*A) )    (spectral norm)

    ||A||_∞ := max_{1≤i≤m} Σ_{j=1}^n |a_{ij}|    (maximum absolute row sum).

Unless otherwise specified, we shall adopt the convention of denoting the 2-norm without any subscript. Therefore by ||x|| and ||A|| we shall mean respectively ||x||_2 and ||A||_2.

Another often used matrix norm is the so-called Frobenius norm

    ||A||_F := √( tr(A*A) ) = √( Σ_{i=1}^m Σ_{j=1}^n |a_{ij}|^2 ).    (18)

It is possible to show that the Frobenius norm cannot be induced by any vector norm.
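A minimal MATLAB sketch evaluating the four matrix norms above on an illustrative matrix:

    A = [1 2; 3 4];
    [norm(A,1), norm(A,2), norm(A,inf), norm(A,'fro')]
    % 6 (column sum), 5.4650, 7 (row sum), sqrt(30)
    sqrt(max(eig(A'*A)))       % equals norm(A,2), the spectral norm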

The following lemma presents some useful results regarding matrix norms.

Lemma 1 Let A ∈ C^{m×n}. Then

1. ||AB|| ≤ ||A|| ||B||, for any induced norm (submultiplicative property)

2. Given two unitary matrices U and V of suitable dimensions, ||UAV|| = ||A|| and ||UAV||_F = ||A||_F

3. ρ(A) ≤ ||A||, for any induced norm and for the Frobenius norm

Given a square nonsingular matrix A, the quantity

    κ(A) := ||A||_p ||A^{-1}||_p    (19)

is called the condition number of A with respect to the induced matrix norm p. From Lemma 1, we have

    κ(A) = ||A||_p ||A^{-1}||_p ≥ ||A A^{-1}||_p = ||I||_p = 1.

If κ(A) is large, we say that A is ill conditioned; if κ(A) is small (i.e. close to 1), we say that A is well conditioned. It is possible to prove that, given a matrix A, the reciprocal of κ(A) gives a measure of how far A is from a singular matrix.

We now present an important property of norms of vectors in R^n which will be useful in the sequel.

Lemma 2 Let ||x||_a and ||x||_b be any two norms of a vector x ∈ R^n. Then there exist finite positive constants k_1 and k_2 such that

    k_1 ||x||_a ≤ ||x||_b ≤ k_2 ||x||_a,  for all x ∈ R^n

The two norms in the lemma are said to be equivalent, and this particular property will hold for any two norms on R^n.

Example 24 Note the following:

1. It can be shown that for x ∈ R^n

    ||x||_1 ≤ √n ||x||_2,   ||x||_∞ ≤ ||x||_1 ≤ n ||x||_∞,   ||x||_2 ≤ √n ||x||_∞

2. Consider again the vector of Example 23. Then we can check that

    ||x||_1 ≤ √3 ||x||_2,   ||x||_∞ ≤ ||x||_1 ≤ 3 ||x||_∞,   ||x||_2 ≤ √3 ||x||_∞

Note that a norm may be defined independently from an inner product. Also, we can define the generalized angle between 2 vectors in R^n using

    x·y = <x,y> = ||x|| ||y|| cos θ    (20)

where θ is the angle between x and y. Using the inner product, we can define the orthogonality of 2 vectors by

    x·y = <x,y> = 0    (21)

which of course means that θ = (2i+1)π/2.

5 Applications of Linear Algebra

5.1 The Cayley-Hamilton Theorem

Note that A^i = A·A⋯A, i times. Then, we have the Cayley-Hamilton theorem:

Theorem 1 Let A be an n × n matrix with a characteristic equation

    |A − λI| = Δ(λ) = (−λ)^n + c_{n−1} λ^{n−1} + ⋯ + c_1 λ + c_0 = 0    (22)

Then,

    Δ(A) = (−1)^n A^n + c_{n−1} A^{n−1} + ⋯ + c_1 A + c_0 I = 0    (23)

i.e. every square matrix satisfies its own characteristic equation.

Example 25 Use the Cayley-Hamilton theorem to find the inverse of a matrix A. From

    0 = (−1)^n A^n + c_{n−1} A^{n−1} + ⋯ + c_1 A + c_0 I

multiplying by A^{-1} (which requires c_0 = |A| ≠ 0),

    0 = (−1)^n A^{n−1} + c_{n−1} A^{n−2} + ⋯ + c_2 A + c_1 I + c_0 A^{-1}

so that

    A^{-1} = −(1/c_0) [ (−1)^n A^{n−1} + c_{n−1} A^{n−2} + ⋯ + c_2 A + c_1 I ]

5.2 Solving State Equations

Suppose we want to find a solution to the state-space equation

    ẋ = Ax(t) + bu(t);   x(0) = x_0

and assume initially that we are interested in finding x(t) for u(t) = 0, t ≥ 0. Effectively, we then have

    ẋ = Ax(t)

Let us then take the Laplace transform,

    sX(s) − x_0 = AX(s)

which gives

    X(s) = [sI − A]^{-1} x_0

which then leads to

    x(t) = L^{-1}{[sI − A]^{-1}} x_0

We define the matrix exponential

    exp(At) = L^{-1}{[sI − A]^{-1}}

so that

    x(t) = exp(At) x_0

There are many reasons why this is justified. In fact, it can be shown that the matrix exponential has the following properties:

1. (d/dt) exp(At) = A exp(At) = exp(At) A

2. exp(A(t_1 + t_2)) = exp(At_1) exp(At_2)

3. exp(A·0) = I

4. exp(At) is nonsingular and [exp(At)]^{-1} = exp(−At)

5. exp(At) = lim_{k→∞} Σ_{i=0}^k A^i t^i / i!

The matrix exp(At) is known as the state transition matrix. Now assume that u(t) is no longer zero. Then, the solution due to both x(0) and u(t) is given by

    x(t) = exp(At) x_0 + ∫_0^t e^{A(t−τ)} b u(τ) dτ
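The zero-input (free) response can be evaluated with expm, MATLAB's matrix exponential; a minimal sketch for the system of the example below:

    A  = [2 1; -4 -2];
    x0 = [1; 1];
    t  = 0.5;
    expm(A*t)*x0        % equals [1+3t; 1-6t] here, as derived below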

Example 26 Consider the system

    ẋ = [ 2   1      x,   x(0) = [ 1
          −4  −2 ]                 1 ]

To find x(t), let us find exp(At) using 2 methods. First, by evaluating the infinite series

    exp(At) = I + At + A^2 t^2/2 + ⋯

Since A^2 = 0, the series terminates:

    exp(At) = I + At = [ 1+2t  t
                         −4t   1−2t ]    (24)

so that

    x(t) = exp(At) x(0) = [ 1+3t
                            1−6t ]

Next, consider the Laplace transform approach, where we find (sI − A)^{-1}:

    (sI − A)^{-1} = [ (s+2)/s^2  1/s^2
                      −4/s^2     (s−2)/s^2 ]    (25)

so that

    exp(At) = [ 1+2t  t
                −4t   1−2t ]    (26)

Now consider a third method, whereby we transform A to its Jordan normal form. The A matrix has a double eigenvalue at zero with an eigenvector x_1 = [1 −2]^T and a generalized eigenvector x_2 = [0 1]^T. Therefore, the M matrix is given by

    M = [ 1   0
          −2  1 ]    (27)

Using T = M we obtain

    J = T^{-1} A T = [ 0  1
                       0  0 ]    (28)

so that, in the transformed state z = T^{-1} x,

    ż = [ 0  1     z;   z(0) = [1  3]^T
          0  0 ]

We can then solve 2 differential equations,

    ż_1 = z_2
    ż_2 = 0

so that z_2(t) = 3 and z_1(t) = 3t + 1. Therefore,

    x(t) = T z(t) = [ 1   0    [ 3t+1     =  [ 3t+1
                      −2  1 ]    3 ]           −6t+1 ]    (29)

Note that in general

    exp(At) = M exp(Jt) M^{-1}

since

    M exp(Jt) M^{-1} = M exp(M^{-1}AM t) M^{-1}
                     = M [ I + M^{-1}AM t + (1/2)(M^{-1}AM t)^2 + ⋯ ] M^{-1}
                     = I + At + (1/2) A^2 t^2 + ⋯ = exp(At)

It is extremely simple to calculate exp(Jt) = exp(Λt) in the case of a diagonalizable matrix A, as shown in the following example.

Example 27 Consider the system

    ẋ = [ 1  2      x,   x(0) = [ 1
          2  3 ]                  1 ]

then there are 2 distinct eigenvalues, and we can find exp(Jt):

    λ_1 = −0.2361,  λ_2 = 4.2361

    M = [ 0.8507   0.5257
          −0.5257  0.8507 ]

so that

    M^{-1} = [ 0.8507  −0.5257
               0.5257  0.8507 ]

Note that M^{-1} = M^T. Then,

    exp(At) = [ 0.7237 e^{−0.2361t} + 0.2764 e^{4.2361t}   0.4472 (e^{4.2361t} − e^{−0.2361t})
                0.4472 (e^{4.2361t} − e^{−0.2361t})         0.2764 e^{−0.2361t} + 0.7237 e^{4.2361t} ]
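A minimal MATLAB check of the relation exp(At) = M exp(Jt) M^{-1} for this example:

    A = [1 2; 2 3];
    [M, L] = eig(A);                    % A symmetric, so M is orthogonal
    t  = 0.3;
    E1 = expm(A*t);
    E2 = M*diag(exp(diag(L)*t))*M';     % M^{-1} = M' here
    norm(E1 - E2)                       % ~0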

Recall that a matrix A ∈ C^{n×n} is said to be Hermitian if A = A*, and unitary if A^{-1} = A*. A Hermitian matrix has the property that all its eigenvalues are real. Let A = (a_1  a_2  ⋯  a_n), where the a_i are the columns of A. It is easy to verify that A is unitary if and only if a_i* a_j = δ_{ij}, where δ_{ij} = 1 if i = j and δ_{ij} = 0 if i ≠ j, i.e. if and only if the columns of A form an orthonormal basis for C^n.

Let us now consider a nonsingular A partitioned as

    A = ( A_{11}  A_{12}
          A_{21}  A_{22} )    (30)

with A_{11} and A_{22} square matrices. Suppose that A_{11} is nonsingular. Then the matrix

    Δ := A_{22} − A_{21} A_{11}^{-1} A_{12}

is called the Schur complement of A_{11} in A. Similarly, if A_{22} is nonsingular, the matrix

    ∇ := A_{11} − A_{12} A_{22}^{-1} A_{21}

is the Schur complement of A_{22} in A. A useful expression for the inverse of A in terms of partitioned blocks is

    A^{-1} = ( ∇^{-1}                      −A_{11}^{-1} A_{12} Δ^{-1}
               −Δ^{-1} A_{21} A_{11}^{-1}  Δ^{-1} )    (31)

supposing that all the relevant inverses exist. The following well-established identities are also very useful:

    (A − BD^{-1}C)^{-1} = A^{-1} + A^{-1}B (D − CA^{-1}B)^{-1} CA^{-1}    (32)

    (I + AB)^{-1} A = A (I + BA)^{-1}    (33)

    I − AB(I + AB)^{-1} = (I + AB)^{-1}    (34)

supposing the existence of all the needed inverses. If A_{11} is nonsingular, then by multiplying A on the right by

    ( I  −A_{11}^{-1} A_{12}
      0  I )

it is easy to verify that

    det(A) = det(A_{11}) det(Δ).    (35)

Similarly, if A_{22} is nonsingular, then by multiplying A on the right by

    ( I                    0
      −A_{22}^{-1} A_{21}  I )

we find that

    det(A) = det(A_{22}) det(∇).    (36)

Consider A ∈ C^{m×n} and B ∈ C^{n×m}. Using identities (35) and (36), it is easy to prove that

    det(I_m − AB) = det(I_n − BA).
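A quick numerical sanity check of this determinant identity on illustrative random data:

    m = 4; n = 2;
    A = randn(m, n); B = randn(n, m);
    det(eye(m) - A*B) - det(eye(n) - B*A)    % ~0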

5.3 Matrix Decompositions

In this section we shall explore some matrix decompositions that use unitary matrices.

Theorem 2 (Schur Decomposition) Let A ∈ C^{n×n}. Then there exists a unitary matrix U ∈ C^{n×n} such that A = UTU*, where T is an upper triangular matrix whose diagonal entries are the eigenvalues of A.

Proof 2 We shall prove the theorem by constructing the matrices U and T. Given a matrix A, we can always find an eigenvector u_1, associated with an eigenvalue λ_1, so that Au_1 = λ_1 u_1. Moreover, we can assume that ||u_1|| = 1. Let U_1 = (u_1  z_2  ⋯  z_n) be a unitary matrix. Since the columns of U_1 are a basis for C^n, we have

    A (u_1  z_2  ⋯  z_n) = (u_1  z_2  ⋯  z_n) ( λ_1  ∗
                                                 0    A_1 ).

Therefore

    A = U_1 ( λ_1  ∗        U_1*.    (37)
              0    A_1 )

Now let us consider A_1 ∈ C^{(n−1)×(n−1)}. From (37) it is easy to see that the (n−1) eigenvalues of A_1 are also eigenvalues of A. Given λ_2 an eigenvalue of A_1 with normalized eigenvector u_2, we can construct a unitary matrix U_2 = (u_2  z′_2  ⋯  z′_{n−1}) ∈ C^{(n−1)×(n−1)} so that

    A_1 U_2 = U_2 ( λ_2  ∗
                    0    A_2 ).

Denoting by V_2 the unitary matrix

    V_2 = ( 1  0
            0  U_2 ),

we have

    V_2* U_1* A U_1 V_2 = ( 1  0      ( λ_1  ∗      ( 1  0
                            0  U_2* )   0    A_1 )    0  U_2 )

                        = ( λ_1  ∗    ∗
                            0    λ_2  ∗
                            0    0    A_2 ).

The result can be proven by iterating this procedure n times.
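In MATLAB the decomposition is computed by schur (cf. Section 6); the 'complex' flag forces the triangular, rather than real quasi-triangular, form:

    A = randn(4);                   % illustrative matrix
    [U, T] = schur(A, 'complex');
    norm(A - U*T*U')                % ~0, with U unitary
    diag(T)                         % the eigenvalues of A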

Using Theorem 2 it is possible to prove that a Hermitian matrix A ∈ C^{n×n} has a set of n linearly independent eigenvectors. Indeed, since A is Hermitian, from Theorem 2 we have

    A = UTU* = (UTU*)* = UT*U*  ⟹  T = T*.

Since T is triangular and T = T*, it must be diagonal. Therefore the columns of the matrix U are the eigenvectors of A.

In Theorem 2, we used the same basis both for the domain and the range of the matrix. In the following decomposition, we shall show what can be accomplished when we choose possibly different bases for the domain and the range.

Theorem 3 (Singular Value Decomposition) Let A ∈ C^{m×n}. Then A can always be written as A = UΣV*, where U ∈ C^{m×m} and V ∈ C^{n×n} are unitary matrices and Σ = [σ_{ij}] ∈ R^{m×n} is a diagonal matrix with σ_{ij} = 0 for i ≠ j and σ_{ii} = σ_i ≥ 0.

Proof 3 We shall prove the theorem by induction. First of all, let us suppose, without any loss of generality, that A ≠ 0 and m > n. Let now m = p+1 and n = 1. In this case we can write

    A = ( A/σ  u_2  ⋯  u_{p+1} ) ( σ
                                   0
                                   ⋮
                                   0 ),

with σ = ||A||. Therefore A = UΣV*, with U = (A/σ  u_2  ⋯  u_{p+1}), Σ = (σ  0  ⋯  0)^T and V = 1.

We suppose now that the theorem holds for m = p+k and n = k, and prove it for m = p+k+1 and n = k+1. Let A ∈ C^{(p+k+1)×(k+1)}. All the eigenvalues of A*A are real (since A*A is Hermitian) and nonnegative, and at least one is greater than zero since A ≠ 0. Denote by λ the maximum eigenvalue of A*A and by v the correspondent normalized eigenvector:

    A*A v = λ v.

Let now σ = +√λ and u = Av/σ, so that ||u|| = 1. Let moreover U_1 = (u  U_0) ∈ C^{(p+k+1)×(p+k+1)} and V_1 = (v  V_0) ∈ C^{(k+1)×(k+1)} be unitary matrices. We have

    A_1 = U_1* A V_1 = ( u*       A ( v  V_0 ) = ( σ  u*AV_0
                         U_0* )                    0  U_0*AV_0 ).

Since u*A = v*A*A/σ = σv*, it follows that u*AV_0 = σ v*V_0 = 0. Now an easy inductive argument completes the proof, noting that U_0*AV_0 ∈ C^{(p+k)×k}.

The scalars σ_i are called the singular values of A. They are usually ordered nonincreasingly, σ_1 ≥ σ_2 ≥ … ≥ σ_r ≥ 0, with r = min{m, n}. The largest singular value σ_1 and the smallest singular value σ_r are denoted by

    σ_max(A) := σ_1,   σ_min(A) := σ_r.

The columns v_i of V are called the right singular vectors of A, and the columns u_i of U the left singular vectors. They are related by

    A v_i = σ_i u_i,  i = 1, …, r.

The following lemma shows some of the information we can get from the singular value decomposition of a matrix A.

Lemma 3 Let A ∈ C^{m×n} and consider its singular value decomposition A = UΣV*. Then

1. The rank k of A equals the number of singular values different from zero

2. Range(A) = Span{u_1, …, u_k}

3. Ker(A) = Span{v_{k+1}, …, v_n}

4. σ_1 = ||A||

5. ||A||_F^2 = Σ_{i=1}^r σ_i^2

6. Given a square nonsingular matrix A,

    σ_max(A) = 1 / σ_min(A^{-1})
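A minimal MATLAB sketch reading this information off the SVD of an illustrative rank-1 matrix:

    A = [1 2; 2 4; 3 6];            % rank 1: proportional columns
    [U, S, V] = svd(A);
    s = diag(S);
    nnz(s > 1e-10)                  % numerical rank: 1
    [s(1), norm(A)]                 % sigma_1 equals the 2-norm
    [norm(s), norm(A,'fro')]        % Frobenius norm from the singular values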

5.4 Bilinear Forms and Sign-definite Matrices

The expression <y, Ax> = y^T Ax is a bilinear form. When y = x we have a quadratic form x^T Ax. Every matrix A can be written as the sum of a symmetric and a skew-symmetric matrix if it is real, and of a Hermitian and a skew-Hermitian matrix if it is complex. In fact, define the symmetric and anti-symmetric parts of A as

    A_s = (A + A^T)/2;   A_a = (A − A^T)/2

and the Hermitian and anti-Hermitian parts as

    A_H = (A + A*)/2;    A_{AH} = (A − A*)/2

Then note that <x, Ax> = <x, A_s x> if A is real and, if A is complex, Re <x, Ax> = <x, A_H x> (the anti-Hermitian part contributes only an imaginary component).

5.4.1 Definite Matrices

Let us consider a Hermitian matrix A. A is said to be positive (semi)definite if x*Ax > (≥) 0 for all nonzero x ∈ C^n. We shall indicate a positive (semi)definite matrix by A > (≥) 0. A Hermitian matrix A is said to be negative (semi)definite if −A is positive (semi)definite. The following lemma gives a characterization of definite matrices.

Lemma 4 Let A be a Hermitian matrix. Then

1. A is positive (negative) definite if and only if all its eigenvalues are positive (negative).

2. A is positive (negative) semidefinite if and only if all its eigenvalues are nonnegative (nonpositive).

Given a real n × n matrix Q, then

1. Q is positive definite if and only if x^T Qx > 0 for all x ≠ 0.

2. Q is positive semidefinite if x^T Qx ≥ 0 for all x.

3. Q is indefinite if x^T Qx > 0 for some x and x^T Qx < 0 for other x.

Q is negative definite (or semidefinite) if −Q is positive definite (or semidefinite). If Q is symmetric then all its eigenvalues are real.

Example 28 Show that if A is symmetric, then A is positive definite if and only if all of its eigenvalues (which we know are real) are positive. This then allows us to test for the sign-definiteness of a matrix A by looking at the eigenvalues of its symmetric part.
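A minimal MATLAB sketch of the test suggested in Example 28:

    Q  = [2 1; -1 2];               % illustrative nonsymmetric matrix
    Qs = (Q + Q')/2;                % symmetric part; x'*Q*x = x'*Qs*x
    eig(Qs)                         % all positive, so Q is positive definite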

5.5 Matrix Inversion Lemmas

There are various important relations involving inverses of matrices, one of which is the Matrix Inversion Lemma

    (A_1 − A_2 A_4^{-1} A_3)^{-1} = A_1^{-1} + A_1^{-1} A_2 (A_4 − A_3 A_1^{-1} A_2)^{-1} A_3 A_1^{-1}    (38)

Example 29 Prove the Matrix Inversion Lemma.
Solution: Let the LHS be A^{-1}, where A = A_1 − A_2 A_4^{-1} A_3, and the RHS be B. The proof proceeds by showing that

    AB = BA = I

So,

    AB = (A_1 − A_2 A_4^{-1} A_3)(A_1^{-1} + A_1^{-1} A_2 (A_4 − A_3 A_1^{-1} A_2)^{-1} A_3 A_1^{-1})
       = I − A_2 A_4^{-1} A_3 A_1^{-1} + A_2 (A_4 − A_3 A_1^{-1} A_2)^{-1} A_3 A_1^{-1}
           − A_2 A_4^{-1} A_3 A_1^{-1} A_2 (A_4 − A_3 A_1^{-1} A_2)^{-1} A_3 A_1^{-1}
       = I − A_2 A_4^{-1} [ I − A_4 (A_4 − A_3 A_1^{-1} A_2)^{-1} + A_3 A_1^{-1} A_2 (A_4 − A_3 A_1^{-1} A_2)^{-1} ] A_3 A_1^{-1}
       = I − A_2 A_4^{-1} [ I − (A_4 − A_3 A_1^{-1} A_2)(A_4 − A_3 A_1^{-1} A_2)^{-1} ] A_3 A_1^{-1}
       = I

On the other hand,

    BA = (A_1^{-1} + A_1^{-1} A_2 (A_4 − A_3 A_1^{-1} A_2)^{-1} A_3 A_1^{-1})(A_1 − A_2 A_4^{-1} A_3)
       = I − A_1^{-1} A_2 A_4^{-1} A_3 + A_1^{-1} A_2 (A_4 − A_3 A_1^{-1} A_2)^{-1} A_3
           − A_1^{-1} A_2 (A_4 − A_3 A_1^{-1} A_2)^{-1} A_3 A_1^{-1} A_2 A_4^{-1} A_3
       = I − A_1^{-1} A_2 [ I − (A_4 − A_3 A_1^{-1} A_2)^{-1} A_4 + (A_4 − A_3 A_1^{-1} A_2)^{-1} A_3 A_1^{-1} A_2 ] A_4^{-1} A_3
       = I − A_1^{-1} A_2 [ I − (A_4 − A_3 A_1^{-1} A_2)^{-1} (A_4 − A_3 A_1^{-1} A_2) ] A_4^{-1} A_3
       = I
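A quick numerical verification of (38) on illustrative, well-conditioned random blocks:

    n  = 3;
    A1 = 5*eye(n) + randn(n);  A2 = randn(n);
    A3 = randn(n);             A4 = 5*eye(n) + randn(n);
    LHS = inv(A1 - A2*inv(A4)*A3);
    RHS = inv(A1) + inv(A1)*A2*inv(A4 - A3*inv(A1)*A2)*A3*inv(A1);
    norm(LHS - RHS)            % ~0 when all the inverses exist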

Now let us consider the partition (30) for a Hermitian matrix A:

    A = ( A_{11}   A_{12}
          A_{12}*  A_{22} )    (39)

with A_{11} and A_{22} square matrices. The following theorem gives a characterization of a positive definite matrix in terms of its partition (39).

Theorem 4 Let A be a Hermitian matrix partitioned as in (39). Then

1. A is positive definite if and only if A_{11} and A_{22} − A_{12}* A_{11}^{-1} A_{12} are positive definite

2. A is positive definite if and only if A_{22} and A_{11} − A_{12} A_{22}^{-1} A_{12}* are positive definite

Proof 4
1. The proof follows from the facts that

    ( A_{11}   A_{12}    =  ( I                    0    ( A_{11}  0                                     ( I  A_{11}^{-1}A_{12}
      A_{12}*  A_{22} )       A_{12}*A_{11}^{-1}   I )    0       A_{22} − A_{12}*A_{11}^{-1}A_{12} )     0  I )

and that

    ( I  A_{11}^{-1}A_{12}
      0  I )

is a full-rank matrix.

2. As in (1), writing A as

    ( A_{11}   A_{12}    =  ( I  A_{12}A_{22}^{-1}    ( A_{11} − A_{12}A_{22}^{-1}A_{12}*  0         ( I                    0
      A_{12}*  A_{22} )       0  I )                    0                                  A_{22} )    A_{22}^{-1}A_{12}*  I ).

6 MATLAB Commands

There are plenty of MATLAB commands dealing with matrices. In this section we shall give a brief overview of those related to the topics covered in this chapter. We recall that the (conjugate) transpose of a matrix A is evaluated simply by typing A' and that a polynomial is represented by the row vector of its coefficients ordered in descending powers.

- Q=orth(A) returns a matrix Q whose columns are an orthonormal basis for the range of a (rectangular) matrix A.

- Q=null(A) returns a matrix Q whose columns are an orthonormal basis for the null space of a (rectangular) matrix A.

- det(A) returns the determinant of a square matrix A.

- inv(A) returns the inverse (if it exists) of a square matrix A. A warning is given if the matrix is ill conditioned.

- pinv(A) returns the pseudo-inverse of a matrix A.

- trace(A) returns the trace of a square matrix A.

- rank(A) returns the rank of a matrix A.

- cond(A) evaluates the condition number (19) of a square matrix A, using the matrix spectral norm. In this case (19) becomes

      κ(A) = σ_max(A) / σ_min(A).

  The command condest(A) can be used to get an estimate of the 1-norm condition number.

- [V,D]=eig(A) returns a diagonal matrix D whose entries are the eigenvalues of A and a matrix V whose columns are the normalized eigenvectors of A, such that A*V=V*D. V may be singular (when A is defective). The command eigs(A) returns only some of the eigenvalues, by default the six largest in magnitude.

- poly(A) returns a row vector with (n+1) elements, whose entries are the coefficients of the characteristic polynomial (16) of A.

- norm(A) returns the 2-norm for both vectors and matrices. There are, however, some differences in the two cases:

  - If A is a vector, the p-norm defined in (17) can be evaluated by typing norm(A,p), where p can be either a real number or inf, to evaluate the ∞-norm.

  - If A is a matrix, the argument p in the command norm(A,p) can only be 1, 2, inf or 'fro', where the returned norms are respectively the 1-, 2-, ∞- or Frobenius matrix norms defined in Section 4.2.

- [U,T]=schur(A) returns an upper quasi-triangular matrix T and a unitary matrix U such that A = U*T*U'. The matrix T presents the real eigenvalues of A on the diagonal and the complex eigenvalues in 2-by-2 blocks on the diagonal. The command [U,T] = rsf2csf(U,T) can be used to get an upper-triangular T from an upper quasi-triangular T.

- [U,S,V]=svd(A) returns a diagonal, in general rectangular, matrix S, whose entries are the singular values of A, and two unitary matrices U and V, stacking up in columns the left and right singular vectors of A, such that A=U*S*V'. The command svds(A) returns only some of the singular values, by default the five largest.

References

[1] F. R. Gantmacher. The Theory of Matrices. Chelsea Publishing Company, New York, NY, 1959. Two volumes.

[2] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1985.

[3] B. Noble and J. W. Daniel. Applied Linear Algebra. Prentice-Hall, third edition, 1988.

[4] G. Strang. Introduction to Linear Algebra. Wellesley-Cambridge Press, Cambridge, MA, 3rd edition, 2003.