
Linear Algebra Written Examinations Study Guide

Eduardo Corona (other authors as they join in)

November 2, 2008

Contents

1 Vector Spaces and Matrix Operations
2 Linear Operators
3 Diagonalizable Operators
  3.1 The Rayleigh Quotient and the Min-Max Theorem
  3.2 Gershgorin's Discs Theorem
4 Hilbert Space Theory: Inner Product, Orthogonal Projection and Adjoint Operators
  4.1 Orthogonal Projection
  4.2 The Gram-Schmidt Process and QR Factorization
  4.3 Riesz Representation Theorem and the Adjoint Operator
5 Normal and Self-Adjoint Operators: Spectral Theorems and Related Results
  5.1 Unitary Operators
  5.2 Positive Operators and Square Roots
6 Singular Value Decomposition and the Moore-Penrose Generalized Inverse
  6.1 Singular Value Decomposition
  6.2 The Moore-Penrose Generalized Inverse
  6.3 The Polar Decomposition
7 Matrix Norms and Low Rank Approximation
  7.1 The Frobenius Norm
  7.2 Operator Norms
  7.3 Low Rank Matrix Approximation
8 Generalized Eigenvalues, the Jordan Canonical Form and $e^A$
  8.1 The Generalized Eigenspace $K_\lambda$
  8.2 A Method to Compute the Jordan Form: The Points Diagram
  8.3 Applications: Matrix Powers and Power Series
9 Nilpotent Operators
10 Other Important Matrix Factorizations
11 Other Topics (which appear in past exams)
12 Yet More Topics I Can Think Of

1 Vector Spaces and Matrix Operations

2 Linear Operators

Definition 1 Let $U, V$ be vector spaces over $F$ (usually $F = \mathbb{R}$ or $\mathbb{C}$). Then $\mathcal{L}(U, V) = \{T : U \to V \mid T \text{ is linear}\}$. In particular, $\mathcal{L}(U, U) = \mathcal{L}(U)$ is the space of linear operators on $U$, and $\mathcal{L}(U, F) = U^*$ is its algebraic dual.

Definition 2 Important subspaces: given a subspace $W \subseteq V$, the preimage $T^{-1}(W) \subseteq U$ is a subspace. In particular, we are interested in $T^{-1}(\{0\}) = \operatorname{Ker}(T)$. Also, if $S \subseteq U$ is a subspace, then $T(S) \subseteq V$ is a subspace. We are most interested in $T(U) = \operatorname{Ran}(T)$.

Theorem 3 Let $U, V$ be vector spaces over $F$ with $\dim(U) = n$, $\dim(V) = m$. Given a basis $B = \{u_1, \dots, u_n\}$ of $U$ and a basis $B' = \{v_1, \dots, v_m\}$ of $V$, to each $T \in \mathcal{L}(U, V)$ we can associate a matrix $[T]_B^{B'}$ such that:

$$Tu_i = a_{1i}v_1 + \dots + a_{mi}v_m \quad \forall i \in \{1, \dots, n\}, \qquad [T]_B^{B'} = (a_{ij}) \in M_{m \times n}(F)$$

$$\begin{array}{ccc}
U & \xrightarrow{\ T\ } & V \\
B \big\downarrow & & \big\downarrow B' \\
F^n & \xrightarrow{\ [T]_B^{B'}\ } & F^m
\end{array}$$

Conversely, given a matrix $A \in M_{m \times n}(F)$, there is a unique $T_A \in \mathcal{L}(U, V)$ such that $A = [T_A]_B^{B'}$.
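As a concrete illustration of Theorem 3 (a sketch of mine, not part of the original guide): in numpy, the matrix $[T]_B^{B'}$ can be assembled column by column, where column $i$ holds the $B'$-coordinates of $Tu_i$. The map $T$ and both bases below are made up for the example.

```python
import numpy as np

# A hypothetical linear map T : R^2 -> R^3, given by its action on vectors.
def T(u):
    x, y = u
    return np.array([x + y, 2 * x, x - 3 * y])

# Basis B of R^2 and basis B' of R^3 (stored as columns), chosen arbitrarily.
B = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
B_prime = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0],
                    [0.0, 0.0, 1.0]])  # columns are the basis vectors of B'

# Column i of [T]_B^{B'} solves B' c = T(u_i), i.e. c holds B'-coordinates.
M = np.column_stack([np.linalg.solve(B_prime, T(u)) for u in B])
print(M)  # the 3 x 2 matrix representation [T]_B^{B'}
```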

Proposition 4 Given $T \in \mathcal{L}(U)$, there exist bases $B$ and $B'$ of $U$ such that:

$$[T]_B^{B'} = \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix}$$

$B$ is constructed as an extension of a basis of $\operatorname{Ker}(T)$, and $B'$ as an extension of $\{T(u)\}_{u \in B}$.


Theorem 5 (Rank and Nullity) $\dim(U) = \dim(\operatorname{Ker}(T)) + \dim(\operatorname{Ran}(T))$. Here $\nu(T) = \dim(\operatorname{Ker}(T))$ is known as the nullity of $T$, and $r(T) = \dim(\operatorname{Ran}(T))$ as the rank of $T$.
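A quick numerical sanity check of rank-nullity (my own toy example; scipy's null_space returns an orthonormal basis of the kernel):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 * row 1, so the rank drops to 2
              [1.0, 0.0, 1.0]])

r = np.linalg.matrix_rank(A)    # dim Ran(A)
nu = null_space(A).shape[1]     # dim Ker(A)
assert r + nu == A.shape[1]     # rank-nullity: r(T) + nu(T) = dim(U)
```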

Change of Basis: Let $U, V$ be vector spaces over $F$, with $B$, $\alpha$ bases of $U$ and $B'$, $\alpha'$ bases of $V$. Then there exist invertible matrices $P$, $Q$ (the coordinate-change matrices between the bases) such that:

$$[T]_B^{B'} = Q\,[T]_\alpha^{\alpha'}\,P^{-1}$$

In the operator case ($U = V$, $B' = B$, $\alpha' = \alpha$) we may take $Q = P$, giving $[T]_B = P\,[T]_\alpha\,P^{-1}$. Here $P$ is a matrix that performs a change of coordinates. This means that if two matrices are similar, they represent the same linear operator in different bases, which further justifies that key properties of matrices are preserved under similarity.
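To see this invariance numerically, here is a minimal sketch (example mine): conjugating $A$ by a random invertible $P$ leaves the characteristic polynomial, and hence the eigenvalues, trace and determinant, unchanged up to round-off.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # generically invertible
B = P @ A @ np.linalg.inv(P)      # B is similar to A

# Similar matrices share the characteristic polynomial (np.poly returns
# its coefficients), hence the same eigenvalues, trace and determinant.
assert np.allclose(np.poly(A), np.poly(B))
```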

3 Diagonalizable Operators

If $U = V$ (i.e., $T$ is a linear operator), it is natural to impose that the bases $B$ and $B'$ are also the same. In this case, it is no longer true in general that we can find a basis $B$ such that the corresponding matrix is diagonal. However, if there exists a basis $B$ such that $[T]_B = \Lambda$, a diagonal matrix, we say $T$ is diagonalizable.

Definition 6 Let $V$ be a vector space over $F$ and $T \in \mathcal{L}(V)$. $\lambda \in F$ is an eigenvalue of $T$ if there exists a nonzero vector $v \in V$ such that $Tv = \lambda v$. All nonzero vectors for which this holds are known as eigenvectors of $T$.

We can immediately derive, from this definition, that the existence of the eigenpair $(\lambda, v)$ (eigenvalue $\lambda$ and corresponding eigenvector $v$) is equivalent to the existence of a nonzero solution $v$ to

$$(T - \lambda I)v = 0$$

This in turn tells us that the eigenvalues of $T$ are those $\lambda$ such that the operator $T - \lambda I$ is not invertible. After selecting a basis $B$ for $V$, this also means:

$$\det([T]_B - \lambda I) = 0$$

which is called the characteristic equation of $T$. We notice this equation does not depend on the choice of basis $B$, since the determinant is invariant under similarity:

$$\det(PAP^{-1} - \lambda I) = \det(P(A - \lambda I)P^{-1}) = \det(A - \lambda I)$$

This equation, finally, is equivalent to finding the complex roots of a polynomial in $\lambda$. We know this to be a genuinely hard problem for $n \geq 5$ (by Abel-Ruffini, no general closed-form solution in radicals exists), and a numerically ill-conditioned problem at that.
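To make the contrast concrete, here is a small sketch (mine, not from the guide): numpy can build the characteristic polynomial and find its roots, but in practice eigenvalues are computed directly from the matrix (via QR iteration) precisely because polynomial root-finding is numerically fragile.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)              # coefficients of det(A - lambda I)
via_roots = np.roots(coeffs)     # eigenvalues via polynomial root-finding
direct = np.linalg.eigvals(A)    # eigenvalues computed from A directly
print(np.sort(via_roots), np.sort(direct))   # both approximately [1., 3.]
```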


Definition 7 Let $V$ be a vector space over $F$, $T \in \mathcal{L}(V)$, and $\lambda$ an eigenvalue of $T$. Then $E_\lambda = \{v \in V \mid Tv = \lambda v\}$ is the eigenspace for $\lambda$.

Theorem 8 Let $V$ be a finite-dimensional vector space over $F$ and $T \in \mathcal{L}(V)$. The following are equivalent:
i) $T$ is diagonalizable
ii) $V$ has a basis of eigenvectors of $T$
iii) There exist subspaces $W_1, \dots, W_n$ such that $\dim(W_i) = 1$, $T(W_i) \subseteq W_i$ and $V = \bigoplus_{i=1}^n W_i$
iv) $V = \bigoplus_{i=1}^k E_{\lambda_i}$, with $\{\lambda_1, \dots, \lambda_k\}$ the eigenvalues of $T$
v) $\sum_{i=1}^k \dim(E_{\lambda_i}) = \dim(V)$
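Criterion (ii) suggests a numerical test, sketched below under my own assumptions (the rank tolerance makes it a heuristic for nearly-defective matrices):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-10):
    """Test criterion (ii): does A admit a basis of eigenvectors?"""
    _, V = np.linalg.eig(A)
    # A basis of eigenvectors exists iff the eigenvector matrix is invertible.
    return np.linalg.matrix_rank(V, tol=tol) == A.shape[0]

print(is_diagonalizable(np.array([[2.0, 0.0], [0.0, 3.0]])))  # True
print(is_diagonalizable(np.array([[1.0, 1.0], [0.0, 1.0]])))  # False (Jordan block)
```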

Proposition 9 Let $V$ be a finite-dimensional vector space over $\mathbb{C}$ and $T \in \mathcal{L}(V)$. Then $T$ has at least one eigenvalue (this is a corollary of the Fundamental Theorem of Algebra, applied to the characteristic equation).

Theorem 10 (Schur factorization) Let $V$ be a finite-dimensional vector space over $\mathbb{C}$ and $T \in \mathcal{L}(V)$. There always exists a basis $B$ such that $[T]_B$ is upper triangular.
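scipy exposes this factorization directly; a minimal sketch (random example matrix mine):

```python
import numpy as np
from scipy.linalg import schur

A = np.random.default_rng(0).standard_normal((4, 4))
T, Z = schur(A, output='complex')        # A = Z T Z*, with Z unitary
assert np.allclose(Z @ T @ Z.conj().T, A)
assert np.allclose(T, np.triu(T))        # T upper triangular; eigenvalues on diag(T)
```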

3.1 The Rayleigh Quotient and the Min-Max Theorem

3.2 Gershgorin's Discs Theorem

Although calculating the eigenvalues of a large matrix is a very difficult problem (computationally and analytically), it is very easy to come up with regions in the complex plane where all the eigenvalues of a particular operator $T$ must lie. This technique was first devised by the Russian mathematician Semyon Aranovich Gershgorin (1901-1933):

Theorem 11 (Gershgorin, 1931) Let $A = (a_{ij}) \in M_n(\mathbb{C})$. For each $i \in \{1, \dots, n\}$, we define the $i$th radius of $A$ as $r_i(A) = \sum_{j \neq i} |a_{ij}|$ and the $i$th Gershgorin disc as $D_i(A) = \{z \in \mathbb{C} \mid |z - a_{ii}| \leq r_i(A)\}$.

Then, if we define $\sigma(A) = \{\lambda \mid \lambda \text{ is an eigenvalue of } A\}$, it follows that:

$$\sigma(A) \subseteq \bigcup_{i=1}^n D_i(A)$$

That is, all eigenvalues of $A$ must lie inside one or more Gershgorin discs.


Proof. Let $\lambda$ be an eigenvalue of $A$ and $v$ an associated eigenvector. We fix $i$ as the index of the coordinate of $v$ with maximum modulus, that is, $|v_i| \geq |v_k|\ \forall k$; necessarily $|v_i| \neq 0$. Then,

$$Av = \lambda v \implies \lambda v_i = \sum_j a_{ij} v_j$$

$$(\lambda - a_{ii})\,v_i = \sum_{j \neq i} a_{ij} v_j$$

$$|\lambda - a_{ii}|\,|v_i| \leq \sum_{j \neq i} |a_{ij}|\,|v_i|$$

Dividing by $|v_i| > 0$ gives $|\lambda - a_{ii}| \leq r_i(A)$, that is, $\lambda \in D_i(A)$. $\blacksquare$

Now, we know that $A$ represents a linear operator $T \in \mathcal{L}(\mathbb{C}^n)$, and that its eigenvalues are therefore invariant under transposition of $A$ and under similarity. Therefore:

Corollary 12 Let $A = (a_{ij}) \in M_n(\mathbb{C})$. Then:

$$\sigma(A) \subseteq \bigcap \left\{ \bigcup_{i=1}^n D_i(PAP^{-1}) \,\middle|\, P \text{ is invertible} \right\}$$

Of course, if $T$ is diagonalizable, one of these $P$'s is the one for which $PAP^{-1}$ is diagonal, and then the Gershgorin discs degenerate to the $n$ points we are looking for. However, if we don't want to compute the eigenvalues, we can still use this to come up with a fine heuristic to shrink the region given by the union of the Gershgorin discs: we can use permutation matrices or diagonal matrices as our $P$'s to get a reasonable region. This result also hints at the fact that, if we perturb a matrix $A$, the eigenvalues change continuously. The Gershgorin disc theorem is also a quick way to prove that $A$ is invertible if it is diagonally dominant, and it also provides results when the Gershgorin discs are pairwise disjoint (namely, each disc must then contain exactly one eigenvalue).
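A short numpy sketch of the containment $\sigma(A) \subseteq \bigcup_i D_i(A)$ (example matrix mine):

```python
import numpy as np

A = np.array([[ 4.0,  1.0, 0.5],
              [ 0.2, -3.0, 0.3],
              [ 0.1,  0.4, 1.0]])

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)  # r_i(A) = sum_{j != i} |a_ij|

# Every eigenvalue must lie in at least one closed Gershgorin disc.
for lam in np.linalg.eigvals(A):
    assert any(abs(lam - c) <= r for c, r in zip(centers, radii))
```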

4 Hilbert Space Theory: Inner Product, Orthogonal Projection and Adjoint Operators

Definition 13 Let $V$ be a vector space over $F$. An inner product on $V$ is a function $\langle \cdot, \cdot \rangle : V \times V \to F$ such that:
1) $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle \quad \forall u, v, w \in V$
2) $\langle \alpha u, w \rangle = \alpha \langle u, w \rangle \quad \forall u, w \in V,\ \alpha \in F$
3) $\langle u, v \rangle = \overline{\langle v, u \rangle}$
4) $\langle u, u \rangle \geq 0$, and $\langle u, u \rangle = 0 \implies u = 0$


By definition, every inner product induces a natural norm on $V$, given by $\|v\|_V = \sqrt{\langle v, v \rangle}$.

Definition 14 We say $u$ and $v$ are orthogonal, written $u \perp v$, if $\langle u, v \rangle = 0$.

    Some important identities:

1. Pythagoras' Theorem: $u \perp v \iff \|u + v\|^2 = \|u\|^2 + \|v\|^2$ (the reverse implication requires $F = \mathbb{R}$; over $\mathbb{C}$ it only forces $\operatorname{Re}\langle u, v \rangle = 0$)

2. Cauchy-Bunyakovsky-Schwarz: $|\langle u, v \rangle| \leq \|u\|\,\|v\| \quad \forall u, v \in V$, with equality $\iff u = \alpha v$ for some scalar $\alpha$ (or $v = 0$)

3. Parallelogram: $\|u + v\|^2 + \|u - v\|^2 = 2(\|u\|^2 + \|v\|^2) \quad \forall u, v \in V$

4. Polarization:

$$\langle u, v \rangle = \frac{1}{4}\left\{\|u + v\|^2 - \|u - v\|^2\right\} \quad \forall u, v \in V \text{ if } F = \mathbb{R}$$

$$\langle u, v \rangle = \frac{1}{4}\sum_{k=1}^{4} i^k \left\|u + i^k v\right\|^2 \quad \forall u, v \in V \text{ if } F = \mathbb{C}$$

In fact, identities 3 and 4 (Parallelogram and Polarization) give us both necessary and sufficient conditions for a norm to be induced by some inner product. In this fashion, we can prove that $\|\cdot\|_1$ and $\|\cdot\|_\infty$ are not induced by an inner product by showing that the parallelogram identity fails.
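This test is easy to run numerically. A sketch (example vectors mine) that confirms the parallelogram identity for the Euclidean norm and exhibits its failure for $\|\cdot\|_1$:

```python
import numpy as np

def parallelogram_gap(u, v, norm):
    # Zero iff the parallelogram identity holds for this pair of vectors.
    return norm(u + v)**2 + norm(u - v)**2 - 2 * (norm(u)**2 + norm(v)**2)

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])

l2 = lambda x: np.linalg.norm(x)      # induced by the standard inner product
l1 = lambda x: np.linalg.norm(x, 1)

print(parallelogram_gap(u, v, l2))    # 0.0 -> consistent with an inner product
print(parallelogram_gap(u, v, l1))    # 4.0 -> no inner product induces ||.||_1
```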

Definition 15 $v \in V$ is said to be of unit norm if $\|v\| = 1$.

Definition 16 A
