Applied Linear Algebra Solutions, Chapter 1


  • 7/29/2019 Applied Linear Algebra Solutions Chapter 1


    Solutions Chapter 1

1.1.1. (a) Reduce the system to x - y = 7, 3y = -4; then use Back Substitution to solve for x = 17/3, y = -4/3.
(b) Reduce the system to 6u + v = 5, -(5/2)v = 5/2; then use Back Substitution to solve for u = 1, v = -1.
(c) Reduce the system to p + q - r = 0, -3q + 5r = 3, r = -6; then solve for p = 5, q = -11, r = -6.
(d) Reduce the system to 2u - v + 2w = 2, -(3/2)v + 4w = 2, w = 0; then solve for u = 1/3, v = -4/3, w = 0.
(e) Reduce the system to 5x1 + 3x2 - x3 = 9, (1/5)x2 - (2/5)x3 = -2/5, 2x3 = -2; then solve for x1 = 4, x2 = -4, x3 = -1.
(f) Reduce the system to x + z - 2w = 3, y + 3w = 1, 4z - 16w = -4, 6w = 6; then solve for x = 2, y = -2, z = 3, w = 1.
(g) Reduce the system to 3x1 + x2 = 1, (8/3)x2 + x3 = 2/3, (21/8)x3 + x4 = 3/4, (55/21)x4 = 5/7; then solve for x1 = 3/11, x2 = 2/11, x3 = 2/11, x4 = 3/11.

1.1.2. Plugging in the given values of x, y and z gives a + 2b - c = 3, a - 2c = 1, 1 + 2b + c = 2. Solving this system yields a = 4, b = 0, and c = 1.

1.1.3. (a) With Forward Substitution, we just start with the top equation and work down. Thus 2x = 6, so x = 3. Plugging this into the second equation gives 12 + 3y = 3, and so y = -3. Plugging the values of x and y into the third equation yields -3 + 4(-3) - z = 7, and so z = -22.

(b) We will get a diagonal system with the same solution.

(c) Start with the last equation and, assuming the coefficient of the last variable is nonzero, use the operation to eliminate the last variable in all the preceding equations. Then, again assuming the coefficient of the next-to-last variable is non-zero, eliminate it from all but the last two equations, and so on.

(d) For the systems in Exercise 1.1.1, the method works in all cases except (c) and (f). Solving the reduced system by Forward Substitution reproduces the same solution (as it must):

(a) The system reduces to (3/2)x = 17/2, x + 2y = 3.

(b) The reduced system is (15/2)u = 15/2, 3u - 2v = 5.

(c) The method doesn't work since r doesn't appear in the last equation.

(d) Reduce the system to (3/2)u = 1/2, (7/2)u - v = 5/2, 3u - 2w = 1.

(e) Reduce the system to (2/3)x1 = 8/3, 4x1 + 3x2 = 4, x1 + x2 + x3 = -1.

(f) Doesn't work since, after the first reduction, z doesn't occur in the next-to-last equation.

(g) Reduce the system to (55/21)x1 = 5/7, x1 + (21/8)x2 = 3/4, x2 + (8/3)x3 = 2/3, x3 + 3x4 = 1.

1.2.1. (a) 3 x 4, (b) 7, (c) 6, (d) ( 2 0 1 2 ), (e) the column vector ( 0, 2, 6 )^T.


1.2.2. (a) [1 2 3; 4 5 6; 7 8 9], (b) [1 2 3; 1 4 5], (c) [1 2 3 4; 4 5 6 7; 7 8 9 3], (d) ( 1 2 3 4 ), (e) ( 1, 2, 3 )^T, (f) ( 1 ).

1.2.3. x = 1/3, y = 4/3, z = 1/3, w = 2/3.

1.2.4.

(a) A = [1 -1; 1 2], x = ( x, y )^T, b = ( 7, 3 )^T;
(b) A = [6 1; 3 -2], x = ( u, v )^T, b = ( 5, 5 )^T;
(c) A = [1 1 -1; 2 -1 3; 1 1 0], x = ( p, q, r )^T, b = ( 0, 3, -6 )^T;
(d) A = [2 -1 2; 1 3 3; 4 3 0], x = ( u, v, w )^T, b = ( 3, 2, 7 )^T;
(e) A = [5 3 -1; 3 2 -1; 1 1 1], x = ( x1, x2, x3 )^T, b = ( 9, 5, -1 )^T;
(f) A = [1 0 1 -2; 2 1 2 -1; 0 6 4 2; 1 3 2 1], x = ( x, y, z, w )^T, b = ( 3, 3, 2, 1 )^T;
(g) A = [3 1 0 0; 1 3 1 0; 0 1 3 1; 0 0 1 3], x = ( x1, x2, x3, x4 )^T, b = ( 1, 1, 1, 1 )^T.

1.2.5.

(a) x - y = 1, 2x + 3y = 3. The solution is x = 6/5, y = 1/5.
(b) u + w = 1, u + v = 1, v + w = -2. The solution is u = 2, v = -1, w = -1.
(c) 3x1 - x3 = 1, 2x1 - x2 = 0, x1 + x2 - 3x3 = 1. The solution is x1 = 1/5, x2 = 2/5, x3 = 2/5.
(d) x + y - z - w = 0, x + z + 2w = 4, x - y + z = 1, 2y - z + w = 5. The solution is x = 2, y = 1, z = 0, w = 3.

1.2.6. (a) I = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], O = [0 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0].
(b) I + O = I, I O = O I = O. No, it does not.

1.2.7. (a) undefined, (b) undefined, (c) [3 6 0; 1 4 2], (d) undefined, (e) undefined, (f) [1 11 9; 3 12 12; 7 8 8], (g) undefined, (h) [9 2 14; 8 6 17; 12 3 28], (i) undefined.


    1.2.8. Only the third pair commute.

    1.2.9. 1, 6, 11, 16.

1.2.10. (a) [1 0 0; 0 0 0; 0 0 1], (b) [2 0 0 0; 0 2 0 0; 0 0 3 0; 0 0 0 3].

    1.2.11. (a) True, (b) true.

1.2.12. (a) Let A = [x y; z w]. Then A D = [a x, b y; a z, b w], while D A = [a x, a y; b z, b w], so if a ≠ b these are equal if and only if y = z = 0. (b) Every 2 x 2 matrix commutes with [a 0; 0 a] = a I. (c) Only 3 x 3 diagonal matrices. (d) Any matrix of the form A = [x 0 0; 0 y z; 0 u v]. (e) Let D = diag(d1, ..., dn). The (i, j) entry of A D is aij dj. The (i, j) entry of D A is di aij. If di ≠ dj, this requires aij = 0, and hence, if all the di's are different, then A is diagonal.

1.2.13. We need A of size m x n and B of size n x m for both products to be defined. Further, A B has size m x m while B A has size n x n, so the sizes agree if and only if m = n.

1.2.14. B = [x y; 0 x], where x, y are arbitrary.

1.2.15. (a) (A + B)^2 = (A + B)(A + B) = A A + A B + B A + B B = A^2 + 2 A B + B^2, since A B = B A. (b) An example: A = [1 2; 0 1], B = [0 0; 1 0].

1.2.16. If A B is defined and A is an m x n matrix, then B is an n x p matrix and A B is an m x p matrix; on the other hand, if B A is defined we must have p = m, and then B A is an n x n matrix. Now, since A B = B A, we must have p = m = n.

1.2.17. A O(n x p) = O(m x p), O(l x m) A = O(l x n).

1.2.18. The (i, j) entry of the matrix equation c A = O is c aij = 0. If any aij ≠ 0 then c = 0, so the only possible way that c ≠ 0 is if all aij = 0, and hence A = O.

1.2.19. False: for example, [1 0; 0 0] [0 0; 1 0] = [0 0; 0 0].

1.2.20. False unless they commute, i.e., A B = B A.

1.2.21. Let v be the column vector with 1 in its jth position and all other entries 0. Then A v is the same as the jth column of A. Thus, the hypothesis implies all columns of A are 0, and hence A = O.

1.2.22. (a) A must be a square matrix. (b) By associativity, A A^2 = A A A = A^2 A = A^3. (c) The naive answer is n - 1. A more sophisticated answer is to note that you can compute A^2 = A A, A^4 = A^2 A^2, A^8 = A^4 A^4, and, by induction, A^(2^r) with only r matrix multiplications. More generally, if the binary expansion of n has r + 1 digits, with s nonzero digits, then we need r + s - 1 multiplications. For example, A^13 = A^8 A^4 A since 13 is 1101 in binary, for a total of 5 multiplications: 3 to compute A^2, A^4 and A^8, and 2 more to multiply them together to obtain A^13.
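The repeated-squaring scheme described above can be sketched as follows (a hypothetical helper, not from the text; it also counts multiplications, confirming the r + s - 1 bound for n = 13):

```python
import numpy as np

def matrix_power_binary(A, n):
    """Compute A**n by repeated squaring, counting matrix multiplications."""
    result = None          # accumulates the product of the needed powers
    square = A.copy()      # holds A, A^2, A^4, A^8, ... in turn
    mults = 0
    while n > 0:
        if n & 1:          # this binary digit of n is 1: fold this power in
            if result is None:
                result = square.copy()
            else:
                result = result @ square
                mults += 1
        n >>= 1
        if n > 0:
            square = square @ square   # one multiplication per squaring
            mults += 1
    return result, mults

A = np.array([[1, 1], [0, 1]])
P13, count = matrix_power_binary(A, 13)
# For this A, A^n = [[1, n], [0, 1]], and count comes out to 5, as claimed.
```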


1.2.23. A = [0 1; 0 0].

1.2.24. (a) If the ith row of A has all zero entries, then the (i, j) entry of A B is ai1 b1j + ... + ain bnj = 0 b1j + ... + 0 bnj = 0, which holds for all j, so the ith row of A B will have all 0's.
(b) If A = [1 1; 0 0], B = [1 2; 3 4], then B A = [1 1; 3 3].

1.2.25. The same solution X = [1 1; 3 2] in both cases.

1.2.26. (a) [4 5; 1 2], (b) [5 1; 2 1]. They are not the same.

1.2.27. (a) X = O. (b) Yes, for instance, A = [1 2; 0 1], B = [3 2; 2 1], X = [1 0; 1 1].

1.2.28. A = (1/c) I when c ≠ 0. If c = 0 there is no solution.

1.2.29. (a) The ith entry of A z is 1 ai1 + 1 ai2 + ... + 1 ain = ai1 + ... + ain, which is the ith row sum.
(b) Each row of W has n - 1 entries equal to 1/n and one entry equal to (1 - n)/n, and so its row sums are (n - 1)(1/n) + (1 - n)/n = 0. Therefore, by part (a), W z = 0. Consequently, the row sums of B = A W are the entries of B z = A W z = A 0 = 0, and the result follows.
(c) z = ( 1, 1, 1 )^T, and so A z = [1 2 -1; 2 1 3; 4 -5 1] ( 1, 1, 1 )^T = ( 2, 6, 0 )^T, while B = A W = [1 2 -1; 2 1 3; 4 -5 1] [-2/3 1/3 1/3; 1/3 -2/3 1/3; 1/3 1/3 -2/3] = [-1/3 -4/3 5/3; 0 1 -1; -4 5 -1], and so B z = ( 0, 0, 0 )^T.
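The identities in parts (a)-(c) are easy to check numerically; the sketch below assumes the sample matrix of part (c) is [1 2 -1; 2 1 3; 4 -5 1] (signs reconstructed so that its row sums come out to (2, 6, 0) as printed):

```python
import numpy as np

n = 3
z = np.ones(n)                      # column vector of all ones
# W has (1 - n)/n on the diagonal and 1/n elsewhere, so each row sums to 0.
W = np.full((n, n), 1.0 / n) - np.eye(n)

A = np.array([[1.0,  2.0, -1.0],
              [2.0,  1.0,  3.0],
              [4.0, -5.0,  1.0]])

row_sums = A @ z    # part (a): A z collects the row sums of A
B = A @ W           # part (b): B z = A (W z) = A 0 = 0
```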

1.2.30. Assume A has size m x n, B has size n x p and C has size p x q. The (k, j) entry of B C is Σ_{l=1}^p bkl clj, so the (i, j) entry of A (B C) is Σ_{k=1}^n aik ( Σ_{l=1}^p bkl clj ) = Σ_{k=1}^n Σ_{l=1}^p aik bkl clj. On the other hand, the (i, l) entry of A B is Σ_{k=1}^n aik bkl, so the (i, j) entry of (A B) C is Σ_{l=1}^p ( Σ_{k=1}^n aik bkl ) clj = Σ_{k=1}^n Σ_{l=1}^p aik bkl clj. The two results agree, and so A (B C) = (A B) C. Remark: A more sophisticated, simpler proof can be found in Exercise 7.1.44.

1.2.31. (a) We need A B and B A to have the same size, and so this follows from Exercise 1.2.13. (b) A B - B A = O if and only if A B = B A.
(c) (i) [1 2; 6 -1], (ii) [0 0; 0 0], (iii) [0 1 1; 1 0 1; 1 1 0];
(d) (i) [ c A + d B, C ] = (c A + d B) C - C (c A + d B) = c (A C - C A) + d (B C - C B) = c [ A, C ] + d [ B, C ],
[ A, c B + d C ] = A (c B + d C) - (c B + d C) A = c (A B - B A) + d (A C - C A) = c [ A, B ] + d [ A, C ].
(ii) [ A, B ] = A B - B A = -(B A - A B) = -[ B, A ].


(iii) [ [ A, B ], C ] = (A B - B A) C - C (A B - B A) = A B C - B A C - C A B + C B A,
[ [ C, A ], B ] = (C A - A C) B - B (C A - A C) = C A B - A C B - B C A + B A C,
[ [ B, C ], A ] = (B C - C B) A - A (B C - C B) = B C A - C B A - A B C + A C B.
Summing the three expressions produces O.
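The pairwise cancellation in the Jacobi identity can be confirmed numerically; a minimal sketch with random integer matrices (the helper name comm is ours, not the text's):

```python
import numpy as np

def comm(X, Y):
    """Matrix commutator [X, Y] = X Y - Y X."""
    return X @ Y - Y @ X

rng = np.random.default_rng(1)
A, B, C = (rng.integers(-5, 5, (3, 3)) for _ in range(3))

# Every triple product appears twice with opposite signs, so the sum is O.
jacobi = comm(comm(A, B), C) + comm(comm(C, A), B) + comm(comm(B, C), A)
```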

1.2.32. (a) (i) 4, (ii) 0. (b) tr(A + B) = Σ_{i=1}^n (aii + bii) = Σ_{i=1}^n aii + Σ_{i=1}^n bii = tr A + tr B.
(c) The diagonal entries of A B are Σ_{j=1}^n aij bji, so tr(A B) = Σ_{i=1}^n Σ_{j=1}^n aij bji; the diagonal entries of B A are Σ_{i=1}^n bji aij, so tr(B A) = Σ_{j=1}^n Σ_{i=1}^n bji aij. These double summations are clearly equal. (d) tr C = tr(A B - B A) = tr A B - tr B A = 0 by parts (b) and (c).
(e) Yes, by the same proof.
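A quick numerical confirmation of parts (b)-(d), using arbitrary random integer matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-9, 9, (4, 4))
B = rng.integers(-9, 9, (4, 4))

# (b) the trace is additive
assert np.trace(A + B) == np.trace(A) + np.trace(B)
# (c) tr(AB) = tr(BA), even though A B != B A in general
assert np.trace(A @ B) == np.trace(B @ A)
# (d) hence every commutator C = AB - BA has trace zero
assert np.trace(A @ B - B @ A) == 0
```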

1.2.33. If b = A x, then bi = ai1 x1 + ai2 x2 + ... + ain xn for each i. On the other hand, cj = ( a1j, a2j, ..., anj )^T, and so the ith entry of the right hand side of (1.13) is x1 ai1 + x2 ai2 + ... + xn ain, which agrees with the expression for bi.

1.2.34.

(a) This follows by direct computation.
(b) (i) [2 -1; 3 2] [1 2; 1 0] = ( 2, 3 )^T ( 1 2 ) + ( -1, 2 )^T ( 1 0 ) = [2 4; 3 6] + [-1 0; 2 0] = [1 4; 5 6].
(ii) [1 2 0; 3 -1 2] [2 5; 3 0; -1 1] = ( 1, 3 )^T ( 2 5 ) + ( 2, -1 )^T ( 3 0 ) + ( 0, 2 )^T ( -1 1 ) = [2 5; 6 15] + [6 0; -3 0] + [0 0; -2 2] = [8 5; 1 17].
(iii) [3 1 1; 1 2 1; 1 1 5] [2 3 0; 3 1 4; 0 4 1] = ( 3, 1, 1 )^T ( 2 3 0 ) + ( 1, 2, 1 )^T ( 3 1 4 ) + ( 1, 1, 5 )^T ( 0 4 1 ) = [6 9 0; 2 3 0; 2 3 0] + [3 1 4; 6 2 8; 3 1 4] + [0 4 1; 0 4 1; 0 20 5] = [3 14 3; 4 1 9; 5 18 1].
(c) If we set B = x, where x is an n x 1 matrix, then we obtain (1.14).
(d) The (i, j) entry of A B is Σ_{k=1}^n aik bkj. On the other hand, the (i, j) entry of ck rk equals the product of the ith entry of ck, namely aik, with the jth entry of rk, namely bkj. Summing these entries, aik bkj, over k yields the usual matrix product formula.
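The column-times-row expansion of part (b)(i) can be replayed in code (matrix entries as reconstructed above):

```python
import numpy as np

A = np.array([[2, -1],
              [3,  2]])
B = np.array([[1, 2],
              [1, 0]])

# Sum of outer products: (column k of A) times (row k of B).
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
# This reproduces the ordinary matrix product A B.
```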

1.2.35. (a) p(A) = A^3 - 3A + 2I, q(A) = 2A^2 + I. (b) p(A) = [2 8; 4 6], q(A) = [1 0; 0 1].
(c) p(A) q(A) = (A^3 - 3A + 2I)(2A^2 + I) = 2A^5 - 5A^3 + 4A^2 - 3A + 2I, while p(x) q(x) = 2x^5 - 5x^3 + 4x^2 - 3x + 2.
(d) True, since powers of A mutually commute. For the particular matrix from (b), p(A) q(A) = q(A) p(A) = [2 8; 4 6].


1.2.36. (a) Check that S^2 = A by direct computation. Another example: S = [2 0; 0 2]. Or, more generally, 2 times any of the matrices in part (c).
(b) S^2 is only defined if S is square.
(c) Any of the matrices ± I, or [a b; c -a], where a is arbitrary and b c = 1 - a^2.
(d) Yes: for example [0 -1; 1 0].

1.2.37. (a) M has size (i + j) x (k + l). (b) M = [1 1 -1; 3 0 1; 1 1 3; -2 2 0; 1 1 -1]. (c) Since matrix addition is done entry-wise, adding the entries of each block is the same as adding the blocks. (d) X has size k x m, Y has size k x n, Z has size l x m, and W has size l x n. Then A X + B Z will have size i x m. Its (p, q) entry is obtained by multiplying the pth row of M times the qth column of P, which is ap1 x1q + ... + api xiq + bp1 z1q + ... + bpl zlq, and equals the sum of the (p, q) entries of A X and B Z. A similar argument works for the remaining three blocks. (e) For example, if X = ( 1 ), Y = ( 2 0 ), Z = ( 0, 1 )^T, W = [0 1; 1 0], then P = [1 2 0; 0 0 1; 1 1 0], and so M P = [0 1 1; 4 7 0; 4 5 1; -2 -4 2; 0 1 1]. The individual block products are
( 0, 4 )^T = ( 1, 3 )^T ( 1 ) + [1 -1; 0 1] ( 0, 1 )^T,
( 4, -2, 0 )^T = ( 1, -2, 1 )^T ( 1 ) + [1 3; 2 0; 1 -1] ( 0, 1 )^T,
[1 1; 7 0] = ( 1, 3 )^T ( 2 0 ) + [1 -1; 0 1] [0 1; 1 0],
[5 1; -4 2; 1 1] = ( 1, -2, 1 )^T ( 2 0 ) + [1 3; 2 0; 1 -1] [0 1; 1 0].
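The block products of part (e) can be checked against the full product (block values as reconstructed above; np.block assembles a matrix from conformable blocks):

```python
import numpy as np

# Blocks of P: X is 1x1, Y is 1x2, Z is 2x1, W is 2x2.
X = np.array([[1]])
Y = np.array([[2, 0]])
Z = np.array([[0], [1]])
W = np.array([[0, 1], [1, 0]])
P = np.block([[X, Y], [Z, W]])

# M partitioned conformably: A is 2x1, B is 2x2, C is 3x1, D is 3x2.
Ablk = np.array([[1], [3]])
Bblk = np.array([[1, -1], [0, 1]])
Cblk = np.array([[1], [-2], [1]])
Dblk = np.array([[1, 3], [2, 0], [1, -1]])
M = np.block([[Ablk, Bblk], [Cblk, Dblk]])

# Block formula: M P = [[A X + B Z, A Y + B W], [C X + D Z, C Y + D W]].
MP_blocks = np.block([[Ablk @ X + Bblk @ Z, Ablk @ Y + Bblk @ W],
                      [Cblk @ X + Dblk @ Z, Cblk @ Y + Dblk @ W]])
```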

    1.3.1.

(a) [1 -7 | 4; -2 9 | 2] --(2R1 + R2)--> [1 -7 | 4; 0 -5 | 10]. Back Substitution yields x2 = -2, x1 = -10.

(b) [3 5 | -1; 2 -1 | 8] --(-(2/3)R1 + R2)--> [3 5 | -1; 0 -13/3 | 26/3]. Back Substitution yields w = -2, z = 3.

(c) [1 -2 1 | 0; 0 2 -8 | 8; -4 5 9 | -9] --(4R1 + R3)--> [1 -2 1 | 0; 0 2 -8 | 8; 0 -3 13 | -9] --((3/2)R2 + R3)--> [1 -2 1 | 0; 0 2 -8 | 8; 0 0 1 | 3]. Back Substitution yields z = 3, y = 16, x = 29.

(d) [1 -4 -2 | 1; 2 0 3 | 7; -3 -2 -2 | 1] --(-2R1 + R2)--> [1 -4 -2 | 1; 0 8 7 | 5; -3 -2 -2 | 1] --(3R1 + R3)--> [1 -4 -2 | 1; 0 8 7 | 5; 0 -14 -8 | 4] --((7/4)R2 + R3)--> [1 -4 -2 | 1; 0 8 7 | 5; 0 0 17/4 | 51/4]. Back Substitution yields r = 3, q = -2, p = -1.


(e) [1 0 2 0 | 1; 0 1 0 -1 | 2; 0 -3 2 0 | 0; 4 0 0 7 | -5] reduces to [1 0 2 0 | 1; 0 1 0 -1 | 2; 0 0 2 -3 | 6; 0 0 0 -5 | 15]. Solution: x4 = -3, x3 = -3/2, x2 = -1, x1 = 4.

(f) [1 3 1 1 | 2; 1 1 3 1 | 0; 0 1 1 4 | -7; 4 1 1 0 | 5] reduces to [1 3 1 1 | 2; 0 -2 2 0 | -2; 0 0 2 4 | -8; 0 0 0 24 | -48]. Solution: w = -2, z = 0, y = 1, x = 1.
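The eliminations in this exercise can be automated; a minimal sketch of regular Gaussian Elimination (no pivoting), run on the augmented matrix of part (c) with signs as reconstructed:

```python
import numpy as np

def gaussian_eliminate(M):
    """Reduce an augmented matrix to upper triangular form (no pivoting),
    assuming the coefficient matrix is regular."""
    M = M.astype(float).copy()
    n = M.shape[0]
    for j in range(n - 1):
        for i in range(j + 1, n):
            M[i] -= (M[i, j] / M[j, j]) * M[j]
    return M

# x - 2y + z = 0, 2y - 8z = 8, -4x + 5y + 9z = -9
M = np.array([[ 1, -2,  1,  0],
              [ 0,  2, -8,  8],
              [-4,  5,  9, -9]])
T = gaussian_eliminate(M)
x = np.linalg.solve(T[:, :3], T[:, 3])   # finish with back substitution
```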

1.3.2. (a) 3x + 2y = 2, -4x - 3y = -1; solution: x = 4, y = -5,
(b) x + 2y = 3, x + 2y + z = 6, 2x - 3z = 1; solution: x = 1, y = 2, z = 1,
(c) 3x - y + 2z = 3, 2y - 5z = 1, 6x - 2y + z = 3; solution: x = 2/3, y = 3, z = 1,
(d) 2x - y = 0, -x + 2y - z = 1, -y + 2z - w = 1, -z + 2w = 0; solution: x = 1, y = 2, z = 2, w = 1.

1.3.3. (a) x = 17/3, y = -4/3; (b) u = 1, v = -1; (c) u = 3/2, v = 1/3, w = 1/6; (d) x1 = 11/3, x2 = 10/3, x3 = 2/3; (e) p = 2/3, q = 19/6, r = 5/2; (f) a = 1/3, b = 0, c = 4/3, d = 2/3; (g) x = 1/3, y = 7/6, z = 8/3, w = 9/2.

1.3.4. Solving 6 = a + b + c, 4 = 4a + 2b + c, 0 = 9a + 3b + c yields a = -1, b = 1, c = 6, so y = -x^2 + x + 6.

1.3.5. (a) Regular: [2 1; 1 4] -> [2 1; 0 7/2].
(b) Not regular.
(c) Regular: [3 2 1; 1 4 3; 3 2 5] -> [3 2 1; 0 10/3 8/3; 0 0 4].
(d) Not regular: [1 2 3; 2 4 1; 3 1 2] -> [1 2 3; 0 0 -5; 0 -5 -7].
(e) Regular: [1 3 3 0; -1 0 1 2; 3 3 6 1; 2 3 3 5] -> [1 3 3 0; 0 3 4 2; 0 -6 -3 1; 0 -3 -3 5] -> [1 3 3 0; 0 3 4 2; 0 0 5 5; 0 0 1 7] -> [1 3 3 0; 0 3 4 2; 0 0 5 5; 0 0 0 6].

1.3.6.

(a) [i 1+i | 1; 1-i 1 | 3i] -> [i 1+i | 1; 0 1-2i | 1-2i]; use Back Substitution to obtain the solution y = 1, x = 1 - 2i.

(b) [i 0 1-i | 2i; 0 2i 1+i | 2; 1 2i i | 1-2i] -> [i 0 1-i | 2i; 0 2i 1+i | 2; 0 0 2i | 1-2i]; solution: z = i, y = 1/2 - (3/2)i, x = 1 + i.

(c) [1-i 2 | i; i 1+i | 1] -> [1-i 2 | i; 0 2i | 3/2 - (1/2)i]; solution: y = 1/4 + (3/4)i, x = 1/2.


(d) [1+i i 2+2i | 0; 1-i 2 i | 0; 3-3i i-3 1-i | 6] -> [1+i i 2+2i | 0; 0 1 2+3i | 0; 0 0 6+6i | 6]; solution: z = 1/2 - (1/2)i, y = 5/2 + (1/2)i, x = 5/2 + 2i.

1.3.7. (a) 2x = 3, y = 4, 3z = 1, u = 6, 8v = 24. (b) x = 3/2, y = 4, z = 1/3, u = 6, v = 3. (c) You only have to divide by each coefficient to find the solution.

1.3.8. 0 is the (unique) solution since A 0 = 0.

1.3.9.

Back Substitution

start
  set xn = cn / unn
  for i = n - 1 to 1 with increment -1
    set xi = (1 / uii) ( ci - Σ_{j=i+1}^{n} uij xj )
  next i
end
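A direct transcription of this pseudocode (0-based indexing; tested here on the reduced system of 1.1.1(a)):

```python
import numpy as np

def back_substitute(U, c):
    """Solve U x = c for upper triangular U with nonzero diagonal."""
    n = len(c)
    x = np.zeros(n)
    x[n - 1] = c[n - 1] / U[n - 1, n - 1]
    for i in range(n - 2, -1, -1):             # i = n-1 down to 1 (1-based)
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Reduced system of 1.1.1(a): x - y = 7, 3y = -4.
U = np.array([[1.0, -1.0], [0.0, 3.0]])
c = np.array([7.0, -4.0])
# back_substitute(U, c) gives x = 17/3, y = -4/3.
```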

1.3.10. Since [a11 a12; 0 a22] [b11 b12; 0 b22] = [a11 b11, a11 b12 + a12 b22; 0, a22 b22], while [b11 b12; 0 b22] [a11 a12; 0 a22] = [a11 b11, a12 b11 + a22 b12; 0, a22 b22], the matrices commute if and only if a11 b12 + a12 b22 = a22 b12 + a12 b11, or (a11 - a22) b12 = a12 (b11 - b22).

1.3.11. Clearly, any diagonal matrix is both lower and upper triangular. Conversely, A being lower triangular requires that aij = 0 for i < j; A upper triangular requires that aij = 0 for i > j. If A is both lower and upper triangular, aij = 0 for all i ≠ j, which implies A is a diagonal matrix.

1.3.12. (a) Set lij = aij for i > j and 0 for i ≤ j; uij = aij for i < j and 0 for i ≥ j; dij = aij for i = j and 0 for i ≠ j.
(b) L = [0 0 0; 1 0 0; 2 0 0], D = [3 0 0; 0 4 0; 0 0 5], U = [0 1 1; 0 0 2; 0 0 0].

1.3.13. (a) By direct computation, A^2 = [0 0 1; 0 0 0; 0 0 0], and so A^3 = O.
(b) Let A have size n x n. By assumption, aij = 0 whenever i > j - 1. By induction, one proves that the (i, j) entries of A^k are all zero whenever i > j - k. Indeed, to compute the (i, j) entry of A^{k+1} = A A^k you multiply the ith row of A, whose first i entries are 0,


by the jth column of A^k, in which only the first j - k entries can be non-zero, while all the rest are zero, according to the induction hypothesis; therefore, if i > j - k - 1, every term in the sum producing this entry is 0, and the induction is complete. In particular, for k = n, every entry of A^n is zero, and so A^n = O.

(c) The matrix A = [1 -1; 1 -1] has A^2 = O.

1.3.14. (a) Add 2 times the second row to the first row of a 2 x n matrix. (b) Add 7 times the first row to the second row of a 2 x n matrix. (c) Add 5 times the third row to the second row of a 3 x n matrix. (d) Add 1/2 times the first row to the third row of a 3 x n matrix. (e) Add 3 times the fourth row to the second row of a 4 x n matrix.

1.3.15. (a) [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 1 1], (b) [1 0 0 0; 0 1 0 0; 0 0 1 1; 0 0 0 1], (c) [1 0 0 3; 0 1 0 0; 0 0 1 0; 0 0 0 1], (d) [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 2 0 1].

1.3.16. L3 L2 L1 = [1 0 0; 2 1 0; 0 1/2 1] = L1 L2 L3.

1.3.17. E3 E2 E1 = [1 0 0; 2 1 0; 2 1/2 1], E1 E2 E3 = [1 0 0; 2 1 0; 1 1/2 1]. The second is easier to predict since its entries are the same as the corresponding entries of the Ei.

1.3.18. (a) Suppose that E adds c ≠ 0 times row i to row j ≠ i, while Ẽ adds d ≠ 0 times row k to row l ≠ k. If r1, ..., rn are the rows, then the effect of Ẽ E is to replace
(i) rj by rj + c ri + d rk for j = l;
(ii) rj by rj + c ri and rl by rl + c d ri + d rj for j = k;
(iii) rj by rj + c ri and rl by rl + d rk otherwise.
On the other hand, the effect of E Ẽ is to replace
(i) rj by rj + c ri + d rk for j = l;
(ii) rj by rj + c ri + c d rk and rl by rl + d rk for i = l;
(iii) rj by rj + c ri and rl by rl + d rk otherwise.
Comparing results, we see that E Ẽ = Ẽ E whenever i ≠ l and j ≠ k.
(b) E1 E2 = E2 E1, E1 E3 = E3 E1, and E3 E2 = E2 E3.
(c) See the answer to part (a).

1.3.19. (a) Upper triangular; (b) both special upper and special lower triangular; (c) lower triangular; (d) special lower triangular; (e) none of the above.

1.3.20. (a) aij = 0 for all i ≠ j; (b) aij = 0 for all i > j; (c) aij = 0 for all i > j and aii = 1 for all i; (d) aij = 0 for all i < j; (e) aij = 0 for all i < j and aii = 1 for all i.

1.3.21. (a) Consider the product L M of two lower triangular n x n matrices. The last n - i entries in the ith row of L are zero, while the first j - 1 entries in the jth column of M are zero. So if i < j each summand in the product of the ith row times the jth column is zero,


    and so all entries above the diagonal in L M are zero.(b) The ith diagonal entry of L M is the product of the ith diagonal entry of L times the ith

    diagonal entry of M.(c) Special matrices have all 1s on the diagonal, and so, by part (b), does their product.

1.3.22. (a) L = [1 0; 1 1], U = [1 3; 0 3], (b) L = [1 0; 3 1], U = [1 3; 0 8],
(c) L = [1 0 0; 1 1 0; 1 0 1], U = [1 1 1; 0 2 0; 0 0 3], (d) L = [1 0 0; 1/2 1 0; 0 1/3 1], U = [2 0 3; 0 3 1/2; 0 0 7/6],
(e) L = [1 0 0; 2 1 0; 1 1 1], U = [1 0 0; 0 3 0; 0 0 2], (f) L = [1 0 0; 2 1 0; 3 1/3 1], U = [1 0 1; 0 3 4; 0 0 13/3],
(g) L = [1 0 0 0; 0 1 0 0; 1 3/2 1 0; 0 1/2 3 1], U = [1 0 1 0; 0 2 1 1; 0 0 1/2 7/2; 0 0 0 10],
(h) L = [1 0 0 0; 1 1 0 0; 2 1 1 0; 3 1 2 1], U = [1 1 2 3; 0 3 1 3; 0 0 4 1; 0 0 0 1],
(i) L = [1 0 0 0; 1/2 1 0 0; 3/2 3/7 1 0; 1/2 1/7 5/22 1], U = [2 1 3 1; 0 7/2 3/2 1/2; 0 0 22/7 5/7; 0 0 0 35/22].

    1.3.23. (a) Add 3 times first row to second row. (b) Add 2 times first row to third row.(c) Add 4 times second row to third row.

1.3.24. (a) [1 0 0 0; 2 1 0 0; 3 4 1 0; 5 6 7 1].
(b) (1) Add 2 times first row to second row. (2) Add 3 times first row to third row. (3) Add 5 times first row to fourth row. (4) Add 4 times second row to third row. (5) Add 6 times second row to fourth row. (6) Add 7 times third row to fourth row.
(c) Use the order given in part (b).

1.3.25. See equation (4.51) for the general case.

[1 1; t1 t2] = [1 0; t1 1] [1 1; 0 t2 - t1],

[1 1 1; t1 t2 t3; t1^2 t2^2 t3^2] = [1 0 0; t1 1 0; t1^2 t1 + t2 1] [1 1 1; 0 t2 - t1 t3 - t1; 0 0 (t3 - t1)(t3 - t2)],

[1 1 1 1; t1 t2 t3 t4; t1^2 t2^2 t3^2 t4^2; t1^3 t2^3 t3^3 t4^3] = [1 0 0 0; t1 1 0 0; t1^2 t1 + t2 1 0; t1^3 t1^2 + t1 t2 + t2^2 t1 + t2 + t3 1] [1 1 1 1; 0 t2 - t1 t3 - t1 t4 - t1; 0 0 (t3 - t1)(t3 - t2) (t4 - t1)(t4 - t2); 0 0 0 (t4 - t1)(t4 - t2)(t4 - t3)].
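The 3 x 3 factorization can be spot-checked for a concrete choice of the nodes t1, t2, t3 (values chosen arbitrarily):

```python
import numpy as np

t1, t2, t3 = 2.0, 3.0, 5.0
V = np.array([[1, 1, 1],
              [t1, t2, t3],
              [t1**2, t2**2, t3**2]])

L = np.array([[1, 0, 0],
              [t1, 1, 0],
              [t1**2, t1 + t2, 1]])
U = np.array([[1, 1, 1],
              [0, t2 - t1, t3 - t1],
              [0, 0, (t3 - t1) * (t3 - t2)]])
# L U reproduces the 3 x 3 Vandermonde matrix V.
```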


1.3.26. False. For instance, [1 1; 1 0] is regular. Only if the zero appears in the (1, 1) position does it automatically preclude regularity of the matrix.

1.3.27. (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2.

1.3.28. We solve the equation [1 0; l 1] [u1 u2; 0 u3] = [a b; c d] for u1, u2, u3, l, where a ≠ 0 since A = [a b; c d] is regular. This matrix equation has a unique solution: u1 = a, u2 = b, u3 = d - b c / a, l = c / a.
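The formulas give a two-line LU routine for regular 2 x 2 matrices (an illustrative helper, with an arbitrary numerical example):

```python
import numpy as np

def lu_2x2(a, b, c, d):
    """LU factorization of a regular 2x2 matrix [[a, b], [c, d]] (a != 0)."""
    l = c / a
    L = np.array([[1.0, 0.0], [l, 1.0]])
    U = np.array([[a, b], [0.0, d - b * c / a]])
    return L, U

L, U = lu_2x2(2.0, 1.0, 6.0, 8.0)
# Here l = 3 and the second pivot is 8 - 1*6/2 = 5.
```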

1.3.29. The matrix factorization A = L U is [0 1; 1 0] = [1 0; a 1] [x y; 0 z] = [x, y; a x, a y + z]. This implies x = 0 and a x = 1, which is impossible.

1.3.30. (a) Let u11, ..., unn be the pivots of A, i.e., the diagonal entries of U. Let D be the diagonal matrix whose diagonal entries are dii = sign uii. Then B = A D is the matrix obtained by multiplying each column of A by the sign of its pivot. Moreover, B = L U D = L Ũ, where Ũ = U D, is the L U factorization of B. Each column of Ũ is obtained by multiplying the corresponding column of U by the sign of its pivot. In particular, the diagonal entries of Ũ, which are the pivots of B, are uii sign uii = |uii| > 0.
(b) Using the same notation as in part (a), we note that C = D A is the matrix obtained by multiplying each row of A by the sign of its pivot. Moreover, C = D L U. However, D L is not special lower triangular, since its diagonal entries are the pivot signs. But L̂ = D L D is special lower triangular, and so C = D L D D U = L̂ Û, where Û = D U, is the L U factorization of C. Each row of Û is obtained by multiplying the corresponding row of U by the sign of its pivot. In particular, the diagonal entries of Û, which are the pivots of C, are uii sign uii = |uii| > 0.
(c) [2 2 1; 1 0 1; 4 2 3] = [1 0 0; 1/2 1 0; 2 6 1] [2 2 1; 0 1 3/2; 0 0 4],
[2 2 1; 1 0 1; 4 2 3] = [1 0 0; 1/2 1 0; 2 6 1] [2 2 1; 0 1 3/2; 0 0 4],
[2 2 1; 1 0 1; 4 2 3] = [1 0 0; 1/2 1 0; 2 6 1] [2 2 1; 0 1 3/2; 0 0 4].

1.3.31. (a) x = ( 1, 2, 3 )^T, (b) x = ( 1/4, 1/4 )^T, (c) x = ( 0, 1, 0 )^T, (d) x = ( 4/7, 2/7, 5/7 )^T, (e) x = ( 1, 1, 5/2 )^T, (f) x = ( 0, 1, 1 )^T, (g) x = ( 2, 1, 1, 0 )^T, (h) x = ( 37/12, 17/12, 1/4, 2 )^T, (i) x = ( 3/35, 6/35, 1/7, 8/35 )^T.


1.3.32.

(a) L = [1 0; 3 1], U = [1 3; 0 11]; x1 = ( 5/11, 2/11 )^T, x2 = ( 1, 1 )^T, x3 = ( 9/11, 3/11 )^T;

(b) L = [1 0 0; 1 1 0; 1 0 1], U = [1 1 1; 0 2 0; 0 0 3]; x1 = ( 1, 0, 0 )^T, x2 = ( 1/6, 3/2, 5/3 )^T;

(c) L = [1 0 0; 2/3 1 0; 2/9 5/3 1], U = [9 2 1; 0 1/3 1/3; 0 0 1/3]; x1 = ( 1, 2, 3 )^T, x2 = ( 2, 9, 1 )^T;

(d) L = [1 0 0; .15 1 0; .2 1.2394 1], U = [2.0 .3 .4; 0 .355 4.94; 0 0 .2028]; x1 = ( .6944, 1.3889, .0694 )^T, x2 = ( 1.1111, 82.2222, 6.1111 )^T, x3 = ( .6678, … )^T;

(e) L = [1 0 0 0; 0 1 0 0; 1 3/2 1 0; 0 1/2 1 1], U = [1 0 1 0; 0 2 3 1; 0 0 7/2 7/2; 0 0 0 4]; x1 = ( 5/4, 1/4, 1/4, 1/4 )^T, x2 = ( 1/14, 5/14, 1/14, 1/2 )^T;

(f) L = [1 0 0 0; 4 1 0 0; 8 17/9 1 0; 4 1 0 1], U = [1 2 0 2; 0 9 1 9; 0 0 1/9 0; 0 0 0 1]; x1 = ( 1, 0, 4, 0 )^T, x2 = ( 1, 1, 3, 2 )^T, x3 = ( 10, 8, 4, 14 )^T.

    1.4.1. The nonsingular matrices are (a), (c), (d), (h).

    1.4.2. (a) Regular and nonsingular, (b) singular, (c) nonsingular, (d) regular and nonsingular.

1.4.3. (a) x1 = 5/3, x2 = 10/3, x3 = 5; (b) x1 = 0, x2 = 1, x3 = 2; (c) x1 = 6, x2 = 2, x3 = 2; (d) x = 13/2, y = 9/2, z = 1, w = 3; (e) x1 = 11, x2 = 10/3, x3 = 5, x4 = 7.

1.4.4. Solve the equations -1 = 2b + c, 3 = 2a + 4b + c, -3 = -2a - b + c, for a = 4, b = -2, c = 3, giving the plane z = 4x - 2y + 3.

1.4.5.

(a) Suppose A is nonsingular. If a ≠ 0 and c ≠ 0, then we subtract c/a times the first row from the second, producing the (2, 2) pivot entry (a d - b c)/a ≠ 0. If c = 0, then the pivot entry is d and so a d - b c = a d ≠ 0. If a = 0, then c ≠ 0 as otherwise the first column would not contain a pivot. Interchanging the two rows gives the pivots c and b, and so a d - b c = -b c ≠ 0.
(b) Regularity requires a ≠ 0. Proceeding as in part (a), we conclude that a d - b c ≠ 0 also.

1.4.6. True. All regular matrices are nonsingular.


1.4.7. Since A is nonsingular, we can reduce it to upper triangular form with nonzero diagonal entries (by applying the operations #1 and #2). The rest of the argument is the same as in Exercise 1.3.8.

1.4.8. By applying the operations #1 and #2 to the system A x = b we obtain an equivalent upper triangular system U x = c. Since A is nonsingular, uii ≠ 0 for all i, so by Back Substitution each solution component, namely xn = cn / unn and xi = (1 / uii) ( ci - Σ_{k=i+1}^{n} uik xk ), for i = n - 1, n - 2, ..., 1, is uniquely defined.

1.4.9. (a) P1 = [1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0], (b) P2 = [0 0 0 1; 0 1 0 0; 0 0 1 0; 1 0 0 0],
(c) No, they do not commute. (d) P1 P2 arranges the rows in the order 4, 1, 3, 2, while P2 P1 arranges them in the order 2, 4, 3, 1.
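Parts (c) and (d) can be verified by building the permutation matrices explicitly (the helper perm_matrix is ours; it sends row j to row sigma[j], 0-based):

```python
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix that moves row j of A to row sigma[j] of P A."""
    n = len(sigma)
    P = np.zeros((n, n), dtype=int)
    for j, i in enumerate(sigma):
        P[i, j] = 1
    return P

# P1 swaps rows 2 and 4; P2 swaps rows 1 and 4 (1-based, as in the text).
P1 = perm_matrix([0, 3, 2, 1])
P2 = perm_matrix([3, 1, 2, 0])

A = np.arange(16).reshape(4, 4)
# P1 P2 puts the rows of A in order 4, 1, 3, 2; P2 P1 gives 2, 4, 3, 1,
# so the two products differ and P1, P2 do not commute.
```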

1.4.10. (a) [0 1 0; 0 0 1; 1 0 0], (b) [0 0 0 1; 0 0 1 0; 1 0 0 0; 0 1 0 0], (c) [0 1 0 0; 1 0 0 0; 0 0 0 1; 0 0 1 0], (d) [0 0 0 1 0; 1 0 0 0 0; 0 0 1 0 0; 0 1 0 0 0; 0 0 0 0 1].

1.4.11. The (i, j) entry of the following Multiplication Table indicates the product Pi Pj, where

P1 = [1 0 0; 0 1 0; 0 0 1], P2 = [0 1 0; 0 0 1; 1 0 0], P3 = [0 0 1; 1 0 0; 0 1 0],
P4 = [0 1 0; 1 0 0; 0 0 1], P5 = [0 0 1; 0 1 0; 1 0 0], P6 = [1 0 0; 0 0 1; 0 1 0].

The commutative pairs are P1 Pi = Pi P1, i = 1, ..., 6, and P2 P3 = P3 P2.

    P1 P2 P3 P4 P5 P6

    P1 P1 P2 P3 P4 P5 P6

    P2 P2 P3 P1 P6 P4 P5

    P3 P3 P1 P2 P5 P6 P4

    P4 P4 P5 P6 P1 P2 P3

    P5 P5 P6 P4 P3 P1 P2

    P6 P6 P4 P5 P2 P3 P1

1.4.12. (a) [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], [0 1 0 0; 0 0 0 1; 0 0 1 0; 1 0 0 0], [0 0 0 1; 1 0 0 0; 0 0 1 0; 0 1 0 0], [0 1 0 0; 1 0 0 0; 0 0 1 0; 0 0 0 1], [0 0 0 1; 0 1 0 0; 0 0 1 0; 1 0 0 0],


[1 0 0 0; 0 0 0 1; 0 0 1 0; 0 1 0 0]; (b) [0 1 0 0; 1 0 0 0; 0 0 0 1; 0 0 1 0], [1 0 0 0; 0 0 0 1; 0 1 0 0; 0 0 1 0], [0 0 0 1; 0 1 0 0; 1 0 0 0; 0 0 1 0], [1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0], [0 0 0 1; 1 0 0 0; 0 1 0 0; 0 0 1 0], [0 1 0 0; 0 0 0 1; 1 0 0 0; 0 0 1 0]; (c) [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1], [0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0].

1.4.13. (a) True, since interchanging the same pair of rows twice brings you back to where you started. (b) False; an example is the non-elementary permutation matrix [0 0 1; 1 0 0; 0 1 0]. (c) False; for example P = [1 0; 0 -1] is not a permutation matrix. For a complete list of such matrices, see Exercise 1.2.36.

1.4.14. (a) Only when all the entries of v are different; (b) only when all the rows of A are different.

1.4.15. (a) [1 0 0; 0 0 1; 0 1 0]. (b) True. (c) False: A P permutes the columns of A according to the inverse (or transpose) permutation matrix P^{-1} = P^T.

    1.4.16. (a) If P has a 1 in position (pi(j), j), then it moves row j of A to row pi(j) of P A, which is
    enough to establish the correspondence.

    (b) (i) [ 0, 1, 0 ; 1, 0, 0 ; 0, 0, 1 ],
    (ii) [ 0, 0, 0, 1 ; 0, 1, 0, 0 ; 0, 0, 1, 0 ; 1, 0, 0, 0 ],
    (iii) [ 1, 0, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ; 0, 1, 0, 0 ],
    (iv) [ 0, 0, 0, 0, 1 ; 0, 0, 0, 1, 0 ; 0, 0, 1, 0, 0 ; 0, 1, 0, 0, 0 ; 1, 0, 0, 0, 0 ].
    Cases (i) and (ii) are elementary matrices.

    (c) (i) ( 1 2 3 ; 2 3 1 ), (ii) ( 1 2 3 4 ; 3 4 1 2 ), (iii) ( 1 2 3 4 ; 4 1 2 3 ), (iv) ( 1 2 3 4 5 ; 2 5 3 1 4 ).

    1.4.17. The first row of an n x n permutation matrix can have the 1 in any of the n positions, so
    there are n possibilities for the first row. Once the first row is set, the second row can have
    its 1 anywhere except in the column under the 1 in the first row, and so there are n - 1
    possibilities. The 1 in the third row can be in any of the n - 2 positions not under either of
    the previous two 1's. And so on, leading to a total of n (n-1) (n-2) ... 2 . 1 = n! possible
    permutation matrices.
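    The correspondence of Exercise 1.4.16(a) and the count n! of Exercise 1.4.17 are easy to check computationally. In this sketch (function and variable names are our own, not from the text), P carries a 1 in position (pi(j), j), so row j of A reappears as row pi(j) of P A:

    ```python
    from itertools import permutations

    def perm_matrix(pi):
        """Permutation matrix with a 1 in position (pi[j], j), 0-indexed."""
        n = len(pi)
        return [[1 if pi[j] == i else 0 for j in range(n)] for i in range(n)]

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    pi = (2, 0, 1)                      # the permutation 1 -> 3, 2 -> 1, 3 -> 2 (0-indexed)
    P = perm_matrix(pi)
    A = [[1, 2], [3, 4], [5, 6]]
    PA = matmul(P, A)
    # Row j of A reappears as row pi[j] of P A:
    assert all(PA[pi[j]] == A[j] for j in range(3))

    # There are n! distinct permutation matrices (Exercise 1.4.17); here n = 4:
    count = len({tuple(map(tuple, perm_matrix(p))) for p in permutations(range(4))})
    assert count == 24
    ```
    
    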

    1.4.18. Let r_i, r_j denote the rows of the matrix in question. After the first elementary row
    operation, the rows are r_i and r_j + r_i. After the second, they are r_i - (r_j + r_i) = -r_j and
    r_j + r_i. After the third operation, we are left with -r_j and r_j + r_i + (-r_j) = r_i.
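    The three operations can be traced on a concrete pair of rows (a small sketch of our own; note the sign flip on one row, so the result is not a pure interchange):

    ```python
    ri = [1, 2, 3]
    rj = [4, 5, 6]

    add = lambda u, v, c: [x + c * y for x, y in zip(u, v)]

    rj = add(rj, ri, 1)     # rj <- rj + ri            rows: ri,   rj + ri
    ri = add(ri, rj, -1)    # ri <- ri - (rj + ri)     rows: -rj,  rj + ri
    rj = add(rj, ri, 1)     # rj <- (rj + ri) + (-rj)  rows: -rj,  ri

    assert ri == [-4, -5, -6]   # the old rj, negated
    assert rj == [1, 2, 3]      # the old ri
    ```
    
    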

    1.4.19. (a)

    0 11 0

    !0 12 1

    !=

    1 00 1

    !2 10 1

    !, x =

    0@ 523

    1A;


    (b)

    0B@ 0 1 00 0 11 0 0

    1CA0B@ 0 0 41 2 3

    0 1 7

    1CA =0B@ 1 0 00 1 0

    0 0 1

    1CA0B@ 1 2 30 1 7

    0 0 4

    1CA, x =0BBB@

    5434

    14

    1CCCA;

    (c)

    0

    B@0 0 11 0 0

    0 1 0

    1

    CA

    0

    B@0 1 30 2 3

    1 0 2

    1

    CA=

    0

    B@1 0 00 1 0

    0 2 1

    1

    CA

    0

    B@1 0 20 1 30 0 9

    1

    CA, x =

    0

    B@1

    1

    0

    1

    CA;

    (d)

    0BBB@1 0 0 00 0 1 00 1 0 00 0 0 1

    1CCCA0BBB@

    1 2 1 03 6 2 11 1 7 21 1 2 1

    1CCCA =0BBB@

    1 0 0 01 1 0 03 0 1 01 3 215 1

    1CCCA0BBB@

    1 2 1 00 1 6 20 0 5 10 0 0 45

    1CCCA, x =0BBB@

    2213522

    1CCCA;

    (e)

    0BBB@0 0 1 01 0 0 00 1 0 00 0 0 1

    1CCCA0BBB@

    0 1 0 02 3 1 01 4 1 27 1 2 3

    1CCCA =0BBB@

    1 0 0 00 1 0 02 5 1 07 29 3 1

    1CCCA0BBB@

    1 4 1 20 1 0 00 0 3 40 0 0 1

    1CCCA, x =0BBB@11

    13

    1CCCA;

    (f)

    0BBBBB@

    0 0 1 0 00 1 0 0 00 0 0 1 01 0 0 0 0

    0 0 0 0 1

    1CCCCCA

    0BBBBB@

    0 0 2 3 40 1 7 2 31 4 1 1 10 0 1 0 2

    0 0 1 7 3

    1CCCCCA

    =

    0BBBBB@

    1 0 0 0 00 1 0 0 00 0 1 0 00 0 2 1 0

    0 0 173 1

    1CCCCCA

    0BBBBB@

    1 4 1 1 10 1 7 2 30 0 1 0 20 0 0 3 0

    0 0 0 0 1

    1CCCCCA

    , x =

    0BBBBB@

    100

    10

    1CCCCCA

    .

    1.4.20.

    (a)

    0B@ 1 0 00 0 10 1 0

    1CA0B@ 4 4 23 3 13 1 2

    1CA =0BB@

    1 0 0

    34 1 0 34 0 1

    1CCA0BB@

    4 4 20 2 120 0 52

    1CCA;solution: x1 =

    54 , x2 =

    74 , x3 =

    32 .

    (b)

    0BBB@0 0 1 00 1 0 01 0 0 00 0 0 1

    1CCCA0BBB@

    0 1 1 10 1 1 01 1 1 31 2 1 1

    1CCCA =0BBB@

    1 0 0 00 1 0 00 1 1 01 3 52 1

    1CCCA0BBB@

    1 1 1 30 1 1 00 0 2 10 0 0 32

    1CCCA;solution: x = 4, y = 0, z = 1, w = 1.

    (c) 0BBB@1 0 0 0

    0 0 0 10 0 1 00 1 0 0

    1CCCA0BBB@1

    1 2 1

    1 1 3 01 1 1 31 2 1 1

    1CCCA = 0BBB@1 0 0 0

    1 1 0 01 0 1 0

    1 0 12 11CCCA0BBB@

    1

    1 2 1

    0 3 3 00 0 2 40 0 0 1

    1CCCA;solution: x = 193 , y = 53 , z = 3, w = 2.

    1.4.21. (a) They are all of the form P A = L U, where P is a permutation matrix. In the first case,
    we interchange rows 1 and 2; in the second case, we interchange rows 1 and 3; in the
    third case, we interchange rows 1 and 3 first and then interchange rows 2 and 3.

    (b) Same solution x = 1, y = 1, z = 2 in all cases. Each is obtained by a sequence of elementary
    row operations, which do not change the solution.

    1.4.22. There are four in all:

    0B@ 0 1 01 0 00 0 1

    1CA0B@ 0 1 21 0 11 1 3

    1CA = 0B@ 1 0 00 1 01 1 1

    1CA0B@ 1 0 10 1 20 0 2

    1CA ,0B@ 0 1 00 0 1

    1 0 0

    1CA0B@ 0 1 21 0 1

    1 1 3

    1CA =0B@ 1 0 01 1 0

    0 1 1

    1CA0B@ 1 0 10 1 4

    0 0 2

    1CA ,0B@ 0 0 10 1 0

    1 0 0

    1CA0B@ 0 1 21 0 1

    1 1 3

    1CA =0B@ 1 0 01 1 0

    0 1 1

    1CA0B@ 1 1 30 1 4

    0 0 2

    1CA ,


    0B@ 0 0 11 0 00 1 0

    1CA0B@ 0 1 21 0 1

    1 1 3

    1CA =0B@ 1 0 00 1 0

    1 1 1

    1CA0B@ 1 1 30 1 2

    0 0 2

    1CA .The other two permutation matrices are not regular.

    1.4.23. The maximum is 6 since there are 6 different 3 3 permutation matrices. For example,0B@ 1 0 01 1 01 1 1

    1CA = 0B@ 1 0 01 1 01 1 1

    1CA0B@ 1 0 00 1 00 0 1

    1CA ,0B@ 1 0 00 0 1

    0 1 0

    1CA0B@ 1 0 01 1 01 1 1

    1CA =0B@ 1 0 01 1 0

    1 1 1

    1CA0B@ 1 0 00 1 1

    0 0 1

    1CA ,0B@ 0 1 01 0 0

    0 0 1

    1CA0B@ 1 0 01 1 01 1 1

    1CA =0B@ 1 0 01 1 01 2 1

    1CA0B@ 1 1 00 1 0

    0 0 1

    1CA ,0B@

    0 1 00 0 11 0 0

    1CA

    0B@

    1 0 01 1 0

    1 1 1

    1CA

    =

    0B@

    1 0 01 1 0

    1

    12 1

    1CA

    0B@

    1 1 00 2 10 0 12

    1CA

    ,

    0B@ 0 0 10 1 01 0 0

    1CA0B@ 1 0 01 1 01 1 1

    1CA = 0B@ 1 0 01 1 01 12 1

    1CA0B@1 1 10 2 10 0 12

    1CA ,0B@ 0 0 11 0 0

    0 1 0

    1CA0B@ 1 0 01 1 01 1 1

    1CA =0B@ 1 0 01 1 01 2 1

    1CA0B@1 1 10 1 1

    0 0 1

    1CA . 1.4.24. False. Changing the permutation matrix typically changes the pivots.

    1.4.25.

    Permuted L U factorization

    start
       set P = I, L = I, U = A
       for j = 1 to n
          if u_kj = 0 for all k >= j, stop; print "A is singular"
          if u_jj = 0 but u_kj != 0 for some k > j then
             interchange rows j and k of U
             interchange rows j and k of P
             for m = 1 to j - 1 interchange l_jm and l_km next m
          for i = j + 1 to n
             set l_ij = u_ij / u_jj
             add - l_ij times row j to row i of U
          next i
       next j
    end
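    The pseudocode above translates almost line for line into Python. The sketch below (our own code, not from the text, using exact rational arithmetic via `fractions`) returns P, L, U with P A = L U:

    ```python
    from fractions import Fraction

    def plu(A):
        """Permuted LU factorization: returns P, L, U with P A = L U,
        or raises ValueError if A is singular."""
        n = len(A)
        U = [[Fraction(x) for x in row] for row in A]
        L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
        P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
        for j in range(n):
            # find a nonzero pivot in column j, at or below the diagonal
            k = next((k for k in range(j, n) if U[k][j] != 0), None)
            if k is None:
                raise ValueError("A is singular")
            if k != j:                       # interchange rows j and k of U and P,
                U[j], U[k] = U[k], U[j]
                P[j], P[k] = P[k], P[j]
                for m in range(j):           # and of the computed part of L
                    L[j][m], L[k][m] = L[k][m], L[j][m]
            for i in range(j + 1, n):
                L[i][j] = U[i][j] / U[j][j]
                # add -l_ij times row j to row i of U
                U[i] = [u - L[i][j] * v for u, v in zip(U[i], U[j])]
        return P, L, U

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    A = [[0, 1, 2], [1, 2, 3], [2, 3, 1]]    # needs a row interchange at the first step
    P, L, U = plu(A)
    assert matmul(P, A) == matmul(L, U)
    ```
    
    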


    1.5.1.

    (a)

    2 3

    1 1!1 3

    1 2

    !=

    1 00 1

    !=

    1 31 2

    !2 3

    1 1!

    ,

    (b) 0B@ 2 1 13 2 12 1 2

    1CA0B@ 3 1 14 2 11 0 1

    1CA = 0B@ 1 0 00 1 00 0 1

    1CA = 0B@ 3 1 14 2 11 0 1

    1CA0B@ 2 1 13 2 12 1 2

    1CA,

    (c)

    0B@1 3 22 2 12 1 3

    1CA0BB@ 1 1 1

    47 17 37

    67 57 87

    1CCA =0B@ 1 0 00 1 0

    0 0 1

    1CA =0BB@ 1 1 1

    47 17 37

    67 57 87

    1CCA0B@1 3 22 2 12 1 3

    1CA.

    1.5.2. X =

    0B@5 16 63 8 31 3 1

    1CA; X A =0B@5 16 63 8 31 3 1

    1CA0B@ 1 2 00 1 3

    1 1 8

    1CA =0B@ 1 0 00 1 0

    0 0 1

    1CA.1.5.3. (a)

    0 11 0

    !, (b)

    1 0

    5 1!

    , (c)

    1 20 1

    !,

    (d) 0B@ 1 0 00 1 30 0 1

    1CA, (e) 0BBB@1 0 0 0

    0 1 0 00 6 1 00 0 0 1

    1CCCA, (f) 0BBB@0 0 0 1

    0 1 0 00 0 1 01 0 0 0

    1CCCA.

    1.5.4. [ 1, 0, 0 ; -a, 1, 0 ; -b, 0, 1 ] [ 1, 0, 0 ; a, 1, 0 ; b, 0, 1 ] = [ 1, 0, 0 ; 0, 1, 0 ; 0, 0, 1 ] = [ 1, 0, 0 ; a, 1, 0 ; b, 0, 1 ] [ 1, 0, 0 ; -a, 1, 0 ; -b, 0, 1 ];
    M^{-1} = [ 1, 0, 0 ; -a, 1, 0 ; a c - b, -c, 1 ].

    1.5.5. The ith row of the matrix multiplied by the ith column of the inverse should equal 1.
    This is not possible if all the entries of the ith row are zero; see Exercise 1.2.24.

    1.5.6. (a) A1 =1 1

    2 1!

    , B1 =0@ 23 13 13 13

    1A.(b) C =

    2 13 0

    !, C1 = B1A1 =

    0@ 0 131 23

    1A. 1.5.7. (a) R^{-1} = [ cos θ, sin θ ; -sin θ, cos θ ].
    (b) ( a ; b ) = R^{-1} ( x ; y ) = ( x cos θ + y sin θ ; -x sin θ + y cos θ ).
    (c) det(R - a I) = det [ cos θ - a, -sin θ ; sin θ, cos θ - a ] = (cos θ - a)^2 + (sin θ)^2 > 0
    provided sin θ != 0, which is valid when 0 < θ < π.

    1.5.8.

    (a) Setting
    P1 = [ 1, 0, 0 ; 0, 1, 0 ; 0, 0, 1 ],  P2 = [ 0, 1, 0 ; 0, 0, 1 ; 1, 0, 0 ],  P3 = [ 0, 0, 1 ; 1, 0, 0 ; 0, 1, 0 ],
    P4 = [ 0, 1, 0 ; 1, 0, 0 ; 0, 0, 1 ],  P5 = [ 0, 0, 1 ; 0, 1, 0 ; 1, 0, 0 ],  P6 = [ 1, 0, 0 ; 0, 0, 1 ; 0, 1, 0 ],
    we find P1^{-1} = P1, P2^{-1} = P3, P3^{-1} = P2, P4^{-1} = P4, P5^{-1} = P5, P6^{-1} = P6.

    (b) P1, P4, P5, P6 are their own inverses.


    (c) Yes: P =

    0BBB@0 1 0 01 0 0 00 0 0 10 0 1 0

    1CCCA interchanges two pairs of rows.

    1.5.9. (a) [ 0, 0, 0, 1 ; 0, 0, 1, 0 ; 0, 1, 0, 0 ; 1, 0, 0, 0 ],
    (b) [ 0, 0, 0, 1 ; 1, 0, 0, 0 ; 0, 1, 0, 0 ; 0, 0, 1, 0 ],
    (c) [ 1, 0, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ; 0, 1, 0, 0 ],
    (d) [ 1, 0, 0, 0, 0 ; 0, 0, 0, 1, 0 ; 0, 1, 0, 0, 0 ; 0, 0, 0, 0, 1 ; 0, 0, 1, 0, 0 ].

    1.5.10. (a) If i and j = pi(i) are the entries in the ith column of the 2 x n matrix corresponding to
    the permutation pi, then the entries in the jth column of the 2 x n matrix corresponding to
    the inverse permutation are j and i = pi^{-1}(j). Equivalently, permute the columns so that the
    second row is in the order 1, 2, ..., n, and then switch the two rows.

    (b) The permutations correspond to

    (i) ( 1 2 3 4 ; 4 3 2 1 ), (ii) ( 1 2 3 4 ; 4 1 2 3 ), (iii) ( 1 2 3 4 ; 1 3 4 2 ), (iv) ( 1 2 3 4 5 ; 1 4 2 5 3 ).

    The inverse permutations correspond to

    (i) ( 1 2 3 4 ; 4 3 2 1 ), (ii) ( 1 2 3 4 ; 2 3 4 1 ), (iii) ( 1 2 3 4 ; 1 4 2 3 ), (iv) ( 1 2 3 4 5 ; 1 3 5 2 4 ).

    1.5.11. If a = 0, the first row is all zeros, and so A is singular. Otherwise, an elementary row
    operation makes d = 0. If e = 0, then the resulting matrix has a row of all zeros.
    Otherwise, another elementary row operation makes h = 0, and the result is a matrix
    with a row of all zeros.

    1.5.12. This is true if and only if A^2 = I, and so, according to Exercise 1.2.36, A is either of
    the form ± [ 1, 0 ; 0, 1 ] or [ a, b ; c, -a ], where a is arbitrary and b c = 1 - a^2.

    1.5.13. (3 I - A) A = 3 A - A^2 = I, so 3 I - A is the inverse of A.
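    As a concrete check (the matrix here is our own choice, not taken from the exercise), A = [ 1, 1 ; 1, 2 ] satisfies A^2 = 3A - I, and 3I - A does invert it:

    ```python
    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    A = [[1, 1], [1, 2]]
    A2 = matmul(A, A)
    threeA_minus_I = [[3 * A[i][j] - (i == j) for j in range(2)] for i in range(2)]
    assert A2 == threeA_minus_I          # A satisfies A^2 - 3A + I = O

    threeI_minus_A = [[3 * (i == j) - A[i][j] for j in range(2)] for i in range(2)]
    assert matmul(threeI_minus_A, A) == [[1, 0], [0, 1]]
    ```
    
    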

    1.5.14. ( (1/c) A^{-1} ) (c A) = (1/c) c A^{-1} A = I.

    1.5.15. Indeed, (A^n)^{-1} = (A^{-1})^n.

    1.5.16. If all the diagonal entries are nonzero, then D^{-1} D = I. On the other hand, if one of the
    diagonal entries is zero, then all the entries in that row are zero, and so D is not invertible.

    1.5.17. Since U^{-1} is also upper triangular, the only nonzero summand in the product of the ith
    row of U and the ith column of U^{-1} is the product of their diagonal entries, which must
    equal 1 since U U^{-1} = I.

    1.5.18. (a) A = I^{-1} A I. (b) If B = S^{-1} A S, then A = S B S^{-1} = T^{-1} B T, where T = S^{-1}.
    (c) If B = S^{-1} A S and C = T^{-1} B T, then C = T^{-1} (S^{-1} A S) T = (S T)^{-1} A (S T).

    1.5.19. (a) Suppose D^{-1} = [ X, Y ; Z, W ]. Then, in view of Exercise 1.2.37, the equation
    D D^{-1} = I = [ I, O ; O, I ] requires A X = I, A Y = O, B Z = O, B W = I. Thus X = A^{-1},
    W = B^{-1}, and, since A and B are invertible, Y = A^{-1} O = O, Z = B^{-1} O = O.


    (b)

    0BBB@ 13 23 0

    23 13 00 0 13

    1CCCA,0BBB@1 1 0 02 1 0 0

    0 0 5 30 0 2 1

    1CCCA.1.5.20.

    (a) B A = 1 1 01 1 1!0B@1

    1

    0 11 1

    1CA = 1 00 1!.(b) A X = I does not have a solution. Indeed, the first column of this matrix equation is

    the linear system

    0B@ 1 10 11 1

    1CAxy

    !=

    0B@ 100

    1CA, which has no solutions since x y = 1, y = 0,and x + y = 0 are incompatible.

    (c) Yes: for instance, B =

    2 3 1

    1 1 1!

    . More generally, B A = I if and only ifB =1 z 1 2 z zw 1 2 w w

    !, where z, w are arbitrary.

    1.5.21. The general solution to A X = I i s X = 0B@2 y 1

    2 v

    y v1 1 1CA, where y, v are arbitrary.Any of these matrices serves as a right inverse. On the other hand, the linear systemY A = I is incompatible and there is no solution.

    1.5.22.

    (a) No. The only solutions are complex, with a = 12 i

    q23

    b, where b = 0 is any

    nonzero complex number.

    (b) Yes. A simple example is A =

    1 11 0

    !, B =

    1 00 1

    !. The general solution to the

    2 2 matrix equation has the form A = B M, where M =

    x yz w

    !is any matrix with

    tr M = x + w =

    1, and det M = x w

    y z = 1. To see this, if we set A = B M,then (I + M)1 = I + M1, which is equivalent to I + M + M1 = O. Writing thisout using the formula (1.38) for the inverse, we find that if det M = x w y z = 1 thentr M = x+w = 1, while if det M= 1, then y = z = 0 and x+x1+1 = 0 = w+w1+1,in which case, as in part (a), there are no real solutions.

    1.5.23. E =

    0BBB@1 0 0 00 1 0 00 0 7 00 0 0 1

    1CCCA, E1 =0BBB@

    1 0 0 00 1 0 00 0 17 00 0 0 1

    1CCCA.

    1.5.24. (a) 0@ 1 231 13

    1A, (b) 0@ 18 3838 18

    1A, (c) 0@ 35 45 45 35 1A, (d) no inverse,

    (e)

    0B@ 3 2 29 7 61 1 1

    1CA, (f)0BBB@ 58 18 58 12 12 12

    78 38 18

    1CCCA, (g)0BB@

    52

    32

    12

    2 1 12 1 0

    1CCA,


    (h)

    0BBB@0 2 1 11 6 2 30 5 0 30 2 0 1

    1CCCA, (i)0BBB@51 8 12 313 2 3 1

    21 3 5 15 1 1 0

    1CCCA.1.5.25.

    (a

    ) 1 0

    3 1!1 0

    0 3!1

    2

    0 1! = 1

    2

    3 3!,(b)

    1 03 1

    !1 00 8

    !1 30 1

    !=

    1 33 1

    !,

    (c)

    1 043 1

    !0@ 35 00 1

    1A 1 00 53

    !0@ 1 430 1

    1A =0@ 35 45

    45

    35

    1A,(d) not possible,

    (e)

    0B@ 1 0 03 1 00 0 1

    1CA0B@ 1 0 00 1 02 0 1

    1CA0B@ 1 0 00 1 0

    0 1 1

    1CA0B@ 1 0 00 1 0

    0 0 1

    1CA0B@ 1 0 00 1 0

    0 0 1

    1CA0

    B@1 0 20 1 00 0 1

    1

    CA

    0

    B@1 0 00 1 60 0 1

    1

    CA=

    0

    B@1 0 23 1 0

    2 1

    3

    1

    CA,

    (f)0B@ 1 0 03 1 0

    0 0 1

    1CA0B@ 1 0 00 1 02 0 1

    1CA0B@ 1 0 00 1 00 3 1

    1CA0B@ 1 0 00 1 00 0 1

    1CA0B@ 1 0 00 1 00 0 8

    1CA0B@ 1 0 30 1 0

    0 0 1

    1CA0B@ 1 0 00 1 4

    0 0 1

    1CA0B@ 1 2 00 1 0

    0 0 1

    1CA =0B@ 1 2 33 5 5

    2 1 2

    1CA,(g)

    0B@ 1 0 00 0 10 1 0

    1CA0B@ 1 0 00 1 0

    2 0 1

    1CA0B@ 2 0 00 1 0

    0 0 1

    1CA0B@ 1 0 00 1 0

    0 0 1

    1CA0B@ 1 0 00 1 0

    0 0 1

    1CA0B@

    1 0 10 1 00 0 1

    1CA

    0B@

    1 0 00 1 10 0 1

    1CA

    0B@

    1 12 00 1 00 0 1

    1CA

    =

    0B@

    2 1 24 2 30 1 1

    1CA

    ,

    (h)0BBB@ 1 0 0 00 0 1 00 1 0 0

    0 0 0 1

    1CCCA0BBB@ 1 0 0 012 1 0 00 0 1 0

    0 0 0 1

    1CCCA0BBB@ 1 0 0 00 1 0 00 0 1 0

    0 0 2 1

    1CCCA0BBB@ 2 0 0 00 1 0 00 0 1 0

    0 0 0 1

    1CCCA0BBB@ 1 0 0 00 12 0 00 0 1 0

    0 0 0 1

    1CCCA0BBB@

    1 0 0 120 1 0 00 0 1 00 0 0 1

    1CCCA0BBB@

    1 0 0 00 1 0 30 0 1 00 0 0 1

    1CCCA0BBB@

    1 0 0 00 1 0 00 0 1 30 0 0 1

    1CCCA0BBB@

    1 12 0 00 1 0 00 0 1 00 0 0 1

    1CCCA =0BBB@

    2 1 0 10 0 1 31 0 0 10 0 2 5

    1CCCA,

    (i)

    0BBB@1 0 0 00 1 0 00 0 0 10 0 1 0

    1CCCA0BBB@

    1 0 0 02 1 0 00 0 1 00 0 0 1

    1CCCA0BBB@

    1 0 0 00 1 0 00 0 1 03 0 0 1

    1CCCA0BBB@

    1 0 0 00 1 0 00 2 1 00 0 0 1

    1CCCA0BBB@

    1 0 0 00 1 0 00 0 1 00 1 0 1

    1CCCA0BBB@

    1 0 0 0

    0 1 0 00 0 1 00 0 0 1

    1CCCA0BBB@1 0 0 0

    0 1 0 00 0 1 00 0 0 1

    1CCCA0BBB@1 0 0 1

    0 1 0 00 0 1 00 0 0 1

    1CCCA0BBB@1 0 0 0

    0 1 0 20 0 1 00 0 0 1

    1CCCA0BBB@1 0 0 0

    0 1 0 00 0 1 50 0 0 1

    1CCCA0BBB@

    1 0 1 00 1 0 00 0 1 00 0 0 1

    1CCCA0BBB@

    1 0 0 00 1 1 00 0 1 00 0 0 1

    1CCCA0BBB@

    1 2 0 00 1 0 00 0 1 00 0 0 1

    1CCCA =0BBB@

    1 2 1 12 3 3 03 7 2 40 2 1 1

    1CCCA.


    1.5.26. Applying Gaussian Elimination:

    E1 =

    0@ 1 0 13

    1

    1A, E1A =0B@32 12

    0 23

    1CA,

    E2 = 0@1 0

    0 32 1A, E2E1A = 0@32

    12

    0 11A,E3 =

    0@ 23 00 1

    1A, E3E2E1A =0@ 1 13

    0 1

    1A,E4 =

    0@ 1 130 1

    1A, E4E3E2E1A = I =

    1 00 1

    !,

    and hence A = E11 E12 E

    13 E

    14 =

    0@ 1 013

    1

    1A0@ 1 00 2

    3

    1A0@ 32 00 1

    1A0@ 1 130 1

    1A.

    1.5.27. (a) 0@ i2 12

    12 i2 1A, (b) 1 1 i

    1 + i 1 !,(c)

    0B@ i 0 11 i i 11 1 i

    1CA, (d)0B@ 3 + i 1 i i4 + 4 i 2 i 2 + i1 + 2 i 1 i 1

    1CA. 1.5.28. No. If they have the same solution, then they both reduce to ( I | x ) under elementary
    row operations. Thus, by applying the appropriate elementary row operations to reduce the
    augmented matrix of the first system to ( I | x ), and then applying the inverse elementary
    row operations, we arrive at the augmented matrix of the second system. Thus, the first
    system can be changed into the second by the combined sequence of elementary row operations,
    proving equivalence. (See also Exercise 2.5.44 for the general case.)

    1.5.29. (a) If A' = E_N E_{N-1} ... E_2 E_1 A, where E_1, ..., E_N represent the row operations applied to
    A, then C' = A' B = E_N E_{N-1} ... E_2 E_1 A B = E_N E_{N-1} ... E_2 E_1 C, which represents
    the same sequence of row operations applied to C.

    (b)

    (E A) B =

    0B@ 1 2 12 3 22 3 2

    1CA0B@ 1 23 01 1

    1CA =0B@ 8 39 29 2

    1CA =0B@ 1 0 00 1 02 0 1

    1CA0B@ 8 39 2

    7 4

    1CA = E(A B).

    1.5.30. (a)

    0@

    12

    12

    14

    14

    1A

    1

    2!=

    0@ 12

    34

    1A; (b)

    0@

    517

    217

    117

    317

    1A

    2

    12!=

    2

    2!;

    (c)

    0BB@2 52 321 1 00 12 12

    1CCA0BB@

    3

    22

    1CCA =0BB@

    14

    5

    2

    1CCA; (d)0B@ 9 15 86 10 51 2 1

    1CA0B@ 31

    5

    1CA =0B@ 23

    0

    1CA;

    (e)

    0B@4 3 12 1 03 1 1

    1CA0B@ 357

    1CA =0B@413

    1CA; (f)0BBB@

    1 0 1 10 0 1 12 1 1 02 1 1 1

    1CCCA0BBB@

    4117

    6

    1CCCA =0BBB@

    314

    2

    1CCCA;


    (g)

    0BBBBB@1 1 0 1

    52 2 32 32 4 3 2 3 12 1 12 12

    1CCCCCA

    0BBBBB@ 2

    3

    3

    2

    1CCCCCA =0BBBBB@

    312

    1 32

    1CCCCCA.

    1.5.31. (a) 0@ 1

    3 231A , (b) 0@

    1

    4141A, (c) 0@

    7

    5 151A, (d) singular matrix,

    (e)

    0B@141

    1CA, (f)0BBB@

    18

    1258

    1CCCA, (g)0B@

    1201

    1CA, (h)0BBB@

    4108

    3

    1CCCA, (i)0BBB@28712

    3

    1CCCA.

    1.5.32.

    (a)

    1 2

    3 1!

    =

    1 0

    3 1!

    1 00 7

    !1 20 1

    !;

    (b

    ) 0 1

    1 0!0 4

    7 2! = 1 0

    0 1!7 0

    0 4!1

    27

    0 1!(c)

    0B@ 2 1 22 4 10 2 1

    1CA =0B@ 1 0 01 1 0

    0 23 1

    1CA0B@ 2 0 00 3 0

    0 0 1

    1CA0B@ 1

    12 1

    0 1 10 0 1

    1CA;

    (d)

    0B@ 1 0 00 0 10 1 0

    1CA0B@ 1 1 51 1 2

    2 1 3

    1CA =0B@ 1 0 02 1 0

    1 0 1

    1CA0B@ 1 0 00 3 0

    0 0 7

    1CA0B@ 1 1 50 1 73

    0 0 1

    1CA;

    (e)

    0B@ 2 3 21 1 11 1 2

    1CA =0B@ 1 0 012 1 0

    12 1 1

    1CA0B@ 2 0 00 12 0

    0 0 1

    1CA0B@ 1

    32 1

    0 1 00 0 1

    1CA;

    (f) 0BBB@1 1 1 21 4 1 51 2 1 13 1 1 6

    1CCCA = 0BBB@1 0 0 0

    1 1 0 01 1 1 03 43 1 1

    1CCCA0BBB@1 0 0 0

    0 3 0 00 0 2 00 0 0 4

    1CCCA0BBB@1 1 1 20 1 0 10 0 1 00 0 0 1

    1CCCA;

    (g)

    0BBB@1 0 0 00 1 0 00 0 0 10 0 1 0

    1CCCA0BBB@

    1 0 2 32 2 0 11 2 2 10 1 1 2

    1CCCA =0BBB@

    1 0 0 02 1 0 00 12 1 01 1 0 1

    1CCCA0BBB@

    1 0 0 00 2 0 00 0 1 00 0 0 5

    1CCCA0BBB@

    1 0 2 30 1 2 720 0 1 1120 0 0 1

    1CCCA.1.5.33.

    (a)

    0@ 3757

    1A, (b) 83

    !, (c)

    0BBB@

    16

    2323

    1CCCA

    , (d)

    0B@ 120

    1CA, (e)0B@123

    7

    1CA, (f)0BBBB@

    73

    25

    5

    3

    1CCCCA

    , (g)

    0BBB@

    010

    2

    1CCCA

    .

    1.6.1. (a) ( 1 5 ), (b)

    1 01 2

    !, (c)

    1 22 1

    !, (d)

    0B@ 1 22 01 2

    1CA,


    (e)

    0B@ 123

    1CA, (f) 1 3 52 4 6

    !, (g)

    0B@ 1 0 12 3 11 2 5

    1CA.

    1.6.2. AT =

    0

    B@3 1

    1 2

    1 1

    1

    CA, BT =

    1 2 32 0 4

    !,

    (A B)T = BTAT =2 0

    2 6

    !, (B A)T = ATBT =

    0B@1 6 55 2 113 2 7

    1CA.1.6.3. IfA has size m n and B has size n p, then (A B)T has size p m. Further, AT has

    size n m and BT has size p n, and so unless m = p the product ATBT is not defined.If m = p, then ATBT has size n n, and so to equal (A B)T, we must have m = n = p,so the matrices are square. Finally, taking the transpose of both sides, A B = (ATBT)T =

    (BT)T(AT)T = B A, and so they must commute.

    1.6.4. The (i, j) entry of C = (A B)^T is the (j, i) entry of A B, so

    c_ij = sum_{k=1}^n a_jk b_ki = sum_{k=1}^n b'_ik a'_kj,

    where a'_ij = a_ji and b'_ij = b_ji are the entries of A^T and B^T, respectively. Thus c_ij equals
    the (i, j) entry of the product B^T A^T.
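    The identity (A B)^T = B^T A^T can be spot-checked on random rectangular matrices (a sketch with our own helper functions):

    ```python
    import random

    def transpose(A):
        return [list(col) for col in zip(*A)]

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    random.seed(1)
    A = [[random.randint(-5, 5) for _ in range(4)] for _ in range(3)]   # 3 x 4
    B = [[random.randint(-5, 5) for _ in range(2)] for _ in range(4)]   # 4 x 2

    assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
    ```
    
    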

    1.6.5. (A B C)^T = C^T B^T A^T.

    1.6.6. False. For example, [ 1, 1 ; 0, 1 ] does not commute with its transpose.

    1.6.7. If A = [ a, b ; c, d ], then A^T A = A A^T if and only if b^2 = c^2 and (a - d)(b - c) = 0.
    So either b = c, or c = -b != 0 and a = d. Thus all normal 2 x 2 matrices are of the form
    [ a, b ; b, d ] or [ a, b ; -b, a ].

    1.6.8. (a) (A B)^{-T} = ((A B)^T)^{-1} = (B^T A^T)^{-1} = (A^T)^{-1} (B^T)^{-1} = A^{-T} B^{-T}.

    (b) A B =

    1 02 1

    !, so (A B)T =

    1 20 1

    !, while AT =

    0 11 1

    !, BT =

    1 1

    1 2!

    ,

    so ATBT =

    1 20 1

    !.

    1.6.9. If A is invertible, then so is A^T by Lemma 1.32; then, by Lemma 1.21, A A^T and A^T A are
    invertible.

    1.6.10. No; for example, ( 1 ; 2 ) ( 3 4 ) = [ 3, 4 ; 6, 8 ], while ( 3 ; 4 ) ( 1 2 ) = [ 3, 6 ; 4, 8 ].

    1.6.11. No. In general, B^T A is the transpose of A^T B.

    1.6.12. (a) The ith entry of A e_j is the product of the ith row of A with e_j. Since all the entries in
    e_j are zero except the jth entry, the product will be equal to a_ij, i.e., the (i, j) entry of A.

    (b) By part (a), e_i^T A e_j is the product of the row matrix e_i^T and the jth column of A. Since
    all the entries in e_i^T are zero except the ith entry, multiplication by the jth column of A
    will produce a_ij.

    1.6.13. (a) Using Exercise 1.6.12, a_ij = e_i^T A e_j = e_i^T B e_j = b_ij for all i, j.

    (b) Two examples: A = 1 20 1! , B = 1 1

    1 1!; A = 0 0

    0 0! , B = 0

    1

    1 0!. 1.6.14.

    (a) If p_ij = 1, then P A maps the jth row of A to its ith row. Then Q = P^T has q_ji = 1,
    and so it does the reverse, mapping the ith row of A to its jth row. Since this holds for
    all such entries, the result follows.

    (b) No. Any rotation matrix [ cos θ, -sin θ ; sin θ, cos θ ] also has this property. See Section 5.3.

    1.6.15. (a) Note that (A P^T)^T = P A^T, which permutes the rows of A^T, which are the columns of
    A, according to the permutation P.

    (b) The effect of multiplying P A P^T is equivalent to simultaneously permuting the rows and
    columns of A according to the permutation P. Associativity of matrix multiplication
    implies that it doesn't matter whether the rows or the columns are permuted first.

    1.6.16. (a) Note that w^T v is a scalar, and so

    A A^{-1} = ( I - v w^T )( I - c v w^T ) = I - (1 + c) v w^T + c v (w^T v) w^T = I - (1 + c - c w^T v) v w^T = I

    provided c = 1/(w^T v - 1), which works whenever w^T v != 1.

    (b) A = I - v w^T = [ 2, 2 ; 3, 5 ] and c = 1/(v^T w - 1) = 1/4, so A^{-1} = I - (1/4) v w^T = [ 5/4, -1/2 ; -3/4, 1/2 ].

    (c) If v^T w = 1, then A is singular, since A v = (1 - w^T v) v = 0 with v != 0, and so the homogeneous
    system does not have a unique solution.
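    The formula of part (a) is easy to verify on a sample pair of vectors (our own choice of v and w, not those of part (b)); here w^T v = 5, so c = 1/4:

    ```python
    from fractions import Fraction

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    v, w = [1, 2], [3, 1]
    wTv = sum(x * y for x, y in zip(w, v))       # w^T v = 5, so the formula applies
    c = Fraction(1, wTv - 1)                     # c = 1/4

    A    = [[(i == j) - v[i] * w[j]     for j in range(2)] for i in range(2)]
    Ainv = [[(i == j) - c * v[i] * w[j] for j in range(2)] for i in range(2)]

    assert matmul(A, Ainv) == [[1, 0], [0, 1]]
    ```
    
    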

    1.6.17. (a) a = 1; (b) a = 1, b = 2, c = 3; (c) a = 2, b = 1, c = 5.

    1.6.18.

    (a) [ 1, 0, 0 ; 0, 1, 0 ; 0, 0, 1 ], [ 0, 1, 0 ; 1, 0, 0 ; 0, 0, 1 ], [ 0, 0, 1 ; 0, 1, 0 ; 1, 0, 0 ], [ 1, 0, 0 ; 0, 0, 1 ; 0, 1, 0 ].

    (b) [ 1, 0, 0, 0 ; 0, 1, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ],
    [ 0, 1, 0, 0 ; 1, 0, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ],
    [ 0, 0, 1, 0 ; 0, 1, 0, 0 ; 1, 0, 0, 0 ; 0, 0, 0, 1 ],
    [ 0, 0, 0, 1 ; 0, 1, 0, 0 ; 0, 0, 1, 0 ; 1, 0, 0, 0 ],
    [ 1, 0, 0, 0 ; 0, 0, 1, 0 ; 0, 1, 0, 0 ; 0, 0, 0, 1 ],
    [ 1, 0, 0, 0 ; 0, 0, 0, 1 ; 0, 0, 1, 0 ; 0, 1, 0, 0 ],
    [ 1, 0, 0, 0 ; 0, 1, 0, 0 ; 0, 0, 0, 1 ; 0, 0, 1, 0 ],
    [ 0, 1, 0, 0 ; 1, 0, 0, 0 ; 0, 0, 0, 1 ; 0, 0, 1, 0 ],
    [ 0, 0, 1, 0 ; 0, 0, 0, 1 ; 1, 0, 0, 0 ; 0, 1, 0, 0 ],
    [ 0, 0, 0, 1 ; 0, 0, 1, 0 ; 0, 1, 0, 0 ; 1, 0, 0, 0 ].

    1.6.19. True, since (A^2)^T = (A A)^T = A^T A^T = A A = A^2.

    1.6.20. True. Invert both sides of the equation AT = A, and use Lemma 1.32.


    1.6.21. False. For example, [ 0, 1 ; 1, 0 ] [ 2, 1 ; 1, 3 ] = [ 1, 3 ; 2, 1 ].

    1.6.22. (a) If D is a diagonal matrix, then for all i != j we have d_ij = d_ji = 0, so D is symmetric.

    (b) If L is lower triangular, then a_ij = 0 for i < j; if it is also symmetric, then a_ji = 0 for i < j,
    so L is diagonal. Conversely, if L is diagonal, then a_ij = 0 for i < j, so L is lower triangular,
    and it is symmetric.

    1.6.23. (a) Since A is symmetric, (A^n)^T = (A A ... A)^T = A^T A^T ... A^T = A A ... A = A^n.

    (b) (2 A^2 - 3 A + I)^T = 2 (A^2)^T - 3 A^T + I = 2 A^2 - 3 A + I.

    (c) If p(A) = c_n A^n + ... + c_1 A + c_0 I, then p(A)^T = c_n (A^T)^n + ... + c_1 A^T + c_0 I = p(A^T).
    In particular, if A = A^T, then p(A)^T = p(A^T) = p(A).

    1.6.24. If A has size m x n, then A^T has size n x m, and so both products are defined. Also,
    K^T = (A^T A)^T = A^T (A^T)^T = A^T A = K and L^T = (A A^T)^T = (A^T)^T A^T = A A^T = L.

    1.6.25.

    (a) 1 1

    1 4! = 1 0

    1 1!1 0

    0 3!1 1

    0 1! ,(b)

    2 33 1

    !=

    1 0

    32 1!2 0

    0 72

    !1 320 1

    !,

    (c)

    0B@ 1 1 11 3 21 2 0

    1CA =0B@ 1 0 01 1 01 12 1

    1CA0B@ 1 0 00 2 0

    0 0 32

    1CA0B@ 1 1 10 1 12

    0 0 1

    1CA,

    (d)

    0BBB@1 1 0 3

    1 2 2 00 2 1 03 0 0 1

    1CCCA =0BBB@

    1 0 0 01 1 0 0

    0 2 1 03 3 65 1

    1CCCA0BBB@

    1 0 0 00 1 0 00 0 5 00 0 0 495

    1CCCA0BBB@

    1 1 0 30 1 2 30 0 1 650 0 0 1

    1CCCA.

    1.6.26. M2 = 1 012 1

    ! 1 00 32

    !0@ 1 120 1

    1A, M3 = 0BB@1 0 012 1 0

    0 23 1

    1CCA0BB@2 0 0

    0 32 0

    0 0 43

    1CCA0BBB@1 12 0

    0 1 230 0 1

    1CCCA,

    M4 =

    0BBBBB@1 0 0 012 1 0 0

    0 23 1 0

    0 0 34 1

    1CCCCCA

    0BBBBB@2 0 0 0

    0 32 0 0

    0 0 43 0

    0 0 0 54

    1CCCCCA

    0BBBBB@1 12 0 0

    0 1 23 0

    0 0 1 340 0 0 1

    1CCCCCA.

    1.6.27. The matrix is not regular, since after the first set of row operations the (2, 2) entry is 0.
    More explicitly, if

    L = [ 1, 0, 0 ; a, 1, 0 ; b, c, 1 ],  D = [ p, 0, 0 ; 0, q, 0 ; 0, 0, r ],

    then

    L D L^T = [ p, a p, b p ; a p, a^2 p + q, a b p + c q ; b p, a b p + c q, b^2 p + c^2 q + r ].

    Equating this to A, the (1, 1) entry requires p = 1, and so the (1, 2) entry requires a = 2,
    but the (2, 2) entry then implies q = 0, which is not an allowed diagonal entry for D.
    Even if we ignore this, the (1, 3) entry would set b = 1, but then the (2, 3) entry says
    a b p + c q = 2, which does not match the corresponding entry of A, a contradiction.

    1.6.28. Write A = L D V; then A^T = V^T D L^T = L' U', where L' = V^T and U' = D L^T. Thus A^T
    is regular, since the diagonal entries of U', which are the pivots of A^T, are the same as those
    of D and U, which are the pivots of A.

    1.6.29. (a) The diagonal entries satisfy j_ii = -j_ii and so must be 0. (b) [ 0, 1 ; -1, 0 ]. (c) No,
    because the (1, 1) entry is always 0. (d) Invert both sides of the equation J^T = -J and
    use Lemma 1.32. (e) (J^T)^T = J = -J^T, and (J + K)^T = J^T + K^T = -J - K = -(J + K).
    J K is not, in general, skew-symmetric; for instance [ 0, 1 ; -1, 0 ] [ 0, 1 ; -1, 0 ] = [ -1, 0 ; 0, -1 ].
    (f) Since it is a scalar, v^T J v = (v^T J v)^T = v^T J^T (v^T)^T = -v^T J v equals its own
    negative, and so is zero.

    1.6.30. (a) Let S = (1/2)(A + A^T), J = (1/2)(A - A^T). Then S^T = S, J^T = -J, and A = S + J.

    (b) [ 1, 2 ; 3, 4 ] = [ 1, 5/2 ; 5/2, 4 ] + [ 0, -1/2 ; 1/2, 0 ];

    [ 1, 2, 3 ; 4, 5, 6 ; 7, 8, 9 ] = [ 1, 3, 5 ; 3, 5, 7 ; 5, 7, 9 ] + [ 0, -1, -2 ; 1, 0, -1 ; 2, 1, 0 ].
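    The decomposition of part (a) is one line per piece in code; this sketch (helper names are our own) reproduces the 3 x 3 example of part (b):

    ```python
    from fractions import Fraction

    def transpose(A):
        return [list(col) for col in zip(*A)]

    def split(A):
        """A = S + J with S symmetric and J skew-symmetric."""
        At = transpose(A)
        n = len(A)
        S = [[Fraction(A[i][j] + At[i][j], 2) for j in range(n)] for i in range(n)]
        J = [[Fraction(A[i][j] - At[i][j], 2) for j in range(n)] for i in range(n)]
        return S, J

    A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    S, J = split(A)
    assert S == [[1, 3, 5], [3, 5, 7], [5, 7, 9]]
    assert J == [[0, -1, -2], [1, 0, -1], [2, 1, 0]]
    assert S == transpose(S)                                  # symmetric part
    assert J == [[-x for x in row] for row in transpose(J)]   # skew part
    ```
    
    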

    1.7.1.

    (a) The solution is x = 107 , y = 197 . Gaussian Elimination and Back Substitution re-quires 2 multiplications and 3 additions; GaussJordan also uses 2 multiplications and 3

    additions; finding A1 =

    0@ 17 27 37 17

    1A by the GaussJordan method requires 2 additionsand 4 multiplications, while computing the solution x =

    0@ 17 27 37 17

    1A 47

    !=

    0@ 107 197

    1Atakes another 4 multiplications and 2 additions.

    (b) The solution is x = 4, y = 5, z = 1. Gaussian Elimination and Back Substitu-tion requires 17 multiplications and 11 additions; GaussJordan uses 20 multiplications

    and 11 additions; computing A1 =

    0B@ 0 1 12 8 532 5 3

    1CA takes 27 multiplications and 12additions, while multiplying A

    1

    b = x takes another 9 multiplications and 6 additions.(c) The solution is x = 2, y = 1, z = 25 . Gaussian Elimination and Back Substitutionrequires 6 multiplications and 5 additions; GaussJordan is the same: 6 multiplications

    and 5 additions; computing A1 =

    0BBB@ 12 32 32 12 12 12 25 0 15

    1CCCA takes 11 multiplications and 3additions, while multiplying A1b = x takes another 8 multiplications and 5 additions.

    1.7.2. (a) For a general matrix A, each entry of A^2 requires n multiplications and n - 1 additions,
    for a total of n^3 multiplications and n^3 - n^2 additions, and so, when compared with
    the efficient version of the Gauss-Jordan algorithm, takes exactly the same amount of
    computation.

    (b) A^3 = A^2 A requires a total of 2 n^3 multiplications and 2 n^3 - 2 n^2 additions, and so is
    about twice as slow.

    (c) You can compute A^4 as A^2 A^2, and so only 2 matrix multiplications are required. In
    general, if 2^r <= k < 2^{r+1} has j ones in its binary representation, then you need r
    multiplications to compute A^2, A^4, A^8, ..., A^{2^r}, followed by j - 1 multiplications to form
    A^k as a product of these particular powers, for a total of r + j - 1 matrix multiplications, and
    hence a total of (r + j - 1) n^3 multiplications and (r + j - 1) n^2 (n - 1) additions. See


    Exercise 1.7.8 and [11] for more sophisticated ways to speed up the computation.
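    To illustrate part (c), here is a small sketch (our own code, not from the text) that computes A^k by repeated squaring while counting matrix multiplications; for k = 13 = (1101)_2, with r = 3 and j = 3 ones, it uses r + j - 1 = 5 multiplications:

    ```python
    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    def matpow(A, k):
        """Compute A^k (k >= 1) by repeated squaring; returns (A^k, #multiplications)."""
        result, square, mults = None, A, 0
        while k:
            if k & 1:                        # this binary digit of k is a 1
                if result is None:
                    result = square          # first factor is free
                else:
                    result = matmul(result, square); mults += 1
            k >>= 1
            if k:                            # square only while more digits remain
                square = matmul(square, square); mults += 1
        return result, mults

    A = [[1, 1], [1, 0]]                     # powers generate Fibonacci numbers
    A13, mults = matpow(A, 13)
    assert mults == 5                        # r + j - 1 = 3 + 3 - 1
    assert A13[0][1] == 233                  # F(13) = 233
    ```
    
    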

    1.7.3. Back Substitution requires about half as many arithmetic operations as multiplying a matrix
    by a vector, and so is twice as fast.

    1.7.4. We begin by proving (1.61). We must show that 1 + 2 + 3 + ... + (n-1) = n(n-1)/2
    for n = 2, 3, .... For n = 2 both sides equal 1. Assume that (1.61) is true for n = k. Then
    1 + 2 + 3 + ... + (k-1) + k = k(k-1)/2 + k = k(k+1)/2, so (1.61) is true for n = k+1. Now
    the first equation in (1.62) follows if we note that 1 + 2 + 3 + ... + (n-1) + n = n(n+1)/2.
    Next we prove the first equation in (1.60), namely 2 + 6 + 12 + ... + (n-1)n = (n^3 - n)/3
    for n = 2, 3, .... For n = 2 both sides equal 2. Assume that the formula is true for n = k.
    Then 2 + 6 + 12 + ... + (k-1)k + k(k+1) = (k^3 - k)/3 + k^2 + k = ((k+1)^3 - (k+1))/3, so the
    formula is true for n = k+1, which completes the induction step. The proof of the second
    equation is similar, or, alternatively, one can use the first equation and (1.61) to show that

    sum_{j=1}^n (n-j)^2 = sum_{j=1}^n (n-j)(n-j+1) - sum_{j=1}^n (n-j) = (n^3 - n)/3 - (n^2 - n)/2 = (2 n^3 - 3 n^2 + n)/6.

    1.7.5. We may assume that the matrix is regular, so P = I , since row interchanges have noeffect on the number of arithmetic operations.

    (a) First, according to (1.60), it takes 13 n3 13 n multiplications and 13 n3 12 n2 + 16 n

    additions to factor A = L U. To solve Lcj = ej by Forward Substitution, the first j 1entries ofc are automatically 0, the jth entry is 1, and then, for k = j + 1, . . . n, we needkj 1 multiplications and the same number of additions to compute the kth entry, fora total of 12 (nj)(nj 1) multiplications and additions to find cj . Similarly, to solveUxj = cj for the j

    th column of A1 requires 12 n2 + 12 n multiplications and, since the

    first j 1 entries ofcj are 0, also 12 n2 12 n j + 1 additions. The grand total is n3multiplications and n (n 1)2 additions.

(b) Starting with the large augmented matrix M = [ A | I ], it takes (1/2)n^2(n-1) multiplications and (1/2)n(n-1)^2 additions to reduce it to triangular form [ U | C ], with U upper triangular and C lower triangular, then n^2 multiplications to obtain the special upper triangular form [ V | B ], and then (1/2)n^2(n-1) multiplications and, since B is upper triangular, (1/2)n(n-1)^2 additions to produce the final matrix [ I | A^{-1} ]. The grand total is n^3 multiplications and n(n-1)^2 additions. Thus, both methods take the same amount of work.
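The multiplication count in (1.60) for the factorization step can be confirmed by instrumenting a bare-bones LU routine. A sketch, assuming the text's counting convention of one operation per computed multiplier and per updated entry (the test matrix is an arbitrary diagonally dominant one, so no pivoting is needed):

```python
def lu_multiplication_count(n):
    """Count multiplications/divisions in the LU factorization of a dense
    regular n x n matrix (diagonally dominant here, so no row swaps)."""
    A = [[n + 1.0 if i == j else 1.0 for j in range(n)] for i in range(n)]
    count = 0
    for j in range(n - 1):
        for i in range(j + 1, n):
            l = A[i][j] / A[j][j]; count += 1       # the multiplier
            for k in range(j + 1, n):
                A[i][k] -= l * A[j][k]; count += 1  # one multiplication each
    return count

# Agrees with (1.60): (1/3)n^3 - (1/3)n multiplications.
for n in (2, 5, 10, 20):
    assert lu_multiplication_count(n) == (n**3 - n) // 3
```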

1.7.6. Combining (1.60-61), we see that it takes (1/3)n^3 + (1/2)n^2 - (5/6)n multiplications and (1/3)n^3 - (1/3)n additions to reduce the augmented matrix to upper triangular form [ U | c ]. Dividing the jth row by its pivot requires n - j + 1 multiplications, for a total of (1/2)n^2 + (1/2)n multiplications, to produce the special upper triangular form [ V | e ]. To produce the solved form [ I | d ] requires an additional (1/2)n^2 - (1/2)n multiplications and the same number of additions, for a grand total of (1/3)n^3 + (3/2)n^2 - (5/6)n multiplications and (1/3)n^3 + (1/2)n^2 - (5/6)n additions needed to solve the system.

1.7.7. Less efficient, by roughly a factor of 3/2: it takes (1/2)n^3 + n^2 - (1/2)n multiplications and (1/2)n^3 - (1/2)n additions.


1.7.8.
(a) D1 + D3 - D4 - D6 = (A1 + A4)(B1 + B4) + (A2 - A4)(B3 + B4) - (A1 + A2)B4 - A4(B1 - B3) = A1 B1 + A2 B3 = C1,
D4 + D7 = (A1 + A2)B4 + A1(B2 - B4) = A1 B2 + A2 B4 = C2,
D5 - D6 = (A3 + A4)B1 - A4(B1 - B3) = A3 B1 + A4 B3 = C3,
D1 - D2 - D5 + D7 = (A1 + A4)(B1 + B4) - (A1 - A3)(B1 + B2) - (A3 + A4)B1 + A1(B2 - B4) = A3 B2 + A4 B4 = C4.

(b) To compute D1, ..., D7 requires 7 multiplications and 10 additions; then to compute C1, C2, C3, C4 requires an additional 8 additions, for a total of 7 multiplications and 18 additions. The traditional method for computing the product of two 2 x 2 matrices requires 8 multiplications and 4 additions.

(c) The method requires 7 multiplications and 18 additions of n x n matrices, for a total of 7n^3 multiplications and 7n^2(n-1) + 18n^2 ~ 7n^3 additions, versus 8n^3 multiplications and 8n^2(n-1) ~ 8n^3 additions for the direct method, so there is a savings by a factor of 7/8.

(d) Let mu_r denote the number of multiplications and alpha_r the number of additions needed to compute the product of 2^r x 2^r matrices using Strassen's Algorithm. Then mu_r = 7 mu_{r-1}, while alpha_r = 7 alpha_{r-1} + 18 * 2^{2(r-2)}, where the first term comes from multiplying the blocks, and the second from adding them. Since mu_1 = 1, alpha_1 = 0, clearly mu_r = 7^{r-1}, while an induction proves the formula alpha_r = 6(7^{r-1} - 4^{r-1}), namely

alpha_{r+1} = 7 alpha_r + 18 * 4^{r-1} = 6(7^r - 7 * 4^{r-1}) + 18 * 4^{r-1} = 6(7^r - 4^r).

Combining the operations, Strassen's Algorithm is faster by a factor of

2 n^3 / (mu_r + alpha_r) = 2^{3r+1} / (13 * 7^{r-1} - 6 * 4^{r-1}),

which, for r = 10, equals 4.1059; for r = 25, equals 30.3378; and, for r = 100, equals 678,234, which is a remarkable savings, but bear in mind that the matrices then have size around 10^30, which is astronomical!

(e) One way is to use block matrix multiplication, in the trivial form

[ A O; O I ] [ B O; O I ] = [ C O; O I ], where C = A B.

Thus, choosing I to be an identity matrix of the appropriate size, the overall size of the block matrices can be arranged to be a power of 2, and then the reduction algorithm can proceed on the larger matrices. Another approach, trickier to program, is to break the matrix up into blocks of nearly equal size, since the Strassen formulas do not, in fact, require the blocks to have the same size, and even apply to rectangular matrices whose rectangular blocks are of compatible sizes.
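The seven-product identities in part (a) are easy to verify numerically. A sketch treating the blocks as scalars (the D's are exactly as defined in the exercise; for matrix blocks the algebra is identical since no commutativity is used):

```python
import random

def strassen_2x2(A1, A2, A3, A4, B1, B2, B3, B4):
    """Return C1..C4 of the 2 x 2 block product via the seven products D1..D7."""
    D1 = (A1 + A4) * (B1 + B4)
    D2 = (A1 - A3) * (B1 + B2)
    D3 = (A2 - A4) * (B3 + B4)
    D4 = (A1 + A2) * B4
    D5 = (A3 + A4) * B1
    D6 = A4 * (B1 - B3)
    D7 = A1 * (B2 - B4)
    return (D1 + D3 - D4 - D6, D4 + D7, D5 - D6, D1 - D2 - D5 + D7)

for _ in range(100):
    a = [random.randint(-9, 9) for _ in range(4)]
    b = [random.randint(-9, 9) for _ in range(4)]
    A1, A2, A3, A4 = a
    B1, B2, B3, B4 = b
    assert strassen_2x2(*a, *b) == (A1*B1 + A2*B3, A1*B2 + A2*B4,
                                    A3*B1 + A4*B3, A3*B2 + A4*B4)
```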

    1.7.9.

    (a)

    0B@ 1 2 01 1 10 2 3

    1CA =0B@ 1 0 01 1 0

    0 2 1

    1CA0B@ 1 2 00 1 1

    0 0 5

    1CA, x =0B@23

    0

    1CA;

    (b)

    0BBB@1 1 0 0

    1 2 1 00 1 4 10 0 1 6

    1CCCA =0BBB@

    1 0 0 01 1 0 0

    0 1 1 00 0 1 1

    1CCCA0BBB@

    1 1 0 00 1 1 00 0 5 10 0 0 7

    1CCCA, x =0BBB@

    1012

    1CCCA;


    (c)

    0BBB@1 2 0 0

    1 3 0 00 1 4 10 0 1 1

    1CCCA =0BBB@

    1 0 0 01 1 0 0

    0 1 1 00 0 14 1

    1CCCA0BBB@

    1 2 0 00 1 0 00 0 4 10 0 0 54

    1CCCA, x =0BBBB@4

    2

    25 35

    1CCCCA.

1.7.10.
(a) [ 2 -1 0; -1 2 -1; 0 -1 2 ] = [ 1 0 0; -1/2 1 0; 0 -2/3 1 ] [ 2 -1 0; 0 3/2 -1; 0 0 4/3 ],

[ 2 -1 0 0; -1 2 -1 0; 0 -1 2 -1; 0 0 -1 2 ] = [ 1 0 0 0; -1/2 1 0 0; 0 -2/3 1 0; 0 0 -3/4 1 ] [ 2 -1 0 0; 0 3/2 -1 0; 0 0 4/3 -1; 0 0 0 5/4 ],

[ 2 -1 0 0 0; -1 2 -1 0 0; 0 -1 2 -1 0; 0 0 -1 2 -1; 0 0 0 -1 2 ] = [ 1 0 0 0 0; -1/2 1 0 0 0; 0 -2/3 1 0 0; 0 0 -3/4 1 0; 0 0 0 -4/5 1 ] [ 2 -1 0 0 0; 0 3/2 -1 0 0; 0 0 4/3 -1 0; 0 0 0 5/4 -1; 0 0 0 0 6/5 ];

(b) ( 3/2, 2, 3/2 )^T, ( 2, 3, 3, 2 )^T, ( 5/2, 4, 9/2, 4, 5/2 )^T.
(c) The subdiagonal entries in L are l_{i+1,i} = -i/(i+1) -> -1, while the diagonal entries in U are u_{ii} = (i+1)/i -> 1.

1.7.11.

(a) [ 2 1 0; -1 2 1; 0 -1 2 ] = [ 1 0 0; -1/2 1 0; 0 -2/5 1 ] [ 2 1 0; 0 5/2 1; 0 0 12/5 ],

[ 2 1 0 0; -1 2 1 0; 0 -1 2 1; 0 0 -1 2 ] = [ 1 0 0 0; -1/2 1 0 0; 0 -2/5 1 0; 0 0 -5/12 1 ] [ 2 1 0 0; 0 5/2 1 0; 0 0 12/5 1; 0 0 0 29/12 ],

[ 2 1 0 0 0; -1 2 1 0 0; 0 -1 2 1 0; 0 0 -1 2 1; 0 0 0 -1 2 ] = [ 1 0 0 0 0; -1/2 1 0 0 0; 0 -2/5 1 0 0; 0 0 -5/12 1 0; 0 0 0 -12/29 1 ] [ 2 1 0 0 0; 0 5/2 1 0 0; 0 0 12/5 1 0; 0 0 0 29/12 1; 0 0 0 0 70/29 ];

(b) ( 1/3, 1/3, 2/3 )^T, ( 8/29, 13/29, 11/29, 20/29 )^T, ( 3/10, 2/5, 1/2, 2/5, 7/10 )^T.
(c) The subdiagonal entries in L approach 1 - sqrt(2) = -.414214, and the diagonal entries in U approach 1 + sqrt(2) = 2.414214.
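The limiting behavior of the pivots in 1.7.10 and 1.7.11 follows from the scalar recurrences the diagonal entries satisfy; a sketch, assuming (as the diagonal entries listed above indicate) that the 1.7.10 pivots obey u_k = 2 - 1/u_{k-1} while the 1.7.11 pivots obey u_k = 2 + 1/u_{k-1}:

```python
import math

def pivots(first, step, n):
    """Generate the n diagonal pivots u_1, u_2, ... of a tridiagonal
    elimination whose pivots satisfy u_k = step(u_{k-1})."""
    u = first
    out = [u]
    for _ in range(n - 1):
        u = step(u)
        out.append(u)
    return out

# Exercise 1.7.10: u_k = 2 - 1/u_{k-1} gives u_k = (k+1)/k -> 1 (slowly).
p10 = pivots(2.0, lambda u: 2 - 1/u, 50)
assert abs(p10[4] - 6/5) < 1e-12        # fifth pivot is 6/5
assert abs(p10[-1] - 1) < 0.05

# Exercise 1.7.11: u_k = 2 + 1/u_{k-1} -> 1 + sqrt(2) = 2.414214 (fast).
p11 = pivots(2.0, lambda u: 2 + 1/u, 50)
assert abs(p11[-1] - (1 + math.sqrt(2))) < 1e-12
```

The second fixed point is attractive (the iteration contracts by a factor of about 1/(1+sqrt(2))^2 per step), which is why the 1.7.11 pivots settle down after only a few rows.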

1.7.12. Both false. For example, with the tridiagonal matrix B = [ 1 -1 0 0; -1 1 -1 0; 0 -1 1 -1; 0 0 -1 1 ],

B^2 = [ 2 -2 1 0; -2 3 -2 1; 1 -2 3 -2; 0 1 -2 2 ], B^{-1} = [ 1 0 -1 -1; 0 0 -1 -1; -1 -1 0 0; -1 -1 0 1 ],

neither of which is tridiagonal.

1.7.13.
[ 4 1 1; 1 4 1; 1 1 4 ] = [ 1 0 0; 1/4 1 0; 1/4 1/5 1 ] [ 4 1 1; 0 15/4 3/4; 0 0 18/5 ],


[ 4 1 0 1; 1 4 1 0; 0 1 4 1; 1 0 1 4 ] = [ 1 0 0 0; 1/4 1 0 0; 0 4/15 1 0; 1/4 -1/15 2/7 1 ] [ 4 1 0 1; 0 15/4 1 -1/4; 0 0 56/15 16/15; 0 0 0 24/7 ],

[ 4 1 0 0 1; 1 4 1 0 0; 0 1 4 1 0; 0 0 1 4 1; 1 0 0 1 4 ] = [ 1 0 0 0 0; 1/4 1 0 0 0; 0 4/15 1 0 0; 0 0 15/56 1 0; 1/4 -1/15 1/56 5/19 1 ] [ 4 1 0 0 1; 0 15/4 1 0 -1/4; 0 0 56/15 1 1/15; 0 0 0 209/56 55/56; 0 0 0 0 66/19 ].

For the 6 x 6 version we have

[ 4 1 0 0 0 1; 1 4 1 0 0 0; 0 1 4 1 0 0; 0 0 1 4 1 0; 0 0 0 1 4 1; 1 0 0 0 1 4 ] = [ 1 0 0 0 0 0; 1/4 1 0 0 0 0; 0 4/15 1 0 0 0; 0 0 15/56 1 0 0; 0 0 0 56/209 1 0; 1/4 -1/15 1/56 -1/209 7/26 1 ] [ 4 1 0 0 0 1; 0 15/4 1 0 0 -1/4; 0 0 56/15 1 0 1/15; 0 0 0 209/56 1 -1/56; 0 0 0 0 780/209 210/209; 0 0 0 0 0 45/13 ].

The pattern is that the only nonzero entries of L lie on the diagonal, the subdiagonal, or in the last row, while the only nonzero entries of U lie on its diagonal, superdiagonal, or in its last column.

1.7.14.
(a) Assuming regularity, the only row operations required to reduce A to upper triangular form U are, for each j = 1, ..., n-1, to add multiples of the jth row to the (j+1)st and the nth rows. Thus, the only nonzero entries below the diagonal in L are at positions (j+1, j) and (n, j). Moreover, these row operations only affect zero entries in the last column, leading to the final form of U.

(b) [ 1 1 1; 1 2 -1; 1 -1 3 ] = [ 1 0 0; 1 1 0; 1 -2 1 ] [ 1 1 1; 0 1 -2; 0 0 -2 ],

[ 1 1 0 0 1; 1 2 1 0 0; 0 1 3 1 0; 0 0 1 4 -1; 1 0 0 -1 5 ] = [ 1 0 0 0 0; 1 1 0 0 0; 0 1 1 0 0; 0 0 1/2 1 0; 1 -1 1/2 -3/7 1 ] [ 1 1 0 0 1; 0 1 1 0 -1; 0 0 2 1 1; 0 0 0 7/2 -3/2; 0 0 0 0 13/7 ],

[ 1 1 0 0 0 1; 1 2 1 0 0 0; 0 1 3 1 0 0; 0 0 1 4 1 0; 0 0 0 1 5 1; 1 0 0 0 1 6 ] = [ 1 0 0 0 0 0; 1 1 0 0 0 0; 0 1 1 0 0 0; 0 0 1/2 1 0 0; 0 0 0 2/7 1 0; 1 -1 1/2 -1/7 8/33 1 ] [ 1 1 0 0 0 1; 0 1 1 0 0 -1; 0 0 2 1 0 1; 0 0 0 7/2 1 -1/2; 0 0 0 0 33/7 8/7; 0 0 0 0 0 104/33 ].

The 4 x 4 case is a singular matrix.


1.7.15.
(a) If the matrix A is tridiagonal, then the only nonzero elements in its ith row are a_{i,i-1}, a_{ii}, a_{i,i+1}. So a_{ij} = 0 whenever |i - j| > 1.

(b) For example, [ 2 1 1 0 0 0; 1 2 1 1 0 0; 1 1 2 1 1 0; 0 1 1 2 1 1; 0 0 1 1 2 1; 0 0 0 1 1 2 ] has band width 2, while [ 2 1 1 1 0 0; 1 2 1 1 1 0; 1 1 2 1 1 1; 1 1 1 2 1 1; 0 1 1 1 2 1; 0 0 1 1 1 2 ] has band width 3.

(c) U results from applying row operations of type #1 to A, so the zero entries of A outside the band produce corresponding zero entries in U. On the other hand, if A has band width k, then for each column of A we need to perform no more than k row replacements to obtain zeros below the diagonal. Thus L, which reflects these row replacements, has at most k nonzero entries below the diagonal in each column.

(d) [ 2 1 1 0 0 0; 1 2 1 1 0 0; 1 1 2 1 1 0; 0 1 1 2 1 1; 0 0 1 1 2 1; 0 0 0 1 1 2 ] = [ 1 0 0 0 0 0; 1/2 1 0 0 0 0; 1/2 1/3 1 0 0 0; 0 2/3 1/2 1 0 0; 0 0 3/4 1/2 1 0; 0 0 0 1 1/2 1 ] [ 2 1 1 0 0 0; 0 3/2 1/2 1 0 0; 0 0 4/3 2/3 1 0; 0 0 0 1 1/2 1; 0 0 0 0 1 1/2; 0 0 0 0 0 3/4 ],

[ 2 1 1 1 0 0; 1 2 1 1 1 0; 1 1 2 1 1 1; 1 1 1 2 1 1; 0 1 1 1 2 1; 0 0 1 1 1 2 ] = [ 1 0 0 0 0 0; 1/2 1 0 0 0 0; 1/2 1/3 1 0 0 0; 1/2 1/3 1/4 1 0 0; 0 2/3 1/2 2/5 1 0; 0 0 3/4 3/5 1/4 1 ] [ 2 1 1 1 0 0; 0 3/2 1/2 1/2 1 0; 0 0 4/3 1/3 2/3 1; 0 0 0 5/4 1/2 3/4; 0 0 0 0 4/5 1/5; 0 0 0 0 0 3/4 ].

(e) ( 1/3, 1/3, 0, 0, 1/3, 1/3 )^T, ( 2/3, 1/3, -1/3, -1/3, 1/3, 2/3 )^T.

(f) For A we still need to compute k multipliers at each stage and update at most 2k^2 entries, so we have less than (n-1)(k + 2k^2) multiplications and (n-1) 2k^2 additions. For the right-hand side we have to update at most k entries at each stage, so we have less than (n-1)k multiplications and (n-1)k additions. So we can get by with less than a total of (n-1)(2k + 2k^2) multiplications and (n-1)(k + 2k^2) additions.
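For the tridiagonal case k = 1, the linear operation count is easy to realize in code. A sketch of the banded elimination, storing only the three diagonals (function and argument names are ad hoc):

```python
def solve_tridiagonal(sub, diag, sup, b):
    """Solve a regular tridiagonal system in O(n) operations.
    sub[i] = a_{i+2,i+1}, diag[i] = a_{i+1,i+1}, sup[i] = a_{i+1,i+2}."""
    n = len(diag)
    d = list(diag)
    rhs = list(b)
    for i in range(n - 1):          # forward elimination: per row, one
        l = sub[i] / d[i]           # multiplier, one diagonal update,
        d[i + 1] -= l * sup[i]      # one right-hand-side update
        rhs[i + 1] -= l * rhs[i]
    x = [0.0] * n                   # back substitution, also O(n)
    x[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / d[i]
    return x

# tridiag(-1, 2, -1) with all-ones right-hand side, n = 3: x = (3/2, 2, 3/2)
x = solve_tridiagonal([-1, -1], [2, 2, 2], [-1, -1], [1, 1, 1])
assert max(abs(a - b) for a, b in zip(x, [1.5, 2.0, 1.5])) < 1e-12
```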

(g) The inverse of a banded matrix is not necessarily banded. For example, the inverse of [ 2 -1 0; -1 2 -1; 0 -1 2 ] is [ 3/4 1/2 1/4; 1/2 1 1/2; 1/4 1/2 3/4 ].

1.7.16. (a) ( 8, 4 )^T; (b) ( 10, 4.1 )^T; (c) ( 8.1, 4.1 )^T. (d) Partial pivoting reduces the effect of round off errors and results in a significantly more accurate answer.

1.7.17. (a) x = 11/7 = 1.57143, y = 1/7 = .142857, z = 1/7 = .142857; (b) x = 3.357, y = .5, z = .1429; (c) x = 1.572, y = .1429, z = .1429.

1.7.18. (a) x = 2, y = 2, z = 3; (b) x = 7.3, y = 3.3, z = 2.9; (c) x = 1.9, y = 2., z = 2.9; (d) partial pivoting works markedly better, especially for the value of x.
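The phenomenon behind 1.7.16-1.7.18 can be reproduced with the classic tiny-pivot example (a generic illustration, not the specific systems of these exercises):

```python
def solve2(a11, a12, a21, a22, b1, b2, pivot):
    """Solve a 2 x 2 system by elimination, optionally with partial pivoting."""
    if pivot and abs(a21) > abs(a11):
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    l = a21 / a11                   # multiplier
    a22 -= l * a12                  # eliminate the (2,1) entry
    b2 -= l * b1
    y = b2 / a22                    # Back Substitution
    x = (b1 - a12 * y) / a11
    return x, y

# 1e-20 x + y = 1,  x + y = 2: the true solution is x ~ 1, y ~ 1.
naive = solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=False)
piv   = solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0, pivot=True)
assert abs(piv[0] - 1.0) < 1e-9 and abs(piv[1] - 1.0) < 1e-9
assert abs(naive[0] - 1.0) > 0.5   # the tiny pivot destroys x entirely
```

Without pivoting, the enormous multiplier 1e20 swamps the stored coefficients and x comes out completely wrong; swapping rows first keeps every multiplier at most 1 in magnitude.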


    1.7.19. (a) x = 220., y = 26, z = .91; (b) x = 190., y = 24, z = .84; (c) x = 210,y = 26, z = 1. (d) The exact solution is x = 213.658, y = 25.6537, z = .858586.Full pivoting is the most accurate. Interestingly, partial pivoting fares a little worse thanregular elimination.

1.7.20. (a) ( 6/5, 13/5, 9/5 )^T = ( 1.2, 2.6, 1.8 )^T; (b) ( 1/4, 5/4, 1/8, 1/4 )^T; (c) ( 0, 1, 10 )^T; (d) ( 32/35, 19/35, 12/35, 76/35 )^T = ( .9143, .5429, .3429, 2.1714 )^T.

1.7.21. (a) ( 1/13, 8/13 )^T = ( .0769, .6154 )^T; (b) ( 4/5, 8/15, 19/15 )^T = ( .8000, .5333, 1.2667 )^T; (c) ( 2/121, 38/121, 59/242, 56/121 )^T = ( .0165, .3141, .2438, .4628 )^T; (d) ( .732, .002, .508 )^T.

    1.7.22. The results are the same.

1.7.23.

Gaussian Elimination With Full Pivoting

start
    for i = 1 to n
        set rho(i) = sigma(i) = i
    next i
    for j = 1 to n
        if m_{rho(i), sigma(k)} = 0 for all i >= j and k >= j, stop; print "A is singular"
        choose i >= j and k >= j such that | m_{rho(i), sigma(k)} | is maximal
        interchange rho(i) <-> rho(j)
        interchange sigma(k) <-> sigma(j)
        for i = j + 1 to n
            set z = m_{rho(i), sigma(j)} / m_{rho(j), sigma(j)}
            set m_{rho(i), sigma(j)} = 0
            for k = j + 1 to n + 1
                set m_{rho(i), sigma(k)} = m_{rho(i), sigma(k)} - z m_{rho(j), sigma(k)}
            next k
        next i
    next j
end
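A direct transcription into Python (a sketch: rho holds row pointers and sig column pointers, the augmented matrix is reduced in place, and a back-substitution phase is added at the end; names are ad hoc):

```python
def solve_full_pivoting(A, b):
    """Solve A x = b by Gaussian Elimination with full pivoting."""
    n = len(A)
    m = [row[:] + [float(bv)] for row, bv in zip(A, b)]
    rho = list(range(n))      # row pointers
    sig = list(range(n))      # column pointers
    for j in range(n):
        # choose i, k >= j maximizing |m[rho[i]][sig[k]]|
        i, k = max(((i, k) for i in range(j, n) for k in range(j, n)),
                   key=lambda ik: abs(m[rho[ik[0]]][sig[ik[1]]]))
        if m[rho[i]][sig[k]] == 0:
            raise ValueError("A is singular")
        rho[i], rho[j] = rho[j], rho[i]
        sig[k], sig[j] = sig[j], sig[k]
        for i in range(j + 1, n):
            z = m[rho[i]][sig[j]] / m[rho[j]][sig[j]]
            m[rho[i]][sig[j]] = 0.0
            for k in range(j + 1, n + 1):
                idx = sig[k] if k < n else n   # column n+1 holds b, unpermuted
                m[rho[i]][idx] -= z * m[rho[j]][idx]
    x = [0.0] * n                              # Back Substitution
    for j in range(n - 1, -1, -1):
        s = sum(m[rho[j]][sig[k]] * x[sig[k]] for k in range(j + 1, n))
        x[sig[j]] = (m[rho[j]][n] - s) / m[rho[j]][sig[j]]
    return x

x = solve_full_pivoting([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])
assert max(abs(a - b) for a, b in zip(x, [1.0, 1.0])) < 1e-12
```

Because columns are permuted, the unknowns come back in sigma-order; the final loop undoes that permutation when writing into x.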


1.7.24. We let x in R^n be generated using a random number generator, compute b = H_n x, and then solve H_n y = b for y. The error is e = x - y, and we use e* = max |e_i| as a measure of the overall error. Using Matlab, running Gaussian Elimination with pivoting:

    n:   10          20        50         100
    e*:  .00097711   35.5111   318.3845   1771.1

    Using Mathematica, running regular Gaussian Elimination:

    n:   10           20        50        100
    e*:  .000309257   19.8964   160.325   404.625

In Mathematica, using the built-in LinearSolve function, which is more accurate since it uses a more sophisticated solution method when confronted with an ill-posed linear system:

    n:   10          20        50       100
    e*:  .00035996   .620536   .65328   .516865

(Of course, the errors vary a bit each time the program is run, due to the randomness of the choice of x.)
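The experiment is easy to reproduce with plain Python lists; a sketch using Gaussian Elimination with partial pivoting (exact error values will differ run to run, as noted above):

```python
import random

def hilbert(n):
    """The n x n Hilbert matrix, H[i][j] = 1/(i + j + 1) (0-based)."""
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

def solve(A, b):
    """Gaussian Elimination with partial pivoting, then Back Substitution."""
    n = len(A)
    m = [A[i][:] + [b[i]] for i in range(n)]
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(m[i][j]))
        m[j], m[p] = m[p], m[j]
        for i in range(j + 1, n):
            z = m[i][j] / m[j][j]
            for k in range(j, n + 1):
                m[i][k] -= z * m[j][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][k] * x[k] for k in range(i + 1, n))) / m[i][i]
    return x

def experiment(n):
    x = [random.uniform(-1, 1) for _ in range(n)]
    H = hilbert(n)
    b = [sum(H[i][k] * x[k] for k in range(n)) for i in range(n)]
    y = solve(H, b)
    return max(abs(xi - yi) for xi, yi in zip(x, y))

assert experiment(4) < 1e-8    # small n: well within double precision
```

Calling experiment(10), experiment(20), and so on reproduces the qualitative blow-up shown in the tables, since the condition number of H_n grows exponentially with n.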

1.7.25.
(a) H_3^{-1} = [ 9 -36 30; -36 192 -180; 30 -180 180 ],

H_4^{-1} = [ 16 -120 240 -140; -120 1200 -2700 1680; 240 -2700 6480 -4200; -140 1680 -4200 2800 ],

H_5^{-1} = [ 25 -300 1050 -1400 630; -300 4800 -18900 26880 -12600; 1050 -18900 79380 -117600 56700; -1400 26880 -117600 179200 -88200; 630 -12600 56700 -88200 44100 ].

(b) The same results are obtained when using floating point arithmetic in either Mathematica or Matlab.
(c) The product K_10 H_10, where K_10 is the computed inverse, is fairly close to the 10 x 10 identity matrix; the largest error is .0000801892 in Mathematica and .000036472 in Matlab. As for K_20 H_20, it is nowhere close to the identity matrix: in Mathematica the diagonal entries range from 1.34937 to 3.03755, while the largest (in absolute value) off-diagonal entry is 4.3505; in Matlab the diagonal entries range from .4918 to 3.9942, while the largest (in absolute value) off-diagonal entry is 5.1994.

1.8.1.
(a) Unique solution: ( 1/2, 3/4 )^T;
(b) infinitely many solutions: ( 1 - 2z, -1 + z, z )^T, where z is arbitrary;
(c) no solutions;
(d) unique solution: ( 1, 2, 1 )^T;
(e) infinitely many solutions: ( 5 - 2z, 1, z, 0 )^T, where z is arbitrary;
(f) infinitely many solutions: ( 1, 0, 1, w )^T, where w is arbitrary;
(g) unique solution: ( 2, 1, 3, 1 )^T.

    33

  • 7/29/2019 Applied Linear Algebra Solutions Chapter 1

    34/41

1.8.2. (a) Incompatible; (b) incompatible; (c) ( 1, 0 )^T; (d) ( 1 + 3x_2 - 2x_3, x_2, x_3 )^T, where x_2 and x_3 are arbitrary; (e) ( 15/2, 23, 10 )^T; (f) ( 5 - 3x_4, 19 - 4x_4, 6 - 2x_4, x_4 )^T, where x_4 is arbitrary; (g) incompatible.

    1.8.3. The planes intersect at (1, 0, 0).

1.8.4. (i) a != b and b != 0; (ii) b = 0 and a != 2; (iii) a = b != 0, or a = 2 and b = 0.

1.8.5. (a) b = 2, c = 1, or b = 1/2, c = 2; (b) b = 2, 1/2; (c) b = 2, c = 1, or b = 1/2, c = 2.

1.8.6.

(a) ( 1 + i - (1/2)(1 + i)y, y, i )^T, where y is arbitrary;
(b) ( 4 i z + 3 + i, i z + 2 - i, z )^T, where z is arbitrary;
(c) ( 3 + 2 i, 1 + 2 i, 3 i )^T;
(d) ( z - (3 + 4 i)w, z - (1 + i)w, z, w )^T, where z and w are arbitrary.

1.8.7. (a) 2, (b) 1, (c) 2, (d) 3, (e) 1, (f) 1, (g) 2, (h) 2, (i) 3.

    1.8.8.

    (a) 1 1

    1 2! = 1 0

    1 1!1 1

    0 3!,(b)

    2 1 3

    2 1 3!

    =

    1 0

    1 1!

    2 1 30 0 0

    !,

    (c)

    0B@ 1 1 11 1 21 1 0

    1CA =0B@ 1 0 01 1 01 1 1

    1CA0B@ 1 1 10 0 1

    0 0 0

    1CA,(d)

    0B@ 1 0 00 0 10 1 0

    1CA0B@ 2 1 01 1 1

    2 1 1

    1CA =0B@ 1 0 012 1 0

    1 0 1

    1CA0B@ 2 1 00 32 1

    0 0 1

    1CA,(e)

    0B@

    30

    2

    1CA

    =

    0B@

    1 0 00 1 0

    23 0 1

    1CA

    0B@

    300

    1CA

    ,

    (f) ( 0 1 2 5 ) = ( 1 )( 0 1 2 5 ),

    (g)

    0BBB@0 1 0 01 0 0 00 0 1 00 0 0 1

    1CCCA0BBB@

    0 34 11 2

    1 5

    1CCCA =0BBB@

    1 0 0 00 1 0 014 34 1 0

    14 74 0 1

    1CCCA0BBB@

    1 20 70 00 0

    1CCCA,

    (h)

    0BBBBB@1 1 2 12 1 1 01 2 3 14 1 3 20 3 5 2

    1CCCCCA =0BBBBB@

    1 0 0 0 02 1 0 0 01 1 1 0 04 1 0 1 00 1 0 0 1

    1CCCCCA

    0BBBBB@1 1 2 10 3 5 20 0 0 00 0 0 00 0 0 0

    1CCCCCA,

    (i)

    0B@

    0 1 00 0 1

    1 0 0

    1CA0B@

    0 0 0 3 11 2

    3 1

    2

    2 4 2 1 2

    1CA =

    0B@

    1 0 02 1 0

    0 0 1

    1CA0B@

    1 2 3 1 20 0 4

    1 2

    0 0 0 3 1

    1CA.

1.8.9. (a) x = 1, y = 0, z = 0. (b) x + y = 1, y + z = 0, x - z = 1. (c) x + y = 1, y + z = 0, x - z = 0.

1.8.10. (a) [ 1 0 0; 0 1 0 ], (b) [ 1 0 0; 0 1 0; 0 0 0 ], (c) [ 1 0; 0 1; 0 0 ], (d) [ 1 0 0; 0 1 0; 0 0 1 ].


1.8.11.
(a) x^2 + y^2 = 1, x^2 - y^2 = 2;
(b) y = x^2, x - y + 2 = 0; solutions: x = 2, y = 4 and x = -1, y = 1;
(c) y = x^3, x - y = 0; solutions: x = y = 0, x = y = 1, x = y = -1;
(d) y = sin x, y = 0; solutions: x = k pi, y = 0, for k any integer.

1.8.12. That variable does not appear anywhere in the system, and is automatically free (although it doesn't enter into any of the formulas, and so is, in a sense, irrelevant).

1.8.13. True. For example, take a matrix in row echelon form with r pivots, e.g., the matrix A with a_{ii} = 1 for i = 1, ..., r, and all other entries equal to 0.

    1.8.14. Both false. The zero matrix has no pivots, and hence has rank 0.

1.8.15.
(a) Each row of A = v w^T is a scalar multiple, namely v_i w^T, of the row vector w^T. If necessary, we use a row interchange to ensure that the first row is non-zero. We then subtract the appropriate scalar multiple of the first row from all the others. This makes all rows below the first zero, and so the resulting matrix, which is in row echelon form, has a single non-zero row, and hence a single pivot, proving that A has rank 1.
(b) (i) [ 1 2; 3 6 ], (ii) [ 8 4; 0 0; 4 2 ], (iii) [ 2 6 2; 3 9 3 ].
(c) The row echelon form of A must have a single nonzero row, say w^T. Reversing the elementary row operations that led to the row echelon form, at each step we either interchange rows or add multiples of one row to another. Every row of every matrix obtained in such a fashion must be some scalar multiple of w^T, and hence the original matrix A = v w^T, where the entries v_i of the vector v are the indicated scalar multiples.
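Part (a) is easy to confirm computationally; a sketch, with the rank computed by row reduction with partial pivoting (helper names are ad hoc):

```python
def rank(A, tol=1e-10):
    """Rank via reduction to row echelon form with partial pivoting."""
    m = [row[:] for row in A]
    rows, cols = len(m), len(m[0])
    r = 0                                    # number of pivots found so far
    for c in range(cols):
        p = max(range(r, rows), key=lambda i: abs(m[i][c]), default=None)
        if p is None or abs(m[p][c]) < tol:
            continue                         # no pivot in this column
        m[r], m[p] = m[p], m[r]
        for i in range(r + 1, rows):
            z = m[i][c] / m[r][c]
            for k in range(c, cols):
                m[i][k] -= z * m[r][k]
        r += 1
        if r == rows:
            break
    return r

v, w = [1.0, -2.0, 0.5], [3.0, 1.0, -1.0, 2.0]
A = [[vi * wj for wj in w] for vi in v]      # the outer product v w^T
assert rank(A) == 1
```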

    1.8.16. 1.

    1.8.17. 2.

1.8.18. Example: A = [ 1 0; 0 0 ], B = [ 0 1; 0 0 ], so A B = [ 0 1; 0 0 ] has rank 1, but B A = [ 0 0; 0 0 ] has rank 0.

1.8.19.
(a) Under elementary row operations, the reduced form of C will be [ U Z ], where U is the row echelon form of A. Thus, C has at least r pivots, namely the pivots in A. Examples: rank [ 1 2 1; 2 4 2 ] = 1 = rank [ 1 2; 2 4 ], while rank [ 1 2 1; 2 4 3 ] = 2 > 1 = rank [ 1 2; 2 4 ].
(b) Applying elementary row operations, we can reduce E to [ U; W ] (U stacked above W), where U is the row echelon form of A. If we can then use elementary row operations of type #1 to eliminate all entries of W, then the row echelon form of E has the same number of pivots as A, and so rank E = rank A. Otherwise