Instructor’s Manual with Sample Tests for Elementary Linear Algebra Fourth Edition Stephen Andrilli and David Hecker


Page 1: Solutions Elementary Linear Algebra Andrilli Hecker - Answers (4th Edition)

Instructor’s Manual with Sample Tests for

Elementary Linear Algebra

Fourth Edition

Stephen Andrilli and David Hecker


Dedication

To all the instructors who have used the various editions of our book over the years


Table of Contents

Preface ........ iii
Answers to Exercises ........ 1
    Chapter 1 Answer Key ........ 1
    Chapter 2 Answer Key ........ 21
    Chapter 3 Answer Key ........ 37
    Chapter 4 Answer Key ........ 58
    Chapter 5 Answer Key ........ 94
    Chapter 6 Answer Key ........ 127
    Chapter 7 Answer Key ........ 139
    Chapter 8 Answer Key ........ 156
    Chapter 9 Answer Key ........ 186
    Appendix B Answer Key ........ 203
    Appendix C Answer Key ........ 204
Chapter Tests ........ 206
    Chapter 1 – Chapter Tests ........ 207
    Chapter 2 – Chapter Tests ........ 211
    Chapter 3 – Chapter Tests ........ 217
    Chapter 4 – Chapter Tests ........ 221
    Chapter 5 – Chapter Tests ........ 224
    Chapter 6 – Chapter Tests ........ 230
    Chapter 7 – Chapter Tests ........ 233
Answers to Chapter Tests ........ 239
    Chapter 1 – Answers for Chapter Tests ........ 239
    Chapter 2 – Answers for Chapter Tests ........ 242
    Chapter 3 – Answers for Chapter Tests ........ 244
    Chapter 4 – Answers for Chapter Tests ........ 247
    Chapter 5 – Answers for Chapter Tests ........ 251
    Chapter 6 – Answers for Chapter Tests ........ 257
    Chapter 7 – Answers for Chapter Tests ........ 261


Preface

This Instructor’s Manual with Sample Tests is designed to accompany Elementary Linear Algebra, 4th edition, by Stephen Andrilli and David Hecker. This manual contains answers for all the computational exercises in the textbook and detailed solutions for virtually all of the problems that ask students for proofs. The exceptions are typically those exercises that ask students to verify a particular computation, or that ask for a proof for which detailed hints have already been supplied in the textbook. A few proofs that are extremely trivial in nature have also been omitted.

This manual also contains sample Chapter Tests for the material in Chapters 1 through 7, as well as answer keys for these tests. Additional information regarding the textbook, this manual, the Student Solutions Manual, and linear algebra in general can be found at the web site for the textbook, where you found this manual.

Thank you for using our textbook.

Stephen Andrilli
David Hecker
August 2009


Andrilli/Hecker - Answers to Exercises Section 1.1

Answers to Exercises

Chapter 1

Section 1.1

(1) (a) [9, −4], distance = √97
    (b) [−5, 1, −2], distance = √30
    (c) [−1, −1, 2, −3, −4], distance = √31
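Each distance above is just the Euclidean norm of the corresponding answer vector, so the answers can be cross-checked numerically. A minimal plain-Python sketch (helper names `vec_sub` and `norm` are ours, not the textbook's):

```python
import math

def vec_sub(p, q):
    """Component-wise difference q - p: the vector from point p to point q."""
    return [b - a for a, b in zip(p, q)]

def norm(v):
    """Euclidean length of a vector."""
    return math.sqrt(sum(x * x for x in v))

# The answer vectors from Exercise 1; each stated distance is the norm
# of the corresponding answer vector.
assert math.isclose(norm([9, -4]), math.sqrt(97))
assert math.isclose(norm([-5, 1, -2]), math.sqrt(30))
assert math.isclose(norm([-1, -1, 2, -3, -4]), math.sqrt(31))
```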

(2) (a) (3, 4, 2) (see Figure 1)
    (b) (0, 5, 3) (see Figure 2)
    (c) (1, −2, 0) (see Figure 3)
    (d) (3, 0, 0) (see Figure 4)

(3) (a) (7, −13) (b) (−6, 3, 2) (c) (−1, 3, −1, 4, 6)

(4) (a) [16/3, −13/3, 8] (b) [−26/3, 0, −6, 6]

(5) (a) [3/√70, −5/√70, 6/√70]; shorter, since length of original vector is > 1
    (b) [4/√21, 1/√21, 0, −2/√21]; shorter, since length of original vector is > 1
    (c) [0.6, −0.8]; neither, since given vector is a unit vector
    (d) [1/√11, −2/√11, −1/√11, 1/√11, 2/√11]; longer, since length of original vector = √11/5 < 1
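Normalization (dividing a vector by its length) can be sketched in a few lines of plain Python. The vector [3, −5, 6] below is inferred, not quoted from the textbook: normalizing it reproduces part (a)'s answer [3/√70, −5/√70, 6/√70], since ‖[3, −5, 6]‖ = √70.

```python
import math

def normalize(v):
    """Scale v to the unit vector in the same direction."""
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

# Part (c): [0.6, -0.8] is already a unit vector, so normalizing leaves it alone.
u = normalize([0.6, -0.8])
assert all(math.isclose(a, b) for a, b in zip(u, [0.6, -0.8]))

# Inferred original vector for part (a); its normalization matches the answer.
w = normalize([3, -5, 6])
expected = [3 / math.sqrt(70), -5 / math.sqrt(70), 6 / math.sqrt(70)]
assert all(math.isclose(a, b) for a, b in zip(w, expected))

# A normalized vector always has length 1.
assert math.isclose(math.sqrt(sum(x * x for x in w)), 1.0)
```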

(6) (a) Parallel

(b) Parallel

(c) Not parallel

(d) Not parallel

(7) (a) [−6, 12, 15]   (b) [2, 0, −6]
    (c) [−3, 4, 8]   (d) [−5, 1, 1]
    (e) [6, −20, −13]   (f) [−23, 12, 11]

(8) (a) x + y = [1, 1], x − y = [−3, 9], y − x = [3, −9] (see Figure 5)
    (b) x + y = [3, −5], x − y = [17, 1], y − x = [−17, −1] (see Figure 6)
    (c) x + y = [1, 8, −5], x − y = [3, 2, −1], y − x = [−3, −2, 1] (see Figure 7)
    (d) x + y = [−2, −4, 4], x − y = [4, 0, 6], y − x = [−4, 0, −6] (see Figure 8)

(9) With A = (7, −3, 6), B = (11, −5, 3), and C = (10, −7, 8), the length of side AB = length of side AC = √29. The triangle is isosceles, but not equilateral, since the length of side BC is √30.


(10) (a) [10, −10] (b) [−5√3, −15] (c) [0, 0] = 0

(11) See Figures 1.7 and 1.8 in Section 1.1 of the textbook.

(12) See Figure 9. Both represent the same diagonal vector by the associative law of addition for vectors.

Figure 9: x + (y + z)

(13) [0.5 − 0.6√2, −0.4√2] ≈ [−0.3485, −0.5657]

(14) (a) If x = [a₁, a₂] is a unit vector, then cos θ₁ = a₁/‖x‖ = a₁/1 = a₁. Similarly, cos θ₂ = a₂.
     (b) Analogous to part (a).

(15) Net velocity = [−2√2, −3 + 2√2], resultant speed ≈ 2.83 km/hr

(16) Net velocity = [−3√2/2, (8 − 3√2)/2]; speed ≈ 2.83 km/hr

(17) [−8 − √2, −√2]

(18) Acceleration = (1/20)[12/13, −344/65, 392/65] ≈ [0.0462, −0.2646, 0.3015]

(19) Acceleration = [13/2, 0, 4/3]

(20) (180/√14)[−2, 3, 1] ≈ [−96.22, 144.32, 48.11]

(21) a = [−mg/(1 + √3), mg/(1 + √3)]; b = [mg/(1 + √3), mg√3/(1 + √3)]

(22) Let a = [a₁, …, aₙ]. Then ‖a‖² = a₁² + ⋯ + aₙ² is a sum of squares, which must be nonnegative. The square root of a nonnegative real number is a nonnegative real number. The sum can only be zero if every aᵢ = 0.


(23) Let x = [x₁, x₂, …, xₙ]. Then ‖cx‖ = √((cx₁)² + ⋯ + (cxₙ)²) = √(c²(x₁² + ⋯ + xₙ²)) = |c| √(x₁² + ⋯ + xₙ²) = |c| ‖x‖.
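The identity ‖cx‖ = |c| ‖x‖ proved in Exercise 23 is easy to spot-check numerically. A small sketch (the vector and scalars are arbitrary test values, not from the text):

```python
import math

def norm(v):
    """Euclidean length of a vector."""
    return math.sqrt(sum(x * x for x in v))

# Check ||c x|| = |c| ||x|| for several scalars, including zero and a negative.
x = [1.5, -2.0, 4.0]
for c in (-3.0, 0.0, 0.25):
    cx = [c * xi for xi in x]
    assert math.isclose(norm(cx), abs(c) * norm(x))
```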

(24) In each part, suppose that x = [x₁, …, xₙ], y = [y₁, …, yₙ], and z = [z₁, …, zₙ].

     (a) x + (y + z) = [x₁, …, xₙ] + [(y₁ + z₁), …, (yₙ + zₙ)] = [(x₁ + (y₁ + z₁)), …, (xₙ + (yₙ + zₙ))] = [((x₁ + y₁) + z₁), …, ((xₙ + yₙ) + zₙ)] = [(x₁ + y₁), …, (xₙ + yₙ)] + [z₁, …, zₙ] = (x + y) + z

     (b) x + (−x) = [x₁, …, xₙ] + [−x₁, …, −xₙ] = [(x₁ + (−x₁)), …, (xₙ + (−xₙ))] = [0, …, 0]. Also, (−x) + x = x + (−x) (by part (1) of Theorem 1.3) = 0, by the above.

     (c) c(x + y) = c([(x₁ + y₁), …, (xₙ + yₙ)]) = [c(x₁ + y₁), …, c(xₙ + yₙ)] = [(cx₁ + cy₁), …, (cxₙ + cyₙ)] = [cx₁, …, cxₙ] + [cy₁, …, cyₙ] = cx + cy

     (d) (cd)x = [((cd)x₁), …, ((cd)xₙ)] = [(c(dx₁)), …, (c(dxₙ))] = c[(dx₁), …, (dxₙ)] = c(d[x₁, …, xₙ]) = c(dx)

(25) If c = 0, done. Otherwise, (1/c)(cx) = (1/c)(0) ⇒ ((1/c) · c)x = 0 (by part (7) of Theorem 1.3) ⇒ x = 0. Thus either c = 0 or x = 0.

(26) Let x = [x₁, x₂, …, xₙ]. Then [0, 0, …, 0] = c₁x − c₂x = [(c₁ − c₂)x₁, …, (c₁ − c₂)xₙ]. Since c₁ − c₂ ≠ 0, we have x₁ = x₂ = ⋯ = xₙ = 0.

(27) (a) F (b) T (c) T (d) F (e) T (f) F (g) F

Section 1.2

(1) (a) arccos(−27/(5√37)), or about 152.6°, or 2.66 radians
    (b) arccos(13/(√13 √66)), or about 63.7°, or 1.11 radians
    (c) arccos(0), which is 90°, or π/2 radians
    (d) arccos(−1), which is 180°, or π radians (since x = −2y)
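The pattern used throughout this exercise, angle = arccos of the dot product divided by the product of the norms, can be sketched in plain Python. The sample vectors below are ours; part (d)'s fact that x = −2y forces an angle of π is the one claim taken from the text.

```python
import math

def angle_between(x, y):
    """Angle (in radians) between nonzero vectors x and y."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(a * a for a in y))
    c = dot / (nx * ny)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, c)))

# Part (d) pattern: if x = -2y, the vectors are anti-parallel, angle = pi.
y = [1.0, -2.0, 3.0]
x = [-2.0 * t for t in y]
assert math.isclose(angle_between(x, y), math.pi)

# Orthogonal vectors give pi/2, as in part (c).
assert math.isclose(angle_between([1.0, 0.0], [0.0, 1.0]), math.pi / 2)
```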

(2) The vector from A₁ to A₂ is [2, −7, −3], and the vector from A₁ to A₃ is [5, 4, −6]. These vectors are orthogonal.

(3) (a) [a, b] · [−b, a] = a(−b) + ba = 0. Similarly, [a, −b] · [b, a] = 0.
    (b) A vector in the direction of the line ax + by + c = 0 is [b, −a], while a vector in the direction of bx − ay + d = 0 is [a, b].

(4) (a) 12 joules
    (b) 1040√5/9, or 258.4 joules
    (c) −124 joules

(5) Note that y · z is a scalar, so x · (y · z) is not defined.


(6) In all parts, let x = [x₁, x₂, …, xₙ], y = [y₁, y₂, …, yₙ], and z = [z₁, z₂, …, zₙ].

    Part (1): x · y = [x₁, …, xₙ] · [y₁, …, yₙ] = x₁y₁ + ⋯ + xₙyₙ = y₁x₁ + ⋯ + yₙxₙ = [y₁, …, yₙ] · [x₁, …, xₙ] = y · x

    Part (2): x · x = [x₁, …, xₙ] · [x₁, …, xₙ] = x₁x₁ + ⋯ + xₙxₙ = x₁² + ⋯ + xₙ². Now x₁² + ⋯ + xₙ² is a sum of squares, each of which must be nonnegative. Hence, the sum is also nonnegative, and so its square root is defined. Thus, 0 ≤ x · x = x₁² + ⋯ + xₙ² = (√(x₁² + ⋯ + xₙ²))² = ‖x‖².

    Part (3): Suppose x · x = 0. From part (2), 0 = x · x = x₁² + ⋯ + xₙ² ≥ xᵢ², for each i, since the sum of the remaining squares (without xᵢ²) is nonnegative. Hence, 0 ≥ xᵢ² for each i. But xᵢ² ≥ 0, because it is a square. Hence each xᵢ = 0. Therefore, x = 0. Next, suppose x = 0. Then x · x = [0, …, 0] · [0, …, 0] = (0)(0) + ⋯ + (0)(0) = 0.

    Part (4): c(x · y) = c([x₁, …, xₙ] · [y₁, …, yₙ]) = c(x₁y₁ + ⋯ + xₙyₙ) = cx₁y₁ + ⋯ + cxₙyₙ = [cx₁, …, cxₙ] · [y₁, …, yₙ] = (cx) · y. Next, c(x · y) = c(y · x) (by part (1)) = (cy) · x (by the above) = x · (cy), by part (1).

    Part (6): (x + y) · z = ([x₁, …, xₙ] + [y₁, …, yₙ]) · [z₁, …, zₙ] = [x₁ + y₁, …, xₙ + yₙ] · [z₁, …, zₙ] = (x₁ + y₁)z₁ + ⋯ + (xₙ + yₙ)zₙ = (x₁z₁ + ⋯ + xₙzₙ) + (y₁z₁ + ⋯ + yₙzₙ). Also, (x · z) + (y · z) = ([x₁, …, xₙ] · [z₁, …, zₙ]) + ([y₁, …, yₙ] · [z₁, …, zₙ]) = (x₁z₁ + ⋯ + xₙzₙ) + (y₁z₁ + ⋯ + yₙzₙ). Hence, (x + y) · z = (x · z) + (y · z).

(7) No; consider x = [1, 0], y = [0, 1], and z = [1, 1].

(8) A method similar to the first part of the proof of Theorem 1.6 in the textbook yields: ‖a − b‖² ≥ 0 ⇒ (a · a) − (b · a) − (a · b) + (b · b) ≥ 0 ⇒ 1 − 2(a · b) + 1 ≥ 0 ⇒ a · b ≤ 1.

(9) Note that (x + y) · (x − y) = (x · x) + (y · x) − (x · y) − (y · y) = ‖x‖² − ‖y‖².

(10) Note that ‖x + y‖² = ‖x‖² + 2(x · y) + ‖y‖², while ‖x − y‖² = ‖x‖² − 2(x · y) + ‖y‖².

(11) (a) This follows immediately from the solution to Exercise 10 above.
     (b) Note that ‖x + y + z‖² = ‖(x + y) + z‖² = ‖x + y‖² + 2((x + y) · z) + ‖z‖² = ‖x‖² + 2(x · y) + ‖y‖² + 2(x · z) + 2(y · z) + ‖z‖².
     (c) This follows immediately from the solution to Exercise 10 above.


(12) Note that x · (c₁y + c₂z) = c₁(x · y) + c₂(x · z).

(13) cos θ₁ = a/√(a² + b² + c²), cos θ₂ = b/√(a² + b² + c²), and cos θ₃ = c/√(a² + b² + c²)

(14) (a) Length = √3 s
     (b) angle = arccos(√3/3) ≈ 54.7°, or 0.955 radians

(15) (a) [−3/5, −3/10, −3/2]
     (b) [90/17, −54/17, 0] ≈ [5.29, −3.18, 0]
     (c) [1/6, 0, −1/6, 1/3]

(16) (a) 0 (zero vector). The dropped perpendicular travels along b to the common initial point of a and b.
     (b) The vector b. The terminal point of b lies on the line through a, so the dropped perpendicular has length zero.

(17) ai, bj, ck

(18) (a) Parallel: [20/29, −30/29, 40/29], orthogonal: [−194/29, 88/29, 163/29]
     (b) Parallel: [−1/2, 1, −1/2], orthogonal: [−11/2, 1, 15/2]
     (c) Parallel: [60/49, −40/49, 120/49], orthogonal: [−354/49, 138/49, 223/49]
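These decompositions can be reproduced exactly with rational arithmetic. In part (a), the vectors a = [2, −3, 4] and b = [−6, 2, 7] are inferred from the answers (the parallel and orthogonal parts must sum to b, and the parallel part must be a multiple of a); they are not quoted from the textbook. A sketch using `fractions.Fraction` to avoid round-off:

```python
from fractions import Fraction

def proj(a, b):
    """Projection of b onto a: ((a.b)/||a||^2) a, computed exactly."""
    coef = Fraction(sum(ai * bi for ai, bi in zip(a, b)),
                    sum(ai * ai for ai in a))
    return [coef * ai for ai in a]

a = [2, -3, 4]   # inferred, see lead-in
b = [-6, 2, 7]   # inferred, see lead-in
par = proj(a, b)
orth = [bi - pi for bi, pi in zip(b, par)]

assert par == [Fraction(20, 29), Fraction(-30, 29), Fraction(40, 29)]
assert orth == [Fraction(-194, 29), Fraction(88, 29), Fraction(163, 29)]
# The "orthogonal" part really is orthogonal to a.
assert sum(ai * oi for ai, oi in zip(a, orth)) == 0
```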

(19) From the lower triangle in the figure, we have (proj_r x) + (proj_r x − x) = reflection of x (see Figure 10).

Figure 10: Reflection of a vector x through a line.

(20) For the case ‖x‖ ≤ ‖y‖: | ‖x‖ − ‖y‖ | = ‖y‖ − ‖x‖ = ‖x + y + (−x)‖ − ‖x‖ ≤ ‖x + y‖ + ‖−x‖ − ‖x‖ (by the Triangle Inequality) = ‖x + y‖ + ‖x‖ − ‖x‖ = ‖x + y‖. The case ‖x‖ ≥ ‖y‖ is done similarly, with the roles of x and y reversed.

(21) (a) Let c = (x · y)/‖x‖², and w = y − projₓy. (In this way, cx = projₓy.)


(b) Suppose cx + w = dx + v. Then (d − c)x = w − v, and (d − c)x · (w − v) = (w − v) · (w − v) = ‖w − v‖². But (d − c)x · (w − v) = (d − c)(x · w) − (d − c)(x · v) = 0, since v and w are orthogonal to x. Hence ‖w − v‖² = 0 ⇒ w − v = 0 ⇒ w = v. Then, cx = dx ⇒ c = d, from Exercise 26 in Section 1.1, since x is nonzero.

(22) If θ is the angle between x and y, and α is the angle between projₓy and proj_y x, then
     cos α = (projₓy · proj_y x)/(‖projₓy‖ ‖proj_y x‖)
           = ([((x·y)/‖x‖²)x] · [((y·x)/‖y‖²)y]) / ( ((|x·y|)/‖x‖²)‖x‖ · ((|y·x|)/‖y‖²)‖y‖ )
           = ( (x·y)³/(‖x‖²‖y‖²) ) / ( (x·y)²/(‖x‖‖y‖) )
           = (x·y)/(‖x‖‖y‖) = cos θ.
     Hence α = θ.

(23) (a) T (b) T (c) F (d) F (e) T (f) F

Section 1.3

(1) (a) We have ‖4x + 7y‖ ≤ ‖4x‖ + ‖7y‖ = 4‖x‖ + 7‖y‖ ≤ 7‖x‖ + 7‖y‖ = 7(‖x‖ + ‖y‖).
    (b) Let m = max{|c|, |d|}. Then ‖cx − dy‖ ≤ m(‖x‖ + ‖y‖).

(2) (a) Note that 6j − 5 = 3(2(j − 1)) + 1. Let k = 2(j − 1).
    (b) Consider the number 4.

(3) Note that since x ≠ 0 and y ≠ 0, projₓy = 0 iff (x·y)/‖x‖² = 0 iff x·y = 0 iff y·x = 0 iff (y·x)/‖y‖² = 0 iff proj_y x = 0.

(4) If y = cx (for c > 0), then ‖x + y‖ = ‖x + cx‖ = (1 + c)‖x‖ = ‖x‖ + c‖x‖ = ‖x‖ + ‖cx‖ = ‖x‖ + ‖y‖. On the other hand, if ‖x + y‖ = ‖x‖ + ‖y‖, then ‖x + y‖² = (‖x‖ + ‖y‖)². Now ‖x + y‖² = (x + y) · (x + y) = ‖x‖² + 2(x·y) + ‖y‖², while (‖x‖ + ‖y‖)² = ‖x‖² + 2‖x‖‖y‖ + ‖y‖². Hence x·y = ‖x‖‖y‖. By Result 4, y = cx, for some c > 0.

(5) (a) Consider x = [1, 0, 0] and y = [1, 1, 0].
    (b) If x ≠ y, then x·y ≠ ‖x‖².
    (c) Yes

(6) (a) Suppose y ≠ 0. We must show that x is not orthogonal to y. Now ‖x + y‖² = ‖x‖², so ‖x‖² + 2(x·y) + ‖y‖² = ‖x‖². Hence ‖y‖² = −2(x·y). Since y ≠ 0, we have ‖y‖² ≠ 0, and so x·y ≠ 0.
    (b) Suppose x is not a unit vector. We must show that x·y ≠ 1. Now projₓy = x ⇒ ((x·y)/‖x‖²)x = 1x ⇒ (x·y)/‖x‖² = 1 ⇒ x·y = ‖x‖². But then ‖x‖ ≠ 1 ⇒ ‖x‖² ≠ 1 ⇒ x·y ≠ 1.

(7) Use Exercise 11(a) in Section 1.2.


(8) (a) Contrapositive: If x = 0, then x is not a unit vector.
        Converse: If x is nonzero, then x is a unit vector.
        Inverse: If x is not a unit vector, then x = 0.
    (b) (Let x and y be nonzero vectors.)
        Contrapositive: If y ≠ projₓy, then x is not parallel to y.
        Converse: If y = projₓy, then x ∥ y.
        Inverse: If x is not parallel to y, then y ≠ projₓy.
    (c) (Let x, y be nonzero vectors.)
        Contrapositive: If proj_y x ≠ 0, then projₓy ≠ 0.
        Converse: If proj_y x = 0, then projₓy = 0.
        Inverse: If projₓy ≠ 0, then proj_y x ≠ 0.

(9) (a) Converse: Let x and y be nonzero vectors in Rⁿ. If ‖x + y‖ > ‖y‖, then x·y ≥ 0.
    (b) Let x = [2, −1] and y = [0, 2].

(10) (a) Converse: Let x, y, and z be vectors in Rⁿ. If y = z, then x·y = x·z. The converse is obviously true, but the original statement is false in general, with counterexample x = [1, 1], y = [1, −1], and z = [−1, 1].
     (b) Converse: Let x and y be vectors in Rⁿ. If ‖x + y‖ ≥ ‖y‖, then x·y = 0. The original statement is true, but the converse is false in general. Proof of the original statement follows from ‖x + y‖² = (x + y) · (x + y) = ‖x‖² + 2(x·y) + ‖y‖² = ‖x‖² + ‖y‖² ≥ ‖y‖². Counterexample to converse: let x = [1, 0], y = [1, 1].
     (c) Converse: Let x, y be vectors in Rⁿ, with n > 1. If x = 0 or y = 0, then x·y = 0. The converse is obviously true, but the original statement is false in general, with counterexample x = [1, −1] and y = [1, 1].

(11) Suppose x ⊥ y and n is odd. Then x·y = 0. Now x·y = Σᵢ₌₁ⁿ xᵢyᵢ. But each product xᵢyᵢ equals either 1 or −1. If exactly k of these products equal 1, then x·y = k − (n − k) = −n + 2k. Hence −n + 2k = 0, and so n = 2k, contradicting n odd.

(12) Suppose that the vectors [1, a], [1, b], and [z₁, z₂] as constructed in the hint are all mutually orthogonal. Then z₁ + az₂ = 0 = z₁ + bz₂. Hence (a − b)z₂ = 0, so either a = b or z₂ = 0. But if a = b, then [1, a] and [1, b] are not orthogonal. Alternatively, if z₂ = 0, then z₁ + az₂ = 0 ⇒ z₁ = 0, so [z₁, z₂] = [0, 0], a contradiction.

(13) Base Step (n = 1): x₁ = x₁.
     Inductive Step: Assume x₁ + x₂ + ⋯ + xₙ₋₁ + xₙ = xₙ + xₙ₋₁ + ⋯ + x₂ + x₁, for some n ≥ 1. Prove: x₁ + x₂ + ⋯ + xₙ + xₙ₊₁ = xₙ₊₁ + xₙ + ⋯ + x₂ + x₁. But x₁ + x₂ + ⋯ + xₙ + xₙ₊₁ = (x₁ + x₂ + ⋯ + xₙ) + xₙ₊₁ = xₙ₊₁ + (x₁ + x₂ + ⋯ + xₙ) = xₙ₊₁ + (xₙ + xₙ₋₁ + ⋯ + x₂ + x₁) (by the assumption) = xₙ₊₁ + xₙ + ⋯ + x₂ + x₁.


(14) Base Step (m = 1): ‖x₁‖ ≤ ‖x₁‖.
     Inductive Step: Assume ‖x₁ + ⋯ + xₘ‖ ≤ ‖x₁‖ + ⋯ + ‖xₘ‖, for some m ≥ 1. Prove ‖x₁ + ⋯ + xₘ + xₘ₊₁‖ ≤ ‖x₁‖ + ⋯ + ‖xₘ‖ + ‖xₘ₊₁‖. But, by the Triangle Inequality, ‖(x₁ + ⋯ + xₘ) + xₘ₊₁‖ ≤ ‖x₁ + ⋯ + xₘ‖ + ‖xₘ₊₁‖ ≤ ‖x₁‖ + ⋯ + ‖xₘ‖ + ‖xₘ₊₁‖.

(15) Base Step (k = 1): ‖x₁‖² = ‖x₁‖².
     Inductive Step: Assume ‖x₁ + ⋯ + xₖ‖² = ‖x₁‖² + ⋯ + ‖xₖ‖². Prove ‖x₁ + ⋯ + xₖ + xₖ₊₁‖² = ‖x₁‖² + ⋯ + ‖xₖ‖² + ‖xₖ₊₁‖². Use the fact that ‖(x₁ + ⋯ + xₖ) + xₖ₊₁‖² = ‖x₁ + ⋯ + xₖ‖² + 2((x₁ + ⋯ + xₖ) · xₖ₊₁) + ‖xₖ₊₁‖² = ‖x₁ + ⋯ + xₖ‖² + ‖xₖ₊₁‖², since xₖ₊₁ is orthogonal to all of x₁, …, xₖ.

(16) Base Step (k = 1): We must show (a₁x₁) · y ≤ |a₁| ‖y‖. But (a₁x₁) · y ≤ |(a₁x₁) · y| ≤ ‖a₁x₁‖ ‖y‖ (by the Cauchy-Schwarz Inequality) = |a₁| ‖x₁‖ ‖y‖ = |a₁| ‖y‖, since x₁ is a unit vector.
     Inductive Step: Assume (a₁x₁ + ⋯ + aₖxₖ) · y ≤ (|a₁| + ⋯ + |aₖ|)‖y‖, for some k ≥ 1. Prove (a₁x₁ + ⋯ + aₖxₖ + aₖ₊₁xₖ₊₁) · y ≤ (|a₁| + ⋯ + |aₖ| + |aₖ₊₁|)‖y‖. Use the fact that ((a₁x₁ + ⋯ + aₖxₖ) + aₖ₊₁xₖ₊₁) · y = (a₁x₁ + ⋯ + aₖxₖ) · y + (aₖ₊₁xₖ₊₁) · y and the Base Step.
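The generalized triangle inequality of Exercise 14, ‖x₁ + ⋯ + xₘ‖ ≤ ‖x₁‖ + ⋯ + ‖xₘ‖, is easy to check on sample data. A sketch (the vectors are arbitrary test values, not from the text):

```python
import math

def norm(v):
    """Euclidean length of a vector."""
    return math.sqrt(sum(x * x for x in v))

def vec_sum(vectors):
    """Component-wise sum of a list of same-length vectors."""
    return [sum(col) for col in zip(*vectors)]

vectors = [[1.0, -2.0, 2.0], [3.0, 0.0, -4.0], [-1.0, 1.0, 1.0]]
lhs = norm(vec_sum(vectors))          # norm of the sum
rhs = sum(norm(v) for v in vectors)   # sum of the norms
assert lhs <= rhs
```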

(17) Base Step (n = 1): We must show that ‖[x₁]‖ ≤ |x₁|. This is obvious since ‖[x₁]‖ = √(x₁²) = |x₁|.
     Inductive Step: Assume √(Σᵢ₌₁ᵏ xᵢ²) ≤ Σᵢ₌₁ᵏ |xᵢ| and prove √(Σᵢ₌₁ᵏ⁺¹ xᵢ²) ≤ Σᵢ₌₁ᵏ⁺¹ |xᵢ|. Let y = [x₁, …, xₖ, xₖ₊₁], z = [x₁, …, xₖ, 0], and w = [0, …, 0, xₖ₊₁]. Note that y = z + w. So, by the Triangle Inequality, ‖y‖ ≤ ‖z‖ + ‖w‖. Thus, √(Σᵢ₌₁ᵏ⁺¹ xᵢ²) ≤ √(Σᵢ₌₁ᵏ xᵢ²) + √(xₖ₊₁²) ≤ Σᵢ₌₁ᵏ |xᵢ| + √(xₖ₊₁²) (by the inductive hypothesis) = Σᵢ₌₁ᵏ |xᵢ| + |xₖ₊₁| = Σᵢ₌₁ᵏ⁺¹ |xᵢ|.

(18) Step 1 cannot be reversed, because y could equal −(x² + 2).
     Step 2 cannot be reversed, because y² could equal x⁴ + 4x² + c.
     Step 4 cannot be reversed, because in general y does not have to equal x² + 2.
     Step 6 cannot be reversed, since dy/dx could equal 2x + c.
     All other steps remain true when reversed.

(19) (a) For every unit vector x in R³, x · [1, −2, 3] ≠ 0.
     (b) x ≠ 0 and x·y ≤ 0, for some vectors x and y in Rⁿ.
     (c) x = 0 or ‖x + y‖ ≠ ‖y‖, for all vectors x and y in Rⁿ.
     (d) There is some vector x ∈ Rⁿ for which x·x ≤ 0.
     (e) There is an x ∈ R³ such that for every nonzero y ∈ R³, x·y ≠ 0.
     (f) For every x ∈ R⁴, there is some y ∈ R⁴ such that x·y ≠ 0.


(20) (a) Contrapositive: If x ≠ 0 and ‖x − y‖ ≤ ‖y‖, then x·y ≠ 0.
         Converse: If x = 0 or ‖x − y‖ > ‖y‖, then x·y = 0.
         Inverse: If x·y ≠ 0, then x ≠ 0 and ‖x − y‖ ≤ ‖y‖.
     (b) Contrapositive: If ‖x − y‖ ≤ ‖y‖, then either x = 0 or x·y ≠ 0.
         Converse: If ‖x − y‖ > ‖y‖, then x ≠ 0 and x·y = 0.
         Inverse: If x = 0 or x·y ≠ 0, then ‖x − y‖ ≤ ‖y‖.

(21) Suppose x ≠ 0. We must prove x·y ≠ 0 for some vector y ∈ Rⁿ. Let y = x.

(22) The contrapositive is: If u and v are (nonzero) vectors that are not in opposite directions, then there is a vector x such that u·x > 0 and v·x > 0. If u and v are in the same direction, we can let x = u. So, we can assume that u and v are not parallel. Consider a vector x that bisects the angle between u and v. Geometrically, it is clear that such a vector makes an acute angle with both u and v, which finishes the proof. (Algebraically, a formula for such a vector would be x = u/‖u‖ + v/‖v‖. Then, u·x = u · (u/‖u‖ + v/‖v‖) = (u·u)/‖u‖ + (u·v)/‖v‖ ≥ ‖u‖ − |u·v|/‖v‖ > ‖u‖ − ‖u‖‖v‖/‖v‖ (by the Cauchy-Schwarz inequality¹) = ‖u‖ − ‖u‖ = 0. Hence u·x > 0. A similar proof shows that v·x > 0.)

(23) Let x = [1, 1], y = [1, −1].

(24) Let y = [1, −2, 2]. Then since x·y ≥ 0, Result 2 implies that ‖x + y‖ > ‖y‖ = 3.

(25) (a) F (b) T (c) T (d) F (e) F (f) F (g) F (h) T (i) F

Section 1.4

(1) (Each matrix is listed row by row.)
    (a) [[2, 1, 3], [2, 7, −5], [9, 0, −1]]
    (b) Impossible
    (c) [[−16, 8, 12], [0, 20, −4], [24, 4, −8]]
    (d) [[−32, 8, 6], [−8, 2, 14], [0, 6, −8]]
    (e) Impossible
    (f) [[−7, 0, 8], [−1, 3, 1], [9, 9, −5]]
    (g) [[−23, 14, −9], [−5, 8, 8], [−9, −18, 1]]
    (h) Impossible
    (i) [[−1, 1, 12], [−1, 5, 8], [8, −3, −4]]
    (j) [[−1, 1, 12], [−1, 5, 8], [8, −3, −4]]
    (k) [[−12, 8, −6], [10, −8, 26]]
    (l) Impossible
    (m) Impossible
    (n) [[13, −6, 2], [3, −3, −5], [3, 5, 1]]

¹ We used > rather than ≥ in the Cauchy-Schwarz inequality because equality occurs in the Cauchy-Schwarz inequality only when the vectors are parallel (see Result 4).


(2) Square: B, C, E, F, G, H, J, K, L, M, N, P, Q
    Diagonal: B, G, N
    Upper triangular: B, G, L, N
    Lower triangular: B, G, M, N, Q
    Symmetric: B, F, G, J, N, P
    Skew-symmetric: H (but not E, C, K)
    Transposes: Aᵀ = [[−1, 0, 6], [4, 1, 0]], Bᵀ = B, Cᵀ = [[−1, −1], [1, 1]], and so on

(3) (Each matrix is written as a symmetric matrix plus a skew-symmetric matrix, row by row.)
    (a) [[3, −1/2, 5/2], [−1/2, 2, 1], [5/2, 1, 2]] + [[0, −1/2, 3/2], [1/2, 0, 4], [−3/2, −4, 0]]
    (b) [[1, 3/2, 0], [3/2, 3, −1], [0, −1, 0]] + [[0, −3/2, −4], [3/2, 0, 0], [4, 0, 0]]
    (c) [[2, 0, 0, 0], [0, 5, 0, 0], [0, 0, −2, 0], [0, 0, 0, 5]] + [[0, 3, 4, −1], [−3, 0, −1, 2], [−4, 1, 0, 0], [1, −2, 0, 0]]
    (d) [[−3, 7, −2, −1], [7, 4, 3, −6], [−2, 3, 5, −8], [−1, −6, −8, −5]] + [[0, −4, 7, −3], [4, 0, 2, 5], [−7, −2, 0, −6], [3, −5, 6, 0]]

(4) If Aᵀ = Bᵀ, then (Aᵀ)ᵀ = (Bᵀ)ᵀ, implying A = B, by part (1) of Theorem 1.12.

(5) (a) If A is an m × n matrix, then Aᵀ is n × m. If A = Aᵀ, then m = n.
    (b) If A is a diagonal matrix and if i ≠ j, then aᵢⱼ = 0 = aⱼᵢ.
    (c) Follows directly from part (b), since Iₙ is diagonal.
    (d) The matrix must be a square zero matrix; that is, Oₙ, for some n.

(6) (a) If i ≠ j, then aᵢⱼ + bᵢⱼ = 0 + 0 = 0.
    (b) Use the fact that aᵢⱼ + bᵢⱼ = aⱼᵢ + bⱼᵢ.

(7) Use induction on n. Base Step (n = 1): Obvious. Inductive Step: Assume A₁, …, Aₖ₊₁ and B = Σᵢ₌₁ᵏ Aᵢ are upper triangular. Prove that D = Σᵢ₌₁ᵏ⁺¹ Aᵢ is upper triangular. Let C = Aₖ₊₁. Then D = B + C. Hence, dᵢⱼ = bᵢⱼ + cᵢⱼ = 0 + 0 = 0 if i > j (by the inductive hypothesis). Hence D is upper triangular.


(8) (a) Let B = Aᵀ. Then bᵢⱼ = aⱼᵢ = aᵢⱼ = bⱼᵢ. Let D = cA. Then dᵢⱼ = caᵢⱼ = caⱼᵢ = dⱼᵢ.
    (b) Let B = Aᵀ. Then bᵢⱼ = aⱼᵢ = −aᵢⱼ = −bⱼᵢ. Let D = cA. Then dᵢⱼ = caᵢⱼ = c(−aⱼᵢ) = −caⱼᵢ = −dⱼᵢ.

(9) Iₙ is defined as the matrix whose (i, j) entry is δᵢⱼ.

(10) Part (4): Let B = A + (−A) (= (−A) + A, by part (1)). Then bᵢⱼ = aᵢⱼ + (−aᵢⱼ) = 0.
     Part (5): Let D = c(A + B) and let E = cA + cB. Then dᵢⱼ = c(aᵢⱼ + bᵢⱼ) = caᵢⱼ + cbᵢⱼ = eᵢⱼ.
     Part (7): Let B = (cd)A and let E = c(dA). Then bᵢⱼ = (cd)aᵢⱼ = c(daᵢⱼ) = eᵢⱼ.

(11) Part (1): Let B = Aᵀ and C = (Aᵀ)ᵀ. Then cᵢⱼ = bⱼᵢ = aᵢⱼ.
     Part (3): Let B = c(Aᵀ), D = cA, and F = (cA)ᵀ. Then fᵢⱼ = dⱼᵢ = caⱼᵢ = bᵢⱼ.

(12) Assume c ≠ 0. We must show A = Oₘₙ. But for all i, j, 1 ≤ i ≤ m, 1 ≤ j ≤ n, caᵢⱼ = 0 with c ≠ 0. Hence, all aᵢⱼ = 0.

(13) (a) (½(A + Aᵀ))ᵀ = ½(A + Aᵀ)ᵀ = ½(Aᵀ + (Aᵀ)ᵀ) = ½(Aᵀ + A) = ½(A + Aᵀ)
     (b) (½(A − Aᵀ))ᵀ = ½(A − Aᵀ)ᵀ = ½(Aᵀ − (Aᵀ)ᵀ) = ½(Aᵀ − A) = −½(A − Aᵀ)
     (c) ½(A + Aᵀ) + ½(A − Aᵀ) = ½(2A) = A
     (d) S₁ − V₁ = S₂ − V₂
     (e) Follows immediately from part (d).
     (f) Follows immediately from parts (d) and (e).
     (g) Parts (a) through (c) show A can be decomposed as the sum of a symmetric matrix and a skew-symmetric matrix, while parts (d) through (f) show that the decomposition for A is unique.
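The decomposition in Exercise 13, A = ½(A + Aᵀ) + ½(A − Aᵀ), can be verified mechanically. A sketch using exact rational arithmetic; the sample matrix here is our own choice (it happens to reproduce the symmetric and skew-symmetric parts shown in Exercise 3(a), if that reading of the printed answer is right):

```python
from fractions import Fraction

def transpose(m):
    return [list(row) for row in zip(*m)]

def scale_add(c, a, b):
    """Entrywise c * (a + b)."""
    return [[c * (x + y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

half = Fraction(1, 2)
A = [[3, -1, 4], [0, 2, 5], [1, -3, 2]]   # sample matrix
At = transpose(A)
S = scale_add(half, A, At)                          # (1/2)(A + A^T)
V = scale_add(half, A, [[-x for x in r] for r in At])  # (1/2)(A - A^T)

assert S == transpose(S)                                  # S is symmetric
assert V == [[-x for x in r] for r in transpose(V)]       # V is skew-symmetric
assert [[s + v for s, v in zip(rs, rv)] for rs, rv in zip(S, V)] == A
```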

(14) (a) trace(B) = 1, trace(C) = 0, trace(E) = −6, trace(F) = 2, trace(G) = 18, trace(H) = 0, trace(J) = 1, trace(K) = 4, trace(L) = 3, trace(M) = 0, trace(N) = 3, trace(P) = 0, trace(Q) = 1
     (b) Part (i): Let D = A + B. Then trace(D) = Σᵢ₌₁ⁿ dᵢᵢ = Σᵢ₌₁ⁿ aᵢᵢ + Σᵢ₌₁ⁿ bᵢᵢ = trace(A) + trace(B).
         Part (ii): Let B = cA. Then trace(B) = Σᵢ₌₁ⁿ bᵢᵢ = Σᵢ₌₁ⁿ caᵢᵢ = c Σᵢ₌₁ⁿ aᵢᵢ = c(trace(A)).
         Part (iii): Let B = Aᵀ. Then trace(B) = Σᵢ₌₁ⁿ bᵢᵢ = Σᵢ₌₁ⁿ aᵢᵢ (since bᵢᵢ = aᵢᵢ for all i) = trace(A).
     (c) Not necessarily: consider the matrices L and N in Exercise 2. (Note: If n = 1, the statement is true.)


(15) (a) F (b) T (c) F (d) T (e) T

Section 1.5

(1) (Each matrix is listed row by row.)
    (a) Impossible
    (b) [[34, −24], [42, 49], [8, −22]]
    (c) Impossible
    (d) [[73, −34], [77, −25], [19, −14]]
    (e) [−38]
    (f) [[−24, 48, −16], [3, −6, 2], [−12, 24, −8]]
    (g) Impossible
    (h) [56, −8]
    (i) Impossible
    (j) Impossible
    (k) [[22, 9, −6], [9, 73, 18], [2, −6, 4]]
    (l) [[5, 3, 2, 5], [4, 1, 3, 1], [1, 1, 0, 2], [4, 1, 3, 1]]
    (m) [226981]
    (n) [[146, 5, −603], [154, 27, −560], [38, −9, −193]]
    (o) Impossible
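The products above all come from the textbook formula cᵢⱼ = Σₖ aᵢₖbₖⱼ, which takes only a few lines to implement. A sketch (the sample matrices below are ours for illustration; the exercise's A and B are in the textbook):

```python
def mat_mul(a, b):
    """Multiply an m x n matrix by an n x p matrix:
    c[i][j] = sum over k of a[i][k] * b[k][j]."""
    m, n, p = len(a), len(b), len(b[0])
    assert all(len(row) == n for row in a), "inner dimensions must agree"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert mat_mul(A, B) == [[19, 22], [43, 50]]
```

When the inner dimensions disagree, the product is undefined, which is exactly why several parts of this exercise are answered "Impossible".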

(2) (a) No (b) Yes (c) No (d) Yes (e) No

(3) (a) [15, −13, −8]
    (b) the column vector [11, 6, 3]ᵀ
    (c) [4]
    (d) [2, 8, −2, 12]

(4) (a) Valid, by Theorem 1.14, part (1)
    (b) Invalid
    (c) Valid, by Theorem 1.14, part (1)
    (d) Valid, by Theorem 1.14, part (2)
    (e) Valid, by Theorem 1.16
    (f) Invalid
    (g) Valid, by Theorem 1.14, part (3)
    (h) Valid, by Theorem 1.14, part (2)
    (i) Invalid
    (j) Valid, by Theorem 1.14, part (3), and Theorem 1.16

(5)             Salary     Fringe Benefits
    Outlet 1    $367500    $78000
    Outlet 2    $225000    $48000
    Outlet 3    $765000    $162000
    Outlet 4    $360000    $76500

(6)           Tickets     Food        Souvenirs
    June      $1151300    $3056900    $2194400
    July      $1300700    $3456700    $2482400
    August    $981100     $2615900    $1905100


(7)              Field 1    Field 2    Field 3
    Nitrogen     1.00       0.45       0.65
    Phosphate    0.90       0.35       0.75
    Potash       0.95       0.35       0.85
    (in tons)

(8)             Chip 1    Chip 2    Chip 3    Chip 4
    Rocket 1    2131      1569      1839      2750
    Rocket 2    2122      1559      1811      2694
    Rocket 3    2842      2102      2428      3618
    Rocket 4    2456      1821      2097      3124

(9) (a) One example: [[1, 1], [0, −1]]
    (b) One example: [[1, 1, 0], [0, −1, 0], [0, 0, 1]]
    (c) Consider [[0, 0, 1], [1, 0, 0], [0, 1, 0]]

(10) (a) Third row, fourth column entry of AB
     (b) Fourth row, first column entry of AB
     (c) Third row, second column entry of BA
     (d) Second row, fifth column entry of BA

(11) (a) Σₖ₌₁ⁿ a₃ₖbₖ₂ (b) Σₖ₌₁ᵐ b₄ₖaₖ₁

(12) (a) [−27, 43, −56]
     (b) the column vector [56, −57, 18]ᵀ

(13) (a) [−61, 15, 20, 9]
     (b) the column vector [43, −41, −12]ᵀ

(14) (a) Let B = Ae₁. Then B is m × 1 and bₖ₁ = Σⱼ aₖⱼ(e₁)ⱼ = (aₖ₁)(1) + (aₖ₂)(0) + (aₖ₃)(0) = aₖ₁.
     (b) Aeᵢ = ith column of A.
     (c) By part (b), each column of A is easily seen to be the zero vector by letting x equal each of e₁, …, eₙ in turn.

(15) Proof of Part (2): The (i, j) entry of A(B + C)
     = (ith row of A) · (jth column of (B + C))
     = (ith row of A) · (jth column of B + jth column of C)
     = (ith row of A) · (jth column of B) + (ith row of A) · (jth column of C)
     = (i, j) entry of AB + (i, j) entry of AC
     = (i, j) entry of (AB + AC).
     The proof of Part (3) is similar. For the first equation in part (4), the (i, j) entry of c(AB)
     = c((ith row of A) · (jth column of B))
     = (c(ith row of A)) · (jth column of B)
     = (ith row of cA) · (jth column of B)
     = (i, j) entry of (cA)B.
     The proof of the second equation is similar.

(16) Let B = AOₙₚ. Clearly B is an m × p matrix and bᵢⱼ = Σₖ₌₁ⁿ aᵢₖ · 0 = 0.

(17) Proof that AIₙ = A: Let B = AIₙ. Then B is clearly an m × n matrix and bⱼₗ = Σₖ₌₁ⁿ aⱼₖ iₖₗ = aⱼₗ, since iₖₗ = 0 unless k = l, in which case iₖₖ = 1. The proof that IₘA = A is similar.

(18) (a) We need to show cᵢⱼ = 0 if i ≠ j. Now, if i ≠ j, both factors in each term of the formula for cᵢⱼ in the formal definition of matrix multiplication are zero, except possibly for the terms aᵢᵢbᵢⱼ and aᵢⱼbⱼⱼ. But since the factors bᵢⱼ and aᵢⱼ also equal zero, all terms in the formula for cᵢⱼ equal zero.
     (b) Assume i > j. Consider the term aᵢₖbₖⱼ in the formula for cᵢⱼ. If i > k, then aᵢₖ = 0. If i ≤ k, then j < k, so bₖⱼ = 0. Hence all terms in the formula for cᵢⱼ equal zero.
     (c) Let L₁ and L₂ be lower triangular matrices. U₁ = L₁ᵀ and U₂ = L₂ᵀ are then upper triangular, and L₁L₂ = U₁ᵀU₂ᵀ = (U₂U₁)ᵀ (by Theorem 1.16). But by part (b), U₂U₁ is upper triangular. So L₁L₂ is lower triangular.

(19) Base Step: Clearly, (cA)¹ = c¹A¹.
     Inductive Step: Assume (cA)ⁿ = cⁿAⁿ, and prove (cA)ⁿ⁺¹ = cⁿ⁺¹Aⁿ⁺¹. Now, (cA)ⁿ⁺¹ = (cA)ⁿ(cA) = (cⁿAⁿ)(cA) (by the inductive hypothesis) = cⁿcAⁿA (by part (4) of Theorem 1.14) = cⁿ⁺¹Aⁿ⁺¹.

(20) Proof of Part (1): Base Step: A^(s+0) = Aˢ = AˢI = AˢA⁰.
     Inductive Step: Assume A^(s+t) = AˢAᵗ for some t ≥ 0. We must prove A^(s+(t+1)) = AˢA^(t+1). But A^(s+(t+1)) = A^((s+t)+1) = A^(s+t)A = (AˢAᵗ)A (by the inductive hypothesis) = Aˢ(AᵗA) = AˢA^(t+1).
     Proof of Part (2): (We only need prove (Aˢ)ᵗ = A^(st).) Base Step: (Aˢ)⁰ = I = A⁰ = A^(s·0).
     Inductive Step: Assume (Aˢ)ᵗ = A^(st) for some integer t ≥ 0. We must prove (Aˢ)^(t+1) = A^(s(t+1)). But (Aˢ)^(t+1) = (Aˢ)ᵗAˢ = A^(st)Aˢ (by the inductive hypothesis) = A^(st+s) (by part (1)) = A^(s(t+1)).

(21) (a) If A is an m × n matrix, and B is a p × r matrix, then the fact that AB exists means n = p, and the fact that BA exists means m = r. Then AB is an m × m matrix, while BA is an n × n matrix. If AB = BA, then m = n.
     (b) Note that by the Distributive Law, (A + B)² = A² + AB + BA + B².

(22) (AB)C = A(BC) = A(CB) = (AC)B = (CA)B = C(AB).


(23) Use the fact that AᵀBᵀ = (BA)ᵀ, while BᵀAᵀ = (AB)ᵀ.

(24) (AAᵀ)ᵀ = (Aᵀ)ᵀAᵀ (by Theorem 1.16) = AAᵀ. (Similarly for AᵀA.)

(25) (a) If A, B are both skew-symmetric, then (AB)ᵀ = BᵀAᵀ = (−B)(−A) = (−1)(−1)BA = BA. The symmetric case is similar.
     (b) Use the fact that (AB)ᵀ = BᵀAᵀ = BA, since A, B are symmetric.

(26) (a) The (i, i) entry of AAᵀ = (ith row of A) · (ith column of Aᵀ) = (ith row of A) · (ith row of A) = sum of the squares of the entries in the ith row of A. Hence, trace(AAᵀ) is the sum of the squares of the entries from all rows of A.
     (c) trace(AB) = Σᵢ₌₁ⁿ (Σₖ₌₁ⁿ aᵢₖbₖᵢ) = Σᵢ₌₁ⁿ Σₖ₌₁ⁿ bₖᵢaᵢₖ = Σₖ₌₁ⁿ Σᵢ₌₁ⁿ bₖᵢaᵢₖ. Reversing the roles of the dummy variables k and i gives Σᵢ₌₁ⁿ Σₖ₌₁ⁿ bᵢₖaₖᵢ, which is equal to trace(BA).
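Exercise 26(c)'s identity trace(AB) = trace(BA) holds even when AB ≠ BA, which a quick numeric check makes vivid. A sketch with sample matrices of our own choosing:

```python
def mat_mul(a, b):
    """c[i][j] = sum over k of a[i][k] * b[k][j]."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def trace(m):
    """Sum of the main-diagonal entries of a square matrix."""
    return sum(m[i][i] for i in range(len(m)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]
assert mat_mul(A, B) != mat_mul(B, A)                     # AB != BA here,
assert trace(mat_mul(A, B)) == trace(mat_mul(B, A))       # yet traces agree
```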

(27) (a) Consider any matrix of the form [[1, 0], [x, 0]].
     (c) (Iₙ − A)² = Iₙ² − IₙA − AIₙ + A² = Iₙ − A − A + A = Iₙ − A.
     (d) [[2, −1, −1], [1, 0, −1], [1, −1, 0]]
     (e) A² = (A)A = (AB)A = A(BA) = AB = A.

(28) (a) (ith row of A) · (jth column of B) = (i, j)th entry of Oₘₚ = 0.
     (b) Consider A = [[1, 2, −1], [2, 4, −2]] and B = [[1, −2], [0, 1], [1, 0]].
     (c) Let C = [[2, −4], [0, 2], [2, 0]].

(29) See Exercise 30(c).

(30) Throughout this solution, we use the fact that Aeᵢ = (ith column of A). (See Exercise 14(b).) Similarly, if eᵢ is thought of as a row matrix, eᵢA = (ith row of A).

     (a) (jth column of AEᵢⱼ) = A(jth column of Eᵢⱼ) = Aeᵢ = (ith column of A). And, if k ≠ j, (kth column of AEᵢⱼ) = A(kth column of Eᵢⱼ) = A0 = 0.
     (b) (ith row of EᵢⱼA) = (ith row of Eᵢⱼ)A = eⱼA = (jth row of A). And, if k ≠ i, (kth row of EᵢⱼA) = (kth row of Eᵢⱼ)A = 0A = 0.
     (c) Suppose A is an n × n matrix that commutes with all other n × n matrices. Suppose i ≠ j. Then aᵢⱼ = ith entry of (jth column of A) = ith entry of (ith column of AEⱼᵢ) (by reversing the roles of i and j in part (a)) = (i, i)th entry of AEⱼᵢ = (i, i)th entry of EⱼᵢA (since A commutes with Eⱼᵢ) = ith entry of (ith row of EⱼᵢA) = 0 (since all rows other than the jth row of EⱼᵢA have all zero entries, by part (b)). Thus, A is diagonal. Next, aᵢᵢ = ith entry of (ith column of A) = ith entry of (jth column of AEᵢⱼ) (by part (a)) = (i, j)th entry of AEᵢⱼ = (i, j)th entry of EᵢⱼA (since A commutes with Eᵢⱼ) = jth entry of (ith row of EᵢⱼA) = jth entry of (jth row of A) (by part (b)) = aⱼⱼ. Thus, all the main diagonal terms of A are equal, and so A = cIₙ for some c ∈ R.


(c) Suppose A is an n � n matrix that commutes with all other n � nmatrices. Suppose i 6= j. Then aij = ith entry of (jth column ofA) = ith entry of (ith column of Aji) (by reversing the roles of iand j in part (a)) = (i; i)th entry of Aji = (i; i)th entry of jiA(since A commutes with ji) = ith entry of (ith row of jiA) = 0(since all rows other than the jth row of jiA have all zero entries,by part (b)). Thus, A is diagonal. Next, aii = ith entry of (ithcolumn of A) = ith entry of (jth column of Aij) (by part (a)) =(i; j)th entry of Aij = (i; j)th entry of ijA (since A commuteswith ij) = jth entry of (ith row of ijA) = jth entry of (jth rowof A) (by part (b)) = ajj . Thus, all the main diagonal terms of Aare equal, and so A = cIn for some c 2 R.

(31) (a) T (b) T (c) T (d) F (e) F (f) F

Chapter 1 Review Exercises

(1) Yes. Vectors corresponding to adjacent sides are orthogonal. Vectors corresponding to opposite sides are parallel, with one pair having slope 3/5 and the other pair having slope −5/3.

(2) u = [5/√394, −12/√394, 15/√394] ≈ [0.2481, −0.5955, 0.7444]; slightly longer.

(3) Net velocity = [4√2 − 5, −4√2] ≈ [0.6569, −5.6569]; speed ≈ 5.6947 mi/hr.

(4) a = [−10, 9, 10] m/sec^2

(5) |x · y| = 74 ≤ ||x|| ||y|| ≈ 90.9

(6) θ ≈ 136°

(7) proj_a b = [114/25, −38/25, 19/25, 57/25] = [4.56, −1.52, 0.76, 2.28];
b − proj_a b = [−14/25, −62/25, 56/25, −32/25] = [−0.56, −2.48, 2.24, −1.28];
a · (b − proj_a b) = 0.
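The final check in (7), a · (b − proj_a b) = 0, holds for any pair of vectors. A quick sketch with exact Fraction arithmetic (the vectors a and b below are illustrative, not the ones from the exercise):

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def proj(a, b):
    # projection of b onto a: ((a . b) / ||a||^2) a
    c = Fraction(dot(a, b), dot(a, a))
    return [c * x for x in a]

a = [Fraction(2), Fraction(-1), Fraction(1), Fraction(3)]
b = [Fraction(1), Fraction(4), Fraction(0), Fraction(-2)]

p = proj(a, b)
residual = [bi - pi for bi, pi in zip(b, p)]
assert dot(a, residual) == 0   # b - proj_a(b) is orthogonal to a
```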

(8) −1782 joules

(9) We must prove that (x + y) · (x − y) = 0 ⇒ ||x|| = ||y||. But, (x + y) · (x − y) = 0 ⇒ x·x − x·y + y·x − y·y = 0 ⇒ ||x||^2 − ||y||^2 = 0 ⇒ ||x||^2 = ||y||^2 ⇒ ||x|| = ||y||.

(10) First, x ≠ 0, or else proj_x y is not defined. Also, y ≠ 0, since that would imply proj_x y = y. Now, assume x ∥ y. Then, there is a scalar c ≠ 0 such that y = cx. Hence, proj_x y = ((x·y)/||x||^2)x = ((x·cx)/||x||^2)x = ((c||x||^2)/||x||^2)x = cx = y, contradicting the assumption that y ≠ proj_x y.


(11) (a) 3A − 4C^T = [3 2 13; −11 −19 0]; AB = [15 −21 −4; 22 −30 11]; BA is not defined; AC = [23 14; −5 23]; CA = [30 −11 17; 2 0 18; −11 5 16]; A^3 is not defined; B^3 = [97 −128 24; −284 375 −92; 268 −354 93]

(b) Third row of BC = [5 8].

(12) S = [4 −1/2 1/2; −1/2 7 −1; 1/2 −1 −2]; V = [0 −5/2 −1/2; 5/2 0 −2; 1/2 2 0]

(13) (a) (3(A − B)^T)^T = 3(A − B). Also, −(3(A − B)^T) = 3(−1)(A^T − B^T) = 3(−1)((−A) − (−B)) (definition of skew-symmetric) = 3(A − B). Hence, (3(A − B)^T)^T = −(3(A − B)^T), so 3(A − B)^T is skew-symmetric.

(b) Let C = A + B. Now, c_ij = a_ij + b_ij. But for i < j, a_ij = b_ij = 0. Hence, for i < j, c_ij = 0. Thus, C is lower triangular.

(14)
              Price     Shipping Cost
Company I     $168,500  $24,200
Company II    $202,500  $29,100
Company III   $155,000  $22,200

(15) Take the transpose of both sides of A^T B^T = B^T A^T to get BA = AB. Then, (AB)^2 = (AB)(AB) = A(BA)B = A(AB)B = A^2 B^2.

(16) Negation: For every square matrix A, A^2 = A. Counterexample: A = [−1].

(17) If A ≠ O_22, then some row of A, say the ith row, is nonzero. Apply Result 5 in Section 1.3 with x = (ith row of A).

(18) Base Step (k = 2): Suppose A and B are upper triangular n × n matrices, and let C = AB. Then a_ij = b_ij = 0, for i > j. Hence, for i > j, c_ij = Σ_{m=1}^n a_im b_mj = Σ_{m=1}^{i−1} 0 · b_mj + a_ii b_ij + Σ_{m=i+1}^n a_im · 0 = a_ii(0) = 0. Thus, C is upper triangular.
Inductive Step: Let A_1, ..., A_{k+1} be upper triangular matrices. Then, the product C = A_1 ⋯ A_k is upper triangular by the Induction Hypothesis,


and so the product A_1 ⋯ A_{k+1} = CA_{k+1} is upper triangular by the Base Step.
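The Base Step can be spot-checked numerically; a short sketch with upper triangular example matrices of our own:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_upper_triangular(M):
    # every entry strictly below the main diagonal is zero
    return all(M[i][j] == 0 for i in range(len(M)) for j in range(i))

A = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]
B = [[7, 8, 9], [0, 1, 2], [0, 0, 3]]
C = mat_mul(A, B)
assert is_upper_triangular(C)
# the diagonal of AB is the product of the diagonals: c_ii = a_ii * b_ii
assert all(C[i][i] == A[i][i] * B[i][i] for i in range(3))
```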

(19) (a) Let A and B be n × n matrices having the properties given in the exercise. Let C = AB. Then we know that a_ij = 0 for all i < j, b_ij = 0 for all i > j, c_ij = 0 for all i ≠ j, and that a_ii ≠ 0 and b_ii ≠ 0 for all i. We need to prove that a_ij = 0 for all i > j. We use induction on j.
Base Step (j = 1): Let i > j = 1. Then 0 = c_i1 = Σ_{k=1}^n a_ik b_k1 = a_i1 b_11 + Σ_{k=2}^n a_ik b_k1 = a_i1 b_11 + Σ_{k=2}^n a_ik · 0 = a_i1 b_11. Since b_11 ≠ 0, this implies that a_i1 = 0, completing the Base Step.
Inductive Step: Assume for j = 1, 2, ..., m − 1 that a_ij = 0 for all i > j. That is, assume we have already proved that, for some m ≥ 2, the first m − 1 columns of A have all zeros below the main diagonal. To complete the inductive step, we need to prove that the mth column of A has all zeros below the main diagonal. That is, we must prove that a_im = 0 for all i > m. Let i > m. Then, 0 = c_im = Σ_{k=1}^n a_ik b_km = Σ_{k=1}^{m−1} a_ik b_km + a_im b_mm + Σ_{k=m+1}^n a_ik b_km = Σ_{k=1}^{m−1} 0 · b_km + a_im b_mm + Σ_{k=m+1}^n a_ik · 0 = a_im b_mm. But, since b_mm ≠ 0, we must have a_im = 0, completing the proof.

(b) Apply part (a) to B^T and A^T to prove that B^T is diagonal. Hence B is also diagonal.

(20) (a) F (b) T (c) F (d) F (e) F (f) T (g) F (h) F (i) F (j) T (k) T (l) T (m) T (n) F (o) F (p) F (q) F (r) T


Chapter 2

Section 2.1

(1) (a) {(−2, 3, 5)}
(b) {(4, −5, 2)}
(c) { }
(d) {(c + 7, 2c − 3, c, 6) | c ∈ ℝ}; (7, −3, 0, 6); (8, −1, 1, 6); (9, 1, 2, 6)
(e) {(2b − d − 4, b, 2d + 5, d, 2) | b, d ∈ ℝ}; (−4, 0, 5, 0, 2); (−2, 1, 5, 0, 2); (−5, 0, 7, 1, 2)
(f) {(b + 3c − 2, b, c, 8) | b, c ∈ ℝ}; (−2, 0, 0, 8); (−1, 1, 0, 8); (1, 0, 1, 8)
(g) {(6, −1, 3)}
(h) { }

(2) (a) {(3c + 13e + 46, c + e + 13, c, −2e + 5, e) | c, e ∈ ℝ}
(b) {(3b − 12d + 50e − 13, b, 2d − 8e + 3, d, e, −2) | b, d, e ∈ ℝ}
(c) {(−20c + 9d − 153f − 68, 7c − 2d + 37f + 15, c, d, 4f + 2, f) | c, d, f ∈ ℝ}
(d) { }

(3) 51 nickels, 62 dimes, 31 quarters

(4) y = 2x^2 − x + 3

(5) y = −3x^3 + 2x^2 − 4x + 6

(6) x^2 + y^2 − 6x − 8y = 0, or (x − 3)^2 + (y − 4)^2 = 25

(7) In each part, R(AB) = (R(A))B, which equals the given matrix.

(a) [26 15 −6; 6 4 1; 0 −6 12; 10 4 −14]  (b) [26 15 −6; 10 4 −14; 18 6 15; 6 4 1]

(8) (a) To save space, we write ⟨C⟩_i for the ith row of a matrix C.
For the Type (I) operation R: ⟨i⟩ ← c⟨i⟩: Now, ⟨R(AB)⟩_i = c⟨AB⟩_i = c⟨A⟩_i B (by the hint in the text) = ⟨R(A)⟩_i B = ⟨R(A)B⟩_i. But, if k ≠ i, ⟨R(AB)⟩_k = ⟨AB⟩_k = ⟨A⟩_k B (by the hint in the text) = ⟨R(A)⟩_k B = ⟨R(A)B⟩_k.
For the Type (II) operation R: ⟨i⟩ ← c⟨j⟩ + ⟨i⟩: Now, ⟨R(AB)⟩_i = c⟨AB⟩_j + ⟨AB⟩_i = c⟨A⟩_j B + ⟨A⟩_i B (by the hint in the text) = (c⟨A⟩_j + ⟨A⟩_i)B = ⟨R(A)⟩_i B = ⟨R(A)B⟩_i. But, if k ≠ i, ⟨R(AB)⟩_k = ⟨AB⟩_k = ⟨A⟩_k B (by the hint in the text) = ⟨R(A)⟩_k B = ⟨R(A)B⟩_k.


For the Type (III) operation R: ⟨i⟩ ↔ ⟨j⟩: Now, ⟨R(AB)⟩_i = ⟨AB⟩_j = ⟨A⟩_j B (by the hint in the text) = ⟨R(A)⟩_i B = ⟨R(A)B⟩_i. Similarly, ⟨R(AB)⟩_j = ⟨AB⟩_i = ⟨A⟩_i B (by the hint in the text) = ⟨R(A)⟩_j B = ⟨R(A)B⟩_j. And, if k ≠ i and k ≠ j, ⟨R(AB)⟩_k = ⟨AB⟩_k = ⟨A⟩_k B (by the hint in the text) = ⟨R(A)⟩_k B = ⟨R(A)B⟩_k.

(b) Use induction on n, the number of row operations used.
Base Step: The case n = 1 is part (1) of Theorem 2.1, and is proven in part (a) of this exercise.
Inductive Step: Assume that R_n(⋯(R_2(R_1(AB)))⋯) = R_n(⋯(R_2(R_1(A)))⋯)B, and prove that R_{n+1}(R_n(⋯(R_2(R_1(AB)))⋯)) = R_{n+1}(R_n(⋯(R_2(R_1(A)))⋯))B. Now, R_{n+1}(R_n(⋯(R_2(R_1(AB)))⋯)) = R_{n+1}(R_n(⋯(R_2(R_1(A)))⋯)B) (by the inductive hypothesis) = R_{n+1}(R_n(⋯(R_2(R_1(A)))⋯))B (by part (a)).

(9) Multiplying a row by zero changes all of its entries to zero, essentially erasing all of the information in the row.

(10) Follow the hint in the textbook. First, A(X_1 + c(X_2 − X_1)) = AX_1 + cA(X_2 − X_1) = B + cAX_2 − cAX_1 = B + cB − cB = B. Then we want to show that X_1 + c(X_2 − X_1) = X_1 + d(X_2 − X_1) implies that c = d, to prove that each real number c produces a different solution. But X_1 + c(X_2 − X_1) = X_1 + d(X_2 − X_1) ⇒ c(X_2 − X_1) = d(X_2 − X_1) ⇒ (c − d)(X_2 − X_1) = O ⇒ (c − d) = 0 or (X_2 − X_1) = O. However, X_2 ≠ X_1, and so c = d.
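The computation above can be illustrated concretely. In the sketch below, the singular system and its two particular solutions X1 and X2 are our own example; every value of c yields another solution of AX = B:

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [2, 4]]   # singular coefficient matrix
B = [5, 10]
X1 = [5, 0]            # one solution
X2 = [1, 2]            # a different solution
assert mat_vec(A, X1) == B and mat_vec(A, X2) == B

for c in [-2, 0.5, 3, 10]:
    X = [x1 + c * (x2 - x1) for x1, x2 in zip(X1, X2)]
    assert mat_vec(A, X) == B   # every c gives another solution
```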

(11) (a) T (b) F (c) F (d) F (e) T (f) T

Section 2.2

(1) Matrices in (a), (b), (c), (d), and (f) are not in reduced row echelon form.
Matrix in (a) fails condition 2 of the definition.
Matrix in (b) fails condition 4 of the definition.
Matrix in (c) fails condition 1 of the definition.
Matrix in (d) fails conditions 1, 2, and 3 of the definition.
Matrix in (f) fails condition 3 of the definition.


(2) (a) [1 4 0 | 13; 0 0 1 | −3; 0 0 0 | 0]

(b) I_4

(c) [1 −2 0 11 | 23; 0 0 1 −2 | 5; 0 0 0 0 | 0; 0 0 0 0 | 0]

(d) [1 0 0; 0 1 0; 0 0 1; 0 0 0; 0 0 0]

(e) [1 −2 0 2 −1 | 1; 0 0 1 −1 3 | 2]

(f) I_5

(3) (a) [1 0 0 | −2; 0 1 0 | 3; 0 0 1 | 5]; (−2, 3, 5)

(e) [1 −2 0 1 0 | −4; 0 0 1 −2 0 | 5; 0 0 0 0 1 | 2]; {(2b − d − 4, b, 2d + 5, d, 2) | b, d ∈ ℝ}

(g) [1 0 0 | 6; 0 1 0 | −1; 0 0 1 | 3; 0 0 0 | 0]; {(6, −1, 3)}

(4) (a) Solution set = {(c − 2d, −3d, c, d) | c, d ∈ ℝ}; one particular solution = (−3, −6, 1, 2)

(b) Solution set = {(e, −2d − e, −3d, d, e) | d, e ∈ ℝ}; one particular solution = (1, 1, 3, −1, 1)

(c) Solution set = {(−4b + 2d − f, b, −3d + 2f, d, −2f, f) | b, d, f ∈ ℝ}; one particular solution = (−3, 1, 0, 2, −6, 3)

(5) (a) {(2c, −4c, c) | c ∈ ℝ} = {c(2, −4, 1) | c ∈ ℝ}

(b) {(0, 0, 0)}

(c) {(0, 0, 0, 0)}

(d) {(3d, −2d, d, d) | d ∈ ℝ} = {d(3, −2, 1, 1) | d ∈ ℝ}

(6) (a) a = 2, b = 15, c = 12, d = 6

(b) a = 2, b = 25, c = 16, d = 18

(c) a = 4, b = 2, c = 4, d = 1, e = 4

(d) a = 3, b = 11, c = 2, d = 3, e = 2, f = 6

(7) (a) A = 3, B = 4, C = −2 (b) A = 4, B = −3, C = 1, D = 0

(8) Solution for system AX = B_1: (6, −51, 21); solution for system AX = B_2: (35/3, −98, 79/2).

(9) Solution for system AX = B_1: (−1, −6, 14, 9); solution for system AX = B_2: (−10, 56, 8, 10/3).


(10) (a) R_1: (III): ⟨1⟩ ↔ ⟨2⟩
R_2: (I): ⟨1⟩ ← (1/2)⟨1⟩
R_3: (I): ⟨2⟩ ← (1/3)⟨2⟩
R_4: (II): ⟨1⟩ ← −2⟨2⟩ + ⟨1⟩

(b) AB = [57 −21; 56 −20];
R_4(R_3(R_2(R_1(AB)))) = R_4(R_3(R_2(R_1(A))))B = [−10 4; 19 −7].

(11) (a) A(X_1 + X_2) = AX_1 + AX_2 = O + O = O; A(cX_1) = c(AX_1) = cO = O.

(b) Any nonhomogeneous system with two equations and two unknowns that has a unique solution will serve as a counterexample. For instance, consider
x + y = 1
x − y = 1.
This system has a unique solution: (1, 0). Let (s_1, s_2) and (t_1, t_2) both equal (1, 0). Then the sum of solutions is not a solution in this case. Also, if c ≠ 1, then the scalar multiple of a solution by c is not a solution.

(c) A(X_1 + X_2) = AX_1 + AX_2 = B + O = B.

(d) Let X_1 be the unique solution to AX = B. Suppose AX = O has a nontrivial solution X_2. Then, by (c), X_1 + X_2 is a solution to AX = B, and X_1 ≠ X_1 + X_2 since X_2 ≠ O. This contradicts the uniqueness of X_1.

(12) If a ≠ 0, then pivoting in the first column of [a b | 0; c d | 0] yields [1 b/a | 0; 0 d − c(b/a) | 0]. There is a nontrivial solution if and only if the (2, 2) entry of this matrix is zero, which occurs if and only if ad − bc = 0. Similarly, if c ≠ 0, swapping the first and second rows of [a b | 0; c d | 0] and then pivoting in the first column yields [1 d/c | 0; 0 b − a(d/c) | 0]. There is a nontrivial solution if and only if the (2, 2) entry of this matrix is zero, which occurs if and only if ad − bc = 0. Finally, if both a and c equal 0, then ad − bc = 0 and (1, 0) is a nontrivial solution.
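The criterion just derived — the homogeneous system has a nontrivial solution exactly when ad − bc = 0 — can be spot-checked by a brute-force search over a small rational grid (our own sketch; it only detects solutions that happen to lie on the grid, which the singular examples below do):

```python
def has_nontrivial_solution(a, b, c, d):
    # search a small rational grid for (x, y) != (0, 0) with
    # a*x + b*y = 0 and c*x + d*y = 0
    vals = [v / 4 for v in range(-8, 9)]
    return any(a * x + b * y == 0 and c * x + d * y == 0
               for x in vals for y in vals if (x, y) != (0, 0))

assert has_nontrivial_solution(1, 2, 2, 4)        # ad - bc = 0
assert has_nontrivial_solution(0, 0, 1, 1)        # ad - bc = 0
assert not has_nontrivial_solution(1, 0, 0, 1)    # ad - bc = 1
assert not has_nontrivial_solution(2, 1, 3, 2)    # ad - bc = 1
```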

(13) (a) The contrapositive is: If AX = O has only the trivial solution, then A^2 X = O has only the trivial solution.
Let X_1 be a solution to A^2 X = O. Then O = A^2 X_1 = A(AX_1). Thus AX_1 is a solution to AX = O. Hence AX_1 = O by the premise. Thus, X_1 = O, using the premise again.


(b) The contrapositive is: Let t be a positive integer. If AX = O has only the trivial solution, then A^t X = O has only the trivial solution.
Proceed by induction. The statement is clearly true when t = 1, completing the Base Step.
Inductive Step: Assume that if AX = O has only the trivial solution, then A^t X = O has only the trivial solution. We must prove that if AX = O has only the trivial solution, then A^{t+1} X = O has only the trivial solution.
Let X_1 be a solution to A^{t+1} X = O. Then A(A^t X_1) = O. Thus A^t X_1 = O, since it is a solution to AX = O. But then X_1 = O by the inductive hypothesis.

(14) (a) T (b) T (c) F (d) T (e) F (f) F

Section 2.3

(1) (a) A row operation of type (I) converts A to B: ⟨2⟩ ← −5⟨2⟩.
(b) A row operation of type (III) converts A to B: ⟨1⟩ ↔ ⟨3⟩.
(c) A row operation of type (II) converts A to B: ⟨2⟩ ← ⟨3⟩ + ⟨2⟩.

(2) (a) B = I_3. The sequence of row operations converting A to B is:
(I): ⟨1⟩ ← (1/4)⟨1⟩
(II): ⟨2⟩ ← 2⟨1⟩ + ⟨2⟩
(II): ⟨3⟩ ← −3⟨1⟩ + ⟨3⟩
(III): ⟨2⟩ ↔ ⟨3⟩
(II): ⟨1⟩ ← 5⟨3⟩ + ⟨1⟩

(b) The sequence of row operations converting B to A is:
(II): ⟨1⟩ ← −5⟨3⟩ + ⟨1⟩
(III): ⟨2⟩ ↔ ⟨3⟩
(II): ⟨3⟩ ← 3⟨1⟩ + ⟨3⟩
(II): ⟨2⟩ ← −2⟨1⟩ + ⟨2⟩
(I): ⟨1⟩ ← 4⟨1⟩

(3) (a) The common reduced row echelon form is I_3.

(b) The sequence of row operations is:
(II): ⟨3⟩ ← 2⟨2⟩ + ⟨3⟩
(I): ⟨3⟩ ← −1⟨3⟩
(II): ⟨1⟩ ← −9⟨3⟩ + ⟨1⟩
(II): ⟨2⟩ ← 3⟨3⟩ + ⟨2⟩
(II): ⟨3⟩ ← −(9/5)⟨2⟩ + ⟨3⟩


(II): ⟨1⟩ ← −(3/5)⟨2⟩ + ⟨1⟩
(I): ⟨2⟩ ← −(1/5)⟨2⟩
(II): ⟨3⟩ ← −3⟨1⟩ + ⟨3⟩
(II): ⟨2⟩ ← −2⟨1⟩ + ⟨2⟩
(I): ⟨1⟩ ← −5⟨1⟩

(4) A reduces to [1 0 0 −2 3; 0 1 0 −1 0; 0 0 1 1 0; 0 0 0 0 0], while B reduces to [1 0 0 −2 0; 0 1 0 −1 0; 0 0 1 1 0; 0 0 0 0 1].

(5) (a) 2 (b) 1 (c) 2 (d) 3 (e) 3 (f) 2

(6) Corollary 2.6 cannot be used in either part (a) or part (b) because there are more equations than variables.

(a) Rank = 3. Theorem 2.5 predicts that there is only the trivial solution. Solution set = {(0, 0, 0)}

(b) Rank = 2. Theorem 2.5 predicts that nontrivial solutions exist. Solution set = {(3c, −4c, c) | c ∈ ℝ}

(7) In the following answers, the asterisk represents any real entry:

(a) Smallest rank = 1: [1 * * | *; 0 0 0 | 0; 0 0 0 | 0; 0 0 0 | 0]. Largest rank = 4: [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0; 0 0 0 | 1].

(b) Smallest rank = 1: [1 * * * | *; 0 0 0 0 | 0; 0 0 0 0 | 0]. Largest rank = 3: [1 0 0 * | *; 0 1 0 * | *; 0 0 1 * | *].

(c) Smallest rank = 2: [1 * * * | 0; 0 0 0 0 | 1; 0 0 0 0 | 0]. Largest rank = 3: [1 0 * * | 0; 0 1 * * | 0; 0 0 0 0 | 1].


(d) Smallest rank = 1: [1 * * | *; 0 0 0 | 0; 0 0 0 | 0; 0 0 0 | 0; 0 0 0 | 0]. Largest rank = 3: [1 0 0 | *; 0 1 0 | *; 0 0 1 | *; 0 0 0 | 0; 0 0 0 | 0].

(8) (a) x = −(21/11)a_1 + (6/11)a_2

(b) x = 3a_1 − 4a_2 + a_3

(c) Not possible

(d) x = (1/2)a_1 − (1/2)a_2 + (1/2)a_3

(e) The answer is not unique; one possible answer is x = −3a_1 + 2a_2 + 0a_3.

(f) Not possible (g) x = 2a_1 − a_2 − a_3 (h) Not possible

(9) (a) Yes: 5(row 1) − 3(row 2) − 1(row 3)

(b) Not in row space

(c) Not in row space

(d) Yes: −3(row 1) + 1(row 2)

(e) Yes, but the linear combination of the rows is not unique; one possible expression for the given vector is −3(row 1) + 1(row 2) + 0(row 3).

(10) (a) [13, −23, 60] = −2q_1 + q_2 + 3q_3

(b) q_1 = 3r_1 − r_2 − 2r_3; q_2 = 2r_1 + 2r_2 − 5r_3; q_3 = r_1 − 6r_2 + 4r_3

(c) [13, −23, 60] = −r_1 − 14r_2 + 11r_3

(11) (a) (i) B = [1 0 −1 2; 0 1 3 2; 0 0 0 0];
(ii) [1, 0, −1, 2] = −(7/8)[0, 4, 12, 8] + (1/2)[2, 7, 19, 18] + 0[1, 2, 5, 6]; [0, 1, 3, 2] = (1/4)[0, 4, 12, 8] + 0[2, 7, 19, 18] + 0[1, 2, 5, 6];
(iii) [0, 4, 12, 8] = 0[1, 0, −1, 2] + 4[0, 1, 3, 2]; [2, 7, 19, 18] = 2[1, 0, −1, 2] + 7[0, 1, 3, 2]; [1, 2, 5, 6] = 1[1, 0, −1, 2] + 2[0, 1, 3, 2]

(b) (i) B = [1 2 3 0 −1; 0 0 0 1 5; 0 0 0 0 0; 0 0 0 0 0];
(ii) [1, 2, 3, 0, −1] = −(5/3)[1, 2, 3, −4, −21] − (4/3)[−2, −4, −6, 5, 27] + 0[13, 26, 39, 5, 12] + 0[2, 4, 6, −1, −7]; [0, 0, 0, 1, 5] = −(2/3)[1, 2, 3, −4, −21] − (1/3)[−2, −4, −6, 5, 27] + 0[13, 26, 39, 5, 12] + 0[2, 4, 6, −1, −7];
(iii) [1, 2, 3, −4, −21] = 1[1, 2, 3, 0, −1] − 4[0, 0, 0, 1, 5]; [−2, −4, −6, 5, 27] = −2[1, 2, 3, 0, −1] + 5[0, 0, 0, 1, 5]; [13, 26, 39, 5, 12] = 13[1, 2, 3, 0, −1] + 5[0, 0, 0, 1, 5]; [2, 4, 6, −1, −7] = 2[1, 2, 3, 0, −1] − 1[0, 0, 0, 1, 5]


(12) Suppose that all main diagonal entries of A are nonzero. Then, for each i, perform the row operation ⟨i⟩ ← (1/a_ii)⟨i⟩ on the matrix A. This will convert A into I_n. We prove the converse by contrapositive. Suppose some diagonal entry a_ii equals 0. Then the ith column of A has all zero entries. No step in the row reduction process will alter this column of zeroes, and so the unique reduced row echelon form for the matrix must contain at least one column of zeroes, and so cannot equal I_n.

(13) (a) Suppose we are performing row operations on an m × n matrix A. Throughout this part, we will write ⟨B⟩_i for the ith row of a matrix B.
For the Type (I) operation R: ⟨i⟩ ← c⟨i⟩: Now R⁻¹ is ⟨i⟩ ← (1/c)⟨i⟩. Clearly, R and R⁻¹ change only the ith row of A. We want to show that R⁻¹R leaves ⟨A⟩_i unchanged. But ⟨R⁻¹(R(A))⟩_i = (1/c)⟨R(A)⟩_i = (1/c)(c⟨A⟩_i) = ⟨A⟩_i.
For the Type (II) operation R: ⟨i⟩ ← c⟨j⟩ + ⟨i⟩: Now R⁻¹ is ⟨i⟩ ← −c⟨j⟩ + ⟨i⟩. Again, R and R⁻¹ change only the ith row of A, and we need to show that R⁻¹R leaves ⟨A⟩_i unchanged. But ⟨R⁻¹(R(A))⟩_i = −c⟨R(A)⟩_j + ⟨R(A)⟩_i = −c⟨A⟩_j + ⟨R(A)⟩_i = −c⟨A⟩_j + c⟨A⟩_j + ⟨A⟩_i = ⟨A⟩_i.
For the Type (III) operation R: ⟨i⟩ ↔ ⟨j⟩: Now, R⁻¹ = R. Also, R changes only the ith and jth rows of A, and these get swapped. Obviously, a second application of R swaps them back to where they were, proving that R is indeed its own inverse.

(b) An approach similar to that used for Type (II) operations in the abridged proof of Theorem 2.3 in the text works just as easily for Type (I) and Type (III) operations. However, here is a different approach: Suppose R is a row operation, and let X satisfy AX = B. Multiplying both sides of this matrix equation by the matrix R(I) yields R(I)AX = R(I)B, implying R(IA)X = R(IB), by Theorem 2.1. Thus, R(A)X = R(B), showing that X is a solution to the new linear system obtained from AX = B after the row operation R is performed.

(14) The zero vector is a solution to AX = O, but it is not a solution for AX = B.

(15) Consider the systems
x + y = 1        and        x − y = 1
x + y = 0                   x − y = 2.
The reduced row echelon matrices for these inconsistent systems are, respectively, [1 1 | 0; 0 0 | 1] and [1 −1 | 0; 0 0 | 1]. Thus, the original augmented matrices are not row equivalent, since their reduced row echelon forms are different.


(16) (a) Plug each of the 5 points in turn into the equation for the conic. This will give a homogeneous system of 5 equations in the 6 variables a, b, c, d, e, and f. This system has a nontrivial solution, by Corollary 2.6.

(b) Yes. In this case, there will be even fewer equations, so Corollary 2.6 again applies.

(17) The fact that the system is homogeneous was used in the proof of Theorem 2.5 to show that the system is consistent (because it has the trivial solution). However, nonhomogeneous systems can be inconsistent, and so the proof of Theorem 2.5 does not work for nonhomogeneous systems.

(18) (a) R(A) and A are row equivalent, and hence have the same reduced row echelon form (by Theorem 2.4), and the same rank.

(b) The reduced row echelon form of A cannot have more nonzero rows than A.

(c) If A has k rows of zeroes, then rank(A) = m − k. But AB has at least k rows of zeroes, so rank(AB) ≤ m − k.

(d) Let A = R_n(⋯(R_2(R_1(D)))⋯), where D is in reduced row echelon form. Then rank(AB) = rank(R_n(⋯(R_2(R_1(D)))⋯)B) = rank(R_n(⋯(R_2(R_1(DB)))⋯)) (by part (2) of Theorem 2.1) = rank(DB) (by repeated use of part (a)) ≤ rank(D) (by part (c)) = rank(A) (by definition of rank).
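The inequality rank(AB) ≤ rank(A) can be tested with a small rank function based on Gaussian elimination over the rationals (a sketch of ours, not the text's algorithm; the matrices are illustrative):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals; the rank is the pivot count
    M = [[Fraction(x) for x in row] for row in M]
    n_rows, n_cols, r = len(M), len(M[0]), 0
    for c in range(n_cols):
        pivot = next((i for i in range(r, n_rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(n_rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # rank 2: row 2 = 2 * row 1
B = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]   # nonsingular, rank 3
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert rank(A) == 2 and rank(B) == 3
assert rank(AB) <= rank(A)   # equality holds here since B is nonsingular
```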

(19) (a) As in the abridged proof of Theorem 2.8 in the text, let a_1, ..., a_m represent the rows of A, and let b_1, ..., b_m represent the rows of B.
For the Type (I) operation R: ⟨i⟩ ← c⟨i⟩: Now b_i = 0a_1 + 0a_2 + ⋯ + ca_i + 0a_{i+1} + ⋯ + 0a_m, and, for k ≠ i, b_k = 0a_1 + 0a_2 + ⋯ + 1a_k + 0a_{k+1} + ⋯ + 0a_m. Hence, each row of B is a linear combination of the rows of A, implying it is in the row space of A.
For the Type (II) operation R: ⟨i⟩ ← c⟨j⟩ + ⟨i⟩: Now b_i = 0a_1 + 0a_2 + ⋯ + ca_j + 0a_{j+1} + ⋯ + a_i + 0a_{i+1} + ⋯ + 0a_m, where our notation assumes i > j. (An analogous argument works for i < j.) And, for k ≠ i, b_k = 0a_1 + 0a_2 + ⋯ + 1a_k + 0a_{k+1} + ⋯ + 0a_m. Hence, each row of B is a linear combination of the rows of A, implying it is in the row space of A.
For the Type (III) operation R: ⟨i⟩ ↔ ⟨j⟩: Now, b_i = 0a_1 + 0a_2 + ⋯ + 1a_j + 0a_{j+1} + ⋯ + 0a_m, b_j = 0a_1 + 0a_2 + ⋯ + 1a_i + 0a_{i+1} + ⋯ + 0a_m, and, for k ≠ i, k ≠ j, b_k = 0a_1 + 0a_2 + ⋯ + 1a_k + 0a_{k+1} + ⋯ + 0a_m. Hence, each row of B is a linear combination of the rows of A, implying it is in the row space of A.

(b) This follows directly from part (a) and Lemma 2.7.

(20) Let k be the number of matrices between A and B when performing row operations to get from A to B. Use a proof by induction on k.
Base Step: If k = 0, then there are no intermediary matrices, and Exercise


19 shows that the row space of B is contained in the row space of A.
Inductive Step: Given the chain

A → D_1 → D_2 → ⋯ → D_k → D_{k+1} → B,

we must show that the row space of B is contained in the row space of A. The inductive hypothesis shows that the row space of D_{k+1} is in the row space of A, since there are only k matrices between A and D_{k+1} in the chain. Thus, each row of D_{k+1} can be expressed as a linear combination of the rows of A. But by Exercise 19, each row of B can be expressed as a linear combination of the rows of D_{k+1}. Hence, by Lemma 2.7, each row of B can be expressed as a linear combination of the rows of A, and therefore is in the row space of A. By Lemma 2.7 again, the row space of B is contained in the row space of A.

(21) (a) Let x_ij represent the jth coordinate of x_i. The corresponding homogeneous system in variables a_1, ..., a_{n+1} is

a_1 x_11 + a_2 x_21 + ⋯ + a_{n+1} x_{n+1,1} = 0
⋮
a_1 x_1n + a_2 x_2n + ⋯ + a_{n+1} x_{n+1,n} = 0,

which has a nontrivial solution for a_1, ..., a_{n+1}, by Corollary 2.6.

(b) Using part (a), we can suppose that a_1 x_1 + ⋯ + a_{n+1} x_{n+1} = 0, with a_i ≠ 0 for some i. Then x_i = −(a_1/a_i)x_1 − ⋯ − (a_{i−1}/a_i)x_{i−1} − (a_{i+1}/a_i)x_{i+1} − ⋯ − (a_{n+1}/a_i)x_{n+1}. Let b_j = −a_j/a_i for 1 ≤ j ≤ n + 1, j ≠ i.

(22) (a) T (b) T (c) F (d) F (e) F (f) T

Section 2.4

(1) The product of each given pair of matrices equals I.

(2) (a) Rank = 2; nonsingular
(b) Rank = 2; singular
(c) Rank = 3; nonsingular
(d) Rank = 4; nonsingular
(e) Rank = 3; singular

(3) No inverse exists for (b), (e), and (f).

(a) [1/10 1/15; 3/10 −2/15]  (c) [−2/21 −5/84; 1/7 −1/28]  (d) [3/11 2/11; 4/11 −1/11]


(4) No inverse exists for (b) and (e).

(a) [1 3 2; −1 0 2; 2 2 −1]

(c) [3/2 0 1/2; −3 1/2 −1/2; −8/3 1/3 −2/3]

(d) [1 −1 1 2; −7 5 −10 −19; −2 1 −2 −4; 3 −2 4 8]

(f) [4 −1 3 −6; −3 1 −5 10; 10 −2 0 1; 1 0 −3 6]

(5) (a) [1/a_11 0; 0 1/a_22]

(b) [1/a_11 0 0; 0 1/a_22 0; 0 0 1/a_33]

(c) the n × n diagonal matrix with main diagonal entries 1/a_11, 1/a_22, ..., 1/a_nn

(6) (a) The general inverse is [cos θ sin θ; −sin θ cos θ].
When θ = π/6, the inverse is [√3/2 1/2; −1/2 √3/2].
When θ = π/4, the inverse is [√2/2 √2/2; −√2/2 √2/2].
When θ = π/2, the inverse is [0 1; −1 0].

(b) The general inverse is [cos θ sin θ 0; −sin θ cos θ 0; 0 0 1].
When θ = π/6, the inverse is [√3/2 1/2 0; −1/2 √3/2 0; 0 0 1].
When θ = π/4, the inverse is [√2/2 √2/2 0; −√2/2 √2/2 0; 0 0 1].
When θ = π/2, the inverse is [0 1 0; −1 0 0; 0 0 1].

(7) (a) Inverse = [2/3 1/3; 7/3 5/3]; solution set = {(3, −5)}


(b) Inverse = [1 0 −3; 8/5 2/5 −17/5; 1/5 −1/5 −4/5]; solution set = {(−2, 4, −3)}

(c) Inverse = [1 −13 −15 5; −3 3 0 −7; −1 2 1 −3; 0 −4 −5 1]; solution set = {(5, −8, 2, −1)}

(8) (a) Consider [0 1; 1 0].

(b) Consider [0 1 0; 1 0 0; 0 0 1].

(c) A = A^{-1} if A is involutory.

(9) (a) A = I_2, B = −I_2, A + B = O_2

(b) A = [1 0; 0 0], B = [0 0; 0 1], A + B = [1 0; 0 1]

(c) If A = B = I_2, then A + B = 2I_2, A^{-1} = B^{-1} = I_2, A^{-1} + B^{-1} = 2I_2, and (A + B)^{-1} = (1/2)I_2, so A^{-1} + B^{-1} ≠ (A + B)^{-1}.

(10) (a) B must be the zero matrix.

(b) No. AB = I_n implies A^{-1} exists (and equals B). Multiply both sides of AC = O_n on the left by A^{-1}.

(11) ..., A^{-9}, A^{-5}, A^{-1}, A^3, A^7, A^{11}, ...

(12) B^{-1}A is the inverse of A^{-1}B.

(13) (A^{-1})^T = (A^T)^{-1} (by Theorem 2.11, part (4)) = A^{-1}.

(14) (a) No step in the row reduction process will alter the column of zeroes, and so the unique reduced row echelon form for the matrix must contain a column of zeroes, and so cannot equal I_n.

(b) Such a matrix will contain a column of all zeroes. Then use part (a).

(c) When pivoting in the ith column of the matrix during row reduction, the (i, i) entry will be nonzero, allowing a pivot in that position. Then, since all entries in that column below the main diagonal are already zero, none of the rows below the ith row are changed in that column. Hence, none of the entries below the ith row are changed when that row is used as the pivot row. Thus, none of the nonzero entries on the main diagonal are affected by the row reduction steps on previous columns (and the matrix stays in upper triangular form throughout the process). Since this is true for each main diagonal position, the matrix row reduces to I_n.


(d) If A is lower triangular with no zeroes on the main diagonal, then A^T is upper triangular with no zeroes on the main diagonal. By part (c), A^T is nonsingular. Hence A = (A^T)^T is nonsingular by part (4) of Theorem 2.11.

(e) Note that when row reducing the ith column of A, all of the following occur:

1. The pivot a_ii is nonzero, so we use a type (I) operation to change it to a 1. This changes only row i.

2. No nonzero targets appear below row i, so all type (II) row operations only change rows above row i.

3. Because of (2), no entries are changed below the main diagonal, and no main diagonal entries are changed by type (II) operations.

When row reducing [A | I_n], we use the exact same row operations we use to reduce A. Since I_n is also upper triangular, fact (3) above shows that all the zeroes below the main diagonal of I_n remain zero when the row operations are applied. Thus, the result of the row operations, namely A^{-1}, is upper triangular.

(15) (a) Part (1): Since AA^{-1} = I, we must have (A^{-1})^{-1} = A.
Part (2): For k > 0, to show (A^k)^{-1} = (A^{-1})^k, we must show that A^k(A^{-1})^k = I. Proceed by induction on k.
Base Step: For k = 1, clearly AA^{-1} = I.
Inductive Step: Assume A^k(A^{-1})^k = I. Prove A^{k+1}(A^{-1})^{k+1} = I. Now, A^{k+1}(A^{-1})^{k+1} = AA^k(A^{-1})^k A^{-1} = AIA^{-1} = AA^{-1} = I. This concludes the proof for k > 0. We now show A^k(A^{-1})^k = I for k ≤ 0.
For k = 0, clearly A^0(A^{-1})^0 = II = I. The case k = −1 is covered by part (1) of the theorem.
For k ≤ −2, (A^k)^{-1} = ((A^{-1})^{-k})^{-1} (by definition) = ((A^{-k})^{-1})^{-1} (by the k > 0 case) = A^{-k} (by part (1)).

(b) To show (A_1 ⋯ A_n)^{-1} = A_n^{-1} ⋯ A_1^{-1}, we must prove that (A_1 ⋯ A_n)(A_n^{-1} ⋯ A_1^{-1}) = I. Use induction on n.
Base Step: For n = 1, clearly A_1 A_1^{-1} = I.
Inductive Step: Assume that (A_1 ⋯ A_n)(A_n^{-1} ⋯ A_1^{-1}) = I. Prove that (A_1 ⋯ A_{n+1})(A_{n+1}^{-1} ⋯ A_1^{-1}) = I.
Now, (A_1 ⋯ A_{n+1})(A_{n+1}^{-1} ⋯ A_1^{-1}) = (A_1 ⋯ A_n)A_{n+1}A_{n+1}^{-1}(A_n^{-1} ⋯ A_1^{-1}) = (A_1 ⋯ A_n)I(A_n^{-1} ⋯ A_1^{-1}) = (A_1 ⋯ A_n)(A_n^{-1} ⋯ A_1^{-1}) = I.

(16) We must prove that (cA)((1/c)A^{-1}) = I. But, (cA)((1/c)A^{-1}) = c(1/c)AA^{-1} = 1I = I.

(17) (a) Let p = −s, q = −t. Then p, q > 0. Now, A^{s+t} = A^{−(p+q)} = (A^{-1})^{p+q} = (A^{-1})^p (A^{-1})^q (by Theorem 1.15) = A^{-p} A^{-q} = A^s A^t.


(b) Let q = −t. Then (A^s)^t = (A^s)^{−q} = ((A^s)^{-1})^q = ((A^{-1})^s)^q (by Theorem 2.11, part (2)) = (A^{-1})^{sq} (by Theorem 1.15) = A^{−sq} (by Theorem 2.11, part (2)) = A^{s(−q)} = A^{st}. Similarly, (A^s)^t = ((A^{-1})^s)^q (as before) = ((A^{-1})^q)^s (by Theorem 1.15) = (A^{−q})^s = (A^t)^s.

(18) First assume AB = BA. Then (AB)^2 = ABAB = A(BA)B = A(AB)B = A^2 B^2.
Conversely, if (AB)^2 = A^2 B^2, then ABAB = AABB ⇒ A^{-1}ABABB^{-1} = A^{-1}AABBB^{-1} ⇒ BA = AB.

(19) If (AB)^q = A^q B^q for all q ≥ 2, use q = 2 and the proof in Exercise 18 to show BA = AB.
Conversely, we need to show that BA = AB ⇒ (AB)^q = A^q B^q for all q ≥ 2. First, we prove that BA = AB ⇒ AB^q = B^q A for all q ≥ 2. We use induction on q.
Base Step (q = 2): AB^2 = A(BB) = (AB)B = (BA)B = B(AB) = B(BA) = (BB)A = B^2 A.
Inductive Step: AB^{q+1} = A(B^q B) = (AB^q)B = (B^q A)B (by the inductive hypothesis) = B^q(AB) = B^q(BA) = (B^q B)A = B^{q+1} A.
Now we use this "lemma" (BA = AB ⇒ AB^q = B^q A for all q ≥ 2) to prove BA = AB ⇒ (AB)^q = A^q B^q for all q ≥ 2. Again, we proceed by induction on q.
Base Step (q = 2): (AB)^2 = (AB)(AB) = A(BA)B = A(AB)B = A^2 B^2.
Inductive Step: (AB)^{q+1} = (AB)^q(AB) = (A^q B^q)(AB) (by the inductive hypothesis) = A^q(B^q A)B = A^q(AB^q)B (by the lemma) = (A^q A)(B^q B) = A^{q+1} B^{q+1}.

(20) Base Step (k = 0): I_n = (A^1 − I_n)(A − I_n)^{-1}.
Inductive Step: Assume I_n + A + A^2 + ⋯ + A^k = (A^{k+1} − I_n)(A − I_n)^{-1}, for some k. Prove I_n + A + A^2 + ⋯ + A^k + A^{k+1} = (A^{k+2} − I_n)(A − I_n)^{-1}.
Now, I_n + A + A^2 + ⋯ + A^k + A^{k+1} = (I_n + A + A^2 + ⋯ + A^k) + A^{k+1} = (A^{k+1} − I_n)(A − I_n)^{-1} + A^{k+1}(A − I_n)(A − I_n)^{-1} (where the first term is obtained from the inductive hypothesis) = ((A^{k+1} − I_n) + A^{k+1}(A − I_n))(A − I_n)^{-1} = (A^{k+2} − I_n)(A − I_n)^{-1}.

(21) (a) Suppose that n > k. Then, by Corollary 2.6, there is a nontrivial X such that BX = O. Hence, (AB)X = A(BX) = AO = O. But, (AB)X = I_n X = X. Therefore, X = O, which gives a contradiction.

(b) Suppose that k > n. Then, by Corollary 2.6, there is a nontrivial Y such that AY = O. Hence, (BA)Y = B(AY) = BO = O. But, (BA)Y = I_k Y = Y. Therefore, Y = O, which gives a contradiction.

(c) Parts (a) and (b) combine to prove that n = k. Hence A and B are both square matrices. They are nonsingular with A^{-1} = B by the definition of a nonsingular matrix, since AB = I_n = BA.


(22) (a) F (b) T (c) T (d) F (e) F (f) T

Chapter 2 Review Exercises

(1) (a) x_1 = −6, x_2 = 8, x_3 = −5
(b) No solutions
(c) {[−5 − c + e, 1 − 2c − e, c, 1 + 2e, e] | c, e ∈ ℝ}

(2) y = −2x^3 + 5x^2 − 6x + 3

(3) (a) No. Entries above pivots need to be zero.
(b) No. Rows 3 and 4 should be switched.

(4) a = 4, b = 7, c = 4, d = 6

(5) (i) x_1 = −308, x_2 = −18, x_3 = −641, x_4 = 108
(ii) x_1 = −29, x_2 = −19, x_3 = −88, x_4 = 36
(iii) x_1 = 139, x_2 = 57, x_3 = 316, x_4 = −30

(6) Corollary 2.6 applies since there are more variables than equations.

(7) (a) (I): ⟨3⟩ ← 6⟨3⟩
(b) (II): ⟨2⟩ ← 3⟨4⟩ + ⟨2⟩
(c) (III): ⟨2⟩ ↔ ⟨3⟩

(8) (a) rank(A) = 2, rank(B) = 4, rank(C) = 3

(b) AX = 0 and CX = 0: infinite number of solutions; BX = 0: one solution

(9) rref(A) = rref(B) = [1 0 −3 0 2; 0 1 2 0 −4; 0 0 0 1 3]. Therefore, A and B are row equivalent by an argument similar to that in Exercise 3(b) of Section 2.3.

(10) (a) Yes. [−34, 29, −21] = 5[2, −3, −1] + 2[5, −2, 1] − 6[9, −8, 3]

(b) Yes. [−34, 29, −21] is a linear combination of the rows of the matrix.

(11) A^{-1} = [−2/9 1/9; −1/6 1/3]

(12) (a) Nonsingular. A^{-1} = [−1/2 5/2 2; −1 6 4; 1 −5 −3]

(b) Singular


(13) This is true by the Inverse Method, which is justified in Section 2.4.

(14) No, by part (1) of Theorem 2.15.

(15) The inverse of the coefficient matrix is [3 −1 −4; 2 −1 −3; 1 −2 −2]. The solution set is x_1 = −27, x_2 = −21, x_3 = −1.

(16) (a) Because B is nonsingular, B is row equivalent to I_m (see Exercise 13). Thus, there is a sequence of row operations R_1, ..., R_k such that R_1(⋯(R_k(B))⋯) = I_m. Hence, by part (2) of Theorem 2.1, R_1(⋯(R_k(BA))⋯) = R_1(⋯(R_k(B))⋯)A = I_m A = A. Therefore, BA is row equivalent to A. Thus BA and A have the same reduced row echelon form, and so have the same rank.

(b) By part (d) of Exercise 18 in Section 2.3, rank(AC) ≤ rank(A). Similarly, rank(A) = rank((AC)C^{-1}) ≤ rank(AC). Hence, rank(AC) = rank(A).

(17) (a) F

(b) F

(c) F

(d) T

(e) F

(f) T

(g) T

(h) T

(i) F

(j) T

(k) F

(l) F

(m) T

(n) T

(o) F

(p) T

(q) T

(r) T

(s) T


Chapter 3

Section 3.1

(1) (a) -17

(b) 6

(c) 0

(d) 1

(e) -108

(f) 156

(g) -40

(h) -60

(i) 0

(j) -3

(2) (a) |4 3; -2 4| = 22

(b) |0 2 -3; 1 4 2; 4 -1 1| = 65

(c) |-3 0 5; 2 -1 4; 6 4 0| = 118

(3) (a) (-1)^{2+2} |4 -3; 9 -7| = -1

(b) (-1)^{2+3} |-9 6; 4 3| = 51

(c) (-1)^{4+3} |-5 2 13; -8 2 22; -6 -3 -16| = 222

(d) (-1)^{1+2} |x-4 x-3; x-1 x+2| = -2x + 11

(4) Same answers as Exercise 1.
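The cofactor-expansion technique underlying Exercises 2 and 3 can be sketched as a short recursive routine; the three matrices below are those of Exercise 2(a)-(c), so the computed values should match the answers given above.

```python
from sympy import Matrix

def det_by_cofactors(M):
    """Determinant via cofactor expansion along the first row."""
    n = M.rows
    if n == 1:
        return M[0, 0]
    total = 0
    for j in range(n):
        minor = M.minor_submatrix(0, j)   # delete row 0 and column j
        total += (-1) ** j * M[0, j] * det_by_cofactors(minor)
    return total

vals = [det_by_cofactors(Matrix([[4, 3], [-2, 4]])),
        det_by_cofactors(Matrix([[0, 2, -3], [1, 4, 2], [4, -1, 1]])),
        det_by_cofactors(Matrix([[-3, 0, 5], [2, -1, 4], [6, 4, 0]]))]
```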

(5) (a) 0 (b) -251 (c) -60 (d) 352

(6) a11a22a33a44 + a11a23a34a42 + a11a24a32a43 + a12a21a34a43 + a12a23a31a44 + a12a24a33a41 + a13a21a32a44 + a13a22a34a41 + a13a24a31a42 + a14a21a33a42 + a14a22a31a43 + a14a23a32a41 - a11a22a34a43 - a11a23a32a44 - a11a24a33a42 - a12a21a33a44 - a12a23a34a41 - a12a24a31a43 - a13a21a34a42 - a13a22a31a44 - a13a24a32a41 - a14a21a32a43 - a14a22a33a41 - a14a23a31a42

(7) Let A = [1 1; 1 1], and let B = [1 0; 0 1].

(8) (a) Perform the basketweaving method.

(b) (a × b)·a = (a2b3 - a3b2)a1 + (a3b1 - a1b3)a2 + (a1b2 - a2b1)a3 = a1a2b3 - a1a3b2 + a2a3b1 - a1a2b3 + a1a3b2 - a2a3b1 = 0. Similarly, (a × b)·b = 0.
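The identity verified algebraically in 8(b) can also be spot-checked numerically; the vectors below are random integer vectors, not ones from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(-5, 6, size=3)
b = rng.integers(-5, 6, size=3)

c = np.cross(a, b)             # c = a x b, componentwise as in Exercise 8(a)
# (a x b).a = 0 and (a x b).b = 0, as shown algebraically above
orth_a = np.dot(c, a)
orth_b = np.dot(c, b)
```

Because the entries are integers, both dot products are exactly zero, not merely small.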

(9) (a) 7 (b) 18 (c) 12 (d) 0

(10) Let x = [x1, x2] and y = [y1, y2]. Then proj_x y = ((x·y)/||x||^2) x = (1/||x||^2)[(x1y1 + x2y2)x1, (x1y1 + x2y2)x2]. Hence y - proj_x y = (1/||x||^2)(||x||^2 y) - proj_x y
= (1/||x||^2)[(x1^2 + x2^2)y1, (x1^2 + x2^2)y2] - (1/||x||^2)[(x1y1 + x2y2)x1, (x1y1 + x2y2)x2]
= (1/||x||^2)[x1^2 y1 + x2^2 y1 - x1^2 y1 - x1x2y2, x1^2 y2 + x2^2 y2 - x1x2y1 - x2^2 y2]
= (1/||x||^2)[x2^2 y1 - x1x2y2, x1^2 y2 - x1x2y1]
= (1/||x||^2)[x2(x2y1 - x1y2), x1(x1y2 - x2y1)]
= ((x1y2 - x2y1)/||x||^2)[-x2, x1]. Thus, ||x|| ||y - proj_x y|| = ||x|| (|x1y2 - x2y1|/||x||^2) sqrt(x2^2 + x1^2) = |x1y2 - x2y1| = absolute value of |x1 x2; y1 y2|.

(11) (a) 18 (b) 7 (c) 63 (d) 8

(12) Note that |x1 x2 x3; y1 y2 y3; z1 z2 z3| = x1y2z3 + x2y3z1 + x3y1z2 - x3y2z1 - x1y3z2 - x2y1z3 = (x2y3 - x3y2)z1 + (x3y1 - x1y3)z2 + (x1y2 - x2y1)z3 = (x × y)·z (from the definition in Exercise 8). Also note that the formula given in the hint for this exercise for the area of the parallelogram equals ||x × y||. Hence, the volume of the parallelepiped equals ||proj_{(x×y)} z|| ||x × y|| = (|z·(x × y)| / ||x × y||) ||x × y|| = |z·(x × y)| = absolute value of |x1 x2 x3; y1 y2 y3; z1 z2 z3|.

(13) (a) Base Step: If n = 1, then A = [a11], so cA = [ca11], and |cA| = ca11 = c|A|.
Inductive Step: Assume that if A is an n × n matrix and c is a scalar, then |cA| = c^n |A|. Prove that if A is an (n+1) × (n+1) matrix and c is a scalar, then |cA| = c^{n+1}|A|. Let B = cA. Then |B| = b_{n+1,1}B_{n+1,1} + b_{n+1,2}B_{n+1,2} + ... + b_{n+1,n}B_{n+1,n} + b_{n+1,n+1}B_{n+1,n+1}. Each B_{n+1,i} = (-1)^{n+1+i}|B_{n+1,i}| = c^n(-1)^{n+1+i}|A_{n+1,i}| (by the inductive hypothesis, since B_{n+1,i} is an n × n matrix) = c^n A_{n+1,i}. Thus, |cA| = |B| = ca_{n+1,1}(c^n A_{n+1,1}) + ca_{n+1,2}(c^n A_{n+1,2}) + ... + ca_{n+1,n}(c^n A_{n+1,n}) + ca_{n+1,n+1}(c^n A_{n+1,n+1}) = c^{n+1}(a_{n+1,1}A_{n+1,1} + a_{n+1,2}A_{n+1,2} + ... + a_{n+1,n}A_{n+1,n} + a_{n+1,n+1}A_{n+1,n+1}) = c^{n+1}|A|.

(b) |2A| = 2^3 |A| (since A is a 3 × 3 matrix) = 8|A|.
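The result |cA| = c^n |A| proved in (13) is easy to spot-check numerically; the 3 × 3 matrix below is arbitrary, not one from the text.

```python
import numpy as np

A = np.array([[1., 2, 0],
              [3, -1, 4],
              [0, 5, 2]])          # any 3 x 3 matrix will do

c = 2.0
lhs = np.linalg.det(c * A)
rhs = c ** 3 * np.linalg.det(A)    # |2A| = 2^3 |A| = 8|A|, as in part (b)
```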


(14) |x -1 0 0; 0 x -1 0; 0 0 x -1; a0 a1 a2 a3+x| = a0(-1)^{4+1} |-1 0 0; x -1 0; 0 x -1| + a1(-1)^{4+2} |x 0 0; 0 -1 0; 0 x -1| + a2(-1)^{4+3} |x -1 0; 0 x 0; 0 0 -1| + (a3 + x)(-1)^{4+4} |x -1 0; 0 x -1; 0 0 x|. The four 3 × 3 determinants in the previous equation can be calculated using "basketweaving" as -1, x, -x^2, and x^3, respectively. Therefore, the original 4 × 4 determinant equals (-a0)(-1) + (a1)(x) + (-a2)(-x^2) + (a3 + x)(x^3) = a0 + a1x + a2x^2 + a3x^3 + x^4.

(15) (a) x = -5 or x = 2

(b) x = -2 or x = -1

(c) x = 3, x = 1, or x = 2

(16) (a) Use "basketweaving" and factor.

(b) 20

(17) (a) Area of T = (sqrt(3) s^2)/4.

(b) Suppose one side of T extends from (a, b) to (c, d), where (a, b) and (c, d) are lattice points. Then s^2 = (c - a)^2 + (d - b)^2, an integer. Hence, s^2/4 is rational, and the area of T = (sqrt(3) s^2)/4 is irrational.

(c) Suppose the vertices of T are lattice points X = (a, b), Y = (c, d), Z = (e, f). Then side XY is expressed using vector [c - a, d - b], and side XZ is expressed using vector [e - a, f - b]. Hence, the area of T = (1/2)(area of the parallelogram formed by [c - a, d - b] and [e - a, f - b]) = (1/2) |c-a d-b; e-a f-b|.

(d) (1/2) |c-a d-b; e-a f-b| = (1/2)((c - a)(f - b) - (d - b)(e - a)), which is 1/2 times the difference of two products of integers, and hence, is rational.

(18) (a) F (b) T (c) F (d) F (e) T

Section 3.2

(1) (a) (II): ⟨1⟩ ← -3⟨2⟩ + ⟨1⟩; determinant = 1

(b) (III): ⟨2⟩ ↔ ⟨3⟩; determinant = -1


(c) (I): ⟨3⟩ ← -4⟨3⟩; determinant = -4

(d) (II): ⟨2⟩ ← 2⟨1⟩ + ⟨2⟩; determinant = 1

(e) (I): ⟨1⟩ ← (1/2)⟨1⟩; determinant = 1/2

(f) (III): ⟨1⟩ ↔ ⟨2⟩; determinant = -1

(2) (a) 30

(b) -3

(c) -4

(d) 3

(e) 35

(f) 20

(3) (a) Determinant = -2; nonsingular

(b) Determinant = 1; nonsingular

(c) Determinant = -79; nonsingular

(d) Determinant = 0; singular

(4) (a) Determinant = -1; the system has only the trivial solution.

(b) Determinant = 0; the system has nontrivial solutions. (One nontrivial solution is (1, 7, 3).)

(c) Determinant = -42; the system has only the trivial solution.
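The determinant criterion used throughout Exercise 4 (nonzero determinant iff the homogeneous system has only the trivial solution) can be packaged as a small helper. The two matrices below are illustrative stand-ins, not the book's systems.

```python
import numpy as np

def homogeneous_solution_type(A):
    # |A| != 0  <=>  AX = 0 has only the trivial solution X = 0
    if np.isclose(np.linalg.det(A), 0.0):
        return "nontrivial solutions"
    return "trivial only"

invertible = np.array([[2., 1], [1, 1]])   # det = 1
singular = np.array([[1., 2], [2, 4]])     # det = 0; e.g. X = (2, -1) works

t1 = homogeneous_solution_type(invertible)
t2 = homogeneous_solution_type(singular)
```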

(5) Use Theorem 3.2.

(6) Use row operations to reverse the order of the rows of A. This can be done using 3 type (III) operations, an odd number. (These operations are ⟨1⟩ ↔ ⟨6⟩, ⟨2⟩ ↔ ⟨5⟩, and ⟨3⟩ ↔ ⟨4⟩.) Hence, by part (3) of Theorem 3.3, |A| = -a16a25a34a43a52a61.

(7) If |A| ≠ 0, A^{-1} exists. Hence AB = AC => A^{-1}AB = A^{-1}AC => B = C.

(8) (a) Base Step: When n = 1, A = [a11]. Then B = R(A) (with i = 1) = [ca11]. Hence, |B| = ca11 = c|A|.

(b) Inductive Hypothesis: Assume that if A is an n × n matrix, R is the row operation ⟨i⟩ ← c⟨i⟩, and B = R(A), then |B| = c|A|.
Note: In parts (c) and (d), we must prove that if A is an (n+1) × (n+1) matrix, R is the row operation ⟨i⟩ ← c⟨i⟩, and B = R(A), then |B| = c|A|.

(c) Inductive Step when i ≠ n + 1: |B| = b_{n+1,1}B_{n+1,1} + ... + b_{n+1,n}B_{n+1,n} + b_{n+1,n+1}B_{n+1,n+1} = b_{n+1,1}(-1)^{n+1+1}|B_{n+1,1}| + ... + b_{n+1,n}(-1)^{n+1+n}|B_{n+1,n}| + b_{n+1,n+1}(-1)^{n+1+n+1}|B_{n+1,n+1}|. But since |B_{n+1,i}| is the determinant of the n × n matrix A_{n+1,i} after the row operation ⟨i⟩ ← c⟨i⟩ has been performed (since i ≠ n + 1), the inductive hypothesis shows that each |B_{n+1,i}| = c|A_{n+1,i}|. Since each b_{n+1,i} = a_{n+1,i}, we have |B| = ca_{n+1,1}(-1)^{n+1+1}|A_{n+1,1}| + ... + ca_{n+1,n}(-1)^{n+1+n}|A_{n+1,n}| + ca_{n+1,n+1}(-1)^{n+1+n+1}|A_{n+1,n+1}| = c|A|.


(d) Inductive Step when i = n + 1: Again, |B| = b_{n+1,1}B_{n+1,1} + ... + b_{n+1,n}B_{n+1,n} + b_{n+1,n+1}B_{n+1,n+1}. Since i = n + 1, each b_{n+1,j} = ca_{n+1,j}, while each B_{n+1,j} = A_{n+1,j} (since the first n rows of A are left unchanged by the row operation in this case). Hence, |B| = c(a_{n+1,1}A_{n+1,1}) + ... + c(a_{n+1,n}A_{n+1,n}) + c(a_{n+1,n+1}A_{n+1,n+1}) = c|A|.

(9) (a) In order to add a multiple of one row to another, we need at least two rows. Hence n = 2 here.

(b) Let A = [a11 a12; a21 a22]. Then the row operation ⟨1⟩ ← c⟨2⟩ + ⟨1⟩ produces B = [ca21 + a11, ca22 + a12; a21, a22], and |B| = (ca21 + a11)a22 - (ca22 + a12)a21, which reduces to a11a22 - a12a21 = |A|.

(c) Let A = [a11 a12; a21 a22]. Then the row operation ⟨2⟩ ← c⟨1⟩ + ⟨2⟩ produces B = [a11, a12; ca11 + a21, ca12 + a22], and |B| = a11(ca12 + a22) - a12(ca11 + a21), which reduces to a11a22 - a12a21 = |A|.

(10) (a) Inductive Hypothesis: Assume that if A is an (n-1) × (n-1) matrix and R is a type (II) row operation, then |R(A)| = |A|.
Remark: We must prove that if A is an n × n matrix, and R is a type (II) row operation, then |R(A)| = |A|.
In what follows, let R: ⟨i⟩ ← c⟨j⟩ + ⟨i⟩ (so, i ≠ j).

(b) Inductive Step when i ≠ n, j ≠ n: Let B = R(A). Then |B| = b_{n1}B_{n1} + ... + b_{nn}B_{nn}. But since the nth row of A is not affected by R, each b_{ni} = a_{ni}. Also, each B_{ni} = (-1)^{n+i}|B_{ni}|, and |B_{ni}| is the determinant of the (n-1) × (n-1) matrix R(A_{ni}). But by the inductive hypothesis, each |B_{ni}| = |A_{ni}|. Hence, |B| = |A|.

(c) Since i ≠ j, we can assume j ≠ n. Let i = n, and let k ≠ n.
The nth row of R1(R2(R1(A))) = kth row of R2(R1(A)) = kth row of R1(A) + c(jth row of R1(A)) = nth row of A + c(jth row of A) = nth row of R(A).
The kth row of R1(R2(R1(A))) = nth row of R2(R1(A)) = nth row of R1(A) (since k ≠ n) = kth row of A = kth row of R(A).
The lth row (where l ≠ k, n) of R1(R2(R1(A))) = lth row of R2(R1(A)) (since l ≠ k, n) = lth row of R1(A) (since l ≠ k) = lth row of A (since l ≠ k, n) = lth row of R(A) (since l ≠ n = i).

(d) Inductive Step when i = n: From part (c), we have: |R(A)| = |R1(R2(R1(A)))| = -|R2(R1(A))| (by part (3) of Theorem 3.3) = -|R1(A)| (by part (b), since k, j ≠ n = i) = -(-|A|) (by part (3) of Theorem 3.3) = |A|.

(e) Since i ≠ j, we can assume i ≠ n. Now, the ith row of R1(R3(R1(A))) = ith row of R3(R1(A)) = ith row of R1(A) + c(kth row of R1(A)) = ith row of A + c(nth row of A) (since i ≠ k) = ith row of R(A) (since j = n).
The kth row of R1(R3(R1(A))) = nth row of R3(R1(A)) = nth row of R1(A) (since i ≠ n) = kth row of A = kth row of R(A) (since i ≠ k).
The nth row of R1(R3(R1(A))) = kth row of R3(R1(A)) = kth row of R1(A) (since k ≠ i) = nth row of A = nth row of R(A) (since i ≠ n).
The lth row (where l ≠ i, k, n) of R1(R3(R1(A))) = lth row of R3(R1(A)) = lth row of R1(A) = lth row of A = lth row of R(A).

(f) Inductive Step when j = n: |R(A)| = |R1(R3(R1(A)))| = -|R3(R1(A))| (by part (3) of Theorem 3.3) = -|R1(A)| (by part (b)) = -(-|A|) (by part (3) of Theorem 3.3) = |A|.

(11) (a) Suppose the ith row of A is a row of all zeroes. Let R be the row operation ⟨i⟩ ← 2⟨i⟩. Then |R(A)| = 2|A| by part (1) of Theorem 3.3. But also |R(A)| = |A| since R has no effect on A. Hence, 2|A| = |A|, so |A| = 0.

(b) If A contains a row of zeroes, rank(A) < n (since the reduced row echelon form of A also contains at least one row of zeroes). Hence by Corollary 3.6, |A| = 0.

(12) (a) Let rows i, j of A be identical. Let R be the row operation ⟨i⟩ ↔ ⟨j⟩. Then, R(A) = A, and so |R(A)| = |A|. But by part (3) of Theorem 3.3, |R(A)| = -|A|. Hence, |A| = -|A|, and so |A| = 0.

(b) If two rows of A are identical, then subtracting one of these rows from the other shows that A is row equivalent to a matrix with a row of zeroes. Use part (b) of Exercise 11.

(13) (a) Suppose the ith row of A = c(jth row of A). Then the row operation R: ⟨i⟩ ← -c⟨j⟩ + ⟨i⟩ shows A is row equivalent to a matrix with a row of zeroes. Use part (a) of Exercise 11.

(b) Use the given hint, Theorem 2.5 and Corollary 3.6.

(14) (a) Let D = [A C; O B]. If |A| = 0, then, when row reducing D into upper triangular form, one of the first m columns will not be a pivot column. Similarly, if |A| ≠ 0 and |B| = 0, one of the last (n - m) columns will not be a pivot column. In either case, some column will not contain a pivot, and so rank(D) < n. Hence, in both cases, |D| = 0 = |A||B|.
Now suppose that both |A| and |B| are nonzero. Then the row operations for the first m pivots used to put D into upper triangular form will be the same as the row operations for the first m pivots used to put A into upper triangular form (putting 1's along the entire main diagonal of A). These operations convert D into the matrix [U G; O B], where U is an m × m upper triangular matrix with 1's on the main diagonal, and G is some m × (n - m) matrix. Hence, at this point in the computation of |D|, the multiplicative factor P that we use to calculate the determinant (as in Example 5 in the text) equals 1/|A|.
The remaining (n - m) pivots to put D into upper triangular form will involve the same row operations needed to put B into upper triangular form (with 1's all along the main diagonal). Clearly, this means that the multiplicative factor P must be multiplied by 1/|B|, and so the new value of P equals 1/(|A||B|). Since the final matrix is in upper triangular form, its determinant is 1. Multiplying this by the reciprocal of the final value of P (as in Example 5) yields |D| = |A||B|.

(b) (18 - 18)(20 - 3) = 0(17) = 0

(15) Follow the hint in the text. If A is row equivalent to B in reduced row echelon form, follow the method in Example 5 in the text to find determinants using row reduction by maintaining a multiplicative factor P throughout the process. But, since the same rules work for f with regard to row operations as for the determinant, the same factor P will be produced for f. If B = I_n, then f(B) = 1 = |B|. Since the factor P is the same for f and the determinant, f(A) = (1/P)f(B) = (1/P)|B| = |A|. If, instead, B ≠ I_n, then the nth row of B will be all zeroes. Perform the operation (I): ⟨n⟩ ← 2⟨n⟩ on B, which yields B again (doubling a zero row changes nothing). Then, f(B) = 2f(B), implying f(B) = 0. Hence, f(A) = (1/P)(0) = 0.

(16) (a) F (b) T (c) F (d) F (e) F (f) T

Section 3.3

(1) (a) a31(-1)^{3+1}|A31| + a32(-1)^{3+2}|A32| + a33(-1)^{3+3}|A33| + a34(-1)^{3+4}|A34|

(b) a11(-1)^{1+1}|A11| + a12(-1)^{1+2}|A12| + a13(-1)^{1+3}|A13| + a14(-1)^{1+4}|A14|

(c) a14(-1)^{1+4}|A14| + a24(-1)^{2+4}|A24| + a34(-1)^{3+4}|A34| + a44(-1)^{4+4}|A44|

(d) a11(-1)^{1+1}|A11| + a21(-1)^{2+1}|A21| + a31(-1)^{3+1}|A31| + a41(-1)^{4+1}|A41|

(2) (a) -76 (b) 465 (c) 102 (d) 410

(3) In each part, the answer is given in the form Adjoint, Determinant, Inverse.

(a) [ -6 9 3; 6 -42 0; -4 8 2 ], -6, [ 1 -3/2 -1/2; -1 7 0; 2/3 -4/3 -1/3 ]


(b) [ 3 18 -6; -15 -65 20; 15 60 -15 ], 15, [ 1/5 6/5 -2/5; -1 -13/3 4/3; 1 4 -1 ]

(c) [ -3 0 3 -3; 0 0 0 0; -3 0 3 -3; 6 0 -6 6 ], 0, no inverse

(d) [ 6 0 0; 9 -12 0; 0 0 -8 ], -24, [ -1/4 0 0; -3/8 1/2 0; 0 0 1/3 ]

(e) [ 3 -1 -2; 0 -3 -6; 0 0 -9 ], 9, [ 1/3 -1/9 -2/9; 0 -1/3 -2/3; 0 0 -1 ]

(f) [ 2 2 -2 1; 0 -4 4 -2; 0 0 4 -2; 0 0 0 -2 ], 4, [ 1/2 1/2 -1/2 1/4; 0 -1 1 -1/2; 0 0 1 -1/2; 0 0 0 -1/2 ]

(4) (a) {(-4, 3, -7)}

(b) {(5, -1, -3)}

(c) {(6, 4, -5)}

(d) {(4, -1, -3, 6)}
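The Adjoint/Determinant/Inverse pipeline of Exercise 3 can be verified with exact arithmetic. The original matrix of 3(a) is not reprinted in this key, so it is recovered here from the stated adjoint and determinant via the identity adj(A) = |A| A^{-1}; everything else follows from the answer above.

```python
from sympy import Matrix

# Adjoint and determinant as stated in answer 3(a).
adj = Matrix([[-6,   9, 3],
              [ 6, -42, 0],
              [-4,   8, 2]])
det = -6

# adj(A) = |A| A^{-1}  =>  A = |A| (adj A)^{-1}
A = det * adj.inv()

ok_adj = (A.adjugate() == adj)
ok_det = (A.det() == det)
ok_inv = (A.inv() == adj / det)    # Corollary 3.12: A^{-1} = (1/|A|) adj A
```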

(5) (a) If A is nonsingular, (A^T)^{-1} = (A^{-1})^T by part (4) of Theorem 2.11. For the converse, A^{-1} = ((A^T)^{-1})^T.

(b) |AB| = |A||B| = |B||A| = |BA|.

(6) (a) Use the fact that |AB| = |A||B|.

(b) |AB| = |-BA| => |AB| = (-1)^n|BA| (by Corollary 3.4) => |A||B| = (-1)|B||A| (since n is odd). Hence, either |A| = 0 or |B| = 0.

(7) (a) |AA^T| = |A||A^T| = |A||A| = |A|^2 ≥ 0.

(b) |AB^T| = |A||B^T| = |A^T||B|

(8) (a) A^T = -A, so |A| = |A^T| = |-A| = (-1)^n|A| (by Corollary 3.4) = (-1)|A| (since n is odd). Hence |A| = 0.

(b) Consider A = [0 -1; 1 0].


(9) (a) I_n^T = I_n = I_n^{-1}.

(b) Consider [ 0 1 0; 1 0 0; 0 0 1 ].

(c) |A^T| = |A^{-1}| => |A| = 1/|A| => |A|^2 = 1 => |A| = ±1.

(10) Note that B = [ 9 0 -3; 3 2 -1; -6 0 1 ] has a determinant equal to -18. Hence, if B = A^2, then |A|^2 is negative, a contradiction.

(11) (a) Base Step: k = 2: |A1A2| = |A1||A2| by Theorem 3.7.
Inductive Step: Assume |A1A2···Ak| = |A1||A2|···|Ak|, for any n × n matrices A1, A2, ..., Ak. Prove |A1A2···AkA_{k+1}| = |A1||A2|···|Ak||A_{k+1}|, for any n × n matrices A1, A2, ..., Ak, A_{k+1}. But, |A1A2···AkA_{k+1}| = |(A1A2···Ak)A_{k+1}| = |A1A2···Ak||A_{k+1}| (by Theorem 3.7) = |A1||A2|···|Ak||A_{k+1}| by the inductive hypothesis.

(b) Use part (a) with A_i = A, for 1 ≤ i ≤ k. Or, for an induction proof:
Base Step (k = 1): Obvious.
Inductive Step: |A^{k+1}| = |A^k A| = |A^k||A| = |A|^k|A| = |A|^{k+1}

(c) Use part (b): |A|^k = |A^k| = |O_n| = 0, and so |A| = 0. Or, for an induction proof:
Base Step (k = 1): Obvious.
Inductive Step: Assume A^{k+1} = O_n. If A is singular, then |A| = 0, and we are done. But A cannot be nonsingular, for if it is, A^{k+1} = O_n => A^{k+1}A^{-1} = O_n A^{-1} => A^k = O_n => |A| = 0 by the inductive hypothesis. But this implies A is singular, a contradiction.

(12) (a) |A^n| = |A|^n. If |A| = 0, then |A|^n = 0 is not prime. Similarly, if |A| = ±1, then |A|^n = ±1 and is not prime. If the absolute value of |A| is greater than 1, then |A|^n has it as a positive proper divisor, for n ≥ 2.

(b) |A|^n = |A^n| = |I| = 1. Since |A| is an integer, |A| = ±1. If n is odd, then |A| ≠ -1, so |A| = 1.

(13) (a) Let P^{-1}AP = B. Suppose P is an n × n matrix. Then P^{-1} is also an n × n matrix. Hence P^{-1}AP is not defined unless A is also an n × n matrix, and thus, the product B = P^{-1}AP is also n × n.

(b) For example, consider B = [2 1; 1 1]^{-1} A [2 1; 1 1] = [-6 -4; 16 11], or B = [2 -5; -1 3]^{-1} A [2 -5; -1 3] = [10 -12; 4 -5].

(c) A = I_n A I_n = I_n^{-1} A I_n


(d) If P^{-1}AP = B, then PP^{-1}APP^{-1} = PBP^{-1} => A = PBP^{-1}. Hence (P^{-1})^{-1}BP^{-1} = A.

(e) A similar to B implies there is a matrix P such that A = P^{-1}BP. B similar to C implies there is a matrix Q such that B = Q^{-1}CQ. Hence, A = P^{-1}BP = P^{-1}(Q^{-1}CQ)P = (P^{-1}Q^{-1})C(QP) = (QP)^{-1}C(QP), and so A is similar to C.

(f) A similar to I_n implies there is a matrix P such that A = P^{-1}I_nP. Hence, A = P^{-1}I_nP = P^{-1}P = I_n.

(g) |B| = |P^{-1}AP| = |P^{-1}||A||P| = |P^{-1}||P||A| = (1/|P|)|P||A| = |A|.

(14) adj(B) adj(A) = (|A||B|)(AB)^{-1} = adj(AB)

(15) Use the fact that A^{-1} = (1/|A|) adj A (by Corollary 3.12), which equals ± adj A, since |A| = ±1. The entries of adj A are sums and products of the entries of A. Since the entries of A are all integers, the entries of ± adj A are also integers.

(16) Use the fact that A (adj A) = |A| I_n (from Theorem 3.11) and the fact that A is singular iff |A| = 0.

(17) (a) We must show that the (i,j) entry of the adjoint of A^T equals the (i,j) entry of (adj A)^T. But, the (i,j) entry of the adjoint of A^T = (j,i) cofactor of A^T = (-1)^{j+i}|(A^T)_{ji}| = (-1)^{i+j}|A_{ij}| = (j,i) entry of adj A = (i,j) entry of (adj A)^T.

(b) The (i,j) entry of the adjoint of kA = (j,i) cofactor of kA = (-1)^{j+i}|(kA)_{ji}| = (-1)^{j+i} k^{n-1}|A_{ji}| (by Corollary 3.4, since A_{ji} is an (n-1) × (n-1) matrix) = k^{n-1}((j,i) cofactor of A) = k^{n-1}((i,j) entry of adj A).

(18) (a) A_{ji} = (j,i) cofactor of A = (-1)^{j+i}|A_{ji}| = (-1)^{j+i}|(A^T)_{ij}| = (-1)^{i+j}|A_{ij}| (since A is symmetric) = (i,j) cofactor of A = A_{ij}.

(b) Consider A = [ 0 1 1; -1 0 1; -1 -1 0 ]. Then adj A = [ 1 -1 1; -1 1 -1; 1 -1 1 ], which is not skew-symmetric.

(19) Let A be a nonsingular upper triangular matrix. Since A^{-1} = (1/|A|) adj A, it is enough to show that adj A is upper triangular; that is, that the (i,j) entry of adj A = 0 for i > j. But the (i,j) entry of adj A = (j,i) cofactor of A = (-1)^{j+i}|A_{ji}|. Notice that when i > j, the matrix A_{ji} is also upper triangular with at least one zero on the main diagonal. Thus, |A_{ji}| = 0.

(20) (a) If A is singular, then by Exercise 16, A (adj A) = O_n. If (adj A)^{-1} exists, then A (adj A)(adj A)^{-1} = O_n (adj A)^{-1} => A = O_n => adj A = O_n, a contradiction, since O_n is singular.


(b) Assume A is an n × n matrix. Suppose |A| ≠ 0. Then A^{-1} exists, and by Corollary 3.12, adj A = |A| A^{-1}. Hence |adj A| = | |A| A^{-1} | = |A|^n |A^{-1}| (by Corollary 3.4) = |A|^n (1/|A|) = |A|^{n-1}. Also, if |A| = 0, then adj A is singular by part (a). Hence |adj A| = 0 = |A|^{n-1}.

(21) The solution is completely outlined in the given Hint.

(22) (a) T (b) T (c) F (d) T (e) F (f) T

(23) (a) Let R be the type (III) row operation ⟨k⟩ ↔ ⟨k-1⟩, let A be an n × n matrix, and let B = R(A). Then, by part (3) of Theorem 3.3, |B| = (-1)|A|. Next, notice that the submatrix A_{(k-1)j} = B_{kj} because the (k-1)st row of A becomes the kth row of B, implying the same row is being eliminated from both matrices, and since all other rows maintain their original relative positions (notice the same column is being eliminated in both cases). Hence, a_{(k-1)1}A_{(k-1)1} + a_{(k-1)2}A_{(k-1)2} + ... + a_{(k-1)n}A_{(k-1)n} = b_{k1}(-1)^{(k-1)+1}|A_{(k-1)1}| + b_{k2}(-1)^{(k-1)+2}|A_{(k-1)2}| + ... + b_{kn}(-1)^{(k-1)+n}|A_{(k-1)n}| (because a_{(k-1)j} = b_{kj} for 1 ≤ j ≤ n) = b_{k1}(-1)^k|B_{k1}| + b_{k2}(-1)^{k+1}|B_{k2}| + ... + b_{kn}(-1)^{k+n-1}|B_{kn}| = (-1)(b_{k1}(-1)^{k+1}|B_{k1}| + b_{k2}(-1)^{k+2}|B_{k2}| + ... + b_{kn}(-1)^{k+n}|B_{kn}|) = (-1)|B| (by applying Theorem 3.10 along the kth row of B) = (-1)|R(A)| = (-1)(-1)|A| = |A|, finishing the proof.

(b) The definition of the determinant is the Base Step for an induction proof on the row number, counting down from n to 1. Part (a) is the Inductive Step.

(24) Suppose row i of A equals row j of A. Let R be the type (III) row operation ⟨i⟩ ↔ ⟨j⟩. Then, clearly A = R(A). Thus, by part (3) of Theorem 3.3, |A| = -|A|, or 2|A| = 0, implying |A| = 0.

(25) Let A, i, j, and B be as given in the exercise and its hint. Then by Exercise 24, |B| = 0, since its ith and jth rows are the same. Also, since every row of A equals the corresponding row of B, with the exception of the jth row, the submatrices A_{jk} and B_{jk} are equal for 1 ≤ k ≤ n. Hence, A_{jk} = B_{jk} for 1 ≤ k ≤ n. Now, computing the determinant of B using a cofactor expansion along the jth row (allowed by Exercise 23) yields 0 = |B| = b_{j1}B_{j1} + b_{j2}B_{j2} + ... + b_{jn}B_{jn} = a_{i1}B_{j1} + a_{i2}B_{j2} + ... + a_{in}B_{jn} (because the jth row of B equals the ith row of A) = a_{i1}A_{j1} + a_{i2}A_{j2} + ... + a_{in}A_{jn}, completing the proof.

(26) Using the fact that the general (k,m) entry of adj A is A_{mk}, we see that the (i,j) entry of A (adj A) equals a_{i1}A_{j1} + a_{i2}A_{j2} + ... + a_{in}A_{jn}. If i = j, Exercise 23 implies that this sum equals |A|, while if i ≠ j, Exercise 25 implies that the sum equals 0. Hence, A (adj A) equals |A| on the main diagonal and 0 off the main diagonal, yielding (|A|)I_n.
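The identity A (adj A) = (|A|)I_n established in Exercise 26 is easy to spot-check with exact arithmetic; the matrix below is arbitrary, not one from the text.

```python
from sympy import Matrix, eye

A = Matrix([[2, 0, 1],
            [3, -1, 4],
            [1, 5, 2]])            # any square matrix works here

n = A.rows
product = A * A.adjugate()          # entry (i,j): a_i1 A_j1 + ... + a_in A_jn
expected = A.det() * eye(n)         # |A| on the diagonal, 0 elsewhere
```

The identity holds whether or not A is singular, which is exactly what Exercises 26 and 35 establish together.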


(27) Suppose A is nonsingular. Then |A| ≠ 0 by Theorem 3.5. Thus, by Exercise 26, A((1/|A|) adj A) = I_n. Therefore, by Theorem 2.9, ((1/|A|) adj A)A = I_n, implying (adj A)A = (|A|)I_n.

(28) Using the fact that the general (k,m) entry of adj A is A_{mk}, we see that the (j,j) entry of (adj A)A equals a_{1j}A_{1j} + a_{2j}A_{2j} + ... + a_{nj}A_{nj}. But all main diagonal entries of (adj A)A equal |A| by Exercise 27.

(29) Let A be a singular n × n matrix. Then |A| = 0 by Theorem 3.5. By part (4) of Theorem 2.11, A^T is also singular (or else A = (A^T)^T is nonsingular). Hence, |A^T| = 0 by Theorem 3.5, and so |A| = |A^T| in this case.

(30) Case 1: Assume 1 ≤ k < j and 1 ≤ i < m. Then the (i,k) entry of (A_{jm})^T = (k,i) entry of A_{jm} = (k,i) entry of A = (i,k) entry of A^T = (i,k) entry of (A^T)_{mj}. Case 2: Assume j ≤ k < n and 1 ≤ i < m. Then the (i,k) entry of (A_{jm})^T = (k,i) entry of A_{jm} = (k+1,i) entry of A = (i,k+1) entry of A^T = (i,k) entry of (A^T)_{mj}. Case 3: Assume 1 ≤ k < j and m ≤ i < n. Then the (i,k) entry of (A_{jm})^T = (k,i) entry of A_{jm} = (k,i+1) entry of A = (i+1,k) entry of A^T = (i,k) entry of (A^T)_{mj}. Case 4: Assume j ≤ k < n and m ≤ i < n. Then the (i,k) entry of (A_{jm})^T = (k,i) entry of A_{jm} = (k+1,i+1) entry of A = (i+1,k+1) entry of A^T = (i,k) entry of (A^T)_{mj}. Hence, the corresponding entries of (A_{jm})^T and (A^T)_{mj} are all equal, proving that the matrices themselves are equal.

(31) Suppose A is a nonsingular n × n matrix. Then, by part (4) of Theorem 2.11, A^T is also nonsingular. We prove |A| = |A^T| by induction on n.
Base Step: Assume n = 1. Then A = [a11] = A^T, and so their determinants must be equal.
Inductive Step: Assume that |B| = |B^T| for any (n-1) × (n-1) nonsingular matrix B, and prove that |A| = |A^T| for any n × n nonsingular matrix A. Now, first note that, by Exercise 30, |(A^T)_{ni}| = |(A_{in})^T|. But, |(A_{in})^T| = |A_{in}|, either by the inductive hypothesis, if A_{in} is nonsingular, or by Exercise 29, if A_{in} is singular. Hence, |(A^T)_{ni}| = |A_{in}|. Let D = A^T. So, |D_{ni}| = |A_{in}|. Then, by Exercise 28, |A| = a_{1n}A_{1n} + a_{2n}A_{2n} + ... + a_{nn}A_{nn} = d_{n1}(-1)^{1+n}|A_{1n}| + d_{n2}(-1)^{2+n}|A_{2n}| + ... + d_{nn}(-1)^{n+n}|A_{nn}| = d_{n1}(-1)^{1+n}|D_{n1}| + d_{n2}(-1)^{2+n}|D_{n2}| + ... + d_{nn}(-1)^{n+n}|D_{nn}| = |D|, since this is the cofactor expansion for |D| along the last row of D. This completes the proof.

(32) Let A be an n × n singular matrix. Let D = A^T. D is also singular by part (4) of Theorem 2.11 (or else A = D^T is nonsingular). Now, Exercise 30 shows that |D_{jk}| = |(A_{kj})^T|, which equals |A_{kj}|, by Exercise 29 if A_{kj} is singular, or by Exercise 31 if A_{kj} is nonsingular. So, |A| = |D| (by Exercise 29) = d_{j1}D_{j1} + d_{j2}D_{j2} + ... + d_{jn}D_{jn} (by Exercise 23) = a_{1j}(-1)^{j+1}|D_{j1}| + a_{2j}(-1)^{j+2}|D_{j2}| + ... + a_{nj}(-1)^{j+n}|D_{jn}| = a_{1j}(-1)^{1+j}|A_{1j}| + a_{2j}(-1)^{2+j}|A_{2j}| + ... + a_{nj}(-1)^{n+j}|A_{nj}| = a_{1j}A_{1j} + a_{2j}A_{2j} + ... + a_{nj}A_{nj}, and we are finished.

(33) If A has two identical columns, then A^T has two identical rows. Hence |A^T| = 0 by Exercise 24. Thus, |A| = 0, by Theorem 3.9 (which is proven in Exercises 29 and 31).

(34) Let B be the n × n matrix, all of whose entries are equal to the corresponding entries in A, except that the jth column of B equals the ith column of A (as does the ith column of B). Then |B| = 0 by Exercise 33. Also, since every column of A equals the corresponding column of B, with the exception of the jth column, the submatrices A_{kj} and B_{kj} are equal for 1 ≤ k ≤ n. Hence, A_{kj} = B_{kj} for 1 ≤ k ≤ n. Now, computing the determinant of B using a cofactor expansion along the jth column (allowed by part (2) of Theorem 3.10, proven in Exercises 28 and 32) yields 0 = |B| = b_{1j}B_{1j} + b_{2j}B_{2j} + ... + b_{nj}B_{nj} = a_{1i}B_{1j} + a_{2i}B_{2j} + ... + a_{ni}B_{nj} (because the jth column of B equals the ith column of A) = a_{1i}A_{1j} + a_{2i}A_{2j} + ... + a_{ni}A_{nj}, completing the proof.

(35) Using the fact that the general (k,m) entry of adj A is A_{mk}, we see that the (i,j) entry of (adj A)A equals a_{1j}A_{1i} + a_{2j}A_{2i} + ... + a_{nj}A_{ni}. If i = j, Exercise 32 implies that this sum equals |A|, while if i ≠ j, Exercise 34 implies that the sum equals 0. Hence, (adj A)A equals |A| on the main diagonal and 0 off the main diagonal, and so (adj A)A = (|A|)I_n. (Note that, since A is singular, |A| = 0, and so, in fact, (adj A)A = O_n.)

(36) (a) By Exercise 27, (1/|A|)(adj A)A = I_n. Hence, AX = B => (1/|A|)(adj A)AX = (1/|A|)(adj A)B => I_nX = (1/|A|)(adj A)B => X = (1/|A|)(adj A)B.

(b) Using part (a) and the fact that the general (k,m) entry of adj A is A_{mk}, we see that the kth entry of X equals the kth entry of (1/|A|)(adj A)B = (1/|A|)(b_1 A_{1k} + ... + b_n A_{nk}).

(c) The matrix A_k is defined to equal A in every entry, except in the kth column, which equals the column vector B. Thus, since this kth column is ignored when computing (A_k)_{ik}, the (i,k) submatrix of A_k, we see that (A_k)_{ik} = A_{ik} for each 1 ≤ i ≤ n. Hence, computing |A_k| by performing a cofactor expansion down the kth column of A_k (by part (2) of Theorem 3.10) yields |A_k| = b_1(-1)^{1+k}|(A_k)_{1k}| + b_2(-1)^{2+k}|(A_k)_{2k}| + ... + b_n(-1)^{n+k}|(A_k)_{nk}| = b_1(-1)^{1+k}|A_{1k}| + b_2(-1)^{2+k}|A_{2k}| + ... + b_n(-1)^{n+k}|A_{nk}| = b_1 A_{1k} + b_2 A_{2k} + ... + b_n A_{nk}.

(d) Theorem 3.13 claims that the kth entry of the solution X of AX = B is |A_k|/|A|. Part (b) gives the expression (1/|A|)(b_1 A_{1k} + ... + b_n A_{nk}) for this kth entry, and part (c) replaces (b_1 A_{1k} + ... + b_n A_{nk}) with |A_k|.


Section 3.4

(1) (a) x^2 - 7x + 14

(b) x^3 - 6x^2 + 3x + 10

(c) x^3 - 8x^2 + 21x - 18

(d) x^3 - 8x^2 + 7x - 5

(e) x^4 - 3x^3 - 4x^2 + 12x
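The characteristic polynomials in Exercise 1 can be checked by machine. The book's matrices are not reprinted in this key, so the sketch below uses a companion-style stand-in whose characteristic polynomial is the answer to 1(a).

```python
from sympy import Matrix, symbols

x = symbols('x')

# Companion-style matrix for x^2 - 7x + 14 (hypothetical; not the book's).
A = Matrix([[0, -14],
            [1,   7]])

p = A.charpoly(x).as_expr()    # p_A(x) = |xI_n - A|
```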

(2) (a) E_2 = {b[1, 1]}

(b) E_2 = {c[-1, 1, 0]}

(c) E_{-1} = {b[1, 2, 0] + c[0, 0, 1]}

(3) In the answers for this exercise, b, c, and d represent arbitrary scalars.

(a) λ = 1, E_1 = {a[1, 0]}, algebraic multiplicity of λ = 2

(b) λ1 = 2, E_2 = {b[1, 0]}, algebraic multiplicity of λ1 = 1; λ2 = 3, E_3 = {b[1, -1]}, algebraic multiplicity of λ2 = 1

(c) λ1 = 1, E_1 = {a[1, 0, 0]}, algebraic multiplicity of λ1 = 1; λ2 = 2, E_2 = {b[0, 1, 0]}, algebraic multiplicity of λ2 = 1; λ3 = -5, E_{-5} = {c[-1/6, 3/7, 1]}, algebraic multiplicity of λ3 = 1

(d) λ1 = 1, E_1 = {b[3, 1]}, algebraic multiplicity of λ1 = 1; λ2 = -1, E_{-1} = {b[7, 3]}, algebraic multiplicity of λ2 = 1

(e) λ1 = 0, E_0 = {c[1, 3, 2]}, algebraic multiplicity of λ1 = 1; λ2 = 2, E_2 = {b[0, 1, 0] + c[1, 0, 1]}, algebraic multiplicity of λ2 = 2

(f) λ1 = 13, E_13 = {c[4, 1, 3]}, algebraic multiplicity of λ1 = 1; λ2 = -13, E_{-13} = {b[1, -4, 0] + c[3, 0, -4]}, algebraic multiplicity of λ2 = 2

(g) λ1 = 1, E_1 = {d[2, 0, 1, 0]}, algebraic multiplicity of λ1 = 1; λ2 = -1, E_{-1} = {d[0, 2, -1, 1]}, algebraic multiplicity of λ2 = 1

(h) λ1 = 0, E_0 = {c[-1, 1, 1, 0] + d[0, -1, 0, 1]}, algebraic multiplicity of λ1 = 2; λ2 = -3, E_{-3} = {d[-1, 0, 2, 2]}, algebraic multiplicity of λ2 = 2
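Eigenvalues, eigenspaces, and algebraic multiplicities of the kind reported in Exercise 3 can be computed with a CAS. The triangular matrix below is an illustrative stand-in: its eigenvalues match those in 3(b), though its eigenvectors differ from the book's matrix.

```python
from sympy import Matrix

A = Matrix([[2, 1],
            [0, 3]])                 # hypothetical; eigenvalues 2 and 3

eigs = A.eigenvals()                 # {eigenvalue: algebraic multiplicity}
spaces = {lam: vecs for lam, mult, vecs in A.eigenvects()}
```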

(4) (a) P = [3 2; 1 1], D = [3 0; 0 -5]

(b) P = [2 5; 1 2], D = [2 0; 0 -2]

(c) Not diagonalizable

(d) P = [6 1 1; 2 2 1; 5 1 1], D = [1 0 0; 0 -1 0; 0 0 2]

(e) Not diagonalizable

(f) Not diagonalizable

(g) P = [2 1 0; 3 0 -1; 0 3 1], D = [2 0 0; 0 2 0; 0 0 3]


(h) P = [6 3 1; 2 1 0; -1 0 1], D = [0 0 0; 0 1 0; 0 0 1]

(i) P = [2 1 1 1; 2 0 2 -1; 1 0 1 0; 0 1 0 1], D = [1 0 0 0; 0 1 0 0; 0 0 -1 0; 0 0 0 0]

(5) (a) [ 32770 -65538; 32769 -65537 ]

(b) [ -17 6 24; -15 6 20; -9 3 13 ]

(c) A^49 = A

(d) [ 352246 -4096 354294; -175099 2048 -175099; -175099 4096 -177147 ]

(e) [ 4188163 6282243 -9421830; 4192254 6288382 -9432060; 4190208 6285312 -9426944 ]

(6) A similar to B implies there is a matrix P such that A = P^{-1}BP. Hence, p_A(x) = |xI_n - A| = |xI_n - P^{-1}BP| = |xP^{-1}I_nP - P^{-1}BP| = |P^{-1}(xI_n - B)P| = |P^{-1}||xI_n - B||P| = (1/|P|)|xI_n - B||P| = (1/|P|)|P||xI_n - B| = |xI_n - B| = p_B(x).

(7) (a) Suppose A = PDP^{-1} for some diagonal matrix D. Let C be the diagonal matrix with c_ii = (d_ii)^{1/3}. Then C^3 = D. Clearly then, (PCP^{-1})^3 = A.

(b) If A has all eigenvalues nonnegative, then A has a square root. Proceed as in part (a), only taking square roots instead of cube roots.

(8) One possible answer: [ 3 -2 -2; -7 10 11; 8 -10 -11 ].

(9) The characteristic polynomial of the matrix is x^2 - (a + d)x + (ad - bc), whose discriminant simplifies to (a - d)^2 + 4bc.

(10) (a) Let v be an eigenvector for A corresponding to λ. Then Av = λv, A^2 v = A(Av) = A(λv) = λ(Av) = λ(λv) = λ^2 v, and an analogous induction argument shows A^k v = λ^k v, for any integer k ≥ 1.

(b) Consider the matrix A = [0 -1; 1 0]. Although A has no eigenvalues, A^4 = I_2 has 1 as an eigenvalue.

(11) (-x)^n |A^{-1}| p_A(1/x) = (-x)^n |A^{-1}| |(1/x)I_n - A| = (-x)^n |A^{-1}((1/x)I_n - A)| = (-x)^n |(1/x)A^{-1} - I_n| = |(-x)((1/x)A^{-1} - I_n)| = |xI_n - A^{-1}| = p_{A^{-1}}(x).


(12) Both parts are true, because xI_n - A is also upper triangular, and so p_A(x) = |xI_n - A| = (x - a11)(x - a22)···(x - ann), by Theorem 3.2.

(13) p_{A^T}(x) = |xI_n - A^T| = |xI_n^T - A^T| = |(xI_n - A)^T| = |xI_n - A| = p_A(x)

(14) A^T X = X since the entries of each row of A^T sum to 1. Hence λ = 1 is an eigenvalue for A^T, and hence for A by Exercise 13.

(15) Base Step: k = 1. Use the argument in the text directly after the definition of similarity.
Inductive Step: Assume that if P^{-1}AP = D, then for some k > 0, A^k = PD^kP^{-1}. Prove that if P^{-1}AP = D, then A^{k+1} = PD^{k+1}P^{-1}. But A^{k+1} = AA^k = (PDP^{-1})(PD^kP^{-1}) (by the inductive hypothesis) = PD^{k+1}P^{-1}.
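The identity A^k = PD^kP^{-1} proved in (15) is the basis for the fast matrix powers of Exercise 5. It can be checked using the P and D from answer 4(a):

```python
from sympy import Matrix, diag

# P and D as given in answer 4(a), so A = P D P^{-1}.
P = Matrix([[3, 2],
            [1, 1]])
D = diag(3, -5)
A = P * D * P.inv()

k = 5
fast = P * D**k * P.inv()   # A^k = P D^k P^{-1}: only diagonal entries are powered
slow = A**k                 # direct repeated multiplication, for comparison
```

Powering the diagonal matrix entrywise is what makes the large powers in Exercise 5 tractable.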

(16) Since A is upper triangular, so is xI_n - A. Thus, by Theorem 3.2, |xI_n - A| equals the product of the main diagonal entries of xI_n - A, which is (x - a11)(x - a22)···(x - ann). Hence the main diagonal entries of A are the eigenvalues of A. Thus, A has n distinct eigenvalues. Then, by the Method for Diagonalizing an n × n Matrix given in Section 3.4, the necessary matrix P can be constructed (since only one fundamental eigenvector for each eigenvalue is needed in this case), and so A is diagonalizable.

(17) Assume A is an n × n matrix. Then A is singular ⟺ |A| = 0 ⟺ (−1)^n |A| = 0 ⟺ |−A| = 0 ⟺ |0I_n − A| = 0 ⟺ p_A(0) = 0 ⟺ λ = 0 is an eigenvalue for A.

(18) Let D = P^{-1}AP. Then D = D^T = (P^{-1}AP)^T = P^T A^T (P^{-1})^T = ((P^T)^{-1})^{-1} A^T (P^T)^{-1}, so A^T is diagonalizable.

(19) D = P^{-1}AP ⇒ D^{-1} = (P^{-1}AP)^{-1} = P^{-1}A^{-1}P. But D^{-1} is a diagonal matrix whose main diagonal entries are the reciprocals of the main diagonal entries of D. (Since the main diagonal entries of D are the nonzero eigenvalues of A, their reciprocals exist.) Thus, A^{-1} is diagonalizable.

(20) (a) The ith column of AP = A(ith column of P) = A P_i. Since P_i is an eigenvector for λ_i, we have A P_i = λ_i P_i, for each i.

(b) P^{-1} λ_i P_i = λ_i (P^{-1}(ith column of P)) = λ_i (ith column of I_n) = λ_i e_i.

(c) The ith column of P^{-1}AP = P^{-1}(ith column of AP) = P^{-1}(λ_i P_i) (by part (a)) = λ_i e_i (by part (b)). Hence, P^{-1}AP is a diagonal matrix with main diagonal entries λ_1, λ_2, …, λ_n.

(21) Since PD = AP, the ith column of PD equals the ith column of AP. But the ith column of PD = P(ith column of D) = P(d_ii e_i) = d_ii P e_i = d_ii (P(ith column of I_n)) = d_ii (ith column of P I_n) = d_ii (ith column of P) = d_ii P_i. Also, the ith column of AP = A(ith column of P) = A P_i. Thus A P_i = d_ii P_i, so d_ii is an eigenvalue for A, with corresponding eigenvector P_i, the ith column of P.


(22) Use induction on n.
Base Step (n = 1): |C| = a_11 x + b_11, which has degree 1 if k = 1 (implying a_11 ≠ 0), and degree 0 if k = 0.
Inductive Step: Assume true for (n − 1) × (n − 1) matrices and prove for n × n matrices. Using a cofactor expansion along the first row yields |C| = (a_11 x + b_11)(−1)^2 |C_11| + (a_12 x + b_12)(−1)^3 |C_12| + ··· + (a_1n x + b_1n)(−1)^{n+1} |C_1n|.
Case 1: Suppose the first row of A is all zeroes (so each a_1i = 0). Then, each of the submatrices A_1i has at most k nonzero rows. Hence, by the inductive hypothesis, each |C_1i| = |A_1i x + B_1i| is a polynomial of degree at most k. Therefore, |C| = b_11 (−1)^2 |C_11| + b_12 (−1)^3 |C_12| + ··· + b_1n (−1)^{n+1} |C_1n| is a sum of constants multiplied by such polynomials, and hence is a polynomial of degree at most k.
Case 2: Suppose some a_1i ≠ 0. Then, at most k − 1 of the rows from 2 through n of A have a nonzero entry. Hence, each submatrix A_1i has at most k − 1 nonzero rows, and, by the inductive hypothesis, each |C_1i| = |A_1i x + B_1i| is a polynomial of degree at most k − 1. Therefore, each term (a_1i x + b_1i)(−1)^{i+1} |C_1i| is a polynomial of degree at most k. The desired result easily follows.

(23) (a) |xI_2 − A| = |x − a_11, −a_12; −a_21, x − a_22| = (x − a_11)(x − a_22) − a_12 a_21 = x^2 − (a_11 + a_22)x + (a_11 a_22 − a_12 a_21) = x^2 − (trace(A))x + |A|.

(b) Use induction on n.
Base Step (n = 1): Now p_A(x) = x − a_11, which is a first degree polynomial with coefficient of its x term equal to 1.
Inductive Step: Assume that the statement is true for all (n − 1) × (n − 1) matrices, and prove it is true for an n × n matrix A. Consider C = xI_n − A = [(x − a_11) −a_12 ··· −a_1n; −a_21 (x − a_22) ··· −a_2n; … ; −a_n1 −a_n2 ··· (x − a_nn)]. Using a cofactor expansion along the first row, p_A(x) = |C| = (x − a_11)(−1)^{1+1} |C_11| + (−a_12)(−1)^{1+2} |C_12| + (−a_13)(−1)^{1+3} |C_13| + ··· + (−a_1n)(−1)^{1+n} |C_1n|. Now |C_11| = p_{A_11}(x), and so, by the inductive hypothesis, is a degree n − 1 polynomial with coefficient of x^{n−1} equal to 1. Hence, (x − a_11)|C_11| is a degree n polynomial having coefficient 1 for its x^n term. But, for j ≥ 2, C_1j is an (n − 1) × (n − 1) matrix having exactly n − 2 rows containing entries which are linear polynomials, since the linear polynomials at the (1, 1) and (j, j) entries of C have been eliminated. Therefore, by Exercise 22, |C_1j| is a polynomial of degree at most n − 2. Summing to get |C| then gives a degree n polynomial with coefficient of the x^n term equal to 1, since only the term (x − a_11)|C_11| in |C| contributes to the x^n term in p_A(x).

(c) Note that the constant term of p_A(x) is p_A(0) = |0I_n − A| = |−A| = (−1)^n |A|.

(d) Use induction on n.
Base Step (n = 1): Now p_A(x) = x − a_11. The degree 0 term is −a_11 = −trace(A).
Inductive Step: Assume that the statement is true for all (n − 1) × (n − 1) matrices, and prove it is true for an n × n matrix A. We know that p_A(x) is a polynomial of degree n from part (b). Then, let C = xI_n − A as in part (b). Then, arguing as in part (b), only the term (x − a_11)|C_11| in |C| contributes to the x^n and x^{n−1} terms in p_A(x), because all of the terms of the form −a_1j |C_1j| are polynomials of degree ≤ (n − 2), for j ≥ 2 (using Exercise 22). Also, since |C_11| = p_{A_11}(x), the inductive hypothesis and part (b) imply that |C_11| = x^{n−1} − trace(A_11) x^{n−2} + (terms of degree ≤ (n − 3)). Hence, (x − a_11)|C_11| = (x − a_11)(x^{n−1} − trace(A_11) x^{n−2} + (terms of degree ≤ (n − 3))) = x^n − a_11 x^{n−1} − trace(A_11) x^{n−1} + (terms of degree ≤ (n − 2)) = x^n − trace(A) x^{n−1} + (terms of degree ≤ (n − 2)), since a_11 + trace(A_11) = trace(A). This completes the proof.
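The three coefficient facts proved in Exercise 23 (leading coefficient 1, x^{n−1} coefficient −trace(A), constant term (−1)^n |A|) can be confirmed by expanding |xI − A| symbolically. This sketch (the 3 × 3 sample matrix is ours) represents polynomials as coefficient lists and expands the determinant by cofactors:

```python
# Checking Exercise 23 for a sample 3x3 matrix:
# p_A(x) = x^n - trace(A) x^{n-1} + ... + (-1)^n |A|.
# Polynomials are coefficient lists [c0, c1, ...] (c_i multiplies x^i).

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p)); q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def det_poly(M):
    """Determinant of a square matrix of polynomials, by cofactor expansion."""
    if len(M) == 1:
        return M[0][0]
    total = [0]
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        term = pmul(entry, det_poly(minor))
        total = padd(total, term if j % 2 == 0 else [-c for c in term])
    return total

A = [[2, 1, 0], [1, 3, -1], [0, 4, 1]]
# xI - A as a matrix of polynomials: diagonal entries are (x - a_ii).
C = [[[-A[i][j], 1] if i == j else [-A[i][j]] for j in range(3)]
     for i in range(3)]
p = det_poly(C)                           # coefficients [c0, c1, c2, c3]

trace = A[0][0] + A[1][1] + A[2][2]
detA = det_poly([[[e] for e in row] for row in A])[0]
print(p[3] == 1, p[2] == -trace, p[0] == (-1) ** 3 * detA)  # -> True True True
```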

(24) (a) T (b) F (c) T (d) T (e) F (f) T (g) T (h) F

Chapter 3 Review Exercises

(1) (a) (3,4) minor = |A_34| = −30 (b) (3,4) cofactor = −|A_34| = 30 (c) |A| = −830 (d) |A| = −830

(2) |A| = −262

(3) |A| = −42

(4) Volume = 45

(5) (a) |B| = 60 (b) |B| = −15 (c) |B| = 15

(6) (a) Yes (b) 4 (c) Yes (d) Yes

(7) 378


(8) (a) A = [0 −2 −4; −2 −2 2; 2 1 −5] (b) A^{-1} = [0 1 2; 1 1 −1; −1 −1/2 5/2]

(9) (a) A^{-1} = [1 −1 −2; −3 7/2 11/2; −5/2 11/4 17/4] (b) A = [−4 4 8; 12 −14 −22; 10 −11 −17]

(c) −11 (d) B is singular, so we cannot compute B^{-1}.

(10) x_1 = −4, x_2 = −3, x_3 = 5

(11) (a) The determinant of the given matrix is −289. Thus, we would need |A|^4 = −289. But no real number raised to the fourth power is negative.

(b) The determinant of the given matrix is zero, making it singular. Hence it cannot be the inverse of any matrix.

(12) B similar to A implies there is a matrix P such that B = P^{-1}AP.

(a) We will prove that B^k = P^{-1} A^k P by induction on k.
Base Step (k = 1): B = P^{-1}AP is given.
Inductive Step: Assume that B^k = P^{-1} A^k P for some k. Then B^{k+1} = B B^k = (P^{-1}AP)(P^{-1} A^k P) = P^{-1}A(PP^{-1}) A^k P = P^{-1} A (I_n) A^k P = P^{-1} A A^k P = P^{-1} A^{k+1} P, so B^{k+1} is similar to A^{k+1}.

(b) |B^T| = |B| = |P^{-1}AP| = |P^{-1}| |A| |P| = (1/|P|) |A| |P| = |A| = |A^T|

(c) If A is nonsingular, B^{-1} = (P^{-1}AP)^{-1} = P^{-1} A^{-1} (P^{-1})^{-1} = P^{-1} A^{-1} P, so B is nonsingular. Note that A = P B P^{-1}. So, if B is nonsingular, A^{-1} = (P B P^{-1})^{-1} = (P^{-1})^{-1} B^{-1} P^{-1} = P B^{-1} P^{-1}, so A is nonsingular.

(d) By part (c), A^{-1} = (P^{-1})^{-1} B^{-1} (P^{-1}), proving that A^{-1} is similar to B^{-1}.

(e) B + I_n = P^{-1}AP + I_n = P^{-1}AP + P^{-1} I_n P = P^{-1}(A + I_n)P.

(f) Exercise 26(c) in Section 1.5 states that trace(AB) = trace(BA) for any two n × n matrices A and B. In this particular case, trace(B) = trace(P^{-1}(AP)) = trace((AP)P^{-1}) = trace(A(PP^{-1})) = trace(A I_n) = trace(A).

(g) If B is diagonalizable, then there is a matrix Q such that D = Q^{-1}BQ is diagonal. Thus, D = Q^{-1}(P^{-1}AP)Q = (Q^{-1}P^{-1})A(PQ) = (PQ)^{-1}A(PQ), and so A is diagonalizable. Since A is similar to B if and only if B is similar to A (see Exercise 13(d) in Section 3.3), an analogous argument shows that A is diagonalizable ⇒ B is diagonalizable.

(13) Assume in what follows that A is an n × n matrix. Let A_i be the ith matrix (as defined in Theorem 3.13) for the matrix A.

(a) Let C = R([A|B]). We must show that for each i, 1 ≤ i ≤ n, the matrix C_i (as defined in Theorem 3.13) for C is identical to R(A_i).
First we consider the columns of C_i other than the ith column. If 1 ≤ j ≤ n with j ≠ i, then the jth column of C_i is the same as the jth column of C = R([A|B]). But since the jth column of [A|B] is identical to the jth column of A_i, it follows that the jth column of R([A|B]) is identical to the jth column of R(A_i). Therefore, for 1 ≤ j ≤ n with j ≠ i, the jth column of C_i is the same as the jth column of R(A_i).
Finally, we consider the ith column of C_i. Now, the ith column of C_i is identical to the last column of R([A|B]) = [R(A)|R(B)], which is R(B). But since the ith column of A_i equals B, it follows that the ith column of R(A_i) = R(B). Thus, the ith column of C_i is identical to the ith column of R(A_i).
Therefore, since C_i and R(A_i) agree in every column, we have C_i = R(A_i).

(b) Consider each type of operation in turn. First, if R is the type (I) operation ⟨k⟩ ← c ⟨k⟩ for some c ≠ 0, then by part (1) of Theorem 3.3, |R(A_i)| / |R(A)| = (c|A_i|) / (c|A|) = |A_i| / |A|. If R is any type (II) operation, then by part (2) of Theorem 3.3, |R(A_i)| / |R(A)| = |A_i| / |A|. If R is any type (III) operation, then |R(A_i)| / |R(A)| = ((−1)|A_i|) / ((−1)|A|) = |A_i| / |A|.

(c) In the special case when n = 1, we have A = [1]. Let B = [b]. Hence, A_1 (as defined in Theorem 3.13) = B = [b]. Then the formula in Theorem 3.13 gives x_1 = |A_1| / |A| = b/1 = b, which is the correct solution to the equation AX = B in this case. For the remainder of the proof, we assume n > 1.
If A = I_n, then the solution to the system is X = B. Therefore we must show that, for each i, 1 ≤ i ≤ n, the formula given in Theorem 3.13 yields x_i = b_i (the ith entry of B). First, note that |A| = |I_n| = 1. Now, the ith matrix A_i (as defined in Theorem 3.13) for A is identical to I_n except in its ith column, which equals B. Therefore, the ith row of A_i has b_i as its ith entry, and zeroes elsewhere. Thus, a cofactor expansion along the ith row of A_i yields |A_i| = b_i (−1)^{i+i} |I_{n−1}| = b_i. Hence, for each i, the formula in Theorem 3.13 produces x_i = |A_i| / |A| = b_i / 1 = b_i, completing the proof.

(d) Because A is nonsingular, [A|B] row reduces to [I_n|X], where X is the unique solution to the system. Let [C|D] represent any intermediate augmented matrix during this row reduction process. Now, by part (a) and repeated use of part (b), the ratio |C_i| / |C| is identical to the ratio |A_i| / |A| obtained from the original augmented matrix, for each i, 1 ≤ i ≤ n. But part (c) proves that for the final augmented matrix, [I_n|X], this common ratio gives the correct solution for x_i, for each i, 1 ≤ i ≤ n. Since all of the systems corresponding to these intermediate matrices have the same unique solution, the formula in Theorem 3.13 gives the correct solution for the original system, [A|B], as well. Thus, Cramer's Rule is validated.
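The rule validated in Exercise 13 is short to implement. The sketch below (ours; the sample system is not from the book) computes each x_i = |A_i| / |A|, where A_i replaces the ith column of A by B, and confirms the result solves the system:

```python
from fractions import Fraction as F

# A minimal implementation of Cramer's Rule as validated in Exercise 13:
# x_i = |A_i| / |A|, where A_i is A with its i-th column replaced by B.

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def cramer(A, B):
    d = det(A)
    assert d != 0, "Cramer's Rule needs a nonsingular A"
    xs = []
    for i in range(len(A)):
        Ai = [row[:i] + [b] + row[i + 1:] for row, b in zip(A, B)]
        xs.append(F(det(Ai), d))
    return xs

A = [[2, 1, -1], [1, -1, 2], [3, 2, 1]]
B = [1, 7, 4]
x = cramer(A, B)
# Confirm A x = B entrywise:
assert all(sum(a * xi for a, xi in zip(row, x)) == b for row, b in zip(A, B))
print(x)
```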

(14) (a) p_A(x) = x^3 − x^2 − 10x − 8 = (x + 2)(x + 1)(x − 4); Eigenvalues: λ_1 = −2, λ_2 = −1, λ_3 = 4; Eigenspaces: E_{−2} = {a[−1, 3, 3] | a ∈ R}, E_{−1} = {a[−2, 7, 7] | a ∈ R}, E_4 = {a[−3, 10, 11] | a ∈ R}; P = [−1 −2 −3; 3 7 10; 3 7 11]; D = [−2 0 0; 0 −1 0; 0 0 4]

(b) p_A(x) = x^3 + x^2 − 21x − 45 = (x + 3)^2 (x − 5); Eigenvalues: λ_1 = 5, λ_2 = −3; Eigenspaces: E_5 = {a[−1, 4, 4] | a ∈ R}, E_{−3} = {a[−2, 1, 0] + b[2, 0, 1] | a, b ∈ R}; P = [−1 −2 2; 4 1 0; 4 0 1]; D = [5 0 0; 0 −3 0; 0 0 −3]

(15) (a) p_A(x) = x^3 − 1 = (x − 1)(x^2 + x + 1). λ = 1 is the only eigenvalue, having algebraic multiplicity 1. Thus, at most one fundamental eigenvector will be produced, which is insufficient for diagonalization by Step 4 of the Diagonalization Method.

(b) p_A(x) = x^4 + 6x^3 + 9x^2 = x^2 (x + 3)^2. Even though the eigenvalue −3 has algebraic multiplicity 2, only one fundamental eigenvector is produced for λ = −3 because (−3I_4 − A) has rank 3. Hence, we get only 3 fundamental eigenvectors overall, which is insufficient for diagonalization by Step 4 of the Diagonalization Method.

(16) A^13 = [−9565941 9565942 4782976; −12754588 12754589 6377300; 3188648 −3188648 −1594325]

(17) (a) λ_1 = 2, λ_2 = −1, λ_3 = 3

(b) E_2 = {a[1, −2, 1, 1] | a ∈ R}, E_{−1} = {a[1, 0, 0, 1] + b[3, 7, −3, 2] | a, b ∈ R}, E_3 = {a[2, 8, −4, 3] | a ∈ R}

(c) |A| = 6

(18) (a) F (b) T (c) F (d) F (e) T (f) T (g) T (h) T (i) F (j) T (k) F (l) F (m) T (n) F (o) F (p) F (q) F (r) T (s) T (t) F (u) F (v) T (w) T (x) T (y) F (z) F


Chapter 4

Section 4.1

(1) Property (2): u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w
Property (5): a ⊙ (u ⊕ v) = (a ⊙ u) ⊕ (a ⊙ v)
Property (6): (a + b) ⊙ u = (a ⊙ u) ⊕ (b ⊙ u)
Property (7): (ab) ⊙ u = a ⊙ (b ⊙ u)

(2) Let W = {a[1, 3, 2] | a ∈ R}. First, W is closed under addition because a[1, 3, 2] + b[1, 3, 2] = (a + b)[1, 3, 2], which has the correct form. Similarly, c(a[1, 3, 2]) = (ca)[1, 3, 2], proving closure under scalar multiplication. Now, properties (1), (2), (5), (6), (7), and (8) are true for all vectors in R^3 by Theorem 1.3, and hence are true in W. Property (3) is satisfied because [0, 0, 0] = 0[1, 3, 2] ∈ W, and [0, 0, 0] + a[1, 3, 2] = a[1, 3, 2] + [0, 0, 0] = a[1, 3, 2]. Finally, property (4) follows from the equation (a[1, 3, 2]) + ((−a)[1, 3, 2]) = [0, 0, 0].

(3) Let W = {f ∈ P_3 | f(2) = 0}. First, W is closed under addition since the sum of polynomials of degree ≤ 3 has degree ≤ 3, and because if f, g ∈ W, then (f + g)(2) = f(2) + g(2) = 0 + 0 = 0, and so (f + g) ∈ W. Similarly, if f ∈ W and c ∈ R, then cf has the proper degree, and (cf)(2) = c f(2) = c0 = 0, thus establishing closure under scalar multiplication. Now properties (1), (2), (5), (6), (7), and (8) are true for all real-valued functions on R, as shown in Example 6 in Section 4.1 of the text. Also, property (3) holds because the zero function z (see Example 6) is a polynomial of degree 0, and z(2) = 0. Finally, property (4) is true since −f is the additive inverse of f (again, see Example 6) and because f ∈ W ⇒ −f ∈ W, since −f has the correct degree and (−f)(2) = −f(2) = −0 = 0.

(4) Clearly the operation ⊕ is commutative. Also, ⊕ is associative because (x ⊕ y) ⊕ z = (x^3 + y^3)^{1/3} ⊕ z = (((x^3 + y^3)^{1/3})^3 + z^3)^{1/3} = ((x^3 + y^3) + z^3)^{1/3} = (x^3 + (y^3 + z^3))^{1/3} = (x^3 + ((y^3 + z^3)^{1/3})^3)^{1/3} = x ⊕ (y^3 + z^3)^{1/3} = x ⊕ (y ⊕ z). Clearly, the real number 0 acts as the additive identity, and, for any real number x, the additive inverse of x is −x, because (x^3 + (−x)^3)^{1/3} = 0^{1/3} = 0, the additive identity.
The first distributive law holds because a ⊙ (x ⊕ y) = a ⊙ (x^3 + y^3)^{1/3} = a^{1/3}(x^3 + y^3)^{1/3} = (a(x^3 + y^3))^{1/3} = (ax^3 + ay^3)^{1/3} = ((a^{1/3}x)^3 + (a^{1/3}y)^3)^{1/3} = (a^{1/3}x) ⊕ (a^{1/3}y) = (a ⊙ x) ⊕ (a ⊙ y). Similarly, the other distributive law holds because (a + b) ⊙ x = (a + b)^{1/3} x = (a + b)^{1/3}(x^3)^{1/3} = (ax^3 + bx^3)^{1/3} = ((a^{1/3}x)^3 + (b^{1/3}x)^3)^{1/3} = (a^{1/3}x) ⊕ (b^{1/3}x) = (a ⊙ x) ⊕ (b ⊙ x).
Associativity of scalar multiplication holds since (ab) ⊙ x = (ab)^{1/3} x = a^{1/3} b^{1/3} x = a^{1/3}(b^{1/3} x) = a ⊙ (b^{1/3} x) = a ⊙ (b ⊙ x). Finally, 1 ⊙ x = 1^{1/3} x = 1x = x.
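The algebra above can be spot-checked numerically. This sketch (ours, with floating-point tolerance) defines x ⊕ y = (x^3 + y^3)^{1/3} and a ⊙ x = a^{1/3} x and tests the properties at a few sample values:

```python
import math

# Numeric spot-check of Exercise 4's operations on R:
# x (+) y = (x^3 + y^3)^(1/3)  and  a (.) x = a^(1/3) x.
# Floating point, so we compare up to a tolerance.

def cbrt(t):
    """Real cube root, valid for negative inputs too."""
    return math.copysign(abs(t) ** (1 / 3), t)

def vadd(x, y):
    return cbrt(x ** 3 + y ** 3)

def smul(a, x):
    return cbrt(a) * x

x, y, z, a, b = 1.5, -2.0, 0.75, 4.0, -3.0
ok = lambda s, t: math.isclose(s, t, rel_tol=1e-9, abs_tol=1e-12)

print(ok(vadd(x, vadd(y, z)), vadd(vadd(x, y), z)))              # property (2)
print(ok(smul(a, vadd(x, y)), vadd(smul(a, x), smul(a, y))))     # property (5)
print(ok(smul(a + b, x), vadd(smul(a, x), smul(b, x))))          # property (6)
print(ok(smul(a * b, x), smul(a, smul(b, x))))                   # property (7)
```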


(5) The set is not closed under addition. For example, [1 1; 1 1] + [1 2; 2 4] = [2 3; 3 5].

(6) The set does not contain the zero vector.

(7) Property (8) is not satisfied. For example, 1 ⊙ 5 = 0 ≠ 5.

(8) Properties (2), (3), and (6) are not satisfied. Property (4) makes no sense without Property (3). The following is a counterexample for Property (2): 3 ⊕ (4 ⊕ 5) = 3 ⊕ 18 = 42, but (3 ⊕ 4) ⊕ 5 = 14 ⊕ 5 = 38.

(9) Properties (3) and (6) are not satisfied. Property (4) makes no sense without Property (3). The following is a counterexample for Property (6): (1 + 2) ⊙ [3, 4] = 3 ⊙ [3, 4] = [9, 12], but (1 ⊙ [3, 4]) ⊕ (2 ⊙ [3, 4]) = [3, 4] ⊕ [6, 8] = [9, 0].

(10) Not a vector space. Property (6) is not satisfied. For example, (1 + 2) ⊙ 3 = 3 ⊙ 3 = 9, but (1 ⊙ 3) ⊕ (2 ⊙ 4) = 3 ⊕ 8 = (3^5 + 8^5)^{1/5} = 33011^{1/5} ≈ 8.01183.

(11) (a) Suppose B = 0, x_1, x_2 ∈ V, and c ∈ R. Then, A(x_1 + x_2) = Ax_1 + Ax_2 = 0 + 0 = 0, and A(cx_1) = cAx_1 = c0 = 0, and so V is closed. Conversely, assume V is closed. Let x ∈ V. Then B = A(0x) = 0(Ax) (by part (4) of Theorem 1.14) = 0B = 0.

(b) Theorem 1.3 shows these six properties hold for all vectors in R^n, including those in V.

(c) Since A0 = 0, 0 ∈ V if and only if B = 0.

(d) If property (3) is not satisfied, there cannot be an inverse for any vector, since a vector added to its inverse must equal the (nonexistent) zero vector. If B = 0, then the inverse of x ∈ V is −x, which is in V since A(−x) = −Ax = −0 = 0.

(e) Clearly, V is a vector space if and only if B = 0. Closure and the remaining 8 properties are all covered in parts (a) through (d).

(12) Suppose 0_1 and 0_2 are both zero vectors. Then, 0_1 = 0_1 + 0_2 = 0_2.

(13) Zero vector = [2, −3]; additive inverse of [x, y] = [4 − x, −6 − y]

(14) (a) Add −v to both sides.

(b) av = bv ⇒ av + (−(bv)) = bv + (−(bv)) ⇒ av + (−1)(bv) = 0 (by Theorem 4.1, part (3), and property (4)) ⇒ av + (−b)v = 0 (by property (7)) ⇒ (a − b)v = 0 (by property (6)) ⇒ a − b = 0 (by Theorem 4.1, part (4)) ⇒ a = b.

(c) Multiply both sides by the scalar 1/a, and use property (7).


(15) Mimic the steps in Example 6 in Section 4.1 in the text, restricting the proof to polynomial functions only. Note that the zero function z used in properties (3) and (4) is a (constant) polynomial function.

(16) Mimic the steps in Example 6 in Section 4.1, generalizing the proof to allow the domain of the functions to be the set X rather than R.

(17) Base Step (n = 1): a_1 v_1 ∈ V, since V is closed under scalar multiplication.
Inductive Step: Assume a_1 v_1 + ··· + a_n v_n ∈ V if v_1, …, v_n ∈ V and a_1, …, a_n ∈ R. Prove a_1 v_1 + ··· + a_n v_n + a_{n+1} v_{n+1} ∈ V if v_1, …, v_n, v_{n+1} ∈ V and a_1, …, a_n, a_{n+1} ∈ R. But a_1 v_1 + ··· + a_n v_n + a_{n+1} v_{n+1} = (a_1 v_1 + ··· + a_n v_n) + a_{n+1} v_{n+1} = w + a_{n+1} v_{n+1}, where w = a_1 v_1 + ··· + a_n v_n ∈ V (by the inductive hypothesis). Also, a_{n+1} v_{n+1} ∈ V since V is closed under scalar multiplication. Finally, w + a_{n+1} v_{n+1} ∈ V since V is closed under addition.

(18) 0v = 0v + 0 = 0v + (0v + (−(0v))) = (0v + 0v) + (−(0v)) = (0 + 0)v + (−(0v)) = 0v + (−(0v)) = 0.

(19) Let V be a nontrivial vector space, and let v ∈ V with v ≠ 0. Then the set S = {av | a ∈ R} is a subset of V, because V is closed under scalar multiplication. Also, if a ≠ b then av ≠ bv, since av = bv ⇒ av − bv = 0 ⇒ (a − b)v = 0 ⇒ a = b (by part (4) of Theorem 4.1). Hence, the elements of S are distinct, making S an infinite set, since R is infinite. Thus, V has an infinite number of elements because it has an infinite subset.

(20) (a) F (b) F (c) T (d) T (e) F (f) T (g) T

Section 4.2

(1) When needed, we will refer to the given set as V.

(a) Not a subspace; no zero vector.

(b) Not a subspace; not closed under either operation. Counterexample for scalar multiplication: 2[1, 1] = [2, 2] ∉ V

(c) Subspace. Nonempty: [0, 0] ∈ V; Addition: [a, 2a] + [b, 2b] = [(a + b), 2(a + b)] ∈ V; Scalar multiplication: c[a, 2a] = [ca, 2(ca)] ∈ V

(d) Not a subspace; not closed under addition. Counterexample: [1, 0] + [0, 1] = [1, 1] ∉ V

(e) Not a subspace; no zero vector.

(f) Subspace. Nonempty: [0, 0] ∈ V; Addition: [a, 0] + [b, 0] = [a + b, 0] ∈ V; Scalar multiplication: c[a, 0] = [ca, 0] ∈ V.


(g) Not a subspace; not closed under addition. Counterexample: [1, 1] + [1, −1] = [2, 0] ∉ V.

(h) Subspace. Nonempty: [0, 0] ∈ V since 0 = −3(0); vectors in V are precisely those of the form [a, −3a]. Addition: [a, −3a] + [b, −3b] = [a + b, −3a − 3b] = [(a + b), −3(a + b)] ∈ V; Scalar multiplication: c[a, −3a] = [(ca), −3(ca)] ∈ V.

(i) Not a subspace; no zero vector since 0 ≠ 7(0) − 5.

(j) Not a subspace; not closed under either operation. Counterexample for addition: [1, 1] + [2, 4] = [3, 5] ∉ V, since 5 ≠ 3^2.

(k) Not a subspace; not closed under either operation. Counterexample for scalar multiplication: 2[0, −4] = [0, −8] ∉ V, since −8 < 2(0) − 5.

(l) Not a subspace; not closed under either operation. Counterexample for scalar multiplication: 2[0.75, 0] = [1.5, 0] ∉ V.

(2) When needed, we will refer to the given set as V.

(a) Subspace. Nonempty: O_22 ∈ V; Addition: [a −a; b 0] + [c −c; d 0] = [(a + c) −(a + c); (b + d) 0] ∈ V; Scalar multiplication: c[a −a; b 0] = [(ca) −(ca); (cb) 0] ∈ V.

(b) Not a subspace; not closed under addition. Counterexample: [1 1; 0 0] + [0 0; 1 1] = [1 1; 1 1] ∉ V.

(c) Subspace. Nonempty: O_22 ∈ V; Addition: [a b; b c] + [d e; e f] = [(a + d) (b + e); (b + e) (c + f)] ∈ V; Scalar multiplication: k[a b; b c] = [(ka) (kb); (kb) (kc)] ∈ V.

(d) Not a subspace; no zero vector, since O_22 is singular.

(e) Subspace. Nonempty: O_22 ∈ V; Addition: [a b; c −(a + b + c)] + [d e; f −(d + e + f)] = [a + d, b + e; c + f, −(a + b + c) − (d + e + f)] = [(a + d) (b + e); (c + f) −((a + d) + (b + e) + (c + f))] ∈ V; Scalar multiplication: k[a b; c −(a + b + c)] = [ka kb; kc k(−(a + b + c))] = [(ka) (kb); (kc) −((ka) + (kb) + (kc))] ∈ V.

(f) Subspace. Nonempty: O_22 ∈ V; Addition: [a b; c −a] + [d e; f −d] = [(a + d) (b + e); (c + f) −(a + d)] ∈ V; Scalar multiplication: k[a b; c −a] = [(ka) (kb); (kc) −(ka)] ∈ V.

(g) Subspace. Let B = [1 3; −2 −6]. Nonempty: O_22 ∈ V because O_22 B = O_22. Addition: If A, C ∈ V, then (A + C)B = AB + CB = O_22 + O_22 = O_22. Hence, (A + C) ∈ V; Scalar multiplication: If A ∈ V, then (cA)B = c(AB) = cO_22 = O_22, and so (cA) ∈ V.

(h) Not a subspace; not closed under addition. Counterexample: [1 1; 0 0] + [0 0; 1 1] = [1 1; 1 1] ∉ V.

(3) When needed, we will refer to the given set as V. Also, let z be the zero polynomial.

(a) Subspace. Nonempty: Clearly, z ∈ V. Addition: (ax^5 + bx^4 + cx^3 + dx^2 + ax + e) + (rx^5 + sx^4 + tx^3 + ux^2 + rx + w) = (a + r)x^5 + (b + s)x^4 + (c + t)x^3 + (d + u)x^2 + (a + r)x + (e + w) ∈ V; Scalar multiplication: t(ax^5 + bx^4 + cx^3 + dx^2 + ax + e) = (ta)x^5 + (tb)x^4 + (tc)x^3 + (td)x^2 + (ta)x + (te) ∈ V.

(b) Subspace. Nonempty: z ∈ V. Suppose p, q ∈ V. Addition: (p + q)(3) = p(3) + q(3) = 0 + 0 = 0. Hence (p + q) ∈ V; Scalar multiplication: (cp)(3) = c p(3) = c(0) = 0. Thus (cp) ∈ V.

(c) Subspace. Nonempty: z ∈ V. Addition: (ax^5 + bx^4 + cx^3 + dx^2 + ex − (a + b + c + d + e)) + (rx^5 + sx^4 + tx^3 + ux^2 + wx − (r + s + t + u + w)) = (a + r)x^5 + (b + s)x^4 + (c + t)x^3 + (d + u)x^2 + (e + w)x + (−(a + b + c + d + e) − (r + s + t + u + w)) = (a + r)x^5 + (b + s)x^4 + (c + t)x^3 + (d + u)x^2 + (e + w)x + (−((a + r) + (b + s) + (c + t) + (d + u) + (e + w))) ∈ V; Scalar multiplication: t(ax^5 + bx^4 + cx^3 + dx^2 + ex − (a + b + c + d + e)) = (ta)x^5 + (tb)x^4 + (tc)x^3 + (td)x^2 + (te)x − ((ta) + (tb) + (tc) + (td) + (te)) ∈ V.

(d) Subspace. Nonempty: z ∈ V. Suppose p, q ∈ V. Addition: (p + q)(3) = p(3) + q(3) = p(5) + q(5) = (p + q)(5). Hence (p + q) ∈ V; Scalar multiplication: (cp)(3) = c p(3) = c p(5) = (cp)(5). Thus (cp) ∈ V.


(e) Not a subspace; z ∉ V.

(f) Not a subspace; not closed under scalar multiplication. Counterexample: −x^2 has a relative maximum at x = 0, but (−1)(−x^2) = x^2 does not; it has a relative minimum instead.

(g) Subspace. Nonempty: z ∈ V. Suppose p, q ∈ V. Addition: (p + q)'(4) = p'(4) + q'(4) = 0 + 0 = 0. Hence (p + q) ∈ V; Scalar multiplication: (cp)'(4) = c p'(4) = c(0) = 0. Thus (cp) ∈ V.

(h) Not a subspace; z ∉ V.

(4) Let V be the given set of vectors. Setting a = b = c = 0 shows that 0 ∈ V. Hence V is nonempty. For closure under addition, note that [a_1, b_1, 0, c_1, a_1 − 2b_1 + c_1] + [a_2, b_2, 0, c_2, a_2 − 2b_2 + c_2] = [a_1 + a_2, b_1 + b_2, 0, c_1 + c_2, a_1 − 2b_1 + c_1 + a_2 − 2b_2 + c_2]. Letting A = a_1 + a_2, B = b_1 + b_2, C = c_1 + c_2, we see that the sum equals [A, B, 0, C, A − 2B + C], which has the correct form, and so is in V. Similarly, for closure under scalar multiplication, k[a, b, 0, c, a − 2b + c] = [ka, kb, 0, kc, ka − 2(kb) + kc], which is also seen to have the correct form. Now apply Theorem 4.2.

(5) Let V be the given set of vectors. Setting a = b = c = 0 shows that 0 ∈ V. Hence V is nonempty. For closure under addition, note that [2a_1 − 3b_1, a_1 − 5c_1, a_1, 4c_1 − b_1, c_1] + [2a_2 − 3b_2, a_2 − 5c_2, a_2, 4c_2 − b_2, c_2] = [2a_1 − 3b_1 + 2a_2 − 3b_2, a_1 − 5c_1 + a_2 − 5c_2, a_1 + a_2, 4c_1 − b_1 + 4c_2 − b_2, c_1 + c_2]. Letting A = a_1 + a_2, B = b_1 + b_2, C = c_1 + c_2, we see that the sum equals [2A − 3B, A − 5C, A, 4C − B, C], which has the correct form, and so is in V. Similarly, for closure under scalar multiplication, k[2a − 3b, a − 5c, a, 4c − b, c] = [2(ka) − 3(kb), (ka) − 5(kc), ka, 4(kc) − (kb), kc], which is also seen to have the correct form. Now apply Theorem 4.2.
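The closure arguments in Exercises 4 and 5 amount to checking that a membership condition is preserved. A small sketch (ours) makes that concrete for the set in Exercise 4:

```python
# Closure checks in the spirit of Exercises 4 and 5: membership in
# V = {[a, b, 0, c, a - 2b + c]} is a testable condition, and sums and
# scalar multiples of members stay in V.

def make(a, b, c):
    return [a, b, 0, c, a - 2 * b + c]

def in_V(v):
    return v[2] == 0 and v[4] == v[0] - 2 * v[1] + v[3]

u, w = make(1, 2, 3), make(-4, 0, 7)
s = [ui + wi for ui, wi in zip(u, w)]       # closure under addition
m = [5 * ui for ui in u]                    # closure under scalar mult.
print(in_V(s), in_V(m))  # -> True True
```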

(6) (a) Let V = {x ∈ R^3 | x · [1, −1, 4] = 0}. Clearly [0, 0, 0] ∈ V since [0, 0, 0] · [1, −1, 4] = 0. Hence V is nonempty. Next, if x, y ∈ V, then (x + y) · [1, −1, 4] = x · [1, −1, 4] + y · [1, −1, 4] = 0 + 0 = 0. Thus, (x + y) ∈ V, and so V is closed under addition. Also, if x ∈ V and c ∈ R, then (cx) · [1, −1, 4] = c(x · [1, −1, 4]) = c(0) = 0. Therefore (cx) ∈ V, so V is closed under scalar multiplication. Hence, by Theorem 4.2, V is a subspace of R^3.

(b) Plane, since it contains the nonparallel vectors [0, 4, 1] and [1, 1, 0].

(7) (a) The zero function z(x) = 0 is continuous, and so the set is nonempty. Also, from calculus, we know that the sum of continuous functions is continuous, as is any scalar multiple of a continuous function. Hence, both closure properties hold. Apply Theorem 4.2.

(b) Replace the word "continuous" everywhere in part (a) with the word "differentiable."

(c) Let V be the given set. The zero function z(x) = 0 satisfies z(1/2) = 0, and so V is nonempty. Furthermore, if f, g ∈ V and c ∈ R, then (f + g)(1/2) = f(1/2) + g(1/2) = 0 + 0 = 0, and (cf)(1/2) = c f(1/2) = c0 = 0. Hence, both closure properties hold. Now use Theorem 4.2.

(d) Let V be the given set. The zero function z(x) = 0 is continuous and satisfies ∫₀¹ z(x) dx = 0, and so V is nonempty. Also, from calculus, we know that the sum of continuous functions is continuous, as is any scalar multiple of a continuous function. Furthermore, if f, g ∈ V and c ∈ R, then ∫₀¹ (f + g)(x) dx = ∫₀¹ (f(x) + g(x)) dx = ∫₀¹ f(x) dx + ∫₀¹ g(x) dx = 0 + 0 = 0, and ∫₀¹ (cf)(x) dx = ∫₀¹ c f(x) dx = c ∫₀¹ f(x) dx = c0 = 0. Hence, both closure properties hold. Finally, apply Theorem 4.2.

(8) The zero function z(x) = 0 is differentiable and satisfies 3(dz/dx) − 2z = 0, and so V is nonempty. Also, from calculus, we know that the sum of differentiable functions is differentiable, as is any scalar multiple of a differentiable function. Furthermore, if f, g ∈ V and c ∈ R, then 3(f + g)' − 2(f + g) = (3f' − 2f) + (3g' − 2g) = 0 + 0 = 0, and 3(cf)' − 2(cf) = 3cf' − 2cf = c(3f' − 2f) = c0 = 0. Hence, both closure properties hold. Now use Theorem 4.2.

(9) Let V be the given set. The zero function z(x) = 0 is twice-differentiable and satisfies z'' + 2z' − 9z = 0, and so V is nonempty. From calculus, sums and scalar multiples of twice-differentiable functions are twice-differentiable. Furthermore, if f, g ∈ V and c ∈ R, then (f + g)'' + 2(f + g)' − 9(f + g) = (f'' + 2f' − 9f) + (g'' + 2g' − 9g) = 0 + 0 = 0, and (cf)'' + 2(cf)' − 9(cf) = cf'' + 2cf' − 9cf = c(f'' + 2f' − 9f) = c0 = 0. Hence, both closure properties hold. Theorem 4.2 finishes the proof.

(10) The given subset does not contain the zero function.

(11) First note that AO = OA, and so O ∈ W. Thus W is nonempty. Next, let B_1, B_2 ∈ W. Then A(B_1 + B_2) = AB_1 + AB_2 = B_1 A + B_2 A = (B_1 + B_2)A and A(cB_1) = c(AB_1) = c(B_1 A) = (cB_1)A.

(12) (a) Closure under addition is a required property of every vector space.

(b) Let A be a singular n × n matrix. Then |A| = 0. But then |cA| = c^n |A| = 0, so cA is also singular.

(c) All eight properties hold.

(d) For n = 2, [1 0; 0 0] + [0 0; 0 1] = [1 0; 0 1].

(e) No; if |A| ≠ 0 and c = 0, then |cA| = |O_n| = 0.

(13) (a) Since the given line goes through the origin, it must have either the form y = mx, or x = 0. In the first case, all vectors on the line have the form [x, mx]. Notice that [x_1, mx_1] + [x_2, mx_2] = [x_1 + x_2, m(x_1 + x_2)], which lies on y = mx, and c[x, mx] = [cx, m(cx)], which also lies on y = mx. If the line has the form x = 0, then all vectors on the line have the form [0, y]. These vectors clearly form a subspace of R^2.

(b) The given subset does not contain the zero vector.

(14) (a) Follow the proof of Theorem 4.4, substituting 15 for λ.

(b) E_15 = {a[4, 1, 0] + b[2, 0, 1]} = {[4a + 2b, a, b]}. Let A = 4a + 2b and B = a. Then b = (1/2)(A − 4a) = (1/2)(A − 4B) = (1/2)A − 2B. Hence E_15 ⊆ {[A, B, (1/2)A − 2B]} = W.

(c) [16 −4 −2; 3 3 −6; 2 −8 11][a; b; (1/2)a − 2b] = [15a; 15b; (15/2)a − 30b] = 15[a; b; (1/2)a − 2b]. Hence, W = {[a, b, (1/2)a − 2b]} ⊆ E_15.

(15) S = {0}, the trivial subspace of R^n.

(16) Let a ∈ V with a ≠ 0, and let b ∈ R. Then (b/a)a = b ∈ V. (We have used bold-faced variables when we are considering objects as vectors rather than scalars. However, a and a have the same value as real numbers; similarly for b and b.)

(17) The given subset does not contain the zero vector.

(18) Note that 0 ∈ W_1 and 0 ∈ W_2. Hence 0 ∈ W_1 ∩ W_2, and so W_1 ∩ W_2 is nonempty. Let x, y ∈ W_1 ∩ W_2. Then x + y ∈ W_1 since W_1 is a subspace, and x + y ∈ W_2 since W_2 is a subspace. Hence, x + y ∈ W_1 ∩ W_2. Similarly, if c ∈ R, then cx ∈ W_1 and cx ∈ W_2 since W_1 and W_2 are subspaces. Thus cx ∈ W_1 ∩ W_2. Now apply Theorem 4.2.

(19) (a) If W is a subspace, then aw_1, bw_2 ∈ W by closure under scalar multiplication. Hence aw_1 + bw_2 ∈ W by closure under addition. Conversely, setting b = 0 shows that aw_1 ∈ W for all w_1 ∈ W. This establishes closure under scalar multiplication. Use a = 1 and b = 1 to get closure under addition.

(b) If W is a subspace, proceed as in part (a). Conversely, first consider a = 1 to get closure under addition. Then use cw = (c − 1)w + w to prove closure under scalar multiplication.

(20) Suppose w_1, w_2 ∈ W. Then w_1 + w_2 = 1w_1 + 1w_2 is a finite linear combination of vectors of W, so w_1 + w_2 ∈ W. Also, c_1 w_1 (for c_1 ∈ R) is a finite linear combination in W, so c_1 w_1 ∈ W. Now apply Theorem 4.2.

(21) Apply Theorems 4.4 and 4.3.

(22) (a) F (b) T (c) F (d) T (e) T (f) F (g) T (h) T


Section 4.3

(1) (a) {[a, b, −a + b] | a, b ∈ R}
(b) {a[1, 1/3, −2/3] | a ∈ R}
(c) {[a, b, −b] | a, b ∈ R}
(d) Span(S) = R^3
(e) {[a, b, c, −2a + b + c] | a, b, c ∈ R}
(f) {[a, b, 2a + b, a + b] | a, b ∈ R}

(2) (a) {ax^3 + bx^2 + cx − (a + b + c) | a, b, c ∈ R}
(b) {ax^3 + bx^2 + d | a, b, d ∈ R}
(c) {ax^3 − ax + b | a, b ∈ R}

(3) (a) {[a b; c −a − b − c] | a, b, c ∈ R}
(b) {[a b; a − b −8a + 3b] | a, b ∈ R}
(c) {[a b; c d] | a, b, c, d ∈ R} = M_22

(4) (a) W = row space of A = row space of [1 1 0 0; 1 0 1 0; 0 1 1 1] = {a[1, 1, 0, 0] + b[1, 0, 1, 0] + c[0, 1, 1, 1] | a, b, c ∈ R}

(b) B = [1 0 0 −1/2; 0 1 0 1/2; 0 0 1 1/2]

(c) Row space of B = {a[1, 0, 0, −1/2] + b[0, 1, 0, 1/2] + c[0, 0, 1, 1/2] | a, b, c ∈ R} = {[a, b, c, −(1/2)a + (1/2)b + (1/2)c] | a, b, c ∈ R}

(5) (a) W = row space of A = row space of [2 1 0 3 4; 3 1 −1 4 2; −4 −1 7 0 0] = {a[2, 1, 0, 3, 4] + b[3, 1, −1, 4, 2] + c[−4, −1, 7, 0, 0]}

(b) B = [1 0 0 2 −2; 0 1 0 −1 8; 0 0 1 1 0]

(c) Row space of B = {[a, b, c, 2a − b + c, −2a + 8b] | a, b, c ∈ R}

(6) S spans R^3 by the Simplified Span Method since [1 3 −1; 2 7 −3; 4 8 −7] row reduces to [1 0 0; 0 1 0; 0 0 1].


(7) S does not span R^3 by the Simplified Span Method since [1 −2 2; 3 −4 −1; 1 −4 9; 0 2 −7] row reduces to [1 0 −5; 0 1 −7/2; 0 0 0; 0 0 0]. Thus [0, 0, 1] is not in span(S).
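The Simplified Span Method used in Exercises 6 and 7 is mechanical: put the vectors of S as rows of a matrix and row reduce. A sketch (ours) with exact rational arithmetic reproduces both rank computations:

```python
from fractions import Fraction as F

# The Simplified Span Method, as in Exercises 6 and 7: S spans R^3
# exactly when the matrix with the vectors of S as rows has rank 3.

def rref(M):
    """Reduced row echelon form via Gauss-Jordan; returns (matrix, rank)."""
    M = [[F(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M, r

S6 = [[1, 3, -1], [2, 7, -3], [4, 8, -7]]               # Exercise 6: spans R^3
S7 = [[1, -2, 2], [3, -4, -1], [1, -4, 9], [0, 2, -7]]  # Exercise 7: does not
print(rref(S6)[1], rref(S7)[1])  # -> 3 2
```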

(8) ax^2 + bx + c = a(x^2 + x + 1) + (b − a)(x + 1) + (c − b)1.
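The identity in Exercise 8 can be verified on coefficients. A short sketch (ours), with polynomials written as coefficient lists [c0, c1, c2]:

```python
# Verifying Exercise 8 on coefficients:
# a(x^2 + x + 1) + (b - a)(x + 1) + (c - b)*1 = ax^2 + bx + c.

def combo(a, b, c):
    p1, p2, p3 = [1, 1, 1], [1, 1, 0], [1, 0, 0]   # x^2+x+1, x+1, 1
    return [a * u + (b - a) * v + (c - b) * w for u, v, w in zip(p1, p2, p3)]

for a, b, c in [(1, 2, 3), (-4, 0, 5), (2, 2, 2)]:
    assert combo(a, b, c) == [c, b, a]             # i.e. c + bx + ax^2
print("identity holds")
```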

(9) The set does not span P_2 by the Simplified Span Method since [1 4 −3; 2 1 5; 0 7 −11] row reduces to [1 0 23/7; 0 1 −11/7; 0 0 0]. Thus 0x^2 + 0x + 1 is not in the span of the set.

(10) (a) [-4, 5, -13] = 5[1, -2, -2] - 3[3, -5, 1] + 0[-1, 1, -5].

(b) S does not span R^3 by the Simplified Span Method, since [1 -2 -2; 3 -5 1; -1 1 -5] row reduces to [1 0 12; 0 1 7; 0 0 0]. Thus [0, 0, 1] is not in span(S).

(11) One answer is -1(x^3 - 2x^2 + x - 3) + 2(2x^3 - 3x^2 + 2x + 5) - 1(4x^2 + x - 3) + 0(4x^3 - 7x^2 + 4x - 1).

(12) Let S1 = {[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 0, 1, 1]}. The Simplified Span Method shows that S1 spans R^4. Thus, since S1 ⊆ S, S also spans R^4.

(13) Let b ∈ R. Then b = (b/a)a ∈ span({a}). (Boldface is used when an object is being considered as a vector rather than as a scalar; however, a and its vector counterpart have the same value as real numbers, and similarly for b.)

(14) (a) Hint: Use Theorem 1.13.

(b) Statement: If S1 is the set of symmetric n × n matrices, and S2 is the set of skew-symmetric n × n matrices, then span(S1 ∪ S2) = Mnn. Proof: Again, use Theorem 1.13.

(15) Apply the Simplified Span Method to S. The matrix [0 1 0 1; 1 0 1 0; -12 3 -2 3] represents the vectors in S. Then [1 0 0 0; 0 1 0 1; 0 0 1 0] is the corresponding reduced row echelon form matrix, which clearly indicates the desired result.

(16) (a) {[-3, 2, 0], [4, 0, 5]}

(b) E1 = {[-(3/2)b + (4/5)c, b, c]} = {b[-3/2, 1, 0] + c[4/5, 0, 1]} = {(b/2)[-3, 2, 0] + (c/5)[4, 0, 5]}; so the set in part (a) spans E1.

(17) Span(S1) ⊆ span(S2), since a1v1 + ... + anvn = (-a1)(-v1) + ... + (-an)(-vn). Span(S2) ⊆ span(S1), since a1(-v1) + ... + an(-vn) = (-a1)v1 + ... + (-an)vn.

(18) If u = av, then all vectors in span(S) are scalar multiples of v ≠ 0. Hence, span(S) is a line through the origin. If u ≠ av, then span(S) is the set of all linear combinations of the vectors u and v, and since these vectors point in different directions, span(S) is a plane through the origin.

(19) First, suppose that span(S) = R^3. Let x be a solution to Ax = 0. Then u · x = v · x = w · x = 0, since these are the three coordinates of Ax. Because span(S) = R^3, there exist a, b, c such that au + bv + cw = x. Hence, x · x = (au + bv + cw) · x = a(u · x) + b(v · x) + c(w · x) = 0. Therefore, by part (3) of Theorem 1.5, x = 0. Thus, Ax = 0 has only the trivial solution. Theorem 2.5 and Corollary 3.6 then show that |A| ≠ 0.

Next, suppose that |A| ≠ 0. Then Corollary 3.6 implies that rank(A) = 3. This means that the reduced row echelon form for A has three nonzero rows. This can only occur if A row reduces to I3. Thus, A is row equivalent to I3. By Theorem 2.8, A and I3 have the same row space. Hence, span(S) = row space of A = row space of I3 = R^3.

(20) Choose n = max(degree(p1), ..., degree(pk)).

(21) S1 ⊆ S2 ⊆ span(S2), by Theorem 4.5, part (1). Then, since span(S2) is a subspace of V containing S1 (by Theorem 4.5, part (2)), span(S1) ⊆ span(S2) by Theorem 4.5, part (3).

(22) (a) Suppose S is a subspace. Then S ⊆ span(S) by Theorem 4.5, part (1). Also, S is a subspace containing S, so span(S) ⊆ S by Theorem 4.5, part (3). Hence, S = span(S).

Conversely, if span(S) = S, then S is a subspace by Theorem 4.5, part (2).

(c) span({skew-symmetric 3 × 3 matrices}) = {skew-symmetric 3 × 3 matrices}

(23) If span(S1) = span(S2), then S1 ⊆ span(S1) = span(S2) and S2 ⊆ span(S2) = span(S1).

Conversely, since span(S1) and span(S2) are subspaces, part (3) of Theorem 4.5 shows that if S1 ⊆ span(S2) and S2 ⊆ span(S1), then span(S1) ⊆ span(S2) and span(S2) ⊆ span(S1), and so span(S1) = span(S2).


(24) (a) Clearly S1 ∩ S2 ⊆ S1 and S1 ∩ S2 ⊆ S2. Corollary 4.6 then implies span(S1 ∩ S2) ⊆ span(S1) and span(S1 ∩ S2) ⊆ span(S2). The desired result easily follows.

(b) S1 = {[1, 0, 0], [0, 1, 0]}, S2 = {[0, 1, 0], [0, 0, 1]}

(c) S1 = {[1, 0, 0], [0, 1, 0]}, S2 = {[1, 0, 0], [1, 1, 0]}

(25) (a) Clearly S1 ⊆ S1 ∪ S2 and S2 ⊆ S1 ∪ S2. Corollary 4.6 then implies span(S1) ⊆ span(S1 ∪ S2) and span(S2) ⊆ span(S1 ∪ S2). The desired result easily follows.

(b) If S1 ⊆ S2, then span(S1) ⊆ span(S2), so span(S1) ∪ span(S2) = span(S2). Also S1 ∪ S2 = S2.

(c) S1 = {x^5}, S2 = {x^4}

(26) If span(S) = span(S ∪ {v}), then v ∈ span(S ∪ {v}) = span(S). Conversely, if v ∈ span(S), then since S ⊆ span(S), we have S ∪ {v} ⊆ span(S). Then span(S ∪ {v}) ⊆ span(S) by Theorem 4.5, part (3). The reverse inclusion follows from Corollary 4.6, and so span(S) = span(S ∪ {v}).

(27) Step 3 of the Diagonalization Method creates a set S = {X1, ..., Xm} of fundamental eigenvectors, where each Xj is the particular solution of (λIn - A)X = 0 obtained by letting the jth independent variable equal 1 while letting all other independent variables equal 0. Now, let X ∈ Eλ. Then X is a solution of (λIn - A)X = 0. Suppose X is obtained by setting the ith independent variable equal to ki, for each i, with 1 ≤ i ≤ m. Then X = k1X1 + ... + kmXm. Hence X is a linear combination of the Xi's, and thus S spans Eλ.

(28) We show that span(S) is closed under vector addition. Let u and v be two vectors in span(S). Then there are finite subsets {u1, ..., uk} and {v1, ..., vl} of S such that u = a1u1 + ... + akuk and v = b1v1 + ... + blvl for some real numbers a1, ..., ak, b1, ..., bl. The natural thing to do at this point is to combine the expressions for u and v by adding corresponding coefficients. However, each of the subsets {u1, ..., uk} or {v1, ..., vl} may contain elements not found in the other, so it is difficult to match up their coefficients. We need to create a larger set containing all the vectors in both subsets.

Consider the finite subset X = {u1, ..., uk} ∪ {v1, ..., vl} of S. Renaming the elements of X, we can suppose X is the finite subset {w1, ..., wm} of S. Hence, there are real numbers c1, ..., cm and d1, ..., dm such that u = c1w1 + ... + cmwm and v = d1w1 + ... + dmwm. (Note that ci = aj if wi = uj and that ci = 0 if wi ∉ {u1, ..., uk}. A similar formula gives the value of each di.) Then,

u + v = Σ_{i=1}^m ciwi + Σ_{i=1}^m diwi = Σ_{i=1}^m (ci + di)wi,


a linear combination of the elements wi in the subset X of S. Thus, u + v ∈ span(S).

(29) (a) F

(b) T

(c) F

(d) F

(e) F

(f) T

(g) F

Section 4.4

(1) (a) Linearly independent (a[0, 1, 1] = [0, 0, 0] ⇒ a = 0)

(b) Linearly independent (neither vector is a multiple of the other)

(c) Linearly dependent (each vector is a multiple of the other)

(d) Linearly dependent (contains [0, 0, 0])

(e) Linearly dependent (Theorem 4.7)

(2) Linearly independent: (b), (c), (f). The others are linearly dependent, and the appropriate reduced row echelon form matrix for each is given.

(a) [1 0 1; 0 1 -1; 0 0 0]

(d) [1 0 3; 0 1 2; 0 0 0]

(e) [1 0 -1/2; 0 1 1/2; 0 0 0; 0 0 0]

(3) Linearly independent: (a), (b). The others are linearly dependent, and the appropriate reduced row echelon form matrix for each is given.

(c) [2 7 12; -6 2 -7] reduces to [1 0 73/46; 0 1 29/23]

(d) [1 1 1 1; 1 1 -1 -1; 1 -1 1 -1] reduces to [1 0 0 -1; 0 1 0 1; 0 0 1 1]

(4) (a) Linearly independent: [1 1 1; 0 0 1; -1 1 0] row reduces to I3.

(b) Linearly independent: [-1 0 1; 1 0 0; 0 2 1; 1 -1 0] row reduces to [1 0 0; 0 1 0; 0 0 1; 0 0 0].

(c) Consider the polynomials in P2 as corresponding vectors in R^3. Then the set is linearly dependent by Theorem 4.7.


(d) Linearly dependent: [3 1 0 1; 0 0 0 0; 2 1 1 1; 1 0 -5 -10] row reduces to [1 0 0 5/2; 0 1 0 -13/2; 0 0 1 5/2; 0 0 0 0], although row reduction really is not necessary, since the original matrix clearly has rank ≤ 3 due to the row of zeroes.

(e) Linearly independent. See the remarks before Example 15 in the textbook. No polynomial in the set can be expressed as a finite linear combination of the others, since each has a distinct degree.

(f) Linearly independent. Use an argument similar to that in Example 15 in the textbook. (Alternately, use the comments right before Example 15 in the textbook, since the polynomial pn = 1 + 2x + ... + (n + 1)x^n cannot be expressed as a finite linear combination of other vectors in the given set. The reason for this is that since all vectors in the set have a different degree, we find that the degree of the highest degree polynomial with a nonzero coefficient in such a linear combination is distinct from n, and hence the finite linear combination could not equal pn.)
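Part (a)'s claim can be replayed with the Independence Test Method: row reduce the matrix whose columns are the three vectors and check for a pivot in every column. A sketch (the `rref` helper is our own, not from the textbook):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan reduction to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0])
    lead = 0
    for r in range(len(m)):
        if lead >= ncols:
            break
        i = r
        while m[i][lead] == 0:
            i += 1
            if i == len(m):
                i, lead = r, lead + 1
                if lead == ncols:
                    return m
        m[i], m[r] = m[r], m[i]
        m[r] = [x / m[r][lead] for x in m[r]]
        for j in range(len(m)):
            if j != r:
                m[j] = [a - m[j][lead] * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

# The matrix of part (a); its columns are the vectors being tested.
A = [[1, 1, 1], [0, 0, 1], [-1, 1, 0]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(rref(A) == I3)   # True: a pivot in every column, so the set is independent
```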

(5) 21[1 -2; 0 1] + 15[3 2; -6 1] - 18[4 -1; -5 2] + 2[3 -3; 0 0] = [0 0; 0 0].

(6) [1 4 0 0; 2 2 1 7; -1 -6 1 5; 1 1 -1 2; 3 0 2 -1; 0 1 2 6] row reduces to [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0], so the set is linearly independent by the Independence Test Method.

(7) (a) [-2, 0, 1] is not a scalar multiple of [1, 1, 0].

(b) [0, 1, 0]

(c) No; [0, 0, 1] will also work.

(d) Any nonzero linear combination of [1, 1, 0] and [-2, 0, 1], other than [1, 1, 0] and [-2, 0, 1] themselves, will work.

(8) (a) Applying the Independence Test Method, we find that the matrix whose columns are the given vectors row reduces to I3.

(b) [-2, 0, 3, -4] = -1[2, -1, 0, 5] + 1[1, -1, 2, 0] + 1[-1, 0, 1, 1].

(c) No, because S is linearly independent (see Theorem 4.9).

(9) (a) x^3 - 4x + 8 = 2(2x^3 - x + 3) - (3x^3 + 2x - 2).


(b) See part (a). Also, 4x^3 + 5x - 7 = -1(2x^3 - x + 3) + 2(3x^3 + 2x - 2);
2x^3 - x + 3 = (2/3)(x^3 - 4x + 8) + (1/3)(4x^3 + 5x - 7);
3x^3 + 2x - 2 = (1/3)(x^3 - 4x + 8) + (2/3)(4x^3 + 5x - 7)

(c) No polynomial in S is a scalar multiple of any other polynomial in S.

(10) Following the hint in the textbook, let A be the matrix whose rows are the vectors u, v, and w. By the Independence Test Method, S is linearly independent iff A^T row reduces to a matrix with a pivot in every column, iff A^T has rank 3, iff |A^T| ≠ 0, iff |A| ≠ 0.
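The determinant criterion of Exercise 10 is easy to apply numerically. The vectors u, v, w below are our own illustration, not vectors from the exercise:

```python
def det3(a):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

u, v, w = [1, 0, 1], [0, 1, 1], [1, 1, 0]   # hypothetical sample vectors
A = [u, v, w]                               # rows of A are the vectors of S
print(det3(A))   # -2, which is nonzero, so {u, v, w} is linearly independent
```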

(11) In each part, there are many different correct answers. Only one possibility is given here.

(a) {e1, e2, e3, e4}  (b) {e1, e2, e3, e4}  (c) {1, x, x^2, x^3}

(d) { [1 0 0; 0 0 0], [0 1 0; 0 0 0], [0 0 1; 0 0 0], [0 0 0; 1 0 0] }

(e) { [1 0 0; 0 0 0; 0 0 0], [0 1 0; 1 0 0; 0 0 0], [0 0 1; 0 0 0; 1 0 0], [0 0 0; 0 1 0; 0 0 0] } (Notice that each matrix is symmetric.)

(12) If S is linearly dependent, then a1v1 + ... + anvn = 0 for some {v1, ..., vn} ⊆ S and a1, ..., an ∈ R, with some ai ≠ 0. Then vi = -(a1/ai)v1 - ... - (a_{i-1}/ai)v_{i-1} - (a_{i+1}/ai)v_{i+1} - ... - (an/ai)vn. So, if w ∈ span(S), then w = bvi + c1w1 + ... + ckwk, for some {w1, ..., wk} ⊆ S - {vi} and b, c1, ..., ck ∈ R. Hence, w = -(ba1/ai)v1 - ... - (ba_{i-1}/ai)v_{i-1} - (ba_{i+1}/ai)v_{i+1} - ... - (ban/ai)vn + c1w1 + ... + ckwk ∈ span(S - {vi}). So, span(S) ⊆ span(S - {vi}). Also, span(S - {vi}) ⊆ span(S) by Corollary 4.6. Thus span(S - {vi}) = span(S).

Conversely, if span(S - {v}) = span(S) for some v ∈ S, then by Theorem 4.5, part (1), v ∈ span(S) = span(S - {v}), and so S is linearly dependent by the Alternate Characterization of linear dependence in Table 4.1.

(13) In each part, you can prove that the indicated vector is redundant by creating a matrix using the given vectors as rows, and showing that the Simplified Span Method leads to the same reduced row echelon form (except, perhaps, with an extra row of zeroes) with or without the indicated vector as one of the rows.

(a) [0, 0, 0, 0]. Any linear combination of the other three vectors becomes a linear combination of all four vectors when 1[0, 0, 0, 0] is added to it.


(b) [0, 0, -6, 0]: Let S be the given set of three vectors. We prove that v = [0, 0, -6, 0] is redundant. To do this, we need to show that span(S - {v}) = span(S). We will do this by applying the Simplified Span Method to both S - {v} and S. If the method leads to the same reduced row echelon form (except, perhaps, with an extra row of zeroes) with or without v as one of the rows, then the two spans are equal.

For span(S - {v}): [1 1 0 0; 1 1 1 0] row reduces to [1 1 0 0; 0 0 1 0].

For span(S): [1 1 0 0; 1 1 1 0; 0 0 -6 0] row reduces to [1 1 0 0; 0 0 1 0; 0 0 0 0].

Since the reduced row echelon form matrices are the same, except for the extra row of zeroes, the two spans are equal, and v = [0, 0, -6, 0] is a redundant vector.

For an alternate approach to proving that span(S - {v}) = span(S), first note that S - {v} ⊆ S, and so span(S - {v}) ⊆ span(S) by Corollary 4.6. Next, [1, 1, 0, 0] and [1, 1, 1, 0] are clearly in span(S - {v}) by part (1) of Theorem 4.5. But v ∈ span(S - {v}) as well, because [0, 0, -6, 0] = (6)[1, 1, 0, 0] + (-6)[1, 1, 1, 0]. Hence, S is a subset of the subspace span(S - {v}). Therefore, by part (3) of Theorem 4.5, span(S) ⊆ span(S - {v}). Thus, since we have proven subset inclusion in both directions, span(S - {v}) = span(S).

(c) Let S be the given set of vectors. Every one of the given 16 vectors in S is individually redundant. One clever way to see this is to consider the matrix A = [1 1 1 1; 1 -1 1 1; 1 1 -1 1; 1 1 1 -1], whose rows are 4 of the vectors in S. Now, A row reduces to I4. Thus, the four rows of A span R^4, by the Simplified Span Method. Hence, any of the remaining 12 vectors in S are redundant, since they are in the span of these 4 rows. Now, repeat this argument using -A to show that the four rows of A are also individually redundant. (Note that A and -A are row equivalent, so we do not have to perform a second row reduction.)

(14) pA(x) = (x - 1)(x - 2)^2.

(15) Use the definition of linear independence.


(16) Let S = {v1, ..., vk} ⊆ R^n, with n < k, and let A be the n × k matrix having the vectors in S as columns. Then the system AX = 0 has nontrivial solutions by Corollary 2.6. But if X = [x1, ..., xk], then AX = x1v1 + ... + xkvk, and so, by the definition of linear dependence, the existence of a nontrivial solution to AX = 0 implies that S is linearly dependent.

(17) Notice that xf'(x) is not a scalar multiple of f(x). For if the terms of f(x) are written in order of descending degree, and the first two nonzero terms are, respectively, a_n x^n and a_k x^k, then the corresponding terms of xf'(x) are n·a_n x^n and k·a_k x^k, which are different multiples of a_n x^n and a_k x^k, respectively, since n ≠ k.

(18) Theorem 4.5, part (3) shows that span(S) ⊆ W. Thus v ∉ span(S). Now if S ∪ {v} is linearly dependent, then there exists {w1, ..., wn} ⊆ S so that a1w1 + ... + anwn + bv = 0, with not all of a1, ..., an, b equal to zero. Also, b ≠ 0, since S is linearly independent. Hence, v = -(a1/b)w1 - ... - (an/b)wn ∈ span(S), a contradiction.

(19) (a) Suppose a1v1 + ... + akvk = 0. Then a1Av1 + ... + akAvk = 0 ⇒ a1 = ... = ak = 0, since T is linearly independent.

(b) The converse to the statement is: If S = {v1, ..., vk} is a linearly independent subset of R^m, then T = {Av1, ..., Avk} is a linearly independent subset of R^n with Av1, ..., Avk distinct. We can construct specific counterexamples that contradict either (or both) of the conclusions of the converse. For a specific counterexample in which Av1, ..., Avk are distinct but T is linearly dependent, let A = [1 1; 1 1] and S = {[1, 0], [1, 2]}. Then T = {[1, 1], [3, 3]}. Note that S is linearly independent, but the vectors Av1 and Av2 are distinct and T is linearly dependent. For a specific counterexample in which Av1, ..., Avk are not distinct but T is linearly independent, use A = [1 1; 1 1] and S = {[1, 2], [2, 1]}. Then T = {[3, 3]}. Note that Av1 and Av2 both equal [3, 3], and that both S and T are linearly independent. For a final counterexample in which both parts of the conclusion are false, let A = O22 and S = {e1, e2}. Then Av1 and Av2 both equal 0, S is linearly independent, but T = {0} is linearly dependent.

(c) The converse to the statement is: If S = {v1, ..., vk} is a linearly independent subset of R^m, then T = {Av1, ..., Avk} is a linearly independent subset of R^n with Av1, ..., Avk distinct. Suppose a1Av1 + ... + akAvk = 0. Then a1A^{-1}Av1 + ... + akA^{-1}Avk = 0 ⇒ a1v1 + ... + akvk = 0 ⇒ a1 = ... = ak = 0, since S is linearly independent.


(20) Case 1: S is empty. This case is obvious, since the empty set is the only subset of S.

Case 2: Suppose S = {v1, ..., vn} is a finite nonempty linearly independent set in V, and let S1 be a subset of S. If S1 is empty, it is linearly independent by definition. Suppose S1 is nonempty. After renumbering the subscripts if necessary, we can assume S1 = {v1, ..., vk}, with k ≤ n. If a1v1 + ... + akvk = 0, then a1v1 + ... + akvk + 0v_{k+1} + ... + 0vn = 0, and so a1 = ... = ak = 0, since S is linearly independent. Hence, S1 is linearly independent.

Case 3: Suppose S is an infinite linearly independent subset of V. Then any finite subset of S is linearly independent by the definition of linear independence for infinite subsets. If S1 is an infinite subset, then any finite subset S2 of S1 is also a finite subset of S. Hence S2 is linearly independent because S is. Thus, S1 is linearly independent, by definition.

(21) First, let S = {a}. Then a is the only vector in S, so we must only consider span(S - {a}). But span(S - {a}) = span({ }) = {0} (by definition). Thus, a ∈ span(S - {a}) if and only if a = 0, which is true if and only if {a} is linearly dependent.

Next, consider S = { }. The empty set is defined to be linearly independent. Also, there are no vectors in S, so, in particular, there are no vectors v in S such that v ∈ span(S - {v}).

(22) First, suppose that v1 ≠ 0 and that for every k, with 2 ≤ k ≤ n, we have vk ∉ span({v1, ..., v_{k-1}}). It is enough to show that S is linearly independent by assuming that S is linearly dependent and showing that this leads to a contradiction. If S is linearly dependent, then there are real numbers a1, ..., an, not all zero, such that

0 = a1v1 + ... + aivi + ... + anvn.

Suppose k is the largest subscript such that ak ≠ 0. Then

0 = a1v1 + ... + akvk.

If k = 1, then a1v1 = 0 with a1 ≠ 0. This implies that v1 = 0, a contradiction. If k ≥ 2, then solving for vk yields

vk = (-a1/ak)v1 + ... + (-a_{k-1}/ak)v_{k-1},

where vk is expressed as a linear combination of v1, ..., v_{k-1}, contradicting the fact that vk ∉ span({v1, ..., v_{k-1}}).

Conversely, suppose S is linearly independent. Notice that v1 ≠ 0, since any finite subset of V containing 0 is linearly dependent. We must prove that for each k with 1 ≤ k ≤ n, vk ∉ span({v1, ..., v_{k-1}}). Now, by Corollary 4.6, span({v1, ..., v_{k-1}}) ⊆ span(S - {vk}), since {v1, ..., v_{k-1}} ⊆ S - {vk}. But, by the first "boxed" alternate characterization of linear independence following Example 11 in the textbook, vk ∉ span(S - {vk}). Therefore, vk ∉ span({v1, ..., v_{k-1}}).


(23) Reversing the order of the elements, f^(k) ≠ a_{k+1}f^(k+1) + ... + a_n f^(n), for any a_{k+1}, ..., a_n, because all polynomials on the right have lower degrees than f^(k). Now use Exercise 22.

(24) (a) If S is linearly independent, then the zero vector has a unique expression by Theorem 4.10.

Conversely, suppose v ∈ span(S) has a unique expression of the form v = a1v1 + ... + anvn, where v1, ..., vn are in S. To show that S is linearly independent, it is enough to show that any finite subset S1 = {w1, ..., wk} of S is linearly independent. Suppose b1w1 + ... + bkwk = 0. Now b1w1 + ... + bkwk + a1v1 + ... + anvn = 0 + v = v. If wi = vj for some j, then the uniqueness of the expression for v implies that bi + aj = aj, yielding bi = 0. If, instead, wi does not equal any vj, then bi = 0, again by the uniqueness assumption. Thus each bi must equal zero. Hence, S1 is linearly independent by the definition of linear independence.

(b) S is linearly dependent iff every vector v in span(S) can be expressed in more than one way as a linear combination of vectors in S (ignoring zero coefficients).

(25) Follow the hint in the text. The fundamental eigenvectors are found by solving the homogeneous system (λIn - A)v = 0. Each vi has a "1" in the position corresponding to the ith independent variable for the system and a "0" in the position corresponding to every other independent variable. Hence, any linear combination a1v1 + ... + akvk is an n-vector having the value ai in the position corresponding to the ith independent variable for the system, for each i. So if this linear combination equals 0, then each ai must equal 0, proving linear independence.

(26) (a) Since T ∪ {v} is linearly dependent, by the definition of linear independence, there is some finite subset {t1, ..., tk} of T such that a1t1 + ... + aktk + bv = 0, with not all coefficients equal to zero. But if b = 0, then a1t1 + ... + aktk = 0 with not all ai = 0, contradicting the fact that T is linearly independent. Hence, b ≠ 0, and thus v = (-a1/b)t1 + ... + (-ak/b)tk. Hence, v ∈ span(T).

(b) The statement in part (b) is merely the contrapositive of the statement in part (a).

(27) Suppose that S is linearly independent, and suppose v ∈ span(S) can be expressed both as v = a1u1 + ... + akuk and v = b1v1 + ... + blvl, for distinct u1, ..., uk ∈ S and distinct v1, ..., vl ∈ S, where these expressions differ in at least one nonzero term. Since the ui's might not be distinct from the vi's, we consider the set X = {u1, ..., uk} ∪ {v1, ..., vl} and label the distinct vectors in X as {w1, ..., wm}. Then we can express v = a1u1 + ... + akuk as v = c1w1 + ... + cmwm, and also v = b1v1 + ... + blvl as v = d1w1 + ... + dmwm, choosing the scalars ci, di, 1 ≤ i ≤ m, with ci = aj if wi = uj, ci = 0 otherwise, and di = bj if wi = vj, di = 0 otherwise. Since the original linear combinations for v are distinct, we know that ci ≠ di for some i. Now, v - v = 0 = (c1 - d1)w1 + ... + (cm - dm)wm. Since {w1, ..., wm} ⊆ S, a linearly independent set, each cj - dj = 0 for every j with 1 ≤ j ≤ m. But this is a contradiction, since ci ≠ di.

Conversely, assume every vector in span(S) can be uniquely expressed as a linear combination of elements of S. Since 0 ∈ span(S), there is exactly one linear combination of elements of S that equals 0. Now, if {v1, ..., vn} is any finite subset of S, we have 0 = 0v1 + ... + 0vn. Because this representation is unique, it means that in any linear combination of {v1, ..., vn} that equals 0, the only possible coefficients are zeroes. Thus, by definition, S is linearly independent.

(28) (a) F

(b) T

(c) T

(d) F

(e) T

(f) T

(g) F

(h) T

(i) T

Section 4.5

(1) For each part, row reduce the matrix whose rows are the given vectors. The result will be I4. Hence, by the Simplified Span Method, the set of vectors spans R^4. Row reducing the matrix whose columns are the given vectors also results in I4. Therefore, by the Independence Test Method, the set of vectors is linearly independent. (After the first row reduction, you could just apply part (1) of Theorem 4.13, and then skip the second row reduction.)
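The double check just described (rows for spanning, columns for independence) is easy to script. The four vectors below are our own sample basis of R^4, not the exercise's vectors, and the `rref` helper is our own:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan reduction to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0])
    lead = 0
    for r in range(len(m)):
        if lead >= ncols:
            break
        i = r
        while m[i][lead] == 0:
            i += 1
            if i == len(m):
                i, lead = r, lead + 1
                if lead == ncols:
                    return m
        m[i], m[r] = m[r], m[i]
        m[r] = [x / m[r][lead] for x in m[r]]
        for j in range(len(m)):
            if j != r:
                m[j] = [a - m[j][lead] * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

# Hypothetical basis candidate for R^4 (our own illustration).
B = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]]
cols = [list(c) for c in zip(*B)]   # the same vectors written as columns
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(rref(B) == I4 and rref(cols) == I4)   # True: spans R^4 and is independent
```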

(2) Follow the procedure in the answer to Exercise 1.

(3) Follow the procedure in the answer to Exercise 1, except that row reduction results in I5 instead of I4.

(4) (a) Not a basis: |S| < dim(R^4). (linearly independent but does not span)

(b) Not a basis: |S| < dim(R^4). (linearly dependent and does not span)

(c) Basis: Follow the procedure in the answer to Exercise 1.

(d) Not a basis: [1 -2 0 2; 3 0 6 10; 2 6 10 -3; 0 7 7 1] row reduces to [1 0 2 0; 0 1 1 0; 0 0 0 1; 0 0 0 0]. (linearly dependent and does not span)

(e) Not a basis: |S| > dim(R^4). (linearly dependent but spans)
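Part (d) can be reproduced directly; the row of zeroes in the reduced row echelon form shows the four vectors neither span R^4 nor are independent. A sketch (the `rref` helper is our own, not from the textbook):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan reduction to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0])
    lead = 0
    for r in range(len(m)):
        if lead >= ncols:
            break
        i = r
        while m[i][lead] == 0:
            i += 1
            if i == len(m):
                i, lead = r, lead + 1
                if lead == ncols:
                    return m
        m[i], m[r] = m[r], m[i]
        m[r] = [x / m[r][lead] for x in m[r]]
        for j in range(len(m)):
            if j != r:
                m[j] = [a - m[j][lead] * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

S = [[1, -2, 0, 2], [3, 0, 6, 10], [2, 6, 10, -3], [0, 7, 7, 1]]   # part (d)
R = rref(S)
print(R[-1] == [0, 0, 0, 0])   # True: a zero row, so S is not a basis for R^4
```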

(5) (a) [-1, 1, 1, -1] is not a scalar multiple of [2, 3, 0, -1]. Also, [1, 4, 1, -2] = 1[2, 3, 0, -1] + 1[-1, 1, 1, -1], and [3, 2, -1, 0] = 1[2, 3, 0, -1] - 1[-1, 1, 1, -1].

(b) 2


(c) No; dim(span(S)) = 2 ≠ 4 = dim(R^4)

(d) Yes. B is a basis for span(S) by Theorem 4.14, and so B is a spanning set for span(S). Finally, by part (1) of Theorem 4.13, no set smaller than B could be a spanning set for span(S).

(6) (a) Solving an appropriate homogeneous system shows that B is linearly independent. Also,
x - 1 = (1/12)(x^3 - x^2 + 2x + 1) + (1/12)(2x^3 + 4x - 7) - (1/12)(3x^3 - x^2 - 6x + 6)
and x^3 - 3x^2 - 22x + 34 = 1(x^3 - x^2 + 2x + 1) - 3(2x^3 + 4x - 7) + 2(3x^3 - x^2 - 6x + 6).

(b) dim(span(S)) = 3

(c) No; dim(span(S)) = 3 ≠ 4 = dim(P3)

(d) By Theorem 4.14, B is a basis for span(S), and so B is a spanning set for span(S). Finally, by part (1) of Theorem 4.13, no set smaller than B could be a spanning set for span(S).
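The coefficients 1/12, 1/12, -1/12 in part (a) of Exercise 6 can be recovered by solving a linear system in the polynomials' coefficient vectors (listed from the x^3 coefficient down to the constant term). A sketch in exact arithmetic (the `rref` helper is our own, not from the textbook):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan reduction to reduced row echelon form, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0])
    lead = 0
    for r in range(len(m)):
        if lead >= ncols:
            break
        i = r
        while m[i][lead] == 0:
            i += 1
            if i == len(m):
                i, lead = r, lead + 1
                if lead == ncols:
                    return m
        m[i], m[r] = m[r], m[i]
        m[r] = [x / m[r][lead] for x in m[r]]
        for j in range(len(m)):
            if j != r:
                m[j] = [a - m[j][lead] * b for a, b in zip(m[j], m[r])]
        lead += 1
    return m

# Columns 0-2: coefficient vectors of the basis polynomials of B;
# column 3: the target polynomial x - 1.
M = [[1, 2, 3, 0],
     [-1, 0, -1, 0],
     [2, 4, -6, 1],
     [1, -7, 6, -1]]
R = rref(M)
coords = [R[i][3] for i in range(3)]
print(coords)   # [Fraction(1, 12), Fraction(1, 12), Fraction(-1, 12)]
```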

(7) (a) W is nonempty, since 0 ∈ W. Let X1, X2 ∈ W, c ∈ R. Then A(X1 + X2) = AX1 + AX2 = 0 + 0 = 0. Also, A(cX1) = c(AX1) = c0 = 0. Hence W is closed under addition and scalar multiplication, and so is a subspace by Theorem 4.2.

(b) A row reduces to [1 0 1/5 2/5 1; 0 1 2/5 -1/5 -1; 0 0 0 0 0; 0 0 0 0 0]. Thus, a basis for W is {[-1, -2, 5, 0, 0], [-2, 1, 0, 5, 0], [-1, 1, 0, 0, 1]}.

(c) dim(W) = 3; rank(A) = 2; 3 + 2 = 5
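Part (b)'s basis can be read off from the reduced row echelon form by the usual free-variable recipe. Below is a sketch that does this for the RREF shown above (which has the same null space as A itself, since A is not reprinted in the answer), then clears denominators to match the listed basis. The helper code is our own:

```python
from fractions import Fraction
from math import lcm

# Nonzero rows of the RREF of A from part (b); columns 2, 3, 4 are free.
R = [[Fraction(1), Fraction(0), Fraction(1, 5), Fraction(2, 5), Fraction(1)],
     [Fraction(0), Fraction(1), Fraction(2, 5), Fraction(-1, 5), Fraction(-1)]]
n = 5
pivots = [row.index(1) for row in R]            # pivot columns: 0 and 1
free = [j for j in range(n) if j not in pivots]

basis = []
for f in free:
    v = [Fraction(0)] * n
    v[f] = Fraction(1)                          # set this free variable to 1
    for r, p in enumerate(pivots):
        v[p] = -R[r][f]                         # back-substitute pivot variables
    scale = lcm(*(x.denominator for x in v))    # clear denominators
    basis.append([int(x * scale) for x in v])

print(basis)  # [[-1, -2, 5, 0, 0], [-2, 1, 0, 5, 0], [-1, 1, 0, 0, 1]]
```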

(8) The dimension of a proper nontrivial subspace must be 1 or 2, by Theorem 4.16. If the basis for the subspace contains one [two] elements, then the subspace is a line [plane] through the origin, as in Exercise 18 of Section 4.3.

(9) These vectors are linearly independent by Exercise 23, Section 4.4. The given set is clearly the basis of a subspace of Pn of dimension n + 1. Apply Theorem 4.16 to show that this subspace equals Pn.

(10) (a) Let S = {I2, A, A^2, A^3, A^4}. If S is linearly independent, then S is a basis for W = span(S) ⊆ M22, and since dim(W) = 5 and dim(M22) = 4, this contradicts Theorem 4.16. Hence S is linearly dependent. Now use the definition of linear dependence.

(b) Generalize the proof in part (a).

(11) (a) B is easily seen to be linearly independent using Exercise 22, Section 4.4. To show B spans V, note that every f ∈ V can be expressed as f = (x - 2)g, for some g ∈ P4.


(b) 5

(c) {(x - 2)(x - 3), x(x - 2)(x - 3), x^2(x - 2)(x - 3), x^3(x - 2)(x - 3)}

(d) 4

(12) (a) Let V = R^3, and let S = {[1, 0, 0], [2, 0, 0], [3, 0, 0]}.

(b) Let V = R^3, and let T = {[1, 0, 0], [2, 0, 0], [3, 0, 0]}.

(13) If S spans V, then S is a basis by Theorem 4.13. If S is linearly independent, then S is a basis by Theorem 4.13.

(14) By Theorem 4.13, if S spans V, then S is a basis for V, so S is linearly independent. Similarly, if S is linearly independent, then S is a basis for V, so S spans V.

(15) (a) Suppose B = {v1, ..., vn}.
Linear independence: a1Av1 + ... + anAvn = 0 ⇒ A^{-1}(a1Av1 + ... + anAvn) = A^{-1}0 ⇒ a1v1 + ... + anvn = 0 ⇒ a1 = ... = an = 0.
Spanning: Suppose A^{-1}v = a1v1 + ... + anvn. Then v = AA^{-1}v = a1Av1 + ... + anAvn.

(b) Similar to part (a).

(c) Note that Aei = (ith column of A).

(d) Note that eiA = (ith row of A).

(16) (a) If S = {f1, ..., fk}, let n = max(degree(f1), ..., degree(fk)). Then S ⊆ Pn.

(b) Apply Theorem 4.5, part (3).  (c) x^{n+1} ∉ span(S).

(17) (a) Suppose V is finite dimensional. Since V has an infinite linearly independent subset, part (2) of Theorem 4.13 is contradicted.

(b) {1, x, x^2, ...} is a linearly independent subset of P, and hence, of any vector space containing P. Apply part (a).

(18) (a) It is given that B is linearly independent.

(b) Span(B) is a subspace of V, and so by part (3) of Theorem 4.5, span(S) ⊆ span(B). Thus, V ⊆ span(B). Also, B ⊆ V, and so span(B) ⊆ V, again by part (3) of Theorem 4.5. Hence, span(B) = V.

(c) Since w ∉ B, this follows directly from the definition for "B is a maximal linearly independent subset of S" given in the textbook.


(d) Since C = B ∪ {w} is linearly dependent, by the definition of linear independence, there is some finite subset {v1, ..., vk} of B such that a1v1 + ... + akvk + bw = 0, with not all coefficients equal to zero. But if b = 0, then a1v1 + ... + akvk = 0 with not all ai = 0, contradicting the fact that B is linearly independent. Hence, b ≠ 0, and thus w = (-a1/b)v1 + ... + (-ak/b)vk. Hence, w ∈ span(B). (This is essentially identical to the solution for part (a) of Exercise 26 of Section 4.4.)

(e) Part (d) proves that S ⊆ span(B), since all vectors in S that are not already in B are proven to be in span(B). Part (b) then shows that B spans V, which, by part (a), is sufficient to finish the proof of the theorem.

(19) (a) It is given that S is a spanning set.

(b) Let S be a spanning set for a vector space V. If S is linearly dependent, then there is a subset C of S such that C ≠ S and C spans V.

(c) The statement in part (b) is essentially half of the "if and only if" statement in Exercise 12 in Section 4.4. We rewrite the appropriate half of the proof of that exercise. Since S is linearly dependent, a1v1 + ... + anvn = 0 for some {v1, ..., vn} ⊆ S and a1, ..., an ∈ R, with some ai ≠ 0. Then vi = -(a1/ai)v1 - ... - (a_{i-1}/ai)v_{i-1} - (a_{i+1}/ai)v_{i+1} - ... - (an/ai)vn. So, if w ∈ span(S), then w = bvi + c1w1 + ... + ckwk, for some {w1, ..., wk} ⊆ S - {vi} and b, c1, ..., ck ∈ R. Hence, w = -(ba1/ai)v1 - ... - (ba_{i-1}/ai)v_{i-1} - (ba_{i+1}/ai)v_{i+1} - ... - (ban/ai)vn + c1w1 + ... + ckwk ∈ span(S - {vi}). So, span(S) ⊆ span(S - {vi}). Also, span(S - {vi}) ⊆ span(S) by Corollary 4.6. Thus span(S - {vi}) = span(S) = V. Letting C = S - {vi} completes the proof.

(20) Suppose B is a basis for V. Then B is linearly independent in V. If v ∉ B, then v ∈ span(B) = V. Thus, C = B ∪ {v} is linearly dependent, since v ∈ span(C - {v}) = span(B). Thus B is a maximal linearly independent subset of V.

(21) Suppose B is a basis for V. Then B spans V. Also, if v ∈ B, then span(B - {v}) ≠ span(B) = V by Exercise 12, Section 4.4. Thus B is a minimal spanning set for V.

(22) (a) The empty set is linearly independent and is a subset of W. Hence 0 ∈ A.

(b) Since V and W share the same operations, every linearly independent subset T of W is also a linearly independent subset of V (by the definition of linear independence). Hence, using part (2) of Theorem 4.13 on T in V shows that k = |T| ≤ dim(V).


(c) T is clearly linearly independent by the definition of A. Suppose C ⊆ W with T ⊆ C and T ≠ C. We must show that C is linearly dependent. If not, then |C| ∈ A, by the definition of A. (Note that |C| is finite, by an argument similar to that given in part (b).) But T ⊆ C and T ≠ C implies that |C| > |T| = n. This contradicts the fact that n is the largest element of A.

(d) This is obvious.

(e) Since T is a finite basis for W, from part (d), we see that W is finite dimensional. Also, dim(W) = |T| ≤ dim(V), by part (b).

(f) Suppose dim(W) = dim(V), and let T be a basis for W. Then T is a linearly independent subset of V (see the discussion in part (b)) with |T| = dim(W) = dim(V). Part (2) of Theorem 4.13 then shows that T is a basis for V. Hence, W = span(T) = V.

(g) The converse of part (f) is "If W = V, then dim(W) = dim(V)." This is obviously true, since the dimension of a finite dimensional vector space is unique.

(23) Let B = {v1, ..., v_{n-1}} ⊆ R^n be a basis for V. Let A be the (n - 1) × n matrix whose rows are the vectors in B, and consider the homogeneous system AX = 0. By Corollary 2.6, the system has a nontrivial solution x. Therefore, Ax = 0. That is, x · vi = 0 for each 1 ≤ i ≤ n - 1. Now, suppose v ∈ V. Then there are a1, ..., a_{n-1} ∈ R such that v = a1v1 + ... + a_{n-1}v_{n-1}. Hence, x · v = x · a1v1 + ... + x · a_{n-1}v_{n-1} = 0 + ... + 0 = 0. Therefore V ⊆ {v ∈ R^n | x · v = 0}. Let W = {v ∈ R^n | x · v = 0}. So, V ⊆ W. Now, it is easy to see that W is a subspace of R^n. (Clearly 0 ∈ W, since x · 0 = 0. Hence W is nonempty. Next, if w1, w2 ∈ W, then x · (w1 + w2) = x · w1 + x · w2 = 0 + 0 = 0. Thus, (w1 + w2) ∈ W, and so W is closed under addition. Also, if w ∈ W and c ∈ R, then x · (cw) = c(x · w) = c(0) = 0. Therefore (cw) ∈ W, and W is closed under scalar multiplication. Hence, by Theorem 4.2, W is a subspace of R^n.) If V = W, we are done. Suppose, then, that V ≠ W. Then, by Theorem 4.16, n - 1 = dim(V) < dim(W) ≤ dim(R^n) = n. Hence, dim(W) = n, and so W = R^n, by Theorem 4.16. But x ∉ W, since x · x ≠ 0, because x is nontrivial. This contradiction completes the proof.

(24) Let B be a maximal linearly independent subset of S. Such a B exists: S only has finitely many linearly independent subsets, since it only has finitely many elements, and since { } ⊆ S, S has at least one linearly independent subset. Select any set B of the largest possible size from the list of all of the linearly independent subsets of S. Then no other linearly independent subset of S can be larger than B. So, by Theorem 4.14, B is a basis for V. Since |B| is finite, V is finite dimensional.


Andrilli/Hecker - Answers to Exercises Section 4.6

(25) (a) T (b) F (c) F (d) F (e) F (f) T (g) F (h) F (i) F (j) T

Section 4.6

Problems 1 through 10 ask you to construct a basis for some vector space. The answers here are those you will most likely get using the techniques in the textbook. However, other answers are possible.

(1) (a) {[1, 0, 0, 2, -2], [0, 1, 0, 0, 1], [0, 0, 1, -1, 0]}

(b) {[1, 0, -1/5, 6/5, 3/5], [0, 1, -1/5, -9/5, -2/5]}

(c) {e1, e2, e3, e4, e5}

(d) {[1, 0, 0, -2, -13/4], [0, 1, 0, 3, 9/2], [0, 0, 1, 0, -1/4]}

(2) {x^3 - 3x, x^2 - x, 1}

(3) { [1 0; 4/3 1/3; 2 0], [0 1; -1/3 -1/3; 0 0], [0 0; 0 0; 0 1] } (matrices here and below are written with rows separated by semicolons)

(4) (a) {[3, 1, -2], [6, 2, -3]}

(b) {[4, 7, 1], [1, 0, 0]}

(c) {[1, 3, -2], [2, 1, 4], [0, 1, -1]}

(d) {[1, 4, -2], [2, -8, 5], [0, -7, 2]}

(e) {[3, -2, 2], [1, 2, -1], [3, -2, 7]}

(f) {[3, 1, 0], [2, -1, 7], [1, 5, 7]}

(g) {e1, e3}

(h) {[1, -3, 0], [0, 1, 1]}

(5) (a) {x^3 - 8x^2 + 1, 3x^3 - 2x^2 + x, 4x^3 + 2x - 10, x^3 - 20x^2 - x + 12}

(b) {-2x^3 + x + 2, 3x^3 - x^2 + 4x + 6, 8x^3 + x^2 + 6x + 10}

(c) {x^3, x^2, x} (d) {1, x, x^2} (e) {x^3 + x^2, x, 1}

(f) {8x^3, 8x^3 + x^2, 8x^3 + x, 8x^3 + 1}

(6) (a) One answer is {E_ij | 1 ≤ i, j ≤ 3}, where E_ij denotes the 3 × 3 matrix with (i, j) entry = 1 and all other entries 0.

(b) { [1 1 1; 1 1 1; 1 1 1], [-1 1 1; 1 1 1; 1 1 1], [1 -1 1; 1 1 1; 1 1 1], [1 1 -1; 1 1 1; 1 1 1], [1 1 1; -1 1 1; 1 1 1], [1 1 1; 1 -1 1; 1 1 1], [1 1 1; 1 1 -1; 1 1 1], [1 1 1; 1 1 1; -1 1 1], [1 1 1; 1 1 1; 1 -1 1] }

(c) { [1 0 0; 0 0 0; 0 0 0], [0 1 0; 1 0 0; 0 0 0], [0 0 1; 0 0 0; 1 0 0], [0 0 0; 0 1 0; 0 0 0], [0 0 0; 0 0 1; 0 1 0], [0 0 0; 0 0 0; 0 0 1] }

(d) { [1 0 0; 0 1 0; 0 0 1], [1 1 0; 0 1 0; 0 0 1], [1 0 1; 0 1 0; 0 0 1], [1 0 0; 1 1 0; 0 0 1], [1 0 0; 0 1 1; 0 0 1], [1 0 0; 0 1 0; 1 0 1], [1 0 0; 0 1 0; 0 1 1], [1 0 0; 0 2 0; 0 0 1], [1 0 0; 0 1 0; 0 0 2] }

(7) (a) {[1, -3, 0, 1, 4], [2, 2, 1, -3, 1], [1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0]}

(b) {[1, 1, 1, 1, 1], [0, 1, 1, 1, 1], [0, 0, 1, 1, 1], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0]}

(c) {[1, 0, -1, 0, 0], [0, 1, -1, 1, 0], [2, 3, -8, -1, 0], [1, 0, 0, 0, 0], [0, 0, 0, 0, 1]}

(8) (a) {x^3 - x^2, x^4 - 3x^3 + 5x^2 - x, x^4, x^3, 1}

(b) {3x - 2, x^3 - 6x + 4, x^4, x^2, x}

(c) {x^4 - x^3 + x^2 - x + 1, x^3 - x^2 + x - 1, x^2 - x + 1, x^2, x}

(9) (a) { [1 -1; -1 1; 0 0], [0 0; 1 -1; -1 1], [1 0; 0 0; 0 0], [0 1; 0 0; 0 0], [0 0; 1 0; 0 0], [0 0; 0 0; 1 0] }

(b) { [0 -2; 1 0; -1 2], [0 -3; 0 1; 3 -6], [0 1; 1 1; -4 8], [1 0; 0 0; 0 0], [0 1; 0 0; 0 0], [0 0; 0 0; 1 0] }

(c) { [3 0; -1 7; 0 1], [-1 0; 1 3; 0 -2], [2 0; 3 1; 0 -1], [6 0; 0 1; 0 -1], [0 1; 0 0; 0 0], [0 0; 0 0; 1 0] }

(10) {E_ij | 1 ≤ i ≤ j ≤ 4}, where E_ij is the 4 × 4 matrix with (i, j) entry = 1 and all other entries 0. Notice that the condition 1 ≤ i ≤ j ≤ 4 assures that only upper triangular matrices are included.

(11) (a) 2 (b) 8 (c) 4 (d) 3

(12) (a) A basis for U_n is {E_ij | 1 ≤ i ≤ j ≤ n}, where E_ij is the n × n matrix having a zero in every entry, except for the (i, j)th entry, which equals 1. Since there are n - i + 1 entries in the ith row of an n × n matrix on or above the main diagonal, this basis contains Σ_{i=1}^n (n - i + 1) = Σ_{i=1}^n n - Σ_{i=1}^n i + Σ_{i=1}^n 1 = n^2 - n(n+1)/2 + n = (n^2 + n)/2 elements.

Similarly, a basis for L_n is {E_ij | 1 ≤ j ≤ i ≤ n}, and a basis for the symmetric n × n matrices is {E_ij + E_ij^T | 1 ≤ i ≤ j ≤ n}.

(b) (n^2 - n)/2
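The counting argument above can be spot-checked mechanically. This small sketch (not part of the original manual) counts the index pairs directly and compares them with the closed form:

```python
# Count pairs (i, j) with 1 <= i <= j <= n (the index set of the basis of U_n)
# and compare with the closed form (n^2 + n)/2 derived above.
for n in range(1, 9):
    count = sum(1 for i in range(1, n + 1) for j in range(i, n + 1))
    assert count == (n * n + n) // 2
print("counts match (n^2 + n)/2 for n = 1..8")
```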

(13) (a) S_A is nonempty, since 0 ∈ S_A. Let X1, X2 ∈ S_A, c ∈ R. Then A(X1 + X2) = AX1 + AX2 = 0 + 0 = 0. Also, A(cX1) = c(AX1) = c0 = 0. Hence S_A is closed under addition and scalar multiplication, and so is a subspace by Theorem 4.2.

(b) Let B be the reduced row echelon form of A. Each basis element of S_A is found by setting one independent variable of the system AX = O equal to 1 and the remaining independent variables equal to zero. (The values of the dependent variables are then determined from these values.) Since there is one independent variable for each nonpivot column in B, dim(S_A) = number of nonpivot columns of B. But, clearly, rank(A) = number of pivot columns in B. So dim(S_A) + rank(A) = total number of columns in B = n.
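The identity dim(S_A) + rank(A) = n can also be illustrated numerically. This NumPy sketch (not from the manual; the 3 × 5 matrix is a hypothetical example) computes the nullity independently from the singular values rather than from the identity itself:

```python
import numpy as np

A = np.array([[1., 2., 0., 1., 3.],
              [0., 0., 1., 4., 1.],
              [1., 2., 1., 5., 4.]])   # third row = first + second, so rank 2
n = A.shape[1]
rank = np.linalg.matrix_rank(A)

# Nullity counted independently: zero singular values, plus the n - min(m, n)
# directions the SVD of a wide matrix cannot reach.
_, s, _ = np.linalg.svd(A)
nullity = int(sum(sv < 1e-10 for sv in s)) + (n - len(s))
assert rank + nullity == n
print(rank, nullity)   # 2 3
```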

(14) Let V and S be as given in the statement of the theorem. Let A = {k | a set T exists with T ⊆ S, |T| = k, and T linearly independent}.
Step 1: The empty set is linearly independent and is a subset of S. Hence 0 ∈ A, and A is nonempty.
Step 2: Suppose k ∈ A. Then every linearly independent subset T of S is also a linearly independent subset of V. Hence, using part (2) of Theorem 4.13 on T shows that k = |T| ≤ dim(V).
Step 3: Suppose n is the largest number in A, which must exist, since A is nonempty and all its elements are ≤ dim(V). Let B be a linearly independent subset of S with |B| = n, which exists because n ∈ A. We want to show that B is a maximal linearly independent subset of S. Now, B is given as linearly independent. Suppose C ⊆ S with B ⊆ C and


B ≠ C. We must show that C is linearly dependent. If not, then |C| ∈ A, by the definition of A. (Note that |C| is finite, by an argument similar to that given in Step 2.) But B ⊆ C and B ≠ C implies that |C| > |B| = n. This contradicts the fact that n is the largest element of A.
Step 4: The set B from Step 3 is clearly a basis for V, by Theorem 4.14. This completes the proof.

(15) (a) Let B′ be a basis for W. Expand B′ to a basis B for V (using Theorem 4.18).

(b) No; consider the subspace W of R^3 given by W = {[a, 0, 0] | a ∈ R}. No subset of B = {[1, 1, 0], [1, -1, 0], [0, 0, 1]} (a basis for R^3) is a basis for W.

(c) Yes; consider Y = span(B′).

(16) (a) Let B1 be a basis for W. Expand B1 to a basis B for V (using Theorem 4.18). Let B2 = B - B1, and let W′ = span(B2). Suppose B1 = {v1, ..., vk} and B2 = {u1, ..., ul}. Since B is a basis for V, all v ∈ V can be written as v = w + w′ with w = a1v1 + ... + akvk ∈ W and w′ = b1u1 + ... + blul ∈ W′.
For uniqueness, suppose v = w1 + w′1 = w2 + w′2, with w1, w2 ∈ W and w′1, w′2 ∈ W′. Then w1 - w2 = w′2 - w′1 ∈ W ∩ W′ = {0}. Hence, w1 = w2 and w′1 = w′2.

(b) In R^3, consider W = {[a, b, 0] | a, b ∈ R}. We could let W′ = {[0, 0, c] | c ∈ R} or W′ = {[0, c, c] | c ∈ R}.

(17) (a) Let A be the matrix whose rows are the n-vectors in S. Let C be the reduced row echelon form matrix for A. If the nonzero rows of C form the standard basis for R^n, then span(S) = span({e1, ..., en}) = R^n. Conversely, if span(S) = R^n, then there must be at least n nonzero rows of C by part (1) of Theorem 4.13. But this means there are at least n rows with pivots, and hence all n columns have pivots. Thus the nonzero rows of C are e1, ..., en, which form the standard basis for R^n.

(b) Let A and C be as in the answer to part (a). Note that A and C are n × n matrices. Now, since |B| = n, B is a basis for R^n ⇔ span(B) = R^n (by Theorem 4.13) ⇔ the rows of C form the standard basis for R^n (from part (a)) ⇔ C = I_n ⇔ |A| ≠ 0.

(18) Let A be an m × n matrix and let S ⊆ R^n be the set of the m rows of A.

(a) Suppose C is the reduced row echelon form of A. Then the Simplified Span Method shows that the nonzero rows of C form a basis for span(S). But the number of such rows is rank(A). Hence, dim(span(S)) = rank(A).


(b) Let D be the reduced row echelon form of A^T. By the Independence Test Method, the pivot columns of D correspond to the columns of A^T which make up a basis for span(S). Hence, the number of pivot columns of D equals dim(span(S)). However, the number of pivot columns of D equals the number of nonzero rows of D (each pivot column shares the single pivot element with a unique pivot row, and vice-versa), which equals rank(A^T).

(c) Both ranks equal dim(span(S)), and so they must be equal.
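The conclusion of part (c), rank(A) = rank(A^T), is easy to confirm numerically. A quick sketch (not part of the manual) with a random integer matrix:

```python
import numpy as np

# Row rank and column rank always agree; check on a random 4x6 integer matrix.
rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 6)).astype(float)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
print("rank(A) == rank(A^T)")
```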

(19) Use the fact that sin(a_i + b_j) = (cos a_i) sin b_j + (sin a_i) cos b_j. Note that the ith row of A is then (cos a_i)x1 + (sin a_i)x2. Hence each row of A is a linear combination of {x1, x2}, and so dim(span({rows of A})) ≤ 2 < n. Hence, A has fewer than n pivots when row reduced, and so |A| = 0.
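The rank-at-most-2 argument can be illustrated numerically. In this sketch (not from the manual) the angles are arbitrary random choices; for any n ≥ 3 the determinant of the sine matrix vanishes:

```python
import numpy as np

# For n >= 3, the n x n matrix with entries sin(a_i + b_j) is singular,
# since each row lies in the span of two fixed vectors.
rng = np.random.default_rng(2)
n = 5
a, b = rng.random(n), rng.random(n)
A = np.sin(a[:, None] + b[None, :])
assert abs(np.linalg.det(A)) < 1e-8
print("det is numerically zero")
```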

(20) (a) T (b) T (c) F (d) T (e) T (f) F (g) F

Section 4.7

(1) (a) [v]_B = [7, -1, -5] (b) [v]_B = [3, -2, -1, -2] (c) [v]_B = [-2, 4, -5] (d) [v]_B = [10, 3, 2, -1] (e) [v]_B = [4, -5, 3] (f) [v]_B = [-3, 2, 3] (g) [v]_B = [-1, 4, -2] (h) [v]_B = [2, -3, 1] (i) [v]_B = [5, 2, -6] (j) [v]_B = [5, -2]

(2) (a) [-102 20 36; 7 -13 -23; 6 -7 -1]

(b) [-2 -6 -1; 3 6 1; -6 -7 -1]

(c) [20 -30 -69; 24 -24 -80; -9 11 31]

(d) [-1 -4 2 -9; 4 5 1 3; 0 2 -3 1; -4 -13 13 -15]

(e) [3 2; -2 -3]

(f) [6 1 2; 1 1 2; -1 -1 -3]

(g) [-3 -2 4; 2 3 -5; 2 1 3]

(3) [(2, 6)]_B = (2, 2) (see Figure 11)

(4) (a) P = [13 31; -18 -43], Q = [-11 -8; 29 21], T = [1 3; -1 -4]

(b) P = [0 1 0; 0 0 1; 1 0 0], Q = [0 1 0; 1 0 0; 0 0 1], T = [0 0 1; 0 1 0; 1 0 0]

Figure 11: (2, 6) in B-coordinates

(c) P = [2 8 13; -6 -25 -43; 11 45 76], Q = [-24 -2 13; 0 3 -11; 39 13 -5], T = [-25 -97 -150; 31 120 185; 145 562 868]

(d) P = [-3 -45 -77 -38; -41 -250 -420 -205; 19 113 191 93; -4 -22 -37 -18], Q = [1 0 1 3; 0 1 2 -1; 1 2 6 6; 3 -1 6 36], T = [4 2 3 1; 1 -2 -1 -1; 5 1 7 2; 2 1 3 1]

(5) (a) C = ([1, -4, 0, -2, 0], [0, 0, 1, 4, 0], [0, 0, 0, 0, 1]); P = [1 6 3; 1 5 3; 1 3 2]; Q = P^(-1) = [1 -3 3; 1 -1 0; -2 3 -1]; [v]_B = [17, 4, -13]; [v]_C = [2, -2, 3]

(b) C = ([1, 0, 17, 0, 21], [0, 1, 3, 0, 5], [0, 0, 0, 1, 2]); P = [1 3 1; -5 -14 -4; 0 2 3]; Q = P^(-1) = [-34 -7 2; 15 3 -1; -10 -2 1]; [v]_B = [5, -2, 3]; [v]_C = [2, -9, 5]

(c) C = ([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]); P = [3 6 -4 -2; -1 7 -3 0; 4 -3 3 1; 6 -2 4 2]; Q = P^(-1) = [1 -4 -12 7; -2 9 27 -31/2; -5 22 67 -77/2; 5 -23 -71 41]; [v]_B = [2, 1, -3, 7]; [v]_C = [10, 14, 3, 12]

(6) For each i, v_i = 0v1 + 0v2 + ... + 1v_i + ... + 0v_n, and so [v_i]_B = [0, 0, ..., 1, ..., 0] = e_i.

(7) (a) Transition matrices to C1, C2, C3, C4, and C5, respectively: [0 1 0; 0 0 1; 1 0 0], [0 0 1; 1 0 0; 0 1 0], [1 0 0; 0 0 1; 0 1 0], [0 1 0; 1 0 0; 0 0 1], [0 0 1; 0 1 0; 1 0 0]

(b) Let B = (v1, ..., vn) and C = (v_{σ1}, ..., v_{σn}), where σ1, σ2, ..., σn is a rearrangement of the numbers 1, 2, ..., n. Let P be the transition matrix from B to C. The goal is to show, for 1 ≤ i ≤ n, that the ith row of P equals e_{σi}. This is true iff (for 1 ≤ i ≤ n) the σith column of P equals e_i. Now the σith column of P is, by definition, [v_{σi}]_C. But since v_{σi} is the ith vector in C, [v_{σi}]_C = e_i, completing the proof.

(8) (a) Following the Transition Matrix Method, it is necessary to row reduce the augmented matrix [ first vector in C, second vector in C, ..., nth vector in C | I_n ]. The algorithm from Section 2.4 for calculating inverses shows that the result obtained in the last n columns is the desired inverse.

(b) Following the Transition Matrix Method, it is necessary to row reduce the augmented matrix [ I_n | first vector in B, second vector in B, ..., nth vector in B ]. Clearly this is already in row reduced form, and so the result is as desired.

(9) Let S be the standard basis for R^n. Exercise 8(b) shows that P is the transition matrix from B to S, and Exercise 8(a) shows that Q^(-1) is the transition matrix from S to C. Theorem 4.21 finishes the proof.

(10) C = ([-142, 64, 167], [-53, 24, 63], [-246, 111, 290])

(11) (a) Easily verified with straightforward computations.

(b) [v]_B = [1, 2, -3]; Av = [14, 2, 5]; D[v]_B = [Av]_B = [2, -2, 3]

(12) (a) λ1 = 2, v1 = [2, 1, 5]; λ2 = -1, v2 = [19, 2, 31]; λ3 = -3, v3 = [6, -2, 5]

(b) D = [2 0 0; 0 -1 0; 0 0 -3]

(c) [2 19 6; 1 2 -2; 5 31 5]

(13) Let V, B, and the a_i's and w_i's be as given in the statement of the theorem.

Proof of part (1): Suppose that [w1]_B = [b1, ..., bn] and [w2]_B = [c1, ..., cn]. Then w1 = b1v1 + ... + bnvn and w2 = c1v1 + ... + cnvn. Hence, w1 + w2 = b1v1 + ... + bnvn + c1v1 + ... + cnvn = (b1 + c1)v1 + ... + (bn + cn)vn, implying [w1 + w2]_B = [(b1 + c1), ..., (bn + cn)], which equals [w1]_B + [w2]_B.
Proof of part (2): Suppose that [w1]_B = [b1, ..., bn]. Then w1 = b1v1 + ... + bnvn. Hence, a1w1 = a1(b1v1 + ... + bnvn) = a1b1v1 + ... + a1bnvn, implying [a1w1]_B = [a1b1, ..., a1bn], which equals a1[b1, ..., bn].
Proof of part (3): Use induction on k.
Base Step (k = 1): This is just part (2).
Inductive Step: Assume true for any linear combination of k vectors, and prove true for a linear combination of k + 1 vectors. Now, [a1w1 + ... + akwk + a(k+1)w(k+1)]_B = [a1w1 + ... + akwk]_B + [a(k+1)w(k+1)]_B (by part (1)) = [a1w1 + ... + akwk]_B + a(k+1)[w(k+1)]_B (by part (2)) = a1[w1]_B + ... + ak[wk]_B + a(k+1)[w(k+1)]_B (by the inductive hypothesis).

(14) Let v ∈ V. Then, using Theorem 4.20, (QP)[v]_B = Q[v]_C = [v]_D. Hence, by Theorem 4.20, QP is the (unique) transition matrix from B to D.


(15) Let B, C, V, and P be as given in the statement of the theorem. Let Q be the transition matrix from C to B. Then, by Theorem 4.21, QP is the transition matrix from B to B. But, clearly, since I[v]_B = [v]_B for all v ∈ V, Theorem 4.20 implies that I must be the transition matrix from B to B. Hence, QP = I, finishing the proof of the theorem.

(16) (a) F (b) T (c) T (d) F (e) F (f) T (g) F

Chapter 4 Review Exercises

(1) This is a vector space. To prove this, modify the proof in Example 7 in Section 4.1.

(2) Zero vector = [-4, 5]; additive inverse of [x, y] is [-x - 8, -y + 10]

(3) (a), (b), (d), (f) and (g) are not subspaces; (c) and (e) are subspaces

(4) (a) span(S) = {[a, b, c, 5a - 3b + c] | a, b, c ∈ R} ≠ R^4

(b) Basis = {[1, 0, 0, 5], [0, 1, 0, -3], [0, 0, 1, 1]}; dim(span(S)) = 3

(5) (a) span(S) = {ax^3 + bx^2 + (-3a + 2b)x + c | a, b, c ∈ R} ≠ P_3

(b) Basis = {x^3 - 3x, x^2 + 2x, 1}; dim(span(S)) = 3

(6) (a) span(S) = { [a b 4a-3b; c -2a+b-c d] | a, b, c, d ∈ R } ≠ M_23

(b) Basis = { [1 0 4; 0 -2 0], [0 1 -3; 0 1 0], [0 0 0; 1 -1 0], [0 0 0; 0 0 1] }; dim(span(S)) = 4

(7) (a) S is linearly independent

(b) S is a maximal linearly independent subset of itself. S spans R^3.

(c) No, by Theorem 4.9

(8) (a) S is linearly dependent; x^3 - 2x^2 - x + 2 = 3(-5x^3 + 2x^2 + 5x - 2) + 8(2x^3 - x^2 - 2x + 1)

(b) S does not span P_3. Maximal linearly independent subset of S = {-5x^3 + 2x^2 + 5x - 2, 2x^3 - x^2 - 2x + 1, -2x^3 + 2x^2 + 3x - 5}

(c) Yes, by Theorem 4.9. One such alternate linear combination is -5(-5x^3 + 2x^2 + 5x - 2) - 5(2x^3 - x^2 - 2x + 1) + 1(x^3 - 2x^2 - x + 2) - 1(-2x^3 + 2x^2 + 3x - 5).

(9) (a) Linearly dependent; [4 6; 3 4; 2 5] = -[-10 -14; -10 -8; -6 -12] - 2[7 12; 5 7; 3 10] + [8 16; 3 10; 2 13]

(b) { [4 0; 11 -2; 6 -1], [-10 -14; -10 -8; -6 -12], [7 12; 5 7; 3 10], [8 16; 3 10; 2 13], [6 11; 4 7; 3 9] }; S does not span M_32

(c) Yes, by Theorem 4.9. One such alternate linear combination is 2[4 0; 11 -2; 6 -1] + [-10 -14; -10 -8; -6 -12] - [7 12; 5 7; 3 10] - [8 16; 3 10; 2 13] + [4 6; 3 4; 2 5] + [6 11; 4 7; 3 9].

(10) v = 1v. Also, since v ∈ span(S), v = a1v1 + ... + anvn, for some a1, ..., an. These are 2 different ways to express v as a linear combination of vectors in T.

(11) Use the same technique as in Example 15 in Section 4.4.

(12) (a) The matrix whose rows are the given vectors row reduces to I_4, so the Simplified Span Method shows that the given set spans R^4. Since the set has 4 vectors and dim(R^4) = 4, part (1) of Theorem 4.13 shows that the given set is a basis for R^4.

(b) Similar to part (a). The matrix [2 2 13; 1 0 3; 4 1 16] row reduces to I_3.

(c) Similar to part (a). The matrix [1 5 0 3; 6 -1 4 3; 7 -4 7 -1; -3 7 -2 4] row reduces to I_4.

(13) (a) W nonempty: 0 ∈ W because A0 = O. Closure under addition: if X1, X2 ∈ W, then A(X1 + X2) = AX1 + AX2 = O + O = O. Closure under scalar multiplication: if X ∈ W, then A(cX) = cAX = cO = O.

(b) Basis = {[3, 1, 0, 0], [-2, 0, 1, 1]}

(c) dim(W) = 2, rank(A) = 2; 2 + 2 = 4

(14) (a) First, use direct computation to check that every polynomial in B is in V. Next, the matrix [1 0 0; 0 1 0; -3 -2 0; 0 0 1] clearly row reduces to [1 0 0; 0 1 0; 0 0 1; 0 0 0], and so B is linearly independent by the Independence Test Method. Finally, since the polynomial x ∉ V, dim(V) < dim(P_3) = 4. But |B| ≤ dim(V), implying that |B| = dim(V) and that B is a basis for V, by Theorem 4.13. Thus, dim(V) = 3.

(b) C = {1, x^3 - 3x^2 + 3x} is a basis for W and dim(W) = 2.

(15) (a) T = {[2, -3, 0, 1], [4, 3, 0, 4], [1, 0, 2, 1]} (b) Yes

(16) (a) T = {x^2 - 2x, x^3 - x, 2x^3 - 2x^2 + 1} (b) Yes

(17) {[2, 1, -1, 2], [1, -2, 2, -4], [0, 1, 0, 0], [0, 0, 1, 0]}

(18) { [3 -1; 0 2; 1 0], [-1 2; 0 -1; 3 0], [2 -1; 0 1; 4 0], [1 0; 0 0; 0 0], [0 0; 1 0; 0 0], [0 0; 0 0; 0 1] }

(19) {x^4 + 3x^2 + 1, x^3 - 2x^2 - 1, x + 3}

(20) (a) [v]_B = [-3, -1, -2] (b) [v]_B = [4, 2, -3] (c) [v]_B = [-3, 5, -1]

(21) (a) [v]_B = [27, -62, 6]; P = [4 2 5; -1 0 1; 3 1 -2]; [v]_C = [14, -21, 7]

(b) [v]_B = [-4, -1, 3]; P = [4 1 -2; 1 2 0; -1 1 1]; [v]_C = [-23, -6, 6]

(c) [v]_B = [4, -2, 1, -3]; P = [6 2 2 -1; 5 0 1 -1; -3 1 -1 0; -5 2 0 1]; [v]_C = [25, 24, -15, -27]

(22) (a) P = [0 0 1 0; 1 0 1 1; 1 -1 1 1; 0 1 0 1]

(b) Q = [0 1 0 1; 1 0 1 1; 1 1 0 1; 0 0 1 0]

(c) R = QP = [1 1 1 2; 1 0 2 2; 1 1 2 2; 1 -1 1 1]

(d) R^(-1) = [0 -3 2 2; 0 -1 1 0; -1 0 1 0; 1 2 -2 -1]

(23) (a) λ1 = 0, v1 = [4, 4, 13]; λ2 = 2, v2 = [-3, 2, 0]; v3 = [3, 0, 4]
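The matrix identities asserted in review exercise (22) can be verified directly; this NumPy sketch (not part of the manual) checks both R = QP and the stated inverse:

```python
import numpy as np

# Check of review exercise 22: R = QP and R_inv is the inverse of R.
P = np.array([[0, 0, 1, 0], [1, 0, 1, 1], [1, -1, 1, 1], [0, 1, 0, 1]])
Q = np.array([[0, 1, 0, 1], [1, 0, 1, 1], [1, 1, 0, 1], [0, 0, 1, 0]])
R = np.array([[1, 1, 1, 2], [1, 0, 2, 2], [1, 1, 2, 2], [1, -1, 1, 1]])
R_inv = np.array([[0, -3, 2, 2], [0, -1, 1, 0], [-1, 0, 1, 0], [1, 2, -2, -1]])

assert np.array_equal(Q @ P, R)
assert np.array_equal(R @ R_inv, np.eye(4, dtype=int))
print("22(c) and 22(d) verified")
```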


(b) D = [0 0 0; 0 2 0; 0 0 2] (c) [4 -3 3; 4 2 0; 13 0 4]

(24) (a) C = {[1, 0, 0, -7], [0, 1, 0, 6], [0, 0, 1, 4]}

(b) P = [1 -1 3; -2 -1 -5; 1 1 2]

(c) [v]_B = [25, -2, -10]; (Note: P^(-1) = [3 5 8; -1 -1 -1; -1 -2 -3].)

(d) v = [-3, 2, 3, 45]

(25) By part (b) of Exercise 8 in Section 4.7, the matrix C is the transition matrix from C-coordinates to standard coordinates. Hence, by Theorem 4.21, CP is the transition matrix from B-coordinates to standard coordinates. However, again by part (b) of Exercise 8 in Section 4.7, this transition matrix is the matrix whose columns are the vectors in B.

(26) (a) T (b) T (c) F (d) T (e) F (f) T (g) T (h) F (i) F (j) T (k) F (l) F (m) T (n) T (o) F (p) F (q) T (r) F (s) T (t) T (u) T (v) T (w) F (x) T (y) T (z) F


Chapter 5

Section 5.1

(1) (a) Linear operator: f([x, y] + [w, z]) = f([x + w, y + z]) = [3(x + w) - 4(y + z), -(x + w) + 2(y + z)] = [(3x - 4y) + (3w - 4z), (-x + 2y) + (-w + 2z)] = [3x - 4y, -x + 2y] + [3w - 4z, -w + 2z] = f([x, y]) + f([w, z]); also, f(c[x, y]) = f([cx, cy]) = [3(cx) - 4(cy), -(cx) + 2(cy)] = [c(3x - 4y), c(-x + 2y)] = c[3x - 4y, -x + 2y] = c f([x, y]).
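The two linearity properties just verified algebraically can also be spot-checked numerically for 1(a). A sketch (not from the manual) using random test vectors:

```python
import numpy as np

# Numerical spot-check of 1(a): f([x, y]) = [3x - 4y, -x + 2y] is linear.
def f(v):
    x, y = v
    return np.array([3 * x - 4 * y, -x + 2 * y])

rng = np.random.default_rng(1)
u, w = rng.random(2), rng.random(2)
c = 2.5
assert np.allclose(f(u + w), f(u) + f(w))   # additivity
assert np.allclose(f(c * u), c * f(u))      # homogeneity
print("1(a) linearity holds on random inputs")
```

A spot-check does not replace the algebraic proof, but it catches transcription errors quickly.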

(b) Not a linear transformation: h([0, 0, 0, 0]) = [2, -1, 0, -3] ≠ 0.

(c) Linear operator: k([a, b, c] + [d, e, f]) = k([a + d, b + e, c + f]) = [b + e, c + f, a + d] = [b, c, a] + [e, f, d] = k([a, b, c]) + k([d, e, f]); also, k(d[a, b, c]) = k([da, db, dc]) = [db, dc, da] = d[b, c, a] = d k([a, b, c]).

(d) Linear operator: l([a b; c d] + [e f; g h]) = l([a+e b+f; c+g d+h]) = [(a+e)-2(c+g)+(d+h) 3(b+f); -4(a+e) (b+f)+(c+g)-3(d+h)] = [(a-2c+d)+(e-2g+h) 3b+3f; -4a-4e (b+c-3d)+(f+g-3h)] = [a-2c+d 3b; -4a b+c-3d] + [e-2g+h 3f; -4e f+g-3h] = l([a b; c d]) + l([e f; g h]); also, l(s[a b; c d]) = l([sa sb; sc sd]) = [(sa)-2(sc)+(sd) 3(sb); -4(sa) (sb)+(sc)-3(sd)] = [s(a-2c+d) s(3b); s(-4a) s(b+c-3d)] = s[a-2c+d 3b; -4a b+c-3d] = s l([a b; c d]).

(e) Not a linear transformation: n(2[1 2; 0 1]) = n([2 4; 0 2]) = 4, but 2n([1 2; 0 1]) = 2(1) = 2.

(f) Not a linear transformation: r(8x^3) = 2x^2, but 8r(x^3) = 8x^2.


(g) Not a linear transformation: s(0) = [1, 0, 1] ≠ 0.

(h) Linear transformation: t((ax^3 + bx^2 + cx + d) + (ex^3 + fx^2 + gx + h)) = t((a+e)x^3 + (b+f)x^2 + (c+g)x + (d+h)) = (a+e) + (b+f) + (c+g) + (d+h) = (a+b+c+d) + (e+f+g+h) = t(ax^3 + bx^2 + cx + d) + t(ex^3 + fx^2 + gx + h); also, t(s(ax^3 + bx^2 + cx + d)) = t((sa)x^3 + (sb)x^2 + (sc)x + (sd)) = (sa) + (sb) + (sc) + (sd) = s(a+b+c+d) = s t(ax^3 + bx^2 + cx + d).

(i) Not a linear transformation: u((-1)[0, 1, 0, 0]) = u([0, -1, 0, 0]) = 1, but (-1)u([0, 1, 0, 0]) = (-1)(1) = -1.

(j) Not a linear transformation: v(x^2 + (x + 1)) = v(x^2 + x + 1) = 1, but v(x^2) + v(x + 1) = 0 + 0 = 0.

(k) Linear transformation: g([a b; c d; e f] + [u v; w x; y z]) = g([a+u b+v; c+w d+x; e+y f+z]) = (a+u)x^4 - (c+w)x^2 + (e+y) = (ax^4 - cx^2 + e) + (ux^4 - wx^2 + y) = g([a b; c d; e f]) + g([u v; w x; y z]); also, g(s[a b; c d; e f]) = g([sa sb; sc sd; se sf]) = (sa)x^4 - (sc)x^2 + (se) = s(ax^4 - cx^2 + e) = s g([a b; c d; e f]).

(l) Not a linear transformation: e([3, 0] + [0, 4]) = e([3, 4]) = √(3^2 + 4^2) = 5, but e([3, 0]) + e([0, 4]) = √(3^2 + 0^2) + √(0^2 + 4^2) = 3 + 4 = 7.

(2) (a) i(v + w) = v + w = i(v) + i(w); also, i(cv) = cv = c i(v).

(b) z(v + w) = 0 = 0 + 0 = z(v) + z(w); also, z(cv) = 0 = c0 = c z(v).

(3) f(v + w) = k(v + w) = kv + kw = f(v) + f(w); f(av) = kav = a(kv) = a f(v).

(4) (a) f([u, v, w] + [x, y, z]) = f([u + x, v + y, w + z]) = [-(u + x), v + y, w + z] = [(-u) + (-x), v + y, w + z] = [-u, v, w] + [-x, y, z] = f([u, v, w]) + f([x, y, z]); f(c[x, y, z]) = f([cx, cy, cz]) = [-(cx), cy, cz] = [c(-x), cy, cz] = c[-x, y, z] = c f([x, y, z]).


(b) f([x, y, z]) = [x, -y, z]. It is a linear operator.

(c) Through the y-axis: g([x, y]) = [-x, y]; through the x-axis: f([x, y]) = [x, -y]; both are linear operators.

(5) h([a1, a2, a3] + [b1, b2, b3]) = h([a1 + b1, a2 + b2, a3 + b3]) = [a1 + b1, a2 + b2, 0] = [a1, a2, 0] + [b1, b2, 0] = h([a1, a2, a3]) + h([b1, b2, b3]); h(c[a1, a2, a3]) = h([ca1, ca2, ca3]) = [ca1, ca2, 0] = c[a1, a2, 0] = c h([a1, a2, a3]); j([a1, a2, a3, a4] + [b1, b2, b3, b4]) = j([a1 + b1, a2 + b2, a3 + b3, a4 + b4]) = [0, a2 + b2, 0, a4 + b4] = [0, a2, 0, a4] + [0, b2, 0, b4] = j([a1, a2, a3, a4]) + j([b1, b2, b3, b4]); j(c[a1, a2, a3, a4]) = j([ca1, ca2, ca3, ca4]) = [0, ca2, 0, ca4] = c[0, a2, 0, a4] = c j([a1, a2, a3, a4])

(6) f([a1, a2, ..., ai, ..., an] + [b1, b2, ..., bi, ..., bn]) = f([a1 + b1, a2 + b2, ..., ai + bi, ..., an + bn]) = ai + bi = f([a1, a2, ..., ai, ..., an]) + f([b1, b2, ..., bi, ..., bn]); f(c[a1, a2, ..., ai, ..., an]) = f([ca1, ca2, ..., cai, ..., can]) = cai = c f([a1, a2, ..., ai, ..., an])

(7) g(y + z) = proj_x(y + z) = (x · (y + z)/‖x‖^2)x = (((x · y) + (x · z))/‖x‖^2)x = ((x · y)/‖x‖^2)x + ((x · z)/‖x‖^2)x = g(y) + g(z); g(cy) = (x · (cy)/‖x‖^2)x = c((x · y)/‖x‖^2)x = c g(y)

(8) L(y1 + y2) = x · (y1 + y2) = (x · y1) + (x · y2) = L(y1) + L(y2); L(cy1) = x · (cy1) = c(x · y1) = c L(y1)

(9) Follow the hint in the text, and use the sum of angle identities for sine and cosine.

(10) (a) Use Example 10.

(b) Use the hint in Exercise 9.

(c) [sin θ 0 cos θ; 0 1 0; cos θ 0 -sin θ]

(11) The proofs follow the proof in Example 10, but with the matrix A replaced by the specific matrices of this exercise.

(12) f(A + B) = trace(A + B) = Σ_{i=1}^n (a_ii + b_ii) = Σ_{i=1}^n a_ii + Σ_{i=1}^n b_ii = trace(A) + trace(B) = f(A) + f(B); f(cA) = trace(cA) = Σ_{i=1}^n c a_ii = c Σ_{i=1}^n a_ii = c trace(A) = c f(A).

(13) g(A1 + A2) = (A1 + A2) + (A1 + A2)^T = (A1 + A1^T) + (A2 + A2^T) = g(A1) + g(A2); g(cA) = cA + (cA)^T = c(A + A^T) = c g(A). The proof for h is done similarly.

(14) (a) f(p1 + p2) = ∫(p1 + p2) dx = ∫p1 dx + ∫p2 dx = f(p1) + f(p2) (since the constant of integration is assumed to be zero for all these indefinite integrals); f(cp) = ∫(cp) dx = c∫p dx = c f(p) (since the constant of integration is assumed to be zero for these indefinite integrals)

(b) g(p1 + p2) = ∫_a^b (p1 + p2) dx = ∫_a^b p1 dx + ∫_a^b p2 dx = g(p1) + g(p2); g(cp) = ∫_a^b (cp) dx = c ∫_a^b p dx = c g(p)

(15) Base Step (k = 1): An argument similar to that in Example 3 in Section 5.1 shows that M : V → V given by M(f) = f′ is a linear operator.
Inductive Step: Assume that L_n : V → V given by L_n(f) = f^(n) is a linear operator. We need to show that L_(n+1) : V → V given by L_(n+1)(f) = f^(n+1) is a linear operator. But since L_(n+1) = L_n ∘ M, Theorem 5.2 assures us that L_(n+1) is a linear operator.

(16) f(A1 + A2) = B(A1 + A2) = BA1 + BA2 = f(A1) + f(A2); f(cA) = B(cA) = c(BA) = c f(A).

(17) f(A1 + A2) = B^(-1)(A1 + A2)B = B^(-1)A1B + B^(-1)A2B = f(A1) + f(A2); f(cA) = B^(-1)(cA)B = c(B^(-1)AB) = c f(A)

(18) (a) L(p + q) = (p + q)(a) = p(a) + q(a) = L(p) + L(q). Similarly, L(cp) = (cp)(a) = c(p(a)) = c L(p).

(b) For all x ∈ R, L((p + q)(x)) = (p + q)(x + a) = p(x + a) + q(x + a) = L(p(x)) + L(q(x)). Also, L((cp)(x)) = (cp)(x + a) = c(p(x + a)) = c L(p(x)).

(19) Let p = a_n x^n + ... + a_1 x + a_0 and let q = b_n x^n + ... + b_1 x + b_0. Then L(p + q) = L(a_n x^n + ... + a_1 x + a_0 + b_n x^n + ... + b_1 x + b_0) = L((a_n + b_n)x^n + ... + (a_1 + b_1)x + (a_0 + b_0)) = (a_n + b_n)A^n + ... + (a_1 + b_1)A + (a_0 + b_0)I_n = (a_n A^n + ... + a_1 A + a_0 I_n) + (b_n A^n + ... + b_1 A + b_0 I_n) = L(p) + L(q). Similarly, L(cp) = L(c(a_n x^n + ... + a_1 x + a_0)) = L(c a_n x^n + ... + c a_1 x + c a_0) = c a_n A^n + ... + c a_1 A + c a_0 I_n = c(a_n A^n + ... + a_1 A + a_0 I_n) = c L(p).

(20) L(x ⊕ y) = L(xy) = ln(xy) = ln(x) + ln(y) = L(x) + L(y); L(a ⊙ x) = L(x^a) = ln(x^a) = a ln(x) = a L(x)

(21) f(0) = 0 + x = x ≠ 0, contradicting Theorem 5.1, part (1).

(22) f(0) = A0 + y = y ≠ 0, contradicting Theorem 5.1, part (1).

(23) f([1 0 0; 0 1 0; 0 0 1]) = 1, but f([1 0 0; 0 1 0; 0 0 0]) + f([0 0 0; 0 0 0; 0 0 1]) = 0 + 0 = 0. (Note: for any n > 1, f(2I_n) = 2^n, but 2f(I_n) = 2(1) = 2.)

(24) L2(v + w) = L1(2(v + w)) = 2L1(v + w) = 2(L1(v) + L1(w)) = 2L1(v) + 2L1(w) = L2(v) + L2(w); L2(cv) = L1(2cv) = 2cL1(v) = c(2L1(v)) = cL2(v).


(25) L([-3, 2, 4]) = [12, -11, 14];
L([x, y, z]) = [-2 3 0; 1 -2 -1; 0 1 3][x, y, z] = [-2x + 3y, x - 2y - z, y + 3z]

(26) L(i) = (7/5)i - (11/5)j; L(j) = -(2/5)i - (4/5)j

(27) L(x - y) = L(x + (-y)) = L(x) + L((-1)y) = L(x) + (-1)L(y) = L(x) - L(y).

(28) Follow the hint in the text. Letting a = b = 0 proves property (1) of a linear transformation. Letting b = 0 proves property (2).

(29) Suppose Part (3) of Theorem 5.1 is true for some n. We prove it true for n + 1. L(a1v1 + ... + anvn + a(n+1)v(n+1)) = L(a1v1 + ... + anvn) + L(a(n+1)v(n+1)) (by property (1) of a linear transformation) = (a1L(v1) + ... + anL(vn)) + L(a(n+1)v(n+1)) (by the inductive hypothesis) = a1L(v1) + ... + anL(vn) + a(n+1)L(v(n+1)) (by property (2) of a linear transformation), and we are done.

(30) (a) a1v1 + ... + anvn = 0_V ⇒ L(a1v1 + ... + anvn) = L(0_V) ⇒ a1L(v1) + ... + anL(vn) = 0_W ⇒ a1 = a2 = ... = an = 0.

(b) Consider the zero linear transformation.

(31) 0_W ∈ W′, so 0_V ∈ L^(-1)({0_W}) ⊆ L^(-1)(W′). Hence, L^(-1)(W′) is nonempty. Also, x, y ∈ L^(-1)(W′) ⇒ L(x), L(y) ∈ W′ ⇒ L(x) + L(y) ∈ W′ ⇒ L(x + y) ∈ W′ ⇒ x + y ∈ L^(-1)(W′). Finally, x ∈ L^(-1)(W′) ⇒ L(x) ∈ W′ ⇒ aL(x) ∈ W′ (for any a ∈ R) ⇒ L(ax) ∈ W′ ⇒ ax ∈ L^(-1)(W′). Hence L^(-1)(W′) is a subspace by Theorem 4.2.

(32) Let c = L(1). Then L(x) = L(x · 1) = xL(1) = xc = cx.

(33) (L2 ∘ L1)(av) = L2(L1(av)) = L2(aL1(v)) = aL2(L1(v)) = a(L2 ∘ L1)(v).

(34) (a) For L1 ⊕ L2: (L1 ⊕ L2)(x + y) = L1(x + y) + L2(x + y) = (L1(x) + L1(y)) + (L2(x) + L2(y)) = (L1(x) + L2(x)) + (L1(y) + L2(y)) = (L1 ⊕ L2)(x) + (L1 ⊕ L2)(y); (L1 ⊕ L2)(cx) = L1(cx) + L2(cx) = cL1(x) + cL2(x) = c(L1(x) + L2(x)) = c(L1 ⊕ L2)(x).
For c ⊙ L1: (c ⊙ L1)(x + y) = c(L1(x + y)) = c(L1(x) + L1(y)) = cL1(x) + cL1(y) = (c ⊙ L1)(x) + (c ⊙ L1)(y); (c ⊙ L1)(ax) = cL1(ax) = c(aL1(x)) = a(cL1(x)) = a(c ⊙ L1)(x).

(b) Use Theorem 4.2: the existence of the zero linear transformation shows that the set is nonempty, while part (a) proves closure under addition and scalar multiplication.


(35) Define a line l parametrically by {tx + y | t ∈ R}, for some fixed x, y ∈ R^2. Since L(tx + y) = tL(x) + L(y), L maps l to the set {tL(x) + L(y) | t ∈ R}. If L(x) ≠ 0, this set represents a line; otherwise, it represents a point.

(36) (a) F (b) T (c) F (d) F (e) T (f) F (g) T (h) T

Section 5.2

(1) For each operator, check that the ith column of the given matrix (for 1 ≤ i ≤ 3) is the image of e_i. This is easily done by inspection.

(2) (a) [-6 4 -1; -2 3 -5; 3 -1 7]

(b) [3 -5 1 -2; 5 1 -2 8]

(c) [4 -1 3 3; 1 3 -1 5; -2 -7 5 -1]

(d) [-3 0 -2 0; 0 -1 0 4; 0 4 -1 3; -6 -1 0 2]

(3) (a) [-47 128 -288; -18 51 -104]

(b) [-3 5; 2 -1; 4 -2]

(c) [22 14; 62 39; 68 43]

(d) [-32 16 24 15; -10 7 12 6; -121 63 96 58]

(e) [5 6 0; -11 -26 -6; -14 -19 -1; 6 3 -2; -1 1 1; 11 13 0]

(4) (a) [-202 -32 -43; -146 -23 -31; 83 14 18]

(b) [21 7 21 16; -51 -13 -51 -38]

(c) [98 -91 60 -76; 28 -25 17 -21; 135 -125 82 -104]

(5) See the answer for Exercise 3(d).

(6) (a) [67 -123; 37 -68]

(b) [-7 2 10; 5 -2 -9; -6 1 8]

(c) [8 -9 9 -14; 8 -8 5 -7; 6 -6 -1 6; 1 -1 -2 5]


(7) (a) [3 0 0 0; 0 2 0 0; 0 0 1 0]; 12x^2 - 10x + 6

(b) [1/3 0 0; 0 1/2 0; 0 0 1; 0 0 0]; (2/3)x^3 - (1/2)x^2 + 5x
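The matrix in 7(a) acts on coefficient vectors the way differentiation acts on polynomials. A brief sketch (not from the manual; the input polynomial 4x^3 - 5x^2 + 6x + 7 is an assumed example chosen to be consistent with the stated image 12x^2 - 10x + 6):

```python
import numpy as np

# 7(a): matrix of d/dx from P3 to P2 relative to the bases
# (x^3, x^2, x, 1) and (x^2, x, 1).
D = np.array([[3, 0, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 1, 0]])
p = np.array([4, -5, 6, 7])          # coefficients of 4x^3 - 5x^2 + 6x + 7
print(D @ p)                          # [12 -10 6], i.e. 12x^2 - 10x + 6
```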

(8) (a) [√3/2 -1/2; 1/2 √3/2]

(b) [√3/2 - 9 -13/2; 25/2 √3/2 + 9]
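Answer 8(a) is the standard matrix of a counterclockwise rotation through 30 degrees. This sketch (not from the manual) confirms the entries and also illustrates the composition fact used in Exercise 12 (rotating twice equals rotating through twice the angle):

```python
import numpy as np

# Rotation through theta = 30 degrees; columns are the images of e1, e2.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R, [[np.sqrt(3)/2, -0.5], [0.5, np.sqrt(3)/2]])

# R @ R is the rotation through 2*theta (cf. Exercise 12).
R2 = np.array([[np.cos(2*theta), -np.sin(2*theta)],
              [np.sin(2*theta),  np.cos(2*theta)]])
assert np.allclose(R @ R, R2)
print("8(a) and the composition fact check out")
```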

(9) (a) [1 0 0 0 0 0; 0 0 0 1 0 0; 0 1 0 0 0 0; 0 0 0 0 1 0; 0 0 1 0 0 0; 0 0 0 0 0 1]

(b) (1/2)[1 0 0 -1 0 0; 1 0 0 1 0 0; 0 1 1 0 -1 0; 0 1 1 0 1 0; 0 -1 0 0 -1 1; 0 -1 0 0 1 -1]

(10) [-12 12 -2; -4 6 -2; -10 -3 7]

(11) (a) Matrix for L1: [1 -1 -1; 0 2 3; 1 3 0; -2 0 1]; matrix for L2: [0 2 -2 3; 1 0 -1 1]

(b) Matrix for L2 ∘ L1: [-8 -2 9; -2 -4 0]

(12) (a) A^2 represents a rotation through the angle θ twice, which corresponds to a rotation through the angle 2θ.

(b) Generalize the argument in part (a).

(13) (a) I_n (b) O_n (c) cI_n

(d) The n × n matrix whose columns are e2, e3, ..., en, e1, respectively

(e) The n × n matrix whose columns are en, e1, e2, ..., e(n-1), respectively

(14) Let A be the matrix for L (with respect to the standard bases). Then A is a 1 × n matrix (an n-vector). If we let x = A, then Ay = x · y for all y ∈ R^n.

(15) $[(L_2 \circ L_1)(\mathbf{v})]_D = [L_2(L_1(\mathbf{v}))]_D = A_{CD}[L_1(\mathbf{v})]_C = A_{CD}(A_{BC}[\mathbf{v}]_B) = (A_{CD}A_{BC})[\mathbf{v}]_B$. Now apply the uniqueness condition in Theorem 5.5.

(16) (a) $\begin{bmatrix} 0 & -4 & -13 \\ -6 & 5 & 6 \\ 2 & -2 & -3 \end{bmatrix}$


(b) $\begin{bmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

(c) The vectors in B are eigenvectors for the matrix from part (a) corresponding to the eigenvalues 2, −1, and 1, respectively.

(17) Easily verified with straightforward computations.

(18) (a) $p_{A_{BB}}(x) = x(x-1)^2$

(b) $E_1 = \{b[2,1,0] + c[2,0,1]\}$; basis for $E_1 = \{[2,1,0], [2,0,1]\}$; $E_0 = \{c[-1,2,2]\}$; basis for $E_0 = \{[-1,2,2]\}$; $C = ([2,1,0], [2,0,1], [-1,2,2])$ (Note: the remaining answers will vary if the vectors in C are ordered differently)

(c) Using the basis C ordered as $([2,1,0], [2,0,1], [-1,2,2])$, the answer is $P = \begin{bmatrix} 2 & 2 & -1 \\ 1 & 0 & 2 \\ 0 & 1 & 2 \end{bmatrix}$, with $P^{-1} = \frac{1}{9}\begin{bmatrix} 2 & 5 & -4 \\ 2 & -4 & 5 \\ -1 & 2 & 2 \end{bmatrix}$. Another possible answer, using $C = ([-1,2,2], [2,1,0], [2,0,1])$, is $P = \begin{bmatrix} -1 & 2 & 2 \\ 2 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}$, with $P^{-1} = \frac{1}{9}\begin{bmatrix} -1 & 2 & 2 \\ 2 & 5 & -4 \\ 2 & -4 & 5 \end{bmatrix}$.

(d) Using $C = ([2,1,0], [2,0,1], [-1,2,2])$ produces $A_{CC} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$. Using $C = ([-1,2,2], [2,1,0], [2,0,1])$ yields $A_{CC} = P^{-1}A_{BB}P = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ instead.

(e) L is a projection onto the plane formed by [2,1,0] and [2,0,1]; [−1,2,2] is orthogonal to this plane.

(19) $[L(\mathbf{v})]_C = \frac{1}{k}[L(\mathbf{v})]_B = \frac{1}{k}A_{BB}[\mathbf{v}]_B = \frac{1}{k}A_{BB}(k[\mathbf{v}]_C) = A_{BB}[\mathbf{v}]_C$. By Theorem 5.5, $A_{CC} = A_{BB}$.

(20) Follow the hint in the text. Suppose $\dim(V) = n$, and that X and Y are similar $n \times n$ matrices, where $Y = P^{-1}XP$. Let $B = (\mathbf{w}_1, \ldots, \mathbf{w}_n)$ be some ordered basis for V. Consider the products $X[\mathbf{w}_1]_B, \ldots, X[\mathbf{w}_n]_B \in \mathbb{R}^n$. Each of these products represents the coordinatization of a unique vector in V. Define $L: V \to V$ to be the unique linear transformation (as specified by Theorem 5.4) such that $L(\mathbf{w}_i)$ is the vector in V whose coordinatization equals $X[\mathbf{w}_i]_B$. That is, $L: V \to V$ is the unique linear transformation such that $X[\mathbf{w}_i]_B = [L(\mathbf{w}_i)]_B$. Then, by definition, X is the matrix for L with respect to B. Next, since P is nonsingular, Exercise 15(c) in Section 4.5 implies that the columns of P form a basis for $\mathbb{R}^n$. Let $\mathbf{v}_i$ be the unique vector in V such


that $[\mathbf{v}_i]_B$ = the ith column of P. Now the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$ are distinct because the columns of P are distinct, and because coordinatization sends different vectors in V to different vectors in $\mathbb{R}^n$. Also, the set $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ is linearly independent, because $a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n = \mathbf{0}_V \Rightarrow [a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n]_B = \mathbf{0} \Rightarrow a_1[\mathbf{v}_1]_B + \cdots + a_n[\mathbf{v}_n]_B = \mathbf{0}$ (by part (3) of Theorem 4.19) $\Rightarrow a_1 = \cdots = a_n = 0$, since the set $\{[\mathbf{v}_1]_B, \ldots, [\mathbf{v}_n]_B\}$ is linearly independent. Part (2) of Theorem 4.13 then implies that $C = (\mathbf{v}_1, \ldots, \mathbf{v}_n)$ is an ordered basis for V. Also, by definition, P is the transition matrix from C to B, and therefore $P^{-1}$ is the transition matrix from B to C. Finally, by Theorem 5.6, Y is the matrix for L with respect to C.

(21) Find the matrix for L with respect to B, and then apply Theorem 5.6.

(22) Let $A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ and $P = \frac{1}{\sqrt{1+m^2}}\begin{bmatrix} 1 & -m \\ m & 1 \end{bmatrix}$. Then A represents a reflection through the x-axis, and P represents a counterclockwise rotation putting the x-axis on the line y = mx (using Example 9, Section 5.1). Thus, the desired matrix is $PAP^{-1}$.
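The product $PAP^{-1}$ in Exercise 22 can be checked numerically: the resulting matrix should fix every vector on the line $y = mx$ and negate the normal direction. A short sketch (the slope value is an arbitrary example):

```python
import numpy as np

m = 2.0  # slope of the line y = mx (arbitrary example)

A = np.array([[1.0, 0.0], [0.0, -1.0]])                  # reflection through the x-axis
P = np.array([[1.0, -m], [m, 1.0]]) / np.sqrt(1 + m*m)   # rotation taking the x-axis onto y = mx

R = P @ A @ np.linalg.inv(P)   # reflection about the line y = mx

# [1, m] lies on the line, so it is fixed; the normal [-m, 1] is negated
assert np.allclose(R @ np.array([1.0, m]), np.array([1.0, m]))
assert np.allclose(R @ np.array([-m, 1.0]), np.array([m, -1.0]))
```

This also agrees with the closed form $\frac{1}{1+m^2}\begin{bmatrix} 1-m^2 & 2m \\ 2m & m^2-1 \end{bmatrix}$ for reflection about $y = mx$.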

(23) (a) L2 (b) U2 (c) D2

(24) Let $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ be a basis for Y. Extend this to a basis $B = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ for V. By Theorem 5.4, there is a unique linear transformation $L_0: V \to W$ such that $L_0(\mathbf{v}_i) = \begin{cases} L(\mathbf{v}_i) & \text{if } i \le k \\ \mathbf{0} & \text{if } i > k \end{cases}$.

Clearly, $L_0$ agrees with L on Y.

(25) Let $L: V \to W$ be a linear transformation such that $L(\mathbf{v}_1) = \mathbf{w}_1$, $L(\mathbf{v}_2) = \mathbf{w}_2$, ..., $L(\mathbf{v}_n) = \mathbf{w}_n$, with $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ a basis for V. If $\mathbf{v} \in V$, then $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$ for unique $c_1, c_2, \ldots, c_n \in \mathbb{R}$ (by Theorem 4.9). But then $L(\mathbf{v}) = L(c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n) = c_1L(\mathbf{v}_1) + c_2L(\mathbf{v}_2) + \cdots + c_nL(\mathbf{v}_n) = c_1\mathbf{w}_1 + c_2\mathbf{w}_2 + \cdots + c_n\mathbf{w}_n$. Hence $L(\mathbf{v})$ is determined uniquely, for each $\mathbf{v} \in V$, and so L is uniquely determined.

(26) (a) T (b) T (c) F (d) F (e) T (f) F (g) T (h) T (i) T (j) F

Section 5.3

(1) (a) Yes, because $L([1,-2,3]) = [0,0,0]$

(b) No, because $L([2,-1,4]) = [5,-2,-1]$

(c) No; the system $\begin{cases} 5x_1 + x_2 - x_3 = 2 \\ -3x_1 + x_3 = -1 \\ x_1 - x_2 - x_3 = 4 \end{cases}$


has no solutions. Note: $\left[\begin{array}{ccc|c} 5 & 1 & -1 & 2 \\ -3 & 0 & 1 & -1 \\ 1 & -1 & -1 & 4 \end{array}\right]$ row reduces to $\left[\begin{array}{ccc|c} 1 & 0 & -\frac{1}{3} & \frac{1}{3} \\ 0 & 1 & \frac{2}{3} & \frac{1}{3} \\ 0 & 0 & 0 & 4 \end{array}\right]$, where we have not row reduced beyond the augmentation bar.

(d) Yes, because $L([-4,4,0]) = [-16,12,-8]$

(2) (a) No, since $L(x^3 - 5x^2 + 3x - 6) = 6x^3 + 4x - 9 \ne 0$

(b) Yes, because $L(4x^3 - 4x^2) = 0$

(c) Yes, because, for example, $L(x^3 + 4x + 3) = 8x^3 - x - 1$

(d) No, since every element of range(L) has a zero coefficient for its $x^2$ term.

(3) In each part, we give the reduced row echelon form for the matrix for the linear transformation as well.

(a) $\begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -3 \\ 0 & 0 & 0 \end{bmatrix}$; dim(ker(L)) = 1, basis for ker(L) = $\{[-2,3,1]\}$, dim(range(L)) = 2, basis for range(L) = $\{[1,-2,3], [-1,3,-3]\}$

(b) $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$; dim(ker(L)) = 1, basis for ker(L) = $\{[-1,2,1]\}$, dim(range(L)) = 2, basis for range(L) = $\{[4,7,-2,3], [-2,1,-1,-2]\}$

(c) $\begin{bmatrix} 1 & 0 & 5 \\ 0 & 1 & -2 \end{bmatrix}$; dim(ker(L)) = 1, basis for ker(L) = $\{[-5,2,1]\}$, dim(range(L)) = 2, basis for range(L) = $\{[3,2], [2,1]\}$

(d) $\begin{bmatrix} 1 & 0 & -1 & 1 \\ 0 & 1 & 3 & -2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$; dim(ker(L)) = 2, basis for ker(L) = $\{[1,-3,1,0], [-1,2,0,1]\}$, dim(range(L)) = 2, basis for range(L) = $\{[-14,-4,-6,3,4], [-8,-1,2,-7,2]\}$

(4) Students can generally solve these problems by inspection.

(a) dim(ker(L)) = 2, basis for ker(L) = $\{[1,0,0], [0,0,1]\}$, dim(range(L)) = 1, basis for range(L) = $\{[0,1]\}$


(b) dim(ker(L)) = 0, basis for ker(L) = $\{\}$, dim(range(L)) = 2, basis for range(L) = $\{[1,1,0], [0,1,1]\}$

(c) dim(ker(L)) = 2, basis for ker(L) = $\left\{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\right\}$, dim(range(L)) = 2, basis for range(L) = $\left\{\begin{bmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{bmatrix}\right\}$

(d) dim(ker(L)) = 2, basis for ker(L) = $\{x^4, x^3\}$, dim(range(L)) = 3, basis for range(L) = $\{x^2, x, 1\}$

(e) dim(ker(L)) = 0, basis for ker(L) = $\{\}$, dim(range(L)) = 3, basis for range(L) = $\{x^3, x^2, x\}$

(f) dim(ker(L)) = 1, basis for ker(L) = $\{[0,1,1]\}$, dim(range(L)) = 2, basis for range(L) = $\{[1,0,1], [0,0,-1]\}$ (A simpler basis for range(L) = $\{[1,0,0], [0,0,1]\}$.)

(g) dim(ker(L)) = 0, basis for ker(L) = $\{\}$ (empty set), dim(range(L)) = 4, basis for range(L) = standard basis for $M_{22}$

(h) dim(ker(L)) = 6, basis for ker(L) = $\left\{\begin{bmatrix} 1&0&0 \\ 0&0&0 \\ 0&0&0 \end{bmatrix}, \begin{bmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&0 \end{bmatrix}, \begin{bmatrix} 0&0&1 \\ 0&0&0 \\ 1&0&0 \end{bmatrix}, \begin{bmatrix} 0&0&0 \\ 0&1&0 \\ 0&0&0 \end{bmatrix}, \begin{bmatrix} 0&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}, \begin{bmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&1 \end{bmatrix}\right\}$, dim(range(L)) = 3, basis for range(L) = $\left\{\begin{bmatrix} 0&1&0 \\ -1&0&0 \\ 0&0&0 \end{bmatrix}, \begin{bmatrix} 0&0&1 \\ 0&0&0 \\ -1&0&0 \end{bmatrix}, \begin{bmatrix} 0&0&0 \\ 0&0&1 \\ 0&-1&0 \end{bmatrix}\right\}$

(i) dim(ker(L)) = 1, basis for ker(L) = $\{x^2 - 2x + 1\}$, dim(range(L)) = 2, basis for range(L) = $\{[1,2], [1,1]\}$ (A simpler basis for range(L) = standard basis for $\mathbb{R}^2$.)

(j) dim(ker(L)) = 2, basis for ker(L) = $\{(x+1)x(x-1),\ x^2(x+1)(x-1)\}$, dim(range(L)) = 3, basis for range(L) = $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$

(5) (a) ker(L) = V, range(L) = $\{\mathbf{0}_W\}$ (b) ker(L) = $\{\mathbf{0}_V\}$, range(L) = V

(6) $L(A + B) = \operatorname{trace}(A + B) = \sum_{i=1}^{3}(a_{ii} + b_{ii}) = \sum_{i=1}^{3} a_{ii} + \sum_{i=1}^{3} b_{ii} = \operatorname{trace}(A) + \operatorname{trace}(B) = L(A) + L(B)$; $L(cA) = \operatorname{trace}(cA) = \sum_{i=1}^{3} ca_{ii} = c\sum_{i=1}^{3} a_{ii} = c(\operatorname{trace}(A)) = cL(A)$;

$\ker(L) = \left\{\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & -a-e \end{bmatrix} \;\middle|\; a, b, c, d, e, f, g, h \in \mathbb{R}\right\}$, dim(ker(L)) = 8, range(L) = $\mathbb{R}$, dim(range(L)) = 1


(7) range(L) = V, ker(L) = $\{\mathbf{0}_V\}$

(8) ker(L) = $\{0\}$, range(L) = $\{ax^4 + bx^3 + cx^2\}$, dim(ker(L)) = 0, dim(range(L)) = 3

(9) ker(L) = $\{dx + e \mid d, e \in \mathbb{R}\}$, range(L) = $P_2$, dim(ker(L)) = 2, dim(range(L)) = 3

(10) When $k \le n$, ker(L) = all polynomials of degree less than k, dim(ker(L)) = k, range(L) = $P_{n-k}$, and dim(range(L)) = $n - k + 1$. When $k > n$, ker(L) = $P_n$, dim(ker(L)) = $n + 1$, range(L) = $\{0\}$, and dim(range(L)) = 0.

(11) Since range(L) = $\mathbb{R}$, dim(range(L)) = 1. By the Dimension Theorem, dim(ker(L)) = n. Since the given set is clearly a linearly independent subset of ker(L) containing n distinct elements, it must be a basis for ker(L) by Theorem 4.13.

(12) ker(L) = $\{[0, 0, \ldots, 0]\}$, range(L) = $\mathbb{R}^n$ (Note: Every vector X is in the range since $L(A^{-1}X) = A(A^{-1}X) = X$.)

(13) ker(L) = $\{\mathbf{0}_V\}$ iff dim(ker(L)) = 0 iff dim(range(L)) = dim(V) − 0 = dim(V) iff range(L) = V.

(14) For ker(L): $\mathbf{0}_V \in \ker(L)$, so $\ker(L) \ne \{\}$. If $\mathbf{v}_1, \mathbf{v}_2 \in \ker(L)$, then $L(\mathbf{v}_1 + \mathbf{v}_2) = L(\mathbf{v}_1) + L(\mathbf{v}_2) = \mathbf{0}_W + \mathbf{0}_W = \mathbf{0}_W$, and so $\mathbf{v}_1 + \mathbf{v}_2 \in \ker(L)$. Similarly, $L(a\mathbf{v}_1) = \mathbf{0}_W$, and so $a\mathbf{v}_1 \in \ker(L)$. For range(L): $\mathbf{0}_W \in \operatorname{range}(L)$ since $\mathbf{0}_W = L(\mathbf{0}_V)$. If $\mathbf{w}_1, \mathbf{w}_2 \in \operatorname{range}(L)$, then there exist $\mathbf{v}_1, \mathbf{v}_2 \in V$ such that $L(\mathbf{v}_1) = \mathbf{w}_1$ and $L(\mathbf{v}_2) = \mathbf{w}_2$. Hence, $L(\mathbf{v}_1 + \mathbf{v}_2) = L(\mathbf{v}_1) + L(\mathbf{v}_2) = \mathbf{w}_1 + \mathbf{w}_2$. Thus $\mathbf{w}_1 + \mathbf{w}_2 \in \operatorname{range}(L)$. Similarly, $L(a\mathbf{v}_1) = aL(\mathbf{v}_1) = a\mathbf{w}_1$, so $a\mathbf{w}_1 \in \operatorname{range}(L)$.

(15) (a) If $\mathbf{v} \in \ker(L_1)$, then $(L_2 \circ L_1)(\mathbf{v}) = L_2(L_1(\mathbf{v})) = L_2(\mathbf{0}_W) = \mathbf{0}_X$.

(b) If $\mathbf{x} \in \operatorname{range}(L_2 \circ L_1)$, then there is a $\mathbf{v}$ such that $(L_2 \circ L_1)(\mathbf{v}) = \mathbf{x}$. Hence $L_2(L_1(\mathbf{v})) = \mathbf{x}$, so $\mathbf{x} \in \operatorname{range}(L_2)$.

(c) $\dim(\operatorname{range}(L_2 \circ L_1)) = \dim(V) - \dim(\ker(L_2 \circ L_1)) \le \dim(V) - \dim(\ker(L_1))$ (by part (a)) $= \dim(\operatorname{range}(L_1))$

(16) Consider $L\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$. Then $\ker(L) = \operatorname{range}(L) = \{[a, a] \mid a \in \mathbb{R}\}$.

(17) (a) Let P be the transition matrix from the standard basis to B, and let Q be the transition matrix from the standard basis to C. Note that both P and Q are nonsingular by Theorem 4.22. Then, by Theorem 5.6, $B = QAP^{-1}$. Hence, $\operatorname{rank}(B) = \operatorname{rank}(QAP^{-1}) = \operatorname{rank}(AP^{-1})$ (by part (a) of Review Exercise 16 in Chapter 2) $= \operatorname{rank}(A)$ (by part (b) of Review Exercise 16 in Chapter 2).


(b) The comments prior to Theorem 5.9 in the text show that parts (a) and (b) of the theorem are true if A is the matrix for L with respect to the standard bases. However, part (a) of this exercise shows that if B is the matrix for L with respect to any bases, rank(B) = rank(A). Hence, rank(B) can be substituted for any occurrence of rank(A) in the statement of Theorem 5.9. Thus, parts (a) and (b) of Theorem 5.9 hold when the matrix for L with respect to any bases is used. Part (c) of Theorem 5.9 follows immediately from parts (a) and (b).

(18) (a) By Theorem 4.18, there are vectors $\mathbf{q}_1, \ldots, \mathbf{q}_t$ in V such that $B = \{\mathbf{k}_1, \ldots, \mathbf{k}_s, \mathbf{q}_1, \ldots, \mathbf{q}_t\}$ is a basis for V. Hence, $\dim(V) = s + t$.

(b) If $\mathbf{v} \in V$, then $\mathbf{v} = a_1\mathbf{k}_1 + \cdots + a_s\mathbf{k}_s + b_1\mathbf{q}_1 + \cdots + b_t\mathbf{q}_t$ for some $a_1, \ldots, a_s, b_1, \ldots, b_t \in \mathbb{R}$. Hence, $L(\mathbf{v}) = L(a_1\mathbf{k}_1 + \cdots + a_s\mathbf{k}_s + b_1\mathbf{q}_1 + \cdots + b_t\mathbf{q}_t) = a_1L(\mathbf{k}_1) + \cdots + a_sL(\mathbf{k}_s) + b_1L(\mathbf{q}_1) + \cdots + b_tL(\mathbf{q}_t) = a_1\mathbf{0} + \cdots + a_s\mathbf{0} + b_1L(\mathbf{q}_1) + \cdots + b_tL(\mathbf{q}_t) = b_1L(\mathbf{q}_1) + \cdots + b_tL(\mathbf{q}_t)$.

(c) By part (b), any element of range(L) is a linear combination of $L(\mathbf{q}_1), \ldots, L(\mathbf{q}_t)$, so range(L) is spanned by $\{L(\mathbf{q}_1), \ldots, L(\mathbf{q}_t)\}$. Hence, by part (1) of Theorem 4.13, range(L) is finite dimensional and $\dim(\operatorname{range}(L)) \le t$.

(d) Suppose $c_1L(\mathbf{q}_1) + \cdots + c_tL(\mathbf{q}_t) = \mathbf{0}_W$. Then $L(c_1\mathbf{q}_1 + \cdots + c_t\mathbf{q}_t) = \mathbf{0}_W$, and so $c_1\mathbf{q}_1 + \cdots + c_t\mathbf{q}_t \in \ker(L)$.

(e) Every element of ker(L) can be expressed as a linear combination of the basis vectors $\mathbf{k}_1, \ldots, \mathbf{k}_s$ for ker(L).

(f) The fact that $c_1\mathbf{q}_1 + \cdots + c_t\mathbf{q}_t = d_1\mathbf{k}_1 + \cdots + d_s\mathbf{k}_s$ implies that $d_1\mathbf{k}_1 + \cdots + d_s\mathbf{k}_s - c_1\mathbf{q}_1 - \cdots - c_t\mathbf{q}_t = \mathbf{0}_V$. But since $B = \{\mathbf{k}_1, \ldots, \mathbf{k}_s, \mathbf{q}_1, \ldots, \mathbf{q}_t\}$ is linearly independent, $d_1 = \cdots = d_s = c_1 = \cdots = c_t = 0$, by the definition of linear independence.

(g) In part (d) we assumed that $c_1L(\mathbf{q}_1) + \cdots + c_tL(\mathbf{q}_t) = \mathbf{0}_W$. This led to the conclusion in part (f) that $c_1 = \cdots = c_t = 0$. Hence $\{L(\mathbf{q}_1), \ldots, L(\mathbf{q}_t)\}$ is linearly independent by definition.

(h) Part (c) proved that $\{L(\mathbf{q}_1), \ldots, L(\mathbf{q}_t)\}$ spans range(L) and part (g) shows that $\{L(\mathbf{q}_1), \ldots, L(\mathbf{q}_t)\}$ is linearly independent. Hence, $\{L(\mathbf{q}_1), \ldots, L(\mathbf{q}_t)\}$ is a basis for range(L).

(i) We assumed that dim(ker(L)) = s. Part (h) shows that dim(range(L)) = t. Therefore, dim(ker(L)) + dim(range(L)) = s + t = dim(V) (by part (a)).

(19) From Theorem 4.16, we have $\dim(\ker(L)) \le \dim(V)$. The Dimension Theorem states that dim(range(L)) is finite and dim(ker(L)) + dim(range(L)) = dim(V). Since dim(ker(L)) is nonnegative, it follows that $\dim(\operatorname{range}(L)) \le \dim(V)$.


(20) (a) F (b) F (c) T (d) F (e) T (f) F (g) F (h) F

Section 5.4

(1) (a) Not one-to-one, because $L([1,0,0]) = L([0,0,0]) = [0,0,0,0]$; not onto, because [0,0,0,1] is not in range(L)

(b) Not one-to-one, because $L([1,-1,1]) = [0,0]$; onto, because $L([a,0,b]) = [a,b]$

(c) One-to-one, because $L([x,y,z]) = [0,0,0]$ implies that $[2x, x+y+z, -y] = [0,0,0]$, which gives $x = y = z = 0$; onto, because every vector [a,b,c] can be expressed as $[2x, x+y+z, -y]$, where $x = \frac{a}{2}$, $y = -c$, and $z = b - \frac{a}{2} + c$

(d) Not one-to-one, because $L(1) = 0$; onto, because $L(ax^3 + bx^2 + cx) = ax^2 + bx + c$

(e) One-to-one, because $L(ax^2 + bx + c) = 0$ implies that $a + b = b + c = a + c = 0$, which gives $b = c$ and hence $a = b = c = 0$; onto, because every polynomial $Ax^2 + Bx + C$ can be expressed as $(a+b)x^2 + (b+c)x + (a+c)$, where $a = (A - B + C)/2$, $b = (A + B - C)/2$, and $c = (-A + B + C)/2$

(f) One-to-one, because $L\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \Rightarrow \begin{bmatrix} d & b+c \\ b-c & a \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \Rightarrow d = a = b+c = b-c = 0 \Rightarrow a = b = c = d = 0$; onto, because $L\left(\begin{bmatrix} z & \frac{x+y}{2} \\ \frac{x-y}{2} & w \end{bmatrix}\right) = \begin{bmatrix} w & x \\ y & z \end{bmatrix}$

(g) Not one-to-one, because $L\left(\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & -1 \end{bmatrix}\right) = L\left(\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\right) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$; onto, because every $2 \times 2$ matrix $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$ can be expressed as $\begin{bmatrix} a & -c \\ 2e & d+f \end{bmatrix}$, where $a = A$, $c = -B$, $e = C/2$, $d = D$, and $f = 0$

(h) One-to-one, because $L(ax^2 + bx + c) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ implies that $a + c = b - c = -3a = 0$, which gives $a = b = c = 0$; not onto, because $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ is not in range(L)
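The onto argument in part (e) rests on explicit formulas for the preimage of an arbitrary target $Ax^2 + Bx + C$; those formulas can be verified directly (the target coefficients below are an arbitrary example):

```python
# For L(ax^2 + bx + c) = (a+b)x^2 + (b+c)x + (a+c), solve for (a, b, c)
# using the formulas from part (e).
def preimage(A, B, C):
    a = (A - B + C) / 2
    b = (A + B - C) / 2
    c = (-A + B + C) / 2
    return a, b, c

A, B, C = 3.0, -1.0, 4.0          # arbitrary target coefficients
a, b, c = preimage(A, B, C)

# L really maps the computed preimage back onto the target
assert (a + b, b + c, a + c) == (A, B, C)
```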

(2) (a) One-to-one; onto; the matrix row reduces to I2, which means thatdim(ker(L)) = 0 and dim(range(L)) = 2.


(b) One-to-one; not onto; the matrix row reduces to $\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$, which means that dim(ker(L)) = 0 and dim(range(L)) = 2.

(c) Not one-to-one; not onto; the matrix row reduces to $\begin{bmatrix} 1 & 0 & -\frac{2}{5} \\ 0 & 1 & -\frac{6}{5} \\ 0 & 0 & 0 \end{bmatrix}$, which means that dim(ker(L)) = 1 and dim(range(L)) = 2.

(d) Not one-to-one; onto; the matrix row reduces to $\begin{bmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{bmatrix}$, which means that dim(ker(L)) = 1 and dim(range(L)) = 3.

(3) (a) One-to-one; onto; the matrix row reduces to $I_3$, which means that dim(ker(L)) = 0 and dim(range(L)) = 3.

(b) Not one-to-one; not onto; the matrix row reduces to $\begin{bmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$, which means that dim(ker(L)) = 1 and dim(range(L)) = 3.

(c) Not one-to-one; not onto; the matrix row reduces to $\begin{bmatrix} 1 & 0 & -\frac{10}{11} & \frac{19}{11} \\ 0 & 1 & \frac{3}{11} & -\frac{9}{11} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$, which means that dim(ker(L)) = 2 and dim(range(L)) = 2.

(4) (a) Let $L: \mathbb{R}^n \to \mathbb{R}^m$. Then $\dim(\operatorname{range}(L)) = n - \dim(\ker(L)) \le n < m$, so L is not onto.

(b) Let $L: \mathbb{R}^m \to \mathbb{R}^n$. Then $\dim(\ker(L)) = m - \dim(\operatorname{range}(L)) \ge m - n > 0$, so L is not one-to-one.

(5) (a) $L(I_n) = AI_n - I_nA = A - A = O_n$. Hence $I_n \in \ker(L)$, and so L is not one-to-one by Theorem 5.12.

(b) Use part (a) of this exercise, part (b) of Theorem 5.12, and the Dimension Theorem.

(6) If $L(A) = O_3$, then $A^T = -A$, so A is skew-symmetric. But since A is also upper triangular, $A = O_3$. Thus, $\ker(L) = \{0\}$ and L is one-to-one. By the Dimension Theorem, $\dim(\operatorname{range}(L)) = \dim(U_3) = 6$ (since dim(ker(L)) = 0). Hence, $\operatorname{range}(L) \ne M_{33}$, and L is not onto.


(7) Suppose L is not one-to-one. Then ker(L) is nontrivial. Let $\mathbf{v} \in \ker(L)$ with $\mathbf{v} \ne \mathbf{0}$. Then $T = \{\mathbf{v}\}$ is linearly independent. But $L(T) = \{\mathbf{0}_W\}$, which is linearly dependent, a contradiction.

(8) (a) Suppose $\mathbf{w} \in L(\operatorname{span}(S))$. Then there is a vector $\mathbf{v} \in \operatorname{span}(S)$ such that $L(\mathbf{v}) = \mathbf{w}$. There are also vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k \in S$ and scalars $a_1, \ldots, a_k$ such that $a_1\mathbf{v}_1 + \cdots + a_k\mathbf{v}_k = \mathbf{v}$. Hence, $\mathbf{w} = L(\mathbf{v}) = L(a_1\mathbf{v}_1 + \cdots + a_k\mathbf{v}_k) = L(a_1\mathbf{v}_1) + \cdots + L(a_k\mathbf{v}_k) = a_1L(\mathbf{v}_1) + \cdots + a_kL(\mathbf{v}_k)$, which shows that $\mathbf{w} \in \operatorname{span}(L(S))$. Hence $L(\operatorname{span}(S)) \subseteq \operatorname{span}(L(S))$. Next, $S \subseteq \operatorname{span}(S)$ (by Theorem 4.5) $\Rightarrow L(S) \subseteq L(\operatorname{span}(S)) \Rightarrow \operatorname{span}(L(S)) \subseteq L(\operatorname{span}(S))$ (by part (3) of Theorem 4.5, since $L(\operatorname{span}(S))$ is a subspace of W by part (1) of Theorem 5.3). Therefore, $L(\operatorname{span}(S)) = \operatorname{span}(L(S))$.

(b) By part (a), L(S) spans $L(\operatorname{span}(S)) = L(V) = \operatorname{range}(L)$.

(c) $W = \operatorname{span}(L(S)) = L(\operatorname{span}(S))$ (by part (a)) $\subseteq L(V)$ (because $\operatorname{span}(S) \subseteq V$) $= \operatorname{range}(L)$. Thus, L is onto.

(9) (a) F (b) F (c) T (d) T (e) T (f) T (g) F (h) F

Section 5.5

(1) In each part, let A represent the given matrix for $L_1$ and let B represent the given matrix for $L_2$. By Theorem 5.15, $L_1$ is an isomorphism if and only if A is nonsingular, and $L_2$ is an isomorphism if and only if B is nonsingular. In each part, we will give |A| and |B| to show that A and B are nonsingular.

(a) $|A| = 1$, $|B| = 3$, $L_1^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 0 & 0 & 1 \\ 0 & -1 & 0 \\ 1 & -2 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $L_2^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & -\frac{1}{3} \\ 2 & 1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $(L_2 \circ L_1)\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 0 & -2 & 1 \\ 1 & 4 & -2 \\ 0 & 3 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $(L_2 \circ L_1)^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = (L_1^{-1} \circ L_2^{-1})\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 0 & \frac{1}{3} \\ 1 & 0 & \frac{2}{3} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$

(b) $|A| = -1$, $|B| = -1$, $L_1^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 0 & -2 & 1 \\ 0 & 1 & 0 \\ 1 & -8 & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $L_2^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 2 & 0 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $(L_2 \circ L_1)\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} -1 & 1 & 0 \\ -4 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $(L_2 \circ L_1)^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = (L_1^{-1} \circ L_2^{-1})\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$

(c) $|A| = -1$, $|B| = 1$, $L_1^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 2 & -4 & -1 \\ 7 & -13 & -3 \\ 5 & -10 & -3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $L_2^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 1 & 0 & -1 \\ 3 & 1 & -3 \\ -1 & -2 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $(L_2 \circ L_1)\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} 29 & -6 & -4 \\ 21 & -5 & -2 \\ 38 & -8 & -5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$, $(L_2 \circ L_1)^{-1}\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = (L_1^{-1} \circ L_2^{-1})\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right) = \begin{bmatrix} -9 & -2 & 8 \\ -29 & -7 & 26 \\ -22 & -4 & 19 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$

(2) L is invertible ($L^{-1} = L$). By Theorem 5.14, L is an isomorphism.
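As a numerical check on Exercise 1(a), the identity $(L_2 \circ L_1)^{-1} = L_1^{-1} \circ L_2^{-1}$ translates to $(BA)^{-1} = A^{-1}B^{-1}$ on matrices, using only data stated in the answer (the composition matrix and the two inverse matrices):

```python
import numpy as np

BA = np.array([[0, -2,  1],
               [1,  4, -2],
               [0,  3,  0]])          # matrix for L2 ∘ L1 from part (a)
A_inv = np.array([[0,  0, 1],
                  [0, -1, 0],
                  [1, -2, 0]])        # matrix for L1^{-1}
B_inv = np.array([[1, 0,    0],
                  [0, 0, -1/3],
                  [2, 1,    0]])      # matrix for L2^{-1}

# (L2 ∘ L1)^{-1} = L1^{-1} ∘ L2^{-1}
assert np.allclose(np.linalg.inv(BA), A_inv @ B_inv)
```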

(3) (a) $L_1$ is a linear operator: $L_1(B + C) = A(B + C) = AB + AC = L_1(B) + L_1(C)$; $L_1(cB) = A(cB) = c(AB) = cL_1(B)$. Note that $L_1$ is invertible ($L_1^{-1}(B) = A^{-1}B$). Use Theorem 5.14.


(b) $L_2$ is a linear operator: $L_2(B + C) = A(B + C)A^{-1} = (AB + AC)A^{-1} = ABA^{-1} + ACA^{-1} = L_2(B) + L_2(C)$; $L_2(cB) = A(cB)A^{-1} = c(ABA^{-1}) = cL_2(B)$. Note that $L_2$ is invertible ($L_2^{-1}(B) = A^{-1}BA$). Use Theorem 5.14.

(4) L is a linear operator: $L(\mathbf{p} + \mathbf{q}) = (\mathbf{p} + \mathbf{q}) + (\mathbf{p} + \mathbf{q})' = \mathbf{p} + \mathbf{q} + \mathbf{p}' + \mathbf{q}' = (\mathbf{p} + \mathbf{p}') + (\mathbf{q} + \mathbf{q}') = L(\mathbf{p}) + L(\mathbf{q})$; $L(c\mathbf{p}) = (c\mathbf{p}) + (c\mathbf{p})' = c\mathbf{p} + c\mathbf{p}' = c(\mathbf{p} + \mathbf{p}') = cL(\mathbf{p})$. Now, if $L(\mathbf{p}) = \mathbf{0}$, then $\mathbf{p} + \mathbf{p}' = \mathbf{0} \Rightarrow \mathbf{p} = -\mathbf{p}'$. But if $\mathbf{p}$ is not a constant polynomial, then $\mathbf{p}$ and $\mathbf{p}'$ have different degrees, a contradiction. Hence, $\mathbf{p}' = \mathbf{0}$, and so $\mathbf{p} = \mathbf{0}$. Thus, $\ker(L) = \{\mathbf{0}\}$ and L is one-to-one. Then, by Corollary 5.21, L is an isomorphism.

(5) (a) $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$

(b) Use Theorem 5.15 (since the matrix in part (a) is nonsingular).

(c) If A is the matrix for R in part (a), then $A^2 = I_2$, so $R = R^{-1}$.

(d) Performing the reflection R twice in succession gives the identity mapping.

(6) The change of basis is performed by multiplying by a nonsingular transition matrix. Use Theorem 5.15. (Note that multiplication by a matrix is a linear transformation.)

(7) $(M \circ L_1)(\mathbf{v}) = (M \circ L_2)(\mathbf{v}) \Rightarrow (M^{-1} \circ M \circ L_1)(\mathbf{v}) = (M^{-1} \circ M \circ L_2)(\mathbf{v}) \Rightarrow L_1(\mathbf{v}) = L_2(\mathbf{v})$

(8) Suppose $\dim(V) = n > 0$ and $\dim(W) = k > 0$.

Suppose L is an isomorphism. Let M be the matrix for $L^{-1}$ with respect to C and B. Then $(L^{-1} \circ L)(\mathbf{v}) = \mathbf{v}$ for every $\mathbf{v} \in V \Rightarrow MA_{BC}[\mathbf{v}]_B = [\mathbf{v}]_B$ for every $\mathbf{v} \in V$ (by Theorem 5.7). Letting $\mathbf{v}_i$ be the ith basis element in B, $[\mathbf{v}_i]_B = \mathbf{e}_i$, and so $\mathbf{e}_i = [\mathbf{v}_i]_B = MA_{BC}[\mathbf{v}_i]_B = MA_{BC}\mathbf{e}_i = $ (ith column of $MA_{BC}$). Therefore, $MA_{BC} = I_n$. Similarly, using the fact that $(L \circ L^{-1})(\mathbf{w}) = \mathbf{w}$ for every $\mathbf{w} \in W$ implies that $A_{BC}M = I_k$. Exercise 21 in Section 2.4 then implies that $n = k$, $A_{BC}$ is nonsingular, and $M = A_{BC}^{-1}$.

Conversely, if $A_{BC}$ is nonsingular, then $A_{BC}$ is square, $n = k$, $A_{BC}^{-1}$ exists, and $A_{BC}^{-1}A_{BC} = A_{BC}A_{BC}^{-1} = I_n$. Let K be the linear operator from W to V whose matrix with respect to C and B is $A_{BC}^{-1}$. Then, for all $\mathbf{v} \in V$, $[(K \circ L)(\mathbf{v})]_B = A_{BC}^{-1}A_{BC}[\mathbf{v}]_B = I_n[\mathbf{v}]_B = [\mathbf{v}]_B$. Therefore, $(K \circ L)(\mathbf{v}) = \mathbf{v}$ for all $\mathbf{v} \in V$, since coordinatizations are unique. Similarly, for all $\mathbf{w} \in W$, $[(L \circ K)(\mathbf{w})]_C = A_{BC}A_{BC}^{-1}[\mathbf{w}]_C = I_n[\mathbf{w}]_C = [\mathbf{w}]_C$. Hence $(L \circ K)(\mathbf{w}) = \mathbf{w}$ for all $\mathbf{w} \in W$. Therefore, K acts as an inverse for L. Hence, L is an isomorphism by Theorem 5.14.

(9) In all parts, use Corollary 5.19. In part (c), the answer to the question is yes.


(10) If L is an isomorphism, $(L \circ L)^{-1} = L^{-1} \circ L^{-1}$, so $L \circ L$ is an isomorphism by Theorem 5.14.

Conversely, if $L \circ L$ is an isomorphism, it is easy to prove L is one-to-one and onto directly. Or, let $f = L \circ (L \circ L)^{-1}$ and $g = (L \circ L)^{-1} \circ L$. Then clearly, $L \circ f = I$ (identity operator on V) $= g \circ L$. But $f = I \circ f = g \circ L \circ f = g \circ I = g$, and so $f = g = L^{-1}$. Hence L is an isomorphism by Theorem 5.14.

(11) (a) Let $\mathbf{v} \in V$ with $\mathbf{v} \ne \mathbf{0}_V$. Since $(L \circ L)(\mathbf{v}) = \mathbf{0}_V$, either $L(\mathbf{v}) = \mathbf{0}_V$ or $L(L(\mathbf{v})) = \mathbf{0}_V$ with $L(\mathbf{v}) \ne \mathbf{0}_V$. Hence, $\ker(L) \ne \{\mathbf{0}_V\}$, since one of the two vectors $\mathbf{v}$ or $L(\mathbf{v})$ is nonzero and is in ker(L).

(b) Proof by contrapositive: The goal is to prove that if L is an isomorphism, then $L \circ L \ne L$ or L is the identity transformation. Assume L is an isomorphism and $L \circ L = L$ (see "If A, Then B or C Proofs" in Section 1.3). Then L is the identity transformation because $(L \circ L)(\mathbf{v}) = L(\mathbf{v}) \Rightarrow (L^{-1} \circ L \circ L)(\mathbf{v}) = (L^{-1} \circ L)(\mathbf{v}) \Rightarrow L(\mathbf{v}) = \mathbf{v}$.

(12) Range(L) = column space of A (see comments just before the Range Method in Section 5.3 of the textbook). Now, L is an isomorphism iff L is onto (by Corollary 5.21) iff range(L) = $\mathbb{R}^n$ iff span({columns of A}) = $\mathbb{R}^n$ iff the n columns of A are linearly independent (by Theorem 4.13).

(13) (a) No, by Corollary 5.21, because $\dim(\mathbb{R}^6) = \dim(P_5)$.

(b) No, by Corollary 5.21, because $\dim(M_{22}) = \dim(P_3)$.

(14) (a) Use Theorem 5.13.

(b) By Exercise 8(c) in Section 5.4, L is onto. Now, if L is not one-to-one, then ker(L) is nontrivial. Suppose $\mathbf{v} \in \ker(L)$ with $\mathbf{v} \ne \mathbf{0}$. If $B = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$, then there are $a_1, \ldots, a_n \in \mathbb{R}$ such that $\mathbf{v} = a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n$, where not every $a_i = 0$ (since $\mathbf{v} \ne \mathbf{0}$). Then $\mathbf{0}_W = L(\mathbf{v}) = L(a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n) = a_1L(\mathbf{v}_1) + \cdots + a_nL(\mathbf{v}_n)$. But since each $L(\mathbf{v}_j)$ is distinct, and some $a_i \ne 0$, this contradicts the linear independence of L(B). Hence L is one-to-one.

(c) $T(\mathbf{e}_1) = [3,1]$, $T(\mathbf{e}_2) = [5,2]$, and $T(\mathbf{e}_3) = [3,1]$. Hence, $T(B) = \{[3,1], [5,2]\}$ (because identical elements of a set are not listed more than once). This set is clearly a basis for $\mathbb{R}^2$. T is not an isomorphism or else we would contradict Theorem 5.17, since $\dim(\mathbb{R}^3) \ne \dim(\mathbb{R}^2)$.

(d) Part (c) is not a counterexample to part (b) because in part (c) the images of the vectors in B are not distinct.

(15) Let $B = (\mathbf{v}_1, \ldots, \mathbf{v}_n)$, let $\mathbf{v} \in V$, and let $a_1, \ldots, a_n \in \mathbb{R}$ such that $\mathbf{v} = a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n$. Then $[\mathbf{v}]_B = [a_1, \ldots, a_n]$. But $L(\mathbf{v}) = L(a_1\mathbf{v}_1 + \cdots + a_n\mathbf{v}_n) = a_1L(\mathbf{v}_1) + \cdots + a_nL(\mathbf{v}_n)$. From Exercise 14(a), $\{L(\mathbf{v}_1), \ldots, L(\mathbf{v}_n)\}$ is a basis for $L(B) = W$ (since L is onto). Hence, $[L(\mathbf{v})]_{L(B)} = [a_1, \ldots, a_n]$ as well.


(16) (Note: This exercise will be used as part of the proof of the Dimension Theorem, so we do not use the Dimension Theorem in this proof.) Because Y is a subspace of the finite dimensional vector space V, Y is also finite dimensional (Theorem 4.16). Suppose $\{\mathbf{y}_1, \ldots, \mathbf{y}_k\}$ is a basis for Y. Because L is one-to-one, the vectors $L(\mathbf{y}_1), \ldots, L(\mathbf{y}_k)$ are distinct. By part (1) of Theorem 5.13, $\{L(\mathbf{y}_1), \ldots, L(\mathbf{y}_k)\} = L(\{\mathbf{y}_1, \ldots, \mathbf{y}_k\})$ is linearly independent. Part (a) of Exercise 8 in Section 5.4 shows that $\operatorname{span}(L(\{\mathbf{y}_1, \ldots, \mathbf{y}_k\})) = L(\operatorname{span}(\{\mathbf{y}_1, \ldots, \mathbf{y}_k\})) = L(Y)$. Therefore, $\{L(\mathbf{y}_1), \ldots, L(\mathbf{y}_k)\}$ is a k-element basis for L(Y). Hence, $\dim(L(Y)) = k = \dim(Y)$.

(17) (a) Let $\mathbf{v} \in \ker(T)$. Then $(T_2 \circ T)(\mathbf{v}) = T_2(T(\mathbf{v})) = T_2(\mathbf{0}_W) = \mathbf{0}_Y$, and so $\mathbf{v} \in \ker(T_2 \circ T)$. Hence, $\ker(T) \subseteq \ker(T_2 \circ T)$. Next, suppose $\mathbf{v} \in \ker(T_2 \circ T)$. Then $\mathbf{0}_Y = (T_2 \circ T)(\mathbf{v}) = T_2(T(\mathbf{v}))$, which implies that $T(\mathbf{v}) \in \ker(T_2)$. But $T_2$ is one-to-one, and so $\ker(T_2) = \{\mathbf{0}_W\}$. Therefore, $T(\mathbf{v}) = \mathbf{0}_W$, implying that $\mathbf{v} \in \ker(T)$. Hence, $\ker(T_2 \circ T) \subseteq \ker(T)$. Therefore, $\ker(T) = \ker(T_2 \circ T)$.

(b) Suppose $\mathbf{w} \in \operatorname{range}(T \circ T_1)$. Then there is some $\mathbf{x} \in X$ such that $(T \circ T_1)(\mathbf{x}) = \mathbf{w}$, implying $T(T_1(\mathbf{x})) = \mathbf{w}$. Hence $\mathbf{w}$ is the image under T of $T_1(\mathbf{x})$ and so $\mathbf{w} \in \operatorname{range}(T)$. Thus, $\operatorname{range}(T \circ T_1) \subseteq \operatorname{range}(T)$. Similarly, if $\mathbf{w} \in \operatorname{range}(T)$, there is some $\mathbf{v} \in V$ such that $T(\mathbf{v}) = \mathbf{w}$. Since $T_1^{-1}$ exists, $(T \circ T_1)(T_1^{-1}(\mathbf{v})) = \mathbf{w}$, and so $\mathbf{w} \in \operatorname{range}(T \circ T_1)$. Thus, $\operatorname{range}(T) \subseteq \operatorname{range}(T \circ T_1)$, finishing the proof.

(c) Let $\mathbf{v} \in T_1(\ker(T \circ T_1))$. Thus, there is an $\mathbf{x} \in \ker(T \circ T_1)$ such that $T_1(\mathbf{x}) = \mathbf{v}$. Then, $T(\mathbf{v}) = T(T_1(\mathbf{x})) = (T \circ T_1)(\mathbf{x}) = \mathbf{0}_W$. Thus, $\mathbf{v} \in \ker(T)$. Therefore, $T_1(\ker(T \circ T_1)) \subseteq \ker(T)$. Now suppose that $\mathbf{v} \in \ker(T)$. Then $T(\mathbf{v}) = \mathbf{0}_W$. Thus, $(T \circ T_1)(T_1^{-1}(\mathbf{v})) = \mathbf{0}_W$ as well. Hence, $T_1^{-1}(\mathbf{v}) \in \ker(T \circ T_1)$. Therefore, $\mathbf{v} = T_1(T_1^{-1}(\mathbf{v})) \in T_1(\ker(T \circ T_1))$, implying $\ker(T) \subseteq T_1(\ker(T \circ T_1))$, completing the proof.

(d) From part (c), $\dim(\ker(T)) = \dim(T_1(\ker(T \circ T_1)))$. Apply Exercise 16 using $T_1$ as L and $Y = \ker(T \circ T_1)$ to show that $\dim(T_1(\ker(T \circ T_1))) = \dim(\ker(T \circ T_1))$.

(e) Suppose $\mathbf{y} \in \operatorname{range}(T_2 \circ T)$. Hence, there is a $\mathbf{v} \in V$ such that $(T_2 \circ T)(\mathbf{v}) = \mathbf{y}$. Thus, $\mathbf{y} = T_2(T(\mathbf{v}))$. Clearly, $T(\mathbf{v}) \in \operatorname{range}(T)$, and so $\mathbf{y} \in T_2(\operatorname{range}(T))$. This proves that $\operatorname{range}(T_2 \circ T) \subseteq T_2(\operatorname{range}(T))$. Next, suppose that $\mathbf{y} \in T_2(\operatorname{range}(T))$. Then there is some $\mathbf{w} \in \operatorname{range}(T)$ such that $T_2(\mathbf{w}) = \mathbf{y}$. Since $\mathbf{w} \in \operatorname{range}(T)$, there is a $\mathbf{v} \in V$ such that $T(\mathbf{v}) = \mathbf{w}$. Therefore, $(T_2 \circ T)(\mathbf{v}) = T_2(T(\mathbf{v})) = T_2(\mathbf{w}) = \mathbf{y}$. This establishes that $T_2(\operatorname{range}(T)) \subseteq \operatorname{range}(T_2 \circ T)$, finishing the proof.

(f) From part (e), $\dim(\operatorname{range}(T_2 \circ T)) = \dim(T_2(\operatorname{range}(T)))$. Apply Exercise 16 using $T_2$ as L and $Y = \operatorname{range}(T)$ to show that $\dim(T_2(\operatorname{range}(T))) = \dim(\operatorname{range}(T))$.


(18) (a) Part (c) of Exercise 17 states that $T_1(\ker(T \circ T_1)) = \ker(T)$. Substituting $T = L_2 \circ L$ and $T_1 = L_1^{-1}$ yields $L_1^{-1}(\ker(L_2 \circ L \circ L_1^{-1})) = \ker(L_2 \circ L)$. Now $M = L_2 \circ L \circ L_1^{-1}$. Hence, $L_1^{-1}(\ker(M)) = \ker(L_2 \circ L)$.

(b) With $L_2 = T_2$ and $L = T$, part (a) of Exercise 17 shows that $\ker(L_2 \circ L) = \ker(L)$. This, combined with part (a) of this exercise, shows that $L_1^{-1}(\ker(M)) = \ker(L)$.

(c) From part (b), $\dim(\ker(L)) = \dim(L_1^{-1}(\ker(M)))$. Apply Exercise 16 with L replaced by $L_1^{-1}$ and $Y = \ker(M)$ to show that $\dim(L_1^{-1}(\ker(M))) = \dim(\ker(M))$, completing the proof.

(d) Part (e) of Exercise 17 states that $\operatorname{range}(T_2 \circ T) = T_2(\operatorname{range}(T))$. Substituting $L \circ L_1^{-1}$ for T and $L_2$ for $T_2$ yields $\operatorname{range}(L_2 \circ L \circ L_1^{-1}) = L_2(\operatorname{range}(L \circ L_1^{-1}))$. Apply $L_2^{-1}$ to both sides to produce $L_2^{-1}(\operatorname{range}(L_2 \circ L \circ L_1^{-1})) = \operatorname{range}(L \circ L_1^{-1})$. Using $M = L_2 \circ L \circ L_1^{-1}$ gives $L_2^{-1}(\operatorname{range}(M)) = \operatorname{range}(L \circ L_1^{-1})$, the desired result.

(e) Part (b) of Exercise 17 states that $\operatorname{range}(T \circ T_1) = \operatorname{range}(T)$. Substituting L for T and $L_1^{-1}$ for $T_1$ produces $\operatorname{range}(L \circ L_1^{-1}) = \operatorname{range}(L)$. Combining this with $L_2^{-1}(\operatorname{range}(M)) = \operatorname{range}(L \circ L_1^{-1})$ from part (d) of this exercise yields $L_2^{-1}(\operatorname{range}(M)) = \operatorname{range}(L)$.

(f) From part (e), $\dim(L_2^{-1}(\operatorname{range}(M))) = \dim(\operatorname{range}(L))$. Apply Exercise 16 with L replaced by $L_2^{-1}$ and $Y = \operatorname{range}(M)$ to show that $\dim(L_2^{-1}(\operatorname{range}(M))) = \dim(\operatorname{range}(M))$, completing the proof.

(19) (a) A nonsingular $2 \times 2$ matrix must have at least one of its first column entries nonzero. Then, use an appropriate equation from the two given in the problem.

(b) $\begin{bmatrix} k & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} kx \\ y \end{bmatrix}$, a contraction (if $k \le 1$) or dilation (if $k \ge 1$) along the x-coordinate. Similarly, $\begin{bmatrix} 1 & 0 \\ 0 & k \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ ky \end{bmatrix}$, a contraction (if $k \le 1$) or dilation (if $k \ge 1$) along the y-coordinate.

(c) By the hint, multiplying by $\begin{bmatrix} k & 0 \\ 0 & 1 \end{bmatrix}$ first performs a dilation/contraction by part (a) followed by a reflection about the y-axis. Using $\begin{bmatrix} 1 & 0 \\ 0 & k \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -k \end{bmatrix}$ shows that multiplying by $\begin{bmatrix} 1 & 0 \\ 0 & k \end{bmatrix}$ first performs a dilation/contraction by part (a) followed by a reflection about the x-axis.

(d) These matrices are defined to represent shears in Exercise 11 in Section 5.1.


(e) $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} y \\ x \end{bmatrix}$; that is, this matrix multiplication switches the coordinates of a vector in $\mathbb{R}^2$. That is precisely the result of a reflection through the line y = x.

(f) This is merely a summary of parts (a) through (e).

(20) $\begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix} = \begin{bmatrix} \frac{\sqrt{2}}{2} & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ \frac{\sqrt{2}}{2} & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & \sqrt{2} \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}$, where the matrices on the right side are, respectively, a contraction along the x-coordinate, a shear in the y-direction, a dilation along the y-coordinate, and a shear in the x-direction.
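The factorization in Exercise 20 (a 45-degree rotation written as a product of contractions, dilations, and shears) multiplies out correctly; a direct numerical check:

```python
import numpy as np

s = np.sqrt(2) / 2

contract_x = np.array([[s, 0], [0, 1]])            # contraction along the x-coordinate
shear_y    = np.array([[1, 0], [s, 1]])            # shear in the y-direction
dilate_y   = np.array([[1, 0], [0, np.sqrt(2)]])   # dilation along the y-coordinate
shear_x    = np.array([[1, -1], [0, 1]])           # shear in the x-direction

R45 = np.array([[s, -s], [s, s]])                  # rotation by 45 degrees

assert np.allclose(contract_x @ shear_y @ dilate_y @ shear_x, R45)
```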

(21) (a) Consider $L: P_2 \to \mathbb{R}^3$ given by $L(\mathbf{p}) = [\mathbf{p}(x_1), \mathbf{p}(x_2), \mathbf{p}(x_3)]$. Now, $\dim(P_2) = \dim(\mathbb{R}^3) = 3$. Hence, by Corollary 5.21, if L is either one-to-one or onto, it has the other property as well. We will show that L is one-to-one using Theorem 5.12. If $\mathbf{p} \in \ker(L)$, then $L(\mathbf{p}) = \mathbf{0}$, and so $\mathbf{p}(x_1) = \mathbf{p}(x_2) = \mathbf{p}(x_3) = 0$. Hence $\mathbf{p}$ is a polynomial of degree $\le 2$ touching the x-axis at $x = x_1$, $x = x_2$, and $x = x_3$. Since the graph of $\mathbf{p}$ must be either a parabola or a line, it cannot touch the x-axis at three distinct points unless its graph is the line $y = 0$. That is, $\mathbf{p} = \mathbf{0}$ in $P_2$. Therefore, $\ker(L) = \{\mathbf{0}\}$, and L is one-to-one. Now, by Corollary 5.21, L is onto. Thus, given any 3-vector [a, b, c], there is some $\mathbf{p} \in P_2$ such that $\mathbf{p}(x_1) = a$, $\mathbf{p}(x_2) = b$, and $\mathbf{p}(x_3) = c$.

(b) From part (a), L is an isomorphism. Hence, any $[a, b, c] \in \mathbb{R}^3$ has a unique pre-image under L in $P_2$.

(c) Generalize parts (a) and (b) using $L: P_n \to \mathbb{R}^{n+1}$ given by $L(\mathbf{p}) = [\mathbf{p}(x_1), \ldots, \mathbf{p}(x_n), \mathbf{p}(x_{n+1})]$. Note that any polynomial in ker(L) has $n + 1$ distinct roots, and so must be trivial. Hence L is an isomorphism by Corollary 5.21. Thus, given any $(n+1)$-vector $[a_1, \ldots, a_n, a_{n+1}]$, there is a unique $\mathbf{p} \in P_n$ such that $\mathbf{p}(x_1) = a_1, \ldots, \mathbf{p}(x_n) = a_n$, and $\mathbf{p}(x_{n+1}) = a_{n+1}$.
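In coordinates, the evaluation map of Exercise 21 is multiplication by a Vandermonde matrix, which is nonsingular exactly when the sample points are distinct; finding the unique preimage is polynomial interpolation. A sketch (the sample points and target values below are arbitrary choices):

```python
import numpy as np

x = np.array([0.0, 1.0, 3.0])          # distinct x1, x2, x3
target = np.array([2.0, -1.0, 5.0])    # desired values [a, b, c]

# Vandermonde matrix of the evaluation map on P2 (nonsingular for distinct x_i)
V = np.vander(x, 3)

# the unique quadratic p with p(x_i) = target_i (coefficients, highest power first)
coeffs = np.linalg.solve(V, target)

assert np.allclose(np.polyval(coeffs, x), target)
```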

(22) (a) Clearly, multiplying a nonzero polynomial by x produces another nonzero polynomial. Hence, $\ker(L) = \{\mathbf{0}_P\}$, and L is one-to-one. But, every nonzero element of range(L) has degree at least 1, since L multiplies by x. Hence, the constant polynomial 1 is not in range(L), and L is not onto.

(b) Corollary 5.21 requires that the domain and codomain of the linear transformation be finite dimensional. However, P is infinite dimensional.


(23) (a) T (b) T (c) F (d) F (e) F (f) T (g) T (h) T (i) T

Section 5.6

(1) (a) $\lambda_1 = 2$: basis for $E_{\lambda_1}$ = $\{[1,0]\}$, alg. mult. 2, geom. mult. 1

(b) $\lambda_1 = 3$: basis $\{[1,4]\}$, alg. mult. 1, geom. mult. 1; $\lambda_2 = 2$: basis $\{[0,1]\}$, alg. mult. 1, geom. mult. 1

(c) $\lambda_1 = 1$: basis $\{[2,1,1]\}$, alg. mult. 1, geom. mult. 1; $\lambda_2 = -1$: basis $\{[-1,0,1]\}$, alg. mult. 1, geom. mult. 1; $\lambda_3 = 2$: basis $\{[1,1,1]\}$, alg. mult. 1, geom. mult. 1

(d) $\lambda_1 = 2$: basis $\{[5,4,0], [3,0,2]\}$, alg. mult. 2, geom. mult. 2; $\lambda_2 = 3$: basis $\{[0,1,-1]\}$, alg. mult. 1, geom. mult. 1

(e) $\lambda_1 = -1$: basis $\{[-1,2,3]\}$, alg. mult. 2, geom. mult. 1; $\lambda_2 = 0$: basis $\{[-1,1,3]\}$, alg. mult. 1, geom. mult. 1

(f) $\lambda_1 = 0$: basis $\{[6,-1,1,4]\}$, alg. mult. 2, geom. mult. 1; $\lambda_2 = 2$: basis $\{[7,1,0,5]\}$, alg. mult. 2, geom. mult. 1

(2) (a) C = (e1, e2, e3, e4); B = ([1, 1, 0, 0], [0, 0, 1, 1], [1, -1, 0, 0], [0, 0, 1, -1]);
A = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]];
D = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]];
P = [[1, 0, 1, 0], [1, 0, -1, 0], [0, 1, 0, 1], [0, 1, 0, -1]]

(b) C = (x², x, 1); B = (x² - 2x + 1, -x + 1, 1);
A = [[2, 0, 0], [-2, 1, 0], [0, -1, 0]];
D = [[2, 0, 0], [0, 1, 0], [0, 0, 0]];
P = [[1, 0, 0], [-2, -1, 0], [1, 1, 1]]

(c) Not diagonalizable; C = (x², x, 1); A = [[-1, 0, 0], [2, -3, 0], [0, 1, -3]];


λ1 = -1: basis for Eλ1 = {2x² + 2x + 1}, alg. mult. 1, geom. mult. 1; λ2 = -3: basis for Eλ2 = {1}, alg. mult. 2, geom. mult. 1

(d) C = (x², x, 1); B = (2x² - 8x + 9, x, 1);
A = [[-1, 0, 0], [-12, -4, 0], [18, 0, -5]];
D = [[-1, 0, 0], [0, -4, 0], [0, 0, -5]];
P = [[2, 0, 0], [-8, 1, 0], [9, 0, 1]]

(e) Not diagonalizable; no eigenvalues; A = [[1/2, -√3/2], [√3/2, 1/2]]

(f) C = ([[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]]);
B = ([[1, 0], [0, 0]], [[0, 0], [0, 1]], [[0, 1], [1, 0]], [[0, 1], [-1, 0]]);
A = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]];
D = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]];
P = [[1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, -1], [0, 1, 0, 0]]

(g) C = ([[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]]);
B = ([[1, 0], [0, 0]], [[0, 1], [1, 0]], [[0, 0], [0, 1]], [[0, -1], [1, 0]]);
A = [[1, 0, 0, 0], [0, 1, -1, 0], [0, -1, 1, 0], [0, 0, 0, 0]];
D = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 2]];
P = [[1, 0, 0, 0], [0, 1, 0, -1], [0, 1, 0, 1], [0, 0, 1, 0]]

(h) C = ([[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]]);
B = ([[3, 0], [5, 0]], [[0, 3], [0, 5]], [[1, 0], [2, 0]], [[0, 1], [0, 2]]);
A = [[-4, 0, 3, 0], [0, -4, 0, 3], [-10, 0, 7, 0], [0, -10, 0, 7]];
D = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 2]];
P = [[3, 0, 1, 0], [0, 3, 0, 1], [5, 0, 2, 0], [0, 5, 0, 2]]

(3) (a) pL(x) = det [[x - 5, -2, 0, -1], [2, x - 1, 0, 1], [-4, -4, x - 3, -2], [-16, 0, 8, x + 5]]
(expanding along the third column)
= (x - 3) det [[x - 5, -2, -1], [2, x - 1, 1], [-16, 0, x + 5]] - 8 det [[x - 5, -2, -1], [2, x - 1, 1], [-4, -4, -2]]
= (x - 3)(-16 det [[-2, -1], [x - 1, 1]] + (x + 5) det [[x - 5, -2], [2, x - 1]])
- 8((x - 5) det [[x - 1, 1], [-4, -2]] - 2 det [[-2, -1], [-4, -2]] - 4 det [[-2, -1], [x - 1, 1]])
= (x - 3)(-16(x - 3) + (x + 5)(x² - 6x + 9)) - 8((x - 5)(-2x + 6) - 2(0) - 4(x - 3))
= (x - 3)(-16(x - 3) + (x + 5)(x - 3)²) - 8((-2)(x - 5)(x - 3) - 4(x - 3))
= (x - 3)²(-16 + (x + 5)(x - 3)) + 16(x - 3)((x - 5) + 2)
= (x - 3)²(x² + 2x - 31) + 16(x - 3)² = (x - 3)²(x² + 2x - 15)
= (x - 3)³(x + 5). You can check that this polynomial expands as claimed.
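The factorization pL(x) = (x - 3)³(x + 5) can also be verified numerically: a degree-4 polynomial is determined by its values at five points, so it suffices to compare det(xI4 - A) with the factored form at five sample values of x. A sketch in Python, using the entries of the characteristic matrix from the expansion above:

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def char_matrix(x):
    # xI4 - A, copied from the cofactor expansion above.
    return [[x - 5, -2, 0, -1],
            [2, x - 1, 0, 1],
            [-4, -4, x - 3, -2],
            [-16, 0, 8, x + 5]]

# Two degree-4 polynomials agreeing at five points are identical.
for x in range(5):
    assert det(char_matrix(x)) == (x - 3) ** 3 * (x + 5)
print("factorization confirmed")
```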

(b) [[-10, -2, 0, -1], [2, -6, 0, 1], [-4, -4, -8, -2], [-16, 0, 8, 0]] becomes [[1, 0, 0, 1/8], [0, 1, 0, -1/8], [0, 0, 1, 1/4], [0, 0, 0, 0]] when put into reduced row echelon form.

(4) For both parts, the only eigenvalue is λ = 1; E1 = {1}.

(5) Since A is upper triangular, so is xIn - A. Thus, by Theorem 3.2, |xIn - A| equals the product of the main diagonal entries of xIn - A, which is (x - a11)ⁿ, since a11 = a22 = ⋯ = ann. Thus A has exactly one eigenvalue λ with algebraic multiplicity n. Hence A is diagonalizable ⇔ λ has


geometric multiplicity n ⇔ Eλ = Rn ⇔ Av = λv for all v ∈ Rn ⇔ Av = λv for all v ∈ {e1, …, en} ⇔ A = λIn (using Exercise 14(b) in Section 1.5).

(6) In both cases, the given linear operator is diagonalizable, but there are fewer than n distinct eigenvalues. This occurs because at least one of the eigenvalues has geometric multiplicity greater than 1.

(7) (a) [[1, 1, -1], [0, 1, 0], [0, 0, 1]]; eigenvalue λ = 1; basis for E1 = {[1, 0, 0], [0, 1, 1]}; λ has algebraic multiplicity 3 and geometric multiplicity 2

(b) [[1, 0, 0], [0, 1, 0], [0, 0, 0]]; eigenvalues λ1 = 1, λ2 = 0; basis for E1 = {[1, 0, 0], [0, 1, 0]}; λ1 has algebraic multiplicity 2 and geometric multiplicity 2

(8) (a) Zero is not an eigenvalue for L iff ker(L) = {0}. Now apply Theorem 5.12 and Corollary 5.21.

(b) L(v) = λv ⇒ L⁻¹(L(v)) = L⁻¹(λv) ⇒ v = λL⁻¹(v) ⇒ (1/λ)v = L⁻¹(v) (since λ ≠ 0 by part (a)).

(9) Let P be such that P⁻¹AP is diagonal. Let C = (v1, …, vn), where [vi]B = ith column of P. Then, by definition, P is the transition matrix from C to B. Then, for all v ∈ V, P⁻¹AP[v]C = P⁻¹A[v]B = P⁻¹[L(v)]B = [L(v)]C. Hence, as in part (a), P⁻¹AP is the matrix for L with respect to C. But P⁻¹AP is diagonal.

(10) If P is the matrix with columns v1, …, vn, then P⁻¹AP = D is a diagonal matrix with λ1, …, λn as the main diagonal entries of D. Clearly |D| = λ1λ2⋯λn. But |D| = |P⁻¹AP| = |P⁻¹||A||P| = |A||P⁻¹||P| = |A|.
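The identity |A| = λ1λ2⋯λn in (10) can be illustrated with a small hypothetical example (a symmetric 2 × 2 matrix with eigenvalues 1 and 3, not taken from the exercises):

```python
# Hypothetical 2x2 matrix with eigenvalues 1 and 3 (roots of x^2 - 4x + 3).
A = [[2, 1], [1, 2]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]

eigenvalues = [1, 3]
# Each eigenvalue makes A - lambda*I singular:
for lam in eigenvalues:
    B = [[A[0][0] - lam, A[0][1]], [A[1][0], A[1][1] - lam]]
    assert B[0][0] * B[1][1] - B[0][1] * B[1][0] == 0

# |A| equals the product of the eigenvalues, as in Exercise 10.
product = 1
for lam in eigenvalues:
    product *= lam
print(det_A, product)  # 3 3
```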

(11) Clearly n ≥ Σ_{i=1}^{k} (algebraic multiplicity of λi) ≥ Σ_{i=1}^{k} (geometric multiplicity of λi), by Theorem 5.26.

(12) Proof by contrapositive: If pL(x) has a nonreal root, then the sum of the algebraic multiplicities of the eigenvalues of L is less than n. Apply Theorem 5.28.

(13) (a) A(Bv) = (AB)v = (BA)v = B(Av) = B(λv) = λ(Bv).

(b) The eigenspaces Eλ for A are one-dimensional. Let v ∈ Eλ, with v ≠ 0. Then Bv ∈ Eλ, by part (a). Hence Bv = cv for some c ∈ R, and so v is an eigenvector for B. Repeating this for each eigenspace of A shows that B has n linearly independent eigenvectors. Now apply Theorem 5.22.


(14) (a) Suppose v1 and v2 are eigenvectors for A corresponding to λ1 and λ2, respectively. Let K1 be the 2 × 2 matrix [v1 0] (where the vectors represent columns). Similarly, let K2 = [0 v1], K3 = [v2 0], and K4 = [0 v2]. Then a little thought shows that {K1, K2} is a basis for Eλ1 in M22, and {K3, K4} is a basis for Eλ2 in M22.

(b) Generalize the construction in part (a).

(15) Suppose v ∈ B1 ∩ B2. Then v ∈ B1 ⇒ v ∈ Eλ1 ⇒ L(v) = λ1v. Similarly, v ∈ B2 ⇒ v ∈ Eλ2 ⇒ L(v) = λ2v. Hence, λ1v = λ2v, and so (λ1 - λ2)v = 0. Since λ1 ≠ λ2, v = 0. But 0 cannot be an element of any basis, and so B1 ∩ B2 is empty.

(16) (a) Eλi is a subspace, hence closed.

(b) First, substitute ui for Σ_{j=1}^{ki} aij vij in the given double sum equation. This proves Σ_{i=1}^{n} ui = 0. Now, the set of all nonzero ui's is linearly independent by Theorem 5.23, since they are eigenvectors corresponding to distinct eigenvalues. But then, the nonzero terms in Σ_{i=1}^{n} ui would give a nontrivial linear combination from a linearly independent set equal to the zero vector. This contradiction shows that all of the ui's must equal 0.

(c) Using part (b) and the definition of ui, 0 = Σ_{j=1}^{ki} aij vij for each i. But {vi1, …, vi,ki} is linearly independent, since it is a basis for Eλi. Hence, for each i, ai1 = ⋯ = ai,ki = 0.

(d) Apply the definition of linear independence to B.

(17) Example 7 states that pA(x) = x³ - 4x² + x + 6. So, pA(A) = A³ - 4A² + A + 6I3
= [[-17, 22, 76], [-374, 214, 1178], [54, -27, -163]] - 4[[5, 2, -4], [-106, 62, 334], [18, -9, -53]]
+ [[31, -14, -92], [-50, 28, 158], [18, -9, -55]] + 6[[1, 0, 0], [0, 1, 0], [0, 0, 1]]
= [[0, 0, 0], [0, 0, 0], [0, 0, 0]], verifying the Cayley-Hamilton Theorem for this matrix.
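The arithmetic in (17) can be reproduced mechanically. A sketch in Python, using the matrix A that appears as the "+ A" term in the sum above:

```python
def matmul(X, Y):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# The matrix A from Example 7 (the "+ A" term in the displayed sum).
A = [[31, -14, -92], [-50, 28, 158], [18, -9, -55]]

A2 = matmul(A, A)
A3 = matmul(A2, A)
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# pA(A) = A^3 - 4A^2 + A + 6*I3 should be the zero matrix.
pA_of_A = [[A3[i][j] - 4 * A2[i][j] + A[i][j] + 6 * I3[i][j]
            for j in range(3)] for i in range(3)]
print(pA_of_A)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```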

(18) (a) F

(b) T

(c) T

(d) F

(e) T

(f) T

(g) T

(h) F

(i) F

(j) T

Chapter 5 Review Exercises

(1) (a) Not a linear transformation: f([0, 0, 0]) ≠ [0, 0, 0]

(b) Linear transformation:
g((ax³ + bx² + cx + d) + (kx³ + lx² + mx + n)) = g((a + k)x³ + (b + l)x² + (c + m)x + (d + n))


= [[4(b + l) - (c + m), 3(d + n) - (a + k)], [2(d + n) + 3(a + k), 4(c + m)], [5(a + k) + (c + m) + 2(d + n), 2(b + l) - 3(d + n)]]
= [[4b - c, 3d - a], [2d + 3a, 4c], [5a + c + 2d, 2b - 3d]] + [[4l - m, 3n - k], [2n + 3k, 4m], [5k + m + 2n, 2l - 3n]]
= g(ax³ + bx² + cx + d) + g(kx³ + lx² + mx + n);

g(k(ax³ + bx² + cx + d)) = g((ka)x³ + (kb)x² + (kc)x + (kd))
= [[4(kb) - (kc), 3(kd) - (ka)], [2(kd) + 3(ka), 4(kc)], [5(ka) + (kc) + 2(kd), 2(kb) - 3(kd)]]
= k[[4b - c, 3d - a], [2d + 3a, 4c], [5a + c + 2d, 2b - 3d]]
= k(g(ax³ + bx² + cx + d))

(c) Not a linear transformation: h([1, 0] + [0, 1]) ≠ h([1, 0]) + h([0, 1]), since h([1, 1]) = [2, -3] and h([1, 0]) + h([0, 1]) = [0, 0] + [0, 0] = [0, 0].

(2) [1.598, 3.232]

(3) f(A1) + f(A2) = CA1B⁻¹ + CA2B⁻¹ = C(A1B⁻¹ + A2B⁻¹) = C(A1 + A2)B⁻¹ = f(A1 + A2);
f(kA) = C(kA)B⁻¹ = kCAB⁻¹ = kf(A).

(4) L([6, 2, -7]) = [20, 10, 44]; L([x, y, z]) = [[-3, 5, -4], [2, -1, 0], [4, 3, -2]][x, y, z]ᵀ = [-3x + 5y - 4z, 2x - y, 4x + 3y - 2z]

(5) (a) Use Theorem 5.2 and part (1) of Theorem 5.3.

(b) Use Theorem 5.2 and part (2) of Theorem 5.3.

(6) (a) ABC =

�29 32 �243 42 �6

(b) ABC =

24 113 �58 51 �58566 �291 255 �283

�1648 847 �742 823

35(7) (a) ABC =

24 2 1 �3 01 3 0 �40 0 1 �2

35; ADE =

24 �151 99 163 16171 �113 �186 �17238 �158 �260 �25

35

(b) ABC =

26646 �1 �10 3 22 0 �41 �5 1

3775; ADE =

2664115 �45 59374 �146 190�46 15 �25�271 108 �137

3775


(8) [[0, -1, 0], [0, 0, 0], [0, 0, 1]]

(9) (a) pA_BB(x) = x³ - x² - x + 1 = (x + 1)(x - 1)²

(b) Fundamental eigenvectors for λ1 = 1: {[2, 1, 0], [2, 0, 3]}; for λ2 = -1: {[3, -6, 2]}

(c) C = ([2, 1, 0], [2, 0, 3], [3, -6, 2])

(d) A_CC = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]. (Note: P = [[2, 2, 3], [1, 0, -6], [0, 3, 2]] and P⁻¹ = (1/41)[[18, 5, -12], [-2, 4, 15], [3, -6, -2]])

(e) This is a reflection through the plane spanned by {[2, 1, 0], [2, 0, 3]}. The equation for this plane is 3x - 6y - 2z = 0.

(10) (a) Basis for ker(L) = {[2, -3, 1, 0], [-3, 4, 0, 1]}; basis for range(L) = {[3, 2, 2, 1], [1, 1, 3, 4]}

(b) 2 + 2 = 4

(c) [-18, 26, -4, 2] ∉ ker(L) because L([-18, 26, -4, 2]) = [-6, -2, 10, 20] ≠ 0; [-18, 26, -6, 2] ∈ ker(L) because L([-18, 26, -6, 2]) = 0

(d) [8, 3, -11, -23] ∈ range(L); row reduction shows [8, 3, -11, -23] = 5[3, 2, 2, 1] - 7[1, 1, 3, 4], and so L([5, -7, 0, 0]) = [8, 3, -11, -23]

(11) Matrix for L with respect to the standard bases: [[1, 0, 0, 0, 0, -1], [0, 1, -2, 0, 0, 0], [0, 0, 0, 1, 0, -3], [0, 0, 0, 0, 0, 0]];
Basis for ker(L) = {[[0, 2, 1], [0, 0, 0]], [[0, 0, 0], [0, 1, 0]], [[1, 0, 0], [3, 0, 1]]};
Basis for range(L) = {x³, x², x};
dim(ker(L)) + dim(range(L)) = 3 + 3 = 6 = dim(M32)

(12) (a) If v ∈ ker(L1), then L1(v) = 0W. Thus, (L2 ∘ L1)(v) = L2(L1(v)) = L2(0W) = 0X, and so v ∈ ker(L2 ∘ L1). Therefore, ker(L1) ⊆ ker(L2 ∘ L1). Hence, dim(ker(L1)) ≤ dim(ker(L2 ∘ L1)).

(b) Let L1 be the projection onto the x-axis (L1([x, y]) = [x, 0]) and L2 be the projection onto the y-axis (L2([x, y]) = [0, y]). A basis for ker(L1) is {[0, 1]}, while ker(L2 ∘ L1) = R2.


(13) (a) By part (2) of Theorem 5.9 applied to L and M, dim(ker(L)) - dim(ker(M)) = (n - rank(A)) - (m - rank(Aᵀ)) = (n - m) - (rank(A) - rank(Aᵀ)) = n - m, by Corollary 5.11.

(b) L onto ⇒ dim(range(L)) = m = rank(A) (part (1) of Theorem 5.9) = rank(Aᵀ) (Corollary 5.11) = dim(range(M)). Thus, by part (3) of Theorem 5.9, dim(ker(M)) + dim(range(M)) = m, implying dim(ker(M)) + m = m, and so dim(ker(M)) = 0, and M is one-to-one by part (1) of Theorem 5.12.

(c) Converse: M one-to-one ⇒ L onto. This is true. M one-to-one ⇒ dim(ker(M)) = 0 ⇒ dim(ker(L)) = n - m (by part (a)) ⇒ (n - m) + dim(range(L)) = n (by part (3) of Theorem 5.9) ⇒ dim(range(L)) = m ⇒ L is onto (by Theorem 4.16, and part (2) of Theorem 5.12).

(14) (a) L is not one-to-one because L(x³ - x + 1) = O22. Corollary 5.21 then shows that L is not onto.

(b) ker(L) has {x³ - x + 1} as a basis, so dim(ker(L)) = 1. Thus, dim(range(L)) = 3 by the Dimension Theorem.

(15) In each part, let A represent the given matrix for L with respect to the standard bases.

(a) The reduced row echelon form of A is I3. Therefore, ker(L) = {0}, and so dim(ker(L)) = 0 and L is one-to-one. By Corollary 5.21, L is also onto. Hence dim(range(L)) = 3.

(b) The reduced row echelon form of A is [[1, 0, 4, 0], [0, 1, -1, 0], [0, 0, 0, 1]]. dim(ker(L)) = 4 - rank(A) = 1. L is not one-to-one. dim(range(L)) = rank(A) = 3. L is onto.

(16) (a) dim(ker(L)) = dim(P3) - dim(range(L)) ≥ 4 - dim(R3) = 1, so L cannot be one-to-one.

(b) dim(range(L)) = dim(P2) - dim(ker(L)) ≤ dim(P2) = 3 < dim(M22), so L cannot be onto.

(17) (a) L(v1) = cL(v2) = L(cv2) ⇒ v1 = cv2, because L is one-to-one. In this case, the set {L(v1), L(v2)} is linearly dependent, since L(v1) is a nonzero multiple of L(v2). Part (1) of Theorem 5.13 then implies that {v1, v2} must be linearly dependent, which is correct, since v1 is a nonzero multiple of v2.

(b) Consider the contrapositive: For L onto and w ∈ W, if {v1, v2} spans V, then L(av1 + bv2) = w for some a, b ∈ R. Proof of the contrapositive: Suppose L is onto, w ∈ W, and {v1, v2} spans V. By part (2) of Theorem 5.13, {L(v1), L(v2)} spans W, so there exist a, b ∈ R such that w = aL(v1) + bL(v2) = L(av1 + bv2).


(18) (a) L1 and L2 are isomorphisms (by Theorem 5.15) because their (given) matrices are nonsingular. Their inverses are given in part (b).

(b) Matrix for L2 ∘ L1 = [[81, 71, -15, 18], [107, 77, -31, 19], [69, 45, -23, 11], [-29, -36, -1, -9]];
Matrix for L1⁻¹: (1/5)[[2, -10, 19, 11], [0, 5, -10, -5], [3, -15, 26, 14], [-4, 15, -23, -17]];
Matrix for L2⁻¹: (1/2)[[-8, 26, -30, 2], [10, -35, 41, -4], [10, -30, 34, -2], [-14, 49, -57, 6]].

(c) Both computations yield (1/10)[[-80, 371, -451, 72], [20, -120, 150, -30], [-110, 509, -619, 98], [190, -772, 922, -124]].

(19) (a) The determinant of the shear matrix given in Table 5.1 is 1, so this matrix is nonsingular. Therefore, the given mapping is an isomorphism by Theorem 5.15.

(b) The inverse isomorphism is L([a1, a2, a3]) = [[1, 0, -k], [0, 1, -k], [0, 0, 1]][a1, a2, a3]ᵀ = [a1 - ka3, a2 - ka3, a3]. This represents a shear in the z-direction with factor -k.

(20) (a) (BᵀAB)ᵀ = BᵀAᵀ(Bᵀ)ᵀ = BᵀAB, since A is symmetric.

(b) Suppose A, C ∈ W. Note that if BᵀAB = BᵀCB, then A = C because B is nonsingular. Thus L is one-to-one, and so L is onto by Corollary 5.21. An alternate approach is as follows: Suppose C ∈ W. If A = (Bᵀ)⁻¹CB⁻¹, then L(A) = C. Thus L is onto, and so L is one-to-one by Corollary 5.21.

(21) (a) L(ax⁴ + bx³ + cx²) = 4ax³ + (12a + 3b)x² + (6b + 2c)x + 2c. Clearly ker(L) = {0}. Apply part (1) of Theorem 5.12.

(b) No. dim(W) = 3 ≠ dim(P3)

(c) The polynomial x ∉ range(L), since it does not have the correct form for an image under L (as stated in part (a)).


(22) In parts (a), (b), and (c), let A represent the given matrix.

(a) (i) pA(x) = x³ - 3x² - x + 3 = (x - 1)(x + 1)(x - 3); Eigenvalues for L: λ1 = 1, λ2 = -1, and λ3 = 3; Basis for E1: {[-1, 3, 4]}; Basis for E-1: {[-1, 4, 5]}; Basis for E3: {[-6, 20, 27]}
(ii) All algebraic and geometric multiplicities equal 1; L is diagonalizable.
(iii) B = {[-1, 3, 4], [-1, 4, 5], [-6, 20, 27]}; D = [[1, 0, 0], [0, -1, 0], [0, 0, 3]]; P = [[-1, -1, -6], [3, 4, 20], [4, 5, 27]]. (Note that P⁻¹ = [[-8, 3, -4], [1, 3, -2], [1, -1, 1]].)

(b) (i) pA(x) = x³ + 5x² + 20x + 28 = (x + 2)(x² + 3x + 14); Eigenvalue for L: λ = -2, since x² + 3x + 14 has no real roots; Basis for E-2: {[0, 1, 1]}
(ii) For λ = -2: algebraic multiplicity = geometric multiplicity = 1; L is not diagonalizable.

(c) (i) pA(x) = x³ - 5x² + 3x + 9 = (x + 1)(x - 3)²; Eigenvalues for L: λ1 = -1 and λ2 = 3; Basis for E-1: {[1, 3, 3]}; Basis for E3: {[1, 5, 0], [3, 0, 25]}
(ii) For λ1 = -1: algebraic multiplicity = geometric multiplicity = 1; For λ2 = 3: algebraic multiplicity = geometric multiplicity = 2; L is diagonalizable.
(iii) B = {[1, 3, 3], [1, 5, 0], [3, 0, 25]}; D = [[-1, 0, 0], [0, 3, 0], [0, 0, 3]]; P = [[1, 1, 3], [3, 5, 0], [3, 0, 25]]. (Note that P⁻¹ = (1/5)[[125, -25, -15], [-75, 16, 9], [-15, 3, 2]].)

(d) Matrix for L with respect to standard coordinates: A = [[1, 0, 0, 0], [-3, 0, 0, 0], [0, -2, -1, 0], [0, 0, -1, -2]].
(i) pA(x) = x⁴ + 2x³ - x² - 2x = x(x - 1)(x + 1)(x + 2); Eigenvalues for L: λ1 = 1, λ2 = 0, λ3 = -1, and λ4 = -2; Basis for E1: {-x³ + 3x² - 3x + 1} ({[-1, 3, -3, 1]} in standard coordinates); Basis for E0: {x² - 2x + 1} ({[0, 1, -2, 1]} in standard coordinates); Basis for E-1: {-x + 1} ({[0, 0, -1, 1]} in standard coordinates); Basis for E-2: {1} ({[0, 0, 0, 1]} in standard coordinates)
(ii) All algebraic and geometric multiplicities equal 1; L is diagonalizable.


(iii) B = {-x³ + 3x² - 3x + 1, x² - 2x + 1, -x + 1, 1} ({[-1, 3, -3, 1], [0, 1, -2, 1], [0, 0, -1, 1], [0, 0, 0, 1]} in standard coordinates); D = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, -1, 0], [0, 0, 0, -2]]; P = [[-1, 0, 0, 0], [3, 1, 0, 0], [-3, -2, -1, 0], [1, 1, 1, 1]]. (Note that P⁻¹ = P.)

(23) Basis for E1: {[a, b, c], [d, e, f]}; Basis for E-1: {[bf - ce, cd - af, ae - bd]}. Basis of eigenvectors: {[a, b, c], [d, e, f], [bf - ce, cd - af, ae - bd]}; D = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]

(24) pA(A) = A⁴ - 4A³ - 18A² + 108A - 135I4
= [[649, 216, -176, -68], [-568, -135, 176, 68], [1136, 432, -271, -136], [-1088, 0, 544, 625]]
- 4[[97, 54, -8, 19], [-70, -27, 8, -19], [140, 108, 11, 38], [304, 0, -152, -125]]
- 18[[37, 12, -8, -2], [-28, -3, 8, 2], [56, 24, -7, -4], [-32, 0, 16, 25]]
+ 108[[5, 2, 0, 1], [-2, 1, 0, -1], [4, 4, 3, 2], [16, 0, -8, -5]]
- 135[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]] = O44

(25) (a) T

(b) F

(c) T

(d) T

(e) T

(f) F

(g) F

(h) T

(i) T

(j) F

(k) F

(l) T

(m) F

(n) T

(o) T

(p) F

(q) F

(r) T

(s) F

(t) T

(u) T

(v) F

(w) T

(x) T

(y) T

(z) T


Chapter 6

Section 6.1

(1) Orthogonal, not orthonormal: (a), (f); orthogonal and orthonormal: (b), (d), (e), (g); neither: (c)

(2) Orthogonal: (a), (d), (e); not orthogonal, columns not normalized: (b), (c)

(3) (a) [v]B = [(2√3 + 3)/2, (3√3 - 2)/2]

(b) [v]B = [2, -1, 4]

(c) [v]B = [3, 13√3/3, 5√6/3, 4√2]

(4) (a) {[5, -1, 2], [5, -3, -14]} (b) {[2, -1, 3, 1], [-7, -1, 0, 13]}
(c) {[2, 1, 0, -1], [-1, 1, 3, -1], [5, -7, 5, 3]}
(d) {[0, 1, 3, -2], [2, 3, -1, 0], [-3, 2, 0, 1]}
(e) {[4, -1, -2, 2], [2, 0, 3, -1], [3, 8, -4, -6]}

(5) (a) {[2, 2, -3], [13, -4, 6], [0, 3, 2]}
(b) {[1, -4, 3], [25, 4, -3], [0, 3, 4]}
(c) {[1, -3, 1], [2, 5, 13], [4, 1, -1]}
(d) {[3, 1, -2], [5, -3, 6], [0, 2, 1]}
(e) {[2, 1, -2, 1], [3, -1, 2, -1], [0, 5, 2, -1], [0, 0, 1, 2]}
(f) {[2, 1, 0, -3], [0, 3, 2, 1], [5, -1, 0, 3], [0, 3, -5, 1]}

(6) Orthogonal basis for W = {[-2, -1, 4, -2, 1], [4, -3, 0, -2, 1], [-1, -33, 15, 38, -19], [3, 3, 3, -2, -7]}

(7) (a) [-1, 3, 3] (b) [3, 3, 1] (c) [5, 1, 1] (d) [4, 3, 2]

(8) (a) (civi) · (cjvj) = cicj(vi · vj) = cicj · 0 = 0, for i ≠ j.
(b) No

(9) (a) Express v and w as linear combinations of {u1, …, un} using Theorem 6.3. Then expand and simplify v · w.
(b) Let w = v in part (a).

(10) Follow the hint. Then use Exercise 9(b). Finally, drop some terms to get the inequality.

(11) (a) (A⁻¹)ᵀ = (Aᵀ)⁻¹ = (A⁻¹)⁻¹, since A is orthogonal.
(b) AB(AB)ᵀ = AB(BᵀAᵀ) = A(BBᵀ)Aᵀ = AIAᵀ = AAᵀ = I. Hence (AB)ᵀ = (AB)⁻¹.


(12) A = Aᵀ ⇒ A² = AAᵀ ⇒ In = AAᵀ ⇒ Aᵀ = A⁻¹. Conversely, Aᵀ = A⁻¹ ⇒ In = AAᵀ ⇒ A = A²Aᵀ ⇒ A = InAᵀ ⇒ A = Aᵀ.

(13) Suppose A is both orthogonal and skew-symmetric. Then A² = A(-Aᵀ) = -AAᵀ = -In. Hence |A|² = |-In| = (-1)ⁿ|In| (by Corollary 3.4) = -1, if n is odd, a contradiction.

(14) A(A + In)ᵀ = A(Aᵀ + In) = AAᵀ + A = In + A (since A is orthogonal). Hence |In + A| = |A| |(A + In)ᵀ| = (-1)|A + In|, implying |A + In| = 0. Thus A + In is singular.

(15) Use part (2) of Theorem 6.7. To be a unit vector, the first column must be [±1, 0, 0]. To be a unit vector orthogonal to the first column, the second column must be [0, ±1, 0], etc.

(16) (a) By Theorem 6.5, we can expand the orthonormal set {u} to an orthonormal basis for Rn. Form the matrix using these basis vectors as rows. Then use part (1) of Theorem 6.7.

(b) [[√6/6, √6/3, √6/6], [√30/6, -√30/15, -√30/30], [0, √5/5, -2√5/5]] is one possible answer.

(17) Proof of the other half of part (1) of Theorem 6.7: the (i, j) entry of AAᵀ = (ith row of A) · (jth row of A), which equals 1 if i = j and 0 if i ≠ j (since the rows of A form an orthonormal set). Hence AAᵀ = In, and A is orthogonal.

Proof of part (2): A is orthogonal iff Aᵀ is orthogonal (by part (2) of Theorem 6.6) iff the rows of Aᵀ form an orthonormal basis for Rn (by part (1) of Theorem 6.7) iff the columns of A form an orthonormal basis for Rn.
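Part (1) of Theorem 6.7, as used in (17), can be checked concretely: a matrix with orthonormal rows satisfies AAᵀ = In. A small exact-arithmetic sketch (the sample matrix is hypothetical):

```python
from fractions import Fraction

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A sample orthogonal matrix: its rows are unit vectors and mutually orthogonal.
f = Fraction
A = [[f(3, 5), f(-4, 5)], [f(4, 5), f(3, 5)]]
I = [[1, 0], [0, 1]]

assert matmul(A, transpose(A)) == I   # rows orthonormal  =>  A A^T = In
assert matmul(transpose(A), A) == I   # columns orthonormal too, as part (2) says
print("orthogonal")
```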

(18) (a) ‖v‖² = v · v = Av · Av (by Theorem 6.9) = ‖Av‖². Then take square roots.

(b) Using part (a) and Theorem 6.9, we have (v · w)/(‖v‖ ‖w‖) = (Av · Aw)/(‖Av‖ ‖Aw‖), and so the cosines of the appropriate angles are equal.

(19) A detailed proof of this can be found in the beginning of the proof of Theorem 6.18 in Appendix A. (That portion of the proof uses no results beyond Section 6.1.)

(20) (a) Let Q1 and Q2 be the matrices whose columns are the vectors in B and C, respectively. Then Q1 is orthogonal by part (2) of Theorem 6.7. Also, Q1 (respectively, Q2) is the transition matrix from B (respectively, C) to standard coordinates. By Theorem 4.22, Q2⁻¹ is the transition matrix from standard coordinates to C. Theorem 4.21 then implies P = Q2⁻¹Q1. Hence, Q2 = Q1P⁻¹. By part (b) of Exercise 11, Q2 is orthogonal, since both P and Q1 are orthogonal. Thus C is an orthonormal basis by part (2) of Theorem 6.7.

(b) Let B = (b1, …, bk), where k = dim(V). Then the ith column of the transition matrix from B to C is [bi]C. Now, [bi]C · [bj]C = bi · bj (by Exercise 19, since C is an orthonormal basis). This equals 0 if i ≠ j and equals 1 if i = j, since B is orthonormal. Hence, the columns of the transition matrix form an orthonormal set of k vectors in Rk, and so this matrix is orthogonal by part (2) of Theorem 6.7.

(c) Let C = (c1, …, ck), where k = dim(V). Then ci · cj = [ci]B · [cj]B (by Exercise 19, since B is orthonormal) = (P[ci]B) · (P[cj]B) (by Theorem 6.9, since P is orthogonal) = [ci]C · [cj]C (since P is the transition matrix from B to C) = ei · ej, which equals 0 if i ≠ j and equals 1 if i = j. Hence C is orthonormal.

(21) First note that AᵀA is an n × n matrix. If i ≠ j, the (i, j) entry of AᵀA equals (ith row of Aᵀ) · (jth column of A) = (ith column of A) · (jth column of A) = 0, because the columns of A are orthogonal to each other. Thus, AᵀA is a diagonal matrix. Finally, the (i, i) entry of AᵀA equals (ith row of Aᵀ) · (ith column of A) = (ith column of A) · (ith column of A) = 1, because the ith column of A is a unit vector.

(22) (a) F

(b) T

(c) T

(d) F

(e) T

(f) T

(g) T

(h) F

(i) T

(j) T

Section 6.2

(1) (a) W⊥ = span({[2, 3]})
(b) W⊥ = span({[5, 2, -1], [0, 1, 2]})
(c) W⊥ = span({[2, 3, 7]})
(d) W⊥ = span({[3, -1, 4]})
(e) W⊥ = span({[-2, 5, -1]})
(f) W⊥ = span({[7, 1, -2, -3], [0, 4, -1, 2]})
(g) W⊥ = span({[3, -2, 4, 1]})

(2) (a) w1 = projW v = [-33/35, 111/35, 12/7]; w2 = [-2/35, -6/35, 2/7]

(b) w1 = projW v = [-17/9, -10/9, 14/9]; w2 = [26/9, -26/9, 13/9]

(c) w1 = projW v = [1/7, -3/7, -2/7]; w2 = [13/7, 17/7, -19/7]

(d) w1 = projW v = [-54/35, 4/35, 42/35, 92/35]; w2 = [19/35, 101/35, 63/35, -22/35]

(3) Use the orthonormal basis {i, j} for W.
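The decompositions in (2) all follow the same recipe: over an orthogonal basis {b1, …, bk} of W, w1 = projW v = Σ ((v · bi)/(bi · bi)) bi, and w2 = v - w1 ∈ W⊥. A sketch with hypothetical data (not the exercise's vectors):

```python
from fractions import Fraction

def proj(v, basis):
    """Orthogonal projection of v onto span(basis); basis must be orthogonal,
    and v and the basis vectors are given with integer entries here."""
    w = [Fraction(0)] * len(v)
    for b in basis:
        c = Fraction(sum(vi * bi for vi, bi in zip(v, b)),
                     sum(bi * bi for bi in b))
        w = [wi + c * bi for wi, bi in zip(w, b)]
    return w

# Hypothetical v and orthogonal basis for W.
v = [1, 2, 3]
basis = [[1, 1, 0], [1, -1, 1]]

w1 = proj(v, basis)                        # projW(v), which lies in W
w2 = [vi - wi for vi, wi in zip(v, w1)]    # v - projW(v), which lies in W-perp
for b in basis:
    # w2 is orthogonal to every basis vector of W.
    assert sum(x * y for x, y in zip(w2, b)) == 0
print(w1, w2)
```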


(4) (a) 3√129/43 (b) √3806/22 (c) 2√247/13 (d) 8√17/17

(5) (a) Orthogonal projection onto 3x + y + z = 0

(b) Orthogonal reflection through x - 2y - 2z = 0

(c) Neither. pL(x) = (x - 1)(x + 1)². E1 = span{[-7, 9, 5]}, E-1 = span{[-7, 2, 0], [11, 0, 2]}. Eigenspaces have wrong dimensions.

(d) Neither. pL(x) = (x - 1)²(x + 1). E1 = span{[-1, 4, 0], [-7, 0, 4]}, E-1 = span{[2, 1, 3]}. Eigenspaces are not orthogonal.

(6) (1/9)[[5, 2, -4], [2, 8, 2], [-4, 2, 5]]

(7) (1/7)[[-2, 3, -6], [3, 6, 2], [-6, 2, 3]]

(8) (a) [[1/3, -1/3, -1/3], [-1/3, 5/6, -1/6], [-1/3, -1/6, 5/6]] (b) This is straightforward.

(9) (a) Characteristic polynomial = x³ - 2x² + x
(b) Characteristic polynomial = x³ - x²
(c) Characteristic polynomial = x³ - x² - x + 1

(10) (a) (1/59)[[50, -21, -3], [-21, 10, -7], [-3, -7, 58]]
(b) (1/17)[[8, 6, -6], [6, 13, 4], [-6, 4, 13]]
(c) (1/9)[[2, 2, 3, -1], [2, 8, 0, 2], [3, 0, 6, -3], [-1, 2, -3, 2]]
(d) (1/15)[[9, 3, -3, -6], [3, 1, -1, -2], [-3, -1, 1, 2], [-6, -2, 2, 4]]

(11) W1⊥ = W2⊥ ⇒ (W1⊥)⊥ = (W2⊥)⊥ ⇒ W1 = W2 (by Corollary 6.14).

(12) Let v ∈ W2⊥ and let w ∈ W1. Then w ∈ W2, and so v · w = 0, which implies v ∈ W1⊥.

(13) Both parts rely on the uniqueness assertion just before Example 6 and the facts in the box just before Example 7 in Section 6.2, and the equation v = v + 0.


(14) v ∈ W⊥ ⇒ projW v = 0 (see Exercise 13) ⇒ minimum distance between P and W = ‖v‖ (by Theorem 6.17). Conversely, suppose ‖v‖ is the minimum distance from P to W. Let w1 = projW v and w2 = v - projW v. Then ‖v‖ = ‖w2‖, by Theorem 6.17. Hence ‖v‖² = ‖w1 + w2‖² = (w1 + w2) · (w1 + w2) = w1 · w1 + 2w1 · w2 + w2 · w2 = ‖w1‖² + ‖w2‖² (since w1 ⊥ w2). Subtracting ‖v‖² = ‖w2‖² from both sides yields 0 = ‖w1‖², implying w1 = 0. Hence v = w2 ∈ W⊥.

(15) (a) p1 + p2 (b) cp1
To prove each part, just use the definition of projection and properties of the dot product.

(16) Let A ∈ V, B ∈ W. Then A · B = Σ_{i=1}^{n} Σ_{j=1}^{n} aij bij
= Σ_{i=2}^{n} Σ_{j=1}^{i-1} aij bij + Σ_{i=1}^{n} aii bii + Σ_{i=1}^{n-1} Σ_{j=i+1}^{n} aij bij
= Σ_{i=2}^{n} Σ_{j=1}^{i-1} aij bij + 0 + Σ_{i=1}^{n-1} Σ_{j=i+1}^{n} aji(-bji) (since bii = 0)
= Σ_{i=2}^{n} Σ_{j=1}^{i-1} aij bij - Σ_{j=2}^{n} Σ_{i=1}^{j-1} aji bji = 0. Now use Exercise 12(b) in Section 4.6 and follow the hint in the text.

(17) Let u = a/‖a‖. Then {u} is an orthonormal basis for W. Hence projW b = (b · u)u = (b · (a/‖a‖))(a/‖a‖) = ((b · a)/‖a‖²)a = proj_a b, according to Section 1.2.

(18) Let S be a spanning set for W. Clearly, if v ∈ W⊥ and u ∈ S, then v · u = 0. Conversely, suppose v · u = 0 for all u ∈ S. Let w ∈ W. Then w = a1u1 + ⋯ + anun for some u1, …, un ∈ S. Hence v · w = a1(v · u1) + ⋯ + an(v · un) = 0 + ⋯ + 0 = 0. Thus v ∈ W⊥.

(19) First, show W ⊆ (W⊥)⊥. Let w ∈ W. We need to show that w ∈ (W⊥)⊥. That is, we must prove that w ⊥ v for all v ∈ W⊥. But, by the definition of W⊥, w · v = 0, since w ∈ W, which completes this half of the proof. Next, prove W = (W⊥)⊥. By Corollary 6.13, dim(W) = n - dim(W⊥) = n - (n - dim((W⊥)⊥)) = dim((W⊥)⊥). Thus, by Theorem 4.16, W = (W⊥)⊥.

(20) The proof that L is a linear operator is straightforward. To show W⊥ = ker(L), we first prove that W⊥ ⊆ ker(L). If v ∈ W⊥, then since v · w = 0 for every w ∈ W, L(v) = projW(v) = (v · u1)u1 + ⋯ + (v · uk)uk = 0u1 + ⋯ + 0uk = 0. Hence, v ∈ ker(L).

Next, we prove that range(L) = W. First, note that if w ∈ W, then w = w + 0, where w ∈ W and 0 ∈ W⊥. Hence, by the statements in the text just before and just after Example 6 in Section 6.2, L(w) = projW w = w. Therefore, every w in W is in the range of L. Similarly, the same statements surrounding Example 6 imply projW v ∈ W for every v ∈ Rn. Hence, range(L) = W.


Finally, by the Dimension Theorem, we have dim(ker(L)) = n - dim(range(L)) = n - dim(W) = dim(W⊥). Since W⊥ ⊆ ker(L), Theorem 4.16 implies that W⊥ = ker(L).

(21) Since for each v ∈ ker(L) we have Av = 0, each row of A is orthogonal to v. Hence, ker(L) ⊆ (row space of A)⊥. Also, dim(ker(L)) = n - rank(A) (by part (2) of Theorem 5.9) = n - dim(row space of A) = dim((row space of A)⊥) (by Corollary 6.13). Then apply Theorem 4.16.

(22) Suppose v ∈ (ker(L))⊥ and T(v) = 0. Then L(v) = 0, so v ∈ ker(L). Apply Theorem 6.11 to show ker(T) = {0}.

(23) First, a ∈ W⊥ by the statement just after Example 6 in Section 6.2 of the text. Also, since projW v ∈ W and w ∈ W (since T is in W), we have b = (projW v) - w ∈ W. Finally, ‖v - w‖² = ‖v - projW v + (projW v) - w‖² = ‖a + b‖² = (a + b) · (a + b) = ‖a‖² + ‖b‖² (since a · b = 0) ≥ ‖a‖² = ‖v - projW v‖².

(24) (a) Mimic the proof of Theorem 6.11.

(b) Mimic the proofs of Theorem 6.12 and Corollary 6.13.

(25) Note that, for any v ∈ Rk, if we consider v to be a k × 1 matrix, then vᵀv is the 1 × 1 matrix whose single entry is v · v. Thus, we can equate v · v with vᵀv.

(a) v · (Aw) = vᵀ(Aw) = (vᵀA)w = (Aᵀv)ᵀw = (Aᵀv) · w

(b) Let v ∈ ker(L2). We must show that v is orthogonal to every vector in range(L1). An arbitrary element of range(L1) is of the form L1(w), for some w ∈ Rn. Now, v · L1(w) = L2(v) · w (by part (a)) = 0 · w (because v ∈ ker(L2)) = 0. Hence, v ∈ (range(L1))⊥.

(c) By Theorem 5.9, dim(ker(L2)) = m - rank(Aᵀ) = m - rank(A) (by Corollary 5.11). Also, dim(range(L1)⊥) = m - dim(range(L1)) (by Corollary 6.13) = m - rank(A) (by Theorem 5.9). Hence, dim(ker(L2)) = dim(range(L1)⊥). This, together with part (b) and Theorem 4.16, completes the proof.

(d) The row space of A = the column space of Aᵀ = range(L2). Switching the roles of A and Aᵀ in parts (a), (b), and (c), we see that (range(L2))⊥ = ker(L1). Taking the orthogonal complement of both sides of this equality and using Corollary 6.14 yields range(L2) = (ker(L1))⊥, which completes the proof.

(26) (a) T

(b) T

(c) F

(d) F

(e) T

(f) F

(g) T

(h) T

(i) T

(j) F

(k) T

(l) T


Section 6.3

(1) Parts (a) and (g) are symmetric, because the matrix for L is symmetric.

Part (b) is not symmetric, because the matrix for L with respect to the standard basis is not symmetric.

Parts (c), (d), and (f) are symmetric, since L is orthogonally diagonalizable.

Part (e) is not symmetric, since L is not diagonalizable, and hence not orthogonally diagonalizable.

(2) (a) (1/25)[[-7, 24], [24, 7]]

(b) (1/11)[[11, -6, 6], [-6, 18, 0], [6, 0, 4]]

(c) (1/49)[[58, -6, -18], [-6, 53, 12], [-18, 12, 85]]

(d) (1/169)[[-119, -72, -96, 0], [-72, 119, 0, 96], [-96, 0, 119, -72], [0, 96, -72, -119]]

(3) (a) B = ((1/13)[5, 12], (1/13)[-12, 5]), P = (1/13)[[5, -12], [12, 5]], D = [[0, 0], [0, 169]]

(b) B = ((1/5)[4, 3], (1/5)[3, -4]), P = (1/5)[[4, 3], [3, -4]], D = [[3, 0], [0, -1]]

(c) B = ((1/√2)[-1, 1, 0], (1/(3√2))[1, 1, 4], (1/3)[-2, -2, 1]) (other bases are possible, since E1 is two-dimensional), P = [[-1/√2, 1/(3√2), -2/3], [1/√2, 1/(3√2), -2/3], [0, 4/(3√2), 1/3]], D = [[1, 0, 0], [0, 1, 0], [0, 0, 3]]. Another possibility is B = ((1/3)[-1, 2, 2], (1/3)[2, -1, 2], (1/3)[2, 2, -1]), P = (1/3)[[-1, 2, -2], [2, -1, -2], [2, 2, 1]], D = [[1, 0, 0], [0, 1, 0], [0, 0, 3]]

(d) B = ((1/9)[-8, 1, 4], (1/9)[4, 4, 7], (1/9)[1, -8, 4]), P = (1/9)[[-8, 4, 1], [1, 4, -8], [4, 7, 4]], D = [[0, 0, 0], [0, -3, 0], [0, 0, 9]]


(e) B = ((1/√14)[3, 2, 1, 0], (1/√14)[-2, 3, 0, 1], (1/√14)[1, 0, -3, 2], (1/√14)[0, -1, 2, 3]), P = (1/√14)[[3, -2, 1, 0], [2, 3, 0, -1], [1, 0, -3, 2], [0, 1, 2, 3]], D = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, -3, 0], [0, 0, 0, 5]]

(f) B = ((1/√26)[4, 1, 3], (1/5)[3, 0, -4], (1/(5√26))[4, -25, 3]) (other bases are possible, since E-13 is two-dimensional), P = [[4/√26, 3/5, 4/(5√26)], [1/√26, 0, -25/(5√26)], [3/√26, -4/5, 3/(5√26)]], D = [[13, 0, 0], [0, -13, 0], [0, 0, -13]]

(g) B = ((1/√5)[1, 2, 0], (1/√6)[-2, 1, 1], (1/√30)[2, -1, 5]), P = [[1/√5, -2/√6, 2/√30], [2/√5, 1/√6, -1/√30], [0, 1/√6, 5/√30]], D = [[15, 0, 0], [0, 15, 0], [0, 0, -15]]

(4) (a) C = ((1/19)[-10, 15, 6], (1/19)[15, 6, 10]), B = ((1/(19√5))[20, 27, 26], (1/(19√5))[35, -24, -2]), A = [[-2, 2], [2, 1]], P = (1/√5)[[1, -2], [2, 1]], D = [[2, 0], [0, -3]]

(b) C = ((1/2)[1, -1, 1, 1], (1/2)[-1, 1, 1, 1], (1/2)[1, 1, 1, -1]), B = ((1/(2√5))[3, -1, 3, 1], (1/√30)[-2, 4, 3, 1], (1/√6)[-1, -1, 0, 2]), A = [[2, -1, 2], [-1, 2, 2], [2, 2, -1]], P = [[2/√5, -1/√30, 1/√6], [0, 5/√30, 1/√6], [1/√5, 2/√30, -2/√6]], D = [[3, 0, 0], [0, 3, 0], [0, 0, -3]]

(5) (a) (1/25)[[23, -36], [-36, 2]] (b) (1/17)[[353, -120], [-120, 514]] (c) (1/3)[[11, 4, -4], [4, 17, -8], [-4, -8, 17]]

(6) For example, the matrix A in Example 7 of Section 5.6 is diagonalizable but not symmetric, and hence not orthogonally diagonalizable.


(7) $\tfrac{1}{2}\begin{bmatrix} a+c+\sqrt{(a-c)^2+4b^2} & 0 \\ 0 & a+c-\sqrt{(a-c)^2+4b^2} \end{bmatrix}$ (Note: You only need to compute the eigenvalues. You do not need the corresponding eigenvectors.)
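The closed form above can be checked against a numerical eigensolver (NumPy assumed; the values $a=1$, $b=2$, $c=5$ are an arbitrary sample, not from the exercise):

```python
import numpy as np

# Eigenvalues of the symmetric matrix [[a, b], [b, c]] via the closed form
# lambda = ((a + c) +/- sqrt((a - c)^2 + 4 b^2)) / 2, compared with eigvalsh.
a, b, c = 1.0, 2.0, 5.0
r = np.sqrt((a - c)**2 + 4 * b**2)
closed_form = sorted([(a + c - r) / 2, (a + c + r) / 2])

eigs = sorted(np.linalg.eigvalsh(np.array([[a, b], [b, c]])))
assert np.allclose(closed_form, eigs)
```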

(8) (a) Since $L$ is diagonalizable and $\lambda = 1$ is the only eigenvalue, $E_1 = \mathbb{R}^n$. Hence, for every $v \in \mathbb{R}^n$, $L(v) = 1v = v$. Therefore, $L$ is the identity operator.

(b) $L$ must be the zero linear operator. Since $L$ is diagonalizable, the eigenspace for $0$ must be all of $\mathcal{V}$.

(9) Let $L_1$ and $L_2$ be symmetric operators on $\mathbb{R}^n$. Then, note that for all $x, y \in \mathbb{R}^n$, $(L_2 \circ L_1)(x) \cdot y = L_2(L_1(x)) \cdot y = L_1(x) \cdot L_2(y) = x \cdot (L_1 \circ L_2)(y)$. Now $L_2 \circ L_1$ is symmetric iff for all $x, y \in \mathbb{R}^n$, $x \cdot (L_1 \circ L_2)(y) = (L_2 \circ L_1)(x) \cdot y = x \cdot (L_2 \circ L_1)(y)$ iff for all $y \in \mathbb{R}^n$, $(L_1 \circ L_2)(y) = (L_2 \circ L_1)(y)$ iff $L_1 \circ L_2 = L_2 \circ L_1$. (This argument uses the fact that if $x \cdot y = x \cdot z$ for all $x \in \mathbb{R}^n$, then $y = z$. Proof: Let $x = y - z$. Then $(y-z) \cdot y = (y-z) \cdot z \Longrightarrow (y-z) \cdot y - (y-z) \cdot z = 0 \Longrightarrow (y-z) \cdot (y-z) = 0 \Longrightarrow \|y-z\|^2 = 0 \Longrightarrow y - z = 0 \Longrightarrow y = z$.)
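The key step of Exercise 9, that a composition of symmetric operators is symmetric exactly when the operators commute, can be illustrated with matrices (NumPy assumed; the sample matrices below are arbitrary, not from the exercise):

```python
import numpy as np

# For symmetric A and B, (AB)^T = B^T A^T = BA, so AB is symmetric iff AB = BA.
A = np.array([[2., 1.], [1., 3.]])
B = np.array([[0., 1.], [1., 5.]])   # symmetric, but does not commute with A
C = A @ A                            # symmetric, and commutes with A

assert np.allclose((A @ B).T, B @ A)        # (AB)^T = BA always
assert not np.allclose(A @ B, (A @ B).T)    # AB not symmetric here
assert np.allclose(A @ C, (A @ C).T)        # AC symmetric, since AC = CA
```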

(10) (i) $\Rightarrow$ (ii): This is Exercise 6 in Section 3.4.

(ii) $\Rightarrow$ (iii): Since $A$ and $B$ are symmetric, both are orthogonally similar to diagonal matrices $D_1$ and $D_2$, respectively, by Theorems 6.18 and 6.20. Now, since $A$ and $B$ have the same characteristic polynomial, and hence the same eigenvalues with the same algebraic multiplicities (and geometric multiplicities, since $A$ and $B$ are diagonalizable), we can assume $D_1 = D_2$ (by listing the eigenvectors in an appropriate order when diagonalizing). Thus, if $D_1 = P_1^{-1}AP_1$ and $D_2 = P_2^{-1}BP_2$ for some orthogonal matrices $P_1$ and $P_2$, then $(P_1P_2^{-1})^{-1}A(P_1P_2^{-1}) = B$. Note that $P_1P_2^{-1}$ is orthogonal by Exercise 11, Section 6.1, so $A$ and $B$ are orthogonally similar.

(iii) $\Rightarrow$ (i): Trivial.

(11) $L(v_1) \cdot v_2 = v_1 \cdot L(v_2) \Longrightarrow (\lambda_1 v_1) \cdot v_2 = v_1 \cdot (\lambda_2 v_2) \Longrightarrow (\lambda_2 - \lambda_1)(v_1 \cdot v_2) = 0 \Longrightarrow v_1 \cdot v_2 = 0$, since $\lambda_2 - \lambda_1 \neq 0$.

(12) Suppose $A$ is orthogonal, and $\lambda$ is an eigenvalue for $A$ with corresponding unit eigenvector $u$. Then $1 = u \cdot u = Au \cdot Au$ (by Theorem 6.9) $= (\lambda u) \cdot (\lambda u) = \lambda^2(u \cdot u) = \lambda^2$. Hence $\lambda = \pm 1$.

Conversely, suppose that $A$ is symmetric with all eigenvalues equal to $\pm 1$. Let $P$ be an orthogonal matrix with $P^{-1}AP = D$, a diagonal matrix with all entries equal to $\pm 1$ on the main diagonal. Note that $D^2 = I_n$. Then, since $A = PDP^{-1}$, $AA^T = AA = PD^2P^{-1} = PI_nP^{-1} = I_n$. Hence $A$ is orthogonal.
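The converse direction of Exercise 12 can be demonstrated concretely (NumPy assumed; the angle below is an arbitrary sample): build a symmetric matrix with eigenvalues $1$ and $-1$ and confirm it is orthogonal.

```python
import numpy as np

# A = P D P^T with orthogonal P and D = diag(1, -1); by Exercise 12,
# such a symmetric matrix must itself be orthogonal.
theta = 0.7
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
D = np.diag([1.0, -1.0])
A = P @ D @ P.T

assert np.allclose(A, A.T)               # symmetric
assert np.allclose(A @ A.T, np.eye(2))   # orthogonal
```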


(13) (a) T (b) F (c) T (d) T (e) T (f) T

Chapter 6 Review Exercises

(1) (a) $[v]_B = [-1, 4, 2]$ (b) $[v]_B = [3, 2, 3]$

(2) (a) $\{[1,-1,-1,1],\ [1,1,1,1]\}$ (b) $\{[1,3,4,3,1],\ [-1,-1,0,1,1],\ [-1,3,-4,3,-1]\}$

(3) $\{[6,3,-6],\ [3,6,6],\ [2,-2,1]\}$

(4) (a) $\{[4,7,0,4],\ [2,0,1,-2],\ [29,-28,-18,20],\ [0,4,-14,-7]\}$

(b) $\left\{\tfrac{1}{9}[4,7,0,4],\ \tfrac{1}{3}[2,0,1,-2],\ \tfrac{1}{9\sqrt{29}}[29,-28,-18,20],\ \tfrac{1}{3\sqrt{29}}[0,4,-14,-7]\right\}$

(c) $\begin{bmatrix} \frac{4}{9} & \frac{7}{9} & 0 & \frac{4}{9} \\ \frac{2}{3} & 0 & \frac{1}{3} & -\frac{2}{3} \\ \frac{\sqrt{29}}{9} & -\frac{28}{9\sqrt{29}} & -\frac{2}{\sqrt{29}} & \frac{20}{9\sqrt{29}} \\ 0 & \frac{4}{3\sqrt{29}} & -\frac{14}{3\sqrt{29}} & -\frac{7}{3\sqrt{29}} \end{bmatrix}$

(5) Using the hint, note that $v \cdot w = v \cdot (A^TAw)$. Now, $A^TAe_j$ is the $j$th column of $A^TA$, so $e_i \cdot (A^TAe_j)$ is the $(i,j)$ entry of $A^TA$. Since this equals $e_i \cdot e_j$, we see that $A^TA = I_n$. Hence, $A^T = A^{-1}$, and $A$ is orthogonal.

(6) (a) $-A(A - I_n)^T = -A(A^T - I_n^T) = -AA^T + A = -I_n + A = A - I_n$. Therefore, $|A - I_n| = \left|-A(A - I_n)^T\right| = |-A|\,\left|(A - I_n)^T\right| = (-1)^n|A|\,|A - I_n| = -|A - I_n|$, since $n$ is odd and $|A| = 1$. This implies that $|A - I_n| = 0$, and so $A - I_n$ is singular.

(b) By part (a), $A - I_n$ is singular. Hence, there is a nonzero vector $v$ such that $(A - I_n)v = 0$. Thus, $Av = I_nv = v$, and so $v$ is an eigenvector for the eigenvalue $\lambda = 1$.

(c) Suppose $v$ is a unit eigenvector for $A$ corresponding to $\lambda = 1$. Expand the set $\{v\}$ to an orthonormal basis $B = \{w, x, v\}$ for $\mathbb{R}^3$. Notice that we listed the vector $v$ last. Let $Q$ be the orthogonal matrix whose columns are $w$, $x$, and $v$, in that order. Let $P = Q^TAQ$. Now, the last column of $P$ equals $Q^TAv = Q^Tv$ (since $Av = v$) $= e_3$, since $v$ is a unit vector orthogonal to the first two rows of $Q^T$. Next, since $P$ is the product of orthogonal matrices, $P$ is orthogonal, and since $|Q^T| = |Q| = \pm 1$, we must have $|P| = |A| = 1$. Also, the first two columns of $P$ are orthogonal to its last column, $e_3$, making the $(3,1)$ and $(3,2)$ entries of $P$ equal to $0$. Since the first column of $P$ is a unit vector with third coordinate $0$, it can be expressed as $[\cos\theta, \sin\theta, 0]$, for some value of $\theta$, with $0 \le \theta < 2\pi$. The second


column of $P$ must be a unit vector with third coordinate $0$, orthogonal to the first column. The only possibilities are $\pm[-\sin\theta, \cos\theta, 0]$. Choosing the minus sign makes $|P| = -1$, so the second column must be $[-\sin\theta, \cos\theta, 0]$. Thus, $P = Q^TAQ$ has the desired form.

(d) The linear operator $L$ on $\mathbb{R}^3$ given by $L(v) = Av$ has a matrix with respect to the standard basis that is orthogonal with determinant $1$. But, by part (c) and the Orthogonal Diagonalization Method, the matrix for $L$ with respect to the ordered orthonormal basis $B = (w, x, v)$ is $\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$. According to Table 5.1 in Section 5.2, this is a counterclockwise rotation around the $z$-axis through the angle $\theta$. But since we are working in $B$-coordinates, the $z$-axis corresponds to the vector $v$, the third vector in $B$. Because the basis $B$ is orthonormal, the plane of the rotation is perpendicular to the axis of rotation.

(e) The axis of rotation is in the direction of the vector $[-1, 3, 3]$. The angle of rotation is $\theta \approx 278°$. This is a counterclockwise rotation about the vector $[-1, 3, 3]$ as you look down from the point $(-1, 3, 3)$ toward the plane through the origin spanned by $[6, 1, 1]$ and $[0, 1, -1]$.

(f) Let $G$ be the matrix with respect to the standard basis for any chosen orthogonal reflection through a plane in $\mathbb{R}^3$. Then $G$ is orthogonal, $|G| = -1$ (since it has eigenvalues $-1, 1, 1$), and $G^2 = I_3$. Let $C = AG$. Then $C$ is orthogonal since it is the product of orthogonal matrices. Since $|A| = -1$, it follows that $|C| = 1$. Thus, $A = AG^2 = CG$, where $C$ represents a rotation about some axis in $\mathbb{R}^3$ by part (d) of this exercise.

(7) (a) $w_1 = \operatorname{proj}_{\mathcal{W}}v = [0, -9, 18]$; $w_2 = \operatorname{proj}_{\mathcal{W}^\perp}v = [2, 16, 8]$

(b) $w_1 = \operatorname{proj}_{\mathcal{W}}v = \left[\tfrac{7}{2}, \tfrac{23}{2}, \tfrac{5}{2}, -\tfrac{3}{2}\right]$; $w_2 = \operatorname{proj}_{\mathcal{W}^\perp}v = \left[-\tfrac{3}{2}, -\tfrac{3}{2}, \tfrac{9}{2}, -\tfrac{15}{2}\right]$
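In both parts of Exercise 7, the two components $w_1 = \operatorname{proj}_{\mathcal{W}}v$ and $w_2 = \operatorname{proj}_{\mathcal{W}^\perp}v$ must be orthogonal; the printed answers can be checked exactly with rational arithmetic (standard-library sketch, not part of the manual):

```python
from fractions import Fraction as F

# Printed answers to Review Exercise 7(a) and 7(b).
w1a, w2a = [0, -9, 18], [2, 16, 8]
w1b = [F(7, 2), F(23, 2), F(5, 2), F(-3, 2)]
w2b = [F(-3, 2), F(-3, 2), F(9, 2), F(-15, 2)]

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
assert dot(w1a, w2a) == 0    # components orthogonal in (a)
assert dot(w1b, w2b) == 0    # components orthogonal in (b)
```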

(8) (a) Distance $\approx 10.141294$ (b) Distance $\approx 14.248050$

(9) $\{[1, 0, -2, -2],\ [0, 1, -2, 2]\}$

(10) $\tfrac{1}{3}\begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix}$ (11) $\tfrac{1}{7}\begin{bmatrix} 3 & 6 & -2 \\ 6 & -2 & 3 \\ -2 & 3 & 6 \end{bmatrix}$

(12) (a) $p_L(x) = x(x-1)^2 = x^3 - 2x^2 + x$ (b) $p_L(x) = x^2(x-1) = x^3 - x^2$


(13) (a) Not a symmetric operator, since the matrix for $L$ with respect to the standard basis is $\begin{bmatrix} 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$, which is not symmetric.

(b) Symmetric operator, since the matrix for $L$ is orthogonally diagonalizable.

(c) Symmetric operator, since the matrix for $L$ with respect to the standard basis is symmetric.

(14) (a) $B = \left(\tfrac{1}{\sqrt{6}}[-1,-2,1],\ \tfrac{1}{\sqrt{30}}[1,2,5],\ \tfrac{1}{\sqrt{5}}[-2,1,0]\right)$;
$P = \begin{bmatrix} -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{30}} & -\frac{2}{\sqrt{5}} \\ -\frac{2}{\sqrt{6}} & \frac{2}{\sqrt{30}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{6}} & \frac{5}{\sqrt{30}} & 0 \end{bmatrix}$; $D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -1 \end{bmatrix}$

(b) $B = \left(\tfrac{1}{\sqrt{3}}[1,-1,1],\ \tfrac{1}{\sqrt{2}}[1,1,0],\ \tfrac{1}{\sqrt{6}}[-1,1,2]\right)$;
$P = \begin{bmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} \\ -\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & 0 & \frac{2}{\sqrt{6}} \end{bmatrix}$; $D = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}$

(15) Any diagonalizable $4 \times 4$ matrix that is not symmetric works. For example, choose any non-diagonal upper triangular $4 \times 4$ matrix with $4$ distinct entries on the main diagonal.

(16) Because $(A^TA)^T = A^T(A^T)^T = A^TA$, we see that $A^TA$ is symmetric. Therefore $A^TA$ is orthogonally diagonalizable by Theorems 6.18 and 6.20.

(17) (a) T (b) T (c) T (d) T (e) F (f) F (g) T (h) T (i) T (j) T (k) T (l) T (m) F (n) T (o) T (p) T (q) T (r) F (s) F (t) T (u) T (v) F (w) T


Chapter 7

Section 7.1

(1) (a) $[1+4i,\ 1+i,\ 6-i]$ (b) $[-12-32i,\ -7+30i,\ 53-29i]$ (c) $[5+i,\ 2-i,\ 3i]$ (d) $[-24-12i,\ -28-8i,\ -32i]$ (e) $1+28i$ (f) $3+77i$

(2) Let $z_1 = [a_1, \ldots, a_n]$, and let $z_2 = [b_1, \ldots, b_n]$.

(a) Part (1): $\overline{z_1 \cdot z_2} = \overline{\sum_{i=1}^n a_i\overline{b_i}} = \sum_{i=1}^n \overline{a_i\overline{b_i}} = \sum_{i=1}^n \overline{a_i}\,b_i = \sum_{i=1}^n b_i\overline{a_i} = z_2 \cdot z_1$

Part (2): $z_1 \cdot z_1 = \sum_{i=1}^n a_i\overline{a_i} = \sum_{i=1}^n |a_i|^2 \ge 0$. Also, $z_1 \cdot z_1 = \sum_{i=1}^n |a_i|^2 = \left(\sqrt{\sum_{i=1}^n |a_i|^2}\right)^2 = \|z_1\|^2$

(b) $\overline{k}(z_1 \cdot z_2) = \overline{k}\sum_{i=1}^n a_i\overline{b_i} = \sum_{i=1}^n a_i\overline{k}\,\overline{b_i} = \sum_{i=1}^n a_i\overline{kb_i} = z_1 \cdot (kz_2)$

(3) (a) $\begin{bmatrix} 11+4i & -4-2i \\ 2-4i & 12 \end{bmatrix}$ (b) $\begin{bmatrix} 1-i & 2i & 6-4i \\ 0 & 3-i & 5 \\ 10i & 0 & 7+3i \end{bmatrix}$ (c) $\begin{bmatrix} 1-i & 0 & 10i \\ 2i & 3-i & 0 \\ 6-4i & 5 & 7+3i \end{bmatrix}$

(d) $\begin{bmatrix} -3-15i & -3 & 9i \\ 9-6i & 0 & 3+12i \end{bmatrix}$ (e) $\begin{bmatrix} -7+6i & -9-i \\ -3-3i & 4-6i \end{bmatrix}$ (f) $\begin{bmatrix} 1+40i & -4-14i \\ 13-50i & 23+21i \end{bmatrix}$

(g) $\begin{bmatrix} -12+6i & -16-3i & -11+41i \\ -15+23i & -20+5i & -61+15i \end{bmatrix}$ (h) $\begin{bmatrix} 86-33i & 6-39i \\ 61+36i & 13+9i \end{bmatrix}$ (i) $\begin{bmatrix} 4+36i & -5+39i \\ 1-7i & -6-4i \\ 5+40i & -7-5i \end{bmatrix}$

(j) $\begin{bmatrix} 40+58i & 50i & -20+80i \\ 4+8i & 8-6i & -20 \\ 56-10i & 50+10i & 80+102i \end{bmatrix}$

(4) (a) The $(i,j)$ entry of $(kZ)^*$ $=$ the $(i,j)$ entry of $\overline{kZ}^{\,T}$ $=$ the $(j,i)$ entry of $\overline{kZ}$ $= \overline{kz_{ji}} = \overline{k}\,\overline{z_{ji}} = \overline{k}\cdot(\text{the } (j,i) \text{ entry of } \overline{Z}) = \overline{k}\cdot(\text{the } (i,j) \text{ entry of } \overline{Z}^{\,T}) = \overline{k}\cdot(\text{the } (i,j) \text{ entry of } Z^*)$

(b) Let $Z$ be an $m \times n$ complex matrix and let $W$ be an $n \times p$ matrix. Note that $(ZW)^*$ and $W^*Z^*$ are both $p \times m$ matrices. We present two methods of proof that $(ZW)^* = W^*Z^*$.

First method: Notice that the rule $(AB)^T = B^TA^T$ holds for


complex matrices, since matrix multiplication for complex matrices is defined in exactly the same manner as for real matrices, and hence the proof of Theorem 1.16 is valid for complex matrices as well. Thus we have: $(ZW)^* = \overline{ZW}^{\,T} = \left(\overline{Z}\,\overline{W}\right)^T$ (by part (4) of Theorem 7.2) $= \overline{W}^{\,T}\,\overline{Z}^{\,T} = W^*Z^*$.

Second method: We begin by computing the $(i,j)$ entry of $(ZW)^*$. Now, the $(j,i)$ entry of $ZW$ is $z_{j1}w_{1i} + \cdots + z_{jn}w_{ni}$. Hence, the $(i,j)$ entry of $(ZW)^* = \overline{(j,i) \text{ entry of } ZW} = \overline{z_{j1}w_{1i} + \cdots + z_{jn}w_{ni}} = \overline{z_{j1}}\,\overline{w_{1i}} + \cdots + \overline{z_{jn}}\,\overline{w_{ni}}$. Next, we compute the $(i,j)$ entry of $W^*Z^*$ to show that it equals the $(i,j)$ entry of $(ZW)^*$. Let $A = Z^*$, an $n \times m$ matrix, and $B = W^*$, a $p \times n$ matrix. Then $a_{kl} = \overline{z_{lk}}$ and $b_{kl} = \overline{w_{lk}}$. Hence, the $(i,j)$ entry of $W^*Z^*$ $=$ the $(i,j)$ entry of $BA = b_{i1}a_{1j} + \cdots + b_{in}a_{nj} = \overline{w_{1i}}\,\overline{z_{j1}} + \cdots + \overline{w_{ni}}\,\overline{z_{jn}} = \overline{z_{j1}}\,\overline{w_{1i}} + \cdots + \overline{z_{jn}}\,\overline{w_{ni}}$. Therefore, $(ZW)^* = W^*Z^*$.

(5) (a) Skew-Hermitian (b) Neither (c) Hermitian (d) Skew-Hermitian (e) Hermitian

(6) (a) $H^* = \left(\tfrac{1}{2}(Z + Z^*)\right)^* = \tfrac{1}{2}(Z + Z^*)^*$ (by part (3) of Theorem 7.2) $= \tfrac{1}{2}(Z^* + Z)$ (by part (2) of Theorem 7.2) $= H$. A similar calculation shows that $K$ is skew-Hermitian.

(b) Part (a) proves existence for the decomposition by providing specific matrices $H$ and $K$. For uniqueness, suppose $Z = H + K$, where $H$ is Hermitian and $K$ is skew-Hermitian. Our goal is to show that $H$ and $K$ must satisfy the formulas from part (a). Now $Z^* = (H + K)^* = H^* + K^* = H - K$. Hence $Z + Z^* = (H + K) + (H - K) = 2H$, which gives $H = \tfrac{1}{2}(Z + Z^*)$. Similarly, $Z - Z^* = 2K$, so $K = \tfrac{1}{2}(Z - Z^*)$.

(7) (a) $HJ$ is Hermitian iff $(HJ)^* = HJ$ iff $J^*H^* = HJ$ iff $JH = HJ$ (since $H$ and $J$ are Hermitian).

(b) Use induction on $k$. Base Step ($k = 1$): $H^1 = H$ is given to be Hermitian. Inductive Step: Assume $H^k$ is Hermitian and prove that $H^{k+1}$ is Hermitian. But $H^kH = H^{k+1} = H^{1+k} = HH^k$ (by part (1) of Theorem 1.15), and so by part (a), $H^{k+1} = H^kH$ is Hermitian.

(c) $(P^*HP)^* = P^*H^*(P^*)^* = P^*HP$ (since $H$ is Hermitian).

(8) $(AA^*)^* = (A^*)^*A^* = AA^*$ (and $A^*A$ is handled similarly).

(9) Let $Z$ be Hermitian. Then $ZZ^* = ZZ = Z^*Z$. Similarly, if $Z$ is skew-Hermitian, $ZZ^* = Z(-Z) = -(ZZ) = (-Z)Z = Z^*Z$.

(10) Let $Z$ be normal. Consider $H_1 = \tfrac{1}{2}(Z + Z^*)$ and $H_2 = \tfrac{1}{2}(Z - Z^*)$. Note that $H_1$ is Hermitian and $H_2$ is skew-Hermitian (by Exercise 6(a)). Clearly $Z = H_1 + H_2$.


Now, $H_1H_2 = \tfrac{1}{2}\cdot\tfrac{1}{2}(Z + Z^*)(Z - Z^*) = \tfrac{1}{4}(Z^2 + Z^*Z - ZZ^* - (Z^*)^2) = \tfrac{1}{4}(Z^2 - (Z^*)^2)$ (since $Z$ is normal). A similar computation shows that $H_2H_1$ is also equal to $\tfrac{1}{4}(Z^2 - (Z^*)^2)$. Hence $H_1H_2 = H_2H_1$.

Conversely, let $H_1$ be Hermitian and $H_2$ be skew-Hermitian with $H_1H_2 = H_2H_1$. If $Z = H_1 + H_2$, then $ZZ^* = (H_1 + H_2)(H_1 + H_2)^* = (H_1 + H_2)(H_1^* + H_2^*)$ (by part (2) of Theorem 7.2) $= (H_1 + H_2)(H_1 - H_2)$ (since $H_1$ is Hermitian and $H_2$ is skew-Hermitian) $= H_1^2 + H_2H_1 - H_1H_2 - H_2^2 = H_1^2 - H_2^2$ (since $H_1H_2 = H_2H_1$). A similar argument shows $Z^*Z = H_1^2 - H_2^2$ also, and so $ZZ^* = Z^*Z$, and $Z$ is normal.

(11) (a) F (b) F (c) T (d) F (e) F (f) T

Section 7.2

(1) (a) $w = \tfrac{1}{5} + \tfrac{13}{5}i$, $z = \tfrac{28}{5} - \tfrac{3}{5}i$

(b) No solutions: After applying the row reduction method to the first 3 columns, the matrix changes to: $\left[\begin{array}{ccc|c} 1 & 1+i & 0 & 1-i \\ 0 & 0 & 1 & 3+4i \\ 0 & 0 & 0 & 4-2i \end{array}\right]$

(c) Matrix row reduces to $\left[\begin{array}{ccc|c} 1 & 0 & 4-3i & 2+5i \\ 0 & 1 & -i & 5+2i \\ 0 & 0 & 0 & 0 \end{array}\right]$; Solution set $= \{((2+5i) - (4-3i)c,\ (5+2i) + ic,\ c) \mid c \in \mathbb{C}\}$

(d) $w = 7+i$, $z = -6+5i$

(e) No solutions: After applying the row reduction method to the first column, the matrix changes to: $\left[\begin{array}{cc|c} 1 & 2+3i & 1+3i \\ 0 & 0 & 3+4i \end{array}\right]$

(f) Matrix row reduces to $\left[\begin{array}{ccc|c} 1 & 0 & -3+2i & -i \\ 0 & 1 & 4+7i & 2-5i \end{array}\right]$; Solution set $= \{(-i + (3-2i)c,\ (2-5i) - (4+7i)c,\ c) \mid c \in \mathbb{C}\}$

(2) (a) $|A| = 0$; $|A^*| = 0 = \overline{|A|}$ (b) $|A| = -15-23i$; $|A^*| = -15+23i = \overline{|A|}$ (c) $|A| = 3-5i$; $|A^*| = 3+5i = \overline{|A|}$

(3) Your computations may produce complex scalar multiples of the vectors given here, although they might not be immediately recognized as such.

(a) $p_A(x) = x^2 + (1-i)x - i = (x-i)(x+1)$; Eigenvalues $\lambda_1 = i$, $\lambda_2 = -1$; $E_i = \{c[1+i, 2] \mid c \in \mathbb{C}\}$; $E_{-1} = \{c[7+6i, 17] \mid c \in \mathbb{C}\}$.


(b) $p_A(x) = x^3 - 11x^2 + 44x - 34$; eigenvalues: $\lambda_1 = 5+3i$, $\lambda_2 = 5-3i$, $\lambda_3 = 1$; $E_{\lambda_1} = \{c[1-3i, 5, 1-3i] \mid c \in \mathbb{C}\}$; $E_{\lambda_2} = \{c[1+3i, 5, 1+3i] \mid c \in \mathbb{C}\}$; $E_{\lambda_3} = \{c[1, 2, 2] \mid c \in \mathbb{C}\}$.

(c) $p_A(x) = x^3 + (2-2i)x^2 - (1+4i)x - 2 = (x-i)^2(x+2)$; $\lambda_1 = i$, $\lambda_2 = -2$; $E_i = \{c[-3-2i, 0, 2] + d[1, 1, 0] \mid c, d \in \mathbb{C}\}$ and $E_{-2} = \{c[-1, i, 1] \mid c \in \mathbb{C}\}$.

(d) $p_A(x) = x^3 - 2x^2 - x + 2 + i(-2x^2 + 4x) = (x-i)^2(x-2)$; eigenvalues: $\lambda_1 = i$, $\lambda_2 = 2$; $E_i = \{c[1, 1, 0] \mid c \in \mathbb{C}\}$; $E_2 = \{c[i, 0, 1] \mid c \in \mathbb{C}\}$. (Note that $A$ is not diagonalizable.)

(4) (a) Let $A$ be the matrix from Exercise 3(a). $A$ is diagonalizable since $A$ has 2 distinct eigenvalues. $P = \begin{bmatrix} 1+i & 7+6i \\ 2 & 17 \end{bmatrix}$; $D = \begin{bmatrix} i & 0 \\ 0 & -1 \end{bmatrix}$.

(b) Let $A$ be the matrix from Exercise 3(d). The solution to Exercise 3(d) shows that the Diagonalization Method of Section 3.4 only produces two fundamental eigenvectors. Hence, $A$ is not diagonalizable by Step 4 of that method.

(c) Let $A$ be the matrix in Exercise 3(b). $p_A(x) = x^3 - 11x^2 + 44x - 34$. There are 3 eigenvalues, $5+3i$, $5-3i$, and $1$, each producing one complex fundamental eigenvector. Thus, there are three fundamental complex eigenvectors, and so $A$ is diagonalizable as a complex matrix. On the other hand, $A$ has only one real eigenvalue, $\lambda = 1$, and since $\lambda$ produces only one real fundamental eigenvector, $A$ is not diagonalizable as a real matrix.

(5) By the Fundamental Theorem of Algebra, the characteristic polynomial $p_A(x)$ of a complex $n \times n$ matrix $A$ must factor into $n$ linear factors. Since each root of $p_A(x)$ has multiplicity 1, there must be $n$ distinct roots. Each of these roots is an eigenvalue, and each eigenvalue yields at least one fundamental eigenvector. Thus, we have a total of $n$ fundamental eigenvectors, showing that $A$ is diagonalizable.

(6) (a) T (b) F (c) F (d) F

Section 7.3

(1) (a) Mimic the proof in Example 6 in Section 4.1 of the textbook, restricting the discussion to polynomial functions of degree $\le n$, and using complex-valued functions and complex scalars instead of their real counterparts.

(b) This is the complex analog of Theorem 1.11. Property (1) is proven for the real case just after the statement of Theorem 1.11 in Section 1.4 of the textbook. Generalize this proof to complex matrices. The other properties are similarly proved.


(2) (a) Linearly independent; dim = 2

(b) Not linearly independent; dim = 1. (Note: $[-3+6i, 3, 9i] = 3i[2+i, -i, 3]$)

(c) Not linearly independent; dim = 2. (Note: The matrix formed using the given vectors as columns row reduces to $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{bmatrix}$.)

(d) Not linearly independent; dim = 2. (Note: The matrix formed using the given vectors as columns row reduces to $\begin{bmatrix} 1 & 0 & i \\ 0 & 1 & -2i \\ 0 & 0 & 0 \end{bmatrix}$.)

(3) (a) Linearly independent; dim = 2. (Note: $[-i, 3+i, -1]$ is not a scalar multiple of $[2+i, -i, 3]$.)

(b) Linearly independent; dim = 2. (Note: $[-3+6i, 3, 9i]$ is not a real scalar multiple of $[2+i, -i, 3]$.)

(c) Not linearly independent; dim = 2. (Note: $\begin{bmatrix} 3 & 1 & 1 \\ -1 & 1 & -3 \\ 1 & -2 & 5 \\ 2 & 0 & 2 \\ 0 & 4 & -8 \\ -1 & 1 & -3 \end{bmatrix}$ row reduces to $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$.)

(d) Linearly independent; dim = 3. (Note: $\begin{bmatrix} 3 & 1 & 3 \\ -1 & 1 & 1 \\ 1 & -2 & -2 \\ 2 & 0 & 5 \\ 0 & 4 & 3 \\ -1 & 1 & -8 \end{bmatrix}$ row reduces to $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$.)

(4) (a) The $3 \times 3$ complex matrix whose columns (or rows) are the given vectors row reduces to $I_3$.

(b) $[i, 1+i, -1]$

(5) Ordered basis $= ([1,0], [i,0], [0,1], [0,i])$, matrix $= \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{bmatrix}$

(6) Span: Let $v \in \mathcal{V}$. Then there exist $c_1, \ldots, c_n \in \mathbb{C}$ such that $v = c_1v_1 + \cdots + c_nv_n$. Suppose, for each $j$, $c_j = a_j + b_ji$, for some $a_j, b_j \in \mathbb{R}$. Then $v = (a_1 + b_1i)v_1 + \cdots + (a_n + b_ni)v_n = a_1v_1 + b_1(iv_1) + \cdots + a_nv_n + b_n(iv_n)$, which shows that $v \in \operatorname{span}(\{v_1, iv_1, \ldots, v_n, iv_n\})$.


Linear Independence: Suppose $a_1v_1 + b_1(iv_1) + \cdots + a_nv_n + b_n(iv_n) = 0$. Then $(a_1 + b_1i)v_1 + \cdots + (a_n + b_ni)v_n = 0$, implying $(a_1 + b_1i) = \cdots = (a_n + b_ni) = 0$ (since $\{v_1, \ldots, v_n\}$ is linearly independent). Hence, $a_1 = b_1 = a_2 = b_2 = \cdots = a_n = b_n = 0$.

(7) Exercise 6 clearly implies that if $\mathcal{V}$ is an $n$-dimensional complex vector space, then it is $2n$-dimensional when considered as a real vector space. Hence, the real dimension must be even. Therefore, $\mathbb{R}^3$ cannot be considered as a complex vector space since its real dimension is odd.

(8) $\begin{bmatrix} -3+i & -\frac{2}{5} - \frac{11}{5}i \\ \frac{1}{2} - \frac{3}{2}i & -i \\ -\frac{1}{2} + \frac{7}{2}i & -\frac{8}{5} - \frac{4}{5}i \end{bmatrix}$

3775(9) (a) T (b) F (c) T (d) F

Section 7.4

(1) (a) Not orthogonal; $[1+2i, -3-i] \cdot [4-2i, 3+i] = -10+10i$ (b) Not orthogonal; $[1-i, -1+i, 1-i] \cdot [i, -2i, 2i] = -5-5i$

(c) Orthogonal (d) Orthogonal
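The complex dot product used here conjugates the second vector, $z_1 \cdot z_2 = \sum_i a_i\overline{b_i}$; the value from 1(a) can be reproduced numerically (NumPy assumed):

```python
import numpy as np

# z1 . z2 = sum(a_i * conj(b_i)), the text's convention for C^n.
z1 = np.array([1 + 2j, -3 - 1j])
z2 = np.array([4 - 2j,  3 + 1j])

dot = np.sum(z1 * np.conj(z2))
assert np.isclose(dot, -10 + 10j)   # nonzero, so z1 and z2 are not orthogonal
```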

(2) $(c_iz_i) \cdot (c_jz_j) = c_i\overline{c_j}(z_i \cdot z_j)$ (by parts (4) and (5) of Theorem 7.1) $= c_i\overline{c_j}(0)$ (since $z_i$ is orthogonal to $z_j$) $= 0$, so $c_iz_i$ is orthogonal to $c_jz_j$. Also, $\|c_iz_i\|^2 = (c_iz_i) \cdot (c_iz_i) = c_i\overline{c_i}(z_i \cdot z_i) = |c_i|^2\|z_i\|^2 = 1$, since $|c_i| = 1$ and $z_i$ is a unit vector. Hence, each $c_iz_i$ is also a unit vector.

(3) (a) $\{[1+i, i, 1],\ [2, -1-i, -1+i],\ [0, 1, i]\}$

(b) $\begin{bmatrix} \frac{1+i}{2} & \frac{i}{2} & \frac{1}{2} \\ \frac{2}{\sqrt{8}} & \frac{-1-i}{\sqrt{8}} & \frac{-1+i}{\sqrt{8}} \\ 0 & \frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} \end{bmatrix}$

(4) Part (1): For any $n \times n$ matrix $B$, $|\overline{B}| = \overline{|B|}$, by part (3) of Theorem 7.5. Therefore, $\bigl||A|\bigr|^2 = |A|\,\overline{|A|} = |A|\,|\overline{A}| = |A|\,|(\overline{A})^T| = |A|\,|A^*| = |AA^*| = |I_n| = 1$.

Part (2): $A$ unitary $\Rightarrow A^* = A^{-1} \Rightarrow (A^*)^{-1} = A \Rightarrow (A^*)^{-1} = (A^*)^* \Rightarrow A^*$ is unitary.

(5) $(AB)^* = B^*A^*$ (by part (5) of Theorem 7.2) $= B^{-1}A^{-1} = (AB)^{-1}$.

(6) (a) $A^* = A^{-1}$ iff $(\overline{A})^T = A^{-1}$ iff $A^T = \overline{A^{-1}}$ iff $(\overline{A})^* = (\overline{A})^{-1}$ (using the fact that $\overline{A^{-1}} = (\overline{A})^{-1}$, since $AA^{-1} = I_n \Longrightarrow \overline{AA^{-1}} = \overline{I_n} = I_n \Longrightarrow \overline{A}\,\overline{A^{-1}} = I_n \Longrightarrow \overline{A^{-1}} = (\overline{A})^{-1}$).


(b) $(A^k)^* = \left(\overline{A^k}\right)^T = \left((\overline{A})^k\right)^T$ (by part (4) of Theorem 7.2) $= \left((\overline{A})^T\right)^k = (A^*)^k = (A^{-1})^k = (A^k)^{-1}$.

(c) $A^2 = I_n$ iff $A = A^{-1}$ iff $A = A^*$ (since $A$ is unitary) iff $A$ is Hermitian.

(7) (a) $A$ is unitary iff $A^* = A^{-1}$ iff $(\overline{A})^T = A^{-1}$ iff $\left((\overline{A})^T\right)^T = (A^{-1})^T$ iff $(A^T)^* = (A^T)^{-1}$ iff $A^T$ is unitary.

(b) Follow the hint in the textbook.

(8) Modify the proof of Theorem 6.8.

(9) $Z^* = \tfrac{1}{3}\begin{bmatrix} -1-3i & 2-2i & 2 \\ 2-2i & -2i & -2i \\ -2 & 2i & 1-4i \end{bmatrix}$ and $ZZ^* = Z^*Z = \tfrac{1}{9}\begin{bmatrix} 22 & 8 & 10i \\ 8 & 16 & 2i \\ -10i & -2i & 25 \end{bmatrix}$.

(10) (a) Let $A$ be the given matrix for $L$. $A$ is normal since $A^* = \begin{bmatrix} 1+6i & 2+10i \\ -10+2i & 5 \end{bmatrix}$ and $AA^* = A^*A = \begin{bmatrix} 141 & 12-12i \\ 12+12i & 129 \end{bmatrix}$. Hence $A$ is unitarily diagonalizable by Theorem 7.9.

(b) $P = \begin{bmatrix} \frac{-1+i}{\sqrt{6}} & \frac{1-i}{\sqrt{3}} \\ \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} \end{bmatrix}$; $P^{-1}AP = P^*AP = \begin{bmatrix} 9+6i & 0 \\ 0 & -3-12i \end{bmatrix}$

(11) (a) $A$ is normal since $A^* = \begin{bmatrix} -4-5i & 2-2i & 4-4i \\ 2-2i & -1-8i & -2+2i \\ 4-4i & -2+2i & -4-5i \end{bmatrix}$ and $AA^* = A^*A = 81I_3$. Hence $A$ is unitarily diagonalizable by Theorem 7.9.

(b) $P = \tfrac{1}{3}\begin{bmatrix} -2 & 1 & 2i \\ 1 & -2 & 2i \\ 2 & 2 & i \end{bmatrix}$ and $P^{-1}AP = P^*AP = \begin{bmatrix} -9 & 0 & 0 \\ 0 & 9i & 0 \\ 0 & 0 & 9i \end{bmatrix}$.

(12) (a) Note that $Az \cdot Az = (\lambda z) \cdot (\lambda z) = \lambda\,\overline{\lambda}\,(z \cdot z)$ (by parts (4) and (5) of Theorem 7.1), while $Az \cdot Az = z \cdot (A^*(Az))$ (by Theorem 7.3) $= z \cdot ((A^*A)z) = z \cdot ((A^{-1}A)z) = z \cdot z$. Thus $\lambda\overline{\lambda}(z \cdot z) = z \cdot z$, and so $\lambda\overline{\lambda} = 1$, which gives $|\lambda|^2 = 1$, and hence $|\lambda| = 1$.

(b) Let $A$ be unitary, and let $\lambda$ be an eigenvalue of $A$. By part (a), $|\lambda| = 1$. Now, suppose $A$ is Hermitian. Then $\lambda$ is real by Theorem 7.11. Hence $\lambda = \pm 1$.


Conversely, suppose every eigenvalue of $A$ is $\pm 1$. Since $A$ is unitary, $A^* = A^{-1}$, and so $AA^* = A^*A = I$, which implies that $A$ is normal. Thus $A$ is unitarily diagonalizable (by Theorem 7.9). Let $A = PDP^*$, where $P$ is unitary and $D$ is diagonal. But since the eigenvalues of $A$ are $\pm 1$ (real), $D^* = D$. Thus $A^* = (PDP^*)^* = PDP^* = A$, and $A$ is Hermitian.

(13) The eigenvalues are $-4$, $2+\sqrt{6}$, and $2-\sqrt{6}$.

(14) (a) Since $A$ is normal, $A$ is unitarily diagonalizable (by Theorem 7.9). Hence $A = PDP^*$ for some diagonal matrix $D$ and some unitary matrix $P$. Since all eigenvalues of $A$ are real, the main diagonal elements of $D$ are real, and so $D^* = D$. Thus, $A^* = (PDP^*)^* = PD^*P^* = PDP^* = A$, and so $A$ is Hermitian.

(b) Since $A$ is normal, $A$ is unitarily diagonalizable (by Theorem 7.9). Hence, $A = PDP^*$ for some diagonal matrix $D$ and some unitary matrix $P$. Thus $PP^* = I$. Also note that if $D = \begin{bmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{bmatrix}$, then $D^* = \begin{bmatrix} \overline{\lambda_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \overline{\lambda_n} \end{bmatrix}$, and so $DD^* = \begin{bmatrix} \lambda_1\overline{\lambda_1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n\overline{\lambda_n} \end{bmatrix} = \begin{bmatrix} |\lambda_1|^2 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & |\lambda_n|^2 \end{bmatrix}$. Since the main diagonal elements of $D$ are the (not necessarily distinct) eigenvalues of $A$, and these eigenvalues all have absolute value 1, it follows that $DD^* = I$. Then, $AA^* = (PDP^*)(PDP^*)^* = (PDP^*)(PD^*P^*) = P(D(P^*P)D^*)P^* = P(DD^*)P^* = PP^* = I$. Hence, $A^* = A^{-1}$, and $A$ is unitary.

(c) Because $A$ is unitary, $A^{-1} = A^*$. Hence, $AA^* = I_n = A^*A$. Therefore, $A$ is normal.

(15) (a) F (b) T (c) T (d) T (e) F

Section 7.5

(1) (a) Note that $\langle x, x \rangle = (Ax) \cdot (Ax) = \|Ax\|^2 \ge 0$. Also, $\|Ax\|^2 = 0 \Leftrightarrow Ax = 0 \Leftrightarrow A^{-1}Ax = A^{-1}0 \Leftrightarrow x = 0$.
$\langle x, y \rangle = (Ax) \cdot (Ay) = (Ay) \cdot (Ax) = \langle y, x \rangle$.
$\langle x+y, z \rangle = (A(x+y)) \cdot (Az) = (Ax + Ay) \cdot Az = ((Ax) \cdot (Az)) + ((Ay) \cdot (Az)) = \langle x, z \rangle + \langle y, z \rangle$.
$\langle kx, y \rangle = (A(kx)) \cdot (Ay) = (k(Ax)) \cdot (Ay) = k((Ax) \cdot (Ay)) = k\langle x, y \rangle$.

(b) $\langle x, y \rangle = -183$, $\|x\| = \sqrt{314}$

(2) Note that $\langle p_1, p_1 \rangle = a_n^2 + \cdots + a_1^2 + a_0^2 \ge 0$. Clearly, $a_n^2 + \cdots + a_1^2 + a_0^2 = 0$ if and only if each $a_i = 0$.
$\langle p_1, p_2 \rangle = a_nb_n + \cdots + a_1b_1 + a_0b_0 = b_na_n + \cdots + b_1a_1 + b_0a_0 = \langle p_2, p_1 \rangle$.
$\langle p_1 + p_2, p_3 \rangle = \langle (a_n+b_n)x^n + \cdots + (a_0+b_0),\ c_nx^n + \cdots + c_0 \rangle = (a_n+b_n)c_n + \cdots + (a_0+b_0)c_0 = (a_nc_n + b_nc_n) + \cdots + (a_0c_0 + b_0c_0) = (a_nc_n + \cdots + a_0c_0) + (b_nc_n + \cdots + b_0c_0) = \langle p_1, p_3 \rangle + \langle p_2, p_3 \rangle$.
$\langle kp_1, p_2 \rangle = (ka_n)b_n + \cdots + (ka_1)b_1 + (ka_0)b_0 = k(a_nb_n + \cdots + a_1b_1 + a_0b_0) = k\langle p_1, p_2 \rangle$.

(3) (a) Note that $\langle f, f \rangle = \int_a^b (f(t))^2\,dt \ge 0$. Also, we know from calculus that a nonnegative continuous function on an interval has integral zero if and only if the function is the zero function.
$\langle f, g \rangle = \int_a^b f(t)g(t)\,dt = \int_a^b g(t)f(t)\,dt = \langle g, f \rangle$.
$\langle f+g, h \rangle = \int_a^b (f(t)+g(t))h(t)\,dt = \int_a^b (f(t)h(t) + g(t)h(t))\,dt = \int_a^b f(t)h(t)\,dt + \int_a^b g(t)h(t)\,dt = \langle f, h \rangle + \langle g, h \rangle$.
$\langle kf, g \rangle = \int_a^b (kf(t))g(t)\,dt = k\int_a^b f(t)g(t)\,dt = k\langle f, g \rangle$.

(b) $\langle f, g \rangle = \tfrac{1}{2}(e^\pi + 1)$; $\|f\| = \sqrt{\tfrac{1}{2}(e^{2\pi} - 1)}$

(4) Note that $\langle A, A \rangle = \operatorname{trace}(A^TA)$, which by Exercise 26 in Section 1.5 is the sum of the squares of all the entries of $A$, and hence is nonnegative. Also, this is clearly equal to zero if and only if $A = O_n$.
Next, $\langle A, B \rangle = \operatorname{trace}(A^TB) = \operatorname{trace}((A^TB)^T)$ (by Exercise 14 in Section 1.4) $= \operatorname{trace}(B^TA) = \langle B, A \rangle$.
$\langle A+B, C \rangle = \operatorname{trace}((A+B)^TC) = \operatorname{trace}((A^T+B^T)C) = \operatorname{trace}(A^TC + B^TC) = \operatorname{trace}(A^TC) + \operatorname{trace}(B^TC) = \langle A, C \rangle + \langle B, C \rangle$.
$\langle kA, B \rangle = \operatorname{trace}((kA)^TB) = \operatorname{trace}((kA^T)B) = \operatorname{trace}(k(A^TB)) = k(\operatorname{trace}(A^TB)) = k\langle A, B \rangle$.

(5) (a) $\langle 0, x \rangle = \langle 0+0, x \rangle = \langle 0, x \rangle + \langle 0, x \rangle$. Since $\langle 0, x \rangle \in \mathbb{C}$, we know that $-\langle 0, x \rangle$ exists. Adding $-\langle 0, x \rangle$ to both sides gives $0 = \langle 0, x \rangle$. Then, $\langle x, 0 \rangle = 0$, by property (3) in the definition of an inner product.

(b) For complex inner product spaces, $\langle x, ky \rangle = \overline{\langle ky, x \rangle}$ (by property (3) of an inner product space) $= \overline{k\langle y, x \rangle}$ (by property (5) of an inner product space) $= \overline{k}\left(\overline{\langle y, x \rangle}\right) = \overline{k}\langle x, y \rangle$ (by property (3) of an inner product space). A similar proof works in real inner product spaces.

(6) $\|kx\| = \sqrt{\langle kx, kx \rangle} = \sqrt{k\langle x, kx \rangle}$ (by property (5) of an inner product space) $= \sqrt{k\overline{k}\langle x, x \rangle}$ (by part (3) of Theorem 7.12) $= \sqrt{|k|^2\langle x, x \rangle} = |k|\sqrt{\langle x, x \rangle} = |k|\,\|x\|$.


(7) (a) $\|x+y\|^2 = \langle x+y, x+y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$ (by property (3) of an inner product space).

(b) Use part (a) and the fact that $x$ and $y$ are orthogonal iff $\langle x, y \rangle = 0$.

(c) A proof similar to that in part (a) yields $\|x-y\|^2 = \|x\|^2 - 2\langle x, y \rangle + \|y\|^2$. Add this to the equation for $\|x+y\|^2$ in part (a).

(8) (a) Subtract the equation for $\|x-y\|^2$ in the answer to Exercise 7(c) from the equation for $\|x+y\|^2$ in Exercise 7(a). Then multiply by $\tfrac{1}{4}$.

(b) In the complex case, $\|x+y\|^2 = \langle x+y, x+y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle = \langle x, x \rangle + \langle x, y \rangle + \overline{\langle x, y \rangle} + \langle y, y \rangle$. Similarly, $\|x-y\|^2 = \langle x, x \rangle - \langle x, y \rangle - \overline{\langle x, y \rangle} + \langle y, y \rangle$, $\|x+iy\|^2 = \langle x, x \rangle - i\langle x, y \rangle + i\overline{\langle x, y \rangle} + \langle y, y \rangle$, and $\|x-iy\|^2 = \langle x, x \rangle + i\langle x, y \rangle - i\overline{\langle x, y \rangle} + \langle y, y \rangle$. Then, $\|x+y\|^2 - \|x-y\|^2 = 2\langle x, y \rangle + 2\overline{\langle x, y \rangle}$. Also, $i(\|x+iy\|^2 - \|x-iy\|^2) = i(-2i\langle x, y \rangle + 2i\overline{\langle x, y \rangle}) = 2\langle x, y \rangle - 2\overline{\langle x, y \rangle}$. Adding these and multiplying by $\tfrac{1}{4}$ produces the desired result.

(9) (a) $\sqrt{\frac{\pi^3}{3} - \frac{3\pi}{2}}$ (b) $0.941$ radians, or $53.9°$

(10) (a) $\sqrt{174}$ (b) $0.586$ radians, or $33.6°$

(11) (a) If either $x$ or $y$ is $0$, then the theorem is obviously true. We only need to examine the case when both $\|x\|$ and $\|y\|$ are nonzero. We need to prove $|\langle x, y \rangle| \le \|x\|\,\|y\|$. By property (5) of an inner product space, and part (3) of Theorem 7.12, this is equivalent to proving $\left|\left\langle \frac{x}{\|x\|}, \frac{y}{\|y\|} \right\rangle\right| \le 1$. Hence, it is enough to show $|\langle a, b \rangle| \le 1$ for any unit vectors $a$ and $b$.

Let $\langle a, b \rangle = z \in \mathbb{C}$. We need to show $|z| \le 1$. If $z = 0$, we are done. If $z \neq 0$, then let $v = \frac{|z|}{z}a$. Then $\langle v, b \rangle = \left\langle \frac{|z|}{z}a, b \right\rangle = \frac{|z|}{z}\langle a, b \rangle = \frac{|z|}{z}z = |z| \in \mathbb{R}$. Also, since $\left|\frac{|z|}{z}\right| = 1$, $v = \frac{|z|}{z}a$ is a unit vector. Now, $0 \le \|v-b\|^2 = \langle v-b, v-b \rangle = \|v\|^2 - \langle v, b \rangle - \overline{\langle v, b \rangle} + \|b\|^2 = 1 - |z| - |z| + 1 = 2 - 2|z|$. Hence $2|z| \le 2$, or $|z| \le 1$, completing the proof.

(b) Note that $\|x+y\|^2 = \langle x+y, x+y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle = \|x\|^2 + \langle x, y \rangle + \overline{\langle x, y \rangle} + \|y\|^2$ (by property (3) of an inner product space) $= \|x\|^2 +$


$2(\text{real part of } \langle x, y \rangle) + \|y\|^2 \le \|x\|^2 + 2|\langle x, y \rangle| + \|y\|^2 \le \|x\|^2 + 2\|x\|\,\|y\| + \|y\|^2$ (by the Cauchy-Schwarz Inequality) $= (\|x\| + \|y\|)^2$.

(12) The given inequality follows directly from the Cauchy-Schwarz Inequality.

(13) (i) $d(x, y) = \|x-y\| = \|(-1)(x-y)\|$ (by Theorem 7.13) $= \|y-x\| = d(y, x)$.
(ii) $d(x, y) = \|x-y\| = \sqrt{\langle x-y, x-y \rangle} \ge 0$. Also, $d(x, y) = 0$ iff $\|x-y\| = 0$ iff $\langle x-y, x-y \rangle = 0$ iff $x-y = 0$ (by property (2) of an inner product space) iff $x = y$.
(iii) $d(x, y) = \|x-y\| = \|(x-z) + (z-y)\| \le \|x-z\| + \|z-y\|$ (by the Triangle Inequality, part (2) of Theorem 7.14) $= d(x, z) + d(z, y)$.

(14) (a) Orthogonal (b) Orthogonal (c) Not orthogonal (d) Orthogonal

(15) Suppose $x \in T$, where $x$ is a linear combination of $\{x_1, \ldots, x_n\} \subseteq T$. Then $x = a_1x_1 + \cdots + a_nx_n$. Consider $\langle x, x \rangle = \langle x, a_1x_1 \rangle + \cdots + \langle x, a_nx_n \rangle$ (by part (2) of Theorem 7.12) $= \overline{a_1}\langle x, x_1 \rangle + \cdots + \overline{a_n}\langle x, x_n \rangle$ (by part (3) of Theorem 7.12) $= \overline{a_1}(0) + \cdots + \overline{a_n}(0)$ (since vectors in $T$ are orthogonal) $= 0$, a contradiction, since $x$ is assumed to be nonzero.

(16) (a) $\int_{-\pi}^{\pi} \cos mt\,dt = \tfrac{1}{m}\sin mt \,\big|_{-\pi}^{\pi} = \tfrac{1}{m}(\sin m\pi - \sin(-m\pi)) = \tfrac{1}{m}(0)$ (since the sine of an integral multiple of $\pi$ is $0$) $= 0$.
Similarly, $\int_{-\pi}^{\pi} \sin mt\,dt = -\tfrac{1}{m}\cos mt \,\big|_{-\pi}^{\pi} = -\tfrac{1}{m}(\cos m\pi - \cos(-m\pi)) = -\tfrac{1}{m}(\cos m\pi - \cos m\pi)$ (since $\cos(-x) = \cos x$) $= -\tfrac{1}{m}(0) = 0$.

(b) Use the trigonometric identities $\cos mt \cos nt = \tfrac{1}{2}(\cos(m-n)t + \cos(m+n)t)$ and $\sin mt \sin nt = \tfrac{1}{2}(\cos(m-n)t - \cos(m+n)t)$. Then, $\int_{-\pi}^{\pi} \cos mt \cos nt\,dt = \tfrac{1}{2}\int_{-\pi}^{\pi} \cos(m-n)t\,dt + \tfrac{1}{2}\int_{-\pi}^{\pi} \cos(m+n)t\,dt = \tfrac{1}{2}(0) + \tfrac{1}{2}(0) = 0$, by part (a), since $m-n$ is an integer. Also, $\int_{-\pi}^{\pi} \sin mt \sin nt\,dt = \tfrac{1}{2}\int_{-\pi}^{\pi} \cos(m-n)t\,dt - \tfrac{1}{2}\int_{-\pi}^{\pi} \cos(m+n)t\,dt = \tfrac{1}{2}(0) - \tfrac{1}{2}(0) = 0$, by part (a).

(c) Use the trigonometric identity $\sin nt \cos mt = \tfrac{1}{2}(\sin(n+m)t + \sin(n-m)t)$. Then, $\int_{-\pi}^{\pi} \cos mt \sin nt\,dt = \tfrac{1}{2}\int_{-\pi}^{\pi} \sin(n+m)t\,dt + \tfrac{1}{2}\int_{-\pi}^{\pi} \sin(n-m)t\,dt = \tfrac{1}{2}(0) + \tfrac{1}{2}(0) = 0$, by part (a), since $n-m$ is an integer.

(d) Obvious from parts (a), (b), and (c) with the real inner product of Example 5 (or Example 8) with $a = -\pi$, $b = \pi$: $\langle f, g \rangle = \int_{-\pi}^{\pi} f(t)g(t)\,dt$.


(17) Let B = (v1; : : : ;vk) be an orthogonal ordered basis for a subspace W ofV, and let v 2 W. Let [v]B = [a1; : : : ; ak]. The goal is to show thatai = < v;vi > =kvik2. Now, v = a1v1 + � � �+ akvk. Hence, < v;vi > =< a1v1 + � � �+ akvk;vi > = a1 < v1;vi > + � � �+ ai < vi;vi > + � � �+ak < vk;vi > (by properties (4) and (5) of an inner product)= a1(0)+ � � �+ai�1(0)+aikvik2+ai+1(0)+ � � �+ak(0) (since B is orthog-onal) = aikvik2. Hence, ai = < v;vi > =kvik2. Also, if B is orthonormal,then each kvik = 1, so ai = < v;vi >.

(18) Let v = a1v1 + � � �+ akvk and w = b1v1 + � � �+ bkvk, whereai = < v;vi > and bi = < w;vi > (by Theorem 7.16). Then < v;w > =

< a1v1 + � � �+ akvk; b1v1 + � � �+ bkvk > = a1b1 < v1;v1 > +a2b2 < v2;v2 > + � � � + akbk < vk;vk > (by property (5) of an innerproduct and part (3) of Theorem 7.12, and since < vi;vj > = 0 for i 6= j)= a1b1 + a2b2 + � � �+ akbk (since kvik = 1, for 1 � i � k) =< v;v1 > < w;v1 >+ < v;v2 > < w;v2 >+ � � �+ < v;vk > < w;vk >.

(19) Using w1 = t2 � t + 1, w2 = 1, and w3 = t yields the orthogonal basisfv1;v2;v3g, with v1 = t2 � t+ 1, v2 = �20t2 + 20t+ 13, andv3 = 15t

2 + 4t� 5.

(20) f[�9;�4; 8]; [27; 11;�22]; [5; 3;�4]g

(21) The proof is totally analogous to the (long) proof of Theorem 6.4 in Section6.1 of the textbook. Use Theorem 7.15 in the proof in place of Theorem 6.1.

(22) (a) Follow the proof of Theorem 6.11, using properties (4) and (5) ofan inner product. In the last step, use the fact that < w;w >= 0 =) w = 0 (by property (2) of an inner product).

(b) Prove part (4) of Theorem 7.19 �rst, using a proof similar to theproof of Theorem 6.12. (In that proof, use Theorem 7.16 in place ofTheorem 6.3.) Then prove part (5) of Theorem 7.19 as follows:Let W be a subspace of V of dimension k. By Theorem 7.17, W hasan orthogonal basis fv1; : : : ;vkg. Expand this basis to an orthogo-nal basis for all of V. (That is, �rst expand to any basis for V byTheorem 4.18, then use the Gram-Schmidt Process. Since the �rst kvectors are already orthogonal, this expands fv1; : : : ;vkg to an or-thogonal basis fv1; : : : ;vng for V.) Then, by part (4) of Theorem7.19, fvk+1; : : : ;vng is a basis for W?, and so dim(W?) = n � k.Hence dim(W) + dim(W?) = n.

(c) Since any vector in W is orthogonal to all vectors in W?,W � (W?)?:

(d) By part (3) of Theorem 7.19, W ⊆ (W⊥)⊥. Next, by part (5) of Theorem 7.19, dim(W) = n − dim(W⊥) = n − (n − dim((W⊥)⊥)) = dim((W⊥)⊥). Thus, by Theorem 4.16, or its complex analog, W = (W⊥)⊥.


Andrilli/Hecker - Answers to Exercises Section 7.5

(23) W⊥ = span({t^3 − t^2, t + 1})

(24) Using 1 and t as the vectors in the Gram-Schmidt Process, the basis obtained for W⊥ is {−5t^2 + 10t − 2, 15t^2 − 14t + 2}.

(25) As in the hint, let {v1, ..., vk} be an orthonormal basis for W. Now if v ∈ V, let w1 = <v, v1> v1 + ... + <v, vk> vk and w2 = v − w1. Then w1 ∈ W because w1 is a linear combination of basis vectors for W. We claim that w2 ∈ W⊥. To see this, let u ∈ W. Then u = a1 v1 + ... + ak vk for some a1, ..., ak. Then <u, w2> = <u, v − w1> = <a1 v1 + ... + ak vk, v − (<v, v1> v1 + ... + <v, vk> vk)> = <a1 v1 + ... + ak vk, v> − <a1 v1 + ... + ak vk, <v, v1> v1 + ... + <v, vk> vk> = Σ_{i=1}^{k} a_i <v_i, v> − Σ_{i=1}^{k} Σ_{j=1}^{k} a_i <v, v_j> <v_i, v_j>. But <v_i, v_j> = 0 when i ≠ j and <v_i, v_j> = 1 when i = j, since {v1, ..., vk} is an orthonormal set. Hence, <u, w2> = Σ_{i=1}^{k} a_i <v_i, v> − Σ_{i=1}^{k} a_i <v, v_i> = Σ_{i=1}^{k} a_i <v_i, v> − Σ_{i=1}^{k} a_i <v_i, v> = 0. Since this is true for every u ∈ W, we conclude that w2 ∈ W⊥.

Finally, we want to show uniqueness of the decomposition. Suppose that v = w1 + w2 and v = w1′ + w2′, where w1, w1′ ∈ W and w2, w2′ ∈ W⊥. We want to show that w1 = w1′ and w2 = w2′. Now, w1 − w1′ = w2′ − w2. Also, w1 − w1′ ∈ W, but w2′ − w2 ∈ W⊥. Thus, w1 − w1′ = w2′ − w2 ∈ W ∩ W⊥. By part (2) of Theorem 7.19, w1 − w1′ = w2′ − w2 = 0. Hence, w1 = w1′ and w2 = w2′.
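The construction in (25) can be checked numerically. A minimal sketch using the standard dot product on R^3 (the orthonormal basis and vector below are illustrative, not from the exercise):

```python
# Form w1 = sum <v, vi> vi and w2 = v - w1 for an orthonormal basis of a
# plane W in R^3, then check that w2 is orthogonal to each basis vector.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

v1 = [1, 0, 0]
v2 = [0, 0.6, 0.8]        # unit vector orthogonal to v1
v = [2.0, 1.0, 3.0]

c1, c2 = dot(v, v1), dot(v, v2)
w1 = [c1 * a + c2 * b for a, b in zip(v1, v2)]   # projection of v onto W
w2 = [x - y for x, y in zip(v, w1)]

print(w1)
print(dot(w2, v1), dot(w2, v2))  # both are (numerically) zero
```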

(26) w1 = (1/(2π))(sin t − cos t), w2 = e^t − (1/(2π)) sin t + (1/(2π)) cos t

(27) Orthonormal basis for W = { √(15/14) (2t^2 − 1), √(3/322) (10t^2 + 7t + 2) }. Then v = 4t^2 − t + 3 = w1 + w2, where w1 = (1/23)(32t^2 + 73t + 57) ∈ W and w2 = (12/23)(5t^2 − 8t + 1) ∈ W⊥.

(28) Let W = span({v1, ..., vk}). Notice that w1 = Σ_{i=1}^{k} <v, v_i> v_i (by Theorem 7.16, since {v1, ..., vk} is an orthonormal basis for W by Theorem 7.15). Then, using the hint in the text yields ||v||^2 = <v, v> = <w1 + w2, w1 + w2> = <w1, w1> + <w2, w1> + <w1, w2> + <w2, w2> = ||w1||^2 + ||w2||^2 (since w1 ∈ W and w2 ∈ W⊥). Hence,

||v||^2 ≥ ||w1||^2
= < Σ_{i=1}^{k} <v, v_i> v_i, Σ_{j=1}^{k} <v, v_j> v_j >
= Σ_{i=1}^{k} < (<v, v_i> v_i), Σ_{j=1}^{k} <v, v_j> v_j >
= Σ_{i=1}^{k} Σ_{j=1}^{k} < (<v, v_i> v_i), (<v, v_j> v_j) >
= Σ_{i=1}^{k} Σ_{j=1}^{k} ( <v, v_i> <v, v_j> ) <v_i, v_j>
= Σ_{i=1}^{k} ( <v, v_i> <v, v_i> ) (because {v1, ..., vk} is an orthonormal set)
= Σ_{i=1}^{k} <v, v_i>^2.


(29) (a) Let {v1, ..., vk} be an orthonormal basis for W. Let x, y ∈ V. Then proj_W x = <x, v1> v1 + ... + <x, vk> vk, proj_W y = <y, v1> v1 + ... + <y, vk> vk, proj_W(x + y) = <x + y, v1> v1 + ... + <x + y, vk> vk, and proj_W(kx) = <kx, v1> v1 + ... + <kx, vk> vk. Clearly, by properties (4) and (5) of an inner product, proj_W(x + y) = proj_W x + proj_W y, and proj_W(kx) = k(proj_W x). Hence L is a linear transformation.

(b) ker(L) = W⊥, range(L) = W

(c) First, we establish that if v ∈ W, then L(v) = v. Note that for v ∈ W, v = v + 0, where v ∈ W and 0 ∈ W⊥. But by Theorem 7.20, any decomposition v = w1 + w2 with w1 ∈ W and w2 ∈ W⊥ is unique. Hence, w1 = v and w2 = 0. Then, since w1 = proj_W v, we get L(v) = v. Finally, let v ∈ V and let x = L(v) = proj_W v ∈ W. Then (L ∘ L)(v) = L(L(v)) = L(x) = proj_W x = x (since x ∈ W) = L(v). Hence, L ∘ L = L.
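The idempotence property proved in (29)(c) can be seen numerically; a minimal sketch with W taken to be the xy-plane in R^3 (an illustrative choice, not from the exercise):

```python
# L = proj_W applied twice gives the same result as applying it once.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def proj(v, basis):
    """Projection onto span(basis); basis is assumed orthonormal."""
    coeffs = [dot(v, b) for b in basis]
    return [sum(c * b[j] for c, b in zip(coeffs, basis))
            for j in range(len(v))]

W = [[1, 0, 0], [0, 1, 0]]     # orthonormal basis for the xy-plane
v = [3.0, -2.0, 7.0]

once = proj(v, W)
twice = proj(once, W)
print(once)   # [3.0, -2.0, 0.0]
print(twice)  # [3.0, -2.0, 0.0]
```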

(30) (a) F (b) F (c) F (d) T (e) F

Chapter 7 Review Exercises

(1) (a) 0

(b) (1 + 2i)(v · z) = ((1 + 2i)v) · z = −21 + 43i, v · ((1 + 2i)z) = 47 − 9i

(c) Since v · z = 13 + 17i, (1 + 2i)(v · z) = ((1 + 2i)v) · z = (1 + 2i)(13 + 17i) = −21 + 43i, while v · ((1 + 2i)z) = (conjugate of (1 + 2i))(13 + 17i) = (1 − 2i)(13 + 17i) = 47 − 9i

(d) w · z = −35 + 24i; w · (v + z) = −35 + 24i, since w · v = 0

(2) (a) H =
[ 2      1+3i      7−i     ]
[ 1−3i   34        −10−10i ]
[ 7+i    −10+10i   30      ]

(b) AA* = [32, −2+14i; −2−14i, 34], which is clearly Hermitian.

(3) (Az) · w = z · (A*w) (by Theorem 7.3) = z · (−Aw) (A is skew-Hermitian) = −z · (Aw) (by Theorem 7.1, part (5))

(4) (a) w = 4 + 3i, z = −2i

(b) x = 2 + 7i, y = 0, z = 3 − 2i

(c) No solutions

(d) {[(2 + i) − (3 − i)c, (7 + i) − ic, c] | c ∈ C} = {[(2 − 3c) + (1 + c)i, 7 + (1 − c)i, c] | c ∈ C}


(5) By part (3) of Theorem 7.5, |A*| is the conjugate of |A|. Hence, |A*A| = |A*| · |A| = (conjugate of |A|) · |A| = | |A| |^2, which is real and nonnegative. This equals zero if and only if |A| = 0, which occurs if and only if A is singular (by part (4) of Theorem 7.5).

(6) (a) p_A(x) = x^3 − x^2 + x − 1 = (x^2 + 1)(x − 1) = (x − i)(x + i)(x − 1);
D =
[ i  0   0 ]
[ 0  −i  0 ]
[ 0  0   1 ]
P =
[ −2−i  −2+i  0 ]
[ 1−i   1+i   2 ]
[ 1     1     1 ]

(b) p_A(x) = x^3 − 2ix^2 − x = x(x − i)^2; not diagonalizable. The eigenspace for λ = i is one-dimensional, with fundamental eigenvector [1−3i, −1, 1]. A fundamental eigenvector for λ = 0 is [−i, 1, 1].

(7) (a) One possibility: Consider L : C → C given by L(z) = z̄ (complex conjugation). Note that L(v + w) = conj(v + w) = v̄ + w̄ = L(v) + L(w). But L is not a linear operator on C because L(i) = −i, while iL(1) = i(1) = i, so the rule "L(cv) = cL(v)" is not satisfied.

(b) The example given in part (a) is a linear operator on C thought of as a real vector space. In that case we may use only real scalars, and so, if v = a + bi, then L(cv) = L(ca + cbi) = ca − cbi = c(a − bi) = cL(v). Note: for any function L from any vector space to itself (real or complex), the rule "L(v + w) = L(v) + L(w)" implies that L(cv) = cL(v) for any rational scalar c. Thus, any example for real vector spaces for which L(v + w) = L(v) + L(w) is satisfied but L(cv) = cL(v) is not satisfied for some c must involve an irrational value of c.

One possibility: Consider R as a vector space over the rationals (as the scalar field). Choose an uncountably infinite basis B for R over Q with 1 ∈ B and π ∈ B. Define L : R → R by letting L(1) = 1, but L(v) = 0 for every v ∈ B with v ≠ 1. (Defining L on a basis uniquely determines L.) This L is a linear operator on R over Q, and so L(v + w) = L(v) + L(w) is satisfied. But, by definition, L(π) = 0, while πL(1) = π(1) = π. Thus, L is not a linear operator on the real vector space R.

(8) (a) B = {[1, i, 1, −i], [4+5i, 7−4i, i, 1], [10, −2+6i, −8−i, −1+8i], [0, 0, 1, i]}

(b) C = {(1/2)[1, i, 1, −i], (1/(6√3))[4+5i, 7−4i, i, 1], (1/(3√30))[10, −2+6i, −8−i, −1+8i], (1/√2)[0, 0, 1, i]}

(c)
[ 1/2            −(1/2)i           1/2             (1/2)i         ]
[ (4−5i)/(6√3)   (7+4i)/(6√3)      −i/(6√3)        1/(6√3)        ]
[ 10/(3√30)      (−2−6i)/(3√30)    (−8+i)/(3√30)   (−1−8i)/(3√30) ]
[ 0              0                 1/√2            −(1/√2)i       ]


(9) (a) p_A(x) = x^2 − 5x + 4 = (x − 1)(x − 4); D = [1, 0; 0, 4];
P =
[ (−1−i)/√6   (1+i)/√3 ]
[ 2/√6        1/√3     ]

(b) p_A(x) = x^3 + (−98 + 98i)x^2 − 4802i x = x(x − 49 + 49i)^2;
D =
[ 0  0       0      ]
[ 0  49−49i  0      ]
[ 0  0       49−49i ]
P =
[ 6/7     i/√5   −4/(7√5)  ]
[ (3/7)i  2/√5   −2i/(7√5) ]
[ 2/7     0      15/(7√5)  ]
is a probable answer. Another possibility: P = (1/7) ×
[ 3i  −2  −6i ]
[ 2   6i  3   ]
[ 6   3   2i  ]

(10) Both AA* and A*A equal
[ 80       105+45i  60     ]
[ 105−45i  185      60−60i ]
[ 60       60+60i   95     ]
and so A is normal. Hence, by Theorem 7.9, A is unitarily diagonalizable.

(11) Now, A*A =
[ 459        −46+459i    −459+46i    −11−925i    −83+2227i  ]
[ −46−459i   473         92+445i     −918+103i   2248−139i  ]
[ −459−46i   92−445i     473         −81+932i    305−2206i  ]
[ −11+925i   −918−103i   −81−932i    1871        −4482−218i ]
[ −83−2227i  2248+139i   305+2206i   −4482+218i  10843      ]
but AA* =
[ 7759         −120+2395i  3850−1881i  −1230−3862i  −95−2905i ]
[ −120−2395i   769         −648−1165i  −1188+445i   −906+75i  ]
[ 3850+1881i   −648+1165i  2371        329−2220i    660−1467i ]
[ −1230+3862i  −1188−445i  329+2220i   2127         1467+415i ]
[ −95+2905i    −906−75i    660+1467i   1467−415i    1093      ]
so A is not normal. Hence, A is not unitarily diagonalizable, by Theorem 7.9. (Note: A is diagonalizable (eigenvalues 0, ±1, and ±i), but the eigenspaces are not orthogonal to each other.)

(12) If U is a unitary matrix, then U* = U^{-1}. Hence, U*U = UU* = I.

(13) Distance = √(8/105) ≈ 0.276

(14) {[1, 0, 0], [4, 3, 0], [5, 4, 2]}

(15) Orthogonal basis for W: {sin x, x cos x + (1/2) sin x};
w1 = 2 sin x − (36/(4π^2 + 3))(x cos x + (1/2) sin x); w2 = x − w1


(16) (a) T (b) F (c) T (d) F (e) F (f) T (g) F (h) T (i) F (j) T (k) T (l) T (m) F (n) T (o) T (p) T (q) T (r) T (s) T (t) T (u) T (v) T (w) F


Chapter 8

Section 8.1

(1) Symmetric: (a), (b), (c), (d), (g)

(a) G1:
[0 1 1 1]
[1 0 1 1]
[1 1 0 1]
[1 1 1 0]

(b) G2:
[1 1 0 0 1]
[1 0 0 1 0]
[0 0 1 0 1]
[0 1 0 0 1]
[1 0 1 1 1]

(c) G3:
[0 0 0]
[0 0 0]
[0 0 0]

(d) G4:
[0 1 0 1 0 0]
[1 0 1 1 0 0]
[0 1 0 0 1 1]
[1 1 0 0 1 0]
[0 0 1 1 0 1]
[0 0 1 0 1 0]

(e) D1:
[0 1 0 0]
[0 0 1 0]
[1 0 0 1]
[1 1 0 0]

(f) D2:
[0 1 1 0]
[0 1 1 1]
[0 1 0 1]
[0 0 0 1]

(g) D3:
[0 1 1 0 0]
[1 0 0 0 1]
[1 0 0 1 0]
[0 0 1 0 1]
[0 1 0 1 0]

(h) D4:
[0 1 0 0 0 0 0 0]
[0 0 1 0 0 0 0 0]
[0 0 0 1 0 0 0 0]
[0 0 0 0 1 0 0 0]
[0 0 0 0 0 1 0 0]
[0 0 0 0 0 0 1 0]
[0 0 0 0 0 0 0 1]
[1 0 0 0 0 0 0 0]

(2) All figures for this exercise appear on the next page.

F can be the adjacency matrix for either a graph or digraph (see Figure 12).
G can be the adjacency matrix for a digraph (only) (see Figure 13).
H can be the adjacency matrix for a digraph (only) (see Figure 14).
I can be the adjacency matrix for a graph or digraph (see Figure 15).
K can be the adjacency matrix for a graph or digraph (see Figure 16).
L can be the adjacency matrix for a graph or digraph (see Figure 17).


(3) The digraph is shown in Figure 18 (see previous page), and the adjacency matrix (rows and columns labeled A through F, in order) is
[0 0 1 0 0 0]
[0 0 0 1 1 0]
[0 1 0 0 1 0]
[1 0 0 0 0 1]
[1 1 0 1 0 0]
[0 0 0 1 0 0]
The transpose gives no new information. But it does suggest a different interpretation of the results: namely, the (i, j) entry of the transpose equals 1 if author j influences author i (see Figure 18 on previous page).

(4) (a) 3

(b) 3

(c) 6 = 1 + 1 + 4

(d) 7 = 0 + 1 + 1 + 5

(e) Length 4

(f) Length 3

(5) (a) 4

(b) 0

(c) 5 = 1 + 1 + 3

(d) 7 = 1 + 0 + 3 + 3

(e) No such path exists

(f) Length 2

(6) (a) Figure 8.5: 7; Figure 8.6: 2 (b) Figure 8.5: 8; Figure 8.6: 4

(c) Figure 8.5: 3 = 0 + 1 + 0 + 2; Figure 8.6: 17 = 1 + 2 + 4 + 10

(7) (a) If the vertex is the ith vertex, then the ith row and ith column entriesof the adjacency matrix all equal zero, except possibly for the (i; i)entry.

(b) If the vertex is the ith vertex, then the ith row entries of the adjacencymatrix all equal zero, except possibly for the (i; i) entry. (Note: Theith column entries may be nonzero.)

(8) (a) The trace equals the number of loops in the graph or digraph.

(b) The trace equals the number of cycles of length k in the graph or digraph. (See Exercise 6 in the textbook for the definition of a cycle.)

(9) (a) The digraph in Figure 8.5 is strongly connected; the digraph in Figure8.6 is not strongly connected (since there is no path directed to P5from any other vertex).

(b) Use the fact that if a path exists between a given pair Pi, Pj of vertices, then there must be a path of length at most n − 1 between Pi and Pj. This is because any longer path would involve returning to a vertex Pk that was already visited earlier in the path, since there are only n vertices. (That is, any portion of the longer path that involves travelling from Pk to itself could be removed from that path to yield a shorter path between Pi and Pj.) Then use the statement in the box just before Example 4 in the textbook.


(10) (a) Use part (c).

(b) Yes, it is a dominance digraph, because no tie games are possible andbecause each team plays every other team. Thus if Pi and Pj are twogiven teams, either Pi defeats Pj or vice versa.

(c) Since the entries of both A and A^T are all zeroes and ones, then, with i ≠ j, the (i, j) entry of A + A^T = a_ij + a_ji = 1 iff a_ij = 1 or a_ji = 1, but not both. Also, the (i, i) entry of A + A^T = 2a_ii = 0 iff a_ii = 0. Hence, A + A^T has the desired properties (all main diagonal entries equal to 0, and all other entries equal to 1) iff there is always exactly one edge between two distinct vertices (a_ij = 1 or a_ji = 1, but not both, for i ≠ j), and the digraph has no loops (a_ii = 0). (This is the definition of a dominance digraph.)

(11) We prove Theorem 8.1 by induction on k. For the base step, k = 1. This case is true by the definition of the adjacency matrix. For the inductive step, let B = A^k. Then A^{k+1} = BA. Note that the (i, j) entry of A^{k+1} = (ith row of B)·(jth column of A) = Σ_{q=1}^{n} b_iq a_qj = Σ_{q=1}^{n} (number of paths of length k from Pi to Pq)·(number of paths of length 1 from Pq to Pj) = (number of paths of length k + 1 from Pi to Pj).

(12) (a) T (b) F (c) T (d) F (e) T (f) T (g) T

Section 8.2

(1) (a) I1 = 8, I2 = 5, I3 = 3

(b) I1 = 10, I2 = 4, I3 = 5, I4 = 1, I5 = 6

(c) I1 = 12, I2 = 5, I3 = 3, I4 = 2, I5 = 2, I6 = 7

(d) I1 = 64, I2 = 48, I3 = 16, I4 = 36, I5 = 12, I6 = 28

(2) (a) T (b) T

Section 8.3

(1) (a) y = −0.8x − 3.3, y ≈ −7.3 when x = 5
(b) y = 1.12x + 1.05, y ≈ 6.65 when x = 5
(c) y = −1.5x + 3.8, y ≈ −3.7 when x = 5

(2) (a) y = 0.375x^2 + 0.35x + 3.60
(b) y = −0.83x^2 + 1.47x − 1.8
(c) y = −0.042x^2 + 0.633x + 0.266


(3) (a) y = (1/4)x^3 + (25/28)x^2 + (25/14)x + 37/35
(b) y = −(1/6)x^3 − (1/14)x^2 − (1/3)x + 117/35

(4) (a) y = 4.4286x^2 − 2.0571
(b) y = 0.7116x^2 + 1.6858x + 0.8989
(c) y = −0.1014x^2 + 0.9633x − 0.8534
(d) y = −0.1425x^3 + 0.9882x
(e) y = 0x^3 − 0.3954x^2 + 0.9706

(5) (a) y = 0.4x + 2.54; the angle reaches 20° in the 44th month

(b) y = 0.02857x^2 + 0.2286x + 2.74; the angle reaches 20° in the 21st month

(c) The quadratic approximation will probably be more accurate after the fifth month because the amount of change in the angle from the vertical is generally increasing with each passing month.

(6) The answers to (a) and (b) assume that the years are renumbered as suggested in the textbook.

(a) y = 25.2x + 126.9; predicted population in 2020 is 328.5 million

(b) y = 0.29x^2 + 23.16x + 129.61; predicted population in 2020 is 333.51 million

(c) The quadratic approximation is probably more accurate. The population changes between successive decades (that is, the first differences of the original data) are, respectively, 28.0, 24.0, 23.2, 22.2, and 32.7 (million). This shows that the change in the population is decreasing slightly each decade at first, but then starts increasing. For the linear approximation in part (a) to be a good fit for the data, these first differences should all be relatively close to 25.2, the slope of the line in part (a). But that is not the case here. For this reason, the linear model does not seem to fit the data well. The second differences of the original data (the changes between successive first differences) are, respectively, −4.0, −0.8, −1.0, and +10.5 (million). For the quadratic approximation in part (b) to be a good fit for the data, these second differences should all be relatively close to 0.58, the second derivative of the quadratic in part (b). However, that is not the case here. On the other hand, the positive second derivative of the quadratic model in part (b) indicates that this graph is concave up, and therefore is a better fit to the overall shape of the data than the linear model. This probably makes the quadratic approximation more reliable for predicting the population in the future.

(7) The least-squares polynomial is y = (4/5)x^2 − (2/5)x + 2, which is the exact quadratic through the given three points.


(8) When t = 1, the system in Theorem 8.2 is

[1, 1, ..., 1; a1, a2, ..., an] [1, a1; 1, a2; ...; 1, an] [c0; c1] = [1, 1, ..., 1; a1, a2, ..., an] [b1; b2; ...; bn],

which becomes

[n, Σ_{i=1}^{n} a_i; Σ_{i=1}^{n} a_i, Σ_{i=1}^{n} a_i^2] [c0; c1] = [Σ_{i=1}^{n} b_i; Σ_{i=1}^{n} a_i b_i],

the desired system.
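The 2×2 normal equations just derived can be exercised directly; the data points below are illustrative and chosen to lie on a known line.

```python
# Solve the t = 1 (least-squares line) normal equations
#   [n, S_a; S_a, S_a2] [c0; c1] = [S_b; S_ab]
# by Cramer's rule for a small data set.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # lie exactly on y = 2x + 1

n = len(xs)
S_a  = sum(xs)
S_a2 = sum(x * x for x in xs)
S_b  = sum(ys)
S_ab = sum(x * y for x, y in zip(xs, ys))

det = n * S_a2 - S_a * S_a
c0 = (S_b * S_a2 - S_a * S_ab) / det
c1 = (n * S_ab - S_a * S_b) / det
print(c0, c1)  # 1.0 2.0
```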

(9) (a) x1 = 230/39, x2 = 155/39;
4x1 − 3x2 = 11 2/3, which is almost 12
2x1 + 5x2 = 31 2/3, which is almost 32
3x1 + x2 = 21 2/3, which is close to 21

(b) x1 = 46/11, x2 = −12/11, x3 = 62/33;
2x1 − x2 + x3 = 11 1/3, which is almost 11
−x1 + 3x2 − x3 = −9 1/3, which is almost −9
x1 − 2x2 + 3x3 = 12
3x1 − 4x2 + 2x3 = 20 2/3, which is almost 21

(10) (a) T (b) F (c) F (d) F

Section 8.4

(1) A is not stochastic, since A is not square; A is not regular, since A is not stochastic.
B is not stochastic, since the entries of column 2 do not sum to 1; B is not regular, since B is not stochastic.
C is stochastic; C is regular, since C is stochastic and has all nonzero entries.
D is stochastic; D is not regular, since every positive power of D is a matrix whose rows are the rows of D permuted in some order, and hence every such power contains zero entries.
E is not stochastic, since the entries of column 1 do not sum to 1; E is not regular, since E is not stochastic.
F is stochastic; F is not regular, since every positive power of F has all second row entries zero.
G is not stochastic, since G is not square; G is not regular, since G is not stochastic.
H is stochastic; H is regular, since
H^2 =
[1/2 1/4 1/4]
[1/4 1/2 1/4]
[1/4 1/4 1/2]
which has no zero entries.

(2) (a) p1 = [5/18, 13/18], p2 = [67/216, 149/216]

(b) p1 = [2/9, 13/36, 5/12], p2 = [25/108, 97/216, 23/72]

(c) p1 = [17/48, 1/3, 5/16], p2 = [205/576, 49/144, 175/576]

(3) (a) [2/5, 3/5] (b) [18/59, 20/59, 21/59] (c) [1/4, 3/10, 3/10, 3/20]

(4) (a) [0.4, 0.6] (b) [0.305, 0.339, 0.356]

(5) (a) [0.34, 0.175, 0.34, 0.145] in the next election; [0.3555, 0.1875, 0.2875, 0.1695] in the election after that

(b) The steady-state vector is [0.36, 0.20, 0.24, 0.20]; in a century, the votes would be 36% for Party A and 24% for Party C.

(6) (a) M =
[ 1/2  1/6  1/6  1/5   0   ]
[ 1/8  1/2  0    0     1/5 ]
[ 1/8  0    1/2  1/10  1/10]
[ 1/4  0    1/6  1/2   1/5 ]
[ 0    1/3  1/6  1/5   1/2 ]

(b) If M is the stochastic matrix in part (a), then
M^2 =
[ 41/120  1/6     1/5     13/60    9/100 ]
[ 1/8     27/80   13/240  13/200   1/5   ]
[ 3/20    13/240  73/240  29/200   3/25  ]
[ 13/48   13/120  29/120  107/300  13/60 ]
[ 9/80    1/3     1/5     13/60    28/75 ]
which has all entries nonzero. Thus, M is regular.

(c) 29/120, since the probability vector after two time intervals is [1/5, 13/240, 73/240, 29/120, 1/5]

(d) [1/5, 3/20, 3/20, 1/4, 1/4]; over time, the rat frequents rooms B and C the least and rooms D and E the most.
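The steady-state claim in parts (b) and (d) can be checked numerically. The sketch below uses the matrix from part (a): exact fractions for the fixed-point check, floats for the iteration; the iteration count and tolerance are illustrative.

```python
# Check that [1/5, 3/20, 3/20, 1/4, 1/4] is an exact fixed point of M, and
# that repeated steps p -> M p approach it from a corner starting vector.
from fractions import Fraction as F

M_exact = [
    [F(1, 2), F(1, 6), F(1, 6), F(1, 5),  F(0)],
    [F(1, 8), F(1, 2), F(0),    F(0),     F(1, 5)],
    [F(1, 8), F(0),    F(1, 2), F(1, 10), F(1, 10)],
    [F(1, 4), F(0),    F(1, 6), F(1, 2),  F(1, 5)],
    [F(0),    F(1, 3), F(1, 6), F(1, 5),  F(1, 2)],
]
steady = [F(1, 5), F(3, 20), F(3, 20), F(1, 4), F(1, 4)]

Mp = [sum(M_exact[i][j] * steady[j] for j in range(5)) for i in range(5)]
print(Mp == steady)  # True: an exact fixed point

M = [[float(x) for x in row] for row in M_exact]
p = [1.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(200):
    p = [sum(M[i][j] * p[j] for j in range(5)) for i in range(5)]
err = max(abs(p[i] - float(steady[i])) for i in range(5))
print(err < 1e-6)  # True: the iterates converge, since M is regular
```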


(7) The limit vector for any initial input is [1, 0, 0]. However, M is not regular, since every positive power of M is upper triangular, and hence is zero below the main diagonal. The unique fixed point is [1, 0, 0].

(8) (a) Any steady-state vector is a solution of the system

( [1−a, b; a, 1−b] − [1, 0; 0, 1] ) [x1; x2] = [0; 0],

which simplifies to [−a, b; a, −b] [x1; x2] = [0; 0]. Using the fact that x1 + x2 = 1 yields a larger system whose augmented matrix is

[ −a  b | 0 ]
[ a  −b | 0 ]
[ 1   1 | 1 ]

This reduces to

[ 1  0 | b/(a+b) ]
[ 0  1 | a/(a+b) ]
[ 0  0 | 0       ]

assuming a and b are not both zero. (Begin the row reduction with the row operation ⟨1⟩ ↔ ⟨3⟩ in case a = 0.) Hence, x1 = b/(a+b), x2 = a/(a+b) is the unique steady-state vector.
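The closed form just derived can be checked for a specific choice of a and b (the values below are illustrative):

```python
# For M = [[1-a, b], [a, 1-b]], verify that [b/(a+b), a/(a+b)] satisfies
# M x = x, i.e. it is the steady-state vector from (8)(a).

a, b = 0.3, 0.2
M = [[1 - a, b], [a, 1 - b]]
x = [b / (a + b), a / (a + b)]   # claimed steady-state vector

Mx = [M[0][0] * x[0] + M[0][1] * x[1],
      M[1][0] * x[0] + M[1][1] * x[1]]
print(Mx)  # equals x, up to rounding
```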

(9) It is enough to show that the product of any two stochastic matrices is stochastic. Let A and B be n × n stochastic matrices. Then the entries of AB are clearly nonnegative, since the entries of both A and B are nonnegative. Furthermore, the sum of the entries in the jth column of AB = Σ_{i=1}^{n} (AB)_ij = Σ_{i=1}^{n} (ith row of A)·(jth column of B) = Σ_{i=1}^{n} (a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj) = (Σ_{i=1}^{n} a_i1) b_1j + (Σ_{i=1}^{n} a_i2) b_2j + ... + (Σ_{i=1}^{n} a_in) b_nj = (1)b_1j + (1)b_2j + ... + (1)b_nj (since A is stochastic, and each of the summations is a sum of the entries of a column of A) = 1 (since B is stochastic). Hence AB is stochastic.

(10) Suppose M is a k × k matrix.
Base Step (n = 1): ith entry in Mp = (ith row of M)·p = Σ_{j=1}^{k} m_ij p_j = Σ_{j=1}^{k} (probability of moving from state Sj to Si)(probability of being in state Sj) = probability of being in state Si after 1 step of the process = ith entry of p1.
Inductive Step: Assume p_k = M^k p is the probability vector after k steps of the process. Then, after an additional step of the process, the probability vector p_{k+1} = M p_k = M(M^k p) = M^{k+1} p.

(11) (a) F (b) T (c) T (d) T (e) F

Section 8.5

(1) (a) −24 −46 −15 −30 10 16 39 62 26 42 51 84 24 37 −11 −23

(b) 97 177 146 96 169 143 113 201 171 93 168 133 175 311 254 175 312 256 238 (430) (357)

(where "27" was used twice to pad the last vector)

(2) (a) HOMEWORK IS GOOD FOR THE SOUL

(b) WHO IS BURIED IN GRANT'S TOMB?

(c) DOCTOR, I HAVE A CODE

(d) TO MAKE A SLOW HORSE FAST DON'T FEED IT

(3) (a) T (b) T (c) F

Section 8.6

(1) (a) (III): ⟨2⟩ ↔ ⟨3⟩; inverse operation is (III): ⟨2⟩ ↔ ⟨3⟩. The matrix is its own inverse.

(b) (I): ⟨2⟩ ← −2⟨2⟩; inverse operation is (I): ⟨2⟩ ← −(1/2)⟨2⟩. The inverse matrix is
[1 0    0]
[0 −1/2 0]
[0 0    1]

(c) (II): ⟨3⟩ ← −4⟨1⟩ + ⟨3⟩; inverse operation is (II): ⟨3⟩ ← 4⟨1⟩ + ⟨3⟩. The inverse matrix is
[1 0 0]
[0 1 0]
[4 0 1]

(d) (I): ⟨2⟩ ← 6⟨2⟩; inverse operation is (I): ⟨2⟩ ← (1/6)⟨2⟩. The inverse matrix is
[1 0   0 0]
[0 1/6 0 0]
[0 0   1 0]
[0 0   0 1]

(e) (II): ⟨3⟩ ← −2⟨4⟩ + ⟨3⟩; inverse operation is (II): ⟨3⟩ ← 2⟨4⟩ + ⟨3⟩. The inverse matrix is
[1 0 0 0]
[0 1 0 0]
[0 0 1 2]
[0 0 0 1]

(f) (III): ⟨1⟩ ↔ ⟨4⟩; inverse operation is (III): ⟨1⟩ ↔ ⟨4⟩. The matrix is its own inverse.
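The operation/inverse pairings in (1) can be verified by multiplying the corresponding elementary matrices; a sketch for part (c):

```python
# The elementary matrix for (II): <3> <- -4<1> + <3> times the matrix for
# the inverse operation (II): <3> <- 4<1> + <3> gives the identity.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][q] * B[q][j] for q in range(n)) for j in range(n)]
            for i in range(n)]

E    = [[1, 0, 0], [0, 1, 0], [-4, 0, 1]]
Einv = [[1, 0, 0], [0, 1, 0], [ 4, 0, 1]]
I3   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

print(matmul(E, Einv) == I3, matmul(Einv, E) == I3)  # True True
```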

(2) (a) [4, 9; 3, 7] = [4, 0; 0, 1] [1, 0; 3, 1] [1, 0; 0, 1/4] [1, 9/4; 0, 1]

(b) Not possible, since the matrix is singular.


(c) The product of the following matrices in the order listed:
[0, 1, 0, 0; 1, 0, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1],
[−3, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1],
[1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; 3, 0, 0, 1],
[1, 0, 0, 0; 0, 0, 1, 0; 0, 1, 0, 0; 0, 0, 0, 1],
[1, 0, 0, 0; 0, 6, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1],
[1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 5, 0; 0, 0, 0, 1],
[1, 0, 0, 0; 0, 1, −5/3, 0; 0, 0, 1, 0; 0, 0, 0, 1],
[1, 0, 0, 2/3; 0, 1, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1],
[1, 0, 0, 0; 0, 1, 0, −1/6; 0, 0, 1, 0; 0, 0, 0, 1]

(3) If A and B are row equivalent, then B = E_k ... E_2 E_1 A for some elementary matrices E_1, ..., E_k, by Theorem 8.8. Hence B = PA, with P = E_k ... E_1. By Corollary 8.9, P is nonsingular.
Conversely, if B = PA for a nonsingular matrix P, then by Corollary 8.9, P = E_k ... E_1 for some elementary matrices E_1, ..., E_k. Hence A and B are row equivalent by Theorem 8.8.

(4) Follow the hint in the textbook. Because U is upper triangular with all nonzero diagonal entries, when row reducing U to I_n, no type (III) row operations are needed, and all type (II) row operations used will be of the form ⟨i⟩ ← c⟨j⟩ + ⟨i⟩, where i < j (the pivots are all below the targets). Thus, none of the type (II) row operations change a diagonal entry, since u_ji = 0 when i < j. Hence, only type (I) row operations make changes on the main diagonal, and no main diagonal entry will be made zero. Also, the elementary matrices for the type (II) row operations mentioned are upper triangular: nonzero on the main diagonal, and perhaps in the (i, j) entry, with i < j. Since the elementary matrices for type (I) row operations are also upper triangular, we see that U^{-1}, which is the product of all these elementary matrices, is the product of upper triangular matrices. Therefore, it is also upper triangular.

(5) For type (I) and (III) operations, E = E^T. If E corresponds to a type (II) operation ⟨i⟩ ← c⟨j⟩ + ⟨i⟩, then E^T corresponds to ⟨j⟩ ← c⟨i⟩ + ⟨j⟩, so E^T is also an elementary matrix.

(6) Note that (AF^T)^T = FA^T. Now, multiplying by F on the left performs a row operation on A^T. Taking the transpose again shows that this is a corresponding column operation on A. The first equation shows that this is the same as multiplying by F^T on the right side of A.


(7) A is nonsingular iff rank(A) = n (by Theorem 2.14) iff A is row equivalent to I_n (by the definition of rank) iff A = E_k ... E_1 I_n = E_k ... E_1 for some elementary matrices E_1, ..., E_k (by Theorem 8.8).

(8) AX = O has a nontrivial solution iff rank(A) < n (by Theorem 2.5) iff A is singular (by Theorem 2.14) iff A cannot be expressed as a product of elementary matrices (by Corollary 8.9).

(9) (a) EA and A are row equivalent (by Theorem 8.8), and hence have the same reduced row echelon form, and the same rank.

(b) The reduced row echelon form of A cannot have more nonzero rows than A.

(c) If A has k rows of zeroes, then rank(A) = m − k. But AB has at least k rows of zeroes, so rank(AB) ≤ m − k.

(d) Let A = E_k ... E_1 D, where D is in reduced row echelon form. Then rank(AB) = rank(E_k ... E_1 DB) = rank(DB) (by repeated use of part (a)) ≤ rank(D) (by part (c)) = rank(A) (by definition of rank).

(e) Exercise 18 in Section 2.3 proves the same results as those in parts (a) through (d), except that it is phrased in terms of the underlying row operations rather than in terms of elementary matrices.

(10) (a) T (b) F (c) F (d) T (e) T

Section 8.7

(1) (a) θ = (1/2) arctan(−√3/3) = −π/12; P =
[ (√6+√2)/4   (√6−√2)/4 ]
[ (√2−√6)/4   (√6+√2)/4 ]
equation in uv-coordinates: u^2 − v^2 = 2, or u^2/2 − v^2/2 = 1; center in uv-coordinates: (0, 0); center in xy-coordinates: (0, 0); see Figures 19 and 20.


Figure 19: Rotated Hyperbola

Figure 20: Original Hyperbola

(b) θ = π/4; P =
[ √2/2   −√2/2 ]
[ √2/2   √2/2  ]
equation in uv-coordinates: 4u^2 + 9v^2 − 8u = 32, or (u−1)^2/9 + v^2/4 = 1; center in uv-coordinates: (1, 0); center in xy-coordinates: (√2/2, √2/2); see Figures 21 and 22.


Figure 21: Rotated Ellipse

Figure 22: Original Ellipse


(c) θ = (1/2) arctan(−√3) = −π/6; P =
[ √3/2   1/2  ]
[ −1/2   √3/2 ]
equation in uv-coordinates: v = 2u^2 − 12u + 13, or (v + 5) = 2(u − 3)^2; vertex in uv-coordinates: (3, −5); vertex in xy-coordinates: (0.0981, −5.830); see Figures 23 and 24.

(d) θ ≈ 0.6435 radians (about 36°52′); P =
[ 4/5   −3/5 ]
[ 3/5   4/5  ]
equation in uv-coordinates: 4u^2 − 16u + 9v^2 + 18v = 11, or (u−2)^2/9 + (v+1)^2/4 = 1; center in uv-coordinates: (2, −1); center in xy-coordinates: (11/5, 2/5); see Figures 25 and 26.

(e) θ ≈ −0.6435 radians (about −36°52′); P =
[ 4/5    3/5 ]
[ −3/5   4/5 ]
equation in uv-coordinates: 12v = u^2 + 12u, or (v + 3) = (1/12)(u + 6)^2; vertex in uv-coordinates: (−6, −3); vertex in xy-coordinates: (−33/5, 6/5); see Figures 27 and 28.

(f) (All answers are rounded to four significant digits.) θ ≈ 0.4442 radians (about 25°27′); P =
[ 0.9029   −0.4298 ]
[ 0.4298   0.9029  ]
equation in uv-coordinates: u^2/(1.770)^2 − v^2/(2.050)^2 = 1; center in uv-coordinates: (0, 0); center in xy-coordinates: (0, 0); see Figures 29 and 30.


Figure 23: Rotated Parabola

Figure 24: Original Parabola


Figure 25: Rotated Ellipse

Figure 26: Original Ellipse


Figure 27: Rotated Parabola

Figure 28: Original Parabola


Figure 29: Rotated Hyperbola

Figure 30: Original Hyperbola
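The uv-to-xy conversions in Exercise (1) can be spot-checked numerically: the rotation matrix P carries uv-coordinates to xy-coordinates. A short sketch using the matrices from parts (b), (d), and (e):

```python
# Apply each rotation matrix P to a uv-point to recover the reported
# xy-point: the centers in (b) and (d), and the vertex in (e).
import math

def apply(P, p):
    return (P[0][0] * p[0] + P[0][1] * p[1],
            P[1][0] * p[0] + P[1][1] * p[1])

s2 = math.sqrt(2) / 2
Pb = [[s2, -s2], [s2, s2]]        # part (b): rotation by pi/4
Pd = [[4/5, -3/5], [3/5, 4/5]]    # part (d)
Pe = [[4/5, 3/5], [-3/5, 4/5]]    # part (e)

center_b = apply(Pb, (1, 0))      # (sqrt(2)/2, sqrt(2)/2)
center_d = apply(Pd, (2, -1))     # (11/5, 2/5)
vertex_e = apply(Pe, (-6, -3))    # (-33/5, 6/5)
print(center_b, center_d, vertex_e)
```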

(2) (a) T (b) F (c) T (d) F


Section 8.8

(1) (a) (9, 1); (9, 5); (12, 1); (12, 5); (14, 3)

(b) (3, 5); (1, 9); (5, 7); (3, 10); (6, 9)

(c) (−2, 5); (0, 9); (−5, 7); (−2, 10); (−5, 10)

(d) (20, 6); (20, 14); (32, 6); (32, 14); (40, 10)

(2) See the following pages for figures.

(a) (3, 11); (5, 9); (7, 11); (11, 7); (15, 11); see Figure 31

(b) (−8, 2); (−7, 5); (−10, 6); (−9, 11); (−14, 13); see Figure 32

(c) (8, 1); (8, 4); (11, 4); (10, 10); (16, 11); see Figure 33

(d) (3, 18); (4, 12); (5, 18); (7, 6); (9, 18); see Figure 34

Figure 31


Figure 32

Figure 33


Figure 34

(3) (a) (3, −4); (3, −10); (7, −6); (9, −9); (10, −3)

(b) (4, 3); (10, 3); (6, 7); (9, 9); (3, 10)

(c) (−2, 6); (0, 8); (−8, 17); (−10, 22); (−16, 25)

(d) (6, 4); (11, 7); (11, 2); (14, 1); (10, −3)

(4) (a) (14, 9); (10, 6); (11, 11); (8, 9); (6, 8); (11, 14)

(b) (−2, −3); (−6, −3); (−3, −6); (−6, −6); (−8, −6); (−2, −9)

(c) (2, 4); (2, 6); (8, 5); (8, 6) (twice), (14, 4)

(5) (a) (4, 7); (6, 9); (10, 8); (9, 2); (11, 4)

(b) (0, 5); (1, 7); (0, 11); (−5, 8); (−4, 10)

(c) (7, 3); (7, 12); (9, 18); (10, 3); (10, 12)

(6) (a) (2, 20); (3, 17); (5, 14); (6, 19); (6, 16); (9, 14); see Figure 35

(b) (17, 17); (17, 14); (18, 10); (20, 15); (19, 12); (22, 10); see Figure 36

(c) (1, 18); (−3, 13); (−6, 8); (−2, 17); (−5, 12); (−6, 10); see Figure 37

(d) (−19, 6); (−16, 7); (−15, 9); (−27, 7); (−21, 8); (−27, 10); see Figure 38


Figure 35

Figure 36


Figure 37

Figure 38

(7) (a) Show that the given matrix is equal to the product (matrices are written here as lists of rows)
        [[1, 0, r], [0, 1, s], [0, 0, 1]] · [[cos θ, -sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]] · [[1, 0, -r], [0, 1, -s], [0, 0, 1]].

    (b) Show that the given matrix is equal to the product
        [[1, 0, 0], [0, 1, b], [0, 0, 1]] · [[(1 - m^2)/(1 + m^2), 2m/(1 + m^2), 0], [2m/(1 + m^2), (m^2 - 1)/(1 + m^2), 0], [0, 0, 1]] · [[1, 0, 0], [0, 1, -b], [0, 0, 1]].

    (c) Show that the given matrix is equal to the product
        [[1, 0, r], [0, 1, s], [0, 0, 1]] · [[c, 0, 0], [0, d, 0], [0, 0, 1]] · [[1, 0, -r], [0, 1, -s], [0, 0, 1]].

(8) Show that the given matrix is equal to the product
    [[1, 0, k], [0, 1, 0], [0, 0, 1]] · [[-1, 0, 0], [0, 1, 0], [0, 0, 1]] · [[1, 0, -k], [0, 1, 0], [0, 0, 1]].

(9) (a) (4, 7), (6, 9), (10, 8), (9, 2), (11, 4)
    (b) (0, 5), (1, 7), (0, 11), (-5, 8), (-4, 10)
    (c) (7, 3), (7, 12), (9, 18), (10, 3), (10, 12)
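Exercises 7 and 8 factor geometric operations into products of homogeneous-coordinate matrices. As a quick numerical sanity check (a minimal sketch assuming NumPy; the sample values of r, s, and θ are arbitrary, not taken from the textbook), the translate–rotate–translate-back product of 7(a) fixes the point (r, s), exactly as a rotation about that point should:

```python
import numpy as np

def translate(r, s):
    # homogeneous-coordinate translation by the vector [r, s]
    return np.array([[1.0, 0.0, r], [0.0, 1.0, s], [0.0, 0.0, 1.0]])

def rotate(theta):
    # counterclockwise rotation about the origin
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

r, s, theta = 3.0, -2.0, np.pi / 3
M = translate(r, s) @ rotate(theta) @ translate(-r, -s)

# The center (r, s) is a fixed point of the composite rotation.
assert np.allclose(M @ np.array([r, s, 1.0]), [r, s, 1.0])
```

The same pattern verifies 7(c) (scaling about a point) and Exercise 8 (reflection about x = k) by swapping the middle factor.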

(10) (a) Multiplying the two matrices yields I3.

     (b) The first matrix represents a translation along the vector [a, b], while the second is the inverse translation along the vector [-a, -b].

     (c) See the answer for Exercise 7(a), substitute a for r and b for s, and then use part (a) of this exercise.

(11) (a) Show that the matrices in parts (a) and (c) of Exercise 7 commute with each other when c = d. Both products equal
         [[c cos θ, -c sin θ, r(1 - c cos θ) + s c sin θ], [c sin θ, c cos θ, s(1 - c cos θ) - r c sin θ], [0, 0, 1]].

     (b) Consider the reflection about the y-axis, whose matrix is given in Section 8.8 of the textbook as A = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]], and a counterclockwise rotation of 90° about the origin, whose matrix is B = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]. Then AB = [[0, 1, 0], [1, 0, 0], [0, 0, 1]], but BA = [[0, -1, 0], [-1, 0, 0], [0, 0, 1]], and so A and B do not commute. In particular, starting from the point (1, 0), performing the rotation and then the reflection yields (0, 1). However, performing the reflection followed by the rotation produces (0, -1).

     (c) Consider a reflection about y = -x and a scaling by a factor of c in the x-direction and d in the y-direction, with c ≠ d. Now, (reflection)∘(scaling)([1, 0]) = (reflection)([c, 0]) = [0, -c]. But (scaling)∘(reflection)([1, 0]) = (scaling)([0, -1]) = [0, -d]. Hence (reflection)∘(scaling) ≠ (scaling)∘(reflection).
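The non-commutativity in Exercise 11(b) is easy to confirm numerically (a sketch assuming NumPy; A and B are the two homogeneous-coordinate matrices quoted above):

```python
import numpy as np

# Reflection about the y-axis and 90-degree counterclockwise rotation,
# both in homogeneous coordinates (the matrices of Exercise 11(b)).
A = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])
B = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])

assert not np.array_equal(A @ B, B @ A)   # A and B do not commute

p = np.array([1, 0, 1])                   # the point (1, 0)
# rotation then reflection (rightmost factor acts first)
assert np.array_equal((A @ B) @ p, [0, 1, 1])
# reflection then rotation
assert np.array_equal((B @ A) @ p, [0, -1, 1])
```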


(12) (a) The matrices are [[cos θ, -sin θ], [sin θ, cos θ]] and [[cos θ, -sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]]. Verify that they are orthogonal by direct computation of AA^T.

     (b) If A = [[0, -1, 18], [1, 0, -6], [0, 0, 1]], then AA^T = [[325, -108, 18], [-108, 37, -6], [18, -6, 1]] ≠ I3.

     (c) Let m be the slope of a nonvertical line. Then the matrices are, respectively,
         [[(1 - m^2)/(1 + m^2), 2m/(1 + m^2)], [2m/(1 + m^2), (m^2 - 1)/(1 + m^2)]] and
         [[(1 - m^2)/(1 + m^2), 2m/(1 + m^2), 0], [2m/(1 + m^2), (m^2 - 1)/(1 + m^2), 0], [0, 0, 1]].
         For a vertical line, the matrices are [[-1, 0], [0, 1]] and [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]. All these matrices are obviously symmetric, and since a reflection is its own inverse, all these matrices are their own inverses. Hence, if A is any one of these matrices, then AA^T = AA = I, so A is orthogonal.
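The symmetry-plus-involution argument of Exercise 12(c) can be spot-checked for several slopes (a sketch assuming NumPy; the helper `reflection` and the sample slopes are illustrative):

```python
import numpy as np

def reflection(m):
    # 2x2 reflection about the line y = m*x (nonvertical line through the origin)
    return np.array([[1 - m**2, 2 * m], [2 * m, m**2 - 1]]) / (1 + m**2)

for m in (0.0, 1.0, -2.5, 0.75):
    A = reflection(m)
    assert np.allclose(A, A.T)               # symmetric
    assert np.allclose(A @ A, np.eye(2))     # its own inverse
    assert np.allclose(A @ A.T, np.eye(2))   # hence orthogonal
```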

(13) (a) F (b) F (c) F (d) T (e) T (f) F

Section 8.9

(1) (a) b1 e^t [7, 3] + b2 e^{-t} [2, 1]

    (b) b1 e^{3t} [1, 1] + b2 e^{-2t} [3, 4]

    (c) b1 [0, -1, 1] + b2 e^t [1, -1, 1] + b3 e^{3t} [2, 0, 1]

    (d) b1 e^t [-1, 1, 0] + b2 e^t [5, 0, 2] + b3 e^{4t} [1, 1, 1] (There are other possible answers. For example, the first two vectors in the sum could be any basis for the two-dimensional eigenspace corresponding to the eigenvalue 1.)

    (e) b1 e^t [-1, -1, 1, 0] + b2 e^t [1, 3, 0, 1] + b3 [2, 3, 0, 1] + b4 e^{-3t} [0, 1, 1, 1] (There are other possible answers. For example, the first two vectors in the sum could be any basis for the two-dimensional eigenspace corresponding to the eigenvalue 1.)


(2) (a) y = b1 e^{2t} + b2 e^{-3t}   (b) y = b1 e^t + b2 e^{-t} + b3 e^{5t}

    (c) y = b1 e^{2t} + b2 e^{-2t} + b3 e^{√2 t} + b4 e^{-√2 t}

(3) Using the fact that Avi = λi vi, it can easily be seen that, with F(t) = b1 e^{λ1 t} v1 + ··· + bk e^{λk t} vk, both sides of F'(t) = AF(t) yield λ1 b1 e^{λ1 t} v1 + ··· + λk bk e^{λk t} vk.
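The verification in Exercise 3 can be replayed numerically for a concrete diagonalizable matrix (a sketch assuming NumPy; the matrix A = [[1, 2], [2, 1]], with eigenpairs (3, [1, 1]) and (-1, [1, -1]), and the coefficients b1, b2 are illustrative, not exercise data):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0]])    # eigenvalues 3 and -1
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
b1, b2 = 2.0, -0.5                         # arbitrary coefficients

def F(t):
    # general solution F(t) = b1 e^{3t} v1 + b2 e^{-t} v2
    return b1 * np.exp(3 * t) * v1 + b2 * np.exp(-t) * v2

# check F'(t) = A F(t) at a sample time, using the exact derivative
t = 0.7
Fprime = 3 * b1 * np.exp(3 * t) * v1 - b2 * np.exp(-t) * v2
assert np.allclose(Fprime, A @ F(t))
```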

(4) (a) Suppose (v1, …, vn) is an ordered basis for R^n consisting of eigenvectors of A corresponding to the eigenvalues λ1, …, λn. Then, by Theorem 8.11, all solutions to F'(t) = AF(t) are of the form F(t) = b1 e^{λ1 t} v1 + ··· + bn e^{λn t} vn. Substituting t = 0 yields v = F(0) = b1 v1 + ··· + bn vn. But since (v1, …, vn) forms a basis for R^n, b1, …, bn are uniquely determined (by Theorem 4.9). Thus, F(t) is uniquely determined.

    (b) F(t) = 2e^{5t} [1, 0, 1] - e^t [-1, 2, 0] - 2e^{-t} [1, 1, 1].

(5) (a) The equations f1(t) = y, f2(t) = y', …, fn(t) = y^{(n-1)} translate into these (n - 1) equations: f1'(t) = f2(t), f2'(t) = f3(t), …, f_{n-1}'(t) = fn(t). Also, y^{(n)} + a_{n-1} y^{(n-1)} + ··· + a1 y' + a0 y = 0 translates into fn'(t) = -a0 f1(t) - a1 f2(t) - ··· - a_{n-1} fn(t). These n equations taken together are easily seen to be represented by F'(t) = AF(t), where A and F are as given.

    (b) Base Step (n = 1): A = [-a0], xI1 - A = [x + a0], and so pA(x) = x + a0.
        Inductive Step: Assume true for k. Prove true for k + 1. For the case k + 1,
        A = [[0, 1, 0, 0, …, 0], [0, 0, 1, 0, …, 0], …, [0, 0, 0, 0, …, 1], [-a0, -a1, -a2, -a3, …, -ak]].
        Then
        xI_{k+1} - A = [[x, -1, 0, 0, …, 0, 0], [0, x, -1, 0, …, 0, 0], …, [0, 0, 0, 0, …, x, -1], [a0, a1, a2, a3, …, a_{k-1}, x + ak]].


        Using a cofactor expansion on the first column yields
        |xI_{k+1} - A| = x · det([[x, -1, 0, …, 0, 0], …, [0, 0, 0, …, x, -1], [a1, a2, a3, …, a_{k-1}, x + ak]])
                         + (-1)^{(k+1)+1} a0 · det([[-1, 0, …, 0, 0], [x, -1, …, 0, 0], …, [0, 0, …, x, -1]]).
        The first determinant equals |xI_k - B|, where
        B = [[0, 1, 0, …, 0, 0], [0, 0, 1, …, 0, 0], …, [0, 0, 0, …, 0, 1], [-a1, -a2, -a3, …, -a_{k-1}, -ak]].
        Now, using the inductive hypothesis on the first determinant, and Theorems 3.2 and 3.9 on the second determinant (since its corresponding matrix is lower triangular), yields pA(x) = x(x^k + ak x^{k-1} + ··· + a2 x + a1) + (-1)^{k+2} a0 (-1)^k = x^{k+1} + ak x^k + ··· + a1 x + a0.

(6) (a) [b2, b3, …, bn, -a0 b1 - a1 b2 - ··· - a_{n-1} bn]

    (b) By part (b) of Exercise 5, 0 = pA(λ) = λ^n + a_{n-1} λ^{n-1} + ··· + a1 λ + a0. Thus, λ^n = -a0 - a1 λ - ··· - a_{n-1} λ^{n-1}. Letting x = [1, λ, λ^2, …, λ^{n-1}] and substituting x for [b1, …, bn] in part (a) gives Ax = [λ, λ^2, …, λ^{n-1}, -a0 - a1 λ - ··· - a_{n-1} λ^{n-1}] = [λ, λ^2, …, λ^{n-1}, λ^n] = λx.

    (c) Let v = [c, v2, v3, …, vn]. Then λv = [cλ, v2 λ, …, vn λ]. By part (a), Av = [v2, v3, …, vn, -a0 c - ··· - a_{n-1} vn]. Equating the first n - 1 coordinates of Av and λv yields v2 = cλ, v3 = v2 λ, …, vn = v_{n-1} λ. Further recursive substitution gives v2 = cλ, v3 = cλ^2, …, vn = cλ^{n-1}. Hence v = c[1, λ, …, λ^{n-1}].

    (d) By parts (b) and (c), {[1, λ, …, λ^{n-1}]} is a basis for E_λ.
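Both claims of Exercises 5 and 6 — that the companion matrix has characteristic polynomial p(x), and that [1, λ, …, λ^{n-1}] is an eigenvector for each root λ — can be spot-checked numerically (a sketch assuming NumPy; the cubic p(x) = (x - 1)(x - 2)(x - 3) is an illustrative choice, not exercise data):

```python
import numpy as np

# Companion matrix of p(x) = x^3 + a2 x^2 + a1 x + a0, in the form of Exercise 5
a0, a1, a2 = -6.0, 11.0, -6.0   # p(x) = (x - 1)(x - 2)(x - 3)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])

# Its eigenvalues are exactly the roots of p: 1, 2, 3.
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])

# For each root λ, the vector [1, λ, λ^2] is an eigenvector (Exercise 6(b))
for lam in (1.0, 2.0, 3.0):
    x = np.array([1.0, lam, lam**2])
    assert np.allclose(A @ x, lam * x)
```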


(7) (a) T (b) T (c) T (d) F

Section 8.10

(1) (a) Unique least-squares solution: v = (23/30, 11/10); ||Av - b|| = √6/6 ≈ 0.408; ||Az - b|| = 1

    (b) Unique least-squares solution: v = (229/75, 1/3); ||Av - b|| = 59√3/15 ≈ 6.813; ||Az - b|| = 7

    (c) Infinite number of least-squares solutions, all of the form v = ((7c + 17)/3, (-13c - 23)/3, c); two particular least-squares solutions are (17/3, -23/3, 0) and (8, -12, 1); with v as either of these vectors, ||Av - b|| = √6/3 ≈ 0.816; ||Az - b|| = 3

    (d) Infinite number of least-squares solutions, all of the form v = ((1/2)c + (5/4)d - 1, -(3/2)c - (11/4)d + 19/3, c, d); two particular least-squares solutions are (-1/2, 29/6, 1, 0) and (1/4, 43/12, 0, 1); with v as either of these vectors, ||Av - b|| = √6/3 ≈ 0.816; ||Az - b|| = √5 ≈ 2.236
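The link between least-squares solutions and the normal equations (A^T A)v = A^T b used throughout this section can be checked directly (a sketch assuming NumPy; A and b here are illustrative, not the textbook's exercise data):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([1.0, 2.0, 2.0, 5.0])

# lstsq returns a minimizer of ||Ax - b||
v, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A.T @ A @ v, A.T @ b)   # the normal equations hold

# no other vector z can beat the least-squares residual
z = np.array([0.0, 1.0])
assert np.linalg.norm(A @ v - b) <= np.linalg.norm(A @ z - b)
```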

(2) (a) Infinite number of least-squares solutions, all of the form v = (-(4/7)c + 19/42, (8/7)c - 5/21, c), with 5/24 ≤ c ≤ 19/24.

    (b) Infinite number of least-squares solutions, all of the form v = (-(6/5)c + 19/10, -(1/5)c + 19/35, c), with 0 ≤ c ≤ 19/12.

(3) (a) v ≈ [1.58, -0.58]; (λ0 I - C)v ≈ [0.025, -0.015] (Actual nearest eigenvalue is 2 + √3 ≈ 3.732)

    (b) v ≈ [0.46, -0.36, 0.90]; (λ0 I - C)v ≈ [0.03, -0.04, 0.07] (Actual nearest eigenvalue is √2 ≈ 1.414)

    (c) v ≈ [1.45, -0.05, -0.40]; (λ0 I - C)v ≈ [-0.09, -0.06, -0.15] (Actual nearest eigenvalue is the cube root of 12, ≈ 2.289)

(4) If AX = b is consistent, then b is in the subspace W of Theorem 8.12. Thus, proj_W b = b. Finally, the fact that statement (3) of Theorem 8.12 is equivalent to statement (1) of Theorem 8.12 shows that the actual solutions are the same as the least-squares solutions in this case.

(5) Let b = Av1. Then A^T A v1 = A^T b. Also, A^T A v2 = A^T A v1 = A^T b. Hence v1 and v2 both satisfy part (3) of Theorem 8.12. Therefore, by part (1) of Theorem 8.12, if W = {Ax | x ∈ R^n}, then Av1 = proj_W b = Av2.

(6) Let b = [b1, …, bn], let A be as in Theorem 8.2, and let W = {Ax | x ∈ R^n}. Since proj_W b ∈ W exists, there must be some v ∈ R^n such that Av = proj_W b. Hence, v satisfies part (1) of Theorem 8.12, so (A^T A)v = A^T b by part (3) of Theorem 8.12. This shows that the system (A^T A)X = A^T b is consistent, which proves part (2) of Theorem 8.2.

Next, let f(x) = d0 + d1 x + ··· + dt x^t and z = [d0, d1, …, dt]. A short computation shows that ||Az - b||^2 = Sf, the sum of the squares of the vertical distances illustrated in Figure 8.9, just before the definition of a least-squares polynomial in Section 8.3 of the textbook. Hence, minimizing ||Az - b|| over all possible (t + 1)-vectors z gives the coefficients of a degree t least-squares polynomial for the given points (a1, b1), …, (an, bn). However, parts (2) and (3) of Theorem 8.12 show that such a minimal solution is found by solving (A^T A)v = A^T b, thus proving part (1) of Theorem 8.2.

Finally, when A^T A is row equivalent to I_{t+1}, the uniqueness condition holds by Theorems 2.14 and 2.15, which proves part (3) of Theorem 8.2.

(7) (a) F (b) T (c) T (d) T (e) T

Section 8.11

(1) (a) C = [[8, 12], [0, -9]], A = [[8, 6], [6, -9]]

    (b) C = [[7, -17], [0, 11]], A = [[7, -17/2], [-17/2, 11]]

    (c) C = [[5, 4, -3], [0, -2, 5], [0, 0, 0]], A = [[5, 2, -3/2], [2, -2, 5/2], [-3/2, 5/2, 0]]

(2) (a) A = [[43, -24], [-24, 57]], P = (1/5)[[-3, 4], [4, 3]], D = [[75, 0], [0, 25]],
        B = ((1/5)[-3, 4], (1/5)[4, 3]), [x]_B = [-7, -4], Q(x) = 4075

    (b) A = [[-5, 16, 40], [16, 37, 16], [40, 16, 49]], P = (1/9)[[1, -8, 4], [-8, 1, 4], [4, 4, 7]],
        D = [[27, 0, 0], [0, -27, 0], [0, 0, 81]], B = ((1/9)[1, -8, 4], (1/9)[-8, 1, 4], (1/9)[4, 4, 7]),
        [x]_B = [3, -6, 3], Q(x) = 0

    (c) A = [[18, 48, -30], [48, -68, 18], [-30, 18, 1]], P = (1/7)[[2, -6, 3], [3, -2, -6], [6, 3, 2]],
        D = [[0, 0, 0], [0, 49, 0], [0, 0, -98]], B = ((1/7)[2, 3, 6], (1/7)[-6, -2, 3], (1/7)[3, -6, 2]),
        [x]_B = [5, 0, 6], Q(x) = -3528


    (d) A = [[1, 0, -12, 12], [0, 5, 60, 60], [-12, 60, 864, 576], [12, 60, 576, 864]],
        P = (1/17)[[1, 0, 12, -12], [0, 1, -12, -12], [-12, 12, 1, 0], [12, 12, 0, 1]],
        D = [[289, 0, 0, 0], [0, 1445, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
        B = ((1/17)[1, 0, -12, 12], (1/17)[0, 1, 12, 12], (1/17)[12, -12, 1, 0], (1/17)[-12, -12, 0, 1]),
        [x]_B = [1, -3, -3, -10], Q(x) = 13294
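Exercise 1's passage from an upper triangular C to the symmetric A = (C + C^T)/2 leaves the quadratic form Q(x) = x^T C x unchanged, which is easy to test (a sketch assuming NumPy; C is taken from the answer to 1(a), the test points are random):

```python
import numpy as np

# Upper triangular C and its symmetrization A give the same quadratic form,
# since x^T C^T x = x^T C x for every x.
C = np.array([[8.0, 12.0], [0.0, -9.0]])   # from Exercise 1(a)
A = (C + C.T) / 2
assert np.allclose(A, [[8.0, 6.0], [6.0, -9.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    assert np.isclose(x @ C @ x, x @ A @ x)
```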

(3) First, a_ii = ei^T A ei = ei^T B ei = b_ii. Also, for i ≠ j, let x = ei + ej. Then a_ii + a_ij + a_ji + a_jj = x^T A x = x^T B x = b_ii + b_ij + b_ji + b_jj. Using a_ii = b_ii and a_jj = b_jj, we get a_ij + a_ji = b_ij + b_ji. Hence, since A and B are symmetric, 2a_ij = 2b_ij, and so a_ij = b_ij.

(4) Yes; if Q(x) = Σ a_ij x_i x_j, 1 ≤ i ≤ j ≤ n, then x^T C1 x and C1 upper triangular imply that the (i, j) entry of C1 is zero if i > j and a_ij if i ≤ j. A similar argument describes C2. Thus, C1 = C2.

(5) (a) See the Quadratic Form Method in Section 8.11 of the textbook. Consider the result Q(x) = d11 y1^2 + ··· + dnn yn^2 in Step 3, where d11, …, dnn are the eigenvalues of A, and [x]_B = [y1, …, yn]. Clearly, if all of d11, …, dnn are positive and x ≠ 0, then Q(x) must be positive. (Obviously, Q(0) = 0.) Conversely, if Q(x) is positive for all x ≠ 0, then choose x so that [x]_B = ei to yield Q(x) = d_ii, thus proving that the eigenvalue d_ii is positive.

    (b) Replace "positive" with "nonnegative" throughout the solution to part (a).

(6) (a) T (b) F (c) F (d) T (e) T


Chapter 9

Section 9.1

(1) (a) Solution to first system: (602, 1500); solution to second system: (302, 750). The systems are ill-conditioned, because a very small change in the coefficient of y leads to a very large change in the solution.

    (b) Solution to first system: (400, 800, 2000); solution to second system: (336, 666 2/3, 1600). Systems are ill-conditioned.

(2) Answers to this problem may differ significantly from the following, depending on where the rounding is performed in the algorithm:

    (a) Without partial pivoting: (3210, 0.765); with partial pivoting: (3230, 0.767). Actual solution is (3214, 0.765).

    (b) Without partial pivoting: (3040, 10.7, -5.21); with partial pivoting: (3010, 10.1, -5.03). (Actual solution is (3000, 10, -5).)

    (c) Without partial pivoting: (2.26, 1.01, -2.11); with partial pivoting: (277, -327, 595). Actual solution is (267, -315, 573).

(3) Answers to this problem may differ significantly from the following, depending on where the rounding is performed in the algorithm:

    (a) Without partial pivoting: (3214, 0.7651); with partial pivoting: (3213, 0.7648). Actual solution is (3214, 0.765).

    (b) Without partial pivoting: (3001, 10, -5); with partial pivoting: (3000, 9.995, -4.999). (Actual solution is (3000, 10, -5).)

    (c) Without partial pivoting: (-2.380, 8.801, -16.30); with partial pivoting: (267.8, -315.9, 574.6). Actual solution is (267, -315, 573).

(4) (a)
                        x1        x2
    Initial Values     0.000     0.000
    After 1 Step       5.200    -6.000
    After 2 Steps      6.400    -8.229
    After 3 Steps      6.846    -8.743
    After 4 Steps      6.949    -8.934
    After 5 Steps      6.987    -8.978
    After 6 Steps      6.996    -8.994
    After 7 Steps      6.999    -8.998
    After 8 Steps      7.000    -9.000
    After 9 Steps      7.000    -9.000


    (b)
                        x1        x2        x3
    Initial Values     0.000     0.000     0.000
    After 1 Step      -0.778    -4.375     2.000
    After 2 Steps     -1.042    -4.819     2.866
    After 3 Steps     -0.995    -4.994     2.971
    After 4 Steps     -1.003    -4.995     2.998
    After 5 Steps     -1.000    -5.000     2.999
    After 6 Steps     -1.000    -5.000     3.000
    After 7 Steps     -1.000    -5.000     3.000

    (c)
                        x1        x2        x3
    Initial Values     0.000     0.000     0.000
    After 1 Step      -8.857     4.500    -4.333
    After 2 Steps    -10.738     3.746    -8.036
    After 3 Steps    -11.688     4.050    -8.537
    After 4 Steps    -11.875     3.975    -8.904
    After 5 Steps    -11.969     4.005    -8.954
    After 6 Steps    -11.988     3.998    -8.991
    After 7 Steps    -11.997     4.001    -8.996
    After 8 Steps    -11.999     4.000    -8.999
    After 9 Steps    -12.000     4.000    -9.000
    After 10 Steps   -12.000     4.000    -9.000

    (d)
                        x1        x2        x3        x4
    Initial Values     0.000     0.000     0.000     0.000
    After 1 Step       0.900    -1.667     3.000    -2.077
    After 2 Steps      1.874    -0.972     3.792    -2.044
    After 3 Steps      1.960    -0.999     3.966    -2.004
    After 4 Steps      1.993    -0.998     3.989    -1.999
    After 5 Steps      1.997    -1.001     3.998    -2.000
    After 6 Steps      2.000    -1.000     3.999    -2.000
    After 7 Steps      2.000    -1.000     4.000    -2.000
    After 8 Steps      2.000    -1.000     4.000    -2.000

(5) (a)
                        x1        x2
    Initial Values     0.000     0.000
    After 1 Step       5.200    -8.229
    After 2 Steps      6.846    -8.934
    After 3 Steps      6.987    -8.994
    After 4 Steps      6.999    -9.000
    After 5 Steps      7.000    -9.000
    After 6 Steps      7.000    -9.000


    (b)
                        x1        x2        x3
    Initial Values     0.000     0.000     0.000
    After 1 Step      -0.778    -4.569     2.901
    After 2 Steps     -0.963    -4.978     2.993
    After 3 Steps     -0.998    -4.999     3.000
    After 4 Steps     -1.000    -5.000     3.000
    After 5 Steps     -1.000    -5.000     3.000

    (c)
                        x1        x2        x3
    Initial Values     0.000     0.000     0.000
    After 1 Step      -8.857     3.024    -7.790
    After 2 Steps    -11.515     3.879    -8.818
    After 3 Steps    -11.931     3.981    -8.974
    After 4 Steps    -11.990     3.997    -8.996
    After 5 Steps    -11.998     4.000    -8.999
    After 6 Steps    -12.000     4.000    -9.000
    After 7 Steps    -12.000     4.000    -9.000

    (d)
                        x1        x2        x3        x4
    Initial Values     0.000     0.000     0.000     0.000
    After 1 Step       0.900    -1.767     3.510    -2.012
    After 2 Steps      1.980    -1.050     4.003    -2.002
    After 3 Steps      2.006    -1.000     4.002    -2.000
    After 4 Steps      2.000    -1.000     4.000    -2.000
    After 5 Steps      2.000    -1.000     4.000    -2.000

(6) Strictly diagonally dominant: (a), (c)
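The Jacobi and Gauss-Seidel sweeps behind the tables above can be sketched in code (a minimal sketch assuming NumPy; the strictly diagonally dominant system below is illustrative, not the textbook's exercise data):

```python
import numpy as np

def jacobi(A, b, steps=25):
    # one Jacobi sweep: x_new[i] = (b[i] - sum_{j != i} A[i,j] x[j]) / A[i,i],
    # using only values from the previous sweep
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(steps):
        x = (b - (A @ x - D * x)) / D
    return x

def gauss_seidel(A, b, steps=25):
    # like Jacobi, but each updated entry is used immediately
    x = np.zeros_like(b)
    for _ in range(steps):
        for i in range(len(b)):
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# strictly diagonally dominant, so both methods converge
A = np.array([[10.0, 1.0, 2.0], [2.0, 12.0, 3.0], [1.0, 1.0, 8.0]])
b = np.array([13.0, 17.0, 10.0])            # exact solution (1, 1, 1)
exact = np.linalg.solve(A, b)
assert np.allclose(jacobi(A, b), exact, atol=1e-6)
assert np.allclose(gauss_seidel(A, b), exact, atol=1e-6)
```

On a system that is not diagonally dominant (as in Exercise 8), the same sweeps can diverge.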

(7) (a) Put the third equation first, and move the other two down to get the following:

                        x1        x2        x3
    Initial Values     0.000     0.000     0.000
    After 1 Step       3.125    -0.481     1.461
    After 2 Steps      2.517    -0.500     1.499
    After 3 Steps      2.500    -0.500     1.500
    After 4 Steps      2.500    -0.500     1.500

    (b) Put the first equation last (and leave the other two alone) to get:

                        x1        x2        x3
    Initial Values     0.000     0.000     0.000
    After 1 Step       3.700    -6.856     4.965
    After 2 Steps      3.889    -7.980     5.045
    After 3 Steps      3.993    -8.009     5.004
    After 4 Steps      4.000    -8.001     5.000
    After 5 Steps      4.000    -8.000     5.000
    After 6 Steps      4.000    -8.000     5.000


    (c) Put the second equation first, the fourth equation second, the first equation third, and the third equation fourth to get the following:

                        x1        x2        x3        x4
    Initial Values     0.000     0.000     0.000     0.000
    After 1 Step       5.444    -5.379     9.226   -10.447
    After 2 Steps      8.826    -8.435    10.808   -11.698
    After 3 Steps      9.820    -8.920    10.961   -11.954
    After 4 Steps      9.973    -8.986    10.994   -11.993
    After 5 Steps      9.995    -8.998    10.999   -11.999
    After 6 Steps      9.999    -9.000    11.000   -12.000
    After 7 Steps     10.000    -9.000    11.000   -12.000
    After 8 Steps     10.000    -9.000    11.000   -12.000

(8) The Jacobi Method yields the following:

                         x1          x2          x3
    Initial Values       0.0         0.0         0.0
    After 1 Step        16.0       -13.0        12.0
    After 2 Steps      -37.0        59.0       -87.0
    After 3 Steps      224.0       -61.0       212.0
    After 4 Steps      -77.0       907.0     -1495.0
    After 5 Steps     3056.0      2515.0      -356.0
    After 6 Steps    12235.0     19035.0    -23895.0

    The Gauss-Seidel Method yields the following:

                         x1          x2          x3
    Initial Values       0.0         0.0         0.0
    After 1 Step        16.0        83.0      -183.0
    After 2 Steps      248.0      1841.0     -3565.0
    After 3 Steps     5656.0     41053.0    -80633.0
    After 4 Steps   124648.0    909141.0  -1781665.0

    The actual solution is (2, -3, 1).

(9) (a)
                        x1        x2        x3
    Initial Values     0.000     0.000     0.000
    After 1 Step       3.500     2.250     1.625
    After 2 Steps      1.563     2.406     2.516
    After 3 Steps      1.039     2.223     2.869
    After 4 Steps      0.954     2.089     2.979
    After 5 Steps      0.966     2.028     3.003
    After 6 Steps      0.985     2.006     3.005
    After 7 Steps      0.995     2.000     3.003
    After 8 Steps      0.999     1.999     3.001
    After 9 Steps      1.000     2.000     3.000
    After 10 Steps     1.000     2.000     3.000


    (b)
                        x1        x2        x3
    Initial Values     0.000     0.000     0.000
    After 1 Step       3.500     4.000     4.500
    After 2 Steps     -0.750     0.000     0.750
    After 3 Steps      3.125     4.000     4.875
    After 4 Steps     -0.938     0.000     0.938
    After 5 Steps      3.031     4.000     4.969
    After 6 Steps     -0.984     0.000     0.985
    After 7 Steps      3.008     4.000     4.992
    After 8 Steps     -0.996     0.000     0.996

(10) (a) T (b) F (c) F (d) T (e) F (f) F

Section 9.2

(1) (a) LDU = [[1, 0], [-3, 1]] · [[2, 0], [0, 5]] · [[1, -2], [0, 1]]

    (b) LDU = [[1, 0], [1/2, 1]] · [[3, 0], [0, -2]] · [[1, 1/3], [0, 1]]

    (c) LDU = [[1, 0, 0], [-2, 1, 0], [-2, 4, 1]] · [[-1, 0, 0], [0, 2, 0], [0, 0, 3]] · [[1, -4, 2], [0, 1, -4], [0, 0, 1]]

    (d) LDU = [[1, 0, 0], [5/2, 1, 0], [1/2, -3/2, 1]] · [[2, 0, 0], [0, -4, 0], [0, 0, 3]] · [[1, 3, -2], [0, 1, -5], [0, 0, 1]]

    (e) LDU = [[1, 0, 0, 0], [-4/3, 1, 0, 0], [-2, -3/2, 1, 0], [2/3, -2, 0, 1]] · [[-3, 0, 0, 0], [0, -2/3, 0, 0], [0, 0, 1/2, 0], [0, 0, 0, 1]] · [[1, -1/3, -1/3, 1/3], [0, 1, 5/2, -11/2], [0, 0, 1, 3], [0, 0, 0, 1]]

    (f) LDU = [[1, 0, 0, 0], [2, 1, 0, 0], [-3, -3, 1, 0], [-1, 2, -2, 1]] · [[-3, 0, 0, 0], [0, -2, 0, 0], [0, 0, 1, 0], [0, 0, 0, -3]] · [[1, 4, -2, -3], [0, 1, 0, -1], [0, 0, 1, 5], [0, 0, 0, 1]]

(2) (a) With the given values of L, D, and U, LDU = [[x, xz], [wx, wxz + y]]. The first row of LDU cannot equal [0, 1], since this would give x = 0, forcing xz = 0 ≠ 1.


    (b) The matrix [[0, 1], [1, 0]] cannot be reduced to row echelon form using only type (I) and lower type (II) row operations.

(3) (a) KU = [[-1, 0], [2, -3]] · [[1, -5], [0, 1]]; solution = {(4, -1)}.

    (b) KU = [[2, 0, 0], [2, -1, 0], [1, -3, 3]] · [[1, -2, 5], [0, 1, 3], [0, 0, 1]]; solution = {(5, -1, 2)}.

    (c) KU = [[-1, 0, 0], [4, 3, 0], [-2, 5, -2]] · [[1, -3, 2], [0, 1, -5], [0, 0, 1]]; solution = {(2, -3, 1)}.

    (d) KU = [[3, 0, 0, 0], [1, -2, 0, 0], [-5, -1, 4, 0], [1, 3, 0, -3]] · [[1, -5, 2, 2], [0, 1, -3, 0], [0, 0, 1, -2], [0, 0, 0, 1]]; solution = {(2, -2, 1, 3)}.
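Exercise 3 solves each system from its KU factorization by a forward pass (K is lower triangular) followed by a back pass (U is unit upper triangular). A sketch (assuming NumPy, with the K and U of part (a); `np.linalg.solve` stands in for the two triangular substitution passes):

```python
import numpy as np

# K lower triangular, U unit upper triangular, from Exercise 3(a)
K = np.array([[-1.0, 0.0], [2.0, -3.0]])
U = np.array([[1.0, -5.0], [0.0, 1.0]])
b = (K @ U) @ np.array([4.0, -1.0])   # built so the known solution is (4, -1)

y = np.linalg.solve(K, b)   # forward-substitution step: solve K y = b
x = np.linalg.solve(U, y)   # back-substitution step: solve U x = y
assert np.allclose(x, [4.0, -1.0])
```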

(4) (a) F (b) T (c) F (d) F

Section 9.3

(1) (a) After nine iterations, eigenvector = [0.60, 0.80] and eigenvalue = 50.

    (b) After five iterations, eigenvector = [0.91, 0.42] and eigenvalue = 5.31.

    (c) After seven iterations, eigenvector = [0.41, 0.41, 0.82] and eigenvalue = 3.0.

    (d) After five iterations, eigenvector = [0.58, 0.58, 0.00, 0.58] and eigenvalue = 6.00.

    (e) After fifteen iterations, eigenvector = [0.346, 0.852, 0.185, 0.346] and eigenvalue = 5.405.

    (f) After six iterations, eigenvector = [0.4455, -0.5649, -0.6842, -0.1193] and eigenvalue = 5.7323.

(2) The Power Method fails to converge, even after many iterations, for the matrices in both parts. The matrix in part (a) is not diagonalizable, with 1 as its only eigenvalue and dim(E1) = 1. The matrix in part (b) has eigenvalues 1, 3, and -3, none of which is strictly dominant.
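A minimal Power Method sketch matching this section's multiply-and-normalize iteration (assuming NumPy; the 2×2 matrix, with dominant eigenvalue 3, is illustrative rather than one of the exercise matrices):

```python
import numpy as np

def power_method(A, steps=50):
    # repeatedly multiply by A and renormalize; converges to a dominant
    # eigenvector when one eigenvalue strictly dominates in magnitude
    u = np.ones(A.shape[0])
    for _ in range(steps):
        u = A @ u
        u = u / np.linalg.norm(u)
    lam = u @ A @ u          # Rayleigh quotient estimate of the eigenvalue
    return lam, u

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 3 and 1
lam, u = power_method(A)
assert np.isclose(lam, 3.0)
assert np.allclose(A @ u, lam * u, atol=1e-8)
```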

(3) (a) ui is derived from u_{i-1} as ui = ci A u_{i-1}, where ci is a normalizing constant. A proof by induction shows that ui = ki A^i u0 for some nonzero constant ki. If u0 = a0 v1 + b0 v2, then ui = ki a0 A^i v1 + ki b0 A^i v2 = ki a0 λ1^i v1 + ki b0 λ2^i v2. Hence ai = ki a0 λ1^i and bi = ki b0 λ2^i. Thus,
        |ai| / |bi| = |ki a0 λ1^i| / |ki b0 λ2^i| = |λ1/λ2|^i · |a0| / |b0|.


        Therefore, the ratio |ai| / |bi| → ∞ as a geometric sequence.

    (b) Let λ1, …, λn be the eigenvalues of A with |λ1| > |λj| for 2 ≤ j ≤ n. Let {v1, …, vn} be as given in the exercise. Suppose the initial vector in the Power Method is u0 = a01 v1 + ··· + a0n vn and the ith iteration yields ui = ai1 v1 + ··· + ain vn. As in part (a), a proof by induction shows that ui = ki A^i u0 for some nonzero constant ki. Therefore, ui = ki a01 A^i v1 + ki a02 A^i v2 + ··· + ki a0n A^i vn = ki a01 λ1^i v1 + ki a02 λ2^i v2 + ··· + ki a0n λn^i vn. Hence, aij = ki a0j λj^i. Thus, for 2 ≤ j ≤ n, λj ≠ 0, and a0j ≠ 0, we have
        |ai1| / |aij| = |ki a01 λ1^i| / |ki a0j λj^i| = |λ1/λj|^i · |a01| / |a0j|.

(4) (a) F (b) T (c) T (d) F

Section 9.4

(1) (a) Q = (1/3)[[2, 1, -2], [-2, 2, -1], [1, 2, 2]]; R = [[3, 6, 3], [0, 6, -9], [0, 0, 3]]

    (b) Q = (1/11)[[6, -2, -9], [7, -6, 6], [6, 9, 2]]; R = [[11, 22, 11], [0, 11, 22], [0, 0, 11]]

    (c) Q = [[√6/6, √3/3, √2/2], [-√6/3, √3/3, 0], [√6/6, √3/3, -√2/2]]; R = [[√6, 3√6, -2√6/3], [0, 2√3, -10√3/3], [0, 0, √2]]

    (d) Q = (1/3)[[2, 2, 0], [2, -2, 1], [0, -1, -2], [1, 0, -2]]; R = [[6, -3, 9], [0, 9, 12], [0, 0, 15]]

    (e) Q = (1/105)[[14, 99, -2, 32], [70, 0, -70, -35], [77, -18, 64, 26], [0, 30, 45, -90]]; R = [[105, 105, -105, 210], [0, 210, 105, 315], [0, 0, 105, 420], [0, 0, 0, 210]]

(2) (a) Q = (1/13)[[3, 4], [4, -12], [12, 3]]; R = [[13, 26], [0, 13]]; [x, y] = (1/169)[940, -362] ≈ [5.562, -2.142]


    (b) Q = (1/3)[[2, -2, 1], [1, 0, -2], [0, -1, -2], [2, 2, 0]]; R = [[3, 9, 6], [0, 6, 9], [0, 0, 12]];
        [x, y, z] = (5/72)[43, 3, 62] ≈ [2.986, 0.208, 4.306]

    (c) Q = (1/9)[[1, 8, 2√2], [4, 0, (7/2)√2], [0, 4, -(9/2)√2], [-8, 1, 2√2]]; R = [[9, -9, 9], [0, 18, -9], [0, 0, 18√2]];
        [x, y, z] = (1/108)[-61, 65, 66] ≈ [-0.565, 0.602, 0.611]

    (d) Q = [[1/3, 2/3, (2/19)√19], [2/9, 4/9, -(1/19)√19], [4/9, 2/9, -(3/19)√19], [4/9, -4/9, -(1/19)√19], [2/3, -1/3, (2/19)√19]]; R = [[9, -18, 18], [0, 9, 9], [0, 0, 2√19]];
        [x, y, z] = (1/1026)[2968, 6651, 3267] ≈ [2.893, 6.482, 3.184]

(3) We will show that the entries of Q and R are uniquely determined by the given requirements. We will proceed column by column, using a proof by induction on the column number m.

    Base Step: m = 1. [1st column of A] = Q · [1st column of R] = [1st column of Q](r11), because R is upper triangular. But, since [1st column of Q] is a unit vector and r11 is positive, r11 = ||[1st column of A]||, and [1st column of Q] = (1/r11)[1st column of A]. Hence, the first column in each of Q and R is uniquely determined.

    Inductive Step: Assume m > 1 and that columns 1, …, (m - 1) of both Q and R are uniquely determined. We will show that the mth column in each of Q and R is uniquely determined. Now, [mth column of A] = Q · [mth column of R] = Σ_{i=1}^{m} [ith column of Q](r_im). Let v = Σ_{i=1}^{m-1} [ith column of Q](r_im). By the Inductive Hypothesis, v is uniquely determined.

    Next, [mth column of Q](r_mm) = [mth column of A] - v. But since [mth column of Q] is a unit vector and r_mm is positive, r_mm = ||[mth column of A] - v||, and [mth column of Q] = (1/r_mm)([mth column of A] - v). Hence, the mth column in each of Q and R is uniquely determined.
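The inductive construction in Exercise 3 is exactly the classical Gram-Schmidt computation of a QR factorization; a sketch (assuming NumPy; the matrix A is illustrative, and its columns must be linearly independent):

```python
import numpy as np

def qr_gram_schmidt(A):
    # classical Gram-Schmidt mirroring the inductive proof above:
    # column m of Q and R is determined from columns 1..m-1
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # r_ij = q_i . a_j
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)       # positive diagonal entry
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = qr_gram_schmidt(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(2))   # orthonormal columns
assert np.all(np.diag(R) > 0)
```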

(4) (a) If A is square, then A = QR, where Q is an orthogonal matrix and R has nonnegative entries along its main diagonal. Setting U = R, we see that A^T A = U^T Q^T Q U = U^T (In) U = U^T U.

    (b) Suppose A = QR (where this is a QR factorization of A) and A^T A = U^T U. Then R and R^T must be nonsingular. Also, P = Q(R^T)^{-1} U^T is an orthogonal matrix, because
        PP^T = Q(R^T)^{-1} U^T (Q(R^T)^{-1} U^T)^T = Q(R^T)^{-1} U^T U R^{-1} Q^T = Q(R^T)^{-1} A^T A R^{-1} Q^T = Q(R^T)^{-1} (R^T Q^T)(QR) R^{-1} Q^T = Q((R^T)^{-1} R^T)(Q^T Q)(R R^{-1}) Q^T = Q In In In Q^T = QQ^T = In.
        Finally, PU = Q(R^T)^{-1} U^T U = (Q^T)^{-1} (R^T)^{-1} A^T A = (R^T Q^T)^{-1} A^T A = (A^T)^{-1} A^T A = In A = A, and so PU is a QR factorization of A, which is unique by Exercise 3. Hence, U is uniquely determined.

(5) (a) T (b) T (c) T (d) F (e) T

Section 9.5

(1) For each part, one possibility is given.

    (a) U = (1/√2)[[1, 1], [1, -1]], Σ = [[2√10, 0], [0, √10]], V = (1/√5)[[2, -1], [1, 2]]

    (b) U = (1/5)[[-3, -4], [-4, 3]], Σ = [[15√2, 0], [0, 10√2]], V = (1/√2)[[-1, 1], [1, 1]]

    (c) U = (1/√10)[[-3, 1], [1, 3]], Σ = [[9√10, 0, 0], [0, 3√10, 0]], V = (1/3)[[-1, -2, 2], [-2, 2, 1], [2, 1, 2]]

    (d) U = (1/√5)[[-1, -2], [-2, 1]], Σ = [[5√5, 0, 0], [0, 5√5, 0]], V = (1/5)[[-3, 0, 4], [4, 0, 3], [0, 5, 0]]

    (e) U = [[2/√13, 18/(7√13), -3/7], [3/√13, -12/(7√13), 2/7], [0, 13/(7√13), 6/7]], Σ = [[1, 0, 0], [0, 1, 0], [0, 0, 0]], V = U (Note: A represents the orthogonal projection onto the plane -3x + 2y + 6z = 0.)


    (f) U = (1/7)[[6, 2, 3], [3, -6, -2], [2, 3, -6]], Σ = [[2√2, 0], [0, √2], [0, 0]], V = (1/√2)[[1, -1], [1, 1]]

    (g) U = (1/11)[[2, 6, 9], [-9, 6, -2], [6, 7, -6]], Σ = [[3, 0], [0, 2], [0, 0]], V = [[0, 1], [1, 0]]

    (h) U = [[-(2/5)√2, -(8/15)√2, 1/3], [(1/2)√2, -(1/6)√2, 2/3], [(3/10)√2, -(13/30)√2, -2/3]], Σ = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 0, 0]], V = (1/√2)[[0, -1, 0, 1], [-1, 0, 1, 0], [1, 0, 1, 0], [0, 1, 0, 1]]

(2) (a) A+ = (1/2250)[[104, 70, 122], [-158, 110, 31]], v = (1/2250)[5618, 3364],
        A^T A v = A^T b = (1/15)[6823, 3874]

    (b) A+ = (1/450)[[8, 43, -11], [-6, 24, -48]], v = (1/90)[173, 264],
        A^T A v = A^T b = [65, 170]

    (c) A+ = (1/84)[[36, 24, 12, 0], [12, 36, -24, 0], [-31, -23, 41, 49]], v = (1/14)[44, -18, 71],
        A^T A v = A^T b = (1/7)[127, -30, 60]

    (d) A+ = (1/54)[[2, 0, 4, 4], [8, 6, 4, 1], [4, 6, -4, -7]], v = (1/54)[26, 59, 7],
        A^T A v = A^T b = [13, 24, -2]
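Exercise 2 pairs the pseudoinverse A+ = VΣ+U^T with the normal equations; the same computation can be sketched numerically (assuming NumPy; A and b below are illustrative, not the exercise's data):

```python
import numpy as np

# The pseudoinverse built from the SVD gives the minimum-norm least-squares
# solution v = A+ b, which satisfies the normal equations (A^T A)v = A^T b.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 0.0, 1.0])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_plus = Vt.T @ np.diag(1.0 / s) @ U.T      # A has full column rank, so s > 0
assert np.allclose(A_plus, np.linalg.pinv(A))

v = A_plus @ b
assert np.allclose(A.T @ A @ v, A.T @ b)    # (A^T A)v = A^T b
```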

(3) (a) A = 2√10 · ((1/√2)[1, 1]) · ((1/√5)[2, 1]) + √10 · ((1/√2)[1, -1]) · ((1/√5)[-1, 2]). (In each outer-product term, the first vector is a column and the second is a row.)


    (b) A = 9√10 · ((1/√10)[-3, 1]) · ((1/3)[-1, -2, 2]) + 3√10 · ((1/√10)[1, 3]) · ((1/3)[-2, 2, 1])

    (c) A = 2√2 · ((1/7)[6, 3, 2]) · ((1/√2)[1, 1]) + √2 · ((1/7)[2, -6, 3]) · ((1/√2)[-1, 1])

    (d) A = 3 · ((1/11)[2, -9, 6]) · [0, 1] + 2 · ((1/11)[6, 6, 7]) · [1, 0]

    (e) A = 2 · [-(2/5)√2, (1/2)√2, (3/10)√2] · ((1/√2)[0, -1, 1, 0]) + 2 · [-(8/15)√2, -(1/6)√2, -(13/30)√2] · ((1/√2)[-1, 0, 0, 1])

(4) If A is orthogonal, then A^T A = In. Therefore, λ = 1 is the only eigenvalue for A^T A, and the eigenspace is all of R^n. Therefore, {v1, …, vn} can be any orthonormal basis for R^n. If we take {v1, …, vn} to be the standard basis for R^n, then the corresponding singular value decomposition for A is A·In·In. If, instead, we use the rows of A, which form an orthonormal basis for R^n, to represent {v1, …, vn}, then the corresponding singular value decomposition for A is In·In·A.

(5) If A = UΣV^T, then A^T A = VΣ^T U^T UΣV^T = VΣ^TΣV^T. For i ≤ m, let σi be the (i, i) entry of Σ. Thus, for i ≤ m, A^T A vi = VΣ^TΣV^T vi = VΣ^TΣei = VΣ^T(σi ei) = V(σi^2 ei) = σi^2 vi. (Note that in this proof, "ei" has been used to represent both a vector in R^n and a vector in R^m.) If i > m, then A^T A vi = VΣ^TΣV^T vi = VΣ^TΣei = VΣ^T(0) = 0. This proves the claims made in the exercise.

(6) Because A is symmetric, it is orthogonally diagonalizable. Let D = P^T AP be an orthogonal diagonalization for A, with the eigenvalues of A along the main diagonal of D. Thus, A = PDP^T. By Exercise 5, the columns of P form an orthonormal basis of eigenvectors for A^T A, and the corresponding eigenvalues are the squares of the diagonal entries in D. Hence, the singular values of A are the square roots of these eigenvalues, which are the absolute values of the diagonal entries of D. Since the diagonal entries of D are the eigenvalues of A, this completes the proof.

(7) Express v ∈ R^n as v = a1 v1 + ··· + an vn, where {v1, …, vn} are right singular vectors for A. Since {v1, …, vn} is an orthonormal basis for R^n, ||v|| = √(a1^2 + ··· + an^2). Thus,

    ||Av||^2 = (Av)·(Av)
             = (a1 Av1 + ··· + an Avn)·(a1 Av1 + ··· + an Avn)
             = a1^2 σ1^2 + ··· + an^2 σn^2   (by parts (2) and (3) of Lemma 9.4)
             ≤ a1^2 σ1^2 + a2^2 σ1^2 + ··· + an^2 σ1^2
             = σ1^2 (a1^2 + a2^2 + ··· + an^2)
             = σ1^2 ||v||^2.

    Therefore, ||Av|| ≤ σ1 ||v||.

(8) In all parts, assume k represents the number of nonzero singular values.

    (a) The ith column of V is the right singular vector vi, which is a unit eigenvector corresponding to the eigenvalue λi of A^T A. But -vi is also a unit eigenvector corresponding to the eigenvalue λi of A^T A. Thus, if any vector vi is replaced with its opposite vector, the set {v1, …, vn} is still an orthonormal basis for R^n consisting of eigenvectors for A^T A. Since the vectors are kept in the same order, the σi do not increase, and thus {v1, …, vn} fulfills all the necessary conditions to be a set of right singular vectors for A. For i ≤ k, the left singular vector ui = (1/σi)Avi, so when we change the sign of vi, we must adjust U by changing the sign of ui as well. For i > k, changing the sign of vi has no effect on U, but still produces a valid singular value decomposition.

    (b) If the eigenspace E_λ for A^T A has dimension higher than 1, then the corresponding right singular vectors can be replaced by any orthonormal basis for E_λ, for which there is an infinite number of choices. Then the associated left singular vectors u1, …, uk must be adjusted accordingly.

    (c) Let {v1, …, vn} be a set of right singular vectors for A. By Exercise 5, the ith diagonal entry of Σ must be the square root of an eigenvalue of A^T A corresponding to the eigenvector vi. Since {v1, …, vn} is an orthonormal basis of eigenvectors, it must correspond to a complete set of eigenvalues for A^T A. Also by Exercise 5, all of the vectors vi, for i > m, correspond to the eigenvalue 0. Hence, all of the square roots of the nonzero eigenvalues of A^T A must lie on the diagonal of Σ. Thus, the values that appear on the diagonal of Σ are uniquely determined. Finally, the order in which these numbers appear is determined by the requirement for the singular value decomposition that they appear in non-increasing order.

    (d) By definition, for 1 ≤ i ≤ k, ui = (1/σi)Avi, and so these left singular vectors are completely determined by the choices made for v1, …, vk, the first k columns of V.

    (e) Columns k + 1 through m of U are the left singular vectors u_{k+1}, …, um, which can be any orthonormal basis for the orthogonal complement of the column space of A (by parts (2) and (3) of Theorem 9.5). If m = k + 1, this orthogonal complement to the column space is one-dimensional, and so there are only two choices for u_{k+1}, which are opposites of each other, because u_{k+1} must be a unit vector. If m > k + 1, the orthogonal complement to the column space has dimension greater than 1, and there is an infinite number of choices for its orthonormal basis.

(9) (a) Each right singular vector v_i, for 1 ≤ i ≤ k, must be an eigenvector for A^T A. Performing the Gram-Schmidt Process on the rows of A, eliminating zero vectors, and normalizing will produce an orthonormal basis for the row space of A, but there is no guarantee that it will consist of eigenvectors for A^T A. For example, if A = [1 1; 0 1], performing the Gram-Schmidt Process on the rows of A produces the two vectors [1, 1] and [−1/2, 1/2], neither of which is an eigenvector for A^T A = [1 1; 1 2].

(b) The right singular vectors v_{k+1}, …, v_n form an orthonormal basis for the eigenspace E_0 of A^T A. Any orthonormal basis for E_0 will do. By part (5) of Theorem 9.5, E_0 equals the kernel of the linear transformation L whose matrix with respect to the standard bases is A. A basis for ker(L) can be found by using the Kernel Method. That basis can be turned into an orthonormal basis for ker(L) by applying the Gram-Schmidt Process and normalizing.

(10) If A = UΣV^T, as given in the exercise, then A⁺ = VΣ⁺U^T, and so A⁺A = VΣ⁺U^T UΣV^T = VΣ⁺ΣV^T. Note that Σ⁺Σ is an n × n diagonal matrix whose first k diagonal entries equal 1, with the remaining diagonal entries equal to 0. Note also that since the columns v_1, …, v_n of V are orthonormal, V^T v_i = e_i, for 1 ≤ i ≤ n.

(a) If 1 ≤ i ≤ k, then A⁺Av_i = VΣ⁺ΣV^T v_i = VΣ⁺Σe_i = Ve_i = v_i.

(b) If i > k, then A⁺Av_i = VΣ⁺ΣV^T v_i = VΣ⁺Σe_i = V(0) = 0.

(c) By parts (4) and (5) of Theorem 9.5, {v_1, …, v_k} is an orthonormal basis for (ker(L))⊥, and {v_{k+1}, …, v_n} is an orthonormal basis for ker(L). The orthogonal projection onto (ker(L))⊥ sends every vector in (ker(L))⊥ to itself, and every vector orthogonal to (ker(L))⊥ (that is, every vector in ker(L)) to 0. Parts (a) and (b) prove that this is precisely the result obtained by multiplying A⁺A by the bases for (ker(L))⊥ and ker(L).

(d) Using part (a), if 1 ≤ i ≤ k, then AA⁺Av_i = A(A⁺Av_i) = Av_i. By part (b), if i > k, AA⁺Av_i = A(A⁺Av_i) = A(0) = 0. But, if i > k, v_i ∈ ker(L), and so Av_i = 0 as well. Hence, AA⁺A and A represent linear transformations from R^n to R^m that agree on a basis for R^n. Therefore, they are equal as matrices.

(e) By part (d), AA⁺A = A. If A is nonsingular, we can multiply both sides by A⁻¹ (on the left) to obtain A⁺A = I_n. Multiplying by A⁻¹ again (on the right) yields A⁺ = A⁻¹.
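Part (e) can be sanity-checked numerically on a small nonsingular matrix, where the pseudoinverse coincides with the ordinary inverse, so AA⁺A should reproduce A exactly. A minimal pure-Python sketch (the 2×2 example matrix and helper names below are our own, not from the text):

```python
# For nonsingular A, part (e) says A+ = A^(-1), so A A+ A = A.

def matmul2(X, Y):
    # Product of two 2x2 matrices given as nested lists.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    # Closed-form inverse of a 2x2 matrix.
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[3.0, 1.0], [2.0, 5.0]]   # nonsingular: det = 13
A_pinv = inv2(A)               # here the pseudoinverse equals the inverse

AAplusA = matmul2(matmul2(A, A_pinv), A)
assert all(abs(AAplusA[i][j] - A[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

This checks only the nonsingular special case; for a singular or rectangular A the comparison would need a full SVD-based pseudoinverse.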

(11) If A = UΣV^T, as given in the exercise, then A⁺ = VΣ⁺U^T. Note that since the columns u_1, …, u_m of U are orthonormal, U^T u_i = e_i, for 1 ≤ i ≤ m.

(a) If 1 ≤ i ≤ k, then A⁺u_i = VΣ⁺U^T u_i = VΣ⁺e_i = V((1/σ_i)e_i) = (1/σ_i)v_i. Thus, AA⁺u_i = A(A⁺u_i) = A((1/σ_i)v_i) = (1/σ_i)Av_i = u_i.

(b) If i > k, then A⁺u_i = VΣ⁺U^T u_i = VΣ⁺e_i = V(0) = 0. Thus, AA⁺u_i = A(A⁺u_i) = A(0) = 0.

(c) By Theorem 9.5, {u_1, …, u_k} is an orthonormal basis for range(L), and {u_{k+1}, …, u_m} is an orthonormal basis for (range(L))⊥. The orthogonal projection onto range(L) sends every vector in range(L) to itself, and every vector in (range(L))⊥ to 0. Parts (a) and (b) prove that this is precisely the action of multiplying by AA⁺ by showing how it acts on bases for range(L) and (range(L))⊥.

(d) Using part (a), if 1 ≤ i ≤ k, then A⁺AA⁺u_i = A⁺(AA⁺u_i) = A⁺u_i. By part (b), if i > k, A⁺AA⁺u_i = A⁺(AA⁺u_i) = A⁺(0) = 0. But, if i > k, A⁺u_i = 0 as well. Hence, A⁺AA⁺ and A⁺ represent linear transformations from R^m to R^n that agree on a basis for R^m. Therefore, they are equal as matrices.

(e) Let L1 : (ker(L))⊥ → range(L) be given by L1(v) = Av. Parts (2) and (3) of Theorem 9.5 show that dim((ker(L))⊥) = k = dim(range(L)). Part (a) of this exercise and part (2) of Theorem 9.5 show that T : range(L) → range(L) given by T(u) = L1(A⁺u) is the identity linear transformation. Hence, L1 must be onto, and so by Corollary 5.21, L1 is an isomorphism. Because T is the identity transformation, the inverse of L1 has A⁺ as its matrix with respect to the standard basis. This shows that A⁺u is uniquely determined for u ∈ range(L). Thus, since the subspace range(L) depends only on A, not on the singular value decomposition of A, A⁺u is uniquely determined on the subspace range(L), independently of which singular value decomposition is used for A to compute A⁺. Similarly, part (3) of Theorem 9.5 and part (b) of this exercise show that A⁺u = 0 on a basis for (range(L))⊥, and so A⁺u = 0 for all vectors in (range(L))⊥, again independent of which singular value decomposition of A is used. Thus, A⁺u is uniquely determined for u ∈ range(L) and u ∈ (range(L))⊥, and thus on a basis for R^m which can be chosen from these two subspaces. Thus, A⁺ is uniquely determined.

(12) By part (a) of Exercise 26 in Section 1.5, trace(AA^T) equals the sum of the squares of the entries of A. If A = UΣV^T is a singular value decomposition of A, then AA^T = UΣV^T VΣ^T U^T = UΣΣ^T U^T. Using part (c) of Exercise 26 in Section 1.5, we see that trace(AA^T) = trace(UΣΣ^T U^T) = trace(U^T UΣΣ^T) = trace(ΣΣ^T) = σ_1² + ⋯ + σ_k².
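The first equality, trace(AA^T) = sum of the squares of the entries of A, is easy to check numerically; the singular-value half of the identity would need an SVD routine, so this pure-Python sketch (with an example matrix of our own) verifies only the entry-sum step:

```python
# Verify trace(A A^T) equals the sum of squares of the entries of A
# for a small 2x3 example matrix.

A = [[1.0, -2.0, 3.0],
     [0.0,  4.0, -1.0]]

m, n = len(A), len(A[0])

# Form A A^T explicitly, then take its trace.
AAt = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(m)]
       for i in range(m)]
trace_AAt = sum(AAt[i][i] for i in range(m))

square_sum = sum(A[i][j] ** 2 for i in range(m) for j in range(n))

assert trace_AAt == square_sum
```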

(13) Let V1 be the n × n matrix whose columns are

v_i, …, v_j, v_1, …, v_{i−1}, v_{j+1}, …, v_n,

in that order. Let U1 be the m × m matrix whose columns are

u_i, …, u_j, u_1, …, u_{i−1}, u_{j+1}, …, u_m,

in that order. (The notation used here assumes that i > 1, j < n, and j < m, but the construction of V1 and U1 can be adjusted in an obvious way if i = 1, j = n, or j = m.) Let Σ1 be the diagonal m × n matrix with σ_i, …, σ_j in the first j − i + 1 diagonal entries and zero in the remaining diagonal entries. We claim that U1Σ1V1^T is a singular value decomposition for A_ij. To show this, because U1, Σ1, and V1 are of the proper form, it is enough to show that A_ij = U1Σ1V1^T.

If i ≤ l ≤ j, then A_ij v_l = (σ_i u_i v_i^T + ⋯ + σ_j u_j v_j^T)v_l = σ_l u_l, and (U1Σ1V1^T)v_l = U1Σ1 e_{l−i+1} = U1 σ_l e_{l−i+1} = σ_l u_l. If l < i or j < l, then A_ij v_l = (σ_i u_i v_i^T + ⋯ + σ_j u_j v_j^T)v_l = 0. If l < i, (U1Σ1V1^T)v_l = U1Σ1 e_{l+j−i+1} = U1 0 = 0, and if j < l, (U1Σ1V1^T)v_l = U1Σ1 e_l = U1 0 = 0. Hence, A_ij v_l = (U1Σ1V1^T)v_l for every l, and so A_ij = U1Σ1V1^T.

Finally, since we know a singular value decomposition for A_ij, we know that σ_i, …, σ_j are the singular values for A_ij, and by part (1) of Theorem 9.5, rank(A_ij) = j − i + 1.

(14) (a) A =
[ 40    −5    15    −15    5     −30
  1.8   3     1.2   1.2    −0.6  1.8
  50    5     45    −45    −5    −60
  −2.4  −1.5  0.9   0.9    3.3   −2.4
  42.5  −2.5  60    −60    2.5   −37.5 ]

(b) A1 =
[ 25  0  25  −25  0  −25
  0   0  0   0    0  0
  50  0  50  −50  0  −50
  0   0  0   0    0  0
  50  0  50  −50  0  −50 ]

A2 =
[ 35  0  15  −15  0  −35
  0   0  0   0    0  0
  55  0  45  −45  0  −55
  0   0  0   0    0  0
  40  0  60  −60  0  −40 ]

A3 =
[ 40    −5    15  −15  5     −30
  0     0     0   0    0     0
  50    5     45  −45  −5    −60
  0     0     0   0    0     0
  42.5  −2.5  60  −60  2.5   −37.5 ]

A4 =
[ 40    −5    15  −15  5     −30
  1.8   1.8   0   0    −1.8  1.8
  50    5     45  −45  −5    −60
  −2.4  −2.4  0   0    2.4   −2.4
  42.5  −2.5  60  −60  2.5   −37.5 ]

(c) N(A) ≈ 153.85; N(A − A1)/N(A) ≈ 0.2223; N(A − A2)/N(A) ≈ 0.1068; N(A − A3)/N(A) ≈ 0.0436; N(A − A4)/N(A) ≈ 0.0195

(d) The method described in the text for the compression of digital images takes the matrix describing the image and alters it by zeroing out some of the lower singular values. This exercise illustrates how the matrices A_i that use only the first i singular values for a matrix A get closer to approximating A as i increases. The matrix for a digital image is, of course, much larger than the 5 × 6 matrix considered in this exercise. Also, you can frequently get a very good approximation of the image using a small enough number of singular values so that less data needs to be saved. Using the outer product form of the singular value decomposition, only the singular values and the relevant singular vectors are needed to construct each A_i, so not all mn entries of A_i need to be kept in storage.
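The storage claim in part (d) can be made concrete by counting: the outer-product form of a rank-i approximation needs only the i singular values plus the i left (length m) and i right (length n) singular vectors, that is, i(m + n + 1) numbers instead of all mn entries. A small sketch of that arithmetic (the larger image size below is a made-up example, not from the text):

```python
def storage_counts(m, n, i):
    # Numbers stored for the full m x n matrix vs. the rank-i
    # outer-product form: i singular values, i left singular vectors
    # (length m), and i right singular vectors (length n).
    full = m * n
    truncated = i * (m + n + 1)
    return full, truncated

# The 5x6 matrix of this exercise: little is saved at such a small size.
assert storage_counts(5, 6, 1) == (30, 12)

# A hypothetical 1000x800 grayscale image keeping 100 singular values:
full, truncated = storage_counts(1000, 800, 100)
assert full == 800000 and truncated == 180100
```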

(15) These are the steps to process a digital image in MATLAB, as described in the textbook:

• Enter the command: edit

• Use the text editor to enter the following MATLAB program:

function totalmat = RevisedPic(U, S, V, k)
T = V';
totalmat = 0;
for i = 1:k
    totalmat = totalmat + U(:,i)*T(i,:)*S(i,i);
end


• Save this program under the name RevisedPic.m

• Enter the command: A=imread('picturefilename');
where picturefilename is the name of the file containing the picture, including its file extension (preferably .tif or .jpg).

• Enter the command: ndims(A)

• If the response is "3", then MATLAB is treating your picture as if it is in color, even if it appears to be black-and-white. If this is the case, enter the command: B=A(:,:,1);
(to use only the first color of the three used for color pictures)
followed by the command: C=double(B);
(to convert the integer format to decimal)
Otherwise, just enter: C=double(A);

• Enter the command: [U,S,V]=svd(C);
(This computes the singular value decomposition of C.)

• Enter the command: W=RevisedPic(U,S,V,100);
(The number "100" is the number of singular values you are using. You may change this to any value you like.)

• Enter the command: R=uint8(round(W));
(This converts the decimal output to the correct integer format.)

• Enter the command: imwrite(R,'Revised100.tif','tif')
(This will write the revised picture out to a file named "Revised100.tif". Of course, you can use any file name you would like, or use the .jpg extension instead of .tif.)

• You may repeat the steps:
W=RevisedPic(U,S,V,k);
R=uint8(round(W));
imwrite(R,'Revisedk.tif','tif')
with different values for k to see how the picture gets more refined as more singular values are used.

• Output can be viewed using any appropriate software you may have on your computer for viewing such files.
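For readers without MATLAB, the reconstruction loop in RevisedPic is easy to mimic in plain Python: sum the first k outer products σ_i u_i v_i^T. The sketch below (the function name and test data are our own; a real image would supply U, S, and V from an SVD routine) assumes U is m × m, S is the m × n diagonal matrix, and V is n × n, as in the MATLAB code:

```python
def revised_pic(U, S, V, k):
    # Python analogue of the RevisedPic MATLAB function: returns
    # sum_{i=1}^{k} S[i][i] * (column i of U)(column i of V)^T.
    m, n = len(U), len(V)
    total = [[0.0] * n for _ in range(m)]
    for i in range(k):
        for r in range(m):
            for c in range(n):
                total[r][c] += S[i][i] * U[r][i] * V[c][i]
    return total

# Trivial SVD for a check: U = I, V = I, singular values 3 and 2.
U = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0]]
S = [[3.0, 0.0], [0.0, 2.0]]

assert revised_pic(U, S, V, 1) == [[3.0, 0.0], [0.0, 0.0]]
assert revised_pic(U, S, V, 2) == [[3.0, 0.0], [0.0, 2.0]]
```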

(16) (a) F  (b) T  (c) F  (d) F  (e) F  (f) T  (g) F  (h) T  (i) F  (j) T  (k) T


Appendices

Appendix B

(1) (a) Not a function; undefined for x < 1

(b) Function; range = {x ∈ R | x ≥ 0}; image of 2 = 1; pre-images of 2 = {−3, 5}

(c) Not a function; two values are assigned to each x ≠ 1

(d) Function; range = {−4, −2, 0, 2, 4, …}; image of 2 = −2; pre-images of 2 = {6, 7}

(e) Not a function (k undefined at θ = π/2)

(f) Function; range = all prime numbers; image of 2 = 2; pre-image of 2 = {0, 1, 2}

(g) Not a function; 2 is assigned to two different values (−1 and 6)

(2) (a) {−15, −10, −5, 5, 10, 15}
(b) {−9, −8, −7, −6, −5, 5, 6, 7, 8, 9}
(c) {…, −8, −6, −4, −2, 0, 2, 4, 6, 8, …}

(3) (g ∘ f)(x) = (1/4)√(75x² − 30x + 35); (f ∘ g)(x) = (1/4)(5√(3x² + 2) − 1)

(4) (g ∘ f)([x; y]) = [−8 24; 2 8][x; y], (f ∘ g)([x; y]) = [−12 8; −4 12][x; y]

(5) (a) Let f : A → B be given by f(1) = 4, f(2) = 5, f(3) = 6. Let g : B → C be given by g(4) = g(7) = 8, g(5) = 9, g(6) = 10. Then g ∘ f is onto, but f⁻¹({7}) is empty, so f is not onto.

(b) Use the example from part (a). Note that g ∘ f is one-to-one, but g(4) = g(7), so g is not one-to-one.
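The counterexample in parts (a) and (b) is small enough to check mechanically, with dictionaries standing in for f and g (our own encoding, not from the text):

```python
# f : A -> B and g : B -> C from part (a), encoded as dictionaries.
f = {1: 4, 2: 5, 3: 6}
g = {4: 8, 5: 9, 6: 10, 7: 8}

B = {4, 5, 6, 7}
C = {8, 9, 10}

gf = {a: g[f[a]] for a in f}          # the composition g o f

# g o f is onto C, but f is not onto B (7 has no pre-image).
assert set(gf.values()) == C
assert set(f.values()) != B

# g o f is one-to-one, but g is not (g(4) = g(7)).
assert len(set(gf.values())) == len(gf)
assert g[4] == g[7]
```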

(6) The function f is onto because, given any real number x, the n × n diagonal matrix A with a_11 = x and a_ii = 1 for 2 ≤ i ≤ n has determinant x. Also, f is not one-to-one because for every x ≠ 0, any other matrix obtained by performing a type (II) operation on A (for example, ⟨1⟩ ← c⟨2⟩ + ⟨1⟩) also has determinant x.

(7) The function f is not onto because only symmetric 3 × 3 matrices are in the range of f (f(A) = A + A^T = (A^T + A)^T = (f(A))^T). Also, f is not one-to-one because if A is any nonsymmetric 3 × 3 matrix, then f(A) = A + A^T = A^T + (A^T)^T = f(A^T), but A ≠ A^T.

(8) f is not one-to-one, because f(x+1) = f(x+3) = 1; f is not onto, because there is no pre-image for x^n. For n ≥ 3, the pre-image of P_2 is P_3.


(9) If f(x_1) = f(x_2), then 3x_1³ − 5 = 3x_2³ − 5 ⟹ 3x_1³ = 3x_2³ ⟹ x_1³ = x_2³ ⟹ x_1 = x_2. So f is one-to-one. Also, if b ∈ R, then f(((b+5)/3)^(1/3)) = b, so f is onto. Inverse of f: f⁻¹(x) = ((x+5)/3)^(1/3).
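The inverse formula above can be spot-checked numerically; a pure-Python sketch (the sample points are our own, and a sign-safe cube root is used since Python's `**` raises complex values for negative bases):

```python
def f(x):
    return 3 * x**3 - 5

def f_inv(x):
    # f^(-1)(x) = ((x + 5)/3)^(1/3), via a sign-safe real cube root.
    y = (x + 5) / 3
    return y ** (1/3) if y >= 0 else -((-y) ** (1/3))

# f(f_inv(b)) should recover b, and f_inv(f(a)) should recover a.
for b in (-20.0, -5.0, 0.0, 7.0, 100.0):
    assert abs(f(f_inv(b)) - b) < 1e-9

assert abs(f_inv(f(2.0)) - 2.0) < 1e-9
```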

(10) The function f is one-to-one, because

f(A_1) = f(A_2)
⟹ B⁻¹A_1B = B⁻¹A_2B
⟹ B(B⁻¹A_1B)B⁻¹ = B(B⁻¹A_2B)B⁻¹
⟹ (BB⁻¹)A_1(BB⁻¹) = (BB⁻¹)A_2(BB⁻¹)
⟹ I_n A_1 I_n = I_n A_2 I_n
⟹ A_1 = A_2.

The function f is onto, because for any C ∈ M_nn, f(BCB⁻¹) = B⁻¹(BCB⁻¹)B = C. Also, f⁻¹(A) = BAB⁻¹.

(11) (a) Let c ∈ C. Since g ∘ f is onto, there is some a ∈ A such that (g ∘ f)(a) = c. Then g(f(a)) = c. Let f(a) = b. Then g(b) = c, so g is onto. (Exercise 5(a) shows that f is not necessarily onto.)

(b) Let a_1, a_2 ∈ A such that f(a_1) = f(a_2). Then g(f(a_1)) = g(f(a_2)), implying (g ∘ f)(a_1) = (g ∘ f)(a_2). But since g ∘ f is one-to-one, a_1 = a_2. Hence, f is one-to-one. (Exercise 5(b) shows that g is not necessarily one-to-one.)

(12) (a) F  (b) T  (c) F  (d) F  (e) F  (f) F  (g) F  (h) F

Appendix C

(1) (a) 11 − i  (b) 24 − 32i  (c) 20 − 12i  (d) 18 − 9i  (e) 9 + 19i  (f) 2 + 42i  (g) −17 − 19i  (h) 5 − 4i  (i) 9 + 2i  (j) −6  (k) 16 + 22i  (l) √73  (m) √53  (n) 5

(2) (a) 3/20 + (1/20)i  (b) 3/25 − (4/25)i  (c) −4/17 − (1/17)i  (d) −5/34 + (3/34)i

(3) In all parts, let z_1 = a_1 + b_1 i and z_2 = a_2 + b_2 i, and let conj(z) denote the complex conjugate of z.

(a) Part (1): conj(z_1 + z_2) = conj((a_1 + b_1 i) + (a_2 + b_2 i)) = conj((a_1 + a_2) + (b_1 + b_2)i) = (a_1 + a_2) − (b_1 + b_2)i = (a_1 − b_1 i) + (a_2 − b_2 i) = conj(z_1) + conj(z_2).
Part (2): conj(z_1 z_2) = conj((a_1 + b_1 i)(a_2 + b_2 i)) = conj((a_1 a_2 − b_1 b_2) + (a_1 b_2 + a_2 b_1)i) = (a_1 a_2 − b_1 b_2) − (a_1 b_2 + a_2 b_1)i = (a_1 a_2 − (−b_1)(−b_2)) + (a_1(−b_2) + a_2(−b_1))i = (a_1 − b_1 i)(a_2 − b_2 i) = conj(z_1) conj(z_2).


(b) If z_1 ≠ 0, then 1/z_1 exists. Hence, (1/z_1)(z_1 z_2) = (1/z_1)(0), implying z_2 = 0.

(c) Part (4): z_1 = conj(z_1) ⟺ a_1 + b_1 i = a_1 − b_1 i ⟺ b_1 i = −b_1 i ⟺ 2b_1 i = 0 ⟺ b_1 = 0 ⟺ z_1 is real.
Part (5): z_1 = −conj(z_1) ⟺ a_1 + b_1 i = −(a_1 − b_1 i) ⟺ a_1 + b_1 i = −a_1 + b_1 i ⟺ a_1 = −a_1 ⟺ 2a_1 = 0 ⟺ a_1 = 0 ⟺ z_1 is pure imaginary.

(4) In all parts, let z_1 = a_1 + b_1 i and z_2 = a_2 + b_2 i. Also note that z_1 conj(z_1) = |z_1|² because z_1 conj(z_1) = (a_1 + b_1 i)(a_1 − b_1 i) = a_1² + b_1².

(a) |z_1 z_2|² = (z_1 z_2) conj(z_1 z_2) (by the above) = (z_1 z_2)(conj(z_1) conj(z_2)) (by part (2) of Theorem C.1) = (z_1 conj(z_1))(z_2 conj(z_2)) = |z_1|² |z_2|² = (|z_1| |z_2|)². Now take square roots.

(b) |1/z_1| = |conj(z_1)/|z_1|²| (by the boxed equation just before Theorem C.1 in Appendix C of the textbook) = |a_1/(a_1² + b_1²) − (b_1/(a_1² + b_1²))i| = √((a_1/(a_1² + b_1²))² + (b_1/(a_1² + b_1²))²) = √((1/(a_1² + b_1²))²(a_1² + b_1²)) = 1/√(a_1² + b_1²) = 1/|z_1|

(c) conj(z_1/z_2) = conj((z_1 conj(z_2))/(z_2 conj(z_2))) = conj((z_1 conj(z_2))/|z_2|²) = conj(((a_1 + b_1 i)(a_2 − b_2 i))/(a_2² + b_2²)) = conj((a_1 a_2 + b_1 b_2)/(a_2² + b_2²) + ((b_1 a_2 − a_1 b_2)/(a_2² + b_2²))i) = (a_1 a_2 + b_1 b_2)/(a_2² + b_2²) − ((b_1 a_2 − a_1 b_2)/(a_2² + b_2²))i = (a_1 a_2 + b_1 b_2)/(a_2² + b_2²) + ((−b_1 a_2 + a_1 b_2)/(a_2² + b_2²))i = ((a_1 − b_1 i)(a_2 + b_2 i))/(a_2² + b_2²) = (conj(z_1) z_2)/(conj(z_2) z_2) = conj(z_1)/conj(z_2)
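Python's built-in complex type makes it easy to spot-check the conjugate and modulus identities proved above (the sample values are our own):

```python
z1 = 3 - 2j
z2 = -1 + 4j

# Parts (1) and (2) of Theorem C.1: conjugation respects sums and products.
assert (z1 + z2).conjugate() == z1.conjugate() + z2.conjugate()
assert (z1 * z2).conjugate() == z1.conjugate() * z2.conjugate()

# Exercise 4(a): |z1 z2| = |z1| |z2|.
assert abs(abs(z1 * z2) - abs(z1) * abs(z2)) < 1e-12

# Exercise 4(b): |1/z1| = 1/|z1|.
assert abs(abs(1 / z1) - 1 / abs(z1)) < 1e-12

# Exercise 4(c): conjugate of a quotient is the quotient of conjugates.
q = (z1 / z2).conjugate() - z1.conjugate() / z2.conjugate()
assert abs(q) < 1e-12
```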

(5) (a) F (b) F (c) T (d) T (e) F


Sample Tests

We include here three sample tests for each of Chapters 1 through 7. Answer keys for all tests follow the test collection itself. The answer keys also include optional hints that an instructor might want to divulge for some of the more complicated or time-consuming problems. Instructors who are supervising independent study students can use these tests to measure their students' progress at regular intervals. However, in a typical classroom situation, we do not expect instructors to give a test immediately after every chapter is covered, since that would amount to six or seven tests throughout the semester. Rather, we envision these tests as being used as a test bank; that is, a supply of questions, or ideas for questions, to assist instructors in composing their own tests. Note that most of the tests, as printed here, would take a student much more than an hour to complete, even with the use of software on a computer and/or calculator. Hence, we expect instructors to choose appropriate subsets of these tests to fulfill their classroom needs.


Test for Chapter 1 – Version A

(1) A rower can propel a boat 5 km/hr on a calm river. If the rower rows southeastward against a current of 2 km/hr northward, what is the net velocity of the boat? Also, what is the net speed of the boat?

(2) Use a calculator to find the angle θ (to the nearest degree) between the vectors x = [3, −2, 5] and y = [−4, 1, −1].

(3) Prove that, for any vectors x, y ∈ R^n,

||x + y||² + ||x − y||² = 2(||x||² + ||y||²).
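This identity (often called the parallelogram law) is easy to check numerically before proving it in general; a pure-Python sketch with sample vectors of our own choosing:

```python
def norm_sq(v):
    # Squared Euclidean norm of a vector given as a list.
    return sum(c * c for c in v)

x = [3.0, -2.0, 5.0]
y = [-4.0, 1.0, -1.0]

xpy = [a + b for a, b in zip(x, y)]   # x + y
xmy = [a - b for a, b in zip(x, y)]   # x - y

# ||x + y||^2 + ||x - y||^2 = 2(||x||^2 + ||y||^2)
assert norm_sq(xpy) + norm_sq(xmy) == 2 * (norm_sq(x) + norm_sq(y))
```

A numeric check is of course no substitute for the proof the problem asks for; it only guards against misreading the statement.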

(4) Let x = [−3, 4, 2] represent the force on an object in a three-dimensional coordinate system, and let a = [5, −1, 3] be a given vector. Use proj_a x to decompose x into two component forces in directions parallel and orthogonal to a. Verify that your answer is correct.
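The decomposition asked for here follows one formula: proj_a x = ((x · a)/(a · a)) a, with the orthogonal part x − proj_a x. A pure-Python sketch using this problem's vectors (the helper names are our own):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [-3.0, 4.0, 2.0]
a = [5.0, -1.0, 3.0]

t = dot(x, a) / dot(a, a)                    # (x . a)/(a . a)
proj = [t * c for c in a]                    # component parallel to a
perp = [xc - pc for xc, pc in zip(x, proj)]  # component orthogonal to a

# Verification: the parts sum back to x, and perp is orthogonal to a.
assert all(abs(p + q - xc) < 1e-12 for p, q, xc in zip(proj, perp, x))
assert abs(dot(perp, a)) < 1e-12
```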

(5) State the contrapositive, converse, and inverse of the following statement: If ||x + y|| ≠ ||x|| + ||y||, then x is not parallel to y. Which one of these is logically equivalent to the original statement?

(6) Give the negation of the following statement: ||x|| = ||y|| or (x − y) · (x + y) ≠ 0.

(7) Use a proof by induction to show that if the n × n matrices A_1, …, A_k are diagonal, then Σ_{i=1}^{k} A_i is diagonal.

(8) Decompose A = [−4 3 −2; 6 −1 7; 2 4 −3] into the sum of a symmetric matrix S and a skew-symmetric matrix V.

(9) Given the following information about the employees of a certain TV network, calculate the total amount of salaries and perks paid out by the network for each TV show:

             Actors  Writers  Directors
TV Show 1   [  12      4        2  ]
TV Show 2   [  10      2        3  ]
TV Show 3   [   6      3        5  ]
TV Show 4   [   9      4        1  ]

            Salary   Perks
Actor     [ $50000  $40000 ]
Writer    [ $60000  $30000 ]
Director  [ $80000  $25000 ]

(10) Let A and B be symmetric n × n matrices. Prove that AB is symmetric if and only if A and B commute.


Test for Chapter 1 – Version B

(1) Three people are competing in a three-way tug-of-war, using three ropes all attached to a brass ring. Player A is pulling due north with a force of 100 lbs. Player B is pulling due east, and Player C is pulling at an angle of 30 degrees off due south, toward the west side. The ring is not moving. At what magnitude of force are Players B and C pulling?

(2) Use a calculator to find the angle θ (to the nearest degree) between the vectors x = [2, −5] and y = [−6, 5].

(3) Prove that, for any vectors a and b in R^n, the vector b − proj_a b is orthogonal to a.

(4) Let x = [5, −2, −4] represent the force on an object in a three-dimensional coordinate system, and let a = [−2, 3, −3] be a given vector. Use proj_a x to decompose x into two component forces in directions parallel and orthogonal to a. Verify that your answer is correct.

(5) State the contrapositive, converse, and inverse of the following statement: If ||x||² + 2(x · y) > 0, then ||x + y|| > ||y||. Which one of these is logically equivalent to the original statement?

(6) Give the negation of the following statement: There is a unit vector x in R³ such that x is parallel to [−2, 3, 1].

(7) Use a proof by induction to show that if the n × n matrices A_1, …, A_k are lower triangular, then Σ_{i=1}^{k} A_i is lower triangular.

(8) Decompose A = [5 8 −2; −6 3 −3; 9 4 1] into the sum of a symmetric matrix S and a skew-symmetric matrix V.

(9) Given the following information about the amount of foods (in ounces) eaten by three cats each week, and the percentage of certain nutrients in each food type, find the total intake of each type of nutrient for each cat each week:

        Food A  Food B  Food C  Food D
Cat 1  [   9       4       7       8  ]
Cat 2  [   6       3      10       4  ]
Cat 3  [   4       6       9       7  ]

          Nutrient 1  Nutrient 2  Nutrient 3
Food A  [    5%          4%          8%   ]
Food B  [    2%          3%          2%   ]
Food C  [    9%          2%          6%   ]
Food D  [    6%          0%          8%   ]


(10) Prove that if A and B are both n × n skew-symmetric matrices, then (AB)^T = BA.


Test for Chapter 1 – Version C

(1) Using Newton's Second Law of Motion, find the acceleration vector on a 10 kg object in a three-dimensional coordinate system when the following forces are simultaneously applied:

• a force of 6 newtons in the direction of the vector [−4, 0, 5],
• a force of 4 newtons in the direction of the vector [2, 3, −6], and
• a force of 8 newtons in the direction of the vector [−1, 4, −2].

(2) Use a calculator to find the angle θ (to the nearest degree) between the vectors x = [−2, 4, −3] and y = [8, −3, −2].

(3) Prove that, for any vectors x, y ∈ R^n,

||x + y||² ≤ (||x|| + ||y||)².

(Hint: Use the Cauchy-Schwarz Inequality.)

(4) Let x = [−4, 2, −7] represent the force on an object in a three-dimensional coordinate system, and let a = [6, −5, 1] be a given vector. Use proj_a x to decompose x into two component forces in directions parallel and orthogonal to a. Verify that your answer is correct.

(5) Use a proof by contrapositive to prove the following statement: If y ≠ proj_x y, then y ≠ cx for all c ∈ R.

(6) Give the negation of the following statement: For every vector x ∈ R^n, there is some y ∈ R^n such that ||y|| > ||x||.

(7) Decompose A = [−3 6 3; 7 5 2; −2 4 −4] into the sum of a symmetric matrix S and a skew-symmetric matrix V.

(8) Given A = [−4 3 1 5; 6 −9 2 −4; 8 7 −1 2] and B = [6 4 −2; −1 −3 4; 2 3 9; −2 5 −8], calculate, if possible, the third row of AB and the second column of BA.

(9) Use a proof by induction to show that if the n × n matrices A_1, …, A_k are diagonal, then the product A_1 ⋯ A_k is diagonal.

(10) Prove that if A is a skew-symmetric matrix, then A³ is also skew-symmetric.


Test for Chapter 2 – Version A

(1) Use Gaussian elimination to find the quadratic equation y = ax² + bx + c that goes through the points (3, 7), (−2, 12), and (−4, 56).

(2) Use Gaussian elimination or the Gauss-Jordan method to solve the following linear system. Give the full solution set.

5x_1 + 10x_2 + 2x_3 + 14x_4 + 2x_5 = −13
−7x_1 − 14x_2 − 2x_3 − 22x_4 + 4x_5 = 47
3x_1 + 6x_2 + x_3 + 9x_4 = −13

(3) Solve the following homogeneous system using the Gauss-Jordan method. Give the full solution set, expressing the vectors in it as linear combinations of particular solutions.

2x_1 + 5x_2 − 16x_3 − 9x_4 = 0
x_1 + x_2 − 2x_3 − 2x_4 = 0
−3x_1 + 2x_2 − 14x_3 − 2x_4 = 0

(4) Find the rank of A = [2 −1 8; −2 2 −10; −5 3 −21]. Is A row equivalent to I_3?

(5) Solve the following two systems simultaneously:

2x_1 + 5x_2 + 11x_3 = 8
2x_1 + 7x_2 + 14x_3 = 6
3x_1 + 11x_2 + 22x_3 = 9

and

2x_1 + 5x_2 + 11x_3 = 25
2x_1 + 7x_2 + 14x_3 = 30
3x_1 + 11x_2 + 22x_3 = 47

(6) Let A be an m × n matrix, B be an n × p matrix, and let R be the type (I) row operation R : ⟨i⟩ ← c⟨i⟩, for some scalar c and some i with 1 ≤ i ≤ m. Prove that R(AB) = R(A)B. (Since you are proving part of Theorem 2.1 in the textbook, you may not use that theorem in your proof. However, you may use the fact from Chapter 1 that (kth row of (AB)) = (kth row of A)B.)

(7) Determine whether or not [−15, 10, −23] is in the row space of A = [5 −4 8; 2 1 2; −1 −5 1].


(8) Find the inverse of A = [3 −1 2; 8 −9 3; 4 −3 2].

(9) Without using row reduction, find the inverse of the matrix A = [6 −8; 5 −9].

(10) Let A and B be nonsingular n × n matrices. Prove that A and B commute if and only if (AB)² = A²B².


Test for Chapter 2 – Version B

(1) Use Gaussian elimination to find the circle x² + y² + ax + by + c = 0 that goes through the points (5, −1), (6, −2), and (1, −7).

(2) Use Gaussian elimination or the Gauss-Jordan method to solve the following linear system. Give the full solution set.

3x_1 − 2x_2 + 19x_3 + 11x_4 + 9x_5 − 27x_6 = 31
−2x_1 + x_2 − 12x_3 − 7x_4 − 8x_5 + 21x_6 = −22
2x_1 − 2x_2 + 14x_3 + 8x_4 + 3x_5 − 14x_6 = 19

(3) Solve the following homogeneous system using the Gauss-Jordan method. Give the full solution set, expressing the vectors in it as linear combinations of particular solutions.

−3x_1 + 9x_2 + 8x_3 + 4x_4 = 0
5x_1 − 15x_2 + x_3 + 22x_4 = 0
4x_1 − 12x_2 + 3x_3 + 22x_4 = 0

(4) Find the rank of A = [3 −9 7; 1 −2 2; −15 41 −34]. Is A row equivalent to I_3?

(5) Solve the following two systems simultaneously:

4x_1 − 3x_2 − x_3 = −13
−4x_1 − 2x_2 − 3x_3 = 14
5x_1 − x_2 + x_3 = −17

and

4x_1 − 3x_2 − x_3 = 13
−4x_1 − 2x_2 − 3x_3 = −16
5x_1 − x_2 + x_3 = 18

(6) Let A be an m × n matrix, B be an n × p matrix, and let R be the type (II) row operation R : ⟨i⟩ ← c⟨j⟩ + ⟨i⟩, for some scalar c and some i, j with 1 ≤ i, j ≤ m. Prove that R(AB) = R(A)B. (Since you are proving part of Theorem 2.1 in the textbook, you may not use that theorem in your proof. However, you may use the fact from Chapter 1 that (kth row of (AB)) = (kth row of A)B.)

(7) Determine whether [−2, 3, 1] is in the row space of A = [2 −1 −5; −4 1 9; −1 1 3].

(8) Find the inverse of A = [3 3 1; 2 1 0; 10 3 −1].


(9) Solve the linear system

−5x_1 + 9x_2 + 4x_3 = −91
7x_1 − 14x_2 − 10x_3 = 146
−7x_1 + 12x_2 + 4x_3 = −120

without using row reduction, if the inverse of the coefficient matrix [−5 9 4; 7 −14 −10; −7 12 4] is [32 6 −17; 21 4 −11; −7 −3/2 7/2].

(10) (a) Prove that if A is a nonsingular matrix and c ≠ 0, then (cA)⁻¹ = (1/c)A⁻¹.

(b) Use the result from part (a) to prove that if A is a nonsingular skew-symmetric matrix, then A⁻¹ is also skew-symmetric.
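Solving a system from a known inverse, as problem (9) asks, is a single matrix-vector product, x = A⁻¹b. A pure-Python check using the matrices of problem (9):

```python
A = [[-5.0, 9.0, 4.0],
     [7.0, -14.0, -10.0],
     [-7.0, 12.0, 4.0]]
A_inv = [[32.0, 6.0, -17.0],
         [21.0, 4.0, -11.0],
         [-7.0, -1.5, 3.5]]
b = [-91.0, 146.0, -120.0]

def matvec(M, v):
    # Matrix-vector product for nested-list matrices.
    return [sum(M[i][j] * v[j] for j in range(len(v)))
            for i in range(len(M))]

x = matvec(A_inv, b)          # x = A^(-1) b
assert matvec(A, x) == b      # confirms A x = b
```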


Test for Chapter 2 – Version C

(1) Use Gaussian elimination to find the values of A, B, C, and D that solve the following partial fractions problem:

(5x² − 18x + 1)/((x − 2)²(x + 3)) = A/(x − 2) + (Bx + C)/(x − 2)² + D/(x + 3).

(2) Use Gaussian elimination or the Gauss-Jordan method to solve the following linear system. Give the full solution set.

9x_1 + 36x_2 − 11x_3 + 3x_4 + 34x_5 − 15x_6 = −6
−10x_1 − 40x_2 + 9x_3 − 2x_4 − 38x_5 − 9x_6 = −89
4x_1 + 16x_2 − 4x_3 + x_4 + 15x_5 = 22
11x_1 + 44x_2 − 12x_3 + 2x_4 + 47x_5 + 3x_6 = 77

(3) Solve the following homogeneous system using the Gauss-Jordan method. Give the full solution set, expressing the vectors in it as linear combinations of particular solutions.

−2x_1 − 2x_2 + 14x_3 + x_4 − 7x_5 = 0
6x_1 + 9x_2 − 27x_3 − 2x_4 + 21x_5 = 0
−3x_1 − 4x_2 + 16x_3 + x_4 − 10x_5 = 0

(4) Find the rank of A = [5 6 −2 23; 2 4 −1 13; −6 −9 1 −27; 4 6 −1 19]. Is A row equivalent to I_4?

(5) Find the reduced row echelon form matrix B for the matrix A = [2 2 3; −3 2 1; 5 1 3], and list a series of row operations that converts B to A.

(6) Determine whether [25, 12, −19] is a linear combination of a = [7, 3, 5], b = [3, 3, 4], and c = [3, 2, 3].

(7) Find the inverse of A = [−2 0 1 −1; 4 −1 −1 3; 3 1 −1 −2; 3 7 −2 −16].


(8) Solve the linear system

−4x_1 + 2x_2 − 5x_3 = −3
−4x_1 + x_2 − 3x_3 = −7
6x_1 + 3x_2 − 4x_3 = 26

without using row reduction, if the inverse of the coefficient matrix [−4 2 −5; −4 1 −3; 6 3 −4] is [5/2 −7/2 −1/2; −17 23 4; −9 12 2].

(9) Prove by induction that if A_1, …, A_k are nonsingular n × n matrices, then (A_1 ⋯ A_k)⁻¹ = (A_k)⁻¹ ⋯ (A_1)⁻¹.

(10) Suppose that A and B are n × n matrices, and that R_1, …, R_k are row operations such that R_1(R_2(⋯(R_k(AB))⋯)) = I_n. Prove that R_1(R_2(⋯(R_k(A))⋯)) = B⁻¹.


Test for Chapter 3 – Version A

(1) Calculate the area of the parallelogram determined by the vectors x = [−2, 5] and y = [6, −3].
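The area here is the absolute value of the 2 × 2 determinant with x and y as rows, |x_1 y_2 − x_2 y_1|. A one-line pure-Python check with this problem's vectors:

```python
x = [-2.0, 5.0]
y = [6.0, -3.0]

# Area of the parallelogram spanned by x and y: |det([x; y])|.
area = abs(x[0] * y[1] - x[1] * y[0])
assert area == 24.0
```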

(2) If A is a 5 × 5 matrix with determinant 4, what is |6A|? Why?

(3) Calculate the determinant of A = [5 −6 −3; 4 6 1; 5 10 2] by row reducing A to upper triangular form.

(4) Prove that if A and B are n × n matrices, then |AB^T| = |A^T B|.

(5) Use cofactor expansion along any row or column to find the determinant of

A = [3 0 −4 6; 4 −2 5 2; −3 3 0 −5; 7 0 0 −8].

Be sure to use cofactor expansion to find any 3 × 3 determinants needed as well.

(6) Find the inverse of A = [2 3 2; 1 2 2; −3 0 8] by first computing the adjoint matrix for A.

(7) Use Cramer's Rule to solve the following system:

5x_1 + x_2 + 2x_3 = 3
−8x_1 + 2x_2 − x_3 = 17
6x_1 − x_2 + x_3 = −10.

(8) Let A = [0 1 −1; 1 0 1; −1 −1 0]. Find a nonsingular matrix P having all integer entries, and a diagonal matrix D such that D = P⁻¹AP.

(9) Prove that an n × n matrix A is singular if and only if λ = 0 is an eigenvalue for A.

(10) Suppose that λ_1 = 2 is an eigenvalue for an n × n matrix A. Prove that λ_2 = 8 is an eigenvalue for A³.


Test for Chapter 3 – Version B

(1) Calculate the volume of the parallelepiped determined by the vectors x = [2, 7, −1], y = [4, 2, −3], and z = [−5, 6, 2].

(2) If A is a 4 × 4 matrix with determinant 7, what is |5A|? Why?

(3) Calculate the determinant of A = [2 −4 −12 15; 6 11 25 −32; 4 13 32 −41; 3 −16 −45 57] by row reducing A to upper triangular form.

(4) Prove that if A and B are n × n matrices with AB = −BA, and n is odd, then either A or B is singular.

(5) Use cofactor expansion along any row or column to find the determinant of

A = [2 1 5 2; 4 3 −1 0; −6 8 0 0; 1 7 0 −3].

Be sure to use cofactor expansion to find any 3 × 3 determinants needed as well.

(6) Find the inverse of A = [0 1 2; 4 3 −4; 2 1 −2] by first computing the adjoint matrix for A.

(7) Use Cramer's Rule to solve the following system:

4x_1 − 4x_2 − 3x_3 = −10
6x_1 − 5x_2 − 10x_3 = −28
−2x_1 + 2x_2 + 2x_3 = 6.

(8) Let A = [2 1 −3; 1 2 −3; 1 1 −2]. Find a nonsingular matrix P having all integer entries, and a diagonal matrix D such that D = P⁻¹AP.

(9) Consider the matrix A = [2 1 0 0; 0 2 1 0; 0 0 2 1; 0 0 0 2].

(a) Show that A has only one eigenvalue. What is it?

(b) Show that Step 3 of the Diagonalization Method of Section 3.4 produces only one fundamental eigenvector for A.


(10) Consider the matrix A = [3/5 −4/5; 4/5 3/5]. It can be shown that, for every nonzero vector X, the vectors X and (AX) form an angle with each other measuring θ = arccos(3/5) ≈ 53°. (You may assume this fact.) Show why A is not diagonalizable, first from an algebraic perspective, and then from a geometric perspective.


Test for Chapter 3 – Version C

(1) Calculate the volume of the parallelepiped determined by the vectors x = [5, −2, 3], y = [−3, 1, −2], and z = [4, −5, 1].

(2) Calculate the determinant of

        [ 4  3  1  2 ]
    A = [ 1  9  0  2 ]
        [ 8  3  2 −2 ]
        [ 4  3  1  1 ]

by row reducing A to upper triangular form.

(3) Use a cofactor expansion along any row or column to find the determinant of

        [  0   0   0  0  5 ]
        [  0  12   0  9  4 ]
    A = [  0   0   0  8  3 ]
        [  0  14  11  7  2 ]
        [ 15  13  10  6  1 ].

Be sure to use cofactor expansion on each 4 × 4, 3 × 3, and 2 × 2 determinant you need to calculate as well.

(4) Prove that if A is an orthogonal matrix (that is, A^T = A^(-1)), then |A| = ±1.

(5) Let A be a nonsingular matrix. Express A^(-1) in terms of |A| and the adjoint of A. Use this to prove that if A is an n × n matrix with |A| ≠ 0, then the determinant of the adjoint of A equals |A|^(n−1).

(6) Perform (only) the Base Step in the proof by induction of the following statement, which is part of Theorem 3.3: Let A be an n × n matrix, with n ≥ 2, and let R be the type (II) row operation ⟨j⟩ ← c⟨i⟩ + ⟨j⟩. Then |A| = |R(A)|.

(7) Use Cramer's Rule to solve the following system:

    −9x1 + 6x2 + 2x3 = −41
     5x1 − 3x2 −  x3 =  22
    −8x1 + 6x2 +  x3 = −40.

(8) Use diagonalization to calculate A^9 if

    A = [ 5  −6 ]
        [ 3  −4 ].
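Not part of the original manual: the diagonalization route for a matrix power can be sanity-checked in a few lines of Python. The eigendata below (λ = 2, −1 with eigenvectors [2, 1] and [1, 1]) was worked out by hand from the characteristic polynomial x^2 − x − 2 of the matrix in this problem:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[5, -6], [3, -4]]
# Eigenvalues 2 and -1, eigenvectors [2, 1] and [1, 1], so:
P    = [[2, 1], [1, 1]]
Pinv = [[1, -1], [-1, 2]]            # |P| = 1, so P^(-1) has integer entries
D9   = [[2 ** 9, 0], [0, (-1) ** 9]] # D^9 is computed entrywise on the diagonal
A9 = matmul(matmul(P, D9), Pinv)     # A^9 = P D^9 P^(-1)
print(A9)
```

Comparing the result against nine repeated multiplications of A by itself confirms the decomposition.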

(9) Let

        [ −4  6  −6 ]
    A = [  0  2   0 ].
        [  3 −3   5 ]

Find a nonsingular matrix P having all integer entries, and a diagonal matrix D such that D = P^(-1)AP.

(10) Let λ be an eigenvalue for an n × n matrix A having algebraic multiplicity k. Prove that λ is also an eigenvalue with algebraic multiplicity k for A^T.


Test for Chapter 4 – Version A

(1) Prove that R is a vector space using the operations ⊕ and ⊙ given by x ⊕ y = (x^5 + y^5)^(1/5) and a ⊙ x = a^(1/5) x.

(2) Prove that the set of n × n skew-symmetric matrices is a subspace of Mnn.

(3) Prove that the set {[5, −2, −1], [2, −1, −1], [8, −2, 1]} spans R^3.
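The problem asks for a proof, but the claim is easy to sanity-check first: three vectors span R^3 exactly when the 3 × 3 matrix having them as rows has nonzero determinant. A quick sketch (not part of the original manual):

```python
def det3(M):
    # 3x3 determinant by cofactor expansion along the first row.
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

rows = [[5, -2, -1], [2, -1, -1], [8, -2, 1]]
print(det3(rows))   # 1, which is nonzero, so the three vectors span R^3
```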

(4) Find a simplified general form for all the vectors in span(S) in P3 if

    S = {4x^3 − 20x^2 − x − 11, 6x^3 − 30x^2 − 2x − 18, −5x^3 + 25x^2 + 2x + 16}.

(5) Prove that the set {[2, −1, −11, −4], [9, −1, 2, 2], [4, 0, 21, 8], [−2, 1, 8, 3]} in R^4 is linearly independent.

(6) Determine whether

    [   4  13 ]   [  3  −15 ]   [  2  −12 ]   [ −4  7 ]
    [ −19   1 ] , [ 19   −1 ] , [ 15   −1 ] , [ −5  2 ]

is a basis for M22.

(7) Find a maximal linearly independent subset of the set

    S = {3x^3 − 2x^2 + 3x − 1, −6x^3 + 4x^2 − 6x + 2, x^3 − 3x^2 + 2x − 1, 7x^3 + 5x − 1, 11x^3 − 12x^2 + 13x + 3}

in P3. What is dim(span(S))?

(8) Prove that the columns of a nonsingular n × n matrix A span R^n.

(9) Consider the matrix

        [  5 −6  5 ]
    A = [  1  0  0 ],
        [ −2  4 −3 ]

which has λ = 2 as an eigenvalue. Find a basis B for R^3 that contains a basis for the eigenspace E2 for A.

(10) (a) Find the transition matrix from B-coordinates to C-coordinates if

        B = ([−11, −19, 12], [7, 16, −2], [−8, −12, 11]) and
        C = ([2, 4, −1], [−1, −2, 1], [1, 1, −2])

    are ordered bases for R^3.

    (b) Given v = [−63, −113, 62] ∈ R^3, find [v]B, and use your answer to part (a) to find [v]C.
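The mechanics behind part (a) can be sketched in code: the transition matrix from B to C has, as its columns, the C-coordinates of the B basis vectors, found by solving against the matrix whose columns are the C vectors. The bases below are small illustrative ones (hypothetical, not the problem's data):

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inverse(M):
    """Gauss-Jordan inversion over the rationals."""
    n = len(M)
    aug = [[Fraction(x) for x in row] +
           [Fraction(1 if i == j else 0) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def transition(B, C):
    """Transition matrix from B- to C-coordinates: its columns are the
    C-coordinates of the B basis vectors."""
    Bcols = [list(col) for col in zip(*B)]   # basis vectors as columns
    Ccols = [list(col) for col in zip(*C)]
    return matmul(inverse(Ccols), Bcols)

B = [[1, 0], [0, 1]]   # standard basis for R^2
C = [[1, 0], [1, 1]]
print(transition(B, C))
```

Once the transition matrix T is in hand, part (b) is a single multiplication: [v]_C = T [v]_B.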


Test for Chapter 4 – Version B

(1) Prove that R with the usual scalar multiplication, but with addition given by x ⊕ y = 3(x + y), is not a vector space.

(2) Prove that the set of all polynomials p in P4 for which the coefficient of the second-degree term equals the coefficient of the fourth-degree term is a subspace of P4.

(3) Let V be a vector space with subspaces W1 and W2. Prove that the intersection of W1 and W2 is a subspace of V.

(4) Let S = {[5, −15, −4], [−2, 6, 1], [−9, 27, 7]}. Determine whether [2, −4, 3] is in span(S).

(5) Find a simplified general form for all the vectors in span(S) in M22 if

    S = { [  2  −6 ]   [  5  −15 ]   [  8  −24 ]   [ 3  −9 ] }
        { [ −1  −1 ] , [ −1    1 ] , [ −2    1 ] , [ 1   2 ] }.

(6) Let S be the set

    {2x^3 − 11x^2 − 12x − 1, 7x^3 + 35x^2 + 16x − 7, −5x^3 + 29x^2 + 33x + 2, −5x^3 − 26x^2 − 12x + 5}

in P3, and let p(x) ∈ P3. Prove that there is exactly one way to express p(x) as a linear combination of the elements of S.

(7) Find a subset of

    S = {[2, −3, 4, −1], [−6, 9, −12, 3], [3, 1, −2, 2], [2, 8, −12, 3], [7, 6, −10, 4]}

that is a basis for span(S) in R^4. Does S span R^4?

(8) Let A be a nonsingular n × n matrix. Prove that the rows of A are linearly independent.

(9) Consider the matrix

        [ −1 −2  3  1 ]
    A = [  2 −6  6  7 ],
        [  0  0 −2  0 ]
        [  2 −4  6  5 ]

which has λ = −2 as an eigenvalue. Find a basis B for R^4 that contains a basis for the eigenspace E−2 for A.

(10) (a) Find the transition matrix from B-coordinates to C-coordinates if

        B = (−21x^2 + 14x − 38, 17x^2 − 10x + 32, −10x^2 + 4x − 23) and
        C = (−7x^2 + 4x − 14, 2x^2 − x + 4, x^2 − x + 1)

    are ordered bases for P2.

    (b) Given v = −100x^2 + 64x − 181 ∈ P2, find [v]B, and use your answer to part (a) to find [v]C.


Test for Chapter 4 – Version C

(1) The set R^2 with operations [x, y] ⊕ [w, z] = [x + w + 3, y + z − 4] and a ⊙ [x, y] = [ax + 3a − 3, ay − 4a + 4] is a vector space. Find the zero vector 0 and the additive inverse of v = [x, y] for this vector space.

(2) Prove that the set of all 3-vectors orthogonal to [2, −3, 1] forms a subspace of R^3.

(3) Determine whether [−3, 6, 5] is in span(S) in R^3, if

    S = {[3, −6, −1], [4, −8, −3], [5, −10, −1]}.

(4) Find a simplified form for all the vectors in span(S) in P3 if

    S = {4x^3 − x^2 − 11x + 18, −3x^3 + x^2 + 9x − 14, 5x^3 − x^2 − 13x + 22}.

(5) Prove that the set

    [ −3   63 ]   [ −5   56 ]   [ −4   44 ]   [   5   14 ]
    [  9  −46 ] , [ 14  −41 ] , [ 13  −32 ] , [ −13  −10 ]

in M22 is linearly independent.

(6) Find a maximal linearly independent subset of

    S = { [  3  −1 ]   [ −6   2 ]   [  2  1 ]   [ −1  −3 ]   [  4  −3 ] }
        { [ −2   4 ] , [  4  −8 ] , [ −2  0 ] , [  2   4 ] , [ −2   2 ] }

in M22. What is dim(span(S))?

(7) Let A be an n × n singular matrix and let S be the set of columns of A. Prove that dim(span(S)) < n.

(8) Enlarge the linearly independent set

    T = {2x^3 − 3x^2 + 3x + 1, −x^3 + 4x^2 − 6x − 2}

of P3 to a basis for P3.

(9) Let A ≠ In be an n × n matrix having eigenvalue λ = 1. Prove that dim(E1) < n.

(10) (a) Find the transition matrix from B-coordinates to C-coordinates if

        B = ([10, −17, 8], [−4, 10, −5], [29, −36, 16]) and
        C = ([12, −12, 5], [−5, 3, −1], [1, −2, 1])

    are ordered bases for R^3.

    (b) Given v = [−109, 155, −71] ∈ R^3, find [v]B, and use your answer to part (a) to find [v]C.


Test for Chapter 5 – Version A

(1) Prove that the mapping f: Mnn → Mnn given by f(A) = BAB^(-1), where B is some fixed nonsingular n × n matrix, is a linear operator.

(2) Suppose L: R^3 → R^3 is a linear transformation, and L([−2, 5, 2]) = [2, 1, −1], L([0, 1, 1]) = [1, −1, 0], and L([1, −2, −1]) = [0, 3, −1]. Find L([2, −3, 1]).

(3) Find the matrix A_BC for L: R^2 → P2 given by

    L([a, b]) = (−2a + b)x^2 − (2b)x + (a + 2b),

for the ordered bases B = ([3, −7], [2, −5]) for R^2, and C = (9x^2 + 20x − 21, 4x^2 + 9x − 9, 10x^2 + 22x − 23) for P2.

(4) For the linear transformation L: R^4 → R^3 given by L(x) = Ax, where

    A = [  4 −8 −1  −7 ]
        [ −3  6  1   6 ],
        [ −4  8  2  10 ]

find a basis for ker(L), a basis for range(L), and verify that the Dimension Theorem holds for L.

(5) (a) Consider the linear transformation L: P3 → R^3 given by

        L(p(x)) = [ p(3), p′(1), ∫₀¹ p(x) dx ].

    Prove that ker(L) is nontrivial.

    (b) Use part (a) to prove that there is a nonzero polynomial p ∈ P3 such that p(3) = 0, p′(1) = 0, and ∫₀¹ p(x) dx = 0.

(6) Prove that the mapping L: R^3 → R^3 given by L(x) = Ax, where

    A = [ −5  2  1 ]
        [  6 −3 −2 ],
        [ 10 −3 −1 ]

is one-to-one. Is L an isomorphism?

(7) Show that L: Pn → Pn given by L(p) = p − p′ is an isomorphism.


(8) Consider the diagonalizable operator L: M22 → M22 given by L(K) = K − K^T. Let A be the matrix representation of L with respect to the standard basis C for M22.

    (a) Find an ordered basis B of M22 consisting of fundamental eigenvectors for L, and the diagonal matrix D that is the matrix representation of L with respect to B.

    (b) Calculate the transition matrix P from B to C, and verify that D = P^(-1)AP.

(9) Indicate whether each given statement is true or false.

    (a) If L is a linear operator on a finite dimensional vector space for which every root of pL(x) is real, then L is diagonalizable.

    (b) Let L: V → V be an isomorphism with eigenvalue λ. Then λ ≠ 0 and 1/λ is an eigenvalue for L^(-1).

(10) Prove that the linear operator on P3 given by L(p(x)) = p(x + 1) is not diagonalizable.


Test for Chapter 5 – Version B

(1) Prove that the mapping f: Mnn → Mnn given by f(A) = A + A^T is a linear operator.

(2) Suppose L: R^3 → P3 is a linear transformation, and L([1, −1, 1]) = 2x^3 − x^2 + 3, L([2, 5, 6]) = −11x^3 + x^2 + 4x − 2, and L([1, 4, 4]) = −x^3 − 3x^2 + 2x + 10. Find L([6, −4, 7]).

(3) Find the matrix A_BC for L: M22 → R^3 given by

    L( [ a  b ] ) = [−2a + c − d, a − 2b + d, 2a − b + c],
       [ c  d ]

for the ordered bases

    B = ( [  1 −2 ]   [ −4  6 ]   [ −2  1 ]   [ 1  4 ] )
        ( [ −4 −1 ] , [ 11  3 ] , [  3  1 ] , [ 5  1 ] )

for M22 and C = ([−1, 0, 2], [2, 2, −1], [1, 3, 2]) for R^3.

(4) For the linear transformation L: R^4 → R^4 given by L(x) = Ax, where

    A = [  2  5   4  8 ]
        [  1 −1  −5 −1 ],
        [ −4  2  16  1 ]
        [  4  1 −10  2 ]

find a basis for ker(L), a basis for range(L), and verify that the Dimension Theorem holds for L.

(5) Let A be a fixed m × n matrix, with m ≠ n. Let L1: R^n → R^m be given by L1(x) = Ax, and let L2: R^m → R^n be given by L2(y) = A^T y.

    (a) Prove or disprove: dim(range(L1)) = dim(range(L2)).

    (b) Prove or disprove: dim(ker(L1)) = dim(ker(L2)).

(6) Prove that the mapping L: M22 → M22 given by

    L( [ a  b ] ) = [ 3a − 2b − 3c − 4d   −a − c − d     ]
       [ c  d ]     [ −a − d              3a − b − c − d ]

is one-to-one. Is L an isomorphism?

(7) Let A be a fixed nonsingular n × n matrix. Show that L: Mnn → Mnn given by L(B) = BA^(-1) is an isomorphism.


(8) Consider the diagonalizable operator L: M22 → M22 given by

    L( [ a  b ] ) = [ −7  12 ] [ a  b ]
       [ c  d ]     [ −4   7 ] [ c  d ].

Let A be the matrix representation of L with respect to the standard basis C for M22.

    (a) Find an ordered basis B of M22 consisting of fundamental eigenvectors for L, and the diagonal matrix D that is the matrix representation of L with respect to B.

    (b) Calculate the transition matrix P from B to C, and verify that D = P^(-1)AP.

(9) Indicate whether each given statement is true or false.

    (a) Let L: V → V be an isomorphism on a finite dimensional vector space V, and let λ be an eigenvalue for L. Then λ = ±1.

    (b) A linear operator L on an n-dimensional vector space is diagonalizable if and only if L has n distinct eigenvalues.

(10) Let L be the linear operator on R^3 representing a counterclockwise rotation through an angle of π/6 radians around an axis through the origin that is parallel to [4, −2, 3]. Explain in words why L is not diagonalizable.


Test for Chapter 5 – Version C

(1) Let A be a fixed n × n matrix. Prove that the function L: Pn → Mnn given by

    L(a_n x^n + a_(n−1) x^(n−1) + ··· + a_1 x + a_0) = a_n A^n + a_(n−1) A^(n−1) + ··· + a_1 A + a_0 In

is a linear transformation.

(2) Suppose L: R^3 → M22 is a linear transformation such that

    L([6, −1, 1]) = [ 35 −24 ],   L([4, −2, −1]) = [ 21 −18 ],
                    [ 17   9 ]                     [  5  11 ]

    and L([−7, 3, 1]) = [ −38  31 ].
                        [ −11 −17 ]

Find L([−3, 1, −4]).

(3) Find the matrix A_BC for L: R^3 → R^4 given by

    L([x, y, z]) = [3x − y, x − z, 2x + 2y − z, −x − 2y],

for the ordered bases

    B = ([−5, 1, −2], [10, −1, 3], [41, −12, 20]) for R^3, and
    C = ([1, 0, −1, −1], [0, 1, −2, 1], [−1, 0, 2, 1], [0, −1, 2, 0]) for R^4.

(4) Consider L: M22 → P2 given by

    L( [ a11  a12 ] ) = (a11 + a22)x^2 + (a11 − a21)x + a12.
       [ a21  a22 ]

What is ker(L)? What is range(L)? Verify that the Dimension Theorem holds for L.

(5) Suppose that L1: V → W and L2: W → Y are linear transformations, where V, W, and Y are finite dimensional vector spaces.

    (a) Prove that ker(L1) ⊆ ker(L2 ∘ L1).

    (b) Prove that dim(range(L1)) ≥ dim(range(L2 ∘ L1)).

(6) Prove that the mapping L: P2 → R^4 given by

    L(ax^2 + bx + c) = [−5a + b − c, −7a − c, 2a − b − c, −2a + 5b + c]

is one-to-one. Is L an isomorphism?

(7) Let B = (v1, ..., vn) be an ordered basis for an n-dimensional vector space V. Define L: R^n → V by L([a1, ..., an]) = a1 v1 + ··· + an vn. Prove that L is an isomorphism.


(8) Consider the diagonalizable operator L: P2 → P2 given by

    L(ax^2 + bx + c) = (3a + b − c)x^2 + (−2a + c)x + (4a + 2b − c).

Let A be the matrix representation of L with respect to the standard basis C for P2.

    (a) Find an ordered basis B of P2 consisting of fundamental eigenvectors for L, and the diagonal matrix D that is the matrix representation of L with respect to B.

    (b) Calculate the transition matrix P from B to C, and verify that D = P^(-1)AP.

(9) Indicate whether each given statement is true or false.

    (a) If L is a diagonalizable linear operator on a finite dimensional vector space, then the algebraic multiplicity of λ equals the geometric multiplicity of λ for every eigenvalue λ of L.

    (b) A linear operator L on a finite dimensional vector space V is one-to-one if and only if 0 is not an eigenvalue for L.

(10) Prove that the linear operator on Pn, for n > 0, defined by L(p) = p′ is not diagonalizable.


Test for Chapter 6 – Version A

(1) Use the Gram-Schmidt Process to enlarge the following set to an orthogonal basis for R^3: {[4, −3, 2]}.
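A sketch of the enlargement procedure (not part of the original manual): append the standard basis vectors after the given vector, run Gram-Schmidt, and discard any vector that collapses to zero because it was already in the span of its predecessors.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: subtract from each vector its projections
    onto the previously accepted orthogonal vectors; skip zero results."""
    basis = []
    for v in vectors:
        w = [Fraction(x) for x in v]
        for b in basis:
            coef = dot(w, b) / dot(b, b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        if any(w):
            basis.append(w)
    return basis

# Enlarge {[4, -3, 2]}: the standard basis vectors certainly span R^3,
# so feeding them in after the given vector yields a full orthogonal basis.
basis = gram_schmidt([[4, -3, 2], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(basis)
```

Exact rational arithmetic keeps the orthogonality checks exact; scaling each output vector to clear denominators gives an equally valid integer answer.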

(2) Let L: R^3 → R^3 be the orthogonal reflection through the plane x − 2y + 2z = 0. Use eigenvalues and eigenvectors to find the matrix representation of L with respect to the standard basis for R^3. (Hint: [1, −2, 2] is orthogonal to the given plane.)

(3) For the subspace W = span({[2, −3, 1], [−3, 2, 2]}) of R^3, find a basis for W^⊥. What is dim(W^⊥)? (Hint: The two given vectors are not orthogonal.)

(4) Suppose A is an n × n orthogonal matrix. Prove that for every x ∈ R^n, ||Ax|| = ||x||.

(5) Prove the following part of Corollary 6.14: Let W be a subspace of R^n. Then W ⊆ (W^⊥)^⊥.

(6) Let L: R^4 → R^4 be the orthogonal projection onto the subspace W = span({[2, 1, −3, −1], [5, 2, 3, 3]}) of R^4. Find the characteristic polynomial of L.

(7) Explain why L: R^3 → R^3 given by the orthogonal projection onto the plane 3x + 5y − 6z = 0 is orthogonally diagonalizable.

(8) Consider L: R^3 → R^3 given by L(x) = Ax, where

    A = [  5  4 −2 ]
        [  4  5  2 ].
        [ −2  2  8 ]

Find an ordered orthonormal basis B of fundamental eigenvectors for L, and the diagonal matrix D that is the matrix for L with respect to B.

(9) Use orthogonal diagonalization to find a symmetric matrix A such that

    A^3 = (1/5) [ 31  18 ]
                [ 18   4 ].

(10) Let L be a symmetric operator on R^n and let λ1 and λ2 be distinct eigenvalues for L with corresponding eigenvectors v1 and v2. Prove that v1 ⊥ v2.


Test for Chapter 6 – Version B

(1) Use the Gram-Schmidt Process to find an orthogonal basis for R^3, starting with the linearly independent set {[2, −3, 1], [4, −2, −1]}. (Hint: The two given vectors are not orthogonal.)

(2) Let L: R^3 → R^3 be the orthogonal projection onto the plane 3x + 4z = 0. Use eigenvalues and eigenvectors to find the matrix representation of L with respect to the standard basis for R^3. (Hint: [3, 0, 4] is orthogonal to the given plane.)

(3) Let W = span({[2, 2, −3], [3, 0, −1]}) in R^3. Decompose v = [−12, −36, −43] into w1 + w2, where w1 ∈ W and w2 ∈ W^⊥. (Hint: The two vectors spanning W are not orthogonal.)
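The decomposition in problem (3) follows a fixed recipe: orthogonalize the spanning set first (as the hint suggests), then take w1 as the sum of the projections of v onto the orthogonalized vectors, and w2 = v − w1. A pure-Python sketch with exact rational arithmetic (not part of the original manual):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(v, b):
    # Projection of v onto b: ((v . b) / (b . b)) b, computed exactly.
    c = Fraction(dot(v, b), 1) / Fraction(dot(b, b), 1)
    return [c * x for x in b]

# Orthogonalize the spanning set {u1, u2raw} by one Gram-Schmidt step.
u1 = [2, 2, -3]
u2raw = [3, 0, -1]
u2 = [x - y for x, y in zip(u2raw, proj(u2raw, u1))]

v = [-12, -36, -43]
w1 = [a + b for a, b in zip(proj(v, u1), proj(v, u2))]
w2 = [x - y for x, y in zip(v, w1)]
print(w1, w2)
```

The assertions below verify the two defining properties directly: w1 + w2 recovers v, and w2 is orthogonal to both original spanning vectors.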

(4) Suppose that A and B are n × n orthogonal matrices. Prove that AB is an orthogonal matrix.

(5) Prove the following part of Corollary 6.14: Let W be a subspace of R^n. Assuming W ⊆ (W^⊥)^⊥ has already been shown, prove that W = (W^⊥)^⊥.

(6) Let v = [−3, 5, 1, 0], and let W be the subspace of R^4 spanned by S = {[0, 3, 6, −2], [−6, 2, 0, 3]}. Find proj_(W^⊥) v.

(7) Is L: R^2 → R^2 given by L(x) = Ax, where

    A = [ 3   7 ]
        [ 7  −5 ],

orthogonally diagonalizable? Explain.

(8) Consider L: R^3 → R^3 given by L(x) = Ax, where

    A = [ 40  18   6 ]
        [ 18  13 −12 ].
        [  6 −12  45 ]

Find an ordered orthonormal basis B of fundamental eigenvectors for L, and the diagonal matrix D that is the matrix for L with respect to B.

(9) Use orthogonal diagonalization to find a symmetric matrix A such that

    A^2 = [ 10  −6 ]
          [ −6  10 ].

(10) Let A be an n × n symmetric matrix. Prove that if all eigenvalues for A are ±1, then A is orthogonal.


Test for Chapter 6 – Version C

(1) Use the Gram-Schmidt Process to enlarge the following set to an orthogonal basis for R^4: {[−3, 1, −1, 2], [2, 4, 0, 1]}.

(2) Let L: R^3 → R^3 be the orthogonal projection onto the plane 2x − 2y − z = 0. Use eigenvalues and eigenvectors to find the matrix representation of L with respect to the standard basis for R^3. (Hint: [2, −2, −1] is orthogonal to the given plane.)

(3) For the subspace W = span({[5, 1, −2], [−8, −2, 9]}) of R^3, find a basis for W^⊥. What is dim(W^⊥)? (Hint: The two given vectors are not orthogonal.)

(4) Let A be an n × n orthogonal skew-symmetric matrix. Show that n is even. (Hint: Assume n is odd and calculate |A^2| to reach a contradiction.)

(5) Prove that if W1 and W2 are two subspaces of R^n such that W1 ⊆ W2, then W2^⊥ ⊆ W1^⊥.

(6) Let P = (5, 1, 0, −4), and let

    W = span({[2, 1, 0, 2], [−2, 0, 1, 2], [0, 2, 2, −1]}).

Find the minimum distance from P to W.

(7) Explain why the matrix for L: R^3 → R^3 given by the orthogonal reflection through the plane 4x − 3y − z = 0 with respect to the standard basis for R^3 is symmetric.

(8) Consider L: R^3 → R^3 given by L(x) = Ax, where

    A = [  49 −84 −72 ]
        [ −84  23 −84 ].
        [ −72 −84  49 ]

Find an ordered orthonormal basis B of fundamental eigenvectors for L, and the diagonal matrix D that is the matrix for L with respect to B.

(9) Use orthogonal diagonalization to find a symmetric matrix A such that

    A^2 = [ 10  30 ]
          [ 30  90 ].

(10) Let A and B be n × n symmetric matrices such that pA(x) = pB(x). Prove that there is an orthogonal matrix P such that A = PBP^(-1).


Test for Chapter 7 – Version A

Note that there are more than 10 problems in this test, with at least three problems from each section of Chapter 7.

(1) (Section 7.1) Verify that a · b equals the complex conjugate of b · a for the complex vectors a = [2, i, 2 + i] and b = [1 + i, 1 − i, 2 + i].

(2) (Section 7.1) Suppose H and P are n × n complex matrices and that H is Hermitian. Prove that P*HP is Hermitian.

(3) (Section 7.1) Indicate whether each given statement is true or false.

    (a) If Z and W are n × n complex matrices, then the conjugate of ZW equals (the conjugate of W)(the conjugate of Z).

    (b) If an n × n complex matrix is normal, then it is either Hermitian or skew-Hermitian.

(4) (Section 7.2) Use Gaussian elimination to solve the following system of linear equations:

           i z1 + (1 + 3i) z2 =  6 + 2i
    (2 + 3i) z1 + (7 + 7i) z2 = 20 − 2i.
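Python's built-in complex type (i is written 1j) makes a 2 × 2 complex elimination a direct transcription of the hand procedure. A sketch for the system above (not part of the original manual):

```python
# Coefficients and right-hand sides of the two equations.
a11, a12, b1 = 1j,     1 + 3j, 6 + 2j
a21, a22, b2 = 2 + 3j, 7 + 7j, 20 - 2j

m = a21 / a11                       # multiplier for the row operation R2 <- R2 - m*R1
a22p, b2p = a22 - m * a12, b2 - m * b1
z2 = b2p / a22p                     # back-substitution
z1 = (b1 - a12 * z2) / a11
print(z1, z2)
```

Substituting the computed (z1, z2) back into both equations confirms the solution; the residuals below are zero up to floating-point roundoff.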

(5) (Section 7.2) Give an example of a matrix having all real entries that is not diagonalizable when thought of as a real matrix, but is diagonalizable when thought of as a complex matrix. Prove that your example works.

(6) (Section 7.2) Find all eigenvalues and a basis of fundamental eigenvectors for each eigenspace for L: C^3 → C^3 given by L(z) = Az, where

    A = [  1  0  −1 − i ]
        [ −1  i       i ].
        [ −i  0  −1 + i ]

Is L diagonalizable?

(7) (Section 7.3) Prove or disprove: For n ≥ 1, the set of polynomials in Pn^C with real coefficients is a complex subspace of Pn^C.

(8) (Section 7.3) Find a basis B for span(S) that is a subset of

    S = {[1 + 3i, 2 + i, 1 − 2i], [−2 + 4i, 1 + 3i, 3 − i], [1, 2i, 3 + i], [5 + 6i, 3, −1 − 2i]}.

(9) (Section 7.3) Find the matrix with respect to the standard bases for the complex linear transformation L: M22^C → C^2 given by

    L( [ a  b ] ) = [a + bi, c + di].
       [ c  d ]

Also, find the matrix for L with respect to the standard bases when thought of as a linear transformation between real vector spaces.


(10) (Section 7.4) Use the Gram-Schmidt Process to find an orthogonal basis for the complex vector space C^3 starting with the linearly independent set {[1, i, 1 + i], [3, −i, −1 − i]}.

(11) (Section 7.4) Find a unitary matrix P such that

    P* [ 0  i ] P
       [ i  0 ]

is diagonal.

(12) (Section 7.4) Show that if A and B are n × n unitary matrices, then AB is unitary and | |A| | = 1 (that is, the determinant of A has complex modulus 1).

(13) (Section 7.5) Prove that

    ⟨f, g⟩ = f(0)g(0) + f(1)g(1) + f(2)g(2)

is a real inner product on P2. For f(x) = 2x + 1 and g(x) = x^2 − 1, calculate ⟨f, g⟩ and ||f||.
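The two requested quantities for problem (13) can be checked numerically; this inner product is just a three-point sum of function values, so the computation is a few lines (a sketch, not part of the original manual):

```python
import math

def ip(f, g):
    # <f, g> = f(0)g(0) + f(1)g(1) + f(2)g(2)
    return sum(f(t) * g(t) for t in (0, 1, 2))

f = lambda x: 2 * x + 1
g = lambda x: x ** 2 - 1
print(ip(f, g), math.sqrt(ip(f, f)))   # <f, g> = 14 and ||f|| = sqrt(35)
```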

(14) (Section 7.5) Find the distance between x = [4, 1, 2] and y = [3, 3, 5] in the real inner product space consisting of R^3 with inner product ⟨x, y⟩ = Ax · Ay, where

    A = [ 4  0 −1 ]
        [ 3  3  1 ].
        [ 1  2  1 ]

(15) (Section 7.5) Prove that in a real inner product space, for any vectors x and y,

    ||x|| = ||y|| if and only if ⟨x + y, x − y⟩ = 0.

(16) (Section 7.5) Use the Generalized Gram-Schmidt Process to find an orthogonal basis for P2 starting with the linearly independent set {x^2, x}, using the real inner product given by

    ⟨f, g⟩ = ∫₋₁¹ f(t)g(t) dt.

(17) (Section 7.5) Decompose v = [0, 1, −3] in R^3 as w1 + w2, where w1 ∈ W = span({[2, 1, −4], [4, 3, −8]}) and w2 ∈ W^⊥, using the real inner product on R^3 given by ⟨x, y⟩ = Ax · Ay, with

    A = [ 3 −1  1 ]
        [ 0  3  1 ].
        [ 1  2  1 ]


Test for Chapter 7 – Version B

Note that there are more than 10 problems in this test, with at least three problems from each section of Chapter 7.

(1) (Section 7.1) Calculate A*B for the complex matrices

    A = [ i  3 + 2i ]   and   B = [ 1 − i  3 − 2i ]
        [ 5    −2i ]             [    3i       7 ].

(2) (Section 7.1) Prove that if H is a Hermitian matrix, then iH is a skew-Hermitian matrix.

(3) (Section 7.1) Indicate whether each given statement is true or false.

    (a) Every skew-Hermitian matrix must have all zeroes on its main diagonal.

    (b) If x, y ∈ C^n, but each entry of x and y is real, then x · y produces the same answer if the dot product is thought of as taking place in C^n as it would if the dot product were considered to be taking place in R^n.

(4) (Section 7.2) Use an inverse matrix to solve the given linear system:

    (4 + i) z1 + (2 + 14i) z2 = 21 + 4i
    (2 − i) z1 +  (6 + 5i) z2 = 10 − 6i.

(5) (Section 7.2) Explain why the sum of the algebraic multiplicities of the eigenvalues of a complex n × n matrix must equal n.

(6) (Section 7.2) Find all eigenvalues and a basis of fundamental eigenvectors for each eigenspace for L: C^2 → C^2 given by L(z) = Az, where

    A = [ −i  1  ]
        [  4  3i ].

Is L diagonalizable?

(7) (Section 7.3) Prove or disprove: The set of n × n Hermitian matrices is a complex subspace of Mnn^C. If it is a subspace, compute its dimension.

(8) (Section 7.3) If

    S = {[i, −1, −2 + i], [2 − i, 2 + 2i, 5 + 2i], [0, 2, 2 − 2i], [1 + 3i, −2 + 2i, −3 + 5i]},

find a basis B for span(S) that uses vectors having a simpler form than those in S.


(9) (Section 7.3) Show that L: M22^C → M22^C given by L(Z) = Z* is not a complex linear transformation.

(10) (Section 7.4) Use the Gram-Schmidt Process to find an orthogonal basis for the complex vector space C^3 containing {[1 + i, 1 − i, 2]}.

(11) (Section 7.4) Prove that

    Z = [ 1 + i  −1 + i ]
        [ 1 − i   1 + i ]

is unitarily diagonalizable.

(12) (Section 7.4) Let A be an n × n unitary matrix. Prove that A^2 = In if and only if A is Hermitian.

(13) (Section 7.5) Let A be the 3 × 3 nonsingular matrix

    [  4  3  1 ]
    [ −1  4  3 ].
    [ −1  1  1 ]

Prove that ⟨x, y⟩ = Ax · Ay is a real inner product on R^3. For x = [2, −1, 1] and y = [1, 4, 2], calculate ⟨x, y⟩ and ||x|| for this inner product.

(14) (Section 7.5) Find the distance between w = [4 + i, 5, 4 + i, 2 + 2i] and z = [1 + i, 2i, −2i, 2 + 3i] in the complex inner product space C^4 using the usual complex dot product as an inner product.

(15) (Section 7.5) Prove that in a real inner product space, for any vectors x and y,

    ⟨x, y⟩ = (1/4)(||x + y||^2 − ||x − y||^2).

(16) (Section 7.5) Use the Generalized Gram-Schmidt Process to find an orthogonal basis for P2 containing x^2, using the real inner product given by

    ⟨f, g⟩ = f(0)g(0) + f(1)g(1) + f(2)g(2).

(17) (Section 7.5) Decompose v = x + 3 in P2 as w1 + w2, where w1 ∈ W = span({x^2}) and w2 ∈ W^⊥, using the real inner product on P2 given by

    ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt.


Test for Chapter 7 – Version C

Note that there are more than 10 problems in this test, with at least three problems from each section of Chapter 7.

(1) (Section 7.1) Calculate A^T B* for the complex matrices

    A = [      2  3 − i ]   and   B = [ 4 + i      3i ]
        [ 1 + 2i     −i ]             [     2  1 − 2i ].

(2) (Section 7.1) Let H and J be n × n Hermitian matrices. Prove that if HJ is Hermitian, then HJ = JH.

(3) (Section 7.1) Indicate whether each given statement is true or false.

    (a) If x, y ∈ C^n with x · y ∈ R, then x · y = y · x.

    (b) If A is an n × n Hermitian matrix, and x, y ∈ C^n, then (Ax) · y = x · (Ay).

(4) (Section 7.2) Use Cramer's Rule to solve the following system of linear equations:

          −i z1 + (1 − i) z2 = −3 − 2i
    (1 − 2i) z1 + (4 − i) z2 = −5 − 6i.

(5) (Section 7.2) Show that a complex n × n matrix that is not diagonalizable must have an eigenvalue λ whose algebraic multiplicity is strictly greater than its geometric multiplicity.

(6) (Section 7.2) Find all eigenvalues and a basis of fundamental eigenvectors for each eigenspace for L: C^2 → C^2 given by L(z) = Az, where

    A = [   0       2 ]
        [ −2i  2 + 2i ].

Is L diagonalizable?

(7) (Section 7.3) Prove that the set of normal 2 × 2 matrices is not a complex subspace of M22^C by showing that it is not closed under matrix addition.

(8) (Section 7.3) Find a basis B for C^4 that contains

    {[1, 3 + i, i, 1 − i], [1 + i, 3 + 4i, −1 + i, 2]}.

(9) (Section 7.3) Let w ∈ C^n be a fixed nonzero vector. Show that L: C^n → C given by L(z) = w · z is not a complex linear transformation.

(10) (Section 7.4) Use the Gram-Schmidt Process to find an orthogonal basis for the complex vector space C^3, starting with the basis {[2 − i, 1 + 2i, 1], [0, 1, 0], [−1, 0, 2 + i]}.


(11) (Section 7.4) Let Z be an n × n complex matrix whose rows form an orthonormal basis for C^n. Prove that the columns of Z also form an orthonormal basis for C^n.

(12) (Section 7.4)

    (a) Show that

        A = [ −23i   36 ]
            [  −36  −2i ]

    is normal, and hence unitarily diagonalizable.

    (b) Find a unitary matrix P such that P^(-1)AP is diagonal.

(13) (Section 7.5) For x = [x1, x2] and y = [y1, y2] in R^2, prove that ⟨x, y⟩ = 2x1y1 − x1y2 − x2y1 + x2y2 is a real inner product on R^2. For x = [2, 3] and y = [4, −4], calculate ⟨x, y⟩ and ||x|| for this inner product.

(14) (Section 7.5) Find the distance between f(x) = sin^2 x and g(x) = −cos^2 x in the real inner product space consisting of the set of all real-valued continuous functions defined on the interval [0, π] with inner product

    ⟨f, g⟩ = ∫₀^π f(t)g(t) dt.

(15) (Section 7.5) Prove that in a real inner product space, vectors x and y are orthogonal if and only if ||x + y||^2 = ||x||^2 + ||y||^2.

(16) (Section 7.5) Use the Generalized Gram-Schmidt Process to find an orthogonal basis for R^3, starting with the linearly independent set {[−1, 1, 1], [3, −4, −1]}, using the real inner product given by ⟨x, y⟩ = Ax · Ay, where

    A = [ 3  2  1 ]
        [ 2  1  2 ].
        [ 2  1  1 ]

(17) (Section 7.5) Decompose v = x^2 in P2 as w1 + w2, where w1 ∈ W = span({x − 5, 6x − 5}) and w2 ∈ W^⊥, using the real inner product on P2 given by

    ⟨f, g⟩ = f(0)g(0) + f(1)g(1) + f(2)g(2).


Andrilli/Hecker - Answers to Chapter Tests Chapter 1 - Version A

Answers to Test for Chapter 1 – Version A

(1) The net velocity vector is [5√2/2, 2 − 5√2/2] ≈ [3.54, −1.54] km/hr. The net speed is approximately 3.85 km/hr.

(2) The angle (to the nearest degree) is 137°.

(3) ||x + y||^2 + ||x − y||^2 = (x + y) · (x + y) + (x − y) · (x − y)
    = x · (x + y) + y · (x + y) + x · (x − y) − y · (x − y)
    = x · x + x · y + y · x + y · y + x · x − x · y − y · x + y · y
    = 2(x · x) + 2(y · y) = 2(||x||^2 + ||y||^2).

(4) The vector x = [−3, 4, 2] can be expressed as [−65/35, 13/35, −39/35] + [−40/35, 127/35, 109/35], where the first vector (proj_a x) of the sum is parallel to a (since it equals −(13/35)a), and the second vector (x − proj_a x) of the sum is easily seen to be orthogonal to a.

(5) Contrapositive: If x is parallel to y, then ||x + y|| = ||x|| + ||y||.
    Converse: If x is not parallel to y, then ||x + y|| ≠ ||x|| + ||y||.
    Inverse: If ||x + y|| = ||x|| + ||y||, then x is parallel to y.
    Only the contrapositive is logically equivalent to the original statement.

(6) The negation is: ||x|| ≠ ||y|| and (x − y) · (x + y) = 0.

(7) Base Step: Assume A and B are any two diagonal n × n matrices. Then C = A + B is also diagonal because for i ≠ j, cij = aij + bij = 0 + 0 = 0, since A and B are both diagonal.

    Inductive Step: Assume that the sum of any k diagonal n × n matrices is diagonal. We must show for any diagonal n × n matrices A1, ..., A(k+1) that A1 + ··· + A(k+1) is diagonal. Now, A1 + ··· + A(k+1) = (A1 + ··· + Ak) + A(k+1). Let B = A1 + ··· + Ak. Then B is diagonal by the inductive hypothesis. Hence A1 + ··· + A(k+1) = B + A(k+1) is the sum of two diagonal matrices, and so is diagonal by the Base Step.

(8) A = S + V, where

    S = [ −4    9/2    0  ]        V = [  0   −3/2  −2  ]
        [ 9/2   −1   11/2 ],           [ 3/2    0   3/2 ],
        [  0   11/2   −3  ]            [  2   −3/2   0  ]

    S is symmetric, and V is skew-symmetric.

(9)
                  Salary     Perks
    TV Show 1 [ $1000000  $650000 ]
    TV Show 2 [  $860000  $535000 ]
    TV Show 3 [  $880000  $455000 ]
    TV Show 4 [  $770000  $505000 ]


(10) Assume A and B are both n� n symmetric matrices. Then AB is sym-metric if and only if (AB)T = AB if and only if BTAT = AB (byTheorem 1.16) if and only if BA = AB (since A and B are symmetric)if and only if A and B commute.

Answers to Test for Chapter 1 � Version B

(1) Player B is pulling with a force of 100=p3 lbs, or about 57:735 lbs. Player

C is pulling with a force of 200=p3 lbs, or about 115:47 lbs.

(2) The angle (to the nearest degree) is 152�.

(3) See the proof in the textbook, just before Theorem 1.10 in Section 1.2.

(4) The vector x = [5;�2;�4] can be expressed as[ 822 ;�

1222 ;

1222 ]+[

10222 ;�

3222 ;�

10022 ], where the �rst vector (projax) of the sum

is parallel to a (since it equals � 422a), and the second vector (x�projax)

of the sum is easily seen to be orthogonal to a.

(5) Contrapositive: If kx+ yk � kyk, then kxk2 + 2(x � y) � 0.Converse: If kx+ yk > kyk, then kxk2 + 2(x � y) > 0.Inverse: If kxk2 + 2(x � y) � 0, then kx+ yk � kyk.Only the contrapositive is logically equivalent to the original statement.

(6) The negation is: No unit vector x 2 R3 is parallel to [�2; 3; 1].

(7) Base Step: Assume A and B are any two lower triangular n�n matrices.Then, C = A + B is also lower triangular because for i < j, cij =aij + bij = 0 + 0 = 0, since A and B are both lower triangular.

Inductive Step: Assume that the sum of any k lower triangular n � nmatrices is lower triangular. We must show for any lower triangular n�nmatrices A1; : : : ;Ak+1 that

Pk+1i=1 Ai is lower triangular. Now,

Pk+1i=1 Ai

= (Pk

i=1Ai) +Ak+1. Let B =Pk

i=1Ai. Then B is lower triangular bythe inductive hypothesis. Hence,

Pk+1i=1 Ai = B + Ak+1 is the sum of two

lower triangular matrices, and so is lower triangular by the Base Step.

(8) A = S + V, where

S = [  5    1   7/2 ]
    [  1    3   1/2 ]
    [ 7/2  1/2   1  ],

V = [  0     7   -11/2 ]
    [ -7     0    -7/2 ]
    [ 11/2  7/2     0  ];

S is symmetric, and V is skew-symmetric.
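The split in answer (8) uses the standard formulas S = (A + A^T)/2 and V = (A - A^T)/2. A small numerical check, with A reconstructed as S + V since the test matrix itself is not reproduced here:

```python
import numpy as np

# A is inferred as S + V from the answer above (an assumption, since the
# original test question is not shown here).
A = np.array([[ 5.0, 8.0, -2.0],
              [-6.0, 3.0, -3.0],
              [ 9.0, 4.0,  1.0]])

S = (A + A.T) / 2    # symmetric part
V = (A - A.T) / 2    # skew-symmetric part

print(S)
print(V)
```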

(9)
            Nutrient 1   Nutrient 2   Nutrient 3
    Cat 1 [    1.64         0.62         1.86   ]
    Cat 2 [    1.50         0.53         1.46   ]
    Cat 3 [    1.55         0.52         1.54   ]

(All figures are in ounces.)


(10) Assume A and B are both n × n skew-symmetric matrices. Then (AB)^T = B^T A^T (by Theorem 1.16) = (-B)(-A) (since A and B are skew-symmetric) = BA (by part (4) of Theorem 1.14).

Answers to Test for Chapter 1 - Version C

(1) The acceleration vector is approximately [-0.435, 0.870, -0.223] m/sec². This can also be expressed as 1.202[-0.362, 0.724, -0.186] m/sec², where the latter vector is a unit vector.

(2) The angle (to the nearest degree) is 118°.

(3) See the proof of Theorem 1.7 in Section 1.2 of the textbook.

(4) The vector x = [-4, 2, -7] can be expressed as [-246/62, 205/62, -41/62] + [-2/62, -81/62, -393/62], where the first vector (proj_a x) of the sum is parallel to a (since it equals -(41/62)a), and the second vector (x - proj_a x) of the sum is easily seen to be orthogonal to a.

(5) Assume y = cx, for some c ∈ R. We must prove that y = proj_x y. Now, if y = cx, then proj_x y = ((x · y)/||x||²) x = ((x · (cx))/||x||²) x = (c(x · x)/||x||²) x (by part (4) of Theorem 1.5) = cx (by part (2) of Theorem 1.5) = y.

(6) The negation is: For some vector x ∈ Rⁿ, there is no vector y ∈ Rⁿ such that ||y|| > ||x||.

(7) A = S + V, where

S = [ -3    13/2   1/2 ]
    [ 13/2    5     3  ]
    [ 1/2     3    -4  ],

V = [  0    -1/2   5/2 ]
    [ 1/2     0    -1  ]
    [ -5/2    1     0  ];

S is symmetric, and V is skew-symmetric.

(8) The third row of AB is [35, 18, -13], and the second column of BA is [-32, 52, 42, -107].

(9) Base Step: Suppose A and B are two n × n diagonal matrices. Let C = AB. Then c_ij = Σ_{k=1}^{n} a_ik b_kj. But if k ≠ i, then a_ik = 0. Thus, c_ij = a_ii b_ij. Then, if i ≠ j, b_ij = 0, hence c_ij = a_ii · 0 = 0. Hence C is diagonal.

Inductive Step: Assume that the product of any k diagonal n × n matrices is diagonal. We must show that for any diagonal n × n matrices A₁, ..., A_{k+1}, the product A₁ ··· A_{k+1} is diagonal. Now, A₁ ··· A_{k+1} = (A₁ ··· A_k)A_{k+1}. Let B = A₁ ··· A_k. Then B is diagonal by the inductive hypothesis. Hence, A₁ ··· A_k A_{k+1} = BA_{k+1} is the product of two diagonal matrices, and so is diagonal by the Base Step.


(10) Assume that A is skew-symmetric. Then (A³)^T = (A²A)^T = A^T(A²)^T (by Theorem 1.16) = A^T(AA)^T = A^T A^T A^T (again by Theorem 1.16) = (-A)(-A)(-A) (since A is skew-symmetric) = -(A³) (by part (4) of Theorem 1.14). Hence, A³ is skew-symmetric.
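A quick numerical illustration of this fact, using an arbitrary skew-symmetric matrix (not one from the test):

```python
import numpy as np

# B - B^T is always skew-symmetric; B here is an arbitrary random example.
rng = np.random.default_rng(0)
B = rng.integers(-5, 6, size=(4, 4)).astype(float)
A = B - B.T

A3 = np.linalg.matrix_power(A, 3)
print(np.allclose(A3.T, -A3))   # True: A^3 is again skew-symmetric
```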

Answers to Test for Chapter 2 - Version A

(1) The quadratic equation is y = 3x² - 4x - 8.

(2) The solution set is {(-2b - 4d - 5, b, 3d + 2, d, 4) | b, d ∈ R}.

(3) The solution set is {c(-2, 4, 1, 0) | c ∈ R}.

(4) The rank of A is 2, and so A is not row equivalent to I₃.

(5) The solution sets are, respectively, {(3, -4, 2)} and {(1, -2, 3)}.

(6) First, ith row of R(AB)
= c(ith row of (AB))
= c(ith row of A)B (by the hint)
= (ith row of R(A))B
= ith row of (R(A)B) (by the hint).

Now, if k ≠ i, kth row of R(AB)
= kth row of (AB)
= (kth row of A)B (by the hint)
= (kth row of R(A))B
= kth row of (R(A)B) (by the hint).

Hence, R(AB) = R(A)B, since they are equal on each row.
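The identity R(AB) = R(A)B can be illustrated numerically for a type (I) row operation; the matrices, row index, and constant below are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(3, 3)).astype(float)
B = rng.integers(-3, 4, size=(3, 3)).astype(float)
c, i = 5.0, 1

def R(M):
    """Type (I) row operation: multiply row i of M by c."""
    out = M.copy()
    out[i] = c * out[i]
    return out

print(np.allclose(R(A @ B), R(A) @ B))   # True
```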

(7) Yes: [-15, 10, -23] = (-2)(row 1) + (-3)(row 2) + (-1)(row 3).

(8) The inverse of A is
    [ -9  -4   15 ]
    [ -4  -2    7 ]
    [ 12   5  -19 ].

(9) The inverse of A is

-(1/14) [ -9  8 ]  =  [ 9/14  -4/7 ]
        [ -5  6 ]     [ 5/14  -3/7 ].

(10) Suppose A and B are nonsingular n × n matrices. If A and B commute, then AB = BA. Hence, A(AB)B = A(BA)B, and so A²B² = (AB)².

Conversely, suppose A²B² = (AB)². Since both A and B are nonsingular, we know that A⁻¹ and B⁻¹ exist. Multiply both sides of A²B² = ABAB by A⁻¹ on the left and by B⁻¹ on the right to obtain AB = BA. Hence, A and B commute.
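A small numerical illustration, using a pair of matrices that commute by construction (B is a polynomial in A, an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = 2 * A + np.eye(2)            # B commutes with A by construction

lhs = np.linalg.matrix_power(A, 2) @ np.linalg.matrix_power(B, 2)   # A^2 B^2
rhs = np.linalg.matrix_power(A @ B, 2)                              # (AB)^2
print(np.allclose(lhs, rhs))     # True, since A and B commute
```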


Answers to Test for Chapter 2 - Version B

(1) The equation of the circle is x² + y² - 6x + 8y + 12 = 0, or (x - 3)² + (y + 4)² = 13.

(2) The solution set is {(-5c - 3d + f + 6, 2c + d - 3f - 2, c, d, 2f + 1, f) | c, d, f ∈ R}.

(3) The solution set is {b(3, 1, 0, 0) + d(-4, 0, -2, 1) | b, d ∈ R}.

(4) The rank of A is 3, and so A is row equivalent to I₃.

(5) The solution sets are, respectively, {(-2, 3, -4)} and {(3, -1, 2)}.

(6) First, ith row of R(AB)
= c(jth row of (AB)) + (ith row of (AB))
= c(jth row of A)B + (ith row of A)B (by the hint)
= (c(jth row of A) + (ith row of A))B
= (ith row of R(A))B
= ith row of (R(A)B) (by the hint).

Now, if k ≠ i, kth row of R(AB)
= kth row of (AB)
= (kth row of A)B (by the hint)
= (kth row of R(A))B
= kth row of (R(A)B) (by the hint).

Hence, R(AB) = R(A)B, since they are equal on each row.

(7) The vector [-2, 3, 1] is not in the row space of A.

(8) The inverse of A is
    [  1   -6   1 ]
    [ -2   13  -2 ]
    [  4  -21   3 ].

(9) The solution set is {(4, -7, -2)}.

(10) (a) To prove that the inverse of (cA) is (1/c)A⁻¹, simply note that the product of (cA) and (1/c)A⁻¹ equals I (by part (4) of Theorem 1.14).

(b) Now, assume A is a nonsingular skew-symmetric matrix. Then (A⁻¹)^T = (A^T)⁻¹ (by part (4) of Theorem 2.11) = (-1A)⁻¹ (since A is skew-symmetric) = (1/(-1))A⁻¹ (by part (a)) = -(A⁻¹). Hence, A⁻¹ is skew-symmetric.

Answers to Test for Chapter 2 - Version C

(1) A = 3, B = -2, C = 1, and D = 4.

(2) The solution set is {(-4b - 3e + 3, b, 2e - 3, 5e - 2, e, 4) | b, e ∈ R}.

(3) The solution set is {c(12, -5, 1, 0, 0) + e(-1, -1, 0, 3, 1) | c, e ∈ R}.


(4) The rank of A is 3, and so A is not row equivalent to I₄.

(5) The reduced row echelon form matrix for A is B = I₃. One possible sequence of row operations converting B to A is:

(II): <2> ← (11/10)<3> + <2>
(II): <1> ← (2/5)<3> + <1>
(I):  <3> ← -(1/10)<3>
(II): <3> ← -4<2> + <3>
(II): <1> ← 1<2> + <1>
(I):  <2> ← 5<2>
(II): <3> ← 5<1> + <3>
(II): <2> ← -3<1> + <2>
(I):  <1> ← 2<1>

(6) Yes: [25, 12, -19] = 4[7, 3, 5] + 2[3, 3, 4] - 3[3, 2, 3].

(7) The inverse of A is
    [ 1   3   -4  1 ]
    [ 3  21  -34  8 ]
    [ 4  14  -21  5 ]
    [ 1   8  -13  3 ].

(8) The solution set is {(4, -6, -5)}.

(9) Base Step: Assume A and B are two n × n nonsingular matrices. Then (AB)⁻¹ = B⁻¹A⁻¹, by part (3) of Theorem 2.11.

Inductive Step: Assume that for any set of k nonsingular n × n matrices, the inverse of their product is found by multiplying the inverses of the matrices in reverse order. We must show that if A₁, ..., A_{k+1} are nonsingular n × n matrices, then (A₁ ··· A_{k+1})⁻¹ = (A_{k+1})⁻¹ ··· (A₁)⁻¹. Now, let B = A₁ ··· A_k. Then (A₁ ··· A_{k+1})⁻¹ = (BA_{k+1})⁻¹ = (A_{k+1})⁻¹B⁻¹ (by the Base Step, or by part (3) of Theorem 2.11) = (A_{k+1})⁻¹(A₁ ··· A_k)⁻¹ = (A_{k+1})⁻¹(A_k)⁻¹ ··· (A₁)⁻¹, by the inductive hypothesis.

(10) Now, (R₁(R₂(··· (R_k(A)) ···)))B = R₁(R₂(··· (R_k(AB)) ···)) (by part (2) of Theorem 2.1) = I_n (given). Hence R₁(R₂(··· (R_k(A)) ···)) = B⁻¹.

Answers for Test for Chapter 3 - Version A

(1) The area of the parallelogram is 24 square units.

(2) The determinant of 6A is 6⁵(4) = 31104.
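The rule behind this computation, det(cA) = cⁿ det(A) for an n × n matrix, can be checked numerically; the 5 × 5 matrix below is an arbitrary example, not the one from the test:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-2, 3, size=(5, 5)).astype(float)

lhs = np.linalg.det(6 * A)           # det(6A)
rhs = 6**5 * np.linalg.det(A)        # 6^5 det(A)
print(lhs, rhs)                      # equal up to floating-point roundoff
print(6**5 * 4)                      # 31104, the value in answer (2)
```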

(3) The determinant of A is -2.


(4) Note that |AB^T| = |A| · |B^T| (by Theorem 3.7) = |A^T| · |B| (by Theorem 3.9) = |A^T B| (again by Theorem 3.7).

(5) The determinant of A is 1070.

(6) The adjoint matrix for A is
    [  16  -24   2 ]
    [ -14   22  -2 ]
    [   6   -9   1 ]
and |A| = 2. Hence

A⁻¹ = [  8   -12    1  ]
      [ -7    11   -1  ]
      [  3  -9/2   1/2 ].

(7) The solution set is {(-2, 3, 5)}.

(8) (Optional hint: p_A(x) = x³ - x.)

Answer:
P = [ -1  -1  -1 ]
    [  0   1   1 ]
    [  1   0   1 ],

D = [ 1   0  0 ]
    [ 0  -1  0 ]
    [ 0   0  0 ]

(9) λ = 0 is an eigenvalue for A ⟺ p_A(0) = 0 ⟺ |0I_n - A| = 0 ⟺ |-A| = 0 ⟺ (-1)ⁿ|A| = 0 ⟺ |A| = 0 ⟺ A is singular.

(10) Let X be an eigenvector for A corresponding to the eigenvalue λ₁ = 2. Hence, AX = 2X. Therefore, A³X = A²(AX) = A²(2X) = 2(A²X) = 2A(AX) = 2A(2X) = 4AX = 4(2X) = 8X. This shows that X is an eigenvector corresponding to the eigenvalue λ₂ = 8 for A³.
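A concrete numerical instance of this argument (the matrix below is an arbitrary example with eigenvalue 2, not the test matrix):

```python
import numpy as np

A = np.array([[2.0, 7.0], [0.0, 5.0]])
X = np.array([1.0, 0.0])        # AX = 2X by construction

print(A @ X)                              # [2. 0.] = 2X
print(np.linalg.matrix_power(A, 3) @ X)   # [8. 0.] = 8X, so X is an
                                          # eigenvector of A^3 for lambda = 8
```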

Answers for Test for Chapter 3 - Version B

(1) The volume of the parallelepiped is 59 cubic units.

(2) The determinant of 5A is 5⁴(7) = 4375.

(3) The determinant of A is -1.

(4) Assume A and B are n × n matrices and that AB = -BA, with n odd. Then |AB| = |-BA|. Now, |-BA| = (-1)ⁿ|BA| (by Corollary 3.4) = -|BA|, since n is odd. Hence, |AB| = -|BA|. By Theorem 3.7, this means that |A| · |B| = -|B| · |A|. Since |A| and |B| are real numbers, this can only be true if either |A| or |B| equals zero. Hence, either A or B is singular (by Theorem 3.5).

(5) The determinant of A is -916.

(6) The adjoint matrix for A is
    [ -2   4  -10 ]
    [  0  -4    8 ]
    [ -2   2   -4 ]
and |A| = -4. Hence

A⁻¹ = [ 1/2   -1    5/2 ]
      [  0     1    -2  ]
      [ 1/2  -1/2    1  ].


(7) The solution set is {(-3, -2, 2)}.

(8) (Optional hint: p_A(x) = x³ - 2x² + x.)

Answer:
P = [ -1  3  1 ]
    [  1  0  1 ],
    [  0  1  1 ]

D = [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0  0 ]

(9) (a) Because A is upper triangular, we can easily see that p_A(x) = (x - 2)⁴. Hence λ = 2 is the only eigenvalue for A.

(b) The reduced row echelon form for 2I₄ - A is
    [ 0  1  0  0 ]
    [ 0  0  1  0 ]
    [ 0  0  0  1 ]
    [ 0  0  0  0 ],
which is also the reduced row echelon form of -(2I₄ - A). There is only one non-pivot column (column 1), and so Step 3 of the Diagonalization Method will produce only one fundamental eigenvector. That fundamental eigenvector is [1, 0, 0, 0].

(10) From the algebraic perspective, p_A(x) = x² - (6/5)x + 1, which has no roots, since its discriminant is -64/25. Hence, A has no eigenvalues, and so A cannot be diagonalized. From the geometric perspective, for any nonzero vector X, AX will be rotated away from X through an angle θ. Hence, X and AX cannot be parallel. Thus, A cannot have any eigenvectors. This makes A nondiagonalizable.

Answers for Test for Chapter 3 - Version C

(1) The volume of the parallelepiped is 2 cubic units.

(2) The determinant of A is 3.

(3) One possibility:

|A| = 5 · det [  0  12   0  9 ]
              [  0   0   0  8 ]
              [  0  14  11  7 ]
              [ 15  13  10  6 ]

    = 5(8) · det [  0  12   0 ]
                 [  0  14  11 ]
                 [ 15  13  10 ]

    = 5(8)(-12) · det [  0  11 ]
                      [ 15  10 ]

    = 5(8)(-12)(-11) · det [15] = 79200.

Cofactor expansions were used along the first row in each case, except for the 4 × 4 matrix, where the cofactor expansion was along the second row.

(4) Assume that A^T = A⁻¹. Then |A^T| = |A⁻¹|. By Theorems 3.8 and 3.9, we have |A| = 1/|A|. Hence, |A|² = 1, and so |A| = ±1.
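A numerical companion using a rotation matrix, which satisfies A^T = A⁻¹ (an arbitrary example of such a matrix):

```python
import numpy as np

t = 0.7
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # rotation: A^T = A^(-1)

print(np.allclose(A.T, np.linalg.inv(A)))  # True
print(np.linalg.det(A))                    # +1 (up to roundoff); in general +-1
```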

(5) Let A be a nonsingular n × n matrix. Recall that A⁻¹ = (1/|A|) adj(A), by Corollary 3.12, where adj(A) is the adjoint of A. Therefore, adj(A) = |A|A⁻¹. Hence, |adj(A)| equals the determinant of |A|A⁻¹, which is equal to |A|ⁿ|A⁻¹|, by Corollary 3.4. However, since |A| and |A⁻¹| are reciprocals (by Theorem 3.8), we have |A|ⁿ|A⁻¹| = (|A|^(n-1)|A|)(1/|A|) = |A|^(n-1).
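This identity can be checked numerically; since adj(A) = |A|A⁻¹ (Corollary 3.12 rearranged), the adjoint can be computed that way. The matrix below is an arbitrary example:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

adj = np.linalg.det(A) * np.linalg.inv(A)   # adj(A) = |A| A^(-1)
print(np.linalg.det(adj), np.linalg.det(A) ** (n - 1))   # both equal |A|^(n-1)
```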


(6) Base Step (n = 2): Let A = [ a₁₁ a₁₂; a₂₁ a₂₂ ]. Note that |A| = a₁₁a₂₂ - a₁₂a₂₁. Since A has only two rows, there are only two possible forms for the row operation R.

Case 1: R: <1> ← c<2> + <1>. Then R(A) = [ a₁₁ + c·a₂₁, a₁₂ + c·a₂₂; a₂₁, a₂₂ ]. Hence |R(A)| = (a₁₁ + c·a₂₁)a₂₂ - (a₁₂ + c·a₂₂)a₂₁ = a₁₁a₂₂ + c·a₂₁a₂₂ - a₁₂a₂₁ - c·a₂₂a₂₁ = a₁₁a₂₂ - a₁₂a₂₁ = |A|.

Case 2: R: <2> ← c<1> + <2>. Then R(A) = [ a₁₁, a₁₂; a₂₁ + c·a₁₁, a₂₂ + c·a₁₂ ]. Hence |R(A)| = a₁₁(a₂₂ + c·a₁₂) - a₁₂(a₂₁ + c·a₁₁) = a₁₁a₂₂ + a₁₁c·a₁₂ - a₁₂a₂₁ - a₁₂c·a₁₁ = a₁₁a₂₂ - a₁₂a₂₁ = |A|.

Therefore |A| = |R(A)| in all possible cases when n = 2, thus completing the Base Step.

(7) The solution set is {(3, -3, 2)}.

(8) If P = [ 1  2 ], then A = P [ -1  0 ] P⁻¹.
           [ 1  1 ]             [  0  2 ]

So A⁹ = P [ -1  0 ]⁹ P⁻¹ = P [ -1    0 ] P⁻¹ = [ 1025  -1026 ].
          [  0  2 ]          [  0  512 ]        [  513   -514 ]
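This computation can be verified directly from the P and D given above:

```python
import numpy as np

P = np.array([[1.0, 2.0], [1.0, 1.0]])
D = np.diag([-1.0, 2.0])

A = P @ D @ np.linalg.inv(P)
A9 = P @ np.diag([(-1.0) ** 9, 2.0 ** 9]) @ np.linalg.inv(P)   # P D^9 P^(-1)

print(np.round(A9))   # rows 1025 -1026 and 513 -514, matching the answer
print(np.allclose(np.linalg.matrix_power(A, 9), A9))   # True
```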

(9) (Optional hint: p_A(x) = (x + 1)(x - 2)².)

Answer:
P = [ -2  1  -1 ]
    [  0  1   0 ],
    [  1  0   1 ]

D = [ -1  0  0 ]
    [  0  2  0 ]
    [  0  0  2 ]

(10) We will show that p_A(x) = p_{A^T}(x). Hence, the linear factor (x - λ) will appear exactly k times in p_{A^T}(x) since it appears exactly k times in p_A(x). Now p_{A^T}(x) = |xI_n - A^T| = |(xI_n)^T - A^T| (since xI_n is diagonal, hence symmetric) = |(xI_n - A)^T| (by Theorem 1.12) = |xI_n - A| (by Theorem 3.9) = p_A(x).

Answers for Test for Chapter 4 - Version A

(1) Clearly the operation ⊕ is commutative. Also, ⊕ is associative because (x ⊕ y) ⊕ z = (x⁵ + y⁵)^(1/5) ⊕ z = (((x⁵ + y⁵)^(1/5))⁵ + z⁵)^(1/5) = ((x⁵ + y⁵) + z⁵)^(1/5) = (x⁵ + (y⁵ + z⁵))^(1/5) = (x⁵ + ((y⁵ + z⁵)^(1/5))⁵)^(1/5) = x ⊕ (y⁵ + z⁵)^(1/5) = x ⊕ (y ⊕ z). Clearly, the real number 0 acts as the additive identity, and, for any real number x, the additive inverse of x is -x, because (x⁵ + (-x)⁵)^(1/5) = 0^(1/5) = 0, the additive identity.

Also, the first distributive law holds because a ⊙ (x ⊕ y) = a ⊙ (x⁵ + y⁵)^(1/5) = a^(1/5)(x⁵ + y⁵)^(1/5) = (a(x⁵ + y⁵))^(1/5) = (ax⁵ + ay⁵)^(1/5) = ((a^(1/5)x)⁵ + (a^(1/5)y)⁵)^(1/5) = (a^(1/5)x) ⊕ (a^(1/5)y) = (a ⊙ x) ⊕ (a ⊙ y).


Similarly, the other distributive law holds because (a + b) ⊙ x = (a + b)^(1/5)x = (a + b)^(1/5)(x⁵)^(1/5) = (ax⁵ + bx⁵)^(1/5) = ((a^(1/5)x)⁵ + (b^(1/5)x)⁵)^(1/5) = (a^(1/5)x) ⊕ (b^(1/5)x) = (a ⊙ x) ⊕ (b ⊙ x). Associativity of scalar multiplication holds because (ab) ⊙ x = (ab)^(1/5)x = a^(1/5)b^(1/5)x = a^(1/5)(b^(1/5)x) = a ⊙ (b^(1/5)x) = a ⊙ (b ⊙ x). Finally, 1 ⊙ x = 1^(1/5)x = 1x = x.

(2) Since the set of n × n skew-symmetric matrices is nonempty for any n ≥ 1, it is enough to prove closure under addition and scalar multiplication. That is, we must show that if A and B are any two n × n skew-symmetric matrices, and if c is any real number, then A + B is skew-symmetric and cA is skew-symmetric. However, A + B is skew-symmetric since (A + B)^T = A^T + B^T (by part (2) of Theorem 1.12) = -A - B (since A and B are skew-symmetric) = -(A + B). Similarly, cA is skew-symmetric because (cA)^T = cA^T (by part (3) of Theorem 1.12) = c(-A) = -cA.

(3) Form the matrix A whose rows are the given vectors:
    [ 5  -2  -1 ]
    [ 2  -1  -1 ]
    [ 8  -2   1 ].
It is easily shown that A row reduces to I₃. Hence, the given vectors span R³. (Alternatively, the vectors span R³ since |A| is nonzero.)

(4) A simplified form for the vectors in span(S) is: {a(x³ - 5x² - 2) + b(x + 3) | a, b ∈ R} = {ax³ - 5ax² + bx + (-2a + 3b) | a, b ∈ R}.

(5) Use the Independence Test Method (in Section 4.4). Note that the matrix whose columns are the given vectors row reduces to I₄. Thus, there is a pivot in every column, and the vectors are linearly independent.

(6) The given set is a basis for M₂₂. Since the set contains four vectors, it is only necessary to check either that it spans M₂₂ or that it is linearly independent (see Theorem 4.13). To check for linear independence, form the matrix whose columns are the entries of the given matrices. (Be sure to take the entries from each matrix in the same order.) Since this matrix row reduces to I₄, the set of vectors is linearly independent, hence a basis for M₂₂.

(7) Using the Independence Test Method, as explained in Section 4.6, produces the basis B = {3x³ - 2x² + 3x - 1, x³ - 3x² + 2x - 1, 11x³ - 12x² + 13x + 3} for span(S). This set B is a maximal linearly independent subset of S, since the Independence Test Method works precisely by extracting such a subset from S. Since B has 3 elements, dim(span(S)) = 3.

(8) Let S be the set of columns of A. Because A is nonsingular, it row reduces to I_n. Hence, the Independence Test Method applied to S shows that S is linearly independent. Thus, S itself is a basis for span(S). Since S has n elements, dim(span(S)) = n, and so span(S) = Rⁿ, by Theorem 4.16. Hence S spans Rⁿ.


(9) One possibility is B = {[2, 1, 0], [1, 0, 0], [0, 0, 1]}.

(10) (a) The transition matrix from B to C is
    [ -2   3  -1 ]
    [  4  -3   2 ]
    [ -3  -2  -4 ].

(b) For v = [-63, -113, 62], [v]_B = [3, -2, 2], and [v]_C = [-14, 22, -13].
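A quick consistency check of part (b) against the transition matrix in part (a), using [v]_C = P[v]_B:

```python
import numpy as np

# P is the transition matrix from B to C given in part (a).
P = np.array([[-2.0,  3.0, -1.0],
              [ 4.0, -3.0,  2.0],
              [-3.0, -2.0, -4.0]])
v_B = np.array([3.0, -2.0, 2.0])

v_C = P @ v_B
print(v_C)    # [-14.  22. -13.], matching [v]_C in part (b)
```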

Answers for Test for Chapter 4 - Version B

(1) The operation ⊕ is not associative in general because (x ⊕ y) ⊕ z = (3(x + y)) ⊕ z = 3(3(x + y) + z) = 9x + 9y + 3z, while x ⊕ (y ⊕ z) = x ⊕ (3(y + z)) = 3(x + 3(y + z)) = 3x + 9y + 9z instead. For a particular counterexample, (1 ⊕ 2) ⊕ 3 = 9 ⊕ 3 = 36, but 1 ⊕ (2 ⊕ 3) = 1 ⊕ 15 = 48.

(2) The set of all polynomials in P₄ for which the coefficient of the second-degree term equals the coefficient of the fourth-degree term has the form {ax⁴ + bx³ + ax² + cx + d}. To show this set is a subspace of P₄, it is enough to show that it is closed under addition and scalar multiplication, as it is clearly nonempty. Clearly, this set is closed under addition because (ax⁴ + bx³ + ax² + cx + d) + (ex⁴ + fx³ + ex² + gx + h) = (a + e)x⁴ + (b + f)x³ + (a + e)x² + (c + g)x + (d + h), which has the correct form because this latter polynomial has the coefficients of its second-degree and fourth-degree terms equal. Similarly, k(ax⁴ + bx³ + ax² + cx + d) = (ka)x⁴ + (kb)x³ + (ka)x² + (kc)x + (kd), which also has the coefficients of its second-degree and fourth-degree terms equal.

(3) Let V be a vector space with subspaces W₁ and W₂. Let W = W₁ ∩ W₂. First, W is clearly nonempty because 0 ∈ W, since 0 must be in both W₁ and W₂ due to the fact that they are subspaces. Thus, to show that W is a subspace of V, it is enough to show that W is closed under addition and scalar multiplication. Let x and y be elements of W, and let c be any real number. Then x and y are elements of both W₁ and W₂, and since W₁ and W₂ are both subspaces of V, hence closed under addition, we know that x + y is in both W₁ and W₂. Thus x + y is in W. Similarly, since W₁ and W₂ are both subspaces of V, cx is in both W₁ and W₂, and hence cx is in W.

(4) The vector [2, -4, 3] is not in span(S). Note that
    [   5  -2  -9 |  2 ]
    [ -15   6  27 | -4 ]
    [  -4   1   7 |  3 ]
row reduces to
    [ 1  0  -5/3 |  -8/3 ]
    [ 0  1   1/3 | -23/3 ]
    [ 0  0    0  |   2   ].


(5) A simplified form for the vectors in span(S) is:
{ a[1 -3; 0 0] + b[0 0; 1 0] + c[0 0; 0 1] | a, b, c ∈ R } = { [a -3a; b c] | a, b, c ∈ R }.

(6) Form the matrix A whose columns are the coefficients of the polynomials in S. Row reducing A shows that there is a pivot in every column. Hence S is linearly independent by the Independence Test Method in Section 4.4. Hence, since S has four elements and dim(P₃) = 4, Theorem 4.13 shows that S is a basis for P₃. Therefore, S spans P₃, showing that p(x) can be expressed as a linear combination of the elements in S. Finally, since S is linearly independent, the uniqueness assertion is true by Theorem 4.9.

(7) A basis for span(S) is {[2, -3, 4, -1], [3, 1, -2, 2], [2, 8, -12, 3]}. Hence dim(span(S)) = 3. Since dim(R⁴) = 4, S does not span R⁴.

(8) Since A is nonsingular, A^T is also nonsingular (Theorem 2.11). Therefore A^T row reduces to I_n, giving a pivot in every column. The Independence Test Method from Section 4.4 now shows the columns of A^T, and hence the rows of A, are linearly independent.

(9) One possible answer: B = {[2, 1, 0, 0], [-3, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]}.

(10) (a) The transition matrix from B to C is
    [  3  -1   1 ]
    [  2   4  -3 ]
    [ -4   2   3 ].

(b) For v = -100x² + 64x - 181, [v]_B = [2, -4, -1], and [v]_C = [9, -9, -19].

Answers for Test for Chapter 4 - Version C

(1) From part (2) of Theorem 4.1, we know 0v = 0 in any general vector space. Thus, 0 ⊙ [x, y] = [0x + 3(0) - 3, 0y - 4(0) + 4] = [-3, 4] is the additive identity 0 in this given vector space. Similarly, by part (3) of Theorem 4.1, (-1)v is the additive inverse of v. Hence, the additive inverse of [x, y] is (-1) ⊙ [x, y] = [-x - 3 - 3, -y + 4 + 4] = [-x - 6, -y + 8].

(2) Let W be the set of all 3-vectors orthogonal to [2, -3, 1]. First, [0, 0, 0] ∈ W because [0, 0, 0] · [2, -3, 1] = 0. Hence W is nonempty. Thus, to show that W is a subspace of R³, it is enough to show that W is closed under addition and scalar multiplication. Let x, y ∈ W; that is, let x and y both be orthogonal to [2, -3, 1]. Then x + y is also orthogonal to [2, -3, 1], because (x + y) · [2, -3, 1] = (x · [2, -3, 1]) + (y · [2, -3, 1]) = 0 + 0 = 0. Similarly, if c is any real number and x is in W, then cx is also in W because (cx) · [2, -3, 1] = c(x · [2, -3, 1]) (by part (4) of Theorem 1.5) = c(0) = 0.


(3) The vector [-3, 6, 5] is in span(S) since it equals (11/5)[3, -6, -1] - (12/5)[4, -8, -3] + 0[5, -10, -1].

(4) A simplified form for the vectors in span(S) is: {a(x³ - 2x + 4) + b(x² + 3x - 2) | a, b ∈ R} = {ax³ + bx² + (-2a + 3b)x + (4a - 2b) | a, b ∈ R}.

(5) Form the matrix A whose columns are the entries of each given matrix (taking the entries in the same order each time). Noting that every column in the reduced row echelon form for A contains a pivot, the Independence Test Method from Section 4.4 shows that the given set is linearly independent.

(6) A maximal linearly independent subset of S is { [3 -1; -2 4], [2 1; -2 0], [4 -3; -2 2] }. Theorem 4.14 shows that this set is a basis for span(S), and so dim(span(S)) = 3.

(7) Note that span(S) is the row space of A^T. Now A^T is also singular (use Theorems 1.12 and 2.11), and so rank(A^T) < n by Theorem 2.14. Hence, the Simplified Span Method as explained in Section 4.6 for finding a basis for span(S) will produce fewer than n basis elements (the nonzero rows of the reduced row echelon form of A^T), showing that dim(span(S)) < n.

(8) One possible answer is {2x³ - 3x² + 3x + 1, -x³ + 4x² - 6x - 2, x³, x}. (This is obtained by enlarging T with the standard basis for P₃ and then eliminating x² and 1 for redundancy.)

(9) First, by Theorem 4.4, E₁ is a subspace of Rⁿ, and so by Theorem 4.16, dim(E₁) ≤ dim(Rⁿ) = n. But if dim(E₁) = n, then Theorem 4.16 shows that E₁ = Rⁿ. Hence, for every vector X ∈ Rⁿ, AX = X. In particular, this is true for each vector e₁, ..., e_n. Using these vectors as columns forms the matrix I_n. Thus we see that AI_n = I_n, implying A = I_n. This contradicts the given condition A ≠ I_n. Hence dim(E₁) ≠ n, and so dim(E₁) < n.

(10) (a) The transition matrix from B to C is
    [ 2  -1  3 ]
    [ 3  -2  2 ]
    [ 1  -2  3 ].

(b) For v = [-109, 155, -71], [v]_B = [-1, 3, -3], and [v]_C = [-14, -15, -16].

Answers to Test for Chapter 5 - Version A

(1) To show f is a linear operator, we must show that f(A₁ + A₂) = f(A₁) + f(A₂) for all A₁, A₂ ∈ M_nn, and that f(cA) = cf(A) for all c ∈ R and


all A ∈ M_nn. However, the first equation holds since B(A₁ + A₂)B⁻¹ = BA₁B⁻¹ + BA₂B⁻¹ by the distributive laws of matrix multiplication over addition. Similarly, the second equation holds true since B(cA)B⁻¹ = cBAB⁻¹ by part (4) of Theorem 1.14.

(2) L([2, -3, 1]) = [-1, -11, 4].

(3) The matrix A_BC =
    [  167   117 ]
    [   46    32 ]
    [ -170  -119 ].

(4) The kernel of L = {[2b + d, b, -3d, d] | b, d ∈ R}. Hence, a basis for ker(L) = {[2, 1, 0, 0], [1, 0, -3, 1]}. A basis for range(L) is the set {[4, -3, 4], [-1, 1, 2]}, the first and third columns of the original matrix. Thus, dim(ker(L)) + dim(range(L)) = 2 + 2 = 4 = dim(R⁴), and the Dimension Theorem is verified.

(5) (a) Now, dim(range(L)) ≤ dim(R³) = 3. Hence dim(ker(L)) = dim(P₃) - dim(range(L)) (by the Dimension Theorem) = 4 - dim(range(L)) ≥ 4 - 3 = 1. Thus, ker(L) is nontrivial.

(b) Any nonzero polynomial p in ker(L) satisfies the given conditions.

(6) The 3 × 3 matrix given in the definition of L is clearly the matrix for L with respect to the standard basis for R³. Since the determinant of this matrix is -1, it is nonsingular. Theorem 5.15 thus shows that L is an isomorphism, implying also that it is one-to-one.

(7) First, L is a linear operator, because L(p₁ + p₂) = (p₁ + p₂) - (p₁ + p₂)′ = (p₁ - p₁′) + (p₂ - p₂′) = L(p₁) + L(p₂), and because L(cp) = cp - (cp)′ = cp - cp′ = c(p - p′) = cL(p). Since the domain and codomain of L have the same dimension, in order to show L is an isomorphism, it is only necessary to show either that L is one-to-one or that L is onto (by Corollary 5.21). We show L is one-to-one. Suppose L(p) = 0 (the zero polynomial). Then p - p′ = 0, and so p = p′. But these polynomials have different degrees unless p is constant, in which case p′ = 0. Therefore, p = p′ = 0. Hence, ker(L) = {0}, and L is one-to-one.

(8) B = { [1 0; 0 0], [0 1; 1 0], [0 0; 0 1], [0 -1; 1 0] },

P = [ 1  0  0   0 ]
    [ 0  1  0  -1 ]
    [ 0  1  0   1 ],
    [ 0  0  1   0 ]

D = [ 0  0  0  0 ]
    [ 0  0  0  0 ]
    [ 0  0  0  0 ]
    [ 0  0  0  2 ]

Other answers for B and P are possible since the eigenspace E₀ is three-dimensional.

252

Page 257: Solutions Elementary Linear Algebra Andrilli Hecker - Answers (4th Edition)

Andrilli/Hecker - Answers to Chapter Tests Chapter 5 - Version B

(9) (a) False

(b) True

(10) The matrix for L with respect to the standard basis for P₃ is lower triangular with all 1's on the main diagonal. (The ith column of this matrix consists of the coefficients of (x + 1)^(4-i) in order of descending degree.) Thus, p_L(x) = (x - 1)⁴, and λ = 1 is the only eigenvalue for L. Now, if L were diagonalizable, the geometric multiplicity of λ = 1 would have to be 4, and we would have E₁ = P₃. This would imply that L(p) = p for all p ∈ P₃. But this is clearly false since the image of the polynomial x is x + 1.

Answers to Test for Chapter 5 - Version B

(1) To show f is a linear operator, we must show that f(A₁ + A₂) = f(A₁) + f(A₂) for all A₁, A₂ ∈ M_nn, and that f(cA) = cf(A) for all c ∈ R and all A ∈ M_nn. However, the first equation holds since (A₁ + A₂) + (A₁ + A₂)^T = A₁ + A₂ + A₁^T + A₂^T (by part (2) of Theorem 1.12) = (A₁ + A₁^T) + (A₂ + A₂^T) (by commutativity of matrix addition). Similarly, f(cA) = (cA) + (cA)^T = cA + cA^T = c(A + A^T) = cf(A).

(2) L([6, -4, 7]) = -x² + 2x + 3.

(3) The matrix A_BC =
    [ -55  165   49   56 ]
    [ -46  139   42   45 ]
    [  32  -97  -29  -32 ].

(4) The kernel of L = {[3c, -2c, c, 0] | c ∈ R}. Hence, a basis for ker(L) = {[3, -2, 1, 0]}. The range of L is spanned by columns 1, 2, and 4 of the original matrix. Since these columns are linearly independent, a basis for range(L) = {[2, 1, -4, 4], [5, -1, 2, 1], [8, -1, 1, 2]}. The Dimension Theorem is verified because dim(ker(L)) = 1, dim(range(L)) = 3, and the sum of these dimensions equals the dimension of R⁴, the domain of L.

(5) (a) This statement is true: dim(range(L₁)) = rank(A) (by part (1) of Theorem 5.9) = rank(A^T) (by Corollary 5.11) = dim(range(L₂)) (by part (1) of Theorem 5.9).

(b) This statement is false. For a particular counterexample, use A = O_mn, in which case dim(ker(L₁)) = n ≠ m = dim(ker(L₂)). In general, using the Dimension Theorem, dim(ker(L₁)) = n - dim(range(L₁)), and dim(ker(L₂)) = m - dim(range(L₂)). But, by part (a), dim(range(L₁)) = dim(range(L₂)), and since m ≠ n, dim(ker(L₁)) can never equal dim(ker(L₂)).


(6) Solving the appropriate homogeneous system to find the kernel of L gives ker(L) = { [0 0; 0 0] }. Hence, L is one-to-one by part (1) of Theorem 5.12. Since the domain and codomain of L have the same dimension, L is also an isomorphism by Corollary 5.21.

(7) First, L is a linear operator, since L(B₁ + B₂) = (B₁ + B₂)A⁻¹ = B₁A⁻¹ + B₂A⁻¹ = L(B₁) + L(B₂), and since L(cB) = (cB)A⁻¹ = c(BA⁻¹) = cL(B).

Since the domain and codomain of L have the same dimension, in order to show that L is an isomorphism, by Corollary 5.21 it is enough to show either that L is one-to-one or that L is onto. We show L is one-to-one. Suppose L(B₁) = L(B₂). Then B₁A⁻¹ = B₂A⁻¹. Multiplying both sides on the right by A gives B₁ = B₂. Hence, L is one-to-one.

(8) (Optional Hint: p_L(x) = x⁴ - 2x² + 1 = (x - 1)²(x + 1)².)

Answer:
B = { [3 0; 2 0], [0 3; 0 2], [2 0; 1 0], [0 2; 0 1] },

A = [ -7   0  12   0 ]
    [  0  -7   0  12 ]
    [ -4   0   7   0 ],
    [  0  -4   0   7 ]

P = [ 3  0  2  0 ]
    [ 0  3  0  2 ]
    [ 2  0  1  0 ],
    [ 0  2  0  1 ]

D = [ 1  0   0   0 ]
    [ 0  1   0   0 ]
    [ 0  0  -1   0 ]
    [ 0  0   0  -1 ]

Other answers for B and P are possible since the eigenspaces E₁ and E₋₁ are both two-dimensional.

(9) (a) False
(b) False

(10) All vectors parallel to [4, -2, 3] are mapped to themselves under L, and so λ = 1 is an eigenvalue for L. Now, all other nonzero vectors in R³ are rotated around the axis through the origin parallel to [4, -2, 3], and so their images under L are not parallel to themselves. Hence E₁ is the one-dimensional subspace spanned by [4, -2, 3], and there are no other eigenvalues. Therefore, the sum of the geometric multiplicities of all eigenvalues is 1, which is less than 3, the dimension of R³. Thus Theorem 5.28 shows that L is not diagonalizable.

Answers to Test for Chapter 5 - Version C

(1) To show that L is a linear transformation, we must prove that L(p + q) = L(p) + L(q) and that L(cp) = cL(p) for all p, q ∈ P_n and all c ∈ R. Let


p = a_n xⁿ + ··· + a₁x + a₀ and let q = b_n xⁿ + ··· + b₁x + b₀. Then

L(p + q) = L(a_n xⁿ + ··· + a₁x + a₀ + b_n xⁿ + ··· + b₁x + b₀)
         = L((a_n + b_n)xⁿ + ··· + (a₁ + b₁)x + (a₀ + b₀))
         = (a_n + b_n)Aⁿ + ··· + (a₁ + b₁)A + (a₀ + b₀)I_n
         = a_n Aⁿ + ··· + a₁A + a₀I_n + b_n Aⁿ + ··· + b₁A + b₀I_n
         = L(p) + L(q).

Similarly,

L(cp) = L(c(a_n xⁿ + ··· + a₁x + a₀))
      = L(ca_n xⁿ + ··· + ca₁x + ca₀)
      = ca_n Aⁿ + ··· + ca₁A + ca₀I_n
      = c(a_n Aⁿ + ··· + a₁A + a₀I_n)
      = cL(p).

(2) L([-3, 1, -4]) = [ -29  21; -2  6 ].

(3) The matrix A_BC =
    [ -44  91  350 ]
    [ -13  23  118 ]
    [ -28  60  215 ]
    [ -10  16   97 ].

(4) The matrix for L with respect to the standard bases for M₂₂ and P₂ is

A = [ 1  0   0  1 ]
    [ 1  0  -1  0 ],
    [ 0  1   0  0 ]

which row reduces to

    [ 1  0  0  1 ]
    [ 0  1  0  0 ].
    [ 0  0  1  1 ]

Hence ker(L) = { d[-1 0; -1 1] | d ∈ R }, which has dimension 1. A basis for range(L) consists of the first three "columns" of A, and hence dim(range(L)) = 3, implying range(L) = P₂. Note that dim(ker(L)) + dim(range(L)) = 1 + 3 = 4 = dim(M₂₂), thus verifying the Dimension Theorem.

(5) (a) v ∈ ker(L₁) ⟹ L₁(v) = 0_W ⟹ L₂(L₁(v)) = L₂(0_W) ⟹ (L₂ ∘ L₁)(v) = 0_Y ⟹ v ∈ ker(L₂ ∘ L₁). Hence ker(L₁) ⊆ ker(L₂ ∘ L₁).

(b) By part (a) and Theorem 4.16, dim(ker(L₁)) ≤ dim(ker(L₂ ∘ L₁)). Hence, by the Dimension Theorem, dim(range(L₁)) = dim(V) - dim(ker(L₁)) ≥ dim(V) - dim(ker(L₂ ∘ L₁)) = dim(range(L₂ ∘ L₁)).


(6) Solving the appropriate homogeneous system to find the kernel of L shows that ker(L) consists only of the zero polynomial. Hence, L is one-to-one by part (1) of Theorem 5.12. Also, dim(ker(L)) = 0. However, L is not an isomorphism, because by the Dimension Theorem, dim(range(L)) = 3, which is less than the dimension of the codomain R⁴.

(7) First, we must show that L is a linear transformation. Now

L([a₁, ..., a_n] + [b₁, ..., b_n]) = L([a₁ + b₁, ..., a_n + b_n])
  = (a₁ + b₁)v₁ + ··· + (a_n + b_n)v_n
  = a₁v₁ + b₁v₁ + ··· + a_n v_n + b_n v_n
  = a₁v₁ + ··· + a_n v_n + b₁v₁ + ··· + b_n v_n
  = L([a₁, ..., a_n]) + L([b₁, ..., b_n]).

Also,

L(c[a₁, ..., a_n]) = L([ca₁, ..., ca_n])
  = ca₁v₁ + ··· + ca_n v_n
  = c(a₁v₁ + ··· + a_n v_n)
  = cL([a₁, ..., a_n]).

In order to prove that L is an isomorphism, it is enough to show that L is one-to-one or that L is onto (by Corollary 5.21), since the dimensions of the domain and the codomain are the same. We show that L is one-to-one. Suppose L([a₁, ..., a_n]) = 0_V. Then a₁v₁ + a₂v₂ + ··· + a_n v_n = 0_V, implying a₁ = a₂ = ··· = a_n = 0 because B is a linearly independent set (since B is a basis for V). Hence [a₁, ..., a_n] = [0, ..., 0]. Therefore, ker(L) = {[0, ..., 0]}, and L is one-to-one by part (1) of Theorem 5.12.

(8) $B = (x^2 - x + 2,\; x^2 - 2x,\; x^2 + 2)$,

$\mathbf{P} = \begin{bmatrix} 1 & 1 & 1 \\ -1 & -2 & 0 \\ 2 & 0 & 2 \end{bmatrix}$, $\mathbf{D} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

Other answers for $B$ and $\mathbf{P}$ are possible since the eigenspace $E_1$ is two-dimensional.

(9) (a) True

(b) True

(10) If $p$ is a nonconstant polynomial, then $p'$ is nonzero and has degree lower than that of $p$. Hence $p' \neq cp$ for any $c \in \mathbb{R}$, and so nonconstant polynomials cannot be eigenvectors. All constant polynomials, however, are in the eigenspace $E_0$ of $\lambda = 0$. Thus, $E_0$ is one-dimensional. Therefore, the sum of the geometric multiplicities of all eigenvalues is 1, which is less than $\dim(\mathcal{P}_n) = n + 1$, since $n > 0$. Hence $L$ is not diagonalizable by Theorem 5.28.


Answers for Test for Chapter 6 – Version A

(1) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis: $\{[4,-3,2],\,[13,12,-8],\,[0,2,3]\}$.
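As a cross-check (ours, not part of the manual), the vectors that Gram-Schmidt produces should be mutually orthogonal under the standard dot product; a minimal sketch in Python:

```python
# Verify that every pair of vectors in the stated basis is orthogonal.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

basis = [[4, -3, 2], [13, 12, -8], [0, 2, 3]]
for i in range(len(basis)):
    for j in range(i + 1, len(basis)):
        assert dot(basis[i], basis[j]) == 0  # each pair is orthogonal
print("all pairs orthogonal")
```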

(2) Matrix for $L$ = $\frac{1}{9}\begin{bmatrix} 7 & 4 & -4 \\ 4 & 1 & 8 \\ -4 & 8 & 1 \end{bmatrix}$.

(3) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis for $\mathcal{W}$: $\{[2,-3,1],\,[-11,-1,19]\}$. Continuing the process leads to the following basis for $\mathcal{W}^\perp$: $\{[8,7,5]\}$. Also, $\dim(\mathcal{W}^\perp) = 1$.

(4) $\|\mathbf{A}\mathbf{x}\|^2 = \mathbf{A}\mathbf{x} \cdot \mathbf{A}\mathbf{x} = \mathbf{x} \cdot \mathbf{x}$ (by Theorem 6.9) $= \|\mathbf{x}\|^2$. Taking square roots yields $\|\mathbf{A}\mathbf{x}\| = \|\mathbf{x}\|$.
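This norm-preservation property can be illustrated numerically; a sketch (ours) reusing the orthogonal matrix from answer (2) and a sample vector of our own choosing, with exact rational arithmetic to avoid rounding:

```python
# An orthogonal matrix preserves norms: compare ||Ax||^2 with ||x||^2.
from fractions import Fraction

A = [[Fraction(c, 9) for c in row]
     for row in [[7, 4, -4], [4, 1, 8], [-4, 8, 1]]]  # orthogonal (answer (2))
x = [Fraction(3), Fraction(-1), Fraction(2)]          # sample vector (ours)

Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
norm_sq = lambda v: sum(c * c for c in v)

assert norm_sq(Ax) == norm_sq(x)  # ||Ax||^2 = ||x||^2, as the proof asserts
print("norms agree:", norm_sq(x))
```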

(5) Let $\mathbf{w} \in \mathcal{W}$. We need to show that $\mathbf{w} \in (\mathcal{W}^\perp)^\perp$. To do this, we need only prove that $\mathbf{w} \perp \mathbf{v}$ for all $\mathbf{v} \in \mathcal{W}^\perp$. But, by the definition of $\mathcal{W}^\perp$, $\mathbf{w} \cdot \mathbf{v} = 0$, since $\mathbf{w} \in \mathcal{W}$, which completes the proof.

(6) The characteristic polynomial is $p_L(x) = x^2(x-1)^2 = x^4 - 2x^3 + x^2$. (This is the characteristic polynomial for every orthogonal projection onto a 2-dimensional subspace of $\mathbb{R}^4$.)

(7) If $\mathbf{v}_1$ and $\mathbf{v}_2$ are unit vectors in the given plane with $\mathbf{v}_1 \perp \mathbf{v}_2$, then the matrix for $L$ with respect to the ordered orthonormal basis $\left(\frac{1}{\sqrt{70}}[3,5,-6],\,\mathbf{v}_1,\,\mathbf{v}_2\right)$ is $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.

(8) (Optional Hint: $p_L(x) = x^3 - 18x^2 + 81x$.)

Answer: $B = \left(\frac{1}{3}[2,-2,1],\,\frac{1}{3}[1,2,2],\,\frac{1}{3}[2,1,-2]\right)$, $\mathbf{D} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 9 \end{bmatrix}$.

Other answers for $B$ are possible since the eigenspace $E_9$ is two-dimensional. Another likely answer for $B$ is $\left(\frac{1}{3}[2,-2,1],\,\frac{1}{\sqrt{2}}[1,1,0],\,\frac{1}{\sqrt{18}}[-1,1,4]\right)$.

(9) $\mathbf{A}^3 = \left(\frac{1}{\sqrt{5}}\begin{bmatrix} -1 & 2 \\ 2 & 1 \end{bmatrix}\right) \begin{bmatrix} -1 & 0 \\ 0 & 8 \end{bmatrix} \left(\frac{1}{\sqrt{5}}\begin{bmatrix} -1 & 2 \\ 2 & 1 \end{bmatrix}\right)$.

Hence, one possible answer is

$\mathbf{A} = \left(\frac{1}{\sqrt{5}}\begin{bmatrix} -1 & 2 \\ 2 & 1 \end{bmatrix}\right) \begin{bmatrix} -1 & 0 \\ 0 & 2 \end{bmatrix} \left(\frac{1}{\sqrt{5}}\begin{bmatrix} -1 & 2 \\ 2 & 1 \end{bmatrix}\right) = \frac{1}{5}\begin{bmatrix} 7 & 6 \\ 6 & -2 \end{bmatrix}.$
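The proposed cube root can be checked directly; a minimal sketch in Python (ours, not part of the manual), cubing $\mathbf{A} = \frac{1}{5}\begin{bmatrix} 7 & 6 \\ 6 & -2 \end{bmatrix}$ with exact rationals:

```python
# Cubing A should reproduce A^3 = (1/5)[[31,18],[18,4]], the matrix given by
# the spectral factorization with eigenvalues -1 and 8.
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[Fraction(7, 5), Fraction(6, 5)], [Fraction(6, 5), Fraction(-2, 5)]]
A3 = matmul(matmul(A, A), A)
expected = [[Fraction(31, 5), Fraction(18, 5)], [Fraction(18, 5), Fraction(4, 5)]]
assert A3 == expected
print("A^3 matches the spectral computation")
```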


(10) (Optional Hint: Use the definition of a symmetric operator to show that $(\lambda_2 - \lambda_1)(\mathbf{v}_1 \cdot \mathbf{v}_2) = 0$.)
Answer: $L$ symmetric $\Rightarrow \mathbf{v}_1 \cdot L(\mathbf{v}_2) = L(\mathbf{v}_1) \cdot \mathbf{v}_2 \Rightarrow \mathbf{v}_1 \cdot (\lambda_2\mathbf{v}_2) = (\lambda_1\mathbf{v}_1) \cdot \mathbf{v}_2 \Rightarrow \lambda_2(\mathbf{v}_1 \cdot \mathbf{v}_2) = \lambda_1(\mathbf{v}_1 \cdot \mathbf{v}_2) \Rightarrow (\lambda_2 - \lambda_1)(\mathbf{v}_1 \cdot \mathbf{v}_2) = 0 \Rightarrow \mathbf{v}_1 \cdot \mathbf{v}_2 = 0$ (since $\lambda_2 \neq \lambda_1$) $\Rightarrow \mathbf{v}_1 \perp \mathbf{v}_2$.

Answers for Test for Chapter 6 – Version B

(1) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis: $\{[2,-3,1],\,[30,11,-27],\,[5,6,8]\}$.

(2) Matrix for $L$ = $\frac{1}{25}\begin{bmatrix} 16 & 0 & -12 \\ 0 & 25 & 0 \\ -12 & 0 & 9 \end{bmatrix}$.

(3) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis for $\mathcal{W}$: $\{[2,2,-3],\,[33,-18,10]\}$. Continuing the process leads to the following basis for $\mathcal{W}^\perp$: $\{[2,7,6]\}$. Using this information, we can show that $[-12,-36,-43] = [0,6,-7] + [-12,-42,-36]$, where the first vector of this sum is in $\mathcal{W}$ and the second is in $\mathcal{W}^\perp$.

(4) Because $\mathbf{A}$ and $\mathbf{B}$ are orthogonal, $\mathbf{A}\mathbf{A}^T = \mathbf{B}\mathbf{B}^T = \mathbf{I}_n$. Hence $(\mathbf{A}\mathbf{B})(\mathbf{A}\mathbf{B})^T = (\mathbf{A}\mathbf{B})(\mathbf{B}^T\mathbf{A}^T) = \mathbf{A}(\mathbf{B}\mathbf{B}^T)\mathbf{A}^T = \mathbf{A}\mathbf{I}_n\mathbf{A}^T = \mathbf{A}\mathbf{A}^T = \mathbf{I}_n$. Thus, $\mathbf{A}\mathbf{B}$ is orthogonal.
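A quick numerical illustration of this fact (ours, not from the manual), using two rotation matrices as sample choices for $\mathbf{A}$ and $\mathbf{B}$:

```python
# The product of two orthogonal matrices is orthogonal: check (AB)(AB)^T = I
# to within floating-point tolerance, using rotations as sample inputs.
import math

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

A, B = rot(0.7), rot(-1.2)
AB = matmul(A, B)
P = matmul(AB, transpose(AB))
I = [[1.0, 0.0], [0.0, 1.0]]
assert all(abs(P[i][j] - I[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("AB is orthogonal")
```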

(5) By Corollary 6.13, $\dim(\mathcal{W}) = n - \dim(\mathcal{W}^\perp) = n - (n - \dim((\mathcal{W}^\perp)^\perp)) = \dim((\mathcal{W}^\perp)^\perp)$. Thus, by Theorem 4.16, $\mathcal{W} = (\mathcal{W}^\perp)^\perp$.

(6) Now $S$ is an orthogonal basis for $\mathcal{W}$, and so

$\mathrm{proj}_{\mathcal{W}}\mathbf{v} = \dfrac{\mathbf{v} \cdot [0,3,6,-2]}{[0,3,6,-2] \cdot [0,3,6,-2]}\,[0,3,6,-2] + \dfrac{\mathbf{v} \cdot [-6,2,0,3]}{[-6,2,0,3] \cdot [-6,2,0,3]}\,[-6,2,0,3] = \frac{1}{7}[-24,17,18,6].$

Thus, $\mathrm{proj}_{\mathcal{W}^\perp}\mathbf{v} = \mathbf{v} - \mathrm{proj}_{\mathcal{W}}\mathbf{v} = \frac{1}{7}[3,18,-11,-6]$.

(7) Yes, by Theorems 6.18 and 6.20, because the matrix for $L$ with respect to the standard basis is symmetric.

(8) (Optional Hint: $\lambda_1 = 0$, $\lambda_2 = 49$)
Answer: $B = \left(\frac{1}{7}[-3,6,2],\,\frac{1}{7}[6,2,3],\,\frac{1}{7}[2,3,-6]\right)$, $\mathbf{D} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 49 & 0 \\ 0 & 0 & 49 \end{bmatrix}$.

Other answers for $B$ are possible since the eigenspace $E_{49}$ is two-dimensional. Another likely answer for $B$ is $\left(\frac{1}{7}[-3,6,2],\,\frac{1}{\sqrt{5}}[2,1,0],\,\frac{1}{\sqrt{245}}[2,-4,15]\right)$.


(9) $\mathbf{A}^2 = \left(\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}\right) \begin{bmatrix} 4 & 0 \\ 0 & 16 \end{bmatrix} \left(\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}\right)$.

Hence, one possible answer is

$\mathbf{A} = \left(\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}\right) \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix} \left(\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}\right) = \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix}.$
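Squaring the proposed square root is a one-line check; a sketch in Python (ours, not part of the manual):

```python
# Squaring A = [[3,-1],[-1,3]] should give back [[10,-6],[-6,10]], the matrix
# produced by the diagonal factorization with eigenvalues 4 and 16.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, -1], [-1, 3]]
assert matmul(A, A) == [[10, -6], [-6, 10]]
print("A^2 recovers the given matrix")
```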

(10) (Optional Hint: Consider the orthogonal diagonalization of $\mathbf{A}$ and the fact that $\mathbf{A}^2 = \mathbf{A}\mathbf{A}^T$.)
Answer: Since $\mathbf{A}$ is symmetric, it is orthogonally diagonalizable. So there is an orthogonal matrix $\mathbf{P}$ and a diagonal matrix $\mathbf{D}$ such that $\mathbf{D} = \mathbf{P}^{-1}\mathbf{A}\mathbf{P}$, or equivalently, $\mathbf{A} = \mathbf{P}\mathbf{D}\mathbf{P}^{-1} = \mathbf{P}\mathbf{D}\mathbf{P}^T$. Since all eigenvalues of $\mathbf{A}$ are $\pm 1$, $\mathbf{D}$ has only these values on its main diagonal, and hence $\mathbf{D}^2 = \mathbf{I}_n$. Therefore, $\mathbf{A}\mathbf{A}^T = \mathbf{P}\mathbf{D}\mathbf{P}^T(\mathbf{P}\mathbf{D}\mathbf{P}^T)^T = \mathbf{P}\mathbf{D}\mathbf{P}^T((\mathbf{P}^T)^T\mathbf{D}^T\mathbf{P}^T) = \mathbf{P}\mathbf{D}\mathbf{P}^T(\mathbf{P}\mathbf{D}\mathbf{P}^T) = \mathbf{P}\mathbf{D}(\mathbf{P}^T\mathbf{P})\mathbf{D}\mathbf{P}^T = \mathbf{P}\mathbf{D}\mathbf{I}_n\mathbf{D}\mathbf{P}^T = \mathbf{P}\mathbf{D}^2\mathbf{P}^T = \mathbf{P}\mathbf{I}_n\mathbf{P}^T = \mathbf{P}\mathbf{P}^T = \mathbf{I}_n$. Thus, $\mathbf{A}^T = \mathbf{A}^{-1}$, and $\mathbf{A}$ is orthogonal.

Answers for Test for Chapter 6 – Version C

(1) (Optional Hint: The two given vectors are orthogonal. Therefore, only two additional vectors need to be found.) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis: $\{[-3,1,-1,2],\,[2,4,0,1],\,[22,-19,-21,32],\,[0,1,-7,-4]\}$.

(2) Matrix for $L$ = $\frac{1}{9}\begin{bmatrix} 5 & -4 & 2 \\ -4 & 5 & 2 \\ 2 & 2 & 8 \end{bmatrix}$.

(3) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis for $\mathcal{W}$: $\{[5,1,-2],\,[2,0,5]\}$. Continuing the process leads to the following basis for $\mathcal{W}^\perp$: $\{[5,-29,-2]\}$. Also, $\dim(\mathcal{W}^\perp) = 1$.

(4) Suppose $n$ is odd. Then $|\mathbf{A}|^2 = |\mathbf{A}^2| = |\mathbf{A}\mathbf{A}| = |\mathbf{A}(-\mathbf{A}^T)| = |-\mathbf{A}\mathbf{A}^T| = (-1)^n|\mathbf{A}\mathbf{A}^T| = (-1)|\mathbf{I}_n| = -1$. But $|\mathbf{A}|^2$ cannot be negative. This contradiction shows that $n$ is even.

(5) Let $\mathbf{w}_2 \in \mathcal{W}_2^\perp$. We must show that $\mathbf{w}_2 \in \mathcal{W}_1^\perp$. To do this, we show that $\mathbf{w}_2 \cdot \mathbf{w}_1 = 0$ for all $\mathbf{w}_1 \in \mathcal{W}_1$. So, let $\mathbf{w}_1 \in \mathcal{W}_1 \subseteq \mathcal{W}_2$. Thus, $\mathbf{w}_1 \in \mathcal{W}_2$. Hence, $\mathbf{w}_1 \cdot \mathbf{w}_2 = 0$, because $\mathbf{w}_2 \in \mathcal{W}_2^\perp$. This completes the proof.

(6) Let $\mathbf{v} = [5,1,0,-4]$. Then $\mathrm{proj}_{\mathcal{W}}\mathbf{v} = \frac{1}{3}[14,5,-2,-12]$; $\mathrm{proj}_{\mathcal{W}^\perp}\mathbf{v} = \frac{1}{3}[1,-2,2,0]$; minimum distance $= \left\|\frac{1}{3}[1,-2,2,0]\right\| = 1$.


(7) If $\mathbf{v}_1$ and $\mathbf{v}_2$ are unit vectors in the given plane with $\mathbf{v}_1 \perp \mathbf{v}_2$, then the matrix for $L$ with respect to the ordered orthonormal basis $\left(\frac{1}{\sqrt{26}}[4,-3,-1],\,\mathbf{v}_1,\,\mathbf{v}_2\right)$ is $\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.

Hence, $L$ is orthogonally diagonalizable. Therefore, $L$ is a symmetric operator, and so its matrix with respect to any orthonormal basis is symmetric. In particular, this is true of the matrix for $L$ with respect to the standard basis.

(8) (Optional Hint: $\lambda_1 = 121$, $\lambda_2 = -121$)
Answer: $B = \left(\frac{1}{11}[2,6,-9],\,\frac{1}{11}[-9,6,2],\,\frac{1}{11}[6,7,6]\right)$,
$\mathbf{D} = \begin{bmatrix} 121 & 0 & 0 \\ 0 & 121 & 0 \\ 0 & 0 & -121 \end{bmatrix}$.

Other answers for $B$ are possible since the eigenspace $E_{121}$ is two-dimensional. Another likely answer for $B$ is $\left(\frac{1}{\sqrt{85}}[-7,6,0],\,\frac{1}{11\sqrt{85}}[-36,-42,85],\,\frac{1}{11}[6,7,6]\right)$.

(9) $\mathbf{A}^2 = \left(\frac{1}{\sqrt{10}}\begin{bmatrix} 3 & 1 \\ -1 & 3 \end{bmatrix}\right) \begin{bmatrix} 0 & 0 \\ 0 & 100 \end{bmatrix} \left(\frac{1}{\sqrt{10}}\begin{bmatrix} 3 & -1 \\ 1 & 3 \end{bmatrix}\right)$.

Hence, one possible answer is

$\mathbf{A} = \left(\frac{1}{\sqrt{10}}\begin{bmatrix} 3 & 1 \\ -1 & 3 \end{bmatrix}\right) \begin{bmatrix} 0 & 0 \\ 0 & 10 \end{bmatrix} \left(\frac{1}{\sqrt{10}}\begin{bmatrix} 3 & -1 \\ 1 & 3 \end{bmatrix}\right) = \begin{bmatrix} 1 & 3 \\ 3 & 9 \end{bmatrix}.$

(10) (Optional Hint: First show that $\mathbf{A}$ and $\mathbf{B}$ orthogonally diagonalize to the same matrix.)
Answer: First, because $\mathbf{A}$ and $\mathbf{B}$ are symmetric, they are both orthogonally diagonalizable, by Theorems 6.18 and 6.20. But, since the characteristic polynomials of $\mathbf{A}$ and $\mathbf{B}$ are equal, these matrices have the same eigenvalues with the same algebraic multiplicities. Thus, there is a single diagonal matrix $\mathbf{D}$, having these eigenvalues on its main diagonal, such that $\mathbf{D} = \mathbf{P}_1^{-1}\mathbf{A}\mathbf{P}_1$ and $\mathbf{D} = \mathbf{P}_2^{-1}\mathbf{B}\mathbf{P}_2$, for some orthogonal matrices $\mathbf{P}_1$ and $\mathbf{P}_2$. Hence $\mathbf{P}_1^{-1}\mathbf{A}\mathbf{P}_1 = \mathbf{P}_2^{-1}\mathbf{B}\mathbf{P}_2$, or $\mathbf{A} = \mathbf{P}_1\mathbf{P}_2^{-1}\mathbf{B}\mathbf{P}_2\mathbf{P}_1^{-1}$. Let $\mathbf{P} = \mathbf{P}_1\mathbf{P}_2^{-1}$. Then $\mathbf{A} = \mathbf{P}\mathbf{B}\mathbf{P}^{-1}$, where $\mathbf{P}$ is orthogonal by parts (2) and (3) of Theorem 6.6.


Answers to Test for Chapter 7 – Version A

(1) $\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a} = [2-2i,\,-1+i,\,5]$

(2) $(\mathbf{P}^*\mathbf{H}\mathbf{P})^* = \mathbf{P}^*\mathbf{H}^*(\mathbf{P}^*)^*$ (by part (5) of Theorem 7.2) $= \mathbf{P}^*\mathbf{H}^*\mathbf{P}$ (by part (1) of Theorem 7.2) $= \mathbf{P}^*\mathbf{H}\mathbf{P}$ (since $\mathbf{H}$ is Hermitian). Hence $\mathbf{P}^*\mathbf{H}\mathbf{P}$ is Hermitian.

(3) (a) False

(b) False

(4) $z_1 = 1 + i$, $z_2 = 1 - 2i$

(5) Let $\mathbf{A} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$. Then $p_{\mathbf{A}}(x) = x^2 + 1$, which has no real roots. Hence $\mathbf{A}$ has no real eigenvalues, and so is not diagonalizable as a real matrix. However, $p_{\mathbf{A}}(x)$ has complex roots $i$ and $-i$. Thus, this $2 \times 2$ matrix has two distinct complex eigenvalues, and so must be diagonalizable over $\mathbb{C}$.

(6) (Optional Hint: $p_L(x) = x^3 - 2ix^2 - x$.)
Answer: $\lambda_1 = 0$, $\lambda_2 = i$; basis for $E_0 = \{[1+i,-i,1]\}$, basis for $E_i = \{[i,0,1],\,[0,1,0]\}$. $L$ is diagonalizable.

(7) The statement is false. The polynomial $x$ is in the given subset, but $ix$ is not. Hence, the subset is not closed under scalar multiplication.

(8) $B = \{[1+3i,\,2+i,\,1-2i],\,[1,\,2i,\,3+i]\}$

(9) As a complex linear transformation: $\begin{bmatrix} 1 & i & 0 & 0 \\ 0 & 0 & 1 & i \end{bmatrix}$

As a real linear transformation: $\begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \end{bmatrix}$

(10) (Optional Hint: The two given vectors are already orthogonal.)
Answer: $\{[1,i,1+i],\,[3,-i,-1-i],\,[0,2,-1+i]\}$

(11) $\mathbf{P} = \frac{\sqrt{2}}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$

(12) $(\mathbf{A}\mathbf{B})^* = \mathbf{B}^*\mathbf{A}^* = \mathbf{B}^{-1}\mathbf{A}^{-1} = (\mathbf{A}\mathbf{B})^{-1}$, and so $\mathbf{A}\mathbf{B}$ is unitary. Also, $\bigl||\mathbf{A}|\bigr|^2 = |\mathbf{A}|\,\overline{|\mathbf{A}|} = |\mathbf{A}|\,|\mathbf{A}^*|$ (by part (3) of Theorem 7.5) $= |\mathbf{A}\mathbf{A}^*| = |\mathbf{I}_n| = 1$.

(13) Proofs for the five properties of an inner product:

(1) $\langle f, f \rangle = f^2(0) + f^2(1) + f^2(2) \ge 0$, since a sum of squares is always nonnegative.


(2) Suppose $f(x) = ax^2 + bx + c$. Then $\langle f, f \rangle = 0 \Rightarrow f(0) = 0$, $f(1) = 0$, $f(2) = 0$ (since a sum of squares of real numbers can only be zero if each number is zero) $\Rightarrow (a,b,c)$ satisfies

$\begin{cases} c = 0 \\ a + b + c = 0 \\ 4a + 2b + c = 0 \end{cases}$

$\Rightarrow a = b = c = 0$ (since the coefficient matrix of the homogeneous system has determinant $-2$). Conversely, it is clear that $\langle \mathbf{0}, \mathbf{0} \rangle = 0$.

(3) $\langle f, g \rangle = f(0)g(0) + f(1)g(1) + f(2)g(2) = g(0)f(0) + g(1)f(1) + g(2)f(2) = \langle g, f \rangle$

(4) $\langle f+g, h \rangle = (f(0)+g(0))h(0) + (f(1)+g(1))h(1) + (f(2)+g(2))h(2) = f(0)h(0) + g(0)h(0) + f(1)h(1) + g(1)h(1) + f(2)h(2) + g(2)h(2) = f(0)h(0) + f(1)h(1) + f(2)h(2) + g(0)h(0) + g(1)h(1) + g(2)h(2) = \langle f, h \rangle + \langle g, h \rangle$

(5) $\langle kf, g \rangle = (kf)(0)g(0) + (kf)(1)g(1) + (kf)(2)g(2) = kf(0)g(0) + kf(1)g(1) + kf(2)g(2) = k(f(0)g(0) + f(1)g(1) + f(2)g(2)) = k\langle f, g \rangle$

Also, $\langle f, g \rangle = 14$ and $\|f\| = \sqrt{35}$.
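The evaluation inner product $\langle f, g \rangle = f(0)g(0) + f(1)g(1) + f(2)g(2)$ is easy to experiment with; a sketch in Python (ours). The $f$ and $g$ of the test are not restated in this key, so the polynomials below are sample choices of our own:

```python
# Evaluation inner product on quadratics; check it against the defining sum
# and spot-check properties (1) and (3) on sample polynomials.
import math

def inner(f, g):
    return sum(f(t) * g(t) for t in (0, 1, 2))

f = lambda x: x**2 + 1   # sample f (our choice, not the test's)
g = lambda x: 2*x - 3    # sample g (our choice, not the test's)

assert inner(f, g) == f(0)*g(0) + f(1)*g(1) + f(2)*g(2)
assert inner(f, f) >= 0               # property (1)
assert inner(f, g) == inner(g, f)     # property (3)
norm_f = math.sqrt(inner(f, f))
print("inner(f,g) =", inner(f, g), " ||f|| =", norm_f)
```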

(14) The distance between x and y is 11.

(15) Note that $\langle \mathbf{x}+\mathbf{y}, \mathbf{x}-\mathbf{y} \rangle = \langle \mathbf{x},\mathbf{x} \rangle + \langle \mathbf{x},-\mathbf{y} \rangle + \langle \mathbf{y},\mathbf{x} \rangle + \langle \mathbf{y},-\mathbf{y} \rangle = \|\mathbf{x}\|^2 - \|\mathbf{y}\|^2$. So, if $\|\mathbf{x}\| = \|\mathbf{y}\|$, then $\langle \mathbf{x}+\mathbf{y}, \mathbf{x}-\mathbf{y} \rangle = \|\mathbf{x}\|^2 - \|\mathbf{y}\|^2 = 0$. Conversely, $\langle \mathbf{x}+\mathbf{y}, \mathbf{x}-\mathbf{y} \rangle = 0 \Rightarrow \|\mathbf{x}\|^2 - \|\mathbf{y}\|^2 = 0 \Rightarrow \|\mathbf{x}\|^2 = \|\mathbf{y}\|^2 \Rightarrow \|\mathbf{x}\| = \|\mathbf{y}\|$.

(16) $\{x^2,\; x,\; 3 - 5x^2\}$

(17) $\mathbf{w}_1 = [-8,-5,16]$, $\mathbf{w}_2 = [8,6,-19]$

Answers to Test for Chapter 7 – Version B

(1) $\mathbf{A}^*\mathbf{B} = \begin{bmatrix} -1+14i & 33-3i \\ -5-5i & 5+2i \end{bmatrix}$

(2) $(i\mathbf{H})^* = \overline{i}\,\mathbf{H}^*$ (by part (3) of Theorem 7.2) $= (-i)\mathbf{H}^* = (-i)\mathbf{H}$ (since $\mathbf{H}$ is Hermitian) $= -(i\mathbf{H})$. Hence, $i\mathbf{H}$ is skew-Hermitian.

(3) (a) False

(b) True


(4) The determinant of the matrix of coefficients is 1, and so the inverse matrix is $\begin{bmatrix} 6+5i & -2-14i \\ -2+i & 4+i \end{bmatrix}$. Using this gives $z_1 = 2+i$ and $z_2 = -i$.

(5) Let $\mathbf{A}$ be an $n \times n$ complex matrix. By the Fundamental Theorem of Algebra, $p_{\mathbf{A}}(x)$ factors into $n$ linear factors. Hence, $p_{\mathbf{A}}(x) = (x - \lambda_1)^{k_1} \cdots (x - \lambda_m)^{k_m}$, where $\lambda_1, \ldots, \lambda_m$ are the eigenvalues of $\mathbf{A}$, and $k_1, \ldots, k_m$ are their corresponding algebraic multiplicities. But, since the degree of $p_{\mathbf{A}}(x)$ is $n$, $\sum_{i=1}^m k_i = n$.

(6) $\lambda = i$; basis for $E_i = \{[1,2i]\}$. $L$ is not diagonalizable.

(7) The set of Hermitian $n \times n$ matrices is not closed under scalar multiplication, and so is not a subspace of $\mathcal{M}^{\mathbb{C}}_{nn}$. To see this, note that $i\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} i & 0 \\ 0 & 0 \end{bmatrix}$. However, $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ is Hermitian and $\begin{bmatrix} i & 0 \\ 0 & 0 \end{bmatrix}$ is not. Since the set of Hermitian $n \times n$ matrices is not a vector space, it does not have a dimension.

(8) $B = \{[1,0,i],\,[0,1,1-i]\}$

(9) Note that $L\left(i\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\right) = L\left(\begin{bmatrix} i & 0 \\ 0 & 0 \end{bmatrix}\right) = \begin{bmatrix} i & 0 \\ 0 & 0 \end{bmatrix}^* = \begin{bmatrix} -i & 0 \\ 0 & 0 \end{bmatrix}$, but $iL\left(\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\right) = i\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}^* = i\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} i & 0 \\ 0 & 0 \end{bmatrix}$.

(10) $\{[1+i,\,1-i,\,2],\,[3,\,i,\,-1+i],\,[0,\,2,\,-1-i]\}$

(11) Now, $\mathbf{Z}^* = \begin{bmatrix} 1-i & 1+i \\ -1-i & 1-i \end{bmatrix}$ and $\mathbf{Z}\mathbf{Z}^* = \begin{bmatrix} 4 & 4i \\ -4i & 4 \end{bmatrix} = \mathbf{Z}^*\mathbf{Z}$. Hence, $\mathbf{Z}$ is normal. Thus, by Theorem 7.9, $\mathbf{Z}$ is unitarily diagonalizable.

(12) Now, $\mathbf{A}$ unitary implies $\mathbf{A}\mathbf{A}^* = \mathbf{I}_n$. Suppose $\mathbf{A}^2 = \mathbf{I}_n$. Hence $\mathbf{A}\mathbf{A}^* = \mathbf{A}\mathbf{A}$. Multiplying both sides by $\mathbf{A}^{-1}$ ($= \mathbf{A}^*$) on the left yields $\mathbf{A}^* = \mathbf{A}$, and $\mathbf{A}$ is Hermitian. Conversely, if $\mathbf{A}$ is Hermitian, then $\mathbf{A} = \mathbf{A}^*$. Hence $\mathbf{A}\mathbf{A}^* = \mathbf{I}_n$ yields $\mathbf{A}^2 = \mathbf{I}_n$.

(13) Proofs for the five properties of an inner product:

(1) $\langle \mathbf{x},\mathbf{x} \rangle = \mathbf{A}\mathbf{x} \cdot \mathbf{A}\mathbf{x} \ge 0$, since the dot product is an inner product.

(2) $\langle \mathbf{x},\mathbf{x} \rangle = 0 \Rightarrow \mathbf{A}\mathbf{x} \cdot \mathbf{A}\mathbf{x} = 0 \Rightarrow \mathbf{A}\mathbf{x} = \mathbf{0}$ (since the dot product is an inner product) $\Rightarrow \mathbf{x} = \mathbf{0}$ (since $\mathbf{A}$ is nonsingular). Conversely, $\langle \mathbf{0},\mathbf{0} \rangle = \mathbf{A}\mathbf{0} \cdot \mathbf{A}\mathbf{0} = \mathbf{0} \cdot \mathbf{0} = 0$.

(3) $\langle \mathbf{x},\mathbf{y} \rangle = \mathbf{A}\mathbf{x} \cdot \mathbf{A}\mathbf{y} = \mathbf{A}\mathbf{y} \cdot \mathbf{A}\mathbf{x} = \langle \mathbf{y},\mathbf{x} \rangle$

(4) $\langle \mathbf{x}+\mathbf{y},\mathbf{z} \rangle = \mathbf{A}(\mathbf{x}+\mathbf{y}) \cdot \mathbf{A}\mathbf{z} = (\mathbf{A}\mathbf{x}+\mathbf{A}\mathbf{y}) \cdot \mathbf{A}\mathbf{z} = \mathbf{A}\mathbf{x} \cdot \mathbf{A}\mathbf{z} + \mathbf{A}\mathbf{y} \cdot \mathbf{A}\mathbf{z} = \langle \mathbf{x},\mathbf{z} \rangle + \langle \mathbf{y},\mathbf{z} \rangle$


(5) $\langle k\mathbf{x},\mathbf{y} \rangle = \mathbf{A}(k\mathbf{x}) \cdot \mathbf{A}\mathbf{y} = (k\mathbf{A}\mathbf{x}) \cdot (\mathbf{A}\mathbf{y}) = k((\mathbf{A}\mathbf{x}) \cdot (\mathbf{A}\mathbf{y})) = k\langle \mathbf{x},\mathbf{y} \rangle$

Also, $\langle \mathbf{x},\mathbf{y} \rangle = 35$ and $\|\mathbf{x}\| = 7$.
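The construction $\langle \mathbf{x},\mathbf{y} \rangle = \mathbf{A}\mathbf{x} \cdot \mathbf{A}\mathbf{y}$ is easy to exercise numerically; a sketch (ours). The test's $\mathbf{A}$, $\mathbf{x}$, $\mathbf{y}$ are not restated in this key, so the data below are sample choices of our own:

```python
# <x,y> = Ax . Ay with a nonsingular A defines an inner product; spot-check
# symmetry, additivity, and positivity on sample integer data.
A = [[2, 1], [1, 1]]   # sample nonsingular matrix (det = 1), our choice

def Av(v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def inner(x, y):
    ax, ay = Av(x), Av(y)
    return sum(a * b for a, b in zip(ax, ay))

x, y, z = [1, -2], [3, 4], [0, 5]
assert inner(x, y) == inner(y, x)                                # symmetry
xz = [a + b for a, b in zip(x, z)]
assert inner(xz, y) == inner(x, y) + inner(z, y)                 # additivity
assert inner(x, x) > 0                                           # x != 0
print("inner-product properties hold for the sample data")
```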

(14) The distance between w and z is 8.

(15) $\frac{1}{4}\left(\|\mathbf{x}+\mathbf{y}\|^2 - \|\mathbf{x}-\mathbf{y}\|^2\right) = \frac{1}{4}\left(\langle \mathbf{x}+\mathbf{y},\mathbf{x}+\mathbf{y} \rangle - \langle \mathbf{x}-\mathbf{y},\mathbf{x}-\mathbf{y} \rangle\right)$
$= \frac{1}{4}\left(\langle \mathbf{x}+\mathbf{y},\mathbf{x} \rangle + \langle \mathbf{x}+\mathbf{y},\mathbf{y} \rangle - \langle \mathbf{x}-\mathbf{y},\mathbf{x} \rangle + \langle \mathbf{x}-\mathbf{y},\mathbf{y} \rangle\right)$
$= \frac{1}{4}\left(\langle \mathbf{x},\mathbf{x} \rangle + \langle \mathbf{y},\mathbf{x} \rangle + \langle \mathbf{x},\mathbf{y} \rangle + \langle \mathbf{y},\mathbf{y} \rangle - \langle \mathbf{x},\mathbf{x} \rangle + \langle \mathbf{y},\mathbf{x} \rangle + \langle \mathbf{x},\mathbf{y} \rangle - \langle \mathbf{y},\mathbf{y} \rangle\right)$
$= \frac{1}{4}\left(4\langle \mathbf{x},\mathbf{y} \rangle\right) = \langle \mathbf{x},\mathbf{y} \rangle.$

(16) $\{x^2,\; 17x - 9x^2,\; 2 - 3x + x^2\}$

(17) $\mathbf{w}_1 = \frac{25}{4}x^2$, $\mathbf{w}_2 = x + 3 - \frac{25}{4}x^2$

Answers to Test for Chapter 7 – Version C

(1) $\mathbf{A}^T\mathbf{B}^* = \begin{bmatrix} 14-5i & 1+4i \\ 8-7i & 8-3i \end{bmatrix}$

(2) $\mathbf{H}\mathbf{J} = (\mathbf{H}\mathbf{J})^* = \mathbf{J}^*\mathbf{H}^* = \mathbf{J}\mathbf{H}$, because $\mathbf{H}$, $\mathbf{J}$, and $\mathbf{H}\mathbf{J}$ are all Hermitian.

(3) (a) True

(b) True

(4) $z_1 = 4 - 3i$, $z_2 = -1 + i$

(5) Let $\mathbf{A}$ be an $n \times n$ complex matrix that is not diagonalizable. By the Fundamental Theorem of Algebra, $p_{\mathbf{A}}(x)$ factors into $n$ linear factors. Hence, $p_{\mathbf{A}}(x) = (x - \lambda_1)^{k_1} \cdots (x - \lambda_m)^{k_m}$, where $\lambda_1, \ldots, \lambda_m$ are the eigenvalues of $\mathbf{A}$, and $k_1, \ldots, k_m$ are their corresponding algebraic multiplicities. Now, since the degree of $p_{\mathbf{A}}(x)$ is $n$, $\sum_{i=1}^m k_i = n$. Hence, if each geometric multiplicity actually equaled its corresponding algebraic multiplicity, then the sum of the geometric multiplicities would also be $n$, implying that $\mathbf{A}$ is diagonalizable. However, since $\mathbf{A}$ is not diagonalizable, some geometric multiplicity must be less than its corresponding algebraic multiplicity.

(6) $\lambda_1 = 2$, $\lambda_2 = 2i$; basis for $E_2 = \{[1,1]\}$, basis for $E_{2i} = \{[1,i]\}$. $L$ is diagonalizable.

(7) We need to find normal matrices $\mathbf{Z}$ and $\mathbf{W}$ such that $\mathbf{Z} + \mathbf{W}$ is not normal. Note that $\mathbf{Z} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$ and $\mathbf{W} = \begin{bmatrix} 0 & i \\ i & 0 \end{bmatrix}$ are normal matrices ($\mathbf{Z}$ is Hermitian and $\mathbf{W}$ is skew-Hermitian). Let $\mathbf{Y} = \mathbf{Z} + \mathbf{W} = \begin{bmatrix} 1 & 1+i \\ 1+i & 0 \end{bmatrix}$. Then

$\mathbf{Y}\mathbf{Y}^* = \begin{bmatrix} 1 & 1+i \\ 1+i & 0 \end{bmatrix}\begin{bmatrix} 1 & 1-i \\ 1-i & 0 \end{bmatrix} = \begin{bmatrix} 3 & 1-i \\ 1+i & 2 \end{bmatrix}$, while $\mathbf{Y}^*\mathbf{Y} = \begin{bmatrix} 1 & 1-i \\ 1-i & 0 \end{bmatrix}\begin{bmatrix} 1 & 1+i \\ 1+i & 0 \end{bmatrix} = \begin{bmatrix} 3 & 1+i \\ 1-i & 2 \end{bmatrix}$,

and so $\mathbf{Y}$ is not normal (the off-diagonal entries differ).

(8) One possibility: $B = \{[1,3+i,i,1-i],\,[1+i,3+4i,-1+i,2],\,[1,0,0,0],\,[0,0,1,0]\}$

(9) Let $j$ be such that $w_j$, the $j$th entry of $\mathbf{w}$, does not equal zero. Then $L(i\mathbf{e}_j) = \overline{i}\,w_j = (-i)w_j$. However, $iL(\mathbf{e}_j) = iw_j$. Setting these equal gives $-iw_j = iw_j$, which implies $0 = 2iw_j$, and thus $w_j = 0$. But this contradicts the assumption that $w_j \neq 0$.

(10) (Optional Hint: The last vector in the given basis is already orthogonal to the first two.)
Answer: $\{[2-i,\,1+2i,\,1],\,[5i,\,6,\,-1+2i],\,[-1,\,0,\,2+i]\}$

(11) Because the rows of $\mathbf{Z}$ form an orthonormal basis for $\mathbb{C}^n$, by part (1) of Theorem 7.7, $\mathbf{Z}$ is a unitary matrix. Hence, by part (2) of Theorem 7.7, the columns of $\mathbf{Z}$ form an orthonormal basis for $\mathbb{C}^n$.

(12) (a) Note that $\mathbf{A}\mathbf{A}^* = \mathbf{A}^*\mathbf{A} = \begin{bmatrix} 1825 & 900i \\ -900i & 1300 \end{bmatrix}$.

(b) (Optional Hint: $p_{\mathbf{A}}(x) = x^2 + 25ix + 1250$, or, $\lambda_1 = 25i$ and $\lambda_2 = -50i$.)

Answer: $\mathbf{P} = \frac{1}{5}\begin{bmatrix} 3 & 4 \\ 4i & -3i \end{bmatrix}$

(13) Proofs for the five properties of an inner product:

Property 1: $\langle \mathbf{x},\mathbf{x} \rangle = 2x_1x_1 - x_1x_2 - x_2x_1 + x_2x_2 = x_1^2 + x_1^2 - 2x_1x_2 + x_2^2 = x_1^2 + (x_1 - x_2)^2 \ge 0$.

Property 2: $\langle \mathbf{x},\mathbf{x} \rangle = 0$ precisely when $x_1 = 0$ and $x_1 - x_2 = 0$, which is when $x_1 = x_2 = 0$.

Property 3: $\langle \mathbf{y},\mathbf{x} \rangle = 2y_1x_1 - y_1x_2 - y_2x_1 + y_2x_2 = 2x_1y_1 - x_1y_2 - x_2y_1 + x_2y_2 = \langle \mathbf{x},\mathbf{y} \rangle$

Property 4: Let $\mathbf{z} = [z_1,z_2]$. Then $\langle \mathbf{x}+\mathbf{y},\mathbf{z} \rangle = 2(x_1+y_1)z_1 - (x_1+y_1)z_2 - (x_2+y_2)z_1 + (x_2+y_2)z_2 = 2x_1z_1 + 2y_1z_1 - x_1z_2 - y_1z_2 - x_2z_1 - y_2z_1 + x_2z_2 + y_2z_2 = (2x_1z_1 - x_1z_2 - x_2z_1 + x_2z_2) + (2y_1z_1 - y_1z_2 - y_2z_1 + y_2z_2) = \langle \mathbf{x},\mathbf{z} \rangle + \langle \mathbf{y},\mathbf{z} \rangle$

Property 5: $\langle k\mathbf{x},\mathbf{y} \rangle = 2(kx_1)y_1 - (kx_1)y_2 - (kx_2)y_1 + (kx_2)y_2 = k(2x_1y_1 - x_1y_2 - x_2y_1 + x_2y_2) = k\langle \mathbf{x},\mathbf{y} \rangle$

Also, $\langle \mathbf{x},\mathbf{y} \rangle = 0$ and $\|\mathbf{x}\| = \sqrt{5}$.
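This pairing can equivalently be written as $\langle \mathbf{x},\mathbf{y} \rangle = \mathbf{x}^T \mathbf{M} \mathbf{y}$ with $\mathbf{M} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}$ (a reformulation of ours, not stated in the manual); a quick check in Python:

```python
# <x,y> = x^T M y with the symmetric matrix M = [[2,-1],[-1,1]].
# Positive-definiteness follows from 2a^2 - 2ab + b^2 = a^2 + (a-b)^2,
# exactly as in Property 1 above.
M = [[2, -1], [-1, 1]]

def inner(x, y):
    return sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))

assert inner([1, 0], [0, 1]) == inner([0, 1], [1, 0])   # symmetry
for x in [[1, 0], [0, 1], [3, -2], [-1, 4]]:
    a, b = x
    assert inner(x, x) == a * a + (a - b) ** 2          # Property 1 identity
    assert inner(x, x) > 0
print("symmetric and positive definite on the sample vectors")
```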

(14) The distance between $f$ and $g$ is $\sqrt{\pi}$.


(15) $\mathbf{x} \perp \mathbf{y} \Leftrightarrow \langle \mathbf{x},\mathbf{y} \rangle = 0 \Leftrightarrow 2\langle \mathbf{x},\mathbf{y} \rangle = 0 \Leftrightarrow \|\mathbf{x}\|^2 + 2\langle \mathbf{x},\mathbf{y} \rangle + \|\mathbf{y}\|^2 = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2$
$\Leftrightarrow \langle \mathbf{x},\mathbf{x} \rangle + \langle \mathbf{x},\mathbf{y} \rangle + \langle \mathbf{y},\mathbf{x} \rangle + \langle \mathbf{y},\mathbf{y} \rangle = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2$
$\Leftrightarrow \langle \mathbf{x},\mathbf{x}+\mathbf{y} \rangle + \langle \mathbf{y},\mathbf{x}+\mathbf{y} \rangle = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2$
$\Leftrightarrow \langle \mathbf{x}+\mathbf{y},\mathbf{x}+\mathbf{y} \rangle = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2$
$\Leftrightarrow \|\mathbf{x}+\mathbf{y}\|^2 = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2$

(16) (Optional Hint: $\langle [-1,1,1],\,[3,-4,-1] \rangle = 0$)
Answer: $\{[-1,1,1],\,[3,-4,-1],\,[-1,2,0]\}$

(17) $\mathbf{w}_1 = 2x - \frac{1}{3}$, $\mathbf{w}_2 = x^2 - 2x + \frac{1}{3}$
