
Definitions, Theorems and Exercises

Abstract Algebra
Math 332

Ethan D. Bloch

December 26, 2013


Contents

1 Binary Operations
  1.1 Binary Operations
  1.2 Isomorphic Binary Operations

2 Groups
  2.1 Groups
  2.2 Isomorphic Groups
  2.3 Basic Properties of Groups
  2.4 Subgroups

3 Various Types of Groups
  3.1 Cyclic Groups
  3.2 Finitely Generated Groups
  3.3 Dihedral Groups
  3.4 Permutations and Permutation Groups
  3.5 Permutations Part II and Alternating Groups

4 Basic Constructions
  4.1 Direct Products
  4.2 Finitely Generated Abelian Groups
  4.3 Infinite Products of Groups
  4.4 Cosets
  4.5 Quotient Groups

5 Homomorphisms
  5.1 Homomorphisms
  5.2 Kernel and Image

6 Applications of Groups
  6.1 Group Actions

7 Rings and Fields
  7.1 Rings
  7.2 Polynomials

8 Vector Spaces
  8.1 Fields
  8.2 Vector Spaces
  8.3 Subspaces
  8.4 Linear Combinations
  8.5 Linear Independence
  8.6 Bases and Dimension
  8.7 Bases for Arbitrary Vector Spaces

9 Linear Maps
  9.1 Linear Maps
  9.2 Kernel and Image
  9.3 Rank-Nullity Theorem
  9.4 Isomorphisms
  9.5 Spaces of Linear Maps

10 Linear Maps and Matrices
  10.1 Review of Matrices—Multiplication
  10.2 Linear Maps Given by Matrix Multiplication
  10.3 All Linear Maps F^n → F^m
  10.4 Coordinate Vectors with respect to a Basis
  10.5 Matrix Representation of Linear Maps—Basics
  10.6 Matrix Representation of Linear Maps—Composition
  10.7 Matrix Representation of Linear Maps—Isomorphisms
  10.8 Matrix Representation of Linear Maps—The Big Picture
  10.9 Matrix Representation of Linear Maps—Change of Basis

1 Binary Operations

1.1 Binary Operations

Fraleigh, 7th ed. – Section 2; Gallian, 8th ed. – Section 2; Judson, 2013 – Section 3.2

Definition 1.1.1. Let A be a set. A binary operation on A is a function A×A→ A. A unary operation on A is a function A→ A. 4

Definition 1.1.2. Let A be a set, let ∗ be a binary operation on A and let H ⊆ A. The subsetH is closed under ∗ if a∗b ∈ H for all a,b ∈ H. 4

Definition 1.1.3. Let A be a set, and let ∗ be a binary operation on A. The binary operation∗ satisfies the Commutative Law (an alternative expression is that ∗ is commutative) ifa∗b = b∗a for all a,b ∈ A. 4

Definition 1.1.4. Let A be a set, and let ∗ be a binary operation on A. The binary operation∗ satisfies the Associative Law (an alternative expression is that ∗ is associative) if (a ∗b)∗ c = a∗ (b∗ c) for all a,b,c ∈ A. 4

Definition 1.1.5. Let A be a set, and let ∗ be a binary operation on A.

1. Let e∈ A. The element e is an identity element for ∗ if a∗e = a = e∗a for all a∈ A.

2. If ∗ has an identity element, the binary operation ∗ satisfies the Identity Law. 4

Lemma 1.1.6. Let A be a set, and let ∗ be a binary operation on A. If ∗ has an identityelement, the identity element is unique.

Proof. Let e, ê ∈ A. Suppose that e and ê are both identity elements for ∗. Then e = e ∗ ê = ê, where in the first equality we are thinking of ê as an identity element, and in the second equality we are thinking of e as an identity element. Therefore the identity element is unique.

Definition 1.1.7. Let A be a set, and let ∗ be a binary operation of A. Let e ∈ A. Supposethat e is an identity element for ∗.

1. Let a ∈ A. An inverse for a is an element a ′ ∈ A such that a∗a ′ = e and a ′ ∗a = e.

2. If every element in A has an inverse, the binary operation ∗ satisfies the InversesLaw. 4
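These laws can be checked mechanically when a binary operation on a finite set is given by an operation table. The following Python sketch is an illustration only (the table used is a made-up example, addition modulo 3); it tests the Commutative Law, the Associative Law, and the Identity Law by brute force.

```python
# Check the Commutative, Associative, and Identity Laws for a binary
# operation on a finite set given by an operation table (a dict).
from itertools import product

def is_commutative(A, op):
    return all(op[(a, b)] == op[(b, a)] for a, b in product(A, repeat=2))

def is_associative(A, op):
    return all(op[(op[(a, b)], c)] == op[(a, op[(b, c)])]
               for a, b, c in product(A, repeat=3))

def identity_element(A, op):
    # Returns an identity element if one exists, otherwise None.
    for e in A:
        if all(op[(a, e)] == a == op[(e, a)] for a in A):
            return e
    return None

# A made-up example: addition modulo 3 on {0, 1, 2}.
A = [0, 1, 2]
op = {(a, b): (a + b) % 3 for a, b in product(A, repeat=2)}
print(is_commutative(A, op), is_associative(A, op), identity_element(A, op))
# prints: True True 0
```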

Exercises

Exercise 1.1.1. Which of the following formulas defines a binary operation on the givenset?


(1) Let ∗ be defined by x ∗ y = xy for all x,y ∈ {−1,−2,−3, . . .}.

(2) Let � be defined by x � y = √(xy) for all x,y ∈ [0,∞).

(3) Let ⊕ be defined by x⊕ y = x− y for all x,y ∈ Q.

(4) Let ◦ be defined by (x,y) ◦ (z,w) = (x+ z, y+w) for all (x,y),(z,w) ∈ R^2 − {(0,0)}.

(5) Let � be defined by x� y = |x+ y| for all x,y ∈ N.

(6) Let ⊗ be defined by x⊗ y = ln(|xy|− e) for all x,y ∈ N.

Exercise 1.1.2. For each of the following binary operations, state whether the binary oper-ation is associative, whether it is commutative, whether there is an identity element and, ifthere is an identity element, which elements have inverses.

(1) The binary operation ⊕ on Z defined by x⊕ y =−xy for all x,y ∈ Z.

(2) The binary operation ? on R defined by x? y = x+y for all x,y ∈ R.

(3) The binary operation ⊗ on R defined by x⊗ y = x+ y− for all x,y ∈ R.

(4) The binary operation ∗ on Q defined by x∗ y = (x+ y) for all x,y ∈Q.

(5) The binary operation ◦ on R defined by x◦y = x for all x,y ∈ R.

(6) The binary operation � on Q defined by x� y = x+ y+ xy for all x,y ∈Q.

(7) The binary operation � on R^2 defined by (x,y) � (z,w) = (xz, y + w) for all (x,y),(z,w) ∈ R^2.

Exercise 1.1.3. For each of the following binary operations given by operation tables, statewhether the binary operation is commutative, whether there is an identity element and, ifthere is an identity element, which elements have inverses. (Do not check for associativity.)

(1)

⊗ .

(2)

�  j  k  l  m
j  k  j  m  j
k  j  k  l  m
l  k  l  j  l
m  j  m  l  m

(3)

∗  x  y  z  w
x  x  z  w  y
y  z  w  y  x
z  w  y  x  z
w  y  x  z  w


(4)

?  a  b  c  d  e
a  d  e  a  b  b
b  e  a  b  a  d
c  a  b  c  d  e
d  b  a  d  e  c
e  b  d  e  c  a

(5)

�  i  r  s  a  b  c
i  i  r  s  a  b  c
r  r  s  i  c  a  b
s  s  i  r  b  c  a
a  a  b  c  i  s  r
b  b  c  a  r  i  s
c  c  a  b  s  r  i

Exercise 1.1.4. Find an example of a set and a binary operation on the set such that thebinary operation satisfies the Identity Law and Inverses Law, but not the Associative Law,and for which at least one element of the set has more than one inverse. The simplest wayto solve this problem is by constructing an appropriate operation table.

Exercise 1.1.5. Let n ∈ N. Recall the definition of the set Zn and the binary operation · on Zn. Observe that [1] is the identity element for Zn with respect to multiplication. Let a ∈ Z. Prove that the following are equivalent.

a. The element [a] ∈ Zn has an inverse with respect to multiplication.

b. The equation ax ≡ 1 (mod n) has a solution.

c. There exist p,q ∈ Z such that ap + nq = 1.

(It turns out that the three conditions listed above are equivalent to the fact that a and n arerelatively prime.)
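As a computational companion to this exercise, the extended Euclidean algorithm produces integers p and q with ap + nq = gcd(a,n), and when that gcd is 1 this yields an inverse of [a] in Zn. A minimal Python sketch (an illustration, not part of the exercise):

```python
def extended_gcd(a, b):
    # Returns (g, p, q) with g = gcd(a, b) and g = p*a + q*b.
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, p, q = extended_gcd(b, a % b)
    return (g, q, p - (a // b) * q)

def inverse_mod(a, n):
    # Returns b with a*b ≡ 1 (mod n) if gcd(a, n) = 1, otherwise None.
    g, p, q = extended_gcd(a, n)
    return p % n if g == 1 else None

print(inverse_mod(5, 7))   # 3, since 5*3 = 15 ≡ 1 (mod 7)
print(inverse_mod(4, 6))   # None, since gcd(4, 6) = 2
```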

Exercise 1.1.6. Let A be a set. A ternary operation on A is a function A×A×A→ A. Aternary operation ? : A×A×A→ A is left-induced by a binary operation � : A×A→ A if?((a,b,c)) = (a�b)� c for all a,b,c ∈ A.

Is every ternary operation on a set left-induced by a binary operation? Give a proof or acounterexample.

Exercise 1.1.7. Let A be a set, and let ∗ be a binary operation on A. Suppose that ∗ satisfiesthe Associative Law and the Commutative Law. Prove that (a∗b)∗ (c∗d) = b∗ [(d ∗a)∗c]for all a,b,c,d ∈ A.

Exercise 1.1.8. Let B be a set, and let � be a binary operation on B. Suppose that � satisfiesthe Associative Law. Let

P = {b ∈ B | b�w = w�b for all w ∈ B}.

Prove that P is closed under �.

Exercise 1.1.9. Let C be a set, and let ? be a binary operation on C. Suppose that ? satisfies the Associative Law and the Commutative Law. Let

Q = {c ∈C | c? c = c}.


Prove that Q is closed under ?.

Exercise 1.1.10. Let A be a set, and let ∗ be a binary operation on A. An element c ∈ A isa left identity element for ∗ if c∗a = a for all a ∈ A. An element d ∈ A is a right identityelement for ∗ if a∗d = a for all a ∈ A.

(1) If A has a left identity element, is it unique? Give a proof or a counterexample.

(2) If A has a right identity element, is it unique? Give a proof or a counterexample.

(3) If A has a left identity element and a right identity element, do these elements haveto be equal? Give a proof or a counterexample.


1.2 Isomorphic Binary Operations

Fraleigh, 7th ed. – Section 3; Gallian, 8th ed. – Section 6

Definition 1.2.1. Let (G,∗) and (H,�) be sets with binary operations, and let f : G → H be a function. The function f is an isomorphism of the binary operations if f is bijective and if f(a ∗ b) = f(a) � f(b) for all a,b ∈ G. 4

Definition 1.2.2. Let (G,∗) and (H,�) be sets with binary operations. The binary operations ∗ and � are isomorphic if there is an isomorphism G → H. 4

Theorem 1.2.3. Let (G,∗) and (H,�) be sets with binary operations. Suppose that (G,∗) and (H,�) are isomorphic.

1. (G,∗) satisfies the Commutative Law if and only if (H,�) satisfies the CommutativeLaw.

2. (G,∗) satisfies the Associative Law if and only if (H,�) satisfies the Associative Law.

3. (G,∗) satisfies the Identity Law if and only if (H,�) satisfies the Identity Law. Iff : G→ H is an isomorphism, then f (eG) = eH .

4. (G,∗) satisfies the Inverses Law if and only if (H,�) satisfies the Inverses Law.

Exercises

Exercise 1.2.1. Prove that the two sets with binary operations in each of the following pairsare isomorphic.

(1) (Z,+) and (2Z,+), where 2Z = {2n | n ∈ Z}.

(2) (R − {0}, ·) and (R − {−1}, ∗), where x ∗ y = x + y + xy for all x,y ∈ R − {−1}.

(3) (R^4,+) and (M_{2×2}(R),+), where M_{2×2}(R) is the set of all 2×2 matrices with real entries.

Exercise 1.2.2. Let f : Z → Z be defined by f(n) = n + 1 for all n ∈ Z.

(1) Define a binary operation ∗ on Z so that f is an isomorphism of (Z,+) and (Z,∗),in that order.

(2) Define a binary operation � on Z so that f is an isomorphism of (Z,�) and (Z,+),in that order.
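A general way to build such operations is to transport the known operation along the bijection: set x ∗ y = f(f⁻¹(x) + f⁻¹(y)), which forces f to satisfy the isomorphism condition. The sketch below assumes f(n) = n + 1 (the exact constant in the exercise may differ in the original), in which case x ∗ y works out to x + y − 1.

```python
# Transport addition on Z along a bijection f, so that f becomes an
# isomorphism from (Z, +) to (Z, *).  Assumes f(n) = n + 1.
def f(n):
    return n + 1

def f_inv(n):
    return n - 1

def star(x, y):
    return f(f_inv(x) + f_inv(y))   # works out to x + y - 1

# Spot-check the isomorphism property f(a + b) == f(a) * f(b).
for a in range(-3, 4):
    for b in range(-3, 4):
        assert f(a + b) == star(f(a), f(b))
print(star(4, 9))   # 12, i.e. 4 + 9 - 1
```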

Exercise 1.2.3. Prove Theorem 1.2.3 (2).

Exercise 1.2.4. Prove Theorem 1.2.3 (3).

Exercise 1.2.5. Prove Theorem 1.2.3 (4).

2 Groups

2.1 Groups

Fraleigh, 7th ed. – Section 4; Gallian, 8th ed. – Section 2; Judson, 2013 – Section 3.2

Definition 2.1.1. Let G be a non-empty set, and let ∗ be a binary operation on G. The pair (G,∗) is a group if ∗ satisfies the Associative Law, the Identity Law and the Inverses Law. 4

Definition 2.1.2. Let (G,∗) be a group. The group (G,∗) is abelian if ∗ satisfies the Commutative Law. 4

Lemma 2.1.3. Let G be a group. If g ∈ G, then g has a unique inverse.

Definition 2.1.4. Let G be a group. If G is a finite set, then the order of the group, denoted |G|, is the cardinality of the set G. 4

Definition 2.1.5. Let n ∈ N, and let a,b ∈ Z. The number a is congruent to the number b modulo n, denoted a ≡ b (mod n), if a − b = kn for some k ∈ Z. 4

Theorem 2.1.6. Let n ∈ N, and let a ∈ Z. Then there is a unique r ∈ {0, . . . , n−1} such that a ≡ r (mod n).

Theorem 2.1.7. Let n ∈ N.

1. Let a,b ∈ Z. If a ≡ b (mod n), then [a] = [b]. If a ≢ b (mod n), then [a] ∩ [b] = ∅.

2. [0] ∪ [1] ∪ ··· ∪ [n−1] = Z.

Definition 2.1.8. Let n ∈ N. The set of integers modulo n, denoted Zn, is the set defined by Zn = {[0], [1], . . . , [n−1]}, where the relation classes are for congruence modulo n. 4

Definition 2.1.9. Let n ∈ N. Let + and · be the binary operations on Zn defined by [a] + [b] = [a+b] and [a] · [b] = [ab] for all [a], [b] ∈ Zn. 4

Lemma 2.1.10. Let n ∈ N, and let a,b,c,d ∈ Z. Suppose that a ≡ c (mod n) and b ≡ d (mod n). Then a + b ≡ c + d (mod n) and ab ≡ cd (mod n).

Proof. There exist k, j ∈ Z such that a − c = kn and b − d = jn. Then a = c + kn andb = d + jn, and therefore

a + b = (c + kn) + (d + jn) = c + d + (k + j)n,
ab = (c + kn)(d + jn) = cd + (cj + dk + kjn)n.

The desired result now follows.

Corollary 2.1.11. Let n∈N, and let [a], [b], [c], [d]∈Zn. Suppose that [a] = [c] and [b] = [d].Then [a+b] = [c+d] and [ab] = [cd].


Lemma 2.1.12. Let n ∈ N. Then (Zn,+) is an abelian group.
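Lemma 2.1.12 can be checked by brute force for any particular n, representing [a] by its remainder. A Python sketch for n = 6 (an illustration, not a substitute for the proof):

```python
from itertools import product

n = 6
Zn = list(range(n))                  # the class [a] represented by a in {0, ..., n-1}
add = lambda a, b: (a + b) % n

associative = all(add(add(a, b), c) == add(a, add(b, c))
                  for a, b, c in product(Zn, repeat=3))
commutative = all(add(a, b) == add(b, a) for a, b in product(Zn, repeat=2))
has_identity = all(add(a, 0) == a for a in Zn)
has_inverses = all(any(add(a, b) == 0 for b in Zn) for a in Zn)

print(associative, commutative, has_identity, has_inverses)   # True True True True
```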

Example 2.1.1. We wish to list all possible symmetries of an equilateral triangle. Such a triangle is shown in Figure 2.1.1. The letters A, B and C are not part of the triangle, but are added for our convenience. Mathematically, a symmetry of an object in the plane is an isometry of the plane (that is, a motion that does not change lengths between points) that takes the object onto itself. In other words, a symmetry of an object in the plane is an isometry of the plane that leaves the appearance of the object unchanged.

Because the letters A, B and C in Figure 2.1.1 are not part of the triangle, a symmetry of the triangle may interchange these letters; we use the letters to keep track of what the isometry did. There are only two types of isometries that will leave the triangle looking unchanged: reflections (that is, flips) of the plane in certain lines, and rotations of the plane by certain angles about the center of the triangle. (This fact takes a proof, but is beyond the scope of this course.) In Figure 2.1.2 we see the three possible lines in which the plane can be reflected without changing the appearance of the triangle. Let M_1, M_2 and M_3 denote the reflections of the plane in these lines. For example, if we apply reflection M_2 to the plane, we see that the vertex labeled B is unmoved, and that the vertices labeled A and C are interchanged, as seen in Figure 2.1.3. The only two possible non-trivial rotations of the plane about the center of the triangle that leave the appearance of the triangle unchanged are rotation by 120° clockwise and rotation by 240° clockwise, denoted R_{120} and R_{240}. We do not need rotation by 120° counterclockwise and rotation by 240° counterclockwise, even though they also leave the appearance of the triangle unchanged, because they have the same net effect as rotation by 240° clockwise and rotation by 120° clockwise, respectively, and it is only the net effect of isometries that is relevant to the study of symmetry. Let I denote the identity map of the plane, which is an isometry, and which can be thought of as rotation by 0°.

Figure 2.1.1. [An equilateral triangle with vertex A at the top and vertices C and B at the bottom.]

The set G = {I, R_{120}, R_{240}, M_1, M_2, M_3} is the collection of all isometries of the plane that take the equilateral triangle onto itself. Each of these isometries can be thought of as a function R^2 → R^2, and as such we can combine these isometries by composition of functions. It can be proved that the composition of isometries is an isometry, and therefore

Figure 2.1.2. [The three lines of reflection symmetry of the triangle, labeled L1, L2 and L3.]

Figure 2.1.3. [The effect of the reflection M_2: the vertex labeled B stays fixed, and the vertices labeled A and C are interchanged.]

Page 15: Abstract Algebra Notes

2.1 Groups 13

composition becomes a binary operation on the set G; the details are omitted. We can thenform the operation table for this binary operation:

◦        I        R_{120}  R_{240}  M_1      M_2      M_3
I        I        R_{120}  R_{240}  M_1      M_2      M_3
R_{120}  R_{120}  R_{240}  I        M_3      M_1      M_2
R_{240}  R_{240}  I        R_{120}  M_2      M_3      M_1
M_1      M_1      M_2      M_3      I        R_{120}  R_{240}
M_2      M_2      M_3      M_1      R_{240}  I        R_{120}
M_3      M_3      M_1      M_2      R_{120}  R_{240}  I

Composition of functions is associative, and hence this binary operation on G is associative. Observe that I is an identity element. It is seen that I, M_1, M_2 and M_3 are their own inverses, and that R_{120} and R_{240} are inverses of each other. Therefore (G,◦) is a group. This group is not abelian, however. For example, we see that R_{120} ◦ M_1 ≠ M_1 ◦ R_{120}.

The group G is called the symmetry group of the equilateral triangle. ♦
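The six symmetries can also be encoded as permutations of the three vertex positions, which makes the operation table easy to generate and the failure of commutativity easy to confirm. A Python sketch; the particular encoding of each symmetry is one consistent choice made for the illustration, not taken from the figures:

```python
from itertools import permutations

# Encode each symmetry as a permutation of the vertex positions 0, 1, 2
# (a tuple p means that position i is sent to position p[i]).
I    = (0, 1, 2)
R120 = (1, 2, 0)                                 # one rotation
R240 = (2, 0, 1)                                 # the other rotation
M1, M2, M3 = (0, 2, 1), (2, 1, 0), (1, 0, 2)     # the three reflections
G = [I, R120, R240, M1, M2, M3]

def compose(s, t):
    # (s ∘ t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

# G is closed under composition and consists of all permutations of the positions.
assert {compose(s, t) for s in G for t in G} == set(permutations(range(3)))

# The group is not abelian:
print(compose(R120, M1) == compose(M1, R120))    # False
```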

Exercises

Exercise 2.1.1. For each of the following sets with binary operations, state whether the setwith binary operation is a group, and whether it is an abelian group.

(1) The set (0,1], and the binary operation multiplication.

(2) The set of positive rational numbers, and the binary operation multiplication.

(3) The set of even integers, and the binary operation addition.

(4) The set of even integers, and the binary operation multiplication.

(5) The set Z, and the binary operation ∗ on Z defined by a∗b = a−b for all a,b ∈ Z.

(6) The set Z, and the binary operation ? on Z defined by a?b = ab+a for all a,b ∈ Z.

(7) The set Z, and the binary operation � on Z defined by a � b = a + b + 1 for all a,b ∈ Z.

(8) The set R − {−1}, and the binary operation � on R − {−1} defined by a � b = a + b + ab for all a,b ∈ R − {−1}.

Exercise 2.1.2. Let P = {a,b,c,d,e}. Find a binary operation ∗ on P given by an operationtable such that (P,∗) is a group.

Exercise 2.1.3. Find an example of a set and a binary operation on the set given by anoperation table such that each element of the set appears once and only once in each rowof the operation table and once and only once in each column, but the set together with thisbinary operation is not a group.


Exercise 2.1.4. Let A be a set. Let P(A) denote the power set of A. Define the binary operation △ on P(A) by X △ Y = (X − Y) ∪ (Y − X) for all X,Y ∈ P(A). (This binary operation is called symmetric difference.) Prove that (P(A),△) is an abelian group.

Exercise 2.1.5. Let (G,∗) be a group. Prove that if x ′ = x for all x ∈ G, then G is abelian.Is the converse to this statement true?

Exercise 2.1.6. Let (H,?) be a group. Suppose that H is finite, and has an even number ofelements. Prove that there is some h ∈ H such that h 6= eH and h?h = eH .


2.2 Isomorphic Groups

Fraleigh, 7th ed. – Section 3; Gallian, 8th ed. – Section 6; Judson, 2013 – Section 9.1

Definition 2.2.1. Let (G,∗) and (H,�) be groups, and let f : G→ H be a function. Thefunction f is an isomorphism (sometimes called a group isomorphism) if f is bijectiveand if f (a∗b) = f (a)� f (b) for all a,b ∈ G. 4

Definition 2.2.2. Let (G,∗) and (H,�) be groups. The groups G and H are isomorphic ifthere is an isomorphism G→ H. If G and H are isomorphic, it is denoted G ∼= H. 4

Theorem 2.2.3. Let G, H be groups, and let f : G→ H be an isomorphism.

1. f (eG) = eH .

2. If a ∈G, then f (a ′) = [ f (a)] ′, where the first inverse is in G, and the second is in H.

Proof. We will prove Part (2), leaving the rest to the reader. Let ∗ and � be the binary operations of G and H, respectively.

(2). Let a ∈ G. Then f(a) � f(a′) = f(a ∗ a′) = f(eG) = eH , where the last equality uses Part (1) of this theorem, and the other two equalities use the fact that f is an isomorphism and that G is a group. A similar calculation shows that f(a′) � f(a) = eH . By Lemma 2.1.3, it follows that [f(a)]′ = f(a′).

Theorem 2.2.4. Let G, H and K be groups, and let f : G→ H and j : H → K be isomor-phisms.

1. The identity map 1_G : G → G is an isomorphism.

2. The function f^{−1} is an isomorphism.

3. The function j◦ f is an isomorphism.

Lemma 2.2.5. Let G and H be groups. Suppose that G and H are isomorphic. Then G isabelian if and only if H is abelian.

Proof. This lemma follows immediately from Theorem 1.2.3 (1).

Lemma 2.2.6. Let (G,∗) be a group, let A be a set, and let f : A→ G be a bijective map.Then there is a unique binary operation � on A such that (A,�) is a group and f is anisomorphism.


Exercises

Exercise 2.2.1. Which of the following functions are isomorphisms? The groups under consideration are (R,+), and (Q,+), and ((0,∞), ·).

(1) Let f : Q → (0,∞) be defined by f(x) = 2^x for all x ∈ Q.

(2) Let k : (0,∞) → (0,∞) be defined by k(x) = x^{−1} for all x ∈ (0,∞).

(3) Let m : R → R be defined by m(x) = x + 1 for all x ∈ R.

(4) Let g : (0,∞) → R be defined by g(x) = ln x for all x ∈ (0,∞).

Exercise 2.2.2. Prove Theorem 2.2.3 (1).

Exercise 2.2.3. Prove Theorem 2.2.4 (1) and (2).

Exercise 2.2.4. Prove that up to isomorphism, the only two groups with four elements are Z_4 and the Klein 4-group K. Consider all possible operation tables for the binary operation of a group with four elements; use the fact that each element of a group appears once in each row and once in each column of the operation table for the binary operation of the group, as stated in Remark 2.3.2.

Exercise 2.2.5. Let G be a group, and let g ∈ G. Let ig : G → G be defined by ig(x) = gxg^{−1} for all x ∈ G. Prove that ig is an isomorphism.


2.3 Basic Properties of Groups

Fraleigh, 7th ed. – Section 4; Gallian, 8th ed. – Section 2; Judson, 2013 – Section 3.2

Theorem 2.3.1. Let G be a group, and let a,b,c ∈ G.

1. e^{−1} = e.

2. If ac = bc, then a = b (Cancellation Law).

3. If ca = cb, then a = b (Cancellation Law).

4. (a^{−1})^{−1} = a.

5. (ab)^{−1} = b^{−1}a^{−1}.

6. If ba = e, then b = a^{−1}.

7. If ab = e, then b = a^{−1}.

Proof. We prove Part (5), leaving the rest to the reader.

(5). By Lemma 2.1.3 we know that ab has a unique inverse. If we can show that (ab)(b^{−1}a^{−1}) = e and (b^{−1}a^{−1})(ab) = e, then it will follow that b^{−1}a^{−1} is the unique inverse for ab, which means that (ab)^{−1} = b^{−1}a^{−1}. Using the definition of a group we see that

(ab)(b^{−1}a^{−1}) = [(ab)b^{−1}]a^{−1} = [a(bb^{−1})]a^{−1} = [ae]a^{−1} = aa^{−1} = e.

A similar computation shows that (b^{−1}a^{−1})(ab) = e.

Remark 2.3.2. A useful consequence of Theorem 2.3.1 (2) and (3) is that if the binaryoperation of a group with finitely many elements is given by an operation table, then eachelement of the group appears once and only once in each row of the operation table andonce and only once in each column (consider what would happen otherwise). On the otherhand, just because an operation table does have each element once and only once in eachrow and once and only once in each column does not guarantee that the operation yields agroup; the reader is asked to find such an operation table in Exercise 2.1.3. ♦

Theorem 2.3.3. Let G be a group. The following are equivalent.

a. G is abelian.

b. (ab)^{−1} = a^{−1}b^{−1} for all a,b ∈ G.

c. aba^{−1}b^{−1} = e for all a,b ∈ G.

d. (ab)^2 = a^2b^2 for all a,b ∈ G.

Theorem 2.3.4 (Definition by Recursion). Let H be a set, let e ∈ H and let k : H → H be a function. Then there is a unique function f : N → H such that f(1) = e, and that f(n+1) = k(f(n)) for all n ∈ N.

Definition 2.3.5. Let G be a group and let a ∈ G.

1. The element a^n ∈ G is defined for all n ∈ N by letting a^1 = a, and a^{n+1} = a · a^n for all n ∈ N.

2. The element a^0 ∈ G is defined by a^0 = e. For each n ∈ N, the element a^{−n} is defined by a^{−n} = (a^n)^{−1}. 4

Lemma 2.3.6. Let G be a group, let a ∈ G and let n,m ∈ Z.

1. a^n a^m = a^{n+m}.

2. (a^n)′ = a^{−n}.

Lemma 2.3.7. Let G be a group, let a ∈ G and let n,m ∈ Z.

1. a^n a^m = a^{n+m}.

2. (a^n)^{−1} = a^{−n}.

Proof. First, we prove Part (1) in the case where n ∈ N and m = −n. Using the definition of a^{−n} we see that a^n a^m = a^n a^{−n} = a^n (a^n)^{−1} = e = a^0 = a^{n+m}.

We now prove Part (2). There are three cases. First, suppose n > 0. Then n ∈ N. Then the result is true by definition. Second, suppose n = 0. Then (a^n)^{−1} = (a^0)^{−1} = e^{−1} = e = a^0 = a^{−0} = a^{−n}. Third, suppose n < 0. Then −n ∈ N. We then have e = e^{−1} = (a^0)^{−1} = (a^n a^{−n})^{−1} = (a^{−n})^{−1}(a^n)^{−1}, and similarly e = (a^n)^{−1}(a^{−n})^{−1}. By the uniqueness of inverses we deduce that [(a^{−n})^{−1}]^{−1} = (a^n)^{−1}, and hence a^{−n} = (a^n)^{−1}.

We now prove Part (1) in general. There are five cases. First, suppose that n > 0 and m > 0. Then n,m ∈ N. This part of the proof is by induction. We will use induction on k to prove that for each k ∈ N, the formula a^k a^p = a^{k+p} holds for all p ∈ N.

Let k = 1. Let p ∈ N. Then by the definition of a^p we see that a^k a^p = a^1 · a^p = a · a^p = a^{p+1} = a^{1+p} = a^{k+p}. Hence the result is true for k = 1.

Now let k ∈ N. Suppose that the result is true for k. Let p ∈ N. Then by the definition of a^k we see that a^{k+1} a^p = (a · a^k) · a^p = a · (a^k · a^p) = a · a^{k+p} = a^{(k+p)+1} = a^{(k+1)+p}. Hence the result is true for k + 1. It follows by induction that a^k a^p = a^{k+p} for all p ∈ N, for all k ∈ N.

Second, suppose that n = 0 or m = 0. Without loss of generality, assume that n = 0. Then by the definition of a^m we see that a^n a^m = a^0 a^m = e · a^m = a^m = a^{0+m} = a^{n+m}.

Third, suppose that n > 0 and m < 0. Then −m > 0, and hence n ∈ N and −m ∈ N. There are now three subcases.

For the first subcase, suppose that n + m > 0. Therefore n + m ∈ N. Let r = n + m. Then n = r + (−m). Because r > 0 and −m > 0, then by the first case in this proof, together with the definition of a^{−(−m)}, we see that a^n a^m = a^{r+(−m)} a^m = (a^r a^{−m}) a^{−(−m)} = a^r (a^{−m} a^{−(−m)}) = a^r (a^{−m}(a^{−m})^{−1}) = a^r · e = a^r = a^{n+m}.

For the second subcase, suppose that n + m = 0. Then m = −n. Using the definition of a^{−n} we see that a^n a^m = a^n a^{−n} = a^n (a^n)^{−1} = e = a^0 = a^{n+m}.

For the third subcase, suppose that n + m < 0. Then (−n) + (−m) = −(n + m) > 0. Because −m > 0 and −n < 0, we can use the first of our three subcases, together with the definition of a^{−n}, to see that a^n a^m = ((a^n)^{−1})^{−1}((a^m)^{−1})^{−1} = (a^{−n})^{−1}(a^{−m})^{−1} = [a^{−m} a^{−n}]^{−1} = [a^{(−m)+(−n)}]^{−1} = [a^{−(n+m)}]^{−1} = [(a^{n+m})^{−1}]^{−1} = a^{n+m}. We have now completed the proof in the case where n > 0 and m < 0.

Fourth, suppose that n < 0 and m > 0. This case is similar to the previous case, and we omit the details.

Fifth, suppose that n < 0 and m < 0. Then −n > 0 and −m > 0, and we can proceed similarly to the third subcase of the third case of this proof; we omit the details.

Definition 2.3.8. Let A be a set, and let ∗ be a binary operation on A. An element e ∈ A is a left identity element for ∗ if e ∗ a = a for all a ∈ A. If ∗ has a left identity element, the binary operation ∗ satisfies the Left Identity Law. 4

Definition 2.3.9. Let A be a set, and let ∗ be a binary operation on A. Let e ∈ A. Suppose that e is a left identity element for ∗. If a ∈ A, a left inverse for a is an element a′ ∈ A such that a′ ∗ a = e. If every element in A has a left inverse, the binary operation ∗ satisfies the Left Inverses Law. 4

Theorem 2.3.10. Let G be a set, and let ∗ be a binary operation on G. If the pair (G,∗) satisfies the Associative Law, the Left Identity Law and the Left Inverses Law, then (G,∗) is a group.

Exercises

Exercise 2.3.1. Let H be a group, and let a,b,c∈H. Prove that if abc = eH , then bca = eH .

Exercise 2.3.2. Let G be a group. An element g ∈ G is idempotent if g^2 = g. Prove that G has precisely one idempotent element.

Exercise 2.3.3. Let H be a group. Suppose that h^2 = eH for all h ∈ H. Prove that H is abelian.

Exercise 2.3.4. Let G be a group, and let a,b ∈ G. Prove that (ab)^2 = a^2b^2 if and only if ab = ba.

Exercise 2.3.5. Let G be a group, and let a,b ∈ G. Prove that (ab)^{−1} = a^{−1}b^{−1} if and only if ab = ba. (Do not use Theorem 2.3.3.)

Exercise 2.3.6. Find an example of a group G, and elements a,b ∈ G, such that (ab)^{−1} ≠ a^{−1}b^{−1}.

Exercise 2.3.7. Let (H,∗) be a group. Let � be the binary operation on H defined bya�b = b∗a for all a,b ∈ H.

(1) Prove that (H,�) is a group.

(2) Prove that (H,∗) and (H,�) are isomorphic.

Exercise 2.3.8. Let G be a group, and let g ∈ G. Let ig : G → G be defined by ig(x) = gxg′ for all x ∈ G. Prove that ig is an isomorphism.

Exercise 2.3.9. Let G be a group, and let g ∈ G. Suppose that G is finite. Prove that thereis some n ∈ N such that gn = eG.


2.4 Subgroups

Fraleigh, 7th ed. – Section 5; Gallian, 8th ed. – Section 3; Judson, 2013 – Section 3.3

Definition 2.4.1. Let G be a group, and let H ⊆G be a subset. The subset H is a subgroupof G if the following two conditions hold.

(a) H is closed under ∗.

(b) (H,∗) is a group.

If H is a subgroup of G, it is denoted H ≤ G. 4

Lemma 2.4.2. Let G be a group, and let H ≤ G.

1. The identity element of G is in H, and it is the identity element of H.

2. The inverse operation in H is the same as the inverse operation in G.

Proof.

(1). Let eG be the identity element of G. Because (H,∗) is a group, it has an identityelement, say eH . (We cannot assume, until we prove it, that eH is the same as eG.) TheneH ∗ eH = eH thinking of eH as being in H, and eG ∗ eH = eH thinking of eH as being in G.Hence eH ∗ eH = eG ∗ eH . Because both eH and eG are in G, we can use Theorem 2.3.1 (2)to deduce that eH = eG. Hence eG ∈ H, and eG is the identity element of H.

(2). Now let a ∈ H. Because (G,∗) is a group, the element a has an inverse a^{−1} ∈ G. We will show that a^{−1} ∈ H. Because (H,∗) is a group, then a has an inverse â ∈ H. (Again, we cannot assume, until we prove it, that â is the same as a^{−1}.) Using the definition of inverses, and what we saw in the previous paragraph, we know that a^{−1} ∗ a = eG and â ∗ a = eG. Hence â ∗ a = a^{−1} ∗ a. Using Theorem 2.3.1 (2) again, we deduce that a^{−1} = â.

Theorem 2.4.3. Let G be a group, and let H ⊆ G. Then H ≤ G if and only if the followingthree conditions hold.

(i) e ∈ H.

(ii) If a,b ∈ H, then a∗b ∈ H.

(iii) If a ∈ H, then a^{−1} ∈ H.


Proof. First suppose that H is a subgroup. Then Property (ii) holds by the definition of asubgroup, and Properties (i) and (iii) hold by Lemma 2.4.2.

Now suppose that Properties (i), (ii) and (iii) hold. To show that H is a subgroup, weneed to show that (H,∗) is a group. We know that ∗ is associative with respect to all theelements of G, so it certainly is associative with respect to the elements of H. The otherproperties of a group follow from what is being assumed.

Theorem 2.4.4. Let G be a group, and let H ⊆ G. Then H ≤ G if and only if the followingthree conditions hold.

(i) H ≠ ∅.

(ii) If a,b ∈ H, then a∗b ∈ H.

(iii) If a ∈ H, then a^{−1} ∈ H.

Proof. First, suppose that H is a subgroup. Then Properties (i), (ii) and (iii) hold by Theorem 2.4.3.

Second, suppose that Properties (i), (ii) and (iii) hold. Because H ≠ ∅, there is some b ∈ H. By Property (iii) we know that b^{−1} ∈ H. By Property (ii) we deduce that b^{−1} ∗ b ∈ H, and hence e ∈ H. We now use Theorem 2.4.3 to deduce that H is a subgroup.
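Theorem 2.4.4 amounts to a finite checklist, which can be run directly on small examples. A Python sketch that tests whether a subset H of Zn is a subgroup under addition (brute force, for illustration only):

```python
def is_subgroup_of_Zn(H, n):
    # Theorem 2.4.4: H is nonempty, closed under +, and closed under inverses.
    H = set(H)
    if not H:
        return False
    closed   = all((a + b) % n in H for a in H for b in H)
    inverses = all((-a) % n in H for a in H)
    return closed and inverses

print(is_subgroup_of_Zn({0, 2, 4}, 6))   # True:  the even classes in Z_6
print(is_subgroup_of_Zn({0, 1, 2}, 6))   # False: 1 + 2 = 3 is missing
```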

Lemma 2.4.5. Let G be a group, and let K ⊆ H ⊆ G. If K ≤ H and H ≤ G, then K ≤ G.

Lemma 2.4.6. Let G be a group, and let {Hi}i∈I be a family of subgroups of G indexed by I. Then ⋂_{i∈I} Hi ≤ G.

Theorem 2.4.7. Let G, H be groups, and let f : G→ H be an isomorphism.

1. If A≤ G, then f (A)≤ H.

2. If B ≤ H, then f^{−1}(B) ≤ G.

Proof. We will prove Part (1), leaving the rest to the reader. Let ∗ and � be the binary operations of G and H, respectively.

(1). Let A ≤ G. By Lemma 2.4.2 (1) we know that eG ∈ A, and by Theorem 2.2.3 (1) we know that eH = f(eG) ∈ f(A). Hence f(A) is non-empty. We can therefore use Theorem 2.4.4 to show that f(A) is a subgroup of H. Let x,y ∈ f(A). Then there are a,b ∈ A such that x = f(a) and y = f(b). Hence x � y = f(a) � f(b) = f(a ∗ b), because f is an isomorphism. Because A is a subgroup of G we know that a ∗ b ∈ A, and hence x � y ∈ f(A). Using Theorem 2.2.3 (2) we see that x^{−1} = [f(a)]^{−1} = f(a^{−1}). Because A is a subgroup of G, it follows from Theorem 2.4.4 (iii) that a^{−1} ∈ A, and hence x^{−1} ∈ f(A). We now use Theorem 2.4.4 to deduce that f(A) is a subgroup of H.


Lemma 2.4.8. Let (G,∗) and (H,�) be groups, and let f : G→ H be a function. Supposethat f is injective, and that f (a∗b) = f (a)� f (b) for all a,b ∈ G.

1. f (G)≤ H.

2. The map f : G→ f (G) is an isomorphism.

Proof. Part (1) is proved using the same proof as Theorem 2.4.7 (1), and Part (2) is trivial.

Exercises

Exercise 2.4.1. Let GL_2(R) denote the set of invertible 2×2 matrices with real number entries, and let SL_2(R) denote the set of all 2×2 matrices with real number entries that have determinant 1. Prove that SL_2(R) is a subgroup of GL_2(R). (This exercise requires familiarity with basic properties of determinants.)

Exercise 2.4.2. Let n ∈ N.

(1) Prove that (Zn,+) is an abelian group.

(2) Suppose that n is not a prime number. Then n = ab for some a,b ∈ N such that 1 < a < n and 1 < b < n. Prove that the set {[0], [a], [2a], . . . , [(b−1)a]} is a subgroup of Zn.

(3) Is (Zn − {[0]}, ·) a group for all n? If not, can you find any conditions on n that would guarantee that (Zn − {[0]}, ·) is a group?

Exercise 2.4.3. Find all the subgroups of the symmetry group of the square.

Exercise 2.4.4. Let G be a group, and let A,B ⊆ G. Suppose that G is abelian. Let ABdenote the subset

AB = {ab | a ∈ A and b ∈ B}.

Prove that if A,B≤ G, then AB≤ G.

Exercise 2.4.5. Let G be a group, and let H ⊆ G. Prove that H ≤ G if and only if thefollowing two conditions hold.

(i) H ≠ ∅.

(ii) If a,b ∈ H, then a ∗ b^{−1} ∈ H.

Exercise 2.4.6. Let G be a group. Suppose that G is abelian. Let I denote the subset

I = {g ∈ G | g^2 = eG}.

Prove that I ≤ G.


Exercise 2.4.7. Let G be a group, and let H ⊆ G. Suppose that the following three condi-tions hold.

(i) H ≠ ∅.

(ii) H is finite.

(iii) H is closed under ∗.

Prove that H ≤ G.

Exercise 2.4.8. Let G be a group, and let s ∈ G. Let Cs denote the subset

Cs = {g ∈ G | gs = sg}.

Prove that Cs ≤ G.

Exercise 2.4.9. Let G be a group, and let A⊆ G. Let CA denote the subset

CA = {g ∈ G | ga = ag for all a ∈ A}.

Prove that CA ≤ G.

3 Various Types of Groups

3.1 Cyclic Groups

Fraleigh, 7th ed. – Section 6; Gallian, 8th ed. – Section 4; Judson, 2013 – Section 4.1

Lemma 3.1.1. Let G be a group and let a ∈ G. Then

{a^n | n ∈ Z} = ⋂{H ≤ G | a ∈ H}. (3.1.1)

Definition 3.1.2. Let G be a group and let a ∈G. The cyclic subgroup of G generated bya, denoted 〈a〉, is the set in Equation 3.1.1. 4

Definition 3.1.3. Let G be a group. Then G is a cyclic group if G = 〈a〉 for some a ∈ G;the element a is a generator of G. 4

Definition 3.1.4. Let G be a group and let H ≤ G. Then H is a cyclic subgroup of G ifH = 〈a〉 for some a ∈ G; the element a is a generator of H. 4

Definition 3.1.5. Let G be a group and let a ∈ G. If 〈a〉 is finite, the order of a, denoted|a|, is the cardinality of 〈a〉. If 〈a〉 is infinite, then a has infinite order. 4

Theorem 3.1.6 (Well-Ordering Principle). Let A ⊆ N be a set. If A is non-empty, thenthere is a unique m ∈ A such that m≤ a for all a ∈ A.

Theorem 3.1.7 (Division Algorithm). Let a,b ∈ Z. Suppose that b ≠ 0. Then there are unique q,r ∈ Z such that a = qb + r and 0 ≤ r < |b|.

Theorem 3.1.8. Let G be a group, let a ∈ G and let m ∈ N. Then |a| = m if and only if a^m = e and a^i ≠ e for all i ∈ {1, . . . , m−1}.

Proof. First, suppose that |a| = m. Then 〈a〉 = {a^n | n ∈ Z} has m elements. Therefore the elements a^0, a^1, a^2, . . . , a^m cannot be all distinct. Hence there are i, j ∈ {0, . . . , m} such that i ≠ j and a^i = a^j. WLOG assume i < j. Then a^{j−i} = e. Observe that j − i ∈ N, and j − i ≤ m. Let M = {p ∈ N | a^p = e}. Then M ⊆ N, and M ≠ ∅ because j − i ∈ M. By the Well-Ordering Principle (Theorem 3.1.6), there is a unique k ∈ M such that k ≤ x for all x ∈ M. Hence a^k = e and a^q ≠ e for all q ∈ {1, . . . , k−1}.

Because j − i ∈ M, it follows that k ≤ j − i ≤ m. Let y ∈ Z. By the Division Algorithm (Theorem 3.1.7), there are unique q,r ∈ Z such that y = qk + r and 0 ≤ r < k. Then a^y = a^{qk+r} = (a^k)^q a^r = e^q a^r = a^r. It follows that 〈a〉 ⊆ {a^n | n ∈ {0, . . . , k−1}}. Therefore |〈a〉| ≤ k. Because |〈a〉| = m, we deduce that m ≤ k. It follows that k = m. Hence a^m = e and a^i ≠ e for all i ∈ {1, . . . , m−1}.

Second, suppose a^m = e and a^i ≠ e for all i ∈ {1, . . . , m−1}. We claim that 〈a〉 ⊆ {a^n | n ∈ {0, . . . , m−1}}, using the Division Algorithm similarly to an argument seen previously in this proof. Additionally, we claim that all the elements a^0, a^1, a^2, . . . , a^{m−1} are distinct. If not, then an argument similar to the one above would show that there are s,t ∈ {0, . . . , m−1} such that s < t and a^{t−s} = e; because t − s ≤ m − 1 < m, we would have a contradiction. It follows that 〈a〉 = {a^n | n ∈ {0, . . . , m−1}}, and hence |a| = m.

Lemma 3.1.9. Let G be a group. If G is cyclic, then G is abelian.

Theorem 3.1.10. Let G be a group and let H ≤ G. If G is cyclic, then H is cyclic.

Corollary 3.1.11. Every subgroup of Z has the form nZ for some n ∈ N ∪ {0}.

Theorem 3.1.12. Let G be a group. Suppose that G is cyclic.

1. If G is infinite, then G ∼= Z.

2. If |G|= n for some n ∈ N, then G ∼= Zn.

Definition 3.1.13. Let a,b ∈ Z. If at least one of a or b is not zero, the greatest common divisor of a and b, denoted (a,b), is the largest integer that divides both a and b. Let (0,0) = 0. 4

Lemma 3.1.14. Let a,b ∈ Z. Then (a,b) exists and (a,b) ≥ 0.

Definition 3.1.15. Let a,b ∈ Z. The numbers a and b are relatively prime if (a,b) = 1. 4

Lemma 3.1.16. Let a,b ∈ Z. Then there are m,n ∈ Z such that (a,b) = ma + nb.

Corollary 3.1.17. Let a,b ∈ Z. If r is a common divisor of a and b, then r|(a,b).

Corollary 3.1.18. Let a,b ∈ Z. Then (a,b) = 1 if and only if there are m,n ∈ Z such that ma + nb = 1.

Corollary 3.1.19. Let a,b,r ∈ Z. Suppose that (a,b) = 1. If a|br then a|r.

Theorem 3.1.20. Let G be a group. Suppose that G = 〈a〉 for some a ∈ G, and that |G| = n for some n ∈ N. Let s,r ∈ N.

1. |a^s| = n/(n,s).

2. 〈a^s〉 = 〈a^r〉 if and only if (n,s) = (n,r).

Corollary 3.1.21. Let G be a group. Suppose that G = 〈a〉 for some a ∈ G, and that |G| = n for some n ∈ N. Let s ∈ N. Then G = 〈a^s〉 if and only if (n,s) = 1.
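Theorem 3.1.20 (1) can be verified numerically in Zn, where, writing the group additively with generator [1], the element a^s corresponds to the class [s]. A Python sketch checking |[s]| = n/(n,s) for n = 12 (an illustration only):

```python
from math import gcd

def order_in_Zn(s, n):
    # Smallest k >= 1 with k*s ≡ 0 (mod n), i.e. the order of [s] in (Z_n, +).
    k, total = 1, s % n
    while total != 0:
        k += 1
        total = (total + s) % n
    return k

n = 12
for s in range(1, n):
    assert order_in_Zn(s, n) == n // gcd(n, s)
print([order_in_Zn(s, n) for s in range(1, n)])
# [12, 6, 4, 3, 12, 2, 12, 3, 4, 6, 12]
```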

Exercises

Exercise 3.1.1. Let C be a cyclic group of order . How many generators does C have?

Exercise 3.1.2. List all orders of subgroups of Z.

Exercise 3.1.3. Find all the subgroups of Z.

Exercise 3.1.4. Let p,q ∈ N be prime numbers. How many generators does the group Zpqhave?


Exercise 3.1.5. Let G be a group, and let g,h ∈ G. Prove that if gh has order p for somep ∈ N, then hg has order p.

Exercise 3.1.6. Let G, H be groups, and let f ,k : G→ H be isomorphisms. Suppose thatG = 〈a〉 for some a ∈ G. Prove that if f (a) = k(a), then f = k.

Exercise 3.1.7. Let G be a group. Prove that if G has a finite number of subgroups, then Gis finite.

Exercise 3.1.8. Let G be a group, and let A,B ≤ G. Suppose that G is abelian, and that Aand B are cyclic and finite. Suppose that |A| and |B| are relatively prime. Prove that G has acyclic subgroup of order |A| · |B|


3.2 Finitely Generated Groups

Fraleigh, 7th ed. – Section 7

Lemma 3.2.1. Let G be a group and let S⊆ G.

1. {H ≤ G | S ⊆ H} ≠ ∅.

2.⋂{H ≤ G | S⊆ H}≤ G.

3. If K ≤ G and S⊆ K, then⋂{H ≤ G | S⊆ H}⊆ K.

Proof. Part (1) holds because G ∈ {H ≤ G | S ⊆ H}, Part (2) holds by Lemma 2.4.6, andPart (3) is straightforward.

Definition 3.2.2. Let G be a group and let S ⊆ G. The subgroup generated by S, denoted 〈S〉, is defined by 〈S〉 = ⋂{H ≤ G | S ⊆ H}. 4

Theorem 3.2.3. Let G be a group and let S⊆ G. Then

〈S〉 = {(a_1)^{n_1}(a_2)^{n_2} ··· (a_k)^{n_k} | k ∈ N, and a_1, . . . , a_k ∈ S, and n_1, . . . , n_k ∈ Z}.

Remark 3.2.4. In an abelian group, using additive notation, we would write the result ofTheorem 3.2.3 as

〈S〉 = {n_1a_1 + n_2a_2 + ··· + n_ka_k | k ∈ N, and a_1, . . . , a_k ∈ S, and n_1, . . . , n_k ∈ Z}. ♦

Definition 3.2.5. Let G be a group and let S ⊆ G. Suppose 〈S〉 = G. The set S generatesG, and the elements of S are generators of G. 4

Definition 3.2.6. Let G be a group. If G = 〈S〉 for some finite subset S ⊆ G, then G isfinitely generated.


3.3 Dihedral Groups

Fraleigh, 7th ed. – Section 8; Gallian, 8th ed. – Section 1; Judson, 2013 – Section 5.2

Theorem 3.3.1. Let n ∈ N. Suppose n ≥ 3. For a regular n-gon, let 1 denote the identity symmetry, let r denote the smallest possible clockwise rotation symmetry, and let m denote a reflection symmetry.

1. r^n = 1, and r^k ≠ 1 for k ∈ {1, . . . , n−1}.

2. m^2 = 1 and m ≠ 1.

3. rm = mr^{−1}.

4. If p ∈ Z, then r^p m = mr^{−p}.

5. If p ∈ N, then r^p = r^k for a unique k ∈ {0, 1, . . . , n−1}.

6. If p,s ∈ {0, 1, . . . , n−1}, then r^p = r^s and mr^p = mr^s if and only if p = s.

7. If p ∈ {0, 1, . . . , n−1}, then m ≠ r^p.

8. If p ∈ {0, 1, . . . , n−1}, then (r^p)^{−1} = r^{n−p} and (mr^p)^{−1} = mr^p.

Proof. Parts (1) and (2) follow immediately from the definition of r and m. Part (3) can be verified geometrically by doing both sides to a polygon. For Part (4), prove it for positive p by induction, for p = 0 trivially, and for negative p by using the fact that it has been proved for positive p and then for p negative looking at mr^p m = mr^{−(−p)}m = r^{−p}mm = r^{−p}. Part (5) uses the Division Algorithm (Theorem 3.1.7). Part (6) is proved similarly to proofs we saw for cyclic groups. Part (7) is geometric, because m reverses orientation and powers of r preserve orientation. Part (8) is also geometric, using basic ideas about rotations and reflections.

Definition 3.3.2. Let n ∈ N. Suppose n ≥ 3. For a regular n-gon, let 1 denote the identity symmetry, let r denote the smallest possible clockwise rotation symmetry, and let m denote a reflection symmetry. The n-th dihedral group, denoted Dn, is the group

Dn = {1, r, r^2, . . . , r^{n−1}, m, mr, mr^2, . . . , mr^{n−1}}. 4
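The relations in Theorem 3.3.1 determine the entire multiplication: every element of Dn can be written as m^i r^k with i ∈ {0,1} and k ∈ {0, . . . , n−1}, and r^p m = mr^{−p} lets any product be rewritten in that form. A Python sketch of Dn as pairs (i,k) under this rule; the encoding is an assumption made for the illustration, not notation from the notes:

```python
n = 5   # a regular pentagon; any n >= 3 works

def mult(x, y):
    # Elements are pairs (i, k) standing for m^i r^k, with i in {0,1}
    # and k in {0, ..., n-1}.  Uses r^k m = m r^(-k) to normalize.
    (i, k), (j, l) = x, y
    return ((i + j) % 2, (((-1) ** j) * k + l) % n)

D = [(i, k) for i in (0, 1) for k in range(n)]
identity = (0, 0)
r, m = (0, 1), (1, 0)

# Check the defining relations: r^n = 1, m^2 = 1, and r m = m r^(-1).
power = identity
for _ in range(n):
    power = mult(power, r)
assert power == identity
assert mult(m, m) == identity
assert mult(r, m) == mult(m, (0, (-1) % n))
print(len(D))   # 2n elements
```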

Theorem 3.3.3. Let n ∈ N. Suppose n ≥ 3. Then there is a unique group with generators a and b that satisfy the following three conditions.

(i) a^n = 1, and a^k ≠ 1 for all k ∈ {1, . . . , n−1}.

(ii) b^2 = 1 and b ≠ 1.

(iii) ab = ba^{−1}.


3.4 Permutations and Permutation Groups

Fraleigh, 7th ed. – Section 8; Gallian, 8th ed. – Section 5, 6

Judson, 2013 – Section 5.1

Definition 3.4.1. Let A be a non-empty set. A permutation of A is a bijective map A→ A.The set of all permutations of A is denoted SA. The identity permutation A→ A is denotedι . 4

Definition 3.4.2. Let A be a non-empty set. The composition of two permutations of A iscalled permutation multiplication. 4

Lemma 3.4.3. Let A be a non-empty set. The pair (SA,◦) is a group.

Remark 3.4.4. In the group SA, we will usually write στ as an abbreviation of σ ◦τ . ♦

Lemma 3.4.5. Let A and B be non-empty sets. Suppose that A and B have the same cardi-nality. Then SA ∼= SB.

Definition 3.4.6. Let n ∈ N, and let A = {1, . . . , n}. The group SA is denoted Sn. The group Sn is the symmetric group on n letters. 4

Proposition 3.4.7. Let A be a non-empty set. Then SA is abelian if and only if A is finite and |A| ≤ 2.

Proof. It is easy to see that S1 and S2 are abelian (look at the groups explicitly). We see that S3 is not abelian by looking at two permutations that each interchange a different pair of elements. However, those two permutations can be thought of as belonging to SA for any A that has three or more elements (including the case when A is infinite), and hence all such groups are not abelian.

Theorem 3.4.8 (Cayley’s Theorem). Let G be a group. Then there is a set A such that Gis isomorphic to a subgroup of SA. If G is finite, a finite set A can be found.
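Cayley's Theorem can be made concrete for a finite group by sending each g to the permutation x ↦ gx of the underlying set. A Python sketch for the example G = Z6 under addition (the choice of group is only an illustration):

```python
from itertools import product

n = 6
G = list(range(n))
op = lambda a, b: (a + b) % n

# lam(g) is the permutation "left multiplication by g", written as a tuple.
def lam(g):
    return tuple(op(g, x) for x in G)

image = {g: lam(g) for g in G}

# lam is injective, and it turns the group operation into composition of permutations.
assert len(set(image.values())) == len(G)
for g, h in product(G, repeat=2):
    composed = tuple(image[g][image[h][x]] for x in G)   # lam(g) ∘ lam(h)
    assert composed == image[op(g, h)]
print(image[2])   # (2, 3, 4, 5, 0, 1)
```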

Exercises

Exercise 3.4.1. Let σ ,τ ∈ S be defined by

σ =

(

)and τ =

(

).

Compute each of the following.

(1) σ.

(2) τ^{−1}.


(3) τσ .

(4) στ .

Exercise 3.4.2. Find all subgroups of S.

Exercise 3.4.3. Let n ∈ N. Suppose n ≥ 3. Let σ ∈ Sn. Suppose that στ = τσ for all τ ∈ Sn. Prove that σ = ι.

Exercise 3.4.4. Let A be a non-empty set, and let P≤ SA. The subgroup P is transitive onA if for each x,y ∈ A, there is some σ ∈ P such that σ(x) = y.

Prove that if A is finite, then there is a subgroup Q ≤ SA such that Q is cyclic, that|Q|= |A|, and that Q is transitive on A.


3.5 Permutations Part II and Alternating Groups

Fraleigh, 7th ed. – Section 9; Gallian, 8th ed. – Section 5; Judson, 2013 – Section 5.1

Definition 3.5.1. Let A be a non-empty set, and let σ ∈ SA. Let ∼ be the relation on A defined by a ∼ b if and only if b = σ^n(a) for some n ∈ Z, for all a,b ∈ A. 4

Lemma 3.5.2. Let A be a non-empty set, and let σ ∈ SA. The relation ∼ is an equivalencerelation on A.

Definition 3.5.3. Let A be a set, and let σ ∈ SA. The equivalence classes of ∼ are called theorbits of σ . 4

Definition 3.5.4. Let n ∈ N, and let σ ∈ Sn. The permutation σ is a cycle if it has at most one orbit with more than one element. The length of a cycle is the number of elements in its largest orbit. A cycle of length 2 is a transposition. 4

Lemma 3.5.5. Let n ∈ N, and let σ ∈ Sn. Then σ is the product of disjoint cycles; the cycles of length greater than 1 are unique, though not their order.

Corollary 3.5.6. Let n ∈ N, and let σ ∈ Sn. Suppose n ≥ 2. Then σ is the product of transpositions.

Theorem 3.5.7. Let n ∈ N, and let σ ∈ Sn. Suppose n ≥ 2. Then either all representations of σ as a product of transpositions have an even number of transpositions, or all have an odd number of transpositions.

Definition 3.5.8. Let n ∈ N, and let σ ∈ Sn. Suppose n ≥ 2. The permutation σ is even or odd, respectively, if it is the product of an even number or odd number, respectively, of transpositions. 4

Definition 3.5.9. Let n ∈ N. Suppose n ≥ 2. The set of all even permutations of A is denoted An. 4

Lemma 3.5.10. Let n ∈ N. Suppose n ≥ 2.

1. The set An is a subgroup of Sn.

2. |An| = n!/2.

Definition 3.5.11. Let n ∈ N. Suppose n ≥ 2. The group An is the alternating group on n letters. 4
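Orbits, cycle form, and parity can all be computed directly from the one-line form of a permutation. A Python sketch; σ below is a made-up example on {0, . . . , 5}:

```python
def cycles(perm):
    # perm is a tuple: i is sent to perm[i].  Returns the orbits as cycles.
    seen, result = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = perm[i]
        result.append(tuple(cycle))
    return result

def is_even(perm):
    # A cycle of length L is a product of L - 1 transpositions.
    return sum(len(c) - 1 for c in cycles(perm)) % 2 == 0

sigma = (1, 2, 0, 4, 3, 5)          # made-up example
print(cycles(sigma))                # [(0, 1, 2), (3, 4), (5,)]
print(is_even(sigma))               # False: 2 + 1 transpositions
```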

Exercises


Exercise 3.5.1. Compute the product of cycles (,,)(,) as a single permutation in thefollowing groups.

(1) In S.

(2) In S.

Exercise 3.5.2. Let σ ∈ S be defined by

σ =

(

).

(1) Write σ as a product of cycles.

(2) Write σ as a product of transpositions.

Exercise 3.5.3. Let n ∈ N. Suppose n ≥ 2. Let σ ∈ Sn.

(1) Prove that σ can be written as a product of at most n−1 transpositions.

(2) Prove that if σ is not a cycle, it can be written as a product of at most n−2 transpositions.

(3) Prove that if σ is odd, it can be written as a product of 2n+3 transpositions.

(4) Prove that if σ is even, it can be written as a product of 2n+8 transpositions.

Exercise 3.5.4. Let n ∈ N. Suppose n ≥ 2. Let K ≤ Sn. Prove that either all the permutations in K are even, or exactly half the permutations in K are even.

Exercise 3.5.5. Let n ∈ N. Suppose n ≥ 2. Let σ ∈ Sn. Suppose that σ is odd. Prove that if τ ∈ Sn is odd, then there is some η ∈ An such that τ = ση.

Exercise 3.5.6. Let n ∈ N. Let σ ∈ Sn. Prove that if σ is a cycle of odd length, then σ^2 is a cycle.

4 Basic Constructions

4.1 Direct Products

Fraleigh, 7th ed. – Section 11; Gallian, 8th ed. – Section 8; Judson, 2013 – Section 9.2

Definition 4.1.1. Let H and K be groups. The product binary operation on H × K is the binary operation defined by (h_1, k_1)(h_2, k_2) = (h_1h_2, k_1k_2) for all (h_1, k_1), (h_2, k_2) ∈ H × K. 4

Lemma 4.1.2. Let H and K be groups. The set H × K with the product binary operation is a group.

Definition 4.1.3. Let H and K be groups. The set H × K with the product binary operation is the direct product of the groups H and K. 4

Lemma 4.1.4. Let H and K be groups. Then H × K ∼= K × H.

Lemma 4.1.5. Let H and K be groups. Suppose that H and K are abelian. Then H×K isabelian.

Theorem 4.1.6. Let m,n ∈N. The group Zm×Zn is cyclic and is isomorphic to Zmn if andonly if m and n are relatively prime.
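Theorem 4.1.6 is easy to test exhaustively for small m and n: the product is cyclic exactly when some element has order mn. A Python sketch (brute force, for illustration only):

```python
from math import gcd

def is_cyclic_product(m, n):
    # (Z_m x Z_n, +) is cyclic iff some element has order m*n.
    def order(a, b):
        k, x, y = 1, a % m, b % n
        while (x, y) != (0, 0):
            k, x, y = k + 1, (x + a) % m, (y + b) % n
        return k
    return any(order(a, b) == m * n for a in range(m) for b in range(n))

for m in range(1, 7):
    for n in range(1, 7):
        assert is_cyclic_product(m, n) == (gcd(m, n) == 1)
print(is_cyclic_product(2, 3), is_cyclic_product(2, 4))   # True False
```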

Definition 4.1.7. Let G be a group, and let A,B ⊆ G. Let AB denote the subset AB = {ab | a ∈ A and b ∈ B}. 4

Lemma 4.1.8. Let H and K be groups. Let H̄ = H × {eK} and K̄ = {eH} × K.

1. H̄, K̄ ≤ H × K.

2. H̄K̄ = H × K.

3. H̄ ∩ K̄ = {(eH, eK)}.

4. hk = kh for all h ∈ H̄ and k ∈ K̄.

Lemma 4.1.9. Let G be a group, and let H,K ≤ G. Suppose that the following propertieshold.

(i) HK = G.

(ii) H ∩K = {e}.

(iii) hk = kh for all h ∈ H and k ∈ K.

Then G ∼= H×K.

Lemma 4.1.10. Let G be a group, and let H,K ≤G. Then HK = G and H ∩K = {e} if andonly if for every g ∈ G there are unique h ∈ H and k ∈ K such that g = hk.


Theorem 4.1.11. Let m_1, . . . , m_r ∈ N. The group ∏_{i=1}^r Z_{m_i} is cyclic and is isomorphic to Z_{m_1m_2···m_r} if and only if m_i and m_k are relatively prime for all i,k ∈ {1, . . . , r} such that i ≠ k.

Exercises

Exercise 4.1.1. List all the elements of Z×Z, and find the order of each element.

Exercise 4.1.2. Find all the subgroups of Z×Z×Z.

Exercise 4.1.3. Prove Lemma 4.1.8.

Exercise 4.1.4. Prove Lemma 4.1.9.


4.2 Finitely Generated Abelian Groups

Fraleigh, 7th ed. – Section 11; Gallian, 8th ed. – Section 11; Judson, 2013 – Section 13.1

Theorem 4.2.1 (Fundamental Theorem of Finitely Generated Abelian Groups). Let Gbe a finitely generated abelian group. Then

G = Z_{p_1^{n_1}} × Z_{p_2^{n_2}} × ··· × Z_{p_k^{n_k}} × Z × Z × ··· × Z

for some k ∈ N, and prime numbers p_1, . . . , p_k ∈ N, and n_1, . . . , n_k ∈ N. This direct product is unique up to the rearrangement of factors.

Exercises

Exercise 4.2.1. Find, up to isomorphism, all abelian groups of order .

Exercise 4.2.2. Find, up to isomorphism, all abelian groups of order .

Exercise 4.2.3. How many abelian groups are there, up to isomorphism, of order .

Exercise 4.2.4. Let G be a group. Suppose that G is finite and abelian. Prove that G isnot cyclic if and only if there is some prime number p ∈ N such that G has a subgroupisomorphic to Zp×Zp.


4.3 Infinite Products of Groups

Definition 4.3.1. Let I be a non-empty set, and let {Ai}i∈I be a family of sets indexed by I. The product of the family of sets, denoted ∏_{i∈I} Ai, is the set defined by

∏_{i∈I} Ai = {f ∈ F(I, ⋃_{i∈I} Ai) | f(i) ∈ Ai for all i ∈ I}.

If all the sets Ai are equal to a single set A, the product ∏_{i∈I} Ai is denoted by A^I. 4

Theorem 4.3.2. Let I be a non-empty set, and let {Ai}i∈I be a family of non-empty sets indexed by I. Then ∏_{i∈I} Ai ≠ ∅.

Definition 4.3.3. Let I be a non-empty set, and let {Gi}i∈I be a family of non-empty groups indexed by I. Let f,g ∈ ∏_{i∈I} Gi.

1. Let fg : I → ⋃_{i∈I} Gi be defined by (fg)(i) = f(i)g(i) for all i ∈ I.

2. Let f′ : I → ⋃_{i∈I} Gi be defined by (f′)(i) = [f(i)]^{−1} for all i ∈ I.

3. Let e : I → ⋃_{i∈I} Gi be defined by e(i) = e_{Gi} for all i ∈ I. 4

Lemma 4.3.4. Let I be a non-empty set, and let {Gi}i∈I be a family of non-empty groups indexed by I. Let f,g ∈ ∏_{i∈I} Gi.

1. fg ∈ ∏_{i∈I} Gi.

2. f′ ∈ ∏_{i∈I} Gi.

3. e ∈ ∏_{i∈I} Gi.

Lemma 4.3.5. Let I be a non-empty set, and let {Gi}i∈I be a family of non-empty groups indexed by I. Then ∏_{i∈I} Gi is a group.

Proof. We will show the Associative Law; the other properties are similar. Let f ,g,h ∈∏i∈I Gi. Let i ∈ I. Then

[( f g)h](i) = [( f g)(i)]h(i) = [ f (i)g(i)]h(i) = f (i)[g(i)h(i)]= f (i)[(gh)(i)] = [ f (gh)](i).

Hence ( f g)h = f (gh).


4.4 Cosets

Fraleigh, 7th ed. – Section 10; Gallian, 8th ed. – Section 7

Judson, 2013 – Section 6.1, 6.2

Definition 4.4.1. Let G be a group and let H ≤ G. Let ∼L and ∼R be the relations on G defined by a ∼L b if and only if a^{−1}b ∈ H for all a,b ∈ G, and a ∼R b if and only if ab^{−1} ∈ H for all a,b ∈ G. 4

Lemma 4.4.2. Let G be a group and let H ≤ G. The relations ∼L and ∼R are equivalencerelations on G.

Definition 4.4.3. Let G be a group, let H ≤ G and let a ∈ G. Let aH and Ha be defined by

aH = {ah | h ∈ H} and Ha = {ha | h ∈ H}. 4

Lemma 4.4.4. Let G be a group, let H ≤ G and let a ∈ G.

1. The equivalence class of a with respect to ∼L is aH.

2. The equivalence class of a with respect to ∼R is Ha.

Proof. With respect to ∼L, we have

[a] = {b ∈ G | a ∼L b} = {b ∈ G | a^{−1}b ∈ H}
    = {b ∈ G | a^{−1}b = h for some h ∈ H}
    = {b ∈ G | b = ah for some h ∈ H} = {ah | h ∈ H} = aH.

The proof for ∼R is similar.

Definition 4.4.5. Let G be a group, let H ≤ G and let a ∈ G. The left coset of a (withrespect to H) is the set aH. The right coset of a (with respect to H) is the set Ha. 4

Lemma 4.4.6. Let G be a group, let H ≤ G and let a,b ∈ G.

1. aH = bH if and only if a^{−1}b ∈ H.

2. Ha = Hb if and only if ab^{−1} ∈ H.

3. aH = H if and only if a ∈ H.

4. Ha = H if and only if a ∈ H.

Proof. This lemma follows immediately from the definition of ∼L and ∼R and Lemma 4.4.4.


Lemma 4.4.7. Let G be a group and let H ≤ G.

1. All left cosets of G with respect to H and all right cosets of G with respect to H havethe same cardinality as H.

2. The family of all left cosets of G with respect to H has the same cardinality as thefamily of all right cosets of G with respect to H.

Definition 4.4.8. Let G be a group, let H ≤ G. The index of H in G, denoted (G : H), is the number of left cosets of G with respect to H. 4

Theorem 4.4.9. Let G be a group and let H ≤ G. Suppose that G is finite. Then |G| = |H| · (G : H).

Corollary 4.4.10 (Lagrange’s Theorem). Let G be a group and let H ≤ G. Suppose that Gis finite. Then |H | divides |G|.

Corollary 4.4.11. Let G be a group and let a ∈G. Suppose that G is finite. Then |a| divides|G|.

Corollary 4.4.12. Let G be a group. If |G| is a prime number, then G is cyclic.

Corollary 4.4.13. Let p ∈ N be a prime number. The only group of order p, up to isomor-phism, is Zp.

Theorem 4.4.14. Let G be a group and let K ≤ H ≤ G. Suppose that (G : H) and (H : K)are finite. Then (G : K) is finite, and (G : K) = (G : H) · (H : K).
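Cosets and the counting formula |G| = |H| · (G : H) can be seen concretely in a small example. A Python sketch using the subgroup H = {0, 4, 8} of Z12 (the choice of subgroup is only an illustration):

```python
n = 12
G = set(range(n))
H = {0, 4, 8}                       # a subgroup of (Z_12, +)

def left_coset(a):
    return frozenset((a + h) % n for h in H)

cosets = {left_coset(a) for a in G}

# Distinct cosets partition G, all have |H| elements, and |G| = |H| * (G : H).
assert all(len(c) == len(H) for c in cosets)
assert set().union(*cosets) == G
assert len(G) == len(H) * len(cosets)
print(sorted(sorted(c) for c in cosets))
# [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
```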

Exercises

Exercise 4.4.1. Find all cosets of the group Z with respect to the subgroup Z.

Exercise 4.4.2. Find all cosets of the group Z with respect to the subgroup 〈 〉.

Exercise 4.4.3. Find (Z : 〈 〉).

Exercise 4.4.4. Let G be a group, and let p,q ∈ N be prime numbers. Suppose that |G| = pq. Prove that every proper subgroup of G is cyclic.

Exercise 4.4.5. Let G be a group, let H ≤ G. Prove that there is a bijective map from theset of all left cosets of G with respect to H to the set of all right cosets of G with respect toH. (Note: the group G is not necessarily finite.)

Exercise 4.4.6. Prove Theorem 4.4.14.

Exercise 4.4.7. Let G be a group, let H ≤ G. Suppose that G is finite, and that (G : H) = 2. Prove that every left coset of G with respect to H is a right coset of G with respect to H.

Exercise 4.4.8. Let G be a group. Suppose that G is finite. Let n = |G|. Prove that if g ∈ G, then g^n = eG.


4.5 Quotient Groups

Fraleigh, 7th ed. – Section 14, 15; Gallian, 8th ed. – Section 9; Judson, 2013 – Section 10.1

Lemma 4.5.1. Let G be a group and let H ≤ G. The formula (aH)(bH) = (ab)H for alla,b ∈ G gives a well-defined binary operation on the set of all left cosets of G with respectto H if and only if gH = Hg for all g ∈ G.

Lemma 4.5.2. Let G be a group and let H ≤ G. The following are equivalent.

a. gHg^{−1} ⊆ H for all g ∈ G.

b. gHg^{−1} = H for all g ∈ G.

c. gH = Hg for all g ∈ G.

Definition 4.5.3. Let G be a group and let H ≤ G. The subgroup H is normal if any ofthe conditions in Lemma 4.5.2 are satisfied. If H is a normal subgroup of G, it is denotedH C G. 4

Definition 4.5.4. Let G be a group and let H CG. The set of all left cosets of G with respectto H is denoted G/H. 4

Lemma 4.5.5. Let G be a group and let H C G. The set G/H with the binary operationgiven by (aH)(bH) = (ab)H for all a,b ∈ G is a group.

Definition 4.5.6. Let G be a group and let H C G. The set G/H with the binary operationgiven by (aH)(bH) = (ab)H for all a,b ∈ G is the quotient group of G by H. 4

Corollary 4.5.7. Let G be a group and let H C G. Suppose that G is finite. Then |G/H| = (G : H) = |G|/|H|.

Proof. This corollary follows immediately from Theorem 4.4.9.
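Because Z12 is abelian, every subgroup is normal, and the quotient can be computed by operating on the cosets themselves. A Python sketch of Z12/H for H = {0, 4, 8}, whose quotient has |G|/|H| = 4 elements; the example is an illustration only:

```python
n = 12
H = frozenset({0, 4, 8})            # a (normal) subgroup of the abelian group Z_12

def coset(a):
    return frozenset((a + h) % n for h in H)

G_mod_H = {coset(a) for a in range(n)}

def coset_add(X, Y):
    # (aH)(bH) = (ab)H; pick any representatives a in X, b in Y.
    a, b = min(X), min(Y)
    return coset(a + b)

# Well-definedness check: the result does not depend on the representatives.
for X in G_mod_H:
    for Y in G_mod_H:
        results = {coset(a + b) for a in X for b in Y}
        assert len(results) == 1 and coset_add(X, Y) in results

print(len(G_mod_H))                 # 4, i.e. |G| / |H|
```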

Lemma 4.5.8. Let G be a group and let H ≤ G. Suppose that G is abelian. Then G/H is abelian.

Lemma 4.5.9. Let G be a group and let H ≤ G. Suppose that G is cyclic. Then G/H is cyclic.

Lemma 4.5.10. Let G be a group and let H ≤ G. If (G : H) = 2, then H C G.

Lemma 4.5.11. Let G be a group and let H ≤ G. Suppose that G is finite. If |H| = |G|/2, then H C G.

Lemma 4.5.12. Let H and K be groups. Let G = H×K.


1. H× {eK}C G and {eH}×K C G.

2. G/(H× {eK}) ∼= K and G/({eH}×K) ∼= H.

Lemma 4.5.13. Let G be a group, and let H,K ≤G. Suppose that the following propertieshold.

(i) HK = G.

(ii) H ∩K = {e}.

Then H,K C G if and only if hk = kh for all h ∈ H and k ∈ K.

Definition 4.5.14. Let G be a group. The group G is simple if it has no non-trivial propernormal subgroups. 4

Exercises

Exercise 4.5.1. Compute each of the following quotient groups; state what the group is inthe form given by the Fundamental Theorem of Finitely Generated Abelian Groups.

(1) (Z×Z)/〈(,)〉.

(2) (Z×Z)/〈(,)〉.

(3) (Z×Z×Z)/〈(,,)〉.

(4) (Z×Z)/〈(,)〉.

(5) (Z×Z×Z)/〈(,,)〉.

Exercise 4.5.2. Let G be a group and let H C G. Suppose that (G : H) is finite. Let m =(G : H). Prove that if g ∈ G, then gm ∈ H.

Exercise 4.5.3. Let G be a group, and let {Hi}i∈I be a family of normal subgroups of Gindexed by I. Prove that

⋂i∈I Hi C G.

Exercise 4.5.4. Let G be a group and let S⊆ G.

(1) Prove that {H C G | S⊆ H} 6= /0.

(2) Prove that⋂{H C G | S⊆ H}C G.

(3) Prove that if K C G and S⊆ K, then⋂{H C G | S⊆ H}⊆ K.

The normal subgroup generated by S, which is defined by ⋂{H C G | S ⊆ H}, is the smallest normal subgroup of G that contains S.


Exercise 4.5.5. Let G be a group. A commutator in G is an element of G that can be expressed in the form aba⁻¹b⁻¹ for some a, b ∈ G. The commutator subgroup of G is the smallest normal subgroup of G that contains all the commutators in G; such a subgroup exists by Exercise 4.5.4.

Let C denote the commutator subgroup of G. Prove that G/C is abelian.

Exercise 4.5.6. Let G be a group and let H ≤ G. Suppose that no other subgroup of G hasthe same cardinality as H. Prove that H C G.

Exercise 4.5.7. Let G be a group, let H ≤ G, and let N C G.

(1) Prove that H ∩N C H.

(2) Is H ∩N a normal subgroup of G? Give a proof or a counterexample.

Exercise 4.5.8. Let G be a group, and let m ∈ N. Suppose that G has a subgroup of order m. Let K = ⋂{H ≤ G | |H| = m}. Prove that K C G.

5 Homomorphisms

5.1 Homomorphisms

Fraleigh, 7th ed. – Section 13
Gallian, 8th ed. – Section 10
Judson, 2013 – Section 11.1

Definition 5.1.1. Let (G, ∗) and (H, ⋄) be groups, and let f : G → H be a function. The function f is a homomorphism (sometimes called a group homomorphism) if f(a ∗ b) = f(a) ⋄ f(b) for all a, b ∈ G. 4

Theorem 5.1.2. Let G, H be groups, and let f : G→ H be a homomorphism.

1. f (eG) = eH .

2. If a ∈ G, then f(a⁻¹) = [f(a)]⁻¹, where the first inverse is in G, and the second is in H.

3. If A ≤ G, then f(A) ≤ H.

4. If B ≤ H, then f⁻¹(B) ≤ G.

Theorem 5.1.3. Let G, H and K be groups, and let f : G→H and j : H→ K be homomor-phisms. Then j◦ f is a homomorphism.

Lemma 5.1.4. Let G and H be groups. Suppose that G is cyclic with generator a. If b ∈H,there is a unique homomorphism f : G→ H such that f (a) = b.
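A minimal illustration of Lemma 5.1.4, added here as a sketch: take G = Z with generator 1 and H = Z10, both written additively. Choosing the image b of the generator determines the whole homomorphism, namely f(k) = kb mod 10; the fragment below just spot-checks the homomorphism property for one choice of b.

# G = Z with generator 1, H = Z_10 (both additive); b is the chosen image of 1.
b = 7
f = lambda k: (k * b) % 10

# Spot-check f(x + y) = f(x) + f(y) in Z_10 on a sample range of integers.
for x in range(-20, 20):
    for y in range(-20, 20):
        assert f(x + y) == (f(x) + f(y)) % 10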

Definition 5.1.5. Let G be a group. An endomorphism of G is a homomorphism G→ G.An automorphism of G is an isomorphism G→ G. 4

Definition 5.1.6. Let G be a group. An automorphism f : G → G is an inner automorphism if there is some g ∈ G such that f(x) = gxg⁻¹ for all x ∈ G. 4

Lemma 5.1.7. Let G be a group, and let H ≤ G. Then H C G if and only if f (H) = H forall inner automorphisms of G.

Exercises

Exercise 5.1.1. Which of the following functions are homomorphisms? Which of the homomorphisms are isomorphisms? The groups under consideration are (R,+), and (Q,+), and ((0,∞), ·).

(1) Let f : Q→ (,∞) be defined by f (x) = x for all x ∈Q.

(2) Let k : (,∞)→ (,∞) be defined by k(x) = x− for all x ∈ (,∞).

(3) Let m : R→ R be defined by m(x) = x+ for all x ∈ R.


(4) Let g : (,∞)→ R be defined by g(x) = lnx for all x ∈ (,∞).

(5) Let h : R→ R be defined by h(x) = |x| for all x ∈ R.

Exercise 5.1.2. Prove that the function det : GL(R) → R − {0} is a homomorphism, where the binary operation for both groups is multiplication.

Exercise 5.1.3.

(1) Let j : Z→Z be defined by j([x]) = [x] for all [x]∈Z, where the two appearancesof “[x]” in the definition of j refer to elements in different groups. Is this functionwell-defined? If it is well-defined, is it a homomorphism?

(2) Let k : Z → Z be defined by k([x]) = [x] for all [x] ∈ Z. Is this function well-defined? If it is well-defined, is it a homomorphism?

(3) Can you find criteria on n,m ∈N that will determine when the function r : Zn→ Zmdefined by r([x]) = [x] for all [x] ∈Zn is well-defined and is a homomorphism? Proveyour claim.

Exercise 5.1.4. Let G, H be groups. Prove that the projection maps π1 : G×H → G and π2 : G×H → H are homomorphisms.

Exercise 5.1.5. Let G be a group, and let f : G → G be defined by f(x) = x⁻¹ for all x ∈ G. Is f a homomorphism? Give a proof or a counterexample.

Exercise 5.1.6. Let G, H be groups, and let f ,k : G→H be homomorphisms. Suppose thatG = 〈S〉 for some S⊆ G. Prove that if f (a) = k(a) for all a ∈ S, then f = k.


5.2 Kernel and Image

Fraleigh, 7th ed. – Section 13, 14
Gallian, 8th ed. – Section 10
Judson, 2013 – Section 11.1, 11.2

Definition 5.2.1. Let G and H be groups, and let f : G→ H be a homomorphism.

1. The kernel of f, denoted ker f, is the set ker f = f⁻¹({eH}).

2. The image of f , denoted im f , is the set im f = f (G). 4

Remark 5.2.2. Observe that

ker f = {g ∈ G | f (g) = eH}

and im f = {h ∈ H | h = f(g) for some g ∈ G}. ♦

Lemma 5.2.3. Let G, H be groups, and let f : G→ H be a homomorphism.

1. ker f ≤ G.

2. im f ≤ H.

Proof. Straightforward, using Theorem 5.1.2.

Lemma 5.2.4. Let G, H be groups, and let f : G→H be a homomorphism. Then ker f CG.

Theorem 5.2.5. Let G and H be groups, and let f : G→ H be a homomorphism. Thefunction f is injective if and only if ker f = {eG}.

Proof. Suppose that f is injective. Because f(eG) = eH by Theorem 5.1.2 (1), it follows from the injectivity of f that ker f = f⁻¹({eH}) = {eG}.

Now suppose that ker f = {eG}. Let a, b ∈ G, and suppose that f(a) = f(b). By Theorem 5.1.2 (2) and the definition of homomorphisms we see that

f(b ∗ a⁻¹) = f(b) ⋄ f(a⁻¹) = f(a) ⋄ [f(a)]⁻¹ = eH.

It follows that b ∗ a⁻¹ ∈ f⁻¹({eH}) = ker f. Because ker f = {eG}, we deduce that b ∗ a⁻¹ = eG. A similar calculation shows that a⁻¹ ∗ b = eG. By Lemma 2.1.3 we deduce that (a⁻¹)⁻¹ = b, and therefore by Theorem 2.3.1 (4) we see that b = a. Hence f is injective.

Lemma 5.2.6. Let G, H be groups, and let f : G → H be a homomorphism. Let h ∈ H. If a ∈ f⁻¹({h}), then f⁻¹({h}) = a(ker f).


Definition 5.2.7. Let G be a group and let N C G. The canonical map for G and N is the function γ : G → G/N defined by γ(g) = gN for all g ∈ G. 4

Lemma 5.2.8. Let G be a group and let N C G. The canonical map γ : G → G/N is a surjective homomorphism, and ker γ = N.

Theorem 5.2.9 (First Isomorphism Theorem). Let G, H be groups, and let f : G → H be a homomorphism. Then there is a unique isomorphism g : G/ker f → im f such that f = g ◦ γ, where γ : G → G/ker f is the canonical map.

Remark 5.2.10. The condition f = g ◦ γ in the First Isomorphism Theorem is represented by a commutative triangle with vertices G, im f ≤ H and G/ker f, whose sides are f : G → im f, the canonical map γ : G → G/ker f, and g : G/ker f → im f. ♦

Corollary 5.2.11. Let G, H be groups, and let f : G→ H be a homomorphism.

1. G/ker f ∼= im f .

2. If f is surjective, then G/ker f ∼= H.
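A worked instance of the First Isomorphism Theorem, added here as an illustration: for the homomorphism f : Z12 → Z12 given by f(x) = 3x mod 12, the sketch computes ker f = {0, 4, 8} and im f = {0, 3, 6, 9}, and checks that the induced map g(a + ker f) = f(a) is well defined and a bijection onto im f; the names are ad hoc.

# f : Z_12 -> Z_12, f(x) = 3x mod 12, a homomorphism of additive groups.
n = 12
f = lambda x: (3 * x) % n

K = frozenset(x for x in range(n) if f(x) == 0)   # ker f
image = {f(x) for x in range(n)}                  # im f
coset = lambda a: frozenset((a + k) % n for k in K)

# The induced map g(a + ker f) = f(a) is well defined ...
for a in range(n):
    assert len({f(x) for x in coset(a)}) == 1
# ... and it is a bijection from Z_12 / ker f onto im f.
g = {coset(a): f(a) for a in range(n)}
assert len(set(g.values())) == len(g) == len(image)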

Exercises

Exercise 5.2.1. Find the kernel of each of the following homomorphisms.

(1) Let f : Z→ Z be the unique homomorphism determined by f () = [].

(2) Let g : Z×Z→ Z be the unique homomorphism determined by g((,)) = andg((,)) = .

Exercise 5.2.2. In Exercise 5.1.3, you found criteria on n,m ∈ N that determine whenthe function r : Zn → Zm defined by r([x]) = [x] for all [x] ∈ Zn is well-defined and is ahomomorphism. Find the kernel for those functions that are well-defined and are homo-morphisms.

Exercise 5.2.3. Let G, H be groups, and let f : G → H be a homomorphism. Prove that im f is abelian if and only if aba⁻¹b⁻¹ ∈ ker f for all a, b ∈ G.

Exercise 5.2.4. Let G be a group, and let g ∈ G. Let p : Z → G be defined by p(n) = gⁿ for all n ∈ Z. Find ker p and im p.

6 Applications of Groups

6.1 Group Actions

Fraleigh, 7th ed. – Section 16, 17
Gallian, 8th ed. – Section 29
Judson, 2013 – Section 14.1–14.3

Definition 6.1.1. Let X be a set and let G be a group. An action of G on X is a function∗ : G×X → X that satisfies the following two conditions.

1. eGx = x for all x ∈ X .

2. (ab)x = a(bx) for all x ∈ X and a,b ∈ G. 4

Lemma 6.1.2. Let X be a set and let G be a group.

1. Let ∗ : G×X→X be an action of G on X. Let φ : G→ SX be defined by φ(g)(x) = gxfor all g ∈ G and x ∈ X. Then φ is well-defined and is a homomorphism.

2. Let ψ : G→ SX be a homomorphism. Let ? : G×X → X be defined by ?((g,x)) =ψ(g)(x) for all g ∈ G and x ∈ X. Then ? is an action of G on X.

Definition 6.1.3. Let X be a set and let G be a group. The set X is a G-set if there is anaction of G on X . 4

Definition 6.1.4. Let X be a set and let G be a group. Suppose that X is a G-set. Let g ∈G.The fixed set of g, denoted Xg, is the set Xg = {x ∈ X | gx = x}. 4

Definition 6.1.5. Let X be a set and let G be a group. Suppose that X is a G-set.

1. The group G acts faithfully on X if Xg 6= X for all g ∈ G− {eG}.

2. The group G acts transitively on X if for each x,y ∈ X there is some g ∈G such thatgx = y. 4

Definition 6.1.6. Let X be a set and let G be a group. Suppose that X is a G-set. Let x ∈ X. The isotropy subgroup of x (also called the stabilizer of x), denoted G_x, is the set G_x = {g ∈ G | gx = x}. 4

Lemma 6.1.7. Let X be a set and let G be a group. Suppose that X is a G-set. Let x ∈ X. Then G_x ≤ G.

Definition 6.1.8. Let X be a set and let G be a group. Suppose that X is a G-set. Let ∼ bethe relation on X defined by x ∼ y if and only if there is some g ∈ G such that gx = y for allx,y ∈ X . 4

Lemma 6.1.9. Let X be a set and let G be a group. Suppose that X is a G-set. The relations∼ is an equivalence relation on X.


Definition 6.1.10. Let X be a set and let G be a group. Suppose that X is a G-set. Let x∈ X .The orbit of x (with respect to G), denoted Gx, is the set Gx = {gx | g ∈ G}. 4

Lemma 6.1.11. Let X be a set and let G be a group. Suppose that X is a G-set. Let x ∈ X.

1. Suppose the orbit Gx is finite. Then |Gx| = (G : G_x).

2. Suppose G is finite. Then |G| = |Gx| · |G_x|.

Theorem 6.1.12 (Burnside’s Formula). Let X be a set and let G be a group. Suppose that X and G are finite, and that X is a G-set. Let r be the number of orbits in X with respect to G. Then

r · |G| = ∑g∈G |Xg|.
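As a sanity check of Burnside’s Formula (an added illustration, not in the original notes), let G be the four rotations of a square acting on X, the 2-colourings of its corners. The fixed sets have sizes 16, 2, 4 and 2, so the number of orbits is (16 + 2 + 4 + 2)/4 = 6; the sketch recomputes this.

from itertools import product

# X = all 2-colourings of the four corners of a square; G = the four rotations.
X = list(product((0, 1), repeat=4))
rotate = lambda x, g: x[g:] + x[:g]      # rotate the corner labels by g places

G = range(4)
fixed_counts = [sum(1 for x in X if rotate(x, g) == x) for g in G]
r = sum(fixed_counts) // len(G)          # Burnside: r * |G| = sum of |X_g|
print(fixed_counts, r)                   # [16, 2, 4, 2] and r = 6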

Exercises

Exercise 6.1.1. An action of (R,+) on the plane R² is obtained by assigning to each θ ∈ R the rotation of R² about the origin counterclockwise by angle θ. Let P ∈ R². Suppose that P is not the origin.

(1) Prove that R² is an R-set.

(2) Describe the orbit RP geometrically.

(3) Find the isotropy subgroup R_P.

Exercise 6.1.2. Let X be a set and let G be a group. Suppose that X is a G-set. Prove thatG acts faithfully on X if and only if for each g,h ∈ G such that g 6= h, there is some x ∈ Xsuch that gx 6= hx.

Exercise 6.1.3. Let X be a set, let Y ⊆ X , and let G be a group. Suppose that X is a G-set.Let GY = {g ∈ G | gy = y for all y ∈ Y }. Prove that GY ≤ G.

Exercise 6.1.4. The four faces of a tetrahedral die are labeled with 1, 2, 3 and 4 dots, respectively. How many different tetrahedral dice can be made?

Exercise 6.1.5. Each face of a cube is painted with one of eight colors; no two faces canhave the same color. How many different cubes can be made?

Exercise 6.1.6. Each corner of a cube is painted with one of four colors; different cornersmay have the same color. How many different cubes can be made?

7 Rings and Fields

7.1 Rings

Fraleigh, 7th ed. – Section 18
Gallian, 8th ed. – Section 12, 13
Judson, 2013 – Section 16.1, 16.2

Definition 7.1.1. Let A be a set, and let + and · be binary operations on A.

1. The binary operations + and · satisfy the Left Distributive Law (an alternativeexpression is that · is left distributive over +) if a · (b+ c) = (a · b)+ (a · c) for alla,b,c ∈ A.

2. The binary operations + and · satisfy the Right Distributive Law (an alternativeexpression is that · is right distributive over +) if (b+ c) ·a = (b ·a)+(c ·a) for alla,b,c ∈ A. 4

Definition 7.1.2. Let R be a non-empty set, and let + and · be binary operations on R. The triple (R,+, ·) is a ring if the following three properties hold.

(i) (R,+) is an abelian group.

(ii) The binary operation · is associative.

(iii) The binary operation · is left distributive and right distributive over +. 4

Lemma 7.1.3. Let (R,+, ·) be a ring, and let a,b ∈ R.

1. 0 · a = 0 and a · 0 = 0.

2. a(−b) = (−a)b =−(ab).

3. (−a)(−b) = ab.

Definition 7.1.4. Let (R,+, ·) be a ring.

1. The ring R is commutative if the binary operation · satisfies the Commutative Law.

2. The ring R is a ring with unity if the binary operation · has an identity element (usually denoted 1). 4

Lemma 7.1.5. Let (R,+, ·) be a ring with unity. Then 1 = 0 if and only if R = {0}.

Definition 7.1.6. Let (R,+, ·) be a ring with unity, and let a ∈ R. Then a is a unit if there is some b ∈ R such that ab = 1 and ba = 1. 4

Lemma 7.1.7. Let (R,+, ·) be a ring with unity, and let a ∈ R.

1. The unity is unique.


2. If there is some b ∈ R such that ab = 1 and ba = 1, then b is unique.

Lemma 7.1.8. Let (R,+, ·) be a ring with unity, and let a,b,c ∈ R.

1. If 1 ≠ 0, then 0 is not a unit.

2. If c is a unit and ca = cb, then a = b.

3. If c is a unit and ac = bc, then a = b.

4. If a is a unit, then a⁻¹ is a unit and (a⁻¹)⁻¹ = a.

5. If a and b are units, then ab is a unit and (ab)⁻¹ = b⁻¹a⁻¹.

Definition 7.1.9. Let (R,+, ·) be a ring.

1. The ring R is an integral domain if it is a commutative ring with unity, if 1 ≠ 0, and if ab = 0 implies a = 0 or b = 0 for all a, b ∈ R.

2. The ring R is a division ring (also called a skew field) if it is a ring with unity, if 1 ≠ 0, and if every non-zero element is a unit.

3. The ring R is a field if it is a commutative ring with unity, if 1 ≠ 0, if every non-zero element is a unit, and if the binary operation · satisfies the Commutative Law. 4

Lemma 7.1.10. Let (R,+, ·) be a ring.

1. If R is a division ring, then it is an integral domain.

2. If R is a field, it is a division ring.

Proof. Part (2) is evident. For Part (1), suppose that R is a division ring. Let a, b ∈ R. Suppose ab = 0. Suppose further that a ≠ 0. Then a is a unit. Then a⁻¹(ab) = a⁻¹ · 0. Using the Associative Law, Inverses Law and Identity Law for ·, together with Lemma 7.1.3 (1), we deduce that b = 0.
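To make Definition 7.1.6 concrete, the following added sketch finds the units of Z_n (arithmetic modulo n) directly from the definition, by searching for a multiplicative inverse; since Z_n is commutative, ab = 1 already forces ba = 1. For a prime modulus every non-zero class turns out to be a unit, which is the statement that Z_p is a field.

# Units of the ring Z_n, found directly from Definition 7.1.6.
def units(n):
    return [a for a in range(n) if any((a * b) % n == 1 for b in range(n))]

print(units(12))   # [1, 5, 7, 11]
print(units(7))    # [1, 2, 3, 4, 5, 6] -- every non-zero class, so Z_7 is a field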

Exercises

Exercise 7.1.1. For each of the following sets with two binary operations, state whetherthe set with binary operations are a ring, whether it is a commutative ring, whether it is aring with unity, and whether it is a field.

(1) The set N, with the standard addition and multiplication.

(2) Let n ∈ N. The set nZ, with the standard addition and multiplication.

(3) The set Z×Z, with the standard addition and multiplication on each component.


(4) The set Z×Z, with the standard addition and multiplication on each component.

(5) The set {a+b√ | a,b ∈ Z}, with the standard addition and multiplication.

(6) The set {a+b√ | a,b ∈Q}, with the standard addition and multiplication.

Exercise 7.1.2. Let (R,+, ·) be a ring with unity. Let U be the set of all the units of R.Prove that (U, ·) is a group.

Exercise 7.1.3. Let (R,+, ·) be a ring. Prove that a² − b² = (a + b)(a − b) for all a, b ∈ R if and only if R is commutative.

Exercise 7.1.4. Let (G,+) be an abelian group. Let a binary operation · on G be defined by a · b = 0 for all a, b ∈ G, where 0 is the identity element for +. Prove that (G,+, ·) is a ring.

Exercise 7.1.5. Let (R,+, ·) be a ring. An element a ∈ R is idempotent if a² = a. Suppose that R is commutative. Let P be the set of all the idempotent elements of R. Prove that P is closed under multiplication.

Exercise 7.1.6. Let (R,+, ·) be a ring. An element a ∈ R is nilpotent if aⁿ = 0 for some n ∈ N. Suppose that R is commutative. Let c, d ∈ R. Prove that if c and d are nilpotent, then c + d is nilpotent.

Exercise 7.1.7. Let (R,+, ·) be a ring. The ring R is a Boolean Ring if a² = a for all a ∈ R (that is, if every element of R is idempotent). Prove that if R is a Boolean ring, then R is commutative.

Exercise 7.1.8. Let A be a set. Let P(A) denote the power set of A. Let binary operations+ and · on P(A) be defined by

X +Y = (X ∪Y )−(X ∩Y ) and X ·Y = X ∩Y

for all X ,Y ∈P(A). Prove that (P(A),+, ·) is a Boolean ring (as defined in Exercise 7.1.7);make sure to prove first that it is a ring.


7.2 Polynomials

Fraleigh, 7th ed. – Section 22
Gallian, 8th ed. – Section 16
Judson, 2013 – Section 17.1

Definition 7.2.1. Let R be a ring. The set of polynomials over R, denoted R[x], is the set

R[x] = { f : N ∪ {0} → R | there is some N ∈ N ∪ {0} such that f(i) = 0 for all i ∈ N ∪ {0} such that i > N}. 4

Definition 7.2.2. Let R be a ring.

1. Let 0 : N ∪ {0} → R be defined by 0(i) = 0 for all i ∈ N ∪ {0}.

2. Suppose that R has a unity. Let 1 : N ∪ {0} → R be defined by 1(0) = 1 and 1(i) = 0 for all i ∈ N. 4

Definition 7.2.3. Let R be a ring, and let f ∈ R[x]. Suppose that f ≠ 0. The degree of f, denoted deg f, is the smallest N ∈ N ∪ {0} such that f(i) = 0 for all i ∈ N ∪ {0} such that i > N. 4

Definition 7.2.4. Let R be a ring, and let f, g ∈ R[x]. Let f + g, fg, −f : N ∪ {0} → R be defined by (f + g)(i) = f(i) + g(i), and (fg)(i) = ∑_{k=0}^{i} f(k)g(i − k), and (−f)(i) = −f(i) for all i ∈ N ∪ {0}. 4

Lemma 7.2.5. Let R be a ring, and let f, g ∈ R[x].

1. f +g, f g,− f ∈ R[x].

2. deg( f +g)≤max{deg f ,degg}.

3. deg( f g)≤ deg f +degg. If R is an integral domain, then deg( f g) = deg f +degg.

Lemma 7.2.6. Let R be a ring.

1. (R[x],+, ·) is a ring.

2. If R is commutative, then R[x] is commutative.

3. If R has a unity, then R[x] has a unity.

4. If R is an integral domain, then R[x] is an integral domain.

Definition 7.2.7. Let R be a ring, let f ∈ R[x] and let r ∈ R. Let rf : N ∪ {0} → R be defined by (rf)(i) = r f(i) for all i ∈ N ∪ {0}. 4

Lemma 7.2.8. Let R be a ring, let f ∈ R[x] and let r ∈ R. Then rf ∈ R[x].
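The convolution formula of Definition 7.2.4 can be carried out by hand in a short added sketch (the helper name poly_mul is hypothetical): a polynomial with finitely many non-zero coefficients is stored as the list of values f(0), f(1), . . . , and the product is computed coefficient by coefficient.

# Polynomials over a ring stored by their coefficient sequences, as in
# Definition 7.2.1: f[i] is the coefficient f(i); higher coefficients are 0.
def poly_mul(f, g):
    # (fg)(i) = sum over k of f(k) * g(i - k)   (Definition 7.2.4)
    result = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            result[i + j] += a * b
    return result

print(poly_mul([1, 1], [1, -1]))   # (1 + x)(1 - x) = 1 - x^2, i.e. [1, 0, -1]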


Definition 7.2.9. Let R be an integral domain. Let x : N ∪ {0} → R be defined by x(1) = 1 and x(i) = 0 for all i ∈ N ∪ {0} − {1}. 4

Lemma 7.2.10. Let R be an integral domain, let f ∈ R[x], and let n ∈ N. If f ≠ 0, suppose that n ≥ deg f. Then there are unique a0, a1, . . . , an ∈ R such that f = a0 · 1 + a1x + · · · + anxⁿ.

Definition 7.2.11. Let R be a commutative ring with unity, let S ⊆ R be a commutative subring with identity, and let α ∈ R. The evaluation map with respect to α is the function Φα : S[x] → R defined by Φα(a0 · 1 + a1x + · · · + anxⁿ) = a0 · 1 + a1α + · · · + anαⁿ for all f ∈ S[x]. 4

Definition 7.2.12. Let R be a commutative ring with unity, let S⊆ R be a commutative sub-ring with identity, and let f ∈ S[x]. The polynomial function induced by f is the functionf : R→ R defined by f (α) = Φα( f ) for all α ∈ R. 4

Definition 7.2.13. Let R be a commutative ring with unity, let S ⊆ R be a commutative subring with identity, and let f ∈ S[x]. A zero of f is any α ∈ R such that f(α) = 0. 4

Exercises

Exercise 7.2.1. List all the polynomials in Z[x] that have degree less than or equal to .

Exercise 7.2.2. Find the units in each of the following rings.

(1) Z[x].

(2) Z[x].

Exercise 7.2.3. Let D be an integral domain. Describe the units in D[x].

8 Vector Spaces

8.1 Fields

Definition 8.1.1. Let F be a non-empty set, and let + and · be binary operations on F. The triple (F,+, ·) is a field if the following four properties hold:

(i) (F,+) is an abelian group.

(ii) (F − {0}, ·) is an abelian group.

(iii) The binary operation · is left distributive and right distributive over +.

(iv) 1 ≠ 0. 4

Lemma 8.1.2. Let F be a field, and let x,y,z ∈ F.

1. 0 is unique.

2. 1 is unique.

3. −x is unique.

4. If x ≠ 0, then x⁻¹ is unique.

5. x + y = x + z implies y = z.

6. If x ≠ 0, then x · y = x · z implies y = z.

7. x · 0 = 0.

8. −(−x) = x.

9. If x ≠ 0, then (x⁻¹)⁻¹ = x.

10. (−x) · y = x · (−y) = −(x · y).

11. (−x) · (−y) = x · y.

12. 0 has no multiplicative inverse.

13. xy = 0 if and only if x = 0 or y = 0.


8.2 Vector Spaces

Friedberg-Insel-Spence, 4th ed. – Section 1.2

Definition 8.2.1. Let F be a field. A vector space (also called a linear space) over F isa set V with a binary operation +: V ×V → V and scalar multiplication F ×V → V thatsatisfy the following properties. Let x,y,z ∈V and let a,b ∈ F .

1. (x+ y)+ z = x+(y+ z).

2. x+ y = y+ x.

3. There is an element 0 ∈V such that x+0 = x.

4. There is an element −x ∈V such that x+(−x) = 0.

5. 1x = x.

6. (ab)x = a(bx).

7. a(x + y) = ax + ay.

8. (a + b)x = ax + bx. 4

Definition 8.2.2. Let F be a field, and let m, n ∈ N. The set of all m×n matrices with entries in F is denoted Mm×n(F). An element A ∈ Mm×n(F) is abbreviated by the notation A = [aij]. 4

Definition 8.2.3. Let F be a field, and let m,n ∈ N.

1. The m×n zero matrix is the matrix Omn defined by Omn = [cij], where cij = 0 for all i ∈ {1, . . . ,m} and j ∈ {1, . . . ,n}.

2. The n×n identity matrix is the matrix In defined by In = [δij], where δij = 1 if i = j, and δij = 0 if i ≠ j, for all i, j ∈ {1, . . . ,n}. 4

Definition 8.2.4. Let F be a field, and let m, n ∈ N. Let A, B ∈ Mm×n(F), and let c ∈ F. Suppose that A = [aij] and B = [bij].

1. The matrix A + B ∈ Mm×n(F) is defined by A + B = [cij], where cij = aij + bij for all i ∈ {1, . . . ,m} and j ∈ {1, . . . ,n}.


2. The matrix −A ∈ Mm×n(F) is defined by −A = [dij], where dij = −aij for all i ∈ {1, . . . ,m} and j ∈ {1, . . . ,n}.

3. The matrix cA ∈ Mm×n(F) is defined by cA = [sij], where sij = caij for all i ∈ {1, . . . ,m} and j ∈ {1, . . . ,n}. 4

Lemma 8.2.5. Let F be a field, and let m,n ∈ N. Let A,B,C ∈Mm×n(F), and let s, t ∈ F.

1. A+(B+C) = (A+B)+C.

2. A+B = B+A.

3. A + Omn = A and Omn + A = A.

4. A + (−A) = Omn and (−A) + A = Omn.

5. 1A = A.

6. (st)A = s(tA).

7. s(A+B) = sA+ sB.

8. (s+ t)A = sA+ tA.

Corollary 8.2.6. Let F be a field, and let m,n ∈ N. Then Mm×n(F) is a vector space overF.
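The entrywise operations of Definition 8.2.4, and two of the identities of Lemma 8.2.5, can be spot-checked with a small added sketch over the field Q, using exact rational arithmetic; the helper names are ad hoc.

from fractions import Fraction as F

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

A = [[F(1), F(2)], [F(3), F(4)]]
B = [[F(0), F(1)], [F(1), F(0)]]
s, t = F(2), F(3)

assert mat_add(A, B) == mat_add(B, A)                                     # Lemma 8.2.5 (2)
assert mat_scale(s + t, A) == mat_add(mat_scale(s, A), mat_scale(t, A))   # Lemma 8.2.5 (8)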

Lemma 8.2.7. Let V be a vector space over a field F. let x,y,z ∈V and let a ∈ F.

1. 0 is unique.

2. −x is unique.

3. x+ y = x+ z implies y = z.

4. −(x+ y) = (−x)+(−y).

5. 0x = 0.

6. a0 = 0.

7. (−a)x = a(−x) = −(ax).

8. (−1)x = −x.

9. ax = 0 if and only if a = 0 or x = 0.

Proof. Parts (1), (2), (3) and (4) are immediate, using the fact (V,+) is an abelian group,and the fact that we have proved these properties for groups.

The remaining parts of this lemma are left to the reader in Exercise 8.2.1.


Exercises

Exercise 8.2.1. Prove Lemma 8.2.7 (5), (6), (7), (8) and (9).

Exercise 8.2.2. Let V , W be vector spaces over a field F . Define addition and scalar multi-plication on V ×W as follows. For each (v,w),(x,y) ∈V ×W and c ∈ F , let

(v,w)+(x,y) = (v+ x,w+ y) and c(v,w) = (cv,cw).

Prove that V ×W is a vector space over F with these operations. This vector space is calledthe product vector space of V and W .

Exercise 8.2.3. Let F be a field, and let S be a non-empty set. Let F (S,F) be the set of allfunctions S→ F . Define addition and scalar multiplication on F (S,F) as follows. For eachf ,g ∈F (S,F) and c ∈ F , let f + g,c f ∈F (S,F) be defined by ( f + g)(x) = f (x)+ g(x)and (c f )(x) = c f (x) for all x ∈ S.

Prove that F (S,F) is a vector space over F with these operations.


8.3 Subspaces

Friedberg-Insel-Spence, 4th ed. – Section 1.3

Definition 8.3.1. Let V be a vector space over a field F , and let W ⊆ V . The subset W isclosed under scalar multiplication by F if av ∈W for all v ∈W and a ∈ F . 4

Definition 8.3.2. Let V be a vector space over a field F , and let W ⊆V . The subset W is asubspace of V if the following three conditions hold.

1. W is closed under +.

2. W is closed under scalar multiplication by F .

3. W is a vector space over F . 4

Lemma 8.3.3. Let V be a vector space over a field F, and let W ⊆V be a subspace.

1. The additive identity element of V is in W, and it is the additive identity element ofW.

2. The additive inverse operation in W is the same as the additive inverse operation inV .

Proof. Because (V,+) is an abelian group, this result is the same as Lemma 2.4.2.

Lemma 8.3.4. Let V be a vector space over a field F, and let W ⊆V . Then W is a subspaceof V if and only if the following three conditions hold.

1. W 6= /0.

2. W is closed under +.

3. W is closed under scalar multiplication by F.

Proof. First, suppose that W is a subspace of V . Then W 6= /0, because 0 ∈W , and henceProperty (1) holds. Properties (2) and (3) hold by definition.

Second, suppose that Properties (1), (2) and (3) hold. To show that W is a subspace of V, we need to show that W is a vector space over F. We know that + is associative and commutative with respect to all the elements of V, so it certainly is associative and commutative with respect to the elements of W.

Because W ≠ ∅, there is some x ∈ W. Then −x = (−1)x by Lemma 8.2.7 (8). It follows from Property (3) that −x ∈ W, and by Property (2) we deduce that 0 = x + (−x) ∈ W. Hence Parts (1), (2), (3) and (4) of Definition 8.2.1 hold for W. Parts (5), (6), (7) and (8) of that definition immediately hold for W because they hold for V.


Lemma 8.3.5. Let V be a vector space over a field F, and and let U ⊆W ⊆V be subsets.If U is a subspace of W, and W is a subspace of V , then U is a subspace of V .

Proof. Straightforward.

Lemma 8.3.6. Let V be a vector space over a field F, and let {Wi}i∈I be a family of sub-spaces of V indexed by I. Then

⋂i∈I Wi is a subspace of V .

Proof. Note that 0 ∈ Wi for all i ∈ I by Lemma 8.3.3. Hence 0 ∈ ⋂i∈I Wi. Therefore ⋂i∈I Wi ≠ ∅.

Let x, y ∈ ⋂i∈I Wi and let a ∈ F. Let k ∈ I. Then x, y ∈ Wk, so x + y ∈ Wk and ax ∈ Wk. Therefore x + y ∈ ⋂i∈I Wi and ax ∈ ⋂i∈I Wi. Therefore ⋂i∈I Wi is a subspace of V by Lemma 8.3.4.

Exercises

Exercise 8.3.1. Let W = {(x, y, z) ∈ R³ | x + y + z = 0}. Prove that W is a subspace of R³.

Exercise 8.3.2. Let F be a field, and let S be a non-empty set. Let F (S,F) be as defined inExercise 8.2.3. Let C (S,F) be defined by

C (S,F) = { f ∈F (S,F) | f (s) = 0 for all but a finite number of elements s ∈ S}.

Prove that C (S,F) is a subspace of F (S,F).

Exercise 8.3.3. Let V be a vector space over a field F , and let W ⊆ V . Prove that W is asubspace of V if and only if the following conditions hold.

1. W 6= /0.

2. If x,y ∈W and a ∈ F , then ax+ y ∈W .

Exercise 8.3.4. Let V be a vector space over a field F, and let W ⊆ V be a subspace. Let w1, . . . ,wn ∈ W and a1, . . . ,an ∈ F. Prove that a1w1 + · · · + anwn ∈ W.

Exercise 8.3.5. Let V be a vector space over a field F , and let S,T ⊆V . The sum of S andT is the subset of V defined by

S+T = {s+ t | s ∈ S and t ∈ T }.

Let X ,Y ⊆V be subspaces.

(1) Prove that X ⊆ X +Y and Y ⊆ X +Y .

(2) Prove that X +Y is a subspace of V .

(3) Prove that if W is a subspace of V such that X ⊆W and Y ⊆W , then X +Y ⊆W .


8.4 Linear Combinations

Friedberg-Insel-Spence, 4th ed. – Section 1.4

Definition 8.4.1. Let V be a vector space over a field F, and let S ⊆ V be a non-empty subset. Let v ∈ V. The vector v is a linear combination of vectors of S if

v = a1v1 + a2v2 + · · · + anvn

for some n ∈ N and some v1, v2, . . . , vn ∈ S and a1, a2, . . . , an ∈ F. 4

Definition 8.4.2. Let V be a vector space over a field F, and let S ⊆ V be a non-empty subset. The span of S, denoted span(S), is the set of all linear combinations of the vectors in S. Also, let span(∅) = {0}. 4

Lemma 8.4.3. Let V be a vector space over a field F, and let S ⊆ V be a non-empty subset.

1. S⊆ span(S).

2. span(S) is a subspace of V .

3. If W ⊆V is a subspace and S⊆W, then span(S)⊆W.

4. span(S) =⋂{U ⊆V |U is a subspace of V and S⊆U}.

Proof. Straightforward.

Definition 8.4.4. Let V be a vector space over a field F, and let S ⊆ V be a non-empty subset. The set S spans (also generates) V if span(S) = V. 4

Remark 8.4.5. There is a standard strategy for showing that a set S spans V, as follows.

Proof. Let v ∈V ....

(argumentation)...

Let v1, . . . ,vn ∈ S and a1, . . . ,an ∈ F be defined by . . .
...
(argumentation)
...
Then v = a1v1 + · · · + anvn. Hence S spans V.

In the above strategy, if S is finite, then we can take v1, . . . ,vn to be all of S. ♦
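Here is the spanning strategy of Remark 8.4.5 carried out for one added example: S = {(1, 1), (1, −1)} in Q². Given v = (x, y), the coefficients a1 = (x + y)/2 and a2 = (x − y)/2 work, and the sketch spot-checks this on a grid of sample vectors.

from fractions import Fraction as F

# For v = (x, y), find a1, a2 with v = a1*(1, 1) + a2*(1, -1).
def coefficients(x, y):
    return (F(x + y, 2), F(x - y, 2))

for x in range(-3, 4):
    for y in range(-3, 4):
        a1, a2 = coefficients(x, y)
        assert (a1 * 1 + a2 * 1, a1 * 1 + a2 * (-1)) == (x, y)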

Exercises


Exercise 8.4.1. Using only the definition of spanning, prove that {[ ] , [ ]} spans R.

Exercise 8.4.2. Let V be a vector space over a field F , and let W ⊆ V . Prove that W is asubspace of V if and only if span(W ) =W .

Exercise 8.4.3. Let V be a vector space over a field F , and let S ⊆ V . Prove thatspan(span(S)) = span(S).

Exercise 8.4.4. Let V be a vector space over a field F , and let S,T ⊆ V . Suppose thatS⊆ T .

(1) Prove that span(S)⊆ span(T ).

(2) Prove that if span(S) =V , then span(T ) =V .

Exercise 8.4.5. Let V be a vector space over a field F , and let S,T ⊆V .

(1) Prove that span(S∩T )⊆ span(S)∩ span(T ).

(2) Give an example of subsets S,T ⊆R such that S and T are non-empty, not equal toeach other, and span(S∩T ) = span(S)∩span(T ). A proof is not needed; it suffices tostate what each of S, T , S∩T , span(S), span(T ), span(S∩T ) and span(S)∩ span(T )are.

(3) Give an example of subsets S,T ⊆R such that S and T are non-empty, not equal toeach other, and span(S∩T )$ span(S)∩span(T ). A proof is not needed; it suffices tostate what each of S, T , S∩T , span(S), span(T ), span(S∩T ) and span(S)∩ span(T )are.

Page 72: Abstract Algebra Notes

70 8. Vector Spaces

8.5 Linear Independence

Friedberg-Insel-Spence, 4th ed. – Section 1.5

Definition 8.5.1. Let V be a vector space over a field F, and let S ⊆ V. The set S is linearly dependent if there are n ∈ N, distinct vectors v1, v2, . . . , vn ∈ S, and a1, a2, . . . , an ∈ F that are not all 0, such that a1v1 + · · · + anvn = 0. 4

Lemma 8.5.2. Let V be a vector space over a field F, and let S ⊆ V . If 0 ∈ S, then S islinearly dependent.

Proof. Observe that 1 · 0 = 0.

Lemma 8.5.3. Let V be a vector space over a field F, and let S ⊆ V . Suppose that S 6= /0and S 6= {0}. The following are equivalent.

a. S is linearly dependent.

b. There is some v ∈ S such that v ∈ span(S− {v}).

c. There is some v ∈ S such that span(S− {v}) = span(S).

Proof. Suppose S is linearly dependent. Then there are n ∈ N, distinct vectors v1, . . . ,vn ∈ S, and a1, . . . ,an ∈ F not all 0, such that a1v1 + · · · + anvn = 0. Then there is some k ∈ {1, . . . ,n} such that ak ≠ 0. Therefore

vk = −(a1/ak)v1 − · · · − (ak−1/ak)vk−1 − (ak+1/ak)vk+1 − · · · − (an/ak)vn.

Hence vk ∈ span(S − {vk}).

Suppose that there is some v ∈ S such that v ∈ span(S − {v}). Then there are p ∈ N, and w1, w2, . . . , wp ∈ S − {v} and c1, c2, . . . , cp ∈ F such that v = c1w1 + · · · + cpwp. By Exercise 8.4.4 (1) we know that span(S − {v}) ⊆ span(S). Let x ∈ span(S). Then there are m ∈ N, and u1, u2, . . . , um ∈ S and b1, b2, . . . , bm ∈ F such that x = b1u1 + · · · + bmum. First, suppose that v is not any of u1, u2, . . . , um. Then clearly x ∈ span(S − {v}). Second, suppose that v is one of u1, u2, . . . , um. Without loss of generality, suppose that v = u1. Then

x = b1(c1w1 + · · · + cpwp) + b2u2 + · · · + bmum
= b1c1w1 + · · · + b1cpwp + b2u2 + · · · + bmum.

Hence x ∈ span(S − {v}). Putting the two cases together, we conclude that span(S) ⊆ span(S − {v}). Therefore span(S − {v}) = span(S).

Suppose that there is some w ∈ S such that span(S − {w}) = span(S). Because w ∈ S, then w ∈ span(S), and hence w ∈ span(S − {w}). Hence there are r ∈ N, and x1, . . . ,xr ∈ S − {w} and d1, . . . ,dr ∈ F such that w = d1x1 + · · · + drxr. Without loss of generality, we can assume that x1, . . . ,xr are distinct. Therefore

1 · w + (−d1)x1 + · · · + (−dr)xr = 0.

Because 1 ≠ 0, and because w, x1, . . . ,xr are distinct, we deduce that S is linearly dependent.

Definition 8.5.4. Let V be a vector space over a field F , and let S⊆V . The set S is linearlyindependent if it is not linearly dependent. 4

Lemma 8.5.5. Let V be a vector space over a field F.

1. /0 is linearly independent.

2. If v ∈V and v 6= 0, then {v} is linearly independent.

Proof. Straightforward.

Remark 8.5.6. There is a standard strategy for showing that a set S in a vector space islinearly independent, as follows.

Proof. Let v1, . . . ,vn ∈ S and a1, . . . ,an ∈ F. Suppose that v1, . . . ,vn are distinct, and that a1v1 + · · · + anvn = 0.
...
(argumentation)
...
Then a1 = 0, . . . , an = 0. Hence S is linearly independent.

In the above strategy, if S is finite, then we simply take v1, . . . ,vn to be all of S. ♦
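As an added illustration of the strategy in Remark 8.5.6, consider the vectors (1, 0, 0), (1, 1, 0), (1, 1, 1) over the finite field F5, where every possible coefficient combination can be tested exhaustively; the only solution of a1v1 + a2v2 + a3v3 = 0 is the trivial one, so the set is linearly independent.

from itertools import product

p = 5
V = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]

def combo(coeffs):
    # a1*v1 + a2*v2 + a3*v3, computed coordinatewise modulo p
    return tuple(sum(a * v[i] for a, v in zip(coeffs, V)) % p for i in range(3))

solutions = [c for c in product(range(p), repeat=3) if combo(c) == (0, 0, 0)]
assert solutions == [(0, 0, 0)]   # only the trivial combination, so the set is independent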

Lemma 8.5.7. Let V be a vector space over a field F, and let S1 ⊆ S2 ⊆ V.

1. If S1 is linearly dependent, then S2 is linearly dependent.

2. If S2 is linearly independent, then S1 is linearly independent.

Proof. Straightforward.

Lemma 8.5.8. Let V be a vector space over a field F, let S⊆V and let v ∈V −S. Supposethat S is linearly independent. Then S∪ {v} is linearly dependent if and only if v ∈ span(S).

Proof. Suppose that S ∪ {v} is linearly dependent. Then there are n ∈ N, and v1, v2, . . . , vn ∈ S ∪ {v} and a1, a2, . . . , an ∈ F not all equal to zero such that a1v1 + · · · + anvn = 0. Because S is linearly independent, it must be the case that v is one of the vectors v1, v2, . . . , vn. Without loss of generality, assume v = v1. It must be the case that a1 ≠ 0, again because S is linearly independent. Then

v = −(a2/a1)v2 − · · · − (an/a1)vn.

Because v2, . . . ,vn ∈ S, then v ∈ span(S).

Suppose that v ∈ span(S). Then v is a linear combination of the vectors of S. Thus S ∪ {v} is linearly dependent by Lemma 8.5.3.

Exercises

Exercise 8.5.1. Using only the definition of linear independence, prove that {x+ ,x+x,x+} is a linearly independent subset of R[x].

Exercise 8.5.2. Let V be a vector space over a field F , and let u,v ∈V . Suppose that u 6= v.Prove that {u,v} is linearly dependent if and only if at least one of u or v is a multiple of theother.

Exercise 8.5.3. Let V be a vector space over a field F, and let u1, . . . ,un ∈ V. Prove that the set {u1, . . . ,un} is linearly dependent if and only if u1 = 0 or uk+1 ∈ span({u1, . . . ,uk}) for some k ∈ {1, . . . ,n − 1}.


8.6 Bases and Dimension

Friedberg-Insel-Spence, 4th ed. – Section 1.6

Definition 8.6.1. Let V be a vector space over a field F, and let B ⊆ V. The set B is a basis for V if B is linearly independent and B spans V. 4

Theorem 8.6.2. Let V be a vector space over a field F, and let B ⊆ V.

1. The set B is a basis for V if and only if every vector in V can be written as a linear combination of vectors in B, where the set of vectors in B with non-zero coefficients in any such linear combination, together with their non-zero coefficients, are unique.

2. Suppose that B = {u1, . . . ,un} for some n ∈ N and u1, . . . ,un ∈ V. Then B is a basis for V if and only if for each vector v ∈ V, there are unique a1, . . . ,an ∈ F such that v = a1u1 + · · · + anun.

Proof.

(1). Suppose that B is a basis for V . Then B spans V , and hence every vector in V can bewritten as a linear combination of vectors in B. Let v ∈V . Suppose that there are n,m ∈ N,and v, . . . ,vn,u, . . . ,um ∈ B and a, . . . ,an,b, . . . ,bm ∈ F such that

v = av+av+ · · ·+anvn and v = bu+bu+ · · ·+bmum.

Without loss of generality, suppose that n≥m. If might be the case that the sets {v, . . . ,vn}

and {u, . . . ,um} overlap. By renaming and reordering the vectors in these two sets appropri-ately, we may assume that {v, . . . ,vn} and {u, . . . ,um} are both subsets of a set {z, . . . ,zp}

for some p ∈ N and z, . . . ,zp ∈ B. It will then suffice to show that if

v = cz+ cz+ · · ·+ cpzp and v = dz+dz+ · · ·+dpzp (8.6.1)

for some c, . . . ,cp,d, . . . ,dp ∈ F , then ci = di for all i ∈ {, . . . , p}.Suppose that Equation 8.6.1 holds. Then

(c−d)z+ · · ·+(cp −dp)zp = 0.

Because B is linearly independent, it follows that ci −di = 0 for all i ∈ {, . . . , p}. Becauseci = di for all i ∈ {, . . . , p}, we see in particular that ci = 0 if and only if di = 0. Henceevery vector in V can be written as a linear combination of vectors in B, where the set ofvectors in B with non-zero coefficients in any such linear combination, together with theirnon-zero coefficients, are unique.

Next, suppose that every vector in V can be written as a linear combination of vectors inB, where the set of vectors in B with non-zero coefficients in any such linear combination,together with their non-zero coefficients, are unique. Clearly B spans V . Suppose that thereare n ∈ N, and v, . . . ,vn ∈ B and a, . . . ,an ∈ F such that av+av+ · · ·+anvn = 0. It isalso the case that 0 · v+ 0 · v+ · · ·+ 0 · vn = 0. By uniqueness, we deduce that ai = 0 forall i ∈ {, . . . ,n}. Hence B is linearly independent.


(2). This part of the theorem follows from the previous part.

Lemma 8.6.3. Let V be a vector space over a field F, and let S ⊆ V . The following areequivalent.

a. S is a basis for V .

b. S is linearly independent, and is contained in no linearly independent subset of Vother than itself.

Proof. Suppose that S is a basis for V . Then S is linearly independent. Suppose that S $ Tfor some linearly independent subset T ⊆ V . Let v ∈ T − S. Because S is a basis, thenspan(S) = V , and hence v ∈ span(S). It follows from Lemma 8.5.8 that S∪ {v} is linearlydependent. It follows from Lemma 8.5.7 (1) that T is linearly dependent, a contradiction.Hence S is contained in no linearly independent subset of V other than itself.

Suppose that S is linearly independent, and is contained in no linearly independent sub-set of V other than itself. Let w ∈ V . First, suppose that w ∈ S. Then w ∈ span(S) byLemma 8.4.3 (1). Second, suppose that w∈V −S. By the hypothesis on S we see that S∪{w}is linearly dependent. Using Lemma 8.5.8 we deduce that w ∈ span(S). Combining the twocases, it follows that V ⊆ span(S). By definition span(S)⊆V . Therefore span(S) =V , andhence S is a basis.

Theorem 8.6.4. Let V be a vector space over a field F, and let S ⊆ V . Suppose that S isfinite. If S spans V , then some subset of S is a basis for V .

Proof. Suppose that S spans V . If S is linearly independent then S is a basis for V . Nowsuppose that S is linearly dependent.

Case One: Suppose S = {0}. Then V = span(S) = {0}. This case is trivial because /0 is abasis.

Case Two: Suppose S contains at least one non-zero vector. Let v1 ∈ S be such that v1 ≠ 0. Then {v1} is linearly independent by Lemma 8.5.5. By adding one vector from S at a time, we obtain a linearly independent subset {v1, . . . ,vn} ⊆ S such that adding any more vectors from the set S would render the subset linearly dependent.

Let B = {v1, . . . ,vn}. Because S is finite and B ⊆ S, we can write S = {v1, . . . ,vn, vn+1, . . . ,vp} for some p ∈ Z such that p ≥ n. Let i ∈ {n+1, . . . , p}. Then by the construction of B we know that B ∪ {vi} is linearly dependent. It follows from Lemma 8.5.8 that vi ∈ span(B).

Let w ∈ V − B. Because S spans V, there are a1, . . . ,ap ∈ F such that w = a1v1 + a2v2 + · · · + apvp. Because each of vn+1, . . . ,vp is a linear combination of the elements of B, it follows that w can be written as a linear combination of elements of B. We then use Lemma 8.5.3 (b) to deduce that B ∪ {w} is linearly dependent. It now follows from Lemma 8.6.3 that B is a basis.
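The procedure in this proof can be mimicked computationally. The added sketch below works over F2, where spans are small enough to enumerate, and walks through a finite spanning set, keeping a vector only if it is not already in the span of the vectors kept so far; the kept vectors form a basis contained in the original spanning set.

from itertools import product

p = 2
def span(vectors, dim):
    # All linear combinations of the given vectors over F_2.
    if not vectors:
        return {(0,) * dim}
    return {tuple(sum(a * v[i] for a, v in zip(c, vectors)) % p for i in range(dim))
            for c in product(range(p), repeat=len(vectors))}

S = [(1, 1, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]   # a spanning set for (F_2)^3
basis = []
for v in S:
    if v not in span(basis, 3):
        basis.append(v)

print(basis)                             # [(1, 1, 0), (0, 1, 1), (1, 1, 1)]
assert len(span(basis, 3)) == p ** 3     # the kept vectors still span (F_2)^3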


Theorem 8.6.5 (Replacement Theorem). Let V be a vector space over a field F, and letS,L⊆V . Suppose that S and L are finite sets. Suppose that S spans V , and that L is linearlyindependent.

1. |L|≤ |S|.

2. There is a subset H ⊆ S such that |H |= |S|− |L|, and such that L∪H spans V .

Proof. Let m = |L| and n = |S|. We will show that this theorem holds by induction on m.

Base Case: Suppose m = 0. Then L = ∅ and m ≤ n. Let H = S. Then H and S have n − m = n − 0 = n elements, and L ∪ H = ∅ ∪ S = S, and so L ∪ H spans V.

Induction Step: Suppose the result is true for m, and suppose L has m + 1 vectors. Suppose L = {v1, . . . ,vm+1}. Let L′ = {v1, . . . ,vm}. By Lemma 8.5.7 we know that L′ is linearly independent. Hence, by the inductive hypothesis, we know that m ≤ n and that there is a subset H′ ⊆ S such that H′ has n − m elements and L′ ∪ H′ spans V. Suppose H′ = {u1, . . . ,un−m}. Because L′ ∪ H′ spans V, there are a1, . . . ,am, b1, . . . ,bn−m ∈ F such that vm+1 = a1v1 + · · · + amvm + b1u1 + · · · + bn−mun−m. Because {v1, . . . ,vm+1} is linearly independent, vm+1 is not a linear combination of {v1, . . . ,vm}. Hence n − m > 0 and not all b1, . . . ,bn−m are zero.

Because n − m > 0, then n > m, and therefore n ≥ m + 1.

Without loss of generality, assume b1 ≠ 0. Then

u1 = (1/b1)vm+1 − (a1/b1)v1 − · · · − (am/b1)vm − (b2/b1)u2 − · · · − (bn−m/b1)un−m.

Let H = {u2, . . . ,un−m}. Clearly H has n − (m + 1) elements. Then

L ∪ H = {v1, . . . ,vm+1, u2, . . . ,un−m}.

We claim that L ∪ H spans V. Clearly, v1, . . . ,vm, u2, . . . ,un−m ∈ span(L ∪ H). Also u1 ∈ span(L ∪ H). Hence L′ ∪ H′ ⊆ span(L ∪ H). We know that span(L′ ∪ H′) = V, and hence by Exercise 8.4.4 (2) we see that span(span(L ∪ H)) = V. It follows from Exercise 8.4.3 that span(L ∪ H) = V.

Corollary 8.6.6. Let V be a vector space over a field F. Suppose that V has a finite basis.Then all bases of V are finite, and have the same number of vectors.

Proof. Let B be a finite basis for V. Let n = |B|. Let K be some other basis of V. Suppose that K has more elements than B. Then K has at least n + 1 elements (it could be that K is infinite). In particular, let C be a subset of K that has precisely n + 1 elements. Then C is linearly independent by Lemma 8.5.7. Because B spans V, then by Theorem 8.6.5 (1) we deduce that n + 1 ≤ n, which is a contradiction.


Next, suppose that K has fewer elements than B. Then K is finite. Let m = |K|. Thenm < n. Because K spans V and B is linearly independent, then by Theorem 8.6.5 (1) wededuce that n≤ m, which is a contradiction.

We conclude that K has the same number of vectors as B.

Definition 8.6.7. Let V be a vector space over a field F . The vector space V is finite-dimensional if V has a finite basis. The vector space V is infinite-dimensional if V doesnot have a finite basis. If V is finite-dimensional, the dimension of V , denoted dim(V ), isthe number of elements in any basis. 4

Lemma 8.6.8. Let V be a vector space over a field F. Then dim(V) = 0 if and only if V = {0}.

Proof. By Lemma 8.5.5 (1) we know that ∅ is linearly independent. Using Definition 8.4.2 we see that dim(V) = 0 if and only if ∅ is a basis for V if and only if V = span(∅) if and only if V = {0}.

Corollary 8.6.9. Let V be a vector space over a field F, and let S ⊆ V . Suppose that V isfinite-dimensional. Suppose that S is finite.

1. If S spans V , then |S|≥ dim(V ).

2. If S spans V and |S|= dim(V ), then S is a basis.

3. If S is linearly independent, then |S|≤ dim(V ).

4. If S is linearly independent and |S|= dim(V ), then S is a basis.

5. If S is linearly independent, then it can be extended to a basis.

Proof. We prove Parts (1) and (5), leaving the rest to the reader in Exercise 8.6.2.Let n = dim(V ).

(1). Suppose that S spans V . By Theorem 8.6.4 we know that there is some H ⊆ S suchthat H is a basis for V . Corollary 8.6.6 implies that |H |= n. It follows that |S|≥ n.

(5). Suppose that S is linearly independent. Let B be a basis for V . Then |B|= n. BecauseB is a basis for V , then B spans V . By the Replacement Theorem (Theorem 8.6.5) there isa subset K ⊆ B such that |K| = |B|− |S|, and such that S∪K spans V . Note that |S∪K| =|B| = n. It follows from Part (2) of this corollary that S∪K is a basis. Therefore S can beextended to a basis.

Theorem 8.6.10. Let V be vector space over a field F, and let W ⊆ V be a subspace.Suppose that V is finite-dimensional.


1. W is finite-dimensional.

2. dim(W )≤ dim(V ).

3. If dim(W ) = dim(V ), then W =V .

4. Any basis for W can be extended to a basis for V .

Proof. Let n = dim(V). We prove all four parts of the theorem together.

Case One: Suppose W = {0}. Then all four parts of the theorem hold.

Case Two: Suppose W ≠ {0}. Then there is some x1 ∈ W such that x1 ≠ 0. Note that {x1} is linearly independent. It might be the case that there is some x2 ∈ W such that {x1, x2} is linearly independent. Keep going, adding one vector at a time while maintaining linear independence. Because W ⊆ V, there are at most n linearly independent vectors in W by Corollary 8.6.9 (3). Hence we can keep adding vectors until we get {x1, . . . ,xk} ⊆ W for some k ∈ N such that k ≤ n, where adding any other vector in W would render the set linearly dependent. By Lemma 8.6.3 we see that {x1, . . . ,xk} is a basis for W. Therefore W is finite-dimensional and dim(W) ≤ dim(V).

Now suppose dim(W) = dim(V). Then k = n and {x1, . . . ,xn} is a linearly independent set in V with n elements. By Corollary 8.6.9 (4), we know that {x1, . . . ,xn} is a basis for V. Then W = span({x1, . . . ,xn}) = V.

From Corollary 8.6.9 (5) we deduce that any basis for W , which is a linearly independentset in V , can be extended to a basis for V .

Exercises

Exercise 8.6.1. Let W = {(x, y, z) ∈ R³ | x + y + z = 0}. It was proved in Exercise 8.3.1 that W is a subspace of R³. What is dim(W)? Prove your answer.

Exercise 8.6.2. Prove Corollary 8.6.9 (2), (3) and (4).

Exercise 8.6.3. Let V be a vector space over a field F , and let X ,Y ⊆ V be subspaces.Suppose that X and Y are finite-dimensional. Find necessary and sufficient conditions on Xand Y so that dim(X ∩Y ) = dim(X).

Exercise 8.6.4. Let V , W be vector spaces over a field F . Suppose that V and W are finite-dimensional. Let V ×W be the product vector space, as defined in Exercise 8.2.2. Expressdim(V ×W ) in terms of dim(V ) and dim(W ). Prove your answer.

Exercise 8.6.5. Let V be a vector space over a field F , and let L ⊆ S ⊆ V . Suppose that Sspans V . Prove that the following are equivalent.


a. L is a basis for V .

b. L is linearly independent, and is contained in no linearly independent subset of Sother than itself.


8.7 Bases for Arbitrary Vector Spaces

Friedberg-Insel-Spence, 4th ed. – Section 1.7

Definition 8.7.1. Let P be a non-empty family of sets, and let M ∈P . The set M is amaximal element of P if there is no Q ∈P such that M $ Q. 4

Lemma 8.7.2. Let V be a vector space over a field F. Let B be the family of all linearlyindependent subsets of V . Let S ∈B. Then S is a basis for V if and only if S is a maximalelement of B.

Proof. This lemma follows immediately from Lemma 8.6.3.

Definition 8.7.3. Let P be a non-empty family of sets, and let C ⊆P . The family C is achain if A,B ∈ C implies A⊆ B or A⊇ B. 4

Theorem 8.7.4 (Zorn’s Lemma). Let P be a non-empty family of sets. Suppose that for each chain C in P, the set ⋃C∈C C is in P. Then P has a maximal element.

Theorem 8.7.5. Let V be a vector space over a field F. Then V has a basis.

Proof. Let B be the family of all linearly independent subsets of V . We will show that Bhas a maximal element by using Zorn’s Lemma (Theorem 8.7.4). The maximal element ofB will be a basis for V by Lemma 8.7.2.

Because /0 is a linearly independent subset of V , as stated in Lemma 8.5.5 (1), we seethat /0 ∈B, and hence B is non-empty.

Let C be a chain in B. Let U = ⋃C∈C C. We need to show that U ∈ B. That is, we need to show that U is linearly independent. Let v1, . . . ,vn ∈ U and suppose a1v1 + · · · + anvn = 0 for some a1, . . . ,an ∈ F. By the definition of union, we know that for each i ∈ {1, . . . ,n}, there is some Ci ∈ C such that vi ∈ Ci. Because C is a chain, we know that for any two of C1, . . . ,Cn, one contains the other. Hence we can find k ∈ {1, . . . ,n} such that Ci ⊆ Ck for all i ∈ {1, . . . ,n}. Hence v1, . . . ,vn ∈ Ck. Because Ck ∈ C ⊆ B, then Ck is linearly independent, and so a1v1 + · · · + anvn = 0 implies ai = 0 for all i ∈ {1, . . . ,n}. Hence U is linearly independent, and therefore U ∈ B.

We have now seen that B satisfies the hypotheses of Zorn’s Lemma, and by that lemmawe deduce that B has a maximal element.

Exercises

Exercise 8.7.1. Let V be a vector space over a field F , and let S⊆V . Prove that if S spansV , then some subset of S is a basis for V .

9 Linear Maps

9.1 Linear Maps

Friedberg-Insel-Spence, 4th ed. – Section 2.1

Definition 9.1.1. Let V,W be vector spaces over a field F . Let f : V →W be a function.The function f is a linear map (also called linear transformation or vector space homo-morphism) if

(i) f (x+ y) = f (x)+ f (y)

(ii) f (cx) = c f (x)

for all x,y ∈V and c ∈ F 4

Lemma 9.1.2. Let V , W be vector spaces over a field F, and let f : V →W be a linearmap.

1. f (0) = 0.

2. If x ∈V , then f (−x) = − f (x).

Proof. We already proved this result about groups, because linear maps are group homo-morphisms.

Lemma 9.1.3. Let V , W be vector spaces over a field F, and let f : V →W be a function.The following are equivalent.

a. f is a linear map.

b. f (cx+ y) = c f (x)+ f (y) for all x,y ∈V and c ∈ F.

c. f(a1x1 + · · · + anxn) = a1 f(x1) + · · · + an f(xn) for all x1, . . . ,xn ∈ V and a1, . . . ,an ∈ F.

Proof. Showing (a)⇒ (b) is straightforward. Showing (b)⇒ (c) requires induction on n.Showing (c)⇒ (a) is straightforward.

Lemma 9.1.4. Let V , W, Z be vector spaces over a field F, and let f : V→W and g : W→ Zbe linear maps.

1. The identity map V : V →V is a linear map.

2. The function g◦ f is a linear map.

Proof.

(1). This part is straightforward.


(2). Let x,y ∈V and c ∈ F . Then

(g◦ f )(x+y) = g( f (x+y)) = g( f (x)+ f (y)) = g( f (x))+g( f (y)) = (g◦ f )(x)+(g◦ f )(y)

and(g◦ f )(cx) = g( f (cx)) = g(c( f (x))) = c(g(( f (x))) = c(g◦ f (x)).

Lemma 9.1.5. Let V , W be vector spaces over a field F, and let f : V →W be a linearmap.

1. If A is a subspace of V , then f (A) is a subspace of W.

2. If B is a subspace of W, then f⁻¹(B) is a subspace of V.

Proof. The proof of this result is similar to the proof of the analogous result for grouphomomorphisms.

Theorem 9.1.6. Let V , W be vector spaces over a field F.

1. Let B be a basis for V. Let g : B → W be a function. Then there is a unique linear map f : V → W such that f|B = g.

2. Let {v1, . . . ,vn} be a basis for V, and let w1, . . . ,wn ∈ W. Then there is a unique linear map f : V → W such that f(vi) = wi for all i ∈ {1, . . . ,n}.

Proof. We prove Part (1); Part (2) follows immediately from Part (1).

Let v ∈ V. Then by Theorem 8.6.2 (1) we know that v can be written as v = a1x1 + · · · + anxn for some x1, . . . ,xn ∈ B and a1, . . . ,an ∈ F, where the set of vectors with non-zero coefficients, together with their non-zero coefficients, are unique. Then define f(v) = a1g(x1) + · · · + ang(xn). If v is written in two different ways as linear combinations of elements of B, then the uniqueness of the vectors in B with non-zero coefficients, together with their non-zero coefficients, implies that f(v) is well-defined.

Observe that if v ∈ B, then v = 1 · v is the unique way of expressing v as a linear combination of vectors in B, and therefore f(v) = 1 · g(v) = g(v). Hence f|B = g.

Let v, w ∈ V and let c ∈ F. Then we can write v = a1x1 + · · · + anxn and w = b1x1 + · · · + bnxn where x1, . . . ,xn ∈ B and a1, . . . ,an, b1, . . . ,bn ∈ F. Then v + w = ∑_{i=1}^{n} (ai + bi)xi, and hence

f(v + w) = ∑_{i=1}^{n} (ai + bi)g(xi) = ∑_{i=1}^{n} ai g(xi) + ∑_{i=1}^{n} bi g(xi) = f(v) + f(w).

A similar proof shows that f(cv) = c f(v). Hence f is a linear map.

Let h : V → W be a linear map such that h|B = g. Let v ∈ V. Then v = a1x1 + · · · + anxn for some x1, . . . ,xn ∈ B and a1, . . . ,an ∈ F. Hence

h(v) = h(∑_{i=1}^{n} ai xi) = ∑_{i=1}^{n} ai h(xi) = ∑_{i=1}^{n} ai g(xi) = f(v).

Therefore h = f. It follows that f is unique.

Corollary 9.1.7. Let V , W be vector spaces over a field F, and let f ,g : V →W be linearmaps. Let B be a basis for V . Suppose that f (v) = g(v) for all v ∈ B. Then f = g.

Proof. Trivial.
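In coordinates, Theorem 9.1.6 (2) says that a linear map is determined by where it sends a basis. The added sketch below takes V = Q³ with the standard basis e1, e2, e3, chooses images w1, w2, w3 in Q², and evaluates the resulting map f(x1, x2, x3) = x1w1 + x2w2 + x3w3; the particular vectors are illustrative only.

from fractions import Fraction as F

w = [(F(1), F(0)), (F(1), F(1)), (F(0), F(2))]   # chosen images of e1, e2, e3 in Q^2

def f(x):
    # f(x1, x2, x3) = x1*w1 + x2*w2 + x3*w3, computed coordinatewise
    return tuple(sum(xi * wi[j] for xi, wi in zip(x, w)) for j in range(2))

assert f((1, 0, 0)) == w[0] and f((0, 1, 0)) == w[1] and f((0, 0, 1)) == w[2]
# A spot-check of linearity on one vector:
assert f((2, 3, -1)) == tuple(2 * a + 3 * b - c for a, b, c in zip(*w))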

Exercises

Exercise 9.1.1. Prove that there exists a linear map f : R → R such that f ([ ]) =[

]and f ([ ]) =

[ −

]. What is f ([ ])?

Exercise 9.1.2. Does there exist a linear map g : R → R such that g([

]) = [ ] and

g([−

]) = [ ]? Explain why or why not.


9.2 Kernel and Image

Friedberg-Insel-Spence, 4th ed. – Section 2.1

Definition 9.2.1. Let V , W be vector spaces over a field F , and let f : V →W be a linearmap.

1. The kernel (also called the null space) of f, denoted ker f, is the set ker f = f⁻¹({0}).

2. The image of f , denoted im f , is the set im f = f (V ). 4

Remark 9.2.2. Observe that

ker f = {v ∈V | f (v) = 0}

andim f = {w ∈W | w = f (v) for some v ∈V }. ♦

Lemma 9.2.3. Let V , W be vector spaces over a field F, and let f : V →W be a linearmap.

1. ker f is a subspace of V .

2. im f is a subspace of W.

Proof. Use Lemma 9.1.5.

Lemma 9.2.4. Let V , W be vector spaces over a field F, and let f : V →W be a linearmap. Then f is injective if and only if ker f = {0}.

Proof. Vector spaces are abelian groups, and linear maps are group homomorphisms, andtherefore this result follows from Theorem 5.2.5, which is the analogous result for grouphomomorphisms.

Lemma 9.2.5. Let V, W be vector spaces over a field F, and let f : V → W be a linear map. Let w ∈ W. If a ∈ f⁻¹({w}), then f⁻¹({w}) = a + ker f.

Proof. Again, this result follows from Lemma 5.2.6, which is the analogous result forgroup homomorphisms (though here we use additive notation instead of multiplicative no-tation).

Lemma 9.2.6. Let V , W be vector spaces over a field F, let f : V →W be a linear map andlet B be a basis for V . Then im f = span( f (B)).


Proof. Clearly f (B) ⊆ im f . By Lemma 9.2.3 (2) and Lemma 8.4.3 (3), we deduce thatspan( f (B))⊆ im f .

Let y ∈ im f. Then y = f(v) for some v ∈ V. Then v = a1v1 + · · · + anvn for some v1, . . . ,vn ∈ B and a1, . . . ,an ∈ F. Then

y = f(v) = f(a1v1 + · · · + anvn) = a1 f(v1) + · · · + an f(vn) ∈ span(f(B)).

Therefore im f ⊆ span(f(B)), and hence im f = span(f(B)).

Lemma 9.2.7. Let V , W be vector spaces over a field F, and let f : V →W be a linearmap. Suppose that V is finite-dimensional. Then ker f and im f are finite-dimensional.

Proof. By Lemma 9.2.3 (1) we know that ker f is a subspace of V , and hence ker f isfinite-dimensional by Theorem 8.6.10 (1).

Let B be a basis for V . By Corollary 8.6.6 we know that B is finite. Hence f (B) is finite.By Lemma 9.2.6 we see that im f = span( f (B)). It follows from Theorem 8.6.4 that asubset of f (B) is a basis for im f , which implies that im f is finite-dimensional.

Exercises

Exercise 9.2.1. Let V, W be vector spaces over a field F, and let f : V → W be a linear map. Let w1, . . . ,wk ∈ im f be linearly independent vectors. Let v1, . . . ,vk ∈ V be vectors such that f(vi) = wi for all i ∈ {1, . . . ,k}. Prove that v1, . . . ,vk are linearly independent.

Exercise 9.2.2. Let V and W be vector spaces over a field F , and let f : V →W be a linearmap.

(1) Prove that f is injective if and only if for every linearly independent subset S ⊆ V, the set f(S) is linearly independent.

(2) Suppose that f is injective. Let T ⊆ V. Prove that T is linearly independent if and only if f(T) is linearly independent.

(3) Suppose that f is bijective. Let B ⊆ V. Prove that B is a basis for V if and only if f(B) is a basis for W.

Exercise 9.2.3. Find an example of two linear maps f ,g : R→ R such that ker f = ker g and im f = im g, and none of these kernels and images is the trivial vector space, and f ≠ g.


9.3 Rank-Nullity Theorem

Friedberg-Insel-Spence, 4th ed. – Section 2.1

Definition 9.3.1. Let V , W be vector spaces over a field F , and let f : V →W be a linear map.

1. If ker f is finite-dimensional, the nullity of f , denoted nullity( f ), is defined by nullity( f ) = dim(ker f ).

2. If im f is finite-dimensional, the rank of f , denoted rank( f ), is defined by rank( f ) = dim(im f ). 4

Theorem 9.3.2 (Rank-Nullity Theorem). Let V , W be vector spaces over a field F, and let f : V →W be a linear map. Suppose that V is finite-dimensional. Then

nullity( f )+ rank( f ) = dim(V ).

Proof. Let n = dim(V ). By Lemma 9.2.3 (1) we know that ker f is a subspace of V , and hence ker f is finite-dimensional by Theorem 8.6.10 (1), and nullity( f ) = dim(ker f ) ≤ dim(V ) by Theorem 8.6.10 (2). Let k = nullity( f ). Then k ≤ n. Let {v_1, . . . , v_k} be a basis for ker f . By Theorem 8.6.10 (4) {v_1, . . . , v_k} can be extended to a basis {v_1, . . . , v_n} for V . We will show that { f (v_{k+1}), . . . , f (v_n)} is a basis for im f . It will then follow that rank( f ) = n − k, which will prove the theorem.

By Lemma 9.2.6 we know that im f = span({ f (v_1), . . . , f (v_n)}). Note that v_1, . . . , v_k ∈ ker f , and therefore f (v_1) = · · · = f (v_k) = 0. It follows that im f = span({ f (v_{k+1}), . . . , f (v_n)}).

Suppose b_{k+1} f (v_{k+1}) + · · · + b_n f (v_n) = 0 for some b_{k+1}, . . . , b_n ∈ F . Hence f (b_{k+1}v_{k+1} + · · · + b_nv_n) = 0. Therefore b_{k+1}v_{k+1} + · · · + b_nv_n ∈ ker f . Because {v_1, . . . , v_k} is a basis for ker f , then b_{k+1}v_{k+1} + · · · + b_nv_n = b_1v_1 + · · · + b_kv_k for some b_1, . . . , b_k ∈ F . Then b_1v_1 + · · · + b_kv_k + (−b_{k+1})v_{k+1} + · · · + (−b_n)v_n = 0. Because {v_1, . . . , v_n} is a basis for V , then b_1 = · · · = b_n = 0. Therefore f (v_{k+1}), . . . , f (v_n) are linearly independent.
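The theorem can be checked numerically for linear maps F^n → F^m given by matrices. The sketch below (our own illustration, assuming Python with NumPy and SciPy) computes rank( f ) and nullity( f ) independently and confirms that they sum to dim(V ); the sample matrix is arbitrary.

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.integers(-2, 3, size=(m, n)).astype(float)   # a linear map R^5 -> R^3

rank_f = np.linalg.matrix_rank(A)        # rank(f) = dim(im f)
nullity_f = null_space(A).shape[1]       # nullity(f) = dim(ker f)

print(rank_f + nullity_f == n)           # True: nullity(f) + rank(f) = dim(V)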

Corollary 9.3.3. Let V , W be vector spaces over a field F, and let f : V →W be a linear map. Suppose that V is finite-dimensional. Then rank( f ) ≤ dim(V ).

Proof. Trivial.

Corollary 9.3.4. Let V , W be vector spaces over a field F, and let f : V →W be a linear map. Suppose that V and W are finite-dimensional, and that dim(V ) = dim(W ). The following are equivalent.

a. f is injective.

b. f is surjective.

c. f is bijective.


d. rank( f ) = dim(V ).

Proof. Clearly (c) ⇒ (a), and (c) ⇒ (b). We will show below that (a) ⇔ (d), and (b) ⇔ (d). It will then follow that (a) ⇔ (b), and from that we will deduce that (a) ⇒ (c), and (b) ⇒ (c).

(a) ⇔ (d) By Lemma 9.2.4 we know that f is injective if and only if ker f = {0}. By Lemma 8.6.8 we deduce that f is injective if and only if dim(ker f ) = 0, and by definition that is true if and only if nullity( f ) = 0. By the Rank-Nullity Theorem (Theorem 9.3.2), we know that nullity( f ) = dim(V ) − rank( f ). It follows that f is injective if and only if dim(V ) − rank( f ) = 0, which is the same as rank( f ) = dim(V ).

(b) ⇔ (d) By definition f is surjective if and only if im f = W . By Lemma 9.2.3 (2) we know that im f is a subspace of W . If im f = W then clearly dim(im f ) = dim(W ); by Theorem 8.6.10 (3) we know that if dim(im f ) = dim(W ) then im f = W . Hence f is surjective if and only if dim(im f ) = dim(W ), and by definition that is true if and only if rank( f ) = dim(W ). By hypothesis dim(W ) = dim(V ), and therefore f is surjective if and only if rank( f ) = dim(V ).

Corollary 9.3.5. Let V , W, Z be vector spaces over a field F, and let f : V →W and g : W → Z be linear maps. Suppose that V and W are finite-dimensional.

1. rank(g◦ f )≤ rank(g).

2. rank(g◦ f )≤ rank( f ).

Proof.

(1). Observe that im(g ◦ f ) = (g ◦ f )(V ) = g( f (V )) ⊆ g(W ) = im g. By Lemma 9.2.3 (2) we know that im(g ◦ f ) and im g are subspaces of Z. It is straightforward to see that im(g ◦ f ) is a subspace of im g. It follows from Theorem 8.6.10 (2) that rank(g ◦ f ) = dim(im(g ◦ f )) ≤ dim(im g) = rank(g).

(2). By Corollary 9.3.3 we see that rank(g ◦ f ) = dim(im(g ◦ f )) = dim((g ◦ f )(V )) = dim(g( f (V ))) = dim(g|_{f (V )}( f (V ))) = rank(g|_{f (V )}) ≤ dim( f (V )) = dim(im f ) = rank( f ).

Exercises

Exercise 9.3.1. Let V , W be vector spaces over a field F , and let f : V →W be a linear map. Suppose that V and W are finite-dimensional.

(1) Prove that if dim(V )< dim(W ), then f cannot be surjective.

(2) Prove that if dim(V )> dim(W ), then f cannot be injective.


9.4 Isomorphisms

Friedberg-Insel-Spence, 4th ed. – Section 2.4

Definition 9.4.1. Let V and W be vector spaces over a field F and let f : V →W be a function. The function f is an isomorphism if f is bijective and is a linear map. 4

Definition 9.4.2. Let V , W be vector spaces over a field F . The vector spaces V and W are isomorphic if there is an isomorphism V →W . 4

Lemma 9.4.3. Let V , W be vector spaces over a field F, and let f : V →W be an isomorphism. Then f^{-1} is an isomorphism.

Proof. The proof of this result is similar to the proof of Theorem 2.2.4 (2), which is the analogous result for group isomorphisms.

Corollary 9.4.4. Let V , W be vector spaces over a field F, and let f : V →W be a linear map. Suppose that V and W are finite-dimensional, and that dim(V ) = dim(W ). The following are equivalent.

a. f is injective.

b. f is surjective.

c. f is an isomorphism.

d. rank( f ) = dim(V ).

Lemma 9.4.5. Let V , W be vector spaces over a field F, and let f : V →W be a linear map. Let B be a basis for V . Then f is an isomorphism if and only if f (B) is a basis for W.

Proof. Suppose that f is an isomorphism. Let v_1, v_2, . . . , v_n ∈ f (B) and a_1, a_2, . . . , a_n ∈ F , and suppose that v_1, . . . , v_n are distinct, and that a_1v_1 + · · · + a_nv_n = 0. There are w_1, . . . , w_n ∈ B such that f (w_i) = v_i for all i ∈ {1, . . . , n}. Clearly w_1, . . . , w_n are distinct. Then a_1 f (w_1) + · · · + a_n f (w_n) = 0. It follows that f (a_1w_1 + · · · + a_nw_n) = 0, which means that a_1w_1 + · · · + a_nw_n ∈ ker f . Because f is injective, then by Lemma 9.2.4 we know that ker f = {0}. Therefore a_1w_1 + · · · + a_nw_n = 0. Because {w_1, . . . , w_n} ⊆ B, and because B is linearly independent, it follows from Lemma 8.5.7 (2) that {w_1, . . . , w_n} is linearly independent. Hence a_1 = a_2 = · · · = a_n = 0. We deduce that f (B) is linearly independent. Because f is surjective, we know that im f = W . It follows from Lemma 9.2.6 that span( f (B)) = W . We conclude that f (B) is a basis for W .

Suppose that f (B) is a basis for W . Then span( f (B)) = W , and by Lemma 9.2.6 we deduce that im f = W , which means that f is surjective. Let v ∈ ker f . Because B is a basis for V , there are m ∈ N, vectors u_1, . . . , u_m ∈ B and c_1, . . . , c_m ∈ F such that v = c_1u_1 + · · · + c_mu_m. Then f (c_1u_1 + · · · + c_mu_m) = 0, and hence c_1 f (u_1) + · · · + c_m f (u_m) = 0. Because f (B) is linearly independent, it follows that c_1 = · · · = c_m = 0. We deduce that v = 0. Therefore ker f = {0}. By Lemma 9.2.4 we conclude that f is injective.


Theorem 9.4.6. Let V , W be vector spaces over a field F. Then V and W are isomorphic if and only if there is a basis B of V and a basis C of W such that B and C have the same cardinality.

Proof. Suppose V and W are isomorphic. Let f : V →W be an isomorphism, and let D be a basis of V . Then by Lemma 9.4.5 we know that f (D) is a basis of W , and clearly D and f (D) have the same cardinality.

Suppose that there is a basis B of V and a basis C of W such that B and C have the same cardinality. Let g : B→C be a bijective map. Extend g to a linear map h : V →W by Theorem 9.1.6 (1). Then h(B) = C, so h(B) is a basis for W , and it follows by Lemma 9.4.5 that h is an isomorphism.

Corollary 9.4.7. Let V , W be vector spaces over a field F. Suppose that V and W are isomorphic. Then V is finite-dimensional if and only if W is finite-dimensional. If V and W are both finite-dimensional, then dim(V ) = dim(W ).

Proof. This result follows immediately from Theorem 9.4.6, because a vector space is finite-dimensional if and only if it has a finite basis, and the dimension of a finite-dimensional vector space is the cardinality of any basis of the vector space.

Corollary 9.4.8. Let V , W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Then V and W are isomorphic if and only if dim(V ) = dim(W ).

Proof. This result follows immediately from Theorem 9.4.6, because the dimension of a finite-dimensional vector space is the cardinality of any basis of the vector space.

Corollary 9.4.9. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let n = dim(V ). Then V is isomorphic to Fn.

Proof. Observe that dim(Fn) = n. The result then follows immediately from Corollary 9.4.8.

Exercises

Exercise 9.4.1. Let V be a vector space over a field F . Suppose that V is non-trivial. Let B be a basis for V . Let C (B,F) be as defined in Exercise 8.3.2. It was seen in Exercise 8.3.2 that C (B,F) is a vector space over F . Let Ψ : C (B,F)→V be defined by

Ψ( f ) = ∑_{v∈B, f (v)≠0} f (v)v,

for all f ∈ C (B,F). Prove that Ψ is an isomorphism. Hence every non-trivial vector space can be viewed as a space of functions.


9.5 Spaces of Linear Maps

Friedberg-Insel-Spence, 4th ed. – Section 2.2

Definition 9.5.1. Let V and W be vector spaces over a field F . The set of all linear maps V →W is denoted L (V,W ). The set of all linear maps V →V is denoted L (V ). 4

Definition 9.5.2. Let A be a set, let W be a vector space over a field F , let f ,g : A→W be functions and let c ∈ F .

1. Let f +g : A→W be defined by ( f +g)(x) = f (x)+g(x) for all x ∈ A.

2. Let − f : A→W be defined by (− f )(x) = − f (x) for all x ∈ A.

3. Let c f : A→W be defined by (c f )(x) = c f (x) for all x ∈ A.

4. Let 0 : A→W be defined by 0(x) = 0 for all x ∈ A. 4

Lemma 9.5.3. Let V , W be vector spaces over a field F, let f ,g : V →W be linear maps and let c ∈ F.

1. f +g is a linear map.

2. − f is a linear map.

3. c f is a linear map.

4. 0 is a linear map.

Proof. We prove Part (1); the other parts are similar, and are left to the reader.

(1). Let x,y ∈V and let d ∈ F . Then

( f +g)(x+ y) = f (x+ y)+g(x+ y) = [ f (x)+ f (y)]+ [g(x)+g(y)]= [ f (x)+g(x)]+ [ f (y)+g(y)] = ( f +g)(x)+( f +g)(y)

and

( f +g)(dx) = f (dx)+g(dx) = d f (x)+dg(x) = d[ f (x)+g(x)] = d( f +g)(x).

Lemma 9.5.4. Let V , W be vector spaces over a field F. Then L (V,W ) is a vector space over F.


Proof. We will show Property (7) in the definition of vector spaces; the other properties are similar. Let f ,g ∈ L (V,W ) and let a ∈ F . Let x ∈V . Then

[a( f +g)](x) = a[( f +g)(x)] = a[ f (x)+g(x)] = a f (x)+ag(x) = (a f )(x)+(ag)(x) = [a f +ag](x).

Hence a( f +g) = a f +ag.

Lemma 9.5.5. Let V , W, X, Z be vector spaces over a field F. Let f ,g : V → W and k : X →V and h : W → Z be linear maps, and let c ∈ F.

1. ( f +g)◦k = ( f ◦k)+(g◦k).

2. h◦( f +g) = (h◦ f )+(h◦g).

3. c(h◦ f ) = (ch)◦ f = h◦(c f ).

Proof. We prove Part (1); the other parts are similar, and are left to the reader.

(1). Let x ∈ X . Then

[( f +g)◦k](x) = ( f +g)(k(x)) = f (k(x))+g(k(x))= ( f ◦k)(x)+(g◦k)(x) = [( f ◦k)+(g◦k)](x).

Hence ( f +g)◦k = ( f ◦k)+(g◦k).

Theorem 9.5.6. Let V , W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Then L (V,W ) is finite-dimensional, and dim(L (V,W )) = dim(V ) ·dim(W ).

Proof. Let n = dim(V ) and m = dim(W ). Let {v_1, . . . , v_n} be a basis for V , and let {w_1, . . . , w_m} be a basis for W .

For each i ∈ {1, . . . , n} and j ∈ {1, . . . , m}, let e_ij : V →W be defined as follows. First, let

e_ij(v_k) = w_j if k = i, and e_ij(v_k) = 0 if k ∈ {1, . . . , n} and k ≠ i.

Next, because {v_1, . . . , v_n} is a basis for V , we can use Theorem 9.1.6 (2) to extend e_ij to a unique linear map V →W .

We claim that the set T = {e_ij | i ∈ {1, . . . , n} and j ∈ {1, . . . , m}} is a basis for L (V,W ). Once we prove that claim, the result will follow, because T has nm elements.

Suppose that there is some a_ij ∈ F for each i ∈ {1, . . . , n} and j ∈ {1, . . . , m} such that

∑_{i=1}^{n} ∑_{j=1}^{m} a_ij e_ij = 0.

Let k ∈ {1, . . . , n}. Then

∑_{i=1}^{n} ∑_{j=1}^{m} a_ij e_ij(v_k) = 0(v_k),

which implies that

∑_{j=1}^{m} a_kj w_j = 0.

Because {w_1, . . . , w_m} is linearly independent, it follows that a_kj = 0 for all j ∈ {1, . . . , m}. We deduce that a_ij = 0 for all i ∈ {1, . . . , n} and j ∈ {1, . . . , m}. Hence T is linearly independent.

Let f ∈ L (V,W ). Let r ∈ {1, . . . , n}. Then f (v_r) ∈ W . Because {w_1, . . . , w_m} spans W , there is some c_rp ∈ F for each p ∈ {1, . . . , m} such that f (v_r) = ∑_{j=1}^{m} c_rj w_j. Observe that

∑_{i=1}^{n} ∑_{j=1}^{m} c_ij e_ij(v_r) = ∑_{j=1}^{m} c_rj w_j = f (v_r).

Hence f and ∑_{i=1}^{n} ∑_{j=1}^{m} c_ij e_ij agree on {v_1, . . . , v_n}, and it follows from Corollary 9.1.7 that f = ∑_{i,j} c_ij e_ij. Hence T spans L (V,W ), and we conclude that T is a basis for L (V,W ).
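When V = F^n and W = F^m with their standard bases, the maps e_ij of the preceding proof correspond to the m × n matrices having a single 1 in row j and column i. The sketch below (an illustration of ours, in Python with NumPy; the dimensions and the sample matrix are arbitrary) mirrors the spanning argument by writing an arbitrary matrix, that is, an arbitrary linear map F^n → F^m, as a linear combination of these nm maps.

import numpy as np

n, m = 3, 2                      # dim(V) = 3, dim(W) = 2

def e(i, j):
    """Matrix of e_ij : F^n -> F^m, which sends v_i to w_j and every other v_k to 0."""
    M = np.zeros((m, n))
    M[j, i] = 1.0                # single nonzero entry: row j, column i
    return M

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # an arbitrary linear map F^3 -> F^2

# Coefficients are read off from the entries of A, exactly as in the proof
# (indices here are 0-based, while the proof uses 1-based indices).
recombined = sum(A[j, i] * e(i, j) for i in range(n) for j in range(m))
print(np.allclose(A, recombined))   # True: the e_ij span L(V, W)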

Exercises

Exercise 9.5.1. Let V , W be vector spaces over a field F , and let f ,g : V →W be non-zero linear maps. Suppose that im f ∩ im g = {0}. Prove that { f ,g} is a linearly independent subset of L (V,W ).

Exercise 9.5.2. Let V , W be vector spaces over a field F , and let S ⊆ V . Let S◦ ⊆ L (V,W ) be defined by

S◦ = { f ∈L (V,W ) | f (x) = 0 for all x ∈ S}.

(1) Prove that S◦ is a subspace of L (V,W ).

(2) Let T ⊆V . Prove that if S⊆ T , then T ◦ ⊆ S◦.

(3) Let X ,Y ⊆ V be subspaces. Prove that (X +Y )◦ = X◦ ∩Y ◦. (See Exercise 8.3.5 for the definition of X +Y .)


10 Linear Maps and Matrices


10.1 Review of Matrices—Multiplication

Friedberg-Insel-Spence, 4th ed. – Section 2.3

Definition 10.1.1. Let F be a field, and let m,n, p ∈ N. Let A ∈ Mm×n(F) and B ∈ Mn×p(F). Suppose that A = [a_ij] and B = [b_ij]. The matrix AB ∈ Mm×p(F) is defined by AB = [c_ij], where

c_ij = ∑_{k=1}^{n} a_ik b_kj

for all i ∈ {1, . . . , m} and j ∈ {1, . . . , p}. 4

Lemma 10.1.2. Let F be a field, and let m,n, p,q ∈ N. Let A ∈ Mm×n(F), let B ∈ Mn×p(F) and let C ∈ Mp×q(F).

1. A(BC) = (AB)C.

2. AI_n = A and I_mA = A.

Proof.

(1). Suppose that A = [a_ij] and B = [b_ij] and C = [c_ij], and AB = [s_ij] and BC = [t_ij] and A(BC) = [u_ij] and (AB)C = [w_ij]. Then s_ij = ∑_{k=1}^{n} a_ik b_kj for all i ∈ {1, . . . , m} and j ∈ {1, . . . , p}; and t_ij = ∑_{z=1}^{p} b_iz c_zj for all i ∈ {1, . . . , n} and j ∈ {1, . . . , q}. Then

u_ij = ∑_{x=1}^{n} a_ix t_xj = ∑_{x=1}^{n} a_ix (∑_{z=1}^{p} b_xz c_zj)

for all i ∈ {1, . . . , m} and j ∈ {1, . . . , q}; and

w_ij = ∑_{y=1}^{p} s_iy c_yj = ∑_{y=1}^{p} (∑_{k=1}^{n} a_ik b_ky) c_yj

for all i ∈ {1, . . . , m} and j ∈ {1, . . . , q}. Rearranging shows that u_ij = w_ij for all i ∈ {1, . . . , m} and j ∈ {1, . . . , q}.

(2). Straightforward.
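The entry formula of Definition 10.1.1 can be implemented directly. The following sketch (ours, in Python with NumPy, used here only for checking; the sample matrices are arbitrary) computes the product entry by entry and compares it with NumPy's built-in product and with the identity AI_n = A.

import numpy as np

def mat_mul(A, B):
    """Product per Definition 10.1.1: c_ij = sum over k of a_ik * b_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "number of columns of A must equal number of rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # 3 x 2
B = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])     # 2 x 3
print(np.allclose(mat_mul(A, B), A @ B))             # True
print(np.allclose(mat_mul(A, np.eye(2)), A))         # True: A I_n = A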

Definition 10.1.3. Let F be a field, and let n ∈ N. Let A ∈ Mn×n(F). The matrix A is invertible if there is some B ∈ Mn×n(F) such that BA = I_n and AB = I_n. Such a matrix B is an inverse of A. 4

Lemma 10.1.4. Let F be a field, and let n ∈ N. Let A ∈ Mn×n(F). If A has an inverse, then the inverse is unique.

Proof. The proof is similar to that used for groups.

Definition 10.1.5. Let F be a field, and let n ∈ N. Let A ∈ Mn×n(F). If A has an inverse, then the inverse is denoted A^{-1}. 4

Lemma 10.1.6. Let F be a field, and let n ∈ N. Let A,B ∈ Mn×n(F). Suppose that A and B are invertible.

1. A^{-1} is invertible, and (A^{-1})^{-1} = A.

2. AB is invertible, and (AB)^{-1} = B^{-1}A^{-1}.

Proof. The proof is similar to that used for groups.


Definition 10.1.7. Let F be a field, and let n ∈ N. The set of all n×n invertible matrices with entries in F is denoted GLn(F). 4

Corollary 10.1.8. Let F be a field, and let n ∈ N. Then (GLn(F), ·) is a group.

Lemma 10.1.9. Let F be a field, and let m,n, p ∈ N. Let A,B ∈Mm×n(F) and let C,D ∈Mn×p(F). Then A(C+D) = AC+AD and (A+B)C = AC+BC.

Proof. Straightforward.

Corollary 10.1.10. Let F be a field, and let n ∈ N. Then (Mn×n(F),+, ·) is a ring with unity.

Exercises

Exercise 10.1.1. Let F be a field, and let n ∈ N. Let A,B ∈ Mn×n(F). The trace of A is defined by

tr A = ∑_{i=1}^{n} a_ii.

Prove that tr(AB) = tr(BA).


10.2 Linear Maps Given by Matrix Multiplication

Friedberg-Insel-Spence, 4th ed. – Section 2.3

Definition 10.2.1. Let F be a field, and let m,n ∈ N. Let A ∈ Mm×n(F). The linear map induced by A is the function LA : Fn→ Fm defined by LA(v) = Av for all v ∈ Fn. 4

Lemma 10.2.2. Let F be a field, and let m,n, p ∈ N. Let A,B ∈ Mm×n(F), let C ∈ Mn×p(F), and let s ∈ F.

1. LA is a linear map.

2. LA = LB if and only if A = B.

3. LA+B = LA +LB.

4. LsA = sLA.

5. LAC = LA ◦LC.

6. Suppose m = n. Then LIn = 1_{Fn}, the identity map of Fn.

Proof. Suppose that A = [a_ij] and B = [b_ij]. Let {e_1, . . . , e_n} be the standard basis for Fn.

(1). Let v,w ∈ Fn. Then LA(v+w) = A(v+w) = Av+Aw = LA(v)+LA(w), and LA(sv) = A(sv) = s(Av) = sLA(v).

(2). If A = B, then clearly LA = LB.

Suppose LA = LB. Let j ∈ {1, . . . , n}. Then LA(e_j) = LB(e_j), and hence Ae_j = Be_j, which means that the j-th column of A equals the j-th column of B. Hence A = B.

(3). Let v ∈ Fn. Then LA+B(v) = (A+B)(v) = Av+Bv = LA(v)+LB(v). Hence LA+B = LA + LB.

(4). The proof is similar to the proof of Part (3).

(5). Let j ∈ {1, . . . , p}. Then LAC(e_j) = (AC)(e_j), and (LA ◦ LC)(e_j) = LA(LC(e_j)) = A(C(e_j)). Observe that (AC)(e_j) is the j-th column of AC, and that C(e_j) is the j-th column of C. However, the j-th column of AC is A times the j-th column of C. Hence LAC(e_j) = (LA ◦ LC)(e_j). Therefore LAC and LA ◦ LC agree on a basis, and by Corollary 9.1.7 we deduce that LAC = LA ◦ LC.

(6). Trivial.
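The following sketch (our own, in Python with NumPy; the matrices and the test vector are arbitrary choices) realizes LA as an actual function and checks two of the identities above, L_{A+B} = LA + LB and L_{AC} = LA ◦ LC, on a sample vector.

import numpy as np

def L(A):
    """The induced linear map L_A : F^n -> F^m of Definition 10.2.1."""
    return lambda v: A @ v

A = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, 1.0]])   # 3 x 2
B = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 4.0]])   # 3 x 2
C = np.array([[2.0, 1.0], [1.0, 1.0]])               # 2 x 2
v = np.array([1.0, -1.0])

print(np.allclose(L(A + B)(v), L(A)(v) + L(B)(v)))   # L_{A+B} = L_A + L_B
print(np.allclose(L(A @ C)(v), L(A)(L(C)(v))))       # L_{AC} = L_A o L_C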

Corollary 10.2.3. Let F be a field, and let m,n, p,q ∈ N. Let A ∈ Mm×n(F), let B ∈Mn×p(F), and let C ∈Mp×q(F). Then (AB)C = A(BC).


Proof. Using Lemma 10.2.2 (5) together with the associativity of the composition of functions, we see that LA(BC) = LA ◦ LBC = LA ◦ (LB ◦ LC) = (LA ◦ LB) ◦ LC = LAB ◦ LC = L(AB)C. By Lemma 10.2.2 (2) we deduce that A(BC) = (AB)C.


10.3 All Linear Maps Fn→ Fm

Friedberg-Insel-Spence, 4th ed. – Section 2.2

Lemma 10.3.1. Let F be a field. Let n,m ∈ N, and let f : Fn→ Fm be a linear map. Then f = LA, where A ∈ Mm×n(F) is the matrix that has columns f (e_1), . . . , f (e_n).

Proof. Let i ∈ {1, . . . , n}. Write f (e_i) as the column vector with entries a_1i, . . . , a_mi, so that A = [a_ij] ∈ Mm×n(F) is the matrix whose i-th column is f (e_i).

Let v ∈ Fn, and write v as the column vector with entries x_1, . . . , x_n, where x_1, . . . , x_n ∈ F . Then

f (v) = f (x_1e_1 + · · · + x_ne_n) = x_1 f (e_1) + · · · + x_n f (e_n),

which is the column vector whose j-th entry is x_1a_j1 + · · · + x_na_jn for each j ∈ {1, . . . , m}. This is precisely the j-th entry of Av, and hence f (v) = Av = LA(v).

Hence f = LA.
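As an illustration of Lemma 10.3.1 (our own sketch, in Python with NumPy), the matrix A can be assembled from the images of the standard basis vectors of a sample linear map f : R^3 → R^2; the particular f is an arbitrary choice.

import numpy as np

def f(v):
    """A sample linear map f : R^3 -> R^2."""
    x, y, z = v
    return np.array([2*x - y, y + 3*z])

n = 3
E = np.eye(n)                                        # standard basis e_1, ..., e_n
A = np.column_stack([f(E[:, i]) for i in range(n)])  # i-th column is f(e_i)

v = np.array([1.0, 2.0, -1.0])
print(np.allclose(f(v), A @ v))   # True: f = L_A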


10.4 Coordinate Vectors with respect to a Basis

Friedberg-Insel-Spence, 4th ed. – Section 2.2

Definition 10.4.1. Let V be a vector space over a field F , and let β ⊆ V be a basis for V . The set β is an ordered basis if the elements of β are given a specific order. 4

Definition 10.4.2. Let V be a vector space over a field F . Suppose that V is finite-dimensional. Let n = dim(V ). Let β = {v_1, . . . , v_n} be an ordered basis for V . Let x ∈ V . Then there are unique a_1, . . . , a_n ∈ F such that x = a_1v_1 + · · · + a_nv_n. The coordinate vector of x relative to β is [x]_β ∈ Fn, the column vector with entries a_1, . . . , a_n. 4
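Concretely, finding [x]_β amounts to solving a linear system whose coefficient matrix has the basis vectors of β as its columns. The sketch below (ours, in Python with NumPy; the basis of R^2 and the vector x are chosen only for illustration) computes one coordinate vector this way.

import numpy as np

# An ordered basis beta = {v_1, v_2} for R^2, and a vector x.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
x = np.array([3.0, 1.0])

# [x]_beta is the solution a of  a_1 v_1 + a_2 v_2 = x,  i.e.  B a = x,
# where B has the basis vectors as its columns.
B = np.column_stack([v1, v2])
coords = np.linalg.solve(B, x)
print(coords)                                         # [2. 1.], so x = 2 v_1 + 1 v_2
print(np.allclose(coords[0]*v1 + coords[1]*v2, x))    # True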

Lemma 10.4.3. Let F be a field, and let n ∈ N. Let β be the standard ordered basis for Fn. If v ∈ Fn, then [v]_β = v.

Proof. Straightforward.

Definition 10.4.4. Let V be a vector space over a field F . Suppose that V is finite-dimensional. Let n = dim(V ). Let β be an ordered basis for V . The standard representation of V with respect to β is the function φ_β : V → Fn defined by φ_β (x) = [x]_β for all x ∈ V . 4

Theorem 10.4.5. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let n = dim(V ). Let β be an ordered basis for V . Then φ_β is an isomorphism.

Proof. Let {e_1, . . . , e_n} be the standard basis for Fn. Let β = {u_1, . . . , u_n}. Let i ∈ {1, . . . , n}. Then φ_β (u_i) = e_i. By Theorem 9.1.6 (2) there is a unique linear map g : V → Fn such that g(u_i) = e_i for all i ∈ {1, . . . , n}.

Let v ∈ V . Then there are unique a_1, . . . , a_n ∈ F such that v = a_1u_1 + · · · + a_nu_n. Hence φ_β (v) is the column vector with entries a_1, . . . , a_n, and so

φ_β (v) = a_1e_1 + · · · + a_ne_n = a_1g(u_1) + · · · + a_ng(u_n) = g(a_1u_1 + · · · + a_nu_n) = g(v).

Hence φ_β = g. It follows that φ_β is linear. We know by Lemma 9.2.6 that im φ_β = span(φ_β (β )) = span{e_1, . . . , e_n} = Fn. Hence φ_β is surjective. Because dim(V ) = n = dim(Fn), it follows from Corollary 9.4.4 that φ_β is an isomorphism.


10.5 Matrix Representation of Linear Maps—Basics

Friedberg-Insel-Spence, 4th ed. – Section 2.2

Definition 10.5.1. Let V , W be vector spaces over a field F , and let f : V →W be a linear map. Suppose that V and W are finite-dimensional. Let n = dim(V ) and m = dim(W ). Let β = {v_1, . . . , v_n} be an ordered basis for V and γ = {w_1, . . . , w_m} be an ordered basis for W . The matrix representation of f with respect to β and γ is the matrix [ f ]^γ_β with j-th column equal to [ f (v_j)]_γ for all j ∈ {1, . . . , n}.

If V = W and β = γ , the matrix [ f ]^γ_β is written [ f ]_β . 4

Remark 10.5.2. With the hypotheses of Definition 10.5.1, we see that [ f ]^γ_β = [a_ij], where the elements a_ij ∈ F are the elements such that

f (v_j) = ∑_{i=1}^{m} a_ij w_i

for all j ∈ {1, . . . , n}. Note that [ f ]^γ_β is an m×n matrix. ♦
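The remark translates directly into a procedure: the j-th column of [ f ]^γ_β is obtained by expressing f (v_j) in the basis γ, that is, by solving a linear system. The sketch below (our own, in Python with NumPy; the map f and the bases are arbitrary choices) builds one matrix representation this way.

import numpy as np

def f(v):
    """A sample linear map f : R^2 -> R^2, f(x, y) = (x + y, x - y)."""
    x, y = v
    return np.array([x + y, x - y])

beta  = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]   # ordered basis for V
gamma = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]   # ordered basis for W
C = np.column_stack(gamma)                             # gamma vectors as columns

# j-th column of [f]^gamma_beta is the coordinate vector [f(v_j)]_gamma.
M = np.column_stack([np.linalg.solve(C, f(v)) for v in beta])
print(M)   # [[ 0. -2.]
           #  [ 1.  2.]]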

Lemma 10.5.3. Let V , W be vector spaces over a field F, let f ,g : V →W be linear maps, and let c ∈ F. Suppose that V and W are finite-dimensional. Let n = dim(V ). Let β be an ordered basis for V , and let γ be an ordered basis for W.

1. [ f ]^γ_β = [g]^γ_β if and only if f = g.

2. [ f +g]^γ_β = [ f ]^γ_β + [g]^γ_β .

3. [c f ]^γ_β = c[ f ]^γ_β .

4. [1_V ]_β = I_n.

Proof. We prove Part (1); the other parts are straightforward.

(1). If f = g, then clearly [ f ]^γ_β = [g]^γ_β .

Suppose that [ f ]^γ_β = [g]^γ_β . Let β = {v_1, . . . , v_n}. Let j ∈ {1, . . . , n}. Then [ f (v_j)]_γ is the j-th column of [ f ]^γ_β , and [g(v_j)]_γ is the j-th column of [g]^γ_β . It follows that f (v_j) and g(v_j) have the same coordinate vector relative to γ . Hence f (v_j) = g(v_j). Therefore f and g agree on a basis, and by Corollary 9.1.7 we deduce that f = g.

Exercises


Exercise 10.5.1. Let V , W be vector spaces over a field F . Suppose that V and W are finite-dimensional. Let n = dim(V ) and m = dim(W ). Let β be an ordered basis for V , and let γ be an ordered basis for W . Let A ∈ Mm×n(F). Prove that there is a linear map f : V →W such that [ f ]^γ_β = A.

Exercise 10.5.2. Let V , W be vector spaces over a field F , and let f : V →W be a linear map. Suppose that V and W are finite-dimensional.

(1) Suppose that f is an isomorphism. Then there is an ordered basis α for V and an ordered basis δ for W such that [ f ]^δ_α is the identity matrix.

(2) Suppose that f is an arbitrary linear map. Then there is an ordered basis α for V and an ordered basis δ for W such that [ f ]^δ_α has the block form

[ f ]^δ_α = ( I_r  O
              O   O ),

where O denotes the appropriate zero matrices, for some r ∈ {0, 1, . . . , n}.


10.6 Matrix Representation of Linear Maps—Composition

Friedberg-Insel-Spence, 4th ed. – Section 2.3

Theorem 10.6.1. Let V , W, Z be vector spaces over a field F, and let f : V →W and g : W → Z be linear maps. Suppose that V , W and Z are finite-dimensional. Let β be an ordered basis for V , let γ be an ordered basis for W, and let δ be an ordered basis for Z. Then

[g ◦ f ]^δ_β = [g]^δ_γ [ f ]^γ_β .

Proof. Suppose that [ f ]^γ_β = [a_ij], that [g]^δ_γ = [b_ij], that [g ◦ f ]^δ_β = [c_ij], and that [g]^δ_γ [ f ]^γ_β = [d_ij].

Let n = dim(V ), let m = dim(W ) and let p = dim(Z). Let β = {v_1, . . . , v_n}, let γ = {w_1, . . . , w_m} and let δ = {z_1, . . . , z_p}.

By the definition of matrix multiplication, we see that d_ij = ∑_{k=1}^{m} b_ik a_kj for all i ∈ {1, . . . , p} and j ∈ {1, . . . , n}.

Let j ∈ {1, . . . , n}. Then by Remark 10.5.2 we see that

(g ◦ f )(v_j) = ∑_{r=1}^{p} c_rj z_r.

On the other hand, using Remark 10.5.2 again, we have

(g ◦ f )(v_j) = g( f (v_j)) = g(∑_{i=1}^{m} a_ij w_i) = ∑_{i=1}^{m} a_ij g(w_i) = ∑_{i=1}^{m} a_ij [∑_{r=1}^{p} b_ri z_r] = ∑_{r=1}^{p} [∑_{i=1}^{m} b_ri a_ij] z_r.

Because {z_1, . . . , z_p} is a basis, it follows from Theorem 8.6.2 (2) that ∑_{i=1}^{m} b_ri a_ij = c_rj for all r ∈ {1, . . . , p}.

Hence d_ij = c_ij for all i ∈ {1, . . . , p} and j ∈ {1, . . . , n}, which means that [g ◦ f ]^δ_β = [g]^δ_γ [ f ]^γ_β .
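The theorem can be checked numerically once matrix representations are computed as in Remark 10.5.2. The sketch below (ours, in Python with NumPy; the maps, the bases, and the helper matrix_rep are all our own choices) compares [g ◦ f ]^δ_β with the product [g]^δ_γ [ f ]^γ_β .

import numpy as np

def matrix_rep(h, basis_in, basis_out):
    """[h] with respect to the given ordered bases: column j is [h(v_j)]_basis_out."""
    C = np.column_stack(basis_out)
    return np.column_stack([np.linalg.solve(C, h(v)) for v in basis_in])

f = lambda v: np.array([v[0] + v[1], 2*v[1]])            # f : R^2 -> R^2
g = lambda w: np.array([w[0], w[0] - w[1], 3*w[1]])      # g : R^2 -> R^3

beta  = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]    # ordered basis for V
gamma = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]     # ordered basis for W
delta = [np.eye(3)[:, i] for i in range(3)]              # standard basis for Z

lhs = matrix_rep(lambda v: g(f(v)), beta, delta)
rhs = matrix_rep(g, gamma, delta) @ matrix_rep(f, beta, gamma)
print(np.allclose(lhs, rhs))                             # True: [g o f] = [g][f]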

Theorem 10.6.2. Let V , W be vector spaces over a field F, and let f : V →W be a linear map. Suppose that V and W are finite-dimensional. Let β be an ordered basis for V and let γ be an ordered basis for W. Let v ∈ V . Then [ f (v)]_γ = [ f ]^γ_β [v]_β .

Proof. Let h : F → V be defined by h(a) = av for all a ∈ F . Let g : F →W be defined by g(a) = a f (v) for all a ∈ F . It can be verified that h and g are linear maps; the details are left to the reader.

Let α = {1} be the standard ordered basis for F as a vector space over itself. Observe that f ◦ h = g, because f (h(a)) = f (av) = a f (v) = g(a) for all a ∈ F . Then

[ f (v)]_γ = [g(1)]_γ = [g]^γ_α = [ f ◦ h]^γ_α = [ f ]^γ_β [h]^β_α = [ f ]^γ_β [h(1)]_β = [ f ]^γ_β [v]_β .


Lemma 10.6.3. Let F be a field, and let m,n ∈ N. Let β be the standard ordered basis for Fn, and let γ be the standard ordered basis for Fm.

1. Let A ∈ Mm×n(F). Then [LA]^γ_β = A.

2. Let f : Fn→ Fm be a linear map. Then f = LC, where C = [ f ]^γ_β .

Proof.

(1). Let {e_1, . . . , e_n} be the standard basis for Fn. Let j ∈ {1, . . . , n}. By Lemma 10.4.3, we see that Ae_j = LA(e_j) = [LA(e_j)]_γ . Observe that Ae_j is the j-th column of A, and [LA(e_j)]_γ is the j-th column of [LA]^γ_β . Hence A = [LA]^γ_β .

(2). Let v ∈ Fn. Using Lemma 10.4.3 and Theorem 10.6.2, we see that f (v) = [ f (v)]_γ = [ f ]^γ_β [v]_β = Cv = LC(v). Hence f = LC.


10.7 Matrix Representation of Linear Maps—Isomorphisms

Friedberg-Insel-Spence, 4th ed. – Section 2.4

Theorem 10.7.1. Let V , W be vector spaces over a field F, and let f : V →W be a linear map. Suppose that V and W are finite-dimensional, and that dim(V ) = dim(W ). Let β be an ordered basis for V , and let γ be an ordered basis for W.

1. f is an isomorphism if and only if [ f ]^γ_β is invertible.

2. If f is an isomorphism, then [ f^{-1}]^β_γ = ([ f ]^γ_β )^{-1}.

Proof. Both parts of the theorem are proved together. Let n = dim(V ) = dim(W ).

Suppose that f is an isomorphism. By definition of inverse maps we know that f^{-1} ◦ f = 1_V and f ◦ f^{-1} = 1_W . By Lemma 9.4.3 we know that f^{-1} is a linear map. Hence, using Theorem 10.6.1 and Lemma 10.5.3 (4), we deduce that

[ f^{-1}]^β_γ [ f ]^γ_β = [ f^{-1} ◦ f ]^β_β = [1_V ]^β_β = [1_V ]_β = I_n.

A similar argument shows that

[ f ]^γ_β [ f^{-1}]^β_γ = I_n.

It follows that [ f ]^γ_β is invertible and ([ f ]^γ_β )^{-1} = [ f^{-1}]^β_γ .

Suppose that [ f ]^γ_β is invertible. Let A = [ f ]^γ_β . Then there is some B ∈ Mn×n(F) such that AB = I_n and BA = I_n. Suppose that B = [b_ij].

Suppose that β = {v_1, . . . , v_n} and that γ = {w_1, . . . , w_n}. By Theorem 9.1.6 (2) there is a unique linear map g : W → V such that g(w_i) = ∑_{k=1}^{n} b_ki v_k for all i ∈ {1, . . . , n}. Then by definition we have [g]^β_γ = B.

Using Theorem 10.6.1 and Lemma 10.5.3 (4), we deduce that

[g ◦ f ]^β_β = [g]^β_γ [ f ]^γ_β = BA = I_n = [1_V ]^β_β .

A similar argument shows that

[ f ◦ g]^γ_γ = [1_W ]^γ_γ .

It follows from Lemma 10.5.3 (1) that g ◦ f = 1_V and f ◦ g = 1_W . Hence f has an inverse, and it is therefore bijective. We conclude that f is an isomorphism.

Corollary 10.7.2. Let F be a field, and let n ∈ N. Let A ∈Mn×n(F).

1. A is invertible if and only if LA is an isomorphism.

2. If A is invertible, then (LA)^{-1} = L_{A^{-1}}.


Proof. Left to the reader in Exercise 10.7.3.

Exercises

Exercise 10.7.1. In this exercise, we will use the notation f (β ) = γ in the sense of ordered bases, so that f takes the first element of β to the first element of γ , the second element of β to the second element of γ , etc.

Let V , W be vector spaces over a field F , and let f : V →W be a linear map. Suppose that V and W are finite-dimensional.

(1) Let β be an ordered basis for V , let γ be an ordered basis for W . Then [ f ]^γ_β is the identity matrix if and only if f (β ) = γ .

(2) The map f is an isomorphism if and only if there is an ordered basis α for V and an ordered basis δ for W such that [ f ]^δ_α is the identity matrix.

Exercise 10.7.2. Let V , W be vector spaces over a field F , and let f : V →W be a linear map. Suppose that V and W are finite-dimensional. Let β be an ordered basis for V , and let γ be an ordered basis for W . Let A = [ f ]^γ_β .

(1) Prove that rank( f ) = rank(LA).

(2) Prove that nullity( f ) = nullity(LA).

Exercise 10.7.3. Prove Corollary 10.7.2.


10.8 Matrix Representation of Linear Maps—The Big Picture

Friedberg-Insel-Spence, 4th ed. – Section 2.4

Theorem 10.8.1. Let V , W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Let n = dim(V ) and let m = dim(W ). Let β be an ordered basis for V , and let γ be an ordered basis for W. Let Φ : L (V,W ) → Mm×n(F) be defined by Φ( f ) = [ f ]^γ_β for all f ∈ L (V,W ).

1. Φ is an isomorphism.

2. LΦ( f ) ◦ φ_β = φ_γ ◦ f for all f ∈ L (V,W ).

Proof.

(1). The fact that Φ is a linear map is just a restatement of Lemma 10.5.3 (2) and (3). We know by Theorem 9.5.6 that dim(L (V,W )) = nm. We also know that dim(Mm×n(F)) = nm. Hence dim(L (V,W )) = dim(Mm×n(F)). The fact that Φ is injective is just a restatement of Lemma 10.5.3 (1). It now follows from Corollary 9.4.4 that Φ is an isomorphism.

(2). Let f ∈ L (V,W ). Let v ∈ V . Using Theorem 10.6.2, we see that

(φ_γ ◦ f )(v) = φ_γ ( f (v)) = [ f (v)]_γ = [ f ]^γ_β [v]_β = Φ( f )φ_β (v) = LΦ( f )(φ_β (v)) = (LΦ( f ) ◦ φ_β )(v).

Hence LΦ( f ) ◦ φ_β = φ_γ ◦ f .

Remark 10.8.2. The equation LΦ( f ) ◦ φ_β = φ_γ ◦ f in Theorem 10.8.1 (2) is represented by the following commutative diagram, where "commutative" here means that going around the diagram either way yields the same result.

            f
    V ------------> W
    |               |
 φ_β |               | φ_γ
    v               v
    Fn -----------> Fm
          LΦ( f )


10.9 Matrix Representation of Linear Maps—Change of Basis

Friedberg-Insel-Spence, 4th ed. – Section 2.5

Lemma 10.9.1. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let β and β ′ be ordered bases for V .

1. [1_V ]^β_{β ′} is invertible.

2. If v ∈ V , then [v]_β = [1_V ]^β_{β ′} [v]_{β ′} .

Proof.

(1). We know that 1_V is an isomorphism, and therefore Theorem 10.7.1 (1) implies that [1_V ]^β_{β ′} is invertible.

(2). Let v ∈ V . Then 1_V (v) = v, and hence [1_V (v)]_β = [v]_β . It follows from Theorem 10.6.2 that [1_V ]^β_{β ′} [v]_{β ′} = [v]_β .
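For V = F^n the change of coordinate matrix can be computed explicitly: its j-th column is the coordinate vector, relative to β, of the j-th vector of β ′. The sketch below (our own, in Python with NumPy; the two bases of R^2 and the test vector are arbitrary choices) builds Q = [1_V ]^β_{β ′} and verifies Part (2) on a sample vector.

import numpy as np

beta   = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]   # ordered basis beta
beta_p = [np.array([2.0, 1.0]), np.array([0.0, 1.0])]   # ordered basis beta'

B = np.column_stack(beta)
# j-th column of Q = [1_V]^beta_{beta'} is [v'_j]_beta.
Q = np.column_stack([np.linalg.solve(B, v) for v in beta_p])

v = np.array([3.0, 2.0])
v_beta  = np.linalg.solve(B, v)                         # [v]_beta
v_betap = np.linalg.solve(np.column_stack(beta_p), v)   # [v]_beta'
print(np.allclose(v_beta, Q @ v_betap))                 # True: [v]_beta = Q [v]_beta'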

Definition 10.9.2. Let V be a vector space over a field F . Suppose that V is finite-dimensional. Let β and β ′ be ordered bases for V . The change of coordinate matrix (also called the change of basis matrix) that changes β ′-coordinates into β -coordinates is the matrix [1_V ]^β_{β ′} . 4

Lemma 10.9.3. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let α , β and γ be ordered bases for V . Let Q be the change of coordinate matrix that changes α-coordinates into β -coordinates, and let R be the change of coordinate matrix that changes β -coordinates into γ-coordinates.

1. RQ is the change of coordinate matrix that changes α-coordinates into γ-coordinates.

2. Q^{-1} is the change of coordinate matrix that changes β -coordinates into α-coordinates.

Proof. Left to the reader in Exercise 10.9.1.

Theorem 10.9.4. Let V , W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Let β and β ′ be ordered bases for V , and let γ and γ ′ be ordered bases for W. Let Q be the change of coordinate matrix that changes β ′-coordinates into β -coordinates, and let P be the change of coordinate matrix that changes γ ′-coordinates into γ-coordinates. If f : V →W is a linear map, then [ f ]^{γ ′}_{β ′} = P^{-1} [ f ]^γ_β Q.


Proof. Let f : V → W be a linear map. Observe that f = 1_W ◦ f ◦ 1_V . Then [ f ]^{γ ′}_{β ′} = [1_W ◦ f ◦ 1_V ]^{γ ′}_{β ′} . It follows from Theorem 10.6.1 that [ f ]^{γ ′}_{β ′} = [1_W ]^{γ ′}_γ [ f ]^γ_β [1_V ]^β_{β ′} . By Lemma 10.9.3, we deduce that [ f ]^{γ ′}_{β ′} = P^{-1} [ f ]^γ_β Q.

Corollary 10.9.5. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let β and β ′ be ordered bases for V . Let Q be the change of coordinate matrix that changes β ′-coordinates into β -coordinates. If f : V → V is a linear map, then [ f ]_{β ′} = Q^{-1} [ f ]_β Q.

Corollary 10.9.6. Let F be a field, and let n ∈ N. Let A ∈ Mn×n(F). Let γ = {v_1, . . . , v_n} be an ordered basis for Fn. Let Q ∈ Mn×n(F) be the matrix whose j-th column is v_j. Then [LA]_γ = Q^{-1}AQ.
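The corollary can be verified directly: with Q built from the basis vectors, Q^{-1}AQ has as its j-th column the coordinate vector of LA(v_j) relative to γ. The sketch below (ours, in Python with NumPy; the matrix A and the basis γ are arbitrary choices) checks this column by column.

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# An ordered basis gamma = {v_1, v_2} for F^2; Q has v_j as its j-th column.
Q = np.column_stack([np.array([1.0, 1.0]), np.array([1.0, -1.0])])

M = np.linalg.inv(Q) @ A @ Q        # the matrix [L_A]_gamma of Corollary 10.9.6

for j in range(2):
    coords = np.linalg.solve(Q, A @ Q[:, j])   # [L_A(v_j)]_gamma
    print(np.allclose(M[:, j], coords))        # True, True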

Definition 10.9.7. Let F be a field, and let n ∈ N. Let A,B ∈ Mn×n(F). The matrices A and B are similar if there is an invertible matrix Q ∈ Mn×n(F) such that A = Q^{-1}BQ. 4

Lemma 10.9.8. Let F be a field, and let n ∈ N. The relation of matrices being similar is an equivalence relation on Mn×n(F).

Proof. Left to the reader in Exercise 10.9.2.

Exercises

Exercise 10.9.1. Prove Lemma 10.9.3.

Exercise 10.9.2. Prove Lemma 10.9.8.