
Lie Algebras Solutions - Ayla P. Sánchez (andrew-sanchez.weebly.com/.../lie.algebras.solutions.pdf)

Lie Algebras Solutions

A.P. SanchezTufts University

May 6, 2014

Page 2: Lie Algebras Solutions - Ayla P. Sánchezandrew-sanchez.weebly.com/.../lie.algebras.solutions.pdfNow, we observe a similar argument: adding a linear combination of v n+1 L= v 0 n+1

Foreword

This solutions document is a companion to Lie Algebras by Fulton B. Gonzalez, as available at www.tufts.edu/~fgonzale/. These solutions were typed throughout a semester-long course using the version dated December 4th, 2007. Note that while there are 11 chapters, only the first 8 of them contain exercises. This document is intended for personal use, and I cannot guarantee the accuracy of every solution within.

The most recent version of these solutions can be found on my personal website, www.asanchezmath.com.


Chapter 1: Background Linear Algebra

1.1.2 (⇒) Say x ∈ U ∩ W. Then (−x) is in U ∩ W as well, since vector spaces are closed under scalar multiplication, and x + (−x) = 0. Since U + W is direct, we must have x = −x = 0, so U ∩ W = {0}.
(⇐) Say u + w = 0 for u ∈ U and w ∈ W. Then u = −w ∈ W, so u ∈ U ∩ W, thus u = 0. Then w = 0 follows, and therefore U + W is a direct sum.

1.1.4 Let {v_1, ..., v_k} be a basis of U ∩ W. Extend this to a basis {u_1, ..., u_n, v_1, ..., v_k} of U, and similarly extend it to a basis {v_1, ..., v_k, w_1, ..., w_m} of W. Then {u_1, ..., u_n, v_1, ..., v_k, w_1, ..., w_m} is a basis of U + W, so dim(U) = n + k, dim(W) = k + m, dim(U ∩ W) = k, and dim(U + W) = n + k + m. Hence

(n + k + m) = (n + k) + (k + m) − (k)

Note that if you have U ⊕ W, then by Exercise 1.1.2, k = 0.

1.3.1 Let B = {v_1, ..., v_n}, B′ = {w_1, ..., w_m}, and B′′ = {u_1, ..., u_k}, with T(v_j) = Σ_{h=1}^m a_{hj} w_h and S(w_h) = Σ_{i=1}^k b_{ih} u_i.

The (i,h) entry of M_{B′′B′}(S) is b_{ih}, and the (h,j) entry of M_{B′B}(T) is a_{hj}. Via matrix multiplication, the (i,j) entry of M_{B′′B′}(S) M_{B′B}(T) is Σ_{h=1}^m b_{ih} a_{hj}.

On the other hand, by composing first, we get

S ∘ T(v_j) = S(Σ_{h=1}^m a_{hj} w_h) = Σ_{h=1}^m a_{hj} S(w_h) = Σ_{h=1}^m Σ_{i=1}^k a_{hj} b_{ih} u_i = Σ_{i=1}^k (Σ_{h=1}^m b_{ih} a_{hj}) u_i

Thus the (i,j) entry of M_{B′′B}(S ∘ T) is Σ_{h=1}^m b_{ih} a_{hj}. Since they are equal in all entries, M_{B′′B}(S ∘ T) = M_{B′′B′}(S) M_{B′B}(T).
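As an illustrative numeric check (my addition, not part of the original solutions; it assumes numpy), random matrices for S and T confirm that applying T and then S agrees with the product of their matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 4, 3, 5                    # dim V, dim W, dim U
MT = rng.standard_normal((m, n))     # matrix of T : V -> W in some bases
MS = rng.standard_normal((k, m))     # matrix of S : W -> U

# Applying T then S to a vector equals applying the product matrix MS @ MT.
v = rng.standard_normal(n)
assert np.allclose(MS @ (MT @ v), (MS @ MT) @ v)
```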


1.3.4 1) T : V → W injective ⇔ ᵗT : W* → V* surjective.

(⇒) Let f ∈ V*. Take a basis {v_1, ..., v_n} of V. Since T is injective, these map to n linearly independent vectors {T(v_1), ..., T(v_n)} in W. Extend this to a basis {T(v_1), ..., T(v_n), w_1, ..., w_k} of W. For all w ∈ W,

w = Σ_{i=1}^n a_i T(v_i) + Σ_{j=1}^k b_j w_j = T(Σ_{i=1}^n a_i v_i) + Σ_{j=1}^k b_j w_j

Let v be the unique vector Σ_{i=1}^n a_i v_i in V and w′ the unique vector Σ_{j=1}^k b_j w_j in span{w_1, ..., w_k}; then w = T(v) + w′.

Define g : W → F by g(w) = g(T(v) + w′) = f(v). Since the decomposition into T(v) and w′ is unique, this is a well-defined linear functional. Then ᵗT(g)(v) = g ∘ T(v) = g(T(v) + 0) = f(v), so ᵗT(g) = f. Therefore ᵗT is surjective.

(⇐) Say ker T ≠ {0}; then there exists v ∈ ker T, v ≠ 0, such that T(v) = 0. From v we can get a basis {v, v_2, ..., v_n} of V.

Let f(v) = 1, and extend f linearly over this basis to get f ∈ V*. Since ᵗT is surjective, there exists g ∈ W* such that f = ᵗT(g). But then

1 = f(v) = ᵗT(g)(v) = g ∘ T(v) = g(0) = 0

This is a contradiction, so such a v cannot exist; thus ker T = {0}, and therefore T is injective.

2) Define (T(V))^⊥ := {f ∈ W* : f ∘ T(v) = 0 for all v ∈ V}. Now f ∈ ker ᵗT means ᵗT(f) = f ∘ T = 0, i.e. f ∘ T(v) = 0 for all v ∈ V, and thus ker ᵗT = (T(V))^⊥.

If {v_1, ..., v_n} is a basis of V, then {T(v_1), ..., T(v_n)} is a spanning set of T(V); by throwing out finitely many T(v_i)'s, we can make it a basis of T(V). Extend this to a basis {T(v_1), ..., T(v_{n′}), w_1, ..., w_k} of W and take the dual basis {α_1, ..., α_{n′}, β_1, ..., β_k} of W*. Notice the following: β_j(T(v_i)) = 0, so β_j ∈ (T(V))^⊥. Since the T(v_i) span T(V), the β_j must span (T(V))^⊥, and we have

dim W* = dim T(V) + dim (T(V))^⊥

From rank-nullity applied to ᵗT : W* → V*, we have dim ᵗT(W*) = dim W* − dim ker(ᵗT), so

dim ᵗT(W*) = dim W* − (dim W* − dim T(V)) = dim T(V)


3) An m × n matrix A can be thought of as the matrix of a linear map T : R^n → R^m in the standard bases.

The dimension of the column space of A is the same as the dimension of the image of T.

The dimension of the row space of A is the dimension of the column space of the transpose of A. We know that M_{B′*,B*}(ᵗT) = ᵗ(M_{B,B′}(T)), so the dimension of the row space is actually the dimension of the image of ᵗT.

By part (2), these two images have the same dimension, so the column space dimension must equal the row space dimension.

1.4.1 First we note that n linearly dependent vectors enclose no n-dimensional volume and also have zero determinant, so we can assume {v_1, ..., v_n} is linearly independent.

Let A be the matrix obtained by taking v_1 through v_n as columns of A.

We prove the claim by induction on the dimension. Assume the determinant of n vectors is the volume of the n-parallelepiped whose sides are the column vectors, and let {v_1, ..., v_{n+1}} be the vectors along the sides of an (n+1)-parallelepiped.

The volume of the (n+1)-parallelepiped is the volume of the base times the height. Here the base is the n-dimensional parallelepiped given by {v_1, ..., v_n}, and the height is the length of the projection of v_{n+1} onto the vector n normal to the hyperplane spanned by {v_1, ..., v_n}.

First we note that adding a linear combination of the v_i to v_{n+1} will not change the height, for the following reason: the height h is the length of the projection of v_{n+1} onto n, and the projection of any v_i onto n must be 0, since n is normal to the hyperplane; thus the length of the projection of v_{n+1} − v_i onto n is still h − 0 = h.

From this, with appropriate switching of rows, we can subtract a linear combination L of v_1, ..., v_n from v_{n+1} such that v_{n+1} − L is a vector of all 0's except for a single component. This component must be h, since v_{n+1} − L points along n and the length of its projection onto n is h.

det[v_1 ... v_{n+1}] = det[v_1, ..., v_n, (v_{n+1} − L)] = det
⎡ |        |   0 ⎤
⎢ v_1 ⋯ v_n   ⋮ ⎥
⎢ |        |   0 ⎥
⎣ |        |   h ⎦

Now, we observe a similar argument: adding a multiple of v′_{n+1} := v_{n+1} − L to the v_i will not change the volume, because v′_{n+1}'s remaining entry is in the direction of n, and hence adding v′_{n+1} to the v_i is just a translation of the base hyperplane.

We can use this to subtract a multiple of v′_{n+1} from each v_i such that the last entry of v′_i = v_i − K_i v′_{n+1} is 0:

det[(v_1 − K_1 v′_{n+1}), ..., (v_n − K_n v′_{n+1}), (v_{n+1} − L)] = det
⎡ |         |   0 ⎤
⎢ v′_1 ⋯ v′_n  ⋮ ⎥
⎢ |         |   0 ⎥
⎣ 0   ⋯   0    h ⎦

Finally, note that {v_1, ..., v_n} and {v′_1, ..., v′_n} yield parallelepipeds of the same volume, since they are translates of each other. Evaluating the determinant above by expanding along the last column gives

h det[v′_1 ⋯ v′_n] = h × Vol(v_1, ..., v_n) = Vol(v_1, ..., v_{n+1})

Therefore, by induction, the determinant of the vectors yields the volume of the parallelepiped.
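The base-times-height recursion in the proof can be checked numerically. The sketch below (my addition, assuming numpy; `parallelepiped_volume` is a hypothetical helper, not from the text) computes each height as the component of the last vector orthogonal to the span of the others, and compares against |det|:

```python
import numpy as np

def parallelepiped_volume(vectors):
    """Volume via the base-times-height recursion used in the inductive proof."""
    vectors = list(vectors)
    if len(vectors) == 1:
        return np.linalg.norm(vectors[0])
    *base, last = vectors
    A = np.column_stack(base)
    # Height = length of the component of `last` orthogonal to span(base).
    coeffs, *_ = np.linalg.lstsq(A, last, rcond=None)
    height = np.linalg.norm(last - A @ coeffs)
    return parallelepiped_volume(base) * height

rng = np.random.default_rng(3)
V = rng.standard_normal((4, 4))
vol = parallelepiped_volume(V.T)          # columns of V as the spanning vectors
assert np.isclose(vol, abs(np.linalg.det(V)))
```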


1.5.3 ker S is T-invariant: say x ∈ ker S, so that S(x) = 0. Then 0 = T(0) = T ∘ S(x) = S ∘ T(x), so T(x) ∈ ker S.

S(V) is T-invariant: say x ∈ S(V), so there exists v such that S(v) = x. Then T(x) = T ∘ S(v) = S ∘ T(v), so T(x) ∈ S(V).

1.6.4 Cayley-Hamilton implies that f(T) = 0 for the characteristic polynomial f(z) = ∏_{i=1}^n (z − λ_i), where λ_1, ..., λ_n are the eigenvalues counted with multiplicity. Distributing the product gives

f(z) = z^n + (−λ_1 − ⋯ − λ_n) z^{n−1} + ⋯ + c z + (−λ_1)⋯(−λ_n)

Notice the product of the eigenvalues is the determinant and the sum of the eigenvalues is the trace; to simplify, we write them as such. Plugging in T and rearranging gives

0 = f(T) = T^n − Tr(T) T^{n−1} + ⋯ + c T + (−1)^n (det T) I

I = ((−1)^{n+1}/det T)(T^n − Tr(T) T^{n−1} + ⋯ + c T)

I = T ( ((−1)^{n+1}/det T) T^{n−1} + ((−1)^{n+2} Tr(T)/det T) T^{n−2} + ⋯ + ((−1)^{n+1} c/det T) I )

so that

((−1)^{n+1}/det T) T^{n−1} + ((−1)^{n+2} Tr(T)/det T) T^{n−2} + ⋯ + ((−1)^{n+1} c/det T) I = T^{−1}

Let P(z) = ((−1)^{n+1}/∏ λ_i) z^{n−1} + ((−1)^{n+2} Σ λ_i/∏ λ_i) z^{n−2} + ⋯ + ((−1)^{n+1} c/∏ λ_i). Then P(z) is a polynomial such that P(T) = T^{−1}.
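As an illustrative numeric check (my addition, not part of the original solution; it assumes numpy, whose `np.poly` returns the characteristic-polynomial coefficients), the inverse can be assembled as a polynomial in T:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # invertible, eigenvalues 2 and 3
n = T.shape[0]
c = np.poly(T)                       # [1, c_1, ..., c_n] for z^n + c_1 z^(n-1) + ... + c_n

# p(T) = 0  =>  T (T^(n-1) + c_1 T^(n-2) + ... + c_(n-1) I) = -c_n I
B = np.zeros_like(T)
for coef in c[:-1]:                  # Horner evaluation, skipping the constant term
    B = B @ T + coef * np.eye(n)
T_inv = -B / c[-1]

assert np.allclose(T_inv, np.linalg.inv(T))
```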

1.6.5 Let k and h be the exponents of (z − 4) and (z − 5) in f(z), the characteristic polynomial of T. Then f(T) = (T − 4I_V)^k (T − 5I_V)^h = 0.

Since V is n-dimensional, T can only have n eigenvalues counting multiplicity. Since 4 and 5 are both eigenvalues, we must have h ≤ n − 1 and k ≤ n − 1, so f(z) divides (z − 4)^{n−1}(z − 5)^{n−1}. Therefore (T − 4I_V)^{n−1}(T − 5I_V)^{n−1} = f(T) q(T) = 0 for some polynomial q(z).


1.7.3 Let v ∈ V. For any v′ ∈ V, v = (v − T^n v′) + T^n v′. Since T^n v′ ∈ T^n(V) for all v′, we need only find a v′ such that v − T^n v′ ∈ ker T^n. For this to be true, we must have T^n(v − T^n v′) = T^n v − T^{2n} v′ = 0; equivalently, we need a v′ such that T^n v = T^{2n} v′. Since n = dim V, T^n(V) = T^{2n}(V), so there must be such a v′.

The sum has to be a direct sum due to Exercise 1.1.4 and rank-nullity.

1.7.5 First we note that if ker T^{i−1} ⊊ ker T^i, then there exists a vector v such that T^{i−1} v ≠ 0 but T^i v = 0, so the dimension of ker T^i is at least one more than that of ker T^{i−1}. Since this holds along the entire chain, the dimension of ker T^{n−1} is at least (n − 1).

That means there are (n − 1) linearly independent vectors v with T^{n−1} v = 0, i.e. the eigenvalue λ = 0 has multiplicity at least (n − 1). Since the dimension of V is n, there can be at most one more eigenvalue, so T has at most two distinct eigenvalues.

1.8.1 (⇒) If N is nilpotent, then there exists an m such that N^m = 0. Say λ is an eigenvalue of N, with eigenvector v ≠ 0 satisfying Nv = λv. Applying N m times gives the following:

0 = 0v = N^m v = N^{m−1}(Nv) = λ N^{m−1} v = λ N^{m−2}(Nv) = λ^2 N^{m−2} v = ⋯ = λ^m v

So we have λ^m = 0, hence λ = 0.

(⇐) Since the only eigenvalue of N is 0, the characteristic polynomial of N is p(z) = z^n. By the Cayley-Hamilton theorem, p(N) = N^n = 0, making N nilpotent.
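A quick numeric illustration of both directions (my addition, assuming numpy): a strictly upper triangular matrix is nilpotent and has all eigenvalues 0.

```python
import numpy as np

# Strictly upper triangular => nilpotent; its eigenvalues are all 0.
N = np.triu(np.ones((4, 4)), k=1)

eig = np.linalg.eigvals(N)
assert np.allclose(eig, 0)

# N^n = 0 for n = dim V (here the 4th power vanishes).
assert np.allclose(np.linalg.matrix_power(N, 4), 0)
```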

1.9.4 Characteristic polynomial: (x − λ_1)^3 (x − λ_2)^3. Minimal polynomial: (x − λ_1)^3 (x − λ_2)^2.

1.9.5 Let r′ = min{r : (T − λ_i I_V)^r |_{V_i} = 0}.

Say r′ > r_i. Then ker(T − λ_i I_V)^{r_i} ⊊ ker(T − λ_i I_V)^{r′}, so there exists v ∈ V_i, v ≠ 0, such that (T − λ_i I_V)^{r_i} v ≠ 0.

Recall that 0 = P_min(T) = ∏_j (T − λ_j I_V)^{r_j}, so the nonzero vector (T − λ_i I_V)^{r_i} v, which lies in V_i, is killed by ∏_{j≠i} (T − λ_j I_V)^{r_j}. This would place it in the sum of the other generalized eigenspaces. But V is the direct sum of the generalized eigenspaces, so a vector of V_i cannot be contained in another generalized eigenspace without being the zero vector, and we have a contradiction.

Now say r′ < r_i. Then (T − λ_i I_V)^{r′}|_{V_i} = 0, so replacing (z − λ_i)^{r_i} by (z − λ_i)^{r′} in P_min(z) yields a polynomial of lower degree annihilating T. This contradicts P_min(z) being minimal.

Therefore r′ = r_i.

1.9.6 Since V is the direct sum of generalized eigenspaces, v ∈ ker(T − λI_V)^m for some eigenvalue λ. This means there exists l ≤ m such that (T − λI_V)^l v = 0 but (T − λI_V)^{l−1} v ≠ 0.

Define s(z) = (z − λ)^l. Then s(T)v = 0 and s(z) is monic. Any monic polynomial (over C) is a product of linear factors, and since the generalized eigenspaces intersect only trivially, no linear factor other than (z − λ) can help annihilate v. Since l is the smallest exponent for which v is in the kernel of (T − λI_V)^l, s(z) is of lowest degree among monic polynomials annihilating v.

Let P_min(z) be the minimal polynomial of T. By the division algorithm, there exist q(z) and r(z) such that P_min(z) = s(z)q(z) + r(z) with the degree of r(z) strictly less than the degree of s(z). Since P_min(T)v = 0 and s(T)v = 0, we must have r(T)v = 0. If r(z) ≠ 0, this contradicts s(z) being of smallest degree.

Thus P_min(z) = s(z)q(z), so s(z) divides the minimal polynomial.

1.9.7
⎡1 0 0 0⎤
⎢0 1 0 0⎥
⎢0 0 2 0⎥
⎣0 0 0 0⎦

1.9.12 This can be proved by induction, with Prop 1.9.11 as the base case.

Say there exists a basis in which S_1, ..., S_k are all diagonal. In that basis, S′ = S_1 S_2 ⋯ S_k is a product of diagonal matrices and thus must be diagonal, hence S′ is a diagonalizable operator. Since the S_i all pairwise commute, we notice the following:

S_{k+1} S′ = S_{k+1} S_1 S_2 ⋯ S_k = S_1 S_{k+1} S_2 ⋯ S_k = ⋯ = S_1 S_2 ⋯ S_k S_{k+1} = S′ S_{k+1}

So S_{k+1} and S′ are commuting diagonalizable operators, and there must be a basis that diagonalizes both of them by Prop 1.9.11. Since diagonalization is unique, the diagonalization of S′ in this basis must be the same as the one from our hypothesis, which means this basis must also diagonalize S_1 through S_k.


1.10.3 Let B = {b_1, ..., b_n} be a basis for V. Then T(b_j) = Σ_{i=1}^n a_{ij} b_i and v = Σ_{j=1}^n v_j b_j.

So the (i,j) entry of M_{B,B}(T) is a_{ij}, and the j-th component of [v]_B is v_j. This means the i-th component of M_{B,B}(T)[v]_B is Σ_{j=1}^n a_{ij} v_j.

T(v) = T(Σ_{j=1}^n v_j b_j) = Σ_{j=1}^n v_j T(b_j) = Σ_{j=1}^n Σ_{i=1}^n v_j a_{ij} b_i = Σ_{i=1}^n (Σ_{j=1}^n a_{ij} v_j) b_i

Then the i-th component of [Tv]_B is Σ_{j=1}^n a_{ij} v_j. Since they are equal componentwise, [Tv]_B = M_{B,B}(T)[v]_B.


Chapter 2: Lie Algebras: Definition and Basic Properties

2.1.5 a × (b × c) = a × ((b_2c_3 − b_3c_2)i + (b_3c_1 − b_1c_3)j + (b_1c_2 − b_2c_1)k)

The i-th component of this is a_2(b_1c_2 − b_2c_1) − a_3(b_3c_1 − b_1c_3) = a_2b_1c_2 − a_2b_2c_1 − a_3b_3c_1 + a_3b_1c_3 = b_1(a_2c_2 + a_3c_3) − c_1(a_2b_2 + a_3b_3). Adding and subtracting a_1b_1c_1, we have

b_1(a_1c_1 + a_2c_2 + a_3c_3) − c_1(a_1b_1 + a_2b_2 + a_3b_3) = b_1(a · c) − c_1(a · b)

Similarly, the j-th component is a_3(b_2c_3 − b_3c_2) − a_1(b_1c_2 − b_2c_1) = a_3b_2c_3 − a_3b_3c_2 − a_1b_1c_2 + a_1b_2c_1 = b_2(a_1c_1 + a_3c_3) − c_2(a_1b_1 + a_3b_3). Adding and subtracting a_2b_2c_2, we have

b_2(a_1c_1 + a_2c_2 + a_3c_3) − c_2(a_1b_1 + a_2b_2 + a_3b_3) = b_2(a · c) − c_2(a · b)

And thirdly, the k-th component is a_1(b_3c_1 − b_1c_3) − a_2(b_2c_3 − b_3c_2) = a_1b_3c_1 − a_1b_1c_3 − a_2b_2c_3 + a_2b_3c_2 = b_3(a_1c_1 + a_2c_2) − c_3(a_1b_1 + a_2b_2). Adding and subtracting a_3b_3c_3, we have

b_3(a_1c_1 + a_2c_2 + a_3c_3) − c_3(a_1b_1 + a_2b_2 + a_3b_3) = b_3(a · c) − c_3(a · b)

Thus we have a × (b × c) = (a · c)b − (a · b)c. If we switch letters around, we have b × (c × a) = (b · a)c − (b · c)a, and lastly c × (a × b) = (c · b)a − (c · a)b. Then

a × (b × c) + b × (c × a) + c × (a × b)
= (a · c)b − (a · b)c + (b · a)c − (b · c)a + (c · b)a − (c · a)b
= [(a · c)b − (c · a)b] + [(b · a)c − (a · b)c] + [(c · b)a − (b · c)a] = 0

Thus the cross product satisfies the Jacobi identity.

a × a = (a_2a_3 − a_3a_2)i + (a_3a_1 − a_1a_3)j + (a_1a_2 − a_2a_1)k = 0, so it is alternating.


(αa + βc) × b = det
⎡      i             j             k       ⎤
⎢ αa_1 + βc_1   αa_2 + βc_2   αa_3 + βc_3 ⎥
⎣      b_1           b_2           b_3     ⎦

= ((αa_2 + βc_2)b_3 − (αa_3 + βc_3)b_2)i + ((αa_3 + βc_3)b_1 − (αa_1 + βc_1)b_3)j + ((αa_1 + βc_1)b_2 − (αa_2 + βc_2)b_1)k

= α[(a_2b_3 − a_3b_2)i + (a_3b_1 − a_1b_3)j + (a_1b_2 − a_2b_1)k] + β[(c_2b_3 − c_3b_2)i + (c_3b_1 − c_1b_3)j + (c_1b_2 − c_2b_1)k]

= α(a × b) + β(c × b)

And thus the cross product is linear in the first slot; by the alternating property it is linear in the second slot as well.

Therefore (R^3, ×) is a Lie algebra.
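The triple product expansion and the Jacobi identity above can be spot-checked numerically (my addition, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))

# Triple product expansion: a x (b x c) = (a.c) b - (a.b) c
lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c
assert np.allclose(lhs, rhs)

# Jacobi identity for the cross product
jac = (np.cross(a, np.cross(b, c))
       + np.cross(b, np.cross(c, a))
       + np.cross(c, np.cross(a, b)))
assert np.allclose(jac, np.zeros(3))
```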

2.1.9 (D_1 + D_2)(a.b) = D_1(a.b) + D_2(a.b) = D_1(a).b + a.D_1(b) + D_2(a).b + a.D_2(b) = (D_1(a) + D_2(a)).b + a.(D_1(b) + D_2(b)) = ((D_1 + D_2)(a)).b + a.((D_1 + D_2)(b)), thus D_1 + D_2 ∈ Der A.

λD(a.b) = λ(D(a).b + a.D(b)) = (λD)(a).b + a.(λD)(b), thus λD ∈ Der A.

2.1.20 Consider the 2n × 2n matrix X = [A B ; C D] in block form.

X ∈ sp(n,F) iff J_n(ᵗX)J_n^{−1} = −X holds. Note that J_n^{−1} = −J_n = ᵗJ_n, and observe:

J_n(ᵗX)J_n^{−1} = [0 I_n ; −I_n 0] [ᵗA ᵗC ; ᵗB ᵗD] [0 −I_n ; I_n 0] = [ᵗB ᵗD ; −ᵗA −ᵗC] [0 −I_n ; I_n 0] = [ᵗD −ᵗB ; −ᵗC ᵗA]

Setting this equal to −X = [−A −B ; −C −D], we get D = −ᵗA, B = ᵗB, and C = ᵗC.
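The block conditions can be verified numerically (my addition, assuming numpy): build X with D = −ᵗA and symmetric B, C, and check the defining relation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)); B = B + B.T      # symmetric
C = rng.standard_normal((n, n)); C = C + C.T      # symmetric

X = np.block([[A, B], [C, -A.T]])                 # candidate sp(n) element
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# Defining condition: J (tX) J^(-1) = -X
assert np.allclose(J @ X.T @ np.linalg.inv(J), -X)
```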


2.1.25 For sl(n,F): Tr(ᵗX) = Tr(X) = 0. Thus ᵗX satisfies the condition, hence sl(n,F) is invariant.

For so(n,F): ᵗX = −X, and applying ᵗ gives ᵗ(ᵗX) = ᵗ(−X) = −(ᵗX). Thus ᵗX satisfies the condition, hence so(n,F) is invariant.

For sp(n,F): J_n(ᵗX)J_n^{−1} = −X, and using ᵗJ_n = J_n^{−1},

J_n(ᵗ(ᵗX))J_n^{−1} = J_n X J_n^{−1} = ᵗ(J_n(ᵗX)J_n^{−1}) = ᵗ(−X) = −ᵗX

Thus ᵗX satisfies the condition, hence sp(n,F) is invariant.

For u(n): (X*)* = (−X)* = −X*. Thus X* satisfies the condition, hence u(n) is invariant.

For su(n): Tr(X*) is the conjugate of Tr(X), hence 0. Combined with the above, X* satisfies the condition, hence su(n) is invariant.

For u(p,q): I_{p,q}(X*)I_{p,q} = −X, and noting I*_{p,q} = I_{p,q},

I_{p,q}((X*)*)I_{p,q} = (I_{p,q}(X*)I_{p,q})* = (−X)* = −X*

Thus X* satisfies the condition, hence u(p,q) is invariant.

For su(p,q): the above condition combined with Tr(X*) = 0 gives that su(p,q) is invariant.


Chapter 3: Basic Algebraic Facts

3.2.2 Say g is not abelian. Let {A, B} be a basis for g, and write [A, B] = C = aA + bB ≠ 0. Then

[A, C] = [A, aA + bB] = a[A, A] + b[A, B] = b(aA + bB) = bC

Assuming b ≠ 0 (if b = 0, then a ≠ 0 and [B, C] = −aC, so swap the roles of A and B), consider X = (1/b)A and Y = (1/b)C.

[X, Y] = [(1/b)A, (1/b)C] = (1/b^2)[A, C] = (1/b^2) bC = (1/b)C = Y

Hence, we have a basis {X, Y} of g such that [X, Y] = Y.

3.2.3 Elements of so(3) are of the form

X = ⎡  0    a_1   a_3 ⎤
    ⎢ −a_1   0    a_2 ⎥
    ⎣ −a_3  −a_2   0  ⎦

so we consider the map φ that sends such an element to (a_1, a_2, a_3). Clearly φ is linear and bijective. For X, Y as above (with entries a_i and b_i respectively),

XY = ⎡ −a_1b_1 − a_3b_3    −a_3b_2             a_1b_2            ⎤
     ⎢ −a_2b_3             −a_1b_1 − a_2b_2    −a_1b_3           ⎥
     ⎣  a_2b_1             −a_3b_1             −a_3b_3 − a_2b_2  ⎦

and

[X, Y] = XY − YX = ⎡  0                   a_2b_3 − a_3b_2     a_1b_2 − a_2b_1 ⎤
                   ⎢ −(a_2b_3 − a_3b_2)   0                   a_3b_1 − a_1b_3 ⎥
                   ⎣ −(a_1b_2 − a_2b_1)  −(a_3b_1 − a_1b_3)   0               ⎦

Under φ, this maps exactly to (a_1, a_2, a_3) × (b_1, b_2, b_3). Therefore φ([X, Y]) = [φ(X), φ(Y)], and φ is a Lie algebra isomorphism.
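The isomorphism can be spot-checked numerically (my addition, assuming numpy; `to_so3` and `phi` are hypothetical helper names):

```python
import numpy as np

def to_so3(a):
    """The matrix form used above: the inverse of phi applied to (a1, a2, a3)."""
    a1, a2, a3 = a
    return np.array([[0.0,  a1,  a3],
                     [-a1, 0.0,  a2],
                     [-a3, -a2, 0.0]])

def phi(X):
    return np.array([X[0, 1], X[1, 2], X[0, 2]])

rng = np.random.default_rng(4)
a, b = rng.standard_normal((2, 3))
X, Y = to_so3(a), to_so3(b)

# phi([X, Y]) = phi(X) x phi(Y)
assert np.allclose(phi(X @ Y - Y @ X), np.cross(a, b))
```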


3.2.8 Let g ∈ g. Clearly g = (g + φ(g))/2 + (g − φ(g))/2, and

φ((g + φ(g))/2) = (1/2)(φ(g) + φ(φ(g))) = (1/2)(φ(g) + g) = (g + φ(g))/2

φ((g − φ(g))/2) = (1/2)(φ(g) − φ(φ(g))) = (1/2)(φ(g) − g) = −(g − φ(g))/2

Thus (g + φ(g))/2 ∈ h and (g − φ(g))/2 ∈ q, and we have a decomposition g = h + q.

Say h ∈ h ∩ q; then h = φ(h) = −h, so h = 0. Thus h ∩ q = {0}, making our sum direct as per 1.1.2, so g = h ⊕ q.

φ([h, h′]) = [φ(h), φ(h′)] = [h, h′], thus [h, h] ⊂ h
φ([q, q′]) = [φ(q), φ(q′)] = [−q, −q′] = [q, q′], thus [q, q] ⊂ h
φ([h, q]) = [φ(h), φ(q)] = [h, −q] = −[h, q], thus [h, q] ⊂ q

3.4.7 (⇒) Say g is an ideal of m. We must show m = g ⊕ h for some ideal h.

Define h = g^⊥ := {m ∈ m : [m, g] = 0 for all g ∈ g}. For any z ∈ m, ad(z)|_g is a derivation of g. Since g is complete, this implies ad(z)|_g = ad(x) for some x ∈ g. Then ad(z − x)|_g = 0, so z − x ∈ h. Letting y = z − x, every z ∈ m is z = x + y with x ∈ g and y ∈ h. Therefore m = g + h.

Say x ∈ g ∩ h. Then [x, g] = 0 for all g ∈ g, and x ∈ g, so x ∈ c(g). Since c(g) = {0}, g ∩ h = {0}, so the sum is direct.

(⇐) To show g is complete, we must show ad(g) = Der(g). We always have that ad(g) ⊂ Der(g) is an ideal. Note that ad(g) ≅ g (as c(g) = {0}), so ad(g) is an ideal of Der(g), and hence Der(g) = ad(g) ⊕ h for some ideal h by hypothesis.

Say D ∈ h. Then for every x ∈ g, 0 = [ad x, D] = −ad(Dx), so ad(Dx) = 0 and Dx commutes with everything. Since c(g) = {0}, Dx = 0 for all x, which implies D = 0.

Thus h = {0}, therefore ad(g) = Der(g).


3.4.8 This question is difficult. If you have an idea on how to solve it, please sendme an email.


Chapter 4: Solvable Lie Algebras and Lie's Theorem

4.1.8 (⇒) The derived series from Definition 4.1.5 is such a sequence, and the abelian condition holds by Theorem 4.1.2.

(⇐) From Theorem 4.1.2, g/g_1 abelian implies g^(1) ⊂ g_1. Similarly, g_i/g_{i+1} abelian implies g^(i+1) ⊂ g_{i+1} for all i. Thus g_{m−1}/g_m being abelian implies g^(m) ⊂ g_m = {0}, so g is solvable.

4.1.15 Call the map f. The image of f is by definition U′ + U′′. Since the intersection is {0}, by Exercise 1.1.2 the sum is direct. Thus all that remains to be shown is that the map is linear:

f(k(u′, u′′) + (v′, v′′)) = f((ku′ + v′, ku′′ + v′′)) = ku′ + v′ + ku′′ + v′′ = k(u′ + u′′) + (v′ + v′′) = k f(u′, u′′) + f(v′, v′′)

4.1.16 Linearity in the first slot:

[c(x1, y1) + (x3, y3), (x2, y2)] = [(cx1 + x3, cy1 + y3), (x2, y2)]

= ([cx1 + x3, x2]g, [cy1 + y3, y2]h) = (c[x1, x2]g + [x3, x2]g, c[y1, y2]h + [y3, y2]h)

= c([x1, x2]g, [y1, y2]h) + ([x3, x2]g, [y3, y2]h)

Linearity in the second slot is similar.

Anticommutative:

[(x1, y1), (x1, y1)] = ([x1, x1]g, [y1, y1]h) = (0, 0)

Jacobi identity: We seek to prove the following:

[(x1, y1), [(x2, y2), (x3, y3)]] + [(x2, y2), [(x3, y3), (x1, y1)]] + [(x3, y3), [(x1, y1), (x2, y2)]] = 0

To do this, we look at the first expression and notice the following:

[(x1, y1), [(x2, y2), (x3, y3)]] = [(x1, y1), ([x2, x3]g, [y2, y3]h)] = ([x1, [x2, x3]g]g, [y1, [y2, y3]h]h)


So if we compute the other two terms similarly and add them together, the first slot is [x1, [x2, x3]g]g + [x3, [x1, x2]g]g + [x2, [x3, x1]g]g, which must be 0 since it is the Jacobi identity on g.

By the same argument, the second slot is [y1, [y2, y3]h]h + [y3, [y1, y2]h]h + [y2, [y3, y1]h]h, which must be 0 since it is the Jacobi identity on h.

4.1.17 The Lie bracket on g is the Lie bracket of sl(2,C) in each of the components, so an ideal in g must be the product of two ideals in sl(2,C). However, by Example 3.3.7, sl(2,C) is simple, and thus has no nonzero proper ideals. Therefore, the only ideals of g are g, {(0, 0)}, sl(2,C) × {0}, and {0} × sl(2,C).

Consider the ideal h = sl(2,C) × {0} ⊂ g. We can check that it is indeed an ideal: for any (x1, y1) ∈ g and (x2, 0) ∈ h, we have [(x1, y1), (x2, 0)] = ([x1, x2], [y1, 0]) = ([x1, x2], 0) ∈ h. So h is a nonzero proper ideal of g, and g cannot be simple.

Since sl(2,C) is simple, it is not solvable as per the remark on the bottom of page 60, hence sl(2,C) × {0} and {0} × sl(2,C) cannot be solvable, making {(0, 0)} the only solvable ideal of g. This means the radical of g must be the zero ideal, making g semisimple.

4.1.19 Say h ⊂ g is a semisimple subalgebra. By the observations on the bottom of page 62, any subalgebra of a solvable Lie algebra is solvable, so h is solvable. But h is then a solvable ideal of itself, and by the definition of semisimple the maximal solvable ideal of h is {0}; thus h = {0}.


Chapter 5: Nilpotent Lie Algebras and Engel's Theorem

5.1.5 (⇒) If g is nilpotent, the descending central series itself is such a sequence.

(⇐) First we note that g_0 = C^0, and say C^i ⊂ g_i for induction. Then C^{i+1} = [C^i, g] ⊂ [g_i, g]. Since g_i/g_{i+1} ⊂ c(g/g_{i+1}), we have [g_i/g_{i+1}, g/g_{i+1}] = 0, meaning [g_i, g] ⊂ g_{i+1}. Then C^{i+1} ⊂ g_{i+1}.

If g_m = {0} for some m, it must be that C^m = {0}, hence g is nilpotent.

5.1.6 First we prove g^(i) ⊂ C^i(g) for all i. The base case is C^1(g) = g^(1). Assume g^(i) ⊂ C^i(g) for induction. Since g^(i) ⊂ g and g^(i) ⊂ C^i(g), we get [g^(i), g^(i)] ⊂ [C^i(g), g]. By definition, C^{i+1}(g) = [C^i(g), g] and g^(i+1) = [g^(i), g^(i)], so g^(i+1) ⊂ C^{i+1}(g), as desired.

If g is nilpotent, then for some m, C^m(g) = {0}. Since g^(m) ⊂ C^m(g), we have g^(m) = {0}, making g solvable.

5.1.8 T_n(F): First note that the identity matrix is upper triangular and commutes with everything in T_n(F); moreover, scalar multiples of the identity (scalar matrices) also commute with everything in T_n(F). We show this is all. Say X ∈ c(T_n(F)).

For i < j, consider the diagonal matrix Λ whose only nonzero entries are λ_i ≠ λ_j in the (i,i) and (j,j) slots respectively. The (i,j) entry of [X, Λ] is (λ_j − λ_i)X_{ij}. This, however, must be zero, so X_{ij} = 0.

This means X has nonzero entries only along the diagonal. To prove they are all equal, consider that X must commute with the matrices E_{ij}, which have a single 1 in the (i,j) slot.

The (i,j) entry of [X, E_{ij}] is X_{ii} − X_{jj}. This, however, must be zero, so X_{ii} = X_{jj}.


Therefore X is a scalar matrix. The space of scalar matrices is of dimension 1,so then Tn(F) has center of dimension 1.

U_n(F): First note that matrices whose only nonzero entry is in the upper right corner commute with everything in U_n(F), as multiplying one with any strictly upper triangular matrix yields the zero matrix. We show this is all. Say X ∈ c(U_n(F)), and let i < j with (i,j) ≠ (1,n). If j < n, the (i,n) entry of [X, E_{jn}] is X_{ij}; if i > 1, the (1,j) entry of [X, E_{1i}] is −X_{ij}. Either way, X_{ij} = 0. Then the only entry of X that can be nonzero is the (1,n) entry, so the center consists of the space of matrices with a single nonzero entry in the top right corner, which is clearly of dimension 1.
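Both centrality claims are easy to spot-check numerically (my addition, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
# A random upper triangular matrix commutes with any scalar matrix:
X = np.triu(rng.standard_normal((n, n)))
S = 2.5 * np.eye(n)
assert np.allclose(S @ X - X @ S, 0)     # scalar matrices are central in T_n(F)

# A strictly upper triangular matrix commutes with the corner matrix E_{1n}:
Y = np.triu(rng.standard_normal((n, n)), k=1)
E = np.zeros((n, n)); E[0, -1] = 1.0
assert np.allclose(E @ Y - Y @ E, 0)     # E_{1n} is central in U_n(F)
```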

5.1.9 Write elements of h_n in block form as

⎡0 ᵗt c⎤
⎢0 0  w⎥   with t, w column vectors in F^n and c a scalar.
⎣0 0  0⎦

Multiplying any two such elements leaves only the top right corner nonzero:

⎡0 ᵗt c⎤ ⎡0 ᵗt′ c′⎤   ⎡0 0 ᵗt w′⎤
⎢0 0  w⎥ ⎢0 0   w′⎥ = ⎢0 0   0  ⎥
⎣0 0  0⎦ ⎣0 0   0 ⎦   ⎣0 0   0  ⎦

Clearly then, taking the Lie bracket leaves matrices whose only nonzero entry is in the top right corner, so g^1 is the one-dimensional Lie algebra spanned by the corner matrix.

Furthermore, multiplying a matrix with only a top-right entry by any element of h_n yields the zero matrix, since the first column and last row of an element of h_n are zero, so there is never a nonzero entry. This makes the Lie bracket of any matrix from g^1 with anything in h_n zero.


The descending central series is h_n ⊃ g^1 ⊃ {0}, making the general Heisenberg Lie algebra 2-step nilpotent.
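The 2-step nilpotency can be confirmed numerically (my addition, assuming numpy; `heisenberg` is a hypothetical helper building the block form above):

```python
import numpy as np

def heisenberg(t, w, c):
    """h_n element: first row (0, t, c), last column (c, w, 0), zeros elsewhere."""
    m = len(t) + 2
    X = np.zeros((m, m))
    X[0, 1:m - 1] = t
    X[1:m - 1, -1] = w
    X[0, -1] = c
    return X

rng = np.random.default_rng(6)
n = 3
X = heisenberg(rng.standard_normal(n), rng.standard_normal(n), 1.0)
Y = heisenberg(rng.standard_normal(n), rng.standard_normal(n), 2.0)

# The bracket lands in the span of the corner matrix E_{1,n+2}.
E = np.zeros((n + 2, n + 2))
E[0, -1] = 1.0
Z = X @ Y - Y @ X
assert np.allclose(Z, Z[0, -1] * E)

# Bracketing again with anything in h_n gives zero: 2-step nilpotent.
assert np.allclose(Z @ X - X @ Z, 0)
```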

5.1.10 As per Exercise 3.2.2, we know that if g is a 2-dimensional nonabelian Lie algebra, then it has a basis {X, Y} such that [X, Y] = Y. Let h be the 1-dimensional Lie algebra generated by Y.

[aX + bY, a′X + b′Y] = aa′[X, X] + ab′[X, Y] + ba′[Y, X] + bb′[Y, Y] = (ab′ − a′b)Y ∈ h

Clearly then, C^1(g) = g^1 = h.

[aX + bY, cY] = ac[X, Y] + bc[Y, Y] = acY ∈ h

Hence C^2(g) = h. By the same argument, C^i(g) = h for all i > 1.

However, [aY, bY] = 0, so we have g^2 = {0}. Therefore the derived series is

g ⊃ h ⊃ {0}

making g solvable. The descending central series is

g ⊃ h ⊃ h ⊃ h ⊃ ⋯

making g not nilpotent.

5.1.11 1) Since h ⊂ g, clearly C^1(h) ⊂ C^1(g). For induction, say C^i(h) ⊂ C^i(g). Then C^{i+1}(h) = [C^i(h), h] ⊂ [C^i(g), g] = C^{i+1}(g). So C^i(h) ⊂ C^i(g) for all i.

If g is nilpotent, then C^m(g) = {0} for some m. By the above, C^m(h) = {0}, making h nilpotent as well.

2) By Prop 5.1.4, we need only look at the ascending central series. If g is nilpotent, then C_m = g for some m.


Say c = {0}. Then {0} = C_0 = C_1. Assume for induction that C_i = {0}. Quotienting by the zero ideal does nothing, so C_{i+1} is the ideal of g such that C_{i+1} = C_{i+1}/C_i = c(g/C_i) = c(g) = {0}. Thus C_i = {0} for all i, which contradicts g being nilpotent.

5.1.13 Exercise 5.1.10 provides a counterexample to this.The two dimensional Lie algebra g is not nilpotent, but it has an ideal h thatis one dimensional, therefore nilpotent. g/h is likewise one dimensional, thusnilpotent.


Chapter 6: Cartan's Criteria for Solvability and Semisimplicity

6.1.2 Let Q be an inner product invariant under Ad(G):

Q(Ad(g)X, Ad(g)Y) = Q(X, Y)

Let X_1, ..., X_n be orthonormal with respect to Q. Then each Ad(g) is an orthogonal matrix. The Lie algebra of the Lie group O(n) is so(n), the space of skew-symmetric matrices, so each ad(X) is skew-symmetric in this basis; writing X_{ij} for the entries of ad(X), X_{ij} = −X_{ji}. Then

B(X, X) = Tr(ad(X) ∘ ad(X)) = Σ_{i,j=1}^n X_{ij} X_{ji} = Σ_{i,j=1}^n −X_{ij}^2 = −(Σ_{i,j=1}^n X_{ij}^2) ≤ 0

Furthermore, say c = {0}. Then ad(X) ≠ 0 if X ≠ 0, so for all X ≠ 0 there is at least one X_{ij} ≠ 0, and B(X, X) ≤ −X_{ij}^2 < 0, so B is negative definite.

Conversely, if B(X, X) < 0 for all X ≠ 0, then for any such X, ad(X) has at least one X_{ij} ≠ 0, meaning ad(X) is not the zero matrix; hence ad(X) ≠ 0, so X is not in the center. Therefore the center consists only of zero.

6.2.2 (i) Take a C-basis {v_1, ..., v_n} of V. Note that we automatically get an R-basis {v_1, ..., v_n, iv_1, ..., iv_n}. Write T(v_j) = Σ_{k=1}^n a_{kj} v_k, where the a_{kj} are the entries of T as an n × n complex matrix. If we write each a_{kj} as a_{kj} = α_{kj} + iβ_{kj}, then note Tr(T) = Σ_{k=1}^n (α_{kk} + iβ_{kk}).

T(v_j) = Σ_{k=1}^n (α_{kj} + iβ_{kj}) v_k = Σ_{k=1}^n α_{kj} v_k + Σ_{k=1}^n β_{kj} (iv_k)

T(iv_j) = Σ_{k=1}^n (α_{kj} + iβ_{kj}) iv_k = Σ_{k=1}^n α_{kj} (iv_k) − Σ_{k=1}^n β_{kj} v_k

Then as a real matrix, T_R is the block matrix [[α_{kj}] −[β_{kj}] ; [β_{kj}] [α_{kj}]]. Clearly then

Tr(T_R) = Σ_{k=1}^n α_{kk} + Σ_{k=1}^n α_{kk} = 2 Re(Tr(T))
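The block structure and the trace identity can be checked numerically (my addition, assuming numpy; `realify` is a hypothetical helper name):

```python
import numpy as np

def realify(T):
    """Real 2n x 2n block matrix [[Re T, -Im T], [Im T, Re T]] of a complex matrix."""
    return np.block([[T.real, -T.imag],
                     [T.imag,  T.real]])

rng = np.random.default_rng(5)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

assert np.isclose(np.trace(realify(T)), 2 * np.trace(T).real)
```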


(ii) Every x ∈ U^c = U ⊕ iU can be written x = a + ib for a, b ∈ U, and the obvious extension of T is Tc(x) = T(a) + iT(b).

Say {u1, . . . , un} is an R-basis of U. Then it is also a C-basis of U^c, since for any x the components a and b can be expressed as linear combinations of the uk's.

The matrix of T in this basis (viewed as an R-basis of U) is the same as the matrix of Tc in this basis (viewed as a C-basis of U^c): we are looking at the same operator on the same basis, so it has the same trace.

6.2.3 Take an arbitrary element X of gl(n,C), and write X* for its conjugate transpose (so u(n) = {X : X* = −X}). We have

X = (X − X*)/2 + i(X + X*)/(2i)

((X − X*)/2)* = (X* − X)/2 = −(X − X*)/2, so (X − X*)/2 ∈ u(n)

((X + X*)/(2i))* = (X* + X)/(−2i) = −(X + X*)/(2i), so (X + X*)/(2i) ∈ u(n)

Thus we can decompose any element into a sum of a skew-hermitian matrix plus i times a skew-hermitian matrix, i.e. gl(n,C) = u(n) + iu(n).

Say X ∈ u(n), so X* = −X. Then (iX)* = (−i)(−X) = iX, so iu(n) is made up of hermitian matrices.

Since the only matrix that is both hermitian and skew-hermitian is the zero matrix, we have u(n) ∩ iu(n) = {0}. Therefore gl(n,C) = u(n) ⊕ iu(n), making u(n) a real form of gl(n,C).

Now, for any X, decompose it as above and apply τ, which changes the plus between the two summands into a minus (X* again the conjugate transpose):

τ(X) = (X − X*)/2 − i(X + X*)/(2i) = ((X − X*) − (X + X*))/2 = −X*
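Both the decomposition and the formula τ(X) = −X* (X* the conjugate transpose) can be verified on a random matrix; the sketch below uses numpy with my own variable names:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Xstar = X.conj().T                 # conjugate transpose X*

S = (X - Xstar) / 2                # skew-hermitian
A = (X + Xstar) / (2j)             # also skew-hermitian

assert np.allclose(S.conj().T, -S) and np.allclose(A.conj().T, -A)
assert np.allclose(S + 1j * A, X)       # X = S + iA, so gl(n,C) = u(n) + i u(n)
assert np.allclose(S - 1j * A, -Xstar)  # tau(X) = S - iA = -X*
print("decomposition verified")
```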


6.2.5 ⇒ Say BR(x, y) = 0 for all y.

Then 2Re B(x, y) = 0 for all y. Also Im B(x, y) = Re(−iB(x, y)) = Re B(x, −iy) = 0, since BR(x, −iy) = 0 as well.

Hence B(x, y) = Re B(x, y) + i Im B(x, y) = 0 for all y.

Since B is nondegenerate, x = 0, thus BR is nondegenerate.

⇐ Say B(x, y) = 0 for all y.

Then BR(x, y) = 2Re B(x, y) = 0 for all y.

Since BR is nondegenerate, x = 0, thus B is nondegenerate.

6.3.4 The interpolation conditions can be written as a matrix equation V C = y, where y is the column vector of the (n+1) prescribed values in F, C is the column vector of coefficients C0, . . . , Cn, and V is a Vandermonde matrix.

The determinant of a Vandermonde matrix is ∏_{i<j} (xi − xj). Since we chose the points x0, . . . , xn to be distinct, every factor (xi − xj) is nonzero, so the determinant is nonzero, making C unique.

Hence the coefficients of our polynomial are unique, so the polynomial is unique.
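This argument is exactly what numerical polynomial interpolation relies on; the points and values below are my own example:

```python
import numpy as np

# Interpolation through distinct points.
pts = np.array([0., 1., 2., 3.])
vals = np.array([1., 3., 9., 31.])

V = np.vander(pts, increasing=True)   # rows [1, x_i, x_i^2, x_i^3]

# det V = product of (x_j - x_i) over i < j, nonzero for distinct points
prod = np.prod([pts[j] - pts[i] for i in range(4) for j in range(i + 1, 4)])
assert np.isclose(np.linalg.det(V), prod) and prod != 0

C = np.linalg.solve(V, vals)          # unique coefficient vector C_0, ..., C_n
poly = lambda x: sum(c * x**k for k, c in enumerate(C))
print([float(poly(p)) for p in pts])  # reproduces vals
```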

6.4.8 This follows rather nicely from previous results:

g is semisimple ⇔ the Killing form B on g is nondegenerate (Thm. 6.1.4)
⇔ the Killing form BR on gR is nondegenerate (Ex. 6.2.5)
⇔ gR is semisimple (Thm. 6.1.4).


Chapter 7: Semisimple Lie Algebras: Basic Structure and Representations

7.1.3 g = ⊕_{i=1}^n gi, where each gi is simple. Recall that by Cartan's criterion, we only need to check that B is nondegenerate. Let X = X1 + · · · + Xn and Y = Y1 + · · · + Yn ∈ g.

Consider adXi ◦ adYj for i ≠ j; by linearity it suffices to apply it to z lying in a single summand gk. If k ≠ j, then [Yj, z] ∈ gj ∩ gk = {0} since the sum is direct, which makes [Xi, [Yj, z]] = 0.

If k = j, then [Yj, z] ∈ gj since gj is an ideal, so [Xi, [Yj, z]] is the bracket of something in gi with something in gj, and it is again zero since the sum is direct.

Therefore adX ◦ adY has no mixed terms, and we can split it up amongst the summands:

adX ◦ adY = adX1 ◦ adY1 + · · · + adXn ◦ adYn

Since trace is additive, when we pass to the Killing form, the Killing form on g is the sum of the Killing forms on the gi:

B(X, Y) = Tr(adX ◦ adY) = Tr(adX1 ◦ adY1 + · · · + adXn ◦ adYn)

= Tr(adX1 ◦ adY1) + · · · + Tr(adXn ◦ adYn) = Bg1(X1, Y1) + · · · + Bgn(Xn, Yn)

Recall that simple implies semisimple, so each Bgi is nondegenerate; thus B is nondegenerate, making g semisimple.

(This can also be proven using the solvable radical, by noting that any ideal of g must be a direct sum of some of the gi's and bracketing.)
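The splitting of the Killing form over a direct sum can be checked numerically. The sketch below builds sl(2) ⊕ sl(2) as block-diagonal 4 × 4 matrices (the basis and helper functions are my own choices) and compares Gram matrices:

```python
import numpy as np

# An illustrative basis of sl(2): E, H, F.
E = np.array([[0., 1], [0, 0]])
H = np.array([[1., 0], [0, -1]])
F = np.array([[0., 0], [1, 0]])
sl2 = [E, H, F]

Z2 = np.zeros((2, 2))
# Basis of g = sl(2) + sl(2), embedded as block-diagonal 4x4 matrices.
g = [np.block([[X, Z2], [Z2, Z2]]) for X in sl2] + \
    [np.block([[Z2, Z2], [Z2, X]]) for X in sl2]

def killing(basis):
    # Gram matrix of the Killing form B(X, Y) = Tr(ad X ◦ ad Y) in the given basis.
    A = np.column_stack([b.flatten() for b in basis])
    coords = lambda M: np.linalg.lstsq(A, M.flatten(), rcond=None)[0]
    ad = lambda M: np.column_stack([coords(M @ b - b @ M) for b in basis])
    return np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])

B, Bsl2 = killing(g), killing(sl2)
# B is block diagonal: one copy of the sl(2) Killing form per summand, no mixed terms.
print(np.allclose(B[:3, :3], Bsl2), np.allclose(B[3:, 3:], Bsl2), np.allclose(B[:3, 3:], 0))
```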

7.2.2 First note that since gc is a direct sum, when we expand the following bracket the cross terms [x, σ(y)] and [σ(x), y] vanish:

[x + σ(x), y + σ(y)] = [x, y] + [x, σ(y)] + [σ(x), y] + [σ(x), σ(y)] = [x, y] + σ([x, y])

This shows that the map x ↦ x + σ(x) respects the bracket operator.

It is clearly invertible: we can decompose any y as a sum y = y′ + σ(y′′) of elements from g1 and σ(g1), and the map y = y′ + σ(y′′) ↦ y′ is an inverse, since

x ↦ x + σ(x) ↦ x


7.3.12 In the given basis, we can write the operators as matrices:

π(f) has 1's on the subdiagonal and zeroes elsewhere,
π(h) = diag(n, n − 2, . . . , −n),
π(e) has the entries j(n − j + 1), i.e. n, 2(n − 1), . . . , n, on the superdiagonal and zeroes elsewhere.

π(h)π(f) − π(f)π(h) is a matrix supported on the subdiagonal, whose entries are (n − 2j − 2) − (n − 2j) = −2. This is clearly the matrix −2π(f).

π(h)π(e) − π(e)π(h) is a matrix supported on the superdiagonal, whose entries are (n − 2j)j(n − j + 1) − j(n − j + 1)(n − 2j − 2). Factoring out j and (n − j + 1) gives j(n − j + 1)((n − 2j) − (n − 2j − 2)) = 2j(n − j + 1), which are exactly the entries of the matrix 2π(e).

π(e)π(f) − π(f)π(e) is a diagonal matrix whose entries are (j + 1)(n − j) − j(n − j + 1) = jn − j² + n − j − jn + j² − j = n − 2j. This is clearly the matrix π(h).

Thus, by matrix calculation, we get the same relations as in sl(2,C): [π(h), π(f)] = −2π(f), [π(h), π(e)] = 2π(e), and [π(e), π(f)] = π(h), which means that πn is a representation.

Now, suppose V has a nonzero subspace invariant under sl(2,C), and take an invariant subspace W of minimal dimension. It is irreducible, so we can invoke the theorem: it has a basis of eigenvectors with the standard relations. The choice of v0 in the theorem is unique up to scaling, so we can take W and V to have the same first basis vector. The rest of the basis vectors are obtained by successively applying π(f) to v0, stopping at the vector that one more application of π(f) sends to zero. This process is the same for both spaces, so V and W have bases consisting of the same vectors; hence they have the same dimension and must be equal.

Therefore πn is an irreducible representation.
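The matrix calculation above can be replayed numerically; the construction below builds the three matrices from the entries just derived and checks the sl(2) relations for several n (a sketch with my own helper names):

```python
import numpy as np

# Matrices of pi_n in the basis v_0, ..., v_n, using the entries derived above.
def rep(n):
    F = np.diag(np.ones(n), -1)                                 # 1's on the subdiagonal
    H = np.diag([n - 2 * j for j in range(n + 1)])              # n, n-2, ..., -n
    E = np.diag([j * (n - j + 1) for j in range(1, n + 1)], 1)  # j(n-j+1) on the superdiagonal
    return E, H, F

def bracket(A, B):
    return A @ B - B @ A

for n in (1, 2, 5):
    E, H, F = rep(n)
    assert np.allclose(bracket(H, F), -2 * F)
    assert np.allclose(bracket(H, E), 2 * E)
    assert np.allclose(bracket(E, F), H)
print("sl(2) relations hold")
```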


7.3.13 Linearity: for X = [a b; c −a] the action is (π(X)p)(z, w) = (az + cw)∂p/∂z + (bz − aw)∂p/∂w, so

(π(αX1 + X2)p)(z, w) = ((αa1 + a2)z + (αc1 + c2)w)∂p/∂z + ((αb1 + b2)z − (αa1 + a2)w)∂p/∂w

= α((a1z + c1w)∂p/∂z + (b1z − a1w)∂p/∂w) + (a2z + c2w)∂p/∂z + (b2z − a2w)∂p/∂w

= α(π(X1)p)(z, w) + (π(X2)p)(z, w)

Bracket operator:

X1X2 = [ a1 b1 ] [ a2 b2 ]  =  [ a1a2 + b1c2    a1b2 − a2b1 ]
       [ c1 −a1] [ c2 −a2]     [ a2c1 − a1c2    c1b2 + a1a2 ]

[X1, X2] = X1X2 − X2X1 = [ b1c2 − b2c1       2(a1b2 − a2b1) ]
                         [ 2(a2c1 − a1c2)    c1b2 − c2b1    ]

(π([X1, X2])p)(z, w) = ((b1c2 − b2c1)z + 2(a2c1 − a1c2)w)∂p/∂z + (2(a1b2 − a2b1)z − (b1c2 − b2c1)w)∂p/∂w

To have this be a Lie algebra homomorphism, we need π of the bracket of X1 and X2 applied to a polynomial (the expression above) to equal the bracket of the operators π(X1) and π(X2) applied to that polynomial.

First we compose the operators:

π(X1) ◦ π(X2) = ((a1z + c1w)∂/∂z + (b1z − a1w)∂/∂w) ◦ ((a2z + c2w)∂/∂z + (b2z − a2w)∂/∂w)

Keeping only the first-order terms (the second-order terms are symmetric in X1 and X2, so they cancel in the commutator below), this is

(a1z + c1w)a2 ∂/∂z + (a1z + c1w)b2 ∂/∂w + (b1z − a1w)c2 ∂/∂z + (b1z − a1w)(−a2) ∂/∂w

= ((a1a2 + b1c2)z + (a2c1 − a1c2)w)∂/∂z + ((a1b2 − a2b1)z + (b2c1 + a1a2)w)∂/∂w

Composing in the other order gives the same result with the indices 1 and 2 switched.


[π(X1), π(X2)] = ((a1a2 + b1c2 − a1a2 − b2c1)z + (a2c1 − a1c2 − a1c2 + a2c1)w)∂/∂z + ((a1b2 − a2b1 − a2b1 + a1b2)z + (b2c1 + a1a2 − b1c2 − a1a2)w)∂/∂w

= ((b1c2 − b2c1)z + 2(a2c1 − a1c2)w)∂/∂z + (2(a1b2 − a2b1)z − (b1c2 − b2c1)w)∂/∂w

Applying this to any p gives the same expression as before, and thus we have π([X1, X2])(p)(z, w) = [π(X1), π(X2)](p)(z, w), making π a Lie algebra homomorphism.
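The homomorphism property can also be verified symbolically with general coefficients; the sketch below uses sympy, with my own names for the symbols and the test polynomial:

```python
import sympy as sp

z, w = sp.symbols('z w')
a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2')

def mat(a, b, c):
    # Trace-free 2x2 matrix [[a, b], [c, -a]].
    return sp.Matrix([[a, b], [c, -a]])

def pi(X, p):
    # (pi(X)p)(z, w) = (az + cw) dp/dz + (bz - aw) dp/dw
    a, b, c = X[0, 0], X[0, 1], X[1, 0]
    return sp.expand((a*z + c*w)*sp.diff(p, z) + (b*z - a*w)*sp.diff(p, w))

X1, X2 = mat(a1, b1, c1), mat(a2, b2, c2)
p = z**3 * w**2 + 2*z*w   # an arbitrary test polynomial

lhs = pi(X1*X2 - X2*X1, p)                               # pi of the bracket
rhs = sp.expand(pi(X1, pi(X2, p)) - pi(X2, pi(X1, p)))   # bracket of the operators
print(sp.simplify(lhs - rhs) == 0)  # True
```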

7.3.14 Taking v0 = z^n, we can use the homomorphism from problem 7.3.13, with vj = P(n, j) z^{n−j} w^j and P(n, j) = n!/(n − j)!.

In this case, E is the matrix where a = c = 0 and b = 1:

π(E)v0 = z ∂(z^n)/∂w = 0

π(E)vj = z ∂(P(n, j)z^{n−j}w^j)/∂w = jP(n, j)z^{n+1−j}w^{j−1} = j · n!/(n − j)! · z^{n+1−j}w^{j−1}

= j(n − j + 1) · n!/(n − j + 1)! · z^{n+1−j}w^{j−1} = j(n − j + 1)v_{j−1}

In this case, F is the matrix where a = b = 0 and c = 1:

π(F)v0 = w ∂(z^n)/∂z = nz^{n−1}w = v1

π(F)vj = w ∂(P(n, j)z^{n−j}w^j)/∂z = (n − j)P(n, j)z^{n−j−1}w^{j+1} = (n − j) · n!/(n − j)! · z^{n−j−1}w^{j+1} = v_{j+1}

In this case, H is the matrix where b = c = 0 and a = 1:

π(H)v0 = z ∂(z^n)/∂z − w ∂(z^n)/∂w = nz^n = nv0

π(H)vj = z ∂(P(n, j)z^{n−j}w^j)/∂z − w ∂(P(n, j)z^{n−j}w^j)/∂w = (n − j)P(n, j)z^{n−j}w^j − jP(n, j)z^{n−j}w^j = (n − 2j)vj
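These actions on the monomial basis can be checked symbolically; the sketch below fixes a degree n (my own choice) and verifies each formula with sympy:

```python
import sympy as sp

z, w = sp.symbols('z w')
n = 4  # any fixed degree works

P = lambda j: sp.factorial(n) / sp.factorial(n - j)   # P(n, j) = n!/(n-j)!
v = lambda j: P(j) * z**(n - j) * w**j                # basis vector v_j

piE = lambda p: z * sp.diff(p, w)                     # pi(E) = z d/dw
piF = lambda p: w * sp.diff(p, z)                     # pi(F) = w d/dz
piH = lambda p: z * sp.diff(p, z) - w * sp.diff(p, w) # pi(H) = z d/dz - w d/dw

for j in range(n + 1):
    assert sp.expand(piH(v(j)) - (n - 2*j) * v(j)) == 0
    assert sp.expand(piE(v(j)) - (j * (n - j + 1) * v(j - 1) if j > 0 else 0)) == 0
    if j < n:
        assert sp.expand(piF(v(j)) - v(j + 1)) == 0
print("weight-basis relations verified")
```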


Chapter 8: Root Space Decompositions

8.1.1 To prove this, we work by induction on k.

For k = 0, this is obviously true since both sides are [x, y]. Assume the expansion is true for k, meaning the following holds (writing C(k, r) for the binomial coefficient):

(D − (λ + µ)Ig)^k [x, y] = ∑_{r=0}^k C(k, r) [(D − λIg)^r x, (D − µIg)^{k−r} y]

We now apply one more factor and expand using the inductive hypothesis:

(D − (λ + µ)Ig)^{k+1} [x, y] = (D − (λ + µ)Ig)( ∑_{r=0}^k C(k, r) [(D − λIg)^r x, (D − µIg)^{k−r} y] )

Distribute the D using the properties of derivations to get two terms:

D[(D − λIg)^r x, (D − µIg)^{k−r} y] = [D(D − λIg)^r x, (D − µIg)^{k−r} y] + [(D − λIg)^r x, D(D − µIg)^{k−r} y]

Then distribute the −(λ + µ)Ig and use bilinearity to get two more:

−(λ + µ)Ig [(D − λIg)^r x, (D − µIg)^{k−r} y] = [−λIg(D − λIg)^r x, (D − µIg)^{k−r} y] + [(D − λIg)^r x, −µIg(D − µIg)^{k−r} y]

Combine the first term of each pair and the last term of each pair to get the two terms

[(D − λIg)^{r+1} x, (D − µIg)^{k−r} y]  and  [(D − λIg)^r x, (D − µIg)^{k+1−r} y]

Now put back the binomial coefficients and carefully change the summation index:

∑_{r=0}^k C(k, r)([(D − λIg)^{r+1} x, (D − µIg)^{k−r} y] + [(D − λIg)^r x, (D − µIg)^{k+1−r} y])

= ∑_{r=0}^k C(k, r)[(D − λIg)^{r+1} x, (D − µIg)^{k−r} y] + ∑_{r=0}^k C(k, r)[(D − λIg)^r x, (D − µIg)^{k+1−r} y]

= ∑_{r=1}^{k+1} C(k, r − 1)[(D − λIg)^r x, (D − µIg)^{k+1−r} y] + ∑_{r=0}^k C(k, r)[(D − λIg)^r x, (D − µIg)^{k+1−r} y]

= ∑_{r=0}^{k+1} (C(k, r − 1) + C(k, r))[(D − λIg)^r x, (D − µIg)^{k+1−r} y]

Take note that C(k, −1) = C(k, k + 1) = 0, so we are not adding in an extra term by extending the indices on the two sums.

Recall that binomial coefficients have a recursive formula: C(k + 1, r) = C(k, r − 1) + C(k, r). The above therefore equals

∑_{r=0}^{k+1} C(k + 1, r)[(D − λIg)^r x, (D − µIg)^{k+1−r} y]

which is exactly what we want.
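Since the identity holds for any derivation D and any scalars λ, µ, it can be spot-checked numerically with D = ad(Z) on gl(3) (a derivation by the Jacobi identity); the random inputs below are my own test data:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
Z, x, y = (rng.standard_normal((3, 3)) for _ in range(3))
lam, mu = 0.7, -1.3          # arbitrary scalars; the identity holds for any choice

br = lambda A, B: A @ B - B @ A   # the bracket on gl(3)
D = lambda T: br(Z, T)            # D = ad(Z), a derivation of gl(3)

def shifted_pow(c, k, T):
    # Apply (D - c*I)^k to T.
    for _ in range(k):
        T = D(T) - c * T
    return T

k = 3
lhs = shifted_pow(lam + mu, k, br(x, y))
rhs = sum(comb(k, r) * br(shifted_pow(lam, r, x), shifted_pow(mu, k - r, y))
          for r in range(k + 1))
print(np.allclose(lhs, rhs))  # True
```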

8.1.4 Before we prove the statements, we have the following three remarks:

Remark. adgl(V)S is a semisimple linear transformation, since S is semisimple: take a basis of V consisting of eigenvectors of S, so that S is diagonal with eigenvalues λ1, . . . , λn. Then gl(V) has basis {Eij}, and these are eigenvectors of adgl(V)S via

adgl(V)S(Eij) = [S, Eij] = (λi − λj)Eij

Remark. If N is nilpotent, then so is adgl(V)(N): if N^a = 0, let m = 2a. Then for any T we have

(adgl(V)(N))^m (T) = ∑_{r=0}^m (−1)^r C(m, r) N^r T N^{m−r} = 0,

since in every term either r ≥ a or m − r ≥ a.

Remark. adgl(V)(S) and adgl(V)(N) are polynomials in adgl(V)(X): since ad is a homomorphism, S and N commute, so adgl(V)(S) and adgl(V)(N) commute as well. By uniqueness of the Jordan-Chevalley decomposition, adgl(V)(S) + adgl(V)(N) is the Jordan-Chevalley decomposition of adgl(V)(X), and therefore adgl(V)(S) and adgl(V)(N) are polynomials in adgl(V)(X).


(a) Since adgl(V)(X) leaves g invariant, adgl(V)(X)|g = adg(X).

adgl(V)S is a polynomial in adgl(V)(X) by our remarks, so its restriction to g is a linear combination of powers of something that leaves g invariant, and hence leaves g invariant. The same argument holds for adgl(V)N.

(b) Bracketing with Xs is the semisimple part of the Jordan-Chevalley decomposition of adgX, so by uniqueness of the decomposition we must have adgl(V)(S)|g = adg(Xs).

Then for any Y ∈ g, [S, Y] = adgl(V)S(Y) = adgXs(Y) = [Xs, Y] by the above, and therefore [S − Xs, Y] = 0 by linearity. So S − Xs commutes with everything in g, and is therefore in the centralizer.

The same argument works for N − Xn.

(c) S is a polynomial in X, say S = ∑ aj X^j, so [S, Xs] = [∑ aj X^j, Xs] = ∑ aj [X^j, Xs]. Each [X^j, Xs] = −adgXs(X^j) must be zero, since Xs is semisimple and commutes with X.

The same argument works for N and Xn.

(d) Since S and N are polynomials in X and this holds for X, it automatically holds for S and N.

(e) [S − Xs, g] = 0, so for all T ∈ g we have (S − Xs) ◦ T = T ◦ (S − Xs). This means S − Xs intertwines the g-action on Vi, so by Schur's Lemma it is a scalar operator. The same argument works for N − Xn.

(f) Xs can be broken down as Xs = (Xs − S) + S. By (d), S leaves Vi invariant and is semisimple, whereas (Xs − S) is a scalar operator by (e), hence also semisimple. Adding a scalar operator to a semisimple operator yields a semisimple operator, so Xs is semisimple.

(g) Recall that g is semisimple, so g = [g, g]. Then any Y ∈ g is a linear combination of Lie brackets, each of which has zero trace. Trace respects linear combinations, therefore Tr(Y) = 0.

31

Page 33: Lie Algebras Solutions - Ayla P. Sánchezandrew-sanchez.weebly.com/.../lie.algebras.solutions.pdfNow, we observe a similar argument: adding a linear combination of v n+1 L= v 0 n+1

(h) As we proved in (e), N − Xn is a scalar operator, i.e. N − Xn = λI. A nilpotent linear transformation has only zero as an eigenvalue, so Tr(N) = 0. Xn is in g, so by (g), Tr(Xn) = 0. Then λn = Tr(N − Xn) = Tr(N) − Tr(Xn) = 0, therefore λ = 0, thus N − Xn = 0.

Again by (e), S − Xs is a scalar operator, i.e. S − Xs = µI. Now, X = S + N = Xs + Xn; rearranging gives S = Xs + Xn − N = X − N. By part (g), Tr(X) = 0, and by nilpotency, Tr(N) = 0, hence Tr(S) = 0. We also know, by (g), that Tr(Xs) = 0. Combining these gives µn = Tr(S − Xs) = Tr(S) − Tr(Xs) = 0, therefore µ = 0, thus S − Xs = 0.

So we have N = Xn and S = Xs, making the Jordan-Chevalley and abstract Jordan-Chevalley decompositions equal.
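For a concrete instance of the Jordan-Chevalley decomposition discussed here, sympy can extract the semisimple and nilpotent parts of a small matrix; the matrix is my own example:

```python
import sympy as sp

X = sp.Matrix([[3, 1, 0],
               [0, 3, 0],
               [0, 0, 1]])

P, J = X.jordan_form()   # X = P J P^{-1}
# Semisimple part: keep only the diagonal of the Jordan form.
S = P * sp.diag(*[J[i, i] for i in range(3)]) * P.inv()
N = X - S                # nilpotent part

assert S.is_diagonalizable()        # semisimple
assert N**3 == sp.zeros(3, 3)       # nilpotent
assert S*N == N*S                   # the two parts commute
print(S, N)
```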

8.3.5 To do soon
