
Math 436: Calculus on Manifolds - Fall 2021

Josh Lau

This course was taught in the Fall 2021 term at UVic. The instructor was Professor Heath Emerson. These are my notes for this course, and are neither official nor endorsed by the instructor(s).

Contents

1 Manifolds
   1.1 Definition of a Manifold
   1.2 Tensor Fields
   1.3 Grassmann Algebra
   1.4 Exterior Differentiation
   1.5 Mappings
   1.6 Affine Connections
   1.7 Parallelism
   1.8 Exponential Map

1 Manifolds

1.1 Definition of a Manifold

Definition 1.1.1. Let U ⊂ Rn and V ⊂ Rm be open, let x1, . . . , xn be the coordinate projections xi : U → R, and let y1, . . . , ym be the coordinate projections yj : V → R. Then a function ϕ : U → V is smooth (sometimes misleadingly called differentiable) if yj ◦ ϕ : U → R has partial derivatives of all orders for all 1 ≤ j ≤ m. That is,

∂^r(yj ◦ ϕ) / (∂x_{i1} ∂x_{i2} · · · ∂x_{ir})

exists for all j, i1, i2, . . . , ir.

ϕ is a diffeomorphism onto V if ϕ is smooth, bijective, and its set-theoretic inverse ϕ−1 is also smooth.

Remark. The function ϕ : R → R given by ϕ(x) = x^3 is smooth and bijective, but not a diffeomorphism, because ϕ−1(x) = x^{1/3} is not smooth (it is not differentiable at x = 0).

Lemma 1.1.2. If A, B ⊂ Rn are disjoint with A compact and B closed, then there exists a smooth function ϕ : Rn → R such that ϕ|A = 1 and ϕ|B = 0.

Proof. Let 0 < a < b, and let f : R → R be given by:

f(x) = exp( 1/(x − b) − 1/(x − a) )  if a < x < b,   f(x) = 0 otherwise.

One can check that f is smooth. Now let F(x) = (∫_x^b f(t) dt) / (∫_a^b f(t) dt). Then F(x) = 1 when x ≤ a, F(x) = 0 when x ≥ b, and F is smooth. Next, let ψ : Rn → R be given by ψ(x1, . . . , xn) = F(x1² + · · · + xn²). Then ψ ≡ 1 when x1² + · · · + xn² ≤ a, ψ ≡ 0 when x1² + · · · + xn² ≥ b, and ψ is smooth. Now given A and B as in the statement of the Lemma, by compactness of A we may cover A by a finite number of open balls B1, . . . , Bk, each of which is disjoint from B. For each 1 ≤ i ≤ k, let B′i be a slightly smaller ball such that A is still covered by the B′i's, and such that there is a smooth function ψi : Rn → R satisfying ψi ≡ 1 on B′i and ψi ≡ 0 on Bi^C, for each 1 ≤ i ≤ k. Then ϕ := 1 − (1 − ψ1)(1 − ψ2) · · · (1 − ψk) is as required.
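The one-dimensional cutoff F used in this proof can be checked numerically. The following Python sketch is my own (not part of the course); the choice a = 1, b = 2 and the integration grid are arbitrary.

import numpy as np

a, b = 1.0, 2.0

def f(x):
    # the smooth bump from the proof: positive on (a, b), zero elsewhere
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    out[inside] = np.exp(1.0 / (x[inside] - b) - 1.0 / (x[inside] - a))
    return out

def F(x, num=4001):
    # F(x) = (integral_x^b f) / (integral_a^b f): equals 1 for x <= a and 0 for x >= b
    grid = np.linspace(a, b, num)
    denom = np.trapz(f(grid), grid)
    if x >= b:
        return 0.0
    t = np.linspace(max(x, a), b, num)   # f vanishes below a, so start at a
    return np.trapz(f(t), t) / denom

for x in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(x, round(F(x), 4))   # 1.0, 1.0, a value strictly between 0 and 1, 0.0, 0.0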

Definition 1.1.3. A (smooth) manifold of dimension m is a set M which is a countable union of subsets Ui, where, for each i, there exists a bijection ϕi : Ui → Vi ⊂ Rm with Vi open, and such that for all i, j, the map

ϕi ◦ ϕj^{−1} : ϕj(Ui ∩ Uj) → ϕi(Ui ∩ Uj)

is a diffeomorphism between two open subsets of Rm. We also require a Hausdorff condition: if p ≠ q are in M, then either p, q ∈ Ui for some i, or there exist i, j such that p ∈ Ui, q ∈ Uj, and Ui ∩ Uj = ∅. We endow M with the topology given by the base of open sets {ϕi^{−1}(V) : V ⊂ ϕi(Ui) is open}. Each pair (ϕi, Ui) is called a chart. Note the topology on M is the weakest topology for which each of the coordinate charts ϕi : Ui → Vi is continuous.

Definition 1.1.4. Let M be a manifold. A function f : M → R is smooth if f ◦ ϕi^{−1} : ϕi(Ui) → R is smooth for all charts (ϕi, Ui).

Example 1.1.5. An example of a manifold of dimension 2 is the 2-sphere S2 = {(x, y, z) ∈ R3 : x² + y² + z² = 1}. This manifold may be covered by six charts (one for each open hemisphere). For example, the chart which covers the top half is given by U = {(x, y, z) ∈ S2 : z > 0} (which is homeomorphic to the open disk D◦ = {(x, y) ∈ R2 : x² + y² < 1}) with the projection map ϕ : U → R2, ϕ(x, y, z) = (x, y).

Definition 1.1.6. Let M be a manifold. An atlas for M is any collection {(Ui, ϕi)}_{i∈I} of charts for M. A differential structure on M is a maximal atlas.

We will assume all manifolds possess a countable atlas.

Definition 1.1.7. A covering {Ui}_{i∈I} of a topological space X is locally finite if every point p ∈ X has a neighbourhood which meets only finitely many of the Ui's. X is paracompact if every open cover of X has a locally finite refinement.

Theorem 1.1.8. Let M be a manifold.

• M is paracompact.

• If {Ui}_{i∈I} is a locally finite open cover of M, then there exists a collection of smooth maps {ϕi : M → R}_{i∈I} such that supp ϕi ⊂ Ui and ∑_{i∈I} ϕi(p) = 1 for all p ∈ M. Such a collection is called a partition of unity subordinate to the cover {Ui}.

1.2 Tensor Fields

For the remainder of this section, let M be a manifold. We denote the R-algebra of smooth functions on M by C∞(M).

Definition 1.2.1. Let A be an algebra. A derivation of A is a linear map D : A → A satisfying D(ab) = D(a)b + aD(b) for all a, b ∈ A.

Definition 1.2.2. A vector field on M is a derivation of C∞(M).

Example 1.2.3. (Some examples of derivations)

• If M = R, then C∞(M) = C∞(R), and a well-known derivation of C∞(R) is differentiation, that is, the map f ↦ f′.

• If M = Rn, and ri : Rn → R is the ith coordinate projection, then for each i, the map ∂/∂ri : C∞(Rn) → C∞(Rn) defined by (∂f/∂ri)(p) = d/dt|t=0 f(p + t ei) (where ei is the ith standard basis vector in Rn) is a derivation, commonly known as the ith partial derivative.

• Let M be any smooth manifold, and (U, ϕ) be a coordinate chart for M. Then for each i, the map ∂/∂xi : C∞(U) → C∞(U) given by ∂f/∂xi := ∂/∂ri (f ◦ ϕ−1) is a derivation of C∞(U). Also, if ρ ∈ C∞(M) with supp ρ ⊂ U, then f ↦ ρ · ∂(f|U)/∂xi is a derivation of C∞(M).

Definition 1.2.4. We define

D1(M) := {D : D is a derivation of C∞(M)} = {D : D is a vector field on M}

Definition 1.2.5. If A is an algebra, M is a module over A if it is a vector space with a multiplication A × M → M, (a, m) ↦ am, such that A → EndR(M), a ↦ La, where La(m) = am, is an algebra homomorphism.

Proposition 1.2.6. D1(M) is a module over C∞(M), with multiplication given by ((ϕD)(f))(p) = ϕ(p) · (Df)(p) for all ϕ ∈ C∞(M) and D ∈ D1(M).

Proof. We just need to check that ϕD is indeed a derivation for all ϕ ∈ C∞(M) and D ∈ D1(M). Indeed this is the case:

(ϕD)(f1f2) = ϕ · D(f1f2) = ϕ · ((Df1)f2 + f1Df2) = (ϕD)(f1) · f2 + f1 · (ϕD)(f2)

Proposition 1.2.7. If A is an algebra and D1, D2 are derivations of A, then the commutator, [D1, D2] := D1D2 − D2D1, is a derivation of A.

Proof. Let D1 and D2 be derivations of A. Then for any a, b ∈ A, we have:

[D1, D2](ab) = (D1D2 − D2D1)(ab) = D1(D2(ab)) − D2(D1(ab))
= D1(D2(a)b + aD2(b)) − D2(D1(a)b + aD1(b))
= D1(D2(a)b) + D1(aD2(b)) − D2(D1(a)b) − D2(aD1(b))
= D1(D2(a))b + D2(a)D1(b) + D1(a)D2(b) + aD1(D2(b)) − D2(D1(a))b − D1(a)D2(b) − D2(a)D1(b) − aD2(D1(b))
= D1(D2(a))b − D2(D1(a))b + aD1(D2(b)) − aD2(D1(b))
= (D1D2 − D2D1)(a)b + a(D1D2 − D2D1)(b)
= [D1, D2](a)b + a[D1, D2](b)
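A quick symbolic sanity check of this computation, using sympy (my own sketch, not from the lectures; the particular derivations D1 = x ∂/∂y and D2 = y ∂/∂x on C∞(R2) are arbitrary choices with a nonzero bracket):

import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)    # placeholders for two smooth functions
g = sp.Function('g')(x, y)

# two derivations of C^inf(R^2): D1 = x d/dy, D2 = y d/dx
D1 = lambda h: x * sp.diff(h, y)
D2 = lambda h: y * sp.diff(h, x)

# their commutator
bracket = lambda h: D1(D2(h)) - D2(D1(h))

# Leibniz rule for the commutator: [D1,D2](fg) = [D1,D2](f) g + f [D1,D2](g)
lhs = bracket(f * g)
rhs = bracket(f) * g + f * bracket(g)
print(sp.simplify(lhs - rhs))   # 0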

Definition 1.2.8. When M is a smooth manifold and D1, D2 ∈ D1(M), the commutator [D1, D2] is called the Lie bracket of D1 and D2.

Fact (Locality of derivations of C∞(M)). If D ∈ D1(M), then Df(p) only depends on f in a neighbourhood of p (i.e. if f1 = f2 on a neighbourhood of p, then Df1 = Df2 in a neighbourhood of p).

Proof. It suffices to show that if f = 0 in a neighbourhood of p, then Df = 0 in a neighbourhood of p. For this, we may find a function ρ ∈ C∞(M) with ρ ≡ 1 on supp f and ρ ≡ 0 in a neighbourhood of p. Then f = ρf, so Df = D(ρf) = D(ρ)f + ρD(f), which vanishes in a neighbourhood of p (because both ρ and f vanish in a neighbourhood of p).

Definition 1.2.9. For any p ∈ M, we define C∞(p) := {(f, U) : p ∈ U is open, f ∈ C∞(U)}/∼, where ∼ is the equivalence relation (f, U) ∼ (f′, U′) if and only if there exists a neighbourhood W of p such that W ⊂ U ∩ U′ and f|W = f′|W. Equivalence classes (i.e. elements of C∞(p)) are called germs (of smooth functions) at p.

Thus, if D ∈ D1(M), then by the previous fact, Df(p) only depends on the germ of f at p. Thus, any p ∈ M and any D ∈ D1(M) determine a linear map Dp : C∞(p) → R, given by Dp(f) := (Df)(p), which satisfies Dp(f1f2) = (Dpf1)f2(p) + f1(p)Dp(f2).

Remark. Vector fields extend from open sets: Let U ⊂ M be open, X ∈ D1(U), and p ∈ U. Choose a neighbourhood V of p with V̄ ⊂ U, and let ρ ∈ C∞(M) with ρ ≡ 1 on V and supp ρ ⊂ U. Then we may define:

X̃(f)(q) := ρ(q) · X(f|U)(q) if q ∈ U,  and  X̃(f)(q) := 0 if q ∉ U,

which is a vector field on M agreeing with X on V.

Definition 1.2.10. The tangent space Mp to M at a point p is the space of all point derivations of C∞(M) at p, that is, linear maps Xp : C∞(p) → R satisfying Xp(fg) = Xp(f)g(p) + f(p)Xp(g).

Example 1.2.11. Let p ∈ M, let (U, ϕ) be a coordinate chart around p, and let xi = ri ◦ ϕ. If f ∈ C∞(U), define ∂f/∂xi := ∂/∂ri (f ◦ ϕ−1). Then ∂/∂xi defines an element of D1(U), and a point derivation ∂/∂xi|p at each p ∈ U. So ∂/∂xi|p ∈ Mp.

Theorem 1.2.12 (Taylor's Theorem). If f ∈ C∞(0), then f has a Taylor series expansion,

f ∼ f(0) + ∑_{i=1}^n (∂f/∂ri)(0) ri + (1/2!) ∑_{i,j} (∂²f/∂ri∂rj)(0) ri rj + · · · + (1/k!) ∑_{i1,...,ik} (∂^k f/∂r_{i1} · · · ∂r_{ik})(0) r_{i1} · · · r_{ik} + · · ·

where ∼ means that f(r) minus the sum of the terms of order ≤ k is O(|r|^{k+1}) as r → 0 in Rn. In particular, f(r) = f(0) + ∑_{i=1}^n (∂f/∂ri)(0) ri + R1R2, where R1(0) = R2(0) = 0 and R1 and R2 are both smooth functions (more precisely, the remainder is a finite sum of such products, which suffices for the applications below).

Proof. Suppose f is smooth in a neighbourhood of 0 in Rn. By the fundamental theorem of calculus and then the chain rule,

f(r) − f(0) = ∫_0^1 ∂/∂t (f(tr)) dt = ∫_0^1 ∑_i (∂f/∂ri)(tr) ri dt = ∑_i ri ∫_0^1 (∂f/∂ri)(tr) dt

Since ∂f/∂ri is also smooth in a neighbourhood of 0, we can repeat the above with f replaced by ∂f/∂ri, which gives, for all i,

(∂f/∂ri)(r) = (∂f/∂ri)(0) + ∑_j rj ∫_0^1 (∂²f/∂ri∂rj)(sr) ds

Now then,

f(r) − f(0) = ∑_i ri ∫_0^1 [ (∂f/∂ri)(0) + ∑_j rj ∫_0^1 (∂²f/∂ri∂rj)(str) ds ] dt
= ∑_{i=1}^n (∂f/∂ri)(0) ri + ∫_0^1 ∑_{i,j} ri rj ∫_0^1 (∂²f/∂ri∂rj)(str) ds dt

Now continue the above process to obtain the result.
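The first step of this proof can be verified symbolically. The following sympy check is my own (not from the lectures); the polynomial f(x, y) = x²y + y³ is an arbitrary choice.

import sympy as sp

x, y, t = sp.symbols('x y t')
f = x**2 * y + y**3     # a concrete smooth function on R^2

# first step of the proof: f(r) - f(0) = sum_i r_i * integral_0^1 (df/dr_i)(t r) dt
fx = sp.diff(f, x).subs({x: t * x, y: t * y})
fy = sp.diff(f, y).subs({x: t * x, y: t * y})
rhs = x * sp.integrate(fx, (t, 0, 1)) + y * sp.integrate(fy, (t, 0, 1))

print(sp.simplify(rhs - (f - f.subs({x: 0, y: 0}))))   # 0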

Let p ∈ M, let (U, ϕ) be a coordinate chart around p with ϕ(p) = 0, and let X ∈ D1(U). Let ϕ = (x1, . . . , xn), and let ∂/∂xi be defined as before. Set ai := X(xi) ∈ C∞(U). We claim that X = ∑_{i=1}^n ai ∂/∂xi. Indeed, f ◦ ϕ−1 is a smooth function in a neighbourhood of 0 in Rn. By Taylor's theorem,

f ◦ ϕ−1(r) = f ◦ ϕ−1(0) + ∑_{i=1}^n (∂(f ◦ ϕ−1)/∂ri)(0) ri + R1R2

Hence,

f(q) = f(p) + ∑_{i=1}^n (∂f/∂xi)(p) xi(q) + R1(q)R2(q)

for q ∈ U. Now X ∈ D1(U), so by the derivation property, X(R1R2)(p) = X(R1)(p)R2(p) + R1(p)X(R2)(p) = X(R1)(p) · 0 + 0 · X(R2)(p) = 0. Hence,

(Xf)(p) = ∑_{i=1}^n (∂f/∂xi)(p) X(xi)(p) = ∑_{i=1}^n ai(p) ∂/∂xi|p (f)

So Xp = ∑_{i=1}^n ai(p) ∂/∂xi|p ∈ Mp. This shows the following corollary:

Corollary 1.2.13. If X ∈ D1(M), then locally in coordinates, X = ∑_{i=1}^n ai ∂/∂xi, where ai = X(xi). Also, if M is an n-dimensional manifold, then Mp is an n-dimensional vector space, and if x1, . . . , xn are local coordinates around p, then a basis for Mp is given by { ∂/∂x1|p, . . . , ∂/∂xn|p }.

So a vector field X on M can be identified with a family {Xp : p ∈ M} where Xp ∈ Mp for all p ∈ M, and p ↦ Xp(f) is smooth for all f ∈ C∞(M).

Definition 1.2.14. A 1-form on M is a C∞(M)-module map ω : D1(M) → C∞(M) (i.e. ω satisfies ω(fX) = f · ω(X) for all f ∈ C∞(M) and all X ∈ D1(M)). The set of all 1-forms on M is denoted D1(M), and is a C∞(M)-module.

If U ⊂ M is open and X ∈ D1(M), then X restricts to a vector field on U: let X|U(f)(p) = X(f̃)(p), where f̃ ∈ C∞(M) and f̃ = f in a neighbourhood of p (take ρ ∈ C∞(M) with ρ|U^C ≡ 0 and ρ ≡ 1 in that neighbourhood of p, and set f̃ = ρ · f).

If X ∈ D1(U) for an open U ⊂ M, and p ∈ U, then there exists X̃ ∈ D1(M) such that X̃ = X in a neighbourhood of p. To see this, let ρ ≡ 1 on V ⊂ U, a neighbourhood of p, with supp(ρ) ⊂ U. Then for f ∈ C∞(M), let

X̃(f)(q) := ρ(q) · X(f|U)(q) if q ∈ U,  and  X̃(f)(q) := 0 if q ∉ U.

Vector fields in coordinates. Suppose X ∈ D1(M), and (U, ϕ) is a coordinate chart, ϕ = (x1, . . . , xn). If f ∈ C∞(M) and p ∈ U, then X(f)(p) = X|U(f|U)(p). Now for each p ∈ U,

f(q) = f(p) + ∑_{i=1}^n (∂f/∂xi)(p)(xi(q) − pi) + ψ(q)

where ψ vanishes to order 2 at p, by Taylor's Theorem (here pi = xi(p)). Hence,

X(f)(p) = ∑_{i=1}^n (∂f/∂xi)(p) X(xi − pi)(p) = ∑_{i=1}^n (∂f/∂xi)(p) X(xi)(p) = ∑_{i=1}^n ai(p) (∂f/∂xi)(p)

where ai = X(xi). Since X(f) ∈ C∞(M), p ↦ ai(p) is smooth on U, so X|U = ∑_i ai ∂/∂xi, where a1, . . . , an ∈ C∞(U).

Remark. The C∞(M)-module D1(M) is "locally free" (finitely generated projective). In general, if A is a commutative ring, then an A-module M is free if M ≅ A ⊕ · · · ⊕ A ≅ A^n. The above shows that D1(U) is a free C∞(U)-module with generators ∂/∂x1, . . . , ∂/∂xn for any coordinate chart (U, ϕ = (x1, . . . , xn)). In particular, D1(Rn) is a free C∞(Rn)-module, generated by ∂/∂r1, . . . , ∂/∂rn.

Recall that the tangent space Mp to M at a point p is the set of point derivations L : C∞(p) → R (i.e. L is a linear map satisfying L(f1f2) = f1(p)L(f2) + L(f1)f2(p)).

Lemma 1.2.15. If (U, ϕ = (x1, . . . , xn)) is a coordinate chart around p ∈ M, then ∂/∂xi|p : C∞(p) → R, f ↦ (∂f/∂xi)(p), is in Mp. If L ∈ Mp, then there exist unique a1, . . . , an ∈ R such that L(f) = ∑_i ai (∂f/∂xi)(p). Therefore, Mp is an n-dimensional vector space for all p ∈ M. Moreover, if X ∈ D1(M), then Xp : f ↦ X(f)(p) belongs to Mp. This defines a map D1(M) → Mp which is onto.

Recall that D1(M), the C∞(M)-module of 1-forms, is the dual of the module D1(M) of vector fields; that is, a 1-form is a linear map ω : D1(M) → C∞(M) satisfying ω(f · X) = fω(X) for all f ∈ C∞(M) and all X ∈ D1(M).

Lemma 1.2.16. If X = Y in a neighbourhood of p, then ω(X) = ω(Y) in a neighbourhood of p, for all ω ∈ D1(M).

Proof. Suppose X = 0 in a neighbourhood of p. Let ρ ∈ C∞(M) satisfy ρ ≡ 0 in that neighbourhood of p and ρ · X = X. Then ω(X) = ω(ρX) = ρω(X), which vanishes near p.

Lemma 1.2.17. A 1-form ω on M restricts to a 1-form on any open U ⊂M .

Proof. If X ∈ D1(U) and p ∈ U, let X̃ ∈ D1(M) be such that X̃ = X near p, and set ω|U(X) = ω(X̃) near p. By Lemma 1.2.16, this does not depend on the choice of extension X̃.

Lemma 1.2.18. If X, Y ∈ D1(M) and Xp = Yp ∈ Mp, then ω(X)(p) = ω(Y)(p) for all ω ∈ D1(M).

Proof. It suffices to show Xp = 0 implies ω(X)(p) = 0. In a neighbourhood U of p, X|U = ∑_i ai ∂/∂xi, where (x1, . . . , xn) are coordinates on U. Since Xp = 0, a1(p) = · · · = an(p) = 0, and

ω(X)(p) = ω|U(X|U)(p) = ω|U(∑_i ai ∂/∂xi)(p) = ∑_i ai(p) ω|U(∂/∂xi)(p) = 0.

Definition 1.2.19. The cotangent space at p is defined to be the dual space of Mp, that is, M∗p.

There is a natural map D1(M) → M∗p, ω ↦ ωp, where ωp(Xp) := ω(X)(p) for X ∈ D1(M). This is well defined because if Xp = Yp, then ω(X)(p) = ω(Y)(p) by Lemma 1.2.18.

Lemma 1.2.20. If U ⊂ M is open, ω ∈ D1(U), and p ∈ U, then there exists a neighbourhood V ⊂ U of p and ω̃ ∈ D1(M) such that ω̃|V = ω|V.

Proof. Choose ρ ∈ C∞(M) with ρ ≡ 1 on V and supp ρ ⊂ U, and set ω̃(X) := ρ · ω(X|U), extended by zero outside U.

Lemma 1.2.21. Let D1(p) := {ωp : ω ∈ D1(M)} ⊂ M∗p. Then D1(p) = M∗p.

Proof. Let (x1, . . . , xn) be coordinates on U, a neighbourhood of p. The space D1(U) has C∞(U)-module basis ∂/∂x1, . . . , ∂/∂xn. Let ω1, . . . , ωn be the dual basis in D1(U); thus ωj(∂/∂xi) = δji. Now if L ∈ M∗p, and ℓj := L(∂/∂xj|p), we may define θ = ∑_j ℓj ωj ∈ D1(U). Then θ(∂/∂xi) = ∑_j ℓj ωj(∂/∂xi) = ℓi. So, extending θ to a 1-form on M near p (Lemma 1.2.20), θp = L.

Example 1.2.22. If f ∈ C∞(M), then (df)(X) := X(f) defines a 1-form on M .

Recall. D1(M) (the C∞(M)-module of vector fields on M) is C∞(M)-dual to D1(M) (the C∞(M)-module of 1-forms on M). A vector field X on M is equivalent to a map p ↦ Xp ∈ Mp such that p ↦ Xp(f) is smooth for all f ∈ C∞(M). Locally, every vector field is ∑ ai ∂/∂xi in a local coordinate chart. Similarly, any ω ∈ D1(M) is equivalent to an assignment p ↦ ωp ∈ M∗p such that p ↦ ωp(Xp) is smooth for all X ∈ D1(M). Locally, every ω = ∑ ai dxi, where dxi is the 1-form on U dual to ∂/∂xi, i.e. dxi(∂/∂xj) = δij. More generally, for any f ∈ C∞(M), there is a 1-form df defined by df(X) = X(f).

Definition 1.2.23. Let V and W be vector spaces. The tensor product of V and W is a vector space, denoted V ⊗ W, together with a bilinear map V × W → V ⊗ W, (v, w) ↦ v ⊗ w, satisfying the following universal property: if M is a vector space and f : V × W → M is a bilinear map, then there is a unique linear map f̃ : V ⊗ W → M with f̃(v ⊗ w) = f(v, w), i.e. f factors as f̃ composed with the canonical map V × W → V ⊗ W.

Remark. Since the map V × W → V ⊗ W, (v, w) ↦ v ⊗ w, is bilinear, we have that (αv + βv′) ⊗ w = α(v ⊗ w) + β(v′ ⊗ w) and v ⊗ (αw + βw′) = α(v ⊗ w) + β(v ⊗ w′).

We can construct V ⊗ W as follows: let F_{V×W} denote the free vector space with basis V × W, and let L denote the subspace of F_{V×W} spanned by the elements of the form (αv + βv′, w) − α(v, w) − β(v′, w) and (v, αw + βw′) − α(v, w) − β(v, w′), where α, β are scalars, v, v′ ∈ V and w, w′ ∈ W. Then we can let V ⊗ W := F_{V×W}/L. Elements of V ⊗ W in this construction are finite linear combinations ∑ vj ⊗ wj, where vj ⊗ wj is the image of (vj, wj) in the quotient. Note that this construction has the desired universal property: if f : V × W → M is bilinear, then we can define a linear map f̃ by f̃(v ⊗ w) = f(v, w).

Exercise 1.2.24. Show that if {e1, . . . , en} is a basis for V and {f1, . . . , fm} is a basis for W, then {ei ⊗ fj : 1 ≤ i ≤ n, 1 ≤ j ≤ m} is a basis for V ⊗ W. In particular, the tensor product of two finite-dimensional vector spaces is finite-dimensional, and Rn ⊗ Rm ≅ Rnm.
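One concrete model of Rn ⊗ Rm ≅ Rnm sends ei ⊗ fj to the Kronecker product of the corresponding standard basis vectors. A small numpy illustration (my own, not from the notes; n = 2, m = 3 are arbitrary):

import numpy as np

n, m = 2, 3
# identify R^n (x) R^m with R^(nm) via the Kronecker product of coordinate vectors
basis = [np.kron(np.eye(n)[i], np.eye(m)[j]) for i in range(n) for j in range(m)]

# the n*m vectors e_i (x) f_j are linearly independent, hence a basis of R^(nm)
print(np.linalg.matrix_rank(np.stack(basis)))   # 6 == n*m

# bilinearity of (v, w) |-> v (x) w
v, v2, w = np.random.rand(n), np.random.rand(n), np.random.rand(m)
print(np.allclose(np.kron(v + 2*v2, w), np.kron(v, w) + 2*np.kron(v2, w)))   # True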

Proposition 1.2.25. If V is finite-dimensional, then V ⊗ V∗ ≅ End(V) via the map v ⊗ f ↦ T_{v⊗f}, where T_{v⊗f}(u) := f(u)v.

Definition 1.2.26. Let D^r_s(M) be the space of all C∞(M)-multilinear maps

D1(M) × · · · × D1(M) × D1(M) × · · · × D1(M) → C∞(M)

with r factors of the module D1(M) of 1-forms followed by s factors of the module D1(M) of vector fields. These maps are called tensor fields on M of type (r, s). If p ∈ M, then let D^r_s(p) denote the set of all multilinear maps

M∗p × · · · × M∗p (r times) × Mp × · · · × Mp (s times) → R

Note that any T ∈ D^r_s(M) restricts to an element Tp ∈ D^r_s(p), given by

Tp((θp)1, . . . , (θp)r, (Xp)1, . . . , (Xp)s) = T(θ1, . . . , θr, X1, . . . , Xs)(p)

where each θi is an extension of (θp)i to a 1-form on M and each Xi is an extension of (Xp)i to a vector field on M.

Example 1.2.27. If θ1, . . . , θs ∈ D1(M) are 1-forms and X1, . . . , Xr ∈ D1(M) are vector fields, then we may define T ∈ D^r_s(M) by

T(ω1, . . . , ωr, Y1, . . . , Ys) := ω1(X1) ω2(X2) · · · ωr(Xr) θ1(Y1) θ2(Y2) · · · θs(Ys)

Remark. For every p ∈ M, we have that

D^r_s(p) ≅ Mp ⊗ · · · ⊗ Mp (r times) ⊗ M∗p ⊗ · · · ⊗ M∗p (s times)

via the map X1 ⊗ · · · ⊗ Xr ⊗ θ1 ⊗ · · · ⊗ θs ↦ T, where T is defined as in Example 1.2.27 above.

Example 1.2.28. If (V, 〈·, ·〉) is a (real) inner product space, then 〈·, ·〉 gives a tensor of type (0, 2), V × V → R. Similarly, if (U, ϕ) is a local coordinate chart and, for each p ∈ U, 〈·, ·〉p is an inner product on Mp, then there is a tensor T : D1(U) × D1(U) → C∞(U) of type (0, 2), given by T(X, Y) = [p ↦ 〈Xp, Yp〉p]. Note that this particular tensor satisfies T(X, Y) = T(Y, X), T(X, X) ≥ 0, and T(X, X) = 0 =⇒ X = 0.

Example 1.2.29. If V is a finite-dimensional vector space with basis {e1, . . . , en} and V∗ has dual basis {f1, . . . , fn}, then we may consider the tensor product

W := V ⊗ · · · ⊗ V (r times) ⊗ V∗ ⊗ · · · ⊗ V∗ (s times) = V^{⊗r} ⊗ (V∗)^{⊗s}

Then a basis for W consists of the elements e_{i1} ⊗ · · · ⊗ e_{ir} ⊗ f_{j1} ⊗ · · · ⊗ f_{js}, where i1, . . . , ir, j1, . . . , js ∈ {1, 2, . . . , n}. Therefore dim W = n^{r+s}.

Definition 1.2.30 (Contractions). In the situation of the above example, let 1 ≤ α ≤ r and 1 ≤ β ≤ s. Then we define C_{αβ} : V^{⊗r} ⊗ (V∗)^{⊗s} → V^{⊗(r−1)} ⊗ (V∗)^{⊗(s−1)} by

C_{αβ}(e_{i1} ⊗ · · · ⊗ e_{ir} ⊗ f_{j1} ⊗ · · · ⊗ f_{js}) := f_{jβ}(e_{iα}) · e_{i1} ⊗ · · · ê_{iα} · · · ⊗ e_{ir} ⊗ f_{j1} ⊗ · · · f̂_{jβ} · · · ⊗ f_{js}

where the ˆ above e_{iα} and f_{jβ} denotes that these factors are omitted.

Recall from Proposition 1.2.25 above that V ⊗ V∗ ≅ End(V). So what does C_{11} : V ⊗ V∗ → R correspond to under this isomorphism? Let {e1, . . . , en} be a basis for V, and {f1, . . . , fn} be the dual basis for V∗ (i.e. fi(ej) = δij). If T ∈ End(V) with (Tij) the matrix of T with respect to the basis {e1, . . . , en} (so that Tej = ∑_i Tij ei for each 1 ≤ j ≤ n), recall that the isomorphism from Proposition 1.2.25 is v ⊗ f ↔ T_{v,f}, where T_{v,f}(w) = f(w)v (call this isomorphism ϕ). Then T = ∑_{i,j} Tij T_{ei,fj}, which under this isomorphism corresponds to ∑_{i,j} Tij ei ⊗ fj ∈ V ⊗ V∗. Therefore,

C_{11}(ϕ^{−1}(T)) = C_{11}(∑_{i,j} Tij ei ⊗ fj) = ∑_{i,j} Tij C_{11}(ei ⊗ fj) = ∑_{i,j} Tij fj(ei) = ∑_{i,j} Tij δij = ∑_i Tii = trace(T).

So C_{11} corresponds to the trace operator.

Denote the set of invertible elements of End(V) by GL(V). Then GL(V) acts naturally on V by translation, that is, (g, v) ↦ g(v). This induces an action of GL(V) on V∗ by g(f) := f ◦ g^{−1}. These two actions give rise to an action of GL(V) on V ⊗ V∗, given by g(v ⊗ f) := g(v) ⊗ g(f) = g(v) ⊗ (f ◦ g^{−1}). Similarly, GL(V) acts on V^{⊗r} ⊗ (V∗)^{⊗s} for r, s ≥ 0.

Exercise 1.2.31. Show that C_{αβ} intertwines the GL(V) actions on V^{⊗r} ⊗ (V∗)^{⊗s} and on V^{⊗(r−1)} ⊗ (V∗)^{⊗(s−1)}, i.e. C_{αβ}(g(T)) = g(C_{αβ}(T)) for all g ∈ GL(V).

For example, C_{11}(g(v ⊗ f)) = C_{11}(g(v) ⊗ (f ◦ g^{−1})) = (f ◦ g^{−1})(g(v)) = f(v) = C_{11}(v ⊗ f). So C_{11} is invariant under the GL(V) action on V ⊗ V∗, i.e. C_{11}(gT) = C_{11}(T). Also, under the isomorphism V ⊗ V∗ ≅ End(V), the GL(V) action on V ⊗ V∗ corresponds to the action of GL(V) on End(V) by conjugation, that is, g · T := gTg^{−1}. So the invariance of the contraction C_{11} under the GL(V) action corresponds to the invariance of the trace under conjugation (i.e. trace(T) = trace(gTg^{−1}) for all g ∈ GL(V)).
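A small numerical illustration of these last two facts (my own sketch, not from the notes; the random matrices and the seed are arbitrary): the contraction of ∑ Tij ei ⊗ fj is ∑ Tii, and the trace is unchanged under conjugation.

import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n))
g = rng.standard_normal((n, n))          # generically invertible

# C11 applied to sum_{i,j} T_ij e_i (x) f_j is sum_{i,j} T_ij f_j(e_i) = sum_i T_ii
c11 = sum(T[i, j] * (1.0 if i == j else 0.0) for i in range(n) for j in range(n))
print(np.isclose(c11, np.trace(T)))                                    # True

# invariance of the trace under conjugation g T g^{-1}
print(np.isclose(np.trace(g @ T @ np.linalg.inv(g)), np.trace(T)))     # True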

Remark. As V is finite-dimensional, GL(V) is dense in End(V), when End(V) is given the operator norm.

Now contracting at any point p ∈ M gives a contraction C_{αβ} : D^r_s(p) → D^{r−1}_{s−1}(p), and contracting at each point of M induces a contraction C_{αβ} : D^r_s(M) → D^{r−1}_{s−1}(M).

1.3 Grassmann Algebra

Fix a manifold M . Let U0(M) = C∞(M), and then for each k ≥ 1, let Uk(M) denote the spaceof alternating C∞(M)-multilinear maps T : D1(M)× · · · ×D1(M)︸ ︷︷ ︸

k times

→ C∞(M). What is means

for T to be alternating is that, for any permutation σ ∈ Sk, we have that:

T (Xσ−1(1), · · · , Xσ−1(k)) = sgn(σ)T (X1, . . . , Xk)

Now define a map A : D0k(M)→ Uk(M) by

A(T )(X1, . . . , Xk) :=1

k!

∑σ∈Sk

sgn(σ)T (Xσ−1(1), . . . , Xσ−1(k))

One can verify that A is a projection (i.e. A = A∗ = A2) which maps D0k(M) onto Uk(M).

Definition 1.3.1. The elements of Uk(M) are called differential k-forms on M .

Let U∗(M) := ⊕_{k=0}^∞ Uk(M). Then U∗(M) is an algebra under the wedge product, denoted ∧ and defined by α ∧ β := A(α ⊗ β), where (α ⊗ β)(X1, . . . , Xr, Y1, . . . , Ys) = α(X1, . . . , Xr) β(Y1, . . . , Ys). Moreover, the wedge product satisfies Uk(M) ∧ Ul(M) ⊂ U_{k+l}(M) (making U∗(M) into a graded algebra), and if α ∈ Ur(M), β ∈ Us(M), then α ∧ β = (−1)^{rs} β ∧ α.

Differential forms in coordinates: Suppose U ⊂ M is a coordinate neighbourhood with coordinates x1, . . . , xn. Let dx1, . . . , dxn ∈ D1(U) be the corresponding 1-forms. Then dxi(∂/∂xj) = (∂/∂xj)(xi) = δij. So {dx1, . . . , dxn} is the C∞(U)-module basis for D1(U) dual to the C∞(U)-module basis {∂/∂x1, . . . , ∂/∂xn} for D1(U). The following table summarizes what k-forms on U look like, for k = 0, 1, 2, 3.

0-forms on U:  f ∈ C∞(U)
1-forms on U:  ∑_i ai dxi
2-forms on U:  ∑_{i<j} aij dxi ∧ dxj
3-forms on U:  ∑_{i<j<k} aijk dxi ∧ dxj ∧ dxk

Remark. If U ⊂ M is the domain of a coordinate chart with coordinates x1, . . . , xn, then for all i, dxi ∈ D1(U) = U1(U), and dxi(X) = X(xi) for X ∈ D1(U). Note dx1, . . . , dxn is the dual basis to ∂/∂x1, . . . , ∂/∂xn. Indeed ∂xi/∂xj = δij, because ∂f/∂xj = ∂/∂rj (f ◦ ϕ−1), so ∂xi/∂xj = ∂/∂rj (xi ◦ ϕ−1) = ∂ri/∂rj = δij.

Note that for 2-forms, we have:

(dxi ∧ dxj)(X1, X2) = det [ dxi(X1)  dxi(X2) ; dxj(X1)  dxj(X2) ] = dxi(X1)dxj(X2) − dxi(X2)dxj(X1)

In general, for k-forms, we have:

(dx_{r1} ∧ · · · ∧ dx_{rk})(X1, . . . , Xk) = det[ dx_{ri}(Xj) ]

Note importantly that dx_{ri}(Xj) = Xj(x_{ri}), as expected.
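A quick numerical check of the determinant formula for a 2-form on R3 (my own sketch, not from the notes; the two constant vector fields are arbitrary):

import numpy as np

# constant vector fields on R^3, written in the basis d/dx1, d/dx2, d/dx3
X1 = np.array([1.0, 2.0, 3.0])
X2 = np.array([0.0, 1.0, 4.0])

# dx_i(X) just reads off the ith component of X
def dx(i, X):
    return X[i]

# (dx1 ^ dx2)(X1, X2) = det [ dx1(X1) dx1(X2) ; dx2(X1) dx2(X2) ]
wedge_val = np.linalg.det(np.array([[dx(0, X1), dx(0, X2)],
                                    [dx(1, X1), dx(1, X2)]]))
direct = dx(0, X1) * dx(1, X2) - dx(0, X2) * dx(1, X1)
swapped = np.linalg.det(np.array([[dx(0, X2), dx(0, X1)],
                                  [dx(1, X2), dx(1, X1)]]))
print(np.isclose(wedge_val, direct))     # True
print(np.isclose(wedge_val, -swapped))   # True: dx1 ^ dx2 is antisymmetric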

Remark. If ω ∈ Uk(U), p ∈ U, and p ∈ V ⊂ V̄ ⊂ U, let χ ∈ C∞(M) with χ = 1 on V and supp(χ) ⊂ U. Then

ω̃ := χω on U,  and  ω̃ := 0 elsewhere,

defines ω̃ ∈ Uk(M). Thus a local k-form ω on U extends to a global k-form ω̃ on M, agreeing with ω on V.

A k-form ω on M restricts, for each p, to an alternating multilinear function Mp × · · · × Mp (k times) → R. This yields an element of what we call

Λk(V∗) := { alternating R-multilinear functionals V × · · · × V (k times) → R }

with V = Mp. We define the exterior algebra of V to be the space

Λ∗(V∗) := ⊕_{k=0}^∞ Λk(V∗)

Let e1, . . . , en be a basis for V, and e∗1, . . . , e∗n be the dual basis for V∗. For any k1, . . . , kr, set (e∗_{k1} ∧ · · · ∧ e∗_{kr})(v1, . . . , vr) := det[e∗_{ki}(vj)]. Then e∗_{k1} ∧ · · · ∧ e∗_{kr} ∈ Λr(V∗).

Proposition 1.3.2. {e∗_{k1} ∧ · · · ∧ e∗_{kr} : k1 < · · · < kr} forms a basis for Λr(V∗), for r = 0, 1, 2, . . ..

If f1, . . . , fr ∈ V∗, we may similarly define (f1 ∧ · · · ∧ fr)(v1, . . . , vr) := det[fi(vj)]. Notice that if fi = fj for some i ≠ j, then f1 ∧ · · · ∧ fr = 0, because the matrix [fi(vj)] would have a repeated row (thus zero determinant). Hence if f1 is a linear combination of the fj's (j ≥ 2), say f1 = ∑_{j=2}^r aj fj, then

f1 ∧ · · · ∧ fr = ∑_{j=2}^r aj fj ∧ f2 ∧ · · · ∧ fr = 0

Therefore f1 ∧ · · · ∧ fr = 0 unless f1, . . . , fr are linearly independent.

Corollary 1.3.3. Λr(V∗) = {0} if r > n, and Λn(V∗) ≅ R, spanned by the element f1 ∧ · · · ∧ fn for any basis f1, . . . , fn of V∗.

Indeed, if ω ∈ Λr(V∗) (an alternating multilinear functional V × · · · × V → R), e1, . . . , en is a basis for V, e∗1, . . . , e∗n is the dual basis for V∗, and we set ω_{k1,...,kr} = ω(e_{k1}, . . . , e_{kr}), then

ω = ∑_{k1<···<kr} ω_{k1,...,kr} e∗_{k1} ∧ · · · ∧ e∗_{kr}

Recall that we define the alternation map A : D^0_s(M) → Us(M) by:

Aω := (1/s!) ∑_{σ∈Ss} sgn(σ) ω ◦ σ^{−1}

and the wedge α ∧ β := A(α ⊗ β), where α is an r-form, β is an s-form, and

(α ⊗ β)(X1, . . . , Xr, Y1, . . . , Ys) = α(X1, . . . , Xr) β(Y1, . . . , Ys)

Theorem 1.3.4. {e∗_{k1} ∧ · · · ∧ e∗_{kr} : k1 < · · · < kr} is a basis for Λr(V∗), so dim(Λr(V∗)) = (n choose r) and dim(Λ∗(V∗)) = 2^n.

Corollary 1.3.5. U∗(M) is a graded algebra under the wedge product. Moreover, if α ∈ Uk(M) and β ∈ Uℓ(M), then α ∧ β = (−1)^{kℓ} β ∧ α.

Locally, k-forms are all of the form ∑_{i1<···<ik} α_{i1,...,ik} dx_{i1} ∧ · · · ∧ dx_{ik}.

1.4 Exterior Differentiation

Locally, we have that

d(∑ a_{i1,...,ik} dx_{i1} ∧ · · · ∧ dx_{ik}) = ∑ ∑_{r=1}^n (∂a_{i1,...,ik}/∂xr) dxr ∧ dx_{i1} ∧ · · · ∧ dx_{ik}

For example, we can verify that for f ∈ C∞(M) = U0(M), df ∈ U1(M) has the same meaning as before, that is, df(X) = X(f). In particular, if x1, . . . , xn are local coordinates, then dx1, . . . , dxn is a local basis for the 1-forms, so df = ∑_{i=1}^n ai dxi. Now df(∂/∂xj) = ∂f/∂xj, and ∑_{i=1}^n ai dxi(∂/∂xj) = ∑_{i=1}^n ai δij = aj. Thus aj = ∂f/∂xj, so locally, df = ∑_{i=1}^n (∂f/∂xi) dxi.

Theorem 1.4.1. There exists a unique map d : U∗(M) → U∗(M) such that:

(1) d maps Us(M) into U_{s+1}(M) for all s ≥ 0.

(2) d² = 0.

(3) df(X) = X(f) for all f ∈ C∞(M) and X ∈ D1(M).

(4) d(ω1 ∧ ω2) = dω1 ∧ ω2 + (−1)^{deg(ω1)} ω1 ∧ dω2.

Proof. Note that if d satisfies (1)-(4), and if ω ∈ Us(M) is such that ω = 0 on U for some open U ⊂ M, then dω = 0 on U. Indeed, let p ∈ U, let p ∈ V ⊂ V̄ ⊂ U, and let χ ∈ C∞(M) with χ = 0 on V and χ = 1 on U^C. Then χω = ω, so dω = d(χω) = dχ ∧ ω + χ dω. On V we have χ = 0 and dχ = 0, so dω(p) = 0. Since this holds for all p ∈ U, dω = 0 on U. Now if ω ∈ Us(M), and x1, . . . , xn are coordinates on U ⊂ M, then on U, ω = ∑ a_{k1,...,ks} dx_{k1} ∧ · · · ∧ dx_{ks}, so

dω = ∑ d(a_{k1,...,ks} dx_{k1} ∧ · · · ∧ dx_{ks})
= ∑ d(a_{k1,...,ks}) ∧ dx_{k1} ∧ · · · ∧ dx_{ks} + ∑ a_{k1,...,ks} d(dx_{k1} ∧ · · · ∧ dx_{ks})
= ∑ d(a_{k1,...,ks}) ∧ dx_{k1} ∧ · · · ∧ dx_{ks}

where the second sum in the second line is 0 by inductively applying condition (4), because d² = 0. This implies uniqueness. For existence, d is defined by:

dω(X1, . . . , X_{p+1}) = 1/(p+1) ( ∑_{i=1}^{p+1} (−1)^{i+1} Xi(ω(X1, . . . , X̂i, . . . , X_{p+1})) + ∑_{i<j} (−1)^{i+j} ω([Xi, Xj], X1, . . . , X̂i, . . . , X̂j, . . . , X_{p+1}) )

where X̂i indicates that Xi is omitted. In particular,

d(f dx_{k1} ∧ · · · ∧ dx_{ks}) = ∑_{i=1}^n (∂f/∂xi) dxi ∧ dx_{k1} ∧ · · · ∧ dx_{ks}

One can verify d satisfies conditions (1)-(4).

Locally, if x1, . . . , xn are coordinates on U ⊂ M, then dxi ∈ U1(U), and dx1(p), . . . , dxn(p) span M∗p for each p ∈ U. More generally, the forms dx_{i1} ∧ · · · ∧ dx_{ik} span Uk(U) as a C∞(U)-module. So if ω ∈ Uk(M), then ω|U ∈ Uk(U), and locally, ω = ∑ f_{i1,...,ik} dx_{i1} ∧ · · · ∧ dx_{ik} and

dω = ∑_{i1<···<ik} (df_{i1,...,ik}) ∧ dx_{i1} ∧ · · · ∧ dx_{ik} = ∑_{i1<···<ik} ∑_{j=1}^n (∂f_{i1,...,ik}/∂xj) dxj ∧ dx_{i1} ∧ · · · ∧ dx_{ik}

Note that if α ∈ Uk(M) and β ∈ Uℓ(M), then d(α ∧ β) = dα ∧ β + (−1)^k α ∧ dβ by condition (4) of Theorem 1.4.1 above. Also, since dxi ∧ dxj = −dxj ∧ dxi, we can show d² = 0 as follows: locally, df = ∑_j (∂f/∂xj) dxj, so, since d(dxj) = 0 for all j,

d²f = ∑_j d( (∂f/∂xj) dxj ) = ∑_j [ d(∂f/∂xj) ∧ dxj + (∂f/∂xj) d(dxj) ]
= ∑_j d(∂f/∂xj) ∧ dxj
= ∑_{i,j} (∂²f/∂xi∂xj) dxi ∧ dxj
= ∑_{i≠j} (∂²f/∂xi∂xj) dxi ∧ dxj
= ∑_{i<j} (∂²f/∂xi∂xj) dxi ∧ dxj + ∑_{j<i} (∂²f/∂xi∂xj) dxi ∧ dxj
= ∑_{i<j} (∂²f/∂xi∂xj) dxi ∧ dxj + ∑_{i<j} (∂²f/∂xj∂xi) dxj ∧ dxi
= ∑_{i<j} (∂²f/∂xi∂xj) dxi ∧ dxj + ∑_{i<j} (∂²f/∂xj∂xi) (−dxi ∧ dxj)
= ∑_{i<j} ( ∂²f/∂xi∂xj − ∂²f/∂xj∂xi ) dxi ∧ dxj
= 0

because f is smooth, so its mixed partial derivatives are equal.
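A tiny symbolic check of the key fact used here, equality of mixed partials (and hence d²f = 0), for a concrete f (my own choice, not from the notes):

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.sin(x1 * x2) + sp.exp(x2 * x3**2)    # any smooth function works

# the coefficient of dx_i ^ dx_j in d(df) is f_{x_i x_j} - f_{x_j x_i}
coeffs = [sp.diff(f, xi, xj) - sp.diff(f, xj, xi)
          for xi, xj in [(x1, x2), (x1, x3), (x2, x3)]]
print([sp.simplify(c) for c in coeffs])   # [0, 0, 0]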

Remark. Since d² = 0, the map d : U∗(M) → U∗(M) satisfies ran(d) ⊂ ker(d). The de Rham cohomology of M is then defined by:

H∗_dR(M) := ker(d)/ran(d) = ⊕_{k=0}^n H^k_dR(M)

The elements of ker(d) are called closed forms and the elements of ran(d) are called exact forms. Some facts about de Rham cohomology:

• If M is compact, H∗_dR(M) is finite-dimensional.

• M ↦ H∗_dR(M) is homotopy invariant.

• H^0_dR(M) = R^{#{connected components of M}}

1.5 Mappings

Definition 1.5.1. Let M and N be smooth manifolds, and Φ : M → N. Φ is smooth at p ∈ M if g ◦ Φ ∈ C∞(p) for all g ∈ C∞(Φ(p)). Φ is smooth if it is smooth at every p ∈ M. Φ is a diffeomorphism if Φ is bijective, and Φ and Φ^{−1} are both smooth.

Exercise 1.5.2. If (U, ϕ = (x1, . . . , xn)) and (V, ψ = (y1, . . . , ym)) are coordinate charts around p and q = Φ(p) respectively, then Φ is smooth at p if and only if ψ ◦ Φ ◦ ϕ^{−1} : Rn → Rm is smooth at ϕ(p).

Proposition 1.5.3. Let Φ : M → N be smooth. Then Φ induces a family of linear maps dΦp : Mp → NΦ(p), given by dΦp(Xp)(f) = Xp(f ◦ Φ) for all f ∈ C∞(Φ(p)).

Lemma 1.5.4 (Functoriality). Suppose Φ : M → N and Ψ : N → L are smooth maps. Then for all p ∈ M, the triangle formed by dΦp : Mp → NΦ(p), dΨΦ(p) : NΦ(p) → LΨ(Φ(p)), and d(Ψ ◦ Φ)p : Mp → LΨ(Φ(p)) commutes. In other words, dΨΦ(p) ◦ dΦp = d(Ψ ◦ Φ)p.

Proof. For all Xp ∈ Mp and f ∈ C∞(Ψ(Φ(p))),

(dΨΦ(p) ◦ dΦp)(Xp)(f) = dΦp(Xp)(f ◦ Ψ) = Xp(f ◦ Ψ ◦ Φ) = d(Ψ ◦ Φ)p(Xp)(f)

We now consider how to obtain the chain rule from calculus given the above. Let Φ : M → N be smooth, p ∈ U ⊂ M, let x1, . . . , xn be coordinates on U, and let y1, . . . , ym be coordinates on V, where Φ(p) ∈ V. Then Mp has basis ∂/∂x1|p, . . . , ∂/∂xn|p, and NΦ(p) has basis ∂/∂y1|Φ(p), . . . , ∂/∂ym|Φ(p). In particular, Mp ≅ Rn and NΦ(p) ≅ Rm. Thus Φ induces a linear map Rn → Rm, whose matrix representation will be denoted [Φ]. The jth column of [Φ] consists of the coefficients of the vector dΦp(∂/∂xj|p) with respect to the basis ∂/∂y1|Φ(p), . . . , ∂/∂ym|Φ(p). Computing this vector, we see:

dΦp(∂/∂xj|p) = ∑_{i=1}^m dΦp(∂/∂xj|p)(yi) · ∂/∂yi|Φ(p) = ∑_{i=1}^m ∂/∂xj|p (yi ◦ Φ) · ∂/∂yi|Φ(p) = ∑_{i=1}^m (∂(yi ◦ Φ)/∂xj)(p) · ∂/∂yi|Φ(p)

So the jth column of [Φ] is ( ∂(y1 ◦ Φ)/∂xj (p), . . . , ∂(ym ◦ Φ)/∂xj (p) )^T. Note that [Φ] is commonly called the Jacobian matrix.
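As a concrete illustration (my own, not from the notes; the polar-coordinate map and symbol names are my choices), the Jacobian matrix [Φ] of Φ(r, θ) = (r cos θ, r sin θ) can be computed symbolically:

import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# a smooth map Phi : R^2 -> R^2, with coordinates (x1, x2) = (r, theta) and (y1, y2) = Phi
Phi = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# [Phi]: entry (i, j) is d(y_i o Phi)/dx_j, i.e. the jth column is dPhi_p(d/dx_j)
J = Phi.jacobian([r, theta])
print(J)                         # Matrix([[cos(theta), -r*sin(theta)], [sin(theta), r*cos(theta)]])
print(sp.simplify(J.det()))      # r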

Corollary 1.5.5 (The Chain Rule). Let Φ : M → N and Ψ : N → L be smooth maps, p ∈ M, and let x1, . . . , xn be coordinates around p, y1, . . . , ym coordinates around Φ(p), and z1, . . . , zr coordinates around Ψ(Φ(p)). Then, identifying Mp ≅ Rn, NΦ(p) ≅ Rm, and LΨ(Φ(p)) ≅ Rr via these coordinates, the Jacobian matrices satisfy [Ψ ◦ Φ]_{zx} = [Ψ]_{zy}[Φ]_{yx}; equivalently, the diagram formed by dΦp, dΨΦ(p), d(Ψ ◦ Φ)p and the three coordinate identifications commutes.

Example 1.5.6. Suppose γ : (−ε, ε) → M is a smooth curve, γ(0) = p ∈ U, with x1, . . . , xn coordinates on U. Then,

dγ0(d/dt|0) = ∑_i (d(xi ◦ γ)/dt)(0) · ∂/∂xi|p = γ′(0)

Example 1.5.7. We'll now give some more examples of manifolds. Of course, the obvious example is Rn, and also recall the example of Sn, the n-sphere, which can be covered by 2(n + 1) hemisphere charts, with parametrizations (x1, . . . , xn) ↦ (x1, . . . , x_{i−1}, ±√(1 − ∑_j xj²), xi, . . . , xn). Another example is the n-torus, Tn := T × · · · × T (n times), where T = S1. However, we typically use different charts, so that (−1, 1) ∋ t ↦ e^{2πit} ∈ T, and then T can be covered by 2 charts, as opposed to the 4 you would use for S1 (covering the 4 hemispheres). Note that the transition functions on Tn are translations - for example, a second chart so that the two charts together cover T could be given by (−1, 1) ∋ t ↦ e^{2πi(t+1/4)} ∈ T, translating by π/2, or a quarter of the circle. Similarly, for the 2-torus T2, charts can be given as (s, t) ↦ (e^{2πis}, e^{2πit}). Also, by using the same charts as for the 1-torus and the identity chart on R, we can consider the manifold T × R (which is a cylinder) and cover it using two charts, as in the case of T.

We now give a short remark about tangent vectors. Let M be a manifold, and p ∈ M. Suppose γ : (−ε, ε) → M is a smooth curve with γ(0) = p. Then the map dγ0 : T0(−ε, ε) → Mγ(0) = Mp sends d/dt|0 to dγ0(d/dt|0) ∈ Mp, which is the tangent vector γ′(0) to γ at 0. By definition,

dγ0(d/dt|0)(f) = d/dt|0 (f ◦ γ)

Suppose x1, . . . , xn are coordinates around p. By the Chain Rule,

d/dt|_{t=0} (f ◦ γ) = ∑_{i=1}^n (∂f/∂xi)(p) γ′i(0)

where γi(t) = xi ◦ γ(t). Thus, γ′(0)(f) = ∑_{i=1}^n (∂f/∂xi)(γ(0)) γ′i(0), or γ′(0) = ∑_{i=1}^n γ′i(0) ∂/∂xi|p.

Exercise 1.5.8. Suppose ∑_{i=1}^n ai ∂/∂xi|p ∈ Mp. Show there exists a smooth curve γ : (−ε, ε) → M, γ(0) = p, such that γ′(0) = ∑_{i=1}^n ai ∂/∂xi|p. (Hint: if γi = xi ◦ γ, then we need γ′i(0) = ai, so take γ(t) = ϕ^{−1}(ϕ(p) + t(a1, . . . , an)), where ϕ is the chart ϕ = (x1, . . . , xn).)

Proposition 1.5.9. If Φ : M → N is smooth and dΦp is an isomorphism Mp → NΦ(p) for some p ∈ M, then there exist open submanifolds U ⊂ M containing p and V ⊂ N containing Φ(p) such that Φ maps U diffeomorphically onto V.

Definition 1.5.10. Say Φ : M → N is regular at p if dΦp is one-to-one. If M ⊂ N where M is a smooth manifold, and the inclusion map i : M → N is smooth and regular at every p ∈ M, then we say M is a submanifold of N.

Theorem 1.5.11. If M is a smooth m-dimensional submanifold of N, and p ∈ M, then there exists a chart ϕ = (x1, . . . , xn) defined on V (p ∈ V) such that ϕ(p) = 0 and U := {q ∈ V : x_{m+1}(q) = · · · = xn(q) = 0} is equal to M ∩ V, and x1, . . . , xm form a coordinate chart for M near p.

We will now consider transformations of vector fields. If Φ : M → N, then Φ determines a linear map dΦp : Mp → NΦ(p) for each p ∈ M, given by dΦp(Xp)(f) = Xp(f ◦ Φ) for f ∈ C∞(Φ(p)), and a transpose map dΦ^t_p : N∗Φ(p) → M∗p defined by dΦ^t_p(ωp)(Xp) = ωp(dΦp(Xp)).

Exercise 1.5.12. Suppose Xp ∈ Mp is determined by a smooth curve γ : (−ε, ε) → M, with γ(0) = p and γ′(0) = Xp. Then dΦp(Xp) = (Φ ◦ γ)′(0).

Solution. dΦp(Xp)(f) = Xp(f ◦ Φ) = d/dt|_{t=0} (f ◦ Φ ◦ γ) = (Φ ◦ γ)′(0)(f).

Say vector fields X on M and Y on N are Φ-related, written X ∼_Φ Y, if dΦp(Xp) = YΦ(p) for all p ∈ M. Equivalently, this holds if Y(f) ◦ Φ = X(f ◦ Φ) for all f ∈ C∞(N).

Proposition 1.5.13. If X1 ∼_Φ Y1 and X2 ∼_Φ Y2, then [X1, X2] ∼_Φ [Y1, Y2].

Proof. We need to show that dΦp([X1, X2]p) = [Y1, Y2]Φ(p) for all p, or equivalently that [Y1, Y2](f) ◦ Φ = [X1, X2](f ◦ Φ) for f ∈ C∞(N). Since X1 ∼_Φ Y1, we know X1(f ◦ Φ) = Y1(f) ◦ Φ, and similarly for X2 and Y2. Hence,

Y1(Y2(f)) ◦ Φ = X1(Y2(f) ◦ Φ) = X1(X2(f ◦ Φ))

Similarly, Y2(Y1(f)) ◦ Φ = X2(X1(f ◦ Φ)). Taking the difference, we obtain the result.

If Φ : M → N is smooth, then it induces a map Φ∗ : Dr(N) → Dr(M) on differential r-forms, satisfying:

• Φ∗(f) = f ◦ Φ for f ∈ D0(N) = C∞(N), and,

• (Φ∗ω)p((A1)p, . . . , (Ar)p) = ωΦ(p)(dΦp((A1)p), . . . , dΦp((Ar)p)).

Note that Φ∗ also satisfies Φ∗(ω ∧ τ) = Φ∗(ω) ∧ Φ∗(τ); indeed,

Φ∗(ω ∧ τ)(A1, . . . , Ar, A_{r+1}, . . . , A_{r+s}) = (ω ∧ τ)((dΦ)A1, . . . , (dΦ)A_{r+s})
= A(ω ⊗ τ)((dΦ)A1, . . . , (dΦ)A_{r+s})
= (1/(r + s)!) ∑_{σ∈S_{r+s}} sgn(σ) (ω ⊗ τ)((dΦ)A_{σ(1)}, . . . , (dΦ)A_{σ(r+s)})
= (1/(r + s)!) ∑_{σ∈S_{r+s}} sgn(σ) ω((dΦ)A_{σ(1)}, . . . , (dΦ)A_{σ(r)}) τ((dΦ)A_{σ(r+1)}, . . . , (dΦ)A_{σ(r+s)})
= (Φ∗ω ∧ Φ∗τ)(A1, . . . , A_{r+s})

Locally, if p ∈ U ⊂ M and ϕ = (x1, . . . , xn) is a coordinate chart on U, and Φ(p) ∈ V ⊂ N where V has coordinates ψ = (y1, . . . , ym), then a differential r-form ω on N restricts to one on V, and then can be written in the form:

ω = ∑_{i1<···<ir} a_{i1,...,ir} dy_{i1} ∧ · · · ∧ dy_{ir}

and thus,

Φ∗(ω) = ∑_{i1<···<ir} (a_{i1,...,ir} ◦ Φ) Φ∗(dy_{i1}) ∧ · · · ∧ Φ∗(dy_{ir})

Recall that dΦp(∂/∂xi|p) = ∑_j (∂(yj ◦ Φ)/∂xi)(p) ∂/∂yj|Φ(p), hence:

Φ∗(dyk)(∂/∂xi|p) = dyk( dΦp(∂/∂xi|p) ) = dyk( ∑_j (∂(yj ◦ Φ)/∂xi)(p) ∂/∂yj|Φ(p) ) = (∂(yk ◦ Φ)/∂xi)(p)

Therefore, Φ∗(dyk) = ∑_i (∂(yk ◦ Φ)/∂xi) dxi, and so

Φ∗(a_{i1,...,ir} dy_{i1} ∧ · · · ∧ dy_{ir}) = (a_{i1,...,ir} ◦ Φ) ∑_{α1,...,αr} (∂(y_{i1} ◦ Φ)/∂x_{α1}) · · · (∂(y_{ir} ◦ Φ)/∂x_{αr}) dx_{α1} ∧ · · · ∧ dx_{αr}

If r = m = n, then,

Φ∗(a dy1 ∧ · · · ∧ dyn) = (a ◦ Φ) ∑_{α1,...,αn} (∂(y1 ◦ Φ)/∂x_{α1}) · · · (∂(yn ◦ Φ)/∂x_{αn}) dx_{α1} ∧ · · · ∧ dx_{αn} = (a ◦ Φ) · det[∂(yi ◦ Φ)/∂xj] · dx1 ∧ · · · ∧ dxn

that is, the determinant of the Jacobian times dx1 ∧ · · · ∧ dxn.
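Continuing the polar-coordinate example from above (again my own illustration, not from the notes), the pullback of the area form dy1 ∧ dy2 under Φ(r, θ) = (r cos θ, r sin θ) is det[Φ] dr ∧ dθ = r dr ∧ dθ:

import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
Phi = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])   # (y1, y2) = Phi(r, theta)

# by the formula above, Phi^*(dy1 ^ dy2) = det(Jacobian) dr ^ dtheta
coeff = sp.simplify(Phi.jacobian([r, theta]).det())
print(coeff)    # r, i.e. Phi^*(dy1 ^ dy2) = r dr ^ dtheta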

Corollary 1.5.14. Φ∗ commutes with the exterior derivative, d.

Proof. We have that:

Φ∗(df) = Φ∗( ∑_i (∂f/∂yi) dyi ) = ∑_{i,j} ((∂f/∂yi) ◦ Φ) (∂(yi ◦ Φ)/∂xj) dxj

and:

d(Φ∗(f)) = d(f ◦ Φ) = ∑_j (∂(f ◦ Φ)/∂xj) dxj = ∑_{i,j} ((∂f/∂yi) ◦ Φ) (∂(yi ◦ Φ)/∂xj) dxj   (Chain Rule)
= Φ∗(df)

This proves the corollary for 0-forms (functions). We leave the proof for r-forms with r ≥ 1 as an exercise.

1.6 Affine Connections

Again, fix a manifold M . The motivating question for this section is: how do you differentiatea vector field? If we have a curve γ, we can simply consider:

d

dt

∣∣∣∣t=0

f(γ(t)) =f(γ(t))− f(p)

t

But more generally, we need the idea of a connection.

Definition 1.6.1. An affine connection on M is a map which assigns to each X ∈ D1(M) a linear map ∇X : D1(M) → D1(M), called covariant differentiation by X, satisfying:

(1) ∇_{fX+gY} Z = f ∇X Z + g ∇Y Z

(2) ∇X(fY) = X(f) Y + f ∇X Y

for all X, Y, Z ∈ D1(M) and f, g ∈ C∞(M).

Lemma 1.6.2. If X or Y vanishes on an open set U ⊂ M, then so does ∇X Y.

Proof. Suppose X = 0 on U, let p ∈ V ⊂ V̄ ⊂ U, and let ρ ∈ C∞(M) with ρ = 0 on V and ρ = 1 on U^C. Then ρ · X = X, which implies ∇X Y = ∇_{ρX} Y = ρ∇X Y, which is 0 at p. Thus (∇X Y)(p) = 0 for all p ∈ U. If instead Y = 0 on U, and ρ is as above, then ρY = Y, so ∇X Y = ∇X(ρY) = X(ρ)Y + ρ∇X(Y), which equals 0 at p.

Corollary 1.6.3. ∇ restricts to a connection on any open U ⊂ M.

Proof. Define (∇X Y)(p) = (∇_{X̃} Ỹ)(p), where X̃, Ỹ are vector fields on M such that X̃ = X near p and Ỹ = Y near p.

Example 1.6.4. On Rn, the vector fields ∂/∂r1, . . . , ∂/∂rn are globally defined. If X = ∑_i ai ∂/∂ri and Y = ∑_i bi ∂/∂ri, then any connection ∇ on Rn satisfies:

∇X Y = ∇_{∑_i ai ∂/∂ri} ( ∑_j bj ∂/∂rj ) = ∑_{i,j} ai ∇_{∂/∂ri} ( bj ∂/∂rj ) = ∑_{i,j} ( ai (∂bj/∂ri) ∂/∂rj + ai bj ∇_{∂/∂ri} ∂/∂rj )

So specifying ∇ is equivalent to specifying the functions Γ^k_{ij} (1 ≤ i, j, k ≤ n) on Rn such that ∇_{∂/∂ri} ∂/∂rj = ∑_{k=1}^n Γ^k_{ij} ∂/∂rk. Using this notation, the above equals:

∇X Y = ∑_{i,j} ai (∂bj/∂ri) ∂/∂rj + ∑_{i,j,k} ai bj Γ^k_{ij} ∂/∂rk = ∑_k ( ∑_i ai (∂bk/∂ri) + ∑_{i,j} ai bj Γ^k_{ij} ) ∂/∂rk

The functions Γ^k_{ij} are called Christoffel symbols. In particular, if we set Γ^k_{ij} = 0 for all i, j, k, we get a connection on Rn given by:

∇X Y = ∇X(Y1, . . . , Yn) = (X(Y1), . . . , X(Yn))

Or in other words, ∇X Y = ∑_j X(Yj) ∂/∂rj if Y = ∑_j Yj ∂/∂rj.

Connections exist locally on M: Let U ⊂ M have coordinates x1, . . . , xn. Then D1(U) = { ∑_i bi ∂/∂xi : b1, . . . , bn ∈ C∞(U) }, and we can set ∇X( ∑_i bi ∂/∂xi ) = ∑_i X(bi) ∂/∂xi. This is a connection on U.

Remark. If ∇ and ∇′ are connections on M, then ∇ − ∇′ does not differentiate - it is "order 0", while ∇ has "order 1" as a differential operator. For example, the Lie derivative LX : C∞(M) → C∞(M) given by LX(f) = X(f) is order 1, while the multiplication operator Mf : C∞(M) → C∞(M) given by Mf(h) = f · h is order 0. The reason why ∇X − ∇′X is order 0 is because:

(∇X − ∇′X)(f · Y) = ∇X(fY) − ∇′X(fY) = X(f)Y + f∇X Y − X(f)Y − f∇′X(Y) = f∇X Y − f∇′X(Y) = f(∇X − ∇′X)(Y)

So ∇X − ∇′X is C∞(M)-linear, as opposed to satisfying a "Leibniz rule". In particular, these connections induce a C∞(M)-linear map D1(M) → EndC∞(M)(D1(M)), X ↦ ∇X − ∇′X.

Theorem 1.6.5. If M is a smooth manifold, then there exists a connection on M .

Proof. Let {(Ui, ϕi)} be a collection of coordinate charts covering M. Let ∇^i be a connection on Ui for each i, and let {ρi} be a partition of unity subordinate to the cover {Ui}. Then supp(ρi) ⊂ Ui, ∑_i ρi(x) = 1 for all x, and ρi(x) ≠ 0 for at most finitely many i, for each x. Now if X, Y ∈ D1(M), let ∇X Y = ∑_i ρi ∇^i_X(Y). Then,

∇_{fX}(Y) = ∑_i ρi ∇^i_{fX}(Y) = f ∑_i ρi ∇^i_X(Y) = f ∇X(Y)

and,

∇X(fY) = ∑_i ρi ∇^i_X(fY) = ∑_i ρi (X(f)Y + f∇^i_X(Y)) = ∑_i ρi X(f)Y + f ∑_i ρi ∇^i_X(Y) = X(f)Y + f∇X Y

so ∇ is indeed a connection on M.

Example 1.6.6 (Connections on Surfaces). Assume S ⊂ R3 is covered by the images of smooth maps x : U ⊂ R2 → S ⊂ R3 such that ∂/∂x1 := ∂x/∂r1 and ∂/∂x2 := ∂x/∂r2 satisfy (∂x/∂r1) × (∂x/∂r2) ≠ 0. Such an S is a smooth manifold with charts given by ϕ = x^{−1}, and, assuming ϕ is centered at (0, 0), for p ∈ S,

(∂f/∂x1)(p) = ∂/∂r1 (f ◦ x)(ϕ(p)) = d/dt|_{t=0} (f ◦ x)(ϕ(p) + (t, 0))

Classically, for i = 1, 2 the vectors ∂x/∂ri ≅ ∂/∂xi are tangent to S and span the tangent plane - locally they are vector fields on S, which we can interpret as R3-valued functions of the coordinates r1, r2. So we can form ∂²x/∂ri∂rj = ∂/∂ri (∂x/∂rj). Now let

n = ( ∂x/∂r1 × ∂x/∂r2 ) / | ∂x/∂r1 × ∂x/∂r2 |,

which is the (local) unit normal field to S, orthogonal at each p to Mp. Then ∂²x/∂ri∂rj = Lij n + ∇ij, where Lij n ∈ Mp^⊥ and ∇ij ∈ Mp. Now let ∇ be the connection on S defined by ∇X Y = proj_{Mp}(X(Y)), where X(Y)(p) = lim_{t→0} (1/t)(Y(α(t)) − Y(p)) for a curve α : (−ε, ε) → S with α(0) = p and α′(0) = X(p) (here Y is viewed as an R3-valued function). This is a connection on S, and ∇ij = ∇_{∂/∂xi}(∂/∂xj). This is known as the Grassmann connection.

1.7 Parallelism

The goal of this section is to provide a precise notion of moving a vector along a curve while keeping the angle it makes with the curve constant.

Fix a manifold M and a connection ∇. Let γ : I → M (where I ⊂ R is an interval) be a regular curve (i.e. γ′(t) ≠ 0 for all t). Let J ⊂ I be a compact subinterval such that γ(J) is contained in a coordinate neighbourhood of M.

Lemma 1.7.1. If g ∈ C∞(J), then there exists G ∈ C∞(M) such that G(γ(t)) = g(t) for all t ∈ J.

Proof. By the chain rule, γ′(t0) = ∑_i γ′i(t0) ∂/∂xi, where γi = xi ◦ γ. So there exists i such that γ′i(t0) ≠ 0, since γ is regular. By the inverse function theorem, there exists ηi, differentiable in a neighbourhood of γi(t0), such that ηi(γi(t)) = t for t in a neighbourhood of t0. The function q ↦ g(ηi(xi(q))) is differentiable near γ(t0). We may therefore extend it to a smooth function G∗ on U such that G∗(q) = g(ηi(xi(q))) near γ(t0). Then G∗(γ(t)) = g(ηi(γi(t))) = g(t) for all t in a neighbourhood of t0. Hence, since J is compact, there exist finitely many open sets U1, . . . , Uk ⊂ U covering γ(J) and functions Gi ∈ C∞(U) such that Gi(γ(t)) = g(t) whenever γ(t) ∈ Ui. Put G′ = ∑_i ρi Gi, where {ρi} is a partition of unity subordinate to the cover {Ui}_{i=1}^k. Then G′ ∈ C∞(U) and G′(γ(t)) = g(t) for all t ∈ J. Finally, let ψ ∈ C∞(M) with supp ψ ⊂ U and ψ = 1 on γ(J), and put

G = ψ · G′ on U,  and  G = 0 elsewhere.

Now let Y(t) ∈ Mγ(t) be a smoothly varying family of tangent vectors along γ, and let X(t) = γ′(t). By the Lemma, there exist vector fields X, Y on M such that X(γ(t)) = γ′(t) and Y(γ(t)) = Y(t). Indeed, locally, X(t) = γ′(t) = ∑_i a∗i(t) ∂/∂xi, and each a∗i can (by the Lemma) be extended to a smooth function ai on M such that ai(γ(t)) = a∗i(t). This gives the desired extension of X, and similarly for Y.

Definition 1.7.2. We say that Y(t) is parallel along γ if (∇X Y)(γ(t)) = 0 for all t, where X and Y are extensions of X(t) and Y(t), respectively, to all of M.

Fix t0 and coordinates around γ(t0). Then X = ∑_i Xi ∂/∂xi and Y = ∑_i Yi ∂/∂xi on U, and so

∇X Y = ∇_{∑_i Xi ∂/∂xi} ( ∑_j Yj ∂/∂xj ) = ∑_{i,j} Xi ∇_{∂/∂xi} ( Yj ∂/∂xj ) = ∑_{i,k} Xi (∂Yk/∂xi) ∂/∂xk + ∑_{i,j,k} Xi Yj Γ^k_{ij} ∂/∂xk

So ∇X Y = 0 if and only if, for all k, ∑_i ( Xi (∂Yk/∂xi) + ∑_j Xi Yj Γ^k_{ij} ) = 0. Applying this along γ(t), we see that for each k, (∇X Y)(γ(t)) = 0 if and only if

0 = ∑_i Xi(γ(t)) (∂Yk/∂xi)(γ(t)) + ∑_{i,j} Xi(γ(t)) Yj(γ(t)) Γ^k_{ij}(γ(t))
= ∑_i γ′i(t) (∂Yk/∂xi)(γ(t)) + ∑_{i,j} γ′i(t) Yj(γ(t)) Γ^k_{ij}(γ(t))
= (Yk ◦ γ)′(t) + ∑_{i,j} γ′i(t) Yj(γ(t)) Γ^k_{ij}(γ(t))

where in the last equality we use the chain rule. Note that this only depends on the restrictions of X and Y to γ.

Definition 1.7.3. Let γ : I → M be a regular curve in M. Then γ is a geodesic if t ↦ γ′(t) is parallel along γ. A geodesic γ is called maximal if it is not properly contained in a geodesic.

Geodesic equations: Locally, γ′(t) = ∑_i γ′i(t) ∂/∂xi, so with the equation developed above, we see that for γ to be a geodesic we need, for each k = 1, 2, . . . , n:

γ′′k(t) + ∑_{i,j} γ′i(t) γ′j(t) Γ^k_{ij}(γ(t)) = 0

Example 1.7.4. Let S be a surface in R3, and let ∇ be the Grassmann connection. Then ∇_{γ′}(γ′)(t) is the tangential component of γ′′(t). So γ is a geodesic if and only if γ′′(t) has zero tangential component for all t, i.e. γ′′(t) ⊥ Mγ(t) (in R3). So if, for example, S = S2 ⊂ R3, then the great circle γ(t) = (cos t, sin t, 0) has γ′′(t) = −(cos t, sin t, 0), which has zero tangential component for all t, so the great circle γ is a geodesic.
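A direct numerical check of this example (my own, not from the notes; the sample times are arbitrary): along the great circle, γ′′(t) is proportional to the outward normal γ(t), so its tangential component vanishes.

import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 7)
gamma = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
gamma_dd = -gamma                       # second derivative of (cos t, sin t, 0)

# on S^2 the unit normal at p is p itself; tangential part of v at p is v - (v.p) p
tangential = gamma_dd - np.sum(gamma_dd * gamma, axis=1, keepdims=True) * gamma
print(np.allclose(tangential, 0.0))     # True: the great circle is a geodesic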

Remark. If γ is a geodesic in M, and t = f(s) is a reparameterization, then s ↦ γ(f(s)) is a geodesic if and only if f is affine (f′′ = 0). To see this, write γ in coordinates as γ(t) = (x1(t), . . . , xn(t)) and let yk(s) = xk(f(s)). Then y′k(s) = x′k(f(s)) f′(s) and y′′k(s) = x′′k(f(s)) (f′(s))² + x′k(f(s)) f′′(s). The geodesic equation now says:

0 = y′′k(s) + ∑_{i,j} y′i(s) y′j(s) Γ^k_{ij}(γ(f(s)))
= x′′k(f(s)) (f′(s))² + x′k(f(s)) f′′(s) + ∑_{i,j} x′i(f(s)) x′j(f(s)) (f′(s))² Γ^k_{ij}(γ(f(s)))
= x′k(f(s)) f′′(s)

Note that this is because x′′k(f(s)) + ∑_{i,j} x′i(f(s)) x′j(f(s)) Γ^k_{ij}(γ(f(s))) = 0, since γ is a geodesic. In particular, if γ ◦ f is a geodesic, then x′k(f(s)) f′′(s) = 0 for all k, and since γ is regular, for each s there is at least one k such that x′k(f(s)) ≠ 0, from which it follows that f′′(s) = 0 for all s. Hence f is affine.

Parallelism. Let γ : J → M be a smooth regular curve, let a ∈ J, and choose Y ∈ Mγ(a), Y = ∑_j Yj ∂/∂xj|γ(a). Then the system of ODEs

dYk/dt + ∑_{i,j} Γ^k_{ij}(γ(t)) γ′i(t) Yj(t) = 0,   Yk(a) = Yk,   k = 1, 2, . . . , n,

has a unique solution Y1(t), . . . , Yn(t). So t ↦ ∑_i Yi(t) ∂/∂xi|γ(t) is a parallel field of tangent vectors along γ, equal to Y at t = a.
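A numerical sketch of this ODE system (my own illustration, not from the course): parallel transport around the latitude circle r1 = π/4 on the unit sphere, using the Christoffel symbols computed in the sketch after Example 1.6.6. The transported vector does not return to its initial components, the usual holonomy effect.

import numpy as np
from scipy.integrate import solve_ivp

theta0 = np.pi / 4          # fixed polar angle: the curve is gamma(t) = (theta0, t)

# sphere Christoffel symbols at polar angle theta (from the surface example):
# Gamma^1_22 = -sin(theta)cos(theta), Gamma^2_12 = Gamma^2_21 = cos(theta)/sin(theta)
def rhs(t, Y):
    Y1, Y2 = Y
    # gamma'(t) = (0, 1), so only the i = 2 terms of dY^k/dt = -Gamma^k_ij gamma'^i Y^j survive
    dY1 = np.sin(theta0) * np.cos(theta0) * Y2
    dY2 = -(np.cos(theta0) / np.sin(theta0)) * Y1
    return [dY1, dY2]

sol = solve_ivp(rhs, [0.0, 2.0 * np.pi], [1.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])   # differs from the initial [1, 0]: nontrivial parallel transport around the loop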

Theorem 1.7.5. If p, q ∈ M and γ is a curve from p to q in M, then parallel transport along γ determines an isomorphism Mp → Mq.

Theorem 1.7.6. For all p ∈ M and all nonzero x ∈ Mp, there exists a unique maximal geodesic γ in M such that γ(0) = p and γ′(0) = x.

Recall that γ : I → M is a geodesic if ∇_{γ′(t)}(γ′(t)) = 0. Locally, by the geodesic equations, this occurs if and only if

0 = γ′′k(t) + ∑_{i,j} Γ^k_{ij}(γ(t)) γ′i(t) γ′j(t)

for each k = 1, 2, . . . , n. If all Γ^k_{ij} = 0, this says γ′′k(t) = 0, or in other words, γ is a straight line in the coordinates.

1.8 Exponential Map

For the remainder of this section, fix a smooth manifold M, a connection ∇ on M, and a point p ∈ M.

Theorem 1.8.1. There exists a neighbourhood N0 of 0 in Mp such that the map X ↦ γX(1) is a diffeomorphism of N0 onto a neighbourhood Np of p in M, where, for X ∈ Mp, we denote by γX the unique maximal geodesic through p with γ′X(0) = X.

The map N0 → M, X ↦ γX(1), is called the exponential map, denoted Exp. Note that γ_{tX}(1) = γX(t) for all 0 ≤ t ≤ 1.

Definition 1.8.2. A neighbourhood N0 of 0 in Mp is called normal if Exp is a diffeomorphism of N0 onto a neighbourhood Np of p, and if X ∈ N0 implies tX ∈ N0 for all t ∈ [0, 1]. We call Np a normal neighbourhood of p.

Theorem 1.8.3. Every point p ∈ M has a normal neighbourhood Np which is a normal neighbourhood of each of its points. In particular, any two points of Np are joined by a unique geodesic segment in Np.

Remark. If Np is a normal neighbourhood of p, and if X1, . . . , Xn is a basis for Mp, then (t1, . . . , tn) ↦ Exp(t1X1 + · · · + tnXn) gives a system of local coordinates around p.

Exercise 1.8.4. At the centre of a normal coordinate system, the (corresponding) Christoffel symbols vanish.

Note that locally, for each i, ∂/∂xi is the tangent vector to the curve t ↦ Exp(t1X1 + · · · + t_{i−1}X_{i−1} + tXi + t_{i+1}X_{i+1} + · · · + tnXn).

Appendix

The textbook used as the primary reference for this course was the text by Sigurdur Helgason, Differential Geometry and Symmetric Spaces.

Last Updated: November 1, 2021