
Lecture Notes

Vector Analysis

MATH 332

Ivan Avramidi

New Mexico Institute of Mining and Technology

Socorro, NM 87801

May 19, 2004

Author: Ivan Avramidi; File: vecanal4.tex; Date: July 1, 2005; Time: 13:34


Contents

1 Linear Algebra
  1.1 Vectors in R^n and Matrix Algebra
    1.1.1 Vectors
    1.1.2 Matrices
    1.1.3 Determinant
    1.1.4 Exercises
  1.2 Vector Spaces
    1.2.1 Exercises
  1.3 Inner Product and Norm
    1.3.1 Exercises
  1.4 Linear Operators
    1.4.1 Exercises

2 Vector and Tensor Algebra
  2.1 Metric Tensor
  2.2 Dual Space and Covectors
    2.2.1 Einstein Summation Convention
  2.3 General Definition of a Tensor
    2.3.1 Orientation, Pseudotensors and Volume
  2.4 Operators and Tensors
  2.5 Vector Algebra in R^3

3 Geometry
  3.1 Geometry of Euclidean Space
  3.2 Basic Topology of R^n
  3.3 Curvilinear Coordinate Systems
    3.3.1 Change of Coordinates
    3.3.2 Examples
  3.4 Vector Functions of a Single Variable
  3.5 Geometry of Curves
  3.6 Geometry of Surfaces

4 Vector Analysis
  4.1 Vector Functions of Several Variables
  4.2 Directional Derivative and the Gradient
  4.3 Exterior Derivative
  4.4 Divergence
  4.5 Curl
  4.6 Laplacian
  4.7 Differential Vector Identities
  4.8 Orthogonal Curvilinear Coordinate Systems in R^3

5 Integration
  5.1 Line Integrals
  5.2 Surface Integrals
  5.3 Volume Integrals
  5.4 Fundamental Integral Theorems
    5.4.1 Fundamental Theorem of Line Integrals
    5.4.2 Green's Theorem
    5.4.3 Stokes's Theorem
    5.4.4 Gauss's Theorem
    5.4.5 General Stokes's Theorem

6 Potential Theory
  6.1 Simply Connected Domains
  6.2 Conservative Vector Fields
    6.2.1 Scalar Potential
  6.3 Irrotational Vector Fields
  6.4 Solenoidal Vector Fields
    6.4.1 Vector Potential
  6.5 Laplace Equation
    6.5.1 Harmonic Functions
  6.6 Poisson Equation
    6.6.1 Dirac Delta Function
    6.6.2 Point Sources
    6.6.3 Dirichlet Problem
    6.6.4 Neumann Problem
    6.6.5 Green's Functions
  6.7 Fundamental Theorem of Vector Analysis

7 Basic Concepts of Differential Geometry
  7.1 Manifolds
  7.2 Differential Forms
    7.2.1 Exterior Product
    7.2.2 Exterior Derivative
  7.3 Integration of Differential Forms
  7.4 General Stokes's Theorem
  7.5 Tensors in General Curvilinear Coordinate Systems
    7.5.1 Covariant Derivative

8 Applications
  8.1 Mechanics
    8.1.1 Inertia Tensor
    8.1.2 Angular Momentum Tensor
  8.2 Elasticity
    8.2.1 Strain Tensor
    8.2.2 Stress Tensor
  8.3 Fluid Dynamics
    8.3.1 Continuity Equation
    8.3.2 Tensor of Momentum Flux Density
    8.3.3 Euler's Equations
    8.3.4 Rate of Deformation Tensor
    8.3.5 Navier-Stokes Equations
  8.4 Heat and Diffusion Equations
  8.5 Electrodynamics
    8.5.1 Tensor of Electromagnetic Field
    8.5.2 Maxwell Equations
    8.5.3 Scalar and Vector Potentials
    8.5.4 Wave Equations
    8.5.5 D'Alembert Operator
    8.5.6 Energy-Momentum Tensor
  8.6 Basic Concepts of Special and General Relativity

Bibliography

Notation

Index

Chapter 1

Linear Algebra

1.1 Vectors in R^n and Matrix Algebra

1.1.1 Vectors

• R^n is the set of all ordered n-tuples of real numbers, which can be assembled as columns or as rows.

• Let v_1, . . . , v_n be n real numbers. Then the column-vector (or just vector) is an ordered n-tuple of the form
$$ v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} , $$
and the row-vector (also called a covector) is an ordered n-tuple of the form
$$ v^T = (v_1, v_2, \dots, v_n) . $$
The real numbers v_1, . . . , v_n are called the components of the vector.

• The operation that converts column-vectors into row-vectors and vice versa, preserving the order of the components, is called transposition and is denoted by T. That is,
$$ \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}^{T} = (v_1, v_2, \dots, v_n) \qquad\text{and}\qquad (v_1, v_2, \dots, v_n)^T = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} . $$
Of course, for any vector v,
$$ (v^T)^T = v . $$


• The addition of vectors is defined by
$$ u + v = \begin{pmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{pmatrix} \qquad\text{and}\qquad u^T + v^T = (u_1 + v_1, \dots, u_n + v_n) . $$

• Notice that one cannot add a column-vector and a row-vector!

• The multiplication of vectors by a real constant, called a scalar, is defined by
$$ a v = \begin{pmatrix} a v_1 \\ a v_2 \\ \vdots \\ a v_n \end{pmatrix} , \qquad a v^T = (a v_1, \dots, a v_n) . $$

• The vectors that have only zero elements are called zero vectors, that is,
$$ 0 = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} , \qquad 0^T = (0, \dots, 0) . $$

• The set of column-vectors
$$ e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} , \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} , \quad \dots , \quad e_n = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} $$
and the set of row-vectors
$$ e_1^T = (1, 0, \dots, 0) , \quad e_2^T = (0, 1, \dots, 0) , \quad \dots , \quad e_n^T = (0, 0, \dots, 1) $$
are called the standard (or canonical) bases in R^n.

• There is a natural product of column-vectors and row-vectors that assigns to a row-vector and a column-vector a real number:
$$ \langle u^T , v \rangle = (u_1, u_2, \dots, u_n) \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \sum_{i=1}^{n} u_i v_i = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n . $$
This is the simplest instance of a more general multiplication rule for matrices, which can be summarized by saying that one multiplies a row by a column.


• The product of two column-vectors and the product of two row-vectors, called the inner product (or the scalar product), is then defined by
$$ (u, v) = (u^T , v^T) = \langle u^T , v \rangle = \sum_{i=1}^{n} u_i v_i = u_1 v_1 + \cdots + u_n v_n . $$

• Finally, the norm (or the length) of both column-vectors and row-vectors is defined by
$$ \| v \| = \| v^T \| = \sqrt{\langle v^T , v \rangle} = \left( \sum_{i=1}^{n} v_i^2 \right)^{1/2} = \sqrt{v_1^2 + \cdots + v_n^2} . $$
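The row-by-column pairing and the norm translate directly into code. A minimal pure-Python sketch (the helper names `inner` and `norm` are ours, not from the notes):

```python
import math

def inner(u, v):
    """<u^T, v> = u_1 v_1 + ... + u_n v_n (a row times a column)."""
    assert len(u) == len(v), "vectors must have the same number of components"
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    """||v|| = sqrt(<v^T, v>) = sqrt(v_1^2 + ... + v_n^2)."""
    return math.sqrt(inner(v, v))

u = (1.0, 2.0, 2.0)
v = (2.0, 0.0, 1.0)
print(inner(u, v))  # 1*2 + 2*0 + 2*1 = 4.0
print(norm(u))      # sqrt(1 + 4 + 4) = 3.0
```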

1.1.2 Matrices

• A set of n^2 real numbers A_ij, i, j = 1, . . . , n, arranged in an array that has n columns and n rows,
$$ A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix} , $$
is called a square n × n real matrix.

• The set of all real square n × n matrices is denoted by Mat(n, R).

• The number A_ij (also called an entry of the matrix) appears in the i-th row and the j-th column of the matrix A:
$$ A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1j} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2j} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots & & \vdots \\ A_{i1} & A_{i2} & \cdots & A_{ij} & \cdots & A_{in} \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nj} & \cdots & A_{nn} \end{pmatrix} $$

• Remark. Notice that the first index indicates the row and the second index indicates the column of the matrix.

• The matrix all of whose entries are equal to zero is called the zero matrix.

• The addition of matrices is defined by
$$ A + B = \begin{pmatrix} A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1n} + B_{1n} \\ A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2n} + B_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} + B_{n1} & A_{n2} + B_{n2} & \cdots & A_{nn} + B_{nn} \end{pmatrix} $$


and the multiplication by scalars by
$$ c A = \begin{pmatrix} c A_{11} & c A_{12} & \cdots & c A_{1n} \\ c A_{21} & c A_{22} & \cdots & c A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ c A_{n1} & c A_{n2} & \cdots & c A_{nn} \end{pmatrix} . $$

• The numbers A_ii are called the diagonal entries. Of course, there are n diagonal entries. The set of diagonal entries is called the diagonal of the matrix A.

• The numbers A_ij with i ≠ j are called off-diagonal entries; there are n(n − 1) off-diagonal entries.

• The numbers A_ij with i < j are called the upper triangular entries. The set of upper triangular entries is called the upper triangular part of the matrix A.

• The numbers A_ij with i > j are called the lower triangular entries. The set of lower triangular entries is called the lower triangular part of the matrix A.

• The number of upper-triangular entries and of lower-triangular entries is the same and is equal to n(n − 1)/2.

• A matrix whose only non-zero entries are on the diagonal is called a diagonal matrix. For a diagonal matrix,
$$ A_{ij} = 0 \quad\text{if } i \neq j . $$

• The diagonal matrix
$$ A = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} $$
is also denoted by
$$ A = \mathrm{diag}\,(\lambda_1, \lambda_2, \dots, \lambda_n) . $$

• A diagonal matrix all of whose diagonal entries are equal to 1,
$$ I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} , $$
is called the identity matrix. The elements of the identity matrix are
$$ I_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j . \end{cases} $$


• A matrix A of the form
$$ A = \begin{pmatrix} * & * & \cdots & * \\ 0 & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & * \end{pmatrix} , $$
where * represents nonzero entries, is called an upper triangular matrix. Its lower triangular part is zero, that is,
$$ A_{ij} = 0 \quad\text{if } i > j . $$

• A matrix A of the form
$$ A = \begin{pmatrix} * & 0 & \cdots & 0 \\ * & * & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & * \end{pmatrix} , $$
whose upper triangular part is zero, that is,
$$ A_{ij} = 0 \quad\text{if } i < j , $$
is called a lower triangular matrix.

• The transpose of a matrix A whose ij-th entry is A_ij is the matrix A^T whose ij-th entry is A_ji. That is, A^T is obtained from A by switching the roles of the rows and columns of A:
$$ A^T = \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{j1} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{j2} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots & & \vdots \\ A_{1i} & A_{2i} & \cdots & A_{ji} & \cdots & A_{ni} \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{jn} & \cdots & A_{nn} \end{pmatrix} , $$
or
$$ (A^T)_{ij} = A_{ji} . $$

• A matrix A is called symmetric if
$$ A^T = A $$
and anti-symmetric if
$$ A^T = -A . $$

• The number of independent entries of an anti-symmetric matrix is n(n − 1)/2.

• The number of independent entries of a symmetric matrix is n(n + 1)/2.


• Every matrix A can be uniquely decomposed as the sum of its diagonal part A_D, its lower triangular part A_L, and its upper triangular part A_U:
$$ A = A_D + A_L + A_U . $$

• For an anti-symmetric matrix,
$$ A_U^T = -A_L \qquad\text{and}\qquad A_D = 0 . $$

• For a symmetric matrix,
$$ A_U^T = A_L . $$

• Every matrix A can be uniquely decomposed as the sum of its symmetric part A_S and its anti-symmetric part A_A:
$$ A = A_S + A_A , $$
where
$$ A_S = \frac{1}{2} \left( A + A^T \right) , \qquad A_A = \frac{1}{2} \left( A - A^T \right) . $$
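This decomposition is easy to verify numerically. A small sketch with matrices stored as lists of rows (the helper names are ours):

```python
def transpose(A):
    """Swap rows and columns: (A^T)_ij = A_ji."""
    return [list(row) for row in zip(*A)]

def sym_part(A):
    """A_S = (A + A^T) / 2."""
    n = len(A)
    return [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]

def antisym_part(A):
    """A_A = (A - A^T) / 2."""
    n = len(A)
    return [[(A[i][j] - A[j][i]) / 2 for j in range(n)] for i in range(n)]

A = [[1.0, 2.0], [4.0, 3.0]]
S, W = sym_part(A), antisym_part(A)
assert transpose(S) == S                                  # symmetric
assert transpose(W) == [[-x for x in r] for r in W]       # anti-symmetric
assert [[S[i][j] + W[i][j] for j in range(2)] for i in range(2)] == A
```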

• The product of matrices is defined as follows. The ij-th entry of the product C = AB of two matrices A and B is
$$ C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj} = A_{i1} B_{1j} + A_{i2} B_{2j} + \cdots + A_{in} B_{nj} . $$
This is again a multiplication of the "i-th row of the matrix A by the j-th column of the matrix B".
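The entry formula can be coded exactly as written, as a sketch for square matrices stored as lists of rows (the name `matmul` is ours):

```python
def matmul(A, B):
    """C_ij = sum_k A_ik B_kj: the i-th row of A times the j-th column of B."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
```

Note that multiplying by this B on the right swaps the columns of A, while multiplying on the left would swap its rows; matrix multiplication is not commutative.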

• Theorem 1.1.1 The product of matrices is associative, that is, for any matrices A, B, C,
$$ (AB)C = A(BC) . $$

• Theorem 1.1.2 For any two matrices A and B,
$$ (AB)^T = B^T A^T . $$

• A matrix A is called invertible if there is another matrix A^{-1} such that
$$ A A^{-1} = A^{-1} A = I . $$
The matrix A^{-1} is called the inverse of A.

• Theorem 1.1.3 For any two invertible matrices A and B,
$$ (AB)^{-1} = B^{-1} A^{-1} , $$
and
$$ (A^{-1})^T = (A^T)^{-1} . $$
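Both reversal rules can be spot-checked numerically for 2 × 2 matrices, using the explicit inverse formula A^{-1} = (1/det A) [[d, −b], [−c, a]] for A = [[a, b], [c, d]]. A sketch (helper names ours):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(A):
    """Inverse of a 2x2 matrix; requires det != 0 (i.e. A is invertible)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]
B = [[1.0, 3.0], [0.0, 1.0]]
# Note the reversed order of the factors on the right-hand sides:
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))
```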


• A matrix A is called orthogonal if
$$ A^T A = A A^T = I , $$
which means A^T = A^{-1}.

• The trace is a map tr : Mat(n, R) → R that assigns to each matrix A = (A_ij) a real number tr A equal to the sum of the diagonal elements of the matrix:
$$ \mathrm{tr}\, A = \sum_{k=1}^{n} A_{kk} . $$

• Theorem 1.1.4 The trace has the properties
$$ \mathrm{tr}\,(AB) = \mathrm{tr}\,(BA) $$
and
$$ \mathrm{tr}\, A^T = \mathrm{tr}\, A . $$

• Obviously, the trace of an anti-symmetric matrix is equal to zero.
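A quick numerical spot-check of Theorem 1.1.4 and the remark above, with matrices as lists of rows (helper names ours):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    """tr A = sum of the diagonal entries A_kk."""
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
assert trace(matmul(A, B)) == trace(matmul(B, A))  # tr(AB) = tr(BA)

W = [[0, -2], [2, 0]]  # anti-symmetric: W^T = -W, so its diagonal is zero
assert trace(W) == 0
```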

• Finally, we define the multiplication of column-vectors by matrices from the left and the multiplication of row-vectors by matrices from the right as follows.

• Each matrix defines a natural left action on a column-vector and a right action on a row-vector.

• For each column-vector v and matrix A = (A_ij), the column-vector u = Av is given by
$$ \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_i \\ \vdots \\ u_n \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{i1} & A_{i2} & \cdots & A_{in} \\ \vdots & \vdots & & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_i \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} A_{11} v_1 + A_{12} v_2 + \cdots + A_{1n} v_n \\ A_{21} v_1 + A_{22} v_2 + \cdots + A_{2n} v_n \\ \vdots \\ A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n \\ \vdots \\ A_{n1} v_1 + A_{n2} v_2 + \cdots + A_{nn} v_n \end{pmatrix} $$

• The components of the vector u are
$$ u_i = \sum_{j=1}^{n} A_{ij} v_j = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n . $$

• Similarly, for a row-vector v^T, the components of the row-vector u^T = v^T A are defined by
$$ u_i = \sum_{j=1}^{n} v_j A_{ji} = v_1 A_{1i} + v_2 A_{2i} + \cdots + v_n A_{ni} . $$
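The two component formulas differ only in which index of A is summed over. A sketch (function names ours):

```python
def act_left(A, v):
    """Left action on a column-vector: u_i = sum_j A_ij v_j."""
    n = len(v)
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

def act_right(v, A):
    """Right action on a row-vector: u_i = sum_j v_j A_ji."""
    n = len(v)
    return [sum(v[j] * A[j][i] for j in range(n)) for i in range(n)]

A = [[1, 2], [3, 4]]
v = [1, 1]
print(act_left(A, v))   # [3, 7]  (row i of A times v)
print(act_right(v, A))  # [4, 6]  (v times column i of A)
```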


1.1.3 Determinant

• Consider the set Z_n = {1, 2, . . . , n} of the first n integers. A permutation φ of the set {1, 2, . . . , n} is an ordered n-tuple (φ(1), . . . , φ(n)) of these numbers.

• That is, a permutation is a bijective (one-to-one and onto) function
$$ \varphi : Z_n \to Z_n $$
that assigns to each number i from the set Z_n = {1, . . . , n} another number φ(i) from this set.

• An elementary permutation is a permutation that exchanges the order of only two numbers.

• Every permutation can be realized as a product (or a composition) of elementary permutations. A permutation that can be realized by an even number of elementary permutations is called an even permutation. A permutation that can be realized by an odd number of elementary permutations is called an odd permutation.

• Proposition 1.1.1 The parity of a permutation does not depend on its representation as a product of elementary permutations.

• That is, each representation of an even permutation has an even number of elementary permutations, and similarly for odd permutations.

• The sign of a permutation φ, denoted by sign(φ) (or simply (−1)^φ), is defined by
$$ \mathrm{sign}(\varphi) = (-1)^{\varphi} = \begin{cases} +1, & \text{if } \varphi \text{ is even,} \\ -1, & \text{if } \varphi \text{ is odd.} \end{cases} $$

• The set of all permutations of n numbers is denoted by S_n.

• Theorem 1.1.5 The cardinality of this set, that is, the number of different permutations, is
$$ |S_n| = n! \, . $$

• The determinant is a map det : Mat(n, R) → R that assigns to each matrix A = (A_ij) a real number det A defined by
$$ \det A = \sum_{\varphi \in S_n} \mathrm{sign}(\varphi) \, A_{1\varphi(1)} \cdots A_{n\varphi(n)} , $$
where the summation goes over all n! permutations.
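The definition can be computed directly, summing over all n! permutations. In the sketch below (helper names ours) the sign is obtained by counting inversions, since a permutation is even or odd exactly when its inversion count is; the cost is O(n · n!), so this is for illustration only, not a practical algorithm:

```python
from itertools import permutations
from math import prod

def sign(p):
    """(-1)^phi via the inversion count of the permutation tuple p."""
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det(A):
    """det A = sum over phi of sign(phi) * A_{0,phi(0)} * ... * A_{n-1,phi(n-1)}."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24   # diagonal: product of entries
assert det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) == 0    # rows are linearly dependent
```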

• The most important properties of the determinant are listed below.

Theorem 1.1.6
1. The determinant of the product of matrices is equal to the product of the determinants:
$$ \det(AB) = \det A \, \det B . $$


2. The determinants of a matrix A and of its transpose A^T are equal:
$$ \det A = \det A^T . $$

3. The determinant of the inverse A^{-1} of an invertible matrix A is equal to the inverse of the determinant of A:
$$ \det A^{-1} = (\det A)^{-1} . $$

4. A matrix is invertible if and only if its determinant is non-zero.

• The set of real invertible matrices (with non-zero determinant) is denoted by GL(n, R). The set of matrices with positive determinant is denoted by GL+(n, R).

• A matrix with unit determinant is called unimodular.

• The set of real matrices with unit determinant is denoted by SL(n, R).

• The set of real orthogonal matrices is denoted by O(n).

• Theorem 1.1.7 The determinant of an orthogonal matrix is equal to either 1 or −1.

• An orthogonal matrix with unit determinant (a unimodular orthogonal matrix) is called a proper orthogonal matrix or just a rotation.

• The set of real orthogonal matrices with unit determinant is denoted by SO(n).

• A set G of invertible matrices forms a group if it is closed under taking inverses and matrix multiplication, that is, if the inverse A^{-1} of any matrix A in G belongs to the set G and the product AB of any two matrices A and B in G belongs to G.
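The 2 × 2 rotation matrices are a concrete instance: each R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]] satisfies R^T R = I and det R = +1, and products of rotations are again rotations, which illustrates the closure required of a group. A numerical sketch (helper names ours):

```python
import math

def rot(theta):
    """Proper orthogonal 2x2 matrix: rotation by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [list(r) for r in zip(*A)]

R = rot(0.3)
RtR = matmul(transpose(R), R)
assert all(abs(RtR[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))              # R^T R = I
assert abs(R[0][0] * R[1][1] - R[0][1] * R[1][0] - 1.0) < 1e-12  # det R = +1

# Closure: R(a) R(b) = R(a + b), another rotation.
P, Q = matmul(rot(0.3), rot(0.4)), rot(0.7)
assert all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(2) for j in range(2))
```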

1.1.4 Exercises

1. Show that the product of invertible matrices is an invertible matrix.

2. Show that the product of matrices with positive determinant is a matrix with positive determinant.

3. Show that the inverse of a matrix with positive determinant is a matrix with positive determinant.

4. Show that GL(n, R) forms a group (called the general linear group).

5. Show that GL+(n, R) is a group (called the proper general linear group).

6. Show that the inverse of a matrix with negative determinant is a matrix with negative determinant.

7. Show that: a) the product of an even number of matrices with negative determinant is a matrix with positive determinant, b) the product of an odd number of matrices with negative determinant is a matrix with negative determinant.

8. Show that the product of matrices with unit determinant is a matrix with unit determinant.

9. Show that the inverse of a matrix with unit determinant is a matrix with unit determinant.


10. Show that SL(n, R) forms a group (called the special linear group or the unimodular group).

11. Show that the product of orthogonal matrices is an orthogonal matrix.

12. Show that the inverse of an orthogonal matrix is an orthogonal matrix.

13. Show that O(n) forms a group (called the orthogonal group).

14. Show that orthogonal matrices have determinant equal to either +1 or −1.

15. Show that the product of orthogonal matrices with unit determinant is an orthogonal matrix with unit determinant.

16. Show that the inverse of an orthogonal matrix with unit determinant is an orthogonal matrix with unit determinant.

17. Show that SO(n) forms a group (called the proper orthogonal group or the rotation group).


1.2 Vector Spaces

• A real vector space consists of a set E, whose elements are called vectors, and the set of real numbers R, whose elements are called scalars. There are two operations on a vector space:

1. Vector addition, + : E × E → E, that assigns to two vectors u, v ∈ E another vector u + v, and

2. Multiplication by scalars, · : R × E → E, that assigns to a vector v ∈ E and a scalar a ∈ R a new vector av ∈ E.

The vector addition is an associative commutative operation with an additive identity. It satisfies the following conditions:

1. u + v = v + u, ∀u, v ∈ E

2. (u + v) + w = u + (v + w), ∀u, v, w ∈ E

3. There is a vector 0 ∈ E, called the zero vector, such that for any v ∈ E there holds v + 0 = v.

4. For any vector v ∈ E, there is a vector (−v) ∈ E, called the opposite of v, such that v + (−v) = 0.

The multiplication by scalars satisfies the following conditions:

1. a(bv) = (ab)v, ∀v ∈ E, ∀a, b ∈ R,

2. (a + b)v = av + bv, ∀v ∈ E, ∀a, b ∈ R,

3. a(u + v) = au + av, ∀u, v ∈ E, ∀a ∈ R,

4. 1v = v, ∀v ∈ E.

• The zero vector is unique.

• For any u, v ∈ E there is a unique vector, denoted by w = v − u and called the difference of v and u, such that u + w = v.

• For any v ∈ E,
$$ 0\, v = 0 \qquad\text{and}\qquad (-1)\, v = -v . $$

• Let E be a real vector space and A = {e_1, . . . , e_k} be a finite collection of vectors from E. A linear combination of these vectors is a vector
$$ a_1 e_1 + \cdots + a_k e_k , $$
where a_1, . . . , a_k are scalars.

• A finite collection of vectors A = {e_1, . . . , e_k} is linearly independent if
$$ a_1 e_1 + \cdots + a_k e_k = 0 $$
implies a_1 = · · · = a_k = 0.


• A collection A of vectors is linearly dependent if it is not linearly independent.

• Two non-zero vectors u and v which are linearly dependent are also called parallel, denoted by u ∥ v.

• Equivalently, a collection A of vectors is linearly independent if no vector of A is a linear combination of finitely many other vectors from A.

• Let A be a subset of a vector space E. The span of A, denoted by span A, is the subset of E consisting of all finite linear combinations of vectors from A, i.e.,
$$ \mathrm{span}\, A = \{ v \in E \mid v = a_1 e_1 + \cdots + a_k e_k , \ e_i \in A , \ a_i \in R \} . $$
We say that the subset span A is spanned by A.

• Theorem 1.2.1 The span of any subset of a vector space is a vector space.

• A vector subspace of a vector space E is a subset S ⊆ E which is itself a vector space.

• Theorem 1.2.2 A subset S of E is a vector subspace of E if and only if span S = S.

• The span of A is the smallest subspace of E containing A.

• A collection B of vectors of a vector space E is a basis of E if B is linearly independent and span B = E.

• A vector space E is finite-dimensional if it has a finite basis.

• Theorem 1.2.3 If the vector space E is finite-dimensional, then the number of vectors in any basis is the same.

• The dimension of a finite-dimensional real vector space E, denoted by dim E, is the number of vectors in a basis.

• Theorem 1.2.4 If {e_1, . . . , e_n} is a basis in E, then for every vector v ∈ E there is a unique set of real numbers (v^i) = (v^1, . . . , v^n) such that
$$ v = \sum_{i=1}^{n} v^i e_i = v^1 e_1 + \cdots + v^n e_n . $$

• The real numbers v^i, i = 1, . . . , n, are called the components of the vector v with respect to the basis {e_i}.

• It is customary to denote the components of vectors by superscripts, which should not be confused with powers of real numbers:
$$ v^2 \neq (v)^2 = v\,v , \quad \dots , \quad v^n \neq (v)^n . $$
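Finding the components amounts to solving a linear system. A sketch for R^2 using Cramer's rule (the function name and the sample basis are ours); uniqueness of the solution is exactly Theorem 1.2.4:

```python
def components(e1, e2, v):
    """Solve v = v1 * e1 + v2 * e2 for the components (v1, v2) by Cramer's rule."""
    det = e1[0] * e2[1] - e2[0] * e1[1]   # nonzero iff {e1, e2} is a basis
    v1 = (v[0] * e2[1] - e2[0] * v[1]) / det
    v2 = (e1[0] * v[1] - v[0] * e1[1]) / det
    return v1, v2

# A non-standard basis of R^2 and a vector expressed in it:
e1, e2 = (1.0, 1.0), (1.0, -1.0)
v = (3.0, 1.0)
print(components(e1, e2, v))  # (2.0, 1.0), i.e. v = 2 e1 + 1 e2
```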


Examples of Vector Subspaces

• Zero subspace {0}.

• Line with a tangent vector u:
$$ S_1 = \mathrm{span}\{u\} = \{ v \in E \mid v = t u , \ t \in R \} . $$

• Plane spanned by two nonparallel vectors u_1 and u_2:
$$ S_2 = \mathrm{span}\{u_1, u_2\} = \{ v \in E \mid v = t u_1 + s u_2 , \ t, s \in R \} . $$

• More generally, a k-plane spanned by a linearly independent collection of k vectors {u_1, . . . , u_k}:
$$ S_k = \mathrm{span}\{u_1, \dots, u_k\} = \{ v \in E \mid v = t_1 u_1 + \cdots + t_k u_k , \ t_1, \dots, t_k \in R \} . $$

• An (n − 1)-plane in an n-dimensional vector space is called a hyperplane.

1.2.1 Exercises

1. Show that if λv = 0, then either v = 0 or λ = 0.

2. Prove that the span of a collection of vectors is a vector subspace.


1.3 Inner Product and Norm

• A real vector space E is called an inner product space if there is a function (·, ·) : E × E → R, called the inner product, that assigns to every two vectors u and v a real number (u, v) and satisfies the following conditions: ∀u, v, w ∈ E, ∀a ∈ R:

1. (v, v) ≥ 0

2. (v, v) = 0 if and only if v = 0

3. (u, v) = (v, u)

4. (u + v, w) = (u, w) + (v, w)

5. (au, v) = (u, av) = a(u, v)

A finite-dimensional inner product space is called a Euclidean space.

• The inner product is often called the dot product (or the scalar product), and is denoted by
$$ (u, v) = u \cdot v . $$

• All spaces considered below are Euclidean spaces. Henceforth, E will denote an n-dimensional Euclidean space if not specified otherwise.

• The Euclidean norm is a function ‖·‖ : E → R that assigns to every vector v ∈ E a real number ‖v‖ defined by
$$ \| v \| = \sqrt{(v, v)} . $$

• The norm of a vector is also called the length.

• A vector with unit norm is called a unit vector.

• Theorem 1.3.1 For any u, v ∈ E there holds
$$ \| u + v \|^2 = \| u \|^2 + 2 (u, v) + \| v \|^2 . $$

• Theorem 1.3.2 (Cauchy-Schwarz Inequality) For any u, v ∈ E there holds
$$ |(u, v)| \le \| u \| \, \| v \| . $$
The equality
$$ |(u, v)| = \| u \| \, \| v \| $$
holds if and only if u and v are parallel.

• Corollary 1.3.1 (Triangle Inequality) For any u, v ∈ E there holds
$$ \| u + v \| \le \| u \| + \| v \| . $$


• The angle between two non-zero vectors u and v is defined by
$$ \cos\theta = \frac{(u, v)}{\| u \| \, \| v \|} , \qquad 0 \le \theta \le \pi . $$
Then the inner product can be written in the form
$$ (u, v) = \| u \| \, \| v \| \cos\theta . $$
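The Cauchy-Schwarz inequality is what guarantees that the quotient lies in [−1, 1], so the angle is well defined. A small sketch (the function name `angle` is ours):

```python
import math

def angle(u, v):
    """theta = arccos( (u, v) / (||u|| ||v||) ) for non-zero u, v."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(dot / (nu * nv))

print(angle((1.0, 0.0), (1.0, 1.0)))  # pi/4: the diagonal of the unit square
```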

• Two non-zero vectors u, v ∈ E are orthogonal, denoted by u ⊥ v, if
$$ (u, v) = 0 . $$

• A basis {e_1, . . . , e_n} is called orthonormal if each vector of the basis is a unit vector and any two distinct vectors are orthogonal to each other, that is,
$$ (e_i , e_j) = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j . \end{cases} $$

• Theorem 1.3.3 Every Euclidean space has an orthonormal basis.

• Let S ⊂ E be a nonempty subset of E. We say that x ∈ E is orthogonal to S, denoted by x ⊥ S, if x is orthogonal to every vector of S.

• The set
$$ S^{\perp} = \{ x \in E \mid x \perp S \} $$
of all vectors orthogonal to S is called the orthogonal complement of S.

• Theorem 1.3.4 The orthogonal complement of any subset of a Euclidean space is a vector subspace.

• Two subsets A and B of E are orthogonal, denoted by A ⊥ B, if every vector of A is orthogonal to every vector of B.

• Let S be a subspace of E and S^⊥ its orthogonal complement. If every element of E can be uniquely represented as the sum of an element of S and an element of S^⊥, then E is the direct sum of S and S^⊥, which is denoted by
$$ E = S \oplus S^{\perp} . $$

• The union of a basis of S and a basis of S^⊥ gives a basis of E.

1.3.1 Exercises

1. Show that the Euclidean norm has the following properties:

(a) ‖v‖ ≥ 0, ∀v ∈ E;

(b) ‖v‖ = 0 if and only if v = 0;

(c) ‖av‖ = |a| ‖v‖, ∀v ∈ E, ∀a ∈ R.

2. Parallelogram Law. Show that for any u, v ∈ E
$$ \| u + v \|^2 + \| u - v \|^2 = 2 \left( \| u \|^2 + \| v \|^2 \right) . $$

3. Show that any orthogonal system in E is linearly independent.

4. Gram-Schmidt orthonormalization process. Let G = {u1, · · · , uk} be a linearly independent collection of vectors. Let O = {v1, · · · , vk} be a new collection of vectors defined recursively by

v1 = u1 ,

vj = uj − ∑_{i=1}^{j−1} vi (vi, uj) / ||vi||² ,   2 ≤ j ≤ k ,

and let the collection B = {e1, . . . , ek} be defined by

ei = vi / ||vi|| .

Show that: a) O is an orthogonal system and b) B is an orthonormal system.
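The recursion above is easy to check numerically. The following sketch (the code and the choice of input vectors are mine, not part of the notes) implements the process with NumPy:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors.

    Implements v_j = u_j - sum_i v_i (v_i, u_j)/||v_i||^2,
    followed by the normalization e_i = v_i / ||v_i||.
    """
    orthogonal = []
    for u in vectors:
        v = u.astype(float).copy()
        for w in orthogonal:
            v -= w * (w @ u) / (w @ w)   # subtract the projection onto w
        orthogonal.append(v)
    return [v / np.linalg.norm(v) for v in orthogonal]

basis = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                      np.array([1.0, 0.0, 1.0]),
                      np.array([0.0, 1.0, 1.0])])
gram = np.array([[e1 @ e2 for e2 in basis] for e1 in basis])
print(np.allclose(gram, np.eye(3)))   # the Gram matrix is the identity
```

The resulting Gram matrix being the identity verifies both claims a) and b) at once.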

5. Pythagorean Theorem. Show that if u ⊥ v, then

||u + v||² = ||u||² + ||v||² .

6. Let B = {e1, · · · , en} be an orthonormal basis in E. Show that for any vector v ∈ E

v = ∑_{i=1}^{n} ei (ei, v)

and

||v||² = ∑_{i=1}^{n} (ei, v)² .

7. Prove that the orthogonal complement of a subset S of E is a vector subspace of E.

8. Let S be a subspace in E. Prove that:

a) E⊥ = {0}, b) {0}⊥ = E, c) (S⊥)⊥ = S.

9. Show that the intersection of orthogonal subsets of a Euclidean space is either empty or consists of only the zero vector. That is, for two subsets A and B, if A ⊥ B, then A ∩ B = {0} or ∅.


1.4 Linear Operators

• A linear operator on a vector space E is a mapping L : E → E satisfying the condition: ∀u, v ∈ E, ∀a ∈ R,

L(u + v) = L(u) + L(v) and L(av) = aL(v).

• The identity operator I on E is defined by

Iv = v , ∀v ∈ E .

• The null operator 0 : E → E is defined by

0v = 0 , ∀v ∈ E .

• The vector u = L(v) is the image of the vector v.

• If S is a subset of E, then the set

L(S) = {u ∈ E | u = L(v) for some v ∈ S}

is the image of the set S, and the set

L⁻¹(S) = {v ∈ E | L(v) ∈ S}

is the inverse image of the set S.

• The image of the whole space E under a linear operator L is the range (or the image) of L, denoted by

Im(L) = L(E) = {u ∈ E | u = L(v) for some v ∈ E} .

• The kernel Ker(L) (or the null space) of an operator L is the set of all vectors in E which are mapped to zero, that is,

Ker(L) = L⁻¹({0}) = {v ∈ E | L(v) = 0} .

• Theorem 1.4.1 For any operator L the sets Im(L) and Ker(L) are vector subspaces.

• The dimension of the kernel Ker(L) of an operator L,

null(L) = dim Ker(L) ,

is called the nullity of the operator L.

• The dimension of the range Im(L) of an operator L,

rank(L) = dim Im(L) ,

is called the rank of the operator L.


• Theorem 1.4.2 For any operator L on an n-dimensional Euclidean space E,

rank(L) + null(L) = n .

• The set L(E) of all linear operators on a vector space E is a vector space with the addition of operators and multiplication by scalars defined by

(L1 + L2)(x) = L1(x) + L2(x) , and (aL)(x) = aL(x) .

• The product of the operators A and B is the composition of A and B, that is, (AB)(v) = A(B(v)).

• Since the product of operators is defined as a composition of linear mappings, it is automatically associative, which means that for any operators A, B and C there holds

(AB)C = A(BC) .

• The integer powers of an operator are defined as the multiple composition of the operator with itself, i.e.

A⁰ = I ,  A¹ = A ,  A² = AA , . . .

• The operator A on E is invertible if there exists an operator A⁻¹ on E, called the inverse of A, such that

A⁻¹A = AA⁻¹ = I .

• Theorem 1.4.3 Let A and B be invertible operators. Then:

(A⁻¹)⁻¹ = A ,   (AB)⁻¹ = B⁻¹A⁻¹ .

• The operators A and B are commuting if

AB = BA

and anti-commuting if

AB = −BA .

• The operators A and B are said to be orthogonal to each other if

AB = BA = 0 .

• An operator A is involutive if

A² = I ,

idempotent if

A² = A ,

and nilpotent if for some integer k

Aᵏ = 0 .


Selfadjoint Operators

• The adjoint A∗ of an operator A is defined by

(Au, v) = (u,A∗v), ∀u, v ∈ E.

• Theorem 1.4.4 For any two operators A and B,

(A∗)∗ = A ,   (AB)∗ = B∗A∗ .

• An operator A is self-adjoint if

A∗ = A

and anti-selfadjoint if

A∗ = −A .

• Every operator A can be decomposed as the sum

A = A_S + A_A

of its selfadjoint part A_S and its anti-selfadjoint part A_A, where

A_S = (1/2)(A + A∗) ,   A_A = (1/2)(A − A∗) .

• An operator A is called unitary if

AA∗ = A∗A = I .

• An operator A on E is called positive, denoted by A ≥ 0, if it is selfadjoint and ∀v ∈ E

(Av, v) ≥ 0 .

Projection Operators

• Let S be a subspace of E and E = S ⊕ S⊥. Then for any u ∈ E there exist unique v ∈ S and w ∈ S⊥ such that

u = v + w .

The vector v is called the projection of u onto S.

• The operator P on E defined by

Pu = v

is called the projection operator onto S.


• The operator P⊥ defined by

P⊥u = w

is the projection operator onto S⊥.

• The operators P and P⊥ are called complementary projections. They have the properties:

P∗ = P ,   (P⊥)∗ = P⊥ ,

P + P⊥ = I ,

P² = P ,   (P⊥)² = P⊥ ,

PP⊥ = P⊥P = 0 .
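These identities can be verified concretely for the projection onto a coordinate plane in R³; the matrices below are illustrative choices of mine, not part of the notes:

```python
import numpy as np

# Projection onto the xy-plane S and its complement, the z-axis S-perp
P = np.diag([1.0, 1.0, 0.0])
P_perp = np.eye(3) - P

assert np.allclose(P.T, P)                        # P* = P (selfadjoint)
assert np.allclose(P + P_perp, np.eye(3))         # P + P-perp = I
assert np.allclose(P @ P, P)                      # P^2 = P (idempotent)
assert np.allclose(P @ P_perp, np.zeros((3, 3)))  # P P-perp = 0
print("all complementary-projection identities hold")
```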

• Theorem 1.4.5 An operator P is a projection if and only if P is idempotent and self-adjoint.

• More generally, a collection of projections {P1, . . . , Pk} is a complete orthogonal system of complementary projections if

Pi Pj = 0 if i ≠ j

and

∑_{i=1}^{k} Pi = P1 + · · · + Pk = I .

• A complete orthogonal system of projections defines the orthogonal decomposition of the vector space

E = E1 ⊕ · · · ⊕ Ek ,

where Ei is the subspace onto which the projection Pi projects.

• Theorem 1.4.6 1. The dimensions of the subspaces Ei are equal to the ranks of the projections Pi:

dim Ei = rank Pi .

2. The sum of the dimensions of the vector subspaces Ei equals the dimension of the vector space E:

∑_{i=1}^{k} dim Ei = dim E1 + · · · + dim Ek = dim E .


Spectral Decomposition Theorem

• A real number λ is called an eigenvalue of an operator A if there is a unit vector u ∈ E such that

Au = λu .

The vector u is called an eigenvector corresponding to the eigenvalue λ.

• The span of all eigenvectors corresponding to the eigenvalue λ of an operator A is called the eigenspace of λ.

• The dimension of the eigenspace of the eigenvalue λ is called the multiplicity (also called the geometric multiplicity) of λ.

• An eigenvalue of multiplicity 1 is called simple (or non-degenerate).

• An eigenvalue of multiplicity greater than 1 is called multiple (or degenerate).

• The set of all eigenvalues of an operator is called the spectrum of the operator.

• Theorem 1.4.7 Let A be a selfadjoint operator. Then:

1. The number of eigenvalues counted with multiplicity is equal to the dimension n = dim E of the vector space E.

2. The eigenvectors corresponding to distinct eigenvalues are orthogonal to each other.

• Theorem 1.4.8 Spectral Decomposition of Self-Adjoint Operators. Let A be a selfadjoint operator on E. Then there exists an orthonormal basis B = {e1, . . . , en} in E consisting of eigenvectors of A corresponding to the eigenvalues {λ1, . . . , λn}, and a corresponding system of orthogonal complementary projections {P1, . . . , Pn} onto the one-dimensional eigenspaces Ei, such that

A = ∑_{i=1}^{n} λi Pi .

The projections {Pi} are defined by

Pi v = ei (ei, v)

and satisfy the equations

∑_{i=1}^{n} Pi = I , and Pi Pj = 0 if i ≠ j .

• In other words, for any

v = ∑_{i=1}^{n} ei (ei, v) ,

we have

Av = ∑_{i=1}^{n} λi ei (ei, v) .
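For a concrete symmetric matrix the decomposition A = ∑ λi Pi can be recovered from a numerical eigensolver. The sketch below uses `numpy.linalg.eigh` on an arbitrarily chosen 2×2 matrix (my example, not from the notes):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # a selfadjoint (symmetric) operator
lam, e = np.linalg.eigh(A)            # eigenvalues and orthonormal eigenvectors

# One-dimensional projections P_i v = e_i (e_i, v), i.e. P_i = e_i e_i^T
P = [np.outer(e[:, i], e[:, i]) for i in range(2)]

assert np.allclose(sum(P), np.eye(2))                 # sum P_i = I
assert np.allclose(P[0] @ P[1], np.zeros((2, 2)))     # P_i P_j = 0, i != j
assert np.allclose(lam[0] * P[0] + lam[1] * P[1], A)  # A = sum lam_i P_i
print(lam)
```

For this matrix the eigenvalues are 1 and 3, and the reconstruction ∑ λi Pi reproduces A exactly.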


• Let f : R → R be a real-valued function on R. Let A be a selfadjoint operator on a Euclidean space E given by its spectral decomposition

A = ∑_{i=1}^{n} λi Pi ,

where Pi are the one-dimensional projections. Then one can define a function of the self-adjoint operator f(A) on E by

f(A) = ∑_{i=1}^{n} f(λi) Pi .

• The exponential of an operator A is defined by

exp A = ∑_{k=0}^{∞} (1/k!) Aᵏ ;

for a selfadjoint A with the spectral decomposition above this gives

exp A = ∑_{i=1}^{n} e^{λi} Pi .

• Theorem 1.4.9 Let U be a unitary operator on a real vector space E. Then there exists an anti-selfadjoint operator A such that

U = exp A .

• Recall that the operators U and A satisfy the equations

U∗ = U⁻¹ and A∗ = −A .

• Let A be a self-adjoint operator with the eigenvalues {λ1, . . . , λn}. Then the trace and the determinant of the operator A are defined by

tr A = ∑_{i=1}^{n} λi ,   det A = λ1 · · · λn .

• Note that

tr I = n ,   det I = 1 .

• The trace of a projection P onto a vector subspace S is equal to its rank, or the dimension of the vector subspace S,

tr P = rank P = dim S .

• The trace of a function of a self-adjoint operator A is then

tr f(A) = ∑_{i=1}^{n} f(λi) .

If there are multiple eigenvalues, then each eigenvalue should be counted with its multiplicity.


• Theorem 1.4.10 Let A be a self-adjoint operator. Then

det exp A = e^{tr A} .

• Let A be a positive definite operator, A > 0. The zeta-function of the operator A is defined by

ζ(s) = tr A⁻ˢ = ∑_{i=1}^{n} λi⁻ˢ .

• Theorem 1.4.11 The zeta-function has the properties

ζ(0) = n

and

ζ′(0) = − log det A .

Examples

• Let u be a unit vector and Pu be the projection onto the one-dimensional subspace (line) Su spanned by u, defined by

Pu v = u (u, v) .

The orthogonal complement S⊥u is the hyperplane with the normal u. The operator Ju defined by

Ju = I − 2 Pu

is called the reflection operator with respect to the hyperplane S⊥u. The reflection operator is a self-adjoint involution, that is, it has the following properties

J∗u = Ju ,   J²u = I .

The reflection operator has the eigenvalue −1 with multiplicity 1 and the eigenspace Su, and the eigenvalue +1 with multiplicity (n − 1) and the eigenspace S⊥u.
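These properties of the reflection operator are easy to confirm in coordinates; the unit vector below is an arbitrary choice:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u /= np.linalg.norm(u)                 # make u a unit vector

P_u = np.outer(u, u)                   # projection onto the line spanned by u
J_u = np.eye(3) - 2.0 * P_u            # reflection in the hyperplane u-perp

assert np.allclose(J_u.T, J_u)               # selfadjoint: J* = J
assert np.allclose(J_u @ J_u, np.eye(3))     # involution: J^2 = I
eigvals = np.sort(np.linalg.eigvalsh(J_u))
print(eigvals)   # one eigenvalue -1 (on S_u), two eigenvalues +1 (on S_u-perp)
```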

• Let u1 and u2 be an orthonormal system of two vectors and Pu1,u2 be the projection operator onto the two-dimensional space (plane) Su1,u2 spanned by u1 and u2,

Pu1,u2 v = u1 (u1, v) + u2 (u2, v) .

Let Nu1,u2 be the operator defined by

Nu1,u2 v = u1 (u2, v) − u2 (u1, v) .

Then

Nu1,u2 Pu1,u2 = Pu1,u2 Nu1,u2 = Nu1,u2

and

N²u1,u2 = −Pu1,u2 .


A rotation operator Ru1,u2(θ) with the angle θ in the plane Su1,u2 is defined by

Ru1,u2(θ) = I − Pu1,u2 + cos θ Pu1,u2 + sin θ Nu1,u2 .

The rotation operator is unitary, that is, it satisfies the equation

R∗u1,u2 Ru1,u2 = I .
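The rotation operator can likewise be assembled and tested in coordinates; the pair u1, u2 and the angle θ below are arbitrary choices of mine:

```python
import numpy as np

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])         # an orthonormal pair spanning the xy-plane
theta = 0.7

P = np.outer(u1, u1) + np.outer(u2, u2)        # projection onto the plane
N = np.outer(u1, u2) - np.outer(u2, u1)        # N v = u1 (u2, v) - u2 (u1, v)
R = np.eye(3) - P + np.cos(theta) * P + np.sin(theta) * N

assert np.allclose(N @ N, -P)                  # N^2 = -P
assert np.allclose(R.T @ R, np.eye(3))         # R* R = I, i.e. R is unitary
print(np.round(np.linalg.det(R), 6))           # a rotation has determinant 1
```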

• Theorem 1.4.12 Spectral Decomposition of Unitary Operators on Real Vector Spaces. Let U be a unitary operator on a real vector space E. Then the only eigenvalues of U are +1 and −1 (possibly multiple) and there exists an orthogonal decomposition

E = E+ ⊕ E− ⊕ V1 ⊕ · · · ⊕ Vk ,

where E+ and E− are the eigenspaces corresponding to the eigenvalues +1 and −1, and {V1, . . . , Vk} are two-dimensional subspaces such that

dim E = dim E+ + dim E− + 2k .

Let P+, P−, P1, . . . , Pk be the corresponding orthogonal complementary system of projections, that is,

P+ + P− + ∑_{i=1}^{k} Pi = I .

Then there exists a corresponding system of operators N1, . . . , Nk satisfying the equations

N²i = −Pi ,   Ni Pi = Pi Ni = Ni ,

Ni Pj = Pj Ni = 0 , if i ≠ j ,

and the angles θ1, . . . , θk such that

U = P+ − P− + ∑_{i=1}^{k} (cos θi Pi + sin θi Ni) .

1.4.1 Exercises

1. Prove that the range and the kernel of any operator are vector spaces.

2. Show that, ∀a, b ∈ R,

(aA + bB)∗ = aA∗ + bB∗ ,

(A∗)∗ = A ,

(AB)∗ = B∗A∗ .

3. Show that for any operator A the operators AA∗ and A + A∗ are selfadjoint.

4. Show that the product of two selfadjoint operators is selfadjoint if and only if they commute.

5. Show that a polynomial p(A) of a selfadjoint operator A is a selfadjoint operator.


6. Prove that the inverse of an invertible operator is unique.

7. Prove that an operator A is invertible if and only if Ker A = {0}, that is, Av = 0 implies v = 0.

8. Prove that for an invertible operator A, Im(A) = E, that is, for any vector v ∈ E there is a vector u ∈ E such that v = Au.

9. Show that if an operator A is invertible, then

(A⁻¹)⁻¹ = A .

10. Show that the product AB of two invertible operators A and B is invertible and

(AB)⁻¹ = B⁻¹A⁻¹ .

11. Prove that the adjoint A∗ of any invertible operator A is invertible and

(A∗)⁻¹ = (A⁻¹)∗ .

12. Prove that the inverse A⁻¹ of a selfadjoint invertible operator is selfadjoint.

13. An operator A on E is called isometric if ∀v ∈ E,

||Av|| = ||v|| .

Prove that an operator is unitary if and only if it is isometric.

14. Prove that unitary operators preserve the inner product. That is, show that if A is a unitary operator, then ∀u, v ∈ E

(Au, Av) = (u, v) .

15. Show that for every unitary operator A both A⁻¹ and A∗ are unitary.

16. Show that for any operator A the operators AA∗ and A∗A are positive.

17. What subspaces do the null operator 0 and the identity operator I project onto?

18. Show that for any two projection operators P and Q, PQ = 0 if and only if QP = 0.

19. Prove the following properties of orthogonal projections:

P∗ = P ,   (P⊥)∗ = P⊥ ,   P⊥ + P = I ,   PP⊥ = P⊥P = 0 .

20. Prove that an operator is a projection if and only if it is idempotent and selfadjoint.

21. Give an example of an idempotent operator in R² which is not a projection.

22. Show that any projection operator P is positive. Moreover, show that ∀v ∈ E

(Pv, v) = ||Pv||² .

23. Prove that the sum P = P1 + P2 of two projections P1 and P2 is a projection operator if and only if P1 and P2 are orthogonal.

24. Prove that the product P = P1P2 of two projections P1 and P2 is a projection operator if and only if P1 and P2 commute.

25. Find the eigenvalues of a projection operator.

26. Prove that the span of all eigenvectors corresponding to the eigenvalue λ of an operator A is a vector space.


27. Let E(λ) = Ker(A − λI). Show that: a) if λ is not an eigenvalue of A, then E(λ) = {0}, and b) if λ is an eigenvalue of A, then E(λ) is the eigenspace corresponding to the eigenvalue λ.

28. Show that the operator A − λI is invertible if and only if λ is not an eigenvalue of the operator A.

29. Let T be a unitary operator. Then the operators A and

A′ = TAT⁻¹

are called similar. Show that the eigenvalues of similar operators are the same.

30. Show that an operator similar to a selfadjoint operator is selfadjoint and an operator similar to an anti-selfadjoint operator is anti-selfadjoint.

31. Show that all eigenvalues of a positive operator A are non-negative.

32. Show that the eigenvectors corresponding to distinct eigenvalues of a unitary operator are orthogonal to each other.

33. Show that the eigenvectors corresponding to distinct eigenvalues of a selfadjoint operator are orthogonal to each other.

34. Show that all eigenvalues of a unitary operator A have absolute value equal to 1.

35. Show that if A is a projection, then it can only have two eigenvalues: 1 and 0.


Chapter 2

Vector and Tensor Algebra

2.1 Metric Tensor

• Let E be a Euclidean space and {e_i} = {e_1, . . . , e_n} be a basis (not necessarily orthonormal). Then each vector v ∈ E can be represented as a linear combination

v = ∑_{i=1}^{n} v^i e_i ,

where v^i, i = 1, . . . , n, are the components of the vector v with respect to the basis {e_i} (or contravariant components of the vector v). We stress once again that contravariant components of vectors are denoted by upper indices (superscripts).

• Let G = (g_{ij}) be the matrix whose entries are defined by

g_{ij} = (e_i, e_j) .

These numbers are called the components of the metric tensor with respect to the basis {e_i} (also called covariant components of the metric).

• Notice that the matrix G is symmetric, that is,

g_{ij} = g_{ji} ,   Gᵀ = G .

• Theorem 2.1.1 The matrix G is invertible and

det G > 0 .

• The elements of the inverse matrix G⁻¹ = (g^{ij}) are called the contravariant components of the metric. They satisfy the equations

∑_{j=1}^{n} g^{ij} g_{jk} = δ^i_k ,


where δ^i_j is the Kronecker symbol defined by

δ^i_j = { 1, if i = j ;  0, if i ≠ j } .

• Since the inverse of a symmetric matrix is symmetric, we have

g^{ij} = g^{ji} .

• In an orthonormal basis

g_{ij} = g^{ij} = δ_{ij} ,   G = G⁻¹ = I .

• Let v ∈ E be a vector. The real numbers

v_i = (e_i, v)

are called the covariant components of the vector v. Notice that covariant components of vectors are denoted by lower indices (subscripts).

• Theorem 2.1.2 Let v ∈ E be a vector. The covariant and the contravariant components of v are related by

v_i = ∑_{j=1}^{n} g_{ij} v^j ,   v^i = ∑_{j=1}^{n} g^{ij} v_j .

• Theorem 2.1.3 The metric determines the inner product and the norm by

(u, v) = ∑_{i,j=1}^{n} g_{ij} u^i v^j = ∑_{i,j=1}^{n} g^{ij} u_i v_j ,

||v||² = ∑_{i,j=1}^{n} g_{ij} v^i v^j = ∑_{i,j=1}^{n} g^{ij} v_i v_j .
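In coordinates, Theorems 2.1.2 and 2.1.3 reduce to matrix multiplications. The basis below (two non-orthogonal vectors in R²) is an illustrative choice of mine:

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])   # a non-orthonormal basis
E = np.column_stack([e1, e2])

g = np.array([[e1 @ e1, e1 @ e2],
              [e2 @ e1, e2 @ e2]])     # covariant metric g_ij = (e_i, e_j)
g_inv = np.linalg.inv(g)               # contravariant components g^ij

v_up = np.array([2.0, 3.0])            # contravariant components v^i
v_dn = g @ v_up                        # lower the index: v_i = g_ij v^j

v = E @ v_up                           # the vector itself, v = v^i e_i
assert np.allclose(v_dn, [e1 @ v, e2 @ v])        # v_i = (e_i, v)
assert np.allclose(g_inv @ v_dn, v_up)            # raise the index back
assert np.isclose(v_up @ g @ v_up, v @ v)         # ||v||^2 = g_ij v^i v^j
print("metric raising/lowering consistent")
```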


2.2 Dual Space and Covectors

• A linear mapping ω : E → R that assigns to a vector v ∈ E a real number ⟨ω, v⟩ and satisfies the condition: ∀u, v ∈ E, ∀a ∈ R,

⟨ω, u + v⟩ = ⟨ω, u⟩ + ⟨ω, v⟩ , and ⟨ω, av⟩ = a⟨ω, v⟩ ,

is called a linear functional.

• The space of linear functionals is a vector space, called the dual space of E and denoted by E∗, with the addition and multiplication by scalars defined by: ∀ω, σ ∈ E∗, ∀v ∈ E, ∀a ∈ R,

⟨ω + σ, v⟩ = ⟨ω, v⟩ + ⟨σ, v⟩ , and ⟨aω, v⟩ = a⟨ω, v⟩ .

The elements of the dual space E∗ are also called covectors or 1-forms. In keeping with tradition we will denote covectors by Greek letters.

• Theorem 2.2.1 The dual space E∗ of a real vector space E is a real vector spaceof the same dimension.

• Let {e_i} = {e_1, . . . , e_n} be a basis in E. A basis {ω^i} = {ω^1, . . . , ω^n} in E∗ such that

⟨ω^i, e_j⟩ = δ^i_j

is called the dual basis.

• The dual {ω^i} of an orthonormal basis is also orthonormal.

• Given a dual basis {ω^i}, every covector σ ∈ E∗ can be represented in a unique way as

σ = ∑_{i=1}^{n} σ_i ω^i ,

where the real numbers (σ_i) = (σ_1, . . . , σ_n) are called the components of the covector σ with respect to the basis {ω^i}.

• The advantage of using the dual basis is that it allows one to compute the components of a vector v and a covector σ by

v^i = ⟨ω^i, v⟩ and σ_i = ⟨σ, e_i⟩ .

That is,

v = ∑_{i=1}^{n} e_i ⟨ω^i, v⟩

and

σ = ∑_{i=1}^{n} ⟨σ, e_i⟩ ω^i .


• More generally, the action of a covector σ on a vector v has the form

⟨σ, v⟩ = ∑_{i=1}^{n} ⟨σ, e_i⟩ ⟨ω^i, v⟩ = ∑_{i=1}^{n} σ_i v^i .

• The existence of the metric allows us to define the following map

g : E → E∗

that assigns to each vector v a covector g(v) such that

⟨g(v), u⟩ = (v, u) .

Then

g(v) = ∑_{i=1}^{n} (v, e_i) ω^i .

In particular,

g(e_k) = ∑_{i=1}^{n} g_{ki} ω^i .

• Let v be a vector and σ be the corresponding covector, so σ = g(v) and v = g⁻¹(σ). Then their components are related by

σ_i = ∑_{j=1}^{n} g_{ij} v^j ,   v^i = ∑_{j=1}^{n} g^{ij} σ_j .

• The inverse map g⁻¹ : E∗ → E that assigns to each covector σ a vector g⁻¹(σ) such that

⟨σ, u⟩ = (g⁻¹(σ), u)

can be defined as follows. First, we define

g⁻¹(ω^k) = ∑_{i=1}^{n} g^{ki} e_i .

Then

g⁻¹(σ) = ∑_{k=1}^{n} ∑_{i=1}^{n} ⟨σ, e_k⟩ g^{ki} e_i .

• The inner product on the dual space E∗ is defined so that for any two covectors α and σ

(α, σ) = ⟨α, g⁻¹(σ)⟩ = (g⁻¹(α), g⁻¹(σ)) .

• This definition leads to

g^{ij} = (ω^i, ω^j) .


• Theorem 2.2.2 The inner product on the dual space E∗ is determined by

(α, σ) = ∑_{i=1}^{n} ∑_{j=1}^{n} g^{ij} α_i σ_j .

In particular,

(ω^i, σ) = ∑_{j=1}^{n} g^{ij} σ_j .

• The inverse map g⁻¹ can be defined in terms of the inner product of covectors as

g⁻¹(σ) = ∑_{i=1}^{n} e_i (ω^i, σ) .

• Since there is a one-to-one correspondence between vectors and covectors, we can treat a vector v and the corresponding covector g(v) as a single object and denote the components v^i of the vector v and the components of the covector g(v) by the same letter, that is,

v_i = ∑_{j=1}^{n} g_{ij} v^j ,   v^i = ∑_{j=1}^{n} g^{ij} v_j .

• We call v^i the contravariant components and v_i the covariant components. This operation is called raising and lowering an index; we use g^{ij} to raise an index and g_{ij} to lower an index.

2.2.1 Einstein Summation Convention

• In many equations of vector and tensor calculus, summations over components of vectors, covectors and, more generally, tensors with respect to a given basis frequently appear. Such a summation usually occurs over a pair of equal indices, one lower index and one upper index, and one sums over all values of the indices from 1 to n. The number of summation symbols ∑_{i=1}^{n} is equal to the number of pairs of repeated indices. That is why even simple equations become cumbersome and uncomfortable to work with. This led Einstein to drop all summation signs and to adopt the following summation convention:

1. In any expression there are two types of indices: free indices and repeated indices.

2. Free indices appear only once in an expression; they are assumed to take all possible values from 1 to n. For example, in the expression

g_{ij} v^j

the index i is a free index.


3. The position of all free indices in all terms of an equation must be the same. For example,

g_{ij} v^j + α_i = σ_i

is a correct equation, while the equation

g_{ij} v^j + α^i = σ_i

is a wrong equation.

4. Repeated indices appear twice in an expression. It is assumed that there is a summation over each repeated pair of indices from 1 to n. The summation over a pair of repeated indices in an expression is called a contraction. For example, in the expression

g_{ij} v^j

the index j is a repeated index. It actually means

∑_{j=1}^{n} g_{ij} v^j .

This is the result of the contraction of the indices k and l in the expression

g_{ik} v^l .

5. Repeated indices are dummy indices: they can be replaced by any other letter (not already used in the expression) without changing the meaning of the expression. For example,

g_{ij} v^j = g_{ik} v^k

just means

∑_{j=1}^{n} g_{ij} v^j = g_{i1} v^1 + · · · + g_{in} v^n ,

no matter how the repeated index is called.

6. Indices cannot be repeated on the same level. That is, in a pair of repeated indices one index is in the upper position and the other is in the lower position. For example,

v_i v_i

is a wrong expression.

7. There cannot be indices occurring three or more times in any expression. For example, the expression

g_{ii} v^i

does not make sense.

• From now on we will use the Einstein summation convention. We will say that an equation is written in tensor notation.


Examples

• First, we list the equations we already obtained above:

v_i = g_{ij} v^j ,   v^j = g^{ji} v_i ,

(u, v) = g_{ij} u^i v^j = u_i v^i = u^i v_i = g^{ij} u_i v_j ,

(α, β) = g^{ij} α_i β_j = α_i β^i = α^i β_i = g_{ij} α^i β^j ,

g^{ij} g_{jk} = δ^i_k .

• A contraction of indices one of which belongs to the Kronecker symbol just renames the index. For example:

δ^i_j v^j = v^i ,   δ^i_j δ^j_k = δ^i_k ,

etc.

• The contraction of the Kronecker symbol gives

δ^i_i = ∑_{i=1}^{n} 1 = n .
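NumPy's `einsum` uses exactly this convention: repeated letters are contracted, free letters index the result. A small demonstration (the metric and vector below are arbitrary choices of mine):

```python
import numpy as np

n = 3
g = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])        # a symmetric, positive-definite "metric"
g_inv = np.linalg.inv(g)
v = np.array([1.0, 2.0, 3.0])

# v_i = g_ij v^j : the repeated index j is contracted, i stays free
v_low = np.einsum('ij,j->i', g, v)
assert np.allclose(v_low, g @ v)

# g^ij g_jk = delta^i_k
delta = np.einsum('ij,jk->ik', g_inv, g)
assert np.allclose(delta, np.eye(n))

# delta^i_i = n : contraction of the Kronecker symbol
assert np.isclose(np.einsum('ii->', np.eye(n)), n)
print("einsum reproduces the summation convention")
```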


2.3 General Definition of a Tensor

• It should be realized that a vector is an invariant geometric object that does not depend on the basis; it exists by itself, independently of the basis. The basis is just a convenient tool to represent vectors by their components. The components of a vector do depend on the basis. It is the transformation law of the components of a vector (and, more generally, of a tensor, as we will see later) that makes an n-tuple of real numbers (v^1, . . . , v^n) a vector. Not every collection of n real numbers is a vector. To represent a vector, a geometric object that does not depend on the basis, these numbers should transform according to a very special rule under a change of basis.

• Let {e_i} = {e_1, . . . , e_n} and {e′_j} = {e′_1, . . . , e′_n} be two different bases in E. Obviously, the vectors from one basis can be decomposed as a linear combination of vectors from another basis, that is,

e_i = Λ^j_i e′_j ,

where Λ^j_i, i, j = 1, . . . , n, is a set of n² real numbers forming the transformation matrix Λ = (Λ^i_j). Of course, we also have the inverse transformation

e′_j = Λ̄^k_j e_k ,

where Λ̄^k_j, k, j = 1, . . . , n, is another set of n² real numbers (the bar distinguishes the inverse transformation matrix).

• The dual bases {ω^i} and {ω′^j} are related by

ω′^i = Λ^i_j ω^j ,   ω^i = Λ̄^i_j ω′^j .

• By using the second equation in the first and vice versa we obtain

e_i = Λ̄^k_j Λ^j_i e_k ,   e′_j = Λ^i_k Λ̄^k_j e′_i ,

which means that

Λ̄^k_j Λ^j_i = δ^k_i ,   Λ^i_k Λ̄^k_j = δ^i_j .

Thus, the matrix Λ̄ = (Λ̄^i_j) is the inverse transformation matrix.

• In matrix notation this becomes

ΛΛ̄ = Λ̄Λ = I ,

which means that the matrix Λ is invertible and

Λ̄ = Λ⁻¹ .

• The components of a vector v with respect to the basis {e′_i} are

v′^i = ⟨ω′^i, v⟩ = Λ^i_j ⟨ω^j, v⟩ .


This immediately gives

v′^i = Λ^i_j v^j .

This is the transformation law of contravariant components. It is easy to recognize this as the action of the transformation matrix on the column-vector of the vector components from the left.

• We can compute the transformation law of the components of a covector σ as follows:

σ′_i = ⟨σ, e′_i⟩ = Λ̄^j_i ⟨σ, e_j⟩ ,

which gives

σ′_i = Λ̄^j_i σ_j .

This is the transformation law of covariant components. It is the action of the inverse transformation matrix on the row-vector from the right. That is, the components of covectors are transformed with the transpose of the inverse transformation matrix!

• Now let us compute the transformation law of the covariant components of the metric tensor g_{ij}. By the definition we have

g′_{ij} = (e′_i, e′_j) = Λ̄^k_i Λ̄^l_j (e_k, e_l) .

This leads to

g′_{ij} = Λ̄^k_i Λ̄^l_j g_{kl} .

• Similarly, the contravariant components of the metric tensor g^{ij} transform according to

g′^{ij} = Λ^i_k Λ^j_l g^{kl} .

• The transformation law of the metric components in matrix notation reads

G′ = (Λ⁻¹)ᵀ G Λ⁻¹ and G′⁻¹ = Λ G⁻¹ Λᵀ .

• We denote the determinant of the matrix G = (g_{ij}) of covariant metric components by

|g| = det G = det(g_{ij}) .

Taking the determinant of the transformation law above, we obtain the transformation law of the determinant of the metric

|g′| = (det Λ)⁻² |g| .


• More generally, a set of real numbers T^{i_1...i_p}_{j_1...j_q} is said to represent components of a tensor of type (p, q) (p times contravariant and q times covariant) if they transform under a change of the basis according to

T′^{i_1...i_p}_{j_1...j_q} = Λ^{i_1}_{l_1} · · · Λ^{i_p}_{l_p} Λ̄^{m_1}_{j_1} · · · Λ̄^{m_q}_{j_q} T^{l_1...l_p}_{m_1...m_q} .

This is the general transformation law of the components of a tensor of type (p, q).

• The rank of a tensor of type (p, q) is the number (p + q).

• A tensor product of a tensor A of type (p, q) and a tensor B of type (r, s) is a tensor A ⊗ B of type (p + r, q + s) with components

(A ⊗ B)^{i_1...i_p l_1...l_r}_{j_1...j_q k_1...k_s} = A^{i_1...i_p}_{j_1...j_q} B^{l_1...l_r}_{k_1...k_s} .

• The symmetrization of a tensor of type (0, k) with components A_{i_1...i_k} is another tensor of the same type with components

A_{(i_1...i_k)} = (1/k!) ∑_{ϕ∈S_k} A_{i_{ϕ(1)}...i_{ϕ(k)}} ,

where the summation goes over all permutations of k indices. The symmetrization is denoted by parentheses.

• The antisymmetrization of a tensor of type (0, k) with components A_{i_1...i_k} is another tensor of the same type with components

A_{[i_1...i_k]} = (1/k!) ∑_{ϕ∈S_k} sign(ϕ) A_{i_{ϕ(1)}...i_{ϕ(k)}} ,

where the summation goes over all permutations of k indices. The antisymmetrization is denoted by square brackets.

• A tensor A_{i_1...i_k} is symmetric if

A_{(i_1...i_k)} = A_{i_1...i_k}

and anti-symmetric if

A_{[i_1...i_k]} = A_{i_1...i_k} .

• Anti-symmetric tensors of type (0, p) are called p-forms.

• Anti-symmetric tensors of type (p, 0) are called p-vectors.

• A tensor is isotropic if it is a tensor product of g_{ij}, g^{ij} and δ^i_j.

• Every isotropic tensor has an even rank.


• For example, the most general isotropic tensor of rank two is

A^i_j = a δ^i_j ,

where a is a scalar, and the most general isotropic tensor of rank four is

A^{ij}_{kl} = a g^{ij} g_{kl} + b δ^i_k δ^j_l + c δ^i_l δ^j_k ,

where a, b, c are scalars.

2.3.1 Orientation, Pseudotensors and Volume

• Since the transformation matrix Λ is invertible, the determinant det Λ is either positive or negative. If det Λ > 0, we say that the bases {e_i} and {e′_i} have the same orientation, and if det Λ < 0, we say that the bases {e_i} and {e′_i} have the opposite orientation.

• This defines an equivalence relation on the set of all bases of E called the orientation of the vector space E. This equivalence relation divides the set of all bases into two equivalence classes, called the positively oriented and negatively oriented bases.

• A vector space together with a choice of which equivalence class is positively oriented is called an oriented vector space.

• A set of real numbers A^{i_1...i_p}_{j_1...j_q} is said to represent components of a pseudo-tensor of type (p, q) if they transform under a change of the basis according to

A′^{i_1...i_p}_{j_1...j_q} = sign(det Λ) Λ^{i_1}_{l_1} · · · Λ^{i_p}_{l_p} Λ̄^{m_1}_{j_1} · · · Λ̄^{m_q}_{j_q} A^{l_1...l_p}_{m_1...m_q} ,

where sign(x) = +1 if x > 0 and sign(x) = −1 if x < 0.

• The Levi-Civita symbol (also called the alternating symbol) is defined by
$$\varepsilon^{i_1\dots i_n} = \varepsilon_{i_1\dots i_n} =
\begin{cases}
+1, & \text{if } (i_1,\dots,i_n) \text{ is an even permutation of } (1,\dots,n),\\
-1, & \text{if } (i_1,\dots,i_n) \text{ is an odd permutation of } (1,\dots,n),\\
0, & \text{if two or more indices are the same.}
\end{cases}$$

• The Levi-Civita symbols $\varepsilon_{i_1\dots i_n}$ and $\varepsilon^{i_1\dots i_n}$ do not represent tensors! They have the same values in all bases.

• Theorem 2.3.1 The determinant of a matrix $A = (A^i_{\ j})$ can be written as
$$\det A = \varepsilon^{i_1\dots i_n} A^1_{\ i_1}\cdots A^n_{\ i_n}
= \varepsilon_{j_1\dots j_n} A^{j_1}_{\ 1}\cdots A^{j_n}_{\ n}
= \frac{1}{n!}\,\varepsilon^{i_1\dots i_n}\varepsilon_{j_1\dots j_n}\, A^{j_1}_{\ i_1}\cdots A^{j_n}_{\ i_n}\,.$$
Here, as usual, a summation over all repeated indices is assumed from 1 to $n$.
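The $\varepsilon$-expansion of the determinant in Theorem 2.3.1 can be tested directly: there is one term per nonzero value of the symbol, i.e. one per permutation. A minimal Python sketch (function names are mine):

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    # +1 for an even permutation, -1 for an odd one (inversion count)
    inv = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def det_eps(A):
    # det A = eps^{i1...in} A^1_{i1} ... A^n_{in}
    n = len(A)
    return sum(perm_sign(p) * prod(A[r][p[r]] for r in range(n))
               for p in permutations(range(n)))

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]  # arbitrary example matrix
```

This is an $O(n\cdot n!)$ formula, so it is only a check, not a practical algorithm for large $n$.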

vecanal4.tex; July 1, 2005; 13:34; p. 38

Page 43: Lecture Notes Vector Analysis MATH 332iavramid/notes/vecanal4.pdf · Lecture Notes Vector Analysis MATH 332 Ivan Avramidi New Mexico Institute of Mining and Technology Socorro, NM

38 CHAPTER 2. VECTOR AND TENSOR ALGEBRA

• Theorem 2.3.2 There holds the identity
$$\varepsilon^{i_1\dots i_n}\varepsilon_{j_1\dots j_n} = \sum_{\varphi\in S_n} \mathrm{sign}(\varphi)\,\delta^{i_1}_{j_{\varphi(1)}}\cdots\delta^{i_n}_{j_{\varphi(n)}} = n!\,\delta^{i_1}_{[j_1}\cdots\delta^{i_n}_{j_n]}\,.$$
The contraction of this identity over $k$ indices gives
$$\varepsilon^{i_1\dots i_{n-k} m_1\dots m_k}\varepsilon_{j_1\dots j_{n-k} m_1\dots m_k} = k!\,(n-k)!\,\delta^{i_1}_{[j_1}\cdots\delta^{i_{n-k}}_{j_{n-k}]}\,.$$
In particular, $\varepsilon^{m_1\dots m_n}\varepsilon_{m_1\dots m_n} = n!\,.$
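The contraction identities are easy to verify for $n = 3$: contracting over $k = 2$ indices gives $\varepsilon_{ijk}\varepsilon^{mjk} = 2\,\delta^m_i$, and the full contraction gives $3! = 6$. A sketch (plain Python, names mine):

```python
from itertools import product

n = 3
eps = {}
for idx in product(range(n), repeat=n):
    if len(set(idx)) < n:
        eps[idx] = 0  # repeated index -> symbol vanishes
    else:
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if idx[a] > idx[b])
        eps[idx] = -1 if inv % 2 else 1

# contraction over two indices: eps_{ijk} eps_{mjk} = 2 delta_{im}
contracted = {(i, m): sum(eps[(i, j, k)] * eps[(m, j, k)]
                          for j in range(n) for k in range(n))
              for i in range(n) for m in range(n)}

# full contraction: eps_{ijk} eps_{ijk} = 3! = 6
full = sum(e * e for e in eps.values())
```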

• Theorem 2.3.3 The sets of real numbers $E_{i_1\dots i_n}$ and $E^{i_1\dots i_n}$ defined by
$$E_{i_1\dots i_n} = \sqrt{|g|}\,\varepsilon_{i_1\dots i_n}\,,\qquad E^{i_1\dots i_n} = \frac{1}{\sqrt{|g|}}\,\varepsilon^{i_1\dots i_n}\,,$$
where $|g| = \det(g_{ij})$, define (pseudo-)tensors of type (0, n) and (n, 0) respectively.

• Let $\{v_1,\dots,v_n\}$ be an ordered $n$-tuple of vectors. The volume of the parallelepiped spanned by the vectors $\{v_1,\dots,v_n\}$ is a real number defined by
$$|\mathrm{vol}\,(v_1,\dots,v_n)| = \sqrt{\det\big((v_i, v_j)\big)}\,.$$

• Theorem 2.3.4 Let $\{e_i\}$ be a basis in $E$, $\{\omega^i\}$ be the dual basis, and $\{v_1,\dots,v_n\}$ be a set of $n$ vectors. Let $V = (v^i_{\ j})$ be the matrix of contravariant components of the vectors $\{v_j\}$,
$$v^i_{\ j} = \langle\omega^i, v_j\rangle\,,$$
and $W = (v_{ij})$ be the matrix of covariant components of the vectors $\{v_j\}$,
$$v_{ij} = (e_i, v_j) = g_{ik}\, v^k_{\ j}\,.$$
Then the volume of the parallelepiped spanned by the vectors $\{v_1,\dots,v_n\}$ is
$$|\mathrm{vol}\,(v_1,\dots,v_n)| = \sqrt{|g|}\,|\det V| = \frac{|\det W|}{\sqrt{|g|}}\,.$$

• If the vectors $\{v_1,\dots,v_n\}$ are linearly dependent, then
$$\mathrm{vol}\,(v_1,\dots,v_n) = 0\,.$$

• If the vectors $\{v_1,\dots,v_n\}$ are linearly independent, then the volume is a positive real number that does not depend on the orientation of the vectors.


• The signed volume of the parallelepiped spanned by an ordered $n$-tuple of vectors $\{v_1,\dots,v_n\}$ is
$$\mathrm{vol}\,(v_1,\dots,v_n) = \sqrt{|g|}\,\det V = \mathrm{sign}(v_1,\dots,v_n)\,|\mathrm{vol}\,(v_1,\dots,v_n)|\,.$$
The sign of the signed volume depends on the orientation of the vectors $\{v_1,\dots,v_n\}$:
$$\mathrm{sign}(v_1,\dots,v_n) = \mathrm{sign}(\det V) =
\begin{cases}
+1, & \text{if } \{v_1,\dots,v_n\} \text{ is positively oriented,}\\
-1, & \text{if } \{v_1,\dots,v_n\} \text{ is negatively oriented.}
\end{cases}$$

• Theorem 2.3.5 The signed volume is equal to
$$\mathrm{vol}\,(v_1,\dots,v_n) = E_{i_1\dots i_n}\, v^{i_1}_{\ 1}\cdots v^{i_n}_{\ n} = E^{i_1\dots i_n}\, v_{i_1 1}\cdots v_{i_n n}\,,$$
where $v^i_{\ j} = \langle\omega^i, v_j\rangle$ and $v_{ij} = (e_i, v_j)$.

That is why the pseudo-tensor $E_{i_1\dots i_n}$ is also called the volume form.

Exterior Product and Duality

• The volume form allows one to define a duality between $k$-forms and $(n-k)$-vectors as follows. To each $k$-form $A_{i_1\dots i_k}$ one assigns the dual $(n-k)$-vector
$${}^*A^{j_1\dots j_{n-k}} = \frac{1}{k!}\, E^{j_1\dots j_{n-k}\, i_1\dots i_k}\, A_{i_1\dots i_k}\,.$$
Similarly, to each $k$-vector $A^{i_1\dots i_k}$ one assigns the dual $(n-k)$-form
$${}^*A_{j_1\dots j_{n-k}} = \frac{1}{k!}\, E_{j_1\dots j_{n-k}\, i_1\dots i_k}\, A^{i_1\dots i_k}\,.$$

• Theorem 2.3.6 For each $k$-form $\alpha$ there holds
$${}^{**}\alpha = (-1)^{k(n-k)}\,\alpha\,.$$
That is, ${}^{**} = (-1)^{k(n-k)}$.

• The exterior product of a $k$-form $A$ and an $m$-form $B$ is a $(k+m)$-form $A\wedge B$ defined by
$$(A\wedge B)_{i_1\dots i_k j_1\dots j_m} = \frac{(k+m)!}{k!\,m!}\, A_{[i_1\dots i_k} B_{j_1\dots j_m]}\,.$$
Similarly, one can define the exterior product of $p$-vectors.
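For two 1-forms the definition gives $(A\wedge B)_{ij} = 2\,A_{[i}B_{j]} = A_iB_j - A_jB_i$. A quick sketch (plain Python, names mine); in $\mathbb{R}^3$ the independent components of this 2-form are, up to index placement, the components of the cross product:

```python
def wedge_1forms(A, B):
    # (A ^ B)_{ij} = (2!/(1!1!)) A_[i B_j] = A_i B_j - A_j B_i
    n = len(A)
    return [[A[i] * B[j] - A[j] * B[i] for j in range(n)] for i in range(n)]

a = [1.0, 2.0, 3.0]
b = [0.0, 1.0, -1.0]
w = wedge_1forms(a, b)
```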

• Theorem 2.3.7 The exterior product is associative, that is,
$$(A\wedge B)\wedge C = A\wedge(B\wedge C)\,.$$


• A collection $\{v_1,\dots,v_{n-1}\}$ of $(n-1)$ vectors defines a covector $\alpha$ by
$$\alpha = {}^*(v_1\wedge\cdots\wedge v_{n-1})$$
or, in components,
$$\alpha_j = E_{j i_1\dots i_{n-1}}\, v^{i_1}_{\ 1}\cdots v^{i_{n-1}}_{\ \ n-1}\,.$$

• Theorem 2.3.8 Let $\{v_1,\dots,v_{n-1}\}$ be a collection of $(n-1)$ vectors and $S = \mathrm{span}\{v_1,\dots,v_{n-1}\}$ be the hyperplane spanned by these vectors. Let $e$ be a unit vector orthogonal to $S$, oriented in such a way that $\{v_1,\dots,v_{n-1}, e\}$ is oriented positively. Then the vector $u = g^{-1}(\alpha)$ corresponding to the 1-form $\alpha = {}^*(v_1\wedge\cdots\wedge v_{n-1})$ is parallel to $e$ (with the same orientation),
$$u = \|u\|\, e\,,$$
and has the norm $\|u\| = \mathrm{vol}\,(v_1,\dots,v_{n-1}, e)$.

• In three dimensions, i.e. when $n = 3$, this defines a binary operation $\times$, called the vector product, that is,
$$u = v\times w = {}^*(v\wedge w)\,,$$
or
$$u_j = E_{jik}\, v^i w^k = \sqrt{|g|}\,\varepsilon_{jik}\, v^i w^k\,.$$
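With the identity metric the component formula $u_j = \sqrt{|g|}\,\varepsilon_{jik} v^i w^k$ reduces to the familiar cross product. A sketch (plain Python; the closed-form expression for the 3-index symbol is a standard trick, not from the notes):

```python
from math import sqrt

def eps3(i, j, k):
    # Levi-Civita symbol for 0-based index triples: +1, -1, or 0
    return ((i - j) * (j - k) * (k - i)) // 2

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def cross(v, w, g=((1, 0, 0), (0, 1, 0), (0, 0, 1))):
    # covariant components u_j = sqrt(|g|) * eps_{jik} v^i w^k
    s = sqrt(abs(det3(g)))
    return [s * sum(eps3(j, i, k) * v[i] * w[k]
                    for i in range(3) for k in range(3))
            for j in range(3)]

u = cross([1, 0, 0], [0, 1, 0])   # i x j = k
```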


2.4 Operators and Tensors

• Let $A$ be an operator on $E$. Let $\{e_i\} = \{e_1,\dots,e_n\}$ be a basis in a Euclidean space and $\{\omega^i\} = \{\omega^1,\dots,\omega^n\}$ be the dual basis in $E^*$. The real square matrix $A = (A^i_{\ j})$, $i, j = 1,\dots,n$, defined by
$$A e_j = A^i_{\ j}\, e_i\,,$$
is called the matrix of the operator $A$.

• Therefore, there is a one-to-one correspondence between the operators on $E$ and the real square $n\times n$ matrices $A = (A^i_{\ j})$.

• It can be computed by
$$A^i_{\ j} = \langle\omega^i, A e_j\rangle = g^{ik}(e_k, A e_j)\,.$$

• Remark. Notice that the upper index, which is the first one, indicates the row, and the lower index, which is the second one, indicates the column of the matrix. The convenience of this notation comes from the fact that all upper indices (also called contravariant indices) indicate the components of vectors and “belong” to the vector space $E$, while all lower indices (called covariant indices) indicate components of covectors and “belong” to the dual space $E^*$.

• The matrix of the identity operator $I$ is
$$I^i_{\ j} = \delta^i_j\,.$$

• For any $v\in E$,
$$v = v^j e_j\,,\qquad v^j = \langle\omega^j, v\rangle\,,$$
we have
$$A v = A^i_{\ j}\, v^j e_i\,.$$
That is, the components $u^i$ of the vector $u = Av$ are given by
$$u^i = A^i_{\ j}\, v^j\,.$$

Transformation Law of Matrix of an Operator

• Under a change of the basis $e_i = \Lambda^j_{\ i}\, e'_j$, the matrix $A^i_{\ j}$ of an operator $A$ transforms according to
$$A'^i_{\ j} = \Lambda^i_{\ k}\, A^k_{\ m}\, (\Lambda^{-1})^m_{\ j}\,,$$
which in matrix notation reads
$$A' = \Lambda A \Lambda^{-1}\,.$$

• Therefore, the matrix $A = (A^i_{\ j})$ of an operator $A$ represents the components of a tensor of type (1, 1). Conversely, such tensors naturally define linear operators on $E$. Thus, linear operators on $E$ and tensors of type (1, 1) can be identified.


• The determinant and the trace of the matrix of an operator are invariant under a change of the basis, that is,
$$\det A' = \det A\,,\qquad \mathrm{tr}\, A' = \mathrm{tr}\, A\,.$$

• Therefore, one can define the determinant of the operator $A$ and the trace of the operator $A$ as the determinant and the trace of its matrix.

• For self-adjoint operators these definitions are consistent with the definition in terms of the eigenvalues given before.
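The similarity transformation $A' = \Lambda A\Lambda^{-1}$ indeed preserves the determinant and the trace, as a small $2\times 2$ check shows (the matrices below are made-up examples; helper names are mine):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2.0, 1.0], [0.0, 3.0]]          # operator matrix in the old basis
L = [[1.0, 1.0], [0.0, 1.0]]          # change-of-basis matrix Lambda
Linv = [[1.0, -1.0], [0.0, 1.0]]      # its inverse
Ap = matmul(matmul(L, A), Linv)       # A' = Lambda A Lambda^{-1}

det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
tr = lambda M: M[0][0] + M[1][1]
```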

• The matrix of the sum $A + B$ of two operators $A$ and $B$ is the sum of the matrices $A$ and $B$ of the operators $A$ and $B$.

• The matrix of a scalar multiple $cA$ is equal to $cA$, where $A$ is the matrix of the operator $A$ and $c\in\mathbb{R}$.

Matrix of the Product of Operators

• The matrix of the product $C = AB$ of two operators reads
$$C^i_{\ j} = \langle\omega^i, AB\, e_j\rangle = \langle\omega^i, A e_k\rangle\langle\omega^k, B e_j\rangle = A^i_{\ k}\, B^k_{\ j}\,,$$
which is exactly the product of the matrices $A$ and $B$.

• Thus, the matrix of the product $AB$ of the operators $A$ and $B$ is equal to the product $AB$ of the matrices of these operators in the same order.

• The matrix of the inverse $A^{-1}$ of an invertible operator $A$ is equal to the inverse $A^{-1}$ of the matrix $A$ of the operator $A$.

• Theorem 2.4.1 The algebra $L(E)$ of linear operators on $E$ is isomorphic to the algebra $\mathrm{Mat}(n,\mathbb{R})$ of real square $n\times n$ matrices.

Matrix of the Adjoint Operator

• For the adjoint operator $A^*$ we have
$$(e_i, A^* e_j) = (A e_i, e_j) = (e_j, A e_i)\,.$$
Therefore, the matrix of the adjoint operator is
$$A^{*k}_{\ \ j} = g^{ki}(e_i, A^* e_j) = g^{ki} A^l_{\ i}\, g_{lj}\,.$$
In matrix notation this reads
$$A^* = G^{-1} A^T G\,.$$
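The formula $A^* = G^{-1}A^TG$ can be checked against the defining property $(Au, v) = (u, A^*v)$ of the adjoint, with the inner product $(u, v) = u^TGv$. A sketch with an assumed diagonal Gram matrix (all matrices are made-up examples):

```python
def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def inner(u, v, G):
    # (u, v) = u^T G v
    return sum(u[i] * G[i][j] * v[j] for i in range(len(u)) for j in range(len(v)))

G = [[2.0, 0.0], [0.0, 1.0]]       # assumed Gram matrix of the basis
Ginv = [[0.5, 0.0], [0.0, 1.0]]    # its inverse
A = [[1.0, 2.0], [3.0, 4.0]]
At = [[A[j][i] for j in range(2)] for i in range(2)]

# A* = G^{-1} A^T G
Astar = [[sum(Ginv[i][k] * At[k][l] * G[l][j] for k in range(2) for l in range(2))
          for j in range(2)] for i in range(2)]

u, v = [1.0, -2.0], [3.0, 5.0]
lhs = inner(matvec(A, u), v, G)        # (Au, v)
rhs = inner(u, matvec(Astar, v), G)    # (u, A*v)
```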


• Thus, the matrix of a self-adjoint operator $A$ satisfies the equation
$$A^k_{\ j} = g^{ki} A^l_{\ i}\, g_{lj}\qquad\text{or}\qquad g_{ik} A^k_{\ j} = A^l_{\ i}\, g_{lj}\,,$$
which in matrix notation reads
$$A = G^{-1} A^T G\,,\qquad\text{or}\qquad GA = A^T G\,.$$

• The matrix of a unitary operator $A$ satisfies the equation
$$g^{ki} A^l_{\ i}\, g_{lj}\, A^j_{\ m} = \delta^k_m\qquad\text{or}\qquad A^l_{\ i}\, g_{lj}\, A^j_{\ m} = g_{im}\,,$$
which in matrix notation has the form
$$G^{-1} A^T G A = I\,,\qquad\text{or}\qquad A^T G A = G\,.$$


2.5 Vector Algebra in $\mathbb{R}^3$

• We denote the standard orthonormal basis in $\mathbb{R}^3$ by
$$e_1 = i\,,\qquad e_2 = j\,,\qquad e_3 = k\,,$$
so that $e_i\cdot e_j = \delta_{ij}$.

• Each vector $v$ is decomposed as
$$v = v_1\, i + v_2\, j + v_3\, k\,.$$
The components are computed by
$$v_1 = v\cdot i\,,\qquad v_2 = v\cdot j\,,\qquad v_3 = v\cdot k\,.$$

• The norm of the vector is
$$\|v\| = \sqrt{v_1^2 + v_2^2 + v_3^2}\,.$$

• The scalar product is defined by
$$v\cdot u = v_1 u_1 + v_2 u_2 + v_3 u_3\,.$$

• The angle between vectors is determined by
$$\cos\theta = \frac{u\cdot v}{\|u\|\,\|v\|}\,.$$

• The orthogonal decomposition of a vector $v$ with respect to a given unit vector $u$ is
$$v = v_\parallel + v_\perp\,,$$
where
$$v_\parallel = u\,(u\cdot v)\,,\qquad v_\perp = v - u\,(u\cdot v)\,.$$
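A minimal sketch of the decomposition $v = v_\parallel + v_\perp$ with respect to a unit vector $u$ (plain Python, names mine):

```python
def decompose(v, u):
    # split v into parts parallel and perpendicular to the unit vector u
    c = sum(vi * ui for vi, ui in zip(v, u))       # u . v
    v_par = [c * ui for ui in u]                   # u (u . v)
    v_perp = [vi - p for vi, p in zip(v, v_par)]   # v - u (u . v)
    return v_par, v_perp

u = [1.0, 0.0, 0.0]
v = [3.0, 4.0, 0.0]
v_par, v_perp = decompose(v, u)
```

By construction $v_\perp$ is orthogonal to $u$ and the two parts sum back to $v$.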

• We denote the Cartesian coordinates in $\mathbb{R}^3$ by
$$x_1 = x\,,\qquad x_2 = y\,,\qquad x_3 = z\,.$$
The radius vector (the position vector) is
$$r = x\, i + y\, j + z\, k\,.$$

• The parametric equation of a line parallel to a vector $u = a\,i + b\,j + c\,k$ is
$$r = r_0 + t\,u\,,$$
where $r_0 = x_0\,i + y_0\,j + z_0\,k$ is a fixed vector and $t$ is a real parameter. In components,
$$x = x_0 + at\,,\qquad y = y_0 + bt\,,\qquad z = z_0 + ct\,.$$
The non-parametric equation of a line (if $a, b, c$ are non-zero) is
$$\frac{x - x_0}{a} = \frac{y - y_0}{b} = \frac{z - z_0}{c}\,.$$


• The parametric equation of a plane spanned by two non-parallel vectors $u$ and $v$ is
$$r = r_0 + t\,u + s\,v\,,$$
where $t$ and $s$ are real parameters.

• A vector $n$ that is perpendicular to both vectors $u$ and $v$ is normal to the plane.

• The non-parametric equation of a plane with the normal $n = a\,i + b\,j + c\,k$ is
$$(r - r_0)\cdot n = 0$$
or
$$a(x - x_0) + b(y - y_0) + c(z - z_0) = 0\,,$$
which can also be written as
$$ax + by + cz = d\,,$$
where $d = a x_0 + b y_0 + c z_0$.

• The positive (right-handed) orientation of a plane is defined by the right-hand (or counterclockwise) rule. That is, if $u_1$ and $u_2$ span a plane, then we orient the plane by saying which vector is the first and which is the second. The orientation is positive if the rotation from $u_1$ to $u_2$ is counterclockwise and negative if it is clockwise. A plane has two sides. The positive side of the plane is the side with the positive orientation; the other side has the negative (left-handed) orientation.

• The vector product of two vectors is defined by
$$w = u\times v = \det\begin{pmatrix} i & j & k\\ u_1 & u_2 & u_3\\ v_1 & v_2 & v_3 \end{pmatrix},$$
or, in components,
$$w_i = \varepsilon_{ijk}\, u_j v_k = \frac{1}{2}\,\varepsilon_{ijk}\,(u_j v_k - u_k v_j)\,.$$

• The vector products of the basis vectors are
$$e_i\times e_j = \varepsilon_{ijk}\, e_k\,.$$

• If $u$ and $v$ are two nonzero nonparallel vectors, then the vector $w = u\times v$ is orthogonal to both vectors $u$ and $v$, and, hence, to the plane spanned by these vectors. It defines a normal to this plane.

• The area of the parallelogram spanned by two vectors $u$ and $v$ is
$$\mathrm{area}(u, v) = \|u\times v\| = \|u\|\,\|v\|\sin\theta\,.$$


• The signed volume of the parallelepiped spanned by three vectors $u$, $v$ and $w$ is
$$\mathrm{vol}(u, v, w) = u\cdot(v\times w) = \det\begin{pmatrix} u_1 & u_2 & u_3\\ v_1 & v_2 & v_3\\ w_1 & w_2 & w_3 \end{pmatrix} = \varepsilon_{ijk}\, u_i v_j w_k\,.$$
The signed volume is also called the scalar triple product and denoted by
$$[u, v, w] = u\cdot(v\times w)\,.$$

• The signed volume is zero if and only if the vectors are linearly dependent, that is, coplanar.

• For linearly independent vectors its sign depends on the orientation of the triple of vectors $\{u, v, w\}$:
$$\mathrm{vol}(u, v, w) = \mathrm{sign}\,(u, v, w)\,|\mathrm{vol}(u, v, w)|\,,$$
where
$$\mathrm{sign}\,(u, v, w) = \begin{cases} +1 & \text{if } \{u, v, w\} \text{ is positively oriented,}\\ -1 & \text{if } \{u, v, w\} \text{ is negatively oriented.} \end{cases}$$

• The scalar triple product is linear in each argument, anti-symmetric,
$$[u, v, w] = -[v, u, w] = -[u, w, v] = -[w, v, u]\,,$$
and cyclic,
$$[u, v, w] = [v, w, u] = [w, u, v]\,.$$
It is normalized so that $[\,i, j, k\,] = 1$.

• The orthogonal decomposition of a vector $v$ with respect to a unit vector $u$ can be written in the form
$$v = u\,(u\cdot v) - u\times(u\times v)\,.$$

• The Levi-Civita symbol in three dimensions,
$$\varepsilon_{ijk} = \varepsilon^{ijk} = \begin{cases} +1 & \text{if } (i, j, k) = (1,2,3),\ (2,3,1),\ (3,1,2),\\ -1 & \text{if } (i, j, k) = (2,1,3),\ (3,2,1),\ (1,3,2),\\ 0 & \text{otherwise,} \end{cases}$$
has the following properties:
$$\varepsilon_{ijk} = -\varepsilon_{jik} = -\varepsilon_{ikj} = -\varepsilon_{kji}\,,$$
$$\varepsilon_{ijk} = \varepsilon_{jki} = \varepsilon_{kij}\,,$$


$$\varepsilon_{ijk}\,\varepsilon^{mnl} = 6\,\delta^m_{[i}\delta^n_j\delta^l_{k]}
= \delta^m_i\delta^n_j\delta^l_k + \delta^m_j\delta^n_k\delta^l_i + \delta^m_k\delta^n_i\delta^l_j
- \delta^m_i\delta^n_k\delta^l_j - \delta^m_j\delta^n_i\delta^l_k - \delta^m_k\delta^n_j\delta^l_i\,,$$
$$\varepsilon_{ijk}\,\varepsilon^{mnk} = 2\,\delta^m_{[i}\delta^n_{j]} = \delta^m_i\delta^n_j - \delta^m_j\delta^n_i\,,$$
$$\varepsilon_{ijk}\,\varepsilon^{mjk} = 2\,\delta^m_i\,,$$
$$\varepsilon_{ijk}\,\varepsilon^{ijk} = 6\,.$$

• This leads to many vector identities that express the double vector product in terms of the scalar product. For example,
$$u\times(v\times w) = (u\cdot w)\,v - (u\cdot v)\,w\,,$$
$$u\times(v\times w) + v\times(w\times u) + w\times(u\times v) = 0\,,$$
$$(u\times v)\times(w\times n) = v\,[u, w, n] - u\,[v, w, n]\,,$$
$$(u\times v)\cdot(w\times n) = (u\cdot w)(v\cdot n) - (u\cdot n)(v\cdot w)\,.$$
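The BAC–CAB rule and the Jacobi identity from the list above can be spot-checked on integer-valued vectors, where floating-point arithmetic is exact (plain Python, names mine):

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, v, w = [1.0, 2.0, 3.0], [-1.0, 0.0, 2.0], [4.0, 1.0, -2.0]

# u x (v x w) = (u.w) v - (u.v) w
lhs = cross(u, cross(v, w))
rhs = [dot(u, w) * vi - dot(u, v) * wi for vi, wi in zip(v, w)]

# Jacobi identity: the cyclic sum of double cross products vanishes
jacobi = [a + b + c for a, b, c in
          zip(cross(u, cross(v, w)), cross(v, cross(w, u)), cross(w, cross(u, v)))]
```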


Chapter 3

Geometry

3.1 Geometry of Euclidean Space

• The set $\mathbb{R}^n$ can be viewed geometrically as a set of points, which we will denote by $P$, $Q$, etc. With each point $P$ we associate an ordered $n$-tuple of real numbers $(x^i_P) = (x^1_P,\dots,x^n_P)$, called the coordinates of the point $P$. The assignment of $n$-tuples of real numbers to the points in space should be bijective. That is, different points are assigned different $n$-tuples, and for every $n$-tuple there is a point in space with such coordinates. Such a map is called a coordinate system.

• A space $\mathbb{R}^n$ with a coordinate system is a Euclidean space if the distance between any two points $P$ and $Q$ is determined by
$$d(P, Q) = \sqrt{\sum_{i=1}^n (x^i_P - x^i_Q)^2}\,.$$
Such a coordinate system is called Cartesian.

• The point $O$ with the zero coordinates $(0,\dots,0)$ is called the origin of the Cartesian coordinate system.

• In $\mathbb{R}^n$ it is convenient to associate vectors with points in space. With each point $P$ with Cartesian coordinates $(x^1,\dots,x^n)$ in $\mathbb{R}^n$ we associate the column-vector $r = (x^i)$ with the components equal to the Cartesian coordinates of the point $P$. We say that this vector points from the origin $O$ to the point $P$; it has its tail at the point $O$ and its tip at the point $P$. This vector is often called the radius vector, or the position vector, of the point $P$ and denoted by $r$.

• Similarly, with every two points $P$ and $Q$ with the coordinates $(x^i_P)$ and $(x^i_Q)$ we associate the vector
$$u_{PQ} = r_Q - r_P = (x^i_Q - x^i_P)$$
that points from the point $P$ to the point $Q$.


• Obviously, the Euclidean distance is given by
$$d(P, Q) = \|r_P - r_Q\|\,.$$

• The standard (orthonormal) basis $\{e_1,\dots,e_n\}$ of $\mathbb{R}^n$ consists of the unit vectors that connect the origin $O$ with the points $\{(1,0,\dots,0),\dots,(0,\dots,0,1)\}$ that have only one nonzero coordinate, which is equal to 1.

• The one-dimensional subspaces $L_i$ spanned by a single basis vector $e_i$,
$$L_i = \mathrm{span}\{e_i\} = \{P \mid r_P = t\,e_i,\ t\in\mathbb{R}\}\,,$$
are the lines called the coordinate axes. There are $n$ coordinate axes; they are mutually orthogonal and intersect at only one point, the origin $O$.

• The two-dimensional subspaces $P_{ij}$ spanned by a couple of basis vectors $e_i$ and $e_j$,
$$P_{ij} = \mathrm{span}\{e_i, e_j\} = \{P \mid r_P = t\,e_i + s\,e_j,\ t, s\in\mathbb{R}\}\,,$$
are the planes called the coordinate planes. There are $n(n-1)/2$ coordinate planes; the coordinate planes are mutually orthogonal and intersect along the coordinate axes.

• Let $a$ and $b$ be real numbers such that $a < b$. The set $[a, b]$ is a closed interval in $\mathbb{R}$. A parametrized curve $C$ in $\mathbb{R}^n$ is a map $C : [a, b]\to\mathbb{R}^n$ which assigns a point in $\mathbb{R}^n$,
$$C : r(t) = (x^i(t))\,,$$
to each real number $t\in[a, b]$.

• The positive orientation of the curve $C$ is determined by the standard orientation of $\mathbb{R}$, that is, by the direction of increasing values of the parameter $t$.

• The point $r(a)$ is the initial point and the point $r(b)$ is the endpoint of the curve.

• The curve $(-C)$ is the parametrized curve with the opposite orientation. If the curve $C$ is parametrized by $r(t)$, $a\le t\le b$, then the curve $(-C)$ is parametrized by
$$(-C) : r(-t + a + b)\,.$$

• The boundary $\partial C$ of the curve $C$ consists of two points $C_0$ and $C_1$ corresponding to $r(a)$ and $r(b)$, that is,
$$\partial C = C_1 - C_0\,.$$

• A curve $C$ is continuous if all the functions $x^i(t)$ are continuous for any $t$ in $[a, b]$.


• Let $a_1, a_2$ and $b_1, b_2$ be real numbers such that
$$a_1 < b_1 \quad\text{and}\quad a_2 < b_2\,.$$
The set $D = [a_1, b_1]\times[a_2, b_2]$ is a closed rectangle in the plane $\mathbb{R}^2$.

• A parametrized surface $S$ in $\mathbb{R}^n$ is a map $S : D\to\mathbb{R}^n$ which assigns a point
$$S : r(u) = (x^i(u))$$
in $\mathbb{R}^n$ to each point $u = (u^1, u^2)$ in the rectangle $D$.

• The positive orientation of the surface $S$ is determined by the positive orientation of the standard basis in $\mathbb{R}^2$. The surface $(-S)$ is the surface with the opposite orientation.

• The boundary $\partial S$ of the surface $S$ consists of four curves $S_{(1),0}$, $S_{(1),1}$, $S_{(2),0}$, and $S_{(2),1}$ parametrized by $r(a_1, v)$, $r(b_1, v)$, $r(u, a_2)$ and $r(u, b_2)$ respectively. Taking into account the orientation, the boundary of the surface $S$ is
$$\partial S = S_{(2),0} + S_{(1),1} - S_{(2),1} - S_{(1),0}\,.$$

• Let $a_1,\dots,a_k$ and $b_1,\dots,b_k$ be real numbers such that
$$a_i < b_i\,,\qquad i = 1,\dots,k.$$
The set $D = [a_1, b_1]\times\cdots\times[a_k, b_k]$ is called a closed $k$-rectangle in $\mathbb{R}^k$. In particular, the set $[0,1]^k = [0,1]\times\cdots\times[0,1]$ is the standard $k$-cube.

• Let $D = [a_1, b_1]\times\cdots\times[a_k, b_k]$ be a closed rectangle in $\mathbb{R}^k$. A parametrized $k$-dimensional surface $S$ in $\mathbb{R}^n$ is a continuous map $S : D\to\mathbb{R}^n$ which assigns a point
$$S : r(u) = (x^i(u))$$
in $\mathbb{R}^n$ to each point $u = (u^1,\dots,u^k)$ in the rectangle $D$.

• An $(n-1)$-dimensional surface is called a hypersurface. A non-parametrized hypersurface can be described by a single equation
$$F(x) = F(x^1,\dots,x^n) = 0\,,$$
where $F : \mathbb{R}^n\to\mathbb{R}$ is a real-valued function of $n$ coordinates.

• The boundary $\partial S$ of $S$ consists of $(k-1)$-surfaces, $S_{(i),0}$ and $S_{(i),1}$, $i = 1,\dots,k$, called the faces of the $k$-surface $S$. Of course, a $k$-surface $S$ has $2k$ faces. The face $S_{(i),0}$ is parametrized by
$$S_{(i),0} : r(u^1,\dots,u^{i-1}, a_i, u^{i+1},\dots,u^k)\,,$$
where the $i$-th parameter $u^i$ is fixed at the initial point, i.e. $u^i = a_i$, and the face $S_{(i),1}$ is parametrized by
$$S_{(i),1} : r(u^1,\dots,u^{i-1}, b_i, u^{i+1},\dots,u^k)\,,$$
where the $i$-th parameter $u^i$ is fixed at the endpoint, i.e. $u^i = b_i$.


• The boundary of the surface $S$ is defined by
$$\partial S = \sum_{i=1}^k (-1)^i \big(S_{(i),0} - S_{(i),1}\big)\,.$$

• Let $S_1,\dots,S_m$ be parametrized $k$-surfaces. A formal sum
$$S = \sum_{i=1}^m a_i S_i$$
with integer coefficients $a_1,\dots,a_m$ is called a $k$-chain. Usually (but not always) the integers $a_i$ are equal to 1, $(-1)$ or 0.

• The product of any $k$-chain $S$ with zero is called the zero chain,
$$0\cdot S = 0\,.$$

• The addition of $k$-chains and multiplication by integers are defined by
$$\sum_{i=1}^m a_i S_i + \sum_{i=1}^m b_i S_i = \sum_{i=1}^m (a_i + b_i)\, S_i\,,\qquad
b\sum_{i=1}^m a_i S_i = \sum_{i=1}^m (b\, a_i)\, S_i\,.$$

• The boundary of a $k$-chain $S$ is a $(k-1)$-chain $\partial S$ defined by
$$\partial\sum_{i=1}^m a_i S_i = \sum_{i=1}^m a_i\,\partial S_i\,.$$

• Theorem 3.1.1 For any $k$-chain $S$ there holds
$$\partial(\partial S) = 0\,.$$


3.2 Basic Topology of $\mathbb{R}^n$

• Let $P_0$ be a point in a Euclidean space $\mathbb{R}^n$ and $\varepsilon > 0$ be a positive real number. The open ball $B_\varepsilon(P_0)$ of radius $\varepsilon$ with the center at $P_0$ is the set of all points whose distance from the point $P_0$ is less than $\varepsilon$, that is,
$$B_\varepsilon(P_0) = \{P \mid d(P, P_0) < \varepsilon\}\,.$$

• A neighborhood of a point $P_0$ is any set that contains an open ball centered at $P_0$.

• Let $S$ be a subset of a Euclidean space $\mathbb{R}^n$. A point $P$ is an interior point of $S$ if there is a neighborhood of $P$ that lies completely in $S$.

• A point $P$ is an exterior point of $S$ if there is a neighborhood of $P$ that lies completely outside of $S$.

• A point $P$ is a boundary point of $S$ if it is neither an interior nor an exterior point. If $P$ is a boundary point of $S$, then every neighborhood of $P$ contains points in $S$ and points not in $S$.

• The set of boundary points of $S$ is called the boundary of $S$, denoted by $\partial S$.

• The set of all interior points of $S$ is called the interior of $S$, denoted by $S^o$.

• A set $S$ is called open if every point of $S$ is an interior point of $S$, that is, $S = S^o$.

• A set $S$ is closed if it contains all its boundary points, that is, $S = S^o\cup\partial S$.

• Henceforth, we will consider only open sets and call them regions of space.

• A region $S$ is called connected (or arc-wise connected) if for any two points $P$ and $Q$ in $S$ there is an arc joining $P$ and $Q$ that lies within $S$.

• A connected region, that is, a connected open set, is called a domain.

• A domain $S$ is said to be simply-connected if every closed curve lying within $S$ can be continuously deformed to a point in the domain without any part of the curve passing through regions outside the domain.

• A domain is simply connected if for any closed curve lying in the domain there can be found a surface within the domain that has that curve as its boundary.

• A domain is said to be star-shaped if there is a point $P$ in the domain such that for any other point in the domain the entire line segment joining these two points lies in the domain.


3.3 Curvilinear Coordinate Systems

• We say that a function $f(x) = f(x^1,\dots,x^n)$ is smooth if it has continuous partial derivatives of all orders.

• Let $P$ be a point with Cartesian coordinates $(x^i)$. Suppose that we assign another $n$-tuple of real numbers $(q^i) = (q^1,\dots,q^n)$ to the point $P$, so that
$$x^i = f^i(q)\,,$$
where $f^i(q) = f^i(q^1,\dots,q^n)$ are smooth functions of the variables $q^i$. We will call this a change of coordinates.

• The matrix
$$J = \left(\frac{\partial x^i}{\partial q^j}\right)$$
is called the Jacobian matrix. The determinant of this matrix is called the Jacobian.

• A point $P_0$ at which the Jacobian matrix is invertible, that is, the Jacobian is not zero, $\det J\ne 0$, is called a nonsingular point of the new coordinate system $(q^i)$.

• Theorem 3.3.1 (Inverse Function Theorem) In a neighborhood of any nonsingular point $P_0$ the change of coordinates is invertible.

That is, if $x^i = f^i(q)$ and
$$\det\left(\frac{\partial x^i}{\partial q^j}\right)\bigg|_{P_0}\ne 0\,,$$
then for all points sufficiently close to $P_0$ there exist $n$ smooth functions
$$q^i = h^i(x) = h^i(x^1,\dots,x^n)$$
of the variables $(x^i)$ such that
$$f^i(h^1(x),\dots,h^n(x)) = x^i\,,\qquad h^i(f^1(q),\dots,f^n(q)) = q^i\,.$$
The Jacobian matrix of the inverse transformation is the inverse matrix of the Jacobian matrix of the direct transformation, i.e.
$$\frac{\partial x^i}{\partial q^j}\frac{\partial q^j}{\partial x^k} = \delta^i_k\,,\qquad \frac{\partial q^i}{\partial x^j}\frac{\partial x^j}{\partial q^k} = \delta^i_k\,.$$
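For polar coordinates (introduced in the examples below) both Jacobian matrices can be written down explicitly and checked to be mutually inverse at a nonsingular point; a sketch (plain Python, names mine):

```python
import math

def jacobian_polar(rho, phi):
    # J^i_j = d x^i / d q^j, with (q^1, q^2) = (rho, phi)
    return [[math.cos(phi), -rho * math.sin(phi)],
            [math.sin(phi),  rho * math.cos(phi)]]

def jacobian_inverse_polar(x, y):
    # d q^i / d x^j from rho = sqrt(x^2 + y^2), phi = atan2(y, x)
    rho2 = x * x + y * y
    rho = math.sqrt(rho2)
    return [[x / rho, y / rho],
            [-y / rho2, x / rho2]]

rho, phi = 2.0, 0.7
x, y = rho * math.cos(phi), rho * math.sin(phi)
J = jacobian_polar(rho, phi)
Jinv = jacobian_inverse_polar(x, y)
prod_ = [[sum(J[i][k] * Jinv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
```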

• The curves $C_i$ along which only one coordinate is varied, while all others are fixed, are called the coordinate curves, that is,
$$x^i = x^i(q^1_0,\dots,q^{i-1}_0, q^i, q^{i+1}_0,\dots,q^n_0)\,.$$


• The vectors
$$e_i = \frac{\partial r}{\partial q^i}$$
are tangent to the coordinate curves.

• The surfaces $S_{ij}$ along which only two coordinates are varied, while all others are fixed, are called the coordinate surfaces, that is,
$$x^i = x^i(q^1_0,\dots,q^{i-1}_0, q^i, q^{i+1}_0,\dots,q^{j-1}_0, q^j, q^{j+1}_0,\dots,q^n_0)\,.$$

• Theorem 3.3.2 For each point $P$ there are $n$ coordinate curves that pass through $P$. The set of tangent vectors $\{e_i\}$ to these coordinate curves is linearly independent and forms a basis.

• The basis $\{e_i\}$ is not necessarily orthonormal.

• The metric tensor is defined as usual by
$$g_{ij} = e_i\cdot e_j = \sum_{k=1}^n \frac{\partial x^k}{\partial q^i}\frac{\partial x^k}{\partial q^j}\,.$$

• The dual basis of 1-forms is defined by
$$\omega^i = dq^i = \frac{\partial q^i}{\partial x^j}\,\sigma^j\,,$$
where $\sigma^j$ is the standard dual basis.

• The vector
$$dr = e_i\, dq^i = \frac{\partial r}{\partial q^i}\, dq^i$$
is called the infinitesimal displacement.

• The arc length, called the interval, is determined by
$$ds^2 = \|dr\|^2 = dr\cdot dr = g_{ij}\, dq^i dq^j\,.$$

• The volume of the parallelepiped spanned by the vectors $\{e_1\,dq^1,\dots,e_n\,dq^n\}$, called the volume element, is
$$dV = \sqrt{|g|}\, dq^1\cdots dq^n\,,$$
where, as usual, $|g| = \det(g_{ij})$.
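For polar coordinates the metric formula gives $g = \mathrm{diag}(1, \rho^2)$, hence $\sqrt{|g|} = \rho$, reproducing the familiar area element $dV = \rho\, d\rho\, d\varphi$. A numerical sketch (plain Python, names mine):

```python
import math

def metric_polar(rho, phi):
    # g_ij = sum_k (dx^k/dq^i)(dx^k/dq^j), with x = rho cos(phi), y = rho sin(phi)
    J = [[math.cos(phi), -rho * math.sin(phi)],
         [math.sin(phi),  rho * math.cos(phi)]]
    return [[sum(J[k][i] * J[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g = metric_polar(3.0, 1.2)
detg = g[0][0] * g[1][1] - g[0][1] * g[1][0]
sqrt_g = math.sqrt(detg)   # equals rho, the factor in dV = rho drho dphi
```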

• A coordinate system is called orthogonal if the vectors $\partial r/\partial q^i$ are mutually orthogonal. The norms of these vectors,
$$h_i = \left\|\frac{\partial r}{\partial q^i}\right\|\,,$$
are called the scale factors.


• Then one can introduce the orthonormal basis $\{\hat e_i\}$ by
$$\hat e_i = \left\|\frac{\partial r}{\partial q^i}\right\|^{-1}\frac{\partial r}{\partial q^i} = \frac{1}{h_i}\frac{\partial r}{\partial q^i}\,.$$

• For an orthonormal system the vector components are (there is no difference between contravariant and covariant components)
$$v^i = v_i = v\cdot\hat e_i\,.$$

• Then the interval has the form
$$ds^2 = \sum_{i=1}^n h_i^2\,(dq^i)^2\,.$$

• The volume element in an orthogonal coordinate system is
$$dV = h_1\cdots h_n\, dq^1\cdots dq^n\,.$$

3.3.1 Change of Coordinates

• Let $(q^i)$ and $(q'^j)$ be two curvilinear coordinate systems. Then they should be related by a smooth invertible transformation
$$q'^i = f^i(q) = f^i(q^1,\dots,q^n)\,,\qquad q^i = h^i(q') = h^i(q'^1,\dots,q'^n)\,,$$
such that
$$f^i(h(q')) = q'^i\,,\qquad h^i(f(q)) = q^i\,.$$

• The Jacobian matrices are related by
$$\frac{\partial q'^i}{\partial q^j}\frac{\partial q^j}{\partial q'^k} = \delta^i_k\,,\qquad \frac{\partial q^i}{\partial q'^j}\frac{\partial q'^j}{\partial q^k} = \delta^i_k\,,$$
so that the matrix $\left(\dfrac{\partial q'^i}{\partial q^j}\right)$ is inverse to $\left(\dfrac{\partial q^i}{\partial q'^j}\right)$.

• The basis vectors in these coordinate systems are
$$e'_i = \frac{\partial r}{\partial q'^i}\,,\qquad e_i = \frac{\partial r}{\partial q^i}\,.$$
Therefore, they are related by a linear transformation
$$e'_i = \frac{\partial q^j}{\partial q'^i}\, e_j\,,\qquad e_j = \frac{\partial q'^i}{\partial q^j}\, e'_i\,.$$
They have the same orientation if the Jacobian of the change of coordinates is positive and the opposite orientation if the Jacobian is negative.


• Thus, a set of real numbers $T^{i_1\dots i_p}_{\ \ j_1\dots j_q}$ is said to represent components of a tensor of type (p, q) ($p$ times contravariant and $q$ times covariant) if they transform under a change of coordinates according to
$$T'^{i_1\dots i_p}_{\ \ j_1\dots j_q} = \frac{\partial q'^{i_1}}{\partial q^{l_1}}\cdots\frac{\partial q'^{i_p}}{\partial q^{l_p}}\,\frac{\partial q^{m_1}}{\partial q'^{j_1}}\cdots\frac{\partial q^{m_q}}{\partial q'^{j_q}}\, T^{l_1\dots l_p}_{\ \ m_1\dots m_q}\,.$$
This is the general transformation law of the components of a tensor of type (p, q) with respect to a change of curvilinear coordinates.

• A pseudo-tensor has an additional factor equal to the sign of the Jacobian; that is, the components of a pseudo-tensor of type (p, q) transform as
$$T'^{i_1\dots i_p}_{\ \ j_1\dots j_q} = \mathrm{sign}\left[\det\left(\frac{\partial q'^i}{\partial q^l}\right)\right]\frac{\partial q'^{i_1}}{\partial q^{l_1}}\cdots\frac{\partial q'^{i_p}}{\partial q^{l_p}}\,\frac{\partial q^{m_1}}{\partial q'^{j_1}}\cdots\frac{\partial q^{m_q}}{\partial q'^{j_q}}\, T^{l_1\dots l_p}_{\ \ m_1\dots m_q}\,.$$
This is the general transformation law of the components of a pseudo-tensor of type (p, q) with respect to a change of curvilinear coordinates.

3.3.2 Examples

• The polar coordinates in $\mathbb{R}^2$ are introduced by
$$x^1 = \rho\cos\varphi\,,\qquad x^2 = \rho\sin\varphi\,,$$
where $\rho\ge 0$ and $0\le\varphi < 2\pi$. The Jacobian matrix is
$$J = \begin{pmatrix} \cos\varphi & -\rho\sin\varphi\\ \sin\varphi & \rho\cos\varphi \end{pmatrix}.$$
The Jacobian is
$$\det J = \rho\,.$$
Thus, the only singular point of the polar coordinate system is the origin $\rho = 0$. At all nonsingular points the change of variables is invertible and we have
$$\rho = \sqrt{(x^1)^2 + (x^2)^2}\,,\qquad \varphi = \cos^{-1}\left(\frac{x^1}{\rho}\right) = \sin^{-1}\left(\frac{x^2}{\rho}\right).$$
The coordinate curves of $\rho$ are half-lines (rays) going through the origin with the slope $\tan\varphi$. The coordinate curves of $\varphi$ are circles of radius $\rho$ centered at the origin.

• The cylindrical coordinates in $\mathbb{R}^3$ are introduced by
$$x^1 = \rho\cos\varphi\,,\qquad x^2 = \rho\sin\varphi\,,\qquad x^3 = z\,,$$
where $\rho\ge 0$, $0\le\varphi < 2\pi$ and $z\in\mathbb{R}$. The Jacobian matrix is
$$J = \begin{pmatrix} \cos\varphi & -\rho\sin\varphi & 0\\ \sin\varphi & \rho\cos\varphi & 0\\ 0 & 0 & 1 \end{pmatrix}.$$


The Jacobian is
$$\det J = \rho\,.$$
Thus, the singular points of the cylindrical coordinate system are the points with $\rho = 0$, that is, the $z$-axis. At all nonsingular points the change of variables is invertible and we have
$$\rho = \sqrt{(x^1)^2 + (x^2)^2}\,,\qquad \varphi = \cos^{-1}\left(\frac{x^1}{\rho}\right) = \sin^{-1}\left(\frac{x^2}{\rho}\right),\qquad z = x^3\,.$$
The coordinate curves of $\rho$ are horizontal half-lines in the plane $z = \text{const}$ going through the $z$-axis. The coordinate curves of $\varphi$ are circles in the plane $z = \text{const}$ of radius $\rho$ centered at the $z$-axis. The coordinate curves of $z$ are vertical lines. The coordinate surfaces of $\rho$, $\varphi$ are horizontal planes. The coordinate surfaces of $\rho$, $z$ are vertical half-planes going through the $z$-axis. The coordinate surfaces of $\varphi$, $z$ are vertical cylinders centered at the origin.

• The spherical coordinates in $\mathbb{R}^3$ are introduced by
$$x^1 = r\sin\theta\cos\varphi\,,\qquad x^2 = r\sin\theta\sin\varphi\,,\qquad x^3 = r\cos\theta\,,$$
where $r\ge 0$, $0\le\varphi < 2\pi$ and $0\le\theta\le\pi$. The Jacobian matrix is
$$J = \begin{pmatrix} \sin\theta\cos\varphi & r\cos\theta\cos\varphi & -r\sin\theta\sin\varphi\\ \sin\theta\sin\varphi & r\cos\theta\sin\varphi & r\sin\theta\cos\varphi\\ \cos\theta & -r\sin\theta & 0 \end{pmatrix}.$$
The Jacobian is
$$\det J = r^2\sin\theta\,.$$

Thus, the singular points of the spherical coordinate system are the points where either r = 0, which is the origin, or θ = 0 or θ = π, which is the whole z-axis. At all nonsingular points the change of variables is invertible and we have

r = √((x^1)^2 + (x^2)^2 + (x^3)^2) ,

ϕ = cos^{-1}(x^1/ρ) = sin^{-1}(x^2/ρ) ,

θ = cos^{-1}(x^3/r) ,

where ρ = √((x^1)^2 + (x^2)^2).

The coordinate curves of r are half-lines going through the origin. The coordinate curves of ϕ are circles of radius r sin θ centered at the z-axis. The coordinate curves of θ are vertical half-circles of radius r centered at the origin. The coordinate surfaces of r, ϕ are half-cones around the z-axis going through the origin. The coordinate surfaces of r, θ are vertical half-planes going through the z-axis. The coordinate surfaces of ϕ, θ are spheres of radius r centered at the origin.
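The Jacobian determinants quoted above are easy to verify with a computer algebra system. The following sketch (using sympy; not part of the original notes) recomputes det J for the cylindrical and spherical coordinate maps:

```python
import sympy as sp

rho, phi, r, theta, z = sp.symbols('rho phi r theta z', positive=True)

# Cylindrical map (x1, x2, x3) = (rho cos(phi), rho sin(phi), z)
cyl = sp.Matrix([rho*sp.cos(phi), rho*sp.sin(phi), z])
J_cyl = cyl.jacobian([rho, phi, z])

# Spherical map (x1, x2, x3) = (r sin(theta) cos(phi), r sin(theta) sin(phi), r cos(theta))
sph = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),
                 r*sp.sin(theta)*sp.sin(phi),
                 r*sp.cos(theta)])
J_sph = sph.jacobian([r, theta, phi])

# det J = rho for cylindrical, det J = r^2 sin(theta) for spherical
assert sp.simplify(J_cyl.det() - rho) == 0
assert sp.simplify(J_sph.det() - r**2*sp.sin(theta)) == 0
```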


3.4 Vector Functions of a Single Variable

• A vector-valued function is a map v : [a,b] → E from an interval [a,b] of real numbers to a vector space E that assigns a vector v(t) to each real number t ∈ [a,b].

• We say that a vector-valued function v(t) has a limit v_0 as t → t_0, denoted by

lim_{t→t_0} v(t) = v_0 ,

if

lim_{t→t_0} ||v(t) − v_0|| = 0 .

• A vector-valued function v(t) is continuous at t_0 if

lim_{t→t_0} v(t) = v(t_0) .

A vector-valued function v(t) is continuous on the interval [a,b] if it is continuous at every point t of this interval.

• A vector-valued function v(t) is differentiable at t_0 if there exists the limit

lim_{h→0} [v(t_0 + h) − v(t_0)]/h .

If this limit exists it is called the derivative of the function v(t) at t_0 and denoted by

v′(t_0) = dv/dt = lim_{h→0} [v(t_0 + h) − v(t_0)]/h .

If the function v(t) is differentiable at every t in an interval [a,b], then it is called differentiable on that interval.

• Let {e_i} be a constant basis in E that does not depend on t. Then a vector-valued function v(t) is represented by its components

v(t) = v^i(t) e_i ,

and the derivative of v can be computed componentwise:

dv/dt = (dv^i/dt) e_i .

• The derivative is a linear operation, that is,

d/dt (u + v) = du/dt + dv/dt ,   d/dt (c v) = c dv/dt ,

where c is a scalar constant.


• More generally, the derivative satisfies the product rules

d/dt ( f v) = f dv/dt + (d f/dt) v ,

d/dt (u, v) = (du/dt, v) + (u, dv/dt) .

Similarly, for the exterior product,

d/dt (ω ∧ σ) = dω/dt ∧ σ + ω ∧ dσ/dt .

By taking the dual of this equation we obtain in R^3 the product rule for the vector product

d/dt (u × v) = du/dt × v + u × dv/dt .

• Theorem 3.4.1 The derivative v′(t) of a vector-valued function v(t) with constant norm is orthogonal to v(t). That is, if ||v(t)|| = const, then for any t

(v′(t), v(t)) = 0 .
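Theorem 3.4.1 can be checked symbolically on a simple constant-norm curve; here v(t) traces the unit circle (a hypothetical test case, not from the notes):

```python
import sympy as sp

t = sp.symbols('t', real=True)
v = sp.Matrix([sp.cos(t), sp.sin(t), 0])   # ||v(t)|| = 1 for all t
vp = v.diff(t)

assert sp.simplify(v.dot(v)) == 1          # constant norm
assert sp.simplify(v.dot(vp)) == 0         # (v', v) = 0, as the theorem states
```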


3.5 Geometry of Curves

• Let r = r(t) be a parametrized curve.

• A curve

r = r_0 + t u

is a straight line parallel to the vector u passing through the point r_0.

• Let u and v be two orthonormal vectors. Then the curve

r = r_0 + a (cos t u + sin t v)

is a circle of radius a with the center at r_0 in the plane spanned by the vectors u and v.

• Let {u, v, w} be an orthonormal triple of vectors. Then the curve

r = r_0 + b (cos t u + sin t v) + a t w

is the helix of radius b with the axis passing through the point r_0 and parallel to w.

• The vertical distance between the coils of the helix, equal to 2π|a|, is called the pitch.

• Let (q^i) be a curvilinear coordinate system. Then a curve can be described by q^i = q^i(t), which in Cartesian coordinates becomes r = r(q(t)).

• The derivative

d r/dt = (∂ r/∂q^i) (dq^i/dt)

of the vector-valued function r(t) is called the tangent vector. If r(t) represents the position of a particle at the time t, then r′ is the velocity of the particle.

• The norm

||d r/dt|| = √( g_{ij}(q(t)) (dq^i/dt)(dq^j/dt) )

of the velocity is called the speed. Here, as usual,

g_{ij} = (∂ r/∂q^i) · (∂ r/∂q^j) = Σ_{k=1}^n (∂x^k/∂q^i)(∂x^k/∂q^j)

is the metric tensor in the coordinate system (q^i).

• We will say that a curve r = r(t) is smooth if:
a) it has continuous derivatives of all orders,
b) there are no self-intersections, and
c) the speed is non-zero, i.e. || r′(t)|| ≠ 0, at every point on the curve.


• For a curve r : [a,b] → R^n, the possibility that r(a) = r(b) is allowed. Then it is called a closed curve. A closed curve does not have a boundary.

• A curve consisting of a finite number of smooth arcs joined together without self-intersections is called piecewise smooth, or just regular.

• For each regular curve there is a natural parametrization, or the unit-speed parametrization, with a natural parameter s such that

||d r/ds|| = 1 .

• The orientation of a parametrized curve is determined by the direction of increasing parameter. The point r(a) is called the initial point and the point r(b) is called the endpoint.

• Nonparametric curves are not oriented.

• The unit tangent is determined by

T = ||d r/dt||^{-1} d r/dt .

For the natural parametrization the tangent is the unit tangent, i.e.

T = d r/ds = (∂ r/∂q^i) (dq^i/ds) .

• The norm of the displacement vector d r = r′ dt,

ds = ||d r/dt|| dt = ||d r|| = √( g_{ij}(q(t)) (dq^i/dt)(dq^j/dt) ) dt ,

is called the length element.

• The length of a smooth curve r : [a,b] → R^n is defined by

L = ∫_C ds = ∫_a^b ||d r/dt|| dt = ∫_a^b √( g_{ij}(q(t)) (dq^i/dt)(dq^j/dt) ) dt .

For the natural parametrization the length of the curve is simply

L = b − a .

That is why the parameter s is nothing but the length of the arc of the curve from the initial point r(a) to the current point r(t):

s(t) = ∫_a^t dτ ||d r/dτ|| .
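As a sanity check, for the helix r(t) = (b cos t, b sin t, a t) introduced earlier, the speed is the constant √(a^2 + b^2), so s(t) = t √(a^2 + b^2). A short sympy computation (an illustrative sketch) confirms this:

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
a, b = sp.symbols('a b', positive=True)

r = sp.Matrix([b*sp.cos(tau), b*sp.sin(tau), a*tau])
speed = sp.simplify(sp.sqrt(r.diff(tau).dot(r.diff(tau))))
s = sp.integrate(speed, (tau, 0, t))       # arc length from 0 to t

assert sp.simplify(speed - sp.sqrt(a**2 + b**2)) == 0
assert sp.simplify(s - t*sp.sqrt(a**2 + b**2)) == 0
```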


This means that

ds/dt = ||d r/dt|| ,

and

d r/dt = (ds/dt) (d r/ds) .

• The second derivative

r″ = d^2 r/dt^2 = [d/dt (∂ r/∂q^i)] dq^i/dt + (∂ r/∂q^i) d^2q^i/dt^2

is called the acceleration.

• In the natural parametrization this gives the natural rate of change of the unit tangent

dT/ds = d^2 r/ds^2 = [d/ds (∂ r/∂q^i)] dq^i/ds + (∂ r/∂q^i) d^2q^i/ds^2 .

• The norm of this vector is called the curvature of the curve,

κ = ||dT/ds|| = ||d r/dt||^{-1} ||dT/dt|| .

The radius of curvature is defined by

ρ = 1/κ .

• The normalized rate of change of the unit tangent defines the principal normal

N = ρ dT/ds = ||dT/dt||^{-1} dT/dt .

• The unit tangent and the principal normal are orthogonal to each other. They form an orthonormal system.

• Theorem 3.5.1 For any smooth curve r = r(t), the acceleration r″ lies in the plane spanned by the vectors T and N. The orthogonal decomposition of r″ with respect to T and N has the form

r″ = (d|| r′||/dt) T + κ || r′||^2 N .

• The vector

dN/ds + κ T

is orthogonal to both vectors T and N, and hence to the plane spanned by these vectors. In a general space R^n this vector could be decomposed with respect to a basis in the (n − 2)-dimensional subspace orthogonal to this plane. We will restrict below to the case n = 3.


• In R^3 one defines the binormal

B = T × N .

Then the triple {T, N, B} is a right-handed orthonormal system called a moving frame.

• By using the orthogonal decomposition of the acceleration one can obtain an alternative formula for the curvature of a curve in R^3 as follows. We compute

r′ × r″ = κ || r′||^3 B .

Therefore,

κ = || r′ × r″|| / || r′||^3 .

• The scalar quantity

τ = B · dN/ds = −N · dB/ds

is called the torsion of the curve.

• Theorem 3.5.2 (Frenet–Serret Equations) For any smooth curve in R^3 there hold

dT/ds = κ N ,
dN/ds = −κ T + τ B ,
dB/ds = −τ N .

• Theorem 3.5.3 Any two curves in R^3 with identical curvature and torsion are congruent.


3.6 Geometry of Surfaces

• Let S be a parametrized surface. It can be described in general curvilinear coordinates by q^i = q^i(u, v), where u ∈ [a,b] and v ∈ [c,d]. Then r = r(q(u, v)).

• The parameters u and v are called the local coordinates on the surface.

• The curves r(u, v_0) and r(u_0, v), with one coordinate being fixed, are called the coordinate curves.

• The tangent vectors to the coordinate curves,

r_u = ∂ r/∂u = (∂ r/∂q^i)(∂q^i/∂u)   and   r_v = ∂ r/∂v = (∂ r/∂q^i)(∂q^i/∂v) ,

are tangent to the surface.

• A surface is smooth if:
a) r(u, v) has continuous partial derivatives of all orders,
b) the tangent vectors r_u and r_v are non-zero and linearly independent,
c) there are no self-intersections.

• It is allowed that r(a, v) = r(b, v) and r(u, c) = r(u, d).

• A plane T_P spanned by the tangent vectors r_u and r_v at a point P on a smooth surface S is called the tangent plane.

• A surface is smooth if the tangent plane is well defined, that is, the tangent vectors are linearly independent (nonparallel), which means that it does not degenerate to a line or a point at any point of the surface.

• A surface is piecewise smooth if it consists of a finite number of smooth pieces joined together.

• The orientation of the surface is achieved by cutting it into small pieces and orienting the small pieces separately. If this can be done consistently for the whole surface, then the surface is called orientable.

• The boundary ∂S of the surface r = r(u, v), where u ∈ [a,b], v ∈ [c,d], consists of the curves r(a, v), r(b, v), r(u, c) and r(u, d). A surface without boundary is called closed.

• Remark. There are non-orientable smooth surfaces.

• In R^3 one can define the unit normal vector to the surface by

n = || r_u × r_v||^{-1} r_u × r_v .

Notice that

|| r_u × r_v|| = √( || r_u||^2 || r_v||^2 − ( r_u · r_v)^2 ) .


In components,

( r_u × r_v)_i = √g ε_{ilm} (∂q^l/∂u)(∂q^m/∂v) ,

r_u · r_u = g_{ij} (∂q^i/∂u)(∂q^j/∂u) ,   r_v · r_v = g_{ij} (∂q^i/∂v)(∂q^j/∂v) ,   r_u · r_v = g_{ij} (∂q^i/∂u)(∂q^j/∂v) .

• The sign of the normal is determined by the orientation of the surface.

• For a smooth surface the unit normal vector field n varies smoothly over the surface.

• The normal to a closed surface in R^3 is usually oriented in the outward direction.

• In R^3 a surface can also be described by a single equation

F(x, y, z) = 0 .

This equation does not prescribe the orientation though. Then

(∂F/∂x^i)(∂x^i/∂u) = 0 ,   (∂F/∂x^i)(∂x^i/∂v) = 0 .

The unit normal vector is then

n = ± grad F / ||grad F|| .

The sign here is fixed by the choice of the orientation. In components (up to normalization),

n_i = ∂F/∂x^i .

• Let r(u, v) be a surface, u = u(t), v = v(t) be a curve in the rectangle D = [a,b] × [c,d], and r(u(t), v(t)) be the image of that curve on the surface S. Then the arc length of this curve is

dl = ||d r/dt|| dt ,

or

dl^2 = ||d r||^2 = h_{ab} du^a du^b ,

where u^1 = u and u^2 = v, and

h_{ab} = g_{ij} (∂q^i/∂u^a)(∂q^j/∂u^b)

is the induced metric on the surface; the indices a, b take only the values 1, 2. In more detail,

dl^2 = h_{11} du^2 + 2 h_{12} du dv + h_{22} dv^2 ,

and

h_{11} = r_u · r_u ,   h_{12} = h_{21} = r_u · r_v ,   h_{22} = r_v · r_v .


• The area of a plane spanned by the vectors r_u du and r_v dv is called the area element

dS = √h du dv ,

where

h = det h_{ab} = || r_u||^2 || r_v||^2 − ( r_u · r_v)^2 .

• In R^3 the area element can also be written as

dS = || r_u × r_v|| du dv .

• The area element of a surface in R^3 parametrized by

x = u ,   y = v ,   z = f(u, v) ,

is

dS = √( 1 + (∂f/∂x)^2 + (∂f/∂y)^2 ) dx dy .

• The area element of a surface in R^3 described by one equation F(x, y, z) = 0 is

dS = |∂F/∂z|^{-1} ||grad F|| dx dy = |∂F/∂z|^{-1} √( (∂F/∂x)^2 + (∂F/∂y)^2 + (∂F/∂z)^2 ) dx dy ,

if ∂F/∂z ≠ 0.

• The area of a surface S described by r : D → R^n, where D = [a,b] × [c,d], is

S = ∫_S dS = ∫_a^b ∫_c^d √h du dv .

• Let S be a parametrized hypersurface defined by r = r(u) = r(u^1, . . . , u^{n−1}). The tangent vectors to the hypersurface are

∂ r/∂u^a ,   where a = 1, 2, . . . , (n − 1) .

The tangent space at a point P on the hypersurface is the hyperplane equal to the span of these vectors,

T = span { ∂ r/∂u^1 , . . . , ∂ r/∂u^{n−1} } .

The unit normal to the hypersurface S at the point P is the unit vector n orthogonal to T.


If the hypersurface is described by a single equation

F(x) = F(x^1, . . . , x^n) = 0 ,

then the normal is

n = ± grad F / ||grad F|| .

The sign here is fixed by the choice of the orientation. In components,

(dF)_i = ∂F/∂x^i .

The arc length of a curve on the hypersurface is

dl^2 = h_{ab} du^a du^b ,

where

h_{ab} = g_{ij} (∂q^i/∂u^a)(∂q^j/∂u^b) .

The area element of the hypersurface is

dS = √h du^1 · · · du^{n−1} .


Chapter 4

Vector Analysis

4.1 Vector Functions of Several Variables

• The set R^n is the set of ordered n-tuples of real numbers x = (x^1, . . . , x^n). We call such n-tuples points in the space R^n. Note that the points in space (although related to vectors) are not vectors themselves!

• Let D = [a_1, b_1] × · · · × [a_n, b_n] be a closed rectangle in R^n.

• A scalar field is a scalar-valued function of n variables. In other words, it is a map f : D → R, which assigns a real number f(x) = f(x^1, . . . , x^n) to each point x = (x^1, . . . , x^n) of D.

• The hypersurfaces defined by

f(x) = c ,

where c is a constant, are called level surfaces of the scalar field f.

• The level surfaces do not intersect.

• A vector field is a vector-valued function of n variables; it is a map v : D → R^n that assigns a vector v(x) to each point x = (x^1, . . . , x^n) in D.

• A tensor field is a tensor-valued function on D.

• Let v be a vector field. A point x_0 in R^n such that v(x_0) = 0 is called a singular point (or a critical point) of the vector field v. A point x is called regular if it is not singular, that is, if v(x) ≠ 0.

• In a neighborhood of a regular point of a vector field v there is a family of parametrized curves r(t) such that at each point the vector v is tangent to the curves, that is,

d r/dt = f v ,

where f is a scalar field. Such curves are called the flow lines, or stream lines, or characteristic curves, of the vector field v.


• Flow lines do not intersect.

• No flow lines pass through a singular point.

• The flow lines of a vector field v = v^i(x) e_i can be found from the differential equations

dx^1/v^1 = · · · = dx^n/v^n .
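For instance, for the planar field v = (−y, x) these equations give dx/(−y) = dy/x, i.e. x dx + y dy = 0, so the flow lines are circles about the origin. A numerical sketch (scipy, illustrative only) integrating dr/dt = v and checking that the radius is conserved:

```python
import numpy as np
from scipy.integrate import solve_ivp

def v(t, x):
    # v(x, y) = (-y, x): a rotation field with a singular point at the origin
    return [-x[1], x[0]]

sol = solve_ivp(v, (0.0, 10.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
radii = np.hypot(sol.y[0], sol.y[1])

# The flow line through (1, 0) stays on the unit circle
assert np.allclose(radii, 1.0, atol=1e-6)
```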


4.2 Directional Derivative and the Gradient

• Let P_0 be a point and u be a unit vector. Then r(s) = r_0 + s u is the equation of the oriented line passing through P_0 with the unit tangent u.

• Let f(x) be a scalar field. Then the derivative

(d/ds) f(x(s)) |_{s=0}

at s = 0 is called the directional derivative of f at the point P_0 in the direction of u and denoted by

∇_u f = (d/ds) f(x(s)) |_{s=0} .

• The directional derivatives in the direction of the basis vectors e_i are the partial derivatives

∇_{e_i} f = ∂f/∂x^i ,

which are also denoted by

∂_i f = ∂f/∂x^i .

• More generally, let r(s) be a parametrized curve in the natural parametrization and u = d r/ds be the unit tangent. Then

∇_u f = (∂f/∂x^i)(dx^i/ds) .

In curvilinear coordinates

∇_u f = (∂f/∂q^i)(dq^i/ds) .

• The covector (1-form) field with the components ∂f/∂q^i is denoted by

df = (∂f/∂x^i) dx^i = (∂f/∂q^i) dq^i .

• Therefore, the 1-forms dq^i form a basis in the dual space of covectors.

• The vector field corresponding to the 1-form df is called the gradient of the scalar field f and denoted by

grad f = ∇f = g^{ij} (∂f/∂q^i) e_j .

• The directional derivative is simply the action of the covector df on the vector u (or the inner product of the vectors grad f and u):

∇_u f = ⟨df, u⟩ = (grad f, u) .


• Therefore,

∇_u f = ||grad f|| cos θ ,

where θ is the angle between the gradient and the unit tangent u.

• The gradient of a scalar field points in the direction of the maximum rate of increase of the scalar field.

• The maximum value of the directional derivative at a fixed point is equal to the norm of the gradient,

max_u ∇_u f = ||grad f|| .

• The minimum value of the directional derivative at a fixed point is equal to the negative norm of the gradient,

min_u ∇_u f = −||grad f|| .
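These extremal properties are easy to test numerically: sample the directional derivative (grad f) · u over many unit vectors u and compare with ±||grad f||. The quadratic field below is a hypothetical test case, not from the notes:

```python
import numpy as np

# f(x, y) = x^2 + 3y, so grad f = (2x, 3)
p0 = np.array([1.0, 2.0])
g = np.array([2*p0[0], 3.0])

angles = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit vectors u
dd = dirs @ g                                              # directional derivatives

assert abs(dd.max() - np.linalg.norm(g)) < 1e-4
assert abs(dd.min() + np.linalg.norm(g)) < 1e-4
```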

• Let f be a scalar field and P_0 be a point where grad f ≠ 0. Let r = r(s) be a curve passing through P_0 with the unit tangent u = d r/ds. Suppose that the directional derivative vanishes, ∇_u f = 0. Then the unit tangent u is orthogonal to the gradient grad f at P_0. The set of all such curves forms a level surface f(x) = c, where c = f(P_0). The gradient grad f is orthogonal to the tangent plane to this surface at P_0.

• Theorem 4.2.1 For any smooth scalar field f there is a level surface f(x) = c passing through every point where the gradient of f is non-zero, grad f ≠ 0. The gradient grad f is orthogonal to this surface at this point.

• A vector field v is called conservative if there is a scalar field f such that

v = grad f .

The scalar field f is called the scalar potential of v.


4.3 Exterior Derivative

• Recall that antisymmetric tensors of type (0, k) are called k-forms, and antisymmetric tensors of type (k, 0) are called k-vectors. We denote the space of all k-forms by Λ^k and the space of all k-vectors by Λ_k.

• The exterior derivative d is an operator

d : Λ^k → Λ^{k+1}

that assigns a (k + 1)-form to each k-form. It is defined as follows.

• A scalar field can also be called a 0-form. The exterior derivative of a 0-form f is a 1-form

df = (∂f/∂q^i) dq^i

with components

(df)_i = ∂f/∂q^i .

• The exterior derivative of a 1-form σ is a 2-form dσ defined by

(dσ)_{ij} = ∂σ_j/∂q^i − ∂σ_i/∂q^j .

• The exterior derivative of a k-form σ is a (k + 1)-form dσ with components

(dσ)_{i_1 i_2 ... i_{k+1}} = (k + 1) ∂/∂q^{[i_1} σ_{i_2 ... i_{k+1}]} = Σ_{ϕ ∈ S_{k+1}} sign(ϕ) ∂/∂q^{i_{ϕ(1)}} σ_{i_{ϕ(2)} ... i_{ϕ(k+1)}} .

• Theorem 4.3.1 The exterior derivative of a k-form is a (k + 1)-form.

• The exterior derivative plays the role of the gradient for k-forms.

• Theorem 4.3.2 The exterior derivative has the property

d^2 = 0 .

• Recall that the duality operator ∗ assigns an (n − k)-vector to each k-form and an (n − k)-form to each k-vector:

∗ : Λ^k → Λ_{n−k} ,   ∗ : Λ_k → Λ^{n−k} .

• Therefore, one can define the operator

∗d : Λ^k → Λ_{n−k−1} ,

which assigns an (n − k − 1)-vector to each k-form by

(∗dσ)^{i_1 ... i_{n−k−1}} = (1/k!) g^{−1/2} ε^{i_1 ... i_{n−k−1} j_1 j_2 ... j_{k+1}} ∂σ_{j_2 ... j_{k+1}}/∂q^{j_1} .


• We can also define the operator

∗d∗ : Λ_k → Λ_{k−1} ,

acting on k-vectors, which assigns a (k − 1)-vector to each k-vector.

• Theorem 4.3.3 For any k-vector A with components A^{i_1 ... i_k} there holds

(∗d∗A)^{i_1 ... i_{k−1}} = (−1)^{nk+1} g^{−1/2} ∂/∂q^j ( g^{1/2} A^{j i_1 ... i_{k−1}} ) .

• The operator ∗d∗ plays the role of the divergence of k-vectors.

• Theorem 4.3.4 The operator ∗d∗ has the property

(∗d∗)^2 = 0 .

• Let G denote the operator that converts k-vectors to k-forms,

G : Λ_k → Λ^k .

That is, if A^{j_1 ... j_k} are the components of a k-vector, then the corresponding k-form σ = GA has components

σ_{i_1 ... i_k} = g_{i_1 j_1} · · · g_{i_k j_k} A^{j_1 ... j_k} .

• Then the operator

G∗d : Λ^k → Λ^{n−k−1}

assigns an (n − k − 1)-form to each k-form by

(G∗dσ)_{i_1 ... i_{n−k−1}} = (1/k!) g^{−1/2} g_{i_1 m_1} · · · g_{i_{n−k−1} m_{n−k−1}} ε^{m_1 ... m_{n−k−1} j_1 j_2 ... j_{k+1}} ∂σ_{j_2 ... j_{k+1}}/∂q^{j_1} .

• The operator ∗d plays the role of the curl of k-forms.

• Further, we can define the operator

δ = G∗dG∗ : Λ^k → Λ^{k−1} ,

which assigns a (k − 1)-form to each k-form.

• Theorem 4.3.5 For any k-form σ with components σ_{i_1 ... i_k} there holds

(δσ)_{i_1 ... i_{k−1}} = (−1)^{nk+1} g^{−1/2} g_{i_1 j_1} · · · g_{i_{k−1} j_{k−1}} ∂/∂q^j ( g^{1/2} g^{jp} g^{j_1 m_1} · · · g^{j_{k−1} m_{k−1}} σ_{p m_1 ... m_{k−1}} ) .

• The operator δ plays the role of the divergence of k-forms.

• Theorem 4.3.6 The operator δ has the property

δ^2 = 0 .


• Therefore the operator

L = dδ + δd

assigns a k-form to each k-form, that is,

L : Λ^k → Λ^k .

• This operator plays the role of the Laplacian of k-forms.

• A k-form σ is called closed if dσ = 0.

• A k-form σ is called exact if there is a (k − 1)-form α such that σ = dα.

• The 1-form σ corresponding to a conservative vector field v is exact, that is, σ = df.

• Every exact k-form is closed.


4.4 Divergence

• The divergence of a vector field v is a scalar field defined by

div v = (−1)^{n+1} ∗d∗ v ,

which in local coordinates becomes

div v = g^{−1/2} ∂/∂q^i ( g^{1/2} v^i ) ,

where g = det g_{ij}.

• Theorem 4.4.1 For any vector field v the divergence div v is a scalar field.

• The divergence of a covector field σ is

div σ = g^{−1/2} ∂/∂q^i ( g^{1/2} g^{ij} σ_j ) .

• In Cartesian coordinates this gives simply

div v = ∂_i v^i .

• A vector field v is called solenoidal if

div v = 0 .

• The 2-form ∗v dual to a solenoidal vector field v is closed, that is, d∗v = 0.

Physical Interpretation of Divergence

• The divergence of a vector field is the net outflux of the vector field per unit volume.


4.5 Curl

• Recall that the operator ∗d assigns an (n − k − 1)-vector to a k-form. In case n = 3 and k = 1 this operator assigns a vector to a 1-form. This enables one to define the curl operator in R^3, which assigns a vector to a covector by

curl σ = ∗dσ ,

or, in components,

(curl σ)^i = g^{−1/2} ε^{ijk} ∂σ_k/∂q^j ,   i.e.   curl σ = g^{−1/2} det [ e_1      e_2      e_3
                                                                          ∂/∂q^1   ∂/∂q^2   ∂/∂q^3
                                                                          σ_1      σ_2      σ_3 ] .

• We can also define the curl of a vector field v by

(curl v)^i = g^{−1/2} ε^{ijk} ∂/∂q^j ( g_{km} v^m ) .

• In Cartesian coordinates we have simply

(curl σ)^i = ε^{ijk} ∂_j σ_k .

This can also be written in the form

curl σ = det [ i    j    k
               ∂_x  ∂_y  ∂_z
               σ_1  σ_2  σ_3 ] .

• A vector field v in R^3 is called irrotational if

curl v = 0 .

• The 1-form σ corresponding to an irrotational vector field v is closed, that is, dσ = 0.

• Each conservative vector field is irrotational.
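The identity curl grad f = 0 behind this statement can be verified symbolically; a sympy sketch with an arbitrary smooth test field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2*sp.sin(y) + z*sp.exp(x*y)                # arbitrary smooth scalar field
g = sp.Matrix([f.diff(x), f.diff(y), f.diff(z)])  # grad f

curl_g = sp.Matrix([g[2].diff(y) - g[1].diff(z),
                    g[0].diff(z) - g[2].diff(x),
                    g[1].diff(x) - g[0].diff(y)])

assert sp.simplify(curl_g) == sp.zeros(3, 1)      # curl grad f = 0
```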

• Let v be a vector field. If there is a vector field A such that

v = curl A ,

then A is called the vector potential of v.

• If v has a vector potential, then it is solenoidal.

• If A is a vector potential for v, then the 2-form ∗v dual to v is exact, that is, ∗v = dα, where α is the 1-form corresponding to A.

Physical Interpretation of the Curl

• The curl of a vector field measures its tendency to swirl; it is the swirl per unit area.


4.6 Laplacian

• The scalar Laplace operator (or the Laplacian) is the map ∆ : C^∞(R^n) → C^∞(R^n) that assigns a scalar field to a scalar field. It is defined by

∆f = div grad f = g^{−1/2} ∂_i ( g^{1/2} g^{ij} ∂_j f ) .

• In Cartesian coordinates it is simply

∆f = ∂_i ∂_i f .

• The Laplacian of a 1-form (covector field) σ is defined as follows. First, one obtains a 2-form dσ by the exterior derivative. Then one takes the dual of this 2-form to get an (n − 2)-form ∗dσ. Then one acts by the exterior derivative to get an (n − 1)-form d∗dσ, and, finally, by taking the dual again one gets the 1-form ∗d∗dσ. Similarly, reversing the order of operations one gets the 1-form d∗d∗σ. The Laplacian is the sum of these 1-forms, i.e.

∆σ = −(G∗dG∗d + dG∗dG∗)σ .

The expression of this Laplacian in components is too complicated, in general.

• The component expression for this is

(∆v)^i = { g^{ij} ∂/∂q^j g^{−1/2} ∂/∂q^k g^{1/2} − g^{−1/2} ∂/∂q^j g^{1/2} g^{pi} g^{qj} ∂/∂q^p g_{qk} + g^{−1/2} ∂/∂q^j g^{1/2} g^{pj} g^{qi} ∂/∂q^p g_{qk} } v^k .

• Of course, in Cartesian coordinates this simplifies significantly:

(∆v)^i = ∂_j ∂_j v^i .

• In R^3 it can be written as

∆v = grad div v − curl curl v .
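This identity can be checked component by component in Cartesian coordinates; the test field below is arbitrary (a sketch, not from the notes):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
v = sp.Matrix([x**2*y, y*z**2, sp.sin(x)*z])     # arbitrary test vector field

def curl(w):
    return sp.Matrix([w[2].diff(y) - w[1].diff(z),
                      w[0].diff(z) - w[2].diff(x),
                      w[1].diff(x) - w[0].diff(y)])

lap_v = sp.Matrix([sum(c.diff(xi, 2) for xi in X) for c in v])
div_v = sum(v[i].diff(X[i]) for i in range(3))
grad_div = sp.Matrix([div_v.diff(xi) for xi in X])

# Delta v = grad div v - curl curl v
assert sp.simplify(lap_v - (grad_div - curl(curl(v)))) == sp.zeros(3, 1)
```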

Interpretation of the Laplacian

• The Laplacian ∆ measures the difference between the value of a scalar field f(P) at a point P and the average of f around this point.


4.7 Differential Vector Identities

• The identities below that involve the vector product and the curl apply only in R^3. The other formulas are valid in arbitrary R^n in arbitrary coordinate systems:

grad ( f h) = (grad f) h + f grad h
div ( f v) = (grad f) · v + f div v
grad f(h(x)) = (df/dh) grad h
curl ( f v) = (grad f) × v + f curl v
div (u × v) = (curl u) · v − u · (curl v)
curl grad f = 0
div curl v = 0
div (grad f × grad h) = 0

• Let e_i be the standard basis in R^n, x^i be the Cartesian coordinates, r = x^i e_i be the position (radius) vector field and r = || r || = √(x^i x^i). Scalar fields that depend only on r and vector fields that depend on x and r are called radial fields. Below a is a constant vector field.

div r = n
curl r = 0
grad (a · r) = a
curl (a × r) = 2a
grad r = r/r
grad f(r) = (df/dr) r/r
grad (1/r) = − r/r^3
grad r^k = k r^{k−2} r
∆f(r) = f″ + ((n − 1)/r) f′
∆r^k = k(k + n − 2) r^{k−2}
∆(1/r^{n−2}) = 0

• Some useful formulas when working with radial fields are

∂_i x^k = δ^k_i ,   δ^i_i = n .
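Two of these radial identities, div r = n and ∆r^k = k(k + n − 2) r^{k−2}, can be spot-checked in n = 3 with sympy (an illustrative sketch; k = −1 reproduces the harmonic function 1/r^{n−2}):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# div r = n, here n = 3
assert sum(sp.diff(c, v) for c, v in zip((x, y, z), (x, y, z))) == 3

# Delta r^k = k (k + n - 2) r^(k-2) with n = 3
for k in (2, 5, -1, -3):
    lap = sum((r**k).diff(v, 2) for v in (x, y, z))
    assert sp.simplify(lap - k*(k + 1)*r**(k - 2)) == 0
```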


4.8 Orthogonal Curvilinear Coordinate Systems in R^3

• Let (q^1, q^2, q^3) be an orthogonal coordinate system in R^3 and {e_1, e_2, e_3} be the corresponding orthonormal basis

e_i = (1/h_i) ∂ r/∂q^i ,

where

h_i = ||∂ r/∂q^i||

are the scale factors.

Then for any vector v = v^i e_i the contravariant and the covariant components coincide:

v^i = v_i = e_i · v .

• The displacement vector, the interval and the volume element in an orthogonal coordinate system are

d r = h_1 dq^1 e_1 + h_2 dq^2 e_2 + h_3 dq^3 e_3 ,
ds^2 = h_1^2 (dq^1)^2 + h_2^2 (dq^2)^2 + h_3^2 (dq^3)^2 ,
dV = h_1 h_2 h_3 dq^1 dq^2 dq^3 .

• The differential operators introduced above take the following form:

grad f = e_1 (1/h_1) ∂f/∂q^1 + e_2 (1/h_2) ∂f/∂q^2 + e_3 (1/h_3) ∂f/∂q^3 ,

div v = (1/(h_1 h_2 h_3)) { ∂/∂q^1 (h_2 h_3 v^1) + ∂/∂q^2 (h_3 h_1 v^2) + ∂/∂q^3 (h_1 h_2 v^3) } ,

curl v = e_1 (1/(h_2 h_3)) [ ∂/∂q^2 (h_3 v^3) − ∂/∂q^3 (h_2 v^2) ]
       + e_2 (1/(h_3 h_1)) [ ∂/∂q^3 (h_1 v^1) − ∂/∂q^1 (h_3 v^3) ]
       + e_3 (1/(h_1 h_2)) [ ∂/∂q^1 (h_2 v^2) − ∂/∂q^2 (h_1 v^1) ] ,

∆f = (1/(h_1 h_2 h_3)) { ∂/∂q^1 ( (h_2 h_3/h_1) ∂f/∂q^1 ) + ∂/∂q^2 ( (h_3 h_1/h_2) ∂f/∂q^2 ) + ∂/∂q^3 ( (h_1 h_2/h_3) ∂f/∂q^3 ) } .

• Cylindrical coordinates:

d r = dρ e_ρ + ρ dϕ e_ϕ + dz e_z ,
ds^2 = dρ^2 + ρ^2 dϕ^2 + dz^2 ,


dV = ρ dρ dϕ dz ,

grad f = e_ρ ∂_ρ f + e_ϕ (1/ρ) ∂_ϕ f + e_z ∂_z f ,

div v = (1/ρ) ∂_ρ (ρ v^ρ) + (1/ρ) ∂_ϕ v^ϕ + ∂_z v^z ,

curl v = (1/ρ) det [ e_ρ   ρ e_ϕ   e_z
                     ∂_ρ   ∂_ϕ     ∂_z
                     v^ρ   ρ v^ϕ   v^z ] ,

∆f = (1/ρ) ∂_ρ (ρ ∂_ρ f) + (1/ρ^2) ∂_ϕ^2 f + ∂_z^2 f .

• Spherical coordinates:

d r = dr e_r + r dθ e_θ + r sin θ dϕ e_ϕ ,
ds^2 = dr^2 + r^2 dθ^2 + r^2 sin^2 θ dϕ^2 ,
dV = r^2 sin θ dr dθ dϕ ,

grad f = e_r ∂_r f + e_θ (1/r) ∂_θ f + e_ϕ (1/(r sin θ)) ∂_ϕ f ,

div v = (1/r^2) ∂_r (r^2 v^r) + (1/(r sin θ)) ∂_θ (sin θ v^θ) + (1/(r sin θ)) ∂_ϕ v^ϕ ,

curl v = (1/(r^2 sin θ)) det [ e_r   r e_θ   r sin θ e_ϕ
                               ∂_r   ∂_θ     ∂_ϕ
                               v^r   r v^θ   r sin θ v^ϕ ] ,

∆f = (1/r^2) ∂_r (r^2 ∂_r f) + (1/(r^2 sin θ)) ∂_θ (sin θ ∂_θ f) + (1/(r^2 sin^2 θ)) ∂_ϕ^2 f .
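As a consistency check of the spherical Laplacian (a sympy sketch, not in the notes), ∆(r^2) should give 6, matching ∆(x^2 + y^2 + z^2) = 6 in Cartesian coordinates, and ∆(1/r) should vanish away from the origin:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

def lap_sph(f):
    # Spherical Laplacian as given above
    return (sp.diff(r**2*sp.diff(f, r), r)/r**2
            + sp.diff(sp.sin(theta)*sp.diff(f, theta), theta)/(r**2*sp.sin(theta))
            + sp.diff(f, phi, 2)/(r**2*sp.sin(theta)**2))

assert sp.simplify(lap_sph(r**2)) == 6
assert sp.simplify(lap_sph(1/r)) == 0
assert sp.simplify(lap_sph(r*sp.cos(theta))) == 0   # r cos(theta) = x^3 is harmonic
```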


Chapter 5

Integration

5.1 Line Integrals

• Let C be a smooth curve described by r(t), where t ∈ [a,b]. The length of the curve is defined by

L = ∫_C ds = ∫_a^b ||d r/dt|| dt .

• Let f be a scalar field. Then the line integral of the scalar field f is

∫_C f ds = ∫_a^b f(x(t)) ||d r/dt|| dt .

• If v is a vector field, then the line integral of the vector field v along the curve C is defined by

∫_C v · d r = ∫_a^b v(x(t)) · (d r/dt) dt .

• In components, the line integral of a vector field takes the form

∫_C v · d r = ∫_C v_i dq^i = ∫_C ( v_1 dq^1 + · · · + v_n dq^n ) ,

where v_i = g_{ij} v^j are the covariant components of the vector field.

• The expression

σ = v_i dq^i = v_1 dq^1 + · · · + v_n dq^n

is called a differential 1-form. Each covector naturally defines a differential form. That is why it is also called a 1-form.

• If C is a closed curve, then the line integral of a vector field is denoted by

∮C v · dr

and is called the circulation of the vector field v about the closed curve C.
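A circulation can be computed directly from the definition. A minimal Python sketch (the field v = (−y, x) and the unit circle are illustrative choices; the exact circulation is 2π):

```python
import math

N = 100_000
total = 0.0
for k in range(N):
    t = 2*math.pi * (k + 0.5) / N        # midpoint rule on [0, 2*pi]
    x, y = math.cos(t), math.sin(t)      # r(t): the unit circle
    dx, dy = -math.sin(t), math.cos(t)   # dr/dt
    vx, vy = -y, x                       # v(r(t))
    total += (vx*dx + vy*dy) * (2*math.pi / N)
print(total)   # ≈ 6.2832 = 2*pi
```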


5.2 Surface Integrals

• Let S be a smooth parametrized surface described by r : D → Rn, where D = [a,b] × [c,d]. The surface integral of a scalar field f is

∫S f dS = ∫_a^b ∫_c^d f(x(u,v)) √h du dv ,

where u1 = u, u2 = v, h = det hab, and hab is the induced metric on the surface

hab = gij (∂qi/∂ua)(∂qj/∂ub) .

• Let A be an antisymmetric tensor field of type (0,2) with components Aij. It naturally defines the differential 2-form

α = ∑_{i<j} Aij dqi ∧ dqj
  = A12 dq1 ∧ dq2 + · · · + A1n dq1 ∧ dqn
  + A23 dq2 ∧ dq3 + · · · + A2n dq2 ∧ dqn
  + · · · + An−1,n dqn−1 ∧ dqn .

• Then the surface integral of a 2-form α is defined by

∫S α = ∫S ∑_{i<j} Aij dqi ∧ dqj = ∫_a^b ∫_c^d ∑_{i<j} Aij Jij du dv ,

where

Jij = (∂qi/∂u)(∂qj/∂v) − (∂qj/∂u)(∂qi/∂v) .

• In R3 every 2-form defines a dual vector. Therefore, one can integrate vectors over a surface. Let v be a vector field in R3. Then the dual 2-form is

Aij = √g εijk vk ,

or

A12 = √g v3 ,   A13 = −√g v2 ,   A23 = √g v1 .

Therefore,

α = √g (v3 dq1 ∧ dq2 − v2 dq1 ∧ dq3 + v1 dq2 ∧ dq3) .

Then the surface integral of the vector field v, called the total flux of the vector field through the surface, is

∫S α = ∫S v · n dS = ∫_a^b ∫_c^d [v, ru, rv] du dv ,


where

n = (ru × rv)/‖ru × rv‖

is the unit normal to the surface and

[v, ru, rv] = vol(v, ru, rv) = √g εijk vi (∂qj/∂u)(∂qk/∂v) .
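The flux formula ∫∫ [v, ru, rv] du dv can be evaluated numerically once the surface is parametrized. A minimal Python sketch (the unit sphere with u = θ, v = ϕ and the field v = r are illustrative choices; the exact flux is 4π):

```python
import math

def r(th, ph):                       # unit sphere, u = theta, v = phi
    return (math.sin(th)*math.cos(ph),
            math.sin(th)*math.sin(ph),
            math.cos(th))

def partial(i, th, ph, h=1e-6):      # dr/du or dr/dv by central differences
    a = r(th + h, ph) if i == 0 else r(th, ph + h)
    b = r(th - h, ph) if i == 0 else r(th, ph - h)
    return tuple((x - y)/(2*h) for x, y in zip(a, b))

def triple(a, b, c):                 # a . (b x c)
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

N = 200
flux = 0.0
for i in range(N):
    th = math.pi * (i + 0.5) / N
    for j in range(N):
        ph = 2*math.pi * (j + 0.5) / N
        flux += triple(r(th, ph), partial(0, th, ph), partial(1, th, ph)) \
                * (math.pi/N) * (2*math.pi/N)
print(flux)   # ≈ 12.566 = 4*pi
```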

• Similarly, the integral of a differential k-form

α = ∑_{i1<···<ik} Ai1...ik dqi1 ∧ · · · ∧ dqik

with components Ai1...ik over a k-dimensional surface qi = qi(u1, . . . , uk), ui ∈ [ai,bi], is defined by

∫S α = ∫_{ak}^{bk} · · · ∫_{a1}^{b1} ∑_{i1<···<ik} Ai1...ik Ji1...ik du1 · · · duk ,

where

Ji1...ik = k! (∂q[i1/∂u1) · · · (∂qik]/∂uk) .

• The surface integral over a closed surface S without boundary is denoted by ∮S α .

• In the case k = n−1 we obtain the integral of an (n−1)-form α over a hypersurface

∫S α = ∫_{an−1}^{bn−1} · · · ∫_{a1}^{b1} ∑_{i1<···<in−1} Ai1...in−1 Ji1...in−1 du1 · · · dun−1 .

Let n be the unit vector orthogonal to the hypersurface and v = ∗α the vector field dual to the (n−1)-form α. Then

∫S α = ∫_{an−1}^{bn−1} · · · ∫_{a1}^{b1} v · n √h du1 · · · dun−1 .

This defines the total flux of the vector field v through the hypersurface S.

The normal can be determined by

√h nj = (1/(n−1)!) √g εji1...in−1 (∂qi1/∂u1) · · · (∂qin−1/∂un−1) .


5.3 Volume Integrals

• Let D = [a1,b1] × · · · × [an,bn] be a domain in Rn described in local coordinates (qi) by qi ∈ [ai,bi] .

• The volume element in general curvilinear coordinates is

dV = √g dq1 · · · dqn ,

where g = det(gij).

• The volume of the region D is

V = ∫D dV = ∫_{a1}^{b1} · · · ∫_{an}^{bn} √g dq1 · · · dqn .

• The volume integral of a scalar field f(x) is

∫D f dV = ∫_{a1}^{b1} · · · ∫_{an}^{bn} f(x(q)) √g dq1 · · · dqn .
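With the spherical volume element √g = r² sinθ, for instance, the volume of the unit ball comes out as 4π/3. A minimal Python sketch (the unit-ball domain is an illustrative choice; the ϕ integral contributes a constant factor 2π):

```python
import math

N = 100
V = 0.0
for i in range(N):
    r = (i + 0.5) / N                 # midpoint rule on r in [0, 1]
    for j in range(N):
        th = math.pi * (j + 0.5) / N  # midpoint rule on theta in [0, pi]
        V += r*r * math.sin(th) * (1/N) * (math.pi/N) * (2*math.pi)
print(V)   # ≈ 4.18879 = 4*pi/3
```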


5.4 Fundamental Integral Theorems

5.4.1 Fundamental Theorem of Line Integrals

• Theorem 5.4.1 Let C be a smooth curve parametrized by r(t), t ∈ [a,b]. Then for any scalar field f (a 0-form)

∫C df = ∫C (∂f/∂qi) dqi = ∫C grad f · dr = f(x(b)) − f(x(a)) .

• The line integral of a conservative vector field does not depend on the shape of the curve but only on its endpoints.

• Corollary 5.4.1 The circulation of a smooth conservative vector field over a closed smooth curve is zero,

∮C grad f · dr = 0 .
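The fundamental theorem is easy to verify numerically for a sample field. A minimal Python sketch (f(x,y) = x²y and the curve r(t) = (cos t, t²) on [0,1] are illustrative choices):

```python
import math

f      = lambda x, y: x*x*y            # sample scalar field
grad_f = lambda x, y: (2*x*y, x*x)     # its gradient
r      = lambda t: (math.cos(t), t*t)  # sample curve, t in [0, 1]

N, h = 10_000, 1e-6
total = 0.0
for k in range(N):
    t = (k + 0.5) / N                  # midpoint rule on [0, 1]
    x1, y1 = r(t + h); x0, y0 = r(t - h)
    dx, dy = (x1 - x0)/(2*h), (y1 - y0)/(2*h)   # dr/dt
    gx, gy = grad_f(*r(t))
    total += (gx*dx + gy*dy) * (1/N)
print(total, f(*r(1.0)) - f(*r(0.0)))  # the two values agree
```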

5.4.2 Green’s Theorem

• Theorem 5.4.2 Let x and y be the Cartesian coordinates in R2. Let U be a bounded region in R2 with the boundary ∂U, which is a closed curve oriented counterclockwise. Then for any 1-form α = Ai dxi = A1 dx + A2 dy

∫U (∂A2/∂x − ∂A1/∂y) dx dy = ∮∂U (A1 dx + A2 dy) .
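Both sides of Green's theorem can be checked numerically for a sample 1-form. A minimal Python sketch (α = −y dx + x dy on the unit disk is an illustrative choice; here ∂A2/∂x − ∂A1/∂y = 2, so both sides equal 2π):

```python
import math

# Left side: double integral of dA2/dx - dA1/dy = 2 over the unit disk.
N = 500
lhs = 0.0
for i in range(N):
    x = -1 + 2*(i + 0.5)/N
    for j in range(N):
        y = -1 + 2*(j + 0.5)/N
        if x*x + y*y <= 1:
            lhs += 2 * (2/N) * (2/N)

# Right side: boundary integral of A1 dx + A2 dy around the unit circle.
M = 10_000
rhs = 0.0
for k in range(M):
    t = 2*math.pi * (k + 0.5) / M
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)   # dr/dt
    rhs += (-y*dx + x*dy) * (2*math.pi / M)
print(lhs, rhs)   # both ≈ 6.28 = 2*pi
```

The left side carries a small discretization error from the cells straddling the circle; refining N shrinks it.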

5.4.3 Stokes’s Theorem

• Theorem 5.4.3 Let S be a bounded surface in R3 with the boundary ∂S oriented consistently with the surface S. Then for any vector field v

∫S curl v · n dS = ∮∂S v · dr .

5.4.4 Gauss’s Theorem

• Theorem 5.4.4 Let D be a bounded domain in R3 with the boundary ∂D oriented by an outward normal. Then for any vector field v

∫D div v dV = ∮∂D v · n dS .
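Gauss's theorem can likewise be checked for a sample field. A minimal Python sketch (v = (x,y,z) on the unit ball is an illustrative choice: div v = 3, so the volume side is 3 · 4π/3 = 4π, while on the unit sphere v · n = 1, so the flux equals the sphere's area, also 4π):

```python
import math

N = 200
# Volume side: integral of div v = 3 with dV = r^2 sin(theta) dr dtheta dphi.
vol_side = 0.0
for i in range(N):
    r = (i + 0.5) / N
    for j in range(N):
        th = math.pi * (j + 0.5) / N
        vol_side += 3 * r*r * math.sin(th) * (1/N) * (math.pi/N) * (2*math.pi)

# Surface side: integral of (v . n) dS = sin(theta) dtheta dphi over the sphere.
surf_side = 0.0
for j in range(N):
    th = math.pi * (j + 0.5) / N
    surf_side += math.sin(th) * (math.pi/N) * (2*math.pi)
print(vol_side, surf_side)   # both ≈ 12.566 = 4*pi
```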


5.4.5 General Stokes’s Theorem

• Theorem 5.4.5 Let S be a bounded smooth k-dimensional surface in Rn with the boundary ∂S, which is a closed (k−1)-dimensional surface oriented consistently with S. Let

α = ∑_{i1<···<ik−1} Ai1...ik−1 dqi1 ∧ · · · ∧ dqik−1

be a smooth (k−1)-form. Then

∫S dα = ∮∂S α .

• In components this formula takes the form

∫S (∂Ai1...ik−1/∂qj) (∂q[j/∂u1) (∂qi1/∂u2) · · · (∂qik−1]/∂uk) du1 · · · duk
    = ∫∂S Ai1...ik−1 (∂q[i1/∂u1) · · · (∂qik−1]/∂uk−1) du1 · · · duk−1 .


Chapter 6

Potential Theory


6.1 Simply Connected Domains


6.2 Conservative Vector Fields

6.2.1 Scalar Potential


6.3 Irrotational Vector Fields


6.4 Solenoidal Vector Fields

6.4.1 Vector Potential


6.5 Laplace Equation

6.5.1 Harmonic Functions


6.6 Poisson Equation

6.6.1 Dirac Delta Function

6.6.2 Point Sources

6.6.3 Dirichlet Problem

6.6.4 Neumann Problem

6.6.5 Green’s Functions


6.7 Fundamental Theorem of Vector Analysis


Chapter 7

Basic Concepts of Differential Geometry


7.1 Manifolds


7.2 Differential Forms

7.2.1 Exterior Product

7.2.2 Exterior Derivative


7.3 Integration of Differential Forms


7.4 General Stokes’s Theorem


7.5 Tensors in General Curvilinear Coordinate Systems

7.5.1 Covariant Derivative


Chapter 8

Applications


8.1 Mechanics

8.1.1 Inertia Tensor

8.1.2 Angular Momentum Tensor


8.2 Elasticity

8.2.1 Strain Tensor

8.2.2 Stress Tensor


8.3 Fluid Dynamics

8.3.1 Continuity Equation

8.3.2 Tensor of Momentum Flux Density

8.3.3 Euler’s Equations

8.3.4 Rate of Deformation Tensor

8.3.5 Navier-Stokes Equations


8.4 Heat and Diffusion Equations


8.5 Electrodynamics

8.5.1 Tensor of Electromagnetic Field

8.5.2 Maxwell Equations

8.5.3 Scalar and Vector Potentials

8.5.4 Wave Equations

8.5.5 D’Alembert Operator

8.5.6 Energy-Momentum Tensor


8.6 Basic Concepts of Special and General Relativity


Bibliography

[1] A. I. Borisenko and I. E. Tarapov, Vector and Tensor Analysis, Dover, 1979

[2] D. E. Bourne and P. C. Kendall, Vector Analysis and Cartesian Tensors, Nelson, 1977

[3] H. F. Davis and A. D. Snider, Introduction to Vector Analysis, 7th Edition, Brown Publishers, 1995

[4] J. H. Hubbard and B. B. Hubbard, Vector Calculus, Linear Algebra, and Differential Forms, 2nd Edition, Prentice Hall, 2001

[5] H. M. Schey, Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, W. W. Norton, 1997

[6] Th. Shifrin, Multivariable Mathematics, Wiley, 2005

[7] M. Spivak, Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus, HarperCollins Publishers, 1965

[8] J. L. Synge and A. Schild, Tensor Calculus, Dover, 1978

[9] R. C. Wrede, Introduction to Vector and Tensor Analysis, Dover, 1972


Notation
