vector calculus - university of california, riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1...

39
Vector Calculus Andrew Monnot Contents 1 Vector Spaces 2 1.1 Basic Terminology .............................. 2 1.2 Bases and Dimension of Vector Spaces ................... 2 1.3 Functions between Vector Spaces ...................... 3 2 The Vector Space R n 5 2.1 Magnitude in R ................................ 5 2.2 Magnitudes in R n ............................... 6 2.3 Dot Product and Distance in R n ....................... 7 2.4 Hyperplanes .................................. 9 2.5 The Cross Product .............................. 9 3 Limits and Continuity of Functions between Vector Spaces 12 3.1 Limits ..................................... 12 3.2 Continuity ................................... 14 4 Differentiation 15 4.1 Differentiation in Vector Spaces ....................... 15 4.2 Differentiation in R n ............................. 16 4.3 Differentials and Tangent Planes in R n ................... 17 4.4 Optimization ................................. 19 5 Integration 21 5.1 Integration in Vector Spaces? ........................ 21 5.2 Integration in R n ............................... 21 5.3 Change of Variables .............................. 23 5.4 Subordinate Integration ........................... 24 6 Vector Calculus in R 3 28 6.1 Gradient, Curl, and Divergence ....................... 28 6.2 Main Theorems ................................ 30 7 Applications 35 7.1 Vortex Dynamics ............................... 35 7.2 Electrodynamics ............................... 36 References 39 1

Upload: dinhminh

Post on 09-Mar-2018

224 views

Category:

Documents


2 download

TRANSCRIPT

Page 1: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Vector CalculusAndrew Monnot

Contents

1 Vector Spaces 21.1 Basic Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.2 Bases and Dimension of Vector Spaces . . . . . . . . . . . . . . . . . . . 21.3 Functions between Vector Spaces . . . . . . . . . . . . . . . . . . . . . . 3

2 The Vector Space Rn 52.1 Magnitude in R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52.2 Magnitudes in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62.3 Dot Product and Distance in Rn . . . . . . . . . . . . . . . . . . . . . . . 72.4 Hyperplanes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92.5 The Cross Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

3 Limits and Continuity of Functions between Vector Spaces 123.1 Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123.2 Continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

4 Differentiation 154.1 Differentiation in Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . 154.2 Differentiation in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164.3 Differentials and Tangent Planes in Rn . . . . . . . . . . . . . . . . . . . 174.4 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

5 Integration 215.1 Integration in Vector Spaces? . . . . . . . . . . . . . . . . . . . . . . . . 215.2 Integration in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215.3 Change of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235.4 Subordinate Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

6 Vector Calculus in R3 286.1 Gradient, Curl, and Divergence . . . . . . . . . . . . . . . . . . . . . . . 286.2 Main Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

7 Applications 357.1 Vortex Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357.2 Electrodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

References 39

1

Page 2: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

1 Vector Spaces

1.1 Basic Terminology

We begin with the formal definition of a vector space.

Definition 1.1. A vector space over a field F (also called an F -vector space)is a set V together a binary operation + : V × V → V (called addition), a function· : F × V → V (called scalar multiplication), and an element 0 ∈ V (called the zerovector) such that

u+ v = v + u (commutativity)

(u+ v) + w = u+ (v + w) (associativity)

v + 0 = v (additive identity)

v + (−v) = 0 (existence of an inverse −v)

(rs) · v = r · (sv) (associativity of scalars)

1 · v = v (scalar identity)

for all u, v, w ∈ V and r, s, 1 ∈ F. We will call elements of V vectors and elements of Fscalars.

For example, the real numbers R is a vector space over itself: addition is just ordinaryaddition, and scalar multiplication is regular multiplication. The complex numbers C isa vector space over the real numbers (and over itself). Recall that if we have two sets Xand Y , we can define the Cartesian product as the set

X × Y = (x, y) : x ∈ X and y ∈ Y .

Definition 1.2. Let U and V be vector spaces over a field F. We define the productvector space U × V as Cartesian product of U and V together with the followingdefinitions:

0 = (0, 0)

(u1, v1) + (u2, v2) = (u1 + u2, v1 + v2)

r · (u, v) = (ru, rv).

One can easily verify that U × V becomes an F -vector space with the above rules.(note by ru we mean r · u, and we will omit the · from now on when performing scalarmultiplication). It follows that Rn = Xni=1 R is a real vector space, and that Cn is both areal and complex vector space (meaning its scalars may come from the field R or C).

1.2 Bases and Dimension of Vector Spaces

Definition 1.3. A subset S ⊆ V of vectors is called a spanning set (or is said to spanV ) if for every v ∈ V we can write

v =n∑i=1

ciei

for ei ∈ S and ci ∈ F.

2

Page 3: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Example 1.4. Let V = R2 and S = i, j where i = (1, 0) and j = (0, 1). Then S spansR2 since for any v = (a, b) ∈ R2 we can write

(a, b) = (a, 0) + (0, b) = a(1, 0) + b(0, 1) = ai+ bj.

Definition 1.5. A subset L ⊆ V is said to be linearly independent (or, any twoelements of L are linearly independent) if for any finite number of elements v1, ..., vnin L,

c1v1 + · · · cnvn = 0 =⇒ c1 = · · · = cn = 0.

Definition 1.6. A subset B ⊆ V of vectors is called a basis of/for V if B spans V andis linearly independent.

Example 1.7. We showed that i, j spanned R2. Now observe that

ai+ bj = 0⇐⇒ a(1, 0) + b(0, 1) = (a, 0) + (0, b) = (a, b) = (0, 0)

and hence a = 0 and b = 0. So i, j is a linearly independent set, and is hence a basis ofR2.

Proposition 1.8. Let B and B′ be two bases for a vector space V. Then B and B′ havethe same number of elements (cardinality). Moreover, every vector space has a basis.

Definition 1.9. Let B be a basis of V. We define the dimension of V over its field Fby

dimF (V ) = |B|,

where |B| is the cardinality of B.

Example 1.10. Let V = Rn and ei = (0, ..., 1, ..., 0) be the vector in Rn with a 1 in theith component and 0s in each of the other components. Then e1, ..., en is a basis of Rn

and hencedimR(Rn) = n.

1.3 Functions between Vector Spaces

Functions between two F -vector spaces U and V are just functions f : U → V. Unlessotherwise specified, we will assume all vector spaces we mention have scalars that comefrom the same field. Sometimes a function between two vector spaces is special :

Definition 1.11. A function f : U → V is called a linear map (or more technically, ahomomorphism of vector spaces) if for all x, y ∈ U and r ∈ F we have

f(x+ y) = f(x) + f(y)

f(rx) = rf(x).

Example 1.12. Any function f : R→ R of the form f(x) = ax is a linear map since

f(x+ y) = a(x+ y) = ax+ ay = f(x) + f(y)

andf(rx) = a(rx) = r(ax) = rf(x).

3

Page 4: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Proposition 1.13. If f : U → V is a linear map between vector spaces, then f(0) = 0.

Proof. Since f is a linear map and every vector x ∈ U has an inverse −x such thatx+ (−x) = 0, we have that

f(0) = f(x+ (−x)) = f(x) + f(−x) = f(x)− f(x) = 0

for any choice of a vector x ∈ U. Now suppose U and V are finite dimensional vector spaces over a field F and f : U →

V is a function between them. Suppose dim(U) = m and dim(V ) = n. Then for x ∈ U wecan write x = (x1, ..., xm) where xi ∈ F. Similarly we can write f(x) = (f1(x), ..., fn(x))such that fi : U → F.

4

Page 5: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

2 The Vector Space Rn

2.1 Magnitude in RFor x ∈ R, we define its norm (or magnitude) as its absolute value |x|, which is definedas

|x| =

x if x > 0−x if x < 00 if x = 0

.

And for two real numbers x, y ∈ R, we define their distance as the norm of the difference:d(x, y) = |x− y|.

Proposition 2.1. For x, y, z ∈ R we have

(a) |x| ≥ 0 and |x| = 0 iff x = 0.

(b) |xy| = |x||y|.

(c) |x+ y| ≤ |x|+ |y|.

Proof. (a) This follows from the definition (if x < 0, then −x > 0.) (b) xy < 0 iffx < 0 or y < 0 but not both. Hence |xy| = −xy = |x||y|. xy > 0 iff x, y > 0 or x, y < 0.In either case, |xy| = xy = (−x)(−y) = |x||y|. And |xy| = 0 iff x = 0 or y = 0 and henceiff |x||y| = 0. (c) Note that we have −|x| ≤ x ≤ |x| and −|y| ≤ y ≤ |y|. If we add thesetwo inequalities we obtain

−(|x|+ |y|) ≤ x+ y ≤ |x|+ |y|,

or equivalently, |x+ y| ≤ |x|+ |y|.

Proposition 2.2. For x, y, z ∈ R we have

(a) |x− y| ≥ 0 and |x− y| = 0 iff x = y.

(b) |x− y| = |y − x|.

(c) |x− y| ≤ |x− z|+ |z − y|.

Proof. (a) Nonnegativity follows immediately from the previous proposition, as doesthe fact that |x− y| = 0 iff x− y = 0 iff x = y. (b) We have

|x− y| = |(−1)(y − x)| = | − 1||y − x| = |y − x|.

(c) We have|x− y| = |(x− z) + (z − y)| ≤ |x− z|+ |z − y|

by the previous proposition.

5

Page 6: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

2.2 Magnitudes in Rn

Can we construct a notion of magnitude ‖ · ‖ on Rn that satisifies the same properties?Namely, is there a function ‖ · ‖ : Rn → R such that

(a) ‖x‖ ≥ 0 and ‖x‖ = 0 iff x = 0,

(b) ‖cx‖ = |c|‖x‖, and

(c) ‖x+ y‖ ≤ ‖x‖+ ‖y‖,

which we will call a norm. Note that in condition (b) above we look at a vector x ∈ Rn

being multiplied by a scalar c ∈ R since we don’t yet have a notion of a product of vectors.The first guess for x = (x1, ..., xn) ∈ Rn might be

‖x‖ = ‖x‖1 =n∑i=1

|xi|.

Proposition 2.3. ‖ · ‖1 is a norm on Rn.

Proof. As a sum of absolute values, clearly ‖x‖1 ≥ 0. Moreover ‖x‖1 = 0 iff xi = 0for each xi (as we saw for absolute value) and hence iff x = 0 (since the zero vector inRn is the vector with 0 in every component). Note for c ∈ R we also have

‖cx‖1 = ‖(cx1, ..., cxn)‖1 =n∑i=1

|cxi| =n∑i=1

|c||xi| = |c|n∑i=1

|xi| = |c|‖x‖1.

And lastly

‖x+ y‖1 = ‖(x1, ..., xn) + (y1, ..., yn)‖1

= ‖(x1 + y1, ..., xn + yn)‖1

=n∑i=1

|xi + yi|

≤n∑i=1

|xi|+ |yi|

=n∑i=1

|xi|+n∑i=1

|yi|

= ‖x‖1 + ‖y‖1.

Let us also define

‖x‖n =

(m∑i=1

|xi|n)1/n

.

Proposition 2.4. ‖x‖n is a norm on Rm.

Proof. ‖x‖n ≥ 0 as it is a root of sums of positive numbers. Thus it is zero iff x = 0.We also have

‖cx‖nn =m∑i=1

|cxi|n =m∑i=1

|c|n|xi|n = |c|nm∑i=1

|xi|n = |c|n‖x‖nn

6

Page 7: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

and hence ‖cx‖n = |c|‖x‖n. The last part, ‖x + y‖n ≤ ‖x‖n + ‖y‖n, will be proved forn = 2 once we discuss the dot product.

In R, |x− y| satisfied additional properties.

Proposition 2.5. For x, y, z ∈ Rm we have

(a) ‖x− y‖n ≥ 0 and ‖x− y‖n = 0 iff x = y.

(b) ‖x− y‖n = ‖y − x‖n.

(c) ‖x− y‖n ≤ ‖x− z‖n + ‖z − y‖n.

Proof. (a) This again follows immediately from the fact that ‖ · ‖n is a norm. (b)We have

‖x− y‖n =

(m∑i=1

|xi − yi|n)1/n

=

(m∑i=1

|(−1)(yi − xi)|n)1/n

=

(m∑i=1

| − 1|n|yi − xi|n)1/n

=

(m∑i=1

|yi − xi|n)1/n

= ‖y − x‖n.

(c) And this similarly immediately follows:

‖x+ y‖n = ‖(x− z) + (z − y)‖n ≤ ‖x− z‖n + ‖z − y‖n.

2.3 Dot Product and Distance in Rn

Definition 2.6. The dot product is the function · : Rn × Rn → R defined by

x · y =m∑i=1

xiyi.

Note that we have ‖x‖2 =√x · x and hence

(x− y) · (x− y) = ‖x− y‖22

=n∑i=1

|xi − yi|2

=n∑i=1

(x2i + 2|xi||yi|+ y2

i )

= x · x+ 2x · y + y · y= ‖x‖2

2 + 2x · y + ‖y‖22.

7

Page 8: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Because of this relation of the dot product to the norm ‖ · ‖2, we will henceforth denote‖ · ‖2 simply by ‖ · ‖.

Theorem 2.7. (Cauchy-Schwarz Inequality) If x, y ∈ Rn, then

|x · y| ≤ ‖x‖‖y‖.

Proof. Suppose y = 0, then the result is trivial. Now let y 6= 0 and note that for anyt ∈ R we have

0 ≤ ‖x− ty‖2 = ‖x‖2 − 2t(x · y) + t2‖y‖2.

Let t = x · y/‖y‖2, then we obtain

0 ≤ ‖x‖2 − 2(x · y)2

‖y‖2+

(x · y)2‖y‖2

‖y‖4

= ‖x‖2 − 2(x · y)2

‖y‖2+

(x · y)2

‖y‖2

=‖x‖2‖y‖2 − (x · y)2

‖y‖2.

Thus 0 ≤ ‖x‖2‖y‖2 − (x · y)2 which implies (x · y)2 ≤ ‖x‖2‖y‖2 and hence gives us theresult.

Corollary 2.8. ‖ · ‖ is a norm.

Proof. We’ve already shown two conditions earlier for ‖ · ‖n. It remains to show that‖x+ y‖ ≤ ‖x‖+ ‖y‖.

‖x+ y‖2 = ‖x‖2 + 2x · y + ‖y‖2

≤ ‖x‖2 + 2‖x‖‖y‖+ ‖y‖2

= (‖x‖+ ‖y‖)2

and hence that ‖x+ y‖ ≤ ‖x‖+ ‖y‖.

Definition 2.9. Two vectors x, y ∈ Rn are orthogonal if x ·y = 0. We define the angleθ between x and y by the formula

cos θ =x · y‖x‖‖y‖

.

Hence x and y are orthogonal iff θ = π/2.

Corollary 2.10. (Law of Cosines)

‖x+ y‖2 = ‖x‖2 + 2‖x‖‖y‖ cos θ + ‖y‖2.

Corollary 2.11. (Pythagorean Theorem) If x and y are orthogonal, then

‖x− y‖2 = ‖x‖2 + ‖y‖2.

Since ei and ej are orthogonal for i 6= j in Rn, the Pythagorean theorem gives us theintuition for defining distance.

Definition 2.12. We define the distance between two vectors x, y ∈ Rn by

d(x, y) = ‖x− y‖.

8

Page 9: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

2.4 Hyperplanes

To define a hyperplane in Rn, we will want to know the direction n that it faces and onepoint p we want to be on the hyperplane. We’ll then add all points on line segmentscontaining p such that all points on the line also face in the direction of n.

Definition 2.13. We define the hyperplane of p with normal vector n 6= 0 by

Πn(p) = x ∈ Rn : (x− p) · n = 0.

That is, n is orthogonal to all points on the plane. It follows that points x =(x1, ..., , xm) on hyperplanes in Rm satisfy the equation

m∑i=1

ni(xi − pi) = 0

or equivalentlym∑i=1

nixi =m∑i=1

nipi = C

for a choice of n, p. Hence a hyperplane is a vector space iff C = 0, for in this case we’dhave

∑mi=1 nixi = 0 and hence

m∑i=1

ni(cxi) = cm∑i=1

nixi = c0 = 0.

So cx ∈ Πn(p). And if y is also in such a hyperplane, then

m∑i=1

ni(xi + yi) =m∑i=1

nixi +m∑i=1

niyi = 0 + 0 = 0.

So x+ y ∈ Πn(p). In this case (that is, when p · n = 0) we have

dimR(Πn(p)) = m− 1.

2.5 The Cross Product

For this section we will let V = R3.

Definition 2.14. We define the cross product as the function × : R3 × R3 → R3

defined by

x× y =

∣∣∣∣∣∣e1 e2 e3

x1 x2 x3

y1 y2 y3

∣∣∣∣∣∣ = (x2y3 − x3y2, x3y1 − x1y3, x1y2 − x2y1).

Proposition 2.15. For x, y, z ∈ R3 and c ∈ R we have

(a) x× x = 0

(b) x× y = −(y × x)

9

Page 10: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

(c) (cx)× y = x× (cy) = c(x× y)

(d) x× (y + z) = (x× y) + (x× z)

(e) (x× y) · z = x · (y × z)

(f) x× (y × z) = (x · z)y − (x · y)z

(g) ‖x× y‖2 = ‖x‖2‖y‖2 − (x · y)2

(h) If x× y 6= 0, then x× y is orthogonal to x and y.

Proof. (a)-(c) are obvious. (d) We have

x× (y + z) = (x2(y3 + z3)− x3(y2 + z2), x3(y1 + z1)− x1(y3 + z3), x1(y2 + z2)− x2(y1 + z1))

= (x2y3 − x3y2, x3y1 − x1y3, x1y2 − x2y1) + (x2z3 − x3z2, x3z1 − x1z3, x1z2 − x2z1)

= (x× y) + (x× z).

(e)

(x× y) · z = (x2y3 − x3y2)z1 + (x3y1 − x1y3)z2 + (x1y2 − x2y1)z3

= x1(y2z3 − y3z2) + x2(y3z1 − y1z3) + x3(y1z2 − y2z1)

= x · (y × z).

(f) Left to reader of course. (g)

‖x× y‖2 = (x× y) · (x× y)

= x · (y × (x× y))

= x · (‖y‖2x− (y · x)y)

= ‖x‖2‖y‖2 − x · ((y · x)y)

= ‖x‖2‖y‖2 − (x · y)2.

(h) Suppose x× y 6= 0, then

(x× y) · x = −(y × x) · x = −y · (x× x) = −y · 0 = 0.

A similar argument works for y.

Corollary 2.16.

x× (y × z) + z × (x× y) + y × (z × x) = 0.

Proof. If we let J denote the expression on the left, then

J = (x · z)y − (x · y)z + (z · y)x− (z · x)y + (y · x)z − (y · z)x = 0.

The above identity is called the Jacobi identity.

Corollary 2.17.‖x× y‖ = ‖x‖‖y‖ sin θ.

10

Page 11: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Proof.

‖x× y‖2 = ‖x‖2‖y‖2 − (x · y)2

= ‖x‖2‖y‖2 − (‖x‖‖y‖ cos θ)2

= ‖x‖2‖y‖2 − ‖x‖2‖y‖2 cos2 θ

= ‖x‖2‖y‖2 − ‖x‖2‖y‖2(1− sin2 θ)

= ‖x‖2‖y‖2 sin2 θ,

which yields the result. While this cross product is only defined on R3, we will later present another cross

product on R7 and explain why these are the only two dimensions with a nontrivial crossproduct comatible with magnitude.

11

Page 12: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

3 Limits and Continuity of Functions between Vec-

tor Spaces

In order to discuss limits and continuity on vector spaces, we need a notion of “closeness”.If our vector space has a norm ‖ · ‖, then we will say two vectors x and y are ε-close fora real number ε > 0 if there is some vector z ∈ V such that

x, y ∈ Bε(z) = v ∈ V : ‖z − v‖ < ε.

Limits can also be defined on more abstract vector spaces that only have a notion ofdistance (vector metric spaces)–or even more generally, that have a more abstract notionof closeness (topological vector spaces). But we will limit oursevles to vector spaces withnorms (normed spaces) whose field is R.

3.1 Limits

Definition 3.1. Let U and V be normed spaces with norms ‖·‖U and ‖·‖V respectively.Then for a function f : U → V, we say L is the limit of f as x→ a, denoted

limx→a

f(x) = L

for x, a ∈ U and L ∈ V, if for every ε > 0 there is a δ > 0 such that

‖x− a‖U < δ =⇒ ‖f(x)− L‖V < ε.

Proposition 3.2. Let f, g : U → V be functions between normed spaces such thatlimx→a f(x) and limx→a g(x) exist and c ∈ R. Then

(a) limx→a

(f(x) + g(x)) = limx→a

f(x) + limx→a

g(x).

(b) limx→a

(cf(x)) = c limx→a

f(x).

(c)∥∥∥limx→a

f(x)∥∥∥ = lim

x→a‖f(x)‖.

Proof. We omit (a) and (b) as their proofs are essentially the same as those fromcalculus. (c) Let L = limx→a f(x). Then for every ε > 0 there is a δL such that

‖x− a‖ < δL =⇒ ‖f(x)− L‖ < ε.

Hence we wish to show limx→a ‖f(x)‖ = ‖L‖. Let ε > 0 and let δ = δL. Then

‖x− a‖ < δL =⇒|‖f(x)‖ − ‖L‖|≤ ‖f(x)− L‖< ε,

which gives us the result.

12

Page 13: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Proposition 3.3. Let f, g : Rn → Rm, then

limx→a

(f(x) · g(x)) =(

limx→a

f(x))·(

limx→a

g(x)).

Also if f, g : R3 → R3, then

limx→a

(f(x)× g(x)) =(

limx→a

f(x))×(

limx→a

g(x)).

Proof. Let f(x) = (f(x)1, ..., f(x)m) and g(x) = (g(x)1, ..., g(x)m). Then

limx→a

(f(x) · g(x)) = limx→a

(m∑i=1

f(x)ig(x)i

)

=m∑i=1

limx→a

(f(x)ig(x)i)

=m∑i=1

(limx→a

f(x)i

)(limx→a

g(x)i

)=(

limx→a

f(x))·(

limx→a

g(x)).

The latter claim follows from the fact that

limx→a

f(x) =(

limx→a

f(x)1, limx→a

f(x)2, limx→a

f(x)3

).

We also have a claim regarding composition of functions.

Proposition 3.4. Let f : U → V and g : V → W such that limx→a f(x) = L andlimy→L g(y) = M. Then

limx→a

((g f)(x)) = limy→L

g(y) = M.

Proof. Let ε > 0, then there is a δ1 > 0 such that

‖x− a‖ < δ1 =⇒ ‖f(x)− L‖ < δ2

where δ2 is such that

‖f(x)− L‖ < δ2 =⇒ ‖g(f(x))−M‖ < ε.

Hence‖x− a‖ < δ1 =⇒ ‖g(f(x))−M‖ < ε.

Proposition 3.5. Let f : Rn → Rm. Then limx→a f(x) exists iff limxi→ai f(x) exists forall 1 ≤ i ≤ n. Moreover if limx→a f(x) exists, then

limx→a

f(x) = limx1→a1

limx2→a2

· · · limxn→an

f(x),

and the limits are interchangable.

13

Page 14: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

3.2 Continuity

Definition 3.6. Let f : U → V be a function between normed spaces and Ω ⊆ U. f iscontinuous on Ω iff for all a ∈ Ω

limx→a

f(x) = f(a).

In other words f is continuous on Ω iff for every ε > 0 and a ∈ Ω, there is a δa suchthat

‖x− a‖U < δa =⇒ ‖f(x)− f(a)‖V < ε.

That is, the limits of f exist at all points a ∈ Ω and are equal to f(a).

Proposition 3.7. Let f, g : U → V be continuous and h : V → W be continuous onran f. Then

(a) f + g is continuous on U.

(b) cf is continuous on U for c ∈ R.

(c) fg is continuous on U.

(d) h f is continuous on U.

We leave the proof to the reader.

14

Page 15: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

4 Differentiation

4.1 Differentiation in Vector Spaces

Definition 4.1. Let f : U → V be a function between normed spaces. We say that fis differentiable at x ∈ U if ‖Dfx(h)‖V /‖h‖U <∞ and

limh→0

‖f(x+ h)− f(x)−Dfx(h)‖V‖h‖U

= 0

for a linear function Dfx : U → V. f is differentiable on U if it is differentiable at everyx ∈ U (that is, for every x ∈ U, there is a Dfx satisfying the above limit). In this casewe call Df the derivative (or Frechet derivative) of f.

That is, f is differentiable at x if for every ε > 0, there is a δx > 0 such that

‖h‖U < δx =⇒ ‖f(x+ h)− f(x)−Dfx(h)‖V‖h‖U

< ε,

since the fraction given is a function Fx : U → R with

Fx(h) =‖f(x+ h)− f(x)−Dfx(h)‖V

‖h‖Uand we are looking at limh→0 Fx(h).

Proposition 4.2. Let f : U → V be differentiable at a ∈ U, then f is continuous at a.

Proof. Let ε > 0, then we wish to find a δ > 0 such that limx→a f(x) = f(a). Inother words, we want that

‖x− a‖U < δ =⇒ ‖f(x)− f(a)‖V < ε.

Note that since f is differentiable, f(a) is defined. We also have a δa for any ε′such that

‖h‖U < δa =⇒ ‖f(a+ h)− f(a)−Dfa(h)‖V‖h‖U

< ε′.

Or equivalently, we get that

‖f(a+ h)− f(a)−Dfa(h)‖V < ε′‖h‖U < ε′δa.

If ε′ = 1, then note that we have

‖f(a+ h)− f(a)−Dfa(h)‖V < ‖h‖Uand hence by the triangle inequality

‖f(a+ h)− f(a)‖V ≤ ‖Dfx(h)‖V + ‖h‖U ≤ ‖Dfa‖‖h‖U + ‖h‖U = ‖h‖U(‖Dfa‖+ 1)

where ‖Dfa‖ = sup‖Dfa((h))‖U/‖h‖V . Since f is differentiable, ‖Dfa‖ < ∞. Thuslimh→0 f(a+ h) = f(a). Hence if h = x− a and δ = ε/(‖Dfa‖+ 1) we obtain

‖x− a‖U = ‖h‖U < δ =⇒‖f(x)− f(a)‖V= ‖f(a+ h)− f(a)‖V≤ ‖h‖U(‖Dfa‖+ 1)

<

‖Dfa‖+ 1

)(‖Dfa‖+ 1)

= ε.

So f is continuous at a. Some standard properties follow, whose proofs we leave to the reader.

15

Page 16: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Proposition 4.3. Let f, g : U → V be differentiable on U and h : V → W bedifferentiable on ran f with derivatives Df,Dg, and Dh respectively. Then

(i) cf is differentiable, and D(cf) = cDf.

(ii) f + g is differentiable, and D(f + g) = Df +Dg.

(iii) (Chain Rule) h f is differentiable, and D(h f) = D(h(f))D(f) (where theproduct is matrix multiplication).

4.2 Differentiation in Rn

Let f : Rn → R. Then the derivative Df of f (if it exists) would satisfy

limh→0

|f(x+ h)− f(x)−Dfx(h)|‖h‖

= 0

for every x ∈ Rn.

Definition 4.4. Let f : Rn → R. We define the ith partial derivative of f (denotedDif, fxi , or ∂f

∂xi: Rn → R) by

∂f

∂xi(x) = lim

h→0

f(x1, ..., xi + h, ..., xn)− f(x1, ..., xn)

h

if the limit exists.

We will often write ∂f/∂xi and omit the (x).

Proposition 4.5. Let f : Rn → Rm be differentiable with f(x) = (f1(x), ..., fm(x)).Then Difj exists for each 1 ≤ i ≤ n and 1 ≤ j ≤ m.

Proof. Let h = tei with t > 0 (without loss of generality). Then since f is differen-tiable (and hence each fj) and ‖tei‖ = t‖ei‖ = t, we have

limh→0

fj(x+ h)− fj(x)−Dfj(tei)‖tei‖

= limt→0+

fj(x+ tei)− fj(x)

t−Dfj(ei) = 0

and hence Dfj(ei) = Difj.

Corollary 4.6. If f : Rn → Rm is differentiable, then its derivative Df (called theJacobian derivative of f) is the function defined by

Dfx =

∂f1∂x1

(x) · · · ∂f1∂xn

(x)...

. . ....

∂fm∂x1

(x) · · · ∂fm∂xn

(x)

Corollary 4.7. If f : Rn → R, then its derivative, called the gradient, is given by

∇fx =

(∂f

∂x1

(x), · · · , ∂f∂xn

(x)

).

16

Page 17: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Note, often in calculus/analysis texts, Dfx and ∇fx are written as Df(x) and ∇f(x).We do not use this notation since we do not mean the matrix multiplication of Df andx or ∇f and x. Contrastingly in the definition of the derivative we had Dfx(h) where wedid mean the matrix multiplication of Dfx and the vector h. Correspondingly, evaluationof Dfx on a vector h ∈ Rn is defined by matrix multiplication (or in the case of thegradient, via the dot product). We will henceforth leave out the (x) for convenience.

Theorem 4.8. If f : U → Rm with U ⊆ Rn is a function such that Difj exists and iscontinuous at a for all 1 ≤ i ≤ n and 1 ≤ j ≤ m, then f is differentiable at a.

Let us use the notation

Dijf =∂2f

∂xi∂xj=

∂xi

(∂f

∂xj

).

Theorem 4.9. If Dijf and Djif exist and are continuous, then Dijf = Djif.

Proposition 4.10. If f, g : Rn → Rm are differentiable, then

D(f · g) = D(f)g + fD(g)

where the products are matrix multiplication.If f, g : Rn → R are differentiable, then

∇(fg) = f∇g + g∇f.

And if f, g : R3 → R3 are differentiable, then

D(f × g) = f ×D(g) +D(f)× g.

Definition 4.11. Let f : Rn → R be differentiable and v ∈ Rn. We define the direc-tional derivative of f in the direction of v as

∇vf = ∇f · v

‖v‖.

Proposition 4.12. Let u = v/‖v‖. The directional derivative ∇vf satisfies

∇vf = limt→0

f(x+ tu)− f(x)

t.

Hence

∇eif =∂f

∂xi.

4.3 Differentials and Tangent Planes in Rn

Let f : Rn → R. We should want a differential of f, denoted df, to be a real valuerepresenting the extent to which the function is changing. In the case for n = 1, we have

df = f ′ dx = Df dx.

To still get a real value in n dimensions, we change the product to dot product.

17

Page 18: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Definition 4.13. Let f : Rn → R. Then we define the differential of f as

df = Df · dx = ∇f · dx =n∑i=1

∂f

∂xidxi.

Similarly, if f : Rn → Rm, we define its differential via matrix multiplication as the vector

df = Df dx =

∂f1∂x1

· · · ∂f1∂xn

.... . .

...∂fm∂x1

· · · ∂fm∂xn

dx1

...dxn

=

df1...dfm

.

Recall a hyperplane Π is given by the equation

n · (x− p) = 0

for a point p ∈ Π and a vector n orthogonal to points on the plane. Suppose we wish tofind a hyperplane tangent to f at point p for a differentiable function f. Then we needonly to find a normal vector n. In the case when f : R → R, we used the point-slopemethod to obtain the tangent plane (which was a line) through the point (p, f(p)) thatalso went through (a, f(a)) defined by

f(x)− f(a) = f ′(p)(x− a).

In this case, if we place the function in R2 so that f = (x, f(x)), we had a tangent lineat the point (p, f(p)). Hence

0 = n · ((x, f(x))− (a, f(a))) = (n1, n2) · ((x− a), (f(x)− f(a))).

Thus we had n = (f ′(p),−1). Analogously in Rn, consider the differential approximationof a differentiable function f : Rn → R:

∆f = f(x)− f(a) =n∑i=1

∂f

∂xi(p)(xi − ai) =

n∑i=1

∂f

∂xi(p)∆xi.

Hence, placing the function into Rn+1 so that our second vector becomesA = (a1, ..., an, f(a))and the function becomes X = (x1, ..., xn, f(x)), we would want

0 = n · (X − A) =n+1∑i=1

ni(Xi − Ai) = −(f(x)− f(a)) +n∑i=1

ni(xi − ai).

Hence

n =

(∂f

∂x1

(p), ...,∂f

∂xn(p),−1

).

So, this characterizes an “(n−1)-dimensional” tangent hyperplane (with n+1 coordinates,but note that having n−1 of the evaluations of the partial derivatives at p puts a constrainton the last one) in Rn while viewing f as an “n-dimensional” subset of Rn+1 as all pointsof the form (x1, ..., xn, f(x)) (similarly, choosing the n xi’s constrains f(x)–hence “n-dimensional”).

The construction of a tangent hyperplane is the answer to the question: “Given adifferentiable function f : Rn → R and two points p ∈ Rn and a ∈ Rn+1, can one

18

Page 19: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

construct a hyperplane containing the points (p1, ..., pn, f(p)) and a?” A weaker questionis: “Given a differentiable function f : Rn → R and a point a ∈ Rn+1, can one find apoint p ∈ Rn such that there is a tangent hyperplane of f at p passing through a?” Theanswer to this is also yes with some slight assumptions.

For two points a, b ∈ Rn, we define the line segment between them as

L(a, b) = (1− t)a+ tb : t ∈ [0, 1].

Theorem 4.14. (Mean Value Theorem) Let U ⊆ Rn, f : U → R be differentiable,a ∈ U, and L(x, a) ⊆ U. Then there exists a p ∈ L(x, a) such that

f(x)− f(a) = ∇f(p) · (x− a).

4.4 Optimization

Optimization is the study of critical points of differentiable functions. Like the differentialof a function, the notion of a critical point of a function doesn’t quite generalize to anarbitrary differentiable function between vector spaces. We in turn only consider thenotion of critical points for differentiable functions f : Rn → Rm.

For motivation, if f : R→ R is differentiable and f ′(x) = 0 for some x ∈ R, then wecall x a critical point. For functions f : Rn → R, if ∇f(x) = 0 for some x ∈ Rn, we willsay the same of x. But for f : Rn → Rm, we slightly weaken our requirements regardingthe Jacobian derivative Df.

Definition 4.15. Let f : Rn → Rm be a linear function. We define the rank of f by

rank f = dimR(im f).

Definition 4.16. Let f : Rn → Rm be differentiable. x ∈ Rn is a critical point of f if

rankDf(x) < m.

A point which is not a critical point is called a regular point.

So for m = 1, we would need rank∇f = 0, and hence that ∇f = 0.

Definition 4.17. Let f : Rn → R. A point p ∈ Rn is a

(a) local minimum of f if there is some r > 0 such that f(p) ≤ f(x) for allx ∈ Br(p);

(b) local maximum of f if there is some r > 0 such that f(p) ≥ f(x) for allx ∈ Br(p);

(c) local extremum of f if it is either a local minimum or local maximum of f.

Proposition 4.18. If p is a local extremum of a differentiable f, then ∇f(p) = 0 (andhence p is a critical point).

Definition 4.19. If f : Rn → Rm, then x is a local (min/max/extreme) of f if itis a local (min/max/extreme) of fi for some 1 ≤ i ≤ m. A critical point which is not alocal extremum is called a saddle point.

19

Page 20: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Theorem 4.20. (Second Derivatives Test) Let x be a critical point of a twicedifferentiable function f : Rn → Rm.

(a) If all of the eigenvalues of D2f(x) are negative, then x is a local maximum.

(b) If all of the eigenvalues of D2f(x) are positive, then x is a local minimum.

What if we wish to find critical points of f : Rn → R subject to a contraint g(x) = 0?As we have seen, normal vectors for tangent hyperplanes are characterized by the gradient,so a tangent hyperplane at a critical point is characterized by the gradient at that point(which would be 0 since it’s a critical point). In this case, the “critical points” of fsubject to g may not actually be critical points of f, but they will be characterized by∇f being normal to g. In other words, we are looking for points such that ∇f and ∇gare parallel. So the equation

∇f + λ∇g = 0

should be satisfied for some scalar λ. If we had multiple contraints gi(x) = 0 for 1 ≤ i ≤ k,then we would similarly want ∇f to be simultaneously parallel to each of the gradients∇gi. Thus the relative extrema (or extrema of f given gi) are the points thatsatisfy

∇f(x) +k∑i=1

λi∇gi(x) = 0

where the λi’s are called the Langrange multipliers of the constraints.

20

Page 21: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

5 Integration

5.1 Integration in Vector Spaces?

Can we define a notion of integration in general vector spaces? It turns out that if f : U →V is a function between vector spaces with U and V normed and V “complete” (Cauchysequences have limits), then there is a natural way to define the integral of f (if it exists).It ends up being nothing intuitive and is correspondingly not particularly interestingto examine. It requires some knowledge of measure theory: U has an induced measurespace structure with d-dimensional Hausdorff measure and Caratheodory-measurable setsas the σ-algebra, and the completion of V allows one to define limits of step functions...butenough of that. Since these notes do not require measure theory as a prerequisite, we inturn restrict ourselves to real vector spaces.

5.2 Integration in Rn

Recall that for a function f : R→ R, we define the integral of f over the interval [a, b](or equivalently (a, b), (a, b] or [a, b)) by∫ b

a

f(x) dx = limn→∞

n∑i=1

f(ti)∆xi

where ∆xi = |xi − xi−1|, x0 = a, xn = b, and ti ∈ [xi−1, xi]. The limit assumes that wemake further subdivisions of the interval [a, b] as n→∞—where in each successive step,xi might be different than before.

To amend this, one can discuss partitions of the interval [a, b] as sets P consisting ofpoints in [a, b] (the xi’s). One can then say a partition P ′ is finer than P if P ⊆ P ′ (i.e.that P ′ has all the same points as P and possibly some more). We can define a norm ona partition P = x1, ..., xn−1, xn (with x0 = a and xn = n) by

‖P‖ = maxi≤n

∆xi = maxi≤n|xi − xi−1|.

Then if Pn is a sequence of partitions of [a, b] such that Pn+1 is finer than Pn for all nand limn→∞ ‖Pn‖ = 0, we can then define the integral by∫ b

a

f(x) dx = limn→∞

∑xi∈Pn

f(ti)∆xi

where ti ∈ [xi−1, xi]. In this case, one can show that the above definition does not dependupon the ti’s chosen in each subinterval.

Now what if f : R2 → R? If we wanted to continue with the same reasoning, wemight want to look at sums of the form∑

i

f(ti)∆Ai

where Ai is a rectangle in R2, ∆Ai is the area of the rectangle, and ti is a point in therectangle. It turns out this is the correct way to define the integral. Now since ti ∈ R2,

21

Page 22: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

we can write it as ti = (t1i , t2i) for t1i , t2i ∈ R. Similarly since Ai is a rectangle in R2, itsarea can be written as product |xi − xi−1||yi − yi−1|. Hence our sum looks like∑

i

f(ti)∆Ai =∑i

f(t1i , t2i)|xi − xi−1||yi − yi−1| =∑i

f(t1i , t2i)∆xi ∆yi.

Hence we are essentially just subdividing the x axis and y axis, or say, intervals on themsuch as [a, b] and [c, d], with the intent of letting the number of rectangles go to infinity(while their areas go to 0). Should we require [a, b] and [c, d] to have the same numberof subdivisions at each stage? It turns out for that most ordinary circumstances, it willnot matter. Hence we may index the subdivisions of the [a, b] and [c, d] axes separatelyas follows: ∑

j

∑i

f(t1i , t2j)∆xi ∆yj =∑j

(∑i

f(t1i , t2j)∆xi

)∆yj

and correspondingly sum over one and then the other. So we want a sequence of partitionsPn of [a, b] and a sequence of partitions P ′m of [c, d] such that limn→∞ ‖Pn‖ = 0 andlimm→∞ ‖P ′m‖ = 0. We then define the integral of f over the rectangle [a, b]× [c, d] by∫ d

c

∫ b

a

f(x, y) dx dy =

∫ d

c

(∫ b

a

f(x, y) dx

)dy = lim

m→∞

∑yj∈P ′m

(limn→∞

∑xi∈Pn

f(t1i , t2j) ∆xi

)∆yj.

Theorem 5.1. (Fubini’s Theorem) Let f : R2 → R and R = [a, b] × [c, d] be arectangle in R2 such that f(x, ·) : R → R is integrable on [c, d] and f(·, y) : R → R is

integrable on [a, b] for all x, y ∈ R. If∫ dc

(∫ baf(x, y) dx) dy exists and is finite, then∫∫

R

f(x, y) dA :=

∫ d

c

(∫ b

a

f(x, y) dx

)dy =

∫ b

a

(∫ d

c

f(x, y) dy

)dx.

By induction, this theorem says that if f : Rn → R is a function and we integrateon an n-dimensional rectangle such that any of the one-dimensional partial functions isintegrable, then if any iteration of the integrals exists and is finite, it is equal to all otheriterations of the integral. We can thus define the integral of f : Rn → R on a rectangle.

Definition 5.2. Let f : Rn → R and R = Xni=1[ai, bi] be an n-dimensional rectangle. Ifeach f(x1, ..., ·i, ..., xn) is integrable on [ai, bi], then we define∫

· · ·∫R

f(x1, ..., xn) dV =

∫ b1

a1

· · ·∫ bn

an

f(x1, ..., xn) dx1 · · · dxn.

If R = Xni=1[ai, bi], then we equate the following notation:∫R

f dx =

∫· · ·∫R

f dV =

∫ b1

a1

· · ·∫ bn

an

f dx1 · · · dxn.

Like derivatives, if f : Rn → Rm with f(x) = (f1(x), ..., fm(x)) for x ∈ Rn and R =Xni=1[ai, bi], we define ∫

R

f dx =

(∫R

f1 dx, ...,

∫R

fm dx

).

What if we wish to integrate over subsets of Rn that aren’t rectangles? This requiresanother definition of the integral, which turns out to include that one we previously had.

22

Page 23: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Say we want to integrate over a subset X ⊆ Rn, we will have to make an assumptionabout X that

supS⊆X

P (S)

k∑i=1

|Ri| = infX⊆S′P (S′)

m∑j=1

|S ′j|

where the supremum (or infimum) is taken over all subsets S (or S ′) contained in (orcontaining) X and partitions of those rectangles (and Ri denotes the ith rectangle in apartition P (S) of S). In this case, we call X a Jordan region and denote the commonevalue by |X|, and call it the volume of X. Let us define the norm of an n-dimensionalpartition by

‖P (S)‖ = maxi|Ri|.

That is, the norm of the partition is the volume of the largest rectangle in the partition.

Definition 5.3. Let f : Rn → R and P (R)n and P (R′)n be a sequence of partitionsof R and R′ respectively that are contained in or contain X respectively with X ⊆Rn. Also suppose P (R)n ⊆ P (R)n+1 and P (R′)n ⊆ P (R′)n+1 for all n and that both‖P (R)n‖, ‖P (R′)n‖ → 0 as n→∞. Then we define the integral of f over X ⊆ Rn, by

∫X

f dx = supS⊆X

limn→∞

∑Ri∈P (S)n

|Ri|

= infX⊆S′

limn→∞

∑R′j∈P (S′)n

|R′j|

5.3 Change of Variables

Theorem 5.4. (Change of Variables) Let Ω ⊆ Rn and φ : Ω → Rn be an injectivecontinuously differentiable function such that |Dφ| 6= 0. If X ⊆ Ω is a Jordan region andf : Rn → R with f φ integrable on Ω and f integrable on φ(Ω), then∫

φ(Ω)

f(φ(x)) dφ(x) =

∫Ω

(f φ)(x)| detDφ(x)| dx.

When f : R→ R, this is simply the u-substitution rule.

Example 5.5. (Polar Coordinates in R2) Define φ : R+ × [0, 2π)→ R2 (where R+

is the nonnegative reals) by

φ(r, θ) = (r cos θ, r sin θ).

Then it is clear that this map is continuously differentiable and injective. The magnitudeof the determinant of the Jacobian is

| detDφ| =∣∣∣∣∂x∂r ∂x

∂θ∂y∂r

∂y∂θ

∣∣∣∣ =

∣∣∣∣cos θ −r sin θsin θ r cos θ

∣∣∣∣ = |r cos2 θ + r sin2 θ| = r.

Hence ∫R2

f(x, y) dx dy =

∫ 2π

0

∫ ∞0

f(r, θ) r dr dθ

23

Page 24: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Example 5.6. (Spherical Coordinates in R3) Define φ : R+ × [0, 2π)× [0, π]→ R3

byφ(ρ, θ, ϕ) = (ρ sinϕ cos θ, ρ sinϕ sin θ, ρ cosϕ).

This map is also clearly continuously differentiable and injective. We also have

| detDφ| =

∣∣∣∣∣∣∣∂x∂ρ

∂x∂θ

∂x∂ϕ

∂y∂ρ

∂y∂θ

∂y∂ϕ

∂z∂ρ

∂z∂θ

∂z∂ϕ

∣∣∣∣∣∣∣=

∣∣∣∣∣∣sinϕ cos θ −ρ sinϕ sin θ ρ cosϕ cos θsinϕ sin θ ρ sinϕ cos θ ρ cosϕ sin θ

cosϕ 0 −ρ sinϕ

∣∣∣∣∣∣= ρ2 sinϕ.

Thus ∫R3

f(x, y, z) dx dy dz =

∫ π

0

∫ 2π

0

∫ ∞0

f(ρ, θ, ϕ) ρ2 sinϕdρ dθ dϕ.

5.4 Subordinate Integration

We discussed integrating a function f : Rn → R over X ⊆ Rn. Suppose either that Xis a subspace of Rn with dimR(X) < n or that all elements x ∈ X have at least onecomponent the same (that is, for all x, y ∈ X, xi = yi for some i). Let φ : X → Rn be theidentity map, which is certainly continuously differentiable and injective. Let us supposeeach element of X has the form (x1, ..., xn−1, c), then φ(x) = (f1(x), ..., fn−1(x), fn(x))where fi(x) = xi for 1 ≤ i < n and fn(x) = c. Hence

∂fi∂xj

=

1 if i = j 6= n0 if i 6= j or i = j = n

.

That is,

detDφ =

∣∣∣∣∣∣∣∣∣∣∣

1 0 · · · 00 1 0 · · · 0...

. . ....

0 · · · 1 00 · · · 0 0

∣∣∣∣∣∣∣∣∣∣∣= 0.

Hence if f : Rn → R and k < n, the change of variables theorem gives us that∫Rk

f dx = 0.

This may seem to suggest that there is no nice way of integrating of subsets of strictlyless “dimension”. Rather than integrating in the classical way (in which case we essen-tially get hypervolumes being 0 since, say, one of the components is constant and hencerectangles have 0 length in that component), we will come up with alternate methods forsubordinate integration.

Let γ : [0, 1] → Rn be a continuously differentiable and injective map (which we willcall a path) and f : Rn → R. We ask the question: what is the integral of f along γ?Change of variables would give us:∫

γ

f dγ :=

∫γ([0,1])

f(x) dγ(x) =

∫ 1

0

f(γ(t))| detDγ| dt.

24

Page 25: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

But detDγ doesn’t make sense in this case since

Dγ = (γ′1(t), ..., γ′n(t)) .

It turns out that ‖Dγ‖ works as a substitute for | detDγ|. Intuitively in Rn we have aninfinitesimal Pythagorean theorem:

dγ2 = dγ21 + · · ·+ dγ2

n,

which would yield∫γ

f dγ =

∫ 1

0

f

√(dγ1

dt

)2

+ · · ·+(dγndt

)2

dt =

∫ 1

0

f ‖Dγ‖ dt.

Now suppose instead that f : Rn → Rn, then we can define∫γ

f · dγ =

∫γ

(f1(x), ..., fn(x)) · (dγ1, ..., dγn)

=

∫γ

f1(x) dγ1 + · · ·+∫γ

fn(x) dγn

=

∫ 1

0

f1(γ(t)) γ′1(t) dt+ · · ·+∫ 1

0

fn(γ(t)) γ′n(t) dt

=

∫ 1

0

(f1(γ(t)), ..., fn(γ(t))) · (γ′1(t), ..., γ′n(t)) dt

=

∫ 1

0

f(γ(t)) · γ′(t) dt.

The above two definitions (for f : R → R and f : Rn → Rn) of integration with respectto a path γ are called line integrals. What if want to integrate with respect to a surfaceinstead of a line? Instead of γ : [0, 1]→ Rn, suppose we have a continuously differentiableS : X → R where X ⊆ Rn. We can call this a surface in Rn+1. We want to define∫

S

f dS

with f : Rn+1 → R. The area of a polytope spanned by n vectors V = v1, ..., vn is‖ detA(V )‖ where A(V ) is the (n + 1) × (n + 1) matrix whose first row consists of thebasis vectors ei and whose i+ 1 row is the vector A(vi). This vector corresponds to theparallelogram spanned by vi in the ith and n + 1 image dimension. That is, in a smallpatch of the image of S we have

A(vi) = ∆xiei +∂S

∂xi∆xien+1.

Thus

det(A(V )) =

∣∣∣∣∣∣∣∣∣∣∣∣∣∣

e1 · · · ei · · · en+1

∆x1 · · · 0 · · · ∂S∂x1

∆x1

.... . .

...0 · · · ∆xi · · · ∂S

∂xi∆xi

.... . .

...0 · · · ∆xn

∂S∂xn

∆xn

∣∣∣∣∣∣∣∣∣∣∣∣∣∣=

(− ∂S∂x1

∆x1 · · ·∆xn, ...,−∂S

∂xn∆x1 · · ·∆xn,∆x1 · · ·∆xn

).

25

Page 26: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Thus

∆S = ‖ detA(V )‖

=

√(∂S

∂x1

∆x1 · · ·∆xn)2

+ · · ·+(∂S

∂xn∆x1 · · ·∆xn

)2

+ (∆x1 · · ·∆xn)2

=

√(∂S

∂x1

)2

+ · · ·+(∂S

∂xn

)2

+ 1 ∆x1 · · ·∆xn.

Or in the limit:

dS =

√(∂S

∂x1

)2

+ · · ·+(∂S

∂xn

)2

+ 1 dx1 · · · dxn.

Hence for f : Rn+1 → R and a surface S : Rn → R, we define the surface integral of fwith respect to S by

∫S

f(x1, ..., xn+1) dS =

∫S

f(x1, ..., xn, S(x1, ..., xn))

√(∂S

∂x1

)2

+ · · ·+(∂S

∂xn

)2

+ 1 dx1 · · · dxn.

If instead we have that f : Rn+1 → Rn+1, we can define its surface integral with respectto an oriented surface S (meaning it has a normal vector that changes continuously alongS) as ∫

S

f · dS =

∫S

(f · n) dS

where n is a unit normal vector to S. Since S : Rn → R, suppose we consider the zerofunction

Σ(x1, ..., xn, S(x1, ..., xn)) = Σ(x1, ..., xn, t) = t− S(x1, ..., xn) = 0.

The gradient of this function can be written

∇Σ =

(− ∂S∂x1

, ...,− ∂S

∂xn, 1

).

Also

∇Σ · (x1, ..., xn, S(x1, ..., xn)) = − ∂S∂x1

x1 − · · · −∂S

∂xnxn + S(x1, ..., xn) = c

for some constant c (since applying d to both sides must give us a 0 on the right, bycomputing the differential of S). But since Σ = 0, we must have c = 0. So ∇Σ is normalto the surface. Hence we can define the unit normal vector

n =∇Σ

‖∇Σ‖=

(− ∂S∂x1, ..., ∂S

∂xn, 1)

√(∂S∂x1

)2

+ · · ·+(∂S∂xn

)2

+ 1

.

26

Page 27: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Hence we obtain∫S

f · dS =

∫S

(f · n) dS

=

∫S

f ·

(− ∂S∂x1, ..., ∂S

∂xn, 1)

√(∂S∂x1

)2

+ · · ·+(∂S∂xn

)2

+ 1

√(∂S

∂x1

)2

+ · · ·+(∂S

∂xn

)2

+ 1 dx1 · · · dxn

=

∫S

(f1, ..., fn+1) ·(− ∂S∂x1

, ...,∂S

∂xn, 1

)dx1 · · · dxn

=

∫S

(−f1

∂S

∂x1

− · · · − fn∂S

∂xn+ fn+1

)dx1 · · · dxn

where each fi has the change of variables in its arguments: fi(x1, ..., xn, S(x1, ..., xn)).

Theorem 5.7. (Fundamental Theorem for Line Integrals) Let γ : [a, b]→ Rn bea path (continuously differentiable) and f : Rn → R be differentiable. Then∫

γ

∇f · dγ = f(γ(b))− f(γ(a)).

Proof. By defition we have∫γ

∇f · dγ =

∫ b

a

∇f(γ(t)) · γ′(t) dt

=

∫ b

a

(∂f

∂γ1

, ...,∂f

∂γn

)·(dγ1

dt, ...,

dγndt

)dt

=

∫ b

a

(∂f

∂γ1

dγ1

dt+ · · ·+ ∂f

∂γn

dγndt

)dt

=

∫ b

a

df(γ(t))

dtdt

= f(γ(b))− f(γ(a)).

27

Page 28: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

6 Vector Calculus in R3

6.1 Gradient, Curl, and Divergence

Recall that for f : Rn → R, we have the gradient ∇f : Rn → Rn defined by

∇f(x) =

(∂f

∂x1

(x), ...,∂f

∂xn(x)

).

Definition 6.1. Let f : Rn → R. the divergence of f, denoted div f or ∇ · f, is a mapfrom Rn to R defined by

div f(x) = ∇ · f(x) =∂f1

∂x1

(x) + · · ·+ ∂fn∂xn

(x).

We also had the notion of a cross product of two vectors in R3.

Definition 6.2. Let f : R3 → R3. The curl (or vorticity) of f, denoted curl f or ∇×f,is a map from R3 → R3 defined by

curl f = ∇× f =

∣∣∣∣∣∣e1 e2 e3∂∂x1

∂∂x2

∂∂x3

f1 f2 f3

∣∣∣∣∣∣ =

(∂f3

∂x2

− ∂f2

∂x3

,∂f1

∂x3

− ∂f3

∂x1

,∂f2

∂x1

− ∂f1

∂x2

).

Since we will be restricting ourselves to R3 in this section, we will simplify our notationwith x1 = x, x2 = y, and x3 = z:

∇f =

(∂f

∂x,∂f

∂y,∂f

∂z

)= (fx, fy, fz)

curl f =

(∂f3

∂y− ∂f2

∂z,∂f1

∂z− ∂f3

∂x,∂f2

∂x− ∂f1

∂y

)div f =

∂f1

∂x+∂f2

∂y+∂f3

∂z.

Proposition 6.3.curl∇f = 0.

Proof.

curl∇f = curl (fx, fy, fz)

=

∣∣∣∣∣∣e1 e2 e3∂∂x

∂∂y

∂∂z

fx fy fz

∣∣∣∣∣∣= (fzy − fyz, fxz − fzx, fyx − fxy)= 0.

28

Page 29: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Proposition 6.4.div curl f = 0.

Proof.

div curl f = div

(∂f3

∂y− ∂f2

∂z,∂f1

∂z− ∂f3

∂x,∂f2

∂x− ∂f1

∂y

)=

∂x

(∂f3

∂y− ∂f2

∂z

)+

∂y

(∂f1

∂z− ∂f3

∂x

)+

∂z

(∂f2

∂x− ∂f1

∂y

)= 0.

If curl f = 0, we say that f is irrotational (or conservative). If div f = 0, we saythat f is incompressible. We will also sometimes refer to functions f : R3 → R asscalar fields and functions f : R3 → R3 as vector fields. Now if C∞(X, Y ) denotes thesmooth functions from X to Y with X, Y subsets of Rn, then we have a chain of maps

C∞(R3,R)grad−→ C∞(R3,R3)

curl−→ C∞(R3,R3)div−→ C∞(R3,R)

such that any double composition in the chain is 0 (i.e. curl of grad, or div or curl).Such a chain is called a complex. Of interest might be the vector fields f : R3 → R3

that fall into two categories: (1) irrotational ones that aren’t gradients of a scalar field,and (2) incompressible ones that aren’t the curl of a vector field. Although it turns outthat all irrotational ones are the gradient of a scalar field, and all incompressible ones arethe curl of a vector field. These theorems follow from the fundamental theorem for lineintegrals, Stokes’ theorem, and Gauss’ theorem (that latter two of which appear in thenext section).

We will also use the following notation:

∇2f = ∇ · ∇f = div (grad f) =∂2f

∂x2+∂2f

∂y2+∂2f

∂z2

for a scalar field f, and∇2f =

(∇2f1,∇2f2,∇2f3

)for a vector field f. In each case, we call this the Laplacian of f.

Proposition 6.5.curl (curl f) = ∇(div f)−∇2f.

Proof. Let P = f3y−f2z , Q = f1z−f3x , and R = f2x−f1y so that curl f = (P,Q,R).

29

Page 30: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Then

curl (curl f) =

∣∣∣∣∣∣e1 e2 e3∂∂x

∂∂y

∂∂z

P Q R

∣∣∣∣∣∣= (Ry −Qz, Pz −Rx, Qx − Py)=((f2xy − f1yy)− (f1zz − f3xz), (f3yz − f2zz)− (f2xx − f1yx),

(f1zx − f3xx)− (f3yy − f2zy))

=(f2xy + f3xz − (f1yy + f1zz), f3yz + f1yx − (f2xx + f2zz), f1zx + f2zy − (f3xx + f3yy)

)=(f2xy + f3xz + f1xx , f3yz + f1yx + f2yy , f1zx + f2zy + f3zz

)−∇2f

=

(∂

∂x(f1x + f2y + f3z),

∂y(f1x + f2y + f3z),

∂z(f1x + f2y + f3z)

)−∇2f

= ∇(f1x + f2y + f3z)−∇2f

= ∇(div f)−∇2f.

We will denote the above by curl2f.

6.2 Main Theorems

The theorems in this section generalize the fundamental theorem of calculus. In moreadvanced mathematics courses, there are even more general versions of the fundamentaltheorem of calculus:

Radon-Nikodym Theorem (Measure Theory) If µ is a σ-finite signed measure and ν is σ-finite positive measure (think of these as a generalization of differentials) on a measurablespace (X,Σ), then there exist unique σ-finite signed measures λ, ρ such that λ ⊥ µ, ρ µ,and ν = λ+ ρ. Moreover there is a µ-integrable function f such that

ρ(A) =

∫A

f dµ.

Here f is called the Radon-Nikodym derivative and is denoted f = dρdµ.

You can think of it as saying “given two differentials dy and dx, there is an integrablefunction f such that dy =

∫f dx.” And we can write f = dy/dx. The actual definition

of a measure isn’t quite like a differential, but it’s a managable generalization. Anothergeneral form is:

Stokes’ Theorem (Differentiable Manifold Theory) Let M be a smooth, compact, oriented,n-dimensional manifold with boundary and ω be an (n− 1)-form on M. Then∫

M

dω =

∫∂M

ω.

An n-dimensional manifold can be thought of as a space that looks like Rn locally (i.e.in small neighborhoods). For example, the interval [a, b] is a compact 1-dimensionalmanifold with boundary (its boundary is a, b, the endpoints). A 0-form on this interval

30

Page 31: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

is just a smooth function f defined on it (in general, an n-form looks like f dx1∧· · ·∧dxn).So the theorem applied to this example is (and recall df = f ′(x) dx)∫

[a,b]

df =

∫[a,b]

f ′(x) dx =

∫a,b

f = f(b)− f(a),

which is simply the fundamental theorem of calculus. It turns out that all of the theoremsin this section are special cases of the above Stokes’ theorem. In what follows, a simplepath will be an injective one (so it doesn’t intersect itself); an orientation on a closedpath will a choice about whether one moves clockwise or counterclockwise. A closed pathwith orientation is called an oriented path. An oriented surface is a surface with aset of normal vectors at each point that change continuously.

Theorem 6.6. (Kelvin-Stokes’ Theorem) Let S : R2 → R be a smooth, bounded,and oriented surface in R3 with a simple, closed, and piecewise-smooth path γ : [a, b] : Ras a boundary and f : R3 → R3 be a vector field that is continuously differentiable. Then∫

γ

f · dγ =

∫∫S

curl f · dS.

This can be seen as a special case of the above Stokes’ theorem by the followingreasoning. S is a surface, so it looks like R2 in very small neighborhoods. That is, S is a2-dimensional manifold. It also satisfies the assumptions we need. Also,

f · dγ = f1 dγ1 + f2 dγ2 + f3 dγ3.

f · dγ is a sum of 1-forms, and is thus a 1-form itself. In the statement of the generalStokes’ theorem, dω refers to the de Rham derivative of ω, and in our case it turns outthat

d(f · dγ) = curl f · dS.

But we can prove the Kelvin-Stokes’ theorem directly without showing it is a special caseof the more general Stokes’ theorem by noting the fact that since γ is a boundary of thesurface, we can write

γ(t) = (x(t), y(t), z(t)) = (x(t), y(t), S (x(t), y(t))) .

31

Page 32: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Proof.∫γ

f · dγ =

∫ b

a

f · γ′(t) dt

=

∫ b

a

(f1x′(t) + f2y

′(t) + f3z′(t)) dt

=

∫ b

a

(f1x

′(t) + f2y′(t) + f3

(∂S

∂x

dx

dt+∂S

∂y

dy

dt

))dt

=

∫ b

a

((f1 + f3

∂S

∂x

)dx

dt+

(f2 + f3

∂S

∂y

)dy

dt

)dt

=

∫γ∩R2

(f1 + f3

∂S

∂x

)dx+

(f2 + f3

∂S

∂y

)dy

=

∫∫S

(∂

∂x

(f2 + f3

∂S

∂y

)− ∂

∂y

(f1 + f3

∂S

∂x

))dx dy

=

∫∫S

(−(f3y − f2z)

∂S

∂x− (f1z − f3x)

∂S

∂y+ (f2x − f1y)

)dx dy

=

∫∫S

curl f ·(−∂S∂x

,−∂S∂y, 1

)dx dy

=

∫∫S

curl f · dS.

Corollary 6.7. (Green’s Theorem) Let γ : [a, b] → R2 be a simple, closed, andpiecewise-smooth path in the plane where A denotes the region enclosed by the path.Let f : R2 → R2 be continuously differentiable. Then∫

γ

f · dγ =

∫∫A

(∂f2

∂x− ∂f1

∂y

)dx dy.

Proof. Since A and γ are in R2, they are in R3. Moreover S is easily shown to besmooth, bounded (by γ), and oriented and

dS =

√1 +

(∂z

∂x

)2

+

(∂z

∂y

)2

dx dy = dx dy

since z = 0, and the normal vector of S is (0, 0, 1). Let f = (f1, f2, 0) and γ(t) =(x(t), y(t), 0). Then Kelvin-Stokes’ gives us∫

γ

f · dγ =

∫∫A

curl f · dS

=

∫∫A

∣∣∣∣∣∣e1 e2 e3∂∂x

∂∂y

∂∂z

f1 f2 0

∣∣∣∣∣∣ · (0, 0, 1) dx dy

=

∫∫A

(∂f2

∂x− ∂f1

∂y

)dx dy.

32

Page 33: Vector Calculus - University of California, Riversidemath.ucr.edu/~monnot/vector calculus.pdf · 1 Vector Spaces 1.1 Basic Terminology We begin with the formal de nition of a vector

Theorem 6.8. (Gauss’ Theorem)(Divergence Theorem) Let B be a simple bodyand S be its boundary with positive orientation. If f : R3 → R3 is continuously differen-tiable on B then ∫∫

S

f · dS =

∫∫∫B

div f dxdydz.

Proof.∫∫S

f dS =

∫∫S

(f1, f2, f3) · (n1, n2, n3) dS

=

∫∫S

(f1n1 + f2n2 + f3n3) dS

=

∫∫S

(∫∂f1

∂xdx

)dydz +

∫∫S

(∫∂f2

∂ydy

)dxdz +

∫∫S

(∫∂f3

∂zdz

)dxdy

=

∫∫∫B

(∂f1

∂x+∂f2

∂y+∂f3

∂z

)dx dy dz

=

∫∫∫B

div f dxdydz.

Recall that curl of gradient is zero and divergence of curl is zero. Also recall that vectorfields with 0 curl are irrotational and vector fields with 0 divergence are incompressible.We said that, in fact, every irrotational vector field is the gradient of some other vectorfield, and that every incompressible vector field is the curl of another vector field. Thefirst step in showing this is to show that every vector field can be written as the sum ofan irrotational and incompressible field.

Theorem 6.9. (Helmholtz)(Fundamental Theorem of Vector Calculus) Letf : R3 → R3 be a vector field that is twice continuously differentiable. Then f = C + Rwhere C is an incompressible vector field and R is an irrotational vector field.

It turns out that we can write
\[
f = \operatorname{curl} A - \nabla B
\]
for some vector field A and scalar field B. It follows immediately that C = curl A and R = −∇B, for the divergence of a curl is 0 and the curl of a gradient is 0. In fact A and B have the forms
\[
A = \frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{\operatorname{curl} f}{\|x - x_0\|}\, dx
\]
and
\[
B = \frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{\operatorname{div} f}{\|x - x_0\|}\, dx
\]
for any choice of x₀ ∈ R³.
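In simple cases the integral formulas are not needed to exhibit a decomposition. The sketch below checks with SymPy that the hypothetical field f = (x − y, x + y, z) splits as curl A − ∇B with A = (0, 0, −(x² + y²)/2) and B = −(x² + y² + z²)/2; these particular fields are chosen by hand for illustration and are not from the original text.

```python
# SymPy check of a Helmholtz decomposition f = curl A - grad B (illustrative fields).
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def grad(phi):
    return sp.Matrix([sp.diff(phi, x), sp.diff(phi, y), sp.diff(phi, z)])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

f = sp.Matrix([x - y, x + y, z])
A = sp.Matrix([0, 0, -(x**2 + y**2) / 2])   # vector potential (chosen by hand)
B = -(x**2 + y**2 + z**2) / 2               # scalar potential (chosen by hand)

C = curl(A)          # incompressible part
R = -grad(B)         # irrotational part

print(sp.simplify(C + R - f))   # zero vector: f = curl A - grad B
print(sp.simplify(div(C)))      # 0: C is incompressible
print(sp.simplify(curl(R)))     # zero vector: R is irrotational
```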

Corollary 6.10. (grad-curl-div Exact Sequence Theorem)

(a) If f is a scalar field, then curl ∇f = 0.

(b) If f is a vector field, then div curl f = 0.


(c) If f is an irrotational vector field, then it is the gradient of some scalar field.

(d) If f is an incompressible vector field, then it is the curl of some vector field.

Proof. We have already proven (a) and (b). By Helmholtz we have f = curl A − ∇B where A and B are defined by the integrals above. If f is irrotational, curl f = 0, so A = 0 and thus f = −∇B. Similarly, if f is incompressible, then div f = 0, so B = 0 and hence f = curl A.
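Parts (a) and (b) can also be confirmed symbolically for arbitrary smooth fields. The following sketch is an illustration (not part of the text) using SymPy, which treats mixed partial derivatives as equal.

```python
# SymPy check of parts (a) and (b) for arbitrary smooth fields (illustrative sketch).
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)                                # arbitrary scalar field
F = [sp.Function(name)(x, y, z) for name in ('F1', 'F2', 'F3')]  # arbitrary vector field

def grad(s):
    return [sp.diff(s, x), sp.diff(s, y), sp.diff(s, z)]

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

def div(V):
    return sp.diff(V[0], x) + sp.diff(V[1], y) + sp.diff(V[2], z)

print([sp.simplify(c) for c in curl(grad(phi))])   # (a): [0, 0, 0]
print(sp.simplify(div(curl(F))))                   # (b): 0
```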

In the Helmholtz decomposition of f, we sometimes call A the vector potential of f and B the scalar potential of f. A natural question to ask at this point is: what can we say about a vector field f which is incompressible and irrotational?

If f is irrotational, then f = −∇B as we have just shown. But if it is also incompressible, we have
\[
\operatorname{div} f = -\operatorname{div} \nabla B = -\nabla^2 B = 0 \implies \nabla^2 B = 0.
\]

Dually, we could have written f = curl A since it is incompressible. Since it is also irrotational we have
\[
\operatorname{curl} f = \operatorname{curl}^2 A = \nabla(\operatorname{div} A) - \nabla^2 A = 0.
\]

Since both equations involve the Laplacian (∇²) and f can be written in either way, we will stick to the simpler first case. Hence an irrotational and incompressible vector field f has the form
\[
f = -\nabla \varphi
\]
where ϕ is a solution to Laplace’s equation:
\[
\nabla^2 \varphi = 0.
\]
Solutions to Laplace’s equation are called harmonic functions.
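For a concrete illustration (the particular function below is a standard example, not taken from the text), ϕ = x² − y² is harmonic, and the corresponding field f = −∇ϕ is both irrotational and incompressible; the sketch checks this with SymPy.

```python
# A harmonic function and its (irrotational, incompressible) gradient field.
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = x**2 - y**2                      # candidate harmonic function (illustrative)

laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2)
print(laplacian)                       # 0, so phi is harmonic

f = [-sp.diff(phi, x), -sp.diff(phi, y), -sp.diff(phi, z)]   # f = -grad(phi)

div_f = sp.diff(f[0], x) + sp.diff(f[1], y) + sp.diff(f[2], z)
curl_f = [sp.diff(f[2], y) - sp.diff(f[1], z),
          sp.diff(f[0], z) - sp.diff(f[2], x),
          sp.diff(f[1], x) - sp.diff(f[0], y)]

print(div_f, curl_f)                   # 0 and [0, 0, 0]: incompressible and irrotational
```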


7 Applications

7.1 Vortex Dynamics

In fluid mechanics, we can let v : R³ → R³ be the velocity field of a fluid (where a fluid is described locally by an amount of mass or energy per unit volume together with a velocity vector at each point). Let us assume conservation of mass/energy in a fixed volume V. So we have
\[
\varepsilon = \iiint_V \rho\, dV
\]

where ρ denotes mass/energy density and ε is the total mass/energy in the volume. The change in energy over time depends on how much energy enters or leaves the volume V. Hence if J denotes the energy velocity field (in the sense that J = ρv), then
\[
\frac{\partial \varepsilon}{\partial t} = -\iint_{\partial V} J \cdot dS.
\]

That is, a negative change in energy corresponds to how much of the energy velocity field hits the boundary of the volume (i.e. exits the volume). We use partial derivative notation since we think of the energy as a function ε : R⁴ → R with a time variable as well. By Gauss’ theorem we have
\[
-\iiint_V \operatorname{div} J\, dV = -\iint_{\partial V} J \cdot dS = \frac{\partial \varepsilon}{\partial t} = \iiint_V \frac{\partial \rho}{\partial t}\, dV.
\]
Hence
\[
\frac{\partial \rho}{\partial t} = -\operatorname{div} J.
\]

This is equivalent to saying

\[
0 = \frac{\partial \rho}{\partial t} + \operatorname{div}(\rho v) = \frac{\partial \rho}{\partial t} + \nabla\rho \cdot v + \rho\, \operatorname{div} v.
\]

This is called the continuity equation. As a function of four variables, the total derivative of the energy density with respect to time is
\begin{align*}
\frac{d\rho}{dt} &= \frac{\partial \rho}{\partial t}\frac{dt}{dt} + \frac{\partial \rho}{\partial x}\frac{dx}{dt} + \frac{\partial \rho}{\partial y}\frac{dy}{dt} + \frac{\partial \rho}{\partial z}\frac{dz}{dt} \\
&= \frac{\partial \rho}{\partial t} + \nabla\rho \cdot v \\
&= -\rho\, \operatorname{div} v.
\end{align*}
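The product-rule identity div(ρv) = ∇ρ · v + ρ div v used in the continuity equation can be confirmed symbolically. The sketch below is an illustration with arbitrary SymPy function symbols, not part of the original text.

```python
# SymPy check of the identity div(rho * v) = grad(rho) . v + rho * div(v).
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
rho = sp.Function('rho')(x, y, z, t)                                 # arbitrary density field
v = [sp.Function(name)(x, y, z, t) for name in ('v1', 'v2', 'v3')]   # arbitrary velocity field

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def grad(s):
    return [sp.diff(s, x), sp.diff(s, y), sp.diff(s, z)]

lhs = div([rho * vi for vi in v])
rhs = sum(g * vi for g, vi in zip(grad(rho), v)) + rho * div(v)

print(sp.simplify(lhs - rhs))    # 0: the two sides agree
```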

Now we can define the vorticity of the fluid velocity field v as its curl. We will write

ω = curl v.

Suppose we have a subvolume in which the vorticity of the fluid is nonzero (which we will call a vortex). Then, since div ω = 0, applying the equation we derived above shows that the rate of change in density of the vortex is 0. We can however ask how the vortex changes as a whole over time in the fluid. A similar equation can be derived from the Navier-Stokes equation for a Newtonian fluid:
\[
\frac{d\omega}{dt} = \frac{\partial \omega}{\partial t} + \nabla\omega \cdot v.
\]


Since v : R³ → R³, we also have ω : R³ → R³, so what does ∂ω/∂t mean? Moreover, what is the gradient of a vector field? Suppose we have a collection of velocity fields v_t. Then we have a collection of vorticity fields ω_t = curl v_t. We can in turn think of v and ω as functions v, ω : R⁴ → R³. Correspondingly we will have
\[
\frac{\partial \omega}{\partial t} = \frac{\partial}{\partial t}\bigl(\omega_1(x, y, z, t), \omega_2(x, y, z, t), \omega_3(x, y, z, t)\bigr) = \left( \frac{\partial \omega_1}{\partial t}, \frac{\partial \omega_2}{\partial t}, \frac{\partial \omega_3}{\partial t} \right)
\]
and ∇ω is simply the Jacobian of the original vector field:
\[
\nabla\omega = D\omega = \begin{pmatrix} \omega_{1x} & \omega_{1y} & \omega_{1z} \\ \omega_{2x} & \omega_{2y} & \omega_{2z} \\ \omega_{3x} & \omega_{3y} & \omega_{3z} \end{pmatrix}.
\]

Hence the equation becomes

\begin{align*}
\frac{d\omega}{dt} &= (\omega_{1t}, \omega_{2t}, \omega_{3t}) + \begin{pmatrix} \omega_{1x} & \omega_{1y} & \omega_{1z} \\ \omega_{2x} & \omega_{2y} & \omega_{2z} \\ \omega_{3x} & \omega_{3y} & \omega_{3z} \end{pmatrix} \cdot (v_1, v_2, v_3) \\
&= (\omega_{1t}, \omega_{2t}, \omega_{3t}) + \sum_{j=1}^{3} v_j (\omega_{1j}, \omega_{2j}, \omega_{3j}) \\
&= \left( \omega_{1t} + \sum_{j=1}^{3} v_j \omega_{1j},\; \omega_{2t} + \sum_{j=1}^{3} v_j \omega_{2j},\; \omega_{3t} + \sum_{j=1}^{3} v_j \omega_{3j} \right)
\end{align*}

where the dot product is taken with the column vectors of the Jacobian and
\[
\frac{\partial \omega_i}{\partial x_j} = \begin{cases} \omega_{ix} & \text{if } j = 1 \\ \omega_{iy} & \text{if } j = 2 \\ \omega_{iz} & \text{if } j = 3. \end{cases}
\]

This equation, called the vorticity equation, describes how a vortex changes over time as it moves through a velocity field. It gives the componentwise equations
\[
\frac{d\omega_i}{dt} = \frac{\partial \omega_i}{\partial t} + \sum_{j=1}^{3} v_j \frac{\partial \omega_i}{\partial x_j}.
\]

It proves useful to speak of the operator
\[
\frac{D}{Dt} = \frac{\partial}{\partial t} + v \cdot \nabla,
\]

which is called the material differential operator. When applied to a scalar or vector field, it is called the material derivative of the field.
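As a small worked example (the density and velocity fields below are illustrative choices, not from the text), the material derivative Dρ/Dt = ∂ρ/∂t + v · ∇ρ can be computed directly with SymPy.

```python
# Material derivative D(rho)/Dt = d(rho)/dt + v . grad(rho) for sample fields.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

rho = sp.exp(-t) * (x**2 + y**2 + z**2)     # sample scalar (density) field
v = [y, -x, 0]                              # sample velocity field (rigid rotation)

grad_rho = [sp.diff(rho, x), sp.diff(rho, y), sp.diff(rho, z)]
material_derivative = sp.diff(rho, t) + sum(vi * gi for vi, gi in zip(v, grad_rho))

# Result equals -(x**2 + y**2 + z**2)*exp(-t): the advective term v . grad(rho)
# vanishes here because rho is constant along circles about the z-axis.
print(sp.simplify(material_derivative))
```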

7.2 Electrodynamics

In electrodynamics we have two vector fields E, B : R³ → R³ called the electric and magnetic fields respectively. Contributions from Gauss, Faraday, Ampère, and Maxwell


led to the discovery of what are called Maxwell’s equations:
\begin{align*}
\operatorname{div} E &= \frac{\rho}{\varepsilon_0} & \operatorname{div} B &= 0 \\
\operatorname{curl} E &= -\frac{\partial B}{\partial t} & \operatorname{curl} B &= \mu_0 \left( J + \varepsilon_0 \frac{\partial E}{\partial t} \right)
\end{align*}
where the partial derivatives with respect to time are defined as before for fluid velocity fields, ρ = dQ/dV is the charge density, J = dI/dA is the current density (current per unit of cross-sectional area), ε₀ is the permittivity of free space (or electric constant), and µ₀ is the permeability of free space (or magnetic constant).

Integrating the first equation over a closed volume and applying Gauss’ theorem gives us
\[
\iiint_V \operatorname{div} E\, dV = \iint_{\partial V} E \cdot dS = \iiint_V \frac{\rho}{\varepsilon_0}\, dV = \frac{Q}{\varepsilon_0}
\]
where S = ∂V. Similarly for the second equation we obtain
\[
\iint_{\partial V} B \cdot dS = 0.
\]

Integrating the third equation over ∂V and applying Kelvin-Stokes’ gives us
\[
\iint_{\partial V} \operatorname{curl} E \cdot dS = \int_\gamma E \cdot d\gamma = -\iint_{\partial V} \frac{\partial B}{\partial t} \cdot dS
\]

where γ is a simple closed curve on the boundary of V. Similarly for the fourth equation one can obtain
\[
\int_\gamma B \cdot d\gamma = \mu_0 \iint_{\partial V} \left( J + \varepsilon_0 \frac{\partial E}{\partial t} \right) \cdot dS = \mu_0 \left( I_S + \varepsilon_0 \iint_{\partial V} \frac{\partial E}{\partial t} \cdot dS \right)
\]

where I_S is the net current on the boundary. This gives us the integral forms of Maxwell’s equations:
\begin{align*}
\iint_{\partial V} E \cdot dS &= \frac{Q}{\varepsilon_0} \\
\iint_{\partial V} B \cdot dS &= 0 \\
\int_\gamma E \cdot d\gamma &= -\iint_{\partial V} \frac{\partial B}{\partial t} \cdot dS = -\frac{d}{dt}\iint_{\partial V} B \cdot dS \\
\int_\gamma B \cdot d\gamma &= \mu_0\left( I_S + \varepsilon_0 \iint_{\partial V} \frac{\partial E}{\partial t} \cdot dS \right) = \mu_0\left( I_S + \varepsilon_0 \frac{d}{dt} \iint_{\partial V} E \cdot dS \right)
\end{align*}


where γ is a simple closed curve on ∂V. If we assume our volume has no charge or current, then ρ = J = 0, and Maxwell’s equations are
\begin{align*}
\operatorname{div} E &= 0 \\
\operatorname{div} B &= 0 \\
\operatorname{curl} E &= -\frac{\partial B}{\partial t} \\
\operatorname{curl} B &= \mu_0 \varepsilon_0 \frac{\partial E}{\partial t} = \frac{1}{c^2}\frac{\partial E}{\partial t}
\end{align*}

where c is the speed of an electromagnetic wave in a vacuum, since
\[
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}.
\]
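As a quick numerical aside (this check is not part of the original text; it relies on SciPy’s tabulated values of the constants), 1/√(µ₀ε₀) indeed reproduces the speed of light.

```python
# Compute 1/sqrt(mu_0 * epsilon_0) from tabulated constants and compare with c.
from math import sqrt
from scipy.constants import mu_0, epsilon_0, c

wave_speed = 1.0 / sqrt(mu_0 * epsilon_0)
print(wave_speed)                 # approximately 2.998e8 m/s
print(abs(wave_speed - c) / c)    # relative difference is essentially zero
```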

When there is no charge or current, we also have
\begin{align*}
\operatorname{curl}^2(E) &= \nabla(\operatorname{div} E) - \nabla^2 E \\
&= -\nabla^2 E \\
&= \operatorname{curl}(\operatorname{curl} E) \\
&= -\operatorname{curl}\frac{\partial B}{\partial t} \\
&= -\frac{\partial}{\partial t}\operatorname{curl} B \\
&= -\frac{1}{c^2}\frac{\partial^2 E}{\partial t^2}.
\end{align*}

So
\[
\frac{\partial^2 E}{\partial t^2} = c^2 \nabla^2 E.
\]

Also
\begin{align*}
\operatorname{curl}^2(B) &= \nabla(\operatorname{div} B) - \nabla^2 B \\
&= -\nabla^2 B \\
&= \operatorname{curl}(\operatorname{curl} B) \\
&= \frac{1}{c^2}\operatorname{curl}\frac{\partial E}{\partial t} \\
&= \frac{1}{c^2}\frac{\partial}{\partial t}\operatorname{curl} E \\
&= -\frac{1}{c^2}\frac{\partial^2 B}{\partial t^2}.
\end{align*}

So we also have
\[
\frac{\partial^2 B}{\partial t^2} = c^2 \nabla^2 B.
\]

Hence E and B both satisfy the equation
\[
\frac{\partial^2 \psi}{\partial t^2} = c^2 \nabla^2 \psi
\]
for a constant c and vector field ψ. This equation is called the wave equation, and has important applications in other areas of physics as well.
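To close the section, here is a sketch (with an illustrative plane-wave ansatz, not from the text) verifying with SymPy that E = (E₀ sin(kz − ωt), 0, 0) satisfies the wave equation when the dispersion relation ω = ck holds.

```python
# SymPy check that a plane wave satisfies the wave equation when omega = c*k.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
E0, k, c = sp.symbols('E0 k c', positive=True)
omega = c * k                                  # dispersion relation for light

E1 = E0 * sp.sin(k * z - omega * t)            # x-component of a plane wave along z

lhs = sp.diff(E1, t, 2)                                                   # second time derivative
rhs = c**2 * (sp.diff(E1, x, 2) + sp.diff(E1, y, 2) + sp.diff(E1, z, 2))  # c^2 * Laplacian

print(sp.simplify(lhs - rhs))                  # 0: the wave equation holds
```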

