
RUSSELL L. HERMAN

INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS

R. L. HERMAN - VERSION DATE: JUNE 19, 2015

Copyright ©2015 by Russell L. Herman

Published by R. L. Herman

This text has been reformatted from the original using a modification of the Tufte-book document class in LaTeX. See tufte-latex.googlecode.com.

Introduction to Partial Differential Equations by Russell Herman. Versions of these notes have been posted online since Spring 2005. http://people.uncw.edu/hermanr/pde1/pdebook

June 2015


Contents

Prologue
    Introduction
    Acknowledgments

1 First Order Partial Differential Equations
    1.1 Introduction
    1.2 Linear Constant Coefficient Equations
    1.3 Quasilinear Equations: The Method of Characteristics
        1.3.1 Geometric Interpretation
        1.3.2 Characteristics
    1.4 Applications
        1.4.1 Conservation Laws
        1.4.2 Nonlinear Advection Equations
        1.4.3 The Breaking Time
        1.4.4 Shock Waves
        1.4.5 Rarefaction Waves
        1.4.6 Traffic Flow
    1.5 General First Order PDEs
    1.6 Modern Nonlinear PDEs
    Problems

2 Second Order Partial Differential Equations
    2.1 Introduction
    2.2 Derivation of Generic 1D Equations
        2.2.1 Derivation of Wave Equation for String
        2.2.2 Derivation of 1D Heat Equation
    2.3 Boundary Value Problems
    2.4 Separation of Variables
        2.4.1 The 1D Heat Equation
        2.4.2 The 1D Wave Equation
    2.5 Laplace’s Equation in 2D
    2.6 Classification of Second Order PDEs
    2.7 d’Alembert’s Solution of the Wave Equation
    Problems

3 Trigonometric Fourier Series
    3.1 Introduction to Fourier Series
    3.2 Fourier Trigonometric Series
    3.3 Fourier Series Over Other Intervals
        3.3.1 Fourier Series on [a, b]
    3.4 Sine and Cosine Series
    3.5 Solution of the Heat Equation
    3.6 Finite Length Strings
    3.7 The Gibbs Phenomenon
    Problems

4 Sturm-Liouville Boundary Value Problems
    4.1 Sturm-Liouville Operators
    4.2 Properties of Sturm-Liouville Eigenvalue Problems
        4.2.1 Adjoint Operators
        4.2.2 Lagrange’s and Green’s Identities
        4.2.3 Orthogonality and Reality
        4.2.4 The Rayleigh Quotient
    4.3 The Eigenfunction Expansion Method
    4.4 Appendix: The Fredholm Alternative Theorem
    Problems

5 Non-sinusoidal Harmonics and Special Functions
    5.1 Function Spaces
    5.2 Classical Orthogonal Polynomials
    5.3 Fourier-Legendre Series
        5.3.1 Properties of Legendre Polynomials
        5.3.2 Generating Functions: The Generating Function for Legendre Polynomials
        5.3.3 The Differential Equation for Legendre Polynomials
        5.3.4 Fourier-Legendre Series
    5.4 Gamma Function
    5.5 Fourier-Bessel Series
    5.6 Appendix: The Least Squares Approximation
    Problems

6 Problems in Higher Dimensions
    6.1 Vibrations of Rectangular Membranes
    6.2 Vibrations of a Kettle Drum
    6.3 Laplace’s Equation in 2D
        6.3.1 Poisson Integral Formula
    6.4 Three Dimensional Cake Baking
    6.5 Laplace’s Equation and Spherical Symmetry
        6.5.1 Spherical Harmonics
    6.6 Spherically Symmetric Vibrations
    6.7 Baking a Spherical Turkey
    6.8 Schrödinger Equation in Spherical Coordinates
    6.9 Curvilinear Coordinates
    Problems

7 Green’s Functions and Nonhomogeneous Problems
    7.1 Initial Value Green’s Functions
    7.2 Boundary Value Green’s Functions
        7.2.1 Properties of Green’s Functions
        7.2.2 The Differential Equation for the Green’s Function
        7.2.3 Series Representations of Green’s Functions
        7.2.4 The Generalized Green’s Function
    7.3 The Nonhomogeneous Heat Equation
        7.3.1 Nonhomogeneous Time Independent Boundary Conditions
        7.3.2 Time Dependent Boundary Conditions
    7.4 Green’s Functions for 1D Partial Differential Equations
        7.4.1 Heat Equation
        7.4.2 Wave Equation
    7.5 Green’s Functions for the 2D Poisson Equation
    7.6 Method of Eigenfunction Expansions
        7.6.1 The Nonhomogeneous Heat Equation
        7.6.2 The Forced Vibrating Membrane
    7.7 Green’s Function Solution of Nonhomogeneous Heat Equation
    7.8 Summary
        7.8.1 Laplace’s Equation: ∇²ψ = 0
        7.8.2 Homogeneous Time Dependent Equations
        7.8.3 Inhomogeneous Steady State Equation
        7.8.4 Inhomogeneous, Time Dependent Equations
    Problems

8 Complex Representations of Functions
    8.1 Complex Representations of Waves
    8.2 Complex Numbers
    8.3 Complex Valued Functions
        8.3.1 Complex Domain Coloring
    8.4 Complex Differentiation
    8.5 Complex Integration
        8.5.1 Complex Path Integrals
        8.5.2 Cauchy’s Theorem
        8.5.3 Analytic Functions and Cauchy’s Integral Formula
        8.5.4 Laurent Series
        8.5.5 Singularities and The Residue Theorem
        8.5.6 Infinite Integrals
        8.5.7 Integration Over Multivalued Functions
        8.5.8 Appendix: Jordan’s Lemma
    8.6 Laplace’s Equation in 2D, Revisited
        8.6.1 Fluid Flow
        8.6.2 Conformal Mappings
    Problems

9 Transform Techniques in Physics
    9.1 Introduction
        9.1.1 Example 1 - The Linearized KdV Equation
        9.1.2 Example 2 - The Free Particle Wave Function
        9.1.3 Transform Schemes
    9.2 Complex Exponential Fourier Series
    9.3 Exponential Fourier Transform
    9.4 The Dirac Delta Function
    9.5 Properties of the Fourier Transform
        9.5.1 Fourier Transform Examples
    9.6 The Convolution Operation
        9.6.1 Convolution Theorem for Fourier Transforms
        9.6.2 Application to Signal Analysis
        9.6.3 Parseval’s Equality
    9.7 The Laplace Transform
        9.7.1 Properties and Examples of Laplace Transforms
    9.8 Applications of Laplace Transforms
        9.8.1 Series Summation Using Laplace Transforms
        9.8.2 Solution of ODEs Using Laplace Transforms
        9.8.3 Step and Impulse Functions
    9.9 The Convolution Theorem
    9.10 The Inverse Laplace Transform
    9.11 Transforms and Partial Differential Equations
        9.11.1 Fourier Transform and the Heat Equation
        9.11.2 Laplace’s Equation on the Half Plane
        9.11.3 Heat Equation on Infinite Interval, Revisited
        9.11.4 Nonhomogeneous Heat Equation
        9.11.5 Solution of the 3D Poisson Equation
    Problems

10 Numerical Solutions of PDEs
    10.1 Ordinary Differential Equations
        10.1.1 Euler’s Method
        10.1.2 Higher Order Taylor Methods
        10.1.3 Runge-Kutta Methods
    10.2 The Heat Equation
        10.2.1 The Finite Difference Method
    10.3 Truncation Error
    10.4 Stability
    10.5 Matrix Formulation

A Calculus Review
    A.1 What Do I Need To Know From Calculus?
        A.1.1 Introduction
        A.1.2 Trigonometric Functions
        A.1.3 Hyperbolic Functions
        A.1.4 Derivatives
        A.1.5 Integrals
        A.1.6 Geometric Series
        A.1.7 Power Series
        A.1.8 The Binomial Expansion
    Problems

B Ordinary Differential Equations Review
    B.1 First Order Differential Equations
        B.1.1 Separable Equations
        B.1.2 Linear First Order Equations
    B.2 Second Order Linear Differential Equations
        B.2.1 Constant Coefficient Equations
    B.3 Forced Systems
        B.3.1 Method of Undetermined Coefficients
        B.3.2 Periodically Forced Oscillations
        B.3.3 Method of Variation of Parameters
    B.4 Cauchy-Euler Equations
    Problems

Index


Dedicated to those students who have endured previous versions of my notes.


Prologue

“How can it be that mathematics, being after all a product of human thought independent of experience, is so admirably adapted to the objects of reality?” - Albert Einstein (1879-1955)

Introduction

This set of notes is being compiled for use in a two semester course on mathematical methods for the solution of partial differential equations, typically taken by majors in mathematics, the physical sciences, and engineering. Partial differential equations often arise in the study of problems in applied mathematics, mathematical physics, physical oceanography, meteorology, engineering, biology, economics, and just about everything else. However, many of the key methods for studying such equations extend back to problems in physics and geometry. In this course we will investigate analytical, graphical, and approximate solutions of some standard partial differential equations. We will study the theory, methods of solution, and applications of partial differential equations.

We will first introduce partial differential equations and a few models. A PDE, for short, is an equation involving the derivatives of some unknown multivariable function. It is a natural extension of ordinary differential equations (ODEs), which are differential equations for an unknown function of one variable. We will begin by classifying some of these equations.

While it is customary to begin the study of PDEs with the one dimensional heat and wave equations, we will begin with first order PDEs and then proceed to the other second order equations. This will allow for an understanding of characteristics and also open the door to the study of some nonlinear equations related to some current research in the evolution of wave equations.

There are different methods for solving partial differential equations, and these will be explored throughout the course. As we progress through the course, we will introduce standard numerical methods, since knowing how to numerically solve differential equations can be useful in research. We will also look into the standard solutions, including separation of variables, starting in one dimension and then proceeding to higher dimensions. This naturally leads to finding solutions as Fourier series and special functions, such as


Legendre polynomials and Bessel functions.

The specific topics to be studied and approximate number of lectures will include:

First Semester: (26 lectures)

• Introduction (1)

• First Order PDEs (2)

• Traveling Waves (1)

• Shock and Rarefaction Waves (2)

• Second Order PDEs (1)

• 1D Heat Equation (1)

• 1D Wave Equation - d’Alembert Solution (2)

• Separation of Variables (1)

• Fourier Series (4)

• Equations in 2D - Laplace’s Equation, Vibrating Membranes (4)

• Numerical Solutions (2)

• Special Functions (3)

• Sturm-Liouville Theory (2)

Second Semester: (25 lectures)

• Nonhomogeneous Equations (2)

• Green’s Functions - ODEs (2)

• Green’s Functions - PDEs (2)

• Complex Variables (4)

• Fourier Transforms (3)

• Nonlinear PDEs (2)

• Other - numerical, conservation laws, symmetries (10)

An appendix is provided for reference, especially to basic calculus techniques, differential equations, and (maybe later) linear algebra.


Acknowledgments

Most, if not all, of the ideas and examples are not my own. These notes are a compendium of topics and examples that I have used in teaching not only differential equations, but also in teaching numerous courses in physics and applied mathematics. Some of the notions even extend back to when I first learned them in courses I had taken.

I would also like to express my gratitude to the many students who have found typos, or suggested sections needing more clarity, in the core set of notes upon which this book was based. This applies to the set of notes used in my mathematical physics course, applied mathematics course, An Introduction to Fourier and Complex Analysis with Application to the Spectral Analysis of Signals, and ordinary differential equations course, A Second Course in Ordinary Differential Equations: Dynamical Systems and Boundary Value Problems, all of which have some significant overlap with this book.


1 First Order Partial Differential Equations

“The profound study of nature is the most fertile source of mathematical discoveries.” - Joseph Fourier (1768-1830)

1.1 Introduction

We begin our study of partial differential equations with first order partial differential equations. Before doing so, we need to define a few terms.

Recall (see the appendix on differential equations) that an n-th order ordinary differential equation is an equation for an unknown function y(x) that expresses a relationship between the unknown function and its first n derivatives. One could write this generally as
\[ F(y^{(n)}(x), y^{(n-1)}(x), \ldots, y'(x), y(x), x) = 0. \qquad (1.1) \]
Here y^{(n)}(x) represents the n-th derivative of y(x). Furthermore, an initial value problem consists of the differential equation plus the values of the first n − 1 derivatives at a particular value of the independent variable, say x_0:
\[ y^{(n-1)}(x_0) = y_{n-1}, \quad y^{(n-2)}(x_0) = y_{n-2}, \quad \ldots, \quad y(x_0) = y_0. \qquad (1.2) \]
If conditions are instead provided at more than one value of the independent variable, then we have a boundary value problem.

If the unknown function is a function of several variables, then the derivatives are partial derivatives and the resulting equation is a partial differential equation. Thus, if u = u(x, y, . . .), a general partial differential equation might take the form
\[ F\left(x, y, \ldots, u, \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \ldots, \frac{\partial^2 u}{\partial x^2}, \ldots\right) = 0. \qquad (1.3) \]

Since the notation can get cumbersome, there are different ways to write the partial derivatives. First order derivatives could be written as
\[ \frac{\partial u}{\partial x}, \quad u_x, \quad \partial_x u, \quad D_x u. \]


Second order partial derivatives could be written in the forms
\[ \frac{\partial^2 u}{\partial x^2}, \quad u_{xx}, \quad \partial_{xx} u, \quad D_x^2 u, \]
\[ \frac{\partial^2 u}{\partial x\, \partial y} = \frac{\partial^2 u}{\partial y\, \partial x}, \quad u_{xy}, \quad \partial_{xy} u, \quad D_y D_x u. \]
Note that we are assuming that u(x, y, . . .) has continuous partial derivatives. Then, according to Clairaut's Theorem (Alexis Claude Clairaut, 1713-1765), mixed partial derivatives are the same.

Examples of some of the partial differential equations treated in this book are shown in Table 2.1. However, being that the highest order derivatives in these equations are of second order, these are second order partial differential equations. In this chapter we will focus on first order partial differential equations. Examples are given by
\[
\begin{aligned}
u_t + u_x &= 0, \\
u_t + u u_x &= 0, \\
u_t + u u_x &= u, \\
3u_x - 2u_y + u &= x.
\end{aligned}
\]

For functions of two variables, of which the above are examples, a general first order partial differential equation for u = u(x, y) is given as
\[ F(x, y, u, u_x, u_y) = 0, \quad (x, y) \in D \subset \mathbb{R}^2. \qquad (1.4) \]

This equation is too general. So, restrictions can be placed on the form, leading to a classification of first order equations. A linear first order partial differential equation is of the form
\[ a(x, y) u_x + b(x, y) u_y + c(x, y) u = f(x, y). \qquad (1.5) \]
Note that all of the coefficients are independent of u and its derivatives and each term is linear in u, ux, or uy.

We can relax the conditions on the coefficients a bit. Namely, we could assume that the equation is linear only in ux and uy. This gives the quasilinear first order partial differential equation in the form
\[ a(x, y, u) u_x + b(x, y, u) u_y = f(x, y, u). \qquad (1.6) \]
Note that the u-term was absorbed by f(x, y, u).

In between these two forms we have the semilinear first order partial differential equation in the form
\[ a(x, y) u_x + b(x, y) u_y = f(x, y, u). \qquad (1.7) \]
Here the left side of the equation is linear in u, ux, and uy. However, the right hand side can be nonlinear in u.

For the most part, we will introduce the Method of Characteristics for solving quasilinear equations. But, let us first consider the simpler case of linear first order constant coefficient partial differential equations.


1.2 Linear Constant Coefficient Equations

Let's consider the linear first order constant coefficient partial differential equation
\[ a u_x + b u_y + c u = f(x, y), \qquad (1.8) \]
for a, b, and c constants with a² + b² > 0. We will consider how such equations might be solved. We do this by considering two cases, b = 0 and b ≠ 0.

For the first case, b = 0, we have the equation
\[ a u_x + c u = f. \]
We can view this as a first order linear (ordinary) differential equation with y a parameter. Recall that the solution of such equations can be obtained using an integrating factor. [See the discussion after Equation (B.7).] First rewrite the equation as
\[ u_x + \frac{c}{a} u = \frac{f}{a}. \]

Introducing the integrating factor
\[ \mu(x) = \exp\left(\int^x \frac{c}{a}\, d\xi\right) = e^{\frac{c}{a} x}, \]
the differential equation can be written as
\[ (\mu u)_x = \frac{f}{a}\, \mu. \]
Integrating this equation and solving for u(x, y), we have
\[
\begin{aligned}
\mu(x)\, u(x, y) &= \frac{1}{a} \int f(\xi, y)\, \mu(\xi)\, d\xi + g(y), \\
e^{\frac{c}{a} x}\, u(x, y) &= \frac{1}{a} \int f(\xi, y)\, e^{\frac{c}{a} \xi}\, d\xi + g(y), \\
u(x, y) &= \frac{1}{a} \int f(\xi, y)\, e^{\frac{c}{a}(\xi - x)}\, d\xi + g(y)\, e^{-\frac{c}{a} x}.
\end{aligned} \qquad (1.9)
\]

Here g(y) is an arbitrary function of y.

For the second case, b ≠ 0, we have to solve the equation
\[ a u_x + b u_y + c u = f. \]
It would help if we could find a transformation which would eliminate one of the derivative terms, reducing this problem to the previous case. That is what we will do.

We first note that
\[ a u_x + b u_y = (a\mathbf{i} + b\mathbf{j}) \cdot (u_x \mathbf{i} + u_y \mathbf{j}) = (a\mathbf{i} + b\mathbf{j}) \cdot \nabla u. \qquad (1.10) \]


Recall from multivariable calculus that the last term is nothing but a directional derivative of u(x, y) in the direction ai + bj. [Actually, it is proportional to the directional derivative if ai + bj is not a unit vector.]

Therefore, we seek to write the partial differential equation as involving a derivative in the direction ai + bj but not in a direction orthogonal to this. In Figure 1.1 we depict a new set of coordinates in which the w direction is orthogonal to ai + bj.

[Figure 1.1: Coordinate systems for transforming aux + buy + cu = f into bvz + cv = f using the transformation w = bx − ay and z = y.]

We consider the transformation
\[ w = bx - ay, \quad z = y. \qquad (1.11) \]
We first note that this transformation is invertible,
\[ x = \frac{1}{b}(w + az), \quad y = z. \qquad (1.12) \]

Next we consider how the derivative terms transform. Let u(x, y) = v(w, z). Then, we have
\[
\begin{aligned}
a u_x + b u_y &= a \frac{\partial}{\partial x} v(w, z) + b \frac{\partial}{\partial y} v(w, z) \\
&= a\left[\frac{\partial v}{\partial w}\frac{\partial w}{\partial x} + \frac{\partial v}{\partial z}\frac{\partial z}{\partial x}\right] + b\left[\frac{\partial v}{\partial w}\frac{\partial w}{\partial y} + \frac{\partial v}{\partial z}\frac{\partial z}{\partial y}\right] \\
&= a[b v_w + 0 \cdot v_z] + b[-a v_w + v_z] \\
&= b v_z.
\end{aligned} \qquad (1.13)
\]

Therefore, the partial differential equation becomes
\[ b v_z + c v = f\left(\frac{1}{b}(w + az), z\right). \]
This is now in the same form as in the first case and can be solved using an integrating factor.

Example 1.1. Find the general solution of the equation 3ux − 2uy + u = x.

First, we transform the equation into new coordinates,
\[ w = bx - ay = -2x - 3y, \]
and z = y. Then,
\[
\begin{aligned}
3u_x - 2u_y &= 3[-2 v_w + 0 \cdot v_z] - 2[-3 v_w + v_z] \\
&= -2 v_z.
\end{aligned} \qquad (1.14)
\]
The new partial differential equation for v(w, z) is
\[ -2 \frac{\partial v}{\partial z} + v = x = -\frac{1}{2}(w + 3z). \]


Rewriting this equation,
\[ \frac{\partial v}{\partial z} - \frac{1}{2} v = \frac{1}{4}(w + 3z), \]
we identify the integrating factor
\[ \mu(z) = \exp\left[-\int^z \frac{1}{2}\, d\zeta\right] = e^{-z/2}. \]
Using this integrating factor, we can solve the differential equation for v(w, z):
\[
\begin{aligned}
\frac{\partial}{\partial z}\left(e^{-z/2} v\right) &= \frac{1}{4}(w + 3z)\, e^{-z/2}, \\
e^{-z/2}\, v(w, z) &= \frac{1}{4} \int^z (w + 3\zeta)\, e^{-\zeta/2}\, d\zeta \\
&= -\frac{1}{2}(w + 6 + 3z)\, e^{-z/2} + c(w), \\
v(w, z) &= -\frac{1}{2}(w + 6 + 3z) + c(w)\, e^{z/2}, \\
u(x, y) &= x - 3 + c(-2x - 3y)\, e^{y/2}.
\end{aligned} \qquad (1.15)
\]
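As a quick aside, not in the original notes, the general solution just found can be checked symbolically. The sketch below uses SymPy with an arbitrary function c and verifies that the residual of 3ux − 2uy + u = x vanishes.

```python
import sympy as sp

x, y = sp.symbols('x y')
c = sp.Function('c')  # arbitrary function of the characteristic variable

# General solution from Example 1.1: u(x, y) = x - 3 + c(-2x - 3y) e^{y/2}
u = x - 3 + c(-2*x - 3*y) * sp.exp(y/2)

# Residual of the PDE 3 u_x - 2 u_y + u = x; it should simplify to zero
residual = 3*sp.diff(u, x) - 2*sp.diff(u, y) + u - x
print(sp.simplify(residual))  # expected: 0
```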

1.3 Quasilinear Equations: The Method of Characteristics

1.3.1 Geometric Interpretation

We consider the quasilinear partial differential equation in two independent variables,
\[ a(x, y, u)\, u_x + b(x, y, u)\, u_y - c(x, y, u) = 0. \qquad (1.16) \]

Let u = u(x, y) be a solution of this equation. Then,
\[ f(x, y, u) = u(x, y) - u = 0 \]
describes the solution surface, or integral surface.

We recall from multivariable, or vector, calculus that the normal to the integral surface is given by the gradient function,
\[ \nabla f = (u_x, u_y, -1). \]

Now consider the vector of coefficients, v = (a, b, c), and the dot product with the gradient above:
\[ \mathbf{v} \cdot \nabla f = a u_x + b u_y - c. \]
This is the left hand side of the partial differential equation. Therefore, for the solution surface we have
\[ \mathbf{v} \cdot \nabla f = 0, \]
or v is perpendicular to ∇f. Since ∇f is normal to the surface, v = (a, b, c) is tangent to the surface. Geometrically, v defines a direction field, called the characteristic field. These are shown in Figure 1.2.

[Figure 1.2: The normal to the integral surface, ∇f = (ux, uy, −1), and the tangent vector, v = (a, b, c), are orthogonal.]


1.3.2 Characteristics

We seek the forms of the characteristic curves such as the one shown in Figure 1.2. Recall that one can parametrize space curves,
\[ \mathbf{c}(t) = (x(t), y(t), u(t)), \quad t \in [t_1, t_2]. \]
The tangent to the curve is then
\[ \mathbf{v}(t) = \frac{d\mathbf{c}(t)}{dt} = \left(\frac{dx}{dt}, \frac{dy}{dt}, \frac{du}{dt}\right). \]

However, in the last section we saw that v(t) = (a, b, c) for the partial differential equation a(x, y, u)ux + b(x, y, u)uy − c(x, y, u) = 0. This gives the parametric form of the characteristic curves as
\[ \frac{dx}{dt} = a, \quad \frac{dy}{dt} = b, \quad \frac{du}{dt} = c. \qquad (1.17) \]

Another form of these equations is found by relating the differentials, dx, dy, du, to the coefficients in the differential equation. Since x = x(t) and y = y(t), we have
\[ \frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{b}{a}. \]
Similarly, we can show that
\[ \frac{du}{dx} = \frac{c}{a}, \quad \frac{du}{dy} = \frac{c}{b}. \]
All of these relations can be summarized in the form
\[ dt = \frac{dx}{a} = \frac{dy}{b} = \frac{du}{c}. \qquad (1.18) \]

How do we use these characteristics to solve quasilinear partial differential equations? Consider the next example.

Example 1.2. Find the general solution: ux + uy − u = 0.

We first identify a = 1, b = 1, and c = u. The relations between the differentials are
\[ \frac{dx}{1} = \frac{dy}{1} = \frac{du}{u}. \]
We can pair the differentials in three ways:
\[ \frac{dy}{dx} = 1, \quad \frac{du}{dx} = u, \quad \frac{du}{dy} = u. \]
Only two of these relations are independent. We focus on the first pair.

The first equation gives the characteristic curves in the xy-plane. This equation is easily solved to give
\[ y = x + c_1. \]
The second equation can be solved to give u = c₂ eˣ.


The goal is to find the general solution to the differential equation. Since u = u(x, y), the integration “constant” is not really a constant, but is constant with respect to x. It is in fact an arbitrary constant function. In fact, we could view it as a function of c1, the constant of integration in the first equation. Thus, we let c2 = G(c1) for G an arbitrary function. Since c1 = y − x, we can write the general solution of the differential equation as
\[ u(x, y) = G(y - x)\, e^x. \]
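Not part of the original notes: a minimal numerical sketch of the method of characteristics for this example, assuming SciPy is available and using the hypothetical side condition u(x, 0) = sin x purely for illustration. Integrating the characteristic system reproduces the value predicted by the general solution u = G(y − x)eˣ.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Characteristic system for u_x + u_y - u = 0:
#   dx/dt = 1, dy/dt = 1, du/dt = u
def rhs(t, s):
    x, y, u = s
    return [1.0, 1.0, u]

# Hypothetical side condition u(x, 0) = sin(x), imposed at y = 0
x0 = 0.7
sol = solve_ivp(rhs, [0.0, 2.0], [x0, 0.0, np.sin(x0)], dense_output=True)

# For this side condition the general solution gives u(x, y) = sin(x - y) e^y
x, y, u_num = sol.sol(1.3)
print(u_num, np.sin(x - y) * np.exp(y))  # the two values should agree
```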

Example 1.3. Solve the advection equation, ut + cux = 0, for c a constant, and u = u(x, t), |x| < ∞, t > 0.

The characteristic equations are
\[ d\tau = \frac{dt}{1} = \frac{dx}{c} = \frac{du}{0} \qquad (1.19) \]
and the parametric equations are given by
\[ \frac{dx}{d\tau} = c, \quad \frac{du}{d\tau} = 0. \qquad (1.20) \]
These equations imply that

• u = const. = c1.

• x = ct + const. = ct + c2.

As before, we can write c1 as an arbitrary function of c2. However, before doing so, let's replace c2 with the variable ξ. Then we have
\[ \xi = x - ct, \quad u(x, t) = f(\xi) = f(x - ct), \]
where f is an arbitrary function. Furthermore, we see that u(x, t) = f(x − ct) indicates that the solution is a wave moving in one direction in the shape of the initial function, f(x). This is known as a traveling wave. A typical traveling wave is shown in Figure 1.3.

[Figure 1.3: Depiction of a traveling wave. u(x, t) = f(x) at t = 0 travels without changing shape.]
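The following matplotlib sketch, not in the original text, illustrates such a traveling wave, assuming a Gaussian initial profile f(x) = e^{−x²} chosen purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Traveling-wave solution u(x, t) = f(x - c t) of u_t + c u_x = 0
f = lambda x: np.exp(-x**2)   # assumed initial profile (illustrative)
c = 1.0                       # assumed constant wave speed

x = np.linspace(-3, 6, 400)
for t in [0.0, 1.5, 3.0]:
    plt.plot(x, f(x - c*t), label=f"t = {t}")

plt.xlabel("x"); plt.ylabel("u"); plt.legend()
plt.show()
```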

Note that since u = u(x, t), we have
\[
\begin{aligned}
0 = u_t + c u_x &= \frac{\partial u}{\partial t} + \frac{dx}{dt}\frac{\partial u}{\partial x} \\
&= \frac{d\, u(x(t), t)}{dt}.
\end{aligned} \qquad (1.21)
\]
This implies that u(x, t) = constant along the characteristics, dx/dt = c.

As with ordinary differential equations, the general solution provides an infinite number of solutions of the differential equation. If we want to pick out a particular solution, we need to specify some side conditions. We investigate this by way of examples.

Example 1.4. Find solutions of ux + uy − u = 0 subject to u(x, 0) = 1.


We found the general solution to the partial differential equation as u(x, y) = G(y − x)eˣ. The side condition tells us that u = 1 along y = 0. This requires
\[ 1 = u(x, 0) = G(-x)\, e^x. \]
Thus, G(−x) = e^{−x}. Replacing x with −z, we find
\[ G(z) = e^z. \]
Thus, the side condition has allowed for the determination of the arbitrary function G(y − x). Inserting this function, we have
\[ u(x, y) = G(y - x)\, e^x = e^{y - x}\, e^x = e^y. \]

Side conditions could be placed on other curves. For the general line, y = mx + d, we have u(x, mx + d) = g(x) and for x = d, u(d, y) = g(y). As we will see, it is possible that a given side condition may not yield a solution. We will see that conditions have to be given on non-characteristic curves in order to be useful.

Example 1.5. Find solutions of 3ux − 2uy + u = x for a) u(x, x) = x and b) u(x, y) = 0 on 3y + 2x = 1.

Before applying the side condition, we find the general solution of the partial differential equation. Rewriting the differential equation in standard form, we have
\[ 3u_x - 2u_y = x - u. \]
The characteristic equations are
\[ \frac{dx}{3} = \frac{dy}{-2} = \frac{du}{x - u}. \qquad (1.22) \]

These equations imply that

• −2 dx = 3 dy. This implies that the characteristic curves (lines) are 2x + 3y = c1.

• du/dx = (1/3)(x − u). This is a linear first order differential equation, du/dx + (1/3)u = (1/3)x. It can be solved using the integrating factor,
\[ \mu(x) = \exp\left(\frac{1}{3}\int^x d\xi\right) = e^{x/3}. \]
Then,
\[
\begin{aligned}
\frac{d}{dx}\left(u\, e^{x/3}\right) &= \frac{1}{3}\, x\, e^{x/3}, \\
u\, e^{x/3} &= \frac{1}{3}\int^x \xi\, e^{\xi/3}\, d\xi + c_2 \\
&= (x - 3)\, e^{x/3} + c_2, \\
u(x, y) &= x - 3 + c_2\, e^{-x/3}.
\end{aligned} \qquad (1.23)
\]
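As a check not in the original notes, the linear ODE along the characteristics can be handed to SymPy's dsolve, which returns the same one-parameter family.

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

# ODE along the characteristics in Example 1.5: du/dx + u/3 = x/3
sol = sp.dsolve(sp.Eq(u(x).diff(x) + u(x)/3, x/3), u(x))
print(sol)  # u(x) = C1*exp(-x/3) + x - 3
```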


As before, we write c2 as an arbitrary function of c1 = 2x + 3y. This gives the general solution
\[ u(x, y) = x - 3 + G(2x + 3y)\, e^{-x/3}. \]
Note that this is the same answer that we had found in Example 1.1.

Now we can look at any side conditions and use them to determine particular solutions by picking out specific G's.

a) u(x, x) = x.

This states that u = x along the line y = x. Inserting this condition into the general solution, we have
\[ x = x - 3 + G(5x)\, e^{-x/3}, \]
or
\[ G(5x) = 3\, e^{x/3}. \]
Letting z = 5x,
\[ G(z) = 3\, e^{z/15}. \]
The particular solution satisfying this side condition is
\[
\begin{aligned}
u(x, y) &= x - 3 + G(2x + 3y)\, e^{-x/3} \\
&= x - 3 + 3\, e^{(2x + 3y)/15}\, e^{-x/3} \\
&= x - 3 + 3\, e^{(y - x)/5}.
\end{aligned} \qquad (1.24)
\]

This surface is shown in Figure 1.4.

[Figure 1.4: Integral surface found in Example 1.5.]

In Figure 1.5 we superimpose the values of u(x, y) along the characteristic curves. The characteristic curves are the red lines and the images of these curves are the black lines. The side condition is indicated with the blue curve drawn along the surface.

[Figure 1.5: Integral surface with side condition and characteristics for Example 1.5.]

The values of u(x, y) are found from the side condition as follows. For x = ξ on the blue curve, we know that y = ξ and u(ξ, ξ) = ξ. Now, the characteristic lines are given by 2x + 3y = c1. The constant c1 is found on the blue curve from the point of intersection with one of the black characteristic lines. For x = y = ξ, we have c1 = 5ξ. Then, the equation of the characteristic line, which is red in Figure 1.5, is given by y = (1/3)(5ξ − 2x).

Along these lines we need to find u(x, y) = x − 3 + c2 e^{−x/3}. First we have to find c2. We have, on the blue curve,
\[ \xi = u(\xi, \xi) = \xi - 3 + c_2\, e^{-\xi/3}. \qquad (1.25) \]
Therefore, c2 = 3e^{ξ/3}. Inserting this result into the expression for the solution, we have
\[ u(x, y) = x - 3 + 3\, e^{(\xi - x)/3}. \]
So, for each ξ, one can draw a family of space curves
\[ \left(x, \frac{1}{3}(5\xi - 2x), x - 3 + 3\, e^{(\xi - x)/3}\right) \]
yielding the integral surface.


b) u(x, y) = 0 on 3y + 2x = 1.

For this condition, we have
\[ 0 = x - 3 + G(1)\, e^{-x/3}. \]
We note that G is not a function in this expression. We only have one value for G. So, we cannot solve for G(x). Geometrically, this side condition corresponds to one of the black curves in Figure 1.5.

1.4 Applications

1.4.1 Conservation Laws

There are many applications of quasilinear equations, especially in fluid dynamics. The advection equation is one such example, and generalizations of this example to nonlinear equations lead to some interesting problems. These equations fall into a category of equations called conservation laws. We will first discuss one-dimensional (in space) conservation laws and then look at simple examples of nonlinear conservation laws.

Conservation laws are useful in modeling several systems. They can be boiled down to determining the rate of change of some stuff, Q(t), in a region, a ≤ x ≤ b, as depicted in Figure 1.6. The simplest model is to think of fluid flowing in one dimension, such as water flowing in a stream. Or, it could be the transport of mass, such as a pollutant. One could think of traffic flow down a straight road.

[Figure 1.6: The rate of change of Q between x = a and x = b depends on the rates of flow through each end.]

This is an example of a typical mixing problem. The rate of change of Q(t) is given as
\[ \text{rate of change of } Q = \text{Rate In} - \text{Rate Out} + \text{source term}. \]

Here the “Rate In” is how much is flowing into the region in Figure 1.6 from the x = a boundary. Similarly, the “Rate Out” is how much is flowing into the region from the x = b boundary. [Of course, this could be the other way, but we can imagine for now that Q is flowing from left to right.] We can describe this flow in terms of the flux, φ(x, t), over the ends of the region. On the left side we have a gain of φ(a, t) and on the right side of the region there is a loss of φ(b, t).

The source term would be some other means of adding or removing Q from the region. In terms of fluid flow, there could be a source of fluid inside the region, such as a faucet adding more water. Or, there could be a drain letting water escape. We can denote this by the total source over the interval, ∫ₐᵇ f(x, t) dx. Here f(x, t) is the source density.

In summary, the rate of change of Q(t) can be written as
\[ \frac{dQ}{dt} = \phi(a, t) - \phi(b, t) + \int_a^b f(x, t)\, dx. \]

We can write this in a slightly different form by noting that φ(a, t) − φ(b, t) can be viewed as the evaluation of antiderivatives in the Fundamental Theorem of Calculus. Namely, we can recall that
\[ \int_a^b \frac{\partial \phi(x, t)}{\partial x}\, dx = \phi(b, t) - \phi(a, t). \]

The difference is not exactly in the order that we desire, but it is easy to see that
\[ \frac{dQ}{dt} = -\int_a^b \frac{\partial \phi(x, t)}{\partial x}\, dx + \int_a^b f(x, t)\, dx. \qquad (1.26) \]

This is the integral form of the conservation law.

We can rewrite the conservation law in differential form. First, we introduce the density function, u(x, t), so that the total amount of stuff at a given time is
\[ Q(t) = \int_a^b u(x, t)\, dx. \]

Introducing this form into the integral conservation law, we have
\[ \frac{d}{dt} \int_a^b u(x, t)\, dx = -\int_a^b \frac{\partial \phi}{\partial x}\, dx + \int_a^b f(x, t)\, dx. \qquad (1.27) \]

Assuming that a and b are fixed in time and that the integrand is continuous, we can bring the time derivative inside the integrand and collect the three terms into one to find
\[ \int_a^b \left(u_t(x, t) + \phi_x(x, t) - f(x, t)\right) dx = 0, \quad \forall x \in [a, b]. \]
We cannot simply set the integrand to zero just because the integral vanishes. However, if this result holds for every region [a, b], then we can conclude the integrand vanishes. So, under that assumption, we have the local conservation law,
\[ u_t(x, t) + \phi_x(x, t) = f(x, t). \qquad (1.28) \]

This partial differential equation is actually an equation in terms of two unknown functions, assuming we know something about the source function. We would like to have a single unknown function. So, we need some additional information. This added information comes from the constitutive relation, a function relating the flux to the density function. Namely, we will assume that we can find the relationship φ = φ(u). If so, then we can write
\[ \frac{\partial \phi}{\partial x} = \frac{d\phi}{du}\frac{\partial u}{\partial x}, \]
or φx = φ′(u)ux.


Example 1.6. Inviscid Burgers' Equation. Find the equation satisfied by u(x, t) for φ(u) = (1/2)u² and f(x, t) ≡ 0.

For this flux function we have φx = φ′(u)ux = uux. The resulting equation is then ut + uux = 0. This is the inviscid Burgers' equation. We will later discuss Burgers' equation.

Example 1.7. Traffic Flow. This is a simple model of one-dimensional traffic flow. Let u(x, t) be the density of cars. Assume that there is no source term. For example, there is no way for a car to disappear from the flow by turning off the road or falling into a sinkhole. Also, there is no source of additional cars.

Let φ(x, t) denote the number of cars per hour passing position x at time t. Note that the units are given by cars/mi times mi/hr. Thus, we can write the flux as φ = uv, where v is the velocity of the cars at position x and time t.

[Figure 1.7: Car velocity as a function of car density.]

In order to continue we need to assume a relationship between the car velocity and the car density. Let's assume the simplest form, a linear relationship. The more dense the traffic, the slower we expect the cars to move. So, a function similar to that in Figure 1.7 is in order. This is a straight line between the two intercepts (0, v1) and (u1, 0). It is easy to determine the equation of this line. Namely, the relationship is given as
\[ v = v_1 - \frac{v_1}{u_1}\, u. \]
This gives the flux as
\[ \phi = u v = v_1\left(u - \frac{u^2}{u_1}\right). \]
We can now write the equation for the car density,
\[
\begin{aligned}
0 &= u_t + \phi'(u)\, u_x \\
&= u_t + v_1\left(1 - \frac{2u}{u_1}\right) u_x.
\end{aligned} \qquad (1.29)
\]
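A small SymPy check, not from the text, of the flux derivative used above under the assumed linear velocity model v = v1(1 − u/u1).

```python
import sympy as sp

u, u1, v1 = sp.symbols('u u1 v1', positive=True)

# Linear velocity model and the resulting flux phi = u*v
v = v1 * (1 - u/u1)
phi = u * v

# Characteristic speed phi'(u) appearing in u_t + phi'(u) u_x = 0
c = sp.simplify(sp.diff(phi, u))
print(c)                          # v1*(1 - 2*u/u1), up to rearrangement
print(sp.solve(sp.Eq(c, 0), u))   # the speed changes sign at u = u1/2
```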

1.4.2 Nonlinear Advection Equations

In this section we consider equations of the form ut + c(u)ux = 0. When c(u) is a constant function, we have the advection equation. In the last two examples we have seen cases in which c(u) is not a constant function. We will apply the method of characteristics to these equations. First, we will recall how the method works for the advection equation.

The advection equation is given by ut + cux = 0. The characteristic equations are given by
\[ \frac{dx}{dt} = c, \quad \frac{du}{dt} = 0. \]
These are easily solved to give the result that
\[ u(x, t) = \text{constant along the lines } x = ct + x_0, \]
where x0 is an arbitrary constant.


The characteristic lines are shown in Figure 1.8. We note that u(x, t) = u(x0, 0) = f(x0). So, if we know u initially, we can determine what u is at a later time.

[Figure 1.8: The characteristic lines in the xt-plane.]

In Figure 1.8 we see that the value of u(x0, 0) at t = 0 and x = x0 propagates along the characteristic to a point at time t = t1. From x − ct = x0, we can solve for x in terms of t1 and find that u(x0 + ct1, t1) = u(x0, 0).

Plots of solutions u(x, t) versus x for specific times give traveling waves as shown in Figure 1.3. In Figure 1.9 we show how each wave profile for different times is constructed for a given initial condition.

[Figure 1.9: For each x = x0 at t = 0, u(x0 + ct, t) = u(x0, 0).]

The nonlinear advection equation is given by ut + c(u)ux = 0, |x| < ∞. Let u(x, 0) = u0(x) be the initial profile. The characteristic equations are given by
\[ \frac{dx}{dt} = c(u), \quad \frac{du}{dt} = 0. \]
These are solved to give the result that
\[ u(x, t) = \text{constant}, \]
along the characteristic curves x′(t) = c(u). The lines passing through (x0, 0), where u(x0, 0) = u0(x0), have slope 1/c(u0(x0)).

Example 1.8. Solve ut + uux = 0, u(x, 0) = e^{−x²}.

For this problem u = constant along
\[ \frac{dx}{dt} = u. \]
Since u is constant, this equation can be integrated to yield x = u(x0, 0)t + x0. Inserting the initial condition, x = e^{−x₀²} t + x0. Therefore, the solution is
\[ u(x, t) = e^{-x_0^2} \quad \text{along} \quad x = e^{-x_0^2}\, t + x_0. \]

In Figure 1.10 the characteristics are shown. In this case we see that the characteristics intersect. In Figure 1.11 we look more specifically at the intersection of the characteristic lines for x0 = 0 and x0 = 1. These are approximately the first lines to intersect; i.e., there are (almost) no intersections at earlier times. At the intersection point the function u(x, t) appears to take on more than one value. For the case shown, the solution wants to take the values u = 1/e and u = 1.

[Figure 1.10: The characteristic lines in the xt-plane for the nonlinear advection equation.]

[Figure 1.11: The characteristic lines for x0 = 0, 1 in the xt-plane for the nonlinear advection equation.]

In Figure 1.12 we see the development of the solution. This is found using a parametric plot of the points (x0 + t e^{−x₀²}, e^{−x₀²}) for different times. The initial profile propagates to the right with the higher points traveling faster than the lower points since x′(t) = u > 0. Around t = 1.0 the wave breaks and becomes multivalued. The time at which the function becomes multivalued is called the breaking time.

[Figure 1.12: The development of a gradient catastrophe in Example 1.8 leading to a multivalued function (wave profiles at t = 0.0, 0.5, 1.0, 1.5, 2.0).]
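A minimal matplotlib sketch, not in the original text, reproducing the parametric construction described above for u(x, 0) = e^{−x²}.

```python
import numpy as np
import matplotlib.pyplot as plt

# Each initial point x0 carries its value exp(-x0**2) along the line
# x = x0 + t*exp(-x0**2), so the profile at time t is traced parametrically.
x0 = np.linspace(-3.0, 3.0, 400)
u0 = np.exp(-x0**2)

for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    plt.plot(x0 + t*u0, u0, label=f"t = {t}")

plt.xlabel("x"); plt.ylabel("u"); plt.legend()
plt.show()
```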

1.4.3 The Breaking Time

In the last example we saw that for nonlinear wave speeds a gradient catastrophe might occur. The first time at which a catastrophe occurs is called the breaking time. We will determine the breaking time for the nonlinear advection equation, ut + c(u)ux = 0. For the characteristic corresponding to x0 = ξ, the wavespeed is given by
\[ F(\xi) = c(u_0(\xi)) \]
and the characteristic line is given by
\[ x = \xi + t F(\xi). \]
The value of the wave function along this characteristic is u0(ξ) = u(ξ, 0), so
\[ u(x, t) = u(\xi + t F(\xi), t) = u_0(\xi). \qquad (1.30) \]

Therefore, the solution is
\[ u(x, t) = u_0(\xi) \quad \text{along} \quad x = \xi + t F(\xi). \]


This means that
\[ u_x = u_0'(\xi)\, \xi_x \quad \text{and} \quad u_t = u_0'(\xi)\, \xi_t. \]
We can determine ξx and ξt using the characteristic line
\[ \xi = x - t F(\xi). \]
Then, we have
\[
\begin{aligned}
\xi_x &= 1 - t F'(\xi)\, \xi_x = \frac{1}{1 + t F'(\xi)}, \\
\xi_t &= \frac{\partial}{\partial t}\left(x - t F(\xi)\right) = -F(\xi) - t F'(\xi)\, \xi_t = \frac{-F(\xi)}{1 + t F'(\xi)}.
\end{aligned} \qquad (1.31)
\]
Note that ξx and ξt are undefined if the denominator in both expressions vanishes, 1 + tF′(ξ) = 0, or at time
\[ t = -\frac{1}{F'(\xi)}. \]

The minimum time for this to happen is the breaking time,
\[ t_b = \min_{\xi}\left\{-\frac{1}{F'(\xi)}\right\}. \qquad (1.32) \]

Example 1.9. Find the breaking time for ut + uux = 0, u(x, 0) = e^{−x²}.

Since c(u) = u, we have
\[ F(\xi) = c(u_0(\xi)) = e^{-\xi^2} \]
and
\[ F'(\xi) = -2\xi\, e^{-\xi^2}. \]
This gives
\[ t = \frac{1}{2\xi\, e^{-\xi^2}}. \]
We need to find the minimum time. Thus, we set the derivative equal to zero and solve for ξ:
\[ 0 = \frac{d}{d\xi}\left(\frac{e^{\xi^2}}{2\xi}\right) = \left(2 - \frac{1}{\xi^2}\right)\frac{e^{\xi^2}}{2}. \qquad (1.33) \]
Thus, the minimum occurs for 2 − 1/ξ² = 0, or ξ = 1/√2. This gives
\[ t_b = t\left(\frac{1}{\sqrt{2}}\right) = \frac{1}{2\,\frac{1}{\sqrt{2}}\, e^{-1/2}} = \sqrt{\frac{e}{2}} \approx 1.16. \qquad (1.34) \]
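A SymPy verification of this breaking time, not in the original notes.

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)

# F(xi) = c(u0(xi)) = exp(-xi**2) for u_t + u u_x = 0, u(x, 0) = exp(-x**2)
F = sp.exp(-xi**2)

# Candidate breaking times t = -1/F'(xi); minimize over xi > 0
t_of_xi = -1/sp.diff(F, xi)                 # equals exp(xi**2)/(2*xi)
crit = sp.solve(sp.diff(t_of_xi, xi), xi)   # critical point xi = 1/sqrt(2)
tb = min(t_of_xi.subs(xi, c) for c in crit)
print(tb, float(tb))                        # sqrt(2)*exp(1/2)/2 = sqrt(e/2) ≈ 1.166
```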


1.4.4 Shock Waves

Solutions of nonlinear advection equations can become multivalued due to a gradient catastrophe. Namely, the derivatives ut and ux become undefined. We would like to extend solutions past the catastrophe. However, this leads to the possibility of discontinuous solutions. Such solutions, which may not be differentiable or continuous in the domain, are known as weak solutions. In particular, consider the initial value problem
\[ u_t + \phi_x = 0, \quad x \in \mathbb{R}, \quad t > 0, \quad u(x, 0) = u_0(x). \]
Then, u(x, t) is a weak solution of this problem if
\[ \int_0^\infty \int_{-\infty}^{\infty} \left[u v_t + \phi v_x\right] dx\, dt + \int_{-\infty}^{\infty} u_0(x)\, v(x, 0)\, dx = 0 \]
for all smooth functions v ∈ C^∞(ℝ × [0, ∞)) with compact support, i.e., v ≡ 0 outside some compact subset of the domain.
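A brief aside, not in the original notes, on where this condition comes from: if u is smooth, multiplying the PDE by a test function v and integrating by parts in t and in x (the boundary terms vanish because v has compact support) gives
\[
0 = \int_0^\infty \int_{-\infty}^{\infty} (u_t + \phi_x)\, v\, dx\, dt
  = -\int_0^\infty \int_{-\infty}^{\infty} \left[u v_t + \phi v_x\right] dx\, dt
    - \int_{-\infty}^{\infty} u(x, 0)\, v(x, 0)\, dx.
\]
Replacing u(x, 0) by u0(x) and changing the overall sign yields the displayed condition, which no longer requires u to be differentiable.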

[Figure 1.13: The shock solution after the breaking time (profiles at t = 1.5, 1.75, 2.0).]

Effectively, the weak solution that evolves will be a piecewise smooth function with a discontinuity, the shock wave, that propagates with shock speed. It can be shown that the form of the shock will be the discontinuity shown in Figure 1.13 such that the areas cut from the solutions will cancel, leaving the total area under the solution constant. [See G. B. Whitham's Linear and Nonlinear Waves, 1973.] We will consider the discontinuity as shown in Figure 1.14.

[Figure 1.14: Depiction of the jump discontinuity at the shock position.]

We can find the equation for the shock path by using the integral form of the conservation law,
\[ \frac{d}{dt}\int_a^b u(x, t)\, dx = \phi(a, t) - \phi(b, t). \]

Recall that one can differentiate under the integral if u(x, t) and ut(x, t) are continuous in x and t in an appropriate subset of the domain. In particular, we will integrate over the interval [a, b] as shown in Figure 1.15. The domains on either side of the shock path are denoted as R+ and R−, and the limits of x(t) and u(x, t) as one approaches from the left of the shock are denoted by x_s^−(t) and u^− = u(x_s^−, t). Similarly, the limits of x(t) and u(x, t) as one approaches from the right of the shock are denoted by x_s^+(t) and u^+ = u(x_s^+, t).

[Figure 1.15: Domains on either side of the shock path are denoted as R+ and R−.]

We need to be careful in differentiating under the integral:
\[
\begin{aligned}
\frac{d}{dt}\int_a^b u(x, t)\, dx &= \frac{d}{dt}\left[\int_a^{x_s^-(t)} u(x, t)\, dx + \int_{x_s^+(t)}^b u(x, t)\, dx\right] \\
&= \int_a^{x_s^-(t)} u_t(x, t)\, dx + \int_{x_s^+(t)}^b u_t(x, t)\, dx \\
&\quad + u(x_s^-, t)\frac{dx_s^-}{dt} - u(x_s^+, t)\frac{dx_s^+}{dt} \\
&= \phi(a, t) - \phi(b, t).
\end{aligned} \qquad (1.35)
\]


Taking the limits a → x_s^− and b → x_s^+, we have that
\[ \left(u(x_s^-, t) - u(x_s^+, t)\right)\frac{dx_s}{dt} = \phi(x_s^-, t) - \phi(x_s^+, t). \]
Adopting the notation
\[ [f] = f(x_s^+) - f(x_s^-), \]
we arrive at the Rankine-Hugoniot jump condition,
\[ \frac{dx_s}{dt} = \frac{[\phi]}{[u]}. \qquad (1.36) \]

This gives the equation for the shock path, as will be shown in the next example.

Example 1.10. Consider the problem ut + uux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[ u(x, 0) = \begin{cases} 1, & x \le 0, \\ 0, & x > 0. \end{cases} \]

[Figure 1.16: Initial condition and characteristics for Example 1.10.]

The characteristics for this partial differential equation are familiar by now. The initial condition and characteristics are shown in Figure 1.16. From x′(t) = u, there are two possibilities. If u = 0, then we have a constant. If u = 1 along the characteristics, then we have straight lines of slope one. Therefore, the characteristics are given by
\[ x(t) = \begin{cases} x_0, & x > 0, \\ t + x_0, & x < 0. \end{cases} \]

As seen in Figure 1.16, the characteristics intersect immediately at t = 0. The shock path is found from the Rankine-Hugoniot jump condition. We first note that φ(u) = (1/2)u², since φx = uux. Then, we have
\[
\begin{aligned}
\frac{dx_s}{dt} &= \frac{[\phi]}{[u]} \\
&= \frac{\frac{1}{2}{u^+}^2 - \frac{1}{2}{u^-}^2}{u^+ - u^-} \\
&= \frac{1}{2}\frac{(u^+ + u^-)(u^+ - u^-)}{u^+ - u^-} \\
&= \frac{1}{2}(u^+ + u^-) \\
&= \frac{1}{2}(0 + 1) = \frac{1}{2}.
\end{aligned} \qquad (1.37)
\]

Now we need only solve the ordinary differential equation x′_s(t) = 1/2 with initial condition x_s(0) = 0. This gives x_s(t) = t/2. This line separates the characteristics on the left and right side of the shock solution. The solution is given by
\[ u(x, t) = \begin{cases} 1, & x \le t/2, \\ 0, & x > t/2. \end{cases} \]

[Figure 1.17: The characteristic lines end at the shock path (in red). On the left u = 1 and on the right u = 0.]

In Figure 1.17 we show the characteristic lines ending at the shock path (in red), with u = 0 on the right and u = 1 on the left of the shock path. This is consistent with the solution. One just sees the initial step function moving to the right with speed 1/2 without changing shape.
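Not in the original notes: a two-line numerical evaluation of the Rankine-Hugoniot speed for this example, with the Burgers flux φ(u) = u²/2 and the left/right states u⁻ = 1, u⁺ = 0.

```python
# Shock speed dx_s/dt = [phi]/[u] across the discontinuity
def shock_speed(phi, u_minus, u_plus):
    return (phi(u_plus) - phi(u_minus)) / (u_plus - u_minus)

phi = lambda u: 0.5 * u**2          # Burgers flux from Example 1.6
print(shock_speed(phi, 1.0, 0.0))   # 0.5, so x_s(t) = t/2
```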


1.4.5 Rarefaction Waves

Shocks are not the only type of solutions encountered when the velocity is a function of u. There may sometimes be regions where the characteristic lines do not appear. A simple example is the following.

Example 1.11. Draw the characteristics for the problem ut + uux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[ u(x, 0) = \begin{cases} 0, & x \le 0, \\ 1, & x > 0. \end{cases} \]

[Figure 1.18: Initial condition and characteristics for Example 1.14.]

In this case the solution is zero for negative values of x and positive for positive values of x, as shown in Figure 1.18. Since the wavespeed is given by u, the u = 1 initial values have the waves on the right moving to the right and the values on the left stay fixed. This leads to the characteristics in Figure 1.18 showing a region in the xt-plane that has no characteristics. In this section we will discover how to fill in the missing characteristics and, thus, the details about the solution between the u = 0 and u = 1 values.

As motivation, we consider a smoothed out version of this problem.

Example 1.12. Draw the characteristics for the initial condition
\[ u(x, 0) = \begin{cases} 0, & x \le -\epsilon, \\ \dfrac{x + \epsilon}{2\epsilon}, & |x| \le \epsilon, \\ 1, & x > \epsilon. \end{cases} \]

The function is shown in the top graph in Figure 1.19. The leftmost and rightmost characteristics are the same as in the previous example. The only new part is determining the equations of the characteristics for |x| ≤ ε. These are found using the method of characteristics as
\[ x = \xi + u_0(\xi)\, t, \quad u_0(\xi) = \frac{\xi + \epsilon}{2\epsilon}. \]
These characteristics are drawn in Figure 1.19 in red. Note that these lines take on slopes varying from infinite slope to slope one, corresponding to speeds going from zero to one.

[Figure 1.19: The function and characteristics for the smoothed step function.]

Comparing the last two examples, we see that as ε approaches zero, the last example converges to the previous example. The characteristics in the region where there were none become a “fan”. (Characteristics for rarefaction, or expansion, waves are fan-like characteristics.) We can see this as follows.

Since |ξ| < ε for the fan region, as ε gets small, so does this interval. Let's scale ξ as ξ = σε, σ ∈ [−1, 1]. Then,

\[ x = \sigma\epsilon + u_0(\sigma\epsilon)\, t, \quad u_0(\sigma\epsilon)\, t = \frac{\sigma\epsilon + \epsilon}{2\epsilon}\, t = \frac{1}{2}(\sigma + 1)\, t. \]
For each σ ∈ [−1, 1] there is a characteristic. Letting ε → 0, we have
\[ x = ct, \quad c = \frac{1}{2}(\sigma + 1). \]


Thus, we have a family of straight characteristic lines in the xt-plane passing through (0, 0) of the form x = ct for c varying from c = 0 to c = 1. These are shown as the red lines in Figure 1.20.

The fan characteristics can be written as x/t = constant. So, we can seek to determine these characteristics analytically and in a straightforward manner by seeking solutions of the form u(x, t) = g(x/t).

Figure 1.20: The characteristics for Example 1.11 showing the “fan” characteristics.

Example 1.13. Determine solutions of the form u(x, t) = g(x/t) to ut + uux = 0.

[Seek rarefaction fan waves using u(x, t) = g(x/t).]

Inserting this guess into the differential equation, we have
\[
0 = u_t + uu_x = \frac{1}{t}\,g'\!\left(\frac{x}{t}\right)\left(g - \frac{x}{t}\right). \tag{1.38}
\]
Thus, either g′ = 0 or g = x/t. The first case will not work since this gives constant solutions. The second solution is exactly what we had obtained before. Recall that solutions along characteristics give u(x, t) = x/t = constant. The characteristics and solutions for t = 0, 1, 2 are shown in Figure 1.21. At a specific time one can draw a horizontal line (dashed lines in the figure) and follow the characteristics back to the t = 0 values, u(ξ, 0), in order to construct u(x, t).

Figure 1.21: The characteristics and solutions for t = 0, 1, 2 for Example 1.13.
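The construction of u(x, t) at a fixed time is easy to carry out numerically. Here is a minimal sketch (Python with NumPy and Matplotlib assumed; names are illustrative) evaluating the rarefaction solution u = 0 for x ≤ 0, u = x/t in the fan, and u = 1 for x ≥ t:

```python
import numpy as np
import matplotlib.pyplot as plt

def u(x, t):
    """Rarefaction solution of u_t + u u_x = 0 with the step-up initial data."""
    if t == 0:
        return np.where(x > 0, 1.0, 0.0)
    return np.clip(x / t, 0.0, 1.0)   # 0 for x <= 0, x/t inside the fan, 1 for x >= t

x = np.linspace(-2, 4, 400)
for t in (0, 1, 2):
    plt.plot(x, u(x, t), label=f"t = {t}")
plt.xlabel("x"); plt.ylabel("u(x,t)"); plt.legend(); plt.show()
```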

As a last example, let's investigate a nonlinear model which possesses both shock and rarefaction waves.

Example 1.14. Solve the initial value problem ut + u²ux = 0, |x| < ∞, t > 0, satisfying the initial condition
\[
u(x, 0) = \begin{cases} 0, & x \le 0, \\ 1, & 0 < x < 2, \\ 0, & x \ge 2. \end{cases}
\]


The method of characteristics gives
\[
\frac{dx}{dt} = u^2, \qquad \frac{du}{dt} = 0.
\]
Therefore,
\[
u(x, t) = u_0(\xi) = \text{const.} \quad \text{along the lines} \quad x(t) = u_0^2(\xi)\,t + \xi.
\]
There are three values of u0(ξ),
\[
u_0(\xi) = \begin{cases} 0, & \xi \le 0, \\ 1, & 0 < \xi < 2, \\ 0, & \xi \ge 2. \end{cases}
\]

In Figure 1.22 we see that there is a rarefaction and a gradient catastrophe.

Figure 1.22: In this example there occurs a rarefaction and a gradient catastrophe.

In order to fill in the fan characteristics, we need to find solutions u(x, t) = g(x/t). Inserting this guess into the differential equation, we have
\[
0 = u_t + u^2 u_x = \frac{1}{t}\,g'\!\left(\frac{x}{t}\right)\left(g^2 - \frac{x}{t}\right). \tag{1.39}
\]
Thus, either g′ = 0 or g² = x/t. The first case will not work since this gives constant solutions. The second solution gives
\[
g\!\left(\frac{x}{t}\right) = \sqrt{\frac{x}{t}}.
\]
Therefore, along the fan characteristics the solutions are u(x, t) = √(x/t) = constant. These fan characteristics are added in Figure 1.23.

Next, we turn to the shock path. We see that the first intersection occurs at the point (x, t) = (2, 0). The Rankine-Hugoniot condition gives
\[
\frac{dx_s}{dt} = \frac{[\phi]}{[u]}
= \frac{\tfrac{1}{3}u_+^3 - \tfrac{1}{3}u_-^3}{u_+ - u_-}
= \frac{\tfrac{1}{3}(u_+ - u_-)(u_+^2 + u_+u_- + u_-^2)}{u_+ - u_-}
= \frac{1}{3}(u_+^2 + u_+u_- + u_-^2)
= \frac{1}{3}(0 + 0 + 1) = \frac{1}{3}. \tag{1.40}
\]

Figure 1.23: The fan characteristics are added to the other characteristic lines.

Thus, the shock path is given by x′s(t) = 1/3 with initial condition xs(0) = 2. This gives xs(t) = t/3 + 2. In Figure 1.24 the shock path is shown in red with the fan characteristics and vertical lines meeting the path. Note that the fan lines and vertical lines cross the shock path. This leads to a change in the shock path.

Figure 1.24: The shock path is shown in red with the fan characteristics and vertical lines meeting the path.

The new path is found using the Rankine-Hugoniot condition with u+ = 0 and u− = √(xs/t). Thus,
\[
\frac{dx_s}{dt} = \frac{[\phi]}{[u]}
= \frac{\tfrac{1}{3}u_+^3 - \tfrac{1}{3}u_-^3}{u_+ - u_-}
= \frac{1}{3}\left(u_+^2 + u_+u_- + u_-^2\right)
= \frac{1}{3}\left(0 + 0 + \frac{x_s}{t}\right) = \frac{x_s}{3t}. \tag{1.41}
\]
We need to solve the initial value problem
\[
\frac{dx_s}{dt} = \frac{x_s}{3t}, \qquad x_s(3) = 3.
\]

This can be done using separation of variables. Namely,
\[
\int \frac{dx_s}{x_s} = \frac{1}{3}\int \frac{dt}{t}.
\]


This gives the solution
\[
\ln x_s = \frac{1}{3}\ln t + c, \qquad \text{or} \qquad x_s = Ct^{1/3}.
\]
Since the second shock solution starts at the point (3, 3), we can determine C = 3^{2/3}. This gives the shock path as
\[
x_s(t) = 3^{2/3}\,t^{1/3} = (9t)^{1/3}.
\]

In Figure 1.25 we show this shock path and the other characteristics ending on the path.

Figure 1.25: The second shock path is shown in red with the characteristics shown in all regions.

It is interesting to construct the solution at different times based on the characteristics. For a given time, t, one draws a horizontal line in the xt-plane and reads off the values of u(x, t) using the values at t = 0 and the rarefaction solutions. This is shown in Figure 1.26. The right discontinuity in the initial profile continues as a shock front until t = 3. At that time the back rarefaction wave has caught up to the shock. After t = 3, the shock propagates forward slightly slower and the height of the shock begins to decrease. Due to the fact that the partial differential equation is a conservation law, the area under the shock remains constant as it stretches and decays in amplitude.
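The profiles in Figure 1.26 can be reproduced with a short script. The following sketch (Python with NumPy and Matplotlib assumed) uses the shock paths found above, xs(t) = t/3 + 2 for t < 3 and xs(t) = (9t)^{1/3} for t ≥ 3, together with the fan solution u = √(x/t):

```python
import numpy as np
import matplotlib.pyplot as plt

def solution(x, t):
    """Piecewise solution of u_t + u^2 u_x = 0 for the box-shaped data of Example 1.14."""
    u = np.zeros_like(x)
    if t < 3:                                  # fan, then a plateau u = 1, then the shock
        xs = t / 3 + 2
        fan = (x > 0) & (x < t)
        u[fan] = np.sqrt(x[fan] / t)
        u[(x >= t) & (x < xs)] = 1.0
    else:                                      # the fan now runs all the way to the shock
        xs = (9 * t) ** (1 / 3)
        fan = (x > 0) & (x < xs)
        u[fan] = np.sqrt(x[fan] / t)
    return u

x = np.linspace(-1, 6, 600)
for t in (1, 2, 3, 4, 5):
    plt.plot(x, solution(x, t), label=f"t = {t}")
plt.xlabel("x"); plt.ylabel("u"); plt.legend(); plt.show()
```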

1.4.6 Traffic Flow

An interesting application is that of traffic flow. We had already derived the flux function. Let's investigate examples with varying initial conditions that lead to shock or rarefaction waves. As we had seen earlier in modeling traffic flow, we can consider the flux function
\[
\phi = uv = v_1\left(u - \frac{u^2}{u_1}\right),
\]
which leads to the conservation law
\[
u_t + v_1\left(1 - \frac{2u}{u_1}\right)u_x = 0.
\]
Here u(x, t) represents the density of the traffic, u1 is the maximum density, and v1 is the maximum speed (the speed of freely flowing traffic at zero density).


Figure 1.26: Solutions for the shock-rarefaction example at t = 0, 1, 2, 3, 4, 5.


First, consider the flow of traffic as it approaches a red light, as shown in Figure 1.27. The traffic that is stopped has reached the maximum density u1. The incoming traffic has a lower density, u0. For this red light problem, we consider the initial condition
\[
u(x, 0) = \begin{cases} u_0, & x < 0, \\ u_1, & x \ge 0. \end{cases}
\]

Figure 1.27: Cars approaching a red light.

The characteristics for this problem are given by
\[
x = c(u(x_0, t))\,t + x_0,
\]
where
\[
c(u(x_0, t)) = v_1\left(1 - \frac{2u(x_0, 0)}{u_1}\right).
\]
Since the initial condition is a piecewise-defined function, we need to consider two cases.

Figure 1.28: Initial condition and characteristics for the red light problem.

First, for x0 ≥ 0, we have
\[
c(u(x_0, t)) = c(u_1) = v_1\left(1 - \frac{2u_1}{u_1}\right) = -v_1.
\]
Therefore, the slopes of the characteristics, x = −v1t + x0, are −1/v1.

For x0 < 0, we have
\[
c(u(x_0, t)) = c(u_0) = v_1\left(1 - \frac{2u_0}{u_1}\right).
\]
So, the characteristics are x = v1(1 − 2u0/u1)t + x0.

Figure 1.29: The addition of the shock path for the red light problem.

In Figure 1.28 we plot the initial condition and the characteristics for x < 0 and x > 0. We see that there are crossing characteristics and they begin crossing at t = 0. Therefore, the breaking time is tb = 0. We need to find the shock path satisfying xs(0) = 0. The Rankine-Hugoniot condition gives
\[
\frac{dx_s}{dt} = \frac{[\phi]}{[u]}
= \frac{\phi(u_1) - \phi(u_0)}{u_1 - u_0}
= \frac{0 - v_1\left(u_0 - \dfrac{u_0^2}{u_1}\right)}{u_1 - u_0}
= -\frac{v_1 u_0 (u_1 - u_0)}{u_1(u_1 - u_0)}
= -\frac{v_1 u_0}{u_1}. \tag{1.42}
\]

Thus, the shock path is found as xs(t) = −(v1u0/u1) t.


In Figure 1.29 we show the shock path. In the top figure the red line shows the path. In the lower figure the characteristics are stopped on the shock path to give the complete picture of the characteristics. The picture was drawn with v1 = 2 and u0/u1 = 1/3.
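A brief sketch (Python with NumPy and Matplotlib assumed; the parameter values match the figure, and all names are illustrative) reproduces this picture by drawing the two families of characteristics and the shock path xs(t) = −(v1u0/u1)t:

```python
import numpy as np
import matplotlib.pyplot as plt

v1, ratio = 2.0, 1.0 / 3.0           # maximum speed and u0/u1, as in Figure 1.29
c_left = v1 * (1 - 2 * ratio)        # speed of characteristics carrying u = u0
s = -v1 * ratio                      # shock speed from the Rankine-Hugoniot condition
t = np.linspace(0, 2, 100)

for x0 in np.linspace(-4, 4, 33):
    c = c_left if x0 < 0 else -v1    # u = u0 on the left, u = u1 on the right
    x = x0 + c * t
    keep = x <= s * t if x0 < 0 else x >= s * t   # stop characteristics at the shock
    plt.plot(x[keep], t[keep], 'b', lw=0.7)

plt.plot(s * t, t, 'r', lw=2, label='shock path')
plt.xlabel('x'); plt.ylabel('t'); plt.legend(); plt.show()
```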

The next problem to consider is stopped traffic as the light turns green. The cars in Figure 1.30 begin to fan out when the traffic light turns green. In this model the initial condition is given by
\[
u(x, 0) = \begin{cases} u_1, & x \le 0, \\ 0, & x > 0. \end{cases}
\]

Figure 1.30: Cars begin to fan out when the traffic light turns green.

Again,
\[
c(u(x_0, t)) = v_1\left(1 - \frac{2u(x_0, 0)}{u_1}\right).
\]
Inserting the initial values of u into this expression, we obtain constant speeds, ±v1. The resulting characteristics are given by
\[
x(t) = \begin{cases} -v_1 t + x_0, & x_0 \le 0, \\ v_1 t + x_0, & x_0 > 0. \end{cases}
\]
This leads to a rarefaction wave with the solution in the rarefaction region given by
\[
u(x, t) = g(x/t) = \frac{1}{2}u_1\left(1 - \frac{1}{v_1}\frac{x}{t}\right).
\]

The characteristics are shown in Figure 1.31. The full solution is then
\[
u(x, t) = \begin{cases} u_1, & x \le -v_1 t, \\ g(x/t), & |x| < v_1 t, \\ 0, & x > v_1 t. \end{cases}
\]

Figure 1.31: The characteristics for the green light problem.
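The density profile is simple to evaluate at any time. Here is a minimal sketch (Python with NumPy and Matplotlib assumed; the values of u1 and v1 are illustrative, not prescribed by the text):

```python
import numpy as np
import matplotlib.pyplot as plt

u1, v1 = 200.0, 50.0                 # illustrative maximum density (cars/mi) and speed (mi/h)

def density(x, t):
    """Green-light rarefaction solution: u1 behind, 0 ahead, linear fan in between."""
    fan = 0.5 * u1 * (1 - x / (v1 * t))          # g(x/t) inside the fan
    return np.where(x <= -v1 * t, u1, np.where(x >= v1 * t, 0.0, fan))

x = np.linspace(-100, 100, 400)
for t in (0.5, 1.0, 2.0):
    plt.plot(x, density(x, t), label=f"t = {t} h")
plt.xlabel("x (mi)"); plt.ylabel("u (cars/mi)"); plt.legend(); plt.show()
```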

1.5 General First Order PDEs

We have spent time solving quasilinear first order partial differential equations. We now turn to nonlinear first order equations of the form
\[
F(x, y, u, u_x, u_y) = 0,
\]


for u = u(x, y). If we introduce new variables, p = ux and q = uy, then the differential equation takes the form
\[
F(x, y, u, p, q) = 0.
\]
Note that for u(x, y) a function with continuous second derivatives, we have
\[
p_y = u_{xy} = u_{yx} = q_x.
\]

We can view F = 0 as a surface in a five dimensional space. Since the arguments are functions of x and y, we have from the multivariable Chain Rule that
\[
\frac{dF}{dx} = F_x + F_u\frac{\partial u}{\partial x} + F_p\frac{\partial p}{\partial x} + F_q\frac{\partial q}{\partial x},
\]
so that
\[
0 = F_x + pF_u + p_x F_p + p_y F_q. \tag{1.43}
\]
This can be rewritten as a quasilinear equation for p(x, y):
\[
F_p p_x + F_q p_y = -F_x - pF_u.
\]
The characteristic equations are
\[
\frac{dx}{F_p} = \frac{dy}{F_q} = -\frac{dp}{F_x + pF_u}.
\]
Similarly, from dF/dy = 0 we have that
\[
\frac{dx}{F_p} = \frac{dy}{F_q} = -\frac{dq}{F_y + qF_u}.
\]

Furthermore, since u = u(x, y),
\[
\begin{aligned}
du &= \frac{\partial u}{\partial x}\,dx + \frac{\partial u}{\partial y}\,dy \\
&= p\,dx + q\,dy \\
&= p\,dx + q\frac{F_q}{F_p}\,dx \\
&= \left(p + q\frac{F_q}{F_p}\right)dx.
\end{aligned} \tag{1.44}
\]
Therefore,
\[
\frac{dx}{F_p} = \frac{du}{pF_p + qF_q}.
\]

Combining these results we have the Charpit equations,
\[
\frac{dx}{F_p} = \frac{dy}{F_q} = \frac{du}{pF_p + qF_q} = -\frac{dp}{F_x + pF_u} = -\frac{dq}{F_y + qF_u}. \tag{1.45}
\]

[The Charpit equations. These were named after the French mathematician Paul Charpit Villecourt, who was probably the first to present the method in his thesis the year of his death, 1784. His work was further extended in 1797 by Lagrange and given a geometric explanation by Gaspard Monge (1746-1818) in 1808. This method is often called the Lagrange-Charpit method.]

These equations can be used to find solutions of nonlinear first order partial differential equations as seen in the following examples.


Example 1.15. Find the general solution of ux² + yuy − u = 0.

First, we introduce ux = p and uy = q. Then,
\[
F(x, y, u, p, q) = p^2 + qy - u = 0.
\]
Next we identify
\[
F_p = 2p, \quad F_q = y, \quad F_u = -1, \quad F_x = 0, \quad F_y = q.
\]
Then,
\[
\begin{aligned}
pF_p + qF_q &= 2p^2 + qy, \\
F_x + pF_u &= -p, \\
F_y + qF_u &= q - q = 0.
\end{aligned}
\]
The Charpit equations are then
\[
\frac{dx}{2p} = \frac{dy}{y} = \frac{du}{2p^2 + qy} = \frac{dp}{p} = \frac{dq}{0}.
\]

The first conclusion is that q = c1 = constant. So, from the partial differential equation we have u = p² + c1y.

Since du = p dx + q dy = p dx + c1 dy, then
\[
du - c_1\,dy = \sqrt{u - c_1 y}\,dx.
\]
Therefore,
\[
\int \frac{d(u - c_1 y)}{\sqrt{u - c_1 y}} = \int dx \qquad \Longrightarrow \qquad 2\sqrt{u - c_1 y} = x + c_2. \tag{1.46}
\]
Solving for u, we have
\[
u(x, y) = \frac{1}{4}(x + c_2)^2 + c_1 y.
\]

This example required a few tricks to implement the solution. Sometimes one needs to find parametric solutions. Also, if an initial condition is given, one needs to find the particular solution. In the next example we show how parametric solutions are found to the initial value problem.
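As a quick check, the general solution just found can be verified symbolically. The following is a small sketch using SymPy (assumed installed):

```python
import sympy as sp

x, y, c1, c2 = sp.symbols('x y c1 c2')
u = sp.Rational(1, 4) * (x + c2) ** 2 + c1 * y   # general solution found above

# Residual of u_x^2 + y*u_y - u; it should simplify to zero.
residual = sp.diff(u, x) ** 2 + y * sp.diff(u, y) - u
print(sp.simplify(residual))   # prints 0
```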

Example 1.16. Solve the initial value problem ux² + uy + u = 0, u(x, 0) = x.

We consider the parametric form of the Charpit equations,
\[
dt = \frac{dx}{F_p} = \frac{dy}{F_q} = \frac{du}{pF_p + qF_q} = -\frac{dp}{F_x + pF_u} = -\frac{dq}{F_y + qF_u}. \tag{1.47}
\]
This leads to the system of equations
\[
\frac{dx}{dt} = F_p = 2p,
\]


\[
\begin{aligned}
\frac{dy}{dt} &= F_q = 1, \\
\frac{du}{dt} &= pF_p + qF_q = 2p^2 + q, \\
\frac{dp}{dt} &= -(F_x + pF_u) = -p, \\
\frac{dq}{dt} &= -(F_y + qF_u) = -q.
\end{aligned}
\]

The second, fourth, and fifth equations can be solved to obtain
\[
y = t + c_1, \qquad p = c_2 e^{-t}, \qquad q = c_3 e^{-t}.
\]

Inserting these results into the remaining equations, we have
\[
\frac{dx}{dt} = 2c_2 e^{-t}, \qquad \frac{du}{dt} = 2c_2^2 e^{-2t} + c_3 e^{-t}.
\]
These equations can be integrated to find
\[
x = -2c_2 e^{-t} + c_4, \qquad u = -c_2^2 e^{-2t} - c_3 e^{-t} + c_5.
\]

This is a parametric set of equations for u(x, y). Since
\[
e^{-t} = \frac{x - c_4}{-2c_2},
\]
we have
\[
\begin{aligned}
u(x, y) &= -c_2^2 e^{-2t} - c_3 e^{-t} + c_5 \\
&= -c_2^2\left(\frac{x - c_4}{-2c_2}\right)^2 - c_3\left(\frac{x - c_4}{-2c_2}\right) + c_5 \\
&= -\frac{1}{4}(x - c_4)^2 + \frac{c_3}{2c_2}(x - c_4) + c_5.
\end{aligned} \tag{1.48}
\]

We can use the initial conditions by first parametrizing the conditions. Let x(s, 0) = s and y(s, 0) = 0. Then, u(s, 0) = s. Since u(x, 0) = x, ux(x, 0) = 1, or p(s, 0) = 1.

From the partial differential equation, we have p² + q + u = 0. Therefore,
\[
q(s, 0) = -p^2(s, 0) - u(s, 0) = -(1 + s).
\]
These relations imply that
\[
y(s, t)\big|_{t=0} = 0 \Rightarrow c_1 = 0, \qquad
p(s, t)\big|_{t=0} = 1 \Rightarrow c_2 = 1, \qquad
q(s, t)\big|_{t=0} = -(1 + s) = c_3.
\]


So,
\[
y(s, t) = t, \qquad p(s, t) = e^{-t}, \qquad q(s, t) = -(1 + s)e^{-t}.
\]
The conditions on x and u give
\[
x(s, t) = (s + 2) - 2e^{-t}, \qquad u(s, t) = (s + 1)e^{-t} - e^{-2t}.
\]
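This parametric solution is easy to check. A short symbolic sketch (SymPy assumed installed) eliminates the parameters, using t = y and s = x − 2 + 2e^{−y}, and verifies both the equation and the initial condition:

```python
import sympy as sp

x, y = sp.symbols('x y')
s = x - 2 + 2 * sp.exp(-y)                  # invert x(s,t) = (s+2) - 2 e^{-t}, with t = y
u = (s + 1) * sp.exp(-y) - sp.exp(-2 * y)   # the parametric solution in terms of x and y

pde = sp.diff(u, x) ** 2 + sp.diff(u, y) + u
print(sp.simplify(pde))           # prints 0: the PDE u_x^2 + u_y + u = 0 is satisfied
print(sp.simplify(u.subs(y, 0)))  # prints x: the initial condition u(x, 0) = x holds
```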

1.6 Modern Nonlinear PDEs

The study of nonlinear partial differential equations is a hot research topic. We will (eventually) describe some examples of important evolution equations and discuss their solutions in the last chapter.

Problems

1. Write the following equations in conservation law form, ut + φx = 0, by finding the flux function φ(u).

a. ut + cux = 0.

b. ut + uux − µuxx = 0.

c. ut + 6uux + uxxx = 0.

d. ut + u2ux + uxxx = 0.

2. Consider the Klein-Gordon equation, utt − auxx = bu, for a and b constants. Find traveling wave solutions u(x, t) = f(x − ct).

3. Find the general solution u(x, y) to the following problems.

a. ux = 0.

b. yux − xuy = 0.

c. 2ux + 3uy = 1.

d. ux + uy = u.

4. Solve the following problems.

a. ux + 2uy = 0, u(x, 0) = sin x.

b. ut + 4ux = 0, u(x, 0) = 1/(1 + x²).

c. yux − xuy = 0, u(x, 0) = x.

d. ut + xtux = 0, u(x, 0) = sin x.

e. yux + xuy = 0, u(0, y) = e^{−y²}.

f. xut − 2xtux = 2tu, u(x, 0) = x².


g. (y − u)ux + (u − x)uy = x − y, u = 0 on xy = 1.

h. yux + xuy = xy, x, y > 0, for u(x, 0) = e^{−x²}, x > 0, and u(0, y) = e^{−y²}, y > 0.

5. Consider the problem ut + uux = 0, |x| < ∞, t > 0 satisfying the initial condition u(x, 0) = 1/(1 + x²).

a. Find and plot the characteristics.

b. Graphically locate where a gradient catastrophe might occur. Es-timate from your plot the breaking time.

c. Analytically determine the breaking time.

d. Plot solutions u(x, t) at times before and after the breaking time.

6. Consider the problem ut + u²ux = 0, |x| < ∞, t > 0 satisfying the initial condition u(x, 0) = 1/(1 + x²).

a. Find and plot the characteristics.

b. Graphically locate where a gradient catastrophe might occur. Es-timate from your plot the breaking time.

c. Analytically determine the breaking time.

d. Plot solutions u(x, t) at times before and after the breaking time.

7. Consider the problem ut + uux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[
u(x, 0) = \begin{cases} 2, & x \le 0, \\ 1, & x > 0. \end{cases}
\]

a. Find and plot the characteristics.

b. Graphically locate where a gradient catastrophe might occur. Es-timate from your plot the breaking time.

c. Analytically determine the breaking time.

d. Find the shock wave solution.

8. Consider the problem ut + uux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[
u(x, 0) = \begin{cases} 1, & x \le 0, \\ 2, & x > 0. \end{cases}
\]

a. Find and plot the characteristics.

b. Graphically locate where a gradient catastrophe might occur. Es-timate from your plot the breaking time.

c. Analytically determine the breaking time.

d. Find the shock wave solution.

9. Consider the problem ut + uux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[
u(x, 0) = \begin{cases} 0, & x \le -1, \\ 2, & |x| < 1, \\ 1, & x > 1. \end{cases}
\]


a. Find and plot the characteristics.

b. Graphically locate where a gradient catastrophe might occur. Es-timate from your plot the breaking time.

c. Analytically determine the breaking time.

d. Find the shock wave solution.

10. Solve the problem ut + uux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[
u(x, 0) = \begin{cases} 1, & x \le 0, \\ 1 - \dfrac{x}{a}, & 0 < x < a, \\ 0, & x \ge a. \end{cases}
\]

11. Solve the problem ut + uux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[
u(x, 0) = \begin{cases} 0, & x \le 0, \\ \dfrac{x}{a}, & 0 < x < a, \\ 1, & x \ge a. \end{cases}
\]

12. Consider the problem ut + u²ux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[
u(x, 0) = \begin{cases} 2, & x \le 0, \\ 1, & x > 0. \end{cases}
\]

a. Find and plot the characteristics.

b. Graphically locate where a gradient catastrophe might occur. Es-timate from your plot the breaking time.

c. Analytically determine the breaking time.

d. Find the shock wave solution.

13. Consider the problem ut + u²ux = 0, |x| < ∞, t > 0 satisfying the initial condition
\[
u(x, 0) = \begin{cases} 1, & x \le 0, \\ 2, & x > 0. \end{cases}
\]

a. Find and plot the characteristics.

b. Find and plot the fan characteristics.

c. Write out the rarefaction wave solution for all regions of the xt-plane.

14. Solve the initial-value problem ut + uux = 0, |x| < ∞, t > 0 satisfying
\[
u(x, 0) = \begin{cases} 1, & x \le 0, \\ 1 - x, & 0 \le x \le 1, \\ 0, & x \ge 1. \end{cases}
\]

15. Consider the stopped traffic problem in a situation where the maximum car density is 200 cars per mile and the maximum speed is 50 miles per hour. Assume that the cars are arriving at 30 miles per hour. Find the solution of this problem and determine the rate at which the traffic is backing up. How does the answer change if the cars were arriving at 15 miles per hour?


16. Solve the following nonlinear equations where p = ux and q = uy.

a. p² + q² = 1, u(x, x) = x.

b. pq = u, u(0, y) = y².

c. p + q = pq, u(x, 0) = x.

d. pq = u².

e. p² + qy = u.

17. Find the solution of xp + qy − p²q − u = 0 in parametric form for the initial conditions at t = 0:
\[
x(t, s) = s, \quad y(t, s) = 2, \quad u(t, s) = s + 1.
\]


2 Second Order Partial Differential Equations

“Either mathematics is too big for the human mind or the human mind is more than a machine.” - Kurt Gödel (1906-1978)

2.1 Introduction

In this chapter we will introduce several generic second order linear partial differential equations and see how such equations lead naturally to the study of boundary value problems for ordinary differential equations. These generic differential equations occur in one to three spatial dimensions and are all linear differential equations. A list is provided in Table 2.1. Here we have introduced the Laplacian operator, ∇²u = uxx + uyy + uzz. Depending on the types of boundary conditions imposed and on the geometry of the system (rectangular, cylindrical, spherical, etc.), one encounters many interesting boundary value problems.

Name                    | 2 Vars                  | 3 D
Heat Equation           | ut = kuxx               | ut = k∇²u
Wave Equation           | utt = c²uxx             | utt = c²∇²u
Laplace's Equation      | uxx + uyy = 0           | ∇²u = 0
Poisson's Equation      | uxx + uyy = F(x, y)     | ∇²u = F(x, y, z)
Schrödinger's Equation  | iut = uxx + F(x, t)u    | iut = ∇²u + F(x, y, z, t)u

Table 2.1: List of generic partial differential equations.

Let's look at the heat equation in one dimension. This could describe the heat conduction in a thin insulated rod of length L. It could also describe the diffusion of a pollutant in a long narrow stream, or the flow of traffic down a road. In problems involving diffusion processes, one instead calls this equation the diffusion equation. [See the derivation in Section 2.2.2.]

A typical initial-boundary value problem for the heat equation would be that initially one has a temperature distribution u(x, 0) = f(x). Placing the bar in an ice bath and assuming the heat flow is only through the ends of the bar, one has the boundary conditions u(0, t) = 0 and u(L, t) = 0. Of course, we are dealing with Celsius temperatures and we assume there is plenty of ice to keep that temperature fixed at each end for all time, as seen in Figure 2.1. So, the problem one would need to solve is given as [IC = initial condition(s) and BC = boundary conditions.]

Figure 2.1: One dimensional heated rod of length L.

1D Heat Equation

PDE:  ut = kuxx,        0 < t, 0 ≤ x ≤ L,
IC:   u(x, 0) = f(x),   0 < x < L,
BC:   u(0, t) = 0,      t > 0,
      u(L, t) = 0,      t > 0.                (2.1)

Here, k is the heat conduction constant and is determined using properties of the bar.

Another problem that will come up in later discussions is that of the vibrating string. A string of length L is stretched out horizontally with both ends fixed, such as a violin string, as shown in Figure 2.2. Let u(x, t) be the vertical displacement of the string at position x and time t. The motion of the string is governed by the one dimensional wave equation. [See the derivation in Section 2.2.1.] The string might be plucked, giving the string an initial profile, u(x, 0) = f(x), and possibly each point on the string has an initial velocity ut(x, 0) = g(x). The initial-boundary value problem for this problem is given below.

1D Wave Equation

PDE:  utt = c²uxx,       0 < t, 0 ≤ x ≤ L,
IC:   u(x, 0) = f(x),    0 < x < L,
      ut(x, 0) = g(x),   0 < x < L,
BC:   u(0, t) = 0,       t > 0,
      u(L, t) = 0,       t > 0.               (2.2)

In this problem c is the wave speed in the string. It depends on the mass per unit length of the string, µ, and the tension, τ, placed on the string.

Figure 2.2: One dimensional string of length L.

There is a rich history on the study of these and other partial differential equations, and much of this involves trying to solve problems in physics. Consider the one dimensional wave motion in the string. Physically, the speed of these waves depends on the tension in the string and its mass density. The frequencies we hear are then related to the string shape, or the allowed wavelengths across the string. We will be interested in the harmonics, or pure sinusoidal waves, of the vibrating string and how a general wave on the string can be represented as a sum over such harmonics. This will take us into the field of spectral, or Fourier, analysis. The solution of the heat equation also involves the use of Fourier analysis. However, in this case there are no oscillations in time.

There are many applications that are studied using spectral analysis. At the root of these studies is the belief that continuous waveforms are comprised of a number of harmonics. Such ideas stretch back to the Pythagorean study of the vibrations of strings, which led to their program of a world of harmony. This idea was carried further by Johannes Kepler (1571-1630) in his harmony of the spheres approach to planetary orbits. In the 1700's others worked on the superposition theory for vibrating waves on a stretched string, starting with the wave equation and leading to the superposition of right and left traveling waves. This work was carried out by people such as John Wallis (1616-1703), Brook Taylor (1685-1731), and Jean le Rond d'Alembert (1717-1783).

Figure 2.3: Plot of the second harmonic of a vibrating string at different times.

In 1742 d'Alembert solved the wave equation
\[
c^2\frac{\partial^2 y}{\partial x^2} - \frac{\partial^2 y}{\partial t^2} = 0,
\]
where y is the string height and c is the wave speed. However, this solution led him and others, like Leonhard Euler (1707-1783) and Daniel Bernoulli (1700-1782), to investigate what "functions" could be the solutions of this equation. In fact, this led to a more rigorous approach to the study of analysis by first coming to grips with the concept of a function. For example, in 1749 Euler sought the solution for a plucked string, in which case the initial condition y(x, 0) = h(x) has a discontinuous derivative! (We will see how this led to important questions in analysis.)

In 1753 Daniel Bernoulli viewed the solutions as a superposition of simple vibrations, or harmonics. Such superpositions amounted to looking at solutions of the form
\[
y(x, t) = \sum_k a_k \sin\frac{k\pi x}{L}\cos\frac{k\pi ct}{L},
\]
where the string extends over the interval [0, L] with fixed ends at x = 0 and x = L.

Figure 2.4: Plot of an initial condition for a plucked string.

However, the initial profile for such superpositions is given by
\[
y(x, 0) = \sum_k a_k \sin\frac{k\pi x}{L}.
\]
It was determined that many functions could not be represented by a finite number of harmonics, even for the simply plucked string in Figure 2.4, given by an initial condition of the form
\[
y(x, 0) = \begin{cases} Ax, & 0 \le x \le L/2, \\ A(L - x), & L/2 \le x \le L. \end{cases}
\]
Thus, the solution consists generally of an infinite series of trigonometric functions.

The one dimensional version of the heat equation is a partial differential equation for u(x, t) of the form
\[
\frac{\partial u}{\partial t} = k\frac{\partial^2 u}{\partial x^2}.
\]
Solutions satisfying the boundary conditions u(0, t) = 0 and u(L, t) = 0 are of the form
\[
u(x, t) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}\, e^{-n^2\pi^2 kt/L^2}.
\]
In this case, setting u(x, 0) = f(x), one has to satisfy the condition
\[
f(x) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}.
\]
This is another example leading to an infinite series of trigonometric functions.

Such series expansions were also of importance in Joseph Fourier's (1768-1830) solution of the heat equation. The use of Fourier expansions has become an important tool in the solution of linear partial differential equations, such as the wave equation and the heat equation. More generally, a technique called the Method of Separation of Variables allowed higher dimensional problems to be reduced to one dimensional boundary value problems. However, these studies led to very important questions, which in turn opened the doors to whole fields of analysis. Some of the problems raised were


1. What functions can be represented as the sum of trigonometric functions?

2. How can a function with discontinuous derivatives be represented by a sum of smooth functions, such as the above sums of trigonometric functions?

3. Do such infinite sums of trigonometric functions actually converge to the functions they represent?

There are many other systems for which it makes sense to interpret the solutions as sums of sinusoids of particular frequencies. For example, we can consider ocean waves. Ocean waves are affected by the gravitational pull of the moon and the sun and numerous other forces. These lead to the tides, which in turn have their own periods of motion. In an analysis of wave heights, one can separate out the tidal components by making use of Fourier analysis.

In Section 2.4 we describe how to go about solving these equations using the method of separation of variables. We will find that in order to accommodate the initial conditions, we will need to introduce Fourier series before we can complete the problems, which will be the subject of the following chapter. However, we first derive the one-dimensional wave and heat equations.

2.2 Derivation of Generic 1D Equations

2.2.1 Derivation of Wave Equation for String

The wave equation for a one dimensional string is derived based upon simply looking at Newton's Second Law of Motion for a piece of the string plus a few simple assumptions, such as small amplitude oscillations and constant density.

We begin with F = ma. The mass of a piece of string of length ds is m = ρ(x)ds. From Figure 2.5 an incremental length of the string is given by
\[
\Delta s^2 = \Delta x^2 + \Delta u^2.
\]
The piece of string undergoes an acceleration of a = ∂²u/∂t².

We will assume that the main force acting on the string is that of tension. Let T(x, t) be the magnitude of the tension acting on the left end of the piece of string. Then, on the right end the tension is T(x + ∆x, t). At these points the tension makes an angle to the horizontal of θ(x, t) and θ(x + ∆x, t), respectively.

[The wave equation is derived from F = ma.]

Assuming that there is no horizontal acceleration, the x-component in the second law, ma = F, for the string element is given by
\[
0 = T(x + \Delta x, t)\cos\theta(x + \Delta x, t) - T(x, t)\cos\theta(x, t).
\]


Figure 2.5: A small piece of string is under tension.

The vertical component is given by
\[
\rho(x)\Delta s\frac{\partial^2 u}{\partial t^2} = T(x + \Delta x, t)\sin\theta(x + \Delta x, t) - T(x, t)\sin\theta(x, t).
\]
The length of the piece of string can be written in terms of ∆x,
\[
\Delta s = \sqrt{\Delta x^2 + \Delta u^2} = \sqrt{1 + \left(\frac{\Delta u}{\Delta x}\right)^2}\,\Delta x,
\]
and the right hand sides of the component equations can be expanded about ∆x = 0 to obtain
\[
T(x + \Delta x, t)\cos\theta(x + \Delta x, t) - T(x, t)\cos\theta(x, t) \approx \frac{\partial(T\cos\theta)}{\partial x}(x, t)\,\Delta x,
\]
\[
T(x + \Delta x, t)\sin\theta(x + \Delta x, t) - T(x, t)\sin\theta(x, t) \approx \frac{\partial(T\sin\theta)}{\partial x}(x, t)\,\Delta x.
\]

Furthermore, we note that
\[
\tan\theta = \lim_{\Delta x \to 0}\frac{\Delta u}{\Delta x} = \frac{\partial u}{\partial x}.
\]

Now we can divide these component equations by ∆x and let ∆x → 0. This gives the approximations
\[
0 = \frac{T(x + \Delta x, t)\cos\theta(x + \Delta x, t) - T(x, t)\cos\theta(x, t)}{\Delta x} \approx \frac{\partial(T\cos\theta)}{\partial x}(x, t),
\]
\[
\rho(x)\frac{\partial^2 u}{\partial t^2}\frac{\Delta s}{\Delta x} = \frac{T(x + \Delta x, t)\sin\theta(x + \Delta x, t) - T(x, t)\sin\theta(x, t)}{\Delta x},
\]
\[
\rho(x)\frac{\partial^2 u}{\partial t^2}\sqrt{1 + \left(\frac{\partial u}{\partial x}\right)^2} \approx \frac{\partial(T\sin\theta)}{\partial x}(x, t). \tag{2.3}
\]

We will assume a small angle approximation, giving
\[
\sin\theta \approx \tan\theta = \frac{\partial u}{\partial x}, \qquad \cos\theta \approx 1, \qquad \text{and} \qquad \sqrt{1 + \left(\frac{\partial u}{\partial x}\right)^2} \approx 1.
\]

Then, the horizontal component becomes
\[
\frac{\partial T(x, t)}{\partial x} = 0.
\]
Therefore, the magnitude of the tension T(x, t) = T(t) is at most time dependent.

The vertical component equation is now
\[
\rho(x)\frac{\partial^2 u}{\partial t^2} = T(t)\frac{\partial}{\partial x}\left(\frac{\partial u}{\partial x}\right) = T(t)\frac{\partial^2 u}{\partial x^2}.
\]
Assuming that ρ and T are constant and defining
\[
c^2 = \frac{T}{\rho},
\]
we obtain the one dimensional wave equation,
\[
\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}.
\]

2.2.2 Derivation of 1D Heat Equation

Consider a one dimensional rod of length L as shown in Figure 2.6. It is heated and allowed to sit. The heat equation is the governing equation which allows us to determine the temperature of the rod at a later time.

We begin with some simple thermodynamics. Recall that to raise the temperature of a mass m by ∆T takes thermal energy given by
\[
Q = mc\Delta T,
\]
assuming the mass does not go through a phase transition. Here c is the specific heat capacity of the substance. So, we will begin with the heat content of the rod as
\[
Q = mcT(x, t)
\]

and assume that m and c are constant.

Figure 2.6: One dimensional heated rod of length L.

We will also need Fourier's law of heat transfer, or heat conduction. This law simply states that heat energy flows from warmer to cooler regions and is written in terms of the heat energy flux, φ(x, t). The heat energy flux, or flux density, gives the rate of energy flow per area. Thus, the amount of heat energy flowing over the left end of the region of cross section A in time ∆t is given by φ(x, t)∆tA. The units of φ(x, t) are then J/s/m² = W/m².

Fourier's law of heat conduction states that the flux density is proportional to the gradient of the temperature,
\[
\phi = -K\frac{\partial T}{\partial x}.
\]


Figure 2.7: A one dimensional rod of length L. Heat can flow through increment ∆x.

Here K is the thermal conductivity and the negative sign takes into account the direction of flow from higher to lower temperatures.

Now we make use of the conservation of energy. Consider a small section of the rod of width ∆x, as shown in Figure 2.7. The rate of change of the energy through this section is due to energy flow through the ends. Namely,
\[
\text{Rate of change of heat energy} = \text{Heat in} - \text{Heat out}.
\]
The energy content of the small segment of the rod is given by
\[
\Delta Q = (\rho A\Delta x)cT(x, t + \Delta t) - (\rho A\Delta x)cT(x, t).
\]
The flow rates across the boundaries are given by the flux:
\[
(\rho A\Delta x)cT(x, t + \Delta t) - (\rho A\Delta x)cT(x, t) = [\phi(x, t) - \phi(x + \Delta x, t)]\Delta t A.
\]

Dividing by ∆x and ∆t and letting ∆x, ∆t → 0, we obtain
\[
\frac{\partial T}{\partial t} = -\frac{1}{c\rho}\frac{\partial\phi}{\partial x}.
\]
Using Fourier's law of heat conduction,
\[
\frac{\partial T}{\partial t} = \frac{1}{c\rho}\frac{\partial}{\partial x}\left(K\frac{\partial T}{\partial x}\right).
\]

Assuming K, c, and ρ are constant, we have the one dimensional heat equation as used in the text:
\[
\frac{\partial T}{\partial t} = k\frac{\partial^2 T}{\partial x^2},
\]
where k = K/(cρ).

2.3 Boundary Value Problems

You might have only solved initial value problems in your undergraduate differential equations class. For an initial value problem one has to solve a differential equation subject to conditions on the unknown function and its derivatives at one value of the independent variable. For example, for x = x(t) we could have the initial value problem
\[
x'' + x = 2, \qquad x(0) = 1, \quad x'(0) = 0. \tag{2.4}
\]
Typically, initial value problems involve time dependent functions and boundary value problems are spatial. So, with an initial value problem one knows how a system evolves in terms of the differential equation and the state of the system at some fixed time. Then one seeks to determine the state of the system at a later time.

Example 2.1. Solve the initial value problem x'' + 4x = cos t, x(0) = 1, x'(0) = 0.

Note that the conditions are provided at one time, t = 0. Thus, this is an initial value problem. Recall from your course on differential equations that we need to find the general solution and then apply the initial conditions. Furthermore, this is a nonhomogeneous differential equation, so the solution is a sum of a solution of the homogeneous equation and a particular solution of the nonhomogeneous equation, x(t) = xh(t) + xp(t). [See the ordinary differential equations review in the Appendix.]

The solution of x'' + 4x = 0 is easily found as
\[
x_h(t) = c_1\cos 2t + c_2\sin 2t.
\]
The particular solution is found using the Method of Undetermined Coefficients. We guess a solution of the form
\[
x_p(t) = A\cos t + B\sin t.
\]
Differentiating twice, we have
\[
x_p''(t) = -(A\cos t + B\sin t).
\]
So,
\[
x_p'' + 4x_p = -(A\cos t + B\sin t) + 4(A\cos t + B\sin t) = 3(A\cos t + B\sin t).
\]
Comparing the right hand side of this equation with cos t in the original problem, we are led to setting B = 0 and A = 1/3. Thus, the general solution is
\[
x(t) = c_1\cos 2t + c_2\sin 2t + \frac{1}{3}\cos t.
\]
We now apply the initial conditions to find the particular solution. The first condition, x(0) = 1, gives
\[
1 = c_1 + \frac{1}{3}.
\]
Thus, c1 = 2/3. Using this value for c1, the second condition, x'(0) = 0, gives c2 = 0. Therefore,
\[
x(t) = \frac{1}{3}(2\cos 2t + \cos t).
\]
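As a quick numerical check, one can integrate the initial value problem and compare with this closed form. The following is a small sketch using SciPy (assumed installed):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' + 4x = cos(t) written as a first order system for (x, x').
def rhs(t, y):
    x, v = y
    return [v, np.cos(t) - 4 * x]

t_eval = np.linspace(0, 10, 200)
sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)

exact = (2 * np.cos(2 * t_eval) + np.cos(t_eval)) / 3
print(np.max(np.abs(sol.y[0] - exact)))   # maximum error should be tiny
```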

For boundary value problems, one knows how each point responds to its neighbors, but there are conditions that have to be satisfied at the endpoints. An example would be a horizontal beam supported at the ends, like a bridge. The shape of the beam under the influence of gravity, or other forces, would lead to a differential equation and the boundary conditions at the beam ends would affect the solution of the problem. There are also a variety of other types of boundary conditions. In the case of a beam, one end could be fixed and the other end could be free to move. We will explore the effects of different boundary conditions in our discussions and exercises. But, we will first solve a simple boundary value problem which is a slight modification of the above problem.

Example 2.2. Solve the boundary value problem x'' + x = 2, x(0) = 0, x(1) = 0.

Note that the conditions at t = 0 and t = 1 make this a boundary value problem since the conditions are given at two different points. As with initial value problems, we need to find the general solution and then apply any conditions that we may have. This is a nonhomogeneous differential equation, so the solution is a sum of a solution of the homogeneous equation and a particular solution of the nonhomogeneous equation, x(t) = xh(t) + xp(t). The solution of x'' + x = 0 is easily found as
\[
x_h(t) = c_1\cos t + c_2\sin t.
\]
The particular solution is found using the Method of Undetermined Coefficients,
\[
x_p(t) = 2.
\]
Thus, the general solution is
\[
x(t) = 2 + c_1\cos t + c_2\sin t.
\]
We now apply the boundary conditions and see if there are values of c1 and c2 that yield a solution to this boundary value problem. The first condition, x(0) = 0, gives
\[
0 = 2 + c_1.
\]
Thus, c1 = −2. Using this value for c1, the second condition, x(1) = 0, gives
\[
0 = 2 - 2\cos 1 + c_2\sin 1.
\]
This yields
\[
c_2 = \frac{2(\cos 1 - 1)}{\sin 1}.
\]
We have found that there is a solution to the boundary value problem and it is given by
\[
x(t) = 2\left(1 - \cos t + \frac{\cos 1 - 1}{\sin 1}\sin t\right).
\]
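This boundary value problem can also be checked numerically. The following sketch (SciPy assumed installed) solves it with solve_bvp, using the boundary conditions as stated in Example 2.2, and compares against the closed form:

```python
import numpy as np
from scipy.integrate import solve_bvp

# x'' + x = 2 as a first order system; boundary conditions x(0) = 0, x(1) = 0.
def rhs(t, y):
    return np.vstack([y[1], 2 - y[0]])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])

t = np.linspace(0, 1, 11)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))

exact = 2 * (1 - np.cos(t) + (np.cos(1) - 1) / np.sin(1) * np.sin(t))
print(np.max(np.abs(sol.sol(t)[0] - exact)))   # small residual
```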

Boundary value problems arise in many physical systems, just as the initial value problems we have seen earlier. We will see in the next sections that boundary value problems for ordinary differential equations often appear in the solutions of partial differential equations. However, there is no guarantee that we will have unique solutions of our boundary value problems as we had found in the example above.

Now that we understand simple boundary value problems for ordinary differential equations, we can turn to initial-boundary value problems for partial differential equations. We will see that a common method for studying these problems is to use the method of separation of variables. In this method one separates the partial differential equation into several ordinary differential equations, of which several are boundary value problems of the sort seen in this section.

2.4 Separation of Variables

Solving many of the linear partial differential equations presented in the first section can be reduced to solving ordinary differential equations. We will demonstrate this by solving the initial-boundary value problem for the heat equation as given in (2.1). We will employ a method typically used in studying linear partial differential equations, called the Method of Separation of Variables. In the next subsections we describe how this method works for the one-dimensional heat equation, one-dimensional wave equation, and the two-dimensional Laplace equation.

2.4.1 The 1D Heat Equation

We want to solve the heat equation,
\[
u_t = ku_{xx}, \qquad 0 < t, \quad 0 \le x \le L,
\]
subject to the boundary conditions
\[
u(0, t) = 0, \quad u(L, t) = 0, \qquad t > 0,
\]
and the initial condition
\[
u(x, 0) = f(x), \qquad 0 < x < L.
\]
[Solution of the 1D heat equation using the method of separation of variables.]

We begin by assuming that u can be written as a product of single variable functions of each independent variable,

Substituting this guess into the heat equation, we find that

XT′ = kX′′T.

The prime denotes differentiation with respect to the independent vari-able and we will suppress the independent variable in the following unlessneeded for emphasis.

Dividing both sides of this result by k and u = XT, yields

1k

T′

T=

X′′

X.

We have separated the functions of time on one side and space on the other side. The constant k could be on either side of this expression, but we moved it to make later computations simpler.


The only way that a function of t equals a function of x is if the functions are constant functions. Therefore, we set each function equal to a constant, λ. [For example, if Ae^{ct} = ax² + b is possible for any x or t, then this is only possible if a = 0, c = 0, and b = A.]
\[
\underbrace{\frac{1}{k}\frac{T'}{T}}_{\text{function of }t} = \underbrace{\frac{X''}{X}}_{\text{function of }x} = \underbrace{\lambda.}_{\text{constant}}
\]
This leads to two equations:
\[
T' = k\lambda T, \tag{2.5}
\]
\[
X'' = \lambda X. \tag{2.6}
\]

These are ordinary differential equations. The general solutions to these constant coefficient equations are readily found as
\[
T(t) = Ae^{k\lambda t}, \tag{2.7}
\]
\[
X(x) = c_1 e^{\sqrt{\lambda}x} + c_2 e^{-\sqrt{\lambda}x}. \tag{2.8}
\]

We need to be a little careful at this point. The aim is to force the final solutions to satisfy both the boundary conditions and initial conditions. Also, we should note that λ is arbitrary and may be positive, zero, or negative. We first look at how the boundary conditions on u(x, t) lead to conditions on X(x).

The first boundary condition is u(0, t) = 0. This implies that
\[
X(0)T(t) = 0, \quad \text{for all } t.
\]
The only way that this is true is if X(0) = 0. Similarly, u(L, t) = 0 for all t implies that X(L) = 0. So, we have to solve the boundary value problem
\[
X'' - \lambda X = 0, \qquad X(0) = 0 = X(L). \tag{2.9}
\]
An obvious solution is X ≡ 0. However, this implies that u(x, t) = 0, which is not an interesting solution. We call such solutions, X ≡ 0, trivial solutions and will seek nontrivial solutions for these problems.

There are three cases to consider, depending on the sign of λ.

Case I. λ > 0

In this case we have the exponential solutions
\[
X(x) = c_1 e^{\sqrt{\lambda}x} + c_2 e^{-\sqrt{\lambda}x}. \tag{2.10}
\]
For X(0) = 0, we have
\[
0 = c_1 + c_2.
\]
We will take c2 = −c1. Then,
\[
X(x) = c_1\left(e^{\sqrt{\lambda}x} - e^{-\sqrt{\lambda}x}\right) = 2c_1\sinh\sqrt{\lambda}x.
\]
Applying the second condition, X(L) = 0, yields
\[
c_1\sinh\sqrt{\lambda}L = 0.
\]
This will be true only if c1 = 0, since λ > 0. Thus, the only solution in this case is the trivial solution, X(x) = 0.

Case II. λ = 0

For this case it is easier to set λ to zero in the differential equation. So, X'' = 0. Integrating twice, one finds
\[
X(x) = c_1 x + c_2.
\]
Setting x = 0, we have c2 = 0, leaving X(x) = c1x. Setting x = L, we find c1L = 0. So, c1 = 0 and we are once again left with a trivial solution.

Case III. λ < 0

In this case it would be simpler to write λ = −µ². Then the differential equation is
\[
X'' + \mu^2 X = 0.
\]
The general solution is
\[
X(x) = c_1\cos\mu x + c_2\sin\mu x.
\]
At x = 0 we get 0 = c1. This leaves X(x) = c2 sin µx. At x = L, we find
\[
0 = c_2\sin\mu L.
\]
So, either c2 = 0 or sin µL = 0. c2 = 0 leads to a trivial solution again. But, there are cases when the sine is zero. Namely,
\[
\mu L = n\pi, \qquad n = 1, 2, \ldots.
\]
Note that n = 0 is not included since this leads to a trivial solution. Also, negative values of n are redundant, since the sine function is an odd function.

In summary, we can find solutions to the boundary value problem (2.9) for particular values of λ. The solutions are
\[
X_n(x) = \sin\frac{n\pi x}{L}, \qquad n = 1, 2, 3, \ldots,
\]
for
\[
\lambda_n = -\mu_n^2 = -\left(\frac{n\pi}{L}\right)^2, \qquad n = 1, 2, 3, \ldots.
\]

We should note that the boundary value problem in Equation (2.9) is an eigenvalue problem. We can recast the differential equation as
\[
\mathcal{L}X = \lambda X,
\]
where
\[
\mathcal{L} = D^2 = \frac{d^2}{dx^2}
\]
is a linear differential operator. The solutions, Xn(x), are called eigenfunctions and the λn's are the eigenvalues. We will elaborate more on this characterization in the next chapter.

We have found the product solutions of the heat equation (2.1) satisfying the boundary conditions. These are
\[
u_n(x, t) = e^{k\lambda_n t}\sin\frac{n\pi x}{L}, \qquad n = 1, 2, 3, \ldots. \tag{2.11}
\]
However, these do not necessarily satisfy the initial condition u(x, 0) = f(x). What we do get is
\[
u_n(x, 0) = \sin\frac{n\pi x}{L}, \qquad n = 1, 2, 3, \ldots.
\]
So, if the initial condition is in one of these forms, we can pick out the right value for n and we are done.

For other initial conditions, we have to do more work. Note, since the heat equation is linear, a linear combination of the product solutions is also a solution of the heat equation. The general solution satisfying the given boundary conditions is given as
\[
u(x, t) = \sum_{n=1}^{\infty} b_n e^{k\lambda_n t}\sin\frac{n\pi x}{L}. \tag{2.12}
\]
The coefficients in the general solution are determined using the initial condition. Namely, setting t = 0 in the general solution, we have
\[
f(x) = u(x, 0) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{L}.
\]
So, if we know f(x), can we find the coefficients, bn? If we can, then we will have the solution to the full initial-boundary value problem.

The expression for f(x) is a Fourier sine series. We will need to digress into the study of Fourier series in order to see how one can find the Fourier series coefficients given f(x). Before proceeding, we will show that this process is not uncommon by applying the Method of Separation of Variables to the wave equation in the next section.
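To make the structure of the general solution (2.12) concrete, here is a small numerical sketch (Python with NumPy assumed; the coefficients bn are chosen by hand purely for illustration, since computing them from f(x) requires the Fourier series of the next chapter):

```python
import numpy as np

L, k = 1.0, 0.1
b = {1: 1.0, 3: 0.3}      # illustrative Fourier sine coefficients b_n

def u(x, t):
    """Evaluate u(x,t) = sum_n b_n exp(k*lambda_n*t) sin(n*pi*x/L), lambda_n = -(n*pi/L)^2."""
    total = np.zeros_like(x)
    for n, bn in b.items():
        lam = -(n * np.pi / L) ** 2
        total += bn * np.exp(k * lam * t) * np.sin(n * np.pi * x / L)
    return total

x = np.linspace(0, L, 101)
# Each mode decays like exp(-k (n*pi/L)^2 t), so the profile relaxes toward zero.
print(u(x, 0.0)[50], u(x, 0.5)[50])
```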

2.4.2 The 1D Wave Equation

In this section we will apply the Method of Separation of Variables to the one dimensional wave equation, given by
\[
\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}, \qquad t > 0, \quad 0 \le x \le L, \tag{2.13}
\]


subject to the boundary conditions

u(0, t) = 0, u(L, t) = 0, t > 0,

and the initial conditions

u(x, 0) = f (x), ut(x, 0) = g(x), 0 < x < L.

This problem applies to the propagation of waves on a string of length L with both ends fixed so that they do not move. u(x, t) represents the vertical displacement of the string over time. The derivation of the wave equation assumes that the vertical displacement is small and the string is uniform. The constant c is the wave speed, given by
\[
c = \sqrt{\frac{\tau}{\mu}},
\]
where τ is the tension in the string and µ is the mass per unit length. We can understand this in terms of string instruments. The tension can be adjusted to produce different tones, and the makeup of the string (nylon or steel, thick or thin) also has an effect. In some cases the mass density is changed simply by using thicker strings. Thus, the thicker strings in a piano produce lower frequency notes.

The utt term gives the acceleration of a piece of the string. The uxx term gives the concavity of the string. Thus, for a positive concavity the string is curved upward near the point of interest, so neighboring points tend to pull upward towards the equilibrium position. If the concavity is negative, it would cause a negative acceleration.

[Solution of the 1D wave equation using the Method of Separation of Variables.]

The solution of this problem is easily found using separation of variables. We let u(x, t) = X(x)T(t). Then we find
\[
XT'' = c^2 X''T,
\]
which can be rewritten as
\[
\frac{1}{c^2}\frac{T''}{T} = \frac{X''}{X}.
\]

Again, we have separated the functions of time on one side and space on the other side. Therefore, we set each function equal to a constant, λ,
\[
\underbrace{\frac{1}{c^2}\frac{T''}{T}}_{\text{function of }t} = \underbrace{\frac{X''}{X}}_{\text{function of }x} = \underbrace{\lambda.}_{\text{constant}}
\]
This leads to two equations:
\[
T'' = c^2\lambda T, \tag{2.14}
\]
\[
X'' = \lambda X. \tag{2.15}
\]

As before, we have the boundary conditions on X(x),
\[
X(0) = 0, \quad \text{and} \quad X(L) = 0,
\]
giving the solutions, as shown in Figure 2.8,
\[
X_n(x) = \sin\frac{n\pi x}{L}, \qquad \lambda_n = -\left(\frac{n\pi}{L}\right)^2.
\]

The main difference from the solution of the heat equation is the form of the time function. Namely, from Equation (2.14) we have to solve
\[
T'' + \left(\frac{n\pi c}{L}\right)^2 T = 0. \tag{2.16}
\]
This equation takes a familiar form. We let
\[
\omega_n = \frac{n\pi c}{L},
\]
then we have
\[
T'' + \omega_n^2 T = 0.
\]
This is the differential equation for simple harmonic motion and ωn is the angular frequency. The solutions are easily found as
\[
T(t) = A_n\cos\omega_n t + B_n\sin\omega_n t. \tag{2.17}
\]

Figure 2.8: The first three harmonics of the vibrating string: X1(x) = sin(πx/L), X2(x) = sin(2πx/L), X3(x) = sin(3πx/L).

Therefore, we have found that the product solutions of the wave equation take the forms sin(nπx/L) cos ωnt and sin(nπx/L) sin ωnt. The general solution, a superposition of all product solutions, is given by
\[
u(x, t) = \sum_{n=1}^{\infty}\left[A_n\cos\frac{n\pi ct}{L} + B_n\sin\frac{n\pi ct}{L}\right]\sin\frac{n\pi x}{L}. \tag{2.18}
\]

This solution satisfies the wave equation and the boundary conditions. We still need to satisfy the initial conditions. Note that there are two initial conditions, since the wave equation is second order in time.

First, we have u(x, 0) = f(x). Thus,
\[
f(x) = u(x, 0) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}. \tag{2.19}
\]
In order to obtain the condition on the initial velocity, ut(x, 0) = g(x), we need to differentiate the general solution with respect to t:
\[
u_t(x, t) = \sum_{n=1}^{\infty}\frac{n\pi c}{L}\left[-A_n\sin\frac{n\pi ct}{L} + B_n\cos\frac{n\pi ct}{L}\right]\sin\frac{n\pi x}{L}. \tag{2.20}
\]
Then, we have from the initial velocity
\[
g(x) = u_t(x, 0) = \sum_{n=1}^{\infty}\frac{n\pi c}{L}B_n\sin\frac{n\pi x}{L}. \tag{2.21}
\]

So, applying the two initial conditions, we have found that f(x) and g(x) are represented as Fourier sine series. In order to complete the problem we need to determine the coefficients An and Bn for n = 1, 2, 3, . . .. Once we have these, we have the complete solution to the wave equation. We had seen similar results for the heat equation. In the next chapter we will find out how to determine these Fourier coefficients for such series of sinusoidal functions.
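As with the heat equation, once the coefficients are known, the series (2.18) is straightforward to evaluate numerically. Here is a minimal sketch (Python with NumPy assumed; the coefficients An and Bn are chosen by hand purely for illustration):

```python
import numpy as np

L, c = 1.0, 1.0
A = {1: 1.0, 2: 0.25}   # illustrative coefficients A_n (initial shape)
B = {1: 0.0}            # illustrative coefficients B_n (initial velocity)

def u(x, t):
    """Evaluate the superposition (2.18) for the supplied A_n, B_n."""
    total = np.zeros_like(x)
    for n in set(A) | set(B):
        w = n * np.pi * c / L                       # angular frequency omega_n
        total += (A.get(n, 0.0) * np.cos(w * t) +
                  B.get(n, 0.0) * np.sin(w * t)) * np.sin(n * np.pi * x / L)
    return total

x = np.linspace(0, L, 201)
print(u(x, 0.0).max(), u(x, 0.5).max())   # the profile oscillates rather than decays
```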


2.5 Laplace’s Equation in 2D

Another generic partial differential equation is Laplace's equation, ∇²u = 0. Laplace's equation arises in many applications. As an example, consider a thin rectangular plate with boundaries set at fixed temperatures. Assume that any temperature changes of the plate are governed by the heat equation, ut = k∇²u, subject to these boundary conditions. However, after a long period of time the plate may reach thermal equilibrium. If the boundary temperature is zero, then the plate temperature decays to zero across the plate. However, if the boundaries are maintained at a fixed nonzero temperature, which means energy is being put into the system to maintain the boundary conditions, the internal temperature may reach a nonzero equilibrium temperature. Reaching thermal equilibrium means that asymptotically in time the solution becomes time independent. Thus, the equilibrium state is a solution of the time independent heat equation, ∇²u = 0.

[Thermodynamic equilibrium: ∇²u = 0.]

A second example comes from electrostatics. Letting φ(r) be the electric potential, one has for a static charge distribution, ρ(r), that the electric field, E = −∇φ, satisfies one of Maxwell's equations, ∇ · E = ρ/ε0. In regions devoid of charge, ρ(r) = 0, the electric potential satisfies Laplace's equation, ∇²φ = 0.

[Incompressible, irrotational fluid flow: ∇²φ = 0, for velocity v = ∇φ.]

As a final example, Laplace's equation appears in two-dimensional fluid flow. For an incompressible flow, ∇ · v = 0. If the flow is irrotational, then ∇ × v = 0. We can introduce a velocity potential, v = ∇φ. Thus, ∇ × v vanishes by a vector identity, and ∇ · v = 0 implies ∇²φ = 0. So, once again we obtain Laplace's equation.

Solutions of Laplace's equation are called harmonic functions. We will encounter these again in Chapter 8 on complex variables, where complex variable techniques will be applied to solve the two-dimensional Laplace equation. In this section we use the Method of Separation of Variables to solve simple examples of Laplace's equation in two dimensions. Three-dimensional problems will be studied in Chapter 6.

Example 2.3. Equilibrium Temperature Distribution for a Rectangular Plate.

Let's consider Laplace's equation in Cartesian coordinates,
\[
u_{xx} + u_{yy} = 0, \qquad 0 < x < L, \quad 0 < y < H,
\]
with the boundary conditions
\[
u(0, y) = 0, \quad u(L, y) = 0, \quad u(x, 0) = f(x), \quad u(x, H) = 0.
\]
The boundary conditions are shown in Figure 2.9.

Figure 2.9: In this figure we show the domain and boundary conditions for the example of determining the equilibrium temperature distribution for a rectangular plate.

As with the heat and wave equations, we can solve this problem using the method of separation of variables. Let u(x, y) = X(x)Y(y). Then, Laplace's equation becomes
\[
X''Y + XY'' = 0,
\]


and we can separate the x and y dependent functions and introduce a separation constant, λ,
\[
\frac{X''}{X} = -\frac{Y''}{Y} = -\lambda.
\]
Thus, we are led to two differential equations,
\[
X'' + \lambda X = 0, \qquad Y'' - \lambda Y = 0. \tag{2.22}
\]

From the boundary conditions u(0, y) = 0, u(L, y) = 0, we have X(0) = 0, X(L) = 0. So, we have the usual eigenvalue problem for X(x),
\[
X'' + \lambda X = 0, \qquad X(0) = 0, \quad X(L) = 0.
\]
The solutions to this problem are given by
\[
X_n(x) = \sin\frac{n\pi x}{L}, \qquad \lambda_n = \left(\frac{n\pi}{L}\right)^2, \quad n = 1, 2, \ldots.
\]

The general solution of the equation for Y(y) is given by
\[
Y(y) = c_1 e^{\sqrt{\lambda}y} + c_2 e^{-\sqrt{\lambda}y}.
\]
The boundary condition u(x, H) = 0 implies Y(H) = 0. So, we have
\[
c_1 e^{\sqrt{\lambda}H} + c_2 e^{-\sqrt{\lambda}H} = 0.
\]
Thus,
\[
c_2 = -c_1 e^{2\sqrt{\lambda}H}.
\]

Inserting this result into the expression for Y(y), we have
\[
\begin{aligned}
Y(y) &= c_1 e^{\sqrt{\lambda}y} - c_1 e^{2\sqrt{\lambda}H}e^{-\sqrt{\lambda}y} \\
&= c_1 e^{\sqrt{\lambda}H}\left(e^{-\sqrt{\lambda}H}e^{\sqrt{\lambda}y} - e^{\sqrt{\lambda}H}e^{-\sqrt{\lambda}y}\right) \\
&= c_1 e^{\sqrt{\lambda}H}\left(e^{-\sqrt{\lambda}(H-y)} - e^{\sqrt{\lambda}(H-y)}\right) \\
&= -2c_1 e^{\sqrt{\lambda}H}\sinh\sqrt{\lambda}(H - y).
\end{aligned} \tag{2.23}
\]

[Note: Having carried out this computation, we can now see that it would be better to guess this form in the future. So, for Y(H) = 0, one would guess a solution Y(y) = sinh √λ(H − y). For Y(0) = 0, one would guess a solution Y(y) = sinh √λ y. Similarly, if Y'(H) = 0, one would guess a solution Y(y) = cosh √λ(H − y).]

Since we already know the values of the eigenvalues λn from the eigenvalue problem for X(x), we have that the y-dependence is given by
\[
Y_n(y) = \sinh\frac{n\pi(H - y)}{L}.
\]
So, the product solutions are given by
\[
u_n(x, y) = \sin\frac{n\pi x}{L}\sinh\frac{n\pi(H - y)}{L}, \qquad n = 1, 2, \ldots.
\]

These solutions satisfy Laplace’s equation and the three homogeneous boundaryconditions and in the problem.

The remaining boundary condition, u(x, 0) = f(x), still needs to be satisfied. Inserting y = 0 in the product solutions does not satisfy the boundary condition unless f(x) is proportional to one of the eigenfunctions X_n(x). So, we first write down the general solution as a linear combination of the product solutions,

u(x, y) = ∑_{n=1}^∞ a_n sin(nπx/L) sinh(nπ(H − y)/L). (2.24)

Now we apply the boundary condition, u(x, 0) = f(x), to find that

f(x) = ∑_{n=1}^∞ a_n sinh(nπH/L) sin(nπx/L). (2.25)

Defining b_n = a_n sinh(nπH/L), this becomes

f(x) = ∑_{n=1}^∞ b_n sin(nπx/L). (2.26)

We see that the determination of the unknown coefficients, b_n, is simply done by recognizing that this is a Fourier sine series. We now move on to the study of Fourier series and provide more complete answers in Chapter 6.
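Although Fourier coefficients are taken up properly in the next chapter, the series (2.24)–(2.26) can already be explored numerically. The sketch below is only an illustration: the profile f(x) = x(L − x), the trapezoid-rule quadrature, the truncation N, and the helper names are choices made here, not part of the text.

```python
import numpy as np

L, H, N = 1.0, 1.0, 50                  # assumed plate dimensions and truncation
x = np.linspace(0, L, 401)
f = x * (L - x)                          # assumed boundary profile u(x, 0) = f(x)

# b_n = (2/L) int_0^L f(x) sin(n pi x / L) dx, the Fourier sine coefficients of f
n = np.arange(1, N + 1)
b = np.array([2 / L * np.trapz(f * np.sin(k * np.pi * x / L), x) for k in n])
a = b / np.sinh(n * np.pi * H / L)       # a_n recovered from (2.25)

def u(xp, yp):
    """Truncated series (2.24) evaluated at the point (xp, yp)."""
    return np.sum(a * np.sin(n * np.pi * xp / L) * np.sinh(n * np.pi * (H - yp) / L))

print(u(0.5, 0.0), 0.5 * (L - 0.5))      # the series nearly reproduces f on y = 0
```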

2.6 Classification of Second Order PDEs

We have studied several examples of partial differential equations: the heat equation, the wave equation, and Laplace's equation. These equations are examples of parabolic, hyperbolic, and elliptic equations, respectively. Given a general second order linear partial differential equation, how can we tell what type it is? This is known as the classification of second order PDEs.

Let u = u(x, y). Then, the general form of a linear second order partial differential equation is given by

a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy + d(x, y)ux + e(x, y)uy + f(x, y)u = g(x, y). (2.27)

In this section we will show that this equation can be transformed into one of three types of second order partial differential equations.

Let x = x(ξ, η) and y = y(ξ, η) be an invertible transformation from coordinates (ξ, η) to coordinates (x, y). Furthermore, let u(x(ξ, η), y(ξ, η)) = U(ξ, η). How does the partial differential equation (2.27) transform?

We first need to transform the derivatives of u(x, y). We have

ux = Uξ ξx + Uη ηx,
uy = Uξ ξy + Uη ηy,
uxx = ∂/∂x (Uξ ξx + Uη ηx)
    = Uξξ ξx² + 2Uξη ξx ηx + Uηη ηx² + Uξ ξxx + Uη ηxx,
uyy = ∂/∂y (Uξ ξy + Uη ηy)
    = Uξξ ξy² + 2Uξη ξy ηy + Uηη ηy² + Uξ ξyy + Uη ηyy,
uxy = ∂/∂y (Uξ ξx + Uη ηx)
    = Uξξ ξx ξy + Uξη ξx ηy + Uξη ξy ηx + Uηη ηx ηy + Uξ ξxy + Uη ηxy. (2.28)

Inserting these derivatives into Equation (2.27), we have

g − f U = a uxx + 2b uxy + c uyy + d ux + e uy
        = a ( Uξξ ξx² + 2Uξη ξx ηx + Uηη ηx² + Uξ ξxx + Uη ηxx )
        + 2b ( Uξξ ξx ξy + Uξη ξx ηy + Uξη ξy ηx + Uηη ηx ηy + Uξ ξxy + Uη ηxy )
        + c ( Uξξ ξy² + 2Uξη ξy ηy + Uηη ηy² + Uξ ξyy + Uη ηyy )
        + d ( Uξ ξx + Uη ηx )
        + e ( Uξ ξy + Uη ηy )
        = (a ξx² + 2b ξx ξy + c ξy²) Uξξ
        + (2a ξx ηx + 2b ξx ηy + 2b ξy ηx + 2c ξy ηy) Uξη
        + (a ηx² + 2b ηx ηy + c ηy²) Uηη
        + (a ξxx + 2b ξxy + c ξyy + d ξx + e ξy) Uξ
        + (a ηxx + 2b ηxy + c ηyy + d ηx + e ηy) Uη
        = A Uξξ + 2B Uξη + C Uηη + D Uξ + E Uη. (2.29)

Picking the right transformation, we can eliminate some of the second order derivative terms depending on the type of differential equation. This leads to three types: elliptic, hyperbolic, or parabolic.

For example, if transformations can be found to make A ≡ 0 and C ≡ 0, then the equation reduces to

Uξη = lower order terms.

Such an equation is called hyperbolic. A generic example of a hyperbolic equation is the wave equation.

The conditions A ≡ 0 and C ≡ 0 give

a ξx² + 2b ξx ξy + c ξy² = 0,
a ηx² + 2b ηx ηy + c ηy² = 0. (2.30)

We seek ξ and η satisfying these two equations, which are of the same form. Let's assume that ξ(x, y) = constant defines a curve in the xy-plane. Furthermore, if this curve is the graph of a function, y = y(x), then along the curve

dξ/dx = ξx + (dy/dx) ξy = 0.

Then

dy/dx = −ξx/ξy.


Inserting this expression into A = 0, we have

A = a ξx² + 2b ξx ξy + c ξy²
  = ξy² ( a (ξx/ξy)² + 2b (ξx/ξy) + c )
  = ξy² ( a (dy/dx)² − 2b (dy/dx) + c ) = 0. (2.31)

This equation is satisfied if y(x) satisfies the differential equation

dy/dx = ( b ± √(b² − ac) ) / a.

So, for A = 0, we choose ξ and η to be constant on these characteristic curves.

Example 2.4. Show that uxx − uyy = 0 is hyperbolic.
In this case we have a = 1 = −c and b = 0. Then,

dy/dx = ±1.

This gives y(x) = ±x + c. So, we choose ξ and η constant on these characteristic curves. Therefore, we let ξ = x − y, η = x + y.

Let's see if this transformation transforms the differential equation into a canonical form. Let u(x, y) = U(ξ, η). Then, the needed derivatives become

ux = Uξ ξx + Uη ηx = Uξ + Uη,
uy = Uξ ξy + Uη ηy = −Uξ + Uη,
uxx = ∂/∂x (Uξ + Uη)
    = Uξξ ξx + Uξη ηx + Uηξ ξx + Uηη ηx
    = Uξξ + 2Uξη + Uηη,
uyy = ∂/∂y (−Uξ + Uη)
    = −Uξξ ξy − Uξη ηy + Uηξ ξy + Uηη ηy
    = Uξξ − 2Uξη + Uηη. (2.32)

Inserting these derivatives into the differential equation, we have

0 = uxx − uyy = 4Uξη.

Thus, the transformed equation is Uξη = 0, showing that it is a hyperbolic equation.

We have seen that A and C vanish for ξ(x, y) and η(x, y) constant along the characteristics

dy/dx = ( b ± √(b² − ac) ) / a

for second order hyperbolic equations. This is possible when b² − ac > 0, since this leads to two characteristics.


In general, if we consider the second order operator

L[u] = a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy,

then this operator can be transformed to the new form

L′[U] = BUξη

if b² − ac > 0. An example of a hyperbolic equation is the wave equation, utt = uxx.

When b² − ac = 0, there is only one characteristic solution, dy/dx = b/a. This is the parabolic case. But dy/dx = −ξx/ξy, so

b/a = −ξx/ξy,

or

a ξx + b ξy = 0.

Also, b² − ac = 0 implies that c = b²/a.

Inserting these expressions into the coefficient B, we have

B = 2a ξx ηx + 2b ξx ηy + 2b ξy ηx + 2c ξy ηy
  = 2(a ξx + b ξy) ηx + 2(b ξx + c ξy) ηy
  = 2 (b/a) (a ξx + b ξy) ηy = 0, (2.33)

since a ξx + b ξy = 0. Therefore, in the parabolic case, A = 0 and B = 0, and L[u] transforms to

L′[U] = C Uηη

when b² − ac = 0. This is the canonical form for a parabolic operator. An example of a parabolic equation is the heat equation, ut = uxx.

Finally, when b² − ac < 0, we have the elliptic case. In this case we cannot force A = 0 or C = 0. However, we can force B = 0. As we just showed, we can write

B = 2(a ξx + b ξy) ηx + 2(b ξx + c ξy) ηy.

Letting ηx = 0, we can choose ξ to satisfy b ξx + c ξy = 0. Then

A = a ξx² + 2b ξx ξy + c ξy² = a ξx² − c ξy² = ((ac − b²)/c) ξx²,
C = a ηx² + 2b ηx ηy + c ηy² = c ηy².

Furthermore, setting ((ac − b²)/c) ξx² = c ηy², we can make A = C, and L[u] transforms to

L′[U] = A[Uξξ + Uηη]

when b² − ac < 0. This is the canonical form for an elliptic operator. An example of an elliptic equation is Laplace's equation, uxx + uyy = 0.


Classification of Second Order PDEs

The second order differential operator

L[u] = a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy

can be transformed to one of the following forms:

• b² − ac > 0. Hyperbolic: L[u] = B(x, y)uxy.
• b² − ac = 0. Parabolic: L[u] = C(x, y)uyy.
• b² − ac < 0. Elliptic: L[u] = A(x, y)[uxx + uyy].
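As a quick check of this classification, the sign of the discriminant b² − ac can be computed directly. The sketch below is illustrative only; the helper name classify and the sample coefficient values are not taken from the text.

```python
def classify(a, b, c):
    """Classify a*uxx + 2b*uxy + c*uyy + (lower order) by the discriminant b^2 - ac."""
    disc = b * b - a * c
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# Wave equation utt - uxx = 0 (y playing the role of t): a = 1, b = 0, c = -1
# Heat equation ut - uxx = 0: a = 1, b = 0, c = 0 (the time derivative is only first order)
# Laplace's equation uxx + uyy = 0: a = 1, b = 0, c = 1
print(classify(1, 0, -1), classify(1, 0, 0), classify(1, 0, 1))
```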

As a final note, the terminology used in this classification is borrowed from the general theory of quadratic equations, which are the equations for translated and rotated conics. Recall that the general quadratic equation in two variables takes the form

ax² + 2bxy + cy² + dx + ey + f = 0. (2.34)

One can complete the squares in x and y to obtain the new form

a(x − h)² + 2b(x − h)(y − k) + c(y − k)² + f′ = 0.

So, translating points (x, y) using the transformations x′ = x − h and y′ = y − k, we find the simpler form

ax² + 2bxy + cy² + f = 0.

Here we dropped all primes.

We can also introduce transformations to simplify the quadratic terms.

Consider a rotation of the coordinate axes by θ,

x′ = x cos θ + y sin θ,
y′ = −x sin θ + y cos θ, (2.35)

or

x = x′ cos θ − y′ sin θ,
y = x′ sin θ + y′ cos θ. (2.36)

The resulting equation takes the form

Ax′² + 2Bx′y′ + Cy′² + D = 0,

where

A = a cos² θ + 2b sin θ cos θ + c sin² θ,
B = (c − a) sin θ cos θ + b(cos² θ − sin² θ),
C = a sin² θ − 2b sin θ cos θ + c cos² θ. (2.37)


We can eliminate the x′y′ term by forcing B = 0. Since cos² θ − sin² θ = cos 2θ and sin θ cos θ = ½ sin 2θ, we have

B = ((c − a)/2) sin 2θ + b cos 2θ = 0.

Therefore, the condition for eliminating the x′y′ term is

cot(2θ) = (a − c)/(2b).

Furthermore, one can show that b² − ac = B² − AC. From the form Ax′² + 2Bx′y′ + Cy′² + D = 0, the resulting quadratic equation takes one of the following forms:

• b² − ac > 0. Hyperbolic: Ax² − Cy² + D = 0.
• b² − ac = 0. Parabolic: Ax² + By + D = 0.
• b² − ac < 0. Elliptic: Ax² + Cy² + D = 0.

Thus, one can see the connection between the classification of quadratic equations and second order partial differential equations in two independent variables.
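The rotation argument is also easy to verify numerically: choose sample coefficients, rotate by the angle satisfying cot 2θ = (a − c)/(2b), and check that the cross term vanishes while the discriminant is unchanged. A sketch with arbitrary sample values:

```python
import numpy as np

a, b, c = 3.0, 2.0, 5.0                          # sample quadratic coefficients
theta = 0.5 * np.arctan2(2 * b, a - c)           # equivalent to cot(2*theta) = (a - c)/(2b)

# Rotated coefficients from (2.37)
A = a * np.cos(theta)**2 + 2 * b * np.sin(theta) * np.cos(theta) + c * np.sin(theta)**2
B = (c - a) * np.sin(theta) * np.cos(theta) + b * (np.cos(theta)**2 - np.sin(theta)**2)
C = a * np.sin(theta)**2 - 2 * b * np.sin(theta) * np.cos(theta) + c * np.cos(theta)**2

print(np.isclose(B, 0.0))                        # the x'y' term has been eliminated
print(np.isclose(b * b - a * c, B * B - A * C))  # b^2 - ac = B^2 - AC
```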

2.7 d’Alembert’s Solution of the Wave Equation

A general solution of the one-dimensional wave equation can be found. This solution was first obtained by Jean-Baptiste le Rond d'Alembert (1717–1783) and is referred to as d'Alembert's formula. In this section we will derive d'Alembert's formula and then use it to arrive at solutions to the wave equation on infinite, semi-infinite, and finite intervals.

We consider the wave equation in the form utt = c²uxx and introduce the transformation

u(x, t) = U(ξ, η), where ξ = x + ct and η = x − ct.

Note that ξ and η are the characteristics of the wave equation.

We also need to note how derivatives transform. For example,

∂u/∂x = ∂U(ξ, η)/∂x
      = (∂U/∂ξ)(∂ξ/∂x) + (∂U/∂η)(∂η/∂x)
      = ∂U/∂ξ + ∂U/∂η. (2.38)

Therefore, as an operator, we have

∂/∂x = ∂/∂ξ + ∂/∂η.


Similarly, one can show that

∂/∂t = c ∂/∂ξ − c ∂/∂η.

Using these results, the wave equation becomes

0 = utt − c²uxx
  = ( ∂²/∂t² − c² ∂²/∂x² ) u
  = ( ∂/∂t + c ∂/∂x )( ∂/∂t − c ∂/∂x ) u
  = ( c ∂/∂ξ − c ∂/∂η + c ∂/∂ξ + c ∂/∂η )( c ∂/∂ξ − c ∂/∂η − c ∂/∂ξ − c ∂/∂η ) U
  = −4c² ∂²U/∂ξ∂η. (2.39)

Therefore, the wave equation has transformed into the simpler equation,

Uηξ = 0.

Not only is this simpler, but it is once again a confirmation that the wave equation is a hyperbolic equation. Of course, it is also easy to integrate. Since

∂/∂ξ ( ∂U/∂η ) = 0,

∂U/∂η is constant with respect to ξ, say ∂U/∂η = Γ(η). A further integration gives

U(ξ, η) = ∫^η Γ(η′) dη′ + F(ξ) ≡ G(η) + F(ξ).

Therefore, we have as the general solution of the wave equation,

u(x, t) = F(x + ct) + G(x − ct), (2.40)

where F and G are two arbitrary, twice differentiable functions. As t is increased, we see that F(x + ct) gets horizontally shifted to the left and G(x − ct) gets horizontally shifted to the right. As a result, we conclude that the solution of the wave equation can be seen as the sum of left and right traveling waves.

Let's use initial conditions to solve for the unknown functions. We let

u(x, 0) = f(x), ut(x, 0) = g(x), |x| < ∞.

Applying this to the general solution, we have

f (x) = F(x) + G(x) (2.41)

g(x) = c[F′(x)− G′(x)]. (2.42)


We need to solve for F(x) and G(x) in terms of f(x) and g(x). Integrating Equation (2.42), we have

(1/c) ∫_0^x g(s) ds = F(x) − G(x) − F(0) + G(0).

Adding this result to Equation (2.41) gives

F(x) = ½ f(x) + (1/2c) ∫_0^x g(s) ds + ½[F(0) − G(0)].

Subtracting it from Equation (2.41) gives

G(x) = ½ f(x) − (1/2c) ∫_0^x g(s) ds − ½[F(0) − G(0)].

Now we can write out the solution u(x, t) = F(x + ct) + G(x − ct), yielding d'Alembert's solution

u(x, t) = ½[ f(x + ct) + f(x − ct) ] + (1/2c) ∫_{x−ct}^{x+ct} g(s) ds. (2.43)
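Formula (2.43) is straightforward to evaluate numerically. The sketch below is only an illustration: the Gaussian initial profile, the zero initial velocity, and the use of scipy's quad routine are choices made here, not part of the text.

```python
import numpy as np
from scipy.integrate import quad

c = 1.0
f = lambda x: np.exp(-x**2)            # assumed initial displacement u(x, 0)
g = lambda x: 0.0 * x                  # assumed initial velocity u_t(x, 0)

def dalembert(x, t):
    """d'Alembert's solution (2.43) at a single point (x, t)."""
    integral, _ = quad(g, x - c * t, x + c * t)
    return 0.5 * (f(x + c * t) + f(x - c * t)) + integral / (2 * c)

# With g = 0 the pulse splits into two half-amplitude traveling pulses.
print(dalembert(0.0, 0.0))    # 1.0
print(dalembert(2.0, 2.0))    # about 0.5, the right-moving half of the pulse
```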

When f(x) and g(x) are defined for all x ∈ R, the solution is well-defined. However, there are problems on more restricted domains. In the next examples we will consider the semi-infinite and finite length string problems. In each case we will need to consider the domain of dependence and the domain of influence of specific points. These concepts are shown in Figure 2.10. The domain of dependence of point P is the red region; the point P depends on the values of u and ut at points inside that region. The domain of influence of P is the blue region; the points in that region are influenced by the values of u and ut at P.

Figure 2.10: The domain of dependence of point P is the red region. The point P depends on the values of u and ut at points inside the domain. The domain of influence of P is the blue region. The points in the region are influenced by the values of u and ut at P.

Example 2.5. Use d'Alembert's solution to solve

utt = c²uxx, u(x, 0) = f(x), ut(x, 0) = g(x), 0 ≤ x < ∞.

The d'Alembert solution is not well-defined for this problem because f(x − ct) is not defined for x − ct < 0 when c, t > 0. There are similar problems for g(x). This can be seen by looking at the characteristics in the xt-plane. In Figure 2.11 there are characteristics emanating from the points marked by η₀ and ξ₀ that intersect in the domain x > 0. The point of intersection of the blue lines has a domain of dependence entirely in the region x, t > 0; however, the domain of dependence of point P reaches outside this region. Only characteristics ξ = x + ct reach point P, but characteristics η = x − ct do not. But we need f(η) and g(x) for x < ct to form a solution.

Figure 2.11: The characteristics for the semi-infinite string.

This can be remedied if we specify boundary conditions at x = 0. For example, we will assume the end x = 0 is fixed (fixed end boundary condition),

u(0, t) = 0, t ≥ 0.

Imagine an infinite string with one end (at x = 0) tied to a pole.

Since u(x, t) = F(x + ct) + G(x − ct), we have

u(0, t) = F(ct) + G(−ct) = 0.

Letting ζ = −ct, this gives G(ζ) = −F(−ζ), ζ ≤ 0. Note that

G(ζ) = ½ f(ζ) − (1/2c) ∫_0^ζ g(s) ds,

−F(−ζ) = −½ f(−ζ) − (1/2c) ∫_0^{−ζ} g(s) ds
        = −½ f(−ζ) + (1/2c) ∫_0^ζ g(−σ) dσ. (2.44)

Comparing the expressions for G(ζ) and −F(−ζ), we see that

f(ζ) = −f(−ζ), g(ζ) = −g(−ζ).

These relations imply that we can extend the functions into the region x < 0 if we make them odd functions, or what are called odd extensions. An example is shown in Figure 2.12.

Another type of boundary condition arises if the end x = 0 is free (free end boundary condition),

ux(0, t) = 0, t ≥ 0.

In this case we could have an infinite string tied to a ring, and that ring is allowed to slide freely up and down a pole. One can prove that this leads to

f(−ζ) = f(ζ), g(−ζ) = g(ζ).

Thus, we can use an even extension of these functions to produce solutions.


Example 2.6. Solve the initial-boundary value problem

utt = c²uxx, 0 ≤ x < ∞, t > 0,

u(x, 0) =
  x,      0 ≤ x ≤ 1,
  2 − x,  1 ≤ x ≤ 2,
  0,      x > 2,

ut(x, 0) = 0, 0 ≤ x < ∞,

u(0, t) = 0, t > 0. (2.45)

This is a semi-infinite string with a fixed end. Initially it is plucked to produce a nonzero triangular profile for 0 ≤ x ≤ 2. Since the initial velocity is zero, the general solution is found from d'Alembert's solution,

u(x, t) = ½[ f_o(x + ct) + f_o(x − ct) ],

where f_o(x) is the odd extension of f(x) = u(x, 0). In Figure 2.12 we show the initial condition and its odd extension. The odd extension is obtained through reflection of f(x) about the origin.

Figure 2.12: The initial condition f(x) = u(x, 0) and its odd extension f_o(x). The odd extension is obtained through reflection of f(x) about the origin.

The next step is to look at the horizontal shifts of f_o(x). Several examples are shown in Figure 2.13. These show the left and right traveling waves.

In Figure 2.14 we show superimposed plots of f_o(x + ct) and f_o(x − ct) for given times. The initial profile is at the bottom. By the time ct = 2 the full traveling wave has emerged. The solution to the problem emerges on the right side of the figure by averaging each plot.
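The construction just described is easy to reproduce numerically: build the odd extension of the triangular pluck, shift it left and right, and average. A minimal sketch, assuming c = 1 and the helper names used below:

```python
import numpy as np

def f(x):
    """Triangular pluck of Example 2.6, for x >= 0."""
    return np.where(x <= 1, x, np.where(x <= 2, 2 - x, 0.0)) * (x >= 0)

def f_odd(x):
    """Odd extension about the origin: f_o(-x) = -f_o(x)."""
    return np.where(x >= 0, f(x), -f(-x))

def u(x, t, c=1.0):
    """Fixed-end, semi-infinite string: average of the shifted odd extensions."""
    return 0.5 * (f_odd(x + c * t) + f_odd(x - c * t))

x = np.linspace(0, 10, 11)
print(u(x, 2.0))             # the split and reflected profile at ct = 2
print(u(0.0, 1.5))           # 0.0: the fixed-end condition u(0, t) = 0 holds
```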

Example 2.7. Use d’Alembert’s solution to solve

utt = c²uxx, u(x, 0) = f(x), ut(x, 0) = g(x), 0 ≤ x ≤ ℓ.

The general solution of the wave equation was found in the form

u(x, t) = F(x + ct) + G(x− ct).


Figure 2.13: Examples of f_o(x + ct) and f_o(x − ct) for ct = 0, 1, 2.


Figure 2.14: Superimposed plots of f_o(x + ct) and f_o(x − ct) for ct = 0, 0.5, 1, 1.5, 2, 2.5. The initial profile is at the bottom. By the time ct = 2 the full traveling wave has emerged.


Figure 2.15: On the left is a plot of f(x + ct), f(x − ct) from Figure 2.14 and the average, u(x, t). On the right the solution alone is shown for ct = 0 at bottom to ct = 1 at top for the semi-infinite string problem.


However, for this problem we can only obtain information for values of x and t such that 0 ≤ x + ct ≤ ℓ and 0 ≤ x − ct ≤ ℓ. In Figure 2.16 we show the characteristics x = ξ + ct and x = η − ct for 0 ≤ ξ, η ≤ ℓ. The main (gray) triangle, which is the domain of dependence of the point (ℓ/2, ℓ/2c), is the only region in which the solution can be found based solely on the initial conditions. As with the previous problem, boundary conditions will need to be given in order to extend the domain of the solution.

In the last example we saw that a fixed boundary at x = 0 could be satisfied when f(x) and g(x) are extended as odd functions. In Figure 2.17 we indicate how the characteristics are affected by drawing in the new ones as red dashed lines. This allows us to construct solutions based on the initial conditions under the line x = ℓ − ct for 0 ≤ x ≤ ℓ. The new region for which we can construct solutions from the initial conditions is indicated in gray in Figure 2.17.

Figure 2.16: The characteristics emanating from the interval 0 ≤ x ≤ ℓ for the finite string problem.

Figure 2.17: The red dashed lines are the characteristics from the interval [−ℓ, 0] obtained by using the odd extension about x = 0.

We can add characteristics on the right by adding a boundary condition at x = ℓ. Again, we could use fixed, u(ℓ, t) = 0, or free, ux(ℓ, t) = 0, boundary conditions. This allows us to construct solutions based on the initial conditions for ℓ ≤ x ≤ 2ℓ.

Let's consider a fixed boundary condition at x = ℓ. Then, the solution must satisfy

u(ℓ, t) = F(ℓ + ct) + G(ℓ − ct) = 0.


To see what this means, let ζ = ℓ + ct. Then, this condition gives (since ct = ζ − ℓ)

F(ζ) = −G(2ℓ − ζ), ℓ ≤ ζ ≤ 2ℓ.

Note that G(2ℓ − ζ) is defined for 0 ≤ 2ℓ − ζ ≤ ℓ. Therefore, this is a well-defined extension of the domain of F(x).

Note that

F(ζ) = ½ f(ζ) + (1/2c) ∫_0^ζ g(s) ds,

−G(2ℓ − ζ) = −½ f(2ℓ − ζ) + (1/2c) ∫_0^{2ℓ−ζ} g(s) ds
           = −½ f(2ℓ − ζ) + (1/2c) ∫_ζ^{2ℓ} g(2ℓ − σ) dσ. (2.46)

Comparing the expressions for F(ζ) and −G(2ℓ − ζ), we see that they agree provided

f(ζ) = −f(2ℓ − ζ), g(ζ) = −g(2ℓ − ζ).

These relations imply that we can extend the functions into the region x > ℓ if we consider an odd extension of f(x) and g(x) about x = ℓ. This will give the blue dashed characteristics in Figure 2.18 and a larger gray region in which to construct the solution.

Figure 2.18: The red dashed lines are the characteristics from the interval [−ℓ, 0] from using the odd extension about x = 0, and the blue dashed lines are the characteristics from the interval [ℓ, 2ℓ] from using the odd extension about x = ℓ.

So far we have extended f(x) and g(x) to the interval −ℓ ≤ x ≤ 2ℓ in order to determine the solution over a larger xt-domain. For example, the function f(x) has been extended to

f_ext(x) =
  −f(−x),       −ℓ < x < 0,
  f(x),          0 < x < ℓ,
  −f(2ℓ − x),   ℓ < x < 2ℓ.

A similar extension is needed for g(x). Inserting these extended functions into d'Alembert's solution, we can determine u(x, t) in the region indicated in Figure 2.18.


Even though the original region has been expanded, we have not determined how to find the solution throughout the entire strip, [0, ℓ] × [0, ∞). This is accomplished by periodically repeating these extended functions with period 2ℓ. This can be shown from the two conditions

f(x) = −f(−x), −ℓ ≤ x ≤ 0,
f(x) = −f(2ℓ − x), ℓ ≤ x ≤ 2ℓ. (2.47)

Now, consider

f(x + 2ℓ) = −f(2ℓ − (x + 2ℓ))
          = −f(−x)
          = f(x). (2.48)

This shows that f(x) is periodic with period 2ℓ. Since g(x) satisfies the same conditions, it is as well.

In Figure 2.19 we show how the characteristics are extended throughout the domain strip using the periodicity of the extended initial conditions. The characteristics from the interval endpoints zigzag throughout the domain, filling it up. In the next example we show how to construct the odd periodic extension of a specific function.

Figure 2.19: Extending the characteristics throughout the domain strip.

Example 2.8. Construct the periodic extension of the plucked string initial profile given by

f(x) =
  x,      0 ≤ x ≤ ℓ/2,
  ℓ − x,  ℓ/2 ≤ x ≤ ℓ,

satisfying fixed boundary conditions at x = 0 and x = ℓ.

We first take the initial profile and add the odd extension about x = 0. Then we add an extension beyond x = ℓ. This process is shown in Figure 2.20.

We can use the odd periodic function to construct solutions. In this case we use the result from the last example for obtaining the solution of the problem in which the initial velocity is zero, u(x, t) = ½[ f(x + ct) + f(x − ct) ]. Translations of the odd periodic extension are shown in Figure 2.21.
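The odd 2ℓ-periodic extension is simple to implement: reduce x modulo 2ℓ and use oddness about x = 0. A sketch under assumed values (ℓ = 1, c = 1; the helper names are illustrative):

```python
import numpy as np

ell = 1.0                                   # assumed string length

def f(x):
    """Plucked profile of Example 2.8 on [0, ell]."""
    return np.where(x <= ell / 2, x, ell - x)

def f_ext(x):
    """Odd, 2*ell-periodic extension of f."""
    x = np.mod(x, 2 * ell)                  # reduce to [0, 2*ell)
    return np.where(x <= ell, f(x), -f(2 * ell - x))

def u(x, t, c=1.0):
    """Plucked string solution with zero initial velocity."""
    return 0.5 * (f_ext(x + c * t) + f_ext(x - c * t))

print(u(np.linspace(0, ell, 5), 0.25))      # profile at ct = 0.25
print(u(0.0, 0.7), u(ell, 0.7))             # both fixed ends remain at zero
```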


Figure 2.20: Construction of the odd periodic extension: (a) the initial profile, f(x); (b) make f(x) an odd function on [−ℓ, ℓ]; (c) make the odd function periodic with period 2ℓ.

In Figure 2.22 we show superimposed plots of f(x + ct) and f(x − ct) for different values of ct. A box is shown inside which the physical wave can be constructed. The solution is an average of these odd periodic extensions within this box. This is displayed in Figure 2.23.

Figure 2.21: Translations of the odd periodic extension, f(x ± ct), for ct = 0, 0.2, 0.4, 0.6.

Problems

1. Solve the following initial value problems.

a. x′′ + x = 0, x(0) = 2, x′(0) = 0.

b. y′′ + 2y′ − 8y = 0, y(0) = 1, y′(0) = 2.

c. x²y′′ − 2xy′ − 4y = 0, y(1) = 1, y′(1) = 0.

2. Solve the following boundary value problems directly, when possible.

a. x′′ + x = 2, x(0) = 0, x′(1) = 0.

b. y′′ + 2y′ − 8y = 0, y(0) = 1, y(1) = 0.


Figure 2.22: Superimposed translations of the odd periodic extension, f(x + ct) and f(x − ct), for ct = 0, 0.2, 0.4, 0.6, 0.8, 1.

Figure 2.23: On the left is a plot of f(x + ct), f(x − ct) from Figure 2.22 and the average, u(x, t). On the right the solution alone is shown for ct = 0 to ct = 1.


c. y′′ + y = 0, y(0) = 1, y(π) = 0.

3. Consider the boundary value problem for the deflection of a horizontal beam fixed at one end,

d⁴y/dx⁴ = C, y(0) = 0, y′(0) = 0, y′′(L) = 0, y′′′(L) = 0.

Solve this problem assuming that C is a constant.

4. Find the product solutions, u(x, t) = T(t)X(x), to the heat equation, ut − uxx = 0, on [0, π] satisfying the boundary conditions ux(0, t) = 0 and u(π, t) = 0.

5. Find the product solutions, u(x, t) = T(t)X(x), to the wave equation utt = 2uxx, on [0, 2π] satisfying the boundary conditions u(0, t) = 0 and ux(2π, t) = 0.

6. Find product solutions, u(x, y) = X(x)Y(y), to Laplace's equation, uxx + uyy = 0, on the unit square satisfying the boundary conditions u(0, y) = 0, u(1, y) = g(y), u(x, 0) = 0, and u(x, 1) = 0.

7. Consider the following boundary value problems. Determine the eigenvalues, λ, and eigenfunctions, y(x), for each problem.

a. y′′ + λy = 0, y(0) = 0, y′(1) = 0.

b. y′′ − λy = 0, y(−π) = 0, y′(π) = 0.

c. x²y′′ + xy′ + λy = 0, y(1) = 0, y(2) = 0.

d. (x²y′)′ + λy = 0, y(1) = 0, y′(e) = 0. (In problem d you will not get exact eigenvalues. Show that you obtain a transcendental equation for the eigenvalues in the form tan z = 2z. Find the first three eigenvalues numerically.)

8. Classify the following equations as either hyperbolic, parabolic, or elliptic.

a. uyy + uxy + uxx = 0.
b. 3uxx + 2uxy + 5uyy = 0.
c. x²uxx + 2xyuxy + y²uyy = 0.
d. y²uxx + 2xyuxy + (x² + 4x⁴)uyy = 0.

9. Use d’Alembert’s solution to prove

f (−ζ) = f (ζ), g(−ζ) = g(ζ)

for the semi-infinite string satisfying the free end condition ux(0, t) = 0.

10. Derive a solution similar to d'Alembert's solution for the equation utt + 2uxt − 3uxx = 0.

11. Construct the appropriate periodic extension of the plucked string initial profile given by

f(x) =
  x,      0 ≤ x ≤ ℓ/2,
  ℓ − x,  ℓ/2 ≤ x ≤ ℓ,

satisfying the boundary conditions u(0, t) = 0 and ux(ℓ, t) = 0 for t > 0.


12. Find and sketch the solution of the problem

utt = uxx, 0 ≤ x ≤ 1, t > 0,

u(x, 0) =
  0,  0 ≤ x < 1/4,
  1,  1/4 ≤ x ≤ 3/4,
  0,  3/4 < x ≤ 1,

ut(x, 0) = 0,
u(0, t) = 0, t > 0,
u(1, t) = 0, t > 0.


3 Trigonometric Fourier Series

“Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say.” Bertrand Russell (1872–1970)

3.1 Introduction to Fourier Series

We will now turn to the study of trigonometric series. You have seen that functions have series representations as expansions in powers of x, or x − a, in the form of Maclaurin and Taylor series. Recall that the Taylor series expansion is given by

f(x) = ∑_{n=0}^∞ c_n (x − a)^n,

where the expansion coefficients are determined as

c_n = f^(n)(a)/n!.

From the study of the heat equation and wave equation, we have found that there are infinite series expansions over other functions, such as sine functions. We now turn to such expansions, and in the next chapter we will find out that expansions over special sets of functions are not uncommon in physics. But, first we turn to Fourier trigonometric series.

We will begin with the study of the Fourier trigonometric series expansion

f(x) = a_0/2 + ∑_{n=1}^∞ [ a_n cos(nπx/L) + b_n sin(nπx/L) ].

We will find expressions useful for determining the Fourier coefficients a_n, b_n given a function f(x) defined on [−L, L]. We will also see if the resulting infinite series reproduces f(x). However, we first begin with some basic ideas involving simple sums of sinusoidal functions.

There is a natural appearance of such sums over sinusoidal functions in music. A pure note can be represented as

y(t) = A sin(2π f t),


where A is the amplitude, f is the frequency in hertz (Hz), and t is time in seconds. The amplitude is related to the volume of the sound. The larger the amplitude, the louder the sound. In Figure 3.1 we show plots of two such tones with f = 2 Hz in the top plot and f = 5 Hz in the bottom one.

Figure 3.1: Plots of y(t) = A sin(2π f t) on [0, 5]: (a) y(t) = 2 sin(4πt), f = 2 Hz; (b) y(t) = sin(10πt), f = 5 Hz.

In these plots you should notice the difference due to the amplitudes and the frequencies. You can easily reproduce these plots and others in your favorite plotting utility.

As an aside, you should be cautious when plotting functions, or sampling data. The plots you get might not be what you expect, even for a simple sine function. In Figure 3.2 we show four plots of the function y(t) = 2 sin(4πt). In the top left you see a proper rendering of this function. However, if you use a different number of points to plot this function, the results may be surprising. In this example we show what happens if you use N = 200, 100, 101 points instead of the 201 points used in the first plot. Such disparities are not only possible when plotting functions, but are also present when collecting data. Typically, when you sample a set of data, you only gather a finite amount of information at a fixed rate. This could happen when getting data on ocean wave heights, digitizing music and other audio to put on your computer, or any other process when you attempt to analyze a continuous signal.
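This sampling effect is easy to reproduce. A small sketch, assuming matplotlib for the plots and using the values of N mentioned above:

```python
import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, figsize=(8, 5))
for ax, N in zip(axes.flat, (201, 200, 100, 101)):
    t = np.linspace(0, 5, N)                 # N sample points on [0, 5]
    ax.plot(t, 2 * np.sin(4 * np.pi * t))
    ax.set_title(f"y(t) = 2 sin(4 pi t), N = {N} points")
plt.tight_layout()
plt.show()
```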

Figure 3.2: Problems can occur while plotting. Here we plot the function y(t) = 2 sin(4πt) using N = 201, 200, 100, 101 points.

Next, we consider what happens when we add several pure tones. After all, most of the sounds that we hear are in fact a combination of pure tones with different amplitudes and frequencies. In Figure 3.3 we see what happens when we add several sinusoids. Note that as one adds more and more tones with different characteristics, the resulting signal gets more complicated. However, we still have a function of time. In this chapter we will ask,


“Given a function f(t), can we find a set of sinusoidal functions whose sum converges to f(t)?”

Figure 3.3: Superposition of several sinusoids: (a) sum of signals with frequencies f = 2 Hz and f = 5 Hz; (b) sum of signals with frequencies f = 2 Hz, f = 5 Hz, and f = 8 Hz.

Looking at the superpositions in Figure 3.3, we see that the sums yield functions that appear to be periodic. This is not to be unexpected. We recall that a periodic function is one in which the function values repeat over the domain of the function. The length of the smallest part of the domain which repeats is called the period. We can define this more precisely: A function is said to be periodic with period T if f(t + T) = f(t) for all t, and the smallest such positive number T is called the period.

For example, we consider the functions used in Figure 3.3. We began with y(t) = 2 sin(4πt). Recall from your first studies of trigonometric functions that one can determine the period by dividing the coefficient of t into 2π to get the period. In this case we have

T = 2π/(4π) = 1/2.

Looking at the top plot in Figure 3.1 we can verify this result. (You can count the full number of cycles in the graph and divide this into the total time to get a more accurate value of the period.)

In general, if y(t) = A sin(2π f t), the period is found as

T = 2π/(2π f) = 1/f.

Of course, this result makes sense, as the unit of frequency, the hertz, is also defined as s⁻¹, or cycles per second.

Returning to Figure 3.3, the functions y(t) = 2 sin(4πt), y(t) = sin(10πt), and y(t) = 0.5 sin(16πt) have periods of 0.5 s, 0.2 s, and 0.125 s, respectively. Each superposition in Figure 3.3 retains a period that is the least common multiple of the periods of the signals added. For both plots, this is 1.0 s = 2(0.5) s = 5(0.2) s = 8(0.125) s.

Our goal will be to start with a function and then determine the amplitudes of the simple sinusoids needed to sum to that function. We will see that this might involve an infinite number of such terms. Thus, we will be studying an infinite series of sinusoidal functions.

Secondly, we will find that using just sine functions will not be enough either. This is because we can add sinusoidal functions that do not necessarily peak at the same time. We will consider two signals that originate at different times. This is similar to when your music teacher would make sections of the class sing a song like “Row, Row, Row your Boat” starting at slightly different times.

Figure 3.4: Plot of the functions y(t) = 2 sin(4πt) and y(t) = 2 sin(4πt + 7π/8) and their sum: (a) plot of each function; (b) plot of the sum of the functions.

We can easily add shifted sine functions. In Figure 3.4 we show the functions y(t) = 2 sin(4πt) and y(t) = 2 sin(4πt + 7π/8) and their sum. Note that this shifted sine function can be written as y(t) = 2 sin(4π(t + 7/32)). Thus, this corresponds to a time shift of −7/32.

So, we should account for shifted sine functions in the general sum. Of course, we would then need to determine the unknown time shift as well as the amplitudes of the sinusoidal functions that make up the signal, f(t).


While this is one approach that some researchers use to analyze signals, there is a more common approach. This results from another reworking of the shifted function.

We should note that the form in the lower plot of Figure 3.4 looks like a simple sinusoidal function for a reason. Let

y_1(t) = 2 sin(4πt),
y_2(t) = 2 sin(4πt + 7π/8).

Then,

y_1 + y_2 = 2 sin(4πt + 7π/8) + 2 sin(4πt)
          = 2[ sin(4πt + 7π/8) + sin(4πt) ]
          = 4 cos(7π/16) sin(4πt + 7π/16).

Consider the general shifted function

y(t) = A sin(2π f t + φ). (3.1)

Note that 2π f t + φ is called the phase of the sine function and φ is called the phase shift. We can use the trigonometric identity (A.17) for the sine of the sum of two angles¹ to obtain

y(t) = A sin(2π f t + φ)
     = A sin(φ) cos(2π f t) + A cos(φ) sin(2π f t). (3.2)

¹ Recall the identities (A.17)–(A.18):
sin(x + y) = sin x cos y + sin y cos x,
cos(x + y) = cos x cos y − sin x sin y.

Defining a = A sin(φ) and b = A cos(φ), we can rewrite this as

y(t) = a cos(2π f t) + b sin(2π f t).

Thus, we see that the signal in Equation (3.1) is a sum of sine and cosine functions with the same frequency and different amplitudes. If we can find a and b, then we can easily determine A and φ:

A = √(a² + b²), tan φ = a/b.
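The conversion between the two forms is a two-line computation. A quick sketch with arbitrarily chosen a and b (note the use of arctan2 so that φ lands in the correct quadrant):

```python
import numpy as np

a, b = 1.0, -2.0                      # sample coefficients of cos and sin
A = np.hypot(a, b)                    # amplitude sqrt(a^2 + b^2)
phi = np.arctan2(a, b)                # phase, since tan(phi) = a/b

t = np.linspace(0, 1, 5)
f = 2.0
lhs = a * np.cos(2 * np.pi * f * t) + b * np.sin(2 * np.pi * f * t)
rhs = A * np.sin(2 * np.pi * f * t + phi)
print(np.allclose(lhs, rhs))          # True: the two forms agree
```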

We are now in a position to state our goal.

Goal - Fourier Analysis

Given a signal f(t), we would like to determine its frequency content by finding out what combinations of sines and cosines of varying frequencies and amplitudes will sum to the given function. This is called Fourier Analysis.

3.2 Fourier Trigonometric Series

As we have seen in the last section, we are interested in finding representations of functions in terms of sines and cosines. Given a function f(x) we seek a representation in the form

f(x) ∼ a_0/2 + ∑_{n=1}^∞ [ a_n cos nx + b_n sin nx ]. (3.3)

Notice that we have opted to drop the references to the time-frequency form of the phase. This will lead to a simpler discussion for now, and one can always make the transformation nx = 2π f_n t when applying these ideas to applications.

The series representation in Equation (3.3) is called a Fourier trigonometric series. We will simply refer to this as a Fourier series for now. The set


of constants a_0, a_n, b_n, n = 1, 2, . . . are called the Fourier coefficients. The constant term is chosen in this form to make later computations simpler, though some other authors choose to write the constant term as a_0. Our goal is to find the Fourier series representation given f(x). Having found the Fourier series representation, we will be interested in determining when the Fourier series converges and to what function it converges.

Figure 3.5: Plot of the function f(t) defined on [0, 2π] and its periodic extension.

From our discussion in the last section, we see that the Fourier series is periodic. The periods of cos nx and sin nx are 2π/n. Thus, the largest period, T = 2π, comes from the n = 1 terms, and the Fourier series has period 2π. This means that the series should be able to represent functions that are periodic of period 2π.

While this appears restrictive, we could also consider functions that are defined over one period. In Figure 3.5 we show a function defined on [0, 2π]. In the same figure, we show its periodic extension. These are just copies of the original function shifted by the period and glued together. The extension can now be represented by a Fourier series, and restricting the Fourier series to [0, 2π] will give a representation of the original function. Therefore, we will first consider Fourier series representations of functions defined on this interval. Note that we could just as easily have considered functions defined on [−π, π] or any interval of length 2π. We will consider more general intervals later in the chapter.

Fourier Coefficients

Theorem 3.1. The Fourier series representation of f(x) defined on [0, 2π], when it exists, is given by (3.3) with Fourier coefficients

a_n = (1/π) ∫_0^{2π} f(x) cos nx dx, n = 0, 1, 2, . . . ,
b_n = (1/π) ∫_0^{2π} f(x) sin nx dx, n = 1, 2, . . . . (3.4)

These expressions for the Fourier coefficients are obtained by considering special integrations of the Fourier series. We will now derive the a_n integrals in (3.4).

We begin with the computation of a_0. Integrating the Fourier series term by term in Equation (3.3), we have

∫_0^{2π} f(x) dx = ∫_0^{2π} (a_0/2) dx + ∫_0^{2π} ∑_{n=1}^∞ [ a_n cos nx + b_n sin nx ] dx. (3.5)

We will assume that we can integrate the infinite sum term by term. (Evaluating the integral of an infinite series by integrating term by term depends on the convergence properties of the series.) Then we will need to compute

∫_0^{2π} (a_0/2) dx = (a_0/2)(2π) = πa_0,
∫_0^{2π} cos nx dx = [ (sin nx)/n ]_0^{2π} = 0,
∫_0^{2π} sin nx dx = [ −(cos nx)/n ]_0^{2π} = 0. (3.6)


From these results we see that only one term in the integrated sum does not vanish, leaving

∫_0^{2π} f(x) dx = πa_0.

This confirms the value for a_0.²

² Note that a_0/2 is the average of f(x) over the interval [0, 2π]. Recall from the first semester of calculus that the average of a function defined on [a, b] is given by

f_ave = (1/(b − a)) ∫_a^b f(x) dx.

For f(x) defined on [0, 2π], we have

f_ave = (1/2π) ∫_0^{2π} f(x) dx = a_0/2.

Next, we will find the expression for a_n. We multiply the Fourier series (3.3) by cos mx for some positive integer m. This is like multiplying by cos 2x, cos 5x, etc. We are multiplying by all possible cos mx functions for different integers m all at the same time. We will see that this will allow us to solve for the a_n's.

We find the integrated sum of the series times cos mx is given by

∫_0^{2π} f(x) cos mx dx = ∫_0^{2π} (a_0/2) cos mx dx
                        + ∫_0^{2π} ∑_{n=1}^∞ [ a_n cos nx + b_n sin nx ] cos mx dx. (3.7)

Integrating term by term, the right side becomes

∫_0^{2π} f(x) cos mx dx = (a_0/2) ∫_0^{2π} cos mx dx
                        + ∑_{n=1}^∞ [ a_n ∫_0^{2π} cos nx cos mx dx + b_n ∫_0^{2π} sin nx cos mx dx ]. (3.8)

We have already established that ∫_0^{2π} cos mx dx = 0, which implies that the first term vanishes.

Next we need to compute integrals of products of sines and cosines. This requires that we make use of some of the trigonometric identities listed in Chapter 1. For quick reference, we list these here.

Useful Trigonometric Identities

sin(x ± y) = sin x cos y ± sin y cos x (3.9)
cos(x ± y) = cos x cos y ∓ sin x sin y (3.10)
sin² x = ½(1 − cos 2x) (3.11)
cos² x = ½(1 + cos 2x) (3.12)
sin x sin y = ½( cos(x − y) − cos(x + y) ) (3.13)
cos x cos y = ½( cos(x + y) + cos(x − y) ) (3.14)
sin x cos y = ½( sin(x + y) + sin(x − y) ) (3.15)

We first want to evaluate ∫_0^{2π} cos nx cos mx dx. We do this by using the product identity (3.14). We have

∫_0^{2π} cos nx cos mx dx = ½ ∫_0^{2π} [ cos(m + n)x + cos(m − n)x ] dx
                         = ½ [ (sin(m + n)x)/(m + n) + (sin(m − n)x)/(m − n) ]_0^{2π} = 0. (3.16)

There is one caveat when doing such integrals. What if one of the denominators m ± n vanishes? For this problem m + n ≠ 0, since both m and n are positive integers. However, it is possible for m = n. This means that the vanishing of the integral can only happen when m ≠ n. So, what can we do about the m = n case? One way is to start from scratch with our integration. (Another way is to compute the limit as n approaches m in our result and use L'Hopital's Rule. Try it!)

For n = m we have to compute ∫_0^{2π} cos² mx dx. This can also be handled using a trigonometric identity. Using the half angle formula, (3.12), with θ = mx, we find

∫_0^{2π} cos² mx dx = ½ ∫_0^{2π} (1 + cos 2mx) dx
                    = ½ [ x + (1/2m) sin 2mx ]_0^{2π}
                    = ½ (2π) = π. (3.17)

To summarize, we have shown that

∫_0^{2π} cos nx cos mx dx = 0 for m ≠ n, and π for m = n. (3.18)

This holds true for m, n = 0, 1, . . . . [Why did we include m, n = 0?] When we have such a set of functions, they are said to be an orthogonal set over the integration interval. A set of (real) functions {φ_n(x)} is said to be orthogonal on [a, b] if ∫_a^b φ_n(x)φ_m(x) dx = 0 when n ≠ m. Furthermore, if we also have that ∫_a^b φ_n²(x) dx = 1, these functions are called orthonormal.

The set of functions {cos nx}_{n=0}^∞ is orthogonal on [0, 2π]. Actually, they are orthogonal on any interval of length 2π. We can make them orthonormal by dividing each function by √π, as indicated by Equation (3.17). This is sometimes referred to as normalization of the set of functions.

The notion of orthogonality is actually a generalization of the orthogonality of vectors in finite dimensional vector spaces. The integral ∫_a^b f(x)g(x) dx is the generalization of the dot product, and is called the scalar product of f(x) and g(x), which are thought of as vectors in an infinite dimensional vector space spanned by a set of orthogonal functions. We will return to these ideas in the next chapter.
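The orthogonality relations just used can be verified symbolically. A short check using sympy (the particular pairs (n, m) are arbitrary choices made for illustration):

```python
import sympy as sp

x = sp.symbols('x')
for n, m in [(2, 3), (4, 4), (1, 5)]:
    cc = sp.integrate(sp.cos(n * x) * sp.cos(m * x), (x, 0, 2 * sp.pi))
    sc = sp.integrate(sp.sin(n * x) * sp.cos(m * x), (x, 0, 2 * sp.pi))
    print(n, m, cc, sc)   # cc equals pi only when n == m; sc is always 0
```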

Returning to the integrals in Equation (3.8), we still have to evaluate ∫_0^{2π} sin nx cos mx dx. We can use the trigonometric identity involving products of sines and cosines, (3.15). Setting A = nx and B = mx, we find that

∫_0^{2π} sin nx cos mx dx = ½ ∫_0^{2π} [ sin(n + m)x + sin(n − m)x ] dx
                         = ½ [ −(cos(n + m)x)/(n + m) − (cos(n − m)x)/(n − m) ]_0^{2π}
                         = (−1 + 1) + (−1 + 1) = 0. (3.19)

So,

∫_0^{2π} sin nx cos mx dx = 0. (3.20)

For these integrals we also should be careful about setting n = m. In this special case, we have the integrals

∫_0^{2π} sin mx cos mx dx = ½ ∫_0^{2π} sin 2mx dx = ½ [ −(cos 2mx)/(2m) ]_0^{2π} = 0.

Finally, we can finish evaluating the expression in Equation (3.8). We have determined that all but one integral vanishes. In that case, n = m. This leaves us with

∫_0^{2π} f(x) cos mx dx = a_m π.

Solving for a_m gives

a_m = (1/π) ∫_0^{2π} f(x) cos mx dx.

Since this is true for all m = 1, 2, . . . , we have proven this part of the theorem. The only part left is finding the b_n's. This will be left as an exercise for the reader.
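The coefficient formulas (3.4) can also be approximated by numerical quadrature, which is a useful sanity check for the examples that follow. A minimal sketch; the test function |sin x| and the trapezoid rule are choices made here, not part of the text.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 2001)
f = np.abs(np.sin(x))                        # sample function on [0, 2*pi]

def fourier_coefficients(f, x, N):
    """Approximate a_n (n = 0..N) and b_n (n = 1..N) from (3.4)."""
    a = [np.trapz(f * np.cos(n * x), x) / np.pi for n in range(N + 1)]
    b = [np.trapz(f * np.sin(n * x), x) / np.pi for n in range(1, N + 1)]
    return np.array(a), np.array(b)

a, b = fourier_coefficients(f, x, 4)
print(np.round(a, 4))     # a_0 = 4/pi, a_2 = -4/(3 pi), the odd-index a_n vanish
print(np.round(b, 4))     # all b_n are numerically zero
```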

We now consider examples of finding Fourier coefficients for given functions. In all of these cases we define f(x) on [0, 2π].

Example 3.1. f(x) = 3 cos 2x, x ∈ [0, 2π].
We first compute the integrals for the Fourier coefficients.

a_0 = (1/π) ∫_0^{2π} 3 cos 2x dx = 0,
a_n = (1/π) ∫_0^{2π} 3 cos 2x cos nx dx = 0, n ≠ 2,
a_2 = (1/π) ∫_0^{2π} 3 cos² 2x dx = 3,
b_n = (1/π) ∫_0^{2π} 3 cos 2x sin nx dx = 0, ∀n. (3.21)

The integrals for a_0, a_n (n ≠ 2), and b_n are the result of orthogonality. For a_2, the integral can be computed as follows:

a_2 = (1/π) ∫_0^{2π} 3 cos² 2x dx
    = (3/2π) ∫_0^{2π} [ 1 + cos 4x ] dx
    = (3/2π) [ x + ¼ sin 4x ]_0^{2π}   (the sin 4x term vanishes)
    = 3. (3.22)

Therefore, we have that the only nonvanishing coefficient is a_2 = 3. So there is one term and f(x) = 3 cos 2x.

Well, we should have known the answer to the last example before doing all of those integrals. If we have a function expressed simply in terms of sums of simple sines and cosines, then it should be easy to write down the Fourier coefficients without much work. This is seen by writing out the Fourier series,

f(x) ∼ a_0/2 + ∑_{n=1}^∞ [ a_n cos nx + b_n sin nx ]
     = a_0/2 + a_1 cos x + b_1 sin x + a_2 cos 2x + b_2 sin 2x + . . . . (3.23)

For the last problem, f(x) = 3 cos 2x. Comparing this to the expanded Fourier series, one can immediately read off the Fourier coefficients without doing any integration. In the next example we emphasize this point.

Example 3.2. f(x) = sin² x, x ∈ [0, 2π].
We could determine the Fourier coefficients by integrating as in the last example. However, it is easier to use trigonometric identities. We know that

sin² x = ½(1 − cos 2x) = ½ − ½ cos 2x.

There are no sine terms, so b_n = 0, n = 1, 2, . . . . There is a constant term, implying a_0/2 = 1/2. So, a_0 = 1. There is a cos 2x term, corresponding to n = 2, so a_2 = −½. That leaves a_n = 0 for n ≠ 0, 2. So, a_0 = 1, a_2 = −½, and all other Fourier coefficients vanish.

Example 3.3. f(x) =
  1,   0 < x < π,
  −1,  π < x < 2π.

Figure 3.6: Plot of the discontinuous function in Example 3.3.

This example will take a little more work. We cannot bypass evaluating any integrals this time. As seen in Figure 3.6, this function is discontinuous. So, we will break up any integration into two integrals, one over [0, π] and the other over [π, 2π].

a_0 = (1/π) ∫_0^{2π} f(x) dx
    = (1/π) ∫_0^π dx + (1/π) ∫_π^{2π} (−1) dx
    = (1/π)(π) + (1/π)(−2π + π) = 0. (3.24)

a_n = (1/π) ∫_0^{2π} f(x) cos nx dx
    = (1/π) [ ∫_0^π cos nx dx − ∫_π^{2π} cos nx dx ]
    = (1/π) [ ( (1/n) sin nx )_0^π − ( (1/n) sin nx )_π^{2π} ] = 0. (3.25)

b_n = (1/π) ∫_0^{2π} f(x) sin nx dx
    = (1/π) [ ∫_0^π sin nx dx − ∫_π^{2π} sin nx dx ]
    = (1/π) [ ( −(1/n) cos nx )_0^π + ( (1/n) cos nx )_π^{2π} ]
    = (1/π) [ −(1/n) cos nπ + 1/n + 1/n − (1/n) cos nπ ]
    = (2/(nπ)) (1 − cos nπ). (3.26)

We have found the Fourier coefficients for this function. Before inserting them into the Fourier series (3.3), we note that cos nπ = (−1)ⁿ. Therefore,

1 − cos nπ = 0 for n even, and 2 for n odd. (3.27)

Often we see expressions involving cos nπ = (−1)ⁿ and 1 ± cos nπ = 1 ± (−1)ⁿ; this is an example showing how to re-index series containing cos nπ.

So, half of the b_n's are zero. While we could write the Fourier series representation as

f(x) ∼ (4/π) ∑_{n odd} (1/n) sin nx,

we could let n = 2k − 1 in order to capture the odd numbers only. The answer can be written as

f(x) = (4/π) ∑_{k=1}^∞ sin((2k − 1)x)/(2k − 1).
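Partial sums of this series can be compared against the square wave directly, which also previews the convergence questions raised next. A brief sketch; the sample points and numbers of terms are illustrative choices.

```python
import numpy as np

x = np.linspace(0.3, np.pi - 0.3, 7)      # sample points away from the jumps at 0 and pi
square = np.ones_like(x)                   # f(x) = 1 on (0, pi)

def partial_sum(x, K):
    """First K terms of (4/pi) * sum_k sin((2k-1)x)/(2k-1)."""
    k = np.arange(1, K + 1)
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, 2 * k - 1)) / (2 * k - 1), axis=1)

for K in (5, 50, 500):
    print(K, np.max(np.abs(partial_sum(x, K) - square)))   # the error shrinks as K grows
```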

Having determined the Fourier representation of a given function, we would like to know if the infinite series can be summed; i.e., does the series converge? Does it converge to f(x)? We will discuss this question later in the chapter, after we generalize the Fourier series to intervals other than x ∈ [0, 2π].

3.3 Fourier Series Over Other Intervals

In many applications we are interested in determining Fourier series representations of functions defined on intervals other than [0, 2π]. In this section we will determine the form of the series expansion and the Fourier coefficients in these cases.

The most general type of interval is given as [a, b]. However, this often is too general. More common intervals are of the form [−π, π], [0, L], or


[−L/2, L/2]. The simplest generalization is to the interval [0, L]. Such intervals arise often in applications. For example, for the problem of a one dimensional string of length L, we set up the axes with the left end at x = 0 and the right end at x = L. Similarly, for the temperature distribution along a one dimensional rod of length L we set the interval to x ∈ [0, L]. Such problems naturally lead to the study of Fourier series on intervals of length L. We will see later that symmetric intervals, [−a, a], are also useful.

Given an interval [0, L], we could apply a transformation to an interval of length 2π by simply rescaling the interval. Then we could apply this transformation to the Fourier series representation to obtain an equivalent one useful for functions defined on [0, L].

Figure 3.7: A sketch of the transformation between the intervals x ∈ [0, 2π] and t ∈ [0, L].

We define x ∈ [0, 2π] and t ∈ [0, L]. A linear transformation relating these intervals is simply x = 2πt/L, as shown in Figure 3.7. So, t = 0 maps to x = 0 and t = L maps to x = 2π. Furthermore, this transformation maps f(x) to a new function g(t) = f(x(t)), which is defined on [0, L]. We will determine the Fourier series representation of this function using the representation for f(x) from the last section.

Recall the form of the Fourier representation for f(x) in Equation (3.3):

f(x) ∼ a_0/2 + ∑_{n=1}^∞ [ a_n cos nx + b_n sin nx ]. (3.28)

Inserting the transformation relating x and t, we have

g(t) ∼ a_0/2 + ∑_{n=1}^∞ [ a_n cos(2nπt/L) + b_n sin(2nπt/L) ]. (3.29)

This gives the form of the series expansion for g(t) with t ∈ [0, L]. But, we still need to determine the Fourier coefficients.

Recall that

a_n = (1/π) ∫_0^{2π} f(x) cos nx dx.

We need to make the substitution x = 2πt/L in the integral. We also will need to transform the differential, dx = (2π/L) dt. Thus, the resulting form for the Fourier coefficients is

a_n = (2/L) ∫_0^L g(t) cos(2nπt/L) dt. (3.30)

Similarly, we find that

b_n = (2/L) ∫_0^L g(t) sin(2nπt/L) dt. (3.31)

We note first that when L = 2π we get back the series representation that we first studied. Also, the period of cos(2nπt/L) is L/n, which means that the representation for g(t) has a period of L corresponding to n = 1.

At the end of this section we present the derivation of the Fourier series representation for a general interval for the interested reader. In Table 3.1 we summarize some commonly used Fourier series representations.

(Integration of even and odd functions over symmetric intervals, [−a, a].)


At this point we need to remind the reader about the integration of even and odd functions on symmetric intervals.

We first recall that f(x) is an even function if f(−x) = f(x) for all x. One can recognize even functions as they are symmetric with respect to the y-axis, as shown in Figure 3.8.

Figure 3.8: Area under an even function on a symmetric interval, [−a, a].

If one integrates an even function over a symmetric interval, then one has that

∫_{−a}^{a} f(x) dx = 2 ∫_0^{a} f(x) dx. (3.32)

One can prove this by splitting off the integration over negative values of x, using the substitution x = −y, and employing the evenness of f(x). Thus,

∫_{−a}^{a} f(x) dx = ∫_{−a}^{0} f(x) dx + ∫_0^{a} f(x) dx
                  = −∫_{a}^{0} f(−y) dy + ∫_0^{a} f(x) dx
                  = ∫_0^{a} f(y) dy + ∫_0^{a} f(x) dx
                  = 2 ∫_0^{a} f(x) dx. (3.33)

This can be visually verified by looking at Figure 3.8.Odd Functions.

A similar computation could be done for odd functions. f (x) is an oddfunction if f (−x) = − f (x) for all x. The graphs of such functions aresymmetric with respect to the origin as shown in Figure 3.9. If one integratesan odd function over a symmetric interval, then one has that∫ a

−af (x) dx = 0. (3.34)

a−a

x

y(x)

Figure 3.9: Area under an odd functionon a symmetric interval, [−a, a].

Example 3.4. Let f(x) = |x| on [−π, π]. We compute the coefficients, beginning as usual with a0. We have, using the fact that |x| is an even function,

\[
a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} |x|\, dx = \frac{2}{\pi} \int_0^{\pi} x\, dx = \pi. \tag{3.35}
\]

We continue with the computation of the general Fourier coefficients for f(x) = |x| on [−π, π]. We have

\[
a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} |x| \cos nx\, dx = \frac{2}{\pi} \int_0^{\pi} x \cos nx\, dx. \tag{3.36}
\]

Here we have made use of the fact that |x| cos nx is an even function.

In order to compute the resulting integral, we need to use integration by parts,

\[
\int_a^b u\, dv = uv \Big|_a^b - \int_a^b v\, du,
\]

by letting u = x and dv = cos nx dx. Thus, du = dx and v = ∫ dv = (1/n) sin nx.


Table 3.1: Special Fourier Series Representations on Different Intervals

Fourier Series on [0, L]:
\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \frac{2n\pi x}{L} + b_n \sin \frac{2n\pi x}{L} \right]. \tag{3.37}
\]
\[
a_n = \frac{2}{L} \int_0^{L} f(x) \cos \frac{2n\pi x}{L}\, dx, \quad n = 0, 1, 2, \ldots, \qquad
b_n = \frac{2}{L} \int_0^{L} f(x) \sin \frac{2n\pi x}{L}\, dx, \quad n = 1, 2, \ldots. \tag{3.38}
\]

Fourier Series on [−L/2, L/2]:
\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \frac{2n\pi x}{L} + b_n \sin \frac{2n\pi x}{L} \right]. \tag{3.39}
\]
\[
a_n = \frac{2}{L} \int_{-L/2}^{L/2} f(x) \cos \frac{2n\pi x}{L}\, dx, \quad n = 0, 1, 2, \ldots, \qquad
b_n = \frac{2}{L} \int_{-L/2}^{L/2} f(x) \sin \frac{2n\pi x}{L}\, dx, \quad n = 1, 2, \ldots. \tag{3.40}
\]

Fourier Series on [−π, π]:
\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos nx + b_n \sin nx \right]. \tag{3.41}
\]
\[
a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos nx\, dx, \quad n = 0, 1, 2, \ldots, \qquad
b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx\, dx, \quad n = 1, 2, \ldots. \tag{3.42}
\]


Continuing with the computation, we have

\[
\begin{aligned}
a_n &= \frac{2}{\pi} \int_0^{\pi} x \cos nx\, dx \\
&= \frac{2}{\pi} \left[ \frac{1}{n} x \sin nx \Big|_0^{\pi} - \frac{1}{n} \int_0^{\pi} \sin nx\, dx \right] \\
&= -\frac{2}{\pi n} \left[ -\frac{1}{n} \cos nx \right]_0^{\pi} \\
&= -\frac{2}{\pi n^2} \left( 1 - (-1)^n \right). \tag{3.43}
\end{aligned}
\]

Here we have used the fact that cos nπ = (−1)^n for any integer n. This leads to a factor (1 − (−1)^n). This factor can be simplified as

\[
1 - (-1)^n = \begin{cases} 2, & n \text{ odd}, \\ 0, & n \text{ even}. \end{cases} \tag{3.44}
\]

So, a_n = 0 for n even and a_n = −4/(πn²) for n odd.

Computing the b_n's is simpler. We note that we have to integrate |x| sin nx from x = −π to π. The integrand is an odd function and this is a symmetric interval. So, the result is that b_n = 0 for all n.

Putting this all together, the Fourier series representation of f(x) = |x| on [−π, π] is given as

\[
f(x) \sim \frac{\pi}{2} - \frac{4}{\pi} \sum_{\substack{n=1 \\ n\ \mathrm{odd}}}^{\infty} \frac{\cos nx}{n^2}. \tag{3.45}
\]

While this is correct, we can rewrite the sum over only odd n by reindexing. We let n = 2k − 1 for k = 1, 2, 3, …. Then we only get the odd integers. The series can then be written as

\[
f(x) \sim \frac{\pi}{2} - \frac{4}{\pi} \sum_{k=1}^{\infty} \frac{\cos (2k-1)x}{(2k-1)^2}. \tag{3.46}
\]

Throughout our discussion we have referred to such results as Fourier representations. We have not yet looked at the convergence of these series. Here is an example of an infinite series of functions. What does this series sum to? We show in Figure 3.10 the first few partial sums. They appear to be converging to f(x) = |x| fairly quickly.

Even though f(x) was defined on [−π, π], we can still evaluate the Fourier series at values of x outside this interval. In Figure 3.11, we see that the representation agrees with f(x) on the interval [−π, π]. Outside this interval we have a periodic extension of f(x) with period 2π.

Another example is the Fourier series representation of f(x) = x on [−π, π], left as Problem 7. This is determined to be

\[
f(x) \sim 2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin nx. \tag{3.47}
\]

As seen in Figure 3.12, we again obtain the periodic extension of the function. In this case we needed many more terms. Also, the vertical parts of the first plot are nonexistent. In the second plot we only plot the points and not the typical connected points that most software packages plot as the default style.

Figure 3.10: Plot of the first partial sums (N = 1, 2, 3, 4) of the Fourier series representation for f(x) = |x|.

Figure 3.11: Plot of the first 10 terms of the Fourier series representation for f(x) = |x| on the interval [−2π, 4π].

Figure 3.12: Plot of the first 10 terms and 200 terms of the Fourier series representation for f(x) = x on the interval [−2π, 4π].
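Plots such as those in Figures 3.10–3.12 can be reproduced with a few lines of Maple. The following sketch is ours (not the author's code) and simply plots the Nth partial sum of the series (3.46) against |x|:

> restart:
> S := (N, x) -> Pi/2 - 4/Pi*add(cos((2*k-1)*x)/(2*k-1)^2, k = 1..N):
> plot([abs(x), S(4, x)], x = -Pi..Pi, linestyle = [solid, dash]);

Extending the plotting range beyond [−π, π] shows the periodic extension discussed above.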

Example 3.5. It is interesting to note that one can use Fourier series to obtain sums of some infinite series. For example, in the last example we found that

\[
x \sim 2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin nx.
\]

Now, what if we chose x = π/2? Then, we have

\[
\frac{\pi}{2} = 2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin \frac{n\pi}{2} = 2 \left[ 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \right].
\]

This gives a well known expression for π:

\[
\pi = 4 \left[ 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \right].
\]

3.3.1 Fourier Series on [a, b]

A Fourier series representation is also possible for a general interval, t ∈ [a, b]. As before, we just need to transform this interval to [0, 2π]. Let

\[
x = 2\pi \frac{t-a}{b-a}.
\]

[This section can be skipped on first reading. It is here for completeness and the end result, Theorem 3.2, provides the result of the section.]

Inserting this into the Fourier series (3.3) representation for f(x), we obtain

\[
g(t) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \frac{2n\pi (t-a)}{b-a} + b_n \sin \frac{2n\pi (t-a)}{b-a} \right]. \tag{3.48}
\]

Well, this expansion is ugly. It is not like the last example, where the transformation was straightforward. If one were to apply the theory to applications, it might seem to make sense to just shift the data so that a = 0 and be done with any complicated expressions. However, some students enjoy the challenge of developing such generalized expressions. So, let's see what is involved.

First, we apply the addition identities for trigonometric functions and rearrange the terms.

\[
\begin{aligned}
g(t) &\sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \frac{2n\pi (t-a)}{b-a} + b_n \sin \frac{2n\pi (t-a)}{b-a} \right] \\
&= \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \left( \cos \frac{2n\pi t}{b-a} \cos \frac{2n\pi a}{b-a} + \sin \frac{2n\pi t}{b-a} \sin \frac{2n\pi a}{b-a} \right)
+ b_n \left( \sin \frac{2n\pi t}{b-a} \cos \frac{2n\pi a}{b-a} - \cos \frac{2n\pi t}{b-a} \sin \frac{2n\pi a}{b-a} \right) \right] \\
&= \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ \cos \frac{2n\pi t}{b-a} \left( a_n \cos \frac{2n\pi a}{b-a} - b_n \sin \frac{2n\pi a}{b-a} \right)
+ \sin \frac{2n\pi t}{b-a} \left( a_n \sin \frac{2n\pi a}{b-a} + b_n \cos \frac{2n\pi a}{b-a} \right) \right]. \tag{3.49}
\end{aligned}
\]


Defining A0 = a0 and

\[
\begin{aligned}
A_n &\equiv a_n \cos \frac{2n\pi a}{b-a} - b_n \sin \frac{2n\pi a}{b-a}, \\
B_n &\equiv a_n \sin \frac{2n\pi a}{b-a} + b_n \cos \frac{2n\pi a}{b-a}, \tag{3.50}
\end{aligned}
\]

we arrive at the more desirable form for the Fourier series representation of a function defined on the interval [a, b],

\[
g(t) \sim \frac{A_0}{2} + \sum_{n=1}^{\infty} \left[ A_n \cos \frac{2n\pi t}{b-a} + B_n \sin \frac{2n\pi t}{b-a} \right]. \tag{3.51}
\]

We next need to find expressions for the Fourier coefficients. We insert the known expressions for a_n and b_n and rearrange. First, we note that under the transformation x = 2π(t − a)/(b − a) we have

\[
a_n = \frac{1}{\pi} \int_0^{2\pi} f(x) \cos nx\, dx = \frac{2}{b-a} \int_a^b g(t) \cos \frac{2n\pi (t-a)}{b-a}\, dt, \tag{3.52}
\]

and

\[
b_n = \frac{1}{\pi} \int_0^{2\pi} f(x) \sin nx\, dx = \frac{2}{b-a} \int_a^b g(t) \sin \frac{2n\pi (t-a)}{b-a}\, dt. \tag{3.53}
\]

Then, inserting these integrals in A_n, combining integrals and making use of the addition formula for the cosine of the sum of two angles, we obtain

\[
\begin{aligned}
A_n &\equiv a_n \cos \frac{2n\pi a}{b-a} - b_n \sin \frac{2n\pi a}{b-a} \\
&= \frac{2}{b-a} \int_a^b g(t) \left[ \cos \frac{2n\pi (t-a)}{b-a} \cos \frac{2n\pi a}{b-a} - \sin \frac{2n\pi (t-a)}{b-a} \sin \frac{2n\pi a}{b-a} \right] dt \\
&= \frac{2}{b-a} \int_a^b g(t) \cos \frac{2n\pi t}{b-a}\, dt. \tag{3.54}
\end{aligned}
\]

A similar computation gives

\[
B_n = \frac{2}{b-a} \int_a^b g(t) \sin \frac{2n\pi t}{b-a}\, dt. \tag{3.55}
\]

Summarizing, we have shown that:

Theorem 3.2. The Fourier series representation of f(x) defined on [a, b], when it exists, is given by

\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \frac{2n\pi x}{b-a} + b_n \sin \frac{2n\pi x}{b-a} \right], \tag{3.56}
\]

with Fourier coefficients

\[
\begin{aligned}
a_n &= \frac{2}{b-a} \int_a^b f(x) \cos \frac{2n\pi x}{b-a}\, dx, \quad n = 0, 1, 2, \ldots, \\
b_n &= \frac{2}{b-a} \int_a^b f(x) \sin \frac{2n\pi x}{b-a}\, dx, \quad n = 1, 2, \ldots. \tag{3.57}
\end{aligned}
\]
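As a sanity check of Theorem 3.2, one can compute the coefficients (3.57) on a sample interval and plot a partial sum against the function. The following Maple lines are a sketch of ours; the choices f(x) = x on [1, 3] and 25 terms are ours purely for illustration:

> restart:
> a := 1:  b := 3:  f := x:
> A := k -> 2/(b-a)*int(f*cos(2*k*Pi*x/(b-a)), x = a..b):
> B := k -> 2/(b-a)*int(f*sin(2*k*Pi*x/(b-a)), x = a..b):
> S := A(0)/2 + add(A(k)*cos(2*k*Pi*x/(b-a)) + B(k)*sin(2*k*Pi*x/(b-a)), k = 1..25):
> plot([f, S], x = a..b);

The partial sum hugs f(x) = x in the interior of [1, 3] and oscillates near the endpoints, where the periodic extension is discontinuous.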


3.4 Sine and Cosine Series

In the last two examples (f(x) = |x| and f(x) = x on [−π, π]) we have seen Fourier series representations that contain only sine or cosine terms. As we know, the sine functions are odd functions and thus sum to odd functions. Similarly, cosine functions sum to even functions. Such occurrences happen often in practice. Fourier representations involving just sines are called sine series and those involving just cosines (and the constant term) are called cosine series.

Another interesting result, based upon these examples, is that the original functions, |x| and x, agree on the interval [0, π]. Note from Figures 3.10-3.12 that their Fourier series representations do as well. Thus, more than one series can be used to represent functions defined on finite intervals. All they need to do is to agree with the function over that particular interval. Sometimes one of these series is more useful because it has additional properties needed in the given application.

We have made the following observations from the previous examples:

1. There are several trigonometric series representations for a function defined on a finite interval.

2. Odd functions on a symmetric interval are represented by sine series and even functions on a symmetric interval are represented by cosine series.

These two observations are related and are the subject of this section. We begin by defining a function f(x) on the interval [0, L]. We have seen that the Fourier series representation of this function appears to converge to a periodic extension of the function.

In Figure 3.13 we show a function defined on [0, 1]. To the right is its periodic extension to the whole real axis. This representation has a period of L = 1. The bottom left plot is obtained by first reflecting f about the y-axis to make it an even function and then graphing the periodic extension of this new function. Its period will be 2L = 2. Finally, in the last plot we flip the function about each axis and graph the periodic extension of the new odd function. It will also have a period of 2L = 2.

Figure 3.13: A sketch of a function and its various extensions. The original function f(x) is defined on [0, 1] and graphed in the upper left corner. To its right is the periodic extension, obtained by adding replicas. The two lower plots are obtained by first making the original function even or odd and then creating the periodic extensions of the new function.

In general, we obtain three different periodic representations. In order to distinguish these we will refer to them simply as the periodic, even and odd extensions. Now, starting with f(x) defined on [0, L], we would like to determine the Fourier series representations leading to these extensions. [For easy reference, the results are summarized in Table 3.2.]

We have already seen from Table 3.1 that the periodic extension of f(x), defined on [0, L], is obtained through the Fourier series representation

\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \frac{2n\pi x}{L} + b_n \sin \frac{2n\pi x}{L} \right], \tag{3.58}
\]

where

\[
\begin{aligned}
a_n &= \frac{2}{L} \int_0^L f(x) \cos \frac{2n\pi x}{L}\, dx, \quad n = 0, 1, 2, \ldots, \\
b_n &= \frac{2}{L} \int_0^L f(x) \sin \frac{2n\pi x}{L}\, dx, \quad n = 1, 2, \ldots. \tag{3.59}
\end{aligned}
\]

Even periodic extension.

Given f(x) defined on [0, L], the even periodic extension is obtained by simply computing the Fourier series representation for the even function

\[
f_e(x) \equiv \begin{cases} f(x), & 0 < x < L, \\ f(-x), & -L < x < 0. \end{cases} \tag{3.60}
\]

Since f_e(x) is an even function on the symmetric interval [−L, L], we expect that the resulting Fourier series will not contain sine terms. Therefore, the series expansion will be given by [use the general case in (3.56) with a = −L and b = L]:

\[
f_e(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos \frac{n\pi x}{L}, \tag{3.67}
\]

with Fourier coefficients

\[
a_n = \frac{1}{L} \int_{-L}^{L} f_e(x) \cos \frac{n\pi x}{L}\, dx, \quad n = 0, 1, 2, \ldots. \tag{3.68}
\]

However, we can simplify this by noting that the integrand is even and the interval of integration can be replaced by [0, L]. On this interval f_e(x) = f(x). So, we have the Cosine Series Representation of f(x) for x ∈ [0, L] given as

Fourier Cosine Series.

\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos \frac{n\pi x}{L}, \tag{3.69}
\]

where

\[
a_n = \frac{2}{L} \int_0^{L} f(x) \cos \frac{n\pi x}{L}\, dx, \quad n = 0, 1, 2, \ldots. \tag{3.70}
\]


Table 3.2: Fourier Cosine and Sine Series Representations on [0, L]

Fourier Series on [0, L]:
\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos \frac{2n\pi x}{L} + b_n \sin \frac{2n\pi x}{L} \right]. \tag{3.61}
\]
\[
a_n = \frac{2}{L} \int_0^{L} f(x) \cos \frac{2n\pi x}{L}\, dx, \quad n = 0, 1, 2, \ldots, \qquad
b_n = \frac{2}{L} \int_0^{L} f(x) \sin \frac{2n\pi x}{L}\, dx, \quad n = 1, 2, \ldots. \tag{3.62}
\]

Fourier Cosine Series on [0, L]:
\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos \frac{n\pi x}{L}, \tag{3.63}
\]
where
\[
a_n = \frac{2}{L} \int_0^{L} f(x) \cos \frac{n\pi x}{L}\, dx, \quad n = 0, 1, 2, \ldots. \tag{3.64}
\]

Fourier Sine Series on [0, L]:
\[
f(x) \sim \sum_{n=1}^{\infty} b_n \sin \frac{n\pi x}{L}, \tag{3.65}
\]
where
\[
b_n = \frac{2}{L} \int_0^{L} f(x) \sin \frac{n\pi x}{L}\, dx, \quad n = 1, 2, \ldots. \tag{3.66}
\]


Odd periodic extension.

Similarly, given f(x) defined on [0, L], the odd periodic extension is obtained by simply computing the Fourier series representation for the odd function

\[
f_o(x) \equiv \begin{cases} f(x), & 0 < x < L, \\ -f(-x), & -L < x < 0. \end{cases} \tag{3.71}
\]

The resulting series expansion leads to defining the Sine Series Representation of f(x) for x ∈ [0, L] as

Fourier Sine Series Representation.

\[
f(x) \sim \sum_{n=1}^{\infty} b_n \sin \frac{n\pi x}{L}, \tag{3.72}
\]

where

\[
b_n = \frac{2}{L} \int_0^{L} f(x) \sin \frac{n\pi x}{L}\, dx, \quad n = 1, 2, \ldots. \tag{3.73}
\]

Example 3.6. In Figure 3.13 we actually provided plots of the various extensions of the function f(x) = x² for x ∈ [0, 1]. Let's determine the representations of the periodic, even and odd extensions of this function.

For a change, we will use a CAS (Computer Algebra System) package to do the integrals. In this case we can use Maple. A general code for doing this for the periodic extension is shown in Table 3.3.

Example 3.7. Periodic Extension - Trigonometric Fourier Series. Using the code in Table 3.3, we have that a0 = 2/3, an = 1/(n²π²), and bn = −1/(nπ). Thus, the resulting series is given as

\[
f(x) \sim \frac{1}{3} + \sum_{n=1}^{\infty} \left[ \frac{1}{n^2\pi^2} \cos 2n\pi x - \frac{1}{n\pi} \sin 2n\pi x \right].
\]

In Figure 3.14 we see the sum of the first 50 terms of this series. Generally, we see that the series seems to be converging to the periodic extension of f. There appear to be some problems with the convergence around integer values of x. We will later see that this is because of the discontinuities in the periodic extension, and the resulting overshoot is referred to as the Gibbs phenomenon, which is discussed in the last section of this chapter.

Figure 3.14: The periodic extension of f(x) = x² on [0, 1].


Table 3.3: Maple code for computing Fourier coefficients and plotting partial sums of the Fourier series.

> restart:
> L:=1:
> f:=x^2:
> assume(n,integer):
> a0:=2/L*int(f,x=0..L);
                  a0 := 2/3
> an:=2/L*int(f*cos(2*n*Pi*x/L),x=0..L);
                  an := 1/(n~^2*Pi^2)
> bn:=2/L*int(f*sin(2*n*Pi*x/L),x=0..L);
                  bn := -1/(n~*Pi)
> F:=a0/2+sum((1/(k*Pi)^2)*cos(2*k*Pi*x/L)
     -1/(k*Pi)*sin(2*k*Pi*x/L),k=1..50):
> plot(F,x=-1..3,title=`Periodic Extension`,
     titlefont=[TIMES,ROMAN,14],font=[TIMES,ROMAN,14]);

Example 3.8. Even Periodic Extension - Cosine Series. In this case we compute a0 = 2/3 and an = 4(−1)^n/(n²π²). Therefore, we have

\[
f(x) \sim \frac{1}{3} + \frac{4}{\pi^2} \sum_{n=1}^{\infty} \frac{(-1)^n}{n^2} \cos n\pi x.
\]

In Figure 3.15 we see the sum of the first 50 terms of this series. In this case the convergence seems to be much better than in the periodic extension case. We also see that it is converging to the even extension.

Figure 3.15: The even periodic extension of f(x) = x² on [0, 1].


Example 3.9. Odd Periodic Extension - Sine Series. Finally, we look at the sine series for this function. We find that

\[
b_n = -\frac{2}{n^3\pi^3} \left( n^2\pi^2 (-1)^n - 2(-1)^n + 2 \right).
\]

Therefore,

\[
f(x) \sim -\frac{2}{\pi^3} \sum_{n=1}^{\infty} \frac{1}{n^3} \left( n^2\pi^2 (-1)^n - 2(-1)^n + 2 \right) \sin n\pi x.
\]

Once again we see discontinuities in the extension, as seen in Figure 3.16. However, we have verified that our sine series appears to be converging to the odd extension as we first sketched in Figure 3.13.

Figure 3.16: The odd periodic extension of f(x) = x² on [0, 1].
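The cosine and sine series coefficients in Examples 3.8 and 3.9 can be obtained by lightly modifying the code in Table 3.3: only the integrands change (nπx/L in place of 2nπx/L), since the even or odd extension reduces to an integral of f over [0, L]. A sketch of ours (the names ac and bs are our own):

> restart:
> L := 1:  f := x^2:
> assume(n, integer):
> # cosine series coefficients, Eq. (3.64)
> ac := 2/L*int(f*cos(n*Pi*x/L), x = 0..L);
> bs := 2/L*int(f*sin(n*Pi*x/L), x = 0..L);
> # ac should reduce to 4*(-1)^n/(n^2*Pi^2); bs to the expression in Example 3.9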

3.5 Solution of the Heat Equation

We started this chapter seeking solutions of initial-boundary value problems involving the heat equation and the wave equation. In particular, we found the general solution for the problem of heat flow in a one dimensional rod of length L with fixed zero temperature ends. The problem was given by

\[
\begin{aligned}
\text{PDE:} &\quad u_t = k u_{xx}, \quad 0 < t, \ 0 \le x \le L, \\
\text{IC:} &\quad u(x, 0) = f(x), \quad 0 < x < L, \\
\text{BC:} &\quad u(0, t) = 0, \quad t > 0, \\
&\quad u(L, t) = 0, \quad t > 0.
\end{aligned} \tag{3.74}
\]

We found the solution using separation of variables. This resulted in a sum over various product solutions:

\[
u(x, t) = \sum_{n=1}^{\infty} b_n e^{k\lambda_n t} \sin \frac{n\pi x}{L},
\]

where

\[
\lambda_n = -\left( \frac{n\pi}{L} \right)^2.
\]


This equation satisfies the boundary conditions. However, we had only gotten to state the initial condition using this solution. Namely,

\[
f(x) = u(x, 0) = \sum_{n=1}^{\infty} b_n \sin \frac{n\pi x}{L}.
\]

We were left with having to determine the constants b_n. Once we know them, we have the solution.

Now we can get the Fourier coefficients when we are given the initial condition, f(x). They are given by

\[
b_n = \frac{2}{L} \int_0^L f(x) \sin \frac{n\pi x}{L}\, dx, \quad n = 1, 2, \ldots.
\]

We consider a couple of examples with different initial conditions.

Example 3.10. Consider the solution of the heat equation with f(x) = sin x and L = π.

In this case the solution takes the form

\[
u(x, t) = \sum_{n=1}^{\infty} b_n e^{k\lambda_n t} \sin nx.
\]

However, the initial condition takes the form of the first term in the expansion; i.e., the n = 1 term. So, we need not carry out the integral because we can immediately write b_1 = 1 and b_n = 0, n = 2, 3, …. Therefore, the solution consists of just one term,

\[
u(x, t) = e^{-kt} \sin x.
\]

In Figure 3.17 we see how this solution behaves for k = 1 and t ∈ [0, 1].

Figure 3.17: The evolution of the initial condition f(x) = sin x for L = π and k = 1.

Example 3.11. Consider solutions of the heat equation with f(x) = x(1 − x) and L = 1.

This example requires a bit more work. The solution takes the form

\[
u(x, t) = \sum_{n=1}^{\infty} b_n e^{-n^2\pi^2 k t} \sin n\pi x,
\]

where

\[
b_n = 2 \int_0^1 f(x) \sin n\pi x\, dx.
\]

This integral is easily computed using integration by parts:

\[
\begin{aligned}
b_n &= 2 \int_0^1 x(1-x) \sin n\pi x\, dx \\
&= \left[ 2x(1-x) \left( -\frac{1}{n\pi} \cos n\pi x \right) \right]_0^1 + \frac{2}{n\pi} \int_0^1 (1-2x) \cos n\pi x\, dx \\
&= \frac{2}{n^2\pi^2} \left[ (1-2x) \sin n\pi x \right]_0^1 + \frac{4}{n^2\pi^2} \int_0^1 \sin n\pi x\, dx \\
&= -\frac{4}{n^3\pi^3} \left[ \cos n\pi x \right]_0^1 = \frac{4}{n^3\pi^3} \left( 1 - \cos n\pi \right) \\
&= \begin{cases} 0, & n \text{ even}, \\ \dfrac{8}{n^3\pi^3}, & n \text{ odd}. \end{cases} \tag{3.75}
\end{aligned}
\]

(The boundary terms vanish since x(1 − x) and sin nπx are zero at both endpoints.) So, we have that the solution can be written as

\[
u(x, t) = \frac{8}{\pi^3} \sum_{\ell=1}^{\infty} \frac{1}{(2\ell - 1)^3}\, e^{-(2\ell-1)^2 \pi^2 k t} \sin (2\ell - 1)\pi x.
\]

In Figure 3.18 we see how this solution behaves for k = 1 and t ∈ [0, 1]. Twenty terms were used. We see that this solution diffuses much faster than in the last example. Most of the terms damp out quickly as the solution asymptotically approaches the first term.

Figure 3.18: The evolution of the initial condition f(x) = x(1 − x) for L = 1 and k = 1.
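A truncated version of this series is easy to evaluate numerically. The following Maple lines are a sketch of ours, mirroring the twenty-term sum used for Figure 3.18 with k = 1 (the name u and the plotted times are our choices):

> restart:
> k := 1:
> u := (x, t) -> 8/Pi^3*add(exp(-(2*l-1)^2*Pi^2*k*t)
>        *sin((2*l-1)*Pi*x)/(2*l-1)^3, l = 1..20):
> plot([u(x, 0), u(x, 0.01), u(x, 0.1)], x = 0..1);

The higher harmonics visibly disappear almost immediately, leaving what is essentially the single-mode decay of the first term.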


3.6 Finite Length Strings

We now return to the physical example of wave propagation in a string. We found that the general solution can be represented as a sum over product solutions. We will restrict our discussion to the special case that the initial velocity is zero and the original profile is given by u(x, 0) = f(x). The solution is then

\[
u(x, t) = \sum_{n=1}^{\infty} A_n \sin \frac{n\pi x}{L} \cos \frac{n\pi c t}{L} \tag{3.76}
\]

satisfying

\[
f(x) = \sum_{n=1}^{\infty} A_n \sin \frac{n\pi x}{L}. \tag{3.77}
\]

We have seen that the Fourier sine series coefficients are given by

\[
A_n = \frac{2}{L} \int_0^L f(x) \sin \frac{n\pi x}{L}\, dx. \tag{3.78}
\]

We can rewrite this solution in a more compact form. First, we define the wave numbers,

\[
k_n = \frac{n\pi}{L}, \quad n = 1, 2, \ldots,
\]

and the angular frequencies,

\[
\omega_n = c k_n = \frac{n\pi c}{L}.
\]

Then, the product solutions take the form sin k_n x cos ω_n t. Using trigonometric identities, these products can be written as

\[
\sin k_n x \cos \omega_n t = \frac{1}{2} \left[ \sin(k_n x + \omega_n t) + \sin(k_n x - \omega_n t) \right].
\]

Inserting this expression in the solution, we have

\[
u(x, t) = \frac{1}{2} \sum_{n=1}^{\infty} A_n \left[ \sin(k_n x + \omega_n t) + \sin(k_n x - \omega_n t) \right]. \tag{3.79}
\]

Since ω_n = c k_n, we can put this into a more suggestive form:

\[
u(x, t) = \frac{1}{2} \left[ \sum_{n=1}^{\infty} A_n \sin k_n (x + ct) + \sum_{n=1}^{\infty} A_n \sin k_n (x - ct) \right]. \tag{3.80}
\]

We see that each sum is simply the sine series for f(x) but evaluated at either x + ct or x − ct. Thus, the solution takes the form

\[
u(x, t) = \frac{1}{2} \left[ f(x + ct) + f(x - ct) \right]. \tag{3.81}
\]

The solution of the wave equation can be written as the sum of right and left traveling waves.

If t = 0, then we have u(x, 0) = ½[f(x) + f(x)] = f(x). So, the solution satisfies the initial condition. At t = 1, the sum has a term f(x − c).


Recall from your mathematics classes that this is simply a shifted version of f(x). Namely, it is shifted to the right. For general times, the function is shifted by ct to the right. For larger values of t, this shift is further to the right. The function (wave) shifts to the right with velocity c. Similarly, f(x + ct) is a wave traveling to the left with velocity −c.

Thus, the waves on the string consist of waves traveling to the right and to the left. However, the story does not stop here. We have a problem when needing to shift f(x) across the boundaries. The original problem only defines f(x) on [0, L]. If we are not careful, we would think that the function leaves the interval, leaving nothing left inside. However, we have to recall that our sine series representation for f(x) has a period of 2L. So, before we apply this shifting, we need to account for its periodicity. In fact, being a sine series, we really have the odd periodic extension of f(x) being shifted. The details of such analysis would take us too far from our current goal. However, we can illustrate this with a few figures.

Figure 3.19: The initial profile for a string of length one plucked at x = a.

We begin by plucking a string of length L. This can be represented by the function

\[
f(x) = \begin{cases} \dfrac{x}{a}, & 0 \le x \le a, \\[1ex] \dfrac{L - x}{L - a}, & a \le x \le L, \end{cases} \tag{3.82}
\]

where the string is pulled up one unit at x = a. This is shown in Figure 3.19.

Next, we create an odd function by extending the function to a period of 2L. This is shown in Figure 3.20.

Figure 3.20: Odd extension about the right end of a plucked string.

Finally, we construct the periodic extension of this to the entire line. In Figure 3.21 we show in the lower part of the figure copies of the periodic extension, one moving to the right and the other moving to the left. (Actually, the copies are ½ f(x ± ct).) The top plot is the sum of these solutions. The physical string lies in the interval [0, 1]. Of course, this is better seen when the solution is animated.

The time evolution for this plucked string is shown for several times in Figure 3.22. This results in a wave that appears to reflect from the ends as time increases.

Figure 3.21: Summing the odd periodic extensions. The lower plot shows copies of the periodic extension, one moving to the right and the other moving to the left. The upper plot is the sum.
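The reflections can be reproduced with a short Maple sketch of ours (not the author's code) that builds the odd periodic extension of the pluck (3.82) and forms the d'Alembert sum (3.81); the values L = 1, a = 0.25, c = 1 and the plotted times are our choices:

> restart:
> L := 1:  a := 0.25:  c := 1:
> f := x -> piecewise(x <= a, x/a, (L - x)/(L - a)):      # the pluck (3.82)
> # odd periodic extension of f, period 2L
> fo := proc(x) local y;
>         y := x - 2*L*floor(x/(2*L));
>         if y <= L then f(y) else -f(2*L - y) end if;
>       end proc:
> u := (x, t) -> (fo(x + c*t) + fo(x - c*t))/2:           # d'Alembert form (3.81)
> plot([x -> u(x, 0), x -> u(x, 0.2), x -> u(x, 0.5)], 0 .. L);

Animating u in t shows the profile splitting into two half-amplitude copies that reflect and invert at the fixed ends, just as in Figure 3.22.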

The relation between the angular frequency and the wave number, ω = ck, is called a dispersion relation. In this case ω depends on k linearly. If one knows the dispersion relation, then one can find the wave speed as c = ω/k. In this case, all of the harmonics travel at the same speed. In cases where they do not, we have nonlinear dispersion, which we will discuss later.

3.7 The Gibbs Phenomenon

We have seen the Gibbs phenomenon when there is a jump discontinuity in the periodic extension of a function, whether the function originally had a discontinuity or developed one due to a mismatch in the values of the endpoints. This can be seen in Figures 3.12, 3.14 and 3.16. The Fourier series has a difficult time converging at the point of discontinuity, and these graphs of the Fourier series show a distinct overshoot which does not go away. This is called the Gibbs phenomenon³ and the amount of overshoot can be computed.

Figure 3.22: This figure shows the plucked string at six successive times, t = 0, 0.1, 0.2, 0.3, 0.4, 0.5.

³ The Gibbs phenomenon was named after Josiah Willard Gibbs (1839-1903) even though it was discovered earlier by the Englishman Henry Wilbraham (1825-1883). Wilbraham published a soon forgotten paper about the effect in 1848. In 1889 Albert Abraham Michelson (1852-1931), an American physicist, observed an overshoot in his mechanical graphing machine. Shortly afterwards J. Willard Gibbs published papers describing this phenomenon, which was later to be called the Gibbs phenomenon. Gibbs was a mathematical physicist and chemist and is considered the father of physical chemistry.

In one of our first examples, Example 3.3, we found the Fourier series representation of the piecewise defined function

\[
f(x) = \begin{cases} 1, & 0 < x < \pi, \\ -1, & \pi < x < 2\pi, \end{cases}
\]

to be

\[
f(x) \sim \frac{4}{\pi} \sum_{k=1}^{\infty} \frac{\sin(2k-1)x}{2k-1}.
\]

Figure 3.23: The Fourier series representation of a step function on [−π, π] for N = 10.

In Figure 3.23 we display the sum of the first ten terms. Note the wiggles, overshoots and undershoots. These are seen more clearly when we plot the representation for x ∈ [−3π, 3π], as shown in Figure 3.24.

We note that the overshoots and undershoots occur at discontinuities in the periodic extension of f(x). These occur whenever f(x) has a discontinuity or if the values of f(x) at the endpoints of the domain do not agree.

Figure 3.24: The Fourier series representation of a step function on [−π, π] for N = 10 plotted on [−3π, 3π] displaying the periodicity.

One might expect that we only need to add more terms. In Figure 3.25 we show the sum for twenty terms. Note the sum appears to converge better for points far from the discontinuities. But, the overshoots and undershoots are still present. Figures 3.26 and 3.27 show magnified plots of the overshoot at x = 0 for N = 100 and N = 500, respectively. We see that the overshoot persists. The peak is at about the same height, but its location seems to be getting closer to the origin. We will show how one can estimate the size of the overshoot.

Figure 3.25: The Fourier series representation of a step function on [−π, π] for N = 20.

Figure 3.26: The Fourier series representation of a step function on [−π, π] for N = 100.

Figure 3.27: The Fourier series representation of a step function on [−π, π] for N = 500.

We can study the Gibbs phenomenon by looking at the partial sums of general Fourier trigonometric series for functions f(x) defined on the interval [−L, L]. Writing out the partial sums, inserting the Fourier coefficients and rearranging, we have

\[
\begin{aligned}
S_N(x) &= \frac{a_0}{2} + \sum_{n=1}^{N} \left[ a_n \cos \frac{n\pi x}{L} + b_n \sin \frac{n\pi x}{L} \right] \\
&= \frac{1}{2L} \int_{-L}^{L} f(y)\, dy + \sum_{n=1}^{N} \left[ \left( \frac{1}{L} \int_{-L}^{L} f(y) \cos \frac{n\pi y}{L}\, dy \right) \cos \frac{n\pi x}{L}
+ \left( \frac{1}{L} \int_{-L}^{L} f(y) \sin \frac{n\pi y}{L}\, dy \right) \sin \frac{n\pi x}{L} \right] \\
&= \frac{1}{L} \int_{-L}^{L} \left\{ \frac{1}{2} + \sum_{n=1}^{N} \left( \cos \frac{n\pi y}{L} \cos \frac{n\pi x}{L} + \sin \frac{n\pi y}{L} \sin \frac{n\pi x}{L} \right) \right\} f(y)\, dy \\
&= \frac{1}{L} \int_{-L}^{L} \left\{ \frac{1}{2} + \sum_{n=1}^{N} \cos \frac{n\pi (y - x)}{L} \right\} f(y)\, dy \\
&\equiv \frac{1}{L} \int_{-L}^{L} D_N(y - x) f(y)\, dy.
\end{aligned}
\]

We have defined

\[
D_N(x) = \frac{1}{2} + \sum_{n=1}^{N} \cos \frac{n\pi x}{L},
\]

which is called the N-th Dirichlet kernel.

We now prove

Lemma 3.1. The N-th Dirichlet kernel is given by

\[
D_N(x) = \begin{cases} \dfrac{\sin\!\left( \left(N + \frac{1}{2}\right) \frac{\pi x}{L} \right)}{2 \sin \frac{\pi x}{2L}}, & \sin \dfrac{\pi x}{2L} \ne 0, \\[2ex] N + \dfrac{1}{2}, & \sin \dfrac{\pi x}{2L} = 0. \end{cases}
\]

Proof. Let θ = πx/L and multiply D_N(x) by 2 sin(θ/2) to obtain:

\[
\begin{aligned}
2 \sin\frac{\theta}{2}\, D_N(x) &= 2 \sin\frac{\theta}{2} \left[ \frac{1}{2} + \cos\theta + \cdots + \cos N\theta \right] \\
&= \sin\frac{\theta}{2} + 2\cos\theta\sin\frac{\theta}{2} + 2\cos 2\theta\sin\frac{\theta}{2} + \cdots + 2\cos N\theta\sin\frac{\theta}{2} \\
&= \sin\frac{\theta}{2} + \left( \sin\frac{3\theta}{2} - \sin\frac{\theta}{2} \right) + \left( \sin\frac{5\theta}{2} - \sin\frac{3\theta}{2} \right) + \cdots + \left[ \sin\left(N + \frac{1}{2}\right)\theta - \sin\left(N - \frac{1}{2}\right)\theta \right] \\
&= \sin\left(N + \frac{1}{2}\right)\theta. \tag{3.83}
\end{aligned}
\]

Thus,

\[
2 \sin\frac{\theta}{2}\, D_N(x) = \sin\left(N + \frac{1}{2}\right)\theta.
\]

If sin(θ/2) ≠ 0, then

\[
D_N(x) = \frac{\sin\left(N + \frac{1}{2}\right)\theta}{2\sin\frac{\theta}{2}}, \qquad \theta = \frac{\pi x}{L}.
\]


If sin(θ/2) = 0, then one needs to apply L'Hospital's Rule as θ → 2mπ:

\[
\begin{aligned}
\lim_{\theta \to 2m\pi} \frac{\sin\left(N + \frac{1}{2}\right)\theta}{2\sin\frac{\theta}{2}}
&= \lim_{\theta \to 2m\pi} \frac{\left(N + \frac{1}{2}\right)\cos\left(N + \frac{1}{2}\right)\theta}{\cos\frac{\theta}{2}} \\
&= \frac{\left(N + \frac{1}{2}\right)\cos\left(2m\pi N + m\pi\right)}{\cos m\pi} \\
&= \frac{\left(N + \frac{1}{2}\right)\left(\cos 2m\pi N \cos m\pi - \sin 2m\pi N \sin m\pi\right)}{\cos m\pi} \\
&= N + \frac{1}{2}. \tag{3.84}
\end{aligned}
\]

We further note that D_N(x) is periodic with period 2L and is an even function.
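The closed form in Lemma 3.1 can be spot-checked numerically. The following Maple lines are a sketch of ours (the names D1 and D2 and the test points are our own choices), comparing the defining sum to the closed form for L = π and N = 5:

> restart:
> N := 5:  L := Pi:
> D1 := x -> 1/2 + add(cos(n*Pi*x/L), n = 1..N):
> D2 := x -> sin((N + 1/2)*Pi*x/L)/(2*sin(Pi*x/(2*L))):
> evalf(D1(0.7) - D2(0.7)), evalf(D1(2.3) - D2(2.3));   # both zero to roundoff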

So far, we have found that the Nth partial sum is given by

\[
S_N(x) = \frac{1}{L} \int_{-L}^{L} D_N(y - x) f(y)\, dy. \tag{3.85}
\]

Making the substitution ξ = y − x, we have

\[
S_N(x) = \frac{1}{L} \int_{-L-x}^{L-x} D_N(\xi) f(\xi + x)\, d\xi
= \frac{1}{L} \int_{-L}^{L} D_N(\xi) f(\xi + x)\, d\xi. \tag{3.86}
\]

In the second integral we have made use of the fact that f(x) and D_N(x) are periodic with period 2L and shifted the interval back to [−L, L].

We now write the integral as the sum of two integrals over positive and negative values of ξ and use the fact that D_N(x) is an even function. Then,

\[
\begin{aligned}
S_N(x) &= \frac{1}{L} \int_{-L}^{0} D_N(\xi) f(\xi + x)\, d\xi + \frac{1}{L} \int_{0}^{L} D_N(\xi) f(\xi + x)\, d\xi \\
&= \frac{1}{L} \int_{0}^{L} \left[ f(x - \xi) + f(\xi + x) \right] D_N(\xi)\, d\xi. \tag{3.87}
\end{aligned}
\]

We can use this result to study the Gibbs phenomenon whenever it occurs. In particular, we will only concentrate on the earlier example. For this case, we have

\[
S_N(x) = \frac{1}{\pi} \int_0^{\pi} \left[ f(x - \xi) + f(\xi + x) \right] D_N(\xi)\, d\xi \tag{3.88}
\]

for

\[
D_N(x) = \frac{1}{2} + \sum_{n=1}^{N} \cos nx.
\]

Also, one can show that

\[
f(x - \xi) + f(\xi + x) = \begin{cases} 2, & 0 \le \xi < x, \\ 0, & x \le \xi < \pi - x, \\ -2, & \pi - x \le \xi < \pi. \end{cases}
\]


Thus, we have

\[
\begin{aligned}
S_N(x) &= \frac{2}{\pi} \int_0^{x} D_N(\xi)\, d\xi - \frac{2}{\pi} \int_{\pi - x}^{\pi} D_N(\xi)\, d\xi \\
&= \frac{2}{\pi} \int_0^{x} D_N(z)\, dz - \frac{2}{\pi} \int_0^{x} D_N(\pi - z)\, dz. \tag{3.89}
\end{aligned}
\]

Here we made the substitution z = π − ξ in the second integral.

The Dirichlet kernel for L = π is given by

\[
D_N(x) = \frac{\sin\left(N + \frac{1}{2}\right)x}{2\sin\frac{x}{2}}.
\]

For N large, we have N + ½ ≈ N, and for small x, we have sin(x/2) ≈ x/2. So, under these assumptions,

\[
D_N(x) \approx \frac{\sin Nx}{x}.
\]

Therefore,

\[
S_N(x) \to \frac{2}{\pi} \int_0^{x} \frac{\sin N\xi}{\xi}\, d\xi \quad \text{for large } N \text{ and small } x.
\]

If we want to determine the locations of the minima and maxima, where the undershoot and overshoot occur, then we apply the first derivative test for extrema to S_N(x). Thus,

\[
\frac{d}{dx} S_N(x) = \frac{2}{\pi} \frac{\sin Nx}{x} = 0.
\]

The extrema occur for Nx = mπ, m = ±1, ±2, …. One can show that there is a maximum at x = π/N and a minimum for x = 2π/N. The value for the overshoot can be computed as

\[
\begin{aligned}
S_N(\pi/N) &= \frac{2}{\pi} \int_0^{\pi/N} \frac{\sin N\xi}{\xi}\, d\xi
= \frac{2}{\pi} \int_0^{\pi} \frac{\sin t}{t}\, dt \\
&= \frac{2}{\pi}\, \mathrm{Si}(\pi) = 1.178979744\ldots. \tag{3.90}
\end{aligned}
\]

Note that this value is independent of N and is given in terms of the sine integral,

\[
\mathrm{Si}(x) \equiv \int_0^{x} \frac{\sin t}{t}\, dt.
\]
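Both the limiting value (3.90) and the persistence of the overshoot can be confirmed numerically. A short Maple check of ours (S is our own name for the partial sum of the square-wave series through the Kth odd harmonic; its first peak sits near x = π/(2K)):

> restart:
> evalf(2/Pi*Si(Pi));                                  # the limit, about 1.1790
> S := (K, x) -> 4/Pi*add(sin((2*k-1)*x)/(2*k-1), k = 1..K):
> evalf(S(100, Pi/200)), evalf(S(500, Pi/1000));       # peak values stay near 1.179

No matter how many terms are kept, the peak height does not shrink; only its location moves toward the jump.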

Problems

1. Write y(t) = 3 cos 2t− 4 sin 2t in the form y(t) = A cos(2π f t + φ).


2. Derive the coefficients b_n in Equation (3.4).

3. Let f(x) be defined for x ∈ [−L, L]. Parseval's identity is given by

\[
\frac{1}{L} \int_{-L}^{L} f^2(x)\, dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right).
\]

Assuming the Fourier series of f(x) converges uniformly in (−L, L), prove Parseval's identity by multiplying the Fourier series representation by f(x) and integrating from x = −L to x = L. [In section 9.6.3 we will encounter Parseval's equality for Fourier transforms, which is a continuous version of this identity.]

4. Consider the square wave function

\[
f(x) = \begin{cases} 1, & 0 < x < \pi, \\ -1, & \pi < x < 2\pi. \end{cases}
\]

a. Find the Fourier series representation of this function and plot the first 50 terms.

b. Apply Parseval's identity in Problem 3 to the result in part a.

c. Use the result of part b to show

\[
\frac{\pi^2}{8} = \sum_{n=1}^{\infty} \frac{1}{(2n-1)^2}.
\]

5. For the following sets of functions: i) show that each is orthogonal on the given interval, and ii) determine the corresponding orthonormal set. [See page 77.]

a. {sin 2nx}, n = 1, 2, 3, …, 0 ≤ x ≤ π.

b. {cos nπx}, n = 0, 1, 2, …, 0 ≤ x ≤ 2.

c. {sin(nπx/L)}, n = 1, 2, 3, …, x ∈ [−L, L].

6. Consider f(x) = 4 sin³ 2x.

a. Derive the trigonometric identity giving sin³ θ in terms of sin θ and sin 3θ using DeMoivre's Formula.

b. Find the Fourier series of f(x) = 4 sin³ 2x on [0, 2π] without computing any integrals.

7. Find the Fourier series of the following:

a. f(x) = x, x ∈ [0, 2π].

b. f(x) = x²/4, |x| < π.

c. f(x) = π/2 for 0 < x < π, and f(x) = −π/2 for π < x < 2π.

8. Find the Fourier Series of each function f(x) of period 2π. For each series, plot the Nth partial sum,

\[
S_N = \frac{a_0}{2} + \sum_{n=1}^{N} \left[ a_n \cos nx + b_n \sin nx \right],
\]

for N = 5, 10, 50 and describe the convergence (is it fast? what is it converging to, etc.) [Some simple Maple code for computing partial sums is shown in the notes.]

a. f(x) = x, |x| < π.

b. f(x) = |x|, |x| < π.

c. f(x) = 0 for −π < x < 0, and f(x) = 1 for 0 < x < π.

9. Find the Fourier series of f(x) = x on the given interval. Plot the Nth partial sums and describe what you see.

a. 0 < x < 2.

b. −2 < x < 2.

c. 1 < x < 2.

10. The result in Problem 7b above gives a Fourier series representation of x²/4. By picking the right value for x and a little rearrangement of the series, show that [see Example 3.5]

a. \(\dfrac{\pi^2}{6} = 1 + \dfrac{1}{2^2} + \dfrac{1}{3^2} + \dfrac{1}{4^2} + \cdots\).

b. \(\dfrac{\pi^2}{8} = 1 + \dfrac{1}{3^2} + \dfrac{1}{5^2} + \dfrac{1}{7^2} + \cdots\).

Hint: Consider how the series in part a. can be used to do this.

11. Sketch (by hand) the graphs of each of the following functions over four periods. Then sketch the extensions of each of the functions as both an even and odd periodic function. Determine the corresponding Fourier sine and cosine series and verify the convergence to the desired function using Maple.

a. f(x) = x², 0 < x < 1.

b. f(x) = x(2 − x), 0 < x < 2.

c. f(x) = 0 for 0 < x < 1, and f(x) = 1 for 1 < x < 2.

d. f(x) = π for 0 < x < π, and f(x) = 2π − x for π < x < 2π.

12. Consider the function f(x) = x, −π < x < π.

a. Show that x = 2 ∑_{n=1}^{∞} (−1)^{n+1} sin(nx)/n.

b. Integrate the series in part a and show that

\[
x^2 = \frac{\pi^2}{3} - 4 \sum_{n=1}^{\infty} (-1)^{n+1} \frac{\cos nx}{n^2}.
\]


c. Find the Fourier cosine series of f(x) = x² on [0, π] and compare it to the result in part b.

13. Consider the function f(x) = x, 0 < x < 2.

a. Find the Fourier sine series representation of this function and plot the first 50 terms.

b. Find the Fourier cosine series representation of this function and plot the first 50 terms.

c. Apply Parseval's identity in Problem 3 to the result in part b.

d. Use the result of part c to find the sum ∑_{n=1}^{∞} 1/n⁴.

14. Differentiate the Fourier sine series term by term in Problem 18. Show that the result is not the derivative of f(x) = x.

15. Find the general solution to the heat equation, u_t − u_{xx} = 0, on [0, π] satisfying the boundary conditions u_x(0, t) = 0 and u(π, t) = 0. Determine the solution satisfying the initial condition

\[
u(x, 0) = \begin{cases} x, & 0 \le x \le \frac{\pi}{2}, \\ \pi - x, & \frac{\pi}{2} \le x \le \pi. \end{cases}
\]

16. Find the general solution to the wave equation u_{tt} = 2u_{xx}, on [0, 2π], satisfying the boundary conditions u(0, t) = 0 and u_x(2π, t) = 0. Determine the solution satisfying the initial conditions u(x, 0) = x(4π − x) and u_t(x, 0) = 0.

17. Recall the plucked string initial profile example in the last chapter given by

\[
f(x) = \begin{cases} x, & 0 \le x \le \frac{\ell}{2}, \\ \ell - x, & \frac{\ell}{2} \le x \le \ell, \end{cases}
\]

satisfying fixed boundary conditions at x = 0 and x = ℓ. Find and plot the solutions at t = 0, .2, …, 1.0, of u_{tt} = u_{xx}, for u(x, 0) = f(x), u_t(x, 0) = 0, with x ∈ [0, 1].

18. Find and plot the solutions at t = 0, .2, …, 1.0, of the problem

\[
\begin{aligned}
& u_{tt} = u_{xx}, \quad 0 \le x \le 1, \ t > 0, \\
& u(x, 0) = \begin{cases} 0, & 0 \le x < \frac{1}{4}, \\ 1, & \frac{1}{4} \le x \le \frac{3}{4}, \\ 0, & \frac{3}{4} < x \le 1, \end{cases} \qquad u_t(x, 0) = 0, \\
& u(0, t) = 0, \quad t > 0, \\
& u(1, t) = 0, \quad t > 0.
\end{aligned}
\]

19. Find the solution to Laplace's equation, u_{xx} + u_{yy} = 0, on the unit square [0, 1] × [0, 1] satisfying the boundary conditions u(0, y) = 0, u(1, y) = y(1 − y), u(x, 0) = 0, and u(x, 1) = 0.


4 Sturm-Liouville Boundary Value Problems

We have seen that trigonometric functions and special functions are the solutions of differential equations. These solutions give orthogonal sets of functions which can be used to represent functions in generalized Fourier series expansions. At the same time we would like to generalize the techniques we had first used to solve the heat equation in order to solve more general initial-boundary value problems. Namely, we use separation of variables to separate the given partial differential equation into a set of ordinary differential equations. A subset of those equations provide us with a set of boundary value problems whose eigenfunctions are useful in representing solutions of the partial differential equation. Hopefully, those solutions will form a useful basis in some function space.

A class of problems to which our previous examples belong are the Sturm-Liouville eigenvalue problems. These problems involve self-adjoint (differential) operators which play an important role in the spectral theory of linear operators and the existence of the eigenfunctions needed to solve the interesting physics problems described by the above initial-boundary value problems. In this section we will introduce the Sturm-Liouville eigenvalue problem as a general class of boundary value problems containing the Legendre and Bessel equations and supplying the theory needed to solve a variety of problems.

4.1 Sturm-Liouville Operators

In physics many problems arise in the form of boundary value problems involving second order ordinary differential equations. For example, we will explore the wave equation and the heat equation in three dimensions. Separating out the time dependence leads to a three dimensional boundary value problem in both cases. Further separation of variables leads to a set of boundary value problems involving second order ordinary differential equations.


In general, we might obtain equations of the form

\[
a_2(x) y'' + a_1(x) y' + a_0(x) y = f(x) \tag{4.1}
\]

subject to boundary conditions. We can write such an equation in operator form by defining the differential operator

\[
L = a_2(x) D^2 + a_1(x) D + a_0(x),
\]

where D = d/dx. Then, Equation (4.1) takes the form

\[
Ly = f.
\]

Recall that we had solved such nonhomogeneous differential equations in Chapter 2. In this section we will show that these equations can be solved using eigenfunction expansions. Namely, we seek solutions to the eigenvalue problem

\[
L\phi = \lambda\phi
\]

with homogeneous boundary conditions on φ and then seek a solution of the nonhomogeneous problem, Ly = f, as an expansion over these eigenfunctions. Formally, we let

\[
y(x) = \sum_{n=1}^{\infty} c_n \phi_n(x).
\]

However, we are not guaranteed a nice set of eigenfunctions. We need an appropriate set to form a basis in the function space. Also, it would be nice to have orthogonality so that we can easily solve for the expansion coefficients.

It turns out that any linear second order differential operator can be turned into an operator that possesses just the right properties (self-adjointness) to carry out this procedure. The resulting operator is referred to as a Sturm-Liouville operator. We will highlight some of the properties of these operators and see how they are used in applications.

The Sturm-Liouville operator. We define the Sturm-Liouville operator as

\[
L = \frac{d}{dx}\, p(x)\, \frac{d}{dx} + q(x). \tag{4.2}
\]

The Sturm-Liouville eigenvalue problem. The Sturm-Liouville eigenvalue problem is given by the differential equation

\[
Ly = -\lambda\sigma(x) y,
\]

or

\[
\frac{d}{dx}\left( p(x) \frac{dy}{dx} \right) + q(x) y + \lambda\sigma(x) y = 0, \tag{4.3}
\]

for x ∈ (a, b), y = y(x), plus boundary conditions. The functions p(x), p′(x), q(x) and σ(x) are assumed to be continuous on (a, b) and p(x) > 0, σ(x) > 0 on [a, b]. If the interval is finite and these assumptions on the coefficients are true on [a, b], then the problem is said to be a regular Sturm-Liouville problem. Otherwise, it is called a singular Sturm-Liouville problem.


Types of boundary conditions. We also need to impose the set of homogeneous boundary conditions

\[
\begin{aligned}
\alpha_1 y(a) + \beta_1 y'(a) &= 0, \\
\alpha_2 y(b) + \beta_2 y'(b) &= 0. \tag{4.4}
\end{aligned}
\]

The α's and β's are constants. For different values, one has special types of boundary conditions. For β_i = 0, we have what are called Dirichlet boundary conditions. Namely, y(a) = 0 and y(b) = 0. For α_i = 0, we have Neumann boundary conditions. In this case, y′(a) = 0 and y′(b) = 0.

Dirichlet boundary conditions - the solution takes fixed values on the boundary. These are named after Gustav Lejeune Dirichlet (1805-1859).

Neumann boundary conditions - the derivative of the solution takes fixed values on the boundary. These are named after Carl Neumann (1832-1925).

In terms of the heat equation example, Dirichlet conditions correspond to maintaining a fixed temperature at the ends of the rod. The Neumann boundary conditions would correspond to no heat flow across the ends, or insulating conditions, as there would be no temperature gradient at those points. The more general boundary conditions allow for partially insulated boundaries.

Another type of boundary condition that is often encountered is the periodic boundary condition. Consider the heated rod that has been bent to form a circle. Then the two end points are physically the same. So, we would expect that the temperature and the temperature gradient should agree at those points. For this case we write y(a) = y(b) and y′(a) = y′(b). Boundary value problems using these conditions have to be handled differently than the above homogeneous conditions. These conditions lead to different types of eigenfunctions and eigenvalues.

Differential equations of Sturm-Liouville form. As previously mentioned, equations of the form (4.1) occur often. We now show that any second order linear operator can be put into the form of the Sturm-Liouville operator. In particular, equation (4.1) can be put into the form

\[
\frac{d}{dx}\left( p(x) \frac{dy}{dx} \right) + q(x) y = F(x). \tag{4.5}
\]

Another way to phrase this is provided by the summary (4.7)–(4.9) below. The proof of this is straightforward, as we soon show. Let's first consider the equation (4.1) for the case that a_1(x) = a_2'(x). Then, we can write the equation in a form in which the first two terms combine,

\[
\begin{aligned}
f(x) &= a_2(x) y'' + a_1(x) y' + a_0(x) y \\
&= (a_2(x) y')' + a_0(x) y. \tag{4.6}
\end{aligned}
\]

The resulting equation is now in Sturm-Liouville form. We just identify p(x) = a_2(x) and q(x) = a_0(x).

Not all second order differential equations are as simple to convert. Consider the differential equation

\[
x^2 y'' + x y' + 2y = 0.
\]

In this case a_2(x) = x² and a_2'(x) = 2x ≠ a_1(x). So, this equation does not fall into this case. However, we can change the operator in this equation, x²D² + xD, to a Sturm-Liouville operator Dp(x)D for a p(x) that depends on the coefficients x² and x.


In the Sturm-Liouville operator the derivative terms are gathered together into one perfect derivative, Dp(x)D. This is similar to what we saw in Chapter 2 when we solved linear first order equations. In that case we sought an integrating factor. We can do the same thing here. We seek a multiplicative function µ(x) by which we can multiply (4.1) so that it can be written in Sturm-Liouville form.

We first divide out the a_2(x), giving

\[
y'' + \frac{a_1(x)}{a_2(x)} y' + \frac{a_0(x)}{a_2(x)} y = \frac{f(x)}{a_2(x)}.
\]

Next, we multiply this differential equation by µ,

\[
\mu(x) y'' + \mu(x) \frac{a_1(x)}{a_2(x)} y' + \mu(x) \frac{a_0(x)}{a_2(x)} y = \mu(x) \frac{f(x)}{a_2(x)}.
\]

The first two terms can now be combined into an exact derivative (µy′)′ if the second coefficient is µ′(x). Therefore, µ(x) satisfies a first order, separable differential equation:

\[
\frac{d\mu}{dx} = \mu(x) \frac{a_1(x)}{a_2(x)}.
\]

This is formally solved to give the sought integrating factor

\[
\mu(x) = e^{\int \frac{a_1(x)}{a_2(x)}\, dx}.
\]

Thus, the original equation can be multiplied by the factor

\[
\frac{\mu(x)}{a_2(x)} = \frac{1}{a_2(x)}\, e^{\int \frac{a_1(x)}{a_2(x)}\, dx}
\]

to turn it into Sturm-Liouville form.

In summary, Equation (4.1),

\[
a_2(x) y'' + a_1(x) y' + a_0(x) y = f(x), \tag{4.7}
\]

can be put into the Sturm-Liouville form

\[
\frac{d}{dx}\left( p(x) \frac{dy}{dx} \right) + q(x) y = F(x), \tag{4.8}
\]

where

\[
\begin{aligned}
p(x) &= e^{\int \frac{a_1(x)}{a_2(x)}\, dx}, \\
q(x) &= p(x) \frac{a_0(x)}{a_2(x)}, \\
F(x) &= p(x) \frac{f(x)}{a_2(x)}. \tag{4.9}
\end{aligned}
\]


Example 4.1. Convert x²y″ + xy′ + 2y = 0 into Sturm-Liouville form.

Conversion of a linear second order differential equation to Sturm-Liouville form.

We can multiply this equation by

\[
\frac{\mu(x)}{a_2(x)} = \frac{1}{x^2}\, e^{\int \frac{dx}{x}} = \frac{1}{x},
\]

to put the equation in Sturm-Liouville form:

\[
\begin{aligned}
0 &= x y'' + y' + \frac{2}{x} y \\
&= (x y')' + \frac{2}{x} y. \tag{4.10}
\end{aligned}
\]
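The integrating factor computation in (4.9) is easy to automate. A brief Maple sketch of ours that reproduces p(x) = x and q(x) = 2/x for this example (the names a2, a1, a0, p, q are our own):

> restart:
> a2 := x^2:  a1 := x:  a0 := 2:
> p := simplify(exp(int(a1/a2, x)));     # p(x) = x
> q := simplify(p*a0/a2);                # q(x) = 2/x

Changing a2, a1, a0 converts any other second order equation in the same way.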

4.2 Properties of Sturm-Liouville Eigenvalue Problems

There are several properties that can be proven for the (regular) Sturm-Liouville eigenvalue problem in (4.3). However, we will not prove them all here. We will merely list some of the important facts and focus on a few of the properties.

1. Real, countable eigenvalues. The eigenvalues are real, countable, ordered and there is a smallest eigenvalue. Thus, we can write them as λ1 < λ2 < …. However, there is no largest eigenvalue and, as n → ∞, λn → ∞.

2. Oscillatory eigenfunctions. For each eigenvalue λn there exists an eigenfunction φn with n − 1 zeros on (a, b).

3. Orthogonality of eigenfunctions. Eigenfunctions corresponding to different eigenvalues are orthogonal with respect to the weight function, σ(x). Defining the inner product of f(x) and g(x) as

\[
\langle f, g \rangle = \int_a^b f(x) g(x) \sigma(x)\, dx, \tag{4.11}
\]

then the orthogonality of the eigenfunctions can be written in the form

\[
\langle \phi_n, \phi_m \rangle = \langle \phi_n, \phi_n \rangle\, \delta_{nm}, \quad n, m = 1, 2, \ldots. \tag{4.12}
\]

4. Complete basis of eigenfunctions. The set of eigenfunctions is complete; i.e., any piecewise smooth function can be represented by a generalized Fourier series expansion of the eigenfunctions,

\[
f(x) \sim \sum_{n=1}^{\infty} c_n \phi_n(x), \quad \text{where} \quad c_n = \frac{\langle f, \phi_n \rangle}{\langle \phi_n, \phi_n \rangle}.
\]

Actually, one needs f(x) ∈ L²_σ(a, b), the set of square integrable functions over [a, b] with weight function σ(x). By square integrable, we mean that ⟨f, f⟩ < ∞. One can show that such a space is isomorphic to a Hilbert space, a complete inner product space. Hilbert spaces play a special role in quantum mechanics.


5. The eigenvalues satisfy the Rayleigh quotient

\[
\lambda_n = \frac{ -\, p\, \phi_n \dfrac{d\phi_n}{dx} \Big|_a^b + \displaystyle\int_a^b \left[ p \left( \dfrac{d\phi_n}{dx} \right)^2 - q \phi_n^2 \right] dx }{ \langle \phi_n, \phi_n \rangle }.
\]

[The Rayleigh quotient is named after Lord Rayleigh, John William Strutt, 3rd Baron Rayleigh (1842-1919).]

This is verified by multiplying the eigenvalue problem

\[
L\phi_n = -\lambda_n \sigma(x) \phi_n
\]

by φn and integrating. Solving this result for λn, we obtain the Rayleigh quotient. The Rayleigh quotient is useful for getting estimates of eigenvalues and proving some of the other properties.

Example 4.2. Verify some of these properties for the eigenvalue problem

\[
y'' = -\lambda y, \quad y(0) = y(\pi) = 0.
\]

This is a problem we have seen many times. The eigenfunctions for this eigenvalue problem are φn(x) = sin nx, with eigenvalues λn = n² for n = 1, 2, …. These satisfy the properties listed above.

First of all, the eigenvalues are real, countable and ordered, 1 < 4 < 9 < …. There is no largest eigenvalue and there is a first one.

Figure 4.1: Plot of the eigenfunctions φn(x) = sin nx for n = 1, 2, 3, 4.

The eigenfunctions corresponding to each eigenvalue have n − 1 zeros on (0, π). This is demonstrated for several eigenfunctions in Figure 4.1.

We also know that the set {sin nx}_{n=1}^{∞} is an orthogonal set of basis functions of length

\[
\|\phi_n\| = \sqrt{\frac{\pi}{2}}.
\]

Thus, the Rayleigh quotient can be computed using p(x) = 1, q(x) = 0, and the eigenfunctions. It is given by

\[
R = \frac{ -\phi_n \phi_n' \Big|_0^{\pi} + \displaystyle\int_0^{\pi} (\phi_n')^2\, dx }{ \pi/2 } = \frac{2}{\pi} \int_0^{\pi} n^2 \cos^2 nx\, dx = n^2. \tag{4.13}
\]

Therefore, knowing the eigenfunction, the Rayleigh quotient returns the eigenvalues as expected.
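This computation is easily mirrored in Maple. The lines below are a sketch of ours (phi and R are our own names); with the boundary term vanishing, the Rayleigh quotient reduces to the ratio of the two integrals:

> restart:
> assume(n, integer):
> phi := sin(n*x):
> R := int(diff(phi, x)^2, x = 0..Pi) / int(phi^2, x = 0..Pi);
> simplify(R);        # returns n^2, in agreement with (4.13)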

Example 4.3. We seek the eigenfunctions of the operator found in Example 4.1. Namely, we want to solve the eigenvalue problem

\[
Ly = (xy')' + \frac{2}{x} y = -\lambda \sigma y \tag{4.14}
\]

subject to a set of homogeneous boundary conditions. Let's use the boundary conditions

\[
y'(1) = 0, \quad y'(2) = 0.
\]

[Note that we do not know σ(x) yet, but will choose an appropriate function to obtain solutions.]


Expanding the derivative, we have

\[
xy'' + y' + \frac{2}{x} y = -\lambda \sigma y.
\]

Multiply through by x to obtain

\[
x^2 y'' + x y' + (2 + \lambda x \sigma)\, y = 0.
\]

Notice that if we choose σ(x) = x⁻¹, then this equation can be made a Cauchy-Euler type equation. Thus, we have

\[
x^2 y'' + x y' + (\lambda + 2)\, y = 0.
\]

The characteristic equation is

\[
r^2 + \lambda + 2 = 0.
\]

For oscillatory solutions, we need λ + 2 > 0. Thus, the general solution is

\[
y(x) = c_1 \cos(\sqrt{\lambda + 2}\, \ln|x|) + c_2 \sin(\sqrt{\lambda + 2}\, \ln|x|). \tag{4.15}
\]

Next we apply the boundary conditions. y′(1) = 0 forces c₂ = 0. This leaves

\[
y(x) = c_1 \cos(\sqrt{\lambda + 2}\, \ln x).
\]

The second condition, y′(2) = 0, yields

\[
\sin(\sqrt{\lambda + 2}\, \ln 2) = 0.
\]

This will give nontrivial solutions when

\[
\sqrt{\lambda + 2}\, \ln 2 = n\pi, \quad n = 0, 1, 2, 3, \ldots.
\]

In summary, the eigenfunctions for this eigenvalue problem are

\[
y_n(x) = \cos\left( \frac{n\pi}{\ln 2} \ln x \right), \quad 1 \le x \le 2,
\]

and the eigenvalues are λn = (nπ/ln 2)² − 2 for n = 0, 1, 2, ….

Note: We include the n = 0 case because y(x) = constant is a solution of the λ = −2 case. More specifically, in this case the characteristic equation reduces to r² = 0. Thus, the general solution of this Cauchy-Euler equation is

\[
y(x) = c_1 + c_2 \ln|x|.
\]

Setting y′(1) = 0 forces c₂ = 0; y′(2) automatically vanishes, leaving the solution in this case as y(x) = c₁.

We note that some of the properties listed at the beginning of the section hold for this example. The eigenvalues are seen to be real, countable and ordered. There is a least one, λ₀ = −2. Next, one can find the zeros of each eigenfunction on [1, 2]. The argument of the cosine, (nπ/ln 2) ln x, takes values 0 to nπ for x ∈ [1, 2], so the cosine function has n roots on this interval; since the count here starts at n = 0, y_n is the (n + 1)-st eigenfunction, in agreement with the zero-counting property above.


Orthogonality can be checked as well. We set up the integral and use the substitution y = π ln x / ln 2. This gives

\[
\begin{aligned}
\langle y_n, y_m \rangle &= \int_1^2 \cos\left( \frac{n\pi}{\ln 2} \ln x \right) \cos\left( \frac{m\pi}{\ln 2} \ln x \right) \frac{dx}{x} \\
&= \frac{\ln 2}{\pi} \int_0^{\pi} \cos ny \cos my\, dy \\
&= \frac{\ln 2}{2}\, \delta_{n,m}. \tag{4.16}
\end{aligned}
\]
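This inner product is easy to check directly. A Maple sketch of ours (the names y and ip and the indices 3 and 5 are our own choices):

> restart:
> y := (n, x) -> cos(n*Pi*ln(x)/ln(2)):
> ip := (n, m) -> int(y(n, x)*y(m, x)/x, x = 1..2):
> evalf(ip(3, 3)), evalf(ip(3, 5));   # approximately ln(2)/2 = 0.3466 and 0, as in (4.16)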

4.2.1 Adjoint Operators

In the study of the spectral theory of matrices, one learns about the adjoint of the matrix, A†, and the role that self-adjoint, or Hermitian, matrices play in diagonalization. Also, one needs the concept of adjoint to discuss the existence of solutions to the matrix problem y = Ax. In the same spirit, one is interested in the existence of solutions of the operator equation Lu = f and solutions of the corresponding eigenvalue problem. The study of linear operators on a Hilbert space is a generalization of what the reader had seen in a linear algebra course.

Just as one can find a basis of eigenvectors and diagonalize Hermitian, or self-adjoint, matrices (or, real symmetric in the case of real matrices), we will see that the Sturm-Liouville operator is self-adjoint. In this section we will define the domain of an operator and introduce the notion of adjoint operators. In the last section we discuss the role the adjoint plays in the existence of solutions to the operator equation Lu = f.

We begin by defining the adjoint of an operator. The adjoint, L†, of operator L satisfies

\[
\langle u, Lv \rangle = \langle L^{\dagger} u, v \rangle
\]

for all v in the domain of L and u in the domain of L†. Here the domain of a differential operator L is the set of all u ∈ L²_σ(a, b) satisfying a given set of homogeneous boundary conditions. This is best understood through an example.

Example 4.4. Find the adjoint of L = a2(x)D2 + a1(x)D + a0(x) for D = d/dx.In order to find the adjoint, we place the operator inside an integral. Consider

the inner product

〈u, Lv〉 =∫ b

au(a2v′′ + a1v′ + a0v) dx.

We have to move the operator L from v and determine what operator is acting on uin order to formally preserve the inner product. For a simple operator like L = d

dx ,this is easily done using integration by parts. For the given operator, we will needto apply several integrations by parts to the individual terms. We consider eachderivative term in the integrand separately.

For the a1v′ term, we integrate by parts to find∫ b

au(x)a1(x)v′(x) dx = a1(x)u(x)v(x)

∣∣∣ba−∫ b

a(u(x)a1(x))′v(x) dx. (4.17)


Now, we consider the $a_2 v''$ term. In this case it will take two integrations by parts:
\begin{align*}
\int_a^b u(x) a_2(x) v''(x)\, dx &= a_2(x) u(x) v'(x)\Big|_a^b - \int_a^b (u(x) a_2(x))' v'(x)\, dx \\
&= \left[ a_2(x) u(x) v'(x) - (a_2(x) u(x))' v(x) \right]\Big|_a^b + \int_a^b (u(x) a_2(x))'' v(x)\, dx. \quad (4.18)
\end{align*}
Combining these results, we obtain
\begin{align*}
\langle u, Lv \rangle &= \int_a^b u\, (a_2 v'' + a_1 v' + a_0 v)\, dx \\
&= \left[ a_1(x) u(x) v(x) + a_2(x) u(x) v'(x) - (a_2(x) u(x))' v(x) \right]\Big|_a^b + \int_a^b \left[ (a_2 u)'' - (a_1 u)' + a_0 u \right] v\, dx. \quad (4.19)
\end{align*}
Inserting the boundary conditions for $v$, one has to determine boundary conditions for $u$ such that
\[ \left[ a_1(x) u(x) v(x) + a_2(x) u(x) v'(x) - (a_2(x) u(x))' v(x) \right]\Big|_a^b = 0. \]
This leaves
\[ \langle u, Lv \rangle = \int_a^b \left[ (a_2 u)'' - (a_1 u)' + a_0 u \right] v\, dx \equiv \langle L^\dagger u, v \rangle. \]
Therefore,
\[ L^\dagger = \frac{d^2}{dx^2}\, a_2(x) - \frac{d}{dx}\, a_1(x) + a_0(x). \quad (4.20) \]

Self-adjoint operators. When $L^\dagger = L$, the operator is called formally self-adjoint. When the domain of $L$ is the same as the domain of $L^\dagger$, the term self-adjoint is used. As the domain is important in establishing self-adjointness, we need to do a complete example in which the domain of the adjoint is found.

Example 4.5. Determine $L^\dagger$ and its domain for the operator $Lu = \frac{du}{dx}$, where $u$ satisfies the boundary condition $u(0) = 2u(1)$ on $[0,1]$.

We need to find the adjoint operator satisfying $\langle v, Lu \rangle = \langle L^\dagger v, u \rangle$. Therefore, we rewrite the integral
\[ \langle v, Lu \rangle = \int_0^1 v \frac{du}{dx}\, dx = uv\Big|_0^1 - \int_0^1 u \frac{dv}{dx}\, dx = \langle L^\dagger v, u \rangle. \]
From this we have the adjoint problem, consisting of an adjoint operator and the associated boundary condition (or domain of $L^\dagger$):

1. $L^\dagger = -\frac{d}{dx}$.

2. $uv\Big|_0^1 = 0 \;\Rightarrow\; 0 = u(1)[v(1) - 2v(0)] \;\Rightarrow\; v(1) = 2v(0)$.
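The adjoint relation can be spot-checked numerically for particular functions in the two domains. Below is a minimal sketch, assuming NumPy and SciPy; the functions $u(x) = 2 - x$ and $v(x) = 1 + x$ are illustrative choices that satisfy $u(0) = 2u(1)$ and $v(1) = 2v(0)$, and both inner products should come out equal (here, $-3/2$).

import numpy as np
from scipy.integrate import quad

u  = lambda x: 2.0 - x     # satisfies u(0) = 2 u(1)
du = lambda x: -1.0        # u'
v  = lambda x: 1.0 + x     # satisfies v(1) = 2 v(0)
dv = lambda x: 1.0         # v'

lhs, _ = quad(lambda x: v(x) * du(x), 0.0, 1.0)    # <v, Lu>        = int_0^1 v u' dx
rhs, _ = quad(lambda x: -dv(x) * u(x), 0.0, 1.0)   # <L^dagger v, u> = int_0^1 (-v') u dx
print(lhs, rhs)   # both should equal -1.5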


4.2.2 Lagrange’s and Green’s Identities

Before turning to the proofs that the eigenvalues of a Sturm-Liouville problem are real and the associated eigenfunctions orthogonal, we will first need to introduce two important identities. For the Sturm-Liouville operator,
\[ L = \frac{d}{dx}\!\left( p \frac{d}{dx} \right) + q, \]
we have the two identities:

Lagrange's Identity: $uLv - vLu = [p(uv' - vu')]'$.

Green's Identity: $\displaystyle\int_a^b (uLv - vLu)\, dx = [p(uv' - vu')]\big|_a^b$.

The proof of Lagrange's identity follows by a simple manipulation of the operator:
\begin{align*}
uLv - vLu &= u\left[ \frac{d}{dx}\!\left(p\frac{dv}{dx}\right) + qv \right] - v\left[ \frac{d}{dx}\!\left(p\frac{du}{dx}\right) + qu \right] \\
&= u\frac{d}{dx}\!\left(p\frac{dv}{dx}\right) - v\frac{d}{dx}\!\left(p\frac{du}{dx}\right) \\
&= u\frac{d}{dx}\!\left(p\frac{dv}{dx}\right) + p\frac{du}{dx}\frac{dv}{dx} - v\frac{d}{dx}\!\left(p\frac{du}{dx}\right) - p\frac{du}{dx}\frac{dv}{dx} \\
&= \frac{d}{dx}\!\left[ pu\frac{dv}{dx} - pv\frac{du}{dx} \right]. \quad (4.21)
\end{align*}

Green’s identity is simply proven by integrating Lagrange’s identity.
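Lagrange's identity is also easy to confirm with a computer algebra system. The sketch below (assuming SymPy is available) builds both sides symbolically for arbitrary $p$, $q$, $u$, and $v$ and checks that their difference simplifies to zero.

import sympy as sp

x = sp.symbols('x')
p, q, u, v = (sp.Function(name)(x) for name in ('p', 'q', 'u', 'v'))

# Sturm-Liouville operator L[y] = (p y')' + q y
L = lambda y: sp.diff(p * sp.diff(y, x), x) + q * y

lhs = u * L(v) - v * L(u)                                        # u L v - v L u
rhs = sp.diff(p * (u * sp.diff(v, x) - v * sp.diff(u, x)), x)    # [p (u v' - v u')]'
print(sp.simplify(lhs - rhs))                                    # expect 0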

4.2.3 Orthogonality and Reality

We are now ready to prove that the eigenvalues of a Sturm-Liouville problem are real and the corresponding eigenfunctions are orthogonal. These are easily established using Green's identity, which in turn is a statement about the Sturm-Liouville operator being self-adjoint.

Example 4.6. The eigenvalues of the Sturm-Liouville problem (4.3) are real.

Let $\phi_n(x)$ be a solution of the eigenvalue problem associated with $\lambda_n$:
\[ L\phi_n = -\lambda_n \sigma \phi_n. \]
Namely, we show that $\lambda_n = \overline{\lambda_n}$, where the bar means complex conjugate. So, we also consider the complex conjugate of this equation,
\[ L\overline{\phi}_n = -\overline{\lambda}_n \sigma \overline{\phi}_n. \]
Now, multiply the first equation by $\overline{\phi}_n$, the second equation by $\phi_n$, and then subtract the results. We obtain
\[ \overline{\phi}_n L\phi_n - \phi_n L\overline{\phi}_n = (\overline{\lambda}_n - \lambda_n)\sigma \phi_n \overline{\phi}_n. \]


Integrating both sides of this equation, we have
\[ \int_a^b \left( \overline{\phi}_n L\phi_n - \phi_n L\overline{\phi}_n \right) dx = (\overline{\lambda}_n - \lambda_n) \int_a^b \sigma \phi_n \overline{\phi}_n\, dx. \]
We apply Green's identity to the left hand side to find
\[ \left[ p\!\left( \overline{\phi}_n \phi_n' - \phi_n \overline{\phi}_n' \right) \right]\Big|_a^b = (\overline{\lambda}_n - \lambda_n) \int_a^b \sigma \phi_n \overline{\phi}_n\, dx. \]
Using the homogeneous boundary conditions (4.4) for a self-adjoint operator, the left side vanishes. This leaves
\[ 0 = (\overline{\lambda}_n - \lambda_n) \int_a^b \sigma |\phi_n|^2\, dx. \]
The integral is nonnegative, so we must have $\lambda_n = \overline{\lambda}_n$. Therefore, the eigenvalues are real.

Example 4.7. The eigenfunctions corresponding to different eigenvalues of the Sturm-Liouville problem (4.3) are orthogonal.

This is proven similarly to the last example. Let $\phi_n(x)$ be a solution of the eigenvalue problem associated with $\lambda_n$,
\[ L\phi_n = -\lambda_n \sigma \phi_n, \]
and let $\phi_m(x)$ be a solution of the eigenvalue problem associated with $\lambda_m \neq \lambda_n$,
\[ L\phi_m = -\lambda_m \sigma \phi_m. \]
Now, multiply the first equation by $\phi_m$ and the second equation by $\phi_n$. Subtracting these results, we obtain
\[ \phi_m L\phi_n - \phi_n L\phi_m = (\lambda_m - \lambda_n)\sigma \phi_n \phi_m. \]
Integrating both sides of the equation, using Green's identity, and using the homogeneous boundary conditions, we obtain
\[ 0 = (\lambda_m - \lambda_n) \int_a^b \sigma \phi_n \phi_m\, dx. \]
Since the eigenvalues are distinct, we can divide by $\lambda_m - \lambda_n$, leaving the desired result,
\[ \int_a^b \sigma \phi_n \phi_m\, dx = 0. \]
Therefore, the eigenfunctions are orthogonal with respect to the weight function $\sigma(x)$.

4.2.4 The Rayleigh Quotient

The Rayleigh quotient is useful for getting estimates of eigenvalues and proving some of the other properties associated with Sturm-Liouville eigenvalue problems. The Rayleigh quotient is general and finds applications for both matrix eigenvalue problems and self-adjoint operators. For a Hermitian matrix $M$ the Rayleigh quotient is given by
\[ R(v) = \frac{\langle v, Mv \rangle}{\langle v, v \rangle}. \]
One can show that the critical points of the Rayleigh quotient, as a function of $v$, are the eigenvectors of $M$, and the values of $R$ at these critical points are the corresponding eigenvalues. In particular, minimizing $R(v)$ over the vector space gives the lowest eigenvalue. This leads to the Rayleigh-Ritz method for computing the lowest eigenvalues when the eigenvectors are not known.

This definition can easily be extended to Sturm-Liouville operators,
\[ R(\phi_n) = \frac{\langle \phi_n, L\phi_n \rangle}{\langle \phi_n, \phi_n \rangle}. \]
We begin by multiplying the eigenvalue problem
\[ L\phi_n = -\lambda_n \sigma(x) \phi_n \]
by $\phi_n$ and integrating. This gives
\[ \int_a^b \left[ \phi_n \frac{d}{dx}\!\left( p\frac{d\phi_n}{dx} \right) + q\phi_n^2 \right] dx = -\lambda_n \int_a^b \phi_n^2 \sigma\, dx. \]
One can solve the last equation for $\lambda_n$ to find
\[ \lambda_n = \frac{-\displaystyle\int_a^b \left[ \phi_n \frac{d}{dx}\!\left( p\frac{d\phi_n}{dx} \right) + q\phi_n^2 \right] dx}{\displaystyle\int_a^b \phi_n^2 \sigma\, dx} = R(\phi_n).
\]

It appears that we have solved for the eigenvalues and have not needed the machinery developed for studying boundary value problems. However, we really cannot evaluate this expression when we do not yet know the eigenfunctions $\phi_n(x)$. Nevertheless, we will see what we can determine from the Rayleigh quotient.

One can rewrite this result by performing an integration by parts on the first term in the numerator. Namely, pick $u = \phi_n$ and $dv = \frac{d}{dx}\!\left(p\frac{d\phi_n}{dx}\right)dx$ for the standard integration by parts formula. Then, we have
\[ \int_a^b \phi_n \frac{d}{dx}\!\left( p\frac{d\phi_n}{dx} \right) dx = p\phi_n \frac{d\phi_n}{dx}\Big|_a^b - \int_a^b p\left( \frac{d\phi_n}{dx} \right)^2 dx. \]
Inserting the new formula into the expression for $\lambda_n$ leads to the Rayleigh quotient
\[ \lambda_n = \frac{-p\phi_n \dfrac{d\phi_n}{dx}\Big|_a^b + \displaystyle\int_a^b \left[ p\left( \frac{d\phi_n}{dx} \right)^2 - q\phi_n^2 \right] dx}{\displaystyle\int_a^b \phi_n^2 \sigma\, dx}. \quad (4.22) \]

In many applications the sign of the eigenvalue is important. As we had seen in the solution of the heat equation, $T' + k\lambda T = 0$. Since we expect the heat energy to diffuse, the solutions should decay in time. Thus, we would expect $\lambda > 0$. In studying the wave equation, one expects vibrations, and these are only possible with the correct sign of the eigenvalue (positive again). Thus, in order to have nonnegative eigenvalues, we see from (4.22) that

a. $q(x) \le 0$, and

b. $-p\phi_n \dfrac{d\phi_n}{dx}\Big|_a^b \ge 0$.

Furthermore, if $\lambda$ is a zero eigenvalue, then $q(x) \equiv 0$ and $\alpha_1 = \alpha_2 = 0$ in the homogeneous boundary conditions. This can be seen by setting the numerator equal to zero. Then, $q(x) = 0$ and $\phi_n'(x) = 0$. The second of these conditions, inserted into the boundary conditions, forces the restriction on the type of boundary conditions.

One of the properties of Sturm-Liouville eigenvalue problems with homogeneous boundary conditions is that the eigenvalues are ordered, $\lambda_1 < \lambda_2 < \ldots$. Thus, there is a smallest eigenvalue. It turns out that for any continuous function $y(x)$,
\[ \lambda_1 = \min_{y(x)} \frac{-py\dfrac{dy}{dx}\Big|_a^b + \displaystyle\int_a^b \left[ p\left( \frac{dy}{dx} \right)^2 - qy^2 \right] dx}{\displaystyle\int_a^b y^2 \sigma\, dx} \quad (4.23) \]
and this minimum is obtained when $y(x) = \phi_1(x)$. This result can be used to get estimates of the minimum eigenvalue by using trial functions which are continuous and satisfy the boundary conditions, but do not necessarily satisfy the differential equation.

Example 4.8. We have already solved the eigenvalue problem $\phi'' + \lambda\phi = 0$, $\phi(0) = 0$, $\phi(1) = 0$. In this case, the lowest eigenvalue is $\lambda_1 = \pi^2$. We can pick a nice function satisfying the boundary conditions, say $y(x) = x - x^2$. Inserting this into Equation (4.23), we find
\[ \lambda_1 \le \frac{\int_0^1 (1 - 2x)^2\, dx}{\int_0^1 (x - x^2)^2\, dx} = 10. \]
Indeed, $10 \ge \pi^2$.
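The bound is easy to reproduce numerically. Below is a small sketch (assuming NumPy and SciPy; the helper name rayleigh is illustrative) that evaluates the Rayleigh quotient (4.23) with $p = \sigma = 1$ and $q = 0$ for the trial function $y = x - x^2$, and also for a trial function closer to the true eigenfunction; the estimates should be $10$ and a value approaching $\pi^2 \approx 9.87$.

import numpy as np
from scipy.integrate import quad

def rayleigh(y, dy, a=0.0, b=1.0):
    # Rayleigh quotient with p = sigma = 1, q = 0 and y(a) = y(b) = 0,
    # so the boundary term -p y y' | vanishes.
    num, _ = quad(lambda x: dy(x)**2, a, b)
    den, _ = quad(lambda x: y(x)**2, a, b)
    return num / den

print(rayleigh(lambda x: x - x**2, lambda x: 1 - 2*x))            # 10.0
print(rayleigh(lambda x: np.sin(np.pi*x),
               lambda x: np.pi*np.cos(np.pi*x)))                  # ~ pi^2 = 9.8696...
print(np.pi**2)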

4.3 The Eigenfunction Expansion Method

In this section we solve the nonhomogeneous problem $Ly = f$ using expansions over the basis of Sturm-Liouville eigenfunctions. We have seen that Sturm-Liouville eigenvalue problems have the requisite set of orthogonal eigenfunctions. We will apply the eigenfunction expansion method to solve a particular nonhomogeneous boundary value problem.


Recall that one starts with a nonhomogeneous differential equation
\[ Ly = f, \]
where $y(x)$ is to satisfy given homogeneous boundary conditions. The method makes use of the eigenfunctions satisfying the eigenvalue problem
\[ L\phi_n = -\lambda_n \sigma \phi_n \]
subject to the given boundary conditions. Then, one assumes that $y(x)$ can be written as an expansion in the eigenfunctions,
\[ y(x) = \sum_{n=1}^{\infty} c_n \phi_n(x), \]
and inserts the expansion into the nonhomogeneous equation. This gives
\[ f(x) = L\!\left( \sum_{n=1}^{\infty} c_n \phi_n(x) \right) = -\sum_{n=1}^{\infty} c_n \lambda_n \sigma(x) \phi_n(x). \]

The expansion coefficients are then found by making use of the orthogonality of the eigenfunctions. Namely, we multiply the last equation by $\phi_m(x)$ and integrate. We obtain
\[ \int_a^b f(x) \phi_m(x)\, dx = -\sum_{n=1}^{\infty} c_n \lambda_n \int_a^b \phi_n(x) \phi_m(x) \sigma(x)\, dx. \]
Orthogonality yields
\[ \int_a^b f(x) \phi_m(x)\, dx = -c_m \lambda_m \int_a^b \phi_m^2(x) \sigma(x)\, dx. \]
Solving for $c_m$, we have
\[ c_m = -\frac{\int_a^b f(x) \phi_m(x)\, dx}{\lambda_m \int_a^b \phi_m^2(x) \sigma(x)\, dx}. \]

Example 4.9. As an example, we consider the solution of the boundary value problem
\[ (xy')' + \frac{y}{x} = \frac{1}{x}, \quad x \in [1, e], \quad (4.24) \]
\[ y(1) = 0 = y(e). \quad (4.25) \]

This equation is already in self-adjoint form. So, we know that the associated Sturm-Liouville eigenvalue problem has an orthogonal set of eigenfunctions. We first determine this set. Namely, we need to solve
\[ (x\phi')' + \frac{\phi}{x} = -\lambda \sigma \phi, \quad \phi(1) = 0 = \phi(e). \quad (4.26) \]
Rearranging the terms and multiplying by $x$, we have that
\[ x^2 \phi'' + x\phi' + (1 + \lambda \sigma x)\phi = 0. \]
This is almost an equation of Cauchy-Euler type. Picking the weight function $\sigma(x) = \frac{1}{x}$, we have
\[ x^2 \phi'' + x\phi' + (1 + \lambda)\phi = 0. \]
This is easily solved. The characteristic equation is
\[ r^2 + (1 + \lambda) = 0. \]
One obtains nontrivial solutions of the eigenvalue problem satisfying the boundary conditions when $\lambda > -1$. The solutions are
\[ \phi_n(x) = A \sin(n\pi \ln x), \quad n = 1, 2, \ldots, \]
where $\lambda_n = n^2\pi^2 - 1$.

It is often useful to normalize the eigenfunctions. This means that one chooses $A$ so that the norm of each eigenfunction is one. Thus, we have
\[ 1 = \int_1^e \phi_n(x)^2 \sigma(x)\, dx = A^2 \int_1^e \sin^2(n\pi \ln x)\, \frac{1}{x}\, dx = A^2 \int_0^1 \sin^2(n\pi y)\, dy = \frac{1}{2}A^2. \quad (4.27) \]
Thus, $A = \sqrt{2}$. Several of these eigenfunctions are shown in Figure 4.2.

Figure 4.2: Plots of the first five eigenfunctions, $y(x) = \sqrt{2}\sin(n\pi \ln x)$, on $1 \le x \le e$.

We now turn towards solving the nonhomogeneous problem, $Ly = \frac{1}{x}$. We first expand the unknown solution in terms of the eigenfunctions,
\[ y(x) = \sum_{n=1}^{\infty} c_n \sqrt{2}\sin(n\pi \ln x). \]
Inserting this solution into the differential equation, we have
\[ \frac{1}{x} = Ly = -\sum_{n=1}^{\infty} c_n \lambda_n \sqrt{2}\sin(n\pi \ln x)\, \frac{1}{x}. \]
Next, we make use of orthogonality. Multiplying both sides by the eigenfunction $\phi_m(x) = \sqrt{2}\sin(m\pi \ln x)$ and integrating gives
\[ -\lambda_m c_m = \int_1^e \sqrt{2}\sin(m\pi \ln x)\, \frac{1}{x}\, dx = \frac{\sqrt{2}}{m\pi}\left[ 1 - (-1)^m \right]. \]
Solving for $c_m$, we have
\[ c_m = \frac{\sqrt{2}}{m\pi}\, \frac{(-1)^m - 1}{m^2\pi^2 - 1}. \]
Finally, we insert these coefficients into the expansion for $y(x)$. The solution is then
\[ y(x) = \sum_{n=1}^{\infty} \frac{2}{n\pi}\, \frac{(-1)^n - 1}{n^2\pi^2 - 1}\, \sin(n\pi \ln x). \]
We plot this solution in Figure 4.3.

Figure 4.3: Plot of the solution in Example 4.9.
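For reference, here is a short sketch (assuming NumPy; the helper name y_partial is illustrative) that evaluates the truncated series from this example, the same solution plotted in Figure 4.3. The partial sums agree with each other away from the endpoints and vanish at $x = 1$ and $x = e$, as required by the boundary conditions.

import numpy as np

def y_partial(x, N=200):
    # Partial sum of the eigenfunction expansion from Example 4.9
    n = np.arange(1, N + 1)
    coeff = (2.0 / (n * np.pi)) * ((-1.0)**n - 1.0) / (n**2 * np.pi**2 - 1.0)
    return np.sum(coeff[:, None] * np.sin(np.outer(n, np.pi * np.log(x))), axis=0)

xs = np.array([1.0, 1.5, 2.0, 2.5, np.e])
print(y_partial(xs, N=10))
print(y_partial(xs, N=200))   # successive truncations agree; endpoint values ~ 0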


4.4 Appendix: The Fredholm Alternative Theorem

Given that $Ly = f$, when can one expect to find a solution? Is it unique? These questions are answered by the Fredholm Alternative Theorem. This theorem occurs in many forms, from a statement about solutions to systems of algebraic equations to solutions of boundary value problems and integral equations. The theorem comes in two parts, thus the term "alternative". Either the equation has exactly one solution for all $f$, or the equation has many solutions for some $f$'s and none for the rest.

The reader is familiar with the statements of the Fredholm Alternative for the solution of systems of algebraic equations. One seeks solutions of the system $Ax = b$ for $A$ an $n \times m$ matrix. Defining the matrix adjoint, $A^*$, through $\langle Ax, y \rangle = \langle x, A^* y \rangle$ for all $x, y \in \mathbb{C}^n$, then either

Theorem 4.1. First Alternative
The equation $Ax = b$ has a solution if and only if $\langle b, v \rangle = 0$ for all $v$ satisfying $A^* v = 0$.

or

Theorem 4.2. Second Alternative
A solution of $Ax = b$, if it exists, is unique if and only if $x = 0$ is the only solution of $Ax = 0$.

The second alternative is more familiar when given in the form: the solution of a nonhomogeneous system of $n$ equations and $n$ unknowns is unique if the only solution to the homogeneous problem is the zero solution. Or, equivalently, $A$ is invertible, or has nonzero determinant.

Proof. We prove the second theorem first. Assume that $Ax = 0$ for $x \neq 0$ and $Ax_0 = b$. Then $A(x_0 + \alpha x) = b$ for all $\alpha$. Therefore, the solution is not unique. Conversely, if there are two different solutions, $x_1$ and $x_2$, satisfying $Ax_1 = b$ and $Ax_2 = b$, then one has a nonzero solution $x = x_1 - x_2$ such that $Ax = A(x_1 - x_2) = 0$.

The proof of the first part of the first theorem is simple. Let $A^* v = 0$ and $Ax_0 = b$. Then we have
\[ \langle b, v \rangle = \langle Ax_0, v \rangle = \langle x_0, A^* v \rangle = 0. \]
For the second part, we assume that $\langle b, v \rangle = 0$ for all $v$ such that $A^* v = 0$. Write $b$ as the sum of a part that is in the range of $A$ and a part that is in the space orthogonal to the range of $A$, $b = b_R + b_O$. Then, $0 = \langle b_O, Ax \rangle = \langle A^* b_O, x \rangle$ for all $x$. Thus, $A^* b_O = 0$. Since $\langle b, v \rangle = 0$ for all $v$ in the nullspace of $A^*$, then $\langle b, b_O \rangle = 0$.

Therefore, $\langle b, v \rangle = 0$ implies that
\[ 0 = \langle b, b_O \rangle = \langle b_R + b_O, b_O \rangle = \langle b_O, b_O \rangle. \]
This means that $b_O = 0$, giving $b = b_R$ in the range of $A$. So, $Ax = b$ has a solution.


Example 4.10. Determine the allowed forms of $b$ for a solution of $Ax = b$ to exist, where
\[ A = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}. \]
First note that $A^* = A^T$. This is seen by looking at
\[ \langle Ax, y \rangle = \sum_{i=1}^n \sum_{j=1}^n a_{ij} x_j y_i = \sum_{j=1}^n x_j \sum_{i=1}^n a_{ij} y_i = \sum_{j=1}^n x_j \sum_{i=1}^n (a^T)_{ji} y_i = \langle x, A^T y \rangle. \quad (4.28) \]
For this example,
\[ A^* = \begin{pmatrix} 1 & 3 \\ 2 & 6 \end{pmatrix}. \]
We next solve $A^* v = 0$. This means $v_1 + 3v_2 = 0$. So, the nullspace of $A^*$ is spanned by $v = (3, -1)^T$. For a solution of $Ax = b$ to exist, $b$ would have to be orthogonal to $v$. Therefore, a solution exists when
\[ b = \alpha \begin{pmatrix} 1 \\ 3 \end{pmatrix}. \]
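A numerical version of this computation is a useful sanity check. The sketch below (assuming NumPy) finds the nullspace of $A^* = A^T$ from the singular value decomposition and tests which right-hand sides are orthogonal to it.

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 6.0]])

# Nullspace of A^T: right-singular vectors belonging to (numerically) zero singular values
U, s, Vt = np.linalg.svd(A.T)
v = Vt[s < 1e-10][0]           # one vector, proportional to (3, -1)
print(v / np.max(np.abs(v)))

# A solution of A x = b exists iff <b, v> = 0
for b in (np.array([1.0, 3.0]), np.array([1.0, 0.0])):
    print(b, np.dot(b, v), "solvable" if abs(np.dot(b, v)) < 1e-10 else "not solvable")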

So, what does the Fredholm Alternative say about solutions of boundary value problems? We extend the Fredholm Alternative to linear operators. A more general statement would be

Theorem 4.3. If $L$ is a bounded linear operator on a Hilbert space, then $Ly = f$ has a solution if and only if $\langle f, v \rangle = 0$ for every $v$ such that $L^\dagger v = 0$.

The statement for boundary value problems is similar. However, we need to be careful to treat the boundary conditions in our statement. As we have seen, after several integrations by parts we have that
\[ \langle Lu, v \rangle = S(u, v) + \langle u, L^\dagger v \rangle, \]
where $S(u, v)$ involves the boundary conditions on $u$ and $v$. Note that for nonhomogeneous boundary conditions, this term may no longer vanish.

Theorem 4.4. The solution of the boundary value problem $Lu = f$ with boundary conditions $Bu = g$ exists if and only if
\[ \langle f, v \rangle - S(u, v) = 0 \]
for all $v$ satisfying $L^\dagger v = 0$ and $B^\dagger v = 0$.

Example 4.11. Consider the problem
\[ u'' + u = f(x), \quad u(0) - u(2\pi) = \alpha, \quad u'(0) - u'(2\pi) = \beta. \]


Only certain values of $\alpha$ and $\beta$ will lead to solutions. We first note that
\[ L = L^\dagger = \frac{d^2}{dx^2} + 1. \]
Solutions of
\[ L^\dagger v = 0, \quad v(0) - v(2\pi) = 0, \quad v'(0) - v'(2\pi) = 0 \]
are easily found to be linear combinations of $v = \sin x$ and $v = \cos x$.

Next, one computes
\[ S(u, v) = \left[ u'v - uv' \right]_0^{2\pi} = u'(2\pi)v(2\pi) - u(2\pi)v'(2\pi) - u'(0)v(0) + u(0)v'(0). \quad (4.29) \]
For $v(x) = \sin x$, this yields
\[ S(u, \sin x) = -u(2\pi) + u(0) = \alpha. \]
Similarly, for $v(x) = \cos x$,
\[ S(u, \cos x) = u'(2\pi) - u'(0) = -\beta. \]
Using $\langle f, v \rangle - S(u, v) = 0$, this leads to the conditions that we were seeking,
\[ \int_0^{2\pi} f(x) \sin x\, dx = \alpha, \quad \int_0^{2\pi} f(x) \cos x\, dx = -\beta. \]

Problems

1. Prove that if $u(x)$ and $v(x)$ satisfy the general homogeneous boundary conditions
\[ \alpha_1 u(a) + \beta_1 u'(a) = 0, \quad \alpha_2 u(b) + \beta_2 u'(b) = 0 \quad (4.30) \]
at $x = a$ and $x = b$, then
\[ p(x)\left[ u(x)v'(x) - v(x)u'(x) \right]_{x=a}^{x=b} = 0. \]

2. Prove Green's Identity $\int_a^b (uLv - vLu)\, dx = [p(uv' - vu')]\big|_a^b$ for the general Sturm-Liouville operator $L$.

3. Find the adjoint operator and its domain for $Lu = u'' + 4u' - 3u$, $u'(0) + 4u(0) = 0$, $u'(1) + 4u(1) = 0$.

4. Show that a Sturm-Liouville operator with periodic boundary conditions on $[a, b]$ is self-adjoint if and only if $p(a) = p(b)$. [Recall, periodic boundary conditions are given as $u(a) = u(b)$ and $u'(a) = u'(b)$.]


5. The Hermite differential equation is given by $y'' - 2xy' + \lambda y = 0$. Rewrite this equation in self-adjoint form. From the Sturm-Liouville form obtained, verify that the differential operator is self-adjoint on $(-\infty, \infty)$. Give the integral form for the orthogonality of the eigenfunctions.

6. Find the eigenvalues and eigenfunctions of the given Sturm-Liouville problems.

a. $y'' + \lambda y = 0$, $y'(0) = 0 = y'(\pi)$.

b. $(xy')' + \frac{\lambda}{x} y = 0$, $y(1) = y(e^2) = 0$.

7. The eigenvalue problem $x^2 y'' - \lambda x y' + \lambda y = 0$ with $y(1) = y(2) = 0$ is not a Sturm-Liouville eigenvalue problem. Show that none of the eigenvalues are real by solving this eigenvalue problem.

8. In Example 4.8 we found a bound on the lowest eigenvalue for the given eigenvalue problem.

a. Verify the computation in the example.

b. Apply the method using
\[ y(x) = \begin{cases} x, & 0 < x < \tfrac{1}{2}, \\ 1 - x, & \tfrac{1}{2} < x < 1. \end{cases} \]
Is this an upper bound on $\lambda_1$?

c. Use the Rayleigh quotient to obtain a good upper bound for the lowest eigenvalue of the eigenvalue problem: $\phi'' + (\lambda - x^2)\phi = 0$, $\phi(0) = 0$, $\phi'(1) = 0$.

9. Use the method of eigenfunction expansions to solve the problems:

a. $y'' = x^2$, $y(0) = y(1) = 0$.

b. $y'' + 4y = x^2$, $y'(0) = y'(1) = 0$.

10. Determine the solvability conditions for the nonhomogeneous boundary value problem: $u'' + 4u = f(x)$, $u(0) = \alpha$, $u'(\pi/4) = \beta$.


5 Non-sinusoidal Harmonics and Special Functions

"To the pure geometer the radius of curvature is an incidental characteristic - like the grin of the Cheshire cat. To the physicist it is an indispensable characteristic. It would be going too far to say that to the physicist the cat is merely incidental to the grin. Physics is concerned with interrelatedness such as the interrelatedness of cats and grins. In this case the "cat without a grin" and the "grin without a cat" are equally set aside as purely mathematical phantasies." Sir Arthur Stanley Eddington (1882-1944)

In this chapter we provide a glimpse into generalized Fourier series in which the normal modes of oscillation are not sinusoidal. For vibrating strings, we saw that the harmonics were sinusoidal basis functions for a large, infinite dimensional, function space. Now, we will extend these ideas to non-sinusoidal harmonics and explore the underlying structure behind these ideas. In particular, we will explore Legendre polynomials and Bessel functions, which will later arise in problems having cylindrical or spherical symmetry.

The background for the study of generalized Fourier series is that of function spaces. We begin by exploring the general context in which one finds oneself when discussing Fourier series and (later) Fourier transforms. We can view the sine and cosine functions in the Fourier trigonometric series representations as basis vectors in an infinite dimensional function space. A given function in that space may then be represented as a linear combination over this infinite basis. With this in mind, we might wonder

• Do we have enough basis vectors for the function space?

• Are the infinite series expansions convergent?

• What functions can be represented by such expansions?

In the context of the boundary value problems which typically appear in physics, one is led to the study of boundary value problems in the form of Sturm-Liouville eigenvalue problems. These lead to an appropriate set of basis vectors for the function space under consideration. We will touch a little on these ideas, leaving some of the deeper results for more advanced courses in mathematics. For now, we will turn to function spaces and explore some typical basis functions, many of which originated from the study of physical problems. The common basis functions are often referred to as special functions in physics. Examples are the classical orthogonal polynomials (Legendre, Hermite, Laguerre, Tchebychef) and Bessel functions. But first we will introduce function spaces.

5.1 Function Spaces

Earlier we studied finite dimensional vector spaces. Given a set of basis vectors, $\{a_k\}_{k=1}^n$, in a vector space $V$, we showed that we can expand any vector $v \in V$ in terms of this basis, $v = \sum_{k=1}^n v_k a_k$. We then spent some time looking at the simple case of extracting the components $v_k$ of the vector. The keys to doing this simply were to have a scalar product and an orthogonal basis set. These are also the key ingredients that we will need in the infinite dimensional case. In fact, we had already done this when we studied Fourier series.

Recall that when we found Fourier trigonometric series representations of functions, we started with a function (vector) that we wanted to expand in a set of trigonometric functions (basis) and we sought the Fourier coefficients (components). In this section we will extend our notions from finite dimensional spaces to infinite dimensional spaces, and we will develop the needed background in which to think about more general Fourier series expansions. This conceptual framework is very important in other areas in mathematics (such as ordinary and partial differential equations) and physics (such as quantum mechanics and electrodynamics). [We note that the above determination of vector components for finite dimensional spaces is precisely what we had done to compute the Fourier coefficients using trigonometric bases. Reading further, you will see how this works.]

We will consider various infinite dimensional function spaces. Functions in these spaces would differ by their properties. For example, we could consider the space of continuous functions on $[0,1]$, the space of continuously differentiable functions, or the set of functions integrable from $a$ to $b$. As you will see, there are many types of function spaces. In order to view these spaces as vector spaces, we will need to be able to add functions and multiply them by scalars in such a way that they satisfy the definition of a vector space as defined in Chapter 3.

We will also need a scalar product defined on this space of functions. There are several types of scalar products, or inner products, that we can define. An inner product $\langle \cdot, \cdot \rangle$ on a real vector space $V$ is a mapping from $V \times V$ into $\mathbb{R}$ such that for $u, v, w \in V$ and $\alpha \in \mathbb{R}$ one has

1. 〈v, v〉 ≥ 0 and 〈v, v〉 = 0 iff v = 0.

2. 〈v, w〉 = 〈w, v〉.

3. 〈αv, w〉 = α〈v, w〉.

4. 〈u + v, w〉 = 〈u, w〉+ 〈v, w〉.

A real vector space equipped with the above inner product leads to what is called a real inner product space. For complex inner product spaces the above properties hold with the second property replaced with $\langle v, w \rangle = \overline{\langle w, v \rangle}$.

For the time being, we will only deal with real valued functions and, thus, we will need an inner product appropriate for such spaces. One such definition is the following. Let $f(x)$ and $g(x)$ be functions defined on $[a, b]$ and introduce the weight function $\sigma(x) > 0$. Then, we define the inner product, if the integral exists, as
\[ \langle f, g \rangle = \int_a^b f(x) g(x) \sigma(x)\, dx. \quad (5.1) \]
Spaces in which $\langle f, f \rangle < \infty$ under this inner product are called the space of square integrable functions on $(a, b)$ under weight $\sigma$, denoted $L^2_\sigma(a, b)$. In what follows, we will assume for simplicity that $\sigma(x) = 1$. This is possible to do by using a change of variables.

Now that we have function spaces equipped with an inner product, we seek a basis for the space. For an $n$-dimensional space we need $n$ basis vectors. For an infinite dimensional space, how many will we need? How do we know when we have enough? We will provide some answers to these questions later.

Let's assume that we have a basis of functions $\{\phi_n(x)\}_{n=1}^{\infty}$. Given a function $f(x)$, how can we go about finding the components of $f$ in this basis? In other words, let
\[ f(x) = \sum_{n=1}^{\infty} c_n \phi_n(x). \]
How do we find the $c_n$'s? Does this remind you of Fourier series expansions? Does it remind you of the problem we had earlier for finite dimensional spaces? [You may want to review the discussion at the end of Section ?? as you read the next derivation.]

Formally, we take the inner product of $f$ with each $\phi_j$ and use the properties of the inner product to find
\[ \langle \phi_j, f \rangle = \left\langle \phi_j, \sum_{n=1}^{\infty} c_n \phi_n \right\rangle = \sum_{n=1}^{\infty} c_n \langle \phi_j, \phi_n \rangle. \quad (5.2) \]
If the basis is an orthogonal basis, then we have
\[ \langle \phi_j, \phi_n \rangle = N_j \delta_{jn}, \quad (5.3) \]
where $\delta_{jn}$ is the Kronecker delta. Recall from Chapter 3 that the Kronecker delta is defined as
\[ \delta_{ij} = \begin{cases} 0, & i \neq j, \\ 1, & i = j. \end{cases} \quad (5.4) \]
Continuing with the derivation, we have
\[ \langle \phi_j, f \rangle = \sum_{n=1}^{\infty} c_n \langle \phi_j, \phi_n \rangle = \sum_{n=1}^{\infty} c_n N_j \delta_{jn}. \quad (5.5) \]
[For the generalized Fourier series expansion $f(x) = \sum_{n=1}^{\infty} c_n \phi_n(x)$, we have determined the generalized Fourier coefficients to be $c_j = \langle \phi_j, f \rangle / \langle \phi_j, \phi_j \rangle$.]


Expanding the sum, we see that the Kronecker delta picks out one nonzero term:
\[ \langle \phi_j, f \rangle = c_1 N_j \delta_{j1} + c_2 N_j \delta_{j2} + \ldots + c_j N_j \delta_{jj} + \ldots = c_j N_j. \quad (5.6) \]
So, the expansion coefficients are
\[ c_j = \frac{\langle \phi_j, f \rangle}{N_j} = \frac{\langle \phi_j, f \rangle}{\langle \phi_j, \phi_j \rangle}, \quad j = 1, 2, \ldots. \]
We summarize this important result:

Generalized Basis Expansion

Let $f(x)$ be represented by an expansion over a basis of orthogonal functions, $\{\phi_n(x)\}_{n=1}^{\infty}$,
\[ f(x) = \sum_{n=1}^{\infty} c_n \phi_n(x). \]
Then, the expansion coefficients are formally determined as
\[ c_n = \frac{\langle \phi_n, f \rangle}{\langle \phi_n, \phi_n \rangle}. \]
This will be referred to as the general Fourier series expansion and the $c_n$'s are called the Fourier coefficients. Technically, equality only holds when the infinite series converges to the given function on the interval of interest.

Example 5.1. Find the coefficients of the Fourier sine series expansion of $f(x)$, given by
\[ f(x) = \sum_{n=1}^{\infty} b_n \sin nx, \quad x \in [-\pi, \pi]. \]
In the last chapter we already established that the set of functions $\phi_n(x) = \sin nx$ for $n = 1, 2, \ldots$ is orthogonal on the interval $[-\pi, \pi]$. Recall that, using trigonometric identities, we have
\[ \langle \phi_n, \phi_m \rangle = \int_{-\pi}^{\pi} \sin nx \sin mx\, dx = \pi \delta_{nm}. \quad (5.7) \]
Therefore, the set $\phi_n(x) = \sin nx$ for $n = 1, 2, \ldots$ is an orthogonal set of functions on the interval $[-\pi, \pi]$.

We determine the expansion coefficients using
\[ b_n = \frac{\langle \phi_n, f \rangle}{N_n} = \frac{\langle \phi_n, f \rangle}{\langle \phi_n, \phi_n \rangle} = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x) \sin nx\, dx. \]
Does this result look familiar?

Just as with vectors in three dimensions, we can normalize these basis functions to arrive at an orthonormal basis. This is simply done by dividing by the length of the vector. Recall that the length of a vector is obtained as $v = \sqrt{\mathbf{v} \cdot \mathbf{v}}$. In the same way, we define the norm of a function by
\[ \|f\| = \sqrt{\langle f, f \rangle}. \]
Note, there are many types of norms, but this induced norm will be sufficient. [The norm defined here is the natural, or induced, norm on the inner product space. Norms are a generalization of the concept of the length of a vector. Denoting by $\|v\|$ the norm of $v$, it must satisfy the properties (1) $\|v\| \ge 0$, with $\|v\| = 0$ if and only if $v = 0$; (2) $\|\alpha v\| = |\alpha|\,\|v\|$; (3) $\|u + v\| \le \|u\| + \|v\|$. Examples of common norms are the Euclidean norm $\|v\| = \sqrt{v_1^2 + \cdots + v_n^2}$, the taxicab norm $\|v\| = |v_1| + \cdots + |v_n|$, and the $L^p$ norm $\|f\| = \left( \int |f(x)|^p\, dx \right)^{1/p}$.]

For this example, the norms of the basis functions are $\|\phi_n\| = \sqrt{\pi}$. Defining $\psi_n(x) = \frac{1}{\sqrt{\pi}}\phi_n(x)$, we can normalize the $\phi_n$'s and have obtained an orthonormal basis of functions on $[-\pi, \pi]$.

We can also use the normalized basis to determine the expansion coefficients. Since $f(x) = \sum_n b_n \phi_n(x) = \sum_n (\sqrt{\pi}\, b_n)\psi_n(x)$, we have $\sqrt{\pi}\, b_n = \langle \psi_n, f \rangle$, so that
\[ b_n = \frac{\langle \psi_n, f \rangle}{\sqrt{\pi}} = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x) \sin nx\, dx. \]
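As a concrete check of the coefficient formula, the sketch below (assuming NumPy and SciPy) computes a few Fourier sine coefficients of a sample function. The choice $f(x) = x$ is purely illustrative, not from the text; for it, the coefficients are known to be $b_n = 2(-1)^{n+1}/n$.

import numpy as np
from scipy.integrate import quad

f = lambda x: x   # illustrative sample function on [-pi, pi]

def b(n):
    # b_n = <phi_n, f> / <phi_n, phi_n> = (1/pi) * int_{-pi}^{pi} f(x) sin(nx) dx
    val, _ = quad(lambda x: f(x) * np.sin(n * x), -np.pi, np.pi)
    return val / np.pi

for n in range(1, 6):
    print(n, b(n), 2 * (-1)**(n + 1) / n)   # numerical vs. closed form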

5.2 Classical Orthogonal Polynomials

There are other basis functions that can be used to develop series representations of functions. In this section we introduce the classical orthogonal polynomials. We begin by noting that the sequence of functions $\{1, x, x^2, \ldots\}$ is a basis of linearly independent functions. In fact, by the Stone-Weierstrass Approximation Theorem, this set is a basis of $L^2_\sigma(a, b)$, the space of square integrable functions over the interval $[a, b]$ relative to weight $\sigma(x)$. [Stone-Weierstrass Approximation Theorem: Suppose $f$ is a continuous function defined on the interval $[a, b]$. For every $\varepsilon > 0$, there exists a polynomial function $P(x)$ such that for all $x \in [a, b]$ we have $|f(x) - P(x)| < \varepsilon$. Therefore, every continuous function defined on $[a, b]$ can be uniformly approximated as closely as we wish by a polynomial function.] However, we will show that the sequence of functions $\{1, x, x^2, \ldots\}$ does not provide an orthogonal basis for these spaces. We will then proceed to find an appropriate orthogonal basis of functions.

We are familiar with being able to expand functions over the basis $\{1, x, x^2, \ldots\}$, since these expansions are just Maclaurin series representations of the functions about $x = 0$,
\[ f(x) \sim \sum_{n=0}^{\infty} c_n x^n. \]
However, this basis is not an orthogonal set of basis functions. One can easily see this by integrating the product of two even, or two odd, basis functions with $\sigma(x) = 1$ and $(a, b) = (-1, 1)$. For example,
\[ \int_{-1}^1 x^0 x^2\, dx = \frac{2}{3}. \]

The Gram-Schmidt Orthogonalization Process. Since we have found that orthogonal bases have been useful in determining the coefficients for expansions of given functions, we might ask, "Given a set of linearly independent basis vectors, can one find an orthogonal basis of the given space?" The answer is yes. We recall from introductory linear algebra, which mostly covers finite dimensional vector spaces, that there is a method for carrying this out called the Gram-Schmidt Orthogonalization Process. We will review this process for finite dimensional vectors and then generalize to function spaces.


Let's assume that we have three vectors that span the usual three dimensional space, $\mathbb{R}^3$, given by $\mathbf{a}_1$, $\mathbf{a}_2$, and $\mathbf{a}_3$ and shown in Figure 5.1. We seek an orthogonal basis $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$, beginning one vector at a time.

First we take one of the original basis vectors, say $\mathbf{a}_1$, and define
\[ \mathbf{e}_1 = \mathbf{a}_1. \]
It is sometimes useful to normalize these basis vectors, denoting such a normalized vector with a "hat":
\[ \hat{\mathbf{e}}_1 = \frac{\mathbf{e}_1}{e_1}, \quad \text{where } e_1 = \sqrt{\mathbf{e}_1 \cdot \mathbf{e}_1}. \]

Figure 5.1: The basis $\mathbf{a}_1$, $\mathbf{a}_2$, and $\mathbf{a}_3$ of $\mathbb{R}^3$.

Next, we want to determine an $\mathbf{e}_2$ that is orthogonal to $\mathbf{e}_1$. We take another element of the original basis, $\mathbf{a}_2$. In Figure 5.2 we show the orientation of the vectors. Note that the desired orthogonal vector is $\mathbf{e}_2$. We can now write $\mathbf{a}_2$ as the sum of $\mathbf{e}_2$ and the projection of $\mathbf{a}_2$ on $\mathbf{e}_1$. Denoting this projection by $\mathrm{pr}_1\mathbf{a}_2$, we then have
\[ \mathbf{e}_2 = \mathbf{a}_2 - \mathrm{pr}_1\mathbf{a}_2. \quad (5.8) \]

Figure 5.2: A plot of the vectors $\mathbf{e}_1$, $\mathbf{a}_2$, and $\mathbf{e}_2$ needed to find the projection of $\mathbf{a}_2$ on $\mathbf{e}_1$.

Recall the projection of one vector onto another from your vector calculus class,
\[ \mathrm{pr}_1\mathbf{a}_2 = \frac{\mathbf{a}_2 \cdot \mathbf{e}_1}{e_1^2}\, \mathbf{e}_1. \quad (5.9) \]
This is easily proven by writing the projection as a vector of length $a_2 \cos\theta$ in direction $\hat{\mathbf{e}}_1$, where $\theta$ is the angle between $\mathbf{e}_1$ and $\mathbf{a}_2$. Using the definition of the dot product, $\mathbf{a} \cdot \mathbf{b} = ab\cos\theta$, the projection formula follows.

Combining Equations (5.8)-(5.9), we find that
\[ \mathbf{e}_2 = \mathbf{a}_2 - \frac{\mathbf{a}_2 \cdot \mathbf{e}_1}{e_1^2}\, \mathbf{e}_1. \quad (5.10) \]
It is a simple matter to verify that $\mathbf{e}_2$ is orthogonal to $\mathbf{e}_1$:
\[ \mathbf{e}_2 \cdot \mathbf{e}_1 = \mathbf{a}_2 \cdot \mathbf{e}_1 - \frac{\mathbf{a}_2 \cdot \mathbf{e}_1}{e_1^2}\, \mathbf{e}_1 \cdot \mathbf{e}_1 = \mathbf{a}_2 \cdot \mathbf{e}_1 - \mathbf{a}_2 \cdot \mathbf{e}_1 = 0. \quad (5.11) \]

Figure 5.3: A plot of the vectors needed to determine $\mathbf{e}_3$.

Next, we seek a third vector $\mathbf{e}_3$ that is orthogonal to both $\mathbf{e}_1$ and $\mathbf{e}_2$. Pictorially, we can write the given vector $\mathbf{a}_3$ as a combination of vector projections along $\mathbf{e}_1$ and $\mathbf{e}_2$ with the new vector. This is shown in Figure 5.3. Thus, we can see that
\[ \mathbf{e}_3 = \mathbf{a}_3 - \frac{\mathbf{a}_3 \cdot \mathbf{e}_1}{e_1^2}\, \mathbf{e}_1 - \frac{\mathbf{a}_3 \cdot \mathbf{e}_2}{e_2^2}\, \mathbf{e}_2. \quad (5.12) \]
Again, it is a simple matter to compute the scalar products with $\mathbf{e}_1$ and $\mathbf{e}_2$ to verify orthogonality.

We can easily generalize this procedure to the $N$-dimensional case. Let $\mathbf{a}_n$, $n = 1, \ldots, N$, be a set of linearly independent vectors in $\mathbb{R}^N$. Then, an orthogonal basis can be found by setting $\mathbf{e}_1 = \mathbf{a}_1$ and defining
\[ \mathbf{e}_n = \mathbf{a}_n - \sum_{j=1}^{n-1} \frac{\mathbf{a}_n \cdot \mathbf{e}_j}{e_j^2}\, \mathbf{e}_j, \quad n = 2, 3, \ldots, N. \quad (5.13) \]

Now, we can generalize this idea to (real) function spaces. Let $f_n(x)$, $n \in \mathbb{N}_0 = \{0, 1, 2, \ldots\}$, be a linearly independent sequence of continuous functions defined for $x \in [a, b]$. Then, an orthogonal basis of functions, $\phi_n(x)$, $n \in \mathbb{N}_0$, can be found and is given by
\[ \phi_0(x) = f_0(x) \]
and
\[ \phi_n(x) = f_n(x) - \sum_{j=0}^{n-1} \frac{\langle f_n, \phi_j \rangle}{\|\phi_j\|^2}\, \phi_j(x), \quad n = 1, 2, \ldots. \quad (5.14) \]
Here we are using inner products relative to the weight $\sigma(x)$,
\[ \langle f, g \rangle = \int_a^b f(x) g(x) \sigma(x)\, dx. \quad (5.15) \]
Note the similarity between the orthogonal basis in (5.14) and the expression for the finite dimensional case in Equation (5.13).

Example 5.2. Apply the Gram-Schmidt Orthogonalization process to the set $f_n(x) = x^n$, $n \in \mathbb{N}_0$, when $x \in (-1, 1)$ and $\sigma(x) = 1$.

First, we have $\phi_0(x) = f_0(x) = 1$. Note that
\[ \int_{-1}^1 \phi_0^2(x)\, dx = 2. \]
We could use this result to fix the normalization of the new basis, but we will hold off doing that for now.

Now, we compute the second basis element:
\[ \phi_1(x) = f_1(x) - \frac{\langle f_1, \phi_0 \rangle}{\|\phi_0\|^2}\, \phi_0(x) = x - \frac{\langle x, 1 \rangle}{\|1\|^2}\, 1 = x, \quad (5.16) \]
since $\langle x, 1 \rangle$ is the integral of an odd function over a symmetric interval.

For $\phi_2(x)$, we have
\begin{align*}
\phi_2(x) &= f_2(x) - \frac{\langle f_2, \phi_0 \rangle}{\|\phi_0\|^2}\, \phi_0(x) - \frac{\langle f_2, \phi_1 \rangle}{\|\phi_1\|^2}\, \phi_1(x) \\
&= x^2 - \frac{\langle x^2, 1 \rangle}{\|1\|^2}\, 1 - \frac{\langle x^2, x \rangle}{\|x\|^2}\, x \\
&= x^2 - \frac{\int_{-1}^1 x^2\, dx}{\int_{-1}^1 dx} = x^2 - \frac{1}{3}. \quad (5.17)
\end{align*}


So far, we have the orthogonal set $\{1, x, x^2 - \frac{1}{3}\}$. If one chooses to normalize these by forcing $\phi_n(1) = 1$, then one obtains the classical Legendre polynomials, $P_n(x)$. Thus,
\[ P_2(x) = \frac{1}{2}(3x^2 - 1). \]
Note that this normalization is different from the usual one. In fact, we see that $P_2(x)$ does not have a unit norm,
\[ \|P_2\|^2 = \int_{-1}^1 P_2^2(x)\, dx = \frac{2}{5}. \]
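The orthogonalization is easy to automate. The following is a minimal sketch, assuming SymPy is available (the helper name gram_schmidt is illustrative); it carries out Gram-Schmidt on $1, x, x^2, x^3$ over $(-1, 1)$ with $\sigma(x) = 1$ and rescales each result so that $\phi_n(1) = 1$, reproducing the first few Legendre polynomials.

import sympy as sp

x = sp.symbols('x')

def gram_schmidt(fs, a=-1, b=1):
    # Orthogonalize the list fs over [a, b] with weight sigma(x) = 1
    phis = []
    for f in fs:
        phi = f
        for p in phis:
            phi -= sp.integrate(f * p, (x, a, b)) / sp.integrate(p * p, (x, a, b)) * p
        phis.append(sp.expand(phi))
    return phis

phis = gram_schmidt([x**n for n in range(4)])
legendre_like = [sp.simplify(phi / phi.subs(x, 1)) for phi in phis]  # force phi_n(1) = 1
print(legendre_like)   # [1, x, 3*x**2/2 - 1/2, 5*x**3/2 - 3*x/2]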

The set of Legendre polynomials is just one set of classical orthogonal polynomials that can be obtained in this way. [Adrien-Marie Legendre (1752-1833) was a French mathematician who made many contributions to analysis and algebra.] Many of these special functions had originally appeared as solutions of important boundary value problems in physics. They all have similar properties and we will just elaborate some of these for the Legendre functions in the next section. Others in this group are shown in Table 5.1.

Table 5.1: Common classical orthogonal polynomials with the interval and weight function used to define them.

Polynomial                   Symbol              Interval        $\sigma(x)$
Hermite                      $H_n(x)$            $(-\infty,\infty)$   $e^{-x^2}$
Laguerre                     $L_n^{\alpha}(x)$   $[0,\infty)$    $x^{\alpha}e^{-x}$
Legendre                     $P_n(x)$            $(-1,1)$        $1$
Gegenbauer                   $C_n^{\lambda}(x)$  $(-1,1)$        $(1-x^2)^{\lambda-1/2}$
Tchebychef of the 1st kind   $T_n(x)$            $(-1,1)$        $(1-x^2)^{-1/2}$
Tchebychef of the 2nd kind   $U_n(x)$            $(-1,1)$        $(1-x^2)^{1/2}$
Jacobi                       $P_n^{(\nu,\mu)}(x)$ $(-1,1)$       $(1-x)^{\nu}(1+x)^{\mu}$

5.3 Fourier-Legendre Series

In the last chapter we saw how useful Fourier series expansions were for solving the heat and wave equations. In Chapter 6 we will investigate partial differential equations in higher dimensions and find that problems with spherical symmetry may lead to series representations in terms of a basis of Legendre polynomials. For example, we could consider the steady state temperature distribution inside a hemispherical igloo, which takes the form
\[ \phi(r, \theta) = \sum_{n=0}^{\infty} A_n r^n P_n(\cos\theta) \]
in spherical coordinates. Evaluating this function at the surface $r = a$ as $\phi(a, \theta) = f(\theta)$ leads to a Fourier-Legendre series expansion of the function $f$:
\[ f(\theta) = \sum_{n=0}^{\infty} c_n P_n(\cos\theta), \]
where $c_n = A_n a^n$.

In this section we would like to explore Fourier-Legendre series expansions of functions $f(x)$ defined on $(-1, 1)$:
\[ f(x) \sim \sum_{n=0}^{\infty} c_n P_n(x). \quad (5.18) \]
As with Fourier trigonometric series, we can determine the expansion coefficients by multiplying both sides of Equation (5.18) by $P_m(x)$ and integrating for $x \in [-1, 1]$. Orthogonality gives the usual form for the generalized Fourier coefficients,
\[ c_n = \frac{\langle f, P_n \rangle}{\|P_n\|^2}, \quad n = 0, 1, \ldots. \]
We will later show that
\[ \|P_n\|^2 = \frac{2}{2n+1}. \]
Therefore, the Fourier-Legendre coefficients are
\[ c_n = \frac{2n+1}{2}\int_{-1}^1 f(x) P_n(x)\, dx. \quad (5.19) \]

5.3.1 Properties of Legendre Polynomials

We can do examples of Fourier-Legendre expansions given just a few facts about Legendre polynomials. The first property that the Legendre polynomials have is the Rodrigues formula:
\[ P_n(x) = \frac{1}{2^n n!}\frac{d^n}{dx^n}(x^2 - 1)^n, \quad n \in \mathbb{N}_0. \quad (5.20) \]
From the Rodrigues formula, one can show that $P_n(x)$ is an $n$th degree polynomial. Also, for $n$ odd, the polynomial is an odd function, and for $n$ even, the polynomial is an even function.

Example 5.3. Determine $P_2(x)$ from the Rodrigues formula:
\begin{align*}
P_2(x) &= \frac{1}{2^2 2!}\frac{d^2}{dx^2}(x^2 - 1)^2 \\
&= \frac{1}{8}\frac{d^2}{dx^2}(x^4 - 2x^2 + 1) \\
&= \frac{1}{8}\frac{d}{dx}(4x^3 - 4x) \\
&= \frac{1}{8}(12x^2 - 4) \\
&= \frac{1}{2}(3x^2 - 1). \quad (5.21)
\end{align*}
Note that we get the same result as we found in the last section using orthogonalization.
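The Rodrigues formula is straightforward to evaluate symbolically. Here is a short sketch (assuming SymPy; the helper name rodrigues is illustrative) that generates the first few polynomials this way and compares them with SymPy's built-in legendre function.

import sympy as sp

x = sp.symbols('x')

def rodrigues(n):
    # P_n(x) = 1/(2^n n!) d^n/dx^n (x^2 - 1)^n
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

for n in range(5):
    print(n, rodrigues(n), rodrigues(n) - sp.expand(sp.legendre(n, x)))  # difference should be 0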

The first several Legendre polynomials are given in Table 5.2. In Figure 5.4 we show plots of these Legendre polynomials.


Table 5.2: Tabular computation of the Legendre polynomials using the Rodrigues formula.

n   $(x^2-1)^n$                 $\dfrac{d^n}{dx^n}(x^2-1)^n$   $\dfrac{1}{2^n n!}$   $P_n(x)$
0   $1$                         $1$                            $1$                   $1$
1   $x^2 - 1$                   $2x$                           $\tfrac{1}{2}$        $x$
2   $x^4 - 2x^2 + 1$            $12x^2 - 4$                    $\tfrac{1}{8}$        $\tfrac{1}{2}(3x^2 - 1)$
3   $x^6 - 3x^4 + 3x^2 - 1$     $120x^3 - 72x$                 $\tfrac{1}{48}$       $\tfrac{1}{2}(5x^3 - 3x)$

Figure 5.4: Plots of the Legendre polynomials $P_2(x)$, $P_3(x)$, $P_4(x)$, and $P_5(x)$ on $[-1, 1]$.

The Three Term Recursion Formula. All of the classical orthogonal polynomials satisfy a three term recursion formula (or recurrence relation). In the case of the Legendre polynomials, we have
\[ (n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x), \quad n = 1, 2, \ldots. \quad (5.22) \]
This can also be rewritten by replacing $n$ with $n-1$ as
\[ (2n-1)xP_{n-1}(x) = nP_n(x) + (n-1)P_{n-2}(x), \quad n = 1, 2, \ldots. \quad (5.23) \]

Example 5.4. Use the recursion formula to find $P_2(x)$ and $P_3(x)$, given that $P_0(x) = 1$ and $P_1(x) = x$.

We first begin by inserting $n = 1$ into Equation (5.22):
\[ 2P_2(x) = 3xP_1(x) - P_0(x) = 3x^2 - 1. \]
So, $P_2(x) = \frac{1}{2}(3x^2 - 1)$.

For $n = 2$, we have
\begin{align*}
3P_3(x) &= 5xP_2(x) - 2P_1(x) \\
&= \frac{5}{2}x(3x^2 - 1) - 2x \\
&= \frac{1}{2}(15x^3 - 9x). \quad (5.24)
\end{align*}
This gives $P_3(x) = \frac{1}{2}(5x^3 - 3x)$. These expressions agree with the earlier results.
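The recursion also gives a fast way to tabulate the polynomials numerically. Below is a sketch (assuming NumPy and SciPy; the helper name legendre_recursion is illustrative) that evaluates $P_n(x)$ by upward recursion and checks the result against scipy.special.eval_legendre.

import numpy as np
from scipy.special import eval_legendre

def legendre_recursion(n, x):
    # Upward recursion (5.22): (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p_prev, p = np.ones_like(x), x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

xs = np.linspace(-1, 1, 5)
for n in range(6):
    print(n, np.max(np.abs(legendre_recursion(n, xs) - eval_legendre(n, xs))))  # ~ 0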

We will prove the three term recursion formula in two ways. First we use the orthogonality properties of Legendre polynomials and the following lemma. (The first proof is based upon the nature of the Legendre polynomials as an orthogonal basis, while the second proof is derived using generating functions.)

Lemma 5.1. The leading coefficient of $x^n$ in $P_n(x)$ is $\dfrac{1}{2^n n!}\dfrac{(2n)!}{n!}$.

Proof. We can prove this using the Rodrigues formula. First, we focus on the leading coefficient of $(x^2 - 1)^n$, which is $x^{2n}$. The first derivative of $x^{2n}$ is $2nx^{2n-1}$. The second derivative is $2n(2n-1)x^{2n-2}$. The $j$th derivative is
\[ \frac{d^j x^{2n}}{dx^j} = [2n(2n-1)\cdots(2n-j+1)]x^{2n-j}. \]
Thus, the $n$th derivative is given by
\[ \frac{d^n x^{2n}}{dx^n} = [2n(2n-1)\cdots(n+1)]x^n. \]
This proves that $P_n(x)$ has degree $n$. The leading coefficient of $P_n(x)$ can now be written as
\[ \frac{[2n(2n-1)\cdots(n+1)]}{2^n n!} = \frac{[2n(2n-1)\cdots(n+1)]}{2^n n!}\,\frac{n(n-1)\cdots 1}{n(n-1)\cdots 1} = \frac{1}{2^n n!}\frac{(2n)!}{n!}. \quad (5.25) \]

Theorem 5.1. Legendre polynomials satisfy the three term recursion formula
\[ (2n-1)xP_{n-1}(x) = nP_n(x) + (n-1)P_{n-2}(x), \quad n = 1, 2, \ldots. \quad (5.26) \]

Proof. In order to prove the three term recursion formula we consider the expression $(2n-1)xP_{n-1}(x) - nP_n(x)$. While each term is a polynomial of degree $n$, the leading order terms cancel. We need only look at the coefficient of the leading order term in the first expression. It is
\[ (2n-1)\frac{1}{2^{n-1}(n-1)!}\frac{(2n-2)!}{(n-1)!} = \frac{1}{2^{n-1}(n-1)!}\frac{(2n-1)!}{(n-1)!} = \frac{(2n-1)!}{2^{n-1}[(n-1)!]^2}. \]
The coefficient of the leading term for $nP_n(x)$ can be written as
\[ n\frac{1}{2^n n!}\frac{(2n)!}{n!} = n\left( \frac{2n}{2n^2} \right)\left( \frac{1}{2^{n-1}(n-1)!} \right)\frac{(2n-1)!}{(n-1)!} = \frac{(2n-1)!}{2^{n-1}[(n-1)!]^2}. \]
It is easy to see that the leading order terms in the expression $(2n-1)xP_{n-1}(x) - nP_n(x)$ cancel.

The next terms will be of degree $n-2$. This is because the $P_n$'s are either even or odd functions, thus only containing even, or odd, powers of $x$. We conclude that
\[ (2n-1)xP_{n-1}(x) - nP_n(x) = \text{polynomial of degree } n-2. \]
Therefore, since the Legendre polynomials form a basis, we can write this polynomial as a linear combination of Legendre polynomials:
\[ (2n-1)xP_{n-1}(x) - nP_n(x) = c_0P_0(x) + c_1P_1(x) + \ldots + c_{n-2}P_{n-2}(x). \quad (5.27) \]


Multiplying Equation (5.27) by $P_m(x)$ for $m = 0, 1, \ldots, n-3$, integrating from $-1$ to $1$, and using orthogonality, we obtain
\[ 0 = c_m\|P_m\|^2, \quad m = 0, 1, \ldots, n-3. \]
[Note: $\int_{-1}^1 x^k P_n(x)\, dx = 0$ for $k \le n-1$. Thus, $\int_{-1}^1 xP_{n-1}(x)P_m(x)\, dx = 0$ for $m \le n-3$.]

Thus, all of these $c_m$'s are zero, leaving Equation (5.27) as
\[ (2n-1)xP_{n-1}(x) - nP_n(x) = c_{n-2}P_{n-2}(x). \]
The final coefficient can be found by using the normalization condition, $P_n(1) = 1$. Thus, $c_{n-2} = (2n-1) - n = n-1$.

5.3.2 Generating Functions

A second proof of the three term recursion formula can be obtained from the generating function of the Legendre polynomials. Many special functions have such generating functions. In this case it is given by
\[ g(x, t) = \frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{n=0}^{\infty} P_n(x)t^n, \quad |x| \le 1, \; |t| < 1. \quad (5.28) \]

This generating function occurs often in applications. In particular, it arises in potential theory, such as electromagnetic or gravitational potentials. These potential functions are $\frac{1}{r}$ type functions.

Figure 5.5: The position vectors used to describe the tidal force on the Earth due to the moon.

For example, the gravitational potential between the Earth and the moon is proportional to the reciprocal of the magnitude of the difference between their positions relative to some coordinate system. An even better example would be to place the origin at the center of the Earth and consider the forces on the non-pointlike Earth due to the moon. Consider a piece of the Earth at position $r_1$ and the moon at position $r_2$, as shown in Figure 5.5. The tidal potential $\Phi$ is proportional to
\[ \Phi \propto \frac{1}{|\mathbf{r}_2 - \mathbf{r}_1|} = \frac{1}{\sqrt{(\mathbf{r}_2 - \mathbf{r}_1)\cdot(\mathbf{r}_2 - \mathbf{r}_1)}} = \frac{1}{\sqrt{r_1^2 - 2r_1 r_2\cos\theta + r_2^2}}, \]
where $\theta$ is the angle between $\mathbf{r}_1$ and $\mathbf{r}_2$.

Typically, one of the position vectors is much larger than the other. Let's assume that $r_1 \ll r_2$. Then, one can write
\[ \Phi \propto \frac{1}{\sqrt{r_1^2 - 2r_1 r_2\cos\theta + r_2^2}} = \frac{1}{r_2}\,\frac{1}{\sqrt{1 - 2\frac{r_1}{r_2}\cos\theta + \left(\frac{r_1}{r_2}\right)^2}}. \]


Now, define $x = \cos\theta$ and $t = \frac{r_1}{r_2}$. We then have that the tidal potential is proportional to the generating function for the Legendre polynomials! So, we can write the tidal potential as
\[ \Phi \propto \frac{1}{r_2}\sum_{n=0}^{\infty} P_n(\cos\theta)\left(\frac{r_1}{r_2}\right)^n. \]
The first term in the expansion, $\frac{1}{r_2}$, is the gravitational potential that gives the usual force between the Earth and the moon. [Recall that the gravitational potential for mass $m$ at distance $r$ from $M$ is given by $\Phi = -\frac{GMm}{r}$ and that the force is the gradient of the potential, $F = -\nabla\Phi \propto \nabla\!\left(\frac{1}{r}\right)$.] The next terms will give expressions for the tidal effects.

Now that we have some idea as to where this generating function might have originated, we can proceed to use it. First of all, the generating function can be used to obtain special values of the Legendre polynomials.

Example 5.5. Evaluate $P_n(0)$ using the generating function. $P_n(0)$ is found by considering $g(0, t)$. Setting $x = 0$ in Equation (5.28), we have
\[ g(0, t) = \frac{1}{\sqrt{1 + t^2}} = \sum_{n=0}^{\infty} P_n(0)t^n = P_0(0) + P_1(0)t + P_2(0)t^2 + P_3(0)t^3 + \ldots. \quad (5.29) \]
We can use the binomial expansion to find the final answer. Namely, we have
\[ \frac{1}{\sqrt{1 + t^2}} = 1 - \frac{1}{2}t^2 + \frac{3}{8}t^4 + \ldots. \]
Comparing these expansions, we have that $P_n(0) = 0$ for $n$ odd, and for even integers one can show (see Problem 12) that
\[ P_{2n}(0) = (-1)^n \frac{(2n-1)!!}{(2n)!!}, \quad (5.30) \]
where $n!!$ is the double factorial,
\[ n!! = \begin{cases} n(n-2)\cdots(3)\,1, & n > 0, \text{ odd}, \\ n(n-2)\cdots(4)\,2, & n > 0, \text{ even}, \\ 1, & n = 0, -1. \end{cases} \]
[This example can be finished by first proving that $(2n)!! = 2^n n!$ and $(2n-1)!! = \frac{(2n)!}{(2n)!!} = \frac{(2n)!}{2^n n!}$.]

Example 5.6. Evaluate $P_n(-1)$. This is a simpler problem. In this case we have
\[ g(-1, t) = \frac{1}{\sqrt{1 + 2t + t^2}} = \frac{1}{1 + t} = 1 - t + t^2 - t^3 + \ldots. \]
Therefore, $P_n(-1) = (-1)^n$.
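These special values are easy to spot-check numerically. A minimal sketch, assuming SciPy and SymPy are available:

from scipy.special import eval_legendre
from sympy import factorial2, Rational

# P_n(-1) = (-1)^n
for n in range(6):
    print(n, eval_legendre(n, -1.0), (-1)**n)

# P_{2n}(0) = (-1)^n (2n-1)!!/(2n)!!
for n in range(4):
    exact = Rational((-1)**n) * factorial2(2*n - 1) / factorial2(2*n)
    print(2*n, eval_legendre(2*n, 0.0), float(exact))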

Example 5.7. Prove the three term recursion formula,
\[ (k+1)P_{k+1}(x) - (2k+1)xP_k(x) + kP_{k-1}(x) = 0, \quad k = 1, 2, \ldots, \]
using the generating function.

We can also use the generating function to find recurrence relations. To prove the three term recursion (5.22) that we introduced above, we need only differentiate the generating function with respect to $t$ in Equation (5.28) and rearrange the result. First note that
\[ \frac{\partial g}{\partial t} = \frac{x - t}{(1 - 2xt + t^2)^{3/2}} = \frac{x - t}{1 - 2xt + t^2}\, g(x, t). \]
Combining this with
\[ \frac{\partial g}{\partial t} = \sum_{n=0}^{\infty} nP_n(x)t^{n-1}, \]
we have
\[ (x - t)g(x, t) = (1 - 2xt + t^2)\sum_{n=0}^{\infty} nP_n(x)t^{n-1}. \]
Inserting the series expression for $g(x, t)$ and distributing the sum on the right side, we obtain
\[ (x - t)\sum_{n=0}^{\infty} P_n(x)t^n = \sum_{n=0}^{\infty} nP_n(x)t^{n-1} - \sum_{n=0}^{\infty} 2nxP_n(x)t^n + \sum_{n=0}^{\infty} nP_n(x)t^{n+1}. \]
Multiplying out the $x - t$ factor and rearranging leads to three separate sums:
\[ \sum_{n=0}^{\infty} nP_n(x)t^{n-1} - \sum_{n=0}^{\infty} (2n+1)xP_n(x)t^n + \sum_{n=0}^{\infty} (n+1)P_n(x)t^{n+1} = 0. \quad (5.31) \]

Each term contains powers of $t$ that we would like to combine into a single sum. This is done by reindexing. For the first sum, we could use the new index $k = n - 1$. Then, the first sum can be written
\[ \sum_{n=0}^{\infty} nP_n(x)t^{n-1} = \sum_{k=-1}^{\infty} (k+1)P_{k+1}(x)t^k. \]
Using different indices is just another way of writing out the terms. Note that
\[ \sum_{n=0}^{\infty} nP_n(x)t^{n-1} = 0 + P_1(x) + 2P_2(x)t + 3P_3(x)t^2 + \ldots \]
and
\[ \sum_{k=-1}^{\infty} (k+1)P_{k+1}(x)t^k = 0 + P_1(x) + 2P_2(x)t + 3P_3(x)t^2 + \ldots \]
actually give the same sum. The indices are sometimes referred to as dummy indices because they do not show up in the expanded expression and can be replaced with another letter.

If we want to do so, we could now replace all of the $k$'s with $n$'s. However, we will leave the $k$'s in the first term and now reindex the next sums in Equation (5.31). The second sum just needs the replacement $n = k$, and the last sum we reindex using $k = n + 1$. Therefore, Equation (5.31) becomes
\[ \sum_{k=-1}^{\infty} (k+1)P_{k+1}(x)t^k - \sum_{k=0}^{\infty} (2k+1)xP_k(x)t^k + \sum_{k=1}^{\infty} kP_{k-1}(x)t^k = 0. \quad (5.32) \]


We can now combine all of the terms, noting the $k = -1$ term is automatically zero and the $k = 0$ terms give
\[ P_1(x) - xP_0(x) = 0. \quad (5.33) \]
Of course, we know this already. So, that leaves the $k > 0$ terms:
\[ \sum_{k=1}^{\infty} \left[ (k+1)P_{k+1}(x) - (2k+1)xP_k(x) + kP_{k-1}(x) \right] t^k = 0. \quad (5.34) \]
Since this is true for all $t$, the coefficients of the $t^k$'s are zero, or
\[ (k+1)P_{k+1}(x) - (2k+1)xP_k(x) + kP_{k-1}(x) = 0, \quad k = 1, 2, \ldots. \]
While this is the standard form for the three term recurrence relation, the earlier form is obtained by setting $k = n - 1$.

There are other recursion relations, which we list in the box below. Equation (5.35) was derived using the generating function. Differentiating it with respect to $x$, we find Equation (5.36). Equation (5.37) can be proven using the generating function by differentiating $g(x, t)$ with respect to $x$ and rearranging the resulting infinite series just as in this last manipulation. This will be left as Problem 4. Combining this result with Equation (5.35), we can derive Equations (5.38)-(5.39). Adding and subtracting these equations yields Equations (5.40)-(5.41).

Recursion Formulae for Legendre Polynomials, for $n = 1, 2, \ldots$:
\[ (n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x) \quad (5.35) \]
\[ (n+1)P'_{n+1}(x) = (2n+1)[P_n(x) + xP'_n(x)] - nP'_{n-1}(x) \quad (5.36) \]
\[ P_n(x) = P'_{n+1}(x) - 2xP'_n(x) + P'_{n-1}(x) \quad (5.37) \]
\[ P'_{n-1}(x) = xP'_n(x) - nP_n(x) \quad (5.38) \]
\[ P'_{n+1}(x) = xP'_n(x) + (n+1)P_n(x) \quad (5.39) \]
\[ P'_{n+1}(x) + P'_{n-1}(x) = 2xP'_n(x) + P_n(x) \quad (5.40) \]
\[ P'_{n+1}(x) - P'_{n-1}(x) = (2n+1)P_n(x) \quad (5.41) \]
\[ (x^2 - 1)P'_n(x) = nxP_n(x) - nP_{n-1}(x) \quad (5.42) \]

Finally, Equation (5.42) can be obtained using Equations (5.38) and (5.39). Just multiply Equation (5.38) by $x$,
\[ x^2P'_n(x) - nxP_n(x) = xP'_{n-1}(x). \]
Now use Equation (5.39), but first replace $n$ with $n-1$ to eliminate the $xP'_{n-1}(x)$ term:
\[ x^2P'_n(x) - nxP_n(x) = P'_n(x) - nP_{n-1}(x). \]
Rearranging gives Equation (5.42).


Example 5.8. Use the generating function to prove
\[ \|P_n\|^2 = \int_{-1}^1 P_n^2(x)\, dx = \frac{2}{2n+1}. \]
Another use of the generating function is to obtain the normalization constant. This can be done by first squaring the generating function in order to get the products $P_n(x)P_m(x)$, and then integrating over $x$.

Squaring the generating function has to be done with care, as we need to make proper use of the dummy summation index. So, we first write
\[ \frac{1}{1 - 2xt + t^2} = \left[ \sum_{n=0}^{\infty} P_n(x)t^n \right]^2 = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} P_n(x)P_m(x)t^{n+m}. \quad (5.43) \]
Integrating from $x = -1$ to $x = 1$ and using the orthogonality of the Legendre polynomials, we have
\[ \int_{-1}^1 \frac{dx}{1 - 2xt + t^2} = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} t^{n+m}\int_{-1}^1 P_n(x)P_m(x)\, dx = \sum_{n=0}^{\infty} t^{2n}\int_{-1}^1 P_n^2(x)\, dx. \quad (5.44) \]
However, one can show that [you will need the integral $\int \frac{dx}{a + bx} = \frac{1}{b}\ln(a + bx) + C$]
\[ \int_{-1}^1 \frac{dx}{1 - 2xt + t^2} = \frac{1}{t}\ln\!\left( \frac{1 + t}{1 - t} \right). \]
Expanding this expression about $t = 0$ [using the series expansion $\ln(1 + x) = \sum_{n=1}^{\infty} (-1)^{n+1}\frac{x^n}{n} = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots$], we obtain
\[ \frac{1}{t}\ln\!\left( \frac{1 + t}{1 - t} \right) = \sum_{n=0}^{\infty} \frac{2}{2n+1}\, t^{2n}. \]
Comparing this result with Equation (5.44), we find that
\[ \|P_n\|^2 = \int_{-1}^1 P_n^2(x)\, dx = \frac{2}{2n+1}. \quad (5.45) \]
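A quick numerical check of (5.45), assuming SciPy is available:

from scipy.integrate import quad
from scipy.special import eval_legendre

for n in range(6):
    norm2, _ = quad(lambda x: eval_legendre(n, x)**2, -1, 1)
    print(n, norm2, 2 / (2*n + 1))   # the two columns should agree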

5.3.3 The Differential Equation for Legendre Polynomials

The Legendre polynomials satisfy a second order linear differentialequation. This differential equation occurs naturally in the solution of initial-boundary value problems in three dimensions which possess some sphericalsymmetry. We will see this in the last chapter. There are two approacheswe could take in showing that the Legendre polynomials satisfy a particulardifferential equation. Either we can write down the equations and attemptto solve it, or we could use the above properties to obtain the equation. Fornow, we will seek the differential equation satisfied by Pn(x) using the aboverecursion relations.


We begin by differentiating Equation (5.42) and using Equation (5.38) to simplify:

$\frac{d}{dx}\left((x^2 - 1)P'_n(x)\right) = nP_n(x) + nxP'_n(x) - nP'_{n-1}(x) = nP_n(x) + n^2 P_n(x) = n(n+1)P_n(x).$   (5.46)

Therefore, Legendre polynomials, or Legendre functions of the first kind, are solutions of the differential equation

$(1 - x^2)y'' - 2xy' + n(n+1)y = 0.$

As this is a linear second order differential equation, we expect two linearly independent solutions. [Margin note: A generalization of the Legendre equation is given by $(1 - x^2)y'' - 2xy' + \left[n(n+1) - \frac{m^2}{1-x^2}\right]y = 0$. Solutions to this equation, $P_n^m(x)$ and $Q_n^m(x)$, are called the associated Legendre functions of the first and second kind.] The second solution, called the Legendre function of the second kind, is given by $Q_n(x)$ and is not well behaved at $x = \pm 1$. For example,

$Q_0(x) = \frac{1}{2}\ln\frac{1+x}{1-x}.$

We will not need these for physically interesting examples in this book.

5.3.4 Fourier-Legendre Series

With these properties of Legendre functions we are now prepared to compute the expansion coefficients for the Fourier-Legendre series representation of a given function.

Example 5.9. Expand $f(x) = x^3$ in a Fourier-Legendre series.

We simply need to compute

$c_n = \frac{2n+1}{2}\int_{-1}^{1} x^3 P_n(x)\,dx.$   (5.47)

We first note that

$\int_{-1}^{1} x^m P_n(x)\,dx = 0 \quad \text{for } n > m.$

As a result, we have that $c_n = 0$ for n > 3. We could just compute $\int_{-1}^{1} x^3 P_m(x)\,dx$ for m = 0, 1, 2, 3 outright by looking up Legendre polynomials. But note that $x^3$ is an odd function. So, $c_0 = 0$ and $c_2 = 0$.

This leaves us with only two coefficients to compute. We refer to Table 5.2 and find that

$c_1 = \frac{3}{2}\int_{-1}^{1} x^4\,dx = \frac{3}{5},$

$c_3 = \frac{7}{2}\int_{-1}^{1} x^3\left[\frac{1}{2}(5x^3 - 3x)\right]dx = \frac{2}{5}.$

Thus,

$x^3 = \frac{3}{5}P_1(x) + \frac{2}{5}P_3(x).$


Of course, this is simple to check using Table 5.2:

$\frac{3}{5}P_1(x) + \frac{2}{5}P_3(x) = \frac{3}{5}x + \frac{2}{5}\left[\frac{1}{2}(5x^3 - 3x)\right] = x^3.$

We could have obtained this result without doing any integration. Write $x^3$ as a linear combination of $P_1(x)$ and $P_3(x)$:

$x^3 = c_1 x + \frac{1}{2}c_2(5x^3 - 3x) = \left(c_1 - \frac{3}{2}c_2\right)x + \frac{5}{2}c_2 x^3.$   (5.48)

Equating coefficients of like terms, we have that $c_2 = \frac{2}{5}$ and $c_1 = \frac{3}{2}c_2 = \frac{3}{5}$.
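The coefficient formula (5.47) generalizes to any f on (-1, 1). A numerical sketch (assuming SciPy; not from the text) reproduces the coefficients of this example.

```python
# Fourier-Legendre coefficients c_n = (2n+1)/2 * integral of f(x) P_n(x) on (-1,1).
from scipy.integrate import quad
from scipy.special import eval_legendre

def fourier_legendre_coeffs(f, N):
    """Return c_0, ..., c_N for f on (-1, 1)."""
    return [(2*n + 1) / 2 * quad(lambda x: f(x) * eval_legendre(n, x), -1, 1)[0]
            for n in range(N + 1)]

print(fourier_legendre_coeffs(lambda x: x**3, 4))
# approximately [0, 0.6, 0, 0.4, 0], i.e. x^3 = (3/5) P_1(x) + (2/5) P_3(x)
```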

Example 5.10. Expand the Heaviside function in a Fourier-Legendre series. [Margin note: Oliver Heaviside (1850-1925) was an English mathematician, physicist and engineer who used complex analysis to study circuits and was a co-founder of vector analysis. The Heaviside function is also called the step function.]

The Heaviside function is defined as

$H(x) = \begin{cases} 1, & x > 0, \\ 0, & x < 0. \end{cases}$   (5.49)

In this case, we cannot find the expansion coefficients without some integration. We have to compute

$c_n = \frac{2n+1}{2}\int_{-1}^{1} f(x)P_n(x)\,dx = \frac{2n+1}{2}\int_{0}^{1} P_n(x)\,dx.$   (5.50)

We can make use of identity (5.41),

$P'_{n+1}(x) - P'_{n-1}(x) = (2n+1)P_n(x), \quad n > 0.$   (5.51)

We have for n > 0

$c_n = \frac{1}{2}\int_{0}^{1}[P'_{n+1}(x) - P'_{n-1}(x)]\,dx = \frac{1}{2}[P_{n-1}(0) - P_{n+1}(0)].$

For n = 0, we have

$c_0 = \frac{1}{2}\int_{0}^{1} dx = \frac{1}{2}.$

This leads to the expansion

$f(x) \sim \frac{1}{2} + \frac{1}{2}\sum_{n=1}^{\infty}[P_{n-1}(0) - P_{n+1}(0)]P_n(x).$

We still need to evaluate the Fourier-Legendre coefficients

$c_n = \frac{1}{2}[P_{n-1}(0) - P_{n+1}(0)].$

Since $P_n(0) = 0$ for n odd, the $c_n$'s vanish for n even. Letting n = 2k - 1, we re-index the sum, obtaining

$f(x) \sim \frac{1}{2} + \frac{1}{2}\sum_{k=1}^{\infty}[P_{2k-2}(0) - P_{2k}(0)]P_{2k-1}(x).$


We can compute the nonzero Fourier coefficients, $c_{2k-1} = \frac{1}{2}[P_{2k-2}(0) - P_{2k}(0)]$, using a result from Problem 12:

$P_{2k}(0) = (-1)^k\frac{(2k-1)!!}{(2k)!!}.$   (5.52)

Namely, we have

$c_{2k-1} = \frac{1}{2}[P_{2k-2}(0) - P_{2k}(0)] = \frac{1}{2}\left[(-1)^{k-1}\frac{(2k-3)!!}{(2k-2)!!} - (-1)^k\frac{(2k-1)!!}{(2k)!!}\right] = -\frac{1}{2}(-1)^k\frac{(2k-3)!!}{(2k-2)!!}\left[1 + \frac{2k-1}{2k}\right] = -\frac{1}{2}(-1)^k\frac{(2k-3)!!}{(2k-2)!!}\frac{4k-1}{2k}.$   (5.53)

[Figure 5.6: Sum of the first 21 terms of the Fourier-Legendre series expansion of the Heaviside function.]

Thus, the Fourier-Legendre series expansion for the Heaviside function is given by

$f(x) \sim \frac{1}{2} - \frac{1}{2}\sum_{n=1}^{\infty}(-1)^n\frac{(2n-3)!!}{(2n-2)!!}\frac{4n-1}{2n}P_{2n-1}(x).$   (5.54)

The sum of the first 21 terms of this series is shown in Figure 5.6. We note the slow convergence to the Heaviside function. Also, we see that the Gibbs phenomenon is present due to the jump discontinuity at x = 0. [See Section 3.7.]

5.4 Gamma Function

[Margin note: The name and symbol for the Gamma function were first given by Legendre in 1811. However, the search for a generalization of the factorial extends back to the 1720s, when Euler provided the first representation of the factorial as an infinite product, later to be modified by others like Gauß, Weierstraß, and Legendre.]

A function that often occurs in the study of special functions is the Gamma function. We will need the Gamma function in the next section on Fourier-Bessel series.

For x > 0 we define the Gamma function as

$\Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt, \quad x > 0.$   (5.55)

The Gamma function is a generalization of the factorial function and a plot is shown in Figure 5.7. In fact, we have

$\Gamma(1) = 1$

and

$\Gamma(x+1) = x\Gamma(x).$

The reader can prove this identity by simply performing an integration by parts. (See Problem 7.) In particular, for integers $n \in \mathbb{Z}^+$, we then have

$\Gamma(n+1) = n\Gamma(n) = n(n-1)\Gamma(n-1) = n(n-1)\cdots 2\,\Gamma(1) = n!.$

[Figure 5.7: Plot of the Gamma function.]

We can also define the Gamma function for negative, non-integer values of x. We first note that by iteration on $n \in \mathbb{Z}^+$, we have

$\Gamma(x+n) = (x+n-1)\cdots(x+1)x\,\Gamma(x), \quad x + n > 0.$


Solving for Γ(x), we then find

$\Gamma(x) = \frac{\Gamma(x+n)}{(x+n-1)\cdots(x+1)x}, \quad -n < x < 0.$

Note that the Gamma function is undefined at zero and the negative integers.

Example 5.11. We now prove that

$\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}.$

This is done by direct computation of the integral:

$\Gamma\left(\frac{1}{2}\right) = \int_0^{\infty} t^{-\frac{1}{2}}e^{-t}\,dt.$

Letting $t = z^2$, we have

$\Gamma\left(\frac{1}{2}\right) = 2\int_0^{\infty} e^{-z^2}\,dz.$

Due to the symmetry of the integrand, we obtain the classic integral

$\Gamma\left(\frac{1}{2}\right) = \int_{-\infty}^{\infty} e^{-z^2}\,dz,$

which can be performed using a standard trick. [Margin note: In Example 9.5 we show the more general result: $\int_{-\infty}^{\infty} e^{-\beta y^2}\,dy = \sqrt{\frac{\pi}{\beta}}$.] Consider the integral

$I = \int_{-\infty}^{\infty} e^{-x^2}\,dx.$

Then,

$I^2 = \int_{-\infty}^{\infty} e^{-x^2}\,dx\int_{-\infty}^{\infty} e^{-y^2}\,dy.$

Note that we changed the integration variable. This will allow us to write this product of integrals as a double integral:

$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy.$

This is an integral over the entire xy-plane. We can transform this Cartesian integration to an integration over polar coordinates. The integral becomes

$I^2 = \int_0^{2\pi}\int_0^{\infty} e^{-r^2} r\,dr\,d\theta.$

This is simple to integrate and we have $I^2 = \pi$. So, the final result is found by taking the square root of both sides:

$\Gamma\left(\frac{1}{2}\right) = I = \sqrt{\pi}.$

In Problem 12 the reader will prove the identity

$\Gamma\left(n + \frac{1}{2}\right) = \frac{(2n-1)!!}{2^n}\sqrt{\pi}.$
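These Gamma function results are easy to confirm numerically; the sketch below (assuming SciPy; illustrative only) checks the factorial property, $\Gamma(1/2) = \sqrt{\pi}$, and the half-integer identity from Problem 12b.

```python
# Numerical checks of Gamma(n+1) = n!, Gamma(1/2) = sqrt(pi),
# and Gamma(n + 1/2) = (2n-1)!! sqrt(pi) / 2^n.
import math
from scipy.special import gamma, factorial2

for n in range(1, 7):
    assert math.isclose(gamma(n + 1), math.factorial(n))
    assert math.isclose(gamma(n + 0.5),
                        factorial2(2*n - 1) * math.sqrt(math.pi) / 2**n)
print(gamma(0.5), math.sqrt(math.pi))    # both print 1.7724538509055159
```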


Another useful relation, which we only state, is

$\Gamma(x)\Gamma(1-x) = \frac{\pi}{\sin \pi x}.$

There are many other important relations, including infinite products, which we will not need at this point. The reader is encouraged to read about these elsewhere. In the meantime, we move on to the discussion of another important special function in physics and mathematics.

5.5 Fourier-Bessel Series

Bessel functions arise in many problems in physics possessing cylindrical symmetry, such as the vibrations of circular drumheads and the radial modes in optical fibers. They also provide us with another orthogonal set of basis functions.

The first occurrence of Bessel functions (zeroth order) was in the work of Daniel Bernoulli on heavy chains (1738). More general Bessel functions were studied by Leonhard Euler in 1781 and in his study of the vibrating membrane in 1764. Joseph Fourier found them in the study of heat conduction in solid cylinders and Siméon Poisson (1781-1840) in heat conduction of spheres (1823). [Margin note: Bessel functions have a long history and were named after Friedrich Wilhelm Bessel (1784-1846).]

The history of Bessel functions does not just originate in the study of the wave and heat equations. These solutions originally came up in the study of the Kepler problem, describing planetary motion. According to G. N. Watson in his Treatise on Bessel Functions, the formulation and solution of Kepler's Problem was discovered by Joseph-Louis Lagrange (1736-1813) in 1770. Namely, the problem was to express the radial coordinate and what is called the eccentric anomaly, E, as functions of time. Lagrange found expressions for the coefficients in the expansions of r and E in trigonometric functions of time. However, he only computed the first few coefficients. In 1816 Friedrich Wilhelm Bessel (1784-1846) had shown that the coefficients in the expansion for r could be given an integral representation. In 1824 he presented a thorough study of these functions, which are now called Bessel functions.

You might have seen Bessel functions in a course on differential equations as solutions of the differential equation

$x^2 y'' + xy' + (x^2 - p^2)y = 0.$   (5.56)

Solutions to this equation are obtained in the form of series expansions. Namely, one seeks solutions of the form

$y(x) = \sum_{j=0}^{\infty} a_j x^{j+n}$

by determining the form the coefficients must take. We will leave this for a homework exercise and simply report the results.


One solution of the differential equation is the Bessel function of the first kind of order p, given as

$y(x) = J_p(x) = \sum_{n=0}^{\infty}\frac{(-1)^n}{\Gamma(n+1)\Gamma(n+p+1)}\left(\frac{x}{2}\right)^{2n+p}.$   (5.57)

[Figure 5.8: Plots of the Bessel functions $J_0(x)$, $J_1(x)$, $J_2(x)$, and $J_3(x)$.]

In Figure 5.8 we display the first few Bessel functions of the first kind of integer order. Note that these functions can be described as decaying oscillatory functions.

A second linearly independent solution is obtained for p not an integer as $J_{-p}(x)$. However, for p an integer, the $\Gamma(n+p+1)$ factor leads to evaluations of the Gamma function at zero, or negative integers, when p is negative. Thus, the above series is not defined in these cases.

Another method for obtaining a second linearly independent solution is through a linear combination of $J_p(x)$ and $J_{-p}(x)$ as

$N_p(x) = Y_p(x) = \frac{\cos\pi p\, J_p(x) - J_{-p}(x)}{\sin\pi p}.$   (5.58)

These functions are called the Neumann functions, or Bessel functions of the second kind of order p.

In Figure 5.9 we display the first few Bessel functions of the second kind of integer order. Note that these functions are also decaying oscillatory functions. However, they are singular at x = 0.

In many applications one desires bounded solutions at x = 0. These functions do not satisfy this boundary condition. For example, one standard problem that we will study later is to describe the oscillations of a circular drumhead. For this problem one solves the two dimensional wave equation using separation of variables in cylindrical coordinates. The radial equation leads to a Bessel equation. The Bessel function solutions describe the radial part of the solution and one does not expect a singular solution at the center of the drum. The amplitude of the oscillation must remain finite. Thus, only Bessel functions of the first kind can be used.


[Figure 5.9: Plots of the Neumann functions $N_0(x)$, $N_1(x)$, $N_2(x)$, and $N_3(x)$.]
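The series (5.57) converges rapidly, and a truncated version can be compared against a library implementation. A minimal sketch, assuming NumPy/SciPy are available; it is not part of the text.

```python
# Compare the truncated series (5.57) with SciPy's Bessel function J_p.
import numpy as np
from scipy.special import jv, gamma

def J_series(p, x, terms=30):
    """Truncated version of the series (5.57) for J_p(x)."""
    return sum((-1)**n / (gamma(n + 1) * gamma(n + p + 1)) * (x / 2)**(2*n + p)
               for n in range(terms))

x = np.linspace(0.5, 10.0, 5)
print(J_series(1.5, x))
print(jv(1.5, x))     # the truncated series agrees with SciPy to many digits
```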

Bessel functions satisfy a variety of properties, which we will only list at this time for Bessel functions of the first kind. The reader will have the opportunity to prove these for homework.

Derivative Identities. These identities follow directly from the manipulation of the series solution.

$\frac{d}{dx}\left[x^p J_p(x)\right] = x^p J_{p-1}(x).$   (5.59)

$\frac{d}{dx}\left[x^{-p} J_p(x)\right] = -x^{-p} J_{p+1}(x).$   (5.60)

Recursion Formulae. The next identities follow from adding, or subtracting, the derivative identities.

$J_{p-1}(x) + J_{p+1}(x) = \frac{2p}{x}J_p(x).$   (5.61)

$J_{p-1}(x) - J_{p+1}(x) = 2J'_p(x).$   (5.62)

Orthogonality. As we will see in the next chapter, one can recast the Bessel equation into an eigenvalue problem whose solutions form an orthogonal basis of functions on $L^2_x(0, a)$. Using Sturm-Liouville theory, one can show that

$\int_0^a x J_p\left(j_{pn}\frac{x}{a}\right)J_p\left(j_{pm}\frac{x}{a}\right)dx = \frac{a^2}{2}\left[J_{p+1}(j_{pn})\right]^2\delta_{n,m},$   (5.63)

where $j_{pn}$ is the nth root of $J_p(x)$, $J_p(j_{pn}) = 0$, n = 1, 2, . . . . A list of some of these roots is provided in Table 5.3.

Generating Function.

$e^{x(t - \frac{1}{t})/2} = \sum_{n=-\infty}^{\infty} J_n(x)t^n, \quad x > 0, \; t \neq 0.$   (5.64)
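The orthogonality relation (5.63) is also easy to check numerically. The sketch below assumes SciPy (jn_zeros returns the first several zeros of $J_p$) and uses illustrative values of p and a.

```python
# Numerical check of the orthogonality relation (5.63) on (0, a).
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

p, a = 1, 2.0                                # illustrative order and interval
zeros = jn_zeros(p, 4)                       # j_{p1}, ..., j_{p4}
for n in range(4):
    for m in range(4):
        integral, _ = quad(lambda x: x * jv(p, zeros[n]*x/a) * jv(p, zeros[m]*x/a),
                           0, a)
        expected = (a**2 / 2) * jv(p + 1, zeros[n])**2 if n == m else 0.0
        assert abs(integral - expected) < 1e-6
print("Orthogonality (5.63) verified for p = 1.")
```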


Table 5.3: The zeros of Bessel functions, $J_m(j_{mn}) = 0$.

n    m = 0     m = 1     m = 2     m = 3     m = 4     m = 5
1     2.405     3.832     5.136     6.380     7.588     8.771
2     5.520     7.016     8.417     9.761    11.065    12.339
3     8.654    10.173    11.620    13.015    14.373    15.700
4    11.792    13.324    14.796    16.223    17.616    18.980
5    14.931    16.471    17.960    19.409    20.827    22.218
6    18.071    19.616    21.117    22.583    24.019    25.430
7    21.212    22.760    24.270    25.748    27.199    28.627
8    24.352    25.904    27.421    28.908    30.371    31.812
9    27.493    29.047    30.569    32.065    33.537    34.989

Integral Representation.

$J_n(x) = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin\theta - n\theta)\,d\theta, \quad x > 0, \; n \in \mathbb{Z}.$   (5.65)

Fourier-Bessel Series.

Since the Bessel functions are an orthogonal set of functions of a Sturm-Liouville problem, we can expand square integrable functions in this basis. In fact, the Sturm-Liouville problem is given in the form

$x^2 y'' + xy' + (\lambda x^2 - p^2)y = 0, \quad x \in [0, a],$   (5.66)

satisfying the boundary conditions: y(x) is bounded at x = 0 and y(a) = 0. The solutions are then of the form $J_p(\sqrt{\lambda}x)$, as can be shown by making the substitution $t = \sqrt{\lambda}x$ in the differential equation. Namely, we let y(x) = u(t) and note that

$\frac{dy}{dx} = \frac{dt}{dx}\frac{du}{dt} = \sqrt{\lambda}\frac{du}{dt}.$

Then,

$t^2 u'' + tu' + (t^2 - p^2)u = 0,$

which has a solution $u(t) = J_p(t)$. [Margin note: In the study of boundary value problems in differential equations, Sturm-Liouville problems are a bountiful source of basis functions for the space of square integrable functions, as will be seen in the next section.]

Using Sturm-Liouville theory, one can show that $J_p(j_{pn}\frac{x}{a})$ is a basis of eigenfunctions and the resulting Fourier-Bessel series expansion of f(x) defined on $x \in [0, a]$ is

$f(x) = \sum_{n=1}^{\infty} c_n J_p\left(j_{pn}\frac{x}{a}\right),$   (5.67)

where the Fourier-Bessel coefficients are found using the orthogonality relation as

$c_n = \frac{2}{a^2\left[J_{p+1}(j_{pn})\right]^2}\int_0^a x f(x)J_p\left(j_{pn}\frac{x}{a}\right)dx.$   (5.68)

Example 5.12. Expand f(x) = 1 for 0 < x < 1 in a Fourier-Bessel series of the form

$f(x) = \sum_{n=1}^{\infty} c_n J_0(j_{0n}x).$


We need only compute the Fourier-Bessel coefficients in Equation (5.68):

$c_n = \frac{2}{[J_1(j_{0n})]^2}\int_0^1 x J_0(j_{0n}x)\,dx.$   (5.69)

From the identity

$\frac{d}{dx}\left[x^p J_p(x)\right] = x^p J_{p-1}(x),$   (5.70)

we have

$\int_0^1 x J_0(j_{0n}x)\,dx = \frac{1}{j_{0n}^2}\int_0^{j_{0n}} y J_0(y)\,dy = \frac{1}{j_{0n}^2}\int_0^{j_{0n}}\frac{d}{dy}\left[y J_1(y)\right]dy = \frac{1}{j_{0n}^2}\Big[y J_1(y)\Big]_0^{j_{0n}} = \frac{1}{j_{0n}}J_1(j_{0n}).$   (5.71)

[Figure 5.10: Plot of the first 50 terms of the Fourier-Bessel series in Equation (5.72) for f(x) = 1 on 0 < x < 1.]

As a result, the desired Fourier-Bessel expansion is given as

$1 = 2\sum_{n=1}^{\infty}\frac{J_0(j_{0n}x)}{j_{0n}J_1(j_{0n})}, \quad 0 < x < 1.$   (5.72)

In Figure 5.10 we show the partial sum for the first fifty terms of this series. Note once again the slow convergence due to the Gibbs phenomenon.
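The partial sums in Figure 5.10 can be reproduced with a few lines of code; a sketch assuming SciPy, not part of the text.

```python
# Partial sums of the Fourier-Bessel expansion (5.72) of f(x) = 1 on (0, 1).
import numpy as np
from scipy.special import jv, jn_zeros

def partial_sum(x, N=50):
    zeros = jn_zeros(0, N)                    # j_{0,1}, ..., j_{0,N}
    return 2 * sum(jv(0, j * x) / (j * jv(1, j)) for j in zeros)

x = np.linspace(0.05, 0.95, 7)
print(partial_sum(x, 50))    # values hover near 1, with visible ripples
```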


5.6 Appendix: The Least Squares Approximation

In the first section of this chapter we showed that we can expand functions over an infinite set of basis functions as

$f(x) = \sum_{n=1}^{\infty} c_n\phi_n(x)$

and that the generalized Fourier coefficients are given by

$c_n = \frac{\langle \phi_n, f\rangle}{\langle \phi_n, \phi_n\rangle}.$

In this section we turn to a discussion of approximating f(x) by the partial sums $\sum_{n=1}^{N} c_n\phi_n(x)$ and showing that the Fourier coefficients are the best coefficients minimizing the deviation of the partial sum from f(x). This will lead us to a discussion of the convergence of Fourier series.

More specifically, we set the following goal:

Goal: To find the best approximation of f(x) on [a, b] by $S_N(x) = \sum_{n=1}^{N} c_n\phi_n(x)$ for a set of fixed functions $\phi_n(x)$; i.e., to find the expansion coefficients, $c_n$, such that $S_N(x)$ approximates f(x) in the least squares sense.

We want to measure the deviation of the finite sum from the given function. Essentially, we want to look at the error made in the approximation. This is done by introducing the mean square deviation:

$E_N = \int_a^b [f(x) - S_N(x)]^2\rho(x)\,dx,$

where we have introduced the weight function $\rho(x) > 0$. It gives us a sense as to how close the Nth partial sum is to f(x).

We want to minimize this deviation by choosing the right $c_n$'s. We begin by inserting the partial sums and expanding the square in the integrand:

$E_N = \int_a^b [f(x) - S_N(x)]^2\rho(x)\,dx = \int_a^b\left[f(x) - \sum_{n=1}^{N} c_n\phi_n(x)\right]^2\rho(x)\,dx = \int_a^b f^2(x)\rho(x)\,dx - 2\int_a^b f(x)\sum_{n=1}^{N} c_n\phi_n(x)\rho(x)\,dx + \int_a^b\sum_{n=1}^{N} c_n\phi_n(x)\sum_{m=1}^{N} c_m\phi_m(x)\rho(x)\,dx.$   (5.73)

Looking at the three resulting integrals, we see that the first term is just the inner product of f with itself. The other integrations can be rewritten


after interchanging the order of integration and summation. The double sum can be reduced to a single sum using the orthogonality of the $\phi_n$'s. Thus, we have

$E_N = \langle f, f\rangle - 2\sum_{n=1}^{N} c_n\langle f, \phi_n\rangle + \sum_{n=1}^{N}\sum_{m=1}^{N} c_n c_m\langle\phi_n, \phi_m\rangle = \langle f, f\rangle - 2\sum_{n=1}^{N} c_n\langle f, \phi_n\rangle + \sum_{n=1}^{N} c_n^2\langle\phi_n, \phi_n\rangle.$   (5.74)

We are interested in finding the coefficients, so we will complete the square in $c_n$. Focusing on the last two terms, we have

$-2\sum_{n=1}^{N} c_n\langle f, \phi_n\rangle + \sum_{n=1}^{N} c_n^2\langle\phi_n, \phi_n\rangle = \sum_{n=1}^{N}\left[\langle\phi_n, \phi_n\rangle c_n^2 - 2\langle f, \phi_n\rangle c_n\right] = \sum_{n=1}^{N}\langle\phi_n, \phi_n\rangle\left[c_n^2 - \frac{2\langle f, \phi_n\rangle}{\langle\phi_n, \phi_n\rangle}c_n\right] = \sum_{n=1}^{N}\langle\phi_n, \phi_n\rangle\left[\left(c_n - \frac{\langle f, \phi_n\rangle}{\langle\phi_n, \phi_n\rangle}\right)^2 - \left(\frac{\langle f, \phi_n\rangle}{\langle\phi_n, \phi_n\rangle}\right)^2\right].$   (5.75)

To this point we have shown that the mean square deviation is given as

$E_N = \langle f, f\rangle + \sum_{n=1}^{N}\langle\phi_n, \phi_n\rangle\left[\left(c_n - \frac{\langle f, \phi_n\rangle}{\langle\phi_n, \phi_n\rangle}\right)^2 - \left(\frac{\langle f, \phi_n\rangle}{\langle\phi_n, \phi_n\rangle}\right)^2\right].$

So, $E_N$ is minimized by choosing

$c_n = \frac{\langle f, \phi_n\rangle}{\langle\phi_n, \phi_n\rangle}.$

However, these are the Fourier coefficients. This minimization is often referred to as minimization in the least squares sense.

Inserting the Fourier coefficients into the mean square deviation yields

$0 \le E_N = \langle f, f\rangle - \sum_{n=1}^{N} c_n^2\langle\phi_n, \phi_n\rangle.$

Thus, we obtain Bessel's Inequality:

$\langle f, f\rangle \ge \sum_{n=1}^{N} c_n^2\langle\phi_n, \phi_n\rangle.$

For convergence, we next let N get large and see if the partial sums converge to the function. In particular, we say that the infinite series converges in the mean if

$\int_a^b [f(x) - S_N(x)]^2\rho(x)\,dx \to 0 \quad\text{as } N \to \infty.$


Letting N get large in Bessel's inequality shows that the sum $\sum_{n=1}^{N} c_n^2\langle\phi_n, \phi_n\rangle$ converges if

$\langle f, f\rangle = \int_a^b f^2(x)\rho(x)\,dx < \infty.$

The space of all such f is denoted $L^2_\rho(a, b)$, the space of square integrable functions on (a, b) with weight ρ(x).

From the nth term divergence test from calculus we know that if $\sum a_n$ converges, then $a_n \to 0$ as $n \to \infty$. Therefore, in this problem the terms $c_n^2\langle\phi_n, \phi_n\rangle$ approach zero as n gets large. This is only possible if the $c_n$'s go to zero as n gets large. Thus, if $\sum_{n=1}^{N} c_n\phi_n$ converges in the mean to f, then $\int_a^b [f(x) - \sum_{n=1}^{N} c_n\phi_n]^2\rho(x)\,dx$ approaches zero as $N \to \infty$. This implies from the above derivation of Bessel's inequality that

$\langle f, f\rangle - \sum_{n=1}^{N} c_n^2\langle\phi_n, \phi_n\rangle \to 0.$

This leads to Parseval's equality:

$\langle f, f\rangle = \sum_{n=1}^{\infty} c_n^2\langle\phi_n, \phi_n\rangle.$

Parseval's equality holds if and only if

$\lim_{N\to\infty}\int_a^b\left(f(x) - \sum_{n=1}^{N} c_n\phi_n(x)\right)^2\rho(x)\,dx = 0.$

If this is true for every square integrable function in $L^2_\rho(a, b)$, then the set of functions $\{\phi_n(x)\}_{n=1}^{\infty}$ is said to be complete. One can view these functions as an infinite dimensional basis for the space of square integrable functions on (a, b) with weight ρ(x) > 0.

One can extend the above limit, $c_n \to 0$ as $n \to \infty$, by assuming that $\frac{\phi_n(x)}{\|\phi_n\|}$ is uniformly bounded and that $\int_a^b |f(x)|\rho(x)\,dx < \infty$. This is the Riemann-Lebesgue Lemma, but it will not be proven here.
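The minimization result above is easy to see numerically: perturbing any generalized Fourier coefficient increases the mean square deviation. A sketch with the Legendre basis on (-1, 1) and weight ρ = 1, assuming SciPy and a sample function chosen for illustration.

```python
# Illustration: the Fourier coefficients minimize the mean square deviation E_N.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

f = lambda x: np.exp(x)        # sample square integrable function
N = 4

def E(coeffs):
    S = lambda x: sum(c * eval_legendre(n, x) for n, c in enumerate(coeffs))
    return quad(lambda x: (f(x) - S(x))**2, -1, 1)[0]

# generalized Fourier coefficients c_n = <f, phi_n> / <phi_n, phi_n>
c = [(2*n + 1) / 2 * quad(lambda x: f(x) * eval_legendre(n, x), -1, 1)[0]
     for n in range(N + 1)]

perturbed = list(c); perturbed[2] += 0.1
print(E(c), E(perturbed))      # E_N increases when a coefficient is changed
```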

Problems

1. Consider the set of vectors (-1, 1, 1), (1, -1, 1), (1, 1, -1).

a. Use the Gram-Schmidt process to find an orthonormal basis for $\mathbb{R}^3$ using this set in the given order.

b. What do you get if you reverse the order of these vectors?

2. Use the Gram-Schmidt process to find the first four orthogonal polynomials satisfying the following:

a. Interval: $(-\infty, \infty)$. Weight function: $e^{-x^2}$.

b. Interval: $(0, \infty)$. Weight function: $e^{-x}$.


3. Find $P_4(x)$ using

a. The Rodrigues Formula in Equation (5.20).

b. The three term recursion formula in Equation (5.22).

4. In Equations (5.35)-(5.42) we provide several identities for Legendre polynomials. Derive the results in Equations (5.36)-(5.42) as described in the text. Namely,

a. Differentiating Equation (5.35) with respect to x, derive Equation (5.36).

b. Derive Equation (5.37) by differentiating g(x, t) with respect to x and rearranging the resulting infinite series.

c. Combining the last result with Equation (5.35), derive Equations (5.38)-(5.39).

d. Adding and subtracting Equations (5.38)-(5.39), obtain Equations (5.40)-(5.41).

e. Derive Equation (5.42) using some of the other identities.

5. Use the recursion relation (5.22) to evaluate $\int_{-1}^1 x P_n(x)P_m(x)\,dx$, $n \le m$.

6. Expand the following in a Fourier-Legendre series for $x \in (-1, 1)$.

a. $f(x) = x^2$.

b. $f(x) = 5x^4 + 2x^3 - x + 3$.

c. $f(x) = \begin{cases} -1, & -1 < x < 0, \\ 1, & 0 < x < 1. \end{cases}$

d. $f(x) = \begin{cases} x, & -1 < x < 0, \\ 0, & 0 < x < 1. \end{cases}$

7. Use integration by parts to show $\Gamma(x + 1) = x\Gamma(x)$.

8. Prove the double factorial identities:

$(2n)!! = 2^n n!$

and

$(2n - 1)!! = \frac{(2n)!}{2^n n!}.$

9. Express the following as Gamma functions. Namely, noting the form $\Gamma(x + 1) = \int_0^{\infty} t^x e^{-t}\,dt$ and using an appropriate substitution, each expression can be written in terms of a Gamma function.

a. $\int_0^{\infty} x^{2/3}e^{-x}\,dx$.

b. $\int_0^{\infty} x^5 e^{-x^2}\,dx$.

c. $\int_0^1\left[\ln\left(\frac{1}{x}\right)\right]^n dx$.


10. The coefficients $C^p_k$ in the binomial expansion for $(1 + x)^p$ are given by

$C^p_k = \frac{p(p-1)\cdots(p-k+1)}{k!}.$

a. Write $C^p_k$ in terms of Gamma functions.

b. For p = 1/2, use the properties of Gamma functions to write $C^{1/2}_k$ in terms of factorials.

c. Confirm your answer in part b by deriving the Maclaurin series expansion of $(1 + x)^{1/2}$.

11. The Hermite polynomials, $H_n(x)$, satisfy the following:

i. $\langle H_n, H_m\rangle = \int_{-\infty}^{\infty} e^{-x^2}H_n(x)H_m(x)\,dx = \sqrt{\pi}\,2^n n!\,\delta_{n,m}$.

ii. $H'_n(x) = 2nH_{n-1}(x)$.

iii. $H_{n+1}(x) = 2xH_n(x) - 2nH_{n-1}(x)$.

iv. $H_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n}\left(e^{-x^2}\right)$.

Using these, show that

a. $H''_n - 2xH'_n + 2nH_n = 0$. [Use properties ii. and iii.]

b. $\int_{-\infty}^{\infty} x e^{-x^2}H_n(x)H_m(x)\,dx = \sqrt{\pi}\,2^{n-1}n!\left[\delta_{m,n-1} + 2(n+1)\delta_{m,n+1}\right]$. [Use properties i. and iii.]

c. $H_n(0) = \begin{cases} 0, & n \text{ odd}, \\ (-1)^m\frac{(2m)!}{m!}, & n = 2m. \end{cases}$ [Let x = 0 in iii. and iterate. Note from iv. that $H_0(x) = 1$ and $H_1(x) = 2x$.]

12. In Maple one can type simplify(LegendreP(2*n-2,0)-LegendreP(2*n,0)); to find a value for $P_{2n-2}(0) - P_{2n}(0)$. It gives the result in terms of Gamma functions. However, in Example 5.10 for Fourier-Legendre series, the value is given in terms of double factorials! So, we have

$P_{2n-2}(0) - P_{2n}(0) = \frac{\sqrt{\pi}(4n-1)}{2\Gamma(n+1)\Gamma\left(\frac{3}{2} - n\right)} = (-1)^{n-1}\frac{(2n-3)!!}{(2n-2)!!}\frac{4n-1}{2n}.$

You will verify that both results are the same by doing the following:

a. Prove that $P_{2n}(0) = (-1)^n\frac{(2n-1)!!}{(2n)!!}$ using the generating function and a binomial expansion.

b. Prove that $\Gamma\left(n + \frac{1}{2}\right) = \frac{(2n-1)!!}{2^n}\sqrt{\pi}$ using $\Gamma(x) = (x-1)\Gamma(x-1)$ and iteration.

c. Verify the result from Maple that $P_{2n-2}(0) - P_{2n}(0) = \frac{\sqrt{\pi}(4n-1)}{2\Gamma(n+1)\Gamma\left(\frac{3}{2} - n\right)}$.

d. Can either expression for $P_{2n-2}(0) - P_{2n}(0)$ be simplified further?

13. A solution of Bessel's equation, $x^2 y'' + xy' + (x^2 - n^2)y = 0$, can be found using the guess $y(x) = \sum_{j=0}^{\infty} a_j x^{j+n}$. One obtains the recurrence relation $a_j = \frac{-1}{j(2n+j)}a_{j-2}$. Show that for $a_0 = (n!2^n)^{-1}$ we get the Bessel function of the first kind of order n from the even values j = 2k:

$J_n(x) = \sum_{k=0}^{\infty}\frac{(-1)^k}{k!(n+k)!}\left(\frac{x}{2}\right)^{n+2k}.$


14. Use the infinite series in the last problem to derive the derivative identities (5.70) and (5.60):

a. $\frac{d}{dx}[x^n J_n(x)] = x^n J_{n-1}(x)$.

b. $\frac{d}{dx}[x^{-n}J_n(x)] = -x^{-n}J_{n+1}(x)$.

15. Prove the following identities based on those in the last problem.

a. $J_{p-1}(x) + J_{p+1}(x) = \frac{2p}{x}J_p(x)$.

b. $J_{p-1}(x) - J_{p+1}(x) = 2J'_p(x)$.

16. Use the derivative identities of Bessel functions, (5.70)-(5.60), and integration by parts to show that

$\int x^3 J_0(x)\,dx = x^3 J_1(x) - 2x^2 J_2(x) + C.$

17. Use the generating function to find $J_n(0)$ and $J'_n(0)$.

18. Bessel functions $J_p(\lambda x)$ are solutions of $x^2 y'' + xy' + (\lambda^2 x^2 - p^2)y = 0$. Assume that $x \in (0, 1)$ and that $J_p(\lambda) = 0$ and $J_p(0)$ is finite.

a. Show that this equation can be written in the form

$\frac{d}{dx}\left(x\frac{dy}{dx}\right) + \left(\lambda^2 x - \frac{p^2}{x}\right)y = 0.$

This is the standard Sturm-Liouville form for Bessel's equation.

b. Prove that

$\int_0^1 x J_p(\lambda x)J_p(\mu x)\,dx = 0, \quad \lambda \neq \mu,$

by considering

$\int_0^1\left[J_p(\mu x)\frac{d}{dx}\left(x\frac{d}{dx}J_p(\lambda x)\right) - J_p(\lambda x)\frac{d}{dx}\left(x\frac{d}{dx}J_p(\mu x)\right)\right]dx.$

Thus, the solutions corresponding to different eigenvalues (λ, µ) are orthogonal.

c. Prove that

$\int_0^1 x\left[J_p(\lambda x)\right]^2 dx = \frac{1}{2}J^2_{p+1}(\lambda) = \frac{1}{2}J'^2_p(\lambda).$

19. We can rewrite Bessel functions, $J_\nu(x)$, in a form which will allow the order to be non-integer by using the Gamma function. You will need the results from Problem 12b for $\Gamma\left(k + \frac{1}{2}\right)$.

a. Extend the series definition of the Bessel function of the first kind of order ν, $J_\nu(x)$, for ν ≥ 0 by writing the series solution for y(x) in Problem 13 using the Gamma function.

b. Extend the series to $J_{-\nu}(x)$, for ν ≥ 0. Discuss the resulting series and what happens when ν is a positive integer.


c. Use these results to obtain the closed form expressions

$J_{1/2}(x) = \sqrt{\frac{2}{\pi x}}\sin x,$

$J_{-1/2}(x) = \sqrt{\frac{2}{\pi x}}\cos x.$

d. Use the results in part c with the recursion formula for Bessel functions to obtain a closed form for $J_{3/2}(x)$.

20. In this problem you will derive the expansion

$x^2 = \frac{c^2}{2} + 4\sum_{j=2}^{\infty}\frac{J_0(\alpha_j x)}{\alpha_j^2 J_0(\alpha_j c)}, \quad 0 < x < c,$

where the $\alpha_j$'s are the positive roots of $J_1(\alpha c) = 0$, by following the steps below.

a. List the first five values of α for $J_1(\alpha c) = 0$ using Table 5.3 and Figure 5.8. [Note: Be careful determining $\alpha_1$.]

b. Show that $\|J_0(\alpha_1 x)\|^2 = \frac{c^2}{2}$. Recall,

$\|J_0(\alpha_j x)\|^2 = \int_0^c x J_0^2(\alpha_j x)\,dx.$

c. Show that $\|J_0(\alpha_j x)\|^2 = \frac{c^2}{2}\left[J_0(\alpha_j c)\right]^2$, j = 2, 3, . . . . (This is the most involved step.) First note from Problem 18 that $y(x) = J_0(\alpha_j x)$ is a solution of

$x^2 y'' + xy' + \alpha_j^2 x^2 y = 0.$

i. Verify the Sturm-Liouville form of this differential equation: $(xy')' = -\alpha_j^2 xy$.

ii. Multiply the equation in part i. by y(x) and integrate from x = 0 to x = c to obtain

$\int_0^c (xy')'y\,dx = -\alpha_j^2\int_0^c xy^2\,dx = -\alpha_j^2\int_0^c x J_0^2(\alpha_j x)\,dx.$   (5.76)

iii. Noting that $y(x) = J_0(\alpha_j x)$, integrate the left hand side by parts and use the following to simplify the resulting equation.
1. $J'_0(x) = -J_1(x)$ from Equation (5.60).
2. Equation (5.63).
3. $J_2(\alpha_j c) + J_0(\alpha_j c) = 0$ from Equation (5.61).

iv. Now you should have enough information to complete this part.

d. Use the results from parts b and c and Problem 16 to derive the expansion coefficients for

$x^2 = \sum_{j=1}^{\infty} c_j J_0(\alpha_j x)$

in order to obtain the desired expansion.

6 Problems in Higher Dimensions

“Equations of such complexity as are the equations of the gravitational field can be found only through the discovery of a logically simple mathematical condition that determines the equations completely or at least almost completely.”

“What I have to say about this book can be found inside this book.” Albert Einstein (1879-1955)

In this chapter we will explore several examples of the solution of initial-boundary value problems involving higher spatial dimensions. These are described by higher dimensional partial differential equations, such as the ones presented in Table 2.1 in Chapter 2. The spatial domains of the problems span many different geometries, which will necessitate the use of rectangular, polar, cylindrical, or spherical coordinates.

We will solve many of these problems using the method of separation of variables, which we first saw in Chapter 2. Using separation of variables will result in a system of ordinary differential equations for each problem. Adding the boundary conditions, we will need to solve a variety of eigenvalue problems. The product solutions that result will involve trigonometric or some of the special functions that we had encountered in Chapter 5. These methods are used in solving the hydrogen atom and other problems in quantum mechanics and in electrostatic problems in electrodynamics. We will bring to this discussion many of the tools from earlier in this book, showing how much of what we have seen can be used to solve some generic partial differential equations which describe oscillation and diffusion type problems.

As we proceed through the examples in this chapter, we will see some common features. For example, the two key equations that we have studied are the heat equation and the wave equation. For higher dimensional problems these take the form

$u_t = k\nabla^2 u,$   (6.1)

$u_{tt} = c^2\nabla^2 u.$   (6.2)

We can separate out the time dependence in each equation. Inserting a guess of $u(\mathbf{r}, t) = \phi(\mathbf{r})T(t)$ into the heat and wave equations, we obtain

$T'\phi = kT\nabla^2\phi,$   (6.3)


$T''\phi = c^2 T\nabla^2\phi.$   (6.4)

Dividing each equation by $\phi(\mathbf{r})T(t)$, we can separate the time and space dependence just as we had in Chapter ??. In each case we find that a function of time equals a function of the spatial variables. Thus, these functions must be constant functions. We set these equal to the constant -λ and find the respective equations

$\frac{1}{k}\frac{T'}{T} = \frac{\nabla^2\phi}{\phi} = -\lambda,$   (6.5)

$\frac{1}{c^2}\frac{T''}{T} = \frac{\nabla^2\phi}{\phi} = -\lambda.$   (6.6)

The sign of λ is chosen because we expect decaying solutions in time for the heat equation and oscillations in time for the wave equation, and so we will pick λ > 0.

The respective equations for the temporal functions T(t) are given by

$T' = -\lambda kT,$   (6.7)

$T'' + c^2\lambda T = 0.$   (6.8)

These are easily solved, as we had seen in Chapter ??. We have

$T(t) = T(0)e^{-\lambda kt},$   (6.9)

$T(t) = a\cos\omega t + b\sin\omega t, \quad \omega = c\sqrt{\lambda},$   (6.10)

where T(0), a, and b are integration constants and ω is the angular frequency of vibration.

In both cases the spatial equation is of the same form,

$\nabla^2\phi + \lambda\phi = 0.$   (6.11)

This equation is called the Helmholtz equation. [Margin note: The Helmholtz equation is named after Hermann Ludwig Ferdinand von Helmholtz (1821-1894). He was both a physician and a physicist and made significant contributions in physiology, optics, acoustics, and electromagnetism.] For one dimensional problems, which we have already solved, the Helmholtz equation takes the form $\phi'' + \lambda\phi = 0$. We had to impose the boundary conditions and found that there were a discrete set of eigenvalues, $\lambda_n$, and associated eigenfunctions, $\phi_n$.

In higher dimensional problems we need to further separate out the spatial dependence. We will again use the boundary conditions to find the eigenvalues, λ, and eigenfunctions, $\phi(\mathbf{r})$, for the Helmholtz equation, though the eigenfunctions will be labeled with more than one index. The resulting boundary value problems are often second order ordinary differential equations, which can be set up as Sturm-Liouville problems. We know from Chapter 5 that such problems possess an orthogonal set of eigenfunctions. These can then be used to construct a general solution from the product solutions which may involve elementary, or special, functions, such as Legendre polynomials and Bessel functions.

We will begin our study of higher dimensional problems by considering the vibrations of two dimensional membranes. First we will solve the


problem of a vibrating rectangular membrane and then we will turn our attention to a vibrating circular membrane. The rest of the chapter will be devoted to the study of other two and three dimensional problems possessing cylindrical or spherical symmetry.

6.1 Vibrations of Rectangular Membranes

Our first example will be the study of the vibrations of a rectangular membrane. You can think of this as a drumhead with a rectangular cross section as shown in Figure 6.1. We stretch the membrane over the drumhead and fasten the material to the boundary of the rectangle. The height of the vibrating membrane is described by its height from equilibrium, u(x, y, t). This problem is a much simpler example of higher dimensional vibrations than that possessed by the oscillating electric and magnetic fields in the last chapter.

[Figure 6.1: The rectangular membrane of length L and width H. There are fixed boundary conditions along the edges.]

Example 6.1. The vibrating rectangular membrane.

The problem is given by the two dimensional wave equation in Cartesian coordinates,

$u_{tt} = c^2(u_{xx} + u_{yy}), \quad t > 0, \quad 0 < x < L, \quad 0 < y < H,$   (6.12)

a set of boundary conditions,

$u(0, y, t) = 0, \quad u(L, y, t) = 0, \quad t > 0, \quad 0 < y < H,$
$u(x, 0, t) = 0, \quad u(x, H, t) = 0, \quad t > 0, \quad 0 < x < L,$   (6.13)

and a pair of initial conditions (since the equation is second order in time),

$u(x, y, 0) = f(x, y), \quad u_t(x, y, 0) = g(x, y).$   (6.14)

The first step is to separate the variables: $u(x, y, t) = X(x)Y(y)T(t)$. Inserting the guess, u(x, y, t), into the wave equation, we have

$X(x)Y(y)T''(t) = c^2\left(X''(x)Y(y)T(t) + X(x)Y''(y)T(t)\right).$

Dividing by both u(x, y, t) and $c^2$, we obtain

$\underbrace{\frac{1}{c^2}\frac{T''}{T}}_{\text{Function of }t} = \underbrace{\frac{X''}{X} + \frac{Y''}{Y}}_{\text{Function of }x\text{ and }y} = -\lambda.$   (6.15)

We see that we have a function of t equals a function of x and y. Thus, both expressions are constant. We expect oscillations in time, so we choose the constant λ to be positive, λ > 0. (Note: As usual, the primes mean differentiation with respect to the specific dependent variable. So, there should be no ambiguity.)

These lead to two equations:

$T'' + c^2\lambda T = 0,$   (6.16)


and

$\frac{X''}{X} + \frac{Y''}{Y} = -\lambda.$   (6.17)

We note that the spatial equation is just the separated form of Helmholtz's equation with $\phi(x, y) = X(x)Y(y)$.

The first equation is easily solved. We have

$T(t) = a\cos\omega t + b\sin\omega t,$   (6.18)

where

$\omega = c\sqrt{\lambda}.$   (6.19)

This is the angular frequency in terms of the separation constant, or eigenvalue. It leads to the frequency of oscillations for the various harmonics of the vibrating membrane as

$\nu = \frac{\omega}{2\pi} = \frac{c}{2\pi}\sqrt{\lambda}.$   (6.20)

Once we know λ, we can compute these frequencies.

Next we solve the spatial equation. We need to carry out another separation of variables. Rearranging the spatial equation, we have

$\underbrace{\frac{X''}{X}}_{\text{Function of }x} = \underbrace{-\frac{Y''}{Y} - \lambda}_{\text{Function of }y} = -\mu.$   (6.21)

Here we have a function of x equal to a function of y. So, the two expressions are constant, which we indicate with a second separation constant, -µ < 0. We pick the sign in this way because we expect oscillatory solutions for X(x). This leads to two equations:

$X'' + \mu X = 0,$
$Y'' + (\lambda - \mu)Y = 0.$   (6.22)

We now impose the boundary conditions. We have u(0, y, t) = 0 for all t > 0 and 0 < y < H. This implies that X(0)Y(y)T(t) = 0 for all t and y in the domain. This is only true if X(0) = 0. Similarly, from the other boundary conditions we find that X(L) = 0, Y(0) = 0, and Y(H) = 0. We note that homogeneous boundary conditions are important in carrying out this process. Nonhomogeneous boundary conditions could be imposed just like we had in Section 7.3, but we still need the solutions for homogeneous boundary conditions before tackling the more general problems.

In summary, the boundary value problems we need to solve are:

$X'' + \mu X = 0, \quad X(0) = 0, \quad X(L) = 0,$
$Y'' + (\lambda - \mu)Y = 0, \quad Y(0) = 0, \quad Y(H) = 0.$   (6.23)

We have seen boundary value problems of these forms in Chapter ??. The solutions of the first eigenvalue problem are

$X_n(x) = \sin\frac{n\pi x}{L}, \quad \mu_n = \left(\frac{n\pi}{L}\right)^2, \quad n = 1, 2, 3, \ldots.$


The second eigenvalue problem is solved in the same manner. The differences from the first problem are that the "eigenvalue" is λ - µ, the independent variable is y, and the interval is [0, H]. Thus, we can quickly write down the solutions as

$Y_m(y) = \sin\frac{m\pi y}{H}, \quad \lambda - \mu_m = \left(\frac{m\pi}{H}\right)^2, \quad m = 1, 2, 3, \ldots.$

At this point we need to be careful about the indexing of the separation constants. So far, we have seen that µ depends on n and that the quantity κ = λ - µ depends on m. Solving for λ, we should write $\lambda_{nm} = \mu_n + \kappa_m$, or

$\lambda_{nm} = \left(\frac{n\pi}{L}\right)^2 + \left(\frac{m\pi}{H}\right)^2, \quad n, m = 1, 2, \ldots.$   (6.24)

Since $\omega = c\sqrt{\lambda}$, we have that the discrete frequencies of the harmonics are given by

$\omega_{nm} = c\sqrt{\left(\frac{n\pi}{L}\right)^2 + \left(\frac{m\pi}{H}\right)^2}, \quad n, m = 1, 2, \ldots.$   (6.25)

[Margin note: The harmonics for the vibrating rectangular membrane are given by $\nu_{nm} = \frac{c}{2}\sqrt{\left(\frac{n}{L}\right)^2 + \left(\frac{m}{H}\right)^2}$, for n, m = 1, 2, . . . .]

We have successfully carried out the separation of variables for the wave equation for the vibrating rectangular membrane. The product solutions can be written as

$u_{nm} = (a\cos\omega_{nm}t + b\sin\omega_{nm}t)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}$   (6.26)

and the most general solution is written as a linear combination of the product solutions,

$u(x, y, t) = \sum_{n,m}(a_{nm}\cos\omega_{nm}t + b_{nm}\sin\omega_{nm}t)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}.$

However, before we carry the general solution any further, we will first concentrate on the two dimensional harmonics of this membrane.
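The frequencies (6.25) are simple to tabulate. A short sketch with illustrative values of c, L, and H (these numbers are not taken from the text):

```python
# Angular frequencies of the rectangular-membrane harmonics, Eq. (6.25).
import numpy as np

c, L, H = 1.0, 1.0, 2.0   # hypothetical wave speed and membrane dimensions

def omega(n, m):
    return c * np.sqrt((n * np.pi / L)**2 + (m * np.pi / H)**2)

for n in range(1, 4):
    for m in range(1, 4):
        print(f"omega_{n}{m} = {omega(n, m):.4f}")
# Unlike the vibrating string, these frequencies are not integer multiples
# of the fundamental omega_11.
```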

[Figure 6.2: The first harmonics of the vibrating string: $X_1(x) = \sin\frac{\pi x}{L}$, $X_2(x) = \sin\frac{2\pi x}{L}$, $X_3(x) = \sin\frac{3\pi x}{L}$.]

For the vibrating string the nth harmonic corresponds to the function $\sin\frac{n\pi x}{L}$, and several are shown in Figure 6.2. The various harmonics correspond to the pure tones supported by the string. These then lead to the corresponding frequencies that one would hear. The actual shapes of the harmonics are sketched by locating the nodes, or places on the string that do not move.

In the same way, we can explore the shapes of the harmonics of the vibrating membrane. These are given by the spatial functions

$\phi_{nm}(x, y) = \sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}.$   (6.27)

Instead of nodes, we will look for the nodal curves, or nodal lines. These are the points (x, y) at which $\phi_{nm}(x, y) = 0$. Of course, these depend on the indices, n and m.

For example, when n = 1 and m = 1, we have

$\sin\frac{\pi x}{L}\sin\frac{\pi y}{H} = 0.$


[Figure 6.3: The first few modes of the vibrating rectangular membrane, n, m = 1, 2, 3. The dashed lines show the nodal lines indicating the points that do not move for the particular mode. Compare these nodal lines to the 3D view in Figure 6.1.]

These are zero when either

$\sin\frac{\pi x}{L} = 0, \quad\text{or}\quad \sin\frac{\pi y}{H} = 0.$

Of course, this can only happen for x = 0, L and y = 0, H. Thus, there are no interior nodal lines.

When n = 2 and m = 1, we have y = 0, H and

$\sin\frac{2\pi x}{L} = 0,$

or, $x = 0, \frac{L}{2}, L$. Thus, there is one interior nodal line at $x = \frac{L}{2}$. These points stay fixed during the oscillation and all other points oscillate on either side of this line. A similar solution shape results for the (1,2)-mode; i.e., n = 1 and m = 2.

In Figure 6.3 we show the nodal lines for several modes for n, m = 1, 2, 3, with different columns corresponding to different n-values while the rows are labeled with different m-values. The blocked regions appear to vibrate independently. A better view is the three dimensional view depicted in Figure 6.1. The frequencies of vibration are easily computed using the formula for $\omega_{nm}$.

For completeness, we now return to the general solution and apply the initial conditions. The general solution is given by a linear superposition of the product solutions. There are two indices to sum over. Thus, the general solution is

$u(x, y, t) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}(a_{nm}\cos\omega_{nm}t + b_{nm}\sin\omega_{nm}t)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H},$   (6.28)


[Table 6.1: A three dimensional view of the vibrating rectangular membrane for the lowest modes, n, m = 1, 2, 3. Compare these images with the nodal lines in Figure 6.3.]

where

$\omega_{nm} = c\sqrt{\left(\frac{n\pi}{L}\right)^2 + \left(\frac{m\pi}{H}\right)^2}.$   (6.29)

The first initial condition is u(x, y, 0) = f(x, y). Setting t = 0 in the general solution, we obtain

$f(x, y) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} a_{nm}\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}.$   (6.30)

This is a double Fourier sine series. The goal is to find the unknown coefficients $a_{nm}$.

The coefficients $a_{nm}$ can be found knowing what we already know about Fourier sine series. We can write the initial condition as the single sum

$f(x, y) = \sum_{n=1}^{\infty} A_n(y)\sin\frac{n\pi x}{L},$   (6.31)

where

$A_n(y) = \sum_{m=1}^{\infty} a_{nm}\sin\frac{m\pi y}{H}.$   (6.32)

These are two Fourier sine series. Recalling from Chapter ?? that the coefficients of Fourier sine series can be computed as integrals, we have

$A_n(y) = \frac{2}{L}\int_0^L f(x, y)\sin\frac{n\pi x}{L}\,dx,$

$a_{nm} = \frac{2}{H}\int_0^H A_n(y)\sin\frac{m\pi y}{H}\,dy.$   (6.33)


Inserting the integral for $A_n(y)$ into that for $a_{nm}$, we have an integral representation for the Fourier coefficients in the double Fourier sine series,

$a_{nm} = \frac{4}{LH}\int_0^H\int_0^L f(x, y)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}\,dx\,dy.$   (6.34)

We can carry out the same process for satisfying the second initial condition, $u_t(x, y, 0) = g(x, y)$, for the initial velocity of each point. Inserting the general solution into this initial condition, we obtain

$g(x, y) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} b_{nm}\omega_{nm}\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}.$   (6.35)

Again, we have a double Fourier sine series. But now we can quickly determine the Fourier coefficients using the above expression for $a_{nm}$ to find that

$b_{nm} = \frac{4}{\omega_{nm}LH}\int_0^H\int_0^L g(x, y)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}\,dx\,dy.$   (6.36)

This completes the full solution of the vibrating rectangular membrane problem. Namely, we have obtained the solution

$u(x, y, t) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}(a_{nm}\cos\omega_{nm}t + b_{nm}\sin\omega_{nm}t)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H},$   (6.37)

where

$a_{nm} = \frac{4}{LH}\int_0^H\int_0^L f(x, y)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}\,dx\,dy,$   (6.38)

$b_{nm} = \frac{4}{\omega_{nm}LH}\int_0^H\int_0^L g(x, y)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}\,dx\,dy,$   (6.39)

and the angular frequencies are given by

$\omega_{nm} = c\sqrt{\left(\frac{n\pi}{L}\right)^2 + \left(\frac{m\pi}{H}\right)^2}.$   (6.40)
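In practice the double integrals (6.38)-(6.39) are often evaluated numerically. A sketch assuming SciPy, with an illustrative initial profile f (both the dimensions and f are chosen only for illustration):

```python
# Numerical evaluation of the double Fourier sine coefficients (6.38).
import numpy as np
from scipy.integrate import dblquad

L, H = 1.0, 2.0                                   # hypothetical dimensions
f = lambda x, y: x * (L - x) * y * (H - y)        # sample initial displacement

def a_nm(n, m):
    # dblquad expects the integrand in the form func(y, x)
    integrand = lambda y, x: (f(x, y) * np.sin(n * np.pi * x / L)
                                       * np.sin(m * np.pi * y / H))
    val, _ = dblquad(integrand, 0, L, lambda x: 0, lambda x: H)
    return 4.0 / (L * H) * val

print([round(a_nm(n, 1), 6) for n in range(1, 4)])  # a_21 vanishes by symmetry
```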

6.2 Vibrations of a Kettle Drum

[Figure 6.4: The circular membrane of radius a. A general point on the membrane is given by the distance from the center, r, and the angle, θ. There are fixed boundary conditions along the edge at r = a.]

In this section we consider the vibrations of a circular membrane of radius a, as shown in Figure 6.4. Again we are looking for the harmonics of the vibrating membrane, but with the membrane fixed around the circular boundary given by $x^2 + y^2 = a^2$. However, expressing the boundary condition in Cartesian coordinates is awkward. Namely, we can only write u(x, y, t) = 0 for $x^2 + y^2 = a^2$. It is more natural to use polar coordinates as indicated in Figure 6.4. Let the height of the membrane be given by $u = u(r, \theta, t)$ at time t and position (r, θ). Now the boundary condition is given as $u(a, \theta, t) = 0$ for all t > 0 and $\theta \in [0, 2\pi]$.

Before solving the initial-boundary value problem, we have to cast the full problem in polar coordinates. This means that we need to rewrite the


Laplacian in r and θ. To do so would require that we know how to transform derivatives in x and y into derivatives with respect to r and θ. Using the results from Section ?? on curvilinear coordinates, we know that the Laplacian can be written in polar coordinates. In fact, we could use the results from Problem ?? in Chapter ?? for cylindrical coordinates for functions which are z-independent, f = f(r, θ). Then, we would have

$\nabla^2 f = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial f}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 f}{\partial\theta^2}.$

We can obtain this result using a more direct approach, namely applying the Chain Rule in higher dimensions. First recall the transformations between polar and Cartesian coordinates:

$x = r\cos\theta, \quad y = r\sin\theta$

and

$r = \sqrt{x^2 + y^2}, \quad \tan\theta = \frac{y}{x}.$

Now, consider a function $f = f(x(r,\theta), y(r,\theta)) = g(r,\theta)$. (Technically, once we transform a given function of Cartesian coordinates we obtain a new function g of the polar coordinates. Many texts do not rigorously distinguish between the two functions.) Thinking of $x = x(r,\theta)$ and $y = y(r,\theta)$, we have from the chain rule for functions of two variables:

$\frac{\partial f}{\partial x} = \frac{\partial g}{\partial r}\frac{\partial r}{\partial x} + \frac{\partial g}{\partial\theta}\frac{\partial\theta}{\partial x} = \frac{\partial g}{\partial r}\frac{x}{r} - \frac{\partial g}{\partial\theta}\frac{y}{r^2} = \cos\theta\frac{\partial g}{\partial r} - \frac{\sin\theta}{r}\frac{\partial g}{\partial\theta}.$   (6.41)

Here we have used

$\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2+y^2}} = \frac{x}{r};$

and

$\frac{\partial\theta}{\partial x} = \frac{d}{dx}\left(\tan^{-1}\frac{y}{x}\right) = \frac{-y/x^2}{1 + \left(\frac{y}{x}\right)^2} = -\frac{y}{r^2}.$

Similarly,

$\frac{\partial f}{\partial y} = \frac{\partial g}{\partial r}\frac{\partial r}{\partial y} + \frac{\partial g}{\partial\theta}\frac{\partial\theta}{\partial y} = \frac{\partial g}{\partial r}\frac{y}{r} + \frac{\partial g}{\partial\theta}\frac{x}{r^2} = \sin\theta\frac{\partial g}{\partial r} + \frac{\cos\theta}{r}\frac{\partial g}{\partial\theta}.$   (6.42)

The 2D Laplacian can now be computed as

$\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = \cos\theta\frac{\partial}{\partial r}\left(\frac{\partial f}{\partial x}\right) - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(\frac{\partial f}{\partial x}\right) + \sin\theta\frac{\partial}{\partial r}\left(\frac{\partial f}{\partial y}\right) + \frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\left(\frac{\partial f}{\partial y}\right)$


$= \cos\theta\frac{\partial}{\partial r}\left(\cos\theta\frac{\partial g}{\partial r} - \frac{\sin\theta}{r}\frac{\partial g}{\partial\theta}\right) - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(\cos\theta\frac{\partial g}{\partial r} - \frac{\sin\theta}{r}\frac{\partial g}{\partial\theta}\right) + \sin\theta\frac{\partial}{\partial r}\left(\sin\theta\frac{\partial g}{\partial r} + \frac{\cos\theta}{r}\frac{\partial g}{\partial\theta}\right) + \frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial g}{\partial r} + \frac{\cos\theta}{r}\frac{\partial g}{\partial\theta}\right)$

$= \cos\theta\left(\cos\theta\frac{\partial^2 g}{\partial r^2} + \frac{\sin\theta}{r^2}\frac{\partial g}{\partial\theta} - \frac{\sin\theta}{r}\frac{\partial^2 g}{\partial r\partial\theta}\right) - \frac{\sin\theta}{r}\left(\cos\theta\frac{\partial^2 g}{\partial\theta\partial r} - \frac{\sin\theta}{r}\frac{\partial^2 g}{\partial\theta^2} - \sin\theta\frac{\partial g}{\partial r} - \frac{\cos\theta}{r}\frac{\partial g}{\partial\theta}\right) + \sin\theta\left(\sin\theta\frac{\partial^2 g}{\partial r^2} + \frac{\cos\theta}{r}\frac{\partial^2 g}{\partial r\partial\theta} - \frac{\cos\theta}{r^2}\frac{\partial g}{\partial\theta}\right) + \frac{\cos\theta}{r}\left(\sin\theta\frac{\partial^2 g}{\partial\theta\partial r} + \frac{\cos\theta}{r}\frac{\partial^2 g}{\partial\theta^2} + \cos\theta\frac{\partial g}{\partial r} - \frac{\sin\theta}{r}\frac{\partial g}{\partial\theta}\right)$

$= \frac{\partial^2 g}{\partial r^2} + \frac{1}{r}\frac{\partial g}{\partial r} + \frac{1}{r^2}\frac{\partial^2 g}{\partial\theta^2} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial g}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 g}{\partial\theta^2}.$   (6.43)

The last form often occurs in texts because it is in the form of a Sturm-Liouville operator. Also, it agrees with the result from using the Laplacian written in cylindrical coordinates as given in Problem ?? of Chapter ??.

Now that we have written the Laplacian in polar coordinates, we can pose the problem of a vibrating circular membrane.

Example 6.2. The vibrating circular membrane.

This problem is given by a partial differential equation, [Margin note: Here we state the problem of a vibrating circular membrane. We have chosen -π < θ < π, but could have just as easily used 0 < θ < 2π. The symmetric interval about θ = 0 will make the use of boundary conditions simpler.]

$u_{tt} = c^2\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2}\right],$   (6.44)
$t > 0, \quad 0 < r < a, \quad -\pi < \theta < \pi,$

the boundary condition,

$u(a, \theta, t) = 0, \quad t > 0, \quad -\pi < \theta < \pi,$   (6.45)

and the initial conditions,

$u(r, \theta, 0) = f(r, \theta), \quad 0 < r < a, \quad -\pi < \theta < \pi,$
$u_t(r, \theta, 0) = g(r, \theta), \quad 0 < r < a, \quad -\pi < \theta < \pi.$   (6.46)

Now we are ready to solve this problem using separation of variables. As before, we can separate out the time dependence. Let $u(r, \theta, t) = T(t)\phi(r, \theta)$. As usual, T(t) can be written in terms of sines and cosines. This leads to the Helmholtz equation,

$\nabla^2\phi + \lambda\phi = 0.$


We now separate the Helmholtz equation by letting $\phi(r, \theta) = R(r)\Theta(\theta)$. This gives

$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial R\Theta}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 R\Theta}{\partial\theta^2} + \lambda R\Theta = 0.$   (6.47)

Dividing by u = RΘ, as usual, leads to

$\frac{1}{rR}\frac{d}{dr}\left(r\frac{dR}{dr}\right) + \frac{1}{r^2\Theta}\frac{d^2\Theta}{d\theta^2} + \lambda = 0.$   (6.48)

The last term is a constant. The first term is a function of r. However, the middle term involves both r and θ. This can be remedied by multiplying the equation by $r^2$. Rearranging the resulting equation, we can separate out the θ-dependence from the radial dependence. Letting µ be another separation constant, we have

$\frac{r}{R}\frac{d}{dr}\left(r\frac{dR}{dr}\right) + \lambda r^2 = -\frac{1}{\Theta}\frac{d^2\Theta}{d\theta^2} = \mu.$   (6.49)

This gives us two ordinary differential equations:

$\frac{d^2\Theta}{d\theta^2} + \mu\Theta = 0,$
$r\frac{d}{dr}\left(r\frac{dR}{dr}\right) + (\lambda r^2 - \mu)R = 0.$   (6.50)

Let's consider the first of these equations. It should look familiar by now. For µ > 0, the general solution is

$\Theta(\theta) = a\cos\sqrt{\mu}\,\theta + b\sin\sqrt{\mu}\,\theta.$

The next step typically is to apply the boundary conditions in θ. However, when we look at the given boundary conditions in the problem, we do not see anything involving θ. This is a case for which the boundary conditions that are needed are implied and not stated outright.

We can determine the hidden boundary conditions by making some observations. Let's consider the solution corresponding to the endpoints θ = ±π. We note that at these θ-values we are at the same physical point for any r < a. So, we would expect the solution to have the same value at θ = -π as it has at θ = π. Namely, the solution is continuous at these physical points. Similarly, we expect the slope of the solution to be the same at these points. This can be summarized using the boundary conditions

$\Theta(\pi) = \Theta(-\pi), \quad \Theta'(\pi) = \Theta'(-\pi).$

Such boundary conditions are called periodic boundary conditions.

Let's apply these conditions to the general solution for Θ(θ). First, we set Θ(π) = Θ(-π) and use the symmetries of the sine and cosine functions to obtain

$a\cos\sqrt{\mu}\,\pi + b\sin\sqrt{\mu}\,\pi = a\cos\sqrt{\mu}\,\pi - b\sin\sqrt{\mu}\,\pi.$

This implies that

$\sin\sqrt{\mu}\,\pi = 0.$


This can only be true for $\sqrt{\mu} = m$, for m = 0, 1, 2, 3, . . . . Therefore, the eigenfunctions are given by

$\Theta_m(\theta) = a\cos m\theta + b\sin m\theta, \quad m = 0, 1, 2, 3, \ldots.$

For the other half of the periodic boundary conditions, Θ'(π) = Θ'(-π), we have that

$-am\sin m\pi + bm\cos m\pi = am\sin m\pi + bm\cos m\pi.$

But this gives no new information, since this equation boils down to bm = bm.

To summarize what we know at this point, we have found the general solutions to the temporal and angular equations. The product solutions will have various products of $\{\cos\omega t, \sin\omega t\}$ and $\{\cos m\theta, \sin m\theta\}_{m=0}^{\infty}$. We also know that $\mu = m^2$ and $\omega = c\sqrt{\lambda}$.

We still need to solve the radial equation. Inserting $\mu = m^2$, the radial equation has the form

$r\frac{d}{dr}\left(r\frac{dR}{dr}\right) + (\lambda r^2 - m^2)R = 0.$   (6.51)

Expanding the derivative term, we have

$r^2 R''(r) + rR'(r) + (\lambda r^2 - m^2)R(r) = 0.$   (6.52)

The reader should recognize this differential equation from Equation (5.66). It is a Bessel equation with bounded solutions $R(r) = J_m(\sqrt{\lambda}r)$.

Recall there are two linearly independent solutions of this second order equation: $J_m(\sqrt{\lambda}r)$, the Bessel function of the first kind of order m, and $N_m(\sqrt{\lambda}r)$, the Bessel function of the second kind of order m, or Neumann function. Plots of these functions are shown in Figures 5.8 and 5.9. So, the general solution of the radial equation is

$R(r) = c_1 J_m(\sqrt{\lambda}r) + c_2 N_m(\sqrt{\lambda}r).$

Now we are ready to apply the boundary conditions to the radial factor in the product solutions. Looking at the original problem we find only one condition: u(a, θ, t) = 0 for t > 0 and -π < θ < π. This implies that R(a) = 0. But where is the second condition?

This is another unstated boundary condition. Look again at the plots of the Bessel functions. Notice that the Neumann functions are not well behaved at the origin. Do you expect that the solution will become infinite at the center of the drum? No, the solutions should be finite at the center. So, this observation leads to the second boundary condition. Namely, |R(0)| < ∞. This implies that $c_2 = 0$.

Now we are left with

$R(r) = J_m(\sqrt{\lambda}r).$

We have set $c_1 = 1$ for simplicity. We can apply the vanishing condition at r = a. This gives

$J_m(\sqrt{\lambda}a) = 0.$


Looking again at the plots of $J_m(x)$, we see that there are an infinite number of zeros, but they are not as easy as π! In Table 6.2 we list the nth zeros of $J_m$, which were first seen in Table 5.3.

Table 6.2: The zeros of Bessel functions, $J_m(j_{mn}) = 0$.

n    m = 0     m = 1     m = 2     m = 3     m = 4     m = 5
1     2.405     3.832     5.136     6.380     7.588     8.771
2     5.520     7.016     8.417     9.761    11.065    12.339
3     8.654    10.173    11.620    13.015    14.373    15.700
4    11.792    13.324    14.796    16.223    17.616    18.980
5    14.931    16.471    17.960    19.409    20.827    22.218
6    18.071    19.616    21.117    22.583    24.019    25.430
7    21.212    22.760    24.270    25.748    27.199    28.627
8    24.352    25.904    27.421    28.908    30.371    31.812
9    27.493    29.047    30.569    32.065    33.537    34.989

Let’s denote the nth zero of Jm(x) by jmn. Then, the boundary conditiontells us that √

λa = jmn, m = 0, 1, . . . , n = 1, 2, . . . .

This gives us the eigenvalues as

λmn =

(jmn

a

)2, m = 0, 1, . . . , n = 1, 2, . . . .

Thus, the radial function satisfying the boundary conditions is

Rmn(r) = Jm

(jmn

ar)

.

We are finally ready to write out the product solutions for the vibrating circular membrane. They are given by

u(r, θ, t) = { cos ωmn t, sin ωmn t } { cos mθ, sin mθ } Jm( (jmn/a) r ). (6.53)

Here we have indicated the choices with the braces, leading to four different types of product solutions. Also, the angular frequency depends on the zeros of the Bessel functions,

ωmn = (jmn/a) c, m = 0, 1, . . . , n = 1, 2, . . . .
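The zeros jmn in Table 6.2 and the corresponding frequencies ωmn = c jmn/a are easy to generate numerically. The following is a minimal sketch using SciPy's jn_zeros; the radius a and wave speed c are arbitrary illustrative values.

    # Sketch: tabulate Bessel zeros j_mn and drum frequencies w_mn = c*j_mn/a.
    # The radius a = 1 and wave speed c = 1 are arbitrary illustrative choices.
    import numpy as np
    from scipy.special import jn_zeros

    a, c = 1.0, 1.0
    for m in range(3):                  # angular index m = 0, 1, 2
        zeros = jn_zeros(m, 3)          # first three zeros j_m1, j_m2, j_m3
        freqs = c * zeros / a           # angular frequencies w_mn = c*j_mn/a
        print(f"m = {m}: zeros {np.round(zeros, 3)}, frequencies {np.round(freqs, 3)}")

The printed zeros can be compared directly with the entries of Table 6.2.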

As with the rectangular membrane, we are interested in the shapes of the harmonics. So, we consider the spatial solution (t = 0)

φ(r, θ) = (cos mθ) Jm( (jmn/a) r ).

Including the solutions involving sin mθ will only rotate these modes. The nodal curves are given by φ(r, θ) = 0. This can be satisfied if cos mθ = 0, or Jm( (jmn/a) r ) = 0. The various nodal curves which result are shown in Figure 6.5.


Figure 6.5: The first few modes of the vibrating circular membrane, arranged with m = 0, 1, 2 across the columns and n = 1, 2, 3 down the rows. The dashed lines show the nodal lines indicating the points that do not move for the particular mode. Compare these nodal lines with the three dimensional images in Figure 6.3.

For the angular part, we easily see that the nodal curves are radial lines, θ = const. For m = 0, there are no solutions, since cos mθ = 1 for m = 0. In Figure 6.5 this is seen by the absence of radial lines in the first column.

For m = 1, we have cos θ = 0. This implies that θ = ±π/2. These values give the vertical line shown in the second column in Figure 6.5. For m = 2, cos 2θ = 0 implies that θ = π/4, 3π/4. This results in the two lines shown in the last column of Figure 6.5.

We can also consider the nodal curves defined by the Bessel functions. We seek values of r for which (jmn/a) r is a zero of the Bessel function and lies in the interval [0, a]. Thus, we have

(jmn/a) r = jmj, 1 ≤ j ≤ n,

or

r = (jmj/jmn) a, 1 ≤ j ≤ n.

These give circles of these radii with jmj ≤ jmn, or j ≤ n. For m = 0 and n = 1, there is only one zero and r = a. In fact, for all n = 1 modes there is only one zero, giving r = a. Thus, the first row in Figure 6.5 shows no interior nodal circles.

For a three dimensional view, one can look at Figure 6.3. Imagine that the various regions are oscillating independently and that the points on the nodal curves are not moving.

We should note that the nodal circles are not evenly spaced and that the radii can be computed relatively easily. For the n = 2 modes, we have two circles, r = a and r = (jm1/jm2) a, as shown in the second row of Figure 6.5.


Table 6.3: A three dimensional view of the vibrating circular membrane for the lowest modes, arranged with n = 1, 2, 3 across the columns and m = 0, 1, 2 down the rows. Compare these images with the nodal line plots in Figure 6.5.

For m = 0, the inner circle is at

r = (2.405/5.520) a ≈ 0.4357a.

For m = 1,

r = (3.832/7.016) a ≈ 0.5462a,

and for m = 2,

r = (5.136/8.417) a ≈ 0.6102a.

For n = 3 we obtain circles of radii

r = a, r = (jm1/jm3) a, and r = (jm2/jm3) a.

For m = 0, these are

r = a, (5.520/8.654) a ≈ 0.6379a, (2.405/8.654) a ≈ 0.2779a.

Similarly, for m = 1,

r = a, (3.832/10.173) a ≈ 0.3767a, (7.016/10.173) a ≈ 0.6897a,

and for m = 2,

r = a, (5.136/11.620) a ≈ 0.4420a, (8.417/11.620) a ≈ 0.7244a.


Example 6.3. Vibrating Annulus

More complicated vibrations can be dreamt up for this geometry. Consider an annulus in which the drum is formed from two concentric circular cylinders and the membrane is stretched between the two with an annular cross section, as shown in Figure 6.6. The separation of variables would follow as before, except now the boundary conditions are that the membrane is fixed around the two circular boundaries. In this case we cannot toss out the Neumann functions, because the origin is not part of the drum head.

Figure 6.6: An annular membrane with inner radius b and outer radius a. There are fixed boundary conditions along the edges at r = a and r = b.

The domain for this problem is shown in Figure 6.6 and the problem is given by the partial differential equation

utt = c² [ (1/r) ∂/∂r ( r ∂u/∂r ) + (1/r²) ∂²u/∂θ² ], (6.54)

t > 0, b < r < a, −π < θ < π,

the boundary conditions,

u(b, θ, t) = 0, u(a, θ, t) = 0, t > 0, −π < θ < π, (6.55)

and the initial conditions,

u(r, θ, 0) = f(r, θ), b < r < a, −π < θ < π,
ut(r, θ, 0) = g(r, θ), b < r < a, −π < θ < π. (6.56)

Since we cannot dispose of the Neumann functions, the product solutions take the form

u(r, θ, t) = { cos ωt, sin ωt } { cos mθ, sin mθ } Rm(r), (6.57)

where

Rm(r) = c1 Jm(√λ r) + c2 Nm(√λ r)

and ω = c√λ, m = 0, 1, . . . .

For this problem the radial boundary conditions are that the membrane is fixed at r = a and r = b. Taking b < a, we then have to satisfy the conditions

R(a) = c1 Jm(√λ a) + c2 Nm(√λ a) = 0,
R(b) = c1 Jm(√λ b) + c2 Nm(√λ b) = 0. (6.58)

This leads to two homogeneous equations for c1 and c2. The coefficient determinant of this system has to vanish if there are to be nontrivial solutions. This gives the eigenvalue equation for λ:

Jm(√λ a) Nm(√λ b) − Jm(√λ b) Nm(√λ a) = 0.

There are an infinite number of zeros of the function

F(λ) = Jm(√λ a) Nm(√λ b) − Jm(√λ b) Nm(√λ a).

In Figure 6.7 we show a plot of F(λ) for a = 4, b = 2 and m = 0, 1, 2, 3.


Figure 6.7: Plot of the function F(λ) = Jm(√λ a) Nm(√λ b) − Jm(√λ b) Nm(√λ a) for a = 4, b = 2, and m = 0, 1, 2, 3.

This eigenvalue equation needs to be solved numerically. Choosing a = 4 and b = 2, we have for the first few modes

√λmn ≈ 1.562, 3.137, 4.709, m = 0,
     ≈ 1.598, 3.156, 4.722, m = 1,
     ≈ 1.703, 3.214, 4.761, m = 2. (6.59)

Note, since ωmn = c√λmn, these numbers essentially give us the frequencies of oscillation.
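A minimal numerical sketch of this root search is given below. It assumes SciPy's jv and yv for Jm and Nm, brackets sign changes of F on a coarse grid, and refines each bracket with brentq; the grid spacing and search interval are arbitrary choices. The roots found this way can be compared with the values quoted in Equation (6.59).

    # Sketch: find the first few roots of F(s) = Jm(s a) Nm(s b) - Jm(s b) Nm(s a),
    # where s = sqrt(lambda). Grid and bracketing parameters are illustrative.
    import numpy as np
    from scipy.special import jv, yv
    from scipy.optimize import brentq

    a, b = 4.0, 2.0

    def F(s, m):
        return jv(m, s * a) * yv(m, s * b) - jv(m, s * b) * yv(m, s * a)

    for m in range(3):
        s = np.linspace(0.1, 5.0, 2000)          # start above s = 0, where Nm diverges
        vals = F(s, m)
        brackets = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
        roots = [brentq(F, s[i], s[i + 1], args=(m,)) for i in brackets]
        print(f"m = {m}: sqrt(lambda) =", np.round(roots[:3], 3))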

For these particular roots, we can solve for c1 and c2 up to a multiplicative constant. A simple choice is to set

c1 = Nm(√λmn b), c2 = −Jm(√λmn b).

This leads to the basic modes of vibration,

Rmn(r)Θm(θ) = cos mθ ( Nm(√λmn b) Jm(√λmn r) − Jm(√λmn b) Nm(√λmn r) ),

for m = 0, 1, . . . , and n = 1, 2, . . . . In Table 6.4 we show various modes for the particular choice of annular membrane dimensions, a = 4 and b = 2.

6.3 Laplace’s Equation in 2D

Another of the generic partial differential equations is Laplace's equation, ∇²u = 0. This equation first appeared in the chapter on complex variables when we discussed harmonic functions. Another example is the electric potential in electrostatics. As we described in Chapter ??, for static electromagnetic fields,

∇ · E = ρ/ε0, E = −∇φ.

In regions devoid of charge, these equations yield the Laplace equation, ∇²φ = 0.

Another example comes from studying temperature distributions. Consider a thin rectangular plate with the boundaries set at fixed temperatures. Temperature changes of the plate are governed by the heat equation.


Table 6.4: A three dimensional view of the vibrating annular membrane for the lowest modes, arranged with n = 1, 2, 3 across the columns and m = 0, 1, 2 down the rows.

The solution of the heat equation subject to these boundary conditions is time dependent. In fact, after a long period of time the plate will reach thermal equilibrium. If the boundary temperature is zero, then the plate temperature decays to zero across the plate. However, if the boundaries are maintained at a fixed nonzero temperature, which means energy is being put into the system to maintain the boundary conditions, the internal temperature may reach a nonzero equilibrium temperature. Reaching thermal equilibrium means that asymptotically in time the solution becomes time independent. Thus, the equilibrium state is a solution of the time independent heat equation, which is another Laplace equation, ∇²u = 0.

As another example we could look at fluid flow. For an incompressible flow, ∇ · v = 0. If the flow is also irrotational, then ∇ × v = 0. We can introduce a velocity potential, v = ∇φ. Then ∇ × v vanishes by a vector identity, and ∇ · v = 0 implies ∇²φ = 0. So, once again we obtain Laplace's equation.

In this section we will look at examples of Laplace's equation in two dimensions. The solutions in these examples could represent any of the physical applications above, and the results can be applied accordingly.

Example 6.4. Equilibrium Temperature Distribution for a Rectangular Plate

Let's consider Laplace's equation in Cartesian coordinates,

uxx + uyy = 0, 0 < x < L, 0 < y < H,


with the boundary conditions

u(0, y) = 0, u(L, y) = 0, u(x, 0) = f(x), u(x, H) = 0.

The boundary conditions are shown in Figure 6.8.

Figure 6.8: In this figure we show the domain and boundary conditions for the example of determining the equilibrium temperature distribution for a rectangular plate: ∇²u = 0 on 0 < x < L, 0 < y < H, with u = 0 on the left, right, and top sides and u(x, 0) = f(x) on the bottom.

As with the heat and wave equations, we can solve this problem using the method of separation of variables. Let u(x, y) = X(x)Y(y). Then, Laplace's equation becomes

X″Y + XY″ = 0,

and we can separate the x and y dependent functions and introduce a separation constant, λ,

X″/X = −Y″/Y = −λ.

Thus, we are led to two differential equations,

X″ + λX = 0,
Y″ − λY = 0. (6.60)

From the boundary conditions u(0, y) = 0, u(L, y) = 0, we have X(0) = 0, X(L) = 0. So, we have the usual eigenvalue problem for X(x),

X″ + λX = 0, X(0) = 0, X(L) = 0.

The solutions to this problem are given by

Xn(x) = sin(nπx/L), λn = (nπ/L)², n = 1, 2, . . . .

The general solution of the equation for Y(y) is given by

Y(y) = c1 e^{√λ y} + c2 e^{−√λ y}.

The boundary condition u(x, H) = 0 implies Y(H) = 0. So, we have

c1 e^{√λ H} + c2 e^{−√λ H} = 0.

Thus,

c2 = −c1 e^{2√λ H}.

Inserting this result into the expression for Y(y), we have

Y(y) = c1 e^{√λ y} − c1 e^{2√λ H} e^{−√λ y}
     = c1 e^{√λ H} ( e^{−√λ H} e^{√λ y} − e^{√λ H} e^{−√λ y} )
     = c1 e^{√λ H} ( e^{−√λ (H−y)} − e^{√λ (H−y)} )
     = −2 c1 e^{√λ H} sinh √λ(H − y). (6.61)

Note: Having carried out this computation, we can now see that it would be better to guess this form in the future. So, for Y(H) = 0, one would guess a solution Y(y) = sinh √λ(H − y). For Y(0) = 0, one would guess a solution Y(y) = sinh √λ y. Similarly, if Y′(H) = 0, one would guess a solution Y(y) = cosh √λ(H − y).

Since we already know the values of the eigenvalues λn from the eigenvalue problem for X(x), we have that the y-dependence is given by

Yn(y) = sinh( nπ(H − y)/L ).


So, the product solutions are given by

un(x, y) = sin(nπx/L) sinh( nπ(H − y)/L ), n = 1, 2, . . . .

These solutions satisfy Laplace's equation and the three homogeneous boundary conditions in the problem.

The remaining boundary condition, u(x, 0) = f(x), still needs to be satisfied. Inserting y = 0 into the product solutions does not satisfy the boundary condition unless f(x) is proportional to one of the eigenfunctions Xn(x). So, we first write down the general solution as a linear combination of the product solutions,

u(x, y) = ∑_{n=1}^∞ an sin(nπx/L) sinh( nπ(H − y)/L ). (6.62)

Now we apply the boundary condition, u(x, 0) = f(x), to find that

f(x) = ∑_{n=1}^∞ an sinh(nπH/L) sin(nπx/L). (6.63)

Defining bn = an sinh(nπH/L), this becomes

f(x) = ∑_{n=1}^∞ bn sin(nπx/L). (6.64)

We see that the determination of the unknown coefficients, bn, is simply done by recognizing that this is a Fourier sine series. The Fourier coefficients are easily found as

bn = (2/L) ∫_0^L f(x) sin(nπx/L) dx. (6.65)

Since an = bn / sinh(nπH/L), we can finish solving the problem. The solution is

u(x, y) = ∑_{n=1}^∞ an sin(nπx/L) sinh( nπ(H − y)/L ), (6.66)

where

an = 2 / (L sinh(nπH/L)) ∫_0^L f(x) sin(nπx/L) dx. (6.67)
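Once f(x) is specified, the series (6.66)-(6.67) is easy to evaluate numerically. The sketch below uses the sample boundary data f(x) = 1, an arbitrary illustrative choice, with a modest truncation of the sum; the truncation level is not prescriptive.

    # Sketch: evaluate u(x,y) from (6.66)-(6.67) for the sample data f(x) = 1.
    # L, H, the truncation N, and f itself are illustrative choices.
    import numpy as np
    from scipy.integrate import quad

    L, H, N = 1.0, 1.0, 50
    f = lambda x: 1.0

    def u(x, y):
        total = 0.0
        for n in range(1, N + 1):
            integral, _ = quad(lambda s: f(s) * np.sin(n * np.pi * s / L), 0, L)
            a_n = 2.0 / (L * np.sinh(n * np.pi * H / L)) * integral
            total += a_n * np.sin(n * np.pi * x / L) * np.sinh(n * np.pi * (H - y) / L)
        return total

    print(u(0.5, 0.01))    # near the bottom edge this should be close to f(0.5) = 1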

Figure 6.9: In this figure we show the domain and general boundary conditions for the example of determining the equilibrium temperature distribution for a rectangular plate: ∇²u = 0 on 0 < x < L, 0 < y < H, with u = g1(y) on the left side, u = g2(y) on the right side, u = f1(x) on the bottom, and u = f2(x) on the top.

Example 6.5. Equilibrium Temperature Distribution for a Rectangular Plate for General Boundary Conditions

A more general problem is to seek solutions to Laplace's equation in Cartesian coordinates,

uxx + uyy = 0, 0 < x < L, 0 < y < H,

with non-zero boundary conditions on more than one side of the domain,

u(0, y) = g1(y), u(L, y) = g2(y), 0 < y < H,
u(x, 0) = f1(x), u(x, H) = f2(x), 0 < x < L.

These boundary conditions are shown in Figure 6.9.


Figure 6.10: The general boundary value problem for a rectangular plate can be written as the sum of these four separate problems: ∇²ui = 0 for i = 1, . . . , 4, where u1 = f1(x) on the bottom, u2 = f2(x) on the top, u3 = g1(y) on the left, and u4 = g2(y) on the right, with each ui vanishing on the remaining three sides.

The problem with this example is that none of the boundary conditions are homogeneous. This means that the corresponding eigenvalue problems will not have the homogeneous boundary conditions which Sturm-Liouville theory in Section 4 needs. However, we can express this problem in terms of four different problems with nonhomogeneous boundary conditions on only one side of the rectangle.

In Figure 6.10 we show how the problem can be broken up into four separate problems for functions ui(x, y), i = 1, . . . , 4. Since the boundary conditions and Laplace's equation are linear, the solution to the general problem is simply the sum of the solutions to these four problems,

u(x, y) = u1(x, y) + u2(x, y) + u3(x, y) + u4(x, y).

Then, this solution satisfies Laplace's equation,

∇²u(x, y) = ∇²u1(x, y) + ∇²u2(x, y) + ∇²u3(x, y) + ∇²u4(x, y) = 0,

and the boundary conditions. For example, using the boundary conditions defined in Figure 6.10, we have for y = 0,

u(x, 0) = u1(x, 0) + u2(x, 0) + u3(x, 0) + u4(x, 0) = f1(x).

The other boundary conditions can also be shown to hold.

We can solve each of the problems in Figure 6.10 quickly based on the solution we obtained in the last example. The solution for u1(x, y), which satisfies the boundary conditions

u1(0, y) = 0, u1(L, y) = 0, 0 < y < H,
u1(x, 0) = f1(x), u1(x, H) = 0, 0 < x < L,


is the easiest to write down. It is given by

u1(x, y) = ∑_{n=1}^∞ an sin(nπx/L) sinh( nπ(H − y)/L ), (6.68)

where

an = 2 / (L sinh(nπH/L)) ∫_0^L f1(x) sin(nπx/L) dx. (6.69)

For the boundary conditions

u2(0, y) = 0, u2(L, y) = 0, 0 < y < H,
u2(x, 0) = 0, u2(x, H) = f2(x), 0 < x < L,

the boundary conditions for X(x) are X(0) = 0 and X(L) = 0. So, we get the same form for the eigenvalues and eigenfunctions as before:

Xn(x) = sin(nπx/L), λn = (nπ/L)², n = 1, 2, . . . .

The remaining homogeneous boundary condition is now Y(0) = 0. Recalling that the equation satisfied by Y(y) is

Y″ − λY = 0,

we can write the general solution as

Y(y) = c1 cosh √λ y + c2 sinh √λ y.

Requiring Y(0) = 0, we have c1 = 0, or

Y(y) = c2 sinh √λ y.

Then, the general solution is

u2(x, y) = ∑_{n=1}^∞ bn sin(nπx/L) sinh(nπy/L). (6.70)

We now force the nonhomogeneous boundary condition, u2(x, H) = f2(x),

f2(x) = ∑_{n=1}^∞ bn sin(nπx/L) sinh(nπH/L). (6.71)

Once again we have a Fourier sine series. The Fourier coefficients are given by

bn = 2 / (L sinh(nπH/L)) ∫_0^L f2(x) sin(nπx/L) dx. (6.72)

Next we turn to the problem with the boundary conditions

u3(0, y) = g1(y), u3(L, y) = 0, 0 < y < H,
u3(x, 0) = 0, u3(x, H) = 0, 0 < x < L.


In this case the pair of homogeneous boundary conditions u3(x, 0) = 0, u3(x, H) = 0 leads to the solutions

Yn(y) = sin(nπy/H), λn = −(nπ/H)², n = 1, 2, . . . .

The condition u3(L, y) = 0 gives X(x) = sinh( nπ(L − x)/H ).

The general solution satisfying the homogeneous conditions is

u3(x, y) = ∑_{n=1}^∞ cn sin(nπy/H) sinh( nπ(L − x)/H ). (6.73)

Applying the nonhomogeneous boundary condition, u3(0, y) = g1(y), we obtain the Fourier sine series

g1(y) = ∑_{n=1}^∞ cn sinh(nπL/H) sin(nπy/H). (6.74)

The Fourier coefficients are found as

cn = 2 / (H sinh(nπL/H)) ∫_0^H g1(y) sin(nπy/H) dy. (6.75)

Finally, we can find the solution of the problem with the boundary conditions

u4(0, y) = 0, u4(L, y) = g2(y), 0 < y < H,
u4(x, 0) = 0, u4(x, H) = 0, 0 < x < L.

Following the above analysis, we find the general solution

u4(x, y) = ∑_{n=1}^∞ dn sin(nπy/H) sinh(nπx/H). (6.76)

The nonhomogeneous boundary condition, u4(L, y) = g2(y), is satisfied if

g2(y) = ∑_{n=1}^∞ dn sin(nπy/H) sinh(nπL/H). (6.77)

The Fourier coefficients, dn, are given by

dn = 2 / (H sinh(nπL/H)) ∫_0^H g2(y) sin(nπy/H) dy. (6.78)

The solution to the general problem is given by the sum of these four solutions,

u(x, y) = ∑_{n=1}^∞ [ ( an sinh( nπ(H − y)/L ) + bn sinh(nπy/L) ) sin(nπx/L)
        + ( cn sinh( nπ(L − x)/H ) + dn sinh(nπx/H) ) sin(nπy/H) ], (6.79)

where the coefficients are given by the above Fourier integrals.


Example 6.6. Laplace's Equation on a Disk

We now turn to solving Laplace's equation on a disk of radius a, as shown in Figure 6.11. Laplace's equation in polar coordinates is given by

(1/r) ∂/∂r ( r ∂u/∂r ) + (1/r²) ∂²u/∂θ² = 0, 0 < r < a, −π < θ < π. (6.80)

The boundary conditions are given as

u(a, θ) = f(θ), −π < θ < π, (6.81)

plus periodic boundary conditions in θ.

Figure 6.11: The disk of radius a with boundary condition u(a, θ) = f(θ) along the edge at r = a.

Separation of variables proceeds as usual. Let u(r, θ) = R(r)Θ(θ). Then

(1/r) ∂/∂r ( r ∂(RΘ)/∂r ) + (1/r²) ∂²(RΘ)/∂θ² = 0, (6.82)

or

Θ (1/r)(rR′)′ + (1/r²) RΘ″ = 0. (6.83)

Dividing by u(r, θ) = R(r)Θ(θ), multiplying by r², and rearranging, we have

(r/R)(rR′)′ = −Θ″/Θ = λ. (6.84)

Since this equation equates a function of r to a function of θ, the two sides must equal a constant. Thus, we have obtained two differential equations, which can be written as

r(rR′)′ − λR = 0, (6.85)
Θ″ + λΘ = 0. (6.86)

We can solve the second equation subject to the periodic boundary conditions in the θ variable. The reader should be able to confirm that

Θ(θ) = an cos nθ + bn sin nθ, λ = n², n = 0, 1, 2, . . .

is the solution. Note that the n = 0 case just leads to a constant solution.

Inserting λ = n² into the radial equation, we find

r²R″ + rR′ − n²R = 0.

This is a Cauchy-Euler type of ordinary differential equation. Recall that we solve such equations by guessing a solution of the form R(r) = r^m. This leads to the characteristic equation m² − n² = 0. Therefore, m = ±n. So,

R(r) = c1 r^n + c2 r^{−n}.

Since we expect finite solutions at the origin, r = 0, we can set c2 = 0. Thus, the general solution is

u(r, θ) = a0/2 + ∑_{n=1}^∞ (an cos nθ + bn sin nθ) r^n. (6.87)


Note that we have taken the constant term out of the sum and put it into a familiar form.

Now we can impose the remaining boundary condition, u(a, θ) = f(θ), or

f(θ) = a0/2 + ∑_{n=1}^∞ (an cos nθ + bn sin nθ) a^n. (6.88)

This is a Fourier trigonometric series. The Fourier coefficients can be determined using the results from Chapter 4:

an = 1/(π a^n) ∫_{−π}^π f(θ) cos nθ dθ, n = 0, 1, . . . , (6.89)

bn = 1/(π a^n) ∫_{−π}^π f(θ) sin nθ dθ, n = 1, 2, . . . . (6.90)

6.3.1 Poisson Integral Formula

We can put the solution from the last example in a more compact form by inserting the Fourier coefficients into the general solution. Doing this, we have

u(r, θ) = a0/2 + ∑_{n=1}^∞ (an cos nθ + bn sin nθ) r^n

       = 1/(2π) ∫_{−π}^π f(φ) dφ + (1/π) ∫_{−π}^π ∑_{n=1}^∞ [cos nφ cos nθ + sin nφ sin nθ] (r/a)^n f(φ) dφ

       = (1/π) ∫_{−π}^π [ 1/2 + ∑_{n=1}^∞ cos n(θ − φ) (r/a)^n ] f(φ) dφ. (6.91)

The term in the brackets can be summed. We note that

cos n(θ − φ) (r/a)^n = Re( e^{in(θ−φ)} (r/a)^n ) = Re( ( (r/a) e^{i(θ−φ)} )^n ). (6.92)

Therefore,

∑_{n=1}^∞ cos n(θ − φ) (r/a)^n = Re( ∑_{n=1}^∞ ( (r/a) e^{i(θ−φ)} )^n ).

The right hand side of this equation is a geometric series with common ratio (r/a) e^{i(θ−φ)}, which is also the first term of the series. Since | (r/a) e^{i(θ−φ)} | = r/a < 1, the series converges. Summing the series, we obtain

∑_{n=1}^∞ ( (r/a) e^{i(θ−φ)} )^n = ( (r/a) e^{i(θ−φ)} ) / ( 1 − (r/a) e^{i(θ−φ)} ) = r e^{i(θ−φ)} / ( a − r e^{i(θ−φ)} ). (6.93)


We need to rewrite this result so that we can easily take the real part. Thus, we multiply and divide by the complex conjugate of the denominator to obtain

∑_{n=1}^∞ ( (r/a) e^{i(θ−φ)} )^n = ( r e^{i(θ−φ)} / ( a − r e^{i(θ−φ)} ) ) ( ( a − r e^{−i(θ−φ)} ) / ( a − r e^{−i(θ−φ)} ) )
                               = ( ar e^{i(θ−φ)} − r² ) / ( a² + r² − 2ar cos(θ − φ) ). (6.94)

The real part of the sum is given as

Re( ∑_{n=1}^∞ ( (r/a) e^{i(θ−φ)} )^n ) = ( ar cos(θ − φ) − r² ) / ( a² + r² − 2ar cos(θ − φ) ).

Therefore, the factor in the brackets under the integral in Equation (6.91) is

1/2 + ∑_{n=1}^∞ cos n(θ − φ) (r/a)^n = 1/2 + ( ar cos(θ − φ) − r² ) / ( a² + r² − 2ar cos(θ − φ) )
                                     = ( a² − r² ) / ( 2( a² + r² − 2ar cos(θ − φ) ) ). (6.95)

Thus, we have shown that the solution of Laplace's equation on a disk of radius a with boundary condition u(a, θ) = f(θ) can be written in the closed form

u(r, θ) = 1/(2π) ∫_{−π}^π ( (a² − r²) / ( a² + r² − 2ar cos(θ − φ) ) ) f(φ) dφ. (6.96)

This result is called the Poisson Integral Formula and

K(θ, φ) = ( a² − r² ) / ( a² + r² − 2ar cos(θ − φ) )

is called the Poisson kernel.
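A quick numerical check of Equation (6.96) is to compare the Poisson integral with the series solution (6.87)-(6.90). The sketch below uses the sample boundary data f(θ) = sin 3θ + 0.5 cos θ, whose series solution reduces to two terms; the data, the evaluation point, and the quadrature resolution are arbitrary choices.

    # Sketch: compare the Poisson integral (6.96) with the series solution (6.87)
    # for the sample data f(theta) = sin(3*theta) + 0.5*cos(theta), whose series
    # solution is u = 0.5*(r/a)*cos(theta) + (r/a)**3*sin(3*theta).
    import numpy as np

    a = 1.0
    f = lambda t: np.sin(3 * t) + 0.5 * np.cos(t)

    def poisson(r, theta, npts=2000):
        phi = np.linspace(-np.pi, np.pi, npts, endpoint=False)
        kernel = (a**2 - r**2) / (a**2 + r**2 - 2 * a * r * np.cos(theta - phi))
        return np.mean(kernel * f(phi))      # (1/2pi) * integral, via a uniform average

    r, theta = 0.7, 1.2
    series = 0.5 * (r / a) * np.cos(theta) + (r / a)**3 * np.sin(3 * theta)
    print(poisson(r, theta), series)         # the two values should agree closely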

Example 6.7. Evaluate the solution (6.96) at the center of the disk.

We insert r = 0 into the solution (6.96) to obtain

u(0, θ) = 1/(2π) ∫_{−π}^π f(φ) dφ.

Recalling that the average of a function g(x) on [a, b] is given by

g_ave = 1/(b − a) ∫_a^b g(x) dx,

we see that the value of the solution u at the center of the disk is the average of the boundary values. This is sometimes referred to as the mean value theorem.

6.4 Three Dimensional Cake Baking

In the rest of the chapter we will extend our studies to three dimensional problems. In this section we will solve the heat equation as we look at examples of baking cakes.


We consider cake batter, which is at room temperature of Ti = 80°F. It is placed into an oven, also at a fixed temperature, Tb = 350°F. For simplicity, we will assume that the thermal conductivity and cake density are constant. Of course, this is not quite true. However, it is an approximation which simplifies the model. We will consider two cases, one in which the cake is a rectangular solid, such as one baked in a 13″ × 9″ × 2″ baking pan. The other case will lead to a cylindrical cake, such as you would obtain from a round cake pan. (This discussion of cake baking is adapted from R. Wilkinson's thesis work, which in turn was inspired by work done by Dr. Olszewski (2006), "From baking a cake to solving the diffusion equation," American Journal of Physics 74(6).)

Assuming that the heat constant k is indeed constant and the temperature is given by T(r, t), we begin with the heat equation in three dimensions,

∂T/∂t = k∇²T. (6.97)

We will need to specify initial and boundary conditions. Let Ti be the initial batter temperature, T(x, y, z, 0) = Ti.

We choose the boundary conditions to be fixed at the oven temperature Tb. However, these boundary conditions are not homogeneous and would lead to problems when carrying out separation of variables. This is easily remedied by subtracting the oven temperature from all temperatures involved and defining u(r, t) = T(r, t) − Tb. The heat equation then becomes

∂u/∂t = k∇²u (6.98)

with initial condition

u(r, 0) = Ti − Tb.

The boundary conditions are now homogeneous. We cannot be any more specific than this until we specify the geometry.

Example 6.8. Temperature of a Rectangular Cake

Figure 6.12: The dimensions of a rectangular cake: 0 ≤ x ≤ W, 0 ≤ y ≤ L, 0 ≤ z ≤ H.

We will consider a rectangular cake with dimensions 0 ≤ x ≤ W, 0 ≤ y ≤ L, and 0 ≤ z ≤ H, as shown in Figure 6.12. For this problem, we seek solutions of the heat equation plus the conditions

u(x, y, z, 0) = Ti − Tb,
u(0, y, z, t) = u(W, y, z, t) = 0,
u(x, 0, z, t) = u(x, L, z, t) = 0,
u(x, y, 0, t) = u(x, y, H, t) = 0.

Using the method of separation of variables, we seek solutions of the form

u(x, y, z, t) = X(x)Y(y)Z(z)G(t). (6.99)

Substituting this form into the heat equation, we get

(1/k) G′/G = X″/X + Y″/Y + Z″/Z. (6.100)

Setting these expressions equal to −λ, we get

(1/k) G′/G = −λ and X″/X + Y″/Y + Z″/Z = −λ. (6.101)


Therefore, the equation for G(t) is given by

G′ + kλG = 0.

We further have to separate out the functions of x, y, and z. We anticipate that the homogeneous boundary conditions will lead to oscillatory solutions in these variables. Therefore, we expect separation of variables will lead to the eigenvalue problems

X″ + µ²X = 0, X(0) = X(W) = 0,
Y″ + ν²Y = 0, Y(0) = Y(L) = 0,
Z″ + κ²Z = 0, Z(0) = Z(H) = 0. (6.102)

Noting that

X″/X = −µ², Y″/Y = −ν², Z″/Z = −κ²,

we find from the heat equation that the separation constants are related,

λ = µ² + ν² + κ².

We could have gotten to this point quicker by writing the first separated equation with the separation constants labeled,

(1/k) G′/G = X″/X + Y″/Y + Z″/Z,

where the four ratios equal −λ, −µ², −ν², and −κ², respectively. Then we can read off the eigenvalue problems and determine that λ = µ² + ν² + κ².

From the boundary conditions, we get product solutions for u(x, y, z, t) in the form

umnℓ(x, y, z, t) = sin µmx sin νny sin κℓz e^{−λmnℓ kt},

for

λmnℓ = µm² + νn² + κℓ² = (mπ/W)² + (nπ/L)² + (ℓπ/H)², m, n, ℓ = 1, 2, . . . .

The general solution is a linear combination of all of the product solutions, summed over three different indices,

u(x, y, z, t) = ∑_{m=1}^∞ ∑_{n=1}^∞ ∑_{ℓ=1}^∞ Amnℓ sin µmx sin νny sin κℓz e^{−λmnℓ kt}, (6.103)

where the Amnℓ's are arbitrary constants.

We can use the initial condition u(x, y, z, 0) = Ti − Tb to determine the Amnℓ's. We find

Ti − Tb = ∑_{m=1}^∞ ∑_{n=1}^∞ ∑_{ℓ=1}^∞ Amnℓ sin µmx sin νny sin κℓz. (6.104)

This is a triple Fourier sine series.


We can determine these coefficients in a manner similar to how we handled double Fourier sine series earlier in the chapter. Defining

bm(y, z) = ∑_{n=1}^∞ ∑_{ℓ=1}^∞ Amnℓ sin νny sin κℓz,

we obtain a simple Fourier sine series:

Ti − Tb = ∑_{m=1}^∞ bm(y, z) sin µmx. (6.105)

The Fourier coefficients can then be found as

bm(y, z) = (2/W) ∫_0^W (Ti − Tb) sin µmx dx.

Using the same technique for the remaining sine series and noting that Ti − Tb is constant, we can determine the general coefficients Amnℓ by carrying out the needed integrations:

Amnℓ = 8/(WLH) ∫_0^H ∫_0^L ∫_0^W (Ti − Tb) sin µmx sin νny sin κℓz dx dy dz

     = (Ti − Tb) (8/π³) [ −cos(mπx/W)/m ]_0^W [ −cos(nπy/L)/n ]_0^L [ −cos(ℓπz/H)/ℓ ]_0^H

     = (Ti − Tb) (8/π³) [ (1 − cos mπ)/m ] [ (1 − cos nπ)/n ] [ (1 − cos ℓπ)/ℓ ],

which vanishes if at least one of m, n, ℓ is even and equals (Ti − Tb)(8/π³)(2/m)(2/n)(2/ℓ) when m, n, ℓ are all odd.

Since only the odd multiples yield non-zero Amnℓ, we let m = 2m′ − 1, n = 2n′ − 1, and ℓ = 2ℓ′ − 1 for m′, n′, ℓ′ = 1, 2, . . . . The expansion coefficients can now be written in the simpler form

Amnℓ = 64(Ti − Tb) / ( (2m′ − 1)(2n′ − 1)(2ℓ′ − 1)π³ ).

Figure 6.13: Rectangular cake showing a vertical slice.

Substituting this result into the general solution and dropping the primes, we find

u(x, y, z, t) = ( 64(Ti − Tb)/π³ ) ∑_{m=1}^∞ ∑_{n=1}^∞ ∑_{ℓ=1}^∞ ( sin µmx sin νny sin κℓz e^{−λmnℓ kt} ) / ( (2m − 1)(2n − 1)(2ℓ − 1) ),

where

λmnℓ = ( (2m − 1)π/W )² + ( (2n − 1)π/L )² + ( (2ℓ − 1)π/H )²

for m, n, ℓ = 1, 2, . . . . (Here, after dropping the primes, µm = (2m − 1)π/W, νn = (2n − 1)π/L, and κℓ = (2ℓ − 1)π/H.)

Recalling that the solution to the physical problem is

T(x, y, z, t) = u(x, y, z, t) + Tb,

the final solution is given by

T(x, y, z, t) = Tb + ( 64(Ti − Tb)/π³ ) ∑_{m=1}^∞ ∑_{n=1}^∞ ∑_{ℓ=1}^∞ ( sin µmx sin νny sin κℓz e^{−λmnℓ kt} ) / ( (2m − 1)(2n − 1)(2ℓ − 1) ).
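The triple series converges rapidly because of the exponential factors, so it is simple to evaluate numerically. In the sketch below the pan dimensions follow the 13″ × 9″ × 2″ example (converted to feet), while the diffusivity k and the time t are rough illustrative guesses rather than measured properties of cake batter.

    # Sketch: evaluate T(x,y,z,t) at the center of a 13"x9"x2" cake.
    # k (ft^2/min) and t (min) are illustrative guesses, not calibrated values.
    import numpy as np

    W, L, H = 13/12, 9/12, 2/12          # pan dimensions in feet
    Ti, Tb = 80.0, 350.0                 # batter and oven temperatures (deg F)
    k, t = 1e-4, 30.0

    def T(x, y, z, t, N=20):
        s = 0.0
        for m in range(1, N + 1):
            for n in range(1, N + 1):
                for l in range(1, N + 1):
                    mu = (2*m - 1) * np.pi / W
                    nu = (2*n - 1) * np.pi / L
                    ka = (2*l - 1) * np.pi / H
                    lam = mu**2 + nu**2 + ka**2
                    s += (np.sin(mu*x) * np.sin(nu*y) * np.sin(ka*z)
                          * np.exp(-lam*k*t) / ((2*m - 1)*(2*n - 1)*(2*l - 1)))
        return Tb + 64 * (Ti - Tb) / np.pi**3 * s

    print(T(W/2, L/2, H/2, t))           # center temperature after t minutes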


Figure 6.14: Temperature evolution for a 13″ × 9″ × 2″ cake shown as vertical slices at the indicated length in feet. The panels show the temperatures at 0.0625, 0.125, 0.25, and 0.5 of the width.

We show some temperature distributions in Figure 6.14. Since we cannot capture the entire cake, we show vertical slices such as depicted in Figure 6.13. Vertical slices are taken at the positions and times indicated for a 13″ × 9″ × 2″ cake. Obviously, this is not accurate because the cake consistency is changing and this will affect the parameter k. A more realistic model would be to allow k = k(T(x, y, z, t)). However, such problems are beyond the simple methods described in this book.

Example 6.9. Circular Cakes

Figure 6.15: Geometry for a cylindrical cake of radius a.

In this case the geometry of the cake is cylindrical, as shown in Figure 6.15. Therefore, we need to express the boundary conditions and heat equation in cylindrical coordinates. Also, we will assume that the solution, u(r, z, t) = T(r, z, t) − Tb, is independent of θ due to axial symmetry. This gives the heat equation in θ-independent cylindrical coordinates as

∂u/∂t = k [ (1/r) ∂/∂r ( r ∂u/∂r ) + ∂²u/∂z² ], (6.106)

where 0 ≤ r ≤ a and 0 ≤ z ≤ Z. The initial condition is

u(r, z, 0) = Ti − Tb,

and the homogeneous boundary conditions on the side, top, and bottom of the cake are

u(a, z, t) = 0,
u(r, 0, t) = u(r, Z, t) = 0.


Again, we seek solutions of the form u(r, z, t) = R(r)H(z)G(t). Separation of variables leads to

(1/k) G′/G = (1/(rR)) d/dr ( rR′ ) + H″/H, (6.107)

where the three ratios equal −λ, −µ², and −ν², respectively. Here we have indicated the separation constants, which lead to three ordinary differential equations. These equations and the boundary conditions are

G′ + kλG = 0,
d/dr ( rR′ ) + µ²rR = 0, R(a) = 0, R(0) finite,
H″ + ν²H = 0, H(0) = H(Z) = 0. (6.108)

We further note that the separation constants are related by λ = µ² + ν².

We can easily write down the solutions for G(t) and H(z),

G(t) = A e^{−λkt}

and

Hn(z) = sin(nπz/Z), n = 1, 2, 3, . . . ,

where ν = nπ/Z. Recalling from the rectangular case that only odd terms arise in the Fourier sine series coefficients for the constant initial condition, we proceed by rewriting H(z) as

Hn(z) = sin( (2n − 1)πz/Z ), n = 1, 2, 3, . . . , (6.109)

with ν = (2n − 1)π/Z.

The radial equation can be written in the form

r²R″ + rR′ + µ²r²R = 0.

This is Bessel's equation of order zero, which we saw in Section 5.5. Therefore, the general solution is a linear combination of Bessel functions of the first and second kind,

R(r) = c1 J0(µr) + c2 N0(µr). (6.110)

Since R(r) is bounded at r = 0 and N0(µr) is not well behaved at r = 0, we set c2 = 0. Up to a constant factor, the solution becomes

R(r) = J0(µr). (6.111)

The boundary condition R(a) = 0 gives the eigenvalues as

µm = j0m/a, m = 1, 2, 3, . . . ,

where j0m is the mth root of the zeroth-order Bessel function, J0(j0m) = 0.

Therefore, we have found the product solutions

Hn(z)Rm(r)G(t) = sin( (2n − 1)πz/Z ) J0( (r/a) j0m ) e^{−λnm kt}, (6.112)


where m = 1, 2, 3, . . . , n = 1, 2, . . . . Combining the product solutions, the general solution is found as

u(r, z, t) = ∑_{n=1}^∞ ∑_{m=1}^∞ Anm sin( (2n − 1)πz/Z ) J0( (r/a) j0m ) e^{−λnm kt} (6.113)

with

λnm = ( (2n − 1)π/Z )² + ( j0m/a )²,

for n, m = 1, 2, 3, . . . .

Inserting the solution into the constant initial condition, we have

Ti − Tb = ∑_{n=1}^∞ ∑_{m=1}^∞ Anm sin( (2n − 1)πz/Z ) J0( (r/a) j0m ).

This is a double Fourier series, but it involves a Fourier-Bessel expansion. Writing

bn(r) = ∑_{m=1}^∞ Anm J0( (r/a) j0m ),

the condition becomes

Ti − Tb = ∑_{n=1}^∞ bn(r) sin( (2n − 1)πz/Z ).

As seen previously, this is a Fourier sine series and the Fourier coefficients are given by

bn(r) = (2/Z) ∫_0^Z (Ti − Tb) sin( (2n − 1)πz/Z ) dz
      = ( 2(Ti − Tb)/Z ) [ −( Z/((2n − 1)π) ) cos( (2n − 1)πz/Z ) ]_0^Z
      = 4(Ti − Tb) / ( (2n − 1)π ).

We insert this result into the Fourier-Bessel series,

4(Ti − Tb) / ( (2n − 1)π ) = ∑_{m=1}^∞ Anm J0( (r/a) j0m ),

and recall from Section 5.5 that we can determine the Fourier coefficients Anm using the Fourier-Bessel series,

f(x) = ∑_{n=1}^∞ cn Jp( jpn x/a ), (6.114)

where the Fourier-Bessel coefficients are found as

cn = 2 / ( a² [ J_{p+1}(jpn) ]² ) ∫_0^a x f(x) Jp( jpn x/a ) dx. (6.115)

Comparing these series expansions, we have

Anm = ( 2 / ( a² J1²(j0m) ) ) ( 4(Ti − Tb) / ( (2n − 1)π ) ) ∫_0^a J0(µmr) r dr. (6.116)


In order to evaluate ∫_0^a J0(µmr) r dr, we let y = µmr and get

∫_0^a J0(µmr) r dr = ∫_0^{µma} J0(y) (y/µm) (dy/µm)
                   = (1/µm²) ∫_0^{µma} J0(y) y dy
                   = (1/µm²) ∫_0^{µma} d/dy ( y J1(y) ) dy
                   = (1/µm²) (µma) J1(µma) = (a²/j0m) J1(j0m). (6.117)

Here we have made use of the identity d/dx ( x J1(x) ) = x J0(x) from Section 5.5.

Substituting the result of this integral computation into the expression for Anm, we find

Anm = ( 8(Ti − Tb) / ( (2n − 1)π ) ) ( 1 / ( j0m J1(j0m) ) ).

Substituting this result into the original expression for u(r, z, t) gives

u(r, z, t) = ( 8(Ti − Tb)/π ) ∑_{n=1}^∞ ∑_{m=1}^∞ ( sin( (2n − 1)πz/Z ) / (2n − 1) ) ( J0( (r/a) j0m ) e^{−λnm kt} / ( j0m J1(j0m) ) ).

Therefore, T(r, z, t) is found as

T(r, z, t) = Tb + ( 8(Ti − Tb)/π ) ∑_{n=1}^∞ ∑_{m=1}^∞ ( sin( (2n − 1)πz/Z ) / (2n − 1) ) ( J0( (r/a) j0m ) e^{−λnm kt} / ( j0m J1(j0m) ) ),

where

λnm = ( (2n − 1)π/Z )² + ( j0m/a )², n, m = 1, 2, 3, . . . .
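This series is just as easy to evaluate, since SciPy provides J0, J1, and the zeros j0m. The sketch below evaluates T(r, z, t) at the center of a round pan; the pan dimensions, diffusivity k, and time t are rough illustrative values only.

    # Sketch: evaluate the cylindrical-cake series for T(r,z,t).
    # Pan size, k (ft^2/min), and t (min) are illustrative guesses.
    import numpy as np
    from scipy.special import j0, j1, jn_zeros

    a, Z = 4.5/12, 2/12                  # 9-inch diameter pan: radius a, height Z (feet)
    Ti, Tb = 80.0, 350.0
    k, t = 1e-4, 30.0

    def T(r, z, t, N=20, M=20):
        s = 0.0
        for j0m in jn_zeros(0, M):       # zeros j_0m of J_0
            for n in range(1, N + 1):
                lam = ((2*n - 1) * np.pi / Z)**2 + (j0m / a)**2
                s += (np.sin((2*n - 1) * np.pi * z / Z) / (2*n - 1)
                      * j0(j0m * r / a) * np.exp(-lam*k*t) / (j0m * j1(j0m)))
        return Tb + 8 * (Ti - Tb) / np.pi * s

    print(T(0.0, Z/2, t))                # temperature at the center of the cake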

Figure 6.16: Depiction of a side view of a vertical slice of a circular cake.

We have therefore found the general solution for the three-dimensional heat equation in cylindrical coordinates with constant diffusivity. Similar to the solutions shown in Figure 6.14 of the previous section, we show in Figure 6.17 the temperature evolution throughout a standard 9″ round cake pan. These are vertical slices similar to what is depicted in Figure 6.16.

Again, one could generalize this example to other types of cakes with cylindrical symmetry, such as muffins or Boston steamed bread, which is steamed in tall cylindrical cans. One could also consider an annular pan, such as a bundt cake pan. In fact, such problems extend beyond baking cakes to the heating of molds in manufacturing.

6.5 Laplace's Equation and Spherical Symmetry

We have seen that Laplace's equation, ∇²u = 0, arises in electrostatics as an equation for the electric potential outside a charge distribution, and it occurs as the equation governing equilibrium temperature distributions.


Figure 6.17: Temperature evolution for a standard 9″ cake shown as vertical slices through the center. The panels show the temperatures at t = 15, 20, 25, and 30 minutes.

As we had seen in the last chapter, Laplace's equation generally occurs in the study of potential theory, which also includes the study of gravitational and fluid potentials. The equation is named after Pierre-Simon Laplace (1749-1827), who studied the properties of this equation. Solutions of Laplace's equation are called harmonic functions.

Example 6.10. Solve Laplace's equation in spherical coordinates.

Figure 6.18: A sphere of radius r with the boundary condition u(r, θ, φ) = g(θ, φ).

We seek solutions of this equation inside a sphere of radius r subject to the boundary condition shown in Figure 6.18. The problem is given by Laplace's equation in spherical coordinates²

² The Laplacian in spherical coordinates is given in Problem ?? in Chapter 8.

(1/ρ²) ∂/∂ρ ( ρ² ∂u/∂ρ ) + (1/(ρ² sin θ)) ∂/∂θ ( sin θ ∂u/∂θ ) + (1/(ρ² sin² θ)) ∂²u/∂φ² = 0, (6.118)

where u = u(ρ, θ, φ).

The boundary conditions are given by

u(r, θ, φ) = g(θ, φ), 0 < φ < 2π, 0 < θ < π,

and the periodic boundary conditions

u(ρ, θ, 0) = u(ρ, θ, 2π), uφ(ρ, θ, 0) = uφ(ρ, θ, 2π),

where 0 < ρ < ∞ and 0 < θ < π.

As before, we perform a separation of variables by seeking product solutions of the form u(ρ, θ, φ) = R(ρ)Θ(θ)Φ(φ). Inserting this form into the Laplace equation, we obtain

Figure 6.19: Definition of spherical coordinates (ρ, θ, φ). Note that there are different conventions for labeling spherical coordinates. This labeling is used often in physics.


(ΘΦ/ρ²) d/dρ ( ρ² dR/dρ ) + (RΦ/(ρ² sin θ)) d/dθ ( sin θ dΘ/dθ ) + (RΘ/(ρ² sin² θ)) d²Φ/dφ² = 0. (6.119)

Multiplying this equation by ρ² and dividing by RΘΦ yields

(1/R) d/dρ ( ρ² dR/dρ ) + (1/(sin θ Θ)) d/dθ ( sin θ dΘ/dθ ) + (1/(sin² θ Φ)) d²Φ/dφ² = 0. (6.120)

Note that the first term is the only term depending upon ρ. Thus, we can separate out the radial part. However, there is still more work to do on the other two terms, which give the angular dependence. Thus, we have

−(1/R) d/dρ ( ρ² dR/dρ ) = (1/(sin θ Θ)) d/dθ ( sin θ dΘ/dθ ) + (1/(sin² θ Φ)) d²Φ/dφ² = −λ, (6.121)

where we have introduced the first separation constant. This leads to two equations:

d/dρ ( ρ² dR/dρ ) − λR = 0 (6.122)

and

(1/(sin θ Θ)) d/dθ ( sin θ dΘ/dθ ) + (1/(sin² θ Φ)) d²Φ/dφ² = −λ. (6.123)

Equation (6.123) is a key equation which occurs when studying problems possessing spherical symmetry. It is an eigenvalue problem for Y(θ, φ) = Θ(θ)Φ(φ), LY = −λY, where

L = (1/sin θ) ∂/∂θ ( sin θ ∂/∂θ ) + (1/sin² θ) ∂²/∂φ².

The eigenfunctions of this operator are referred to as spherical harmonics.

The final separation can be performed by multiplying the last equation by sin² θ, rearranging the terms, and introducing a second separation constant:

(sin θ/Θ) d/dθ ( sin θ dΘ/dθ ) + λ sin² θ = −(1/Φ) d²Φ/dφ² = µ. (6.124)

From this expression we can determine the differential equations satisfied by Θ(θ) and Φ(φ):

sin θ d/dθ ( sin θ dΘ/dθ ) + (λ sin² θ − µ)Θ = 0, (6.125)

and

d²Φ/dφ² + µΦ = 0. (6.126)

We now have three ordinary differential equations to solve. These are the radial equation (6.122) and the two angular equations (6.125)-(6.126). We note that all three are in Sturm-Liouville form. We will solve each eigenvalue problem subject to appropriate boundary conditions.

The simplest of these differential equations is Equation (6.126) for Φ(φ). We have seen equations of this form many times and the general solution is a linear combination of sines and cosines. Furthermore, in this problem u(ρ, θ, φ) is periodic in φ,

u(ρ, θ, 0) = u(ρ, θ, 2π), uφ(ρ, θ, 0) = uφ(ρ, θ, 2π).

Since these conditions hold for all ρ and θ, we must require that Φ(φ) satisfy the periodic boundary conditions

Φ(0) = Φ(2π), Φ′(0) = Φ′(2π).


The eigenfunctions and eigenvalues for Equation (6.126) are then found as

Φ(φ) = {cos mφ, sin mφ}, µ = m², m = 0, 1, . . . . (6.127)

Next we turn to solving Equation (6.125). We first transform this equation in order to identify the solutions. Let x = cos θ. Then the derivatives with respect to θ transform as

d/dθ = (dx/dθ) d/dx = −sin θ d/dx.

Letting y(x) = Θ(θ) and noting that sin² θ = 1 − x², Equation (6.125) becomes

d/dx ( (1 − x²) dy/dx ) + ( λ − m²/(1 − x²) ) y = 0. (6.128)

We further note that x ∈ [−1, 1], as can be easily confirmed by the reader.

This is a Sturm-Liouville eigenvalue problem. The solutions consist of a set of orthogonal eigenfunctions. For the special case that m = 0, Equation (6.128) becomes

d/dx ( (1 − x²) dy/dx ) + λy = 0. (6.129)

In a course in differential equations one learns to seek solutions of this equation in the form

y(x) = ∑_{n=0}^∞ an x^n.

This leads to the recursion relation

a_{n+2} = ( n(n + 1) − λ ) / ( (n + 2)(n + 1) ) an.

Setting n = 0 and seeking a series solution, one finds that the resulting series does not converge for x = ±1. This is remedied by choosing λ = ℓ(ℓ + 1) for ℓ = 0, 1, . . . , leading to the differential equation

d/dx ( (1 − x²) dy/dx ) + ℓ(ℓ + 1)y = 0. (6.130)

We saw this equation in Chapter 5 in the form

(1 − x²)y″ − 2xy′ + ℓ(ℓ + 1)y = 0.

The solutions of this differential equation are the Legendre polynomials, denoted by Pℓ(x).

For the more general case, m ≠ 0, the differential equation (6.128) with λ = ℓ(ℓ + 1) becomes

d/dx ( (1 − x²) dy/dx ) + ( ℓ(ℓ + 1) − m²/(1 − x²) ) y = 0. (6.131)

The solutions of this equation are called the associated Legendre functions. The two linearly independent solutions are denoted by Pℓ^m(x) and Qℓ^m(x).


The latter functions are not well behaved at x = ±1, corresponding to the north and south poles of the original problem. So, we can throw out these solutions in many physical cases, leaving

Θ(θ) = Pℓ^m(cos θ)

as the needed solutions. In Table 6.5 we list a few of these.

Pn^m(x)                                   Pn^m(cos θ)
P0^0(x) = 1                               1
P1^0(x) = x                               cos θ
P1^1(x) = −(1 − x²)^{1/2}                 −sin θ
P2^0(x) = (1/2)(3x² − 1)                  (1/2)(3 cos² θ − 1)
P2^1(x) = −3x(1 − x²)^{1/2}               −3 cos θ sin θ
P2^2(x) = 3(1 − x²)                       3 sin² θ
P3^0(x) = (1/2)(5x³ − 3x)                 (1/2)(5 cos³ θ − 3 cos θ)
P3^1(x) = −(3/2)(5x² − 1)(1 − x²)^{1/2}   −(3/2)(5 cos² θ − 1) sin θ
P3^2(x) = 15x(1 − x²)                     15 cos θ sin² θ
P3^3(x) = −15(1 − x²)^{3/2}               −15 sin³ θ

Table 6.5: Associated Legendre Functions, Pn^m(x).

The associated Legendre functions are related to the Legendre polynomials by³

Pℓ^m(x) = (−1)^m (1 − x²)^{m/2} (d^m/dx^m) Pℓ(x), (6.132)

for ℓ = 0, 1, 2, . . . and m = 0, 1, . . . , ℓ. We further note that Pℓ^0(x) = Pℓ(x), as one can see in the table. Since Pℓ(x) is a polynomial of degree ℓ, then for m > ℓ, (d^m/dx^m) Pℓ(x) = 0 and Pℓ^m(x) = 0.

³ The factor of (−1)^m is known as the Condon-Shortley phase and is useful in quantum mechanics in the treatment of angular momentum. It is sometimes omitted by some authors.

Furthermore, since the differential equation only depends on m², Pℓ^{−m}(x) is proportional to Pℓ^m(x). One normalization is given by

Pℓ^{−m}(x) = (−1)^m ( (ℓ − m)!/(ℓ + m)! ) Pℓ^m(x).

The associated Legendre functions also satisfy the orthogonality condition

∫_{−1}^1 Pℓ^m(x) Pℓ′^m(x) dx = ( 2/(2ℓ + 1) ) ( (ℓ + m)!/(ℓ − m)! ) δℓℓ′. (6.133)
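This orthogonality relation is easy to spot-check numerically. The sketch below uses scipy.special.lpmv, which evaluates the associated Legendre functions with the Condon-Shortley phase included (matching Table 6.5); the particular degrees and order are arbitrary choices.

    # Sketch: spot-check the orthogonality relation (6.133) for m = 1, l = 2, l' = 2, 3.
    from math import factorial
    from scipy.special import lpmv
    from scipy.integrate import quad

    def inner(l, lp, m):
        val, _ = quad(lambda x: lpmv(m, l, x) * lpmv(m, lp, x), -1, 1)
        return val

    m, l = 1, 2
    exact = 2.0 / (2*l + 1) * factorial(l + m) / factorial(l - m)
    print(inner(l, l, m), exact)     # these two numbers should agree
    print(inner(l, 3, m))            # different degrees: should be near zero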

The last differential equation we need to solve is the radial equation. With λ = ℓ(ℓ + 1), ℓ = 0, 1, 2, . . . , the radial equation (6.122) can be written as

ρ²R″ + 2ρR′ − ℓ(ℓ + 1)R = 0. (6.134)

The radial equation is a Cauchy-Euler type of equation. So, we can guess the form of the solution to be R(ρ) = ρ^s, where s is a yet to be determined constant. Inserting this guess into the radial equation, we obtain the characteristic equation

s(s + 1) = ℓ(ℓ + 1).

Solving for s, we have

s = ℓ, −(ℓ + 1).


Thus, the general solution of the radial equation is

R(ρ) = aρ^ℓ + bρ^{−(ℓ+1)}. (6.135)

We would normally apply boundary conditions at this point. The boundary condition u(r, θ, φ) = g(θ, φ) is not a homogeneous boundary condition, so we will need to hold off using it until we have the general solution to the three dimensional problem. However, we do have a hidden condition. Since we are interested in solutions inside the sphere, we need to consider what happens at ρ = 0. Note that ρ^{−(ℓ+1)} is not defined at the origin. Since the solution is expected to be bounded at the origin, we can set b = 0. So, in the current problem we have established that

R(ρ) = aρ^ℓ.

(When seeking solutions outside the sphere, one considers the boundary condition R(ρ) → 0 as ρ → ∞. In this case, R(ρ) = ρ^{−(ℓ+1)}.)

We have carried out the full separation of Laplace's equation in spherical coordinates. The product solutions consist of the forms

u(ρ, θ, φ) = ρ^ℓ Pℓ^m(cos θ) cos mφ

and

u(ρ, θ, φ) = ρ^ℓ Pℓ^m(cos θ) sin mφ

for ℓ = 0, 1, 2, . . . and m = 0, ±1, . . . , ±ℓ. These solutions can be combined to give a complex representation of the product solutions as

u(ρ, θ, φ) = ρ^ℓ Pℓ^m(cos θ) e^{imφ}.

The general solution is then given as a linear combination of these product solutions. As there are two indices, we have a double sum:⁴

u(ρ, θ, φ) = ∑_{ℓ=0}^∞ ∑_{m=−ℓ}^ℓ aℓm ρ^ℓ Pℓ^m(cos θ) e^{imφ}. (6.136)

⁴ While this appears to be a complex-valued solution, it can be rewritten as a sum over real functions. The inner sum contains terms for both m = k and m = −k. Adding these contributions, we have that

aℓk ρ^ℓ Pℓ^k(cos θ) e^{ikφ} + aℓ(−k) ρ^ℓ Pℓ^{−k}(cos θ) e^{−ikφ}

can be rewritten as

(Aℓk cos kφ + Bℓk sin kφ) ρ^ℓ Pℓ^k(cos θ).

Example 6.11. Laplace's Equation with Azimuthal Symmetry

As a simple example we consider the solution of Laplace's equation in which there is azimuthal symmetry. Let

u(r, θ, φ) = g(θ) = 1 − cos 2θ.

This function is zero at the poles and has a maximum at the equator. So, this could be a crude model of the temperature distribution of the Earth with zero temperature at the poles and a maximum near the equator.

Figure 6.20: A sphere of radius r with the boundary condition u(r, θ, φ) = 1 − cos 2θ.

In problems in which there is no φ-dependence, only the m = 0 terms of the general solution survive. Thus, we have

u(ρ, θ, φ) = ∑_{ℓ=0}^∞ aℓ ρ^ℓ Pℓ(cos θ). (6.137)

Here we have used the fact that Pℓ^0(x) = Pℓ(x). We just need to determine the unknown expansion coefficients, aℓ. Imposing the boundary condition at ρ = r, we are led to

g(θ) = ∑_{ℓ=0}^∞ aℓ r^ℓ Pℓ(cos θ). (6.138)


This is a Fourier-Legendre series representation of g(θ). Since the Legendre polynomials are an orthogonal set of eigenfunctions, we can extract the coefficients.

In Chapter 5 we had proven that

∫_0^π Pn(cos θ) Pm(cos θ) sin θ dθ = ∫_{−1}^1 Pn(x) Pm(x) dx = ( 2/(2n + 1) ) δnm.

So, multiplying the expression for g(θ) by Pm(cos θ) sin θ and integrating, we obtain the expansion coefficients:

aℓ = ( (2ℓ + 1)/(2r^ℓ) ) ∫_0^π g(θ) Pℓ(cos θ) sin θ dθ. (6.139)

Sometimes it is easier to rewrite g(θ) as a polynomial in cos θ and avoid the integration. For this example we see that

g(θ) = 1 − cos 2θ = 2 sin² θ = 2 − 2 cos² θ. (6.140)

Thus, setting x = cos θ and G(x) = g(θ(x)), we have G(x) = 2 − 2x².

We seek the form

G(x) = c0 P0(x) + c1 P1(x) + c2 P2(x),

where P0(x) = 1, P1(x) = x, and P2(x) = (1/2)(3x² − 1). Since G(x) = 2 − 2x² does not have any x terms, we know that c1 = 0. So,

2 − 2x² = c0(1) + c2 (1/2)(3x² − 1) = c0 − (1/2)c2 + (3/2)c2 x².

By observation we have c2 = −4/3 and thus c0 = 2 + (1/2)c2 = 4/3. Therefore, G(x) = (4/3)P0(x) − (4/3)P2(x).

We have found the expansion of g(θ) in terms of Legendre polynomials,

g(θ) = (4/3)P0(cos θ) − (4/3)P2(cos θ). (6.141)

Therefore, the nonzero coefficients in the general solution become

a0 = 4/3, a2 = −(4/3)(1/r²),

and the rest of the coefficients are zero. Inserting these into the general solution, we have the final solution

u(ρ, θ, φ) = (4/3)P0(cos θ) − (4/3)(ρ/r)² P2(cos θ)
           = 4/3 − (2/3)(ρ/r)²(3 cos² θ − 1). (6.142)
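When g(θ) is not a simple polynomial in cos θ, the coefficient formula (6.139) can be evaluated numerically instead. The sketch below recovers the combinations aℓ r^ℓ for this example and should return approximately 4/3 for ℓ = 0, −4/3 for ℓ = 2, and nearly zero otherwise; scipy.special.eval_legendre and the quadrature are the only ingredients assumed.

    # Sketch: compute a_l * r**l from (6.139) for g(theta) = 1 - cos(2*theta).
    import numpy as np
    from scipy.special import eval_legendre
    from scipy.integrate import quad

    g = lambda th: 1 - np.cos(2 * th)

    def coeff(l):
        integrand = lambda th: g(th) * eval_legendre(l, np.cos(th)) * np.sin(th)
        val, _ = quad(integrand, 0, np.pi)
        return (2*l + 1) / 2 * val       # this equals a_l * r**l

    print([round(coeff(l), 6) for l in range(5)])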

6.5.1 Spherical Harmonics

The solutions of the angular parts of the problem are often combined into one function of two variables, since problems with spherical symmetry arise often, leaving the main differences between such problems confined to the radial equation. These functions are referred to as spherical harmonics, Yℓm(θ, φ), which are defined with a special normalization as follows. (Spherical harmonics are important in applications from atomic electron configurations to gravitational fields, planetary magnetic fields, and the cosmic microwave background radiation.)


Yℓm(θ, φ) = (−1)^m √( ( (2ℓ + 1)/(4π) ) ( (ℓ − m)!/(ℓ + m)! ) ) Pℓ^m(cos θ) e^{imφ}. (6.143)

These satisfy the simple orthogonality relation

∫_0^π ∫_0^{2π} Yℓm(θ, φ) Y*ℓ′m′(θ, φ) sin θ dφ dθ = δℓℓ′ δmm′.
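The orthonormality can also be verified numerically by building Yℓm directly from the definition (6.143). The sketch below assumes scipy.special.lpmv for Pℓ^m (with the Condon-Shortley phase, as in Table 6.5) and uses a simple Riemann sum over the sphere; the grid resolution and the (ℓ, m) pairs are arbitrary choices.

    # Sketch: check the orthonormality of the spherical harmonics defined in (6.143).
    import numpy as np
    from math import factorial
    from scipy.special import lpmv

    def Y(l, m, theta, phi):
        norm = np.sqrt((2*l + 1) / (4*np.pi) * factorial(l - m) / factorial(l + m))
        return (-1)**m * norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

    th = np.linspace(0, np.pi, 200)
    ph = np.linspace(0, 2*np.pi, 400)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    dA = np.sin(TH) * (th[1] - th[0]) * (ph[1] - ph[0])

    print(np.sum(np.abs(Y(2, 1, TH, PH))**2 * dA))                        # close to 1
    print(abs(np.sum(Y(2, 1, TH, PH) * np.conj(Y(3, 1, TH, PH)) * dA)))   # close to 0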

As seen earlier in the chapter, the spherical harmonics are eigenfunctions of the eigenvalue problem LY = −λY, where

L = (1/sin θ) ∂/∂θ ( sin θ ∂/∂θ ) + (1/sin² θ) ∂²/∂φ².

This operator appears in many problems in which there is spherical symmetry, such as obtaining the solution of Schrödinger's equation for the hydrogen atom, as we will see later. Therefore, it is customary to plot spherical harmonics. Because the Yℓm's are complex functions, one typically plots either the real part or the modulus squared. One rendition of |Yℓm(θ, φ)|² is shown in Table 6.6 for ℓ, m = 0, 1, 2, 3.

Table 6.6: The first few spherical harmonics, |Yℓm(θ, φ)|², arranged with m = 0, 1, 2, 3 across the columns and ℓ = 0, 1, 2, 3 down the rows.

We could also look for the nodal curves of the spherical harmonics, like we had for the vibrating membranes. Such surface plots on a sphere are shown in Table 6.7. The colors indicate the amplitude of |Yℓm(θ, φ)|². We can match these with the shapes in Table 6.6 by coloring the plots with some of the same colors, as shown in Table 6.7. However, by plotting just the sign of the spherical harmonics, as in Table 6.8, we can pick out the nodal curves much more easily.


Table 6.7: Spherical harmonic contours for |Yℓm(θ, φ)|², arranged with m = 0, 1, 2, 3 across the columns and ℓ = 0, 1, 2, 3 down the rows.

Table 6.8: In these figures we show the nodal curves of |Yℓm(θ, φ)|², arranged with m = 0, 1, 2, 3 across the columns and ℓ = 0, 1, 2, 3 down the rows. Along the first column (m = 0) are the zonal harmonics, seen as ℓ horizontal circles. Along the top diagonal (m = ℓ) are the sectoral harmonics. These look like orange sections formed from m vertical circles. The remaining harmonics are tesseral harmonics. They look like a checkerboard pattern formed from intersections of ℓ − m horizontal circles and m vertical circles.

Figure 6.21: Zonal harmonics, ℓ = 1, m = 0.
Figure 6.22: Zonal harmonics, ℓ = 2, m = 0.
Figure 6.23: Sectoral harmonics, ℓ = 2, m = 2.
Figure 6.24: Tesseral harmonics, ℓ = 3, m = 1.

Spherical, or surface, harmonics can be further grouped into zonal, sectoral, and tesseral harmonics. Zonal harmonics correspond to the m = 0 modes. In this case, one seeks nodal curves for which Pℓ(cos θ) = 0. Solutions of this equation lead to constant θ values such that cos θ is a zero of the Legendre polynomial, Pℓ(x). The zonal harmonics correspond to the first column in Table 6.8. Since Pℓ(x) is a polynomial of degree ℓ, the zonal harmonics consist of ℓ latitudinal circles.

Sectoral, or meridional, harmonics result for the case that m = ±ℓ. For this case, we note that Pℓ^{±ℓ}(x) ∝ (1 − x²)^{m/2}. This function vanishes only for x = ±1, or θ = 0, π. Therefore, the nodal curves can only come from the azimuthal factor; one obtains the meridians satisfying the condition A cos mφ + B sin mφ = 0. Solutions of this equation are of the form φ = constant. These modes can be seen in Table 6.8 in the top diagonal and can be described as m circles passing through the poles, or longitudinal circles.

Tesseral harmonics consist of the rest of the modes, which typically look like a checkerboard glued to the surface of a sphere. Examples can be seen in the pictures of nodal curves, such as Table 6.8. Looking in Table 6.8 along the diagonals going downward from left to right, one can see the same number of latitudinal circles. In fact, there are ℓ − m latitudinal nodal curves in these figures.


In summary, the spherical harmonics have several representations, as shown in Tables 6.7-6.8. Note that there are ℓ nodal lines, m meridional curves, and ℓ − m horizontal curves in these figures. The plots in Table 6.6 are the typical plots shown in physics for discussion of the wavefunctions of the hydrogen atom. Those in Table 6.7 are useful for describing gravitational or electric potential functions, temperature distributions, or wave modes on a spherical surface. The relationships between these pictures and the nodal curves can be better understood by comparing respective plots. Several modes were separated out in Figures 6.21-6.26 to make this comparison easier.

Figure 6.25: Sectoral harmonics, ℓ = 3, m = 3.
Figure 6.26: Tesseral harmonics, ℓ = 4, m = 3.

6.6 Spherically Symmetric Vibrations

Figure 6.27: A vibrating sphere of radius r with the initial conditions u(θ, φ, 0) = f(θ, φ), ut(θ, φ, 0) = g(θ, φ).

Another application of spherical harmonics is a vibrating spherical membrane, such as a balloon. Just as for the two-dimensional membranes encountered earlier, we let u(θ, φ, t) represent the vibrations of the surface about a fixed radius obeying the wave equation, utt = c²∇²u, and satisfying the initial conditions

u(θ, φ, 0) = f(θ, φ), ut(θ, φ, 0) = g(θ, φ).

In spherical coordinates, we have (for ρ = r = constant)

\[ u_{tt} = \frac{c^2}{r^2}\left(\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial u}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2 u}{\partial\phi^2}\right), \qquad (6.144) \]

where u = u(θ, φ, t). The boundary conditions are given by the periodic boundary conditions

\[ u(\theta,0,t) = u(\theta,2\pi,t), \qquad u_\phi(\theta,0,t) = u_\phi(\theta,2\pi,t), \]

where 0 < t and 0 < θ < π, together with the requirement that u = u(θ, φ, t) remain bounded.

Noting that the wave equation takes the form

\[ u_{tt} = \frac{c^2}{r^2}\,L u, \quad \text{where} \quad L Y_{\ell m} = -\ell(\ell+1)Y_{\ell m} \]

for the spherical harmonics Yℓm(θ, φ) = Pℓ^m(cos θ)e^{imφ}, we can seek product solutions of the form

\[ u_{\ell m}(\theta,\phi,t) = T(t)\,Y_{\ell m}(\theta,\phi). \]

Inserting this form into the wave equation in spherical coordinates, we find

\[ T''\,Y_{\ell m} = -\frac{c^2}{r^2}\,T(t)\,\ell(\ell+1)\,Y_{\ell m}, \]

or

\[ T'' + \ell(\ell+1)\frac{c^2}{r^2}\,T(t) = 0. \]


The solutions of this equation are easily found as

\[ T(t) = A\cos\omega_\ell t + B\sin\omega_\ell t, \qquad \omega_\ell = \sqrt{\ell(\ell+1)}\,\frac{c}{r}. \]

Therefore, the product solutions are given by

\[ u_{\ell m}(\theta,\phi,t) = [A\cos\omega_\ell t + B\sin\omega_\ell t]\,Y_{\ell m}(\theta,\phi) \]

for ℓ = 0, 1, . . . and m = −ℓ, −ℓ + 1, . . . , ℓ. In Figure 6.28 we show several solutions for r = c = 1 at t = 10.

Figure 6.28: Modes for a vibrating spherical membrane: Row 1: (1, 0), (1, 1); Row 2: (2, 0), (2, 1), (2, 2); Row 3: (3, 0), (3, 1), (3, 2), (3, 3).

The general solution is found as

\[ u(\theta,\phi,t) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}[A_{\ell m}\cos\omega_\ell t + B_{\ell m}\sin\omega_\ell t]\,Y_{\ell m}(\theta,\phi). \]

An interesting problem is to consider hitting the balloon with a velocity impulse while it is at rest. An example of such a solution is shown in Figure 6.29. In this image several modes are excited after the impulse.

Figure 6.29: A moment captured from a simulation of a spherical membrane after being hit with a velocity impulse.
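As a rough numerical illustration (not part of the text), the mode frequencies ωℓ and a single product solution can be evaluated directly. The Python sketch below assumes NumPy and SciPy are available and uses scipy.special.sph_harm for Yℓm; the choices ℓ = 2, m = 1, t = 10 are arbitrary.

import numpy as np
from scipy.special import sph_harm

c, r = 1.0, 1.0                                   # wave speed and radius, as in the text
for ell in range(4):
    omega = np.sqrt(ell * (ell + 1)) * c / r      # omega_ell = sqrt(l(l+1)) c / r
    print(f"l = {ell}: omega_l = {omega:.4f}")

# Evaluate one product solution u_lm at t = 10 on a (theta, phi) grid.
ell, m, t = 2, 1, 10.0
theta = np.linspace(0.0, np.pi, 91)               # polar angle
phi = np.linspace(0.0, 2 * np.pi, 181)            # azimuthal angle
Theta, Phi = np.meshgrid(theta, phi)
Y = sph_harm(m, ell, Phi, Theta)                  # scipy convention: sph_harm(m, l, azimuthal, polar)
omega = np.sqrt(ell * (ell + 1)) * c / r
u = np.real(np.cos(omega * t) * Y)                # A = 1, B = 0 chosen for illustration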

6.7 Baking a Spherical Turkey

During one year as this course was being taught, the instructor returned from the American holiday of Thanksgiving, where it is customary to cook a turkey. Such a turkey is shown in Figure 6.30. This reminded the instructor of a typical problem, such as in Weinberger (1995, p. 92), where one is given a roast of a certain volume and is asked to find the time it takes to cook one double the size. In this section, we explore a similar problem for cooking a turkey.

Often during this time of the year, November, articles appear with some scientific evidence as to how to gauge how long it takes to cook a turkey of a given weight. Inevitably such an article refers to the story, as told at http://today.slac.stanford.edu/a/2008/11-26.htm, that Pief Panofsky, a former SLAC Director, was determined to find a nonlinear equation for determining cooking times instead of using the rule of thumb of 30 minutes per pound of turkey.


Figure 6.30: A 12-lb turkey leaving the oven.

He had arrived at the form

\[ t = \frac{W^{2/3}}{1.5}, \]

where t is the cooking time and W is the weight of the turkey in pounds. Nowadays, one can go to Wolframalpha.com and enter the question "how long should you cook a turkey" and get results based on a similar formula.

Before turning to the solution of the heat equation for a turkey, let's consider a simpler problem.

Example 6.12. If it takes 4 hours to cook a 10 pound turkey in a 350°F oven, then how long would it take to cook a 20 pound turkey under the same conditions?

In all of our analysis, we will consider a spherical turkey. While the turkey in Figure 6.30 is not quite spherical, we are free to approximate the turkey as such. If you prefer, we could imagine a spherical turkey like the one shown in Figure 6.31.

This problem is one of scaling. Thinking of the turkey as being spherically symmetric, the baking follows the heat equation in the form

\[ u_t = \frac{k}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial u}{\partial r}\right). \]

We can rescale the variables from coordinates (r, t) to (ρ, τ) as r = βρ and t = ατ. Then the derivatives transform as

\[ \frac{\partial}{\partial r} = \frac{\partial\rho}{\partial r}\frac{\partial}{\partial\rho} = \frac{1}{\beta}\frac{\partial}{\partial\rho}, \qquad \frac{\partial}{\partial t} = \frac{\partial\tau}{\partial t}\frac{\partial}{\partial\tau} = \frac{1}{\alpha}\frac{\partial}{\partial\tau}. \qquad (6.145) \]

Inserting these transformations into the heat equation, we have

\[ u_\tau = \frac{\alpha}{\beta^2}\,\frac{k}{\rho^2}\frac{\partial}{\partial\rho}\left(\rho^2\frac{\partial u}{\partial\rho}\right). \]

To keep the conditions the same, we need α = β². So, the transformation that keeps the form of the heat equation the same, or makes it invariant, is r = βρ and t = β²τ. This is also known as a self-similarity transformation.


Figure 6.31: The depiction of a spherical turkey.

So, if the radius increases by a factor of β, then the time to cook the turkey (that is, to reach a given temperature, u) increases by a factor of β². Returning to the problem, if the weight of the turkey doubles, then the volume doubles, assuming that the density is held constant. However, the volume is proportional to r³, so r increases by a factor of 2^{1/3}. Therefore, the time increases by a factor of 2^{2/3} ≈ 1.587. This gives the time for cooking a 20 lb turkey as t = 4(2^{2/3}) = 2^{8/3} ≈ 6.35 hours.
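The scaling argument amounts to one line of arithmetic; here is a minimal Python check (an aside, not from the text):

t1 = 4.0                 # hours for the 10 lb turkey
beta = 2 ** (1 / 3)      # radius scale factor when the volume doubles
t2 = beta**2 * t1        # time scales as beta^2 under r -> beta*rho, t -> beta^2*tau
print(t2)                # 6.3496..., i.e. 4 * 2^(2/3) hours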

The previous example shows the power of using similarity transformations to get general information about solutions of differential equations. However, we have focused on using the method of separation of variables for most of the book so far. We should be able to find a solution to the spherical turkey model using these methods as well. This will be shown in the next example.

Example 6.13. Find the temperature, T(ρ, t), inside a spherical turkey, initially at 40°F, which is placed in a 350°F oven. Assume that the turkey is of constant density and that the surface of the turkey is maintained at the oven temperature. [We will also neglect convection and radiation processes inside the oven.]

The problem can be formulated as a heat equation problem for T(ρ, t):

\[ T_t = \frac{k}{\rho^2}\frac{\partial}{\partial\rho}\left(\rho^2\frac{\partial T}{\partial\rho}\right), \quad 0 < \rho < a, \quad t > 0, \]
\[ T(a,t) = 350, \quad T(\rho,t)\ \text{finite at}\ \rho = 0, \quad t > 0, \]
\[ T(\rho,0) = 40. \qquad (6.146) \]

We note that the boundary condition is not homogeneous. However, we can fix that by introducing the auxiliary function (the difference between the turkey and oven temperatures) u(ρ, t) = T(ρ, t) − T_a, where T_a = 350. Then, the problem to be solved becomes

\[ u_t = \frac{k}{\rho^2}\frac{\partial}{\partial\rho}\left(\rho^2\frac{\partial u}{\partial\rho}\right), \quad 0 < \rho < a, \quad t > 0, \]
\[ u(a,t) = 0, \quad u(\rho,t)\ \text{finite at}\ \rho = 0, \quad t > 0, \]
\[ u(\rho,0) = T_i - T_a = -310, \qquad (6.147) \]


where T_i = 40.

We can now employ the method of separation of variables. Let u(ρ, t) = R(ρ)G(t). Inserting this into the heat equation for u, we have

\[ \frac{1}{k}\frac{G'}{G} = \frac{1}{R}\left(R'' + \frac{2}{\rho}R'\right) = -\lambda. \]

This gives two ordinary differential equations, the temporal equation,

\[ G' = -k\lambda G, \qquad (6.148) \]

and the radial equation,

\[ \rho R'' + 2R' + \lambda\rho R = 0. \qquad (6.149) \]

The temporal equation is easy to solve,

\[ G(t) = G_0 e^{-\lambda k t}. \]

However, the radial equation is slightly more difficult. But, making the substitution R(ρ) = y(ρ)/ρ, it is readily transformed into a simpler form:

\[ y'' + \lambda y = 0. \]

Note: The radial equation almost looks familiar when it is multiplied by ρ: ρ²R'' + 2ρR' + λρ²R = 0. If it were not for the '2', it would be the zeroth order Bessel equation. It is actually the zeroth order spherical Bessel equation. In general, the spherical Bessel functions, j_n(x) and y_n(x), satisfy x²y'' + 2xy' + [x² − n(n+1)]y = 0. So, the radial solution of the turkey problem is R(ρ) = j₀(√λ ρ) = sin(√λ ρ)/(√λ ρ). We further note that j_n(x) = √(π/(2x)) J_{n+1/2}(x).

The boundary conditions on u(ρ, t) = R(ρ)G(t) transfer to R(a) = 0 and R(ρ) finite at the origin. In turn, this means that y(a) = 0 and y(ρ) has to vanish near the origin. If y(ρ) does not vanish near the origin, then R(ρ) is not finite as ρ → 0.

So, we need to solve the boundary value problem

\[ y'' + \lambda y = 0, \quad y(0) = 0, \quad y(a) = 0. \]

This gives the well-known set of eigenfunctions

\[ y(\rho) = \sin\frac{n\pi\rho}{a}, \qquad \lambda_n = \left(\frac{n\pi}{a}\right)^2, \quad n = 1, 2, 3, \ldots. \]

Therefore, we have found

\[ R(\rho) = \frac{\sin\frac{n\pi\rho}{a}}{\rho}, \qquad \lambda_n = \left(\frac{n\pi}{a}\right)^2, \quad n = 1, 2, 3, \ldots. \]

The general solution to the auxiliary problem is

\[ u(\rho,t) = \sum_{n=1}^{\infty} A_n\,\frac{\sin\frac{n\pi\rho}{a}}{\rho}\,e^{-(n\pi/a)^2 k t}. \]

This gives the general solution for the temperature as

\[ T(\rho,t) = T_a + \sum_{n=1}^{\infty} A_n\,\frac{\sin\frac{n\pi\rho}{a}}{\rho}\,e^{-(n\pi/a)^2 k t}. \]

All that remains is to find the solution satisfying the initial condition, T(ρ, 0) = 40. Inserting t = 0, we have

\[ T_i - T_a = \sum_{n=1}^{\infty} A_n\,\frac{\sin\frac{n\pi\rho}{a}}{\rho}. \]


This is almost a Fourier sine series. Multiplying by ρ, we have

\[ (T_i - T_a)\rho = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi\rho}{a}. \]

Now, we can solve for the coefficients,

\[ A_n = \frac{2}{a}\int_0^a (T_i - T_a)\,\rho\,\sin\frac{n\pi\rho}{a}\,d\rho = \frac{2a}{n\pi}(T_i - T_a)(-1)^{n+1}. \qquad (6.150) \]

This gives the final solution,

\[ T(\rho,t) = T_a + \frac{2a(T_i - T_a)}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\,\frac{\sin\frac{n\pi\rho}{a}}{\rho}\,e^{-(n\pi/a)^2 k t}. \]

For generality, the ambient and initial temperatures were left in terms of T_a and T_i, respectively.

It is interesting to use the above solution to compare roasting different turkeys. We take the same conditions as above. Let the radius of the spherical turkey be six inches. We will assume that such a turkey takes four hours to cook, i.e., reach a temperature of 180°F at ρ = a/2. Plotting the solution with 400 terms, one finds that k ≈ 0.000089. This gives a "baking time" of t₁ = 239.63 minutes. A plot of the temperature at the point ρ = a/2 of the bird is shown in Figure 6.32.
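A minimal Python sketch of this computation is given below. It is not from the text; it assumes the fitted diffusivity k ≈ 0.000089 carries units of ft²/min so that times come out in minutes, and it uses scipy.optimize.brentq to solve T(a/2, t) = 180.

import numpy as np
from scipy.optimize import brentq

Ta, Ti = 350.0, 40.0      # oven and initial temperatures (deg F)
k = 0.000089              # diffusivity fitted in the text (assumed ft^2/min)

def T(rho, t, a, nterms=400):
    # partial sum of the series solution for the turkey temperature
    n = np.arange(1, nterms + 1)
    series = ((-1.0)**(n + 1) / n) * np.sin(n * np.pi * rho / a) / rho \
             * np.exp(-(n * np.pi / a)**2 * k * t)
    return Ta + 2 * a * (Ti - Ta) / np.pi * series.sum()

a1 = 0.5                                               # radius 6 in = 0.5 ft
t1 = brentq(lambda t: T(a1 / 2, t, a1) - 180.0, 1.0, 2000.0)
a2 = 0.5 * 2**(1 / 3)                                  # radius after doubling the volume
t2 = brentq(lambda t: T(a2 / 2, t, a2) - 180.0, 1.0, 2000.0)
print(t1, t2, t2 / t1)                                 # roughly 240, 380, and 2^(2/3)

Running it reproduces a baking time near 240 minutes for a = 0.5 ft and a time ratio near 2^{2/3} when the radius is increased by 2^{1/3}.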

Figure 6.32: The temperature at ρ = a/2 in a turkey with radius a = 0.5 ft and k ≈ 0.000089.

Using the same constants, but increasing the radius of the turkey to a = 0.5(2^{1/3}) ft, we obtain the temperature plot in Figure 6.33. This radius corresponds to doubling the volume of the turkey. Solving for the time at which the temperature at ρ = a/2 reaches 180°F, we obtain t₂ = 380.38.


Comparing the two times, we find the ratio (using the full computation of the solution in Maple)

\[ \frac{t_2}{t_1} = \frac{380.3813709}{239.6252478} \approx 1.587401054. \]

This compares well to

\[ 2^{2/3} \approx 1.587401052. \]

Of course, the point ρ = a/2 is not quite the center of the spherical turkey. The reader can work out the details for other locations. Perhaps other interesting models would be a spherical shell of turkey with a core of bread stuffing, or one might consider an ellipsoidal geometry.

Figure 6.33: The temperature at ρ = a/2 in a turkey with radius a = 0.5(2^{1/3}) ft and k ≈ 0.000089.


6.8 Schrödinger Equation in Spherical Coordinates

Another important eigenvalue problem in physics is the Schrödinger equation. The time-dependent Schrödinger equation is given by

\[ i\hbar\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi. \qquad (6.151) \]

Here Ψ(r, t) is the wave function, which determines the quantum state of a particle of mass m subject to a (time independent) potential, V(r). From Planck's constant, h, one defines ℏ = h/2π. The probability of finding the particle in an infinitesimal volume, dV, is given by |Ψ(r, t)|² dV, assuming the wave function is normalized,

\[ \int_{\text{all space}}|\Psi(\mathbf{r},t)|^2\,dV = 1. \]


One can separate out the time dependence by assuming a special form, Ψ(r, t) = ψ(r)e^{−iEt/ℏ}, where E is the energy of the particular stationary state solution, or product solution. Inserting this form into the time-dependent equation, one finds that ψ(r) satisfies the time-independent Schrödinger equation,

\[ -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi = E\psi. \qquad (6.152) \]

Assuming that the potential depends only on the distance from the origin, V = V(ρ), we can further separate out the radial part of this solution using spherical coordinates. Recall that the Laplacian in spherical coordinates is given by

\[ \nabla^2 = \frac{1}{\rho^2}\frac{\partial}{\partial\rho}\left(\rho^2\frac{\partial}{\partial\rho}\right) + \frac{1}{\rho^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\rho^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}. \qquad (6.153) \]

Then, the time-independent Schrödinger equation can be written as

\[ -\frac{\hbar^2}{2m}\left[\frac{1}{\rho^2}\frac{\partial}{\partial\rho}\left(\rho^2\frac{\partial\psi}{\partial\rho}\right) + \frac{1}{\rho^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{\rho^2\sin^2\theta}\frac{\partial^2\psi}{\partial\phi^2}\right] = [E - V(\rho)]\psi. \qquad (6.154) \]

Let's continue with the separation of variables. Assuming that the wave function takes the form ψ(ρ, θ, φ) = R(ρ)Y(θ, φ), we obtain

\[ -\frac{\hbar^2}{2m}\left[\frac{Y}{\rho^2}\frac{d}{d\rho}\left(\rho^2\frac{dR}{d\rho}\right) + \frac{R}{\rho^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right) + \frac{R}{\rho^2\sin^2\theta}\frac{\partial^2 Y}{\partial\phi^2}\right] = [E - V(\rho)]\,R\,Y. \qquad (6.155) \]

Dividing by ψ = RY, multiplying by −2mρ²/ℏ², and rearranging, we have

\[ \frac{1}{R}\frac{d}{d\rho}\left(\rho^2\frac{dR}{d\rho}\right) - \frac{2m\rho^2}{\hbar^2}[V(\rho) - E] = -\frac{1}{Y}\,L\,Y, \]

where

\[ L = \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}. \]

We have a function of ρ equal to a function of the angular variables. So, we set each side equal to a constant. We will judiciously write the separation constant as ℓ(ℓ + 1). The resulting equations are then

\[ \frac{d}{d\rho}\left(\rho^2\frac{dR}{d\rho}\right) - \frac{2m\rho^2}{\hbar^2}[V(\rho) - E]\,R = \ell(\ell+1)R, \qquad (6.156) \]

\[ \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2 Y}{\partial\phi^2} = -\ell(\ell+1)Y. \qquad (6.157) \]

The second of these equations should look familiar from the last section. This is the equation for the spherical harmonics,

\[ Y_{\ell m}(\theta,\phi) = \sqrt{\frac{2\ell+1}{2}\,\frac{(\ell-m)!}{(\ell+m)!}}\,P_\ell^m(\cos\theta)\,e^{im\phi}. \qquad (6.158) \]


So, any further analysis of the problem depends upon the choice of potential, V(ρ), and the solution of the radial equation. For this, we turn to the determination of the wave function for an electron in orbit about a proton.

Example 6.14. The Hydrogen Atom: ℓ = 0 States.

Historically, the first test of the Schrödinger equation was the determination of the energy levels in a hydrogen atom. This is modeled by an electron orbiting a proton. The potential energy is provided by the Coulomb potential,

\[ V(\rho) = -\frac{e^2}{4\pi\epsilon_0\rho}. \]

Thus, the radial equation becomes

\[ \frac{d}{d\rho}\left(\rho^2\frac{dR}{d\rho}\right) + \frac{2m\rho^2}{\hbar^2}\left[\frac{e^2}{4\pi\epsilon_0\rho} + E\right]R = \ell(\ell+1)R. \qquad (6.159) \]

Before looking for solutions, we need to simplify the equation by absorbing some of the constants. One way to do this is to make an appropriate change of variables. Let ρ = ar. Then, by the Chain Rule we have

\[ \frac{d}{d\rho} = \frac{dr}{d\rho}\frac{d}{dr} = \frac{1}{a}\frac{d}{dr}. \]

Under this transformation, the radial equation becomes

\[ \frac{d}{dr}\left(r^2\frac{du}{dr}\right) + \frac{2ma^2 r^2}{\hbar^2}\left[\frac{e^2}{4\pi\epsilon_0 a r} + E\right]u = \ell(\ell+1)u, \qquad (6.160) \]

where u(r) = R(ρ). Expanding the second term,

\[ \frac{2ma^2 r^2}{\hbar^2}\left[\frac{e^2}{4\pi\epsilon_0 a r} + E\right]u = \left[\frac{mae^2}{2\pi\epsilon_0\hbar^2}\,r + \frac{2mEa^2}{\hbar^2}\,r^2\right]u, \]

we see that we can define

\[ a = \frac{2\pi\epsilon_0\hbar^2}{me^2}, \qquad (6.161) \]

\[ \epsilon = -\frac{2mEa^2}{\hbar^2} = -\frac{2(2\pi\epsilon_0)^2\hbar^2}{me^4}\,E. \qquad (6.162) \]

Using these constants, the radial equation becomes

\[ \frac{d}{dr}\left(r^2\frac{du}{dr}\right) + ru - \ell(\ell+1)u = \epsilon r^2 u. \qquad (6.163) \]

Expanding the derivative and dividing by r²,

\[ u'' + \frac{2}{r}u' + \frac{1}{r}u - \frac{\ell(\ell+1)}{r^2}u = \epsilon u. \qquad (6.164) \]

The first two terms in this differential equation came from the Laplacian. The third term came from the Coulomb potential. The fourth term can be thought of as contributing to the potential and is attributed to angular momentum. Thus, ℓ is called the angular momentum quantum number. This is an eigenvalue problem for the radial eigenfunctions u(r) and energy eigenvalues ε.

The solutions of this equation are determined in a quantum mechanics course. In order to get a feeling for the solutions, we will consider the zero angular momentum case, ℓ = 0:

\[ u'' + \frac{2}{r}u' + \frac{1}{r}u = \epsilon u. \qquad (6.165) \]

Even this equation is one we have not encountered in this book. Let's see if we can find some of the solutions.

First, we consider the behavior of the solutions for large r. For large r the second and third terms on the left hand side of the equation are negligible. So, we have the approximate equation

\[ u'' - \epsilon u = 0. \qquad (6.166) \]

Therefore, the solutions behave like u(r) = e^{±√ε r} for large r. For bounded solutions, we choose the decaying solution.

This suggests that solutions take the form u(r) = v(r)e^{−√ε r} for some unknown function, v(r). Inserting this guess into Equation (6.165) gives an equation for v(r):

\[ r v'' + 2\left(1 - \sqrt{\epsilon}\,r\right)v' + (1 - 2\sqrt{\epsilon})v = 0. \qquad (6.167) \]

Next we seek a series solution to this equation. Let

\[ v(r) = \sum_{k=0}^{\infty} c_k r^k. \]

Inserting this series into Equation (6.167), we have

\[ \sum_{k=1}^{\infty}[k(k-1) + 2k]\,c_k r^{k-1} + \sum_{k=0}^{\infty}\left[1 - 2\sqrt{\epsilon}(k+1)\right]c_k r^k = 0. \]

We can re-index the dummy variable in each sum. Let k = m in the first sum and k = m − 1 in the second sum. We then find that

\[ \sum_{m=1}^{\infty}\left[m(m+1)c_m + \left(1 - 2m\sqrt{\epsilon}\right)c_{m-1}\right]r^{m-1} = 0. \]

Since this has to hold for all m ≥ 1,

\[ c_m = \frac{2m\sqrt{\epsilon} - 1}{m(m+1)}\,c_{m-1}. \]

Further analysis indicates that the resulting series leads to unbounded solutions unless the series terminates. This is only possible if the numerator, 2m√ε − 1, vanishes for m = n, n = 1, 2, . . . . Thus,

\[ \epsilon = \frac{1}{4n^2}. \]

Since ε is related to the energy eigenvalue, E, we have

\[ E_n = -\frac{me^4}{2(4\pi\epsilon_0)^2\hbar^2 n^2}. \]
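The termination of the series is easy to check numerically. The short Python sketch below (an illustration, not from the text) iterates the recursion for c_m and shows that the coefficients vanish from m = n on when √ε = 1/(2n), while a generic value of ε gives a nonterminating series.

import numpy as np

def series_coeffs(sqrt_eps, nmax=10, c0=1.0):
    # c_m = (2 m sqrt(eps) - 1) / (m (m + 1)) * c_{m-1}
    c = [c0]
    for m in range(1, nmax + 1):
        c.append((2 * m * sqrt_eps - 1) / (m * (m + 1)) * c[-1])
    return np.array(c)

print(series_coeffs(1 / (2 * 3)))   # sqrt(eps) = 1/(2n) with n = 3: c_3, c_4, ... are zero
print(series_coeffs(0.4))           # generic value: the series never terminates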


Inserting the values for the constants, this gives

\[ E_n = -\frac{13.6\ \text{eV}}{n^2}. \]

This is the well-known set of energy levels for the hydrogen atom.
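As a check on the constant, the combination me⁴/(2(4πε₀)²ℏ²) can be evaluated from tabulated physical constants; a small Python sketch using scipy.constants (an aside, not from the text):

from scipy.constants import m_e, e, epsilon_0, hbar, pi, electron_volt

E1 = m_e * e**4 / (2 * (4 * pi * epsilon_0)**2 * hbar**2)
print(E1 / electron_volt)    # about 13.6 eV, the magnitude of the hydrogen ground state energy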

The corresponding eigenfunctions are polynomials, since the infinite series was forced to terminate. We could obtain these polynomials by iterating the recursion equation for the c_m's. However, we will instead rewrite the radial equation (6.167).

Let x = 2√ε r and define y(x) = v(r). Then

\[ \frac{d}{dr} = 2\sqrt{\epsilon}\,\frac{d}{dx}. \]

This gives

\[ 2\sqrt{\epsilon}\,x y'' + (2 - x)\,2\sqrt{\epsilon}\,y' + (1 - 2\sqrt{\epsilon})y = 0. \]

Rearranging, we have

\[ x y'' + (2 - x)y' + \frac{1}{2\sqrt{\epsilon}}(1 - 2\sqrt{\epsilon})y = 0. \]

Noting that 2√ε = 1/n, this equation becomes

\[ x y'' + (2 - x)y' + (n - 1)y = 0. \qquad (6.168) \]

The resulting equation is well known. It takes the form

\[ x y'' + (\alpha + 1 - x)y' + n y = 0. \qquad (6.169) \]

Solutions of this equation are the associated Laguerre polynomials. The solutions are denoted by L^α_n(x). They can be defined in terms of the Laguerre polynomials,

\[ L_n(x) = \frac{e^x}{n!}\left(\frac{d}{dx}\right)^n\left(e^{-x}x^n\right). \]

The associated Laguerre polynomials are defined as

\[ L^m_{n-m}(x) = (-1)^m\left(\frac{d}{dx}\right)^m L_n(x). \]

Note: The Laguerre polynomials were first encountered in Problem 2 in Chapter 5 as an example of a classical orthogonal polynomial defined on [0, ∞) with weight w(x) = e^{−x}. Some of these polynomials are listed in Table 6.9 and several Laguerre polynomials are shown in Figure 6.34. The associated Laguerre polynomials are named after the French mathematician Edmond Laguerre (1834-1886).

Comparing Equation (6.168) with Equation (6.169), we find that y(x) = L^1_{n−1}(x).

In summary, we have made the following transformations:

1. R(ρ) = u(r), ρ = ar.

2. u(r) = v(r)e^{−√ε r}.

3. v(r) = y(x) = L^1_{n−1}(x), x = 2√ε r.

Note: In most derivations in quantum mechanics a = a₀/2, where a₀ = 4πε₀ℏ²/(me²) is the Bohr radius, a₀ = 5.2917 × 10⁻¹¹ m.


Table 6.9: Associated Laguerre Functions, L^m_n(x). Note that L^0_n(x) = L_n(x).

L^0_0(x) = 1
L^0_1(x) = 1 − x
L^0_2(x) = (1/2)(x² − 4x + 2)
L^0_3(x) = (1/6)(−x³ + 9x² − 18x + 6)
L^1_0(x) = 1
L^1_1(x) = 2 − x
L^1_2(x) = (1/2)(x² − 6x + 6)
L^1_3(x) = (1/6)(−x³ + 12x² − 36x + 24)
L^2_0(x) = 1
L^2_1(x) = 3 − x
L^2_2(x) = (1/2)(x² − 8x + 12)
L^2_3(x) = (1/12)(−2x³ + 30x² − 120x + 120)

Figure 6.34: Plots of the first few Laguerre polynomials.
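For readers who want to generate these polynomials, scipy.special.genlaguerre provides generalized Laguerre polynomials whose normalization appears to agree with the entries of Table 6.9. A short, illustrative Python sketch (not from the text):

from scipy.special import genlaguerre

# print L^m_n(x) for m = 0, 1, 2 and n = 0, ..., 3; compare with Table 6.9
for m in range(3):
    for n in range(4):
        print(f"L^{m}_{n}(x):")
        print(genlaguerre(n, m))     # a poly1d object; printing shows its coefficients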


Figure 6.35: Plots of R(ρ) for a = 1 and n = 1, 2, 3, 4 for the ℓ = 0 states.

Therefore,

\[ R(\rho) = e^{-\sqrt{\epsilon}\,\rho/a}\,L^1_{n-1}\!\left(2\sqrt{\epsilon}\,\rho/a\right). \]

However, we also found that 2√ε = 1/n. So,

\[ R(\rho) = e^{-\rho/2na}\,L^1_{n-1}(\rho/na). \]

In Figure 6.35 we show a few of these solutions.

Example 6.15. Find the ℓ ≥ 0 solutions of the radial equation.

For the general case, for all ℓ ≥ 0, we need to solve the differential equation

\[ u'' + \frac{2}{r}u' + \frac{1}{r}u - \frac{\ell(\ell+1)}{r^2}u = \epsilon u. \qquad (6.170) \]

Instead of letting u(r) = v(r)e^{−√ε r}, we let

\[ u(r) = v(r)\,r^\ell e^{-\sqrt{\epsilon}\,r}. \]

This leads to the differential equation

\[ r v'' + 2\left(\ell + 1 - \sqrt{\epsilon}\,r\right)v' + \left(1 - 2(\ell+1)\sqrt{\epsilon}\right)v = 0. \qquad (6.171) \]

As before, we let x = 2√ε r and y(x) = v(r) to obtain

\[ x y'' + 2\left[\ell + 1 - \frac{x}{2}\right]y' + \left[\frac{1}{2\sqrt{\epsilon}} - (\ell + 1)\right]y = 0. \]

Noting that 2√ε = 1/n, we have

\[ x y'' + \left[2(\ell+1) - x\right]y' + (n - \ell - 1)y = 0. \]

We see that this is once again in the form of the associated Laguerre equation and the solutions are

\[ y(x) = L^{2\ell+1}_{n-\ell-1}(x). \]


So, the solution to the radial equation for the hydrogen atom is given by

\[ R(\rho) = r^\ell e^{-\sqrt{\epsilon}\,r}\,L^{2\ell+1}_{n-\ell-1}\!\left(2\sqrt{\epsilon}\,r\right) = \left(\frac{\rho}{2na}\right)^{\ell} e^{-\rho/2na}\,L^{2\ell+1}_{n-\ell-1}\!\left(\frac{\rho}{na}\right). \qquad (6.172) \]

Interpretations of these solutions will be left for your quantum mechanics course.
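A quick way to reproduce plots like Figure 6.35 for general n and ℓ is to evaluate Equation (6.172) with scipy.special.genlaguerre. The following Python sketch is illustrative only; as in the text, the overall normalization constant is omitted.

import numpy as np
from scipy.special import genlaguerre

def R(rho, n, ell, a=1.0):
    # Unnormalized radial function from Equation (6.172):
    # R(rho) = (rho/(2na))^ell * exp(-rho/(2na)) * L^{2l+1}_{n-l-1}(rho/(na))
    x = rho / (n * a)
    return (x / 2)**ell * np.exp(-x / 2) * genlaguerre(n - ell - 1, 2 * ell + 1)(x)

rho = np.linspace(0.01, 40.0, 400)
vals = {(n, ell): R(rho, n, ell) for n in range(1, 5) for ell in range(n)}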

6.9 Curvilinear Coordinates

In order to study solutions of the wave equation, the heat equation, or even Schrödinger's equation in different geometries, we need to see how differential operators, such as the Laplacian, appear in these geometries. The most common coordinate systems arising in physics are polar coordinates, cylindrical coordinates, and spherical coordinates. These reflect the common geometrical symmetries often encountered in physics.

In such systems it is easier to describe boundary conditions and to make use of these symmetries. For example, specifying that the electric potential is 10.0 V on a spherical surface of radius one, we would say φ(x, y, z) = 10 for x² + y² + z² = 1. However, if we use spherical coordinates, (r, θ, φ), then we would say φ(r, θ, φ) = 10 for r = 1, or φ(1, θ, φ) = 10. This is a much simpler representation of the boundary condition.

However, this simplicity in boundary conditions leads to a more complicated looking partial differential equation in spherical coordinates. In this section we will consider general coordinate systems and how the differential operators are written in the new coordinate systems. This is a more general approach than that taken earlier in the chapter. For a more modern and elegant approach, one can use differential forms.

We begin by introducing the general coordinate transformations between Cartesian coordinates and the more general curvilinear coordinates. Let the Cartesian coordinates be designated by (x₁, x₂, x₃) and the new coordinates by (u₁, u₂, u₃). We will assume that these are related through the transformations

\[ x_1 = x_1(u_1,u_2,u_3), \quad x_2 = x_2(u_1,u_2,u_3), \quad x_3 = x_3(u_1,u_2,u_3). \qquad (6.173) \]

Thus, given the curvilinear coordinates (u₁, u₂, u₃) for a specific point in space, we can determine the Cartesian coordinates, (x₁, x₂, x₃), of that point. We will assume that we can invert this transformation: given the Cartesian coordinates, one can determine the corresponding curvilinear coordinates.

In the Cartesian system we can assign an orthogonal basis, {i, j, k}. As a particle traces out a path in space, one locates its position by the coordinates (x₁, x₂, x₃). Picking x₂ and x₃ constant, the particle lies on the curve x₁ = value of the x₁ coordinate. This line lies in the direction of the basis vector i.


We can do the same with the other coordinates and essentially map out a grid in three dimensional space, as shown in Figure 6.36. All of the x_i-curves intersect at each point orthogonally and the basis vectors {i, j, k} lie along the grid lines and are mutually orthogonal. We would like to mimic this construction for general curvilinear coordinates. Requiring the orthogonality of the resulting basis vectors leads to orthogonal curvilinear coordinates.

Figure 6.36: Plots of x_i-curves forming an orthogonal Cartesian grid.

As for the Cartesian case, we consider u₂ and u₃ constant. This leads to a curve parametrized by u₁: r = x₁(u₁)i + x₂(u₁)j + x₃(u₁)k. We call this the u₁-curve. Similarly, when u₁ and u₃ are constant we obtain a u₂-curve, and for u₁ and u₂ constant we obtain a u₃-curve. We will assume that these curves intersect such that each pair of curves intersect orthogonally, as seen in Figure 6.37. Furthermore, we will assume that the unit tangent vectors to these curves form a right handed system similar to the {i, j, k} system for Cartesian coordinates. We will denote these by û₁, û₂, û₃.

Figure 6.37: Plots of general u_i-curves forming an orthogonal grid.

We can determine these tangent vectors from the coordinate transformations. Consider the position vector as a function of the new coordinates,

\[ \mathbf{r}(u_1,u_2,u_3) = x_1(u_1,u_2,u_3)\mathbf{i} + x_2(u_1,u_2,u_3)\mathbf{j} + x_3(u_1,u_2,u_3)\mathbf{k}. \]

Then, the infinitesimal change in position is given by

\[ d\mathbf{r} = \frac{\partial\mathbf{r}}{\partial u_1}du_1 + \frac{\partial\mathbf{r}}{\partial u_2}du_2 + \frac{\partial\mathbf{r}}{\partial u_3}du_3 = \sum_{i=1}^{3}\frac{\partial\mathbf{r}}{\partial u_i}du_i. \]

We note that the vectors ∂r/∂u_i are tangent to the u_i-curves. Thus, we define the unit tangent vectors

\[ \hat{u}_i = \frac{\partial\mathbf{r}/\partial u_i}{\left|\partial\mathbf{r}/\partial u_i\right|}. \]

Solving for the original tangent vector, we have

\[ \frac{\partial\mathbf{r}}{\partial u_i} = h_i\,\hat{u}_i, \qquad \text{where} \qquad h_i \equiv \left|\frac{\partial\mathbf{r}}{\partial u_i}\right|. \]

The h_i's are called the scale factors for the transformation. The infinitesimal change in position in the new basis is then given by

\[ d\mathbf{r} = \sum_{i=1}^{3} h_i\,du_i\,\hat{u}_i. \]

Example 6.16. Determine the scale factors for the polar coordinate transformation.

The transformation for polar coordinates is

\[ x = r\cos\theta, \qquad y = r\sin\theta. \]

Here we note that x₁ = x, x₂ = y, u₁ = r, and u₂ = θ. The u₁-curves are curves with θ = const. Thus, these curves are radial lines. Similarly, the u₂-curves have r = const. These curves are concentric circles about the origin, as shown in Figure 6.38.


The unit vectors are easily found. We will denote them by û_r and û_θ. We can determine these unit vectors by first computing ∂r/∂u_i. Let

\[ \mathbf{r} = x(r,\theta)\mathbf{i} + y(r,\theta)\mathbf{j} = r\cos\theta\,\mathbf{i} + r\sin\theta\,\mathbf{j}. \]

Then,

\[ \frac{\partial\mathbf{r}}{\partial r} = \cos\theta\,\mathbf{i} + \sin\theta\,\mathbf{j}, \qquad \frac{\partial\mathbf{r}}{\partial\theta} = -r\sin\theta\,\mathbf{i} + r\cos\theta\,\mathbf{j}. \qquad (6.174) \]

Figure 6.38: Plot of an orthogonal polar grid, showing the curves θ = const (radial lines) and r = const (circles) and the unit vector û_r.

The first vector already is a unit vector. So,

\[ \hat{u}_r = \cos\theta\,\mathbf{i} + \sin\theta\,\mathbf{j}. \]

The second vector has length r since |−r sin θ i + r cos θ j| = r. Dividing ∂r/∂θ by r, we have

\[ \hat{u}_\theta = -\sin\theta\,\mathbf{i} + \cos\theta\,\mathbf{j}. \]

We can see these vectors are orthogonal (û_r · û_θ = 0) and form a right hand system. That they form a right hand system can be seen by either drawing the vectors or computing the cross product,

\[ (\cos\theta\,\mathbf{i} + \sin\theta\,\mathbf{j})\times(-\sin\theta\,\mathbf{i} + \cos\theta\,\mathbf{j}) = \cos^2\theta\,\mathbf{i}\times\mathbf{j} - \sin^2\theta\,\mathbf{j}\times\mathbf{i} = \mathbf{k}. \qquad (6.175) \]

Since

\[ \frac{\partial\mathbf{r}}{\partial r} = \hat{u}_r, \qquad \frac{\partial\mathbf{r}}{\partial\theta} = r\,\hat{u}_\theta, \]

the scale factors are h_r = 1 and h_θ = r.

Figure 6.39: Infinitesimal area in polar coordinates.

Once we know the scale factors, we have that

\[ d\mathbf{r} = \sum_{i=1}^{3} h_i\,du_i\,\hat{u}_i. \]

The infinitesimal arclength is then given by the Euclidean line element

\[ ds^2 = d\mathbf{r}\cdot d\mathbf{r} = \sum_{i=1}^{3} h_i^2\,du_i^2 \]

when the system is orthogonal. The h_i² are referred to as the metric coefficients.

Example 6.17. Verify that dr = dr û_r + r dθ û_θ directly from r = r cos θ i + r sin θ j and obtain the Euclidean line element for polar coordinates.


We begin by computing

\[ d\mathbf{r} = d(r\cos\theta\,\mathbf{i} + r\sin\theta\,\mathbf{j}) = (\cos\theta\,\mathbf{i} + \sin\theta\,\mathbf{j})\,dr + r(-\sin\theta\,\mathbf{i} + \cos\theta\,\mathbf{j})\,d\theta = dr\,\hat{u}_r + r\,d\theta\,\hat{u}_\theta. \qquad (6.176) \]

This agrees with the form dr = Σᵢ h_i du_i û_i when the scale factors for polar coordinates are inserted.

The line element is found as

\[ ds^2 = d\mathbf{r}\cdot d\mathbf{r} = (dr\,\hat{u}_r + r\,d\theta\,\hat{u}_\theta)\cdot(dr\,\hat{u}_r + r\,d\theta\,\hat{u}_\theta) = dr^2 + r^2\,d\theta^2. \qquad (6.177) \]

This is the Euclidean line element in polar coordinates.
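The same computation can be automated symbolically. The following sympy sketch (an aside, not from the text) computes the polar scale factors from the tangent vectors, from which ds² = dr² + r² dθ² follows.

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
pos = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])   # position vector (x, y)

dr_vec = pos.diff(r)       # tangent to the r-curves
dth_vec = pos.diff(th)     # tangent to the theta-curves

h_r = sp.simplify(dr_vec.norm())    # scale factor h_r = 1
h_th = sp.simplify(dth_vec.norm())  # scale factor h_theta = r
print(h_r, h_th)                    # ds^2 = h_r^2 dr^2 + h_theta^2 dtheta^2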

Also, along the u_i-curves,

\[ d\mathbf{r} = h_i\,du_i\,\hat{u}_i \quad (\text{no summation}). \]

This can be seen in Figure 6.40 by focusing on the u₁ curve. Along this curve, u₂ and u₃ are constant. So, du₂ = 0 and du₃ = 0. This leaves dr = h₁ du₁ û₁ along the u₁-curve. Similar expressions hold along the other two curves.

Figure 6.40: Infinitesimal volume element with sides of length h_i du_i.

We can use this result to investigate infinitesimal volume elements for general coordinate systems, as shown in Figure 6.40. At a given point (u₁, u₂, u₃) we can construct an infinitesimal parallelepiped of sides h_i du_i, i = 1, 2, 3. This infinitesimal parallelepiped has a volume of size

\[ dV = \left|\frac{\partial\mathbf{r}}{\partial u_1}\cdot\frac{\partial\mathbf{r}}{\partial u_2}\times\frac{\partial\mathbf{r}}{\partial u_3}\right| du_1\,du_2\,du_3. \]

The triple scalar product can be computed using determinants, and the resulting determinant is called the Jacobian, given by

\[ J = \left|\frac{\partial(x_1,x_2,x_3)}{\partial(u_1,u_2,u_3)}\right| = \left|\frac{\partial\mathbf{r}}{\partial u_1}\cdot\frac{\partial\mathbf{r}}{\partial u_2}\times\frac{\partial\mathbf{r}}{\partial u_3}\right| = \left|\begin{array}{ccc} \frac{\partial x_1}{\partial u_1} & \frac{\partial x_2}{\partial u_1} & \frac{\partial x_3}{\partial u_1} \\ \frac{\partial x_1}{\partial u_2} & \frac{\partial x_2}{\partial u_2} & \frac{\partial x_3}{\partial u_2} \\ \frac{\partial x_1}{\partial u_3} & \frac{\partial x_2}{\partial u_3} & \frac{\partial x_3}{\partial u_3} \end{array}\right|. \qquad (6.178) \]

Therefore, the volume element can be written as

\[ dV = J\,du_1\,du_2\,du_3 = \left|\frac{\partial(x_1,x_2,x_3)}{\partial(u_1,u_2,u_3)}\right| du_1\,du_2\,du_3. \]

Example 6.18. Determine the volume element for cylindrical coordinates (r, θ, z), given by

\[ x = r\cos\theta, \qquad (6.179) \]
\[ y = r\sin\theta, \qquad (6.180) \]
\[ z = z. \qquad (6.181) \]


Here, we have (u₁, u₂, u₃) = (r, θ, z) as displayed in Figure 6.41. Then, the Jacobian is given by

\[ J = \left|\frac{\partial(x,y,z)}{\partial(r,\theta,z)}\right| = \left|\begin{array}{ccc} \frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} & \frac{\partial z}{\partial r} \\ \frac{\partial x}{\partial\theta} & \frac{\partial y}{\partial\theta} & \frac{\partial z}{\partial\theta} \\ \frac{\partial x}{\partial z} & \frac{\partial y}{\partial z} & \frac{\partial z}{\partial z} \end{array}\right| = \left|\begin{array}{ccc} \cos\theta & \sin\theta & 0 \\ -r\sin\theta & r\cos\theta & 0 \\ 0 & 0 & 1 \end{array}\right| = r. \qquad (6.182) \]

Thus, the volume element is given as

\[ dV = r\,dr\,d\theta\,dz. \]

This result should be familiar from multivariate calculus.

Figure 6.41: Cylindrical coordinate system, showing a point P with coordinates (r, θ, z).
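The Jacobian computation can also be checked symbolically; a small sympy sketch (not from the text):

import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
X = sp.Matrix([r * sp.cos(th), r * sp.sin(th), z])   # (x, y, z) in terms of (r, theta, z)

J = X.jacobian(sp.Matrix([r, th, z]))                # matrix of partial derivatives
print(sp.simplify(J.det()))                          # prints r, so dV = r dr dtheta dz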

Another approach is to consider the geometry of the infinitesimal volume element. The directed edge lengths are given by ds_i = h_i du_i û_i, as seen in Figure 6.37. The infinitesimal area element for the face in direction û_k is found from a simple cross product,

\[ d\mathbf{A}_k = d\mathbf{s}_i\times d\mathbf{s}_j = h_i h_j\,du_i\,du_j\,\hat{u}_i\times\hat{u}_j. \]

Since these are unit vectors, the areas of the faces of the infinitesimal volumes are dA_k = h_i h_j du_i du_j.

The infinitesimal volume is then obtained as

\[ dV = |d\mathbf{s}_k\cdot d\mathbf{A}_k| = h_i h_j h_k\,du_i\,du_j\,du_k\,|\hat{u}_i\cdot(\hat{u}_k\times\hat{u}_j)|. \]

Thus, dV = h₁h₂h₃ du₁ du₂ du₃. Of course, this should not be a surprise since

\[ J = \left|\frac{\partial\mathbf{r}}{\partial u_1}\cdot\frac{\partial\mathbf{r}}{\partial u_2}\times\frac{\partial\mathbf{r}}{\partial u_3}\right| = |h_1\hat{u}_1\cdot h_2\hat{u}_2\times h_3\hat{u}_3| = h_1 h_2 h_3. \]

Example 6.19. For polar coordinates, determine the infinitesimal area element.

In an earlier example, we found the scale factors for polar coordinates as h_r = 1 and h_θ = r. Thus, dA = h_r h_θ dr dθ = r dr dθ. Also, the last example for cylindrical coordinates will yield similar results if we already know the scale factors, without having to compute the Jacobian directly. Furthermore, the area element perpendicular to the z-coordinate gives the polar coordinate system result.

Next we will derive the forms of the gradient, divergence, and curl in curvilinear coordinates using several of the identities in Section ??. The results are given here for quick reference.


Gradient, divergence, and curl in orthogonal curvilinear coordinates:

\[ \nabla\phi = \sum_{i=1}^{3}\frac{\hat{u}_i}{h_i}\frac{\partial\phi}{\partial u_i} = \frac{\hat{u}_1}{h_1}\frac{\partial\phi}{\partial u_1} + \frac{\hat{u}_2}{h_2}\frac{\partial\phi}{\partial u_2} + \frac{\hat{u}_3}{h_3}\frac{\partial\phi}{\partial u_3}. \qquad (6.183) \]

\[ \nabla\cdot\mathbf{F} = \frac{1}{h_1 h_2 h_3}\left(\frac{\partial}{\partial u_1}(h_2 h_3 F_1) + \frac{\partial}{\partial u_2}(h_1 h_3 F_2) + \frac{\partial}{\partial u_3}(h_1 h_2 F_3)\right). \qquad (6.184) \]

\[ \nabla\times\mathbf{F} = \frac{1}{h_1 h_2 h_3}\left|\begin{array}{ccc} h_1\hat{u}_1 & h_2\hat{u}_2 & h_3\hat{u}_3 \\ \frac{\partial}{\partial u_1} & \frac{\partial}{\partial u_2} & \frac{\partial}{\partial u_3} \\ F_1 h_1 & F_2 h_2 & F_3 h_3 \end{array}\right|. \qquad (6.185) \]

\[ \nabla^2\phi = \frac{1}{h_1 h_2 h_3}\left(\frac{\partial}{\partial u_1}\left(\frac{h_2 h_3}{h_1}\frac{\partial\phi}{\partial u_1}\right) + \frac{\partial}{\partial u_2}\left(\frac{h_1 h_3}{h_2}\frac{\partial\phi}{\partial u_2}\right) + \frac{\partial}{\partial u_3}\left(\frac{h_1 h_2}{h_3}\frac{\partial\phi}{\partial u_3}\right)\right). \qquad (6.186) \]

We begin the derivations of these formulae by looking at the gradient, ∇φ, of the scalar function φ(u₁, u₂, u₃). We recall that the gradient operator appears in the differential change of a scalar function,

\[ d\phi = \nabla\phi\cdot d\mathbf{r} = \sum_{i=1}^{3}\frac{\partial\phi}{\partial u_i}du_i. \]

Since

\[ d\mathbf{r} = \sum_{i=1}^{3} h_i\,du_i\,\hat{u}_i, \qquad (6.187) \]

we also have that

\[ d\phi = \nabla\phi\cdot d\mathbf{r} = \sum_{i=1}^{3}(\nabla\phi)_i\,h_i\,du_i. \]

Comparing these two expressions for dφ, we determine that the components of the del operator can be written as

\[ (\nabla\phi)_i = \frac{1}{h_i}\frac{\partial\phi}{\partial u_i}, \]

and thus the gradient is given by

\[ \nabla\phi = \frac{\hat{u}_1}{h_1}\frac{\partial\phi}{\partial u_1} + \frac{\hat{u}_2}{h_2}\frac{\partial\phi}{\partial u_2} + \frac{\hat{u}_3}{h_3}\frac{\partial\phi}{\partial u_3}. \qquad (6.188) \]

Next we compute the divergence,

\[ \nabla\cdot\mathbf{F} = \sum_{i=1}^{3}\nabla\cdot(F_i\hat{u}_i). \]

We can do this by computing the individual terms in the sum. We will compute ∇ · (F₁û₁).

Using Equation (6.188), we have that

\[ \nabla u_i = \frac{\hat{u}_i}{h_i}. \]


Then

\[ \nabla u_2\times\nabla u_3 = \frac{\hat{u}_2\times\hat{u}_3}{h_2 h_3} = \frac{\hat{u}_1}{h_2 h_3}. \]

Solving for û₁ gives

\[ \hat{u}_1 = h_2 h_3\,\nabla u_2\times\nabla u_3. \]

Inserting this result into ∇ · (F₁û₁) and using vector identity 2c from Section ??,

\[ \nabla\cdot(f\mathbf{A}) = f\nabla\cdot\mathbf{A} + \mathbf{A}\cdot\nabla f, \]

we have

\[ \nabla\cdot(F_1\hat{u}_1) = \nabla\cdot(F_1 h_2 h_3\,\nabla u_2\times\nabla u_3) = \nabla(F_1 h_2 h_3)\cdot\nabla u_2\times\nabla u_3 + F_1 h_2 h_3\,\nabla\cdot(\nabla u_2\times\nabla u_3). \qquad (6.189) \]

The second term of this result vanishes by vector identity 3c,

\[ \nabla\cdot(\nabla f\times\nabla g) = 0. \]

Since ∇u₂ × ∇u₃ = û₁/(h₂h₃), the first term can be evaluated as

\[ \nabla\cdot(F_1\hat{u}_1) = \nabla(F_1 h_2 h_3)\cdot\frac{\hat{u}_1}{h_2 h_3} = \frac{1}{h_1 h_2 h_3}\frac{\partial}{\partial u_1}(F_1 h_2 h_3). \]

Similar computations can be carried out for the remaining components, leading to the sought expression for the divergence in curvilinear coordinates:

\[ \nabla\cdot\mathbf{F} = \frac{1}{h_1 h_2 h_3}\left(\frac{\partial}{\partial u_1}(h_2 h_3 F_1) + \frac{\partial}{\partial u_2}(h_1 h_3 F_2) + \frac{\partial}{\partial u_3}(h_1 h_2 F_3)\right). \qquad (6.190) \]

Example 6.20. Write the divergence operator in cylindrical coordinates.

In this case we have

\[ \nabla\cdot\mathbf{F} = \frac{1}{h_r h_\theta h_z}\left(\frac{\partial}{\partial r}(h_\theta h_z F_r) + \frac{\partial}{\partial\theta}(h_r h_z F_\theta) + \frac{\partial}{\partial z}(h_r h_\theta F_z)\right) = \frac{1}{r}\left(\frac{\partial}{\partial r}(r F_r) + \frac{\partial}{\partial\theta}(F_\theta) + \frac{\partial}{\partial z}(r F_z)\right) = \frac{1}{r}\frac{\partial}{\partial r}(r F_r) + \frac{1}{r}\frac{\partial F_\theta}{\partial\theta} + \frac{\partial F_z}{\partial z}. \qquad (6.191) \]

We now turn to the curl operator. In this case, we need to evaluate

\[ \nabla\times\mathbf{F} = \sum_{i=1}^{3}\nabla\times(F_i\hat{u}_i). \]

Again we focus on one term, ∇ × (F₁û₁). Using vector identity 2e,

\[ \nabla\times(f\mathbf{A}) = f\nabla\times\mathbf{A} - \mathbf{A}\times\nabla f, \]

we have

\[ \nabla\times(F_1\hat{u}_1) = \nabla\times(F_1 h_1\nabla u_1) = F_1 h_1\nabla\times\nabla u_1 - \nabla(F_1 h_1)\times\nabla u_1. \qquad (6.192) \]


The curl of the gradient vanishes, leaving

\[ \nabla\times(F_1\hat{u}_1) = \nabla(F_1 h_1)\times\nabla u_1. \]

Since ∇u₁ = û₁/h₁, we have

\[ \nabla\times(F_1\hat{u}_1) = \nabla(F_1 h_1)\times\frac{\hat{u}_1}{h_1} = \left(\sum_{i=1}^{3}\frac{\hat{u}_i}{h_i}\frac{\partial(F_1 h_1)}{\partial u_i}\right)\times\frac{\hat{u}_1}{h_1} = \frac{\hat{u}_2}{h_3 h_1}\frac{\partial(F_1 h_1)}{\partial u_3} - \frac{\hat{u}_3}{h_1 h_2}\frac{\partial(F_1 h_1)}{\partial u_2}. \qquad (6.193) \]

The other terms can be handled in a similar manner. The overall result is that

\[ \nabla\times\mathbf{F} = \frac{\hat{u}_1}{h_2 h_3}\left(\frac{\partial(h_3 F_3)}{\partial u_2} - \frac{\partial(h_2 F_2)}{\partial u_3}\right) + \frac{\hat{u}_2}{h_1 h_3}\left(\frac{\partial(h_1 F_1)}{\partial u_3} - \frac{\partial(h_3 F_3)}{\partial u_1}\right) + \frac{\hat{u}_3}{h_1 h_2}\left(\frac{\partial(h_2 F_2)}{\partial u_1} - \frac{\partial(h_1 F_1)}{\partial u_2}\right). \qquad (6.194) \]

This can be written more compactly as

\[ \nabla\times\mathbf{F} = \frac{1}{h_1 h_2 h_3}\left|\begin{array}{ccc} h_1\hat{u}_1 & h_2\hat{u}_2 & h_3\hat{u}_3 \\ \frac{\partial}{\partial u_1} & \frac{\partial}{\partial u_2} & \frac{\partial}{\partial u_3} \\ F_1 h_1 & F_2 h_2 & F_3 h_3 \end{array}\right|. \qquad (6.195) \]

Example 6.21. Write the curl operator in cylindrical coordinates.

\[ \nabla\times\mathbf{F} = \frac{1}{r}\left|\begin{array}{ccc} \hat{e}_r & r\hat{e}_\theta & \hat{e}_z \\ \frac{\partial}{\partial r} & \frac{\partial}{\partial\theta} & \frac{\partial}{\partial z} \\ F_r & r F_\theta & F_z \end{array}\right| = \left(\frac{1}{r}\frac{\partial F_z}{\partial\theta} - \frac{\partial F_\theta}{\partial z}\right)\hat{e}_r + \left(\frac{\partial F_r}{\partial z} - \frac{\partial F_z}{\partial r}\right)\hat{e}_\theta + \frac{1}{r}\left(\frac{\partial(r F_\theta)}{\partial r} - \frac{\partial F_r}{\partial\theta}\right)\hat{e}_z. \qquad (6.196) \]

Finally, we turn to the Laplacian. In the next chapter we will solve higher dimensional problems in various geometric settings such as the wave equation, the heat equation, and Laplace's equation. These all involve knowing how to write the Laplacian in different coordinate systems. Since ∇²φ = ∇ · ∇φ, we need only combine the results from Equations (6.188) and (6.190) for the gradient and the divergence in curvilinear coordinates. This is straightforward and gives

\[ \nabla^2\phi = \frac{1}{h_1 h_2 h_3}\left(\frac{\partial}{\partial u_1}\left(\frac{h_2 h_3}{h_1}\frac{\partial\phi}{\partial u_1}\right) + \frac{\partial}{\partial u_2}\left(\frac{h_1 h_3}{h_2}\frac{\partial\phi}{\partial u_2}\right) + \frac{\partial}{\partial u_3}\left(\frac{h_1 h_2}{h_3}\frac{\partial\phi}{\partial u_3}\right)\right). \qquad (6.197) \]

The results of rewriting the standard differential operators in cylindrical and spherical coordinates are shown in Problems ?? and ??. In particular, the Laplacians are given as follows.


Cylindrical Coordinates:

\[ \nabla^2 f = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial f}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 f}{\partial\theta^2} + \frac{\partial^2 f}{\partial z^2}. \qquad (6.198) \]

Spherical Coordinates:

\[ \nabla^2 f = \frac{1}{\rho^2}\frac{\partial}{\partial\rho}\left(\rho^2\frac{\partial f}{\partial\rho}\right) + \frac{1}{\rho^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right) + \frac{1}{\rho^2\sin^2\theta}\frac{\partial^2 f}{\partial\phi^2}. \qquad (6.199) \]
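As a consistency check (not from the text), one can feed the spherical scale factors h_ρ = 1, h_θ = ρ, h_φ = ρ sin θ into the general formula (6.197) and verify symbolically that it reproduces (6.199). A sympy sketch:

import sympy as sp

rho, th, phi = sp.symbols('rho theta phi', positive=True)
f = sp.Function('f')(rho, th, phi)

h = [1, rho, rho * sp.sin(th)]          # scale factors for spherical coordinates
u = [rho, th, phi]
H = h[0] * h[1] * h[2]

lap = sum(sp.diff(H / h[i]**2 * sp.diff(f, u[i]), u[i]) for i in range(3)) / H
target = (sp.diff(rho**2 * sp.diff(f, rho), rho) / rho**2
          + sp.diff(sp.sin(th) * sp.diff(f, th), th) / (rho**2 * sp.sin(th))
          + sp.diff(f, phi, 2) / (rho**2 * sp.sin(th)**2))
print(sp.simplify(lap - target))        # prints 0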

Problems

1. A rectangular plate 0 ≤ x ≤ L, 0 ≤ y ≤ H with heat diffusivity constant k is insulated on the edges y = 0, H and is kept at constant zero temperature on the other two edges. Assuming an initial temperature of u(x, y, 0) = f(x, y), use separation of variables to find the general solution.

2. Solve the following problem.

\[ u_{xx} + u_{yy} + u_{zz} = 0, \quad 0 < x < 2\pi, \quad 0 < y < \pi, \quad 0 < z < 1, \]
\[ u(x,y,0) = \sin x\sin y, \quad u = 0 \text{ on the other faces.} \]

3. Consider Laplace's equation on the unit square, u_xx + u_yy = 0, 0 ≤ x, y ≤ 1. Let u(0, y) = 0, u(1, y) = 0 for 0 < y < 1 and u_y(x, 0) = 0 for 0 < x < 1. Carry out the needed separation of variables and write down the product solutions satisfying these boundary conditions.

4. Consider a cylinder of height H and radius a.

a. Write down Laplace's equation for this cylinder in cylindrical coordinates.

b. Carry out the separation of variables and obtain the three ordinary differential equations that result from this problem.

c. What kind of boundary conditions could be satisfied in this problem in the independent variables?

5. Consider a square drum of side s and a circular drum of radius a.

a. Rank the modes corresponding to the first 6 frequencies for each.

b. Write each frequency (in Hz) in terms of the fundamental (i.e., the lowest frequency).

c. What would the lengths of the sides of the square drum have to be to have the same fundamental frequency? (Assume that c = 1.0 for each one.)

6. We presented the full solution of the vibrating rectangular membrane in Equation 6.37. Finish the solution to the vibrating circular membrane by writing out a similar full solution.


7. A copper cube 10.0 cm on a side is heated to 100°C. The block is placed on a surface that is kept at 0°C. The sides of the block are insulated, so the normal derivatives on the sides are zero. Heat flows from the top of the block to the air governed by the gradient u_z = −10°C/m. Determine the temperature of the block at its center after 1.0 minutes. Note that the thermal diffusivity is given by k = K/(ρc_p), where K is the thermal conductivity, ρ is the density, and c_p is the specific heat capacity.

8. Consider a spherical balloon of radius a. Small deformations on the surface can produce waves on the balloon's surface.

a. Write the wave equation in spherical polar coordinates. (Note: ρ is constant!)

b. Carry out a separation of variables and find the product solutions for this problem.

c. Describe the nodal curves for the first six modes.

d. For each mode determine the frequency of oscillation in Hz assuming c = 1.0 m/s.

9. Consider a circular cylinder of radius R = 4.00 cm and height H = 20.0 cm which obeys the steady state heat equation

\[ u_{rr} + \frac{1}{r}u_r + u_{zz} = 0. \]

Find the temperature distribution, u(r, z), given that u(r, 0) = 0°C, u(r, 20) = 20°C, and heat is lost through the sides due to Newton's Law of Cooling,

\[ [u_r + hu]_{r=4} = 0, \]

for h = 1.0 cm⁻¹.

10. The spherical surface of a homogeneous ball of radius one is maintained at zero temperature. It has an initial temperature distribution u(ρ, 0) = 100°C. Assuming a heat diffusivity constant k, find the temperature throughout the sphere, u(ρ, θ, φ, t).

11. Determine the steady state temperature of a spherical ball maintained at the temperature

\[ u(x,y,z) = x^2 + 2y^2 + 3z^2, \qquad \rho = 1. \]

[Hint: Rewrite the problem in spherical coordinates and use the properties of spherical harmonics.]

12. A hot dog initially at temperature 50°C is put into boiling water at 100°C. Assume the hot dog is 12.0 cm long, has a radius of 2.00 cm, and the heat constant is 2.0 × 10⁻⁵ cm²/s.

a. Find the general solution for the temperature. [Hint: Solve the heat equation for u(r, z, t) = T(r, z, t) − 100, where T(r, z, t) is the temperature of the hot dog.]

b. Indicate how one might proceed with the remaining information in order to determine when the hot dog is cooked; i.e., when the center temperature is 80°C.


7 Green's Functions and Nonhomogeneous Problems

"The young theoretical physicists of a generation or two earlier subscribed to the belief that: If you haven't done something important by age 30, you never will. Obviously, they were unfamiliar with the history of George Green, the miller of Nottingham." Julian Schwinger (1918-1994)

The wave equation, heat equation, and Laplace's equation are typical homogeneous partial differential equations. They can be written in the form

\[ L u(\mathbf{x}) = 0, \]

where L is a differential operator. For example, these equations can be written as

\[ \left(\frac{\partial^2}{\partial t^2} - c^2\nabla^2\right)u = 0, \qquad \left(\frac{\partial}{\partial t} - k\nabla^2\right)u = 0, \qquad \nabla^2 u = 0. \qquad (7.1) \]

Note: George Green (1793-1841), a British mathematical physicist who had little formal education and worked as a miller and a baker, published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, in which he not only introduced what is now known as the Green's function, but also introduced potential theory and Green's Theorem in his studies of electricity and magnetism. Recently his paper was posted at arXiv.org, arXiv:0807.0088.

In this chapter we will explore solutions of nonhomogeneous partial differential equations,

\[ L u(\mathbf{x}) = f(\mathbf{x}), \]

by seeking out the so-called Green's function. The history of the Green's function dates back to 1828, when George Green published work in which he sought solutions of Poisson's equation ∇²u = f for the electric potential u defined inside a bounded volume with specified boundary conditions on the surface of the volume. He introduced a function now identified as what Riemann later coined the "Green's function". In this chapter we will derive the initial value Green's function for ordinary differential equations. Later in the chapter we will return to boundary value Green's functions and Green's functions for partial differential equations.

As a simple example, consider Poisson's equation,

\[ \nabla^2 u(\mathbf{r}) = f(\mathbf{r}). \]


Let Poisson's equation hold inside a region Ω bounded by the surface ∂Ω, as shown in Figure 7.1. This is the nonhomogeneous form of Laplace's equation. The nonhomogeneous term, f(r), could represent a heat source in a steady-state problem or a charge distribution (source) in an electrostatic problem.

Figure 7.1: Let Poisson's equation hold inside region Ω bounded by surface ∂Ω, with outward unit normal n.

Now think of the source as a point source in which we are interested in the response of the system to this point source. If the point source is located at a point r′, then the response to the point source could be felt at points r. We will call this response G(r, r′). The response function would satisfy a point source equation of the form

\[ \nabla^2 G(\mathbf{r},\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}'). \]

Here δ(r − r′) is the Dirac delta function, which we will consider in more detail in Section 9.4. The Dirac delta function satisfies

\[ \delta(\mathbf{r}) = 0, \quad \mathbf{r}\neq 0, \qquad \int_\Omega\delta(\mathbf{r})\,dV = 1. \]

A key property of this generalized function is the sifting property,

\[ \int_\Omega\delta(\mathbf{r}-\mathbf{r}')f(\mathbf{r})\,dV = f(\mathbf{r}'). \]

The connection between the Green's function and the solution to Poisson's equation can be found from Green's second identity:

\[ \int_{\partial\Omega}[\phi\nabla\psi - \psi\nabla\phi]\cdot\mathbf{n}\,dS = \int_\Omega[\phi\nabla^2\psi - \psi\nabla^2\phi]\,dV. \]

Letting φ = u(r) and ψ = G(r, r′), we have (noting that in the following the volume and surface integrals and differentiation using ∇ are performed using the r-coordinates)

\[ \int_{\partial\Omega}[u(\mathbf{r})\nabla G(\mathbf{r},\mathbf{r}') - G(\mathbf{r},\mathbf{r}')\nabla u(\mathbf{r})]\cdot\mathbf{n}\,dS = \int_\Omega\left[u(\mathbf{r})\nabla^2 G(\mathbf{r},\mathbf{r}') - G(\mathbf{r},\mathbf{r}')\nabla^2 u(\mathbf{r})\right]dV \]
\[ = \int_\Omega\left[u(\mathbf{r})\delta(\mathbf{r}-\mathbf{r}') - G(\mathbf{r},\mathbf{r}')f(\mathbf{r})\right]dV = u(\mathbf{r}') - \int_\Omega G(\mathbf{r},\mathbf{r}')f(\mathbf{r})\,dV. \qquad (7.2) \]

Solving for u(r′), we have

\[ u(\mathbf{r}') = \int_\Omega G(\mathbf{r},\mathbf{r}')f(\mathbf{r})\,dV + \int_{\partial\Omega}[u(\mathbf{r})\nabla G(\mathbf{r},\mathbf{r}') - G(\mathbf{r},\mathbf{r}')\nabla u(\mathbf{r})]\cdot\mathbf{n}\,dS. \qquad (7.3) \]

If both u(r) and G(r, r′) satisfy Dirichlet conditions, u = 0 on ∂Ω, then the last integral vanishes and we are left with

\[ u(\mathbf{r}') = \int_\Omega G(\mathbf{r},\mathbf{r}')f(\mathbf{r})\,dV. \]

Note: In many applications there is a symmetry, G(r, r′) = G(r′, r). Then, the result can be written as u(r) = ∫_Ω G(r, r′) f(r′) dV′.

So, if we know the Green's function, we can solve the nonhomogeneous differential equation. In fact, we can use the Green's function to solve nonhomogeneous boundary value and initial value problems. That is what we will see develop in this chapter as we explore nonhomogeneous problems in more detail. We will begin with the search for Green's functions for ordinary differential equations.


7.1 Initial Value Green's Functions

In this section we will investigate the solution of initial value problems involving nonhomogeneous differential equations using Green's functions. Our goal is to solve the nonhomogeneous differential equation

\[ a(t)y''(t) + b(t)y'(t) + c(t)y(t) = f(t), \qquad (7.4) \]

subject to the initial conditions

\[ y(0) = y_0, \qquad y'(0) = v_0. \]

Since we are interested in initial value problems, we will denote the independent variable as a time variable, t.

Equation (7.4) can be written compactly as

\[ L[y] = f, \]

where L is the differential operator

\[ L = a(t)\frac{d^2}{dt^2} + b(t)\frac{d}{dt} + c(t). \]

The solution is formally given by

\[ y = L^{-1}[f]. \]

The inverse of a differential operator is an integral operator, which we seek to write in the form

\[ y(t) = \int G(t,\tau)f(\tau)\,d\tau. \]

The function G(t, τ) is referred to as the kernel of the integral operator and is called the Green's function.

In the last section we solved nonhomogeneous equations like (7.4) using the Method of Variation of Parameters. Letting

\[ y_p(t) = c_1(t)y_1(t) + c_2(t)y_2(t), \qquad (7.5) \]

we found that we have to solve the system of equations

\[ c_1'(t)y_1(t) + c_2'(t)y_2(t) = 0, \qquad c_1'(t)y_1'(t) + c_2'(t)y_2'(t) = \frac{f(t)}{a(t)}. \qquad (7.6) \]

This system is easily solved to give

\[ c_1'(t) = -\frac{f(t)y_2(t)}{a(t)\left[y_1(t)y_2'(t) - y_1'(t)y_2(t)\right]}, \qquad c_2'(t) = \frac{f(t)y_1(t)}{a(t)\left[y_1(t)y_2'(t) - y_1'(t)y_2(t)\right]}. \qquad (7.7) \]


We note that the denominator in these expressions involves the Wronskian of the solutions to the homogeneous problem, which is given by the determinant

\[ W(y_1,y_2)(t) = \left|\begin{array}{cc} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t) \end{array}\right|. \]

When y₁(t) and y₂(t) are linearly independent, then the Wronskian is not zero and we are guaranteed a solution to the above system.

So, after an integration, we find the parameters as

\[ c_1(t) = -\int_{t_0}^{t}\frac{f(\tau)y_2(\tau)}{a(\tau)W(\tau)}\,d\tau, \qquad c_2(t) = \int_{t_1}^{t}\frac{f(\tau)y_1(\tau)}{a(\tau)W(\tau)}\,d\tau, \qquad (7.8) \]

where t₀ and t₁ are arbitrary constants to be determined from the initial conditions.

Therefore, the particular solution of (7.4) can be written as

\[ y_p(t) = y_2(t)\int_{t_1}^{t}\frac{f(\tau)y_1(\tau)}{a(\tau)W(\tau)}\,d\tau - y_1(t)\int_{t_0}^{t}\frac{f(\tau)y_2(\tau)}{a(\tau)W(\tau)}\,d\tau. \qquad (7.9) \]

We begin with the particular solution (7.9) of the nonhomogeneous differential equation (7.4). This can be combined with the general solution of the homogeneous problem to give the general solution of the nonhomogeneous differential equation:

\[ y_p(t) = c_1 y_1(t) + c_2 y_2(t) + y_2(t)\int_{t_1}^{t}\frac{f(\tau)y_1(\tau)}{a(\tau)W(\tau)}\,d\tau - y_1(t)\int_{t_0}^{t}\frac{f(\tau)y_2(\tau)}{a(\tau)W(\tau)}\,d\tau. \qquad (7.10) \]

An appropriate choice of t₀ and t₁ can be found so that we need not explicitly write out the solution to the homogeneous problem, c₁y₁(t) + c₂y₂(t). Setting up the solution in this form will allow us to use t₀ and t₁ to determine particular solutions which satisfy certain homogeneous conditions. In particular, we will show that Equation (7.10) can be written in the form

\[ y(t) = c_1 y_1(t) + c_2 y_2(t) + \int_0^t G(t,\tau)f(\tau)\,d\tau, \qquad (7.11) \]

where the function G(t, τ) will be identified as the Green's function.

The goal is to develop the Green's function technique to solve the initial value problem

\[ a(t)y''(t) + b(t)y'(t) + c(t)y(t) = f(t), \quad y(0) = y_0, \quad y'(0) = v_0. \qquad (7.12) \]

We first note that we can solve this initial value problem by solving two separate initial value problems. We assume that the solution of the homogeneous problem satisfies the original initial conditions:

\[ a(t)y_h''(t) + b(t)y_h'(t) + c(t)y_h(t) = 0, \quad y_h(0) = y_0, \quad y_h'(0) = v_0. \qquad (7.13) \]


We then assume that the particular solution satisfies the problem

\[ a(t)y_p''(t) + b(t)y_p'(t) + c(t)y_p(t) = f(t), \quad y_p(0) = 0, \quad y_p'(0) = 0. \qquad (7.14) \]

Since the differential equation is linear, we know that

\[ y(t) = y_h(t) + y_p(t) \]

is a solution of the nonhomogeneous equation. Also, this solution satisfies the initial conditions:

\[ y(0) = y_h(0) + y_p(0) = y_0 + 0 = y_0, \qquad y'(0) = y_h'(0) + y_p'(0) = v_0 + 0 = v_0. \]

Therefore, we need only focus on finding a particular solution that satisfies homogeneous initial conditions. This will be done by finding values for t₀ and t₁ in Equation (7.9) which satisfy the homogeneous initial conditions, y_p(0) = 0 and y_p'(0) = 0.

First, we consider y_p(0) = 0. We have

\[ y_p(0) = y_2(0)\int_{t_1}^{0}\frac{f(\tau)y_1(\tau)}{a(\tau)W(\tau)}\,d\tau - y_1(0)\int_{t_0}^{0}\frac{f(\tau)y_2(\tau)}{a(\tau)W(\tau)}\,d\tau. \qquad (7.15) \]

Here, y₁(t) and y₂(t) are taken to be any solutions of the homogeneous differential equation. Let's assume that y₁(0) = 0 and y₂(0) ≠ 0. Then, we have

\[ y_p(0) = y_2(0)\int_{t_1}^{0}\frac{f(\tau)y_1(\tau)}{a(\tau)W(\tau)}\,d\tau. \qquad (7.16) \]

We can force y_p(0) = 0 if we set t₁ = 0.

Now, we consider y_p'(0) = 0. First we differentiate the solution and find that

\[ y_p'(t) = y_2'(t)\int_{0}^{t}\frac{f(\tau)y_1(\tau)}{a(\tau)W(\tau)}\,d\tau - y_1'(t)\int_{t_0}^{t}\frac{f(\tau)y_2(\tau)}{a(\tau)W(\tau)}\,d\tau, \qquad (7.17) \]

since the contributions from differentiating the integrals will cancel. Evaluating this result at t = 0, we have

\[ y_p'(0) = -y_1'(0)\int_{t_0}^{0}\frac{f(\tau)y_2(\tau)}{a(\tau)W(\tau)}\,d\tau. \qquad (7.18) \]

Assuming that y₁'(0) ≠ 0, we can set t₀ = 0. Thus, we have found that

\[ y_p(t) = y_2(t)\int_{0}^{t}\frac{f(\tau)y_1(\tau)}{a(\tau)W(\tau)}\,d\tau - y_1(t)\int_{0}^{t}\frac{f(\tau)y_2(\tau)}{a(\tau)W(\tau)}\,d\tau = \int_0^t\left[\frac{y_1(\tau)y_2(t) - y_1(t)y_2(\tau)}{a(\tau)W(\tau)}\right]f(\tau)\,d\tau. \qquad (7.19) \]

This result is in the correct form and we can identify the temporal, or initial value, Green's function. So, the particular solution is given as

\[ y_p(t) = \int_0^t G(t,\tau)f(\tau)\,d\tau, \qquad (7.20) \]


where the initial value Green's function is defined as

\[ G(t,\tau) = \frac{y_1(\tau)y_2(t) - y_1(t)y_2(\tau)}{a(\tau)W(\tau)}. \]

We summarize:

Solution of IVP Using the Green's Function

The solution of the initial value problem,

\[ a(t)y''(t) + b(t)y'(t) + c(t)y(t) = f(t), \quad y(0) = y_0, \quad y'(0) = v_0, \]

takes the form

\[ y(t) = y_h(t) + \int_0^t G(t,\tau)f(\tau)\,d\tau, \qquad (7.21) \]

where

\[ G(t,\tau) = \frac{y_1(\tau)y_2(t) - y_1(t)y_2(\tau)}{a(\tau)W(\tau)} \qquad (7.22) \]

is the Green's function and y₁, y₂, y_h are solutions of the homogeneous equation satisfying

\[ y_1(0) = 0, \quad y_2(0)\neq 0, \quad y_1'(0)\neq 0, \quad y_2'(0) = 0, \quad y_h(0) = y_0, \quad y_h'(0) = v_0. \]

Example 7.1. Solve the forced oscillator problem

\[ x'' + x = 2\cos t, \quad x(0) = 4, \quad x'(0) = 0. \]

We first solve the homogeneous problem with nonhomogeneous initial conditions:

\[ x_h'' + x_h = 0, \quad x_h(0) = 4, \quad x_h'(0) = 0. \]

The solution is easily seen to be x_h(t) = 4 cos t.

Next, we construct the Green's function. We need two linearly independent solutions, y₁(t), y₂(t), of the homogeneous differential equation satisfying different homogeneous conditions, y₁(0) = 0 and y₂'(0) = 0. The simplest solutions are y₁(t) = sin t and y₂(t) = cos t. The Wronskian is found as

\[ W(t) = y_1(t)y_2'(t) - y_1'(t)y_2(t) = -\sin^2 t - \cos^2 t = -1. \]

Since a(t) = 1 in this problem, we compute the Green's function,

\[ G(t,\tau) = \frac{y_1(\tau)y_2(t) - y_1(t)y_2(\tau)}{a(\tau)W(\tau)} = \sin t\cos\tau - \sin\tau\cos t = \sin(t-\tau). \qquad (7.23) \]

Note that the Green's function depends on t − τ. While this is useful in some contexts, we will use the expanded form when carrying out the integration.

We can now determine the particular solution of the nonhomogeneous differential equation. We have

\[ x_p(t) = \int_0^t G(t,\tau)f(\tau)\,d\tau = \int_0^t(\sin t\cos\tau - \sin\tau\cos t)(2\cos\tau)\,d\tau \]
\[ = 2\sin t\int_0^t\cos^2\tau\,d\tau - 2\cos t\int_0^t\sin\tau\cos\tau\,d\tau = 2\sin t\left[\frac{\tau}{2} + \frac{1}{4}\sin 2\tau\right]_0^t - 2\cos t\left[\frac{1}{2}\sin^2\tau\right]_0^t = t\sin t. \qquad (7.24) \]

Therefore, the solution of the nonhomogeneous problem is the sum of the solution of the homogeneous problem and this particular solution: x(t) = 4 cos t + t sin t.
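This closed form is easy to verify numerically. The Python sketch below (an aside, not from the text) integrates the forced oscillator with scipy.integrate.solve_ivp and compares against 4 cos t + t sin t.

import numpy as np
from scipy.integrate import solve_ivp

# x'' + x = 2 cos t, x(0) = 4, x'(0) = 0, written as a first order system
def rhs(t, y):
    x, v = y
    return [v, 2 * np.cos(t) - x]

t_eval = np.linspace(0.0, 20.0, 401)
sol = solve_ivp(rhs, (0.0, 20.0), [4.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)

exact = 4 * np.cos(t_eval) + t_eval * np.sin(t_eval)
print(np.max(np.abs(sol.y[0] - exact)))    # small, confirming x(t) = 4 cos t + t sin t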

7.2 Boundary Value Green's Functions

We solved nonhomogeneous initial value problems in Section 7.1 using a Green's function. In this section we will extend this method to the solution of nonhomogeneous boundary value problems using a boundary value Green's function. Recall that the goal is to solve the nonhomogeneous differential equation

\[ L[y] = f, \quad a\leq x\leq b, \]

where L is a differential operator and y(x) satisfies boundary conditions at x = a and x = b. The solution is formally given by

\[ y = L^{-1}[f]. \]

The inverse of a differential operator is an integral operator, which we seek to write in the form

\[ y(x) = \int_a^b G(x,\xi)f(\xi)\,d\xi. \]

The function G(x, ξ) is referred to as the kernel of the integral operator and is called the Green's function.

We will consider boundary value problems in Sturm-Liouville form,

\[ \frac{d}{dx}\left(p(x)\frac{dy(x)}{dx}\right) + q(x)y(x) = f(x), \quad a < x < b, \qquad (7.25) \]

with fixed values of y(x) at the boundary, y(a) = 0 and y(b) = 0. However, the general theory works for other forms of homogeneous boundary conditions.

We seek the Green's function by first solving the nonhomogeneous differential equation using the Method of Variation of Parameters. Recall this method from Section B.3.3. We assume a particular solution of the form

\[ y_p(x) = c_1(x)y_1(x) + c_2(x)y_2(x), \]

which is formed from two linearly independent solutions of the homogeneous problem, y_i(x), i = 1, 2. We had found that the coefficient functions satisfy the equations

\[ c_1'(x)y_1(x) + c_2'(x)y_2(x) = 0, \qquad c_1'(x)y_1'(x) + c_2'(x)y_2'(x) = \frac{f(x)}{p(x)}. \qquad (7.26) \]


Solving this system, we obtain

\[ c_1'(x) = -\frac{f y_2}{pW(y_1,y_2)}, \qquad c_2'(x) = \frac{f y_1}{pW(y_1,y_2)}, \]

where W(y₁, y₂) = y₁y₂′ − y₁′y₂ is the Wronskian. Integrating these forms and inserting the results back into the particular solution, we find

\[ y(x) = y_2(x)\int_{x_1}^{x}\frac{f(\xi)y_1(\xi)}{p(\xi)W(\xi)}\,d\xi - y_1(x)\int_{x_0}^{x}\frac{f(\xi)y_2(\xi)}{p(\xi)W(\xi)}\,d\xi, \]

where x₀ and x₁ are to be determined using the boundary values. In particular, we will seek x₀ and x₁ so that the solution to the boundary value problem can be written as a single integral involving a Green's function. Note that we can absorb the solution to the homogeneous problem, y_h(x), into the integrals with an appropriate choice of limits on the integrals.

We now look to satisfy the conditions y(a) = 0 and y(b) = 0. First we usesolutions of the homogeneous differential equation that satisfy y1(a) = 0,y2(b) = 0 and y1(b) 6= 0, y2(a) 6= 0. Evaluating y(x) at x = 0, we have

y(a) = y2(a)∫ a

x1

f (ξ)y1(ξ)

p(ξ)W(ξ)dξ − y1(a)

∫ a

x0

f (ξ)y2(ξ)

p(ξ)W(ξ)dξ

= y2(a)∫ a

x1

f (ξ)y1(ξ)

p(ξ)W(ξ)dξ. (7.27)

We can satisfy the condition at x = a if we choose x1 = a.Similarly, at x = b we find that

y(b) = y2(b)∫ b

x1

f (ξ)y1(ξ)

p(ξ)W(ξ)dξ − y1(b)

∫ b

x0

f (ξ)y2(ξ)

p(ξ)W(ξ)dξ

= −y1(b)∫ b

x0

f (ξ)y2(ξ)

p(ξ)W(ξ)dξ. (7.28)

This expression vanishes for x0 = b.The general solution of the boundaryvalue problem. So, we have found that the solution takes the form

y(x) = y2(x)∫ x

a

f (ξ)y1(ξ)

p(ξ)W(ξ)dξ − y1(x)

∫ x

b

f (ξ)y2(ξ)

p(ξ)W(ξ)dξ. (7.29)

This solution can be written in a compact form just like we had done forthe initial value problem in Section 7.1. We seek a Green’s function so thatthe solution can be written as a single integral. We can move the functionsof x under the integral. Also, since a < x < b, we can flip the limits in thesecond integral. This gives

y(x) =∫ x

a

f (ξ)y1(ξ)y2(x)p(ξ)W(ξ)

dξ +∫ b

x

f (ξ)y1(x)y2(ξ)

p(ξ)W(ξ)dξ. (7.30)

This result can now be written in a compact form:


Boundary Value Green’s Function

The solution of the boundary value problem

\[
\frac{d}{dx}\left(p(x)\frac{dy(x)}{dx}\right) + q(x)y(x) = f(x), \quad a < x < b,
\qquad y(a) = 0, \quad y(b) = 0, \tag{7.31}
\]

takes the form

\[
y(x) = \int_a^b G(x,\xi)f(\xi)\, d\xi, \tag{7.32}
\]

where the Green's function is the piecewise defined function

\[
G(x,\xi) =
\begin{cases}
\dfrac{y_1(\xi)y_2(x)}{pW}, & a \le \xi \le x, \\[2ex]
\dfrac{y_1(x)y_2(\xi)}{pW}, & x \le \xi \le b,
\end{cases} \tag{7.33}
\]

and $y_1(x)$ and $y_2(x)$ are solutions of the homogeneous problem satisfying $y_1(a) = 0$, $y_2(b) = 0$ and $y_1(b) \ne 0$, $y_2(a) \ne 0$.

The Green's function satisfies several properties, which we will explore further in the next section. For example, the Green's function satisfies the boundary conditions at $x = a$ and $x = b$. Thus,

\begin{align*}
G(a,\xi) &= \frac{y_1(a)y_2(\xi)}{pW} = 0, \\
G(b,\xi) &= \frac{y_1(\xi)y_2(b)}{pW} = 0.
\end{align*}

Also, the Green's function is symmetric in its arguments. Interchanging the arguments gives

\[
G(\xi,x) =
\begin{cases}
\dfrac{y_1(x)y_2(\xi)}{pW}, & a \le x \le \xi, \\[2ex]
\dfrac{y_1(\xi)y_2(x)}{pW}, & \xi \le x \le b.
\end{cases} \tag{7.34}
\]

But a careful look at the original form shows that

\[
G(x,\xi) = G(\xi,x).
\]

We will make use of these properties in the next section to quickly determine the Green's functions for other boundary value problems.

Example 7.2. Solve the boundary value problem $y'' = x^2$, $y(0) = 0 = y(1)$, using the boundary value Green's function.

We first solve the homogeneous equation, $y'' = 0$. After two integrations, we have $y(x) = Ax + B$, for $A$ and $B$ constants to be determined.

We need one solution satisfying $y_1(0) = 0$. Thus,

\[
0 = y_1(0) = B.
\]

So, we can pick $y_1(x) = x$, since $A$ is arbitrary.

The other solution has to satisfy $y_2(1) = 0$. So,

\[
0 = y_2(1) = A + B.
\]

This can be solved for $B = -A$. Again, $A$ is arbitrary and we will choose $A = -1$. Thus, $y_2(x) = 1 - x$.

For this problem $p(x) = 1$. Thus, for $y_1(x) = x$ and $y_2(x) = 1 - x$,

\[
p(x)W(x) = y_1(x)y_2'(x) - y_1'(x)y_2(x) = x(-1) - 1(1-x) = -1.
\]

Note that $p(x)W(x)$ is a constant, as it should be.

Now we construct the Green's function. We have

\[
G(x,\xi) =
\begin{cases}
-\xi(1-x), & 0 \le \xi \le x, \\
-x(1-\xi), & x \le \xi \le 1.
\end{cases} \tag{7.35}
\]

Notice the symmetry between the two branches of the Green's function. Also, the Green's function satisfies homogeneous boundary conditions: $G(0,\xi) = 0$, from the lower branch, and $G(1,\xi) = 0$, from the upper branch.

Finally, we insert the Green's function into the integral form of the solution and evaluate the integral.

\begin{align}
y(x) &= \int_0^1 G(x,\xi)f(\xi)\, d\xi
= \int_0^1 G(x,\xi)\,\xi^2\, d\xi \nonumber\\
&= -\int_0^x \xi(1-x)\xi^2\, d\xi - \int_x^1 x(1-\xi)\xi^2\, d\xi \nonumber\\
&= -(1-x)\int_0^x \xi^3\, d\xi - x\int_x^1 (\xi^2 - \xi^3)\, d\xi \nonumber\\
&= -(1-x)\left[\frac{\xi^4}{4}\right]_0^x - x\left[\frac{\xi^3}{3} - \frac{\xi^4}{4}\right]_x^1 \nonumber\\
&= -\frac{1}{4}(1-x)x^4 - \frac{1}{12}x(4-3) + \frac{1}{12}x(4x^3 - 3x^4) \nonumber\\
&= \frac{1}{12}(x^4 - x). \tag{7.36}
\end{align}

Checking the answer, we can easily verify that $y'' = x^2$, $y(0) = 0$, and $y(1) = 0$.
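The piecewise Green's function integral is also a one-line computation in SymPy. The following sketch (our own check, not from the text) splits the integral along the two branches of (7.35) and recovers $(x^4 - x)/12$.

import sympy as sp

x, xi = sp.symbols('x xi', real=True)

# Two branches of the Green's function of Example 7.2, integrated against f(xi) = xi**2
y = sp.integrate(-xi*(1 - x)*xi**2, (xi, 0, x)) \
  + sp.integrate(-x*(1 - xi)*xi**2, (xi, x, 1))
y = sp.simplify(y)

print(y)                                     # expect (x**4 - x)/12
print(sp.simplify(sp.diff(y, x, 2) - x**2))  # 0
print(y.subs(x, 0), y.subs(x, 1))            # 0, 0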

7.2.1 Properties of Green’s Functions

We have noted some properties of Green's functions in the last section. In this section we will elaborate on these properties as a tool for quickly constructing Green's functions for boundary value problems. We list five basic properties:

1. Differential Equation:

The boundary value Green's function satisfies the differential equation
\[
\frac{\partial}{\partial x}\left(p(x)\frac{\partial G(x,\xi)}{\partial x}\right) + q(x)G(x,\xi) = 0, \quad x \ne \xi.
\]
This is easily established. For $x < \xi$ we are on the second branch and $G(x,\xi)$ is proportional to $y_1(x)$. Thus, since $y_1(x)$ is a solution of the homogeneous equation, then so is $G(x,\xi)$. For $x > \xi$ we are on the first branch and $G(x,\xi)$ is proportional to $y_2(x)$. So, once again $G(x,\xi)$ is a solution of the homogeneous problem.

2. Boundary Conditions:

In the example in the last section we had seen that $G(a,\xi) = 0$ and $G(b,\xi) = 0$. For example, for $x = a$ we are on the second branch and $G(x,\xi)$ is proportional to $y_1(x)$. Thus, whatever condition $y_1(x)$ satisfies, $G(x,\xi)$ will satisfy. A similar statement can be made for $x = b$.

3. Symmetry or Reciprocity: $G(x,\xi) = G(\xi,x)$

We had shown this reciprocity property in the last section.

4. Continuity of $G$ at $x = \xi$: $G(\xi^+,\xi) = G(\xi^-,\xi)$

Here we define $\xi^\pm$ through the limits of a function as $x$ approaches $\xi$ from above or below. In particular,
\begin{align*}
G(\xi^+,\xi) &= \lim_{x\downarrow\xi} G(x,\xi), \quad x > \xi, \\
G(\xi^-,\xi) &= \lim_{x\uparrow\xi} G(x,\xi), \quad x < \xi.
\end{align*}
Setting $x = \xi$ in both branches, we have
\[
\frac{y_1(\xi)y_2(\xi)}{pW} = \frac{y_1(\xi)y_2(\xi)}{pW}.
\]
Therefore, we have established the continuity of $G(x,\xi)$ between the two branches at $x = \xi$.

5. Jump Discontinuity of $\frac{\partial G}{\partial x}$ at $x = \xi$:
\[
\frac{\partial G(\xi^+,\xi)}{\partial x} - \frac{\partial G(\xi^-,\xi)}{\partial x} = \frac{1}{p(\xi)}.
\]
This case is not as obvious. We first compute the derivatives by noting which branch is involved, then evaluate the derivatives at $x = \xi$ and subtract them. Thus, we have
\begin{align}
\frac{\partial G(\xi^+,\xi)}{\partial x} - \frac{\partial G(\xi^-,\xi)}{\partial x}
&= \frac{1}{pW}y_1(\xi)y_2'(\xi) - \frac{1}{pW}y_1'(\xi)y_2(\xi) \nonumber\\
&= \frac{y_1(\xi)y_2'(\xi) - y_1'(\xi)y_2(\xi)}{p(\xi)\bigl(y_1(\xi)y_2'(\xi) - y_1'(\xi)y_2(\xi)\bigr)} \nonumber\\
&= \frac{1}{p(\xi)}. \tag{7.37}
\end{align}

Here is a summary of the properties of the boundary value Green's function based upon the previous solution.


Properties of the Green’s Function

1. Differential Equation:
\[
\frac{\partial}{\partial x}\left(p(x)\frac{\partial G(x,\xi)}{\partial x}\right) + q(x)G(x,\xi) = 0, \quad x \ne \xi.
\]

2. Boundary Conditions: Whatever conditions $y_1(x)$ and $y_2(x)$ satisfy, $G(x,\xi)$ will satisfy.

3. Symmetry or Reciprocity: $G(x,\xi) = G(\xi,x)$.

4. Continuity of $G$ at $x = \xi$: $G(\xi^+,\xi) = G(\xi^-,\xi)$.

5. Jump Discontinuity of $\frac{\partial G}{\partial x}$ at $x = \xi$:
\[
\frac{\partial G(\xi^+,\xi)}{\partial x} - \frac{\partial G(\xi^-,\xi)}{\partial x} = \frac{1}{p(\xi)}.
\]

We now show how a knowledge of these properties allows one to quickly construct a Green's function with an example.

Example 7.3. Construct the Green's function for the problem

\[
y'' + \omega^2 y = f(x), \quad 0 < x < 1, \qquad y(0) = 0 = y(1),
\]

with $\omega \ne 0$ (we will also need $\sin\omega \ne 0$).

I. Find solutions to the homogeneous equation.

A general solution to the homogeneous equation is given as

\[
y_h(x) = c_1\sin\omega x + c_2\cos\omega x.
\]

Thus, for $x \ne \xi$,

\[
G(x,\xi) = c_1(\xi)\sin\omega x + c_2(\xi)\cos\omega x.
\]

II. Boundary Conditions.

First, we have $G(0,\xi) = 0$ for $0 \le x \le \xi$. So,

\[
G(0,\xi) = c_2(\xi) = 0.
\]

So,

\[
G(x,\xi) = c_1(\xi)\sin\omega x, \quad 0 \le x \le \xi.
\]

Second, we have $G(1,\xi) = 0$ for $\xi \le x \le 1$. So,

\[
G(1,\xi) = c_1(\xi)\sin\omega + c_2(\xi)\cos\omega = 0.
\]

A solution can be chosen with

\[
c_2(\xi) = -c_1(\xi)\tan\omega.
\]

This gives

\[
G(x,\xi) = c_1(\xi)\sin\omega x - c_1(\xi)\tan\omega\,\cos\omega x.
\]

This can be simplified by factoring out the $c_1(\xi)$ and placing the remaining terms over a common denominator. The result is

\begin{align}
G(x,\xi) &= \frac{c_1(\xi)}{\cos\omega}\left[\sin\omega x\cos\omega - \sin\omega\cos\omega x\right] \nonumber\\
&= -\frac{c_1(\xi)}{\cos\omega}\sin\omega(1-x). \tag{7.38}
\end{align}

Since the coefficient is arbitrary at this point, we can write the result as

\[
G(x,\xi) = d_1(\xi)\sin\omega(1-x), \quad \xi \le x \le 1.
\]

We note that we could have started with $y_2(x) = \sin\omega(1-x)$ as one of the linearly independent solutions of the homogeneous problem, in anticipation that $y_2(x)$ satisfies the second boundary condition.

III. Symmetry or Reciprocity.

We now impose that $G(x,\xi) = G(\xi,x)$. To this point we have that

\[
G(x,\xi) =
\begin{cases}
c_1(\xi)\sin\omega x, & 0 \le x \le \xi, \\
d_1(\xi)\sin\omega(1-x), & \xi \le x \le 1.
\end{cases}
\]

We can make the branches symmetric by picking the right forms for $c_1(\xi)$ and $d_1(\xi)$. We choose $c_1(\xi) = C\sin\omega(1-\xi)$ and $d_1(\xi) = C\sin\omega\xi$. Then,

\[
G(x,\xi) =
\begin{cases}
C\sin\omega(1-\xi)\sin\omega x, & 0 \le x \le \xi, \\
C\sin\omega(1-x)\sin\omega\xi, & \xi \le x \le 1.
\end{cases}
\]

Now the Green's function is symmetric and we still have to determine the constant $C$. We note that we could have gotten to this point using the Method of Variation of Parameters result, where $C = \frac{1}{pW}$.

IV. Continuity of $G(x,\xi)$.

We already have continuity by virtue of the symmetry imposed in the last step.

V. Jump Discontinuity in $\frac{\partial}{\partial x}G(x,\xi)$.

We still need to determine $C$. We can do this using the jump discontinuity in the derivative:

\[
\frac{\partial G(\xi^+,\xi)}{\partial x} - \frac{\partial G(\xi^-,\xi)}{\partial x} = \frac{1}{p(\xi)}.
\]

For this problem $p(x) = 1$. Inserting the Green's function, we have

\begin{align}
1 &= \frac{\partial G(\xi^+,\xi)}{\partial x} - \frac{\partial G(\xi^-,\xi)}{\partial x} \nonumber\\
&= \frac{\partial}{\partial x}\left[C\sin\omega(1-x)\sin\omega\xi\right]_{x=\xi}
 - \frac{\partial}{\partial x}\left[C\sin\omega(1-\xi)\sin\omega x\right]_{x=\xi} \nonumber\\
&= -\omega C\cos\omega(1-\xi)\sin\omega\xi - \omega C\sin\omega(1-\xi)\cos\omega\xi \nonumber\\
&= -\omega C\sin\omega(\xi + 1 - \xi) \nonumber\\
&= -\omega C\sin\omega. \tag{7.39}
\end{align}

Therefore,

\[
C = -\frac{1}{\omega\sin\omega}.
\]

Finally, we have the Green's function:

\[
G(x,\xi) =
\begin{cases}
-\dfrac{\sin\omega(1-\xi)\sin\omega x}{\omega\sin\omega}, & 0 \le x \le \xi, \\[2ex]
-\dfrac{\sin\omega(1-x)\sin\omega\xi}{\omega\sin\omega}, & \xi \le x \le 1.
\end{cases} \tag{7.40}
\]

It is instructive to compare this result to the Variation of Parameters result.

Example 7.4. Use the Method of Variation of Parameters to solve

\[
y'' + \omega^2 y = f(x), \quad 0 < x < 1, \qquad y(0) = 0 = y(1), \quad \omega \ne 0.
\]

We have the functions $y_1(x) = \sin\omega x$ and $y_2(x) = \sin\omega(1-x)$ as the solutions of the homogeneous equation satisfying $y_1(0) = 0$ and $y_2(1) = 0$. We need to compute $pW$:

\begin{align}
p(x)W(x) &= y_1(x)y_2'(x) - y_1'(x)y_2(x) \nonumber\\
&= -\omega\sin\omega x\cos\omega(1-x) - \omega\cos\omega x\sin\omega(1-x) \nonumber\\
&= -\omega\sin\omega. \tag{7.41}
\end{align}

Inserting this result into the Variation of Parameters result for the Green's function leads to the same Green's function as above.
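As a sanity check on Equation (7.40), here is a short SymPy sketch (our own illustration; the choices $\omega = 3/2$ and $f = 1$ are arbitrary test data, not from the text). It assembles the solution from the two branches of the Green's function and verifies the differential equation and boundary conditions.

import sympy as sp

x, xi = sp.symbols('x xi', real=True)
w = sp.Rational(3, 2)    # any omega with sin(omega) != 0

# The two branches of G(x, xi) from Equation (7.40)
G_right = -sp.sin(w*(1 - x))*sp.sin(w*xi)/(w*sp.sin(w))   # xi <= x
G_left  = -sp.sin(w*(1 - xi))*sp.sin(w*x)/(w*sp.sin(w))   # x <= xi

# Solve y'' + w^2 y = 1 with y(0) = y(1) = 0 via y(x) = int_0^1 G(x,xi) f(xi) dxi
y = sp.simplify(sp.integrate(G_right, (xi, 0, x)) + sp.integrate(G_left, (xi, x, 1)))

print(sp.simplify(sp.diff(y, x, 2) + w**2*y))            # expect 1
print(sp.simplify(y.subs(x, 0)), sp.simplify(y.subs(x, 1)))  # 0 0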

7.2.2 The Differential Equation for the Green’s Function

As we progress in the book we will develop a more general theory of Green's functions for ordinary and partial differential equations. Much of this theory relies on understanding that the Green's function really is the system response function to a point source. This begins with recalling that the boundary value Green's function satisfies a homogeneous differential equation for $x \ne \xi$,

\[
\frac{\partial}{\partial x}\left(p(x)\frac{\partial G(x,\xi)}{\partial x}\right) + q(x)G(x,\xi) = 0, \quad x \ne \xi. \tag{7.42}
\]

Figure 7.2: The Heaviside step function, $H(x)$.

For $x = \xi$, we have seen that the derivative has a jump in its value. This is similar to the step, or Heaviside, function,

\[
H(x) =
\begin{cases}
1, & x > 0, \\
0, & x < 0.
\end{cases}
\]

This function is shown in Figure 7.2 and we see that the derivative of the step function is zero everywhere except at the jump, or discontinuity. At the jump there is an infinite slope, though technically, we have learned that there is no derivative at this point. We will try to remedy this situation by introducing the Dirac delta function,

\[
\delta(x) = \frac{d}{dx}H(x).
\]

We will show that the Green's function satisfies the differential equation

\[
\frac{\partial}{\partial x}\left(p(x)\frac{\partial G(x,\xi)}{\partial x}\right) + q(x)G(x,\xi) = \delta(x-\xi). \tag{7.43}
\]

However, we will first indicate why this knowledge is useful for the general theory of solving differential equations using Green's functions. (The Dirac delta function is described in more detail in Section 9.4. The key property we will need here is the sifting property,
\[
\int_a^b f(x)\delta(x-\xi)\, dx = f(\xi)
\]
for $a < \xi < b$.)

As noted, the Green's function satisfies the differential equation

\[
\frac{\partial}{\partial x}\left(p(x)\frac{\partial G(x,\xi)}{\partial x}\right) + q(x)G(x,\xi) = \delta(x-\xi) \tag{7.44}
\]

and satisfies homogeneous conditions. We will use the Green's function to solve the nonhomogeneous equation

\[
\frac{d}{dx}\left(p(x)\frac{dy(x)}{dx}\right) + q(x)y(x) = f(x). \tag{7.45}
\]

These equations can be written in the more compact forms

\begin{align}
L[y] &= f(x), \nonumber\\
L[G] &= \delta(x-\xi). \tag{7.46}
\end{align}

Using these equations, we can determine the solution, $y(x)$, in terms of the Green's function. Multiplying the first equation by $G(x,\xi)$, the second equation by $y(x)$, and then subtracting, we have

\[
GL[y] - yL[G] = f(x)G(x,\xi) - \delta(x-\xi)y(x).
\]

Now, integrate both sides from $x = a$ to $x = b$. The right hand side becomes

\[
\int_a^b\left[f(x)G(x,\xi) - \delta(x-\xi)y(x)\right] dx = \int_a^b f(x)G(x,\xi)\, dx - y(\xi).
\]

Using Green's Identity from Section 4.2.2 (recall that Green's identity is given by $\int_a^b (uLv - vLu)\, dx = [p(uv' - vu')]_a^b$), the left hand side becomes

\[
\int_a^b (GL[y] - yL[G])\, dx = \left[p(x)\left(G(x,\xi)y'(x) - y(x)\frac{\partial G}{\partial x}(x,\xi)\right)\right]_{x=a}^{x=b}.
\]

Combining these results and rearranging, we obtain the general solution in terms of the boundary value Green's function with its corresponding surface terms,

\[
y(\xi) = \int_a^b f(x)G(x,\xi)\, dx
 + \left[p(x)\left(y(x)\frac{\partial G}{\partial x}(x,\xi) - G(x,\xi)y'(x)\right)\right]_{x=a}^{x=b}. \tag{7.47}
\]


We will refer to the extra terms in the solution,

\[
S(b,\xi) - S(a,\xi) = \left[p(x)\left(y(x)\frac{\partial G}{\partial x}(x,\xi) - G(x,\xi)y'(x)\right)\right]_{x=a}^{x=b},
\]

as the boundary, or surface, terms. Thus,

\[
y(\xi) = \int_a^b f(x)G(x,\xi)\, dx + \left[S(b,\xi) - S(a,\xi)\right].
\]

The result in Equation (7.47) is the key equation in determining the solution of a nonhomogeneous boundary value problem. The particular set of boundary conditions in the problem will dictate what conditions $G(x,\xi)$ has to satisfy. For example, if we have the boundary conditions $y(a) = 0$ and $y(b) = 0$, then the boundary terms yield

\begin{align}
y(\xi) &= \int_a^b f(x)G(x,\xi)\, dx
 + \left[p(b)\left(y(b)\frac{\partial G}{\partial x}(b,\xi) - G(b,\xi)y'(b)\right)\right] \nonumber\\
&\quad - \left[p(a)\left(y(a)\frac{\partial G}{\partial x}(a,\xi) - G(a,\xi)y'(a)\right)\right] \nonumber\\
&= \int_a^b f(x)G(x,\xi)\, dx - p(b)G(b,\xi)y'(b) + p(a)G(a,\xi)y'(a). \tag{7.48}
\end{align}

The remaining boundary terms will only vanish if $G(x,\xi)$ also satisfies these homogeneous boundary conditions. This then leaves us with the solution

\[
y(\xi) = \int_a^b f(x)G(x,\xi)\, dx.
\]

We should rewrite this as a function of $x$. So, we replace $\xi$ with $x$ and $x$ with $\xi$. This gives

\[
y(x) = \int_a^b f(\xi)G(\xi,x)\, d\xi.
\]

However, this is not yet in the desirable form. The arguments of the Green's function are reversed. But, in this case $G(x,\xi)$ is symmetric in its arguments, so we can simply switch the arguments, getting the desired result.

We can now see that the theory works for other boundary conditions. If we had $y'(a) = 0$, then the $y(a)\frac{\partial G}{\partial x}(a,\xi)$ term in the boundary terms could be made to vanish if we set $\frac{\partial G}{\partial x}(a,\xi) = 0$. So, this confirms that other boundary value problems can be posed besides the one elaborated upon in the chapter so far.

We can even adapt this theory to nonhomogeneous boundary conditions. We first rewrite Equation (7.47) as

\[
y(x) = \int_a^b G(x,\xi)f(\xi)\, d\xi
 + \left[p(\xi)\left(y(\xi)\frac{\partial G}{\partial \xi}(x,\xi) - G(x,\xi)y'(\xi)\right)\right]_{\xi=a}^{\xi=b}. \tag{7.49}
\]

Let's consider the boundary conditions $y(a) = \alpha$ and $y'(b) = \beta$. We also assume that $G(x,\xi)$ satisfies the homogeneous boundary conditions

\[
G(a,\xi) = 0, \qquad \frac{\partial G}{\partial \xi}(b,\xi) = 0,
\]

in both $x$ and $\xi$, since the Green's function is symmetric in its variables. Then, we need only focus on the boundary terms to examine the effect on the solution. We have

\begin{align}
S(b,x) - S(a,x) &= \left[p(b)\left(y(b)\frac{\partial G}{\partial \xi}(x,b) - G(x,b)y'(b)\right)\right] \nonumber\\
&\quad - \left[p(a)\left(y(a)\frac{\partial G}{\partial \xi}(x,a) - G(x,a)y'(a)\right)\right] \nonumber\\
&= -\beta p(b)G(x,b) - \alpha p(a)\frac{\partial G}{\partial \xi}(x,a). \tag{7.50}
\end{align}

Therefore, we have the general solution satisfying the nonhomogeneous boundary conditions $y(a) = \alpha$ and $y'(b) = \beta$, with the Green's function satisfying the homogeneous boundary conditions $G(a,\xi) = 0$ and $\frac{\partial G}{\partial \xi}(b,\xi) = 0$:

\[
y(x) = \int_a^b G(x,\xi)f(\xi)\, d\xi - \beta p(b)G(x,b) - \alpha p(a)\frac{\partial G}{\partial \xi}(x,a). \tag{7.51}
\]

This solution satisfies the nonhomogeneous boundary conditions.

Example 7.5. Solve $y'' = x^2$, $y(0) = 1$, $y(1) = 2$, using the boundary value Green's function.

This is a modification of Example 7.2. We can use the boundary value Green's function that we found in that problem,

\[
G(x,\xi) =
\begin{cases}
-\xi(1-x), & 0 \le \xi \le x, \\
-x(1-\xi), & x \le \xi \le 1.
\end{cases} \tag{7.52}
\]

We insert the Green's function into the general solution (7.49) and use the given boundary conditions to obtain

\begin{align}
y(x) &= \int_0^1 G(x,\xi)\,\xi^2\, d\xi
 + \left[y(\xi)\frac{\partial G}{\partial \xi}(x,\xi) - G(x,\xi)y'(\xi)\right]_{\xi=0}^{\xi=1} \nonumber\\
&= \int_0^x (x-1)\xi^3\, d\xi + \int_x^1 x(\xi-1)\xi^2\, d\xi
 + y(1)\frac{\partial G}{\partial \xi}(x,1) - y(0)\frac{\partial G}{\partial \xi}(x,0) \nonumber\\
&= \frac{(x-1)x^4}{4} + \frac{x(1-x^4)}{4} - \frac{x(1-x^3)}{3} + 2x - (x-1) \nonumber\\
&= \frac{x^4}{12} + \frac{11}{12}x + 1. \tag{7.53}
\end{align}

Here we used $G(x,0) = 0 = G(x,1)$, so only the $y(\xi)\,\partial G/\partial\xi$ terms survive, with $\frac{\partial G}{\partial \xi}(x,1) = x$ and $\frac{\partial G}{\partial \xi}(x,0) = x - 1$.

Of course, this problem can be solved by direct integration. The general solution is

\[
y(x) = \frac{x^4}{12} + c_1 x + c_2.
\]

Inserting this solution into each boundary condition yields the same result, with $c_2 = 1$ and $c_1 = \frac{11}{12}$.
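The boundary-term bookkeeping in Example 7.5 can be reproduced with a few lines of SymPy. This sketch is our own check (the variable names and explicit branch derivatives are ours); it assembles the integral term and the surface terms separately and recovers the solution above.

import sympy as sp

x, xi = sp.symbols('x xi', real=True)

# Green's function of Example 7.2 (p = 1)
G_lower = -xi*(1 - x)    # 0 <= xi <= x
G_upper = -x*(1 - xi)    # x <= xi <= 1

# Integral term with f(xi) = xi**2
integral = sp.integrate(G_lower*xi**2, (xi, 0, x)) + sp.integrate(G_upper*xi**2, (xi, x, 1))

# Surface terms: y(1)*dG/dxi at xi = 1 minus y(0)*dG/dxi at xi = 0 (G vanishes there)
surface = 2*sp.diff(G_upper, xi).subs(xi, 1) - 1*sp.diff(G_lower, xi).subs(xi, 0)

y = sp.expand(integral + surface)
print(y)                                                  # expect x**4/12 + 11*x/12 + 1
print(sp.simplify(sp.diff(y, x, 2) - x**2), y.subs(x, 0), y.subs(x, 1))   # 0 1 2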

The Green's function satisfies a delta function forced differential equation. We have seen how the introduction of the Dirac delta function in the differential equation satisfied by the Green's function, Equation (7.44), can lead to the solution of boundary value problems. The Dirac delta function also aids in the interpretation of the Green's function. We note that the Green's function is a solution of an equation in which the nonhomogeneous function is $\delta(x-\xi)$. Note that if we multiply the delta function by $f(\xi)$ and integrate, we obtain

\[
\int_{-\infty}^{\infty} \delta(x-\xi)f(\xi)\, d\xi = f(x).
\]

We can view the delta function as a unit impulse at $x = \xi$ which can be used to build $f(x)$ as a sum of impulses of different strengths, $f(\xi)$. Thus, the Green's function is the response to the impulse as governed by the differential equation and given boundary conditions.

In particular, the delta function forced equation can be used to derive the jump condition for the Green's function. We begin with the equation in the form

\[
\frac{\partial}{\partial x}\left(p(x)\frac{\partial G(x,\xi)}{\partial x}\right) + q(x)G(x,\xi) = \delta(x-\xi). \tag{7.54}
\]

Now, integrate both sides from $\xi - \epsilon$ to $\xi + \epsilon$ and take the limit as $\epsilon \to 0$. Then,

\begin{align}
\lim_{\epsilon\to 0}\int_{\xi-\epsilon}^{\xi+\epsilon}\left[\frac{\partial}{\partial x}\left(p(x)\frac{\partial G(x,\xi)}{\partial x}\right) + q(x)G(x,\xi)\right] dx
&= \lim_{\epsilon\to 0}\int_{\xi-\epsilon}^{\xi+\epsilon}\delta(x-\xi)\, dx \nonumber\\
&= 1. \tag{7.55}
\end{align}

Since the $q(x)$ term is continuous, the limit as $\epsilon \to 0$ of that term vanishes. Using the Fundamental Theorem of Calculus, we then have

\[
\lim_{\epsilon\to 0}\left[p(x)\frac{\partial G(x,\xi)}{\partial x}\right]_{\xi-\epsilon}^{\xi+\epsilon} = 1. \tag{7.56}
\]

This is the jump condition that we have been using!

7.2.3 Series Representations of Green’s Functions

There are times when it might not be so simple to find the Green's function in the simple closed form that we have seen so far. However, there is a method for determining the Green's functions of Sturm-Liouville boundary value problems in the form of an eigenfunction expansion. We will finish our discussion of Green's functions for ordinary differential equations by showing how one obtains such series representations. (Note that we are really just repeating the steps toward developing eigenfunction expansions which we had seen in Section 4.3.)

We will make use of the complete set of eigenfunctions of the differential operator, $L$, satisfying the homogeneous boundary conditions:

\[
L[\phi_n] = -\lambda_n\sigma\phi_n, \quad n = 1, 2, \ldots.
\]

We want to find the particular solution $y$ satisfying $L[y] = f$ and homogeneous boundary conditions. We assume that

\[
y(x) = \sum_{n=1}^\infty a_n\phi_n(x).
\]

Inserting this into the differential equation, we obtain

\[
L[y] = \sum_{n=1}^\infty a_n L[\phi_n] = -\sum_{n=1}^\infty \lambda_n a_n\sigma\phi_n = f.
\]


This has resulted in the generalized Fourier expansion

\[
f(x) = \sum_{n=1}^\infty c_n\sigma\phi_n(x)
\]

with coefficients

\[
c_n = -\lambda_n a_n.
\]

We have seen how to compute these coefficients earlier, in Section 4.3. We multiply both sides by $\phi_k(x)$ and integrate. Using the orthogonality of the eigenfunctions,

\[
\int_a^b \phi_n(x)\phi_k(x)\sigma(x)\, dx = N_k\delta_{nk},
\]

one obtains the expansion coefficients (if $\lambda_k \ne 0$)

\[
a_k = -\frac{(f,\phi_k)}{N_k\lambda_k},
\]

where $(f,\phi_k) \equiv \int_a^b f(x)\phi_k(x)\, dx$.

As before, we can rearrange the solution to obtain the Green's function. Namely, we have

\[
y(x) = \sum_{n=1}^\infty \frac{(f,\phi_n)}{-N_n\lambda_n}\phi_n(x)
= \int_a^b \underbrace{\sum_{n=1}^\infty \frac{\phi_n(x)\phi_n(\xi)}{-N_n\lambda_n}}_{G(x,\xi)} f(\xi)\, d\xi.
\]

Therefore, we have found the Green's function as an expansion in the eigenfunctions:

\[
G(x,\xi) = \sum_{n=1}^\infty \frac{\phi_n(x)\phi_n(\xi)}{-\lambda_n N_n}. \tag{7.57}
\]

We will conclude this discussion with an example. We will solve this problem three different ways in order to summarize the methods we have used in the text.

Example 7.6. Solve

\[
y'' + 4y = x^2, \quad x \in (0,1), \quad y(0) = y(1) = 0,
\]

using the Green's function eigenfunction expansion.

The Green's function for this problem can be constructed fairly quickly once the eigenvalue problem is solved. The eigenvalue problem is

\[
\phi''(x) + 4\phi(x) = -\lambda\phi(x),
\]

where $\phi(0) = 0$ and $\phi(1) = 0$. The general solution is obtained by rewriting the equation as

\[
\phi''(x) + k^2\phi(x) = 0, \qquad k^2 = 4 + \lambda.
\]


Solutions satisfying the boundary condition at $x = 0$ are of the form

\[
\phi(x) = A\sin kx.
\]

Forcing $\phi(1) = 0$ gives

\[
0 = A\sin k \quad\Rightarrow\quad k = n\pi, \quad n = 1, 2, 3, \ldots.
\]

So, the eigenvalues are

\[
\lambda_n = n^2\pi^2 - 4, \quad n = 1, 2, \ldots,
\]

and the eigenfunctions are

\[
\phi_n = \sin n\pi x, \quad n = 1, 2, \ldots.
\]

We also need the normalization constant, $N_n$. We have that

\[
N_n = \|\phi_n\|^2 = \int_0^1 \sin^2 n\pi x\, dx = \frac{1}{2}.
\]

We can now construct the Green's function for this problem using Equation (7.57):

\[
G(x,\xi) = 2\sum_{n=1}^\infty \frac{\sin n\pi x\,\sin n\pi\xi}{4 - n^2\pi^2}. \tag{7.58}
\]

Using this Green's function, the solution of the boundary value problem becomes

\begin{align}
y(x) &= \int_0^1 G(x,\xi)f(\xi)\, d\xi \nonumber\\
&= \int_0^1 \left(2\sum_{n=1}^\infty \frac{\sin n\pi x\,\sin n\pi\xi}{4 - n^2\pi^2}\right)\xi^2\, d\xi \nonumber\\
&= 2\sum_{n=1}^\infty \frac{\sin n\pi x}{4 - n^2\pi^2}\int_0^1 \xi^2\sin n\pi\xi\, d\xi \nonumber\\
&= 2\sum_{n=1}^\infty \frac{\sin n\pi x}{4 - n^2\pi^2}\left[\frac{(2 - n^2\pi^2)(-1)^n - 2}{n^3\pi^3}\right]. \tag{7.59}
\end{align}

We can compare this solution to the one we would obtain without introducing the Green's function, using the eigenfunction expansion method for solving boundary value problems which we saw earlier. This is demonstrated in the next example.

Example 7.7. Solve

\[
y'' + 4y = x^2, \quad x \in (0,1), \quad y(0) = y(1) = 0,
\]

using the eigenfunction expansion method.

We assume that the solution of this problem is in the form

\[
y(x) = \sum_{n=1}^\infty c_n\phi_n(x).
\]


Inserting this solution into the differential equation $L[y] = x^2$ gives

\begin{align}
x^2 = L\left[\sum_{n=1}^\infty c_n\sin n\pi x\right]
&= \sum_{n=1}^\infty c_n\left[\frac{d^2}{dx^2}\sin n\pi x + 4\sin n\pi x\right] \nonumber\\
&= \sum_{n=1}^\infty c_n\left[4 - n^2\pi^2\right]\sin n\pi x. \tag{7.60}
\end{align}

This is a Fourier sine series expansion of $f(x) = x^2$ on $[0,1]$. Namely,

\[
f(x) = \sum_{n=1}^\infty b_n\sin n\pi x.
\]

In order to determine the $c_n$'s in Equation (7.60), we will need the Fourier sine series expansion of $x^2$ on $[0,1]$. Thus, we need to compute

\begin{align}
b_n &= \frac{2}{1}\int_0^1 x^2\sin n\pi x\, dx \nonumber\\
&= 2\left[\frac{(2 - n^2\pi^2)(-1)^n - 2}{n^3\pi^3}\right], \quad n = 1, 2, \ldots. \tag{7.61}
\end{align}

The resulting Fourier sine series is

\[
x^2 = 2\sum_{n=1}^\infty \left[\frac{(2 - n^2\pi^2)(-1)^n - 2}{n^3\pi^3}\right]\sin n\pi x.
\]

Inserting this expansion in Equation (7.60), we find

\[
2\sum_{n=1}^\infty \left[\frac{(2 - n^2\pi^2)(-1)^n - 2}{n^3\pi^3}\right]\sin n\pi x
= \sum_{n=1}^\infty c_n\left[4 - n^2\pi^2\right]\sin n\pi x.
\]

Due to the linear independence of the eigenfunctions, we can solve for the unknown coefficients to obtain

\[
c_n = 2\,\frac{(2 - n^2\pi^2)(-1)^n - 2}{(4 - n^2\pi^2)\,n^3\pi^3}.
\]

Therefore, the solution using the eigenfunction expansion method is

\begin{align}
y(x) &= \sum_{n=1}^\infty c_n\phi_n(x) \nonumber\\
&= 2\sum_{n=1}^\infty \frac{\sin n\pi x}{4 - n^2\pi^2}\left[\frac{(2 - n^2\pi^2)(-1)^n - 2}{n^3\pi^3}\right]. \tag{7.62}
\end{align}

We note that the solution in this example is the same solution as we had obtained using the Green's function in series form in the previous example.

One remaining question is the following: Is there a closed form for the Green's function and the solution to this problem? The answer is yes!

Example 7.8. Find the closed form Green's function for the problem

\[
y'' + 4y = x^2, \quad x \in (0,1), \quad y(0) = y(1) = 0,
\]


and use it to obtain a closed form solution to this boundary value problem.

We note that the differential operator is a special case of that in Example 7.3. Namely, we pick $\omega = 2$. The Green's function was already found in that example. For this special case, we have

\[
G(x,\xi) =
\begin{cases}
-\dfrac{\sin 2(1-\xi)\sin 2x}{2\sin 2}, & 0 \le x \le \xi, \\[2ex]
-\dfrac{\sin 2(1-x)\sin 2\xi}{2\sin 2}, & \xi \le x \le 1.
\end{cases} \tag{7.63}
\]

Using this Green's function, the solution to the boundary value problem is readily computed:

\begin{align}
y(x) &= \int_0^1 G(x,\xi)f(\xi)\, d\xi \nonumber\\
&= -\int_0^x \frac{\sin 2(1-x)\sin 2\xi}{2\sin 2}\,\xi^2\, d\xi
 - \int_x^1 \frac{\sin 2(1-\xi)\sin 2x}{2\sin 2}\,\xi^2\, d\xi \nonumber\\
&= -\frac{1}{4\sin 2}\left[-x^2\sin 2 + (1 - \cos^2 x)\sin 2 + \sin x\cos x\,(1 + \cos 2)\right] \nonumber\\
&= -\frac{1}{8\sin 1\cos 1}\left[-x^2\sin 2 + 2\sin x\cos 1\,(\sin x\sin 1 + \cos x\cos 1)\right] \nonumber\\
&= \frac{x^2}{4} - \frac{\sin x\cos(1-x)}{4\sin 1}. \tag{7.64}
\end{align}

In Figure 7.3 we show a plot of this solution along with the first five terms of the series solution. The series solution converges quickly to the closed form solution.

Figure 7.3: Plots of the exact solution to Example 7.6 with the first five terms of the series solution.
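The rapid convergence shown in Figure 7.3 is easy to reproduce numerically. The following NumPy sketch (our own illustration, not from the text) compares partial sums of the series solution (7.62) with the closed form (7.64).

import numpy as np

x = np.linspace(0, 1, 201)

# Closed form solution of y'' + 4y = x^2, y(0) = y(1) = 0, Equation (7.64)
y_exact = x**2/4 - np.sin(x)*np.cos(1 - x)/(4*np.sin(1))

def y_series(x, N):
    # Partial sum of the eigenfunction expansion, Equations (7.59)/(7.62)
    y = np.zeros_like(x)
    for n in range(1, N + 1):
        cn = 2*((2 - n**2*np.pi**2)*(-1)**n - 2)/((4 - n**2*np.pi**2)*n**3*np.pi**3)
        y += cn*np.sin(n*np.pi*x)
    return y

for N in (1, 5, 50):
    print(N, np.max(np.abs(y_series(x, N) - y_exact)))   # error drops quickly with N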

As one last check, we solve the boundary value problem directly, as we had done in the last chapter.

Example 7.9. Solve directly:

\[
y'' + 4y = x^2, \quad x \in (0,1), \quad y(0) = y(1) = 0.
\]

The problem has the general solution

\[
y(x) = c_1\cos 2x + c_2\sin 2x + y_p(x),
\]


where $y_p$ is a particular solution of the nonhomogeneous differential equation. Using the Method of Undetermined Coefficients, we assume a solution of the form

\[
y_p(x) = Ax^2 + Bx + C.
\]

Inserting this guess into the nonhomogeneous equation, we have

\[
2A + 4(Ax^2 + Bx + C) = x^2.
\]

Thus, $B = 0$, $4A = 1$, and $2A + 4C = 0$. The solution of this system is

\[
A = \frac{1}{4}, \quad B = 0, \quad C = -\frac{1}{8}.
\]

So, the general solution of the nonhomogeneous differential equation is

\[
y(x) = c_1\cos 2x + c_2\sin 2x + \frac{x^2}{4} - \frac{1}{8}.
\]

We next determine the arbitrary constants using the boundary conditions. We have

\begin{align}
0 &= y(0) = c_1 - \frac{1}{8}, \nonumber\\
0 &= y(1) = c_1\cos 2 + c_2\sin 2 + \frac{1}{8}. \tag{7.65}
\end{align}

Thus, $c_1 = \frac{1}{8}$ and

\[
c_2 = -\frac{\frac{1}{8} + \frac{1}{8}\cos 2}{\sin 2}.
\]

Inserting these constants into the solution, we find the same solution as before:

\begin{align}
y(x) &= \frac{1}{8}\cos 2x - \left[\frac{\frac{1}{8} + \frac{1}{8}\cos 2}{\sin 2}\right]\sin 2x + \frac{x^2}{4} - \frac{1}{8} \nonumber\\
&= \frac{(\cos 2x - 1)\sin 2 - \sin 2x\,(1 + \cos 2)}{8\sin 2} + \frac{x^2}{4} \nonumber\\
&= \frac{(-2\sin^2 x)\,2\sin 1\cos 1 - \sin 2x\,(2\cos^2 1)}{16\sin 1\cos 1} + \frac{x^2}{4} \nonumber\\
&= -\frac{\sin^2 x\,\sin 1 + \sin x\cos x\,\cos 1}{4\sin 1} + \frac{x^2}{4} \nonumber\\
&= \frac{x^2}{4} - \frac{\sin x\cos(1-x)}{4\sin 1}. \tag{7.66}
\end{align}

7.2.4 The Generalized Green’s Function

When solving $Lu = f$ using eigenfunction expansions, there can be a problem when there are zero eigenvalues. Recall from Section 4.3 that the solution of this problem is given by

\begin{align}
y(x) &= \sum_{n=1}^\infty c_n\phi_n(x), \nonumber\\
c_n &= -\frac{\int_a^b f(x)\phi_n(x)\, dx}{\lambda_n\int_a^b \phi_n^2(x)\sigma(x)\, dx}. \tag{7.67}
\end{align}

Here the eigenfunctions, $\phi_n(x)$, satisfy the eigenvalue problem

\[
L\phi_n(x) = -\lambda_n\sigma(x)\phi_n(x), \quad x \in [a,b],
\]

subject to given homogeneous boundary conditions.

Note that if $\lambda_m = 0$ for some value $n = m$, then $c_m$ is undefined. However, if we require

\[
(f,\phi_m) = \int_a^b f(x)\phi_m(x)\, dx = 0,
\]

then there is no problem. This is a form of the Fredholm Alternative. Namely, if $\lambda_m = 0$ for some $m$, then there is no solution unless $(f,\phi_m) = 0$; i.e., $f$ is orthogonal to $\phi_m$. In this case, $c_m$ is arbitrary and there are an infinite number of solutions.

Example 7.10. $u'' = f(x)$, $u'(0) = 0$, $u'(L) = 0$.

The eigenfunctions satisfy $\phi_n''(x) = -\lambda_n\phi_n(x)$, $\phi_n'(0) = 0$, $\phi_n'(L) = 0$. There are the usual solutions,

\[
\phi_n(x) = \cos\frac{n\pi x}{L}, \quad \lambda_n = \left(\frac{n\pi}{L}\right)^2, \quad n = 1, 2, \ldots.
\]

However, when $\lambda_n = 0$, $\phi_0''(x) = 0$. So, $\phi_0(x) = Ax + B$. The boundary conditions are satisfied if $A = 0$. So, we can take $\phi_0(x) = 1$. Therefore, there exists an eigenfunction corresponding to a zero eigenvalue. Thus, in order to have a solution, we have to require

\[
\int_0^L f(x)\, dx = 0.
\]

Example 7.11. $u'' + \pi^2 u = \beta + 2x$, $u(0) = 0$, $u(1) = 0$.

In this problem we check to see if there is an eigenfunction with a zero eigenvalue. The eigenvalue problem is

\[
\phi'' + \pi^2\phi = 0, \quad \phi(0) = 0, \quad \phi(1) = 0.
\]

A solution satisfying this problem is easily found as

\[
\phi(x) = \sin\pi x.
\]

Therefore, there is a zero eigenvalue. For a solution to exist, we need to require

\begin{align}
0 &= \int_0^1 (\beta + 2x)\sin\pi x\, dx \nonumber\\
&= -\frac{\beta}{\pi}\cos\pi x\Big|_0^1 + 2\left[-\frac{1}{\pi}x\cos\pi x + \frac{1}{\pi^2}\sin\pi x\right]_0^1 \nonumber\\
&= \frac{2}{\pi}(\beta + 1). \tag{7.68}
\end{align}

Thus, either $\beta = -1$ or there are no solutions.
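The solvability integral in (7.68) is a one-line SymPy check. This sketch is ours, not part of the text; it confirms both the value of the integral and the conclusion $\beta = -1$.

import sympy as sp

x, beta = sp.symbols('x beta', real=True)

integral = sp.simplify(sp.integrate((beta + 2*x)*sp.sin(sp.pi*x), (x, 0, 1)))
print(integral)                              # 2*(beta + 1)/pi
print(sp.solve(sp.Eq(integral, 0), beta))    # [-1]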


Recall the series representation of the Green's function for a Sturm-Liouville problem in Equation (7.57),

\[
G(x,\xi) = \sum_{n=1}^\infty \frac{\phi_n(x)\phi_n(\xi)}{-\lambda_n N_n}. \tag{7.69}
\]

We see that if there is a zero eigenvalue, then we can also run into trouble, as one of the terms in the series is undefined.

Recall that the Green's function satisfies the differential equation $LG(x,\xi) = \delta(x-\xi)$, $x,\xi \in [a,b]$, and satisfies some appropriate set of boundary conditions. Using the above analysis, if there is a zero eigenvalue, then $L\phi_h(x) = 0$ for the corresponding eigenfunction $\phi_h$. In order for a solution of the Green's function equation to exist, the forcing $f(x) = \delta(x-\xi)$ would have to satisfy

\[
0 = (f,\phi_h) = \int_a^b \phi_h(x)\delta(x-\xi)\, dx = \phi_h(\xi),
\]

for any $\xi \in [a,b]$. Therefore, the Green's function does not exist. We can fix this problem by introducing a modified Green's function.

Let’s consider a modified differential equation,

LGM(x, ξ) = δ(x− ξ) + cφh(x)

for some constant c. Now, the orthogonality condition becomes

0 = ( f , φh) =∫ b

aφh(x)[δ(x− ξ) + cφh(x)] dx

= φh(ξ) + c∫ b

aφ2

h(x) dx. (7.70)

Thus, we can choose

c = − φh(ξ)∫ ba φ2

h(x) dx

Using the modified Green’s function, we can obtain solutions to Lu = f .We begin with Green’s identity from Section 4.2.2, given by∫ b

a(uLv− vLu) dx = [p(uv′ − vu′)]ba.

Letting v = GM, we have

∫ b

a(GML[u]− uL[GM]) dx =

[p(x)

(GM(x, ξ)u′(x)− u(x)

∂GM∂x

(x, ξ)

)]x=b

x=a.

Applying homogeneous boundary conditions, the right hand side vanishes.Then we have

0 =∫ b

a(GM(x, ξ)L[u(x)]− u(x)L[GM(x, ξ)]) dx

=∫ b

a(GM(x, ξ) f (x)− u(x)[δ(x− ξ) + cφh(x)]) dx

u(ξ) =∫ b

aGM(x, ξ) f (x) dx− c

∫ b

au(x)φh(x) dx. (7.71)


Noting that $u(x) = c_1\phi_h(x) + u_p(x)$, with $u_p$ orthogonal to $\phi_h$, the last integral gives

\[
-c\int_a^b u(x)\phi_h(x)\, dx
= \frac{\phi_h(\xi)}{\int_a^b \phi_h^2(x)\, dx}\, c_1\int_a^b \phi_h^2(x)\, dx = c_1\phi_h(\xi).
\]

Therefore, the solution can be written as

\[
u(x) = \int_a^b f(\xi)G_M(x,\xi)\, d\xi + c_1\phi_h(x).
\]

Here we see that there are an infinite number of solutions when solutions exist.

Example 7.12. Use the modified Green's function to solve $u'' + \pi^2 u = 2x - 1$, $u(0) = 0$, $u(1) = 0$.

We have already seen that a solution exists for this problem, where we have set $\beta = -1$ in Example 7.11.

We construct the modified Green's function from the solutions of

\[
\phi_n'' + \pi^2\phi_n = -\lambda_n\phi_n, \quad \phi(0) = 0, \quad \phi(1) = 0.
\]

The general solutions of this equation are

\[
\phi_n(x) = c_1\cos\sqrt{\pi^2 + \lambda_n}\,x + c_2\sin\sqrt{\pi^2 + \lambda_n}\,x.
\]

Applying the boundary conditions, we have $c_1 = 0$ and $\sqrt{\pi^2 + \lambda_n} = n\pi$. Thus, the eigenfunctions and eigenvalues are

\[
\phi_n(x) = \sin n\pi x, \quad \lambda_n = (n^2 - 1)\pi^2, \quad n = 1, 2, 3, \ldots.
\]

Note that $\lambda_1 = 0$.

The modified Green's function satisfies

\[
\frac{d^2}{dx^2}G_M(x,\xi) + \pi^2 G_M(x,\xi) = \delta(x-\xi) + c\phi_h(x),
\]

where

\begin{align}
c &= -\frac{\phi_1(\xi)}{\int_0^1 \phi_1^2(x)\, dx} \nonumber\\
&= -\frac{\sin\pi\xi}{\int_0^1 \sin^2\pi x\, dx} \nonumber\\
&= -2\sin\pi\xi. \tag{7.72}
\end{align}

We need to solve for $G_M(x,\xi)$. The modified Green's function satisfies

\[
\frac{d^2}{dx^2}G_M(x,\xi) + \pi^2 G_M(x,\xi) = \delta(x-\xi) - 2\sin\pi\xi\sin\pi x,
\]

and the boundary conditions $G_M(0,\xi) = 0$ and $G_M(1,\xi) = 0$. We assume an eigenfunction expansion,

\[
G_M(x,\xi) = \sum_{n=1}^\infty c_n(\xi)\sin n\pi x.
\]


Then,

\begin{align}
\delta(x-\xi) - 2\sin\pi\xi\sin\pi x
&= \frac{d^2}{dx^2}G_M(x,\xi) + \pi^2 G_M(x,\xi) \nonumber\\
&= -\sum_{n=1}^\infty \lambda_n c_n(\xi)\sin n\pi x. \tag{7.73}
\end{align}

The coefficients are found as

\begin{align}
-\lambda_n c_n &= 2\int_0^1 \left[\delta(x-\xi) - 2\sin\pi\xi\sin\pi x\right]\sin n\pi x\, dx \nonumber\\
&= 2\sin n\pi\xi - 2\sin\pi\xi\,\delta_{n1}. \tag{7.74}
\end{align}

Therefore, we can take $c_1 = 0$, and $c_n = -\dfrac{2\sin n\pi\xi}{\lambda_n}$ for $n > 1$. We have found the modified Green's function as

\[
G_M(x,\xi) = -2\sum_{n=2}^\infty \frac{\sin n\pi x\,\sin n\pi\xi}{\lambda_n}.
\]

We can use this to find the solution. Namely, we have (for $c_1 = 0$)

\begin{align}
u(x) &= \int_0^1 (2\xi - 1)G_M(x,\xi)\, d\xi \nonumber\\
&= -2\sum_{n=2}^\infty \frac{\sin n\pi x}{\lambda_n}\int_0^1 (2\xi - 1)\sin n\pi\xi\, d\xi \nonumber\\
&= -2\sum_{n=2}^\infty \frac{\sin n\pi x}{(n^2 - 1)\pi^2}\left[-\frac{1}{n\pi}(2\xi - 1)\cos n\pi\xi + \frac{2}{n^2\pi^2}\sin n\pi\xi\right]_0^1 \nonumber\\
&= 2\sum_{n=2}^\infty \frac{1 + \cos n\pi}{n(n^2 - 1)\pi^3}\sin n\pi x. \tag{7.75}
\end{align}

We can also solve this problem exactly. The general solution is given by

\[
u(x) = c_1\sin\pi x + c_2\cos\pi x + \frac{2x - 1}{\pi^2}.
\]

Imposing the boundary conditions, we obtain

\[
u(x) = c_1\sin\pi x + \frac{1}{\pi^2}\cos\pi x + \frac{2x - 1}{\pi^2}.
\]

Notice that there are an infinite number of solutions. Choosing $c_1 = 0$, we have the particular solution

\[
u(x) = \frac{1}{\pi^2}\cos\pi x + \frac{2x - 1}{\pi^2}.
\]

Figure 7.4: The solution for Example 7.12.

In Figure 7.4 we plot this solution and that obtained using the modified Green's function. The result is that they are in complete agreement.
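The agreement seen in Figure 7.4 can be checked numerically with a short NumPy sketch (our own illustration; the truncation at 200 terms is arbitrary). It compares the partial sum of (7.75) with the exact particular solution for $c_1 = 0$.

import numpy as np

x = np.linspace(0, 1, 201)

# Partial sum of the modified Green's function series solution (7.75)
u_series = np.zeros_like(x)
for n in range(2, 200):
    u_series += 2*(1 + np.cos(n*np.pi))/(n*(n**2 - 1)*np.pi**3)*np.sin(n*np.pi*x)

# Exact particular solution with c1 = 0
u_exact = np.cos(np.pi*x)/np.pi**2 + (2*x - 1)/np.pi**2

print(np.max(np.abs(u_series - u_exact)))   # small; the two solutions agree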

7.3 The Nonhomogeneous Heat Equation

Boundary value Green’s functions do not only arise in the so-lution of nonhomogeneous ordinary differential equations. They are alsoimportant in arriving at the solution of nonhomogeneous partial differen-tial equations. In this section we will show that this is the case by turningto the nonhomogeneous heat equation.


7.3.1 Nonhomogeneous Time Independent Boundary Conditions

Consider the nonhomogeneous heat equation with nonhomogeneous boundary conditions:

\begin{align}
u_t - ku_{xx} &= h(x), \quad 0 \le x \le L, \quad t > 0, \nonumber\\
u(0,t) &= a, \quad u(L,t) = b, \nonumber\\
u(x,0) &= f(x). \tag{7.76}
\end{align}

We are interested in finding a particular solution to this initial-boundary value problem. In fact, we can represent the solution to the general nonhomogeneous heat equation as the sum of two solutions that solve different problems.

First, we let $v(x,t)$ satisfy the homogeneous problem

\begin{align}
v_t - kv_{xx} &= 0, \quad 0 \le x \le L, \quad t > 0, \nonumber\\
v(0,t) &= 0, \quad v(L,t) = 0, \nonumber\\
v(x,0) &= g(x), \tag{7.77}
\end{align}

which has homogeneous boundary conditions.

We will also need a steady state solution to the original problem. A steady state solution is one that satisfies $u_t = 0$. (The steady state solution, $w(x)$, satisfies a nonhomogeneous differential equation with nonhomogeneous boundary conditions. The transient solution, $v(x,t)$, satisfies the homogeneous heat equation with homogeneous boundary conditions and a modified initial condition.) Let $w(x)$ be the steady state solution. It satisfies the problem

\begin{align}
-kw_{xx} &= h(x), \quad 0 \le x \le L, \nonumber\\
w(0) &= a, \quad w(L) = b. \tag{7.78}
\end{align}

Now consider $u(x,t) = w(x) + v(x,t)$, the sum of the steady state solution, $w(x)$, and the transient solution, $v(x,t)$. We first note that $u(x,t)$ satisfies the nonhomogeneous heat equation,

\begin{align}
u_t - ku_{xx} &= (w + v)_t - k(w + v)_{xx} \nonumber\\
&= v_t - kv_{xx} - kw_{xx} = h(x). \tag{7.79}
\end{align}

The boundary conditions are also satisfied. Evaluating $u(x,t)$ at $x = 0$ and $x = L$, we have

\begin{align}
u(0,t) &= w(0) + v(0,t) = a, \nonumber\\
u(L,t) &= w(L) + v(L,t) = b. \tag{7.80}
\end{align}

Finally, the initial condition gives

\[
u(x,0) = w(x) + v(x,0) = w(x) + g(x).
\]

Thus, if we set $g(x) = f(x) - w(x)$, so that the transient solution satisfies $v(x,0) = f(x) - w(x)$, then $u(x,t) = w(x) + v(x,t)$ will be the solution of the nonhomogeneous boundary value problem. We already know how to solve the homogeneous problem to obtain $v(x,t)$. So, we only need to find the steady state solution, $w(x)$.

There are several methods we could use to solve Equation (7.78) for the steady state solution. One is the Method of Variation of Parameters, which is closely related to the Green's function method for boundary value problems which we described in the last several sections. However, we will just integrate the differential equation for the steady state solution directly to find the solution. From this solution we will be able to read off the Green's function.

Integrating the steady state equation (7.78) once yields

\[
\frac{dw}{dx} = -\frac{1}{k}\int_0^x h(z)\, dz + A,
\]

where we have been careful to include the integration constant, $A = w'(0)$. Integrating again, we obtain

\[
w(x) = -\frac{1}{k}\int_0^x\left(\int_0^y h(z)\, dz\right) dy + Ax + B,
\]

where a second integration constant has been introduced. This gives the general solution of Equation (7.78).

The boundary conditions can now be used to determine the constants. It is clear that $B = a$ for the condition at $x = 0$ to be satisfied. The second condition gives

\[
b = w(L) = -\frac{1}{k}\int_0^L\left(\int_0^y h(z)\, dz\right) dy + AL + a.
\]

Solving for $A$, we have

\[
A = \frac{1}{kL}\int_0^L\left(\int_0^y h(z)\, dz\right) dy + \frac{b - a}{L}.
\]

Inserting the integration constants, the solution of the boundary value problem for the steady state solution is then

\[
w(x) = -\frac{1}{k}\int_0^x\left(\int_0^y h(z)\, dz\right) dy
 + \frac{x}{kL}\int_0^L\left(\int_0^y h(z)\, dz\right) dy + \frac{b - a}{L}x + a.
\]

This is sufficient for an answer, but it can be written in a more compact form. In fact, we will show that the solution can be written in a way that a Green's function can be identified.

First, we rewrite the double integrals as single integrals. We can do this using integration by parts. Consider the integral in the first term of the solution,

\[
I = \int_0^x\left(\int_0^y h(z)\, dz\right) dy.
\]

Setting $u = \int_0^y h(z)\, dz$ and $dv = dy$ in the standard integration by parts formula, we obtain

\begin{align}
I &= \int_0^x\left(\int_0^y h(z)\, dz\right) dy \nonumber\\
&= \left. y\int_0^y h(z)\, dz\right|_0^x - \int_0^x yh(y)\, dy \nonumber\\
&= \int_0^x (x - y)h(y)\, dy. \tag{7.81}
\end{align}


Thus, the double integral has now collapsed to a single integral. Replacing the integral in the solution, the steady state solution becomes

\[
w(x) = -\frac{1}{k}\int_0^x (x - y)h(y)\, dy + \frac{x}{kL}\int_0^L (L - y)h(y)\, dy + \frac{b - a}{L}x + a.
\]

We can make a further simplification by combining these integrals. This can be done if the integration range, $[0,L]$, in the second integral is split into two pieces, $[0,x]$ and $[x,L]$. Writing the second integral as two integrals over these subintervals, we obtain

\begin{align}
w(x) &= -\frac{1}{k}\int_0^x (x - y)h(y)\, dy + \frac{x}{kL}\int_0^x (L - y)h(y)\, dy \nonumber\\
&\quad + \frac{x}{kL}\int_x^L (L - y)h(y)\, dy + \frac{b - a}{L}x + a. \tag{7.82}
\end{align}

Next, we rewrite the integrands,

\begin{align}
w(x) &= -\frac{1}{k}\int_0^x \frac{L(x - y)}{L}h(y)\, dy + \frac{1}{k}\int_0^x \frac{x(L - y)}{L}h(y)\, dy \nonumber\\
&\quad + \frac{1}{k}\int_x^L \frac{x(L - y)}{L}h(y)\, dy + \frac{b - a}{L}x + a. \tag{7.83}
\end{align}

It can now be seen how we can combine the first two integrals:

\[
w(x) = \frac{1}{k}\int_0^x \frac{y(L - x)}{L}h(y)\, dy + \frac{1}{k}\int_x^L \frac{x(L - y)}{L}h(y)\, dy + \frac{b - a}{L}x + a.
\]

The resulting integrals now take on a similar form and this solution can be written compactly as

\[
w(x) = \int_0^L G(x,y)\,\frac{h(y)}{k}\, dy + \frac{b - a}{L}x + a,
\]

where

\[
G(x,y) =
\begin{cases}
\dfrac{x(L - y)}{L}, & 0 \le x \le y, \\[2ex]
\dfrac{y(L - x)}{L}, & y \le x \le L,
\end{cases}
\]

is the Green's function for the steady state problem.

The full solution to the original problem can be found by adding to this steady state solution a solution of the homogeneous problem,

\begin{align}
u_t - ku_{xx} &= 0, \quad 0 \le x \le L, \quad t > 0, \nonumber\\
u(0,t) &= 0, \quad u(L,t) = 0, \nonumber\\
u(x,0) &= f(x) - w(x). \tag{7.84}
\end{align}

Example 7.13. Solve the nonhomogeneous problem,

\begin{align}
u_t - u_{xx} &= 10, \quad 0 \le x \le 1, \quad t > 0, \nonumber\\
u(0,t) &= 20, \quad u(1,t) = 0, \nonumber\\
u(x,0) &= 2x(1 - x). \tag{7.85}
\end{align}

In this problem we have a rod initially at a temperature of $u(x,0) = 2x(1-x)$. The ends of the rod are maintained at fixed temperatures and the bar is continually heated by a constant source, represented by the source term, 10.

First, we find the steady state temperature, $w(x)$, satisfying

\begin{align}
-w_{xx} &= 10, \quad 0 \le x \le 1, \nonumber\\
w(0) &= 20, \quad w(1) = 0. \tag{7.86}
\end{align}

Using the general solution, we have

\[
w(x) = \int_0^1 10\,G(x,y)\, dy - 20x + 20,
\]

where

\[
G(x,y) =
\begin{cases}
x(1 - y), & 0 \le x \le y, \\
y(1 - x), & y \le x \le 1.
\end{cases}
\]

We compute the solution:

\begin{align}
w(x) &= \int_0^x 10y(1 - x)\, dy + \int_x^1 10x(1 - y)\, dy - 20x + 20 \nonumber\\
&= 5(x - x^2) - 20x + 20 \nonumber\\
&= 20 - 15x - 5x^2. \tag{7.87}
\end{align}

Checking this solution, it satisfies both the steady state equation and the boundary conditions.

The transient solution satisfies

\begin{align}
v_t - v_{xx} &= 0, \quad 0 \le x \le 1, \quad t > 0, \nonumber\\
v(0,t) &= 0, \quad v(1,t) = 0, \nonumber\\
v(x,0) &= 2x(1 - x) - w(x) = 3x^2 + 17x - 20. \tag{7.88}
\end{align}

Recall that we have determined the solution of this problem as

\[
v(x,t) = \sum_{n=1}^\infty b_n e^{-n^2\pi^2 t}\sin n\pi x,
\]

where the Fourier sine coefficients are given in terms of the initial temperature distribution,

\[
b_n = 2\int_0^1 (3x^2 + 17x - 20)\sin n\pi x\, dx, \quad n = 1, 2, \ldots.
\]

Therefore, the full solution is

\[
u(x,t) = \sum_{n=1}^\infty b_n e^{-n^2\pi^2 t}\sin n\pi x + 20 - 15x - 5x^2.
\]

Note that for large $t$ the transient solution tends to zero and we are left with the steady state solution, as expected.
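The steady state and the transient initial condition in Example 7.13 can be double-checked symbolically. This SymPy sketch is our own (not from the text); it evaluates the Green's function integral for $w(x)$ and then forms $v(x,0) = u(x,0) - w(x)$.

import sympy as sp

x, y = sp.symbols('x y', real=True)

# Steady state temperature from the Green's function form, with k = 1, h = 10
w = sp.integrate(10*y*(1 - x), (y, 0, x)) + sp.integrate(10*x*(1 - y), (y, x, 1)) - 20*x + 20
w = sp.expand(w)
print(w)                                              # expect 20 - 15*x - 5*x**2
print(-sp.diff(w, x, 2), w.subs(x, 0), w.subs(x, 1))  # 10, 20, 0

# Initial condition for the transient problem
print(sp.expand(2*x*(1 - x) - w))                     # 3*x**2 + 17*x - 20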


7.3.2 Time Dependent Boundary Conditions

In the last section we solved problems with time independent boundary conditions using equilibrium solutions satisfying the steady state heat equation and nonhomogeneous boundary conditions. When the boundary conditions are time dependent, we can also convert the problem to an auxiliary problem with homogeneous boundary conditions.

Consider the problem

\begin{align}
u_t - ku_{xx} &= h(x), \quad 0 \le x \le L, \quad t > 0, \nonumber\\
u(0,t) &= a(t), \quad u(L,t) = b(t), \quad t > 0, \nonumber\\
u(x,0) &= f(x), \quad 0 \le x \le L. \tag{7.89}
\end{align}

We define $u(x,t) = v(x,t) + w(x,t)$, where $w(x,t)$ is a modified form of the steady state solution from the last section,

\[
w(x,t) = a(t) + \frac{b(t) - a(t)}{L}x.
\]

Noting that

\begin{align}
u_t &= v_t + a'(t) + \frac{b'(t) - a'(t)}{L}x, \nonumber\\
u_{xx} &= v_{xx}, \tag{7.90}
\end{align}

we find that $v(x,t)$ is a solution of the problem

\begin{align}
v_t - kv_{xx} &= h(x) - \left[a'(t) + \frac{b'(t) - a'(t)}{L}x\right], \quad 0 \le x \le L, \quad t > 0, \nonumber\\
v(0,t) &= 0, \quad v(L,t) = 0, \quad t > 0, \nonumber\\
v(x,0) &= f(x) - \left[a(0) + \frac{b(0) - a(0)}{L}x\right], \quad 0 \le x \le L. \tag{7.91}
\end{align}

Thus, we have converted the original problem into a nonhomogeneous heat equation with homogeneous boundary conditions, a new source term, and a new initial condition.

Example 7.14. Solve the problem

\begin{align}
u_t - u_{xx} &= x, \quad 0 \le x \le 1, \quad t > 0, \nonumber\\
u(0,t) &= 2, \quad u(1,t) = t, \quad t > 0, \nonumber\\
u(x,0) &= 3\sin 2\pi x + 2(1 - x), \quad 0 \le x \le 1. \tag{7.92}
\end{align}

We first define

\[
u(x,t) = v(x,t) + 2 + (t - 2)x.
\]

Then, $v(x,t)$ satisfies the problem

\begin{align}
v_t - v_{xx} &= 0, \quad 0 \le x \le 1, \quad t > 0, \nonumber\\
v(0,t) &= 0, \quad v(1,t) = 0, \quad t > 0, \nonumber\\
v(x,0) &= 3\sin 2\pi x, \quad 0 \le x \le 1. \tag{7.93}
\end{align}


This problem is easily solved. The general solution is given by

\[
v(x,t) = \sum_{n=1}^\infty b_n\sin n\pi x\, e^{-n^2\pi^2 t}.
\]

We can see that the Fourier coefficients all vanish except for $b_2 = 3$. This gives $v(x,t) = 3\sin 2\pi x\, e^{-4\pi^2 t}$ and, therefore, we have found the solution

\[
u(x,t) = 3\sin 2\pi x\, e^{-4\pi^2 t} + 2 + (t - 2)x.
\]
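It is worth verifying that this closed form really satisfies the full problem (7.92). The following SymPy sketch is ours, not from the text.

import sympy as sp

x, t = sp.symbols('x t', real=True)
u = 3*sp.sin(2*sp.pi*x)*sp.exp(-4*sp.pi**2*t) + 2 + (t - 2)*x

print(sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) - x))              # 0 (PDE)
print(sp.simplify(u.subs(x, 0) - 2), sp.simplify(u.subs(x, 1) - t))   # 0 0 (BCs)
print(sp.simplify(u.subs(t, 0) - (3*sp.sin(2*sp.pi*x) + 2*(1 - x))))  # 0 (IC)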

7.4 Green’s Functions for 1D Partial Differential Equations

In Section 7.1 we encountered the initial value Green's function for initial value problems for ordinary differential equations. In that case we were able to express the solution of the differential equation $L[y] = f$ in the form

\[
y(t) = \int G(t,\tau)f(\tau)\, d\tau,
\]

where the Green's function $G(t,\tau)$ was used to handle the nonhomogeneous term in the differential equation. In a similar spirit, we can introduce Green's functions of different types to handle nonhomogeneous terms, nonhomogeneous boundary conditions, or nonhomogeneous initial conditions. Occasionally, we will stop, rearrange the solutions of different problems, and recast the solution so as to identify the Green's function for the problem.

In this section we will rewrite the solutions of the heat equation and wave equation on a finite interval to obtain an initial value Green's function. Assuming homogeneous boundary conditions and a homogeneous differential operator, we can write the solution of the heat equation in the form

\[
u(x,t) = \int_0^L G(x,\xi;t,t_0)f(\xi)\, d\xi,
\]

where $u(x,t_0) = f(x)$, and the solution of the wave equation as

\[
u(x,t) = \int_0^L G_c(x,\xi;t,t_0)f(\xi)\, d\xi + \int_0^L G_s(x,\xi;t,t_0)g(\xi)\, d\xi,
\]

where $u(x,t_0) = f(x)$ and $u_t(x,t_0) = g(x)$. The functions $G(x,\xi;t,t_0)$, $G_c(x,\xi;t,t_0)$, and $G_s(x,\xi;t,t_0)$ are initial value Green's functions, and we will need to explore some more methods before we can discuss the properties of these functions. [For example, see Section.]

We will now turn to showing that for the solutions of the one dimensional heat and wave equations with fixed, homogeneous boundary conditions, we can construct these particular Green's functions.

7.4.1 Heat Equation

In Section 3.5 we obtained the solution to the one dimensional heat equation on a finite interval satisfying homogeneous Dirichlet conditions,

\begin{align}
u_t &= ku_{xx}, \quad 0 < t, \quad 0 \le x \le L, \nonumber\\
u(x,0) &= f(x), \quad 0 < x < L, \nonumber\\
u(0,t) &= 0, \quad t > 0, \nonumber\\
u(L,t) &= 0, \quad t > 0. \tag{7.94}
\end{align}

The solution we found was the Fourier sine series

\[
u(x,t) = \sum_{n=1}^\infty b_n e^{\lambda_n kt}\sin\frac{n\pi x}{L},
\]

where

\[
\lambda_n = -\left(\frac{n\pi}{L}\right)^2
\]

and the Fourier sine coefficients are given in terms of the initial temperature distribution,

\[
b_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\, dx, \quad n = 1, 2, \ldots.
\]

Inserting the coefficients $b_n$ into the solution, we have

\[
u(x,t) = \sum_{n=1}^\infty\left(\frac{2}{L}\int_0^L f(\xi)\sin\frac{n\pi\xi}{L}\, d\xi\right)e^{\lambda_n kt}\sin\frac{n\pi x}{L}.
\]

Interchanging the sum and integration, we obtain

\[
u(x,t) = \int_0^L\left(\frac{2}{L}\sum_{n=1}^\infty \sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\,e^{\lambda_n kt}\right)f(\xi)\, d\xi.
\]

This solution is of the form

\[
u(x,t) = \int_0^L G(x,\xi;t,0)f(\xi)\, d\xi.
\]

Here the function $G(x,\xi;t,0)$ is the initial value Green's function for the heat equation,

\[
G(x,\xi;t,0) = \frac{2}{L}\sum_{n=1}^\infty \sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\,e^{\lambda_n kt},
\]

which involves a sum over the eigenfunctions of the spatial eigenvalue problem, $X_n(x) = \sin\frac{n\pi x}{L}$.
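A truncated version of this initial value Green's function is easy to use numerically. The sketch below is our own illustration (the initial profile, truncation level, and simple Riemann-sum quadrature are arbitrary choices); it checks that integrating the truncated kernel against $f$ reproduces the usual Fourier sine series solution.

import numpy as np

L, k, t, N = 1.0, 1.0, 0.05, 50
x = np.linspace(0, L, 101)
xi = np.linspace(0, L, 401)
dxi = xi[1] - xi[0]
f = xi*(L - xi)                     # sample initial profile (our choice)

# Solution via the initial value Green's function: u(x,t) = int_0^L G(x,xi;t,0) f(xi) dxi
u_green = np.zeros_like(x)
for j, xv in enumerate(x):
    G = sum((2/L)*np.sin(n*np.pi*xv/L)*np.sin(n*np.pi*xi/L)*np.exp(-k*(n*np.pi/L)**2*t)
            for n in range(1, N + 1))
    u_green[j] = np.sum(G*f)*dxi    # simple Riemann sum for the xi-integral

# Solution via the usual Fourier sine series, for comparison
u_series = np.zeros_like(x)
for n in range(1, N + 1):
    bn = (2/L)*np.sum(f*np.sin(n*np.pi*xi/L))*dxi
    u_series += bn*np.exp(-k*(n*np.pi/L)**2*t)*np.sin(n*np.pi*x/L)

print(np.max(np.abs(u_green - u_series)))   # agreement up to quadrature error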

7.4.2 Wave Equation

The solution of the one dimensional wave equation (2.2),

\begin{align}
u_{tt} &= c^2 u_{xx}, \quad 0 < t, \quad 0 \le x \le L, \nonumber\\
u(0,t) &= 0, \quad u(L,t) = 0, \quad t > 0, \nonumber\\
u(x,0) &= f(x), \quad u_t(x,0) = g(x), \quad 0 < x < L, \tag{7.95}
\end{align}

was found as

\[
u(x,t) = \sum_{n=1}^\infty\left[A_n\cos\frac{n\pi ct}{L} + B_n\sin\frac{n\pi ct}{L}\right]\sin\frac{n\pi x}{L}.
\]


The Fourier coefficients were determined from the initial conditions,

\begin{align}
f(x) &= \sum_{n=1}^\infty A_n\sin\frac{n\pi x}{L}, \nonumber\\
g(x) &= \sum_{n=1}^\infty \frac{n\pi c}{L}B_n\sin\frac{n\pi x}{L}, \tag{7.96}
\end{align}

as

\begin{align}
A_n &= \frac{2}{L}\int_0^L f(\xi)\sin\frac{n\pi\xi}{L}\, d\xi, \nonumber\\
B_n &= \frac{L}{n\pi c}\,\frac{2}{L}\int_0^L g(\xi)\sin\frac{n\pi\xi}{L}\, d\xi. \tag{7.97}
\end{align}

Inserting these coefficients into the solution and interchanging integration with summation, we have

\begin{align}
u(x,t) &= \int_0^L\left[\frac{2}{L}\sum_{n=1}^\infty \sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\cos\frac{n\pi ct}{L}\right]f(\xi)\, d\xi \nonumber\\
&\quad + \int_0^L\left[\frac{2}{L}\sum_{n=1}^\infty \sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\,\frac{\sin\frac{n\pi ct}{L}}{n\pi c/L}\right]g(\xi)\, d\xi \nonumber\\
&= \int_0^L G_c(x,\xi;t,0)f(\xi)\, d\xi + \int_0^L G_s(x,\xi;t,0)g(\xi)\, d\xi. \tag{7.98}
\end{align}

In this case, we have defined two Green's functions,

\begin{align}
G_c(x,\xi;t,0) &= \frac{2}{L}\sum_{n=1}^\infty \sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\cos\frac{n\pi ct}{L}, \nonumber\\
G_s(x,\xi;t,0) &= \frac{2}{L}\sum_{n=1}^\infty \sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\,\frac{\sin\frac{n\pi ct}{L}}{n\pi c/L}. \tag{7.99}
\end{align}

The first, $G_c$, provides the response to the initial profile and the second, $G_s$, to the initial velocity.
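As a quick numerical sanity check of $G_c$ (our own illustration; the single-mode test data and Riemann-sum quadrature are arbitrary choices), note that for $f(x) = \sin(\pi x/L)$ and $g = 0$ the solution should be $\sin(\pi x/L)\cos(\pi ct/L)$.

import numpy as np

L, c, t, N = 1.0, 1.0, 0.3, 40
x = np.linspace(0, L, 101)
xi = np.linspace(0, L, 401)
dxi = xi[1] - xi[0]
f = np.sin(np.pi*xi/L)              # single-mode initial profile (test case)

u = np.zeros_like(x)
for j, xv in enumerate(x):
    Gc = sum((2/L)*np.sin(n*np.pi*xv/L)*np.sin(n*np.pi*xi/L)*np.cos(n*np.pi*c*t/L)
             for n in range(1, N + 1))
    u[j] = np.sum(Gc*f)*dxi         # integrate against the initial profile

u_exact = np.sin(np.pi*x/L)*np.cos(np.pi*c*t/L)
print(np.max(np.abs(u - u_exact)))  # small (quadrature error only)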

7.5 Green’s Functions for the 2D Poisson Equation

Figure 7.5: Domain for solving Poisson's equation, bounded by the curve $C$, with field point $\mathbf{r} = (x,y)$ and source point $\mathbf{r}' = (\xi,\eta)$.

In this section we consider the two dimensional Poisson equation with Dirichlet boundary conditions. We consider the problem

\begin{align}
\nabla^2 u &= f, \quad \text{in } D, \nonumber\\
u &= g, \quad \text{on } C, \tag{7.100}
\end{align}

for the domain in Figure 7.5.

We seek to solve this problem using a Green's function. As in earlier discussions, the Green's function satisfies the differential equation and homogeneous boundary conditions. The associated problem is given by

\begin{align}
\nabla^2 G &= \delta(\xi - x, \eta - y), \quad \text{in } D, \nonumber\\
G &\equiv 0, \quad \text{on } C. \tag{7.101}
\end{align}


However, we need to be careful as to which variables appear in the differentiation. Many times we just make the adjustment after the derivation of the solution, assuming that the Green's function is symmetric in its arguments. However, this is not always the case and depends on things such as the self-adjointness of the problem. Thus, we will assume that the Green's function satisfies

\[
\nabla^2_{\mathbf{r}'}G = \delta(\xi - x, \eta - y),
\]

where the notation $\nabla_{\mathbf{r}'}$ means differentiation with respect to the variables $\xi$ and $\eta$. Thus,

\[
\nabla^2_{\mathbf{r}'}G = \frac{\partial^2 G}{\partial\xi^2} + \frac{\partial^2 G}{\partial\eta^2}.
\]

With this notation in mind, we now apply Green's second identity for two dimensions from Problem 8 in Chapter 9. We have

\[
\int_D (u\nabla^2_{\mathbf{r}'}G - G\nabla^2_{\mathbf{r}'}u)\, dA'
= \int_C (u\nabla_{\mathbf{r}'}G - G\nabla_{\mathbf{r}'}u)\cdot d\mathbf{s}'. \tag{7.102}
\]

Inserting the differential equations, the left hand side of the equation becomes

\begin{align}
\int_D\left[u\nabla^2_{\mathbf{r}'}G - G\nabla^2_{\mathbf{r}'}u\right] dA'
&= \int_D \left[u(\xi,\eta)\delta(\xi - x, \eta - y) - G(x,y;\xi,\eta)f(\xi,\eta)\right] d\xi\, d\eta \nonumber\\
&= u(x,y) - \int_D G(x,y;\xi,\eta)f(\xi,\eta)\, d\xi\, d\eta. \tag{7.103}
\end{align}

Using the boundary conditions, $u(\xi,\eta) = g(\xi,\eta)$ on $C$ and $G(x,y;\xi,\eta) = 0$ on $C$, the right hand side of the equation becomes

\[
\int_C (u\nabla_{\mathbf{r}'}G - G\nabla_{\mathbf{r}'}u)\cdot d\mathbf{s}'
= \int_C g(\xi,\eta)\nabla_{\mathbf{r}'}G\cdot d\mathbf{s}'. \tag{7.104}
\]

Solving for $u(x,y)$, we have the solution written in terms of the Green's function,

\[
u(x,y) = \int_D G(x,y;\xi,\eta)f(\xi,\eta)\, d\xi\, d\eta + \int_C g(\xi,\eta)\nabla_{\mathbf{r}'}G\cdot d\mathbf{s}'.
\]

Now we need to find the Green's function. We find the Green's functions for several examples.

Example 7.15. Find the two dimensional Green's function for the radially symmetric Poisson equation; that is, we seek solutions that are $\theta$-independent.

The problem we need to solve in order to find the Green's function involves writing the Laplacian in polar coordinates,

\[
v_{rr} + \frac{1}{r}v_r = \delta(\mathbf{r}).
\]

For $r \ne 0$, this is a Cauchy-Euler type of differential equation. The general solution is $v(r) = A\ln r + B$.


Due to the singularity at $r = 0$, we integrate the equation over a small disc of radius $\epsilon$ centered at the origin and apply the two dimensional Divergence Theorem. In particular, we have

\begin{align}
1 &= \int_{D_\epsilon} \delta(\mathbf{r})\, dA \nonumber\\
&= \int_{D_\epsilon} \nabla^2 v\, dA \nonumber\\
&= \oint_{C_\epsilon} \nabla v\cdot d\mathbf{s} \nonumber\\
&= \oint_{C_\epsilon} \frac{\partial v}{\partial r}\, dS = 2\pi A. \tag{7.105}
\end{align}

Therefore, $A = \frac{1}{2\pi}$. We note that $B$ is arbitrary, so we will take $B = 0$ in the remaining discussion.

Using this solution for a source of the form $\delta(\mathbf{r} - \mathbf{r}')$, we obtain the Green's function for Poisson's equation as

\[
G(\mathbf{r},\mathbf{r}') = \frac{1}{2\pi}\ln|\mathbf{r} - \mathbf{r}'|.
\]

Example 7.16. Find the Green’s function for the infinite plane. Green’s function for the infinite plane.

From Figure 7.5 we have |r − r′| =√(x− ξ)2 + (y− η)2. Therefore, the

Green’s function from the last example gives

G(x, y, ξ, η) =1

4πln((ξ − x)2 + (η − y)2).

Example 7.17. Find the Green’s function for the half plane, (x, y)|y > 0, usingthe Method of Images Green’s function for the half plane using

the Method of Images.This problem can be solved using the result for the Green’s function for theinfinite plane. We use the Method of Images to construct a function such thatG = 0 on the boundary, y = 0. Namely, we use the image of the point (x, y) withrespect to the x-axis, (x,−y).

Imagine that the Green’s function G(x, y, ξ, η) represents a point charge at (x, y)and G(x, y, ξ, η) provides the electric potential, or response, at (ξ, η). This singlecharge cannot yield a zero potential along the x-axis (y=0). One needs an additionalcharge to yield a zero equipotential line. This is shown in Figure 7.6.

x

y

G(x, 0; ξ, η) = 0

+

(x, y)

(x,−y)

Figure 7.6: The Method of Images: Thesource and image source for the Green’sfunction for the half plane. Imagine twoopposite charges forming a dipole. Theelectric field lines are depicted indicat-ing that the electric potential, or Green’sfunction, is constant along y = 0.

The positive charge has a source of δ(r − r′) at r = (x, y) and the negativecharge is represented by the source −δ(r∗ − r′) at r∗ = (x,−y). We construct theGreen’s functions at these two points and introduce a negative sign for the negativeimage source. Thus, we have

G(x, y, ξ, η) =1

4πln((ξ − x)2 + (η − y)2)− 1

4πln((ξ − x)2 + (η + y)2).

These functions satisfy the differential equation and the boundary condition

G(x, 0, ξ, η) =1

4πln((ξ − x)2 + (η)2)− 1

4πln((ξ − x)2 + (η)2) = 0.

Example 7.18. Solve the homogeneous version of the problem; i.e., solve Laplace's equation on the half plane with a specified value on the boundary.

We want to solve the problem

\begin{align}
\nabla^2 u &= 0, \quad \text{in } D, \nonumber\\
u &= f, \quad \text{on } C. \tag{7.106}
\end{align}

This is displayed in Figure 7.7.

Figure 7.7: The domain for a semi-infinite slab with boundary value $u(x,0) = f(x)$ and governed by Laplace's equation.

From the previous analysis, the solution takes the form

\[
u(x,y) = \int_C f\,\nabla G\cdot\mathbf{n}\, ds = \int_C f\,\frac{\partial G}{\partial n}\, ds.
\]

Since

\[
G(x,y;\xi,\eta) = \frac{1}{4\pi}\ln\left((\xi - x)^2 + (\eta - y)^2\right)
 - \frac{1}{4\pi}\ln\left((\xi - x)^2 + (\eta + y)^2\right),
\]

and the outward normal along the boundary $\eta = 0$ points in the $-\eta$ direction,

\[
\frac{\partial G}{\partial n} = -\left.\frac{\partial G(x,y;\xi,\eta)}{\partial\eta}\right|_{\eta = 0}
= \frac{1}{\pi}\,\frac{y}{(\xi - x)^2 + y^2}.
\]

We have arrived at the same surface Green's function as we had found in Example 9.11.2, and the solution is

\[
u(x,y) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{y}{(x - \xi)^2 + y^2}f(\xi)\, d\xi.
\]
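The resulting Poisson kernel formula for the half plane is easy to explore numerically. The following sketch is our own illustration (the boundary data $f(\xi) = 1/(1+\xi^2)$ and the finite-difference Laplacian check are arbitrary choices, not from the text).

import numpy as np
from scipy.integrate import quad

def u(x, y, f):
    # u(x, y) = (1/pi) * int_{-inf}^{inf} f(xi) * y / ((x - xi)^2 + y^2) dxi
    integrand = lambda xi: y/((x - xi)**2 + y**2)*f(xi)
    val, _ = quad(integrand, -np.inf, np.inf)
    return val/np.pi

f = lambda xi: 1.0/(1.0 + xi**2)        # sample boundary data

# As y -> 0+ the value approaches the boundary data f(x)
print(u(0.5, 1e-3, f), f(0.5))

# Crude harmonic-function check via a 5-point Laplacian at an interior point
h = 1e-2
lap = (u(1+h, 1, f) + u(1-h, 1, f) + u(1, 1+h, f) + u(1, 1-h, f) - 4*u(1, 1, f))/h**2
print(lap)                               # approximately 0, within discretization error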

7.6 Method of Eigenfunction Expansions

We have seen that the use of eigenfunction expansions is another technique for finding solutions of differential equations. In this section we will show how we can use eigenfunction expansions to find the solutions to nonhomogeneous partial differential equations. In particular, we will apply this technique to solving nonhomogeneous versions of the heat and wave equations.

7.6.1 The Nonhomogeneous Heat Equation

In this section we solve the one dimensional heat equation with a source using an eigenfunction expansion. Consider the problem
\[
\begin{aligned}
u_t &= k u_{xx} + Q(x,t), \quad 0 < x < L,\ t > 0,\\
u(0,t) &= 0, \quad u(L,t) = 0, \quad t > 0,\\
u(x,0) &= f(x), \quad 0 < x < L. \qquad (7.107)
\end{aligned}
\]
The homogeneous version of this problem is given by
\[
\begin{aligned}
v_t &= k v_{xx}, \quad 0 < x < L,\ t > 0,\\
v(0,t) &= 0, \quad v(L,t) = 0. \qquad (7.108)
\end{aligned}
\]
We know that a separation of variables leads to the eigenvalue problem
\[
\phi'' + \lambda\phi = 0, \quad \phi(0) = 0, \quad \phi(L) = 0.
\]


The eigenfunctions and eigenvalues are given by
\[
\phi_n(x) = \sin\frac{n\pi x}{L}, \quad \lambda_n = \left(\frac{n\pi}{L}\right)^2, \quad n = 1, 2, 3, \ldots.
\]
We can use these eigenfunctions to obtain a solution of the nonhomogeneous problem (7.107). We begin by assuming the solution is given by the eigenfunction expansion
\[
u(x,t) = \sum_{n=1}^{\infty} a_n(t)\phi_n(x). \qquad (7.109)
\]
In general, we assume that $v(x,t)$ and $\phi_n(x)$ satisfy the same boundary conditions and that $v(x,t)$ and $v_x(x,t)$ are continuous functions. Note that the difference between this eigenfunction expansion and that in Section 4.3 is that the expansion coefficients are functions of time.

In order to carry out the full process, we will also need to expand the initial profile, $f(x)$, and the source term, $Q(x,t)$, in the basis of eigenfunctions. Thus, we assume the forms
\[
f(x) = u(x,0) = \sum_{n=1}^{\infty} a_n(0)\phi_n(x), \qquad (7.110)
\]
\[
Q(x,t) = \sum_{n=1}^{\infty} q_n(t)\phi_n(x). \qquad (7.111)
\]
Recalling from Chapter 4, the generalized Fourier coefficients are given by
\[
a_n(0) = \frac{\langle f,\phi_n\rangle}{\|\phi_n\|^2} = \frac{1}{\|\phi_n\|^2}\int_0^L f(x)\phi_n(x)\, dx, \qquad (7.112)
\]
\[
q_n(t) = \frac{\langle Q,\phi_n\rangle}{\|\phi_n\|^2} = \frac{1}{\|\phi_n\|^2}\int_0^L Q(x,t)\phi_n(x)\, dx. \qquad (7.113)
\]

The next step is to insert the expansions (7.109) and (7.111) into the nonhomogeneous heat equation (7.107). We first note that
\[
\begin{aligned}
u_t(x,t) &= \sum_{n=1}^{\infty} \dot a_n(t)\phi_n(x),\\
u_{xx}(x,t) &= -\sum_{n=1}^{\infty} a_n(t)\lambda_n\phi_n(x). \qquad (7.114)
\end{aligned}
\]
Inserting these expansions into the heat equation (7.107), we have
\[
\begin{aligned}
u_t &= k u_{xx} + Q(x,t),\\
\sum_{n=1}^{\infty} \dot a_n(t)\phi_n(x) &= -k\sum_{n=1}^{\infty} a_n(t)\lambda_n\phi_n(x) + \sum_{n=1}^{\infty} q_n(t)\phi_n(x). \qquad (7.115)
\end{aligned}
\]
Collecting like terms, we have
\[
\sum_{n=1}^{\infty}\bigl[\dot a_n(t) + k\lambda_n a_n(t) - q_n(t)\bigr]\phi_n(x) = 0, \quad \forall x\in[0,L].
\]


Due to the linear independence of the eigenfunctions, we can conclude that
\[
\dot a_n(t) + k\lambda_n a_n(t) = q_n(t), \quad n = 1, 2, 3, \ldots.
\]
This is a linear first order ordinary differential equation for the unknown expansion coefficients.

We further note that the initial condition, $u(x,0) = f(x)$, can be used to specify the initial conditions for these first order ODEs. In particular,
\[
f(x) = \sum_{n=1}^{\infty} a_n(0)\phi_n(x).
\]
The coefficients can be found as generalized Fourier coefficients in an expansion of $f(x)$ in the basis $\phi_n(x)$. These are given by Equation (7.112).

Recall from Appendix B that the solution of a first order ordinary differential equation of the form
\[
y'(t) + a(t)y(t) = p(t)
\]
is found using the integrating factor
\[
\mu(t) = \exp\int^t a(\tau)\, d\tau.
\]
Multiplying the ODE by the integrating factor, one has
\[
\frac{d}{dt}\left[y(t)\exp\int^t a(\tau)\, d\tau\right] = p(t)\exp\int^t a(\tau)\, d\tau.
\]
After integrating, the solution can be found, provided the integral is doable.

For the current problem, we have
\[
\dot a_n(t) + k\lambda_n a_n(t) = q_n(t), \quad n = 1, 2, 3, \ldots.
\]
Then, the integrating factor is
\[
\mu(t) = \exp\int^t k\lambda_n\, d\tau = e^{k\lambda_n t}.
\]
Multiplying the differential equation by the integrating factor, we find
\[
\begin{aligned}
\bigl[\dot a_n(t) + k\lambda_n a_n(t)\bigr]e^{k\lambda_n t} &= q_n(t)e^{k\lambda_n t},\\
\frac{d}{dt}\left(a_n(t)e^{k\lambda_n t}\right) &= q_n(t)e^{k\lambda_n t}. \qquad (7.116)
\end{aligned}
\]
Integrating, we have
\[
a_n(t)e^{k\lambda_n t} - a_n(0) = \int_0^t q_n(\tau)e^{k\lambda_n\tau}\, d\tau,
\]
or
\[
a_n(t) = a_n(0)e^{-k\lambda_n t} + \int_0^t q_n(\tau)e^{-k\lambda_n(t-\tau)}\, d\tau.
\]

Using these coefficients, we can write out the general solution,
\[
\begin{aligned}
u(x,t) &= \sum_{n=1}^{\infty} a_n(t)\phi_n(x)\\
&= \sum_{n=1}^{\infty}\left[a_n(0)e^{-k\lambda_n t} + \int_0^t q_n(\tau)e^{-k\lambda_n(t-\tau)}\, d\tau\right]\phi_n(x). \qquad (7.117)
\end{aligned}
\]
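For a concrete check, the expansion (7.117) can be assembled numerically once f and Q are specified. The MATLAB sketch below is a minimal implementation; the choices k = 1, L = 1, f(x) = x(1−x), Q(x,t) = sin(πx), the truncation at N = 20 modes, and the trapz quadrature are all illustrative assumptions, not data from the text.

% Eigenfunction-expansion solution of u_t = k u_xx + Q (a sketch).
k = 1; L = 1; N = 20;
f = @(x) x.*(1 - x);                      % assumed initial profile
Q = @(x,t) sin(pi*x) + 0*t;               % assumed source term

x   = linspace(0, L, 201);
phi = @(n,x) sin(n*pi*x/L);               % eigenfunctions, ||phi_n||^2 = L/2
lam = @(n) (n*pi/L)^2;

t = 0.1;                                  % evaluation time
tau = linspace(0, t, 201);
u = zeros(size(x));
for n = 1:N
    an0 = (2/L)*trapz(x, f(x).*phi(n,x));                 % Eq. (7.112)
    qn  = @(s) (2/L)*trapz(x, Q(x,s).*phi(n,x));          % Eq. (7.113)
    qvals = arrayfun(qn, tau);
    an = an0*exp(-k*lam(n)*t) + trapz(tau, qvals.*exp(-k*lam(n)*(t - tau)));
    u = u + an*phi(n,x);                                   % Eq. (7.117)
end
plot(x, u); xlabel('x'); ylabel('u(x,t)');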


We will apply this theory to a more specific problem which not only has a heat source but also has nonhomogeneous boundary conditions.

Example 7.19. Solve the following nonhomogeneous heat problem using eigenfunction expansions:
\[
\begin{aligned}
u_t - u_{xx} &= x + t\sin 3\pi x, \quad 0 \le x \le 1,\ t > 0,\\
u(0,t) &= 2, \quad u(1,t) = t, \quad t > 0,\\
u(x,0) &= 3\sin 2\pi x + 2(1 - x), \quad 0 \le x \le 1. \qquad (7.118)
\end{aligned}
\]
This problem has the same nonhomogeneous boundary conditions as those in Example 7.14. Recall that we can define
\[
u(x,t) = v(x,t) + 2 + (t - 2)x
\]
to obtain a new problem for $v(x,t)$. The new problem is
\[
\begin{aligned}
v_t - v_{xx} &= t\sin 3\pi x, \quad 0 \le x \le 1,\ t > 0,\\
v(0,t) &= 0, \quad v(1,t) = 0, \quad t > 0,\\
v(x,0) &= 3\sin 2\pi x, \quad 0 \le x \le 1. \qquad (7.119)
\end{aligned}
\]

We can now apply the method of eigenfunction expansions to find $v(x,t)$. The eigenfunctions satisfy the homogeneous problem
\[
\phi_n'' + \lambda_n\phi_n = 0, \quad \phi_n(0) = 0, \quad \phi_n(1) = 0.
\]
The solutions are
\[
\phi_n(x) = \sin n\pi x, \quad \lambda_n = (n\pi)^2, \quad n = 1, 2, 3, \ldots.
\]
Now, let
\[
v(x,t) = \sum_{n=1}^{\infty} a_n(t)\sin n\pi x.
\]
Inserting $v(x,t)$ into the PDE, we have
\[
\sum_{n=1}^{\infty}\bigl[\dot a_n(t) + n^2\pi^2 a_n(t)\bigr]\sin n\pi x = t\sin 3\pi x.
\]

Due to the linear independence of the eigenfunctions, we can equate the coefficients of the $\sin n\pi x$ terms. This gives
\[
\begin{aligned}
\dot a_n(t) + n^2\pi^2 a_n(t) &= 0, \quad n \ne 3,\\
\dot a_3(t) + 9\pi^2 a_3(t) &= t, \quad n = 3. \qquad (7.120)
\end{aligned}
\]
This is a system of first order ordinary differential equations. The first set of equations are separable and are easily solved. For $n \ne 3$, we seek solutions of
\[
\frac{d}{dt}a_n = -n^2\pi^2 a_n(t).
\]
These are given by
\[
a_n(t) = a_n(0)e^{-n^2\pi^2 t}, \quad n \ne 3.
\]


In the case $n = 3$, we seek solutions of
\[
\frac{d}{dt}a_3 + 9\pi^2 a_3(t) = t.
\]
The integrating factor for this first order equation is given by
\[
\mu(t) = e^{9\pi^2 t}.
\]
Multiplying the differential equation by the integrating factor, we have
\[
\frac{d}{dt}\left(a_3(t)e^{9\pi^2 t}\right) = t e^{9\pi^2 t}.
\]
Integrating, we obtain the solution
\[
\begin{aligned}
a_3(t) &= a_3(0)e^{-9\pi^2 t} + e^{-9\pi^2 t}\int_0^t \tau e^{9\pi^2\tau}\, d\tau\\
&= a_3(0)e^{-9\pi^2 t} + e^{-9\pi^2 t}\left[\frac{1}{9\pi^2}\tau e^{9\pi^2\tau} - \frac{1}{(9\pi^2)^2}e^{9\pi^2\tau}\right]_0^t\\
&= a_3(0)e^{-9\pi^2 t} + \frac{1}{9\pi^2}t - \frac{1}{(9\pi^2)^2}\Bigl[1 - e^{-9\pi^2 t}\Bigr]. \qquad (7.121)
\end{aligned}
\]

Up to this point, we have the solution
\[
\begin{aligned}
u(x,t) &= v(x,t) + w(x,t)\\
&= \sum_{n=1}^{\infty} a_n(t)\sin n\pi x + 2 + (t - 2)x, \qquad (7.122)
\end{aligned}
\]
where
\[
\begin{aligned}
a_n(t) &= a_n(0)e^{-n^2\pi^2 t}, \quad n \ne 3,\\
a_3(t) &= a_3(0)e^{-9\pi^2 t} + \frac{1}{9\pi^2}t - \frac{1}{(9\pi^2)^2}\Bigl[1 - e^{-9\pi^2 t}\Bigr]. \qquad (7.123)
\end{aligned}
\]

We still need to find $a_n(0)$, $n = 1, 2, 3, \ldots$. The initial values of the expansion coefficients are found using the initial condition
\[
v(x,0) = 3\sin 2\pi x = \sum_{n=1}^{\infty} a_n(0)\sin n\pi x.
\]
It is clear that we have $a_n(0) = 0$ for $n \ne 2$ and $a_2(0) = 3$. Thus, the series for $v(x,t)$ has two nonvanishing coefficients,
\[
\begin{aligned}
a_2(t) &= 3e^{-4\pi^2 t},\\
a_3(t) &= \frac{1}{9\pi^2}t - \frac{1}{(9\pi^2)^2}\Bigl[1 - e^{-9\pi^2 t}\Bigr]. \qquad (7.124)
\end{aligned}
\]

Therefore, the final solution is given by
\[
u(x,t) = 2 + (t - 2)x + 3e^{-4\pi^2 t}\sin 2\pi x + \frac{9\pi^2 t - \bigl(1 - e^{-9\pi^2 t}\bigr)}{81\pi^4}\sin 3\pi x.
\]


7.6.2 The Forced Vibrating Membrane

We now consider the forced vibrating membrane. A two-dimensional membrane is stretched over some domain $D$. We assume Dirichlet conditions on the boundary, $u = 0$ on $\partial D$. The forced membrane can be modeled as
\[
\begin{aligned}
u_{tt} &= c^2\nabla^2 u + Q(\mathbf{r},t), \quad \mathbf{r}\in D,\ t > 0,\\
u(\mathbf{r},t) &= 0, \quad \mathbf{r}\in\partial D,\ t > 0,\\
u(\mathbf{r},0) &= f(\mathbf{r}), \quad u_t(\mathbf{r},0) = g(\mathbf{r}), \quad \mathbf{r}\in D. \qquad (7.125)
\end{aligned}
\]
The method of eigenfunction expansions relies on the use of eigenfunctions, $\phi_\alpha(\mathbf{r})$, for $\alpha\in J\subset\mathbb{Z}^2$, a set of indices typically of the form $(i,j)$ in some lattice grid of integers. The eigenfunctions satisfy the eigenvalue equation
\[
\nabla^2\phi_\alpha(\mathbf{r}) = -\lambda_\alpha\phi_\alpha(\mathbf{r}), \quad \phi_\alpha(\mathbf{r}) = 0 \text{ on }\partial D.
\]

We assume that the solution and forcing function can be expanded in the basis of eigenfunctions,
\[
\begin{aligned}
u(\mathbf{r},t) &= \sum_{\alpha\in J} a_\alpha(t)\phi_\alpha(\mathbf{r}),\\
Q(\mathbf{r},t) &= \sum_{\alpha\in J} q_\alpha(t)\phi_\alpha(\mathbf{r}). \qquad (7.126)
\end{aligned}
\]
Inserting this form into the forced wave equation (7.125), we have
\[
\begin{aligned}
u_{tt} &= c^2\nabla^2 u + Q(\mathbf{r},t),\\
\sum_{\alpha\in J} \ddot a_\alpha(t)\phi_\alpha(\mathbf{r}) &= -c^2\sum_{\alpha\in J}\lambda_\alpha a_\alpha(t)\phi_\alpha(\mathbf{r}) + \sum_{\alpha\in J} q_\alpha(t)\phi_\alpha(\mathbf{r}),\\
0 &= \sum_{\alpha\in J}\bigl[\ddot a_\alpha(t) + c^2\lambda_\alpha a_\alpha(t) - q_\alpha(t)\bigr]\phi_\alpha(\mathbf{r}). \qquad (7.127)
\end{aligned}
\]
The linear independence of the eigenfunctions then gives the ordinary differential equation
\[
\ddot a_\alpha(t) + c^2\lambda_\alpha a_\alpha(t) = q_\alpha(t).
\]

We can solve this equation with initial conditions $a_\alpha(0)$ and $\dot a_\alpha(0)$ found from
\[
\begin{aligned}
f(\mathbf{r}) &= u(\mathbf{r},0) = \sum_{\alpha\in J} a_\alpha(0)\phi_\alpha(\mathbf{r}),\\
g(\mathbf{r}) &= u_t(\mathbf{r},0) = \sum_{\alpha\in J} \dot a_\alpha(0)\phi_\alpha(\mathbf{r}). \qquad (7.128)
\end{aligned}
\]
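Before specializing to periodic forcing, note that each modal equation is just a forced harmonic oscillator and can also be integrated numerically. The MATLAB sketch below uses ode45 on a single mode; the values of c and λα, the forcing, and the initial data are illustrative assumptions.

% Numerical integration of one modal equation a'' + c^2*lambda*a = q(t) (a sketch).
c = 1; lambda = (2*pi)^2;                 % a sample eigenvalue
q = @(t) cos(3*t);                        % sample forcing q_alpha(t)

% Rewrite as a first order system y = [a; a'].
rhs = @(t, y) [y(2); -c^2*lambda*y(1) + q(t)];
y0  = [0.1; 0];                           % a_alpha(0) and a_alpha'(0)

[t, y] = ode45(rhs, [0, 20], y0);
plot(t, y(:,1)); xlabel('t'); ylabel('a_\alpha(t)');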

Example 7.20. Periodic forcing, $Q(\mathbf{r},t) = G(\mathbf{r})\cos\omega t$.

It is enough to specify $Q(\mathbf{r},t)$ in order to solve for the time dependence of the expansion coefficients. A simple example is the case of periodic forcing, $Q(\mathbf{r},t) = G(\mathbf{r})\cos\omega t$. In this case, we expand $Q$ in the basis of eigenfunctions,
\[
\begin{aligned}
Q(\mathbf{r},t) &= \sum_{\alpha\in J} q_\alpha(t)\phi_\alpha(\mathbf{r}),\\
G(\mathbf{r})\cos\omega t &= \sum_{\alpha\in J}\gamma_\alpha\cos\omega t\,\phi_\alpha(\mathbf{r}). \qquad (7.129)
\end{aligned}
\]


Inserting these expressions into the forced wave equation (7.125), we obtain a system of differential equations for the expansion coefficients,
\[
\ddot a_\alpha(t) + c^2\lambda_\alpha a_\alpha(t) = \gamma_\alpha\cos\omega t.
\]
In order to solve this equation we borrow the methods from a course on ordinary differential equations for solving nonhomogeneous equations. In particular, we can use the Method of Undetermined Coefficients as reviewed in Section B.3.1. The solution of these equations is of the form
\[
a_\alpha(t) = a_{\alpha h}(t) + a_{\alpha p}(t),
\]
where $a_{\alpha h}(t)$ satisfies the homogeneous equation,
\[
\ddot a_{\alpha h}(t) + c^2\lambda_\alpha a_{\alpha h}(t) = 0, \qquad (7.130)
\]
and $a_{\alpha p}(t)$ is a particular solution of the nonhomogeneous equation,
\[
\ddot a_{\alpha p}(t) + c^2\lambda_\alpha a_{\alpha p}(t) = \gamma_\alpha\cos\omega t. \qquad (7.131)
\]
The solution of the homogeneous problem (7.130) is easily found as
\[
a_{\alpha h}(t) = c_{1\alpha}\cos(\omega_{0\alpha}t) + c_{2\alpha}\sin(\omega_{0\alpha}t),
\]
where $\omega_{0\alpha} = c\sqrt{\lambda_\alpha}$.

The particular solution is found by making the guess $a_{\alpha p}(t) = A_\alpha\cos\omega t$. Inserting this guess into Equation (7.131), we have
\[
\bigl[-\omega^2 + c^2\lambda_\alpha\bigr]A_\alpha\cos\omega t = \gamma_\alpha\cos\omega t.
\]
Solving for $A_\alpha$, we obtain
\[
A_\alpha = \frac{\gamma_\alpha}{-\omega^2 + c^2\lambda_\alpha}, \quad \omega^2 \ne c^2\lambda_\alpha.
\]
Then, the general solution is given by
\[
a_\alpha(t) = c_{1\alpha}\cos(\omega_{0\alpha}t) + c_{2\alpha}\sin(\omega_{0\alpha}t) + \frac{\gamma_\alpha}{-\omega^2 + c^2\lambda_\alpha}\cos\omega t,
\]
where $\omega_{0\alpha} = c\sqrt{\lambda_\alpha}$ and $\omega^2 \ne c^2\lambda_\alpha$.

In the case where $\omega^2 = c^2\lambda_\alpha$, we have a resonant solution. This is discussed in the review of forced oscillations. In this case the Method of Undetermined Coefficients fails and we need the Modified Method of Undetermined Coefficients. This is because the driving term, $\gamma_\alpha\cos\omega t$, is a solution of the homogeneous problem. So, we make a different guess for the particular solution. We let
\[
a_{\alpha p}(t) = t\bigl(A_\alpha\cos\omega t + B_\alpha\sin\omega t\bigr).
\]
Then, the needed derivatives are
\[
\begin{aligned}
\dot a_{\alpha p}(t) &= \omega t\bigl(-A_\alpha\sin\omega t + B_\alpha\cos\omega t\bigr) + A_\alpha\cos\omega t + B_\alpha\sin\omega t,\\
\ddot a_{\alpha p}(t) &= -\omega^2 t\bigl(A_\alpha\cos\omega t + B_\alpha\sin\omega t\bigr) - 2\omega A_\alpha\sin\omega t + 2\omega B_\alpha\cos\omega t\\
&= -\omega^2 a_{\alpha p}(t) - 2\omega A_\alpha\sin\omega t + 2\omega B_\alpha\cos\omega t. \qquad (7.132)
\end{aligned}
\]


Inserting this guess into Equation (7.131) and noting that $\omega^2 = c^2\lambda_\alpha$, we have
\[
-2\omega A_\alpha\sin\omega t + 2\omega B_\alpha\cos\omega t = \gamma_\alpha\cos\omega t.
\]
Therefore, $A_\alpha = 0$ and
\[
B_\alpha = \frac{\gamma_\alpha}{2\omega}.
\]
So, the particular solution becomes
\[
a_{\alpha p}(t) = \frac{\gamma_\alpha}{2\omega}\, t\sin\omega t.
\]
The full general solution is then
\[
a_\alpha(t) = c_{1\alpha}\cos(\omega t) + c_{2\alpha}\sin(\omega t) + \frac{\gamma_\alpha}{2\omega}\, t\sin\omega t,
\]
where $\omega = c\sqrt{\lambda_\alpha}$.

Figure 7.8: Plot of a solution showing resonance.

We see from this result that the solution tends to grow as $t$ gets large. This is what is called a resonance. Essentially, one is driving the system at its natural frequency for one of the frequencies in the system. A typical plot of such a solution is given in Figure 7.8.
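A plot like Figure 7.8 can be reproduced with a few lines of MATLAB; the parameter values below are arbitrary choices used only to display the linear growth of the envelope.

% Resonant modal response a(t) = c1 cos(w t) + c2 sin(w t) + (gamma/(2 w)) t sin(w t).
w = 2*pi; gamma = 1; c1 = 0; c2 = 0.5;    % illustrative parameters
t = linspace(0, 20, 2000);
a = c1*cos(w*t) + c2*sin(w*t) + (gamma/(2*w))*t.*sin(w*t);
plot(t, a); xlabel('t'); ylabel('a_\alpha(t)');
title('Resonant growth: the envelope grows linearly in t');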

7.7 Green’s Function Solution of Nonhomogeneous Heat Equation

We solved the one dimensional heat equation with a source using an eigenfunction expansion. In this section we rewrite the solution and identify the Green's function form of the solution. Recall that the solution of the nonhomogeneous problem,
\[
\begin{aligned}
u_t &= k u_{xx} + Q(x,t), \quad 0 < x < L,\ t > 0,\\
u(0,t) &= 0, \quad u(L,t) = 0, \quad t > 0,\\
u(x,0) &= f(x), \quad 0 < x < L, \qquad (7.133)
\end{aligned}
\]
is given by Equation (7.117),
\[
\begin{aligned}
u(x,t) &= \sum_{n=1}^{\infty} a_n(t)\phi_n(x)\\
&= \sum_{n=1}^{\infty}\left[a_n(0)e^{-k\lambda_n t} + \int_0^t q_n(\tau)e^{-k\lambda_n(t-\tau)}\, d\tau\right]\phi_n(x). \qquad (7.134)
\end{aligned}
\]

The generalized Fourier coefficients for $a_n(0)$ and $q_n(t)$ are given by
\[
\begin{aligned}
a_n(0) &= \frac{1}{\|\phi_n\|^2}\int_0^L f(x)\phi_n(x)\, dx, \qquad (7.135)\\
q_n(t) &= \frac{1}{\|\phi_n\|^2}\int_0^L Q(x,t)\phi_n(x)\, dx. \qquad (7.136)
\end{aligned}
\]

The solution in Equation (7.134) can be rewritten using the Fourier coefficients in Equations (7.135) and (7.136):
\[
\begin{aligned}
u(x,t) &= \sum_{n=1}^{\infty}\left[a_n(0)e^{-k\lambda_n t} + \int_0^t q_n(\tau)e^{-k\lambda_n(t-\tau)}\, d\tau\right]\phi_n(x)\\
&= \sum_{n=1}^{\infty} a_n(0)e^{-k\lambda_n t}\phi_n(x) + \int_0^t\sum_{n=1}^{\infty}\Bigl(q_n(\tau)e^{-k\lambda_n(t-\tau)}\phi_n(x)\Bigr)\, d\tau\\
&= \sum_{n=1}^{\infty}\frac{1}{\|\phi_n\|^2}\left(\int_0^L f(\xi)\phi_n(\xi)\, d\xi\right)e^{-k\lambda_n t}\phi_n(x)\\
&\quad + \int_0^t\sum_{n=1}^{\infty}\frac{1}{\|\phi_n\|^2}\left(\int_0^L Q(\xi,\tau)\phi_n(\xi)\, d\xi\right)e^{-k\lambda_n(t-\tau)}\phi_n(x)\, d\tau\\
&= \int_0^L\left(\sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n(\xi)e^{-k\lambda_n t}}{\|\phi_n\|^2}\right) f(\xi)\, d\xi\\
&\quad + \int_0^t\int_0^L\left(\sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n(\xi)e^{-k\lambda_n(t-\tau)}}{\|\phi_n\|^2}\right) Q(\xi,\tau)\, d\xi\, d\tau. \qquad (7.137)
\end{aligned}
\]
Defining
\[
G(x,t;\xi,\tau) = \sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n(\xi)e^{-k\lambda_n(t-\tau)}}{\|\phi_n\|^2},
\]

we see that the solution can be written in the form
\[
u(x,t) = \int_0^L G(x,t;\xi,0)\, f(\xi)\, d\xi + \int_0^t\int_0^L G(x,t;\xi,\tau)\, Q(\xi,\tau)\, d\xi\, d\tau.
\]
Thus, we see that $G(x,t;\xi,0)$ is the initial value Green's function and $G(x,t;\xi,\tau)$ is the general Green's function for this problem.
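The series form of G suggests a direct numerical implementation: truncate the sum, evaluate G pointwise, and compute the two integrals by quadrature. The MATLAB sketch below does this for the homogeneous-boundary problem; the data f and Q, the truncation N, and the grids are illustrative assumptions.

% Truncated Green's function for the heat equation on [0, L] (a sketch).
k = 1; L = 1; N = 30;
f = @(x) x.*(1 - x);                                 % assumed initial profile
Q = @(x,t) exp(-t).*sin(pi*x);                       % assumed source

GreenFn = @(x, t, xi, tau) ...                       % G(x,t;xi,tau), ||phi_n||^2 = L/2
    sum((2/L)*sin((1:N)'*pi*x/L).*sin((1:N)'*pi*xi/L) ...
        .*exp(-k*((1:N)'*pi/L).^2*(t - tau)), 1);

x = 0.4; t = 0.2;
xi  = linspace(0, L, 201);
tau = linspace(0, t, 101);

% Initial-condition contribution: int_0^L G(x,t;xi,0) f(xi) dxi
Gi = arrayfun(@(s) GreenFn(x, t, s, 0), xi);
u  = trapz(xi, Gi.*f(xi));

% Source contribution: int_0^t int_0^L G(x,t;xi,tau) Q(xi,tau) dxi dtau
inner = zeros(size(tau));
for j = 1:numel(tau)
    Gj = arrayfun(@(s) GreenFn(x, t, s, tau(j)), xi);
    inner(j) = trapz(xi, Gj.*Q(xi, tau(j)));
end
u = u + trapz(tau, inner)                            % u(x, t)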

The only thing left is to introduce nonhomogeneous boundary conditions into this solution. So, we modify the original problem to the fully nonhomogeneous heat equation:
\[
\begin{aligned}
u_t &= k u_{xx} + Q(x,t), \quad 0 < x < L,\ t > 0,\\
u(0,t) &= \alpha(t), \quad u(L,t) = \beta(t), \quad t > 0,\\
u(x,0) &= f(x), \quad 0 < x < L. \qquad (7.138)
\end{aligned}
\]

As before, we begin with the expansion of the solution in the basis of eigenfunctions,
\[
u(x,t) = \sum_{n=1}^{\infty} a_n(t)\phi_n(x).
\]
However, due to potential convergence problems, we cannot expect that $u_{xx}$ can be obtained by simply differentiating the series twice and expecting the resulting series to converge to $u_{xx}$. So, we need to be a little more careful.

We first note that
\[
u_t = \sum_{n=1}^{\infty} \dot a_n(t)\phi_n(x) = k u_{xx} + Q(x,t).
\]
Solving for the expansion coefficients, we have
\[
\dot a_n(t) = \frac{\int_0^L\bigl(k u_{xx} + Q(x,t)\bigr)\phi_n(x)\, dx}{\|\phi_n\|^2}.
\]
In order to proceed, we need an expression for $\int_0^L u_{xx}\phi_n(x)\, dx$. We can find this using Green's identity from Section 4.2.2.


We start with
\[
\int_a^b\bigl(u\mathcal{L}v - v\mathcal{L}u\bigr)\, dx = \bigl[p\bigl(uv' - vu'\bigr)\bigr]_a^b
\]
and let $v = \phi_n$. Then,
\[
\begin{aligned}
\int_0^L\bigl(u(x,t)\phi_n''(x) - \phi_n(x)u_{xx}(x,t)\bigr)\, dx &= \bigl[u(x,t)\phi_n'(x) - \phi_n(x)u_x(x,t)\bigr]_0^L,\\
-\int_0^L\bigl(\lambda_n u(x,t) + u_{xx}(x,t)\bigr)\phi_n(x)\, dx &= \bigl[u(L,t)\phi_n'(L) - \phi_n(L)u_x(L,t)\bigr]\\
&\quad - \bigl[u(0,t)\phi_n'(0) - \phi_n(0)u_x(0,t)\bigr],\\
-\lambda_n a_n\|\phi_n\|^2 - \int_0^L u_{xx}(x,t)\phi_n(x)\, dx &= \beta(t)\phi_n'(L) - \alpha(t)\phi_n'(0). \qquad (7.139)
\end{aligned}
\]
Thus,
\[
\int_0^L u_{xx}(x,t)\phi_n(x)\, dx = -\lambda_n a_n\|\phi_n\|^2 + \alpha(t)\phi_n'(0) - \beta(t)\phi_n'(L).
\]
Inserting this result into the equation for $\dot a_n(t)$, we have
\[
\dot a_n(t) = -k\lambda_n a_n(t) + q_n(t) + k\,\frac{\alpha(t)\phi_n'(0) - \beta(t)\phi_n'(L)}{\|\phi_n\|^2}.
\]

As we had seen before, this first order equation can be solved using the integrating factor
\[
\mu(t) = \exp\int^t k\lambda_n\, d\tau = e^{k\lambda_n t}.
\]
Multiplying the differential equation by the integrating factor, we find
\[
\begin{aligned}
\bigl[\dot a_n(t) + k\lambda_n a_n(t)\bigr]e^{k\lambda_n t} &= \left[q_n(t) + k\,\frac{\alpha(t)\phi_n'(0) - \beta(t)\phi_n'(L)}{\|\phi_n\|^2}\right]e^{k\lambda_n t},\\
\frac{d}{dt}\left(a_n(t)e^{k\lambda_n t}\right) &= \left[q_n(t) + k\,\frac{\alpha(t)\phi_n'(0) - \beta(t)\phi_n'(L)}{\|\phi_n\|^2}\right]e^{k\lambda_n t}. \qquad (7.140)
\end{aligned}
\]
Integrating, we have
\[
a_n(t)e^{k\lambda_n t} - a_n(0) = \int_0^t\left[q_n(\tau) + k\,\frac{\alpha(\tau)\phi_n'(0) - \beta(\tau)\phi_n'(L)}{\|\phi_n\|^2}\right]e^{k\lambda_n\tau}\, d\tau,
\]
or
\[
a_n(t) = a_n(0)e^{-k\lambda_n t} + \int_0^t\left[q_n(\tau) + k\,\frac{\alpha(\tau)\phi_n'(0) - \beta(\tau)\phi_n'(L)}{\|\phi_n\|^2}\right]e^{-k\lambda_n(t-\tau)}\, d\tau.
\]

We can now insert these coefficients into the solution and see how to extract the Green's function contributions. Inserting the coefficients, we have
\[
\begin{aligned}
u(x,t) &= \sum_{n=1}^{\infty} a_n(t)\phi_n(x)\\
&= \sum_{n=1}^{\infty}\left[a_n(0)e^{-k\lambda_n t} + \int_0^t q_n(\tau)e^{-k\lambda_n(t-\tau)}\, d\tau\right]\phi_n(x)\\
&\quad + \sum_{n=1}^{\infty}\left(\int_0^t\left[k\,\frac{\alpha(\tau)\phi_n'(0) - \beta(\tau)\phi_n'(L)}{\|\phi_n\|^2}\right]e^{-k\lambda_n(t-\tau)}\, d\tau\right)\phi_n(x). \qquad (7.141)
\end{aligned}
\]


Recall that the generalized Fourier coefficients for $a_n(0)$ and $q_n(t)$ are given by
\[
\begin{aligned}
a_n(0) &= \frac{1}{\|\phi_n\|^2}\int_0^L f(x)\phi_n(x)\, dx, \qquad (7.142)\\
q_n(t) &= \frac{1}{\|\phi_n\|^2}\int_0^L Q(x,t)\phi_n(x)\, dx. \qquad (7.143)
\end{aligned}
\]

The solution in Equation (7.141) can be rewritten using the Fourier coefficients in Equations (7.142) and (7.143):
\[
\begin{aligned}
u(x,t) &= \sum_{n=1}^{\infty}\left[a_n(0)e^{-k\lambda_n t} + \int_0^t q_n(\tau)e^{-k\lambda_n(t-\tau)}\, d\tau\right]\phi_n(x)\\
&\quad + \sum_{n=1}^{\infty}\left(\int_0^t\left[k\,\frac{\alpha(\tau)\phi_n'(0) - \beta(\tau)\phi_n'(L)}{\|\phi_n\|^2}\right]e^{-k\lambda_n(t-\tau)}\, d\tau\right)\phi_n(x)\\
&= \sum_{n=1}^{\infty} a_n(0)e^{-k\lambda_n t}\phi_n(x) + \int_0^t\sum_{n=1}^{\infty}\Bigl(q_n(\tau)e^{-k\lambda_n(t-\tau)}\phi_n(x)\Bigr)\, d\tau\\
&\quad + \int_0^t\sum_{n=1}^{\infty}\left(\left[k\,\frac{\alpha(\tau)\phi_n'(0) - \beta(\tau)\phi_n'(L)}{\|\phi_n\|^2}\right]e^{-k\lambda_n(t-\tau)}\right)\phi_n(x)\, d\tau\\
&= \sum_{n=1}^{\infty}\frac{1}{\|\phi_n\|^2}\left(\int_0^L f(\xi)\phi_n(\xi)\, d\xi\right)e^{-k\lambda_n t}\phi_n(x)\\
&\quad + \int_0^t\sum_{n=1}^{\infty}\frac{1}{\|\phi_n\|^2}\left(\int_0^L Q(\xi,\tau)\phi_n(\xi)\, d\xi\right)e^{-k\lambda_n(t-\tau)}\phi_n(x)\, d\tau\\
&\quad + \int_0^t\sum_{n=1}^{\infty}\left(\left[k\,\frac{\alpha(\tau)\phi_n'(0) - \beta(\tau)\phi_n'(L)}{\|\phi_n\|^2}\right]e^{-k\lambda_n(t-\tau)}\right)\phi_n(x)\, d\tau\\
&= \int_0^L\left(\sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n(\xi)e^{-k\lambda_n t}}{\|\phi_n\|^2}\right) f(\xi)\, d\xi\\
&\quad + \int_0^t\int_0^L\left(\sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n(\xi)e^{-k\lambda_n(t-\tau)}}{\|\phi_n\|^2}\right) Q(\xi,\tau)\, d\xi\, d\tau\\
&\quad + k\int_0^t\left(\sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n'(0)e^{-k\lambda_n(t-\tau)}}{\|\phi_n\|^2}\right)\alpha(\tau)\, d\tau\\
&\quad - k\int_0^t\left(\sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n'(L)e^{-k\lambda_n(t-\tau)}}{\|\phi_n\|^2}\right)\beta(\tau)\, d\tau. \qquad (7.144)
\end{aligned}
\]

As before, we can define the general Green's function as
\[
G(x,t;\xi,\tau) = \sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n(\xi)e^{-k\lambda_n(t-\tau)}}{\|\phi_n\|^2}.
\]
Then, we can write the solution to the fully nonhomogeneous problem as
\[
\begin{aligned}
u(x,t) &= \int_0^t\int_0^L G(x,t;\xi,\tau)\, Q(\xi,\tau)\, d\xi\, d\tau + \int_0^L G(x,t;\xi,0)\, f(\xi)\, d\xi\\
&\quad + k\int_0^t\left[\alpha(\tau)\frac{\partial G}{\partial\xi}(x,t;0,\tau) - \beta(\tau)\frac{\partial G}{\partial\xi}(x,t;L,\tau)\right] d\tau. \qquad (7.145)
\end{aligned}
\]
The first integral handles the source term, the second integral handles the initial condition, and the third term handles the fixed boundary conditions.


This general form can be deduced from the differential equation for the Green's function and the original differential equation by using a more general form of Green's identity. Let the heat equation operator be defined as $\mathcal{L} = \frac{\partial}{\partial t} - k\frac{\partial^2}{\partial x^2}$. The differential equations for $u(x,t)$ and $G(x,t;\xi,\tau)$, for $0\le x,\xi\le L$ and $t,\tau\ge 0$, are taken to be
\[
\begin{aligned}
\mathcal{L}u(x,t) &= Q(x,t),\\
\mathcal{L}G(x,t;\xi,\tau) &= \delta(x-\xi)\delta(t-\tau). \qquad (7.146)
\end{aligned}
\]
Multiplying the first equation by $G(x,t;\xi,\tau)$ and the second by $u(x,t)$, we obtain
\[
\begin{aligned}
G(x,t;\xi,\tau)\mathcal{L}u(x,t) &= G(x,t;\xi,\tau)Q(x,t),\\
u(x,t)\mathcal{L}G(x,t;\xi,\tau) &= \delta(x-\xi)\delta(t-\tau)u(x,t). \qquad (7.147)
\end{aligned}
\]
Now, we subtract the equations and integrate with respect to $x$ and $t$. This gives
\[
\begin{aligned}
&\int_0^\infty\int_0^L\bigl[G(x,t;\xi,\tau)\mathcal{L}u(x,t) - u(x,t)\mathcal{L}G(x,t;\xi,\tau)\bigr]\, dx\, dt\\
&\quad = \int_0^\infty\int_0^L\bigl[G(x,t;\xi,\tau)Q(x,t) - \delta(x-\xi)\delta(t-\tau)u(x,t)\bigr]\, dx\, dt\\
&\quad = \int_0^\infty\int_0^L G(x,t;\xi,\tau)Q(x,t)\, dx\, dt - u(\xi,\tau), \qquad (7.148)
\end{aligned}
\]

and
\[
\begin{aligned}
&\int_0^\infty\int_0^L\bigl[G(x,t;\xi,\tau)\mathcal{L}u(x,t) - u(x,t)\mathcal{L}G(x,t;\xi,\tau)\bigr]\, dx\, dt\\
&\quad = \int_0^L\int_0^\infty\bigl[G(x,t;\xi,\tau)u_t - u(x,t)G_t(x,t;\xi,\tau)\bigr]\, dt\, dx\\
&\qquad - k\int_0^\infty\int_0^L\bigl[G(x,t;\xi,\tau)u_{xx}(x,t) - u(x,t)G_{xx}(x,t;\xi,\tau)\bigr]\, dx\, dt\\
&\quad = \int_0^L\left[\Bigl.G(x,t;\xi,\tau)u(x,t)\Bigr|_{t=0}^{\infty} - 2\int_0^\infty u(x,t)G_t(x,t;\xi,\tau)\, dt\right] dx\\
&\qquad - k\int_0^\infty\left[G(x,t;\xi,\tau)\frac{\partial u}{\partial x}(x,t) - u(x,t)\frac{\partial G}{\partial x}(x,t;\xi,\tau)\right]_{x=0}^{L} dt. \qquad (7.149)
\end{aligned}
\]

Equating these two results and solving for $u(\xi,\tau)$, we have
\[
\begin{aligned}
u(\xi,\tau) &= \int_0^\infty\int_0^L G(x,t;\xi,\tau)Q(x,t)\, dx\, dt\\
&\quad + k\int_0^\infty\left[G(x,t;\xi,\tau)\frac{\partial u}{\partial x}(x,t) - u(x,t)\frac{\partial G}{\partial x}(x,t;\xi,\tau)\right]_{x=0}^{L} dt\\
&\quad + \int_0^L\left[G(x,0;\xi,\tau)u(x,0) + 2\int_0^\infty u(x,t)G_t(x,t;\xi,\tau)\, dt\right] dx. \qquad (7.150)
\end{aligned}
\]

Exchanging $(\xi,\tau)$ with $(x,t)$ and assuming that the Green's function is symmetric in these arguments, we have
\[
\begin{aligned}
u(x,t) &= \int_0^\infty\int_0^L G(x,t;\xi,\tau)Q(\xi,\tau)\, d\xi\, d\tau\\
&\quad + k\int_0^\infty\left[G(x,t;\xi,\tau)\frac{\partial u}{\partial\xi}(\xi,\tau) - u(\xi,\tau)\frac{\partial G}{\partial\xi}(x,t;\xi,\tau)\right]_{\xi=0}^{L} d\tau\\
&\quad + \int_0^L G(x,t;\xi,0)u(\xi,0)\, d\xi + 2\int_0^L\int_0^\infty u(\xi,\tau)G_\tau(x,t;\xi,\tau)\, d\tau\, d\xi. \qquad (7.151)
\end{aligned}
\]

This result is almost in the desired form except for the last integral. Thus, if
\[
\int_0^L\int_0^\infty u(\xi,\tau)G_\tau(x,t;\xi,\tau)\, d\tau\, d\xi = 0,
\]
then we have
\[
\begin{aligned}
u(x,t) &= \int_0^\infty\int_0^L G(x,t;\xi,\tau)Q(\xi,\tau)\, d\xi\, d\tau + \int_0^L G(x,t;\xi,0)u(\xi,0)\, d\xi\\
&\quad + k\int_0^\infty\left[G(x,t;\xi,\tau)\frac{\partial u}{\partial\xi}(\xi,\tau) - u(\xi,\tau)\frac{\partial G}{\partial\xi}(x,t;\xi,\tau)\right]_{\xi=0}^{L} d\tau. \qquad (7.152)
\end{aligned}
\]

7.8 Summary

We have seen throughout the chapter that Green's functions are the solutions of a differential equation representing the effect of a point impulse on either source terms or initial and boundary conditions. The Green's function is obtained from transform methods or as an eigenfunction expansion. In the text we have occasionally rewritten solutions of differential equations in terms of Green's functions. We will first provide a few of these examples and then present a compilation of Green's functions for generic partial differential equations.

For example, in Section 7.4 we wrote the solution of the one dimensional heat equation as
\[
u(x,t) = \int_0^L G(x,\xi;t,0)\, f(\xi)\, d\xi,
\]
where
\[
G(x,\xi;t,0) = \frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\, e^{-\lambda_n k t},
\]
and the solution of the wave equation as
\[
u(x,t) = \int_0^L G_c(x,\xi,t,0)\, f(\xi)\, d\xi + \int_0^L G_s(x,\xi,t,0)\, g(\xi)\, d\xi,
\]
where
\[
\begin{aligned}
G_c(x,\xi,t,0) &= \frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\cos\frac{n\pi c t}{L},\\
G_s(x,\xi,t,0) &= \frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\,\frac{\sin\frac{n\pi c t}{L}}{n\pi c/L}.
\end{aligned}
\]


We note that setting $t = 0$ in $G_c(x,\xi;t,0)$, we obtain
\[
G_c(x,\xi,0,0) = \frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}.
\]
This is the Fourier sine series representation of the Dirac delta function, $\delta(x-\xi)$. Similarly, if we differentiate $G_s(x,\xi,t,0)$ with respect to $t$ and set $t = 0$, we once again obtain the Fourier sine series representation of the Dirac delta function.

It is also possible to find closed form expressions for Green's functions, which we had done for the heat equation on the infinite interval,
\[
u(x,t) = \int_{-\infty}^{\infty} G(x,t;\xi,0)\, f(\xi)\, d\xi,
\]
where
\[
G(x,t;\xi,0) = \frac{e^{-(x-\xi)^2/4t}}{\sqrt{4\pi t}},
\]
and for Poisson's equation,
\[
\phi(\mathbf{r}) = \int_V G(\mathbf{r},\mathbf{r}')\, f(\mathbf{r}')\, d^3r',
\]
where the three dimensional Green's function is given by
\[
G(\mathbf{r},\mathbf{r}') = \frac{1}{|\mathbf{r}-\mathbf{r}'|}.
\]

We can construct Green's functions for other problems which we have seen in the book. For example, the solution of the two dimensional wave equation on a rectangular membrane was found in Equation (6.37) as
\[
u(x,y,t) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\bigl(a_{nm}\cos\omega_{nm}t + b_{nm}\sin\omega_{nm}t\bigr)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}, \qquad (7.153)
\]
where
\[
\begin{aligned}
a_{nm} &= \frac{4}{LH}\int_0^H\int_0^L f(x,y)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}\, dx\, dy, \qquad (7.154)\\
b_{nm} &= \frac{4}{\omega_{nm}LH}\int_0^H\int_0^L g(x,y)\sin\frac{n\pi x}{L}\sin\frac{m\pi y}{H}\, dx\, dy, \qquad (7.155)
\end{aligned}
\]
and where the angular frequencies are given by
\[
\omega_{nm} = c\sqrt{\left(\frac{n\pi}{L}\right)^2 + \left(\frac{m\pi}{H}\right)^2}. \qquad (7.156)
\]

Rearranging the solution, we have
\[
u(x,y,t) = \int_0^H\int_0^L\bigl[G_c(x,y;\xi,\eta;t,0)\, f(\xi,\eta) + G_s(x,y;\xi,\eta;t,0)\, g(\xi,\eta)\bigr]\, d\xi\, d\eta,
\]
where
\[
G_c(x,y;\xi,\eta;t,0) = \frac{4}{LH}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\sin\frac{m\pi y}{H}\sin\frac{m\pi\eta}{H}\cos\omega_{nm}t
\]


and
\[
G_s(x,y;\xi,\eta;t,0) = \frac{4}{LH}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\sin\frac{m\pi y}{H}\sin\frac{m\pi\eta}{H}\,\frac{\sin\omega_{nm}t}{\omega_{nm}}.
\]

Once again, we note that setting $t = 0$ in $G_c(x,y;\xi,\eta;t,0)$, and setting $t = 0$ in $\frac{\partial G_s(x,y;\xi,\eta;t,0)}{\partial t}$, we obtain a Fourier series representation of the Dirac delta function in two dimensions,
\[
\delta(x-\xi)\delta(y-\eta) = \frac{4}{LH}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\sin\frac{m\pi y}{H}\sin\frac{m\pi\eta}{H}.
\]

Another example was the solution of the two dimensional Laplace equation on a disk given by Equation 6.87. We found that
\[
u(r,\theta) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n\cos n\theta + b_n\sin n\theta\bigr) r^n, \qquad (7.157)
\]
where
\[
\begin{aligned}
a_n &= \frac{1}{\pi a^n}\int_{-\pi}^{\pi} f(\theta)\cos n\theta\, d\theta, \quad n = 0, 1, \ldots, \qquad (7.158)\\
b_n &= \frac{1}{\pi a^n}\int_{-\pi}^{\pi} f(\theta)\sin n\theta\, d\theta, \quad n = 1, 2, \ldots. \qquad (7.159)
\end{aligned}
\]
We saw that this solution can be written as
\[
u(r,\theta) = \int_{-\pi}^{\pi} G(\theta,\phi;r,a)\, f(\phi)\, d\phi,
\]
where the Green's function could be summed, giving the Poisson kernel
\[
G(\theta,\phi;r,a) = \frac{1}{2\pi}\,\frac{a^2 - r^2}{a^2 + r^2 - 2ar\cos(\theta - \phi)}.
\]
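As a consistency check, one can compare the truncated series (7.157) with the Poisson kernel integral for a sample boundary function. In the MATLAB sketch below the unit disk, the boundary data f(θ) = cos 2θ + 0.5 sin θ, the truncation at N = 25, and the trapz quadrature are all illustrative choices.

% Compare the series solution (7.157) with the Poisson kernel integral (a sketch).
a = 1; r = 0.6; theta = 0.9;                  % illustrative disk radius and field point
f = @(th) cos(2*th) + 0.5*sin(th);            % sample boundary data

% Truncated series solution
N = 25; phi = linspace(-pi, pi, 2001);
u_series = (1/(2*pi))*trapz(phi, f(phi));     % a_0/2 term
for n = 1:N
    an = (1/(pi*a^n))*trapz(phi, f(phi).*cos(n*phi));
    bn = (1/(pi*a^n))*trapz(phi, f(phi).*sin(n*phi));
    u_series = u_series + (an*cos(n*theta) + bn*sin(n*theta))*r^n;
end

% Poisson kernel integral
K = (1/(2*pi))*(a^2 - r^2)./(a^2 + r^2 - 2*a*r*cos(theta - phi));
u_kernel = trapz(phi, K.*f(phi));

[u_series, u_kernel]                           % the two values should agree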

We had also investigated the nonhomogeneous heat equation in Section 9.11.4,
\[
\begin{aligned}
u_t - k u_{xx} &= h(x,t), \quad 0 \le x \le L,\ t > 0,\\
u(0,t) &= 0, \quad u(L,t) = 0, \quad t > 0,\\
u(x,0) &= f(x), \quad 0 \le x \le L. \qquad (7.160)
\end{aligned}
\]
We found that the solution of the heat equation is given by
\[
u(x,t) = \int_0^L f(\xi)G(x,\xi;t,0)\, d\xi + \int_0^t\int_0^L h(\xi,\tau)G(x,\xi;t,\tau)\, d\xi\, d\tau,
\]
where
\[
G(x,\xi;t,\tau) = \frac{2}{L}\sum_{n=1}^{\infty}\sin\frac{n\pi x}{L}\sin\frac{n\pi\xi}{L}\, e^{-\omega_n^2(t-\tau)}.
\]
Note that setting $t = \tau$, we again get a Fourier sine series representation of the Dirac delta function.

In general, Green's functions based on expansions over eigenfunctions of Sturm-Liouville eigenvalue problems are a common way to construct Green's functions. For example, surface and initial value Green's functions are constructed from a delta function representation modified by two factors: one which makes the Green's function a solution of the given differential equation, and one which takes into account the boundary or initial condition and restores the delta function when the Green's function is applied to that condition. Examples with an indication of these factors are shown below.

1. Surface Green's Function: Cube $[0,a]\times[0,b]\times[0,c]$
\[
g(x,y,z;x',y',c) = \sum_{\ell,n}\underbrace{\frac{2}{a}\sin\frac{\ell\pi x}{a}\sin\frac{\ell\pi x'}{a}\,\frac{2}{b}\sin\frac{n\pi y}{b}\sin\frac{n\pi y'}{b}}_{\delta\text{-function}}\;\underbrace{\sinh\gamma_{\ell n}z}_{\text{D.E.}}\,\big/\,\underbrace{\sinh\gamma_{\ell n}c}_{\text{restore }\delta}.
\]

2. Surface Green's Function: Sphere $[0,a]\times[0,\pi]\times[0,2\pi]$
\[
g(r,\phi,\theta;a,\phi',\theta') = \sum_{\ell,m}\underbrace{Y_\ell^{m*}(\phi',\theta')\,Y_\ell^{m}(\phi,\theta)}_{\delta\text{-function}}\;\underbrace{r^\ell}_{\text{D.E.}}\,\big/\,\underbrace{a^\ell}_{\text{restore }\delta}.
\]

3. Initial Value Green's Function: 1D Heat Equation on $[0,L]$, $k_n = \frac{n\pi}{L}$
\[
g(x,t;x',t_0) = \sum_n\underbrace{\frac{2}{L}\sin\frac{n\pi x}{L}\sin\frac{n\pi x'}{L}}_{\delta\text{-function}}\;\underbrace{e^{-a^2k_n^2 t}}_{\text{D.E.}}\,\big/\,\underbrace{e^{-a^2k_n^2 t_0}}_{\text{restore }\delta}.
\]

4. Initial Value Green's Function: 1D Heat Equation on infinite domain
\[
g(x,t;x',0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\,\underbrace{e^{ik(x-x')}}_{\delta\text{-function}}\;\underbrace{e^{-a^2k^2 t}}_{\text{D.E.}} = \frac{e^{-(x-x')^2/4a^2t}}{\sqrt{4\pi a^2 t}}.
\]
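Item 4 can be verified numerically by truncating the k-integral; in the MATLAB sketch below the truncation limits, grid resolution, and parameter values are illustrative choices.

% Numerical check of the infinite-domain heat kernel (item 4 above), a sketch.
a = 1; t = 0.1; x = 0.7; xp = 0.2;            % illustrative parameters
k = linspace(-60, 60, 12001);                 % truncated wavenumber grid

integrand = exp(1i*k*(x - xp)).*exp(-a^2*k.^2*t);
g_integral = real(trapz(k, integrand))/(2*pi);

g_closed = exp(-(x - xp)^2/(4*a^2*t))/sqrt(4*pi*a^2*t);

[g_integral, g_closed]                        % the two values should agree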

We can extend this analysis to a more general theory of Green's functions. This theory is based upon Green's Theorems, or identities.

1. Green's First Theorem:
\[
\oint_S\varphi\nabla\chi\cdot\mathbf{n}\, dS = \int_V\bigl(\nabla\varphi\cdot\nabla\chi + \varphi\nabla^2\chi\bigr)\, dV.
\]
This is easily proven starting with the identity
\[
\nabla\cdot(\varphi\nabla\chi) = \nabla\varphi\cdot\nabla\chi + \varphi\nabla^2\chi,
\]
integrating over a volume of space, and using Gauss' Integral Theorem.

2. Green's Second Theorem:
\[
\int_V\bigl(\varphi\nabla^2\chi - \chi\nabla^2\varphi\bigr)\, dV = \oint_S\bigl(\varphi\nabla\chi - \chi\nabla\varphi\bigr)\cdot\mathbf{n}\, dS.
\]
This is proven by interchanging $\varphi$ and $\chi$ in the first theorem and subtracting the two versions of the theorem.


The next step is to let $\varphi = u$ and $\chi = G$. Then,
\[
\int_V\bigl(u\nabla^2 G - G\nabla^2 u\bigr)\, dV = \oint_S\bigl(u\nabla G - G\nabla u\bigr)\cdot\mathbf{n}\, dS.
\]
As we had seen earlier for Poisson's equation, inserting the differential equation yields
\[
u(x,y) = \int_V G f\, dV + \oint_S\bigl(u\nabla G - G\nabla u\bigr)\cdot\mathbf{n}\, dS.
\]
If we have the Green's function, we only need to know the source term and boundary conditions in order to obtain the solution to a given problem.

In the next sections we provide a summary of these ideas as applied to some generic partial differential equations. (This summary is an adaptation of notes from J. Franklin's course on mathematical physics.)

7.8.1 Laplace's Equation: $\nabla^2\psi = 0$

1. Boundary Conditions

(a) Dirichlet - $\psi$ is given on the surface.

(b) Neumann - $\mathbf{n}\cdot\nabla\psi = \frac{\partial\psi}{\partial n}$ is given on the surface.

Note: Boundary conditions can be Dirichlet on part of the surface and Neumann on part. If they are Neumann on the whole surface, then the Divergence Theorem requires the constraint
\[
\int\frac{\partial\psi}{\partial n}\, dS = 0.
\]

2. Solution by Surface Green's Function, $g(\mathbf{r},\mathbf{r}')$.

(a) Dirichlet conditions
\[
\begin{aligned}
\nabla^2 g_D(\mathbf{r},\mathbf{r}') &= 0,\\
g_D(\mathbf{r}_s,\mathbf{r}'_s) &= \delta^{(2)}(\mathbf{r}_s - \mathbf{r}'_s),\\
\psi(\mathbf{r}) &= \int g_D(\mathbf{r},\mathbf{r}'_s)\,\psi(\mathbf{r}'_s)\, dS'.
\end{aligned}
\]

(b) Neumann conditions
\[
\begin{aligned}
\nabla^2 g_N(\mathbf{r},\mathbf{r}') &= 0,\\
\frac{\partial g_N}{\partial n}(\mathbf{r}_s,\mathbf{r}'_s) &= \delta^{(2)}(\mathbf{r}_s - \mathbf{r}'_s),\\
\psi(\mathbf{r}) &= \int g_N(\mathbf{r},\mathbf{r}'_s)\,\frac{\partial\psi}{\partial n}(\mathbf{r}'_s)\, dS'.
\end{aligned}
\]

Note: Use of $g$ is readily generalized to any number of dimensions.


7.8.2 Homogeneous Time Dependent Equations

1. Typical Equations

(a) Diffusion/Heat Equation: $\nabla^2\Psi = \frac{1}{a^2}\frac{\partial\Psi}{\partial t}$.

(b) Schrödinger Equation: $-\nabla^2\Psi + U\Psi = i\frac{\partial\Psi}{\partial t}$.

(c) Wave Equation: $\nabla^2\Psi = \frac{1}{c^2}\frac{\partial^2\Psi}{\partial t^2}$.

(d) General form: $\mathcal{D}\Psi = \mathcal{T}\Psi$.

2. Initial Value Green's Function, $g(\mathbf{r},\mathbf{r}';t,t')$.

(a) Homogeneous Boundary Conditions

i. Diffusion or Schrödinger Equation (first order in time), $\mathcal{D}g = \mathcal{T}g$:
\[
\Psi(\mathbf{r},t) = \int g(\mathbf{r},\mathbf{r}';t,t_0)\,\Psi(\mathbf{r}',t_0)\, d^3r',
\]
where $g(\mathbf{r},\mathbf{r}';t_0,t_0) = \delta(\mathbf{r}-\mathbf{r}')$ and $g$ satisfies homogeneous boundary conditions on the surface.

ii. Wave Equation:
\[
\Psi(\mathbf{r},t) = \int\bigl[g_c(\mathbf{r},\mathbf{r}';t,t_0)\,\Psi(\mathbf{r}',t_0) + g_s(\mathbf{r},\mathbf{r}';t,t_0)\,\dot\Psi(\mathbf{r}',t_0)\bigr]\, d^3r'.
\]
The first two properties in (a) above hold, but
\[
g_c(\mathbf{r},\mathbf{r}';t_0,t_0) = \delta(\mathbf{r}-\mathbf{r}'), \qquad \dot g_s(\mathbf{r},\mathbf{r}';t_0,t_0) = \delta(\mathbf{r}-\mathbf{r}').
\]
Note: For the diffusion and Schrödinger equations the initial condition is Dirichlet in time. For the wave equation the initial condition is Cauchy, where $\Psi$ and $\dot\Psi$ are given.

(b) Inhomogeneous, Time Independent (steady) Boundary Conditions

i. Solve Laplace's equation, $\nabla^2\psi_s = 0$, for the inhomogeneous B.C.'s.

ii. Solve the homogeneous, time-dependent equation for $\Psi_t(\mathbf{r},t)$ satisfying $\Psi_t(\mathbf{r},t_0) = \Psi(\mathbf{r},t_0) - \psi_s(\mathbf{r})$.

iii. Then $\Psi(\mathbf{r},t) = \Psi_t(\mathbf{r},t) + \psi_s(\mathbf{r})$.

Note: $\Psi_t$ is the transient part and $\psi_s$ is the steady state part.

3. Time Dependent Boundary Conditions with Homogeneous Initial Conditions

(a) Use the Boundary Value Green's Function, $h(\mathbf{r},\mathbf{r}'_s;t,t')$, which is similar to the surface Green's function in an earlier section:
\[
\Psi(\mathbf{r},t) = \int_{t_0}^{\infty} h_D(\mathbf{r},\mathbf{r}'_s;t,t')\,\Psi(\mathbf{r}'_s,t')\, dt',
\]
or
\[
\Psi(\mathbf{r},t) = \int_{t_0}^{\infty}\frac{\partial h_N}{\partial n}(\mathbf{r},\mathbf{r}'_s;t,t')\,\Psi(\mathbf{r}'_s,t')\, dt'.
\]


(b) Properties of $h(\mathbf{r},\mathbf{r}'_s;t,t')$:
\[
\begin{aligned}
&\mathcal{D}h = \mathcal{T}h,\\
&h_D(\mathbf{r}_s,\mathbf{r}'_s;t,t') = \delta(t-t'), \quad\text{or}\quad \frac{\partial h_N}{\partial n}(\mathbf{r}_s,\mathbf{r}'_s;t,t') = \delta(t-t'),\\
&h(\mathbf{r},\mathbf{r}'_s;t,t') = 0, \quad t' > t \quad\text{(causality)}.
\end{aligned}
\]

(c) Note: For inhomogeneous I.C.,
\[
\Psi = \int g\,\Psi(\mathbf{r}',t_0)\, d^3r' + \int dt'\, h_D\,\Psi(\mathbf{r}'_s,t').
\]

7.8.3 Inhomogeneous Steady State Equation

1. Poisson's Equation
\[
\nabla^2\psi(\mathbf{r}) = f(\mathbf{r}), \quad \psi(\mathbf{r}_s) \text{ or } \frac{\partial\psi}{\partial n}(\mathbf{r}_s) \text{ given}.
\]

(a) Green's Theorem:
\[
\int\bigl[\psi(\mathbf{r}')\nabla'^2 G(\mathbf{r},\mathbf{r}') - G(\mathbf{r},\mathbf{r}')\nabla'^2\psi(\mathbf{r}')\bigr]\, d^3r' = \int\bigl[\psi(\mathbf{r}')\nabla' G(\mathbf{r},\mathbf{r}') - G(\mathbf{r},\mathbf{r}')\nabla'\psi(\mathbf{r}')\bigr]\cdot d\mathbf{S}',
\]
where $\nabla'$ denotes differentiation with respect to $\mathbf{r}'$.

(b) Properties of $G(\mathbf{r},\mathbf{r}')$:

i. $\nabla'^2 G(\mathbf{r},\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}')$.

ii. $G|_s = 0$ or $\left.\frac{\partial G}{\partial n'}\right|_s = 0$.

iii. Solution:
\[
\psi(\mathbf{r}) = \int G(\mathbf{r},\mathbf{r}')\, f(\mathbf{r}')\, d^3r' + \int\bigl[\psi(\mathbf{r}')\nabla' G(\mathbf{r},\mathbf{r}') - G(\mathbf{r},\mathbf{r}')\nabla'\psi(\mathbf{r}')\bigr]\cdot d\mathbf{S}'. \qquad (7.161)
\]

(c) For the case of pure Neumann B.C.'s, the Divergence Theorem leads to the constraint
\[
\int\nabla\psi\cdot d\mathbf{S} = \int f\, d^3r.
\]
If there are pure Neumann conditions and $S$ is finite and $\int f\, d^3r \ne 0$ by symmetry, then $\mathbf{n}'\cdot\nabla' G|_s \ne 0$ and the Green's function method is much more complicated to solve.

(d) From the above result:
\[
\mathbf{n}'\cdot\nabla' G(\mathbf{r},\mathbf{r}'_s) = g_D(\mathbf{r},\mathbf{r}'_s) \quad\text{or}\quad G_N(\mathbf{r},\mathbf{r}'_s) = -g_N(\mathbf{r},\mathbf{r}'_s).
\]
It is often simpler to use $G$ for $\int d^3r'$ and $g$ for $\int d\mathbf{S}'$, separately.


(e) $G$ satisfies a reciprocity property, $G(\mathbf{r},\mathbf{r}') = G(\mathbf{r}',\mathbf{r})$, for either Dirichlet or Neumann boundary conditions.

(f) $G(\mathbf{r},\mathbf{r}')$ can be considered as a potential at $\mathbf{r}$ due to a point charge $q = -1/4\pi$ at $\mathbf{r}'$, with all surfaces being grounded conductors.

7.8.4 Inhomogeneous, Time Dependent Equations

1. Diffusion/Heat Flow: $\nabla^2\Psi - \frac{1}{a^2}\dot\Psi = f(\mathbf{r},t)$.

(a)
\[
\left[\nabla^2 - \frac{1}{a^2}\frac{\partial}{\partial t}\right]G(\mathbf{r},\mathbf{r}';t,t') = \left[\nabla'^2 + \frac{1}{a^2}\frac{\partial}{\partial t'}\right]G(\mathbf{r},\mathbf{r}';t,t') = \delta(\mathbf{r}-\mathbf{r}')\delta(t-t'). \qquad (7.162)
\]

(b) Green's Theorem in 4 dimensions $(\mathbf{r},t)$ yields
\[
\begin{aligned}
\Psi(\mathbf{r},t) &= \int\!\!\int_{t_0}^{\infty} G(\mathbf{r},\mathbf{r}';t,t')\, f(\mathbf{r}',t')\, dt'\, d^3r' - \frac{1}{a^2}\int G(\mathbf{r},\mathbf{r}';t,t_0)\,\Psi(\mathbf{r}',t_0)\, d^3r'\\
&\quad + \int_{t_0}^{\infty}\!\!\int\bigl[\Psi(\mathbf{r}'_s,t')\nabla' G_D(\mathbf{r},\mathbf{r}'_s;t,t') - G_N(\mathbf{r},\mathbf{r}'_s;t,t')\nabla'\Psi(\mathbf{r}'_s,t')\bigr]\cdot d\mathbf{S}'\, dt'.
\end{aligned}
\]

(c) Either $G_D(\mathbf{r}'_s) = 0$ or $G_N(\mathbf{r}'_s) = 0$ on $S$ at any point $\mathbf{r}'_s$.

(d) $\mathbf{n}'\cdot\nabla' G_D(\mathbf{r}'_s) = h_D(\mathbf{r}'_s)$, $G_N(\mathbf{r}'_s) = -h_N(\mathbf{r}'_s)$, and $-\frac{1}{a^2}G(\mathbf{r},\mathbf{r}';t,t_0) = g(\mathbf{r},\mathbf{r}';t,t_0)$.

2. Wave Equation: $\nabla^2\Psi - \frac{1}{c^2}\frac{\partial^2\Psi}{\partial t^2} = f(\mathbf{r},t)$.

(a)
\[
\left[\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right]G(\mathbf{r},\mathbf{r}';t,t') = \left[\nabla'^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t'^2}\right]G(\mathbf{r},\mathbf{r}';t,t') = \delta(\mathbf{r}-\mathbf{r}')\delta(t-t'). \qquad (7.163)
\]

(b) Green's Theorem in 4 dimensions $(\mathbf{r},t)$ yields
\[
\begin{aligned}
\Psi(\mathbf{r},t) &= \int\!\!\int_{t_0}^{\infty} G(\mathbf{r},\mathbf{r}';t,t')\, f(\mathbf{r}',t')\, dt'\, d^3r'\\
&\quad - \frac{1}{c^2}\int\left[G(\mathbf{r},\mathbf{r}';t,t_0)\frac{\partial}{\partial t'}\Psi(\mathbf{r}',t_0) - \Psi(\mathbf{r}',t_0)\frac{\partial}{\partial t'}G(\mathbf{r},\mathbf{r}';t,t_0)\right] d^3r'\\
&\quad + \int_{t_0}^{\infty}\!\!\int\bigl[\Psi(\mathbf{r}'_s,t')\nabla' G_D(\mathbf{r},\mathbf{r}'_s;t,t') - G_N(\mathbf{r},\mathbf{r}'_s;t,t')\nabla'\Psi(\mathbf{r}'_s,t')\bigr]\cdot d\mathbf{S}'\, dt'.
\end{aligned}
\]

(c) Cauchy initial conditions are given: $\Psi(t_0)$ and $\dot\Psi(t_0)$.

(d) The wave and diffusion equations satisfy a causality condition, $G(t,t') = 0$, $t' > t$.

Problems

1. Find the solution of each initial value problem using the appropriate initial value Green's function.

a. $y'' - 3y' + 2y = 20e^{-2x}$, $\quad y(0) = 0$, $y'(0) = 6$.


b. $y'' + y = 2\sin 3x$, $\quad y(0) = 5$, $y'(0) = 0$.

c. $y'' + y = 1 + 2\cos x$, $\quad y(0) = 2$, $y'(0) = 0$.

d. $x^2y'' - 2xy' + 2y = 3x^2 - x$, $\quad y(1) = \pi$, $y'(1) = 0$.

2. Use the initial value Green's function for $x'' + x = f(t)$, $x(0) = 4$, $x'(0) = 0$, to solve the following problems.

a. $x'' + x = 5t^2$.

b. $x'' + x = 2\tan t$.

3. For the problem $y'' - k^2y = f(x)$, $y(0) = 0$, $y'(0) = 1$,

a. Find the initial value Green's function.

b. Use the Green's function to solve $y'' - y = e^{-x}$.

c. Use the Green's function to solve $y'' - 4y = e^{2x}$.

4. Find and use the initial value Green's function to solve
\[
x^2y'' + 3xy' - 15y = x^4e^x, \quad y(1) = 1,\ y'(1) = 0.
\]

5. Consider the problem $y'' = \sin x$, $y'(0) = 0$, $y(\pi) = 0$.

a. Solve by direct integration.

b. Determine the Green's function.

c. Solve the boundary value problem using the Green's function.

d. Change the boundary conditions to $y'(0) = 5$, $y(\pi) = -3$.

i. Solve by direct integration.

ii. Solve using the Green's function.

6. Let $C$ be a closed curve and $D$ the enclosed region. Prove the identity
\[
\int_C\phi\nabla\phi\cdot\mathbf{n}\, ds = \int_D\bigl(\phi\nabla^2\phi + \nabla\phi\cdot\nabla\phi\bigr)\, dA.
\]

7. Let $S$ be a closed surface and $V$ the enclosed volume. Prove Green's first and second identities, respectively.

a. $\int_S\phi\nabla\psi\cdot\mathbf{n}\, dS = \int_V(\phi\nabla^2\psi + \nabla\phi\cdot\nabla\psi)\, dV$.

b. $\int_S[\phi\nabla\psi - \psi\nabla\phi]\cdot\mathbf{n}\, dS = \int_V(\phi\nabla^2\psi - \psi\nabla^2\phi)\, dV$.

8. Let $C$ be a closed curve and $D$ the enclosed region. Prove Green's identities in two dimensions.

a. First prove
\[
\int_D\bigl(v\nabla\cdot\mathbf{F} + \mathbf{F}\cdot\nabla v\bigr)\, dA = \int_C(v\mathbf{F})\cdot d\mathbf{s}.
\]

b. Let $\mathbf{F} = \nabla u$ and obtain Green's first identity,
\[
\int_D\bigl(v\nabla^2 u + \nabla u\cdot\nabla v\bigr)\, dA = \int_C(v\nabla u)\cdot d\mathbf{s}.
\]


c. Use Green's first identity to prove Green's second identity,
\[
\int_D\bigl(u\nabla^2 v - v\nabla^2 u\bigr)\, dA = \int_C(u\nabla v - v\nabla u)\cdot d\mathbf{s}.
\]

9. Consider the problem
\[
\frac{\partial^2 G}{\partial x^2} = \delta(x - x_0), \quad \frac{\partial G}{\partial x}(0,x_0) = 0, \quad G(\pi,x_0) = 0.
\]

a. Solve by direct integration.

b. Compare this result to the Green's function in part b of the last problem.

c. Verify that $G$ is symmetric in its arguments.

10. Consider the boundary value problem: $y'' - y = x$, $x\in(0,1)$, with boundary conditions $y(0) = y(1) = 0$.

a. Find a closed form solution without using Green's functions.

b. Determine the closed form Green's function using the properties of Green's functions. Use this Green's function to obtain a solution of the boundary value problem.

c. Determine a series representation of the Green's function. Use this Green's function to obtain a solution of the boundary value problem.

d. Confirm that all of the solutions obtained give the same results.

11. Rewrite the solution to Problem 15 and identify the initial value Green's function.

12. Rewrite the solution to Problem 16 and identify the initial value Green's functions.

13. Find the Green's function for the homogeneous fixed values on the boundary of the quarter plane $x > 0$, $y > 0$, for Poisson's equation, using the infinite plane Green's function for Poisson's equation. Use the method of images.

14. Find the Green's function for the one dimensional heat equation with boundary conditions $u(0,t) = 0$, $u_x(L,t) = 0$, $t > 0$.

15. Consider Laplace's equation on the rectangular plate in Figure 6.8. Construct the Green's function for this problem.

16. Construct the Green's function for Laplace's equation in the spherical domain in Figure 6.18.


8 Complex Representations of Functions

“He is not a true man of science who does not bring some sympathy to his studies, and expect to learn something by behavior as well as by application. It is childish to rest in the discovery of mere coincidences, or of partial and extraneous laws. The study of geometry is a petty and idle exercise of the mind, if it is applied to no larger system than the starry one. Mathematics should be mixed not only with physics but with ethics; that is mixed mathematics. The fact which interests us most is the life of the naturalist. The purest science is still biographical.” Henry David Thoreau (1817-1862)

8.1 Complex Representations of Waves

We have seen that we can determine the frequency content of a function $f(t)$ defined on an interval $[0,T]$ by looking for the Fourier coefficients in the Fourier series expansion
\[
f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{2\pi nt}{T} + b_n\sin\frac{2\pi nt}{T}.
\]
The coefficients take forms like
\[
a_n = \frac{2}{T}\int_0^T f(t)\cos\frac{2\pi nt}{T}\, dt.
\]
However, trigonometric functions can be written in a complex exponential form. Using Euler's formula, which was obtained using the Maclaurin expansion of $e^x$ in Example A.36,
\[
e^{i\theta} = \cos\theta + i\sin\theta,
\]
the complex conjugate is found by replacing $i$ with $-i$ to obtain
\[
e^{-i\theta} = \cos\theta - i\sin\theta.
\]
Adding these expressions, we have
\[
2\cos\theta = e^{i\theta} + e^{-i\theta}.
\]
Subtracting the exponentials leads to an expression for the sine function. Thus, we have the important result that sines and cosines can be written as complex exponentials:
\[
\begin{aligned}
\cos\theta &= \frac{e^{i\theta} + e^{-i\theta}}{2},\\
\sin\theta &= \frac{e^{i\theta} - e^{-i\theta}}{2i}. \qquad (8.1)
\end{aligned}
\]
So, we can write
\[
\cos\frac{2\pi nt}{T} = \frac{1}{2}\left(e^{\frac{2\pi int}{T}} + e^{-\frac{2\pi int}{T}}\right).
\]
Later we will see that we can use this information to rewrite the series as a sum over complex exponentials in the form
\[
f(t) = \sum_{n=-\infty}^{\infty} c_n e^{\frac{2\pi int}{T}},
\]
where the Fourier coefficients now take the form
\[
c_n = \int_0^T f(t)e^{-\frac{2\pi int}{T}}\, dt.
\]

In fact, when one considers the representation of analogue signals defined over an infinite interval and containing a continuum of frequencies, we will see that Fourier series sums become integrals of complex functions and so do the Fourier coefficients. Thus, we will naturally find ourselves needing to work with functions of complex variables and perform complex integrals.

We can also develop a complex representation for waves. Recall from the discussion in Section 3.6 on finite length strings that a solution to the wave equation was given by
\[
u(x,t) = \frac{1}{2}\left[\sum_{n=1}^{\infty} A_n\sin k_n(x + ct) + \sum_{n=1}^{\infty} A_n\sin k_n(x - ct)\right]. \qquad (8.2)
\]
We can replace the sines with their complex forms as
\[
u(x,t) = \frac{1}{4i}\left[\sum_{n=1}^{\infty} A_n\left(e^{ik_n(x+ct)} - e^{-ik_n(x+ct)}\right) + \sum_{n=1}^{\infty} A_n\left(e^{ik_n(x-ct)} - e^{-ik_n(x-ct)}\right)\right]. \qquad (8.3)
\]
Defining $k_{-n} = -k_n$, $n = 1, 2, \ldots$, we can rewrite this solution in the form
\[
u(x,t) = \sum_{n=-\infty}^{\infty}\left[c_n e^{ik_n(x+ct)} + d_n e^{ik_n(x-ct)}\right]. \qquad (8.4)
\]
Such representations are also possible for waves propagating over the entire real line. In such cases we are not restricted to discrete frequencies and wave numbers. The sum of the harmonics will then be a sum over a continuous range, which means that the sums become integrals. So, we are led to the complex representation
\[
u(x,t) = \int_{-\infty}^{\infty}\left[c(k)e^{ik(x+ct)} + d(k)e^{ik(x-ct)}\right] dk. \qquad (8.5)
\]

The forms $e^{ik(x+ct)}$ and $e^{ik(x-ct)}$ are complex representations of what are called plane waves in one dimension. The integral represents a general waveform consisting of a sum over plane waves. The Fourier coefficients in the representation can be complex valued functions and the evaluation of the integral may be done using methods from complex analysis. We would like to be able to compute such integrals.

With the above ideas in mind, we will now take a tour of complex analysis. We will first review some facts about complex numbers and then introduce complex functions. This will lead us to the calculus of functions of a complex variable, including the differentiation and integration of complex functions. This will set up the methods needed to explore Fourier transforms in the next chapter.

8.2 Complex Numbers

Complex numbers were first introduced in order to solve some simple problems. The history of complex numbers only extends about five hundred years. In essence, it was found that we need to find the roots of equations such as $x^2 + 1 = 0$. The solution is $x = \pm\sqrt{-1}$. Due to the usefulness of this concept, which was not realized at first, a special symbol was introduced - the imaginary unit, $i = \sqrt{-1}$. In particular, Girolamo Cardano (1501-1576) was one of the first to use square roots of negative numbers when providing solutions of cubic equations. However, complex numbers did not become an important part of mathematics or science until the late seventeenth and eighteenth centuries, after people like Abraham de Moivre (1667-1754), the Bernoulli family and Euler took them seriously.

(The Bernoullis were a family of Swiss mathematicians spanning three generations. It all started with Jacob Bernoulli (1654-1705) and his brother Johann Bernoulli (1667-1748). Jacob had a son, Nicolaus Bernoulli (1687-1759), and Johann had three sons, Nicolaus Bernoulli II (1695-1726), Daniel Bernoulli (1700-1782), and Johann Bernoulli II (1710-1790). The last generation consisted of Johann II's sons, Johann Bernoulli III (1747-1807) and Jacob Bernoulli II (1759-1789). Johann, Jacob and Daniel Bernoulli were the most famous of the Bernoullis. Jacob studied with Leibniz, Johann studied under his older brother and later taught Leonhard Euler and Daniel Bernoulli, who is known for his work in hydrodynamics.)

Figure 8.1: The Argand diagram for plotting complex numbers in the complex z-plane.

A complex number is a number of the form $z = x + iy$, where $x$ and $y$ are real numbers. $x$ is called the real part of $z$ and $y$ is the imaginary part of $z$. Examples of such numbers are $3 + 3i$, $-1i = -i$, $4i$ and $5$. Note that $5 = 5 + 0i$ and $4i = 0 + 4i$.

There is a geometric representation of complex numbers in a two dimensional plane, known as the complex plane $\mathbb{C}$. This is given by the Argand diagram as shown in Figure 8.1. Here we can think of the complex number $z = x + iy$ as a point $(x,y)$ in the $z$-complex plane or as a vector. The magnitude, or length, of this vector is called the complex modulus of $z$, denoted by $|z| = \sqrt{x^2 + y^2}$. We can also use the geometric picture to develop a polar representation of complex numbers. From Figure 8.1 we can see that in terms of $r$ and $\theta$ we have that
\[
\begin{aligned}
x &= r\cos\theta,\\
y &= r\sin\theta. \qquad (8.6)
\end{aligned}
\]
Thus, complex numbers can be represented in either rectangular (Cartesian) form, $z = x + iy$, or polar form,
\[
z = x + iy = r(\cos\theta + i\sin\theta) = re^{i\theta}, \qquad (8.7)
\]
where $\theta$ is called the argument and $r = |z|$ the modulus of $z$. So, given $r$ and $\theta$ we have $z = re^{i\theta}$. However, given the Cartesian form, $z = x + iy$, we can also determine the polar form, since
\[
\begin{aligned}
r &= \sqrt{x^2 + y^2},\\
\tan\theta &= \frac{y}{x}. \qquad (8.8)
\end{aligned}
\]
Note that $r = |z|$.

Locating $1 + i$ in the complex plane, it is possible to immediately determine the polar form from the angle and length of the "complex vector." This is shown in Figure 8.2. It is obvious that $\theta = \frac{\pi}{4}$ and $r = \sqrt{2}$.

Figure 8.2: Locating 1 + i in the complex z-plane.

Example 8.1. Write $z = 1 + i$ in polar form.

If one did not see the polar form from the plot in the $z$-plane, then one could systematically determine the results. First, write $z = 1 + i$ in polar form, $z = re^{i\theta}$, for some $r$ and $\theta$. Using the above relations between polar and Cartesian representations, we have $r = \sqrt{x^2 + y^2} = \sqrt{2}$ and $\tan\theta = \frac{y}{x} = 1$. This gives $\theta = \frac{\pi}{4}$. So, we have found that
\[
1 + i = \sqrt{2}\,e^{i\pi/4}.
\]

We can also define binary operations of addition, subtraction, multiplication and division of complex numbers to produce a new complex number. The addition of two complex numbers is simply done by adding the real and imaginary parts of each number. So,
\[
(3 + 2i) + (1 - i) = 4 + i.
\]
Subtraction is just as easy,
\[
(3 + 2i) - (1 - i) = 2 + 3i.
\]
We can multiply two complex numbers just like we multiply any binomials, though we now can use the fact that $i^2 = -1$. For example, we have
\[
(3 + 2i)(1 - i) = 3 + 2i - 3i + 2i(-i) = 5 - i.
\]
We can even divide one complex number into another one and get a complex number as the quotient. Before we do this, we need to introduce the complex conjugate, $\bar z$, of a complex number. The complex conjugate of $z = x + iy$, where $x$ and $y$ are real numbers, is given as
\[
\bar z = x - iy.
\]
Complex conjugates satisfy the following relations for complex numbers $z$ and $w$ and real number $x$:
\[
\begin{aligned}
\overline{z + w} &= \bar z + \bar w,\\
\overline{zw} &= \bar z\,\bar w,\\
\overline{\bar z} &= z,\\
\bar x &= x. \qquad (8.9)
\end{aligned}
\]


One consequence is that the complex conjugate of $re^{i\theta}$ is
\[
\overline{re^{i\theta}} = r\,\overline{\cos\theta + i\sin\theta} = r(\cos\theta - i\sin\theta) = re^{-i\theta}.
\]
Another consequence is that
\[
z\bar z = re^{i\theta}\,re^{-i\theta} = r^2.
\]
Thus, the product of a complex number with its complex conjugate is a real number. We can also prove this result using the Cartesian form
\[
z\bar z = (x + iy)(x - iy) = x^2 + y^2 = |z|^2.
\]
Now we are in a position to write the quotient of two complex numbers in the standard form of a real plus an imaginary number.

Example 8.2. Simplify the expression $z = \frac{3+2i}{1-i}$.

This simplification is accomplished by multiplying the numerator and denominator of this expression by the complex conjugate of the denominator:
\[
z = \frac{3+2i}{1-i} = \frac{3+2i}{1-i}\cdot\frac{1+i}{1+i} = \frac{1+5i}{2}.
\]
Therefore, the quotient is a complex number and in standard form it is given by $z = \frac{1}{2} + \frac{5}{2}i$.

We can also consider powers of complex numbers. For example,
\[
(1 + i)^2 = 2i, \qquad (1 + i)^3 = (1 + i)(2i) = 2i - 2.
\]
But, what is $(1 + i)^{1/2} = \sqrt{1 + i}$?

In general, we want to find the $n$th root of a complex number. Let $t = z^{1/n}$. To find $t$ in this case is the same as asking for the solution of
\[
z = t^n
\]
given $z$. But, this is the root of an $n$th degree equation, for which we expect $n$ roots. If we write $z$ in polar form, $z = re^{i\theta}$, then we would naively compute
\[
\begin{aligned}
z^{1/n} &= \left(re^{i\theta}\right)^{1/n}\\
&= r^{1/n}e^{i\theta/n}\\
&= r^{1/n}\left[\cos\frac{\theta}{n} + i\sin\frac{\theta}{n}\right]. \qquad (8.10)
\end{aligned}
\]
For example,
\[
(1 + i)^{1/2} = \left(\sqrt{2}\,e^{i\pi/4}\right)^{1/2} = 2^{1/4}e^{i\pi/8}.
\]
But this is only one solution. We expected two solutions for $n = 2$. The function $f(z) = z^{1/n}$ is multivalued: $z^{1/n} = r^{1/n}e^{i(\theta+2k\pi)/n}$, $k = 0, 1, \ldots, n-1$. The reason we only found one solution is that the polar representation for $z$ is not unique. We note that


\[
e^{2k\pi i} = 1, \quad k = 0, \pm 1, \pm 2, \ldots.
\]
So, we can rewrite $z$ as $z = re^{i\theta}e^{2k\pi i} = re^{i(\theta + 2k\pi)}$. Now, we have that
\[
z^{1/n} = r^{1/n}e^{i(\theta + 2k\pi)/n}, \quad k = 0, 1, \ldots, n-1.
\]
Note that these are the only distinct values for the roots. We can see this by considering the case $k = n$. Then, we find that
\[
e^{i(\theta + 2\pi n)/n} = e^{i\theta/n}e^{2\pi i} = e^{i\theta/n}.
\]
So, we have recovered the $k = 0$ value. Similar results can be shown for the other $k$ values larger than $n$.

Now, we can finish the example we had started.

Example 8.3. Determine the square roots of $1 + i$, or $\sqrt{1 + i}$.

As we have seen, we first write $1 + i$ in polar form, $1 + i = \sqrt{2}\,e^{i\pi/4}$. Then, introduce $e^{2k\pi i} = 1$ and find the roots:
\[
\begin{aligned}
(1 + i)^{1/2} &= \left(\sqrt{2}\,e^{i\pi/4}e^{2k\pi i}\right)^{1/2}, \quad k = 0, 1,\\
&= 2^{1/4}e^{i(\pi/8 + k\pi)}, \quad k = 0, 1,\\
&= 2^{1/4}e^{i\pi/8},\ 2^{1/4}e^{9\pi i/8}. \qquad (8.11)
\end{aligned}
\]

Finally, what is $\sqrt[n]{1}$? Our first guess would be $\sqrt[n]{1} = 1$. But, we now know that there should be $n$ roots. These roots are called the $n$th roots of unity. Using the above result with $r = 1$ and $\theta = 0$, we have that
\[
\sqrt[n]{1} = \left[\cos\frac{2\pi k}{n} + i\sin\frac{2\pi k}{n}\right], \quad k = 0, \ldots, n-1.
\]
For example, we have
\[
\sqrt[3]{1} = \left[\cos\frac{2\pi k}{3} + i\sin\frac{2\pi k}{3}\right], \quad k = 0, 1, 2.
\]
These three roots can be written out as
\[
\sqrt[3]{1} = 1,\ -\frac{1}{2} + \frac{\sqrt{3}}{2}i,\ -\frac{1}{2} - \frac{\sqrt{3}}{2}i.
\]

Figure 8.3: Locating the cube roots of unity in the complex z-plane.

We can locate these cube roots of unity in the complex plane. In Figure 8.3 we see that these points lie on the unit circle and are at the vertices of an equilateral triangle. In fact, all $n$th roots of unity lie on the unit circle and are the vertices of a regular $n$-gon with one vertex at $z = 1$.
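The roots of unity are easy to generate and check numerically; the MATLAB sketch below computes them from the formula above and cross-checks against the roots of zⁿ − 1 = 0 (the choice n = 3 is arbitrary).

% The nth roots of unity (a sketch; n = 3 is an arbitrary choice).
n = 3;
k = 0:n-1;
w = cos(2*pi*k/n) + 1i*sin(2*pi*k/n)      % from the formula above, i.e. exp(2*pi*1i*k/n)

% Cross-check: the roots of z^n - 1 = 0
p = [1, zeros(1, n-1), -1];               % coefficients of z^n - 1
roots(p)

plot(real(w), imag(w), 'o'); axis equal; grid on
xlabel('x'); ylabel('y');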

8.3 Complex Valued Functions

We would like to next explore complex functions and the calculus of complex functions. We begin by defining a function that takes complex numbers into complex numbers, $f:\mathbb{C}\to\mathbb{C}$. It is difficult to visualize such functions. For real functions of one variable, $f:\mathbb{R}\to\mathbb{R}$, we graph these functions by first drawing two intersecting copies of $\mathbb{R}$ and then proceed to map the domain into the range of $f$.

It would be more difficult to do this for complex functions. Imagine placing together two orthogonal copies of the complex plane, $\mathbb{C}$. One would need a four dimensional space in order to complete the visualization. Instead, one typically uses two copies of the complex plane side by side in order to indicate how such functions behave. Over the years there have been several ways to visualize complex functions. We will describe a few of these in this chapter.

We will assume that the domain lies in the z-plane and the image lies in the w-plane. We will then write the complex function as $w = f(z)$. We show these planes in Figure 8.4 and the mapping between the planes.

Figure 8.4: Defining a complex valued function, w = f(z), on C for z = x + iy and w = u + iv.

Letting $z = x + iy$ and $w = u + iv$, we can write the real and imaginary parts of $f(z)$:
\[
w = f(z) = f(x + iy) = u(x,y) + iv(x,y).
\]
We see that one can view this function as a function of $z$ or a function of $x$ and $y$. Often, we have an interest in writing out the real and imaginary parts of the function, $u(x,y)$ and $v(x,y)$, which are functions of two real variables, $x$ and $y$. We will look at several functions to determine the real and imaginary parts.

Example 8.4. Find the real and imaginary parts of $f(z) = z^2$.

For example, we can look at the simple function $f(z) = z^2$. It is a simple matter to determine the real and imaginary parts of this function. Namely, we have
\[
z^2 = (x + iy)^2 = x^2 - y^2 + 2ixy.
\]
Therefore, we have that
\[
u(x,y) = x^2 - y^2, \quad v(x,y) = 2xy.
\]
In Figure 8.5 we show how a grid in the z-plane is mapped by $f(z) = z^2$ into the w-plane. For example, the vertical line $x = 1$ is mapped to $u(1,y) = 1 - y^2$ and $v(1,y) = 2y$. Eliminating the "parameter" $y$ between these two equations, we have $u = 1 - v^2/4$. This is a parabolic curve. Similarly, the horizontal line $y = 1$ results in the curve $u = v^2/4 - 1$.

If we look at several curves, $x = $ const and $y = $ const, then we get a family of intersecting parabolae, as shown in Figure 8.5.

Figure 8.5: 2D plot showing how the function f(z) = z² maps the lines x = 1 and y = 1 in the z-plane into parabolae in the w-plane.

Figure 8.6: 2D plot showing how the function f(z) = z² maps a grid in the z-plane into the w-plane.
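A figure like 8.6 can be generated directly by mapping each grid line under w = z² and plotting the image curves; in the MATLAB sketch below the grid ranges, spacing, and colors are arbitrary choices.

% Map a Cartesian grid under w = z^2, as in Figures 8.5 and 8.6 (a sketch).
t = linspace(-2, 2, 401);           % parameter along each grid line
c = -2:0.2:2;                       % the constant values defining the grid lines
figure; hold on
for k = 1:numel(c)
    w1 = (c(k) + 1i*t).^2;          % image of the vertical line x = c(k)
    w2 = (t + 1i*c(k)).^2;          % image of the horizontal line y = c(k)
    plot(real(w1), imag(w1), 'b', real(w2), imag(w2), 'r')
end
axis equal; xlabel('u'); ylabel('v');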

Example 8.5. Find the real and imaginary parts of $f(z) = e^z$.

For this case, we make use of Euler's formula,
\[
e^z = e^{x+iy} = e^x e^{iy} = e^x(\cos y + i\sin y). \qquad (8.12)
\]
Thus, $u(x,y) = e^x\cos y$ and $v(x,y) = e^x\sin y$. In Figure 8.7 we show how a grid in the z-plane is mapped by $f(z) = e^z$ into the w-plane.

Example 8.6. Find the real and imaginary parts of $f(z) = z^{1/2}$.

We have that
\[
z^{1/2} = \sqrt{|z|}\left(\cos\left(\frac{\theta}{2} + k\pi\right) + i\sin\left(\frac{\theta}{2} + k\pi\right)\right), \quad k = 0, 1. \qquad (8.13)
\]
Thus,
\[
u = \sqrt{|z|}\cos\left(\frac{\theta}{2} + k\pi\right), \quad v = \sqrt{|z|}\sin\left(\frac{\theta}{2} + k\pi\right),
\]
for $|z| = \sqrt{x^2 + y^2}$ and $\theta = \tan^{-1}(y/x)$. For each $k$-value one has a different surface and curves of constant $\theta$ give $u/v = c_1$, and curves of constant nonzero complex modulus give concentric circles, $u^2 + v^2 = c_2$, for $c_1$ and $c_2$ constants.

Figure 8.7: 2D plot showing how the function f(z) = eᶻ maps a grid in the z-plane into the w-plane.

Figure 8.8: 2D plot showing how the function f(z) = √z maps a grid in the z-plane into the w-plane.

Example 8.7. Find the real and imaginary parts of $f(z) = \ln z$.

In this case we make use of the polar form of a complex number, $z = re^{i\theta}$. Our first thought would be to simply compute
\[
\ln z = \ln r + i\theta.
\]
However, the natural logarithm is multivalued, just like the square root function. Recalling that $e^{2\pi ik} = 1$ for $k$ an integer, we have $z = re^{i(\theta + 2\pi k)}$. Therefore,
\[
\ln z = \ln r + i(\theta + 2\pi k), \quad k = \text{integer}.
\]
The natural logarithm is a multivalued function. In fact, there are an infinite number of values for a given $z$. Of course, this contradicts the definition of a function that you were first taught.

Figure 8.9: Domain coloring of the complex z-plane assigning colors to arg(z).

Thus, one typically will only report the principal value, $\operatorname{Log} z = \ln r + i\theta$, for $\theta$ restricted to some interval of length $2\pi$, such as $[0, 2\pi)$. In order to account for the multivaluedness, one introduces a way to extend the complex plane so as to include all of the branches. This is done by assigning a plane to each branch, using (branch) cuts along lines, and then gluing the planes together at the branch cuts to form what is called a Riemann surface. We will not elaborate upon this any further here and refer the interested reader to more advanced texts. Comparing the multivalued logarithm to the principal value logarithm, we have
\[
\ln z = \operatorname{Log} z + 2n\pi i.
\]
We should note that some books use $\log z$ instead of $\ln z$. It should not be confused with the common logarithm.

8.3.1 Complex Domain Coloring

Another method for visualizing complex functions is domain coloring. The idea was described by Frank A. Farris. There are a few approaches to this method. The main idea is that one colors each point of the z-plane (the domain) according to arg(z) as shown in Figure 8.9. The modulus, |f(z)|, is then plotted as a surface. Examples are shown for f(z) = z² in Figure 8.10 and f(z) = 1/z(1 − z) in Figure 8.11.

Figure 8.10: Domain coloring for f(z) = z². The left figure shows the phase coloring. The right figure shows the colored surface with height |f(z)|.

Figure 8.11: Domain coloring for f(z) = 1/z(1 − z). The left figure shows the phase coloring. The right figure shows the colored surface with height |f(z)|.

We would like to put all of this information in one plot. We can do this by adjusting the brightness of the colored domain by using the modulus of the function. In the plots that follow we use the fractional part of ln |z|. In Figure 8.12 we show the effect for the z-plane using f(z) = z. In the figures that follow we look at several other functions. In these plots we have chosen to view the functions in a circular window.

Figure 8.12: Domain coloring for the function f(z) = z showing a coloring for arg(z) and brightness based on |f(z)|.

One can see the rich behavior hidden in these figures. As you progress in your reading, especially after the next chapter, you should return to these figures and locate the zeros, poles, branch points and branch cuts. A search online will lead you to other colorings and superposition of the uv grid on these figures.

As a final picture, we look at iteration in the complex plane. Consider the function f(z) = z² − 0.75 − 0.2i. Interesting figures result when studying the iteration in the complex plane. In Figure 8.15 we show f(z) and f²⁰(z), which is the iteration of f twenty times. It leads to an interesting coloring. What happens when one keeps iterating? Such iterations lead to the study of Julia and Mandelbrot sets. In Figure 8.16 we show six iterations of f(z) = (1 − i/2) sin z.

Figure 8.13: Domain coloring for the function f(z) = z².


Figure 8.14: Domain coloring for several functions. On the top row the domain coloring is shown for f(z) = z⁴ and f(z) = sin z. On the second row plots for f(z) = √(1 + z) and f(z) = 1/[z(1/2 − z)(z − i)(z − i + 1)] are shown. In the last row domain colorings for f(z) = ln z and f(z) = sin(1/z) are shown.

Figure 8.15: Domain coloring for f(z) = z² − 0.75 − 0.2i. The left figure shows the phase coloring. On the right is the plot for f²⁰(z).

Figure 8.16: Domain coloring for six iterations of f(z) = (1 − i/2) sin z.


The following code was used in MATLAB to produce these figures.

% Define the map to iterate.
fn = @(z) (1 - 1i/2)*sin(z);

% Set up a grid of complex points in the square [-2,2] x [-2,2].
xmin = -2; xmax = 2; ymin = -2; ymax = 2;
Nx = 500;
Ny = 500;
x = linspace(xmin, xmax, Nx);
y = linspace(ymin, ymax, Ny);
[X, Y] = meshgrid(x, y);
z = complex(X, Y);

% Apply the map six times.
tmp = z;
for n = 1:6
    tmp = fn(tmp);
end
Z = tmp;
XX = real(Z);
YY = imag(Z);

% Build a mask for the circular viewing window.
R2 = max(max(X.^2));
R = max(max(XX.^2 + YY.^2));   % not used below; kept from the original script
circle(:,:,1) = X.^2 + Y.^2 < R2;
circle(:,:,2) = circle(:,:,1);
circle(:,:,3) = circle(:,:,1);
addcirc(:,:,1) = circle(:,:,1) == 0;
addcirc(:,:,2) = circle(:,:,1) == 0;
addcirc(:,:,3) = circle(:,:,1) == 0;

warning off MATLAB:divideByZero;   % suppress divide-by-zero warnings (older releases)

% Hue comes from arg(f(z)); brightness from the fractional part of ln|f(z)|.
hsvCircle = ones(Nx, Ny, 3);
hsvCircle(:,:,1) = atan2(YY, XX)*180/pi + (atan2(YY, XX)*180/pi < 0)*360;
hsvCircle(:,:,1) = hsvCircle(:,:,1)/360;
lgz = log(XX.^2 + YY.^2)/2;
hsvCircle(:,:,2) = 0.75;
hsvCircle(:,:,3) = 1 - (lgz - floor(lgz))/2;
hsvCircle(:,:,1) = flipud(hsvCircle(:,:,1));
hsvCircle(:,:,2) = flipud(hsvCircle(:,:,2));
hsvCircle(:,:,3) = flipud(hsvCircle(:,:,3));

% Convert to RGB, apply the circular mask, and display the image.
rgbCircle = hsv2rgb(hsvCircle);
rgbCircle = rgbCircle.*circle + addcirc;
image(rgbCircle)
axis square
set(gca, 'XTickLabel', [])
set(gca, 'YTickLabel', [])
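The same script can be adapted, with small changes, to produce colorings like those in the other figures; for example, reducing the loop to a single application of f(z) = z² gives a plot in the spirit of Figure 8.13. A minimal sketch of the change (the rest of the script above is assumed unchanged):

fn = @(z) z.^2;   % color the map f(z) = z^2 instead of the iterated sine map
tmp = fn(z);      % apply it once rather than looping six times
Z = tmp;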


8.4 Complex Differentiation

Next we want to differentiate complex functions. We generalize the definition from single variable calculus,

f′(z) = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz, (8.14)

provided this limit exists. The computation of this limit is similar to what one sees in multivariable calculus for limits of real functions of two variables. Letting z = x + iy and Δz = Δx + iΔy, then

z + Δz = (x + Δx) + i(y + Δy).

Letting Δz → 0 means that we get closer to z. There are many paths that one can take that will approach z. [See Figure 8.17.]

Figure 8.17: There are many paths that approach z as Δz → 0.

Figure 8.18: A path that approaches z with y = constant.

It is sufficient to look at two paths in particular. We first consider the path y = constant. This horizontal path is shown in Figure 8.18. For this path, Δz = Δx + iΔy = Δx, since y does not change along the path. The derivative, if it exists, is then computed as

f′(z) = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz
  = lim_{Δx→0} [u(x + Δx, y) + iv(x + Δx, y) − (u(x, y) + iv(x, y))]/Δx
  = lim_{Δx→0} [u(x + Δx, y) − u(x, y)]/Δx + lim_{Δx→0} i[v(x + Δx, y) − v(x, y)]/Δx. (8.15)

The last two limits are easily identified as partial derivatives of real valued functions of two variables. Thus, we have shown that when f′(z) exists,

f′(z) = ∂u/∂x + i ∂v/∂x. (8.16)

A similar computation can be made if instead we take the vertical path, x = constant, shown in Figure 8.17. In this case Δz = iΔy and

f′(z) = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz
  = lim_{Δy→0} [u(x, y + Δy) + iv(x, y + Δy) − (u(x, y) + iv(x, y))]/(iΔy)
  = lim_{Δy→0} [u(x, y + Δy) − u(x, y)]/(iΔy) + lim_{Δy→0} [v(x, y + Δy) − v(x, y)]/Δy. (8.17)

Therefore,

f′(z) = ∂v/∂y − i ∂u/∂y. (8.18)


We have found two different expressions for f′(z) by following two different paths to z. If the derivative exists, then these two expressions must be the same. Equating the real and imaginary parts of these expressions, we have

The Cauchy-Riemann Equations.

∂u/∂x = ∂v/∂y, ∂v/∂x = −∂u/∂y. (8.19)

These are known as the Cauchy-Riemann equations.² ² Augustin-Louis Cauchy (1789-1857) was a French mathematician well known for his work in analysis. Georg Friedrich Bernhard Riemann (1826-1866) was a German mathematician who made major contributions to geometry and analysis.

Theorem 8.1. f(z) is holomorphic (differentiable) if and only if the Cauchy-Riemann equations are satisfied.

Example 8.8. f(z) = z².
In this case we have already seen that z² = x² − y² + 2ixy. Therefore, u(x, y) = x² − y² and v(x, y) = 2xy. We first check the Cauchy-Riemann equations:

∂u/∂x = 2x = ∂v/∂y, ∂v/∂x = 2y = −∂u/∂y. (8.20)

Therefore, f(z) = z² is differentiable.
We can further compute the derivative using either Equation (8.16) or Equation (8.18). Thus,

f′(z) = ∂u/∂x + i ∂v/∂x = 2x + i(2y) = 2z.

This result is not surprising.

Example 8.9. f(z) = z̄.
In this case we have f(z) = x − iy. Therefore, u(x, y) = x and v(x, y) = −y. But, ∂u/∂x = 1 and ∂v/∂y = −1. Thus, the Cauchy-Riemann equations are not satisfied and we conclude that f(z) = z̄ is not differentiable.

Harmonic functions satisfy Laplace's equation.

Another consequence of the Cauchy-Riemann equations is that both u(x, y) and v(x, y) are harmonic functions. A real-valued function u(x, y) is harmonic if it satisfies Laplace's equation in 2D, ∇²u = 0, or

∂²u/∂x² + ∂²u/∂y² = 0.

Theorem 8.2. If f(z) = u(x, y) + iv(x, y) is differentiable, then u and v are harmonic functions.

This is easily proven using the Cauchy-Riemann equations.

∂²u/∂x² = ∂/∂x (∂u/∂x) = ∂/∂x (∂v/∂y) = ∂/∂y (∂v/∂x) = −∂/∂y (∂u/∂y) = −∂²u/∂y². (8.21)

Example 8.10. Is u(x, y) = x² + y² harmonic?

∂²u/∂x² + ∂²u/∂y² = 2 + 2 ≠ 0.

No, it is not.

Example 8.11. Is u(x, y) = x² − y² harmonic?

∂²u/∂x² + ∂²u/∂y² = 2 − 2 = 0.

Yes, it is.

The harmonic conjugate function.

Given a harmonic function u(x, y), can one find a function v(x, y) such that f(z) = u(x, y) + iv(x, y) is differentiable? In this case, v is called the harmonic conjugate of u.

Example 8.12. Find the harmonic conjugate of u(x, y) = x² − y² and determine f(z) = u + iv such that u + iv is differentiable.
The Cauchy-Riemann equations tell us the following about the unknown function, v(x, y):

∂v/∂x = −∂u/∂y = 2y,
∂v/∂y = ∂u/∂x = 2x.

We can integrate the first of these equations to obtain

v(x, y) = ∫ 2y dx = 2xy + c(y).

Here c(y) is an arbitrary function of y. One can check to see that this works by simply differentiating the result with respect to x.
However, the second equation must also hold. So, we differentiate the result with respect to y to find that

∂v/∂y = 2x + c′(y).

Since we were supposed to get 2x, we have that c′(y) = 0. Thus, c(y) = k is a constant.
We have just shown that we get an infinite number of functions,

v(x, y) = 2xy + k,

such that

f(z) = x² − y² + i(2xy + k)

is differentiable. In fact, for k = 0 this is nothing other than f(z) = z².
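Since the Cauchy-Riemann equations are used repeatedly from here on, a quick numerical check (not part of the original text) can be reassuring. The sketch below verifies them for u = x² − y² and v = 2xy at a sample point using centered finite differences; the sample point and step size are arbitrary choices.

% Check the Cauchy-Riemann equations for u = x^2 - y^2, v = 2xy.
u = @(x,y) x.^2 - y.^2;
v = @(x,y) 2*x.*y;
x0 = 0.7; y0 = -0.3; h = 1e-6;          % sample point and finite-difference step
ux = (u(x0+h,y0) - u(x0-h,y0))/(2*h);   % approximates du/dx = 2*x0
uy = (u(x0,y0+h) - u(x0,y0-h))/(2*h);   % approximates du/dy = -2*y0
vx = (v(x0+h,y0) - v(x0-h,y0))/(2*h);   % approximates dv/dx = 2*y0
vy = (v(x0,y0+h) - v(x0,y0-h))/(2*h);   % approximates dv/dy = 2*x0
[ux - vy, vx + uy]                      % both entries should be near zero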


8.5 Complex Integration

We have introduced functions of a complex variable. We also established when functions are differentiable as complex functions, or holomorphic. In this section we turn to integration in the complex plane. We will learn how to compute complex path integrals, or contour integrals. We will see that contour integral methods are also useful in the computation of some of the real integrals that we will face when exploring Fourier transforms in the next chapter.

Figure 8.19: We would like to integrate a complex function f(z) over the path Γ in the complex plane.

8.5.1 Complex Path Integrals

In this section we will investigate the computation of complex path integrals. Given two points in the complex plane, connected by a path Γ as shown in Figure 8.19, we would like to define the integral of f(z) along Γ,

∫_Γ f(z) dz.

A natural procedure would be to work in real variables, by writing

∫_Γ f(z) dz = ∫_Γ [u(x, y) + iv(x, y)] (dx + i dy),

since z = x + iy and dz = dx + i dy.

Figure 8.20: Examples of (a) a connected set and (b) a disconnected set.

In order to carry out the integration, we then have to find a parametrization of the path and use methods from a multivariate calculus class. Namely, let u and v be continuous in domain D, and Γ a piecewise smooth curve in D. Let (x(t), y(t)) be a parametrization of Γ for t₀ ≤ t ≤ t₁ and f(z) = u(x, y) + iv(x, y) for z = x + iy. Then

∫_Γ f(z) dz = ∫_{t₀}^{t₁} [u(x(t), y(t)) + iv(x(t), y(t))] (dx/dt + i dy/dt) dt. (8.22)

Here we have used

dz = dx + i dy = (dx/dt + i dy/dt) dt.

Furthermore, a set D is called a domain if it is both open and connected.
Before continuing, we first define open and connected. A set D is connected if and only if for all z₁ and z₂ in D there exists a piecewise smooth curve connecting z₁ to z₂ and lying in D. Otherwise it is called disconnected. Examples are shown in Figure 8.20.
A set D is open if and only if for all z₀ in D there exists an open disk |z − z₀| < ρ in D. In Figure 8.21 we show a region with two disks.

Figure 8.21: Locations of open disks inside and on the boundary of a region.

For all points on the interior of the region one can find at least one disk contained entirely in the region. The closer one is to the boundary, the smaller the radii of such disks. However, for a point on the boundary, every such disk would contain points inside and outside the region. Thus, an open set in the complex plane would not contain any of its boundary points.
We now have a prescription for computing path integrals. Let's see how this works with a couple of examples.

Figure 8.22: Contour for Example 8.13.

Example 8.13. Evaluate ∫_C z² dz, where C is the arc of the unit circle in the first quadrant, as shown in Figure 8.22.
There are two ways we could carry out the parametrization. First, we note that the standard parametrization of the unit circle is

(x(θ), y(θ)) = (cos θ, sin θ), 0 ≤ θ ≤ 2π.

For a quarter circle in the first quadrant, 0 ≤ θ ≤ π/2, we let z = cos θ + i sin θ. Therefore, dz = (− sin θ + i cos θ) dθ and the path integral becomes

∫_C z² dz = ∫₀^{π/2} (cos θ + i sin θ)²(− sin θ + i cos θ) dθ.

We can expand the integrand and integrate, having to perform some trigonometric integrations:

∫₀^{π/2} [sin³θ − 3 cos²θ sin θ + i(cos³θ − 3 cos θ sin²θ)] dθ.

The reader should work out these trigonometric integrations and confirm the result. For example, you can use

sin³θ = sin θ (1 − cos²θ)

to write the real part of the integrand as

sin θ − 4 cos²θ sin θ.

The resulting antiderivative becomes

− cos θ + (4/3) cos³θ.

The imaginary integrand can be integrated in a similar fashion.
While this integral is doable, there is a simpler procedure. We first note that z = e^{iθ} on C. So, dz = ie^{iθ} dθ. The integration then becomes

∫_C z² dz = ∫₀^{π/2} (e^{iθ})² ie^{iθ} dθ = i ∫₀^{π/2} e^{3iθ} dθ = [ie^{3iθ}/(3i)]₀^{π/2} = −(1 + i)/3. (8.23)

Example 8.14. Evaluate ∫_Γ z dz, for the path Γ = γ₁ ∪ γ₂ shown in Figure 8.23.
In this problem we have a path that is a piecewise smooth curve. We can compute the path integral by computing the values along the two segments of the path and adding the results. Let the two segments be called γ₁ and γ₂ as shown in Figure 8.23 and parametrize each path separately.
Over γ₁ we note that y = 0. Thus, z = x for x ∈ [0, 1]. It is natural to take x as the parameter. So, we let dz = dx to find

∫_{γ₁} z dz = ∫₀¹ x dx = 1/2.

For path γ₂ we have that z = 1 + iy for y ∈ [0, 1] and dz = i dy. Inserting this parametrization into the integral, the integral becomes

∫_{γ₂} z dz = ∫₀¹ (1 + iy) i dy = i − 1/2.

Figure 8.23: Contour for Example 8.14 with Γ = γ₁ ∪ γ₂.

Combining the results for the paths γ₁ and γ₂, we have ∫_Γ z dz = 1/2 + (i − 1/2) = i.

Example 8.15. Evaluate ∫_{γ₃} z dz, where γ₃ is the path shown in Figure 8.24.
In this case we take a path from z = 0 to z = 1 + i along a different path than in the last example. Let γ₃ = {(x, y) | y = x², x ∈ [0, 1]} = {z | z = x + ix², x ∈ [0, 1]}. Then, dz = (1 + 2ix) dx.

Figure 8.24: Contour for Example 8.15.

The integral becomes

∫_{γ₃} z dz = ∫₀¹ (x + ix²)(1 + 2ix) dx = ∫₀¹ (x + 3ix² − 2x³) dx = [x²/2 + ix³ − x⁴/2]₀¹ = i. (8.24)

In the last case we found the same answer as we had obtained in Example 8.14. But we should not take this as a general rule for all complex path integrals. In fact, it is not true that integrating over different paths always yields the same results. However, when this is true, then we refer to this property as path independence. In particular, the integral ∫ f(z) dz is path independent if

∫_{Γ₁} f(z) dz = ∫_{Γ₂} f(z) dz

for all paths from z₁ to z₂, as shown in Figure 8.25.

Figure 8.25: ∫_{Γ₁} f(z) dz = ∫_{Γ₂} f(z) dz for all paths from z₁ to z₂ when the integral of f(z) is path independent.

We can show that if ∫ f(z) dz is path independent, then the integral of f(z) over all closed loops is zero,

∫_{closed loops} f(z) dz = 0.

A common notation for integrating over closed loops is ∮_C f(z) dz. But first we have to define what we mean by a closed loop. A simple closed contour is a path satisfying

a. The end point is the same as the beginning point. (This makes the loop closed.)


b. There are no self-intersections. (This makes the loop simple.)

A loop in the shape of a figure eight is closed, but it is not simple.
Now, consider an integral over the closed loop C shown in Figure 8.26.

We pick two points on the loop, breaking it into two contours, C₁ and C₂. Then we make use of the path independence by defining C₂⁻ to be the path along C₂ but in the opposite direction. Then,

∮_C f(z) dz = ∫_{C₁} f(z) dz + ∫_{C₂} f(z) dz = ∫_{C₁} f(z) dz − ∫_{C₂⁻} f(z) dz. (8.25)

Assuming that the integrals from point 1 to point 2 are path independent, then the integrals over C₁ and C₂⁻ are equal. Therefore, we have ∮_C f(z) dz = 0.

∮_C f(z) dz = 0 if the integral is path independent.

Figure 8.26: The integral ∮_C f(z) dz around C is zero if the integral ∫_Γ f(z) dz is path independent.

Example 8.16. Consider the integral ∮_C z dz for C the closed contour shown in Figure 8.24, starting at z = 0 following path γ₁, then γ₂, and returning to z = 0. Based on the earlier examples and the fact that going backwards on γ₃ introduces a negative sign, we have

∮_C z dz = ∫_{γ₁} z dz + ∫_{γ₂} z dz − ∫_{γ₃} z dz = 1/2 + (i − 1/2) − i = 0.

8.5.2 Cauchy’s Theorem

Next we want to investigate if we can determine that integrals over simple closed contours vanish without doing all the work of parametrizing the contour. First, we need to establish the direction about which we traverse the contour. We can define the orientation of a curve by referring to the normal of the curve.
Recall that the normal is a perpendicular to the curve. There are two such perpendiculars. One normal points outward and the other normal points towards the interior of a closed curve. We will define a positively oriented contour as one that is traversed with the outward normal pointing to the right. As one follows loops, the interior would then be on the left.

A curve with parametrization (x(t), y(t)) has a normal (n_x, n_y) = (−dx/dt, dy/dt).

We now consider ∮_C (u + iv) dz over a simple closed contour. This can be written in terms of two real integrals in the xy-plane:

∮_C (u + iv) dz = ∫_C (u + iv)(dx + i dy) = ∫_C u dx − v dy + i ∫_C v dx + u dy. (8.26)

These integrals in the plane can be evaluated using Green's Theorem in the Plane. Recall this theorem from your last semester of calculus:


Green's Theorem in the Plane.

Theorem 8.3. Let P(x, y) and Q(x, y) be continuously differentiable functions on and inside the simple closed curve C, as shown in Figure 8.27. Denoting the enclosed region S, we have

∫_C P dx + Q dy = ∫∫_S (∂Q/∂x − ∂P/∂y) dx dy. (8.27)

Figure 8.27: Region used in Green's Theorem.

Green's Theorem in the Plane is one of the major integral theorems of vector calculus. It was discovered by George Green (1793-1841) and published in 1828, about four years before he entered Cambridge as an undergraduate.

Using Green's Theorem to rewrite the first integral in (8.26), we have

∫_C u dx − v dy = ∫∫_S (−∂v/∂x − ∂u/∂y) dx dy.

If u and v satisfy the Cauchy-Riemann equations (8.19), then the integrand in the double integral vanishes. Therefore,

∫_C u dx − v dy = 0.

In a similar fashion, one can show that

∫_C v dx + u dy = 0.

We have thus proven the following theorem:

Cauchy's Theorem

Theorem 8.4. If u and v satisfy the Cauchy-Riemann equations (8.19) inside and on the simple closed contour C, then

∮_C (u + iv) dz = 0. (8.28)

Corollary. ∮_C f(z) dz = 0 when f is differentiable in domain D with C ⊂ D.

Either one of these is referred to as Cauchy’s Theorem.

Example 8.17. Evaluate ∮_{|z−1|=3} z⁴ dz.
Since f(z) = z⁴ is differentiable inside the circle |z − 1| = 3, this integral vanishes.

We can use Cauchy's Theorem to show that we can deform one contour into another, perhaps simpler, contour.

One can deform contours into simpler ones.

Theorem 8.5. If f(z) is holomorphic between two simple closed contours, C and C′, then ∮_C f(z) dz = ∮_{C′} f(z) dz.

Proof. We consider the two curves C and C′ as shown in Figure 8.28. Connecting the two contours with contours Γ₁ and Γ₂ (as shown in the figure), C is seen to split into contours C₁ and C₂ and C′ into contours C′₁ and C′₂. Note that f(z) is differentiable inside the newly formed regions between the curves. Also, the boundaries of these regions are now simple closed curves. Therefore, Cauchy's Theorem tells us that the integrals of f(z) over these regions are zero.
Noting that integrations over contours opposite to the positive orientation are the negative of integrals that are positively oriented, we have from Cauchy's Theorem that

∫_{C₁} f(z) dz + ∫_{Γ₁} f(z) dz − ∫_{C′₁} f(z) dz + ∫_{Γ₂} f(z) dz = 0

and

∫_{C₂} f(z) dz − ∫_{Γ₂} f(z) dz − ∫_{C′₂} f(z) dz − ∫_{Γ₁} f(z) dz = 0.

In the first integral we have traversed the contours in the following order: C₁, Γ₁, C′₁ backwards, and Γ₂. The second integral denotes the integration over the lower region, but going backwards over all contours except for C₂.

Figure 8.28: The contours needed to prove that ∮_C f(z) dz = ∮_{C′} f(z) dz when f(z) is holomorphic between the contours C and C′.

Combining these results by adding the two equations above, we have

∫_{C₁} f(z) dz + ∫_{C₂} f(z) dz − ∫_{C′₁} f(z) dz − ∫_{C′₂} f(z) dz = 0.

Noting that C = C₁ + C₂ and C′ = C′₁ + C′₂, we have

∮_C f(z) dz = ∮_{C′} f(z) dz,

as was to be proven.

Example 8.18. Compute ∮_R dz/z for R the rectangle [−2, 2] × [−2i, 2i].

Figure 8.29: The contours used to compute ∮_R dz/z. Note that to compute the integral around R we can deform the contour to the circle C since f(z) is differentiable in the region between the contours.

We can compute this integral by looking at four separate integrals over the sides of the rectangle in the complex plane. One simply parametrizes each line segment, performs the integration, and sums the four separate results. From the last theorem, we can instead integrate over a simpler contour by deforming the rectangle into a circle as long as f(z) = 1/z is differentiable in the region bounded by the rectangle and the circle. So, using the unit circle, as shown in Figure 8.29, the integration might be easier to perform.

More specifically, the last theorem tells us that

∮_R dz/z = ∮_{|z|=1} dz/z.

The latter integral can be computed using the parametrization z = e^{iθ} for θ ∈ [0, 2π]. Thus,

∮_{|z|=1} dz/z = ∫₀^{2π} ie^{iθ} dθ / e^{iθ} = i ∫₀^{2π} dθ = 2πi. (8.29)

Therefore, we have found that ∮_R dz/z = 2πi by deforming the original simple closed contour.

Figure 8.30: The contours used to compute ∮_R dz/z. The added diagonals are for the reader to easily see the arguments used in the evaluation of the limits when integrating over the segments of the square R.

For fun, let's do this the long way to see how much effort was saved. We will label the contour as shown in Figure 8.30. The lower segment, γ₄, of the square can be simply parametrized by noting that along this segment z = x − 2i for x ∈ [−2, 2]. Then, we have

∮_{γ₄} dz/z = ∫_{−2}^{2} dx/(x − 2i) = ln(x − 2i) |_{−2}^{2} = (ln(2√2) − πi/4) − (ln(2√2) − 3πi/4) = πi/2. (8.30)

We note that the arguments of the logarithms are determined from the angles made by the diagonals provided in Figure 8.30.
Similarly, the integral along the top segment, z = x + 2i, x ∈ [−2, 2], is computed as

∮_{γ₂} dz/z = ∫_{2}^{−2} dx/(x + 2i) = ln(x + 2i) |_{2}^{−2} = (ln(2√2) + 3πi/4) − (ln(2√2) + πi/4) = πi/2. (8.31)

The integral over the right side, z = 2 + iy, y ∈ [−2, 2], is

∮_{γ₁} dz/z = ∫_{−2}^{2} i dy/(2 + iy) = ln(2 + iy) |_{−2}^{2} = (ln(2√2) + πi/4) − (ln(2√2) − πi/4) = πi/2. (8.32)

Finally, the integral over the left side, z = −2 + iy, y ∈ [−2, 2], is

∮_{γ₃} dz/z = ∫_{2}^{−2} i dy/(−2 + iy) = ln(−2 + iy) |_{2}^{−2} = (ln(2√2) + 5πi/4) − (ln(2√2) + 3πi/4) = πi/2. (8.33)

Therefore, we have that

∮_R dz/z = ∫_{γ₁} dz/z + ∫_{γ₂} dz/z + ∫_{γ₃} dz/z + ∫_{γ₄} dz/z = πi/2 + πi/2 + πi/2 + πi/2 = 4(πi/2) = 2πi. (8.34)

This gives the same answer we had found using a simple contour deformation.
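As a numerical sanity check (not part of the original text), one can add the four side integrals directly in MATLAB and compare with 2πi; the parametrizations below are the same ones used in the example.

f = @(z) 1./z;
I1 = integral(@(y) 1i*f(2 + 1i*y), -2, 2);    % right side, z = 2 + iy, y from -2 to 2
I2 = integral(@(x) f(x + 2i), 2, -2);         % top side, z = x + 2i, x from 2 to -2
I3 = integral(@(y) 1i*f(-2 + 1i*y), 2, -2);   % left side, z = -2 + iy, y from 2 to -2
I4 = integral(@(x) f(x - 2i), -2, 2);         % bottom side, z = x - 2i, x from -2 to 2
I1 + I2 + I3 + I4                             % should be close to 2*pi*1i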

The converse of Cauchy's Theorem is not true, namely ∮_C f(z) dz = 0 does not always imply that f(z) is differentiable. What we do have is Morera's Theorem (Giacinto Morera, 1856-1909):

Morera's Theorem.

Theorem 8.6. Let f be continuous in a domain D. Suppose that for every simple closed contour C in D, ∮_C f(z) dz = 0. Then f is differentiable in D.

The proof is a bit more detailed than we need to go into here. However, this theorem is useful in the next section.

8.5.3 Analytic Functions and Cauchy’s Integral Formula

In the previous section we saw that Cauchy's Theorem was useful for computing particular integrals without having to parametrize the contours, or for deforming contours into simpler contours. The integrand needs to possess certain differentiability properties. In this section, we will generalize the functions that we can integrate slightly so that we can integrate a larger family of complex functions. This will lead us to the Cauchy Integral Formula, which extends Cauchy's Theorem to functions analytic in an annulus. However, first we need to explore the concept of analytic functions.

A function f(z) is analytic in domain D if, for every open disk |z − z₀| < ρ lying in D, f(z) can be represented as a power series about z₀. Namely,

f(z) = ∑_{n=0}^{∞} c_n (z − z₀)ⁿ.

This series converges uniformly and absolutely inside the circle of convergence, |z − z₀| < R, with radius of convergence R. [See the Appendix for a review of convergence.]

Since f(z) can be written as a uniformly convergent power series, we can integrate it term by term over any simple closed contour in D containing z₀. In particular, we have to compute integrals like ∮_C (z − z₀)ⁿ dz. As we will see in the homework exercises, these integrals evaluate to zero for most n. Thus, we can show that for f(z) analytic in D and on any closed contour C lying in D, ∮_C f(z) dz = 0. Also, f is a uniformly convergent sum of continuous functions, so f(z) is also continuous. Thus, by Morera's Theorem, we have that f(z) is differentiable if it is analytic. Often terms like analytic, differentiable and holomorphic are used interchangeably, though there is a subtle distinction due to their definitions.

There are various types of complex-valued functions. A holomorphic function is (complex) differentiable in a neighborhood of every point in its domain. An analytic function has a convergent Taylor series expansion in a neighborhood of each point in its domain. We see here that analytic functions are holomorphic and vice versa. If a function is holomorphic throughout the complex plane, then it is called an entire function. Finally, if a function is holomorphic on all of its domain except at a set of isolated poles (to be defined later), then it is called a meromorphic function.

As examples of series expansions about a given point, we will consider series expansions and regions of convergence for f(z) = 1/(1 + z).


Example 8.19. Find the series expansion of f(z) = 1/(1 + z) about z₀ = 0.
This case is simple. From Chapter 1 we recall that f(z) is the sum of a geometric series for |z| < 1. We have

f(z) = 1/(1 + z) = ∑_{n=0}^{∞} (−z)ⁿ.

Thus, this series expansion converges inside the unit circle (|z| < 1) in the complex plane.

Example 8.20. Find the series expansion of f(z) = 1/(1 + z) about z₀ = 1/2.
We now look into an expansion about a different point. We could compute the expansion coefficients using Taylor's formula for the coefficients. However, we can also make use of the formula for geometric series after rearranging the function. We seek an expansion in powers of z − 1/2. So, we rewrite the function in a form that is a function of z − 1/2. Thus,

f(z) = 1/(1 + z) = 1/(1 + (z − 1/2 + 1/2)) = 1/(3/2 + (z − 1/2)).

This is not quite in the form we need. It would be nice if the denominator were of the form of one plus something. [Note: This is similar to what we had seen in Example A.35.] We can get the denominator into such a form by factoring out the 3/2. Then we would have

f(z) = (2/3) · 1/(1 + (2/3)(z − 1/2)).

The second factor now has the form 1/(1 − r), which would be the sum of a geometric series with first term a = 1 and ratio r = −(2/3)(z − 1/2), provided that |r| < 1. Therefore, we have found that

f(z) = (2/3) ∑_{n=0}^{∞} [−(2/3)(z − 1/2)]ⁿ

for

|−(2/3)(z − 1/2)| < 1.

This convergence interval can be rewritten as

|z − 1/2| < 3/2,

which is a circle centered at z = 1/2 with radius 3/2.

In Figure 8.31 we show the regions of convergence for the power series expansions of f(z) = 1/(1 + z) about z = 0 and z = 1/2. We note that the first expansion gives that f(z) is at least analytic inside the region |z| < 1. The second expansion shows that f(z) is analytic in a larger region, |z − 1/2| < 3/2. We will see later that there are expansions which converge outside of these regions and that some yield expansions involving negative powers of z − z₀.

Figure 8.31: Regions of convergence for expansions of f(z) = 1/(1 + z) about z = 0 and z = 1/2.

We now present the main theorem of this section:


Cauchy Integral Formula

Theorem 8.7. Let f(z) be analytic in |z − z₀| < ρ and let C be the boundary (circle) of this disk. Then,

f(z₀) = (1/2πi) ∮_C f(z)/(z − z₀) dz. (8.35)

Proof. In order to prove this, we first make use of the analyticity of f(z). We insert the power series expansion of f(z) about z₀ into the integrand. Then we have

f(z)/(z − z₀) = (1/(z − z₀)) [∑_{n=0}^{∞} c_n (z − z₀)ⁿ]
  = (1/(z − z₀)) [c₀ + c₁(z − z₀) + c₂(z − z₀)² + . . .]
  = c₀/(z − z₀) + [c₁ + c₂(z − z₀) + . . .], (8.36)

where the bracketed sum in the last line is an analytic function. As noted, the integrand can therefore be written as

f(z)/(z − z₀) = c₀/(z − z₀) + h(z),

where h(z) is an analytic function, since h(z) is representable as a series expansion about z₀. We have already shown that analytic functions are differentiable, so by Cauchy's Theorem ∮_C h(z) dz = 0.
Noting also that c₀ = f(z₀) is the first term of a Taylor series expansion about z = z₀, we have

∮_C f(z)/(z − z₀) dz = ∮_C [c₀/(z − z₀) + h(z)] dz = f(z₀) ∮_C 1/(z − z₀) dz.

We need only compute the integral ∮_C 1/(z − z₀) dz to finish the proof of the Cauchy Integral Formula. This is done by parametrizing the circle, |z − z₀| = ρ, as shown in Figure 8.32. This is simply done by letting

z − z₀ = ρe^{iθ}.

(Note that this has the right complex modulus since |e^{iθ}| = 1.) Then dz = iρe^{iθ} dθ. Using this parametrization, we have

∮_C dz/(z − z₀) = ∫₀^{2π} iρe^{iθ} dθ / (ρe^{iθ}) = i ∫₀^{2π} dθ = 2πi.

Figure 8.32: Circular contour used in proving the Cauchy Integral Formula.

Therefore,

∮_C f(z)/(z − z₀) dz = f(z₀) ∮_C 1/(z − z₀) dz = 2πi f(z₀),

as was to be shown.


Example 8.21. Compute ∮_{|z|=4} cos z/(z² − 6z + 5) dz.
In order to apply the Cauchy Integral Formula, we need to factor the denominator, z² − 6z + 5 = (z − 1)(z − 5). We next locate the zeros of the denominator. In Figure 8.33 we show the contour and the points z = 1 and z = 5. The only point inside the region bounded by the contour is z = 1. Therefore, we can apply the Cauchy Integral Formula for f(z) = cos z/(z − 5) to the integral

∫_{|z|=4} cos z/((z − 1)(z − 5)) dz = ∫_{|z|=4} f(z)/(z − 1) dz = 2πi f(1).

Therefore, we have

∫_{|z|=4} cos z/((z − 1)(z − 5)) dz = −πi cos(1)/2.

We have shown that f(z₀) has an integral representation for f(z) analytic in |z − z₀| < ρ. In fact, all derivatives of an analytic function have an integral representation. This is given by

f^{(n)}(z₀) = (n!/2πi) ∮_C f(z)/(z − z₀)^{n+1} dz. (8.37)

Figure 8.33: Circular contour used in computing ∮_{|z|=4} cos z/(z² − 6z + 5) dz.

This can be proven following a derivation similar to that for the Cauchy Integral Formula. Inserting the Taylor series expansion for f(z) into the integral on the right hand side, we have

∮_C f(z)/(z − z₀)^{n+1} dz = ∑_{m=0}^{∞} c_m ∮_C (z − z₀)^m/(z − z₀)^{n+1} dz = ∑_{m=0}^{∞} c_m ∮_C dz/(z − z₀)^{n−m+1}. (8.38)

Picking k = n − m, the integrals in the sum can be computed by using the following result:

∮_C dz/(z − z₀)^{k+1} = 0 for k ≠ 0, and 2πi for k = 0. (8.39)

The proof is left for the exercises.
The only nonvanishing integrals, ∮_C dz/(z − z₀)^{n−m+1}, occur when k = n − m = 0, or m = n. Therefore, the series of integrals collapses to one term and we have

∮_C f(z)/(z − z₀)^{n+1} dz = 2πi c_n.

We finish the proof by recalling that the coefficients of the Taylor series expansion for f(z) are given by

c_n = f^{(n)}(z₀)/n!.

Then,

∮_C f(z)/(z − z₀)^{n+1} dz = (2πi/n!) f^{(n)}(z₀)

and the result follows.


8.5.4 Laurent Series

Until this point we have only talked about series whose terms have nonnegative powers of z − z₀. It is possible to have series representations in which there are negative powers. In the last section we investigated expansions of f(z) = 1/(1 + z) about z = 0 and z = 1/2. The regions of convergence for each series were shown in Figure 8.31. Let us reconsider each of these expansions, but for values of z outside the region of convergence previously found.

Example 8.22. f(z) = 1/(1 + z) for |z| > 1.
As before, we make use of the geometric series. Since |z| > 1, we instead rewrite the function as

f(z) = 1/(1 + z) = (1/z) · 1/(1 + 1/z).

We now have the function in the form of the sum of a geometric series with first term a = 1 and ratio r = −1/z. We note that |z| > 1 implies that |r| < 1. Thus, we have the geometric series

f(z) = (1/z) ∑_{n=0}^{∞} (−1/z)ⁿ.

This can be re-indexed³ as

f(z) = ∑_{n=0}^{∞} (−1)ⁿ z^{−n−1} = ∑_{j=1}^{∞} (−1)^{j−1} z^{−j}.

Note that this series, which converges outside the unit circle, |z| > 1, has negative powers of z.

³ Re-indexing a series is often useful in series manipulations. In this case, we have the series

∑_{n=0}^{∞} (−1)ⁿ z^{−n−1} = z^{−1} − z^{−2} + z^{−3} + . . . .

The index is n. You can see that the index does not appear when the sum is expanded showing the terms. The summation index is sometimes referred to as a dummy index for this reason. Re-indexing allows one to rewrite the shorthand summation notation while capturing the same terms. In this example, the exponents are −n − 1. We can simplify the notation by letting −n − 1 = −j, or j = n + 1. Noting that j = 1 when n = 0, we get the sum ∑_{j=1}^{∞} (−1)^{j−1} z^{−j}.

Example 8.23. f(z) = 1/(1 + z) for |z − 1/2| > 3/2.
As before, we express this in a form in which we can use a geometric series expansion. We seek powers of z − 1/2. So, we add and subtract 1/2 to the z to obtain:

f(z) = 1/(1 + z) = 1/(1 + (z − 1/2 + 1/2)) = 1/(3/2 + (z − 1/2)).

Instead of factoring out the 3/2 as we had done in Example 8.20, we factor out the (z − 1/2) term. Then, we obtain

f(z) = 1/(1 + z) = (1/(z − 1/2)) · 1/[1 + (3/2)(z − 1/2)^{−1}].

Now we identify a = 1 and r = −(3/2)(z − 1/2)^{−1}. This leads to the series

f(z) = (1/(z − 1/2)) ∑_{n=0}^{∞} [−(3/2)(z − 1/2)^{−1}]ⁿ = ∑_{n=0}^{∞} (−3/2)ⁿ (z − 1/2)^{−n−1}. (8.40)

This converges for |z − 1/2| > 3/2 and can also be re-indexed to verify that this series involves negative powers of z − 1/2.


This leads to the following theorem:

Theorem 8.8. Let f(z) be analytic in an annulus, R₁ < |z − z₀| < R₂, with C a positively oriented simple closed curve around z₀ and inside the annulus as shown in Figure 8.34. Then,

f(z) = ∑_{j=0}^{∞} a_j (z − z₀)^j + ∑_{j=1}^{∞} b_j (z − z₀)^{−j},

with

a_j = (1/2πi) ∮_C f(z)/(z − z₀)^{j+1} dz,

and

b_j = (1/2πi) ∮_C f(z)/(z − z₀)^{−j+1} dz.

The above series can be written in the more compact form

f(z) = ∑_{j=−∞}^{∞} c_j (z − z₀)^j.

Such a series expansion is called a Laurent series expansion, named after its discoverer Pierre Alphonse Laurent (1813-1854).

Figure 8.34: This figure shows an annulus, R₁ < |z − z₀| < R₂, with C a positively oriented simple closed curve around z₀ and inside the annulus.

Example 8.24. Expand f(z) = 1/((1 − z)(2 + z)) in the annulus 1 < |z| < 2.
Using partial fractions, we can write this as

f(z) = (1/3) [1/(1 − z) + 1/(2 + z)].

We can expand the first fraction, 1/(1 − z), as an analytic function in the region |z| > 1 and the second fraction, 1/(2 + z), as an analytic function in |z| < 2. This is done as follows. First, we write

1/(2 + z) = 1/(2[1 − (−z/2)]) = (1/2) ∑_{n=0}^{∞} (−z/2)ⁿ.

Then, we write

1/(1 − z) = −1/(z[1 − 1/z]) = −(1/z) ∑_{n=0}^{∞} 1/zⁿ.

Therefore, in the common region, 1 < |z| < 2, we have that

1/((1 − z)(2 + z)) = (1/3) [(1/2) ∑_{n=0}^{∞} (−z/2)ⁿ − ∑_{n=0}^{∞} 1/z^{n+1}]
  = ∑_{n=0}^{∞} [(−1)ⁿ/(6(2ⁿ))] zⁿ + ∑_{n=1}^{∞} (−1/3) z^{−n}. (8.41)

We note that this is not a Taylor series expansion due to the existence of terms with negative powers in the second sum.
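One way to gain confidence in such expansions (this check is not in the original text) is to compare a truncated Laurent series with the function at a point inside the annulus; the sample point and truncation level below are arbitrary choices.

% Compare the Laurent series (8.41) with f(z) at z0 = 1.5i, which lies in 1 < |z| < 2.
z0 = 1.5i;
n = 0:40;
S = sum((-1).^n./(6*2.^n).*z0.^n) - (1/3)*sum(z0.^(-(1:41)));
S - 1/((1 - z0)*(2 + z0))               % should be close to zero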


Example 8.25. Find series representations of f(z) = 1/((1 − z)(2 + z)) throughout the complex plane.
In the last example we found series representations of f(z) = 1/((1 − z)(2 + z)) in the annulus 1 < |z| < 2. However, we can also find expansions which converge for other regions. We first write

f(z) = (1/3) [1/(1 − z) + 1/(2 + z)].

We then expand each term separately.
The first fraction, 1/(1 − z), can be written as the sum of the geometric series

1/(1 − z) = ∑_{n=0}^{∞} zⁿ, |z| < 1.

This series converges inside the unit circle. We indicate this by region 1 in Figure 8.35.
In the last example, we showed that the second fraction, 1/(2 + z), has the series expansion

1/(2 + z) = 1/(2[1 − (−z/2)]) = (1/2) ∑_{n=0}^{∞} (−z/2)ⁿ,

which converges in the circle |z| < 2. This is labeled as region 2 in Figure 8.35.

Figure 8.35: Regions of convergence for Laurent expansions of f(z) = 1/((1 − z)(2 + z)).

Regions 1 and 2 intersect for |z| < 1, so we can combine these two series representations to obtain

1/((1 − z)(2 + z)) = (1/3) [∑_{n=0}^{∞} zⁿ + (1/2) ∑_{n=0}^{∞} (−z/2)ⁿ], |z| < 1.

In the annulus, 1 < |z| < 2, we had already seen in the last example that we needed a different expansion for the fraction 1/(1 − z). We looked for an expansion in powers of 1/z which would converge for large values of z. We had found that

1/(1 − z) = −1/(z(1 − 1/z)) = −(1/z) ∑_{n=0}^{∞} 1/zⁿ, |z| > 1.

This series converges in region 3 in Figure 8.35. Combining this series with the one for the second fraction, we obtain a series representation that converges in the overlap of regions 2 and 3. Thus, in the annulus 1 < |z| < 2 we have

1/((1 − z)(2 + z)) = (1/3) [(1/2) ∑_{n=0}^{∞} (−z/2)ⁿ − ∑_{n=0}^{∞} 1/z^{n+1}].

So far, we have series representations for |z| < 2. The only region not covered yet is outside this disk, |z| > 2. In Figure 8.35 we see that series 3, which converges in region 3, will converge in the last section of the complex plane. We just need one more series expansion for 1/(2 + z) for large z. Factoring out a z in the denominator, we can write this as a geometric series with r = 2/z,

1/(2 + z) = 1/(z[2/z + 1]) = (1/z) ∑_{n=0}^{∞} (−2/z)ⁿ.

This series converges for |z| > 2. Therefore, it converges in region 4 and the final series representation is

1/((1 − z)(2 + z)) = (1/3) [(1/z) ∑_{n=0}^{∞} (−2/z)ⁿ − ∑_{n=0}^{∞} 1/z^{n+1}].

8.5.5 Singularities and The Residue Theorem

In the last section we found that we could integrate functions satisfying some analyticity properties along contours without using detailed parametrizations around the contours. We can deform contours if the function is analytic in the region between the original and new contour. In this section we will extend our tools for performing contour integrals.
The integrand in the Cauchy Integral Formula was of the form g(z) = f(z)/(z − z₀), where f(z) is well behaved at z₀. The point z = z₀ is called a singularity of g(z), as g(z) is not defined there. More specifically, a singularity of f(z) is a point at which f(z) fails to be analytic.

Singularities of complex functions.

We can also classify these singularities. Typically these are isolated singularities. As we saw from the proof of the Cauchy Integral Formula, g(z) = f(z)/(z − z₀) has a Laurent series expansion about z = z₀, given by

g(z) = f(z₀)/(z − z₀) + f′(z₀) + (1/2) f″(z₀)(z − z₀) + . . . .

It is the nature of the first term that gives information about the type of singularity that g(z) has. Namely, in order to classify the singularities of f(z), we look at the principal part of the Laurent series of f(z) about z = z₀, ∑_{j=1}^{∞} b_j (z − z₀)^{−j}, which consists of the negative powers of z − z₀.

Classification of singularities.

There are three types of singularities: removable singularities, poles, and essential singularities. They are defined as follows:

1. If f(z) is bounded near z₀, then z₀ is a removable singularity.

2. If there are a finite number of terms in the principal part of the Laurent series of f(z) about z = z₀, then z₀ is called a pole.

3. If there are an infinite number of terms in the principal part of the Laurent series of f(z) about z = z₀, then z₀ is called an essential singularity.

Example 8.26. f(z) = sin z/z has a removable singularity at z = 0.
At first it looks like there is a possible singularity at z = 0, since the denominator is zero at z = 0. However, we know from the first semester of calculus that lim_{z→0} sin z/z = 1. Furthermore, we can expand sin z about z = 0 and see that

sin z/z = (1/z)(z − z³/3! + . . .) = 1 − z²/3! + . . . .

Thus, there are only nonnegative powers in the series expansion. So, z = 0 is a removable singularity.


Example 8.27. f(z) = e^z/(z − 1)ⁿ has poles at z = 1 for n a positive integer.
For n = 1 we have f(z) = e^z/(z − 1). This function has a singularity at z = 1. The series expansion is found by expanding e^z about z = 1:

f(z) = (e/(z − 1)) e^{z−1} = e/(z − 1) + e + (e/2!)(z − 1) + . . . .

Note that the principal part of the Laurent series expansion about z = 1 only has one term, e/(z − 1). Therefore, z = 1 is a pole. Since the leading term has an exponent of −1, z = 1 is called a pole of order one, or a simple pole.
For n = 2 we have f(z) = e^z/(z − 1)². The series expansion is found again by expanding e^z about z = 1:

f(z) = (e/(z − 1)²) e^{z−1} = e/(z − 1)² + e/(z − 1) + e/2! + (e/3!)(z − 1) + . . . .

Note that the principal part of the Laurent series has two terms involving (z − 1)^{−2} and (z − 1)^{−1}. Since the leading term has an exponent of −2, z = 1 is called a pole of order 2, or a double pole.

Example 8.28. f(z) = e^{1/z} has an essential singularity at z = 0.
In this case we have the series expansion about z = 0 given by

f(z) = e^{1/z} = ∑_{n=0}^{∞} (1/z)ⁿ/n! = ∑_{n=0}^{∞} (1/n!) z^{−n}.

We see that there are an infinite number of terms in the principal part of the Laurent series. So, this function has an essential singularity at z = 0.

In the above examples we have seen poles of order one (a simple pole) and two (a double pole). In general, we can say that f(z) has a pole of order k at z₀ if and only if (z − z₀)^k f(z) has a removable singularity at z₀, but (z − z₀)^{k−1} f(z) for k > 0 does not.

Example 8.29. Determine the order of the pole of f(z) = cot z csc z at z = 0.
First we rewrite f(z) in terms of sines and cosines,

f(z) = cot z csc z = cos z/sin² z.

We note that the denominator vanishes at z = 0.
How do we know that the pole is not a simple pole? Well, we check to see if (z − 0) f(z) has a removable singularity at z = 0:

lim_{z→0} (z − 0) f(z) = lim_{z→0} z cos z/sin² z = (lim_{z→0} z/sin z)(lim_{z→0} cos z/sin z) = lim_{z→0} cos z/sin z. (8.42)

We see that this limit is undefined. So, now we check to see if (z − 0)² f(z) has a removable singularity at z = 0:

lim_{z→0} (z − 0)² f(z) = lim_{z→0} z² cos z/sin² z = (lim_{z→0} z/sin z)(lim_{z→0} z cos z/sin z) = lim_{z→0} (z/sin z) cos(0) = 1. (8.43)

In this case, we have obtained a finite, nonzero result. So, z = 0 is a pole of order 2.
We could have also relied on series expansions. Expanding both the sine and cosine functions in a Taylor series expansion, we have

f(z) = cos z/sin² z = (1 − (1/2!)z² + . . .)/(z − (1/3!)z³ + . . .)².

Factoring a z from the expansion in the denominator,

f(z) = (1/z²) (1 − (1/2!)z² + . . .)/(1 − (1/3!)z² + . . .)² = (1/z²)(1 + O(z²)),

we can see that the leading term will be a 1/z², indicating a pole of order 2.

We will see how knowledge of the poles of a function can aid in the computation of contour integrals. We now show that if a function, f(z), has a pole of order k, then

∮_C f(z) dz = 2πi Res[f(z); z₀],

where we have defined Res[f(z); z₀] as the residue of f(z) at z = z₀. In particular, for a pole of order k the residue is given by

Residues - Poles of order k

Res[f(z); z₀] = lim_{z→z₀} (1/(k − 1)!) d^{k−1}/dz^{k−1} [(z − z₀)^k f(z)]. (8.44)

Proof. Let φ(z) = (z − z₀)^k f(z) be an analytic function. Then φ(z) has a Taylor series expansion about z₀. As we had seen in the last section, we can write the integral representation of any derivative of φ as

φ^{(k−1)}(z₀) = ((k − 1)!/2πi) ∮_C φ(z)/(z − z₀)^k dz.

Inserting the definition of φ(z), we then have

φ^{(k−1)}(z₀) = ((k − 1)!/2πi) ∮_C f(z) dz.

Solving for the integral, we have the result

∮_C f(z) dz = (2πi/(k − 1)!) d^{k−1}/dz^{k−1} [(z − z₀)^k f(z)]|_{z=z₀} ≡ 2πi Res[f(z); z₀]. (8.45)


Note: If z₀ is a simple pole, the residue is easily computed as

Res[f(z); z₀] = lim_{z→z₀} (z − z₀) f(z).

In fact, one can show (Problem 18) that for g and h analytic functions at z₀, with g(z₀) ≠ 0, h(z₀) = 0, and h′(z₀) ≠ 0,

Res[g(z)/h(z); z₀] = g(z₀)/h′(z₀).

Example 8.30. Find the residues of f(z) = (z − 1)/((z + 1)²(z² + 4)).
f(z) has poles at z = −1, z = 2i, and z = −2i. The pole at z = −1 is a double pole (pole of order 2). The other poles are simple poles. We compute those residues first:

Res[f(z); 2i] = lim_{z→2i} (z − 2i) (z − 1)/((z + 1)²(z + 2i)(z − 2i)) = lim_{z→2i} (z − 1)/((z + 1)²(z + 2i)) = (2i − 1)/((2i + 1)²(4i)) = −1/50 − (11/100)i. (8.46)

Res[f(z); −2i] = lim_{z→−2i} (z + 2i) (z − 1)/((z + 1)²(z + 2i)(z − 2i)) = lim_{z→−2i} (z − 1)/((z + 1)²(z − 2i)) = (−2i − 1)/((−2i + 1)²(−4i)) = −1/50 + (11/100)i. (8.47)

For the double pole, we have to do a little more work:

Res[f(z); −1] = lim_{z→−1} d/dz [(z + 1)² (z − 1)/((z + 1)²(z² + 4))]
  = lim_{z→−1} d/dz [(z − 1)/(z² + 4)]
  = lim_{z→−1} (z² + 4 − 2z(z − 1))/(z² + 4)²
  = lim_{z→−1} (−z² + 2z + 4)/(z² + 4)²
  = 1/25. (8.48)

Example 8.31. Find the residue of f(z) = cot z at z = 0.
We write f(z) = cot z = cos z/sin z and note that z = 0 is a simple pole. Thus,

Res[cot z; z = 0] = lim_{z→0} z cos z/sin z = cos(0) = 1.

The residue of f(z) at z₀ is the coefficient of the (z − z₀)^{−1} term, c₋₁ = b₁, of the Laurent series expansion about z₀.

Another way to find the residue of a function f(z) at a singularity z₀ is to look at the Laurent series expansion about the singularity. This is because the residue of f(z) at z₀ is the coefficient of the (z − z₀)^{−1} term, or c₋₁ = b₁.


Example 8.32. Find the residue of f(z) = 1/(z(3 − z)) at z = 0 using a Laurent series expansion.
First, we need the Laurent series expansion about z = 0 of the form ∑_{−∞}^{∞} c_n zⁿ. A partial fraction expansion gives

f(z) = 1/(z(3 − z)) = (1/3)(1/z + 1/(3 − z)).

The first term is a power of z. The second term needs to be written as a convergent series for small z. This is given by

1/(3 − z) = 1/(3(1 − z/3)) = (1/3) ∑_{n=0}^{∞} (z/3)ⁿ. (8.49)

Thus, we have found

f(z) = (1/3)(1/z + (1/3) ∑_{n=0}^{∞} (z/3)ⁿ).

The coefficient of z^{−1} can be read off to give Res[f(z); z = 0] = 1/3.

Example 8.33. Find the residue of f(z) = z cos(1/z) at z = 0 using a Laurent series expansion.

Finding the residue at an essential singularity.

In this case z = 0 is an essential singularity. The only way to find residues at essential singularities is to use Laurent series. Since

cos z = 1 − (1/2!)z² + (1/4!)z⁴ − (1/6!)z⁶ + . . . ,

then we have

f(z) = z(1 − 1/(2!z²) + 1/(4!z⁴) − 1/(6!z⁶) + . . .) = z − 1/(2!z) + 1/(4!z³) − 1/(6!z⁵) + . . . . (8.50)

From the second term we have that Res[f(z); z = 0] = −1/2.

We are now ready to use residues in order to evaluate integrals.

Figure 8.36: Contour for computing ∮_{|z|=1} dz/sin z.

Example 8.34. Evaluate ∮_{|z|=1} dz/sin z.
We begin by looking for the singularities of the integrand. These are located at values of z for which sin z = 0. Thus, z = 0, ±π, ±2π, . . . are the singularities. However, only z = 0 lies inside the contour, as shown in Figure 8.36. We note further that z = 0 is a simple pole, since

lim_{z→0} (z − 0) (1/sin z) = 1.

Therefore, the residue is one and we have

∮_{|z|=1} dz/sin z = 2πi.


In general, we could have several poles of different orders. For example, we will be computing

∮_{|z|=2} dz/(z² − 1).

The integrand has singularities at z² − 1 = 0, or z = ±1. Both poles are inside the contour, as seen in Figure 8.38. One could do a partial fraction decomposition and have two integrals, each with one pole. Then, the result could be found by adding the residues from each pole.
In general, when there are several poles, we can use the Residue Theorem.

Theorem 8.9. Let f (z) be a function which has poles zj, j = 1, . . . , N inside asimple closed contour C and no other singularities in this region. Then,

∮C

f (z) dz = 2πiN

∑j=1

Res[ f (z); zj], (8.51)

where the residues are computed using Equation (8.44),

Res[ f (z); z0] = limz→z0

1(k− 1)!

dk−1

dzk−1

[(z− z0)

k f (z)]

.C

C2

C1

C2

Figure 8.37: A depiction of how onecuts out poles to prove that the inte-gral around C is the sum of the integralsaround circles with the poles at the cen-ter of each.

The proof of this theorem is based upon the contours shown in Figure 8.37. One constructs a new contour C′ by encircling each pole, as shown in the figure. Then one connects a path from C to each circle. In the figure two separated paths along the cut are shown only to indicate the direction followed on the cut. The new contour is then obtained by following C and crossing each cut as it is encountered. Then one goes around a circle in the negative sense and returns along the cut to proceed around C. The sum of the contributions to the contour integration involves two integrals for each cut, which will cancel due to the opposing directions. Thus, we are left with

∮_{C′} f(z) dz = ∮_C f(z) dz − ∮_{C₁} f(z) dz − ∮_{C₂} f(z) dz − ∮_{C₃} f(z) dz = 0.

Of course, the sum is zero because f(z) is analytic in the enclosed region, since all singularities have been cut out. Solving for ∮_C f(z) dz, one has that this integral is the sum of the integrals around the separate poles, which can be evaluated with single residue computations. Thus, the result is that ∮_C f(z) dz is 2πi times the sum of the residues.

Example 8.35. Evaluate ∮_{|z|=2} dz/(z² − 1).
We first note that there are two poles in this integral since

1/(z² − 1) = 1/((z − 1)(z + 1)).

In Figure 8.38 we plot the contour and the two poles, denoted by an "x." Since both poles are inside the contour, we need to compute the residues for each one. They are each simple poles, so we have

Res[1/(z² − 1); z = 1] = lim_{z→1} (z − 1) 1/(z² − 1) = lim_{z→1} 1/(z + 1) = 1/2, (8.52)

and

Res[1/(z² − 1); z = −1] = lim_{z→−1} (z + 1) 1/(z² − 1) = lim_{z→−1} 1/(z − 1) = −1/2. (8.53)

Figure 8.38: Contour for computing ∮_{|z|=2} dz/(z² − 1).

Then,

∮_{|z|=2} dz/(z² − 1) = 2πi(1/2 − 1/2) = 0.

Example 8.36. Evaluate ∮_{|z|=3} (z² + 1)/((z − 1)²(z + 2)) dz.
In this example there are two poles, z = 1 and z = −2, inside the contour. [See Figure 8.39.] z = 1 is a second order pole and z = −2 is a simple pole. Therefore, we need to compute the residues at each pole of f(z) = (z² + 1)/((z − 1)²(z + 2)):

Res[f(z); z = 1] = lim_{z→1} (1/1!) d/dz [(z − 1)² (z² + 1)/((z − 1)²(z + 2))] = lim_{z→1} (z² + 4z − 1)/(z + 2)² = 4/9. (8.54)

Res[f(z); z = −2] = lim_{z→−2} (z + 2) (z² + 1)/((z − 1)²(z + 2)) = lim_{z→−2} (z² + 1)/(z − 1)² = 5/9. (8.55)

Figure 8.39: Contour for computing ∮_{|z|=3} (z² + 1)/((z − 1)²(z + 2)) dz.

The evaluation of the integral is found by computing 2πi times the sum of the residues:

∮_{|z|=3} (z² + 1)/((z − 1)²(z + 2)) dz = 2πi(4/9 + 5/9) = 2πi.

Example 8.37. Compute ∮_{|z|=2} z³e^{2/z} dz.

In this case, z = 0 is an essential singularity and is inside the contour. A Laurent series expansion about z = 0 gives

\[
z^3 e^{2/z} = z^3\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{2}{z}\right)^n = \sum_{n=0}^{\infty}\frac{2^n}{n!}\,z^{3-n} = z^3 + 2z^2 + \frac{2^2}{2!}z + \frac{2^3}{3!} + \frac{2^4}{4!}\,\frac{1}{z} + \ldots. \tag{8.56}
\]

The residue is the coefficient of z^{-1}, or Res[z³e^{2/z}; z = 0] = 2⁴/4! = 2/3. Therefore,

\[
\oint_{|z|=2} z^3 e^{2/z}\, dz = 2\pi i\,\mathrm{Res}[z^3 e^{2/z}; z=0] = \frac{4}{3}\pi i.
\]
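Since the whole answer rests on a single Laurent coefficient, an independent check is worthwhile. The coefficient of z^{-1} in z³e^{2/z} equals the coefficient of w⁴ in e^{2w} (set w = 1/z); a SymPy sketch of that computation (assumed only as an illustration):

from sympy import symbols, exp, I, pi

w = symbols('w')

# Coefficient of w**4 in e^{2w} equals the coefficient of z**(-1) in z**3*e^{2/z}
c = exp(2*w).series(w, 0, 6).removeO().coeff(w, 4)
print(c)            # 2/3, i.e., 2**4/4!
print(2*pi*I*c)     # 4*pi*I/3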

Example 8.38. Evaluate ∫₀^{2π} dθ/(2 + cos θ).

Here we have a real integral in which there are no signs of complex functions. In fact, we could apply simpler methods from a calculus course to do this integral, attempting to write 1 + cos θ = 2 cos²(θ/2). However, we do not get very far.

One trick, useful in computing integrals whose integrand is in the form f(cos θ, sin θ), is to transform the integration to the complex plane through the transformation z = e^{iθ}. Then,

\[
\cos\theta = \frac{e^{i\theta}+e^{-i\theta}}{2} = \frac{1}{2}\left(z+\frac{1}{z}\right),
\]
\[
\sin\theta = \frac{e^{i\theta}-e^{-i\theta}}{2i} = -\frac{i}{2}\left(z-\frac{1}{z}\right).
\]

Computation of integrals of functions of sines and cosines, f(cos θ, sin θ).

Under this transformation, z = e^{iθ}, the integration now takes place around the unit circle in the complex plane. Noting that dz = ie^{iθ} dθ = iz dθ, we have

\[
\begin{aligned}
\int_0^{2\pi}\frac{d\theta}{2+\cos\theta} &= \oint_{|z|=1}\frac{\frac{dz}{iz}}{2+\frac{1}{2}\left(z+\frac{1}{z}\right)} \\
&= -i\oint_{|z|=1}\frac{dz}{2z+\frac{1}{2}(z^2+1)} \\
&= -2i\oint_{|z|=1}\frac{dz}{z^2+4z+1}.
\end{aligned}\tag{8.57}
\]

Figure 8.40: Contour for computing ∫₀^{2π} dθ/(2 + cos θ).

We can apply the Residue Theorem to the resulting integral. The singularities occur at the roots of z² + 4z + 1 = 0. Using the quadratic formula, we have the roots z = −2 ± √3.

The location of these poles is shown in Figure 8.40. Only z = −2 + √3 lies inside the integration contour. We will therefore need the residue of f(z) = −2i/(z² + 4z + 1) at this simple pole:

\[
\begin{aligned}
\mathrm{Res}[f(z); z=-2+\sqrt{3}] &= \lim_{z\to -2+\sqrt{3}}\bigl(z-(-2+\sqrt{3})\bigr)\frac{-2i}{z^2+4z+1} \\
&= -2i\lim_{z\to -2+\sqrt{3}}\frac{z-(-2+\sqrt{3})}{\bigl(z-(-2+\sqrt{3})\bigr)\bigl(z-(-2-\sqrt{3})\bigr)} \\
&= -2i\lim_{z\to -2+\sqrt{3}}\frac{1}{z-(-2-\sqrt{3})} \\
&= \frac{-2i}{-2+\sqrt{3}-(-2-\sqrt{3})} = \frac{-i}{\sqrt{3}} = \frac{-i\sqrt{3}}{3}.
\end{aligned}\tag{8.58}
\]


Therefore, we have

\[
\int_0^{2\pi}\frac{d\theta}{2+\cos\theta} = -2i\oint_{|z|=1}\frac{dz}{z^2+4z+1} = 2\pi i\left(\frac{-i\sqrt{3}}{3}\right) = \frac{2\pi\sqrt{3}}{3}. \tag{8.59}
\]
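Both the real integral and the residue computation can be checked symbolically. A SymPy sketch (assumed here only as an optional verification of the value 2π√3/3):

from sympy import symbols, integrate, residue, cos, sqrt, pi, I, simplify

theta, z = symbols('theta z')

# Direct evaluation of the real integral
print(simplify(integrate(1/(2 + cos(theta)), (theta, 0, 2*pi))))   # 2*sqrt(3)*pi/3

# Residue of -2i/(z**2 + 4z + 1) at the enclosed pole z = -2 + sqrt(3)
r = residue(-2*I/(z**2 + 4*z + 1), z, -2 + sqrt(3))
print(simplify(2*pi*I*r))                                           # 2*sqrt(3)*pi/3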

Before moving on to further applications, we note that there is another way to compute the integral in the last example. Karl Theodor Wilhelm Weierstraß (1815-1897) introduced a substitution method for computing integrals involving rational functions of sine and cosine. One makes the substitution t = tan(θ/2) and converts the integrand into a rational function of t. One can show that this substitution implies that

The Weierstraß substitution method.

\[
\sin\theta = \frac{2t}{1+t^2}, \qquad \cos\theta = \frac{1-t^2}{1+t^2},
\]

and

\[
d\theta = \frac{2\,dt}{1+t^2}.
\]

The details are left for Problem 8, where you will apply the method. In order to see how it works, we will redo the last problem.

Example 8.39. Apply the Weierstraß substitution method to compute ∫₀^{2π} dθ/(2 + cos θ).

\[
\begin{aligned}
\int_0^{2\pi}\frac{d\theta}{2+\cos\theta} &= \int_{-\infty}^{\infty}\frac{1}{2+\frac{1-t^2}{1+t^2}}\,\frac{2\,dt}{1+t^2} \\
&= 2\int_{-\infty}^{\infty}\frac{dt}{t^2+3} \\
&= \frac{2\sqrt{3}}{3}\left[\tan^{-1}\left(\frac{\sqrt{3}}{3}\,t\right)\right]_{-\infty}^{\infty} = \frac{2\pi\sqrt{3}}{3}.
\end{aligned}\tag{8.60}
\]

8.5.6 Infinite Integrals

Infinite integrals of the form ∫_{−∞}^{∞} f(x) dx occur often in physics. They can represent wave packets, wave diffraction, Fourier transforms, and arise in other applications. In this section we will see that such integrals may be computed by extending the integration to a contour in the complex plane.

Recall from your calculus experience that these integrals are improper integrals, and the way that one determines if improper integrals exist, or converge, is to carefully compute these integrals using limits such as

\[
\int_{-\infty}^{\infty} f(x)\, dx = \lim_{R\to\infty}\int_{-R}^{R} f(x)\, dx.
\]

For example, we evaluate the integral of f(x) = x as

\[
\int_{-\infty}^{\infty} x\, dx = \lim_{R\to\infty}\int_{-R}^{R} x\, dx = \lim_{R\to\infty}\left(\frac{R^2}{2}-\frac{(-R)^2}{2}\right) = 0.
\]


One might also be tempted to carry out this integration by splitting the integration interval, (−∞, 0] ∪ [0, ∞). However, the integrals ∫₀^∞ x dx and ∫_{−∞}^0 x dx do not exist. A simple computation confirms this:

\[
\int_0^{\infty} x\, dx = \lim_{R\to\infty}\int_0^R x\, dx = \lim_{R\to\infty}\left(\frac{R^2}{2}\right) = \infty.
\]

Therefore,

\[
\int_{-\infty}^{\infty} f(x)\, dx = \int_{-\infty}^{0} f(x)\, dx + \int_0^{\infty} f(x)\, dx
\]

does not exist while lim_{R→∞} ∫_{−R}^{R} f(x) dx does exist. We will be interested in computing the latter type of integral. Such an integral is called the Cauchy Principal Value Integral and is denoted with either a P, PV, or a bar through the integral:

The Cauchy principal value integral.

\[
P\int_{-\infty}^{\infty} f(x)\, dx = PV\int_{-\infty}^{\infty} f(x)\, dx = \lim_{R\to\infty}\int_{-R}^{R} f(x)\, dx. \tag{8.61}
\]

If there is a discontinuity in the integrand, one can further modify this definition of the principal value integral to bypass the singularity. For example, if f(x) is continuous on a ≤ x ≤ b and not defined at x = x₀ ∈ [a, b], then

\[
\int_a^b f(x)\, dx = \lim_{\varepsilon\to 0}\left(\int_a^{x_0-\varepsilon} f(x)\, dx + \int_{x_0+\varepsilon}^{b} f(x)\, dx\right).
\]

In our discussions we will be computing integrals over the real line in the Cauchy principal value sense.

Example 8.40. Compute ∫_{−1}^{1} dx/x³ in the Cauchy Principal Value sense.

In this case, f(x) = 1/x³ is not defined at x = 0. So, we have

\[
\int_{-1}^{1}\frac{dx}{x^3} = \lim_{\varepsilon\to 0}\left(\int_{-1}^{-\varepsilon}\frac{dx}{x^3} + \int_{\varepsilon}^{1}\frac{dx}{x^3}\right) = \lim_{\varepsilon\to 0}\left(-\frac{1}{2x^2}\Big|_{-1}^{-\varepsilon} - \frac{1}{2x^2}\Big|_{\varepsilon}^{1}\right) = 0. \tag{8.62}
\]
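The symmetric ε-limit in this example can be carried out mechanically. A SymPy sketch of that check (assumed here only as an illustration, computing the two one-sided integrals and letting ε → 0⁺):

from sympy import symbols, integrate, limit

x = symbols('x')
eps = symbols('epsilon', positive=True)

# Principal value of int_{-1}^{1} dx/x**3 as a symmetric limit
left = integrate(1/x**3, (x, -1, -eps))
right = integrate(1/x**3, (x, eps, 1))
print(limit(left + right, eps, 0, '+'))   # 0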

We now proceed to the evaluation of principal value integrals using complex integration methods. We want to evaluate the integral ∫_{−∞}^{∞} f(x) dx. We will extend this into an integration in the complex plane. We extend f(x) to f(z) and assume that f(z) is analytic in the upper half plane (Im(z) > 0) except at isolated poles. We then consider the integral ∫_{−R}^{R} f(x) dx as an integral over the interval (−R, R). We view this interval as a piece of a larger contour C_R obtained by completing the contour with a semicircle Γ_R of radius R extending into the upper half plane, as shown in Figure 8.41. Note, a similar construction is sometimes needed, extending the integration into the lower half plane (Im(z) < 0), as we will later see.

Computation of real integrals by embedding the problem in the complex plane.

Figure 8.41: Contours for computing P∫_{−∞}^{∞} f(x) dx.

The integral around the entire contour C_R can be computed using the Residue Theorem and is related to integrations over the pieces of the contour by

\[
\oint_{C_R} f(z)\, dz = \int_{\Gamma_R} f(z)\, dz + \int_{-R}^{R} f(z)\, dz. \tag{8.63}
\]


Taking the limit R → ∞ and noting that the integral over (−R, R) is the desired integral, we have

\[
P\int_{-\infty}^{\infty} f(x)\, dx = \oint_C f(z)\, dz - \lim_{R\to\infty}\int_{\Gamma_R} f(z)\, dz, \tag{8.64}
\]

where we have identified C as the limiting contour as R gets large.

Now the key to carrying out the integration is that the second integral vanishes in the limit. This is true if R|f(z)| → 0 along Γ_R as R → ∞. This can be seen by the following argument. We parametrize the contour Γ_R using z = Re^{iθ}, θ ∈ [0, π]. Then, when |f(z)| < M(R),

\[
\begin{aligned}
\left|\int_{\Gamma_R} f(z)\, dz\right| &= \left|\int_0^{\pi} f(Re^{i\theta})\, iRe^{i\theta}\, d\theta\right| \\
&\le R\int_0^{\pi}\left|f(Re^{i\theta})\right| d\theta \\
&< RM(R)\int_0^{\pi} d\theta = \pi R M(R).
\end{aligned}\tag{8.65}
\]

So, if lim_{R→∞} RM(R) = 0, then lim_{R→∞} ∫_{Γ_R} f(z) dz = 0.

We now demonstrate how to use complex integration methods in evaluating integrals over real-valued functions.

Example 8.41. Evaluate ∫_{−∞}^{∞} dx/(1 + x²).

We already know how to do this integral using calculus without complex analysis. We have that

\[
\int_{-\infty}^{\infty}\frac{dx}{1+x^2} = \lim_{R\to\infty}\left(2\tan^{-1} R\right) = 2\left(\frac{\pi}{2}\right) = \pi.
\]

We will apply the methods of this section and confirm this result. The needed contours are shown in Figure 8.42 and the poles of the integrand are at z = ±i. We first write the integral over the bounded contour C_R as the sum of an integral from −R to R along the real axis plus the integral over the semicircular arc in the upper half complex plane,

\[
\int_{C_R}\frac{dz}{1+z^2} = \int_{-R}^{R}\frac{dx}{1+x^2} + \int_{\Gamma_R}\frac{dz}{1+z^2}.
\]

Next, we let R get large.

Figure 8.42: Contour for computing ∫_{−∞}^{∞} dx/(1 + x²).

We first note that f(z) = 1/(1 + z²) goes to zero fast enough on Γ_R as R gets large:

\[
R|f(z)| = \frac{R}{|1+R^2e^{2i\theta}|} = \frac{R}{\sqrt{1+2R^2\cos 2\theta + R^4}}.
\]

Thus, as R → ∞, R|f(z)| → 0 and C_R → C. So,

\[
\int_{-\infty}^{\infty}\frac{dx}{1+x^2} = \oint_C\frac{dz}{1+z^2}.
\]

We need only compute the residue at the enclosed pole, z = i:

\[
\mathrm{Res}[f(z); z=i] = \lim_{z\to i}(z-i)\frac{1}{1+z^2} = \lim_{z\to i}\frac{1}{z+i} = \frac{1}{2i}.
\]

Then, using the Residue Theorem, we have

\[
\int_{-\infty}^{\infty}\frac{dx}{1+x^2} = 2\pi i\left(\frac{1}{2i}\right) = \pi.
\]
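The same two steps, the real integral and the single residue at z = i, can be checked with a short SymPy sketch (assumed here only as an optional verification):

from sympy import symbols, integrate, residue, oo, I, pi

x, z = symbols('x z')

# Direct evaluation over the real line
print(integrate(1/(1 + x**2), (x, -oo, oo)))   # pi

# Residue at the pole z = i enclosed by the upper semicircular contour
r = residue(1/(1 + z**2), z, I)
print(2*pi*I*r)                                 # pi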

Example 8.42. Evaluate P∫_{−∞}^{∞} (sin x/x) dx.

For this example, once the sine is rewritten in terms of complex exponentials, the integrand has a pole at z = 0. Constructing the contours as before, we are faced for the first time with a pole lying on the contour. We cannot ignore this fact. We can proceed with the computation by carefully going around the pole with a small semicircle of radius ε, as shown in Figure 8.43. Then the principal value integral computation becomes

\[
P\int_{-\infty}^{\infty}\frac{\sin x}{x}\, dx = \lim_{\varepsilon\to 0,\, R\to\infty}\left(\int_{-R}^{-\varepsilon}\frac{\sin x}{x}\, dx + \int_{\varepsilon}^{R}\frac{\sin x}{x}\, dx\right). \tag{8.66}
\]

We will also need to rewrite the sine function in terms of exponentials in this integral. There are two approaches that we could take. First, we could employ the definition of the sine function in terms of complex exponentials. This gives two integrals to compute:

\[
P\int_{-\infty}^{\infty}\frac{\sin x}{x}\, dx = \frac{1}{2i}\left(P\int_{-\infty}^{\infty}\frac{e^{ix}}{x}\, dx - P\int_{-\infty}^{\infty}\frac{e^{-ix}}{x}\, dx\right). \tag{8.67}
\]

The other approach would be to realize that the sine function is the imaginary part of an exponential, Im e^{ix} = sin x. Then, we would have

\[
P\int_{-\infty}^{\infty}\frac{\sin x}{x}\, dx = \mathrm{Im}\left(P\int_{-\infty}^{\infty}\frac{e^{ix}}{x}\, dx\right). \tag{8.68}
\]

Figure 8.43: Contour for computing P∫_{−∞}^{∞} (sin x/x) dx.

We first consider P∫_{−∞}^{∞} (e^{ix}/x) dx, which is common to both approaches. We use the contour in Figure 8.43. Then we have

\[
\oint_{C_R}\frac{e^{iz}}{z}\, dz = \int_{\Gamma_R}\frac{e^{iz}}{z}\, dz + \int_{-R}^{-\varepsilon}\frac{e^{iz}}{z}\, dz + \int_{C_{\varepsilon}}\frac{e^{iz}}{z}\, dz + \int_{\varepsilon}^{R}\frac{e^{iz}}{z}\, dz.
\]

The integral ∮_{C_R} (e^{iz}/z) dz vanishes since there are no poles enclosed in the contour! The sum of the second and fourth integrals gives the integral we seek as ε → 0 and R → ∞. The integral over Γ_R will vanish as R gets large according to Jordan's Lemma.

Jordan's Lemma gives conditions on when integrals over Γ_R will vanish as R gets large. We state a version of Jordan's Lemma here for reference; a proof is given at the end of this chapter.

Jordan's Lemma

If f(z) converges uniformly to zero as z → ∞, then

\[
\lim_{R\to\infty}\int_{C_R} f(z)e^{ikz}\, dz = 0,
\]

where k > 0 and C_R is the upper half of the circle |z| = R.


A similar result applies for k < 0, but one closes the contour in the lower half plane. [See Section 8.5.8 for the proof of Jordan's Lemma.]

The remaining integral around the small semicircular arc has to be done separately. We have

\[
\int_{C_{\varepsilon}}\frac{e^{iz}}{z}\, dz = \int_{\pi}^{0}\frac{\exp(i\varepsilon e^{i\theta})}{\varepsilon e^{i\theta}}\, i\varepsilon e^{i\theta}\, d\theta = -\int_0^{\pi} i\exp(i\varepsilon e^{i\theta})\, d\theta.
\]

Taking the limit as ε goes to zero, the integrand goes to i and we have

\[
\lim_{\varepsilon\to 0}\int_{C_{\varepsilon}}\frac{e^{iz}}{z}\, dz = -\pi i.
\]

Note that we have not previously done integrals in which a singularity lies on the contour. One can show, as in this example, that points on the contour can be accounted for by using half of a residue (times 2πi). For the semicircle C_ε you can verify this. The negative sign comes from going clockwise around the semicircle.

So far, we have that

\[
P\int_{-\infty}^{\infty}\frac{e^{ix}}{x}\, dx = -\lim_{\varepsilon\to 0}\int_{C_{\varepsilon}}\frac{e^{iz}}{z}\, dz = \pi i.
\]

At this point we can get the answer using the second approach in Equation (8.68). Namely,

\[
P\int_{-\infty}^{\infty}\frac{\sin x}{x}\, dx = \mathrm{Im}\left(P\int_{-\infty}^{\infty}\frac{e^{ix}}{x}\, dx\right) = \mathrm{Im}(\pi i) = \pi. \tag{8.69}
\]

It is instructive to carry out the first approach in Equation (8.67). We will need to compute P∫_{−∞}^{∞} (e^{−ix}/x) dx. This is done in a similar fashion to the above computation, being careful with the sign changes due to the orientations of the contours, as shown in Figure 8.44.

We note that the contour is closed in the lower half plane. This is because k < 0 in the application of Jordan's Lemma. One can understand why this is the case from the following observation. Consider the exponential in Jordan's Lemma. Let z = z_R + iz_I. Then,

\[
e^{ikz} = e^{ik(z_R+iz_I)} = e^{-kz_I}e^{ikz_R}.
\]

As |z| gets large, the second factor just oscillates. The first factor would go to zero if kz_I > 0. So, if k > 0, we would close the contour in the upper half plane. If k < 0, then we would close the contour in the lower half plane. In the current computation, k = −1, so we use the lower half plane.

Working out the details, keeping track of the change in orientation of the contour, we find that

\[
P\int_{-\infty}^{\infty}\frac{e^{-ix}}{x}\, dx = -\pi i.
\]

Figure 8.44: Contour in the lower half plane for computing P∫_{−∞}^{∞} (e^{−ix}/x) dx.

Finally, we can compute the original integral as

\[
\begin{aligned}
P\int_{-\infty}^{\infty}\frac{\sin x}{x}\, dx &= \frac{1}{2i}\left(P\int_{-\infty}^{\infty}\frac{e^{ix}}{x}\, dx - P\int_{-\infty}^{\infty}\frac{e^{-ix}}{x}\, dx\right) \\
&= \frac{1}{2i}\bigl(\pi i - (-\pi i)\bigr) = \pi.
\end{aligned}\tag{8.70}
\]

This is the same result as we obtained using Equation (8.68).
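As a sanity check of the final answer, SymPy can evaluate this improper integral directly; the following sketch is assumed here only as an illustration:

from sympy import symbols, integrate, sin, oo

x = symbols('x')

# SymPy evaluates this improper integral over the whole real line
print(integrate(sin(x)/x, (x, -oo, oo)))   # pi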


Example 8.43. Evaluate ∮_{|z|=1} dz/(z² + 1).

In this example there are two simple poles, z = ±i, lying on the contour, as seen in Figure 8.45. This problem is similar to Problem 1c, except we will do it using contour integration instead of a parametrization. We bypass the two poles by drawing small semicircles around them. Since the poles are not included in the closed contour, the Residue Theorem tells us that the integral over the path vanishes. We can write the full integration as a sum over three paths, C_± for the semicircles and C for the original contour with the poles cut out. Then we take the limit as the semicircle radii go to zero. So,

\[
0 = \int_C\frac{dz}{z^2+1} + \int_{C_+}\frac{dz}{z^2+1} + \int_{C_-}\frac{dz}{z^2+1}.
\]

Figure 8.45: Example with poles on the contour.

The integral over the semicircle around i can be done using the parametrization z = i + εe^{iθ}. Then z² + 1 = 2iεe^{iθ} + ε²e^{2iθ}. This gives

\[
\int_{C_+}\frac{dz}{z^2+1} = \lim_{\varepsilon\to 0}\int_0^{-\pi}\frac{i\varepsilon e^{i\theta}}{2i\varepsilon e^{i\theta}+\varepsilon^2 e^{2i\theta}}\, d\theta = \frac{1}{2}\int_0^{-\pi} d\theta = -\frac{\pi}{2}.
\]

As in the last example, we note that this is just πi times the residue, Res[1/(z² + 1); z = i] = 1/(2i). Since the path is traced clockwise, we find the contribution is −πi Res = −π/2, which is what we obtained above. A similar computation will give the contribution from z = −i as π/2. Adding these values gives the total contribution from C_± as zero. So, the final result is that

\[
\oint_{|z|=1}\frac{dz}{z^2+1} = 0.
\]

Example 8.44. Evaluate ∫_{−∞}^{∞} e^{ax}/(1 + e^x) dx, for 0 < a < 1.

In dealing with integrals involving exponentials or hyperbolic functions it is sometimes useful to use different types of contours. This example is one such case. We will replace x with z and integrate over the contour in Figure 8.46. Letting R → ∞, the integral along the real axis is the integral that we desire. The integral along the path for y = 2π leads to a multiple of this integral since z = x + 2πi along this path. Integration along the vertical paths vanishes as R → ∞. This is captured in the following integrals:

\[
\begin{aligned}
\oint_{C_R}\frac{e^{az}}{1+e^z}\, dz = &\int_{-R}^{R}\frac{e^{ax}}{1+e^x}\, dx + \int_0^{2\pi}\frac{e^{a(R+iy)}}{1+e^{R+iy}}\, dy \\
&+ \int_{R}^{-R}\frac{e^{a(x+2\pi i)}}{1+e^{x+2\pi i}}\, dx + \int_{2\pi}^{0}\frac{e^{a(-R+iy)}}{1+e^{-R+iy}}\, dy.
\end{aligned}\tag{8.71}
\]

Figure 8.46: Example using a rectangular contour.

We can now let R → ∞. For large R the second integral decays as e^{(a−1)R} and the fourth integral decays as e^{−aR}. Thus, we are left with

\[
\begin{aligned}
\oint_C\frac{e^{az}}{1+e^z}\, dz &= \lim_{R\to\infty}\left(\int_{-R}^{R}\frac{e^{ax}}{1+e^x}\, dx - e^{2\pi i a}\int_{-R}^{R}\frac{e^{ax}}{1+e^x}\, dx\right) \\
&= (1-e^{2\pi i a})\int_{-\infty}^{\infty}\frac{e^{ax}}{1+e^x}\, dx.
\end{aligned}\tag{8.72}
\]

We need only evaluate the left contour integral using the Residue Theorem. The poles are found from

\[
1 + e^z = 0.
\]

Within the contour, this is satisfied by z = iπ. So,

\[
\mathrm{Res}\left[\frac{e^{az}}{1+e^z}; z=i\pi\right] = \lim_{z\to i\pi}(z-i\pi)\frac{e^{az}}{1+e^z} = -e^{i\pi a}.
\]

Applying the Residue Theorem, we have

\[
(1-e^{2\pi i a})\int_{-\infty}^{\infty}\frac{e^{ax}}{1+e^x}\, dx = -2\pi i e^{i\pi a}.
\]

Therefore, we have found that

\[
\int_{-\infty}^{\infty}\frac{e^{ax}}{1+e^x}\, dx = \frac{-2\pi i e^{i\pi a}}{1-e^{2\pi i a}} = \frac{\pi}{\sin\pi a}, \qquad 0 < a < 1.
\]
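A quick numerical spot check of this formula is possible with Python's mpmath library (assumed here purely as an illustration), comparing high-precision quadrature for one particular value, say a = 1/3, against π/sin(πa):

from mpmath import mp, quad, exp, pi, sin, inf

mp.dps = 30   # working precision in decimal digits

a = mp.mpf(1)/3
numeric = quad(lambda x: exp(a*x)/(1 + exp(x)), [-inf, inf])
exact = pi/sin(pi*a)
print(numeric, exact, abs(numeric - exact))   # the difference should be tiny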

8.5.7 Integration Over Multivalued Functions

We have seen that some complex functions inherently possess multivaluedness; i.e., such "functions" do not evaluate to a single value, but have many values. The key examples were f(z) = z^{1/n} and f(z) = ln z. The nth roots have n distinct values and logarithms have an infinite number of values as determined by the range of the resulting arguments. We mentioned that the way to handle multivaluedness is to assign different branches to these functions, introduce a branch cut, and glue them together at the branch cuts to form Riemann surfaces. In this way we can draw continuous paths along the Riemann surfaces as we move from one Riemann sheet to another.

Before we do examples of contour integration involving multivalued functions, let's first try to get a handle on multivaluedness in a simple case. We will consider the square root function,

\[
w = z^{1/2} = r^{1/2}e^{i\left(\frac{\theta}{2}+k\pi\right)}, \qquad k = 0, 1.
\]

There are two branches, corresponding to each k value. If we follow a path not containing the origin, then we stay in the same branch, so the final argument (θ) will be equal to the initial argument. However, if we follow a path that encloses the origin, this will not be true. In particular, for an initial point on the unit circle, z₀ = e^{iθ₀}, we have its image as w₀ = e^{iθ₀/2}. However, if we go around a full revolution, θ = θ₀ + 2π, then

\[
z_1 = e^{i\theta_0+2\pi i} = e^{i\theta_0},
\]

but

\[
w_1 = e^{(i\theta_0+2\pi i)/2} = e^{i\theta_0/2}e^{\pi i} \neq w_0.
\]

Here we obtain a final argument (θ) that is not equal to the initial argument! Somewhere, we have crossed from one branch to another. Points, such as


the origin in this example, are called branch points. Actually, there are twobranch points, because we can view the closed path around the origin asa closed path around complex infinity in the compactified complex plane.However, we will not go into that at this time.

We can demonstrate this in the following figures. In Figure 8.47 we showhow the points A-E are mapped from the z-plane into the w-plane underthe square root function for the principal branch, k = 0. As we trace out theunit circle in the z-plane, we only trace out a semicircle in the w-plane. Ifwe consider the branch k = 1, we then trace out a semicircle in the lowerhalf plane, as shown in Figure 8.48 following the points from F to J.

Figure 8.47: In this figure we show how points on the unit circle in the z-plane are mapped to points in the w-plane under the principal square root function.

Figure 8.48: In this figure we show how points on the unit circle in the z-plane are mapped to points in the w-plane under the square root function for the second branch, k = 1.

Figure 8.49: In this figure we show the combined mapping using two branches of the square root function.

We can combine these into one mapping depicting how the two complex planes corresponding to each branch provide a mapping to the w-plane. This is shown in Figure 8.49.

A common way to draw this domain, which looks like two separate com-plex planes, would be to glue them together. Imagine cutting each planealong the positive x-axis, extending between the two branch points, z = 0and z = ∞. As one approaches the cut on the principal branch, then onecan move onto the glued second branch. Then one continues around theorigin on this branch until one once again reaches the cut. This cut is gluedto the principal branch in such a way that the path returns to its startingpoint. The resulting surface we obtain is the Riemann surface shown inFigure 8.50. Note that there is nothing that forces us to place the branchcut at a particular place. For example, the branch cut could be along thepositive real axis, the negative real axis, or any path connecting the originand complex infinity.

Figure 8.50: Riemann surface for f(z) = z^{1/2}.

We now look at examples involving integrals of multivalued functions.

Example 8.45. Evaluate ∫_0^∞ √x/(1 + x²) dx.

We consider the contour integral ∮_C √z/(1 + z²) dz.

The first thing we can see in this problem is the square root function in the integrand. Since there is a multivalued function, we locate the branch point and determine where to draw the branch cut. In Figure 8.51 we show the contour that we will use in this problem. Note that we picked the branch cut along the positive x-axis.

Figure 8.51: An example of a contour which accounts for a branch cut.

We take the contour C to be positively oriented, being careful to enclose the two poles and to hug the branch cut. It consists of two circles. The outer circle C_R is a circle of radius R and the inner circle C_ε will have a radius of ε. The sought answer will be obtained by letting R → ∞ and ε → 0. On the large circle we have that the integrand goes to zero fast enough as R → ∞. The integral around the small circle vanishes as ε → 0. We can see this by parametrizing the circle as z = εe^{iθ} for θ ∈ [0, 2π]:

\[
\oint_{C_{\varepsilon}}\frac{\sqrt{z}}{1+z^2}\, dz = \int_0^{2\pi}\frac{\sqrt{\varepsilon e^{i\theta}}}{1+(\varepsilon e^{i\theta})^2}\, i\varepsilon e^{i\theta}\, d\theta = i\varepsilon^{3/2}\int_0^{2\pi}\frac{e^{3i\theta/2}}{1+\varepsilon^2 e^{2i\theta}}\, d\theta. \tag{8.73}
\]

It should now be easy to see that as ε → 0 this integral vanishes.

The integral above the branch cut is the one we are seeking, ∫_0^∞ √x/(1 + x²) dx. The integral under the branch cut, where z = re^{2πi} and therefore √z = √r e^{iπ} = −√r, is

\[
\int\frac{\sqrt{z}}{1+z^2}\, dz = \int_{\infty}^{0}\frac{\sqrt{r}\,e^{i\pi}}{1+r^2e^{4\pi i}}\, dr = \int_0^{\infty}\frac{\sqrt{r}}{1+r^2}\, dr. \tag{8.74}
\]

We note that this is the same as that above the cut.

Up to this point, we have that the contour integral, as R → ∞ and ε → 0, is

\[
\oint_C\frac{\sqrt{z}}{1+z^2}\, dz = 2\int_0^{\infty}\frac{\sqrt{x}}{1+x^2}\, dx.
\]

In order to finish this problem, we need the residues at the two simple poles. On this branch, 0 ≤ arg z < 2π, we have √i = e^{iπ/4} and √(−i) = e^{3iπ/4}, so

\[
\mathrm{Res}\left[\frac{\sqrt{z}}{1+z^2}; z=i\right] = \frac{\sqrt{i}}{2i} = \frac{\sqrt{2}}{4}(1-i),
\]
\[
\mathrm{Res}\left[\frac{\sqrt{z}}{1+z^2}; z=-i\right] = \frac{\sqrt{-i}}{-2i} = -\frac{\sqrt{2}}{4}(1+i).
\]

So,

\[
2\int_0^{\infty}\frac{\sqrt{x}}{1+x^2}\, dx = 2\pi i\left(\frac{\sqrt{2}}{4}(1-i) - \frac{\sqrt{2}}{4}(1+i)\right) = 2\pi i\left(-\frac{\sqrt{2}}{2}\,i\right) = \pi\sqrt{2}.
\]

Finally, we have the value of the integral that we were seeking,

\[
\int_0^{\infty}\frac{\sqrt{x}}{1+x^2}\, dx = \frac{\pi\sqrt{2}}{2}.
\]
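Because the branch bookkeeping is delicate here, a numerical check of the final value is reassuring. An mpmath sketch (assumed here only as an illustration; the interior point 1 is inserted just to help the quadrature):

from mpmath import mp, quad, sqrt, pi, inf

mp.dps = 25

numeric = quad(lambda x: sqrt(x)/(1 + x**2), [0, 1, inf])
exact = pi*sqrt(2)/2
print(numeric, exact, abs(numeric - exact))   # the difference should be tiny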

Example 8.46. Compute ∫_a^∞ f(x) dx using contour integration involving logarithms.⁴

⁴ This approach was originally published in Neville, E. H., 1945, Indefinite integration by means of residues. The Mathematical Student, 13, 16-35, and discussed in Duffy, D. G., Transform Methods for Solving Partial Differential Equations, 1994.

In this example we will apply contour integration to the integral

\[
\oint_C f(z)\ln(a-z)\, dz
\]

for the contour shown in Figure 8.52.

Figure 8.52: Contour needed to compute ∮_C f(z) ln(a − z) dz.

We will assume that f(z) is single valued and vanishes as |z| → ∞. We will choose the branch cut to extend from the branch point z = a along the positive real axis. Employing the Residue Theorem and breaking up the integrals over the pieces of the contour in Figure 8.52, we have schematically that

\[
2\pi i\sum\mathrm{Res}[f(z)\ln(a-z)] = \left(\int_{C_1}+\int_{C_2}+\int_{C_3}+\int_{C_4}\right) f(z)\ln(a-z)\, dz.
\]

First of all, we assume that f(z) is well behaved at z = a and vanishes fast enough as |z| = R → ∞. Then, the integrals over C_2 and C_4 will vanish. For example, for the path C_4, we let z = a + εe^{iθ}, 0 < θ < 2π. Then,

\[
\int_{C_4} f(z)\ln(a-z)\, dz = \lim_{\varepsilon\to 0}\int_{2\pi}^{0} f(a+\varepsilon e^{i\theta})\ln(\varepsilon e^{i\theta})\, i\varepsilon e^{i\theta}\, d\theta.
\]

If f(a) is well behaved, then we only need to show that lim_{ε→0} ε ln ε = 0. This is left to the reader.

Similarly, we consider the integral over C_2 as R gets large,

\[
\int_{C_2} f(z)\ln(a-z)\, dz = \lim_{R\to\infty}\int_0^{2\pi} f(Re^{i\theta})\ln(Re^{i\theta})\, iRe^{i\theta}\, d\theta.
\]

Thus, we need only require that

\[
\lim_{R\to\infty} R\ln R\,|f(Re^{i\theta})| = 0.
\]

Next, we consider the two straight line pieces. For C_1, the integration along the real axis occurs for z = x, so

\[
\int_{C_1} f(z)\ln(a-z)\, dz = \int_a^{\infty} f(x)\ln(a-x)\, dx.
\]

However, integration over C_3 requires noting that we need the branch of the logarithm for which ln(a − z) = ln(a − x) + 2πi. Then,

\[
\int_{C_3} f(z)\ln(a-z)\, dz = \int_{\infty}^{a} f(x)[\ln(a-x)+2\pi i]\, dx.
\]

Combining these results, we have

\[
\begin{aligned}
2\pi i\sum\mathrm{Res}[f(z)\ln(a-z)] &= \int_a^{\infty} f(x)\ln(a-x)\, dx + \int_{\infty}^{a} f(x)[\ln(a-x)+2\pi i]\, dx \\
&= -2\pi i\int_a^{\infty} f(x)\, dx.
\end{aligned}\tag{8.75}
\]

Therefore,

\[
\int_a^{\infty} f(x)\, dx = -\sum\mathrm{Res}[f(z)\ln(a-z)].
\]

Figure 8.53: Contour needed to compute ∫_1^∞ dx/(4x² − 1).

Example 8.47. Compute ∫_1^∞ dx/(4x² − 1).

We can apply the last example to this case. We see from Figure 8.53 that the two poles at z = ±1/2 are inside the contour C. So, we compute the residues of ln(1 − z)/(4z² − 1) at these poles and find that

\[
\begin{aligned}
\int_1^{\infty}\frac{dx}{4x^2-1} &= -\mathrm{Res}\left[\frac{\ln(1-z)}{4z^2-1};\ \frac{1}{2}\right] - \mathrm{Res}\left[\frac{\ln(1-z)}{4z^2-1};\ -\frac{1}{2}\right] \\
&= -\frac{\ln\frac{1}{2}}{4} + \frac{\ln\frac{3}{2}}{4} = \frac{\ln 3}{4}.
\end{aligned}\tag{8.76}
\]
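Both the integral and the residue sum can be checked with SymPy; the following sketch is assumed here only as an optional verification (the last line compares the two values numerically):

from sympy import symbols, integrate, residue, log, oo, Rational

x, z = symbols('x z')

# Direct evaluation of the real integral
print(integrate(1/(4*x**2 - 1), (x, 1, oo)))          # log(3)/4

# Minus the sum of residues of ln(1-z)/(4z**2-1) at z = 1/2 and z = -1/2
total = residue(log(1 - z)/(4*z**2 - 1), z, Rational(1, 2)) \
      + residue(log(1 - z)/(4*z**2 - 1), z, Rational(-1, 2))
print(float(-total), float(log(3)/4))                 # both approximately 0.27465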

8.5.8 Appendix: Jordan's Lemma

For completeness, we prove Jordan's Lemma.

Theorem 8.10. If f(z) converges uniformly to zero as z → ∞, then

\[
\lim_{R\to\infty}\int_{C_R} f(z)e^{ikz}\, dz = 0,
\]

where k > 0 and C_R is the upper half of the circle |z| = R.

Proof. We consider the integral

\[
I_R = \int_{C_R} f(z)e^{ikz}\, dz,
\]

where k > 0 and C_R is the upper half of the circle |z| = R in the complex plane. Let z = Re^{iθ} be a parametrization of C_R. Then,

\[
I_R = \int_0^{\pi} f(Re^{i\theta})\,e^{ikR\cos\theta - kR\sin\theta}\, iRe^{i\theta}\, d\theta.
\]

Since

\[
\lim_{|z|\to\infty} f(z) = 0, \qquad 0 \le \arg z \le \pi,
\]

uniformly, for any ε > 0 we have |f(z)| < ε on C_R once R is large enough. Then,

\[
\begin{aligned}
|I_R| &= \left|\int_0^{\pi} f(Re^{i\theta})\,e^{ikR\cos\theta - kR\sin\theta}\, iRe^{i\theta}\, d\theta\right| \\
&\le \int_0^{\pi}\left|f(Re^{i\theta})\right|\left|e^{ikR\cos\theta}\right|\left|e^{-kR\sin\theta}\right|\left|iRe^{i\theta}\right| d\theta \\
&\le \varepsilon R\int_0^{\pi} e^{-kR\sin\theta}\, d\theta \\
&= 2\varepsilon R\int_0^{\pi/2} e^{-kR\sin\theta}\, d\theta.
\end{aligned}\tag{8.77}
\]

Figure 8.54: Plots of y = sin θ and y = (2/π)θ to show where sin θ ≥ (2/π)θ.

The last integral still cannot be computed, but we can get a bound on it over the range θ ∈ [0, π/2]. Note from Figure 8.54 that

\[
\sin\theta \ge \frac{2}{\pi}\theta, \qquad \theta\in[0,\pi/2].
\]

Therefore, we have

\[
|I_R| \le 2\varepsilon R\int_0^{\pi/2} e^{-2kR\theta/\pi}\, d\theta = \frac{2\varepsilon R}{2kR/\pi}\left(1-e^{-kR}\right).
\]

For large R we have

\[
\lim_{R\to\infty}|I_R| \le \frac{\pi\varepsilon}{k}.
\]

So, as ε → 0, the integral vanishes.

8.6 Laplace's Equation in 2D, Revisited

Harmonic functions are solutions of Laplace's equation. We have seen that the real and imaginary parts of a holomorphic function are harmonic. So, there must be a connection between complex functions and solutions of the two-dimensional Laplace equation. In this section we will describe how conformal mapping can be used to find solutions of Laplace's equation in two-dimensional regions.

In Section 2.5 we first saw applications in two-dimensional steady-state heat flow (or, diffusion), electrostatics, and fluid flow. For example, letting φ(r) be the electric potential, one has for a static charge distribution, ρ(r), that the electric field, E = −∇φ, satisfies

\[
\nabla\cdot\mathbf{E} = \rho/\epsilon_0.
\]

In regions devoid of charge, these equations yield the Laplace equation, ∇²φ = 0.

Similarly, we can derive Laplace's equation for an incompressible, ∇ · v = 0, irrotational, ∇ × v = 0, fluid flow. From well-known vector identities, we know that ∇ × ∇φ = 0 for a scalar function, φ. Therefore, we can introduce a velocity potential, φ, such that v = ∇φ. Then irrotationality holds automatically, and ∇ · v = 0 implies ∇²φ = 0. So, the velocity potential satisfies Laplace's equation.


Fluid flow is probably the simplest and most interesting application of complex variable techniques for solving Laplace's equation. So, we will spend some time discussing how conformal mappings have been used to study two-dimensional ideal fluid flow, leading to the study of airfoil design.

8.6.1 Fluid Flow

The study of fluid flow and conformal mappings dates back to Euler, Riemann, and others.⁵ The method was further elaborated upon by physicists like Lord Rayleigh (1877), and applications to airfoil theory were presented in papers by Kutta (1902) and Joukowski (1906), later to be improved upon by others.

⁵ "On the Use of Conformal Mapping in Shaping Wing Profiles," MAA lecture by R. S. Burington, 1939, published (1940) in ... 362-373.

The physics behind flight and the dynamics of wing theory rely on the ideas of drag and lift. Namely, as the cross section of a wing, the airfoil, goes through the air, it will experience several forces. The air speed above and below the wing will differ due to the distance the air has to travel across the top and bottom of the wing. According to Bernoulli's Principle, steady fluid flow satisfies the conservation of energy in the form

\[
P + \frac{1}{2}\rho U^2 + \rho g h = \text{constant}
\]

at points on either side of the wing profile. Here P is the pressure, ρ is the air density, U is the fluid speed, h is a reference height, and g is the acceleration due to gravity. The gravitational potential energy, ρgh, is roughly constant on either side of the wing. So, this reduces to

\[
P + \frac{1}{2}\rho U^2 = \text{constant}.
\]

Therefore, if the speed of the air below the wing is lower than that above the wing, the pressure below the wing will be higher, resulting in a net upward pressure. Since the pressure is the force per area, this will result in an upward force, a lift force, acting on the wing. This is the simplified version of the lift force. There is also a drag force acting in the direction of the flow. In general, we want to use complex variable methods to model the streamlines of the airflow as the air flows around an airfoil.

We begin by considering the fluid flow across a curve, C, as shown in Figure 8.55. We assume that it is an ideal fluid with zero viscosity (i.e., it does not flow like molasses) and is incompressible. It is a continuous, homogeneous flow with a constant thickness, represented by a velocity U = (u(x, y), v(x, y)), where u and v are the horizontal and vertical components of the flow, as shown in Figure 8.55.

Figure 8.55: Fluid flow U across curve C between the points A and B.

We are interested in the flow of fluid across a given curve which crosses several streamlines. The mass that flows over C per unit thickness in time dt can be given by

\[
dm = \rho\,\mathbf{U}\cdot\mathbf{n}\, dA\, dt.
\]

Here n dA is the normal area element for the flow and for unit thickness can be written as n dA = i dy − j dx. Therefore, for a unit thickness the mass flow rate is given by

\[
\frac{dm}{dt} = \rho(u\, dy - v\, dx).
\]

Since the total mass flowing across ds in time dt is given by dm = ρ dV, for constant density, this also gives the volume flow rate,

\[
\frac{dV}{dt} = u\, dy - v\, dx,
\]

over a section of the curve. The total volume flow over C is therefore

\[
\left.\frac{dV}{dt}\right|_{\text{total}} = \int_C u\, dy - v\, dx.
\]

Figure 8.56: An amount of fluid crossing curve C in unit time.

If this flow is independent of the curve, i.e., the path, then we have

\[
\frac{\partial u}{\partial x} = -\frac{\partial v}{\partial y}.
\]

[This is just a consequence of Green's Theorem in the Plane. See Equation (8.3).] Another way to say this is that there exists a function, ψ(x, y), such that dψ = u dy − v dx. Then,

\[
\int_C u\, dy - v\, dx = \int_A^B d\psi = \psi_B - \psi_A.
\]

However, from basic calculus of several variables, we know that

\[
d\psi = \frac{\partial\psi}{\partial x}\, dx + \frac{\partial\psi}{\partial y}\, dy = u\, dy - v\, dx.
\]

Therefore,

\[
u = \frac{\partial\psi}{\partial y}, \qquad v = -\frac{\partial\psi}{\partial x}.
\]

It follows that if ψ(x, y) has continuous second derivatives, then u_x = −v_y. This function is called the streamline function.

Streamline functions.

Furthermore, for constant density, we have

\[
\nabla\cdot(\rho\mathbf{U}) = \rho\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right) = \rho\left(\frac{\partial^2\psi}{\partial y\,\partial x}-\frac{\partial^2\psi}{\partial x\,\partial y}\right) = 0. \tag{8.78}
\]

This is the conservation of mass formula for constant density fluid flow.

Velocity potential curves.

We can also assume that the flow is irrotational. This means that the vorticity of the flow vanishes; i.e., ∇ × U = 0. Since the curl of the velocity field is zero, we can assume that the velocity is the gradient of a scalar function, U = ∇φ. Then, a standard vector identity automatically gives

\[
\nabla\times\mathbf{U} = \nabla\times\nabla\phi = 0.
\]

For the two-dimensional flow with U = (u, v), we have

\[
u = \frac{\partial\phi}{\partial x}, \qquad v = \frac{\partial\phi}{\partial y}.
\]

This is the velocity potential function for the flow.

Let's place the two-dimensional flow in the complex plane. Let an arbitrary point be z = (x, y). Then, we have found two real-valued functions, φ(x, y) and ψ(x, y), satisfying the relations

\[
\begin{aligned}
u &= \frac{\partial\phi}{\partial x} = \frac{\partial\psi}{\partial y}, \\
v &= \frac{\partial\phi}{\partial y} = -\frac{\partial\psi}{\partial x}.
\end{aligned}\tag{8.79}
\]

These are the Cauchy-Riemann relations for the real and imaginary parts of a complex differentiable function,

\[
F(z(x, y)) = \phi(x, y) + i\psi(x, y).
\]

From its form, dF/dz is called the complex velocity and |dF/dz| = √(u² + v²) is the flow speed.

Furthermore, we have

\[
\frac{dF}{dz} = \frac{\partial\phi}{\partial x} + i\frac{\partial\psi}{\partial x} = u - iv.
\]

Integrating, we have

\[
\begin{aligned}
F &= \int_C (u-iv)\, dz, \\
\phi(x, y) + i\psi(x, y) &= \int_{(x_0,y_0)}^{(x,y)} [u(x, y)\, dx + v(x, y)\, dy] + i\int_{(x_0,y_0)}^{(x,y)} [-v(x, y)\, dx + u(x, y)\, dy].
\end{aligned}\tag{8.80}
\]

Therefore, the streamline and potential functions are given by the integral forms

\[
\begin{aligned}
\phi(x, y) &= \int_{(x_0,y_0)}^{(x,y)} [u(x, y)\, dx + v(x, y)\, dy], \\
\psi(x, y) &= \int_{(x_0,y_0)}^{(x,y)} [-v(x, y)\, dx + u(x, y)\, dy].
\end{aligned}\tag{8.81}
\]

These integrals give the circulation ∫_C V_s ds = ∫_C u dx + v dy and the fluid flow per time, ∫_C −v dx + u dy.

The streamlines for the flow are given by the level curves ψ(x, y) = c₁ and the potential lines are given by the level curves φ(x, y) = c₂. These are two orthogonal families of curves; i.e., these families of curves intersect each other orthogonally at each point, as we will see in the examples. Note that these families of curves also provide the field lines and equipotential curves for electrostatic problems.

Streamlines and potential curves are orthogonal families of curves.

Example 8.48. Show that φ(x, y) = c₁ and ψ(x, y) = c₂ are an orthogonal family of curves when F(z) = φ(x, y) + iψ(x, y) is holomorphic.

In order to show that these curves are orthogonal, we need to find the slopes of the curves at an arbitrary point, (x, y). For φ(x, y) = c₁, we recall from multivariable calculus that

\[
d\phi = \frac{\partial\phi}{\partial x}\, dx + \frac{\partial\phi}{\partial y}\, dy = 0.
\]

So, the slope is found as

\[
\frac{dy}{dx} = -\frac{\partial\phi/\partial x}{\partial\phi/\partial y}.
\]

Similarly, we have

\[
\frac{dy}{dx} = -\frac{\partial\psi/\partial x}{\partial\psi/\partial y}.
\]

Since F(z) is differentiable, we can use the Cauchy-Riemann equations to show that the product of the slopes satisfies

\[
\left(-\frac{\partial\phi/\partial x}{\partial\phi/\partial y}\right)\left(-\frac{\partial\psi/\partial x}{\partial\psi/\partial y}\right) = \frac{\partial\psi/\partial y}{-\partial\psi/\partial x}\,\frac{\partial\psi/\partial x}{\partial\psi/\partial y} = -1.
\]

Therefore, φ(x, y) = c₁ and ψ(x, y) = c₂ form an orthogonal family of curves.

Figure 8.57: Plot of the orthogonal families φ = x² − y² = c₁ (dashed) and ψ(x, y) = 2xy = c₂.

As an example, consider F(z) = z² = x² − y² + 2ixy. Then, φ(x, y) = x² − y² and ψ(x, y) = 2xy. The slopes of the families of curves are given by

\[
\begin{aligned}
\frac{dy}{dx} &= -\frac{\partial\phi/\partial x}{\partial\phi/\partial y} = -\frac{2x}{-2y} = \frac{x}{y}, \\
\frac{dy}{dx} &= -\frac{\partial\psi/\partial x}{\partial\psi/\partial y} = -\frac{2y}{2x} = -\frac{y}{x}.
\end{aligned}\tag{8.82}
\]

The product of these slopes is −1. The orthogonal families are depicted in Figure 8.57.
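Plots such as Figure 8.57 are produced later in this chapter with Maple; a rough equivalent sketch in Python with NumPy and Matplotlib (assumed here only as an illustration of how the two level-curve families can be drawn together) is:

import numpy as np
import matplotlib.pyplot as plt

# Level curves of phi = x^2 - y^2 (dashed) and psi = 2xy (solid) for F(z) = z^2
x, y = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
phi = x**2 - y**2
psi = 2*x*y

levels = np.linspace(-3, 3, 13)
plt.contour(x, y, phi, levels=levels, colors='k', linestyles='dashed')
plt.contour(x, y, psi, levels=levels, colors='k')
plt.gca().set_aspect('equal')
plt.show()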

We will now turn to some typical examples by writing down some dif-ferentiable functions, F(z), and determining the types of flows that resultfrom these examples. We will then turn in the next section to using thesebasic forms to solve problems in slightly different domains through the useof conformal mappings.

Example 8.49. Describe the fluid flow associated with F(z) = U₀e^{−iα}z, where U₀ and α are real.

For this example, we have

\[
\frac{dF}{dz} = U_0e^{-i\alpha} = u - iv.
\]

Thus, the velocity is constant,

\[
\mathbf{U} = (U_0\cos\alpha,\ U_0\sin\alpha).
\]

That is, the flow is uniform, making an angle α with the x-axis.

Figure 8.58: Stream lines (solid) and potential lines (dashed) for uniform flow at an angle of α, given by F(z) = U₀e^{−iα}z.

Since

\[
F(z) = U_0e^{-i\alpha}z = U_0(x\cos\alpha + y\sin\alpha) + iU_0(y\cos\alpha - x\sin\alpha),
\]

we have

\[
\begin{aligned}
\phi(x, y) &= U_0(x\cos\alpha + y\sin\alpha), \\
\psi(x, y) &= U_0(y\cos\alpha - x\sin\alpha).
\end{aligned}\tag{8.83}
\]

An example of this family of curves is shown in Figure 8.58.

Example 8.50. Describe the flow given by F(z) = U₀e^{−iα}/(z − z₀).

We write

\[
\begin{aligned}
F(z) &= \frac{U_0e^{-i\alpha}}{z-z_0} \\
&= \frac{U_0(\cos\alpha - i\sin\alpha)}{(x-x_0)^2+(y-y_0)^2}\,[(x-x_0)-i(y-y_0)] \\
&= \frac{U_0}{(x-x_0)^2+(y-y_0)^2}\,[(x-x_0)\cos\alpha - (y-y_0)\sin\alpha] \\
&\quad - i\,\frac{U_0}{(x-x_0)^2+(y-y_0)^2}\,[(y-y_0)\cos\alpha + (x-x_0)\sin\alpha].
\end{aligned}\tag{8.84}
\]

The level curves become

\[
\begin{aligned}
\phi(x, y) &= \frac{U_0}{(x-x_0)^2+(y-y_0)^2}\,[(x-x_0)\cos\alpha - (y-y_0)\sin\alpha] = c_1, \\
\psi(x, y) &= -\frac{U_0}{(x-x_0)^2+(y-y_0)^2}\,[(y-y_0)\cos\alpha + (x-x_0)\sin\alpha] = c_2.
\end{aligned}\tag{8.85}
\]

Figure 8.59: Stream lines (solid) and potential lines (dashed) for the flow given by F(z) = U₀e^{−iα}/z for α = 0.

The level curves for the stream and potential functions satisfy equations of the form

\[
\beta_i(\Delta x^2 + \Delta y^2) - \cos(\alpha+\delta_i)\,\Delta x + \sin(\alpha+\delta_i)\,\Delta y = 0,
\]

where Δx = x − x₀, Δy = y − y₀, β_i = c_i/U₀, δ₁ = 0, and δ₂ = π/2. These can be written in the more suggestive form

\[
\bigl(\Delta x - \gamma_i\cos(\alpha+\delta_i)\bigr)^2 + \bigl(\Delta y + \gamma_i\sin(\alpha+\delta_i)\bigr)^2 = \gamma_i^2
\]

for γ_i = U₀/(2c_i), i = 1, 2. Thus, the stream and potential curves are circles with varying radii (γ_i) and centers ((x₀ + γ_i cos(α + δ_i), y₀ − γ_i sin(α + δ_i))). Examples of this family of curves are shown for α = 0 in Figure 8.59 and for α = π/6 in Figure 8.60.

Figure 8.60: Stream lines (solid) and potential lines (dashed) for the flow given by F(z) = U₀e^{−iα}/z for α = π/6.

The components of the velocity field for α = 0 are found from

\[
\begin{aligned}
\frac{dF}{dz} &= \frac{d}{dz}\left(\frac{U_0}{z-z_0}\right) = -\frac{U_0}{(z-z_0)^2} \\
&= -\frac{U_0[(x-x_0)-i(y-y_0)]^2}{[(x-x_0)^2+(y-y_0)^2]^2} \\
&= -\frac{U_0[(x-x_0)^2-(y-y_0)^2-2i(x-x_0)(y-y_0)]}{[(x-x_0)^2+(y-y_0)^2]^2} \\
&= -\frac{U_0[(x-x_0)^2-(y-y_0)^2]}{[(x-x_0)^2+(y-y_0)^2]^2} + i\,\frac{2U_0(x-x_0)(y-y_0)}{[(x-x_0)^2+(y-y_0)^2]^2}.
\end{aligned}\tag{8.86}
\]

Since dF/dz = u − iv, we have

\[
\begin{aligned}
u &= -\frac{U_0[(x-x_0)^2-(y-y_0)^2]}{[(x-x_0)^2+(y-y_0)^2]^2}, \\
v &= -\frac{2U_0(x-x_0)(y-y_0)}{[(x-x_0)^2+(y-y_0)^2]^2}.
\end{aligned}\tag{8.87}
\]

Example 8.51. Describe the flow given by F(z) = (m/2π) ln(z − z₀).

We write F(z) in terms of its real and imaginary parts:

\[
F(z) = \frac{m}{2\pi}\ln(z-z_0) = \frac{m}{2\pi}\left[\ln\sqrt{(x-x_0)^2+(y-y_0)^2} + i\tan^{-1}\frac{y-y_0}{x-x_0}\right]. \tag{8.88}
\]

The level curves become

\[
\begin{aligned}
\phi(x, y) &= \frac{m}{2\pi}\ln\sqrt{(x-x_0)^2+(y-y_0)^2} = c_1, \\
\psi(x, y) &= \frac{m}{2\pi}\tan^{-1}\frac{y-y_0}{x-x_0} = c_2.
\end{aligned}\tag{8.89}
\]

Rewriting these equations, we have

\[
\begin{aligned}
(x-x_0)^2 + (y-y_0)^2 &= e^{4\pi c_1/m}, \\
y-y_0 &= (x-x_0)\tan\frac{2\pi c_2}{m}.
\end{aligned}\tag{8.90}
\]

In Figure 8.61 we see that the stream lines are those for a source or sink, depending on whether m > 0 or m < 0, respectively.

Example 8.52. Describe the flow given by F(z) = −(iΓ/2π) ln[(z − z₀)/a].

We write F(z) in terms of its real and imaginary parts:

\[
\begin{aligned}
F(z) &= -\frac{i\Gamma}{2\pi}\ln\frac{z-z_0}{a} \\
&= \frac{\Gamma}{2\pi}\tan^{-1}\frac{y-y_0}{x-x_0} - \frac{i\Gamma}{2\pi}\ln\sqrt{\left(\frac{x-x_0}{a}\right)^2+\left(\frac{y-y_0}{a}\right)^2}.
\end{aligned}\tag{8.91}
\]

Figure 8.61: Stream lines (solid) and potential lines (dashed) for the flow given by F(z) = (m/2π) ln(z − z₀) for (x₀, y₀) = (2, 1).

The level curves become

\[
\begin{aligned}
\phi(x, y) &= \frac{\Gamma}{2\pi}\tan^{-1}\frac{y-y_0}{x-x_0} = c_1, \\
\psi(x, y) &= -\frac{\Gamma}{2\pi}\ln\sqrt{\left(\frac{x-x_0}{a}\right)^2+\left(\frac{y-y_0}{a}\right)^2} = c_2.
\end{aligned}\tag{8.92}
\]

Rewriting these equations, we have

\[
\begin{aligned}
y-y_0 &= (x-x_0)\tan\frac{2\pi c_1}{\Gamma}, \\
\left(\frac{x-x_0}{a}\right)^2+\left(\frac{y-y_0}{a}\right)^2 &= e^{-4\pi c_2/\Gamma}.
\end{aligned}\tag{8.93}
\]

In Figure 8.62 we see that the stream lines are circles, indicating rotational motion. Therefore, we have a vortex of counterclockwise or clockwise flow, depending on whether Γ > 0 or Γ < 0, respectively.

Example 8.53. Flow around a cylinder, F(z) = U₀(z + a²/z), a, U₀ ∈ R.

For this example, we have

\[
\begin{aligned}
F(z) &= U_0\left(z+\frac{a^2}{z}\right) = U_0\left(x+iy+\frac{a^2}{x+iy}\right) \\
&= U_0\left(x+iy+\frac{a^2}{x^2+y^2}(x-iy)\right) \\
&= U_0x\left(1+\frac{a^2}{x^2+y^2}\right) + iU_0y\left(1-\frac{a^2}{x^2+y^2}\right).
\end{aligned}\tag{8.94}
\]

Figure 8.62: Stream lines (solid) and potential lines (dashed) for the vortex flow given by F(z) = −(iΓ/2π) ln[(z − z₀)/a] for (x₀, y₀) = (2, 1).

Figure 8.63: Stream lines for the flow given by F(z) = U₀(z + a²/z).

The level curves become

\[
\begin{aligned}
\phi(x, y) &= U_0x\left(1+\frac{a^2}{x^2+y^2}\right) = c_1, \\
\psi(x, y) &= U_0y\left(1-\frac{a^2}{x^2+y^2}\right) = c_2.
\end{aligned}\tag{8.95}
\]

Note that for the streamlines, when |z| is large, ψ ∼ U₀y, or horizontal lines. For x² + y² = a², we have ψ = 0. This behavior is shown in Figure 8.63, where we have graphed the solution for r ≥ a.

The level curves in Figure 8.63 can be obtained using the implicitplot feature of Maple. An example is shown below:

restart: with(plots):

k0:=20:

for k from 0 to k0 do

P[k]:=implicitplot(sin(t)*(r-1/r)*1=(k0/2-k)/20, r=1..5,

t=0..2*Pi, coords=polar,view=[-2..2, -1..1], axes=none,

grid=[150,150],color=black):

od:

display(seq(P[k],k=1..k0),scaling=constrained);

A slight modification of the last example is obtained if a circulation term is added:

\[
F(z) = U_0\left(z+\frac{a^2}{z}\right) - \frac{i\Gamma}{2\pi}\ln\frac{z}{a}.
\]

The combination of the two terms gives the streamlines,

\[
\psi(x, y) = U_0y\left(1-\frac{a^2}{x^2+y^2}\right) - \frac{\Gamma}{2\pi}\ln\frac{r}{a},
\]

which are seen in Figure 8.64. We can see interesting features in this flow, including what is called a stagnation point. A stagnation point is a point where the flow speed, |dF/dz| = 0.

Example 8.54. Find the stagnation points for the flow F(z) = (z + 1/z) − i ln z.

Since the flow speed vanishes at the stagnation points, we consider

\[
\frac{dF}{dz} = 1 - \frac{1}{z^2} - \frac{i}{z} = 0.
\]

This can be rewritten as

\[
z^2 - iz - 1 = 0.
\]

The solutions are z = ½(i ± √3). Thus, there are two stagnation points on the cylinder about which the flow is circulating. These are shown in Figure 8.65.
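The quadratic above is simple enough to solve by hand, but a symbolic check also confirms that the complex velocity vanishes at both roots. A SymPy sketch (assumed here only as an illustration):

from sympy import symbols, solve, I, simplify

z = symbols('z')

# dF/dz = 1 - 1/z**2 - i/z = 0  is equivalent to  z**2 - i*z - 1 = 0
roots = solve(z**2 - I*z - 1, z)
print(roots)                                       # the two roots (i +- sqrt(3))/2

# Check that the flow speed vanishes there
dF = 1 - 1/z**2 - I/z
print([simplify(dF.subs(z, r)) for r in roots])    # [0, 0]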

Example 8.55. Consider the complex potentials F(z) = (k/2π) ln[(z − a)/(z − b)], where k = q and k = −iq for q real.

Figure 8.64: Stream lines for the flow given by F(z) = U₀(z + a²/z) − (iΓ/2π) ln(z/a).

Figure 8.65: Stagnation points (red) on the cylinder are shown for the flow given by F(z) = (z + 1/z) − i ln z.

We first note that for z = x + iy,

\[
\begin{aligned}
\ln\frac{z-a}{z-b} &= \ln\sqrt{(x-a)^2+y^2} - \ln\sqrt{(x-b)^2+y^2} \\
&\quad + i\tan^{-1}\frac{y}{x-a} - i\tan^{-1}\frac{y}{x-b}.
\end{aligned}\tag{8.96}
\]

For k = q, we have

\[
\begin{aligned}
\phi(x, y) &= \frac{q}{2\pi}\left[\ln\sqrt{(x-a)^2+y^2} - \ln\sqrt{(x-b)^2+y^2}\right] = c_1, \\
\psi(x, y) &= \frac{q}{2\pi}\left[\tan^{-1}\frac{y}{x-a} - \tan^{-1}\frac{y}{x-b}\right] = c_2.
\end{aligned}\tag{8.97}
\]

The potential lines are circles and the streamlines are circular arcs, as shown in Figure 8.66. These correspond to a source at z = a and a sink at z = b. One can also view these as the electric field lines and equipotentials for an electric dipole consisting of two point charges of opposite sign at the points z = a and z = b.

The equations for the curves are found from⁶

⁶ The streamlines are found using the identity tan⁻¹α − tan⁻¹β = tan⁻¹[(α − β)/(1 + αβ)].

\[
\begin{aligned}
(x-a)^2+y^2 &= C_1\bigl[(x-b)^2+y^2\bigr], \\
(x-a)(x-b)+y^2 &= C_2\,y(a-b),
\end{aligned}\tag{8.98}
\]

where these can be rewritten, respectively, in the more suggestive forms

\[
\begin{aligned}
\left(x-\frac{a-bC_1}{1-C_1}\right)^2 + y^2 &= \frac{C_1(a-b)^2}{(1-C_1)^2}, \\
\left(x-\frac{a+b}{2}\right)^2 + \left(y-\frac{C_2(a-b)}{2}\right)^2 &= (1+C_2^2)\left(\frac{a-b}{2}\right)^2.
\end{aligned}\tag{8.99}
\]

Note that the first family of curves gives the potential curves and the second gives the streamlines.

Figure 8.66: The electric field lines (solid) and equipotentials (dashed) for a dipole given by the complex potential F(z) = (q/2π) ln[(z − a)/(z − b)] for b = −a.

In the case that k = −iq we have

\[
\begin{aligned}
F(z) &= \frac{-iq}{2\pi}\ln\frac{z-a}{z-b} \\
&= \frac{q}{2\pi}\left[\tan^{-1}\frac{y}{x-a} - \tan^{-1}\frac{y}{x-b}\right] \\
&\quad - \frac{iq}{2\pi}\left[\ln\sqrt{(x-a)^2+y^2} - \ln\sqrt{(x-b)^2+y^2}\right].
\end{aligned}\tag{8.100}
\]

So, the roles of the streamlines and potential lines are reversed, and the corresponding plots give a flow for a pair of vortices, as shown in Figure 8.67.

Figure 8.67: The streamlines (solid) and potentials (dashed) for a pair of vortices given by the complex potential F(z) = (q/2πi) ln[(z − a)/(z − b)] for b = −a.

8.6.2 Conformal Mappings

It would be nice if the complex potentials in the last section couldbe mapped to a region of the complex plane such that the new stream func-tions and velocity potentials represent new flows. In order for this to betrue, we would need the new families to once again be orthogonal fami-lies of curves. Thus, the mappings we seek must preserve angles. Suchmappings are called conformal mappings.

We let w = f (z) map points in the z−plane, (x, y), to points in the w-plane, (u, v) by f (x + iy) = u + iv. We have shown this in Figure 8.4.

Example 8.56. Map lines in the z-plane to curves in the w-plane under f(z) = z².

We have seen how grid lines in the z-plane are mapped by f(z) = z² into the w-plane in Figure 8.5, which is reproduced in Figure 8.68. The vertical line x = 1 is mapped to u(1, y) = 1 − y² and v(1, y) = 2y. Eliminating the "parameter" y between these two equations, we have u = 1 − v²/4. This is a parabolic curve.

Figure 8.68: 2D plot showing how the function f(z) = z² maps the lines x = 1 and y = 1 in the z-plane into parabolae in the w-plane.

Similarly, the horizontal line y = 1 results in the curve u = v²/4 − 1. These curves intersect at w = 2i.

The lines in the z-plane intersect at z = 1 + i at right angles. In the w-plane we see that the curves u = 1 − v²/4 and u = v²/4 − 1 intersect at w = 2i. The slopes of the tangent lines at (0, 2) are −1 and 1, respectively, as shown in Figure 8.69.

Figure 8.69: The tangents to the images of x = 1 and y = 1 under f(z) = z² are orthogonal.

In general, if two curves in the z-plane intersect orthogonally at z = z₀ and the corresponding curves in the w-plane under the mapping w = f(z) are orthogonal at w₀ = f(z₀), then the mapping is conformal. As we have seen, holomorphic functions are conformal, but only at points where f′(z) ≠ 0.

Holomorphic functions are conformal at points where f′(z) ≠ 0.

Example 8.57. Images of the real and imaginary axes under f(z) = z².

The line z = iy maps to w = z² = −y² and the line z = x maps to w = z² = x². The point of intersection z₀ = 0 maps to w₀ = 0. However, the image lines are the same line, the real axis in the w-plane. Obviously, the image lines are not orthogonal at the origin. Note that f′(0) = 0.

One special mapping is the inversion mapping, which is given by

\[
f(z) = \frac{1}{z}.
\]

This mapping maps the interior of the unit circle to the exterior of the unit circle in the w-plane, as shown in Figure 8.70.

Let z = x + iy, where x² + y² < 1. Then,

\[
w = \frac{1}{x+iy} = \frac{x}{x^2+y^2} - i\frac{y}{x^2+y^2}.
\]

Thus, u = x/(x² + y²) and v = −y/(x² + y²), and

\[
u^2 + v^2 = \left(\frac{x}{x^2+y^2}\right)^2 + \left(-\frac{y}{x^2+y^2}\right)^2 = \frac{x^2+y^2}{(x^2+y^2)^2} = \frac{1}{x^2+y^2}. \tag{8.101}
\]

Thus, for x² + y² < 1, u² + v² > 1. Furthermore, for x² + y² > 1, u² + v² < 1, and for x² + y² = 1, u² + v² = 1.

Figure 8.70: The inversion, f(z) = 1/z, maps the interior of a unit circle to the exterior of a unit circle. Also, segments of a line through the origin, y = 2x, are mapped to the line v = −2u.

In fact, an inversion maps circles into circles. Namely, for z = z₀ + re^{iθ}, we have

\[
w = \frac{1}{z_0+re^{i\theta}} = \frac{\bar{z}_0+re^{-i\theta}}{|z_0+re^{i\theta}|^2} = w_0 + Re^{-i\theta}. \tag{8.102}
\]

Also, lines through the origin in the z-plane map into lines through the origin in the w-plane. Let z = x + imx. This corresponds to a line with slope m in the z-plane, y = mx. It maps to

\[
f(z) = \frac{1}{z} = \frac{1}{x+imx} = \frac{x-imx}{(1+m^2)x^2}. \tag{8.103}
\]

So, u = 1/[(1 + m²)x] and v = −m/[(1 + m²)x] = −mu. This is a line through the origin in the w-plane with slope −m. This is shown in Figure 8.70. Note how the portion of the line y = 2x that is inside the unit disk maps to the outside of the disk in the w-plane.

The bilinear transformation.

Another interesting class of transformations, of which the inversion is a member, is the bilinear transformation. The bilinear transformation is given by

\[
w = f(z) = \frac{az+b}{cz+d}, \qquad ad-bc \neq 0,
\]

where a, b, c, and d are complex constants. These mappings were studied by August Ferdinand Möbius (1790-1868) and are also called Möbius transformations, or linear fractional transformations. We further note that if ad − bc = 0, then the transformation reduces to a constant function.

We can seek to invert the transformation. Namely, solving for z, we have

\[
z = f^{-1}(w) = \frac{-dw+b}{cw-a}, \qquad w \neq \frac{a}{c}.
\]

Since f⁻¹(w) is not defined for w = a/c, we can say that w = a/c maps to the point at infinity, or f⁻¹(a/c) = ∞. Similarly, we can let z → ∞ to obtain

\[
f(\infty) = \lim_{z\to\infty} f(z) = \frac{a}{c}.
\]

Thus, we have that the bilinear transformation is a one-to-one mapping of the extended complex z-plane to the extended complex w-plane.

The extended complex plane is the union of the complex plane plus the point at infinity. This is usually described in more detail using stereographic projection, which we will not review here.

If c = 0, f(z) is easily seen to be a linear transformation. Linear transformations transform lines into lines and circles into circles.

When c ≠ 0, we can write

\[
\begin{aligned}
f(z) &= \frac{az+b}{cz+d} = \frac{c(az+b)}{c(cz+d)} = \frac{acz+ad-ad+bc}{c(cz+d)} \\
&= \frac{a(cz+d)-ad+bc}{c(cz+d)} = \frac{a}{c} + \frac{bc-ad}{c}\,\frac{1}{cz+d}.
\end{aligned}\tag{8.104}
\]

We note that if bc − ad = 0, then f(z) = a/c is a constant, as noted above. The new form for f(z) shows that it is the composition of a linear function ζ = cz + d, an inversion, g(ζ) = 1/ζ, and another linear transformation, h(ζ) = a/c + [(bc − ad)/c]ζ. Since linear transformations and inversions transform the set of circles and lines in the extended complex plane into circles and lines in the extended complex plane, a bilinear transformation does so as well.

What is important in our applications of complex analysis to the solution of Laplace's equation is the transformation of regions of the complex plane into other regions of the complex plane. Needed transformations can be found using the following property of bilinear transformations:

A given set of three points in the z-plane can be transformed into a given set of three points in the w-plane using a bilinear transformation.

This statement is based on the following observation: there are three independent numbers that determine a bilinear transformation. If a ≠ 0, then

\[
f(z) = \frac{az+b}{cz+d} = \frac{z+\frac{b}{a}}{\frac{c}{a}z+\frac{d}{a}} \equiv \frac{z+\alpha}{\beta z+\gamma}. \tag{8.105}
\]

For w = (z + α)/(βz + γ), we have

\[
\begin{aligned}
w(\beta z+\gamma) &= z+\alpha, \\
-\alpha + wz\beta + w\gamma &= z.
\end{aligned}\tag{8.106}
\]

Now, let w_i = f(z_i), i = 1, 2, 3. This gives three equations for the three unknowns α, β, and γ. Namely,

\[
\begin{aligned}
-\alpha + w_1z_1\beta + w_1\gamma &= z_1, \\
-\alpha + w_2z_2\beta + w_2\gamma &= z_2, \\
-\alpha + w_3z_3\beta + w_3\gamma &= z_3.
\end{aligned}\tag{8.107}
\]

This system of linear equations can be put into matrix form as

\[
\begin{pmatrix} -1 & w_1z_1 & w_1 \\ -1 & w_2z_2 & w_2 \\ -1 & w_3z_3 & w_3 \end{pmatrix}\begin{pmatrix}\alpha \\ \beta \\ \gamma\end{pmatrix} = \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}.
\]

It is only a matter of solving this system for (α, β, γ)^T in order to find the bilinear transformation.

A quicker method is to use the implicit form of the transformation,

\[
\frac{(z-z_1)(z_2-z_3)}{(z-z_3)(z_2-z_1)} = \frac{(w-w_1)(w_2-w_3)}{(w-w_3)(w_2-w_1)}.
\]

Note that this implicit relation works upon insertion of the values w_i, z_i, for i = 1, 2, 3.

Example 8.58. Find the bilinear transformation that maps the points −1, i, 1 to the points −1, 0, 1.

The implicit form of the transformation becomes

\[
\begin{aligned}
\frac{(z+1)(i-1)}{(z-1)(i+1)} &= \frac{(w+1)(0-1)}{(w-1)(0+1)}, \\
\frac{z+1}{z-1}\,\frac{i-1}{i+1} &= -\frac{w+1}{w-1}.
\end{aligned}\tag{8.108}
\]

Solving for w, we have

\[
w = f(z) = \frac{(i-1)z+1+i}{(1+i)z-1+i}.
\]

We can use the transformation in the last example to map the unit disk containing the points −1, i, and 1 onto a half plane. We see that the unit circle gets mapped to the real axis, with z = −i mapped to the point at infinity. The point z = 0 gets mapped to

\[
w = \frac{1+i}{-1+i} = \frac{1+i}{-1+i}\,\frac{-1-i}{-1-i} = \frac{-2i}{2} = -i.
\]

Thus, interior points of the unit disk get mapped to the lower half plane. (Mapping −1, i, 1 to 1, 0, −1 instead would send the interior to the upper half plane.) This is shown in Figure 8.71.
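A short symbolic check confirms where this map sends the three boundary points and the center of the disk. A SymPy sketch (assumed here only as an optional verification):

from sympy import I, simplify, symbols

z = symbols('z')

# The bilinear map from Example 8.58
f = ((I - 1)*z + 1 + I)/((1 + I)*z - 1 + I)

# Images of -1, i, 1 (on the unit circle) and of 0 (the center of the disk)
print([simplify(f.subs(z, p)) for p in (-1, I, 1, 0)])   # [-1, 0, 1, -I]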

Problems

1. Write the following in standard form.

a. (4 + 5i)(2− 3i).


Figure 8.71: The bilinear transformation f(z) = ((i − 1)z + 1 + i)/((1 + i)z − 1 + i) maps the unit disk onto a half plane, sending the unit circle to the real axis.

Figure 8.72: Flow ...


b. (1 + i)3.

c. 5+3i1−i .

2. Write the following in polar form, z = reiθ .

a. i− 1.

b. −2i.

c.√

3 + 3i.

3. Write the following in rectangular form, z = a + ib.

a. 4eiπ/6.

b.√

2e5iπ/4.

c. (1− i)100.

4. Find all z such that z^4 = 16i. Write the solutions in rectangular form, z = a + ib, with no decimal approximation or trig functions.

5. Show that sin(x + iy) = sin x cosh y + i cos x sinh y using trigonometric identities and the exponential forms of these functions.

6. Find all z such that cos z = 2, or explain why there are none. You will need to consider cos(x + iy) and equate real and imaginary parts of the resulting expression, similar to Problem 5.

7. Find the principal value of i^i. Rewrite the base, i, as an exponential first.

8. Consider the circle |z − 1| = 1.

a. Rewrite the equation in rectangular coordinates by setting z = x + iy.

b. Sketch the resulting circle using part a.

c. Consider the image of the circle under the mapping f(z) = z^2, given by |z^2 − 1| = 1.

i. By inserting z = re^{iθ} = r(cos θ + i sin θ), find the equation of the image curve in polar coordinates.

ii. Sketch the image curve. You may need to refer to your Calculus II text for polar plots. [Maple might help.]

9. Find the real and imaginary parts of the functions:

a. f(z) = z^3.

b. f(z) = sinh(z).

c. f(z) = cos z.

10. Find the derivative of each function in Problem 9 when the derivative exists. Otherwise, show that the derivative does not exist.

11. Let f(z) = u + iv be differentiable. Consider the vector field given by F = v i + u j. Show that the equations ∇ · F = 0 and ∇ × F = 0 are equivalent to the Cauchy-Riemann equations. [You will need to recall from multivariable calculus the del operator, ∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z.]


12. What parametric curve is described by the function

γ(t) = (t − 3) + i(2t + 1),

0 ≤ t ≤ 2? [Hint: What would you do if you were instead considering the parametric equations x = t − 3 and y = 2t + 1?]

13. Write the equation that describes the circle of radius 3 which is centered at z = 2 − i in a) Cartesian form (in terms of x and y); b) polar form (in terms of θ and r); c) complex form (in terms of z, r, and e^{iθ}).

14. Consider the function u(x, y) = x^3 − 3xy^2.

a. Show that u(x, y) is harmonic; i.e., ∇^2 u = 0.

b. Find its harmonic conjugate, v(x, y).

c. Find a differentiable function, f(z), for which u(x, y) is the real part.

d. Determine f′(z) for the function in part c. [Use f′(z) = ∂u/∂x + i ∂v/∂x and rewrite your answer as a function of z.]

15. Evaluate the following integrals:

a. ∫_C z dz, where C is the parabola y = x^2 from z = 0 to z = 1 + i.

b. ∫_C f(z) dz, where f(z) = 2z − z̄ and C is the path from z = 0 to z = 2 + i consisting of two line segments, from z = 0 to z = 2 and then from z = 2 to z = 2 + i.

c. ∫_C 1/(z^2 + 4) dz for C the positively oriented circle, |z| = 2. [Hint: Parametrize the circle as z = 2e^{iθ}, multiply numerator and denominator by e^{−iθ}, and put in trigonometric form.]

16. Let C be the positively oriented ellipse 3x^2 + y^2 = 9. Define

F(z_0) = ∫_C (z^2 + 2z)/(z − z_0) dz.

Find F(2i) and F(2). [Hint: Sketch the ellipse in the complex plane. Use the Cauchy Integral Formula with an appropriate f(z), or Cauchy's Theorem if z_0 is outside the contour.]

17. Show that

∫_C dz/(z − 1 − i)^{n+1} = { 0, n ≠ 0; 2πi, n = 0 },

for C the boundary of the square 0 ≤ x ≤ 2, 0 ≤ y ≤ 2, taken counterclockwise. [Hint: Use the fact that contours can be deformed into simpler shapes (like a circle) as long as the integrand is analytic in the region between them. After picking a simpler contour, integrate using parametrization.]

18. Show that for g and h analytic functions at z_0, with g(z_0) ≠ 0, h(z_0) = 0, and h′(z_0) ≠ 0,

Res[ g(z)/h(z); z_0 ] = g(z_0)/h′(z_0).


19. For the following determine if the given point is a removable singularity, an essential singularity, or a pole (indicate its order).

a. (1 − cos z)/z^2, z = 0.

b. sin z/z^2, z = 0.

c. (z^2 − 1)/(z − 1)^2, z = 1.

d. z e^{1/z}, z = 0.

e. cos(π/(z − π)), z = π.

20. Find the Laurent series expansion for f(z) = sinh z/z^3 about z = 0. [Hint: You need to first do a MacLaurin series expansion for the hyperbolic sine.]

21. Find series representations for all indicated regions.

a. f(z) = z/(z − 1), |z| < 1, |z| > 1.

b. f(z) = 1/[(z − i)(z + 2)], |z| < 1, 1 < |z| < 2, |z| > 2. [Hint: Use partial fractions to write this as a sum of two functions first.]

22. Find the residues at the given points:

a. (2z^2 + 3z)/(z − 1) at z = 1.

b. ln(1 + 2z)/z at z = 0.

c. cos z/(2z − π)^3 at z = π/2.

23. Consider the integral ∫_0^{2π} dθ/(5 − 4 cos θ).

a. Evaluate this integral by making the substitution 2 cos θ = z + 1/z, z = e^{iθ}, and using complex integration methods.

b. In the 1800's Weierstrass introduced a method for computing integrals involving rational functions of sine and cosine. One makes the substitution t = tan(θ/2) and converts the integrand into a rational function of t. Note that the integration around the unit circle corresponds to t ∈ (−∞, ∞).

i. Show that

sin θ = 2t/(1 + t^2), cos θ = (1 − t^2)/(1 + t^2).

ii. Show that

dθ = 2 dt/(1 + t^2).

iii. Use the Weierstrass substitution to compute the above integral.

24. Do the following integrals.

a. ∮_{|z−i|=3} e^z/(z^2 + π^2) dz.

b. ∮_{|z−i|=3} (z^2 − 3z + 4)/(z^2 − 4z + 3) dz.


c. ∫_{−∞}^{∞} sin x/(x^2 + 4) dx.

[Hint: This is Im ∫_{−∞}^{∞} e^{ix}/(x^2 + 4) dx.]

25. Evaluate the integral ∫_0^{∞} (ln x)^2/(1 + x^2) dx.

[Hint: Replace x with z = e^t and use the rectangular contour in Figure 8.73 with R → ∞.]

Figure 8.73: Rectangular contour for Problem 25, with vertices −R, R, R + 2πi, and −R + 2πi.

26. Do the following integrals for fun!

a. For C the boundary of the square |x| ≤ 2, |y| ≤ 2,

∮_C dz/[z(z − 1)(z − 3)^2].

b. ∫_0^{π} sin^2 θ/(13 − 12 cos θ) dθ.

c. ∫_{−∞}^{∞} dx/(x^2 + 5x + 6).

d. ∫_0^{∞} cos πx/(1 − 9x^2) dx.

e. ∫_0^{∞} dx/[(x^2 + 9)(1 − x)^2].

f. ∫_0^{∞} √x/(1 + x)^2 dx.


9 Transform Techniques in Physics

“There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world.”, Nikolai Lobatchevsky (1792-1856)

9.1 Introduction

Some of the most powerful tools for solving problems in physics are transform methods. The idea is that one can transform the problem at hand to a new problem in a different space, hoping that the problem in the new space is easier to solve. Such transforms appear in many forms.

In this chapter we will explore the use of integral transforms. Given a function f(x), we define an integral transform to a new function F(k) as

F(k) = ∫_a^b f(x) K(x, k) dx.

Here K(x, k) is called the kernel of the transform. We will concentrate specifically on Fourier transforms,

f̂(k) = ∫_{−∞}^{∞} f(x) e^{ikx} dx,

and Laplace transforms,

F(s) = ∫_0^{∞} f(t) e^{−st} dt.

As we had seen in Chapter 3 and will see later in the book, the solutions of linear partial differential equations can be found by using the method of separation of variables to reduce solving partial differential equations (PDEs) to solving ordinary differential equations (ODEs). We can also use transform methods to transform the given PDE into ODEs or algebraic equations. Solving these equations, we then construct solutions of the PDE (or the ODE) using an inverse transform. A schematic of these processes is shown below, and we will describe in this chapter how one can use Fourier and Laplace transforms to this effect.

Figure 9.1: Schematic indicating that PDEs and ODEs can be transformed to simpler problems (ODEs and algebraic equations), solved in the new space, and transformed back to the original space.


9.1.1 Example 1 - The Linearized KdV Equation

As a relatively simple example, we consider the linearized Korteweg-de Vries (KdV) equation:

ut + cux + βuxxx = 0, −∞ < x < ∞. (9.1)

This equation governs the propagation of some small amplitude water waves. Its nonlinear counterpart has been at the center of attention in the last 40 years as a generic nonlinear wave equation.

The nonlinear counterpart to this equation is the Korteweg-de Vries (KdV) equation: ut + 6uux + uxxx = 0. This equation was derived by Diederik Johannes Korteweg (1848-1941) and his student Gustav de Vries (1866-1934). It governs the propagation of traveling waves called solitons. These were first observed by John Scott Russell (1808-1882) and were the source of a long debate on the existence of such waves. The history of this debate is interesting, and the KdV equation turned up as a generic equation in many other fields in the latter part of the last century, leading to many papers on nonlinear evolution equations.

We seek solutions that oscillate in space. So, we assume a solution of the form

u(x, t) = A(t) e^{ikx}.   (9.2)

Such behavior was seen in Chapters 3 and 6 for the wave equation for vibrating strings. In that case, we found plane wave solutions of the form e^{ik(x ± ct)}, which we could write as e^{i(kx ± ωt)} by defining ω = kc. We further note that one often seeks complex solutions as a linear combination of such forms and then takes the real part in order to obtain physical solutions. In this case, we will find plane wave solutions for which the angular frequency ω = ω(k) is a function of the wavenumber.

Inserting the guess (9.2) into the linearized KdV equation, we find that

dA/dt + i(ck − βk^3) A = 0.   (9.3)

Thus, we have converted the problem of seeking a solution of the partial differential equation into seeking a solution to an ordinary differential equation. This new problem is easier to solve. In fact, given an initial value, A(0), we have

A(t) = A(0) e^{−i(ck − βk^3)t}.   (9.4)

Therefore, the solution of the partial differential equation is

u(x, t) = A(0) e^{ik(x − (c − βk^2)t)}.   (9.5)

We note that this solution takes the form e^{i(kx − ωt)}, where

ω = ck − βk^3.

In general, the equation ω = ω(k) gives the angular frequency as a function of the wave number, k, and is called a dispersion relation. For β = 0, we see that c is nothing but the wave speed. For β ≠ 0, the wave speed is given as

v = ω/k = c − βk^2.

This suggests that waves with different wave numbers will travel at different speeds. Recalling that wave numbers are related to wavelengths, k = 2π/λ, this means that waves with different wavelengths will travel at different speeds. For example, an initial localized wave packet will not maintain its


shape. It is said to disperse, as the component waves of differing wavelengths will tend to part company.

For a general initial condition, we write the solutions to the linearized KdV as a superposition of plane waves. We can do this since the partial differential equation is linear. This should remind you of what we had done when using separation of variables. We first sought product solutions and then took a linear combination of the product solutions to obtain the general solution.

For this problem, we will sum over all wave numbers. The wave numbers are not restricted to discrete values. We instead have a continuous range of values. Thus, "summing" over k means that we have to integrate over the wave numbers. Thus, we have the general solution¹

u(x, t) = (1/2π) ∫_{−∞}^{∞} A(k, 0) e^{ik(x − (c − βk^2)t)} dk.   (9.6)

¹ The extra 2π has been introduced to be consistent with the definition of the Fourier transform, which is given later in the chapter.

Note that we have indicated that A is a function of k. This is similar to introducing the A_n's and B_n's in the series solution for waves on a string.

How do we determine the A(k, 0)'s? We introduce as an initial condition the initial wave profile u(x, 0) = f(x). Then, we have

f(x) = u(x, 0) = (1/2π) ∫_{−∞}^{∞} A(k, 0) e^{ikx} dk.   (9.7)

Thus, given f(x), we seek A(k, 0). In this chapter we will see that

A(k, 0) = ∫_{−∞}^{∞} f(x) e^{−ikx} dx.

This is what is called the Fourier transform of f(x). It is just one of the so-called integral transforms that we will consider in this chapter.

In Figure 9.2 we summarize the transform scheme. One can use methods like separation of variables to solve the partial differential equation directly, evolving the initial condition u(x, 0) into the solution u(x, t) at a later time.

Figure 9.2: Schematic of using Fourier transforms to solve a linear evolution equation: the initial condition u(x, 0) is transformed to A(k, 0), the ODE in wave number space is solved to give A(k, t), and the inverse Fourier transform returns the solution u(x, t).

The transform method works as follows. Starting with the initial condition, one computes its Fourier Transform (FT) as²

A(k, 0) = ∫_{−∞}^{∞} f(x) e^{−ikx} dx.

² Note: The Fourier transform as used in this section and the next section is defined slightly differently than how we will define it later. The sign of the exponentials has been reversed.


Applying the transform to the partial differential equation, one obtains an ordinary differential equation satisfied by A(k, t), which is simpler to solve than the original partial differential equation. Once A(k, t) has been found, one applies the Inverse Fourier Transform (IFT) to A(k, t) in order to get the desired solution:

u(x, t) = (1/2π) ∫_{−∞}^{∞} A(k, t) e^{ikx} dk
        = (1/2π) ∫_{−∞}^{∞} A(k, 0) e^{ik(x − (c − βk^2)t)} dk.   (9.8)
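This transform scheme is easy to experiment with numerically. The following sketch (not from the text) mimics the steps in Figure 9.2 using NumPy's discrete Fourier transform in place of the continuous one; the values of c, β, the periodic domain, and the initial Gaussian profile are arbitrary choices for illustration, and NumPy's FFT sign convention differs from the one used in this section, which only changes bookkeeping, not the idea.

import numpy as np

# Linearized KdV, u_t + c u_x + beta u_xxx = 0, solved spectrally:
# each Fourier mode A(k, t) obeys dA/dt + i(c k - beta k^3) A = 0,
# so A(k, t) = A(k, 0) exp(-i(c k - beta k^3) t).

c, beta = 1.0, 0.1                              # illustrative parameters (assumed)
L, N = 100.0, 1024                              # periodic domain of length L, N points
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # discrete wave numbers

u0 = np.exp(-x**2)                              # initial profile u(x, 0) = f(x)
A0 = np.fft.fft(u0)                             # discrete analogue of A(k, 0)

t = 10.0
At = A0 * np.exp(-1j * (c * k - beta * k**3) * t)   # solve the ODE in k-space
ut = np.real(np.fft.ifft(At))                       # inverse transform back to x-space

# The packet translates and spreads, since the speed c - beta k^2 depends on k.
print("initial peak:", u0.max(), "  peak at t =", t, ":", ut.max())

Running this shows the maximum of u decreasing in time, the numerical signature of the dispersion described above.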

9.1.2 Example 2 - The Free Particle Wave Function

A more familiar example in physics comes from quantum mechanics. The Schrödinger equation gives the wave function Ψ(x, t) for a particle under the influence of forces, represented through the corresponding potential function V(x). The one dimensional time dependent Schrödinger equation is given by

iℏΨ_t = −(ℏ^2/2m) Ψ_xx + VΨ.   (9.9)

We consider the case of a free particle in which there are no forces, V = 0. Then we have

iℏΨ_t = −(ℏ^2/2m) Ψ_xx.   (9.10)

Taking a hint from the study of the linearized KdV equation, we will assume that solutions of Equation (9.10) take the form

Ψ(x, t) = (1/2π) ∫_{−∞}^{∞} φ(k, t) e^{ikx} dk.

[Here we have opted to use the more traditional notation, φ(k, t), instead of A(k, t) as above.]

Inserting the expression for Ψ(x, t) into (9.10), we have

iℏ ∫_{−∞}^{∞} (dφ(k, t)/dt) e^{ikx} dk = −(ℏ^2/2m) ∫_{−∞}^{∞} φ(k, t)(ik)^2 e^{ikx} dk.

Since this is true for all t, we can equate the integrands, giving

iℏ dφ(k, t)/dt = (ℏ^2 k^2/2m) φ(k, t).

As with the last example, we have obtained a simple ordinary differential equation. The solution of this equation is given by

φ(k, t) = φ(k, 0) e^{−i(ℏk^2/2m)t}.

Applying the inverse Fourier transform, the general solution to the time dependent problem for a free particle is found as

Ψ(x, t) = (1/2π) ∫_{−∞}^{∞} φ(k, 0) e^{ik(x − (ℏk/2m)t)} dk.


We note that this takes the familiar form

Ψ(x, t) = (1/2π) ∫_{−∞}^{∞} φ(k, 0) e^{i(kx − ωt)} dk,

where the dispersion relation is found as

ω = ℏk^2/2m.

The wave speed is given as

v = ω/k = ℏk/2m.

As a special note, we see that this is not the particle velocity! Recall that the momentum is given as p = ℏk.³ So, this wave speed is v = p/2m, which is only half the classical particle velocity! A simple manipulation of this result will clarify the "problem."

³ Since p = ℏk, we also see that the dispersion relation is given by ω = ℏk^2/2m = p^2/2mℏ = E/ℏ.

We assume that particles can be represented by a localized wave function. This is the case if the major contributions to the integral are centered about a central wave number, k_0. Thus, we can expand ω(k) about k_0:

ω(k) = ω_0 + ω′_0 (k − k_0) + . . . .   (9.11)

Here ω_0 = ω(k_0) and ω′_0 = ω′(k_0). Inserting this expression into the integral representation for Ψ(x, t), we have

Ψ(x, t) = (1/2π) ∫_{−∞}^{∞} φ(k, 0) e^{i(kx − ω_0 t − ω′_0 (k − k_0)t − ...)} dk.

We now make the change of variables, s = k − k_0, and rearrange the resulting factors to find

Ψ(x, t) ≈ (1/2π) ∫_{−∞}^{∞} φ(k_0 + s, 0) e^{i((k_0 + s)x − (ω_0 + ω′_0 s)t)} ds
        = (1/2π) e^{i(−ω_0 t + k_0 ω′_0 t)} ∫_{−∞}^{∞} φ(k_0 + s, 0) e^{i(k_0 + s)(x − ω′_0 t)} ds
        = e^{i(−ω_0 t + k_0 ω′_0 t)} Ψ(x − ω′_0 t, 0).   (9.12)

Summarizing, for an initially localized wave packet, Ψ(x, 0), with wave numbers grouped around k_0, the wave function, Ψ(x, t), is a translated version of the initial wave function up to a phase factor. In quantum mechanics we are more interested in the probability density for locating a particle, so from

|Ψ(x, t)|^2 = |Ψ(x − ω′_0 t, 0)|^2

we see that the "velocity of the wave packet" is found to be

ω′_0 = dω/dk |_{k = k_0} = ℏk_0/m.

This corresponds to the classical velocity of the particle (v_part = p/m). Thus, one usually defines ω′_0 to be the group velocity,

v_g = dω/dk,


and the former velocity as the phase velocity,

v_p = ω/k.
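A short numerical experiment (a sketch, not from the text, in units with ℏ = m = 1 and an arbitrary Gaussian envelope) illustrates the distinction: the envelope of a packet centered at wave number k_0 travels at the group velocity dω/dk = k_0, not at the phase velocity ω/k = k_0/2.

import numpy as np

# Free-particle evolution with omega(k) = k^2/2 (hbar = m = 1).
# The envelope of a packet centered at k0 should move at the group
# velocity k0, twice the phase velocity k0/2.

k0 = 5.0
L, N = 400.0, 8192
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

psi0 = np.exp(-x**2 / 4) * np.exp(1j * k0 * x)        # localized packet
phi0 = np.fft.fft(psi0)                               # phi(k, 0)

t = 20.0
psi_t = np.fft.ifft(phi0 * np.exp(-1j * (k**2 / 2) * t))

x_peak = x[np.argmax(np.abs(psi_t))]
print("peak near", x_peak, "; group velocity predicts", k0 * t,
      "; phase velocity would predict", k0 * t / 2)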

9.1.3 Transform Schemes

These examples have illustrated one of the features of transform theory. Given a partial differential equation, we can transform the equation from spatial variables to wave number space, or from time variables to frequency space. In the new space the time evolution is simpler. In these cases, the evolution was governed by an ordinary differential equation. One solves the problem in the new space and then transforms back to the original space. This is depicted in Figure 9.3 for the Schrödinger equation and was shown in Figure 9.2 for the linearized KdV equation.

Figure 9.3: The scheme for solving the Schrödinger equation using Fourier transforms. The goal is to solve for Ψ(x, t) given Ψ(x, 0). Instead of a direct solution in coordinate space (on the left side), one can first transform the initial condition, obtaining φ(k, 0) in wave number space. The governing equation in the new space is found by transforming the PDE to get an ODE. This simpler equation is solved to obtain φ(k, t). Then an inverse transform yields the solution of the original equation.

This is similar to the solution of the system of ordinary differential equations in Chapter 3, ẋ = Ax. In that case we diagonalized the system using the transformation x = Sy. This led to a simpler system ẏ = Λy, where Λ = S^{−1}AS. Solving for y, we inverted the solution to obtain x. Similarly, one can apply this diagonalization to the solution of linear algebraic systems of equations. The general scheme is shown in Figure 9.4.

Figure 9.4: The scheme for solving the linear system of ODEs ẋ = Ax. One finds a transformation between x and y of the form x = Sy which diagonalizes the system. The resulting system, ẏ = Λy with Λ = S^{−1}AS, is easier to solve for y. Then, one uses the inverse transformation to obtain the solution to the original problem.
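The matrix version of this scheme is easy to carry out explicitly. Here is a brief sketch (not from the text) for an arbitrary 2×2 example: the eigenvector matrix S plays the role of the transform, the diagonal system is solved trivially, and the inverse transform returns the solution of ẋ = Ax.

import numpy as np
from scipy.linalg import expm

# Solve x' = A x by the transform scheme of Figure 9.4:
# y = S^{-1} x decouples the system into y' = Lambda y.

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # arbitrary example with real, distinct eigenvalues
x0 = np.array([1.0, 0.0])

lam, S = np.linalg.eig(A)               # A = S Lambda S^{-1}
y0 = np.linalg.solve(S, x0)             # transform the initial condition

t = 1.5
y_t = y0 * np.exp(lam * t)              # decoupled solution in y-space
x_t = (S @ y_t).real                    # inverse transform back to x-space

print(x_t, "vs direct matrix exponential:", expm(A * t) @ x0)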

Similar transform constructions occur for many other types of problems. We will end this chapter with a study of Laplace transforms, which are useful in the study of initial value problems, particularly for linear ordinary differential equations with constant coefficients. A similar scheme for using Laplace transforms is depicted in Figure 9.30.

In this chapter we will begin with the study of Fourier transforms. These will provide an integral representation of functions defined on the real line. Such functions can also represent analog signals. Analog signals are continuous signals which can be represented as a sum over a continuous set of frequencies, as opposed to the sum over discrete frequencies, which Fourier series were used to represent in an earlier chapter. We will then investigate a related transform, the Laplace transform, which is useful in solving initial value problems such as those encountered in ordinary differential equations.

9.2 Complex Exponential Fourier Series

Before deriving the Fourier transform, we will need to rewrite the trigonometric Fourier series representation as a complex exponential Fourier series. We first recall from an earlier chapter the trigonometric Fourier series representation of a function defined on [−π, π] with period 2π. The Fourier series is given by

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} (a_n cos nx + b_n sin nx),   (9.13)

where the Fourier coefficients were found as

a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx, n = 0, 1, . . . ,
b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx, n = 1, 2, . . . .   (9.14)

In order to derive the exponential Fourier series, we replace the trigonometric functions with exponential functions and collect like exponential terms. This gives

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} [ a_n (e^{inx} + e^{−inx})/2 + b_n (e^{inx} − e^{−inx})/(2i) ]
     = a_0/2 + Σ_{n=1}^{∞} ((a_n − i b_n)/2) e^{inx} + Σ_{n=1}^{∞} ((a_n + i b_n)/2) e^{−inx}.   (9.15)

The coefficients of the complex exponentials can be rewritten by defining

c_n = (1/2)(a_n + i b_n), n = 1, 2, . . . .   (9.16)

This implies that

c̄_n = (1/2)(a_n − i b_n), n = 1, 2, . . . .   (9.17)

So far the representation is rewritten as

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} c̄_n e^{inx} + Σ_{n=1}^{∞} c_n e^{−inx}.


Re-indexing the first sum, by introducing k = −n, we can write

f(x) ∼ a_0/2 + Σ_{k=−1}^{−∞} c̄_{−k} e^{−ikx} + Σ_{n=1}^{∞} c_n e^{−inx}.

Since k is a dummy index, we replace it with a new n as

f(x) ∼ a_0/2 + Σ_{n=−1}^{−∞} c̄_{−n} e^{−inx} + Σ_{n=1}^{∞} c_n e^{−inx}.

We can now combine all of the terms into a single sum. We first define c_n for negative n's by

c_n = c̄_{−n}, n = −1, −2, . . . .

Letting c_0 = a_0/2, we can write the complex exponential Fourier series representation as

f(x) ∼ Σ_{n=−∞}^{∞} c_n e^{−inx},   (9.18)

where

c_n = (1/2)(a_n + i b_n), n = 1, 2, . . . ,
c_n = (1/2)(a_{−n} − i b_{−n}), n = −1, −2, . . . ,
c_0 = a_0/2.   (9.19)

Given such a representation, we would like to write out the integral forms of the coefficients, c_n. So, we replace the a_n's and b_n's with their integral representations and replace the trigonometric functions with complex exponential functions. Doing this, we have for n = 1, 2, . . . ,

c_n = (1/2)(a_n + i b_n)
    = (1/2) [ (1/π) ∫_{−π}^{π} f(x) cos nx dx + (i/π) ∫_{−π}^{π} f(x) sin nx dx ]
    = (1/2π) ∫_{−π}^{π} f(x) (e^{inx} + e^{−inx})/2 dx + (i/2π) ∫_{−π}^{π} f(x) (e^{inx} − e^{−inx})/(2i) dx
    = (1/2π) ∫_{−π}^{π} f(x) e^{inx} dx.   (9.20)

It is a simple matter to determine the c_n's for other values of n. For n = 0, we have that

c_0 = a_0/2 = (1/2π) ∫_{−π}^{π} f(x) dx.

For n = −1, −2, . . ., we find that

c_n = c̄_{−n} = conj[ (1/2π) ∫_{−π}^{π} f(x) e^{−inx} dx ] = (1/2π) ∫_{−π}^{π} f(x) e^{inx} dx.

Therefore, we have obtained the complex exponential Fourier series coefficients for all n. Now we can define the complex exponential Fourier series for the function f(x) defined on [−π, π] as shown below.


Complex Exponential Series for f(x) defined on [−π, π].

f(x) ∼ Σ_{n=−∞}^{∞} c_n e^{−inx},   (9.21)

c_n = (1/2π) ∫_{−π}^{π} f(x) e^{inx} dx.   (9.22)
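As a quick numerical illustration (a sketch, not part of the text), the coefficients (9.22) can be approximated by quadrature for a sample function and the truncated series (9.21) summed; the choice f(x) = x is arbitrary.

import numpy as np

# Approximate c_n = (1/2pi) int_{-pi}^{pi} f(x) e^{inx} dx by a Riemann sum
# and evaluate the truncated series sum_{|n| <= N} c_n e^{-inx}.

f = lambda x: x                                       # arbitrary test function on [-pi, pi]

M = 4000
xq = -np.pi + 2 * np.pi * (np.arange(M) + 0.5) / M    # midpoint grid
dx = 2 * np.pi / M

def c(n):
    return np.sum(f(xq) * np.exp(1j * n * xq)) * dx / (2 * np.pi)

N = 50
x0 = 1.0
partial = sum(c(n) * np.exp(-1j * n * x0) for n in range(-N, N + 1))
print(partial.real, "vs f(x0) =", f(x0))              # slow convergence toward 1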

We can easily extend the above analysis to other intervals. For example, for x ∈ [−L, L] the Fourier trigonometric series is

f(x) ∼ a_0/2 + Σ_{n=1}^{∞} ( a_n cos(nπx/L) + b_n sin(nπx/L) )

with Fourier coefficients

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, n = 0, 1, . . . ,
b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx, n = 1, 2, . . . .

This can be rewritten as an exponential Fourier series of the form

Complex Exponential Series for f(x) defined on [−L, L].

f(x) ∼ Σ_{n=−∞}^{∞} c_n e^{−inπx/L},   (9.23)

c_n = (1/2L) ∫_{−L}^{L} f(x) e^{inπx/L} dx.   (9.24)

We can now use this complex exponential Fourier series for functions defined on [−L, L] to derive the Fourier transform by letting L get large. This will lead to a sum over a continuous set of frequencies, as opposed to the sum over discrete frequencies, which Fourier series represent.

9.3 Exponential Fourier Transform

Both the trigonometric and complex exponential Fourier series provide us with representations of a class of functions of finite period in terms of sums over a discrete set of frequencies. In particular, for functions defined on x ∈ [−L, L], the period of the Fourier series representation is 2L. We can write the arguments in the exponentials, e^{−inπx/L}, in terms of the angular frequency, ω_n = nπ/L, as e^{−iω_n x}. We note that the frequencies, ν_n, are then defined through ω_n = 2πν_n = nπ/L. Therefore, the complex exponential series is seen to be a sum over a discrete, or countable, set of frequencies.

We would now like to extend the finite interval to an infinite interval, x ∈ (−∞, ∞), and to extend the discrete set of (angular) frequencies to a


continuous range of frequencies, ω ∈ (−∞, ∞). One can do this rigorously. It amounts to letting L and n get large while keeping n/L fixed.

We first define ∆ω = π/L, so that ω_n = n∆ω. Inserting the Fourier coefficients (9.24) into Equation (9.23), we have

f(x) ∼ Σ_{n=−∞}^{∞} c_n e^{−inπx/L}
     = Σ_{n=−∞}^{∞} ( (1/2L) ∫_{−L}^{L} f(ξ) e^{inπξ/L} dξ ) e^{−inπx/L}
     = Σ_{n=−∞}^{∞} ( (∆ω/2π) ∫_{−L}^{L} f(ξ) e^{iω_n ξ} dξ ) e^{−iω_n x}.   (9.25)

Now, we let L get large, so that ∆ω becomes small and ω_n approaches the angular frequency ω. Then,

f(x) ∼ lim_{∆ω→0, L→∞} (1/2π) Σ_{n=−∞}^{∞} ( ∫_{−L}^{L} f(ξ) e^{iω_n ξ} dξ ) e^{−iω_n x} ∆ω
     = (1/2π) ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} f(ξ) e^{iωξ} dξ ) e^{−iωx} dω.   (9.26)

Looking at this last result, we formally arrive at the definition of the Fourier transform. It is embodied in the inner integral and can be written as

F[f] = f̂(ω) = ∫_{−∞}^{∞} f(x) e^{iωx} dx.   (9.27)

This is a generalization of the Fourier coefficients (9.24).

Once we know the Fourier transform, f̂(ω), then we can reconstruct the original function, f(x), using the inverse Fourier transform, which is given by the outer integration,

F^{−1}[f̂] = f(x) = (1/2π) ∫_{−∞}^{∞} f̂(ω) e^{−iωx} dω.   (9.28)

We note that it can be proven that the Fourier transform exists when f(x) is absolutely integrable, i.e.,

∫_{−∞}^{∞} |f(x)| dx < ∞.

Such functions are said to be L¹.

We combine these results below, defining the Fourier and inverse Fourier transforms and indicating that they are inverse operations of each other. We will then prove the first of the equations, (9.31). [The second equation, (9.32), follows in a similar way.]


The Fourier transform and inverse Fourier transform are inverse operations. Defining the Fourier transform as

F[f] = f̂(ω) = ∫_{−∞}^{∞} f(x) e^{iωx} dx   (9.29)

and the inverse Fourier transform as

F^{−1}[f̂] = f(x) = (1/2π) ∫_{−∞}^{∞} f̂(ω) e^{−iωx} dω,   (9.30)

then

F^{−1}[F[f]] = f(x)   (9.31)

and

F[F^{−1}[f̂]] = f̂(ω).   (9.32)
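The inverse relation (9.31) can be checked numerically by brute-force quadrature over truncated ranges. The following sketch (not from the text) uses the two-sided exponential e^{−|x|} as an arbitrary absolutely integrable test function; the truncation limits and grids are ad hoc choices, so the agreement is only approximate.

import numpy as np

# Check F^{-1}[F[f]](x0) ~ f(x0) with fhat(w) = int f(x) e^{iwx} dx and
# f(x) = (1/2pi) int fhat(w) e^{-iwx} dw, truncating both integrals.

f = lambda x: np.exp(-np.abs(x))

X = np.linspace(-20, 20, 4001)          # x grid (f is negligible beyond this)
W = np.linspace(-100, 100, 8001)        # omega grid (fhat ~ 2/(1+w^2) decays slowly)
dX, dW = X[1] - X[0], W[1] - W[0]

fhat = np.array([np.sum(f(X) * np.exp(1j * w * X)) * dX for w in W])

x0 = 0.7
f_back = np.sum(fhat * np.exp(-1j * W * x0)) * dW / (2 * np.pi)
print(f_back.real, "vs", f(x0))          # agree to roughly two decimal places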

Proof. The proof is carried out by inserting the definition of the Fourier transform, (9.29), into the inverse transform definition, (9.30), and then interchanging the orders of integration. Thus, we have

F^{−1}[F[f]] = (1/2π) ∫_{−∞}^{∞} F[f] e^{−iωx} dω
            = (1/2π) ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f(ξ) e^{iωξ} dξ ] e^{−iωx} dω
            = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(ξ) e^{iω(ξ−x)} dξ dω
            = (1/2π) ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} e^{iω(ξ−x)} dω ] f(ξ) dξ.   (9.33)

In order to complete the proof, we need to evaluate the inside integral, which does not depend upon f(x). This is an improper integral, so we first define

D_Ω(x) = ∫_{−Ω}^{Ω} e^{iωx} dω

and compute the inner integral as

∫_{−∞}^{∞} e^{iω(ξ−x)} dω = lim_{Ω→∞} D_Ω(ξ − x).

Figure 9.5: A plot of the function D_Ω(x) for Ω = 4.

We can compute D_Ω(x). A simple evaluation yields

D_Ω(x) = ∫_{−Ω}^{Ω} e^{iωx} dω
       = [ e^{iωx}/(ix) ]_{−Ω}^{Ω}
       = (e^{ixΩ} − e^{−ixΩ})/(ix)
       = 2 sin xΩ / x.   (9.34)

A plot of this function is in Figure 9.5 for Ω = 4. For large Ω the peak grows and the values of D_Ω(x) for x ≠ 0 tend to zero, as shown in Figure 9.6. In fact, as x approaches 0, D_Ω(x) approaches 2Ω. For x ≠ 0, D_Ω(x) tends to zero.

We further note that

lim_{Ω→∞} D_Ω(x) = 0, x ≠ 0,

and lim_{Ω→∞} D_Ω(x) is infinite at x = 0. However, the area is constant for each Ω. In fact,

∫_{−∞}^{∞} D_Ω(x) dx = 2π.

We can show this by recalling the computation in Example 8.42,

∫_{−∞}^{∞} (sin x / x) dx = π.

Then,

∫_{−∞}^{∞} D_Ω(x) dx = ∫_{−∞}^{∞} (2 sin xΩ / x) dx
                      = ∫_{−∞}^{∞} 2 (sin y / y) dy
                      = 2π.   (9.35)

Figure 9.6: A plot of the function D_Ω(x) for Ω = 40.

Another way to look at D_Ω(x) is to consider the sequence of functions f_n(x) = sin nx/(πx), n = 1, 2, . . . . Then we have shown that this sequence of functions satisfies the two properties,

lim_{n→∞} f_n(x) = 0, x ≠ 0,

∫_{−∞}^{∞} f_n(x) dx = 1.

This is a key representation of such generalized functions. The limiting value vanishes at all but one point, but the area is finite.

Such behavior can be seen for the limit of other sequences of functions. For example, consider the sequence of functions

f_n(x) = { 0, |x| > 1/n; n/2, |x| < 1/n }.

This is a sequence of functions as shown in Figure 9.7. As n → ∞, we find the limit is zero for x ≠ 0 and is infinite for x = 0. However, the area under each member of the sequence is one. Thus, the limiting function is zero at most points but has area one.

Figure 9.7: A plot of the functions f_n(x) for n = 2, 4, 8.

The limit is not really a function. It is a generalized function. It is called the Dirac delta function, which is defined by

1. δ(x) = 0 for x ≠ 0.

2. ∫_{−∞}^{∞} δ(x) dx = 1.
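A quick numerical check (a sketch, not from the text) shows the box sequence above behaving like a delta function when integrated against a smooth test function; the test function is an arbitrary choice.

import numpy as np

# f_n(x) = n/2 on |x| < 1/n, 0 otherwise: unit area, increasingly concentrated.
# Integrating against a smooth g should approach g(0) as n grows.

g = lambda x: np.cos(x) + x**2           # arbitrary smooth test function

for n in (1, 10, 100, 1000):
    x = np.linspace(-1.0 / n, 1.0 / n, 20001)
    dx = x[1] - x[0]
    integral = np.sum((n / 2.0) * g(x)) * dx
    print(n, integral)                    # approaches g(0) = 1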


Before returning to the proof that the inverse Fourier transform of the Fourier transform is the identity, we state one more property of the Dirac delta function, which we will prove in the next section. Namely, we will show that

∫_{−∞}^{∞} δ(x − a) f(x) dx = f(a).

Returning to the proof, we now have that

∫_{−∞}^{∞} e^{iω(ξ−x)} dω = lim_{Ω→∞} D_Ω(ξ − x) = 2πδ(ξ − x).

Inserting this into (9.33), we have

F^{−1}[F[f]] = (1/2π) ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} e^{iω(ξ−x)} dω ] f(ξ) dξ
            = (1/2π) ∫_{−∞}^{∞} 2πδ(ξ − x) f(ξ) dξ
            = f(x).   (9.36)

Thus, we have proven that the inverse transform of the Fourier transform of f is f.

9.4 The Dirac Delta Function

In the last section we introduced the Dirac delta function, δ(x). As noted above, this is one example of what is known as a generalized function, or a distribution. Dirac had introduced this function in the 1930's in his study of quantum mechanics as a useful tool. It was later studied in a general theory of distributions and found to be more than a simple tool used by physicists. The Dirac delta function, as any distribution, only makes sense under an integral.

P. A. M. Dirac (1902-1984) introduced the δ function in his book, The Principles of Quantum Mechanics, 4th Ed., Oxford University Press, 1958, originally published in 1930, as part of his orthogonality statement for a basis of functions in a Hilbert space, < ξ′|ξ′′ > = c δ(ξ′ − ξ′′), in the same way we introduced discrete orthogonality using the Kronecker delta.

Two properties were used in the last section. First, one has that the area under the delta function is one,

∫_{−∞}^{∞} δ(x) dx = 1.

Integration over more general intervals gives

∫_a^b δ(x) dx = { 1, 0 ∈ [a, b]; 0, 0 ∉ [a, b] }.   (9.37)

The other property that was used was the sifting property:

∫_{−∞}^{∞} δ(x − a) f(x) dx = f(a).

This can be seen by noting that the delta function is zero everywhere except at x = a. Therefore, the integrand is zero everywhere and the only contribution from f(x) will be from x = a. So, we can replace f(x) with f(a) under the integral. Since f(a) is a constant, we have that

∫_{−∞}^{∞} δ(x − a) f(x) dx = ∫_{−∞}^{∞} δ(x − a) f(a) dx
                            = f(a) ∫_{−∞}^{∞} δ(x − a) dx = f(a).   (9.38)

Properties of the Dirac δ-function:

∫_{−∞}^{∞} δ(x − a) f(x) dx = f(a).

∫_{−∞}^{∞} δ(ax) dx = (1/|a|) ∫_{−∞}^{∞} δ(y) dy.

∫_{−∞}^{∞} δ(f(x)) dx = ∫_{−∞}^{∞} Σ_{j=1}^{n} [ δ(x − x_j)/|f′(x_j)| ] dx.

(For n simple roots.)

These and other properties are often written outside the integral:

δ(ax) = (1/|a|) δ(x).

δ(−x) = δ(x).

δ((x − a)(x − b)) = [δ(x − a) + δ(x − b)] / |a − b|.

δ(f(x)) = Σ_j δ(x − x_j)/|f′(x_j)|, for f(x_j) = 0, f′(x_j) ≠ 0.

Another property results from using a scaled argument, ax. In this case we show that

δ(ax) = |a|^{−1} δ(x).   (9.39)

As usual, this only has meaning under an integral sign. So, we place δ(ax) inside an integral and make a substitution y = ax:

∫_{−∞}^{∞} δ(ax) dx = lim_{L→∞} ∫_{−L}^{L} δ(ax) dx
                    = lim_{L→∞} (1/a) ∫_{−aL}^{aL} δ(y) dy.   (9.40)

If a > 0 then

∫_{−∞}^{∞} δ(ax) dx = (1/a) ∫_{−∞}^{∞} δ(y) dy.

However, if a < 0 then

∫_{−∞}^{∞} δ(ax) dx = (1/a) ∫_{∞}^{−∞} δ(y) dy = −(1/a) ∫_{−∞}^{∞} δ(y) dy.

The overall difference in a multiplicative minus sign can be absorbed into one expression by changing the factor 1/a to 1/|a|. Thus,

∫_{−∞}^{∞} δ(ax) dx = (1/|a|) ∫_{−∞}^{∞} δ(y) dy.   (9.41)

Example 9.1. Evaluate ∫_{−∞}^{∞} (5x + 1) δ(4(x − 2)) dx.

This is a straightforward integration:

∫_{−∞}^{∞} (5x + 1) δ(4(x − 2)) dx = (1/4) ∫_{−∞}^{∞} (5x + 1) δ(x − 2) dx = 11/4.

The first step is to write δ(4(x − 2)) = (1/4) δ(x − 2). Then, the final evaluation is given by

(1/4) ∫_{−∞}^{∞} (5x + 1) δ(x − 2) dx = (1/4)(5(2) + 1) = 11/4.

Even more general than δ(ax) is the delta function δ(f(x)). The integral of δ(f(x)) can be evaluated depending upon the number of zeros of f(x). If there is only one zero, f(x_1) = 0, then one has that

∫_{−∞}^{∞} δ(f(x)) dx = ∫_{−∞}^{∞} (1/|f′(x_1)|) δ(x − x_1) dx.

This can be proven using the substitution y = f(x) and is left as an exercise for the reader. This result is often written as

δ(f(x)) = (1/|f′(x_1)|) δ(x − x_1),

again keeping in mind that this only has meaning when placed under an integral.


Example 9.2. Evaluate ∫_{−∞}^{∞} δ(3x − 2) x^2 dx.

This is not a simple δ(x − a). So, we need to find the zeros of f(x) = 3x − 2. There is only one, x = 2/3. Also, |f′(x)| = 3. Therefore, we have

∫_{−∞}^{∞} δ(3x − 2) x^2 dx = ∫_{−∞}^{∞} (1/3) δ(x − 2/3) x^2 dx = (1/3)(2/3)^2 = 4/27.

Note that this integral can be evaluated the long way by using the substitution y = 3x − 2. Then, dy = 3 dx and x = (y + 2)/3. This gives

∫_{−∞}^{∞} δ(3x − 2) x^2 dx = (1/3) ∫_{−∞}^{∞} δ(y) ((y + 2)/3)^2 dy = (1/3)(4/9) = 4/27.

More generally, one can show that when f(x_j) = 0 and f′(x_j) ≠ 0 for j = 1, 2, . . . , n (i.e., when one has n simple zeros), then

δ(f(x)) = Σ_{j=1}^{n} (1/|f′(x_j)|) δ(x − x_j).
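This formula is easy to test numerically by replacing δ with a narrow normalized Gaussian. The sketch below (not from the text) redoes Example 9.2 this way; the grid and the sequence of widths are arbitrary choices.

import numpy as np

# Approximate delta(u) by a normalized Gaussian of width eps and check
# int delta(3x - 2) x^2 dx -> (1/|f'(2/3)|) (2/3)^2 = 4/27.

def delta_eps(u, eps):
    return np.exp(-u**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-2.0, 3.0, 400001)
dx = x[1] - x[0]
for eps in (0.1, 0.01, 0.001):
    val = np.sum(delta_eps(3 * x - 2, eps) * x**2) * dx
    print(eps, val, "   exact:", 4 / 27)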

Example 9.3. Evaluate ∫_0^{2π} cos x δ(x^2 − π^2) dx.

In this case the argument of the delta function has two simple roots. Namely, f(x) = x^2 − π^2 = 0 when x = ±π. Furthermore, f′(x) = 2x. Therefore, |f′(±π)| = 2π. This gives

δ(x^2 − π^2) = (1/2π) [δ(x − π) + δ(x + π)].

Inserting this expression into the integral and noting that x = −π is not in the integration interval, we have

∫_0^{2π} cos x δ(x^2 − π^2) dx = (1/2π) ∫_0^{2π} cos x [δ(x − π) + δ(x + π)] dx
                               = (1/2π) cos π = −1/(2π).   (9.42)

Figure 9.8: The Heaviside step function, H(x).

Example 9.4. Show H′(x) = δ(x), where the Heaviside function (or step function) is defined as

H(x) = { 0, x < 0; 1, x > 0 }

and is shown in Figure 9.8.

Looking at the plot, it is easy to see that H′(x) = 0 for x ≠ 0. In order to check that this gives the delta function, we need to compute the area integral. Therefore, we have

∫_{−∞}^{∞} H′(x) dx = H(x) |_{−∞}^{∞} = 1 − 0 = 1.

Thus, H′(x) satisfies the two properties of the Dirac delta function.


9.5 Properties of the Fourier Transform

We now return to the Fourier transform. Before actually computing the Fourier transform of some functions, we prove a few of the properties of the Fourier transform.

First we note that there are several forms that one may encounter for the Fourier transform. In applications functions can either be functions of time, f(t), or space, f(x). The corresponding Fourier transforms are then written as

f̂(ω) = ∫_{−∞}^{∞} f(t) e^{iωt} dt,   (9.43)

or

f̂(k) = ∫_{−∞}^{∞} f(x) e^{ikx} dx.   (9.44)

ω is called the angular frequency and is related to the frequency ν by ω = 2πν. The units of frequency are typically given in Hertz (Hz). Sometimes the frequency is denoted by f when there is no confusion. k is called the wavenumber. It has units of inverse length and is related to the wavelength, λ, by k = 2π/λ.

We explore a few basic properties of the Fourier transform and use them in examples in the next section.

1. Linearity: For any functions f(x) and g(x) for which the Fourier transform exists and constant a, we have

F[f + g] = F[f] + F[g]

and

F[a f] = a F[f].

These simply follow from the properties of integration and establish the linearity of the Fourier transform.

2. Transform of a Derivative: F[df/dx] = −ik f̂(k)

Here we compute the Fourier transform (9.29) of the derivative by inserting the derivative in the Fourier integral and using integration by parts:

F[df/dx] = ∫_{−∞}^{∞} (df/dx) e^{ikx} dx
         = lim_{L→∞} [ f(x) e^{ikx} ]_{−L}^{L} − ik ∫_{−∞}^{∞} f(x) e^{ikx} dx.   (9.45)

The limit will vanish if we assume that lim_{x→±∞} f(x) = 0. The last integral is recognized as the Fourier transform of f, proving the given property.


3. Higher Order Derivatives: F[d^n f/dx^n] = (−ik)^n f̂(k)

The proof of this property follows from the last result, or by doing several integrations by parts. We will consider the case when n = 2. Noting that the second derivative is the derivative of f′(x) and applying the last result, we have

F[d^2 f/dx^2] = F[(d/dx) f′] = −ik F[df/dx] = (−ik)^2 f̂(k).   (9.46)

This result will be true if

lim_{x→±∞} f(x) = 0 and lim_{x→±∞} f′(x) = 0.

The generalization to the transform of the nth derivative easily follows.

4. Multiplication by x: F[x f(x)] = −i (d/dk) f̂(k)

This property can be shown by using the fact that (d/dk) e^{ikx} = ix e^{ikx} and the ability to differentiate an integral with respect to a parameter:

F[x f(x)] = ∫_{−∞}^{∞} x f(x) e^{ikx} dx
          = ∫_{−∞}^{∞} f(x) (d/dk)( (1/i) e^{ikx} ) dx
          = −i (d/dk) ∫_{−∞}^{∞} f(x) e^{ikx} dx
          = −i (d/dk) f̂(k).   (9.47)

This result can be generalized to F[x^n f(x)] as an exercise.

5. Shifting Properties: For constant a, we have the following shifting properties:

f(x − a) ↔ e^{ika} f̂(k),   (9.48)
f(x) e^{−iax} ↔ f̂(k − a).   (9.49)

Here we have denoted the Fourier transform pairs using a double arrow as f(x) ↔ f̂(k). These are easily proven by inserting the desired forms into the definition of the Fourier transform (9.29), or the inverse Fourier transform (9.30). The first shift property (9.48) is shown by the following argument. We evaluate the Fourier transform:

F[f(x − a)] = ∫_{−∞}^{∞} f(x − a) e^{ikx} dx.

Now perform the substitution y = x − a. Then,

F[f(x − a)] = ∫_{−∞}^{∞} f(y) e^{ik(y + a)} dy
            = e^{ika} ∫_{−∞}^{∞} f(y) e^{iky} dy
            = e^{ika} f̂(k).   (9.50)


The second shift property (9.49) follows in a similar way.

6. Convolution of Functions: We define the convolution of two functions f(x) and g(x) as

(f ∗ g)(x) = ∫_{−∞}^{∞} f(t) g(x − t) dt.   (9.51)

Then, the Fourier transform of the convolution is the product of the Fourier transforms of the individual functions:

F[f ∗ g] = f̂(k) ĝ(k).   (9.52)

We will return to the proof of this property in Section 9.6.
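These properties are easy to verify numerically. The following sketch (not from the text) checks the derivative property F[f′](k) = −ik f̂(k) by direct quadrature for a Gaussian, using the convention (9.44); the values of a and k are arbitrary.

import numpy as np

# Check the derivative property F[f'](k) = -ik fhat(k) for a Gaussian,
# with the convention fhat(k) = int f(x) e^{ikx} dx.

a = 2.0
f  = lambda x: np.exp(-a * x**2 / 2)
fp = lambda x: -a * x * np.exp(-a * x**2 / 2)      # f'(x)

x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
k = 1.3                                             # an arbitrary wave number

fhat_k  = np.sum(f(x)  * np.exp(1j * k * x)) * dx
fphat_k = np.sum(fp(x) * np.exp(1j * k * x)) * dx

print(fphat_k, "vs", -1j * k * fhat_k)              # should agree closely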

9.5.1 Fourier Transform Examples

In this section we will compute the Fourier transforms of several functions.

Example 9.5. Find the Fourier transform of a Gaussian, f(x) = e^{−ax^2/2}.

Figure 9.9: Plots of the Gaussian function f(x) = e^{−ax^2/2} for a = 1, 2, 3.

This function, shown in Figure 9.9, is called the Gaussian function. It has many applications in areas such as quantum mechanics, molecular theory, probability and heat diffusion. We will compute the Fourier transform of this function and show that the Fourier transform of a Gaussian is a Gaussian. In the derivation we will introduce classic techniques for computing such integrals.

We begin by applying the definition of the Fourier transform,

f̂(k) = ∫_{−∞}^{∞} f(x) e^{ikx} dx = ∫_{−∞}^{∞} e^{−ax^2/2 + ikx} dx.   (9.53)

The first step in computing this integral is to complete the square in the argument of the exponential. Our goal is to rewrite this integral so that a simple substitution will lead to a classic integral of the form ∫_{−∞}^{∞} e^{−βy^2} dy, which we can integrate. The completion of the square follows as usual:

−(a/2)x^2 + ikx = −(a/2)[ x^2 − (2ik/a)x ]
                = −(a/2)[ x^2 − (2ik/a)x + (−ik/a)^2 − (−ik/a)^2 ]
                = −(a/2)(x − ik/a)^2 − k^2/(2a).   (9.54)

We now put this expression into the integral and make the substitutions y = x − ik/a and β = a/2:

f̂(k) = ∫_{−∞}^{∞} e^{−ax^2/2 + ikx} dx
     = e^{−k^2/(2a)} ∫_{−∞}^{∞} e^{−(a/2)(x − ik/a)^2} dx
     = e^{−k^2/(2a)} ∫_{−∞ − ik/a}^{∞ − ik/a} e^{−βy^2} dy.   (9.55)


One would be tempted to absorb the −ik/a terms in the limits of integration. However, we know from our previous study that the integration takes place over a contour in the complex plane as shown in Figure 9.10.

Figure 9.10: Simple horizontal contour, z = x − ik/a.

In this case we can deform this horizontal contour to a contour along the real axis, since we will not cross any singularities of the integrand. So, we now safely write

f̂(k) = e^{−k^2/(2a)} ∫_{−∞}^{∞} e^{−βy^2} dy.

The resulting integral is a classic integral and can be performed using a standard trick. Define I by⁴

I = ∫_{−∞}^{∞} e^{−βy^2} dy.

⁴ Here we show that ∫_{−∞}^{∞} e^{−βy^2} dy = √(π/β). Note that we solved the β = 1 case in Example 5.11, so a simple variable transformation z = √β y is all that is needed to get the answer. However, it cannot hurt to see this classic derivation again.

Then,

I^2 = ∫_{−∞}^{∞} e^{−βy^2} dy ∫_{−∞}^{∞} e^{−βx^2} dx.

Note that we needed to change the integration variable so that we can write this product as a double integral:

I^2 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−β(x^2 + y^2)} dx dy.

This is an integral over the entire xy-plane. We now transform to polar coordinates to obtain

I^2 = ∫_0^{2π} ∫_0^{∞} e^{−βr^2} r dr dθ
    = 2π ∫_0^{∞} e^{−βr^2} r dr
    = −(π/β) [ e^{−βr^2} ]_0^{∞} = π/β.   (9.56)

The final result is gotten by taking the square root, yielding

I = √(π/β).

We can now insert this result to give the Fourier transform of the Gaussian function:

f̂(k) = √(2π/a) e^{−k^2/(2a)}.   (9.57)

Therefore, we have shown that the Fourier transform of a Gaussian is a Gaussian.
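A direct numerical quadrature (a sketch, not from the text) confirms (9.57) without any contour deformation; the value of a and the sample wave numbers are arbitrary.

import numpy as np

# Check fhat(k) = sqrt(2 pi / a) exp(-k^2/(2a)) for f(x) = exp(-a x^2 / 2).

a = 3.0
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]

for k in (0.0, 1.0, 2.5):
    numeric = np.sum(np.exp(-a * x**2 / 2) * np.exp(1j * k * x)) * dx
    exact = np.sqrt(2 * np.pi / a) * np.exp(-k**2 / (2 * a))
    print(k, numeric.real, exact)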

Example 9.6. Find the Fourier transform of the Box, or Gate, Function,

f(x) = { b, |x| ≤ a; 0, |x| > a }.

Figure 9.11: A plot of the box function in Example 9.6.

This function is called the box function, or gate function. It is shown in Figure 9.11. The Fourier transform of the box function is relatively easy to compute. It is given by

f̂(k) = ∫_{−∞}^{∞} f(x) e^{ikx} dx
     = ∫_{−a}^{a} b e^{ikx} dx
     = (b/ik) e^{ikx} |_{−a}^{a}
     = (2b/k) sin ka.   (9.58)

We can rewrite this as

f̂(k) = 2ab (sin ka)/(ka) ≡ 2ab sinc ka.

Here we introduced the sinc function,

sinc x = (sin x)/x.

A plot of this function is shown in Figure 9.12. This function appears often in signal analysis and it plays a role in the study of diffraction.

Figure 9.12: A plot of the Fourier transform of the box function in Example 9.6. This is the general shape of the sinc function.

We will now consider special limiting values for the box function and its transform. This will lead us to the Uncertainty Principle for signals, connecting the relationship between the localization properties of a signal and its transform.

1. a → ∞ and b fixed.

In this case, as a gets large the box function approaches the constant function f(x) = b. At the same time, we see that the Fourier transform approaches a Dirac delta function. We had seen this function earlier when we first defined the Dirac delta function. Compare Figure 9.12 with Figure 9.5. In fact, f̂(k) = b D_a(k). [Recall the definition of D_Ω(x) in Equation (9.34).] So, in the limit we obtain f̂(k) = 2πb δ(k). This limit implies the fact that the Fourier transform of f(x) = 1 is f̂(k) = 2πδ(k). As the width of the box becomes wider, the Fourier transform becomes more localized. In fact, we have arrived at the important result that

∫_{−∞}^{∞} e^{ikx} dx = 2πδ(k).   (9.59)

2. b → ∞, a → 0, and 2ab = 1.

In this case the box narrows and becomes steeper while maintaining a constant area of one. This is the way we had found a representation of the Dirac delta function previously. The Fourier transform approaches a constant in this limit. As a approaches zero, the sinc function approaches one, leaving f̂(k) → 2ab = 1. Thus, the Fourier transform of the Dirac delta function is one. Namely, we have

∫_{−∞}^{∞} δ(x) e^{ikx} dx = 1.   (9.60)

In this case we have that the more localized the function f(x) is, the more spread out the Fourier transform, f̂(k), is. We will summarize these notions in the next item by relating the widths of the function and its Fourier transform.


3. The Uncertainty Principle, ∆x∆k = 4π.

The widths of the box function and its Fourier transform are related, as we have seen in the last two limiting cases. It is natural to define the width, ∆x, of the box function as

∆x = 2a.

The width of the Fourier transform is a little trickier. This function actually extends along the entire k-axis. However, as f̂(k) became more localized, the central peak in Figure 9.12 became narrower. So, we define the width of this function, ∆k, as the distance between the first zeros on either side of the main lobe, as shown in Figure 9.13. This gives

∆k = 2π/a.

Figure 9.13: The width of the function 2ab (sin ka)/(ka) is defined as the distance between the smallest magnitude zeros.

Combining these two relations, we find that

∆x∆k = 4π.

Thus, the more localized a signal, the less localized its transform, and vice versa. This notion is referred to as the Uncertainty Principle. For general signals, one needs to define the effective widths more carefully, but the main idea holds:

∆x∆k ≥ c > 0.

More formally, the uncertainty principle for signals is about the relation between duration and bandwidth, which are defined by ∆t = ‖t f‖₂/‖f‖₂ and ∆ω = ‖ω f̂‖₂/‖f̂‖₂, respectively, where ‖f‖₂² = ∫_{−∞}^{∞} |f(t)|² dt and ‖f̂‖₂² = (1/2π) ∫_{−∞}^{∞} |f̂(ω)|² dω. Under appropriate conditions, one can prove that ∆t∆ω ≥ 1/2. Equality holds for Gaussian signals. Werner Heisenberg (1901-1976) introduced the uncertainty principle into quantum physics in 1926, relating uncertainties in the position (∆x) and momentum (∆p_x) of particles. In this case, ∆x∆p_x ≥ (1/2)ℏ. Here, the uncertainties are defined as the positive square roots of the quantum mechanical variances of the position and momentum.

We now turn to other examples of Fourier transforms.

Example 9.7. Find the Fourier transform of f(x) = { e^{−ax}, x ≥ 0; 0, x < 0 }, a > 0.

The Fourier transform of this function is

f̂(k) = ∫_{−∞}^{∞} f(x) e^{ikx} dx
     = ∫_0^{∞} e^{ikx − ax} dx
     = 1/(a − ik).   (9.61)

Next, we will compute the inverse Fourier transform of this result and recover the original function.

Example 9.8. Find the inverse Fourier transform of f̂(k) = 1/(a − ik).

The inverse Fourier transform of this function is

f(x) = (1/2π) ∫_{−∞}^{∞} f̂(k) e^{−ikx} dk = (1/2π) ∫_{−∞}^{∞} e^{−ikx}/(a − ik) dk.

This integral can be evaluated using contour integral methods. We evaluate the integral

I = ∫_{−∞}^{∞} e^{−ixz}/(a − iz) dz,


using Jordan's Lemma from Section 8.5.8. According to Jordan's Lemma, we need to enclose the contour with a semicircle in the upper half plane for x < 0 and in the lower half plane for x > 0, as shown in Figure 9.14.

The integrations along the semicircles will vanish and we will have

f(x) = (1/2π) ∫_{−∞}^{∞} e^{−ikx}/(a − ik) dk
     = ±(1/2π) ∮_C e^{−ixz}/(a − iz) dz
     = { 0, x < 0; −(1/2π) 2πi Res[z = −ia], x > 0 }
     = { 0, x < 0; e^{−ax}, x > 0 }.   (9.62)

Figure 9.14: Contours for inverting f̂(k) = 1/(a − ik).

Note that without paying careful attention to Jordan's Lemma one might not retrieve the function from the last example.

Example 9.9. Find the inverse Fourier transform of f̂(ω) = πδ(ω + ω_0) + πδ(ω − ω_0).

We would like to find the inverse Fourier transform of this function. Instead of carrying out any integration, we will make use of the properties of Fourier transforms. Since the transforms of sums are the sums of transforms, we can look at each term individually. Consider δ(ω − ω_0). This is a shifted function. From the shift theorems in Equations (9.48)-(9.49) we have the Fourier transform pair

e^{−iω_0 t} f(t) ↔ f̂(ω − ω_0).

Recalling from Example 9.6 that

∫_{−∞}^{∞} e^{iωt} dt = 2πδ(ω),

we have from the shift property that

F^{−1}[δ(ω − ω_0)] = (1/2π) e^{−iω_0 t}.

The second term can be transformed similarly. Therefore, we have

F^{−1}[πδ(ω + ω_0) + πδ(ω − ω_0)] = (1/2) e^{iω_0 t} + (1/2) e^{−iω_0 t} = cos ω_0 t.

Example 9.10. Find the Fourier transform of the finite wave train,

f(t) = { cos ω_0 t, |t| ≤ a; 0, |t| > a }.

For the last example, we consider the finite wave train, which will reappear in the last chapter on signal analysis. In Figure 9.15 we show a plot of this function.

Figure 9.15: A plot of the finite wave train.

A straightforward computation gives

f̂(ω) = ∫_{−∞}^{∞} f(t) e^{iωt} dt = ∫_{−a}^{a} cos ω_0 t [cos ωt + i sin ωt] dt
      = ∫_{−a}^{a} cos ω_0 t cos ωt dt + i ∫_{−a}^{a} cos ω_0 t sin ωt dt.

The second integral vanishes since its integrand is odd. Using a product-to-sum identity on the first integral, we then have

f̂(ω) = (1/2) ∫_{−a}^{a} [cos((ω + ω_0)t) + cos((ω − ω_0)t)] dt
      = sin((ω + ω_0)a)/(ω + ω_0) + sin((ω − ω_0)a)/(ω − ω_0).   (9.63)

9.6 The Convolution Operation

In the list of properties of the Fourier transform, we defined the convolution of two functions, f(x) and g(x), to be the integral

(f ∗ g)(x) = ∫_{−∞}^{∞} f(t) g(x − t) dt.   (9.64)

In some sense one is looking at a sum of the overlaps of one of the functions and all of the shifted versions of the other function. The German word for convolution is Faltung, which means "folding," and in old texts this is referred to as the Faltung Theorem. In this section we will look into the convolution operation and its Fourier transform.

Before we get too involved with the convolution operation, it should be noted that there are really two things you need to take away from this discussion. The rest is detail. First, the convolution of two functions is a new function, defined by (9.64) when dealing with the Fourier transform. Second, and most relevant, the Fourier transform of the convolution of two functions is the product of the transforms of each function. The rest is all about the use and consequences of these two statements. In this section we will show how the convolution works and how it is useful.

First, we note that the convolution is commutative: f ∗ g = g ∗ f. This is easily shown by replacing x − t with a new variable, y = x − t, so that dy = −dt:

(g ∗ f)(x) = ∫_{−∞}^{∞} g(t) f(x − t) dt
           = −∫_{∞}^{−∞} g(x − y) f(y) dy
           = ∫_{−∞}^{∞} f(y) g(x − y) dy
           = (f ∗ g)(x).   (9.65)

The best way to understand the folding of the functions in the convolution is to take two functions and convolve them. The next example gives a graphical rendition, followed by a direct computation of the convolution. The reader is encouraged to carry out these analyses for other functions.

Example 9.11. Graphical convolution of the box function and a triangle function.

In order to understand the convolution operation, we need to apply it to specific functions. We will first do this graphically for the box function

f(x) = { 1, |x| ≤ 1; 0, |x| > 1 }

and the triangular function

g(x) = { x, 0 ≤ x ≤ 1; 0, otherwise },

as shown in Figure 9.16.

Figure 9.16: A plot of the box function f(x) and the triangle function g(x).

Figure 9.17: A plot of the reflected triangle function, g(−t).

Next, we determine the contributions to the integrand. We consider the shifted and reflected function g(x − t) in Equation (9.64) for various values of x. For x = 0, we have g(0 − t) = g(−t). This function is a reflection of the triangle function, g(t), as shown in Figure 9.17.

We then translate the triangle function, performing horizontal shifts by x. In Figure 9.18 we show such a shifted and reflected triangle for x = 2, that is, g(2 − t).

Figure 9.18: A plot of the reflected triangle function shifted by 2 units, g(2 − t).

In Figure 9.19 we show several plots of other shifts, g(x − t), superimposed on f(t).

The integrand is the product of f(t) and g(x − t), and the integral of the product f(t)g(x − t) is given by the sum of the shaded areas for each value of x.

In the first plot of Figure 9.19 the area is zero, as there is no overlap of the functions. Intermediate shift values are displayed in the other plots in Figure 9.19. The value of the convolution at x is shown by the area under the product of the two functions for each value of x.

Figure 9.19: A plot of the box and triangle functions with the overlap indicated by the shaded area.

Plots of the overlap areas for several values of x are given in Figure 9.19. We see that the value of the convolution integral builds up and then quickly drops to zero as a function of x. In Figure 9.20 the values of these areas are shown as a function of x.

The plot of the convolution in Figure 9.20 is not easily determined using the graphical method. However, we can directly compute the convolution, as shown in the next example.


Example 9.12. Analytically find the convolution of the box function and the tri-angle function.

x

( f ∗ g)(x)

1−1

0.5

2

Figure 9.20: A plot of the convolution ofthe box and triangle functions.

The nonvanishing contributions to the convolution integral are when both f (t)and g(x− t) do not vanish. f (t) is nonzero for |t| ≤ 1, or −1 ≤ t ≤ 1. g(x− t)is nonzero for 0 ≤ x− t ≤ 1, or x− 1 ≤ t ≤ x. These two regions are shown inFigure 9.21. On this region, f (t)g(x− t) = x− t.

[Figure 9.21: Intersection of the support of g(x) and f(x).]

Isolating the intersection, we see in Figure 9.22 that there are three regions, shown by different shadings. These regions lead to a piecewise defined function with three branches of nonzero values, for −1 < x < 0, 0 < x < 1, and 1 < x < 2.

[Figure 9.22: Intersection of the support of g(x) and f(x) showing the integration regions.]

The values of the convolution can be determined through careful integration. The resulting integrals are given as

$$
\begin{aligned}
(f * g)(x) &= \int_{-\infty}^{\infty} f(t)\, g(x - t)\, dt \\
&= \begin{cases}
\int_{-1}^{x} (x - t)\, dt, & -1 < x < 0, \\[4pt]
\int_{x-1}^{x} (x - t)\, dt, & 0 < x < 1, \\[4pt]
\int_{x-1}^{1} (x - t)\, dt, & 1 < x < 2,
\end{cases} \\
&= \begin{cases}
\tfrac{1}{2}(x + 1)^2, & -1 < x < 0, \\[4pt]
\tfrac{1}{2}, & 0 < x < 1, \\[4pt]
\tfrac{1}{2}\left[1 - (x - 1)^2\right], & 1 < x < 2.
\end{cases} \qquad (9.66)
\end{aligned}
$$


A plot of this function is shown in Figure 9.20.
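As a quick numerical check of (9.66), one can approximate the convolution integral on a grid. The following Python sketch is our own construction (the grid spacing and function names are arbitrary choices); it compares a Riemann-sum approximation of (f ∗ g)(x) with the piecewise formula.

```python
import numpy as np

dx = 0.001
t = np.arange(-3, 3, dx)          # grid for the integration variable

def f(x):                         # box function of Example 9.11
    return np.where(np.abs(x) <= 1, 1.0, 0.0)

def g(x):                         # triangle function of Example 9.11
    return np.where((x >= 0) & (x <= 1), x, 0.0)

def conv_exact(x):                # piecewise result (9.66)
    return np.piecewise(x,
        [(x > -1) & (x < 0), (x >= 0) & (x < 1), (x >= 1) & (x < 2)],
        [lambda x: 0.5*(x + 1)**2, 0.5, lambda x: 0.5*(1 - (x - 1)**2), 0.0])

xs = np.array([-0.5, 0.5, 1.5])
exact = conv_exact(xs)
for x, e in zip(xs, exact):
    riemann = np.sum(f(t) * g(x - t)) * dx    # approximate (f*g)(x)
    print(f"x = {x:4.1f}:  numerical = {riemann:.4f},  formula (9.66) = {e:.4f}")
```

The two columns should agree to roughly the grid spacing.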

9.6.1 Convolution Theorem for Fourier Transforms

In this section we compute the Fourier transform of the convolution integral and show that the Fourier transform of the convolution is the product of the transforms of each function,
$$ F[f * g] = \hat{f}(k)\,\hat{g}(k). \qquad (9.67) $$

First, we use the definitions of the Fourier transform and the convolution to write the transform as
$$
\begin{aligned}
F[f * g] &= \int_{-\infty}^{\infty} (f * g)(x)\, e^{ikx}\, dx \\
&= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} f(t)\, g(x - t)\, dt \right) e^{ikx}\, dx \\
&= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} g(x - t)\, e^{ikx}\, dx \right) f(t)\, dt. \qquad (9.68)
\end{aligned}
$$

We now substitute y = x − t in the inside integral and separate the integrals:
$$
\begin{aligned}
F[f * g] &= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} g(x - t)\, e^{ikx}\, dx \right) f(t)\, dt \\
&= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} g(y)\, e^{ik(y + t)}\, dy \right) f(t)\, dt \\
&= \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} g(y)\, e^{iky}\, dy \right) f(t)\, e^{ikt}\, dt \\
&= \left( \int_{-\infty}^{\infty} f(t)\, e^{ikt}\, dt \right) \left( \int_{-\infty}^{\infty} g(y)\, e^{iky}\, dy \right). \qquad (9.69)
\end{aligned}
$$

We see that the two integrals are just the Fourier transforms of f and g. Therefore, the Fourier transform of a convolution is the product of the Fourier transforms of the functions involved:
$$ F[f * g] = \hat{f}(k)\,\hat{g}(k). $$
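The theorem is easy to spot-check numerically. The sketch below is our own construction (the grid extent and sample values of k are arbitrary); it approximates both sides of (9.67) for the box and triangle functions of Example 9.11, using simple Riemann sums for every integral.

```python
import numpy as np

dx = 0.001
x = np.arange(-6, 6, dx)                    # wide enough that all functions vanish at the ends

f = np.where(np.abs(x) <= 1, 1.0, 0.0)      # box
g = np.where((x >= 0) & (x <= 1), x, 0.0)   # triangle

def ft(h, k):
    """Riemann-sum approximation of the Fourier transform with kernel e^{ikx}."""
    return np.sum(h * np.exp(1j * k * x)) * dx

# Discrete convolution on the same grid; 'same' keeps the result aligned with x
# up to one grid point, which is accurate enough for a spot check.
fg = np.convolve(f, g, mode='same') * dx

for k in [0.0, 0.7, 1.5]:
    lhs = ft(fg, k)               # F[f*g](k)
    rhs = ft(f, k) * ft(g, k)     # fhat(k) * ghat(k)
    print(k, abs(lhs - rhs))      # should be small, on the order of the grid spacing
```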

Example 9.13. Compute the convolution of the box function of height one and width two with itself.

Let $\hat{f}(k)$ be the Fourier transform of f(x). Then, the Convolution Theorem says that $F[f * f](k) = \hat{f}^2(k)$, or
$$ (f * f)(x) = F^{-1}\left[\hat{f}^2(k)\right]. $$
For the box function, we have already found that
$$ \hat{f}(k) = \frac{2}{k}\sin k. $$

So, we need to compute
$$
(f * f)(x) = F^{-1}\left[\frac{4}{k^2}\sin^2 k\right]
= \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{4}{k^2}\sin^2 k \; e^{-ikx}\, dk. \qquad (9.70)
$$


One way to compute this integral is to extend the computation into the complex k-plane. We first need to rewrite the integrand. Thus,
$$
\begin{aligned}
(f * f)(x) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{4}{k^2}\sin^2 k\; e^{-ikx}\, dk \\
&= \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{1}{k^2}\left[1 - \cos 2k\right] e^{-ikx}\, dk \\
&= \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{1}{k^2}\left[1 - \frac{1}{2}\left(e^{2ik} + e^{-2ik}\right)\right] e^{-ikx}\, dk \\
&= \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{1}{k^2}\left[e^{-ikx} - \frac{1}{2}\left(e^{-ik(x - 2)} + e^{-ik(x + 2)}\right)\right] dk. \qquad (9.71)
\end{aligned}
$$

We can compute the above integrals if we know how to compute the integral
$$ I(y) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{e^{-iky}}{k^2}\, dk. $$
Then, the result can be found in terms of I(y) as
$$ (f * f)(x) = I(x) - \frac{1}{2}\left[I(x - 2) + I(x + 2)\right]. $$

We consider the integral
$$ \oint_C \frac{e^{-iyz}}{\pi z^2}\, dz $$
over the contour in Figure 9.23.

[Figure 9.23: Contour for computing $P\int_{-\infty}^{\infty} \frac{e^{-iyz}}{\pi z^2}\, dz$.]

We can see that there is a double pole at z = 0. The pole is on the real axis. So, we will need to cut out the pole as we seek the value of the principal value integral.

Recall from Chapter 8 that
$$
\oint_{C_R} \frac{e^{-iyz}}{\pi z^2}\, dz
= \int_{\Gamma_R} \frac{e^{-iyz}}{\pi z^2}\, dz
+ \int_{-R}^{-\varepsilon} \frac{e^{-iyz}}{\pi z^2}\, dz
+ \int_{C_\varepsilon} \frac{e^{-iyz}}{\pi z^2}\, dz
+ \int_{\varepsilon}^{R} \frac{e^{-iyz}}{\pi z^2}\, dz.
$$

The integral $\oint_{C_R} \frac{e^{-iyz}}{\pi z^2}\, dz$ vanishes since there are no poles enclosed in the contour! The sum of the second and fourth integrals gives the integral we seek as ε → 0 and R → ∞. The integral over Γ_R will vanish as R gets large according to Jordan's Lemma provided y < 0. That leaves the integral over the small semicircle.

As before, we can show that
$$ \lim_{\varepsilon \to 0} \int_{C_\varepsilon} f(z)\, dz = -\pi i\, \mathrm{Res}\left[f(z); z = 0\right]. $$
Therefore, we find
$$ I(y) = P\int_{-\infty}^{\infty} \frac{e^{-iyz}}{\pi z^2}\, dz = \pi i\, \mathrm{Res}\left[\frac{e^{-iyz}}{\pi z^2}; z = 0\right]. $$

A simple computation of the residue gives I(y) = y for y < 0.

When y > 0, we need to close the contour in the lower half plane in order to apply Jordan's Lemma. Carrying out the computation, one finds I(y) = −y for y > 0. Thus,
$$
I(y) = \begin{cases} -y, & y > 0, \\ y, & y < 0. \end{cases} \qquad (9.72)
$$


We are now ready to finish the computation of the convolution. We have to combine the values I(x), I(x − 2), and I(x + 2), since (f ∗ f)(x) = I(x) − ½[I(x − 2) + I(x + 2)]. This gives different results in four intervals:
$$
\begin{aligned}
(f * f)(x) &= x - \tfrac{1}{2}\left[(x - 2) + (x + 2)\right] = 0, & x < -2, \\
&= x - \tfrac{1}{2}\left[(x - 2) - (x + 2)\right] = 2 + x, & -2 < x < 0, \\
&= -x - \tfrac{1}{2}\left[(x - 2) - (x + 2)\right] = 2 - x, & 0 < x < 2, \\
&= -x - \tfrac{1}{2}\left[-(x - 2) - (x + 2)\right] = 0, & x > 2. \qquad (9.73)
\end{aligned}
$$
The result is the triangle function,
$$
(f * f)(x) = \begin{cases}
0, & x < -2, \\
2 + x, & -2 < x < 0, \\
2 - x, & 0 < x < 2, \\
0, & x > 2,
\end{cases} \qquad (9.74)
$$
which was shown in the last example.

Example 9.14. Find the convolution of the box function of height one and width two with itself using a direct computation of the convolution integral.

The nonvanishing contributions to the convolution integral occur when both f(t) and f(x − t) do not vanish. f(t) is nonzero for |t| ≤ 1, or −1 ≤ t ≤ 1. f(x − t) is nonzero for |x − t| ≤ 1, or x − 1 ≤ t ≤ x + 1. These two regions are shown in Figure 9.24. On this region, f(t)f(x − t) = 1.

[Figure 9.24: Plot of the regions of support for f(t) and f(x − t).]


Thus, the nonzero contributions to the convolution are
$$
(f * f)(x) = \begin{cases}
\int_{-1}^{x+1} dt, & -2 \leq x \leq 0, \\[4pt]
\int_{x-1}^{1} dt, & 0 \leq x \leq 2,
\end{cases}
= \begin{cases}
2 + x, & -2 \leq x \leq 0, \\
2 - x, & 0 \leq x \leq 2.
\end{cases}
$$

Once again, we arrive at the triangle function.

In the last example we showed the graphical convolution. For completeness, we do the same for this example. In Figure 9.25 we show the results. We see that the convolution of two box functions is a triangle function.


[Figure 9.25: A plot of the convolution of a box function with itself. The areas of the overlaps as f(x − t) is translated across f(t) are shown as well. The result is the triangular function.]

Example 9.15. Show the graphical convolution of the box function of height one and width two with itself.

Let's consider a slightly more complicated example, the convolution of two Gaussian functions.

Example 9.16. Convolution of two Gaussian functions, $f(x) = e^{-ax^2}$.

In this example we will compute the convolution of two Gaussian functions with different widths. Let $f(x) = e^{-ax^2}$ and $g(x) = e^{-bx^2}$. A direct evaluation of the integral would be to compute
$$ (f * g)(x) = \int_{-\infty}^{\infty} f(t)\, g(x - t)\, dt = \int_{-\infty}^{\infty} e^{-at^2 - b(x - t)^2}\, dt. $$
This integral can be rewritten as
$$ (f * g)(x) = e^{-bx^2} \int_{-\infty}^{\infty} e^{-(a + b)t^2 + 2bxt}\, dt. $$

One could proceed to complete the square and finish carrying out the integration. However, we will use the Convolution Theorem to evaluate the convolution and leave the evaluation of this integral to Problem 12.

Recalling the Fourier transform of a Gaussian from Example 9.5, we have
$$ \hat{f}(k) = F[e^{-ax^2}] = \sqrt{\frac{\pi}{a}}\, e^{-k^2/4a} \qquad (9.75) $$
and
$$ \hat{g}(k) = F[e^{-bx^2}] = \sqrt{\frac{\pi}{b}}\, e^{-k^2/4b}. $$
Denoting the convolution function by h(x) = (f ∗ g)(x), the Convolution Theorem gives
$$ \hat{h}(k) = \hat{f}(k)\hat{g}(k) = \frac{\pi}{\sqrt{ab}}\, e^{-k^2/4a}\, e^{-k^2/4b}. $$
This is another Gaussian function, as seen by rewriting the Fourier transform of h(x) as
$$ \hat{h}(k) = \frac{\pi}{\sqrt{ab}}\, e^{-\frac{1}{4}\left(\frac{1}{a} + \frac{1}{b}\right)k^2} = \frac{\pi}{\sqrt{ab}}\, e^{-\frac{a + b}{4ab}k^2}. \qquad (9.76) $$


In order to complete the evaluation of the convolution of these two Gaussian functions, we need to find the inverse transform of the Gaussian in Equation (9.76). We can do this by looking at Equation (9.75). We have first that
$$ F^{-1}\left[\sqrt{\frac{\pi}{a}}\, e^{-k^2/4a}\right] = e^{-ax^2}. $$
Moving the constants, we then obtain
$$ F^{-1}\left[e^{-k^2/4a}\right] = \sqrt{\frac{a}{\pi}}\, e^{-ax^2}. $$
We now make the substitution α = 1/(4a),
$$ F^{-1}\left[e^{-\alpha k^2}\right] = \sqrt{\frac{1}{4\pi\alpha}}\, e^{-x^2/4\alpha}. $$
This is in the form needed to invert (9.76). Thus, for α = (a + b)/(4ab) we find
$$ (f * g)(x) = h(x) = \sqrt{\frac{\pi}{a + b}}\, e^{-\frac{ab}{a + b}x^2}. $$
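This closed form is easy to check symbolically. A small SymPy sketch of our own follows (the symbols a and b are assumed positive so that the Gaussian integrals converge); it evaluates the direct integral that was set aside above and compares it with the Convolution Theorem result.

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)
x, t = sp.symbols('x t', real=True)

# Direct evaluation of the convolution integral of the two Gaussians.
direct = sp.integrate(sp.exp(-a*t**2 - b*(x - t)**2), (t, -sp.oo, sp.oo))

# Closed form obtained via the Convolution Theorem.
closed = sp.sqrt(sp.pi/(a + b)) * sp.exp(-a*b*x**2/(a + b))

print(sp.simplify(direct - closed))   # expect 0
```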

9.6.2 Application to Signal Analysis

[Figure 9.26: Schematic plot of a signal f(t) and its Fourier transform f̂(ω).]

There are many applications of the convolution operation. One of these areas is the study of analog signals. An analog signal is a continuous signal and may contain either a finite, or continuous, set of frequencies. Fourier transforms can be used to represent such signals as a sum over the frequency content of these signals. In this section we will describe how convolutions can be used in studying signal analysis.

Filtering signals.
The first application is filtering. For a given signal there might be some noise in the signal, or some undesirable high frequencies. For example, a device used for recording an analog signal might naturally not be able to record high frequencies. Let f(t) denote the amplitude of a given analog signal and f̂(ω) be the Fourier transform of this signal, such as the example provided in Figure 9.26. Recall that the Fourier transform gives the frequency content of the signal.

[Figure 9.27: (a) Plot of the Fourier transform f̂(ω) of a signal. (b) The gate function p_{ω₀}(ω) used to filter out high frequencies. (c) The product of the functions, ĝ(ω) = f̂(ω)p_{ω₀}(ω), in (a) and (b) shows how the filter cuts out high frequencies, |ω| > ω₀.]

There are many ways to filter out unwanted frequencies. The simplest would be to just drop all of the high (angular) frequencies. For example, for some cutoff frequency ω₀, frequencies |ω| > ω₀ will be removed. The Fourier transform of the filtered signal would then be zero for |ω| > ω₀. This could be accomplished by multiplying the Fourier transform of the signal by a function that vanishes for |ω| > ω₀. For example, we could use the gate function
$$
p_{\omega_0}(\omega) = \begin{cases} 1, & |\omega| \leq \omega_0, \\ 0, & |\omega| > \omega_0, \end{cases} \qquad (9.77)
$$
as shown in Figure 9.27.


In general, we multiply the Fourier transform of the signal by the transform ĥ(ω) of some filtering function h(t) to get the Fourier transform of the filtered signal,
$$ \hat{g}(\omega) = \hat{f}(\omega)\hat{h}(\omega). $$
The new signal, g(t), is then the inverse Fourier transform of this product, giving the new signal as a convolution:
$$ g(t) = F^{-1}[\hat{f}(\omega)\hat{h}(\omega)] = \int_{-\infty}^{\infty} h(t - \tau)\, f(\tau)\, d\tau. \qquad (9.78) $$
Such processes occur often in systems theory as well. One thinks of f(t) as the input signal into some filtering device which in turn produces the output, g(t). The function h(t) is called the impulse response. This is because it is the response to the impulse function, δ(t). In this case, one has
$$ \int_{-\infty}^{\infty} h(t - \tau)\, \delta(\tau)\, d\tau = h(t). $$
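In discrete form this filtering procedure is exactly what one does with the FFT. The sketch below is our own construction (the record length, sample count, cutoff frequency, and test signal are arbitrary choices); it builds a noisy signal, zeroes its Fourier coefficients above a cutoff, and transforms back, which is the discrete analogue of multiplying f̂(ω) by the gate function p_{ω₀}(ω).

```python
import numpy as np

# Sampled signal: a low-frequency tone plus a high-frequency "noise" term.
T, N = 4.0, 2048                           # record length and number of samples
t = np.linspace(0, T, N, endpoint=False)
f = np.sin(2*np.pi*2*t) + 0.3*np.sin(2*np.pi*60*t)

# Discrete Fourier transform and the corresponding frequencies (in cycles per unit time).
F = np.fft.fft(f)
freqs = np.fft.fftfreq(N, d=T/N)

# Gate function: keep only |freq| <= cutoff, the discrete analogue of p_{omega_0}.
cutoff = 10.0
F_filtered = np.where(np.abs(freqs) <= cutoff, F, 0.0)

# Inverse transform gives the filtered signal g(t).
g = np.fft.ifft(F_filtered).real
print(np.max(np.abs(g - np.sin(2*np.pi*2*t))))   # should be near machine precision
```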

Windowing signals.
Another application of the convolution is in windowing. This represents what happens when one measures a real signal. Real signals cannot be recorded for all values of time. Instead, data is collected over a finite time interval. If the length of time the data is collected is T, then the resulting signal is zero outside this time interval. This can be modeled in the same way as with filtering, except the new signal will be the product of the old signal with the windowing function. The resulting Fourier transform of the new signal will be a convolution of the Fourier transforms of the original signal and the windowing function.

Example 9.17. Finite Wave Train, Revisited.
We return to the finite wave train in Example 9.10, given by
$$ h(t) = \begin{cases} \cos \omega_0 t, & |t| \leq a, \\ 0, & |t| > a. \end{cases} $$

[Figure 9.28: A plot of the finite wave train.]

We can view this as a windowed version of f(t) = cos ω₀t, obtained by multiplying f(t) by the gate function
$$ g_a(t) = \begin{cases} 1, & |t| \leq a, \\ 0, & |t| > a. \end{cases} \qquad (9.79) $$

This is shown in Figure 9.28. Then, the Fourier transform is given as a convolution,
$$
\hat{h}(\omega) = (\hat{f} * \hat{g}_a)(\omega)
= \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega - \nu)\, \hat{g}_a(\nu)\, d\nu. \qquad (9.80)
$$
(The convolution in spectral space is defined with an extra factor of 1/2π so as to preserve the idea that the inverse Fourier transform of a convolution is the product of the corresponding signals.)

Note that the convolution in frequency space requires the extra factor of 1/(2π). We need the Fourier transforms of f and g_a in order to finish the computation. The Fourier transform of the box function was found in Example 9.6 as
$$ \hat{g}_a(\omega) = \frac{2}{\omega}\sin \omega a. $$


The Fourier transform of the cosine function, f(t) = cos ω₀t, is
$$
\begin{aligned}
\hat{f}(\omega) &= \int_{-\infty}^{\infty} \cos(\omega_0 t)\, e^{i\omega t}\, dt \\
&= \int_{-\infty}^{\infty} \frac{1}{2}\left(e^{i\omega_0 t} + e^{-i\omega_0 t}\right) e^{i\omega t}\, dt \\
&= \frac{1}{2}\int_{-\infty}^{\infty} \left(e^{i(\omega + \omega_0)t} + e^{i(\omega - \omega_0)t}\right) dt \\
&= \pi\left[\delta(\omega + \omega_0) + \delta(\omega - \omega_0)\right]. \qquad (9.81)
\end{aligned}
$$
Note that we had earlier computed the inverse Fourier transform of this function in Example 9.9.

Inserting these results in the convolution integral, we have
$$
\begin{aligned}
\hat{h}(\omega) &= \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(\omega - \nu)\, \hat{g}_a(\nu)\, d\nu \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} \pi\left[\delta(\omega - \nu + \omega_0) + \delta(\omega - \nu - \omega_0)\right] \frac{2}{\nu}\sin \nu a\, d\nu \\
&= \frac{\sin(\omega + \omega_0)a}{\omega + \omega_0} + \frac{\sin(\omega - \omega_0)a}{\omega - \omega_0}. \qquad (9.82)
\end{aligned}
$$
This is the same result we had obtained in Example 9.10.

9.6.3 Parseval's Equality

(The integral/sum of the (modulus) square of a function is the integral/sum of the (modulus) square of the transform.)

As another example of the convolution theorem, we derive Parseval's Equality (named after Marc-Antoine Parseval (1755-1836)):
$$ \int_{-\infty}^{\infty} |f(t)|^2\, dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |\hat{f}(\omega)|^2\, d\omega. \qquad (9.83) $$

This equality has a physical meaning for signals. The integral on the left side is a measure of the energy content of the signal in the time domain. The right side provides a measure of the energy content of the transform of the signal. Parseval's equality is simply a statement that the energy is invariant under the Fourier transform. Parseval's equality is a special case of Plancherel's formula (named after Michel Plancherel, 1885-1967).

Let's rewrite the Convolution Theorem in its inverse form,
$$ F^{-1}[\hat{f}(k)\hat{g}(k)] = (f * g)(t). \qquad (9.84) $$
Then, by the definition of the inverse Fourier transform, we have
$$ \int_{-\infty}^{\infty} f(t - u)\, g(u)\, du = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(\omega)\hat{g}(\omega)\, e^{-i\omega t}\, d\omega. $$
Setting t = 0,
$$ \int_{-\infty}^{\infty} f(-u)\, g(u)\, du = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(\omega)\hat{g}(\omega)\, d\omega. \qquad (9.85) $$


Now, let g(t) = f(−t) and take f(t) to be real. We note that the Fourier transform of g(t) is related to the Fourier transform of f(t):
$$
\begin{aligned}
\hat{g}(\omega) &= \int_{-\infty}^{\infty} f(-t)\, e^{i\omega t}\, dt \\
&= -\int_{\infty}^{-\infty} f(\tau)\, e^{-i\omega\tau}\, d\tau \\
&= \int_{-\infty}^{\infty} f(\tau)\, e^{-i\omega\tau}\, d\tau = \overline{\hat{f}(\omega)}. \qquad (9.86)
\end{aligned}
$$
So, inserting this result into Equation (9.85), we find that
$$ \int_{-\infty}^{\infty} f(-u)\, f(-u)\, du = \frac{1}{2\pi}\int_{-\infty}^{\infty} |\hat{f}(\omega)|^2\, d\omega, $$
which yields Parseval's Equality in the form (9.83) after substituting t = −u on the left.

As noted above, the forms in Equations (9.83) and (9.85) are often referred to as the Plancherel formula or Parseval formula. A more commonly defined Parseval equation is that given for Fourier series. For example, for a function f(x) defined on [−π, π], which has a Fourier series representation, we have
$$ \frac{a_0^2}{2} + \sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right) = \frac{1}{\pi}\int_{-\pi}^{\pi} [f(x)]^2\, dx. $$
In general, there is a Parseval identity for functions that can be expanded in a complete set of orthonormal functions, φ_n(x), n = 1, 2, …, which is given by
$$ \sum_{n=1}^{\infty} \langle f, \phi_n \rangle^2 = \| f \|^2. $$
Here ‖f‖² = ⟨f, f⟩. The Fourier series example is just a special case of this formula.
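Equation (9.83) is easy to verify numerically for a concrete signal. The sketch below is our own; it uses the Gaussian f(t) = e^{−t²}, whose transform √π e^{−ω²/4} follows from (9.75) with a = 1, and compares the two sides by simple quadrature.

```python
import numpy as np

t = np.linspace(-20, 20, 200001)
w = np.linspace(-20, 20, 200001)

f = np.exp(-t**2)                          # signal, a = 1 in (9.75)
fhat = np.sqrt(np.pi) * np.exp(-w**2/4)    # its Fourier transform

energy_time = np.trapz(np.abs(f)**2, t)
energy_freq = np.trapz(np.abs(fhat)**2, w) / (2*np.pi)

print(energy_time, energy_freq)            # both should be sqrt(pi/2) ~ 1.2533
```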

9.7 The Laplace Transform

(The Laplace transform is named after Pierre-Simon de Laplace (1749-1827). Laplace made major contributions, especially to celestial mechanics, tidal analysis, and probability.)

Up to this point we have only explored Fourier exponential transforms as one type of integral transform. The Fourier transform is useful on infinite domains. However, students are often introduced to another integral transform, called the Laplace transform, in their introductory differential equations class. These transforms are defined over semi-infinite domains and are useful for solving initial value problems for ordinary differential equations.

(Integral transform on [a, b] with respect to the integral kernel, K(x, k).)

The Fourier and Laplace transforms are examples of a broader class of transforms known as integral transforms. For a function f(x) defined on an interval (a, b), we define the integral transform
$$ F(k) = \int_{a}^{b} K(x, k)\, f(x)\, dx, $$


where K(x, k) is a specified kernel of the transform. Looking at the Fourier transform, we see that the interval is stretched over the entire real axis and the kernel is of the form K(x, k) = e^{ikx}. In Table 9.1 we show several types of integral transforms.

Table 9.1: A table of common integral transforms.

Laplace Transform:          $F(s) = \int_0^\infty e^{-sx} f(x)\, dx$
Fourier Transform:          $F(k) = \int_{-\infty}^{\infty} e^{ikx} f(x)\, dx$
Fourier Cosine Transform:   $F(k) = \int_0^\infty \cos(kx)\, f(x)\, dx$
Fourier Sine Transform:     $F(k) = \int_0^\infty \sin(kx)\, f(x)\, dx$
Mellin Transform:           $F(k) = \int_0^\infty x^{k-1} f(x)\, dx$
Hankel Transform:           $F(k) = \int_0^\infty x\, J_n(kx)\, f(x)\, dx$

It should be noted that these integral transforms inherit the linearity of integration. Namely, let h(x) = α f(x) + β g(x), where α and β are constants. Then,
$$
\begin{aligned}
H(k) &= \int_{a}^{b} K(x, k)\, h(x)\, dx \\
&= \int_{a}^{b} K(x, k)\left(\alpha f(x) + \beta g(x)\right) dx \\
&= \alpha \int_{a}^{b} K(x, k)\, f(x)\, dx + \beta \int_{a}^{b} K(x, k)\, g(x)\, dx \\
&= \alpha F(k) + \beta G(k). \qquad (9.87)
\end{aligned}
$$
Therefore, we have shown linearity of the integral transforms. We have seen the linearity property used for Fourier transforms and we will use linearity in the study of Laplace transforms.

(The Laplace transform of f, F = L[f].)

We now turn to Laplace transforms. The Laplace transform of a function f(t) is defined as
$$ F(s) = L[f](s) = \int_{0}^{\infty} f(t)\, e^{-st}\, dt, \quad s > 0. \qquad (9.88) $$
This is an improper integral and one needs
$$ \lim_{t \to \infty} f(t)\, e^{-st} = 0 $$
to guarantee convergence.

Laplace transforms have also proven useful in engineering for solving circuit problems and doing systems analysis. In Figure 9.29 it is shown that a signal x(t) is provided as input to a linear system, indicated by h(t). One is interested in the system output, y(t), which is given by a convolution of the input and system functions. By considering the transforms of x(t) and h(t), the transform of the output is given as a product of the Laplace transforms in the s-domain. In order to obtain the output, one needs to compute a convolution product for Laplace transforms similar to the convolution operation we had seen for Fourier transforms earlier in the chapter. Of course, for us to do this in practice, we have to know how to compute Laplace transforms.


[Figure 9.29: A schematic depicting the use of Laplace transforms in systems theory: the input x(t) and the system h(t) are transformed to X(s) and H(s); the output y(t) = (h ∗ x)(t) corresponds to Y(s) = H(s)X(s) in the s-domain and is recovered by the inverse Laplace transform.]

9.7.1 Properties and Examples of Laplace Transforms

It is typical that one makes use of Laplace transforms by referring to a table of transform pairs. A sample of such pairs is given in Table 9.2. Combining some of these simple Laplace transforms with the properties of the Laplace transform, as shown in Table 9.3, we can deal with many applications of the Laplace transform. We will first prove a few of the given Laplace transforms and show how they can be used to obtain new transform pairs. In the next section we will show how these transforms can be used to sum infinite series and to solve initial value problems for ordinary differential equations.

Table 9.2: Table of selected Laplace transform pairs.

f(t)              F(s)
c                 c/s
e^{at}            1/(s − a),  s > a
t^n               n!/s^{n+1},  s > 0
t^n e^{at}        n!/(s − a)^{n+1}
sin ωt            ω/(s² + ω²)
e^{at} sin ωt     ω/((s − a)² + ω²)
cos ωt            s/(s² + ω²)
e^{at} cos ωt     (s − a)/((s − a)² + ω²)
t sin ωt          2ωs/(s² + ω²)²
t cos ωt          (s² − ω²)/(s² + ω²)²
sinh at           a/(s² − a²)
cosh at           s/(s² − a²)
H(t − a)          e^{−as}/s,  s > 0
δ(t − a)          e^{−as},  a ≥ 0, s > 0
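Several of these pairs can be checked mechanically with a computer algebra system. The SymPy sketch below is our own (SymPy's laplace_transform uses the same definition as (9.88)); it verifies three entries of Table 9.2 and prints True for each pair if the symbolic results agree.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

pairs = [
    (sp.exp(a*t), 1/(s - a)),                        # L[e^{at}]
    (sp.sin(w*t), w/(s**2 + w**2)),                  # L[sin wt]
    (t*sp.cos(w*t), (s**2 - w**2)/(s**2 + w**2)**2), # L[t cos wt]
]

for f, F_expected in pairs:
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, sp.simplify(F - F_expected) == 0)       # expect True for each pair
```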

We begin with some simple transforms. These are found by simply using the definition of the Laplace transform.

Example 9.18. Show that L[1] = 1/s.

For this example, we insert f(t) = 1 into the definition of the Laplace transform:
$$ L[1] = \int_0^\infty e^{-st}\, dt. $$
This is an improper integral and the computation is understood by introducing an upper limit of a and then letting a → ∞. We will not always write this limit, but it will be understood that this is how one computes such improper integrals.


Proceeding with the computation, we have
$$
\begin{aligned}
L[1] &= \int_0^\infty e^{-st}\, dt \\
&= \lim_{a \to \infty} \int_0^a e^{-st}\, dt \\
&= \lim_{a \to \infty} \left(-\frac{1}{s}e^{-st}\right)\Big|_0^a \\
&= \lim_{a \to \infty} \left(-\frac{1}{s}e^{-sa} + \frac{1}{s}\right) = \frac{1}{s}. \qquad (9.89)
\end{aligned}
$$
Thus, we have found that the Laplace transform of 1 is 1/s. This result can be extended to any constant c, using the linearity of the transform, L[c] = cL[1]. Therefore,
$$ L[c] = \frac{c}{s}. $$

Example 9.19. Show that L[e^{at}] = 1/(s − a), for s > a.

For this example, we can easily compute the transform. Again, we only need to compute the integral of an exponential function.
$$
\begin{aligned}
L[e^{at}] &= \int_0^\infty e^{at} e^{-st}\, dt \\
&= \int_0^\infty e^{(a - s)t}\, dt \\
&= \left(\frac{1}{a - s}e^{(a - s)t}\right)\Big|_0^\infty \\
&= \lim_{t \to \infty} \frac{1}{a - s}e^{(a - s)t} - \frac{1}{a - s} = \frac{1}{s - a}. \qquad (9.90)
\end{aligned}
$$
Note that the last limit was computed as lim_{t→∞} e^{(a−s)t} = 0. This is only true if a − s < 0, or s > a. [Actually, a could be complex. In this case we would only need s to be greater than the real part of a, s > Re(a).]

Example 9.20. Show that L[cos at] = s/(s² + a²) and L[sin at] = a/(s² + a²).

For these examples, we could again insert the trigonometric functions directly into the transform and integrate. For example,
$$ L[\cos at] = \int_0^\infty e^{-st}\cos at\, dt. $$
Recall how one evaluates integrals involving the product of a trigonometric function and the exponential function. One integrates by parts two times and then obtains an integral of the original unknown integral. Rearranging the resulting integral expressions, one arrives at the desired result. However, there is a much simpler way to compute these transforms.

Recall that e^{iat} = cos at + i sin at. Making use of the linearity of the Laplace transform, we have
$$ L[e^{iat}] = L[\cos at] + iL[\sin at]. $$
Thus, transforming this complex exponential will simultaneously provide the Laplace transforms for the sine and cosine functions!


The transform is simply computed as
$$ L[e^{iat}] = \int_0^\infty e^{iat}e^{-st}\, dt = \int_0^\infty e^{-(s - ia)t}\, dt = \frac{1}{s - ia}. $$
Note that we could easily have used the result for the transform of an exponential, which was already proven. In this case s > Re(ia) = 0.

We now extract the real and imaginary parts of the result using the complex conjugate of the denominator:
$$ \frac{1}{s - ia} = \frac{1}{s - ia}\,\frac{s + ia}{s + ia} = \frac{s + ia}{s^2 + a^2}. $$
Reading off the real and imaginary parts, we find the sought transforms,
$$
\begin{aligned}
L[\cos at] &= \frac{s}{s^2 + a^2}, \\
L[\sin at] &= \frac{a}{s^2 + a^2}. \qquad (9.91)
\end{aligned}
$$

Example 9.21. Show that L[t] = 1/s².

For this example we evaluate
$$ L[t] = \int_0^\infty t e^{-st}\, dt. $$
This integral can be evaluated using the method of integration by parts:
$$ \int_0^\infty t e^{-st}\, dt = -t\,\frac{1}{s}e^{-st}\Big|_0^\infty + \frac{1}{s}\int_0^\infty e^{-st}\, dt = \frac{1}{s^2}. \qquad (9.92) $$

Example 9.22. Show that L[t^n] = n!/s^{n+1} for nonnegative integer n.

We have seen the n = 0 and n = 1 cases: L[1] = 1/s and L[t] = 1/s². We now generalize these results to nonnegative integer powers, n > 1, of t. We consider the integral
$$ L[t^n] = \int_0^\infty t^n e^{-st}\, dt. $$
Following the previous example, we again integrate by parts:
$$
\int_0^\infty t^n e^{-st}\, dt = -t^n\,\frac{1}{s}e^{-st}\Big|_0^\infty + \frac{n}{s}\int_0^\infty t^{n-1}e^{-st}\, dt
= \frac{n}{s}\int_0^\infty t^{n-1}e^{-st}\, dt. \qquad (9.93)
$$

(Footnote: This integral can just as easily be done using differentiation. We note that
$$ \left(-\frac{d}{ds}\right)^n \int_0^\infty e^{-st}\, dt = \int_0^\infty t^n e^{-st}\, dt. $$
Since $\int_0^\infty e^{-st}\, dt = \frac{1}{s}$, it follows that $\int_0^\infty t^n e^{-st}\, dt = \left(-\frac{d}{ds}\right)^n \frac{1}{s} = \frac{n!}{s^{n+1}}$.)

We could continue to integrate by parts until the final integral is computed. However, look at the integral that resulted after one integration by parts. It is just the Laplace transform of t^{n−1}. So, we can write the result as
$$ L[t^n] = \frac{n}{s}L[t^{n-1}]. $$

(We compute $\int_0^\infty t^n e^{-st}\, dt$ by turning it into an initial value problem for a first order difference equation and finding the solution using an iterative method.)

This is an example of a recursive definition of a sequence. In this case we have a sequence of integrals. Denoting
$$ I_n = L[t^n] = \int_0^\infty t^n e^{-st}\, dt $$
and noting that I₀ = L[1] = 1/s, we have the following:
$$ I_n = \frac{n}{s}I_{n-1}, \qquad I_0 = \frac{1}{s}. \qquad (9.94) $$

This is also what is called a difference equation. It is a first order difference equation with an "initial condition," I₀. The next step is to solve this difference equation.

Finding the solution of this first order difference equation is easy to do using simple iteration. Note that replacing n with n − 1, we have
$$ I_{n-1} = \frac{n-1}{s}I_{n-2}. $$
Repeating the process, we find
$$
\begin{aligned}
I_n &= \frac{n}{s}I_{n-1} \\
&= \frac{n}{s}\left(\frac{n-1}{s}I_{n-2}\right) = \frac{n(n-1)}{s^2}I_{n-2} \\
&= \frac{n(n-1)(n-2)}{s^3}I_{n-3}. \qquad (9.95)
\end{aligned}
$$
We can repeat this process until we get to I₀, which we know. We have to carefully count the number of iterations. We do this by iterating k times and then figuring out how many steps will get us to the known initial value. A list of iterates is easily written out:
$$
\begin{aligned}
I_n &= \frac{n}{s}I_{n-1} \\
&= \frac{n(n-1)}{s^2}I_{n-2} \\
&= \frac{n(n-1)(n-2)}{s^3}I_{n-3} \\
&= \cdots \\
&= \frac{n(n-1)(n-2)\cdots(n-k+1)}{s^k}I_{n-k}. \qquad (9.96)
\end{aligned}
$$
Since we know I₀ = 1/s, we choose to stop at k = n, obtaining
$$ I_n = \frac{n(n-1)(n-2)\cdots(2)(1)}{s^n}I_0 = \frac{n!}{s^{n+1}}. $$
Therefore, we have shown that L[tⁿ] = n!/s^{n+1}.

Such iterative techniques are useful in obtaining a variety of integrals, such as $I_n = \int_{-\infty}^{\infty} x^{2n}e^{-x^2}\, dx$.
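The recursion (9.94) is also a natural thing to implement. The following SymPy sketch is our own (the symbol s and the helper name laplace_tn are ours); it iterates I_n = (n/s) I_{n−1} starting from I₀ = 1/s and checks the result against n!/s^{n+1}.

```python
import sympy as sp

s = sp.symbols('s', positive=True)

def laplace_tn(n):
    """Compute L[t^n] by iterating the difference equation I_n = (n/s) I_{n-1}."""
    I = 1/s                       # I_0 = L[1]
    for k in range(1, n + 1):
        I = sp.Rational(k)/s * I
    return sp.simplify(I)

for n in range(5):
    assert sp.simplify(laplace_tn(n) - sp.factorial(n)/s**(n + 1)) == 0
    print(n, laplace_tn(n))
```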

As a final note, one can extend this result to cases when n is not an integer. To do this, we use the Gamma function, which was discussed in Section 5.4. Recall that the Gamma function is the generalization of the factorial function and is defined as
$$ \Gamma(x) = \int_0^\infty t^{x-1}e^{-t}\, dt. \qquad (9.97) $$


Note the similarity to the Laplace transform of t^{x−1}:
$$ L[t^{x-1}] = \int_0^\infty t^{x-1}e^{-st}\, dt. $$
For x − 1 an integer and s = 1, we have that
$$ \Gamma(x) = (x - 1)!. $$
Thus, the Gamma function can be viewed as a generalization of the factorial, and we have shown that
$$ L[t^p] = \frac{\Gamma(p + 1)}{s^{p+1}} $$
for p > −1.

Now we are ready to introduce additional properties of the Laplace transform in Table 9.3. We have already discussed the first property, which is a consequence of linearity of the integral transforms. We will prove the other properties in this and the following sections.

Table 9.3: Table of selected Laplace transform properties.

L[a f(t) + b g(t)] = aF(s) + bG(s)
L[t f(t)] = −dF(s)/ds
L[df/dt] = sF(s) − f(0)
L[d²f/dt²] = s²F(s) − s f(0) − f′(0)
L[e^{at} f(t)] = F(s − a)
L[H(t − a) f(t − a)] = e^{−as}F(s)
L[(f ∗ g)(t)] = L[∫₀^t f(t − u) g(u) du] = F(s)G(s)

Example 9.23. Show that L[df/dt] = sF(s) − f(0).

We have to compute
$$ L\left[\frac{df}{dt}\right] = \int_0^\infty \frac{df}{dt}e^{-st}\, dt. $$
We can move the derivative off f by integrating by parts. This is similar to what we had done when finding the Fourier transform of the derivative of a function. Letting u = e^{−st} and v = f(t), we have
$$
\begin{aligned}
L\left[\frac{df}{dt}\right] &= \int_0^\infty \frac{df}{dt}e^{-st}\, dt \\
&= f(t)e^{-st}\Big|_0^\infty + s\int_0^\infty f(t)e^{-st}\, dt \\
&= -f(0) + sF(s). \qquad (9.98)
\end{aligned}
$$
Here we have assumed that f(t)e^{−st} vanishes for large t. The final result is that
$$ L\left[\frac{df}{dt}\right] = sF(s) - f(0). $$


Example. Show that L[d²f/dt²] = s²F(s) − s f(0) − f′(0).

We can compute this Laplace transform using two integrations by parts, or we could make use of the last result. Letting g(t) = df(t)/dt, we have
$$ L\left[\frac{d^2 f}{dt^2}\right] = L\left[\frac{dg}{dt}\right] = sG(s) - g(0) = sG(s) - f'(0). $$
But,
$$ G(s) = L\left[\frac{df}{dt}\right] = sF(s) - f(0). $$
So,
$$
\begin{aligned}
L\left[\frac{d^2 f}{dt^2}\right] &= sG(s) - f'(0) \\
&= s\left[sF(s) - f(0)\right] - f'(0) \\
&= s^2 F(s) - s f(0) - f'(0). \qquad (9.99)
\end{aligned}
$$
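As a quick sanity check of these derivative rules, one can compare both sides for a concrete function. The SymPy sketch below is our own (the test function f(t) = cos 2t is an arbitrary choice); it verifies the first and second derivative properties.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.cos(2*t)
F = sp.laplace_transform(f, t, s, noconds=True)     # s/(s**2 + 4)

lhs1 = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs1 = s*F - f.subs(t, 0)                            # sF(s) - f(0)

lhs2 = sp.laplace_transform(sp.diff(f, t, 2), t, s, noconds=True)
rhs2 = s**2*F - s*f.subs(t, 0) - sp.diff(f, t).subs(t, 0)

print(sp.simplify(lhs1 - rhs1), sp.simplify(lhs2 - rhs2))   # expect 0 0
```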

We will return to the other properties in Table 9.3 after looking at a few applications.

9.8 Applications of Laplace Transforms

Although the Laplace transform is a very useful transform, it is often encountered only as a method for solving initial value problems in introductory differential equations. In this section we will show how to solve simple differential equations. Along the way we will introduce step and impulse functions and show how the Convolution Theorem for Laplace transforms plays a role in finding solutions. However, we will first explore an unrelated application of Laplace transforms. We will see that the Laplace transform is useful in finding sums of infinite series.

9.8.1 Series Summation Using Laplace Transforms

We saw in Chapter ?? that Fourier series can be used to sum series. For example, in Problem ??.13, one proves that
$$ \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}. $$
In this section we will show how Laplace transforms can be used to sum series. There is an interesting history of using integral transforms to sum series (see Albert D. Wheelon, Tables of Summable Series and Integrals Involving Bessel Functions, Holden-Day, 1968). For example, Richard Feynman (1918-1988; R. P. Feynman, 1949, Phys. Rev. 76, p. 769) described how one can use the convolution theorem for Laplace transforms to sum series with denominators that involve products. We will describe this and simpler sums in this section.

We begin by considering the Laplace transform of a known function,
$$ F(s) = \int_0^\infty f(t)e^{-st}\, dt. $$


Inserting this expression into the sum $\sum_n F(n)$ and interchanging the sum and integral, we find
$$
\begin{aligned}
\sum_{n=0}^{\infty} F(n) &= \sum_{n=0}^{\infty} \int_0^\infty f(t)e^{-nt}\, dt \\
&= \int_0^\infty f(t)\sum_{n=0}^{\infty}\left(e^{-t}\right)^n dt \\
&= \int_0^\infty f(t)\,\frac{1}{1 - e^{-t}}\, dt. \qquad (9.100)
\end{aligned}
$$
The last step was obtained using the sum of a geometric series. The key is being able to carry out the final integral, as we show in the next example.

Example 9.24. Evaluate the sum $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}$.

Since L[1] = 1/s, we have
$$
\begin{aligned}
\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} &= \sum_{n=1}^{\infty} \int_0^\infty (-1)^{n+1}e^{-nt}\, dt \\
&= \int_0^\infty \frac{e^{-t}}{1 + e^{-t}}\, dt \\
&= \int_1^2 \frac{du}{u} = \ln 2. \qquad (9.101)
\end{aligned}
$$

Example 9.25. Evaluate the sum $\sum_{n=1}^{\infty} \frac{1}{n^2}$.

This is a special case of the Riemann zeta function
$$ \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}. \qquad (9.102) $$
The Riemann zeta function is important in the study of prime numbers and more recently has seen applications in the study of dynamical systems. The series in this example is ζ(2). We have already seen in ??.13 that
$$ \zeta(2) = \frac{\pi^2}{6}. $$
Using Laplace transforms, we can provide an integral representation of ζ(2).

[Footnote: A translation of Riemann, Bernhard (1859), "Über die Anzahl der Primzahlen unter einer gegebenen Grösse" is in H. M. Edwards (1974), Riemann's Zeta Function, Academic Press. Riemann had shown that the Riemann zeta function can be obtained through a contour integral representation, $2\sin(\pi s)\,\Gamma(s)\,\zeta(s) = i\oint_C \frac{(-x)^{s-1}}{e^x - 1}\, dx$, for a specific contour C.]

The first step is to find the correct Laplace transform pair. The sum involves the function F(n) = 1/n². So, we look for a function f(t) whose Laplace transform is F(s) = 1/s². We know by now that the inverse Laplace transform of F(s) = 1/s² is f(t) = t. As before, we replace each term in the series by a Laplace transform, exchange the summation and integration, and sum the resulting geometric series:

$$
\begin{aligned}
\sum_{n=1}^{\infty} \frac{1}{n^2} &= \sum_{n=1}^{\infty} \int_0^\infty t e^{-nt}\, dt \\
&= \int_0^\infty \frac{t}{e^t - 1}\, dt. \qquad (9.103)
\end{aligned}
$$
So, we have that
$$ \int_0^\infty \frac{t}{e^t - 1}\, dt = \sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2). $$
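A quick numerical evaluation confirms this integral representation. The sketch below is our own; it uses SciPy's quad routine to compare the integral with π²/6.

```python
import numpy as np
from scipy.integrate import quad

# Integrand t/(e^t - 1); expm1 avoids loss of precision near t = 0,
# where the integrand tends to 1.
integral, err = quad(lambda t: t/np.expm1(t), 0, np.inf)

print(integral, np.pi**2/6)   # both approximately 1.6449...
```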


Integrals of this type occur often in statistical mechanics in the form of Bose-Einstein integrals. These are of the form
$$ G_n(z) = \int_0^\infty \frac{x^{n-1}}{z^{-1}e^x - 1}\, dx. $$
Note that G_n(1) = Γ(n)ζ(n).

In general the Riemann zeta function has to be tabulated through other means. In some special cases, one can find closed form expressions. For example,
$$ \zeta(2n) = \frac{2^{2n-1}\pi^{2n}}{(2n)!}B_n, $$
where the B_n's are the Bernoulli numbers. Bernoulli numbers are defined through the Maclaurin series expansion
$$ \frac{x}{e^x - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!}x^n. $$
The first few Riemann zeta functions are
$$ \zeta(2) = \frac{\pi^2}{6}, \qquad \zeta(4) = \frac{\pi^4}{90}, \qquad \zeta(6) = \frac{\pi^6}{945}. $$

We can extend this method of using Laplace transforms to summing series whose terms take special general forms. For example, from Feynman's 1949 paper we note that
$$ \frac{1}{(a + bn)^2} = -\frac{\partial}{\partial a}\int_0^\infty e^{-s(a + bn)}\, ds. $$
This identity can be shown easily by first noting
$$ \int_0^\infty e^{-s(a + bn)}\, ds = \left[\frac{-e^{-s(a + bn)}}{a + bn}\right]_0^\infty = \frac{1}{a + bn}. $$
Now, differentiate the result with respect to a and the result follows. The latter identity can be generalized further as
$$ \frac{1}{(a + bn)^{k+1}} = \frac{(-1)^k}{k!}\,\frac{\partial^k}{\partial a^k}\int_0^\infty e^{-s(a + bn)}\, ds. $$

In Feynman's 1949 paper, he develops methods for handling several other general sums using the convolution theorem. Wheelon gives more examples of these. We will just provide one such result and an example. First, we note that
$$ \frac{1}{ab} = \int_0^1 \frac{du}{[a(1 - u) + bu]^2}. $$
However,
$$ \frac{1}{[a(1 - u) + bu]^2} = \int_0^\infty t\, e^{-t[a(1 - u) + bu]}\, dt. $$
So, we have
$$ \frac{1}{ab} = \int_0^1 du \int_0^\infty t\, e^{-t[a(1 - u) + bu]}\, dt. $$
We see in the next example how this representation can be useful.


Example 9.26. Evaluate $\sum_{n=0}^{\infty} \frac{1}{(2n+1)(2n+2)}$.

We sum this series by first letting a = 2n + 1 and b = 2n + 2 in the formula for 1/ab. Collecting the n-dependent terms, we can sum the series, leaving a double integral computation in ut-space. The details are as follows:
$$
\begin{aligned}
\sum_{n=0}^{\infty} \frac{1}{(2n+1)(2n+2)} &= \sum_{n=0}^{\infty} \int_0^1 \frac{du}{[(2n+1)(1-u) + (2n+2)u]^2} \\
&= \sum_{n=0}^{\infty} \int_0^1 du \int_0^\infty t\, e^{-t(2n+1+u)}\, dt \\
&= \int_0^1 du \int_0^\infty t\, e^{-t(1+u)}\sum_{n=0}^{\infty} e^{-2nt}\, dt \\
&= \int_0^\infty \frac{t\, e^{-t}}{1 - e^{-2t}}\int_0^1 e^{-tu}\, du\, dt \\
&= \int_0^\infty \frac{t\, e^{-t}}{1 - e^{-2t}}\,\frac{1 - e^{-t}}{t}\, dt \\
&= \int_0^\infty \frac{e^{-t}}{1 + e^{-t}}\, dt \\
&= -\ln(1 + e^{-t})\Big|_0^\infty = \ln 2. \qquad (9.104)
\end{aligned}
$$
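A direct partial sum shows how quickly the series approaches ln 2. The snippet below is our own, simple check.

```python
import math

partial = sum(1.0/((2*n + 1)*(2*n + 2)) for n in range(100000))
print(partial, math.log(2))   # close to ln 2; the tail beyond N terms is about 1/(4N)
```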

9.8.2 Solution of ODEs Using Laplace Transforms

One of the typical applications of Laplace transforms is the solution of nonhomogeneous linear constant coefficient differential equations. In the following examples we will show how this works.

The general idea is that one transforms the equation for an unknown function y(t) into an algebraic equation for its transform, Y(s). Typically, the algebraic equation is easy to solve for Y(s) as a function of s. Then, one transforms back into t-space using Laplace transform tables and the properties of Laplace transforms. The scheme is shown in Figure 9.30.

[Figure 9.30: The scheme for solving an ordinary differential equation using Laplace transforms. One transforms the initial value problem for y(t) and obtains an algebraic equation for Y(s). Solving for Y(s) and taking the inverse transform gives the solution to the initial value problem.]

Example 9.27. Solve the initial value problem y′ + 3y = e^{2t}, y(0) = 1.

The first step is to perform a Laplace transform of the initial value problem. The transform of the left side of the equation is
$$ L[y' + 3y] = sY - y(0) + 3Y = (s + 3)Y - 1. $$


Transforming the right hand side, we have
$$ L[e^{2t}] = \frac{1}{s - 2}. $$
Combining these two results, we obtain
$$ (s + 3)Y - 1 = \frac{1}{s - 2}. $$
The next step is to solve for Y(s):
$$ Y(s) = \frac{1}{s + 3} + \frac{1}{(s - 2)(s + 3)}. $$
Now, we need to find the inverse Laplace transform. Namely, we need to figure out what function has a Laplace transform of the above form. We will use the tables of Laplace transform pairs. Later we will show that there are other methods for carrying out the Laplace transform inversion.

The inverse transform of the first term is e^{−3t}. However, we have not seen anything that looks like the second form in the table of transforms that we have compiled; but we can rewrite the second term by using a partial fraction decomposition. Let's recall how to do this.

The goal is to find constants A and B such that
$$ \frac{1}{(s - 2)(s + 3)} = \frac{A}{s - 2} + \frac{B}{s + 3}. \qquad (9.105) $$
(This is an example of carrying out a partial fraction decomposition.) We picked this form because we know that recombining the two terms into one term will have the same denominator. We just need to make sure the numerators agree afterwards. So, adding the two terms, we have
$$ \frac{1}{(s - 2)(s + 3)} = \frac{A(s + 3) + B(s - 2)}{(s - 2)(s + 3)}. $$
Equating numerators,
$$ 1 = A(s + 3) + B(s - 2). $$

There are several ways to proceed at this point.

a. Method 1.

We can rewrite the equation by gathering terms with common powers of s:
$$ (A + B)s + 3A - 2B = 1. $$
The only way that this can be true for all s is that the coefficients of the different powers of s agree on both sides. This leads to two equations for A and B:
$$
\begin{aligned}
A + B &= 0, \\
3A - 2B &= 1. \qquad (9.106)
\end{aligned}
$$
The first equation gives A = −B, so the second equation becomes −5B = 1. The solution is then A = −B = 1/5.


b. Method 2.

Since the equation $\frac{1}{(s-2)(s+3)} = \frac{A}{s-2} + \frac{B}{s+3}$ is true for all s, we can pick specific values. For s = 2, we find 1 = 5A, or A = 1/5. For s = −3, we find 1 = −5B, or B = −1/5. Thus, we obtain the same result as Method 1, but much quicker.

[Figure 9.31: A plot of the solution to Example 9.27.]

c. Method 3.

We could just inspect the original partial fraction problem. Since the numerator has no s terms, we might guess the form
$$ \frac{1}{(s - 2)(s + 3)} = \frac{1}{s - 2} - \frac{1}{s + 3}. $$
But, recombining the terms on the right hand side, we see that
$$ \frac{1}{s - 2} - \frac{1}{s + 3} = \frac{5}{(s - 2)(s + 3)}. $$
Since we were off by 5, we divide the partial fractions by 5 to obtain
$$ \frac{1}{(s - 2)(s + 3)} = \frac{1}{5}\left[\frac{1}{s - 2} - \frac{1}{s + 3}\right], $$
which once again gives the desired form.

Returning to the problem, we have found that
$$ Y(s) = \frac{1}{s + 3} + \frac{1}{5}\left(\frac{1}{s - 2} - \frac{1}{s + 3}\right). $$
We can now see that the function with this Laplace transform is given by
$$ y(t) = L^{-1}\left[\frac{1}{s + 3} + \frac{1}{5}\left(\frac{1}{s - 2} - \frac{1}{s + 3}\right)\right] = e^{-3t} + \frac{1}{5}\left(e^{2t} - e^{-3t}\right). $$
Simplifying, we have the solution of the initial value problem,
$$ y(t) = \frac{1}{5}e^{2t} + \frac{4}{5}e^{-3t}. $$
We can verify that we have solved the initial value problem:
$$ y' + 3y = \frac{2}{5}e^{2t} - \frac{12}{5}e^{-3t} + 3\left(\frac{1}{5}e^{2t} + \frac{4}{5}e^{-3t}\right) = e^{2t} $$
and y(0) = 1/5 + 4/5 = 1.
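The same verification can be done symbolically. The SymPy sketch below is our own; it checks that the claimed solution of Example 9.27 satisfies the differential equation and the initial condition, and that its transform matches the Y(s) found above.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

y = sp.exp(2*t)/5 + 4*sp.exp(-3*t)/5

print(sp.simplify(sp.diff(y, t) + 3*y - sp.exp(2*t)))   # expect 0
print(y.subs(t, 0))                                     # expect 1

# Its Laplace transform should match Y(s) = 1/(s+3) + 1/((s-2)(s+3)).
Y = 1/(s + 3) + 1/((s - 2)*(s + 3))
print(sp.simplify(sp.laplace_transform(y, t, s, noconds=True) - Y))   # expect 0
```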

Example 9.28. Solve the initial value problem y″ + 4y = 0, y(0) = 1, y′(0) = 3.

We can probably solve this without Laplace transforms, but it is a simple exercise. Transforming the equation, we have
$$
\begin{aligned}
0 &= s^2 Y - s y(0) - y'(0) + 4Y \\
&= (s^2 + 4)Y - s - 3. \qquad (9.107)
\end{aligned}
$$
Solving for Y, we have
$$ Y(s) = \frac{s + 3}{s^2 + 4}. $$


We now ask if we recognize the transform pair needed. The denominator looks like the type needed for the transform of a sine or cosine. We just need to play with the numerator. Splitting the expression into two terms, we have
$$ Y(s) = \frac{s}{s^2 + 4} + \frac{3}{s^2 + 4}. $$

[Figure 9.32: A plot of the solution to Example 9.28.]

The first term is now recognizable as the transform of cos 2t. The second term is not the transform of sin 2t. It would be if the numerator were a 2. This can be corrected by multiplying and dividing by 2:
$$ \frac{3}{s^2 + 4} = \frac{3}{2}\left(\frac{2}{s^2 + 4}\right). $$
The solution is then found as
$$ y(t) = L^{-1}\left[\frac{s}{s^2 + 4} + \frac{3}{2}\left(\frac{2}{s^2 + 4}\right)\right] = \cos 2t + \frac{3}{2}\sin 2t. $$
The reader can verify that this is the solution of the initial value problem.

9.8.3 Step and Impulse Functions

Often the initial value problems that one faces in differential equations courses can be solved using either the Method of Undetermined Coefficients or the Method of Variation of Parameters. However, using the latter can be messy and involves some skill with integration. Many circuit designs can be modeled with systems of differential equations using Kirchhoff's Rules. Such systems can get fairly complicated. However, Laplace transforms can be used to solve such systems, and electrical engineers have long used such methods in circuit analysis.

In this section we add a couple more transform pairs and transform properties that are useful in accounting for things like turning on a driving force, using periodic functions like a square wave, or introducing impulse forces.

We first recall the Heaviside step function, given by
$$ H(t) = \begin{cases} 0, & t < 0, \\ 1, & t > 0. \end{cases} \qquad (9.108) $$

[Figure 9.33: A shifted Heaviside function, H(t − a).]

A more general version of the step function is the horizontally shifted step function, H(t − a). This function is shown in Figure 9.33. The Laplace transform of this function is found for a > 0 as
$$
\begin{aligned}
L[H(t - a)] &= \int_0^\infty H(t - a)e^{-st}\, dt \\
&= \int_a^\infty e^{-st}\, dt \\
&= \left.-\frac{e^{-st}}{s}\right|_a^\infty = \frac{e^{-as}}{s}. \qquad (9.109)
\end{aligned}
$$


Just like the Fourier transform, the Laplace transform has two shift theorems involving the multiplication of the function, f(t), or its transform, F(s), by exponentials. The first and second shifting properties/theorems are given by
$$
\begin{aligned}
L[e^{at}f(t)] &= F(s - a), \qquad (9.110) \\
L[f(t - a)H(t - a)] &= e^{-as}F(s). \qquad (9.111)
\end{aligned}
$$
We prove the First Shift Theorem and leave the other proof as an exercise for the reader. Namely,
$$
L[e^{at}f(t)] = \int_0^\infty e^{at}f(t)e^{-st}\, dt = \int_0^\infty f(t)e^{-(s - a)t}\, dt = F(s - a). \qquad (9.112)
$$

Example 9.29. Compute the Laplace transform of e^{−at} sin ωt.

This function arises as the solution of the underdamped harmonic oscillator. We first note that the exponential multiplies a sine function. The Shift Theorem tells us that we first need the transform of the sine function. So, for f(t) = sin ωt, we have
$$ F(s) = \frac{\omega}{s^2 + \omega^2}. $$
Using this transform, we can obtain the solution to this problem as
$$ L[e^{-at}\sin \omega t] = F(s + a) = \frac{\omega}{(s + a)^2 + \omega^2}. $$

More interesting examples can be found using piecewise defined functions. First we consider the function H(t) − H(t − a). For t < 0 both terms are zero. In the interval [0, a], the function H(t) = 1 and H(t − a) = 0. Therefore, H(t) − H(t − a) = 1 for t ∈ [0, a]. Finally, for t > a, both functions are one and therefore the difference is zero. The graph of H(t) − H(t − a) is shown in Figure 9.34.

[Figure 9.34: The box function, H(t) − H(t − a).]

We now consider the piecewise defined function
$$ g(t) = \begin{cases} f(t), & 0 \leq t \leq a, \\ 0, & t < 0, \ t > a. \end{cases} $$
This function can be rewritten in terms of step functions. We only need to multiply f(t) by the above box function,
$$ g(t) = f(t)\left[H(t) - H(t - a)\right]. $$
We depict this in Figure 9.35.

[Figure 9.35: Formation of a piecewise function, f(t)[H(t) − H(t − a)].]

Even more complicated functions can be written in terms of step functions. We only need to look at sums of functions of the form f(t)[H(t − a) − H(t − b)] for b > a. This is similar to a box function. It is nonzero between a and b and has height f(t).


We show as an example the square wave function in Figure 9.36. It can be represented as a sum of an infinite number of boxes,
$$ f(t) = \sum_{n=-\infty}^{\infty}\left[H(t - 2na) - H(t - (2n + 1)a)\right], $$
for a > 0.

Example 9.30. Find the Laplace transform of a square wave "turned on" at t = 0.

[Figure 9.36: A square wave, f(t) = Σ_{n=−∞}^{∞} [H(t − 2na) − H(t − (2n + 1)a)].]

We let
$$ f(t) = \sum_{n=0}^{\infty}\left[H(t - 2na) - H(t - (2n + 1)a)\right], \quad a > 0. $$

Using the properties of the Heaviside function, we have
$$
\begin{aligned}
L[f(t)] &= \sum_{n=0}^{\infty}\left[L[H(t - 2na)] - L[H(t - (2n + 1)a)]\right] \\
&= \sum_{n=0}^{\infty}\left[\frac{e^{-2nas}}{s} - \frac{e^{-(2n+1)as}}{s}\right] \\
&= \frac{1 - e^{-as}}{s}\sum_{n=0}^{\infty}\left(e^{-2as}\right)^n \\
&= \frac{1 - e^{-as}}{s}\left(\frac{1}{1 - e^{-2as}}\right) = \frac{1 - e^{-as}}{s(1 - e^{-2as})}. \qquad (9.113)
\end{aligned}
$$
Note that the third line in the derivation is a geometric series. We summed this series to get the answer in a compact form since e^{−2as} < 1.
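One can spot-check (9.113) by truncating the series of Heaviside terms and evaluating the Laplace integral numerically. The sketch below is our own (a = 1 and s = 0.5 are arbitrary test values); it compares a quadrature of the truncated square wave against the closed form.

```python
import numpy as np

a, s = 1.0, 0.5
t = np.linspace(0, 60, 600001)        # e^{-st} is negligible beyond t = 60

# Truncated square wave: sum of boxes H(t - 2na) - H(t - (2n+1)a).
f = np.zeros_like(t)
for n in range(40):
    f += (t >= 2*n*a).astype(float) - (t >= (2*n + 1)*a).astype(float)

numeric = np.trapz(f*np.exp(-s*t), t)
closed = (1 - np.exp(-a*s)) / (s*(1 - np.exp(-2*a*s)))
print(numeric, closed)                # both approximately 1.2449
```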

[Figure 9.37: Plot representing impulse forces of height f(a_i). The sum Σ_{i=1}^{n} f(a_i)δ(x − a_i) describes a general impulse function.]

Other interesting examples are provided by the delta function. The Dirac delta function can be used to represent a unit impulse. Summing over a number of impulses, or point sources, we can describe a general function as shown in Figure 9.37. The sum of impulses located at points a_i, i = 1, …, n, with strengths f(a_i) would be given by
$$ f(x) = \sum_{i=1}^{n} f(a_i)\,\delta(x - a_i). $$
A continuous sum could be written as
$$ f(x) = \int_{-\infty}^{\infty} f(\xi)\,\delta(x - \xi)\, d\xi. $$


This is simply an application of the sifting property of the delta function. We will investigate a case when one would use a single impulse. While a mass on a spring is undergoing simple harmonic motion, we hit it for an instant at time t = a. In such a case, we could represent the force as a multiple of δ(t − a).

One would then need the Laplace transform of the delta function to solve the associated initial value problem. Inserting the delta function into the Laplace transform, we find that for a > 0,
$$
\begin{aligned}
L[\delta(t - a)] &= \int_0^\infty \delta(t - a)e^{-st}\, dt \\
&= \int_{-\infty}^{\infty} \delta(t - a)e^{-st}\, dt \\
&= e^{-as}. \qquad (9.114)
\end{aligned}
$$

Example 9.31. Solve the initial value problem y″ + 4π²y = δ(t − 2), y(0) = y′(0) = 0.

This initial value problem models a spring oscillation with an impulse force. Without the forcing term, given by the delta function, this spring is initially at rest and not stretched. The delta function models a unit impulse at t = 2. Of course, we anticipate that at this time the spring will begin to oscillate. We will solve this problem using Laplace transforms.

First, we transform the differential equation:
$$ s^2 Y - s y(0) - y'(0) + 4\pi^2 Y = e^{-2s}. $$
Inserting the initial conditions, we have
$$ (s^2 + 4\pi^2)Y = e^{-2s}. $$
Solving for Y(s), we obtain
$$ Y(s) = \frac{e^{-2s}}{s^2 + 4\pi^2}. $$

We now seek the function for which this is the Laplace transform. The form of this function is an exponential times some Laplace transform, F(s). Thus, we need the Second Shift Theorem, since the solution is of the form Y(s) = e^{−2s}F(s) for
$$ F(s) = \frac{1}{s^2 + 4\pi^2}. $$
We need to find the corresponding f(t) of the Laplace transform pair. The denominator in F(s) suggests a sine or cosine. Since the numerator is constant, we pick sine. From the tables of transforms, we have
$$ L[\sin 2\pi t] = \frac{2\pi}{s^2 + 4\pi^2}. $$
So, we write
$$ F(s) = \frac{1}{2\pi}\,\frac{2\pi}{s^2 + 4\pi^2}. $$
This gives f(t) = (2π)^{−1} sin 2πt.


We now apply the Second Shift Theorem, L[f(t − a)H(t − a)] = e^{−as}F(s), or
$$
\begin{aligned}
y(t) &= L^{-1}\left[e^{-2s}F(s)\right] = H(t - 2)f(t - 2) \\
&= \frac{1}{2\pi}H(t - 2)\sin 2\pi(t - 2). \qquad (9.115)
\end{aligned}
$$

[Figure 9.38: A plot of the solution to Example 9.31, in which a spring at rest experiences an impulse force at t = 2.]

This solution tells us that the mass is at rest until t = 2 and then begins to oscillate at its natural frequency. A plot of this solution is shown in Figure 9.38.

Example 9.32. Solve the initial value problem
$$ y'' + y = f(t), \quad y(0) = 0, \ y'(0) = 0, $$
where
$$ f(t) = \begin{cases} \cos \pi t, & 0 \leq t \leq 2, \\ 0, & \text{otherwise}. \end{cases} $$
We need the Laplace transform of f(t). This function can be written in terms of Heaviside functions, f(t) = cos πt [H(t) − H(t − 2)]. In order to apply the Second Shift Theorem, we need a shifted version of the cosine function. We find the shifted version by noting that cos π(t − 2) = cos πt. Thus, we have
$$
\begin{aligned}
f(t) &= \cos \pi t\left[H(t) - H(t - 2)\right] \\
&= \cos \pi t - \cos \pi(t - 2)H(t - 2), \quad t \geq 0. \qquad (9.116)
\end{aligned}
$$
The Laplace transform of this driving term is
$$ F(s) = (1 - e^{-2s})L[\cos \pi t] = (1 - e^{-2s})\frac{s}{s^2 + \pi^2}. $$
Now we can proceed to solve the initial value problem. The Laplace transform of the initial value problem yields
$$ (s^2 + 1)Y(s) = (1 - e^{-2s})\frac{s}{s^2 + \pi^2}. $$
Therefore,
$$ Y(s) = (1 - e^{-2s})\frac{s}{(s^2 + \pi^2)(s^2 + 1)}. $$
We can retrieve the solution to the initial value problem using the Second Shift Theorem. The solution is of the form Y(s) = (1 − e^{−2s})G(s) for
$$ G(s) = \frac{s}{(s^2 + \pi^2)(s^2 + 1)}. $$
Then, the final solution takes the form
$$ y(t) = g(t) - g(t - 2)H(t - 2). $$
We only need to find g(t) in order to finish the problem. This is easily done by using the partial fraction decomposition
$$ G(s) = \frac{s}{(s^2 + \pi^2)(s^2 + 1)} = \frac{1}{\pi^2 - 1}\left[\frac{s}{s^2 + 1} - \frac{s}{s^2 + \pi^2}\right]. $$


Then,
$$ g(t) = L^{-1}\left[\frac{s}{(s^2 + \pi^2)(s^2 + 1)}\right] = \frac{1}{\pi^2 - 1}\left(\cos t - \cos \pi t\right). $$
The final solution is then given by
$$ y(t) = \frac{1}{\pi^2 - 1}\left[\cos t - \cos \pi t - H(t - 2)\left(\cos(t - 2) - \cos \pi t\right)\right]. $$
A plot of this solution is shown in Figure 9.39.

[Figure 9.39: A plot of the solution to Example 9.32, in which a spring at rest experiences a piecewise defined force.]

9.9 The Convolution Theorem

Finally, we consider the convolution of two functions. Often we are faced with having the product of two Laplace transforms that we know and we seek the inverse transform of the product. For example, let's say we have obtained Y(s) = 1/((s − 1)(s − 2)) while trying to solve an initial value problem. In this case we could find a partial fraction decomposition. But there are other ways to find the inverse transform, especially if we cannot perform a partial fraction decomposition. We could use the Convolution Theorem for Laplace transforms or we could compute the inverse transform directly. We will look into these methods in the next two sections. We begin with defining the convolution.

We define the convolution of two functions defined on [0, ∞) much the same way as we had done for the Fourier transform. The convolution f ∗ g is defined as
$$ (f * g)(t) = \int_0^t f(u)\,g(t - u)\, du. \qquad (9.117) $$
Note that the convolution integral has finite limits, as opposed to the Fourier transform case.

The convolution operation has two important properties:

1. The convolution is commutative: f ∗ g = g ∗ f.

Proof. The key is to make a substitution y = t − u in the integral. This makes f a simple function of the integration variable.
$$
\begin{aligned}
(g * f)(t) &= \int_0^t g(u)\, f(t - u)\, du \\
&= -\int_t^0 g(t - y)\, f(y)\, dy \\
&= \int_0^t f(y)\, g(t - y)\, dy \\
&= (f * g)(t). \qquad (9.118)
\end{aligned}
$$

2. The Convolution Theorem: the Laplace transform of a convolution is the product of the Laplace transforms of the individual functions,
$$ L[f * g] = F(s)G(s). $$


Proof. Proving this theorem takes a bit more work. We will make some assumptions that will work in many cases. First, we assume that the functions are causal, f(t) = 0 and g(t) = 0 for t < 0. Secondly, we will assume that we can interchange integrals, which needs more rigorous attention than will be provided here. The first assumption will allow us to write the finite integral as an infinite integral. Then a change of variables will allow us to split the integral into the product of two integrals that are recognized as a product of two Laplace transforms.

Carrying out the computation, we have
$$
\begin{aligned}
L[f * g] &= \int_0^\infty \left(\int_0^t f(u)\,g(t - u)\, du\right) e^{-st}\, dt \\
&= \int_0^\infty \left(\int_0^\infty f(u)\,g(t - u)\, du\right) e^{-st}\, dt \\
&= \int_0^\infty f(u)\left(\int_0^\infty g(t - u)\,e^{-st}\, dt\right) du. \qquad (9.119)
\end{aligned}
$$

Now, make the substitution τ = t − u. We note that
$$
\int_0^\infty f(u)\left(\int_0^\infty g(t - u)\,e^{-st}\, dt\right) du
= \int_0^\infty f(u)\left(\int_{-u}^\infty g(\tau)\,e^{-s(\tau + u)}\, d\tau\right) du.
$$
However, since g(τ) is a causal function, we have that it vanishes for τ < 0 and we can change the integration interval to [0, ∞). So, after a little rearranging, we can proceed to the result.

$$
\begin{aligned}
L[f * g] &= \int_0^\infty f(u)\left(\int_0^\infty g(\tau)\,e^{-s(\tau + u)}\, d\tau\right) du \\
&= \int_0^\infty f(u)\,e^{-su}\left(\int_0^\infty g(\tau)\,e^{-s\tau}\, d\tau\right) du \\
&= \left(\int_0^\infty f(u)\,e^{-su}\, du\right)\left(\int_0^\infty g(\tau)\,e^{-s\tau}\, d\tau\right) = F(s)G(s). \qquad (9.120)
\end{aligned}
$$

We make use of the Convolution Theorem to do the following examples.

Example 9.33. Find $y(t) = L^{-1}\left[\frac{1}{(s-1)(s-2)}\right]$.

We note that this is a product of two functions,
$$ Y(s) = \frac{1}{(s - 1)(s - 2)} = \frac{1}{s - 1}\,\frac{1}{s - 2} = F(s)G(s). $$
We know the inverse transforms of the factors: f(t) = e^t and g(t) = e^{2t}. Using the Convolution Theorem, we find y(t) = (f ∗ g)(t). We compute the convolution:

We know the inverse transforms of the factors: f (t) = et and g(t) = e2t.Using the Convolution Theorem, we find y(t) = ( f ∗ g)(t). We compute the

convolution:

y(t) =∫ t

0f (u)g(t− u) du

Page 421: I N T R O D U C T I O N T O PA RT I A L D I F F E R E N T ...people.uncw.edu/hermanr/pde1/PDEbook/PDE_Main.pdf7.2.2 The Differential Equation for the Green’s Function . . 238

transform techniques in physics 409

=∫ t

0eue2(t−u) du

= e2t∫ t

0e−u du

= e2t[−et + 1] = e2t − et. (9.121)

One can also confirm this by carrying out a partial fraction decomposition.
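The convolution integral in (9.121) is also a one-liner for a computer algebra system. The SymPy sketch below is our own; it evaluates the integral and shows the partial fraction decomposition that leads to the same answer.

```python
import sympy as sp

t, u, s = sp.symbols('t u s', positive=True)

# Convolution (f*g)(t) with f(t) = e^t and g(t) = e^{2t}.
conv = sp.integrate(sp.exp(u)*sp.exp(2*(t - u)), (u, 0, t))
print(sp.simplify(conv))                      # expect exp(2*t) - exp(t)

# Partial fractions of Y(s) = 1/((s-1)(s-2)) lead to the same inverse transform.
Y = sp.apart(1/((s - 1)*(s - 2)), s)
print(Y)                                      # expect 1/(s - 2) - 1/(s - 1)
```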

Example 9.34. Consider the initial value problem y″ + 9y = 2 sin 3t, y(0) = 1, y′(0) = 0.

The Laplace transform of this problem is given by
$$ (s^2 + 9)Y - s = \frac{6}{s^2 + 9}. $$
Solving for Y(s), we obtain
$$ Y(s) = \frac{6}{(s^2 + 9)^2} + \frac{s}{s^2 + 9}. $$

The inverse Laplace transform of the second term is easily found as cos(3t); however, the first term is more complicated.

We can use the Convolution Theorem to find the inverse Laplace transform of the first term. We note that
$$ \frac{6}{(s^2 + 9)^2} = \frac{2}{3}\,\frac{3}{s^2 + 9}\,\frac{3}{s^2 + 9} $$
is a product of two Laplace transforms (up to the constant factor). Thus,
$$ L^{-1}\left[\frac{6}{(s^2 + 9)^2}\right] = \frac{2}{3}(f * g)(t), $$

where f(t) = g(t) = sin 3t. Evaluating this convolution product, we have
$$
\begin{aligned}
L^{-1}\left[\frac{6}{(s^2 + 9)^2}\right] &= \frac{2}{3}(f * g)(t) \\
&= \frac{2}{3}\int_0^t \sin 3u\,\sin 3(t - u)\, du \\
&= \frac{1}{3}\int_0^t \left[\cos 3(2u - t) - \cos 3t\right] du \\
&= \frac{1}{3}\left[\frac{1}{6}\sin(6u - 3t) - u\cos 3t\right]_0^t \\
&= \frac{1}{9}\sin 3t - \frac{1}{3}t\cos 3t. \qquad (9.122)
\end{aligned}
$$

[Figure 9.40: Plot of the solution to Example 9.34 showing a resonance.]

Combining this with the inverse transform of the second term of Y(s), the solution to the initial value problem is
$$ y(t) = -\frac{1}{3}t\cos 3t + \frac{1}{9}\sin 3t + \cos 3t. $$
Note that the amplitude of the solution will grow in time from the first term. You can see this in Figure 9.40. This is known as a resonance.
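A quick numerical check of the convolution in (9.122): the sketch below is our own; it evaluates (2/3)∫₀ᵗ sin 3u sin 3(t − u) du by quadrature at a few values of t and compares with (1/9) sin 3t − (1/3) t cos 3t.

```python
import numpy as np
from scipy.integrate import quad

def conv(t):
    val, _ = quad(lambda u: np.sin(3*u)*np.sin(3*(t - u)), 0, t)
    return 2.0/3.0*val

for t in [0.5, 1.0, 2.5]:
    formula = np.sin(3*t)/9 - t*np.cos(3*t)/3
    print(t, conv(t), formula)   # the two columns should agree
```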


Example 9.35. Find $L^{-1}\left[\frac{6}{(s^2+9)^2}\right]$ using a partial fraction decomposition.

If we look at Table 9.2, we see that the Laplace transform pairs with the denominator (s² + ω²)² are
$$ L[t\sin \omega t] = \frac{2\omega s}{(s^2 + \omega^2)^2} $$
and
$$ L[t\cos \omega t] = \frac{s^2 - \omega^2}{(s^2 + \omega^2)^2}. $$
So, we might consider rewriting a partial fraction decomposition as
$$ \frac{6}{(s^2 + 9)^2} = \frac{A\,6s}{(s^2 + 9)^2} + \frac{B(s^2 - 9)}{(s^2 + 9)^2} + \frac{Cs + D}{s^2 + 9}. $$
Combining the terms on the right over a common denominator, we find
$$ 6 = 6As + B(s^2 - 9) + (Cs + D)(s^2 + 9). $$
Collecting like powers of s, we have
$$ Cs^3 + (D + B)s^2 + (6A + 9C)s + 9(D - B) = 6. $$
Therefore, C = 0, A = 0, D + B = 0, and D − B = 2/3. Solving the last two equations, we find D = −B = 1/3.

Using these results, we find
$$ \frac{6}{(s^2 + 9)^2} = -\frac{1}{3}\,\frac{s^2 - 9}{(s^2 + 9)^2} + \frac{1}{3}\,\frac{1}{s^2 + 9}. $$
This is the result we had obtained in the last example using the Convolution Theorem.

9.10 The Inverse Laplace Transform

Until this point we have seen that the inverse Laplace transform can be found by making use of Laplace transform tables and properties of Laplace transforms. This is typically the way Laplace transforms are taught and used in a differential equations course. One can do the same for Fourier transforms. However, in the case of Fourier transforms we introduced an inverse transform in the form of an integral. Does such an inverse integral transform exist for the Laplace transform? Yes, it does! In this section we will derive the inverse Laplace transform integral and show how it is used.

We begin by considering a causal function $f(t)$ which vanishes for $t < 0$ and define the function $g(t) = f(t)e^{-ct}$ with $c > 0$. (A function $f(t)$ is said to be of exponential order if $\int_0^\infty |f(t)| e^{-ct}\, dt < \infty$.) For $g(t)$ absolutely integrable,
\[
\int_{-\infty}^{\infty} |g(t)|\, dt = \int_0^{\infty} |f(t)| e^{-ct}\, dt < \infty,
\]
we can write the Fourier transform,
\[
\hat{g}(\omega) = \int_{-\infty}^{\infty} g(t) e^{i\omega t}\, dt = \int_0^{\infty} f(t) e^{i\omega t - ct}\, dt,
\]


and the inverse Fourier transform,
\[
g(t) = f(t) e^{-ct} = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{g}(\omega) e^{-i\omega t}\, d\omega.
\]
Multiplying by $e^{ct}$ and inserting $\hat{g}(\omega)$ into the integral for $g(t)$, we find
\[
f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_0^{\infty} f(\tau) e^{(i\omega - c)\tau}\, d\tau\, e^{-(i\omega - c) t}\, d\omega.
\]
Letting $s = c - i\omega$ (so $d\omega = i\, ds$), we have
\[
f(t) = \frac{i}{2\pi} \int_{c+i\infty}^{c-i\infty} \int_0^{\infty} f(\tau) e^{-s\tau}\, d\tau\, e^{st}\, ds.
\]
Note that the inside integral is simply $F(s)$. So, we have
\begin{equation}
f(t) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} F(s) e^{st}\, ds. \tag{9.123}
\end{equation}

The integral in the last equation is the inverse Laplace transform, called the Bromwich integral, and is named after Thomas John I'Anson Bromwich (1875-1929). This inverse transform is not usually covered in differential equations courses because the integration takes place in the complex plane. This integral is evaluated along a path in the complex plane called the Bromwich contour. The typical way to compute this integral is to first choose $c$ so that all poles are to the left of the contour. This guarantees that $f(t)$ is of exponential type. The contour is closed with a semicircle enclosing all of the poles. One then relies on a generalization of Jordan's lemma to the second and third quadrants.9

9 Closing the contour to the left of the contour can be reasoned in a manner similar to what we saw in Jordan's Lemma. Write the exponential as $e^{st} = e^{(s_R + i s_I)t} = e^{s_R t} e^{i s_I t}$. The second factor is an oscillating factor, and the growth in the exponential can only come from the first factor. In order for the exponential to decay as the radius of the semicircle grows, we need $s_R t < 0$. Since $t > 0$, we need $s_R < 0$, which is accomplished by closing the contour to the left. If $t < 0$, then the contour to the right would enclose no singularities and preserve the causality of $f(t)$.

Figure 9.41: The contour used for applying the Bromwich integral to the Laplace transform $F(s) = \frac{1}{s(s+1)}$. The contour runs from $c - iR$ to $c + iR$ and is closed by a semicircle $C_R$ to the left, enclosing the poles at $s = 0$ and $s = -1$.

Example 9.36. Find the inverse Laplace transform of $F(s) = \frac{1}{s(s+1)}$.

The integral we have to compute is
\[
f(t) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{e^{st}}{s(s+1)}\, ds.
\]

This integral has poles at $s = 0$ and $s = -1$. The contour we will use is shown in Figure 9.41. We enclose the contour with a semicircle to the left of the path in the complex $s$-plane. One has to verify that the integral over the semicircle vanishes as the radius goes to infinity. Assuming that we have done this, the result is simply obtained as $2\pi i$ times the sum of the residues. The residues in this case are:
\[
\operatorname{Res}\left[\frac{e^{zt}}{z(z+1)};\, z = 0\right] = \lim_{z \to 0} \frac{e^{zt}}{z+1} = 1
\]
and
\[
\operatorname{Res}\left[\frac{e^{zt}}{z(z+1)};\, z = -1\right] = \lim_{z \to -1} \frac{e^{zt}}{z} = -e^{-t}.
\]
Therefore, we have
\[
f(t) = 2\pi i \left[\frac{1}{2\pi i}(1) + \frac{1}{2\pi i}(-e^{-t})\right] = 1 - e^{-t}.
\]


We can verify this result using the Convolution Theorem or using a partial fraction decomposition. The latter method is simplest. We note that
\[
\frac{1}{s(s+1)} = \frac{1}{s} - \frac{1}{s+1}.
\]
The first term leads to an inverse transform of $1$ and the second term gives $e^{-t}$. So,
\[
\mathcal{L}^{-1}\left[\frac{1}{s} - \frac{1}{s+1}\right] = 1 - e^{-t}.
\]

Thus, we have verified the result from doing contour integration.

Example 9.37. Find the inverse Laplace transform of $F(s) = \frac{1}{s(1+e^s)}$.

In this case, we need to compute
\[
f(t) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{e^{st}}{s(1+e^s)}\, ds.
\]
This integral has poles at complex values of $s$ such that $1 + e^s = 0$, or $e^s = -1$. Letting $s = x + iy$, we see that
\[
e^s = e^{x+iy} = e^x(\cos y + i \sin y) = -1.
\]
We see that $x = 0$ and $y$ satisfies $\cos y = -1$ and $\sin y = 0$. Therefore, $y = n\pi$ for $n$ an odd integer. Thus, the integrand has an infinite number of simple poles at $s = n\pi i$, $n = \pm 1, \pm 3, \ldots$. It also has a simple pole at $s = 0$.

Figure 9.42: The contour used for applying the Bromwich integral to the Laplace transform $F(s) = \frac{1}{s(1+e^s)}$. The poles lie at $s = 0$ and at $s = n\pi i$ for odd $n$, all to the left of the vertical line through $c$, and the contour is closed by a semicircle $C_R$ to the left.

In Figure 9.42 we indicate the poles. We need to compute the residues at each pole. At $s = n\pi i$ we have
\begin{align}
\operatorname{Res}\left[\frac{e^{st}}{s(1+e^s)};\, s = n\pi i\right] &= \lim_{s \to n\pi i} (s - n\pi i) \frac{e^{st}}{s(1+e^s)} \nonumber \\
&= \lim_{s \to n\pi i} \frac{e^{st}}{s e^s} \nonumber \\
&= -\frac{e^{n\pi i t}}{n\pi i}, \quad n \text{ odd}. \tag{9.124}
\end{align}
At $s = 0$, the residue is
\[
\operatorname{Res}\left[\frac{e^{st}}{s(1+e^s)};\, s = 0\right] = \lim_{s \to 0} \frac{e^{st}}{1+e^s} = \frac{1}{2}.
\]
Summing the residues and noting that the exponentials for $\pm n$ can be combined to form sine functions, we arrive at the inverse transform,
\begin{align}
f(t) &= \frac{1}{2} - \sum_{n\ \mathrm{odd}} \frac{e^{n\pi i t}}{n\pi i} \nonumber \\
&= \frac{1}{2} - 2 \sum_{k=1}^{\infty} \frac{\sin\left((2k-1)\pi t\right)}{(2k-1)\pi}. \tag{9.125}
\end{align}

The series in this example might look familiar. It is a Fourier sine series with odd harmonics whose amplitudes decay like $1/n$. It is a vertically shifted square wave. In fact, we had computed the Laplace transform of a general square wave in Example 9.30.


Figure 9.43: Plot of the square wave result as the inverse Laplace transform of $F(s) = \frac{1}{s(1+e^s)}$ with 50 terms.

In that example we found
\begin{align}
\mathcal{L}\left[\sum_{n=0}^{\infty} \left[H(t - 2na) - H(t - (2n+1)a)\right]\right] &= \frac{1 - e^{-as}}{s(1 - e^{-2as})} \nonumber \\
&= \frac{1}{s(1 + e^{-as})}. \tag{9.126}
\end{align}
In this example, one can show that
\[
f(t) = \sum_{n=0}^{\infty} \left[H(t - 2n + 1) - H(t - 2n)\right].
\]
The reader should verify that this result is indeed the square wave shown in Figure 9.43.
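As an illustrative aside (not in the original text), the partial sums of the series (9.125) can be plotted to reproduce the shifted square wave of Figure 9.43. A minimal MATLAB sketch with 50 terms:

% Partial sum of f(t) = 1/2 - 2*sum_{k=1}^{50} sin((2k-1)*pi*t)/((2k-1)*pi)
t = linspace(0, 8, 2000);
f = 0.5*ones(size(t));
for k = 1:50
    f = f - 2*sin((2*k-1)*pi*t)/((2*k-1)*pi);   % add the k-th odd harmonic
end
plot(t, f)
xlabel('t'), ylabel('f(t)')
title('50-term partial sum of the inverse transform of 1/(s(1+e^s))')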

9.11 Transforms and Partial Differential Equations

As another application of the transforms, we will see that we can use transforms to solve some linear partial differential equations. We will first solve the one-dimensional heat equation and the two-dimensional Laplace equation using Fourier transforms. The transforms of the partial differential equations lead to ordinary differential equations which are easier to solve. The final solutions are then obtained using inverse transforms.

We could go further by applying a Fourier transform in space and a Laplace transform in time to convert the heat equation into an algebraic equation. We will also show that we can use a finite sine transform to solve nonhomogeneous problems on finite intervals. Along the way we will identify several Green's functions.

9.11.1 Fourier Transform and the Heat Equation

We will first consider the solution of the heat equation on an infinite interval using Fourier transforms. The basic scheme has been discussed earlier and is outlined in Figure 9.44.


Figure 9.44: Using Fourier transforms to solve a linear partial differential equation. The initial profile $u(x,0)$ is transformed to $\hat{u}(k,0)$, the PDE $u_t = \alpha u_{xx}$ becomes the ODE $\hat{u}_t = -\alpha k^2 \hat{u}$, and the solution $u(x,t)$ is recovered from $\hat{u}(k,t)$ by the inverse Fourier transform.

Consider the heat equation on the infinite line,
\begin{align}
u_t &= \alpha u_{xx}, \quad -\infty < x < \infty, \quad t > 0, \nonumber \\
u(x, 0) &= f(x), \quad -\infty < x < \infty. \tag{9.127}
\end{align}
We can Fourier transform the heat equation using the Fourier transform of $u(x, t)$,
\[
\mathcal{F}[u(x,t)] = \hat{u}(k, t) = \int_{-\infty}^{\infty} u(x, t) e^{ikx}\, dx.
\]
We need to transform the derivatives in the equation. First we note that
\begin{align}
\mathcal{F}[u_t] &= \int_{-\infty}^{\infty} \frac{\partial u(x,t)}{\partial t} e^{ikx}\, dx \nonumber \\
&= \frac{\partial}{\partial t} \int_{-\infty}^{\infty} u(x,t) e^{ikx}\, dx \nonumber \\
&= \frac{\partial \hat{u}(k,t)}{\partial t}. \tag{9.128}
\end{align}
Assuming that $\lim_{|x| \to \infty} u(x,t) = 0$ and $\lim_{|x| \to \infty} u_x(x,t) = 0$, we also have that
\begin{align}
\mathcal{F}[u_{xx}] &= \int_{-\infty}^{\infty} \frac{\partial^2 u(x,t)}{\partial x^2} e^{ikx}\, dx \nonumber \\
&= -k^2 \hat{u}(k,t). \tag{9.129}
\end{align}
Therefore, the heat equation becomes (the transformed heat equation)
\[
\frac{\partial \hat{u}(k,t)}{\partial t} = -\alpha k^2 \hat{u}(k,t).
\]
This is a first order differential equation which is readily solved as
\[
\hat{u}(k,t) = A(k) e^{-\alpha k^2 t},
\]
where $A(k)$ is an arbitrary function of $k$. The inverse Fourier transform is
\begin{align}
u(x,t) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{u}(k,t) e^{-ikx}\, dk \nonumber \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} A(k) e^{-\alpha k^2 t} e^{-ikx}\, dk. \tag{9.130}
\end{align}


We can determine $A(k)$ using the initial condition. Note that
\[
\mathcal{F}[u(x,0)] = \hat{u}(k, 0) = \int_{-\infty}^{\infty} f(x) e^{ikx}\, dx.
\]
But we also have from the solution that
\[
u(x, 0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} A(k) e^{-ikx}\, dk.
\]
Comparing these two expressions for $\hat{u}(k,0)$, we see that
\[
A(k) = \mathcal{F}[f(x)].
\]
We note that $\hat{u}(k,t)$ is given by the product of two Fourier transforms, $\hat{u}(k,t) = A(k) e^{-\alpha k^2 t}$. So, by the Convolution Theorem, we expect that $u(x,t)$ is the convolution of the inverse transforms,
\[
u(x,t) = (f * g)(x,t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(\xi) g(x - \xi, t)\, d\xi,
\]
where
\[
g(x, t) = \mathcal{F}^{-1}[e^{-\alpha k^2 t}].
\]
In order to determine $g(x,t)$, we need only recall Example 9.5. In that example we saw that the Fourier transform of a Gaussian is a Gaussian. Namely, we found that
\[
\mathcal{F}[e^{-a x^2/2}] = \sqrt{\frac{2\pi}{a}}\, e^{-k^2/2a},
\]
or
\[
\mathcal{F}^{-1}\left[\sqrt{\frac{2\pi}{a}}\, e^{-k^2/2a}\right] = e^{-a x^2/2}.
\]
Applying this to the current problem (with $a = 1/(2\alpha t)$), we have
\[
g(x) = \mathcal{F}^{-1}[e^{-\alpha k^2 t}] = \sqrt{\frac{\pi}{\alpha t}}\, e^{-x^2/4\alpha t}.
\]
Finally, we can write down the solution to the problem:
\[
u(x,t) = (f * g)(x,t) = \int_{-\infty}^{\infty} f(\xi)\, \frac{e^{-(x-\xi)^2/4\alpha t}}{\sqrt{4\pi\alpha t}}\, d\xi.
\]
The function in the integrand,
\[
K(x,t) = \frac{e^{-x^2/4\alpha t}}{\sqrt{4\pi\alpha t}},
\]
is called the heat kernel.
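As a brief illustration (an addition, not part of the original text), the heat-kernel solution can be evaluated numerically for a sample initial profile, here a box of unit height on $[-1, 1]$ with $\alpha = 1$; the trapezoidal rule is used for the convolution integral, and all grid choices below are arbitrary.

% Evaluate u(x,t) = int f(xi) exp(-(x-xi)^2/(4*alpha*t))/sqrt(4*pi*alpha*t) dxi
alpha = 1;  t = 0.1;
xi = linspace(-10, 10, 4001);                 % integration grid
f  = double(abs(xi) <= 1);                    % box initial condition
x  = linspace(-5, 5, 201);
u  = zeros(size(x));
for j = 1:length(x)
    K = exp(-(x(j)-xi).^2/(4*alpha*t))/sqrt(4*pi*alpha*t);   % heat kernel
    u(j) = trapz(xi, f.*K);                   % convolution by quadrature
end
plot(x, u, x, double(abs(x) <= 1), '--')
xlabel('x'), ylabel('u(x,t)')
legend('u(x,0.1)', 'u(x,0)')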

9.11.2 Laplace’s Equation on the Half Plane

We consider a steady state solution in two dimensions. In particular, we look for the steady state solution, $u(x,y)$, satisfying the two-dimensional Laplace equation on a semi-infinite slab with given boundary conditions as shown in Figure 9.45. The boundary value problem is given as
\begin{align}
u_{xx} + u_{yy} &= 0, \quad -\infty < x < \infty, \quad y > 0, \nonumber \\
u(x, 0) &= f(x), \quad -\infty < x < \infty, \nonumber \\
\lim_{y \to \infty} u(x, y) &= 0, \quad \lim_{|x| \to \infty} u(x, y) = 0. \tag{9.131}
\end{align}

Figure 9.45: This is the domain for a semi-infinite slab with boundary value $u(x, 0) = f(x)$ and governed by Laplace's equation, $\nabla^2 u = 0$, in the upper half plane.

This problem can be solved using a Fourier transform of $u(x,y)$ with respect to $x$. The transform scheme for doing this is shown in Figure 9.46. We begin by defining the Fourier transform
\[
\hat{u}(k, y) = \mathcal{F}[u] = \int_{-\infty}^{\infty} u(x, y) e^{ikx}\, dx.
\]
We can transform Laplace's equation. We first note from the properties of Fourier transforms that
\[
\mathcal{F}\left[\frac{\partial^2 u}{\partial x^2}\right] = -k^2 \hat{u}(k, y),
\]
if $\lim_{|x| \to \infty} u(x,y) = 0$ and $\lim_{|x| \to \infty} u_x(x,y) = 0$. Also,
\[
\mathcal{F}\left[\frac{\partial^2 u}{\partial y^2}\right] = \frac{\partial^2 \hat{u}(k,y)}{\partial y^2}.
\]
Thus, the transform of Laplace's equation gives the transformed Laplace equation, $\hat{u}_{yy} = k^2 \hat{u}$.

Figure 9.46: The transform scheme used to convert Laplace's equation to an ordinary differential equation, which is easier to solve: $u(x,0)$ is transformed to $\hat{u}(k,0)$, the equation $u_{xx} + u_{yy} = 0$ becomes $\hat{u}_{yy} = k^2\hat{u}$, and $u(x,y)$ is recovered from $\hat{u}(k,y)$ by the inverse Fourier transform.

This is a simple ordinary differential equation. We can solve this equation using the boundary conditions. The general solution is
\[
\hat{u}(k, y) = a(k) e^{ky} + b(k) e^{-ky}.
\]
Since $\lim_{y \to \infty} u(x,y) = 0$ and $k$ can be positive or negative, we have that $\hat{u}(k,y) = a(k) e^{-|k| y}$. The coefficient $a(k)$ can be determined using the remaining boundary condition, $u(x, 0) = f(x)$. We find that $a(k) = \hat{f}(k)$ since
\[
a(k) = \hat{u}(k, 0) = \int_{-\infty}^{\infty} u(x, 0) e^{ikx}\, dx = \int_{-\infty}^{\infty} f(x) e^{ikx}\, dx = \hat{f}(k).
\]
We have found that $\hat{u}(k, y) = \hat{f}(k) e^{-|k| y}$. We can obtain the solution using the inverse Fourier transform,
\[
u(x, y) = \mathcal{F}^{-1}[\hat{f}(k) e^{-|k| y}].
\]


We note that this is a product of Fourier transforms and use the Convolution Theorem for Fourier transforms. Namely, we have that $a(k) = \mathcal{F}[f]$ and $e^{-|k|y} = \mathcal{F}[g]$ for $g(x, y) = \frac{2y}{x^2 + y^2}$. This last result is essentially proven in Problem 6.

Then, the Convolution Theorem gives the solution
\begin{align}
u(x, y) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} f(\xi) g(x - \xi)\, d\xi \nonumber \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} f(\xi) \frac{2y}{(x - \xi)^2 + y^2}\, d\xi. \tag{9.132}
\end{align}
We note for future use that this solution is in the form
\[
u(x, y) = \int_{-\infty}^{\infty} f(\xi) G(x, \xi; y, 0)\, d\xi,
\]
where
\[
G(x, \xi; y, 0) = \frac{y}{\pi\left((x - \xi)^2 + y^2\right)}
\]
is the Green's function for this problem (the Green's function for the Laplace equation on the half plane).
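For illustration (an addition to the text), the integral (9.132) can be evaluated numerically for a sample boundary profile, here again a box on $[-1, 1]$, to visualize how the boundary data spreads into the half plane. The grid sizes and the chosen values of $y$ below are arbitrary.

% Evaluate u(x,y) = (1/pi) * int f(xi) * y/((x-xi)^2 + y^2) dxi for a box f
xi = linspace(-20, 20, 8001);            % integration grid
f  = double(abs(xi) <= 1);               % boundary data u(x,0) = f(x)
x  = linspace(-5, 5, 101);
yvals = [0.1 0.5 1.0];
hold on
for y = yvals
    u = zeros(size(x));
    for j = 1:length(x)
        G = y./(pi*((x(j)-xi).^2 + y^2));    % Green's function for the half plane
        u(j) = trapz(xi, f.*G);
    end
    plot(x, u)
end
hold off
xlabel('x'), ylabel('u(x,y)')
legend('y = 0.1', 'y = 0.5', 'y = 1.0')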

9.11.3 Heat Equation on Infinite Interval, Revisited

We will reconsider the initial value problem for the heat equation on an infinite interval,
\begin{align}
u_t &= u_{xx}, \quad -\infty < x < \infty, \quad t > 0, \nonumber \\
u(x, 0) &= f(x), \quad -\infty < x < \infty. \tag{9.133}
\end{align}
We can apply both a Fourier and a Laplace transform to convert this to an algebraic problem. The general solution will then be written in terms of an initial value Green's function as
\[
u(x, t) = \int_{-\infty}^{\infty} G(x, t; \xi, 0) f(\xi)\, d\xi.
\]
For the time dependence we can use the Laplace transform and for the spatial dependence we use the Fourier transform. These combined transforms lead us to define
\[
\hat{u}(k, s) = \mathcal{F}[\mathcal{L}[u]] = \int_{-\infty}^{\infty} \int_0^{\infty} u(x, t) e^{-st} e^{ikx}\, dt\, dx.
\]
Applying this to the terms in the heat equation, we have
\begin{align}
\mathcal{F}[\mathcal{L}[u_t]] &= s \hat{u}(k, s) - \mathcal{F}[u(x, 0)] = s \hat{u}(k, s) - \hat{f}(k), \nonumber \\
\mathcal{F}[\mathcal{L}[u_{xx}]] &= -k^2 \hat{u}(k, s). \tag{9.134}
\end{align}
Here we have assumed that
\[
\lim_{t \to \infty} u(x, t) e^{-st} = 0, \quad \lim_{|x| \to \infty} u(x, t) = 0, \quad \lim_{|x| \to \infty} u_x(x, t) = 0.
\]


Therefore, the heat equation can be turned into an algebraic equation for the transformed solution (the transformed heat equation),
\[
(s + k^2)\hat{u}(k, s) = \hat{f}(k),
\]
or
\[
\hat{u}(k, s) = \frac{\hat{f}(k)}{s + k^2}.
\]
The solution to the heat equation is obtained using the inverse transforms for both the Fourier and the Laplace transform. Thus, we have
\begin{align}
u(x, t) &= \mathcal{F}^{-1}[\mathcal{L}^{-1}[\hat{u}]] \nonumber \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{\hat{f}(k)}{s + k^2} e^{st}\, ds \right) e^{-ikx}\, dk. \tag{9.135}
\end{align}
Since the inside integral has a simple pole at $s = -k^2$, we can compute the Bromwich integral by choosing $c > -k^2$. Thus,
\[
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{\hat{f}(k)}{s + k^2} e^{st}\, ds = \operatorname{Res}\left[\frac{\hat{f}(k)}{s + k^2} e^{st};\, s = -k^2\right] = e^{-k^2 t} \hat{f}(k).
\]
Inserting this result into the solution, we have
\begin{align}
u(x, t) &= \mathcal{F}^{-1}[\mathcal{L}^{-1}[\hat{u}]] \nonumber \\
&= \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(k) e^{-k^2 t} e^{-ikx}\, dk. \tag{9.136}
\end{align}

This solution is of the form
\[
u(x, t) = \mathcal{F}^{-1}[\hat{f} \hat{g}]
\]
for $\hat{g}(k) = e^{-k^2 t}$. So, by the Convolution Theorem for Fourier transforms, the solution is a convolution,
\[
u(x, t) = \int_{-\infty}^{\infty} f(\xi) g(x - \xi)\, d\xi.
\]
All we need is the inverse transform of $\hat{g}(k)$.

We note that $\hat{g}(k) = e^{-k^2 t}$ is a Gaussian. Since the Fourier transform of a Gaussian is a Gaussian, we need only recall Example 9.5,
\[
\mathcal{F}[e^{-a x^2/2}] = \sqrt{\frac{2\pi}{a}}\, e^{-k^2/2a}.
\]
Setting $a = 1/2t$, this becomes
\[
\mathcal{F}[e^{-x^2/4t}] = \sqrt{4\pi t}\, e^{-k^2 t}.
\]
So,
\[
g(x) = \mathcal{F}^{-1}[e^{-k^2 t}] = \frac{e^{-x^2/4t}}{\sqrt{4\pi t}}.
\]


Inserting $g(x)$ into the solution, we have
\begin{align}
u(x, t) &= \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^{\infty} f(\xi) e^{-(x-\xi)^2/4t}\, d\xi \nonumber \\
&= \int_{-\infty}^{\infty} G(x, t; \xi, 0) f(\xi)\, d\xi. \tag{9.137}
\end{align}
Here we have identified the initial value Green's function for the heat equation,
\[
G(x, t; \xi, 0) = \frac{1}{\sqrt{4\pi t}}\, e^{-(x-\xi)^2/4t}.
\]

9.11.4 Nonhomogeneous Heat Equation

We now consider the nonhomogeneous heat equation with homogeneous boundary conditions defined on a finite interval,
\begin{align}
u_t - k u_{xx} &= h(x, t), \quad 0 \le x \le L, \quad t > 0, \nonumber \\
u(0, t) = 0, \quad u(L, t) &= 0, \quad t > 0, \nonumber \\
u(x, 0) &= f(x), \quad 0 \le x \le L. \tag{9.138}
\end{align}
We know that when $h(x, t) \equiv 0$ the solution takes the form
\[
u(x, t) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}.
\]
So, when $h(x, t) \neq 0$, we might assume that the solution takes the form
\[
u(x, t) = \sum_{n=1}^{\infty} b_n(t) \sin\frac{n\pi x}{L},
\]
where the $b_n$'s are the finite Fourier sine transform of the desired solution,
\[
b_n(t) = \mathcal{F}_s[u] = \frac{2}{L}\int_0^L u(x, t) \sin\frac{n\pi x}{L}\, dx.
\]

Note that the finite Fourier sine transform is essentially the Fourier sine transform which we encountered in Section 3.4.

Figure 9.47: Using finite Fourier transforms to solve the heat equation by solving an ODE instead of a PDE: $u(x,0)$ is transformed to the sine coefficients $b_n(0)$, the equation $u_t - u_{xx} = h(x,t)$ becomes $\frac{db_n}{dt} + \omega_n^2 b_n = H_n(t)$, and $u(x,t)$ is recovered by the inverse finite Fourier sine transform.

The idea behind using the finite Fourier sine transform is to solve the given heat equation by transforming the heat equation to a simpler equation for the transform, $b_n(t)$, solve for $b_n(t)$, and then do an inverse transform, i.e., insert the $b_n(t)$'s back into the series representation. This is depicted in Figure 9.47. Note that we had explored a similar diagram earlier when discussing the use of transforms to solve differential equations.

First, we need to transform the partial differential equation. The finite transforms of the derivative terms are given by
\begin{align}
\mathcal{F}_s[u_t] &= \frac{2}{L}\int_0^L \frac{\partial u}{\partial t}(x, t) \sin\frac{n\pi x}{L}\, dx \nonumber \\
&= \frac{d}{dt}\left(\frac{2}{L}\int_0^L u(x, t) \sin\frac{n\pi x}{L}\, dx\right) \nonumber \\
&= \frac{d b_n}{dt}, \tag{9.139}
\end{align}
\begin{align}
\mathcal{F}_s[u_{xx}] &= \frac{2}{L}\int_0^L \frac{\partial^2 u}{\partial x^2}(x, t) \sin\frac{n\pi x}{L}\, dx \nonumber \\
&= \left[\frac{2}{L}\, u_x \sin\frac{n\pi x}{L}\right]_0^L - \frac{n\pi}{L}\,\frac{2}{L}\int_0^L \frac{\partial u}{\partial x}(x, t) \cos\frac{n\pi x}{L}\, dx \nonumber \\
&= -\frac{n\pi}{L}\left[\frac{2}{L}\, u \cos\frac{n\pi x}{L}\right]_0^L - \left(\frac{n\pi}{L}\right)^2 \frac{2}{L}\int_0^L u(x, t) \sin\frac{n\pi x}{L}\, dx \nonumber \\
&= \frac{2n\pi}{L^2}\left[u(0, t) - u(L, t)\cos n\pi\right] - \left(\frac{n\pi}{L}\right)^2 b_n \nonumber \\
&= -\omega_n^2 b_n, \tag{9.140}
\end{align}
where $\omega_n = \frac{n\pi}{L}$ and we have used the homogeneous boundary conditions, $u(0,t) = u(L,t) = 0$.

Furthermore, we define
\[
H_n(t) = \mathcal{F}_s[h] = \frac{2}{L}\int_0^L h(x, t) \sin\frac{n\pi x}{L}\, dx.
\]
Then, the heat equation (taking the diffusion constant to be one, $k = 1$, as in Figure 9.47) is transformed to
\[
\frac{d b_n}{dt} + \omega_n^2 b_n = H_n(t), \quad n = 1, 2, 3, \ldots.
\]
This is a simple linear first order differential equation. We can supplement this equation with the initial condition
\[
b_n(0) = \frac{2}{L}\int_0^L u(x, 0) \sin\frac{n\pi x}{L}\, dx.
\]
The differential equation for $b_n$ is easily solved using the integrating factor $\mu(t) = e^{\omega_n^2 t}$. Thus,
\[
\frac{d}{dt}\left(e^{\omega_n^2 t}\, b_n(t)\right) = H_n(t)\, e^{\omega_n^2 t}
\]
and the solution is
\[
b_n(t) = b_n(0)\, e^{-\omega_n^2 t} + \int_0^t H_n(\tau)\, e^{-\omega_n^2 (t - \tau)}\, d\tau.
\]

The final step is to insert these coefficients (finite Fourier sine transform) into the series expansion (inverse finite Fourier sine transform) for $u(x, t)$. The result is
\[
u(x, t) = \sum_{n=1}^{\infty} b_n(0)\, e^{-\omega_n^2 t} \sin\frac{n\pi x}{L} + \sum_{n=1}^{\infty} \left[\int_0^t H_n(\tau)\, e^{-\omega_n^2 (t-\tau)}\, d\tau\right] \sin\frac{n\pi x}{L}.
\]
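To make the scheme concrete (an illustrative addition, not from the original text), the following MATLAB sketch assembles the series solution for sample choices $f(x) = x(1-x)$, $h(x,t) = \sin\pi x$, $L = 1$, using numerical quadrature for $b_n(0)$, $H_n(\tau)$, and the $\tau$-integral. All parameter choices here are arbitrary.

% Series solution u(x,t) = sum_n [ b_n(0) e^{-w_n^2 t} + int_0^t H_n(tau) e^{-w_n^2 (t-tau)} dtau ] sin(n pi x/L)
L = 1;  t = 0.1;  Nterms = 30;
f = @(x) x.*(1-x);                 % initial condition (sample choice)
h = @(x,tau) sin(pi*x) + 0*tau;    % source term (sample choice)
x = linspace(0, L, 201);
u = zeros(size(x));
for n = 1:Nterms
    wn2 = (n*pi/L)^2;
    bn0 = (2/L)*integral(@(x) f(x).*sin(n*pi*x/L), 0, L);            % b_n(0)
    Hn  = @(tau) (2/L)*integral(@(x) h(x,tau).*sin(n*pi*x/L), 0, L); % H_n(tau)
    src = integral(@(tau) arrayfun(Hn, tau).*exp(-wn2*(t-tau)), 0, t);
    u = u + (bn0*exp(-wn2*t) + src)*sin(n*pi*x/L);
end
plot(x, u), xlabel('x'), ylabel('u(x,t)')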


This solution can be written in a more compact form in order to identify the Green's function. We insert the expressions for $b_n(0)$ and $H_n(t)$ in terms of the initial profile and source term and interchange sums and integrals. This leads to
\begin{align}
u(x, t) &= \sum_{n=1}^{\infty} \left(\frac{2}{L}\int_0^L u(\xi, 0) \sin\frac{n\pi \xi}{L}\, d\xi\right) e^{-\omega_n^2 t} \sin\frac{n\pi x}{L} \nonumber \\
&\quad + \sum_{n=1}^{\infty} \left[\int_0^t \left(\frac{2}{L}\int_0^L h(\xi, \tau) \sin\frac{n\pi \xi}{L}\, d\xi\right) e^{-\omega_n^2 (t-\tau)}\, d\tau\right] \sin\frac{n\pi x}{L} \nonumber \\
&= \int_0^L u(\xi, 0) \left[\frac{2}{L}\sum_{n=1}^{\infty} \sin\frac{n\pi x}{L} \sin\frac{n\pi \xi}{L}\, e^{-\omega_n^2 t}\right] d\xi \nonumber \\
&\quad + \int_0^t \int_0^L h(\xi, \tau) \left[\frac{2}{L}\sum_{n=1}^{\infty} \sin\frac{n\pi x}{L} \sin\frac{n\pi \xi}{L}\, e^{-\omega_n^2 (t-\tau)}\right] d\xi\, d\tau \nonumber \\
&= \int_0^L u(\xi, 0)\, G(x, \xi; t, 0)\, d\xi + \int_0^t \int_0^L h(\xi, \tau)\, G(x, \xi; t, \tau)\, d\xi\, d\tau. \tag{9.141}
\end{align}
Here we have defined the Green's function
\[
G(x, \xi; t, \tau) = \frac{2}{L}\sum_{n=1}^{\infty} \sin\frac{n\pi x}{L} \sin\frac{n\pi \xi}{L}\, e^{-\omega_n^2 (t-\tau)}.
\]

We note that $G(x, \xi; t, 0)$ gives the initial value Green's function.

Note that at $t = \tau$,
\[
G(x, \xi; t, t) = \frac{2}{L}\sum_{n=1}^{\infty} \sin\frac{n\pi x}{L} \sin\frac{n\pi \xi}{L}.
\]
This is actually the series representation of the Dirac delta function. The Fourier sine transform of the delta function is
\[
\mathcal{F}_s[\delta(x - \xi)] = \frac{2}{L}\int_0^L \delta(x - \xi) \sin\frac{n\pi x}{L}\, dx = \frac{2}{L}\sin\frac{n\pi \xi}{L}.
\]
Then, the representation becomes
\[
\delta(x - \xi) = \frac{2}{L}\sum_{n=1}^{\infty} \sin\frac{n\pi x}{L} \sin\frac{n\pi \xi}{L}.
\]
Also, we note that, applied to each term of the series,
\[
\frac{\partial G}{\partial t} = -\omega_n^2 G, \qquad \frac{\partial^2 G}{\partial x^2} = -\left(\frac{n\pi}{L}\right)^2 G.
\]
Therefore, $G_t = G_{xx}$, at least for $\tau \neq t$ and $\xi \neq x$.

We can modify this problem by adding nonhomogeneous boundary conditions,
\begin{align}
u_t - k u_{xx} &= h(x, t), \quad 0 \le x \le L, \quad t > 0, \nonumber \\
u(0, t) = A, \quad u(L, t) &= B, \quad t > 0, \nonumber \\
u(x, 0) &= f(x), \quad 0 \le x \le L. \tag{9.142}
\end{align}


One way to treat these conditions is to assume $u(x, t) = w(x) + v(x, t)$, where $v_t - k v_{xx} = h(x, t)$ and $w_{xx} = 0$. Then, $u(x, t) = w(x) + v(x, t)$ satisfies the original nonhomogeneous heat equation.

If $v(x, t)$ satisfies $v(0, t) = v(L, t) = 0$ and $w(x)$ satisfies $w(0) = A$ and $w(L) = B$, then
\[
u(0, t) = w(0) + v(0, t) = A, \qquad u(L, t) = w(L) + v(L, t) = B.
\]
Finally, we note that
\[
v(x, 0) = u(x, 0) - w(x) = f(x) - w(x).
\]
Therefore, $u(x, t) = w(x) + v(x, t)$ satisfies the original problem if
\begin{align}
v_t - k v_{xx} &= h(x, t), \quad 0 \le x \le L, \quad t > 0, \nonumber \\
v(0, t) = 0, \quad v(L, t) &= 0, \quad t > 0, \nonumber \\
v(x, 0) &= f(x) - w(x), \quad 0 \le x \le L, \tag{9.143}
\end{align}
and
\begin{align}
w_{xx} &= 0, \quad 0 \le x \le L, \nonumber \\
w(0) = A, \quad w(L) &= B. \tag{9.144}
\end{align}
We can solve the last problem to obtain $w(x) = A + \frac{B - A}{L} x$. The solution to the problem for $v(x, t)$ is simply the problem we had solved already in terms of Green's functions, with the new initial condition, $f(x) - A - \frac{B - A}{L} x$.

9.11.5 Solution of the 3D Poisson Equation

We recall from electrostatics that the gradient of the electric potential gives the electric field, $\mathbf{E} = -\nabla \phi$. However, we also have from Gauss' Law for electric fields $\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0}$, where $\rho(\mathbf{r})$ is the charge distribution at position $\mathbf{r}$. Combining these equations, we arrive at Poisson's equation for the electric potential,
\[
\nabla^2 \phi = -\frac{\rho}{\epsilon_0}.
\]
We note that Poisson's equation also arises in Newton's theory of gravitation for the gravitational potential, in the form $\nabla^2 \phi = -4\pi G \rho$, where $\rho$ is the matter density.

We consider Poisson's equation in the form
\[
\nabla^2 \phi(\mathbf{r}) = -4\pi f(\mathbf{r})
\]
for $\mathbf{r}$ defined throughout all space. We will seek a solution for the potential function using a three dimensional Fourier transform. In the electrostatic problem $f = \rho(\mathbf{r})/4\pi\epsilon_0$, while the gravitational problem has $f = G\rho(\mathbf{r})$.

The Fourier transform can be generalized to three dimensions as
\[
\hat{\phi}(\mathbf{k}) = \int_V \phi(\mathbf{r})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, d^3 r,
\]


where the integration is over all space, $V$, $d^3 r = dx\, dy\, dz$, and $\mathbf{k}$ is a three dimensional wavenumber, $\mathbf{k} = k_x \mathbf{i} + k_y \mathbf{j} + k_z \mathbf{k}$. The inverse Fourier transform can then be written as
\[
\phi(\mathbf{r}) = \frac{1}{(2\pi)^3} \int_{V_k} \hat{\phi}(\mathbf{k})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, d^3 k,
\]
where $d^3 k = dk_x\, dk_y\, dk_z$ and $V_k$ is all of $k$-space.

The Fourier transform of the Laplacian follows from computing Fourier transforms of any derivatives that are present. Assuming that $\phi$ and its gradient vanish for large distances, then
\[
\mathcal{F}[\nabla^2 \phi] = -(k_x^2 + k_y^2 + k_z^2)\,\hat{\phi}(\mathbf{k}).
\]
Defining $k^2 = k_x^2 + k_y^2 + k_z^2$, Poisson's equation becomes the algebraic equation
\[
k^2 \hat{\phi}(\mathbf{k}) = 4\pi \hat{f}(\mathbf{k}).
\]
Solving for $\hat{\phi}(\mathbf{k})$, we have
\[
\hat{\phi}(\mathbf{k}) = \frac{4\pi}{k^2}\, \hat{f}(\mathbf{k}).
\]
The solution to Poisson's equation is then determined from the inverse Fourier transform,
\begin{equation}
\phi(\mathbf{r}) = \frac{4\pi}{(2\pi)^3} \int_{V_k} \hat{f}(\mathbf{k})\, \frac{e^{-i\mathbf{k}\cdot\mathbf{r}}}{k^2}\, d^3 k. \tag{9.145}
\end{equation}

First we will consider an example of a point charge (or mass in the gravitational case) at the origin. We will set $f(\mathbf{r}) = f_0 \delta^3(\mathbf{r})$ in order to represent a point source. For a unit point charge, $f_0 = 1/4\pi\epsilon_0$.

Here we have introduced the three dimensional Dirac delta function, $\delta^3(\mathbf{r} - \mathbf{r}_0)$, which, like the one dimensional case, vanishes outside the origin and satisfies a unit volume condition,
\[
\int_V \delta^3(\mathbf{r})\, d^3 r = 1.
\]
Also, there is a sifting property, which takes the form
\[
\int_V \delta^3(\mathbf{r} - \mathbf{r}_0) f(\mathbf{r})\, d^3 r = f(\mathbf{r}_0).
\]
In Cartesian coordinates,
\[
\delta^3(\mathbf{r}) = \delta(x)\delta(y)\delta(z),
\]
\[
\int_V \delta^3(\mathbf{r})\, d^3 r = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(x)\delta(y)\delta(z)\, dx\, dy\, dz = 1,
\]
and
\[
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(x - x_0)\delta(y - y_0)\delta(z - z_0) f(x, y, z)\, dx\, dy\, dz = f(x_0, y_0, z_0).
\]
One can define similar delta functions operating in two dimensions and $n$ dimensions.


We can also transform the Cartesian form into curvilinear coordinates. From Section 6.9 we have that the volume element in curvilinear coordinates is
\[
d^3 r = dx\, dy\, dz = h_1 h_2 h_3\, du_1\, du_2\, du_3,
\]
where the scale factors are $h_i = \left|\frac{\partial \mathbf{r}}{\partial u_i}\right|$. This gives
\[
\int_V \delta^3(\mathbf{r})\, d^3 r = \int_V \delta^3(\mathbf{r})\, h_1 h_2 h_3\, du_1\, du_2\, du_3 = 1.
\]
Therefore,
\begin{align}
\delta^3(\mathbf{r}) &= \frac{\delta(u_1)}{\left|\frac{\partial \mathbf{r}}{\partial u_1}\right|}\, \frac{\delta(u_2)}{\left|\frac{\partial \mathbf{r}}{\partial u_2}\right|}\, \frac{\delta(u_3)}{\left|\frac{\partial \mathbf{r}}{\partial u_3}\right|} \nonumber \\
&= \frac{1}{h_1 h_2 h_3}\, \delta(u_1)\delta(u_2)\delta(u_3). \tag{9.146}
\end{align}
So, for cylindrical coordinates,
\[
\delta^3(\mathbf{r}) = \frac{1}{r}\, \delta(r)\delta(\theta)\delta(z).
\]

Example 9.38. Find the solution of Poisson's equation for a point source of the form $f(\mathbf{r}) = f_0 \delta^3(\mathbf{r})$.

The solution is found by inserting the Fourier transform of this source into Equation (9.145) and carrying out the integration. The transform of $f(\mathbf{r})$ is found as
\[
\hat{f}(\mathbf{k}) = \int_V f_0\, \delta^3(\mathbf{r})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, d^3 r = f_0.
\]
Inserting $\hat{f}(\mathbf{k})$ into the inverse transform in Equation (9.145) and carrying out the integration using spherical coordinates in $k$-space (with the polar axis along $\mathbf{r}$, so that $\mathbf{k}\cdot\mathbf{r} = k r \cos\theta$), we find
\begin{align}
\phi(\mathbf{r}) &= \frac{4\pi}{(2\pi)^3} \int_{V_k} \frac{f_0\, e^{-i\mathbf{k}\cdot\mathbf{r}}}{k^2}\, d^3 k \nonumber \\
&= \frac{f_0}{2\pi^2} \int_0^{2\pi}\int_0^{\pi}\int_0^{\infty} \frac{e^{-ikr\cos\theta}}{k^2}\, k^2 \sin\theta\, dk\, d\theta\, d\phi \nonumber \\
&= \frac{f_0}{\pi} \int_0^{\pi}\int_0^{\infty} e^{-ikr\cos\theta} \sin\theta\, dk\, d\theta \nonumber \\
&= \frac{f_0}{\pi} \int_0^{\infty}\int_{-1}^{1} e^{-ikry}\, dy\, dk, \quad y = \cos\theta, \nonumber \\
&= \frac{2 f_0}{\pi r} \int_0^{\infty} \frac{\sin z}{z}\, dz = \frac{f_0}{r}. \tag{9.147}
\end{align}
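As a quick check of the final step (an added aside, not part of the original text), the Dirichlet integral $\int_0^\infty \frac{\sin z}{z}\,dz = \frac{\pi}{2}$ can be verified numerically; a finite upper limit is used since the integrand decays slowly, and the lower limit is shifted slightly above zero to avoid evaluating $0/0$.

% Numerical check that int_0^R sin(z)/z dz approaches pi/2 as R grows
for R = [10 100 1000]
    I = integral(@(z) sin(z)./z, 1e-10, R);   % lower limit just above 0
    fprintf('R = %6d:  integral = %.6f   (pi/2 = %.6f)\n', R, I, pi/2);
end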

If the last example is applied to a unit point charge, then $f_0 = 1/4\pi\epsilon_0$. So, the electric potential outside a unit point charge located at the origin becomes
\[
\phi(\mathbf{r}) = \frac{1}{4\pi\epsilon_0 r}.
\]
This is the form familiar from introductory physics.

Also, by setting $f_0 = 1$, we have also shown in the last example that
\[
\nabla^2\left(\frac{1}{r}\right) = -4\pi \delta^3(\mathbf{r}).
\]


Since $\nabla\left(\frac{1}{r}\right) = -\frac{\mathbf{r}}{r^3}$, we have also shown that
\[
\nabla \cdot \left(\frac{\mathbf{r}}{r^3}\right) = 4\pi \delta^3(\mathbf{r}).
\]

Problems

1. In this problem you will show that the sequence of functions
\[
f_n(x) = \frac{n}{\pi}\left(\frac{1}{1 + n^2 x^2}\right)
\]
approaches $\delta(x)$ as $n \to \infty$. Use the following to support your argument:

a. Show that $\lim_{n\to\infty} f_n(x) = 0$ for $x \neq 0$.

b. Show that the area under each function is one.

2. Verify that the sequence of functions $\{f_n(x)\}_{n=1}^{\infty}$, defined by $f_n(x) = \frac{n}{2} e^{-n|x|}$, approaches a delta function.

3. Evaluate the following integrals:

a. $\int_0^{\pi} \sin x\, \delta\left(x - \frac{\pi}{2}\right) dx$.

b. $\int_{-\infty}^{\infty} \delta\left(\frac{x-5}{3}\, e^{2x}\right)\left(3x^2 - 7x + 2\right) dx$.

c. $\int_0^{\pi} x^2\, \delta\left(x + \frac{\pi}{2}\right) dx$.

d. $\int_0^{\infty} e^{-2x}\, \delta(x^2 - 5x + 6)\, dx$. [See Problem 4.]

e. $\int_{-\infty}^{\infty} (x^2 - 2x + 3)\, \delta(x^2 - 9)\, dx$. [See Problem 4.]

4. For the case that a function has multiple roots, $f(x_i) = 0$, $i = 1, 2, \ldots$, it can be shown that
\[
\delta(f(x)) = \sum_{i=1}^{n} \frac{\delta(x - x_i)}{|f'(x_i)|}.
\]
Use this result to evaluate $\int_{-\infty}^{\infty} \delta(x^2 - 5x - 6)(3x^2 - 7x + 2)\, dx$.

5. Find a Fourier series representation of the Dirac delta function, $\delta(x)$, on $[-L, L]$.

6. For $a > 0$, find the Fourier transform, $\hat{f}(k)$, of $f(x) = e^{-a|x|}$.

7. Use the result from the last problem plus properties of the Fourier transform to find the Fourier transform of $f(x) = x^2 e^{-a|x|}$ for $a > 0$.

8. Find the Fourier transform, $\hat{f}(k)$, of $f(x) = e^{-2x^2 + x}$.

9. Prove the second shift property in the form
\[
\mathcal{F}\left[e^{i\beta x} f(x)\right] = \hat{f}(k + \beta).
\]
10. A damped harmonic oscillator is given by
\[
f(t) = \begin{cases} A e^{-\alpha t} e^{i\omega_0 t}, & t \ge 0, \\ 0, & t < 0. \end{cases}
\]


a. Find $\hat{f}(\omega)$ and

b. the frequency distribution $|\hat{f}(\omega)|^2$.

c. Sketch the frequency distribution.

11. Show that the convolution operation is associative: $(f * (g * h))(t) = ((f * g) * h)(t)$.

12. In this problem you will directly compute the convolution of two Gaussian functions in two steps.

a. Use completing the square to evaluate
\[
\int_{-\infty}^{\infty} e^{-\alpha t^2 + \beta t}\, dt.
\]
b. Use the result from part a to directly compute the convolution in Example 9.16:
\[
(f * g)(x) = e^{-b x^2} \int_{-\infty}^{\infty} e^{-(a+b) t^2 + 2 b x t}\, dt.
\]

13. You will compute the (Fourier) convolution of two box functions of the same width. Recall the box function is given by
\[
f_a(x) = \begin{cases} 1, & |x| \le a, \\ 0, & |x| > a. \end{cases}
\]
Consider $(f_a * f_a)(x)$ for different intervals of $x$. A few preliminary sketches would help. In Figure 9.48 the factors in the convolution integrand are shown for one value of $x$. The integrand is the product of the first two functions. The convolution at $x$ is the area of the overlap in the third figure. Think about how these pictures change as you vary $x$. Plot the resulting areas as a function of $x$. This is the graph of the desired convolution.

14. Define the integrals $I_n = \int_{-\infty}^{\infty} x^{2n} e^{-x^2}\, dx$. Noting that $I_0 = \sqrt{\pi}$,

a. Find a recursive relation between $I_n$ and $I_{n-1}$.

b. Use this relation to determine $I_1$, $I_2$ and $I_3$.

c. Find an expression in terms of $n$ for $I_n$.

15. Find the Laplace transform of the following functions.

a. $f(t) = 9t^2 - 7$.

b. $f(t) = e^{5t-3}$.

c. $f(t) = \cos 7t$.

d. $f(t) = e^{4t}\sin 2t$.

e. $f(t) = e^{2t}(t + \cosh t)$.

f. $f(t) = t^2 H(t-1)$.

g. $f(t) = \begin{cases} \sin t, & t < 4\pi, \\ \sin t + \cos t, & t > 4\pi. \end{cases}$


Figure 9.48: Sketch used to compute the convolution of the box function with itself. In the top figure is the box function $f_a(t)$. The second figure shows the box shifted by $x$, $f_a(x - t)$. The last figure indicates the overlap, $f_a(t) f_a(x - t)$, of the functions.

h. $f(t) = \int_0^t (t - u)^2 \sin u\, du$.

i. $f(t) = (t + 5)^2 + t e^{2t}\cos 3t$, and write the answer in the simplest form.

16. Find the inverse Laplace transform of the following functions using the properties of Laplace transforms and the table of Laplace transform pairs.

a. $F(s) = \dfrac{18}{s^3} + \dfrac{7}{s}$.

b. $F(s) = \dfrac{1}{s-5} - \dfrac{2}{s^2+4}$.

c. $F(s) = \dfrac{s+1}{s^2+1}$.

d. $F(s) = \dfrac{3}{s^2+2s+2}$.

e. $F(s) = \dfrac{1}{(s-1)^2}$.

f. $F(s) = \dfrac{e^{-3s}}{s^2-1}$.

g. $F(s) = \dfrac{1}{s^2+4s-5}$.

h. $F(s) = \dfrac{s+3}{s^2+8s+17}$.

17. Compute the convolution $(f * g)(t)$ (in the Laplace transform sense) and its corresponding Laplace transform $\mathcal{L}[f * g]$ for the following functions:

a. $f(t) = t^2$, $g(t) = t^3$.

b. $f(t) = t^2$, $g(t) = \cos 2t$.


c. $f(t) = 3t^2 - 2t + 1$, $g(t) = e^{-3t}$.

d. $f(t) = \delta\left(t - \frac{\pi}{4}\right)$, $g(t) = \sin 5t$.

18. For the following problems draw the given function and find the Laplace transform in closed form.

a. $f(t) = 1 + \sum_{n=1}^{\infty} (-1)^n H(t - n)$.

b. $f(t) = \sum_{n=0}^{\infty} \left[H(t - 2n + 1) - H(t - 2n)\right]$.

c. $f(t) = \sum_{n=0}^{\infty} \left[(t - 2n)[H(t - 2n) - H(t - 2n - 1)] + (2n + 2 - t)[H(t - 2n - 1) - H(t - 2n - 2)]\right]$.

19. Use the convolution theorem to compute the inverse transform of the following:

a. $F(s) = \dfrac{2}{s^2(s^2+1)}$.

b. $F(s) = \dfrac{e^{-3s}}{s^2}$.

c. $F(s) = \dfrac{1}{s(s^2+2s+5)}$.

20. Find the inverse Laplace transform two different ways: i) Use Tables. ii) Use the Bromwich Integral.

a. $F(s) = \dfrac{1}{s^3(s+4)^2}$.

b. $F(s) = \dfrac{1}{s^2-4s-5}$.

c. $F(s) = \dfrac{s+3}{s^2+8s+17}$.

d. $F(s) = \dfrac{s+1}{(s-2)^2(s+4)}$.

e. $F(s) = \dfrac{s^2+8s-3}{(s^2+2s+1)(s^2+1)}$.

21. Use Laplace transforms to solve the following initial value problems. Where possible, describe the solution behavior in terms of oscillation and decay.

a. $y'' - 5y' + 6y = 0$, $y(0) = 2$, $y'(0) = 0$.

b. $y'' - y = t e^{2t}$, $y(0) = 0$, $y'(0) = 1$.

c. $y'' + 4y = \delta(t - 1)$, $y(0) = 3$, $y'(0) = 0$.

d. $y'' + 6y' + 18y = 2H(\pi - t)$, $y(0) = 0$, $y'(0) = 0$.

22. Use Laplace transforms to convert the following system of differential equations into an algebraic system and find the solution of the differential equations.
\begin{align*}
x'' &= 3x - 6y, \quad x(0) = 1, \quad x'(0) = 0, \\
y'' &= x + y, \quad y(0) = 0, \quad y'(0) = 0.
\end{align*}


23. Use Laplace transforms to convert the following nonhomogeneous systems of differential equations into an algebraic system and find the solutions of the differential equations.

a.
\begin{align*}
x' &= 2x + 3y + 2\sin 2t, \quad x(0) = 1, \\
y' &= -3x + 2y, \quad y(0) = 0.
\end{align*}
b.
\begin{align*}
x' &= -4x - y + e^{-t}, \quad x(0) = 2, \\
y' &= x - 2y + 2e^{-3t}, \quad y(0) = -1.
\end{align*}
c.
\begin{align*}
x' &= x - y + 2\cos t, \quad x(0) = 3, \\
y' &= x + y - 3\sin t, \quad y(0) = 2.
\end{align*}

24. Consider the series circuit in Problem 2.20 and in Figure ?? with $L = 1.00$ H, $R = 1.00\times 10^2\ \Omega$, $C = 1.00\times 10^{-4}$ F, and $V_0 = 1.00\times 10^3$ V.

a. Write the second order differential equation for this circuit.

b. Suppose that no charge is present and no current is flowing at time $t = 0$ when $V_0$ is applied. Use Laplace transforms to find the current and the charge on the capacitor as functions of time.

c. Replace the battery with the alternating source $V(t) = V_0 \sin 2\pi f t$ with $V_0 = 1.00\times 10^3$ V and $f = 150$ Hz. Again, suppose that no charge is present and no current is flowing at time $t = 0$ when the AC source is applied. Use Laplace transforms to find the current and the charge on the capacitor as functions of time.

d. Plot your solutions and describe how the system behaves over time.

25. Use Laplace transforms to sum the following series.

a. $\sum_{n=0}^{\infty} \dfrac{(-1)^n}{1+2n}$.

b. $\sum_{n=1}^{\infty} \dfrac{1}{n(n+3)}$.

c. $\sum_{n=1}^{\infty} \dfrac{(-1)^n}{n(n+3)}$.

d. $\sum_{n=0}^{\infty} \dfrac{(-1)^n}{n^2 - a^2}$.

e. $\sum_{n=0}^{\infty} \dfrac{1}{(2n+1)^2 - a^2}$.


f. $\sum_{n=1}^{\infty} \dfrac{1}{n} e^{-an}$.

26. Use Laplace transforms to prove
\[
\sum_{n=1}^{\infty} \frac{1}{(n+a)(n+b)} = \frac{1}{b-a}\int_0^1 \frac{u^a - u^b}{1-u}\, du.
\]
Use this result to evaluate the sums

a. $\sum_{n=1}^{\infty} \dfrac{1}{n(n+1)}$.

b. $\sum_{n=1}^{\infty} \dfrac{1}{(n+2)(n+3)}$.

27. Do the following.

a. Find the first four nonvanishing terms of the Maclaurin series expansion of $f(x) = \dfrac{x}{e^x - 1}$.

b. Use the result in part a. to determine the first four nonvanishing Bernoulli numbers, $B_n$.

c. Use these results to compute $\zeta(2n)$ for $n = 1, 2, 3, 4$.

28. Given the following Laplace transforms, $F(s)$, find the function $f(t)$. Note that in each case there are an infinite number of poles, resulting in an infinite series representation.

a. $F(s) = \dfrac{1}{s^2(1+e^{-s})}$.

b. $F(s) = \dfrac{1}{s\sinh s}$.

c. $F(s) = \dfrac{\sinh s}{s^2\cosh s}$.

d. $F(s) = \dfrac{\sinh(\beta\sqrt{s}\,x)}{s\sinh(\beta\sqrt{s}\,L)}$.

29. Consider the initial boundary value problem for the heat equation:
\begin{align*}
u_t &= 2u_{xx}, \quad 0 < t, \quad 0 \le x \le 1, \\
u(x,0) &= x(1-x), \quad 0 < x < 1, \\
u(0,t) &= 0, \quad t > 0, \\
u(1,t) &= 0, \quad t > 0.
\end{align*}
Use the finite transform method to solve this problem. Namely, assume that the solution takes the form $u(x,t) = \sum_{n=1}^{\infty} b_n(t)\sin n\pi x$ and obtain an ordinary differential equation for $b_n$ and solve for the $b_n$'s for each $n$.


10 Numerical Solutions of PDEs

"There's no sense in being precise when you don't even know what you're talking about." - John von Neumann (1903-1957)

Most of the book has dealt with finding exact solutions to some generic problems. However, most problems of interest cannot be solved exactly. The heat, wave, and Laplace equations are linear partial differential equations and can be solved using separation of variables in geometries in which the Laplacian is separable. However, once we introduce nonlinearities, or complicated non-constant coefficients, into the equations, some of these methods do not work. Even when separation of variables or the method of eigenfunction expansions gave us exact results, the computation of the resulting series had to be done on a computer, and inevitably one could only use a finite number of terms of the expansion. So, it is sometimes useful to be able to solve differential equations numerically.

In this chapter we will introduce the idea of numerical solutions of partial differential equations. However, we will first begin with a discussion of the solution of ordinary differential equations in order to get a feel for some common problems in the solution of differential equations and the notion of convergence rates of numerical schemes. Then, we turn to the finite difference method and the ideas of stability. Other common approaches may be added later.

10.1 Ordinary Differential Equations

10.1.1 Euler’s Method

In this section we will look at the simplest method for solving first order equations, Euler's Method. While it is not the most efficient method, it does provide us with a picture of how one proceeds and can be improved by introducing better techniques, which are typically covered in a numerical analysis text.

Let's consider the class of first order initial value problems of the form
\begin{equation}
\frac{dy}{dx} = f(x, y), \quad y(x_0) = y_0. \tag{10.1}
\end{equation}


We are interested in finding the solution $y(x)$ of this equation which passes through the initial point $(x_0, y_0)$ in the $xy$-plane for values of $x$ in the interval $[a, b]$, where $a = x_0$. We will seek approximations of the solution at $N$ points, labeled $x_n$ for $n = 1, \ldots, N$. For equally spaced points we have $\Delta x = x_1 - x_0 = x_2 - x_1$, etc. We can write these as
\[
x_n = x_0 + n\Delta x.
\]
In Figure 10.1 we show three such points on the $x$-axis.

Figure 10.1: The basics of Euler's Method are shown. An interval of the $x$ axis is broken into $N$ subintervals. The approximations to the solutions are found using the slope of the tangent to the solution, given by $f(x, y)$. Knowing previous approximations at $(x_{n-1}, y_{n-1})$, one can determine the next approximation, $y_n$.

The first step of Euler's Method is to use the initial condition. We represent this as a point on the solution curve, $(x_0, y(x_0)) = (x_0, y_0)$, as shown in Figure 10.1. The next step is to develop a method for obtaining approximations to the solution for the other $x_n$'s.

We first note that the differential equation gives the slope of the tangent line at $(x, y(x))$ of the solution curve, since the slope is the derivative, $y'(x)$. From the differential equation the slope is $f(x, y(x))$. Referring to Figure 10.1, we see the tangent line drawn at $(x_0, y_0)$. We look now at $x = x_1$. The vertical line $x = x_1$ intersects both the solution curve and the tangent line passing through $(x_0, y_0)$. This is shown by a heavy dashed line.

While we do not know the solution at $x = x_1$, we can determine the tangent line and find the intersection point that it makes with the vertical. As seen in the figure, this intersection point is in theory close to the point on the solution curve. So, we will designate $y_1$ as the approximation of the solution $y(x_1)$. We just need to determine $y_1$.

The idea is simple. We approximate the derivative in the differential equation by its difference quotient:
\begin{equation}
\frac{dy}{dx} \approx \frac{y_1 - y_0}{x_1 - x_0} = \frac{y_1 - y_0}{\Delta x}. \tag{10.2}
\end{equation}
Since the slope of the tangent to the curve at $(x_0, y_0)$ is $y'(x_0) = f(x_0, y_0)$, we can write
\begin{equation}
\frac{y_1 - y_0}{\Delta x} \approx f(x_0, y_0). \tag{10.3}
\end{equation}
Solving this equation for $y_1$, we obtain
\begin{equation}
y_1 = y_0 + \Delta x\, f(x_0, y_0). \tag{10.4}
\end{equation}
This gives $y_1$ in terms of quantities that we know.

We now proceed to approximate $y(x_2)$. Referring to Figure 10.1, we see that this can be done by using the slope of the solution curve at $(x_1, y_1)$. The corresponding tangent line is shown passing through $(x_1, y_1)$ and we can then get the value of $y_2$ from the intersection of the tangent line with a vertical line, $x = x_2$. Following the previous arguments, we find that
\begin{equation}
y_2 = y_1 + \Delta x\, f(x_1, y_1). \tag{10.5}
\end{equation}
Continuing this procedure for all $x_n$, $n = 1, \ldots, N$, we arrive at the following scheme for determining a numerical solution to the initial value problem:
\begin{align}
y_0 &= y(x_0), \nonumber \\
y_n &= y_{n-1} + \Delta x\, f(x_{n-1}, y_{n-1}), \quad n = 1, \ldots, N. \tag{10.6}
\end{align}
This is referred to as Euler's Method.

Example 10.1. Use Euler's Method to solve the initial value problem $\frac{dy}{dx} = x + y$, $y(0) = 1$, and obtain an approximation for $y(1)$.

First, we will do this by hand. We break up the interval $[0, 1]$, since we want the solution at $x = 1$ and the initial value is at $x = 0$. Let $\Delta x = 0.50$. Then, $x_0 = 0$, $x_1 = 0.5$ and $x_2 = 1.0$. Note that there are $N = \frac{b-a}{\Delta x} = 2$ subintervals and thus three points.

We next carry out Euler's Method systematically by setting up a table for the needed values. Such a table is shown in Table 10.1. Note how the table is set up. There is a column for each $x_n$ and $y_n$. The first row is the initial condition. We also made use of the function $f(x, y)$ in computing the $y_n$'s from (10.6). This sometimes makes the computation easier. As a result, we find that the desired approximation is given as $y_2 = 2.5$.

Table 10.1: Application of Euler's Method for $y' = x + y$, $y(0) = 1$ and $\Delta x = 0.5$.

n   x_n   y_n = y_{n-1} + Δx f(x_{n-1}, y_{n-1}) = 0.5 x_{n-1} + 1.5 y_{n-1}
0   0     1
1   0.5   0.5(0) + 1.5(1.0) = 1.5
2   1.0   0.5(0.5) + 1.5(1.5) = 2.5

Is this a good result? Well, we could make the spatial increments smaller. Let's repeat the procedure for $\Delta x = 0.2$, or $N = 5$. The results are in Table 10.2.

Now we see that the approximation is $y_5 = 2.97664$. So, it looks like the value is near 3, but we cannot say much more. Decreasing $\Delta x$ more shows that we are beginning to converge to a solution. We see this in Table 10.3.

Table 10.2: Application of Euler's Method for $y' = x + y$, $y(0) = 1$ and $\Delta x = 0.2$.

n   x_n   y_n = 0.2 x_{n-1} + 1.2 y_{n-1}
0   0     1
1   0.2   0.2(0) + 1.2(1.0) = 1.2
2   0.4   0.2(0.2) + 1.2(1.2) = 1.48
3   0.6   0.2(0.4) + 1.2(1.48) = 1.856
4   0.8   0.2(0.6) + 1.2(1.856) = 2.3472
5   1.0   0.2(0.8) + 1.2(2.3472) = 2.97664

Table 10.3: Results of Euler's Method for $y' = x + y$, $y(0) = 1$ and varying $\Delta x$.

Δx        y_N ≈ y(1)
0.5       2.5
0.2       2.97664
0.1       3.187484920
0.01      3.409627659
0.001     3.433847864
0.0001    3.436291854

Of course, these values were not done by hand. The last computation would have taken 1000 lines in the table, or at least 40 pages! One could use a computer to do this. A simple code in Maple would look like the following:

> restart:

> f:=(x,y)->y+x;

> a:=0: b:=1: N:=100: h:=(b-a)/N;

> x[0]:=0: y[0]:=1:

for i from 1 to N do

y[i]:=y[i-1]+h*f(x[i-1],y[i-1]):

x[i]:=x[0]+h*(i):

od:

evalf(y[N]);

In this case we could simply use the exact solution. The exact solution is easily found as
\[
y(x) = 2e^x - x - 1.
\]
(The reader can verify this.) So, the value we are seeking is
\[
y(1) = 2e - 2 = 3.4365636\ldots.
\]
Thus, even the last numerical solution was off by about 0.00027.

Figure 10.2: A comparison of the results of Euler's Method to the exact solution for $y' = x + y$, $y(0) = 1$ and $N = 10$.

Adding a few extra lines for plotting, we can visually see how well the approximations compare to the exact solution. The Maple code for doing such a plot is given below.

> with(plots):

> Data:=[seq([x[i],y[i]],i=0..N)]:

> P1:=pointplot(Data,symbol=DIAMOND):

> Sol:=t->-t-1+2*exp(t);


> P2:=plot(Sol(t),t=a..b,Sol=0..Sol(b)):

> display(P1,P2);

We show in Figures 10.2-10.3 the results for $N = 10$ and $N = 100$. In Figure 10.2 we can see how quickly the numerical solution diverges from the exact solution. In Figure 10.3 we can see that visually the solutions agree, but we note from Table 10.3 that for $\Delta x = 0.01$, the solution is still off in the second decimal place with a relative error of about 0.8%.

Figure 10.3: A comparison of the results of Euler's Method to the exact solution for $y' = x + y$, $y(0) = 1$ and $N = 100$.

Why would we use a numerical method when we have the exact solution? Exact solutions can serve as test cases for our methods. We can make sure our code works before applying them to problems whose solution is not known.

There are many other methods for solving first order equations. One commonly used method is the fourth order Runge-Kutta method. This method has smaller errors at each step as compared to Euler's Method. It is well suited for programming and comes built in to many packages like Maple and MATLAB. Typically, it is set up to handle systems of first order equations.

In fact, it is well known that $n$th order equations can be written as a system of $n$ first order equations. Consider the simple second order equation
\[
y'' = f(x, y).
\]
This is a larger class of equations than the second order constant coefficient equation. We can turn this into a system of two first order differential equations by letting $u = y$ and $v = y' = u'$. Then, $v' = y'' = f(x, u)$. So, we have the first order system
\begin{align}
u' &= v, \nonumber \\
v' &= f(x, u). \tag{10.7}
\end{align}
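As an illustration (an added sketch, not from the original text), Euler's Method extends directly to such a system by updating $u$ and $v$ together at each step. The MATLAB snippet below applies it to $y'' = -y$, i.e. $f(x, u) = -u$, with $y(0) = 1$, $y'(0) = 0$, whose exact solution is $\cos x$.

% Euler's Method for the system u' = v, v' = f(x,u), here f(x,u) = -u
f = @(x,u) -u;          % sample right-hand side (y'' = -y)
a = 0; b = 2*pi; N = 1000; h = (b-a)/N;
x = a; u = 1; v = 0;    % initial conditions y(0) = 1, y'(0) = 0
U = zeros(1, N+1); U(1) = u;
for i = 1:N
    unew = u + h*v;             % update u using u' = v
    vnew = v + h*f(x, u);       % update v using v' = f(x,u)
    u = unew; v = vnew; x = x + h;
    U(i+1) = u;
end
xs = linspace(a, b, N+1);
plot(xs, U, xs, cos(xs), '--')
legend('Euler approximation', 'cos x')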

We will not go further into the Runge-Kutta Method here. You can find more about it in a numerical analysis text. However, we will see that systems of differential equations do arise naturally in physics. Such systems are often coupled equations and lead to interesting behaviors.

10.1.2 Higher Order Taylor Methods

Euler's Method for solving differential equations is easy to understand but is not efficient in the sense that it is what is called a first order method. The error at each step, the local truncation error, is of order $\Delta x$, for $x$ the independent variable. The accumulation of the local truncation errors results in what is called the global error. In order to generalize Euler's Method, we need to rederive it. Also, since these methods are typically used for initial value problems, we will cast the problem to be solved as
\begin{equation}
\frac{dy}{dt} = f(t, y), \quad y(a) = y_0, \quad t \in [a, b]. \tag{10.8}
\end{equation}


The first step towards obtaining a numerical approximation to the solution of this problem is to divide the $t$-interval, $[a, b]$, into $N$ subintervals,
\[
t_i = a + ih, \quad i = 0, 1, \ldots, N, \quad t_0 = a, \quad t_N = b,
\]
where
\[
h = \frac{b-a}{N}.
\]
We then seek the numerical solutions
\[
y_i \approx y(t_i), \quad i = 1, 2, \ldots, N,
\]
with $y_0 = y(t_0)$. Figure 10.4 graphically shows how these quantities are related.

Figure 10.4: The interval $[a, b]$ is divided into $N$ equally spaced subintervals. The exact solution $y(t_i)$ is shown with the numerical solution, $y_i$, with $t_i = a + ih$, $i = 0, 1, \ldots, N$.

Euler's Method can be derived using the Taylor series expansion of the solution $y(t_i + h)$ about $t = t_i$ for $i = 1, 2, \ldots, N$. This is given by
\begin{align}
y(t_{i+1}) &= y(t_i + h) \nonumber \\
&= y(t_i) + y'(t_i)h + \frac{h^2}{2}y''(\xi_i), \quad \xi_i \in (t_i, t_{i+1}). \tag{10.9}
\end{align}
Here the term $\frac{h^2}{2}y''(\xi_i)$ captures all of the higher order terms and represents the error made using a linear approximation to $y(t_i + h)$.

Dropping the remainder term, noting that $y'(t) = f(t, y)$, and defining the resulting numerical approximations by $y_i \approx y(t_i)$, we have
\begin{align}
y_{i+1} &= y_i + h f(t_i, y_i), \quad i = 0, 1, \ldots, N-1, \nonumber \\
y_0 &= y(a). \tag{10.10}
\end{align}
This is Euler's Method.

Euler's Method is not used in practice since the error is of order $h$. However, it is simple enough for understanding the idea of solving differential equations numerically. Also, it is easy to study the numerical error, which we will show next.

The error that results for a single step of the method is called the local truncation error, which is defined by
\[
\tau_{i+1}(h) = \frac{y(t_{i+1}) - y_i}{h} - f(t_i, y_i).
\]
A simple computation gives
\[
\tau_{i+1}(h) = \frac{h}{2}y''(\xi_i), \quad \xi_i \in (t_i, t_{i+1}).
\]
Since the local truncation error is of order $h$, this scheme is said to be of order one. More generally, for a numerical scheme of the form
\begin{align}
y_{i+1} &= y_i + h F(t_i, y_i), \quad i = 0, 1, \ldots, N-1, \nonumber \\
y_0 &= y(a), \tag{10.11}
\end{align}
the local truncation error is defined by
\[
\tau_{i+1}(h) = \frac{y(t_{i+1}) - y_i}{h} - F(t_i, y_i).
\]

The accumulation of these errors leads to the global error. In fact, one can show that if $f$ is continuous, satisfies the Lipschitz condition,
\[
|f(t, y_2) - f(t, y_1)| \le L |y_2 - y_1|
\]
for a particular domain $D \subset \mathbb{R}^2$, and
\[
|y''(t)| \le M, \quad t \in [a, b],
\]
then
\[
|y(t_i) - y_i| \le \frac{hM}{2L}\left(e^{L(t_i - a)} - 1\right), \quad i = 0, 1, \ldots, N.
\]
Furthermore, if one introduces round-off errors, bounded by $\delta$, in both the initial condition and at each step, the global error is modified as
\[
|y(t_i) - y_i| \le \frac{1}{L}\left(\frac{hM}{2} + \frac{\delta}{h}\right)\left(e^{L(t_i - a)} - 1\right) + |\delta_0| e^{L(t_i - a)}, \quad i = 0, 1, \ldots, N.
\]
Then for small enough steps $h$, there is a point when the round-off error will dominate the error. [See Burden and Faires, Numerical Analysis, for the details.]

Can we improve upon Euler's Method? The natural next step towards finding a better scheme would be to keep more terms in the Taylor series expansion. This leads to Taylor series methods of order $n$.

Taylor series methods of order $n$ take the form
\begin{align}
y_{i+1} &= y_i + h T^{(n)}(t_i, y_i), \quad i = 0, 1, \ldots, N-1, \nonumber \\
y_0 &= y_0, \tag{10.12}
\end{align}
where we have defined
\[
T^{(n)}(t, y) = y'(t) + \frac{h}{2}y''(t) + \cdots + \frac{h^{(n-1)}}{n!}y^{(n)}(t).
\]
However, since $y'(t) = f(t, y)$, we can write
\[
T^{(n)}(t, y) = f(t, y) + \frac{h}{2}f'(t, y) + \cdots + \frac{h^{(n-1)}}{n!}f^{(n-1)}(t, y).
\]
We note that for $n = 1$, we retrieve Euler's Method as a special case. We demonstrate a third order Taylor's Method in the next example.

Example 10.2. Apply the third order Taylor's Method to
\[
\frac{dy}{dt} = t + y, \quad y(0) = 1
\]
and obtain an approximation for $y(1)$ for $h = 0.1$.

The third order Taylor's Method takes the form
\begin{align}
y_{i+1} &= y_i + h T^{(3)}(t_i, y_i), \quad i = 0, 1, \ldots, N-1, \nonumber \\
y_0 &= y_0, \tag{10.13}
\end{align}
where
\[
T^{(3)}(t, y) = f(t, y) + \frac{h}{2}f'(t, y) + \frac{h^2}{3!}f''(t, y)
\]
and $f(t, y) = t + y(t)$.

In order to set up the scheme, we need the first and second derivatives of $f(t, y)$:
\begin{align}
f'(t, y) &= \frac{d}{dt}(t + y) = 1 + y' = 1 + t + y, \tag{10.14} \\
f''(t, y) &= \frac{d}{dt}(1 + t + y) = 1 + y' = 1 + t + y. \tag{10.15}
\end{align}
Inserting these expressions into the scheme, we have
\begin{align}
y_{i+1} &= y_i + h\left[(t_i + y_i) + \frac{h}{2}(1 + t_i + y_i) + \frac{h^2}{3!}(1 + t_i + y_i)\right] \nonumber \\
&= y_i + h(t_i + y_i) + h^2\left(\frac{1}{2} + \frac{h}{6}\right)(1 + t_i + y_i), \nonumber \\
y_0 &= y_0, \tag{10.16}
\end{align}
for $i = 0, 1, \ldots, N-1$.

In Figure 10.5 we show the results comparing Euler's Method, the 3rd Order Taylor's Method, and the exact solution for $N = 10$. In Table 10.4 we provide the numerical values. The relative error in Euler's Method is about 7% and that of the 3rd Order Taylor's Method is about 0.006%. Thus, the 3rd Order Taylor's Method is significantly better than Euler's Method.

Table 10.4: Numerical values for Euler's Method, 3rd Order Taylor's Method, and the exact solution for solving Example 10.2 with N = 10.

Euler     Taylor    Exact
1.0000    1.0000    1.0000
1.1000    1.1103    1.1103
1.2200    1.2428    1.2428
1.3620    1.3997    1.3997
1.5282    1.5836    1.5836
1.7210    1.7974    1.7974
1.9431    2.0442    2.0442
2.1974    2.3274    2.3275
2.4872    2.6509    2.6511
2.8159    3.0190    3.0192
3.1875    3.4364    3.4366

In the last section we provided some Maple code for performing Euler's Method. A similar code in MATLAB looks like the following:

a=0;

b=1;


N=10;

h=(b-a)/N;

% Slope function

f = inline(’t+y’,’t’,’y’);

sol = inline(’2*exp(t)-t-1’,’t’);

% Initial Condition

t(1)=0;

y(1)=1;

% Euler’s Method

for i=2:N+1

y(i)=y(i-1)+h*f(t(i-1),y(i-1));

t(i)=t(i-1)+h;

end

Figure 10.5: Numerical results for Euler's Method (filled circle) and 3rd Order Taylor's Method (open circle) for solving Example 10.2 as compared to the exact solution (solid line).

A simple modification can be made for the 3rd Order Taylor's Method by replacing the Euler's Method part of the preceding code by

% Taylor’s Method, Order 3

y(1)=1;

h3 = h^2*(1/2+h/6);

for i=2:N+1

y(i)=y(i-1)+h*f(t(i-1),y(i-1))+h3*(1+t(i-1)+y(i-1));

t(i)=t(i-1)+h;

end

While the accuracy in the last example seemed sufficient, we have to remember that we only stopped at one unit of time. How can we be confident that the scheme would work as well if we carried out the computation for much longer times? For example, if the time unit were only a second, then one would need 86,400 times longer to predict a day forward. Of course, the scale matters. But, often we need to carry out numerical schemes for long times and we hope that the scheme not only converges to a solution, but that it converges to the solution to the given problem. Also, the previous example was relatively easy to program because we could provide a relatively simple form for $T^{(3)}(t, y)$ with a quick computation of the derivatives of $f(t, y)$. This is not always the case and higher order Taylor methods in this form are not typically used. Instead, one can approximate $T^{(n)}(t, y)$ by evaluating the known function $f(t, y)$ at selected values of $t$ and $y$, leading to Runge-Kutta methods.


10.1.3 Runge-Kutta Methods

As we had seen in the last section, we can use higher order Taylor methods to derive numerical schemes for solving
\begin{equation}
\frac{dy}{dt} = f(t, y), \quad y(a) = y_0, \quad t \in [a, b], \tag{10.17}
\end{equation}
using a scheme of the form
\begin{align}
y_{i+1} &= y_i + h T^{(n)}(t_i, y_i), \quad i = 0, 1, \ldots, N-1, \nonumber \\
y_0 &= y_0, \tag{10.18}
\end{align}
where we have defined
\[
T^{(n)}(t, y) = y'(t) + \frac{h}{2}y''(t) + \cdots + \frac{h^{(n-1)}}{n!}y^{(n)}(t).
\]

In this section we will find approximations of $T^{(n)}(t, y)$ which avoid the need for computing the derivatives.

For example, we could approximate
\[
T^{(2)}(t, y) = f(t, y) + \frac{h}{2}\frac{df}{dt}(t, y)
\]
by
\[
T^{(2)}(t, y) \approx a f(t + \alpha, y + \beta)
\]
for selected values of $a$, $\alpha$, and $\beta$. This requires use of a generalization of Taylor's series to functions of two variables. In particular, for small $\alpha$ and $\beta$ we have
\begin{align}
a f(t + \alpha, y + \beta) &= a\left[ f(t, y) + \frac{\partial f}{\partial t}(t, y)\alpha + \frac{\partial f}{\partial y}(t, y)\beta \right. \nonumber \\
&\quad \left. + \frac{1}{2}\left(\frac{\partial^2 f}{\partial t^2}(t, y)\alpha^2 + 2\frac{\partial^2 f}{\partial t \partial y}(t, y)\alpha\beta + \frac{\partial^2 f}{\partial y^2}(t, y)\beta^2\right)\right] \nonumber \\
&\quad + \text{higher order terms}. \tag{10.19}
\end{align}
Furthermore, we need $\frac{df}{dt}(t, y)$. Since $y = y(t)$, this can be found using a generalization of the Chain Rule from Calculus III:
\[
\frac{df}{dt}(t, y) = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial y}\frac{dy}{dt}.
\]
Thus,
\[
T^{(2)}(t, y) = f(t, y) + \frac{h}{2}\left[\frac{\partial f}{\partial t} + \frac{\partial f}{\partial y}\frac{dy}{dt}\right].
\]
Comparing this expression to the linear (Taylor series) approximation of $a f(t + \alpha, y + \beta)$, we have
\begin{align}
T^{(2)} &\approx a f(t + \alpha, y + \beta) \nonumber \\
f + \frac{h}{2}\frac{\partial f}{\partial t} + \frac{h}{2} f \frac{\partial f}{\partial y} &\approx a f + a\alpha \frac{\partial f}{\partial t} + a\beta \frac{\partial f}{\partial y}. \tag{10.20}
\end{align}


We see that we can choose
\[
a = 1, \quad \alpha = \frac{h}{2}, \quad \beta = \frac{h}{2} f.
\]
This leads to the numerical scheme
\begin{align}
y_{i+1} &= y_i + h f\left(t_i + \frac{h}{2},\, y_i + \frac{h}{2} f(t_i, y_i)\right), \quad i = 0, 1, \ldots, N-1, \nonumber \\
y_0 &= y_0. \tag{10.21}
\end{align}
This Runge-Kutta scheme is called the Midpoint Method, or Second Order Runge-Kutta Method, and it has order 2 if all second order derivatives of $f(t, y)$ are bounded.

Often, in implementing Runge-Kutta schemes, one computes the arguments separately, as shown in the following MATLAB code snippet. (This code snippet could replace the Euler's Method section in the code in the last section.)

% Midpoint Method

y(1)=1;

for i=2:N+1

k1=h/2*f(t(i-1),y(i-1));

k2=h*f(t(i-1)+h/2,y(i-1)+k1);

y(i)=y(i-1)+k2;

t(i)=t(i-1)+h;

end

Example 10.3. Compare the Midpoint Method with the 2nd Order Taylor's Method for the problem
$$y' = t^2 + y, \quad y(0) = 1, \quad t \in [0, 1]. \qquad (10.22)$$
The solution to this problem is $y(t) = 3e^t - 2 - 2t - t^2$. In order to implement the 2nd Order Taylor's Method, we need
$$T^{(2)} = f(t, y) + \frac{h}{2}f'(t, y) = t^2 + y + \frac{h}{2}\left(2t + t^2 + y\right). \qquad (10.23)$$
The results of the implementation are shown in Table 10.5.
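For reference, a minimal sketch of the 2nd Order Taylor loop for this problem is given below. It mirrors the 3rd Order Taylor snippet shown earlier; the slope function f, step size h, and number of steps N are assumed to be defined as in those scripts.

% Taylor's Method, Order 2, for y' = t^2 + y
y(1)=1;
for i=2:N+1
   y(i)=y(i-1)+h*f(t(i-1),y(i-1))+h^2/2*(2*t(i-1)+t(i-1)^2+y(i-1));
   t(i)=t(i-1)+h;
end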

There are other ways to approximate higher order Taylor polynomials. For example, we can approximate $T^{(3)}(t, y)$ using four parameters by
$$T^{(3)}(t, y) \approx a f(t, y) + b f\left(t + \alpha, y + \beta f(t, y)\right).$$
Expanding this approximation and using
$$T^{(3)}(t, y) \approx f(t, y) + \frac{h}{2}\frac{df}{dt}(t, y) + \frac{h^2}{6}\frac{d^2 f}{dt^2}(t, y),$$
we find that we cannot get rid of the $O(h^2)$ terms. Thus, the best we can do is derive second order schemes. In fact, following a procedure similar to the derivation of the Midpoint Method, we find that
$$a + b = 1, \quad \alpha b = \frac{h}{2}, \quad \beta = \alpha.$$


Table 10.5: Numerical values for 2nd Order Taylor's Method, Midpoint Method, exact solution, and errors for solving Example 10.3 with N = 10.

Exact     Taylor    Error     Midpoint   Error
1.0000    1.0000    0.0000    1.0000     0.0000
1.1055    1.1050    0.0005    1.1053     0.0003
1.2242    1.2231    0.0011    1.2236     0.0006
1.3596    1.3577    0.0019    1.3585     0.0010
1.5155    1.5127    0.0028    1.5139     0.0016
1.6962    1.6923    0.0038    1.6939     0.0023
1.9064    1.9013    0.0051    1.9032     0.0031
2.1513    2.1447    0.0065    2.1471     0.0041
2.4366    2.4284    0.0083    2.4313     0.0053
2.7688    2.7585    0.0103    2.7620     0.0068
3.1548    3.1422    0.0126    3.1463     0.0085

There are three equations and four unknowns. Therefore, there are many second order methods. Two classic methods are given by the modified Euler method ($a = b = \frac{1}{2}$, $\alpha = \beta = h$) and Heun's method ($a = \frac{1}{4}$, $b = \frac{3}{4}$, $\alpha = \beta = \frac{2}{3}h$).
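A minimal sketch of the modified Euler choice ($a = b = \frac{1}{2}$, $\alpha = \beta = h$) is given below; it could replace the Midpoint Method loop above, with the same slope function f and step size h assumed.

% Modified Euler Method (a = b = 1/2, alpha = beta = h)
y(1)=1;
for i=2:N+1
   k1=h*f(t(i-1),y(i-1));          % slope at the current point
   k2=h*f(t(i-1)+h,y(i-1)+k1);     % slope at the predicted endpoint
   y(i)=y(i-1)+(k1+k2)/2;          % average the two slopes
   t(i)=t(i-1)+h;
end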

The Fourth Order Runge-Kutta Method, which is most often used, is given by the scheme
$$\begin{aligned}
y_0 &= y_0,\\
k_1 &= h f(t_i, y_i),\\
k_2 &= h f\left(t_i + \tfrac{h}{2},\, y_i + \tfrac{1}{2}k_1\right),\\
k_3 &= h f\left(t_i + \tfrac{h}{2},\, y_i + \tfrac{1}{2}k_2\right),\\
k_4 &= h f(t_i + h,\, y_i + k_3),\\
y_{i+1} &= y_i + \tfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right), \quad i = 0, 1, \ldots, N-1. \qquad (10.24)
\end{aligned}$$

Again, we can test this on Example 10.3 with N = 10. The MATLAB implementation is given by

% Runge-Kutta 4th Order to solve dy/dt = f(t,y), y(a)=y0, on [a,b]

clear

a=0;

b=1;

N=10;

h=(b-a)/N;

% Slope function

f = inline(’t^2+y’,’t’,’y’);

sol = inline(’-2-2*t-t^2+3*exp(t)’,’t’);

% Initial Condition

t(1)=0;


y(1)=1;

% RK4 Method

y1(1)=1;

for i=2:N+1

k1=h*f(t(i-1),y1(i-1));

k2=h*f(t(i-1)+h/2,y1(i-1)+k1/2);

k3=h*f(t(i-1)+h/2,y1(i-1)+k2/2);

k4=h*f(t(i-1)+h,y1(i-1)+k3);

y1(i)=y1(i-1)+(k1+2*k2+2*k3+k4)/6;

t(i)=t(i-1)+h;

end

MATLAB has built-in ODE solvers, as do other software packages, like Maple and Mathematica. There are also open source packages, such as the Python-based NumPy and Matplotlib, or Octave, some of which are contained within the Sage Project.

In MATLAB, there is ode45 for a fourth order Runge-Kutta method. Its implementation is given by

[t,y]=ode45(f,[0 1],1);

In this case f is given by an inline function like in the above RK4 code. The time interval is entered as [0, 1] and the 1 is the initial condition, y(0) = 1.

However, ode45 is not a straightforward RK4 implementation. It is a hybrid method in which 4th and 5th order methods are combined, allowing for adaptive step sizes to handle subintervals of the integration region which need more care. In this case, it implements a fourth order Runge-Kutta-Fehlberg method. Running this code for the above example actually results in values for N = 41 and not N = 10. If we wanted to have the routine output numerical solutions at specific times, then one could use the following form

tspan=0:h:1;

[t,y]=ode45(f,tspan,1);

In Table 10.6 we show the solutions which result for Example 10.3, comparing the RK4 snippet above with ode45. As you can see, RK4 is much better than the previous implementation of the second order RK (Midpoint) Method. However, the MATLAB routine is two orders of magnitude better than RK4.
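The errors reported in Tables 10.6 and 10.7 can be computed directly once the numerical and exact solutions are in hand. A minimal sketch, assuming t, y1 (the RK4 values), and y (the ode45 values at the same times) from the scripts above:

% Errors relative to the exact solution y(t) = 3e^t - 2 - 2t - t^2
yex = 3*exp(t) - 2 - 2*t - t.^2;      % exact values on the time grid
err_rk4   = abs(y1(:) - yex(:));      % RK4 errors
err_ode45 = abs(y(:)  - yex(:));      % ode45 errors
disp([t(:) yex(:) err_rk4 err_ode45])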

There are many ODE solvers in MATLAB. These are typically useful if RK4 is having difficulty solving particular problems. For the most part, one is fine using RK4, especially as a starting point. For example, there is ode23, which is similar to ode45 but combines a second and third order scheme. Applying it to Example 10.3, we obtain the results in Table 10.7, where we compare it to the second order Runge-Kutta method. The code snippets are shown below.

% Second Order RK Method
y1(1)=1;
for i=2:N+1
   k1=h*f(t(i-1),y1(i-1));
   k2=h*f(t(i-1)+h/2,y1(i-1)+k1/2);
   y1(i)=y1(i-1)+k2;
   t(i)=t(i-1)+h;
end

tspan=0:h:1;
[t,y]=ode23(f,tspan,1);

Table 10.6: Numerical values for Fourth Order Runge-Kutta Method, ode45, exact solution, and errors for solving Example 10.3 with N = 10.

Exact     RK4       Error         ode45     Error
1.0000    1.0000    0.0000        1.0000    0.0000
1.1055    1.1055    4.5894e-08    1.1055    -2.5083e-10
1.2242    1.2242    1.2335e-07    1.2242    -6.0935e-10
1.3596    1.3596    2.3850e-07    1.3596    -1.0954e-09
1.5155    1.5155    3.9843e-07    1.5155    -1.7319e-09
1.6962    1.6962    6.1126e-07    1.6962    -2.5451e-09
1.9064    1.9064    8.8636e-07    1.9064    -3.5651e-09
2.1513    2.1513    1.2345e-06    2.1513    -4.8265e-09
2.4366    2.4366    1.6679e-06    2.4366    -6.3686e-09
2.7688    2.7688    2.2008e-06    2.7688    -8.2366e-09
3.1548    3.1548    2.8492e-06    3.1548    -1.0482e-08

Table 10.7: Numerical values for Second Order Runge-Kutta (Midpoint) Method, ode23, exact solution, and errors for solving Example 10.3 with N = 10.

Exact     Midpoint   Error     ode23     Error
1.0000    1.0000     0.0000    1.0000    0.0000
1.1055    1.1053     0.0003    1.1055    2.7409e-06
1.2242    1.2236     0.0006    1.2242    8.7114e-06
1.3596    1.3585     0.0010    1.3596    1.6792e-05
1.5155    1.5139     0.0016    1.5154    2.7361e-05
1.6962    1.6939     0.0023    1.6961    4.0853e-05
1.9064    1.9032     0.0031    1.9063    5.7764e-05
2.1513    2.1471     0.0041    2.1512    7.8665e-05
2.4366    2.4313     0.0053    2.4365    0.0001
2.7688    2.7620     0.0068    2.7687    0.0001
3.1548    3.1463     0.0085    3.1547    0.0002

We have seen several numerical schemes for solving initial value problems. There are other methods, or combinations of methods, which aim to refine the numerical approximations efficiently, as if the step size in the current methods were taken to be much smaller. Some methods extrapolate solutions to obtain information outside of the solution interval. Others use one scheme to get a guess to the solution while refining, or correcting, this to obtain better solutions as the iteration through time proceeds. Such methods are described in courses in numerical analysis and in the literature. At this point we will apply these methods to several physics problems before continuing with analytical solutions.


10.2 The Heat Equation

10.2.1 The Finite Difference Method

The heat equation can be solved using separation of variables. However, many partial differential equations cannot be solved exactly and one needs to turn to numerical solutions. The heat equation is a simple test case for using numerical methods. Here we will use the simplest method, finite differences.

Let us consider the heat equation in one dimension,
$$u_t = k u_{xx}.$$
Boundary conditions and an initial condition will be applied later. The starting point is figuring out how to approximate the derivatives in this equation.

Recall that the partial derivative, $u_t$, is defined by
$$\frac{\partial u}{\partial t} = \lim_{\Delta t \to 0} \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t}.$$
Therefore, we can use the approximation
$$\frac{\partial u}{\partial t} \approx \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t}. \qquad (10.25)$$

This is called a forward difference approximation.

In order to find an approximation to the second derivative, $u_{xx}$, we start with the forward difference
$$\frac{\partial u}{\partial x} \approx \frac{u(x + \Delta x, t) - u(x, t)}{\Delta x}.$$
Then,
$$\frac{\partial u_x}{\partial x} \approx \frac{u_x(x + \Delta x, t) - u_x(x, t)}{\Delta x}.$$

We need to approximate the terms in the numerator. It is customary to use a backward difference approximation. This is given by letting $\Delta x \to -\Delta x$ in the forward difference form,
$$\frac{\partial u}{\partial x} \approx \frac{u(x, t) - u(x - \Delta x, t)}{\Delta x}. \qquad (10.26)$$
Applying this to $u_x$ evaluated at $x = x$ and $x = x + \Delta x$, we have
$$u_x(x, t) \approx \frac{u(x, t) - u(x - \Delta x, t)}{\Delta x},$$
and
$$u_x(x + \Delta x, t) \approx \frac{u(x + \Delta x, t) - u(x, t)}{\Delta x}.$$


Inserting these expressions into the approximation for $u_{xx}$, we have
$$\frac{\partial^2 u}{\partial x^2} = \frac{\partial u_x}{\partial x} \approx \frac{u_x(x + \Delta x, t) - u_x(x, t)}{\Delta x} \approx \frac{\frac{u(x+\Delta x,t)-u(x,t)}{\Delta x} - \frac{u(x,t)-u(x-\Delta x,t)}{\Delta x}}{\Delta x} = \frac{u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)}{(\Delta x)^2}. \qquad (10.27)$$
This approximation for $u_{xx}$ is called the central difference approximation of $u_{xx}$.

Combining Equation (10.25) with (10.27) in the heat equation, we have
$$\frac{u(x, t + \Delta t) - u(x, t)}{\Delta t} \approx k\,\frac{u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)}{(\Delta x)^2}.$$
Solving for $u(x, t + \Delta t)$, we find
$$u(x, t + \Delta t) \approx u(x, t) + \alpha\left[u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)\right], \qquad (10.28)$$
where $\alpha = k\frac{\Delta t}{(\Delta x)^2}$.

In this equation we have a way to determine the solution at position $x$ and time $t + \Delta t$, given that we know the solution at the three positions $x$, $x + \Delta x$, and $x - \Delta x$ at time $t$:
$$u(x, t + \Delta t) \approx u(x, t) + \alpha\left[u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)\right]. \qquad (10.29)$$

A shorthand notation is usually used to write out finite difference schemes. The domain of the solution is $x \in [a, b]$ and $t \geq 0$. We seek approximate values of $u(x, t)$ at specific positions and times. We first divide the interval $[a, b]$ into $N$ subintervals of width $\Delta x = (b - a)/N$. Then, the endpoints of the subintervals are given by
$$x_i = a + i\Delta x, \quad i = 0, 1, \ldots, N.$$
Similarly, we take time steps of $\Delta t$, at times
$$t_j = j\Delta t, \quad j = 0, 1, 2, \ldots.$$
This gives a grid of points $(x_i, t_j)$ in the domain.

At each grid point in the domain we seek an approximate solution to the heat equation, $u_{i,j} \approx u(x_i, t_j)$. Equation (10.29) becomes
$$u_{i,j+1} \approx u_{i,j} + \alpha\left[u_{i+1,j} - 2u_{i,j} + u_{i-1,j}\right]. \qquad (10.30)$$

Equation (10.30) is the finite difference scheme for solving the heat equation. This equation is represented by the stencil shown in Figure 10.6. The black circles represent the four terms in the equation: $u_{i,j}$, $u_{i-1,j}$, $u_{i+1,j}$, and $u_{i,j+1}$.


Figure 10.6: This stencil indicates the four types of terms in the finite difference scheme in Equation (10.30). The black circles represent the four terms in the equation: $u_{i,j}$, $u_{i-1,j}$, $u_{i+1,j}$, and $u_{i,j+1}$.

Figure 10.7: Applying the stencil to the row of initial values gives the solution at the next time step.

Let's assume that the initial condition is given by
$$u(x, 0) = f(x).$$
Then, we have $u_{i,0} = f(x_i)$. Knowing these values, denoted by the open circles in Figure 10.7, we apply the stencil to generate the solution on the $j = 1$ row. This is shown in Figure 10.7.

Further rows are generated by successively applying the stencil on each row, using the known approximations of $u_{i,j}$ at each level. This gives the values of the solution at the open circles shown in Figure 10.8. We notice that the solution can only be obtained at a finite number of points on the grid.

In order to obtain the missing values, we need to impose boundary conditions. For example, if we have Dirichlet conditions at $x = a$,
$$u(a, t) = 0,$$
or $u_{0,j} = 0$ for $j = 0, 1, \ldots$, then we can fill in some of the missing data points as seen in Figure 10.9.

The process continues until we again go as far as we can. This is shown in Figure 10.10.


Figure 10.8: Continuation of the process provides solutions at the indicated points.

Figure 10.9: Knowing the values of the solution at x = a, we can fill in more of the grid.

We can fill in the rest of the grid using a boundary condition at $x = b$. For Dirichlet conditions at $x = b$,
$$u(b, t) = 0,$$
or $u_{N,j} = 0$ for $j = 0, 1, \ldots$, we can fill in the rest of the missing data points as seen in Figure 10.11.

We could also use Neumann conditions. For example, let
$$u_x(a, t) = 0.$$
The approximation to the derivative gives
$$\left.\frac{\partial u}{\partial x}\right|_{x=a} \approx \frac{u(a + \Delta x, t) - u(a, t)}{\Delta x} = 0.$$
Then,
$$u(a + \Delta x, t) = u(a, t),$$
or $u_{0,j} = u_{1,j}$, for $j = 0, 1, \ldots$. Thus, we know the values at the boundary and can generate the solutions at the grid points as before.
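In the MATLAB script given later in this section, such a Neumann condition would simply change the boundary update inside the time loop. A minimal sketch, assuming the same variable names as that script:

% Neumann condition u_x(a,t) = 0: copy the neighboring interior value
u1(1) = u1(2);      % replaces the Dirichlet update u1(1) = 0
u1(N+1) = 0;        % a Dirichlet condition kept at x = b, for example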


Figure 10.10: Knowing the values of the solution at x = a, we can fill in more of the grid until we stop.

Figure 10.11: Using the boundary conditions and the initial condition, the grid can be filled in through any time level.

We now have to code this using software. We can use MATLAB to do this. An example of the code is given below. In this example we specify the length of the rod, L = 1, and the heat constant, k = 1. The code is run for t ∈ [0, 0.1].

The grid is created using N = 10 subintervals in space and M = 50 time steps. This gives dx = ∆x and dt = ∆t. Using these values, we find the numerical scheme constant α = k∆t/(∆x)².

Next, we define $x_i = i*dx$, $i = 0, 1, \ldots, N$. However, in MATLAB, we cannot have an index of 0. We need to start with i = 1. Thus, $x_i = (i-1)*dx$, $i = 1, 2, \ldots, N+1$.

Next, we establish the initial condition. We take a simple condition of
$$u(x, 0) = \sin \pi x.$$
We have enough information to begin the numerical scheme as developed earlier. Namely, we cycle through the time steps using the scheme. There is one loop for each time step. We will generate the new time step from the last time step in the form
$$u^{new}_i = u^{old}_i + \alpha\left[u^{old}_{i+1} - 2u^{old}_i + u^{old}_{i-1}\right]. \qquad (10.31)$$


This is done using u1(i) = $u^{new}_i$ and u0(i) = $u^{old}_i$.

At the end of each time loop we update the boundary points so that the grid can be filled in as discussed. When done, we can plot the final solution. If we want to show solutions at intermediate steps, we can plot the solution earlier.

% Solution of the Heat Equation Using a Forward Difference Scheme

% Initialize Data

% Length of Rod, Time Interval

% Number of Points in Space, Number of Time Steps

L=1;

T=0.1;

k=1;

N=10;

M=50;

dx=L/N;

dt=T/M;

alpha=k*dt/dx^2;

% Position

for i=1:N+1

x(i)=(i-1)*dx;

end

% Initial Condition

for i=1:N+1

u0(i)=sin(pi*x(i));

end

% Partial Difference Equation (Numerical Scheme)

for j=1:M

for i=2:N

u1(i)=u0(i)+alpha*(u0(i+1)-2*u0(i)+u0(i-1));

end

u1(1)=0;

u1(N+1)=0;

u0=u1;

end

% Plot solution

plot(x, u1);
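The interior update can also be written without the inner loop by using MATLAB's vector indexing, and intermediate profiles can be plotted as the computation proceeds. A minimal sketch of an alternative time loop, under the same variable names as above (the plotting lines are optional):

% Vectorized time stepping with occasional plots of the profile
for j=1:M
   u1(2:N) = u0(2:N) + alpha*(u0(3:N+1) - 2*u0(2:N) + u0(1:N-1));
   u1(1) = 0;            % Dirichlet conditions at both ends
   u1(N+1) = 0;
   u0 = u1;
   if mod(j,10) == 0     % plot every tenth time step
      plot(x, u1); hold on;
   end
end
hold off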


10.3 Truncation Error

In the previous section we found a finite difference scheme for numerically solving the one dimensional heat equation. We have from Equations (10.29) and (10.31),
$$u(x, t + \Delta t) \approx u(x, t) + \alpha\left[u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)\right], \qquad (10.32)$$
$$u_{i,j+1} \approx u_{i,j} + \alpha\left[u_{i+1,j} - 2u_{i,j} + u_{i-1,j}\right], \qquad (10.33)$$
where $\alpha = k\Delta t/(\Delta x)^2$. For points $x \in [a, b]$ and $t \geq 0$, we use the scheme to find approximate values of $u(x_i, t_j) = u_{i,j}$ at positions $x_i = a + i\Delta x$, $i = 0, 1, \ldots, N$, and times $t_j = j\Delta t$, $j = 0, 1, 2, \ldots$.

In implementing the scheme we have found that there are errors introduced, just like when using Euler's Method for ordinary differential equations. These truncation errors can be found by applying Taylor approximations, just as we did for ordinary differential equations. In the schemes (10.32) and (10.33), we have not used equality. In order to replace the approximation by an equality, we need to estimate the order of the terms neglected in a Taylor series approximation of the time and space derivatives we have approximated.

We begin with the time derivative approximation. We used the forward difference approximation (10.25),
$$\frac{\partial u}{\partial t} \approx \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t}. \qquad (10.34)$$

This can be derived from the Taylor series expansion of $u(x, t + \Delta t)$ about $\Delta t = 0$,
$$u(x, t + \Delta t) = u(x, t) + \frac{\partial u}{\partial t}(x, t)\Delta t + \frac{1}{2!}\frac{\partial^2 u}{\partial t^2}(x, t)(\Delta t)^2 + O((\Delta t)^3).$$
Solving for $\frac{\partial u}{\partial t}(x, t)$, we obtain
$$\frac{\partial u}{\partial t}(x, t) = \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t} - \frac{1}{2!}\frac{\partial^2 u}{\partial t^2}(x, t)\Delta t + O((\Delta t)^2).$$
We see that we have obtained the forward difference approximation (10.25) with the added benefit of knowing something about the error terms introduced in the approximation. Namely, when we approximate $u_t$ with the forward difference approximation (10.25), we are making an error of
$$E(x, t, \Delta t) = -\frac{1}{2!}\frac{\partial^2 u}{\partial t^2}(x, t)\Delta t + O((\Delta t)^2).$$
We have truncated the Taylor series to obtain this approximation and we say that
$$\frac{\partial u}{\partial t} = \frac{u(x, t + \Delta t) - u(x, t)}{\Delta t} + O(\Delta t) \qquad (10.35)$$
is a first order approximation in $\Delta t$.


In a similar manner, we can obtain the truncation error for the $u_{xx}$ term. However, instead of starting with the approximation we used in Equation (10.27), we will derive a term using the Taylor series expansion of $u(x + \Delta x, t)$ about $\Delta x = 0$. Namely, we begin with the expansion
$$u(x + \Delta x, t) = u(x, t) + u_x(x, t)\Delta x + \frac{1}{2!}u_{xx}(x, t)(\Delta x)^2 + \frac{1}{3!}u_{xxx}(x, t)(\Delta x)^3 + \frac{1}{4!}u_{xxxx}(x, t)(\Delta x)^4 + \ldots. \qquad (10.36)$$
We want to solve this equation for $u_{xx}$. However, there are some obstructions, like needing to know the $u_x$ term. So, we seek a way to eliminate lower order terms. One way is to note that replacing $\Delta x$ by $-\Delta x$ gives
$$u(x - \Delta x, t) = u(x, t) - u_x(x, t)\Delta x + \frac{1}{2!}u_{xx}(x, t)(\Delta x)^2 - \frac{1}{3!}u_{xxx}(x, t)(\Delta x)^3 + \frac{1}{4!}u_{xxxx}(x, t)(\Delta x)^4 + \ldots. \qquad (10.37)$$
Adding these Taylor series, we have
$$u(x + \Delta x, t) + u(x - \Delta x, t) = 2u(x, t) + u_{xx}(x, t)(\Delta x)^2 + \frac{2}{4!}u_{xxxx}(x, t)(\Delta x)^4 + O((\Delta x)^6). \qquad (10.38)$$
We can now solve for $u_{xx}$ to find
$$u_{xx}(x, t) = \frac{u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)}{(\Delta x)^2} - \frac{2}{4!}u_{xxxx}(x, t)(\Delta x)^2 + O((\Delta x)^4). \qquad (10.39)$$
Thus, we have that
$$u_{xx}(x, t) = \frac{u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)}{(\Delta x)^2} + O((\Delta x)^2)$$
is the second order in $\Delta x$ approximation of $u_{xx}$.

Combining these results, we find that the heat equation is approximated by
$$\frac{u(x, t + \Delta t) - u(x, t)}{\Delta t} = \frac{u(x + \Delta x, t) - 2u(x, t) + u(x - \Delta x, t)}{(\Delta x)^2} + O\left((\Delta x)^2, \Delta t\right).$$
This has local truncation error that is first order in time and second order in space.
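These orders are easy to confirm numerically by applying the difference formulas to a smooth function with a known second derivative and halving the step size. A minimal illustrative sketch, using u(x) = sin x at x = 1 so that u''(1) = -sin 1:

% Check the second order accuracy of the central difference for u_xx
u = @(x) sin(x);  exact = -sin(1);
for dx = [0.1 0.05 0.025]
   approx = (u(1+dx) - 2*u(1) + u(1-dx))/dx^2;
   fprintf('dx = %6.4f   error = %e\n', dx, abs(approx - exact));
end
% Each halving of dx should reduce the error by roughly a factor of four.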

10.4 Stability

Another consideration for numerical schemes for the heat equation is the stability of the scheme. In implementing the finite difference scheme,
$$u_{m,j+1} = u_{m,j} + \alpha\left[u_{m+1,j} - 2u_{m,j} + u_{m-1,j}\right], \qquad (10.40)$$


where $\alpha = k\Delta t/(\Delta x)^2$, one finds that the solution grows without bound when $\alpha$ is too big. In other words, if you try to push the individual time steps too far into the future, then something goes haywire. We can determine the onset of instability by looking at the solution of this equation for $u_{m,j}$. [Note: We changed the index $i$ to $m$ to avoid confusion later in this section.]

The scheme is actually what is called a partial difference equation for $u_{m,j}$. We could write it in terms of differences, such as $u_{m+1,j} - u_{m,j}$ and $u_{m,j+1} - u_{m,j}$. The time steps involved are at most one unit apart and the spatial points are two units apart. We can see this in the stencil in Figure 10.6. So, this is a second order partial difference equation, similar to the idea that the heat equation is a second order partial differential equation. The heat equation can be solved using the method of separation of variables. The difference scheme can also be solved in a similar fashion. We will show how this can lead to product solutions.

We begin by assuming that $u_{m,j} = X_m T_j$, a product of functions of the indices $m$ and $j$. Inserting this guess into the finite difference equation, we have
$$u_{m,j+1} = u_{m,j} + \alpha\left[u_{m+1,j} - 2u_{m,j} + u_{m-1,j}\right],$$
$$X_m T_{j+1} = X_m T_j + \alpha\left[X_{m+1} - 2X_m + X_{m-1}\right]T_j,$$
$$\frac{T_{j+1}}{T_j} = \frac{\alpha X_{m+1} + (1 - 2\alpha)X_m + \alpha X_{m-1}}{X_m}. \qquad (10.41)$$

Noting that we have a function of $j$ equal to a function of $m$, we can set each of these to a constant, $\lambda$. Then, we obtain two ordinary difference equations:
$$T_{j+1} = \lambda T_j, \qquad (10.42)$$
$$\alpha X_{m+1} + (1 - 2\alpha)X_m + \alpha X_{m-1} = \lambda X_m. \qquad (10.43)$$

The first equation is a simple first order difference equation and can be solved by iteration:
$$T_{j+1} = \lambda T_j = \lambda(\lambda T_{j-1}) = \lambda^2 T_{j-1} = \lambda^3 T_{j-2} = \cdots = \lambda^{j+1} T_0. \qquad (10.44)$$

The second difference equation can be solved by making a guess in the same spirit as solving a second order constant coefficient differential equation. Namely, let $X_m = \xi^m$ for some number $\xi$. This gives
$$\alpha X_{m+1} + (1 - 2\alpha)X_m + \alpha X_{m-1} = \lambda X_m,$$
$$\xi^{m-1}\left[\alpha\xi^2 + (1 - 2\alpha)\xi + \alpha\right] = \lambda\xi^m,$$
$$\alpha\xi^2 + (1 - 2\alpha - \lambda)\xi + \alpha = 0. \qquad (10.45)$$

This is an equation for $\xi$ in terms of $\alpha$ and $\lambda$. Due to the boundary conditions, we expect to have oscillatory solutions. So, we can guess that


$\xi = |\xi|e^{i\theta}$, where $i$ here is the imaginary unit. We assume that $|\xi| = 1$, and thus $\xi = e^{i\theta}$ and $X_m = \xi^m = e^{im\theta}$. Since $x_m = m\Delta x$, we have $X_m = e^{i x_m\theta/\Delta x}$. We define $\beta = \theta/\Delta x$, to give $X_m = e^{i\beta x_m}$ and $\xi = e^{i\beta\Delta x}$.

Inserting this value for $\xi$ into the quadratic equation for $\xi$, we have
$$\begin{aligned}
0 &= \alpha\xi^2 + (1 - 2\alpha - \lambda)\xi + \alpha\\
&= \alpha e^{2i\beta\Delta x} + (1 - 2\alpha - \lambda)e^{i\beta\Delta x} + \alpha\\
&= e^{i\beta\Delta x}\left[\alpha\left(e^{i\beta\Delta x} + e^{-i\beta\Delta x}\right) + (1 - 2\alpha - \lambda)\right]\\
&= e^{i\beta\Delta x}\left[2\alpha\cos(\beta\Delta x) + (1 - 2\alpha - \lambda)\right],\\
\lambda &= 2\alpha\cos(\beta\Delta x) + 1 - 2\alpha. \qquad (10.46)
\end{aligned}$$

So, we have found that
$$u_{m,j} = X_m T_j = \lambda^j\left(a\cos\beta x_m + b\sin\beta x_m\right), \quad a^2 + b^2 = h_0^2,$$
and
$$\lambda = 2\alpha\cos(\beta\Delta x) + 1 - 2\alpha.$$

For the solution to remain bounded, or stable, we need $|\lambda| \leq 1$. Therefore, we have the inequality
$$-1 \leq 2\alpha\cos(\beta\Delta x) + 1 - 2\alpha \leq 1.$$
Since $\cos(\beta\Delta x) \leq 1$, the upper bound is obviously satisfied. Since $-1 \leq \cos(\beta\Delta x)$, the lower bound is satisfied for $-1 \leq -2\alpha + 1 - 2\alpha$, or $\alpha \leq \frac{1}{2}$. Therefore, the stability criterion is satisfied when
$$\alpha = k\frac{\Delta t}{(\Delta x)^2} \leq \frac{1}{2}.$$
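This criterion is easy to check in the heat equation script of the last section: compute α before the time loop and warn, or shrink ∆t, when it exceeds 1/2. A minimal sketch, assuming the same variable names as that script:

% Check the stability criterion alpha = k*dt/dx^2 <= 1/2
alpha = k*dt/dx^2;
if alpha > 0.5
   warning('alpha = %g exceeds 1/2; the forward scheme will be unstable.', alpha)
end
% For the earlier values (k=1, dx=0.1, dt=0.002) alpha = 0.2, which is stable;
% reducing M to 10 (dt = 0.01) gives alpha = 1 and the computed solution blows up.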

10.5 Matrix Formulation


A Calculus Review

“Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say.” Bertrand Russell (1872-1970)

Before you begin our study of differential equations, perhaps you should review some things from calculus. You definitely need to know something before taking this class. It is assumed that you have taken Calculus and are comfortable with differentiation and integration. Of course, you are not expected to know every detail from these courses. However, there are some topics and methods that will come up, and it would be useful to have a handy reference to what it is you should know.

Most importantly, you should still have your calculus text to which you can refer throughout the course. Looking back on that old material, you will find that it appears easier than when you first encountered the material. That is the nature of learning mathematics and other subjects. Your understanding is continually evolving as you explore topics more in depth. It does not always sink in the first time you see it. In this chapter we will give a quick review of these topics. We will also mention a few new methods that might be interesting.

A.1 What Do I Need To Know From Calculus?

A.1.1 Introduction

There are two main topics in calculus: derivatives and integrals. You learned that derivatives are useful in providing rates of change in either time or space. Integrals provide areas under curves, but are also useful in providing other types of sums over continuous bodies, such as lengths, areas, volumes, moments of inertia, or flux integrals. In physics, one can look at graphs of position versus time, and the slope (derivative) of such a function gives the velocity. (See Figure A.1.) By plotting velocity versus time, you can either look at the derivative to obtain acceleration, or you could look at the area under the curve and get the displacement:
$$x = \int_{t_0}^{t} v\, dt. \qquad (A.1)$$
This is shown in Figure A.2.

Figure A.1: Plot of position vs. time.

Figure A.2: Plot of velocity vs. time.

Of course, you need to know how to differentiate and integrate given functions. Even before getting into differentiation and integration, you need to have a bag of functions useful in physics. Common functions are the polynomial and rational functions. You should be fairly familiar with these. Polynomial functions take the general form
$$f(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0, \qquad (A.2)$$
where $a_n \neq 0$. This is the form of a polynomial of degree $n$. Rational functions, $f(x) = \frac{g(x)}{h(x)}$, consist of ratios of polynomials. Their graphs can exhibit vertical and horizontal asymptotes.

Next are the exponential and logarithmic functions. The most common are the natural exponential and the natural logarithm. The natural exponential is given by $f(x) = e^x$, where $e \approx 2.718281828\ldots$. The natural logarithm is the inverse of the exponential, denoted by $\ln x$. (One needs to be careful, because some mathematics and physics books use log to mean the natural logarithm, whereas many of us were first trained to use this notation to mean the common logarithm, which is the 'log base 10'. Here we will use $\ln x$ for the natural logarithm.)

The properties of the exponential function follow from the basic properties for exponents. Namely, we have the exponential properties
$$e^0 = 1, \qquad (A.3)$$
$$e^{-a} = \frac{1}{e^a}, \qquad (A.4)$$
$$e^a e^b = e^{a+b}, \qquad (A.5)$$
$$(e^a)^b = e^{ab}. \qquad (A.6)$$

The relation between the natural logarithm and the natural exponential is given by
$$y = e^x \iff x = \ln y. \qquad (A.7)$$

Some common logarithmic properties are
$$\ln 1 = 0, \qquad (A.8)$$
$$\ln\frac{1}{a} = -\ln a, \qquad (A.9)$$
$$\ln(ab) = \ln a + \ln b, \qquad (A.10)$$
$$\ln\frac{a}{b} = \ln a - \ln b, \qquad (A.11)$$
$$\ln\frac{1}{b} = -\ln b. \qquad (A.12)$$

We will see applications of these relations as we progress through the course.


A.1.2 Trigonometric Functions

Another set of useful functions are the trigonometric functions. These functions have probably plagued you since high school. They have their origins as far back as the building of the pyramids. Typical applications in your introductory math classes probably have included finding the heights of trees, flag poles, or buildings. It was recognized a long time ago that similar right triangles have fixed ratios of any pair of sides of the two similar triangles. These ratios only change when the non-right angles change.

Thus, the ratio of two sides of a right triangle only depends upon the angle. Since there are six possible ratios (think about it!), there are six possible functions. These are designated as sine, cosine, tangent and their reciprocals (cosecant, secant and cotangent). In your introductory physics class, you really only needed the first three. You also learned that they are represented as the ratios of the opposite to hypotenuse, adjacent to hypotenuse, etc. Hopefully, you have this down by now.

You should also know the exact values of these basic trigonometric functions for the special angles $\theta = 0, \frac{\pi}{6}, \frac{\pi}{3}, \frac{\pi}{4}, \frac{\pi}{2}$, and their corresponding angles in the second, third and fourth quadrants. This becomes internalized after much use, but we provide these values in Table A.1 just in case you need a reminder.

θ      cos θ    sin θ    tan θ
0      1        0        0
π/6    √3/2     1/2      √3/3
π/3    1/2      √3/2     √3
π/4    √2/2     √2/2     1
π/2    0        1        undefined

Table A.1: Table of Trigonometric Values

The problems students often have using trigonometric functions in later courses stem from using, or recalling, identities. We will have many an occasion to do so in this class as well. What is an identity? It is a relation that holds true all of the time. For example, the most common identity for trigonometric functions is the Pythagorean identity
$$\sin^2\theta + \cos^2\theta = 1. \qquad (A.13)$$
This holds true for every angle $\theta$! An even simpler identity is
$$\tan\theta = \frac{\sin\theta}{\cos\theta}. \qquad (A.14)$$

Other simple identities can be derived from the Pythagorean identity. Dividing the identity by $\cos^2\theta$, or $\sin^2\theta$, yields
$$\tan^2\theta + 1 = \sec^2\theta, \qquad (A.15)$$
$$1 + \cot^2\theta = \csc^2\theta. \qquad (A.16)$$


Several other useful identities stem from the use of the sine and cosine of the sum and difference of two angles. Namely, we have the sum and difference identities
$$\sin(A \pm B) = \sin A\cos B \pm \sin B\cos A, \qquad (A.17)$$
$$\cos(A \pm B) = \cos A\cos B \mp \sin A\sin B. \qquad (A.18)$$
Note that the upper (lower) signs are taken together.

Example A.1. Evaluate $\sin\frac{\pi}{12}$.
$$\sin\frac{\pi}{12} = \sin\left(\frac{\pi}{3} - \frac{\pi}{4}\right) = \sin\frac{\pi}{3}\cos\frac{\pi}{4} - \sin\frac{\pi}{4}\cos\frac{\pi}{3} = \frac{\sqrt{3}}{2}\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2}\frac{1}{2} = \frac{\sqrt{2}}{4}\left(\sqrt{3} - 1\right). \qquad (A.19)$$

The double angle formulae are found by setting $A = B$:
$$\sin(2A) = 2\sin A\cos A, \qquad (A.20)$$
$$\cos(2A) = \cos^2 A - \sin^2 A. \qquad (A.21)$$
Using Equation (A.13), we can rewrite (A.21) as
$$\cos(2A) = 2\cos^2 A - 1 \qquad (A.22)$$
$$= 1 - 2\sin^2 A. \qquad (A.23)$$
These, in turn, lead to the half angle formulae. Solving for $\cos^2 A$ and $\sin^2 A$, we find the half angle formulae
$$\sin^2 A = \frac{1 - \cos 2A}{2}, \qquad (A.24)$$
$$\cos^2 A = \frac{1 + \cos 2A}{2}. \qquad (A.25)$$

Example A.2. Evaluate $\cos\frac{\pi}{12}$. In the last example, we used the sum/difference identities to evaluate a similar expression. We could have also used a half angle identity. In this example, we have
$$\cos^2\frac{\pi}{12} = \frac{1}{2}\left(1 + \cos\frac{\pi}{6}\right) = \frac{1}{2}\left(1 + \frac{\sqrt{3}}{2}\right) = \frac{1}{4}\left(2 + \sqrt{3}\right). \qquad (A.26)$$


So, $\cos\frac{\pi}{12} = \frac{1}{2}\sqrt{2 + \sqrt{3}}$. This is not the simplest form and is called a nested radical. In fact, if we proceeded using the difference identity for cosines, then we would obtain
$$\cos\frac{\pi}{12} = \frac{\sqrt{2}}{4}\left(1 + \sqrt{3}\right).$$
So, how does one show that these answers are the same?

(It is useful at times to know when one can reduce square roots of such radicals, called denesting. More generally, one seeks to write $\sqrt{a + b\sqrt{q}} = c + d\sqrt{q}$. Following the procedure in this example, one has $d = \frac{b}{2c}$ and $c^2 = \frac{1}{2}\left(a \pm \sqrt{a^2 - qb^2}\right)$. As long as $a^2 - qb^2$ is a perfect square, there is a chance to reduce the expression to a simpler form.)

Let's focus on the factor $\sqrt{2 + \sqrt{3}}$. We seek to write this in the form $c + d\sqrt{3}$. Equating the two expressions and squaring, we have
$$2 + \sqrt{3} = (c + d\sqrt{3})^2 = c^2 + 3d^2 + 2cd\sqrt{3}. \qquad (A.27)$$
In order to solve for $c$ and $d$, it would seem natural to equate the coefficients of $\sqrt{3}$ and the remaining terms. We obtain a system of two nonlinear algebraic equations,
$$c^2 + 3d^2 = 2, \qquad (A.28)$$
$$2cd = 1. \qquad (A.29)$$
Solving the second equation for $d = 1/2c$, and substituting the result into the first equation, we find
$$4c^4 - 8c^2 + 3 = 0.$$
This fourth order equation has four solutions,
$$c = \pm\frac{\sqrt{2}}{2},\ \pm\frac{\sqrt{6}}{2},$$
with corresponding values
$$d = \pm\frac{\sqrt{2}}{2},\ \pm\frac{\sqrt{6}}{6}.$$

Thus,
$$\cos\frac{\pi}{12} = \frac{1}{2}\sqrt{2 + \sqrt{3}} = \pm\frac{1}{2}\left(\frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}\sqrt{3}\right) = \pm\frac{\sqrt{2}}{4}\left(1 + \sqrt{3}\right) \qquad (A.30)$$
and
$$\cos\frac{\pi}{12} = \frac{1}{2}\sqrt{2 + \sqrt{3}} = \pm\frac{1}{2}\left(\frac{\sqrt{6}}{2} + \frac{\sqrt{6}}{6}\sqrt{3}\right) = \pm\frac{\sqrt{6}}{12}\left(3 + \sqrt{3}\right). \qquad (A.31)$$
Of the four solutions, two are negative, and we know the value of the cosine for this angle has to be positive. The remaining two solutions are actually equal! A quick


computation will verify this:
$$\frac{\sqrt{6}}{12}\left(3 + \sqrt{3}\right) = \frac{\sqrt{3}\sqrt{2}}{12}\left(3 + \sqrt{3}\right) = \frac{\sqrt{2}}{12}\left(3\sqrt{3} + 3\right) = \frac{\sqrt{2}}{4}\left(\sqrt{3} + 1\right). \qquad (A.32)$$
We could have bypassed this situation by requiring that the solutions for $c$ and $d$ were not simply proportional to $\sqrt{3}$ like they are in the second case.

Finally, another useful set of identities are the product identities. For example, if we add the identities for $\sin(A + B)$ and $\sin(A - B)$, the second terms cancel and we have
$$\sin(A + B) + \sin(A - B) = 2\sin A\cos B.$$
Thus, we have that
$$\sin A\cos B = \frac{1}{2}\left(\sin(A + B) + \sin(A - B)\right). \qquad (A.33)$$
Similarly, we have
$$\cos A\cos B = \frac{1}{2}\left(\cos(A + B) + \cos(A - B)\right) \qquad (A.34)$$
and
$$\sin A\sin B = \frac{1}{2}\left(\cos(A - B) - \cos(A + B)\right). \qquad (A.35)$$

Know the above boxed identities! These boxed equations are the most common trigonometric identities. They appear often and should just roll off of your tongue.

We will also need to understand the behaviors of trigonometric functions. In particular, we know that the sine and cosine functions are periodic. They are not the only periodic functions, as we shall see. [Just visualize the teeth on a carpenter's saw.] However, they are the most common periodic functions.

A periodic function $f(x)$ satisfies the relation
$$f(x + p) = f(x), \quad \text{for all } x,$$
for some constant $p$. If $p$ is the smallest such number, then $p$ is called the period. Both the sine and cosine functions have period $2\pi$. This means that the graph repeats its form every $2\pi$ units. Similarly, $\sin bx$ and $\cos bx$ have the common period $p = \frac{2\pi}{b}$. We will make use of this fact in later chapters.

Related to these are the inverse trigonometric functions. For example, $f(x) = \sin^{-1} x$, or $f(x) = \arcsin x$. Inverse functions give back angles, so you should think
$$\theta = \sin^{-1} x \iff x = \sin\theta. \qquad (A.36)$$
(In Feynman's Surely You're Joking, Mr. Feynman!, Richard Feynman (1918-1988) talks about his invention of his own notation for both trigonometric and inverse trigonometric functions, as the standard notation did not make sense to him.)


Also, you should recall that $y = \sin^{-1} x = \arcsin x$ is only a function if $-\frac{\pi}{2} \leq y \leq \frac{\pi}{2}$. Similar relations exist for $y = \cos^{-1} x = \arccos x$ and $\tan^{-1} x = \arctan x$.

Once you think about these functions as providing angles, then you can make sense out of more complicated looking expressions, like $\tan(\sin^{-1} x)$. Such expressions often pop up in evaluations of integrals. We can untangle this in order to produce a simpler form by referring to expression (A.36). $\theta = \sin^{-1} x$ is simply an angle whose sine is $x$. Knowing that the sine is the opposite side of a right triangle divided by its hypotenuse, one just draws a triangle in this proportion, as shown in Figure A.3. Namely, the side opposite the angle has length $x$ and the hypotenuse has length 1. Using the Pythagorean Theorem, the missing side (adjacent to the angle) is simply $\sqrt{1 - x^2}$. Having obtained the lengths for all three sides, we can now produce the tangent of the angle as
$$\tan(\sin^{-1} x) = \frac{x}{\sqrt{1 - x^2}}.$$

Figure A.3: $\theta = \sin^{-1} x \Rightarrow \tan\theta = \frac{x}{\sqrt{1 - x^2}}$.

A.1.3 Hyperbolic Functions

So, are there any other functions that are useful in physics? Actually, there are many more. However, you have probably not seen many of them to date. We will see by the end of the semester that there are many important functions that arise as solutions of some fairly generic, but important, physics problems. In your calculus classes you have also seen that some relations are represented in parametric form. However, there is at least one other set of elementary functions, which you should already know about. These are the hyperbolic functions. Such functions are useful in representing hanging cables, unbounded orbits, and special traveling waves called solitons. They also play a role in special and general relativity. (Solitons are special solutions to some generic nonlinear wave equations. They typically experience elastic collisions and play special roles in a variety of fields in physics, such as hydrodynamics and optics. A simple soliton solution is of the form $u(x, t) = 2\eta^2\operatorname{sech}^2\eta(x - 4\eta^2 t)$.) We will later see the connection between the hyperbolic and trigonometric functions in Chapter 8.

Hyperbolic functions are actually related to the trigonometric functions, as we shall see after a little bit of complex function theory. For now, we just want to recall a few definitions and identities. Just as all of the trigonometric functions can be built from the sine and the cosine, the hyperbolic functions can be defined in terms of the hyperbolic sine and hyperbolic cosine (shown in Figure A.4):
$$\sinh x = \frac{e^x - e^{-x}}{2}, \qquad (A.37)$$
$$\cosh x = \frac{e^x + e^{-x}}{2}. \qquad (A.38)$$

Figure A.4: Plots of $\cosh x$ and $\sinh x$. Note that $\sinh 0 = 0$, $\cosh 0 = 1$, and $\cosh x \geq 1$.

There are four other hyperbolic functions. These are defined in terms of the above functions, similar to the relations between the trigonometric functions. We have
$$\tanh x = \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}}, \qquad (A.39)$$
$$\operatorname{sech} x = \frac{1}{\cosh x} = \frac{2}{e^x + e^{-x}}, \qquad (A.40)$$
$$\operatorname{csch} x = \frac{1}{\sinh x} = \frac{2}{e^x - e^{-x}}, \qquad (A.41)$$
$$\coth x = \frac{1}{\tanh x} = \frac{e^x + e^{-x}}{e^x - e^{-x}}. \qquad (A.42)$$

There is also a whole set of identities, similar to those for the trigonometric functions. For example, the Pythagorean identity for trigonometric functions, $\sin^2\theta + \cos^2\theta = 1$, is replaced by the identity
$$\cosh^2 x - \sinh^2 x = 1.$$
This is easily shown by simply using the definitions of these functions. This identity is also useful for providing a parametric set of equations describing hyperbolae. Letting $x = a\cosh t$ and $y = b\sinh t$, one has
$$\frac{x^2}{a^2} - \frac{y^2}{b^2} = \cosh^2 t - \sinh^2 t = 1.$$

A list of commonly needed hyperbolic function identities is given by the following:
$$\cosh^2 x - \sinh^2 x = 1, \qquad (A.43)$$
$$\tanh^2 x + \operatorname{sech}^2 x = 1, \qquad (A.44)$$
$$\cosh(A \pm B) = \cosh A\cosh B \pm \sinh A\sinh B, \qquad (A.45)$$
$$\sinh(A \pm B) = \sinh A\cosh B \pm \sinh B\cosh A, \qquad (A.46)$$
$$\cosh 2x = \cosh^2 x + \sinh^2 x, \qquad (A.47)$$
$$\sinh 2x = 2\sinh x\cosh x, \qquad (A.48)$$
$$\cosh^2 x = \frac{1}{2}\left(1 + \cosh 2x\right), \qquad (A.49)$$
$$\sinh^2 x = \frac{1}{2}\left(\cosh 2x - 1\right). \qquad (A.50)$$
Note the similarity with the trigonometric identities. Other identities can be derived from these.

There also exist inverse hyperbolic functions, and these can be written in terms of logarithms. As with the inverse trigonometric functions, we begin with the definition
$$y = \sinh^{-1} x \iff x = \sinh y. \qquad (A.51)$$
The aim is to write $y$ in terms of $x$ without using the inverse function. First, we note that
$$x = \frac{1}{2}\left(e^y - e^{-y}\right). \qquad (A.52)$$
Next we solve for $e^y$. This is done by noting that $e^{-y} = \frac{1}{e^y}$ and rewriting the previous equation as
$$0 = (e^y)^2 - 2xe^y - 1. \qquad (A.53)$$


This equation is in quadratic form, which we can solve using the quadratic formula as
$$e^y = x + \sqrt{1 + x^2}.$$
(There is only one root, as we expect the exponential to be positive.) The final step is to solve for $y$,
$$y = \ln\left(x + \sqrt{1 + x^2}\right). \qquad (A.54)$$

The inverse hyperbolic functions are given by
$$\sinh^{-1} x = \ln\left(x + \sqrt{1 + x^2}\right),$$
$$\cosh^{-1} x = \ln\left(x + \sqrt{x^2 - 1}\right),$$
$$\tanh^{-1} x = \frac{1}{2}\ln\frac{1 + x}{1 - x}.$$
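As a quick numerical sanity check (purely illustrative), MATLAB's built-in asinh can be compared against the logarithmic form just derived:

% Compare the logarithmic form of the inverse hyperbolic sine with asinh
x = 2;
log(x + sqrt(1 + x^2))   % 1.4436...
asinh(x)                 % the same value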

A.1.4 Derivatives

Now that we know some elementary functions, we seek their derivatives. We will not spend time exploring the appropriate limits in any rigorous way. We are only interested in the results. We provide these in Table A.2. We expect that you know the meaning of the derivative and all of the usual rules, such as the product and quotient rules.

Function     Derivative
a            0
x^n          n x^(n-1)
e^(ax)       a e^(ax)
ln ax        1/x
sin ax       a cos ax
cos ax       -a sin ax
tan ax       a sec^2 ax
csc ax       -a csc ax cot ax
sec ax       a sec ax tan ax
cot ax       -a csc^2 ax
sinh ax      a cosh ax
cosh ax      a sinh ax
tanh ax      a sech^2 ax
csch ax      -a csch ax coth ax
sech ax      -a sech ax tanh ax
coth ax      -a csch^2 ax

Table A.2: Table of Common Derivatives (a is a constant).

Also, you should be familiar with the Chain Rule. Recall that this rule tells us that if we have a composition of functions, such as the elementary functions above, then we can compute the derivative of the composite function. Namely, if $h(x) = f(g(x))$, then
$$\frac{dh}{dx} = \frac{d}{dx}\left(f(g(x))\right) = \left.\frac{df}{dg}\right|_{g(x)}\frac{dg}{dx} = f'(g(x))\,g'(x). \qquad (A.55)$$

Example A.3. Differentiate $H(x) = 5\cos\left(\pi\tanh 2x^2\right)$.
This is a composition of three functions, $H(x) = f(g(h(x)))$, where $f(x) = 5\cos x$, $g(x) = \pi\tanh x$, and $h(x) = 2x^2$. Then the derivative becomes
$$H'(x) = 5\left(-\sin\left(\pi\tanh 2x^2\right)\right)\frac{d}{dx}\left(\pi\tanh 2x^2\right) = -5\pi\sin\left(\pi\tanh 2x^2\right)\operatorname{sech}^2 2x^2\,\frac{d}{dx}\left(2x^2\right) = -20\pi x\sin\left(\pi\tanh 2x^2\right)\operatorname{sech}^2 2x^2. \qquad (A.56)$$

A.1.5 Integrals

Integration is typically a bit harder. Imagine being given the last result in (A.56) and having to figure out what was differentiated in order to get the given function. As you may recall from the Fundamental Theorem of Calculus, the integral is the inverse operation to differentiation:
$$\int \frac{df}{dx}\,dx = f(x) + C. \qquad (A.57)$$

It is not always easy to evaluate a given integral. In fact, some integrals are not even doable! However, you learned in calculus that there are some methods that could yield an answer. While you might be happier using a computer algebra system, such as Maple or WolframAlpha.com, or a fancy calculator, you should know a few basic integrals and know how to use tables for some of the more complicated ones. In fact, it can be exhilarating when you can do a given integral without reference to a computer or a Table of Integrals. However, you should be prepared to do some integrals using what you have been taught in calculus. We will review a few of these methods and some of the standard integrals in this section.

First of all, there are some integrals you are expected to know without doing any work. These integrals appear often and are just an application of the Fundamental Theorem of Calculus to the previous Table A.2. The basic integrals that students should know off the top of their heads are given in Table A.3.

Example A.4. Evaluate $\int \frac{x}{\sqrt{x^2+1}}\,dx$.
When confronted with an integral, you should first ask if a simple substitution would reduce the integral to one you know how to do.
The ugly part of this integral is the $x^2 + 1$ under the square root. So, we let $u = x^2 + 1$.
Noting that when $u = f(x)$, we have $du = f'(x)\,dx$. For our example, $du = 2x\,dx$.
Looking at the integral, part of the integrand can be written as $x\,dx = \frac{1}{2}\,du$. Then, the integral becomes
$$\int \frac{x}{\sqrt{x^2+1}}\,dx = \frac{1}{2}\int \frac{du}{\sqrt{u}}.$$


The substitution has converted our integral into an integral over $u$. Also, this integral is doable! It is one of the integrals we should know. Namely, we can write it as
$$\frac{1}{2}\int \frac{du}{\sqrt{u}} = \frac{1}{2}\int u^{-1/2}\,du.$$
This is now easily finished after integrating and using the substitution variable to give
$$\int \frac{x}{\sqrt{x^2+1}}\,dx = \frac{1}{2}\,\frac{u^{1/2}}{\tfrac{1}{2}} + C = \sqrt{x^2+1} + C.$$
Note that we have added the required integration constant and that the derivative of the result easily gives the original integrand (after employing the Chain Rule).

Function              Indefinite Integral
a                     ax
x^n                   x^(n+1)/(n+1)
e^(ax)                (1/a) e^(ax)
1/x                   ln x
sin ax                -(1/a) cos ax
cos ax                (1/a) sin ax
sec^2 ax              (1/a) tan ax
sinh ax               (1/a) cosh ax
cosh ax               (1/a) sinh ax
sech^2 ax             (1/a) tanh ax
sec x                 ln |sec x + tan x|
1/(a+bx)              (1/b) ln(a+bx)
1/(a^2+x^2)           (1/a) tan^(-1)(x/a)
1/sqrt(a^2-x^2)       sin^(-1)(x/a)
1/(x sqrt(x^2-a^2))   (1/a) sec^(-1)(x/a)
1/sqrt(x^2-a^2)       cosh^(-1)(x/a) = ln |sqrt(x^2-a^2) + x|

Table A.3: Table of Common Integrals.

Often we are faced with definite integrals, in which we integrate between two limits. There are several ways to use these limits. However, students often forget that a change of variables generally means that the limits have to change.

Example A.5. Evaluate $\int_0^2 \frac{x}{\sqrt{x^2+1}}\,dx$.
This is the last example but with integration limits added. We proceed as before. We let $u = x^2 + 1$. As $x$ goes from 0 to 2, $u$ takes values from 1 to 5. So, this substitution gives
$$\int_0^2 \frac{x}{\sqrt{x^2+1}}\,dx = \frac{1}{2}\int_1^5 \frac{du}{\sqrt{u}} = \left.\sqrt{u}\right|_1^5 = \sqrt{5} - 1.$$

When you become proficient at integration, you can bypass some of these steps. In the next example we try to demonstrate the thought process involved in using substitution without explicitly using the substitution variable.


Example A.6. Evaluate $\int_0^2 \frac{x}{\sqrt{9+4x^2}}\,dx$.
As with the previous example, one sees that the derivative of $9 + 4x^2$ is proportional to $x$, which is in the numerator of the integrand. Thus a substitution would give an integrand of the form $u^{-1/2}$. So, we expect the answer to be proportional to $\sqrt{u} = \sqrt{9 + 4x^2}$. The starting point is therefore
$$\int \frac{x}{\sqrt{9+4x^2}}\,dx = A\sqrt{9+4x^2},$$
where $A$ is a constant to be determined.
We can determine $A$ through differentiation, since the derivative of the answer should be the integrand. Thus,
$$\frac{d}{dx}A\left(9+4x^2\right)^{\frac{1}{2}} = A\left(9+4x^2\right)^{-\frac{1}{2}}\left(\frac{1}{2}\right)(8x) = 4xA\left(9+4x^2\right)^{-\frac{1}{2}}. \qquad (A.58)$$
Comparing this result with the integrand, we see that the integrand is obtained when $A = \frac{1}{4}$. Therefore,
$$\int \frac{x}{\sqrt{9+4x^2}}\,dx = \frac{1}{4}\sqrt{9+4x^2}.$$
We now complete the integral,
$$\int_0^2 \frac{x}{\sqrt{9+4x^2}}\,dx = \frac{1}{4}\left[5 - 3\right] = \frac{1}{2}.$$

(The function $\operatorname{gd}(x) = \int_0^x \frac{dx}{\cosh x} = 2\tan^{-1}e^x - \frac{\pi}{2}$ is called the Gudermannian and connects trigonometric and hyperbolic functions. This function was named after Christoph Gudermann (1798-1852), but introduced by Johann Heinrich Lambert (1728-1777), who was one of the first to introduce hyperbolic functions.)

Example A.7. Evaluate $\int \frac{dx}{\cosh x}$.
This integral can be performed by first using the definition of $\cosh x$ followed by a simple substitution.
$$\int \frac{dx}{\cosh x} = \int \frac{2}{e^x + e^{-x}}\,dx = \int \frac{2e^x}{e^{2x} + 1}\,dx. \qquad (A.59)$$
Now, we let $u = e^x$ and $du = e^x dx$. Then,
$$\int \frac{dx}{\cosh x} = \int \frac{2}{1 + u^2}\,du = 2\tan^{-1}u + C = 2\tan^{-1}e^x + C. \qquad (A.60)$$

Integration by Parts

When the Method of Substitution fails, there are other methods you can try. One of the most used is the Method of Integration by Parts. Recall the Integration by Parts Formula:
$$\int u\,dv = uv - \int v\,du. \qquad (A.61)$$


The idea is that you are given the integral on the left and you can relate it to an integral on the right. Hopefully, the new integral is one you can do, or at least it is an easier integral than the one you are trying to evaluate.

However, you are not usually given the functions $u$ and $v$. You have to determine them. The integral form that you really have is a function of another variable, say $x$. Another form of the Integration by Parts Formula can be written as
$$\int f(x)g'(x)\,dx = f(x)g(x) - \int g(x)f'(x)\,dx. \qquad (A.62)$$
This form is a bit more complicated in appearance, though it is clearer than the $u$-$v$ form as to what is happening. The derivative has been moved from one function to the other. Recall that this formula was derived by integrating the product rule for differentiation. (See your calculus text.)

Note: Often in physics one needs to move a derivative between functions inside an integrand. The key is to use integration by parts to move the derivative from one function to the other under an integral.

These two formulae can be related by using the differential relations
$$u = f(x) \;\rightarrow\; du = f'(x)\,dx,$$
$$v = g(x) \;\rightarrow\; dv = g'(x)\,dx. \qquad (A.63)$$
This also gives a method for applying the Integration by Parts Formula.

Example A.8. Consider the integral $\int x\sin 2x\,dx$. We choose $u = x$ and $dv = \sin 2x\,dx$. This gives the correct left side of the Integration by Parts Formula. We next determine $v$ and $du$:
$$du = \frac{du}{dx}\,dx = dx,$$
$$v = \int dv = \int \sin 2x\,dx = -\frac{1}{2}\cos 2x.$$
We note that one usually does not need the integration constant. Inserting these expressions into the Integration by Parts Formula, we have
$$\int x\sin 2x\,dx = -\frac{1}{2}x\cos 2x + \frac{1}{2}\int \cos 2x\,dx.$$
We see that the new integral is easier to do than the original integral. Had we picked $u = \sin 2x$ and $dv = x\,dx$, then the formula still works, but the resulting integral is not easier.

For completeness, we finish the integration. The result is
$$\int x\sin 2x\,dx = -\frac{1}{2}x\cos 2x + \frac{1}{4}\sin 2x + C.$$
As always, you can check your answer by differentiating the result, a step students often forget to do. Namely,
$$\frac{d}{dx}\left(-\frac{1}{2}x\cos 2x + \frac{1}{4}\sin 2x + C\right) = -\frac{1}{2}\cos 2x + x\sin 2x + \frac{1}{4}(2\cos 2x) = x\sin 2x. \qquad (A.64)$$
So, we do get back the integrand in the original integral.


We can also perform integration by parts on definite integrals. The general formula is written as
$$\int_a^b f(x)g'(x)\,dx = \left.f(x)g(x)\right|_a^b - \int_a^b g(x)f'(x)\,dx. \qquad (A.65)$$

Example A.9. Consider the integral
$$\int_0^{\pi} x^2\cos x\,dx.$$

This will require two integrations by parts. First, we let $u = x^2$ and $dv = \cos x\,dx$. Then,
$$du = 2x\,dx, \quad v = \sin x.$$
Inserting into the Integration by Parts Formula, we have
$$\int_0^{\pi} x^2\cos x\,dx = \left.x^2\sin x\right|_0^{\pi} - 2\int_0^{\pi} x\sin x\,dx = -2\int_0^{\pi} x\sin x\,dx. \qquad (A.66)$$
We note that the resulting integral is easier than the given integral, but we still cannot do the integral off the top of our head (unless we look at Example A.8!). So, we need to integrate by parts again. (Note: In your calculus class you may recall that there is a tabular method for carrying out multiple applications of the formula. We will show this method in the next example.)

We apply integration by parts by letting $U = x$ and $dV = \sin x\,dx$. This gives $dU = dx$ and $V = -\cos x$. Therefore, we have
$$\int_0^{\pi} x\sin x\,dx = \left.-x\cos x\right|_0^{\pi} + \int_0^{\pi} \cos x\,dx = \pi + \left.\sin x\right|_0^{\pi} = \pi. \qquad (A.67)$$
The final result is
$$\int_0^{\pi} x^2\cos x\,dx = -2\pi.$$

There are other ways to compute integrals of this type. First of all, there is the Tabular Method to perform integration by parts. A second method is to use differentiation of parameters under the integral. We will demonstrate this using examples.

Example A.10. Compute the integral $\int_0^{\pi} x^2\cos x\,dx$ using the Tabular Method.
First we identify the two functions under the integral, $x^2$ and $\cos x$. We then write the two functions and list the derivatives and integrals of each, respectively. This is shown in Table A.4. Note that we stopped when we reached zero in the left column.

Next, one draws diagonal arrows, as indicated, with alternating signs attached, starting with +. The indefinite integral is then obtained by summing the products of the functions at the ends of the arrows along with the signs on each arrow:
$$\int x^2\cos x\,dx = x^2\sin x + 2x\cos x - 2\sin x + C.$$


To find the definite integral, one evaluates the antiderivative at the given limits.
$$\int_0^{\pi} x^2\cos x\,dx = \left[x^2\sin x + 2x\cos x - 2\sin x\right]_0^{\pi} = \left(\pi^2\sin\pi + 2\pi\cos\pi - 2\sin\pi\right) - 0 = -2\pi. \qquad (A.68)$$

Actually, the Tabular Method works even if a zero does not appear in the left column. One can go as far as possible, and if a zero does not appear, then one needs only integrate, if possible, the product of the functions in the last row, adding the next sign in the alternating sign progression. The next example shows how this works.

D      I
x^2    cos x
2x     sin x
2      -cos x
0      -sin x

Table A.4: Tabular Method

Example A.11. Use the Tabular Method to compute $\int e^{2x}\sin 3x\,dx$.
As before, we first set up the table, as shown in Table A.5.

D           I
sin 3x      e^(2x)
3 cos 3x    (1/2) e^(2x)
-9 sin 3x   (1/4) e^(2x)

Table A.5: Tabular Method, showing a nonterminating example.

Putting together the pieces, noting that the derivatives in the left column will never vanish, we have
$$\int e^{2x}\sin 3x\,dx = \left(\frac{1}{2}\sin 3x - \frac{3}{4}\cos 3x\right)e^{2x} + \int\left(-9\sin 3x\right)\left(\frac{1}{4}e^{2x}\right)dx.$$
The integral on the right is a multiple of the one on the left, so we can combine them,
$$\frac{13}{4}\int e^{2x}\sin 3x\,dx = \left(\frac{1}{2}\sin 3x - \frac{3}{4}\cos 3x\right)e^{2x},$$
or
$$\int e^{2x}\sin 3x\,dx = \left(\frac{2}{13}\sin 3x - \frac{3}{13}\cos 3x\right)e^{2x}.$$
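If the Symbolic Math Toolbox is available, this antiderivative can be verified by differentiation (shown only as an optional check; the toolbox is an assumption here):

% Symbolic check of the Tabular Method result (requires the Symbolic Math Toolbox)
syms x
F = (2/13*sin(3*x) - 3/13*cos(3*x))*exp(2*x);
simplify(diff(F, x))     % returns exp(2*x)*sin(3*x), confirming the antiderivative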


Differentiation Under the Integral

Another method that one can use to evaluate this integral is to differentiate under the integral sign. This is mentioned in Richard Feynman's memoir Surely You're Joking, Mr. Feynman!. In the book, Feynman recounts using this "trick" to be able to do integrals that his MIT classmates could not do. This is based on a theorem found in Advanced Calculus texts. Readers unfamiliar with partial derivatives should still be able to grasp their use in the following example.

Theorem A.1. Let the functions $f(x, t)$ and $\frac{\partial f(x,t)}{\partial x}$ be continuous in both $t$ and $x$ in the region of the $(t, x)$ plane which includes $a(x) \leq t \leq b(x)$, $x_0 \leq x \leq x_1$, where the functions $a(x)$ and $b(x)$ are continuous and have continuous derivatives for $x_0 \leq x \leq x_1$. Defining
$$F(x) \equiv \int_{a(x)}^{b(x)} f(x, t)\,dt,$$
then
$$\frac{dF(x)}{dx} = \left(\frac{\partial F}{\partial b}\right)\frac{db}{dx} + \left(\frac{\partial F}{\partial a}\right)\frac{da}{dx} + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x}f(x, t)\,dt = f(x, b(x))\,b'(x) - f(x, a(x))\,a'(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x}f(x, t)\,dt \qquad (A.69)$$
for $x_0 \leq x \leq x_1$. This is a generalized version of the Fundamental Theorem of Calculus.

In the next examples we show how we can use this theorem to bypass integration by parts.

Example A.12. Use differentiation under the integral sign to evaluate $\int xe^x\,dx$.
First, consider the integral
$$I(x, a) = \int e^{ax}\,dx = \frac{e^{ax}}{a}.$$
Then,
$$\frac{\partial I(x, a)}{\partial a} = \int xe^{ax}\,dx.$$
So,
$$\int xe^{ax}\,dx = \frac{\partial I(x, a)}{\partial a} = \frac{\partial}{\partial a}\left(\int e^{ax}\,dx\right) = \frac{\partial}{\partial a}\left(\frac{e^{ax}}{a}\right) = \left(\frac{x}{a} - \frac{1}{a^2}\right)e^{ax}. \qquad (A.70)$$
Evaluating this result at $a = 1$, we have
$$\int xe^x\,dx = (x - 1)e^x.$$
The reader can verify this result by employing the previous methods or by just differentiating the result.

Example A.13. We will do the integral $\int_0^{\pi} x^2 \cos x\, dx$ once more. First, consider the integral
\[
I(a) \equiv \int_0^{\pi} \cos ax\, dx = \frac{\sin ax}{a}\Big|_0^{\pi} = \frac{\sin a\pi}{a}. \qquad (A.71)
\]
Differentiating the integral $I(a)$ with respect to $a$ twice gives
\[
\frac{d^2 I(a)}{da^2} = -\int_0^{\pi} x^2 \cos ax\, dx. \qquad (A.72)
\]
Evaluation of this result at $a = 1$ leads to the desired result. Namely,
\begin{align*}
\int_0^{\pi} x^2 \cos x\, dx &= -\frac{d^2 I(a)}{da^2}\Big|_{a=1} \\
&= -\frac{d^2}{da^2}\left( \frac{\sin a\pi}{a} \right)\Big|_{a=1} \\
&= -\frac{d}{da}\left( \frac{a\pi \cos a\pi - \sin a\pi}{a^2} \right)\Big|_{a=1} \\
&= \frac{a^2\pi^2 \sin a\pi + 2a\pi \cos a\pi - 2\sin a\pi}{a^3}\Big|_{a=1} \\
&= -2\pi. \qquad (A.73)
\end{align*}
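For readers who want to confirm the value in (A.73) numerically, a few lines of Python suffice. This is a sketch only (it assumes the standard scipy and numpy libraries, which are not part of the text).

import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: x**2 * np.cos(x), 0.0, np.pi)
print(value, -2 * np.pi)   # both print approximately -6.283185307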

Trigonometric Integrals

Other types of integrals that you will see often are trigonometric integrals, in particular integrals involving powers of sines and cosines. For odd powers, a simple substitution will turn the integrals into simple powers.

(Integration of odd powers of sine and cosine.)

Example A.14. For example, consider
\[
\int \cos^3 x\, dx.
\]
This can be rewritten as
\[
\int \cos^3 x\, dx = \int \cos^2 x \cos x\, dx.
\]
Let $u = \sin x$. Then, $du = \cos x\, dx$. Since $\cos^2 x = 1 - \sin^2 x$, we have
\begin{align*}
\int \cos^3 x\, dx &= \int \cos^2 x \cos x\, dx \\
&= \int (1 - u^2)\, du \\
&= u - \frac{1}{3} u^3 + C \\
&= \sin x - \frac{1}{3}\sin^3 x + C. \qquad (A.74)
\end{align*}
A quick check confirms the answer:
\begin{align*}
\frac{d}{dx}\left( \sin x - \frac{1}{3}\sin^3 x + C \right) &= \cos x - \sin^2 x \cos x \\
&= \cos x (1 - \sin^2 x) \\
&= \cos^3 x. \qquad (A.75)
\end{align*}

Even powers of sines and cosines are a little more complicated, but doable. In these cases we need the half angle formulae (A.24)-(A.25). (Integration of even powers of sine and cosine.)

Example A.15. As an example, we will compute
\[
\int_0^{2\pi} \cos^2 x\, dx.
\]
Substituting the half angle formula for $\cos^2 x$, we have
\begin{align*}
\int_0^{2\pi} \cos^2 x\, dx &= \frac{1}{2} \int_0^{2\pi} (1 + \cos 2x)\, dx \\
&= \frac{1}{2}\left( x + \frac{1}{2}\sin 2x \right)\Big|_0^{2\pi} = \pi. \qquad (A.76)
\end{align*}

We note that this result appears often in physics. When looking at root mean square averages of sinusoidal waves, one needs the average of the square of sines and cosines. Recall that the average of a function on the interval $[a,b]$ is given as
\[
f_{\text{ave}} = \frac{1}{b-a} \int_a^b f(x)\, dx. \qquad (A.77)
\]
So, the average of $\cos^2 x$ over one period is
\[
\frac{1}{2\pi} \int_0^{2\pi} \cos^2 x\, dx = \frac{1}{2}. \qquad (A.78)
\]
The root mean square is then found by taking the square root, $\frac{1}{\sqrt{2}}$.

(Recall that RMS averages refer to the root mean square average. This is computed by first computing the average, or mean, of the square of some quantity, and then taking the square root. Typical examples are RMS voltage, RMS current, and the average energy in an electromagnetic wave. AC currents oscillate so fast that the measured value is the RMS voltage.)
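The $1/\sqrt{2}$ factor is easy to confirm numerically. The following short Python sketch (an illustration only, using the standard numpy library) samples one period of $\cos x$ and computes its RMS value.

import numpy as np

x = np.linspace(0.0, 2 * np.pi, 100001)
rms = np.sqrt(np.mean(np.cos(x) ** 2))
print(rms, 1 / np.sqrt(2))   # approximately 0.7071 in both cases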

Trigonometric Function Substitution

Another class of integrals typically studied in calculus are those involving the forms $\sqrt{1-x^2}$, $\sqrt{1+x^2}$, or $\sqrt{x^2-1}$. These can be simplified through the use of trigonometric substitutions. The idea is to combine the two terms under the radical into one term using trigonometric identities. We will consider some typical examples.


Example A.16. Evaluate $\int \sqrt{1-x^2}\, dx$.
Since $1 - \sin^2\theta = \cos^2\theta$, we perform the sine substitution
\[
x = \sin\theta, \qquad dx = \cos\theta\, d\theta.
\]
Then,
\begin{align*}
\int \sqrt{1-x^2}\, dx &= \int \sqrt{1-\sin^2\theta}\, \cos\theta\, d\theta \\
&= \int \cos^2\theta\, d\theta. \qquad (A.79)
\end{align*}
Using the last example, we have
\[
\int \sqrt{1-x^2}\, dx = \frac{1}{2}\left( \theta + \frac{1}{2}\sin 2\theta \right) + C.
\]
However, we need to write the answer in terms of $x$. We do this by first using the double angle formula for $\sin 2\theta$ and $\cos\theta = \sqrt{1-x^2}$ to obtain
\[
\int \sqrt{1-x^2}\, dx = \frac{1}{2}\left( \sin^{-1} x + x\sqrt{1-x^2} \right) + C.
\]

(In any of these computations careful attention has to be paid to simplifying the radical. This is because $\sqrt{x^2} = |x|$. For example, $\sqrt{(-5)^2} = \sqrt{25} = 5$. For $x = \sin\theta$, one typically specifies the domain $-\pi/2 \le \theta \le \pi/2$. In this domain we have $|\cos\theta| = \cos\theta$.)

Similar trigonometric substitutions result for integrands involving $\sqrt{1+x^2}$ and $\sqrt{x^2-1}$. The substitutions are summarized in Table A.6. The simplification of the given form is then obtained using trigonometric identities. This can also be accomplished by referring to the right triangles shown in Figure A.5.

Table A.6: Standard trigonometric substitutions.

    Form                 Substitution          Differential
    $\sqrt{a^2 - x^2}$   $x = a\sin\theta$     $dx = a\cos\theta\, d\theta$
    $\sqrt{a^2 + x^2}$   $x = a\tan\theta$     $dx = a\sec^2\theta\, d\theta$
    $\sqrt{x^2 - a^2}$   $x = a\sec\theta$     $dx = a\sec\theta\tan\theta\, d\theta$

[Figure A.5: Geometric relations used in trigonometric substitution. Three right triangles illustrate the substitutions $x = \sin\theta$ (legs $x$ and $\sqrt{1-x^2}$, hypotenuse $1$), $x = \tan\theta$ (legs $x$ and $1$, hypotenuse $\sqrt{1+x^2}$), and $x = \sec\theta$ (hypotenuse $x$, legs $1$ and $\sqrt{x^2-1}$).]

Example A.17. Evaluate $\int_0^2 \sqrt{x^2+4}\, dx$.
Let $x = 2\tan\theta$. Then, $dx = 2\sec^2\theta\, d\theta$ and
\[
\sqrt{x^2+4} = \sqrt{4\tan^2\theta + 4} = 2\sec\theta.
\]
So, the integral becomes
\[
\int_0^2 \sqrt{x^2+4}\, dx = 4 \int_0^{\pi/4} \sec^3\theta\, d\theta.
\]


One has to recall, or look up,
\[
\int \sec^3\theta\, d\theta = \frac{1}{2}\left( \tan\theta\sec\theta + \ln|\sec\theta + \tan\theta| \right) + C.
\]
This gives
\begin{align*}
\int_0^2 \sqrt{x^2+4}\, dx &= 2\left[ \tan\theta\sec\theta + \ln|\sec\theta + \tan\theta| \right]_0^{\pi/4} \\
&= 2\left( \sqrt{2} + \ln|\sqrt{2}+1| - (0 + \ln 1) \right) \\
&= 2\left( \sqrt{2} + \ln(\sqrt{2}+1) \right). \qquad (A.80)
\end{align*}
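As with the earlier examples, this closed form is easy to check numerically. The Python sketch below (illustrative only; it assumes the standard scipy and numpy libraries) compares a quadrature of $\sqrt{x^2+4}$ on $[0,2]$ with $2(\sqrt{2} + \ln(\sqrt{2}+1))$.

import numpy as np
from scipy.integrate import quad

numeric, _ = quad(lambda x: np.sqrt(x**2 + 4), 0.0, 2.0)
exact = 2 * (np.sqrt(2) + np.log(np.sqrt(2) + 1))
print(numeric, exact)   # both approximately 4.59117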

Example A.18. Evaluate $\int \frac{dx}{\sqrt{x^2-1}}$, $x \ge 1$.
In this case one needs the secant substitution. This yields
\begin{align*}
\int \frac{dx}{\sqrt{x^2-1}} &= \int \frac{\sec\theta\tan\theta\, d\theta}{\sqrt{\sec^2\theta - 1}} \\
&= \int \frac{\sec\theta\tan\theta\, d\theta}{\tan\theta} \\
&= \int \sec\theta\, d\theta \\
&= \ln(\sec\theta + \tan\theta) + C \\
&= \ln\left( x + \sqrt{x^2-1} \right) + C. \qquad (A.81)
\end{align*}

Example A.19. Evaluate $\int \frac{dx}{x\sqrt{x^2-1}}$, $x \ge 1$.
Again we can use a secant substitution. This yields
\begin{align*}
\int \frac{dx}{x\sqrt{x^2-1}} &= \int \frac{\sec\theta\tan\theta\, d\theta}{\sec\theta\sqrt{\sec^2\theta - 1}} \\
&= \int \frac{\sec\theta\tan\theta}{\sec\theta\tan\theta}\, d\theta \\
&= \int d\theta = \theta + C = \sec^{-1} x + C. \qquad (A.82)
\end{align*}

Hyperbolic Function Substitution

Even though trigonometric substitution plays a role in the calculus program, students often see hyperbolic function substitution used in physics courses. The reason might be that hyperbolic function substitution is sometimes simpler. The idea is the same as for trigonometric substitution: we use an identity to simplify the radical.

Example A.20. Evaluate $\int_0^2 \sqrt{x^2+4}\, dx$ using the substitution $x = 2\sinh u$.
Since $x = 2\sinh u$, we have $dx = 2\cosh u\, du$. Also, we can use the identity $\cosh^2 u - \sinh^2 u = 1$ to rewrite
\[
\sqrt{x^2+4} = \sqrt{4\sinh^2 u + 4} = 2\cosh u.
\]
The integral can now be evaluated using these substitutions and some hyperbolic function identities,
\begin{align*}
\int_0^2 \sqrt{x^2+4}\, dx &= 4 \int_0^{\sinh^{-1} 1} \cosh^2 u\, du \\
&= 2 \int_0^{\sinh^{-1} 1} (1 + \cosh 2u)\, du \\
&= 2\left[ u + \frac{1}{2}\sinh 2u \right]_0^{\sinh^{-1} 1} \\
&= 2\left[ u + \sinh u \cosh u \right]_0^{\sinh^{-1} 1} \\
&= 2\left( \sinh^{-1} 1 + \sqrt{2} \right). \qquad (A.83)
\end{align*}
In Example A.17 we used a trigonometric substitution and found
\[
\int_0^2 \sqrt{x^2+4}\, dx = 2\left( \sqrt{2} + \ln(\sqrt{2}+1) \right).
\]
This is the same result, since $\sinh^{-1} 1 = \ln(1+\sqrt{2})$.

Example A.21. Evaluate $\int \frac{dx}{\sqrt{x^2-1}}$ for $x \ge 1$ using hyperbolic function substitution.
This integral was evaluated in Example A.18 using the trigonometric substitution $x = \sec\theta$, and the resulting integral of $\sec\theta$ had to be recalled. Here we will use the substitution
\[
x = \cosh u, \qquad dx = \sinh u\, du, \qquad \sqrt{x^2-1} = \sqrt{\cosh^2 u - 1} = \sinh u.
\]
Then,
\begin{align*}
\int \frac{dx}{\sqrt{x^2-1}} &= \int \frac{\sinh u\, du}{\sinh u} \\
&= \int du = u + C \\
&= \cosh^{-1} x + C \\
&= \ln\left( x + \sqrt{x^2-1} \right) + C, \quad x \ge 1. \qquad (A.84)
\end{align*}
This is the same result as we had obtained previously, but this derivation was a little cleaner.

Also, we can extend this result to values $x \le -1$ by letting $x = -\cosh u$. This gives
\[
\int \frac{dx}{\sqrt{x^2-1}} = \ln\left( |x| - \sqrt{x^2-1} \right) + C, \quad x \le -1.
\]
Combining these results, we have shown
\[
\int \frac{dx}{\sqrt{x^2-1}} = \ln\left| x + \sqrt{x^2-1} \right| + C, \quad x^2 \ge 1.
\]

We have seen in the last example that the use of hyperbolic function substitution allows us to bypass integrating the secant function in Example A.18 when using trigonometric substitutions. In fact, we can use hyperbolic substitutions to evaluate integrals of powers of secants. Comparing Examples A.18 and A.21, we consider the transformation $\sec\theta = \cosh u$. The relation between differentials is found by differentiation, giving
\[
\sec\theta\tan\theta\, d\theta = \sinh u\, du.
\]
Since
\[
\tan^2\theta = \sec^2\theta - 1,
\]
we have $\tan\theta = \sinh u$, and therefore
\[
d\theta = \frac{du}{\cosh u}.
\]

In the next example we show how useful this transformation is. (Evaluation of $\int \sec\theta\, d\theta$.)

Example A.22. Evaluate $\int \sec\theta\, d\theta$ using hyperbolic function substitution.
From the discussion in the last paragraph, we have
\begin{align*}
\int \sec\theta\, d\theta &= \int du \\
&= u + C \\
&= \cosh^{-1}(\sec\theta) + C. \qquad (A.85)
\end{align*}
We can express this result in the usual form by using the logarithmic form of the inverse hyperbolic cosine,
\[
\cosh^{-1} x = \ln\left( x + \sqrt{x^2-1} \right).
\]
The result is
\[
\int \sec\theta\, d\theta = \ln(\sec\theta + \tan\theta) + C.
\]

This example was fairly simple using the transformation $\sec\theta = \cosh u$. Another common integral that arises often is the integral of $\sec^3\theta$. In a typical calculus class this integral is evaluated using integration by parts. However, that leads to a tricky manipulation that is a bit scary the first time it is encountered (and probably upon several more encounters). In the next example, we will show how hyperbolic function substitution is simpler. (Evaluation of $\int \sec^3\theta\, d\theta$.)

Example A.23. Evaluate $\int \sec^3\theta\, d\theta$ using hyperbolic function substitution.
We again use the transformation $\sec\theta = \cosh u$, with $d\theta = \frac{du}{\cosh u}$. Since $\sec^2\theta = 1 + \tan^2\theta$, this is the same as setting $\tan\theta = \sinh u$. Using this transformation and several identities, the integral becomes
\begin{align*}
\int \sec^3\theta\, d\theta &= \int \cosh^2 u\, du \\
&= \frac{1}{2} \int (1 + \cosh 2u)\, du \\
&= \frac{1}{2}\left( u + \frac{1}{2}\sinh 2u \right) \\
&= \frac{1}{2}\left( u + \sinh u \cosh u \right) \\
&= \frac{1}{2}\left( \cosh^{-1}(\sec\theta) + \tan\theta\sec\theta \right) \\
&= \frac{1}{2}\left( \sec\theta\tan\theta + \ln(\sec\theta + \tan\theta) \right) + C. \qquad (A.86)
\end{align*}

There are many other integration methods, some of which we will visit in other parts of the book, such as partial fraction decomposition and numerical integration. Another topic which we will revisit is power series.

A.1.6 Geometric Series

(Geometric series are fairly common and will be used throughout the book. You should learn to recognize them and work with them.)

Infinite series occur often in mathematics and physics. Two series which occur often are the geometric series and the binomial series. We will discuss these next.

A geometric series is of the form
\[
\sum_{n=0}^{\infty} ar^n = a + ar + ar^2 + \ldots + ar^n + \ldots. \qquad (A.87)
\]
Here $a$ is the first term and $r$ is called the ratio. It is called the ratio because the ratio of two consecutive terms in the sum is $r$.

Example A.24. For example,
\[
1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \ldots
\]
is an example of a geometric series. We can write this using summation notation,
\[
1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \ldots = \sum_{n=0}^{\infty} 1 \left( \frac{1}{2} \right)^n.
\]
Thus, $a = 1$ is the first term and $r = \frac{1}{2}$ is the common ratio of successive terms. Next, we seek the sum of this infinite series, if it exists.

The sum of a geometric series, when it exists, can easily be determined. We consider the $n$th partial sum:
\[
s_n = a + ar + \ldots + ar^{n-2} + ar^{n-1}. \qquad (A.88)
\]
Now, multiply this equation by $r$:
\[
r s_n = ar + ar^2 + \ldots + ar^{n-1} + ar^n. \qquad (A.89)
\]
Subtracting these two equations, while noting the many cancelations, we have
\begin{align*}
(1-r) s_n &= (a + ar + \ldots + ar^{n-2} + ar^{n-1}) - (ar + ar^2 + \ldots + ar^{n-1} + ar^n) \\
&= a - ar^n \\
&= a(1 - r^n). \qquad (A.90)
\end{align*}
Thus, the $n$th partial sums can be written in the compact form
\[
s_n = \frac{a(1 - r^n)}{1 - r}. \qquad (A.91)
\]
The sum, if it exists, is given by $S = \lim_{n \to \infty} s_n$. Letting $n$ get large in the partial sum (A.91), we need only evaluate $\lim_{n \to \infty} r^n$. From the special limits in the Appendix we know that this limit is zero for $|r| < 1$. Thus, we have

Geometric Series

The sum of the geometric series exists for $|r| < 1$ and is given by
\[
\sum_{n=0}^{\infty} ar^n = \frac{a}{1-r}, \qquad |r| < 1. \qquad (A.92)
\]

The reader should verify that the geometric series diverges for all other values of $r$. Namely, consider what happens for the separate cases $|r| > 1$, $r = 1$, and $r = -1$.
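To see the convergence concretely, the brief Python sketch below (an illustration only, not part of the text) compares the partial sums $s_n$ of Example A.24 ($a = 1$, $r = 1/2$) with the limiting value $a/(1-r) = 2$.

a, r = 1.0, 0.5
s, term = 0.0, a
for n in range(1, 21):
    s += term          # s is now the n-th partial sum
    term *= r
    if n in (5, 10, 20):
        print(n, s, a / (1 - r) - s)   # partial sum and its distance from 2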

Next, we present a few typical examples of geometric series.

Example A.25. $\sum_{n=0}^{\infty} \frac{1}{2^n}$
In this case we have that $a = 1$ and $r = \frac{1}{2}$. Therefore, this infinite series converges and the sum is
\[
S = \frac{1}{1 - \frac{1}{2}} = 2.
\]

Example A.26. $\sum_{k=2}^{\infty} \frac{4}{3^k}$
In this example we first note that the first term occurs for $k = 2$. It sometimes helps to write out the terms of the series,
\[
\sum_{k=2}^{\infty} \frac{4}{3^k} = \frac{4}{3^2} + \frac{4}{3^3} + \frac{4}{3^4} + \frac{4}{3^5} + \ldots.
\]
Looking at the series, we see that $a = \frac{4}{9}$ and $r = \frac{1}{3}$. Since $|r| < 1$, the geometric series converges. So, the sum of the series is given by
\[
S = \frac{\frac{4}{9}}{1 - \frac{1}{3}} = \frac{2}{3}.
\]

Example A.27. $\sum_{n=1}^{\infty} \left( \frac{3}{2^n} - \frac{2}{5^n} \right)$
Finally, in this case we do not have a geometric series, but we do have the difference of two geometric series. Of course, we need to be careful whenever rearranging infinite series. In this case it is allowed.¹ Thus, we have
\[
\sum_{n=1}^{\infty} \left( \frac{3}{2^n} - \frac{2}{5^n} \right) = \sum_{n=1}^{\infty} \frac{3}{2^n} - \sum_{n=1}^{\infty} \frac{2}{5^n}.
\]
Now we can sum each geometric series to obtain
\[
\sum_{n=1}^{\infty} \left( \frac{3}{2^n} - \frac{2}{5^n} \right) = \frac{\frac{3}{2}}{1 - \frac{1}{2}} - \frac{\frac{2}{5}}{1 - \frac{1}{5}} = 3 - \frac{1}{2} = \frac{5}{2}.
\]

¹ A rearrangement of terms in an infinite series is allowed when the series is absolutely convergent. (See the Appendix.)

Geometric series are important because they are easily recognized and summed. Other series which can be summed include special cases of Taylor series and telescoping series. Next, we show an example of a telescoping series.

Example A.28. $\sum_{n=1}^{\infty} \frac{1}{n(n+1)}$
The first few terms of this series are
\[
\sum_{n=1}^{\infty} \frac{1}{n(n+1)} = \frac{1}{2} + \frac{1}{6} + \frac{1}{12} + \frac{1}{20} + \ldots.
\]
It does not appear that we can sum this infinite series. However, if we use the partial fraction expansion
\[
\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1},
\]
then we find the $k$th partial sum can be written as
\begin{align*}
s_k &= \sum_{n=1}^{k} \frac{1}{n(n+1)} = \sum_{n=1}^{k} \left( \frac{1}{n} - \frac{1}{n+1} \right) \\
&= \left( \frac{1}{1} - \frac{1}{2} \right) + \left( \frac{1}{2} - \frac{1}{3} \right) + \cdots + \left( \frac{1}{k} - \frac{1}{k+1} \right). \qquad (A.93)
\end{align*}
We see that there are many cancelations of neighboring terms, leading to the series collapsing (like a retractable telescope) to something manageable:
\[
s_k = 1 - \frac{1}{k+1}.
\]
Taking the limit as $k \to \infty$, we find $\sum_{n=1}^{\infty} \frac{1}{n(n+1)} = 1$.
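The collapse of the partial sums is easy to watch numerically. The Python sketch below (illustrative only, not part of the text) accumulates the partial sums and compares them with $1 - 1/(k+1)$.

s = 0.0
for n in range(1, 10001):
    s += 1.0 / (n * (n + 1))
    if n in (10, 100, 10000):
        print(n, s, 1 - 1 / (n + 1))   # the two values agree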

A.1.7 Power Series

Another example of an infinite series that the student has encountered in previous courses is the power series. Examples of such series are provided by Taylor and Maclaurin series.

(Actually, what are now known as Taylor and Maclaurin series were known long before they were named. James Gregory (1638-1675) has been recognized for discovering Taylor series, which were later named after Brook Taylor (1685-1731). Similarly, Colin Maclaurin (1698-1746) did not actually discover Maclaurin series, but the name was adopted because of his particular use of series.)

A power series expansion about $x = a$ with coefficient sequence $c_n$ is given by $\sum_{n=0}^{\infty} c_n (x-a)^n$. For now we will consider all constants to be real numbers with $x$ in some subset of the set of real numbers.

Consider the following expansion about $x = 0$:
\[
\sum_{n=0}^{\infty} x^n = 1 + x + x^2 + \ldots. \qquad (A.94)
\]


We would like to make sense of such expansions. For what values of $x$ will this infinite series converge? Until now we did not pay much attention to which infinite series might converge. However, this particular series is already familiar to us. It is a geometric series. Note that each term is obtained from the previous one through multiplication by $r = x$. The first term is $a = 1$. So, from Equation (A.92), we have that the sum of the series is given by
\[
\sum_{n=0}^{\infty} x^n = \frac{1}{1-x}, \qquad |x| < 1.
\]

[Figure A.6: (a) Comparison of $\frac{1}{1-x}$ (solid) to $1 + x$ (dashed) for $x \in [-0.2, 0.2]$. (b) Comparison of $\frac{1}{1-x}$ (solid) to $1 + x + x^2$ (dashed) for $x \in [-0.2, 0.2]$.]

In this case we see that the sum, when it exists, is a simple function. In fact, when $x$ is small, we can use this infinite series to provide approximations to the function $(1-x)^{-1}$. If $x$ is small enough, we can write
\[
(1-x)^{-1} \approx 1 + x.
\]
In Figure A.6a we see that for small values of $x$ these functions do agree.

[Figure A.7: Comparison of $\frac{1}{1-x}$ (solid) to $1 + x + x^2$ (dashed) and $1 + x + x^2 + x^3$ (dotted) for $x \in [-1.0, 0.7]$.]

Of course, if we want better agreement, we select more terms. In Figure A.6b we see what happens when we do so. The agreement is much better. But extending the interval, we see in Figure A.7 that keeping only quadratic terms may not be good enough. Keeping the cubic terms gives better agreement over the interval.

Finally, in Figure A.8 we show the sum of the first 21 terms over the entire interval $[-1, 1]$. Note that there are problems with the approximations near the endpoints of the interval, $x = \pm 1$.

[Figure A.8: Comparison of $\frac{1}{1-x}$ (solid) to $\sum_{n=0}^{20} x^n$ for $x \in [-1, 1]$.]

Such polynomial approximations are called Taylor polynomials. Thus, $T_3(x) = 1 + x + x^2 + x^3$ is the third order Taylor polynomial approximation of $f(x) = \frac{1}{1-x}$.

With this example we have seen how useful a series representation might be for a given function. However, the series representation was a simple geometric series, which we already knew how to sum. Is there a way to begin with a function and then find its series representation? Once we have such a representation, will the series converge to the function with which we started? For what values of $x$ will it converge? These questions can be answered by recalling the definitions of Taylor and Maclaurin series.


A Taylor series expansion of $f(x)$ about $x = a$ is the series
\[
f(x) \sim \sum_{n=0}^{\infty} c_n (x-a)^n, \qquad (A.95)
\]
where
\[
c_n = \frac{f^{(n)}(a)}{n!}. \qquad (A.96)
\]
Note that we use $\sim$ to indicate that we have yet to determine when the series may converge to the given function. A special class of series are those Taylor series for which the expansion is about $x = 0$. These are called Maclaurin series.

A Maclaurin series expansion of $f(x)$ is a Taylor series expansion of $f(x)$ about $x = 0$, or
\[
f(x) \sim \sum_{n=0}^{\infty} c_n x^n, \qquad (A.97)
\]
where
\[
c_n = \frac{f^{(n)}(0)}{n!}. \qquad (A.98)
\]

Example A.29. Expand $f(x) = e^x$ about $x = 0$.
We begin by creating a table. In order to compute the expansion coefficients, $c_n$, we will need to perform repeated differentiations of $f(x)$. So, we provide a table for these derivatives. Then, we only need to evaluate the second column at $x = 0$ and divide by $n!$.

    $n$   $f^{(n)}(x)$   $f^{(n)}(0)$   $c_n$
    0     $e^x$          $e^0 = 1$      $\frac{1}{0!} = 1$
    1     $e^x$          $e^0 = 1$      $\frac{1}{1!} = 1$
    2     $e^x$          $e^0 = 1$      $\frac{1}{2!}$
    3     $e^x$          $e^0 = 1$      $\frac{1}{3!}$

Next, we look at the last column and try to determine a pattern so that we can write down the general term of the series. If there is only a need to get a polynomial approximation, then the first few terms may be sufficient. In this case, the pattern is obvious: $c_n = \frac{1}{n!}$. So,
\[
e^x \sim \sum_{n=0}^{\infty} \frac{x^n}{n!}.
\]
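A short numerical check of this expansion (a Python sketch, not part of the text; math is from the standard library) sums the first several terms $x^n/n!$ and compares the result with math.exp.

import math

def exp_series(x, N=20):
    """Partial sum of the Maclaurin series for e**x."""
    return sum(x**n / math.factorial(n) for n in range(N + 1))

for x in (0.5, 1.0, 3.0):
    print(x, exp_series(x), math.exp(x))   # agreement to many digits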

Example A.30. Expand $f(x) = e^x$ about $x = 1$.
Here we seek an expansion of the form $e^x \sim \sum_{n=0}^{\infty} c_n (x-1)^n$. We could create a table like that in the last example. In fact, the last column would have values of the form $\frac{e}{n!}$. (You should confirm this.) However, we will make use of the Maclaurin series expansion for $e^x$ and get the result quicker. Note that $e^x = e^{x-1+1} = e\, e^{x-1}$. Now, apply the known expansion for $e^x$:
\[
e^x \sim e\left( 1 + (x-1) + \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3!} + \ldots \right) = \sum_{n=0}^{\infty} \frac{e\,(x-1)^n}{n!}.
\]

Example A.31. Expand $f(x) = \frac{1}{1-x}$ about $x = 0$.
This is the example with which we started our discussion. We can set up a table in order to find the Maclaurin series coefficients. We see from the last column of the table that we get back the geometric series (A.94).

    $n$   $f^{(n)}(x)$                $f^{(n)}(0)$   $c_n$
    0     $\frac{1}{1-x}$             $1$            $\frac{1}{0!} = 1$
    1     $\frac{1}{(1-x)^2}$         $1$            $\frac{1}{1!} = 1$
    2     $\frac{2(1)}{(1-x)^3}$      $2(1)$         $\frac{2!}{2!} = 1$
    3     $\frac{3(2)(1)}{(1-x)^4}$   $3(2)(1)$      $\frac{3!}{3!} = 1$

So, we have found
\[
\frac{1}{1-x} \sim \sum_{n=0}^{\infty} x^n.
\]

We can replace $\sim$ by equality if we can determine the range of $x$-values for which the resulting infinite series converges. We will investigate such convergence shortly.

Series expansions for many elementary functions arise in a variety of applications. Some common expansions are provided in Table A.7.

We still need to determine the values of $x$ for which a given power series converges. The first five of the expansions in Table A.7 converge for all reals, but the others converge only for $|x| < 1$ (apart from possible convergence at an endpoint, which must be checked separately).

We consider the convergence of $\sum_{n=0}^{\infty} c_n (x-a)^n$. For $x = a$ the series obviously converges. Will it converge for other points? One can prove

Theorem A.2. If $\sum_{n=0}^{\infty} c_n (b-a)^n$ converges for $b \ne a$, then $\sum_{n=0}^{\infty} c_n (x-a)^n$ converges absolutely for all $x$ satisfying $|x-a| < |b-a|$.

This leads to three possibilities:

1. $\sum_{n=0}^{\infty} c_n (x-a)^n$ may only converge at $x = a$.

2. $\sum_{n=0}^{\infty} c_n (x-a)^n$ may converge for all real numbers.

3. $\sum_{n=0}^{\infty} c_n (x-a)^n$ converges for $|x-a| < R$ and diverges for $|x-a| > R$.

The number $R$ is called the radius of convergence of the power series


Series Expansions You Should Know
\begin{align*}
e^x &= 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \frac{x^4}{4!} + \ldots = \sum_{n=0}^{\infty} \frac{x^n}{n!} \\
\cos x &= 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \ldots = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!} \\
\sin x &= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \ldots = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!} \\
\cosh x &= 1 + \frac{x^2}{2} + \frac{x^4}{4!} + \ldots = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} \\
\sinh x &= x + \frac{x^3}{3!} + \frac{x^5}{5!} + \ldots = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!} \\
\frac{1}{1-x} &= 1 + x + x^2 + x^3 + \ldots = \sum_{n=0}^{\infty} x^n \\
\frac{1}{1+x} &= 1 - x + x^2 - x^3 + \ldots = \sum_{n=0}^{\infty} (-x)^n \\
\tan^{-1} x &= x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \ldots = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{2n+1} \\
\ln(1+x) &= x - \frac{x^2}{2} + \frac{x^3}{3} - \ldots = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^n}{n}
\end{align*}
Table A.7: Common Maclaurin Series Expansions

and $(a-R, a+R)$ is called the interval of convergence. Convergence at the endpoints of this interval has to be tested for each power series.

In order to determine the interval of convergence, one needs only note that when a power series converges, it does so absolutely. So, we need only test the convergence of $\sum_{n=0}^{\infty} |c_n (x-a)^n| = \sum_{n=0}^{\infty} |c_n| |x-a|^n$. This is easily done using either the ratio test or the $n$th root test. We first identify the nonnegative terms $a_n = |c_n| |x-a|^n$, using the notation from Section ??, and then we apply one of the convergence tests.

For example, the $n$th Root Test gives the convergence condition for $a_n = |c_n| |x-a|^n$,
\[
\rho = \lim_{n \to \infty} \sqrt[n]{a_n} = \lim_{n \to \infty} \sqrt[n]{|c_n|}\; |x-a| < 1.
\]
Since $|x-a|$ is independent of $n$, we can factor it out of the limit and divide by the value of the limit to obtain
\[
|x-a| < \left( \lim_{n \to \infty} \sqrt[n]{|c_n|} \right)^{-1} \equiv R.
\]
Thus, we have found the radius of convergence, $R$. Similarly, we can apply the Ratio Test:

\[
\rho = \lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \lim_{n \to \infty} \frac{|c_{n+1}|}{|c_n|}\, |x-a| < 1.
\]
Again, we rewrite this result to determine the radius of convergence:
\[
|x-a| < \left( \lim_{n \to \infty} \frac{|c_{n+1}|}{|c_n|} \right)^{-1} \equiv R.
\]


Example A.32. Find the radius of convergence of the series $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$.
Since there is a factorial, we will use the Ratio Test.
\[
\rho = \lim_{n \to \infty} \frac{n!}{(n+1)!} |x| = \lim_{n \to \infty} \frac{1}{n+1} |x| = 0.
\]
Since $\rho = 0$, it is independent of $|x|$ and thus the series converges for all $x$. We also can say that the radius of convergence is infinite.

Example A.33. Find the radius of convergence of the series $\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n$.
In this example we will use the $n$th Root Test.
\[
\rho = \lim_{n \to \infty} \sqrt[n]{1}\, |x| = |x| < 1.
\]
Thus, we find that we have absolute convergence for $|x| < 1$. Setting $x = 1$ or $x = -1$, we find that the resulting series do not converge. So, the endpoints are not included in the complete interval of convergence.

In this example we could have also used the Ratio Test. Thus,
\[
\rho = \lim_{n \to \infty} \frac{1}{1} |x| = |x| < 1.
\]
We have obtained the same result as when we used the $n$th Root Test.

Example A.34. Find the radius of convergence of the series $\sum_{n=1}^{\infty} \frac{3^n (x-2)^n}{n}$.
In this example, we have an expansion about $x = 2$. Using the $n$th Root Test we find that
\[
\rho = \lim_{n \to \infty} \sqrt[n]{\frac{3^n}{n}}\, |x-2| = 3|x-2| < 1.
\]
Solving for $|x-2|$ in this inequality, we find $|x-2| < \frac{1}{3}$. Thus, the radius of convergence is $R = \frac{1}{3}$ and the interval of convergence is $\left( 2 - \frac{1}{3},\, 2 + \frac{1}{3} \right) = \left( \frac{5}{3}, \frac{7}{3} \right)$.

As for the endpoints, we first test the point $x = \frac{7}{3}$. The resulting series is $\sum_{n=1}^{\infty} \frac{3^n (\frac{1}{3})^n}{n} = \sum_{n=1}^{\infty} \frac{1}{n}$. This is the harmonic series, and thus it does not converge. Inserting $x = \frac{5}{3}$, we get the alternating harmonic series. This series does converge. So, we have convergence on $[\frac{5}{3}, \frac{7}{3})$. However, it is only conditionally convergent at the left endpoint, $x = \frac{5}{3}$.
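The ratio-test computation in this example can also be mimicked numerically. The Python sketch below (illustrative only, not part of the text) computes the coefficient ratios $|c_{n+1}|/|c_n|$ for $c_n = 3^n/n$; they approach 3, giving $R = 1/3$.

c = lambda n: 3**n / n          # coefficients c_n = 3^n / n
ratios = [c(n + 1) / c(n) for n in (1, 5, 10, 50)]
print(ratios)          # the ratios approach 3
print(1 / ratios[-1])  # roughly the radius of convergence, 1/3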

Example A.35. Find an expansion of $f(x) = \frac{1}{x+2}$ about $x = 1$.
Instead of explicitly computing the Taylor series expansion for this function, we can make use of an already known function. We first write $f(x)$ as a function of $x-1$, since we are expanding about $x = 1$; i.e., we are seeking a series whose terms are powers of $x-1$.

This expansion is easily done by noting that $\frac{1}{x+2} = \frac{1}{(x-1)+3}$. Factoring out a 3, we can rewrite this expression as a sum of a geometric series. Namely, we use the expansion for
\[
g(z) = \frac{1}{1+z} = 1 - z + z^2 - z^3 + \ldots \qquad (A.99)
\]
and then we rewrite $f(x)$ as
\begin{align*}
f(x) &= \frac{1}{x+2} = \frac{1}{(x-1)+3} \\
&= \frac{1}{3\left[ 1 + \frac{1}{3}(x-1) \right]} = \frac{1}{3}\, \frac{1}{1 + \frac{1}{3}(x-1)}. \qquad (A.100)
\end{align*}
Note that $f(x) = \frac{1}{3}\, g\!\left( \frac{1}{3}(x-1) \right)$ for $g(z) = \frac{1}{1+z}$. So, the expansion becomes
\[
f(x) = \frac{1}{3}\left[ 1 - \frac{1}{3}(x-1) + \left( \frac{1}{3}(x-1) \right)^2 - \left( \frac{1}{3}(x-1) \right)^3 + \ldots \right].
\]
This can further be simplified as
\[
f(x) = \frac{1}{3} - \frac{1}{9}(x-1) + \frac{1}{27}(x-1)^2 - \ldots.
\]
Convergence is easily established. The expansion for $g(z)$ converges for $|z| < 1$. So, the expansion for $f(x)$ converges for $\left| \frac{1}{3}(x-1) \right| < 1$. This implies that $|x-1| < 3$. Putting this inequality in interval notation, we have that the power series converges absolutely for $x \in (-2, 4)$. Inserting the endpoints, one can show that the series diverges for both $x = -2$ and $x = 4$. You should verify this!

Example A.36. Prove Euler's Formula: $e^{i\theta} = \cos\theta + i\sin\theta$.
(Euler's Formula is an important formula and is used throughout the text.)

As a final application, we can derive Euler's Formula,
\[
e^{i\theta} = \cos\theta + i\sin\theta,
\]
where $i = \sqrt{-1}$. We naively use the expansion for $e^x$ with $x = i\theta$. This leads us to
\[
e^{i\theta} = 1 + i\theta + \frac{(i\theta)^2}{2!} + \frac{(i\theta)^3}{3!} + \frac{(i\theta)^4}{4!} + \ldots.
\]
Next we note that each term has a power of $i$. The sequence of powers of $i$ is given as $1, i, -1, -i, 1, i, -1, -i, \ldots$. See the pattern? We conclude that
\[
i^n = i^r, \quad \text{where } r \text{ is the remainder after dividing } n \text{ by } 4.
\]
This gives
\[
e^{i\theta} = \left( 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \ldots \right) + i\left( \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \ldots \right).
\]
We recognize the expansions in the parentheses as those for the cosine and sine functions. Thus, we end with Euler's Formula.

We further derive relations from this result, which will be important for our next studies. From Euler's formula we have that for integer $n$:
\[
e^{in\theta} = \cos(n\theta) + i\sin(n\theta).
\]
We also have
\[
e^{in\theta} = \left( e^{i\theta} \right)^n = \left( \cos\theta + i\sin\theta \right)^n.
\]
Equating these two expressions, we are led to de Moivre's Formula, named after Abraham de Moivre (1667-1754),
\[
(\cos\theta + i\sin\theta)^n = \cos(n\theta) + i\sin(n\theta). \qquad (A.101)
\]
This formula is useful for deriving identities relating powers of sines or cosines to simple functions. For example, if we take $n = 2$ in Equation (A.101), we find
\[
\cos 2\theta + i\sin 2\theta = (\cos\theta + i\sin\theta)^2 = \cos^2\theta - \sin^2\theta + 2i\sin\theta\cos\theta.
\]

Looking at the real and imaginary parts of this result leads to the well-known double angle identities
\[
\cos 2\theta = \cos^2\theta - \sin^2\theta, \qquad \sin 2\theta = 2\sin\theta\cos\theta. \qquad (A.102)
\]
Replacing $\cos^2\theta = 1 - \sin^2\theta$ or $\sin^2\theta = 1 - \cos^2\theta$ leads to the half angle formulae:
\[
\cos^2\theta = \frac{1}{2}(1 + \cos 2\theta), \qquad \sin^2\theta = \frac{1}{2}(1 - \cos 2\theta).
\]
(Here we see elegant proofs of well-known trigonometric identities.)

We can also use Euler's Formula to write sines and cosines in terms of complex exponentials. We first note that, due to the fact that the cosine is an even function and the sine is an odd function, we have
\[
e^{-i\theta} = \cos\theta - i\sin\theta.
\]
Combining this with Euler's Formula, we have that
\[
\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}, \qquad \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}.
\]
We finally note that there is a simple relationship between hyperbolic functions and trigonometric functions. Recall that
\[
\cosh x = \frac{e^x + e^{-x}}{2}.
\]
If we let $x = i\theta$, then we have that $\cosh(i\theta) = \cos\theta$ and $\cos(ix) = \cosh x$. Similarly, we can show that $\sinh(i\theta) = i\sin\theta$ and $\sin(ix) = -i\sinh x$.

A.1.8 The Binomial Expansion

Another series expansion which occurs often in examples and applications is the binomial expansion. This is simply the expansion of the expression $(a+b)^p$ in powers of $a$ and $b$. We will investigate this expansion first for nonnegative integer powers $p$ and then derive the expansion for other values of $p$. (The binomial expansion is a special series expansion used to approximate expressions of the form $(a+b)^p$ for $b \ll a$, or $(1+x)^p$ for $|x| \ll 1$.)

While the binomial expansion can be obtained using Taylor series, we will provide a more intuitive derivation to show that
\[
(a+b)^n = \sum_{r=0}^{n} C^n_r\, a^{n-r} b^r, \qquad (A.103)
\]
where the $C^n_r$ are called the binomial coefficients.

Let's list some of the common expansions for nonnegative integer powers:
\begin{align*}
(a+b)^0 &= 1 \\
(a+b)^1 &= a + b \\
(a+b)^2 &= a^2 + 2ab + b^2 \\
(a+b)^3 &= a^3 + 3a^2 b + 3ab^2 + b^3 \\
(a+b)^4 &= a^4 + 4a^3 b + 6a^2 b^2 + 4ab^3 + b^4 \\
&\;\;\vdots \qquad (A.104)
\end{align*}

We now look at the patterns of the terms in the expansions. First, we note that each term consists of a product of a power of $a$ and a power of $b$. The powers of $a$ decrease from $n$ to 0 in the expansion of $(a+b)^n$, while the powers of $b$ increase from 0 to $n$. The sum of the exponents in each term is $n$. So, we can write the $(k+1)$st term in the expansion as $a^{n-k} b^k$. For example, in the expansion of $(a+b)^{51}$ the 6th term is $a^{51-5} b^5 = a^{46} b^5$. However, we do not yet know the numerical coefficients in the expansion.

Let's list the coefficients for the above expansions.

    n = 0:            1
    n = 1:           1  1
    n = 2:          1  2  1
    n = 3:         1  3  3  1
    n = 4:        1  4  6  4  1                 (A.105)

This pattern is the famous Pascal's triangle.²

² Pascal's triangle is named after Blaise Pascal (1623-1662). While such configurations of numbers were known earlier in history, Pascal published them and applied them to probability theory. Pascal's triangle has many unusual properties and a variety of uses:
• Horizontal rows add to powers of 2.
• The horizontal rows are powers of 11 (1, 11, 121, 1331, etc.).
• Adding any two successive numbers in the diagonal 1-3-6-10-15-21-28... results in a perfect square.
• When the first number to the right of the 1 in any row is a prime number, all numbers in that row are divisible by that prime number. The reader can readily check this for the n = 5 and n = 7 rows.
• Sums along certain diagonals lead to the Fibonacci sequence. These diagonals are parallel to the line connecting the first 1 for the n = 3 row and the 2 in the n = 2 row.

There are many interesting features of this triangle. But we will first ask how each row can be generated. We see that each row begins and ends with a one. The second term and next to last term have a coefficient of $n$. Next we note that consecutive pairs in each row can be added to obtain entries in the next row. For example, we have for rows $n = 2$ and $n = 3$ that $1 + 2 = 3$ and $2 + 1 = 3$:

    n = 2:          1  2  1
    n = 3:         1  3  3  1                   (A.106)

With this in mind, we can generate the next several rows of our triangle.

    n = 3:        1  3  3  1
    n = 4:       1  4  6  4  1
    n = 5:      1  5  10  10  5  1
    n = 6:     1  6  15  20  15  6  1           (A.107)


So, we use the numbers in row $n = 4$ to generate entries in row $n = 5$: $1 + 4 = 5$, $4 + 6 = 10$. We then use row $n = 5$ to get row $n = 6$, etc.

Of course, it would take a while to compute each row up to the desired $n$. Fortunately, there is a simple expression for computing a specific coefficient. Consider the $k$th term in the expansion of $(a+b)^n$. Let $r = k-1$, for $k = 1, \ldots, n+1$. Then this term is of the form $C^n_r a^{n-r} b^r$. We have seen that the coefficients satisfy
\[
C^n_r = C^{n-1}_r + C^{n-1}_{r-1}.
\]
Actually, the binomial coefficients, $C^n_r$, have been found to take a simple form,
\[
C^n_r = \frac{n!}{(n-r)!\, r!} \equiv \binom{n}{r}.
\]
This is nothing other than the combinatoric symbol for determining how to choose $n$ objects $r$ at a time. In the binomial expansions this makes sense. We have to count the number of ways that we can arrange $r$ products of $b$ with $n-r$ products of $a$. There are $n$ slots in which to place the $b$'s. For example, the $r = 2$ case for $n = 4$ involves the six products: $aabb$, $abab$, $abba$, $baab$, $baba$, and $bbaa$. Thus, it is natural to use this notation.

(Andreas Freiherr von Ettingshausen (1796-1878) was a German mathematician and physicist who in 1826 introduced the notation $\binom{n}{r}$. However, the binomial coefficients were known by the Hindus centuries beforehand.)

So, we have found that
\[
(a+b)^n = \sum_{r=0}^{n} \binom{n}{r} a^{n-r} b^r. \qquad (A.108)
\]
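The recursion $C^n_r = C^{n-1}_r + C^{n-1}_{r-1}$ and the closed form $\binom{n}{r} = \frac{n!}{(n-r)!\,r!}$ are both easy to check in a few lines of Python (an illustration only; math.comb, from the standard library, implements the closed form).

import math

# Build rows of Pascal's triangle from the recursion C(n, r) = C(n-1, r-1) + C(n-1, r).
row = [1]
for n in range(1, 7):
    row = [1] + [row[i - 1] + row[i] for i in range(1, n)] + [1]
    assert row == [math.comb(n, r) for r in range(n + 1)]
    print(n, row)   # reproduces rows n = 1, ..., 6 of the triangle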

Now consider the geometric series $1 + x + x^2 + \ldots$. We have seen that such a geometric series converges for $|x| < 1$, giving
\[
1 + x + x^2 + \ldots = \frac{1}{1-x}.
\]
But, $\frac{1}{1-x} = (1-x)^{-1}$. This is a binomial to a power, but the power is not an integer.

It turns out that the coefficients of such a binomial expansion can be written in a form similar to that in Equation (A.108). This example suggests that our sum may no longer be finite. So, for $p$ a real number, $a = 1$, and $b = x$, we generalize Equation (A.108) as
\[
(1+x)^p = \sum_{r=0}^{\infty} \binom{p}{r} x^r \qquad (A.109)
\]
and see if the resulting series makes sense. However, we quickly run into problems with the coefficients in the series.

Consider the coefficient for $r = 1$ in an expansion of $(1+x)^{-1}$. This is given by
\[
\binom{-1}{1} = \frac{(-1)!}{(-1-1)!\, 1!} = \frac{(-1)!}{(-2)!\, 1!}.
\]
But what is $(-1)!$? By definition, it is
\[
(-1)! = (-1)(-2)(-3)\cdots.
\]
This product does not seem to exist! But with a little care, we note that
\[
\frac{(-1)!}{(-2)!} = \frac{(-1)(-2)!}{(-2)!} = -1.
\]
So, we need to be careful not to interpret the combinatorial coefficient literally. There are better ways to write the general binomial expansion. We can write the general coefficient as
\begin{align*}
\binom{p}{r} &= \frac{p!}{(p-r)!\, r!} \\
&= \frac{p(p-1)\cdots(p-r+1)(p-r)!}{(p-r)!\, r!} \\
&= \frac{p(p-1)\cdots(p-r+1)}{r!}. \qquad (A.110)
\end{align*}
With this in mind we now state the theorem:

With this in mind we now state the theorem:

General Binomial Expansion

The general binomial expansion for (1 + x)p is a simple gener-alization of Equation (A.108). For p real, we have the followingbinomial series:

(1 + x)p =∞

∑r=0

p(p− 1) · · · (p− r + 1)r!

xr, |x| < 1. (A.111)

Often in physics we only need the first few terms for the case that x 1 :

(1 + x)p = 1 + px +p(p− 1)

2x2 + O(x3). (A.112)

Example A.37. Approximate $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$ for $v \ll c$.

(The factor $\gamma = \left( 1 - \frac{v^2}{c^2} \right)^{-1/2}$ is important in special relativity. Namely, this is the factor relating differences in time and length measurements by observers in relatively moving inertial frames. For terrestrial speeds, this gives an appropriate approximation.)

For $v \ll c$ the first approximation is found by inserting $v/c = 0$. Thus, one obtains $\gamma = 1$. This is the Newtonian approximation and does not provide enough of an approximation for terrestrial speeds. Thus, we need to expand $\gamma$ in powers of $v/c$.

First, we rewrite $\gamma$ as
\[
\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}} = \left[ 1 - \left( \frac{v}{c} \right)^2 \right]^{-1/2}.
\]
Using the binomial expansion for $p = -1/2$, we have
\[
\gamma \approx 1 + \left( -\frac{1}{2} \right)\left( -\frac{v^2}{c^2} \right) = 1 + \frac{v^2}{2c^2}.
\]
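A small numerical comparison (a Python sketch, not part of the text; the sample speeds are chosen for illustration) shows how well $1 + v^2/(2c^2)$ tracks the exact $\gamma$ for modest speeds.

import math

c = 3.00e8            # speed of light in m/s
for v in (225.0, 3.0e4, 3.0e7):   # airliner, Earth's orbital speed, 0.1 c
    exact = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    approx = 1.0 + v**2 / (2 * c**2)
    print(v, exact, approx, exact - approx)
# For small v/c the two agree to many digits; the difference grows as v nears c.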

Example A.38. Time Dilation Example
The average speed of a large commercial jet airliner is about 500 mph. If you flew for an hour (measured from the ground), then how much younger would you be than if you had not taken the flight, assuming these reference frames obeyed the postulates of special relativity?

This is the problem of time dilation. Let $\Delta t$ be the elapsed time in a stationary reference frame on the ground and $\Delta\tau$ be that in the frame of the moving plane. Then from the Theory of Special Relativity these are related by
\[
\Delta t = \gamma \Delta\tau.
\]
The time difference would then be
\begin{align*}
\Delta t - \Delta\tau &= (1 - \gamma^{-1}) \Delta t \\
&= \left( 1 - \sqrt{1 - \frac{v^2}{c^2}} \right) \Delta t. \qquad (A.113)
\end{align*}
The plane speed, 500 mph, is roughly 225 m/s and $c = 3.00 \times 10^8$ m/s. Since $v \ll c$, we would need to use the binomial approximation to get a nonzero result.
\begin{align*}
\Delta t - \Delta\tau &= \left( 1 - \sqrt{1 - \frac{v^2}{c^2}} \right) \Delta t \\
&= \left( 1 - \left( 1 - \frac{v^2}{2c^2} + \ldots \right) \right) \Delta t \\
&\approx \frac{v^2}{2c^2} \Delta t \\
&= \frac{(225)^2}{2(3.00 \times 10^8)^2}\, (1\ \text{h}) = 1.01\ \text{ns}. \qquad (A.114)
\end{align*}
Thus, you have aged one nanosecond less than if you did not take the flight.
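The arithmetic in (A.114) is reproduced by the short Python sketch below (illustrative only, not part of the text).

v = 225.0          # plane speed in m/s (roughly 500 mph)
c = 3.00e8         # speed of light in m/s
dt = 3600.0        # one hour, in seconds
delta = v**2 / (2 * c**2) * dt
print(delta * 1e9, "ns")   # approximately 1.01 ns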

Example A.39. Small differences in large numbers: Compute $f(R, h) = \sqrt{R^2 + h^2} - R$ for $R = 6378.164$ km and $h = 1.0$ m.
Inserting these values into a scientific calculator, one finds that
\[
f(6378164, 1) = \sqrt{6378164^2 + 1} - 6378164 = 1 \times 10^{-7}\ \text{m}.
\]
In some calculators one might obtain 0; in other calculators, or computer algebra systems like Maple, one might obtain other answers. What answer do you get and how accurate is your answer?

The problem with this computation is that $R \gg h$. Therefore, the computation of $f(R, h)$ depends on how many digits the computing device can handle. The best way to get an answer is to use the binomial approximation. Writing $h = Rx$, or $x = \frac{h}{R}$, we have
\begin{align*}
f(R, h) &= \sqrt{R^2 + h^2} - R \\
&= R\sqrt{1 + x^2} - R \\
&\simeq R\left[ 1 + \frac{1}{2} x^2 \right] - R \\
&= \frac{1}{2} R x^2 \\
&= \frac{1}{2} \frac{h^2}{R} = 7.83926 \times 10^{-8}\ \text{m}. \qquad (A.115)
\end{align*}
Of course, you should verify how many digits should be kept in reporting the result.
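One can see the loss of significance directly in floating point arithmetic. The Python sketch below (illustrative only, not part of the text) compares the direct double-precision evaluation of $\sqrt{R^2+h^2} - R$ with the binomial approximation $h^2/(2R)$.

import math

R = 6378164.0   # meters
h = 1.0         # meter
direct = math.sqrt(R**2 + h**2) - R     # suffers from cancellation of nearly equal large numbers
approx = h**2 / (2 * R)                 # binomial approximation
print(direct, approx)   # both near 7.84e-8, but the direct value has fewer reliable digits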


In the next examples, we generalize this example. Such general computations appear in proofs involving general expansions without specific numerical values given.

Example A.40. Obtain an approximation to $(a+b)^p$ when $a$ is much larger than $b$, denoted by $a \gg b$.
If we neglect $b$, then $(a+b)^p \simeq a^p$. How good of an approximation is this? This is where it would be nice to know the order of the next term in the expansion. Namely, what is the power of $b/a$ of the first neglected term in this expansion?

In order to do this we first divide out $a$ as
\[
(a+b)^p = a^p \left( 1 + \frac{b}{a} \right)^p.
\]
Now we have a small parameter, $\frac{b}{a}$. According to what we have seen earlier, we can use the binomial expansion to write
\[
\left( 1 + \frac{b}{a} \right)^p = \sum_{r=0}^{\infty} \binom{p}{r} \left( \frac{b}{a} \right)^r. \qquad (A.116)
\]
Thus, we have a sum of terms involving powers of $\frac{b}{a}$. Since $a \gg b$, most of these terms can be neglected. So, we can write
\[
\left( 1 + \frac{b}{a} \right)^p = 1 + p\frac{b}{a} + O\!\left( \left( \frac{b}{a} \right)^2 \right).
\]
Here we used $O()$, big-Oh notation, to indicate the size of the first neglected term. Summarizing, we have
\begin{align*}
(a+b)^p &= a^p \left( 1 + \frac{b}{a} \right)^p \\
&= a^p \left( 1 + p\frac{b}{a} + O\!\left( \left( \frac{b}{a} \right)^2 \right) \right) \\
&= a^p + p a^p \frac{b}{a} + a^p\, O\!\left( \left( \frac{b}{a} \right)^2 \right). \qquad (A.117)
\end{align*}
Therefore, we can approximate $(a+b)^p \simeq a^p + p b a^{p-1}$, with an error on the order of $b^2 a^{p-2}$. Note that the order of the error does not include the constant factor from the expansion. We could also use the approximation $(a+b)^p \simeq a^p$, but it is typically not good enough in applications because the error in this case is of the order $b a^{p-1}$.

Example A.41. Approximate $f(x) = (a+x)^p - a^p$ for $x \ll a$.
In an earlier example we computed $f(R, h) = \sqrt{R^2 + h^2} - R$ for $R = 6378.164$ km and $h = 1.0$ m. We can make use of the binomial expansion to determine the behavior of similar functions in the form $f(x) = (a+x)^p - a^p$. Inserting the binomial expression into $f(x)$, we have, as $\frac{x}{a} \to 0$,
\begin{align*}
f(x) &= (a+x)^p - a^p \\
&= a^p \left[ \left( 1 + \frac{x}{a} \right)^p - 1 \right] \\
&= a^p \left[ p\frac{x}{a} + O\!\left( \left( \frac{x}{a} \right)^2 \right) \right] \\
&= O\!\left( \frac{x}{a} \right) \quad \text{as } \frac{x}{a} \to 0. \qquad (A.118)
\end{align*}
This result might not be the approximation that we desire. So, we could back up one step in the derivation to write a better approximation as
\[
(a+x)^p - a^p = a^{p-1} p x + O\!\left( \left( \frac{x}{a} \right)^2 \right) \quad \text{as } \frac{x}{a} \to 0.
\]
We now use this approximation to compute $f(R, h) = \sqrt{R^2 + h^2} - R$ for $R = 6378.164$ km and $h = 1.0$ m from the earlier example. We let $a = R^2$, $x = 1$, and $p = \frac{1}{2}$. Then, the terms neglected in this approximation are smaller than the leading term by a factor of order
\[
\frac{x}{a} = \frac{1}{6378164^2} \sim 2.4 \times 10^{-14}.
\]
Thus, we have
\[
\sqrt{6378164^2 + 1} - 6378164 \approx a^{p-1} p x,
\]
where
\[
a^{p-1} p x = (6378164^2)^{-1/2} (0.5)(1) = 7.83926 \times 10^{-8}.
\]
This is the same result we had obtained before. However, we now also have an estimate of the size of the error, and this might be useful in indicating how many digits we should trust in the answer.

Problems

1. Prove the following identities using only the definitions of the trigonometric functions, the Pythagorean identity, or the identities for sines and cosines of sums of angles.

a. $\cos 2x = 2\cos^2 x - 1$.

b. $\sin 3x = A\sin^3 x + B\sin x$, for what values of $A$ and $B$?

c. $\sec\theta + \tan\theta = \tan\left( \dfrac{\theta}{2} + \dfrac{\pi}{4} \right)$.

2. Determine the exact values of

a. $\sin\dfrac{\pi}{8}$.

b. $\tan 15^{\circ}$.

c. $\cos 105^{\circ}$.

3. Denest the following if possible.

a. $\sqrt{3 - 2\sqrt{2}}$.

b. $\sqrt{1 + \sqrt{2}}$.

c. $\sqrt{5 + 2\sqrt{6}}$.

d. $\sqrt[3]{\sqrt{5} + 2} - \sqrt[3]{\sqrt{5} - 2}$.

e. Find the roots of $x^2 + 6x - 4\sqrt{5} = 0$ in simplified form.

4. Determine the exact values of

a. $\sin\left( \cos^{-1}\frac{3}{5} \right)$.

b. $\tan\left( \sin^{-1}\frac{x}{7} \right)$.

c. $\sin^{-1}\left( \sin\frac{3\pi}{2} \right)$.

5. Do the following.

a. Write $(\cosh x - \sinh x)^6$ in terms of exponentials.

b. Prove $\cosh(x-y) = \cosh x \cosh y - \sinh x \sinh y$ using the exponential forms of the hyperbolic functions.

c. Prove $\cosh 2x = \cosh^2 x + \sinh^2 x$.

d. If $\cosh x = \frac{13}{12}$ and $x < 0$, find $\sinh x$ and $\tanh x$.

e. Find the exact value of $\sinh(\operatorname{arccosh} 3)$.

6. Prove that the inverse hyperbolic functions are the following logarithms:

a. $\cosh^{-1} x = \ln\left( x + \sqrt{x^2-1} \right)$.

b. $\tanh^{-1} x = \dfrac{1}{2}\ln\dfrac{1+x}{1-x}$.

7. Write the following in terms of logarithms:

a. $\cosh^{-1}\frac{4}{3}$.

b. $\tanh^{-1}\frac{1}{2}$.

c. $\sinh^{-1} 2$.

8. Solve the following equations for $x$.

a. $\cosh(x + \ln 3) = 3$.

b. $2\tanh^{-1}\dfrac{x-2}{x-1} = \ln 2$.

c. $\sinh^2 x - 7\cosh x + 13 = 0$.

9. Compute the following integrals.

a. $\int x e^{2x^2}\, dx$.

b. $\int_0^3 \dfrac{5x}{\sqrt{x^2+16}}\, dx$.

c. $\int x^3 \sin 3x\, dx$. (Do this using integration by parts, the Tabular Method, and differentiation under the integral sign.)

d. $\int \cos^4 3x\, dx$.

e. $\int_0^{\pi/4} \sec^3 x\, dx$.

f. $\int e^x \sinh x\, dx$.

g. $\int \sqrt{9 - x^2}\, dx$.

h. $\int \dfrac{dx}{(4-x^2)^2}$, using the substitution $x = 2\tanh u$.

i. $\int_0^4 \dfrac{dx}{\sqrt{9+x^2}}$, using a hyperbolic function substitution.

j. $\int \dfrac{dx}{1-x^2}$, using the substitution $x = \tanh u$.

k. $\int \dfrac{dx}{(x^2+4)^{3/2}}$, using the substitutions $x = 2\tan\theta$ and $x = 2\sinh u$.

l. $\int \dfrac{dx}{\sqrt{3x^2 - 6x + 4}}$.

10. Find the sum for each of the series:

a. $5 + \dfrac{25}{7} + \dfrac{125}{49} + \dfrac{625}{343} + \cdots$.

b. $\sum_{n=0}^{\infty} \dfrac{(-1)^n 3}{4^n}$.

c. $\sum_{n=2}^{\infty} \dfrac{2}{5^n}$.

d. $\sum_{n=-1}^{\infty} (-1)^{n+1} \left( \dfrac{e}{\pi} \right)^n$.

e. $\sum_{n=0}^{\infty} \left( \dfrac{5}{2^n} + \dfrac{1}{3^n} \right)$.

f. $\sum_{n=1}^{\infty} \dfrac{3}{n(n+3)}$.

g. What is 0.569?

11. A superball is dropped from a 2.00 m height. After it rebounds, it reaches a new height of 1.65 m. Assuming a constant coefficient of restitution, find the (ideal) total distance the ball will travel as it keeps bouncing.

12. Here are some telescoping series problems.

a. Verify that
\[
\sum_{n=1}^{\infty} \frac{1}{(n+2)(n+1)} = \sum_{n=1}^{\infty} \left( \frac{n+1}{n+2} - \frac{n}{n+1} \right).
\]

b. Find the $n$th partial sum of the series $\sum_{n=1}^{\infty} \left( \dfrac{n+1}{n+2} - \dfrac{n}{n+1} \right)$ and use it to determine the sum of the resulting telescoping series.

c. Sum the series $\sum_{n=1}^{\infty} \left[ \tan^{-1} n - \tan^{-1}(n+1) \right]$ by first writing the $N$th partial sum and then computing $\lim_{N \to \infty} s_N$.

13. Determine the radius and interval of convergence of the following infinite series:

a. $\sum_{n=1}^{\infty} (-1)^n \dfrac{(x-1)^n}{n}$.

b. $\sum_{n=1}^{\infty} \dfrac{x^n}{2^n n!}$.

c. $\sum_{n=1}^{\infty} \dfrac{1}{n} \left( \dfrac{x}{5} \right)^n$.

d. $\sum_{n=1}^{\infty} (-1)^n \dfrac{x^n}{\sqrt{n}}$.

14. Find the Taylor series centered at $x = a$ and its corresponding radius of convergence for the given function. In most cases, you need not employ the direct method of computation of the Taylor coefficients.

a. $f(x) = \sinh x$, $a = 0$.

b. $f(x) = \sqrt{1+x}$, $a = 0$.

c. $f(x) = \ln\dfrac{1+x}{1-x}$, $a = 0$.

d. $f(x) = x e^x$, $a = 1$.

e. $f(x) = \dfrac{1}{\sqrt{x}}$, $a = 1$.

f. $f(x) = x^4 + x - 2$, $a = 2$.

g. $f(x) = \dfrac{x-1}{2+x}$, $a = 1$.

15. Consider Gregory's expansion
\[
\tan^{-1} x = x - \frac{x^3}{3} + \frac{x^5}{5} - \cdots = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1} x^{2k+1}.
\]

a. Derive Gregory's expansion by using the definition
\[
\tan^{-1} x = \int_0^x \frac{dt}{1+t^2},
\]
expanding the integrand in a Maclaurin series, and integrating the resulting series term by term.

b. From this result, derive Gregory's series for $\pi$ by inserting an appropriate value for $x$ in the series expansion for $\tan^{-1} x$.

16. In the event that a series converges uniformly, one can consider the derivative of the series to arrive at the summation of other infinite series.

a. Differentiate the series representation for $f(x) = \dfrac{1}{1-x}$ to sum the series $\sum_{n=1}^{\infty} n x^n$, $|x| < 1$.

b. Use the result from part a to sum the series $\sum_{n=1}^{\infty} \dfrac{n}{5^n}$.

c. Sum the series $\sum_{n=2}^{\infty} n(n-1) x^n$, $|x| < 1$.

d. Use the result from part c to sum the series $\sum_{n=2}^{\infty} \dfrac{n^2 - n}{5^n}$.

e. Use the results from this problem to sum the series $\sum_{n=4}^{\infty} \dfrac{n^2}{5^n}$.

17. Evaluate the integral $\int_0^{\pi/6} \sin^2 x\, dx$ by doing the following:

a. Compute the integral exactly.

b. Integrate the first three terms of the Maclaurin series expansion of the integrand and compare with the exact result.

18. Determine the next term in the time dilation example, A.38. That is, find the $\frac{v^4}{c^4}$ term and determine a better approximation to the time difference of 1 ns.

19. Evaluate the following expressions at the given point. Use your calculator or your computer (such as Maple). Then use series expansions to find an approximation to the value of the expression to as many places as you trust.

a. $\dfrac{1}{\sqrt{1+x^3}} - \cos x^2$ at $x = 0.015$.

b. $\ln\sqrt{\dfrac{1+x}{1-x}} - \tan x$ at $x = 0.0015$.

c. $f(x) = \dfrac{1}{\sqrt{1+2x^2}} - 1 + x^2$ at $x = 5.00 \times 10^{-3}$.

d. $f(R, h) = R - \sqrt{R^2 + h^2}$ for $R = 1.374 \times 10^3$ km and $h = 1.00$ m.

e. $f(x) = 1 - \dfrac{1}{\sqrt{1-x}}$ for $x = 2.5 \times 10^{-13}$.


B Ordinary Differential Equations Review

"The profound study of nature is the most fertile source of mathematical discoveries." - Joseph Fourier (1768-1830)

B.1 First Order Differential Equations

Before moving on, we first define an $n$th order ordinary differential equation. It is an equation for an unknown function $y(x)$ that expresses a relationship between the unknown function and its first $n$ derivatives. One could write this generally as
\[
F(y^{(n)}(x), y^{(n-1)}(x), \ldots, y'(x), y(x), x) = 0. \qquad (B.1)
\]
Here $y^{(n)}(x)$ represents the $n$th derivative of $y(x)$.

An initial value problem consists of the differential equation plus the values of the first $n-1$ derivatives at a particular value of the independent variable, say $x_0$:
\[
y^{(n-1)}(x_0) = y_{n-1}, \quad y^{(n-2)}(x_0) = y_{n-2}, \quad \ldots, \quad y(x_0) = y_0. \qquad (B.2)
\]
A linear $n$th order differential equation takes the form
\[
a_n(x) y^{(n)}(x) + a_{n-1}(x) y^{(n-1)}(x) + \ldots + a_1(x) y'(x) + a_0(x) y(x) = f(x). \qquad (B.3)
\]
If $f(x) \equiv 0$, then the equation is said to be homogeneous; otherwise it is called nonhomogeneous.

Typically, the first differential equations encountered are first order equations. A first order differential equation takes the form
\[
F(y', y, x) = 0. \qquad (B.4)
\]
There are two common first order differential equations for which one can formally obtain a solution: the separable equation and the linear first order equation. We indicate that we can formally obtain solutions, as one can display the needed integration that leads to a solution. However, the resulting integrals are not always reducible to elementary functions, nor does one obtain explicit solutions when the integrals are doable.


B.1.1 Separable Equations

A first order equation is separable if it can be written in the form
\[
\frac{dy}{dx} = f(x) g(y). \qquad (B.5)
\]
Special cases result when either $f(x) = 1$ or $g(y) = 1$. In the first case the equation is said to be autonomous.

The general solution to equation (B.5) is obtained in terms of two integrals:
\[
\int \frac{dy}{g(y)} = \int f(x)\, dx + C, \qquad (B.6)
\]
where $C$ is an integration constant. This yields a 1-parameter family of solutions to the differential equation corresponding to different values of $C$. If one can solve (B.6) for $y(x)$, then one obtains an explicit solution. Otherwise, one has a family of implicit solutions. If an initial condition is given as well, then one might be able to find a member of the family that satisfies this condition, which is often called a particular solution.

[Figure B.1: Plots of solutions from the 1-parameter family of solutions of Example B.1 for several initial conditions.]

Example B.1. $y' = 2xy$, $y(0) = 2$.
Applying (B.6), one has
\[
\int \frac{dy}{y} = \int 2x\, dx + C.
\]
Integrating yields
\[
\ln|y| = x^2 + C.
\]
Exponentiating, one obtains the general solution,
\[
y(x) = \pm e^{x^2 + C} = A e^{x^2}.
\]
Here we have defined $A = \pm e^C$. Since $C$ is an arbitrary constant, $A$ is an arbitrary constant. Several solutions in this 1-parameter family are shown in Figure B.1.

Next, one seeks a particular solution satisfying the initial condition. For $y(0) = 2$, one finds that $A = 2$. So, the particular solution satisfying the initial condition is $y(x) = 2e^{x^2}$.
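A numerical integration of this initial value problem can be compared against the particular solution $2e^{x^2}$. The Python sketch below is only an illustration (it assumes the standard scipy and numpy libraries and an arbitrary integration interval $[0,1]$).

import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda x, y: 2 * x * y, (0.0, 1.0), [2.0],
                t_eval=np.linspace(0.0, 1.0, 5), rtol=1e-8, atol=1e-10)
exact = 2 * np.exp(sol.t ** 2)
print(sol.y[0])   # numerical solution at the sample points
print(exact)      # particular solution 2*exp(x**2); the two agree closely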

[Figure B.2: Plots of solutions of Example B.2 for several initial conditions.]

Example B.2. $y y' = -x$.
Following the same procedure as in the last example, one obtains
\[
\int y\, dy = -\int x\, dx + C \quad \Rightarrow \quad y^2 = -x^2 + A, \quad \text{where } A = 2C.
\]
Thus, we obtain an implicit solution. Writing the solution as $x^2 + y^2 = A$, we see that this is a family of circles for $A > 0$ and the origin for $A = 0$. Plots of some solutions in this family are shown in Figure B.2.


B.1.2 Linear First Order Equations

The second type of first order equation encountered is the linear first order differential equation in the standard form
\[
y'(x) + p(x) y(x) = q(x). \qquad (B.7)
\]
In this case one seeks an integrating factor, $\mu(x)$, which is a function that one can multiply through the equation making the left side a perfect derivative. One thus obtains
\[
\frac{d}{dx}\left[ \mu(x) y(x) \right] = \mu(x) q(x). \qquad (B.8)
\]
The integrating factor that works is $\mu(x) = \exp\left( \int^x p(\xi)\, d\xi \right)$. One can derive $\mu(x)$ by expanding the derivative in Equation (B.8),
\[
\mu(x) y'(x) + \mu'(x) y(x) = \mu(x) q(x), \qquad (B.9)
\]
and comparing this equation to the one obtained from multiplying (B.7) by $\mu(x)$:
\[
\mu(x) y'(x) + \mu(x) p(x) y(x) = \mu(x) q(x). \qquad (B.10)
\]
Note that these last two equations would be the same if the second terms were the same. Thus, we will require that
\[
\frac{d\mu(x)}{dx} = \mu(x) p(x).
\]
This is a separable first order equation for $\mu(x)$ whose solution is the integrating factor:
\[
\mu(x) = \exp\left( \int^x p(\xi)\, d\xi \right). \qquad (B.11)
\]
Equation (B.8) is now easily integrated to obtain the general solution to the linear first order differential equation:
\[
y(x) = \frac{1}{\mu(x)} \left[ \int^x \mu(\xi) q(\xi)\, d\xi + C \right]. \qquad (B.12)
\]

Example B.3. $x y' + y = x$, $x > 0$, $y(1) = 0$.
One first notes that this is a linear first order differential equation. Solving for $y'$, one can see that the equation is not separable. Furthermore, it is not in the standard form (B.7). So, we first rewrite the equation as
\[
\frac{dy}{dx} + \frac{1}{x} y = 1. \qquad (B.13)
\]
Noting that $p(x) = \frac{1}{x}$, we determine the integrating factor
\[
\mu(x) = \exp\left[ \int^x \frac{d\xi}{\xi} \right] = e^{\ln x} = x.
\]
Multiplying equation (B.13) by $\mu(x) = x$, we actually get back the original equation! In this case we have found that $x y' + y$ must have been the derivative of something to start with. In fact, $(xy)' = x y' + y$. Therefore, the differential equation becomes
\[
(xy)' = x.
\]
Integrating, one obtains
\[
xy = \frac{1}{2} x^2 + C,
\]
or
\[
y(x) = \frac{1}{2} x + \frac{C}{x}.
\]
Inserting the initial condition into this solution, we have $0 = \frac{1}{2} + C$. Therefore, $C = -\frac{1}{2}$. Thus, the solution of the initial value problem is
\[
y(x) = \frac{1}{2}\left( x - \frac{1}{x} \right).
\]
We can verify that this is the solution. Since $y' = \frac{1}{2} + \frac{1}{2x^2}$, we have
\[
x y' + y = \frac{1}{2} x + \frac{1}{2x} + \frac{1}{2}\left( x - \frac{1}{x} \right) = x.
\]
Also, $y(1) = \frac{1}{2}(1 - 1) = 0$.
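As with the separable example, one can cross-check this analytically obtained solution against a numerical integration. The Python sketch below is illustrative only (it assumes the standard scipy and numpy libraries and an arbitrary interval $[1,3]$); it integrates $y' = 1 - y/x$ from $x = 1$ with $y(1) = 0$ and compares with $\frac{1}{2}(x - 1/x)$.

import numpy as np
from scipy.integrate import solve_ivp

rhs = lambda x, y: 1.0 - y / x          # y' = 1 - y/x, the standard form of x y' + y = x
sol = solve_ivp(rhs, (1.0, 3.0), [0.0], t_eval=np.linspace(1.0, 3.0, 5), rtol=1e-9)
exact = 0.5 * (sol.t - 1.0 / sol.t)
print(sol.y[0])
print(exact)      # the two rows agree to high accuracy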

Example B.4. $(\sin x) y' + (\cos x) y = x^2$.
Actually, this problem is easy if you realize that the left hand side is a perfect derivative. Namely,
\[
\frac{d}{dx}\left( (\sin x) y \right) = (\sin x) y' + (\cos x) y.
\]
But, we will go through the process of finding the integrating factor for practice.

First, we rewrite the original differential equation in standard form. We divide the equation by $\sin x$ to obtain
\[
y' + (\cot x) y = x^2 \csc x.
\]
Then, we compute the integrating factor as
\[
\mu(x) = \exp\left( \int^x \cot\xi\, d\xi \right) = e^{\ln(\sin x)} = \sin x.
\]
Using the integrating factor, the standard form equation becomes
\[
\frac{d}{dx}\left( (\sin x) y \right) = x^2.
\]
Integrating, we have
\[
y \sin x = \frac{1}{3} x^3 + C.
\]
So, the solution is
\[
y(x) = \left( \frac{1}{3} x^3 + C \right) \csc x.
\]


B.2 Second Order Linear Differential Equations

Second order differential equations are typically harder than first order. In most cases students are only exposed to second order linear differential equations. A general form for a second order linear differential equation is given by

a(x)y′′(x) + b(x)y′(x) + c(x)y(x) = f (x). (B.14)

One can rewrite this equation using operator terminology. Namely, one first defines the differential operator L = a(x)D^2 + b(x)D + c(x), where D = d/dx. Then equation (B.14) becomes

Ly = f . (B.15)

The solutions of linear differential equations are found by making use of the linearity of L. Namely, we consider the vector space¹ consisting of real-valued functions over some domain. (¹ We assume that the reader has been introduced to concepts in linear algebra. Later in the text we will recall the definition of a vector space and see that linear algebra is in the background of the study of many concepts in the solution of differential equations.) Let f and g be vectors in this function space. L is a linear operator if for two vectors f and g and scalar a, we have that

a. L( f + g) = L f + Lg

b. L(a f ) = aL f .

One typically solves (B.14) by finding the general solution of the homogeneous problem,

Lyh = 0,

and a particular solution of the nonhomogeneous problem,

Lyp = f .

Then, the general solution of (B.14) is simply given as y = yh + yp. This is true because of the linearity of L. Namely,

Ly = L(yh + yp)
   = Lyh + Lyp
   = 0 + f = f . (B.16)

There are several methods for finding a particular solution of a nonhomogeneous differential equation, ranging from educated guessing and the Method of Undetermined Coefficients to the Method of Variation of Parameters and Green’s functions. We will review these methods later in the chapter.

Determining solutions to the homogeneous problem, Lyh = 0, is not always easy. However, many now famous mathematicians and physicists have studied a variety of second order linear equations and they have saved us the trouble of finding solutions to the differential equations that often appear in applications. We will encounter many of these in the following


chapters. We will begin with some simple homogeneous linear differential equations.

Linearity is also useful in producing the general solution of a homogeneous linear differential equation. If y1 and y2 are solutions of the homogeneous equation, then the linear combination y = c1y1 + c2y2 is also a solution of the homogeneous equation. In fact, if y1 and y2 are linearly independent,² then y = c1y1 + c2y2 is the general solution of the homogeneous problem.

² A set of functions {yi(x)}_{i=1}^n is a linearly independent set if and only if c1y1(x) + · · · + cnyn(x) = 0 implies ci = 0 for i = 1, . . . , n. For n = 2, consider c1y1(x) + c2y2(x) = 0. If y1 and y2 are linearly dependent, then the coefficients are not zero and y2(x) = −(c1/c2)y1(x) is a multiple of y1(x).

Linear independence can also be established by looking at the Wronskian of the solutions. For a second order differential equation the Wronskian is defined as

W(y1, y2) = y1(x)y′2(x)− y′1(x)y2(x). (B.17)

The solutions are linearly independent if the Wronskian is not zero.
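As a quick aside (our addition, not part of the original notes), the Wronskian test is easy to carry out symbolically; the short Python/SymPy sketch below checks the pair e^{r1 x}, e^{r2 x} used in the next subsection. The helper name wronskian2 is ours.

    from sympy import symbols, exp, simplify

    x, r1, r2 = symbols('x r1 r2')

    def wronskian2(y1, y2):
        # W(y1, y2) = y1 y2' - y1' y2, Eq. (B.17)
        return simplify(y1 * y2.diff(x) - y1.diff(x) * y2)

    print(wronskian2(exp(r1*x), exp(r2*x)))   # proportional to (r2 - r1); nonzero when r1 != r2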

B.2.1 Constant Coefficient Equations

The simplest second order differential equations are those with constant coefficients. The general form for a homogeneous constant coefficient second order linear differential equation is given as

ay′′(x) + by′(x) + cy(x) = 0, (B.18)

where a, b, and c are constants. Solutions to (B.18) are obtained by making a guess of y(x) = e^{rx}. Inserting this guess into (B.18) leads to the characteristic equation

ar^2 + br + c = 0. (B.19)

Namely, we compute the derivatives of y(x) = e^{rx}, giving y′(x) = r e^{rx} and y′′(x) = r^2 e^{rx}. Inserting these into (B.18), we have

0 = ay′′(x) + by′(x) + cy(x) = (ar^2 + br + c)e^{rx}.

Since the exponential is never zero, we find that ar^2 + br + c = 0. The roots of this equation, r1 and r2, in turn lead to three types of solutions, depending upon the nature of the roots. In general, we have two linearly independent solutions, y1(x) = e^{r1 x} and y2(x) = e^{r2 x}, and the general solution is given by a linear combination of these solutions,

y(x) = c1 e^{r1 x} + c2 e^{r2 x}.

For two real distinct roots, we are done. However, when the roots are repeated or complex, we need to do a little more work to obtain usable solutions.

Example B.5. y′′ − y′ − 6y = 0, y(0) = 2, y′(0) = 0.
The characteristic equation for this problem is r^2 − r − 6 = 0. The roots of this equation are found to be r = −2, 3. Therefore, the general solution can be quickly written down:

y(x) = c1 e^{−2x} + c2 e^{3x}.


Note that there are two arbitrary constants in the general solution. Therefore, one needs two pieces of information to find a particular solution. Of course, we have the needed information in the form of the initial conditions.

One also needs to evaluate the first derivative

y′(x) = −2c1 e^{−2x} + 3c2 e^{3x}

in order to attempt to satisfy the initial conditions. Evaluating y and y′ at x = 0 yields

2 = c1 + c2
0 = −2c1 + 3c2 (B.20)

These two equations in two unknowns can readily be solved to give c1 = 6/5 and c2 = 4/5. Therefore, the solution of the initial value problem is obtained as y(x) = \frac{6}{5} e^{−2x} + \frac{4}{5} e^{3x}.
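A hedged cross-check of Example B.5 (ours, not the author's code): SymPy's dsolve can confirm both the general solution and the initial value problem.

    from sympy import Function, dsolve, Eq, symbols

    x = symbols('x')
    y = Function('y')
    ode = Eq(y(x).diff(x, 2) - y(x).diff(x) - 6*y(x), 0)

    print(dsolve(ode, y(x)))   # general solution with two constants
    print(dsolve(ode, y(x), ics={y(0): 2, y(x).diff(x).subs(x, 0): 0}))
    # the second line should agree with y(x) = 6/5 e^(-2x) + 4/5 e^(3x)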

Repeated roots, r1 = r2 = r, give solutions of the form

y(x) = (c1 + c2 x)e^{rx}.

In the case when there is a repeated real root, one has only one solution, y1(x) = e^{rx}. The question is how does one obtain the second linearly independent solution? Since the solutions should be independent, we must have that the ratio y2(x)/y1(x) is not a constant. So, we guess the form y2(x) = v(x)y1(x) = v(x)e^{rx}. (This process is called the Method of Reduction of Order.)

For constant coefficient second order equations, we can write the equation as

(D − r)^2 y = 0,

where D = d/dx. We now insert y2(x) = v(x)e^{rx} into this equation. First we compute

(D − r)v e^{rx} = v′ e^{rx}.

Then,

0 = (D − r)^2 v e^{rx} = (D − r)v′ e^{rx} = v′′ e^{rx}.

So, if y2(x) is to be a solution to the differential equation, then v′′(x)e^{rx} = 0 for all x. So, v′′(x) = 0, which implies that

v(x) = ax + b.

So,

y2(x) = (ax + b)e^{rx}.

Without loss of generality, we can take b = 0 and a = 1 to obtain the second linearly independent solution, y2(x) = x e^{rx}. The general solution is then

y(x) = c1 e^{rx} + c2 x e^{rx}.

Example B.6. y′′ + 6y′ + 9y = 0.
In this example we have r^2 + 6r + 9 = 0. There is only one root, r = −3. From the above discussion, we easily find the solution y(x) = (c1 + c2 x)e^{−3x}.


When one has complex roots in the solution of constant coefficient equations, one needs to look at the solutions

y_{1,2}(x) = e^{(α ± iβ)x}.

We make use of Euler’s formula (see Chapter 6 for more on complex variables),

e^{iβx} = cos βx + i sin βx. (B.21)

Then, the linear combination of y1(x) and y2(x) becomes

Ae^{(α+iβ)x} + Be^{(α−iβ)x} = e^{αx}[Ae^{iβx} + Be^{−iβx}]
                            = e^{αx}[(A + B) cos βx + i(A − B) sin βx]
                            ≡ e^{αx}(c1 cos βx + c2 sin βx). (B.22)

Thus, we see that we have a linear combination of two real, linearly independent solutions, e^{αx} cos βx and e^{αx} sin βx. Complex roots, r = α ± iβ, give solutions of the form y(x) = e^{αx}(c1 cos βx + c2 sin βx).

Example B.7. y′′ + 4y = 0.
The characteristic equation in this case is r^2 + 4 = 0. The roots are pure imaginary, r = ±2i, and the general solution consists purely of sinusoidal functions, y(x) = c1 cos(2x) + c2 sin(2x), since α = 0 and β = 2.

Example B.8. y′′ + 2y′ + 4y = 0.
The characteristic equation in this case is r^2 + 2r + 4 = 0. The roots are complex, r = −1 ± √3 i, and the general solution can be written as

y(x) = [c1 cos(√3 x) + c2 sin(√3 x)] e^{−x}.

Example B.9. y′′ + 4y = sin x.
This is an example of a nonhomogeneous problem. The homogeneous problem was actually solved in Example B.7. According to the theory, we need only seek a particular solution to the nonhomogeneous problem and add it to the solution of the last example to get the general solution.

The particular solution can be obtained by purely guessing, making an educated guess, or using the Method of Variation of Parameters. We will not review all of these techniques at this time. Due to the simple form of the driving term, we will make an intelligent guess of yp(x) = A sin x and determine what A needs to be. Inserting this guess into the differential equation gives (−A + 4A) sin x = sin x. So, we see that A = 1/3 works. The general solution of the nonhomogeneous problem is therefore y(x) = c1 cos(2x) + c2 sin(2x) + \frac{1}{3} sin x.

The three cases for constant coefficient linear second order differential equations are summarized below.


Classification of Roots of the Characteristic Equation
for Second Order Constant Coefficient ODEs

1. Real, distinct roots r1, r2. In this case the solutions corresponding to each root are linearly independent. Therefore, the general solution is simply y(x) = c1 e^{r1 x} + c2 e^{r2 x}.

2. Real, equal roots r1 = r2 = r. In this case the solutions corresponding to each root are linearly dependent. To find a second linearly independent solution, one uses the Method of Reduction of Order. This gives the second solution as x e^{rx}. Therefore, the general solution is found as y(x) = (c1 + c2 x)e^{rx}.

3. Complex conjugate roots r1, r2 = α ± iβ. In this case the solutions corresponding to each root are linearly independent. Making use of Euler’s identity, e^{iθ} = cos(θ) + i sin(θ), these complex exponentials can be rewritten in terms of trigonometric functions. Namely, one has that e^{αx} cos(βx) and e^{αx} sin(βx) are two linearly independent solutions. Therefore, the general solution becomes y(x) = e^{αx}(c1 cos(βx) + c2 sin(βx)).
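The three cases can also be read off directly from the roots of the characteristic polynomial. The sketch below is our own (the function name classify is not in the text); it classifies ar^2 + br + c = 0 numerically and is applied to Examples B.5, B.6, and B.8.

    import cmath

    def classify(a, b, c):
        # Roots of a r^2 + b r + c = 0 and the matching form of the general solution.
        disc = b*b - 4*a*c
        r1 = (-b + cmath.sqrt(disc)) / (2*a)
        r2 = (-b - cmath.sqrt(disc)) / (2*a)
        if disc > 0:
            return f"distinct real roots {r1.real}, {r2.real}: y = c1 e^(r1 x) + c2 e^(r2 x)"
        if disc == 0:
            return f"repeated root {r1.real}: y = (c1 + c2 x) e^(r x)"
        alpha, beta = r1.real, abs(r1.imag)
        return f"complex roots {alpha} +/- {beta} i: y = e^(alpha x)(c1 cos(beta x) + c2 sin(beta x))"

    print(classify(1, -1, -6))   # Example B.5
    print(classify(1, 6, 9))     # Example B.6
    print(classify(1, 2, 4))     # Example B.8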

B.3 Forced Systems

Many problems can be modeled by nonhomogeneous second order equations. Thus, we want to find solutions of equations of the form

Ly(x) = a(x)y′′(x) + b(x)y′(x) + c(x)y(x) = f (x). (B.23)

As noted in Section B.2, one solves this equation by finding the general solution of the homogeneous problem,

Lyh = 0,

and a particular solution of the nonhomogeneous problem,

Lyp = f .

Then, the general solution of (B.14) is simply given as y = yh + yp.

So far, we only know how to solve constant coefficient, homogeneous equations. So, by adding a nonhomogeneous term to such equations we will need to find the particular solution to the nonhomogeneous equation.

We could guess a solution, but that is not usually possible without a little bit of experience. So, we need some other methods. There are two main methods. In the first, the Method of Undetermined Coefficients, one makes an intelligent guess based on the form of f (x). In the second, the Method of Variation of Parameters, one systematically develops the particular solution. We will come back to the Method of Variation of Parameters, and we will also introduce the powerful machinery of Green’s functions, later in this section.


B.3.1 Method of Undetermined Coefficients

Let’s solve a simple differential equation highlighting how we can handle nonhomogeneous equations.

Example B.10. Consider the equation

y′′ + 2y′ − 3y = 4. (B.24)

The first step is to determine the solution of the homogeneous equation. Thus, we solve

yh′′ + 2yh′ − 3yh = 0. (B.25)

The characteristic equation is r^2 + 2r − 3 = 0. The roots are r = 1, −3. So, we can immediately write the solution

yh(x) = c1 e^x + c2 e^{−3x}.

The second step is to find a particular solution of (B.24). What possible function can we insert into this equation such that only a 4 remains? If we try something proportional to x, then we are left with a linear function after inserting x and its derivatives. Perhaps a constant function would work, you might think. The specific choice y = 4 does not work, but we could try an arbitrary constant, y = A.

Let’s see. Inserting y = A into (B.24), we obtain

−3A = 4.

Ah ha! We see that we can choose A = −\frac{4}{3} and this works. So, we have a particular solution, yp(x) = −\frac{4}{3}. This step is done.

Combining the two solutions, we have the general solution to the original nonhomogeneous equation (B.24). Namely,

y(x) = yh(x) + yp(x) = c1 e^x + c2 e^{−3x} − \frac{4}{3}.

Insert this solution into the equation and verify that it is indeed a solution. If we had been given initial conditions, we could now use them to determine the arbitrary constants.

Example B.11. What if we had a different source term? Consider the equation

y′′ + 2y′ − 3y = 4x. (B.26)

The only thing that would change is the particular solution. So, we need a guess. We know a constant function does not work by the last example. So, let’s try yp = Ax. Inserting this function into Equation (B.26), we obtain

2A − 3Ax = 4x.

Picking A = −4/3 would get rid of the x terms, but will not cancel everything. We still have a constant left. So, we need something more general.


Let’s try a linear function, yp(x) = Ax + B. Then we get, after substitution into (B.26),

2A − 3(Ax + B) = 4x.

Equating the coefficients of the different powers of x on both sides, we find a system of equations for the undetermined coefficients:

2A − 3B = 0
−3A = 4. (B.27)

These are easily solved to obtain

A = −\frac{4}{3},
B = \frac{2}{3} A = −\frac{8}{9}. (B.28)

So, the particular solution is

yp(x) = −\frac{4}{3} x − \frac{8}{9}.

This gives the general solution to the nonhomogeneous problem as

y(x) = yh(x) + yp(x) = c1 e^x + c2 e^{−3x} − \frac{4}{3} x − \frac{8}{9}.

There are general forms that you can guess based upon the form of the driving term, f (x). Some examples are given in Table B.1. More general applications are covered in a standard text on differential equations. However, the procedure is simple. Given f (x) in a particular form, you make an appropriate guess up to some unknown parameters, or coefficients. Inserting the guess leads to a system of equations for the unknown coefficients. Solve the system and you have the solution. This solution is then added to the general solution of the homogeneous differential equation.

Table B.1: Forms used in the Method of Undetermined Coefficients.

  f (x)                                              Guess
  a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0    A_n x^n + A_{n−1} x^{n−1} + · · · + A_1 x + A_0
  a e^{bx}                                           A e^{bx}
  a cos ωx + b sin ωx                                A cos ωx + B sin ωx
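Matching coefficients as in (B.27) can be automated. The following sketch is ours, not the author's; it reproduces Example B.11 by substituting the guess Ax + B and solving for the undetermined coefficients with SymPy.

    from sympy import symbols, solve, Poly

    x, A, B = symbols('x A B')
    yp = A*x + B                                            # guess from Table B.1
    residual = yp.diff(x, 2) + 2*yp.diff(x) - 3*yp - 4*x    # must vanish identically
    coeffs = Poly(residual, x).all_coeffs()                 # coefficients of each power of x
    print(solve(coeffs, [A, B]))                            # expect {A: -4/3, B: -8/9}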

Example B.12. Solve

y′′ + 2y′ − 3y = 2e^{−3x}. (B.29)

According to the above, we would guess a solution of the form yp = Ae^{−3x}. Inserting our guess, we find

0 = 2e^{−3x}.

Oops! The coefficient, A, disappeared! We cannot solve for it. What went wrong? The answer lies in the general solution of the homogeneous problem. Note that e^x and e^{−3x} are solutions to the homogeneous problem. So, a multiple of e^{−3x} will not get us anywhere. It turns out that there is one further modification of the method.


If the driving term contains terms that are solutions of the homogeneous problem, then we need to make a guess consisting of the smallest possible power of x times the function which is no longer a solution of the homogeneous problem. Namely, we guess yp(x) = Axe^{−3x} and differentiate this guess to obtain the derivatives yp′ = A(1 − 3x)e^{−3x} and yp′′ = A(9x − 6)e^{−3x}.

Inserting these derivatives into the differential equation, we obtain

[(9x − 6) + 2(1 − 3x) − 3x]Ae^{−3x} = 2e^{−3x}.

Comparing coefficients, we have

−4A = 2.

So, A = −1/2 and yp(x) = −\frac{1}{2} x e^{−3x}. Thus, the general solution to the problem is

y(x) = c1 e^x + c2 e^{−3x} − \frac{1}{2} x e^{−3x}.

Modified Method of Undetermined Coefficients

In general, if any term in the guess yp(x) is a solution of the homogeneous equation, then multiply the guess by x^k, where k is the smallest positive integer such that no term in x^k yp(x) is a solution of the homogeneous problem.
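To see the modification in action symbolically, here is a brief sketch (again ours, not the author's code) that substitutes the modified guess Axe^{−3x} from Example B.12 and solves for A.

    from sympy import symbols, exp, solve, simplify

    x, A = symbols('x A')
    yp = A*x*exp(-3*x)                      # modified guess: x times e^(-3x)
    residual = simplify(yp.diff(x, 2) + 2*yp.diff(x) - 3*yp - 2*exp(-3*x))
    print(solve(residual, A))               # expect [-1/2]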

B.3.2 Periodically Forced Oscillations

A special type of forcing is periodic forcing. Realistic oscillations will dampen and eventually stop if left unattended. For example, mechanical clocks are driven by compound or torsional pendula, and electric oscillators are often designed with the need to continue for long periods of time. However, they are not perpetual motion machines and will need a periodic injection of energy. This can be done systematically by adding periodic forcing. Another simple example is the motion of a child on a swing in the park. This simple damped pendulum system will naturally slow down to equilibrium (stopped) if left alone. However, if the child pumps energy into the swing at the right time, or if an adult pushes the child at the right time, then the amplitude of the swing can be increased.

There are other systems, such as airplane wings and long bridge spans, in which external driving forces might cause damage to the system. A well-known example is the wind-induced collapse of the Tacoma Narrows Bridge. [The Tacoma Narrows Bridge opened in Washington State (U.S.) in mid 1940. However, in November of the same year the winds excited a transverse mode of vibration, which eventually (in a few hours) led to large amplitude oscillations and then collapse.] Of course, if one is not careful, the child in the last example might get too much energy pumped into the system, causing a similar failure of the desired motion.

While there are many types of forced systems, some fairly complicated, we can easily get to the basic characteristics of forced oscillations by modifying the mass-spring system, adding an external, time-dependent driving force. Such a system satisfies the equation

m ẍ + b ẋ + kx = F(t), (B.30)


where m is the mass, b is the damping constant, k is the spring constant, and F(t) is the driving force. If F(t) is of simple form, then we can employ the Method of Undetermined Coefficients. Since the systems we have considered so far are similar, one could easily apply the following to pendula or circuits.

Figure B.3: An external driving force is added to the spring-mass-damper system.

As the damping term only complicates the solution, we will consider the simpler case of undamped motion and assume that b = 0. Furthermore, we will introduce a sinusoidal driving force, F(t) = F0 cos ωt, in order to study periodic forcing. This leads to the simple periodically driven mass on a spring system

m ẍ + kx = F0 cos ωt. (B.31)

In order to find the general solution, we first obtain the solution to the homogeneous problem,

xh = c1 cos ω0 t + c2 sin ω0 t,

where ω0 = √(k/m). Next, we seek a particular solution to the nonhomogeneous problem. We will apply the Method of Undetermined Coefficients.

A natural guess for the particular solution would be to use xp = A cos ωt + B sin ωt. However, recall that the guess should not be a solution of the homogeneous problem. Comparing xp with xh, this would hold if ω ≠ ω0. Otherwise, one would need to use the Modified Method of Undetermined Coefficients as described in the last section. So, we have two cases to consider. Dividing through by the mass, we solve the simple driven system,

ẍ + ω0^2 x = \frac{F0}{m} cos ωt.

Example B.13. Solve ẍ + ω0^2 x = \frac{F0}{m} cos ωt, for ω ≠ ω0.
In this case we continue with the guess xp = A cos ωt + B sin ωt. Since there is no damping term, one quickly finds that B = 0. Inserting xp = A cos ωt into the differential equation, we find that

(−ω^2 + ω0^2) A cos ωt = \frac{F0}{m} cos ωt.

Solving for A, we obtain

A = \frac{F0}{m(ω0^2 − ω^2)}.

The general solution for this case is thus,

x(t) = c1 cos ω0 t + c2 sin ω0 t + \frac{F0}{m(ω0^2 − ω^2)} cos ωt. (B.32)

Example B.14. Solve ẍ + ω0^2 x = \frac{F0}{m} cos ω0 t.
In this case, we need to employ the Modified Method of Undetermined Coefficients. So, we make the guess xp = t(A cos ω0 t + B sin ω0 t). Since there is no damping term, one finds that A = 0. Inserting the guess into the differential equation, we find that

B = \frac{F0}{2mω0},

or the general solution is

x(t) = c1 cos ω0 t + c2 sin ω0 t + \frac{F0}{2mω0} t sin ω0 t. (B.33)


The general solution to the problem is thus

x(t) = c1 cos ω0 t + c2 sin ω0 t +
       \frac{F0}{m(ω0^2 − ω^2)} cos ωt,   ω ≠ ω0,
       \frac{F0}{2mω0} t sin ω0 t,        ω = ω0.    (B.34)

Special cases of these solutions provide interesting physics, which can be explored by the reader in the homework. In the case that ω = ω0, we see that the solution tends to grow as t gets large. This is what is called a resonance. Essentially, one is driving the system at its natural frequency. As the system is moving to the left, one pushes it to the left. If it is moving to the right, one is adding energy in that direction. This forces the amplitude of oscillation to continue to grow until the system breaks. An example of such an oscillation is shown in Figure B.4.

Figure B.4: Plot of x(t) = 5 cos 2t + \frac{1}{2} t sin 2t, a solution of ẍ + 4x = 2 cos 2t, showing resonance.

In the case that ω ≠ ω0, one can rewrite the solution in a simple form. Let’s choose the initial conditions that c1 = −F0/(m(ω0^2 − ω^2)), c2 = 0. Then one has (see Problem ??)

x(t) = \frac{2F0}{m(ω0^2 − ω^2)} sin\frac{(ω0 − ω)t}{2} sin\frac{(ω0 + ω)t}{2}. (B.35)

For values of ω near ω0, one finds the solution consists of a rapid oscillation, due to the sin\frac{(ω0 + ω)t}{2} factor, with a slowly varying amplitude, \frac{2F0}{m(ω0^2 − ω^2)} sin\frac{(ω0 − ω)t}{2}. The reader can investigate this solution.

Figure B.5: Plot of x(t) = \frac{1}{249}(2045 cos 2t − 800 cos \frac{43}{20} t), a solution of ẍ + 4x = 2 cos 2.15t.

This slow variation is called a beat and the beat frequency is given by f = \frac{|ω0 − ω|}{4π}. In Figure B.5 we see the high frequency oscillations are contained by the lower beat frequency, f = \frac{0.15}{4π} s^{−1}. This corresponds to a period of T = 1/f ≈ 83.7 s, which looks about right from the figure.
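The rewriting in (B.35) is just the product-to-sum identity for cosines; here is a two-line numerical spot check (ours, using NumPy, which the notes do not assume) for the parameters of Figure B.5.

    import numpy as np

    t = np.linspace(0, 100, 5001)
    w0, w = 2.0, 2.15
    lhs = np.cos(w*t) - np.cos(w0*t)                   # from x(t) with c1 = -F0/(m(w0^2 - w^2))
    rhs = 2*np.sin((w0 - w)*t/2)*np.sin((w0 + w)*t/2)
    print(np.allclose(lhs, rhs))                       # expected to print True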

Example B.15. Solve ẍ + x = 2 cos ωt, x(0) = 0, ẋ(0) = 0, for ω = 1, 1.15.
For each case, we need the solution of the homogeneous problem,

xh(t) = c1 cos t + c2 sin t.

The particular solution depends on the value of ω.

For ω = 1, the driving term, 2 cos t, is a solution of the homogeneous problem. Thus, we assume

xp(t) = At cos t + Bt sin t.

Inserting this into the differential equation, we find A = 0 and B = 1. So, the general solution is

x(t) = c1 cos t + c2 sin t + t sin t.

Imposing the initial conditions, we find

x(t) = t sin t.

This solution is shown in Figure B.6.

Figure B.6: Plot of x(t) = t sin t, a solution of ẍ + x = 2 cos t.

For ω = 1.15, the driving term, 2 cos 1.15t, is not a solution of the homogeneous problem. Thus, we assume

xp(t) = A cos 1.15t + B sin 1.15t.


Inserting this into the differential equation, we find A = −\frac{800}{129} and B = 0. So, the general solution is

x(t) = c1 cos t + c2 sin t − \frac{800}{129} cos 1.15t.

Imposing the initial conditions, we find

x(t) = \frac{800}{129} (cos t − cos 1.15t).

This solution is shown in Figure B.7. The beat frequency in this case is the same as with Figure B.5.

Figure B.7: Plot of x(t) = \frac{800}{129}(cos t − cos \frac{23}{20} t), a solution of ẍ + x = 2 cos 1.15t.
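The two cases in Example B.15 are also easy to check numerically. The sketch below is our addition (it relies on NumPy and SciPy, which the notes do not use); it integrates ẍ + x = 2 cos ωt from rest and compares with the closed-form solutions found above.

    import numpy as np
    from scipy.integrate import solve_ivp

    def forced(omega):
        rhs = lambda t, y: [y[1], -y[0] + 2*np.cos(omega*t)]   # x'' + x = 2 cos(omega t)
        t = np.linspace(0, 50, 2001)
        sol = solve_ivp(rhs, (0, 50), [0.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)
        return t, sol.y[0]

    t, x1 = forced(1.0)
    print(np.allclose(x1, t*np.sin(t), atol=1e-5))                            # resonance: x = t sin t
    t, x2 = forced(1.15)
    print(np.allclose(x2, 800/129*(np.cos(t) - np.cos(1.15*t)), atol=1e-5))   # beat case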

B.3.3 Method of Variation of Parameters

A more systematic way to find particular solutions is through the use of the Method of Variation of Parameters. The derivation is a little detailed and the solution is sometimes messy, but the application of the method is straightforward if you can do the required integrals. We will first derive the needed equations and then do some examples.

We begin with the nonhomogeneous equation. Let’s assume it is of the standard form

a(x)y′′(x) + b(x)y′(x) + c(x)y(x) = f (x). (B.36)

We know that the solution of the homogeneous equation can be written in terms of two linearly independent solutions, which we will call y1(x) and y2(x):

yh(x) = c1y1(x) + c2y2(x).

If we replace the constants with functions, then we no longer have a solution to the homogeneous equation. Is it possible that we could stumble across the right functions with which to replace the constants and somehow end up with f (x) when inserted into the left side of the differential equation? It turns out that we can.

So, let’s assume that the constants are replaced with two unknown functions, which we will call c1(x) and c2(x). This change of the parameters is where the name of the method derives. Thus, we are assuming that a particular solution takes the form

yp(x) = c1(x)y1(x) + c2(x)y2(x). (B.37)

If this is to be a solution, then insertion into the differential equation should make the equation hold. To do this we will first need to compute some derivatives.

The first derivative is given by

y′p(x) = c1(x)y′1(x) + c2(x)y′2(x) + c′1(x)y1(x) + c′2(x)y2(x). (B.38)


Next we will need the second derivative. But, this will yield eight terms. So, we will first make a simplifying assumption. Let’s assume that the last two terms add to zero:

c′1(x)y1(x) + c′2(x)y2(x) = 0. (B.39)

It turns out that we will get the same results in the end even if we do not assume this. The important thing is that it works!

Under the assumption the first derivative simplifies to

y′p(x) = c1(x)y′1(x) + c2(x)y′2(x). (B.40)

The second derivative now only has four terms:

y′′p(x) = c1(x)y′′1(x) + c2(x)y′′2(x) + c′1(x)y′1(x) + c′2(x)y′2(x). (B.41)

Now that we have the derivatives, we can insert the guess into the differential equation. Thus, we have

f (x) = a(x)[c1(x)y′′1(x) + c2(x)y′′2(x) + c′1(x)y′1(x) + c′2(x)y′2(x)]
        + b(x)[c1(x)y′1(x) + c2(x)y′2(x)]
        + c(x)[c1(x)y1(x) + c2(x)y2(x)]. (B.42)

Regrouping the terms, we obtain

f (x) = c1(x)[a(x)y′′1(x) + b(x)y′1(x) + c(x)y1(x)]
        + c2(x)[a(x)y′′2(x) + b(x)y′2(x) + c(x)y2(x)]
        + a(x)[c′1(x)y′1(x) + c′2(x)y′2(x)]. (B.43)

Note that the first two rows vanish since y1 and y2 are solutions of the homogeneous problem. This leaves the equation

f (x) = a(x)[c′1(x)y′1(x) + c′2(x)y′2(x)],

which can be rearranged as

c′1(x)y′1(x) + c′2(x)y′2(x) = \frac{f (x)}{a(x)}. (B.44)

In summary, we have assumed a particular solution of the form

yp(x) = c1(x)y1(x) + c2(x)y2(x).

This is only possible if the unknown functions c1(x) and c2(x) satisfy the system of equations

c′1(x)y1(x) + c′2(x)y2(x) = 0
c′1(x)y′1(x) + c′2(x)y′2(x) = \frac{f (x)}{a(x)}. (B.45)

(In order to solve the differential equation Ly = f , we assume yp(x) = c1(x)y1(x) + c2(x)y2(x), for Ly_{1,2} = 0. Then, one need only solve the simple system of equations (B.45).)


System (B.45) can be solved as

c′1(x) = −\frac{f y2}{a W(y1, y2)},      c′2(x) = \frac{f y1}{a W(y1, y2)},

where W(y1, y2) = y1y′2 − y′1y2 is the Wronskian. We use this solution in the next section.

It is standard to solve this system for the derivatives of the unknown functions and then present the integrated forms. However, one could just as easily start from this system and solve it for each problem encountered.

Example B.16. Find the general solution of the nonhomogeneous problem: y′′ − y = e^{2x}.
The general solution to the homogeneous problem y′′h − yh = 0 is

yh(x) = c1 e^x + c2 e^{−x}.

In order to use the Method of Variation of Parameters, we seek a solution of the form

yp(x) = c1(x)e^x + c2(x)e^{−x}.

We find the unknown functions by solving the system in (B.45), which in this case becomes

c′1(x)ex + c′2(x)e−x = 0

c′1(x)ex − c′2(x)e−x = e2x. (B.46)

Adding these equations we find that

2c′1 e^x = e^{2x} → c′1 = \frac{1}{2} e^x.

Solving for c1(x) we find

c1(x) = \frac{1}{2} ∫ e^x dx = \frac{1}{2} e^x.

Subtracting the equations in the system yields

2c′2 e^{−x} = −e^{2x} → c′2 = −\frac{1}{2} e^{3x}.

Thus,

c2(x) = −\frac{1}{2} ∫ e^{3x} dx = −\frac{1}{6} e^{3x}.

The particular solution is found by inserting these results into yp:

yp(x) = c1(x)y1(x) + c2(x)y2(x)
      = (\frac{1}{2} e^x)e^x + (−\frac{1}{6} e^{3x})e^{−x}
      = \frac{1}{3} e^{2x}. (B.47)

Thus, we have the general solution of the nonhomogeneous problem as

y(x) = c1 e^x + c2 e^{−x} + \frac{1}{3} e^{2x}.
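The Wronskian formulas for c′1 and c′2 can be wrapped into a small routine. The sketch below is our own (the helper name variation_of_parameters is not in the text); it reproduces the particular solutions of Examples B.16 and B.17.

    from sympy import symbols, exp, sin, cos, integrate, simplify

    x = symbols('x')

    def variation_of_parameters(y1, y2, f, a=1):
        # Particular solution y_p = c1(x) y1 + c2(x) y2 built from system (B.45).
        W = y1*y2.diff(x) - y1.diff(x)*y2          # Wronskian, Eq. (B.17)
        c1 = integrate(-f*y2/(a*W), x)
        c2 = integrate( f*y1/(a*W), x)
        return simplify(c1*y1 + c2*y2)

    print(variation_of_parameters(exp(x), exp(-x), exp(2*x)))   # expect something equivalent to exp(2x)/3
    print(variation_of_parameters(cos(2*x), sin(2*x), sin(x)))  # expect something equivalent to sin(x)/3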


Example B.17. Now consider the problem: y′′ + 4y = sin x.
The solution to the homogeneous problem is

yh(x) = c1 cos 2x + c2 sin 2x. (B.48)

We now seek a particular solution of the form

yp(x) = c1(x) cos 2x + c2(x) sin 2x.

We let y1(x) = cos 2x and y2(x) = sin 2x, a(x) = 1, f (x) = sin x in system (B.45):

c′1(x) cos 2x + c′2(x) sin 2x = 0

−2c′1(x) sin 2x + 2c′2(x) cos 2x = sin x. (B.49)

Now, use your favorite method for solving a system of two equations and two unknowns. In this case, we can multiply the first equation by 2 sin 2x and the second equation by cos 2x. Adding the resulting equations will eliminate the c′1 terms. Thus, we have

c′2(x) = \frac{1}{2} sin x cos 2x = \frac{1}{2}(2 cos^2 x − 1) sin x.

Inserting this into the first equation of the system, we have

c′1(x) = −c′2(x)\frac{sin 2x}{cos 2x} = −\frac{1}{2} sin x sin 2x = −sin^2 x cos x.

These can easily be solved:

c2(x) = \frac{1}{2} ∫ (2 cos^2 x − 1) sin x dx = \frac{1}{2}(cos x − \frac{2}{3} cos^3 x).

c1(x) = −∫ sin^2 x cos x dx = −\frac{1}{3} sin^3 x.

The final step in getting the particular solution is to insert these functions into yp(x). This gives

yp(x) = c1(x)y1(x) + c2(x)y2(x)
      = (−\frac{1}{3} sin^3 x) cos 2x + (\frac{1}{2} cos x − \frac{1}{3} cos^3 x) sin x
      = \frac{1}{3} sin x. (B.50)

So, the general solution is

y(x) = c1 cos 2x + c2 sin 2x + \frac{1}{3} sin x. (B.51)

B.4 Cauchy-Euler Equations

Another class of solvable linear differential equations that is of interest is the Cauchy-Euler type of equations, also referred to in some books as Euler’s equation. These are given by

ax^2 y′′(x) + bxy′(x) + cy(x) = 0. (B.52)


Note that in such equations the power of x in each of the coefficients matches the order of the derivative in that term. These equations are solved in a manner similar to the constant coefficient equations.

One begins by making the guess y(x) = x^r. Inserting this function and its derivatives,

y′(x) = r x^{r−1}, y′′(x) = r(r − 1)x^{r−2},

into Equation (B.52), we have

[ar(r − 1) + br + c] x^r = 0.

Since this has to be true for all x in the problem domain, we obtain the characteristic equation

ar(r − 1) + br + c = 0. (B.53)

Just like the constant coefficient differential equation, we have a quadratic equation and the nature of the roots again leads to three classes of solutions. If there are two real, distinct roots, then the general solution takes the form y(x) = c1 x^{r1} + c2 x^{r2}.

Example B.18. Find the general solution: x^2 y′′ + 5xy′ + 12y = 0.

As with the constant coefficient equations, we begin by writing down the characteristic equation. Doing a simple computation,

0 = r(r − 1) + 5r + 12
  = r^2 + 4r + 12
  = (r + 2)^2 + 8,
−8 = (r + 2)^2, (B.54)

one determines the roots are r = −2 ± 2√2 i. Therefore, the general solution is

y(x) = [c1 cos(2√2 ln |x|) + c2 sin(2√2 ln |x|)] x^{−2}.
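A quick symbolic check of Example B.18 (our addition, not part of the notes): one can solve the characteristic equation (B.53) directly, and SymPy's dsolve recognizes Cauchy-Euler equations as well.

    from sympy import Function, dsolve, Eq, symbols, solve

    x, r = symbols('x r')
    y = Function('y')

    # roots of the characteristic equation r(r - 1) + 5r + 12 = 0
    print(solve(r*(r - 1) + 5*r + 12, r))    # expect -2 +/- 2*sqrt(2)*I

    # full solution of x^2 y'' + 5 x y' + 12 y = 0
    print(dsolve(Eq(x**2*y(x).diff(x, 2) + 5*x*y(x).diff(x) + 12*y(x), 0), y(x)))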

Deriving the solution for Case 2 for the Cauchy-Euler equations works in the same way as it did for the constant coefficient equations, but it is a bit messier. First note that for the repeated real root, r = r1, the characteristic equation has to factor as (r − r1)^2 = 0. Expanding, we have

r^2 − 2r1 r + r1^2 = 0.

The general characteristic equation is

ar(r − 1) + br + c = 0.

Dividing this equation by a and rewriting, we have

r^2 + (\frac{b}{a} − 1)r + \frac{c}{a} = 0.

Comparing equations, we find

\frac{b}{a} = 1 − 2r1,    \frac{c}{a} = r1^2.


So, the Cauchy-Euler equation for this case can be written in the form

x^2 y′′ + (1 − 2r1)xy′ + r1^2 y = 0.

Now we seek the second linearly independent solution in the form y2(x) = v(x)x^{r1}. We first list this function and its derivatives,

y2(x) = v x^{r1},
y′2(x) = (xv′ + r1 v)x^{r1−1},
y′′2(x) = (x^2 v′′ + 2r1 xv′ + r1(r1 − 1)v)x^{r1−2}. (B.55)

Inserting these forms into the differential equation, we have

0 = x^2 y′′ + (1 − 2r1)xy′ + r1^2 y
  = (xv′′ + v′)x^{r1+1}. (B.56)

Thus, we need to solve the equation

xv′′ + v′ = 0,

or

\frac{v′′}{v′} = −\frac{1}{x}.

Integrating, we have

ln |v′| = −ln |x| + C.

Exponentiating, we obtain one last differential equation to solve,

v′ = \frac{A}{x},

where A = ±e^C absorbs C and the signs from the absolute values. Thus,

v(x) = A ln |x| + k.

So, we have found that the second linearly independent solution can be written as

y2(x) = x^{r1} ln |x|.

Therefore, the general solution is found as y(x) = (c1 + c2 ln |x|)x^r.

For one root, r1 = r2 = r, the general solution is of the form

y(x) = (c1 + c2 ln |x|)x^r.

Example B.19. Solve the initial value problem: t^2 y′′ + 3ty′ + y = 0, with the initial conditions y(1) = 0, y′(1) = 1.
For this example the characteristic equation takes the form

r(r − 1) + 3r + 1 = 0,

or

r^2 + 2r + 1 = 0.

There is only one real root, r = −1. Therefore, the general solution is

y(t) = (c1 + c2 ln |t|)t^{−1}.


However, this problem is an initial value problem. At t = 1 we know the values of y and y′. Using the general solution, we first have that

0 = y(1) = c1.

Thus, we have so far that y(t) = c2 ln |t| t^{−1}. Now, using the second condition and

y′(t) = c2(1 − ln |t|)t^{−2},

we have

1 = y′(1) = c2.

Therefore, the solution of the initial value problem is y(t) = ln |t| t^{−1}.

For complex conjugate roots, r = α ± iβ, the general solution takes the form

y(x) = x^α(c1 cos(β ln |x|) + c2 sin(β ln |x|)).

We now turn to the case of complex conjugate roots, r = α ± iβ. When dealing with the Cauchy-Euler equations, we have solutions of the form y(x) = x^{α+iβ}. The key to obtaining real solutions is to first rewrite x^y:

x^y = e^{ln x^y} = e^{y ln x}.

Thus, a power can be written as an exponential and the solution can be written as

y(x) = x^{α+iβ} = x^α e^{iβ ln x},    x > 0.

Recalling that

eiβ ln x = cos(β ln |x|) + i sin(β ln |x|),

we can now find two real, linearly independent solutions, x^α cos(β ln |x|) and x^α sin(β ln |x|), following the same steps as earlier for the constant coefficient case. This gives the general solution as

y(x) = xα(c1 cos(β ln |x|) + c2 sin(β ln |x|)).

Example B.20. Solve: x^2 y′′ − xy′ + 5y = 0.
The characteristic equation takes the form

r(r − 1) − r + 5 = 0,

or

r^2 − 2r + 5 = 0.

The roots of this equation are complex, r_{1,2} = 1 ± 2i. Therefore, the general solution is y(x) = x(c1 cos(2 ln |x|) + c2 sin(2 ln |x|)).

The three cases are summarized in the table below.


Classification of Roots of the Characteristic Equation
for Cauchy-Euler Differential Equations

1. Real, distinct roots r1, r2. In this case the solutions corresponding to each root are linearly independent. Therefore, the general solution is simply y(x) = c1 x^{r1} + c2 x^{r2}.

2. Real, equal roots r1 = r2 = r. In this case the solutions corresponding to each root are linearly dependent. To find a second linearly independent solution, one uses the Method of Reduction of Order. This gives the second solution as x^r ln |x|. Therefore, the general solution is found as y(x) = (c1 + c2 ln |x|)x^r.

3. Complex conjugate roots r1, r2 = α ± iβ. In this case the solutions corresponding to each root are linearly independent. These complex exponentials can be rewritten in terms of trigonometric functions. Namely, one has that x^α cos(β ln |x|) and x^α sin(β ln |x|) are two linearly independent solutions. Therefore, the general solution becomes y(x) = x^α(c1 cos(β ln |x|) + c2 sin(β ln |x|)).

Nonhomogeneous Cauchy-Euler Equations

We can also solve some nonhomogeneous Cauchy-Euler equations using the Method of Undetermined Coefficients or the Method of Variation of Parameters. We will demonstrate this with a couple of examples.

Example B.21. Find the solution of x^2 y′′ − xy′ − 3y = 2x^2.
First we find the solution of the homogeneous equation. The characteristic equation is r^2 − 2r − 3 = 0. So, the roots are r = −1, 3 and the solution is yh(x) = c1 x^{−1} + c2 x^3.

We next need a particular solution. Let’s guess yp(x) = Ax^2. Inserting the guess into the nonhomogeneous differential equation, we have

2x^2 = x^2 yp′′ − x yp′ − 3yp
     = 2Ax^2 − 2Ax^2 − 3Ax^2
     = −3Ax^2. (B.57)

So, A = −2/3. Therefore, the general solution of the problem is

y(x) = c1 x^{−1} + c2 x^3 − \frac{2}{3} x^2.

Example B.22. Find the solution of x^2 y′′ − xy′ − 3y = 2x^3.
In this case the nonhomogeneous term is a solution of the homogeneous problem, which we solved in the last example. So, we will need a modification of the method. We have a problem of the form

ax^2 y′′ + bxy′ + cy = dx^r,

where r is a solution of ar(r − 1) + br + c = 0. Let’s guess a solution of the form y = Ax^r ln x. Then one finds that the differential equation reduces to Ax^r(2ar − a + b) = dx^r. [You should verify this for yourself.]

With this in mind, we can now solve the problem at hand. Let yp = Ax^3 ln x. Inserting into the equation, we obtain 4Ax^3 = 2x^3, or A = 1/2. The general solution of the problem can now be written as

y(x) = c1 x^{−1} + c2 x^3 + \frac{1}{2} x^3 ln x.

Example B.23. Find the solution of x^2 y′′ − xy′ − 3y = 2x^3 using Variation of Parameters.
As noted in the previous examples, the homogeneous problem has two linearly independent solutions, y1(x) = x^{−1} and y2(x) = x^3. Assuming a particular solution of the form yp(x) = c1(x)y1(x) + c2(x)y2(x), we need to solve the system (B.45):

c′1(x)x^{−1} + c′2(x)x^3 = 0
−c′1(x)x^{−2} + 3c′2(x)x^2 = \frac{2x^3}{x^2} = 2x. (B.58)

From the first equation of the system we have c′1(x) = −x^4 c′2(x). Substituting this into the second equation gives c′2(x) = \frac{1}{2x}. So, c2(x) = \frac{1}{2} ln |x| and, therefore, c1(x) = −\frac{1}{8} x^4. The particular solution is

yp(x) = c1(x)y1(x) + c2(x)y2(x) = −\frac{1}{8} x^3 + \frac{1}{2} x^3 ln |x|.

Adding this to the homogeneous solution, we obtain the same solution as in the last example using the Method of Undetermined Coefficients. However, since −\frac{1}{8} x^3 is a solution of the homogeneous problem, it can be absorbed into the first terms, leaving

y(x) = c1 x^{−1} + c2 x^3 + \frac{1}{2} x^3 ln x.
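As a final hedged check (ours, not from the notes), dsolve should also handle the nonhomogeneous Cauchy-Euler problems of Examples B.21-B.23; the operator name L below is our shorthand.

    from sympy import Function, dsolve, Eq, symbols

    x = symbols('x', positive=True)
    y = Function('y')
    L = lambda u: x**2*u.diff(x, 2) - x*u.diff(x) - 3*u

    print(dsolve(Eq(L(y(x)), 2*x**2), y(x)))   # Example B.21: particular piece -2x^2/3 expected
    print(dsolve(Eq(L(y(x)), 2*x**3), y(x)))   # Examples B.22/B.23: particular piece x^3 log(x)/2 expected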

Problems

1. Find all of the solutions of the first order differential equations. When an initial condition is given, find the particular solution satisfying that condition.

a. \frac{dy}{dx} = \frac{e^x}{2y}.

b. \frac{dy}{dt} = y^2(1 + t^2), y(0) = 1.

c. \frac{dy}{dx} = \frac{√(1 − y^2)}{x}.

d. xy′ = y(1− 2y), y(1) = 2.

e. y′ − (sin x)y = sin x.

f. xy′ − 2y = x2, y(1) = 1.


g. \frac{ds}{dt} + 2s = st^2, s(0) = 1.

h. x′ − 2x = t e^{2t}.

i. \frac{dy}{dx} + y = sin x, y(0) = 0.

j. \frac{dy}{dx} − \frac{3}{x} y = x^3, y(1) = 4.

2. Consider the differential equation

\frac{dy}{dx} = \frac{xy − x}{1 + y}.

a. Find the 1-parameter family of solutions (general solution) of this equation.

b. Find the solution of this equation satisfying the initial condition y(0) = 1. Is this a member of the 1-parameter family?

3. Identify the type of differential equation. Find the general solution and plot several particular solutions. Also, find the singular solution if one exists.

a. y = xy′ + \frac{1}{y′}.

b. y = 2xy′ + ln y′.

c. y′ + 2xy = 2xy^2.

d. y′ + 2xy = y^2 e^{x^2}.

4. Find all of the solutions of the second order differential equations. When an initial condition is given, find the particular solution satisfying that condition.

a. y′′ − 9y′ + 20y = 0.

b. y′′ − 3y′ + 4y = 0, y(0) = 0, y′(0) = 1.

c. 8y′′ + 4y′ + y = 0, y(0) = 1, y′(0) = 0.

d. x′′ − x′ − 6x = 0 for x = x(t).

5. Verify that the given function is a solution and use Reduction of Order to find a second linearly independent solution.

a. x2y′′ − 2xy′ − 4y = 0, y1(x) = x4.

b. xy′′ − y′ + 4x3y = 0, y1(x) = sin(x2).

6. Prove that y1(x) = sinh x and y2(x) = 3 sinh x − 2 cosh x are linearly independent solutions of y′′ − y = 0. Write y3(x) = cosh x as a linear combination of y1 and y2.

7. Consider the nonhomogeneous differential equation x′′− 3x′+ 2x = 6e3t.

a. Find the general solution of the homogeneous equation.

b. Find a particular solution using the Method of Undetermined Coefficients by guessing xp(t) = Ae^{3t}.


c. Use your answers in the previous parts to write down the general solution for this problem.

8. Find the general solution of the given equation by the method given.

a. y′′ − 3y′ + 2y = 10. Method of Undetermined Coefficients.

b. y′′ + y′ = 3x2. Variation of Parameters.

9. Use the Method of Variation of Parameters to determine the general solution for the following problems.

a. y′′ + y = tan x.

b. y′′ − 4y′ + 4y = 6xe2x.

10. Instead of assuming that c′1y1 + c′2y2 = 0 in the derivation of the solution using Variation of Parameters, assume that c′1y1 + c′2y2 = h(x) for an arbitrary function h(x) and show that one gets the same particular solution.

11. Find all of the solutions of the second order differential equations for x > 0. When an initial condition is given, find the particular solution satisfying that condition.

a. x2y′′ + 3xy′ + 2y = 0.

b. x2y′′ − 3xy′ + 3y = 0.

c. x2y′′ + 5xy′ + 4y = 0.

d. x2y′′ − 2xy′ + 3y = 0.

e. x2y′′ + 3xy′ − 3y = x2.


Index

adjoint operator, 114

airfoils, 334

analog signal, 386

analytic function, 307

angular frequency, 372

area element, 217

Argand diagram, 287

associated Laguerre polynomials, 210

associated Legendre functions, 143, 194

average of a function, 76, 184

backward difference, 445

beat frequency, 510

Bernoulli numbers, 398

Bernoulli, Daniel, 35, 147, 287

Bernoulli, Jacob, 287

Bernoulli, Jacob II, 287

Bernoulli, Johann, 287

Bernoulli, Johann II, 287

Bernoulli, Johann III, 287

Bernoulli, Nicolaus I, 287

Bernoulli, Nicolaus II, 287

Bessel functions, 147, 170

first kind, 148

Fourier-Bessel series, 150

generating function, 149

identities, 149

orthogonality, 149

recursion formula, 149

second kind, 148

spherical, 204

Bessel’s inequality, 153

Bessel, Friedrich Wilhelm, 147

big-Oh, 491

binomial coefficients, 487

binomial expansion, 486

Bohr radius, 210

Bose-Einstein integrals, 398

boundary conditions, 35, 109

nonhomogeneous, 241, 251

time dependent, 256

time independent, 252

boundary value problem, 1, 39

box function, 375

branch cut, 293

branch point, 329

breaking time, 14

Bromwich integral, 411

Bromwich, Thomas John I’Anson, 411

Burgers’ equation, inviscid, 12

cakes
baking, 184

cylindrical, 188

rectangular, 185

Cardano, Girolamo, 287

Cauchy Integral Formula, 309

Cauchy principal value integral, 323

Cauchy’s Theorem, 303, 304

Cauchy, Augustin-Louis, 298

Cauchy-Euler equations, 514

nonhomogeneous, 518

Cauchy-Riemann equations, 298

central difference, 446

Chain Rule, 463

characteristic equation, 502, 515

characteristics
fan, 18

Charpit equations, 26

circle of convergence, 308

circular membrane, 166

circulation, 336

Clairaut’s Theorem, 2

Clairaut, Alexis Claude, 2

classical orthogonal polynomials, 131,134

classification of PDEs, 50

complete basis, 154

complex differentiation, 297

complex functions, 290

multivalued, 293

natural logarithm, 293


real and imaginary parts, 291

complex numbers, 287

addition, 288

Cartesian form, 288

complex conjugate, 288

imaginary part, 287

modulus, 287

multiplication, 288

nth root, 289

nth roots of unity, 290

polar form, 288

quotient, 288

real part, 287

complex plane, 287

extended, 349

complex velocity, 336

conformal mapping, 346

conics, 54

connected set, 300

conservation law
differential form, 11

integral form, 11

conservation laws., 10

conservation of mass, 335

constant coefficient equations, 502

complex roots, 504

repeated roots, 503

contour deformation, 304

contour integral, 323

convergence in the mean, 153

convolution
Fourier transform, 379

Laplace transform, 407

convolution theorem
Fourier transform, 379

Laplace transform, 407

cooking
cakes, 184

turkey, 203

coordinates
curvilinear, 213

cylindrical, 216

spherical, 213

cosine series, 88

cube roots of unity, 290

curvilinear coordinates, 213

cutoff frequency, 386

cylindrical coordinates, 216

d’Alembert’s formula, 55

d’Alembert, Jean le Rond, 35

d’Alembert, Jean-Baptiste, 55

de Moivre’s Formula, 486

de Moivre, Abraham, 287, 486

de Vries, Gustav, 358

derivatives, 455

table, 463

difference equation, 394

differential equations, 399

autonomous, 498

first order, 497, 499

linear, 497, 501

nonhomogeneous, 225, 505

Runge-Kutta methods, 440

second order, 501

separable, 498

series, 147

series solutions, 209

Taylor methods, 435

differential operator, 501

Differentiation Under Integral, 470

dipole, 345

Dirac delta function, 226, 239, 368

Laplace transform, 404

sifting property, 369

three dimensional, 423

Dirac, Paul Adrien Maurice, 369

Dirichlet boundary conditions, 109

Dirichlet kernel, 100

Dirichlet, Gustav, 109

dispersion relation, 358

distribution, 369

domain, 300

domain coloring, 293

double factorial, 139

easy as π, 171

eigenfunction expansion, 119

Green’s functions, 242

eigenfunctions
complete, 111

orthogonal, 111

eigenvalues
real, 111

Einstein, Albert, ix
electric dipole, 345

electric field, 48, 333

electric potential, 48, 333

electrostatics, 48

elliptic equation, 51

entire function, 307

equilibrium solution, 48

equilibrium temperature, 178

essential singularity, 314

integrand, 320

residue, 318


Euler’s Formula, 485

Euler’s Method, 431

Euler, Leonhard, 35, 147, 287

even functions, 82

exponential order., 410

fan characteristics, 18

Feynman’s trick, 470

Feynman, Richard, 396, 460, 470

Fibonacci sequence, 487

filtering, 386

finite difference method, 445

finite wave train, 378, 387

flow rate, 335

fluid flow, 48, 333

forward difference, 445

Fourier analysis, 34, 74

Fourier coefficients, 75

generalized, 120, 130

Fourier series, 74

complex exponential, 364

generalized, 130

Maple code, 91

representation on [0, 2π], 75

Fourier sine transform, 419

Fourier transform, 366

convolution, 374

differential equations, 413

properties, 372

shifitng properties, 373

three dimensions, 422

Fourier, Joseph, 1, 35, 147, 497

Fourier-Bessel series, 147

Fourier-Legendre series, 134, 143

Fredholm Alternative, 248

Fredholm alternative, 122

frequency, 372

function
average, 472

function spaces, 128

functions
exponential, 456

hyperbolic, 461

inverse trigonometric functions, 460

logarithmic, 456

polynomial, 456

rational, 456

trigonometric, 457

Fundamental Theorem of Calculus, 464,470

Gödel, Kurt, 33

Gamma function, 145, 395

gate function, 375, 386

Gaussian function, 374, 385

Gaussian integral, 375

Gegenbauer polynomials, 134

generalized function, 369

geometric series, 308, 311, 477

Gibbs phenomenon, 91, 98

Gibbs, Josiah Willard, 98

Gram-Schmidt Orthogonalization, 131

gravitational potential, 138

Green’s first identity, 282

Green’s function, modified, 249

Green’s functions, 225, 501

boundary value, 231, 279

differential equation, 239

heat equation, 258, 419, 421

initial value, 227, 279

Laplace’s equation, 278, 417

Poisson’s equation, 280

properties, 234

series representation, 242

steady state, 280

time dependent equations, 279

wave equation, 259

Green’s identity, 116, 239, 249, 271

Green’s second identity, 226, 282

Green’s Theorem in the Plane, 303, 335

Green, George, 225, 304

Gregory, James, 479

group velocity, 362

Gudermann, Christoph, 466

Gudermannian, 466

harmonic conjugate, 299

harmonic functions, 48, 192, 298, 333

harmonics, 35

meridional, 199

sectoral, 199

spherical, 199

surface, 199

tesseral, 199

zonal, 199

heat equation, 33

1D, 42, 93

derivation, 38

eigenfunction expansion, 262

fully nonhomogeneous, 270

Green’s function, 419

Laplace transform, 418

nonhomogeneous, 251, 419

transform, 414

heat kernel, 415

Heaviside function, 144, 238, 402


Heaviside, Oliver, 144

Heisenberg, Werner, 377

Helmholtz equation, 160

Helmholtz, Hermann Ludwig Ferdinandvon, 160

Hermite polynomials, 134, 156

holomorphic function, 298, 307

Huen’s method, 442

hydrogen atom, 208

hyperbolic cosine, 461

hyperbolic equation, 51

hyperbolic function
identities, 462

substitution, 474

hyperbolic sine, 461

hyperbolic tangent, 461

identities
double angle, 458

half angle, 458

product, 460

Pythagorean, 457

sum and difference, 458

tangent, 457

implicit solution, 498

impulse function, 387

unit impulse, 405

impulse response, 387

incompressible, 48, 333

infinite dimensional, 128

infinitesimal area element, 217

infinitesimal volume elements, 216

initial value problem, 1, 399, 497

inner product, 128

inner product spaces, 129

integral transforms, 389

integrals, 455

integration limits, 465

simple substitution, 464

table, 464

trigonometric, 471

integrating factor, 264, 499

integration by parts, 82, 466

integration by recursion, 393

interval of convergence, 483

inverse Fourier transform, 366

inverse Laplace transform, 400, 410

inversion mapping, 347

inviscid Burgers’ equation, 12

irrotational, 48, 333

Jacobi polynomials, 134

Jacobian determinant, 216

Jordan’s lemma, 332

Julia set, 294

Kepler, Johannes, 35, 147

kernel, 227, 357

kettle drum, 166

Korteweg, Diederik, 358

Korteweg-de Vries equation, 358

Kronecker delta, 129

Lagrange’s identity, 116

Lagrange, Joseph-Louis, 147

Lagrange-Charpit method, 26

Laguerre polynomials, 134, 210

Laguerre, Edmond, 210

Lambert, Johann Heinrich, 466

Laplace transform, 389

convolution, 407

differential equations, 399

inverse, 410

properties, 395

series summation, 396

transform pairs, 391

Laplace’s equationGreen’s function, 417

half plane, 415

Polar coordinates, 182

Rectangular coordinates, 48, 176

rectangular plates, 178

spherical symmetry, 191

transform, 416

two dimensions, 48, 175

Laplace, Pierre-Simon, 192, 389

Laplacian, 33

cylindrical, 221

polar coordinates, 167

spherical, 221

Laurent series, 312

singular part, 314

Laurent, Pierre Alphonse, 312

least squaresapproximation, 152

Legendre polynomials, 134, 194

generating function, 138

leading coefficient, 137

normalization, 142

recurrence relation, 136

recursion formula, 136

Rodrigues formula, 135

Legendre, Adrien-Marie, 134, 145

line element, 215

linear fractional transformations, 348

linear operator, 501


linearity, 372, 501

logarithm
multi-valued, 293

principal value, 293

Möbius transformation, 348

Möbius, August Ferdinand, 348

Maclaurin series, 481

Maclaurin, Colin, 479

Mandelbrot set, 294

MATLAB code, 296

mean square deviation, 152

mean value theorem, 184

membrane
annular, 174

circular, 166

rectangular, 161

meromorphic function, 307

Method of Characteristics, 5

Method of images, 261

Method of undetermined coefficients,268, 501, 506

modified, 508

metric coefficients, 215

Midpoint method, 441

modified Euler method, 442

modified Green’s function, 249

momentum, 361

Monge, Gaspard, 26

Morera’s Theorem, 307

Morera, Giacinto , 307

multivalued functions, 289, 328

integration, 330

nested radicals, 459

Neumann boundary conditions, 109

Neumann function, 148, 170

Neumann, Carl, 109

Newton’s Law of Cooling, 222

nodal curves, 163

nodal lines, 163

nonhomogeneous boundary conditions,241

nonhomogeneous equations, 225

norm of functions, 131

normal, 303

normalization, 77, 130

numerical stability, 452

odd functions, 82

ode45, 443

open set, 300

ordinary differential equation, 1, 497

orthogonal families, 336

orthogonal functions, 77

orthogonal grid, 214

orthogonality
eigenfunctions, 111

orthonormal, 77

oscillations
forced, 508

Panofsky, Pief, 201

parabolic equation, 51

parametrization, 300

Parseval’s equality, 154, 388

Parseval’s identity, 103

Parseval, Marc-Antoine, 388

partial differential equation, 33

transforms, 413

partial fraction decomposition, 312, 400

particle wave function, 360

particular solution, 498

Pascal’s triangle, 487

Pascal, Blaise, 487

path independence, 302

path integral, 300

period, 73

periodic boundary conditions, 109, 169

periodic extension, 75, 84

periodic function, 73, 460

phase, 74

phase shift, 74

phase velocity, 362

Plancherel’s formula, 388

Plancherel, Michel, 388

Poisson Integral Formula, 184

Poisson kernel, 184, 276

Poisson’s equation, 225

antisymmetric, 260

three dimensions, 422

two dimensions, 259

Poisson, Siméon, 147

polar coordinates
grid, 215

poles, 314

potential function, 336

product solutions, 45

quadratic forms, 54

radicals
nested, 459

radius of convergence, 308, 482

Rankine-Hugonoit jump condition, 17

Rayleigh quotient, 117

Rayleigh, Lord, 112

real eigenvalues, 111


rectangular membrane, 161

Reduction of Order, 503

rescaling, 202

residue, 316

Residue Theorem, 319

resonance, 268, 510

Riemann surface, 293, 328

Riemann zeta function, 397

Riemann, Georg Friedrich Bernhard, 225,298

Riemann-Lebesgue Lemma, 154

RMS voltage, 472

root mean square, 472

Runge-Kutta, 440

fourth order, 442

second order, 441

Runge-Kutta-Fehlberg method, 443

Russell, Bertrand, 71, 455

Russell, J. Scott, 358

scalar product, 77

scale factors, 214

scheme
algebraic system, 363

finite transform, 419

Fourier, 413

Fourier transform, 359

Laplace transform, 390

Schrödinger equation, 362

Schrödinger equation, 206, 360

Schwinger, Julian, 225

self-adjoint, 115

self-similarity transformation, 202

separation of variables, 42

sequence
Fibonacci, 487

series
binomial series, 489

Fourier series, 74

geometric series, 477

Laurent series, 311

Maclaurin series, 481

power series, 479

re-indexed, 311

summation by transforms, 396

Taylor series, 308, 481

telescoping, 479, 494

side conditions, 7

simple closed contour, 302

simple harmonic motion, 405

sinc function, 376

sine series, 88

double, 165

singularity, 314

double pole, 315

essential, 314

poles, 314

removable, 314

simple pole, 315

solitons, 461

source density, 11

space of square integrable functions, 154

Special Relativity, 490

spectral analysis, 34

spherical Bessel functions, 204

spherical harmonics, 193, 197

spherical symmetry, 191

spherical turkey, 201

square wave, 404, 412

stability
numerical, 452

stagnation point, 343

steady state solution, 252

stencil, 446

step function, 238, 402

stereographic projection, 349

Stone-Weierstraß Theorem, 131

streamline function, 335

streamlines
cylinder, 343

sink, 340

source, 340

uniform, 339

vortex, 341

Strutt, John William, 112

Sturm-Liouville operator, 108

Sturm-Liouville problem, 107, 150, 157,158

substitution
hyperbolic, 474

trigonometric, 472

Tabular Method, 468

Taylor polynomials, 441, 480

Taylor series, 308, 481

Taylor, Brook, 35, 479

Tchebychef polynomials, 134

Thanksgiving, 201

thermal equilibrium, 48

time dilation, 489

tones, 72

traffic flow, 12

transforms, 357

exponential Fourier, 365

finite Fourier sine, 419

Fourier, 366


Fourier sine, 419

inverse Laplace, 410

Laplace, 389

traveling wave, 7

trigonometric
identities, 457

substitution, 472

trigonometric identities, 76

truncation error, 452

turkey, 201

uncertainty principle, 377

uniform flow, 338

variation of parameters, 501, 511

vector spaces, 501

function spaces, 128

inner product space, 129

vectors
projection, 132

velocity
complex, 336

vibrating sphere, 200

vibrating string, 34

Villecourt, Paul Charpit, 26

volume elements, 216

von Ettingshausen, Andreas Freiherr, 488

von Neumann, John, 431

vortices, 346

vorticity, 335

Wallis, John, 35

Watson, G.N., 147

wave equation, 34, 35

1D, 45, 96

circular, 166

derivation, 36

rectangular, 161

wave packet, 361

wave speed, 358

wavelength, 359

wavenumber, 359, 372

weak solutions, 16

Weierstraß substitution, 322

Weierstraß, Karl Theodor Wilhelm, 322

Wilbraham, Henry, 98

Wilkinson, Rebecca, 185

windowing, 387

Wronskian, 502