
Notes for Expansions/Series and Differential Equations

In the last discussion, we considered perturbation methods for constructing solutions/roots of algebraic equations. Three types of problems were illustrated starting from the simplest: regular (straightforward) expansions, non-uniform expansions requiring modification to the process through inner and outer expansions, and singular perturbations. Before proceeding further, we first more clearly define the various types of expansions of functions of variables.

1. Convergent and Divergent Expansions/Series

Consider a series, which is the sum of the terms of a sequence of numbers. Given a sequence $\{a_1, a_2, a_3, a_4, a_5, \ldots, a_n, \ldots\}$, the nth partial sum $S_n$ is the sum of the first n terms of the sequence, that is,

$$S_n = \sum_{k=1}^{n} a_k. \qquad (1)$$

A series is convergent if the sequence of its partial sums $\{S_1, S_2, S_3, \ldots, S_n, \ldots\}$ converges. In more formal language, a series converges if there exists a limit $\ell$ such that for any arbitrarily small positive number $\varepsilon > 0$, there is a large integer N such that for all $n \ge N$,

$$|S_n - \ell| \le \varepsilon. \qquad (2)$$

A series that is not convergent is said to be divergent.

Examples of convergent and divergent series:

• The reciprocals of the powers of 2 produce a convergent series:
$$\frac{1}{1} + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \cdots = 2. \qquad (3)$$

• The reciprocals of the positive integers produce a divergent series:
$$\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \cdots \qquad (4)$$

• Alternating the signs of the reciprocals of the positive integers produces a convergent series:
$$\frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots = \ln 2. \qquad (5)$$

• The reciprocals of the prime numbers produce a divergent series:
$$\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \frac{1}{13} + \cdots \qquad (6)$$
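These behaviors are easy to observe numerically. A quick sketch (Python, illustration only) computes partial sums of three of the example series and watches whether they settle down:

```python
# Partial sums of the example series: the geometric series settles at 2,
# the harmonic series keeps growing, the alternating series tends to ln 2.
import math

def partial_sum(terms, n):
    """Sum the first n terms produced by the callable terms(k), k = 1..n."""
    return sum(terms(k) for k in range(1, n + 1))

geometric = lambda k: 1.0 / 2 ** (k - 1)       # 1/1 + 1/2 + 1/4 + ...
harmonic = lambda k: 1.0 / k                   # grows like ln n
alternating = lambda k: (-1) ** (k + 1) / k    # -> ln 2

print(partial_sum(geometric, 60))       # ~2.0
print(partial_sum(harmonic, 10**4))     # ~9.79 and still growing
print(partial_sum(alternating, 10**5))  # ~0.693147 = ln 2
```

The harmonic partial sums exceed any bound eventually, but only logarithmically slowly, which is why divergence is not obvious from the first few terms.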

Convergence tests:

There are a number of methods for determining whether a series is convergent or divergent.

Comparison test: The terms of the sequence $\{a_1, a_2, a_3, \ldots, a_n, \ldots\}$ are compared to those of another sequence $\{b_1, b_2, b_3, \ldots, b_n, \ldots\}$. If, for all n, $0 \le a_n \le b_n$ and $\sum_{n=1}^{\infty} b_n$ converges, then so does $\sum_{n=1}^{\infty} a_n$. However, if, for all n, $0 \le b_n \le a_n$ and $\sum_{n=1}^{\infty} b_n$ diverges, then so does $\sum_{n=1}^{\infty} a_n$.

Ratio test: Assume that for all n, $a_n > 0$. Suppose that there exists $r > 0$ such that
$$\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = r. \qquad (7)$$
If $r < 1$, the series converges. If $r > 1$, then the series diverges. If $r = 1$, the ratio test is inconclusive, and the series may converge or diverge.

Root test (or nth root test): Suppose that the terms of the sequence under consideration are non-negative, and that there exists $r > 0$ such that
$$\lim_{n \to \infty} \sqrt[n]{a_n} = r. \qquad (8)$$
If $r < 1$, the series converges. If $r > 1$, the series diverges. If $r = 1$, the root test is inconclusive, and the series may converge or diverge. Whenever the ratio-test limit exists, the root-test limit exists as well and takes the same value, so the two tests agree in that case.
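As an illustration (the specific series is my choice, not from the notes), the ratio-test limit can be estimated numerically for $a_n = n/2^n$, where $r = 1/2 < 1$ and the series indeed converges (to 2):

```python
# Estimating the ratio-test limit r = lim a_{n+1}/a_n for a_n = n / 2^n.
# Here a_{n+1}/a_n = (n+1)/(2n) -> 1/2 < 1, so the series converges.

def a(n):
    return n / 2.0 ** n

n = 1000
r_estimate = a(n + 1) / a(n)
print(r_estimate)                        # ~0.5005, approaching 1/2

total = sum(a(k) for k in range(1, 200))
print(total)                             # ~2.0, consistent with convergence
```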

Integral test: The series can be compared to an integral to establish convergence or divergence. Let $f(n) = a_n$ be a positive and monotone decreasing function. If
$$\int_{1}^{\infty} f(x)\,dx = \lim_{t \to \infty} \int_{1}^{t} f(x)\,dx < \infty, \qquad (9)$$
then the series converges. If, however, the integral diverges, the series does so as well.

Limit comparison test: If $\{a_n\}, \{b_n\} > 0$ and the limit $\lim_{n \to \infty} a_n / b_n$ exists and is not zero, then $\sum_{n=1}^{\infty} a_n$ converges if and only if $\sum_{n=1}^{\infty} b_n$ converges.

Alternating series test: Also known as the Leibniz criterion, the alternating series test states that for an alternating series of the form $\sum_{n=1}^{\infty} (-1)^n a_n$, if $\{a_n\}$ is monotone decreasing and has a limit of 0, then the series converges.

Cauchy condensation test: If $\{a_n\}$ is a monotone decreasing sequence, then $\sum_{n=1}^{\infty} a_n$ converges if and only if $\sum_{k=1}^{\infty} 2^k a_{2^k}$ converges.

Other tests for convergence include Dirichlet's test, Abel's test and Raabe's test.

Conditional and absolute convergence:

Note that for any sequence $\{a_1, a_2, a_3, \ldots, a_n, \ldots\}$, $a_n \le |a_n|$ for all n. Therefore,
$$\sum_{n=1}^{\infty} a_n \le \sum_{n=1}^{\infty} |a_n|.$$
This means that if $\sum_{n=1}^{\infty} |a_n|$ converges, then $\sum_{n=1}^{\infty} a_n$ also converges (but not vice versa).

If the series $\sum_{n=1}^{\infty} |a_n|$ converges, then the series $\sum_{n=1}^{\infty} a_n$ is absolutely convergent. An absolutely convergent series is one in which the length of the line created by joining together all of the increments to the partial sum is finitely long. The power series of the exponential function is absolutely convergent everywhere.

If the series $\sum_{n=1}^{\infty} a_n$ converges but the series $\sum_{n=1}^{\infty} |a_n|$ diverges, then the series $\sum_{n=1}^{\infty} a_n$ is conditionally convergent.

The Riemann series theorem states that if a series converges conditionally, it is possible to rearrange the terms of the series in such a way that the series converges to any value, or even diverges.
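The rearrangement in the Riemann series theorem can be carried out greedily: take positive terms until the target is exceeded, then negative terms until the sum falls below it, and so on. A sketch (Python, with the alternating harmonic series and the arbitrary target 1.0):

```python
# Riemann series theorem illustration: the conditionally convergent series
# 1 - 1/2 + 1/3 - 1/4 + ... (= ln 2) is rearranged greedily so that its
# partial sums approach a chosen target, here 1.0, instead of ln 2.

def rearranged_sum(target, n_terms):
    total = 0.0
    p, q = 1, 2          # p: next odd denominator, q: next even denominator
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / p   # below target: take the next positive term
            p += 2
        else:
            total -= 1.0 / q   # above target: take the next negative term
            q += 2
    return total

print(rearranged_sum(1.0, 100000))   # ~1.0, not ln 2 ~ 0.693
```

The deviation from the target after each crossing is bounded by the size of the last term used, which tends to zero, so the rearranged series converges to the target.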

Uniform convergence:

Let $\{f_1, f_2, f_3, f_4, \ldots, f_n, \ldots\}$ be a sequence of functions. The series $\sum_{n=1}^{\infty} f_n$ is said to converge uniformly to f if the sequence $\{S_n\}$ of partial sums defined by
$$S_n(x) = \sum_{k=1}^{n} f_k(x) \qquad (10)$$
converges uniformly to f.

Cauchy convergence criterion: The Cauchy convergence criterion states that a series $\sum_{n=1}^{\infty} a_n$ converges if and only if the sequence of partial sums is a Cauchy sequence. This means that for every $\varepsilon > 0$, there is a positive integer N such that for all $n \ge m \ge N$ we have
$$\left|\sum_{k=m}^{n} a_k\right| < \varepsilon, \qquad (11)$$
which is equivalent to
$$\lim_{n,m \to \infty} \sum_{k=n}^{n+m} a_k = 0. \qquad (12)$$

Radius of convergence:

The radius of convergence of a power series is a non-negative quantity, either a real number or $\infty$, that represents the range (within the radius) in which the series converges.

For a complex power series f defined as
$$f(z) = \sum_{n=0}^{\infty} c_n (z - a)^n, \qquad (13)$$
where a is a constant (the center of the disk of convergence), $c_n$ is the nth complex coefficient, and z is a complex variable, the radius of convergence r is a nonnegative real number or $\infty$ such that the series converges if $|z - a| < r$ and diverges if $|z - a| > r$. In other words, the series converges if z is close enough to the center and diverges if it is too far away. The radius of convergence is infinite if the series converges for all complex numbers z.

The radius of convergence can be found by applying the root test to the terms of the series. The root test uses the number
$$C = \limsup_{n \to \infty} \sqrt[n]{|f_n|}, \qquad (14)$$
where $f_n$ is the nth term $c_n (z - a)^n$ ("lim sup" denotes the limit superior). The root test states that the series converges if C < 1 and diverges if C > 1. It follows that the power series converges if the distance from z to the center a is less than
$$r = \frac{1}{\limsup_{n \to \infty} \sqrt[n]{|c_n|}}, \qquad (15)$$
and diverges if the distance exceeds that number. Note that r = 1/0 is interpreted as an infinite radius, meaning that f is an entire function.

The limit involved in the ratio test is usually easier to compute, but the limit may fail to exist, in which case the root test should be used. The ratio test uses the limit
$$L = \lim_{n \to \infty} \left|\frac{f_{n+1}}{f_n}\right|. \qquad (16)$$
In the case of a power series, this can be used to find that
$$r = \lim_{n \to \infty} \left|\frac{c_n}{c_{n+1}}\right|. \qquad (17)$$
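Equation (17) is easy to apply numerically. A sketch with two illustrative coefficient sequences of my choosing: $c_n = n^2$ (radius 1) and $c_n = 1/n!$ (the exponential series, infinite radius):

```python
# Estimating the radius of convergence via r = lim |c_n / c_{n+1}|.
import math

def ratio_estimate(c, n):
    return abs(c(n) / c(n + 1))

c_poly = lambda n: n**2                      # sum n^2 z^n
print(ratio_estimate(c_poly, 10**6))         # ~1.0  ->  r = 1

c_exp = lambda n: 1.0 / math.factorial(n)    # sum z^n / n!
print(ratio_estimate(c_exp, 50))             # = n+1 = 51: grows without
                                             # bound, so r is infinite
```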

2. Asymptotic Expansions

An asymptotic expansion, asymptotic series or Poincaré expansion is a formal series of functions which has the property that truncating the series after a finite number of terms provides an approximation to a given function as the argument of the function tends towards a particular, often infinite, point.

Let $\varphi_n$ be a sequence of continuous functions on some domain, and let L be a (possibly infinite) limit point of the domain. Then the sequence constitutes an asymptotic scale (or gauge functions) if for every n,
$$\varphi_{n+1}(x) = o(\varphi_n(x)) \quad \text{as } x \to L.$$
If f is a continuous function on the domain of the gauge functions, an asymptotic expansion of f with respect to the scale is a formal series $\sum_{n=0}^{\infty} a_n \varphi_n(x)$ such that, for any fixed N,
$$f(x) = \sum_{n=0}^{N} a_n \varphi_n(x) + O(\varphi_{N+1}(x)) \quad \text{as } x \to L. \qquad (18)$$
In this case, we write
$$f(x) \sim \sum_{n=0}^{\infty} a_n \varphi_n(x) \quad \text{as } x \to L. \qquad (19)$$


The most common type of asymptotic expansion is a power series in either positive or negative powers. While a convergent Taylor series fits the definition as given, a non-convergent series is what is usually intended by the phrase. Methods of generating such expansions include the Euler-Maclaurin summation formula and integral transforms such as the Laplace and Mellin transforms. Repeated integration by parts will often lead to an asymptotic expansion.

Examples of asymptotic expansions:

• Gamma function:
$$\frac{e^x}{x^x \sqrt{2\pi x}}\,\Gamma(x+1) \sim 1 + \frac{1}{12x} + \frac{1}{288x^2} - \cdots \quad (x \to \infty) \qquad (20)$$

• Exponential integral:
$$x e^x E_1(x) \sim \sum_{n=0}^{\infty} \frac{(-1)^n n!}{x^n} \quad (x \to \infty) \qquad (21)$$

• Riemann zeta function:
$$\zeta(s) \sim \sum_{n=1}^{N-1} n^{-s} + \frac{N^{1-s}}{s-1} + N^{-s} \sum_{m=1}^{\infty} \frac{B_{2m}\, s^{\overline{2m-1}}}{(2m)!\, N^{2m-1}}, \qquad (22)$$
where $B_{2m}$ are Bernoulli numbers and $s^{\overline{2m-1}}$ is a rising factorial. This expansion is valid for all complex s and is often used to compute the zeta function by using a large enough value of N, for instance N > |s|.

• Error function:
$$\sqrt{\pi}\, x e^{x^2} \operatorname{erfc}(x) \sim 1 + \sum_{n=1}^{\infty} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} \quad (x \to \infty) \qquad (23)$$

We now move on to differential equations and proceed in the same manner, starting from the simplest problems and moving to more complex ones.


1. Introduction to Perturbation Techniques for Differential Equations

(http://www.sm.luth.se/~johanb/applmath/chap2en)

Consider the example of the system shown in the picture above. Let the mass of the earth be m. The motion of the earth around the sun is an ideal case, where we have no influences from other celestial bodies. The motion y = y(t) of the earth is governed by Newton's law

$$m\ddot{y} = F_{sun}. \qquad (24)$$

We now imagine the motion being perturbed by a comet passing close to the earth. Then the perturbed system will have an extra term:

$$m\ddot{y} = F_{sun} + \varepsilon F_{comet}. \qquad (25)$$

It is not unreasonable to assume that the solution of the perturbed problem might be of the form

$$y(t) = y_0(t) + \varepsilon y_1(t) + \varepsilon^2 y_2(t) + \cdots, \qquad (26)$$

where $y_0(t)$ is the solution of the unperturbed (original) problem and the additional terms are correction terms.

2. The Main Idea

Consider a second-order differential equation, written in compact form as
$$F(t, y, \dot{y}, \ddot{y}; \varepsilon) = 0, \quad 0 \le t \le t_0, \qquad (27)$$
where $\varepsilon$ is a small parameter, $0 < \varepsilon \ll 1$. We will try to solve this equation by assuming a straightforward perturbation expansion
$$y(t) = y_0(t) + \varepsilon y_1(t) + \varepsilon^2 y_2(t) + \cdots, \qquad (28)$$
determining the first few terms (say $y_0(t)$, $y_1(t)$, $y_2(t)$, ...) of the expansion, and then using the sum of the first few terms, say
$$y_{appr}(t) = y_0(t) + \varepsilon y_1(t) + \varepsilon^2 y_2(t), \qquad (29)$$
as an approximation to the solution of the differential equation. Here $y_0(t)$ is the leading or zeroth-order term, the solution of the unperturbed problem
$$F(t, y_0, \dot{y}_0, \ddot{y}_0; 0) = 0, \quad 0 \le t \le t_0. \qquad (30)$$
The terms $\varepsilon y_1(t), \varepsilon^2 y_2(t), \ldots$ are higher-order terms that are usually "small" in an asymptotic sense.

Remark: The unperturbed problem can in many cases be solved exactly.

3. Example: Motion of a Particle in a Nonlinear Resistive Medium

Suppose that a body with mass m and initial velocity $V_0$ moves in a fluid in rectilinear motion. Let the body experience resistance to its motion, with the resistance force governed by
$$F = -av + bv^2, \qquad (31)$$
where v = v(t), t > 0, is the speed of the body at time t and a and b are positive constants with $b \ll a$.

Newton's second law then gives the equation of motion for the body as
$$m\frac{dv}{dt} = -av + bv^2, \quad v(0) = V_0. \qquad (32)$$

Remark: If b = 0, then the equation is linear and its solution is
$$v(t) = V_0\, e^{-(a/m)t}. \qquad (33)$$

Scaling: Introduce the dimensionless variables
$$y = v/V_0, \quad x = t/(m/a). \qquad (34)$$
Then y is the non-dimensional speed and x is the non-dimensional time. By the chain rule, the time derivative of the speed transforms to
$$\frac{dv}{dt} = \frac{dv}{dy}\frac{dy}{dx}\frac{dx}{dt} = V_0\,\frac{a}{m}\,\frac{dy}{dx}, \qquad (35)$$
and thus the differential equation in terms of the non-dimensional variables is
$$\frac{dy}{dx} = -y + \varepsilon y^2, \quad x > 0; \qquad y(0) = 1, \qquad (36a,b)$$
where we have introduced a small parameter $\varepsilon$ by the definition $\varepsilon = bV_0/a \ll 1$.

Remark 1: The unperturbed problem is given by
$$\frac{dy}{dx} = -y, \quad x > 0; \qquad y(0) = 1, \qquad (37a,b)$$
and it has the solution $y(x) = e^{-x}$.

Remark 2: The perturbed or complete problem (equations (36a,b)) in the present case can actually be solved exactly without relying on any approximation techniques, and this exact solution is (see Chapter 2 notes)
$$y(x) = \frac{e^{-x}}{1 + \varepsilon(e^{-x} - 1)}. \qquad (38)$$

Perturbation Solution:

Consider now the perturbation approach to solving the above equation for small $\varepsilon$. The equation is
$$\frac{dy}{dx} = -y + \varepsilon y^2, \quad y(0) = 1. \qquad (39a,b)$$
We expand the solution $y(x, \varepsilon)$ in a perturbation series (in powers of $\varepsilon$ for sufficiently small $\varepsilon \ll 1$):
$$y(x, \varepsilon) = y_0(x) + \varepsilon y_1(x) + \varepsilon^2 y_2(x) + \cdots \qquad (40)$$

Substituting this expansion into the differential equation yields:
$$y_0'(x) + \varepsilon y_1'(x) + \varepsilon^2 y_2'(x) + \cdots = -[y_0(x) + \varepsilon y_1(x) + \varepsilon^2 y_2(x) + \cdots] + \varepsilon[y_0(x) + \varepsilon y_1(x) + \varepsilon^2 y_2(x) + \cdots]^2$$
$$= -y_0(x) + \varepsilon[-y_1(x) + y_0^2(x)] + \varepsilon^2[-y_2(x) + 2y_0(x)y_1(x)] + \cdots, \qquad (41)$$
where the last expression gives terms collected in powers of $\varepsilon$. Also inserting the solution expansion into the initial condition gives:
$$y_0(0) + \varepsilon y_1(0) + \varepsilon^2 y_2(0) + \cdots = 1. \qquad (42)$$

Now consider the limiting case, that is, let $\varepsilon \to 0$. Then $y_0(0) = 1$, that is,
$$1 + \varepsilon y_1(0) + \varepsilon^2 y_2(0) + \cdots = 1. \qquad (43)$$
Now let us subtract 1 on both sides and divide the remaining expression by $\varepsilon$ (this is allowed as $\varepsilon \ne 0$). We get
$$y_1(0) + \varepsilon y_2(0) + \varepsilon^2 y_3(0) + \cdots = 0. \qquad (44)$$
Again taking the limit $\varepsilon \to 0$, we get
$$y_1(0) = 0, \qquad (45)$$
and if we repeat this procedure, we get
$$y_1(0) = y_2(0) = y_3(0) = \cdots = 0. \qquad (46)$$


Now, comparing terms of various powers of ε on both sides of the differential equation (equation (41)) as well as the initial condition (equation (46)), we get the following sequence of equations and the associated initial conditions:

$\varepsilon^0$:
$$y_0'(x) = -y_0, \quad x > 0, \quad y_0(0) = 1 \quad\Rightarrow\quad y_0(x) = e^{-x}. \qquad (47)$$

$\varepsilon^1$:
$$y_1'(x) = -y_1 + y_0^2, \quad x > 0, \quad y_1(0) = 0,$$
or $y_1'(x) + y_1 = e^{-2x}$, $y_1(0) = 0$, so that
$$y_1(x) = e^{-x} - e^{-2x}. \qquad (48)$$

$\varepsilon^2$:
$$y_2'(x) = -y_2 + 2y_0 y_1, \quad x > 0, \quad y_2(0) = 0,$$
or $y_2'(x) + y_2 = 2(e^{-2x} - e^{-3x})$, $y_2(0) = 0$, so that
$$y_2(x) = e^{-x} - 2e^{-2x} + e^{-3x}. \qquad (49)$$

In the above, solutions of the sequence of equations for $y_0(x), y_1(x), y_2(x), \ldots$ are also given.

An approximate solution is thus
$$y_{appr}(x) = y_0 + \varepsilon y_1 + \varepsilon^2 y_2 = e^{-x} + \varepsilon(e^{-x} - e^{-2x}) + \varepsilon^2(e^{-x} - 2e^{-2x} + e^{-3x}). \qquad (50)$$

Recall that in the language of asymptotic expansions, this is a three-term solution.

Comparison with Exact Solution:

Recall that the exact solution of the system
$$\frac{dy}{dx} = -y + \varepsilon y^2, \quad y(0) = 1, \qquad (39a,b)$$
is given by
$$y(x) = \frac{e^{-x}}{1 + \varepsilon(e^{-x} - 1)}. \qquad (38)$$

This solution can be expanded in powers of $\varepsilon$ utilizing the binomial expansion
$$\frac{1}{1+z} = (1+z)^{-1} = 1 - z + z^2 - z^3 + \cdots \qquad (51)$$
for small values of z. Using $z = \varepsilon(e^{-x} - 1)$ in equation (51), we get
$$y_{exact}(x) = \frac{e^{-x}}{1 + \varepsilon(e^{-x} - 1)} = e^{-x}[1 - \varepsilon(e^{-x} - 1) + \varepsilon^2(e^{-x} - 1)^2 - \cdots]$$
$$= e^{-x} + \varepsilon(e^{-x} - e^{-2x}) + \varepsilon^2(e^{-x} - 2e^{-2x} + e^{-3x}) + \cdots \qquad (52)$$

Now compare this solution with the approximate three-term solution derived earlier (repeated here):
$$y_{appr}(x) = y_0 + \varepsilon y_1 + \varepsilon^2 y_2 = e^{-x} + \varepsilon(e^{-x} - e^{-2x}) + \varepsilon^2(e^{-x} - 2e^{-2x} + e^{-3x}). \qquad (50)$$
Clearly, the error E in the approximation is
$$E = y_{exact} - y_{appr}(x) = \varepsilon^3 m_1(x) + \varepsilon^4 m_2(x) + \cdots$$
for some functions $m_1(x), m_2(x), \ldots$. As we discussed in the context of algebraic equations, the error E is of order $\varepsilon^3$, and we write it as $E = O(\varepsilon^3)$, where O stands for big 'oh'.

4. A Nonlinear Oscillator

Consider a mass m that is suspended from a spring, where the restoring force F in the spring is related to the stretch in the spring by
$$F = ky(t) + ay^3(t), \quad a \ll 1. \qquad (53)$$
The distance y is the displacement of the mass particle, and there is no gravity (or the system is in a horizontal plane). Newton's second law then gives the equation of motion as
$$m\frac{d^2y}{dt^2} = -ky - ay^3, \qquad (54)$$
with the initial conditions
$$y(0) = A, \quad \frac{dy}{dt}(0) = 0, \qquad (55a,b)$$
that is, the particle is displaced initially a distance A and then released. To proceed with the analysis, it is convenient to first rescale the equation. This involves scaling of time as well as the displacement. Let
$$\tau = \frac{t}{\sqrt{m/k}}, \quad u = y/A. \qquad (56a,b)$$
The equation of motion and the initial conditions then transform to:
$$\frac{d^2u}{d\tau^2} + u + \varepsilon u^3 = 0, \quad u(0) = 1, \quad \frac{du}{d\tau}(0) = 0. \qquad (57a,b,c)$$
Here $\varepsilon \equiv aA^2/k \ll 1$, which is assumed to be the small parameter representing small nonlinear effects, and it is a dimensionless parameter. This is Duffing's equation, which we studied before when we considered second-order conservative systems. We will here try to solve this equation using the straightforward perturbation expansion. Let us assume the solution of Duffing's equation in the form of the straightforward expansion in powers of $\varepsilon$:
$$u(\tau, \varepsilon) = u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots \qquad (58)$$

Substituting in the rescaled differential equation (equation (57a)) as well as the initial conditions (equations (57b,c)) gives:
$$\frac{d^2}{d\tau^2}[u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots] + [u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots] + \varepsilon[u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots]^3 = 0,$$
$$u_0(0) + \varepsilon u_1(0) + \varepsilon^2 u_2(0) + \cdots = 1,$$
$$\dot{u}_0(0) + \varepsilon \dot{u}_1(0) + \varepsilon^2 \dot{u}_2(0) + \cdots = 0. \qquad (59a,b,c)$$

Now we compare equal powers of $\varepsilon$ in each of the expressions to get the sequence of initial value problems:

$\varepsilon^0$:
$$u_0''(\tau) + u_0(\tau) = 0, \quad u_0(0) = 1, \quad \dot{u}_0(0) = 0; \qquad (60a,b,c)$$
$$\Rightarrow\quad u_0(\tau) = \cos(\tau). \qquad (61)$$

$\varepsilon^1$:
$$u_1''(\tau) + u_1 + u_0^3 = 0, \quad u_1(0) = 0, \quad \dot{u}_1(0) = 0;$$
$$\Rightarrow\quad u_1''(\tau) + u_1 = -\cos^3(\tau) = -\frac{1}{4}[3\cos(\tau) + \cos(3\tau)], \quad u_1(0) = 0, \quad \dot{u}_1(0) = 0; \qquad (62a,b,c)$$
$$\Rightarrow\quad u_1(\tau) = \frac{1}{32}[\cos(3\tau) - \cos(\tau)] - \frac{3}{8}\tau\sin(\tau). \qquad (63)$$

For these equations (60) and (62) for the first two terms in the solution expansion, we have constructed the solutions given in equations (61) and (63). An approximate two-term solution (or a two-term approximation) is then
$$u_{appr} = \cos(\tau) + \varepsilon\left[\frac{\cos(3\tau) - \cos(\tau)}{32} - \frac{3}{8}\tau\sin(\tau)\right]. \qquad (64)$$

Note that:

(i) the leading term cos(τ) seems correct.

(ii) if τ < T₀ for some bounded constant T₀ and ε is "small", the correction term in the brackets is bounded and is "small".

(iii) if we let τ become large (τ → ∞), the correction term can be large even though ε is small. More specifically, the correction term (second term) becomes of the same order as the first term in the expansion. So the validity of this solution approximation depends on τ; that is, the expansion is non-uniform. Recall the situation in the algebraic case when a similar circumstance arose.

Remark: The problem in (iii) is due to the secular term
$$-\frac{3}{8}\tau\sin(\tau). \qquad (65)$$

There are many approaches to remove this difficulty. These include the method of Poincaré-Lindstedt, the method of multiple time-scales, the method of strained coordinates, the method of averaging, etc. We will study some of these methods now. All these techniques introduce a way to remove the terms that produce the secular term in the solution.
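The non-uniformity in (iii) is easy to see numerically: the two-term expansion (64) grows without bound because of the secular term, while the true solution of Duffing's equation stays bounded. A minimal sketch, with the "true" solution obtained from a hand-rolled classical RK4 integrator:

```python
# The straightforward two-term expansion (64) blows up via the secular term
# -3*eps*tau*sin(tau)/8, while the numerically integrated solution of
# u'' + u + eps*u^3 = 0, u(0)=1, u'(0)=0 stays bounded (energy conservation
# caps |u| at 1 for this initial condition).
import math

def duffing_max(eps, t_end, dt=1e-3):
    """Max |u| along the RK4 solution of u'' + u + eps*u^3 = 0 on [0, t_end]."""
    def f(u, v):
        return v, -(u + eps * u**3)
    u, v, t, u_max = 1.0, 0.0, 0.0, 1.0
    while t < t_end:
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + 0.5*dt*k1u, v + 0.5*dt*k1v)
        k3u, k3v = f(u + 0.5*dt*k2u, v + 0.5*dt*k2v)
        k4u, k4v = f(u + dt*k3u, v + dt*k3v)
        u += dt * (k1u + 2*k2u + 2*k3u + k4u) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
        u_max = max(u_max, abs(u))
    return u_max

def u_appr(tau, eps):
    return math.cos(tau) + eps * ((math.cos(3*tau) - math.cos(tau)) / 32
                                  - 3 * tau * math.sin(tau) / 8)

eps = 0.1
appr_max = max(abs(u_appr(0.01 * i, eps)) for i in range(5001))  # tau in [0, 50]
print(appr_max)                # ~1.8: the expansion has drifted far past
print(duffing_max(eps, 50.0))  # ~1.0: the true amplitude stays bounded
```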

5. Poincaré-Lindstedt method

Poincaré-Lindstedt's method is one of the many methods to avoid secular terms in the solution. The basic idea is to scale time through an unknown frequency ω, which is determined such that the solution becomes uniform. So we let
$$s = \omega\tau, \qquad (66)$$
where
$$\omega = 1 + \varepsilon\omega_1 + \varepsilon^2\omega_2 + \cdots, \qquad (67)$$
and then the solution is expressed in the straightforward expansion
$$u(s, \varepsilon) = u_0(s) + \varepsilon u_1(s) + \varepsilon^2 u_2(s) + \cdots \qquad (68)$$

Example: We apply this technique now through the example of Duffing's equation
$$\frac{d^2u}{d\tau^2} + u + \varepsilon u^3 = 0, \quad u(0) = 1, \quad \frac{du}{d\tau}(0) = 0 \qquad (69a,b,c)$$
again. Using the change of variables
$$s = \omega\tau, \quad \omega = 1 + \varepsilon\omega_1 + \varepsilon^2\omega_2 + \cdots, \qquad (70a,b)$$
the derivatives transform via the chain rule to
$$\frac{d}{d\tau} = \omega\frac{d}{ds}, \quad \frac{d^2}{d\tau^2} = \omega^2\frac{d^2}{ds^2}. \qquad (71)$$
The equations (69) are thus transformed to
$$\omega^2\frac{d^2u}{ds^2} + u + \varepsilon u^3 = 0, \quad u(0) = 1, \quad \omega\frac{du}{ds}(0) = 0. \qquad (72a,b,c)$$
Substituting the expressions for ω and u in these expressions, we get
$$(1 + \varepsilon\omega_1 + \cdots)^2[u_0'' + \varepsilon u_1'' + \cdots] + [u_0 + \varepsilon u_1 + \cdots] + \varepsilon[u_0 + \varepsilon u_1 + \cdots]^3 = 0 \qquad (73)$$
and
$$u_0(0) + \varepsilon u_1(0) + \cdots = 1, \quad u_0'(0) + \varepsilon u_1'(0) + \cdots = 0. \qquad (74a,b)$$

Now, comparing terms of equal powers of ε in equation (73) as well as (74a,b), we get for the first two powers:
$$\varepsilon^0: \quad u_0'' + u_0 = 0, \quad u_0(0) = 1, \quad u_0'(0) = 0 \quad\Rightarrow\quad u_0(s) = \cos(s); \qquad (75)$$
$$\varepsilon^1: \quad u_1'' + u_1 = -2\omega_1 u_0'' - u_0^3 = \left(2\omega_1 - \frac{3}{4}\right)\cos(s) - \frac{1}{4}\cos(3s), \quad u_1(0) = 0, \quad u_1'(0) = 0. \qquad (76)$$

The first right-hand (non-homogeneous) term in equation (76) is in resonance with the homogeneous solution of the differential equation. It would thus lead to a secular contribution proportional to s·sin(s) in the first correction term, which is of order ε. This can be avoided by setting the coefficient of this resonant term in equation (76) to zero. Thus, choosing
$$\omega_1 = \frac{3}{8}, \qquad (77)$$
we see that we can avoid the secular term in the solution at order ε. Equation (76) then reduces to
$$u_1'' + u_1 = -\frac{1}{4}\cos(3s), \quad u_1(0) = 0, \quad u_1'(0) = 0, \qquad (78)$$
with the solution
$$u_1(s) = \frac{1}{32}[\cos(3s) - \cos(s)]. \qquad (79)$$
Combining the solutions for the two terms, we have a first-order perturbation solution of Duffing's equation (equations (57)):
$$u(\tau) \approx \cos(s) + \frac{\varepsilon}{32}[\cos(3s) - \cos(s)], \qquad (80)$$
where, recalling that $s = \omega\tau$ and $\omega = 1 + \varepsilon\omega_1 + \cdots$, we have
$$s = \left(1 + \frac{3}{8}\varepsilon\right)\tau. \qquad (81)$$
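A numerical check of the first-order result in equations (80) and (81): the frequency-corrected approximation stays close to a high-accuracy integration of Duffing's equation over many periods, precisely where the naive expansion had already broken down. The RK4 integrator below is hand-rolled for self-containment:

```python
# Poincare-Lindstedt approximation (corrected frequency 1 + 3*eps/8)
# versus an RK4 integration of u'' + u + eps*u^3 = 0, u(0)=1, u'(0)=0.
import math

def duffing_samples(eps, taus, dt=1e-3):
    """RK4 solution sampled (to within one step) at the increasing times taus."""
    def f(u, v):
        return v, -(u + eps * u**3)
    u, v, t = 1.0, 0.0, 0.0
    out, i = [], 0
    while i < len(taus):
        if t >= taus[i]:
            out.append(u)
            i += 1
            continue
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + 0.5*dt*k1u, v + 0.5*dt*k1v)
        k3u, k3v = f(u + 0.5*dt*k2u, v + 0.5*dt*k2v)
        k4u, k4v = f(u + dt*k3u, v + dt*k3v)
        u += dt * (k1u + 2*k2u + 2*k3u + k4u) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
    return out

def u_pl(tau, eps):
    s = (1.0 + 3.0 * eps / 8.0) * tau
    return math.cos(s) + eps * (math.cos(3*s) - math.cos(s)) / 32.0

eps = 0.1
taus = [5.0 * k for k in range(1, 11)]     # sample out to tau = 50
exact = duffing_samples(eps, taus)
err_pl = max(abs(u_pl(t, eps) - ue) for t, ue in zip(taus, exact))
print(err_pl)   # stays small out to tau = 50, unlike the naive expansion
```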

6. Order-Notation.

We write
$$f(\varepsilon) = o(g(\varepsilon)) \quad \text{as } \varepsilon \to 0 \qquad (82)$$
if
$$\lim_{\varepsilon \to 0} \frac{f(\varepsilon)}{g(\varepsilon)} = 0. \qquad (83)$$
Then we say that f is small-order of g as ε goes to 0. This small 'o' is also called small 'oh'. We write
$$f(\varepsilon) = O(g(\varepsilon)) \quad \text{as } \varepsilon \to 0 \qquad (84)$$
if there is a positive constant M such that
$$|f(\varepsilon)| \le M|g(\varepsilon)| \qquad (85)$$
for all ε belonging to some neighborhood of 0. One says that f is large-order of g as ε goes to 0. This large 'O' is also called big 'oh'. Example:

(86) since

(87) Example:

(88) since

(89)


for all ε in such a neighborhood. However, we don't have that

(90) since

(91)

7. Regular perturbation does not always work

Example: Consider the boundary value problem that depends on a small parameter ε:
$$\varepsilon y'' + (1 + \varepsilon)y' + y = 0, \quad 0 < t < 1; \quad y(0) = 0, \quad y(1) = 1. \qquad (92a,b,c)$$
Let us assume a solution for y = y(t, ε) in the form of a straightforward expansion,
$$y(t, \varepsilon) = y_0(t) + \varepsilon y_1(t) + \varepsilon^2 y_2(t) + \cdots, \qquad (93)$$
and substitute in the equations (92) to get the differential equation
$$\varepsilon[y_0'' + \varepsilon y_1'' + \cdots] + (1 + \varepsilon)[y_0' + \varepsilon y_1' + \cdots] + [y_0 + \varepsilon y_1 + \cdots] = 0 \qquad (94)$$
and the boundary conditions
$$y_0(0) + \varepsilon y_1(0) + \cdots = 0, \quad y_0(1) + \varepsilon y_1(1) + \cdots = 1. \qquad (95)$$
Comparing for different powers of ε, we get at the lowest order:
$$y_0' + y_0 = 0, \quad y_0(0) = 0, \quad y_0(1) = 1. \qquad (96a,b,c)$$

Note that:

(i) the general solution of the differential equation in equation (96a), which is a first-order equation as opposed to the original second-order system, is
$$y_0(t) = Ce^{-t}; \qquad (97)$$

(ii) the boundary condition at t = 0 is
$$y_0(0) = 0, \qquad (98)$$
which gives the solution
$$y_0(t) = 0. \qquad (99)$$
Note that the boundary value at t = 1,
$$y_0(1) = 1, \qquad (100)$$
is then not fulfilled;

(iii) using instead the boundary value at t = 1,
$$y_0(1) = 1, \qquad (101)$$
we get the solution
$$y_0(t) = e^{1-t}, \qquad (102)$$
but then the boundary value at t = 0,
$$y_0(0) = 0, \qquad (103)$$
is not fulfilled.

This shows that the regular perturbation expansion does not work for this problem, at least in this way.

8. Inner and outer approximations.

Now we want to discover the exact nature of the solution of the boundary value problem in equations (92a,b,c), which will also allow us to understand why the above perturbation approach failed. Again consider the problem
$$\varepsilon y'' + (1 + \varepsilon)y' + y = 0, \quad 0 < t < 1; \quad y(0) = 0, \quad y(1) = 1. \qquad (92a,b,c)$$

Note that this is a linear second-order differential equation with constant coefficients, so the solution should be easy for you to derive. It can be shown that the exact solution y = y(t) is
$$y(t) = \frac{e^{-t} - e^{-t/\varepsilon}}{e^{-1} - e^{-1/\varepsilon}}. \qquad (104)$$
This exact solution is plotted in the figure below. Now we consider the perturbation techniques.

1) Let ε → 0. Then, as seen above, we get the unperturbed problem
$$y' + y = 0, \quad 0 < t < 1. \qquad (105)$$
The solution of the differential equation is
$$y(t) = Ce^{-t}. \qquad (106)$$
The solution that fulfills the boundary condition at t = 1 is
$$y_o(t) = e^{1-t}. \qquad (107)$$

This solution is called the outer solution; it agrees well with the exact solution for "large" t (t close to 1), as shown in the figure. Clearly, for small t, the outer solution deviates significantly from the true solution.

2) For "small" t (t close to 0), a good approximation to the solution (called the inner approximation) is
$$y_i(t) = e(1 - e^{-t/\varepsilon}). \qquad (108)$$
This solution is also plotted in the figure. Clearly, it approximates the exact solution near t = 0 but fails outside this neighborhood.

9. Singular perturbation - when does regular perturbation not work?

We saw in the case of algebraic equations that there can be many reasons for the failure of straightforward expansions. These include situations when:

1) the highest-order derivative is multiplied by ε;
2) the problem totally changes character when the parameter is set equal to zero;
3) the problem is defined over an infinite domain;
4) singular points are present in the domain;
5) the equation models physical processes with several time or length scales.

Problems of types 1-5 are called singular perturbation problems.

In many cases one deals with problems containing boundary layers. We can roughly treat these problems by:

i) letting ε → 0 to get a good approximation for the outer region;
ii) rescaling the problem to get an inner approximation;
iii) matching the inner and outer approximations.

The result of this procedure is a matched asymptotic expansion. We now elaborate on these steps for the example under consideration in the following sections.

10. The outer approximation

We get the outer approximation by substituting ε = 0 in the equations (92a,b,c). In our last example, that implied that we should solve the equation
$$y' + y = 0, \quad y(1) = 1. \qquad (109)$$
This has the solution
$$y_o(t) = e^{1-t} \qquad (110)$$
(here the subscript 'o' refers to 'outer'), which agrees well with the exact solution
$$y(t) = \frac{e^{-t} - e^{-t/\varepsilon}}{e^{-1} - e^{-1/\varepsilon}}, \qquad (111)$$
since
$$e^{-t/\varepsilon} \to 0 \quad \text{as } \varepsilon \to 0 \text{ for fixed } t > 0 \qquad (112)$$
for small ε. Continuing further, if t = O(1), we have that $e^{-t/\varepsilon} = o(\varepsilon^n)$ for every n > 0, which implies that
$$y(t) = y_o(t) + o(\varepsilon^n) \quad \text{for every } n > 0. \qquad (113)$$

11. The inner approximation.

To understand how the solution behaves near t = 0, we rescale the problem by putting
$$\tau = \frac{t}{\delta(\varepsilon)} \qquad (114)$$
in the original boundary value problem
$$\varepsilon y'' + (1 + \varepsilon)y' + y = 0, \quad y(0) = 0, \quad y(1) = 1. \qquad (92a,b,c)$$

Let us define
$$z(\tau) = y(\delta\tau), \qquad (115)$$
so that we get
$$\frac{dy}{dt} = \frac{1}{\delta}\frac{dz}{d\tau}, \quad \frac{d^2y}{dt^2} = \frac{1}{\delta^2}\frac{d^2z}{d\tau^2}. \qquad (116)$$
The equation then transforms to
$$\frac{\varepsilon}{\delta^2}z'' + \frac{1 + \varepsilon}{\delta}z' + z = 0. \qquad (117)$$
Consider the coefficients of the three terms in the equation above:
$$\frac{\varepsilon}{\delta^2}, \quad \frac{1 + \varepsilon}{\delta}, \quad 1. \qquad (118)$$
The problem in the original equation is that the coefficient of the highest derivative y'' is small compared to the others. To avoid this problem, we choose the main coefficient to be of the same order as one of the other coefficients, with the remaining coefficient comparably small. We demonstrate this 'dominant terms argument' (previously considered as well in the algebraic equations and in differential equations) with the procedure below (recall that δ is small):

Case 1) $\varepsilon/\delta^2 \sim (1+\varepsilon)/\delta$, i.e. $\delta \sim \varepsilon$: the two balanced coefficients are of order $1/\varepsilon$, and the third coefficient, 1, is much smaller.

Case 2) $\varepsilon/\delta^2 \sim 1$, i.e. $\delta \sim \sqrt{\varepsilon}$: the remaining coefficient $(1+\varepsilon)/\delta \sim 1/\sqrt{\varepsilon}$ then dominates the two balanced ones, so this balance is inconsistent.

Case 3) $(1+\varepsilon)/\delta \sim 1$, i.e. $\delta \sim 1$: no rescaling occurs, and the highest-derivative coefficient remains small, which is the original difficulty.

We see that we have only one possibility, Case 1, where the main coefficient is at least as large as the remaining coefficients. We therefore choose
$$\delta = \varepsilon. \qquad (119)$$
The transformed equation then becomes
$$z'' + (1 + \varepsilon)z' + \varepsilon z = 0. \qquad (120)$$
On this expanded time interval of τ = t/ε, ε appears in equation (120) as a regular perturbation parameter, so we can again use a straightforward expansion for the solution z(τ). Thus, at the lowest order, we put ε = 0 to get the equation
$$z'' + z' = 0, \qquad (121)$$
with the solution
$$z(\tau) = C + De^{-\tau}. \qquad (122)$$

Note that this is a second-order equation, so there are two constants of integration. Thus, both boundary conditions can be used. The boundary condition z(0) = y(0) = 0 yields
$$z(\tau) = a(1 - e^{-\tau}). \qquad (123)$$
Thus, this solution, also called the inner approximation, is
$$y_i(t) = a(1 - e^{-t/\varepsilon}). \qquad (124)$$
The remaining problem is to determine the constant a and match the inner and outer approximations.

12. Matching.

The inner and outer solutions, as well as their regions of applicability, are shown in the figure below.

In the overlapping region we let
$$\varepsilon \to 0, \quad t \to 0, \quad \tau = t/\varepsilon \to \infty \qquad (125)$$
and introduce the intermediate variable
$$\eta = \frac{t}{\sqrt{\varepsilon}}. \qquad (126)$$
This is a time scale that is "between" the inner time scale
$$\tau = \frac{t}{\varepsilon} \qquad (127)$$
and the outer time scale
$$t, \qquad (128)$$
and thus the name intermediate time scale. To be able to match the approximations, we require that the outer and inner approximations (written with respect to the intermediate variable) must agree in the limit as ε → 0; that is,
$$\lim_{\varepsilon \to 0} y_i(\sqrt{\varepsilon}\,\eta) = \lim_{\varepsilon \to 0} y_o(\sqrt{\varepsilon}\,\eta) \qquad (129)$$
for a fixed (positive) value of η, the intermediate variable. In our case, this means that
$$\lim_{\varepsilon \to 0} a\left(1 - e^{-\eta/\sqrt{\varepsilon}}\right) = a = \lim_{\varepsilon \to 0} e^{1 - \sqrt{\varepsilon}\,\eta} = e. \qquad (130)$$
This implies that a = e and that the inner approximation therefore must be
$$y_i(t) = e(1 - e^{-t/\varepsilon}). \qquad (131)$$
This is known as the matching principle in the method of matched asymptotic expansions.

Finally, we want to find a solution that is valid on the whole interval [0,1]. We therefore construct $y_u$ from the inner and outer approximations minus their common limit e (since it otherwise would be counted twice) in the overlapping region:
$$y_u(t) = y_o(t) + y_i(t) - e = e^{1-t} + e(1 - e^{-t/\varepsilon}) - e = e^{1-t} - e^{1-t/\varepsilon}. \qquad (132)$$

When t is in the outer region, the second term is small and $y_u$ is thus approximately $e^{1-t}$, which is exactly the outer approximation. When t is in the boundary layer, the first term is close to e and $y_u$ is thus approximately $e(1 - e^{-t/\varepsilon})$, which is exactly the inner approximation. In the overlapping region, both the inner and the outer approximations are approximately equal to e, which makes the sum of $y_i$ and $y_o$ close to 2e there, that is, twice as much as it should be. That is why we have to subtract the common limit from the sum. If we insert $y_u$ in the original differential equation, we see that
$$\varepsilon y_u'' + (1 + \varepsilon)y_u' + y_u = 0, \qquad (133)$$
that is, $y_u$ satisfies the equation exactly on the interval (0,1). If we investigate the boundary conditions, we see that
$$y_u(0) = 0, \quad y_u(1) = 1 - e^{1 - 1/\varepsilon}. \qquad (134)$$

The left condition is exactly fulfilled, and the right is fulfilled up to $o(\varepsilon^n)$ for any n > 0, since $e^{-1/\varepsilon}$ tends to zero faster than any power of ε. We thus see that $y_u$ is a good approximation on the whole interval [0,1].

13. Another example of singular perturbations.

Example: Consider the differential equation

(135)

This again is a boundary value problem on a bounded domain with ε as a small parameter.

We get the outer approximation by putting ε = 0 and solving the resulting equation with the outer boundary condition:

(136)

that is,

(137)

We now look for the inner approximation. Let us define the inner variable by the scaling τ = t/δ(ε). The equation is then transformed to

(138)

where δ is assumed to be small. Comparing the leading coefficient with the other coefficients:

Case 1):

Case 2):

We see that in Case 2 the remaining coefficient is much smaller than the other two. We therefore choose δ accordingly and make the change of variables. Equation (138) then becomes

(139)

We get the inner approximation (at the lowest order) when we put ε = 0 and solve the equation:

(140)

which in the original variables is

(141)

The constants a and b need to be determined, first by imposing the boundary condition at t = 0, and then by matching the inner and outer solutions at an intermediate scale. The condition y(0) = 1 gives

(142)

Let us match these approximations in equations (137) and (142). Let us introduce the intermediate variable η. The matching condition

(143)

then implies that

(144)

This fixes the remaining constant, and the inner approximation is thereby determined. The final composite approximation $y_u$ is obtained by adding the inner and outer approximations and subtracting their common limit in the overlapping region:

(145)