
MODULE 14

Time-dependent Perturbation Theory

In order to understand the properties of molecules it is necessary to see how systems respond to newly imposed perturbations on their way to settling into different stationary states. This is particularly true for molecules that are exposed to electromagnetic radiation, where the perturbation is a rapidly oscillating electromagnetic field. Time-dependent perturbation theory allows us to calculate transition probabilities and rates, very important properties in the Photosciences.

The approach we shall use mirrors that for the time-independent situation, viz., we examine a two-state system in detail and then outline the general case of arbitrary complexity.

The two-state system

We can write the total hamiltonian for the system as the sum of a stationary-state term and a time-dependent perturbation

  H = H^(0) + H^(1)(t)        (14.1)

and suppose that the perturbation is one that oscillates at an angular frequency ω; then

  H^(1)(t) = 2H^(1) cos ωt = H^(1)(e^{iωt} + e^{-iωt})        (14.2)

where H^(1) is the hamiltonian for a time-independent perturbation and the factor 2 is for later convenience. Our time-dependent Schrödinger equation is

  iħ ∂Ψ/∂t = HΨ        (14.3)

As before we consider a pair of eigenstates 1 and 2 representing the (time-independent) wavefunctions ψ_1 and ψ_2, with energy eigenvalues E_1 and E_2. These wavefunctions are the solutions of the equation

  H^(0)ψ_n = E_n ψ_n        (14.4)

and are related to the time-dependent functions according to

  Ψ_n(t) = ψ_n e^{-iE_n t/ħ}        (14.5)
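These relations can be illustrated with a toy two-level calculation, a sketch assuming ħ = 1, arbitrary eigenvalues, and with the spatial parts of the wavefunctions suppressed: a single stationary state has a time-independent probability density, while a superposition beats at the frequency (E_2 - E_1)/ħ.

```python
import numpy as np

E1, E2 = 1.0, 3.0                  # arbitrary eigenvalues (hbar = 1 assumed)
t = np.linspace(0.0, 10.0, 1001)

# Stationary state: Psi_1(t) = psi_1 exp(-i E1 t); |Psi_1|^2 is constant.
psi1_t = np.exp(-1j * E1 * t)
print(np.ptp(np.abs(psi1_t) ** 2))      # ~0: no time dependence

# Equal superposition (spatial parts suppressed): the density beats
# at the difference frequency, |Psi|^2 = 1 + cos((E2 - E1) t).
psi_t = (np.exp(-1j * E1 * t) + np.exp(-1j * E2 * t)) / np.sqrt(2)
density = np.abs(psi_t) ** 2
print(np.allclose(density, 1 + np.cos((E2 - E1) * t)))
```

The beat frequency is exactly the ω_21 that appears throughout the derivation below.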


When the perturbation is acting, the state of the system can be expressed as a linear combination of the basis functions defined by the set in equation (14.5). We confine ourselves to just two of them and the linear combination becomes

  Ψ(t) = a_1(t)Ψ_1(t) + a_2(t)Ψ_2(t)        (14.6)

Notice that we are allowing the coefficients to be time-dependent because the composition of the state may evolve with time. The overall time dependence of the wavefunction is partly due to the changes in the basis functions and partly due to the way the coefficients change with time. At any time t, the probability that the system is in state n is given by |a_n(t)|². Substitution of equation (14.6) into the Schrödinger equation (14.3) leads to the following:

  iħ(ȧ_1Ψ_1 + a_1 ∂Ψ_1/∂t + ȧ_2Ψ_2 + a_2 ∂Ψ_2/∂t) = (H^(0) + H^(1)(t))(a_1Ψ_1 + a_2Ψ_2)        (14.7)

Now each of the basis functions satisfies the time-dependent Schrödinger equation, viz.

  iħ ∂Ψ_n/∂t = H^(0)Ψ_n        (14.8)

so the last equation in (14.7) simplifies down to

  iħ(ȧ_1Ψ_1 + ȧ_2Ψ_2) = H^(1)(t)(a_1Ψ_1 + a_2Ψ_2)        (14.9)

where the dots over the coefficients signify their time derivatives. Now we expand the last equation by explicitly writing the time dependence of the wavefunctions:

  iħ(ȧ_1 ψ_1 e^{-iE_1 t/ħ} + ȧ_2 ψ_2 e^{-iE_2 t/ħ}) = a_1 H^(1)(t)ψ_1 e^{-iE_1 t/ħ} + a_2 H^(1)(t)ψ_2 e^{-iE_2 t/ħ}        (14.10)

Now multiplying from the left by the bra ⟨ψ_1| and using the orthonormality relationship we find

  iħ ȧ_1 = a_1 H_11(t) + a_2 H_12(t) e^{-i(E_2 - E_1)t/ħ}        (14.11)

where the matrix elements H_mn(t) = ⟨ψ_m|H^(1)(t)|ψ_n⟩ are defined in the usual way. Now we write ħω_21 = E_2 - E_1, when we find that (14.11) becomes

  iħ ȧ_1 = a_1 H_11(t) + a_2 H_12(t) e^{-iω_21 t}        (14.12)


It is not unusual to find that the perturbation has no diagonal elements, so we put

  H_11(t) = H_22(t) = 0        (14.13)

and then equation (14.12) reduces to

  iħ ȧ_1 = a_2 H_12(t) e^{-iω_21 t}        (14.14)

or, rearranging,

  ȧ_1 = (1/iħ) a_2 H_12(t) e^{-iω_21 t}

This provides us with a first-order differential equation for one of the coefficients, but it contains the other one. We proceed exactly as above to obtain an equation for the other coefficient:

  ȧ_2 = (1/iħ) a_1 H_21(t) e^{+iω_21 t}        (14.15)

In the absence of any perturbation the two matrix elements are both equal to zero, and the time derivatives of the two coefficients also become zero. In that case the coefficients retain their initial values and, even though the wavefunction Ψ(t) still oscillates with time through the phase factors e^{-iE_n t/ħ}, the system remains frozen in its initially prepared state.

Now suppose that a constant perturbation, V, is applied at some starting time and continued until a time t later. To allow this let us write

  H_12 = ħV,   H_21 = ħV*        (14.16)

where we have invoked the Hermiticity of the hamiltonian. Then

  ȧ_1 = -iV a_2 e^{-iω_21 t},   ȧ_2 = -iV* a_1 e^{+iω_21 t}        (14.17)

This is a pair of coupled differential equations, which can be solved by one of several methods. One solution is

  a_2(t) = (A e^{iΩt} + B e^{-iΩt}) e^{iω_21 t/2}        (14.18)

where A and B are constants that are determined by the initial conditions and Ω is given by

  Ω = ½(ω_21² + 4|V|²)^{1/2}        (14.19)

A similar expression holds for a_1.


If, prior to the perturbation being switched on, the system is definitely in state 1, then a_1(0) = 1 and a_2(0) = 0. These initial conditions allow us to find A and B, and eventually we can arrive at two particular solutions, one for a_1(t) and one for a_2(t). These are

  a_1(t) = [cos Ωt + (iω_21/2Ω) sin Ωt] e^{-iω_21 t/2}        (14.20)

and

  a_2(t) = -(iV*/Ω) sin(Ωt) e^{+iω_21 t/2}        (14.21)

The probability of finding the system in state 2 (initially equal to zero) after turning on the perturbation is given by P_2(t) = |a_2(t)|², and by substitution of equation (14.21) we are led to the Rabi formula

  P_2(t) = [4|V|²/(ω_21² + 4|V|²)] sin²(½(ω_21² + 4|V|²)^{1/2} t)        (14.22)

Now suppose the pair of states is degenerate; then ω_21 = 0 and

  P_2(t) = sin²(|V|t)        (14.23)

This function has the form shown in Figure 14.1.

Figure 14.1: three plots of the function in equation (14.23), with |V| = 1 (g), 2 (f), and 3 (k)


The system oscillates between the two states, spending as much time in 1 as in 2. We also see

that the frequency of the oscillation depends on the strength of the perturbation, V, so that strong

perturbations drive the system between the two states more rapidly than do weak perturbations.

Such systems are described as ‘loose’, because even weak perturbations can drive them

completely from one extreme to the other.

The opposite situation is different, i.e. where the states are far apart in energy in comparison to ħ|V|. Then ω_21² >> 4|V|², so that Ω ≈ ½ω_21, and

  P_2(t) ≈ (4|V|²/ω_21²) sin²(½ω_21 t)        (14.24)

Again we find oscillation, but now the probability of finding state 2 is never higher than 4|V|²/ω_21², which is much less than unity. Here the probability of the perturbation driving the system into state 2 is very small. Furthermore, notice that the oscillation frequency is governed by the energy separation and is independent of the perturbation strength. The only role of the perturbation is to govern the fraction of the system found in state 2: the probability is higher at larger perturbations.
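The two-state results above are easy to verify numerically. The sketch below (assuming ħ = 1, a real perturbation strength V, and a hand-rolled fixed-step RK4 integrator) integrates the coupled equations for ȧ_1 and ȧ_2 and compares |a_2(t)|² with the Rabi formula.

```python
import numpy as np

# Integrate the coupled two-state equations (hbar = 1, real V assumed):
#   da1/dt = -i V a2 exp(-i w21 t),   da2/dt = -i V a1 exp(+i w21 t)
# with a fixed-step RK4 stepper, then compare |a2(t)|^2 with the Rabi formula
#   P2(t) = [4 V^2 / (w21^2 + 4 V^2)] sin^2( (1/2) sqrt(w21^2 + 4 V^2) t ).

def p2_numeric(V, w21, t_final, n_steps=20000):
    """|a2(t_final)|^2 from direct integration, starting in state 1."""
    a = np.array([1.0 + 0j, 0.0 + 0j])          # a1(0) = 1, a2(0) = 0
    dt = t_final / n_steps

    def deriv(t, a):
        return np.array([-1j * V * a[1] * np.exp(-1j * w21 * t),
                         -1j * V * a[0] * np.exp(+1j * w21 * t)])

    t = 0.0
    for _ in range(n_steps):
        k1 = deriv(t, a)
        k2 = deriv(t + 0.5 * dt, a + 0.5 * dt * k1)
        k3 = deriv(t + 0.5 * dt, a + 0.5 * dt * k2)
        k4 = deriv(t + dt, a + dt * k3)
        a = a + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return abs(a[1]) ** 2

def p2_rabi(V, w21, t):
    """Rabi formula for the same initial conditions."""
    W2 = w21 ** 2 + 4 * V ** 2
    return (4 * V ** 2 / W2) * np.sin(0.5 * np.sqrt(W2) * t) ** 2

if __name__ == "__main__":
    # Degenerate case: oscillation frequency is set by V alone.
    print(p2_numeric(1.0, 0.0, 1.2), np.sin(1.2) ** 2)
    # Detuned case: amplitude is suppressed to 4V^2/(w21^2 + 4V^2).
    print(p2_numeric(1.0, 3.0, 2.0), p2_rabi(1.0, 3.0, 2.0))
```

For V = 1 and ω_21 = 3 the numerical and analytic probabilities agree to many decimal places, and P_2 never exceeds 4V²/(ω_21² + 4V²) ≈ 0.31, illustrating the "stiff" far-detuned behaviour described above.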

Many-level systems

For the many-level system we need to expand the wavefunction as a linear combination of all the n states contained in equation (14.4). This leads to very complicated equations when combined with perturbations that are more complex than the time-independent one used above. To approach this we use the approximation technique called "variation of constants", invented by Dirac. The problem we are faced with is to find out how the linear combination varies with

time under the influence of realistic perturbations. We base the approximation on the condition

that the perturbation is weak and applied for such a short time that all the coefficients remain

close to their initial values.

Eventually it can be shown that

  a_f(t) = (1/iħ) ∫₀ᵗ H_fi(t′) e^{iω_fi t′} dt′        (14.25)


where a_f is the coefficient of the final state, which was initially unoccupied. This approximation ignores the possibility that the route between initial and final states runs via other states, i.e. it accounts only for the direct route. It is a first-order theory because the perturbation is applied once and only once.
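The first-order ("direct route") result can be compared against the exact two-state solution for a constant weak perturbation. In the sketch below (ħ = 1 and illustrative values of V and ω_fi are assumed), the time integral of a constant perturbation gives |a_f|² = 4V² sin²(ω_fi t/2)/ω_fi², which should agree with the exact Rabi probability when V is small.

```python
import numpy as np

# First-order coefficient for a constant perturbation of strength V (hbar = 1):
# integrating V exp(i w_fi t') from 0 to t gives
#   |a_f(t)|^2 = 4 V^2 sin^2(w_fi t / 2) / w_fi^2 .
def p_first_order(V, wfi, t):
    return 4 * V ** 2 * np.sin(0.5 * wfi * t) ** 2 / wfi ** 2

# Exact two-state (Rabi) result for the same problem, for comparison.
def p_exact(V, wfi, t):
    W2 = wfi ** 2 + 4 * V ** 2
    return (4 * V ** 2 / W2) * np.sin(0.5 * np.sqrt(W2) * t) ** 2

V, wfi, t = 0.01, 2.0, 1.5   # weak perturbation, far from degeneracy
print(p_first_order(V, wfi, t), p_exact(V, wfi, t))
```

With V = 0.01 and ω_fi = 2 the two results differ only in the eighth decimal place; the first-order theory fails, as expected, only when V becomes comparable to ω_fi.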

Now we can use the expression in (14.25) to see how a system will behave when exposed to an

oscillatory perturbation, such as light. First we consider transitions between a pair of discreet

states and . Then we shall embed the final state into a continuum of states.

Suppose our perturbation oscillates with angular frequency ω = 2πν and is turned on at t = 0. Then its hamiltonian has the form

  H^(1)(t) = 2H^(1) cos ωt = H^(1)(e^{iωt} + e^{-iωt})        (14.26)

(Now you see the reason for the factor of 2 in the expansion.) Putting this expression into equation (14.25) yields

  a_f(t) = -(H_fi/ħ)[(e^{i(ω_fi + ω)t} - 1)/(ω_fi + ω) + (e^{i(ω_fi - ω)t} - 1)/(ω_fi - ω)]        (14.27)

where ω_fi = (E_f - E_i)/ħ. This looks obscure, but the simplification is very straightforward. In electronic spectroscopy and photoexcitation the frequencies ω and ω_fi involved are very high, of the order of 10^15 s^-1. Thus the first term in the brackets is exceedingly small, whereas the second term, in which the denominator can approach zero, can be very large. It is therefore possible to ignore the first term. Then the probability of finding the system in state f after a time t, when it was initially completely in state i, becomes

  P_f(t) = |a_f(t)|² = (4|H_fi|²/ħ²) sin²(½(ω_fi - ω)t)/(ω_fi - ω)²        (14.28)

Using the relationship employed earlier, H_fi = ħV_fi, we rewrite equation (14.28) as

  P_f(t) = 4|V_fi|² sin²(½(ω_fi - ω)t)/(ω_fi - ω)²        (14.29)


This expression looks very much like equation (14.24) that was developed for a static perturbation on the two-level system, the difference being that ω_21 is here replaced by (ω_fi - ω), which is termed the frequency offset. Equation (14.29) informs us that not only is the amplitude of the transition probability dependent on the frequency offset, so is its time dependence. Both of these increase as ω approaches ω_fi, and when the difference becomes zero the transition probability is at a maximum; at this point the radiation field and the system are in resonance. Evaluating the limit of equation (14.29) as the two frequencies approach each other we find that

  P_f(t) → |V_fi|² t²        (14.30)

and the probability of the transition occurring as a result of the perturbation increases quadratically with time. The shape of the function in equation (14.28) and the time dependence of its amplitude at four times are shown in Figure 14.2.

Figure 14.2: Plots of equation (14.28) at four different times (1, 2, 3, and 4) showing the quadratic dependence of the amplitude on time. The vertical scale is arbitrary.
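The resonance behaviour just described can be checked directly from the probability expression. A minimal sketch, assuming ħ = 1 and an illustrative matrix element V:

```python
import numpy as np

# P_f(t) = 4 |V|^2 sin^2( offset * t / 2 ) / offset^2, offset = w_fi - w.
def p_f(V, offset, t):
    if offset == 0.0:
        return (V * t) ** 2        # resonant limit: quadratic growth in time
    return 4 * V ** 2 * np.sin(0.5 * offset * t) ** 2 / offset ** 2

V = 0.5
# On resonance the probability grows as |V|^2 t^2 ...
print([p_f(V, 0.0, t) for t in (1.0, 2.0, 3.0)])   # [0.25, 1.0, 2.25]
# ... while off resonance it stays bounded by 4 V^2 / offset^2.
print(p_f(V, 5.0, 2.0), 4 * V ** 2 / 5.0 ** 2)
```

The first list shows the quadratic growth of the peak (the behaviour plotted in Figure 14.2), while the off-resonance value never exceeds the 4|V|²/offset² envelope.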


Transitions to states in a continuum

In the previous section the transition was between a pair of discrete states; now we consider the case where the final state is part of a continuum of states, close to each other in energy. We can still use equation (14.29) to estimate the probability of promoting the system to one member of the continuum, but now we need to integrate over all the transition probabilities that the perturbation can induce in the system. We need to define a density of states, ρ(E), which relates to the number of states accessible as a result of the perturbation; then ρ(E)dE is the number of final states in the range E to E + dE that are able to be reached. Then the total transition probability is given by

  P(t) = ∫ P_f(t) ρ(E) dE        (14.31)

Now we use equation (14.29) and the relation dE = ħ dω_fi to arrive at the following:

  P(t) = 4ħ ∫ |V_fi|² ρ(E) sin²(½(ω_fi - ω)t)/(ω_fi - ω)² dω_fi        (14.32)

Now we set about simplifying this expression. First, we recognize that the quotient in the integrand, which is equivalent to a function of the form sin²x/x², is very sharply peaked when ω_fi ≈ ω, the radiation frequency. This means that we can sensibly restrict ourselves to considering only those states that have significant transition probabilities, i.e. those with ω_fi ≈ ω. Because of this we can evaluate the density of states in the neighborhood of ω_fi = ω and treat it as a constant. The other effect of this is that, although the matrix elements depend on E, so narrow a range of energies contributes to the integral that we can assume the matrix element is also constant. Under these considerations equation (14.32) simplifies to

  P(t) = 4ħ|V_fi|² ρ(E) ∫ sin²(½(ω_fi - ω)t)/(ω_fi - ω)² dω_fi        (14.33)

To add one more approximation we convert the integral into a standard form by extending the limits to ±∞. Because of the shape of the function, the integrand has virtually no area outside the actual range, so this approximation introduces very little error. Now we simplify


again by setting x = ½(ω_fi - ω)t, which implies that dω_fi = (2/t)dx, and the probability expression becomes

  P(t) = 2ħ|V_fi|² ρ(E) t ∫ (sin²x/x²) dx        (14.34)

Using the standard form

  ∫_{-∞}^{+∞} (sin²x/x²) dx = π        (14.35)

the required probability becomes

  P(t) = 2πħ|V_fi|² ρ(E) t        (14.36)

If we now define the transition rate (k_fi) as the rate of change of the probability that the system arrives at an initially empty state, we find that

  k_fi = dP/dt = 2πħ|V_fi|² ρ(E_fi) = (2π/ħ)|H_fi|² ρ(E_fi)        (14.37)

This expression is known as the Fermi Golden Rule and it tells us that we can calculate the rate of a transition if we know the square modulus of the transition matrix element between the two states and the density of final states at the energy of the transition. It is a very useful expression and we shall find many examples of its use in the coming weeks.
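Both the standard integral and the linear growth of the total probability can be confirmed numerically. A sketch (assuming ħ = 1, a flat density of states ρ, and an illustrative coupling V): it evaluates the sin²x/x² integral by simple quadrature and checks that P(t)/t matches the golden-rule rate 2πħρ|V|².

```python
import numpy as np

# 1) Verify the standard form: integral of sin^2 x / x^2 over all x equals pi.
dx = 0.001
x = np.arange(dx / 2, 4000.0, dx)            # midpoint grid, avoids x = 0
standard = 2.0 * np.sum(np.sin(x) ** 2 / x ** 2) * dx   # even integrand
print(standard, np.pi)                        # agree to ~1e-4 (finite range)

# 2) Integrate P_f over a flat continuum (hbar = 1, constant rho and V)
#    and compare P(t)/t with the golden-rule rate 2 pi hbar rho V^2.
def total_probability(V, rho, t, hbar=1.0):
    d = np.arange(dx / 2, 200.0, dx)          # offsets w_fi - w, one side
    integrand = 4 * V ** 2 * np.sin(0.5 * d * t) ** 2 / d ** 2
    return 2.0 * hbar * rho * np.sum(integrand) * dx   # even in the offset

def golden_rule_rate(V, rho, hbar=1.0):
    return 2.0 * np.pi * hbar * rho * V ** 2

V, rho = 0.1, 1.0
for t in (2.0, 4.0):
    print(total_probability(V, rho, t) / t, golden_rule_rate(V, rho))
```

The ratio P(t)/t is the same (to within the quadrature error) at both times, which is the content of the golden rule: a transition probability that grows linearly in time, i.e. a constant rate.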
