
Lecture notes on Statistical Physics

by Peter Johansson

Department of Natural Sciences, Örebro University

August 25, 2005


Contents

1 Introduction

2 Basics: energy, entropy, and temperature
  2.1 Introductory example
  2.2 Multiplicity and entropy
  2.3 Two systems in thermal contact
  2.4 Thermal equilibrium and temperature
  2.5 Problems for Chapter 2

3 The Boltzmann distribution
  3.1 Derivation of the Boltzmann distribution
  3.2 Application to a harmonic oscillator
  3.3 Two-dimensional harmonic oscillator
  3.4 Solved examples
  3.5 Problems for Chapter 3

4 Thermal radiation
  4.1 Photon modes
  4.2 Stefan-Boltzmann law
  4.3 The Planck radiation law
  4.4 Problems for Chapter 4

5 The ideal gas
  5.1 The Boltzmann distribution applied to a particle in a box
  5.2 Boltzmann distribution and thermodynamics
  5.3 Many particles in a box—the ideal gas
  5.4 Heat and work in statistical physics
  5.5 Problems for Chapter 5

6 Chemical potential and Gibbs distribution
  6.1 Chemical equilibrium and chemical potential
  6.2 Chemical potential and thermodynamics
  6.3 Internal and external chemical potential
  6.4 Derivation of the Gibbs distribution
  6.5 The Fermi-Dirac and Bose-Einstein distributions
  6.6 The Fermi gas
  6.7 Problems for Chapter 6

7 Phase transitions
  7.1 Introduction
  7.2 Phase diagrams
  7.3 Applications to practical situations
  7.4 The van der Waals equation of state
  7.5 Problems for Chapter 7

Appendix
A Summation of geometric series
Index


Chapter 1

Introduction

These lecture notes are intended to be used in the course on Statistical Physics at Örebro University. The course starts with a refresher of classical thermodynamics, for which University Physics by H. Benson (J. Wiley & Sons, 1996) is used.

The reader should have some basic knowledge of classical thermodynamics and classical mechanics. No prior, detailed knowledge of quantum mechanics is needed, even though almost all students following the course have taken a previous course on quantum mechanics; the basic facts about the energy levels etc. of various models, such as the harmonic oscillator and the particle in a box, are introduced in the text as we go along.

The primary source of inspiration in writing these notes has been Thermal Physics, by C. Kittel and H. Kroemer (W. H. Freeman, 1980). I recommend this book wholeheartedly to anyone who wants to further his or her knowledge of the subject.



Chapter 2

Basics: energy, entropy, and temperature

2.1 Introductory example

Statistical physics deals with the properties of systems that are so large that it is both impossible and meaningless to try to find out and describe the exact state of the system. In chemistry and solid state physics, to take a couple of examples, one is often faced with systems that contain 10²³ particles or so. It is then out of the question to ask exactly how all these particles move or in what quantum state they are. It is, on the other hand, perfectly reasonable to try to answer questions like: “What is the average energy of a particle in the system?” or “What is the average number of particles in a unit volume?”

1st sequence:  H H H H H H H H H H H H H H H H H H H H H H H H H
2nd sequence:  T H H T H T H H H T H T H H T T H H H T H T H T H

Figure 2.1: Two possible results of an experiment with 25 coin tosses.

To try to establish a link between statistics and physics, let us first consider an example that has no direct connection with physics. We toss a coin 25 times. Each time the result can be either heads (H) or tails (T). Two possible outcomes of such an “experiment” are illustrated in Fig. 2.1. Which of these results is more probable than the other?

Actually, the answer to this question depends on how we view the result; how much information about the result do we want to retain? On one hand, the two results are two unique sequences of H and T; thus even if the first sequence is special and it seems unlikely that it should occur, the second sequence is equally special and unlikely. At a detailed, microscopic level the two sequences are equally probable. If we instead look at things from a macroscopic point of view, just counting how many times the coin toss gave the result H, then the second result (15 H) is much more probable than the first result (25 H). The two probabilities are, respectively,

P(25 H) = 1/2^{25} ≈ 3×10^{−8},  (2.1)



since there are in total 2^{25} possible outcomes and only one of these gives us 25 H in a row, and

P(15 H) = [25!/(15! 10!)]/2^{25} = 3,268,760/33,554,432 ≈ 0.097,  (2.2)

because in this case there are 25!/(15! 10!) different sequences giving 15 H. Of course, we have tacitly assumed that the coin is “perfect” so that H and T events are equally probable.

By finding analogies with the physical world, the above example can tell us a lot about

what statistical physics is. In physics one can make two kinds of measurements. It is in somecases possible to measure the exact microscopic state of a system, for example an atom. Thiskind of measurement corresponds to observing the exact sequence of � and in the coin tossexperiment. However, for a system with 10

���atoms, microscopic measurements that tell us

everything about the state of every single atom are no longer possible. What we can do is, forexample, to put a thermometer into a glass of water and thereby measure the average energy ofthe water molecules. This kind of experiment corresponds to counting the number of � in thecoin toss experiment.
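As a check, the two probabilities in Eqs. (2.1) and (2.2) are easily computed; a minimal Python sketch:

from math import comb

N = 25                        # number of coin tosses
total = 2 ** N                # 2^25 equally probable sequences

p_all_heads = 1 / total            # Eq. (2.1): one sequence out of 2^25
p_15_heads = comb(N, 15) / total   # Eq. (2.2): 3,268,760 sequences with 15 H

print(f"P(25 H) = {p_all_heads:.1e}")   # about 3e-08
print(f"P(15 H) = {p_15_heads:.3f}")    # about 0.097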

In statistical physics one makes the assumption that all microscopic states that a system can attain (given certain constraints such as energy conservation) are equally probable. From this very fundamental assumption one can derive all the laws of statistical physics and thermodynamics. One consequence of this is that when a macroscopic measurement is made, some results are enormously more probable than others.

2.2 Multiplicity and entropy

We will now consider a physical model system and from there develop an understanding of some of the basic quantities in statistical physics and thermodynamics. The model system consists of N two-level systems, i.e. quantum systems that can be in either of two different quantum states with energies ε₀ (the ground state) and ε₁ (the only excited state). In the real world the two-level systems could be atoms with a spin that can point either up or down. If these atoms are placed in a magnetic field B, the spins (just like small magnets) find it energetically favorable to align themselves with the external magnetic field because this results in a lower energy.

To be specific, the spin of a two-level system can take the two values m = 1/2 (spin-up) and m = −1/2 (spin-down), respectively. Assuming that the magnetic field points upwards, the corresponding energies can be written ε = −2mμB, where μ is a constant; thus ε₀ = −μB and ε₁ = +μB. This is illustrated in Fig. 2.2.

Figure 2.2: The two different states of a spin-1/2 in an external magnetic field B, and the corresponding energy levels: ε₀ for spin-up (m = 1/2) and ε₁ for spin-down (m = −1/2).

We now consider a larger system consisting of N spins (two-level systems). Of these, N↑ point upwards and N↓ point downwards; of course N↑ + N↓ = N.


Table 2.1: Calculated values of the multiplicity for N = 40 and different values of N↓.

N↓:        0     1     10        20         30        39    40
s:         20    19    10        0          −10       −19   −20
g(N, s):   1     40    8.5×10⁸   1.4×10¹¹   8.5×10⁸   40    1

The total, net spin of the system is just half the difference between the number of spins up and down,

s = (N↑ − N↓)/2,  (2.3)

and the total internal energy of the system is

U = −2sμB = −μB(N↑ − N↓).  (2.4)

Multiplicity. In the following it will be of considerable interest to know how many different microscopic states yield a certain value of the total spin s. This is in principle the same thing as asking how many different sequences of heads and tails give a certain number of heads in the coin toss experiment. We will call this quantity the multiplicity function and denote it by g. If the total number of spins is N and N↓ of them point down, then N↑ = N − N↓ must point upwards. To determine in how many ways one can select N↓ down-spins among the N spins one reasons in the following way. Number all the N spins from 1 through N and select one of them that should point downwards—there are N different spins to choose between. Then choose a second down-spin; this time there are only N − 1 different possibilities to choose between. Moreover, we could have chosen the two down-spins in the opposite order and come to the same result; therefore there are

g(N, 2) = N(N − 1)/2  (2.5)

different ways of choosing 2 down-spins. Generalizing the argument to any number of down-spins one finds that there are

g(N, N↓) = N(N − 1)⋯(N − N↓ + 1)/N↓! = N!/[N↓! (N − N↓)!]  (2.6)

different possibilities. This can be reexpressed in terms of N and the total spin s as

g(N, s) = N!/(N↑! N↓!) = N!/[(N/2 + s)! (N/2 − s)!].  (2.7)

In Table 2.1 we display calculated values of g(N, s) for the case N = 40. Even for such a small N the multiplicity becomes very large for situations with equal or nearly equal numbers of spins up and down.
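Table 2.1 can be reproduced directly from Eq. (2.6); a minimal sketch:

from math import comb

N = 40
for N_down in (0, 1, 10, 20, 30, 39, 40):
    s = (N - 2 * N_down) // 2            # total spin, Eq. (2.3)
    g = comb(N, N_down)                  # multiplicity, Eq. (2.6)
    print(f"N_down = {N_down:2d}   s = {s:4d}   g = {g:.3g}")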

Entropy. The multiplicity g tells how many different states are available to our system given certain “macroscopic” constraints on N and s. Thus, g is a measure of uncertainty or, in other words, disorder. In classical thermodynamics the entropy S is a measure of disorder, and it turns out that the two are related through

S = k_B ln g,  (2.8)


where k_B = 1.381×10⁻²³ J/K is the Boltzmann constant. (Note that the universal gas constant R = N_A k_B, where N_A is Avogadro's number.) In other words, multiplicity, which at first sight appears to be a rather abstract concept, has a direct connection with classical thermodynamics. We note that our spin system has zero entropy (S = 0) in case all the spins are aligned with (s = N/2), or anti-aligned with (s = −N/2), the magnetic field, since g = 1 in these two cases.
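Equation (2.8) applied to the spin system is equally direct; a small sketch using exact binomial coefficients:

from math import comb, log

kB = 1.381e-23   # Boltzmann constant, J/K

def spin_entropy(N, N_down):
    """Entropy S = kB ln g of the spin system, Eq. (2.8)."""
    return kB * log(comb(N, N_down))

print(spin_entropy(40, 20))   # largest multiplicity, largest entropy
print(spin_entropy(40, 0))    # all spins aligned: g = 1 gives S = 0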

2.3 Two systems in thermal contact

We now consider a situation where two systems are in thermal contact. This means that they can exchange energy with each other. The energy of one system increases while that of the other decreases so that the total energy is conserved.

We will use the spin two-level systems as models also for this discussion. Suppose we have two spin systems with N₁ and N₂ spins, respectively. The total energy is such that in total N↓ spins are in the excited state, whereas N↑ are in the ground state. Figure 2.3 shows such a system.

Figure 2.3: A spin model system divided into two subsystems, with N₁ and N₂ spins respectively, that are in thermal contact so that energy exchange can take place between them.

We now want to see how the N↓ spin-downs are typically divided between the two subsystems. The multiplicity of states yielding N₁↓ spin-downs in subsystem 1 is

g(N₁↓) = {N₁!/[N₁↓! (N₁ − N₁↓)!]} × {N₂!/[(N↓ − N₁↓)! (N₂ − N↓ + N₁↓)!]}.  (2.9)

In Fig. 2.4 we have plotted the results for the multiplicity as a function of N₁↓ for two cases, N₁ = N₂ = 40 (shown in panel a), and N₁ = N₂ = 400 (panel b), respectively. In both cases 25% of the total number of spins are in the high-energy state. The two diagrams show that the multiplicity (for the system as a whole) reaches a maximum when N₁↓ = N↓/2, as one may expect. However, the multiplicities for N₁↓ values near the expected one are also rather large. There are fluctuations in the value of N₁↓. With an increasing number of spins in the system the fluctuations, in absolute numbers, become larger; the width of the peak (at half maximum) in panel (b) is about √10 ≈ 3 times larger than in panel (a).


Figure 2.4: The multiplicity g(N₁↓) calculated for two different model spin systems: (a) N₁ = N₂ = 40, N↓ = 20; (b) N₁ = N₂ = 400, N↓ = 200.

However, more importantly, the relative width decreases when the total number of spins increases. Actually the width of the peak of the “multiplicity curve” scales as √N, and consequently the relative width scales as 1/√N.
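The position and width of the peak can be explored numerically from Eq. (2.9); a sketch for panel (a) of Fig. 2.4:

from math import comb

# Multiplicity of the combined system, Eq. (2.9), for panel (a):
# N1 = N2 = 40 spins and N_down = 20 spin-downs in total.
N1, N2, Nd = 40, 40, 20

g = [comb(N1, k) * comb(N2, Nd - k) for k in range(Nd + 1)]
g_max = max(g)

# Number of N1_down values for which g exceeds half the peak height
width = sum(1 for gk in g if gk >= g_max / 2)
print("peak at N1_down =", g.index(g_max))
print("width at half maximum:", width)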

If we extend this reasoning to an everyday situation, where the thermodynamic systems we encounter contain 10²³ atoms or so, we find that the relative fluctuations of various quantities are extremely small, namely of the order of 10⁻¹². Therefore, provided that a macroscopic system, for example a liter of water, is in thermodynamic equilibrium (i.e. has the same temperature throughout), energy is practically homogeneously distributed in it. As we have already seen considering the model spin system, and will see several times later, things are not quite so simple in the microscopic world. If a system contains just a few atoms there will be large fluctuations in quantities such as the total energy, and one can only discuss the energy content in statistical terms such as expectation values etc.

2.4 Thermal equilibrium and temperature

From the plots above we can also draw another interesting conclusion. A large system in thermal equilibrium ends up in a macroscopic state near the most likely one. By this we mean that since the relative fluctuations are very small in a large system, it is extremely unlikely to measure values for macroscopic quantities that differ from those corresponding to the maximum multiplicity. Thus, in mathematical terms thermal equilibrium means that the multiplicity should have a maximum. This fact will lead us to a theoretical definition of temperature. The starting point for this definition is Eq. (2.9). Of course this expression reaches a maximum when the derivative with respect to N₁↓ vanishes,

dg/dN₁↓ = 0.  (2.10)

(While N₁↓ of course is an integer, we can treat it as a continuous variable provided the system we look at contains many spins.) Now we rewrite this expression so that we can make better contact with physical quantities such as entropy and energy. We first use the definition of entropy, S = k_B ln g, to write

g(N₁↓) = g₁(N₁↓) g₂(N₁↓) = e^{S₁(N₁↓)/k_B} e^{S₂(N₁↓)/k_B} = e^{[S₁(N₁↓) + S₂(N₁↓)]/k_B},  (2.11)


(note that the subsystem entropies add), which yields the derivative

dg/dN₁↓ = (1/k_B) [dS₁/dN₁↓ + dS₂/dN₁↓] e^{[S₁(N₁↓) + S₂(N₁↓)]/k_B}.  (2.12)

This vanishes when the expression in parentheses is 0,

dS₁/dN₁↓ + dS₂/dN₁↓ = 0,  i.e.  dS₁/dN₁↓ = −dS₂/dN₁↓ = dS₂/dN₂↓.  (2.13)

(Note that dN₂↓/dN₁↓ = −1 since N₂↓ = N↓ − N₁↓.) Also using the fact that the energies of the two subsystems can be written

U₁ = μB(2N₁↓ − N₁)  and  U₂ = μB(2N₂↓ − N₂),  (2.14)

respectively, i.e. the energies are proportional to the number of down-spins, the condition for maximum multiplicity can be written

dS₁/dU₁ = dS₂/dU₂.  (2.15)

This last condition should characterize thermal equilibrium, which we know means that two systems have the same temperature. Therefore the derivative dS/dU, which is equal in the two subsystems at maximum multiplicity, must have some relation to the temperature. To get a theoretical definition that is consistent with experimental definitions and our everyday experiences of what temperature is, one must choose

1/T = dS/dU  (2.16)

as the definition of absolute temperature (measured in kelvin). In this way, (i) temperatures will normally be positive, and (ii) in case thermal equilibrium is not yet established, energy will flow from the subsystem with the higher temperature to the one with the lower temperature. Suppose that the temperature of subsystem 1 is higher than that of subsystem 2, T₁ > T₂. Now if a small amount of energy ΔU is transferred from subsystem 1 to subsystem 2, there is a change in the total entropy

ΔS = ΔS₁ + ΔS₂ = −ΔU/T₁ + ΔU/T₂ = ΔU (1/T₂ − 1/T₁) > 0.  (2.17)

Thus, when energy flows from higher to lower temperature, there is an increase in entropy, as there should be according to the second law of thermodynamics. Due to the close relation between entropy and multiplicity, an increase of entropy means that the system goes from a less likely to a more likely macroscopic state.

2.5 Problems for Chapter 2

2.1 Entropy and temperature
Suppose the multiplicity g(U) = C U^{3N/2}, where C is a constant and N the number of particles.


(a) Show that U = (3/2) N k_B T.

(b) Show that ∂²S/∂U² is negative. This form of g(U) actually applies to an ideal gas.

2.2 The meaning of “never”
It is sometimes said that “six monkeys, set to type on word processors for millions of years would eventually write all the books in the British Museum.” This statement does not make sense, for it gives misleading conclusions about very, very large numbers. Could all the monkeys in the world have typed out a single specified book in the age of the universe?

Suppose that 10¹⁰ monkeys have been seated at word processors throughout the age of the universe, 10¹⁸ s. This number of monkeys is almost twice that of the present human population. We suppose that a monkey can hit 10 keys per second. A keyboard may have 44 keys; we accept lowercase letters in place of capital letters. Assuming that Shakespeare's Hamlet has 10⁵ characters, will the monkeys ever “write” Hamlet?

(a) Show that the probability that any given sequence of 10⁵ characters typed at random will come out in the correct sequence (Hamlet) is of the order of

44^{−100,000} ≈ 10^{−164,345},

where we have used that log₁₀ 44 = 1.64345.

(b) Show that the probability that a monkey-Hamlet will be typed in the age of the universe is approximately 10^{−164,316}. The probability of a monkey-Hamlet is therefore zero in any operational sense. The statement found at the beginning of this problem is nonsense; one book, much less a library, will never occur in the total literary production of the monkeys.

2.3 Actual variation of the number of states
Consider any macroscopic system at room temperature.
(a) Use the definition of absolute temperature to find the relative increase of the number of accessible states when the energy of the system is increased by 1 meV.
(b) Suppose that the same system absorbs a photon of visible light with wavelength 500 nm. By what factor does the number of accessible states increase in this case?


Chapter 3

The Boltzmann distribution

3.1 Derivation of the Boltzmann distribution

The discussion in Chapter 2 mainly focused on a closed model system. Energy could be exchanged between subparts of the system, but we did not consider the possibility of exchanging energy with the external world at large. A system that cannot exchange energy with the surroundings is in statistical physics described by the so-called microcanonical ensemble. Put briefly, the microcanonical ensemble means that the system can only be in states which have a certain total energy consistent with the initial conditions of the system and the energy principle (energy conservation).

In physics and in daily life we are normally much more interested in systems that can interact with the surroundings and change (increase or decrease) their energy. In statistical physics the probability distribution of finding such a system with a particular energy is described by the so-called Boltzmann distribution. The Boltzmann distribution is also called the canonical ensemble; it is “bigger” than the microcanonical ensemble.

We will now show how one can derive the Boltzmann distribution. The starting point is similar to the one we took in Chapter 2 when introducing the concept of temperature. We consider a closed, very large system and divide it into two parts. However, this time we do not split it into two equally large parts. Instead we make one subsystem small (this is the system S we are actually interested in), and the other subsystem, the reservoir R, is very large, much larger than S. A schematic illustration is given in Fig. 3.1.

Figure 3.1: The total system R + S can be divided into a reservoir R and a small system S that are in thermal contact and exchange energy with each other. Depending on the exact state of S (energy ε₁ or ε₂), the energy of the reservoir is U_TOT − ε₁ or U_TOT − ε₂, and hence the multiplicity g of the reservoir changes.

Typically the Boltzmann distribution is applied to situations where the system S is truly microscopic, for example an atom. We assume, furthermore, that we at least in principle know all the quantized energy levels that the system S can be found in. The reservoir R, on the other hand, is so large that no matter which state the system S is in, the repercussions on R are completely negligible. We also assume that the interaction with the reservoir R is weak and therefore does not change the energy levels of S appreciably.

The basic question is “What is the probability of finding the system S in a particular state with a certain energy?” To answer this we must calculate the multiplicity for the total system given the constraint that the small system S has a certain energy.

Let us look at a concrete example where we have a total system R + S with a total energy U_TOT, illustrated in Fig. 3.1. We label two of the states that S can be found in 1 and 2, and their respective energies are ε₁ and ε₂. We denote the corresponding multiplicities (for the total system) g₁ and g₂. Since both the states 1 and 2 are completely unique, the multiplicities will be given by the multiplicities for the reservoir in the two situations, thus

g₁ = g_R(U_TOT − ε₁)  and  g₂ = g_R(U_TOT − ε₂).  (3.1)

All states that are available to the total system are equally probable—this is the fundamental assumption of statistical physics. Therefore the ratio between the probabilities of finding the system S in either state 1 or 2 is given by the ratio between the multiplicities,

P(1)/P(2) = g₁/g₂.  (3.2)

We proceed by reexpressing the multiplicities in terms of the entropy, because this will make it easier to get a physically meaningful result. We know that S = k_B ln g for any system, and then g = e^{S/k_B}. Applying this to Eq. (3.2) we get

P(1)/P(2) = e^{S₁/k_B}/e^{S₂/k_B} = e^{(S₁ − S₂)/k_B}.  (3.3)

As we saw when introducing the definition of temperature, the entropy can be viewed as a function of the internal energy U, thus

S₁ = S_R(U_TOT − ε₁)  and  S₂ = S_R(U_TOT − ε₂).  (3.4)

Furthermore, since the reservoir is very large, U_TOT − ε₁ and U_TOT − ε₂ are very close to U_TOT, and the two different entropies can be calculated using a Taylor expansion around U_TOT, where the reservoir temperature T emerges as a key quantity,

S_R(U_TOT − ε₁) ≈ S_R(U_TOT) − ε₁ (∂S_R/∂U) = S_R(U_TOT) − ε₁/T,  (3.5)

and

S_R(U_TOT − ε₂) ≈ S_R(U_TOT) − ε₂ (∂S_R/∂U) = S_R(U_TOT) − ε₂/T.  (3.6)

With the aid of Eq. (3.3) the ratio between the two different probabilities can be written

P(1)/P(2) = e^{−ε₁/k_B T}/e^{−ε₂/k_B T}.  (3.7)

The exponential functions appearing in Eq. (3.7) are called Boltzmann factors.


How can we calculate the probability of finding a system in a certain state? Equation (3.7) shows that the probability of finding a system in a certain state is proportional to that state's Boltzmann factor. Therefore, we get the probability of finding a certain state by dividing its Boltzmann factor by the sum of all the Boltzmann factors of states that the system S can attain. The probability of finding the system in a state s is

P(s) = e^{−ε_s/k_B T} / Σ_{s′} e^{−ε_{s′}/k_B T}.  (3.8)

The sum in the denominator runs over all states that S can be found in. This sum is called the partition function (tillståndssumma). It is denoted Z and plays a very important role in statistical physics. We can thus rewrite Eq. (3.8) as

P(s) = e^{−ε_s/k_B T}/Z,  (3.9)

where

Z = Σ_s e^{−ε_s/k_B T}.  (3.10)

Equation (3.9) is undoubtedly the most important result presented in these lecture notes. The way the partition function is defined, the sum of the probabilities of all the states equals 1,

Σ_s P(s) = (1/Z) Σ_s e^{−ε_s/k_B T} = Z/Z = 1.  (3.11)

This is of course a necessary requirement for a set of probabilities.

Physically, Eq. (3.9) roughly speaking means that the probability of finding a system thermally excited to a particular state with energy ε above the ground state depends on how large ε is compared with k_B T. If the energy difference is comparable with or smaller than k_B T, the probability of thermal excitation is large. In the opposite case, ε ≫ k_B T, the excitation probability becomes exponentially small.
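As a small numerical illustration of Eqs. (3.9)–(3.11), here is a sketch that normalizes Boltzmann factors for a set of discrete levels (the example level energies anticipate Example 3.1 below):

import numpy as np

def boltzmann_probabilities(energies_eV, T):
    """P(s) = exp(-eps_s / kB T) / Z, Eqs. (3.9) and (3.10)."""
    kB = 8.617e-5                         # Boltzmann constant, eV/K
    factors = np.exp(-np.asarray(energies_eV) / (kB * T))
    return factors / factors.sum()        # normalization, Eq. (3.11)

# Three levels at -50, 0 and +50 meV at room temperature
print(boltzmann_probabilities([-0.050, 0.0, 0.050], T=300.0))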

3.2 Application to a harmonic oscillator

Harmonic oscillators are found everywhere in physics. You have already encountered several examples of this in classical mechanics (mass at the end of a spring, the simple pendulum, etc.). But systems in which kinetic and potential energy are alternately transformed into each other while a system performs some kind of cyclic motion are also very common at a microscopic scale.

A diatomic molecule like CO (carbon monoxide) is perhaps the simplest example of this (see Fig. 3.2). The distance between the two atomic nuclei in the equilibrium state of such a molecule gives a minimum for the total energy of the nuclei and the electrons of the molecule. However, if the molecule is disturbed so that the nuclei start to move relative to each other, their kinetic energy is used to increase the total potential energy of the molecule by either compressing or stretching the bond between the atoms. Eventually all the kinetic energy of the nuclei is transformed to potential energy and the nuclei will start moving in the other direction. In this way the bond length oscillates, and the molecular vibration works as a harmonic oscillator.


Figure 3.2: The vibrations in a diatomic molecule (here CO) can be modeled by a harmonic oscillator, where two masses oscillate back and forth while the bond length l is either shorter or longer than the equilibrium bond length l₀. The model potential energy is V(x) = kx²/2 = mω²x²/2, where x = l − l₀. In this way potential and kinetic energy are alternately transformed into each other. A quantum-mechanical treatment of this problem yields an infinite ladder of equidistant energy levels (n = 0, 1, 2, 3, …).

The quantum mechanical energy levels of a harmonic oscillator turn out to form an equidistant ladder. The lowest level has the energy ħω/2, where ω is the classical angular oscillation frequency of the oscillator. The first excited level has energy 3ħω/2, the second excited level has energy 5ħω/2, etc.; the energy difference between adjacent levels is ħω. For a diatomic molecule, ħω is on the order of 0.1 eV. Exciting the oscillator from one energy level to the next corresponds to increasing the amplitude of the oscillation in the classical case.

Let us calculate the probability of finding a harmonic oscillator in a certain state. First we must find the partition function; without Z we cannot find the probabilities. Evidently, since there is an infinite number of states at hand, the partition function is a sum of infinitely many terms. It is, however, convergent and can be written

Z = Σ_{n=0}^{∞} e^{−(n + 1/2)ħω/k_B T} = e^{−ħω/2k_B T} [1 + e^{−ħω/k_B T} + e^{−2ħω/k_B T} + ⋯],  (3.12)

which is the sum of a geometric series, see Appendix A. The first term equals 1, and each consecutive term is found by multiplying the previous one by e^{−ħω/k_B T}. This yields

Z = e^{−ħω/2k_B T}/(1 − e^{−ħω/k_B T}),  (3.13)

and the probability of finding the oscillator in state n, with energy (n + 1/2)ħω, is

P_n = e^{−(n + 1/2)ħω/k_B T}/Z = e^{−nħω/k_B T}(1 − e^{−ħω/k_B T}).  (3.14)
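Equation (3.14) is easily evaluated numerically; a sketch using the temperatures of Fig. 3.3:

import numpy as np

def P_n(n, T):
    """Probability of oscillator state n, Eq. (3.14);
    temperature T in units of hbar*omega/kB."""
    x = np.exp(-1.0 / T)
    return x ** n * (1.0 - x)

n = np.arange(7)
for T in (0.5, 2.0, 5.0):                  # temperatures used in Fig. 3.3
    print(f"kB T = {T} hbar*omega:", np.round(P_n(n, T), 3))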

The left panel in Fig. 3.3 shows how P_n varies with temperature. For a low temperature, P_n is very small for all but the lowest few energies. With increased temperature it is more likely to find the oscillator in a higher excited state.

Next let us calculate the energy expectation value ⟨ε⟩ for an oscillator,

⟨ε⟩ = Σ_{n=0}^{∞} P_n ε_n = Σ_{n=0}^{∞} P_n (n + 1/2) ħω.  (3.15)


Figure 3.3: Calculated results for the probability P_n of finding a harmonic oscillator in state n at different temperatures (k_B T = 0.5 ħω, 2.0 ħω, and 5.0 ħω). The right panel shows how the energy expectation value (neglecting the zero-point energy) and heat capacity vary with temperature. The straight line in that diagram shows the classical limit ⟨ε⟩ = k_B T of the energy expectation value.

In order to make the notation less heavy, we introduce the symbol x,

x = e^{−ħω/k_B T}.  (3.16)

Combining Eqs. (3.15), (3.14) and (3.16), we get

⟨ε⟩ = Σ_{n=0}^{∞} (n + 1/2) ħω xⁿ (1 − x) = ħω/2 + ħω (1 − x) Σ_{n=0}^{∞} n xⁿ.  (3.17)

The sum appearing here can be evaluated by the methods discussed in Appendix A, and the result for the energy expectation value for a harmonic oscillator is

⟨ε⟩ = ħω/2 + ħω/(e^{ħω/k_B T} − 1).  (3.18)

This can be rewritten as

⟨ε⟩ = (⟨n⟩ + 1/2) ħω,  (3.19)

where ⟨n⟩ is the expectation value of the quantum number of the harmonic oscillator. Obviously

⟨n⟩ = 1/(e^{ħω/k_B T} − 1),  (3.20)

and this very important distribution function is called the Planck distribution function. The diagram to the right in Fig. 3.3 shows how the energy expectation value varies with temperature. For large temperatures Eq. (3.20) yields

⟨n⟩ ≈ 1/(1 + ħω/k_B T + ⋯ − 1) = k_B T/ħω,  (3.21)

so that the energy expectation value (not counting the zero-point energy) is k_B T. This is the first example of the equipartition principle that we encounter. This principle states that for every term in the expression for the classical energy that is quadratic in either a momentum or a coordinate, there is a contribution k_B T/2 to the expectation value of the total energy in the classical limit corresponding to high excitation. The total energy for a harmonic oscillator can be written E = p²/2m + mω²x²/2 and thus has two such quadratic terms. The prediction of the equipartition principle only holds true at temperatures that are high compared with ħω/k_B. At really low temperatures the fact that the energy of the oscillator can only be increased in steps of ħω renders the classical prediction of the equipartition principle invalid; then ⟨ε⟩ ≈ ħω/2. We have also plotted the heat capacity per oscillator, defined as

C = ∂⟨ε⟩/∂T,  (3.22)

in Fig. 3.3.
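The temperature dependence in the right panel of Fig. 3.3 can be reproduced from Eqs. (3.18) and (3.22); a sketch in which the heat capacity is taken by a numerical derivative (a device of this sketch, not of the text):

import numpy as np

def mean_energy(T):
    """<eps> = 1/2 + 1/(exp(1/T) - 1), Eq. (3.18),
    in units of hbar*omega, with T in units of hbar*omega/kB."""
    return 0.5 + 1.0 / np.expm1(1.0 / T)

T = np.linspace(0.05, 2.5, 500)
C = np.gradient(mean_energy(T), T)      # heat capacity, Eq. (3.22)

print(f"<eps>(T = 0.5) = {mean_energy(0.5):.3f}")  # zero-point term dominates
print(f"C at high T    = {C[-1]:.2f} kB")          # approaches 1 kB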

3.3 Two-dimensional harmonic oscillator

We now consider a two-dimensional harmonic oscillator. This means that oscillations can take place in two different directions, for example x and y as illustrated in Fig. 3.4. We assume that the oscillation frequency ω is the same in both directions.

In the realm of classical mechanics the motions in the x and y directions are independent of each other. This is also true in quantum mechanics. The energy of any quantum state can be written as a sum of an x-motion and a y-motion energy. The only difference compared with the one-dimensional case is that now two quantum numbers n_x and n_y are needed. The energy can be written

ε = (n_x + 1/2)ħω + (n_y + 1/2)ħω = (N + 1)ħω,  (3.23)

where N = n_x + n_y.

Figure 3.4: Schematic illustration of a two-dimensional harmonic oscillator and its energy-level diagram (N = 0, 1, 2, 3). The figure to the right shows the probability P_N of finding the oscillator in a certain energy level N with energy (N + 1)ħω at temperature T = 3ħω/k_B. Since higher energy levels have a higher degeneracy, the lowest energy level does not yield the highest probability in this case.

Suppose we want to calculate the probability P_N of finding the two-dimensional harmonic oscillator in a (any) state with energy (N + 1)ħω. We can follow the method used in the previous section; however, we must be clear about the difference between quantum states and energy levels. In this particular case we have, for example, an energy level with the energy 2ħω corresponding to N = 1. But the harmonic oscillator can have this energy either because n_x = 1 and n_y = 0 or because n_x = 0 and n_y = 1. These two different possibilities correspond to two different quantum states with the same energy. One says that the energy level is degenerate, and since two states have this energy the degeneracy is 2. Usually the degeneracy is denoted g (just as the multiplicity). Here, for a general value of N, the degeneracy g = N + 1. [For example, N = 2 can be reached in three different ways; (n_x, n_y) can be (2,0), (1,1) or (0,2).]

Let us now calculate P_N. All states with a certain N, regardless of the n_x and n_y values, have the same Boltzmann factor e^{−(N + 1)ħω/k_B T}. The Boltzmann factor divided by the partition function gives the probability to be in a particular state. Then the probability of finding the oscillator in energy level N with energy (N + 1)ħω is

P_N = g e^{−(N + 1)ħω/k_B T}/Z = (N + 1) e^{−(N + 1)ħω/k_B T}/Z.  (3.24)

The degeneracy g appears in the formula because we must sum the probabilities of all the states in the energy level.

It remains to calculate the partition function Z. The sum-over-states of Boltzmann factors can be written

Z = Σ_{n_x} Σ_{n_y} e^{−(n_x + n_y + 1)ħω/k_B T} = Σ_N (N + 1) e^{−(N + 1)ħω/k_B T}.  (3.25)

This sum can be evaluated in the same way as when calculating the expectation value of the energy for the one-dimensional harmonic oscillator (see Appendix A). The result is

Z = [e^{−ħω/2k_B T}/(1 − e^{−ħω/k_B T})]².  (3.26)

Note that this is the square of the partition function we got in the one-dimensional case. This is a general phenomenon inasmuch as addition of energies implies multiplication of partition functions. Equation (3.26) together with Eq. (3.24) yields the final result

P_N = (N + 1) e^{−Nħω/k_B T} (1 − e^{−ħω/k_B T})².  (3.27)

Results for the case when T = 3ħω/k_B are plotted in Fig. 3.4. Notice that here, unlike the one-dimensional case, the lowest energy level need not have the highest probability.
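Equation (3.27) can be checked in a few lines; a sketch at the temperature of Fig. 3.4:

import numpy as np

def P_level(N, T):
    """P_N for the 2D oscillator, Eq. (3.27);
    T in units of hbar*omega/kB."""
    x = np.exp(-1.0 / T)
    return (N + 1) * x ** N * (1.0 - x) ** 2

N = np.arange(10)
P = P_level(N, 3.0)                 # the temperature used in Fig. 3.4
print(np.round(P, 3))               # maximum away from N = 0
print("sum over shown levels:", P.sum().round(3))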

3.4 Solved examples

EXAMPLE 3.1: A quantum-mechanical system has three non-degenerate energy levels with energies −50 meV, 0 meV, and 50 meV. This system is in contact with a reservoir held at room temperature, T = 300 K.
(a) What is the probability of finding the system in its ground state?
(b) Calculate the expectation value of the system's energy.

SOLUTION: (a) The probability of finding the system in the ground state, i.e. with energy ε₀ = −50 meV, is found as the ratio between the Boltzmann factor in question, e^{−ε₀/k_B T}, and the partition function for the system, Z. The partition function is found as the sum of the three Boltzmann factors, one for each of the states,

Z = e^{−ε₀/k_B T} + e^{−ε₁/k_B T} + e^{−ε₂/k_B T},  (3.28)

where ε₀ = −50 meV, ε₁ = 0 meV, ε₂ = 50 meV, and T = 300 K. When evaluating the Boltzmann factors we must make sure to use consistent units; the best is certainly to convert everything to SI units, thus multiplying the energies given in units of meV by 1.602×10⁻²²: i.e.

e^{−ε₀/k_B T} = exp[(50 × 1.602×10⁻²² J)/(1.381×10⁻²³ J/K × 300 K)] = e^{1.93} ≈ 6.9,  (3.29)

etc. The partition function then becomes

Z ≈ 6.9 + 1 + 0.14 ≈ 8.1,  (3.30)

and the probability of finding the system in the ground state

P(ε₀) = e^{−ε₀/k_B T}/Z ≈ 6.9/8.1 ≈ 0.86.  (3.31)

We see that the difference in energy between the different levels is rather large compared with the “thermal energy” k_B T ≈ 26 meV, and this means that the quantum system usually is not thermally excited.
(b) The energy expectation value is given by the expression

⟨ε⟩ = [ε₀ e^{−ε₀/k_B T} + ε₁ e^{−ε₁/k_B T} + ε₂ e^{−ε₂/k_B T}]/Z ≈ [(−50)(6.9) + 0 + (50)(0.14)]/8.1 meV ≈ −42 meV.  (3.32)
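These numbers can be cross-checked in a few lines; a sketch working directly in meV units:

import numpy as np

kB = 0.08617           # Boltzmann constant, meV/K
T = 300.0              # room temperature, K
eps = np.array([-50.0, 0.0, 50.0])      # level energies, meV

factors = np.exp(-eps / (kB * T))       # Boltzmann factors
Z = factors.sum()                       # Eq. (3.28)
print(f"Z      = {Z:.2f}")                            # about 8.1
print(f"P(gs)  = {factors[0] / Z:.2f}")               # about 0.86
print(f"<eps>  = {(eps * factors).sum() / Z:.1f} meV")  # about -42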

EXAMPLE 3.2: Consider a two-dimensional harmonic oscillator with ħω = … meV that is in contact with a thermal reservoir at temperature T = … K. Calculate the energy expectation value.

SOLUTION: We will look at a couple of ways of solving this problem.

The first method makes use of the formulas we found above when investigating the two-dimensional harmonic oscillator. The energy levels are given by

ε_N = (N + 1)ħω, N = 0, 1, 2, …,  (3.33)

and the probability of finding the system in level N is

P_N = (N + 1) e^{−Nħω/k_B T} (1 − e^{−ħω/k_B T})².  (3.34)

With the parameter values given for the temperature and ħω, we can evaluate the probabilities P₀, P₁, P₂, … numerically,  (3.35)

and the remaining probabilities are very small. In this way we get an accurate approximation to the energy expectation value as

⟨ε⟩ ≈ ħω (P₀ + 2P₁ + 3P₂ + 4P₃ + 5P₄ + 6P₅).  (3.36)


By adding more terms a more accurate value can be found.

There is, however, an even simpler, and for general cases more useful, way of dealing with a system like this one that is built up by a number of subsystems that are mutually independent of each other. In this particular case we have two subsystems, the oscillator in the x direction and the oscillator in the y direction, that are independent of each other. In a general case, systems are independent of each other if they do not interact, meaning that the energy levels of one subsystem are unaffected by the state of another subsystem.

When these conditions are fulfilled the energy of the system, in any state, is the sum of the energies of the subsystems. Moreover, the same holds true for the energy expectation value: it is the sum of the expectation values found when treating the different subsystems separately. Finally, the partition function for the entire system is the product of the partition functions for the subsystems. In the present case, where the subsystems are identical, this simplifies things quite a bit since it means that we only need to calculate the partition function and the expectation value for one single subsystem.

If we apply these ideas here we find that by using Eq. (3.18) for the energy expectation value for one oscillator we get

⟨ε₁ osc⟩ = ħω/2 + ħω/(e^{ħω/k_B T} − 1).  (3.37)

This means that the expectation value for the system as a whole is

⟨ε⟩ = 2⟨ε₁ osc⟩,  (3.38)

thus very close to the previously found approximate value.

EXAMPLE 3.3: The quantum system shown in the figure below consists of in total six separate, but identical, subsystems that are independent of each other. (In the figure each subsystem is inscribed in a frame of dashed lines.) The total energy of the system is thus the sum of the subsystem energies. Each subsystem has four nondegenerate quantum states with the eigenenergies 0, ε, 2ε, and 3ε, respectively. The entire system, and thereby each of the subsystems, is in thermal contact with a reservoir with a temperature T. We assume that ε = … k_B T.
(a) Calculate the expectation value for the energy for the system as a whole.
(b) Calculate the probability for the entire system to have the energy ε.
(c) Calculate the probability for the whole system to have an energy of at least 3ε.

[Figure: six identical subsystems, each with the four energy levels 0, ε, 2ε, 3ε.]

SOLUTION: In this example, just as in the case of the two-dimensional harmonic oscillator, we are dealing with a system built up by a number of identical and independent subsystems. Thus to calculate in (a) the expectation value for the energy we apply the Boltzmann distribution to one subsystem and then get the final result by multiplying by the number of subsystems, 6.


The partition function for a subsystem can be calculated as

Z₁ = 1 + e^{−ε/k_B T} + e^{−2ε/k_B T} + e^{−3ε/k_B T}.  (3.39)

(Keep in mind that even though the distance between the energy levels is the same everywhere, a subsystem is not equivalent to a harmonic oscillator; the ladder of states ends after 3ε here, whereas it goes on forever for a harmonic oscillator.) Numerically, with the given value of ε/k_B T, the partition function becomes

Z₁ ≈ … .  (3.40)

At this point we can also calculate the partition function Z for the entire system (which we will need later). Since the subsystems are independent of each other, the total partition function is the product of the subsystem partition functions,

Z = Z₁ Z₁ Z₁ Z₁ Z₁ Z₁ = Z₁⁶.  (3.41)

The energy expectation value for one subsystem is now given as

⟨ε₁⟩ = (1/Z₁)[0 + ε e^{−ε/k_B T} + 2ε e^{−2ε/k_B T} + 3ε e^{−3ε/k_B T}],  (3.42)

and the expectation value for the system as a whole becomes

⟨E⟩ = 6⟨ε₁⟩.  (3.43)

The rest of the problem [parts (b) and (c)] consists in calculating the probability of finding the system as a whole with a particular energy nε. Such a probability is in general given by the expression

P(nε) = g_n e^{−nε/k_B T}/Z,  (3.44)

where g_n is the degeneracy of the energy level in question. We have already calculated the partition function Z, and the Boltzmann factors e^{−nε/k_B T} are also easily calculated, so the remaining difficulty lies in determining the degeneracies g_n.

For a complete solution we will need the degeneracies for the four lowest energy levels, g₀, g₁, g₂, and g₃. It is obvious that we will need g₁ in order to solve part (b). In part (c) of the example we will calculate the probability P(E ≥ 3ε), for the total energy to be 3ε or more, as

P(E ≥ 3ε) = 1 − P(0) − P(ε) − P(2ε).  (3.45)

� � � � � (3.46)

The energy � is reached if one of the subsystems is excited to the energy level with energy� while the remaining five subsystems are in their ground state. Obviously this can be done insix different ways and

�� � � � (3.47)

There are a total of 21 quantum states that gives a total energy of� � . In 6 of the cases one

of the subsystems is excited to the level with energy� � while the others are in their respective

ground states. Furthermore there are 15 states in which 2 of the subsystems are excited to �

Page 22: Lecture Notes on Statistical Physics by peter Johansson

CHAPTER 3. THE BOLTZMANN DISTRIBUTION 19

the remaining four subsystems are left in the ground state; there are 15 ways of picking twoelements from a group of 6 as discussed in the previous chapter. Thus,

� � ��� ( � � ���� ���� ( � � � � � � (3.48)

With a total energy of� � to distribute between the different subsystems there are three

distinct cases to consider: the energy can be given to just one of the subsystems (3), to two ofthem (2+1) or to three different subsystems (1+1+1). The first possibility gives us 6 differentquantum states (pick one element among 6). The second possibility gives 30 different quantumstates (first we have 6 different ways of picking the subsystem to give the energy

� � and afterthat there are 5 ways of selecting one of the remaining subsystems to give the energy � to).Finally the energy distribution (1+1+1) gives us another 20 quantum states since there are� ��� � � � � � � � � � � � ways of selecting 3 elements from a total of 6 elements. Thus we arrive

at� � ��� ( � � � ( � � � � � � � � �

��� ( � � ( � � � � � � (3.49)

Now the various probabilities are easy to calculate. In (b) �� � � � � ��� (3.50)

and in (c) � � � � � � � � � (3.51)
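The degeneracies (3.46)–(3.49) can be verified by brute-force enumeration; a small sketch:

from itertools import product
from collections import Counter

# Each of the 6 subsystems carries 0, 1, 2 or 3 energy quanta eps.
# Enumerate all 4^6 quantum states and count the total energies.
counts = Counter(sum(state) for state in product(range(4), repeat=6))

for n in range(4):
    print(f"g_{n} = {counts[n]}")      # 1, 6, 21, 56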

3.5 Problems for Chapter 3

3.1 A two-state system
Consider the quantum system illustrated schematically to the right. It has two non-degenerate energy levels with energies 0 and ε₀, respectively. The system is in thermal contact with the environment held at temperature T.
Determine the expectation value for the energy of the system when the temperature is given by: (a) k_B T = …, (b) k_B T = …, (c) k_B T = ….

3.2 A three-level system
The figure shows a schematic sketch of a three-level system, where the levels have the energies 0, ε, and 10ε, with ε > 0.
(a) Derive an expression for the probability P₁ of finding the system in the state with energy ε, given a temperature T of the environment.
(b) Sketch a curve that roughly shows how P₁ varies with temperature. Try to explain in words why P₁ has this temperature dependence.


3.3 The Boltzmann distribution and the harmonic oscillator
As should be well known by now, the energy levels of a harmonic oscillator are given by

ε_n = (n + 1/2) ħω.

One wants to determine the frequency ω for a particular oscillator, and to this end an experiment has been carried out at temperature T = … K that has given the probabilities of finding the oscillator in each one of the four lowest states. The results of the measurements are given in the diagram to the right.

[Figure: measured ln(P_n) plotted versus n for n = 0, 1, 2, 3, on a scale from 0 down to −3.]

Determine an approximate value for ω based on the data.

3.4 ⟨ε⟩ for a two-dimensional harmonic oscillator
Consider a two-dimensional harmonic oscillator with ħω = … meV. Determine the energy expectation value ⟨ε⟩ at room temperature.

3.5 Vibrational energy of a CO₂ molecule
A CO₂ molecule has four different vibrational modes. One of these (a) corresponds to a vibrational motion in which the entire molecule is stretched; the motion of another of the modes (b) means that one of the bonds is stretched while the other is compressed. Moreover, the molecule has two vibrational modes, (c) and (d), in which the molecule is bent. These two modes have the same vibrational frequency. [In (d) the atoms move perpendicularly to the plane of the paper.]

[Figure: the four vibrational modes (a)–(d) of the linear O–C–O molecule, with frequencies ω_a, ω_b, ω_c, ω_c.]

The total vibrational energy of the molecule can be written as a sum of energies for four different harmonic oscillators (we drop the zero-point energy here)

E_vib = n_a ħω_a + n_b ħω_b + n_c ħω_c + n_d ħω_c,

where n_a, n_b, n_c, n_d = 0, 1, 2, …. Through experiments one has determined the following values for the vibrational quanta: ħω_a = 166.3 meV, ħω_b = 291.4 meV, and ħω_c = 82.8 meV. Calculate the expectation value ⟨E_vib⟩ at temperature T = 500 K.

3.6 Average rotational energy of a diatomic molecule
The rotational energy of a diatomic molecule is in a quantum-mechanical description given by the expression

E_rot = ħ² l(l + 1)/(2I),

where I is the moment of inertia of the molecule. The quantum number l, which determines the magnitude of the rotational angular momentum L, can take the values

l = 0, 1, 2, 3, ….

For each value of l there are (2l + 1) distinct possible quantum states corresponding to different orientations of the angular momentum (m_l can take the values −l, …, +l). Assume that the molecule is in a gas in thermal equilibrium at temperature T. Calculate the average rotational energy of the molecule by proceeding as follows:
(a) Calculate the partition function Z, keeping in mind that it is a sum over all states, not just all energy levels. To do this, assume that the temperature is large enough that k_B T ≫ ħ²/(2I), so that the sum defining Z can be well approximated by an integral, using x = l(l + 1) as an integration variable.
(b) Knowing Z, calculate the expectation value for the rotational energy.
(c) For what temperatures is the condition k_B T ≫ ħ²/(2I) satisfied for a CO molecule?

3.7 Magnetic susceptibility
Consider the spin system discussed in Chapter 2 containing N spins. The system is in thermal contact with a reservoir at temperature T. Calculate the magnetization M as a function of B and T by employing the Boltzmann distribution. Calculate also the magnetic susceptibility χ = ∂M/∂B.

3.8 Probability of finding an adsorbed atom at different types of sites
Consider adsorption of gas molecules on a surface. Adsorbed molecules cannot move around freely on the surface, but are instead stuck in certain places and “jump” only once in a while between these special places.
The figure shows a certain face of a metal surface. The open circles represent metal atoms. Suppose the adsorbed molecules can either sit right on top of a metal atom (top site) or in between two metal atoms (bridge site). All other positions on the surface are inaccessible for the atoms except for very short periods of time during jumps. Assume that there is an energy difference between the two types of sites; the energy of a molecule at a top site is 0, while that of a molecule at a bridge site is ε. Calculate the probability of finding a molecule at a top site and bridge site, respectively, as a function of T. We assume that there are relatively few molecules on the surface so that they do not interact or get in the way of each other.
Hint: How many different nearest neighbors does each metal atom have?

[Figure: the metal surface with the unit cell, a top site, and a bridge site marked.]

3.9 Zipper problem
A zipper has N links; each link has a state in which it is closed with energy 0 and a state in which it is open with energy ε. We require, however, that the zipper can only unzip from the left end, and that link n can only open if all the links to the left (1, 2, …, n − 1) are already open.
(a) Show that the partition function can be summed in the form

Z = Σ_{n=0}^{N} e^{−nε/k_B T} = [1 − e^{−(N+1)ε/k_B T}]/[1 − e^{−ε/k_B T}].

(b) In the limits ε ≫ k_B T and ε ≪ k_B T, find the average number of open links. This is a very simplified model of the unwinding of two-stranded DNA molecules.

3.10 Degeneracy of a three-dimensional harmonic oscillator
In this chapter we discussed the two-dimensional harmonic oscillator. The energy levels are given by

ε = (n_x + n_y + 1) ħω = (N + 1) ħω,

and the degeneracy of an energy level is

g = N + 1,

i.e. there is only one state with the ground state energy ħω (N = 0), but there are for example 4 states with energy 4ħω (N = 3).

Now consider the three-dimensional harmonic oscillator, for which the energy levels are given by

ε = (n_x + n_y + n_z + 3/2) ħω = (N + 3/2) ħω,

where n_x, n_y, and n_z are all natural numbers. Show, by making use of the result for the two-dimensional harmonic oscillator, that the degeneracy in this case is given by

g = (N + 1)(N + 2)/2.

3.11 Degeneracy for a collection of identical harmonic oscillators
Consider now instead 5 separate, but identical, harmonic oscillators. To describe the quantum state of all the 5 oscillators we need five quantum numbers (non-negative integers), n₁, n₂, n₃, n₄, and n₅. The total energy for this system can be expressed as

E = (n₁ + n₂ + n₃ + n₄ + n₅ + 5/2) ħω = (N + 5/2) ħω.

Calculate the degeneracy g of the energy level corresponding to N = n₁ + n₂ + n₃ + n₄ + n₅ = …, with energy (N + 5/2) ħω.


Chapter 4

Thermal radiation

4.1 Photon modes

In this chapter we apply the Planck distribution of Eq. (3.20) to calculate the energy content of the electromagnetic field at a given temperature. We will also be able to establish a relation between the temperature of an object, like a piece of red-hot iron or the sun, and its color. The higher the temperature, the more radiation is sent out at higher photon frequencies. Therefore, when a piece of metal is heated, it first starts to glow with a reddish color, but as the temperature is increased it becomes nearly white (a mixture of all colors).

The starting point for the derivation is the Planck distribution, valid for a harmonic oscillator. The reason for this is that the electromagnetic field can be viewed as a collection of harmonic oscillators of varying frequency. A photon transmitting blue light is an excitation of a harmonic oscillator with a relatively high frequency. A red photon is an excitation of another oscillator with lower frequency. Any kind of electromagnetic radiation (radio waves, microwaves, visible light, x-rays, and gamma rays) can be described as excitations of harmonic oscillators.

[Figure 4.1: Illustration of the first few standing-wave modes (n = 1, 2, 3) confined to the interval 0 ≤ x ≤ L; the vertical axis shows the amplitude.]

Since the expectation value of the energy for a single harmonic oscillator was already derived in Chapter 3, the task at hand here is to label and count all the harmonic oscillators that together constitute the electromagnetic field. The total energy of the field is then found by adding the contributions from the different oscillators. This can be done because the different oscillators or modes of the field are independent of each other.

To count the number of modes in the electromagnetic field one usually looks at a certain cubic volume V = L³ (L is the cube side). The allowed modes then correspond to standing waves that fit into the cubic volume with a vanishing amplitude at the "walls" of the volume. [Note that the boundary conditions for the electromagnetic field in reality are a bit more complicated, but the present way of reasoning is enough for counting the photon modes, which is our immediate objective.] If we first restrict the attention to an example in one dimension, standing waves on the interval 0 ≤ x ≤ L take the form

$$u_n(x) = A \sin\left(\frac{n\pi x}{L}\right), \qquad n = 1, 2, 3, \ldots \qquad (4.1)$$

Waves of this form are illustrated in Fig. 4.1. The wavelength for each of these modes is

$$\lambda_n = \frac{2L}{n}, \qquad (4.2)$$

and the wave number k_n = 2π/λ_n = nπ/L. Thus, the photon frequency is ν_n = c/λ_n = nc/(2L), and the corresponding angular frequency

$$\omega_n = 2\pi\nu_n = \frac{n\pi c}{L}. \qquad (4.3)$$

The energy of a photon is

$$\varepsilon_{photon} = \hbar\omega_n = \frac{n\pi\hbar c}{L}. \qquad (4.4)$$

Let us generalize this to the real, three-dimensional case. Normally a wave will propagate in a general direction that is not parallel to any of the coordinate axes. The wave number depends on three different integers n_x, n_y, and n_z, one for each dimension,

$$k = \frac{\pi}{L}\sqrt{n_x^2 + n_y^2 + n_z^2}, \qquad n_x, n_y, n_z = 1, 2, 3, \ldots \qquad (4.5)$$

In other words, k is the magnitude of a wave vector $\vec{k}$ with Cartesian components n_xπ/L, n_yπ/L, and n_zπ/L. To specify a certain photon mode one needs a triplet of integers, for example (n_x, n_y, n_z) = (1, 1, 2). In addition one must also specify a polarization direction. The electric field of a photon mode is a vector. This vector must be orthogonal to the propagation direction, since electromagnetic waves are completely transverse. A wave propagating in the z direction can therefore have two different, independent polarization directions, x and y. Thus, in general, for every triplet (n_x, n_y, n_z) there are two different photon modes, and they both have the photon energy

$$\varepsilon_{photon}(n_x, n_y, n_z) = \hbar c k = \frac{\pi\hbar c}{L}\sqrt{n_x^2 + n_y^2 + n_z^2}. \qquad (4.6)$$

4.2 Stefan-Boltzmann law

Knowing the modes of the electromagnetic field and their photon energies we can proceed with the next step, the calculation of the total energy of the field in thermal equilibrium with the surrounding matter. We recall from Eq. (3.19) that the average energy in a harmonic oscillator mode, not counting the zero-point energy ħω/2, is

$$\langle E \rangle = \frac{\hbar\omega}{e^{\hbar\omega/k_B T} - 1}. \qquad (4.7)$$

Adding the energy of all the modes we get the internal energy of the electromagnetic field

$$U = \sum_{n_x=1}^{\infty}\sum_{n_y=1}^{\infty}\sum_{n_z=1}^{\infty} 2\,\frac{\hbar\omega(n_x, n_y, n_z)}{e^{\hbar\omega(n_x, n_y, n_z)/k_B T} - 1}, \qquad (4.8)$$

where according to Eq. (4.6) ħω(n_x, n_y, n_z) = (πħc/L)√(n_x² + n_y² + n_z²). The first factor of 2 in Eq. (4.8) comes from counting the two polarization directions.

Rather than evaluating the sum in Eq. (4.8) we will transform it to an integral, treating n_x, n_y, and n_z as continuous variables. This is perfectly reasonable since the volume V is macroscopic, and L is much larger than typical photon wavelengths, which means that the n's are very large integers. We make the replacement

$$\sum_{n_x=1}^{\infty}\sum_{n_y=1}^{\infty}\sum_{n_z=1}^{\infty} \;\longrightarrow\; \int_0^{\infty} dn_x \int_0^{\infty} dn_y \int_0^{\infty} dn_z, \qquad (4.9)$$

which yields

$$U = 2\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} dn_x\, dn_y\, dn_z\, \frac{\hbar\omega(n_x, n_y, n_z)}{e^{\hbar\omega(n_x, n_y, n_z)/k_B T} - 1}. \qquad (4.10)$$

To evaluate the integral, we make a change of variables to "spherical coordinates," using n = (n_x² + n_y² + n_z²)^{1/2} and the angles θ and φ instead of n_x, n_y, and n_z. Then the integrand only depends on n,

$$U = 2\cdot\frac{1}{8}\cdot 4\pi \int_0^{\infty} dn\, n^2\, \frac{\pi\hbar c n/L}{e^{\pi\hbar c n/L k_B T} - 1} = \pi \int_0^{\infty} dn\, n^2\, \frac{\pi\hbar c n/L}{e^{\pi\hbar c n/L k_B T} - 1}, \qquad (4.11)$$

where the factor 1/8 appears because the integration only covers the octant in which n_x, n_y, and n_z are all positive.

With the substitution x = πħcn/(L k_B T), this yields

$$U = \frac{(k_B T)^4 L^3}{\pi^2 \hbar^3 c^3} \int_0^{\infty} \frac{x^3}{e^x - 1}\, dx. \qquad (4.12)$$

The last integral has the numerical value π⁴/15, so that the energy of the electromagnetic field per unit volume is

$$\frac{U}{V} = \frac{\pi^2 (k_B T)^4}{15\, \hbar^3 c^3}. \qquad (4.13)$$

Equation (4.13) is known as the Stefan-Boltzmann law. It says that the energy content of the electromagnetic field varies as the fourth power of the temperature.

If the photon field we are studying is confined to some volume (or cavity) and is in thermal equilibrium with the walls of the cavity, the radiation emerging from a small opening in the cavity wall is called black-body radiation. The opening into the cavity works as a perfectly black body. Any radiation that hits the opening from the outside will eventually be absorbed by the walls of the cavity, and any radiation coming out through the hole is in thermal equilibrium with the walls, see Fig. 4.2.

The energy flux out from the cavity is proportional to the energy concentration U/V in Eq. (4.13), the speed of light c, and a geometric factor equal to 1/4. Thus, the energy flux is

$$J_E = \frac{c}{4}\,\frac{U}{V} = \frac{\pi^2 k_B^4}{60\, \hbar^3 c^2}\, T^4 = \sigma_B T^4, \qquad (4.14)$$

where σ_B = 5.67 × 10⁻⁸ W/(m² K⁴) is called the Stefan-Boltzmann constant. Equation (4.14) is valid when calculating the energy flux from any surface or body that absorbs all radiation that impinges on it (a so-called black body).
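As a sanity check, the numerical value of σ_B follows directly from the fundamental constants. A minimal sketch (not part of the original notes):

```python
import math

# Stefan-Boltzmann constant: sigma_B = pi^2 k_B^4 / (60 hbar^3 c^2), Eq. (4.14)
k_B  = 1.380649e-23      # J/K
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s

sigma_B = math.pi**2 * k_B**4 / (60 * hbar**3 * c**2)
print(sigma_B)            # ~5.67e-8 W/(m^2 K^4)

# Energy flux from a black body at the Sun's surface temperature
print(sigma_B * 5800**4)  # ~6.4e7 W/m^2
```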

[Figure 4.2: The radiation emitted by a small opening of a cavity with a very irregular shape, whose walls are held at temperature T, has the same intensity and spectral distribution as that coming from a perfectly black body. Any incoming ray entering the cavity will be absorbed by the walls of the cavity before it has a chance of escaping. Since the cavity is a perfect absorber it must also be a "perfect emitter"; J_E is given by Eq. (4.14).]

In practice all surfaces do reflect some of the radiation incident upon them. For example, a surface that only absorbs 70 % of the radiation incident upon it (the absorptivity a = 0.7) also has an emission rate that is reduced to 70 % of that given by Eq. (4.14) (the emissivity e = a = 0.7). If this were not so, a body that is initially in thermal equilibrium with the electromagnetic field would after a while be out of equilibrium with the field. This would of course be in conflict with the second law of thermodynamics.

4.3 The Planck radiation law

In addition to knowing the total energy of the electromagnetic field it is of considerable interest to know how this energy is distributed between different photon frequencies (or, for visible light, different colors). If we return to Eq. (4.11), and instead use the photon energy ε = πħcn/L as an integration variable, we get

$$U = \frac{V}{\pi^2 \hbar^3 c^3} \int_0^{\infty} \frac{\varepsilon^3}{e^{\varepsilon/k_B T} - 1}\, d\varepsilon. \qquad (4.15)$$

Using the Planck distribution function from Eq. (3.20),

$$f(\varepsilon) = \frac{1}{e^{\varepsilon/k_B T} - 1}, \qquad (4.16)$$

the total energy can be written as

$$U = V \int_0^{\infty} D(\varepsilon)\, \varepsilon\, f(\varepsilon)\, d\varepsilon. \qquad (4.17)$$

The integrand in this equation has an appealingly simple and clear interpretation. The first factor, D(ε), is the density of photon modes per unit energy (and unit volume),

$$D(\varepsilon) = \frac{\varepsilon^2}{\pi^2 \hbar^3 c^3}; \qquad (4.18)$$

the second factor, ε, is the energy of one photon, while f(ε) is the number of photons we can expect to find in a mode with the particular energy ε. The product of these three factors, multiplied by a (small) energy interval dε, tells how much energy is carried by photons with an energy inside the interval [ε, ε + dε]; the integral gives the total energy.

[Figure 4.3: The left panel shows the function x³/(eˣ − 1), with x = ε/k_B T, that governs the energy dependence of the energy content of the electromagnetic field, together with the Rayleigh-Jeans form (ε/k_B T)². As is seen in the figure, the energy density reaches a maximum at ε ≈ 2.82 k_B T. The right panel shows u_ε (in J/(m³ eV)) as a function of ε (in eV) at the temperatures 2000 K, 3000 K, 4000 K, 5000 K, and 6000 K.]

Thus, the energy content of the electromagnetic field per unit photon energy is

$$u_\varepsilon = D(\varepsilon)\,\varepsilon\, f(\varepsilon) = \frac{1}{\pi^2\hbar^3 c^3}\, \frac{\varepsilon^3}{e^{\varepsilon/k_B T} - 1} = \frac{(k_B T)^3}{\pi^2\hbar^3 c^3}\, \frac{(\varepsilon/k_B T)^3}{e^{\varepsilon/k_B T} - 1}. \qquad (4.19)$$

We see that the energy dependence of u_ε can be summarized in terms of the dimensionless variable x = ε/(k_B T) as

$$u_\varepsilon \propto \frac{x^3}{e^x - 1}. \qquad (4.20)$$

This function is displayed to the left in Fig. 4.3. We see that the field energy per unit photon energy reaches a maximum when ε ≈ 2.82 k_B T. For room temperature, T ≈ 300 K, this corresponds to a photon energy of only ≈ 0.073 eV, but for a temperature of 6000 K (approximately the surface temperature of the Sun) the maximum appears at ≈ 1.46 eV, which is inside the visible range. The right panel in Fig. 4.3 shows plots of u_ε at different temperatures.
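The position of this maximum is easy to reproduce numerically. Setting the derivative of x³/(eˣ − 1) to zero leads to the condition x = 3(1 − e⁻ˣ), which the following sketch (an illustration, not from the original notes) solves by fixed-point iteration:

```python
import math

# Peak of the Planck curve: d/dx [x^3/(e^x - 1)] = 0  =>  x = 3(1 - e^{-x})
x = 3.0
for _ in range(50):                 # simple fixed-point iteration
    x = 3.0 * (1.0 - math.exp(-x))
print(x)                            # ~2.821

k_B = 8.617e-5                      # Boltzmann constant in eV/K
for T in (300, 6000):
    print(T, x * k_B * T, "eV")     # ~0.073 eV and ~1.46 eV, as in the text
```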

In the limit of large temperatures and small photon energies Eq. (4.19) yields

$$u_\varepsilon \approx \frac{(k_B T)^3}{\pi^2\hbar^3 c^3}\left(\frac{\varepsilon}{k_B T}\right)^2 = \frac{k_B T\, \varepsilon^2}{\pi^2\hbar^3 c^3}. \qquad (4.21)$$

This relation is known as the Rayleigh-Jeans law; a curve showing u_ε from Eq. (4.21) is displayed in Fig. 4.3. The Rayleigh-Jeans law and the Planck law agree for photon energies that are small compared with k_B T, but Rayleigh-Jeans overestimates the energy of the field at higher photon energies. Equation (4.21) results from a completely classical treatment of the electromagnetic field that does not take into account that the energy of a mode can only increase in steps of ħω.

The Rayleigh-Jeans law does not agree with experimental observations. In the late 19th century that was one of the first indications that classical physics could not describe all physical phenomena. Max Planck's derivation of Eq. (4.19) marked the beginning of quantum physics.


4.4 Problems for Chapter 4

4.1 Photon energy
A photon propagates in vacuum with a given wave vector k⃗ = (k_x, k_y, k_z). What is the photon energy?

4.2 Number of thermal photons
Show that the number of photons ⟨N⟩ in equilibrium at temperature T in a volume V is

$$\langle N \rangle = \frac{2.404}{\pi^2}\, V \left(\frac{k_B T}{\hbar c}\right)^3.$$

(Hint: $\int_0^{\infty} x^2\, dx/(e^x - 1) = 2\zeta(3) \approx 2.404$.)

4.3 Number of thermal visible photons
Estimate the number of photons with wavelengths between 400 nm and 800 nm (the visible range) that are thermally excited in a volume of 1 m³ at room temperature, 293 K.

4.4 Number of thermal FM photons
Estimate the number of photons in the FM radio range, between 87.5 MHz and 108 MHz, that are thermally excited in a volume of 1 m³ at room temperature, 293 K.

4.5 Surface temperature of the Earth
Calculate the surface temperature of the Earth on the assumption that, as a black body in thermal equilibrium, it reradiates as much thermal radiation as it receives from the Sun. Assume also that the surface temperature of the Earth is constant over the day-night cycle. Use T_Sun = 5800 K, R_Sun = 7 × 10⁸ m, and the Earth-Sun distance 1.5 × 10¹¹ m.

4.6 Surface temperature of Venus and Mars
Repeat the calculation of the previous problem for our neighbor planets Venus and Mars. You have to look up the data you need yourself. Try also to find information about the actual temperatures of those two planets.

4.7 Heat shield
A black, non-reflective plane at temperature T_u is parallel to a black plane at temperature T_l. The net energy flux density in vacuum between the two planes is J_E = σ_B(T_u⁴ − T_l⁴), where σ_B is the Stefan-Boltzmann constant. A third black plane is inserted between the other two and is allowed to come to a steady temperature T_m. Find T_m in terms of T_u and T_l, and show that the net energy flux is cut in half because of the presence of the shielding plane. This is the principle of heat shields, and it is widely used to reduce radiative heat transfer.

4.8 Reflective heat shield
Consider a situation similar to the one in the previous problem; however, this time a plane that is partly reflecting is inserted between the black planes held at temperatures T_u and T_l, respectively. We denote the reflectivity r, so that the absorptivity and emissivity a = e = 1 − r. Show that the net flux density is reduced by a factor (1 − r) compared with when the inserted plane is black (r = 0).


4.9 Energy density per unit wavelength
In some cases one looks at the energy of the electromagnetic field per unit wavelength instead of per unit photon energy. To every photon energy interval [ε, ε + dε] corresponds a wavelength interval of width dλ (dλ will of course be negative if dε is positive). The energy density per unit wavelength, u_λ, should satisfy the relation

$$u_\lambda(\lambda)\,|d\lambda| = u_\varepsilon(\varepsilon)\,d\varepsilon, \qquad \text{i.e.} \qquad u_\lambda = u_\varepsilon \left|\frac{d\varepsilon}{d\lambda}\right|.$$

Show that this yields

$$u_\lambda = \frac{16\pi^2 \hbar c}{\lambda^5}\, \frac{1}{e^{2\pi\hbar c/\lambda k_B T} - 1}.$$


Chapter 5

The ideal gas

In this chapter we are going to apply the Boltzmann distribution to an ideal gas. While the ideal gas is by all means a system for which the laws of classical physics are valid, we are just the same going to start from an equally valid quantum description, since that makes it easier to generalize the results later on.

5.1 The Boltzmann distribution applied to a particle in a box

We need to know the energy levels of the gas atoms (we assume that the gas is monatomic for now). Normally one assumes that the gas is contained in a box. Then the quantum-mechanical wave functions of the gas atoms must form standing waves inside the box in very much the same way as the photon modes did in Chapter 4 (see in particular Fig. 4.1). A gas atom confined to a one-dimensional box would be described by a matter wave for which the de Broglie wavelength λ_n = 2L/n, if L is the length of the box and n is a positive integer. We recall the relation between de Broglie wavelength and momentum for a particle,

$$p = \frac{h}{\lambda}. \qquad (5.1)$$

This means that the energy of the atom can be written as

$$\varepsilon_n = \frac{p_n^2}{2m} = \frac{h^2}{2m\lambda_n^2} = \frac{\pi^2\hbar^2}{2mL^2}\, n^2. \qquad (5.2)$$

This reasoning can be generalized to three dimensions in the same way as in the photon case. Each energy level is characterized by three integers n_x, n_y, and n_z (because there are three different directions in space), and the total energy of the particle is

$$\varepsilon(n_x, n_y, n_z) = \frac{\pi^2\hbar^2}{2mL^2}\,(n_x^2 + n_y^2 + n_z^2). \qquad (5.3)$$

Suppose now that we just have one particle in the volume of the box. By applying the Boltzmann distribution we can calculate the expectation value for the energy as well as other quantities. The partition function is

$$Z_1 = \sum_{n_x=1}^{\infty}\sum_{n_y=1}^{\infty}\sum_{n_z=1}^{\infty} \exp\!\left[-\frac{\pi^2\hbar^2 (n_x^2 + n_y^2 + n_z^2)}{2mL^2 k_B T}\right], \qquad (5.4)$$


and as before β = 1/(k_B T). If the gas container is macroscopic in size, the atom may be excited into states with very high n, so just as for the photons, the sums can be turned into integrals over n_x, n_y, and n_z,

$$Z_1 = \int_0^{\infty} dn_x \int_0^{\infty} dn_y \int_0^{\infty} dn_z\, \exp\!\left[-\frac{\pi^2\hbar^2 (n_x^2 + n_y^2 + n_z^2)}{2mL^2 k_B T}\right]. \qquad (5.5)$$

By going over to spherical coordinates we then get a Gaussian integral, yielding

$$Z_1 = \frac{1}{8}\cdot 4\pi \int_0^{\infty} dn\, n^2\, e^{-\pi^2\hbar^2 n^2/2mL^2 k_B T} = L^3 \left(\frac{m k_B T}{2\pi\hbar^2}\right)^{3/2} = V \left(\frac{m k_B T}{2\pi\hbar^2}\right)^{3/2}. \qquad (5.6)$$

The expectation value of the particle's energy can be written

$$\langle E \rangle = \frac{1}{Z_1}\sum_{n_x=1}^{\infty}\sum_{n_y=1}^{\infty}\sum_{n_z=1}^{\infty} \varepsilon(n_x, n_y, n_z)\, \exp\!\left[-\frac{\pi^2\hbar^2 (n_x^2 + n_y^2 + n_z^2)}{2mL^2 k_B T}\right]. \qquad (5.7)$$

We note, comparing this to Eq. (5.4), that

$$\langle E \rangle = -\frac{1}{Z_1}\frac{\partial Z_1}{\partial \beta} = k_B T^2\, \frac{\partial \ln Z_1}{\partial T} = k_B T^2 \cdot \frac{3}{2T} = \frac{3 k_B T}{2}. \qquad (5.8)$$

Thus, the energy of a gas atom is directly proportional to the temperature. Each degree of freedom (each coordinate direction) contributes k_B T/2 to the expectation value; this is another example of the equipartition principle, valid in the classical limit, i.e. when one can expect a particle to be in a highly excited state. We discussed the equipartition principle earlier on page 13 in connection with the harmonic oscillator.
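One can verify the equipartition result numerically by summing the one-dimensional partition function directly; the 3D sum factorizes into three identical 1D sums. A minimal sketch (not from the original notes), where the dimensionless level spacing θ = π²ħ²/(2mL²k_B T) is chosen small so that the classical limit applies:

```python
import math

# 1D particle in a box: energies eps_n = theta * n^2 (in units of k_B T).
# For theta << 1 (macroscopic box) <E> should approach k_B T / 2 per dimension.
theta = 1e-4
nmax = int(20 / math.sqrt(theta))          # sum until the Boltzmann factor is negligible
weights = [math.exp(-theta * n * n) for n in range(1, nmax)]
Z1 = sum(weights)
E1 = sum(theta * n * n * w for n, w in zip(range(1, nmax), weights)) / Z1
print(E1)        # ~0.5, i.e. <E> = (1/2) k_B T per degree of freedom
print(3 * E1)    # ~1.5, the 3D result (3/2) k_B T of Eq. (5.8)
```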

5.2 Boltzmann distribution and thermodynamics

Pressure. To be able to treat the case of N gas atoms we will first try to establish better connections between statistical physics and thermodynamics. Consider a system, for example the particle in a box of the previous section. Suppose that we change the volume of the system by a small amount dV. To do this one must perform work on the system. From classical mechanics and thermodynamics the work is given by

$$\delta W = -p\, dV, \qquad (5.9)$$

where p is the pressure. No quantum transitions take place in the system during the volume change, but the internal energy of course increases by an amount equal to the performed work, i.e. dU = δW. What happens is that the energy levels of the system move (up) so as to increase the total energy. Looking at the total internal energy, the change can be written

$$dU = \left(\frac{\partial U}{\partial V}\right)_S dV, \qquad (5.10)$$

which, in view of Eq. (5.9), means that

$$p = -\left(\frac{\partial U}{\partial V}\right)_S. \qquad (5.11)$$


The subscript S on the partial derivative means that the entropy should be kept constant when evaluating the derivative. Why should one do that? The reason is that when the volume is changed there are no transitions between different quantum states, and then neither the multiplicity nor the entropy changes.

Thermodynamic identity. The temperature was defined in Eq. (2.16) as

$$\frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_V. \qquad (5.12)$$

(In Chapter 2 we did not explicitly state that the volume should be kept constant, since volume was not really a meaningful quantity for the model system studied there.) Now from Eq. (5.12) we conclude that energy and entropy changes in a slow, quasistatic process at constant volume are related through

$$dU = T\, dS. \qquad (5.13)$$

Combining this with the energy change given by Eq. (5.10), which holds for changes at constant entropy, one can write

$$dU = T\, dS - p\, dV \qquad (5.14)$$

in the limit of infinitesimal changes. Equation (5.14) is known as the thermodynamic identity. The two terms in Eq. (5.14) represent the two different possible ways of changing the energy of a system: heat and work. The heat supplied to the system in a quasistatic process is given by

$$\delta Q = T\, dS. \qquad (5.15)$$

It should be kept in mind that the thermodynamic identity is only valid for quasistatic processes that are slow enough that the state variables (T, S, p, etc.) are all the time well-defined. The first law of thermodynamics (energy conservation),

$$\Delta U = Q + W, \qquad (5.16)$$

is, on the other hand, valid for any kind of change, even if it is not possible to monitor all the state variables during a fast process, for example a rapid expansion of a gas. We will come back to a more fundamental discussion of heat and work later in this chapter.

In Eq. (5.14) one considers U to be a function of the entropy S and the volume V; we could also write

$$dU = \left(\frac{\partial U}{\partial S}\right)_V dS + \left(\frac{\partial U}{\partial V}\right)_S dV, \qquad (5.17)$$

which means that

$$T = \left(\frac{\partial U}{\partial S}\right)_V, \qquad p = -\left(\frac{\partial U}{\partial V}\right)_S. \qquad (5.18)$$

Helmholtz free energy. Sometimes it is not so practical to use S and V as independent variables. While the volume V can be controlled in many experiments, that is seldom the case with the entropy S. It would be much more practical to use the temperature as an independent variable. This can be achieved by introducing the Helmholtz free energy F, defined as

$$F = U - TS. \qquad (5.19)$$

Often F is just called the free energy.


By differentiating Eq. (5.19) we find, with the aid of the thermodynamic identity, that

$$dF = dU - T\, dS - S\, dT = -S\, dT - p\, dV \qquad (5.20)$$

for quasistatic processes. Thus, F is a function for which T and V are the natural independent variables. One can also write

$$dF = \left(\frac{\partial F}{\partial T}\right)_V dT + \left(\frac{\partial F}{\partial V}\right)_T dV, \qquad (5.21)$$

and by comparing this with Eq. (5.20), we find that

$$S = -\left(\frac{\partial F}{\partial T}\right)_V, \qquad p = -\left(\frac{\partial F}{\partial V}\right)_T. \qquad (5.22)$$

One can show that a system that is held at constant temperature and volume, if it is not already in internal thermal equilibrium, will change its macroscopic state so as to minimize F. Thus, for a system in thermal contact with a reservoir, F plays a role similar to the one S had in a completely isolated system (there all changes towards internal equilibrium drive S to a maximum). This is one reason why F is a very useful quantity.

The free energy and the partition function. The Helmholtz free energy also provides a very useful link between thermodynamics and statistical physics, because it can be calculated directly from the partition function Z,

$$F = -k_B T \ln Z. \qquad (5.23)$$

This relation can be proved by inserting the expression for S in Eq. (5.22) into Eq. (5.19). This yields the differential equation

$$F = U + T \left(\frac{\partial F}{\partial T}\right)_V, \qquad (5.24)$$

and one can show that F from Eq. (5.23) is a solution to this equation. In the next section we will apply some of the relations found here to the ideal gas.
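The consistency between Eqs. (5.23) and (5.24) is easy to verify numerically for a small system. The sketch below (an illustration, not from the notes, using a two-level system with energies 0 and ε) computes F = −k_B T ln Z, obtains S = −∂F/∂T by a finite difference, and checks that U = F + TS reproduces the Boltzmann average of the energy:

```python
import math

eps, kB = 1.0, 1.0            # work in units where eps = k_B = 1

def F(T):                      # Helmholtz free energy from the partition function
    Z = 1.0 + math.exp(-eps / (kB * T))
    return -kB * T * math.log(Z)

T, h = 0.7, 1e-6
S = -(F(T + h) - F(T - h)) / (2 * h)      # S = -dF/dT, Eq. (5.22)
U_from_F = F(T) + T * S                   # U = F + T S, cf. Eq. (5.24)

# Direct Boltzmann average of the energy, for comparison
p1 = math.exp(-eps / (kB * T)) / (1.0 + math.exp(-eps / (kB * T)))
print(U_from_F, eps * p1)                 # the two values agree
```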

5.3 Many particles in a box—the ideal gas

We return to the partition function for a gas atom in a container,

$$Z_1 = n_Q V = \frac{V}{V_Q}, \qquad (5.25)$$

where we have introduced the quantum volume V_Q,

$$V_Q = \left(\frac{2\pi\hbar^2}{m k_B T}\right)^{3/2}, \qquad (5.26)$$

and its inverse, the quantum concentration n_Q,

$$n_Q = \frac{1}{V_Q} = \left(\frac{m k_B T}{2\pi\hbar^2}\right)^{3/2}. \qquad (5.27)$$


The probability of finding the atom in the very lowest state, with an energy that is practically equal to zero, is then

$$P(\varepsilon \approx 0) = \frac{e^{0}}{Z_1} = \frac{1}{n_Q V}. \qquad (5.28)$$

If we have N atoms in the box, the probability of having an atom in the lowest state is P(ε ≈ 0) ≈ N/(n_Q V) = n/n_Q (n = N/V is the particle concentration), as long as the atoms can be considered to be independent of each other. A necessary requirement for having entirely independent particles is that n/n_Q ≪ 1. In other words, the particle concentration N/V must be low enough that it is unlikely to find one particle in a volume V_Q. This condition is fulfilled for small particle concentrations and large temperatures.
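To get a feel for the numbers, the sketch below (not in the original notes) evaluates the quantum concentration of Eq. (5.27) for N₂ molecules at room temperature and compares it with the actual concentration of an ideal gas at atmospheric pressure:

```python
import math

kB, hbar, u = 1.380649e-23, 1.054571817e-34, 1.66053907e-27  # SI units

T = 293.0                 # room temperature, K
m = 28.0 * u              # mass of an N2 molecule

n_Q = (m * kB * T / (2 * math.pi * hbar**2)) ** 1.5   # quantum concentration
n   = 1.013e5 / (kB * T)                              # n = p/(k_B T) at 1 atm

print(n_Q)        # ~1.4e32 m^-3
print(n)          # ~2.5e25 m^-3
print(n / n_Q)    # ~2e-7  ->  the classical condition n << n_Q holds easily
```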

[Figure 5.1: Illustration of the difference between distinguishable and indistinguishable particles. The two states to the left, in which each of the two particles has a distinct identity, are different from each other. However, once it is impossible to tell the difference between the particles (as in the state to the right), only one state remains. This reduces the partition function by a factor of 2!, and in general, for N particles, by a factor of N!.]

Let us now evaluate the partition function for N atoms in a box. The total energy for the system is a sum of the single-atom energies,

$$E_{total} = \sum_{i=1}^{N} \varepsilon_i, \qquad (5.29)$$

where i is an atom index. If the atoms were truly distinguishable, a little like differently colored billiard balls, the partition function could be written

$$Z_{dist} = \sum \exp\!\left[-(\varepsilon_1 + \varepsilon_2 + \cdots + \varepsilon_N)/k_B T\right] = Z_1^N, \qquad (5.30)$$

where the last result follows because the multiple sum factorizes into a product of N identical, simple sums. But atoms are not distinguishable; we cannot tell the difference between the situation where atom i is in one state and atom j in another state, and the situation where the two atoms have "traded places" with each other (see Fig. 5.1). As a result of this, the correct partition function for N gas atoms is a factor N! smaller than Z_1^N, to account for the fact that all the possible permutations of the particles are indistinguishable from each other; thus

$$Z_N = \frac{1}{N!}\, Z_1^N. \qquad (5.31)$$

The Helmholtz free energy for an ideal gas can now be written

$$F = -k_B T \ln Z_N = -k_B T\left[N \ln Z_1 - \ln N!\right] = N k_B T\left[\ln\frac{n}{n_Q} - 1\right], \qquad (5.32)$$


where we have used Stirling's formula, valid for large N,

$$\ln N! \approx N \ln N - N, \qquad (5.33)$$

and n = N/V is the particle concentration.

Next we can calculate the internal energy, since Eq. (5.24) yields

$$U = F - T\left(\frac{\partial F}{\partial T}\right)_V = -T^2 \left(\frac{\partial (F/T)}{\partial T}\right)_V. \qquad (5.34)$$

But

$$\left(\frac{\partial (F/T)}{\partial T}\right)_V = -N k_B\, \frac{\partial \ln n_Q}{\partial T} = -\frac{3 N k_B}{2T}, \qquad (5.35)$$

so that

$$U = \frac{3}{2}\, N k_B T, \qquad (5.36)$$

as one might expect from Eq. (5.8) as well as the equipartition principle. This means that the heat capacity at constant volume for a monatomic gas is

$$C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{3}{2}\, N k_B. \qquad (5.37)$$

For polyatomic gases there are additional contributions to both the internal energy and the heat capacity that are due to rotational motion and (at high temperatures) vibrational motion, as previously discussed in Chapter 3.

It is also possible to calculate the pressure and in this way prove the ideal gas law. To evaluate the expression for the pressure, p = −(∂F/∂V)_T, we rewrite Eq. (5.32) as

$$F = N k_B T\left[\ln\frac{N}{V n_Q} - 1\right] = -N k_B T \ln V + \varphi(N, T), \qquad (5.38)$$

and note that φ depends on N and T but not on V. Therefore

$$p = -\left(\frac{\partial F}{\partial V}\right)_T = \frac{N k_B T}{V}, \qquad (5.39)$$

and consequently

$$pV = N k_B T, \qquad (5.40)$$

which is the well-known ideal gas law. Boyle's law, pV = constant for changes at constant temperature, and Charles's law, V/T = constant for changes at constant pressure, follow from Eq. (5.40).

5.4 Heat and work in statistical physics

In thermodynamics one introduces the two concepts heat and work in order to describe how the total energy of a system changes. It is important to keep in mind that heat and work have to do with changes; heat and work have nothing to do with the state of a system. A physical system contains energy that has been brought there either because heat was supplied or because work was done on the system, but just looking at the final state of the system it is impossible to say how it reached that state.


From a practical point of view it is normally quite clear when the energy changes because of heat transfer (e.g. boiling water) or because of work (e.g. compressing a gas), respectively. In statistical physics it is nevertheless possible to proceed further, and define heat and work from a microscopic point of view.

The energy of a quantum system can change in two distinctly different ways (see Fig. 5.2), and this distinguishes heat transfer from work: (i) There can be quantum transitions in which the system changes from one state to another with a different energy. This is what happens when heat is transferred. (ii) The actual energy levels can move if some external condition changes gradually, and even if no transitions take place the total energy of the system changes. This is what happens when work is done on the system, or the system does work on the environment.

[Figure 5.2: Illustration of the difference between heat and work at a microscopic level. Three panels show the initial state, the state after heat is added (transitions between fixed levels), and the state after work is done (the levels themselves move).]

5.5 Problems for Chapter 5

5.1 O₂ gas
One liter of oxygen gas with an initial temperature of 40 °C and initial pressure p₁ expands until the volume is 1.5 liter and the pressure is p₂.
(a) Calculate the number of moles of oxygen that are present.
(b) Calculate the final temperature after the expansion.

5.2 Quantum concentration
(a) Calculate the quantum concentration n_Q for oxygen gas, O₂, at room temperature.
(b) Calculate the concentration of O₂ molecules in air at room temperature and atmospheric pressure.

5.3 Addition of pressure from different molecular species
The ideal gas law states that the particle concentration

$$n = \frac{N}{V} = \frac{p}{k_B T}.$$

(a) Calculate the particle concentration in air at T = 300 K at sea level.
(b) Using the theory developed in this chapter, try to explain why the ideal gas law works for air, which is a mixture of nitrogen and oxygen, so that n = n_N₂ + n_O₂.

5.4 Air bubble in water
We have an air bubble at the bottom of a 30 m deep freshwater lake. The bubble has a diameter of 1 cm and the temperature of the air is the same as that of the surrounding water, 4 °C. The bubble starts to rise towards the surface of the lake, where the water temperature is 18 °C.
(a) Calculate the diameter of the bubble when it reaches the surface, assuming that it rises so slowly that the air temperature is all the time the same as the surrounding water temperature.
(b) Calculate the diameter of the bubble when it reaches the surface, assuming instead that the rise is so fast that no energy is exchanged between the water and the air in the bubble.
(c) What is the air temperature when the bubble reaches the surface in the second case?

5.5 Internal energy of indoor air
The temperature in a room with a volume of 25 m³ (at sea level) is increased from 15 °C to 20 °C. Determine the change of internal energy for the air in the room.

5.6 Maxwell-Boltzmann distribution
(a) Use the Boltzmann distribution for an ideal gas of particles with mass m to show that the distribution of speeds v for the particles follows

$$D(v) = 4\pi v^2 \left(\frac{m}{2\pi k_B T}\right)^{3/2} e^{-mv^2/2k_B T}.$$

This is the Maxwell velocity distribution. The quantity D(v)dv gives the probability of finding a particle with a speed in the interval [v, v + dv].
(b) For what speed does D(v) have a maximum if the particles are O₂ molecules at T = 300 K?

5.7 Value of k_B T at room temperature
One mole of any gas at room temperature and atmospheric pressure occupies a volume of about 24 liters. Use this result to estimate k_B T at room temperature. Express the answer in units of eV.

5.8 Energy for an electron in a quantum well
An electron is confined in a cubic quantum well. We treat this problem as a particle in a box with hard walls, so that the energy of a state is given by

$$\varepsilon(n_x, n_y, n_z) = \frac{\pi^2\hbar^2}{2mL^2}\,(n_x^2 + n_y^2 + n_z^2).$$

Here the quantum numbers n_x, n_y, and n_z are positive integers, and L is the side of the cube. Calculate the energy expectation value ⟨E⟩ for the electron at room temperature.


5.9 Elasticity of polymers
Consider a simple model of a polymeric chain consisting of N links, each of length ρ. The chain is stretched by a mass M hanging from one end of it. We can treat each link as a two-level system with energy levels ε = −Mgρ and ε = +Mgρ, depending on how the link is oriented (see the figure). The difference in energy is ultimately due to the potential energy of the mass in the gravitational field.

[Figure: a chain of N links of length ρ with the mass M hanging at the lower end; a link pointing down has energy ε = −Mgρ, while a link pointing up has energy ε = +Mgρ.]

Derive an expression for the total length of the chain as a function of N, M, ρ, and T. Show that, in the limit of a large temperature,

$$L \approx \frac{N M g \rho^2}{k_B T}.$$

Thus, if the temperature is increased the polymer chain curls up and becomes shorter, thereby increasing its entropy.

5.10 Free energy of a two-state system
The Helmholtz free energy F is a thermodynamic state variable that is often useful. The free energy is related to the internal energy, temperature, and entropy through

$$F = U - TS.$$

It is also possible to calculate F once the partition function Z is known, since

$$F = -k_B T \ln Z.$$

(a) Find an expression for the free energy as a function of temperature for a system with two states, one at energy 0 and one at energy ε.
(b) From the free energy, find expressions for the internal energy and the entropy of the system. Sketch a graph that shows how U varies with temperature.

5.11 Free energy and entropy of a harmonic oscillator
(a) Show that the free energy of a harmonic oscillator, neglecting the zero-point energy ħω/2, is

$$F = k_B T \ln\left(1 - e^{-\hbar\omega/k_B T}\right).$$

(b) Show that the entropy of the harmonic oscillator is

$$S = \frac{\hbar\omega/T}{e^{\hbar\omega/k_B T} - 1} - k_B \ln\left(1 - e^{-\hbar\omega/k_B T}\right).$$

What result does this yield when k_B T ≫ ħω? How do you interpret it?

5.12 Proof of Eq. (5.24)
Verify that the Helmholtz free energy given by Eq. (5.23) satisfies Eq. (5.24).

5.13 Entropy in the canonical ensemble
Consider a system that can exchange energy with a reservoir at temperature T. The system can be in a number of different states s with probabilities given by the Boltzmann distribution,

$$P_s = \frac{e^{-\varepsilon_s/k_B T}}{Z}.$$


Use the relation between the Helmholtz free energy F and the partition function Z to show that the entropy of the system can be expressed in terms of the probabilities as

$$S = -k_B \sum_s P_s \ln P_s.$$

What is required to have S = 0?

5.14 Free energy in the canonical ensemble
Consider again the system discussed in the previous problem. Derive an expression for the Helmholtz free energy F in terms of the energies ε_s, the probabilities P_s, and the temperature T.

5.15 The free energy reaches a minimum for the Boltzmann distribution
Use the expression derived for F in the previous problem to show that F has a minimum when the probabilities P_s are those given by the Boltzmann distribution.
Hint: What change dF of the free energy do you get if you make changes dP_s of the probabilities away from the Boltzmann distribution? (The changes must be such that what you add in one place has to be taken from somewhere else, Σ_s dP_s = 0.)

5.16 Pressure of thermal radiation I
In this chapter you have seen that work performed on a system changes the energy of that system by moving the energy levels while keeping all the occupation numbers, and thus the entropy, unchanged.
(a) Use these ideas to show that for the "gas" of photons confined inside a cavity the pressure can be written

$$p = -\left(\frac{\partial U}{\partial V}\right)_S = -\sum_j \langle n_j \rangle\, \hbar\, \frac{d\omega_j}{dV},$$

where the index j is shorthand for all the mode indices and ⟨n_j⟩ denotes the average number of photons in a certain mode.
(b) Show that

$$\frac{d\omega_j}{dV} = -\frac{\omega_j}{3V}.$$

(c) Show that this leads to

$$p = \frac{U}{3V}.$$

5.17 Pressure of thermal radiation II
(a) Show that the partition function of a photon gas is given by

$$Z_{ph} = \prod_j \frac{1}{1 - e^{-\hbar\omega_j/k_B T}},$$

where the product is over the modes j.
(b) The free energy is found directly from the partition function as

$$F = -k_B T \ln Z_{ph} = k_B T \sum_j \ln\left(1 - e^{-\hbar\omega_j/k_B T}\right).$$


Transform the sum to an integral and integrate by parts to find

$$F = -\frac{\pi^2 V (k_B T)^4}{45\, \hbar^3 c^3} = -\frac{U}{3}.$$

(c) By using the relation between the free energy and the pressure, verify that

$$p = \frac{U}{3V}.$$

5.18 Pressure of thermal radiation III
Try to relate the pressure of a photon gas to the change of photon momentum that takes place when a photon with momentum ħk⃗ hits a cavity wall and is absorbed or reflected. Why is the pressure the same regardless of the reflectivity of the cavity walls?


Chapter 6

Chemical potential and Gibbs distribution

6.1 Chemical equilibrium and chemical potential

So far we have only considered situations where the energy of a system can change either because heat is exchanged with the surroundings or because work is done on the system. In this chapter we are going to look at systems that can also exchange particles with the surroundings; i.e. the particle number N will no longer be a constant.

[Figure 6.1: Schematic illustration of how two different gases mix. Initially the left compartment only holds one species of molecules and the right compartment another species. A hole is opened between the two compartments, and the two gases mix. In the process the multiplicity, and hence the entropy, of the macroscopic state increases.]

In reality particles often move from one place to another. Take for example one gas container with only O₂ molecules and another with only N₂ molecules. Once the two containers are connected to each other, gas molecules will diffuse between the containers. Eventually the system reaches a state of chemical equilibrium. Then the concentrations of oxygen and nitrogen molecules are uniform throughout the system; from a macroscopic point of view the system is chemically homogeneous.

The above gas-mixing scenario is well known to us from a lot of daily-life experiences, but how can we describe it in terms of statistical physics? Actually, the theoretical description of this phenomenon is similar to that of thermal equilibrium that we dealt with earlier. It is once again possible to determine the multiplicity of different states, and the result is that the multiplicity of the initial situation in Fig. 6.1 is much smaller than that of the final situation.

When we discussed thermal equilibrium we found that two systems are in equilibrium when they have the same temperature. Furthermore, the macroscopic state in equilibrium corresponded to a maximum of the multiplicity function g for the system as a whole. The same reasoning applies in principle also to chemical equilibrium. In this case one needs yet another state variable to describe the macroscopic state, namely the number of particles N. So far we have only considered systems with a fixed number of particles and have not thought of N as a variable. (In a system with more than one kind of particle, the number N_i of each kind becomes a state variable.) Now the multiplicity function varies with N as well as with the internal energy U (we assume the volume is held constant). Two systems 1 and 2 that can exchange both energy and particles with each other will do so until they reach a macroscopic state that gives maximum multiplicity and entropy. Energy will, as before, be distributed so that

$$\frac{\partial S_1}{\partial U_1} = \frac{\partial S_2}{\partial U_2} \qquad (6.1)$$

at thermal equilibrium. But the entropy must also be maximized with respect to the number of particles in system 1, N₁, given the constraint that the total number of particles N = N₁ + N₂ is constant. This yields

$$dS = \frac{\partial S_1}{\partial N_1}\, dN_1 + \frac{\partial S_2}{\partial N_2}\, dN_2 = 0 \quad\Longrightarrow\quad \frac{\partial S_1}{\partial N_1} = \frac{\partial S_2}{\partial N_2}. \qquad (6.2)$$

At this point we define the chemical potential μ for the particles through

$$\mu = -T \left(\frac{\partial S}{\partial N}\right)_{U,V}. \qquad (6.3)$$

This means that two systems in both thermal and chemical equilibrium have the same temperature and chemical potential (for each particle species in the general case).

Suppose that subsystems 1 and 2 have the same temperature, but that the chemical potential μ₁ in subsystem 1 is larger than the one (μ₂) in subsystem 2, i.e. ∂S₁/∂N₁ < ∂S₂/∂N₂ [remember there is a minus sign in Eq. (6.3)]. Then if one particle is transferred from subsystem 1 to subsystem 2 there is an increase of the total entropy,

$$dS = -\frac{\partial S_1}{\partial N_1} + \frac{\partial S_2}{\partial N_2} = \frac{\mu_1 - \mu_2}{T} > 0. \qquad (6.4)$$

In other words, the entropy of the system increases if particles are transferred from higher to lower chemical potential. This particle transfer will proceed until chemical equilibrium, μ₁ = μ₂, has been established. In the next section we will see how the chemical potential can be calculated for a simple system like the ideal gas.

6.2 Chemical potential and thermodynamics

For a system with a constant number of particles the thermodynamic identity reads

$$dU = T\,dS - p\,dV \quad\Longleftrightarrow\quad dS = \frac{dU + p\,dV}{T}. \qquad (6.5)$$

When the number of particles can vary there is obviously at least one more term in the expression for dS, namely −(μ/T)dN; thus

$$dS = \frac{1}{T}\,dU + \frac{p}{T}\,dV - \frac{\mu}{T}\,dN, \qquad (6.6)$$


since this implies

$$\left(\frac{\partial S}{\partial N}\right)_{U,V} = -\frac{\mu}{T}, \qquad (6.7)$$

our definition of μ. Now Eq. (6.6) can be rewritten

$$dU = T\,dS - p\,dV + \mu\,dN, \qquad (6.8)$$

which yields an alternative expression for the chemical potential,

$$\mu = \left(\frac{\partial U}{\partial N}\right)_{S,V}. \qquad (6.9)$$

Furthermore, the definition of the Helmholtz free energy, F = U − TS, means that

$$dF = dU - T\,dS - S\,dT = -S\,dT - p\,dV + \mu\,dN, \qquad (6.10)$$

thus

$$\mu = \left(\frac{\partial F}{\partial N}\right)_{T,V}. \qquad (6.11)$$

For practical calculations Eq. (6.11) is the most useful expression. Let us demonstrate how it can be used to calculate μ for an ideal gas. The free energy for a monatomic ideal gas is, from Eq. (5.32),

$$F = N k_B T\left[\ln\left(\frac{N}{V n_Q}\right) - 1\right]. \qquad (6.12)$$

The corresponding chemical potential is then

$$\mu = \frac{\partial F}{\partial N} = k_B T \ln\left(\frac{n}{n_Q}\right); \qquad (6.13)$$

thus the larger the particle concentration, the larger the chemical potential. Since we have already concluded that the particle concentration in a classical ideal gas must be considerably lower than the quantum concentration, the chemical potential obtained from Eq. (6.13) must evidently be negative.
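Continuing the numerical example from Chapter 5, a short sketch (an illustration, not from the original notes) of Eq. (6.13) for N₂ at room temperature and atmospheric pressure:

```python
import math

kB, hbar, u = 1.380649e-23, 1.054571817e-34, 1.66053907e-27  # SI units

T, m = 293.0, 28.0 * u                # room temperature; mass of an N2 molecule
n_Q = (m * kB * T / (2 * math.pi * hbar**2)) ** 1.5
n   = 1.013e5 / (kB * T)              # concentration at 1 atm

mu = kB * T * math.log(n / n_Q)       # Eq. (6.13)
print(mu / 1.602e-19)                 # ~ -0.4 eV: negative, since n << n_Q
```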

6.3 Internal and external chemical potential

Equation (6.13) shows that the chemical potential of an ideal gas is a function of particle concentration and temperature. Then an ideal gas in thermal and chemical equilibrium should have a uniform concentration. However, we know that this is not always the case. The air of the atmosphere is certainly a nearly ideal gas, but it is not homogeneous. The air at the top of Mount Everest is much thinner than the air at sea level. Does this mean that the atmosphere is not in chemical equilibrium?

To resolve this question we have to remember that our treatment of the ideal gas has only taken into account the kinetic energy of the gas atoms or molecules. But the atmospheric air is also affected by the gravitational field of the Earth. Thus, for every gas particle there is an additional contribution Mgz to the internal energy, and also to the free energy since F = U − TS. Hence the chemical potential also contains an external contribution μ_ext in addition to the internal chemical potential μ_int given by Eq. (6.13). [We should point out here that Eq. (6.13) only deals with an ideal gas of completely structureless particles. For real gas particles there are other contributions to the internal chemical potential due to rotational motion, vibrational motion, chemical bonds, etc. The chemical potential is after all very relevant to chemistry, even though we do not deal further with this aspect here.] The external chemical potential depends on the space coordinate (i.e. the height above sea level, or whatever zero level one chooses for the gravitational potential),

$$\mu_{ext}(z) = Mgz, \qquad (6.14)$$

where M is the mass of the particles, and z the height above sea level.

[Figure 6.2: "Atmosphere" generated with random numbers (4000 particles), with the altitude shown from 0 to 50 km. The concentration of particles follows n(z) = n(0)e^{−z/z₀} with z₀ = 7900 m, corresponding to a particle mass equal to that of O₂ molecules.]

Let us see what this means for the atmosphere; we assume that the temperature is independent of z. The total chemical potential can now be written

$$\mu_{tot}(z) = \mu_{int}(z) + \mu_{ext}(z) = k_B T \ln\left(\frac{n(z)}{n_Q}\right) + Mgz. \qquad (6.15)$$

[For molecular gases there is a contribution to the free energy and the chemical potential from rotational motion; however, this contribution is independent of the position and will not change the conclusions we are about to draw.] For the atmosphere to be in chemical equilibrium, μ_tot must be independent of z, μ_tot(z) = μ_tot(0). Inserting this condition into Eq. (6.15) we get

$$k_B T \ln\left(\frac{n(z)}{n_Q}\right) + Mgz = k_B T \ln\left(\frac{n(0)}{n_Q}\right), \qquad (6.16)$$

and this relation can be simplified to read

$$n(z) = n(0)\, e^{-Mgz/k_B T} = n(0)\, e^{-z/z_0}, \qquad (6.17)$$

where z₀ = k_B T/(Mg). An oxygen molecule has the mass M ≈ 32 u, which with T = 300 K yields z₀ ≈ 7900 m. This means that the concentration of oxygen at an altitude of 3000 m is about 68 % of that at sea level, and at the summit of Mount Everest (8848 m) n is down to about one third of n(0).
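These numbers are easy to reproduce; a minimal sketch (not part of the original notes):

```python
import math

kB, u, g = 1.380649e-23, 1.66053907e-27, 9.81   # SI units

M, T = 32.0 * u, 300.0                # O2 molecule, isothermal atmosphere
z0 = kB * T / (M * g)                 # scale height in Eq. (6.17)
print(z0)                             # ~7900 m

for z in (3000.0, 8848.0):            # 3000 m and the summit of Mount Everest
    print(z, math.exp(-z / z0))       # ~0.68 and ~0.33 of the sea-level value
```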

Whenever there is an external potential present, the chemical potential has an external contribution. This contribution will cause particles to flow towards the regions of space where the potential energy is lower. The internal contribution to μ, on the other hand, is largest where the particle concentration is largest; thus μ_int strives to even out differences in particle concentration. The resulting distribution of particles in space strikes a balance between particle drift as a result of the forces associated with the external potential, and diffusion that tries to even out concentration differences.

6.4 Derivation of the Gibbs distribution

We next study the effects the chemical potential has on a microscopic system. We are basically going to repeat the steps of the derivation of the Boltzmann distribution from Chapter 3, however, now for a microscopic system that can exchange not only energy, but also particles, with a large reservoir.

[Figure 6.3: The small system S can exchange both energy and particles with the reservoir R. The probability of finding S in a particular state with a given energy and particle number depends on the multiplicity for the reservoir in the corresponding situation: with S in a state with energy ε₁ and N₁ particles, the reservoir has energy U_TOT − ε₁, N_TOT − N₁ particles, and multiplicity g(U_TOT − ε₁, N_TOT − N₁), and correspondingly for a state with energy ε₂ and N₂ particles.]

The reasoning is very similar to that previously used in Chapter 3. We take a large total system that can be split into a reservoir R and a small system S, see Fig. 6.3. The total, closed system R + S has the energy U_TOT and contains N_TOT particles. We compare the probabilities of the system S being in two different distinct states, state 1 with energy ε₁ and N₁ particles, and state 2 with energy ε₂ and N₂ particles. The ratio between the two probabilities is given by the ratio between the multiplicity functions for the reservoir in the two situations, since we compare two unique states of the system S, so that g_S = 1 in both cases. Thus

$$\frac{P(\varepsilon_1, N_1)}{P(\varepsilon_2, N_2)} = \frac{g_R(U_{TOT} - \varepsilon_1,\, N_{TOT} - N_1)}{g_R(U_{TOT} - \varepsilon_2,\, N_{TOT} - N_2)}, \qquad (6.18)$$

and as before the multiplicity is related to the entropy through S = k_B ln g, so Eq. (6.18) gives

$$\frac{P(\varepsilon_1, N_1)}{P(\varepsilon_2, N_2)} = \exp\!\left\{\frac{S_R(U_{TOT} - \varepsilon_1,\, N_{TOT} - N_1) - S_R(U_{TOT} - \varepsilon_2,\, N_{TOT} - N_2)}{k_B}\right\}. \qquad (6.19)$$

Since the reservoir is assumed to be very large, the entropy values entering Eq. (6.19) can be found by a Taylor expansion,

$$S_R(U_{TOT} - \varepsilon,\, N_{TOT} - N) \approx S_R(U_{TOT}, N_{TOT}) - \varepsilon\, \frac{\partial S_R}{\partial U} - N\, \frac{\partial S_R}{\partial N}. \qquad (6.20)$$

The partial derivatives in Eq. (6.20) are the ones that appeared in the definitions of (reservoir) temperature and chemical potential, respectively,

$$\frac{\partial S_R}{\partial U} = \frac{1}{T}, \qquad \frac{\partial S_R}{\partial N} = -\frac{\mu}{T}. \qquad (6.21)$$


When this is inserted into Eq. (6.19) we get, after some simplifications, that the ratio between the two probabilities is

$$\frac{P(\varepsilon_1, N_1)}{P(\varepsilon_2, N_2)} = \frac{e^{(N_1\mu - \varepsilon_1)/k_B T}}{e^{(N_2\mu - \varepsilon_2)/k_B T}}. \qquad (6.22)$$

Thus, the probability of finding the small system in a state with energy ε and N particles is proportional to exp[(Nμ − ε)/k_B T]. This exponential is called a Gibbs factor (analogous to the Boltzmann factor). To find the absolute probability for a state, the Gibbs factor must be divided by the sum of Gibbs factors over all possible states that the small system can be found in. This sum is called the Gibbs sum or the grand sum; we denote it Ƶ here. The Gibbs sum can be written

$$\mathcal{Z} = \sum_{N=0}^{\infty} \sum_{s(N)} e^{(N\mu - \varepsilon_{s(N)})/k_B T}. \qquad (6.23)$$

Note that the structure of this sum is more complicated than that of the partition function Z; the sum here runs over all possible numbers of particles N one can find in the system, and for every N there is a sum over all possible energies. This set of energies in general differs from one particle number N to another.

The probability of finding the system in a state with energy ε and N particles is now

$$P(\varepsilon, N) = \frac{1}{\mathcal{Z}}\, e^{(N\mu - \varepsilon)/k_B T}. \qquad (6.24)$$

This is known as the Gibbs distribution or the grand canonical ensemble.

6.5 The Fermi-Dirac and Bose-Einstein distributions

We are going to apply the Gibbs distribution to two special and very fundamental cases. All particles found in Nature can be divided into two different categories in terms of statistics: fermions and bosons. Fermions have half-integer spin: 1/2, 3/2, 5/2, etc. For simple particles, 1/2 is the typical case. According to the Pauli principle two fermions can never occupy the same quantum state; fermion states are always occupied by either 0 or 1 particle. Bosons have integer spin; they are not subject to the Pauli principle. A quantum state can be occupied by an arbitrary (non-negative) number of bosons and, moreover, if there are already bosons in a certain state, the probability for other identical bosons to undergo quantum transitions into that state increases. The elementary particles building up matter, protons, neutrons, and electrons, are all fermions. The fact that they do not occupy the same state is instrumental for the stability of matter; the Pauli principle prevents the particles from collapsing into one state. Photons, the elementary "particles" of the electromagnetic field that we discussed in Chapter 4, are bosons. The fact that photons are bosons is essential for stimulated light emission, the physical principle behind lasers.

Fermi-Dirac. We begin by considering the distribution function for non-interacting fermions. By non-interacting we mean that the fermions do not affect each other by mutual forces. Suppose we have a large reservoir with temperature T and chemical potential μ for the fermions we are interested in. We want to know the probability of finding a fermion in a particular state with energy ε. To this end we choose the system S to be that very energy level and nothing more. The grand sum then only contains two terms, for there are only two possible states that S can be found in. The state is either empty or contains one particle with energy ε; thus

$$\mathcal{Z} = 1 + e^{(\mu - \varepsilon)/k_B T}. \qquad (6.25)$$


The probability that the state is occupied by a particle is given by the second term divided by Ƶ,

$$P(1) = \frac{e^{(\mu - \varepsilon)/k_B T}}{1 + e^{(\mu - \varepsilon)/k_B T}} = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1}. \qquad (6.26)$$

This probability function is called the Fermi-Dirac distribution function, and is denoted f(ε),

$$f(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1}. \qquad (6.27)$$

Bose-Einstein. If we instead consider the case of non-interacting bosons, the starting point is the same in that we choose the system S to be a single state with energy ε. The Gibbs sum will, however, in this case contain an infinite number of terms, since there is no limit to the number of bosons that can occupy the state. With no particles the energy is 0, with one particle it is ε, with 2 particles it is 2ε, etc. This yields

$$\mathcal{Z} = \sum_{N=0}^{\infty} e^{N(\mu - \varepsilon)/k_B T} = \frac{1}{1 - e^{(\mu - \varepsilon)/k_B T}}, \qquad (6.28)$$

where the last equality follows because this is a geometric series. The average number of particles occupying the state is an important quantity. It is found from the sum

$$\langle N \rangle = \frac{1}{\mathcal{Z}} \sum_{N=0}^{\infty} N\, e^{N(\mu - \varepsilon)/k_B T}, \qquad (6.29)$$

which can be evaluated by the same technique that we employed when calculating the expectation value for the energy of a harmonic oscillator (see Appendix A). The result here reads

$$f(\varepsilon) = \langle N \rangle = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1}. \qquad (6.30)$$

This is known as the Bose-Einstein distribution function.

We observe several things worth pointing out. First of all, the Bose-Einstein and Planck distribution functions are very similar. In fact the Planck distribution is a Bose-Einstein distribution with μ = 0. A chemical potential does not make much sense for particles like photons, which can be created and destroyed in various processes; the number of photons is not constant.

We also see that the algebraic forms of the Fermi-Dirac and Bose-Einstein distributions are similar; the sign in front of the 1 in the denominator is all that changes. Still, this change makes quite a large difference between the functions, which are plotted in Fig. 6.4.

The difference between the two distributions shows up for states with an energy near and below the chemical potential. Since the Bose-Einstein distribution f(ε) diverges when ε approaches μ, the chemical potential for bosons is always smaller than the energy of the lowest energy level (the number of particles is a finite number). In the case of fermions the chemical potential can very well be larger than the energy of some energy levels. This is for example the case for the electrons that form a "gas" inside a metal or in a white dwarf star, or the neutrons in a neutron star. In these cases the particle concentration (of electrons or neutrons) is so large that the condition n ≪ n_Q is no longer satisfied.
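The qualitative differences discussed here are easy to see by tabulating the two distribution functions together with their common classical limit (a short illustration, not from the original notes):

```python
import math

def f_FD(x): return 1.0 / (math.exp(x) + 1.0)   # x = (eps - mu)/(k_B T)
def f_BE(x): return 1.0 / (math.exp(x) - 1.0)   # only defined for x > 0
def f_cl(x): return math.exp(-x)                # classical (Boltzmann) limit

for x in (0.25, 1.0, 2.0, 4.0):
    print(x, f_FD(x), f_BE(x), f_cl(x))
# For x >> 1 all three agree; near x = 0 the Bose-Einstein function diverges,
# while the Fermi-Dirac function saturates below 1.
```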


[Figure 6.4: The left panel shows a comparison between the Fermi-Dirac and Bose-Einstein distribution functions f(ε) plotted against (ε − μ)/k_B T, together with the classical limit e^{−(ε−μ)/k_B T}. The right panel shows plots of the Fermi-Dirac function at T = 0, 300 K, and 1000 K; the transition region around ε = μ has a width of about 4 k_B T.]

6.6 The Fermi gas

Let us take a closer look at the fermion case. We return to the particle-in-a-box problem from Chapter 5 and see what happens if the particles are electrons. If they are confined to move in a cubic box with side L, the energy levels can be written

$$\varepsilon(n_x, n_y, n_z) = \frac{\pi^2\hbar^2}{2mL^2}\,(n_x^2 + n_y^2 + n_z^2). \qquad (6.31)$$

In addition to the three quantum numbers n_x, n_y, and n_z, a spin quantum number m_s, equal to either 1/2 or −1/2, is needed to completely describe the state of an electron.

The Pauli principle states that two electrons cannot have identical sets of the four quantum numbers n_x, n_y, n_z, and m_s. A consequence of this is that even if there are many electrons in the box, only two of them can be in the lowest energy level, n_x = n_y = n_z = 1; one has m_s = 1/2, the other m_s = −1/2. The other electrons are forced to occupy higher energy levels.

Now, suppose that the system is at zero temperature, so that the total energy of all the electrons takes its lowest possible value. We wish to calculate in which energy level we then find the electron with the highest one-electron energy. To do this we write down a sum over the quantum numbers yielding the total number of electrons $N$,

N = \sum_{n_x}\sum_{n_y}\sum_{n_z}\sum_{m_s} f(\varepsilon_{n_x n_y n_z}) = 2\sum_{n_x}\sum_{n_y}\sum_{n_z} f(\varepsilon_{n_x n_y n_z}).     (6.32)

The first factor of 2 accounts for the two spin directions. As with the ideal gas we assume that the box is so large that the energy levels are closely spaced, and the sums can be transformed to integrals over $n_x$, $n_y$, and $n_z$, which are then transformed into an integral over spherical coordinates (the factor 1/8 restricts the integral to positive $n_x$, $n_y$, $n_z$),

N = 2\,\frac{1}{8}\int_0^{\infty} 4\pi n^2\, f(\varepsilon(n))\, dn = \pi\int_0^{\infty} n^2\, f(\varepsilon(n))\, dn.     (6.33)


By yet another change of variables, replacing $n$ by the energy $\varepsilon = \hbar^2\pi^2 n^2/(2mL^2)$, we get

N = \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^{\infty}\sqrt{\varepsilon}\, f(\varepsilon)\, d\varepsilon.     (6.34)

In the same way as we introduced a density of photon modes in Chapter 4, we can introduce an electron density of states,

D(\varepsilon) = \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\varepsilon}.     (6.35)

If $T = 0$, $f$ is 1 for all energies below $\mu$, and 0 for all energies above $\mu$. In this case, the energy corresponding to the highest occupied single-electron level is called the Fermi energy, $\varepsilon_F$ ($\mu = \varepsilon_F$ at $T = 0$). Equation (6.34) then yields

" � � �� � � � �� � � ��� � � ���� � � � � � �

�� � � � �� � � ��� � �

��� �� � (6.36)

and from this

� � � �� �� � � � � � "� � ��� � � (6.37)

Here $V = L^3$ is the volume of the box, so the Fermi energy is set by the electron concentration $N/V$. The electrons in a metal like sodium (Na) behave almost like non-interacting fermions. The electron concentration is $N/V = 2.64 \times 10^{28}$ m$^{-3}$, and this yields a Fermi energy $\varepsilon_F \approx 3.23$ eV. This is certainly much more than $k_B T$ at room temperature ($\approx 25$ meV). Thus, the Pauli principle "lifts" electrons to much higher energy levels than thermal excitation at room temperature can do.
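A minimal numerical check of Eq. (6.37) for the sodium numbers quoted above (standard SI values of $\hbar$, $m_e$, and the electron volt are assumed):

import numpy as np

hbar = 1.0546e-34    # J s
m_e = 9.109e-31      # kg
eV = 1.602e-19       # J
n = 2.64e28          # electron concentration of Na, m^-3

eF = hbar**2 / (2 * m_e) * (3 * np.pi**2 * n) ** (2.0 / 3.0)   # Eq. (6.37)
print(eF / eV)       # ~3.23 eV, roughly 130 times k_B T at room temperature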

[Figure 6.5 appears here: two panels showing $f(\varepsilon) \times$ normalized density of states versus $\varepsilon$ (eV), at $T = 0$ and $T = 1000$ K, with the DOS drawn and the occupied states shaded.]

Figure 6.5: Plot of the density of states (normalized so that $D(\varepsilon_F) = 1$) and the product $f(\varepsilon)D(\varepsilon)$ for non-interacting fermions at $T = 0$ and $T = 1000$ K. The Fermi energy $\varepsilon_F = 3.23$ eV was calculated using the electron concentration of Na. The filled area represents occupied states.

With a non-zero temperature some electrons are thermally excited; the Fermi-Dirac distribution determines the probability for this thermal excitation. Since $f(\varepsilon)$ is practically equal to 1 far below the chemical potential and $\approx 0$ far above $\mu$, thermal excitations will only affect the electrons with energies near $\mu$ (i.e. near $\varepsilon_F$) as long as the temperature is not too large. Figure 6.5 shows the density of states for electrons in Na and the occupied states at temperatures 0 K and 1000 K. At 0 K the borderline between occupied and unoccupied states is absolutely sharp. At 1000 K some states below the Fermi energy are empty and some states above $\varepsilon_F$ are occupied, but the changes are limited to a relatively small energy interval. Electrons with an energy far below $\varepsilon_F$ cannot be excited thermally because other electrons "are in the way."

If the temperature is increased further, so that $k_B T \sim \varepsilon_F$, the chemical potential will decrease a lot; $\mu$ adjusts so that $\langle N \rangle = N$. (For metallic electron concentrations the temperature must be of the order $10^4$ K before this is a major effect, so the following discussion is a bit academic in that context.) Eventually, $\mu$ drops to a value far below 0, and the Fermi distribution function can be approximated by

f(\varepsilon) = \frac{1}{e^{(\varepsilon-\mu)/k_B T} + 1} \approx e^{-(\varepsilon-\mu)/k_B T} = e^{\mu/k_B T}\, e^{-\varepsilon/k_B T},     (6.38)

provided that $e^{(\varepsilon-\mu)/k_B T} \gg 1$. This is actually the same distribution function as we found for the classical ideal gas. Let us calculate what value the chemical potential should take in this limit in order to have the right expectation value for the number of particles,

N = \langle N \rangle = \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^{\infty}\sqrt{\varepsilon}\; e^{(\mu-\varepsilon)/k_B T}\, d\varepsilon.     (6.39)

This integral can be calculated by the variable substitution $x = \varepsilon/k_B T$, which yields (the remaining integral $\int_0^\infty \sqrt{x}\,e^{-x}\,dx$ equals $\sqrt{\pi}/2$)

N = \frac{V}{2\pi^2}\left(\frac{2m k_B T}{\hbar^2}\right)^{3/2} e^{\mu/k_B T}\int_0^{\infty}\sqrt{x}\, e^{-x}\, dx = 2V\left(\frac{m k_B T}{2\pi\hbar^2}\right)^{3/2} e^{\mu/k_B T},     (6.40)

and from this we get

\mu = k_B T\left[\ln\!\left(\frac{N}{V n_Q}\right) - \ln 2\right],     (6.41)

where

n_Q = \left(\frac{m k_B T}{2\pi\hbar^2}\right)^{3/2}     (6.42)

as before. In other words, except for the last term $-k_B T \ln 2$, we recover the old expression for the chemical potential for the ideal gas in Eq. (6.13). We note that the requirement that $e^{(\varepsilon-\mu)/k_B T} \gg 1$ for all states means that $\mu$ must be negative and lie several units of $k_B T$ below the energy zero. This can only be achieved if the particle concentration $N/V$ is much less than the quantum concentration $n_Q$. Thus the two conditions $e^{(\varepsilon-\mu)/k_B T} \gg 1$ and $N/V \ll n_Q$ are consistent with each other, and both are basic requirements in order to have an ideal gas.

Where did the last term in Eq. (6.41) come from? The reason we do not get exactly the same result as in Eq. (6.13) is that the particles we deal with here have spin, whereas the ideal gas particles in Chapter 5 were assumed to be spinless. The spin degree of freedom gives a shift $-k_B T \ln 2$ in the chemical potential.
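To get a feeling for the numbers in Eqs. (6.41) and (6.42), here is a rough evaluation for a genuinely classical gas; helium at room temperature and atmospheric pressure is used as an illustrative choice (the values are not from the notes):

import numpy as np

hbar = 1.0546e-34    # J s
kB = 1.381e-23       # J/K
m = 6.646e-27        # kg, mass of a helium-4 atom
T = 300.0            # K
p = 101325.0         # Pa

n = p / (kB * T)                                   # concentration, ideal gas law
nQ = (m * kB * T / (2 * np.pi * hbar**2)) ** 1.5   # quantum concentration, Eq. (6.42)
mu = kB * T * (np.log(n / nQ) - np.log(2))         # Eq. (6.41)
print(n / nQ)         # ~3e-6, so N/V << n_Q and the classical limit applies
print(mu / (kB * T))  # ~ -13, i.e. mu lies many k_B T below the energy zero

Both conditions discussed above are then comfortably satisfied.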


6.7 Problems for Chapter 6

6.1 Centrifuge
A circular cylinder of radius $R$ rotates about the long axis with angular velocity $\omega$. The cylinder contains an ideal gas of atoms of mass $M$ at temperature $T$. Find an expression for the dependence of the concentration $n(r)$ on the radial distance $r$ from the axis, in terms of $n(0)$ (the concentration on the axis). Take the internal chemical potential as for an ideal gas.

6.2 Gibbs sum for a two-level system
Consider a system that may be unoccupied with zero energy, or occupied by one particle in either of two states, one of zero energy, the other of energy $\varepsilon$.
(a) Show that the Gibbs sum for this system is

\mathcal{Z} = 1 + e^{\mu/k_B T} + e^{(\mu-\varepsilon)/k_B T}.

(b) Show that the average number of particles in the system is

\langle N \rangle = \frac{e^{\mu/k_B T} + e^{(\mu-\varepsilon)/k_B T}}{\mathcal{Z}}.

(c) Find an expression for the energy expectation value of the system.

6.3 Derivative of Fermi-Dirac function
Show that $df/d\varepsilon$ evaluated at the chemical potential, $\varepsilon = \mu$, has the value $-1/(4k_B T)$.

6.4 Symmetry of filled and vacant orbitals
Let $f_1$ be the value of the Fermi-Dirac function at the energy $\varepsilon = \mu + \delta$, and $f_2$ the value at $\varepsilon = \mu - \delta$. Show that

f_1 + f_2 = 1.

6.5 Density of states in two dimensions
Show that the density of states for a free electron that moves in two dimensions on a square area $A$ is

D(\varepsilon) = \frac{mA}{\pi\hbar^2}.

6.6 Energy and pressure of Fermi gas at $T = 0$
(a) $N$ fermions are confined to a volume $V$ and held at zero temperature. Show that the internal energy of this system is $U = \frac{3}{5}N\varepsilon_F$.
(b) Show that the pressure of the system is

p = -\frac{\partial U}{\partial V} = \frac{\hbar^2}{5m}\left(3\pi^2\right)^{2/3}\left(\frac{N}{V}\right)^{5/3}.

6.7 Fermi gases in astrophysics
(a) Given the mass of the Sun, $2\times 10^{30}$ kg, estimate the number of electrons in the Sun.
(b) In a white dwarf star this number of electrons may be ionized and contained in a sphere of radius $2\times 10^{7}$ m. Find the Fermi energy of the electrons in electron volts.
(c) If the above number of electrons were contained within a pulsar of radius 10 km, show that the Fermi energy would be $\approx 10^{8}$ eV.

6.8 Mass-radius relationship for white dwarf stars
Consider a white dwarf of mass $M$ and radius $R$. The electrons form a degenerate Fermi gas (i.e. the chemical potential $\mu$ is positive). Due to gravity there will be a very high pressure inside the star; the pressure at the very center will be

p = \frac{2\pi}{3}\,G\rho^2 R^2,

where $G$ is the gravitational constant and $\rho$ is the mass density. Unless this gravitational pressure is balanced by something else, the star would collapse. The balancing pressure is provided by the degenerate electron gas.
(a) Show that pressure balance leads to the condition $M^{1/3}R = \mathrm{const.}$; thus the larger the mass, the smaller the radius of a white dwarf.
(b) A white dwarf is an old star that has used up most of its fuel (hydrogen). Estimate the white-dwarf radius of a star with the same mass as the Sun, taking into account that the most common nuclei in a white dwarf are C and O.


Chapter 7

Phase transitions

7.1 Introduction

Almost all substances can exist in different forms depending on the temperature and the pressure. In daily life we often encounter the phase transformations of water. When the outdoor temperature falls below 273.15 K, water in small pools freezes to ice; it does not rain, instead, in case of precipitation, there will be snow on the ground; and if the cold weather persists for several days, there will be ice on the surfaces of lakes. Moreover, in our kitchens we boil water so that the liquid is transformed to gas form. (One usually uses the word vapor, or in the case of H$_2$O, steam, to describe a gas that is in equilibrium with the liquid form of the substance.)

From our experience with water we also know that for a phase transformation to take place, usually quite a lot of heat must be supplied, i.e. even if the temperature of the substance does not change, its internal energy changes a lot. If one has 1 kg of ice at 0 °C, it takes 333 kJ to melt it, another 419 kJ to increase the temperature of the liquid from 0 °C to 100 °C, and finally as much as 2260 kJ to evaporate all the liquid. Thus, the average energies of the water molecules in the gas phase and in the liquid phase are quite different. When water is transformed from liquid to gas, the volume increases substantially and the distance between the molecules becomes much larger. This corresponds to an increase in potential energy for the molecules. Water molecules are polar, i.e. they are positively charged on one side and negatively charged on the other. In the liquid the molecules tend to arrange themselves so that the positive end of one molecule is near the negative end of another. This lowers the potential energy of the liquid. To form a gas, the molecules must be brought apart and the potential energy is then increased.

The liquid-to-gas transformation for water is illustrated schematically in Fig. 7.1. One thing that becomes apparent looking at this figure is that not all the heat supplied during the transformation goes into an increase of the internal energy. Instead, since the volume of the vapor is much larger than the volume of the water, the water-vapor system does work. This work equals $W = p(V_g - V_l)$. Thus the heat supplied, the latent heat $L$, is, per particle,

\ell = \frac{L}{N} = (u_g - u_l) + p(v_g - v_l),     (7.1)

where $u_g$ and $u_l$, and $v_g$ and $v_l$, are the internal energies and volumes per particle in the gas and liquid, respectively.
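As a rough numerical sketch of the two terms in Eq. (7.1), per kilogram rather than per particle (the volume and latent-heat values are handbook numbers for water at 100 °C, assumed here, and reappear in Problem 7.1):

p = 101325.0      # Pa, atmospheric pressure
v_l = 0.001044    # m^3 per kg of liquid water at 100 C
v_g = 1.6729      # m^3 per kg of steam at 100 C
L = 2260e3        # J per kg, latent heat of vaporization

W = p * (v_g - v_l)   # work done by the expanding water-vapor system
print(W / 1e3)        # ~169 kJ per kg
print(W / L)          # ~0.075: only ~7.5% of the latent heat goes into work

The remaining 93 % or so of the latent heat goes into the increase of internal energy.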

In addition to phase transformations in which a substance changes from solid to liquid form or from liquid to gas form, there are also other, less obvious, phase transformations. It is, for example, not unusual that a solid changes its crystal structure (its internal order) at certain temperatures.

[Figure 7.1 appears here: five panels (a)-(e) showing water heated at constant pressure $p$ with heat $Q$ supplied in each step, at $T < T_b$, three stages at $T = T_b$ with liquid (l) turning into gas (g), and finally $T > T_b$.]

Figure 7.1: Water is heated at constant pressure. When the temperature reaches the boiling temperature $T_b$, liquid and gas can coexist and some of the water is transformed into vapor. At the same time the volume increases. Eventually, when enough heat has been supplied, all the water has been transformed into vapor. If further heat is supplied the temperature of the gas rises above $T_b$.

Ice can for example exist in about 10 different structures (to obtain most of these structures requires rather large pressures).

In other phase transformations internal properties, such as the magnetization, change. Simply speaking, each of the atoms in a ferromagnetic material like iron carries a spin. At high temperatures these spins point in different directions, so that the total magnetization of a sample of iron is zero. However, once the temperature is lowered below the Curie temperature (1043 K for iron), the spins of the different atoms tend to point more or less in the same direction, and a piece of iron spontaneously becomes magnetic; it has a net magnetization.

7.2 Phase diagrams

Let us return to the phase transformations of water and discuss them in more detail. Everything we said above assumed that the pressure was the normal atmospheric pressure, $\approx 101.3$ kPa, at sea level. The boiling point of water then lies at 100 °C. But as you probably know, water boils at lower temperatures than that at high altitude, where the pressure from the air is lower (cf. the discussion in Chapter 6). If the pressure is 2/3 of an atmosphere, the boiling point is down to about 90 °C, and if the pressure is just 1/3 of an atmosphere it is not more than about 70 °C. If, on the other hand, the pressure is larger than 1 atmosphere, the boiling temperature exceeds 100 °C.

One can summarize these facts in a so-called phase diagram, where temperature is given on the $x$ axis, while the $y$ axis gives the pressure. We then draw a curve corresponding to the boiling temperature as a function of the pressure. This is done for water in Fig. 7.2.

because the temperature and pressure is assumed to be constant throughout the system. In mostpoints only one phase can exist, the one in which the water molecules has the lowest chemicalpotential. On the curves dividing two phases the chemical potential for water molecules is thesame in both of them; the two phases can coexist in chemical equilibrium at these particularcombinations of temperature and pressure.


[Figure 7.2 appears here. Left panel: $p$ (kPa) versus $T$ (°C) over a wide range, showing the ice, liquid water, and water vapor regions, the critical point, and two marked states A and B. Right panel: detail of the low-pressure range around the triple point, with the coexistence curves $\mu_s = \mu_l$, $\mu_l = \mu_g$, and $\mu_s = \mu_g$ separating the regions where one chemical potential is the lowest.]

Figure 7.2: The phase diagram for water. The left panel gives an overview over a large range of temperatures and pressures, while the right panel gives a detailed view of the low-pressure range.

The curves in the left panel show at what temperature and pressure ice melts to water and when water boils. Thus, we have three different regions in the diagram, one for each of the three different states in which one can find H$_2$O. Apart from confirming what we know from our everyday experience, there are a few points of special interest that can be found in Fig. 7.2.

The lines dividing solid from liquid, solid from gas, and liquid from gas meet in one point, the so-called triple point at $T_{tr} = 0.01$ °C and $p_{tr} = 0.6113$ kPa. At these particular conditions all three phases can coexist in thermal and chemical equilibrium. Figure 7.3 illustrates how the triple point conditions for water can be realized if an initially completely evacuated vessel is partially filled with water and then cooled until some, but not all, of the water has frozen to ice.

[Figure 7.3 appears here: three panels (a)-(c) showing a sealed vessel with water and vapor at $T > T_{tr}$, $p > p_{tr}$, heat $Q$ being extracted, and finally ice, water, and vapor coexisting at $T = T_{tr}$, $p = p_{tr}$.]

Figure 7.3: Illustration of how the triple point conditions for water can be realized.

The line dividing solid from liquid starts at the triple point and extends from there essentially to arbitrarily large pressures. This line has a negative slope: the melting temperature decreases from 0.01 °C at the triple point to 0 °C at atmospheric pressure. This behavior is unusual. For most substances this line has a positive slope. Then one can bring the substance from the liquid to the solid form at a constant temperature by increasing the pressure. With ice and water things work in the opposite way. By applying a pressure on a piece of ice, one can melt it even if the temperature does not change. This has to do with another unusual property of water: its volume increases when it freezes. For other substances usually the volume decreases upon forming a solid.

The curve dividing the liquid phase from the gas has a positive slope, i.e. a gas can be brought to liquid form by increasing the pressure at a constant temperature. The line dividing liquid from gas does not extend to arbitrarily large pressures and temperatures. Instead it ends at the so-called critical point, $T_c = 374.14$ °C and $p_c = 22090$ kPa. Beyond this point there is no qualitative difference between liquid water and water vapor. Quite naturally this difference does not disappear in an entirely abrupt way at the critical point. Instead, as one approaches $T_c$ and $p_c$, the latent heat involved in transforming liquid to gas decreases steadily and becomes 0 right at the critical point. Likewise the volume difference (per particle) between the two phases decreases as one comes closer to the critical point. Right at the critical point, the two volumes are equal.

As a matter of fact, since there is no difference between a liquid and a gas beyond the critical point, it is possible to transform a substance from liquid to gas twice, without transforming it back to a liquid in between. Suppose that we have water at a temperature and pressure corresponding to point A in Fig. 7.2. By increasing the temperature the liquid can be turned into gas (at point B in the phase diagram). If then both temperature and pressure are first increased and then decreased, so that the state of the substance follows the dashed curve encircling the critical point, we return to the state A without going through a phase transformation from gas to liquid. Of course, one needs to reach rather extreme pressures to have water go through such a process. Under normal conditions there is a very distinct difference between liquid water and water vapor.

The slope of a coexistence curve. It is possible to express the slope $dp/dT$ of a coexistence curve in terms of the latent heat and the change of volume that occurs in connection with the phase transformation. The expression reads

\frac{dp}{dT} = \frac{\ell}{T\,\Delta v},     (7.2)

where $\ell$ is the latent heat per particle and $\Delta v$ (equal to $v_g - v_l$ in the gas-liquid case) is the change of volume per particle. This equation is called the Clausius-Clapeyron law.

To derive Eq. (7.2) one looks at the variation of the chemical potential of both the coexisting phases, for example liquid and gas, along the coexistence curve. While both $\mu_l$ and $\mu_g$ vary with temperature and pressure, they must always equal each other, $\mu_l = \mu_g$, on the curve.

To proceed with the derivation we introduce a new free energy, the so-called Gibbs free energy,

G = U - TS + pV.     (7.3)

In view of the thermodynamic identity, (6.8), we have the differential

dG = dU - T\,dS - S\,dT + p\,dV + V\,dp = -S\,dT + V\,dp + \mu\,dN.     (7.4)

Gibbs free energy can thus be viewed as a function of the three independent variables $T$, $p$, and $N$. Since $G$ is a kind of energy, and just looking at the definition in Eq. (7.3), one realizes that $G$ is proportional to the size of the system. But of the variables $T$, $p$, and $N$, only $N$ has to do with the size of the system. (Such a variable is said to be extensive, in contrast with variables like $p$ and $T$ that are said to be intensive.) As a consequence one can write

G = N\,\varphi(T, p),     (7.5)


where $\varphi(T, p)$ is a function of $T$ and $p$. From Eq. (7.4) we can conclude that

\left(\frac{\partial G}{\partial N}\right)_{T,p} = \varphi = \mu,     (7.6)

thus, the function $\varphi$ is actually the chemical potential, and

G = N\,\mu(T, p).     (7.7)

Equation (7.7) says that the chemical potential is equal to the Gibbs free energy per particle. In analogy with (7.4), the differential $d\mu$ can then be written

d\mu = -s\,dT + v\,dp,     (7.8)

where $s$ is the entropy per particle and $v$ the volume per particle. Now we must write down expressions for $d\mu$ both in the liquid phase and the gas phase,

d\mu_l = -s_l\,(dT)_c + v_l\,(dp)_c, \qquad d\mu_g = -s_g\,(dT)_c + v_g\,(dp)_c,     (7.9)

where the subscript $c$ indicates that the changes in $T$ and $p$ should be made such that we stay on the coexistence curve. Requiring that $d\mu_l = d\mu_g$ now yields the slope of the curve,

\left(\frac{dp}{dT}\right)_c = \frac{s_g - s_l}{v_g - v_l}.     (7.10)

The heat that must be supplied to bring a particle from the liquid phase to the gas phase is $\ell = T(s_g - s_l)$. This yields Eq. (7.2) if the numerator and denominator in (7.10) are multiplied by $T$,

\frac{dp}{dT} = \frac{T(s_g - s_l)}{T(v_g - v_l)} = \frac{\ell}{T\,\Delta v}.     (7.11)

If one instead wants to look at the slope of the coexistence curve for the liquid and solid phase, all that needs to be done is to put the corresponding entropy and volume differences into Eq. (7.11). For the case of water, this brings out the connection between the negative slope of this curve and the volume expansion when water freezes to ice. The latent heat, and thus $s_l - s_s$, is positive, but $v_l - v_s < 0$, and the slope $dp/dT$ is negative.
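A quick numerical sketch of this last point, using handbook values (not given in the notes) for the ice-water transition at 0 °C:

T = 273.15            # K
l_melt = 333e3        # J/kg, latent heat of melting
v_ice = 1 / 917.0     # m^3/kg (density of ice ~917 kg/m^3)
v_water = 1 / 1000.0  # m^3/kg

# Eq. (7.11) per kg instead of per particle; the ratio is unchanged.
dpdT = l_melt / (T * (v_water - v_ice))
print(dpdT / 1e6)     # ~ -13 MPa/K: negative, and very steep

The large magnitude shows why the ice-water line in Fig. 7.2 is nearly vertical: it takes roughly a hundred atmospheres of extra pressure to lower the melting point by one kelvin.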

7.3 Applications to practical situations

The discussion above has always assumed that the system we look at contains only one substance, only one kind of molecules. The container in Fig. 7.1, for example, only contains H$_2$O molecules, either in the gas phase or in the liquid phase.

Normally when we, for example, boil water, we do this in air at atmospheric pressure. This means that the pressure that the liquid water feels from the surroundings equals the atmospheric pressure, $\approx 101.3$ kPa, regardless of the temperature. Then the liquid may, or may not, be in equilibrium with the water vapor present in the atmosphere. The pressure from the vapor is, however, low compared with that coming from the nitrogen and oxygen molecules in the air.

Typically what happens when one puts some water to boil on the stove is that already when the water temperature reaches 40-50 °C, one sees vapor rising from the surface of the water. This is because the chemical potential in the liquid is higher than $\mu$ for water molecules in the air (there are very few water molecules there). The molecules that have evaporated from the liquid will quickly spread into the room and one will never reach an equilibrium situation. Thus, water can be transformed to gas form at atmospheric pressure also at rather low temperatures, much lower than 100 °C. In this way the water in rainwater pools evaporates after a while.

If water can evaporate at temperatures lower than 100 °C, what is then so special about the boiling point? Well, at this temperature the liquid can be in equilibrium with pure water vapor at atmospheric pressure. The concrete result of this is that gas bubbles containing nothing but H$_2$O molecules begin to form inside the liquid. The liquid boils. These issues are illustrated in Fig. 7.4.

[Figure 7.4 appears here: panels showing water below and at the boiling point. Below $T_b$ the partial pressure of H$_2$O in the air ($p_{air} = 1$ atm) is less than the equilibrium vapor pressure $p_{vap}(T)$; at $T = T_b$, $p_{vap}(T) = 1$ atm and vapor bubbles form in the liquid.]

Figure 7.4: When water is heated at atmospheric air pressure some of the liquid evaporates before the temperature reaches the boiling point, because the H$_2$O pressure in the air, and thus the chemical potential, is very low. Once the temperature reaches the boiling point, $T_b$, the liquid can be in chemical equilibrium with pure water vapor at atmospheric pressure. Then gas bubbles, containing only water vapor, start to form in the liquid.

7.4 The van der Waals equation of state

The van der Waals equation of state, briefly discussed in Benson's book, gives an opportunity to deal with a liquid-gas phase transition in a relatively simple way. This equation can be viewed as a generalization of the ideal gas law and reads

\left(p + \frac{aN^2}{V^2}\right)\left(V - Nb\right) = N k_B T.     (7.12)

Here $a$ and $b$ are coefficients that depend on what particles we are dealing with. The term $Nb$ can be thought of as a reduction of the total volume, since part of it is occupied by particles. The term $aN^2/V^2$ corresponds to a reduction of the pressure, because normally gas atoms or molecules separated from each other interact attractively.

If one introduces the critical temperature $T_c$, the critical pressure $p_c$, and the critical volume $V_c$, given by

k_B T_c = \frac{8a}{27b}, \qquad p_c = \frac{a}{27b^2}, \qquad V_c = 3Nb,     (7.13)


the equation of state can be written

\left(\frac{p}{p_c} + \frac{3}{(V/V_c)^2}\right)\left(\frac{V}{V_c} - \frac{1}{3}\right) = \frac{8}{3}\,\frac{T}{T_c}.     (7.14)

If then $p/p_c$ is plotted as a function of $V/V_c$ for different values of $T/T_c$ (these curves are called isotherms), as in the left panel in Fig. 7.5, one finds that for sufficiently high temperatures, $T > T_c$, there is a unique value for the volume for all possible values of the pressure. For $T = T_c$ the $p$-$V$ curve has a terrace point at $V = V_c$ and $p = p_c$, and when $T < T_c$ there is no longer a unique relation between pressure and volume for all values of the pressure.

[Figure 7.5 appears here: three panels (a)-(c) of $p/p_c$ versus $V/V_c$.]

Figure 7.5: Panel (a) shows plots of the pressure as a function of the volume found from the van der Waals equation of state for different values of the temperature. The values given next to the curves are $T/T_c$. In panel (b), the thick curve shows how the volumes of liquid and gas, that are in equilibrium with each other, vary with pressure. Panel (c), finally, shows in more detail how the van der Waals equation of state leads to separation between different phases, liquid and gas. The $T/T_c = 0.85$ isotherm is shown, and at high pressure only the liquid phase can exist. However, at the vapor pressure, liquid and gas can coexist. The relation between pressure and volume is then not given by the van der Waals equation of state (thin curve) but follows instead a straight horizontal line. The volume can then vary between $V_l$ and $V_g$ depending on the proportions of gas and liquid. If the pressure is lower than $p_{vap}$ at this temperature, only the gas phase can exist.

Every point on the isotherms for $T \geq T_c$ corresponds to a possible physical state for a gas that obeys the van der Waals equation of state. In case $T < T_c$, the part of an isotherm to the left of the left half of the thick full curve in Fig. 7.5 describes how the pressure varies with volume in the liquid state, and the part of an isotherm to the right of the right half of the thick curve gives pressure as a function of volume in the gas phase. However, the part of an isotherm that lies in between the two halves of the thick curve does not correspond to a stable state of matter. The physical state will instead be a mixture of liquid and gas at constant pressure. The exact value of the volume depends on the proportions between liquid and gas. When a liquid is transformed to gas at constant pressure, as illustrated in Fig. 7.1 (b), (c), and (d), the volume increases from $V_l$ to $V_g$ along a horizontal line in a $p$-$V$ diagram connecting two points on an isotherm. This is illustrated in some more detail in Fig. 7.5 (c). (Note, however, that for a real substance the exact shape of the isotherm is different; no real substance follows the van der Waals equation of state very closely.)


7.5 Problems for Chapter 7

7.1 $dT/dp$ for water
Calculate the slope $dT/dp$ for the coexistence curve for water and water vapor near atmospheric pressure. The latent heat is $\ell = 2260$ kJ/kg at 100 °C, and the volume of the liquid is 0.001044 m$^3$/kg and that of the gas is 1.6729 m$^3$/kg. Express the answer in K/atmosphere.

7.2 Internal evaporation energy for water
Use the data given in the previous problem to calculate the increase of the internal energy when 1 kg of water is evaporated at 100 °C.

7.3 Chemical potential for water at the boiling point
Use the data and findings from the two previous problems (7.1 and 7.2) to determine the increase of entropy for 1 kg of water when it is evaporated at 100 °C. Then verify that the chemical potentials for liquid and vapor equal each other under these conditions, so that the liquid and gas phases can coexist.

7.4 Heat of vaporization of ice
The pressure of water vapor over ice is 3.88 mm Hg at $-2$ °C and 4.58 mm Hg at 0 °C. Estimate from this the heat of vaporization of ice at $-1$ °C.
Hint: Assume that $\Delta v \approx v_g$, and that the vapor follows the ideal gas law.

7.5 Vapor-pressure equation over ice
Take the ideas from the previous problem one step further: Assume that the difference in volume between ice and water vapor in equilibrium is equal to the vapor volume, and that the vapor can be treated as an ideal gas. Show that this leads to the following expression for the slope of the coexistence curve,

\frac{dp}{dT} = \frac{\ell\, p}{k_B T^2}.

If we neglect any temperature variations of the latent heat $\ell$ (per particle), the above differential equation is separable and has the solution

p(T) = p(T_0)\, e^{\,\ell/k_B T_0 \,-\, \ell/k_B T},

where $(T_0, p(T_0))$ is one point on the coexistence curve. Given that the latent heat for the solid-vapor transition at 0 °C is 2835 kJ/kg, estimate the vapor pressure over ice at $-40$ °C. Compare the result you get with the experimental value, 0.0129 kPa.


Appendix A

Summation of geometric series

The sum of a geometric series $S$,

S = 1 + x + x^2 + x^3 + \cdots = \sum_{n=0}^{\infty} x^n     (A.1)

(where we assume that $|x| < 1$) can be calculated by noting that

S = 1 + x\left(1 + x + x^2 + x^3 + \cdots\right) = 1 + xS,     (A.2)

which yields

S = \frac{1}{1 - x}.     (A.3)

In these notes we have often (for example when dealing with the harmonic oscillator) encountered sums like

S_1 = \sum_{n=0}^{\infty} e^{-n\hbar\omega/k_B T}.     (A.4)

In this case, by identifying $e^{-\hbar\omega/k_B T}$ with $x$, we get

S_1 = \sum_{n=0}^{\infty} e^{-n\hbar\omega/k_B T} = \sum_{n=0}^{\infty} x^n = \frac{1}{1-x} = \frac{1}{1 - e^{-\hbar\omega/k_B T}}.     (A.5)

When calculating the energy expectation value for a harmonic oscillator we also encountered the sum

S_2 = \sum_{n=0}^{\infty} n\, e^{-n\hbar\omega/k_B T} = \sum_{n=0}^{\infty} n\, x^n.     (A.6)

To evaluate $S_2$ we use the fact that it is related to the simpler sum $S_1$. Termwise derivation of $S_1$ yields

\frac{dS_1}{dx} = \sum_{n=1}^{\infty} n\, x^{n-1} = \frac{1}{x}\sum_{n=0}^{\infty} n\, x^n = \frac{S_2}{x}.     (A.7)

On the other hand we can also calculate $dS_1/dx$ from the result of Eq. (A.5),

\frac{dS_1}{dx} = \frac{d}{dx}\left(\frac{1}{1-x}\right) = \frac{1}{(1-x)^2}.     (A.8)

Thus, by combining Eqs. (A.7) and (A.8), we get the final result

S_2 = \frac{x}{(1-x)^2} = \frac{e^{-\hbar\omega/k_B T}}{\left(1 - e^{-\hbar\omega/k_B T}\right)^2}.     (A.9)
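A two-line numerical check of Eqs. (A.3) and (A.9); $x = 0.3$ is an arbitrary test value:

x = 0.3
S1 = sum(x**n for n in range(200))      # partial sum; converged since |x| < 1
S2 = sum(n * x**n for n in range(200))
print(S1, 1 / (1 - x))                  # both ~1.428571, Eq. (A.3)
print(S2, x / (1 - x)**2)               # both ~0.612245, Eq. (A.9)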


Index

atmosphere, 44
black body, 25
boiling point, 58
Boltzmann constant $k_B$, 5
Boltzmann distribution, 9-15
  mathematical expression, 11
Boltzmann factor, 10
Bose-Einstein distribution, 47
bosons, 46
Boyle's law, 35
canonical ensemble, 9
centrifuge, 51
Charles' law, 35
chemical equilibrium, 41, 54
chemical potential, 42, 54
  external, 43
  ideal gas, 43
  internal, 43
Clausius-Clapeyron law, 56
coexistence curve, 56
critical point, 56
critical pressure, 58
critical temperature, 58
critical volume, 58
Curie temperature, 54
degeneracy, 15
degenerate energy level, 15
density of photon modes, 26
density of states
  two dimensions, 51
diatomic molecule, 11
diffusion, 45
distinguishable particles, 34
DNA, 22
drift, 45
electron density of states, 49
energy
  in electromagnetic field, 27
entropy, 38
  definition, 4
entropy increase, 7, 42
equipartition principle, 13, 31
extensive variable, 56
Fermi energy, 49
Fermi gas
  energy, 51
  pressure, 51
Fermi-Dirac distribution, 47
fermions, 46
ferromagnet, 54
first law, 32
fluctuations, 6
  in macroscopic system, 6
fundamental assumption, 3, 10
Gibbs distribution, 46
Gibbs free energy, 56
  and chemical potential, 57
Gibbs sum, 46
grand canonical ensemble, 46
grand sum, 46
Hamlet, 8
harmonic oscillator, 11
  entropy, 38
  free energy, 38
  two-dimensional, 14
heat, 32, 35
heat capacity, 14
  monatomic ideal gas, 35
Helmholtz free energy, 32
  and partition function, 33
  ideal gas, 34
ideal gas, 30
ideal gas law, 35
indistinguishable particles, 34
intensive variable, 56
internal energy
  ideal gas, 35
lasers, 46
latent heat, 53
magnetic susceptibility, 21
Max Planck, 27
Maxwell-Boltzmann distribution, 37
measurements
  macroscopic and microscopic, 3
metal, 47
microcanonical ensemble, 9
most important result, 11
Mount Everest, 44
multiplicity, 4
  relation to entropy, 4
  in spin system, 4
  maximum, 5, 6
  systems in thermal contact, 5
neutron star, 47
particle in a box, 30
  thermal energy, 31
partition function, $Z$, 11
  definition, 11
  for harmonic oscillator, 12
Pauli principle, 46, 48
phase diagram, 54
phase transition, 53
photon modes, 23
Planck distribution function, 13
polymer, 38
pressure, 31
quantum concentration, 33
quantum volume, 33
radiative energy flux, 25
Rayleigh-Jeans law, 27
reservoir, 9
second law, 7
spin system, 3-7
  in magnetic field, 3
  internal energy, 4
  net, 4
steam, 53
Stefan-Boltzmann constant, 25
Stefan-Boltzmann law, 25
temperature
  theoretical definition, 7
thermal contact, 5
thermodynamic identity, 32
tillståndssumma (Swedish for partition function), 11
triple point, 55
two-level system, 3
van der Waals equation of state, 58
vapor, 53
white dwarf, 47
work, 31, 32, 35
zipper, 21