

complexfluids.ethz.ch

This document is copyrighted by M.K., Swiss ETH Zurich/CH. This copyright is violated by selling or posting these class lecture notes.

Computational Polymer Physics
Martin Kröger, Emanuela Del Gado, Patrick Ilg

1 Cellular Automata (CA) ..... 7
   1.1 Classification of CA ..... 8
       Exercise CA4 #c automata ..... 9
   1.2 The Game of life ..... 10
       Exercise Game of life ..... 10
   1.3 'Moore' models ..... 11
       1.3.1 Diffusion (Melt CA) ..... 11
       1.3.2 Boiling (Rug CA) ..... 11
       1.3.3 Weathering (Vote CA) ..... 12
       Exercise Cellular Automata 'Moore' models ..... 12
   1.4 Traffic modeling, shock waves using CA ..... 13
       Exercise One-Lane traffic ..... 14
   1.5 The Q2R Ising Model ..... 15

2 Monte Carlo basics ..... 17
       Code Uniformly distributed pseudorandom number ..... 17
       Code Uniformly distributed pseudorandom number ..... 17
   2.1 Quasi Monte Carlo ..... 18
       Code Richtmeyer Sequence ..... 18
       Code Richtmeyer sequence ..... 18
   2.2 Monte Carlo integration ..... 19
       2.2.1 Motivation from statistical physics ..... 19
       2.2.2 Numerical integration ..... 20
           .. using Mathematica(TM) ..... 20
           Code Monte Carlo integration ..... 20
           .. using Matlab(TM) ..... 20
           Code Numerical integration ..... 20
           .. using Maple(TM) ..... 20
       2.2.3 Numerical integration via Monte Carlo ..... 20
       2.2.4 Monte Carlo Error estimates, scaling ..... 21
       2.2.5 Hit Monte Carlo ..... 21
           Exercise Hit Monte Carlo I ..... 22
           Code Hit method [Mathematica] ..... 22
           Code Hit method [Matlab] ..... 22
           Exercise Hit Monte Carlo II ..... 22
           Code Occupied area for a number of (2D) spheres via hit MC method [Mathematica] ..... 22
           Code Occupied area for a number of (2D) spheres via hit MC method [Matlab] ..... 23
       2.2.6 Standard Monte Carlo ..... 24
           Exercise Standard Monte Carlo ..... 24
           Code Monte Carlo standard method ..... 24


       2.2.7 How to generate random numbers with given distribution ..... 25
           Exercise Distributed random numbers ..... 25
           Code Random numbers distributed with p = e^(−y), y ∈ [0,∞] ..... 26

3 Polymer Physics ..... 27
   3.1 Introductory examples ..... 28
       3.1.1 Accretion – The Ballistic Deposition Model ..... 28
       3.1.2 Diffusion limited aggregation (DLA) model ..... 29
       3.1.3 Fractal dimension of DLA clusters ..... 30
           Exercise DLA clusters ..... 31
   3.2 Random walk ..... 32
       3.2.1 Analytic considerations ..... 32
       3.2.2 Radius of gyration ..... 34
           Exercise Radius of gyration for a random walk ..... 35
       3.2.3 Monte Carlo simulation for the random walk ..... 35
           Exercise Random walk ..... 35
           Code 2D Random walk, moments of the end-to-end distribution ..... 35
           Code Random walk end-to-end distribution function ..... 36
   3.3 Scattering experiments ..... 37
       3.3.1 Static structure factor ..... 38
       3.3.2 Pair correlation function ..... 39
       3.3.3 Form factors ..... 39
   3.4 Freely rotating chain ..... 40
       Code Rotate 3D vector v around axis a by an angle t ..... 40
       Exercise Monte Carlo simulation of a freely rotating chain ..... 41
       3.4.1 Wormlike chain model ..... 41
       3.4.2 Simulation of the wormlike chain ..... 42
           Exercise Implement the wormlike chain algorithm and confirm theoretical predictions ..... 43
   3.5 Self-avoiding walk (SAW) ..... 44
       Code Squared end-to-end distance of 2D self-avoiding walk on square lattice ..... 44
       3.5.1 Slithering Snake algorithm ..... 44
       3.5.2 Pivot algorithm ..... 46
       3.5.3 Enriched samples algorithm ..... 47
           Code (2D, 3D) Self-avoiding walk via enriched samples ..... 47
           Exercise Wormlike chain in a cylindrical tube ..... 48
       3.5.4 Fractal dimension of the SAW ..... 48
           Exercise Fractal dimension of SAW ..... 49
           Exercise Pivot animation ..... 49
           Exercise Pivot on cubic lattice ..... 49
   3.6 Flory-type theories for polymers ..... 50
   3.7 Exactly solvable semiflexible spin chain model ..... 52


4 Master equations ..... 55
   4.1 Markov chains ..... 56
       4.1.1 Stochastic processes in general ..... 56
           Uncorrelated processes ..... 57
           Markov process ..... 57
       4.1.2 Chapman-Kolmogorov equation ..... 57
           Short time dynamics ..... 58
           Diffusion process ..... 59
   4.2 Application of the master equation ..... 60
       4.2.1 Semi-analytical solution of the Master equation ..... 61
           Code Vector, Matrix, Matrix inversion, Matrix-vector multiplication ..... 61
           Code Vector, Matrix, Matrix inversion, Matrix-vector multiplication ..... 61
       4.2.2 Detailed balance ..... 62
       4.2.3 Coupled equations for moments ..... 62
           Exercise One-step process: moments ..... 64
           Exercise One-step process: distribution function ..... 64
       4.2.4 Metropolis Monte Carlo ..... 64
       4.2.5 Metropolis algorithm ..... 65
           Code Metropolis Monte Carlo (including comparison with exact result) ..... 66
           Code Metropolis Monte Carlo (including comparison with exact result) ..... 66
           Code Metropolis Monte Carlo (periodic boundaries, including comparison with exact result) ..... 66
       4.2.6 Barker algorithm ..... 67
       4.2.7 Error estimates ..... 68
           Error versus variance ..... 68
       4.2.8 Transition matrix ..... 69
       4.2.9 Thermostatted 1D harmonic oscillator via Metropolis MC ..... 70
           Exercise Thermostatted 1D harmonic oscillator ..... 70
       4.2.10 Lennard-Jones system via Metropolis MC ..... 70
       4.2.11 Why are Gaussian integrals important? ..... 71
   4.3 Ising model and Phase transitions ..... 73
       4.3.1 Mean-field model for the Ising model ..... 75
       4.3.2 Exact results for the 2D Ising model ..... 75
       4.3.3 Metropolis MC algorithm for the 2D Ising model ..... 76
           Exercise 2D Ising model ..... 77
       4.3.4 Finite size effects ..... 77
   4.4 Kinetic Monte Carlo ..... 78
   4.5 Phase transitions & percolation ..... 81
       Exercise Percolation ..... 82
           Percolation theory in the Ising Model ..... 82
           Q-Potts model and foams ..... 83
           Random site percolation model ..... 84
       Exercise Hoshen-Kopelman algorithm ..... 86


5 Fokker-Planck equations ..... 87
   5.1 Analytic solution of Fokker-Planck equation ..... 89
   5.2 Illustration of the Fokker-Planck equation ..... 91
       5.2.1 1D harmonic oscillator with external stochastic force (noise) ..... 91
       5.2.2 Kirkwood process ..... 92
       5.2.3 Hookean dumbbell in shear flow ..... 93
       5.2.4 Rigid dumbbell ..... 94
       5.2.5 Ellipsoid of revolution ..... 94
       5.2.6 Multibead polymer chains, reptation ..... 95
   5.3 Alternate approach to Fokker-Planck equations ..... 96
   5.4 Rheological properties of polymer melts ..... 97
       5.4.1 Linear viscoelasticity, oscillatory shear flow ..... 97

6 Multiscale dynamics ..... 101
   6.1 Brownian dynamics ..... 102
       6.1.1 Brownian dynamics simulation ..... 102
       6.1.2 Kramers process ..... 103
           Exercise Harmonic oscillator with noise ..... 104
           Code Kramers process, harmonic potential, extract ⟨p²⟩ ..... 104
       6.1.3 Brownian dynamics of a FENE dumbbell ..... 105
           Exercise Brownian dynamics of FENE dumbbells subjected to shear flow ..... 105
       6.1.4 White noise ..... 106
           Random vector with given mean and covariance ..... 106
           Whitening a random vector ..... 107
   6.2 Molecular dynamics ..... 107
       6.2.1 Finite difference methods ..... 109
           Velocity-Verlet algorithm ..... 110
       6.2.2 Symplectic integrators ..... 110
           Exercise Liouville operator ..... 112
           Code Liouville operator ..... 112
           Exercise Recursive mapping ..... 113
           Code Recursive mapping and symplectic coefficients ..... 113
       6.2.3 Einstein frequency and configurational temperature ..... 114
       6.2.4 Periodic boundary conditions ..... 115
       6.2.5 Temperature control ..... 116
       6.2.6 Lees-Edwards boundary conditions ..... 116
       6.2.7 Neighbor lists ..... 116
           Exercise Molecular dynamics simulation ..... 117
       6.2.8 Long-range forces ..... 117
   6.3 Models of intermolecular interaction ..... 118
       6.3.1 Reduced units, reference units ..... 118
       6.3.2 WCA potential ..... 119
       6.3.3 SHRAT potential ..... 119
       6.3.4 Hard spheres ..... 119
   6.4 Simple fluid subjected to plane Couette flow ..... 120


   6.5 FENE polymer melt subjected to plane Couette flow ..... 124

7 Liquid crystals and liquid crystalline polymers ..... 127
   7.1 Isotropic-Nematic phase transition ..... 128
       7.1.1 Landau-de Gennes theory ..... 128
       7.1.2 Onsager theory ..... 129
   7.2 Dynamics of Liquid Crystals ..... 130
       7.2.1 Ericksen-Leslie theory ..... 130
       7.2.2 Alignment tensor theory ..... 131
       7.2.3 Dynamical Mean-Field theory ..... 131
       7.2.4 Dynamics of interacting many-body system ..... 132
   7.3 Numerical solution and simulations of LC dynamics ..... 132
       7.3.1 Solving the dynamical mean-field model ..... 132
           Finite-Differencing ..... 132
           Galerkin scheme ..... 133
           Brownian Dynamics simulations of LCs ..... 133
       7.3.2 Simulating the interacting many particle system ..... 134
   7.4 Spatially inhomogeneous LCs ..... 135
       7.4.1 Lebwohl-Lasher model ..... 135
       7.4.2 Viscosity coefficients for nematics and discotics ..... 135
           Mean field theory for concentrated suspension of ellipsoids ..... 136
           Uniaxial alignment ..... 137
           Tumbling of the director ..... 139

8 Complex liquids ..... 141
   8.1 FENE-C model for wormlike micelles and equilibrium polymers ..... 142
   8.2 Actin filaments, semiflexible chains ..... 145
   8.3 Gelation, filamentous networks ..... 152
       8.3.1 Soft-solid model ..... 152
           Percolating High Density Clusters ..... 154
           Nucleation, Growth and Hysteresis ..... 155
           Fractal dimension of a gel ..... 158

9 Smooth particles ..... 161
   9.1 Smoothed particle dynamics, soft fluid particles ..... 162
   9.2 Dissipative particle dynamics ..... 165
       9.2.1 Delaunay triangulation and Voronoi diagram ..... 166
           Code 2D Delaunay triangulation, Voronoi diagram using built-in routines ..... 167
           Exercise 2D Voronoi diagram ..... 167


[Schematic: connection between the sections about Monte Carlo methods, Markov processes, Master equations, Fokker-Planck equations, and Brownian dynamics simulation. March 19, 2009, mk]

Model: microscopic transition rates w(x→y)
→ MASTER equation for p(x), supplemented by initial conditions
→ (smooth transitions) Fokker-Planck equation for p(x), with w(x→y) → A(x,t) and D(x,t)
   - if D = D(t) and A is linear in x: analytical solution
→ Langevin equation → BD simulation
→ moments ⟨x⟩(t), ⟨xx⟩(t), .. characterizing macroscopic behavior

More generally, solve dp/dt = V(x,t)·p for p(x,t):
1) by matrix inversion
2) by writing down coupled equations for the moments; decouple, then solve
3) for the stationary solution using, e.g., Metropolis Monte Carlo
4) numerically by integration, transition matrices


Part 1

Cellular Automata (CA)

A Cellular Automaton (CA) consists of a discrete system of lattice sites having various initial values. These sites evolve in discrete time steps as each site assumes a new value based on (i) the values of some local neighborhood of sites and (ii) a finite number of previous time steps.

Kinds of CA lattices

In one dimension, a cellular automaton is a linear list of numbers or symbols. In two dimensions, various lattices are possible (e.g. triangular, hexagonal, or rectangular). In higher dimensions, the approach is similar but lattices are obviously more complex. One way to create a lattice is using the triangulation or tessellation concepts described in Sec. 9.2.1.

CA neighborhoods

The neighborhood of a lattice site consists of the site and its nearest neighbor sites. For the rectangular lattice, two kinds of neighborhood are commonly used:
- Von Neumann (site + 4 nearest neighbors: N, E, S, W)
- Moore (site + 8 nearest neighbors: N, NE, E, SE, S, SW, W, NW).

Treatment of sites at the edge of a lattice

Various boundary conditions are possible. For example:
- Absorbing boundaries: the nearest neighbor to the left of a site on the left border of the lattice is zero. Similarly for sites on the right, top and bottom borders.
- Periodic boundaries: the nearest neighbor to the left of a site on the left border of the lattice equals the site on the same row on the right border. Similarly for sites on the right, top and bottom borders.
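The two boundary treatments amount to a one-line difference in the neighbor lookup. A minimal Python sketch (the notes otherwise use Matlab/Mathematica; the function name and lattice layout are illustrative, not from the notes):

```python
def left_neighbor(grid, y, x, periodic=True):
    """Value of the left neighbor of site (y, x) on a rectangular lattice.

    Periodic: the left neighbor of a left-border site is the site on the
    same row at the right border. Absorbing: that neighbor counts as zero.
    Analogous helpers would handle the right, top, and bottom borders.
    """
    if x == 0 and not periodic:
        return 0                        # absorbing boundary
    return grid[y][(x - 1) % len(grid[y])]
```

The modulo trick `(x - 1) % width` implements the ring topology for free; the absorbing case simply short-circuits at the border.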


1.1 Classification of CA

The CA operates on a Boolean grid, σi ∈ {0, 1}, for example, and a CA is identical with a rule which updates all values σ at discrete 'time' steps according to the values of a set of neighboring cells, possibly including the value of the cell itself. This can be formalized:

σi(t + 1) = fi(σj(t), j = 1, . . . , k),   (1.1)

where the functions fi depend on the grid position 'i' only to describe the grid topology or data structure. The j's are meant to be defined 'relative' to i (pointers; the set j may even contain i), and the function fi can then be regarded as independent of i for any subsequent discussion. The quantity k equals the number of interaction sites which are communicating with i, and this number is usually also taken to be a constant. For the von Neumann grid, k = 4; for the hexagonal grid, k = 6; for the Moore grid, k = 8.
Because each node has two possible values, and k interaction sites, there are exactly cmax = 2^(2^k) possible rules for a CA with given k:

k   # CA rules = # CA programs
0   2
1   4
2   16
3   256
4   65536
5   4294967296
6   18446744073709551616
7   340282366920938463463374607431768211456
8   115792089237316195423570985008687907853269984665640564039457584007913129639936
k   2^(2^k)

(1.2)

This table demonstrates that even though the number of rules is finite, there is still some space to develop and investigate CA automata for new applications, even if we restrict ourselves to the hexagonal or Moore grids. The same fact can be used to classify a CA by a single number 0 ≤ c < cmax, in the way which is common for the binary representation of (decimal) numbers. Here is a sample CA for k = 3, known as CA3 #106, from cmax = 256 possible CA3's:

σ1σ2σ3   111  110  101  100  011  010  001  000
n          7    6    5    4    3    2    1    0
2^n      128   64   32   16    8    4    2    1
f(n)       0    1    1    0    1    0    1    0

This is summarized and formalized as:

CAk # ∑_{n=0}^{2^k − 1} 2^n f(n) = CA3 #106   (1.3)


If we wish to simulate CAk #c, we can extract from c the rules to be used.

EXERCISE CA4 #C AUTOMATA

Set up a CA simulation code which implements the rules for a given 0 ≤ c < cmax on a data structure (with periodic boundary conditions) which has k = 4.
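The rule extraction behind this classification can be sketched as follows. This is a Python sketch (the notes otherwise use Matlab/Mathematica), shown for the one-dimensional CA3 #106 tabulated above; the assignment of σ1σ2σ3 to the (left, self, right) cells is an assumption made for illustration:

```python
def rule_table(c, k=3):
    # f(n) for n = 0..2^k - 1 is the n-th binary digit of the rule number c
    return [(c >> n) & 1 for n in range(2 ** k)]

def step(cells, f):
    """One synchronous update of a 1D CA on a ring (periodic boundaries).

    The neighborhood value n is the 3-bit number sigma1 sigma2 sigma3
    built here from (left, self, right) -- an illustrative choice.
    """
    s = len(cells)
    return [f[4 * cells[(i - 1) % s] + 2 * cells[i] + cells[(i + 1) % s]]
            for i in range(s)]

f = rule_table(106)      # CA3 #106 from the table above
state = [0] * 11
state[5] = 1             # single seed
for _ in range(5):
    state = step(state, f)
```

For the k = 4 exercise only the neighborhood lookup changes: n becomes a 4-bit number built from the von Neumann neighbors, and c then ranges up to 2^16.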


1.2 The Game of life

[Figure: three snapshots of a Game of Life simulation on a 30 × 30 lattice; VL cellular automata dec 2006.m]

The Game of Life, created by the British mathematician John Conway, is the most famous cellular automaton (CA). The Game of Life was the first program run on the Connection Machine, the world's first parallel computer. Most importantly, it is the forerunner of the so-called artificial life (or a-life) systems which are of great interest today, not only for their biological implications, but for the development of so-called 'intelligent agents' for computers.
The Game of Life is played on a two-dimensional square lattice with periodic boundary conditions. Lattice sites have a value of 0 or 1, i.e. the lattice is a Boolean matrix. A site with value 1 is said to be alive and a site with value 0 is said to be dead. The system evolves by updating all of the sites in the lattice simultaneously, based on their Moore neighborhoods, until two successive configurations are identical, or until a specified number of updates (time steps) have occurred.

The life rules

The Game of Life CA rules are based on the value of a site and the sum of the values of its neighbors. The rules, known as life and death rules, are:

1) A living site with either two or three living nearest neighbor sites remains alive.

2) A dead site with exactly three living nearest neighbors becomes alive.

3) All other sites either remain dead or die.

EXERCISE GAME OF LIFE

Implement the above algorithm. The Game of Life is most interesting to watch when persistent patterns, known as life-forms, occur during the evolution process. One pattern that has been extensively studied is known as the glider and is defined as follows: (x, y), (x + 1, y), (x + 2, y), (x + 2, y + 1), (x + 1, y + 2). Another interesting life-form is the beehive: (x, y), (x, y + 1), (x, y + 2), (x, y + 3), (x, y + 4), (x, y + 5), (x, y + 6).
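The life and death rules above can be sketched directly. A minimal Python sketch (the notes otherwise use Matlab/Mathematica) with periodic boundaries, seeding the glider defined in the exercise at (x, y) = (4, 4):

```python
def life_step(grid):
    """One synchronous Game of Life update with periodic boundaries."""
    n, m = len(grid), len(grid[0])
    new = [[0] * m for _ in range(n)]
    for y in range(n):
        for x in range(m):
            # sum over the 8 Moore neighbors, wrapping at the edges
            s = sum(grid[(y + dy) % n][(x + dx) % m]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            # life rules: survival with 2 or 3 neighbors, birth with exactly 3
            new[y][x] = 1 if s == 3 or (grid[y][x] == 1 and s == 2) else 0
    return new

# the glider from the exercise, placed at (x, y) = (4, 4) on a 10 x 10 lattice
grid = [[0] * 10 for _ in range(10)]
for x, y in [(4, 4), (5, 4), (6, 4), (6, 5), (5, 6)]:
    grid[y][x] = 1
for _ in range(4):   # after 4 steps a glider reproduces itself, shifted diagonally
    grid = life_step(grid)
```

Updating into a fresh matrix `new` is what makes the update simultaneous; writing back in place would mix old and new generations.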


1.3 ’Moore’ models

Rudy Rucker has suggested three CA models of various physical phenomena, invoking theMoore neighbor lists, according to [Gaylord and Wellin (1995)].

1.3.1 Diffusion (Melt CA)

[Figure: three snapshots of the Melt CA on a lattice of about 50 × 40 sites; VL cellular automata dec 2006.m]

The Melt CA models the spread of molecules in space, e.g., of a perfume, or the spread of heat in a material. The sites of a square lattice have values ranging from 0 to r, where r represents concentration or temperature. At each update, a site's value is replaced by the average of the eight nearest neighbors in its Moore neighborhood.

1.3.2 Boiling (Rug CA)

[Figure: three snapshots of the Boiling (Rug) CA on a 30 × 30 lattice; VL cellular automata dec 2006.m]

The Rug CA is a variant of the Melt CA and models the transition from liquid to vapor. The sites of a square lattice have values ranging from 0 to r − 1. At each update, a site's value is replaced by the average of its eight nearest neighbors plus one, unless the resulting value is r or higher, in which case it goes to 0 (or some low value). This process is analogous to the way in which a sufficiently hot region of water gives up some heat by forming a bubble of steam, which momentarily cools off the water around the bubble.


1.3.3 Weathering (Vote CA)

[Figure: three snapshots of the Weathering (Vote) CA on a 30 × 30 lattice; VL cellular automata dec 2006.m]

The Vote CA models the 'smoothing-off' of jagged edges. The sites of a square lattice have values of 0 or 1. At each update, a site's value is replaced by the value possessed by the majority of the nine sites in its Moore neighborhood, with the following exception: if there is exactly one more neighborhood site with a value of 1 than with a value of 0, the site value is updated to 0, and if there is exactly one less neighborhood site with a value of 1 than with a value of 0, the site value is updated to 1.

EXERCISE CELLULAR AUTOMATA ’MOORE’ MODELS

Implement the above three algorithms 'Melt CA', 'Rug CA' and 'Vote CA' in a single program, animate the sequences, and create a movie.
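The three update rules can be collected in one sketch. A Python sketch (the notes otherwise use Matlab/Mathematica) with periodic boundaries; keeping the Melt average as a float and feeding it into the Rug rule are simplifying assumptions:

```python
def moore_sum(grid, y, x):
    """Sum over the 8 Moore neighbors of site (y, x), periodic boundaries."""
    n, m = len(grid), len(grid[0])
    return sum(grid[(y + dy) % n][(x + dx) % m]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))

def melt_step(grid):
    # Melt CA: each site becomes the average of its 8 Moore neighbors
    return [[moore_sum(grid, y, x) / 8.0 for x in range(len(grid[0]))]
            for y in range(len(grid))]

def rug_step(grid, r):
    # Rug CA: neighbor average plus one; values reaching r drop to 0
    new = melt_step(grid)
    return [[v + 1 if v + 1 < r else 0 for v in row] for row in new]

def vote_step(grid):
    # Vote CA: majority of the 9-site Moore neighborhood, except that a
    # one-vote majority is inverted (sum 5 -> 0, sum 4 -> 1)
    out = []
    for y in range(len(grid)):
        row = []
        for x in range(len(grid[0])):
            s = moore_sum(grid, y, x) + grid[y][x]   # 9-site sum
            row.append(0 if s == 5 else 1 if s == 4 else int(s > 5))
        out.append(row)
    return out
```

All three rules are synchronous: each step builds a fresh lattice from the old one, never mixing generations.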


1.4 Traffic modeling, shock waves using CA

A particularly simple traffic model is that of particles located on a lattice such that no lattice site is occupied by more than one particle at a time. The particles have different probabilities of moving left or right so that, overall, there is a net movement of particles in a direction. We focus on modeling one-way traffic with one-dimensional cellular automata. The purpose is (simply) to examine the effect of stopping on the flow of traffic as a function of the density of cars on the road. A somewhat less simple problem would be to include the effect of passing (overtaking).

One-Lane model setup

[Figure: space-time plot of the one-lane traffic CA, car density 0.5]

(i) Traffic takes place on a one-dimensional lattice of length s.
(ii) Periodic boundary conditions are assumed, which means that the system has a ring geometry.
(iii) Values of sites are either car speeds – integers ranging from 0 to vmax (the speed limit!) – or the symbol e, representing an empty site.
(iv) The system evolves over a specified number of time steps by updating each lattice site, based on its value and on the values of neighboring sites to its right (that is, in front).

Algorithm

1) The initial road configuration consists of cars whose speeds and locations are both ran-domly distributed. This is done by first placing cars on the road and then determiningtheir speeds. As well as s and vmax, a needed input parameter is the probability that alocation on the road is occupied by a car (call it p). (We note that in the limit of larges, the density of cars on the road will approach the value of p).

The movement of cars along the road is carried out by simultaneously updating all of the lattice sites according to Steps 2 (adjusting car speed) and 3 (moving the car). Doing Step 2 before Step 3 ensures that no car is crashed into from behind! Steps 2 and 3 will be carried out a specified number (t) of times.

2) (a model for speed adjustment) Let d = distance from the car of interest to the car ahead, and let v = velocity of the car of interest. If d ≤ v, then slow down to v′ = d − 1; else speed up to v′ = v + 1 (but not beyond the speed limit vmax).

3) (a model for car motion) A car moves to the right a distance equal to the car's velocity, v′. [Note: In general, the units of velocity are distance/time; here, we are taking the duration of each time step to be one time unit, which allows us, in effect, to compare distance and velocity directly.]

[Figure: space-time traffic snapshot at car density 0.05]

EXERCISE ONE-LANE TRAFFIC

Implement the above algorithm. The (empty) road should be shown in black, cars as hues, with different colors representing different velocities. Successive snapshots in time should be generated to allow the evolution of the traffic velocities to be observed. Create two-dimensional plots where distance goes horizontally towards the right, and time vertically downwards. Such a representation can be interpreted in terms of kinematic waves which move at different speeds and meet to produce shock waves. The kinematic waves are created as cars adjust their speed to the cars ahead, creating waves of constant flow: packs of fast-moving, widely spaced cars, and caravans of slow-moving, closely spaced cars. A shock wave is created when a fast-moving car must brake suddenly to avoid running into a slow-moving car.


1.5 The Q2R Ising Model

The Q2R model is a CA version of the Ising model which is discussed in Section 4.3. The two models differ in several significant ways. Q2R uses a microcanonical ensemble approach while the Metropolis Ising model uses a canonical ensemble approach. In Q2R, all of the spins are flipped simultaneously while in the Metropolis model only one randomly selected spin is flipped at a time. The spin-flipping decision is deterministic in Q2R while it is probabilistic in the Metropolis model. Both versions of the Ising model give many comparable results (though it should be pointed out that the Q2R program is not exactly equivalent to the Metropolis Ising program because the CA is not ergodic) and we will not be concerned with those results here. Our focus will be on showing how the "checkerboard updating" scheme used in Q2R can be carried out.

The System: The Q2R model takes place on a two-dimensional (n + 1) × n lattice with skewed or periodic boundary conditions. Lattice sites have values of +1 or −1. The system evolves by simultaneously updating sites (known as spin flipping) based on their von Neumann neighborhoods, until the system stops changing or for a specified number of times.

The Rules: The constant-energy spin-flipping condition in the Q2R model requires a rule that states that a site is flipped only if it has an equal number of up-spin and down-spin neighbor sites.

Algorithm

1) The initial lattice configuration is an (n + 1) × n lattice with site values randomly chosen from the set {+1, −1} (up, down).

2) The requirement that a spin flip does not alter the energy of the system makes it necessary to simultaneously update only those spins that do not directly interact. To do this, the lattice is separated into two sublattices arranged like a checkerboard. Half of the sites are red and half are black, such that no sites of the same color are nearest neighbors. On a lattice having an odd number of sites in each row, all of the odd-numbered sites form one sublattice, all of the even-numbered sites belong to the other sublattice, and the neighbors of sites in either sublattice are in the other sublattice.

The updating of the lattice is performed in two consecutive steps:

(1) the values of odd-numbered sites are updated using the values of their even-numbered nn sites.

(2) the values of even-numbered sites are updated using the updated values (determined in the previous step) of their odd-numbered nn sites.

For each sublattice, all sites are flipped if there is an equal number of up and down spins on neighboring sites. Otherwise the sites remain in their original state.

(3) The updating procedure is repeatedly performed using the ordered pair consisting of the sublattices.

(4) In step 3, we did not bother to reconstruct the lattice at each time step from its constituent sublattices. To do this (so that, e.g., we can look at the graphics of the lattice), the sublattices are brought together.
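The checkerboard scheme above can be sketched in Python (an illustrative translation, not the notes' own code; periodic boundary conditions and an even lattice size are assumed so the two-coloring stays consistent across the wrap):

```python
import numpy as np

def q2r_step(spins):
    """One Q2R sweep on a periodic lattice of +/-1 spins: update one
    checkerboard sublattice, then the other; a spin flips only if its four
    von Neumann neighbors carry two up and two down spins (zero local field,
    hence energy conserving)."""
    i, j = np.indices(spins.shape)
    for color in (0, 1):                           # the two sublattices
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
              + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        flip = ((i + j) % 2 == color) & (nn == 0)  # two up, two down
        spins = np.where(flip, -spins, spins)
    return spins

rng = np.random.default_rng(0)
s0 = np.where(rng.random((32, 32)) < 0.5, 1, -1)
s = s0.copy()
for _ in range(20):
    s = q2r_step(s)
```

Because a spin flips only when its neighborhood field vanishes, the total energy E = −Σ s_i s_j over nearest-neighbor bonds is conserved exactly, which is the microcanonical character of Q2R.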


complexfluids.ethz.ch

This document is copyrighted M.K., Swiss ETH Zurich/CH This copyright is violated upon selling or posting these class lecture notes

Computational Polymer Physics
Martin Kröger, Emanuela Del Gado, Patrick Ilg

Part 2

Monte Carlo basics

Monte Carlo methods use random numbers, or 'random' sequences, to sample from a distribution of known shape, or to extract a distribution by other means, and, in the context of these notes, to i) generate representative equilibrated samples to be used in (dynamical) nonequilibrium simulations, or ii) evaluate high-dimensional integrals. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo 'moves', required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this lecture. In physics, materials science, and biology, Monte Carlo methods are commonly used to treat the equilibrium statistics of simple and complex systems. The treatment will be kept sufficiently general to cover Monte Carlo methods as solvers for high-dimensional integration. High-dimensional integrals occur frequently in a huge number of diverse applications and areas, including financial mathematics and data recognition.

CODE UNIFORMLY DISTRIBUTED PSEUDORANDOM NUMBER

In[] := Random[type, {min, max}]

where type is one of Integer, Real, or Complex.

CODE UNIFORMLY DISTRIBUTED PSEUDORANDOM NUMBER

random('Uniform',min,max)
random('Discrete Uniform',max-min+1)+min-1
randint(1,1,[min max])


2.1 Quasi Monte Carlo

The strikingly simple idea of quasi Monte Carlo is to replace pseudorandom numbers by a quasi random sequence which is 'known to be uniform' in high-dimensional x space (exhibiting low 'discrepancy'). Quasi Monte Carlo methods use deterministic samples at points that belong to low discrepancy sequences and approximate integrals by the arithmetic average of N function evaluations. Their worst case error is of order N^{-1} \ln^d N, where d denotes the dimension. Since this term becomes huge when N is fixed and d is large, as sometimes happens in practice, there has traditionally been a certain degree of concern about quasi Monte Carlo. It has been shown that the convergence rate of quasi Monte Carlo is of order N^{-1+p/\sqrt{\ln N}} with p ≥ 0. Compared with the expected rate N^{-1/2} of Monte Carlo, this shows the superiority of quasi Monte Carlo. To understand the success of quasi Monte Carlo in some applications, the notion of effective dimension has also been introduced. One example is the 'Richtmeyer sequence' x^{(n)} ∈ [0, 1]^d with components x^{(n)}_\mu = n \sqrt{P_\mu} \bmod 1, \mu = 1, 2, .., d, where P_\mu is the \mu-th prime number (P_1 = 2, P_2 = 3, P_3 = 5; see code below). Other often used sequences are the so-called Faure and Niederreiter sequences. There is no exhaustive knowledge about the overall efficiency of quasi Monte Carlo methods for computing high-dimensional integrals. Quasi Monte Carlo is certainly good for low-dimensional integration, but supersedes pseudo Monte Carlo only if N > e^d (huge for large d). It is, as yet, an empirical observation that quasi random numbers should not be used for solving stochastic differential equations (Langevin equations).

CODE RICHTMEYER SEQUENCE

In[] := QMCrand[n_, m_] := Mod[n Sqrt[Prime[m]], 1];
ListPlot[Table[QMCrand[n, 1], {n, 1, 100}], PlotJoined -> False]

CODE RICHTMEYER SEQUENCE

prime=primes(100000);
QMCrand=inline('mod(n*sqrt(mthprime),1)','n','mthprime');
figure(1); plot(QMCrand(1:100,prime(1)),'.')


2.2 Monte Carlo integration

2.2.1 Motivation from statistical physics

According to classical statistical mechanics, expectation values of observables A(x), where x is a phase space coordinate of the system, can be computed in the canonical ensemble,

\langle A \rangle = \int A(x)\, p(x)\, dx,   (2.1)

p(x) = \frac{1}{Z_\beta}\, e^{-\beta H(x)},   (2.2)

where H(x) denotes the Hamilton function of the system, Z_\beta the partition function, \beta = (k_B T)^{-1} the inverse absolute temperature, and k_B Boltzmann's constant. The partition function

Z_\beta = \int e^{-\beta H(x)}\, dx   (2.3)

ensures proper normalization, \langle 1 \rangle = 1, and yields, e.g., the free energy F(\beta) = -k_B T \ln Z_\beta. Only for a few special cases can the integrations in Eq. (2.1) be done analytically. In the vast majority of models, one needs either to rely on approximations or to use numerical methods. The main idea of the Monte Carlo method is to treat this problem as a problem of numerical integration where, in view of the usually high dimension of the integral (the dimension is proportional to the number of degrees of freedom \propto N, with the number of particles N, for a many-particle system; see for example the Ising model in Sec. 4.3), an equidistant or regular grid in phase space cannot be used to compute the integral (cf. Table 2.1), but the grid coordinates x_i have to be statistically selected, to obtain

\langle A \rangle = \frac{1}{M} \sum_{i=1}^{M} A(x_i)\, p(x_i)   (2.4)

in the limit M \to \infty. Since the distribution function usually varies over several orders of magnitude (H(x) is an extensive variable, H(x) \propto N), a regular phase space grid is often inefficient.

Before we establish how random numbers are used to solve integrals, let us have a look at some commercially available software packages such as Mathematica™, Matlab™ and Maple™, and see whether they use random numbers or not.

Method                               scaling behavior
Trapezoidal                          N^{-2/d}           regular grid in coordinate space
Simpson                              N^{-4/d}           regular grid in coordinate space
Standard pseudorandom Monte Carlo    N^{-1/2}           (independent of d)
Quasi Monte Carlo                    N^{-1} \log^D N    (for some D, realized for N > e^d)

Table 2.1: Convergence behaviors for N function evaluations in d dimensions.


2.2.2 Numerical integration

.. using Mathematica™

When Mathematica™ tries to evaluate a numerical integral (command: NIntegrate), it samples the integrand at a sequence of points. If it finds that the integrand changes rapidly in a particular region, then it recursively takes more sample points in that region. The parameters MinRecursion and MaxRecursion specify the minimum and maximum number of levels of recursive subdivision to use. Increasing the value of MinRecursion guarantees that NIntegrate will use a larger number of sample points. MaxRecursion limits the number of sample points which NIntegrate will ever try to use. Increasing MinRecursion or MaxRecursion will make NIntegrate work more slowly. SingularityDepth specifies how many levels of recursive subdivision NIntegrate should try before it concludes that the integrand is 'blowing up' at one of the endpoints, and does a change of variables. With Method→Automatic, NIntegrate uses an adaptive Gaussian quadrature with error estimation based on evaluation at Kronrod points (GaussKronrod) in one dimension, and an adaptive Genz-Malik algorithm (MultiDimensional) otherwise. If an explicit setting for MaxPoints is given, NIntegrate by default uses a non-adaptive Halton-Hammersley-Wozniakowski algorithm (QuasiMonteCarlo).

CODE MONTE CARLO INTEGRATION

In[] := myintegral[steps_] := NIntegrate[Exp[-x^2], {x, -100, 100}, MaxPoints -> steps];
myintegral[100]

.. using Matlab™

Matlab™ does not provide a Monte Carlo option; quad uses an adaptive Simpson quadrature, quadl the so-called Lobatto quadrature. The symbolic toolbox of Matlab™ is actually a cut-down version of the Maple™ program.

CODE NUMERICAL INTEGRATION

myintegrand=inline('exp(-x.^2)','x');
quad(myintegrand,-100,100)

.. using Maple™

If the software is running, information can be obtained using the ?evalf/Int command.

To summarize, approximately 50% of the available commercial codes use random numbers (upon request) to compute integrals.

2.2.3 Numerical integration via Monte Carlo

Let p(x) be an arbitrary, but conveniently normalized, distribution function (\int p(x)\, dx = 1). Then

I = \int f(x)\, dx = \int \frac{f(x)}{p(x)}\, p(x)\, dx = \left\langle \frac{f(x)}{p(x)} \right\rangle_p.   (2.5)


The average, and thus the integral I, can be evaluated by sampling with N random numbers (or better, d-dimensional vectors for a d-dimensional integral) x_i, i = 1, 2, .., N, distributed according to p(x), as follows:

I = \left\langle \frac{f(x)}{p(x)} \right\rangle_p = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}.   (2.6)

2.2.4 Monte Carlo error estimates, scaling

The error bar \sigma for this integral estimate after N evaluations can also be calculated 'on the fly' (using the above estimate for I) via \sigma = \sqrt{\Delta^2 I / N} with

\Delta^2 I = \left\langle \left( \frac{f(x)}{p(x)} \right)^2 \right\rangle_p - \left\langle \frac{f(x)}{p(x)} \right\rangle_p^2 = \left\langle \left( \frac{f(x)}{p(x)} \right)^2 \right\rangle_p - I^2 = \int \frac{f(x)^2}{p(x)}\, dx - I^2   (2.7)

= \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \left( \frac{f(x_i)}{p(x_i)} \right)^2 - I^2.   (2.8)

While (2.7) can be used to calculate the error for the integral estimate in advance, provided the integrations can be performed analytically, (2.6) and (2.8) are used in a simulation to estimate the integral I and its error bar \sqrt{\Delta^2 I / N}. Since (2.7) is a constant for any p, the standard deviation of a Monte Carlo based integration value decreases, independent of the dimension of space, as 1/\sqrt{N}, which is in remarkable contrast to the classical convergence behaviors, cf. Tab. 2.1, and puts Monte Carlo integration far ahead of classical schemes for high-dimensional integration.
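A compact illustration of Eqs. (2.6) and (2.8) (a Python sketch, not part of the notes' Mathematica/Matlab codes), applied to \int_{-1}^{1} e^{-x^2}\, dx ≈ 1.4936 with a uniform p(x):

```python
import math
import random

def mc_integrate(f, xmin, xmax, n, seed=0):
    """Standard Monte Carlo estimate of the integral of f over [xmin, xmax]
    with uniform p(x), returning the estimate and its one-sigma error bar,
    per Eqs. (2.6) and (2.8)."""
    rng = random.Random(seed)
    p = 1.0 / (xmax - xmin)            # uniform, normalized density
    s = s2 = 0.0
    for _ in range(n):
        x = rng.uniform(xmin, xmax)
        w = f(x) / p                   # f(x_i)/p(x_i)
        s += w
        s2 += w * w
    I = s / n                          # Eq. (2.6)
    var = s2 / n - I * I               # Eq. (2.8)
    return I, math.sqrt(var / n)       # error bar sqrt(Delta^2 I / N)

I, err = mc_integrate(lambda x: math.exp(-x * x), -1.0, 1.0, 100_000)
```

The returned error bar shrinks as 1/\sqrt{N}, independent of dimension, as discussed above.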

2.2.5 Hit Monte Carlo

[Figure: hit Monte Carlo sampling of the area below f; MC hit yields 3.12 +/- 0.095666 (exact error 0.094812). Exact result is 3.1416.]

Let us assume that the integrand in \int f(x)\, dx is positive semidefinite. The hit method requires the knowledge of an upper bound of the integrand, which we call f_m, i.e., 0 \le f(x) \le f_m, where x is a d-dimensional vector. As the probability distribution p for the random sequence, the method chooses a constant p(x, y) on the (d+1)-dimensional space. Let us start with an identity,

f(x) = \int_0^{f_m} \Theta(f(x) - y)\, dy,   (2.9)

where \Theta is the Heaviside step function, \Theta(z) = 1 for z \ge 0 and \Theta(z) = 0 otherwise. With the (d+1)-dimensional vector \bar{x} = (x, y) we then rewrite the integral as a (d+1)-dimensional integral as follows:

I = \int f(x)\, dx = \int \bar{f}(\bar{x})\, d\bar{x} = \left\langle \frac{\bar{f}(\bar{x})}{p(\bar{x})} \right\rangle_{p(\bar{x}) = \mathrm{const}}, \qquad \bar{f}(\bar{x}) = \bar{f}(x, y) = \Theta(f(x) - y),   (2.10)

and the constant p, due to \int p(\bar{x})\, d\bar{x} = 1, must be the inverse of the (known) integration area in (d+1)-dimensional space.

EXERCISE HIT MONTE CARLO I

Write a code which calculates the integral and its error bar for a given positive and upper-bounded scalar function (input) by employing the hit Monte Carlo method.

CODE HIT METHOD [MATHEMATICA]

In[] := f[x_] := Exp[-x^2]; fm = 1; (* specify integrand and upper bound 0 <= f <= fm *)
xmin = -1; xmax = 1; steps = 10000; Integral = 0; Area = (xmax - xmin) fm; p = 1/Area;
Do[X = Random[Real, {xmin, xmax}]; Y = Random[Real, {0, fm}];
  Integral = Integral + UnitStep[f[X] - Y]/p, {step, 1, steps}];
Integral/steps

CODE HIT METHOD [MATLAB]

f=inline('exp(-x.^2)'); fm=1; % specify integrand and upper bound 0 <= f <= fm
xmin=-1; xmax=1; steps=1000; area=(xmax-xmin)*fm; p=1/area;
X=random('Uniform',xmin,xmax,steps,1); Y=random('Uniform',0,fm,steps,1);
Integral=mean(f(X)-Y>=0)/p
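For comparison, the same hit method as a Python sketch (illustrative only, not the notes' own code; integrand, box and bound as in the codes above):

```python
import math
import random

def hit_mc(f, fm, xmin, xmax, shots, seed=0):
    """Hit Monte Carlo per Eq. (2.10): throw uniform points (x, y) into the
    box [xmin, xmax] x [0, fm] and count those falling below the graph of f."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(shots):
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(0.0, fm)
        if y <= f(x):                  # a 'hit': Theta(f(x) - y) = 1
            hits += 1
    area = (xmax - xmin) * fm          # box area = 1/p
    return hits / shots * area         # fraction of hits times box area

I = hit_mc(lambda x: math.exp(-x * x), 1.0, -1.0, 1.0, 100_000)
```

The exact value of \int_{-1}^{1} e^{-x^2}\, dx is \sqrt{\pi}\, \mathrm{erf}(1) ≈ 1.4936, which the estimate approaches as the number of shots grows.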

EXERCISE HIT MONTE CARLO II

Write a code which generates a random 2D configuration of N circles of radius R in a square 2D box. Compute the area occupied by the circles (which may overlap) with the hit Monte Carlo method.

CODE OCCUPIED AREA FOR A NUMBER OF (2D) SPHERES VIA HIT MC METHOD [MATHEMATICA]

In[] := occupiedarea[circles_, radius_, shots_] := (hits = 0;
  Do[pos[i] = {Random[], Random[]}, {i, 1, circles}];
  Show[Graphics[{Hue[.1], Rectangle[{0, 0}, {1, 1}]}],
    Graphics[Table[Circle[pos[i], radius], {i, 1, circles}]], AspectRatio -> Automatic];
  Do[X = Random[]; Y = Random[];
    If[Min[Table[Norm[pos[i] - {X, Y}], {i, 1, circles}]] < radius, hits = hits + 1],
    {i, 1, shots}];
  hits/shots);

CODE OCCUPIED AREA FOR A NUMBER OF (2D) SPHERES VIA HIT MC METHOD [MATLAB]

function MCarea(circles,radius,shots)
hits=0; pos=rand(circles,2); shot=rand(shots,2);
for i=1:shots
  hit=0;
  for j=1:circles
    if sum((pos(j,:)-shot(i,:)).^2) < radius^2, hit=1; break; end
  end
  hits=hits+hit;
  plot(shot(i,1),shot(i,2),'.','color',[hit 0 0]); hold on;
end
arcs(pos,radius); title(['occ area = ' num2str(hits/shots)]);

function arcs(pos,radius)
X=linspace(0,2*pi);
for i=1:length(pos)
  hold on; plot(radius*cos(X)+pos(i,1),radius*sin(X)+pos(i,2));
end
axis equal;

[Figure: random circles in the unit square with hit Monte Carlo shots; occ area = 0.286.]


For the variance of the hit method we have, according to (2.7), with area A = 1/p,

\Delta^2 I = \int \frac{\bar{f}(\bar{x})^2}{p(\bar{x})}\, d\bar{x} - I^2 = A \int\!\!\int \Theta(f(x) - y)^2\, dx\, dy - I^2 = A \int\!\!\int \Theta(f(x) - y)\, dx\, dy - I^2 = A I - I^2.   (2.11)

2.2.6 Standard Monte Carlo

[Figure: standard Monte Carlo sampling; MC standard yields π = 3.1506 +/- 0.050991 (exact error 0.051545). Exact result is 3.1416.]

In contrast to the hit method described in the foregoing section, the standard Monte Carlo method chooses a constant p(x), which does not require knowledge of an upper bound f_m of the integrand:

\int f(x)\, dx = \left\langle \frac{f(x)}{p(x)} \right\rangle_{p(x) = \mathrm{const}},   (2.12)

and the related variance is, according to (2.7),

\Delta^2 I = \frac{1}{p} \int f(x)^2\, dx - I^2 \le \frac{f_m}{p} \int f(x)\, dx - I^2 = \frac{f_m}{p}\, I - I^2 = A I - I^2,   (2.13)

where we have noticed that A = f_m/p is the area explored by the hit method. We have thus proven that the variance of the standard method never exceeds that of the hit method.

EXERCISE STANDARD MONTE CARLO

Write a code which calculates the integral and its error bar for a given scalar function (input) by employing the standard Monte Carlo method.

CODE MONTE CARLO STANDARD METHOD

In[] := f[x_] := Exp[-x^2]; (* choose integrand *)
xmin = -1; xmax = 1; steps = 1000; Integral = 0; p = 1/(xmax - xmin);
Do[X[step] = Random[Real, {xmin, xmax}]; Integral = Integral + f[X[step]]/p, {step, 1, steps}];
Show[Graphics[Table[Line[{{X[step], 0}, {X[step], f[X[step]]}}], {step, 1, steps}]]];
Integral/steps


where the (memory-consuming) field X[step] is only introduced to create the graphics; otherwise X[step] should be replaced by X.

2.2.7 How to generate random numbers with a given distribution

Here we mention the method of variable transformation, which can be used only if the following equations can be solved and the required inverse function is accessible. Let the given, normalized distribution function we wish to realize be denoted by \bar{p}, and the one we wish to use to realize this distribution be denoted by p (p is usually a constant, producing equally distributed random numbers between 0 and 1). Let us assume, for the sake of simplicity, that p = 1 on the interval [0, 1]. Using the strict identity

1 = \int p(x)\, dx = \int p(x(y)) \left| \frac{dx}{dy} \right| dy = \int \bar{p}(y)\, dy,   (2.14)

we can read off a differential equation for \bar{p}, now with p = 1,

\frac{dx}{dy} = \bar{p}(y),   (2.15)

which is solved by

x(y) = \int^{y} \bar{p}(y')\, dy'.   (2.16)

If we can invert x(y) to write down y(x), the set of random variates y_i = y(x_i) is distributed according to \bar{p} when the x_i are a set of random numbers generated with p. Obviously, we cannot choose p proportional to the integrand, and solving the (set of coupled) differential equations can pose problems for high-dimensional integrals, involving the Jacobi determinant |dx/dy|.

Example: \bar{p}(y) = e^{-y}, y \in [0, \infty), and p(x) = 1, x \in [0, 1]. The integral is solved as x(y) = \int_0^y \bar{p}(y')\, dy' = \int_0^y e^{-y'}\, dy' = 1 - e^{-y}, and the inverse function is y(x) = -\ln(1 - x), or equally y(x) = -\ln(x), as 1 - x is equally distributed in [0, 1] if x is equally distributed in [0, 1]. This particular example will be of use for the stochastic time step required in Sec. 4.4 (kinetic Monte Carlo). A second example can be found in Sec. 3.4.2.
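The inversion y(x) = −ln(1 − x) of this example can be sketched in Python (illustrative only):

```python
import math
import random

def exp_variate(rng):
    """Exponentially distributed random number via variable transformation:
    y = -ln(1 - x) with x uniform on [0, 1)."""
    return -math.log(1.0 - rng.random())

rng = random.Random(1)
samples = [exp_variate(rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)   # should approach <y> = 1 for e^{-y}
```

The sample mean converges to the exact first moment of e^{-y}, which is 1.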

EXERCISE DISTRIBUTED RANDOM NUMBERS

Write a code which creates exponentially distributed random numbers by making use of the method of variable transformation.


CODE RANDOM NUMBERS DISTRIBUTED WITH p = e^{-y}, y ∈ [0, ∞)

In[] := f[x_] = Exp[-x]; xmin = 0; xmax = Infinity; (* normalized target distribution *)
finverse[y_] = Y /. First[Solve[Integrate[f[x], {x, xmin, Y}] == y, Y]];
steps = 10000; Do[Y[step] = finverse[Random[]], {step, 1, steps}];
ListPlot[Table[Y[i], {i, 1, steps}]];
dhist = 0.1;
hist = Table[Sum[If[UnitStep[Y[i] - y] == 1 && UnitStep[y + dhist - Y[i]] == 1, 1, 0],
  {i, 1, steps}], {y, 0, 2, dhist}];
wanted[x_] = Fit[hist, Exp[-dhist*x], x];
Show[ListPlot[hist], Plot[wanted[x], {x, 0, 20}]]


Part 3

Polymer Physics


3.1 Introductory examples

3.1.1 Accretion – The Ballistic Deposition Model

Accretion is a process in which particles undergo motion until they encounter another particle to 'stick to' [Gaylord and Wellin (1995)]. As particles coalesce, large, irregularly shaped clusters are formed. If the clusters are free-floating, the process is known as aggregation, and if the clusters are attached to a surface, the process is known as deposition. We will look at one example of each of these processes.

In diffusion-limited aggregation, a particle undergoes Brownian motion until it makes contact with another particle. The free-floating cluster that is formed is known as a diffusion-limited aggregate or DLA. This mechanism underlies a wide variety of natural phenomena, including crystallization, colloidal and polymeric condensation, soot formation, and dielectric breakdown.

In ballistic deposition, a particle starts above a solid substrate or surface and follows a straight-line downward trajectory until it reaches the surface or makes contact with another particle. Deposition has many applications in the area of materials fabrication, such as thin-film formation, vapor deposition, sputtering, and molecular-beam epitaxy.

CODE: vorlesung balistic deposition.m

The ballistic deposition model is easily described in physical terms. The surface is initially smooth, consisting of a single row of particles. A particle is released at a randomly chosen location a certain distance above the surface. It falls straight downwards towards the surface until it reaches the surface or is adjacent to another particle, where it remains.

Algorithm

1) Create a 2 × n matrix in which the top row consists entirely of 0s and the bottom row consists entirely of 1s.

2a) Randomly select a site, c, in the first row of the matrix (this value indicates which lattice column the particle will travel down).

2b) Using the value of c, create the list, L, of the selected matrix column and its nearest-neighbor columns to the right and left.

2c) Find the positions of the first occupied site in each of the three columns in L and determine which one occurs first. Calculate the position, r, in the selected column which is adjacent to the position which occurs earliest (this value is the lattice row where the particle stops).


2d) Place the particle in the lattice in the position given by (r, c).

3) Determine if the first row of the matrix is all 0s (it will be unless the particle was placed there in the previous step) and if it is not, then add a row of 0s to the top of the matrix.

4) Execute the sequence of steps 2 through 3 until the number of occupied sites is t.
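The steps above can be sketched in Python (an illustrative translation of the algorithm, not the `vorlesung balistic deposition.m` code; n columns with hard walls at the sides are assumed):

```python
import random

def ballistic_deposition(n, particles, seed=0):
    """Sketch of the algorithm above: grid[r][c] = 1 marks an occupied site;
    row 0 is the top, and the bottom row is the initial smooth surface."""
    rng = random.Random(seed)
    grid = [[0] * n, [1] * n]           # step 1: empty row above a full row
    for _ in range(particles):
        c = rng.randrange(n)            # step 2a: column of the falling particle
        cols = [c] + [c + d for d in (-1, 1) if 0 <= c + d < n]  # step 2b
        # step 2c: topmost occupied row in the column and its neighbors
        tops = [min(r for r in range(len(grid)) if grid[r][col] == 1)
                for col in cols]
        top = min(tops)
        # land on top of column c, or beside a neighbor's higher top
        r = top - 1 if tops[0] == top else top
        grid[r][c] = 1                  # step 2d: place the particle
        if any(grid[0]):                # step 3: grow the lattice upwards
            grid.insert(0, [0] * n)
    return grid

g = ballistic_deposition(20, 100)
```

Since a particle can stop beside an overhanging neighbor, the growing surface develops the overhangs and voids characteristic of ballistic deposition.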

3.1.2 Diffusion limited aggregation (DLA) model

CODE: vorlesung DLA.m

The DLA model is most easily described in physical terms. At any time during DLA growth, the system consists of a cluster of particles and a particle executing a random walk. Initially, the cluster contains just a single seed particle. The cluster grows via a simple process: a particle starts at a randomly chosen location along the perimeter of a circle centered on the seed, and executes a random walk until the particle is either a certain distance outside the circle, in which case it vanishes, or until the particle is adjacent to the cluster, in which case it joins the cluster. This process is repeated, one particle at a time, until the cluster reaches a given size. The DLA model can be used to study the growth of crystals in materials.

Algorithm

The model employs a two-dimensional square lattice.

• 1) Create a list L containing the lattice site p = (0, 0).

The following sequence of steps 2 through 3 will be executed a number of times (this is described in step 4), first using the value of the initial cluster L, given in step 1, and then using the value of L resulting from the previous run-through of the sequence.

• 2a) Determine the lattice site nearest to a randomly chosen location along the circumference of a circle whose radius, R, equals a specified value, S, plus the maximum absolute coordinate value in L.

• 2b) Starting at the selected lattice site, execute a lattice walk until the step location is either at a distance greater than (R + S), or on a site that is contiguous (adjacent) to a site in L. Call the final step location of the walk p.

• 3) Check if p is adjacent to a site in the L list and if it is, add p to L.

• 4) Execute the sequence of steps 2 through 3 until the length of L reaches a value n.
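A Python sketch of steps 1 through 4 (illustrative, not the `vorlesung DLA.m` code; the kill radius R + S is as described above, and S = 5 is an assumed offset):

```python
import math
import random

def dla(n, s=5, seed=0):
    """Grow a DLA cluster of n sites on a square lattice, one walker at a
    time, following the algorithm above; the cluster is a set of (x, y)."""
    rng = random.Random(seed)
    cluster = {(0, 0)}                                  # step 1: the seed
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while len(cluster) < n:
        rmax = max(max(abs(x), abs(y)) for x, y in cluster)
        R = rmax + s                                    # step 2a: launch circle
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x, y = round(R * math.cos(phi)), round(R * math.sin(phi))
        while True:                                     # step 2b: lattice walk
            if x * x + y * y > (R + s) ** 2:
                break                                   # walker escaped, discard
            if any((x + dx, y + dy) in cluster for dx, dy in moves):
                cluster.add((x, y))                     # step 3: stick to cluster
                break
            dx, dy = rng.choice(moves)
            x, y = x + dx, y + dy
    return cluster

c = dla(100)
```

Every site except the seed is added adjacent to an existing cluster site, so the cluster is connected by construction.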

3.1.3 Fractal dimension of DLA clusters

As DLA growth proceeds, the shape of the aggregate or cluster becomes increasingly irregular and tenuous. This occurs as the result of a 'screening' effect which increases the likelihood that a meandering particle will contact an exposed exterior portion of the DLA before it penetrates into a more shielded interior portion (this effect can be seen by looking at the locations of sites in the cluster as a function of when they joined the cluster, in the DLA graphics above). We can get a feel for the compactness of a DLA (i.e., how it fills space) by measuring its fractal dimension. There are a number of fractal dimensions that can be measured (e.g., via the dependence of the radius of gyration of the cluster on the size of the cluster) and while the various fractal measures give different quantitative results, they display a universal trend, namely that the fractal dimension of a DLA decreases with increasing size to a limiting value at large sizes. Here we will follow the density of space occupied by the cluster as a function of size, and extract the fractal dimension from the relationship (mass in volume of given size) ∝ size^{d_f}.

Computing the fractal dimension of a single polymer

The following steps are used to compute the fractal dimension of the DLA. We assume that the center of the object is at, or has been shifted to, the origin.

1) A fractal data list F of ordered pairs is constructed,

F = \{2r, \rho(r)\}_{r = 1, 2, .., \mathrm{Max}(S)},   (3.1)

where S is a list of the site positions (x_i, y_i) in the cluster and \rho(r) is the number of sites in S that lie within a square running between -r and r in each direction, divided by the total number of sites in the square. The value of \rho is calculated for a given square size using

\rho(r) = \frac{1}{(2r+1)^2} \sum_i \Theta(r - |x_i|)\, \Theta(r - |y_i|),   (3.2)

where \Theta(x) = 1 for x \ge 0 and \Theta(x) = 0 otherwise.

2) The fractal dimension d_f of the DLA structure is determined by linear regression on the logarithm of the list F,

\ln F_2 = c + d_f \ln F_1,   (3.3)

which is equivalent to the above relation defining the fractal dimension, which reads

\rho \propto (2r)^{d_f}.   (3.4)
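The regression can be sketched as follows (a Python illustration; as assumptions beyond the text, the mass-size reading of the relation is used, the box side 2r + 1 replaces 2r to avoid a bias at small r, and a filled square serves as the sanity check):

```python
import math

def fractal_dimension(sites):
    """Least-squares slope d_f of ln(mass inside a square of side 2r+1)
    versus ln(2r+1); sites are assumed to be centered on the origin."""
    rmax = max(max(abs(x), abs(y)) for x, y in sites)
    xs, ys = [], []
    for r in range(1, rmax + 1):
        mass = sum(1 for x, y in sites if abs(x) <= r and abs(y) <= r)
        xs.append(math.log(2 * r + 1))
        ys.append(math.log(mass))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# sanity check: a compact (filled) square cluster must give d_f = 2
square = [(x, y) for x in range(-20, 21) for y in range(-20, 21)]
df = fractal_dimension(square)
```

For 2D DLA clusters the same routine yields d_f below 2, reflecting their tenuous, screened growth.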

EXERCISE DLA CLUSTERS

Simulate a DLA cluster as described in Sec. 3.1.2 and compute the fractal dimension of an ensemble of DLA clusters. The overall DLA process is stochastic, and it is therefore necessary to take an average of the fractal dimension (or any other quantity) over a number of randomly generated DLAs.


3.2 Random walk

3.2.1 Analytic considerations

Let us consider a random walk with N positions and constant step length of size b_0 in d dimensions. We denote the positions of the walk as x_i, i = 1, 2, .., N, and the N - 1 segments as Q_i ≡ x_{i+1} - x_i. Constant step length means Q_i^2 = b_0^2. A random walk is a sequence of random unit segments. The random walk is an uncorrelated walk and serves as a model for highly flexible macromolecules, to describe scattering processes, diffusion etc. In fact, it serves as a model for any macromolecule because, on a length scale large compared to a chemical unit, and dependent on the actual chemistry, long molecules behave in a universal way and are equivalent to a random walk on long scales. The length b_0 then stands for a characteristic length on which correlations between segments are lost. In order to describe length scales below b_0, and to account for excluded volume, for example, we need to modify this simple model, as will be done in the subsequent section.

Why is the random walk a problem to be treated by numerical integration? To understand that, let us consider the d = 2-dimensional random walk. Just hereafter, we will proceed making calculations for the d-dimensional random walk. A segment vector can be written in polar coordinates as

Q_i = b_0 (\cos\phi_i, \sin\phi_i),   (3.5)

and we can calculate averages such as

〈Q_i〉 = ∫₀^{2π} Q_i dφ_i / ∫₀^{2π} dφ_i = (2π)^{-1} ∫₀^{2π} (cos φ_i, sin φ_i) dφ_i = 0,

〈Q_i · Q_j〉 = (2π)^{-2} b_0² ∫₀^{2π} ∫₀^{2π} (cos φ_i cos φ_j + sin φ_i sin φ_j) dφ_i dφ_j = δ_ij b_0²,

〈Q_i Q_j〉 = (b_0²/d) 1 δ_ij, (3.6)

etc., where δ_ij is the Kronecker symbol, δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j, and 1 denotes the unity matrix. We were able to solve these integrals analytically, so there is no need to perform simulations. We can further calculate quantities like the distribution function f(R)


of the end-to-end vector R ≡ x_N − x_1, which turns out (as we will understand later in Chapt. 5) to be a Gaussian of the form

f(R) = Z^{-1} e^{−αR²}, (3.7)

and we can calculate α and the normalization constant Z in front of the exponential by requiring ∫ f(R) dR = 1 as follows (here, for arbitrary d, the volume element in d dimensions is w(r) = π^{d/2} r^{d−1} d / Γ(1 + d/2), such that the volume of the d-ball of radius R is ∫ dR = ∫₀^R w(r) dr = π^{d/2} R^d / Γ(1 + d/2)):

1 = ∫ f(R) dR = (1/Z) ∫ e^{−αR²} dR = (1/Z) ∫₀^∞ e^{−αr²} w(r) dr = (1/Z) (π/α)^{d/2} (3.8)

to arrive at

Z = (π/α)^{d/2}, (3.9)

where we made use of the Gaussian integral involving the Gamma function Γ:

∫₀^∞ x^n e^{−αx²} dx = Γ((n+1)/2) / (2 α^{(n+1)/2}), (3.10)

valid for α > 0 and n > −1. Using (3.10) we can further write down ratios between moments, again for the d-dimensional walk, such as

〈R^{n+2}〉/〈R^n〉 = ∫₀^∞ R^{n+2} w(R) e^{−αR²} dR / ∫₀^∞ R^n w(R) e^{−αR²} dR = (n + d)/(2α). (3.11)

This formula can be used to recursively calculate all moments, further noticing that for the first three moments we find

〈R⁰〉 = 〈1〉 = 1,
〈R¹〉 = Γ((d+1)/2) / (Γ(d/2) √α),
〈R²〉 = d/(2α). (3.12)

In particular, one has 〈R⁴〉/〈R²〉² = 1 + 2/d for a Gaussian distribution, which serves as a minimal test of whether or not a Gaussian distribution underlies the measured moments. We already have an expression for Z in terms of α above; (3.12) provides us with an expression for α in terms of the second moment 〈R²〉,

α = (d/2) 〈R²〉^{-1}. (3.13)
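
The Gaussian ratio test 〈R⁴〉/〈R²〉² = 1 + 2/d is easy to check numerically. A minimal Python sketch (illustrative, not from the lecture codes; `moment_ratio` is a name chosen here), drawing end-to-end vectors directly from a d-dimensional Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def moment_ratio(d, samples=200000):
    """Estimate <R^4>/<R^2>^2 for Gaussian end-to-end vectors in d dims."""
    # each Cartesian component is Gaussian; R^2 is the squared norm
    R2 = np.sum(rng.normal(size=(samples, d))**2, axis=1)
    return float(np.mean(R2**2) / np.mean(R2)**2)
```

For d = 3 the estimate should come out close to 1 + 2/3 ≈ 1.667, for d = 2 close to 2.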

We are left with the second, yet unspecified, moment 〈R²〉. By making use of (3.6) we can evaluate the mean squared end-to-end distance 〈R²〉 as follows,

〈R²〉 ≡ 〈(x_N − x_1)²〉 = 〈(∑_{i=1}^{N−1} Q_i) · (∑_{j=1}^{N−1} Q_j)〉 = ∑_{i,j} 〈Q_i · Q_j〉 = ∑_{i,j} δ_ij b_0² = b_0² ∑_i 1 = (N − 1) b_0². (3.14)
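
The result (3.14) is quickly verified by sampling; a Python sketch (the lecture codes are in Matlab, so this is an illustrative translation, and `mean_squared_ree` is a name chosen here):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_squared_ree(N, b0=1.0, samples=20000):
    """Monte Carlo estimate of <R^2> for 2D random walks with N positions."""
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(samples, N - 1))
    Q = b0 * np.stack([np.cos(phi), np.sin(phi)], axis=-1)  # N-1 unit segments
    R = Q.sum(axis=1)                                       # end-to-end vectors
    return float(np.mean(np.sum(R**2, axis=-1)))
```

For N = 10 and b_0 = 1 the estimate should fluctuate around (N − 1) b_0² = 9.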


Inserting (3.9), (3.13), (3.14) into (3.7) finally yields the properly normalized distribution function for the end-to-end vector,

f(R) = (d / (2π(N−1)b_0²))^{d/2} exp(−dR² / (2(N−1)b_0²)), (3.15)

for the d-dimensional random walk. Notice,

f(R) dR = f(R) w(R) dR, w(R)|_{d=2} = 2πR, w(R)|_{d=3} = 4πR², (3.16)

such that f(R) has its maximum at R = 0, while the radial distribution function h(R) = f(R) w(R) vanishes at R = 0 and exhibits a maximum at some R_max. For d = 3, R_max = 1/√α; for arbitrary dimensions, R²_max = (d − 1)/(2α). Although everything is known about the random walk, we can try to recover these results using a simple Monte Carlo scheme, where 1) a sequence of random vectors Q_i, i = 1, 2, .., N − 1 is generated, and 2) the end-to-end distance R is measured and recorded. Steps 1) and 2) are repeated a number of times. From the sampled data we calculate the distribution of end-to-end vectors, the moments of the distribution etc. and compare with the expected results. Of course this exercise can be extended, with tiny modifications to the Monte Carlo program, to treat cases for which no analytic solution is available.

3.2.2 Radius of gyration

The tensor of gyration (or tensor of inertia) of a chain can be used to analyze its shape, anisotropy, and size. In contrast to the end-to-end vector R it requires knowledge of all coordinates along the chain. It is an important measure because it can be directly measured in a scattering experiment (in the regime of small wave vector transfer, the so-called Guinier regime). The tensor of gyration R_g is defined as

R_g ≡ (1/N) ∑_{i=1}^N (x_i − x_cm)(x_i − x_cm), (3.17)

where x_cm is the chain's center of mass, x_cm = N^{-1} ∑_{i=1}^N x_i, and the x_i's, i = 1, 2, .., N, are the coordinates of the 'beads' on the chain with N − 1 segments. We can rewrite the squared radius of gyration R_g², defined as the trace of the gyration tensor, also as a two-particle


average,

R_g² ≡ Tr R_g = (1/N) ∑_{i=1}^N (x_i² − 2 x_i · x_cm + x_cm²)
= (1/N) ∑_{i=1}^N ∑_{j=1}^N ((1/N) x_i² − (2/N) x_i · x_j + (1/N) x_i · x_j)
= (1/N²) ∑_{i,j} (x_i² − x_i · x_j)
= (1/(2N²)) [∑_{i,j} (x_i² − x_i · x_j) + ∑_{i,j} (x_j² − x_j · x_i)]
= (1/(2N²)) ∑_{i,j} (x_i² − 2 x_i · x_j + x_j²)
= (1/(2N²)) ∑_{i,j} (x_i − x_j)²
= (1/N²) ∑_i ∑_{j>i} (x_i − x_j)². (3.18)
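
The equivalence of the trace form and the pairwise form of (3.18) can be checked directly; a small Python sketch (illustrative; the function names are chosen here):

```python
import numpy as np

def rg2_trace(x):
    """Tr of the gyration tensor: mean squared distance from center of mass."""
    xc = x.mean(axis=0)
    return float(np.mean(np.sum((x - xc)**2, axis=1)))

def rg2_pairs(x):
    """Pairwise form of (3.18): (1/N^2) sum_{i<j} (x_i - x_j)^2."""
    N = len(x)
    diff = x[:, None, :] - x[None, :, :]      # all pair differences
    return float(np.sum(diff**2) / (2.0 * N**2))  # full sum double-counts pairs
```

Both functions should agree to machine precision for any set of coordinates.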

Only distances between particles enter the final formula (3.18), which is the reason for its close connection with a scattering intensity, where interference effects are determined by relative distances between scatterers.

EXERCISE RADIUS OF GYRATION FOR A RANDOM WALK

Show that the ratio between mean squared end-to-end distance 〈R²〉 and mean squared radius of gyration 〈R_g²〉 is 〈R²〉/〈R_g²〉 = 6. I

3.2.3 Monte Carlo simulation for the random walk

EXERCISE RANDOM WALK

Write a function which generates a 2D random walk with constant step length, and another function which samples the squared end-to-end distance, squared radius of gyration, and the end-to-end distribution over random walks. I

CODE 2D RANDOM WALK, MOMENTS OF THE END-TO-END DISTRIBUTION

À function randomwalk2D(N,b0,samples,moments)
R=zeros(1,moments);
for i=1:samples,
 phi=2*pi*rand(N-1,1); Q=b0*[cos(phi) sin(phi)]; x=[[0 0];cumsum(Q)];
 R=R+norm(x(N,:)-x(1,:)).^(1:moments);
end;
figure(1); plot(x(:,1),x(:,2));
['moments <R>, <R^2>, .., <R^' num2str(moments) '> : ' num2str(R/samples)] I


[Figure: sample trajectory of a 2D random walk]

CODE RANDOM WALK END-TO-END DISTRIBUTION FUNCTION

À function rwdist(N,b0,samples)
Ree=zeros(2,samples);
for i=1:samples,
 phi=2*pi*rand(N-1,1); Q=b0*[cos(phi) sin(phi)];
 Ree(1:2,i)=norm(sum(Q)).^[1;2];
end;
figure(1); subplot(1,2,1); hist(Ree(1,:),N); ylabel('f(R)');
subplot(1,2,2); hist(Ree(2,:),N); ylabel('f(R^2)'); I

[Figure: histograms f(R) and f(R²) of the sampled end-to-end distance]


3.3 Scattering experiments

[Figure: scattering geometry: incoming beam (q₀, ω₀), scattered beam (q₁, ω₁), scattering vector q = q₁ − q₀]

The structure of liquids on the Angstrom and nanometer scale is usually investigated using neutron scattering experiments. Neutrons have a mass of 1.675 × 10⁻²⁷ kg, carry no charge, have spin 1/2, and a magnetic moment 0.001 µ_B with Bohr magneton µ_B. They can be generated in the laboratory through the reaction of Beryllium with high-energetic α-particles, while producing carbon, 9Be + α → 12C + n. Larger particle fluxes are usually generated (Institut Laue-Langevin) via moderated (heavy) water in the cooled fission process 235U + n → Kr + Ba + (≈)2.5 n. Neutrons with wavelength 0.18 nm possess a kinetic energy E = p²/(2m) = 25 meV with p = ℏq = h/λ (de Broglie). The neutrons are detected indirectly (secondary processes) through ionized particles, e.g. 3He + n → 3H + p.

A beam with intensity I₀ and wave vector q₀ (energy ℏω₀, frequency ω₀) hits the target (liquid), and neutrons are scattered (usually in all directions) through scattering events. One can then selectively detect scattered neutrons with wave vector q₁ (energy ℏω₁, frequency ω₁). The detected intensity is denoted as I. The difference

q = q1 − q0 (3.19)

is the so-called scattering vector. For elastic scattering (ω₁ = ω₀) the norm of q is given by

q = 2 q₀ sin(ϑ/2), (3.20)

where ϑ is the angle between q₀ and q₁. Further one has q₀ = 2π/λ₀, where λ₀ is the wavelength of the incoming beam. The contribution of a particle at position r_i to the amplitude of the scattered beam is proportional to a_i exp(−iq · r_i); a_i is the scattering amplitude and depends on the interaction between particle and neutron (values are tabulated). For the detected scattering intensity one has

I ∝ 〈(∑_i a_i e^{−iq·r_i}) (∑_j a_j* e^{iq·r_j})〉 I₀. (3.21)

The bracket denotes a time average. The summation extends over all N particles contained in the scattering volume. For identical particles (a_i = a_j = a; coherent scattering) we have

I(q) ∝ I₀ |a|² N S(q), (3.22)

Page 38: 2 Monte Carlo basics 17 - ETH Zürich - Homepage · 6 Model: microscopic transition rates (w(x->y)) MASTER equation for p(x) supplemented by initial conditions smooth transitions

38 PART 3. POLYMER PHYSICS

3.3.1 Static structure factor

with the static structure factor

S(q) = (1/N) 〈(∑_i e^{−iq·r_i}) (∑_j e^{iq·r_j})〉 (3.23)

and N = 〈N〉. In case the 'particles' are themselves made up of a number of particles (electrons), an additional factor |F(q)|² enters the right hand side of (3.22); the quantity F(q) is the form factor. The structure factor can be written as

S(q) = 1 + (1/N) 〈∑_{i,j≠i} e^{−iq·r_ij}〉, (3.24)

and because only differences r_ij ≡ r_i − r_j appear in (3.24), S(q) can be expressed in terms of the pair correlation function g(r) (definition see below) as

S(q) = 1 + n ∫ e^{−iq·r} g(r) d³r, (3.25)

with particle number density n, or equivalently, as

S(q) = 1 + (2π)³ n δ(q) + n ∫ e^{−iq·r} (g(r) − 1) d³r. (3.26)

The latter expression has the advantage that the integrand is Fourier integrable (for fluids, g → 1 for r → ∞). The inverse relationship reads

g(r) = 1 + 1/(8π³n) ∫ e^{iq·r} (S(q) − 1) d³q. (3.27)

In case g(r) and S(q) depend only on the norms of r and q, we can further write

S(q) = 1 + (4πn/q) ∫₀^∞ r sin(qr) (g(r) − 1) dr,
g(r) = 1 + 1/(2π²nr) ∫₀^∞ q sin(qr) (S(q) − 1) dq. (3.28)

Since S(q) can only be determined within a limited range of q-values, g(r) cannot be determined completely using (3.28). For small wave vector transfers, which are experimentally difficult to access, we have

S(0) = (1/N̄) 〈N²〉 = N̄ + (1/N̄) 〈δN²〉, (3.29)

where δN = N − N̄ is the fluctuation of the particle number N in the scattering volume, and N̄ = 〈N〉 the time-averaged number of particles. At the same time, S(0) can be related to the isothermal compression modulus K = n(∂p/∂n)_T, with pressure p.


3.3.2 Pair correlation function

For a many-particle system with particle coordinates r_i, i = 1, .., N, the two-particle density n^(2) is usually defined as follows:

n^(2)(r_a, r_b) ≡ 〈∑_i ∑_{j≠i} δ(r_a − r_i) δ(r_b − r_j)〉. (3.30)

For a homogeneous system, we can integrate over one of the arguments to obtain the so-called pair correlation function g:

g(r) = (V/N²) ∫ n^(2)(r′ + r, r′) d³r′. (3.31)

3.3.3 Form factors

The isotropic form factor for a sphere of radius R (constant particle density) had been calculated by Lord Rayleigh. It reads

F(q)_sphere = F(qR) = 3 [sin(qR) − (qR) cos(qR)] / (qR)³. (3.32)

For cylinders with radius R and length L, oriented in direction u, the anisotropic form factor becomes

F(q) = [2 J₁(R|q × u|) / (R|q × u|)] · [sin((L/2) q · u) / ((L/2) q · u)], (3.33)

with the Bessel function J₁.

with the Bessel function J1. For a single polymer chain with N particles, segemnt vectorsu1..,N−1, the formfactor is related to the generalized gyration tensor R

(n)gyr of rank n as follows

F (q) = N

(1 +

∞∑i=1−

(−1)i

(2i)!2q(2i) ¯2i R(2i)

gyr

)=

1

N

⟨N +

N∑

i,j 6=i

cos(q · rij)

⟩,(3.34)

R(n)gyr ≡ 1

N2

⟨N∑

i,j<i

r(n)ij

⟩=

1

N2

N∑i

N∑j<i

i−1∑i1,i2..,in=j

〈ui1ui2 . . .uin〉 , (3.35)

(3.36)

For small wave numbers qR_G ≪ 1 (Guinier regime), the expansion reads

F(q) = N (1 − qq : R^(2)_gyr) + o(q⁴) = N − (1/N) qq : ∑_i^N ∑_{j<i}^N ∑_{l=j}^{i−1} ∑_{k=j}^{i−1} 〈u_l u_k〉 + o(q⁴). (3.37)


3.4 Freely rotating chain

The freely rotating chain model is characterized through the segment-segment correlations

〈Q_i · Q_j〉 = b_0² (cos θ)^{|i−j|}, 〈Q_i〉 = 0, (3.38)

i.e., segments become uncorrelated at large contour distances, lim_{|i−j|→∞} 〈Q_i · Q_j〉 = 0, and b_0 still denotes the segment length since 〈Q_i²〉 = b_0². For the mean

squared end-to-end distance we obtain

〈R²〉 ≡ 〈(x_N − x_1)²〉 (3.39)
= ∑_{i,j=1}^{N−1} 〈Q_i · Q_j〉
= ∑_{i=1}^{N−1} (∑_{j=1}^{i−1} 〈Q_i · Q_j〉 + 〈Q_i²〉 + ∑_{j=i+1}^{N−1} 〈Q_i · Q_j〉)
= ∑_{i=1}^{N−1} 〈Q_i²〉 + b_0² ∑_{i=1}^{N−1} (∑_{j=1}^{i−1} (cos θ)^{i−j} + ∑_{j=i+1}^{N−1} (cos θ)^{j−i})
= (N − 1) b_0² + b_0² ∑_{i=1}^{N−1} (∑_{k=1}^{i−1} cos^k θ + ∑_{k=1}^{N−1−i} cos^k θ). (3.40)

Notice an identity which leads to the definition of the dimensionless persistence length s_p:

cos^k θ = e^{ln cos^k θ} = e^{k ln cos θ} ≡ e^{−k/s_p}, s_p ≡ −1/ln cos θ. (3.41)

Since the decay of cos^k θ with increasing k is rapid for θ ≠ 0, we can approximately replace the summations in (3.40) by an infinite series,

∑_{k=1}^{i−1} cos^k θ + ∑_{k=1}^{N−1−i} cos^k θ → 2 ∑_{k=1}^∞ cos^k θ = 2 cos θ / (1 − cos θ), (3.42)

to finally evaluate (3.40) as follows:

〈R²〉 ≈ (N − 1) b_0² + 2(N − 1) b_0² cos θ / (1 − cos θ),
〈R²〉 = (N − 1) b_0² C∞, C∞ ≈ (1 + α)/(1 − α), α ≡ cos θ, (3.43)

where C∞ is called the 'characteristic ratio', which equals unity for the random walk as is evident from (3.14). The exact result for the freely rotating chain is

C∞ = (1 + α)/(1 − α) − (2α/(N − 1)) (1 − α^{N−1})/(1 − α)², (3.44)

from which we recover the two limiting cases of a rod (θ → 0, α → 1 → C∞ = N − 1) and a random walk (θ = arccos 0 = π/2 → α = 0 → C∞ = 1).

Page 41: 2 Monte Carlo basics 17 - ETH Zürich - Homepage · 6 Model: microscopic transition rates (w(x->y)) MASTER equation for p(x) supplemented by initial conditions smooth transitions

3.4. FREELY ROTATING CHAIN 41

CODE ROTATE 3D VECTOR v AROUND AXIS a BY AN ANGLE t

À function new=rotate(v,a,t)
a=a/norm(a); S=[0 -a(3) a(2); a(3) 0 -a(1); -a(2) a(1) 0];
D=eye(3)+(1-cos(t))*S*S-sin(t)*S;
new=D*v; I

EXERCISE MONTE CARLO SIMULATION OF A FREELY ROTATING CHAIN

Create a chain with N − 1 segments and fixed bond angle θ between adjacent segments. To sample over an ensemble of freely rotating chains either create a number of these chains, or modify the initial chain by rotations of a subchain around an arbitrarily selected segment using the aforementioned function. Calculate the mean squared end-to-end distance (3.39) and mean squared radius of gyration (3.18) for various N and θ and test the exact results provided in this section. I

3.4.1 Wormlike chain model

The wormlike chain model or Kratky-Porod model [Rubinstein and Colby (2003)] is a special case of the freely rotating chain model for small bond angles, i.e., α = cos θ ≈ 1 − θ²/2. For small x, ln(1 − x) ≈ −x, so ln cos θ ≈ −θ²/2, and the dimensionless persistence length s_p, defined in (3.41), becomes s_p ≈ 2/θ². The characteristic ratio simplifies to

C∞ = (1 + α)/(1 − α) ≈ (2 − θ²/2)/(θ²/2) ≈ 4/θ². (3.45)

Since a wormlike chain behaves as a random walk on large length scales, one introduces a Kuhn length b which equals the step length of the corresponding random walk, defined as the ratio between mean squared end-to-end distance and maximum stretch L (which is often equivalent to the contour length, unless local chemical details prevent a full stretch) of the walk,

b ≡ 〈R²〉/L. (3.46)

For the random walk we derived 〈R²〉 = (N − 1) b_0², and the contour length is L = (N − 1) b_0, therefore b = b_0. For the wormlike chain L = (N − 1) b_0 cos(θ/2) and 〈R²〉 = (N − 1) b_0² C∞ with C∞ given in (3.45). Therefore,

b = b_0 C∞ / cos(θ/2) ≈ b_0 (4/θ²) ≈ 2 b_0 s_p (3.47)

is the Kuhn length for the wormlike chain and equals two times the persistence length l_p = s_p b_0 of the wormlike chain. For a wormlike chain, the mean squared end-to-end distance 〈R²〉 and radius of gyration 〈R_g²〉, defined in (3.18), become

〈R²〉 = 2 l_p L (1 − (l_p/L) [1 − e^{−L/l_p}]), (3.48)

〈R_g²〉 = l_p L/3 − l_p² + (2 l_p³/L²) [L − l_p (1 − e^{−L/l_p})]. (3.49)


In the limit of flexible chains, P ≡ l_p/L → 0, we obtain 〈R²〉/〈R_g²〉 = 6 + 6P(2 + exp[−1/P]) + O(P²). In the opposite limit of stiff chains, (3.49) yields 〈R²〉 = L² and 〈R_g²〉 = L²/12.

Another conventional approach to wormlike chains is to consider the dimensionless bending hamiltonian

g〉 = L2/12.Another conventional approach to wormlike chains is to consider the dimensionless bendinghamiltonian

H/kBT = −kθ

∑i

cos θi = −kθ

∑i

ui · ui+1, (3.50)

where u_i is a segment (unit) vector, and k_θ a dimensionless strength. In the canonical ensemble, we can calculate moments from the distribution function f(θ) ∝ exp(−H/k_BT). The most important moment evaluates as

α ≡ 〈cos θ〉 = coth(kθ)− 1/kθ ≡ L(kθ), (3.51)

where L is known as Langeving function. The Kuhn length b can therefore be expressed interms of kθ, and also the dimensionless persistence length sp = −1/ ln α as

b = b0

(1 + α

1− α

)= b0

(2kθ

1 + kθ − kθ coth kθ

− 1

)= b0 coth

(2

sp

). (3.52)

Notice that limkθ→∞ b = 2kθb0 = 12spb0 and limkθ→0 b = b0 and limkθ→0 sp = 0. If we wish

to study a wormlike chain with given b or given sp, we have to find the proper weight kθ bynumerically solving (3.52). Actually, equation (3.52) provides the connection between α, b,sp and kθ for the wormlike chain based on (3.50), such as the function α = α(kθ) which wehave written down already in (3.51). We remind that for a freely rotating chain, we can findfrom (3.52) also the fixed bond angle θ = cos−1 α for given b or sp.

3.4.2 Simulation of the wormlike chain

In order to simulate a wormlike chain we could realize the angle distribution (via Monte Carlo) corresponding to the Hamiltonian (3.50), where θ_i denotes the bond angle between segments i and i + 1. Here, I present a direct scheme which does not require a Metropolis criterion. We wish to test (3.49) by simulating the discrete single wormlike chain with bond length b_0 = 1, number of nodes N, contour length L = (N − 1) b_0, and given persistence length s_p. This hamiltonian is simple enough to make use of the method of variable transformation introduced in Sec. 2.2.7 as follows, using uniformly (p-)distributed random numbers on the interval [0, 1]:

angle between segments i and i + 1, as given in (3.50). Here, I present a direct schemewhich does not require a Metropolis criterion. We wish to test (3.49) by simulating thediscrete single wormlike chain with bond length b0 = 1, number of nodes N , contour lengthL = (N − 1)b0, and given persistence length sp. This hamiltonian is simple enough tomake use of the method of variable transformation introduced in Sec. 2.2.7 as follows, usingequally (p-) distributed random numbers on the interval [0,1]:

1 = ∫₀¹ p(x) dx = ∫ p(x(y)) |dx/dy| dy = ∫₀^π q(θ) sin θ dθ = ∫₋₁¹ q(z) dz, (3.53)

p(x) = 1, (3.54)
q(θ) = B e^{−H/k_BT} = B e^{k_θ cos θ} = B e^{k_θ z} = q(z), (3.55)
B⁻¹ = ∫₋₁¹ e^{k_θ z} dz = k_θ⁻¹ e^{−k_θ} (e^{2k_θ} − 1), (3.56)
∫₀^X dx = ∫₋₁^Z q(z) dz → X = B ∫₋₁^Z e^{k_θ z} dz = (e^{k_θ(1+Z)} − 1)/(e^{2k_θ} − 1). (3.57)


We thus obtain, by inverting the latter relationship, the rule to obtain q-distributed random numbers Θ, via Z = cos(Θ), from uniformly (p-)distributed random numbers X ∈ [0, 1]:

Z = −1 + k_θ⁻¹ log[1 + (e^{2k_θ} − 1) X], Θ = cos⁻¹ Z (3.58a)
  = k_θ⁻¹ log[e^{−k_θ} + 2 sinh(k_θ) X], (3.58b)

where it depends on the magnitude of k_θ which of the identical expressions (3.58) is more suitable for numerical implementation; in any case Z is difficult to evaluate at k_θ > 100 using (3.58) directly. In practice, for k_θ > 50, a very useful approximation is

Z ≈ 1 + k_θ⁻¹ log X (for k_θ > 50). (3.59)

For small k_θ the Taylor expansion of Z gives Z = 2X − 1 + o(k_θ), as it should; z = cos θ should be uniformly distributed in [−1, 1] in the limit k_θ → 0 (random walk). The mean 〈Z〉 = ∫₀¹ Z(X) dX of course matches the Langevin function (3.51). This algorithm turns out to be very efficient compared with one which starts with a random walk and uses the Metropolis criterion to accept configurations. The simulation confirms (3.49) to high precision.
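
The transformation (3.58b) and the check 〈Z〉 = L(k_θ) can be sketched in Python (illustrative; the seed and names are chosen here):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_cos_theta(k, n):
    """Draw Z = cos(theta) with density ~ exp(k z) on [-1, 1], Eq. (3.58b)."""
    X = rng.uniform(size=n)
    return np.log(np.exp(-k) + 2.0 * np.sinh(k) * X) / k

def langevin(k):
    """Langevin function L(k) = coth(k) - 1/k."""
    return 1.0 / np.tanh(k) - 1.0 / k
```

The samples always lie in [−1, 1] (X = 0 maps to Z = −1, X = 1 to Z = 1), and their mean converges to L(k_θ).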

EXERCISE IMPLEMENT THE WORMLIKE CHAIN ALGORITHM AND CONFIRM THEORETICAL PREDICTIONS

Consider the 3D discrete wormlike chain composed of N linearly connected segments (unit vectors). The energy of the wormlike chain is proportional to the mean cosine of bond angles 〈cos θ〉 between adjacent segments. At constant temperature, bond angles are distributed according to the Boltzmann weight p(θ) ∝ exp(k_θ cos θ), where k_θ is a parameter related to stiffness, or persistence length l_p. Generate p-distributed random angles θ by using the method of variable transformation (see script), starting from uniformly distributed random numbers. Construct M freely rotating chains (see script) using N of these angles for each chain, and calculate the mean squared radius of gyration R_g² (see script) for the M wormlike chains with N segments. Visualize sample wormlike chains with N = 100, l_p = 0.5 (l_p = 5, and l_p = 20). Compare your results, for M = 1000, with the approximate formula 〈R_g²〉 ≈ l_p N/3 − l_p² + (2 l_p³/N²)[N − l_p(1 − exp(−N/l_p))]. Hint: Construct the freely rotating chain segment-wise, using a bond angle and another random angle in the plane perpendicular to the previous segment. I


3.5 Self-avoiding walk (SAW)

In contrast to a random walk, a self-avoiding walk takes into account the excluded volume of the walk. This is of particular importance when applying walk statistics to polymer problems. For a 3D self-avoiding walk, Flory calculated a mean squared end-to-end distance 〈R²〉 = b_0² N^{6/5}. The exact values of the exponent are 3/2 (2D) and 2 × 0.588 ≈ 1.18 (3D).

CODE SQUARED END-TO-END DISTANCE OF 2D SELF-AVOIDING WALK ON SQUARE LATTICE

À function saw(N,b0,samples)
direction=[1 0; -1 0; 0 1; 0 -1]; R=[];
for sample=1:samples,
 W=randint(N,1,[1 4]); walk=[[0 0];cumsum(direction(W,:))];
 if size(unique(walk,'rows'),1) == N+1,
  R=[R;sum(walk(N+1,:).^2)];
  figure(1); hold off; plot(walk(:,1),walk(:,2),'.-'); pause(0.1);
 end;
end; I=length(R);
['<R^2> = ' num2str(mean(R)) ' +/- ' num2str(std(R)/sqrt(I)) ' from ' num2str(I) ' realizations'] I

[Figure: sample 2D self-avoiding walk on the square lattice]

In this code, the vectorized call randint(N,1,[1 4]) (instead of N separate calls randint(1,1,..)) is only introduced to heavily increase the speed of the algorithm. spalloc(n,m,N) initializes a sparse n × m matrix with no more than N nonzero entries. Notice however the poor efficiency of the above 'direct' implementation. Two algorithms more often used in lectures are the Slithering Snake and Pivot algorithms. The fastest available algorithm should be the one of Erpenbeck. All three algorithms will be explained in the subsequent paragraphs.

3.5.1 Slithering Snake algorithm

1a) Create an SAW of n steps, each in the same direction, on a two-dimensional square lattice.

1b) Calculate the square end-to-end distance of the SAW and call it squaredist.

The following sequence of steps 2-4 will be executed a number of times (this is described in step 5), first using the initial SAW configuration given in step 1, and then using the SAW configuration resulting from the previous run-through of the sequence.

Page 45: 2 Monte Carlo basics 17 - ETH Zürich - Homepage · 6 Model: microscopic transition rates (w(x->y)) MASTER equation for p(x) supplemented by initial conditions smooth transitions

3.5. SELF-AVOIDING WALK (SAW) 45

We will describe the sequence in terms of an arbitrary SAW configuration, which we'll call config.

2) Randomly select a step increment (i.e., create a single, randomly oriented segment) and use it to add a step to config.

3) Check if the new step intersects any previous step. If it does, reverse config (i.e., reverse the order of steps in config) and call the resulting SAW path. If it does not, add the new step to the end of config, discard the first step of config (thereby conserving the number of steps in the walk) and call the resulting SAW path.

4) Calculate the square end-to-end distance of path, add the value to squaredist, and return path.

5) Execute the sequence of steps 2-4 m times, starting with the initial SAW.

6) Determine the mean-square end-to-end distance.

We see that while a single rearrangement from the initial fully extended configuration only reduces the dimensions of the walk slightly, there is a substantial change after many rearrangements by slithering. The Slithering Snake program can be used to determine the critical exponent of an SAW. However, there are two drawbacks to using the slithering snake algorithm. One problem is that it is possible for the SAW to find itself in a 'double cul-de-sac' shape from which it cannot extricate itself. An example of this situation is shown below.

The possibility of an SAW becoming trapped in a configuration from which it cannot escape makes this SAW sampling method non-ergodic (nonetheless, the critical exponent obtained by the slithering snake algorithm has been found to be correct). Another problem with the slithering snake algorithm is that, because the walk moves one segment at a time, the SAW rearranges itself rather slowly, and a very large number of steps must be performed (i.e., m must be large) in the simulation to obtain large scale shape changes. Both of these shortcomings are addressed in the next SAW algorithm.
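
The steps above can be condensed into a short Python sketch (illustrative; the lecture codes are in Matlab, and names like `slither` are chosen here):

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def slither(walk):
    """One slithering-snake move on a SAW given as a list of lattice points.

    Try to append a random step at the head; on self-intersection the
    chain is reversed instead (either outcome counts as one rearrangement).
    """
    head = walk[-1]
    dx, dy = random.choice(MOVES)
    new = (head[0] + dx, head[1] + dy)
    if new in set(walk):
        return walk[::-1]          # reversal, shape unchanged
    return walk[1:] + [new]        # drop the tail, conserve n steps

def mean_square_ree(n, m, seed=0):
    """Mean squared end-to-end distance over m slithering moves."""
    random.seed(seed)
    walk = [(i, 0) for i in range(n + 1)]   # initial straight SAW of n steps
    total = 0.0
    for _ in range(m):
        walk = slither(walk)
        (x0, y0), (x1, y1) = walk[0], walk[-1]
        total += (x1 - x0)**2 + (y1 - y0)**2
    return total / m
```

Every move conserves the number of steps and the self-avoidance of the walk, which is easy to assert in a test run.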

Page 46: 2 Monte Carlo basics 17 - ETH Zürich - Homepage · 6 Model: microscopic transition rates (w(x->y)) MASTER equation for p(x) supplemented by initial conditions smooth transitions

46 PART 3. POLYMER PHYSICS

3.5.2 Pivot algorithm

The Pivot algorithm is a very efficient 'dynamic' algorithm for generating d-dimensional SAW's in a canonical ensemble (i.e., with a fixed number of steps). It is based on randomly selecting one of the d! 2^d symmetry (rotation or reflection) operations of a d-dimensional lattice and applying the operation to the section of the SAW subsequent to a randomly selected step location, called the pivot. In two dimensions, it is sufficient (for ensuring ergodicity) to consider just three of the eight symmetry operations, specifically rotations of +90, −90 and 180 degrees. To perform these rotations in two dimensions (d = 2), three two-dimensional rotation matrices, R90, R180 and R270, are defined:

R90 = (0 1; −1 0), R180 = (−1 0; 0 −1), R270 = (0 −1; 1 0). (3.60)

The rotation procedure uses three quantities: the location of the pivot point (kink) along the chain; the coordinates of the location of the ith step of the chain relative to the pivot point; and the final coordinates of the ith step of the chain after the rotation.

1a) Create an n-step SAW on a two-dimensional square lattice.

1b) Calculate the square end-to-end distance of the SAW and call it squaredist.

The following sequence of steps 2-5 will be executed a number of times (this is described in step 6), first using the initial SAW configuration given in step 1, and then using the SAW configuration resulting from the previous run-through of the sequence. We will describe the steps in terms of an arbitrary SAW configuration, which we'll call config.

2) Choose at random a pivot point k (0 < k < n) along config and divide config into two parts: one part, consisting of the steps in config prior to and including k, is called fixsec, and the other part, consisting of the steps in config subsequent to k, is called movesec.

3) Choose at random a symmetry operation of the lattice.

4) Apply the operation to movesec to obtain the rotated chain section, and call it newsec.


5) Check if any of the step locations in newsec and fixsec coincide. If they do not coincide, create a new SAW configuration, naming it newconfig, by joining newsec and fixsec, calculate its square end-to-end distance and add that to squaredist, and return newconfig. If they do coincide, calculate the square end-to-end distance of the previous SAW configuration, config, add that to squaredist, and return config.

6) Execute the sequence of steps 2-5 m times, starting with the initial SAW.

7) Determine the mean-square end-to-end distance.

A single pivot can produce a much bigger change in the SAW shape than a single 'slither' can, but for a large number of rearrangements, the two algorithms give, on average, the same results.
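
A Python sketch of one pivot move, using the rotations (3.60) (illustrative; `pivot_move` is a name chosen here):

```python
import random

# the rotations R90, R180, R270 of Eq. (3.60), applied to (x, y)
ROTATIONS = [lambda x, y: (y, -x),
             lambda x, y: (-x, -y),
             lambda x, y: (-y, x)]

def pivot_move(walk):
    """One pivot move: rotate the sub-walk after a random pivot point;
    accept the candidate only if it is still self-avoiding."""
    n = len(walk)
    k = random.randrange(1, n - 1)            # pivot index, 0 < k < n-1
    px, py = walk[k]
    rot = random.choice(ROTATIONS)
    newsec = []
    for (x, y) in walk[k + 1:]:
        rx, ry = rot(x - px, y - py)          # rotate relative to the pivot
        newsec.append((px + rx, py + ry))
    candidate = walk[:k + 1] + newsec
    if len(set(candidate)) == n:              # self-avoidance check
        return candidate
    return walk                               # reject: keep old configuration
```

Since the rotations are lattice isometries, every accepted configuration keeps unit step lengths and the original number of sites.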

3.5.3 Enriched samples algorithm

Several attempts have been made to improve efficiency. A particularly elegant result is the method presented by Erpenbeck et al. [J. Chem. Phys. 30 (1959) 634, where details can be found]. We summarize the main ideas next; below is a very compact code.

The number N_n of samples (configurations without overlap) remaining after n steps (random segment vectors) is known to be approximately described by N_n = N_0 exp(−λn), where λ is called the attrition constant, and N_0 is approximately equal to the number of trial samples. Consider N_s chains, each of s steps (generated as random walks, rejecting those configurations which exhibit overlap). Use each such s-step chain more than once, say p times, where it is important that p is a constant, independent of any particular configuration. This leads to pN_s trial chains (of which the first s steps are partially identical) as follows: grow each chain as a random walk; a certain number N_2s of 'successful' chains will reach 2s steps without overlap. Notice N_2s = pN_s exp(−λs) = pN_0 exp(−2λs). Proceed growing chains in the same way. Then N_js = pN_{(j−1)s} exp(−λs) = p^{j−1} N_0 exp(−jλs) chains will survive after j rounds. The goal is to have a constant number of chains, i.e., N_js = N_s. This is achieved by choosing p = exp(λs), and p can be automatically adjusted once N_js exceeds a threshold such as N_js > 3N_s. The automatic update of p is however not implemented below.

Code: Ns denotes the number of chains in the '1st generation', gens the number of generations (the maximum chain length will be s*gens); p can be calculated from lambda via p=round(exp(lambda*s)). lambda can be extracted from the fraction of random paths that successfully generate a self-avoiding path (by simply using the 'attempt' function below).

CODE (2D, 3D) SELF-AVOIDING WALK VIA ENRICHED SAMPLES

À function saw(d,s,p,Ns,g,theory); % (c) mk 2005
for i=1:d, mov(2*i-1:2*i,i)=[-1;1]; end;
on=ones(1,d); re=repmat([on;-on],floor(max(Ns*p,g*s/2))+1,1);
[walks,R]=attempt(2*d,1,1,s-2,s,Ns,mov,re);
for g=2:g,
 figure(1); gs=(g-1)*s; loglog(gs,mean(R),'.',gs,gs^theory,'.g');
 title([num2str(size(walks')) ' chains']); hold on; pause(0.1);
 [walks,R]=attempt(2*d,walks,g,s-1,s,p,mov,re);
end;

function [walks,R]=attempt(D,wk,g,n,s,p,mov,re);
S=size(wk); W=S(2); gs=g*s; walks=zeros(gs,p*W); col=1-p:0;
r=randint(n,p*W,[1 D-1]); R=[];
for w=1:W, col=col+p;
 walks(:,col)=[repmat(wk(:,w),1,p); mod(cumsum([wk(end,w)*(1:p);r(:,col)]),D)+1];
end;
for w=p*W:-1:1, X=cumsum(re(1:gs,:).*mov(walks(:,w),:));
 if length(unique(X,'rows'))<gs, walks(:,w)=[]; continue; else, R=[R;X(end,:)]; end;
end; R=mean(sum(R'.^2)); I

This code, based on Erpenbeck's work, allows one to study chains of rather arbitrary length. The required computing time is proportional to the length of the chains and to the number of samples, and therefore exhibits ideal scaling behavior. Useful parameters are s=17, p=3 (3D) and s=11, p=4 (2D), corresponding to a function call saw(3,17,3,100,100,2*0.588). These choices ensure that the number of chains in each generation is mostly constant (and can be tuned to reach this goal). The initial number of chains No is related to the simulated ones Ns by Ns=No*exp(-s*λ), and λ ≈ 0.04 for the 3D case.
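The generation scheme described above can also be sketched in Python (an illustrative reimplementation, not the lecture's Matlab code; the function names `grow` and `enriched_saw` and the parameter choices are assumptions, while s, p, Ns and gens mirror the description). Each surviving s-step chain is copied p times and continued by s further non-reversing lattice steps; overlapping continuations are discarded (attrition):

```python
import random

def grow(walk, visited, n, rng):
    """Extend walk by n non-reversing random steps; return None on self-overlap."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n):
        x, y = walk[-1]
        if len(walk) > 1:
            px, py = walk[-2]
            back = (px - x, py - y)               # exclude immediate reversal
            step = rng.choice([m for m in moves if m != back])
        else:
            step = rng.choice(moves)
        site = (x + step[0], y + step[1])
        if site in visited:
            return None                           # attrition: discard this sample
        walk.append(site)
        visited.add(site)
    return walk

def enriched_saw(s, p, Ns, gens, seed=1):
    """Ns chains of s steps; each survivor is continued p times per generation."""
    rng = random.Random(seed)
    chains = []
    while len(chains) < Ns:                       # first generation, from scratch
        w = grow([(0, 0)], {(0, 0)}, s, rng)
        if w is not None:
            chains.append(w)
    for _ in range(gens - 1):                     # enrichment rounds
        new = []
        for w in chains:
            for _ in range(p):                    # p independent continuations
                c = grow(list(w), set(w), s, rng)
                if c is not None:
                    new.append(c)
        chains = new
    return chains
```

With p ≈ exp(λs) the chain population stays roughly constant from generation to generation, as discussed in the text.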

EXERCISE WORMLIKE CHAIN IN A CYLINDRICAL TUBE

Create wormlike chains with given persistence length lp and contour length L inside a cylindrical tube of diameter D and infinite length. Use the method of enriched samples (cf. Sec. 3.5.3) to calculate the ratio between the number of configurations remaining inside the cylinder, Zwall, and the number of configurations effectively generated, Zid. This ratio determines the excess free energy of a wormlike chain in confined geometry, ∆Fc = −kBT ln(Zwall/Zid), in general. I

3.5.4 Fractal dimension of the SAW

A property of the SAW that is of considerable interest is the so-called 'fractal dimension' of the SAW. The fractal dimension of a simple walk (see also Sec. 3.1.3 for more complex shapes) is given by df in the relationship

⟨R²⟩^(1/2) = N^(1/df). (3.61)

A SAW, like a random walk, has a critical exponent, ν, given by the relationship ⟨R²⟩ = N^ν, and it follows that df = 2/ν for a SAW or a random walk. For a random walk on a square lattice, ν = 1 and df = 2. In fact, it turns out that the fractal dimension of a random walk is independent of the dimensionality of the walk, i.e., a random walk executed on a d-dimensional lattice has df = 2, regardless of the value of d (the reader can create and run one- and three-dimensional versions of the lattice walk program to confirm this). In contrast to the random walk, the fractal dimension of a SAW has a dimensional dependence. In one dimension, the end-to-end distance of a SAW must be proportional to the number of steps, so that ⟨R²⟩ = N², because backtracking is not allowed, and hence df = 1 and ν = 2. Going to higher dimensions, ν should decrease, because the chance of self-intersection decreases in higher dimensions (in fact it can be shown that the critical exponent


of a four-dimensional SAW is the same as that of a random walk). We will calculate thefractal dimension using a Flory-type approach in the following section 3.6.
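The claim above that an unrestricted random walk has df = 2 in any dimension, i.e. ⟨R²⟩ = N (in units of the squared lattice constant), is easy to confirm numerically. A minimal sketch (illustrative code, not from the lecture notes; the function name is an assumption):

```python
import random

def mean_square_end_to_end(d, N, samples, seed=0):
    """Average squared end-to-end distance of N-step walks on a d-dim lattice."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        pos = [0] * d
        for _ in range(N):
            axis = rng.randrange(d)          # pick a random lattice direction
            pos[axis] += rng.choice((-1, 1)) # step forward or backward
        total += sum(x * x for x in pos)
    return total / samples

# <R^2>/N should be close to 1 for any d, i.e. nu = 1 and df = 2
```

Running this for d = 1, 2, 3 with, say, N = 100 and a few thousand samples reproduces ⟨R²⟩ ≈ N independently of d.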

EXERCISE FRACTAL DIMENSION OF SAW

Determine the value of df for a SAW on a square lattice. Hint: For a d-dimensional SAW (d ≤ 4), we show in (3.69) that ⟨R²⟩ is roughly proportional to N^(6/(2+d)), so that ν and df scale approximately as 6/(2+d) and (2+d)/3, respectively. I

EXERCISE PIVOT ANIMATION

Create an animation of the pivoting process. The resulting picture depicts the rearrangements of a long chain polymer molecule in dilute solution. I

EXERCISE PIVOT ON CUBIC LATTICE

Write a pivot algorithm in three dimensions for a SAW on a cubic lattice. Hint: Ergodicity is satisfied by employing five symmetry operations, corresponding to the following rotations: 90 degrees in the x−y plane, −90 degrees in the x−y plane, 180 degrees in the x−y plane, 90 degrees in the x−z plane, and −90 degrees in the x−z plane (this is equivalent to rotating a line between adjacent lattice sites to its 5 nearest-neighbor sites). I


3.6 Flory-type theories for polymers

From Sec. 3.2.1 we know that the mean square end-to-end distance R₀² of the ideal chain with N + 1 nodes and N segments is R₀² = N b₀², with monomer size b₀. We had also calculated the distribution function of the end-to-end vector, f(R) dR = 4πR² f(R) dR = Z⁻¹ exp(−3R²/2R₀²) dR, cf. (3.16), with normalization constant Z = [3/(2πR₀²)]^(−3/2) = (2π/3)^(3/2) R₀³. The corresponding entropy of the ideal chain is S = ⟨S(R)⟩_f with configurational entropy

S(R) = kB ln f(R) = −(3/2) kB (R/R₀)². (3.62)

The total entropy is thus a constant

S = kB ⟨ln f(R)⟩_f = −(3/2) kB ⟨R²⟩_f / R₀² = −(3/2) kB. (3.63)

The internal configurational energy, on the other hand, as suggested by Flory, reads

E(R) = β kB T N φ, φ = V_occupied / V_available, E(R) = β kB T N² (a/R)³, (3.64)

where V_occupied = Na³ is the actually occupied volume of (hard) spherical segments of size a, and V_available ≃ R³ is the volume available to a polymer coil of size R; φ is then a kind of density of monomers within the available volume. The prefactor βkBT is the interaction energy for a pair of monomers, here assumed to be independent of R, and the factor N assumes that each monomer interacts with every other monomer of the same chain. The total energy, E = ⟨E(R)⟩_f, using (3.64), would, however, diverge, and one should exclude self-interactions from the energy expression. Nevertheless, we continue and use (3.62) and (3.64) to arrive at Flory's expression for the free energy

F = E − TS, (3.65)

which we minimize,

0 = (1/kBT) dF/dR = (3/R) [ (R/R₀)² − βN² (a/R)³ ], (3.66)

to obtain the equilibrium span of the chain

R/a = β^(1/5) N^(2/5) (R₀/a)^(2/5) = β^(1/5) N^(3/5). (3.67)

This is a famous result, as it predicts an exponent (3/5 = 0.6) close to the exact one for a SAW in three dimensions, which is about 0.588. If the occupied and available volumes (in d dimensions) were V_occupied = Na^d and V_available ∝ R^n a^(d−n), respectively, where the latter expression has correct units and n is a parameter describing the actual situation (unconfined n = d, or confined chains n < d), the same calculation would give

R_eq/a = [ (βn/3) N² (R₀/a)² ]^(1/(2+n)) = (βn/3)^(1/(2+n)) N^(3/(2+n)). (3.68)


For the unconfined case, n = d, we thus obtain the fractal dimension of the SAW in d dimensions,

df = (2 + d)/3, fractal dimension of unconfined SAW in d dimensions. (3.69)

In four dimensions, d = 4, the Flory approach predicts df = 2, which is identical with the fractal dimension of a random walk; i.e., in four dimensions, the SAW is a random walk. For this reason, one sometimes calculates properties of SAWs in four dimensions and later tries to extrapolate from d = 4 to d = 3 by assuming that d = 4 − ε with ε/d small. For a confined walk, such as a SAW confined to stay between two parallel planes (n = d − 1, relevant for a polymer in a nanotube), or within a cylinder (n = d − 2, relevant for a polymer bottle brush), we have

df_confined = (2 + n)/3 < df, fractal dimension of n-confined SAW in d dimensions. (3.70)

More refined treatments should allow β to depend on R. This effect, however, can be completely absorbed by n (i.e., by V_available), while treating β as independent of R throughout (as we did). For completeness, let us insert (3.68) into the corresponding free energy. This gives

F_eq/kBT = ((2 + n)/2) (3^n β² / n^n)^(1/(n+2)) N^((4−n)/(2+n)), (3.71)

where the prefactor (2 + n)/2 = 1 + n/2 collects the energetic (1) and entropic (n/2) contributions. When n ≥ 1, the entropic and energetic parts thus contribute quite equally to the equilibrium free energy.
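The Flory argument can be checked numerically by minimizing the free energy directly. A minimal sketch (illustrative code; the grid-based minimizer and the function names are assumptions) minimizes F(R)/kBT = (3/2)(R/R₀)² + βN²(a/R)³ with R₀ = a√N and extracts the effective exponent from two chain lengths:

```python
import math

def flory_radius(N, beta=1.0, a=1.0):
    """Minimize the 3D Flory free energy F(R)/kBT over a fine log grid of R."""
    R0sq = a * a * N                       # ideal chain: R0^2 = N a^2
    def F(R):
        return 1.5 * R * R / R0sq + beta * N * N * (a / R) ** 3
    # crude but sufficient: scan R on a geometric grid with 0.1% resolution
    best = min((F(R), R) for R in (a * 1.001 ** k for k in range(1, 20000)))
    return best[1]

# effective exponent nu from R_eq(N) ~ N^nu, using two chain lengths
nu_eff = math.log(flory_radius(4000) / flory_radius(1000)) / math.log(4.0)
```

The resulting nu_eff is close to the Flory value 3/5 of Eq. (3.67).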


3.7 Exactly solvable semiflexible spin chain model

Let us consider a class of semiflexible spin chain models (N segments, bending angles θn, n = 1, 2, ..., N, characterized by zn ≡ cos θn) with nearest-neighbor interactions. Each segment carries a spin denoted as φn ∈ {0, 1}, and we assume that the hamiltonian can be cast into the form

H = H(φ, z) = Σ_{n=1}^{N} Hn(φn, φn+1, zn), (3.72)

with the symmetry feature

Hn(φn, φn+1, zn) = Hn(φn+1, φn, zn). (3.73)

For convenience, we adopt periodic boundary conditions, i.e., φN+1 ≡ φ1. A rodlike spin chain has zn = 1 (θn = 0) for all n. With this hamiltonian we restrict ourselves to the class of generalized freely rotating chains with nonconstant bond angles, whose average spatial configuration is fully characterized by z, and where torsion angles are randomly distributed. Once we have solved the model, configurational properties like the end-to-end distance of the spin chain are obtained from the z-values. The spins mediate interactions between the z's through the hamiltonian.

With the abbreviations β ≡ 1/kBT and components

T_{a,b} ≡ 2π ∫_{−1}^{1} e^{−βHn(a,b,z)} dz, T = ( T00 T01 ; T10 T11 ), (3.74)

of a so-called transition matrix T, we can cast the partition sum for this problem (canonical ensemble) into the following form:

Z = (2π)^N Σ_{φ} ∫ e^{−βH} d^N z = (2π)^N Σ_{φ} ∫ Π_{n=1}^{N} e^{−βHn(φn,φn+1,zn)} d^N z = Σ_{i=0,1} [T^N]_{i,i} = tr(T^N). (3.75)

This is a remarkably simple reformulation of the problem. After solving the eigenvalue problem

T · v± = λ± v±, (3.76)

T = S · D · S⁻¹, D = diag(λ+, λ−), with the eigenvectors v± forming the columns of S, (3.77)

with two eigenvalues λ±, one has

Z = tr(T^N) = tr([S · D · S⁻¹]^N) = tr(S · D^N · S⁻¹) = λ+^N + λ−^N, (3.78)

− , (3.78)


which is still exact. For large N, the partition sum is therefore dominated by the eigenvalue of largest magnitude. Explicit expressions for the eigenvalues, solving det(T − λ± I) = 0, are

λ± = (1/2) ( T00 + T11 ± √( 4 T01² + (T00 − T11)² ) ), (3.79)

where we have employed the symmetry of T (T01 T10 = T01²). Because all components of T are positive, λ+ is the larger eigenvalue in magnitude (and λ− can have either sign). We have by now explicitly calculated the partition sum, (3.78) with (3.79) and (3.74), for all models which can be cast into the form (3.72) with (3.73). Averages are generally obtained from the partition sum by taking derivatives. Depending on the form of the hamiltonian, one might add additional terms to it in order to facilitate the calculations.

One particular example is the one treated in [Rappaport and Rabin (2008)] for the protein-induced bending of DNA molecules. The authors introduced the following hamiltonian for a semiflexible and coarse-grained DNA molecule whose segments are either occupied (spin up) or non-occupied (spin down) by proteins:

βHn(φn, φn+1, zn) = (l_p^(0)/2) (κn − κ0 Δn²)² − (µ/2)(φn + φn+1), κn ≡ √(2(1 − zn)), Δn ≡ φn+1 − φn. (3.80)

The hamiltonian carries coefficients for the bending stiffness of the spin-free chain, l_p^(0), the chemical potential for adsorption of a protein, µ, and the spontaneous curvature coefficient κ0. Using (3.80), the components of the transition matrix (3.74) evaluate as

T00 = 2π (1 − e^{−2 l_p^(0)}) / l_p^(0), (3.81)

T01 = (2π e^{µ/2} / l_p^(0)) { e^{−v²} − e^{−w²} + v √π [erf(v) − erf(w)] }, (3.82)

v ≡ κ0 √(l_p^(0)/2), w ≡ v − √(2 l_p^(0)), (3.83)

and T10 = T01, T11 = T00 e^{µ} (the factor e^{µ} stems from φn + φn+1 = 2, consistent with the factor e^{µ/2} in T01, where φn + φn+1 = 1). As we have now calculated the partition sum Z for this problem analytically, as a function of l_p^(0), µ, κ0 and the number of spins N, we can derive some quantities. Among the classical averages derived from the partition sum Z are the (dimensionless) total energy E, the free energy F = −β⁻¹ ln Z, the entropy S = kB(βE + ln Z), the heat capacity Cv, and the mean spin ⟨φ⟩, obtained by just calculating derivatives of Z with respect to β or µ:

βE = β⟨H⟩ = (N l_p^(0)/2) ⟨(κn − κ0 Δn²)²⟩ − µN⟨φ⟩ = −( l_p^(0) ∂ln Z/∂l_p^(0) + µ ∂ln Z/∂µ ) = −β ∂ln Z/∂β, (3.84)

Cv/kB ≡ ∂E/∂(kBT) = −β² ∂E/∂β = βE − β ∂(βE)/∂β = β² ∂²ln Z/∂β²,

⟨φ⟩ = −N⁻¹ ∂(βE)/∂µ = (β/N) ∂²ln Z/∂β∂µ. (3.85)


As we have expressed Z in terms of λ±, the main task consists in taking derivatives of the components of T with respect to β and µ (or also l_p^(0), for example, in order to calculate ⟨z⟩). As this is trivially accomplished, we do not write down the full expressions.


Part 4

Master equations


4.1 Markov chains

A Markov chain (or Markov process), named after Andrey Markov, is a special case of a discrete-time stochastic process, i.e., a series of states that can be described by a probability distribution, with the Markov property. A stochastic process has the Markov property if the conditional probability distribution of future states of the process, given the present state and all past states, depends only upon the present state and not on any past states. At each time the system may have changed from the state it was in the moment before, or it may have stayed in the same state; the changes of state are called transitions. In other words, the past states carry no information about future states. More formally, a Markov chain is a sequence of random variables X1, X2, X3, ... such that, given the present state, the future and past states are independent. The possible values of Xi form a countable set called the state space of the chain. Markov chains are often described by a directed graph, where the edges are labeled by the probabilities of going from one state to the other states.

4.1.1 Stochastic processes in general

A sequence of random variables

X1, X2, . . . , Xi, . . .

is called a stochastic process (here discrete, Xi = X(ti), with ti discrete times). Some definitions and properties of the probability densities pn:

• p1(xi, ti): Probability density that X = xi at time ti,

• p2(xi, ti; xj, tj): Joint probability density that X = xi at time ti and X = xj at time tj

• pn(x1, t1; . . . ; xn, tn): Generalized joint probability density involving n states

• pn ≥ 0 : non-negative,

• ∫dxn pn(x1, t1; . . . ; xn, tn) = pn−1(x1, t1; . . . ; xn−1, tn−1),

• ∫dx1 p1(x1, t1) = 1 : normalization.

A general stochastic process from time t1 to tn is fully specified by the generalized joint probability density pn. Only for special cases, such as those mentioned below, is less information already sufficient for a complete specification of the stochastic process.

Page 57: 2 Monte Carlo basics 17 - ETH Zürich - Homepage · 6 Model: microscopic transition rates (w(x->y)) MASTER equation for p(x) supplemented by initial conditions smooth transitions

4.1. MARKOV CHAINS 57

Uncorrelated processes

The simplest stochastic processes are so-called uncorrelated processes, for which

p2(x1, t1; x2, t2) = p1(x1, t1)p1(x2, t2), (4.1)

i.e., p2 (and analogously all higher pn) factorizes, implying that the probability density for x2 at t2 is statistically independent of x1 at t1. An important example of an uncorrelated process is Gaussian white noise, which we will meet later in Brownian or Langevin dynamics. For MC, uncorrelated processes are rarely in use.

Markov process

Markov processes are the simplest stochastic processes besides uncorrelated processes. Define the conditional probability density (also termed 'transition probability') that X = x2 at t2, given it was x1 at t1,

P(x2, t2 | x1, t1) ≡ p2(x1, t1; x2, t2) / p1(x1, t1). (4.2)

Markov processes are defined by

P (xn, tn|xn−1, tn−1; . . . ; x1, t1) = P (xn, tn|xn−1, tn−1). (4.3)

This is sometimes referred to as 'short memory', since all history except for the preceding step is irrelevant for the future time evolution. Markov processes are uniquely determined by p1(x1, t1) and P(x2, t2|x1, t1), since these quantities are sufficient to build up the whole stochastic process. E.g.,

p3(x1, t1; x2, t2; x3, t3) = P(x3, t3 | x2, t2; x1, t1) p2(x1, t1; x2, t2)
= P(x3, t3 | x2, t2) P(x2, t2 | x1, t1) p1(x1, t1). (4.4)

4.1.2 Chapman-Kolmogorov equation

After integrating Eq. (4.4) over x2, one arrives at

P(x3, t3 | x1, t1) = ∫ dx2 P(x3, t3 | x2, t2) P(x2, t2 | x1, t1). (4.5)

Equation (4.5) is the Chapman-Kolmogorov equation, which is a necessary condition for the conditional probability densities of a Markov process. For the interpretation of (4.5), see this figure:

[Figure: transitions from (x1, t1) to (x3, t3) proceed via all intermediate states x2 at time t2.]
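For a two-state Markov chain, the conditional densities in (4.5) become 2×2 stochastic matrices and the integral over x2 becomes a matrix product — a minimal sketch (illustrative rates, not from the notes):

```python
# P12[x2][x1] = P(x2, t2 | x1, t1); each column sums to 1
P12 = [[0.9, 0.2],
       [0.1, 0.8]]
# P23[x3][x2] = P(x3, t3 | x2, t2)
P23 = [[0.7, 0.4],
       [0.3, 0.6]]

# Chapman-Kolmogorov: P(x3,t3|x1,t1) = sum_{x2} P(x3,t3|x2,t2) P(x2,t2|x1,t1)
P13 = [[sum(P23[x3][x2] * P12[x2][x1] for x2 in range(2))
        for x1 in range(2)]
       for x3 in range(2)]
# each column of P13 is again normalized: sum_{x3} P(x3,t3|x1,t1) = 1
```

This is precisely the semigroup property of transition matrices; iterating it builds up the process over arbitrarily many intermediate times.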


Some implications of the Chapman-Kolmogorov equation:

• lim_{t2→t3} P(x3, t3|x2, t2) = δ(x3 − x2): 'in no time there is no transition'

• integrating (4.5) over x1 yields: p1(x3, t3) = ∫ dx2 P(x3, t3 | x2, t2) p1(x2, t2).

Short time dynamics

Equation (4.5) is a necessary condition for the transition probabilities. Here, we are interested in the short-time dynamics, in order to arrive at a time-evolution equation for the probability density p1. The Chapman-Kolmogorov equation is solved by using the following ansatz for the transition probabilities for a small time interval τ:

P(x, t + τ | y, t) = [1 − a(x, t)τ] δ(x − y) + τ w(x, y, t) + O(τ²), (4.6)

where a(x, t) is a jump rate,

a(x, t) = ∫ w(x′, x, t) dx′, (4.7)

and w(x, y, t) the transition rate. The first term on the right-hand side describes the probability of staying at x = y during time τ, while the second term gives the probability density of a change y → x. Inserting (4.6) into (4.5) renders the rates a and w interrelated; precisely,

P(x, t + τ | x′, t′) = ∫ dy P(x, t + τ | y, t) P(y, t | x′, t′)
= [1 − a(x, t)τ] P(x, t | x′, t′) + τ ∫ dy w(x, y, t) P(y, t | x′, t′) + O(τ²).

Taking the limit τ → 0, multiplying by p1(x′, t′) and subsequently integrating over x′ finally leads to the Master equation

∂t p1(x, t) = ∫ dy w(x, y, t) p1(y, t) − ∫ dy w(y, x, t) p1(x, t). (4.8)

While the transition rates w carry the short-time information, the distribution p1 provides the long-time behavior. In case x takes on only discrete values xn, the Master equation for pn(t) = p1(xn, t) reads

ṗn(t) = Σ_m [ wnm(t) pm(t) − wmn(t) pn(t) ], (4.9)

with the transition matrix (transitions from m → n) wnm(t) = w(xn, xm, t). The master equation (4.9) is a balance equation (rate equation) for the probability densities: the first term describes the gain, the second the loss.


Diffusion process

Within this approximation, the random variable makes only small jumps, and p1 is a smooth function of x. The diffusion process is characterized by its first and second moments:

∫ (y − x) P(y, t + τ | x, t) dy ≡ A(x, t) τ + O(τ²), (4.10a)

∫ (y − x)(y − x) P(y, t + τ | x, t) dy ≡ D(x, t) τ + O(τ²), (4.10b)

i.e., the transition probability for small times is Gaussian distributed with mean value A(x, t) and variance (matrix) D(x, t). In particular,

A(x, t) = ∫ (y − x) w(y, x, t) dy. (4.11)


4.2 Application of the master equation

Let us revisit the Master equation (4.9), which describes the time evolution of a (discrete or continuous) distribution function p(x) (such as the fraction of earth-wide inhabitants p in a city x) by balancing gain and loss as follows [Honerkamp (1990)]:

d/dt p(x) = Σ_{x′} w(x′ → x) p(x′) − Σ_{x′} w(x → x′) p(x)
= Σ_{x′≠x} w(x′ → x) p(x′) − Σ_{x′≠x} w(x → x′) p(x). (4.12)

Here w(x → x′) is the probability for a state x (a representative inhabitant of city x) to move within a unit time interval Δt (say, a year) to state (city) x′, i.e.,

(Δt)⁻¹ = Σ_{x′} w(x → x′) = Σ_{x′≠x} w(x → x′) + w(x → x). (4.13)

Since p denotes a distribution function, it must obey a normalization condition, conveniently

Σ_x p(x) = 1 (4.14)

at all times. This is consistent with the Master equation, since

d/dt Σ_x p(x) = Σ_{x,x′} w(x′ → x) p(x′) − Σ_{x,x′} w(x → x′) p(x)
= Σ_{x,x′} w(x′ → x) p(x′) − Σ_{x′,x} w(x′ → x) p(x′) = 0. (4.15)

For discrete states x, the Master equation given in (4.9) is a coupled set of differential equations for the probabilities p(x), which we can (and have) enumerated with an index representing the possible states, i.e., p_{1,2,...,n} as long as n is finite. We have, with wij denoting the jump rate from state i to state j (note that, compared with (4.9), the first index here denotes the initial state),

d/dt pi(t) = Σ_j wji pj(t) − Σ_j wij pi(t) (4.16)
= Σ_j wji pj(t) − Σ_{j,k} wik δji pj(t)
= Σ_j ( wji − δij Σ_k wjk ) pj(t)
≡ Σ_j Vij pj(t), (4.17)


hence

Vij = (1 − δij) wji − δij Σ_{k≠i} wjk = wji − δij Σ_k wjk, (4.18)

or equivalently,

d/dt p(t) = V · p(t). (4.19)

Notice that there is no summation convention implied in (4.18). Using (4.18), we obtain

Σ_i Vij = Σ_i ( wji − δij Σ_k wjk ) = Σ_i wji − Σ_k wjk = 0, (4.20)

which ensures that Σ_i pi remains 'normalized' with time:

d/dt Σ_i pi = Σ_i d/dt pi = Σ_{i,j} Vij pj = Σ_j [ Σ_i Vij ] pj = 0. (4.21)

4.2.1 Semi-analytical solution of the Master equation

The solution to (4.19) reads

p(t) = e^{tV} · p(0), (4.22)

with initial condition p(0). Calculation of e^{tV} is usually achieved by diagonalizing (if possible, see also Sec. 4.2.2) V = S⁻¹ · D · S (this requires computation of the eigenvalues λ_µ sitting on the diagonal of D, and of the eigenvectors v_µ making up the columns of the matrix S⁻¹, with µ = 1, 2, ..., rank(V)) via e^{tV} = S⁻¹ · e^{tD} · S. The problem is hereby simplified (decoupled) by introducing y ≡ S · p, which leads to

y(t) ≡ S · p(t) = e^{tD} · S · p(0) = e^{tD} · y(0) ⇔ y_µ(t) = e^{tλ_µ} y_µ(0), (4.23)

and finally p(t) = S⁻¹ · y(t). Of course, this brute-force approach is limited to problems where the number of states is small enough that the eigensystem can be handled by a computer in finite time. Special methods have been developed in the past to treat sparse-matrix diagonalization, for example, or to calculate just the few largest or smallest eigenvalues efficiently.

CODE VECTOR, MATRIX, MATRIX INVERSION, MATRIX-VECTOR MULTIPLICATION

In[] := p={0.1,0.3,0.01..}; V={{0.3,0.2..},{0.1,0.4..}};
Inverse[V]
V.p I

CODE VECTOR, MATRIX, MATRIX INVERSION, MATRIX-VECTOR MULTIPLICATION

À p=[0.1 0.3 0.01 ..]; V=[0.3 0.2 ..; 0.1 0.4 ..];
inv(V)
V*p I


4.2.2 Detailed balance

The matrix V defined in (4.18), and also appearing in (4.22), can be diagonalized if there exists a stationary solution p∗ obeying

V · P∗ = P∗ · V^T, P∗ ≡ diag(p∗), (4.24)

which reads componentwise (no summation) Vij p∗j = Vji p∗i, i.e., for all i and j,

( (1 − δij) wji − δij Σ_{k≠i} wjk ) p∗j = ( (1 − δij) wij − δij Σ_{k≠j} wik ) p∗i, (4.25)

and, since the diagonal (i = j) terms on both sides are identical, simplifies to

(1 − δij) wji p∗j = (1 − δij) wij p∗i. (4.26)

Because this condition must be valid for all i and all j, (4.24) is equivalent with the so-called 'detailed balance'

wji p∗j = wij p∗i, (4.27)

for all i and all j, which is a stronger condition than needed to ensure stationarity, for which

Σ_j wji p∗j = Σ_j wij p∗i (4.28)

is the sufficient condition. The Metropolis Monte Carlo method will make use of the concept of detailed balance in Sec. 4.2.4 to ensure that the simulated distribution asymptotically reaches the desired, given distribution function for the problem at hand.
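That detailed balance (4.27) implies stationarity can be checked numerically: a short sketch (illustrative; rates are built from random symmetric 'attempt' rates, a standard construction that is an assumption here) constructs rates satisfying (4.27) for a prescribed p∗ and verifies that the net probability flux into every state vanishes:

```python
import random

rng = random.Random(0)
p_star = [0.5, 0.3, 0.2]             # prescribed stationary distribution
n = len(p_star)

# symmetric attempt rates s_ij = s_ji > 0
s = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        s[i][j] = s[j][i] = rng.uniform(0.5, 1.5)

# impose detailed balance: w[i][j] is the rate i -> j, and
# w_ij p*_i = s_ij min(p*_i, p*_j) = w_ji p*_j by construction
w = [[s[i][j] * min(1.0, p_star[j] / p_star[i]) if i != j else 0.0
      for j in range(n)] for i in range(n)]

# net flux into state i (gain minus loss) vanishes for p = p*
net = [sum(w[j][i] * p_star[j] - w[i][j] * p_star[i] for j in range(n))
       for i in range(n)]
```

Each pair of states balances individually, which is the stronger statement; summing the pairwise balances reproduces the stationarity condition (4.28).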

4.2.3 Coupled equations for moments

Often it is easier to calculate moments of the distribution function obeying the Master equation than the distribution function itself, which contains information about all moments. Here we consider the special case of one-step processes, for which the jump rates wij vanish except between neighboring states, i.e., we consider the process characterized by

wij = r(i) δ_{i,j+1} + g(i) δ_{i,j−1}, (4.29)

or equivalently w_{i,i+1} = g(i) and w_{i,i−1} = r(i), with yet unspecified functions r(i) and g(i) (which in fact allows us to treat a whole class of problems). The master equation (4.16) with (4.29) reads

ṗi = r(i + 1) p_{i+1} + g(i − 1) p_{i−1} − [r(i) + g(i)] pi, (4.30)

and the matrix V introduced in (4.18) becomes tridiagonal, since

Vij = (1 − δij) wji − δij Σ_{k≠i} wik
= (1 − δij) [ r(j) δ_{j,i+1} + g(j) δ_{j,i−1} ] − δij [ r(i) + g(i) ]
= r(i + 1) δ_{j,i+1} + g(i − 1) δ_{j,i−1} − [ r(i) + g(i) ] δij. (4.31)


For the first moment ⟨i⟩ we obtain

d/dt ⟨i⟩ = d/dt Σ_{i=−∞}^{∞} i pi = Σ_i i ṗi
= Σ_i i { r(i+1) p_{i+1} + g(i−1) p_{i−1} − [r(i) + g(i)] pi }
= −⟨r(i)⟩ + ⟨g(i)⟩, (4.32)

since

Σ_{i=−∞}^{∞} i r(i+1) p_{i+1} = Σ_{i=−∞}^{∞} (i − 1) r(i) pi = Σ_{i=−∞}^{∞} i r(i) pi − ⟨r(i)⟩. (4.33)

Analogously, for the higher moments,

d/dt ⟨i^k⟩ = ⟨[(i − 1)^k − i^k] r(i)⟩ + ⟨[(i + 1)^k − i^k] g(i)⟩, (4.34)

which includes the equation of change for the variance Δ²i ≡ ⟨i²⟩ − ⟨i⟩²,

d/dt Δ²i = −2⟨i r(i)⟩ + ⟨r(i)⟩ + 2⟨i g(i)⟩ + ⟨g(i)⟩ − 2⟨i⟩ [⟨g(i)⟩ − ⟨r(i)⟩]. (4.35)

The initial conditions for these coupled equations of change for the moments are

⟨i^k⟩|_{t=0} = Σ_{i=−∞}^{∞} i^k pi(0). (4.36)

If r(i) and g(i) are linear in i, (4.34) is a closed set of equations for the moments. However, if these functions are nonlinear, kth-order moments couple to higher moments, and we are faced with an infinite set of coupled equations which must be approximately closed to be solved. In practice, these problems are candidates for numerical solution.

Example for a one-step process

We assume the transition probability to a neighboring state (left or right) to be independent of the direction, which corresponds to r = g (and we choose r = 1) in Sec. 4.2.3. For this special case, (4.30) and (4.31) read

ṗi = p_{i+1} + p_{i−1} − 2 pi, (4.37)

Vij = δ_{i,j+1} + δ_{i,j−1} − 2 δij, (4.38)

and hence

d/dt ⟨i^k⟩ = ⟨(i − 1)^k − i^k⟩ + ⟨(i + 1)^k − i^k⟩,
d/dt ⟨i⟩ = −1 + 1 = 0,
d/dt ⟨i²⟩ = −2⟨i⟩ + 1 + 2⟨i⟩ + 1 = 2. (4.39)


With the initial condition pi(0) = δ_{i,0}, or equivalently ⟨i^k⟩ = 0 for t = 0 and k ≥ 1, we arrive at

⟨i⟩ = 0, Δ²i = ⟨i²⟩ = 2t. (4.40)

While the mean value remains unchanged, the variance grows linearly in time. This process, also known as the symmetric 1D random walk, is a discrete version of the so-called 'Wiener process' in the limit where the distances between neighboring states become small. In this limit, the master equation is equivalent to a Fokker-Planck equation [Gardiner (1985), Honerkamp (1990), Risken (1989)].

EXERCISE ONE-STEP PROCESS: MOMENTS

Calculate the time-dependence of the third and fourth moments along the lines indicated in thissubsection. I

EXERCISE ONE-STEP PROCESS: DISTRIBUTION FUNCTION

Calculate the time-dependent distribution function pi(t) for i = −n..n with finite n by using (4.38) in (4.22), and calculate time-dependent moments from it. Compare the results with the ones given in (4.40) for the first and second moment. Hint: diagonalize V and rewrite as follows: V = E · λ · E^T, where λ contains the eigenvalues of V. Further make use of the identity e^{tV} = E · e^{tλ} · E^T. I

4.2.4 Metropolis Monte Carlo

Metropolis Monte Carlo is a method to realize a given distribution p(x) of random variables by constructing a process (an algorithm) in such a way that p(x) is its stationary solution. Averages can thus be computed from a 'time' series in the long-time limit, where time stands for simulation steps rather than physical time. The basic idea has two ingredients:

1. In order to efficiently calculate the integral I = ∫ f(x) dx, the distribution p(x) of random numbers is chosen to be proportional to the integrand f(x);

2. We use the strong condition of detailed balance to ensure that a stationary solution p∗ of a Master equation exists, and we choose the jump probabilities of the Master equation (i.e., its parameters) such that p∗ = p(x).

Concerning 1., we can regard f itself as a distribution function and calculate the average of an arbitrary quantity g(x) via

⟨g(x)⟩_f ≡ ∫ g(x) f(x) dx / ∫ f(x) dx = (1/I) ∫ g(x) f(x) dx = ∫ g(x) p(x) dx / ⟨1⟩_p = ⟨g(x)⟩_p / ⟨1⟩_p, (4.41)

which indicates that if the normalization constant for p were required, ⟨1⟩_p = 1, we would need to know I in advance (which is not available). For the same reason, Metropolis Monte Carlo achieves the goal p ∝ f by using only ratios between probabilities, and ratios between function values, in which I drops out. It will be most useful for calculating moments of the


'distribution function' f(x). In view of 2. below, in a simulation we can assume ⟨1⟩_p = 1 and extract averages as

⟨g(x)⟩_f = ⟨g(x)⟩_p = (1/N) Σ_{i=1}^{N} g(x_i), (4.42)

where the x_i are randomly distributed according to p. Choosing g = 1/f(x), we see that

⟨1/f(x)⟩_f = ⟨1/f(x)⟩_p ⟨1⟩_p⁻¹ = ∫ dx / ∫ f(x) dx = V/I, (4.43)

and therefore the integral I can, in principle, be obtained from an average as

I⁻¹ = (1/(NV)) Σ_i 1/f(x_i), (4.44)

where V = ∫ dx denotes the integration volume, which is often known. The problem with (4.44) is that Metropolis Monte Carlo will favor regions in space where f is large, which contribute little to an efficient and precise calculation of I. The reverse is true for 'typical' averages, for which the relevant contribution stems from regions where f is large.

Concerning 2., we use the Master equation (4.12) and apply it to p; we are hence looking for a condition on the jump probabilities for given f. To this end (cf. Sec. 4.2.2) we can immediately make use of the condition of 'detailed balance' (4.27), which can, using p∗ ∝ f, be written as

wji / wij = p∗(x_i) / p∗(x_j) = f(x_i) / f(x_j). (4.45)

4.2.5 Metropolis algorithm

A particular solution fulfilling this equation is the Metropolis scheme, often used in equilibrium studies,

$$w_{ij} = \min\left(1, \frac{f(x_j)}{f(x_i)}\right), \qquad (4.46)$$

as is proven by considering the cases f(x_i) ≥ f(x_j) and f(x_i) < f(x_j) separately; e.g., in the latter case,

$$\frac{w_{ji}}{w_{ij}} = \frac{\min\left(1, \frac{f(x_i)}{f(x_j)}\right)}{\min\left(1, \frac{f(x_j)}{f(x_i)}\right)} = \frac{f(x_i)/f(x_j)}{1} = \frac{f(x_i)}{f(x_j)}. \qquad (4.47)$$

The probability to jump from state i to j, characterized by w_ij, is large if f(x_j) is large compared with f(x_i). Thus the algorithm using (4.46) tends to favor regions where the contribution to the integral I is large, and the Metropolis algorithm is therefore a member of the so-called class of 'importance sampling' schemes. The jump probability criterion can be implemented in a program by using a uniformly distributed random number between 0 and 1: if it lies between 0 and w_ij, the attempted jump is accepted. During the course of a Monte Carlo simulation, jumps starting from state x can be attempted to any position x′; the only requirement, apart from efficiency issues, is that the trajectory must, in principle, be able to reach the available configuration space for x. Concerning efficiency it is often useful not to select a state x′ at random, but to choose x′ in the vicinity Δx of x. In some cases, f(x′) can be computed efficiently based on the knowledge of f(x). On the one hand, if f(x) is large and f smooth, a close-by f(x′) will still be large and make a major contribution to the integral (and any averages), and a close-by x′ is likely to be accepted by (4.46). On the other hand, a large Δx ensures that the available phase space is scanned more rapidly. In most applications an acceptance ratio of about 50% turns out to be useful, and one can automatically tune the size (or type) of Δx in a feedback loop such that this ratio is realized in an average sense.

CODE METROPOLIS MONTE CARLO (INCLUDING COMPARISON WITH EXACT RESULT)

In[]:= f[x_, m_] := Exp[-x^2] x^m  (* choose integrand and moments *)
       metro[steps_, moments_] := (X = 0; ave = Table[0, {m, 1, moments}];
          Do[Y = X + Random[] - 1/2;
             If[Random[] < f[Y, 0]/f[X, 0], X = Y];
             Do[ave[[m]] = ave[[m]] + X^m, {m, 1, moments}],
             {n, 1, steps}];
          ave/steps)
       exact[m_] := Integrate[f[x, m], {x, -Infinity, Infinity}]/
          Integrate[f[x, 0], {x, -Infinity, Infinity}]
       met = metro[1000, 10];  (* run the chain once, not once per moment *)
       TableForm[Table[{exact[m], met[[m]]}, {m, 1, 10}]]

CODE METROPOLIS MONTE CARLO (INCLUDING COMPARISON WITH EXACT RESULT)

f=inline('exp(-x.^2).*x.^m','x','m'); % choose integrand and moments
steps=1000; moments=10; X=0; ave=zeros(1,moments);
for n=1:steps
  Y=X+rand-0.5;
  if rand<f(Y,0)/f(X,0), X=Y; end
  ave=ave+X.^(1:moments);
end
metro=ave/steps;
% be aware Matlab cannot compute the Gaussian integral correctly using quad
for m=1:moments, exact(m)=quad(f,-Inf,Inf,[],[],m)/quad(f,-Inf,Inf,[],[],0); end
[exact;metro]'

CODE METROPOLIS MONTE CARLO (PERIODIC BOUNDARIES, INCLUDING COMPARISON WITH EXACT RESULT)

f=inline('exp(-x.^2).*x.^m','x','m'); % choose integrand and moments
xmin=-1; xmax=1;                      % choose range
steps=1000; moments=10; X=0; ave=zeros(1,moments);
for n=1:steps
  Y=mod(X+rand-0.5-xmin,xmax-xmin)+xmin; % periodic boundaries
  if rand<f(Y,0)/f(X,0), X=Y; end
  ave=ave+X.^(1:moments);
end
metro=ave/steps;
for m=1:moments, exact(m)=quad(f,xmin,xmax,[],[],m)/quad(f,xmin,xmax,[],[],0); end
[exact;metro]'
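Since the integration volume V = xmax − xmin is known on a bounded interval, the same chain also yields the integral I itself via (4.44). A minimal Python sketch (added here for illustration; variable names are hypothetical, and the Matlab listing above remains the script's reference implementation):

```python
import math
import random

random.seed(1)

def f(x):
    """Unnormalized weight f(x) = exp(-x**2), restricted to [-1, 1]."""
    return math.exp(-x * x)

xmin, xmax = -1.0, 1.0
V = xmax - xmin                      # integration volume, known in advance
steps = 200_000

x = 0.0
inv_f_sum = 0.0
for _ in range(steps):
    y = x + random.uniform(-0.5, 0.5)
    y = (y - xmin) % V + xmin        # periodic boundaries
    if random.random() < f(y) / f(x):   # Metropolis acceptance (4.46)
        x = y
    inv_f_sum += 1.0 / f(x)

# eq. (4.44): 1/I = (1/(N V)) * sum_i 1/f(x_i)
I_mc = 1.0 / (inv_f_sum / (steps * V))
I_exact = math.sqrt(math.pi) * math.erf(1.0)  # int_{-1}^{1} exp(-x^2) dx
```

As the text warns, this estimator is comparatively inefficient, because the chain spends most of its time where f is large, i.e., where 1/f contributes least.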

To summarize, the Metropolis algorithm looks like


1. specify an initial configuration X0,

2. generate a new configuration Xi+1,

3. compute the energy change ∆E = Ei+1 − Ei,

4. if ∆E < 0, accept Xi+1 and return to 2.,

5. generate a random number r, uniformly distributed in [0, 1],

6. if r < e−β∆E , accept Xi+1 and return to 2.

7. otherwise, discard this Xi+1, return to old Xi, and goto 2.
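The seven steps can be condensed into a few lines of code. The following Python sketch (an illustration, not taken from the script; the one-dimensional 'configuration' x and the toy energy E(x) = x² are hypothetical choices) samples the Boltzmann distribution:

```python
import math
import random

random.seed(0)

def energy(x):
    """Toy potential E(x) = x**2 (illustrative stand-in for a configuration energy)."""
    return x * x

beta = 1.0                  # 1/(kB T)
x = 0.0                     # step 1: initial configuration X0
chain = []
for step in range(100_000):
    x_new = x + random.uniform(-1.0, 1.0)   # step 2: trial configuration
    dE = energy(x_new) - energy(x)          # step 3: energy change
    # steps 4-7: accept if dE < 0, otherwise with probability exp(-beta*dE)
    if dE < 0 or random.random() < math.exp(-beta * dE):
        x = x_new
    chain.append(x)

# discard burn-in (i >= N0), then extract a thermal average, here A = x^2
burn = 1000
x2_avg = sum(xi * xi for xi in chain[burn:]) / len(chain[burn:])
# for E = x^2 the stationary density is Gaussian and <x^2> = 1/(2*beta)
```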

The Metropolis, like any MC algorithm, generates a Markov chain X₀ → X₁ → X₂ → ... (a path in configuration space). Once this chain has converged with sufficient accuracy to the equilibrium state (i ≥ N₀), thermal averages can be extracted as
$$\langle A\rangle = N^{-1}\sum_{i=N_0}^{N_0+N} A(X_i).$$

The Metropolis algorithm drives the system to the minimum energy state (corresponding to the given N, V, T, ...). Configurations with higher energy are only accepted with a Boltzmann weight. Note: MC algorithms need 'good' (pseudo-)random numbers. A word of caution: "One can generate numerical results that look reasonable, even with an incorrect sampling scheme." It is therefore important to compare with known, or better, exact results [Frenkel and Smit (2002)].

4.2.6 Barker algorithm

In evolution problems such as kinetic Monte Carlo, another solution fulfilling detailed balance is frequently used:

$$w_{ij} = \frac{x}{1+x}, \qquad x \equiv \frac{f(x_j)}{f(x_i)}. \qquad (4.48)$$

Detailed balance is fulfilled, since

$$\frac{w_{ij}}{w_{ji}} = \left(\frac{x}{1+x}\right)\left(\frac{x^{-1}}{1+x^{-1}}\right)^{-1} = \left(\frac{x}{1+x}\right)\left(\frac{1}{x+1}\right)^{-1} = x = \frac{f(x_j)}{f(x_i)}. \qquad (4.49)$$

An attempted change of a state represents the same amount of time as an attempted change of any other state. The number of attempts can then be used as a measure of time, and the time increment between two attempted changes of states has to be chosen appropriately, as we will see in Sec. 4.4. The Metropolis scheme is faster, as it does not require the generation of a random number when f(x_j) > f(x_i), and its probability to generate a new state is larger than the one employed in kinetic MC, since the Metropolis acceptance probability lies, for all states, above the one for kinetic MC (see figure below).
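The two acceptance rules can be compared directly. A short Python sketch (added for illustration), writing both probabilities in terms of ΔE/k_BT so that the f-ratio becomes e^(−ΔE/kBT):

```python
import math

def acc_metropolis(dE):
    """Metropolis acceptance (4.46) for f(x) ~ exp(-H/kT); dE in units of kT."""
    return min(1.0, math.exp(-dE))

def acc_barker(dE):
    """Kinetic-MC (Barker) acceptance (4.48): x/(1+x) with x = exp(-dE)."""
    x = math.exp(-dE)
    return x / (1.0 + x)

# tabulate both probabilities over the range shown in the figure
table = [(dE, acc_metropolis(dE), acc_barker(dE)) for dE in range(-4, 5)]
```

For every ΔE the Metropolis probability is at least as large as the Barker one; both tend to 1 for ΔE → −∞ and to 0 for ΔE → +∞, and they differ most around ΔE = 0, where Metropolis accepts with certainty while Barker accepts with probability ½.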

[Figure: acceptance probability vs. ΔE/k_BT for the Metropolis and kinetic MC criteria.]

Acceptance probabilities (4.46) and (4.48) for the case where f(x) ∝ exp[−H(x)/k_BT], and f(x_i)/f(x_j) = exp[−ΔE/k_BT] with ΔE = H(x_i) − H(x_j). Both probabilities satisfy detailed balance. The Metropolis criterion is frequently and efficiently used in the study of equilibrium phenomena, while kinetic Monte Carlo requires the 'physical' solution, together with a suitable time increment; see Sec. 4.4 for more details. As is obvious from this plot, the probability to generate a new state is strictly larger for the Metropolis scheme.

4.2.7 Error estimates

Once the Monte Carlo simulation (the same holds for Molecular or Brownian Dynamics) has produced the final results, we are still not done. What is left to do is to estimate the error bar on the numerical data. One frequently encounters the term "exact result" (meant in contrast to analytical results that have been obtained employing some approximations) when actually numerical results are meant. In certain circumstances, numerical algorithms can produce exact results to within floating point precision. In statistical physics, however, numerical methods are restricted to finite systems and finite simulation times and therefore carry intrinsic statistical errors (on top of numerical inaccuracies).

Error versus variance

Let A_j with j = 1, ..., N denote N independent measurements of the quantity A. Our estimate for the average is therefore ⟨A⟩ ≈ Ā with Ā = N⁻¹ Σ_{j=1}^N A_j. It is important to note that ⟨A⟩ is an ordinary number, while Ā is a random variable (the average of random variables is again a random variable). From the law of large numbers we know that Ā → ⟨A⟩ for N → ∞ in a statistical sense. In order to estimate the error on the estimate Ā for finite N, the central limit theorem states that Ā is Gaussian distributed around ⟨A⟩ with variance

$$\sigma_{\bar A}^2 = \frac{\sigma_A^2}{N} = \frac{\langle A^2\rangle - \langle A\rangle^2}{N}. \qquad (4.50)$$

With this formula, the numerical estimate from the N independent measurements is given as ⟨A⟩ ≈ Ā ± σ_Ā. Unfortunately, this result cannot be applied directly to Monte Carlo simulations, since the Markov chain generates highly correlated (and not independent) measurements.
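A standard remedy for correlated data is block averaging: group the measurements into blocks longer than the correlation time and apply (4.50) to the nearly independent block means. A minimal Python sketch (an illustration; the AR(1) process below is a hypothetical stand-in for a Monte Carlo time series):

```python
import math
import random

random.seed(2)

# synthetic correlated data: AR(1) process, correlation time ~ 1/(1 - phi)
phi, n = 0.9, 100_000
a, data = 0.0, []
for _ in range(n):
    a = phi * a + random.gauss(0.0, 1.0)
    data.append(a)

def block_error(data, block_size):
    """Error of the mean estimated from means of consecutive blocks, eq (4.50)."""
    nb = len(data) // block_size
    means = [sum(data[i * block_size:(i + 1) * block_size]) / block_size
             for i in range(nb)]
    mean = sum(means) / nb
    var = sum((m - mean) ** 2 for m in means) / (nb - 1)
    return math.sqrt(var / nb)

naive = block_error(data, 1)        # pretends all samples are independent
blocked = block_error(data, 1000)   # blocks much longer than correlation time
```

For correlated data the naive application of (4.50) underestimates the true uncertainty; the blocked estimate grows with the block size until the blocks decorrelate, and that plateau value is the error bar one should report.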


4.2.8 Transition matrix

If we think in discrete times (including Δt = 1), the Master equation (4.12) for a number (vector) of discrete states p becomes

p(t + 1) = p(t) + V · p(t) = Π · p(t), (4.51)

with Π = V + 1. Using the identity (4.18) and the following solution of the detailed balance condition,

$$w_{ij} = \min\left(1, \frac{p_j}{p_i}\right), \qquad (4.52)$$

we arrive at the following properties of the transition matrix Π:

$$\Pi_{ij} = 1 \quad\text{for } p_j > p_i, \qquad \Pi_{ij} = \frac{p_j}{p_i} \quad\text{for } p_j < p_i, \qquad \Pi_{ii} = 2 - \sum_k w_{ik} = 1 - \sum_{k\neq i} w_{ik}, \qquad (4.53)$$

in case the probabilities for a trial move from i → j and from j → i are equal. This is a special (Metropolis-type) case of a more general one. In the more general case, the transition rate w_ij is the product of the probability rate to perform a trial move from state i to j, denoted as α_ij, and the probability to accept this trial move, denoted as acc_ij with 0 < acc_ij ≤ 1, hence

$$w_{ij} = \alpha_{ij}\,\mathrm{acc}_{ij}, \qquad (4.54)$$

and the condition of detailed balance is rewritten as

$$\frac{\mathrm{acc}_{ij}}{\mathrm{acc}_{ji}} = \frac{p_j^*}{p_i^*}\,\frac{\alpha_{ji}}{\alpha_{ij}}. \qquad (4.55)$$

If we use the Metropolis rule to decide on the acceptance of MC trial moves, then (4.55) implies

$$\mathrm{acc}_{ij} = \min\left(1, \frac{p_j}{p_i}\,\frac{\alpha_{ji}}{\alpha_{ij}}\right). \qquad (4.56)$$

If the matrix α_ij is symmetric, i.e., if the probabilities (rates) for a trial move from i → j and from j → i are equal, this simplifies to

$$\frac{\mathrm{acc}_{ij}}{\mathrm{acc}_{ji}} = \frac{p_j^*}{p_i^*}. \qquad (4.57)$$

Other choices of acc_ij are possible, but the original choice of Metropolis appears to result in a more efficient sampling of configuration space than most other strategies that have been proposed.
Ideally, by biasing the probability to generate a trial configuration in the right way, we could make the term on the right-hand side of (4.56) always equal to unity. In that case, every trial move would be accepted. This ideal situation can be reached in rare cases [Swendsen and Wang (1987)].
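For a small discrete state space, the stochastic character of Π and the stationarity of p∗ under (4.52)-(4.53) can be checked directly. A Python sketch (the three-state target distribution and the symmetric trial matrix α are illustrative assumptions):

```python
p_star = [0.5, 0.3, 0.2]        # target stationary distribution (illustrative)
n = len(p_star)
alpha = 1.0 / (n - 1)           # symmetric trial probability to each other state

# Pi[i][j]: probability to move from state j to state i in one step,
# built from w = alpha * min(1, ratio of target weights) as in (4.52)
Pi = [[0.0] * n for _ in range(n)]
for j in range(n):
    for i in range(n):
        if i != j:
            Pi[i][j] = alpha * min(1.0, p_star[i] / p_star[j])
    # diagonal: probability to stay, so that each column sums to one
    Pi[j][j] = 1.0 - sum(Pi[i][j] for i in range(n) if i != j)

col_sums = [sum(Pi[i][j] for i in range(n)) for j in range(n)]            # all 1
p_next = [sum(Pi[i][j] * p_star[j] for j in range(n)) for i in range(n)]  # = p*
```

The column sums equal one (probability conservation), and Π·p∗ = p∗ holds because detailed balance makes Π_ij p∗_j = α min(p∗_i, p∗_j) symmetric in i and j.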


4.2.9 Thermostatted 1D harmonic oscillator via Metropolis MC

EXERCISE THERMOSTATTED 1D HARMONIC OSCILLATOR

The 1D harmonic oscillator can be simulated at temperature k_BT with the Matlab code on page 66 of this script by replacing the integrand f = exp(−x²) by the one for the canonical distribution, f = exp[−V/k_BT], with the potential energy of the harmonic oscillator V = ½ mω²x². Results should be compared with the exact results obtained with the help of (3.10).

4.2.10 Lennard-Jones system via Metropolis MC

A simple Lennard-Jones fluid is a system of mass points of equal masses m interacting with a radially symmetric Lennard-Jones potential V(r) given in (6.34). We may wish to study a Lennard-Jones system at constant number of particles N and constant temperature T in a given volume L³ with periodic boundary conditions. Depending on the particle number density n = N/L³ and temperature, the Lennard-Jones system exhibits gaseous, fluid, and solid crystalline states. In a Metropolis Monte Carlo simulation of the Lennard-Jones system, we choose an initial state at random, calculate its energy from the distances between all pairs of particles, attempt to move a single, randomly selected particle, and decide via (4.46) if we accept or reject this move. After an equilibration phase which strongly depends on the chosen initial state, we can extract all types of averages from the Monte Carlo trajectory (the coordinates x_i(t) of particle i; here t stands for the Monte Carlo step, not a physical time). Sample averages include the mean potential energy E, the potential contribution to the pressure tensor p^pot, the hydrostatic pressure p = ½ Tr(p^pot), the pair correlation function g(r), and the structure factor S(q), defined according to the rules of statistical physics as

$$E = \langle V(x)\rangle = \frac{1}{2}\left\langle \sum_{i=1}^{N}\sum_{j\neq i}^{N} V(|x_{ij}|)\right\rangle, \qquad (4.58)$$

$$p^{pot} = \frac{n}{2N}\left\langle \sum_{i=1}^{N}\sum_{j\neq i}^{N} x_i F_i\right\rangle = \frac{n}{2N}\left\langle \sum_{i=1}^{N}\sum_{j\neq i}^{N} x_{ij} F_{ij}\right\rangle, \qquad (4.59)$$
where F_i = −∇_{x_i} V(x), F_{ij} ≡ F_j − F_i, and x_{ij} ≡ x_j − x_i,

$$g(r) = \frac{1}{nN}\left\langle\sum_{i=1}^{N}\sum_{j\neq i}^{N} \delta(r - x_{ij})\right\rangle, \qquad (4.60)$$

$$S(q) = \frac{1}{N}\left\langle \left|\sum_{i=1}^{N} e^{-i\,q\cdot x_i}\right|^2\right\rangle = 1 + \frac{1}{N}\left\langle \sum_i \sum_{j\neq i} e^{-i\,q\cdot x_{ij}}\right\rangle. \qquad (4.61)$$

From either the pair correlation function or the structure factor (interrelated by a Fourier transform) we can also estimate averages as follows [Hess et al. (2005)]:

$$E = \frac{nN}{2}\int V(r)\,g(r)\,dr = 2\pi n N \int_0^\infty V(r)\,g(r)\,r^2\,dr, \qquad (4.62)$$
$$p^{pot} = \frac{n^2}{2}\int r F(r)\,g(r)\,dr = 2\pi n^2 \int_0^\infty r F(r)\,g(r)\,r^2\,dr, \qquad (4.63)$$
$$S(q) = 1 + n \int e^{-i\,q\cdot r}\,[g(r) - 1]\,dr, \qquad (4.64)$$
$$S(q) = 1 + 4\pi n\, q^{-1} \int_0^\infty r \sin(qr)\,[g(r) - 1]\,dr, \qquad (4.65)$$

where we denote by g(r) and S(q) the radial pair distribution function and the radial structure factor, respectively. These functions are isotropic for isotropic liquids and become highly anisotropic in the crystalline or nonequilibrium state.

4.2.11 Why are Gaussian integrals important?

Gaussian integrals do not only appear in connection with the random walk and more advanced walks, but 'always' in the statistical physics of N-particle systems in d dimensions when considering ensemble averages, for the reason that the kinetic energy K(p) is a bilinear form in the particle momenta p,

K(p) = p · T · p, (4.66)

with a dN × dN matrix T. Usually its components read T_μν = δ_μν/(2m_μ) with particle masses m_μ, and T is diagonal. In the canonical ensemble, the phase space coordinates are positions q and momenta p,

$$f(q,p) = \frac{1}{Z}\,e^{-\beta(K(p)+V(q))} = \frac{1}{Z}\,e^{-\beta\, p\cdot T\cdot p}\,e^{-\beta V(q)}, \qquad (4.67)$$

and

$$Z = \int e^{-\beta\, p\cdot T\cdot p}\,dp \int e^{-\beta V(q)}\,dq. \qquad (4.68)$$

Ensemble averages involving p simplify as follows

$$\langle g(p)\rangle = \frac{1}{Z}\int g(p)\,e^{-\beta\,p\cdot T\cdot p}\,dp \int e^{-\beta V(q)}\,dq = \frac{\int g(p)\,e^{-\beta\,p\cdot T\cdot p}\,dp}{\int e^{-\beta\,p\cdot T\cdot p}\,dp}, \qquad (4.69)$$

and if T is diagonal, the average squared momentum of particle i, for example, becomes

$$\langle p_i^2\rangle = \frac{\int p_i^2\, e^{-\beta T_{ii}\, p_i^2}\,dp_i}{\int e^{-\beta T_{ii}\, p_i^2}\,dp_i} = \frac{\int p^4\, e^{-\beta T_{ii}\, p^2}\,dp}{\int p^2\, e^{-\beta T_{ii}\, p^2}\,dp} = \frac{\Gamma(5/2)}{\Gamma(3/2)}\,\frac{1}{\beta T_{ii}} = 3 m_i k_B T, \qquad (4.70)$$

where T_ii = 1/(2m_i), Γ(n + 1) = nΓ(n), and the Gaussian integral
$$\int_0^\infty x^n\, e^{-\alpha x^2}\,dx = \frac{\Gamma\!\left(\frac{n+1}{2}\right)}{2\,\alpha^{\frac{n+1}{2}}}, \qquad (4.71)$$
valid for α > 0 and n > −1, had been used. Ratios of moments can be inferred from (3.11). At this point let us also mention the three-dimensional Gaussian integral for tensors,

$$\int e^{-M:QQ}\,dQ = \frac{\pi^{3/2}}{\sqrt{\det M}}. \qquad (4.72)$$

We can use the identity
$$\frac{d}{dM}\det M = M^{-1}\,\det M \qquad (4.73)$$

to derive from (4.72), for example,
$$\int QQ\, e^{-M:QQ}\,dQ = -\frac{d}{dM}\int e^{-M:QQ}\,dQ = \frac{\pi^{3/2}\, M^{-1}}{2\sqrt{\det M}}, \qquad (4.74)$$

and all higher moments by differentiation.
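The one-dimensional formula (4.71), and the ratio Γ(5/2)/Γ(3/2) = 3/2 used in (4.70), are easy to verify numerically; a small Python check, added here for illustration:

```python
import math

def gauss_moment(n, alpha):
    """Right-hand side of (4.71): Gamma((n+1)/2) / (2 * alpha**((n+1)/2))."""
    return math.gamma((n + 1) / 2.0) / (2.0 * alpha ** ((n + 1) / 2.0))

def riemann_moment(n, alpha, xmax=20.0, steps=200_000):
    """Brute-force Riemann sum of x**n * exp(-alpha*x**2) over [0, xmax]."""
    h = xmax / steps
    return sum((k * h) ** n * math.exp(-alpha * (k * h) ** 2)
               for k in range(1, steps + 1)) * h

ratio = math.gamma(2.5) / math.gamma(1.5)   # = 3/2, as used in (4.70)
```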


4.3 Ising model and Phase transitions

The Ising system serves as one of the simplest models of interacting bodies in statistical physics. It has been used to model ferromagnetism, anti-ferromagnetism, and phase separation in binary alloys. The model has also been applied to spin glasses and neural networks. It has even been suggested that the Ising model is relevant to imitative behavior in general, including such disparate systems as flying birds, swimming fish, flashing fireflies, beating heart cells, spreading diseases, and even fashion fads.
The probabilistic Ising model (see also Section 1.5) employs a constant temperature condition (known as the canonical ensemble formulation). The two-dimensional version of the model consists of an n × n square lattice in which each lattice site has associated with it a value of 1 (known as an up spin) or −1 (known as a down spin). Spins on adjacent, nearest-neighbor lattice sites (henceforth called neighbors) interact in a pair-wise manner with a strength J (known as the exchange energy or constant). When J is positive, the energy is lower when spins point in the same direction; when J is negative, the energy is lower when spins point in opposite directions. There may also be an external field of strength B (known as the magnetic field). The magnetization of the system is the difference between the number of up and down spins on the lattice. In the spin flipping process, a lattice site is randomly selected and it is either flipped (the sign of its value is changed) or not, based on the energy change in the system that would result from the flip, using the Metropolis method.
On a square lattice, each node i has two possible (spin) states, up and down, x_i ∈ {−1, +1}. The classical Ising model considers nearest neighbor interactions only; in 2D (3D), each spin interacts with f = 4 (f = 6) spins, and we denote the neighbors of spin i as n_i. The Hamiltonian of the Ising model reads

$$H(x) = -B\sum_i x_i - \frac{J}{2}\sum_i \sum_{j\in n_i} x_i x_j. \qquad (4.75)$$

The number of possible configurations for a system of n² spins is Ω = 2^(n²). If we take a square lattice in two dimensions with n² sites, the growth of Ω is shown in the following table.

n    2     3      4        10
Ω    16    512    65536    2^100 ≈ 1.3 × 10^30


For the 3 × 3 lattice, a thermal average can be done by direct summation (summation instead of integration due to the discreteness of the spin degrees of freedom) in a fraction of a second. The 4 × 4 lattice needs considerably more time; the 10 × 10 lattice is out of reach even for today's supercomputers.

[Figure: 2D Ising model. Mean energy ⟨H⟩/J vs. reduced temperature k_BT/J, for 3 × 3 and 4 × 4 lattices together with the exact result.]

The Ising model was solved analytically by Onsager for the case of an infinite lattice and is one of the most famous 'many-particle' problems in statistical physics. In the absence of a magnetic field (B = 0) it exhibits a second-order phase transition (divergence of the variance of H, which is proportional to the heat capacity) at a critical temperature given by βJ_c = J/(k_BT_c) ≈ 0.4406868, where J is the interaction strength between two spins. We can use standard Metropolis Monte Carlo to compute, for example, the magnetization M in the canonical ensemble, f = Z⁻¹e^(−βH), β = 1/k_BT, defined as

$$M \equiv \langle x\rangle_f = \frac{\int x\, f(x)\,dx}{\int f(x)\,dx}. \qquad (4.76)$$

To this end, we have to restrict ourselves to the case of a finite lattice, say an n × n square lattice,

$$\langle x\rangle = \frac{1}{n^2}\sum_{i=1}^{n^2} \langle x_i\rangle, \qquad (4.77)$$

and will be faced, in general, with finite size effects.

In the simulation, at each Monte Carlo step we need ΔH = H(x′) − H(x), the energy difference between the attempted and the current state. In order to achieve a 'large' acceptance ratio, an attempted state x′ may just differ from the current one x by switching a single, randomly selected spin, x_i → x′_i = −x_i. ΔH is then efficiently computed from a sum over the four neighbors of the selected spin only, and one should test results for the magnetization as a function of temperature, interaction parameter J, and magnetic field B against the system size.
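The neighbor-sum shortcut can be checked against a full evaluation of (4.75). A Python sketch with an illustrative 16 × 16 lattice and periodic boundaries (all names hypothetical; the sign convention is ΔH = E(new) − E(old), matching the Metropolis steps above):

```python
import random

random.seed(3)
n = 16                              # illustrative lattice size
J, B = 1.0, 0.0
s = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]

def delta_H(s, i, j):
    """Energy change H(x') - H(x) for flipping spin (i, j), periodic boundaries."""
    nb = (s[(i - 1) % n][j] + s[(i + 1) % n][j] +
          s[i][(j - 1) % n] + s[i][(j + 1) % n])
    return 2.0 * s[i][j] * (B + J * nb)

def total_H(s):
    """Full Hamiltonian (4.75); the 1/2 compensates double counting of bonds."""
    e = 0.0
    for i in range(n):
        for j in range(n):
            nb = (s[(i - 1) % n][j] + s[(i + 1) % n][j] +
                  s[i][(j - 1) % n] + s[i][(j + 1) % n])
            e += -B * s[i][j] - 0.5 * J * s[i][j] * nb
    return e

i0, j0 = 4, 7
dH = delta_H(s, i0, j0)
before = total_H(s)
s[i0][j0] = -s[i0][j0]
after = total_H(s)                  # after - before equals dH
```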


4.3.1 Mean-field model for the Ising model

Let us employ a simple 'mean-field' approach to qualitatively understand the expected behavior.

$$H(x) \overset{mf}{\approx} -B\sum_i x_i - \frac{J}{2}\sum_i\sum_{j\in n_i} x_i\,\langle x\rangle = -B\sum_i x_i - \frac{fJ}{2}\sum_i x_i\,\langle x\rangle = -z\sum_i x_i,$$
$$z \equiv B + \frac{fJ}{2}\,\langle x\rangle \quad\Leftrightarrow\quad \langle x\rangle = \frac{2(z-B)}{Jf}. \qquad (4.78)$$

On the other hand, we can calculate the average with the canonical distribution,

$$\langle x\rangle = \langle x_i\rangle = \frac{\int x_i\, e^{-\beta H(x)}\,dx}{\int e^{-\beta H(x)}\,dx} = \frac{\int x_i\, e^{\beta z \sum_i x_i}\,dx}{\int e^{\beta z \sum_i x_i}\,dx} = \frac{\int x_i\, e^{\beta z x_i}\,dx_i}{\int e^{\beta z x_i}\,dx_i} = \frac{e^{\beta z} - e^{-\beta z}}{e^{\beta z} + e^{-\beta z}} = \tanh \beta z \overset{mf}{\approx} \frac{2(\beta z - \beta B)}{\beta J f}. \qquad (4.79)$$

Equation (4.79), for each (dimensionless) βB, is an equation we can solve numerically or graphically for βz in terms of βJf. Notice that the integral was replaced by a sum over the two spin states, and that we assumed homogeneity through ⟨x_i⟩ = ⟨x⟩. For βJf = 0, z = B, and ⟨x⟩ = tanh(βB). For small y, tanh y = y + O(y³), and therefore βJ_c f = 2 − 2B/z when the initial slopes coincide. This is the critical point, where a second-order phase transition occurs. For βJ < βJ_c the magnetization vanishes, ⟨x⟩ = 0, according to (4.79). For the two-dimensional case, where f = 4, and without external field (B = 0), the critical interaction strength is therefore βJ_c = 0.5 in the above mean-field approximation.
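Combining (4.78) and (4.79) gives the self-consistency equation ⟨x⟩ = tanh[βB + (f/2)βJ⟨x⟩], which can be solved by fixed-point iteration. A Python sketch (added for illustration) showing that a nonzero magnetization survives only above the mean-field critical point βJ_c = 2/f:

```python
import math

def mean_field_m(beta_J, f=4, beta_B=0.0, m0=0.9, iters=10_000):
    """Fixed-point iteration of m = tanh(beta_B + (f/2)*beta_J*m), cf. (4.79)."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta_B + 0.5 * f * beta_J * m)
    return m

# for f = 4 and B = 0 the mean-field critical point is beta_J_c = 2/f = 0.5
m_below = mean_field_m(0.4)     # below the critical point: m -> 0
m_above = mean_field_m(0.6)     # above: a finite magnetization survives
```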

4.3.2 Exact results for the 2D Ising model

The exact result at the critical point is not too far away: βJ_c = 0.4406868. For B = 0, and in two dimensions (f = 4), Onsager's exact result for the magnetization reads

$$\langle x\rangle = \frac{(1+\zeta^2)^{1/4}\,(1 - 6\zeta^2 + \zeta^4)^{1/8}}{\sqrt{1-\zeta^2}}, \qquad \zeta \equiv e^{-2\beta J}, \qquad (4.80)$$


for βJ > βJ_c, and ⟨x⟩ = 0 otherwise. Actually, βJ_c is the solution of (4.80) for ⟨x⟩ = 0. The exact result for the total energy is

$$\langle H\rangle = -J \coth(2\beta J)\left[1 + \frac{2}{\pi}\,\kappa' K_1(\kappa)\right],$$
$$\kappa \equiv \frac{2\sinh(2\beta J)}{\cosh^2(2\beta J)} \le 1, \qquad \kappa' \equiv 2\tanh^2(2\beta J) - 1,$$
$$K_1(\kappa) \equiv \int_0^{\pi/2} \left(1 - \kappa^2 \sin^2\phi\right)^{-1/2}\,d\phi. \qquad (4.81)$$

The susceptibility is defined as χ ≡ ⟨x²⟩ − ⟨x⟩², the specific heat (at constant B) as C ≡ ⟨H²⟩ − ⟨H⟩².
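Onsager's magnetization (4.80) is straightforward to evaluate; a Python sketch (added for illustration):

```python
import math

# the critical coupling: the argument (1 - 6z^2 + z^4) in (4.80) vanishes at
# zeta^2 = 3 - 2*sqrt(2), i.e. at beta_J_c = 0.5*ln(1 + sqrt(2)) = 0.4406868...
beta_J_c = 0.5 * math.log(1.0 + math.sqrt(2.0))

def magnetization(beta_J):
    """Spontaneous magnetization (4.80); zero for beta*J <= beta_J_c (B = 0)."""
    z = math.exp(-2.0 * beta_J)
    disc = 1.0 - 6.0 * z * z + z ** 4
    if disc <= 0.0:                 # high-temperature side of the transition
        return 0.0
    return (1.0 + z * z) ** 0.25 * disc ** 0.125 / math.sqrt(1.0 - z * z)
```

The magnetization rises continuously from zero at βJ_c and approaches 1 at low temperature, as expected for a second-order transition.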

4.3.3 Metropolis MC algorithm for the 2D Ising model

1) Create an n × n lattice L consisting of randomly chosen site values of +1 and −1.

The following sequence of steps 2-4 will be executed a number of times (this is described in step 5), first using the initial lattice configuration, and then using the lattice configuration resulting from the previous run-through of the sequence. We will describe the steps in terms of an arbitrary lattice configuration, called L.

2) Select a random lattice site in L.

3) Determine the energy change involved in 'flipping' the spin at the selected lattice site. This is done in a number of steps:

3a) Determine the neighbors of the selected site. When the selected site is in the interior of L, the neighbors are the sites north (above), south (below), west (left), and east (right) of the site. When the selected site is along the border of L, some neighbors are taken from the opposing side of the lattice. (Note: this way of choosing neighbors for border sites is known as the periodic boundary condition.)

3b) The energy change that would result from flipping the spin of the selected lattice site is given by the quantity 2 × (value of selected site) × (B + J × (total spin of the neighbors)), where B and J are input values.

4) Use the Metropolis method to decide whether to flip the spin of the selected lattice site as follows:

4a) Check if there is a negative energy change as a result of the flip.

4b) If the energy change is non-negative, check if the exponential of (−energy change/k_BT) is greater than a random number between 0 and 1.

4c) If one of these conditions is satisfied, flip the spin.

5) Execute the sequence of steps 2-4 m times.


6) Create a sublist, L0, containing every n²-th element from the list of L configurations. (Note: the use of every n²-th element corresponds to giving each lattice site an equal chance to be selected. Each element in L0 is said to correspond to one Monte Carlo step.)

7) Calculate, for each element in L0, some global property of the lattice, such as the long-range order (the absolute value of the magnetization of the lattice).

EXERCISE 2D ISING MODEL

Implement the above 2D Ising model algorithm. Hints: Rather than spending time calculating the energy change for flipping the selected site, one should create a 'look-up' table of the possible energy changes. Measure the magnetization ⟨x⟩, as well as ⟨x²⟩, ⟨H⟩, and ⟨H²⟩. Visualize the lattice. Animate the process. You may apply a cluster search algorithm to identify percolation or spin clusters.

4.3.4 Finite size effects

In order to compare simulation results with the exact results we need to extrapolate to N = ∞. Binder has shown, for T ≠ T_c,

$$\langle H\rangle(\beta J, N) = \langle H\rangle(\beta J, \infty) + b\, e^{-N/\lambda(\beta J)}, \qquad N \gg \lambda(\beta J), \qquad (4.82)$$

with parameters ⟨H⟩(βJ,∞) (to be compared with the exact result (4.81)), b, and λ(βJ), where λ(βJ) is proportional to the correlation length of the magnetic fluctuations. In order to fit the simulated results to (4.82), a nonlinear regression method has to be used.


4.4 Kinetic Monte Carlo

As we have discussed in the foregoing sections, a solution of the Master equation using random numbers relies on the acceptance probability (4.48) between two states i and j,

$$w_{ij} = \frac{x}{1+x}, \qquad x = \frac{f(x_j)}{f(x_i)}. \qquad (4.83)$$

In order to assign physical time to the sequence of states generated using (4.83), we need to remember that we can choose a state, for which we test if we switch it to another state, randomly in space and time. As we further know from (4.13), the time increment between two classical MC attempts depends on the chosen state i and fulfills

$$(\Delta t)_i^{-1} = \sum_j w_{ij} = w_{ii} + Q_i, \qquad (4.84)$$
$$Q_i \equiv \sum_{j\neq i} w_{ij}, \qquad (4.85)$$

where Q_i is the configuration-dependent probability to change state i within time (Δt)_i. If we indeed knew all w_ij for a selected state i, we could compute the time increment (Δt)_i for each attempt. If we knew all w_ij for all possible states, we could calculate the mean time increment between two successive attempts and call it τ = ⟨(Δt)_i⁻¹⟩⁻¹. We expect τ to depend on temperature and the nature of the system, but not on the state of the system, per definition. For the time being we take τ as an unknown quantity in practice. Assuming it is in principle possible to calculate all w_ij, we can, however, do something more efficient, which is called kinetic MC. Still, true physical time will only be proportional to the time we obtain using kinetic MC, as long as we do not know τ.
Null-event-free and rejection-free kinetic MC (KMC) relies on the possibility to calculate, for an attempted move, the probability to actually move, and it also assigns a time increment to this attempt. The time increment will depend on the configuration, and it will be obtained by a statistical argument using the above mean time increment τ between successive attempts. The argument relies on the fact that an attempted change of a randomly selected state represents the same amount of time irrespective of its success to actually change the state. The mean amount of time ⟨(Δt^KMC)_i⟩ between successful moves from state i in a kinetic MC simulation is then proportional to the fraction (w_ii + Q_i)/Q_i times (Δt)_i,

$$\left\langle (\Delta t^{KMC})_i \right\rangle = \frac{w_{ii} + Q_i}{Q_i}\,(\Delta t)_i = \frac{1}{Q_i} \approx \frac{\#\text{stay} + \#\text{change}}{\#\text{change}}\,\tau. \qquad (4.86)$$

The outermost term on the rhs of this equation (where we have introduced the unknown τ), together with the lhs, is the principle of kinetic MC. We seek all possible attempts to move state i into states j and classify them as belonging to the 'stay' (in i) or 'change' class. Then we randomly choose between those attempts which change the state, perform the change of state (no 'null' events as for Metropolis MC), and assign a time increment with its mean according to (4.86). As events and non-events occur randomly in time, the time increments y = (Δt)_i^KMC are exponentially (Poisson) distributed, p(y) ∝ exp(−ay), with a prefactor a which is directly determined by the mean, ⟨y⟩_p = a⁻¹, thus

$$p\big((\Delta t)_i^{KMC}\big) \propto e^{-(\Delta t)_i^{KMC}\, Q_i}. \qquad (4.87)$$


As we know from Sec. 2.2.7, a random number generator which produces exponentially distributed random numbers y from random numbers x uniformly distributed in [0, 1] reads y = −a⁻¹ ln(x), or equally

$$(\Delta t)_i^{KMC} = -Q_i^{-1} \ln x, \qquad \text{random number } x \in [0,1], \qquad (4.88)$$
$$\propto -\,\frac{\#\text{stay} + \#\text{change}}{\#\text{change}}\,\ln x. \qquad (4.89)$$

The cumulative time is approximately proportional to real time. The original BKL work [Bortz et al. (1975)] considered an Ising spin system and denoted the ratio appearing in (4.89) as Q_10, because they identified 10 classes of single-spin + surrounding-spin states. These authors invented the name 'n-fold way'; it was only later that 'kinetic MC' replaced their terminology.
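The time-assignment rule can be tried out on a toy continuous-time process with known rates. A rejection-free Python sketch (added for illustration; the three-state rate table is purely hypothetical): at each step the total escape rate Q_i is computed, the exponential time increment (4.88) is drawn, and the next state is selected with probability proportional to its rate, so no move is ever rejected.

```python
import math
import random

random.seed(4)

# toy rate table w[i][j]: rate to jump from state i to state j (illustrative)
w = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 1.0],
     [0.5, 0.5, 0.0]]
n = len(w)

state, t = 0, 0.0
visits = [0.0] * n                           # time spent in each state
for _ in range(200_000):
    Q = sum(w[state])                        # total escape rate Q_i, eq (4.85)
    dt = -math.log(random.random()) / Q      # eq (4.88): exponential increment
    visits[state] += dt
    t += dt
    # pick the next state with probability w[state][j] / Q (rejection-free)
    r, acc = random.random() * Q, 0.0
    for j in range(n):
        acc += w[state][j]
        if r < acc:
            state = j
            break

occupancy = [v / t for v in visits]  # converges to the stationary distribution
```

The fraction of simulated time spent in each state converges to the stationary solution of the corresponding Master equation (for the rates above: 3/16, 5/16, and 8/16).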

To summarize, the presented method can be useful whenever the number of potential final states (after an attempt) can be calculated with minor computational effort, which is potentially the case for simple lattice models like the Ising model. The advantage of the method is that it does not suffer from null events like Metropolis MC. Each move leads to a change of the system's configuration, and thus the method should be superior to Metropolis MC whenever the latter suffers from slow reorganization effects. A major challenge of KMC simulations is to create a complete catalog of all possible processes along with their transition probabilities. This key input to any KMC simulation can be extracted from smaller length and time scale simulation tools, such as density functional theory (these are often termed first-principle KMC simulations), transition state theory (TST), and MD. The difficulty involved in identifying all microscopic processes (especially the rare ones) may render the catalog of processes incomplete. Recent on-the-fly techniques, such as self-learning KMC, are currently under development. The retrieval of information from libraries and the update following execution of an event can become a major computational bottleneck for systems with a large number of processes. Despite the substantial acceleration of on-lattice KMC simulation achieved by leaving out vibrations, free energy barriers of very different size introduce a different separation of time scales, namely a separation of time scales among the group of rare events. Most KMC simulations get stuck sampling the fast (low barrier) processes, whereas the long time dynamics is controlled by the slow (high barrier) processes that are rarely sampled. This is another major challenge of KMC simulations. Finally, experienced users of numerical methods realize that the conventional KMC method handles one event at a time. This is in contrast to most other numerical methods, such as MD simulation or the integration of differential equations (DEs), where all equations or all species are simultaneously advanced. This one-at-a-time aspect seriously limits the computational efficiency of a KMC simulation. In summary, there are five major challenges in KMC simulation, namely (1) creating the input, (2) algorithmic efficiency in search and update of databases, (3) time scale separation, (4) length scale separation, and (5) execution of one process at a time [Chatterjee and Vlachos (2007)].


Main steps in a KMC algorithm. The underlying principle in all KMC algorithms is the random selection of a process based on the transition probabilities of all processes, execution of the selected process (i.e., appropriately modifying the configuration of the system), and updating of the time clock and the transition probabilities. In mathematical terms, the KMC method follows a discrete Markov evolution of the system, with continuous time increments drawn from an exponential distribution (reproduced from [Chatterjee and Vlachos (2007)]).
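The selection-and-clock-update step just described can be sketched in a few lines. A minimal Python illustration follows (the notes' own code examples use Mathematica/Matlab; Python, the function name kmc_step, and the list-based catalog of rates are our own choices here):

```python
import math
import random

def kmc_step(rates, t, rng):
    # One kinetic Monte Carlo step: select process k with probability
    # rates[k] / sum(rates), then advance the clock by an exponentially
    # distributed increment dt = -ln(u) / sum(rates).
    # All rates must be non-negative with a positive total.
    total = sum(rates)
    x = rng.random() * total
    acc = 0.0
    for k, w in enumerate(rates):
        acc += w
        if x < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total   # 1 - u avoids log(0)
    return k, t + dt
```

After executing process k, the configuration and the affected entries of the rate catalog are updated and the step is repeated; the search-and-update of this catalog is exactly the database bottleneck discussed above.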



4.5 Phase transitions & percolation

Percolation theory is of interest in problems of phase transitions in condensed matter physics, and in biology and chemistry. Here we briefly review the basic concept of percolation theory and exemplify its application to the Ising model. In order to explain the concept of percolation, let us consider the figure below,

which offers a pictorial representation of percolation. The figure represents an electrical circuit composed of a set of conducting spheres, represented by (•), and a set of insulating spheres, represented by open circles. An electrical current will flow from the top plate to the bottom plate, and the bulb (⊗) will light, once enough conducting spheres are present in the circuit: when a continuous chain of conducting spheres from the top to the bottom is formed, as in (b), there will be a percolating current and the light bulb will turn on. That is, there will be a percolating electrical current through the system when a conducting sphere at the top is continuously connected to a conducting sphere at the bottom. It is intuitively clear that percolation will happen once a minimum number of conducting spheres is present in the system, i.e., there is a critical value of the concentration of conducting spheres at which a percolating cluster is formed.

The existence of a critical value for percolation and the consequent formation of a percolation cluster can be illustrated as follows. Let us consider a square lattice of L × L sites. To each lattice site we attribute a probability p that the site is occupied, and a probability 1 − p that it is empty. A 'cluster' is defined as a group of neighboring occupied sites. Suppose now that one fixes a value of p and draws a random number r at each site of the lattice. If r ≤ p, the site is considered occupied; if r > p, the site is considered empty. Groups of neighboring occupied sites are collected to form clusters. At the end, one checks whether at least one cluster has percolated, that is, whether an occupied site at the top of the lattice belongs to a cluster that is continuously connected to an occupied site at the bottom of the lattice. Then the process is repeated many times. Each complete visit to all sites of the lattice is called a 'sweep'. Next, we define a percolation probability as

⟨P⟩ ≡ (1/N) ∑_{j=1}^{N} C_j , (4.90)

where C_j = 1 (0) if a percolating cluster has (has not) occurred during sweep j, and N is the number of sweeps.
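A brute-force sketch of this sweep procedure in Python (the notes' code examples use Mathematica/Matlab; the function names, the breadth-first cluster search, and all parameter values below are our own choices, and free boundary conditions are assumed):

```python
import random
from collections import deque

def percolates(lattice, m):
    # Breadth-first search from all occupied top-row sites;
    # the lattice percolates if the bottom row is reached.
    frontier = deque((0, j) for j in range(m) if lattice[0][j])
    seen = set(frontier)
    while frontier:
        i, j = frontier.popleft()
        if i == m - 1:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < m and 0 <= nj < m and lattice[ni][nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                frontier.append((ni, nj))
    return False

def percolation_probability(m, p, sweeps, seed=0):
    # Estimate <P> of Eq. (4.90): fraction of sweeps with a percolating cluster.
    rng = random.Random(seed)
    hits = 0
    for _ in range(sweeps):
        lattice = [[rng.random() <= p for _ in range(m)] for _ in range(m)]
        hits += percolates(lattice, m)
    return hits / sweeps
```

Well below the threshold the estimate is close to 0 and well above it close to 1; the Hoshen-Kopelman algorithm discussed later in this section labels all clusters in a single pass and is the more efficient choice for large lattices.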


[Figure: percolation probability ⟨P⟩ versus occupation probability p, after 10000 iterations]

Here we plotted ⟨P⟩ for different values of p for L = 100, 250, 400, 600, and 1000, using free boundary conditions. One sees that there exists a transition region around a certain value of p where the percolation probability changes from a small value to a value that becomes close to one for large lattices. It is clear that as the lattice becomes larger and larger, the transition region becomes sharper and sharper, with a well-defined transition point pc. In the limit of an infinite lattice, for the present case of a two-dimensional square lattice, the value of pc is found to be

pc = 0.592746. (4.91)

Clearly the numerical simulations, for large lattices, give pc very close to this value. To be more precise, using the principle of finite-size scaling, the simulations give pc = 0.5927 ± 0.0001.

EXERCISE PERCOLATION

Develop an efficient algorithm which identifies percolated clusters and determines ⟨P⟩ as defined in (4.90).

Percolation theory in the Ising Model

The magnetic phase transition that occurs in the Ising model (for dimensions larger than 1) can be related to a percolation phenomenon. The relation was established in two major steps. The first step was taken by Fortuin and Kasteleyn (1972), who showed that it is possible to rewrite the partition function of the Ising model as a sum over configurations n of spin clusters, instead of the usual sum over spin configurations, as follows:

Z = ∑_{x=±1} ∑_{n} [ ∏_{⟨ij⟩, n_ij=1} p_ij δ_{x_i,x_j} ] [ ∏_{⟨ij⟩, n_ij=0} (1 − p_ij) ] , (4.92)

where

p_ij = 1 − exp(−2Jβ) (4.93)

is the probability that a bond between spins at neighboring sites i and j occurs, and n_ij = 1 (0) indicates that there is (there is not) a bond between the spins at i and j. Note that a bond may be placed only between spins of the same value. Here, β = 1/kBT, where kB is the Boltzmann constant and J is the spin-spin coupling strength defined in the Ising Hamiltonian, H = −(J/4) ∑_{⟨ij⟩} x_i x_j, cf. Eq. (4.75) of Sec. 4.3, where ⟨ij⟩ denotes nearest-neighbor pairs and x_i = ±1 is the spin at site i. A spin cluster can therefore be naturally defined as a collection of spins that are continuously bonded. Note that the probability of a spin belonging to a cluster now depends on the temperature (and on the strength J), as in Eq. (4.93). The next step was taken in 1980 by Coniglio and Klein, who showed, using renormalization-group techniques, that the magnetic critical temperature Tc of the Ising model determines the critical value for the occurrence of a percolating cluster, with pc given by the formula pc = 1 − exp(−2J/kBTc). The Fortuin-Kasteleyn clusters were later used by Swendsen and Wang (1987) as a very efficient way of performing global spin updates in Monte Carlo simulations of spin models.
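A minimal sketch of one such global cluster update in Python (a hedged illustration, not the notes' own code: free boundary conditions, the union-find bookkeeping, and the function name swendsen_wang_step are all our choices):

```python
import math
import random

def swendsen_wang_step(spins, L, J, beta, rng):
    # One Swendsen-Wang update for the 2D Ising model (free boundaries).
    # Bond activation probability between aligned neighbors, Eq. (4.93).
    p = 1.0 - math.exp(-2.0 * J * beta)
    parent = list(range(L * L))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    for i in range(L):
        for j in range(L):
            s = i * L + j
            if j + 1 < L and spins[s] == spins[s + 1] and rng.random() < p:
                union(s, s + 1)
            if i + 1 < L and spins[s] == spins[s + L] and rng.random() < p:
                union(s, s + L)
    # flip each Fortuin-Kasteleyn cluster as a whole with probability 1/2
    flip = {}
    for s in range(L * L):
        r = find(s)
        if r not in flip:
            flip[r] = rng.random() < 0.5
        if flip[r]:
            spins[s] = -spins[s]
    return spins
```

Because whole Fortuin-Kasteleyn clusters are flipped at once, successive configurations decorrelate much faster near Tc than with single-spin Metropolis updates.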

[Figure: percolation probability ⟨P⟩ versus temperature, after 5000 sweeps]

In the above figure we show results of a Monte Carlo simulation for the percolation probability ⟨P⟩ versus temperature, for lattice sizes L between 100 and 800.

Q-Potts model and foams

Potts models are generalizations of the Ising model of magnetization. Spins are placed on a two- or three-dimensional lattice and may take on Q different values. They interact with the


(nearest) neighbors via the Hamiltonian

H = −J ∑_{⟨i,j⟩} δ_{x_i,x_j} , (4.94)

where i and j represent two neighboring lattice sites, and x ∈ {1, 2, ..., Q}. We can use standard Metropolis Monte Carlo to study the system at a given temperature T. At large values of the temperature the spins are in a disordered state; decreasing T leads to the formation of domains, that is, regions where all spins are in the same state (the same one of the Q values). The shape of the domains depends on Q. Only large values of Q lead to the formation of boundaries that resemble the shape of foam cells. The underlying lattice in a Potts model simulation is also of importance, since its anisotropy contrasts with the isotropy of a foam. The anisotropy in the Potts model can be reduced by including next-nearest-neighbor interactions in the Hamiltonian. The evolution of the cellular structure is then found to be foam-like: it enters a scaling state where the average cell diameter grows linearly in 'time'.
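A minimal Metropolis sweep for the Hamiltonian (4.94) might look as follows in Python (a sketch, not the notes' code: periodic boundary conditions, units with kB = 1, and the function name potts_metropolis_sweep are our assumptions):

```python
import math
import random

def potts_metropolis_sweep(x, L, Q, J, T, rng):
    # One Metropolis sweep (L*L attempted moves) for the Q-state Potts
    # model on an L x L lattice, H = -J * sum over <i,j> of delta(x_i, x_j).
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        old = x[i][j]
        new = rng.randrange(1, Q + 1)        # proposed new state
        dE = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = (i + di) % L, (j + dj) % L   # periodic neighbors
            s = x[ni][nj]
            dE += -J * ((new == s) - (old == s))  # energy change of the move
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            x[i][j] = new                    # accept
    return x
```

Starting from a random configuration at high T and slowly lowering T makes the domain formation described above directly visible in the evolving lattice.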

Random site percolation model

Percolation is concerned with connectedness. P.G. de Gennes, winner of the 1991 Nobel Prize in Physics for his profound work on the theoretical physics of disordered materials, has described the percolation transition in the following way: "many phenomena are made of random islands and in certain conditions, among these islands, one macroscopic continent emerges." Percolation phenomena are widespread in nature. One of the most common places to observe percolation is in the kitchen, when jello is prepared or when milk curdles. This sol-gel transition, as it is called, occurs in chemical systems (polymerization reactions and the vulcanization of rubber), biological systems (the immunological antibody-antigen reaction and the coagulation of blood), and physical systems (critical phenomena). We will look at the clustering that occurs in the random site percolation model.

The random site percolation model consists of an m × m random Boolean lattice, that is, a lattice whose sites have values 0 and 1, where 0 represents an empty site and 1 represents an occupied site. The probability p of a site being occupied is independent of its neighbors. A cluster is defined as a group of occupied nearest-neighbor sites (sites above, below, left, or right of a site).


The standard description of the program is:

1) Each site on a two-dimensional m × m square lattice is randomly assigned a value a between 0 and 1.

2) If a ≤ p, its value is changed to 1; otherwise its value is changed to 0.

Once the clusters are identified numerically, there are a number of cluster-related quantities that are interesting to look at, such as the spatial characteristics of clusters (e.g., their fractal dimensions) as a function of p, and the percolation threshold pc, which is the value of p at which a spanning cluster (an uninterrupted path across the lattice) first appears.

Hoshen-Kopelman algorithm

The algorithm starts from an m × m Boolean lattice r.

1) A list u is created by adding (a) a zero to the front of each column of r, and (b) a row of (m + 1) zeros to the front of r.

2) A list ul of the same size as u is created, consisting of all zeros.

3) An empty list ulp is created.

4) The list u is scanned in 'typewriter' fashion, starting from position (2, 2) and proceeding along each row in succession, going from the second element to the last element in the row. The elements in ul and ulp are changed during the scan of u according to the criteria that follow. Note: in the following step we refer to the elements in u and ul as sites. A site with a value of zero is said to be empty, and a site with a nonzero value is said to be occupied. Additionally, we write u− for the site u_{i,j−1} lying to the left of u_{ij}, u+ for the site u_{i−1,j} lying above u_{ij}, ul− for the site ul_{i,j−1} lying to the left of ul_{ij}, and ul+ for the site ul_{i−1,j} lying above ul_{ij}.

4a) At each site u_{ij}, look to see whether it is occupied or empty, and do the following:

4a1) If u_{ij} is empty, go on to the next site.

4a2) If u_{ij} is occupied, look at the nearest-neighbor site in the previous column, u−, and the nearest-neighbor site in the previous row, u+, and do the following:

4a2a) If both u− and u+ are empty, set ul_{ij} equal to one more than the current maximum value in ul, and then add this new maximum value of ul to ulp.

4a2b) If only one of the sites u− and u+ is occupied, set ul_{ij} equal to the nonzero value of ul+ or ul−.

4a2c) If both u− and u+ are occupied, set ul_{ij} equal to the smaller of ulp_{ul+} and ulp_{ul−}, and set the value of the position in ulp holding the larger of the values ulp_{ul+} and ulp_{ul−} equal to the smaller of these values.


5) After the scan in step 4 is completed, relabel the occupied sites in ul so that connected sites (i.e., occupied sites adjacent to occupied sites) have the same number.

6) Change the cluster numbers in ul so that they run sequentially, without any gaps.
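The steps above can be sketched in Python as follows (a hedged illustration rather than the notes' own program: the function name hoshen_kopelman, zero-based indexing without the explicit padding rows of step 1, and an alias list playing the role of ulp are our choices):

```python
def hoshen_kopelman(r, m):
    # Single-pass cluster labeling of an m x m Boolean lattice r.
    labels = [[0] * m for _ in range(m)]
    alias = [0]                       # alias[k]: proper label for k ('ulp')
    def proper(k):
        while alias[k] != k:
            k = alias[k]
        return k
    for i in range(m):                # 'typewriter' scan
        for j in range(m):
            if not r[i][j]:
                continue
            up = proper(labels[i - 1][j]) if i > 0 and labels[i - 1][j] else 0
            left = proper(labels[i][j - 1]) if j > 0 and labels[i][j - 1] else 0
            if not up and not left:           # case 4a2a: new cluster label
                alias.append(len(alias))
                labels[i][j] = len(alias) - 1
            elif up and left:                 # case 4a2c: merge two labels
                lo, hi = min(up, left), max(up, left)
                alias[hi] = lo                # larger label points to smaller
                labels[i][j] = lo
            else:                             # case 4a2b: inherit the label
                labels[i][j] = up or left
    # steps 5 and 6: resolve aliases and make labels sequential without gaps
    seq = {}
    for i in range(m):
        for j in range(m):
            if labels[i][j]:
                k = proper(labels[i][j])
                labels[i][j] = seq.setdefault(k, len(seq) + 1)
    return labels
```

The lattice is scanned only once; the final relabeling pass over the label array resolves the merged labels and renumbers the clusters 1, 2, 3, ...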

EXERCISE HOSHEN-KOPELMAN ALGORITHM

Write a program to perform the labelling. Use a well-known method, the Hoshen-Kopelman algorithm (named after its creators), which involves scanning the lattice just once.

This computation can be carried out in a single stroke using a one-liner program.


Part 5

Fokker-Planck equations

Let us recall the correspondence between Langevin and Fokker-Planck equations motivated in Sec. 4.1.2. The Langevin equation for a stochastic variable X(t) reads

Ẋ = −A(X, t) + B(t) · η , B · B† ≡ D , (5.1)

with (usually) a white-noise vector η. The Fokker-Planck equation for the distribution function p(x, t) reads

∂t p(x, t) = [ (∂/∂x) · A(x, t) + (1/2) (∂/∂x)(∂/∂x) : D(t) ] p(x, t). (5.2)

Both equations exhibit the same solutions, because the underlying processes are characterized by the same 'first and second moments'. The terms carrying A describe the deterministic part of the motion, the remaining terms the dissipative processes. In the following sections we would like to see under which circumstances we can solve the stochastic differential equation analytically. If no analytic solution is available, one can either solve the Langevin equation using Brownian dynamics, or solve the Fokker-Planck equation using a suitable set of basis functions, or derive from it a coupled set of equations for the moments of the distribution function. This coupled set can eventually be solved subject to a 'closure approximation'.

Example: no noise - deterministic equations

Consider the system

dx/dt ≡ ẋ = −A(t). (5.3)

The distribution

p(x, t) = δ(x − x(t)) (5.4)

solves the Fokker-Planck equation [use ∂δ(x − x(t))/∂t = −ẋ(t) ∂δ(x − x(t))/∂x], with x(t) being the 'trajectory', i.e., the solution of ẋ = −A(t). Let the initial condition be denoted by x1 ≡ x(t1). The conditional probability reads

P(x2, t2 | x1, t1) = p(x2, t2) = δ(x2 − x(t2)). (5.5)


The first moment, cf. (4.10), yields the drift term:

∫ (x2 − x1) P(x2, t + τ | x1, t) dx2 = x(t + τ) − x(t) (5.6)
= ẋ(t) τ + O(τ²) (5.7)
= −A(t) τ + O(τ²). (5.8)

Accordingly, the second moment involves the stochastic term:

∫ (x2 − x1)(x2 − x1) P(x2, t + τ | x1, t) dx2 = ẋ(t) ẋ(t) τ² + O(τ³) (5.9)
= 0 · τ + O(τ²) (5.10)
= D(x, t) · τ + O(τ²) with D = 0, (5.11)

again confirming our expectations.


5.1 Analytic solution of the Fokker-Planck equation

The Fokker-Planck equation is analytically solvable if

1. A(x, t) is linear in x, and

2. D is independent of x ('additive noise').

In that case we have

A(x, t) ≡ A(t) · x , D ≡ D(t). (5.12)

Let us furthermore consider the initial condition

p(x, 0) = δ(x − x0). (5.13)

The solution of the Fokker-Planck equation can be expressed in terms of the first moment ⟨X⟩ and the second moment S ≡ ⟨XX⟩ of the distribution function:

p(x, t) = (2π)^{−d/2} [det S]^{−1/2} exp[ −(1/2)(x − ⟨X⟩) · S⁻¹(t) · (x − ⟨X⟩) ]. (5.14)

We hence need expressions for the two lowest-order moments.

First moment ⟨X⟩

Langevin equation (5.1) or Fokker-Planck equation (5.2) →

(d/dt) ⟨X⟩(t) = −A(t) · ⟨X⟩(t) + (B · ⟨η⟩ ≡ 0). (5.15)

Solution of (5.15):

⟨X(t)⟩ = Y(t) · x0 , Y = exp[ −∫₀ᵗ A(t′) dt′ ]. (5.16)

If the eigenvalues of A are positive, lim_{t→∞} ⟨X⟩ = 0. Be careful if [A(t), A(t′)] ≠ 0 (no common set of eigenvectors).

Second moment S

S ≡ ⟨XX⟩ ≡ ∫ x x p(x, t) dx.

Langevin equation (5.1) →

(d/dt) S = −A · S − S · A† + B · ⟨ηX⟩ + ⟨Xη⟩ · B† , (5.17)

Fokker-Planck equation (5.2) →

(d/dt) S = ∫ x x [ (∂/∂x) · A(t) · x + (1/2) D(t) : (∂²/∂x∂x) ] p(x, t) dx
= −⟨XX⟩ · A† − A · ⟨XX⟩ + D. (5.18)


(Assumption: p, p′, etc. vanish on the surface.) Solution of (5.18):

S(t) = Y · S(0) · Y† + ∫₀ᵗ Y(t) Y⁻¹(t′) D(t′) Y†⁻¹(t′) Y†(t) dt′.

Comparing (5.17) with (5.18):

⟨Xη⟩ = (1/2) B (Stratonovich interpretation).

Summary

Time-dependent and stationary solutions of the Fokker-Planck equation:

p(x, t) = (2π)^{−d/2} [det S]^{−1/2} exp[ −(1/2)(x − ⟨X⟩) · S⁻¹(t) · (x − ⟨X⟩) ] , (5.19)

p_stat(x) = (2π)^{−d/2} [det S(∞)]^{−1/2} exp[ −(1/2) x · S⁻¹(∞) · x ]. (5.20)

→ Eqs. (5.19), (5.20) are the analytic solutions for all models with

A(x, t) ≡ A(t) · x , D ≡ D(t).

Special case: A = 0, constant D, x0 = 0

Using the above results, we arrive at Y = 1, ⟨X(t)⟩ = x0 = 0, and S(t) = S(0) + ∫₀ᵗ D dt′ = S(0) + tD; with S(0) = 0 this gives S(t) = tD, which grows without bound (free diffusion, no stationary state), and thus

p(x, t) = (2πt)^{−d/2} [det D]^{−1/2} exp[ −(1/(2t)) x · D⁻¹ · x ]. (5.21)

Next, we apply the analytic expressions to some cases of potential interest, or rather, we give some simple applications of the more general results obtained.
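Before moving on, this special case is easily checked numerically. A minimal one-dimensional Python sketch (our own illustration; the explicit Euler scheme, the function name, and all parameter values are assumptions) integrates dX = √D η √dt for an ensemble of walkers and compares the sampled second moment with S(t) = tD:

```python
import random

def simulate_free_diffusion(D, dt, steps, walkers, seed=0):
    # dX = sqrt(D) * eta * sqrt(dt): A = 0, constant scalar D, x0 = 0.
    rng = random.Random(seed)
    x = [0.0] * walkers
    for _ in range(steps):
        for k in range(walkers):
            x[k] += (D * dt) ** 0.5 * rng.gauss(0.0, 1.0)
    t = steps * dt
    second_moment = sum(v * v for v in x) / walkers
    return t, second_moment    # expect second_moment close to S(t) = t * D
```

For D = 2 and t = 1 the sampled second moment comes out close to 2, within the statistical error of the finite ensemble.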


5.2 Illustration of the Fokker-Planck equation

5.2.1 1D harmonic oscillator with external stochastic force (noise)

The equations of change for the one-dimensional harmonic oscillator (harmonic pendulum) subject to damping (wind, or resistance in an electric circuit) can be cast into the following form: m Ẍ + γ Ẋ + m ω² X = σ ηt, or equivalently,

ṙ = (1/m) p ,
ṗ = −(γ/m) p − m ω² r + σ ηt .

With the abbreviation x = (r, p)†, comparison with (5.1) yields

Ẋ = −A(t) · X + B(t) · η

with

A = ( 0 , −1/m ; mω² , γ/m ) , B = ( 0 , 0 ; 0 , σ ) → D = BB† = ( 0 , 0 ; 0 , σ² ) ,

where matrices are written row by row, the rows separated by semicolons.

As we know from Sec. 5.1, we need to calculate the eigenvalues of A in particular. These are

λ± = γ/(2m) ± √[ (γ/(2m))² − ω² ] ,

and we can apply the above analytic formulae to write down, in particular, the stationary solution for the first moment:

Re(λ) > 0 for γ > 0 → lim_{t→∞} ⟨(r, p)†⟩(t) = 0.

For the second moment we obtain, again for the stationary case (Ṡ = 0),

A · S + S · A† = D → S = ( σ²/(2γmω²) , 0 ; 0 , σ²m/(2γ) ).

Inserting the second moment S into the analytic solution (5.20) yields

p_stat(r, p) = C exp[ −(γ/(σ²m)) p² − (mω²γ/σ²) r² ].

We see that upon increasing the drag coefficient γ the phase-space (r, p)-distribution becomes more sharply peaked around p = r = 0. The second moment S (diffusion) is proportional to σ². Let us further make use of the equipartition theorem,

⟨E_kin⟩ = (1/2) kB T ,

and the statistical definition of the kinetic energy,

⟨E_kin⟩ = (1/(2m)) ⟨p²⟩ .


Upon inserting the above solution, namely the S22-component of S, we obtain

(1/(2m)) ⟨p²⟩ = (1/(2m)) S22 = σ²/(4γ). (5.22)

Equating this with (1/2) kB T, we have thus obtained a so-called fluctuation-dissipation theorem, which here reads

σ² = 2 kB T γ. (5.23)

To summarize, the stationary distribution function has been obtained to read

p_stat(r, p) = (1/Z) exp( −[ p²/(2m) + (1/2) m ω² r² ] / (kB T) ) = (1/Z) e^{−H/kBT} , (5.24)

where the term in square brackets is the Hamilton function; this is the canonical distribution function to be expected for the canonical ensemble (constant temperature), and Z is the normalization constant.
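These stationary moments can be verified by a short Brownian dynamics run. The following Python sketch (our own illustration: the Euler-Maruyama scheme, the function name, the burn-in fraction, and the parameter values are all assumptions) integrates ṙ = p/m, ṗ = −(γ/m)p − mω²r + ση and time-averages r² and p²:

```python
import random

def oscillator_moments(m, gamma, omega, sigma, dt, steps, seed=0):
    # Euler-Maruyama for dr = (p/m) dt,
    #                    dp = (-(gamma/m) p - m omega^2 r) dt + sigma dW.
    rng = random.Random(seed)
    r, p = 0.0, 0.0
    sum_r2 = sum_p2 = 0.0
    burn = steps // 10              # discard initial transient
    for n in range(steps):
        dW = dt ** 0.5 * rng.gauss(0.0, 1.0)
        r, p = (r + (p / m) * dt,
                p + (-(gamma / m) * p - m * omega ** 2 * r) * dt + sigma * dW)
        if n >= burn:
            sum_r2 += r * r
            sum_p2 += p * p
    n_avg = steps - burn
    # expectations: sigma^2/(2 gamma m omega^2) and sigma^2 m/(2 gamma)
    return sum_r2 / n_avg, sum_p2 / n_avg
```

For m = γ = ω = σ = 1 the S obtained above predicts ⟨r²⟩ = ⟨p²⟩ = 1/2, which the time averages reproduce within statistical error.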

5.2.2 Kirkwood process

Remember the equation of motion of a classical particle in a harmonic oscillator potential:

ṙ = (1/m) p ,
ṗ = −(γ/m) p − m ω² r + σ ηt .

Kirkwood (1946) considered the equation of motion of a Brownian particle suspended in a macroscopic homogeneous flow field. Let ⟨v⟩ = κ · r (linear, homogeneous flow field). Then

ṙ = (1/m) p ,
ṗ = −ζ ( (1/m) p − ⟨v⟩ ) + σ ηt = −(ζ/m) p + ζ κ · r + σ ηt ,

where (1/m) p − ⟨v⟩ is the peculiar velocity, and therefore

A = ( 0 , −1/m ; −ζκ , ζ/m ) , B = ( 0 , 0 ; 0 , σ ) → D = ( 0 , 0 ; 0 , σ² ).

Insertion into the Fokker-Planck equation (according to Eq. (5.2)),

∂t p(x, t) = [ (∂/∂x) · A · x + (1/2) (∂²/∂x∂x) : D ] p(x, t) , (5.25)

yields the following form (Kirkwood 1946):

∂f⁽¹⁾/∂t + (p/m) · (∂/∂r) f⁽¹⁾ = ζ (∂/∂p) · [ c f⁽¹⁾ + (σ²/(2ζ)) (∂/∂p) f⁽¹⁾ ] , σ²/(2ζ) = kB T , (5.26)

with x ≡ (r, p)†, f⁽¹⁾(r, p, t) ≡ p(x, t), and the peculiar velocity c ≡ p/m − ⟨v⟩. The Fokker-Planck equation can also be motivated by evaluating the corresponding Boltzmann equation.


5.2.3 Hookean dumbbell in shear flow

Model:

Ẋ = (∇v)† · X − (2H/ζ) X + σ ηt .

Shear flow (planar Couette geometry):

(∇v)† = γ̇ ( 0 , 1 , 0 ; 0 , 0 , 0 ; 0 , 0 , 0 ).

Making use of the analytic result,

p_stat(x) = C exp( −(H/(2kBT)) [ x² − 2αxy + (1 + 2α²) y² + (1 + α²) z² ] / (1 + α²) ) ≡ C exp[ −Φ(x) ] with x = (x, y, z)† ,

with α = λH γ̇ and λH = ζ/(4H). Definition of the orientational flux J:

∂t p(x, t) = −(∂/∂xi) Ji(x, t).

Stationary case: ∇ · J = 0, hence

J = ∇ × j , with j ∼ exp[ −Φ(x) ] e_z .

Explicitly:

Jx ∝ −(∂Φ(x)/∂y) exp[ −Φ(x) ] , Jy ∝ (∂Φ(x)/∂x) exp[ −Φ(x) ] .


5.2.4 Rigid dumbbell

A first guess for a model might be

u̇ = (∇v)† · u + σ ηt .

But then u² = 1 is not conserved, because

∂u²/∂t = 2 u · u̇ = 2 (∇v)† : uu ≠ 0.

Adjusted model:

u̇ = (∇v)† · u − ((∇v)† : uu) u .

The drift term of the Fokker-Planck equation then reads

A(u) = (∇v)† · u − ((∇v)† : uu) u . (5.27)

The Fokker-Planck equation for rigid dumbbells under flow:

∂f(u, t)/∂t = −[ (∂/∂u) · ( (∇v)† · u − ((∇v)† : uu) u ) − (1/2) σ² (∂²/∂u²) ] f (5.28)

≡ −ω · L f − L · (T f) + w L² f , (5.29)

with the rotational operator L ≡ u × ∂/∂u, T ≡ (1/2) L (uu : γ̄), where γ̄ denotes the symmetric traceless (anisotropic) part of the 2nd-rank tensor (∇v)†, the vorticity ω ≡ (1/2) ∇ × v, and w ≡ (1/2) σ².

5.2.5 Ellipsoid of revolution

For ellipsoids of revolution with symmetry axis u and axis ratio B (B > 0 prolate, B < 0 oblate) the Fokker-Planck equation has to be slightly modified:

∂f(u, t)/∂t = −ω · L f − B L · (T f) + w L² f . (5.30)

5.2.6 Multibead polymer chains, reptation

Reptative motion of an N-chain: u(s, t + Δt) ≈ u(s + Δs, t) ↔ one-dimensional diffusion along the chain's contour; f(u, t) → f(u, s, t) for model polymer fluids. Formally:

X = (u, s)† ,

where u is a unit vector characterizing the orientation of a segment and s is the contour position of that segment on the polymer chain:

u̇ = (∇v)† · u − ((∇v)† : uu) u + √(2w) η ,
ṡ = √(2D) η .

Comparing with the Langevin equation

Ẋ = −A(X, t) + B(t) · η , B · B† ≡ D ,

we therefore have

D = B · B† = ( 2w δ , 0 ; 0† , 2D ).

Fokker-Planck equation for polymers (e.g. Bird et al. 1987):

∂f(u, s, t)/∂t = −[ (∂/∂u) · A(u) − w (∂²/∂u²) − D (∂²/∂s²) ] f , (5.31)

where w is the orientational diffusion coefficient and D the one-dimensional (reptation) diffusion coefficient.


5.3 Alternate approach to Fokker-Planck equations

There is a 'hand-waving' correspondence between (equation of motion + Brownian force + continuity equation + the usual approximations) and Fokker-Planck equations.

Example: dumbbells.

Continuity equation in orientation space (Liouville):

∂t f(u, t) = −(∂/∂u) · (u̇ f) , (5.32)

Brownian force (Chandrasekhar):

F_B = −(1/2) σ² (∂/∂u) ln f , (5.33)

Hydrodynamic force (Stokes):

F_H ∼ −( u̇ − (∇v)† · u ). (5.34)

Inertia m ü is neglected (e.g. Doi, Bird), so the forces balance:

0 = m ü = F_B + F_H = −(1/2) σ² (∂/∂u) ln f − u̇ + (∇v)† · u

⇔ u̇ = (∇v)† · u − (1/2) σ² (∂/∂u) ln f . (5.35)

Inserting this expression for u̇ into the continuity equation (5.32), the Fokker-Planck equation for dumbbells appears:

→ ∂f(u, t)/∂t = −[ (∂/∂u) · (∇v)† · u − (1/2) σ² (∂²/∂u²) ] f , (5.36)

with a rotational diffusion coefficient ∝ σ², which enters the equation through F_B.


5.4 Rheological properties of polymer melts

The viscosity is related to the stress tensor via τ ≡ 2η γ̇. Formalisms involve an expression for the system's free energy (Hess 1981), or at least, empirically, the stress-optic rule: the 'alignment tensor' a, the symmetric traceless part of

⟨uu⟩ ∼ ⟨ ∑_s ∫ uu f(u, s, t) d²u ⟩ ,

is approximately proportional to the stress tensor, a ∼ τ. Multiplying the Fokker-Planck equation for polymer melts by uu and subsequently integrating over the unit sphere (d²u) yields an equation of change for the alignment tensor,

∂t a + 6w a − D (∂²/∂s²) a + ... = (2/5) γ̇ + (6/7) a · γ̇ + 2 ω × a , (5.37)

(which involves the Jaumann derivative, with γ̇ here denoting the symmetric traceless part of (∇v)† and ω the vorticity); this equation is coupled to an alignment tensor of rank 4 (contained in the dots; nonlinear in a). Special cases: the Maxwell model (∂τ/∂t)/G + τ/η = γ̇, the corotational Maxwell, Giesekus, and upper and lower convected models, ...

5.4.1 Linear viscoelasticity, oscillatory shear flow

Here γ̇ ∼ e^{−iωt} ∈ ℂ. The phase shift between stress and deformation implies η ∈ ℂ. The relevant components (3 of 5) for the plane Couette geometry are

a+ ≡ a_xy = a_yx , a− ≡ (1/2)(a_xx − a_yy) , a0 ≡ a_zz − 1/3 .

Flow alignment angle:

χ = π/4 − (1/2) tan⁻¹(a−/a+).

Within the Newtonian flow regime at small amplitudes there are no normal stress differences (a− ≈ 0 and a0 ≈ 0, respectively). Inserting the ansatz

a = (2/5) e^{−iωt} C , (5.38)

into (5.37) leads to

(6w − iω) C − D (∂²C/∂s²) = 1 , (5.39)

with boundary conditions

C(ω, s = 0) = C(ω, s = L) = τ_end . (5.40)


Compare Doi & Edwards (1986) and Bird et al. (1987): τ_end = 0. The solution reads

→ C(ω, σ) = λ [ 1/z² + (1/z² − g)( tanh(z/2) sinh(σz) − cosh(σz) ) ] , (5.41)

with the reduced contour coordinate σ = s/L, 0 ≤ σ ≤ 1, λ ≡ λ_Doi = L²/D, and

z ≡ √( τ⁻¹λ − iωλ ) , (5.42)

where g ≡ τ_end/λ is a dimensionless parameter which determines the properties of C. The viscosity follows as

η_a(ω, τ_end) = G_a ∫₀¹ C dσ = ... , (5.43)

and η_a(ω, τ_end = 0) is the viscosity as calculated by Doi and Edwards. Storage (G′) and loss (G″) moduli:

G ≡ iωη = G′ + iG″.

How does one get the parameters from experiment?

g = g(ω_s, ω″) , λ = λ(ω_s, g) , G_a = G_a(G_s, g).

Experimentally: τ_end ≈ const (independent of ω) ∼ N¹, g = τ_end/λ, λ ∼ N^{3.5} → g ∝ N^{−2.4}.


Part 6

Multiscale dynamics



6.1 Brownian dynamics

The basic idea of Brownian dynamics is to modify Newton's equations of motion, which conserve energy, for slowly varying degrees of freedom (e.g., phase-space trajectories of colloidal particles) in the presence of frequent 'collisions' with 'small' particles or a heat bath, in order to reach a thermal equilibrium where, for example, the temperature rather than the energy is constant. In an attempt to take into account these frequent collisions, and the randomization of momentum, one adds noise and friction to Newton's equations in such a way that the desired temperature is stabilized; noise and friction strength will therefore be coupled. The addition of pure noise would steadily increase the energy of the subsystem; the addition of pure friction would reduce it until the system comes to rest (which corresponds to zero temperature). By adding white noise we produce a Gaussian process whose stationary distribution function will be a Gaussian distribution, characterized by its first two moments, as we know from (3.10). The corresponding equation of change for the distribution function will be a diffusion (or Fokker-Planck) equation. To investigate moments of the distribution function we can either solve the Fokker-Planck equation, or numerically integrate the stochastic differential equation (the modified Newton equation) via Brownian dynamics simulation. Consider the equation of change for the stochastic variable X [Öttinger (1996), Honerkamp (1990), Kröger (2005)]:

dX = A(X, t) dt + B(X, t) · η(t) √dt , (6.1)

with a Gaussian white noise η ∼ WN(0, 1), cf. the foregoing section, a deterministic (Newtonian) 'drift' term A, and a noise strength B. In general, the noise strength may depend on phase-space coordinates and time. To simplify the following considerations, we will restrict ourselves to the case of constant B. The increment dX is then a normally distributed random variable with the following mean value and covariance matrix:

⟨dX⟩ = ⟨ A dt + B · η √dt ⟩ = A dt ,

K_XX = ⟨ (dX − ⟨dX⟩)(dX − ⟨dX⟩) ⟩ = B · 1 · Bᵀ dt = D dt , (6.2)

increments at different times being uncorrelated,

with D ≡ B · Bᵀ. In the so-called Ito interpretation, the stochastic differential equation (6.1) is equivalent to a Fokker-Planck equation for the distribution function p(x, t),

∂t p = −∇x · (A p) + (1/2) ∇x∇x : (D p) ; (6.3)

both equations (6.1) and (6.3) with D = B · Bᵀ predict the same moments.

6.1.1 Brownian dynamics simulation

In a Brownian dynamics simulation, phase-space trajectories X(t), i.e., realizations of the stochastic process characterized by A and B, are calculated according to (6.1). The time step Δt and the initial condition X(0) are additional parameters, and one has to test, as usual, the dependence on Δt. Averaged quantities ⟨Q(X)⟩(t) and their variances are obtained via

⟨Q⟩ = (1/N) ∑_{i=1}^{N} Q(X(i Δt_s)) ,

Δ²Q = ⟨Q²⟩ − ⟨Q⟩² = (1/N²) ∑_{i=1}^{N} [ Q(X(i Δt_s)) − ⟨Q⟩ ]² , (6.4)

where we assumed a constant sampling interval Δt_s, which need not coincide with the integration time step Δt, and averages are calculated from data collected during N such 'snapshots'. The Gaussian white noise can be implemented, for example, using the routines provided in [Öttinger (1996)].
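A minimal sketch of such a simulation loop in Python (our own illustration: the explicit Euler-Maruyama scheme, the function name brownian_dynamics, and the list-based linear algebra are assumptions, and B is taken constant as in the text):

```python
import random

def brownian_dynamics(A, B, x0, dt, steps, seed=0):
    # Integrate dX = A(X,t) dt + B . eta sqrt(dt), Eq. (6.1), for a
    # d-dimensional X. A maps (x, t) -> drift vector; B is a constant
    # d x d matrix of noise strengths.
    rng = random.Random(seed)
    d = len(x0)
    x = list(x0)
    traj = [tuple(x)]
    for n in range(steps):
        t = n * dt
        a = A(x, t)
        eta = [rng.gauss(0.0, 1.0) for _ in range(d)]    # WN(0,1) per component
        noise = [sum(B[i][k] * eta[k] for k in range(d)) for i in range(d)]
        x = [x[i] + a[i] * dt + noise[i] * dt ** 0.5 for i in range(d)]
        traj.append(tuple(x))
    return traj
```

With B = 0 the scheme reduces to an explicit Euler integrator for the deterministic drift, which provides a simple consistency check of the loop.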

6.1.2 Kramers process

To start with an illustrative example, we consider the 1D Kramers process, 'conveniently' written as (Newton plus friction plus noise)

m r̈ = −V′(r) − ζ ṙ + σ η,    (6.5)

with particle mass m, a potential V, coordinate r, momentum p, a friction coefficient ζ, and noise η. With X = (r, p), (6.5) is rewritten as

dX = ( p/m ; −∇_r V − (ζ/m) p ) dt + ( 0 0 ; 0 σ ) · ( 0 ; η_p ) √dt    (6.6)

where we introduce a WN(0,1) white noise η_p in order to specify the meaning of η, and make a few rearrangements to match the form (6.1), from which we immediately read off its elements

A = ( p/m ; −∇_r V − (ζ/m) p ),    (6.7)

B = ( 0 0 ; 0 σ ),  D = B · B^T = ( 0 0 ; 0 σ² ),    (6.8)

η = ( 0 ; η_p ).    (6.9)

If the Kramers process runs at constant temperature, we expect a stationary canonical distribution p*(x) = (1/Z) e^{−βH} with the Hamiltonian H(r, p) = K(p) + V(r), where the kinetic energy for a single particle of mass m is K(p) = p²/(2m). Hence

p*(x) = (1/Z) e^{−βH(x)} = (1/Z) e^{−βV(r)} e^{−βp²/(2m)}    (6.10)


104 PART 6. MULTISCALE DYNAMICS

should be a stationary solution of (6.3). To test that, we calculate the terms of (6.3) in considerable detail:

∂_t p* = −βp* Ḣ = −βp* (ṙ ∇_r H + ṗ ∇_p H) = 0,

∇_x p* = −βp* ∇_x H = −βp* ( ∇_r V ; ∇_p K ) = −βp* ( ∇_r V ; p/m ),

−∇_x · (A p*) = −p* ∇_x · A − A · ∇_x p*
             = (ζ/m) p* + βp* ( p/m ; −∇_r V − (ζ/m) p ) · ( ∇_r V ; p/m )
             = (ζ/m) p* (1 − βp²/m),

∇_x · (D p*) = p* ∇_x · D + D · ∇_x p* = ( 0 ; −βpσ²p*/m ),

(1/2) ∇_x∇_x : (D p*) = p* (βσ²/(2m²)) (βp² − m).    (6.11)

Equation (6.3) yields, upon inserting the canonical distribution, the following relationship between ζ and σ,

σ² = 2ζ/β = 2ζ k_B T    (6.12)

for the case of homogeneous noise, independent of the choice of the potential V, and the Kramers process reads

m r̈ = −V′(r) − ζ ṙ + √(2ζk_B T) η.    (6.13)

This equation can be solved either by integrating (6.1) using random numbers, or by solving the Fokker-Planck equation (6.3) with A and D given by (6.7) and (6.8) with σ from (6.12). The stationary distribution function is a canonical distribution function p*(r, p) ∝ e^{−βH(r,p)}, as we have shown. Accordingly, the results from Sec. 4.2.11 such as ⟨p²⟩ = 3mk_BT apply.
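As a numerical cross-check of the fluctuation-dissipation relation (6.12), the following Python sketch (ours; the notes' own Matlab routine appears below) integrates the 1D Kramers process in a harmonic potential V(r) = Hr²/2. In one dimension the canonical result is ⟨p²⟩ = mk_BT (the 3mk_BT quoted above refers to three dimensions); parameter names and defaults are our own.

```python
import math
import random

def bd_kramers(zeta=1.0, m=1.0, kT=1.0, H=1.0, dt=1e-2, steps=400_000, seed=1):
    """Euler integration of the 1D Kramers process with V(r) = H r^2 / 2.

    Uses sigma = sqrt(2 zeta kT) from the fluctuation-dissipation relation
    (6.12) and returns the time-averaged p^2, which should approach m*kT.
    """
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * zeta * kT)
    r, p = 0.0, 0.0
    acc, n = 0.0, 0
    for step in range(steps):
        dr = (p / m) * dt
        dp = (-H * r - (zeta / m) * p) * dt + sigma * rng.gauss(0, 1) * math.sqrt(dt)
        r, p = r + dr, p + dp
        if step >= steps // 10:          # discard the initial transient
            acc += p * p
            n += 1
    return acc / n

avep2 = bd_kramers()
```

With the default parameters, avep2 should come out close to m·kT = 1, up to statistical and O(Δt) Euler error.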

EXERCISE HARMONIC OSCILLATOR WITH NOISE

Study the dynamics of a 1D harmonic oscillator V(x) = (1/2)mω²x² with and without noise via Brownian dynamics; extract the time-averaged first moment ⟨x⟩, second moment ⟨x²⟩, and higher moments for several temperatures. Compare stationary results with the values for moments derived from the corresponding canonical distribution. I

CODE KRAMERS PROCESS, HARMONIC POTENTIAL, EXTRACT ⟨p²⟩

function BDkramers(zeta,m,kT,H,dt,BDsteps)
sigma = sqrt(2*zeta*kT);
AX = [0 1/m; -H -zeta/m];
B = [0 0; 0 sigma];
x = [0;0]; avep2 = 0;
for BDstep = 1:BDsteps,
  x = x + (AX*x)*dt + B*[0; randn(1,1)]*sqrt(dt);
  avep2 = avep2 + x(2)^2;
  figure(1); plot(x(1),x(2),'.'); hold on;


end;
avep2/BDsteps I

[Figure: phase-space scatter plot (r, p) produced by BDkramers]

6.1.3 Brownian dynamics of a FENE dumbbell

A finitely extensible nonlinear elastic (FENE) dumbbell is one of the simplest models approximately describing the physics of a polymer confined in a surrounding macroscopic fluid with velocity field v. A dumbbell consists of two mass points at positions x_1,2 connected by a spring (connector Q = x_2 − x_1), with FENE interaction force

F(Q) = −HQ / (1 − Q²/Q₀²).    (6.14)

The FENE force simplifies to a Hookean force for Q₀ → ∞ (no maximum extension; in this case we recover a Kramers process). We consider a homogeneous velocity field, v = κ · r, i.e., κ = (∇v)^T is constant. The equation of motion consists of a sum of the interaction force (F), Stokes friction proportional to velocity (friction coefficient ζ), and stochastic forces as follows,

m ẍ_i = −∇_{x_i} V − ζ[ẋ_i − v(x_i)] + σ η_i,  i ∈ {1, 2},

m Q̈ = m ẍ_2 − m ẍ_1 = 2F(Q) − ζ(Q̇ − κ · Q) + 2σ η.    (6.15)

In the relaxed momentum approximation, we neglect the acceleration term Q̈, and have

Q̇ = κ · Q + (2/ζ) F(Q) + (2σ/ζ) η.    (6.16)

This equation can be solved using a simple Euler scheme, cf. (6.6),

Q(t + Δt) = Q(t) + [κ · Q(t) + (2/ζ) F(t)] Δt + √(4k_B T Δt/ζ) η,    (6.17)

where η is a (Gaussian) white noise random vector with three independent components WN(0,1), and where we made use of the result of the foregoing section, σ² = 2k_BTζ. The friction coefficient can be related to the viscosity of the solvent; the relationship depends on the underlying model.
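One step of the Euler scheme (6.17) can be sketched as follows (Python, ours). We choose the reduced units H = kT = 1 and ζ = 4, so that λ_H = ζ/(4H) = 1; with that choice (2/ζ)F(Q) = −Q/(2(1 − Q²/Q₀²)) and the noise amplitude √(4k_BTΔt/ζ) reduces to √Δt per Cartesian component. The equilibrium comparison value quoted in the comment is the standard FENE result ⟨Q²⟩ = 3b/(b+5) in units of kT/H, with b = HQ₀²/kT.

```python
import math
import random

_rng = random.Random(2)

def fene_step(Q, dt, gammadot, Q0sq=50.0):
    """One Euler step (6.17) for a FENE dumbbell connector Q (3-vector).

    Reduced units (our choice): H = kT = 1, zeta = 4, i.e. lambda_H = 1.
    kappa . Q = gammadot * Q[1] * e1 for simple shear flow.
    """
    Qsq = sum(q * q for q in Q)
    stretch = 1.0 / (1.0 - Qsq / Q0sq)       # FENE divergence factor
    flow = [gammadot * Q[1], 0.0, 0.0]       # kappa . Q
    amp = math.sqrt(dt)                      # sqrt(4 kT dt / zeta) in our units
    return [Q[i] + (flow[i] - 0.5 * stretch * Q[i]) * dt + amp * _rng.gauss(0, 1)
            for i in range(3)]

# Equilibrium check (no flow): for b = H Q0^2 / kT = 50 the known FENE result
# is <Q^2> = 3b/(b+5) = 30/11 ~ 2.73 (the Hookean limit would give 3).
Q = [0.0, 0.0, 0.0]
acc, n = 0.0, 0
for step in range(200_000):
    Q = fene_step(Q, 1e-2, 0.0)
    if step >= 20_000:
        acc += sum(q * q for q in Q)
        n += 1
mean_Qsq = acc / n
```

Passing a nonzero gammadot turns on the shear term needed for the start-up flow exercise below; only the equilibrium case is verified here.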


EXERCISE BROWNIAN DYNAMICS OF FENE DUMBBELLS SUBJECTED TO SHEAR FLOW

Develop a routine for Brownian dynamics simulations of FENE dumbbells for start-up of steady shear flow based on (6.17). Use units of time, length, and mass such that λ_H ≡ ζ/(4H) = 1, k_BT/H = 1, and nk_BT = 1, where n is the number density of the (non-interacting) dumbbells. Shear flow (with flow e₁-, flow gradient e₂-, vorticity e₃-direction) is characterized by a shear rate γ̇, and the macroscopic flow gradient reads κ = γ̇ e₁e₂, i.e., κ · Q = γ̇ Q₂ e₁. Extract the squared extension ⟨Q²⟩ and the orientation tensor (to be measured experimentally via optical birefringence or dichroism) S ≡ ⟨Q̂Q̂⟩ with Q̂ = Q/|Q| as function of time. I

6.1.4 White noise

White noise is a random signal (or process) with a flat power spectral density. In other words, the signal's power spectral density has equal power in any band of a given bandwidth, at any centre frequency. An infinite-bandwidth white noise signal is purely a theoretical construct: having power at all frequencies, the total power of such a signal is infinite. In practice, a signal can be 'white' with a flat spectrum over a defined frequency band. A random vector η is a white noise vector with variance σ² [so-called WN(0,σ)] if and only if its mean vector and autocorrelation matrix are the following:

⟨η⟩(t) = 0,

⟨η(t) η(t′)⟩ = σ² 1 δ(t − t′).    (6.18)

The above autocorrelation function (covariance matrix K_ηη) implies a power spectral density S(ω) = σ², since the Fourier transform of the delta function is equal to 1. Since this power spectral density is the same at all frequencies, we call it white in analogy to the frequency spectrum of white light.

K_ηη ≡ ⟨(η(t) − ⟨η⟩)(η(t′) − ⟨η⟩)⟩ = ⟨η(t) η(t′)⟩.    (6.19)

Suppose that a random vector X has covariance matrix K_XX. Since this matrix is Hermitian symmetric and positive semidefinite, by the spectral theorem from linear algebra we can diagonalize or factor the matrix in the following way,

K_XX = E · λ · E^T δ(t − t′),    (6.20)

where E is the orthogonal matrix (E^T = E^{−1}) of eigenvectors and λ is the diagonal matrix (→ λ = λ^T) of eigenvalues.

Random vector with given mean and covariance

We can simulate the 1st and 2nd moment properties of a random vector X with mean µ = ⟨X⟩ and covariance matrix K_XX via the following transformation of a white vector η WN(0,1):

X = H · η + µ,  H = E · λ^{1/2}.    (6.21)


Thus, the output of this transformation has expectation and covariance matrix

⟨X⟩ = ⟨H · η⟩ + µ = H · ⟨η⟩ + µ = µ,

K_XX = ⟨(H · η + µ − µ)(H · η)⟩ = H · ⟨ηη⟩ · H^T = E · λ^{1/2} · (E · λ^{1/2})^T δ(t − t′)
     = E · λ · E^T δ(t − t′),  q.e.d.    (6.22)

Whitening a random vector

We can also whiten a vector X with mean µ and covariance matrix K_XX = E · λ · E^T δ(t − t′) by performing the following calculation:

η = λ^{−1/2} · E^T · (X − µ).    (6.23)

Here is the proof:

⟨η⟩ = λ^{−1/2} · E^T · (⟨X⟩ − ⟨X⟩) = 0,

⟨ηη⟩ = λ^{−1/2} · E^T · ⟨(X − µ)(X − µ)⟩ · E · λ^{−1/2}
     = λ^{−1/2} · E^T · E · λ · E^T · E · λ^{−1/2} δ(t − t′) = 1 δ(t − t′),  q.e.d.    (6.24)
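The coloring transformation (6.21) is easy to verify numerically. A small self-contained sketch (ours, in Python): for a general K_XX one would call an eigensolver such as numpy.linalg.eigh, but for a 2×2 example the eigendecomposition can be written out by hand.

```python
import math
import random

# Target covariance K = [[2,1],[1,2]] has eigenvalues 3, 1 with orthonormal
# eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2) (columns of E).
lam = (3.0, 1.0)
s = 1.0 / math.sqrt(2.0)
E = ((s, s), (s, -s))
# H = E . lambda^{1/2}, eq. (6.21)
H = [[E[i][j] * math.sqrt(lam[j]) for j in range(2)] for i in range(2)]

rng = random.Random(3)
n = 200_000
c00 = c01 = c11 = 0.0
for _ in range(n):
    e0, e1 = rng.gauss(0, 1), rng.gauss(0, 1)   # white vector eta, WN(0,1)
    x0 = H[0][0] * e0 + H[0][1] * e1            # X = H . eta  (mu = 0)
    x1 = H[1][0] * e0 + H[1][1] * e1
    c00 += x0 * x0
    c01 += x0 * x1
    c11 += x1 * x1
cov = (c00 / n, c01 / n, c11 / n)               # empirical (K00, K01, K11)
```

The empirical covariance should reproduce K to within sampling error, which is the content of (6.22); applying (6.23) to the same samples would whiten them again.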

There is a huge range of noise-related topics, including active noise control, anti-noise, ambient noise, antenna noise, artificial noise, black noise, noise color, cosmic noise, noise reduction, fixed pattern noise, noise shaping, etc.

6.2 Molecular dynamics

The microscopic state of an N-particle system in d dimensions may be specified in terms of (generalized) coordinates x and momenta p, both Nd-dimensional vectors,

x = (x₁, x₂, .., x_N),

p = (p₁, p₂, .., p_N).    (6.25)

In the classical description, the system is fully characterized through its Hamiltonian H(x,p), the system's total energy, which often splits into kinetic (K) and potential (V) contributions,

H(x,p) = V (x) + K(p). (6.26)

Further, the kinetic part is typically a bilinear form in p; specifically, with mass m_i of particle i,

K = ∑_{i=1}^N p_i²/(2m_i),    (6.27)

and the potential part is a sum of single-body, two-body, and many-body interactions, the latter represented by dots,

V(x) = ∑_{j=1}^N V₁(x_j) + ∑_j ∑_{k>j} V₂(x_j, x_k) + ...    (6.28)


For the important case of a radially symmetric two-body interaction between particles, and vanishing multi-body interactions, we have

V(x) = ∑_{j=1}^N V₁(x_j) + ∑_j ∑_{k>j} V₂(|x_j − x_k|).    (6.29)

Once the interactions V have been modeled or specified, we can start the machinery of classical mechanics. The equations of motion for the phase space coordinates x, p are Hamilton's first-order differential equations

(d/dt) x_i = ∇_{p_i} H = p_i/m_i,    (6.30)

(d/dt) p_i = −∇_{x_i} H = −∇_{x_i} V(x),    (6.31)

which can be derived from d'Alembert's principle or the principle of least action, rewritten in terms of a Poisson bracket, or obtained by extremizing the Lagrange function L(x, ẋ) = K(ẋ) − V(x) and defining H(x,p) = ẋ · p − L with canonical momentum p ≡ ∇_ẋ L, to arrive at second-order differential equations (Newton's equations, where the forces are defined as F = −∇_x V). With the potential V of the form (6.29), (6.31) becomes, with the abbreviation x_ik ≡ x_k − x_i,

(d/dt) p_i = −∇_{x_i} V₁(x_i) − ∑_{k≠i} V₂′(|x_ki|) x_ki/|x_ki|,    (6.32)

since, with V₂′(r) = ∂V₂/∂r and ∂x_i/∂x_j = 1 δ_ij,

∇_{x_i} V₂(|x_kj|) = V₂′(|x_kj|) (∂(x_kj)/∂x_i) · x_kj/|x_kj| = V₂′(|x_kj|) (δ_ij − δ_ik) x_kj/|x_kj|,

∑_j ∑_{k>j} ∇_{x_i} V₂(|x_kj|) = ∑_j ∑_{k>j} V₂′(|x_kj|) (δ_ij − δ_ik) x_kj/|x_kj|
  = ∑_{k>i} V₂′(|x_ki|) x_ki/|x_ki| + ∑_{j<i} V₂′(|x_ji|) x_ji/|x_ji|
  = ∑_{k≠i} V₂′(|x_ki|) x_ki/|x_ki|.    (6.33)

A molecular dynamics simulation solves the coupled equations of motion (6.30) and (6.32) for coordinates and momenta. An idealized pair potential reflecting the salient features of real interactions in a general, empirical way is the Lennard-Jones potential

V₂^LJ(r) = 4ε [(σ/r)¹² − (σ/r)⁶ + c]    (6.34)

for r ≤ r_cut, and 0 otherwise. Here r_cut is a cutoff distance which will be useful to speed up the simulation at a moderate price. The Lennard-Jones potential should tend to zero at large


separations r; the coefficient c is therefore determined by c = −(σ/r_cut)¹² + (σ/r_cut)⁶. The energy (ε) and length (σ) parameters depend on the particular material under study. The mass of a particle m_i, which appears in the equation of motion, sets another characteristic scale. Altogether, at this level, and because simulations do not deal with dimensional numbers in general, we can introduce reference units for length (r_ref = σ), energy (E_ref = ε) and mass (m_ref) to deal with dimensionless quantities Q* ≡ Q/Q_ref, and to use a potential of the form

V₂^LJ(r*) = 4 [(r*)^{−12} − (r*)^{−6} − (r*_cut)^{−12} + (r*_cut)^{−6}].    (6.35)

The term needed to evaluate (6.32) reads

V₂^LJ′(|x*|) x*/|x*| = −24 [2|x*|^{−14} − |x*|^{−8}] x*    (6.36)
                     = −24 (|x*|²)^{−4} [2(|x*|²)^{−3} − 1] x*,    (6.37)

where (6.37) provides a hint towards an efficient implementation of (6.36): only even powers of |x*| are required, so no square root needs to be evaluated. In low dimensions, the forces can also be stored on a lattice rather than being evaluated at each step, to produce approximate results. Dimensionless simulation results for any quantity can be made dimensional by using the above three reference units for a particular material in a postprocessing step without any additional computational effort. This will be discussed in Sec. 6.3.1. An online facility is available at [Footnote].
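The square-root-free evaluation suggested by (6.37) can be sketched as follows (Python, ours; the cutoff shift c of (6.35) affects only the potential, not the force).

```python
def lj_force(dx, rcut=2.5):
    """Force on particle i for pair vector dx = x_i - x_k, in reduced LJ units.

    Implements (6.36)/(6.37): only even powers of the separation appear,
    so 1/r^2 is computed once and no square root is needed.
    """
    r2 = sum(c * c for c in dx)
    if r2 >= rcut * rcut:
        return [0.0] * len(dx)
    r2i = 1.0 / r2
    r6i = r2i * r2i * r2i
    pref = 24.0 * r2i * r6i * (2.0 * r6i - 1.0)   # equals -V'(r)/r
    return [pref * c for c in dx]
```

Sanity checks: the force vanishes at the potential minimum r = 2^{1/6} and beyond the cutoff, and is repulsive (directed along x_i − x_k) for r < 2^{1/6}.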

6.2.1 Finite difference methods

A molecular dynamics simulation should solve the coupled equations of motion (6.30) and (6.32) for coordinates and momenta. Note that the equations of motion imply that the total energy H and the center of mass momentum ∑_i p_i are conserved quantities, i.e., they keep their initial values at all times. Further, the equations of motion are time-reversible. Without any constraining mechanism, it is difficult to achieve exact conservation in a numerical scheme where finite differences are used to discretize the equations of motion. In order to discretize the differential equations, one usually writes down a Taylor series

x(t + Δt) = x(t) + v(t)Δt + (1/2)a(t)(Δt)² + (1/6)b(t)(Δt)³ + ...,

v(t + Δt) = v(t) + a(t)Δt + (1/2)b(t)(Δt)² + ...,

a(t + Δt) = a(t) + b(t)Δt + ...,

b(t + Δt) = b(t) + ...    (6.38)

etc., with velocity v = ẋ and acceleration a = v̇. One then replaces the acceleration by the force, and determines the best coefficients in predictor-corrector methods for a given 'order' in the time step Δt. There are several algorithms on the 'market', with their advantages and disadvantages concerning quality of conservation of energy and momentum, robustness, largest stable time step, simplicity of programming, low memory requirements, etc.


Velocity-Verlet algorithm

A popular algorithm which has proven highly reliable in several applications is the so-called velocity Verlet algorithm (which we will derive, for the interested reader, in Sec. 6.2.2):

x(t + Δt) = x(t) + v(t)Δt + (1/2)a(t)(Δt)²,

v(t + Δt) = v(t) + (1/2)[a(t) + a(t + Δt)]Δt,    (6.39)

from which the so-called Verlet algorithm is recovered by eliminating the velocities. In practice, the memory-saving implementation goes as follows. Having initialized coordinates and velocities x_i(t = 0) and v_i(t = 0), accelerations a_i(t) = −(1/m_i) ∇_{x_i} V(x(t)) are computed from the potential energy V(x) as shown above. Then

x_i(t + Δt) = x_i(t) + v_i(t)Δt + (1/2)a_i(t)(Δt)²,

v_i(t + (1/2)Δt) = v_i(t) + (1/2)a_i(t)Δt,

a_i(t + Δt) = −(1/m_i) ∇_{x_i} V(x(t + Δt)),

v_i(t + Δt) = v_i(t + (1/2)Δt) + (1/2)Δt a_i(t + Δt).    (6.40)

For the implementation we can remove all time arguments in the above set of equations (6.40). Having initialized x, v, and the forces for given x, the complete molecular dynamics 'repeat unit' reads, in vector notation,

x = x + vΔt + (1/2)(Δt)²a,
(* accumulate x-dependent averages *)
v = v + (1/2)aΔt,
a = −(1/m) ∇_x V(x),
v = v + (1/2)aΔt,
(* accumulate v-dependent averages, eventually rescale velocities *)    (6.41)

A useful time step Δt can be estimated as a fraction (say 3-5%) of the inverse Einstein frequency ω_E, which can be computed, as will be proven below (in Sec. 6.2.3), from the mean squared forces,

ω_E² = ⟨F²⟩/(3Nmk_BT),    (6.42)

during the course of a simulation.
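The repeat unit (6.41) can be sketched in a few lines (Python, ours), here for a single 1D degree of freedom with m = 1; the harmonic oscillator serves as a test case, since both the energy error and the phase error of velocity Verlet are well controlled.

```python
import math

def verlet_unit(x, v, a, accel, dt):
    """One pass of the molecular dynamics 'repeat unit' (6.41), 1D, m = 1."""
    x = x + v * dt + 0.5 * a * dt * dt
    v = v + 0.5 * a * dt
    a = accel(x)                 # forces from the freshly updated position
    v = v + 0.5 * a * dt
    return x, v, a

# Harmonic oscillator V = x^2/2: the exact solution is x(t) = cos(t), and the
# algorithm conserves the energy E = (v^2 + x^2)/2 up to small oscillations.
accel = lambda x: -x
x, v = 1.0, 0.0
a = accel(x)
dt, steps = 1e-3, 10_000
for _ in range(steps):
    x, v, a = verlet_unit(x, v, a, accel, dt)
energy = 0.5 * (v * v + x * x)
```

After 10 time units the energy should still be very close to its initial value 1/2, and x should stay close to cos(10); the average-accumulation and velocity-rescaling slots of (6.41) are omitted here for brevity.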

6.2.2 Symplectic integrators

Symplectic integrators are numerical integration schemes for Hamiltonian systems which conserve the symplectic two-form dp ∧ dq exactly, so that (q(0), p(0)) → (q(t), p(t)) is a canonical transformation. Symplectic geometry has its origins in the Hamiltonian formulation of classical mechanics, where the phase space of certain classical systems takes on the structure of a symplectic manifold. Much work in symplectic geometry and topology has


centered around investigating which manifolds admit symplectic structures. Any smooth real-valued function H on a symplectic manifold (= Hamiltonian phase space structure) can be used to define a Hamiltonian system. The symplectic vector field, also called the Hamiltonian vector field, induces a Hamiltonian flow on the manifold. The integral curves of the vector field are a one-parameter family of transformations of the manifold; the parameter of the curves is commonly called the time. The time evolution is given by symplectomorphisms. By Liouville's theorem, each symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced by the Hamiltonian flow is commonly called the Hamiltonian mechanics of the Hamiltonian system. The Hamiltonian vector field also induces a special operation, the Poisson bracket (a bracket {A,B} with the feature {A,B} = −{B,A}). The Poisson bracket acts on functions on the symplectic manifold, thus giving the space of functions on the manifold the structure of a Lie algebra. In particular, given a function A,

(d/dt) A = ∂_t A + {A, H}.    (6.43)

If we have a probability distribution f, then, since the phase space velocity (q̇_i, ṗ_i) has zero divergence ((∂_q, ∂_p) · (q̇_i, ṗ_i) = ∂_q H_p + ∂_p(−H_q) = H_qp − H_pq = 0) and probability is conserved, its convective derivative can be shown to be zero (df/dt = 0), and so

∂_t f = −{f, H}.    (6.44)

This is called Liouville's theorem. Every smooth function G over the symplectic manifold generates a one-parameter family of symplectomorphisms, and if {G, H} = 0, then G is conserved and the symplectomorphisms are symmetry transformations. Energy H is hence conserved if ∂_t H = 0, since {H, H} = 0. A Hamiltonian may have multiple conserved quantities G_i. If the symplectic manifold has dimension 2n and there are n functionally independent conserved quantities G_i which are in involution (i.e., {G_i, G_j} = 0), then the Hamiltonian is Liouville integrable. The Liouville-Arnol'd theorem says that locally, any Liouville integrable Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the quantities G_i as conserved coordinates; the new coordinates ϕ_i are called action-angle coordinates. The transformed Hamiltonian depends only on the G_i, and hence the equations of motion have the simple form

Ġ_i = 0,  ϕ̇_i = F(G),    (6.45)

for some function F (Arnold, 1968). There is an entire field focusing on small deviations from integrable systems, governed by the so-called KAM theorem. The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic; concepts of measure, completeness, integrability and stability are poorly defined. At this time, the study of dynamical systems is primarily a qualitative, not a quantitative science. Having introduced some notions used in this field, let us boil the task down to a more well-defined problem (the following was stated and partially solved by Yoshida in 1990).

Exact and formal solution

Introducing the notation z = (q, p), the Hamiltonian equation is written in the form ż = {z, H}, where braces stand for the Poisson bracket, {a, b} = a_q b_p − a_p b_q, with the notation


a_q = ∂a/∂q. If we introduce a differential operator D_G by D_G F = {F, G}, this is written as ż = D_H z, and using the product rule we can summarize, for arbitrary a(z) including a = z,

ȧ = q̇ a_q + ṗ a_p = H_p a_q − H_q a_p ≡ {a, H}(z) ≡ [H_p ∂/∂q − H_q ∂/∂p] a ≡ D_H a,    (6.46)

so the formal solution, or the exact time evolution of z(t) from t = 0 to t = τ, is given by

a(τ) ≡ a(z(τ)) = e^{τD_H} a(z(0)) ≡ [e^{τD_H} a(z)]_{z=z(0)}.    (6.47)

For a Hamiltonian of the form H(q, p) = T(p) + V(q), correspondingly D_H = D_T + D_V, cf. (6.46), and with a(z) = z we have the formal solution of the Hamiltonian equations of motion

z(τ) = [e^{τ(A+B)} z]_{z=z(0)},  where  A ≡ D_T = H_p ∂/∂q  and  B ≡ D_V = −H_q ∂/∂p,    (6.48)

such that Az = D_T z = (H_p, 0) = (T_p, 0) and Bz = D_V z = (0, −H_q) = (0, F) with force F = −V_q; more generally Aa = H_p a_q = q̇ a_q and Ba = −H_q a_p = ṗ a_p, then A²a = A(Aa) = H_p H_p a_qq = q̇² a_qq, similarly B²a = H_q H_q a_pp, while ABa = −H_p H_qq a_p − H_p H_q a_pq and BAa = −H_q H_pp a_q − H_q H_p a_qp; hence (AB − BA)a ≠ 0 in general.

EXERCISE LIOUVILLE OPERATOR

Implement the formal solution (6.48) of Hamilton's equations in Mathematica™ and calculate (i) the solution for z = (q, p) up to fifth order o(τ⁶), (ii) the same expression leaving the operators A and B unevaluated, (iii) the energy as function of time. I

CODE LIOUVILLE OPERATOR

In[] := solution[term_, tau_, order_, Aop_, Bop_] :=
  Sum[tau^j/j! (TMP = term; Do[TMP = Aop[TMP] + Bop[TMP], {k, 1, j}]; TMP),
    {j, 0, order}] /. q -> q[0] /. p -> p[0];
A[vector_] := D[H[q, p], p] D[vector, q];
B[vector_] := -D[H[q, p], q] D[vector, p];
H[q_, p_] := T[p] + V[q];
z = {q, p};
APPLICATION:
A[z]
MatrixForm[solution[z, tau, 5, A, B]];
MatrixForm[solution[z, tau, 5, Aop, Bop]];
ENERGY CONSERVATION:
solution[H[q, p], tau, 13, A, B] I


Problem

Let A and B be non-commuting operators and τ a small real number. For a given positive integer n, which will be called the order of the integrator, find a set of real numbers (c₁, c₂, .., c_k) and (d₁, d₂, .., d_k) such that the difference between the exponential function exp[τ(A + B)] and the product of exponential functions

e^{c₁τA} e^{d₁τB} e^{c₂τA} e^{d₂τB} × ... × e^{c_kτA} e^{d_kτB}    (6.49)

is of the order τ^{n+1}, i.e., the following equality holds,

e^{τ(A+B)} = ∏_{i=1}^k e^{c_iτA} e^{d_iτB} + o(τ^{n+1}).    (6.50)

Although the problem above is quite general, it is directly related to the symplectic integratorof the Hamiltonian system with H = T (p) + V (q), due to (6.48).

Symplectic mapping

Now suppose the problem is solved, and (c_i, d_i) is a set of real numbers which satisfy (6.50) for a given integer n. Consider a mapping from z(0) to z(τ) given by

z(τ) = ( ∏_{i=1}^k e^{c_iτA} e^{d_iτB} ) z(0) = e^{τ(A+B)} z(0) + o(τ^{n+1}).    (6.51)

This mapping is symplectic because it is just a product of elementary symplectic mappings,and approximates the exact solution (6.48) up to order o(τn+1). Furthermore, (6.51) is ex-plicitly computable although (6.48) is only formal. In fact (6.51) gives the succession of thek mappings,

qi = qi−1 + ciτ∂

∂pT (p)

∣∣∣∣p=pi−1

, qk ≡ q(τ) (6.52a)

pi = pi−1 − diτ∂

∂qV (q)

∣∣∣∣q=qi−1

, pk ≡ p(τ), i = 1.., k. (6.52b)

Hence Eq. (6.52) is symplectic and approximates the exact solution up to order o(τ^{n+1}), using (only) k mappings.
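The composition (6.52) is a one-liner per stage. A sketch (ours, in Python) with the drift step taken before the kick, the kick evaluated at the freshly updated coordinate:

```python
def compose_step(q, p, Tp, Vq, tau, c, d):
    """Succession of k drift/kick maps as in (6.52): q advances with c[i],
    then p with d[i], evaluated at the updated q.  Symplectic for any
    choice of the coefficients c, d."""
    for ci, di in zip(c, d):
        q = q + ci * tau * Tp(p)
        p = p - di * tau * Vq(q)
    return q, p

# c = (0, 1), d = (1/2, 1/2) reproduces velocity Verlet; test on the harmonic
# oscillator T = p^2/2, V = q^2/2, where the energy should stay near 1/2.
Tp = lambda p: p
Vq = lambda q: q
q, p = 1.0, 0.0
for _ in range(10_000):
    q, p = compose_step(q, p, Tp, Vq, 1e-3, (0.0, 1.0), (0.5, 0.5))
energy = 0.5 * (q * q + p * p)
```

Any other coefficient set satisfying the order conditions discussed below can be dropped in without changing the driver.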

EXERCISE RECURSIVE MAPPING

Find coefficients which make (6.52) and (6.48) identical up to order o(τn+1), with k = n, andk = 1 and k = 2 (by making use of the solution and definitions of the foregoing exercise). I

CODE RECURSIVE MAPPING AND SYMPLECTIC COEFFICIENTS

In[] := solutionREC[term_, tau_, Korder_, Aop_, Bop_] := (TMP = term;
  Do[TMP = TMP + tau c[j] Aop[TMP]; TMP = TMP + tau d[j] Bop[TMP], {j, 1, Korder}];
  Normal[Series[TMP /. q -> q[0] /. p -> p[0], {tau, 0, Korder}]]);
FINDCOEFF[Korder_] :=


(TAB = Table[D[Normal[Series[solution[z, tau, Korder, A, B] - solutionREC[z, tau, Korder, A, B], {tau, 0, Korder}]], {tau, J}] /. tau -> 0, {J, 0, Korder}];
Solve[TAB == 0 TAB, Flatten[Table[{c[j], d[j]}, {j, 1, Korder}]]])
APPLICATION:
FINDCOEFF[1]
FINDCOEFF[2] I

Example

In order to get some feeling for the procedure followed by the solution of the above exercises, we write down some equations for the special case n = k = 2. Equation (6.48) becomes

z(τ) = ( q(0) + τT_p(0) − (1/2)τ²V_q(0)T_pp(0),  p(0) − τV_q(0) − (1/2)τ²T_p(0)V_qq(0) ) + o(τ³),    (6.53)

which is exactly the output produced by the command MatrixForm[solution[z, tau, 2, A, B]]. We remind the reader that V_q ≡ V′(q), V_qq ≡ V″(q) etc., and V_q(0) ≡ V′(q)|_{q=q(0)}. On the other hand, we can leave (6.52) as is, recursively insert these equations into each other, or equally evaluate (6.51), to obtain, irrespective of the chosen approach,

z(τ) = ( q(0) + (c₁ + c₂)τT_p(0) − c₂d₁ τ²V_q(0)T_pp(0),  p(0) − (d₁ + d₂)τV_q(0) − (c₁d₁ + [c₁ + c₂]d₂) τ²T_p(0)V_qq(0) ) + o(τ³),    (6.54)

which is exactly the output produced by the command MatrixForm[FullSimplify[solutionREC[z, tau, 2, A, B]]]. Although the determination of coefficients looks straightforward, it becomes a heavy computational problem at large n. The solution is c₁ = d₁ = 1 for n = k = 1, and c₁ = (2d₁ − 1)c₂, c₂ = 1/(2d₁), d₁ = 1 − d₂ with arbitrary d₂ for n = k = 2. As we see, the solution is not unique, which is, of course, an advantage. For the particular choice d₂ = 1/2, which implies d₁ = d₂ = 1/2, c₂ = 1, and c₁ = 0, we exactly recover the velocity Verlet algorithm (6.40).

6.2.3 Einstein frequency and configurational temperature

The Einstein frequency is a measure for the frequency at which oscillations in the vicinity of a potential minimum occur in a canonical (N,V,T) ensemble. Each potential valley (we will consider a three-dimensional N-particle system in the following) can be approximated, to lowest order, by a harmonic potential of the (classical) form V(x) = (1/2)mω² x² incorporating an eigenfrequency ω. First of all, a straightforward calculation yields

∇_x V = mω² x,

Δ_x V = ∇_x · ∇_x V = mω² ∇_x · x = 3Nmω²,    (6.55)

which relates the frequency to the second derivative of the potential, V = (Δ_x V/(6N)) x², but also forms the basis to define the Einstein frequency as follows:

ω_E² = ⟨ω²⟩ = ⟨Δ_x V⟩/(3Nm),    (6.56)


where the average denotes an ensemble average and is now evaluated with the canonical distribution f(x,p) = Z^{−1} e^{−βH(x,p)}, where β ≡ (k_BT)^{−1} is the inverse thermal energy. With the help of the useful identity, valid for arbitrary A(x,p),

⟨A ∇_x H⟩ = k_BT ⟨∇_x A⟩,    (6.57)

and the definition F = −∇_x V = −∇_x H, we can rewrite the numerator in (6.56) as follows:

⟨Δ_x V⟩ = −⟨∇_x · F⟩ = −β ⟨F · ∇_x H⟩ = −β ⟨F · ∇_x V⟩ = β ⟨F²⟩.    (6.58)

This equality proves (6.42). The identity (6.57) is derived by first noticing that ∇_x(e^{−βH}) = −β(∇_x H) e^{−βH} by simple differentiation. We further need to insert the canonical distribution f and perform a single partial integration to prove (6.57) as follows:

⟨A ∇_x H⟩ = (1/Z) ∫ A (∇_x H) e^{−βH} dx dp = −(k_BT/Z) ∫ A (∇_x e^{−βH}) dx dp
          = (k_BT/Z) ∫ (∇_x A) e^{−βH} dx dp = k_BT ⟨∇_x A⟩.    (6.59)

We mention that another representation of the Einstein frequency can be obtained by relating β to the mean squared velocity in the canonical ensemble, which is

⟨v²⟩ = (1/Z) ∫ (p/m)² e^{−βH} dx dp
     = (1/Z) ∫ (p/m)² e^{−βK} dp ∫ e^{−βV} dx
     = [∫ (p/m)² e^{−βK} dp] / [∫ e^{−βK} dp] = 3Nk_BT/m.    (6.60)

We should point out that (6.58), which we obtained on the fly, is actually hiding a very important result. Once more replacing F by −∇_x V, (6.58) states that the temperature is related to purely configurational information as follows:

k_BT = ⟨(∇_x V)²⟩ / ⟨Δ_x V⟩.    (6.61)

This equals the definition of a 'configurational temperature', which can be used in an athermal Monte Carlo simulation to estimate the temperature based on the sampled configurational information alone; this is important for calculating free energies or entropies.

6.2.4 Periodic boundary conditions

Periodic boundary conditions and the nearest image convention are often used in the simulation of condensed, fluid, or gas phase systems if the bulk properties of materials are of interest. While particle coordinates can be stored at will (backfolded to the main simulation cell, or not), the nearest image convention enters the calculation of the forces F and the potential energy V. For a system with simulation box sizes L_µ, µ ∈ {x, y, z, ..}, and a


simulation box centered (!) at the origin (0), the nearest image convention is implemented as follows for the µ'th component x_ijµ of a connector x_ij between particles i and j:

x_ijµ → x_ijµ − L_µ × round(x_ijµ/L_µ),    (6.62)

where round(x) denotes the nearest integer to a floating point value x. The nearest image convention requires that the cutoff distance in the definition of the potential (6.34) is small compared with half the box size.
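Rule (6.62) translates into a one-line componentwise function (a Python sketch, ours):

```python
def nearest_image(xij, L):
    """Nearest image convention (6.62), componentwise, for a box centered at
    the origin with edge lengths L; each result component lies in [-L/2, L/2].
    """
    return [x - Li * round(x / Li) for x, Li in zip(xij, L)]
```

For example, in a box of edge 10, a connector component of 6.0 is mapped to −4.0, the nearest periodic image.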

6.2.5 Temperature control

In a many-body molecular dynamics simulation, temperature is accessible not only as the 'configurational temperature', but also from the velocities (6.60). We should mention here that 3N in (6.60) represents the number of degrees of freedom of the system, which, in view of center of mass momentum conservation, reduces to 3(N − 1), unless additional constraints decrease the number of degrees of freedom further. Newton's equations do not keep temperature constant, but preserve energy. In many practical applications, however, temperature rather than energy is held constant, in particular if nonequilibrium situations are considered (particles subjected to external fields, flow fields, electromagnetic fields, etc.). The simplest method to keep temperature constant is to rescale all velocities with the same factor,

v → v √(T_wanted/T_measured),    (6.63)

where T_wanted is the chosen temperature, and T_measured the one measured through the mean squared velocity according to (6.60), to be implemented as shown in (6.41).

6.2.6 Lees-Edwards boundary conditions

In order to study a many-particle system subjected to shear flow in a bulk situation where surfaces are not present, one has to move the periodic images of the central simulation cell in the direction of flow with a speed which corresponds to the applied shear rate. The nearest image convention still applies, and one has to insert a particle leaving the central simulation cell in the direction of the velocity gradient on the opposite side with an 'offset' which ensures that particle distances do not change their values during this operation (Lees-Edwards boundary conditions).

6.2.7 Neighbor lists

The computational bottleneck of a molecular dynamics simulation for a many-particle system is the calculation of all forces, which, by definition, requires performing a sum over all interacting pairs of particles. By performing a double sum over all N particles we can easily access all interacting pairs; such an algorithm scales as N². We can improve this scaling towards N¹ by introducing neighbor lists. Here, all particles are sorted into a grid (grid size g larger than the interaction potential cutoff distance r_cut), and interacting pairs of particles need to be searched only within the pool of particles in neighboring grid nodes. In


order to avoid re-'gridding' the particles in each simulation time step, g should be somewhat larger than r_cut, and we test, by inspecting displacements of particles relative to their coordinates at the time of the last gridding, whether an update is needed.
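The gridding step can be sketched as follows (Python, ours; a sketch of the cell-list idea, not an optimized implementation). With cell edge g ≥ r_cut, every pair closer than r_cut under the nearest image convention lies in the same or in one of the 26 adjacent cells, so only 27 cells are scanned per particle.

```python
def neighbor_pairs(coords, L, g):
    """Candidate interacting pairs via a cell grid, periodic cubic box [0,L)^3.

    Particles are sorted into cells of edge >= g; with g >= rcut, true
    neighbors are always found among the 27 surrounding cells, reducing the
    pair search from O(N^2) toward O(N).
    """
    ncell = max(1, int(L // g))
    cells = {}
    for i, x in enumerate(coords):
        key = tuple(min(ncell - 1, int((c % L) / L * ncell)) for c in x)
        cells.setdefault(key, []).append(i)
    pairs = set()
    for key, members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((key[0] + dx) % ncell, (key[1] + dy) % ncell,
                          (key[2] + dz) % ncell)
                    for i in members:
                        for j in cells.get(nb, ()):
                            if i < j:
                                pairs.add((i, j))
    return pairs

# four particles in a box of edge 10 with g = rcut = 2.5: particles 0, 1, 3
# are mutual neighbors (3 via the periodic boundary), particle 2 is isolated
coords = [(0.5, 0.5, 0.5), (1.0, 0.5, 0.5), (5.0, 5.0, 5.0), (9.8, 0.5, 0.5)]
pairs = neighbor_pairs(coords, 10.0, 2.5)
```

The returned pairs are candidates only; the force routine still applies the nearest image convention and the r_cut test to each of them.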

EXERCISE MOLECULAR DYNAMICS SIMULATION

Write a molecular dynamics code for a 3D Lennard-Jones system confined in a cubic box with periodic boundary conditions. The initial configuration should consist of random coordinates and normally distributed velocities (parameter: temperature T). Implement the velocity-Verlet algorithm (6.41). Implement neighbor lists to achieve a computing time per time step which is proportional rather than quadratic in the number of particles. Extract the pair correlation function defined in (4.60) and the total potential energy (4.58), and then compare (4.58) with (4.62). Also extract the mean squared velocity ⟨v²⟩, which is related to temperature.
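A minimal sketch of the integrator requested above; since Eq. (6.41) is not reproduced in this excerpt, the standard half-kick/drift/half-kick form of velocity-Verlet is assumed, and the harmonic force in the usage line is a stand-in that keeps the example self-contained:

```python
import numpy as np

def velocity_verlet(x, v, force, dt, nsteps, m=1.0):
    """Velocity-Verlet integration (sketch): half kick, drift, recompute
    forces, half kick. `force` can be any function f(x)."""
    f = force(x)
    for _ in range(nsteps):
        v = v + 0.5 * dt * f / m      # half kick
        x = x + dt * v                # drift
        f = force(x)                  # forces at the new positions
        v = v + 0.5 * dt * f / m      # half kick
    return x, v

# usage: harmonic oscillator as a stand-in for the LJ forces;
# the total energy is conserved to O(dt^2)
x, v = np.array([1.0]), np.array([0.0])
x1, v1 = velocity_verlet(x, v, lambda y: -y, dt=0.01, nsteps=1000)
```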

6.2.8 Long-range forces

So far, we have implicitly assumed that the interactions are finite-ranged, and that the cutoff distance rcut in the interaction potential (6.34) is small compared with half the box size; otherwise the minimum image convention becomes ill-defined. In order to treat long-ranged forces, the Ewald summation technique can be used [Allen and Tildesley (1989)].


6.3 Models of intermolecular interaction

The most commonly used LJ potential was given in (6.34). It contains a characteristic energy ε and length σ which, together with the mass m of a LJ particle, are used to define the dimensionless units employed in any simulation code.

6.3.1 Reduced units, reference units

The following considerations closely resemble the idea of so-called dimensional analysis. The dimensional quantities energy Eref = E∗ref kg m² s⁻², length rref = r∗ref m, and mass mref = m∗ref kg should not appear in a simulation code except in the very final postprocessing step, once adimensional numbers have been produced. Until then, they can be ignored, i.e., set to unity in all formulas. How does one recover dimensional quantities in the postprocessing step? One has to find three characteristic and independent quantities for the system under study, such as energy, length, and mass (or three others); we need exactly three, because three SI units kg, m, and s are involved. Let us now consider a dimensionless quantity Q∗ calculated by the simulation code. This quantity, due to the formulas used, is known to have the physical dimension [Q] in 'real life',

[Q] = kg^α m^β s^γ, (6.64)

which we generally express by three coefficients, here denoted as α, β, and γ. If the measured quantity is a velocity, then α = 0, β = 1, γ = −1. There is only one way to get the dimensions right, by equating as follows (with yet unknown coefficients A, B, C)

Qref = Eref^A rref^B mref^C = E∗ref^A (kg m² s⁻²)^A r∗ref^B m^B m∗ref^C kg^C = Q∗ref × kg^(A+C) m^(2A+B) s^(−2A) = Q∗ref × [Q], (6.65)

with Q∗ref = E∗ref^A r∗ref^B m∗ref^C.

By comparing the two versions of [Q], (6.64) and (6.65), we see that A = −γ/2, B = β + γ, and C = α + γ/2. Therefore, the dimensional quantity to be compared with experimental data is

Q = Q∗ × Eref^(−γ/2) rref^(β+γ) mref^(α+γ/2), (6.66)

where Q∗ is the adimensional quantity evaluated using reduced LJ units, Q has the dimension [Q] given by (6.64), and the reference quantities Eref etc. can be found, for various materials, in the literature. Specifically, the reference quantities for density, temperature, time and viscosity are nref = σ⁻³, Tref = ε/kB, tref = σ√(m/ε), and ηref = σ⁻²√(mε). Within the simulation we therefore deal exclusively with ε = σ = m = 1. The following table shows some reference quantities for several gases.

              Ar      Kr      Xe      CH4
σ/nm          0.34    0.36    0.41    0.38
Tref = ε/kB   120 K   170 K   220 K   150 K
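As a concrete illustration of (6.66), a reduced simulation result can be converted to SI units for argon; σ and ε/kB are taken from the table above, while the argon atomic mass (39.95 u) is a standard value supplied here, not listed in the table:

```python
kB = 1.380649e-23               # Boltzmann constant, J/K
E_ref = 120.0 * kB              # epsilon for argon (T_ref = 120 K, table above)
r_ref = 0.34e-9                 # sigma for argon in m (table above)
m_ref = 39.95 * 1.66054e-27     # argon atomic mass in kg (standard value)

def to_SI(Q_star, alpha, beta, gamma):
    """Q = Q* * E_ref^(-gamma/2) * r_ref^(beta+gamma) * m_ref^(alpha+gamma/2),
    where [Q] = kg^alpha m^beta s^gamma, following (6.64)-(6.66)."""
    return Q_star * E_ref**(-gamma / 2) * r_ref**(beta + gamma) * m_ref**(alpha + gamma / 2)

# the LJ time unit t_ref = sigma*sqrt(m/eps): alpha = 0, beta = 0, gamma = 1
t_ref = to_SI(1.0, 0, 0, 1)     # about 2.15e-12 s for argon
```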


6.3.2 WCA potential

The purely repulsive part of the LJ potential was used by Weeks, Chandler, and Andersen (WCA) to calculate properties of dense, simple systems. The WCA potential corresponds to the LJ potential cut off at the distance rcut = 2^(1/6) (and shifted so that it vanishes there).

6.3.3 SHRAT potential

The SHRAT potential offers an alternative to the full LJ potential. It is less long-ranged and possesses smooth derivatives at the cutoff distance:

USHRAT(r) ∝ 3(h − r)⁴ − 4(h − rmin)(h − r)³, (6.67)

for r ≤ h, and USHRAT(r) = 0 otherwise. This potential has its minimum at r = rmin. For h = 3/2 and rmin = 9/8 the SHRAT potential approximates the LJ potential sufficiently well for practical purposes. The missing amplitude in (6.67) can be chosen such that the SHRAT potential has the minimum value USHRAT(rmin) = −1, as for the LJ potential.

6.3.4 Hard spheres

Another often employed potential, used for the description of hard and soft spheres, simply reads

Usphere(r) ∝ r^(−ν), (6.68)

with a characteristic exponent ν. For ν → ∞ one recovers the case of hard spheres, for ν = 12 the potential has a repulsive branch identical with that of the LJ potential, 2 ≤ ν ≤ 12 is in use to model soft spheres, and ν = 1 corresponds to the electrostatic interaction between charged particles.
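The potentials of this section can be compared directly; the standard reduced LJ form U(r) = 4(r⁻¹² − r⁻⁶) is assumed here (this excerpt does not reproduce (6.34)), and the SHRAT amplitude is fixed by the condition U(rmin) = −1 stated above:

```python
import numpy as np

def U_lj(r):
    """LJ potential in reduced units (assumed standard form of (6.34))."""
    return 4.0 * (r**-12 - r**-6)

def U_wca(r):
    """WCA: LJ truncated at r_cut = 2**(1/6) and shifted so U(r_cut) = 0."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 2**(1/6), U_lj(r) + 1.0, 0.0)

def U_shrat(r, h=1.5, rmin=9/8):
    """SHRAT potential (6.67); the amplitude a makes U(rmin) = -1."""
    a = (h - rmin)**-4
    r = np.asarray(r, dtype=float)
    u = a * (3.0 * (h - r)**4 - 4.0 * (h - rmin) * (h - r)**3)
    return np.where(r <= h, u, 0.0)
```

Both WCA and SHRAT vanish identically beyond their cutoffs (2^(1/6) and h, respectively), which is what makes them attractive for short-ranged force loops.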


6.4 Simple fluid subjected to plane Couette flow

Schematic drawing for the generation of shear flow using Lees-Edwards boundary conditions.

Even the simplest computer fluid, a LJ fluid, exhibits non-Newtonian behavior and rich structure when subjected to flow. Here, we present a very simple approach based on kinetic theory in order to qualitatively characterize the non-Newtonian flow behavior of simple liquids. We consider shear flow in the x-direction with gradient in the y-direction, vx = γy. The main contribution to the shear stress of a dense liquid stems from the interaction forces (F = −∇U(r)); the kinetic contribution can be neglected (the opposite is true for rarefied and dilute gases). The shear stress τxy reads, with pair correlation function g(r),

τxy = −(1/2) n² ∫ rx Fy g(r) d³r = (1/2) n² ∫ rx ry r⁻¹ U′(r) g(r) d³r, (6.69)

with particle number density n = N/V, radially symmetric interaction potential U, U′ = dU/dr, and r = |r|. Obviously, τxy ≠ 0 only if g(r) ≠ geq(r), i.e., out of equilibrium. In order to calculate the shear stress we hence need an equation of change for the pair correlation function. The simplest relaxation ansatz gives, using the above expression for v (valid for shear flow),

∂g/∂t = −γ rx ∂g/∂ry − (1/λ)(g − geq). (6.70)

Multiplication of this equation by (1/2) n² rx Fy and subsequent integration over d³r leads, upon neglecting contributions which are nonlinear in the shear rate, to

∂τxy/∂t = Gγ − (1/λ) τxy, (6.71)

where we have introduced the shear modulus G,

G = (1/30) n² ∫ (r⁴U′)′ r⁻² geq(r) d³r = (2π/15) n² ∫₀^∞ (r⁴U′)′ geq(r) dr. (6.72)


In the stationary case this gives τxy = Gλγ and the shear viscosity η ≡ τxy/γ = Gλ. We can hence obtain the relaxation time λ from the measured shear viscosity if G is known. For a LJ fluid, we can evaluate the expression for G to obtain

G = (21/5) n kB T + 3p − (24/5) n u = 3p_pot − (24/5) n u_pot, (6.73)

where u denotes the internal energy per particle. In this case G is related to the measurable quantities p (hydrostatic pressure) and u. In a nonequilibrium MD simulation we can measure G directly.
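Given pair-correlation data geq(r) on a grid, the shear modulus G = (2π n²/15) ∫₀^∞ (r⁴U′)′ geq(r) dr can be evaluated numerically for the LJ potential; the step-function geq below is a crude stand-in for measured data, used only so the example runs:

```python
import numpy as np

def shear_modulus_G(n, r, geq):
    """G = (2*pi*n^2/15) * integral of (r^4 U')' g_eq(r) dr for the LJ
    potential U = 4(r^-12 - r^-6), so (r^4 U')' = 432 r^-10 - 72 r^-4."""
    integrand = (432.0 * r**-10 - 72.0 * r**-4) * geq
    # trapezoidal rule on the given grid
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    return 2.0 * np.pi * n**2 / 15.0 * integral

# usage with an illustrative g_eq (unit step at r = 1); in practice g_eq
# comes from the pair correlation function measured in the simulation
r = np.linspace(0.9, 5.0, 2000)
geq = np.where(r > 1.0, 1.0, 0.0)
G = shear_modulus_G(0.84, r, geq)
```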

LJ system. Projection of particles and their velocities onto the vorticity plane for a LJ system (T = 1, n = 0.84, shear rate γ = 20) in a state with long-range partial order. The diameter of the circles is 1 (LJ unit).


LJ system. Projection of particle positions onto the vorticity plane. One observes partial plug flow; temperature has been controlled using an unbiased thermostat which controls velocities against the measured (layer-wise), not prescribed, velocity field.

LJ system. Static structure factor S(qx, qz) for the LJ fluid for two shear rates: below (left) and above (right) the onset of long-range order.


LJ system. Potential contribution to the shear stress vs. shear rate (LJ units). The straight lines correspond to Newtonian flow behavior.

LJ system. Shear viscosity η vs. shear rate γ (log-log, LJ units), calculated from the shear stress data of the foregoing figure.


6.5 FENE polymer melt subjected to plane Couette flow

Non-Newtonian shear viscosity η of the FENE model vs shear rate γ (LJ units) for chain lengths N = 10, 30, 60, 100, 150, 200, 300, 400. Inset: zero-rate shear viscosity η₀ vs chain length N (NEMD data; crossover from power 1 to powers 3 and 3.5).

The most common model for a polymer melt is based on the LJ potential. In addition to the LJ interaction, a harmonic spring potential (unlimited extension) or, alternatively, a FENE potential (limited extension) is added in order to create connectivity between adjacent beads (particles) along the chain's backbone. The expression for the FENE force was given in (6.14). Due to their internal structure, polymers react to flow in a manner which leads to much enhanced non-Newtonian behavior. Chains tend to rotate or to align in the flow direction; long chains may form temporary knots or entanglements which hinder the motion. Moreover, diffusion of macromolecules in a melt is limited to a kind of one-dimensional walk within a tube made of surrounding polymers. The entanglement crossover can be studied using the FENE model, both in equilibrium and under flow. Some results for the effect of chain length and flow rate on the material properties of polymer melts are collected below. All quantities have been obtained via nonequilibrium molecular dynamics, as described above.
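Since (6.14) lies outside this excerpt, the FENE bond force is sketched below in its standard form, with the common Kremer-Grest parameters k = 30 and R0 = 1.5 (LJ units) assumed:

```python
import numpy as np

def fene_force(rvec, k=30.0, R0=1.5):
    """FENE bond force at bead separation rvec (assumed standard form,
    cf. (6.14)): F = -k * rvec / (1 - (r/R0)^2), which is harmonic for
    small r and diverges as r -> R0, limiting the bond extension.
    k and R0 are the usual Kremer-Grest values, an assumption here."""
    r2 = np.dot(rvec, rvec)
    if r2 >= R0**2:
        raise ValueError("FENE bond overstretched")
    return -k * rvec / (1.0 - r2 / R0**2)
```

At small extension the force reduces to the harmonic −k·rvec, while close to R0 its magnitude grows without bound; this finite extensibility is what distinguishes FENE chains from harmonic ones under strong flow.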


Differences between local and global order of polymeric FENE chains under shear flow conditions are revealed via the NEMD structure factor of single chains, compared with SANS data (orientation of ellipses). (top left) Structure factor extracted by NEMD, projected onto the shear plane (qx, qy). (top right) A contour fit allows one to extract the half axes (half wave numbers) of the ellipses and the rotation angle β. (bottom) Rotation angle vs wave number.



Part 7

Liquid crystals and liquid crystalline polymers

LC substances: MBBA and nylon 6,6.

Organic substances made of anisotropic molecules can form liquid crystalline phases, as was first discovered by F. Reinitzer in 1888. Usually, organic or synthetic substances are considered as Liquid Crystals (LCs) if they undergo a phase transition from the isotropic (high temperature) state to an orientationally ordered (low temperature) state. These thermotropic LCs are distinguished from lyotropic LCs, for which the phase transition is instead driven by an increase in concentration. LCs are frequently referred to as the "4th state of matter", since their behavior is in between ordinary liquid and ordinary solid behavior. For nematic LCs in particular, the liquid crystalline state does not show any long-range translational order, similar to the ordinary liquid state. Thus, LCs flow, much like simple liquids. On the other hand, long-range orientational correlations are present in the liquid crystalline phase, reminiscent of the long-range positional correlations of solids. Typical LC substances are MBBA or nCB, which are rather rigid, anisotropic molecules (due to the presence of phenyl rings), typically around 20 Å long and 5 Å wide. Liquid crystalline polymers (LCPs), on the contrary, are long chain polymers with rigid subunits, like Kevlar or Nylon. Their persistence length/width ratio is around 100 Å/10 Å. Besides the nematic state, more complicated LC phases exist, like smectic A and C, etc., which, in general, show some layering, i.e. some degree of positional order within certain planes. We here always stick to the simplest case, the so-called "nematics".



7.1 Isotropic-Nematic phase transition

The isotropic-nematic phase transition has been observed in a wide variety of substances. For pure organic materials, the transition typically occurs in a temperature range between 20 °C and 30 °C. For practical applications, this rather narrow range is broadened by using mixtures of different substances.

7.1.1 Landau-de Gennes theory

From a macroscopic, phenomenological point of view, the isotropic-nematic phase transition is a first-order (discontinuous) phase transition that can be treated by Landau theory. Due to head-tail symmetry, the orientational order parameter for LCs cannot be a vector but has to be a (symmetric) second-rank tensor

Qαβ = ⟨uα uβ − (1/3) δαβ⟩. (7.1)

In the isotropic state Q = 0, while Q ≠ 0 indicates orientational order. In the special case of uniaxial symmetry, the orientational order parameter tensor Q can be represented as Qαβ = S2 (nα nβ − (1/3) δαβ). In this case, S2 is the scalar (Maier-Saupe) order parameter and n the director (n ≡ −n), giving the mean orientation.

In the spirit of Landau theory, one can postulate a free energy contribution due to the orientational order (Landau-de Gennes potential)

F(Q) = (A/2) tr(Q·Q) + (B/3) tr(Q·Q·Q) + (C/4) tr(Q·Q·Q·Q) + (C′/4) [tr(Q·Q)]², (7.2)

where tr denotes the trace of the matrix and A,B,C,C ′ are unknown coefficients.

Definition of the order parameter (coordinate axes x, y, z).
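For a uniaxial ansatz, the potential (7.2) reduces to a quartic polynomial in the scalar order parameter, and the first-order character of the transition can be checked numerically. The coefficients below are illustrative, not fitted to any material:

```python
import numpy as np

def F_LdG(Q, A, B, C, Cp):
    """Landau-de Gennes free energy (7.2):
    F = A/2 tr Q^2 + B/3 tr Q^3 + C/4 tr Q^4 + Cp/4 (tr Q^2)^2."""
    Q2 = Q @ Q
    return (A / 2 * np.trace(Q2) + B / 3 * np.trace(Q2 @ Q)
            + C / 4 * np.trace(Q2 @ Q2) + Cp / 4 * np.trace(Q2)**2)

def Q_uniaxial(S, n=np.array([0.0, 0.0, 1.0])):
    """Uniaxial order parameter tensor Q = S (nn - 1/3)."""
    return S * (np.outer(n, n) - np.eye(3) / 3)

# usage: with B < 0 the global minimum jumps discontinuously from S = 0
# to S > 0 as A is lowered below a critical value -- the first-order
# isotropic-nematic transition (A = 0.05 is below that value here)
S = np.linspace(-0.5, 1.0, 1501)
F = np.array([F_LdG(Q_uniaxial(s), A=0.05, B=-1.0, C=1.0, Cp=0.0) for s in S])
S_min = float(S[np.argmin(F)])
```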


7.1.2 Onsager theory

Excluded volume of two hard rods with length L and diameter b.

A more microscopic approach to the isotropic-nematic transition is provided by Onsager's theory (1949), which shows that excluded volume interactions alone are sufficient to drive the phase transition. Assuming that LCs can be modeled as a suspension of rigid rod-like particles of length L and diameter b, Onsager considered the following free energy functional (per volume):

F[f] = n kB T [ln n − 1 + ∫ d²u f(u) ln f(u) + (1/2) n ∫ d²u d²u′ b2(u, u′) f(u) f(u′)], (7.3)

where n = N/V is the number density and b2(u, u′) = 2bL²|u × u′| is Onsager's exact result for the second virial coefficient in the case of purely excluded volume interactions of rigid rod-like particles with b/L ≪ 1. The equilibrium state is found from the minimum of (7.3) subject to the constraint ∫ d²u f(u) = 1,

feq(u) = z⁻¹ exp[−βUMF(u, [f])], UMF(u, [f]) = ∫ d²u′ b2(u, u′) f(u′), (7.4)

with the mean-field interaction potential UMF. Numerical solutions of this self-consistent equation show an isotropic-nematic phase transition for sufficiently high concentrations n > nc, with nc ≈ 5.37/(bL²).
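The structure of this self-consistency can be illustrated with a planar toy version: orientations restricted to the half circle [0, π) (head-tail symmetry) and the excluded-volume kernel |sin(θ − θ′)| with a dimensionless strength c. This is not Onsager's 3D calculation and does not reproduce his threshold; it only shows the fixed-point iteration and the bifurcation to an ordered solution:

```python
import numpy as np

def onsager_fixed_point(c, m=256, iters=4000, tol=1e-12):
    """Damped fixed-point iteration f -> Z^-1 exp(-c K f), a planar toy
    analogue of the self-consistency (7.4). Illustrative only."""
    th = np.linspace(0.0, np.pi, m, endpoint=False)
    dth = np.pi / m
    K = np.abs(np.sin(th[:, None] - th[None, :]))   # kernel |sin(th - th')|
    f = 1.0 / np.pi + 0.01 * np.cos(2.0 * th)       # slightly biased start
    for _ in range(iters):
        g = np.exp(-c * (K @ f) * dth)
        g /= g.sum() * dth                          # enforce normalization
        g = 0.5 * (f + g)                           # damping stabilizes it
        if np.max(np.abs(g - f)) < tol:
            f = g
            break
        f = g
    S = np.sum(np.cos(2.0 * th) * f) * dth          # planar order parameter
    return f, S

# usage: weak coupling stays isotropic (S ~ 0), strong coupling orders
f_low, S_low = onsager_fixed_point(1.0)
f_high, S_high = onsager_fixed_point(20.0)
```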


7.2 Dynamics of Liquid Crystals

Since the LC equilibrium state is already anisotropic, external perturbations (fields, flow, etc.) can lead to very rich dynamical behavior including symmetry-breaking, periodic, and chaotic states. The dynamics of LCs has been addressed by a wide variety of models, ranging all the way from atomistic simulations to macroscopic balance laws.

7.2.1 Ericksen-Leslie theory

Different models on different levels of description for LC dynamics

Ericksen and Leslie (1960s) developed the general, phenomenological framework for describing LC dynamics, assuming (i) uniaxial symmetry around the director n and (ii) a constant degree of alignment S2 = const. The general balance law for the director in the presence of a flow field v(r) reads

(d/dt) n = Ω × n + λ (D · n − (D : nn) n) − (1 − nn) · hn, (7.5)

where Ω = (1/2) ∇ × v denotes the vorticity of the flow and D = (1/2)(∇v + [∇v]ᵀ) − (1/3)(∇ · v) 1 the symmetric (traceless) velocity gradient (the divergence term vanishes for incompressible flow). The so-called molecular field hn describes the mean-field interaction with other particles as well as the influence of external (magnetic or electric) fields. An important parameter in the Ericksen-Leslie theory (7.5) is the 'tumbling parameter' λ. In steady shear flow, a stationary, flow-aligning state is found only for |λ| > 1, while values |λ| < 1 correspond to oscillatory (tumbling) states.


7.2.2 Alignment tensor theory

In general out-of-equilibrium situations, the order parameter tensor Q defined in (7.1) is not uniaxially symmetric and therefore, strictly speaking, the Ericksen-Leslie theory is not applicable. In this case, the following time evolution equation for Q has been proposed,

(d/dt) Q = 2Ω × Q + 2σ D · Q − (1/τ) ∂F(Q)/∂Q + λK D. (7.6)

Equation (7.6) can be motivated by non-equilibrium thermodynamics arguments. It can also be derived under certain assumptions from the dynamical mean-field theory presented in Sec. 7.2.3. In addition to the coefficients of the Landau-de Gennes potential (7.2), Eq. (7.6) contains a sort of tumbling parameter λK, a relaxation time τ, and a slip coefficient σ as phenomenological parameters. Although the numerical integration of the time evolution equation (7.6) does not pose any particular problems, it should be noted that Eq. (7.6) describes a wealth of different dynamical states: stationary ('flow-aligning'), oscillatory ('tumbling', 'wagging', 'kayaking'), as well as chaotic. Therefore, one should not only choose a suitable integrator, but also be careful with the initial conditions.

7.2.3 Dynamical Mean-Field theory

In the same way as the Onsager theory provides a more microscopic understanding of the equilibrium isotropic-nematic transition, the dynamical mean-field theory gives a more microscopic foundation to the macroscopic balance equations (7.5) and (7.6). Let f(u; t) denote the time-dependent orientational probability distribution function with ∫ d²u f(u; t) = 1. From the local balance for the probability change, one derives ∂f/∂t + (∂/∂u) · (u̇ f) = 0, or equivalently ∂f/∂t + L · (ω f) = 0 with the rotational operator L = u × ∂/∂u. The angular velocity ω of a given particle can be obtained from the Langevin equation

0 = Θω̇ = −ξ(ω − ω0) − L UMF + T^B. (7.7)

The first term on the right-hand side denotes the friction torque arising if the angular velocity ω does not match that of the carrier fluid, ω0 = Ω + B u × D · u (B ≤ 1 is the so-called shape factor of the particle). The second term is the deterministic torque due to the mean-field interaction, while the last term describes fluctuating (random) torques. In the overdamped limit (the left equality sign in (7.7)), the Langevin equation reduces to a torque balance that can be solved for ω. Inserting (7.7) into the balance equation for f(u; t), one arrives at

∂t f = −L · [(ω0 − (1/ξ) L UMF) f] + (kB T/ξ) L² f. (7.8)

Note that the fluctuating torques have been chosen such that Eq. (7.8) has the correct Boltzmann equilibrium state. Equation (7.8) is the starting point of many investigations on the dynamics of LCs. In particular, the order parameter tensor Q(t) can be studied from Q(t) = ∫ d²u [uu − (1/3) 1] f(u; t). Under certain approximations, the macroscopic balance equations (7.5) and (7.6) can be derived from the kinetic model (7.8). However, these approximations are in general uncontrolled and can have a significant influence on dynamical properties. Therefore, one is interested in the numerical solution of the model (7.8), see Sec. 7.3.1.


7.2.4 Dynamics of interacting many-body system

The model presented in Sec. 7.2.3 is a mean-field model, i.e. it approximates the interacting many-particle system by an effective one-particle problem. Mean-field approximations are in general valid for very long-ranged interactions. In the case of LCs, the quality of the mean-field assumption is questionable. If we give up the mean-field approximation and consider instead the interaction potential Φ = Σ_{i<j} Uij between the particles, the Langevin equations are

0 = −ξ(ωi − ω0) − Li Φ + Ti^B,
0 = −ξt(vi − v0) − ∇i Φ + Fi^B, (7.9)

where the first equation resembles (7.7) and the second is the corresponding Langevin equation for the translational motion, with the translational friction coefficient ξt and random forces on particle i denoted by Fi^B.

7.3 Numerical solution and simulations of LC dynamics

The numerical solution of the Ericksen-Leslie equations (7.5) as well as of the alignment tensor dynamics (7.6) does not pose any particular problems as long as no spatial gradients are allowed (see next section). However, due to the non-monotonicity of the Landau-de Gennes potential at intermediate temperatures (meta-stable state), the initial conditions have to be chosen carefully. In the following, different strategies for the numerical solution of the kinetic equation (7.8) are presented in Sec. 7.3.1. Simulations of the interacting many-body system are described in Sec. 7.3.2.

7.3.1 Solving the dynamical mean-field model

In view of the uncontrolled closure approximations, numerical solutions of the Doi-Hess model without further approximations are desirable. Since the Fokker-Planck equation corresponding to (7.9), cf. (5.2), is a partial differential equation, its numerical solution is in general not trivial. One can roughly distinguish three different approaches: (i) finite-difference schemes, (ii) Galerkin schemes, and (iii) Brownian dynamics simulations.

Finite-Differencing

A standard (and most of the time not very good) method of integrating partial differential equations is to discretize the independent variables, which in our case is the orientation u. Replacing the rotational operator by suitable finite differences, the Fokker-Planck equation is transformed into a system of coupled ordinary differential equations which can be solved by standard integration algorithms. Often, one encounters instabilities in finite-difference schemes, so that one has to use computationally more expensive implicit algorithms.


Galerkin scheme

Instead of directly discretizing the unit sphere, one can expand the orientational distribution function in a suitable basis, f(u; t) = Σ_{ℓ=0}^{nc} Σ_{m=−ℓ}^{ℓ} c_{ℓ,m}(t) Y_ℓ^m(u). The spherical harmonics Y_ℓ^m(u) provide a complete set of basis functions, such that any smooth function f(u) can be represented in this form for nc → ∞. Using the orthonormality relation ∫ d²u Y_ℓ^m(u)* Y_{ℓ′}^{m′}(u) = δ_{ℓ,ℓ′} δ_{m,m′}, one can derive the time evolution of the expansion coefficients c_{ℓ,m}(t) from the kinetic equation via

(d/dt) c_{ℓ,m}(t) = ∫ d²u Y_ℓ^m(u)* (∂/∂t) f(u; t). (7.10)

After inserting the Doi-Hess kinetic equation for (∂/∂t) f(u; t) and performing the integrations, one arrives at a coupled set of ordinary differential equations for the c_{ℓ,m}(t). Truncating this set at a finite nc, these equations can be solved by standard algorithms.

Brownian Dynamics simulations of LCs

Five independent realizations of trajectories of a Wiener process.

Instead of seeking discretizations of the Fokker-Planck equation, Brownian Dynamics simulations solve the underlying time evolution for an ensemble of particles, such that the same averages are obtained. BD simulations therefore exploit the intimate relation between the Fokker-Planck equation and the corresponding stochastic differential equation. For simplicity, we consider the two-dimensional case u = (sin θ, cos θ), where the Fokker-Planck equation (5.2) for f(θ; t) simplifies to

∂t f(θ; t) = −(∂/∂θ)[A(θ) f] + (D/2)(∂²/∂θ²) f, (7.11)

where D = 2kT/ξ denotes the diffusion coefficient and A = −ξ⁻¹ ∂V/∂θ the systematic drift. The stochastic differential equation for θ associated with (7.11) reads

dθ = A(θ) dt + B dW, (7.12)

where B = √D and W(t) denotes the Wiener process (a Gaussian process with independent increments and zero mean), ⟨W(t)⟩ = 0, ⟨W(t)W(t′)⟩ = min(t, t′). The simplest (weak first-order) scheme to integrate Eq. (7.12) is

θ(t + ∆t) − θ(t) = A(θ(t)) ∆t + B √∆t ζ, (7.13)


where the ζ = N(0, 1) are independent, normally distributed random numbers. A word of caution is in order. First, if the diffusion coefficient depends on θ, some terms are shifted into A if the last term is written as (∂²/∂θ²)(Df) instead of the form (7.11). This then results in different stochastic differential equations (Itô vs. Stratonovich calculus, relevant for this case of multiplicative noise). Second, in three dimensions, it is not possible to reduce the Fokker-Planck equation to the one-dimensional form (7.11). Instead, care must be taken in order to correctly take into account the constraint u² = 1.
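The scheme (7.13) can be tested on a potential for which the stationary Boltzmann distribution is known; the harmonic choice V = κθ²/2 below is illustrative (the notes leave V general), giving A = −κθ/ξ and B = √(2kT/ξ):

```python
import numpy as np

def bd_trajectory(nsteps, dt, kappa=1.0, xi=1.0, kT=1.0, nwalkers=5000, seed=0):
    """Weak first-order (Euler-Maruyama) integration of (7.13) for the
    test potential V = kappa*theta**2/2 (illustrative choice), run for
    an ensemble of independent walkers."""
    rng = np.random.default_rng(seed)
    B = np.sqrt(2.0 * kT / xi)          # B = sqrt(D), D = 2kT/xi
    theta = np.zeros(nwalkers)
    for _ in range(nsteps):
        zeta = rng.standard_normal(nwalkers)      # N(0,1) per (7.13)
        theta += -(kappa / xi) * theta * dt + B * np.sqrt(dt) * zeta
    return theta

# usage: the stationary distribution is Boltzmann, exp(-V/kT),
# so var(theta) should approach kT/kappa (= 1 here) up to O(dt) bias
theta = bd_trajectory(nsteps=5000, dt=0.01)
```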

7.3.2 Simulating the interacting many particle system

For deterministic dynamics, integrating the coupled equations (7.9) is done by Molecular Dynamics methods for anisotropic particles. Including the fluctuating forces and torques is done within Brownian Dynamics simulations.


7.4 Spatially inhomogeneous LCs

7.4.1 Lebwohl-Lasher model

Two-dimensional Lebwohl-Lasher model on cubic lattice

A simple model for spatially inhomogeneous LCs was proposed by Lebwohl and Lasher (1972). This model does not rely on any mean-field approximation. The main simplification of the model is the restriction of the rod-like particles with orientations ui to fixed (cubic) lattice sites i. Although rather simple, the Lebwohl-Lasher model describes several features of real LCs like defect structures and wall-anchoring effects. The Hamiltonian of the Lebwohl-Lasher model reads

H = Σ_i Σ_{j∈nn(i)} Vij, Vij = U kB T (ui · uj)², (7.14)

where nn(i) denotes the nearest neighbors of i. In two dimensions, the interaction energy simplifies to Vij = U kB T sin²(θi − θj). The dynamics of the two-dimensional Lebwohl-Lasher model is given by Eq. (7.12), dθi = Ai(θi) dt + B dWi, where Ai(θi) = −(∂/∂θi) Σ_{j∈nn(i)} Vij = −U kB T Σ_{j∈nn(i)} sin[2(θi − θj)]. This model can be simulated numerically very efficiently. Supplied with periodic boundary conditions and/or wall interactions, bulk systems as well as confinement effects can be studied numerically. While static quantities of the Lebwohl-Lasher model can also be obtained from Monte Carlo simulations, the advantage of Brownian Dynamics is that it also provides information on the dynamical properties.

7.4.2 Viscosity coefficients for nematics and discotics

In brief: the stress tensor is a function of the director n and the flow field v. The director is microscopically interpreted as the average orientation of rods, for which an equation of change is derived from a Fokker-Planck equation. Inserting the corresponding expressions into the stress tensor, all the Leslie viscosity coefficients can be calculated based on a mesoscopic model (Kröger & Sellers, J. Chem. Phys. 103 (1995) 807).

Stress tensor (Ericksen and Leslie 1966, 1991):

T = (α1 nn : D + β1 Ṡ) nn + α2 nN + α3 Nn (7.15)
  + α4 D + α5 nn · D + α6 D · nn, (7.16)


where

N ≡ ṅ − W · n,
D ≡ (1/2)[∇v + (∇v)ᵀ] = Dᵀ,
W ≡ −(1/2)[∇v − (∇v)ᵀ] = −Wᵀ. (7.17)

In addition to the usual balance of momentum,

ρ v̇ = −∇x p + ∇x · Tᵀ, (7.18)

there are two additional equations governing the microstructure:

i) a vector equation for the director n (here director inertia is neglected),

0 = n × [hn − γ1 N − γ2 D · n], (7.19)

ii) a scalar equation for the degree of alignment S (again neglecting inertia),

0 = hS − β2 Ṡ − β3 nn : D, (7.20)

where hS is the scalar molecular field. In terms of a free energy F(S, ∇x S, n, ∇x n), the molecular fields are given by

hn = ∇x · (∂F/∂∇x n) − ∂F/∂n, hS = ∇x · (∂F/∂∇x S) − ∂F/∂S. (7.21)

The αi are commonly called Leslie viscosity coefficients. The βi were introduced by Ericksen (1991) for the case of a variable degree of alignment. Furthermore, the coefficients γi are related to the αi by

γ1 = α3 − α2, γ2 = α6 − α5. (7.22)

Onsager relations:

α2 + α3 = α6 − α5, β1 = β3. (7.23)

The first relation in (7.23) is commonly called Parodi's relation; the second was proposed by Ericksen (1991).

Mean field theory for concentrated suspension of ellipsoids

Symmetric and antisymmetric parts of the stress tensor (Kuzuu and Doi):

Ts = 2μ0 D + (1/2) B c kB T [⟨u ∇u(log f + V/kBT)⟩ + ⟨∇u(log f + V/kBT) u⟩], (7.24)

Ta = −(c/2) [⟨u (∇u V)⟩ − ⟨(∇u V) u⟩]. (7.25)


Fokker-Planck equation

∂f/∂t = ∇u · [f Dr ∇u(log f + V/kBT)] − ∇u · [f W · u + f B(1 − uu) · D · u]. (7.26)

Dr: effective rotary diffusion coefficient. Potential:

V = Vm + Ve, (7.27)

where

Ve = −(1/2) χa (H · u)²,
Vm = −(3/2) Um kB T ⟨uu⟩ : uu. (7.28)

Ve denotes the contribution due to a dipole induced by an external field H, χa being the anisotropic susceptibility of a rod, and Vm denotes the mean-field contribution, Um being a constant reflecting the energy intensity of the mean field. Goal: determine the Leslie coefficients αi, βi, γi in terms of B, Dr, c, μ0, V, S2, S4.

Uniaxial alignment

funi = f(|u · n|), (7.29)

〈uu〉uni = S2nn +1

3(1− S2)1, (7.30)

and

〈uiujukul〉uni = S4 ninjnknl + (1/7)(S2 − S4)(δij nknl + δik njnl + δkj ninl + δil njnk + δjl nink + δkl ninj) + (1/105)(7 − 10S2 + 3S4)(δij δkl + δik δjl + δil δjk), (7.31)

where S2 and S4 are scalar measures of the degree of orientation related to Legendre polynomials:

S2 = 〈P2(u · n)〉 , S4 = 〈P4(u · n)〉 . (7.32)

They must satisfy

−1/2 ≤ S2, S4 ≤ 1. (7.33)
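The order parameters (7.32) can be estimated directly from a sample of unit vectors. A small numpy sketch (the samples are illustrative):

```python
import numpy as np

def order_parameters(u, n):
    """S2 = <P2(u.n)>, S4 = <P4(u.n)> for unit vectors u (shape (N,3)), director n."""
    x = u @ n
    S2 = np.mean(0.5 * (3 * x**2 - 1))
    S4 = np.mean((35 * x**4 - 30 * x**2 + 3) / 8)
    return S2, S4

n = np.array([0.0, 0.0, 1.0])
u_aligned = np.tile(n, (1000, 1))              # perfect alignment: S2 = S4 = 1
S2a, S4a = order_parameters(u_aligned, n)

rng = np.random.default_rng(0)                 # isotropic sample: S2, S4 -> 0
v = rng.normal(size=(200000, 3))
u_iso = v / np.linalg.norm(v, axis=1, keepdims=True)
S2i, S4i = order_parameters(u_iso, n)
```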

Perfect alignment: S2 = S4 = 1; random alignment: S2 = S4 = 0. Useful identities:

[∂t〈uu〉 − W · 〈uu〉 + 〈uu〉 · W]uni = Ṡ2 (nn − (1/3) 1) + S2 (Nn + nN), (7.34)

[∇v · 〈uu〉 + 〈uu〉 · ∇v]uni = S2 (∇v · nn + nn · ∇v) + (2/3)(1 − S2) ∇v, (7.35)


138 PART 7. LIQUID CRYSTALS AND LIQUID CRYSTALLINE POLYMERS

∇v : 〈uuuu〉uni = S4 (∇v : nn) nn + (1/7)(S2 − S4)[2(∇v · nn + nn · ∇v) + (∇v : nn) 1] + [(4/21)(1 − S2) − (2/35)(1 − S4)] ∇v. (7.36)
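Since u is a unit vector, contracting the closure (7.31) over one index pair must reproduce the second moment (7.30), and at S2 = S4 = 1 it must reduce to nnnn; both can be checked numerically. A sketch (director and order-parameter values are illustrative):

```python
import numpy as np

def uni_second_moment(S2, n):
    """Uniaxial second moment, eq. (7.30)."""
    return S2 * np.outer(n, n) + (1 - S2) / 3 * np.eye(3)

def uni_fourth_moment(S2, S4, n):
    """Uniaxial fourth moment, eq. (7.31)."""
    d, nn = np.eye(3), np.outer(n, n)
    T = S4 * np.einsum('i,j,k,l->ijkl', n, n, n, n)
    T += (S2 - S4) / 7 * (np.einsum('ij,kl->ijkl', d, nn) + np.einsum('ik,jl->ijkl', d, nn)
                          + np.einsum('kj,il->ijkl', d, nn) + np.einsum('il,jk->ijkl', d, nn)
                          + np.einsum('jl,ik->ijkl', d, nn) + np.einsum('kl,ij->ijkl', d, nn))
    T += (7 - 10 * S2 + 3 * S4) / 105 * (np.einsum('ij,kl->ijkl', d, d)
                                         + np.einsum('ik,jl->ijkl', d, d)
                                         + np.einsum('il,jk->ijkl', d, d))
    return T

n = np.array([0.3, -0.5, 0.8]); n /= np.linalg.norm(n)
S2, S4 = 0.6, 0.4
trace = np.einsum('ijkk->ij', uni_fourth_moment(S2, S4, n))   # contract one index pair
T4_perfect = uni_fourth_moment(1.0, 1.0, n)                   # S2 = S4 = 1
nnnn = np.einsum('i,j,k,l->ijkl', n, n, n, n)
```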

Fokker-Planck equation for ellipsoids of revolution:

∂f/∂t = −w · Lf − R L · (T f) + w L²f
≡ ∇u · [f Dr ∇u(log f + V/kBT)] − ∇u · [f W · u + f B(1 − uu) · ∇v · u].

→ equation of change for the second moment (alignment tensor)

∂t〈uu〉 = W · 〈uu〉 − 〈uu〉 · W + B(∇v · 〈uu〉 + 〈uu〉 · ∇v) − 2B ∇v : 〈uuuu〉 − Dr [〈u ∇u(log f + V/kBT)〉 + 〈∇u(log f + V/kBT) u〉]. (7.37)

Summary of results for nematics and discotics

With ηN ≡ B c kBT / (2Dr):

α1 = −2 ηN B S4,
α2 = −ηN (1 + 1/λ) S2,
α3 = −ηN (1 − 1/λ) S2,
α4 = 2µ0 + (2/35) ηN B (7 − 5S2 − 2S4) > 0,
α5 = ηN [(B/7)(3S2 + 4S4) + S2],
α6 = ηN [(B/7)(3S2 + 4S4) − S2],
β1 = β3 = −ηN,
β2 = 35 ηN / [(21 + 15S2 − 36S4) B],
γ1 = 2 ηN (1/λ) S2 > 0,
γ2 = −2 ηN S2,
λ = (14 + 5S2 + 16S4) B / (35 S2). (7.38)
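The coefficient set (7.38) can be checked numerically against the definitions (7.22) and Parodi's relation (7.23); a sketch with illustrative, arbitrarily chosen parameter values:

```python
import numpy as np

def leslie(etaN, B, S2, S4, mu0=1.0):
    """Leslie/Ericksen coefficients, eq. (7.38)."""
    lam = (14 + 5 * S2 + 16 * S4) * B / (35 * S2)
    a1 = -2 * etaN * B * S4
    a2 = -etaN * (1 + 1 / lam) * S2
    a3 = -etaN * (1 - 1 / lam) * S2
    a4 = 2 * mu0 + 2 / 35 * etaN * B * (7 - 5 * S2 - 2 * S4)
    a5 = etaN * (B / 7 * (3 * S2 + 4 * S4) + S2)
    a6 = etaN * (B / 7 * (3 * S2 + 4 * S4) - S2)
    g1 = 2 * etaN * S2 / lam
    g2 = -2 * etaN * S2
    return dict(a1=a1, a2=a2, a3=a3, a4=a4, a5=a5, a6=a6, g1=g1, g2=g2, lam=lam)

c = leslie(etaN=1.3, B=0.9, S2=0.7, S4=0.49)
# gamma1 = alpha3 - alpha2 and gamma2 = alpha6 - alpha5 (eq. 7.22);
# Parodi: alpha2 + alpha3 = alpha6 - alpha5 (eq. 7.23a)
```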


Dimensionless Miesowicz viscosities ηa/η, ηb/η, and ηc/η with η = (1/3)(ηa + ηb + ηc) versus degree of alignment S2 for the dilute and concentrated suspension models. For the plotted curves, S4 = S2² is set.

Tumbling of the director

Flow alignment angle θ:

cos 2θ = −γ1/γ2 = 1/λ.

The transition between the two regimes is given by

|λ| = 1

or (Kröger & Sellers 1995, Archer & Larson 1995)

(14 + 5S2 + 16S4)|B| = 35|S2|. (7.39)
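With the frequently used closure S4 = S2², the boundary (7.39) can be solved for the critical order parameter at a given shape factor B; a bisection sketch (the bracket and the value B = 1 are illustrative):

```python
def tumbling_S2(B, lo=0.5, hi=0.95, tol=1e-12):
    """Solve (14 + 5*S2 + 16*S4)*|B| = 35*|S2| with S4 = S2**2 by bisection."""
    f = lambda S2: (14 + 5 * S2 + 16 * S2**2) * abs(B) - 35 * S2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:     # root lies in [lo, mid]
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# for B = 1 the condition reads 16*S2**2 - 30*S2 + 14 = 0, with roots S2 = 7/8 and S2 = 1
S2_c = tumbling_S2(1.0)
```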

Tumbling of the director as a function of the order parameter S2 (with S4 ≡ S2²) and the geometry of the particles.


complexfluids.ethz.ch

This document is copyrighted M.K., Swiss ETH Zurich/CH. This copyright is violated upon selling or posting these class lecture notes.

Computational Polymer Physics, Martin Kröger, Emanuela Del Gado, Patrick Ilg

Part 8

Complex liquids


8.1 FENE-C model for wormlike micelles and equilibrium polymers

The FENE-C model is identical to the FENE model except that the FENE potential is cut off at a certain distance below the maximum bond length, thus introducing a scission energy Esc. Once a bond exceeds this cutoff, it breaks (i.e. its chain breaks), and once two beads, which are not yet an interior part of a polymer chain, come closer than the cutoff distance, chains recombine to form a single chain. This process of scission and recombination leads to a distribution of chain lengths which is in accord with simple results obtained from the theory of growth processes. In particular, in equilibrium one expects a monoexponential distribution C(L) of length L. Under shear, the distribution becomes distorted and it is interesting to study the competing mechanisms of flow-induced scission (or recombination) and relaxation towards the equilibrium distribution. Some results are collected below. More details can be found in [Kroger (2005)]. In particular, we find

C0(L) = (1/〈L〉0) exp(−L/〈L〉0). (8.1)
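As a quick numerical sanity check, (8.1) is normalized and has mean 〈L〉0 (the value 〈L〉0 = 12 is illustrative):

```python
import numpy as np

L_avg = 12.0                                         # illustrative mean length
L, dL = np.linspace(0.0, 60 * L_avg, 400001, retstep=True)
C0 = np.exp(-L / L_avg) / L_avg                      # eq. (8.1)

trap = lambda y: dL * np.sum((y[1:] + y[:-1]) / 2)   # trapezoidal rule
norm = trap(C0)                                      # normalization, -> 1
mean = trap(L * C0)                                  # first moment, -> <L>_0
```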

(Figure: ln〈L〉 (average length) vs. Esc/kBT for c = 4%, 15%, 100%; theory vs. MD.)

MD results for average micellar length 〈L〉0 vs. the scission energy βEsc for FENE-C micellar solutions (from 4% to 100%) in equilibrium. Lines: the mesoscopic result [Kroger (2005)]. The fit is parameter-free.

(Figure: ln〈L〉 − Esc/2kBT vs. ln c for Esc/kBT = 0, 2, 4, 6, 8, 10; theory vs. MD (RC = 1.13).)


MD results for the average micellar length 〈L〉0 ≡ ∫ L C(L) dL (reduced form) vs. concentration c as compared with the mesoscopic result. The expression of Cates [Kroger (2005)] predicts a constant slope in this representation.

(Figure: log10 C(L) vs. L for c = 4%, 15%, 100% (curves shifted by /10 and /100); RC = 1.1; theory vs. MD.)

MD results for the normalized equilibrium distribution of micellar length C0(L) for three samples at different concentrations c. Lines: the mesoscopic result [Kroger (2005)] with the same parameters as for the microscopic model.

(Figure: log10 η and log10 Ψ1 (LJ units) vs. log10 Γ; φ = 4%, RC = 1.13; Rouse-model values indicated; theory vs. NEMD.)

Both NEMD and mesoscopic results for the non-Newtonian shear viscosity η and the viscometric function Ψ1 vs. the dimensionless shear rate Γ. All quantities are given in Lennard-Jones (LJ) units.


(Figure: 〈L〉 (number of beads) vs. log10 Γ; c = 4%, RC = 1.13; theory vs. NEMD.)

Both NEMD and mesoscopic results for the average length of micelles 〈L〉 vs. the dimensionless shear rate Γ (LJ units).

(Figure: flow alignment angle χ vs. log10 Γ; c = 4%, RC = 1.13; theory vs. NEMD.)

Both NEMD and mesoscopic results for the flow alignment angle χ vs. shear rate (LJ units).


8.2 Actin filaments, semiflexible chains

Rather than adding a cutoff distance, which led to the FENE-C model, we can modify the FENE model to exhibit enhanced stiffness by adding a bending potential (cf. Sec. 3.4.1) of the form (giving rise to the so-called FENE-B model),

UB = (κ/2) Σ_{i=1}^{N−2} (1 − cos ϑi), (8.2)

where ϑi stands for the bending angle between adjacent segments i and i + 1 within a chain with N − 1 segments. Furthermore, one can enforce semiflexibility by introducing a LJ interaction between non-adjacent beads, as shown below.
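Evaluating (8.2) for a bead chain is straightforward; a numpy sketch (the coordinates are illustrative):

```python
import numpy as np

def bending_energy(r, kappa):
    """U_B = (kappa/2) * sum_i (1 - cos theta_i), eq. (8.2);
    theta_i is the angle between adjacent bond vectors."""
    b = np.diff(r, axis=0)                        # N-1 bond vectors
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    cos_t = np.sum(b[:-1] * b[1:], axis=1)        # N-2 bending angles
    return 0.5 * kappa * np.sum(1.0 - cos_t)

straight = np.column_stack([np.arange(5.0), np.zeros(5), np.zeros(5)])
kinked = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]], dtype=float)  # one 90-degree kink
U0 = bending_energy(straight, kappa=20.0)         # 0 for a straight chain
U1 = bending_energy(kinked, kappa=20.0)           # kappa/2 for one right angle
```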

The bead-bead interactions. In addition to the interactions indicated in this figure, there are also a FENE interaction between all connected beads in chains and a repulsive LJ interaction between all beads of the system.

Next, we collect some qualitative results for systems composed of semiflexible chains. Details can be found in [Kroger (2005)].

(Figure: φ = 10%, κ = 20, L = 20, equilibrium; 100 snapshots with ∆t = 0.05; tube and entanglement point indicated; NEBD.)

Transient contours of a single FENE-B actin filament with 100 beads embedded in a semidilute solution.


(Figure: φ = 60%, κ = 100, L = 50, equilibrium; NEBD.)

Equilibrium high-density semiflexible FENE-B chains (8.2) for system parameters given in the figure.

(Figure: φ = 5%, κ = 100, L = 100, weak flow regime; NEBD.)

Flow-aligned FENE-B chains for system parameters given in the figure.


(Figure: f(Ree) vs. Ree/L for concentrations 0.5%, 5%, 10%.)

Effect of concentration c on the end-to-end distribution function f(Ree) vs. Ree/L of FENE-B actin filaments (κ = 200, L = 100). For the curve with c = 0.5%, error bars are shown.

(Figure: D∥ vs. L for concentrations 5%, 10%, 20%, 40%.)

Diffusion coefficient parallel to the tube vs. chain length L for the FENE-B model actin filaments at various concentrations.


(Figure: shear viscosity, first and (negative) second viscometric functions, and flow alignment angle vs. shear rate for concentrations 0.02 and 0.05.)

Viscosity coefficients and flow alignment angle vs. shear rate for both 2% and 5% solutions of FENE-B actin filaments (κ = 100, L = 100).

(Figure: S2 and σ vs. temperature T ∈ [0.4, 1.9], shown for heating and cooling.)

Orientational order parameter S2 and positional order parameter σ at different temperatures, for the 3−4−3 FENE-B system, observed in heating (from an ideal fcc structure) and (subsequent) cooling.


(Figure: snapshots at T = 0.50, 0.85, 0.90, 1.00, 1.10, 1.40, shown in X and Z projections.)

During heating: snapshots of the stiff central parts of molecules at different temperatures T (increasing from left to right), for the 3−4−3 FENE-B system.

(Figure: Ssc in the ky−kz plane for panels a) solid, T = 0.74; b) smectic, T = 1.0; c) smectic cooling, T = 0.8; d) liquid, T = 1.4.)

Single-chain static structure factor Ssc as projected onto the x-plane (kx = 0) at different temperatures: T = 0.74 (a), T = 1.00 (b), T = 0.80 (c), and T = 1.40 (d) for the 3−4−3 system.


(Figure: snapshots at T = 0.7, 0.8, 1.0, 1.2, 1.4 in X−Y and X−Z projections.)

During cooling: snapshots of the stiff parts of the molecules in two orthogonal projections, for the 3−4−3 FENE-B system.

(Figure: S2 and σ vs. εatt.)

The order parameters S2 (nematic) and σ (smectic) as a function of εatt at the temperature T = 0.8 for the 3−4−3 FENE-B system.


(Figure: phase diagram in the T vs. 1/L plane with isotropic, nematic, and smectic regions.)

Typical experimental phase diagram, where L is the length of the chains and T denotes temperature [Kroger (2005)].


8.3 Gelation, filamentous networks

We consider a toy model [Peleg et al. (2007)] that contains the basic features of microphase separation in polymer gels: a stretched elastic network of Lennard-Jones particles, studied in two dimensions. When the temperature is lowered below some value T∗, attraction between particles dominates over both thermal motion and elastic forces, and the network separates into dense domains of filaments connected by three-fold vertices, surrounded by low-density domains in which the network is homogeneously stretched. The length of the filaments decreases and the number of domains increases with decreasing temperature. The system exhibits hysteresis characteristic of first-order phase transitions: pre-formed filaments thin upon heating and eventually melt at a temperature T∗∗ (> T∗). Although details may vary, the above general features are independent of network topology (square or hexagonal), system size, distribution of spring constants, and perturbations of initial conditions.

Phase transitions in polymer networks (gels) involve two coupled yet distinct processes: volume transition and phase separation. During a volume transition the gel undergoes a uniform change of volume by expelling some of the solvent contained within it. Since this process takes place by (slow) cooperative diffusion, it is well separated in time from the fast local reorganization of the gel which leads to its separation into domains of high and low polymer concentration (at constant total volume of the gel). While the volume transition is well understood, most of the work on phase separation in polymer gels focused on surface instabilities [Peleg et al. (2007)], and the question of what happens in the bulk of the gel remained largely unresolved. This question is of considerable interest from the fundamental perspective because, unlike binary liquids in which two macroscopic phases are formed in the process of phase separation, a gel is a connected network that cannot undergo phase separation on macroscopic scales. While it is obvious that only local reorganization of the polymer concentration profile can take place in such a system, little is known about the details of this process. In particular, it is not clear whether the characteristic length scale of microphase separation is of the order of the mesh size or whether cooperative behavior that results from long-range elastic interactions can lead to the formation of much larger domains. The former scenario, according to which the wavelength of microphase separation is determined by molecular length scales, takes place in diblock copolymer mesophases. However, previous theoretical and experimental investigations of phase separation in gels suggest the presence of much larger structures (spongelike domains) versus a filamentous honeycomb-shaped network. The goal of the present work is to show that the appearance of such mesoscopic structures is a quite general feature of phase separation in a connected network and to obtain some insight into their properties and the mechanism of their formation.

8.3.1 Soft-solid model

Since microphase separation in gels arises as the result of an interplay between attractive forces, which promote the appearance of a polymer-rich phase, and elastic network forces, which oppose phase separation, we introduce a "minimal" two-dimensional model in which these two (and only these two) aspects of real gels are represented, albeit in an idealized fashion. We begin with a perfect square network as shown here


Particles and connecting linear springs for a part of the 100 × 100 (periodic) network at T = 0.44. The initial 'high temperature' square grid topology is depicted in the inset. Springs connecting isolated particles are shown in blue; those connecting clusters to particles and to other clusters are orange, and those connecting particles in the same cluster are green.

subjected to periodic boundary conditions. At each node in the network we place a particle such that each of the particles is permanently connected, through identical stretched harmonic springs of zero equilibrium length and of spring constant k, to four other particles, precisely those which were its nearest neighbors in the initial configuration. Similarly to the phantom chain model of polymer networks [Flory (1953)], the springs are not endowed with any physical attributes such as mass or excluded volume; nevertheless, since the harmonic springs are stretched (because of periodic boundary conditions, the network is wrapped around a torus), their presence makes the network behave as an elastic solid. The next step is to introduce attractive interactions by having all the particles interact via the Lennard-Jones (LJ) potential, which is known to lead to condensation of a gas of particles at sufficiently low temperatures. The system is studied by molecular dynamics simulations in the (N, V, T) ensemble (N is the total number of particles, V the total area and T the temperature). The system size is chosen large enough to avoid the relative displacement of any pair of particles above half the box size, because one cannot uniquely introduce a nearest-image convention for springs compatible with the nearest-image convention for the LJ particles. This model feature poses no difficulty in practice, because its validity can be monitored during the simulation. The choice of the LJ potential is dictated by convenience: it is the simplest model potential that combines both attraction and excluded volume effects. Throughout this section all quantities (lengths, times, energies etc.) are made dimensionless by expressing them in LJ units. We truncate the LJ potential at an interparticle distance of rcut = 3 × 2^(1/6),


and use a velocity Verlet algorithm with integration time step ∆t = 0.004 to integrate Newton's equations of motion; temperature is kept constant by rescaling velocities. We here choose the initial state to be a simple square (or alternatively, a hexagonal) 100 × 100 grid with grid spacing l0 = 3.5, where four (six) nearest neighbors are immediately assigned to each particle (inset to the above figure for the square grid). Even though the present model is motivated by polymer gels which possess entropic elasticity, in this work we neglect any temperature dependence of the spring constant k. With the present choices of k = 1/10 (square grid) and N = 10^4, the systems can be equilibrated within accessible simulation times (kinetic barriers to structural reorganization increase with spring stiffness), for a sufficiently broad range of temperatures in the two-phase region. The above choice of the value of the spring constant is quite arbitrary; the generic behavior reported in this work occurs for spring constants in the range kmin < k < kmax, where kmin vanishes in the limit of infinite system size and kmax is estimated by equating the LJ energy gain for bringing two adjacent lines of particles parallel to the y-axis close together (each line of length L contains √N particles), to the elastic energy loss due to uniform stretching of the rest of the network along the x-axis (the lowest-energy elastic deformation mode).
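The integration and thermostatting step described above can be sketched as follows for the spring network alone (the LJ part is omitted for brevity; the small grid size n = 10, the 100 steps, and the simple velocity rescaling shown, which is the crudest possible thermostat, are illustrative choices, not the production settings of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, dt, T_target = 1.0, 0.1, 0.004, 0.3
n = 10                                               # n x n periodic square grid
Lbox = 3.5 * n
ix, iy = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
r = 3.5 * np.column_stack([ix.ravel(), iy.ravel()]).astype(float)
v = rng.normal(scale=np.sqrt(T_target / m), size=r.shape)

# springs to right/up neighbors; 'shift' records the periodic image offset,
# so stretched zero-rest-length springs wrap around the torus
pairs, shifts = [], []
for i in range(n):
    for j in range(n):
        a = i * n + j
        for (di, dj) in ((1, 0), (0, 1)):
            b = ((i + di) % n) * n + (j + dj) % n
            shift = np.array([di * ((i + di) // n), dj * ((j + dj) // n)]) * Lbox
            pairs.append((a, b)); shifts.append(shift)
pairs = np.array(pairs); shifts = np.array(shifts, dtype=float)

def forces(r):
    f = np.zeros_like(r)
    d = r[pairs[:, 0]] - (r[pairs[:, 1]] + shifts)   # spring extension vectors
    np.add.at(f, pairs[:, 0], -k * d)                # F = -k * extension (zero rest length)
    np.add.at(f, pairs[:, 1], +k * d)
    return f

def kinetic_T(v):
    return m * np.mean(np.sum(v**2, axis=1)) / 2.0   # 2D: kB*T = m<v^2>/2, kB = 1

f = forces(r)
for step in range(100):                              # velocity Verlet + velocity rescaling
    v += 0.5 * dt * f / m
    r += dt * v
    f = forces(r)
    v += 0.5 * dt * f / m
    v *= np.sqrt(T_target / kinetic_T(v))            # crude rescaling thermostat
```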

Percolating High Density Clusters

When such a system, initially placed on the simple grid, is brought to a temperature higher than some value T∗ (T∗ = 0.435 ± 0.01), only isolated particles and small clusters (dense aggregates of particles with nearest neighbors separated by distance ≤ 1.5) of size n ≲ 15, which break up and reform continuously, are observed. A typical snapshot at T = 0.44 (started off from the square grid) is shown in the above figure. The probability that a particle belongs to a (small) cluster is slightly above the value obtained for an ideal gas of non-interacting particles at the same concentration and will be quantified below. In order to study phase separation at lower temperatures, we begin with the corresponding ideal (square or hexagonal) configuration and perform a series of temperature quenches in the range T > 0.3 (at T < 0.3, a steady state was not reached even after 10^8 time steps!). For each temperature above T = 0.3, a steady state is reached in the sense that all monitored parameters of the system (e.g., the total number of particles and the total energy of each of the phases) do not change in time, apart from small fluctuations about their mean values.
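The cluster definition used here (particles joined when their distance is ≤ 1.5) can be implemented with a union-find pass over all pairs; a minimal sketch on illustrative coordinates (periodic boundaries are ignored for brevity):

```python
import numpy as np

def clusters(r, rcut=1.5):
    """Group particles into clusters: i and j belong to the same cluster if a
    chain of pair distances <= rcut connects them (union-find, O(N^2) check)."""
    N = len(r)
    parent = list(range(N))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(N):
        for j in range(i + 1, N):
            if np.linalg.norm(r[i] - r[j]) <= rcut:
                parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(N)])
    return [np.flatnonzero(labels == root) for root in np.unique(labels)]

# two compact triplets far apart plus one isolated particle -> 3 groups
r = np.array([[0, 0], [1, 0], [0.5, 1],
              [10, 10], [11, 10], [10.5, 11],
              [30, 30]], dtype=float)
groups = clusters(r)
```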

(a) (b) (c)

Typical snapshots of PHDC steady-state patterns (relaxed from the initial square grid): (a) single filament, (b) two connected 3-fold vertices (both at T = 0.41), (c) network with eight nodes at T = 0.3.

This figure presents two typical steady-state patterns observed after 55 × 10^6 time steps, at T = 0.41, i.e., slightly below T∗. The two percolating high density clusters (PHDCs) can


be obtained by using slightly different distortions of the initial periodic lattice or different choices of seeds for the random number generator of the initial velocities of the particles. Even though the shapes of the two PHDCs appear to be quite different, in both cases the observed shape is generated by a similar mechanism: nucleation of a high-density filament that elongates by absorbing small clusters at its ends. A linear filament forms when this growth takes place along the x or y axis (the periodic directions) and the ends meet upon traversing the periodic lattice, thus forming a circle along one of the principal axes of the torus. When the growth takes place in any other direction, the ends wind around the torus, as can be seen in an accompanying movie (www.complexfluids.ethz.ch/gels, login EPL, password: EPL), until they change direction due to thermal fluctuations and collide with previously formed portions of the filament. A 3-fold vertex forms at each of the collision points of the two ends, resulting in the observed double-vertex shape (>-<) of the percolating cluster. Upon closer inspection we notice that the total area occupied by the PHDC is smaller in the linear than in the double-vertex case and that more small clusters are present in the former case than in the latter. A more quantitative analysis shows that the total energy of the system is significantly lower in the case where the double-vertex PHDC is observed than in the linear PHDC case. Since entropy is not measured in our simulation, this does not tell us which of the two states corresponds to a lower free energy of the system. Nevertheless, the analogy with late-stage growth in phase-separating binary liquids, where surface-energy-driven droplet coalescence processes continue long after the final equilibrium composition was reached within individual droplets of the daughter phase, suggests that the state with the linear PHDC is metastable with respect to coalescence of the remaining small spherical clusters with the linear cluster, and the formation of the double-vertex PHDC. Indeed, at T = 0.4 we observed a sequence of events whereby a linear PHDC was first formed and, later on, a new linear cluster nucleated and grew until both its ends collided with the original linear filament, giving rise to a double-vertex structure (not shown).

As the temperature is further lowered, multiple nucleation events take place in the simulation box, followed by growth of linear filaments and their termination by collisions. This leads to the formation of a "super network" of dense filaments connected by 3-fold vertices, embedded in a dilute phase of isolated particles connected by strongly stretched springs. A typical snapshot of the steady state of the system at T = 0.3 is shown in the above figure (c), where both hexagons and squares are observed. Interestingly, the shapes resemble those of two-dimensional foams, even though the physics is very different: an interplay of elasticity and attractions in our case, vs. an interplay of surface tension and gas pressure in foams.

Nucleation, Growth and Hysteresis

In order to examine the sequence of events that precede the birth of a filament, we decreased the temperature to T = 0.2 and followed the dynamics of the system in time (see movie B). Following the quench, numerous compact clusters of up to about n = 10 particles appear in the simulation box. Each of the particles in such a cluster is typically connected by several springs to the 'outside' and therefore there are about n springs that connect neighboring clusters, cf. the following figure.



Snapshots of transient patterns (square grid) following a quench to T = 0.2: (a) absorption stage, (b) small clusters organized along a line, (c) shortly later, a linear filament is formed.

Since the pulling force on a cluster due to n parallel springs is n times larger than the force exerted by a similarly stretched single spring (between the cluster and an isolated neighboring particle), in mechanical equilibrium this force has to be balanced by a force equal in magnitude and acting along the opposite direction. The same argument can be applied to the neighboring cluster as well, and clusters arrange themselves along lines of high stress. This breaks radial symmetry, and the resulting critical nucleus has the shape of a linear filament, reminiscent of the string-like arrangement of magnetic or electric dipoles. The electric/magnetic analogy is not accidental since such a cluster can be described as a force dipole.

(Figure: fraction of particles vs. T ∈ [0.35, 0.55] for large clusters, small clusters, and isolated particles.)

Plots of the probability that a particle belongs to a large cluster (■), a small cluster (○), or that it is an isolated particle (∗). Data for both hexagonal (shown) and square grids with N = 10^4 particles fall onto the same curves within statistical uncertainty, with the exception that the high-T values for the number of small clusters differ slightly, as expected from the corresponding ideal gas values (0.438 and 0.487 for the square and hexagonal grids, respectively).

Here we plotted the probability that a particle belongs (i) to the PHDC, (ii) to a 'small' cluster, or that it is (iii) an isolated particle, as a function of temperature. As T decreases, the probability for a particle to belong to a small cluster drops sharply from a value exceeding 0.5 to less than 0.1 at T∗ and approaches zero at lower T. A much smaller drop at T∗ is observed for the probability to observe an isolated particle. We conclude that the formation of the large cluster occurs mainly at the expense of small clusters that are absorbed by it, reminiscent of the late-stage growth of droplets in a phase-separating binary liquid (in the metastable region of its phase diagram).


In order to gain further insight into the behavior of the system, we start from an equilibrated low-temperature (T = 0.38) configuration which contains a PHDC, increase T to some value larger than T∗, and monitor the system during very long simulation runs (≫ 10^8 time steps). When T is increased into the range T∗ < T < T∗∗ (in which no filaments are observed under cooling from an initial high-temperature state), filaments tend to thin by progressive 'melting' at their surface and then break. In the 10^4-particle system the process continues until a single thin filament remains, which appears to be stable during the longest simulation runs. Finally, at T∗∗ no filaments are observed.

(Figure: order parameter vs. T ∈ [0.35, 0.6]; upper branch: filament phase, reached starting from the filament phase; lower branch: homogeneous phase, reached starting from the homogeneous phase.)

Order parameter 0 ≤ S ≤ 1 defined as S = L/C, where L and C denote the number of particles in the largest cluster and in all clusters, respectively (isolated particles do not belong to any cluster). Data shown for both 'cooling' and 'heating' runs; results are indistinguishable for the hexagonal and square grids with N = 10^4 particles. States on both branches are reached from all states within their branch upon heating or cooling.

This behavior is summarized in the above figure, where the ratio of the number of particles in the PHDC to the total number of particles in clusters of all sizes is monitored. Under cooling from the homogeneous state, the system follows the lower branch and jumps to the inhomogeneous filamentous state at T∗; conversely, when the system is heated starting from the low-temperature inhomogeneous phase where PHDCs are present, it follows the upper branch up to T∗∗ = 0.55 ± 0.02 (for N = 10^4) and then undergoes a transition to the homogeneous phase. Such hysteresis is familiar from the study of first-order phase transitions (see, e.g., the isotropic-nematic transition in liquid crystals). The interpretation (based on mean-field considerations) is that two free energy minima corresponding to the two phases are present in the range T∗ < T < T∗∗. Even though only the lowest minimum corresponds to the true equilibrium state, a system which was initially prepared in the other (metastable) state will remain in it almost indefinitely if this local free energy minimum is deep enough (compared to the thermal energy T). We therefore interpret T∗ and T∗∗ as the stability limits of the homogeneous and the inhomogeneous phases, respectively. Notice that unlike the true thermodynamic transition temperature, which lies somewhere between T∗ and T∗∗ and which is strictly defined only in the limit of an infinite system, the latter temperatures have no thermodynamic significance and can depend on the size of the system.


Fractal dimension of a gel

We had earlier calculated fractal dimensions of single polymers, cf. Sec. 3.1.3. The fractal dimension of a D-dimensional bulk system with a finite number Ntotal of particles (we consider equal box sizes L in each direction) can be calculated using the following procedure:

1. Introduce an integer j ≥ 1, and a grid size g(j) = L/(2^j − 1). Define Jmax such that g(Jmax) is of the order of the smallest length scale in the system (atom diameter), below which the system cannot be fractal anymore.

2. For each j = 1, 2, .., Jmax, sort the particle coordinates into the cells of a regular grid of grid size g(j) and count the number N of cells occupied by at least one of the particles. Let us call these quantities N(j), since for each j we obtain a different N. Notice that if g(j) is larger (smaller) than the maximum (minimum) distance between neighboring particles, then N(j) = (2^j − 1)^D (N(j) = Ntotal). It is between these limiting j-values that we can extract a fractal dimension.

3. If the system is fractal (within these bounds), the fractal dimension D_f is obtained from the positive slope in the plot of ln N(j) versus ln(1/g(j)), or equivalently, N(j) ∝ g(j)^(−D_f).
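The three steps above can be sketched in Python (an illustrative implementation of this procedure written for these notes; the uniform 2D test cloud and the function name are our own choices):

```python
import numpy as np

def box_counting_dimension(coords, L, j_max):
    """Estimate the fractal dimension D_f by box counting,
    using grids of size g(j) = L / (2**j - 1) as in the procedure above."""
    log_inv_g, log_N = [], []
    for j in range(1, j_max + 1):
        n_cells = 2**j - 1                      # cells per direction
        g = L / n_cells                         # grid size g(j)
        # sort particle coordinates into cells and count occupied cells
        cells = np.clip((coords // g).astype(int), 0, n_cells - 1)
        N_j = len({tuple(c) for c in cells})
        log_inv_g.append(np.log(1.0 / g))
        log_N.append(np.log(N_j))
    # D_f = positive slope of ln N(j) versus ln(1/g(j))
    slope, _intercept = np.polyfit(log_inv_g, log_N, 1)
    return slope

# sanity check: a uniform (non-fractal) cloud in a 2D box should give D_f close to D = 2
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(20000, 2))
Df = box_counting_dimension(pts, L=1.0, j_max=6)
```

For a genuinely fractal configuration (e.g. a gel cluster), the fit should be restricted to the intermediate j-window identified in step 2.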

Discussion

All the data presented in this section were obtained for a given set of parameters (lattice topology, system size, density, spring constant, initial configuration). In order to test the generality of our results, we performed a set of simulations in which these parameters were varied. We find that when the square lattice is replaced by a hexagonal one (and the spring constant is reduced from k = 1/10 to k = 1/15 to compensate for the increased coordination number), all steady-state results change little, including T∗, the PHDC patterns, and the temperature ranges in which different patterns are observed. We also tested the sensitivity of our results to major changes in the initial conditions: instead of placing the particles on an ideal lattice, we started from an inhomogeneous "droplet" configuration consisting of a dense block of particles (with grid spacing l0 = 1) centered in the simulation cell, surrounded by a uniform low-density background. When this configuration is quenched to T < T∗, the droplet configuration breaks into several filamentous branches and a PHDC is formed which is similar to that obtained by starting the simulation from the ideal lattice. Further, we made sure that N = 10^4 is large enough not to suffer from finite-size effects, by studying systems 5 times larger in each dimension. While T∗ appears to increase with system size, our qualitative conclusions concerning the structure of the low-temperature phase (the geometry and the kinetics of formation of the PHDC) remain unaffected:


8.3. GELATION, FILAMENTOUS NETWORKS 159

[Figure: snapshots on a 100 × 100 and a 500 × 500 hexagonal grid.] Snapshots of a large (N = 25 × 10^4) and a small (inset, N = 10^4 particles) system at the same system parameters (spring coefficient k = 1/15, grid spacing l0 = 3.5, T = 0.3), after 1.25 × 10^8 time steps, at t = 50 × 10^4 (initial configuration: hexagonal grid).

What happens if a random network is employed? In order to study this issue, we generated a random network on a square lattice (with N = 10^4 and l0 = 3.5), with spring constants sampled from a uniform random distribution in a symmetric interval about k = 1/10. For narrow intervals of k values (0.08 − 0.12), the results are practically indistinguishable from those for k = 0.1. For k values in the range 0.01 − 0.19, the steady-state PHDC patterns observed at lower temperatures remain the same as in the regular-network case, but the point at which these clusters first appear is shifted to some temperature above 0.5. The most pronounced difference is in the temporal history: while in the regular-network case the steady state is approached by single-step kinetics, the random network exhibits two-step kinetics, with a fast step in which the PHDC is formed, followed by a slow step involving dissolution of the remaining small clusters and further growth of the PHDC.

In conclusion, we would like to mention that the microphase separation patterns reported in this work bear strong resemblance to those seen in sections of elastin hydrogels observed by cryoscopic scanning electron microscopy: a network of 7 nm thick and several hundred nm long filaments, the latter made of spherical beads (see Figs. 3a and 3b in ref. [McMillan et al. (1999)]). This provides support to our belief that, despite its simplicity, our model [Peleg et al. (2007)] captures the main physical ingredients responsible for phase separation in gels.




complexfluids.ethz.ch

This document is copyrighted M.K., Swiss ETH Zurich/CH This copyright is violated upon selling or posting these class lecture notes

Computational Polymer Physics
Martin Kröger, Emanuela Del Gado, Patrick Ilg

Part 9

Smooth particles




9.1 Smoothed particle dynamics, soft fluid particles

Smoothed particle dynamics (SPH) is a Lagrangian particle method introduced by Lucy and Monaghan in the 70s in order to solve hydrodynamic problems in astrophysical contexts. Generalizations of SPH that include viscosity and thermal conduction and address laboratory-scale situations like viscous flow and thermal convection have been presented only quite recently.

In the soft fluid particle model, one introduces a bell-shaped weight function w(r), normalized to unity and with compact support, a sphere of radius h. In the limit h → 0 the weight function becomes a Dirac delta function. The weight function serves two purposes. First, it allows one to define the density of every fluid particle through the relation

d_i = Σ_j w(x_ij),   x_ij = |x_ij|,   x_ij = x_j − x_i,   (9.1)

where x_ij is the distance between fluid particles i and j. If around particle i there are many particles, its density d_i will be large. One associates a volume v_i = d_i^(−1) to the fluid particle. The second purpose of the weight function is to provide an approximation of second spatial derivatives at the particle locations. It can be shown that an approximation to order h^2 is given by

∇∇A_i = Σ_j (1/d_i) F(x_ij) (A_i − A_j) [5 x̂_ij x̂_ij − 1],   (9.2)

where x̂ = x/|x|, A is an arbitrary hydrodynamic field, and A_i = A(x_i). The function F is defined through

∇w(r) = −r F(r).   (9.3)

Expression (9.2) allows one to estimate the value of the derivative at a given point in terms of the values of the function at neighboring points. A common selection for the weight function is Lucy's function

w(r) = [105/(16πh³)] (1 + 3r/h)(1 − r/h)³,   (9.4)

from which follows F(r) = 315(1 − r/h)²/(4πh⁵) ≥ 0. With (9.2) we can discretize the second-order derivatives of the hydrodynamic equations

dρ/dt = −ρ ∇·v,

ρ dv/dt = −∇P + η ∇²v + (ζ + η/3) ∇(∇·v),

Tρ ds/dt = κ ∇²T + 2η ∇v : ∇v − (2/3)η (∇·v)² + ζ (∇·v)².   (9.5)
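A quick numerical cross-check of (9.4) and the quoted F(r) (a Python sketch written for these notes; only the formulas above are used): Lucy's function should integrate to unity over its support, and −w′(r)/r should reproduce 315(1 − r/h)²/(4πh⁵).

```python
import numpy as np

def w_lucy(r, h):
    """Lucy's weight function, eq. (9.4); compact support r < h."""
    return np.where(r < h, 105.0 / (16 * np.pi * h**3)
                    * (1 + 3 * r / h) * (1 - r / h)**3, 0.0)

def F_lucy(r, h):
    """F(r) defined through grad w(r) = -r F(r), eq. (9.3)."""
    return np.where(r < h, 315.0 / (4 * np.pi * h**5) * (1 - r / h)**2, 0.0)

h = 1.3
r = np.linspace(1e-6, h * (1 - 1e-6), 200001)

# normalization: 4 pi * int_0^h r^2 w(r) dr = 1 (trapezoidal rule)
f = 4 * np.pi * r**2 * w_lucy(r, h)
norm = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

# relation (9.3): -w'(r)/r = F(r), via a central finite difference at r0 = h/2
r0, dr = 0.5 * h, 1e-6
wprime = (w_lucy(r0 + dr, h) - w_lucy(r0 - dr, h)) / (2 * dr)
F_numeric = float(-wprime / r0)
```

Both checks pass to the accuracy of the discretization (norm ≈ 1, F_numeric ≈ F_lucy(r0, h)).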

The time derivatives in (9.5) are substantial derivatives, describing how the quantities vary as we follow the flow field v; ρ is the mass density field, and T and P are the temperature and pressure fields given by the local equilibrium assumption. The transport coefficients (taken to be constant for simplicity) are the shear and bulk viscosities η and ζ, respectively, and the thermal conductivity κ. One can construct a model of fluid particles by discretizing the equations of hydrodynamics on a set of nodes that follows the flow field. These nodes can be interpreted as fluid particles with definite amounts of mass, momentum, energy, volume, and entropy. The local equilibrium hypothesis allows one to relate the entropy with the rest of the extensive quantities through the corresponding equilibrium equation. In this respect, the nodes are actually understood as representing proper thermodynamic subsystems that follow the flow field. The discretization procedure establishes how the extensive quantities are exchanged between fluid particles and how the fluid particles should eventually move. The discrete equations of hydrodynamics (assuming vanishing bulk viscosity, ζ = 0) read

ẋ_i = v_i,

m v̇_i = Σ_j [P_i/d_i² + P_j/d_j²] F_ij x_ij − (5η/3) Σ_j [F_ij/(d_i d_j)] (1 + x̂_ij x̂_ij) · v_ij,

T_i Ṡ_i = Σ_j [F_ij/(d_i d_j)] ( (5η/6) [v_ij² + (x̂_ij · v_ij)²] − 2κ T_ij ),   (9.6)

where P_i and T_i are the pressure and temperature of the fluid particle i, which are functions of d_i, S_i through the equilibrium equations of state, and F_ij = F(x_ij). In addition, v_ij = v_i − v_j, and T_ij = T_i − T_j. The above model system conserves mass, momentum, and energy, and the total entropy is a non-decreasing function in time.

The physical picture that emerges from these equations is very appealing and closely resembles the interpretation of dissipative particles in DPD. Particles of constant mass m move according to their velocities and exert forces of range h on each other, of different nature. First, a repulsive force directed along the line joining the particles, with a magnitude given by the pressures and densities of the particles. Roughly speaking, the larger the pressure in a given region, the higher the repulsion between the particles there. The fluid particles are also subject to friction forces that depend on the relative velocities of the particles. As opposed to the friction force of the DPD model (cf. Sec. 9.2), there is a component of these forces, directly proportional to v_ij, that breaks the conservation of total angular momentum. If one wishes to respect this conservation law, then it is necessary to introduce into the model a spin variable associated with every particle. For a sufficiently large number of particles, the violation of angular momentum conservation is negligible. The terms in the entropy equation have the same meaning as those in the original equation (9.5), that is, heat conduction and viscous heating. The heat conduction term tries to reduce temperature differences between particles by suitable energy exchange, whereas the viscous heating term ensures that the kinetic energy dissipated by the friction forces is transformed into internal energy of the fluid particles.
As in any SPH simulation, one has to test and fine-tune the boundary conditions in order to recover the desired properties (no velocity slip or temperature jumps at solid walls, etc.). These problems are not expected to appear in the Voronoi model, which is similar to SPH but based on a Voronoi tessellation, because there the implementation of boundary conditions is straightforward (one simply specifies fluxes at the walls of the cells, for example). In our opinion, the Voronoi model is conceptually more elegant and superior to the soft particle model, and perhaps even faster due to the smaller number of neighbors. However, the complexity of programming a 3D Voronoi code, as opposed to the simplicity of the smooth fluid particle code, still leaves room for applications of the soft particle model.



9.2 Dissipative particle dynamics

In dissipative particle dynamics (DPD), atoms are lumped together into 'united atoms' describing more than one atom, and these 'particles' interact with each other via rather soft forces, as the positions of the underlying atoms are smeared out [Karttunen et al. (2004)]. Soft forces allow for comparably large integration time steps, typically four orders of magnitude above a classical, atomistic molecular dynamics simulation.

In DPD a set of N interacting particles, whose time evolution is governed by Newton's equations of motion, is considered. The force F_i acting on a particle i is given by the sum of a conservative, a drag, and a pair-wise additive random force,

F_i = Σ_{j=1}^{N} (F^C_ij + F^D_ij + F^R_ij).   (9.7)

The conservative force is given by

F^C_ij = −a_ij (1 − x_ij/r_cut) x̂_ij   (9.8)

for x_ij < r_cut, and vanishes otherwise. Here, x_ij = x_j − x_i as before, x_ij ≡ |x_ij|, and x̂_ij ≡ x_ij/x_ij is the unit vector pointing from particle i to j. The coefficients a_ij parameterize the corresponding interaction potential. The drag force and the random force act as heat sink and source, respectively, so that their combined effect is a thermostat. The random force is given by

F^R_ij = σ ω(x_ij) x̂_ij ζ/√Δt,   (9.9)

and the drag force as

F^D_ij = −[σ² ω(x_ij)²/(2 k_B T)] x̂_ij (v_ij · x̂_ij),   (9.10)

with v_ij ≡ v_j − v_i. Here, ζ is a random variable with zero mean and unit variance, and ω(r) = 1 − r for r < 1 and ω = 0 for r > 1 (dimensionless units, cf. Sec. 6.3). One chooses the particle mass, temperature, and the interaction range as units of mass, energy, and length, hence m = k_B T = r_cut = 1 in all equations. The amplitude of the random force is proportional to 1/√Δt. This particular DPD thermostat is special in that it conserves (angular) momentum, leading to a correct description of hydrodynamics. The reason why this thermostat conserves hydrodynamics is quite profound. All forces acting on particles are exerted on them by other particles nearby. This holds for the conservative forces as well as for the friction and random forces. Since all particles obey Newton's third law, the sum of all forces in the system vanishes. If we take any given volume in the fluid, then all forces exerted between particles enclosed by that volume cancel. Consequently, the total acceleration of this volume of liquid equals the sum of all forces that cross the boundary of the volume. This is the very condition that leads to the Navier-Stokes equations. Therefore, whatever interaction force we invent between the particles, as long as it is a local interaction and satisfies actio-reactio, we will always have hydrodynamics. If the random force were not implemented pair-wise, but instead relative to a fixed background, we would break Newton's third law. This is the case in Brownian dynamics, cf. Sec. 6.1. The DPD method in general has been shown to produce a correct (N,V,T) ensemble if the fluctuation-dissipation relation is satisfied, which is the deeper reason for the quantity σ to appear both in (9.9) and (9.10).

A modified velocity Verlet algorithm has been shown to operate efficiently in DPD, which for λ = 1/2 simplifies to the algorithm presented in (6.41) above,

x = x + v Δt + (1/2)(Δt)² a,
(* accumulate x-dependent averages *)
v^λ = v + λ a Δt,
v = v + (1/2) a Δt,
a = (1/m) F(x, v^λ),
v = v + (1/2) a Δt,
(* accumulate v-dependent averages, eventually rescale velocities *)   (9.11)
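The forces (9.8)-(9.10) are easily coded; the sketch below (Python, written for illustration; the values a = 25 and σ = 3 are common choices from the DPD literature, not prescribed by the text) also makes the actio-reactio property explicit: with the same random number ζ used for a pair, each contribution changes sign under i ↔ j, so momentum is conserved pairwise.

```python
import numpy as np

def dpd_pair_forces(xi, xj, vi, vj, zeta, a=25.0, sigma=3.0, kBT=1.0, dt=0.01):
    """Conservative, drag, and random DPD force on particle i due to particle j,
    eqs. (9.8)-(9.10), in reduced units (m = kBT = rcut = 1), with omega(r) = 1 - r."""
    xij = xj - xi                     # vector from i to j
    r = float(np.linalg.norm(xij))
    if r >= 1.0:                      # beyond the cutoff: all forces vanish
        zero = np.zeros(3)
        return zero, zero, zero
    e = xij / r                       # unit vector x^_ij
    omega = 1.0 - r
    vij = vj - vi
    FC = -a * omega * e                                        # eq. (9.8)
    FD = -sigma**2 * omega**2 / (2.0 * kBT) * e * (vij @ e)    # eq. (9.10)
    FR = sigma * omega * e * zeta / np.sqrt(dt)                # eq. (9.9)
    return FC, FD, FR

xi, xj = np.zeros(3), np.array([0.5, 0.0, 0.0])
vi, vj = np.zeros(3), np.array([0.1, 0.0, 0.0])
FCij, FDij, FRij = dpd_pair_forces(xi, xj, vi, vj, zeta=0.7)
FCji, FDji, FRji = dpd_pair_forces(xj, xi, vj, vi, zeta=0.7)
# each of the three contributions is antisymmetric: F_ij = -F_ji
```

In a production code, ζ would be drawn afresh for every pair and time step (zero mean, unit variance), and the pair loop would use a neighbor list.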

9.2.1 Delaunay triangulation and Voronoi diagram

A Voronoi diagram, also called a Voronoi tessellation or Voronoi decomposition (named after Georgy Voronoi), and also called a Dirichlet tessellation (after Lejeune Dirichlet), is a special kind of decomposition of a metric space determined by distances to a specified discrete set of objects in the space, e.g., a discrete set of points [Atsuyuki et al. (2000)]. The Voronoi diagram is the partitioning of a plane with n points into convex polygons such that each polygon contains exactly one generating point and every point in a given polygon is closer to its generating point than to any other.

In other words, for any set S of points and for almost any point x, there is one point of S to which x is closer than to any other point of S. The word "almost" is occasioned by the fact that a point x may be equally close to two or more points of S. The set of all points closer to a point c of S than to any other point of S is the interior of a (in some cases unbounded) convex polytope called the Dirichlet domain or Voronoi cell for c. The set of such polytopes tessellates the whole space and is the Voronoi tessellation corresponding to the set S. If the dimension of the space is only 2, then it is easy to draw pictures of Voronoi tessellations, and in that case they are sometimes called Voronoi diagrams. The Delaunay triangulation is a triangulation which is equivalent to the nerve of the cells in a Voronoi diagram.

Voronoi diagrams for a set of N points are efficiently computed (within order N ln N) using a Delaunay triangulation: Delaunay triangles are defined by the condition that the sphere through the three points forming a triangle contains no other points. The resulting Delaunay (space-filling) triangles do not overlap. The Delaunay triangulation is dual to the Voronoi diagram: the circle circumscribed about a Delaunay triangle has its center at a vertex of a Voronoi polygon. In a code, we first need to identify a list of potential triples of points (usually already available during the force calculation). For any point P, the two nearest points V1, V2 belong to its Delaunay triangulation. Having sorted the points by their distance to P, one can go through this list of candidates and compute distances between each candidate and the established members of the Delaunay triangulation to identify new members.

CODE 2D DELAUNAY TRIANGULATION, VORONOI DIAGRAM USING BUILT-IN ROUTINES

x = rand(1,10); y = rand(1,10);
TRI = delaunay(x,y);
subplot(1,2,1); triplot(TRI,x,y); hold on; plot(x,y,'or'); axis([0 1 0 1]);
[vx, vy] = voronoi(x,y,TRI);
subplot(1,2,2); plot(x,y,'r+',vx,vy,'b-'); axis([0 1 0 1]);
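The same construction is available in Python through SciPy's Qhull wrappers (a sketch added for comparison with the Matlab built-ins above, not part of the original notes):

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
pts = rng.random((10, 2))    # ten random points in the unit square

tri = Delaunay(pts)          # tri.simplices: (n_triangles, 3) point indices
vor = Voronoi(pts)           # vor.vertices: vertices of the Voronoi polygons
# duality: each Voronoi vertex is the circumcenter of a Delaunay triangle
```

scipy.spatial also provides voronoi_plot_2d and delaunay_plot_2d for quick visualization, analogous to the triplot/voronoi plotting calls above.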

EXERCISE 2D VORONOI DIAGRAM

Write a code which reads in a set of 2D coordinates confined in a square simulation box with periodic images, and calculate and visualize the Delaunay triangulation and the corresponding Voronoi diagram.




Bibliography

[Gaylord and Wellin (1995)] R.J. Gaylord, P.R. Wellin, Computer Simulations with Mathematica (Springer, Berlin, 1995).

[Rubinstein and Colby (2003)] M. Rubinstein, R.H. Colby, Polymer Physics (Oxford University Press, Oxford, 2003).

[Rappaport and Rabin (2008)] S.M. Rappaport, Y. Rabin, Model of DNA bending by cooperative binding of proteins. Phys. Rev. Lett. 101, 038101 (2008).

[Honerkamp (1990)] J. Honerkamp, Stochastische dynamische Systeme. Konzepte, numerische Methoden, Datenanalysen (VCH, Weinheim, 1990).

[Gardiner (1985)] C.W. Gardiner, Handbook of Stochastic Methods (Springer, Berlin, 1985).

[Risken (1989)] H. Risken, The Fokker-Planck Equation (Springer, Berlin, 1989).

[Frenkel and Smit (2002)] D. Frenkel, B. Smit, Understanding Molecular Simulation: From Algorithms to Applications (Academic Press, London, 2002).

[Swendsen and Wang (1987)] R.H. Swendsen, J.S. Wang, Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett. 58, 86 (1987).

[Hess et al. (2005)] S. Hess, M. Kröger, P. Fischer, Einfache und disperse Flüssigkeiten, in: Bergmann-Schaefer, Vol. 5 (de Gruyter, Berlin, 2005).

[Bortz et al. (1975)] A.B. Bortz, M.H. Kalos, J.L. Lebowitz, New algorithm for Monte Carlo simulation of Ising spin systems. J. Comput. Phys. 17, 10 (1975).

[Chatterjee and Vlachos (2007)] A. Chatterjee, D.G. Vlachos, An overview of spatial microscopic and accelerated kinetic Monte Carlo methods. J. Computer-Aided Mater. Des. 14, 253 (2007).

[Ottinger (1996)] H.C. Öttinger, Stochastic Processes in Polymeric Fluids (Springer, Berlin, 1996).

[Kroger (2005)] M. Kröger, Models for Polymeric and Anisotropic Liquids (Springer, Berlin, 2005).

[Footnote] Reduced units – online interactive tool: http://www.complexfluids.ethz.ch/units.

[Allen and Tildesley (1989)] M.P. Allen, D.J. Tildesley, Computer Simulation of Liquids (Clarendon, Oxford, 1989).

[Peleg et al. (2007)] O. Peleg, M. Kröger, I. Hecht, Y. Rabin, Filamentous networks in phase-separating two-dimensional gels. Europhys. Lett. 77, 58007 (2007).




[Flory (1953)] P.J. Flory, Principles of Polymer Chemistry (Cornell Univ. Press, Ithaca, 1953).

[McMillan et al. (1999)] R.A. McMillan, K.L. Caran, R.P. Apkarian, V.P. Conticello, High-resolution topographic imaging of environmentally responsive, elastin-mimetic hydrogels. Macromolecules 32, 9067 (1999).

[Karttunen et al. (2004)] M. Karttunen, I. Vattulainen, A. Lukkarinen (Eds.), Novel Methods in Soft Matter Simulations (Springer, Berlin, 2004).

[Atsuyuki et al. (2000)] A. Okabe, B. Boots, K. Sugihara, S.N. Chiu, Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, 2nd Ed. (John Wiley, Hoboken, NJ, 2000).