Topics in Occupation Times and Gaussian Free Fields
Alain-Sol Sznitman∗
Notes of the course “Special topics in probability”
at ETH Zurich during the Spring term 2011
∗ Mathematik, ETH Zurich. CH-8092 Zurich, Switzerland
With the support of the grant ERC-2009-AdG 245728-RWPERCRI
Foreword
The following notes grew out of the graduate course "Special topics in probability", which I gave at ETH Zurich during the Spring term 2011. One of the objectives was to explore the links between occupation times, Gaussian free fields, Poisson gases of Markovian loops, and random interlacements. The stimulating atmosphere during the live lectures was an encouragement to write a fleshed-out version of the handwritten notes, which were handed out during the course. I am immensely grateful to Pierre-François Rodriguez, Artëm Sapozhnikov, Balázs Ráth, Alexander Drewitz, and David Belius, for their numerous comments on the successive versions of these notes.
Contents

0 Introduction
1 Generalities
  1.1 The set-up
  1.2 The Markov chain X. (with jump rate 1)
  1.3 Some potential theory
  1.4 Feynman-Kac formula
  1.5 Local times
  1.6 The Markov chain X. (with variable jump rate)
2 Isomorphism theorems
  2.1 The Gaussian free field
  2.2 The measures Px,y
  2.3 Isomorphism theorems
  2.4 Generalized Ray-Knight theorems
3 The Markovian loop
  3.1 Rooted loops and the measure µr on rooted loops
  3.2 Pointed loops and the measure µp on pointed loops
  3.3 Restriction property
  3.4 Local times
  3.5 Unrooted loops and the measure µ∗ on unrooted loops
4 Poisson gas of Markovian loops
  4.1 Poisson point measures on unrooted loops
  4.2 Occupation field
  4.3 Symanzik's representation formula
  4.4 Some identities
  4.5 Some links between Markovian loops and random interlacements
References
Index
0 Introduction
This set of notes explores some of the links between occupation times and Gaussian processes. Notably they bring into play certain isomorphism theorems going back to Dynkin [4], [5], as well as certain Poisson point processes of Markovian loops, which originated in physics through the work of Symanzik [26]. More recently such Poisson gases of Markovian loops have reappeared in the context of the "Brownian loop soup" of Lawler and Werner [16] and are related to the so-called "random interlacements", see Sznitman [27]. In particular they have been extensively investigated by Le Jan [17], [18].
A convenient set-up to develop this circle of ideas consists in the consideration of a finite connected graph E endowed with positive weights and a non-degenerate killing measure. One can then associate to these data a continuous-time Markov chain Xt, t ≥ 0, on E, with variable jump rates, which dies after a finite time due to the killing measure, as well as
(0.1) the Green density g(x, y), x, y ∈ E (which is positive and symmetric),

(0.2) the local times L^x_t = ∫_0^t 1{Xs = x} ds, t ≥ 0, x ∈ E.
In fact g(·, ·) is a positive definite function on E × E, and one can define a centered Gaussian process ϕx, x ∈ E, such that

(0.3) cov(ϕx, ϕy) (= E[ϕx ϕy]) = g(x, y), for x, y ∈ E.
This is the so-called Gaussian free field.
It turns out that ½ ϕz², z ∈ E, and L^z_∞, z ∈ E, have intricate relationships. For instance Dynkin's isomorphism theorem states in our context that for any x, y ∈ E,

(0.4) (L^z_∞ + ½ ϕz²)_{z∈E} under Px,y ⊗ P^G

has the "same law" as

(0.5) (½ ϕz²)_{z∈E} under ϕx ϕy P^G,

where Px,y stands for the (non-normalized) h-transform of our basic Markov chain, with the choice h(·) = g(·, y), starting from the point x, and P^G for the law of the Gaussian field ϕz, z ∈ E.
Eisenbaum's isomorphism theorem, which appeared in [7], does not involve h-transforms and states in our context that for any x ∈ E, s ≠ 0,

(0.6) (L^z_∞ + ½ (ϕz + s)²)_{z∈E} under Px ⊗ P^G

has the "same law" as

(0.7) (½ (ϕz + s)²)_{z∈E} under (1 + ϕx/s) P^G.
The above isomorphism theorems are also closely linked to the topic of theorems of Ray-Knight type, see Eisenbaum [6], and Chapters 2 and 8 of Marcus-Rosen [19]. Originally, see [13, 21], such theorems came as a description of the Markovian character in the space variable of Brownian local times evaluated at certain random times. More recently, the Gaussian aspects and the relation with the isomorphism theorems have gained prominence, see [8] and [19].
Interestingly, Dynkin's isomorphism theorem has its roots in mathematical physics. It grew out of the investigation by Dynkin in [4] of a probabilistic representation formula for the moments of certain random fields in terms of a Poissonian gas of loops interacting with Markovian paths, which appeared in Brydges-Fröhlich-Spencer [2], and was based on the work of Symanzik [26].
The Poisson point gas of loops in question is a Poisson point process on the state space of loops on E modulo time-shift. Its intensity measure is a multiple αµ∗ of the image µ∗ of a certain measure µrooted under the canonical map for the equivalence relation identifying rooted loops γ that only differ by a time-shift. This measure µrooted is the σ-finite measure on rooted loops defined by

(0.8) µrooted(dγ) = ∑_{x∈E} ∫_0^∞ Q^t_{x,x}(dγ) dt/t,
where Q^t_{x,x} is the image of 1{Xt = x} Px under (Xs)_{0≤s≤t}, if X. stands for the Markov chain on E with jump rates equal to 1 attached to the weights and killing measure we have chosen on E.
The random fields on E alluded to above are motivated by models of Euclidean quantum field theory, see [11], and are for instance of the following kind:
(0.9) ⟨F(ϕ)⟩ = ∫_{R^E} F(ϕ) e^{−½ E(ϕ,ϕ)} ∏_{x∈E} h(ϕx²/2) dϕx / ∫_{R^E} e^{−½ E(ϕ,ϕ)} ∏_{x∈E} h(ϕx²/2) dϕx,

with

h(u) = ∫_0^∞ e^{−vu} dν(v), u ≥ 0, with ν a probability distribution on R₊,

and E(ϕ, ϕ) the energy of the function ϕ corresponding to the weights and killing measure on E (the matrix E(1x, 1y), x, y ∈ E, is the inverse of the matrix g(x, y), x, y ∈ E, in (0.3)).
Fig. 0.1: The paths w1, . . . , wk in E interact with the gas of loops through the random potentials.
The typical representation formula for the moments of the random field in (0.9) looks like this: for k ≥ 1, z1, . . . , z2k ∈ E,

(0.10) ⟨ϕ_{z1} . . . ϕ_{z2k}⟩ = ∑_{pairings of z1,...,z2k} Px1,y1 ⊗ · · · ⊗ Pxk,yk ⊗ Q[e^{−∑_{x∈E} vx (L^x + L^x_∞(w1) + · · · + L^x_∞(wk))}] / Q[e^{−∑_{x∈E} vx L^x}],

where the sum runs over the (non-ordered) pairings (i.e. partitions) of the symbols z1, z2, . . . , z2k into {x1, y1}, . . . , {xk, yk}. Under Q the vx, x ∈ E, are i.i.d. ν-distributed (random potentials), independent of the L^x, x ∈ E, which are distributed as the total occupation times (properly scaled to take account of the weights and killing measure) of the gas of loops with intensity ½ µ, and the Pxi,yi, 1 ≤ i ≤ k, are defined just as below (0.4), (0.5).
The Poisson point process of Markovian loops has many interesting properties. We will for instance see that when α = ½ (i.e. the intensity measure equals ½ µ),

(0.11) (L^x)_{x∈E} has the same distribution as ½ (ϕx²)_{x∈E}, where (ϕx)_{x∈E} stands for the Gaussian free field in (0.3).
The Poisson gas of Markovian loops is also related to the model of random interlacements [27], which loosely speaking corresponds to "loops going through infinity". It appears as well in the recent developments concerning conformally invariant scaling limits, see Lawler-Werner [16], Sheffield-Werner [24]. As for random interlacements, interestingly, in place of (0.11), they satisfy an isomorphism theorem in the spirit of the generalized second Ray-Knight theorem, see [28].
1 Generalities
In this chapter we describe the general framework we will use for the most part of these notes. We introduce finite weighted graphs with killing and the associated continuous-time Markov chains X., with constant jump rate equal to 1, and X., with variable jump rate. We also recall various notions related to Dirichlet forms and potential theory.
1.1 The set-up
We introduce in this section the general set-up, which we will use in the sequel, and recall some classical facts. We also refer to [14] and [10], where the theory is developed in a more general framework. We assume that
(1.1) E is a finite non-empty set
endowed with non-negative weights
(1.2) cx,y = cy,x ≥ 0, for x, y ∈ E, and cx,x = 0, for x ∈ E,

so that

(1.3) E, endowed with the edge set consisting of the pairs {x, y} such that cx,y > 0, is a connected graph.
We also suppose that there is a killing measure on E:
(1.4) κx ≥ 0, x ∈ E,

and that

(1.5) κx ≠ 0, for at least some x ∈ E.
We also consider a
(1.6) cemetery state ∆ not in E
(we can think of κx as cx,∆).
With these data we can define a measure on E:
(1.7) λx = ∑_{y∈E} cx,y + κx, x ∈ E (note that λx > 0, due to (1.2) - (1.5)).
We can also introduce the energy of a function on E, or Dirichlet form
(1.8) E(f, f) = ½ ∑_{x,y∈E} cx,y (f(y) − f(x))² + ∑_{x∈E} κx f²(x), for f : E → R.
Note that (cx,y)_{x,y∈E} and (κx)_{x∈E} determine the Dirichlet form. Conversely, the Dirichlet form determines (cx,y)_{x,y∈E} and (κx)_{x∈E}. Indeed, one defines, by polarization, for f, g : E → R,

(1.9) E(f, g) = ¼ [E(f + g, f + g) − E(f − g, f − g)] = ½ ∑_{x,y∈E} cx,y (f(y) − f(x))(g(y) − g(x)) + ∑_{x∈E} κx f(x) g(x),
and one notes that

(1.10) E(1x, 1y) = −cx,y, for x ≠ y in E,
E(1x, 1x) = ∑_{y∈E} cx,y + κx = λx, for x ∈ E,

so that the Dirichlet form uniquely determines the weights (cx,y)_{x,y∈E} and the killing measure (κx)_{x∈E}. Observe also that by (1.3), (1.5), (1.8), (1.9), the Dirichlet form defines a positive definite quadratic form on the space F of functions from E to R, see also (1.39) below.
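The recovery of the weights and the killing measure from the Dirichlet form can be checked numerically. The sketch below uses made-up example data (a three-point path graph with unit weights and killing at one vertex) and evaluates E(·, ·) of (1.9) on indicator functions; by (1.10), the resulting matrix of values E(1x, 1y) should be diag(λ) − c:

```python
import numpy as np

# Hypothetical example data: a path graph 0-1-2 with unit weights
# and killing measure concentrated at vertex 0.
c = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
kappa = np.array([1., 0., 0.])
lam = c.sum(axis=1) + kappa          # lambda_x of (1.7)

def dirichlet(f, g):
    """E(f, g) of (1.9): (1/2) sum_xy c_xy (f(y)-f(x))(g(y)-g(x)) + sum_x kappa_x f(x) g(x)."""
    df = f[None, :] - f[:, None]     # df[x, y] = f(y) - f(x)
    dg = g[None, :] - g[:, None]
    return 0.5 * (c * df * dg).sum() + (kappa * f * g).sum()

n = len(kappa)
I = np.eye(n)
# Matrix of the form on indicator functions, cf. (1.10)
form = np.array([[dirichlet(I[x], I[y]) for y in range(n)] for x in range(n)])
```

One can then confirm that `form` equals diag(λ) − c, i.e. that the off-diagonal entries recover −cx,y and the diagonal recovers λx.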
We denote by (·, ·)λ the scalar product in L2(dλ):
(1.11) (f, g)λ = ∑_{x∈E} f(x) g(x) λx, for f, g : E → R.
The weights and the killing measure induce a sub-Markovian transition probability on E:

(1.12) px,y = cx,y / λx, for x, y ∈ E,
which is λ-reversible:
(1.13) λx px,y = λy py,x, for all x, y ∈ E.
One then extends px,y, x, y ∈ E, to a transition probability on E ∪ {∆} by setting

(1.14) px,∆ = κx / λx, for x ∈ E, and p∆,∆ = 1,

so the corresponding discrete-time Markov chain on E ∪ {∆} is absorbed in the cemetery state ∆ once it reaches ∆. We denote by
(1.15) Zn, n ≥ 0, the canonical discrete Markov chain on the space of discrete trajectories in E ∪ {∆}, which after finitely many steps reaches ∆ and from then on remains at ∆,
and by
(1.16) Px the law of the chain starting from x ∈ E ∪ {∆}.
We will attach to the Dirichlet form (1.8) (or, equivalently, to the weights and the killing measure) two continuous-time Markov chains on E ∪ {∆}, which are time changes of each other, with discrete skeleton corresponding to Zn, n ≥ 0. The first chain X. will have unit jump rate, whereas the second chain X. (defined in Section 1.6) will have a variable jump rate governed by λ.
1.2 The Markov chain X. (with jump rate 1)
We introduce in this section the continuous-time Markov chain on E ∪ {∆} (absorbed in the cemetery state ∆), with discrete skeleton described by Zn, n ≥ 0, and exponential holding times of parameter 1. We also bring into play some of the natural objects attached to this Markov chain.
The canonical space DE for this Markov chain consists of right-continuous functions with values in E ∪ {∆}, with finitely many jumps, which after some time enter ∆ and from then on remain equal to ∆. We denote by
(1.17) Xt, t ≥ 0, the canonical process on DE,
θt, t ≥ 0, the canonical shift on DE: θt(w)(·) = w(· + t), for w ∈ DE,
Px the law on DE of the Markov chain starting at x ∈ E ∪ {∆}.
Remark 1.1. Whenever convenient we will tacitly enlarge the canonical space DE and work with a probability space on which (under Px) we can simultaneously consider the discrete Markov chain Zn, n ≥ 0, with starting point a.s. equal to x, and an independent sequence of positive variables Tn, n ≥ 1, the "jump times", increasing to infinity, with increments Tn+1 − Tn, n ≥ 0, i.i.d. exponential with parameter 1 (with the convention T0 = 0). The continuous-time chain Xt, t ≥ 0, will then be expressed as
Xt = Zn, for Tn ≤ t < Tn+1, n ≥ 0.
Of course, once the discrete-time chain reaches the cemetery state ∆, the subsequent "jump times" Tn are only fictitious "jumps" of the continuous-time chain.
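The construction in Remark 1.1 translates directly into a simulation: sample the discrete skeleton Zn from the transition probabilities (1.12), (1.14), attach i.i.d. exponential(1) holding times, and read off Xt. A minimal sketch, on made-up example data (a three-point path with killing at one vertex; the cemetery ∆ is encoded as None):

```python
import random

# Hypothetical example data: path graph 0-1-2, unit weights, killing at vertex 0.
E = [0, 1, 2]
c = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0, (2, 1): 1.0}
kappa = {0: 1.0, 1: 0.0, 2: 0.0}
lam = {x: sum(c.get((x, y), 0.0) for y in E) + kappa[x] for x in E}
CEMETERY = None                                  # stands in for the state Delta

def step(x, rng):
    """One step of the skeleton Z_n: jump to y w.p. c_xy/lambda_x, die w.p. kappa_x/lambda_x."""
    u = rng.random() * lam[x]
    for y in E:
        u -= c.get((x, y), 0.0)
        if u < 0:
            return y
    return CEMETERY

def trajectory(x, rng):
    """Pairs (T_n, Z_n) of jump times and skeleton states (Remark 1.1), until absorption."""
    t, path = 0.0, [(0.0, x)]
    while path[-1][1] is not CEMETERY:
        t += rng.expovariate(1.0)                # exponential(1) holding time
        path.append((t, step(path[-1][1], rng)))
    return path

path = trajectory(2, random.Random(0))
```

The recorded pairs (Tn, Zn) determine Xt = Zn for Tn ≤ t < Tn+1; since the killing measure does not vanish and the graph is connected, the simulated chain is absorbed after finitely many steps almost surely.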
Examples:
1) Simple random walk on the discrete torus killed at a constant rate
E = (Z/NZ)^d, where N > 1, d ≥ 1,

endowed with the graph structure where x, y are neighbors if exactly one of their coordinates differs by ±1 and the other coordinates are equal. We pick

cx,y = 1{x, y are neighbors}, x, y ∈ E,
κx = κ > 0.

So Xt, t ≥ 0, is the simple random walk on (Z/NZ)^d with exponential holding times of parameter 1, killed at each step with probability κ/(2d + κ) when N > 2, and with probability κ/(d + κ) when N = 2.
2) Simple random walk on Z^d killed outside a finite connected subset of Z^d, that is:

E is a finite connected subset of Z^d, d ≥ 1,

cx,y = 1{|x − y| = 1}, for x, y ∈ E,
κx = ∑_{y∈Z^d\E} 1{|x − y| = 1}, for x ∈ E,
Fig. 1.1: A finite connected subset E of Z^d; an interior point x′ has κx′ = 0, while a point x with two neighbors outside E has κx = 2.
Xt, t ≥ 0, when starting in x ∈ E, corresponds to the simple random walk in Z^d with exponential holding times of parameter 1, killed at the first time it exits E.
Our next step is to introduce some natural objects attached to the Markov chain X., such as the transition semi-group and the Green function.
Transition semi-group and transition density:
Unless otherwise specified, we will tacitly view real-valued functions on E as functions on E ∪ {∆} which vanish at the point ∆.
The sub-Markovian transition semi-group of the chain Xt, t ≥ 0, on E is defined for t ≥ 0, f : E → R, by

(1.18) Rt f(x) = Ex[f(Xt)] = ∑_{n≥0} e^{−t} (t^n/n!) Ex[f(Zn)] = ∑_{n≥0} e^{−t} (t^n/n!) P^n f(x) = e^{t(P−I)} f(x), for x ∈ E,

where I denotes the identity map on R^E, and for f : E → R, x ∈ E,

(1.19) Pf(x) = ∑_{y∈E} px,y f(y) (1.15)= Ex[f(Z1)].
As a result of (1.13) and (1.18),

(1.20) P and Rt (for any t ≥ 0) are bounded self-adjoint operators on L²(dλ),

(1.21) Rt+s = Rt Rs, for t, s ≥ 0 (semi-group property).
We then introduce the transition density
(1.22) rt(x, y) = (Rt 1y)(x) (1/λy), for t ≥ 0, x, y ∈ E.
It follows from the self-adjointness of Rt, cf. (1.20), that
(1.23) rt(x, y) = rt(y, x), for t ≥ 0, x, y ∈ E (symmetry)
and from the semi-group property, cf. (1.21), that for t, s ≥ 0, x, y ∈ E,
(1.24) rt+s(x, y) = ∑_{z∈E} rt(x, z) rs(z, y) λz (Chapman-Kolmogorov equations).
Moreover due to (1.3), (1.12), (1.18), we see that
(1.25) rt(x, y) > 0, for t > 0, x, y ∈ E.
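Since E is finite, P is a matrix and the Poisson series in (1.18) can be summed directly, so that (1.23) - (1.25) become checkable statements. A sketch on made-up example data (a three-point path with killing at one vertex):

```python
import math
import numpy as np

# Hypothetical example data: path 0-1-2, unit weights, killing at vertex 0.
c = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
kappa = np.array([1., 0., 0.])
lam = c.sum(axis=1) + kappa
P = c / lam[:, None]                      # sub-Markovian kernel (1.12)

def R(t, nterms=80):
    """R_t = e^{t(P-I)} summed as the Poisson series of (1.18)."""
    out, Pn = np.zeros_like(P), np.eye(len(P))
    for n in range(nterms):
        out += math.exp(-t) * t**n / math.factorial(n) * Pn
        Pn = Pn @ P
    return out

t, s = 0.7, 1.3
rt = R(t) / lam[None, :]                  # r_t(x, y) = (R_t 1_y)(x) / lambda_y, cf. (1.22)
rs = R(s) / lam[None, :]
rts = R(t + s) / lam[None, :]
```

One can then verify the symmetry (1.23), the Chapman-Kolmogorov equations (1.24) in the matrix form r_{t+s} = r_t diag(λ) r_s, and the positivity (1.25).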
Green function:
We define the Green function (or Green density):

(1.26) g(x, y) = ∫_0^∞ rt(x, y) dt (1.18),(1.22),Fubini= Ex[∫_0^∞ 1{Xt = y} dt] (1/λy), for x, y ∈ E.
Lemma 1.2.
(1.27) g(x, y) ∈ (0,∞) is a symmetric function on E ×E.
Proof. By (1.23), (1.25) we see that g(·, ·) is positive and symmetric. We now prove that it is finite. By (1.1), (1.3), (1.5) we see that for some N ≥ 0 and ε > 0,

(1.28) inf_{x∈E} Px[Zn = ∆, for some n ≤ N] ≥ ε > 0.

As a result of the simple Markov property at times which are multiples of N, we find that

(P^{kN} 1E)(x) = Px[Zn ≠ ∆, for 0 ≤ n ≤ kN] ≤ (1 − ε)^k, for k ≥ 1,

using the simple Markov property and (1.28). It follows by a straightforward interpolation that with suitable c, c′ > 0,

(1.29) sup_{x∈E} (P^n 1E)(x) ≤ c e^{−c′n}, for n ≥ 0.

As a result, inserting this bound in the last line of (1.18) gives:

(1.30) sup_{x∈E} (Rt 1E)(x) ≤ c e^{−t} ∑_{n≥0} (t^n/n!) e^{−c′n} = c exp{−t(1 − e^{−c′})},

so that

(1.31) g(x, y) ≤ (1/λy) ∫_0^∞ (Rt 1E)(x) dt ≤ (c/λy) · 1/(1 − e^{−c′}) ≤ c′′ < ∞,

whence (1.27).
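Lemma 1.2 can also be confirmed by linear algebra: anticipating (1.37) below, g is the matrix inverse of diag(λ) − c. A sketch on made-up example data, which also checks the identity Gκ = 1 of (1.40) below:

```python
import numpy as np

# Hypothetical example data: path 0-1-2, unit weights, killing at vertex 0.
c = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
kappa = np.array([1., 0., 0.])
lam = c.sum(axis=1) + kappa

# g = (-L)^{-1} with -L = diag(lambda) - c, cf. (1.37) below.
g = np.linalg.inv(np.diag(lam) - c)
```

The computed matrix is symmetric with positive entries, in line with (1.27), and g @ kappa is the constant function 1.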
1.3 Some potential theory
In this section we introduce some natural objects from potential theory, such as the equilibrium measure, the equilibrium potential, and the capacity of a subset of E. We also provide two variational characterizations for the capacity. We then describe the orthogonal complement under the Dirichlet form of the space of functions vanishing on a subset K of E. This also naturally leads us to the notion of trace form (and network reduction).
The Green function gives rise to the potential operators
(1.32) Qf(x) = ∑_{y∈E} g(x, y) f(y) λy, for f : E → R (a function),
the potential of the function f , and
(1.33) Gν(x) = ∑_{y∈E} g(x, y) νy, for ν : E → R (a measure),

the potential of the measure ν. We also write the duality bracket (between functions and measures on E):

(1.34) ⟨ν, f⟩ = ∑_x νx f(x), for f : E → R, ν : E → R.
In the next proposition we collect several useful properties of the Green function and the Dirichlet form.
Proposition 1.3.

(1.35) E(ν, µ) def= ⟨ν, Gµ⟩ = ∑_{x,y∈E} νx g(x, y) µy, for ν, µ : E → R, defines a positive definite, symmetric bilinear form.

(1.36) Q = (I − P)^{−1} (see (1.19), (1.32) for notation).

(1.37) G = (−L)^{−1}, where Lf(x) = ∑_{y∈E} cx,y f(y) − λx f(x), for f : E → R.

(1.38) E(Gν, f) = ⟨ν, f⟩, for ν : E → R and f : E → R.

(1.39) ∃ ρ > 0, such that E(f, f) ≥ ρ ‖f‖²_{L²(dλ)}, for all f : E → R.

(1.40) Gκ = 1 (and the killing measure κ is also called the equilibrium measure of E).
Proof.
• (1.35): One can give a direct proof based on (1.23) - (1.26), but we will instead derive (1.35) with the help of (1.37) - (1.39). The bilinear form in (1.35) is symmetric by (1.27). Moreover, for ν : E → R,

0 ≤ E(Gν, Gν) (1.38)= ⟨ν, Gν⟩ = E(ν, ν) (the energy of the measure ν).

By (1.39), 0 = E(Gν, Gν) ⟹ Gν = 0, and by (1.37) it follows that ν = (−L)Gν = 0. This proves (1.35) (assuming (1.37) - (1.39)).
• (1.36): By (1.29):

∫_0^∞ ∑_{n≥0} e^{−t} (t^n/n!) |P^n f(x)| dt ≤ c ‖f‖_∞ ∫_0^∞ e^{−t} ∑_{n≥0} (t^n/n!) e^{−c′n} dt (1.30)= c ‖f‖_∞ ∫_0^∞ e^{−t(1−e^{−c′})} dt < ∞.

By Lebesgue's dominated convergence theorem, keeping in mind (1.18), (1.26),

Qf(x) (1.32)= ∫_0^∞ Rt f(x) dt (1.18)= ∫_0^∞ ∑_{n≥0} e^{−t} (t^n/n!) P^n f(x) dt = ∑_{n≥0} (∫_0^∞ e^{−t} (t^n/n!) dt) P^n f(x) = ∑_{n≥0} P^n f(x) (1.29)= (I − P)^{−1} f(x)

(1 is not in the spectrum of P by (1.29)). This proves (1.36).
• (1.37): Note that in view of (1.19),

(1.41) −L = λ(I − P) (composition of (I − P) and multiplication by λ·, i.e. (λf)(x) = λx f(x), for f : E → R and x ∈ E).

Hence −L is invertible and

(−L)^{−1} = (I − P)^{−1} λ^{−1} (1.36)= Q λ^{−1} (1.32),(1.33)= G.

This proves (1.37).
• (1.38): By (1.10) we find that

(1.42) E(f, g) = ∑_{x,y∈E} f(x) g(y) E(1x, 1y) (1.10)= ∑_{x∈E} λx f(x) g(x) − ∑_{x,y∈E} cx,y f(x) g(y) = ⟨f, −Lg⟩ (1.2)= ⟨−Lf, g⟩.

As a result

E(Gν, f) = ⟨−LGν, f⟩ (1.37)= ⟨ν, f⟩, whence (1.38).
• (1.39): Note that for x ∈ E, f : E → R,

f(x) = ⟨1x, f⟩ (1.38)= E(G1x, f).

Now E(·, ·) is a non-negative symmetric bilinear form. We can thus apply the Cauchy-Schwarz inequality to find that

f(x)² ≤ E(G1x, G1x) E(f, f) (1.38)= ⟨1x, G1x⟩ E(f, f) = g(x, x) E(f, f).

As a result we find that

(1.43) ‖f‖²_{L²(dλ)} = ∑_{x∈E} f(x)² λx ≤ (∑_{x∈E} g(x, x) λx) E(f, f),

and (1.39) follows with ρ^{−1} = ∑_{x∈E} g(x, x) λx.
• (1.40): By (1.39), E(·, ·) is positive definite, and by (1.9),

E(1, f) (1.9)= ∑_x κx f(x) = ⟨κ, f⟩ (1.38)= E(Gκ, f), for all f : E → R.

It thus follows that 1 = Gκ, whence (1.40).
Remark 1.4. Note that we have shown in (1.42) that for all f, g: E → R,
(1.44) E(f, g) = 〈−Lf, g〉 = 〈f,−Lg〉.
Since −L = λ(I − P ), we also find, see (1.11) for notation,
(1.44’) E(f, g) = ((I − P )f, g)λ = (f, (I − P )g)λ.
As a next step we introduce some important random times for the continuous-time Markov chain Xt, t ≥ 0. Given K ⊆ E, we define

(1.45)
HK = inf{t ≥ 0; Xt ∈ K}, the entrance time in K,
H̃K = inf{t > 0; Xt ∈ K and there exists s ∈ (0, t) with Xs ≠ X0}, the hitting time of K,
TK = inf{t ≥ 0; Xt ∉ K}, the exit time from K,
LK = sup{t > 0; Xt ∈ K}, the time of last visit to K

(with the convention sup ∅ = 0, inf ∅ = ∞). HK, H̃K, TK are stopping times for the canonical filtration (Ft)_{t≥0} on DE (i.e. each is a [0,∞]-valued map T on DE, see above (1.17), such that {T ≤ t} ∈ Ft def= σ(Xs, 0 ≤ s ≤ t), for each t ≥ 0). Of course LK is in general not a stopping time.
Given U ⊆ E, the transition density killed outside U is

(1.46) rt,U(x, y) = Px[Xt = y, t < TU] (1/λy) ≤ rt(x, y), for t ≥ 0, x, y ∈ E,

and the Green function killed outside U is

(1.47) gU(x, y) = ∫_0^∞ rt,U(x, y) dt ≤ g(x, y), for x, y ∈ E.
Remark 1.5.
1) When U is a connected (non-empty) subgraph of the graph in (1.3), rt,U(x, y), t ≥ 0, x, y ∈ U, and gU(x, y), x, y ∈ U, simply correspond to the transition density and the Green function in (1.22), (1.26), when one chooses on U

- the weights cx,y, x, y ∈ U (i.e. the restriction to U × U of the weights on E),

- the killing measure κ̃x = κx + ∑_{y∈E\U} cx,y, x ∈ U.

2) When U is not connected the above remark applies to each connected component of U, and rt,U(x, y) and gU(x, y) vanish when x, y belong to different connected components of U.
Proposition 1.6. (U ⊆ E, A = E\U)

(1.48) gU(x, y) = gU(y, x), for x, y ∈ E.

(1.49) g(x, y) = gU(x, y) + Ex[HA < ∞, g(X_{HA}, y)], for x, y ∈ E.

(1.50) Ex[HA < ∞, g(X_{HA}, y)] = Ey[HA < ∞, g(X_{HA}, x)], for x, y ∈ E

(Hunt's switching identity).
Proof.
• (1.48): This is a direct consequence of the above remark and (1.27).
• (1.49):

g(x, y) (1.26)= Ex[∫_0^∞ 1{Xt = y} dt] (1/λy)
= Ex[∫_0^∞ 1{Xt = y, t < TU} dt] (1/λy) + Ex[TU < ∞, ∫_{TU}^∞ 1{Xt = y} dt] (1/λy)
Fubini,(1.46),(1.47)= gU(x, y) + Ex[TU < ∞, (∫_0^∞ 1{Xt = y} dt) ∘ θ_{TU}] (1/λy)
strong Markov= gU(x, y) + Ex[TU < ∞, E_{X_{TU}}[∫_0^∞ 1{Xt = y} dt]] (1/λy)
TU=HA,(1.26)= gU(x, y) + Ex[HA < ∞, g(X_{HA}, y)].

This proves (1.49).
• (1.50): This follows from (1.48), (1.49), and the fact that g(·, ·) is symmetric, cf. (1.27).
Example:
Consider x0 ∈ E. By (1.49) we find that for x ∈ E (with A = {x0}, U = E\{x0}):

g(x, x0) = 0 + Px[Hx0 < ∞] g(x0, x0),

writing Hx0 for H{x0}, so that

(1.51) Px[Hx0 < ∞] = g(x, x0)/g(x0, x0), for x ∈ E.

A second application of (1.49) now yields (with U = E\{x0})

(1.52) gU(x, y) = g(x, y) − g(x, x0) g(x0, y)/g(x0, x0), for x, y ∈ E.
Given A ⊆ E, we introduce the equilibrium measure of A:

(1.53) eA(x) = Px[H̃A = ∞] 1A(x) λx, x ∈ E,

where H̃A is the hitting time of A, cf. (1.45). Its total mass is called the capacity of A (or the conductance of A):

(1.54) cap(A) = ∑_{x∈A} Px[H̃A = ∞] λx.
Remark 1.7. As we will see below in the case of A = E, the terminology in (1.53) is consistent with the terminology in (1.40). There is an interpretation of the weights (cx,y) and the killing measure (κx) on E as an electric network grounding E at the cemetery point ∆, which is implicit in the use of the above terms, see for instance Doyle-Snell [3].
Before turning to the next proposition, we simply recall that, given A ⊆ E, by our convention in (1.45),

{HA < ∞} = {LA > 0} = the set of trajectories that enter A.
Also given a measure ν on E, we write
(1.55) Pν = ∑_{x∈E} νx Px, and Eν for the Pν-integral (or "expectation").
Proposition 1.8. (A ⊆ E)

(1.56) Px[LA > 0, X_{LA−} = y] = g(x, y) eA(y), for x, y ∈ E

(X_{LA−} is the position of X. at the last visit to A, when LA > 0).

(1.57) hA(x) def= Px[HA < ∞] = Px[LA > 0] = GeA(x), for x ∈ E

(the equilibrium potential of A).

When A ≠ ∅,

(1.58) eA is the unique measure ν supported on A such that Gν = 1 on A.

Let A ⊆ B ⊆ E; then under PeB the entrance "distribution" in A and the last exit "distribution" of A coincide with eA:

(1.59) PeB[HA < ∞, X_{HA} = y] = PeB[LA > 0, X_{LA−} = y] = eA(y), for y ∈ E.

In particular when B = E,

(1.60) under Pκ, the entrance distribution in A and the last exit distribution of A coincide with eA.
Proof.
• (1.56): Both members vanish when y ∉ A. We thus assume y ∈ A. Using the discrete-time Markov chain Zn, n ≥ 0 (see (1.15)), we can write:

Px[LA > 0, X_{LA−} = y] = Px[⋃_{n≥0} {Zn = y, and for all k > n, Zk ∉ A}]
pairwise disjoint= ∑_{n≥0} Px[Zn = y, and for all k > n, Zk ∉ A]
Markov property= ∑_{n≥0} Px[Zn = y] Py[for all k > 0, Zk ∉ A]
Fubini,(1.45)= Ex[∑_{n≥0} 1{Zn = y}] Py[H̃A = ∞] = Ex[∫_0^∞ 1{Xt = y} dt] Py[H̃A = ∞]
(1.26),(1.53)= g(x, y) eA(y).

This proves (1.56).
• (1.57): Summing (1.56) over y ∈ A, we obtain

Px[HA < ∞] = Px[LA > 0] = ∑_{y∈A} g(x, y) eA(y) (1.33)= GeA(x), whence (1.57).
• (1.58): Note that eA is supported on A and GeA = 1 on A by (1.57). If ν is another such measure and µ = ν − eA,

⟨µ, Gµ⟩ = 0,

because Gµ = 0 on A and µ is supported on A. By (1.35) it follows that µ = 0, whence (1.58).
• (1.59), (1.60): By (1.50) (Hunt's switching identity), for y ∈ E,

EeB[HA < ∞, g(X_{HA}, y)] = Ey[HA < ∞, (GeB)(X_{HA})] (1.58),A⊆B= Py[HA < ∞].

Denoting by µ the entrance distribution of X. in A under PeB:

µx = PeB[HA < ∞, X_{HA} = x], x ∈ E,

we see by the above identity and (1.57) that Gµ(y) = GeA(y), for all y ∈ E, and by applying L to both sides, µ = eA. As for the last exit distribution of X. from A under PeB, integrating over eB in (1.56), we find:

PeB[LA > 0, X_{LA−} = y] = ∑_{x∈E} eB(x) g(x, y) eA(y) (1.57),A⊆B= eA(y), for y ∈ E.

This completes the proof of (1.59). In the special case B = E, we know by (1.40), (1.58) that eB = κ, and (1.60) follows.
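Equation (1.58) yields a concrete recipe for the equilibrium measure: solve the linear system whose matrix is the restriction of g to A × A and whose right-hand side is the constant 1, then sum the entries to get cap(A). The sketch below (made-up four-point example data) computes eA, cap(A), and the equilibrium potential hA:

```python
import numpy as np

# Hypothetical example data: path 0-1-2-3, unit weights, killing at vertex 3.
n = 4
c = np.zeros((n, n))
for x in range(n - 1):
    c[x, x + 1] = c[x + 1, x] = 1.0
kappa = np.array([0., 0., 0., 1.])
lam = c.sum(axis=1) + kappa
M = np.diag(lam) - c
g = np.linalg.inv(M)                                      # Green function, cf. (1.37)

A = [0, 2]                                                # an arbitrary subset of E
nuA = np.linalg.solve(g[np.ix_(A, A)], np.ones(len(A)))   # G nu = 1 on A, cf. (1.58)
eA = np.zeros(n)
eA[A] = nuA                                               # equilibrium measure, supported on A
capA = eA.sum()                                           # capacity (1.54)
hA = g @ eA                                               # equilibrium potential (1.57)
```

One can then check that eA is non-negative, that hA takes values in (0, 1] and equals 1 on A, that E(hA, hA) = cap(A) in line with (1.62) below, and that the same recipe applied to A = E returns κ, in line with (1.40).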
We now provide two variational problems for the capacity, in which the equilibrium measure and the equilibrium potential appear. These characterizations are, of course, strongly flavored by the previously mentioned analogy with electric networks (we refer to Remark 1.7).
Proposition 1.9. (A ⊆ E)

(1.61) cap(A) = (inf{E(ν, ν); ν probability measure supported on A})^{−1},

and when A ≠ ∅, the infimum is uniquely attained at ēA = eA / cap(A), the normalized equilibrium measure of A.

(1.62) cap(A) = inf{E(f, f); f ≥ 1 on A},

and the infimum is uniquely attained at hA, the equilibrium potential of A.
Proof.
• (1.61): When A = ∅, both members of (1.61) vanish and there is nothing to prove. We thus assume A ≠ ∅ and consider a probability measure ν supported on A. By (1.35), we have

0 ≤ E(ν, ν) = E(ν − ēA + ēA, ν − ēA + ēA) = E(ēA, ēA) + 2 E(ν − ēA, ēA) + E(ν − ēA, ν − ēA).

The last term is non-negative, and by (1.35) it only vanishes when ν = ēA. Moreover,

E(ν − ēA, ēA) = ∑_{x∈E} (νx − ēA(x)) (∑_{y∈E} g(x, y) ēA(y)) = (1 − 1) (1/cap(A)) = 0,

since by (1.58) the inner sum equals 1/cap(A) on A, and both ν and ēA are probability measures supported on A. We thus find that E(ν, ν) becomes (uniquely) minimal at ν = ēA, where it equals

E(ēA, ēA) = (1/cap(A)²) ∑_{x,y∈E} eA(x) g(x, y) eA(y) (1.58)= (1/cap(A)²) eA(A) = 1/cap(A).

This proves (1.61).
• (1.62): We consider f : E → R such that f ≥ 1 on A, and hA = GeA, so that

hA(x) = Px[HA < ∞] = 1, for x ∈ A.

We have

E(f, f) = E(f − hA + hA, f − hA + hA) = E(hA, hA) + 2 E(f − hA, hA) + E(f − hA, f − hA).

Again, the last term is non-negative and only vanishes when f = hA, see (1.39). Moreover, we have

E(f − hA, hA) = E(f − hA, GeA) (1.38)= ⟨eA, f − hA⟩ ≥ 0,

since hA = 1 on A, f ≥ 1 on A, and eA is supported on A. So the right-hand side of (1.62) equals

E(hA, hA) = E(GeA, hA) (1.38)= ⟨eA, hA⟩ = eA(A) = cap(A).

This proves (1.62).
Orthogonal decomposition, trace Dirichlet form:
We consider U ⊆ E and set K = E\U. Our aim is to describe the orthogonal complement, relative to the Dirichlet form E(·, ·), of the space of functions supported in U:

(1.63) FU = {ϕ : E → R; ϕ(x) = 0, for all x ∈ K}.
To this end we introduce the space of functions harmonic in U :
(1.64) HU = {h : E → R; Ph(x) = h(x), for all x ∈ U},
as well as the space of potentials of (signed) measures supported on K:
(1.65) GK = {f : E → R; f = Gν, for some ν supported on K}.

Recall that E(·, ·) is a positive definite quadratic form on the space F of functions from E to R (see above (1.11)).
Proposition 1.10. (orthogonal decomposition)
HU = GK .(1.66)
F = FU ⊕HU , where FU and HU are orthogonal, relative to E(·, ·).(1.67)
Proof.
• (1.66): We first show that HU ⊆ GK. Indeed, when h ∈ HU, h (1.37)= G(−L)h = Gν, where ν = −Lh is supported on K by (1.64) and (1.41). Hence HU ⊆ GK.

To prove the reverse inclusion we consider ν supported on K. Set h = Gν. By (1.37) we know that Lh = LGν = −ν, so that Lh vanishes on U. It follows from (1.41) that h ∈ HU, and (1.66) is proved. Incidentally, note that choosing A = K in (1.49), we can multiply both sides of (1.49) by νy and sum over y. The first term in the right-hand side vanishes and we then see that h = Gν satisfies

(1.68) h(x) = Ex[HK < ∞, h(X_{HK})], for x ∈ E.
• (1.67): We first note that when ϕ ∈ FU and ν is supported on K,

E(Gν, ϕ) (1.38)= ⟨ν, ϕ⟩ = 0.

So the spaces FU and HU are orthogonal under E(·, ·). In addition, given f from E to R, we can define

(1.69) h(x) = Ex[HK < ∞, f(X_{HK})], for x ∈ E,

and note that h(x) = f(x), when x ∈ K, and that, by the same argument as above,

h is harmonic in U.

If we now define ϕ = f − h, we see that ϕ vanishes on K, and hence

(1.70) f = ϕ + h, with ϕ ∈ FU and h ∈ HU,

is the orthogonal decomposition of f. This proves (1.67).
As we now explain, the restriction of the Dirichlet form to the space HU = GK, see (1.64) - (1.66), gives rise to a new Dirichlet form on the space of functions from K to R, the so-called trace form.
Given f : K → R, we also write f for the function on E that agrees with f on K and vanishes on U, when no confusion arises, and f̄ for the function

(1.71) f̄(x) = Ex[HK < ∞, f(X_{HK})], for x ∈ E,

which is the unique function on E, harmonic in U, that agrees with f on K, cf. (1.67). Indeed, the decomposition (1.67) applied to the case of the function equal to f on K and to 0 on U shows the existence of a function in HU equal to f on K. By (1.68) and (1.66), it is necessarily equal to f̄.
We then define, for f : K → R, the trace form

(1.72) E∗(f, f) = E(f̄, f̄) (1.69),(1.70)= inf{E(g, g); g : E → R coincides with f on K},

where we used in the second line the fact that when g coincides with f on K, then g = ϕ + f̄ with ϕ ∈ FU, and hence E(g, g) ≥ E(f̄, f̄) due to (1.67). We naturally extend this definition to f, g : K → R, by setting

(1.73) E∗(f, g) = E(f̄, ḡ).

It is plain that E∗ is a symmetric bilinear form on the space of functions from K to R. As we now explain, E∗ does indeed correspond to a Dirichlet form on K induced by some (uniquely determined, in view of (1.10)) non-negative weights and killing measure.
Proposition 1.11. (K ≠ ∅)

The quantities defined by

(1.74) c∗x,y = λx Px[H̃K < ∞, X_{H̃K} = y], for x ≠ y in K,
= 0, for x = y in K,

(1.75) κ∗x = λx Px[H̃K = ∞], for x ∈ K,

(1.76) λ∗x = λx (1 − Px[H̃K < ∞, X_{H̃K} = x]), for x ∈ K,

satisfy (1.2) - (1.5), (1.7), with E replaced by K (in particular c∗x,y = c∗y,x). The corresponding Dirichlet form coincides with E∗, i.e.

(1.77) E∗(f, f) = ½ ∑_{x,y∈K} c∗x,y (f(y) − f(x))² + ∑_{x∈K} κ∗x f²(x), for f : K → R.

The corresponding Green function g∗(x, y), x, y ∈ K, satisfies

(1.78) g∗(x, y) = g(x, y), for x, y ∈ K.

Proof. We first prove that
(1.79) the quantities in (1.74) - (1.76) satisfy (1.2) - (1.5), (1.7), with E replaced by K.
To this end we note that for x ≠ y in K,

(1.80) −E∗(1x, 1y) (1.73)= −E(1̄x, 1̄y) (1.67)= −E(1x, 1̄y) (1.44)= L1̄y(x) = λx ∑_{z∈E} px,z 1̄y(z) (since 1̄y(x) = 0) (1.71),Markov= λx Px[H̃K < ∞, X_{H̃K} = y] = c∗x,y.
By a similar calculation we also find that for x ∈ K,

(1.81) E∗(1x, 1x) = −L1̄x(x) = λx (1 − ∑_{z∈E} px,z 1̄x(z)) = λx (1 − Px[H̃K < ∞, X_{H̃K} = x]) = λ∗x.
We further see that

(1.82) ∑_{y∈K} c∗x,y + κ∗x (1.74),(1.75)= λx (1 − Px[H̃K < ∞, X_{H̃K} = x]) (1.76)= λ∗x.
From (1.80) we deduce the symmetry of c∗x,y. These are non-negative weights on K. Moreover, when x0, y0 are in K, we can find a nearest-neighbor path in E from x0 to y0, and looking at the successive visits of K by this path, taking (1.74) into account, we see that K, endowed with the edges {x, y} for which c∗x,y > 0, is a connected graph.
Further we know that for all x in E, Px-a.s., the continuous-time chain on E reaches the cemetery state ∆ after a finite time. As a result Py[H̃K = ∞] > 0 for at least one y in K, since otherwise the chain starting from any x in K would a.s. never reach ∆. By (1.75) we thus see that κ∗ does not vanish everywhere on K. In addition (1.7) holds by (1.82). We have thus proved (1.79).
• (1.77):

Expanding the square in the first sum on the right-hand side of (1.77), we see, using the symmetry of c*_{x,y}, (1.82), and the second line of (1.74), that the right-hand side of (1.77) equals

Σ_{x∈K} λ*_x f²(x) − Σ_{x≠y in K} c*_{x,y} f(x) f(y)
 =((1.80),(1.81)) Σ_{x∈K} E*(1_x, 1_x) f²(x) + Σ_{x≠y in K} E*(1_x, 1_y) f(x) f(y) = E*(f, f),

and this proves (1.77).
• (1.78):

Consider x ∈ K, and let ψ_x be the restriction to K of g(x, ·). By (1.66) we see that g(x, ·) = G1_x(·) = h_{ψ_x}(·), and therefore for any y ∈ K we have

E*(ψ_x, 1_y) =(1.73) E(G1_x, h_{1_y}) =((1.66),(1.67)) E(G1_x, 1_y) =(1.38) 1{x = y} =(1.38) E*(ψ*_x, 1_y), if ψ*_x(·) = g*(x, ·).

It follows that ψ_x = ψ*_x for any x in K, and this proves (1.78).
Remark 1.12.
1) The trace form, with its expressions (1.72), (1.77), is intimately related to the notion of network reduction, or electrical network equivalence, see [1], p. 56.

2) When K ⊆ K′ ⊆ E are non-empty subsets of E, the trace form on K of the trace form on K′ of E coincides with the trace form on K of E. Indeed, this follows for instance from (1.78) and the fact that the Green function determines the Dirichlet form, see (1.37), (1.10). This feature is referred to as the “tower property” of traces.
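The network-reduction point of view in 1) can be checked numerically. Below is a small Monte Carlo sketch (all numbers are illustrative and not from the notes): on the path graph {0, 1, 2} with K = {0, 2}, the weight c*_{0,2} of (1.74) is estimated by simulating the discrete skeleton, and compared with the series-reduction value c_{0,1} c_{1,2} / λ_1 obtained by eliminating the middle node.

```python
import random

# Path graph E = {0, 1, 2}: unit conductances c(0,1) = c(1,2) = 1 and
# killing measure kappa = (0.5, 0.0, 0.5); lambda_x = sum_y c(x,y) + kappa_x.
E = [0, 1, 2]
c = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0, (2, 1): 1.0}
kappa = {0: 0.5, 1: 0.0, 2: 0.5}
lam = {x: sum(c.get((x, y), 0.0) for y in E) + kappa[x] for x in E}

def step(x, rng):
    """One move of the discrete skeleton Z_n; None plays the cemetery."""
    u = rng.random() * lam[x]
    for y in E:
        u -= c.get((x, y), 0.0)
        if u < 0.0:
            return y
    return None  # killed

def c_star_mc(x, y, K, n, seed=0):
    """Estimate c*_{x,y} = lambda_x P_x[chain enters K at y after its
    first jump], in the spirit of (1.74)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z = step(x, rng)  # force one jump first
        while z is not None and z not in K:
            z = step(z, rng)
        hits += (z == y)
    return lam[x] * hits / n

estimate = c_star_mc(0, 2, K={0, 2}, n=100_000)
exact = c[(0, 1)] * c[(1, 2)] / lam[1]  # network reduction: eliminate node 1
print(estimate, exact)
```

Both values come out close to 0.5, illustrating that eliminating the interior node produces the effective conductance of (1.74).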
1.4 Feynman-Kac formula
Given a function V : E → R, we can also view V as a multiplication operator:
(1.83) (V f)(x) = V (x) f(x), for f : E → R.
In this short section we recall a celebrated probabilistic representation formula for the operator e^{t(P−I+V)}, for t ≥ 0. We recall the convention stated above (1.18).
Theorem 1.13. (Feynman-Kac formula)
For V, f : E → R, t ≥ 0, one has

(1.84) E_x[f(X_t) exp{∫_0^t V(X_s) ds}] = (e^{t(P−I+V)} f)(x), for x ∈ E,

(the case V = 0 corresponds to (1.18)).
Proof. We denote by S_t f(x) the left-hand side of (1.84). By the Markov property we see that for t, s ≥ 0,

S_{t+s} f(x) = E_x[f(X_{t+s}) exp{∫_0^{t+s} V(X_u) du}]
 = E_x[exp{∫_0^t V(X_u) du} (exp{∫_0^s V(X_u) du} ∘ θ_t)(f(X_s) ∘ θ_t)]
 = E_x[exp{∫_0^t V(X_u) du} E_{X_t}[exp{∫_0^s V(X_u) du} f(X_s)]]
 = E_x[exp{∫_0^t V(X_u) du} S_s f(X_t)] = S_t(S_s f)(x) = (S_t S_s) f(x).
In other words, S_t, t ≥ 0, has the semi-group property

(1.85) S_{t+s} = S_t S_s, for t, s ≥ 0.

Moreover, observe that

(1/t)(S_t f − f)(x) = (1/t) E_x[f(X_t) exp{∫_0^t V(X_s) ds} − f(X_0)]
 = (1/t) E_x[f(X_t) − f(X_0)] + E_x[f(X_t) (1/t) ∫_0^t V(X_s) e^{∫_0^s V(X_u) du} ds],

and as t → 0,

(1/t) E_x[f(X_t) − f(X_0)] → (P − I) f(x), by (1.18),

whereas by dominated convergence

E_x[f(X_t) (1/t) ∫_0^t V(X_s) e^{∫_0^s V(X_u) du} ds] → E_x[f(X_0) V(X_0)] = V f(x).

So we see that

(1.86) (1/t)(S_t f − f)(x) → (P − I + V) f(x), as t → 0.
Then, considering S_{t+h} f(x) − S_t f(x) = (S_h − I) S_t f(x), with h > 0 small, as well as (when t > 0 and 0 < h < t)

S_{t−h} f(x) − S_t f(x) = −(S_h − I) S_{t−h} f(x),

one sees (using in the second case that sup_{u≤t} |S_u f(x)| ≤ e^{t‖V‖_∞} ‖f‖_∞) that the function t ≥ 0 → S_t f(x) is continuous.
Now dividing by h and letting h → 0, we find that

(1.87) t ≥ 0 → S_t f(x) is continuously differentiable with derivative (P − I + V) S_t f(x).

It now follows that the function

s ∈ [0, t] → F(s) = e^{(t−s)(P−I+V)} S_s f(x)

is continuously differentiable on [0, t], with derivative

F′(s) = −e^{(t−s)(P−I+V)} (P − I + V) S_s f(x) + e^{(t−s)(P−I+V)} (P − I + V) S_s f(x) = 0.

We thus find that F(0) = F(t), so that e^{t(P−I+V)} f(x) = S_t f(x). This proves (1.84).
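Formula (1.84) lends itself to a numerical illustration (a sketch with made-up data, not part of the notes): on a two-state space with a sub-stochastic P, the right-hand side is computed from the Taylor series of the matrix exponential and compared with a Monte Carlo evaluation of the left-hand side.

```python
import math, random

# Two-state chain on E = {0, 1}: jump rate 1, sub-stochastic P (the missing
# row mass goes to the cemetery), potential V; all numbers illustrative.
P = [[0.4, 0.4], [0.5, 0.3]]
V = [0.3, -0.2]
f = [1.0, 2.0]
t = 0.7

def semigroup(t, f):
    """(e^{t(P - I + V)} f) via the Taylor series of the 2x2 matrix."""
    A = [[P[i][j] - (i == j) + (V[i] if i == j else 0.0) for j in range(2)]
         for i in range(2)]
    out, term = list(f), list(f)
    for n in range(1, 60):
        term = [sum(A[i][j] * term[j] for j in range(2)) * t / n for i in range(2)]
        out = [out[i] + term[i] for i in range(2)]
    return out

def feynman_kac_mc(x, n, seed=1):
    """Monte Carlo for E_x[f(X_t) exp(int_0^t V(X_s) ds)]; paths killed
    before t contribute 0 since f(Delta) = 0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s, state, integral = 0.0, x, 0.0
        while True:
            hold = rng.expovariate(1.0)  # unit jump rate
            if s + hold >= t:
                integral += V[state] * (t - s)
                total += f[state] * math.exp(integral)
                break
            integral += V[state] * hold
            s += hold
            u = rng.random()
            if u < P[state][0]:
                state = 0
            elif u < P[state][0] + P[state][1]:
                state = 1
            else:
                break  # absorbed in the cemetery
    return total / n

exact = semigroup(t, f)[0]
estimate = feynman_kac_mc(0, 200_000)
print(exact, estimate)
```

The two printed values agree up to Monte Carlo error, matching the statement of Theorem 1.13.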
1.5 Local times
In this short section we define the local time of the Markov chain X_t, t ≥ 0, and discuss some of its basic properties.

The local time of X. at site x ∈ E and time t ≥ 0 is defined as

(1.88) L^x_t = ∫_0^t 1{X_s = x} ds (1/λ_x).

Note that the normalization is different from (0.2) (we have not yet introduced X̄_t, t ≥ 0). We extend (1.88) to the case x = ∆ (cemetery point) with the convention

(1.89) λ_∆ = 1, L^∆_t = ∫_0^t 1{X_s = ∆} ds, for t ≥ 0.
By direct inspection of (1.88) we see that for x ∈ E, t ∈ [0,∞) → L^x_t ∈ [0,∞) is a continuous non-decreasing function with a finite limit L^x_∞ (because X_t = ∆ for t large enough). We record in the next proposition a few simple properties of the local time.
Proposition 1.14.
(1.90) E_x[L^y_∞] = g(x, y), for x, y ∈ E.

(1.91) E_x[L^y_{T_U}] = g_U(x, y), for x, y ∈ E, U ⊆ E.

(1.92) Σ_{x∈E∪{∆}} V(x) L^x_t = ∫_0^t (V/λ)(X_s) ds, for t ≥ 0, V : E ∪ {∆} → R.

(1.93) L^x_t ∘ θ_s + L^x_s = L^x_{t+s}, for x ∈ E, s ≥ 0 (additive functional property).
Proof.
• (1.90): E_x[L^y_∞] = E_x[∫_0^∞ 1{X_t = y} dt / λ_y] =(1.26) g(x, y).

• (1.91): Analogous argument to (1.90), cf. (1.45), (1.47), and Remark 1.5.

• (1.92):

Σ_{x∈E∪{∆}} V(x) L^x_t = Σ_{x∈E∪{∆}} V(x) ∫_0^t 1{X_s = x} ds / λ_x
 = ∫_0^t Σ_{x∈E∪{∆}} (V/λ)(x) 1{X_s = x} ds = ∫_0^t (V/λ)(X_s) ds.

• (1.93): Note that ∫_0^{t+s} V(X_u) du = (∫_0^t V(X_u) du) ∘ θ_s + ∫_0^s V(X_u) du, and apply this identity with V(·) = (1/λ_x) 1_x(·).
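Identity (1.90) can be illustrated numerically. In the sketch below (illustrative weights, not from the notes), g is computed as (I − P)^{−1} diag(1/λ) on a two-point space, and E_0[L^1_∞] is estimated by simulating the unit-jump-rate chain.

```python
import random

# Two-state chain: conductance c(0,1) = 1, killing kappa = (0.5, 1.0),
# so lambda = (1.5, 2.0) and p_{x,y} = c_{x,y} / lambda_x (illustrative numbers).
lam = [1.5, 2.0]
P = [[0.0, 1.0 / 1.5], [0.5, 0.0]]

def green(x, y):
    """g(x, y) = ((I - P)^{-1})(x, y) / lambda_y for this 2x2 example."""
    det = 1.0 - P[0][1] * P[1][0]
    inv = [[1.0 / det, P[0][1] / det], [P[1][0] / det, 1.0 / det]]
    return inv[x][y] / lam[y]

def local_time_mc(x, y, n, seed=2):
    """Monte Carlo for E_x[L^y_inf], with L^y as in (1.88): time spent at y
    by the unit-jump-rate chain, divided by lambda_y."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        state, L = x, 0.0
        while state is not None:
            hold = rng.expovariate(1.0)  # unit-rate holding time
            if state == y:
                L += hold / lam[y]
            u, old = rng.random(), state
            if u < P[old][0]:
                state = 0
            elif u < P[old][0] + P[old][1]:
                state = 1
            else:
                state = None  # killed
        total += L
    return total / n

est = local_time_mc(0, 1, 100_000)
print(green(0, 1), est)
```

Both values come out close to 0.5, in agreement with (1.90).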
1.6 The Markov chain X̄. (with variable jump rate)

Using a time change, we construct in this section the Markov chain X̄. with the same discrete skeleton Z_n, n ≥ 0, as X., but with variable jump rate λ_x, x ∈ E ∪ {∆}. We describe the transition semi-group attached to X̄., relate the local times of X. and X̄., and briefly discuss the Feynman-Kac formula for X̄.. As a last topic, we explain how the trace process of X̄. on a subset K of E is related to the trace Dirichlet form introduced in Section 1.3.
We define

(1.94) L_t = Σ_{x∈E∪{∆}} L^x_t = ∫_0^t λ_{X_s}^{−1} ds, t ≥ 0,

so that t ∈ R_+ → L_t ∈ R_+ is a continuous, strictly increasing, piecewise differentiable function, tending to ∞. In particular it is an increasing bijection of R_+, and using the formula for the derivative of the inverse, one can write for the inverse function of L.:

(1.95) τ_u = inf{t ≥ 0; L_t ≥ u} = ∫_0^u λ_{X_{τ_v}} dv = ∫_0^u λ_{X̄_v} dv,
where we have introduced the time-changed process (with values in E ∪ {∆})

(1.96) X̄_u := X_{τ_u}, for u ≥ 0,

(the path of X̄. thus belongs to D_E, cf. above (1.17)).

We also introduce the local times of X̄. (note that the normalization is different from (1.88), but in agreement with (0.2)):

(1.97) L̄^x_u := ∫_0^u 1{X̄_v = x} dv, for u ≥ 0, x ∈ E ∪ {∆}.
Proposition 1.15. X̄_u, u ≥ 0, is a Markov chain with cemetery state ∆ and sub-Markovian transition semi-group on E:

(1.98) R_t f(x) := E_x[f(X̄_t)] = e^{tL} f(x), for t ≥ 0, x ∈ E, f : E → R,

(i.e. X̄. has the jump rate λ_x at x and jumps according to p_{x,y} in (1.12), (1.14)).

Moreover one has the identities:

(1.99) X_t = X̄_{L_t}, for t ≥ 0 (“time t for X. is time L_t for X̄.”),

and

(1.100) L^x_t = L̄^x_{L_t}, for x ∈ E ∪ {∆}, t ≥ 0,

(1.101) L^x_∞ = L̄^x_∞, for x ∈ E.
24 1 GENERALITIES
Proof.
• Markov property of X̄. (sketch):

Note that the τ_u, u ≥ 0, are F_t = σ(X_s, 0 ≤ s ≤ t)-stopping times, and using the fact that L_t, t ≥ 0, satisfies the additive functional property

L_{t+s} = L_s + L_t ∘ θ_s, for t, s ≥ 0,

we see that L_{τ_u ∘ θ_{τ_v} + τ_v} = u + v, and taking inverses,

(1.102) τ_{u+v} = τ_u ∘ θ_{τ_v} + τ_v,

and

X̄_{u+v} =((1.96),(1.102)) X_{τ_u ∘ θ_{τ_v} + τ_v} = X_{τ_u} ∘ θ_{τ_v} = X̄_u ∘ θ_{τ_v}.

(Note incidentally that θ̄_u := θ_{τ_u}, for u ≥ 0, satisfies the semi-flow property θ̄_{u+v} = θ̄_u ∘ θ̄_v, for u, v ≥ 0, and, in this notation, the above equality reads X̄_{u+v} = X̄_u ∘ θ̄_v.)

It now follows that for u, v ≥ 0 and B ∈ F_{τ_v}, one has, for f : E ∪ {∆} → R,

E_x[f(X̄_{u+v}) 1_B] = E_x[f(X_{τ_u} ∘ θ_{τ_v}) 1_B] =(strong Markov for X.) E_x[E_{X_{τ_v}}[f(X_{τ_u})] 1_B] = E_x[E_{X̄_v}[f(X̄_u)] 1_B].

Since for v′ ≤ v, the X̄_{v′} = X_{τ_{v′}} are F_{τ_v}-measurable (see for instance Proposition 2.18, p. 9 of [12]), this proves the Markov property.

• (1.98):

From the Markov property one deduces that R_t, t ≥ 0, is a sub-Markovian semi-group. Now for f : E → R, x ∈ E, u > 0, one has

(1/u)(R_u f − f)(x) = (1/u) E_x[f(X̄_u) − f(X̄_0)] = (1/u) E_x[f(X_{τ_u}) − f(X_0)].

By (1.95) we see that τ_u ≤ cu, for u ≥ 0. We also know that the probability that X. jumps at least twice in [0, t] is o(t) as t → 0. So as u → 0,

(1/u) E_x[f(X_{τ_u}) − f(X_0)] = (1/u) E_x[f(X_{τ_u}) − f(X_0), X. has exactly one jump in [0, τ_u]] + o(1)
 = E_x[f(Z_1) − f(X_0)] (1/u) P_x[L_{T_1} ≤ u] + o′(1), with T_1 the first jump time of X.,
 =(1.94) (Pf − f)(x) (1/u) P_x[T_1/λ_x ≤ u] + o′(1) → λ_x (Pf − f)(x) =(1.37) L f(x), as u → 0.

So we have shown that:

(1.103) (1/u)(R_u f − f)(x) → L f(x), as u → 0.
Just as below (1.86), one now shows the statement corresponding to (1.87):

(1.104) u ≥ 0 → R_u f(x) is continuously differentiable with derivative L R_u f(x).

One then concludes, in the same fashion as below (1.87), that e^{uL} f(x) = R_u f(x), and this proves (1.98).

• (1.99):

By (1.96), X̄_{L_t} = X_{τ_{L_t}} = X_t, for t ≥ 0, whence (1.99).

• (1.100):

(d/dt) L̄^x_{L_t} = (dL̄^x_u/du)|_{u=L_t} × (dL_t/dt) =((1.97),(1.94)) 1{X̄_{L_t} = x} (1/λ_{X_t}) =(1.96) 1{X_t = x} (1/λ_x) = (d/dt) L^x_t,

except when t is a jump time of X., and integrating we find (1.100).

• (1.101):

Letting t → ∞ in (1.100) yields L̄^x_∞ = L^x_∞, that is, (1.101).
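The time change can be sanity-checked numerically: a holding period of X. at x lasts an Exp(1) amount of t-time, and on the L-clock of (1.94) it is divided by λ_x, so X̄. holds at x for an Exp(λ_x) time, consistently with (1.98). A minimal sketch (illustrative rate, not from the notes):

```python
import random

# On the L-clock (1.94), an Exp(1) holding period of X. at x becomes
# Exp(1)/lambda_x, i.e. an Exp(lambda_x) holding period for X-bar (1.96).
lam_x = 2.5  # illustrative jump rate
rng = random.Random(3)
n = 200_000
mean = sum(rng.expovariate(1.0) / lam_x for _ in range(n)) / n
print(mean)
```

The printed sample mean is close to 1/λ_x = 0.4, the mean of an Exp(λ_x) variable.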
One then has the Feynman-Kac formula for X̄..

Theorem 1.16. (Feynman-Kac formula for X̄.)

For V, f : E → R, u ≥ 0, one has

(1.105) E_x[f(X̄_u) exp{∫_0^u V(X̄_v) dv}] = e^{u(L+V)} f(x), for x ∈ E.

Proof. The proof is similar to that of (1.84). One simply uses (1.98) in place of (1.18).
Remark 1.17. As a closing remark for Chapter 1, we briefly sketch a link between the Markov chain obtained as the trace of X̄. on a non-empty subset K of E and the trace form E*, cf. (1.72) and Proposition 1.11. To this end we introduce

(1.106) L^K_u = Σ_{x∈K∪{∆}} L̄^x_u = ∫_0^u 1{X̄_v ∈ K ∪ {∆}} dv, for u ≥ 0,

which is a continuous non-decreasing function of u tending to infinity, and its right-continuous inverse,

(1.107) τ^K_v = inf{u ≥ 0; L^K_u > v}, for v ≥ 0.

The trace process of X̄. on K is defined as

(1.108) X^K_v = X̄_{τ^K_v}, for v ≥ 0

(intuitively, at time v, X^K. is at the location where X̄. sits once L^K. accumulates v + ε units of time, with ε → 0). With arguments similar to those used in the case of X̄. (see the proof of Proposition 1.15; in particular, using the strong Markov property of X̄. and the fact that, in the notation from below (1.102), X^K_{u+v} = X^K_u ∘ θ̄_{τ^K_v}), one can show that under P_x, x ∈ K ∪ {∆}, X^K_v, v ≥ 0, is a Markov chain on K with cemetery state ∆.
One can further show that its corresponding sub-Markovian transition semi-group on K has the form

(1.109) R^K_t f(x) = E_x[f(X^K_t)] = e^{t L*} f(x), for x ∈ K, f : K → R,

where, in the notation of (1.74), (1.76),

(1.110) L* f(x) = Σ_{y∈K} c*_{x,y} f(y) − λ*_x f(x), for x ∈ K, f : K → R.

To see this last point, one notes that if L̃ stands for the generator of X^K., the inverse of −L̃ has the K × K matrix:

(1.111) E_x[∫_0^∞ 1{X^K_v = y} dv] = E_x[∫_0^∞ 1{X̄_u = y} du] =((1.101),(1.90)) g(x, y) =(1.78) g*(x, y), for x, y ∈ K.

A similar identity holds for the continuous-time chain on K with variable jump rate λ*., attached to the weights c*_{x,y} and the killing measure κ*.. Its generator is L* (by Proposition 1.15), and we thus find that L̃ = L*.
2 Isomorphism theorems
In this chapter we will discuss the isomorphism theorems of Dynkin and Eisenbaum mentioned in the Introduction, cf. (0.4)-(0.7), as well as some of the so-called generalized Ray-Knight theorems, in the terminology of Marcus-Rosen [19]. We still need to introduce some objects, such as the Gaussian free field and the measures on paths entering the Dynkin isomorphism theorem. We keep the same set-up and notation as in Chapter 1.
2.1 The Gaussian free field
In this section we define the Gaussian free field. We also describe the conditional law of the field given its values on a given subset K of E, as well as the law of its restriction to K. Interestingly, this brings into play the orthogonal decomposition under the Dirichlet form and the notion of trace form discussed in Section 1.3.

As we now see, we can use g(x, y), x, y ∈ E, as the covariance function of a centered Gaussian field indexed by E. An important step to this effect is (1.35).
We endow the canonical space R^E of functions on E with the canonical product σ-algebra and with the canonical coordinates

(2.1) ϕ_x : f ∈ R^E → ϕ_x(f) = f(x), x ∈ E.

Proposition 2.1. There exists a unique probability P^G on R^E, under which

(2.2) (ϕ_x)_{x∈E} is a centered Gaussian field with covariance E^G[ϕ_x ϕ_y] = g(x, y), for x, y ∈ E.

Proof.

Uniqueness:

Under such a P^G, for any ν : E → R, ⟨ν, ϕ⟩ = Σ_{x∈E} ν_x ϕ_x is a centered Gaussian variable with variance

Σ_{x,y∈E} ν_x ν_y E^G[ϕ_x ϕ_y] = Σ_{x,y∈E} ν_x ν_y g(x, y) = E(ν, ν).

As a result,

(2.3) E^G[e^{i⟨ν,ϕ⟩}] = e^{−(1/2) E(ν,ν)}, for any ν : E → R.

This specifies the characteristic function of P^G, and hence P^G is unique.

Existence:

We give both an abstract and a concrete construction of the law P^G.

Abstract construction:

We choose ν_ℓ, 1 ≤ ℓ ≤ |E|, an orthonormal basis for E(·, ·), cf. (1.35), of the space of measures, and consider the dual basis f_i, 1 ≤ i ≤ |E|, of functions, so that ⟨ν_ℓ, f_i⟩ = δ_{ℓ,i}, for 1 ≤ i, ℓ ≤ |E|. If ξ_i, i ≥ 1, are i.i.d. N(0, 1) variables on some auxiliary space (Ω, A, P), we define the random function

(2.4) ψ(·, ω) = Σ_{1≤i≤|E|} ξ_i(ω) f_i(·).
For any x ∈ E, 1_x = Σ_{ℓ=1}^{|E|} E(1_x, ν_ℓ) ν_ℓ, so that

ψ(x, ω) = ⟨1_x, ψ(·, ω)⟩ = Σ_{1≤ℓ,i≤|E|} E(1_x, ν_ℓ) ξ_i(ω) ⟨ν_ℓ, f_i⟩ = Σ_{ℓ=1}^{|E|} E(1_x, ν_ℓ) ξ_ℓ.

It now follows that for x, y ∈ E,

E^P[ψ(x, ω) ψ(y, ω)] = Σ_{1≤ℓ,ℓ′≤|E|} E(1_x, ν_ℓ) E(1_y, ν_{ℓ′}) E^P[ξ_ℓ ξ_{ℓ′}] = Σ_{1≤ℓ≤|E|} E(1_x, ν_ℓ) E(1_y, ν_ℓ) =(Parseval) E(1_x, 1_y) = g(x, y).

So the law of ψ(·, ω) on R^E satisfies (2.2).

Concrete construction:

The matrix g(x, y), x, y ∈ E, has inverse ⟨−L1_x, 1_y⟩, x, y ∈ E, cf. (1.37), and hence under the probability

(2.5) P^G = (1 / ((2π)^{|E|/2} √(det G))) exp{−(1/2) E(ϕ, ϕ)} Π_{x∈E} dϕ_x

(using that Σ_{x,y∈E} ϕ_x ϕ_y ⟨−L1_x, 1_y⟩ = ⟨−Lϕ, ϕ⟩ =(1.44) E(ϕ, ϕ)), (ϕ_x)_{x∈E} is a centered Gaussian vector with covariance g(x, y), x, y ∈ E, i.e. (2.2) holds.

Remark 2.2. In the above abstract construction, the dual basis f_i, 1 ≤ i ≤ |E|, of ν_ℓ, 1 ≤ ℓ ≤ |E|, is simply given by

(2.6) f_i = Gν_i, 1 ≤ i ≤ |E|.

Indeed, ⟨ν_ℓ, f_i⟩ = ⟨ν_ℓ, Gν_i⟩ = E(ν_ℓ, ν_i) = δ_{ℓ,i}. Note that f_i, 1 ≤ i ≤ |E|, is an orthonormal basis under E(·, ·):

E(f_i, f_j) = E(Gν_i, Gν_j) =(1.38) ⟨ν_i, Gν_j⟩ = E(ν_i, ν_j) = δ_{i,j}.
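The concrete construction (2.5) can be illustrated by sampling: one factors G = LLᵀ (Cholesky) and sets ϕ = Lξ with ξ i.i.d. N(0, 1), which reproduces the covariance (2.2). A minimal sketch (the matrix G below is illustrative; it is the Green function of a two-state chain with unit conductance and killing measure (0.5, 1.0)):

```python
import math, random

# Sample the centered Gaussian field with covariance G via a hand-rolled
# 2x2 Cholesky factor; illustrative G, not from the notes.
G = [[1.0, 0.5], [0.5, 0.75]]
l11 = math.sqrt(G[0][0])
l21 = G[1][0] / l11
l22 = math.sqrt(G[1][1] - l21 ** 2)

def sample_field(rng):
    """phi = L xi, with L the lower-triangular Cholesky factor of G."""
    xi1, xi2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return l11 * xi1, l21 * xi1 + l22 * xi2

rng = random.Random(4)
n = 200_000
c00 = c01 = c11 = 0.0
for _ in range(n):
    p0, p1 = sample_field(rng)
    c00 += p0 * p0
    c01 += p0 * p1
    c11 += p1 * p1
c00, c01, c11 = c00 / n, c01 / n, c11 / n
print(c00, c01, c11)
```

The empirical covariances come out close to the entries 1.0, 0.5, 0.75 of G, as (2.2) requires.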
Conditional expectations:

We consider K ⊆ E, U = E\K, and want to describe the conditional law under P^G of (ϕ_x)_{x∈U} given (ϕ_x)_{x∈K}, as well as the law of (ϕ_x)_{x∈K}. The orthogonal decomposition in Proposition 1.10, together with the description of the trace form in Proposition 1.11, will be useful for this purpose. We write P^{G,U} for the law on R^E of the centered Gaussian field with covariance

(2.7) E^{G,U}[ϕ_x ϕ_y] = g_U(x, y), for x, y ∈ E,

(so ϕ_x = 0, P^{G,U}-a.s., when x ∈ K).

Proposition 2.3. (K ≠ ∅)

For x ∈ E, define on R^E the σ(ϕ_y, y ∈ K)-measurable

(2.8) h_x = E_x[H_K < ∞, ϕ_{X_{H_K}}] = Σ_{y∈K} P_x[H_K < ∞, X_{H_K} = y] ϕ_y (so h_x = ϕ_x, for x ∈ K).
Then we can write

(2.9) ϕ_x = ψ_x + h_x, for x ∈ E (so ψ_x = 0, for x ∈ K).

Under P^G,

(2.10) (ψ_x)_{x∈E} is independent from σ(ϕ_y, y ∈ K),

and

(2.11) (ψ_x)_{x∈E} is distributed as (ϕ_x)_{x∈E} under P^{G,U}.

In addition, in the notation of (1.77),

(2.12) (ϕ_x)_{x∈K} has the law (1 / ((2π)^{|K|/2} √(det_{K×K} G))) exp{−(1/2) E*(ϕ, ϕ)} Π_{x∈K} dϕ_x,

where det_{K×K} G denotes the determinant of the K×K matrix obtained by restricting g(·, ·) to K × K.
Proof.
• (2.10):

For any x ∈ E, ψ_x belongs to the linear space generated by the centered jointly Gaussian collection ϕ_z, z ∈ E. In addition, when x ∈ E and y ∈ K, we find that by (2.8), (2.9) and (2.2),

E^G[ψ_x ϕ_y] = E^G[ϕ_x ϕ_y] − E^G[h_x ϕ_y]
 = g(x, y) − Σ_{z∈K} P_x[H_K < ∞, X_{H_K} = z] g(z, y)
 = g(x, y) − E_x[H_K < ∞, g(X_{H_K}, y)] =(1.49) 0.

The claim (2.10) now follows.

• (2.11):

Since ψ_x = 0 for x ∈ K, and, P^{G,U}-a.s., ϕ_x = 0 for x ∈ K, we only need to focus on the law of (ψ_x)_{x∈U} under P^G. When F is a bounded measurable function on R^U, we find, by (2.5), setting c as the inverse of (2π)^{|E|/2} √(det G), that

E^G[F((ψ_x)_{x∈U})] = c ∫_{R^E} F((ϕ_x − h_x)_{x∈U}) exp{−(1/2) E(ϕ, ϕ)} Π_{x∈E} dϕ_x.

Using Proposition 1.10 and (1.71), (1.72), we see that for ϕ in R^E,

E(ϕ, ϕ) = E(ϕ − h, ϕ − h) + E*(ϕ|_K, ϕ|_K),

where ϕ|_K denotes the restriction to K of ϕ ∈ R^E. We then make a change of variables in the above integral. We set ϕ′_x = ϕ_x − h_x, for x ∈ U, and ϕ′_x = ϕ_x, for x ∈ K, and
note that the Jacobian of this transformation equals 1, so that Π_{x∈E} dϕ′_x = Π_{x∈E} dϕ_x. We thus see that for all F as above:

(2.13) E^G[F((ψ_x)_{x∈U})] = c ∫_{R^E} F((ϕ_x)_{x∈U}) exp{−(1/2) E(ϕ_U, ϕ_U) − (1/2) E*(ϕ|_K, ϕ|_K)} Π_{x∈E} dϕ_x,

where we have set ϕ_U(x) = 1_U(x) ϕ_x. Integrating over the variables ϕ_x, x ∈ K, we find that, for a suitable constant c′, this equals

c′ ∫_{R^U} F((ϕ_x)_{x∈U}) exp{−(1/2) E(ϕ_U, ϕ_U)} Π_{x∈U} dϕ_x,

and, using Remark 1.5 and (2.5), this equals

E^{G,U}[F((ϕ_x)_{x∈U})].
This proves (2.11).
• (2.12):

A simple modification of the above calculation, replacing F((ψ_x)_{x∈U}) by H((ϕ_x)_{x∈K}), with H a bounded measurable function on R^K, yields (2.12).
Remark 2.4. As a result of Proposition 2.3, when x is a given point of U, under P^G, conditionally on the variables ϕ_y, y ∈ K,

(2.14) ϕ_x is distributed as a Gaussian variable with mean E_x[H_K < ∞, ϕ_{X_{H_K}}] and variance g_U(x, x).

Note that by Proposition 1.9 (and Remark 1.5), for x ∈ U,

(2.15) g_U(x, x) is the inverse of the minimum energy of a function taking the value 1 at x and 0 on K

(this provides an interpretation of the conditional variance as an effective resistance between x and K ∪ {∆}).
2.2 The measures Px,y
In this section we introduce a further ingredient of the Dynkin isomorphism theorem, namely the kind of measures on paths that appear in (0.4), which live on paths in E with finite duration going from x to y. We provide several descriptions of these measures, and derive an identity for the Laplace transform of the local time of the path, which prepares the ground for the proof of the Dynkin isomorphism theorem in the next section.

We introduce the space of E-valued trajectories with duration t ≥ 0:

(2.16) Γ_t = the space of right-continuous functions [0, t] → E, with finitely many jumps, left-continuous at t.

We still denote by X_s, 0 ≤ s ≤ t, the canonical coordinates, and by convention we set X_s = ∆ (cemetery point) if s > t.
We then define the space of E-valued trajectories with finite duration as

(2.17) Γ = ∪_{t>0} Γ_t.

For γ ∈ Γ, we denote the duration of γ by

(2.18) ζ(γ) = the unique t > 0 such that γ ∈ Γ_t.

The σ-algebra we choose on Γ is simply obtained by “transport”. We use the bijection Φ : Γ_1 × (0,∞) → Γ:

(w, t) ∈ Γ_1 × (0,∞) → γ(·) = w(·/t) ∈ Γ,

where we endow Γ_1 × (0,∞) with the canonical product σ-algebra (and Γ_1 is endowed with the σ-algebra generated by the maps X_s, 0 ≤ s ≤ 1, from Γ_1 into E). We thus take the image by Φ of the σ-algebra on Γ_1 × (0,∞), and obtain the σ-algebra on Γ. We define, for x, y ∈ E, t > 0, the measure

(2.19) P^t_{x,y} = the image on Γ_t of 1{X_t = y} P_x / λ_y, under the map (X_s)_{0≤s≤t} from D_E ∩ {X_t = y} into Γ_t (⊆ Γ).

Note that the total mass of P^t_{x,y} is

(2.20) P^t_{x,y}[Γ_t] = (1/λ_y) P_x[X_t = y] =(1.22) r_t(x, y).
We then define the finite measure P_{x,y} on Γ via:

(2.21) P_{x,y}[B] = ∫_0^∞ P^t_{x,y}[B] dt,

for any measurable subset B of Γ (noting that t > 0 → P^t_{x,y} defines a finite measure kernel from (0,∞) to Γ). The total mass of P_{x,y} is

(2.22) P_{x,y}[Γ] = ∫_0^∞ P^t_{x,y}[Γ] dt =(2.20) ∫_0^∞ r_t(x, y) dt =(1.26) g(x, y).

The next proposition describes some relations between the measures P_{x,y} and P_x. In particular, it provides an interpretation of P_{x,y} as a non-normalized h-transform of P_x, with h(·) = g(·, y), see for instance Section 3.9 of [19]. Remark 2.6 below gives yet another description of P_{x,y}.

Proposition 2.5. (x, y ∈ E)

For 0 < t_1 < · · · < t_n, x_1, . . . , x_n ∈ E, one has

(2.23) P_{x,y}[X_{t_1} = x_1, . . . , X_{t_n} = x_n] = P_x[X_{t_1} = x_1, . . . , X_{t_n} = x_n] g(x_n, y)
       = r_{t_1}(x, x_1) r_{t_2−t_1}(x_1, x_2) · · · r_{t_n−t_{n−1}}(x_{n−1}, x_n) g(x_n, y) λ_{x_1} · · · λ_{x_n}.
If K ⊆ E and H_K is defined as in (1.45), then for B ∈ σ(X_{H_K∧s}, s ≥ 0) and ζ as in (2.18),

(2.24) P_{x,y}[B, H_K ≤ ζ] = E_x[B ∩ {H_K < ∞}, g(X_{H_K}, y)].

Proof.

• (2.23):

P_{x,y}[X_{t_1} = x_1, . . . , X_{t_n} = x_n] =(2.21) ∫_0^∞ P^t_{x,y}[X_{t_1} = x_1, . . . , X_{t_n} = x_n] dt
 = ∫_{t_n}^∞ P_x[X_{t_1} = x_1, . . . , X_{t_n} = x_n, X_t = y] dt / λ_y
 =(Markov property at time t_n) ∫_{t_n}^∞ E_x[X_{t_1} = x_1, . . . , X_{t_n} = x_n, r_{t−t_n}(x_n, y)] dt
 =(1.26) P_x[X_{t_1} = x_1, . . . , X_{t_n} = x_n] g(x_n, y),

and the second equality of (2.23) follows from the Markov property.

• (2.24):

P_{x,y}[B, H_K ≤ ζ] = ∫_0^∞ P^t_{x,y}[B, H_K ≤ ζ] dt
 =((2.18),(2.19)) ∫_0^∞ E_x[B, H_K ≤ t, X_t = y] dt / λ_y
 =(strong Markov property) ∫_0^∞ E_x[B, H_K ≤ t, r_{t−H_K}(X_{H_K}, y)] dt
 =(Fubini, (1.26)) E_x[B, H_K < ∞, g(X_{H_K}, y)].
Remark 2.6. Given γ ∈ Γ, we can introduce N(γ) ≥ 0, the number of jumps of γ strictly before ζ(γ), the duration of γ, and when N(γ) = n ≥ 1, we can consider 0 < T_1(γ) < · · · < T_n(γ) < ζ(γ), the successive jump times of X_s, 0 ≤ s ≤ ζ(γ).

As we now explain, for n ≥ 1, t_i > 0, 1 ≤ i ≤ n, t > 0, and x_1, . . . , x_n ∈ E, one has the following formula complementing Proposition 2.5:

(2.25) P_{x,y}[N = n, X_{T_1} = x_1, . . . , X_{T_n} = x_n, T_1 ∈ t_1 + dt_1, . . . , T_n ∈ t_n + dt_n, ζ ∈ t + dt]
 = (c_{x,x_1} c_{x_1,x_2} · · · c_{x_{n−1},y}) / (λ_x λ_{x_1} · · · λ_{x_{n−1}} λ_y) δ_{x_n,y} 1{0 < t_1 < t_2 < · · · < t_n < t} e^{−t} dt_1 · · · dt_n dt,

where the precise meaning of (2.25) is obtained by considering subsets A_1, . . . , A_n, A of (0,∞), replacing “T_1 ∈ t_1 + dt_1”, . . . , “T_n ∈ t_n + dt_n”, “ζ ∈ t + dt” by T_1 ∈ A_1, . . . , T_n ∈ A_n, ζ ∈ A in the left-hand side of (2.25), and in the right-hand side multiplying by 1_{A_1}(t_1) · · · 1_{A_n}(t_n) 1_A(t) and integrating the expression over the variables t_1, . . . , t_n, t.
To find (2.25), we note that for t > 0,

P^t_{x,y}[N = n, X_{T_1} = x_1, . . . , X_{T_n} = x_n, T_1 ∈ t_1 + dt_1, . . . , T_n ∈ t_n + dt_n]
 =(2.19) P_x[X. has n jumps in [0, t], X_{T_1} = x_1, . . . , X_{T_n} = x_n, T_1 ∈ t_1 + dt_1, . . . , T_n ∈ t_n + dt_n] δ_{x_n,y} λ_y^{−1}
 = P_x[Z_1 = x_1, Z_2 = x_2, . . . , Z_n = x_n] P_x[T_n < t < T_{n+1}, T_1 ∈ t_1 + dt_1, . . . , T_n ∈ t_n + dt_n] δ_{x_n,y} λ_y^{−1}
 = (c_{x,x_1} c_{x_1,x_2} · · · c_{x_{n−1},y}) / (λ_x λ_{x_1} · · · λ_{x_{n−1}} λ_y) 1{0 < t_1 < · · · < t_n < t} e^{−t} dt_1 · · · dt_n δ_{x_n,y},

where we made use of (1.12), (1.15), and Remark 1.1. Multiplying both sides by dt readily yields (2.25), in view of (2.18), (2.21).

Similarly, we see that for n = 0, we have

(2.26) P_{x,y}[N = 0, ζ ∈ t + dt] = δ_{x,y} e^{−t} dt.

We record an interesting consequence of (2.25), (2.26). We denote by γ̌ ∈ Γ the “time reversal” of γ ∈ Γ, i.e. γ̌ is the element of Γ such that ζ(γ̌) = ζ(γ), γ̌(0) = γ(ζ), γ̌(ζ) = γ(0), and γ̌(s) = lim_{ε↓0} γ(ζ − s − ε), for 0 < s < ζ(γ).

Observe that on {N = n} one can reconstruct γ from T_1, . . . , T_n, ζ, and X_{T_1}, . . . , X_{T_n}. It is then a straightforward consequence of (2.25), (2.26) that

(2.27) P_{y,x} is the image of P_{x,y} under the map γ → γ̌.

We will later see formulas analogous to (2.25), (2.26) for rooted loops in Proposition 3.1, see also (3.40).
We will now provide some formulas for moments of ∫_0^∞ V(X_s) ds and L^z_∞ under the measure P_{x,y}.

Proposition 2.7. (x, y ∈ E)

For V : E → R and n ≥ 0, one has, in the notation of (1.32) and (1.83),

(2.28) E_{x,y}[(∫_0^∞ V(X_s) ds)^n] = n! ((QV)^n g_y)(x), with g_y(·) = (G1_y)(·) = g(·, y).

For x_1, x_2, . . . , x_n ∈ E, one has

(2.29) E_{x,y}[Π_{i=1}^n L^{x_i}_∞] = Σ_{σ∈S_n} g(x, x_{σ(1)}) g(x_{σ(1)}, x_{σ(2)}) · · · g(x_{σ(n)}, y),

with S_n the set of permutations of {1, . . . , n}. When ‖G|V|‖_∞ < 1, one has

(2.30) E_{x,y}[exp{Σ_{z∈E} V(z) L^z_∞}] = ((I − GV)^{−1} g_y)(x) = ((I − GV)^{−1} G1_y)(x).
Proof. We begin with a slightly more general calculation and consider V_1, . . . , V_n : E → R and

E_{x,y}[Π_{i=1}^n ∫_0^∞ V_i(X_s) ds] = E_{x,y}[∫_{R_+^n} V_1(X_{s_1}) · · · V_n(X_{s_n}) ds_1 · · · ds_n],

and decomposing over the various orthants, this equals

Σ_{σ∈S_n} ∫_{0<s_{σ(1)}<···<s_{σ(n)}<∞} E_{x,y}[V_1(X_{s_1}) · · · V_n(X_{s_n})] ds_1 · · · ds_n
 =(2.23) Σ_{σ∈S_n} ∫_{0<s_{σ(1)}<···<s_{σ(n)}} Σ_{x_1,...,x_n} r_{s_{σ(1)}}(x, x_1) V_{σ(1)}(x_1) λ_{x_1} r_{s_{σ(2)}−s_{σ(1)}}(x_1, x_2) V_{σ(2)}(x_2) λ_{x_2}
   · · · r_{s_{σ(n)}−s_{σ(n−1)}}(x_{n−1}, x_n) V_{σ(n)}(x_n) g(x_n, y) λ_{x_n} ds_{σ(1)} · · · ds_{σ(n)},

and, integrating over ds_{σ(n)} and summing over x_n,

 =(1.32) Σ_{σ∈S_n} ∫_{0<s_{σ(1)}<···<s_{σ(n−1)}<∞} Σ_{x_1,...,x_{n−1}} r_{s_{σ(1)}}(x, x_1) V_{σ(1)}(x_1) λ_{x_1} · · · λ_{x_{n−1}} (Q V_{σ(n)} g_y)(x_{n−1}) ds_{σ(1)} · · · ds_{σ(n−1)},

and by induction

 = Σ_{σ∈S_n} (Q V_{σ(1)} Q V_{σ(2)} · · · Q V_{σ(n)} g_y)(x).

In other words, we have:

(2.31) E_{x,y}[Π_{i=1}^n ∫_0^∞ V_i(X_s) ds] = Σ_{σ∈S_n} (Q V_{σ(1)} Q V_{σ(2)} · · · Q V_{σ(n)} g_y)(x).
• (2.28):

We choose V_i = V, 1 ≤ i ≤ n, and (2.31) yields (2.28).

• (2.29):

We choose V_i = (1/λ_{x_i}) 1_{x_i}, 1 ≤ i ≤ n, and note that (Q V_i h)(·) = g(·, x_i) h(x_i), so that (2.31) yields

E_{x,y}[Π_{i=1}^n L^{x_i}_∞] = Σ_{σ∈S_n} g(x, x_{σ(1)}) g(x_{σ(1)}, x_{σ(2)}) · · · g(x_{σ(n)}, y),

i.e., (2.29) is proved.
• (2.30):

(2.32) E_{x,y}[exp{Σ_{z∈E} V(z) L^z_∞}] =(1.92) E_{x,y}[Σ_{n≥0} (1/n!) (∫_0^∞ (V/λ)(X_s) ds)^n].

The calculation below shows we can apply dominated convergence:

Σ_{n≥0} (1/n!) E_{x,y}[(∫_0^∞ (|V|/λ)(X_s) ds)^n] =(2.28) Σ_{n≥0} ((Q(|V|/λ))^n g_y)(x) =(1.33) Σ_{n≥0} ((G|V|)^n g_y)(x) < ∞,

since ‖G|V|‖_∞ < 1.
So the left-hand side of (2.32) equals

Σ_{n≥0} (1/n!) E_{x,y}[(∫_0^∞ (V/λ)(X_s) ds)^n] =((2.28),(1.33)) Σ_{n≥0} ((GV)^n g_y)(x),

and since ‖GV‖_{L^∞(E)→L^∞(E)} < 1, we find that Σ_{n≥0} (GV)^n = (I − GV)^{−1}, so that

E_{x,y}[exp{Σ_{z∈E} V(z) L^z_∞}] = ((I − GV)^{−1} g_y)(x),

i.e. (2.30) holds.
2.3 Isomorphism theorems
This section is devoted to the isomorphism theorems of Dynkin and Eisenbaum, which explore the nature of the relations between occupation times and the free field. We refer to Marcus-Rosen [19] for a discussion of these theorems under very general assumptions.

We first state and prove the Dynkin isomorphism theorem, which shows that L^z_∞, z ∈ E, under P_{x,y}, has similar features, in a suitable sense, to (1/2)ϕ_z², z ∈ E, under P^G.
Theorem 2.8. (Dynkin isomorphism theorem)
For any x, y ∈ E, and bounded measurable F on R^E, one has

(2.33) E_{x,y} ⊗ E^G[F((L^z_∞ + (1/2)ϕ_z²)_{z∈E})] = E^G[ϕ_x ϕ_y F(((1/2)ϕ_z²)_{z∈E})].

In other words:

(L^z_∞ + (1/2)ϕ_z²)_{z∈E}, under P_{x,y} ⊗ P^G, has the same “law” as ((1/2)ϕ_z²)_{z∈E} under ϕ_x ϕ_y P^G (← a signed measure when x ≠ y!).
Proof. We will show that for small V : E → R,

(2.34) E_{x,y} ⊗ E^G[exp{Σ_{z∈E} V(z)(L^z_∞ + (1/2)ϕ_z²)}] = E^G[ϕ_x ϕ_y exp{Σ_{z∈E} V(z)(1/2)ϕ_z²}]

(i.e. both integrals are well-defined and equal).
Let us first explain how the claim (2.33) then follows.

For any V : E → R, it follows, by application of (2.34) to u_0 V and −u_0 V for small u_0 > 0, that cosh(u_0 Σ_z V(z)(L^z_∞ + (1/2)ϕ_z²)) is integrable for P_{x,y} ⊗ P^G and cosh(u_0 Σ_z V(z)(1/2)ϕ_z²) is integrable under |ϕ_x ϕ_y| P^G, and hence the functions

u ∈ (−u_0, u_0) + iR (⊆ C) → E_{x,y} ⊗ E^G[exp{u Σ_{z∈E} V(z)(L^z_∞ + (1/2)ϕ_z²)}]

and

u ∈ (−u_0, u_0) + iR → E^G[ϕ_x ϕ_y exp{u Σ_{z∈E} V(z)(1/2)ϕ_z²}]

are analytic. By (2.34) they are equal for u small and real. Hence they agree on (−u_0, u_0) + iR, and in particular for the choice u = i, that is, for any V : E → R,

E_{x,y} ⊗ E^G[exp{i Σ_{z∈E} V(z)(L^z_∞ + (1/2)ϕ_z²)}] = E^G[ϕ_x ϕ_y exp{i Σ_{z∈E} V(z)(1/2)ϕ_z²}].

This means that the characteristic function of the law of L^z_∞ + (1/2)ϕ_z², z ∈ E, under P_{x,y} ⊗ P^G equals the characteristic function of the law (i.e. image measure) of (1/2)ϕ_z², z ∈ E, under ϕ_x ϕ_y P^G. It follows that these laws are equal (in particular, the law of (1/2)ϕ_z², z ∈ E, under the signed measure ϕ_x ϕ_y P^G is a positive measure!), and (2.33) is proved.
We now turn to the proof of (2.34).

We already know by (2.30) that when V is small,

(2.35) E_{x,y}[exp{Σ_{z∈E} V(z) L^z_∞}] = ((I − GV)^{−1} G1_y)(x).

Moreover, by (2.5), we see that for small V, the random variable exp{(1/2) Σ_{z∈E} V(z)ϕ_z²} is P^G-integrable, because when V is small, thanks to (1.39), E_V(ϕ, ϕ) := E(ϕ, ϕ) − Σ_{z∈E} V(z)ϕ_z² is a positive definite quadratic form. In addition, for any ϕ : E → R:

(2.36) E_V(ϕ, ϕ) =(1.44) ⟨−Lϕ, ϕ⟩ − ⟨V ϕ, ϕ⟩ = ⟨(−L − V)ϕ, ϕ⟩.

Thus, if we define the probability measure on R^E:

(2.37) P^{G,V} = (1 / ((2π)^{|E|/2} √(det G) E^G[exp{(1/2) Σ_{z∈E} V(z)ϕ_z²}])) exp{−(1/2) E_V(ϕ, ϕ)} Π_{x∈E} dϕ_x,

then the field ϕ_z, z ∈ E, is a centered Gaussian field under P^{G,V}, with covariance matrix ⟨(−L − V)^{−1} 1_z, 1_{z′}⟩, z, z′ ∈ E. Note that

(2.38) (−L − V)^{−1} =(1.37) (−L(I − GV))^{−1} = (I − GV)^{−1}(−L)^{−1} = (I − GV)^{−1} G.

As a result, we see that when V is small:

E^{G,V}[ϕ_x ϕ_y] = ((I − GV)^{−1} G1_y)(x) =(2.35) E_{x,y}[exp{Σ_{z∈E} V(z) L^z_∞}].

Multiplying these equalities by the P^G-expectation in the denominator of the normalizing constant of P^{G,V} in (2.37) yields (2.34), and this concludes the proof of (2.33).
The fact that the measures P_{x,y}, and not simply the measures P_x, appear in the Dynkin isomorphism theorem makes its use somewhat cumbersome in a number of applications. We now discuss the Eisenbaum isomorphism theorem, which does not make use of the measures P_{x,y}, but instead directly involves the measures P_x.

As a preparation, we record the statements corresponding to (2.28)-(2.30) when P_{x,y} is replaced by P_x.
Proposition 2.9. For V : E → R and n ≥ 0, one has

(2.39) E_x[(∫_0^∞ V(X_s) ds)^n] = n! ((QV)^n 1_E)(x), for x ∈ E.

For x, x_1, . . . , x_n ∈ E, one has

(2.40) E_x[Π_{i=1}^n L^{x_i}_∞] = Σ_{σ∈S_n} g(x, x_{σ(1)}) g(x_{σ(1)}, x_{σ(2)}) · · · g(x_{σ(n−1)}, x_{σ(n)})

(Kac’s moment formula).

When ‖G|V|‖_∞ < 1, one has

(2.41) E_x[exp{Σ_{z∈E} V(z) L^z_∞}] = ((I − GV)^{−1} 1_E)(x), for x ∈ E.

Proof. The proofs are straightforward modifications of the proofs of (2.28), (2.29), (2.30), replacing (2.23) by the identity

(2.42) P_x[X_{t_1} = x_1, . . . , X_{t_n} = x_n] = r_{t_1}(x, x_1) r_{t_2−t_1}(x_1, x_2) · · · r_{t_n−t_{n−1}}(x_{n−1}, x_n) λ_{x_1} · · · λ_{x_n},

for 0 < t_1 < · · · < t_n and x, x_1, . . . , x_n ∈ E.
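Kac's moment formula (2.40) can be checked by simulation for n = 2. The sketch below (illustrative two-state chain, not from the notes) compares a Monte Carlo estimate of E_0[L^0_∞ L^1_∞] with the permutation sum g(0,0)g(0,1) + g(0,1)g(1,0).

```python
import random

# Two-state chain with conductance c(0,1) = 1 and killing kappa = (0.5, 1.0)
# (illustrative numbers): lambda = (1.5, 2.0), jump matrix P, Green function g.
lam = [1.5, 2.0]
P = [[0.0, 1.0 / 1.5], [0.5, 0.0]]
g = [[1.0, 0.5], [0.5, 0.75]]  # g = (I - P)^{-1} diag(1/lambda)

def local_times(x, rng):
    """Run the unit-rate chain from x until killing; return (L^0, L^1),
    each normalized by 1/lambda as in (1.88)."""
    L = [0.0, 0.0]
    state = x
    while state is not None:
        L[state] += rng.expovariate(1.0) / lam[state]
        u, old = rng.random(), state
        if u < P[old][0]:
            state = 0
        elif u < P[old][0] + P[old][1]:
            state = 1
        else:
            state = None  # killed
    return L

rng = random.Random(6)
n = 200_000
acc = 0.0
for _ in range(n):
    L = local_times(0, rng)
    acc += L[0] * L[1]
mc = acc / n
# Sum over both permutations in (2.40), with x = 0, x_1 = 0, x_2 = 1:
exact = g[0][0] * g[0][1] + g[0][1] * g[1][0]
print(mc, exact)
```

Both values come out close to 0.75, in agreement with (2.40).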
We are now ready to state and prove
Theorem 2.10. (Eisenbaum isomorphism theorem)

For any x ∈ E, s ≠ 0, and bounded measurable F on R^E, one has

(2.43) E_x ⊗ E^G[F((L^z_∞ + (ϕ_z + s)²/2)_{z∈E})] = E^G[(1 + ϕ_x/s) F(((ϕ_z + s)²/2)_{z∈E})],

in other words:

(L^z_∞ + (1/2)(ϕ_z + s)²)_{z∈E}, under P_x ⊗ P^G, has the same “law” as ((1/2)(ϕ_z + s)²)_{z∈E} under (1 + ϕ_x/s) P^G (← a signed measure!).
Proof. The same arguments we employed below (2.34) show that it suffices to prove that for small V : E → R,

(2.44) E_x ⊗ E^G[exp{Σ_{z∈E} V(z)(L^z_∞ + (ϕ_z + s)²/2)}] = E^G[(1 + ϕ_x/s) exp{Σ_{z∈E} V(z)(ϕ_z + s)²/2}]

(i.e. both integrals are well-defined and equal).

We already know by (2.41) that for small V,

(2.45) E_x[exp{Σ_{z∈E} V(z) L^z_∞}] = ((I − GV)^{−1} 1_E)(x).
We further note that when V is small, exp{(1/2) Σ_{z∈E} V(z)(ϕ_z + s)²} is P^G-integrable (see above (2.36)), and

(2.46) E^G[(1 + ϕ_x/s) exp{(1/2) Σ_{z∈E} V(z)(ϕ_z + s)²}] / E^G[exp{(1/2) Σ_{z∈E} V(z)(ϕ_z + s)²}]
 = 1 + E^G[ϕ_x e^{(1/2) Σ_{z∈E} V(z)(ϕ_z+s)²}] / (s E^G[e^{(1/2) Σ_{z∈E} V(z)(ϕ_z+s)²}])
 = 1 + (1/s) E^{G,V}[ϕ_x exp{s Σ_{z∈E} V(z)ϕ_z}] / E^{G,V}[exp{s Σ_{z∈E} V(z)ϕ_z}],

where P^{G,V} is defined in (2.37).
We are going to use the next

Lemma 2.11. If (X, Y) is a two-dimensional centered Gaussian vector, then for s ≠ 0,

(2.47) E[X exp{sY}] / (s E[exp{sY}]) = E[XY].

Proof. For t, s in R, we have

E[exp{tX + sY}] = exp{(1/2) E[(tX + sY)²]} = exp{(1/2)(t² E[X²] + 2ts E[XY] + s² E[Y²])}.

By differentiating in t and setting t = 0, we find

E[X exp{sY}] = s E[XY] exp{(s²/2) E[Y²]} = s E[XY] E[exp{sY}].

This proves (2.47).
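Lemma 2.11 is easy to test by simulation; the sketch below (illustrative coefficients, not from the notes) checks (2.47) for X = ξ₁ and Y = 0.8ξ₁ + 0.6ξ₂, for which E[XY] = 0.8.

```python
import math, random

# Monte Carlo check of (2.47): X = xi1, Y = 0.8*xi1 + 0.6*xi2 is a centered
# Gaussian pair with E[XY] = 0.8 and E[Y^2] = 1 (illustrative coefficients).
rng = random.Random(5)
s, n = 0.5, 400_000
num = den = 0.0
for _ in range(n):
    xi1, xi2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    X, Y = xi1, 0.8 * xi1 + 0.6 * xi2
    num += X * math.exp(s * Y)
    den += math.exp(s * Y)
ratio = num / (s * den)
print(ratio)
```

The printed ratio is close to E[XY] = 0.8, as (2.47) asserts.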
We apply (2.47) with X = ϕ_x and Y = Σ_{z∈E} V(z)ϕ_z. We find that the last expression in (2.46) equals

(2.48) 1 + E^{G,V}[ϕ_x Σ_{z∈E} V(z)ϕ_z] = 1 + Σ_{z∈E} E^{G,V}[ϕ_x ϕ_z] V(z) =(2.38) 1 + ((I − GV)^{−1} GV 1_E)(x).

Note that (I − GV)^{−1} = (I − GV)^{−1}(I − GV + GV) = I + (I − GV)^{−1} GV, so the expressions in (2.45) and (2.48) are equal. The claim (2.43) follows.
2.4 Generalized Ray-Knight theorems
We will now discuss some so-called “generalized Ray-Knight theorems” (we will explain the terminology below; see also [8] and [19] for a presentation of these results in a general framework). These results are closely linked to the isomorphism theorems of Dynkin and Eisenbaum, see also [6] and [19].
We begin with a direct application of the isomorphism theorem of Eisenbaum. We consider x_0 ∈ E, and assume that κ_x vanishes for x ≠ x_0. We set

(2.49) U = E\{x_0}

(so T_U = H_{x_0}, P_x-a.s., for all x ∈ E, by our assumptions on κ), and denote by P^{G,U} the law on R^E of the centered Gaussian free field with covariance

(2.50) E^{G,U}[ϕ_x ϕ_y] = g_U(x, y), for x, y ∈ E.
Theorem 2.12. (Generalized first Ray-Knight theorem)
For any x ∈ E and s ≠ 0,

(2.51) (L^z_{H_{x_0}} + (1/2)(ϕ_z + s)²)_{z∈E}, under P_x ⊗ P^{G,U}, has the same “law” as ((1/2)(ϕ_z + s)²)_{z∈E} under (1 + ϕ_x/s) P^{G,U}.

Proof. This is the direct application of (2.43) to the case where U replaces E, g_U(·, ·) replaces g(·, ·), and L^z_{H_{x_0}} now plays the role of L^z_∞, with the help of Remark 1.5 1) and 2).
Remark 2.13.

1) The above terminology stems from the fact that the corresponding statement in the case of Brownian motion, when x_0 = 0 < x in R, can be shown to be equivalent, see Marcus-Rosen [19], p. 367, to the statement

(2.52) (L^z_{H_0} + (B²_{z−x} + B̃²_{z−x}) 1{z ≥ x})_{z≥0} has the same law as (B²_z + B̃²_z)_{z≥0},

where (L^z_t, z ∈ R, t ≥ 0), (B_z, z ≥ 0), (B̃_z, z ≥ 0) are independent and respectively distributed as the local time process of Brownian motion starting at x and two independent copies of Brownian motion starting at 0. Note incidentally that in the present situation g_U(z, z′) = 2 z ∧ z′, for z, z′ ≥ 0, so that (ϕ_z, z ≥ 0) under P^{G,U}, with U = {0}ᶜ, is distributed as √2 B_z, z ≥ 0.

The statement (2.52) can be shown to be equivalent to the more classical statement of the first Ray-Knight theorem (see Marcus and Rosen [19], p. 52):

(2.53) L^z_{H_0}, for z ∈ [0, x], under P_x (the law of Brownian motion starting from x), has the law of a two-dimensional squared Bessel process starting at 0, and then proceeds from x as a zero-dimensional squared Bessel process (i.e. conditionally on (L^z_{H_0})_{0≤z≤x}, (L^{x+y}_{H_0})_{y≥0} is distributed as a zero-dimensional squared Bessel process),

which stems from the work of Ray [21] and Knight [13]. We also recall (see Revuz-Yor [23], Chapter XI) that the law of the δ-dimensional squared Bessel process starting from x, with δ ≥ 0 and x ≥ 0, is that of the solution of the stochastic differential equation:

(2.54) Z_t = x + 2 ∫_0^t √(Z_s) dB_s + δt, t ≥ 0,
where B_s, s ≥ 0, is a Brownian motion. This law is commonly denoted by BESQ^δ(x). It satisfies the important relation, see Revuz-Yor [23], p. 410:

(2.55) If Z. and Z′. are independent, respectively BESQ^δ(x)- and BESQ^{δ′}(x′)-distributed, then Z. + Z′. is BESQ^{δ+δ′}(x + x′)-distributed, for δ, δ′, x, x′ ≥ 0.
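As a numerical aside (not part of the original notes), the additivity property (2.55) can be probed with a crude Euler scheme for the SDE (2.54); the parameter values, step size, sample size, and seeds below are arbitrary illustrative choices.

```python
import numpy as np

def besq_paths(delta, x, T=1.0, n_steps=400, n_paths=20000, rng=None):
    """Euler scheme for the BESQ SDE dZ = 2*sqrt(Z) dB + delta dt, clipped at 0."""
    rng = np.random.default_rng(0) if rng is None else rng
    dt = T / n_steps
    Z = np.full(n_paths, float(x))
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        Z = np.clip(Z + 2.0 * np.sqrt(Z) * dB + delta * dt, 0.0, None)
    return Z

rng = np.random.default_rng(1)
Z1 = besq_paths(2.0, 0.5, rng=rng)   # BESQ^2(0.5) at time 1
Z2 = besq_paths(1.0, 0.3, rng=rng)   # BESQ^1(0.3) at time 1
Z3 = besq_paths(3.0, 0.8, rng=rng)   # BESQ^3(0.8) at time 1
# (2.55): Z1 + Z2 should match Z3 in law; compare means and variances
# (E[Z_t] = x + delta*t and Var[Z_t] = 2*delta*t^2 + 4*x*t for BESQ^delta(x))
print((Z1 + Z2).mean(), Z3.mean())   # both should be close to 3.8
print((Z1 + Z2).var(), Z3.var())     # both should be close to 9.2
```

Only the first two moments are compared here, but they already distinguish BESQ laws with different (δ, x).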
2) The type of argument which enables one to go from (2.52) to (2.53) uses (2.55) (when δ = δ′ = 1, noting that for a ≥ 0, BESQ^1(a) is the law of the square of a Brownian motion starting from √a), as well as the following property:

(2.56) If X, Y, Z are random n-vectors with non-negative components such that X, Z are independent, Y, Z are independent, and X + Z (law)= Y + Z, then X (law)= Y.

(Proof: Let L_X(t) = E[e^{−∑_{i=1}^n t_i X_i}], for t ∈ R^n_+, be the Laplace transform of X, and let L_Y, L_Z be defined analogously. By independence, L_X(t) L_Z(t) = L_{X+Z}(t) = L_{Y+Z}(t) = L_Y(t) L_Z(t). Now L_Z(t) > 0, and simplifying we see that L_X(t) = L_Y(t), for all t ∈ R^n_+, so that X (law)= Y.)
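A quick numerical sketch of this cancellation (my own illustration; the exponential distributions are arbitrary choices): dividing the Laplace transform of X + Z by that of Z recovers the transform of X, which is exactly the simplification step used in the proof of (2.56).

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)   # grid of Laplace arguments t >= 0
L_X = 1.0 / (1.0 + t)             # Laplace transform of X ~ Exp(1)
L_Z = 2.0 / (2.0 + t)             # Laplace transform of Z ~ Exp(2)
L_sum = L_X * L_Z                 # transform of X + Z, by independence
# Since L_Z > 0 on [0, infty) (this uses Z >= 0), the law of X is pinned
# down by the law of X + Z once the law of Z is fixed: divide out L_Z.
recovered = L_sum / L_Z
assert np.allclose(recovered, L_X)
```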
It is important to realize that the assumption that the components of X, Y, Z are non-negative is important for the validity of (2.56)! One can find in Feller [9], p. 506, an example of random variables X, Y, Z such that X, Z are independent, Y, Z are independent, and X + Z (law)= Y + Z, but X does not have the same law as Y! (These examples come from the fact that there are characteristic functions of distributions which are supported in [−1, 1], and there are distinct distributions which have the same restriction of their respective characteristic functions to [−1, 1]. Thus, if two distinct characteristic functions agree on [−1, 1], their products with a characteristic function supported in [−1, 1] will be equal. Interpreting products of characteristic functions in terms of characteristic functions of sums of independent variables, one obtains the desired examples.)
3) Simple random walk in continuous time on Z also satisfies a corresponding Ray-Knight theorem (i.e. (2.51)). Actually, when x_0 = 0 < x, an integer, and when one chooses c_{z,z+1} = 1/2 for all z (for the weights), then

(2.57) g_U(z, z′) = 2 z ∧ z′, for all z, z′ ∈ N, when U = Z\{0}.

Indeed, when z ≥ z′ ≥ 1, by the strong Markov property, letting H_0 denote the entrance time of Z_n, n ≥ 0, in 0,

g_U(z, z′) = g_U(z′, z′) = E_{z′}[∑_{k=0}^{H_0} 1{Z_k = z′}],

and the number of visits to z′ before hitting 0 under P_{z′} is geometric with success parameter 1/(2z′) (and hence has expectation 2z′). So g_U(·,·) corresponds to the restriction of the Brownian Green function g_{R\{0}}(·,·) to N × N! It now follows from (2.51) (in this slightly more general context) that

(2.58) (L^z_{H_0})_{z∈N} under P_x, for x ≥ 1 integer, has the same law as the restriction to z ∈ N of the Brownian local times L^z_{H_0}, z ≥ 0, under Wiener measure starting from x (see (2.53)).
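To support the geometric-visits claim (my own numerical aside), one can compute the probability that the walk, started at z′, hits 0 before returning to z′, by solving the discrete Dirichlet problem for h(a) = P_a[hit 0 before z′] on {1, …, z′ − 1}; the success parameter of the geometric number of visits then comes out as 1/(2z′).

```python
import numpy as np

def escape_prob(zp):
    """P_{z'}[hit 0 before returning to z'] for simple random walk on Z, z' >= 1."""
    # h(a) = P_a[hit 0 before z'] for 0 < a < z' is harmonic, h(0) = 1, h(z') = 0.
    n = zp - 1                          # unknowns h(1), ..., h(z'-1)
    if n == 0:
        return 0.5                      # from 1, stepping down to 0 succeeds w.p. 1/2
    A = np.zeros((n, n)); b = np.zeros(n)
    for i, a in enumerate(range(1, zp)):
        A[i, i] = 1.0
        if a - 1 >= 1: A[i, i - 1] = -0.5
        else:          b[i] += 0.5      # neighbour 0 contributes h(0) = 1
        if a + 1 <= zp - 1: A[i, i + 1] = -0.5
        # neighbour z' contributes h(z') = 0
    h = np.linalg.solve(A, b)
    # From z', a step up leads back to z' before 0 (the walk must cross z');
    # a step down to z'-1 then hits 0 before z' with probability h(z'-1).
    return 0.5 * h[-1]

for zp in (1, 2, 3, 7):
    assert abs(escape_prob(zp) - 1.0 / (2 * zp)) < 1e-12  # success parameter 1/(2z')
print(1 / escape_prob(3))   # mean number of visits to z' = 3 before hitting 0: 2*z' = 6
```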
As a preparation for the proof of the generalized second Ray-Knight theorem, we record the following fact concerning Gaussian vectors.

Proposition 2.14. Let n ≥ 1, let ψ = (ψ_1, …, ψ_n) be a centered Gaussian vector with invertible covariance matrix A, and let V = diag(v_1, …, v_n) be a deterministic diagonal matrix such that A^{−1} − V is positive definite. Then for any real numbers b_i, 1 ≤ i ≤ n,
(2.59) E[exp{∑_{i=1}^n v_i (ψ_i + b_i)^2 / 2}] = (1/√det(I − AV)) exp{(1/2)(∑_{1≤i≤n} v_i b_i^2 + ∑_{1≤i,j≤n} v_i b_i Ã_{ij} v_j b_j)},

where Ã = (A^{−1} − V)^{−1} = (I − AV)^{−1} A.
Moreover, when ψ̄ is an independent copy of ψ, and α, ᾱ, β, β̄ are real numbers such that

(2.60) α^2 + ᾱ^2 = β^2 + β̄^2,

then for any b_i, 1 ≤ i ≤ n,

(2.61) ((ψ_i + α b_i)^2 + (ψ̄_i + ᾱ b_i)^2)_{1≤i≤n} (law)= ((ψ_i + β b_i)^2 + (ψ̄_i + β̄ b_i)^2)_{1≤i≤n}.
Proof.

• (2.59): Note that by assumption A^{−1} − V is positive definite, hence invertible, and A^{−1} − V = A^{−1}(I − AV), so that I − AV is invertible and Ã = (A^{−1} − V)^{−1} = (I − AV)^{−1} A. Moreover,

E[exp{∑_{i=1}^n v_i (ψ_i + b_i)^2 / 2}] = E[exp{∑_{i=1}^n v_i b_i ψ_i + (1/2) ∑_{i=1}^n v_i ψ_i^2}] e^{(1/2) ∑_{i=1}^n v_i b_i^2},

and using that A^{−1} − V = Ã^{−1},

E[exp{∑_{i=1}^n v_i b_i ψ_i + (1/2) ∑_{i=1}^n v_i ψ_i^2}]
= (1/((2π)^{n/2} √det A)) ∫_{R^n} exp{∑_{i=1}^n v_i b_i x_i − (1/2) ∑_{i,j=1}^n x_i Ã^{−1}_{ij} x_j} dx_1 … dx_n
= (det Ã / det A)^{1/2} Ẽ[exp{∑_{i=1}^n v_i b_i ψ̃_i}],

where under P̃, (ψ̃_1, …, ψ̃_n) is a centered Gaussian vector with covariance matrix Ã, and Ẽ denotes the P̃-expectation. Since det Ã = det A / det(I − AV), the last expression equals

(1/√det(I − AV)) · exp{(1/2) ∑_{i,j=1}^n v_i b_i Ã_{ij} v_j b_j}.

This proves (2.59).
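As a sanity check (my own addition; the matrices A, V and the vector b are arbitrary numerical choices with A^{−1} − V positive definite), (2.59) can be verified for n = 2 by direct numerical integration against the Gaussian density:

```python
import numpy as np

A = np.array([[1.0, 0.4], [0.4, 2.0]])        # covariance matrix
V = np.diag([0.15, -0.10])                    # small diagonal perturbation
b = np.array([0.7, -0.3])
v = np.diag(V)

# closed form (2.59): exp{(1/2)(sum v_i b_i^2 + sum v_i b_i Atilde_ij v_j b_j)} / sqrt(det(I - AV))
At = np.linalg.inv(np.linalg.inv(A) - V)      # Atilde = (A^{-1} - V)^{-1}
rhs = (np.exp(0.5 * (v @ b**2 + (v * b) @ At @ (v * b)))
       / np.sqrt(np.linalg.det(np.eye(2) - A @ V)))

# brute-force expectation by 2-d Riemann sum over a wide grid
x = np.linspace(-12, 12, 1201)
X1, X2 = np.meshgrid(x, x, indexing="ij")
P = np.stack([X1, X2], axis=-1)
Ainv = np.linalg.inv(A)
dens = (np.exp(-0.5 * np.einsum("...i,ij,...j", P, Ainv, P))
        / (2 * np.pi * np.sqrt(np.linalg.det(A))))
integrand = dens * np.exp(0.5 * (v[0] * (X1 + b[0])**2 + v[1] * (X2 + b[1])**2))
lhs = integrand.sum() * (x[1] - x[0])**2
assert abs(lhs / rhs - 1.0) < 1e-3
```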
• (2.61): By (2.59), we see that for small v_i, 1 ≤ i ≤ n, one has

(2.62) E[exp{∑_{i=1}^n v_i ((ψ_i + α b_i)^2/2 + (ψ̄_i + ᾱ b_i)^2/2)}]
= (1/det(I − AV)) exp{(1/2) ∑_{i=1}^n v_i b_i^2 (α^2 + ᾱ^2) + (1/2) ∑_{1≤i,j≤n} v_i b_i Ã_{ij} v_j b_j (α^2 + ᾱ^2)},

and we obtain the same expression if we now replace α by β and ᾱ by β̄, thanks to (2.60).

The same argument as below (2.34) then shows that the random vectors on the left-hand side and on the right-hand side of (2.61) have the same characteristic function. The claim (2.61) now follows.
We continue with some preparation for the generalized second Ray-Knight theorem.

For the next proposition, we once again assume that the killing measure κ vanishes everywhere except at one point:

(2.63) there is x_0 ∈ E such that κ_{x_0} = λ > 0, and κ_x = 0, for x ≠ x_0.

We write U = E\{x_0}, and recall that

(2.64) g_U(x, y) (1.52)= g(x, y) − g(x, x_0) g(x_0, y) / g(x_0, x_0) (1.91)= E_x[L^y_{H_{x_0}}], for x, y ∈ E.

We introduce (in the same fashion as in (2.50))

(2.65) P^{G,U}, the probability on R^E under which ϕ_x, x ∈ E, is a centered Gaussian field with covariance E^{G,U}[ϕ_x ϕ_y] = g_U(x, y), for x, y ∈ E,

(2.66) Y, an exponential random variable with parameter λ under Q.
Proposition 2.15. (under (2.63) - (2.66))

(2.67) (L^x_∞ + (1/2) ϕ_x^2)_{x∈E} under P_{x_0} ⊗ P^{G,U}, has the same law as ((1/2)(ϕ_x + √(2Y))^2)_{x∈E} under P^{G,U} ⊗ Q.
Proof. Consider Z, Z′ independent centered Gaussian variables with variance λ^{−1}, independent from (ϕ_x)_{x∈E} and (ϕ′_x)_{x∈E}, two independent P^{G,U}-distributed fields. By (2.61), we find that

((ϕ_x + Z)^2 + (ϕ′_x + Z′)^2)_{x∈E} (law)= (ϕ_x^2 + (ϕ′_x + √(Z^2 + Z′^2))^2)_{x∈E} (law)= (ϕ_x^2 + (ϕ′_x + √(2Y))^2)_{x∈E},

where Y is an exponential random variable with parameter λ, independent of (ϕ_x)_{x∈E} and (ϕ′_x)_{x∈E}, and we used that Z^2 + Z′^2 (law)= 2Y (use polar coordinates).
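The polar-coordinates identity Z^2 + Z′^2 (law)= 2Y can be spot-checked by Monte Carlo (my own aside; the value λ = 1.7, the seed, and the tolerances are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.7
n = 400_000
Z = rng.normal(0.0, np.sqrt(1.0 / lam), n)    # variance 1/lam
Zp = rng.normal(0.0, np.sqrt(1.0 / lam), n)
S = Z**2 + Zp**2                              # candidate for 2Y
twoY = 2.0 * rng.exponential(1.0 / lam, n)    # Y ~ Exp(lam), numpy scale = 1/rate
# both S and 2Y should be Exp(lam/2): compare means and a few quantiles
assert abs(S.mean() - 2.0 / lam) < 0.01
for q in (0.25, 0.5, 0.9):
    assert abs(np.quantile(S, q) - np.quantile(twoY, q)) < 0.03
```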
Thus if we show that

(2.68) (L^x_∞ + (1/2) ϕ_x^2 + (1/2)(ϕ′_x)^2)_{x∈E} (law)= ((1/2)(ϕ_x + Z)^2 + (1/2)(ϕ′_x + Z′)^2)_{x∈E},

it will follow that

(L^x_∞ + (1/2) ϕ_x^2 + (1/2)(ϕ′_x)^2)_{x∈E} (law)= ((1/2) ϕ_x^2 + (1/2)(ϕ′_x + √(2Y))^2)_{x∈E},

and, "simplifying" on both sides, i.e. applying (2.56), we will conclude that

(L^x_∞ + (1/2)(ϕ′_x)^2)_{x∈E} (law)= ((1/2)(ϕ′_x + √(2Y))^2)_{x∈E},

i.e. (2.67) will be proved.
It remains to prove (2.68). By the same argument as below (2.34), it suffices to prove that for small V: E → R,

(2.69) E_{x_0} ⊗ E^{G,U} ⊗ E^{G,U}[exp{∑_{x∈E} V(x)(L^x_∞ + (1/2) ϕ_x^2 + (1/2)(ϕ′_x)^2)}]
= E^{G,U} ⊗ E^{G,U} ⊗ E^{Z,Z′}[exp{∑_{x∈E} V(x)((1/2)(ϕ_x + Z)^2 + (1/2)(ϕ′_x + Z′)^2)}],

where E^{Z,Z′} denotes the expectation relative to the probability governing Z, Z′. Observe that (writing E in place of E^{G,U} ⊗ E^{Z,Z′}),

E[(ϕ_x + Z)(ϕ_y + Z)] = E[ϕ_x ϕ_y] + E[Z^2] = g_U(x, y) + λ^{−1},

i.e.

(2.70) ϕ_x + Z, x ∈ E, is a centered Gaussian field with covariance g_U(x, y) + λ^{−1}, x, y ∈ E.
Lemma 2.16. (under (2.63), with U = E\{x_0})

(2.71) g(x, y) = g_U(x, y) + λ^{−1}, for x, y ∈ E.

Proof. We have by (2.64)

g(x, y) = g_U(x, y) + g(x, x_0) g(x_0, y) / g(x_0, x_0), for x, y ∈ E.

Since κ_x = 0, for x ≠ x_0, we see that P_x[H_{x_0} < ∞] = 1, for any x ∈ E, and by (1.51) g(x, x_0) = g(x_0, x_0) = g(x_0, x), for all x ∈ E. Hence

(2.72) g(x, y) = g_U(x, y) + g(x_0, x_0), for x, y ∈ E.

To compute g(x_0, x_0), we use the fact that

(2.73) g(x_0, x_0) (1.26)= (1/λ_{x_0}) E_{x_0}[∑_{n≥0} 1{Z_n = x_0}].
Under P_{x_0}, ∑_{n≥0} 1{Z_n = x_0}, the total number of visits to x_0, is a geometric random variable with success parameter κ_{x_0}/λ_{x_0} = λ/λ_{x_0} (i.e. the probability to jump to the cemetery point). As a result we obtain

E_{x_0}[∑_{n≥0} 1{Z_n = x_0}] = (λ/λ_{x_0})^{−1} = λ_{x_0}/λ,

and

g(x_0, x_0) = (1/λ_{x_0}) · (λ_{x_0}/λ) = λ^{−1}.

With (2.72) we now find (2.71).
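Lemma 2.16 lends itself to a direct matrix check (my own illustration). In the present finite setting one can identify g = (Λ − C)^{−1}, with Λ = diag(λ_x) and C = (c_{x,y}), and g_U = ((Λ − C)|_{U×U})^{−1}, extended by 0 on the row and column of x_0; the path graph with killing only at x_0 below is an arbitrary toy choice.

```python
import numpy as np

# path graph 0-1-2-3 with unit conductances; killing lam only at x0 = 0
x0, lam = 0, 0.8
C = np.zeros((4, 4))
for x in range(3):
    C[x, x + 1] = C[x + 1, x] = 1.0
kappa = np.zeros(4); kappa[x0] = lam
Lam = np.diag(C.sum(axis=1) + kappa)

G = np.linalg.inv(Lam - C)                 # g(x, y)
U = [x for x in range(4) if x != x0]
GU = np.zeros((4, 4))                      # g_U, extended by 0 on row/column x0
GU[np.ix_(U, U)] = np.linalg.inv((Lam - C)[np.ix_(U, U)])

# Lemma 2.16: g(x, y) = g_U(x, y) + 1/lam for all x, y in E
assert np.allclose(G, GU + 1.0 / lam)
```

Note that the identity holds on all of E, including the row and column of x_0, where g_U vanishes and g equals 1/λ.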
By (2.41), we see that for small V

(2.74) E_{x_0}[exp{∑_{x∈E} V(x) L^x_∞}] = ((I − GV)^{−1} 1_E)(x_0),

and by (2.59), with b_i ≡ 0, and A playing the role of G or G_U, where G_U stands for the matrix with components g_U(x, y), x, y ∈ E, we have for small V:

(2.75) E^{G,U} ⊗ E^{G,U} ⊗ E^{Z,Z′}[e^{∑_{x∈E} V(x)((1/2)(ϕ_x + Z)^2 + (1/2)(ϕ′_x + Z′)^2)}] / E^{G,U} ⊗ E^{G,U}[e^{∑_{x∈E} V(x)(ϕ_x^2/2 + (ϕ′_x)^2/2)}] = det(I − G_U V) / det(I − GV).
We will now see that the expressions in (2.74) and (2.75) are equal. We observe that by Cramer's rule for the inverse of a matrix,

((I − GV)^{−1} 1_E)(x_0) = det(M) / det(I − GV),

where M is the matrix obtained by replacing the x_0-column of I − GV with 1's everywhere. Subtracting the x_0-row from all other rows of M, we obtain a matrix M̃ with coefficients M̃_{x,y}, which for x, y ≠ x_0 equal

M̃_{x,y} = δ_{x,y} − g(x, y) V(y) + g(x_0, y) V(y) (2.71)= δ_{x,y} − g_U(x, y) V(y),

and the x_0-column of M̃ vanishes everywhere except at row x_0: M̃_{x,x_0} = δ_{x,x_0}. Clearly det M = det M̃, and if we expand the determinant of M̃ along the x_0-column, we find that

det M = det M̃ = det((I − G_U V)|_{U×U}) = det(I − G_U V).

We have thus proved that the expressions in (2.74) and (2.75) are equal. This concludes the proof of (2.69) and hence of (2.67).
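The determinant identity ((I − GV)^{−1} 1_E)(x_0) = det(I − G_U V)/det(I − GV) is easy to confirm numerically (my own aside, using the finite-chain identification g = (Λ − C)^{−1} on a toy path graph with killing λ only at x_0; the potential V is an arbitrary small perturbation):

```python
import numpy as np

# path graph 0-1-2-3, unit conductances, killing lam at x0 = 0
x0, lam = 0, 0.8
C = np.zeros((4, 4))
for x in range(3):
    C[x, x + 1] = C[x + 1, x] = 1.0
kappa = np.zeros(4); kappa[x0] = lam
Lam = np.diag(C.sum(axis=1) + kappa)
G = np.linalg.inv(Lam - C)                     # g(x, y)
U = [1, 2, 3]
GU = np.zeros((4, 4))                          # g_U, extended by 0 at x0
GU[np.ix_(U, U)] = np.linalg.inv((Lam - C)[np.ix_(U, U)])

V = np.diag([0.05, -0.03, 0.02, 0.04])         # small, so I - GV stays invertible
lhs = np.linalg.solve(np.eye(4) - G @ V, np.ones(4))[x0]
rhs = np.linalg.det(np.eye(4) - GU @ V) / np.linalg.det(np.eye(4) - G @ V)
assert abs(lhs - rhs) < 1e-10
```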
We will later give another proof of the above proposition, based instead on the Dynkin isomorphism theorem. For the time being we proceed with the generalized second Ray-Knight theorem.
We now consider the situation where the killing measure vanishes on E, i.e. (1.1) - (1.3) hold, but instead of our usual set-up,

(2.76) κ_x = 0, for all x ∈ E.

We denote by X^0_t, t ≥ 0, the canonical process on the space D^0_E of right-continuous E-valued trajectories with finitely many jumps on finite intervals and infinitely many jumps, and by P^0_x, x ∈ E, the law of the walk with jump rate 1 and Markovian transition probability p^0_{x,y} = c_{x,y}/λ^0_x, for x, y ∈ E, with λ^0_x = ∑_{y∈E} c_{x,y}, for x ∈ E.
The local time of the walk is defined by

(2.77) ℓ^x_t = (1/λ^0_x) ∫_0^t 1{X^0_s = x} ds, for x ∈ E, t ≥ 0.

The map t ≥ 0 → ℓ^x_t ≥ 0 is continuous non-decreasing, ℓ^x_0 = 0, and P^0_z-a.s., ℓ^x_∞ = ∞, for any x, z ∈ E, since we are now in a recurrent situation.
We now consider a special point

(2.78) x_0 ∈ E,

and keep the notation U = E\{x_0}. Note that the law of X^0_{t∧H_{x_0}}, t ≥ 0, under P^0_x agrees with that of X_{t∧H_{x_0}}, t ≥ 0, under P_x, when we instead pick the killing measure κ with the unique non-vanishing point x_0 of (2.78), as in (2.63). In particular, the killed Green function g^0_U(·,·) (attached to the walk X^0_t, t ≥ 0) coincides with g_U(·,·), and

(2.79) g^0_U(x, y) = g_U(x, y) = E^0_x[ℓ^y_{H_{x_0}}], for x, y ∈ E.

We now introduce the right-continuous inverse of t → ℓ^{x_0}_t:

(2.80) σ_u = inf{t ≥ 0; ℓ^{x_0}_t > u}, for u ≥ 0.
Theorem 2.17. (Generalized second Ray-Knight theorem)

Keeping the notation of (2.65), for any u > 0,

(2.81) (ℓ^x_{σ_u} + (1/2) ϕ_x^2)_{x∈E} under P^0_{x_0} ⊗ P^{G,U}, has the same law as ((1/2)(ϕ_x + √(2u))^2)_{x∈E} under P^{G,U}

(we will later explain the origin of the above terminology, see Remark 2.19).
Proof. Consider as in (2.66) an exponential variable Y with parameter λ > 0 (see (2.63)), under some auxiliary probability Q. First assume that we can show that

(2.82) (ℓ^x_{σ_Y})_{x∈E} under P^0_{x_0} ⊗ Q, has the same law as (L^x_∞)_{x∈E} under P_{x_0},

and let us explain how (2.81) follows.
By (2.67) and (2.82), we then find that under P^0_{x_0} ⊗ P^{G,U} ⊗ Q,

(2.83) (ℓ^x_{σ_Y} + (1/2) ϕ_x^2)_{x∈E} has the same law as ((1/2)(ϕ_x + √(2Y))^2)_{x∈E}.

As a result, for any V: E → R_+:

(2.84) ∫_0^∞ E^0_{x_0} ⊗ E^{G,U}[exp{−∑_{x∈E} V(x)(ℓ^x_{σ_u} + (1/2) ϕ_x^2)}] e^{−λu} du
= ∫_0^∞ E^{G,U}[exp{−∑_{x∈E} V(x)(1/2)(ϕ_x + √(2u))^2}] e^{−λu} du

(indeed, multiplying both members by λ yields on the left-hand side the P^0_{x_0} ⊗ P^{G,U} ⊗ Q-expectation of exp{−∑_{x∈E} V(x)(ℓ^x_{σ_Y} + (1/2) ϕ_x^2)}, and on the right-hand side the corresponding expectation of exp{−∑_{x∈E} V(x)(1/2)(ϕ_x + √(2Y))^2}).
Now

u ≥ 0 → E^0_{x_0} ⊗ E^{G,U}[exp{−∑_{x∈E} V(x)(ℓ^x_{σ_u} + (1/2) ϕ_x^2)}] ≥ 0

is a bounded right-continuous function (because u → σ_u is right-continuous, t ≥ 0 → ℓ^x_t ≥ 0 is continuous, and we use dominated convergence). Similarly, u ≥ 0 → E^{G,U}[exp{−∑_{x∈E} V(x)(1/2)(ϕ_x + √(2u))^2}] ≥ 0 is a bounded continuous function. By (2.84) their Laplace transforms are equal, and hence they are equal. But this implies that for any u > 0, the Laplace transform of the law of (ℓ^x_{σ_u} + (1/2) ϕ_x^2)_{x∈E} under P^0_{x_0} ⊗ P^{G,U} is equal to the Laplace transform of the law of ((1/2)(ϕ_x + √(2u))^2)_{x∈E} under P^{G,U}, and the claim (2.81) follows.
So there remains to prove (2.82). For this purpose it is convenient to introduce the time-changed process defined similarly as in (1.96):

(2.85) X̄^0_u = X^0_{τ^0_u}, for u ≥ 0, where τ^0_u = inf{t ≥ 0; ℓ^0_t ≥ u} = ∫_0^u λ^0_{X̄^0_v} dv, with ℓ^0_t = ∑_{x∈E} ℓ^x_t = ∫_0^t 1/λ^0_{X^0_s} ds (t ≥ 0 → ℓ^0_t ≥ 0 is an increasing bijection of R_+).

If we now define, cf. (1.97),

(2.86) ℓ̄^x_u = ∫_0^u 1{X̄^0_v = x} dv, for u ≥ 0, x ∈ E,

then as in (1.99), (1.100) we see that

(2.87) X^0_t = X̄^0_{ℓ^0_t}, t ≥ 0, and ℓ^x_t = ℓ̄^x_{ℓ^0_t}, for x ∈ E, t ≥ 0.
Now, corresponding to (2.80), we can introduce

(2.88) σ̄_v = inf{u ≥ 0; ℓ̄^{x_0}_u > v} (2.87),(2.80)= ℓ^0_{σ_v}, for v > 0,

so that

(2.89) ℓ^x_{σ_Y} (2.87)= ℓ̄^x_{ℓ^0_{σ_Y}} (2.88)= ℓ̄^x_{σ̄_Y}, for any x ∈ E.

The key to the identity in law (2.82) will come from the next representation of the law of X. under P_x.

Lemma 2.18. (x ∈ E)

(2.90) Z_u def= X̄^0_u, for u < σ̄_Y, def= ∆, for u ≥ σ̄_Y, has the same law (on D_E) under P^0_x ⊗ Q as X_u, u ≥ 0, under P_x.
Proof. We consider 0 = u_0 < u_1 < … < u_n and f_0, f_1, …, f_n: E → R. Then

(2.91) E^0_x ⊗ E^Q[f_0(Z_{u_0}) f_1(Z_{u_1}) … f_n(Z_{u_n})] (2.90)= E^0_x ⊗ E^Q[f_0(X̄^0_{u_0}) f_1(X̄^0_{u_1}) … f_n(X̄^0_{u_n}) 1{u_n < σ̄_Y}].

Note that by (2.88)

{ℓ̄^{x_0}_{u_n} < Y} ⊆ {u_n < σ̄_Y} ⊆ {ℓ̄^{x_0}_{u_n} ≤ Y},

and the Q-probability of both events on the right-hand side and on the left-hand side is equal to exp{−λ ℓ̄^{x_0}_{u_n}}. So integrating over Q in the second line of (2.91), we see that the expression on the first line equals

(2.92) E^0_x[f_0(X̄^0_{u_0}) f_1(X̄^0_{u_1}) … f_n(X̄^0_{u_n}) e^{−λ ℓ̄^{x_0}_{u_n}}].
By the corresponding statement to (1.98), we know that X̄^0_u, u ≥ 0, is a Markov chain with Markovian transition semi-group

R̄^0_t f(x) = E^0_x[f(X̄^0_t)] = e^{t L̄^0} f(x), t ≥ 0, where

L̄^0 f(x) = ∑_{y∈E} c_{x,y} f(y) − λ^0_x f(x), for f: E → R.

The application of the Markov property to (2.92) at times u_{n−1}, …, u_0 shows that the expression in (2.92) equals

(f_0 S^0_{u_1} f_1 S^0_{u_2−u_1} f_2 … f_{n−1} S^0_{u_n−u_{n−1}} f_n)(x),

where

S^0_u f(x) def= E^0_x[f(X̄^0_u) e^{−λ ℓ̄^{x_0}_u}], for f: E → R, x ∈ E, u ≥ 0.

The corresponding version of the Feynman-Kac formula (1.105), see (2.86), shows that

S^0_u f(x) = e^{u(L̄^0 − λ 1_{x_0})} f(x) = e^{u L̄} f(x),
because κ_{x_0} = λ = λ_{x_0} − λ^0_{x_0}, cf. (2.63). We have thus found that

E^0_x ⊗ E^Q[f_0(Z_{u_0}) f_1(Z_{u_1}) … f_n(Z_{u_n})] = (f_0 e^{u_1 L̄} f_1 e^{(u_2−u_1) L̄} f_2 … f_{n−1} e^{(u_n−u_{n−1}) L̄} f_n)(x),

and by (1.98) we see that this is equal to E_x[f_0(X_{u_0}) f_1(X_{u_1}) … f_n(X_{u_n})]. From this and the fact that Z. and X. remain in ∆ once they reach ∆, one easily deduces that the finite-dimensional marginals of Z. and X. coincide, whence (2.90). This concludes the proof of the lemma.
We can now conclude the proof of (2.82). For all x ∈ E, we have ℓ^x_{σ_Y} (2.89)= ℓ̄^x_{σ̄_Y} = ∫_0^∞ 1{Z_u = x} du, by the definition of Z. So under P^0_{x_0} ⊗ Q,

(2.93) (ℓ^x_{σ_Y})_{x∈E} = (∫_0^∞ 1{Z_u = x} du)_{x∈E} (law),(2.90)= (∫_0^∞ 1{X_u = x} du)_{x∈E} (1.97)= (L̄^x_∞)_{x∈E} (1.101)= (L^x_∞)_{x∈E}, under P_{x_0},

and we have completed the proof of (2.82) and hence of (2.81).
Remark 2.19.

1) The "generalized second Ray-Knight theorem" was originally proved in [8]. The terminology stems from the fact that in the case of Brownian motion, when x_0 = 0, the statement corresponding to (2.81) yields that, see Marcus-Rosen [19], p. 53, for any u > 0,

(2.94) (L^x_{σ_u} + B_x^2)_{x≥0} has the same law as ((B_x + √u)^2)_{x≥0},

when (L^z_t, z ∈ R, t ≥ 0) and (B_x, x ≥ 0) are independent and respectively distributed as the local time process of a Brownian motion starting at 0 and a Brownian motion starting at 0, and we have set

σ_u = inf{t ≥ 0; L^0_t > u}.

This statement, using arguments described earlier, is equivalent to the more traditional formulation:

(2.95) Under Wiener measure starting at 0, (L^x_{σ_u})_{x≥0} has the same law as a zero-dimensional squared Bessel process starting at u (i.e. BESQ^0(u) in the notation below (2.54)).

2) The same argument that we used below (2.57) shows that one also has a similar identity for the random walk on Z, when c_{z,z+1} = 1/2, for all z. Namely, for u > 0,

(2.96) (ℓ^x_{σ_u})_{x∈N} under P^0_0 has the same law as the restriction to integer points x ∈ N of L^x_{σ_u} under Wiener measure in (2.95).
3) In the case of random interlacements on a transient weighted graph, one can establish an identity in law in the spirit of the generalized second Ray-Knight theorem, see [28]. It relates the field of occupation times of random interlacements at level u to the Gaussian free field on the transient weighted graph, cf. (4.86).
Complement: a proof of (2.67) based on the Dynkin isomorphism theorem

We now provide a second proof of Proposition 2.15, which makes direct use of the Dynkin isomorphism theorem. We recall the notation (2.63) - (2.66).
Second proof of (2.67):

By the Dynkin isomorphism theorem, cf. (2.33), we see that

(2.97) (L^x_∞ + (1/2) ϕ_x^2)_{x∈E} under P̄_{x_0,x_0} ⊗ P^G, has the same law as ((1/2) ϕ_x^2)_{x∈E} under μ_0,

where we have introduced the probabilities

(2.98) P̄_{x_0,x_0} = (1/g(x_0, x_0)) P_{x_0,x_0} (on Γ), μ_0 = (1/g(x_0, x_0)) ϕ_{x_0}^2 P^G (on R^E).
The next observation is that if we consider the process with trajectories in D_E,

X_{t+}(γ) = lim_{ε↓0} X_{t+ε}(γ), for γ ∈ Γ

(the only time where X_t(γ) ≠ X_{t+}(γ) is when t = ζ(γ), the duration of γ, see below (2.16)), we have the following:

(2.99) the law on D_E of X_{t+}, t ≥ 0, under P̄_{x_0,x_0} is equal to P_{x_0}.

Indeed, g(·, x_0) = g(x_0, x_0), by (2.71), and looking at (2.23), we see that the finite-dimensional distributions of X_t, t ≥ 0, under P̄_{x_0,x_0}, which coincide with those of X_{t+}, t ≥ 0, under P̄_{x_0,x_0}, since P̄_{x_0,x_0}[ζ = t] = 0, for any t, are equal to the finite-dimensional distributions of X_t, t ≥ 0 (i.e. the canonical process on D_E), under P_{x_0}. The claim (2.99) follows.

As a result of (2.99) and of the formula L^x_∞ = (1/λ_x) ∫_0^∞ 1{X_s = x} ds, x ∈ E, we see that

(2.100) L^x_∞, x ∈ E, under P̄_{x_0,x_0}, has the same law as L^x_∞, x ∈ E, under P_{x_0}.
As for the right-hand side of (2.97), we use the following

Lemma 2.20.

(2.101) ((1/2) ϕ_x^2)_{x∈E} under μ_0, has the same law as ((1/2)(ψ_x + X)^2)_{x∈E},

where ψ_x, x ∈ E, is P^{G,U}-distributed and independent from X, which is distributed as (Z^2 + Z′^2 + Z̄^2)^{1/2}, where Z, Z′, Z̄ are independent centered Gaussian variables with variance λ^{−1}.
Proof. Note that for any x, y ∈ E:

E^G[(ϕ_x − ϕ_{x_0}) ϕ_{x_0}] = g(x, x_0) − g(x_0, x_0) (2.71)= 0, and

E^G[(ϕ_x − ϕ_{x_0})(ϕ_y − ϕ_{x_0})] (2.71)= g_U(x, y).

As a result (this is also a special case of Proposition 2.3), we find that:

(2.102) ϕ_{x_0} and (ϕ_x − ϕ_{x_0})_{x∈E} are independent under P^G, and (ϕ_x − ϕ_{x_0})_{x∈E} is P^{G,U}-distributed.

As a consequence, under μ_0,

((1/2) ϕ_x^2)_{x∈E} = ((1/2)(ϕ_x − ϕ_{x_0} + ϕ_{x_0})^2)_{x∈E} (law)= ((1/2)(ψ_x + |ϕ_{x_0}|)^2)_{x∈E},

where ψ and |ϕ_{x_0}| are independent in the last expression, with ψ having distribution P^{G,U}, and |ϕ_{x_0}| being under the law μ_0. The claim (2.101) will thus follow once we see that

(2.103) under μ_0, |ϕ_{x_0}| has the distribution of the variable X.
To this end we note that for f: R_+ → R bounded measurable,

E^{μ_0}[f(|ϕ_{x_0}|)] (2.98),(2.71)= λ E^G[ϕ_{x_0}^2 f(|ϕ_{x_0}|)] = (λ^{3/2}/(2π)^{1/2}) ∫_R t^2 f(|t|) e^{−λt^2/2} dt = (2λ^{3/2}/(2π)^{1/2}) ∫_0^∞ r^2 f(r) e^{−λr^2/2} dr,

and that, using polar coordinates in R^3,

E[f(X)] = (λ^{3/2}/(2π)^{3/2}) ∫_{R^3} f(|z|) e^{−λ|z|^2/2} dz = (λ^{3/2}/(2π)^{3/2}) 4π ∫_0^∞ r^2 f(r) e^{−λr^2/2} dr = E^{μ_0}[f(|ϕ_{x_0}|)],

whence the claim (2.101).
By (2.97), (2.100), (2.101) we can conclude, with L^x_∞, x ∈ E, now considered under P_{x_0} (and (ψ_x + Z)_{x∈E} having the law P^G, see (2.70), (2.71)), that:

(L^x_∞ + (1/2)(ψ_x + Z)^2)_{x∈E} (law)= ((1/2)(ψ_x + X)^2)_{x∈E}.

Adding to both sides an independent copy (1/2)(ψ′_x)^2, x ∈ E, of (1/2) ψ_x^2, x ∈ E, we see that

(2.104) (L^x_∞ + (1/2)(ψ_x + Z)^2 + (1/2)(ψ′_x)^2)_{x∈E} (law)= ((1/2)(ψ_x + X)^2 + (1/2)(ψ′_x)^2)_{x∈E} (law),(2.61)= ((1/2)(ψ_x + √(Z^2 + Z̄^2))^2 + (1/2)(ψ′_x + Z′)^2)_{x∈E} (law)= ((1/2)(ψ_x + √(2Y))^2 + (1/2)(ψ′_x + Z′)^2)_{x∈E},

where Y, as in (2.66), is an independent exponential variable with parameter λ. Of course, (1/2)(ψ_x + Z)^2, x ∈ E, has the same law as (1/2)(ψ′_x + Z′)^2, x ∈ E, and simplifying on both sides of (2.104), i.e. applying (2.56), we obtain:

(2.105) (L^x_∞ + (1/2)(ψ′_x)^2)_{x∈E} (law)= ((1/2)(ψ_x + √(2Y))^2)_{x∈E}.

This is simply a reformulation of (2.67).
3 The Markovian loop

In this chapter we will introduce the measure describing the Markovian loop and study some of its properties. For this purpose it will be convenient to first discuss rooted (or based) loops as well as pointed loops. The Markovian loops come up as unrooted loops, which live in the space of rooted loops modulo time-shift. We refer to Le Jan [18] for an extensive discussion of Markovian loops and their properties.
3.1 Rooted loops and the measure µr on rooted loops
This section is devoted to the introduction of the general set-up for loops, the construction of the σ-finite measure μ_r governing rooted loops, and the discussion of some of its basic properties.

We first introduce the space of rooted loops of duration t > 0:

(3.1) L_{r,t} = the space of right-continuous functions [0, t] → E, with finitely many jumps and the same value at 0 and t.

We denote by X_s, 0 ≤ s ≤ t, the canonical coordinates, and we extend periodically the function s ∈ [0, t] → X_s(γ), for any γ ∈ L_{r,t}, so that X_s(γ) is well-defined for any s ∈ R. Let us underline that the spaces L_{r,t} are pairwise disjoint, as t varies over (0, ∞).

We then define the space of rooted loops via the formula

(3.2) L_r = ∪_{t>0} L_{r,t},

and for γ ∈ L_r, a rooted loop, we denote the duration of γ by

(3.3) ζ(γ) = the unique t > 0 such that γ ∈ L_{r,t}.

We define a σ-algebra on L_r in a similar fashion as below (2.18), i.e. we identify L_r with L_{r,1} × (0, ∞) via the map (w, t) ∈ L_{r,1} × (0, ∞) → γ(·) = w(·/t) ∈ L_r, and endow L_{r,1} × (0, ∞) with the canonical product σ-algebra (where L_{r,1} is endowed with the σ-algebra generated by the maps X_s, 0 ≤ s ≤ 1, from L_{r,1} into E).
It will be convenient to parametrize rooted loops with the help of the random variables which we now introduce. The discrete duration of the rooted loop γ is

(3.4) N(γ) = (the total number of jumps of 0 ≤ s ≤ t → X_s) ∨ 1, if t = ζ(γ) > 0 stands for the duration of γ.

When N(γ) = n > 1,

(3.5) 0 < T_1(γ) < … < T_{n−1}(γ) < T_n(γ) ≤ ζ(γ)

are the successive jump times of the rooted loop γ, and

(3.6) Z_0(γ) = γ(0), Z_1(γ) = γ(T_1), …, Z_{n−1}(γ) = γ(T_{n−1}), Z_n(γ) = γ(T_n) = Z_0(γ)

are the successive positions of the rooted loop.

We also extend the definition of Z_p(γ) to all p ∈ Z by periodicity (so that when N(γ) = n, Z_n(γ) = Z_0(γ), Z_{n+1}(γ) = Z_1(γ), …).

In the case where N(γ) = 1, the rooted loop γ does not move away from its initial position Z_0(γ), and has duration ζ(γ); we will call γ a trivial loop. The case where

(3.7) N(γ) = n > 1, and T_n(γ) = ζ(γ),

corresponds to the situation where the rooted loop has a jump at time ζ(γ) (which, taking into account the periodicity of the function s → X_s(γ), "corresponds to time 0"); we will call γ such that either N(γ) = 1, or N(γ) = n > 1 and T_n(γ) = ζ(γ), a pointed loop.

With the above variables we can of course reconstruct γ ∈ L_r, since

(3.8) for N(γ) = 1: γ(s) = Z_0, for 0 ≤ s ≤ ζ(γ); for N(γ) = n > 1: γ(s) = Z_0, for 0 ≤ s < T_1(γ); γ(s) = Z_k, for T_k(γ) ≤ s < T_{k+1}(γ), 1 ≤ k ≤ n − 1; and γ(s) = Z_0, for T_n(γ) ≤ s ≤ ζ(γ).
We also define the shift θ_v: L_r → L_r, for v ∈ R, via:

(3.9) θ_v(γ) ∈ L_r, for γ ∈ L_r, is the rooted loop γ′ such that ζ(γ′) = ζ(γ), and X_s(γ′) = X_{s+v}(γ), for all s ∈ R.
The measure μ_r on rooted loops:

For any t > 0 and x ∈ E, we have defined the finite measure P^t_{x,x} in (2.19) as the image of (1{X_t = x}/λ_x) P_x on Γ_t, under the map (X_s)_{0≤s≤t}. The measure P^t_{x,x} is in fact concentrated on Γ_t ∩ {γ(0) = γ(t)} ⊆ L_{r,t}, cf. (3.1), and we can view t > 0 → P^t_{x,x} as a positive measure kernel from (0, ∞) to L_r. We then introduce the measure on L_r:

(3.10) μ_r[B] = ∑_{x∈E} ∫_0^∞ P^t_{x,x}(B) λ_x (dt/t), for measurable subsets B of L_r.

Note that for a > 0, by (2.20), μ_r[ζ ≥ a] = ∑_{x∈E} ∫_a^∞ r_t(x, x) λ_x (dt/t) ≤ (1/a) ∑_{x∈E} g(x, x) λ_x < ∞, whereas μ_r[L_r] = ∞. So we see that μ_r is a σ-finite measure. We now collect some useful properties of μ_r.
Proposition 3.1. For 0 ≤ t_1 < … < t_k < t and x_1, …, x_k ∈ E, one has

(3.11) μ_r[X_{t_1} = x_1, …, X_{t_k} = x_k, ζ ∈ t + dt]
= r_{t_2−t_1}(x_1, x_2) λ_{x_2} … r_{t_k−t_{k−1}}(x_{k−1}, x_k) λ_{x_k} r_{t_1+t−t_k}(x_k, x_1) λ_{x_1} (dt/t), when k > 1,
= r_t(x_1, x_1) λ_{x_1} (dt/t), when k = 1

(see below for the precise meaning of this formula).

When n > 1, for t_i > 0, 1 ≤ i ≤ n, t > 0, and x_0, …, x_{n−1} ∈ E, one has

(3.12) μ_r(N = n, Z_0 = x_0, …, Z_{n−1} = x_{n−1}, T_1 ∈ t_1 + dt_1, …, T_n ∈ t_n + dt_n, ζ ∈ t + dt)
= p_{x_0,x_1} p_{x_1,x_2} … p_{x_{n−1},x_0} 1{0 < t_1 < … < t_{n−1} < t_n < t} (e^{−t}/t) dt_1 dt_2 … dt_n dt,

where the p_{x,y} are defined in (1.12) (see below for the precise meaning of this formula).

When n = 1, for x_0 ∈ E, t > 0,

(3.13) μ_r[N = 1, Z_0 = x_0, ζ ∈ t + dt] = e^{−t} (dt/t)

(see below for the precise meaning of this formula).
Proof.

• (3.11): The precise meaning of (3.11) is obtained by considering some measurable A ⊆ (t_k, ∞), replacing "ζ ∈ t + dt" on the left-hand side by "ζ ∈ A", and on the right-hand side multiplying by 1_A(t) and integrating the expression over t. When k > 1, we have for t > t_k, with the help of the Markov property (see also the proof of (2.23)):

P^t_{x,x}[X_{t_1} = x_1, …, X_{t_k} = x_k] λ_x (2.19)= r_{t_1}(x, x_1) λ_{x_1} r_{t_2−t_1}(x_1, x_2) λ_{x_2} … r_{t_k−t_{k−1}}(x_{k−1}, x_k) λ_{x_k} r_{t−t_k}(x_k, x) λ_x.

Summing over x and applying the Chapman-Kolmogorov identity, cf. (1.24), we find that

(3.14) ∑_{x∈E} P^t_{x,x}[X_{t_1} = x_1, …, X_{t_k} = x_k] λ_x = r_{t_2−t_1}(x_1, x_2) λ_{x_2} … r_{t_k−t_{k−1}}(x_{k−1}, x_k) λ_{x_k} r_{t_1+t−t_k}(x_k, x_1) λ_{x_1},

and (3.11) follows from the above formula and (3.10).

When k = 1, (3.14) is replaced for t > t_1 by

(3.15) ∑_{x∈E} P^t_{x,x}[X_{t_1} = x_1] λ_x = r_t(x_1, x_1) λ_{x_1},

and the last line of (3.11) follows similarly.
• (3.12): We use a similar procedure as indicated at the beginning of the proof of (3.11) to give a precise meaning to (3.12), see also Remark 2.6. We then observe that for t > 0, x ∈ E,

(3.16) P^t_{x,x}[N = n, Z_0 = x_0, …, Z_{n−1} = x_{n−1}, T_1 ∈ t_1 + dt_1, …, T_n ∈ t_n + dt_n] λ_x
(2.19)= P_{x_0}[X. has n jumps in [0, t], X_{T_1} = x_1, …, X_{T_{n−1}} = x_{n−1}, X_{T_n} = x_0, T_1 ∈ t_1 + dt_1, …, T_n ∈ t_n + dt_n] δ_{x,x_0}
= P_{x_0}[Z_1 = x_1, Z_2 = x_2, …, Z_{n−1} = x_{n−1}, Z_n = x_0] · P_{x_0}[T_n < t < T_{n+1}, T_1 ∈ t_1 + dt_1, …, T_n ∈ t_n + dt_n] δ_{x,x_0}
= p_{x_0,x_1} p_{x_1,x_2} … p_{x_{n−1},x_0} 1{0 < t_1 < t_2 < … < t_n < t} e^{−t} dt_1 dt_2 … dt_n δ_{x,x_0},

where we have also denoted by T_k, k ≥ 1, the successive jump times of the continuous-time Markov chain, see Remark 1.1.

Summing over x ∈ E and multiplying by dt/t, in view of (3.10), (3.16) yields

μ_r[N = n, Z_0 = x_0, …, Z_{n−1} = x_{n−1}, T_1 ∈ t_1 + dt_1, …, T_n ∈ t_n + dt_n, ζ ∈ t + dt] = p_{x_0,x_1} p_{x_1,x_2} … p_{x_{n−1},x_0} 1{0 < t_1 < t_2 < … < t_n < t} (e^{−t}/t) dt_1 dt_2 … dt_n dt,

i.e. we have proved (3.12).
• (3.13): As above, we can write for t > 0 and x ∈ E,

P^t_{x,x}[N = 1, Z_0 = x_0] λ_{x_0} = P_{x_0}[X. has no jump in [0, t]] δ_{x,x_0} = P_{x_0}[T_1 > t] δ_{x,x_0} = e^{−t} δ_{x,x_0},

so that summing over x and multiplying by dt/t yields

μ_r[N = 1, Z_0 = x_0, ζ ∈ t + dt] = (e^{−t}/t) dt,

i.e. we have proved (3.13).
We continue the discussion of the properties of the measure μ_r on rooted (also called based) loops, which was introduced in (3.10). In particular, we will see that μ_r is invariant under time-shift and, in a suitable sense, under time-reversal as well.
Proposition 3.2. For n > 1, x_0, …, x_{n−1} ∈ E, one has

(3.17) μ_r[N = n, Z_0 = x_0, …, Z_{n−1} = x_{n−1}] = (1/n) p_{x_0,x_1} p_{x_1,x_2} … p_{x_{n−1},x_0} = (1/n) (c_{x_0,x_1} c_{x_1,x_2} … c_{x_{n−1},x_0}) / (λ_{x_0} λ_{x_1} … λ_{x_{n−1}}),

(3.18) μ_r[N = n] = (1/n) Tr(P^n) (recall the notation from (1.19)),

(3.19) μ_r[N > 1] = −log(det(I − P)) < ∞,

(3.20) μ_r[N > 1, (Z_{k+m})_{m∈Z} ∈ ·] = μ_r[N > 1, (Z_m)_{m∈Z} ∈ ·], for any k ∈ Z (stationarity property of the discrete loop),

(3.21) θ_v ∘ μ_r = μ_r, for any v ∈ R, in the notation of (3.9) (stationarity property of the continuous-time loop).
(stationarity property of the continuous-time loop).Proof.
• (3.17):µr[N = n, Z0 = x0, . . . , Zn−1 = xn−1]
(3.12)= px0,x1 . . . pxn−1,x0
∫
0<t1<···<tn<t
e−t
tdt1 . . . dtn dt
= px0,x1 . . . pxn−1,x0
∫ ∞
0
tn−1
n!e−t dt =
1
npx0,x1 . . . pxn−1,x0
(1.12)=
1
n
cx0,x1 . . . cxn−1,x0
λx0 . . . λxn−1
, whence (3.17).
• (3.18):

μ_r[N = n] = ∑_{x_0,…,x_{n−1}∈E} μ_r[N = n, Z_0 = x_0, …, Z_{n−1} = x_{n−1}] (3.17)= (1/n) ∑_{x_0∈E} ⟨1_{x_0}, P^n 1_{x_0}⟩ = (1/n) Tr(P^n),

whence (3.18).
• (3.19):

μ_r[N > 1] = ∑_{n=2}^∞ (1/n) Tr(P^n) = ∑_{n=1}^∞ (1/n) Tr(P^n), since Tr(P) = 0,
= Tr(∑_{n=1}^∞ (1/n) P^n) (the series is convergent by (1.29)).

As already observed, by (1.29) all the eigenvalues of the self-adjoint operator P on L^2(dλ), say γ_1 ≤ … ≤ γ_{|E|}, belong to (−1, 1). So we have the identity

(3.22) ∑_{n=1}^∞ (1/n) P^n = −log(I − P),

which can for instance be seen after diagonalization of P in some orthonormal basis of L^2(dλ), and hence we find that

μ_r[N > 1] = Tr(−log(I − P)) = ∑_{i=1}^{|E|} −log(1 − γ_i) = −log ∏_{i=1}^{|E|} (1 − γ_i) = −log(det(I − P)),

and (3.19) is proved.
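As a numerical aside (my own; the small weighted graph with killing is an arbitrary choice), one can compare the truncated series ∑_{n≥1} Tr(P^n)/n with the closed form −log det(I − P) of (3.19):

```python
import numpy as np

# small triangle graph: symmetric conductances c_{x,y} and a killing measure,
# so that P = (c_{x,y}/lambda_x) has zero diagonal and spectral radius < 1
C = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])
kappa = np.array([0.3, 0.1, 0.2])
lam = C.sum(axis=1) + kappa
P = C / lam[:, None]

# truncated series sum_{n>=1} Tr(P^n)/n  (the n = 1 term vanishes: Tr(P) = 0)
series, Pn = 0.0, np.eye(3)
for n in range(1, 2000):
    Pn = Pn @ P
    series += np.trace(Pn) / n

closed = -np.log(np.linalg.det(np.eye(3) - P))   # (3.19)
assert abs(series - closed) < 1e-8
print(closed)   # mu_r[N > 1] for this toy chain
```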
• (3.20): We pick n > 1, and first show that

(3.23) μ_r[N = n, (Z_{k+m})_{m∈Z} ∈ ·] = μ_r[N = n, (Z_m)_{m∈Z} ∈ ·].

On {N = n}, m → Z_m has period n (so Z_{m+ℓn} = Z_m, for all m, ℓ ∈ Z), see below (3.6). We can thus assume that 0 < k < n, and restrict m to {0, …, n − 1}. But for x_0, …, x_{n−1} ∈ E,

μ_r[N = n, (Z_m)_{0≤m<n} = (x_m)_{0≤m<n}] (3.17)= (1/n) p_{x_0,x_1} … p_{x_{n−1},x_0},

whereas

μ_r[N = n, (Z_{k+m})_{0≤m<n} = (x_m)_{0≤m<n}]
= μ_r[N = n, Z_k = x_0, Z_{k+1} = x_1, …, Z_{n−1} = x_{n−1−k}, Z_0 = x_{n−k}, …, Z_{k−1} = x_{n−1}]
(3.17)= (1/n) p_{x_{n−k},x_{n−k+1}} … p_{x_{n−2},x_{n−1}} p_{x_{n−1},x_0} … p_{x_{n−1−k},x_{n−k}}
= (1/n) p_{x_0,x_1} … p_{x_{n−1},x_0} (the same cyclic product of factors),

whence (3.23). Summing over n > 1 in (3.23) yields (3.20).
• (3.21): We use the following lemma:

Lemma 3.3. (t > 0, v ∈ R)

(3.24) θ_v ∘ (∑_{x∈E} P^t_{x,x} λ_x) = ∑_{x∈E} P^t_{x,x} λ_x.

Proof. The measure ∑_{x∈E} P^t_{x,x} λ_x is concentrated on L_{r,t}, and for γ ∈ L_{r,t}, θ_{ℓt}(γ) = γ, for any ℓ ∈ Z, due to the fact that s ∈ R → X_s(γ) ∈ E has period t. We can thus assume that 0 < v < t. The claim (3.24) will then follow once we show that for any 0 < t_1 < … < t_k = t − v < t_{k+1} < … < t_n = t, one has, for x_1, …, x_n ∈ E,

(3.25) ∑_{x∈E} P^t_{x,x}[X_{t_1} = x_1, …, X_{t_n} = x_n] λ_x = ∑_{x∈E} P^t_{x,x}[X_{v+t_1} = x_1, …, X_{v+t_k} = x_k, X_{v+t_{k+1}−t} = x_{k+1}, …, X_v = x_n] λ_x

(note that t_k = t − v, so that v + t_k = t). The expression on the left-hand side of (3.25), as in (3.14), is equal to

r_{t_2−t_1}(x_1, x_2) λ_{x_2} … r_{t_n−t_{n−1}}(x_{n−1}, x_n) λ_{x_n} r_{t_1}(x_n, x_1) λ_{x_1} (recall that t_n = t).

Note that 0 < v + t_{k+1} − t < … < v = v + t_n − t < v + t_1 < … < v + t_k = t, and therefore, using once again the calculation in (3.14), the expression on the right-hand side of (3.25) equals:

r_{t_{k+2}−t_{k+1}}(x_{k+1}, x_{k+2}) λ_{x_{k+2}} … r_{t_n−t_{n−1}}(x_{n−1}, x_n) λ_{x_n} r_{t_1}(x_n, x_1) λ_{x_1} … r_{t_{k+1}−t_k}(x_k, x_{k+1}) λ_{x_{k+1}}.

This shows that (3.25) holds and completes the proof of (3.24).
We can now complete the proof of (3.21). When B is a measurable subset of L_r, we thus find that:

μ_r[B] (3.10)= ∫_0^∞ ∑_{x∈E} P^t_{x,x}[B] λ_x (dt/t),

and therefore

θ_v ∘ μ_r[B] = μ_r[θ_v^{−1}(B)] = ∫_0^∞ ∑_{x∈E} P^t_{x,x}[θ_v^{−1}(B)] λ_x (dt/t) (3.24)= ∫_0^∞ ∑_{x∈E} P^t_{x,x}[B] λ_x (dt/t) = μ_r[B].

This completes the proof of (3.21).
We will now highlight some invariance properties of the measure μ_r under time-reversal. For this purpose it is convenient to introduce the time-reversal map θ̌: L_r → L_r via:

(3.26) θ̌(γ) ∈ L_r, for γ ∈ L_r, is the rooted loop γ′ such that ζ(γ′) = ζ(γ) and X_s(γ′) = lim_{ε↓0} X_{−s−ε}(γ) = X_{(−s)−}(γ), for all s ∈ R.
Proposition 3.4.

(3.27) μ_r[N > 1, (Z_{−m})_{m∈Z} ∈ ·] = μ_r[N > 1, (Z_m)_{m∈Z} ∈ ·] (time-reversal invariance of the discrete loop).

(3.28) θ̌ ∘ μ_r = μ_r (time-reversal invariance of the continuous loop).

Proof.

• (3.27): One could use (3.28), but it is instructive to give a direct proof. For n > 1, x_0, …, x_{n−1} ∈ E, one has by (3.17)

μ_r[N = n, Z_0 = x_0, …, Z_{n−1} = x_{n−1}] = (1/n) (c_{x_0,x_1} c_{x_1,x_2} … c_{x_{n−1},x_0}) / (λ_{x_0} … λ_{x_{n−1}}),

whereas

μ_r[N = n, Z_0 = x_0, Z_{−1} = x_1, …, Z_{−(n−1)} = x_{n−1}]
= μ_r[N = n, Z_0 = x_0, Z_1 = x_{n−1}, Z_2 = x_{n−2}, …, Z_{n−1} = x_1]
(3.17)= (1/n) (c_{x_0,x_{n−1}} c_{x_{n−1},x_{n−2}} … c_{x_2,x_1} c_{x_1,x_0}) / (λ_{x_0} λ_{x_{n−1}} … λ_{x_1})
= μ_r[N = n, Z_0 = x_0, …, Z_{n−1} = x_{n−1}],

using the symmetry c_{x,y} = c_{y,x}. Summing over n > 1 yields the claim (3.27).
• (3.28): We first show that for $t > 0$,

(3.29) $\displaystyle \check{\theta} \circ \Bigl( \sum_{x\in E} P^t_{x,x}\, \lambda_x \Bigr) = \sum_{x\in E} P^t_{x,x}\, \lambda_x.$

Indeed, for arbitrary $k \ge 1$, $0 < t_1 < \dots < t_k < t$, and $x_1, \dots, x_k \in E$, by (3.14) we have
\[
\sum_{x\in E} P^t_{x,x}[X_{t_1} = x_1, \dots, X_{t_k} = x_k]\,\lambda_x
= r_{t_2-t_1}(x_1,x_2) \dots r_{t_k-t_{k-1}}(x_{k-1},x_k)\, r_{t_1+t-t_k}(x_k,x_1)\, \lambda_{x_1} \dots \lambda_{x_k},
\]
whereas, using (3.26) and the facts that $s \to X_s(\gamma)$ has period $t$ when $\gamma \in L_{r,t}$, and that $P^t_{x,x}[v \text{ is a jump time of } s \to X_s(\gamma)] = 0$, for any $v \in \mathbb{R}$,
\[
\check{\theta} \circ \Bigl( \sum_{x\in E} P^t_{x,x}\, \lambda_x \Bigr)[X_{t_1} = x_1, \dots, X_{t_k} = x_k]
= \sum_{x\in E} P^t_{x,x}[X_{t'_1} = x_k,\ X_{t'_2} = x_{k-1}, \dots, X_{t'_k} = x_1]\,\lambda_x
\]
(with $t'_1 = t - t_k < t'_2 = t - t_{k-1} < \dots < t'_k = t - t_1$)
\[
\overset{(3.14)}{=} r_{t'_2-t'_1}(x_k,x_{k-1})\, r_{t'_3-t'_2}(x_{k-1},x_{k-2}) \dots r_{t'_1+t-t'_k}(x_1,x_k)\, \lambda_{x_1} \dots \lambda_{x_k}
\]
(using the symmetry of $r_s(\cdot,\cdot)$, see (1.23), and the definition of the $t'_i$)
\[
= r_{t_k-t_{k-1}}(x_{k-1},x_k)\, r_{t_{k-1}-t_{k-2}}(x_{k-2},x_{k-1}) \dots r_{t_1+t-t_k}(x_k,x_1)\, \lambda_{x_1} \dots \lambda_{x_k},
\]
and the claim (3.29) now follows.
As a result, for a measurable subset $B$ of $L_r$, we find that
\[
\check{\theta} \circ \mu_r[B] \overset{(3.10)}{=} \int_0^\infty \sum_{x\in E} P^t_{x,x}\bigl[\check{\theta}^{-1}(B)\bigr]\,\lambda_x\, \frac{dt}{t}
\overset{(3.29)}{=} \int_0^\infty \sum_{x\in E} P^t_{x,x}[B]\,\lambda_x\, \frac{dt}{t}
\overset{(3.10)}{=} \mu_r[B],
\]
and this completes the proof of (3.28).
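The discrete statement (3.27) ultimately rests on the symmetry of the weights, $c_{x,y} = c_{y,x}$: the product of conductances along a cycle equals the product along the reversed cycle. A minimal numerical sketch (the conductances and killing measure below are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 5
# symmetric conductances c_{x,y} and a killing measure kappa (illustrative values)
c = rng.random((n_states, n_states))
c = (c + c.T) / 2
np.fill_diagonal(c, 0.0)
kappa = rng.random(n_states)
lam = c.sum(axis=1) + kappa          # lambda_x = sum_y c_{x,y} + kappa_x

def mu_r_weight(cycle):
    """(3.17): mu_r[N=n, Z_0=x_0,...,Z_{n-1}=x_{n-1}] = (1/n) prod c / prod lambda."""
    n = len(cycle)
    num = np.prod([c[cycle[i], cycle[(i + 1) % n]] for i in range(n)])
    return num / (n * np.prod([lam[x] for x in cycle]))

cycle = [0, 3, 1, 4, 2]
# (Z_0, Z_{-1}, ..., Z_{-(n-1)}) visits the same points in reversed order
reversed_cycle = [cycle[0]] + cycle[1:][::-1]
assert np.isclose(mu_r_weight(cycle), mu_r_weight(reversed_cycle))
```

The assertion holds for any cycle, since both products involve the same unordered pairs $\{x_i, x_{i+1}\}$.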
3.2 Pointed loops and the measure µp on pointed loops
In this section we define the σ-finite measure µp governing pointed loops and relate it to the measure µr on rooted loops. As it turns out, the measures µr and µp agree on functions invariant under the shifts θv, v ∈ R.
We introduce the measurable subspace of $L_r$ of pointed loops, see below (3.7),

(3.30) $L_p = \{\gamma \in L_r;\ \gamma \text{ is trivial, or } N(\gamma) = n > 1 \text{ and } T_n(\gamma) = \zeta(\gamma)\} = \{\gamma \in L_r;\ \gamma \text{ is trivial, or } 0 \text{ is a jump time of } s \in \mathbb{R} \to X_s(\gamma)\}.$

It is convenient to introduce, for $\gamma \in L_r$ with $N(\gamma) = n > 1$, the variables describing the successive durations between jumps of the loop $\gamma$ (see also Figure 3.1):

(3.31) $\sigma_0(\gamma) = T_1(\gamma) + \zeta(\gamma) - T_n(\gamma), \quad \sigma_1(\gamma) = T_2(\gamma) - T_1(\gamma),\ \dots,\ \sigma_{n-1}(\gamma) = T_n(\gamma) - T_{n-1}(\gamma),$

so that

(3.32) $\sigma_0(\gamma) + \dots + \sigma_{n-1}(\gamma) = \zeta(\gamma).$
Fig. 3.1: the variables $\sigma_\ell$ and $T_k$ in a rooted loop $\gamma$ with $N(\gamma) = n$ and $\zeta(\gamma) = t$.
In the case where $\gamma$ is a pointed loop with $N(\gamma) = n > 1$, the variables $Z_0, \dots, Z_{n-1}$ and $\sigma_0, \dots, \sigma_{n-1}$ enable one to reconstruct $\gamma$, with the help of (3.8) and the identity $T_n(\gamma) = \zeta(\gamma)$. We now introduce a measure $\mu_p$ on $L_p$ via:

(3.33)
\[
\begin{aligned}
&\mu_p[N = 1,\ Z_0 = x_0,\ \zeta \in t + dt] = e^{-t}\, \frac{dt}{t}, \quad \text{for any } x_0 \in E,\ t > 0,\\
&\mu_p[N = n,\ Z_0 = x_0, \dots, Z_{n-1} = x_{n-1},\ \sigma_0 \in s_0 + ds_0, \dots, \sigma_{n-1} \in s_{n-1} + ds_{n-1}]\\
&\qquad = \frac{1}{n}\, p_{x_0,x_1} \dots p_{x_{n-1},x_0}\, e^{-(s_0 + \dots + s_{n-1})}\, ds_0 \dots ds_{n-1},
\quad \text{for } n > 1,\ x_0, \dots, x_{n-1} \in E,\ s_0, \dots, s_{n-1} > 0.
\end{aligned}
\]
The meaning of this formula is the same as in (3.11)–(3.13). We will now relate the measure $\mu_r$ on rooted loops and the measure $\mu_p$ on pointed loops.
When $\gamma \in L_r$ is such that $N(\gamma) = n > 1$, the loops $\theta_{T_m(\gamma)}(\gamma)$, for $1 \le m \le n$, are pointed. We will denote by $\theta_{T_m}$ the corresponding map from $L_r \cap \{N = n\} \ni \gamma \to \gamma' = \theta_{T_m(\gamma)}(\gamma) \in L_p \cap \{N = n\}$. Thus $\theta_{T_m} \circ (1\{N = n\}\mu_r)$ is a measure on $L_p \cap \{N = n\}$. As we will see below, it corresponds to a type of size-biased modification of $1\{N = n\}\mu_p$.
Proposition 3.5. ($n > 1$)

For any $1 \le m \le n$, $x_0, \dots, x_{n-1} \in E$, $s_0, s_1, \dots, s_{n-1} > 0$,

(3.34)
\[
\begin{aligned}
&\theta_{T_m} \circ (1\{N = n\}\mu_r)\, [Z_0 = x_0, \dots, Z_{n-1} = x_{n-1},\ \sigma_0 \in s_0 + ds_0, \dots, \sigma_{n-1} \in s_{n-1} + ds_{n-1}]\\
&\qquad = p_{x_0,x_1} \dots p_{x_{n-1},x_0}\, \frac{s_{n-m}}{s_0 + \dots + s_{n-1}}\, e^{-(s_0 + \dots + s_{n-1})}\, ds_0\, ds_1 \dots ds_{n-1}.
\end{aligned}
\]
For any $1 \le m \le n$,

(3.35) $\displaystyle \theta_{T_m} \circ (1\{N = n\}\mu_r) = n\, \frac{\sigma_{n-m}}{\sigma_0 + \dots + \sigma_{n-1}}\, 1\{N = n\}\mu_p.$

When $F\colon L_r \to \mathbb{R}_+$ is a bounded measurable function such that $F \circ \theta_v = F$, for all $v \in \mathbb{R}$, then one has

(3.36) $\displaystyle \int_{\{N=n\}} F\, d\mu_r = \int_{\{N=n\}} F\, d\mu_p$ (this equality holds as well when $n = 1$).
Proof.
• (3.34): We consider $h_0, \dots, h_{n-1}$ bounded measurable functions on $(0,\infty)$, and we extend the definition of $\sigma_\ell$, for $\ell \in \{0, \dots, n-1\}$, on $\{N = n\}$, to all $\ell \in \mathbb{Z}$, using periodicity (i.e. so that $\sigma_{\ell+kn} = \sigma_\ell$ for all $\ell, k \in \mathbb{Z}$). So we find that
\[
\begin{aligned}
&\int 1\{Z_0 = x_0, \dots, Z_{n-1} = x_{n-1}\}\, h_0(\sigma_0) \dots h_{n-1}(\sigma_{n-1})\, d\bigl(\theta_{T_m} \circ (1\{N = n\}\mu_r)\bigr)\\
&= \int_{\{N=n\}} 1\{Z_m = x_0,\ Z_{m+1} = x_1, \dots, Z_{m-1} = x_{n-1}\}\, h_0(\sigma_m)\, h_1(\sigma_{m+1}) \dots h_{n-1}(\sigma_{m-1})\, d\mu_r\\
&\overset{(3.12)}{=} p_{x_0,x_1} \dots p_{x_{n-1},x_0} \int_{0<t_1<\dots<t_n<t} h_0(t_{m+1}-t_m) \dots h_{n-m-1}(t_n - t_{n-1})\, h_{n-m}(t - t_n + t_1)\\
&\qquad\qquad\qquad h_{n-m+1}(t_2 - t_1) \dots h_{n-1}(t_m - t_{m-1})\, \frac{e^{-t}}{t}\, dt_1 \dots dt_n\, dt.
\end{aligned}
\]
We can perform a change of variables in the above integral, replacing $t_1, t_2, \dots, t_n, t$ by $t_1, s_1, s_2, \dots, s_{n-1}, s_0$, where
\[
t_1 = t_1,\quad t_2 = t_1 + s_1,\quad t_3 = t_1 + s_1 + s_2,\ \dots,\ t_n = t_1 + s_1 + \dots + s_{n-1},\quad t = s_1 + \dots + s_{n-1} + s_0,
\]
which bijectively maps the region $\{0 < t_1 < t_2 < \dots < t_n < t\}$ onto the region $\{0 < t_1 < s_0,\ s_1 > 0, \dots, s_{n-1} > 0\}$. So the above expression equals:
\[
\begin{aligned}
&p_{x_0,x_1} \dots p_{x_{n-1},x_0} \int_{\substack{0<t_1<s_0\\ s_1>0,\dots,s_{n-1}>0}} h_0(s_m) \dots h_{n-m-1}(s_{n-1})\, h_{n-m}(s_0)\, h_{n-m+1}(s_1) \dots h_{n-1}(s_{m-1})\\
&\qquad\qquad \frac{1}{s_0 + \dots + s_{n-1}}\, e^{-(s_0+\dots+s_{n-1})}\, dt_1\, ds_0 \dots ds_{n-1}\\
&= p_{x_0,x_1} \dots p_{x_{n-1},x_0} \int_{s_0>0,\dots,s_{n-1}>0} h_0(s_m) \dots h_{n-m-1}(s_{n-1})\, h_{n-m}(s_0)\, h_{n-m+1}(s_1) \dots h_{n-1}(s_{m-1})\\
&\qquad\qquad \frac{s_0}{s_0 + \dots + s_{n-1}}\, e^{-(s_0+\dots+s_{n-1})}\, ds_0 \dots ds_{n-1}\\
&\overset{\text{(by relabeling variables)}}{=} p_{x_0,x_1} \dots p_{x_{n-1},x_0} \int_{s_0>0,\dots,s_{n-1}>0} h_0(s_0) \dots h_{n-1}(s_{n-1})\, \frac{s_{n-m}}{s_0 + \dots + s_{n-1}}\, e^{-(s_0+\dots+s_{n-1})}\, ds_0 \dots ds_{n-1},
\end{aligned}
\]
and this proves (3.34).
• (3.35): Combining (3.34) and (3.33), we see that for $h_0, \dots, h_{n-1}$ as above and $x_0, \dots, x_{n-1} \in E$, one has
\[
\begin{aligned}
&\int 1\{Z_0 = x_0, \dots, Z_{n-1} = x_{n-1}\}\, h_0(\sigma_0) \dots h_{n-1}(\sigma_{n-1})\, d\bigl(\theta_{T_m} \circ (1\{N = n\}\mu_r)\bigr)\\
&= \int 1\{Z_0 = x_0, \dots, Z_{n-1} = x_{n-1}\}\, h_0(\sigma_0) \dots h_{n-1}(\sigma_{n-1})\, n\, \frac{\sigma_{n-m}}{\sigma_0 + \dots + \sigma_{n-1}}\, d(1\{N = n\}\mu_p).
\end{aligned}
\]
From this we see that $\theta_{T_m} \circ (1\{N = n\}\mu_r)$ and $n\, \frac{\sigma_{n-m}}{\sigma_0+\dots+\sigma_{n-1}}\, 1\{N = n\}\mu_p$ have the same (finite, since $n > 1$) total mass, and using Dynkin's lemma we conclude that they coincide on the σ-algebra of $L_p \cap \{N = n\}$ generated by the variables $Z_0, \dots, Z_{n-1}, \sigma_0, \dots, \sigma_{n-1}$. This is the full σ-algebra of measurable subsets of $L_p \cap \{N = n\}$, and (3.35) follows.
• (3.36): On $\{N = n\}$, with $n > 1$, we set $\theta_{T_m}(\gamma) \overset{\text{def}}{=} \theta_{T_m(\gamma)}(\gamma)$, for $1 \le m \le n$, and note that $F \circ \theta_{T_m} = F$ on $\{N = n\}$ (since $F \circ \theta_v = F$ for all $v \in \mathbb{R}$). As a result we find that
\[
\begin{aligned}
\int_{\{N=n\}} F\, d\mu_r &= \int_{\{N=n\}} \frac{1}{n} \sum_{m=1}^n F \circ \theta_{T_m}\, d\mu_r
= \frac{1}{n} \sum_{m=1}^n \int F\, d\bigl(\theta_{T_m} \circ (1\{N = n\}\mu_r)\bigr)\\
&\overset{(3.35)}{=} \frac{1}{n} \sum_{m=1}^n \int_{\{N=n\}} F\, n\, \frac{\sigma_{n-m}}{\sigma_0 + \dots + \sigma_{n-1}}\, d\mu_p
= \int_{\{N=n\}} F\, d\mu_p,
\end{aligned}
\]
and (3.36) follows.
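Under $\mu_p$, given $N = n$ and the discrete skeleton, the durations $\sigma_0, \dots, \sigma_{n-1}$ are i.i.d. exponential by (3.33); the size-biasing factor $n\,\sigma_{n-m}/(\sigma_0+\dots+\sigma_{n-1})$ in (3.35) then has mean one, consistent with $\theta_{T_m}$ preserving total mass. A quick Monte Carlo sketch of this mean (sample sizes and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
# proxies for (sigma_0, ..., sigma_{n-1}): i.i.d. Exp(1) durations
s = rng.exponential(1.0, size=(200_000, n))
ratio = s[:, n - m] / s.sum(axis=1)    # sigma_{n-m} / (sigma_0 + ... + sigma_{n-1})
# each coordinate's share of the total is Beta(1, n-1), with mean 1/n,
# so the Radon-Nikodym factor n * sigma_{n-m} / sum has mean 1
assert abs(ratio.mean() - 1.0 / n) < 5e-3
assert abs((n * ratio).mean() - 1.0) < 2e-2
```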
We now continue with the discussion of the properties of the measures on rooted and pointed loops we have introduced. Our next topic will be the restriction property.
3.3 Restriction property
This is a short section where we state and prove the restriction property. Informally said, the measures µr and µp have the property that restricting them to the set of loops that remain in U amounts to working with the modified weights and killing measure corresponding to killing when exiting U, see Remark 1.5.
When $U$ is a subset of $E$, we define

(3.37) $L_{r,U} = \{\gamma \in L_r;\ X_s(\gamma) \in U \text{ for all } s \in \mathbb{R}\} \overset{\text{notation}}{=} \{\gamma \in L_r;\ \gamma \subseteq U\}.$

Proposition 3.6. (restriction property)

When $U$ is a connected (non-empty) subset of $E$,

(3.38) $1_{L_{r,U}}\, \mu_r = \mu_{r,U}$, and

(3.39) $1_{L_{r,U}}\, \mu_p = \mu_{p,U}$,

where $\mu_{r,U}$ and $\mu_{p,U}$ respectively stand for the analogues of $\mu_r$ in (3.10) and $\mu_p$ in (3.33), when $U$ replaces $E$, and $U$ is endowed with the weights $c_{x,y}$, $x, y \in U$, and the killing measure $\tilde\kappa_x = \kappa_x + \sum_{y\in E\setminus U} c_{x,y}$, $x \in U$ (cf. Remark 1.5). (When $U$ is not connected, the above identities apply to the different connected components of $U$, which induce a partition of $L_{r,U}$.)
Proof.
• (3.38): When $n > 1$, $t_i > 0$, $1 \le i \le n$, $t > 0$, and $x_0, \dots, x_{n-1} \in U$, by (3.12):

(3.40)
\[
\begin{aligned}
&\mu_r(N = n,\ Z_0 = x_0, \dots, Z_{n-1} = x_{n-1},\ T_1 \in t_1 + dt_1, \dots, T_n \in t_n + dt_n,\ \zeta \in t + dt)\\
&\qquad = \frac{c_{x_0,x_1}\, c_{x_1,x_2} \dots c_{x_{n-1},x_0}}{\lambda_{x_0}\lambda_{x_1} \dots \lambda_{x_{n-1}}}\, 1\{0 < t_1 < \dots < t_n < t\}\, \frac{e^{-t}}{t}\, dt_1 \dots dt_n\, dt.
\end{aligned}
\]
Note that $\tilde\lambda_x \overset{\text{def}}{=} \sum_{y\in U} c_{x,y} + \tilde\kappa_x = \lambda_x$, for $x \in U$, and hence the expression in (3.40) equals
\[
\mu_{r,U}(N = n,\ Z_0 = x_0, \dots, Z_{n-1} = x_{n-1},\ T_1 \in t_1 + dt_1, \dots, T_n \in t_n + dt_n,\ \zeta \in t + dt).
\]
By (3.13), a similar equality holds when $n = 1$. So we see that for each $n \ge 1$, $1_{L_{r,U}}\mu_r$ and $\mu_{r,U}$ coincide on the σ-algebra of $L_{r,U} \cap \{N = n\}$ generated by the variables $Z_0, \dots, Z_{n-1}, T_1, \dots, T_n, \zeta$. By (3.8), this is the full σ-algebra of measurable subsets of $L_{r,U}$, and (3.38) follows.
• (3.39): The argument is similar, and now uses the formula (3.33) for µp.
3.4 Local times
In this section we define the local time of rooted loops and derive an identity which will play an important role in the next chapter, when calculating the Laplace transform of the occupation field of a Poisson gas of Markovian loops.
We define the local time of the rooted loop $\gamma \in L_r$ at $x \in E$:

(3.41) $\displaystyle L_x(\gamma) = \int_0^{\zeta(\gamma)} 1\{X_s(\gamma) = x\}\, ds\ \frac{1}{\lambda_x}.$

Note that the local time is invariant under time-shift:

(3.42) $L_x \circ \theta_v = L_x$, for any $x \in E$, $v \in \mathbb{R}$,

and also invariant under time-reversal:

(3.43) $L_x \circ \check{\theta} = L_x$, for any $x \in E$.

As a consequence of (3.36) and (3.42), we see that we can indifferently use µr or µp when evaluating “expectations” of functions of $(L_x)_{x\in E}$.
An important role is played by the next proposition, which computes the “de-singularized Laplace transform” of the field of local times of Markovian loops.
Proposition 3.7.

(3.44) For $v \ge 0$ and $x \in E$, $\displaystyle \int_{\{N=1\}} (1 - e^{-vL_x})\, d\mu_r = \log\Bigl(1 + \frac{v}{\lambda_x}\Bigr).$

For $V\colon E \to \mathbb{R}_+$, one has:

(3.45)
\[
\int \Bigl(1 - e^{-\sum_{x\in E} V(x) L_x}\Bigr)\, d\mu_r = \log\det(I + GV)
= \log\det(I + \sqrt{V}\, G \sqrt{V})
= -\log\Bigl(\frac{\det G_V}{\det G}\Bigr),
\]
where $G_V = (V - L)^{-1}$ (the various members of (3.45) are finite, non-negative, and equal).

In particular, one has

(3.46) $\displaystyle \int (1 - e^{-vL_x})\, d\mu_r = \log\bigl(1 + v\, g(x,x)\bigr)$, for $v \ge 0$ and $x \in E$.
Remark 3.8. Note that $g_V(x,y) = (G_V 1_y)(x)$, $x, y \in E$, can be interpreted as the Green function one obtains when choosing $\kappa^V_x = \kappa_x + V(x)$, $x \in E$, as a new killing measure.
Proof of Proposition 3.7.
• (3.44): Since $L_x(\gamma) = 0$ when $N(\gamma) = 1$ and $Z_0(\gamma) \ne x$, and $L_x(\gamma) = \zeta(\gamma)/\lambda_x$ when $N(\gamma) = 1$ and $Z_0(\gamma) = x$, we find that
\[
\int_{\{N=1\}} (1 - e^{-vL_x})\, d\mu_r \overset{(3.13)}{=} \int_0^\infty \Bigl(1 - e^{-\frac{v}{\lambda_x}\, t}\Bigr)\, e^{-t}\, \frac{dt}{t}.
\]
To compute this last integral we use the identity:

(3.47) for $0 \le a < b$ and $0 \le a' < b'$,
\[
\int_a^b \bigl(e^{-a't} - e^{-b't}\bigr)\, \frac{dt}{t} = \int_a^b \int_{a'}^{b'} e^{-tt'}\, dt'\, dt = \int_{a'}^{b'} \bigl(e^{-at'} - e^{-bt'}\bigr)\, \frac{dt'}{t'}.
\]
So choosing $a = 0$ and letting $b \to \infty$, with $a' = 1$, $b' = 1 + \frac{v}{\lambda_x}$, we obtain:
\[
\int_{\{N=1\}} (1 - e^{-vL_x})\, d\mu_r = \int_1^{1+\frac{v}{\lambda_x}} \frac{dt'}{t'} = \log\Bigl(1 + \frac{v}{\lambda_x}\Bigr),
\]
whence (3.44).
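The integral computed in (3.44) can be checked by quadrature; the sketch below (with illustrative values of $v$ and $\lambda_x$) evaluates $\int_0^\infty (1 - e^{-(v/\lambda)t})\, e^{-t}\, dt/t$ numerically and compares it with $\log(1 + v/\lambda)$:

```python
import numpy as np

v, lam = 1.5, 2.0
t = np.geomspace(1e-9, 60.0, 400_000)
# (1 - e^{-(v/lambda) t}) e^{-t} / t, written with expm1 for accuracy near t = 0
integrand = -np.expm1(-(v / lam) * t) * np.exp(-t) / t
integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2)  # trapezoid rule
assert abs(integral - np.log(1 + v / lam)) < 1e-4
```

The integrand tends to $v/\lambda$ as $t \downarrow 0$, so the integral is well-defined despite the $1/t$ factor.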
• (3.45): We begin with a preparatory calculation. By the definition of $P^t_{x,x}$ in (2.19) and of $L_y$ in (3.41), we see that
\[
E^t_{x,x}\Bigl[1 - e^{-\sum_{y\in E} V(y) L_y}\Bigr]
= P_x[X_t = x]\, \frac{1}{\lambda_x} - E_x\Bigl[1\{X_t = x\}\, e^{-\int_0^t \frac{V}{\lambda}(X_s)\, ds}\Bigr]\, \frac{1}{\lambda_x}.
\]
Using the Feynman-Kac formula (1.84) (for $V = 0$, this boils down to (1.18)), we find that for $V\colon E \to \mathbb{R}_+$, $t > 0$, $x \in E$,

(3.48) $\displaystyle E^t_{x,x}\Bigl[1 - e^{-\sum_{y\in E} V(y) L_y}\Bigr] = \frac{1}{\lambda_x}\Bigl(e^{t(P-I)}\, 1_x - e^{t(P-I-\frac{V}{\lambda})}\, 1_x\Bigr)(x) \ge 0.$

In combination with (3.10), we obtain that

(3.49) $\displaystyle \int \Bigl(1 - e^{-\sum_{y\in E} V(y) L_y}\Bigr)\, d\mu_r = \int_0^\infty \Bigl[\mathrm{Tr}\bigl(e^{t(P-I)}\bigr) - \mathrm{Tr}\bigl(e^{t(P-I-\frac{V}{\lambda})}\bigr)\Bigr]\, \frac{dt}{t}$

(where both sides can possibly be infinite). We first consider the case of a “small” $V$, and then the general case.
• The case of a “small $V$”: We know by (1.29), see also above (3.22), that all eigenvalues of the self-adjoint operator $P$ on $L^2(d\lambda)$ belong to $(-1, 1)$. The operator $P - \frac{V}{\lambda}$ is also self-adjoint on $L^2(d\lambda)$, and using the variational characterization of its largest and smallest eigenvalues,
\[
\sup\Bigl\{\Bigl(f, \Bigl(P - \tfrac{V}{\lambda}\Bigr) f\Bigr)_\lambda;\ (f,f)_\lambda = 1\Bigr\}
\quad \text{and} \quad
\inf\Bigl\{\Bigl(f, \Bigl(P - \tfrac{V}{\lambda}\Bigr) f\Bigr)_\lambda;\ (f,f)_\lambda = 1\Bigr\},
\]
respectively, we can pick $\varepsilon > 0$ so that

(3.50) for $\|V\|_\infty < \varepsilon$, all eigenvalues of $P - \frac{V}{\lambda}$ belong to $(-1, 1)$.

We then expand $e^{tP}$ and $e^{t(P - \frac{V}{\lambda})}$, and write:

(3.51) $\displaystyle \mathrm{Tr}\bigl(e^{t(P-I)}\bigr) - \mathrm{Tr}\bigl(e^{t(P-I-\frac{V}{\lambda})}\bigr) = e^{-t} \sum_{k\ge 1} \frac{t^k}{k!}\Bigl[\mathrm{Tr}(P^k) - \mathrm{Tr}\Bigl(\Bigl(P - \tfrac{V}{\lambda}\Bigr)^k\Bigr)\Bigr]$

(where we took into account the cancellation of the term $k = 0$). Note that when $0 < a < 1$,
\[
\int_0^\infty \sum_{k\ge 1} \frac{t^k a^k}{k!}\, e^{-t}\, \frac{dt}{t} = \sum_{k\ge 1} a^k \int_0^\infty \frac{t^{k-1}}{k!}\, e^{-t}\, dt = \sum_{k\ge 1} \frac{a^k}{k} = -\log(1-a) < \infty.
\]
This observation, combined with the fact that all eigenvalues of $P$ and $P - \frac{V}{\lambda}$ are in absolute value smaller than $1$, shows that we can insert (3.51) in the right-hand side of (3.49) and exchange summation with integration:
\[
\begin{aligned}
\int \Bigl(1 - e^{-\sum_{y\in E} V(y) L_y}\Bigr)\, d\mu_r
&= \sum_{k\ge 1} \int_0^\infty \frac{t^k}{k!}\, \mathrm{Tr}(P^k)\, e^{-t}\, \frac{dt}{t}
- \sum_{k\ge 1} \int_0^\infty \frac{t^k}{k!}\, \mathrm{Tr}\Bigl(\Bigl(P - \tfrac{V}{\lambda}\Bigr)^k\Bigr)\, e^{-t}\, \frac{dt}{t}\\
&= \sum_{k\ge 1} \frac{1}{k}\, \mathrm{Tr}(P^k) - \sum_{k\ge 1} \frac{1}{k}\, \mathrm{Tr}\Bigl(\Bigl(P - \tfrac{V}{\lambda}\Bigr)^k\Bigr)\\
&\overset{\text{as below (3.22)}}{=} -\log\det(I - P) + \log\det\Bigl(I - P + \frac{V}{\lambda}\Bigr)
= \log\det\Bigl(I + (I-P)^{-1}\, \frac{V}{\lambda}\Bigr).
\end{aligned}
\tag{3.52}
\]
By (1.37) and (1.41), we know that $(I-P)^{-1}\, \lambda^{-1} = G$, so
\[
\det\Bigl(I + (I-P)^{-1}\, \frac{V}{\lambda}\Bigr) = \det(I + GV),
\]
and in addition, “multiplying and dividing” by $\det(\sqrt{V})$ (i.e. by $\det(\sqrt{V} + \eta)$, with $\eta > 0$, which one lets go to zero), we find that
\[
\det(I + GV) = \det(I + \sqrt{V}\, G \sqrt{V}).
\]
Writing further that $\det G = \det(I-P)^{-1} \det(\lambda^{-1})$ and that, since $\lambda(I - P) = -L$, cf. (1.41),
\[
\det\Bigl(I - P + \frac{V}{\lambda}\Bigr) = \det(\lambda^{-1}) \det(V - L) = \det(\lambda^{-1})/\det G_V,
\]
we find that
\[
-\log\det(I - P) + \log\det\Bigl(I - P + \frac{V}{\lambda}\Bigr) = \log\Bigl(\frac{\det G}{\det G_V}\Bigr).
\]
Combining these identities, we see that we have proved (3.45) under (3.50), i.e. when $V$ is “small”.
We now treat the general case.

• The general case: Note that the function $\beta \ge 0 \to \int_{L_r} (1 - e^{-\beta \sum_{x\in E} V(x) L_x})\, d\mu_r \in [0,\infty]$ is non-decreasing and finite for small $\beta$ by the first step, and we can use the inequality $1 - e^{-(a+b)} \le 1 - e^{-a} + 1 - e^{-b}$, for $a, b \ge 0$, to conclude that finiteness holds for all $\beta \ge 0$, that is:

(3.53) $\displaystyle \int_{L_r} \Bigl(1 - e^{-\beta \sum_{x\in E} V(x) L_x}\Bigr)\, d\mu_r < \infty$, for $\beta \in [0,\infty)$

(and actually this quantity increases to $+\infty$ as $\beta \to \infty$, if $V$ is not identically equal to $0$, see also below (3.10)). In addition, as we explain below, it follows, by domination, that the function

(3.54) $\displaystyle \beta \in \{z \in \mathbb{C};\ \mathrm{Re}\, z > 0\} \longrightarrow \int_{L_r} \Bigl(1 - e^{-\beta \sum_{x\in E} V(x) L_x}\Bigr)\, d\mu_r$ is analytic.

Indeed, we first note that the integrand in (3.54) has a modulus bounded by $2$ when $\beta \in \{z \in \mathbb{C};\ \mathrm{Re}\, z > 0\}$, and that $\mu_r[N > 1] < \infty$, by (3.19). There only remains to show domination on $\{N = 1\}$. To this end we observe that the integrand in (3.54) equals $\int_0^1 \beta \sum_{x\in E} V(x) L_x\, \exp\{-\beta \sum_{x\in E} V(x) L_x\, u\}\, du$, and has a modulus bounded by $|\beta| \sum_{x\in E} V(x) L_x$, for $\beta$ in the same domain as above. By (3.13), $1\{N = 1\}\, L_z$ is $\mu_r$-integrable for each $z \in E$. We have thus shown domination of the integrand in (3.54) when $\beta$ remains in a compact subset of $\{z \in \mathbb{C};\ \mathrm{Re}\, z > 0\}$. The claim (3.54) follows.

On the other hand, $\sqrt{V}\, G \sqrt{V}$ is self-adjoint for the canonical scalar product $\langle\cdot,\cdot\rangle$ on $\mathbb{R}^E$. By (1.35), all eigenvalues of $\sqrt{V}\, G \sqrt{V}$ are non-negative (because $\langle f, Gf\rangle \ge 0$, for all $f\colon E \to \mathbb{R}$). It follows that

(3.55) $\beta \ge 0 \longrightarrow \det(I + \beta \sqrt{V}\, G \sqrt{V}) \ge 1$ is a non-decreasing function,

and, in addition, that

(3.56) $\beta > 0 \longrightarrow \log\det(I + \beta \sqrt{V}\, G \sqrt{V})$ has an analytic extension to $\{z \in \mathbb{C};\ \mathrm{Re}\, z > 0\}$.

By the first step, this function agrees with the function in (3.54) for small $\beta$ in $(0,\infty)$, and hence for all $\beta \in (0,\infty)$ by analyticity. We have thus shown that

(3.57) $\displaystyle \int \Bigl(1 - e^{-\beta \sum_{x\in E} V(x) L_x}\Bigr)\, d\mu_r = \log\det(I + \beta \sqrt{V}\, G \sqrt{V})$, for all $\beta \ge 0$,

and, in particular, for $\beta = 1$. Since the equality of the three terms on the right-hand side of (3.45) has been shown below (3.52) (the proof is easily extended to the case of a general $V \ge 0$), we have thus completed the proof of (3.45).
• (3.46): This is the special case of (3.45) when $V = v1_x$, with $v \ge 0$, $x \in E$. By ordering $E$ so that $x$ is the first element of $E$, we can select the basis $1_{x_i}$, $1 \le i \le |E|$, of the space of functions on $E$. The matrix representing $I + GV$ in this basis is triangular and has the coefficients $1 + g(x,x)\, v, 1, \dots, 1$ on the diagonal, so that $\log\det(I + GV) = \log(1 + g(x,x)\, v)$, and (3.46) follows.
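The determinant identities in (3.45) and the special case (3.46) are finite-dimensional linear algebra, and can be checked on a concrete toy generator (the conductances, killing measure, and potential below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# symmetric conductances and positive killing measure (illustrative)
c = rng.random((n, n))
c = (c + c.T) / 2
np.fill_diagonal(c, 0.0)
kappa = rng.random(n) + 0.1
lam = c.sum(axis=1) + kappa
L = c - np.diag(lam)                    # L f(x) = sum_y c_{x,y} f(y) - lambda_x f(x)
G = np.linalg.inv(-L)                   # Green function G = (-L)^{-1}
V = np.diag(rng.random(n))              # potential V >= 0, as a diagonal matrix

GV_det = np.linalg.det(np.eye(n) + G @ V)
sqrtV = np.sqrt(V)
sym_det = np.linalg.det(np.eye(n) + sqrtV @ G @ sqrtV)
G_V = np.linalg.inv(V - L)              # G_V = (V - L)^{-1}
ratio = np.linalg.det(G) / np.linalg.det(G_V)
assert np.allclose([GV_det, sym_det], ratio)   # the three members of (3.45) agree

# the special case (3.46): V = v 1_x gives det(I + GV) = 1 + v g(x,x)
v, x = 0.7, 3
Vx = np.zeros((n, n))
Vx[x, x] = v
assert np.isclose(np.linalg.det(np.eye(n) + G @ Vx), 1 + v * G[x, x])
```

Here $-L = \operatorname{diag}(\lambda) - c$ is strictly diagonally dominant, hence invertible, so the Green function is well-defined.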
We can now combine the restriction property, see Proposition 3.6, and the above proposition to find, as a direct application:
Proposition 3.9. For $V\colon U \to \mathbb{R}_+$, one has:

(3.58)
\[
\int_{\{\gamma \subseteq U\}} \Bigl(1 - e^{-\sum_{x\in E} V(x) L_x}\Bigr)\, d\mu_r = \log\det(I + G_U V)
= \log\det(I + \sqrt{V}\, G_U \sqrt{V})
= -\log\Bigl(\frac{\det G_{U,V}}{\det G_U}\Bigr),
\]
where we have set (see below (3.39) for notation)
\[
G_{U,V} = (V - L_U)^{-1}, \quad \text{with } L_U f(x) = \sum_{y\in U} c_{x,y}\, f(y) - \lambda_x\, f(x), \text{ for } x \in U \text{ and } f\colon U \to \mathbb{R},
\]
and we have tacitly extended $V$ by $0$ outside $U$ in the first and second members of (3.58); the determinants in the last member of (3.58) are $U \times U$-determinants. Moreover,

(3.59) $\displaystyle \int_{\{\gamma \subseteq U\}} (1 - e^{-vL_x})\, d\mu_r = \log\bigl(1 + v\, g_U(x,x)\bigr)$, for $v \ge 0$ and $x \in U$.
We will also record a variation on Proposition 3.7 in the case where we work with the measure Px,y in place of µr. This identity will be used in the next chapter when proving Symanzik's representation formula. Most of the work has actually already been done when proving (2.30).
Proposition 3.10. For $V\colon E \to \mathbb{R}_+$ and $x, y \in E$, one has

(3.60) $\displaystyle E_{x,y}\Bigl[\exp\Bigl\{-\sum_{z\in E} V(z)\, L^z_\infty\Bigr\}\Bigr] = g_V(x,y),$

with $g_V(x,y) = (G_V 1_y)(x)$ and $G_V = (V - L)^{-1}$, as in (3.45).

Proof. By (2.30), we already know that when $V$ is small, so that $\|GV\|_\infty < 1$ (recall $V \ge 0$ here), we have
\[
E_{x,y}\Bigl[\exp\Bigl\{-\sum_{z\in E} V(z)\, L^z_\infty\Bigr\}\Bigr] = \bigl((I + GV)^{-1}\, G\, 1_y\bigr)(x).
\]
Then we observe that $(-L)^{-1} = G$, cf. (1.37), and hence $V - L = -L(I + GV)$, so that

(3.61) $G_V = (I + GV)^{-1}\, G.$

We have thus proved (3.60) when the smallness assumption $\|GV\|_\infty < 1$ holds.
In the general case $V \ge 0$, the operator $\beta V - L$ is self-adjoint for the usual scalar product $\langle\cdot,\cdot\rangle$ and has all its eigenvalues $> 0$, when $\beta \ge 0$ (cf. (1.42), (1.39)). Using an identity similar to (3.61) in the neighborhood of an arbitrary $\beta_0 \ge 0$,
\[
G_{\beta V} = \bigl(I + (\beta - \beta_0)\, G_{\beta_0 V}\, V\bigr)^{-1}\, G_{\beta_0 V}, \quad \text{for } |\beta - \beta_0| \text{ small enough},\ \beta \ge 0,
\]
one sees that

(3.62) $\beta > 0 \longrightarrow g_{\beta V}(x,y)$ has an analytic extension to a neighborhood of $(0,\infty)$.

On the other hand, by domination one sees that

(3.63) $\displaystyle \beta \in \{z \in \mathbb{C};\ \mathrm{Re}\, z > 0\} \longrightarrow E_{x,y}\Bigl[\exp\Bigl\{-\beta \sum_{z\in E} V(z)\, L^z_\infty\Bigr\}\Bigr]$ is analytic,

and coincides for small positive $\beta$ with $g_{\beta V}(x,y)$. Hence this equality also holds for all $\beta \ge 0$, and choosing $\beta = 1$, we obtain (3.60).
3.5 Unrooted loops and the measure µ∗ on unrooted loops
In the last section of this chapter we finally introduce the σ-finite measure µ∗ governing loops. We also define unit weights, which are helpful with calculations involving µ∗.
We define an equivalence relation on the set $L_r$ of rooted loops:

(3.64) $\gamma \sim \gamma'$ if and only if $\gamma = \theta_v(\gamma')$ for some $v \in \mathbb{R}$,

and we denote by $L^*$ the set of equivalence classes of rooted loops (also referred to as unrooted loops), and by $\pi^*$ the canonical map

(3.65) $\gamma \in L_r \overset{\pi^*}{\longrightarrow} \gamma^* = \pi^*(\gamma) \in L^*$, the equivalence class of $\gamma$ for the relation $\sim$ in (3.64).

We endow $L^*$ with the σ-algebra (see below (3.3) for notation)

(3.66) $\mathcal{L}^* = \bigl\{B \subseteq L^*;\ (\pi^*)^{-1}(B) \text{ is a measurable subset of } L_r\bigr\}.$

In other words, $\mathcal{L}^*$ is the largest σ-algebra on $L^*$ such that the map $\pi^*\colon L_r \to L^*$ is measurable (we recall that $L_r$ is endowed with the σ-algebra introduced below (3.3)). As a consequence of (3.64), when $F\colon L^* \to \mathbb{R}$ is measurable, $F \circ \pi^*\colon L_r \to \mathbb{R}$ is invariant under all $\theta_v$, $v \in \mathbb{R}$, and measurable. It now follows from (3.36) (and its straightforward extension to the case $n = 1$) that the measures $\mu_r$ and $\mu_p$, see (3.10), (3.33), have the same image on $L^*$. We thus introduce the loop measure (or unrooted loop measure)

(3.67) $\mu^* = \pi^* \circ \mu_r = \pi^* \circ \mu_p,$

which is straightforwardly seen to be a σ-finite measure on $(L^*, \mathcal{L}^*)$ (indeed, if $\zeta(\gamma^*) \overset{\text{def}}{=} \zeta(\gamma)$, for any $\gamma \in L_r$ with $\pi^*(\gamma) = \gamma^*$, is the duration of the unrooted loop $\gamma^*$, then for $a > 0$, $\mu^*(\zeta \ge a) = \mu_r(\zeta \ge a) < \infty$, see below (3.10)).
The following notion, introduced by Lawler-Werner [16], is convenient to handle computations with µ∗.
Definition 3.11. (unit weight)

A measurable function $T\colon L_r \to [0,\infty)$ is called a unit weight when

(3.68) $\displaystyle \int_0^{\zeta(\gamma)} T(\theta_v \gamma)\, dv = 1$, for any $\gamma \in L_r$.
Examples:

1) $T(\gamma) = \zeta(\gamma)^{-1}$, for $\gamma \in L_r$, is a trivial example of a unit weight.

2) A less trivial example is the following: pick $x \in E$, and set

(3.69)
\[
T(\gamma) = \zeta(\gamma)^{-1}, \ \text{if } \gamma(t) \ne x \text{ for all } t \in \mathbb{R};
\qquad
T(\gamma) = \frac{1\{\gamma(0) = x\}}{\int_0^{\zeta(\gamma)} 1\{\gamma(s) = x\}\, ds}, \ \text{if } \gamma(t) = x \text{ for some } t \in \mathbb{R}.
\]
Indeed, in the first case (notation: “$x \notin \gamma$”), one has
\[
\int_0^{\zeta(\gamma)} T(\theta_v \gamma)\, dv = \zeta(\gamma)/\zeta(\gamma) = 1,
\]
whereas in the second case (notation: “$x \in \gamma$”), one has
\[
\int_0^{\zeta(\gamma)} T(\theta_v \gamma)\, dv = \int_0^{\zeta(\gamma)} 1\{\gamma(v) = x\}\, dv \Big/ \int_0^{\zeta(\gamma)} 1\{\gamma(s) = x\}\, ds = 1.
\]
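The unit-weight property (3.68) of the example (3.69) can be illustrated on a concrete piecewise-constant loop (the visited states and sojourn times below are illustrative):

```python
import numpy as np

# a rooted loop gamma of duration zeta, visiting states (2, 5, 2, 7)
states = np.array([2, 5, 2, 7])
durations = np.array([0.4, 1.1, 0.3, 0.9])
zeta = durations.sum()
cum = np.concatenate(([0.0], np.cumsum(durations)))
x = 2
time_at_x = durations[states == x].sum()

def gamma(v):
    """gamma(v) for an array of times v, using periodicity of the loop."""
    return states[np.searchsorted(cum, v % zeta, side='right') - 1]

# T(theta_v gamma) from (3.69): indicator that the shifted loop starts at x,
# divided by the total time the loop spends at x
v = np.linspace(0.0, zeta, 400_001)[:-1]
T_shifted = (gamma(v) == x) / time_at_x
integral = T_shifted.mean() * zeta      # Riemann sum for int_0^zeta T(theta_v gamma) dv
assert abs(integral - 1.0) < 1e-3
```

The integral equals (time spent at $x$)/(time spent at $x$) $= 1$, whatever the loop, as in the verification above.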
The interest of the above definition comes from the following

Lemma 3.12. If $T$ is a unit weight, then for any non-negative measurable $F$ on $L^*$, one has (see (2.21) for notation):

(3.70) $\displaystyle \int_{L^*} F\, d\mu^* = \sum_{x\in E} \int_{L_r} F \circ \pi^*(\gamma)\, T(\gamma)\, dP_{x,x}(\gamma)\, \lambda_x.$

Proof. By (2.21) we see that

(3.71) $\displaystyle \sum_{x\in E} \int_{L_r} F \circ \pi^*(\gamma)\, T(\gamma)\, dP_{x,x}(\gamma)\, \lambda_x \overset{\text{Fubini}}{=} \int_0^\infty \sum_{x\in E} \lambda_x\, E^t_{x,x}[F \circ \pi^*\, T]\, dt.$

Moreover, by (3.24), we find that for each $t > 0$,
\[
\sum_{x\in E} \lambda_x\, E^t_{x,x}[F \circ \pi^*\, T] = \frac{1}{t} \int_0^t dv \Bigl( \sum_{x\in E} \lambda_x\, E^t_{x,x}\bigl[(F \circ \pi^* \circ \theta_v)(T \circ \theta_v)\bigr] \Bigr),
\]
and since $F \circ \pi^* \circ \theta_v = F \circ \pi^*$, and $T$ is a unit weight, so that $P^t_{x,x}$-a.s., $\int_0^t T \circ \theta_v\, dv = 1$, the right-hand side of (3.71) equals
\[
\int_0^\infty \sum_{x\in E} \lambda_x\, E^t_{x,x}[F \circ \pi^*]\, \frac{dt}{t} \overset{(3.10)}{=} \int_{L_r} F \circ \pi^*\, d\mu_r \overset{(3.67)}{=} \int_{L^*} F\, d\mu^*,
\]
and the claim (3.70) follows.
Example: We consider $T$ as in (3.69) and $F \ge 0$ measurable on $L^*$. We write, using similar notation as below (3.69):
\[
\begin{aligned}
\int_{\{\gamma^* \ni x\}} F\, d\mu^* &\overset{(3.70)}{=} \sum_{y\in E} \int_{\{\gamma \ni x\}} F \circ \pi^*(\gamma)\, T(\gamma)\, dP_{y,y}(\gamma)\, \lambda_y\\
&\overset{(3.69)}{=} \sum_{y\in E} \int_{\{\gamma \ni x\}} F \circ \pi^*(\gamma)\, \frac{1\{\gamma(0) = x\}}{\int_0^{\zeta(\gamma)} 1\{\gamma(s) = x\}\, ds}\, dP_{y,y}(\gamma)\, \lambda_y\\
&= \int_{\{\gamma \ni x\}} F \circ \pi^*(\gamma) \Bigl( \int_0^{\zeta(\gamma)} 1\{\gamma(s) = x\}\, \frac{ds}{\lambda_x} \Bigr)^{-1} dP_{x,x}(\gamma)\\
&\overset{(3.41)}{=} \int_{\{\gamma \ni x\}} F \circ \pi^*(\gamma)\, L_x(\gamma)^{-1}\, dP_{x,x}(\gamma)
\end{aligned}
\]
(in the third line only the term $y = x$ contributes, since under $P_{y,y}$ the path starts at $y$). In other words, we have proved that

(3.72) $1\{\gamma^* \ni x\}\, d\mu^* = \pi^* \circ \bigl( \tfrac{1}{L_x}\, dP_{x,x} \bigr)$, for $x \in E$,

or, alternatively, since $L_x$ is invariant under $\theta_v$, cf. (3.42),

(3.72′) $1\{\gamma^* \ni x\}\, L_x\, \mu^* = \pi^* \circ P_{x,x}$, for $x \in E$,

where we have set $L_x(\gamma^*) = L_x(\gamma)$, for any $\gamma$ with $\pi^*(\gamma) = \gamma^*$.

Note that by (2.34) the total mass of $P_{x,x}$ equals $g(x,x)$, so that

(3.73) $\displaystyle \int_{\{\gamma^* \ni x\}} L_x\, d\mu^* = g(x,x)$, for $x \in E$.
4 Poisson gas of Markovian loops
In this chapter we study the Poisson point process on the space L∗ of unrooted loops with intensity measure αµ∗, where α is a positive number. In particular, we relate the occupation field of this gas of loops to the Gaussian free field, and prove Symanzik's representation formula. At the end of the chapter we explore several precise meanings for the notion of “loops going through infinity” and relate them to random interlacements.
4.1 Poisson point measures on unrooted loops
In this section we briefly introduce the set-up for Poisson point measures on the set of loops, and recall some basic identities for the Laplace transforms of these Poisson point measures.
We consider pure point measures $\omega$ on $(L^*, \mathcal{L}^*)$, i.e. σ-finite measures of the form $\omega = \sum_{i\in I} \delta_{\gamma^*_i}$, where $\gamma^*_i$, $i \in I$, is an at most countable collection of unrooted loops such that $\omega(A) < \infty$, for all $A = \{\gamma^* \in L^*;\ a \le \zeta(\gamma^*) \le b\}$, with $0 < a < b < \infty$, where

(4.1) $\omega(A) = \#\{i \in I;\ \gamma^*_i \in A\}$, for $A \in \mathcal{L}^*$.

We introduce

(4.2) $\Omega$ = the set of pure point measures on $(L^*, \mathcal{L}^*)$,

and endow $\Omega$ with the σ-algebra

(4.3) $\mathcal{A}$ = the σ-algebra generated by the evaluation maps $\omega \in \Omega \to \omega(A) \in \mathbb{N} \cup \{\infty\}$, for $A \in \mathcal{L}^*$.
We recall that a random variable $X$ on some probability space is said to have Poisson distribution with parameter $\rho \in [0,\infty]$, when
\[
P[X = k] = e^{-\rho}\, \frac{\rho^k}{k!}, \ \text{for } k \ge 0, \text{ if } \rho < \infty,
\qquad \text{and} \qquad
P[X = \infty] = 1, \text{ if } \rho = \infty.
\]
Definition 4.1. Given a σ-finite measure $\nu$ on $(L^*, \mathcal{L}^*)$, such that $\nu(A) < \infty$ for $A$ as above (4.1), a probability measure $\mathbb{P}$ on $(\Omega, \mathcal{A})$ is the Poisson point measure with intensity $\nu$ if

(4.4) for $A \in \mathcal{L}^*$, $\omega(A)$ is Poisson distributed with parameter $\nu(A)$,

(4.5) for pairwise disjoint $A_1, \dots, A_n \in \mathcal{L}^*$, $\omega(A_1), \dots, \omega(A_n)$ are independent under $\mathbb{P}$.
We refer to [20], [22] for a more detailed discussion of Poisson point measures.
Poisson gas of Markovian loops:

Given $\alpha > 0$, we consider

(4.6) $\mathbb{P}_\alpha$: the Poisson point measure on $(L^*, \mathcal{L}^*)$ with intensity $\alpha\mu^*$

(see (3.67) for the definition of $\mu^*$). We will call the probability measure $\mathbb{P}_\alpha$ on $(\Omega, \mathcal{A})$ the Poisson gas of Markovian loops at level $\alpha > 0$. Note that under $\mathbb{P}_\alpha$ the point measure $\omega$ is a.s. infinite, but by (3.19) its restriction to $\{N > 1\} \subseteq L^*$ is a.s. a finite point measure (we use the notation $N(\gamma^*) = N(\gamma)$ for any $\gamma$ with $\pi^*(\gamma) = \gamma^*$).
Lemma 4.2. Consider a measurable function $\Phi\colon L^* \to \mathbb{R}_+$; then

(4.7) $\displaystyle E_\alpha[e^{-\langle \omega, \Phi\rangle}] = \exp\Bigl\{-\alpha \int_{L^*} (1 - e^{-\Phi})\, d\mu^*\Bigr\}$

(where $\langle \omega, \Phi\rangle = \sum_{i\in I} \Phi(\gamma^*_i)$, for $\omega = \sum_{i\in I} \delta_{\gamma^*_i} \in \Omega$).

If $\Phi$ vanishes on $\{\gamma^*;\ \zeta(\gamma^*) < a\}$ for some $a > 0$, then

(4.8) $\displaystyle E_\alpha[e^{i\langle \omega, \Phi\rangle}] = \exp\Bigl\{\alpha \int_{L^*} (e^{i\Phi} - 1)\, d\mu^*\Bigr\}.$
Proof.
• (4.7): When $\Phi = \sum_{i=1}^n a_i\, 1_{A_i}$, with $A_i$, $1 \le i \le n$, pairwise disjoint, measurable, and $\mu^*(A_i) < \infty$, one has
\[
\begin{aligned}
E_\alpha[e^{-\langle \omega, \Phi\rangle}] &= E_\alpha\Bigl[\prod_{i=1}^n e^{-a_i\, \omega(A_i)}\Bigr]
\overset{(4.4),(4.5),(4.6)}{=} \prod_{i=1}^n \Bigl( \sum_{k\ge 0} e^{-a_i k}\, e^{-\alpha\mu^*(A_i)}\, \frac{(\alpha\mu^*(A_i))^k}{k!} \Bigr)\\
&= \exp\Bigl\{-\sum_{i=1}^n \alpha\mu^*(A_i)\,(1 - e^{-a_i})\Bigr\}
= \exp\Bigl\{-\alpha \int_{L^*} (1 - e^{-\Phi})\, d\mu^*\Bigr\},
\end{aligned}
\tag{4.9}
\]
so (4.7) holds in this case. For a general $\Phi\colon L^* \to \mathbb{R}_+$, we can construct $\Phi_\ell \uparrow \Phi$ as $\ell \to \infty$, with each $\Phi_\ell$ of the above type (by the usual measure-theoretic induction construction and the σ-finiteness of $\mu^*$), and find that
\[
E_\alpha[e^{-\langle \omega, \Phi\rangle}] \overset{\text{monotone conv.}}{=} \lim_{\ell\to\infty} \downarrow E_\alpha[e^{-\langle \omega, \Phi_\ell\rangle}]
\overset{(4.9)}{=} \lim_{\ell\to\infty} \downarrow \exp\Bigl\{-\alpha \int_{L^*} (1 - e^{-\Phi_\ell})\, d\mu^*\Bigr\}
\overset{\text{monotone conv.}}{=} \exp\Bigl\{-\alpha \int_{L^*} (1 - e^{-\Phi})\, d\mu^*\Bigr\},
\]
whence (4.7).

• (4.8): The claim follows by a similar measure-theoretic induction; note that, as remarked below (3.10), $\mu^*(\zeta \ge a) < \infty$, so that $\omega(\{\zeta \ge a\}) < \infty$, $\mathbb{P}_\alpha$-a.s., and $e^{i\Phi} - 1$ is both bounded and $\mu^*$-integrable.
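The core of (4.7) is the scalar computation in (4.9): if $X$ is Poisson with parameter $\rho$, then $E[e^{-aX}] = \exp\{-\rho(1 - e^{-a})\}$. A short numerical sketch summing the series directly (parameter values are illustrative):

```python
import math

rho, a = 2.3, 0.8
# sum_k e^{-ak} e^{-rho} rho^k / k!  -- the series appearing in (4.9)
series = sum(math.exp(-a * k) * math.exp(-rho) * rho**k / math.factorial(k)
             for k in range(80))
assert abs(series - math.exp(-rho * (1.0 - math.exp(-a)))) < 1e-12
```

Truncating at $k = 80$ is far beyond where the terms become negligible for these parameter values.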
4.2 Occupation field
We introduce the occupation field of the Poisson gas of Markovian loops in this section, and calculate its Laplace transform. Later on we relate the law of the occupation field of the Poisson gas at level $\alpha = \frac{1}{2}$ to the free field.

For $\omega \in \Omega$, $x \in E$, we define the occupation field (also called field of occupation times) of $\omega$ at $x$ via:

(4.10) $\displaystyle L_x(\omega) = \langle \omega, L_x\rangle \in [0,\infty] = \sum_{i\in I} L_x(\gamma^*_i), \ \text{if } \omega = \sum_{i\in I} \delta_{\gamma^*_i} \in \Omega$ (see (3.41), (3.42) for notation),

where $L_x(\gamma^*)$ for $\gamma^* \in L^*$ is defined below (3.72′).
As a consequence of Proposition 3.7 and Lemma 4.2, we obtain the following important theorem describing the Laplace transform of the field of occupation times of a Poisson gas of Markovian loops:

Theorem 4.3. ($\alpha > 0$)

For $V\colon E \to \mathbb{R}_+$, one has, in the notation of (3.45):

(4.11)
\[
E_\alpha\Bigl[e^{-\sum_{x\in E} V(x) L_x}\Bigr] = \det(I + GV)^{-\alpha}
= \det(I + \sqrt{V}\, G \sqrt{V})^{-\alpha}
= \Bigl(\frac{\det G_V}{\det G}\Bigr)^{\alpha}.
\]

For $x \in E$ and $v \ge 0$, one has

(4.12) $E_\alpha[e^{-v L_x}] = \bigl(1 + v\, g(x,x)\bigr)^{-\alpha}.$

In particular, one finds that

(4.13) $L_x$ is $\Gamma\bigl(\alpha, g(x,x)\bigr)$-distributed (i.e. has density $\frac{1}{\Gamma(\alpha)}\, \frac{s^{\alpha-1}}{g(x,x)^\alpha}\, e^{-\frac{s}{g(x,x)}}\, 1\{s > 0\}$),

and that

(4.14) $\mathbb{P}_\alpha$-a.s., $L_x < \infty$, for every $x \in E$.
Proof.
• (4.11): We first note that by (4.10), for $\omega \in \Omega$ and $V\colon E \to \mathbb{R}_+$, one has
\[
\sum_{x\in E} V(x)\, L_x(\omega) = \langle \omega, \Phi\rangle, \quad \text{where } \Phi(\gamma^*) = \sum_{x\in E} V(x)\, L_x(\gamma^*), \ \text{for } \gamma^* \in L^*.
\]
We will use (4.7) together with (3.45). Specifically, we have
\[
E_\alpha\Bigl[e^{-\sum_{x\in E} V(x) L_x}\Bigr] = E_\alpha\bigl[e^{-\langle \omega, \Phi\rangle}\bigr] \overset{(4.7)}{=} \exp\Bigl\{-\alpha \int_{L^*} (1 - e^{-\Phi})\, d\mu^*\Bigr\}.
\]
Now, by (3.67), we obtain the identity
\[
\int_{L^*} (1 - e^{-\Phi})\, d\mu^* \overset{(3.67)}{=} \int_{L_r} (1 - e^{-\Phi \circ \pi^*})\, d\mu_r = \int_{L_r} \Bigl(1 - e^{-\sum_{x\in E} V(x) L_x(\gamma)}\Bigr)\, d\mu_r(\gamma).
\]
As a result, we find that
\[
E_\alpha\Bigl[e^{-\sum_{x\in E} V(x) L_x}\Bigr] = \exp\Bigl\{-\alpha \int_{L_r} \Bigl(1 - e^{-\sum_{x\in E} V(x) L_x}\Bigr)\, d\mu_r\Bigr\}
\overset{(3.45)}{=} \det(I + GV)^{-\alpha} = \det(I + \sqrt{V}\, G \sqrt{V})^{-\alpha} = \Bigl(\frac{\det G_V}{\det G}\Bigr)^{\alpha},
\]
and (4.11) follows.

• (4.12): In the special case $V = v1_x$, with $v \ge 0$, $x \in E$, it follows from (3.46) that
\[
E_\alpha[e^{-vL_x}] = (1 + v\, g(x,x))^{-\alpha},
\]
and (4.12) follows.

• (4.14): $\mathbb{P}_\alpha[L_x < \infty] = \lim_{v\to 0} E_\alpha[e^{-vL_x}] = \lim_{v\to 0} (1 + v\, g(x,x))^{-\alpha} = 1$, and (4.14) follows.

• (4.13): The Laplace transform of the $\Gamma\bigl(\alpha, g(x,x)\bigr)$-distribution is $(1 + v\, g(x,x))^{-\alpha}$, see [9], p. 430, and (4.13) follows.
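The identification of the Laplace transform used in (4.13) can be verified by direct quadrature of the $\Gamma(\alpha, g)$ density; the values of $\alpha$, $g$, $v$ below are illustrative ($\alpha = 2$ keeps the density bounded at the origin):

```python
import math
import numpy as np

alpha, g, v = 2.0, 1.7, 0.9
s = np.linspace(0.0, 100.0, 400_001)
# Gamma(alpha, g) density from (4.13)
density = s**(alpha - 1) / (math.gamma(alpha) * g**alpha) * np.exp(-s / g)
integrand = np.exp(-v * s) * density
laplace = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(s)) / 2)  # trapezoid
assert abs(laplace - (1 + v * g) ** (-alpha)) < 1e-5
```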
Remark 4.4. One can also introduce the occupation field of non-trivial loops:

(4.15) $\displaystyle \widetilde{L}_x(\omega) \overset{\text{def}}{=} \langle 1\{N > 1\}\,\omega, L_x\rangle = \sum_{i\in I} 1\{N(\gamma^*_i) > 1\}\, L_x(\gamma^*_i), \ \text{if } \omega = \sum_{i\in I} \delta_{\gamma^*_i} \in \Omega, \text{ and } x \in E.$

By (3.44), (3.46) we know, for instance, that for $v \ge 0$ and $x \in E$,

(4.16) $\displaystyle \int_{\{N>1\}} (1 - e^{-vL_x})\, d\mu_r = \log\Bigl(\frac{1 + v\, g(x,x)}{1 + \frac{v}{\lambda_x}}\Bigr).$

The same proof used for (4.12) now yields that for $v \ge 0$ and $x \in E$,

(4.17) $\displaystyle E_\alpha[e^{-v\widetilde{L}_x}] = \Bigl(\frac{1 + v\, g(x,x)}{1 + \frac{v}{\lambda_x}}\Bigr)^{-\alpha}.$

In particular, letting $v \to \infty$ in (4.17) we find that

(4.18) $\mathbb{P}_\alpha[\widetilde{L}_x = 0] = \bigl(\lambda_x\, g(x,x)\bigr)^{-\alpha}$, for any $x \in E$.

In the same vein, combining (3.44) and (3.45) yields that

(4.19)
\[
\int_{\{N>1\}} \Bigl(1 - e^{-\sum_{x\in E} V(x) L_x}\Bigr)\, d\mu_r
= -\log\Bigl(\frac{\det G_V}{\det G}\Bigr) + \log\det\lambda - \log\det(\lambda + V)
= -\log \frac{\det(I - P^V)^{-1}}{\det(I - P)^{-1}}
= \log \frac{\det(I - P^V)}{\det(I - P)},
\]
where $P^V f(x) \overset{\text{def}}{=} \sum_{y\in E} \frac{c_{x,y}}{\lambda_x + V(x)}\, f(y)$, for $x \in E$ and $f\colon E \to \mathbb{R}$, and we used the equalities
\[
\det G = \det(I - P)^{-1}\, \det\lambda^{-1} \quad \text{and} \quad \det G_V = \det(I - P^V)^{-1}\, \det(\lambda + V)^{-1}
\]
(recall $G_V = (V - L)^{-1}$ and note that $V - L = (\lambda + V)(I - P^V)$), as well as the identity
\[
\int_{\{N=1\}} \Bigl(1 - e^{-\sum_{x\in E} V(x) L_x}\Bigr)\, d\mu_r = \sum_{x\in E} \int_{\{N=1\}} \bigl(1 - e^{-V(x) L_x}\bigr)\, d\mu_r,
\]
since $L_x(\gamma) = 0$, for all $x \ne \gamma(0)$, when $N(\gamma) = 1$. We refer to Remark 3.8 for the probabilistic interpretation of $G_V$; a similar interpretation holds for $P^V$ as well. By the same proof as in (4.11), we thus see that

(4.20) $\displaystyle E_\alpha\Bigl[e^{-\sum_{x\in E} V(x)\, \widetilde{L}_x}\Bigr] = \Bigl(\frac{\det(I - P^V)}{\det(I - P)}\Bigr)^{-\alpha}$, for $V\colon E \to \mathbb{R}_+$.

Letting $V(x) \uparrow \infty$ for every $x \in E$, we find that

(4.21) $\mathbb{P}_\alpha[\widetilde{L}_x = 0, \text{ for all } x] = \bigl(\det(I - P)\bigr)^{\alpha},$

a formula which also follows from the identity

(4.22) $\mathbb{P}_\alpha[\omega(N > 1) = 0] \overset{(3.67),(4.4)}{=} \exp\{-\alpha\, \mu_r(N > 1)\} \overset{(3.19)}{=} \bigl(\det(I - P)\bigr)^{\alpha}.$
Occupation times of the Poisson gas of Markovian loops and the free field:

We now want to relate the field of occupation times $L_x$, $x \in E$, to the Gaussian free field, i.e., cf. (2.2), the unique probability $P^G$ on $\mathbb{R}^E$ under which

(4.23) the canonical coordinates $\varphi_x$, $x \in E$, are a centered Gaussian field with covariance $E^G[\varphi_x\, \varphi_y] = g(x,y)$, for $x, y \in E$.

Here is the crucial link, due to Le Jan [17], between the occupation times of the Poisson gas of Markovian loops with the choice $\alpha = \frac{1}{2}$ and the free field (see also (0.11)):

Theorem 4.5.

(4.24) $(L_x)_{x\in E}$ under $\mathbb{P}_{\alpha=\frac{1}{2}}$ has the same law as $\bigl(\tfrac{1}{2}\varphi_x^2\bigr)_{x\in E}$ under $P^G$.

Proof. We consider $V\colon E \to \mathbb{R}_+$. On the one hand, we know by (4.11) that

(4.25) $\displaystyle E_{\frac{1}{2}}\Bigl[\exp\Bigl\{-\sum_{x\in E} V(x)\, L_x\Bigr\}\Bigr] = \det(I + GV)^{-\frac{1}{2}}.$

On the other hand, by (2.59) and the fact that $V - L$ is positive definite, see (1.39), (1.44), we find that

(4.26) $\displaystyle E^G\Bigl[\exp\Bigl\{-\sum_{x\in E} \frac{V(x)}{2}\, \varphi_x^2\Bigr\}\Bigr] = \det(I + GV)^{-\frac{1}{2}}.$

As a result of (4.25) and (4.26), the Laplace transforms of $(L_x)_{x\in E}$ under $\mathbb{P}_{\frac{1}{2}}$ and of $(\tfrac{1}{2}\varphi_x^2)_{x\in E}$ under $P^G$ coincide, and this yields (4.24).
The above result somehow complements the picture stemming from the isomorphism theorems of Dynkin and Eisenbaum, see (2.33) and (2.43). In particular, for any $x, y \in E$,

(4.27) $(L^z_\infty + L_z)_{z\in E}$ under $P_{x,y} \otimes \mathbb{P}_{\frac{1}{2}}$ has the same “law” as $\bigl(\tfrac{1}{2}\varphi_z^2\bigr)_{z\in E}$ under $\varphi_x\, \varphi_y\, P^G$ (a signed measure when $x \ne y$!).

As we will now see, (4.27) and (4.24) are very close to (0.10), i.e. the representation formula of Symanzik for the moments of random fields of the type (0.9), which we mentioned in the Introduction.
4.3 Symanzik’s representation formula
In this section we state and prove Symanzik’s representation formula. It expresses themoments of certain interacting fields in terms of a Poisson gas of loops and a collectionof paths interacting with a random potential. In a way this formula embodies one of thestarting points for the various developments covered in these notes.
We begin with a lemma concerning even moments of centered Gaussian vectors, which will be used in the derivation of Symanzik's formula.

Lemma 4.6. Let $k \ge 1$, and let $(\psi_1, \dots, \psi_{2k})$ be a centered Gaussian vector with covariance matrix $A_{i,j}$, $1 \le i, j \le 2k$. Then

(4.28) $\displaystyle E\Bigl[\prod_{i=1}^{2k} \psi_i\Bigr] = \sum_{D_1 \cup \dots \cup D_k = \{1,\dots,2k\}} \prod_{i=1}^k \mathrm{cov}(D_i),$

where the sum is over all pairings $D_1, \dots, D_k$ of $\{1, \dots, 2k\}$, i.e. over all unordered partitions of $\{1, \dots, 2k\}$ into $k$ disjoint sets each containing two elements, and where $\mathrm{cov}(D) \overset{\text{def}}{=} A_{s,t}$, for $D = \{s, t\}$. (Note that some $\psi_i$, $1 \le i \le 2k$, may coincide.)
Proof. For $\lambda_1, \dots, \lambda_{2k} \in \mathbb{R}$,
\[
E\Bigl[\exp\Bigl\{\sum_{i=1}^{2k} \lambda_i\, \psi_i\Bigr\}\Bigr] = \exp\Bigl\{\frac{1}{2}\, E\Bigl[\Bigl(\sum_{i=1}^{2k} \lambda_i\, \psi_i\Bigr)^2\Bigr]\Bigr\},
\]
so that by taking $2k$ partial derivatives,
\[
E\Bigl[\prod_{i=1}^{2k} \psi_i\Bigr] = \frac{\partial}{\partial\lambda_1} \dots \frac{\partial}{\partial\lambda_{2k}} \exp\Bigl\{\frac{1}{2}\, E\Bigl[\Bigl(\sum_{i=1}^{2k} \lambda_i\, \psi_i\Bigr)^2\Bigr]\Bigr\}\Bigr|_{\lambda_1,\dots,\lambda_{2k}=0}
= \sum_{n\ge 0} \frac{1}{2^n\, n!}\, \frac{\partial}{\partial\lambda_1} \dots \frac{\partial}{\partial\lambda_{2k}} \Bigl(\sum_{1\le i,j\le 2k} \lambda_i\, \lambda_j\, A_{i,j}\Bigr)^n\Bigr|_{\lambda_1,\dots,\lambda_{2k}=0}.
\]
In the above series, only the term $n = k$ survives, and when looking at
\[
\frac{\partial}{\partial\lambda_1} \dots \frac{\partial}{\partial\lambda_{2k}} \Bigl(\sum_{1\le i,j\le 2k} \lambda_i\, \lambda_j\, A_{i,j}\Bigr)^k\Bigr|_{\lambda_1,\dots,\lambda_{2k}=0},
\]
the only terms to survive are such that each of the $k$ factors gets hit by two (distinct) derivatives, which keep alive exactly two terms (corresponding to the choice of order between $i$ and $j$) in each factor. Keeping in mind that pairings correspond to unordered partitions, we obtain
\[
\frac{1}{2^k}\, \frac{1}{k!}\, \frac{\partial}{\partial\lambda_1} \dots \frac{\partial}{\partial\lambda_{2k}} \Bigl(\sum_{1\le i,j\le 2k} \lambda_i\, \lambda_j\, A_{i,j}\Bigr)^k\Bigr|_{\lambda_1=\dots=\lambda_{2k}=0} = \text{the right-hand side of (4.28)}.
\]
Our claim (4.28) follows.
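The pairings in (4.28) are easy to enumerate recursively, and the formula can be sanity-checked in the degenerate case where all $\psi_i$ equal one $N(0, \sigma^2)$ variable, where (4.28) reduces to the classical moment formula $E[\psi^{2k}] = (2k-1)!!\, \sigma^{2k}$. A sketch:

```python
def pairings(elems):
    """All unordered partitions of a list of distinct labels into pairs."""
    if not elems:
        return [[]]
    first, rest = elems[0], elems[1:]
    out = []
    for j, other in enumerate(rest):
        remaining = rest[:j] + rest[j + 1:]
        out += [[(first, other)] + p for p in pairings(remaining)]
    return out

# the number of pairings of {1,...,2k} is (2k-1)!! ; for 2k = 6 this is 5!! = 15
assert len(pairings(list(range(6)))) == 15

# all psi_i equal, with A_{s,t} = sigma^2 = 2: the Wick sum gives 3 sigma^4 for 2k = 4
A = 2.0
wick = sum(A ** len(p) for p in pairings(list(range(4))))
assert wick == 3 * A**2               # matches E[psi^4] = 3 sigma^4
```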
Remark 4.7. The Feynman graphs (or diagrams), see [11], p. 146, provide a graphical representation of the “pairings” in the above formula (4.28). One attaches a half-edge to each of $2k$ distinct vertices and one chooses a match for each vertex, so as to obtain one such pairing corresponding to a graph on the $2k$ vertices, where each vertex belongs to exactly one edge.

Fig. 4.1: When $2k = 6$, an example of a Feynman diagram where the half-edges are paired together. It illustrates an unordered pairing corresponding to $D_1 = \{2, 6\}$, $D_2 = \{1, 4\}$, $D_3 = \{3, 5\}$, see also Fig. 4.2.
We now consider a probability ν on R+, and define as below (0.9)
(4.29) h(u) =
∫ ∞
0
e−vu dν(v), for u ≥ 0,
the Laplace transform of ν.
We are interested in the probability measure on RE
(4.30) PG,h
=1
Zh
exp− 1
2E(ϕ, ϕ)
∏x∈E
h(ϕ2
x
2
)dϕ,
where the normalizing constant Zh is given by the formula:
(4.31) Zh =
∫
RE
exp− 1
2E(ϕ, ϕ)
∏x∈E
h(ϕ2
x
2
)dϕ (with dϕ =
∏x∈E
dϕx).
Similar measures arise in mathematical physics, in the context of Euclidean quantum fieldtheory, see [11]. In this context a natural choice for h would be h(u) = e−λu2+σu, withλ > 0 and σ ∈ R, see Chapter 17 of [11]. The assumption (4.29) however rules out such
a choice, since it implies that log h is convex (by Hölder's inequality). Nevertheless, it simplifies the presentation made below.
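That log h is convex for any Laplace transform of the form (4.29) can be checked numerically on an example; the two-atom measure ν below is purely illustrative.

```python
import math

def h(u, atoms):
    # Laplace transform (4.29) of a discrete probability nu = sum_i p_i * delta_{v_i}
    return sum(p * math.exp(-v * u) for v, p in atoms)

nu = [(0.3, 0.5), (2.0, 0.5)]  # illustrative: nu = (delta_{0.3} + delta_{2.0}) / 2
for a in (0.0, 0.5, 1.0, 2.0):
    for b in (0.1, 1.5, 3.0):
        mid = math.log(h((a + b) / 2, nu))
        avg = (math.log(h(a, nu)) + math.log(h(b, nu))) / 2
        assert mid <= avg + 1e-12  # midpoint convexity of log h
```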
We write ⟨−⟩_h for the expectation relative to P^{G,h}, i.e.
\[
(4.32)\qquad \langle F\rangle_h = \frac{\int_{\mathbb{R}^E} F(\varphi)\,e^{-\frac{1}{2}E(\varphi,\varphi)}\prod_{x\in E} h\big(\frac{\varphi_x^2}{2}\big)\,d\varphi}{\int_{\mathbb{R}^E} e^{-\frac{1}{2}E(\varphi,\varphi)}\prod_{x\in E} h\big(\frac{\varphi_x^2}{2}\big)\,d\varphi},
\]
when, for instance, F : ℝ^E → ℝ is a bounded measurable function. Note that when ν = δ₀, then h ≡ 1, and one recovers the free field:
\[
(4.33)\qquad P^{G,h=1} = P^G, \quad \text{in the notation of (2.2)}.
\]
Symanzik’s formula will provide a representation of the moments of the random field governed by P^{G,h} in terms of the occupation field of a Poisson gas of loops and of the local times of walks interacting via random potentials. A variant of the formula is, for instance, used in Section 3 of [2] to obtain bounds on critical temperatures.
As a last ingredient, we introduce an auxiliary probability space (Ω, A, P) endowed with a collection V(x, ω), x ∈ E, of non-negative random variables (the random potentials) such that
(4.34) under P, the variables V (x, ω), x ∈ E, are (non-negative) i.i.d. ν-distributed.
Let us point out that the probability Q in (0.10) coincides with P ⊗ P_{α=1/2}. We are now ready to state and prove Symanzik’s formula.
Fig. 4.2: The paths w₁, w₂, w₃ in E and the gas of loops interact through the random potentials (k = 3, and the z₁, . . . , z₆ are distinct).
Theorem 4.8. (Symanzik’s representation formula)

For any k ≥ 1, z₁, . . . , z_{2k} ∈ E, one has
\[
(4.35)\qquad \langle\varphi_{z_1}\cdots\varphi_{z_{2k}}\rangle_h
= \frac{\displaystyle\sum_{\text{pairings of }\{1,\dots,2k\}} E_{x_1,y_1}\otimes\cdots\otimes E_{x_k,y_k}\otimes E\otimes E_{\frac12}\Big[e^{-\sum_{x\in E} V(x,\omega)\,(L_x(\omega)+L^x_\infty(w_1)+\cdots+L^x_\infty(w_k))}\Big]}{E\otimes E_{\frac12}\Big[e^{-\sum_{x\in E} V(x,\omega)\,L_x(\omega)}\Big]},
\]
where {x_i, y_i} = {z_ℓ; ℓ ∈ D_i}, 1 ≤ i ≤ k, and D₁, . . . , D_k stands for the (unordered) pairing of {1, . . . , 2k}, and w_i denotes the integration variable under P_{x_i,y_i}.
Proof. By (3.60), we know that for ω ∈ Ω, x, y ∈ E,
\[
(4.36)\qquad E_{x,y}\Big[e^{-\sum_{z\in E} V(z,\omega)\,L^z_\infty}\Big] = g_{V(\cdot,\omega)}(x,y),
\]
which is a symmetric function of x, y, so that the expression under the sum on the right-hand side of (4.35) only depends on the pairing and is therefore well-defined.
Let F be a bounded measurable function on ℝ^E. Denote by (F)_h the numerator of (4.32), so that ⟨F⟩_h = (F)_h / (1)_h. We have the identity:
\[
(4.37)\qquad
\begin{aligned}
(F)_h &\overset{(4.34)}{=} E\Big[\int_{\mathbb{R}^E} F(\varphi)\,e^{-\frac{1}{2}[E(\varphi,\varphi)+\sum_{x\in E} V(x,\omega)\varphi_x^2]}\,d\varphi\Big]\\
&= E\otimes E^{G,V(\cdot,\omega)}\Big[F(\varphi)\,(2\pi)^{\frac{|E|}{2}}\,(\det G_{V(\cdot,\omega)})^{\frac12}\Big]\\
&\overset{(4.11)}{=} E\otimes E_{\frac12}\otimes E^{G,V(\cdot,\omega)}\Big[F(\varphi)\,e^{-\sum_{x\in E} V(x,\omega)\,L_x(\omega)}\Big]\,(2\pi)^{\frac{|E|}{2}}\,(\det G)^{\frac12},
\end{aligned}
\]
where the second line is a consequence of (2.5), (2.36), and (V(·, ω) − L)^{-1} = G_{V(·,ω)}, see below (3.45).
As a result of (4.28), we thus find that
\[
(4.38)\qquad
\begin{aligned}
(\varphi_{z_1}\cdots\varphi_{z_{2k}})_h
&= \sum_{\text{pairings of }\{1,\dots,2k\}} E\otimes E_{\frac12}\Big[G_{V(\cdot,\omega)}(x_1,y_1)\cdots G_{V(\cdot,\omega)}(x_k,y_k)\,e^{-\sum_{x\in E} V(x,\omega)\,L_x(\omega)}\Big]\times(2\pi)^{\frac{|E|}{2}}\,(\det G)^{\frac12}\\
&\overset{(4.36)}{=} \sum_{\text{pairings of }\{1,\dots,2k\}} E_{x_1,y_1}\otimes\cdots\otimes E_{x_k,y_k}\otimes E\otimes E_{\frac12}\Big[e^{-\sum_{x\in E} V(x,\omega)\,(L_x(\omega)+L^x_\infty(w_1)+\cdots+L^x_\infty(w_k))}\Big]\times(2\pi)^{\frac{|E|}{2}}\,(\det G)^{\frac12}.
\end{aligned}
\]
In the same way, we find by (4.37) that
\[
(4.39)\qquad (1)_h = E\otimes E_{\frac12}\Big[e^{-\sum_{x\in E} V(x,\omega)\,L_x(\omega)}\Big]\,(2\pi)^{\frac{|E|}{2}}\,(\det G)^{\frac12}.
\]
Taking the ratio of (4.38) and (4.39) precisely yields (4.35).
Remark 4.9. When k = 1, Symanzik’s representation formula (4.35) becomes
\[
\langle\varphi_x\varphi_y\rangle_h
= \frac{E_{x,y}\otimes E\otimes E_{\frac12}\Big[e^{-\sum_{z\in E} V(z,\omega)\,(L_z(\omega)+L^z_\infty(w))}\Big]}{E\otimes E_{\frac12}\Big[e^{-\sum_{z\in E} V(z,\omega)\,L_z(\omega)}\Big]}.
\]
We explain below another way to obtain this identity. Recall that (4.27) combines Dynkin’s isomorphism theorem and the identity in law of (L_x)_{x∈E} under P_{1/2} with (½ φ_x²)_{x∈E} under P^G, stated in Theorem 4.5.
By (4.27) the numerator equals E ⊗ E^G[φ_x φ_y e^{−½ Σ_{z∈E} V(z,ω) φ_z²}], whereas by (4.24) the denominator equals E ⊗ E^G[e^{−½ Σ_{z∈E} V(z,ω) φ_z²}]. Keeping in mind (4.29) and (4.34), one easily recovers the left-hand side of the above equality.
4.4 Some identities
In this section we discuss some further formulas concerning the Poisson gas of Markovian loops. In particular, given two disjoint subsets of E, we derive a formula for the probability that no loop of the Poisson gas visits both subsets. In the next section, as an application, we will link the so-called random interlacements with various notions of “loops going through infinity” for the Poisson cloud of Markovian loops.
Given U ⊆ E, we can consider the field of occupation times of loops contained in U:
\[
(4.40)\qquad L^U_x(\omega) = \langle 1\{\gamma^*\subseteq U\}\,\omega,\,L_x\rangle = \sum_{i\in I} 1\{\gamma_i^*\subseteq U\}\,L_x(\gamma_i^*), \quad \text{if } \omega = \sum_{i\in I}\delta_{\gamma_i^*}\in\Omega,\ x\in E,
\]
and we used the slightly informal notation 1{γ*_i ⊆ U} in place of 1{γ_i ∈ L^{r,U}}, where γ_i ∈ L^r is any rooted loop such that π*(γ_i) = γ*_i, and L^{r,U} has been defined in (3.37).
Proposition 4.10. (α > 0)

Given K ⊆ E, U = E\K, and V : E → ℝ₊,
\[
(4.41)\qquad E_\alpha\Big[e^{-\sum_{x\in E} V(x)\,(L_x-L^U_x)}\Big]
= \Big(\frac{\det G_V}{\det G}\,\frac{\det G_U}{\det G_{U,V}}\Big)^{\alpha}
= \Big(\frac{\det_{K\times K} G_V}{\det_{K\times K} G}\Big)^{\alpha}
\]
(where det_{K×K} A := det(A|_{K×K}), for A an E × E-matrix, and we write det G_U, resp. det G_{U,V}, in place of det_{U×U} G_U, resp. det_{U×U} G_{U,V}, with the notation from below (3.58)).
If K₁ ∩ K₂ = ∅, and U_i = E\K_i, for i = 1, 2, then
\[
(4.42)\qquad P_\alpha[\text{no loop intersects both } K_1 \text{ and } K_2]
= \Big(\frac{\det G}{\det G_{U_1}}\,\frac{\det G_{U_1\cap U_2}}{\det G_{U_2}}\Big)^{-\alpha}
= \Big(\frac{\det_{K_1\times K_1} G_{U_2}}{\det_{K_1\times K_1} G}\Big)^{\alpha}
\]
(and we used a similar convention as above for det G_U).
Proof.

• (4.41): Observe that L_x − L^U_x, x ∈ E, is the field of occupation times of the loops which are not contained in U, whereas L^U_x, x ∈ U, is the field of occupation times of the loops which are contained in U:
\[
L_x - L^U_x = \langle 1\{\gamma^*\subseteq U\}^c\,\omega,\,L_x\rangle, \quad x\in E,
\qquad
L^U_x = \langle 1\{\gamma^*\subseteq U\}\,\omega,\,L_x\rangle, \quad x\in E,
\]
and by (4.5) we see that they constitute independent collections of random variables. So we find that for V : E → ℝ₊,
\[
\Big(\frac{\det G_V}{\det G}\Big)^{\alpha}
\overset{(4.11)}{=} E_\alpha\Big[e^{-\sum_{x\in E} V(x)L_x}\Big]
\overset{\text{independence}}{=} E_\alpha\Big[e^{-\sum_{x\in E} V(x)(L_x-L^U_x)}\Big]\times E_\alpha\Big[e^{-\sum_{x\in E} V(x)L^U_x}\Big]
\overset{(3.58),(4.7)}{=} E_\alpha\Big[e^{-\sum_{x\in E} V(x)(L_x-L^U_x)}\Big]\,\Big(\frac{\det G_{U,V}}{\det G_U}\Big)^{\alpha}.
\]
The first equality in (4.41) follows.
To prove the second equality in (4.41) we will use the next lemma:
Lemma 4.11. (Jacobi’s determinant identity)

Assume that A is an invertible (k + ℓ) × (k + ℓ)-matrix and that
\[
A = \begin{bmatrix} W & X \\ Y & Z \end{bmatrix},
\qquad
A^{-1} = \begin{bmatrix} B & C \\ D & E \end{bmatrix},
\]
where B and W are k × k matrices. Then
\[
(4.43)\qquad \det Z = \det A\,\det B.
\]
Proof. We know that
\[
I = \begin{bmatrix} B & C \\ D & E \end{bmatrix}\begin{bmatrix} W & X \\ Y & Z \end{bmatrix}
= \begin{bmatrix} BW + CY & BX + CZ \\ DW + EY & DX + EZ \end{bmatrix},
\]
and hence
\[
BX + CZ = 0 \ (\text{a } k\times\ell\text{-matrix}), \qquad DX + EZ = I \ (\text{an } \ell\times\ell\text{-matrix}).
\]
As a result, we find that
\[
\begin{bmatrix} B & C \\ D & E \end{bmatrix}\begin{bmatrix} I & X \\ 0 & Z \end{bmatrix}
= \begin{bmatrix} B & BX + CZ \\ D & DX + EZ \end{bmatrix}
= \begin{bmatrix} B & 0 \\ D & I \end{bmatrix}.
\]
Taking determinants, we find that
\[
\det A^{-1}\,\det Z = \det B,
\]
and the claim (4.43) follows.
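A quick numerical check of (4.43); the matrix below is arbitrary, made well-conditioned by adding a multiple of the identity (a sketch of ours, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
k, l = 2, 3
A = rng.standard_normal((k + l, k + l)) + (k + l) * np.eye(k + l)
Ainv = np.linalg.inv(A)
Z = A[k:, k:]        # lower-right l x l block of A
B = Ainv[:k, :k]     # upper-left k x k block of A^{-1}
assert np.isclose(np.linalg.det(Z), np.linalg.det(A) * np.linalg.det(B))  # (4.43)
```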
We choose A = −L, A^{-1} = G, B = G|_{K×K}, Z = −L|_{U×U}, so that
\[
\det Z = \det(-L|_{U\times U}) = (\det G_U)^{-1}, \qquad \det A = (\det G)^{-1},
\]
and (4.43) yields
\[
(4.44)\qquad \frac{\det G}{\det G_U} = \det G|_{K\times K}\ \Big(\overset{\text{notation}}{=} \det_{K\times K} G\Big).
\]
In the same way, with A = V − L, A^{-1} = G_V, B = G_V|_{K×K}, Z = (V − L)|_{U×U}, we find that
\[
(4.45)\qquad \frac{\det G_V}{\det G_{U,V}} = \det G_V|_{K\times K}.
\]
Coming back to the expression after the first equality in (4.41), we see that this expression equals (det G_V|_{K×K} / det G|_{K×K})^α, and this completes the proof of (4.41).
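Identity (4.44) can be verified numerically on a small weighted graph with killing; the chain below (four sites, unit weights, unit killing measure) is an illustrative choice of ours, not one from the text.

```python
import numpy as np

n = 4
c = np.zeros((n, n))
for x in range(n - 1):
    c[x, x + 1] = c[x + 1, x] = 1.0   # chain with unit weights
kappa = np.ones(n)                    # killing measure; makes -L invertible
lam = c.sum(axis=1) + kappa
L = c - np.diag(lam)                  # generator; the Green function is G = (-L)^{-1}
G = np.linalg.inv(-L)

K, U = [0, 3], [1, 2]                 # E = K union U
G_U = np.linalg.inv(-L[np.ix_(U, U)]) # Green function of the walk killed outside U
assert np.isclose(np.linalg.det(G) / np.linalg.det(G_U),
                  np.linalg.det(G[np.ix_(K, K)]))   # identity (4.44)
```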
• (4.42): We first note that the left-hand side of (4.42) equals
\[
(4.46)\qquad
\begin{aligned}
&P_\alpha\Big[\Big\langle \omega,\Big(\sum_{x\in K_1} L_x\Big)\Big(\sum_{x\in K_2} L_x\Big)\Big\rangle = 0\Big]
= P_\alpha\Big[\Big\langle 1\{N>1\}\,\omega,\Big(\sum_{x\in K_1} L_x\Big)\Big(\sum_{x\in K_2} L_x\Big)\Big\rangle = 0\Big]\\
&\overset{(4.4)}{=} \exp\Big\{-\alpha\,\mu^*\Big[\gamma^*:\ \sum_{x\in K_1} L_x(\gamma^*) > 0 \text{ and } \sum_{x\in K_2} L_x(\gamma^*) > 0, \text{ and } N(\gamma^*) > 1\Big]\Big\}.
\end{aligned}
\]
Now we have μ*[N(γ*) > 1] < ∞ (it equals −log det(I − P) by (3.19) and (3.67)), so we can write:
\[
(4.47)\qquad
\begin{aligned}
\mu^*\Big[N>1,\ \sum_{x\in K_1} L_x > 0 \text{ and } \sum_{x\in K_2} L_x > 0\Big]
&= \mu^*\Big[N>1,\ \sum_{x\in K_1} L_x > 0\Big] + \mu^*\Big[N>1,\ \sum_{x\in K_2} L_x > 0\Big]\\
&\quad - \mu^*\Big[N>1,\ \sum_{x\in K_1\cup K_2} L_x > 0\Big].
\end{aligned}
\]
The next lemma will be useful to evaluate the above terms.
Lemma 4.12. (α > 0, K ⊆ E, U = E\K)

With the notation of (4.15), one has
\[
(4.48)\qquad P_\alpha\Big[\sum_{x\in K} L_x = 0\Big]
= \Big(\det G|_{K\times K}\prod_{x\in K}\lambda_x\Big)^{-\alpha}
= \Big(\frac{\det G}{\det G_U}\prod_{x\in K}\lambda_x\Big)^{-\alpha},
\]
\[
(4.49)\qquad \mu^*\Big[N>1,\ \sum_{x\in K} L_x > 0\Big]
= \log\Big(\det G|_{K\times K}\prod_{x\in K}\lambda_x\Big).
\]
Proof. Note that (4.49) is a direct consequence of (4.48) and the identity
\[
P_\alpha\Big[\sum_{x\in K} L_x = 0\Big] \overset{(4.4)}{=} \exp\Big\{-\alpha\,\mu^*\Big[N>1,\ \sum_{x\in K} L_x > 0\Big]\Big\}.
\]
We hence only need to prove (4.48). We use (4.20) with the choice V = ρ 1_K, where ρ ↑ ∞. We then find that
\[
(4.50)\qquad P_\alpha\Big[\sum_{x\in K} L_x = 0\Big]
= \lim_{\rho\to\infty} E_\alpha\Big[e^{-\rho\sum_{x\in K} L_x}\Big]
\overset{(4.20)}{=} \lim_{\rho\to\infty}\Big(\frac{\det(I - P^{V=\rho 1_K})}{\det(I - P)}\Big)^{-\alpha}.
\]
Observe that P^{V=ρ1_K} → 1_U P as ρ → ∞ (i.e. lim_{ρ→∞} P^{V=ρ1_K} f(x) = 1_U(x)(Pf)(x), for all x ∈ E), by the definition of P^V below (4.19). In the decomposition of E into K and U, the matrix of I − 1_U P is block triangular:
\[
I - 1_U P = \begin{bmatrix} I|_{K\times K} & 0 \\ -P|_{U\times K} & (I-P)|_{U\times U} \end{bmatrix},
\]
and, therefore,
\[
\lim_{\rho\to\infty}\Big(\frac{\det(I - P^{V=\rho 1_K})}{\det(I - P)}\Big)^{-\alpha}
= \Big(\frac{\det(I-P)|_{U\times U}}{\det(I-P)}\Big)^{-\alpha}.
\]
Now, det G_U = (det(−L)|_{U×U})^{-1} = (∏_{x∈U} λ_x)^{-1} (det(I−P)|_{U×U})^{-1}, and similarly,
\[
\det G = \Big(\prod_{x\in E}\lambda_x\Big)^{-1}\big(\det(I-P)\big)^{-1}.
\]
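The determinant identity det(I − 1_U P) = det((I − P)|_{U×U}) used above can be checked numerically; the strictly sub-stochastic matrix P below is an arbitrary illustrative choice of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True) + 0.5      # strictly sub-stochastic
K, U = [0, 2], [1, 3, 4]
one_U = np.diag([1.0 if x in U else 0.0 for x in range(n)])
M = np.eye(n) - one_U @ P                    # limit of I - P^{V = rho 1_K} as rho -> infinity
assert np.isclose(np.linalg.det(M),
                  np.linalg.det((np.eye(n) - P)[np.ix_(U, U)]))
```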
Coming back to (4.50), we have shown that
\[
P_\alpha\Big[\sum_{x\in K} L_x = 0\Big]
= \Big(\frac{\det G\,\prod_{x\in E}\lambda_x}{\det G_U\,\prod_{x\in U}\lambda_x}\Big)^{-\alpha}
= \Big(\frac{\det G}{\det G_U}\prod_{x\in K}\lambda_x\Big)^{-\alpha},
\]
and the proof of (4.48) is completed with (4.44).
We now return to (4.46), (4.47), and find that (recall K₁ ∩ K₂ = ∅)
\[
\begin{aligned}
P_\alpha[\text{no loop intersects both } K_1 \text{ and } K_2]
&= \Big(\frac{\det G}{\det G_{U_1}}\,\frac{\det G}{\det G_{U_2}}\,\frac{\det G_{U_1\cap U_2}}{\det G}\Big)^{-\alpha}
= \Big(\frac{\det G}{\det G_{U_1}}\,\frac{\det G_{U_1\cap U_2}}{\det G_{U_2}}\Big)^{-\alpha}\\
&\overset{(4.44)}{=} \Big(\frac{\det_{K_1\times K_1} G}{\det_{K_1\times K_1} G_{U_2}}\Big)^{-\alpha},
\end{aligned}
\]
since K₁ = U₂\(U₁ ∩ U₂). This concludes the proof of (4.42) and of Proposition 4.10.
Special case: loops going through a point
We specialize the above formula (4.42) to find the probability that the loops in the Poisson cloud going through a base point x all avoid some K not containing x:
Fig. 4.3: The base point x and the set K in E.
Corollary 4.13. (α > 0)

Consider x ∈ E, and K ⊆ E not containing x. Then
\[
(4.51)\qquad P_\alpha[\text{all loops going through } x \text{ do not intersect } K]
= \Big(1 - \frac{E_x[H_K < \infty,\ g(X_{H_K}, x)]}{g(x,x)}\Big)^{\alpha}.
\]
Proof. In the notation of (4.42) we pick K₁ = {x}, K₂ = K. Setting U = E\K, (4.42) yields that the left-hand side of (4.51) equals
\[
\Big(\frac{g_U(x,x)}{g(x,x)}\Big)^{\alpha}
\overset{(1.49)}{=} \Big(\frac{g(x,x) - E_x[H_K<\infty,\ g(X_{H_K},x)]}{g(x,x)}\Big)^{\alpha},
\]
and (4.51) follows.
4.5 Some links between Markovian loops and random interlacements
In this section we discuss various limiting procedures making sense of the notion of “loops going through infinity”, and see random interlacements appear as a limit object.
We begin with the case of ℤ^d, d ≥ 3. Random interlacements have been introduced in [27], and we refer to [27] for a more detailed discussion of the Poisson point process of random interlacements. We will recover random interlacements on ℤ^d, d ≥ 3, by the consideration of “loops going through infinity”. More precisely, we consider d ≥ 3, and
\[
(4.52)\qquad U_n,\ n\ge 1,\ \text{a non-decreasing sequence of finite connected subsets of } \mathbb{Z}^d, \text{ with } \bigcup_n U_n = \mathbb{Z}^d,
\]
as well as
\[
(4.53)\qquad x_*\in\mathbb{Z}^d,\ \text{a “base point”}.
\]
For fixed n ≥ 1, we endow the connected subset U_n, playing the role of E in (1.1), with the weights:
\[
c^n_{x,y} = \frac{1}{2d}\,1\{|x-y|=1\}, \quad \text{for } x,y\in U_n,
\]
and with the killing measure:
\[
\kappa^n_x = \sum_{y\in\mathbb{Z}^d\setminus U_n}\frac{1}{2d}\,1\{|x-y|=1\}, \quad \text{for } x\in U_n,
\]
very much in the spirit of what is done in Example 2) above (1.18) (except for the fact that we now replace 1 by 1/(2d)). Note that λ^n_x = Σ_{y∈U_n} c^n_{x,y} + κ^n_x = 1, for all x ∈ U_n.
We write Ω_n for the space corresponding to (4.2), of pure point measures on the set of unrooted loops contained in U_n, and P^n_α for the corresponding Poisson gas of Markovian loops at level α, see (4.6).
Fig. 4.4: An unrooted loop contained in U_n and going through x_*: first the limit n → ∞, then the limit x_* → ∞.
We want to successively take the limit n → ∞, and then x_* → ∞. The first limit corresponds to the construction of a Poisson gas of unrooted loops on ℤ^d. We will not really discuss this Poisson measure, which can be defined in a rather similar fashion to what we have done at the beginning of this chapter, but of course escapes the set-up of a finite state space E with weights and killing measure satisfying (1.1) - (1.5).
For the second limit (i.e. x_* → ∞), we will also adjust the level α as a function of x_*. The fashion in which we tune α to x_* is dictated by the Green function of simple random walk on ℤ^d:
\[
(4.54)\qquad g_{\mathbb{Z}^d}(x,y) = E^{\mathbb{Z}^d}_x\Big[\int_0^\infty 1\{X_t = y\}\,dt\Big], \quad \text{for } x,y\in\mathbb{Z}^d,
\]
where P^{ℤ^d}_x denotes the canonical law of continuous-time simple random walk with jump rate 1 on ℤ^d starting at x, and X_t, t ≥ 0, the canonical process. Taking advantage of translation invariance we introduce the function
\[
(4.54')\qquad g(x) \overset{\text{def}}{=} g_{\mathbb{Z}^d}(0,x), \quad \text{for } x\in\mathbb{Z}^d \quad (\text{so that } g_{\mathbb{Z}^d}(x,y) = g(y-x)).
\]
The function g(·) is known to be positive, finite (recall d ≥ 3), symmetric, i.e. g(−x) = g(x), and to have the asymptotic behavior
\[
(4.55)\qquad g(x) \sim c_d\,|x|^{-(d-2)}, \ \text{as } x\to\infty, \quad \text{where } c_d = \frac{2}{(d-2)\,|B(0,1)|} = \frac{d}{2}\,\Gamma\Big(\frac{d}{2}-1\Big)\,\frac{1}{\pi^{d/2}},
\]
where |x| stands for the Euclidean norm of x, and |B(0, 1)| for the volume of the unit ball of ℝ^d (see for instance [15], p. 31).
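The two expressions for c_d in (4.55) agree; a quick check of ours, using the standard formula |B(0, 1)| = π^{d/2}/Γ(d/2 + 1):

```python
import math

def ball_volume(d):
    # volume of the unit ball of R^d
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

for d in range(3, 11):
    c1 = 2.0 / ((d - 2) * ball_volume(d))
    c2 = (d / 2) * math.gamma(d / 2 - 1) / math.pi ** (d / 2)
    assert math.isclose(c1, c2)
# for d = 3 this gives the familiar g(x) ~ 3/(2 pi |x|), i.e. c_3 = 3/(2 pi)
assert math.isclose(2.0 / (1 * ball_volume(3)), 3 / (2 * math.pi))
```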
We will choose α according to the formula:
\[
(4.56)\qquad \alpha = \frac{u\,g(0)}{c_d^2}\,|x_*|^{2(d-2)}, \quad \text{with } u\ge 0.
\]
We introduce for ω ∈ Ω_n the subset of U_n of points visited by the unrooted loops in the support of the pure point measure ω which pass through the base point x_*:
\[
(4.57)\qquad J_{n,x_*}(\omega) = \big\{z\in U_n;\ \text{there is a } \gamma^* \text{ in the support of } \omega \text{ which goes through } x_* \text{ and } z\big\}, \quad \text{for } \omega\in\Omega_n.
\]
Note that J_{n,x_*}(ω) = ∅ when x_* ∉ U_n, and J_{n,x_*}(ω) ∋ x_* when at least one γ* in the support of ω goes through x_*.
For the next result we will use the fact that (1.57) and (1.58), in the case of continuous-time simple random walk with jump rate 1 on ℤ^d, take the following form:
\[
(4.58)\qquad \text{when } K \text{ is a finite subset of } \mathbb{Z}^d,\quad P^{\mathbb{Z}^d}_x[H_K<\infty] = \sum_{y\in K} g_{\mathbb{Z}^d}(x,y)\,e_K(y), \quad \text{for } x\in\mathbb{Z}^d
\]
(with H_K as in (1.45)), where the equilibrium measure
\[
e_K(y) = P^{\mathbb{Z}^d}_y[\widetilde{H}_K = \infty]\,1_K(y), \quad y\in\mathbb{Z}^d
\]
(with the hitting time \widetilde{H}_K as in (1.45)),
is the unique measure supported on K such that the equality in (4.58) holds for all x ∈ K. Its mass cap_{ℤ^d}(K) is the capacity of K.
The next theorem relates the so-called “random interlacement at level u” to the set J_{n,x_*} when n → ∞, and then x_* → ∞, under the measure P^n_α, with α as in (4.56). In this set of notes we will not introduce the full formalism of the Poisson point process of random interlacements, but only content ourselves with the description of the random interlacement at level u, see Remark 4.15 below.
Theorem 4.14. (d ≥ 3)

For u ≥ 0 and K ⊆ ℤ^d finite, one has
\[
(4.59)\qquad \lim_{x_*\to\infty}\ \lim_{n\to\infty}\ P^n_{\alpha = \frac{u\,g(0)}{c_d^2}|x_*|^{2(d-2)}}\big[J_{n,x_*}\cap K = \emptyset\big] = e^{-u\,\mathrm{cap}_{\mathbb{Z}^d}(K)}.
\]
Proof. By (4.51) we have, as soon as x_* ∈ U_n and x_* ∉ K,
\[
P^n_\alpha[J_{n,x_*}\cap K = \emptyset]
= P^n_\alpha[\text{all loops going through } x_* \text{ do not meet } K]
= \Big(1 - \frac{E^{\mathbb{Z}^d}_{x_*}[H_K < T_{U_n},\ g_{U_n}(X_{H_K}, x_*)]}{g_{U_n}(x_*,x_*)}\Big)^{\alpha},
\]
with g_{U_n}(·, ·) the Green function of simple random walk on ℤ^d killed when exiting U_n. Clearly, by monotone convergence,
\[
g_{U_n}(x,y) \uparrow g_{\mathbb{Z}^d}(x,y), \quad \text{for } x,y\in\mathbb{Z}^d, \text{ when } n\to\infty.
\]
So we see that when x_* ∉ K:
\[
(4.60)\qquad \lim_n P^n_\alpha[J_{n,x_*}\cap K = \emptyset]
= \Big(1 - \frac{E^{\mathbb{Z}^d}_{x_*}[H_K<\infty,\ g_{\mathbb{Z}^d}(X_{H_K}, x_*)]}{g_{\mathbb{Z}^d}(x_*,x_*)}\Big)^{\alpha}
\]
(the formula also holds when x_* ∈ K).
Now g_{ℤ^d}(x_*, x_*) = g(0), and, as x_* → ∞, we have by (4.55)
\[
E^{\mathbb{Z}^d}_{x_*}[H_K<\infty,\ g_{\mathbb{Z}^d}(X_{H_K}, x_*)]
\overset{(4.55)}{\sim} P^{\mathbb{Z}^d}_{x_*}[H_K<\infty]\;c_d\,|x_*|^{-(d-2)}
\overset{(4.58),(4.55)}{\sim} \big(c_d\,|x_*|^{-(d-2)}\big)^2\,\mathrm{cap}_{\mathbb{Z}^d}(K),
\]
and, in particular, with α as in (4.56),
\[
\lim_{x_*\to\infty}\frac{\alpha}{g(0)}\,E^{\mathbb{Z}^d}_{x_*}[H_K<\infty,\ g_{\mathbb{Z}^d}(X_{H_K}, x_*)] = u\,\mathrm{cap}_{\mathbb{Z}^d}(K).
\]
Coming back to (4.60) we readily obtain (4.59).
Remark 4.15. One can define a translation invariant random subset of ℤ^d, denoted by I^u, the so-called random interlacement at level u, see [27], with distribution characterized by the identity:
\[
(4.61)\qquad \mathbb{P}[I^u\cap K = \emptyset] = e^{-u\,\mathrm{cap}_{\mathbb{Z}^d}(K)}, \quad \text{for all } K\subseteq\mathbb{Z}^d \text{ finite}.
\]
Coming back to (4.59), note that for any disjoint finite subsets K, K′ of ℤ^d one has, by an inclusion-exclusion argument:
\[
P^n_\alpha[J_{n,x_*}\cap K = \emptyset \text{ and } J_{n,x_*}\supseteq K']
= E^n_\alpha\Big[\prod_{x\in K} 1_{J^c_{n,x_*}}(x)\prod_{x\in K'}\big(1 - 1_{J^c_{n,x_*}}(x)\big)\Big]
= \sum_{A\subseteq K'}(-1)^{|A|}\,P^n_\alpha[J_{n,x_*}\cap(K\cup A) = \emptyset],
\]
where we expanded the product over K′ to obtain the last equality.
In the same fashion, we see that for disjoint finite subsets K, K′ of ℤ^d we have
\[
\mathbb{P}[I^u\cap K = \emptyset \text{ and } I^u\supseteq K']
= \sum_{A\subseteq K'}(-1)^{|A|}\,\mathbb{P}[I^u\cap(K\cup A) = \emptyset]
= \sum_{A\subseteq K'}(-1)^{|A|}\,e^{-u\,\mathrm{cap}_{\mathbb{Z}^d}(K\cup A)}.
\]
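The inclusion-exclusion identity above holds for any random finite set; below is a small exhaustive check of ours, over an arbitrary (illustrative) distribution on the subsets of {0, 1, 2, 3}.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

ground = (0, 1, 2, 3)
weights = {S: 1 + sum(S) for S in subsets(ground)}   # arbitrary positive weights
total = sum(weights.values())

def prob(event):
    # probability that the random set S satisfies the predicate `event`
    return sum(w for S, w in weights.items() if event(frozenset(S))) / total

K, Kp = frozenset({0}), frozenset({2, 3})            # disjoint, as in the text
lhs = prob(lambda S: not (S & K) and Kp <= S)
rhs = sum((-1) ** len(A) * prob(lambda S, A=frozenset(A): not (S & (K | A)))
          for A in subsets(Kp))
assert abs(lhs - rhs) < 1e-12
```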
As a result, Theorem 4.14 can be seen to imply that under the measure P^n_α, with α as in (4.56), the law of J_{n,x_*} converges in an appropriate sense (i.e. convergence of all finite dimensional marginal distributions) to the law of I^u, as n → ∞, and then x_* → ∞.
We continue with the discussion of links between random interlacements and “loops going through infinity” in the Poisson cloud of Markovian loops.

We begin with a variation on (4.59) in the context of ℤ^d, d ≥ 3, where we will give a different meaning to the informal notion of “loops going through infinity”.
We consider a sequence U_n, n ≥ 1, as in (4.52), of finite connected subsets of ℤ^d, d ≥ 3, which increases (in the wide sense) to ℤ^d. The role of the base point x_*, cf. (4.53), is now replaced by the complement of the Euclidean ball:
\[
(4.62)\qquad B_R \overset{\text{def}}{=} \{x\in\mathbb{Z}^d;\ |x|\le R\}, \quad \text{with } R > 0.
\]
Fig. 4.5: An unrooted loop contained in U_n and touching B^c_R: first the limit n → ∞, then the limit R → ∞.
By analogy with (4.57), we introduce for ω ∈ Ω_n,
\[
(4.63)\qquad K_{n,R}(\omega) = \big\{z\in U_n;\ \text{there is a } \gamma^* \text{ in the support of } \omega \text{ which goes through } B^c_R \text{ and } z\big\}.
\]
We now choose α according to
\[
(4.64)\qquad \alpha = \frac{u\,R^{d-2}}{c_d}, \quad \text{with } u\ge 0, \text{ and } c_d \text{ as in (4.55)}.
\]
The statement corresponding to (4.59) is now the following. By the argument of Remark 4.15, it can be interpreted as a convergence of the law of K_{n,R} to the law of I^u, as n → ∞ and then R → ∞.
Theorem 4.16. (d ≥ 3)

For u ≥ 0 and K ⊆ ℤ^d finite, one has
\[
(4.65)\qquad \lim_{R\to\infty}\lim_{n\to\infty} P^n_{\alpha=\frac{u\,R^{d-2}}{c_d}}\big[K_{n,R}\cap K = \emptyset\big] = e^{-u\,\mathrm{cap}_{\mathbb{Z}^d}(K)}.
\]
Proof. We assume that R is large enough so that K ⊆ B_R, and n sufficiently large so that B_R ⊆ U_n. In the notation of (4.42), we choose K₁ = K and K₂ = U_n\B_R, so that K₁ ∩ K₂ = ∅. Then (4.42) yields that
\[
P^n_\alpha[K_{n,R}\cap K = \emptyset] = \Big(\frac{\det_{K\times K} G_{B_R}}{\det_{K\times K} G_{U_n}}\Big)^{\alpha}.
\]
By (1.49), we write
\[
\det_{K\times K} G_{B_R} = \det(A_n - B^{(R)}_n),
\]
where A_n is the K × K-matrix:
\[
A_n(x,y) = g_{U_n}(x,y), \quad \text{for } x,y\in K,
\]
and B^{(R)}_n the K × K-matrix
\[
B^{(R)}_n(x,y) = E^{\mathbb{Z}^d}_x[H_{B^c_R} < T_{U_n},\ g_{U_n}(X_{H_{B^c_R}}, y)], \quad \text{for } x,y\in K.
\]
Likewise, by the above definitions, we find that
\[
\det_{K\times K} G_{U_n} = \det(A_n).
\]
When n → ∞,
\[
(4.66)\qquad \lim_n A_n = A, \quad \text{where } A(x,y) = g_{\mathbb{Z}^d}(x,y), \ \text{for } x,y\in K,
\]
\[
(4.67)\qquad \lim_n B^{(R)}_n = B^{(R)}, \quad \text{where } B^{(R)}(x,y) = E^{\mathbb{Z}^d}_x[H_{B^c_R}<\infty,\ g_{\mathbb{Z}^d}(X_{H_{B^c_R}}, y)], \ \text{for } x,y\in K.
\]
The matrix A is known to be invertible (one can base this on a similar calculation as in the proof of (1.35), see also [25], P2, p. 292). So we find that:
\[
(4.68)\qquad \lim_{n\to\infty} P^n_\alpha[K_{n,R}\cap K = \emptyset]
= \Big(\frac{\det(A - B^{(R)})}{\det A}\Big)^{\alpha}
= \big(\det(I - A^{-1}B^{(R)})\big)^{\alpha}.
\]
For x ∈ K, P^{ℤ^d}_x-a.s., H_{B^c_R} < ∞ and X_{H_{B^c_R}} ∈ ∂B_R, so that
\[
B^{(R)}(x,y) = E^{\mathbb{Z}^d}_x[g_{\mathbb{Z}^d}(X_{H_{B^c_R}}, y)]
\overset{(4.55)}{\sim} \frac{c_d}{R^{d-2}}, \quad \text{for } x,y\in K, \ \text{as } R\to\infty.
\]
It follows that
\[
(4.69)\qquad \det(I - A^{-1}B^{(R)}) = 1 - \frac{c_d}{R^{d-2}}\,\mathrm{Tr}\big(A^{-1}\,\mathbf{1}_{K\times K}\big) + o\Big(\frac{1}{R^{d-2}}\Big), \quad \text{as } R\to\infty,
\]
where 𝟏_{K×K} denotes the K × K matrix with all coefficients equal to 1.
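The expansion (4.69) rests on the elementary fact that det(I − εM) = 1 − ε Tr(M) + O(ε²); a numerical sketch of ours, with an arbitrary matrix M:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
for eps in (1e-3, 1e-4, 1e-5):
    err = np.linalg.det(np.eye(4) - eps * M) - (1 - eps * np.trace(M))
    assert abs(err) < 10 * (eps * np.linalg.norm(M)) ** 2   # error is O(eps^2)
```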
Coming back to (4.58), we see that A^{-1} 𝟏_{K×K} = C, where C is the K × K-matrix with coefficients C(x, y) = e_K(x), for x, y ∈ K. Since Σ_{x∈K} e_K(x) = cap_{ℤ^d}(K), we have found that
\[
(4.70)\qquad \det(I - A^{-1}B^{(R)}) = 1 - \frac{c_d}{R^{d-2}}\,\mathrm{cap}_{\mathbb{Z}^d}(K) + o\Big(\frac{1}{R^{d-2}}\Big), \quad \text{as } R\to\infty.
\]
Inserting this formula into (4.68), with α as in (4.64), immediately yields (4.65).
Complement: random interlacements and Poisson gas of loops coming from infinity on a transient weighted graph

So far we only discussed links between random interlacements and a Poisson cloud of “loops going through infinity” in the case of ℤ^d, d ≥ 3.

We now discuss another construction, which applies to the general set-up of an (infinite) transient weighted graph with no killing.
We consider a countable (in particular infinite) set Γ endowed with non-negative weights c_{x,y}, x, y ∈ Γ (i.e. satisfying (1.2) with Γ in place of E), so that
\[
(4.71)\qquad \Gamma, \text{ endowed with the set of edges consisting of the } \{x,y\} \text{ such that } c_{x,y} > 0, \text{ is connected and locally finite}
\]
(i.e. each x ∈ Γ has a finite number of neighbors), and
\[
(4.72)\qquad \text{the simple random walk with jump rate 1 on } \Gamma \text{ induced by these weights } c_{x,y},\ x,y\in\Gamma, \text{ is transient}.
\]
This is what we mean by a transient weighted graph (i.e. with no killing).
We consider, as in (4.52),
\[
(4.73)\qquad U_n,\ n\ge 1,\ \text{a non-decreasing sequence of finite connected subsets of } \Gamma \text{ increasing to } \Gamma \ \Big(\text{i.e. } \bigcup_{n\ge 1} U_n = \Gamma \text{ and } U_n\subseteq U_{n+1}\Big),
\]
as well as a point
\[
(4.74)\qquad x_* \text{ not in } \Gamma \ (\text{which will play the role of the point “at infinity” for each } U_n).
\]
We consider the finite graph with vertex set E_n = U_n ∪ {x_*}, endowed with the weights obtained by collapsing U^c_n on x_*:
Fig. 4.6: The sets U_n increasing to Γ.
\[
(4.75)\qquad c^n_{x,y} = c_{x,y}, \ \text{when } x,y\in U_n; \qquad c^n_{x_*,y} = c^n_{y,x_*} = \sum_{x\in\Gamma\setminus U_n} c_{x,y}, \ \text{when } y\in U_n.
\]
In addition we choose on E_n the killing measure κ^n_x, x ∈ E_n, concentrated on x_*, so that
\[
(4.76)\qquad \kappa^n_{x_*} = \lambda_n > 0, \ \text{with } \lim_n \lambda_n = \infty; \qquad \kappa^n_x = 0, \ \text{for } x\in U_n = E_n\setminus\{x_*\}.
\]
For the continuous-time walk on Γ with jump rate 1, one can show that when K is a finite subset of Γ, setting λ⁰_x = Σ_{y∈Γ} c_{x,y}, for x ∈ Γ, and g_Γ(·, ·) for the Green function (i.e. g_Γ(x, y) = (1/λ⁰_y) E^Γ_x[∫₀^∞ 1{X_t = y} dt], for x, y ∈ Γ),
\[
(4.77)\qquad P^\Gamma_x[H_K<\infty] = \sum_{y\in K} g_\Gamma(x,y)\,e_K(y), \quad \text{for } x\in\Gamma,
\]
where e_K is the equilibrium measure of K:
\[
(4.78)\qquad e_K(y) = P^\Gamma_y[\widetilde{H}_K = \infty]\,1_K(y)\,\lambda^0_y, \quad \text{for } y\in\Gamma
\]
(for instance one approximates the left-hand side of (4.77) by P^Γ_x[H_K < T_{U_n}], with n → ∞, and applies (1.57), (1.53) to the walk killed when exiting U_n).

The total mass of e_K is the capacity of K:
\[
(4.79)\qquad \mathrm{cap}_\Gamma(K) = \sum_{y\in K} e_K(y).
\]
We write Ω_n for the space of unrooted loops on E_n, and P^n_α for the Poisson gas of Markovian loops at level α > 0 on the above finite set E_n, endowed with the weights c^n in (4.75) and the killing measure κ^n in (4.76).
We also introduce the random subset of U_n:
\[
(4.80)\qquad J_n(\omega) = \big\{z\in U_n;\ \text{there is a } \gamma^* \text{ in the support of } \omega \text{ which goes through } x_* \text{ and } z\big\}.
\]
We now specify α via the formula, see (4.76),
\[
(4.81)\qquad \alpha = u\,\lambda_n, \quad \text{with } u\ge 0.
\]
The statement corresponding to (4.59) and (4.65), which in the present context links the Poisson gas of loops on E_n going through “the point x_* at infinity” with the interlacement at level u on Γ, comes next. We refer to Remark 1.4 of [27] and to [29] for a more detailed description of the Poisson point process of random interlacements in this context.
Theorem 4.17. For u ≥ 0 and K ⊆ Γ finite, one has
\[
(4.82)\qquad \lim_{n\to\infty} P^n_{\alpha=u\lambda_n}\big[J_n\cap K = \emptyset\big] = e^{-u\,\mathrm{cap}_\Gamma(K)}.
\]
Proof. For large n, K ⊆ U_n, and we can write by (4.51):
\[
P^n_\alpha[J_n\cap K = \emptyset] = \Big(1 - \frac{E^n_{x_*}[H_K<\infty,\ g_n(X_{H_K}, x_*)]}{g_n(x_*,x_*)}\Big)^{\alpha},
\]
where P^n_x stands for the law of the walk on E_n with unit jump rate, starting at x ∈ E_n, attached to the weights and killing measure in (4.75), (4.76), and g_n(·, ·) for the corresponding Green function.
By (2.71), we know that
\[
(4.83)\qquad g_n(x_*, z) = \lambda_n^{-1}, \quad \text{for all } z\in E_n,
\]
and, as a result,
\[
(4.84)\qquad P^n_\alpha[J_n\cap K = \emptyset]
= \big(1 - P^n_{x_*}[H_K<\infty]\big)^{\alpha}
\overset{(1.57),(4.83)}{=} \Big(1 - \frac{1}{\lambda_n}\,\mathrm{cap}_n(K)\Big)^{\alpha},
\]
where cap_n(K) stands for the capacity of K in E_n.
By (1.53) and the fact that κ^n vanishes on U_n, we know that
\[
(4.85)\qquad \mathrm{cap}_n(K) = \sum_{x\in K} P^n_x[\widetilde{H}_K = \infty]\,\lambda^0_x.
\]
In addition, we know that, P^n_x-a.s.,
\[
\{\widetilde{H}_K = \infty\} = \{H_{x_*} < \widetilde{H}_K\}\cap\theta^{-1}_{H_{x_*}}\big(\{H_K = \infty\}\big),
\]
because U_n is finite and the walk is only killed at x_*. So, applying the strong Markov property at time H_{x_*}, we find that:
\[
P^n_x[\widetilde{H}_K = \infty] = P^n_x[H_{x_*} < \widetilde{H}_K]\times P^n_{x_*}[H_K = \infty]
= P^\Gamma_x[T_{U_n} < \widetilde{H}_K]\times\big(1 - P^n_{x_*}[H_K<\infty]\big),
\]
using the fact that the walks on Γ and on E_n “agree up to time T_{U_n}”. Note, in addition, that
\[
P^n_{x_*}[H_K<\infty] \overset{(1.57),(1.53)}{\le} \sum_{y\in K} g_n(x_*,y)\,\lambda^0_y
\overset{(4.83)}{=} \frac{1}{\lambda_n}\sum_{y\in K}\lambda^0_y
\overset{(4.76)}{\longrightarrow} 0, \quad \text{as } n\to\infty,
\]
and that
\[
P^\Gamma_x[T_{U_n} < \widetilde{H}_K] \downarrow P^\Gamma_x[\widetilde{H}_K = \infty], \quad \text{as } n\to\infty.
\]
Coming back to (4.85), we have shown that
\[
(4.86)\qquad \lim_n \mathrm{cap}_n(K) = \sum_{x\in K} P^\Gamma_x[\widetilde{H}_K = \infty]\,\lambda^0_x
\overset{(4.78),(4.79)}{=} \mathrm{cap}_\Gamma(K).
\]
If we now insert this identity in (4.84) and keep in mind that α = uλ_n, we readily find (4.82).
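The last step uses the elementary limit (1 − c/λ_n)^{uλ_n} → e^{−uc} as λ_n → ∞; a quick numerical check of ours, with illustrative values of u and of the capacity:

```python
import math

u, cap = 0.7, 2.3    # illustrative values
for lam in (1e2, 1e4, 1e6):
    approx = (1 - cap / lam) ** (u * lam)   # right-hand side of (4.84) with alpha = u * lam
    assert abs(approx - math.exp(-u * cap)) < 10 * cap ** 2 * u / lam
```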
Remark 4.18.
1) By a similar argument as described in Remark 4.15, the above theorem can be seen to imply that under P^n_{α=uλ_n}, the law of J_n converges to the law of I^u in the sense of finite dimensional marginal distributions, as n goes to infinity.
2) A variation on the approximation scheme, which we employed to approximate random interlacements on a transient weighted graph, can be used to prove an isomorphism theorem for random interlacements, see [28]. One can define the random field (L_{x,u})_{x∈Γ} of occupation times of continuous-time random interlacements at level u (this random field is governed by a probability denoted by ℙ). One can also define the canonical law P^G on ℝ^Γ of the Gaussian free field attached to the transient weighted graph under consideration: under P^G the canonical field (φ_x)_{x∈Γ} is a centered Gaussian field with covariance E^G[φ_x φ_y] = g_Γ(x, y), for x, y ∈ Γ, with g_Γ(·, ·) the Green function. The isomorphism theorem from [28] states that
\[
(4.87)\qquad \Big(L_{x,u} + \frac{1}{2}\,\varphi_x^2\Big)_{x\in\Gamma} \text{ under } \mathbb{P}\otimes P^G \ \text{has the same law as}\ \Big(\frac{1}{2}\big(\varphi_x + \sqrt{2u}\big)^2\Big)_{x\in\Gamma} \text{ under } P^G.
\]
The above identity in law is intimately related to the generalized second Ray-Knight theorem, see Theorem 2.17, and characterizes the law of (L_{x,u})_{x∈Γ}.
References
[1] M.T. Barlow. Diffusions on fractals, in volume 1690 of Lecture Notes in Math., Ecole d’Ete de Probabilites de St. Flour 1995, 1–112, Springer, Berlin, 1998.

[2] D. Brydges, J. Frohlich, and T. Spencer. The random walk representation of classical spin systems and correlation inequalities. Comm. Math. Phys., 83(1):123–150, 1982.

[3] P. Doyle and J. Snell. Random walks and electric networks. The Carus Mathematical Monographs, second printing, Washington DC, 1984.

[4] E.B. Dynkin. Markov processes as a tool in field theory. J. of Funct. Anal., 50(1):167–187, 1983.

[5] E.B. Dynkin. Gaussian and non-Gaussian random fields associated with Markov processes. J. of Funct. Anal., 55(3):344–376, 1984.

[6] N. Eisenbaum. Dynkin’s isomorphism theorem and the Ray-Knight theorems. Probab. Theory Relat. Fields, 99:321–335, 1994.

[7] N. Eisenbaum. Une version sans conditionnement du theoreme d’isomorphisme de Dynkin. In Seminaire de Probabilites, XXIX, volume 1613, Lecture Notes in Mathematics, 266–289, Springer, Berlin, 1995.

[8] N. Eisenbaum, H. Kaspi, M.B. Marcus, J. Rosen, and Z. Shi. A Ray-Knight theorem for symmetric Markov processes. Ann. Probab., 28(4):1781–1796, 2000.

[9] W. Feller. An introduction to probability theory and its applications, volume 1. 3rd edition, John Wiley & Sons, New York, 1957.

[10] M. Fukushima, Y. Oshima, and M. Takeda. Dirichlet forms and symmetric Markov processes. Walter de Gruyter, Berlin, 1994.

[11] J. Glimm and A. Jaffe. Quantum Physics. Springer, Berlin, 1981.

[12] I. Karatzas and S. Shreve. Brownian motion and stochastic calculus. Springer, Berlin, 1988.

[13] F.B. Knight. Random walks and a sojourn density process of Brownian motion. Trans. Amer. Math. Soc., 109(4):56–76, 1963.

[14] T. Kumagai. Random walks on disordered media and their scaling limits. Notes of St. Flour lectures, available at http://www.kurims.kyoto-u.ac.jp/~kumagai/StFlour-Cornell.html, 2010.

[15] G.F. Lawler. Intersections of random walks. Birkhauser, Basel, 1991.

[16] G.F. Lawler and W. Werner. The Brownian loop soup. Probab. Theory Relat. Fields, 128:565–588, 2004.

[17] Y. Le Jan. Markov loops and renormalization. Ann. Probab., 38(3):1280–1319, 2010.

[18] Y. Le Jan. Markov paths, loops and fields, volume 2026 of Lecture Notes in Math., Ecole d’Ete de Probabilites de St. Flour, Springer, Berlin, 2012.

[19] M.B. Marcus and J. Rosen. Markov processes, Gaussian processes, and local times. Cambridge University Press, 2006.

[20] J. Neveu. Processus ponctuels, in volume 598 of Lecture Notes in Math., Ecole d’Ete de Probabilites de St. Flour 1976, 249–447, Springer, Berlin, 1977.

[21] D. Ray. Sojourn times of diffusion process. Illinois Journal of Math., 7:615–630, 1963.

[22] S.I. Resnick. Extreme values, regular variation, and point processes. Springer, New York, 1987.

[23] D. Revuz and M. Yor. Continuous martingales and Brownian motion. Springer, Berlin, 1991.

[24] S. Sheffield and W. Werner. Conformal loop ensembles: the Markovian characterization and the loop-soup construction. To appear in Ann. Math., also available at arXiv:1006.2374.

[25] F. Spitzer. Principles of random walk. Springer, Berlin, second edition, 2001.

[26] K. Symanzik. Euclidean quantum field theory. In: Scuola internazionale di Fisica “Enrico Fermi”, XLV Corso, 152–223, Academic Press, 1969.

[27] A.S. Sznitman. Vacant set of random interlacements and percolation. Ann. Math., 171:2039–2087, 2010.

[28] A.S. Sznitman. An isomorphism theorem for random interlacements. Electron. Commun. Probab., 17, 9, 1–9, 2012.

[29] A. Teixeira. Interlacement percolation on transient weighted graphs. Electron. J. Probab., 14:1604–1627, 2009.
Index
capacity, 14, 16
  variational problems, 16
conductance, 14
continuous-time loop, 54
  stationarity property, 54
  time-reversal invariance, 56
Dirichlet form, 5
  orthogonal decomposition, 17
  trace form, 17, 18, 25
  tower property, 20
discrete loop, 54
  stationarity property, 54
  time-reversal invariance, 56
energy, 5, 10
entrance time, 12
equilibrium measure, 14, 16
equilibrium potential, 14, 16
exit time, 12
Feynman diagrams, 77
Feynman-Kac formula, 20, 25
Gaussian free field, 27, 75
  conditional expectations, 28
generalized Ray-Knight theorems
  first Ray-Knight theorem, 39
  second Ray-Knight theorem, 44, 48, 92
Green function, 9
  killed, 12
hitting time, 12
Isomorphism theorem
  Dynkin isomorphism theorem, 35
  Eisenbaum isomorphism theorem, 37
  for random interlacements, 92
jump rate 1, 6
killing measure, 5
local time, 22, 62
loops going through a point, 83
loops going through infinity, 84, 87, 89
Markov chain X. (with variable jump rate), 22
Markov chain X. (with jump rate 1), 6
Markovian loops, 75
measure Px,y, 30, 66
occupation field, 72
  occupation field of Markovian loop, 72
  occupation field of non-trivial loops, 74
occupation time, 75
pointed loops, 52
  measure µp on pointed loops, 58
Poisson gas of Markovian loops, 71
Poisson point measure, 71
potential operators, 10
random interlacements, 48, 80, 84, 87, 89
  random interlacement at level u, 86, 91
Ray-Knight theorem, 38
restriction property, 61, 66
rooted loops, 51
  measure µr on rooted loops, 52
  rooted Markovian loop, 51
Symanzik’s representation formula, 78
time of last visit, 12
trace Dirichlet form, 17, 18, 25
trace process, 25
transient weighted graph, 89
transition density, 8
  killed transition density, 12
unit weight, 67
unrooted loops, 67
  measure µ* on unrooted loops, 67
variable jump rate, 22
weights, 5