Martingale problems and stochastic equations for Markov processes

Source: …kurtz/Lectures/Frankfurt/mgpsteq.pdf



Martingale problems and stochastic equations for Markov processes

1. Basics of stochastic processes

2. Markov processes and generators

3. Martingale problems

4. Existence of solutions and forward equations

5. Stochastic integrals for Poisson random measures

6. Weak and strong solutions of stochastic equations

7. Stochastic equations for Markov processes in Rd

8. Convergence for Markov processes characterized by martingale problems


9. Convergence for Markov processes characterized by stochastic differential equations

10. Martingale problems for conditional distributions

11. Equivalence of stochastic equations and martingale problems

12. Genealogies and ordered representations of measure-valued processes

13. Poisson representations

14. Stochastic partial differential equations

15. Information and conditional expectation

16. Technical lemmas

17. Exercises

18. Stochastic analysis exercises

19. References


http://www.math.wisc.edu/~kurtz/FrankLect.htm


1. Basics of stochastic processes

• Filtrations

• Stopping times

• Martingales

• Optional sampling theorem

• Doob’s inequalities

• Stochastic integrals

• Local martingales

• Semimartingales

• Computing quadratic variations

• Covariation

• Ito’s formula


Conventions and caveats

State spaces are always complete, separable metric spaces (sometimes called Polish spaces), usually denoted (E, r).

All probability spaces are complete.

All identities involving conditional expectations (or conditional probabilities) only hold almost surely (even when I don’t say so).

If the filtration {Ft} involved is obvious, I will say adapted, rather than Ft-adapted, stopping time, rather than Ft-stopping time, etc.

All processes are cadlag (right continuous with left limits at each t > 0), unless otherwise noted.

A process is real-valued if that is the only way the formula makes sense.


References

Kurtz, Lecture Notes for Math 735

http://www.math.wisc.edu/~kurtz/m735.htm

Ethier and Kurtz, Markov Processes: Characterization and Convergence

Protter, Stochastic Integration and Differential Equations, Second Edition


Filtrations

(Ω,F , P ) a probability space

Available information is modeled by a sub-σ-algebra of F

Ft information available at time t

{Ft} is a filtration: t < s implies Ft ⊂ Fs. {Ft} is complete if F0 contains all subsets of sets of probability zero.

A stochastic process X is adapted to {Ft} if X(t) is Ft-measurable for each t ≥ 0.

An E-valued stochastic process X adapted to Ft is Ft-Markov if

E[f(X(t+ r))|Ft] = E[f(X(t+ r))|X(t)], t, r ≥ 0, f ∈ B(E)


Measurability for stochastic processes

A stochastic process is an indexed family of random variables, but if the index set is [0,∞), then we may want to know more about X(t, ω) than that it is a measurable function of ω for each t. For example, for an R-valued process X, when are

∫_a^b X(s, ω)ds and X(τ(ω), ω)

random variables?

X is measurable if (t, ω) ∈ [0,∞)× Ω→ X(t, ω) ∈ E is B([0,∞))× F-measurable.

Lemma 1.1 If X is measurable and ∫_a^b |X(s, ω)|ds < ∞, then ∫_a^b X(s, ω)ds is a random variable.

If, in addition, τ is a nonnegative random variable, then X(τ(ω), ω) is a random variable.


Proof. The first part is a standard result for measurable functions on a product space. Verify the result for X(s, ω) = 1_A(s)1_B(ω), A ∈ B[0,∞), B ∈ F, and apply the Dynkin class theorem to extend the result to 1_C, C ∈ B[0,∞)×F.

If τ is a nonnegative random variable, then ω ∈ Ω → (τ(ω), ω) ∈ [0,∞)×Ω is measurable. Consequently, X(τ(ω), ω) is the composition of two measurable functions.


Measurability continued

A stochastic process X is Ft-adapted if for all t ≥ 0, X(t) is Ft-measurable.

If X is measurable and adapted, the restriction of X to [0, t]×Ω is B[0, t]×F-measurable, but it may not be B[0, t]×Ft-measurable.

X is progressive if for each t ≥ 0, (s, ω) ∈ [0, t]×Ω → X(s, ω) ∈ E is B[0, t]×Ft-measurable.

Let

W = {A ∈ B[0,∞)×F : A ∩ ([0, t]×Ω) ∈ B[0, t]×Ft, t ≥ 0}.

Then W is a σ-algebra and X is progressive if and only if (s, ω) → X(s, ω) is W-measurable.

Since pointwise limits of measurable functions are measurable, pointwise limits of progressive processes are progressive.


Stopping times

Let {Ft} be a filtration. τ is an Ft-stopping time if and only if {τ ≤ t} ∈ Ft for each t ≥ 0.

If τ is a stopping time, Fτ ≡ {A ∈ F : A ∩ {τ ≤ t} ∈ Ft, t ≥ 0}.

If τ1 and τ2 are stopping times with τ1 ≤ τ2, then Fτ1 ⊂ Fτ2.

If τ1 and τ2 are stopping times then τ1 and τ1 ∧ τ2 are Fτ1-measurable.


A process observed at a stopping time

If X is measurable and τ is a stopping time, then X(τ(ω), ω) is a random variable.

Lemma 1.2 If τ is a stopping time and X is progressive, then X(τ) is Fτ -measurable.

Proof. ω ∈ Ω → (τ(ω)∧t, ω) ∈ [0, t]×Ω is measurable as a mapping from (Ω, Ft) to ([0, t]×Ω, B[0, t]×Ft). Consequently,

ω → X(τ(ω)∧t, ω)

is Ft-measurable, and

{X(τ) ∈ A} ∩ {τ ≤ t} = {X(τ∧t) ∈ A} ∩ {τ ≤ t} ∈ Ft.


Right continuous processes

Most of the processes you know are either continuous (e.g., Brownian motion) or right continuous (e.g., Poisson process).

Lemma 1.3 If X is right continuous and adapted, then X is progressive.

Proof. If X is adapted, then

(s, ω) ∈ [0, t]×Ω → Yn(s, ω) ≡ X(([ns] + 1)/n ∧ t, ω) = Σ_k X((k + 1)/n ∧ t, ω)1_{[k/n, (k+1)/n)}(s)

is B[0, t]×Ft-measurable. By the right continuity of X, Yn(s, ω) → X(s, ω) for each (s, ω) ∈ [0, t]×Ω, so (s, ω) ∈ [0, t]×Ω → X(s, ω) is B[0, t]×Ft-measurable and X is progressive.


Examples and properties

Define Ft+ ≡ ∩_{s>t}Fs. {Ft} is right continuous if Ft = Ft+ for all t ≥ 0. If {Ft} is right continuous, then τ is a stopping time if and only if {τ < t} ∈ Ft for all t > 0.

Let X be cadlag and adapted. If K ⊂ E is closed, τ^h_K = inf{t : X(t) or X(t−) ∈ K} is a stopping time, but inf{t : X(t) ∈ K} may not be; however, if {Ft} is right continuous and complete, then for any B ∈ B(E), τB = inf{t : X(t) ∈ B} is an Ft-stopping time. This result is a special case of the debut theorem, a very technical result from set theory. Note that

{ω : τB(ω) < t} = {ω : ∃s < t ∋ X(s, ω) ∈ B} = proj_Ω{(s, ω) : X(s, ω) ∈ B, s < t}


Piecewise constant approximations

Let ε > 0, τ^ε_0 = 0, and

τ^ε_{i+1} = inf{t > τ^ε_i : r(X(t), X(τ^ε_i)) ∨ r(X(t−), X(τ^ε_i)) ≥ ε}

Define X^ε(t) = X(τ^ε_i), τ^ε_i ≤ t < τ^ε_{i+1}. Then r(X(t), X^ε(t)) ≤ ε.

If X is adapted to {Ft}, then the τ^ε_i are Ft-stopping times and X^ε is Ft-adapted. See Exercise 4.
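The construction is easy to carry out on a discretized path; a minimal NumPy sketch (not part of the notes; the Brownian sample path, grid, and ε = 0.05 are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a path on a fine grid: Brownian motion on [0, 1].
n = 100_000
dt = 1.0 / n
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

def piecewise_constant(X, eps):
    """Return the grid indices of the times tau^eps_i and the approximation X^eps."""
    taus, last = [0], X[0]
    Xeps = np.empty_like(X)
    for i in range(len(X)):
        if abs(X[i] - last) >= eps:   # first time the path has moved at least eps
            taus.append(i)
            last = X[i]
        Xeps[i] = last
    return taus, Xeps

taus, Xeps = piecewise_constant(X, eps=0.05)
# r(X(t), X^eps(t)) <= eps at every grid time (r = usual distance on R)
assert np.max(np.abs(X - Xeps)) <= 0.05
```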


Martingales

An R-valued stochastic process M adapted to {Ft} is an Ft-martingale if

E[M(t+r)|Ft] = M(t), t, r ≥ 0

Every martingale has finite quadratic variation:

[M]_t = lim Σ_i (M(t∧t_{i+1}) − M(t∧t_i))²

where 0 = t_0 < t_1 < ···, t_i → ∞, and the limit is in probability as max(t_{i+1} − t_i) → 0. More precisely, for ε > 0 and T > 0,

lim P{sup_{t≤T} |[M]_t − Σ_i (M(t∧t_{i+1}) − M(t∧t_i))²| > ε} = 0.

For standard Brownian motion W , [W ]t = t.
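The convergence [W]_t = t can be seen numerically; a short NumPy sketch (illustrative parameters, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sum of squared Brownian increments over finer and finer partitions of [0, 2];
# [W]_t = t predicts the limit 2.
t_final = 2.0
for n in (1_000, 10_000, 100_000):
    dW = rng.normal(0.0, np.sqrt(t_final / n), n)
    qv = float(np.sum(dW**2))      # sum of (W(t_{i+1}) - W(t_i))^2
    print(n, qv)
assert abs(qv - t_final) < 0.05
```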


Optional sampling theorem

A real-valued process is a submartingale if E[|X(t)|] <∞, t ≥ 0, and

E[X(t+ s)|Ft] ≥ X(t), t, s ≥ 0.

If τ1 and τ2 are stopping times, then

E[X(t ∧ τ2)|Fτ1] ≥ X(t ∧ τ1 ∧ τ2).

If τ2 is finite a.s., E[|X(τ2)|] < ∞, and lim_{t→∞} E[|X(t)|1_{τ2>t}] = 0, then

E[X(τ2)|Fτ1] ≥ X(τ1 ∧ τ2).

Of course, if X is a martingale

E[X(t ∧ τ2)|Fτ1] = X(t ∧ τ1 ∧ τ2).


Square integrable martingales

M a martingale satisfying E[M(t)²] < ∞. Then

M(t)² − [M]_t

is a martingale. In particular, for t > s,

E[(M(t) − M(s))²] = E[[M]_t − [M]_s].


Doob’s inequalities

Let X be a submartingale. Then for x > 0,

P{sup_{s≤t} X(s) ≥ x} ≤ x⁻¹E[X(t)⁺]

P{inf_{s≤t} X(s) ≤ −x} ≤ x⁻¹(E[X(t)⁺] − E[X(0)])

If X is nonnegative and α > 1, then

E[sup_{s≤t} X(s)^α] ≤ (α/(α−1))^α E[X(t)^α].

Note that by Jensen’s inequality, if M is a martingale, then |M| is a submartingale. In particular, if M is a square integrable martingale, then

E[sup_{s≤t} |M(s)|²] ≤ 4E[M(t)²].
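A quick Monte Carlo sanity check of the L² inequality for M = W (a sketch with illustrative parameters, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)

# Doob's L^2 inequality E[sup_{s<=t} |M(s)|^2] <= 4 E[M(t)^2] for M = Brownian motion.
paths, steps = 2_000, 500
dW = rng.normal(0.0, np.sqrt(1.0 / steps), (paths, steps))
W = np.cumsum(dW, axis=1)
lhs = float(np.mean(np.max(np.abs(W), axis=1) ** 2))  # E[sup_{s<=1} |W(s)|^2]
rhs = 4 * float(np.mean(W[:, -1] ** 2))               # 4 E[W(1)^2], about 4
assert lhs <= rhs
```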


Stochastic integrals

Definition 1.4 For cadlag processes X, Y,

X_− · Y(t) ≡ ∫_0^t X(s−)dY(s) = lim_{max|t_{i+1}−t_i|→0} Σ X(t_i)(Y(t_{i+1}∧t) − Y(t_i∧t))

whenever the limit exists in probability.

Sample paths of bounded variation: If Y is a finite variation process, the stochastic integral exists (apply the dominated convergence theorem) and

∫_0^t X(s−)dY(s) = ∫_{(0,t]} X(s−)α_Y(ds)

where α_Y is the signed measure with

α_Y(0, t] = Y(t) − Y(0)
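When Y is not of finite variation the left-endpoint sums can still converge; for X = Y = W the limit can be identified explicitly. A sketch (illustrative parameters, not from the notes), using the identity ∫_0^t W(s−)dW(s) = (W(t)² − t)/2, which follows from Ito’s formula below:

```python
import numpy as np

rng = np.random.default_rng(3)

# Left-endpoint Riemann sums for the stochastic integral of W against W on [0, 1];
# the limit is (W(1)^2 - 1)/2.
n, t_final = 200_000, 1.0
dW = rng.normal(0.0, np.sqrt(t_final / n), n)
W = np.concatenate([[0.0], np.cumsum(dW)])
integral = float(np.sum(W[:-1] * dW))   # sum of W(t_i)(W(t_{i+1}) - W(t_i))
exact = (W[-1] ** 2 - t_final) / 2
assert abs(integral - exact) < 0.02
```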


Existence for square integrable martingales

If M is a square integrable martingale, then

E[(M(t+s) − M(t))²|Ft] = E[[M]_{t+s} − [M]_t|Ft]

For partitions {t_i} and {r_i},

E[(Σ X(t_i)(M(t_{i+1}∧t) − M(t_i∧t)) − Σ X(r_i)(M(r_{i+1}∧t) − M(r_i∧t)))²]
= E[∫_0^t (X(t(s−)) − X(r(s−)))² d[M]_s]
= E[∫_{(0,t]} (X(t(s−)) − X(r(s−)))² α_{[M]}(ds)]

where t(s) = t_i for s ∈ [t_i, t_{i+1}) and r(s) = r_i for s ∈ [r_i, r_{i+1}).


Cauchy property

Let X be bounded by a constant. As sup(t_{i+1} − t_i) + sup(r_{i+1} − r_i) → 0, the right side converges to zero by the dominated convergence theorem.

M^{{t_i}}_X(t) ≡ Σ X(t_i)(M(t_{i+1}∧t) − M(t_i∧t)) is a square integrable martingale, so

E[sup_{t≤T} (Σ X(t_i)(M(t_{i+1}∧t) − M(t_i∧t)) − Σ X(r_i)(M(r_{i+1}∧t) − M(r_i∧t)))²]
≤ 4E[∫_{(0,T]} (X(t(s−)) − X(r(s−)))² α_{[M]}(ds)]

A completeness argument gives existence of the stochastic integral, and the uniformity implies the integral is cadlag.


Local martingales

Definition 1.5 M is a local martingale if there exist stopping times τn satisfying τ1 ≤ τ2 ≤ ··· and τn → ∞ a.s. such that M^{τn} defined by M^{τn}(t) = M(τn ∧ t) is a martingale. M is a local square-integrable martingale if the τn can be selected so that M^{τn} is square integrable.

{τn} is called a localizing sequence for M.

Remark 1.6 If {τn} is a localizing sequence for M, and {γn} is another sequence of stopping times satisfying γ1 ≤ γ2 ≤ ···, γn → ∞ a.s., then the optional sampling theorem implies that {τn ∧ γn} is localizing.


Local martingales with bounded jumps

Remark 1.7 If M is a continuous, local martingale, then

τn = inf{t : |M(t)| ≥ n}

will be a localizing sequence. More generally, if

|∆M(t)| ≤ c

for some constant c, then

τn = inf{t : |M(t)| ∨ |M(t−)| ≥ n}

will be a localizing sequence.

Note that |M^{τn}| ≤ n + c, so M is a local square-integrable martingale.


Semimartingales

Definition 1.8 Y is an Ft-semimartingale if and only if Y = M + V ,where M is a local square integrable martingale with respect to Ft and Vis an Ft-adapted finite variation process.

In particular, if X is cadlag and adapted and Y is a semimartingale, then ∫X_−dY exists.


Computing quadratic variations

Let ∆Z(t) = Z(t) − Z(t−).

Lemma 1.9 If Y is finite variation, then

[Y]_t = Σ_{s≤t} ∆Y(s)²

Lemma 1.10 If Y is a semimartingale, X is adapted, and Z(t) = ∫_0^t X(s−)dY(s), then

[Z]_t = ∫_0^t X(s−)² d[Y]_s.

Proof. Check first for piecewise constant X and then approximate general X by piecewise constant processes.


Covariation

The covariation of Y1, Y2 is defined by

[Y1, Y2]_t ≡ lim Σ_i (Y1(t_{i+1}∧t) − Y1(t_i∧t))(Y2(t_{i+1}∧t) − Y2(t_i∧t))
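Numerically (a sketch with illustrative parameters, not from the notes): for two independent Brownian motions the covariation vanishes, while [W, W]_t = [W]_t = t:

```python
import numpy as np

rng = np.random.default_rng(4)

# Covariation via sums of products of increments on [0, 1].
n = 100_000
dW1 = rng.normal(0.0, np.sqrt(1.0 / n), n)
dW2 = rng.normal(0.0, np.sqrt(1.0 / n), n)
cov_12 = float(np.sum(dW1 * dW2))   # [W1, W2]_1 -> 0 (independent motions)
cov_11 = float(np.sum(dW1 * dW1))   # [W1, W1]_1 -> 1
assert abs(cov_12) < 0.05 and abs(cov_11 - 1.0) < 0.05
```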


Ito’s formula

If f : R → R is C² and Y is a semimartingale, then

f(Y(t)) = f(Y(0)) + ∫_0^t f′(Y(s−))dY(s) + ∫_0^t (1/2)f″(Y(s−))d[Y]^c_s
+ Σ_{s≤t} (f(Y(s)) − f(Y(s−)) − f′(Y(s−))∆Y(s))

where [Y]^c is the continuous part of the quadratic variation, given by

[Y]^c_t = [Y]_t − Σ_{s≤t} ∆Y(s)².


Ito’s formula for vector-valued semimartingales

If f : R^m → R is C², Y1, . . . , Ym are semimartingales, and Y = (Y1, . . . , Ym), then defining

[Yk, Yl]^c_t = [Yk, Yl]_t − Σ_{s≤t} ∆Yk(s)∆Yl(s),

f(Y(t)) = f(Y(0)) + Σ_{k=1}^m ∫_0^t ∂_k f(Y(s−))dYk(s)
+ (1/2)Σ_{k,l=1}^m ∫_0^t ∂_k∂_l f(Y(s−))d[Yk, Yl]^c_s
+ Σ_{s≤t} (f(Y(s)) − f(Y(s−)) − Σ_{k=1}^m ∂_k f(Y(s−))∆Yk(s)).


Examples

W standard Brownian motion

Z(t) = exp{W(t) − t/2} = 1 + ∫_0^t Z(s)d(W(s) − s/2) + ∫_0^t (1/2)Z(s)ds
= 1 + ∫_0^t Z(s)dW(s)
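The identity can be checked on a discretized path; a NumPy sketch (illustrative parameters, not from the notes), comparing Z(1) = exp{W(1) − 1/2} with 1 plus a left-endpoint sum for the integral of Z against W:

```python
import numpy as np

rng = np.random.default_rng(5)

# Z(t) = exp(W(t) - t/2) should satisfy Z(t) = 1 + (integral of Z dW over [0, t]).
n, t_final = 200_000, 1.0
dW = rng.normal(0.0, np.sqrt(t_final / n), n)
t = np.linspace(0.0, t_final, n + 1)
W = np.concatenate([[0.0], np.cumsum(dW)])
Z = np.exp(W - t / 2)
integral = 1.0 + float(np.sum(Z[:-1] * dW))  # 1 + sum of Z(t_i)(W(t_{i+1}) - W(t_i))
assert abs(Z[-1] - integral) < 0.01
```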


2. Markov processes and generators

• Time homogeneous Markov processes

• Markov processes and semigroups

• Semigroup generators

• Martingale properties

• Dynkin’s identity

• Strongly continuous contraction semigroups

• Resolvent operator

• Transition functions


Time homogeneous Markov processes

A process X is Markov with respect to a filtration Ft provided

E[f(X(t+ r))|Ft] = E[f(X(t+ r))|X(t)]

for all t, r ≥ 0 and all f ∈ B(E).

The conditional expectation on the right can be written as g_{f,t,r}(X(t)) for a measurable function g_{f,t,r} depending on f, t, and r.

If the function can be selected independently of t, that is

E[f(X(t+r))|X(t)] = g_{f,r}(X(t)),

then the Markov process is time homogeneous. A time inhomogeneous Markov process can be made time homogeneous by including time in the state. That is, set Z(t) = (X(t), t).

Note that g_{f,r} will be linear in f, so we can write g_{f,r} = T(r)f, where T(r) is a linear operator on B(E) (the bounded measurable functions on E). The Markov property then implies T(r+s)f = T(r)T(s)f.


Markov processes and semigroups

{T(t) : B(E) → B(E), t ≥ 0} is an operator semigroup if T(t)T(s)f = T(t+s)f.

X is a Markov process with operator semigroup T(t) if and only if

E[f(X(t+s))|F^X_t] = T(s)f(X(t)), t, s ≥ 0, f ∈ B(E).

T(s+r)f(X(t)) = E[f(X(t+s+r))|F^X_t]
= E[E[f(X(t+s+r))|F^X_{t+s}]|F^X_t]
= E[T(r)f(X(t+s))|F^X_t]
= T(s)T(r)f(X(t))


Semigroup and finite dimensional distributions

Lemma 2.1 If X is a Markov process corresponding to T(t), then the finite dimensional distributions of X are determined by T(t) and the distribution of X(0).

Proof. For 0 ≤ t1 ≤ t2,

E[f1(X(t1))f2(X(t2))] = E[f1(X(t1))T(t2 − t1)f2(X(t1))]
= E[T(t1)[f1 T(t2 − t1)f2](X(0))]


Semigroup generators

f is in the domain of the strong generator A of the semigroup if there exists g ∈ B(E) such that

lim_{t→0+} ‖(T(t)f − f)/t − g‖ = 0.

Then Af ≡ g.

f is in the domain of the weak generator Ā (see Dynkin (1965)) if sup_t ‖t⁻¹(T(t)f − f)‖ < ∞ and there exists g ∈ B(E) such that

lim_{t→0+} (T(t)f(x) − f(x))/t = g(x) ≡ Āf(x), x ∈ E.

The full generator Â (see Ethier and Kurtz (1986)) is

Â = {(f, g) ∈ B(E)×B(E) : T(t)f = f + ∫_0^t T(s)g ds}.

A ⊂ Ā ⊂ Â.


Martingale properties

Lemma 2.2 If X is a progressive Markov process corresponding to T(t) and (f, g) ∈ A, then

Mf(t) = f(X(t)) − f(X(0)) − ∫_0^t g(X(s))ds

is a martingale (not necessarily right continuous).

Proof.

E[Mf(t+r) − Mf(t)|Ft] = E[f(X(t+r)) − f(X(t)) − ∫_t^{t+r} g(X(s))ds | Ft]
= T(r)f(X(t)) − f(X(t)) − ∫_t^{t+r} T(s−t)g(X(t))ds
= 0
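For a concrete case, take the Poisson process with rate λ and Af(k) = λ(f(k+1) − f(k)) (one of the generator examples later in the notes); the lemma says E[Mf(t)] = 0. A Monte Carlo sketch (illustrative parameters, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(6)

# Poisson process N with rate lam; generator Af(k) = lam*(f(k+1) - f(k)).
# Check E[f(N(t)) - f(N(0)) - integral of Af(N(s)) ds] is about 0 with f(k) = k^2.
lam, t_final, paths, steps = 2.0, 1.0, 50_000, 500
dt = t_final / steps
f = lambda k: k**2
N = np.zeros(paths, dtype=np.int64)
integral = np.zeros(paths)
for _ in range(steps):
    integral += lam * (f(N + 1) - f(N)) * dt  # left-endpoint sum for the ds-integral
    N += rng.poisson(lam * dt, paths)
Mf = f(N) - integral                          # f(N(0)) = 0
assert abs(float(np.mean(Mf))) < 0.1
```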


Dynkin’s identity

Change of notation: Simply write Af for g, if (f, g) ∈ A.

If Mf is right continuous, the optional sampling theorem implies

E[f(X(t ∧ τ))] = E[f(X(0))] + E[

∫ t∧τ

0

Af(X(s))ds].


Exit times

Assume D is open and X is right continuous. Let τ^h_D = inf{t : X(t) or X(t−) /∈ D}. Write Ex for expectations under the condition that X(0) = x.

Suppose f is bounded and continuous, Af = 0, and τ^h_D < ∞ a.s. Then

f(x) = Ex[f(X(τ^h_D))].

If f is bounded and continuous, Af(x) = −1, x ∈ D, f(y) = 0, y /∈ D, and P{X(τ^h_D) ∈ D} = 0, then

f(x) = Ex[τ^h_D]


Exit distributions in one dimension

For a one-dimensional diffusion process,

Lf(x) = (1/2)a(x)f″(x) + b(x)f′(x).

Find f such that Lf(x) = 0 (i.e., solve the linear first order differential equation for f′). Then f(X(t)) is a local martingale.

Fix a < b, and define τ = inf{t : X(t) /∈ (a, b)}. If sup_{a<x<b} |f(x)| < ∞, then Ex[f(X(t∧τ))] = f(x).

Moreover, if τ < ∞ a.s., Ex[f(X(τ))] = f(x). Hence

f(a)Px{X(τ) = a} + f(b)Px{X(τ) = b} = f(x),

and therefore the probability of exiting the interval at the right endpoint is given by

Px{X(τ) = b} = (f(x) − f(a))/(f(b) − f(a))   (2.1)
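For standard Brownian motion, Lf = (1/2)f″, so f(x) = x works in (2.1) and Px{X(τ) = b} = (x − a)/(b − a). A simulation sketch (illustrative parameters, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler steps of Brownian motion until every path leaves (a, b).
a, b, x0 = 0.0, 1.0, 0.3
paths, dt = 20_000, 1e-4
x = np.full(paths, x0)
active = np.ones(paths, dtype=bool)
while active.any():
    x[active] += rng.normal(0.0, np.sqrt(dt), int(active.sum()))
    active &= (a < x) & (x < b)
est = float(np.mean(x >= b))                 # fraction of paths exiting at b
assert abs(est - (x0 - a) / (b - a)) < 0.03  # (2.1) predicts 0.3
```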


Exit time

To find conditions under which Px{τ < ∞} = 1, or more precisely, under which Ex[τ] < ∞, solve Lg(x) = −1. Then

g(X(t)) − g(X(0)) + t

is a local martingale, and if C = sup_{a<x<b} |g(x)| < ∞,

Ex[g(X(t∧τ))] = g(x) − Ex[t∧τ]

and 2C ≥ Ex[t∧τ], so 2C ≥ Ex[τ], which implies τ < ∞ a.s. By (2.1),

Ex[τ] = g(x) − Ex[g(X(τ))]
= g(x) − g(b)(f(x) − f(a))/(f(b) − f(a)) − g(a)(f(b) − f(x))/(f(b) − f(a))
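For Brownian motion on (a, b), Lg = (1/2)g″ = −1 with g(a) = g(b) = 0 gives g(x) = (x − a)(b − x), so Ex[τ] = g(x). A simulation sketch (illustrative parameters, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(8)

# Mean exit time of Brownian motion from (a, b): E_x[tau] = (x - a)(b - x).
a, b, x0 = 0.0, 1.0, 0.3
paths, dt = 5_000, 1e-4
x = np.full(paths, x0)
steps = np.zeros(paths)
active = np.ones(paths, dtype=bool)
while active.any():
    x[active] += rng.normal(0.0, np.sqrt(dt), int(active.sum()))
    steps[active] += 1
    active &= (a < x) & (x < b)
est = float(np.mean(steps)) * dt
assert abs(est - (x0 - a) * (b - x0)) < 0.02   # (0.3)(0.7) = 0.21
```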


Strongly continuous contraction semigroup

Semigroups associated with Markov processes are contraction semigroups, i.e.,

‖T(t)f‖ ≤ ‖f‖, f ∈ B(E).

Let L0 = {f ∈ B(E) : lim_{t→0+} ‖T(t)f − f‖ = 0}. Then

• D(A) is dense in L0.

• ‖λf − Af‖ ≥ λ‖f‖, f ∈ D(A), λ > 0.

• R(λ− A) = L0, ∀λ > 0.


The resolvent

Lemma 2.3 For λ > 0 and h ∈ L0,

(λ − A)⁻¹h = ∫_0^∞ e^{−λt}T(t)h dt

Proof. Let f = ∫_0^∞ e^{−λt}T(t)h dt. Then

r⁻¹(T(r)f − f) = r⁻¹(∫_0^∞ e^{−λt}T(t+r)h dt − ∫_0^∞ e^{−λt}T(t)h dt)
= r⁻¹(e^{λr}∫_r^∞ e^{−λt}T(t)h dt − ∫_0^∞ e^{−λt}T(t)h dt)
→ λf − h


Hille-Yosida theorem

Theorem 2.4 The closure of A is the generator of a strongly continuous contraction semigroup on L0 if and only if

• D(A) is dense in L0.

• ‖λf − Af‖ ≥ λ‖f‖, f ∈ D(A), λ > 0.

• R(λ− A) is dense in L0.

Proof. Necessity is discussed above. Assuming A is closed (otherwise, replace A by its closure), the conditions imply R(λ − A) = L0, and the semigroup is obtained by

T(t)f = lim_{n→∞} (I − (1/n)A)^{−[nt]}f.

(One must show that the right side is Cauchy.)


Probabilistic interpretation of the limit

If T (t) corresponds to a Markov process X , then

(I − (1/n)A)⁻¹f(x) = Ex[f(X((1/n)∆))],

where ∆ is a unit exponential independent of X, and

(I − (1/n)A)^{−[nt]}f(x) = Ex[f(X((1/n)Σ_{i=1}^{[nt]} ∆i))]
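For a finite state space the Hille-Yosida limit can be tested directly with matrices; a sketch for a hypothetical two-state chain with generator Q (not from the notes):

```python
import numpy as np

# (I - Q/n)^{-[nt]} should approach the transition matrix e^{Qt}.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
t, n = 1.5, 2_000
R = np.linalg.inv(np.eye(2) - Q / n)          # one resolvent step (I - Q/n)^{-1}
approx = np.linalg.matrix_power(R, int(n * t))
# For this Q: Q = -3(I - P_inf) with P_inf rows equal to the stationary
# distribution (2/3, 1/3), so e^{Qt} = P_inf + e^{-3t}(I - P_inf).
P_inf = np.array([[2 / 3, 1 / 3], [2 / 3, 1 / 3]])
exact = P_inf + np.exp(-3 * t) * (np.eye(2) - P_inf)
assert float(np.max(np.abs(approx - exact))) < 1e-2
```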


Transition functions

Definition 2.5 P(t, x, Γ) defined on [0,∞)×E×B(E) is a transition function if P(·, ·, Γ) is Borel measurable for each Γ ∈ B(E), P(t, x, ·) ∈ P(E) for each (t, x) ∈ [0,∞)×E, and P satisfies the Chapman-Kolmogorov relation

P(t+s, x, Γ) = ∫_E P(s, y, Γ)P(t, x, dy).

A Markov process X corresponds to a transition function P provided

P{X(t) ∈ Γ|X(0) = x} = P(t, x, Γ).

T(t)f(x) = ∫_E f(y)P(t, x, dy) defines a semigroup on B(E).


The resolvent for the full generator

Lemma 2.6 Suppose T(t) : B(E) → B(E) is given by a transition function, T(t)f(x) = ∫_E f(y)P(t, x, dy). For h ∈ B(E), define

f(x) = ∫_0^∞ e^{−λt}T(t)h(x)dt.

Then (f, λf − h) ∈ A.

Proof.

∫_0^t T(s)(λf − h)ds = λ∫_0^t ∫_0^∞ e^{−λu}T(s+u)h du ds − ∫_0^t T(s)h ds
= λ∫_0^t e^{λs}∫_s^∞ e^{−λu}T(u)h du ds − ∫_0^t T(s)h ds
= e^{λt}∫_t^∞ e^{−λu}T(u)h du − ∫_0^∞ e^{−λu}T(u)h du
= T(t)f − f


A convergence lemma

Lemma 2.7 Let E be compact and suppose {fk} ⊂ C(E) separates points. If {xn} satisfies: lim_{n→∞} fk(xn) exists for every fk, then lim_{n→∞} xn exists.

Proof. If x and x′ are limit points of {xn}, we must have fk(x) = fk(x′) for all k. But then x = x′, since {fk} separates points.


Feller processes

Lemma 2.8 Assume E is compact, T(t) : C(E) → C(E), and

lim_{t→0} T(t)f(x) = f(x), x ∈ E, f ∈ C(E).

If X is a Markov process corresponding to T(t), then X has a modification with cadlag sample paths.

Proof. For h ∈ C(E), f = Rλh ≡ ∫_0^∞ e^{−λt}T(t)h dt ∈ C(E), so setting g = λf − h,

f(X(t)) − f(X(0)) − ∫_0^t g(X(s))ds

is a martingale. By the upcrossing inequality, there exists a set Ωf ⊂ Ω with P(Ωf) = 1 such that for ω ∈ Ωf, lim_{s→t+, s∈Q} f(X(s, ω)) exists for each t ≥ 0 and lim_{s→t−, s∈Q} f(X(s, ω)) exists for each t > 0.

Suppose {hk, k ≥ 1} ⊂ C(E) is dense. Then {Rλhk : λ ∈ Q ∩ (0,∞), k ≥ 1} separates points in E.


3. Martingale problems

• Definition

• Equivalent formulations

• Uniqueness of 1-dimensional distributions implies uniqueness of fdd

• Uniqueness under the Hille-Yosida conditions

• Markov property

• Quasi-left continuity

http://www.math.wisc.edu/~kurtz/FrankLect.htm


Martingale problems: Definition

E state space (a complete, separable metric space)

A generator (a linear operator with domain and range in B(E))

µ ∈ P(E)

X is a solution of the martingale problem for (A, µ) if and only if µ = PX(0)⁻¹ and there exists a filtration {Ft} such that

Mf(t) = f(X(t)) − ∫₀^t Af(X(s)) ds

is an {Ft}-martingale for each f ∈ D(A).


Examples of generators

Standard Brownian motion (E = R^d)

Af = ½∆f, D(A) = C²_c(R^d)

Poisson process (E = {0, 1, 2, . . .}, D(A) = B(E))

Af(k) = λ(f(k + 1) − f(k))

Pure jump process (E arbitrary)

Af(x) = λ(x) ∫_E (f(y) − f(x)) µ(x, dy)

Diffusion (E = R^d, D(A) = C²_c(R^d))

Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂²f(x)/∂x_i∂x_j + Σ_i b_i(x) ∂f(x)/∂x_i   (3.1)
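As a sanity check on the Poisson process example, the martingale property E[Mf(t)] = 0 can be verified by simulation; the rate λ = 2 and test function f(k) = k² below are arbitrary illustrative choices:

```python
import numpy as np

# Monte Carlo check that M_f(t) = f(N(t)) - int_0^t Af(N(s)) ds has mean
# zero for a Poisson process, where Af(k) = lam*(f(k+1) - f(k)); with
# f(k) = k^2 this gives Af(k) = lam*(2k + 1).
rng = np.random.default_rng(0)
lam, t, n_paths = 2.0, 1.0, 100_000

vals = np.empty(n_paths)
for i in range(n_paths):
    # jump times of a rate-lam Poisson process on [0, t]
    jumps = []
    s = rng.exponential(1.0 / lam)
    while s < t:
        jumps.append(s)
        s += rng.exponential(1.0 / lam)
    # N(s) = k on [jump_k, jump_{k+1}), so the integral is exact piecewise
    times = np.array([0.0] + jumps + [t])
    ks = np.arange(len(jumps) + 1)
    integral = np.sum(lam * (2 * ks + 1) * np.diff(times))
    vals[i] = len(jumps) ** 2 - integral     # M_f(t), using N(0) = 0

print(vals.mean())  # should be near 0
```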


Equivalent formulations

Suppose, without loss of generality, that D(A) is closed under addition of constants (A1 = 0). Then the following are equivalent:

a) X is a solution of the martingale problem for (A, µ).

b) PX(0)⁻¹ = µ and there exists a filtration {Ft} such that for each λ > 0 and each f ∈ D(A),

e^{−λt} f(X(t)) − ∫₀^t e^{−λs} (Af(X(s)) − λf(X(s))) ds

is an {Ft}-martingale.

c) PX(0)⁻¹ = µ and there exists a filtration {Ft} such that for each f ∈ D(A) with inf_{x∈E} f(x) > 0,

Rf(t) = (f(X(t)) / f(X(0))) exp{−∫₀^t (Af(X(s)) / f(X(s))) ds}

is an {Ft}-martingale.


Proof. For part (c), assume D(A) ⊂ Cb(E) and X is right continuous. Then

f(X(t)) exp{−∫₀^t (Af(X(s)) / f(X(s))) ds}

= f(X(0)) + ∫₀^t exp{−∫₀^r (Af(X(s)) / f(X(s))) ds} df(X(r))

− ∫₀^t f(X(r)) (Af(X(r)) / f(X(r))) exp{−∫₀^r (Af(X(s)) / f(X(s))) ds} dr

= f(X(0)) + ∫₀^t exp{−∫₀^r (Af(X(s)) / f(X(s))) ds} dMf(r),

so if Mf is a martingale, then Rf is a martingale.


Conversely, if Rf is a martingale, then

Mf(t) = f(X(0)) + f(X(0)) ∫₀^t exp{∫₀^r (Af(X(s)) / f(X(s))) ds} dRf(r)

is a martingale.

Note that considering only f that are strictly positive is no restriction, since we can always add a constant to f.


Conditions for the martingale property

Lemma 3.1 For (f, g) ∈ A, h1, . . . , hm ∈ C(E), and t1 ≤ t2 ≤ · · · ≤ tm+1, let

η(Y) ≡ η(Y, (f, g), {hi}, {ti})
= (f(Y(tm+1)) − f(Y(tm)) − ∫_{tm}^{tm+1} g(Y(s)) ds) ∏_{i=1}^m hi(Y(ti)).

Then Y is a solution of the martingale problem for A if and only if E[η(Y)] = 0 for all such η.

The assertion that Y is a solution of the martingale problem for A is an assertion about the finite dimensional distributions of Y.
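Lemma 3.1 can be illustrated by simulation for standard Brownian motion with A = ½∆: taking f(x) = x² (so g = Af = 1, though this f is not compactly supported) and h1(x) = cos(x), the quantity E[η(B)] should vanish. A sketch with arbitrary choices of t1 and t2:

```python
import numpy as np

# Monte Carlo check that E[eta(Y)] = 0 for standard Brownian motion,
# with (f, g) = (x^2, 1) (since (1/2)f'' = 1) and h1(x) = cos(x).
# Here the integral of g(Y(s)) over [t1, t2] is exactly t2 - t1, so no
# path discretization is needed.
rng = np.random.default_rng(1)
t1, t2, n = 0.5, 1.2, 500_000

B1 = rng.normal(0.0, np.sqrt(t1), n)            # B(t1)
B2 = B1 + rng.normal(0.0, np.sqrt(t2 - t1), n)  # B(t2)
eta = (B2**2 - B1**2 - (t2 - t1)) * np.cos(B1)
print(eta.mean())  # should be near 0
```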


Uniqueness of 1-dimensional distributions implies uniqueness of fdd

Theorem 3.2 If any two solutions of the martingale problem for A satisfying PX1(0)⁻¹ = PX2(0)⁻¹ also satisfy PX1(t)⁻¹ = PX2(t)⁻¹ for all t ≥ 0, then the f.d.d. of a solution X are uniquely determined by PX(0)⁻¹.

Proof. If X is a solution of the MGP for A and Xa(t) = X(a + t), then Xa is a solution of the MGP for A. Furthermore, for positive fi ∈ B(E) and 0 ≤ t1 < t2 < · · · < tm = a,

Q(B) = E[1B(Xa) ∏_{i=1}^m fi(X(ti))] / E[∏_{i=1}^m fi(X(ti))]

defines a probability measure on F = σ(Xa(s), s ≥ 0), and under Q, Xa is a solution of the martingale problem for A with initial distribution

µ(Γ) = E[1Γ(X(a)) ∏_{i=1}^m fi(X(ti))] / E[∏_{i=1}^m fi(X(ti))].


Proceeding by induction, fix m and suppose E[∏_{i=1}^m fi(X(ti))] is uniquely determined for all 0 ≤ t1 < t2 < · · · < tm and all fi. Then µ is uniquely determined, and the one dimensional distributions of Xa under Q are uniquely determined; that is,

E[fm+1(X(tm+1)) ∏_{i=1}^m fi(X(ti))] / E[∏_{i=1}^m fi(X(ti))]

is uniquely determined for tm+1 ≥ a. Since a is arbitrary and the denominator is uniquely determined, the numerator is uniquely determined, completing the induction step.


Adding a time component

Lemma 3.3 Suppose that g(t, x) has the property that g(t, ·) ∈ D(A) for each t and that g, ∂tg, and Ag are all bounded in t and x and are continuous functions of t. If X is a solution of the martingale problem for A, then

g(t, X(t)) − ∫₀^t (∂sg(s, X(s)) + Ag(s, X(s))) ds

is a martingale.


Proof. Let 0 = s1 < s2 < · · · partition [0, r]. Then

E[g(t + r, X(t + r)) − g(t, X(t))|Ft]
= Σ_k E[g(t + s_{k+1}, X(t + s_{k+1})) − g(t + s_k, X(t + s_k))|Ft]
= Σ_k E[g(t + s_{k+1}, X(t + s_{k+1})) − g(t + s_{k+1}, X(t + s_k))|Ft]
+ Σ_k E[g(t + s_{k+1}, X(t + s_k)) − g(t + s_k, X(t + s_k))|Ft]
= Σ_k E[∫_{s_k}^{s_{k+1}} Ag(t + s_{k+1}, X(t + u)) du |Ft]
+ Σ_k E[∫_{s_k}^{s_{k+1}} ∂ug(t + u, X(t + s_k)) du |Ft]

To complete the proof, see Exercise 14.


Uniqueness under the Hille-Yosida conditions

Theorem 3.4 If A satisfies the conditions of Theorem 2.4 and D(A) is separating, then there is at most one solution of the martingale problem.

Proof. If X is a solution of the martingale problem for A, then by Lemma 3.3, for each t > 0 and each f ∈ D(A), T(t − s)f(X(s)) is a martingale. This martingale property extends to all f in the closure of D(A). Consequently,

E[f(X(t))|Fs] = T(t − s)f(X(s)),

and E[f(X(t))] = E[T(t)f(X(0))], which determines the one dimensional distributions, implying uniqueness.


Markov property

Theorem 3.5 Suppose the conclusion of Theorem 3.2 holds. If X is a solution of the martingale problem for A with respect to a filtration {Ft}, then X is Markov with respect to {Ft}.

Proof. Let F ∈ Fr with P(F) > 0, and for B ∈ F, define

P1(B) = E[1F E[1B|Fr]] / P(F),  P2(B) = E[1F E[1B|X(r)]] / P(F).

Define Y(t) = X(r + t). Then

P1{Y(0) ∈ Γ} = E[1F E[1_{X(r)∈Γ}|Fr]] / P(F) = E[1F 1_{X(r)∈Γ}] / P(F)
= E[1F E[1_{X(r)∈Γ}|X(r)]] / P(F) = P2{Y(0) ∈ Γ}.

Check that EP1[η(Y)] = EP2[η(Y)] = 0 for all η(Y) as in Lemma 3.1.


Therefore

E[1F E[f(X(r + t))|Fr]] = P(F) EP1[f(Y(t))]
= P(F) EP2[f(Y(t))] = E[1F E[f(X(r + t))|X(r)]].

Since F ∈ Fr is arbitrary, E[f(X(r + t))|Fr] = E[f(X(r + t))|X(r)], and the Markov property follows.


Cadlag versions

Lemma 3.6 Suppose E is compact and A ⊂ C(E) × B(E). If D(A) is separating, then any solution of the martingale problem for A has a cadlag modification.

Proof. See Lemma 2.8.


Quasi-left continuity

X is quasi-left continuous if and only if for each sequence of stopping times τ1 ≤ τ2 ≤ · · · such that τ ≡ lim_{n→∞} τn < ∞ a.s.,

lim_{n→∞} X(τn) = X(τ) a.s.

Lemma 3.7 Let A ⊂ C(E) × B(E), and suppose that D(A) is separating. Let X be a cadlag solution of the martingale problem for A. Then X is quasi-left continuous.

Proof. For (f, g) ∈ A,

lim_{n→∞} f(X(τn ∧ t)) = lim_{n→∞} E[f(X(τ ∧ t)) − ∫_{τn∧t}^{τ∧t} g(X(s)) ds |Fτn]
= E[f(X(τ ∧ t))| ∨n Fτn].


Since X is cadlag,

lim_{n→∞} X(τn ∧ t) = X(τ ∧ t) if τn ∧ t = τ ∧ t for n sufficiently large,
and lim_{n→∞} X(τn ∧ t) = X(τ ∧ t−) if τn ∧ t < τ ∧ t for all n.

To complete the proof, see Exercise 6.


Continuity of diffusion processes

Lemma 3.8 Suppose E = R^d and

Af(x) = ½ Σ_{i,j} a_{ij}(x) ∂²f(x)/∂x_i∂x_j + Σ_i b_i(x) ∂f(x)/∂x_i,  D(A) = C²_c(R^d).

If X is a solution of the martingale problem for A, then X has a modification that is cadlag in R^d ∪ {∞}. If X is cadlag, then X is continuous.

Proof. The existence of a cadlag modification follows by Lemma 3.6. To show continuity, it is enough to show that for f ∈ C^∞_c(R^d), f ∘ X is continuous, and for that it is enough to show

lim_{max |t_{i+1}−t_i|→0} Σ_i (f(X(t_{i+1} ∧ t)) − f(X(t_i ∧ t)))⁴ = 0.


From the martingale properties,

E[(f(X(t + h)) − f(X(t)))⁴]
= ∫_t^{t+h} E[Af⁴(X(s)) − 4f(X(t))Af³(X(s)) + 6f²(X(t))Af²(X(s)) − 4f³(X(t))Af(X(s))] ds

Check that

Af⁴(x) − 4f(x)Af³(x) + 6f²(x)Af²(x) − 4f³(x)Af(x) = 0. (3.2)
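Identity (3.2) can be checked symbolically; the sketch below does the one-dimensional case Af = ½a f′′ + b f′ (the multidimensional computation is analogous):

```python
import sympy as sp

# Symbolic check of (3.2) in one dimension: for Af = (1/2)*a*f'' + b*f',
# the combination Af^4 - 4f*Af^3 + 6f^2*Af^2 - 4f^3*Af vanishes identically.
x = sp.symbols('x')
a, b, f = (sp.Function(s)(x) for s in ('a', 'b', 'f'))
A = lambda g: sp.Rational(1, 2) * a * sp.diff(g, x, 2) + b * sp.diff(g, x)

expr = A(f**4) - 4*f*A(f**3) + 6*f**2*A(f**2) - 4*f**3*A(f)
print(sp.simplify(sp.expand(expr)))  # 0
```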


4. Existence of solutions and forward equations

• Conditions for relative compactness for cadlag processes

• Forward equations

• Uniqueness of the forward equation under a range condition

• Construction of a solution of a martingale problem

• Stationary distributions

• Echeverria’s theorem

• Equivalence of the forward equation and the MGP


Conditions for relative compactness

Let (E, r) be a complete, separable metric space, and define q(x, y) = 1 ∧ r(x, y) (so q is an equivalent metric under which E is complete). Let {Xn} be a sequence of cadlag processes with values in E, Xn adapted to {F^n_t}.

Theorem 4.1 Assume the following:

a) For t ∈ T0, a dense subset of [0,∞), {Xn(t)} is relatively compact.

b) For T > 0, there exist β > 0 and random variables γn(δ, T) such that for 0 ≤ t ≤ T, 0 ≤ u ≤ δ,

E[q^β(Xn(t + u), Xn(t))|F^n_t] ≤ E[γn(δ, T)|F^n_t]

and lim_{δ→0} lim sup_{n→∞} E[γn(δ, T)] = 0.

Then {Xn} is relatively compact in DE[0,∞).


Relative compactness for martingale problems

Theorem 4.2 Let E be compact, and let {An} be a sequence of generators. Suppose there exists a dense subset D ⊂ C(E) such that for each f ∈ D there exist fn ∈ D(An) such that lim_{n→∞} ‖fn − f‖ = 0 and Cf = sup_n ‖Anfn‖ < ∞. If {Xn} is a sequence of cadlag processes in E such that for each n, Xn is a solution of the martingale problem for An, then {Xn} is relatively compact in DE[0,∞).


Proof. Let f ∈ D, and for δ > 0 let hδ ∈ D be such that lim sup_{δ→0} √δ C_{hδ} < ∞ and lim_{δ→0} ‖hδ − f²‖ = 0, with hδ,n ∈ D(An) the corresponding approximations of hδ. Then for 0 ≤ u ≤ δ,

E[(f(Xn(t + u)) − f(Xn(t)))²|F^n_t]
= E[f²(Xn(t + u)) − f²(Xn(t))|F^n_t] − 2f(Xn(t)) E[f(Xn(t + u)) − f(Xn(t))|F^n_t]
≤ 2‖f² − hδ,n‖ + 4‖f‖‖f − fn‖ + 2‖f‖Cf δ + C_{hδ} δ
→ 2‖f² − hδ‖ + 2‖f‖Cf δ + C_{hδ} δ as n → ∞,

which implies relative compactness for {f(Xn)}. Since the collection of such f is dense in C(E), relative compactness of {Xn} follows.


The forward equation for a general Markov process

Let A ⊂ B(E) × B(E). If X is a solution of the martingale problem for A and νt is the distribution of X(t), then

0 = E[f(X(t)) − f(X(0)) − ∫₀^t Af(X(s)) ds] = νtf − ν0f − ∫₀^t νsAf ds,

so

νtf = ν0f + ∫₀^t νsAf ds,  f ∈ D(A). (4.1)

(4.1) gives the weak form of the forward equation.

Definition 4.3 A measurable mapping t ∈ [0,∞) → νt ∈ P(E) is a solution of the forward equation for A if (4.1) holds for all f ∈ D(A).
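For the Poisson generator Af(k) = λ(f(k+1) − f(k)), taking f = 1_{{k}} in (4.1) gives the ODE system ν̇t(k) = λ(νt(k−1) − νt(k)), whose solution with ν0 = δ0 should be the Poisson(λt) distribution. A numerical sketch on a truncated state space (the rate, horizon, and truncation level are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp
from math import exp, factorial

# Forward equation for the Poisson generator Af(k) = lam*(f(k+1) - f(k)):
# nu_t'(k) = lam*(nu_t(k-1) - nu_t(k)), truncated at level K.
lam, K = 1.5, 40

def forward_rhs(t, nu):
    dnu = -lam * nu
    dnu[1:] += lam * nu[:-1]
    return dnu

nu0 = np.zeros(K)
nu0[0] = 1.0                               # X(0) = 0
sol = solve_ivp(forward_rhs, (0.0, 2.0), nu0, rtol=1e-8, atol=1e-10)
nu_t = sol.y[:, -1]                        # solution at t = 2

# Compare with the Poisson(lam*t) pmf.
pmf = np.array([exp(-lam * 2) * (lam * 2)**k / factorial(k) for k in range(K)])
print(np.max(np.abs(nu_t - pmf)))  # small
```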


Fokker-Planck equation

Let Af = ½a(x)f′′(x) + b(x)f′(x), f ∈ D(A) = C²_c(R). If νt has a C² density ν(t, x), then, integrating by parts,

νtAf = ∫_{−∞}^∞ Af(x) ν(t, x) dx = ∫_{−∞}^∞ f(x) (½ ∂²/∂x² (a(x)ν(t, x)) − ∂/∂x (b(x)ν(t, x))) dx,

and the forward equation is equivalent to

∂t ν(t, x) = ½ ∂²/∂x² (a(x)ν(t, x)) − ∂/∂x (b(x)ν(t, x)),

known as the Fokker-Planck equation in the physics literature.
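As a quick check, the stationary Fokker-Planck equation can be verified symbolically for an Ornstein-Uhlenbeck process, a(x) = σ², b(x) = −x, whose stationary density is proportional to exp(−x²/σ²) (this example is an illustration, not from the text):

```python
import sympy as sp

# Stationary Fokker-Planck check for Ornstein-Uhlenbeck: a = sigma^2,
# b = -x, p(x) ~ exp(-x^2/sigma^2); the right-hand side should vanish.
x = sp.symbols('x', real=True)
sigma = sp.symbols('sigma', positive=True)
a = sigma**2
b = -x
p = sp.exp(-x**2 / sigma**2)               # unnormalized stationary density

fp_rhs = sp.Rational(1, 2) * sp.diff(a * p, x, 2) - sp.diff(b * p, x)
print(sp.simplify(fp_rhs))  # 0
```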


Uniqueness for the forward equation

Lemma 4.4 If {νt} and {µt} are solutions of the forward equation for A with ν0 = µ0 and R(λ − A) is separating for each λ > 0, then ∫₀^∞ e^{−λt}νt dt = ∫₀^∞ e^{−λt}µt dt and µt = νt for almost every t. Consequently, if ν and µ are weakly right continuous or if D(A) is separating, then νt = µt for all t ≥ 0.

Proof.

λ ∫₀^∞ e^{−λt} νtf dt = ν0f + λ ∫₀^∞ e^{−λt} ∫₀^t νsAf ds dt
= ν0f + λ ∫₀^∞ ∫_s^∞ e^{−λt} νsAf dt ds
= ν0f + ∫₀^∞ e^{−λs} νsAf ds,

and hence

∫₀^∞ e^{−λt} νt(λf − Af) dt = ν0f.

Since R(λ − A) is separating, the result holds.


The semigroup and the forward equation

If ν0 ∈ P(E), then

ν0T(t)f = ν0f + ∫₀^t ν0T(s)Af ds,

and if {T(t)} is given by a transition function, then νt = ∫_E P(t, x, ·)ν0(dx) satisfies

νtf = ν0f + ∫₀^t νsAf ds,  f ∈ D(A).

If A is the strong generator and D(A) is separating, then uniqueness follows by Lemma 4.4.


Dissipativity and the positive maximum principle

For λ > 0,

‖λf − t⁻¹(T(t)f − f)‖ ≥ (λ + t⁻¹)‖f‖ − t⁻¹‖T(t)f‖ ≥ λ‖f‖,

so A is dissipative:

‖λf − Af‖ ≥ λ‖f‖,  λ > 0.

Definition 4.5 A satisfies the positive maximum principle if f(x) = ‖f‖ implies Af(x) ≤ 0.

Lemma 4.6 The weak generator for a Markov process satisfies the positive maximum principle.

Lemma 4.7 Let E be compact and D(A) ⊂ C(E). If A satisfies the positive maximum principle, then A is dissipative.


Digression on the proof of the Hille-Yosida theorem

The conditions of the Hille-Yosida theorem 2.4 imply that (I − n⁻¹A)⁻¹ exists and

‖(I − n⁻¹A)⁻¹f‖ ≤ ‖f‖.

In addition,

‖(I − n⁻¹A)⁻¹f − f‖ ≤ (1/n)‖Af‖.

One proof of the Hille-Yosida theorem is to show that

Tn(t)f = (I − n⁻¹A)^{−[nt]}f

is a Cauchy sequence and to observe that

Tn(t)f = f + (1/n) Σ_{k=1}^{[nt]} (I − n⁻¹A)^{−k}Af = f + ∫₀^{[nt]/n} Tn(s + n⁻¹)Af ds
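For a finite-state generator, Tn(t) can be computed directly and compared with T(t) = e^{tQ}; a sketch with an arbitrary 3-state rate matrix (not from the text):

```python
import numpy as np
from scipy.linalg import expm

# Illustration of T_n(t)f = (I - n^{-1}A)^{-[nt]}f for a finite-state
# generator: A is a rate matrix Q, T(t) = e^{tQ}, and the approximation
# error should shrink roughly like 1/n.
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.0, 2.0, -2.0]])
f = np.array([1.0, 0.0, -1.0])
t = 1.0

exact = expm(t * Q) @ f
I = np.eye(3)
errs = {}
for n in (10, 100, 1000):
    Tn = np.linalg.matrix_power(np.linalg.inv(I - Q / n), int(n * t))
    errs[n] = np.max(np.abs(Tn @ f - exact))
    print(n, errs[n])
```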


Probabilistic interpretation

(n − A)⁻¹ = ∫₀^∞ e^{−nt}T(t) dt and (I − n⁻¹A)⁻¹ = n ∫₀^∞ e^{−nt}T(t) dt. If {T(t)} is given by a transition function, then

ηn(x, dy) = n ∫₀^∞ e^{−nt}P(t, x, dy) dt

is a discrete time transition function. If {Y^n_k} is a Markov chain with transition function ηn, then

E[f(Y^n_k)] = E[(I − n⁻¹A)^{−k}f(Y0)] = E[f(X((∆1 + · · · + ∆k)/n))],

where the ∆i are independent unit exponentials, and Xn(t) = Y^n_{[nt]} can be written as

Xn(t) = X((1/n) Σ_{k=1}^{[nt]} ∆k)


Construction of a solution of a martingale problem

Theorem 4.8 Assume that E is compact, A ⊂ C(E) × C(E), (1, 0) ∈ A, A is linear, and D(A) is dense in C(E). Assume that A satisfies the positive maximum principle (and is consequently dissipative). Then there exists a transition function ηn such that

∫_E f(y)ηn(x, dy) = (I − n⁻¹A)⁻¹f(x) (4.2)

for all f ∈ R(I − n⁻¹A).


Proof. Note that D((I − n⁻¹A)⁻¹) = R(I − n⁻¹A).

For each x ∈ E, ηxh = (I − n⁻¹A)⁻¹h(x) is a linear functional on R(I − n⁻¹A). Since h(x) = f(x) − (1/n)Af(x) for some f ∈ D(A) and A satisfies the positive maximum principle, |ηxh| = |f(x)| ≤ ‖h‖ and ηx1 = 1. The Hahn-Banach theorem implies ηx extends to a positive linear functional on C(E) (hence a probability measure).

Γx = {η ∈ P(E) : ηf = (I − n⁻¹A)⁻¹f(x), f ∈ R(I − n⁻¹A)}

is closed, and lim sup_{y→x} Γy ⊂ Γx. The measurable selection theorem implies the existence of ηn satisfying (4.2).


Approximating Markov chain

For ηn as in (4.2), define

Anf(x) = n(∫_E f(y)ηn(x, dy) − f(x)).

Then An is the generator of a pure-jump Markov process of the form Xn(t) = Y^n_{Nn(t)}, where {Y^n_k} is a Markov chain with transition function ηn and Nn is a Poisson process with parameter n.

Then

f(Xn(t)) − f(Xn(0)) − ∫₀^t Anf(Xn(s)) ds

is a martingale, and in particular, if f ∈ D(A) and fn = f − n⁻¹Af, then Anfn = Af and

M^n_f(t) = fn(Xn(t)) − fn(Xn(0)) − ∫₀^t Af(Xn(s)) ds

is a martingale.


Theorem 4.2 implies {Xn} is relatively compact, and (see Lemma 3.1), if X is a limit point of {Xn}, for 0 ≤ t1 < · · · < tm+1,

0 = E[(fn(Xn(tm+1)) − fn(Xn(tm)) − ∫_{tm}^{tm+1} Af(Xn(s)) ds) ∏_{i=1}^m hi(Xn(ti))]
→ E[(f(X(tm+1)) − f(X(tm)) − ∫_{tm}^{tm+1} Af(X(s)) ds) ∏_{i=1}^m hi(X(ti))],

at least if the ti are selected outside the at most countable set of times at which X has a fixed point of discontinuity. Since X is right continuous, the right side is in fact zero for all choices of {ti}, so X is a solution of the martingale problem for A.


Stationary distributions

Definition 4.9 A stochastic process X is stationary if the distribution of Xt ≡ X(t + ·) does not depend on t.

Definition 4.10 µ is a stationary distribution for the martingale problem for A if there exists a stationary solution of the martingale problem for A with marginal distribution µ.

Theorem 4.11 Suppose that D(A) and R(λ − A) are separating and that for each ν ∈ P(E), there exists a solution of the martingale problem for (A, ν). If µ ∈ P(E) satisfies

∫_E Af dµ = 0,  f ∈ D(A),

then µ is a stationary distribution for A. (See Lemma 4.4.)


Echeverria’s theorem

Theorem 4.12 Let E be compact, and let A ⊂ C(E)×C(E) be linear andsatisfy the positive maximum principle. Suppose that D(A) is closed undermultiplication and dense in C(E). If µ ∈ P(E) satisfies∫

E

Afdµ = 0, f ∈ D(A),

then µ is a stationary distribution of A.

Example 4.13 E = [0, 1], Af(x) = 12f′′(x)

D(A) = f ∈ C2[0, 1] : f ′(0) = f ′(1) = 0, f ′(13) = f ′(2

3)

Let µ(dx) = 3I[ 13 ,

23 ](x)dx.
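For a representative member of D(A), the condition ∫E Af dµ = 0 in Example 4.13 can be checked symbolically; the f below, with f′(x) = sin²(3πx) (so f′(0) = f′(1) = 0 and f′(1/3) = f′(2/3)), is an illustrative choice, not from the text:

```python
import sympy as sp

# Example 4.13: Af = f''/2 and mu(dx) = 3*1_{[1/3,2/3]}(x) dx.  With
# f'(x) = sin(3*pi*x)**2, f is in D(A), and the integral of Af against mu
# reduces to (3/2)*(f'(2/3) - f'(1/3)) = 0.
x = sp.symbols('x')
fprime = sp.sin(3 * sp.pi * x)**2
Af = sp.Rational(1, 2) * sp.diff(fprime, x)    # Af = f''/2
val = sp.integrate(3 * Af, (x, sp.Rational(1, 3), sp.Rational(2, 3)))
print(val)  # 0
```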


Outline of proof

In the proof of Theorem 4.8, we constructed ηn so that

∫_E fn(y)ηn(x, dy) = ∫_E (f(y) − (1/n)Af(y))ηn(x, dy) = f(x).

Consequently,

∫_E ∫_E fn(y)ηn(x, dy)µ(dx) = ∫_E f(x)µ(dx) = ∫_E (f(x) − (1/n)Af(x))µ(dx) = ∫_E fn(x)µ(dx).

For F(x, y) = Σ_{i=1}^m hi(x)(fi(y) − (1/n)Afi(y)) + h0(y), fi ∈ D(A), define

ΛnF = ∫ [Σ_{i=1}^m hi(x)fi(x) + h0(x)] µ(dx).

If Λn is given by a measure νn, then both marginals are µ, and letting ηn satisfy νn(dx, dy) = ηn(x, dy)µ(dx), for f ∈ D(A),

∫ (f(y) − (1/n)Af(y)) ηn(x, dy) = f(x),  µ-a.s.


The work is to show that Λn is a positive linear functional.


Extensions

Theorem 4.14 Let E be locally compact (e.g., E = R^d), and let A ⊂ C(E) × C(E) satisfy the positive maximum principle. Suppose that D(A) is an algebra and dense in C(E). If µ ∈ P(E) satisfies

∫_E Af dµ = 0,  f ∈ D(A),

then µ is a stationary distribution of A.

Proof. Let E^∆ = E ∪ {∞} and extend A to include (1, 0). There exists an E^∆-valued stationary solution X of the martingale problem for the extended A, but P{X(t) ∈ E} = µ(E) = 1.


Complete, separable E

E complete, separable. A ⊂ C(E) × C(E).

Assume that {gk} is closed under multiplication. Let I be the collection of finite subsets of positive integers, and for I ∈ I, let k(I) satisfy g_{k(I)} = ∏_{i∈I} gi. For each k, there exists ak ≥ ‖gk‖. Let

Ê = {z ∈ ∏_{i=1}^∞ [−ai, ai] : z_{k(I)} = ∏_{i∈I} zi, I ∈ I}.

Note that Ê is compact, and define G : E → Ê by G(x) = (g1(x), g2(x), . . .). Then G has a measurable inverse defined on the (measurable) set G(E).

Lemma 4.15 Let µ ∈ P(E). Then there exists a unique measure ν ∈ P(Ê) satisfying ∫_E gk dµ = ∫_Ê zk ν(dz). In particular, if Z has distribution ν, then G⁻¹(Z) has distribution µ.


Equivalence of the forward equation and the MGP

Suppose

νtf = ν0f + ∫₀^t νsAf ds.

Define

Bλf(x, θ) = Af(x, θ) + λ(∫_E f(y, −θ)ν0(dy) − f(x, θ))

and

µλ = λ ∫₀^∞ e^{−λt}νt dt × (½δ1 + ½δ−1).

Then

∫ Bλf dµλ = 0,  f(x, θ) = f1(x)f2(θ), f1 ∈ D(A).

Let τ1 = inf{t > 0 : Θ(t) ≠ Θ(0)}, τk+1 = inf{t > τk : Θ(t) ≠ Θ(τk)}.


Theorem 4.16 Let (Y, Θ) be a stationary solution of the martingale problem for Bλ with marginal distribution µλ. Let τ1 = inf{t > 0 : Θ(t) ≠ Θ(0)}, τk+1 = inf{t > τk : Θ(t) ≠ Θ(τk)}. Define X(t) = Y(τ1 + t). Then, conditioned on {τ2 − τ1 > t0}, X is a solution of the martingale problem for A, and the distribution of X(t) is νt for 0 ≤ t ≤ t0.


5. Integration with respect to Poisson random measures

• Poisson random measures

• Stochastic integrals for space-time Poisson random measures

• The predictable σ-algebra

• Martingale properties

• Representation of counting processes

• Stochastic integrals for centered space-time Poisson random mea-sures

• Quadratic variation

• Lévy processes

• Gaussian white noise


Poisson distribution

Definition 5.1 A random variable X has a Poisson distribution with parameter λ > 0 (write X ∼ Poisson(λ)) if for each k ∈ {0, 1, 2, . . .},

P{X = k} = (λ^k / k!) e^{−λ}.

Then

E[X] = λ,  Var(X) = λ,

and the characteristic function of X is

E[e^{iθX}] = e^{λ(e^{iθ}−1)}.

Since the characteristic function of a random variable characterizes its distribution, a direct computation gives


Proposition 5.2 If X1, X2, . . . are independent random variables with Xi ∼ Poisson(λi) and Σ_{i=1}^∞ λi < ∞, then

X = Σ_{i=1}^∞ Xi ∼ Poisson(Σ_{i=1}^∞ λi).


Poisson sums of Bernoulli random variables

Proposition 5.3 Let N ∼ Poisson(λ), and suppose that Y1, Y2, . . . are i.i.d. Bernoulli random variables with parameter p ∈ [0, 1]. If N is independent of the Yi, then Σ_{i=1}^N Yi ∼ Poisson(λp).

For j = 1, . . . , m, let ej be the vector in R^m that has all its entries equal to zero, except for the jth, which is 1. For θ, y ∈ R^m, let ⟨θ, y⟩ = Σ_{j=1}^m θjyj.

Proposition 5.4 Let N ∼ Poisson(λ). Suppose that Y1, Y2, . . . are independent R^m-valued random variables such that for all k ≥ 1 and j ∈ {1, . . . , m},

P{Yk = ej} = pj,

where Σ_{j=1}^m pj = 1. Define X = (X1, . . . , Xm) = Σ_{k=1}^N Yk. If N is independent of the Yk, then X1, . . . , Xm are independent random variables and Xj ∼ Poisson(λpj).


Poisson random measures

Let (U, dU) be a complete, separable metric space, and let ν be a σ-finite measure on U. Let N(U) denote the collection of counting measures on U.

Definition 5.5 A Poisson random measure on U with mean measure ν is a random counting measure ξ (that is, an N(U)-valued random variable) such that

a) For A ∈ B(U), ξ(A) has a Poisson distribution with expectation ν(A).

b) ξ(A) and ξ(B) are independent if A ∩ B = ∅.

For f ∈ M(U), f ≥ 0, define

ψξ(f) = E[exp{−∫_U f(u)ξ(du)}] = exp{−∫(1 − e^{−f}) dν}.

(Verify the second equality by approximating f by simple functions.)


Existence

Proposition 5.6 Suppose that ν is a measure on U such that ν(U) < ∞.Then there exists a Poisson random measure with mean measure ν.

Proof. The case ν(U) = 0 is trivial, so assume that ν(U) ∈ (0, ∞). Let N be a Poisson random variable defined on a probability space (Ω, F, P) with E[N] = ν(U). Let X_1, X_2, . . . be i.i.d. U-valued random variables such that for every A ∈ B(U),

P{X_j ∈ A} = ν(A)/ν(U),

and assume that N is independent of the X_j.

Define ξ by ξ(A) = ∑_{k=1}^N 1_{{X_k ∈ A}}; in other words, ξ = ∑_{k=1}^N δ_{X_k}, where, for each x ∈ U, δ_x is the Dirac mass at x.

Extend the existence result to σ-finite measures by partitioning U = ∪_i U_i with ν(U_i) < ∞, constructing independent Poisson random measures ξ_i with mean measures ν(· ∩ U_i), and setting ξ = ∑_i ξ_i.
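The construction in the proof of Proposition 5.6 can be sketched directly: draw N ∼ Poisson(ν(U)) and then N i.i.d. points with law ν(·)/ν(U). Here U = [0, 1] and ν = c·Lebesgue, an illustrative finite-mass choice, and we verify that ξ(A) has Poisson mean and variance ν(A) for A = [0, 1/2].

```python
# Construction of a Poisson random measure for finite ν (Prop. 5.6).
import numpy as np

rng = np.random.default_rng(2)
c, n_samples = 3.0, 50_000            # ν([0, 1]) = c

def poisson_random_measure():
    """Return the points of one realization of ξ on U = [0, 1]."""
    N = rng.poisson(c)                 # total number of points
    return rng.uniform(0.0, 1.0, size=N)  # i.i.d. with law ν/ν(U)

# ξ(A) for A = [0, 1/2]: mean and variance should both be ν(A) = c/2
counts_A = np.array([(poisson_random_measure() < 0.5).sum()
                     for _ in range(n_samples)])
mean_A, var_A = counts_A.mean(), counts_A.var()
```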


Identities

Let ξ be a Poisson random measure with mean measure ν.

Lemma 5.7 Suppose f ∈ M(U), f ≥ 0. Then

E[∫_U f(y) ξ(dy)] = ∫_U f(y) ν(dy).
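Lemma 5.7 (Campbell's formula) is easy to check numerically: averaging ∫ f dξ = ∑_k f(X_k) over many realizations of ξ should reproduce ∫ f dν. The choices U = [0, 1], ν = c·Lebesgue, and f(y) = y² below are illustrative, giving ∫ f dν = c/3.

```python
# Numerical check of Campbell's formula E[∫ f dξ] = ∫ f dν (Lemma 5.7).
import numpy as np

rng = np.random.default_rng(3)
c, n_samples = 3.0, 50_000

def integral_f_dxi():
    """∫ f dξ = Σ_k f(X_k) for one realization of ξ, with f(y) = y²."""
    points = rng.uniform(0.0, 1.0, size=rng.poisson(c))
    return (points ** 2).sum()

emp_mean = np.mean([integral_f_dxi() for _ in range(n_samples)])
exact = c / 3.0                        # ∫_0^1 y² · c dy
```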

Lemma 5.8 Suppose ν is nonatomic, and let f ∈ M(N(U) × U), f ≥ 0. Then

E[∫_U f(ξ, y) ξ(dy)] = E[∫_U f(ξ + δ_y, y) ν(dy)].


Proof. Suppose 0 ≤ f ≤ 1_{U_0}, where ν(U_0) < ∞. Let U_0 = ∪_k U_k^n, where the U_k^n are disjoint and diam(U_k^n) ≤ n^{−1}. If ξ(U_k^n) is 0 or 1, then

∫_{U_k^n} f(ξ, y) ξ(dy) = ∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y) ξ(dy).

Consequently, if max_k ξ(U_k^n) ≤ 1,

∫_{U_0} f(ξ, y) ξ(dy) = ∑_k ∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y) ξ(dy).

Since ξ(U_0) < ∞, max_k ξ(U_k^n) ≤ 1 for n sufficiently large, so

E[∫_U f(ξ, y) ξ(dy)] = E[∫_{U_0} f(ξ, y) ξ(dy)]

= lim_{n→∞} ∑_k E[∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y) ξ(dy)]

= lim_{n→∞} ∑_k E[∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y) ν(dy)]


= E[∫_U f(ξ + δ_y, y) ν(dy)].

Note that the last equality follows from the fact that

f(ξ(· ∩ U_k^{n,c}) + δ_y, y) ≠ f(ξ + δ_y, y)

only if ξ(U_k^n) > 0, and hence, assuming 0 ≤ f ≤ 1_{U_0},

|∑_k ∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y) ν(dy) − ∫_{U_0} f(ξ + δ_y, y) ν(dy)| ≤ ∑_k ξ(U_k^n) ν(U_k^n),

where the expectation of the right side is ∑_k ν(U_k^n)² = ∫_{U_0} ν(U^n(y)) ν(dy) ≤ ∫_{U_0} ν(U_0 ∩ B_{1/n}(y)) ν(dy), where U^n(y) = U_k^n if y ∈ U_k^n. Since ν is nonatomic, lim_{n→∞} ν(U_0 ∩ B_{1/n}(y)) = 0.


Space-time Poisson random measures

Let ξ be a Poisson random measure on U × [0, ∞) with mean measure ν × ℓ (where ℓ denotes Lebesgue measure).

ξ(A, t) ≡ ξ(A × [0, t]) is a Poisson process with parameter ν(A).

If ν(A) < ∞, then ξ̃(A, t) ≡ ξ(A × [0, t]) − ν(A)t is a martingale.

Definition 5.9 ξ is {F_t}-compatible if, for each A ∈ B(U), ξ(A, ·) is {F_t}-adapted and, for all t, s ≥ 0, ξ(A × (t, t + s]) is independent of F_t.
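A space-time Poisson random measure with ν = Lebesgue on U = [0, 1] can be sampled by placing a Poisson number of points uniformly in the rectangle U × [0, T]. Then ξ(A, t) = ξ(A × [0, t]) should behave like a Poisson process in t with rate ν(A), and subtracting the compensator ν(A)t should give mean zero, consistent with the martingale property. All parameter choices below are illustrative.

```python
# Sketch of a space-time Poisson random measure on [0,1] × [0,T].
import numpy as np

rng = np.random.default_rng(4)
T, n_samples = 2.0, 50_000

def xi_A_t(a=0.5, t=1.5):
    """ξ([0, a] × [0, t]) for one realization on [0, 1] × [0, T]."""
    n_pts = rng.poisson(1.0 * T)           # (ν × ℓ)-mass of the box
    u = rng.uniform(0.0, 1.0, size=n_pts)  # spatial coordinates
    s = rng.uniform(0.0, T, size=n_pts)    # temporal coordinates
    return ((u <= a) & (s <= t)).sum()

counts = np.array([xi_A_t() for _ in range(n_samples)])
mean_c = counts.mean()                  # ≈ ν([0, 0.5]) · 1.5 = 0.75
centered_mean = (counts - 0.75).mean()  # compensated count has mean ≈ 0
```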


Stochastic integrals for Poisson random measures

For i = 1, . . . , m, let t_i < r_i and A_i ∈ B(U), and let η_i be F_{t_i}-measurable. Let X(u, t) = ∑_i η_i 1_{A_i}(u) 1_{[t_i, r_i)}(t), and note that

X(u, t−) = ∑_i η_i 1_{A_i}(u) 1_{(t_i, r_i]}(t). (5.1)

Define

I_ξ(X, t) = ∫_{U×[0,t]} X(u, s−) ξ(du × ds) = ∑_i η_i ξ(A_i × (t_i ∧ t, r_i ∧ t]).

Then

E[|I_ξ(X, t)|] ≤ E[∫_{U×[0,t]} |X(u, s−)| ξ(du × ds)] = ∫_0^t ∫_U E[|X(u, s)|] ν(du) ds,

and if the right side is finite, E[I_ξ(X, t)] = ∫_0^t ∫_U E[X(u, s)] ν(du) ds.


Estimates in L1,0

If

∫_{U×[0,t]} |X(u, s−)| ∧ 1 ξ(du × ds) < ∞,

then ξ{(u, s) : |X(u, s−)| > 1} < ∞, and

E[sup_{t≤T} |I_ξ(X ∧ 1, t)|] ≤ ∫_0^T ∫_U E[|X(u, s)| ∧ 1] ν(du) ds.

Definition 5.10 Let L^{1,0}(U, ν) denote the space of B(U) × B[0,∞) × F-measurable mappings (u, s, ω) → X(u, s, ω) such that

∫_0^∞ e^{−s} ∫_U E[|X(u, s)| ∧ 1] ν(du) ds < ∞.

Let S⁻ denote the collection of B(U) × B[0,∞) × F-measurable mappings (u, s, ω) → ∑_{i=1}^m η_i(ω) 1_{A_i}(u) 1_{(t_i, r_i]}(s) defined as in (5.1).


Lemma 5.11

d_{1,0}(X, Y) = ∫_0^∞ e^{−s} ∫_U E[|X(u, s) − Y(u, s)| ∧ 1] ν(du) ds

defines a metric on L^{1,0}(U, ν), and the definition of I_ξ extends to the closure of S⁻ in L^{1,0}(U, ν).


The predictable σ-algebra

Warning: Let N be a unit Poisson process. Then ∫_0^∞ e^{−s} E[|N(s) − N(s−)| ∧ 1] ds = 0, but

P{∫_0^t N(s) dN(s) ≠ ∫_0^t N(s−) dN(s)} = 1 − e^{−t}.
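The warning can be made concrete. For a Poisson process N with jump times T_1 < T_2 < · · ·, the integral ∫_0^t N(s) dN(s) sums the value of N at its own jumps, while the predictable version ∫_0^t N(s−) dN(s) sums the left limits; at the k-th jump these are k and k − 1, so the two integrals differ by the number of jumps. The time horizon below is an illustrative choice.

```python
# Illustration: ∫ N(s) dN(s) vs. the predictable ∫ N(s−) dN(s).
import numpy as np

rng = np.random.default_rng(5)
t = 3.0
n_jumps = rng.poisson(t)             # number of jumps of N by time t

# At the k-th jump (k = 1, ..., n), N(T_k) = k while N(T_k−) = k − 1.
int_right = sum(k for k in range(1, n_jumps + 1))     # ∫ N(s) dN(s)
int_left = sum(k - 1 for k in range(1, n_jumps + 1))  # ∫ N(s−) dN(s)
diff = int_right - int_left          # equals n_jumps: nonzero iff N jumps
```

So the two integrals agree only on {N(t) = 0}, an event of probability e^{−t}, matching the probability 1 − e^{−t} in the warning.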

Definition 5.12 Let (Ω, F, P) be a probability space and let {F_t} be a filtration in F. The σ-algebra P of predictable sets is the smallest σ-algebra in B(U) × B[0,∞) × F containing sets of the form A × (t_0, t_0 + r_0] × B for A ∈ B(U), t_0, r_0 ≥ 0, and B ∈ F_{t_0}.

Remark 5.13 Note that for B ∈ F_{t_0}, 1_{A×(t_0,t_0+r_0]×B}(u, t, ω) is left continuous in t and adapted, and that the mapping (u, t, ω) → X(u, t−, ω), where X(u, t−) is defined in (5.1), is P-measurable.

Definition 5.14 A stochastic process X on U × [0, ∞) is predictable if the mapping (u, t, ω) → X(u, t, ω) is P-measurable.


Lemma 5.15 If the mapping (u, t, ω)→ X(u, t, ω) is B(U)×B[0,∞)×F-measurable and adapted and is left continuous in t, then X is predictable.

Proof. Let 0 = t_0^n < t_1^n < · · · with t_{i+1}^n − t_i^n ≤ n^{−1}. Define X_n(u, t, ω) = X(u, t_i^n, ω) for t_i^n < t ≤ t_{i+1}^n. Then X_n is predictable and

lim_{n→∞} X_n(u, t, ω) = X(u, t, ω)

for all (u, t, ω).


Stochastic integrals for predictable processes

Lemma 5.16 Let G ∈ P, B ∈ B(U) with ν(B) < ∞, and b > 0. Then 1_{B×[0,b]}(u, t) 1_G(u, t, ω) is a predictable process,

I_ξ(1_{B×[0,b]} 1_G, t)(ω) = ∫_{U×[0,t]} 1_{B×[0,b]}(u, s) 1_G(u, s, ω) ξ(du × ds, ω) a.s., (5.2)

and

E[∫_{U×[0,t]} 1_{B×[0,b]}(u, s) 1_G(u, s, ·) ξ(du × ds)] = E[∫_{U×[0,t]} 1_{B×[0,b]}(u, s) 1_G(u, s, ·) ν(du) ds]. (5.3)


Proof. Let

A = {∪_{i=1}^m A_i × (t_i, t_i + r_i] × G_i : t_i, r_i ≥ 0, A_i ∈ B(U), G_i ∈ F_{t_i}}.

Then A is an algebra. For G ∈ A, (5.2) holds by definition, and (5.3) holds by direct calculation. The collection of G that satisfy (5.2) and (5.3) is closed under increasing unions and decreasing intersections, and the monotone class theorem (see Theorem 4.1 of the Appendix of Ethier and Kurtz (1986)) gives the lemma.


Lemma 5.17 Let X be a predictable process satisfying

∫_0^∞ e^{−s} ∫_U E[|X(u, s)| ∧ 1] ν(du) ds < ∞.

Then ∫_{U×[0,t]} |X(u, s)| ξ(du × ds) < ∞ a.s. and

I_ξ(X, t)(ω) = ∫_{U×[0,t]} X(u, s, ω) ξ(du × ds, ω) a.s.

Proof. Approximate by simple functions.


Consequences of predictability

Lemma 5.18 If X is predictable and ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds < ∞ a.s. for all t, then

∫_{U×[0,t]} |X(u, s)| ξ(du × ds) < ∞ a.s. (5.4)

and

∫_{U×[0,t]} X(u, s) ξ(du × ds)

exists a.s.

Proof. Let τ_c = inf{t : ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds ≥ c}, and consider X_c(u, s) = 1_{[0,τ_c]}(s) X(u, s). Then X_c satisfies the conditions of Lemma 5.17, so

∫_{U×[0,t]} |X(u, s)| ∧ 1 ξ(du × ds) < ∞ a.s.

But this implies ξ{(u, s) : s ≤ t, |X(u, s)| > 1} < ∞, so (5.4) holds.


Martingale properties

Theorem 5.19 Suppose X is predictable and ∫_0^t ∫_U E[|X(u, s)|] ν(du) ds < ∞ for each t > 0. Then

∫_{U×[0,t]} X(u, s) ξ(du × ds) − ∫_0^t ∫_U X(u, s) ν(du) ds

is an {F_t}-martingale.


Proof. Let A ∈ F_t and define X_A(u, s) = 1_A X(u, s) 1_{(t,t+r]}(s). Then X_A is predictable, and

E[1_A ∫_{U×(t,t+r]} X(u, s) ξ(du × ds)] = E[∫_{U×[0,t+r]} X_A(u, s) ξ(du × ds)]

= E[∫_{U×[0,t+r]} X_A(u, s) ν(du) ds]

= E[1_A ∫_{U×(t,t+r]} X(u, s) ν(du) ds],

and hence

E[∫_{U×(t,t+r]} X(u, s) ξ(du × ds) | F_t] = E[∫_{U×(t,t+r]} X(u, s) ν(du) ds | F_t].


Local martingales

Lemma 5.20 If

∫_{U×[0,t]} |X(u, s)| ν(du) ds < ∞ a.s., t ≥ 0,

then

∫_{U×[0,t]} X(u, s) ξ(du × ds) − ∫_{U×[0,t]} X(u, s) ν(du) ds

is a local martingale.


Proof. If τ is a stopping time and X is predictable, then 1_{[0,τ]}(s) X(u, s) is predictable. Let

τ_c = inf{t > 0 : ∫_{U×[0,t]} |X(u, s)| ν(du) ds ≥ c}.

Then

∫_{U×[0,t∧τ_c]} X(u, s) ξ(du × ds) − ∫_{U×[0,t∧τ_c]} X(u, s) ν(du) ds

= ∫_{U×[0,t]} 1_{[0,τ_c]}(s) X(u, s) ξ(du × ds) − ∫_{U×[0,t]} 1_{[0,τ_c]}(s) X(u, s) ν(du) ds

is a martingale.


Representation of counting processes

Let U = [0, ∞) and ν = ℓ. Let λ be a nonnegative, predictable process, and define G = {(u, t) : u ≤ λ(t)}. Then

N(t) = ∫_{[0,∞)×[0,t]} 1_G(u, s) ξ(du × ds) = ∫_{[0,∞)×[0,t]} 1_{[0,λ(s)]}(u) ξ(du × ds)

is a counting process with intensity λ.

Stochastic equation for a counting process

For λ : [0, ∞) × D_E[0,∞) × D_c[0,∞) → [0, ∞) with λ(t, z, v) = λ(t, z^{t−}, v^{t−}), t ≥ 0, and Z independent of ξ,

N(t) = ∫_{[0,∞)×[0,t]} 1_{[0,λ(s,Z,N)]}(u) ξ(du × ds). (5.5)
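The representation above is exactly the thinning construction: with ξ a unit-rate Poisson random measure on [0, ∞) × [0, ∞), N(t) counts the points of ξ lying under the graph of λ. The sketch below uses a deterministic, hypothetical intensity λ(s) = 1 + s on [0, T], so E[N(T)] = ∫_0^T λ(s) ds can be checked directly.

```python
# Thinning representation of a counting process with intensity λ.
import numpy as np

rng = np.random.default_rng(6)
T, u_max, n_samples = 2.0, 4.0, 20_000   # λ ≤ 1 + T = 3 < u_max

lam = lambda s: 1.0 + s                  # illustrative intensity

def thinned_count():
    """N(T) for one realization of ξ on [0, u_max] × [0, T]."""
    n_pts = rng.poisson(u_max * T)       # Lebesgue mass of the box
    u = rng.uniform(0.0, u_max, size=n_pts)
    s = rng.uniform(0.0, T, size=n_pts)
    return (u <= lam(s)).sum()           # keep points under the graph

counts = np.array([thinned_count() for _ in range(n_samples)])
mean_N = counts.mean()                   # ≈ ∫_0^2 (1 + s) ds = 4
```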


Semimartingale property

Corollary 5.21 If X is predictable and ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds < ∞ a.s. for all t, then ∫_{U×[0,t]} |X(u, s)| ξ(du × ds) < ∞ a.s. and

∫_{U×[0,t]} X(u, s) ξ(du × ds)

= ∫_{U×[0,t]} 1_{{|X(u,s)|≤1}} X(u, s) ξ(du × ds) − ∫_0^t ∫_U 1_{{|X(u,s)|≤1}} X(u, s) ν(du) ds [local martingale]

+ ∫_0^t ∫_U 1_{{|X(u,s)|≤1}} X(u, s) ν(du) ds + ∫_{U×[0,t]} 1_{{|X(u,s)|>1}} X(u, s) ξ(du × ds) [finite variation]

is a semimartingale.


Stochastic integrals for centered Poisson random mea-sures

Let ξ̃(du × ds) = ξ(du × ds) − ν(du) ds. For

X(u, t−) = ∑_i η_i 1_{A_i}(u) 1_{(t_i, r_i]}(t)

as in (5.1), define

I_ξ̃(X, t) = ∫_{U×[0,t]} X(u, s−) ξ̃(du × ds) = ∫_{U×[0,t]} X(u, s−) ξ(du × ds) − ∫_0^t ∫_U X(u, s) ν(du) ds,

and note that

E[I_ξ̃(X, t)²] = ∫_0^t ∫_U E[X(u, s)²] ν(du) ds

if the right side is finite. Then I_ξ̃(X, ·) is a square-integrable martingale.


Extension of integral

The integral extends to predictable integrands satisfying

∫_{U×[0,t]} |X(u, s)|² ∧ |X(u, s)| ν(du) ds < ∞ a.s., (5.6)

so that

∫_{U×[0,t∧τ]} X(u, s) ξ̃(du × ds) = ∫_{U×[0,t]} 1_{[0,τ]}(s) X(u, s) ξ̃(du × ds) (5.7)

is a martingale for any stopping time τ satisfying

E[∫_{U×[0,t∧τ]} |X(u, s)|² ∧ |X(u, s)| ν(du) ds] < ∞,

and (5.7) is a local square-integrable martingale if

∫_{U×[0,t]} |X(u, s)|² ν(du) ds < ∞ a.s.


Quadratic variation

Note that if X is predictable and

∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds < ∞ a.s., t ≥ 0,

then

∫_{U×[0,t]} |X(u, s)|² ∧ 1 ν(du) ds < ∞ a.s., t ≥ 0,

and

[I_ξ(X, ·)]_t = ∫_{U×[0,t]} X²(u, s) ξ(du × ds).

Similarly, if

∫_{U×[0,t]} |X(u, s)|² ∧ |X(u, s)| ν(du) ds < ∞ a.s.,

then

[I_ξ̃(X, ·)]_t = ∫_{U×[0,t]} X²(u, s) ξ(du × ds).


Semimartingale properties

Theorem 5.22 Let Y be a cadlag, adapted process. If X is predictable and satisfies (5.4), then I_ξ(X, ·) is a semimartingale and

∫_0^t Y(s−) dI_ξ(X, s) = ∫_{U×[0,t]} Y(s−) X(u, s) ξ(du × ds),

and if X satisfies (5.6), then I_ξ̃(X, ·) is a semimartingale and

∫_0^t Y(s−) dI_ξ̃(X, s) = ∫_{U×[0,t]} Y(s−) X(u, s) ξ̃(du × ds).


Lévy processes

Theorem 5.23 Let U = R and ∫_R |u|² ∧ 1 ν(du) < ∞. Then

Z(t) = ∫_{[−1,1]×[0,t]} u ξ̃(du × ds) + ∫_{[−1,1]^c×[0,t]} u ξ(du × ds)

is a process with stationary, independent increments with

E[e^{iθZ(t)}] = exp{t ∫_R (e^{iθu} − 1 − iθu 1_{[−1,1]}(u)) ν(du)}.
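The characteristic function formula can be sanity-checked in the simplest case, ν = λ δ_{u_0} with u_0 outside [−1, 1]: then there is no centering term, Z(t) = u_0 N(t) for N(t) ∼ Poisson(λt), and the formula reduces to exp{tλ(e^{iθu_0} − 1)}. The parameters below are illustrative.

```python
# Empirical vs. exact characteristic function for a compound Poisson
# special case of Theorem 5.23 (ν = λ δ_{u0}, u0 = 2 outside [−1, 1]).
import numpy as np

rng = np.random.default_rng(7)
lam, u0, t, theta, n_samples = 1.5, 2.0, 1.0, 0.7, 200_000

Z = u0 * rng.poisson(lam * t, size=n_samples)      # Z(t) = u0 · N(t)
emp_cf = np.exp(1j * theta * Z).mean()             # empirical E[e^{iθZ}]
exact_cf = np.exp(t * lam * (np.exp(1j * theta * u0) - 1.0))
err = abs(emp_cf - exact_cf)
```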


Proof.

e^{iθZ(t)} = 1 + ∫_0^t iθ e^{iθZ(s−)} dZ(s) + ∑_{s≤t} (e^{iθZ(s)} − e^{iθZ(s−)} − iθ e^{iθZ(s−)} ΔZ(s))

= 1 + ∫_{[−1,1]×[0,t]} iθ e^{iθZ(s−)} u ξ̃(du × ds) + ∫_{[−1,1]^c×[0,t]} iθ e^{iθZ(s−)} u ξ(du × ds)

+ ∫_{R×[0,t]} (e^{iθ(Z(s−)+u)} − e^{iθZ(s−)} − iθ e^{iθZ(s−)} u) ξ(du × ds)

= 1 + ∫_{[−1,1]×[0,t]} iθ e^{iθZ(s−)} u ξ̃(du × ds)

+ ∫_{R×[0,t]} e^{iθZ(s−)} (e^{iθu} − 1 − iθu 1_{[−1,1]}(u)) ξ(du × ds).

Taking expectations, ϕ(θ, t) = E[e^{iθZ(t)}] satisfies

ϕ(θ, t) = 1 + ∫_0^t ϕ(θ, s) ∫_R (e^{iθu} − 1 − iθu 1_{[−1,1]}(u)) ν(du) ds,

so ϕ(θ, t) = exp{t ∫_R (e^{iθu} − 1 − iθu 1_{[−1,1]}(u)) ν(du)}.


Approximation of Lévy processes

For 0 < ε < 1, let

Z_ε(t) = ∫_{([−1,−ε)∪(ε,1])×[0,t]} u ξ̃(du × ds) + ∫_{[−1,1]^c×[0,t]} u ξ(du × ds)

= ∫_{((−∞,−ε)∪(ε,∞))×[0,t]} u ξ(du × ds) − t ∫_{[−1,−ε)∪(ε,1]} u ν(du),

that is, throw out all jumps of size less than or equal to ε and the corresponding centering. Then

E[|Z_ε(t) − Z(t)|²] = t ∫_{[−ε,ε]} u² ν(du).

Consequently, since Z_ε − Z is a square-integrable martingale, Doob's inequality gives

lim_{ε→0} E[sup_{s≤t} |Z_ε(s) − Z(s)|²] = 0.


Summary on stochastic integrals

If X is predictable and ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du) ds < ∞ a.s. for all t, then

∫_{U×[0,t]} |X(u, s)| ξ(du × ds) < ∞ a.s.,

and

∫_{U×[0,t]} X(u, s) ξ(du × ds)

= ∫_{U×[0,t]} 1_{{|X(u,s)|≤1}} X(u, s) ξ(du × ds) − ∫_0^t ∫_U 1_{{|X(u,s)|≤1}} X(u, s) ν(du) ds [local martingale]

+ ∫_0^t ∫_U 1_{{|X(u,s)|≤1}} X(u, s) ν(du) ds + ∫_{U×[0,t]} 1_{{|X(u,s)|>1}} X(u, s) ξ(du × ds) [finite variation]

is a semimartingale.


If X is predictable and

∫_{U×[0,t]} |X(u, s)|² ∧ |X(u, s)| ν(du) ds < ∞ a.s.,

then

∫_{U×[0,t]} X(u, s) ξ̃(du × ds)

= lim_{ε→0+} ∫_{U×[0,t]} 1_{{|X(u,s)|≥ε}} X(u, s) ξ̃(du × ds)

= lim_{ε→0+} (∫_{U×[0,t]} 1_{{|X(u,s)|≥ε}} X(u, s) ξ(du × ds) − ∫_0^t ∫_U 1_{{|X(u,s)|≥ε}} X(u, s) ν(du) ds)

exists and is a local martingale.


Gaussian white noise

(U, d_U) a complete, separable metric space; B(U) the Borel sets;

µ a (Borel) measure on U; A(U) = {A ∈ B(U) : µ(A) < ∞};

W(A, t) ≡ W(A × [0, t]) a mean-zero Gaussian process indexed by A(U) × [0, ∞) with

E[W(A, t)W(B, s)] = (t ∧ s) µ(A ∩ B).

For simple ϕ(u) = ∑_i a_i 1_{A_i}(u), define

W(ϕ, t) = ∫ ϕ(u) W(du, t) = ∑_i a_i W(A_i, t),

so that

E[W(ϕ_1, t)W(ϕ_2, s)] = (t ∧ s) ∫_U ϕ_1(u)ϕ_2(u) µ(du).

Define W(ϕ, t) for all ϕ ∈ L²(µ) by L² approximation.
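A discrete sketch of Gaussian white noise on U = [0, 1] with µ = Lebesgue: partition U into cells and attach to each cell an independent Brownian motion with variance µ(cell)·t. The covariance identity E[W(A, t)W(B, s)] = (t ∧ s) µ(A ∩ B) can then be checked empirically for overlapping sets A = [0, 0.5], B = [0.25, 0.75). All discretization choices are illustrative.

```python
# Empirical check of E[W(A,t)W(B,s)] = (t ∧ s) µ(A ∩ B) on a grid.
import numpy as np

rng = np.random.default_rng(8)
n_cells, n_samples = 100, 50_000
mu_cell = 1.0 / n_cells                 # µ of each cell
t, s = 1.0, 0.5

# W(cell, ·) at times s and t: Brownian values, independent across cells
W_s = rng.normal(0.0, np.sqrt(mu_cell * s), size=(n_samples, n_cells))
W_t = W_s + rng.normal(0.0, np.sqrt(mu_cell * (t - s)),
                       size=(n_samples, n_cells))  # independent increment

cells = (np.arange(n_cells) + 0.5) / n_cells       # cell midpoints
in_A, in_B = cells < 0.5, (cells >= 0.25) & (cells < 0.75)

WA_t = W_t[:, in_A].sum(axis=1)         # W(A, t)
WB_s = W_s[:, in_B].sum(axis=1)         # W(B, s)
emp_cov = (WA_t * WB_s).mean()          # ≈ (t ∧ s) µ(A ∩ B) = 0.125
```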


Definition of integral

For X(t) = ∑_i ξ_i(t) ϕ_i, an adapted process in L²(µ), define

I_W(X, t) = ∫_{U×[0,t]} X(s, u) W(du × ds) = ∑_i ∫_0^t ξ_i(s) dW(ϕ_i, s).

Then

E[I_W(X, t)²] = E[∑_{i,j} ∫_0^t ξ_i(s)ξ_j(s) ds ∫_U ϕ_i ϕ_j dµ] = E[∫_0^t ∫_U X(s, u)² µ(du) ds].

The integral extends to measurable and adapted processes satisfying

∫_0^t ∫_U X(s, u)² µ(du) ds < ∞ a.s.,

so that

I_W(X, t)² − ∫_0^t ∫_U X(s, u)² µ(du) ds

is a local martingale.


6. Weak and strong solutions of stochastic equations

• Weak and strong solutions for simple stochastic equations

• General stochastic models

• Stochastic optimization example

• The Yamada-Watanabe and Engelbert theorem

• Compatibility restrictions

• Convex constraints

• Ordinary stochastic differential equations

• Stochastic equations for Markov chains

• Diffusion limits??

• Uniqueness question

• Compatibility for multiple time-changes

Kurtz (2013, 2007)


The classical Yamada-Watanabe theorem

Consider a standard Ito equation

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds.

To prove existence, construct an approximation, for example of Euler type,

X_n(t) = X(0) + ∫_0^t σ(X_n(η_n(s))) dW(s) + ∫_0^t b(X_n(η_n(s))) ds,

where η_n(t) = [nt]/n.
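The Euler-type scheme above can be sketched for concrete, hypothetical coefficients σ(x) = 0.5x and b(x) = −x (not taken from the lecture notes). With linear coefficients the exact solution is a geometric Brownian motion, so the scheme can be sanity-checked against E[X(t)] = X(0)e^{−t}.

```python
# Euler-Maruyama sketch for dX = σ(X) dW + b(X) dt with σ(x) = 0.5x,
# b(x) = −x (illustrative coefficients).
import numpy as np

rng = np.random.default_rng(9)
sigma = lambda x: 0.5 * x
b = lambda x: -x
x0, t_final, n, n_paths = 1.0, 1.0, 200, 100_000

dt = 1.0 / n
X = np.full(n_paths, x0)
for _ in range(int(t_final * n)):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    # coefficients frozen at the last grid point η_n(s), as in X_n above
    X = X + sigma(X) * dW + b(X) * dt

mean_X = X.mean()                      # ≈ e^{−1} ≈ 0.368
```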

Prove relative compactness of {X_n}.

Show that any limit point is a solution.

The problem: relative compactness is usually in distribution.

Yamada-Watanabe: "Weak existence" and "pathwise uniqueness" imply "strong existence."


Weak and strong solutions for simple stochastic equations

Given measurable Γ : S_1 × S_2 → R and an S_2-valued random variable Y, consider the equation Γ(X, Y) = 0.

In many (most?) contexts, it is natural to specify the distribution ν ∈ P(S_2) of Y rather than a particular Y on a particular probability space.

Following the terminology of Engelbert (1991) and Jacod (1980), we refer to the joint distribution of (X, Y) as a joint solution measure. In particular, µ ∈ P(S_1 × S_2) is a joint solution measure if µ(S_1 × ·) = ν and

∫_{S_1×S_2} |Γ(x, y)| µ(dx × dy) = 0.

(Without loss of generality, we can assume that Γ is bounded.) Any joint solution measure gives a weak solution of the equation.


Strong solutions

Definition 6.1 A solution (X, Y) for (Γ, ν) is a strong solution if there exists a Borel measurable function F : S_2 → S_1 such that X = F(Y) a.s.

If a strong solution exists on some probability space, then a strongsolution exists for any Y with distribution ν.


General stochastic models

Let S_{Γ,ν} be the collection of joint solution measures for the stochastic equation Γ(X, Y) = 0. Note that S_{Γ,ν} is convex.

A stochastic model will be any set of constraints that relate two random variables (X, Y), where the distribution ν of Y (the stochastic inputs) is given. Γ will denote the set of constraints, and S_{Γ,ν} will denote the collection of µ ∈ P(S_1 × S_2) such that µ(S_1 × ·) = ν and any pair (X, Y) with distribution µ satisfies the constraints in Γ.


Example

Suppose Γ_0 is a collection of constraints of the form

E[ψ(X, Y)] < ∞ and E[f_i(X, Y)] = 0, i ∈ I,

where ψ ≥ 0 and |f_i(x, y)| ≤ ψ. Then S_{Γ_0,ν} is convex.

Let 0 ≤ c(x, y) ≤ ψ(x, y), and let Γ be the set of constraints obtained from Γ_0 by adding the requirement

∫ c(x, y) µ(dx × dy) = inf_{µ′ ∈ S_{Γ_0,ν}} ∫ c(x, y) µ′(dx × dy).

Note that S_{Γ_0,ν} and S_{Γ,ν} are convex.


Disintegration of measures

Lemma 6.2 If µ ∈ P(S_1 × S_2) and µ(S_1 × ·) = ν, then there exists a transition function η such that µ(dx × dy) = η(y, dx)ν(dy).

There exists a Borel measurable G : S_2 × [0, 1] → S_1 such that if Y has distribution ν and ξ is independent of Y and uniformly distributed on [0, 1], then (G(Y, ξ), Y) has distribution µ.

Lemma 6.3 Let S be a complete, separable metric space. There exists a measurable mapping ρ : P(S) × [0, 1] → S such that if ξ is uniform on [0, 1], then for each µ ∈ P(S), ρ(µ, ξ) has distribution µ, and for almost every u ∈ [0, 1], the mapping ν → ρ(ν, u) is continuous at µ.

Proof. See Blackwell and Dubins (1983).


Joint solution measures for strong solutions

Let S_{Γ,ν} ⊂ P(S_1 × S_2) denote the collection of joint solution measures for a stochastic model Γ with input distribution ν.

Lemma 6.4 If µ ∈ S_{Γ,ν}, then µ corresponds to a strong solution if and only if there exists a Borel measurable F : S_2 → S_1 such that η(y, dx) = δ_{F(y)}(dx) a.s. ν.

Weak existence is simply the assertion that S_{Γ,ν} is nonempty. Strong existence is the assertion that there exists µ ∈ S_{Γ,ν} of the form described in the lemma.


Notions of uniqueness

Definition 6.5 Pointwise (pathwise) uniqueness holds if X_1, X_2, and Y defined on the same probability space with µ_{X_1,Y}, µ_{X_2,Y} ∈ S_{Γ,ν} implies X_1 = X_2 a.s.

Joint uniqueness in law (or weak joint uniqueness) holds if S_{Γ,ν} contains at most one measure.

Uniqueness in law (or weak uniqueness) holds if all µ ∈ S_{Γ,ν} have the same marginal distribution on S_1.


Strong solutions and pointwise uniqueness

Lemma 6.6 Suppose S_{Γ,ν} is convex. Every solution is a strong solution if and only if pointwise uniqueness holds.

Proof. If µ_1, µ_2 ∈ S_{Γ,ν}, then µ_0 = (1/2)µ_1 + (1/2)µ_2 ∈ S_{Γ,ν}. Let Y have distribution ν and ξ be uniformly distributed on [0, 1] and independent of Y. Define

X = F_1(Y) if ξ > 1/2, X = F_2(Y) if ξ ≤ 1/2.

Then (X, Y) has distribution µ_0 and must satisfy X = F(Y) for some F. Since ξ is independent of Y, we must have F_1(Y) = F(Y) = F_2(Y) a.s.


The Yamada-Watanabe-Engelbert theorem

Kurtz (2007, 2013); cf. Yamada and Watanabe (1971), Engelbert (1991)

Theorem 6.7 The following are equivalent:

a) S_{Γ,ν} ≠ ∅ and pointwise uniqueness holds.

b) Joint uniqueness in law holds and there exists a strong solution.

Proof. Assume (a). If µ_1, µ_2 ∈ S_{Γ,ν}, then there exist G_1(y, u) and G_2(y, u) such that, for Y with distribution ν and ξ_1, ξ_2 uniform on [0, 1], all independent, (G_i(Y, ξ_i), Y) has distribution µ_i, and by pointwise uniqueness,

G_1(Y, ξ_1) = G_2(Y, ξ_2) a.s.

From the independence of ξ_1 and ξ_2, it follows that there exists F such that F(Y) = G_1(Y, ξ_1) = G_2(Y, ξ_2).

Assume (b). The unique µ ∈ S_{Γ,ν} must satisfy µ(dx × dy) = δ_{F(y)}(dx)ν(dy), implying pointwise uniqueness.


Temporal compatibility restrictions

Let E_1 and E_2 be Polish spaces, and let D_{E_i}[0, ∞) be the Skorohod space of cadlag E_i-valued functions. Let Y be a process in D_{E_2}[0, ∞). By F_t^Y we mean σ(Y(s), s ≤ t).

Definition 6.8 A process X in D_{E_1}[0, ∞) is compatible with Y if for each t ≥ 0 and h ∈ B(D_{E_2}[0, ∞)),

E[h(Y)|F_t^{X,Y}] = E[h(Y)|F_t^Y] a.s. (6.1)

(cf. (4.5) of Jacod (1980).)

Lemma 6.9 If Y has independent increments, then X is compatible with Y if Y(t + ·) − Y(t) is independent of F_t^{X,Y} for all t ≥ 0.


General compatibility restrictions

If B_α^{S_1} is a sub-σ-algebra of B(S_1) and X is an S_1-valued random variable on (Ω, F, P), then F_α^X ≡ {{X ∈ D} : D ∈ B_α^{S_1}} is the sub-σ-algebra of F generated by {h(X) : h ∈ B(B_α^{S_1})}, where B(B_α^{S_1}) is the collection of h ∈ B(S_1) that are B_α^{S_1}-measurable.

Definition 6.10 Let A be an index set, and for each α ∈ A, let B_α^{S_1} be a sub-σ-algebra of B(S_1) and B_α^{S_2} be a sub-σ-algebra of B(S_2). Let Y be an S_2-valued random variable. An S_1-valued random variable X is compatible with Y if for each α ∈ A and each h ∈ B(S_2),

E[h(Y)|F_α^X ∨ F_α^Y] = E[h(Y)|F_α^Y] a.s., (6.2)

where F_α^X ≡ {{X ∈ D} : D ∈ B_α^{S_1}} and F_α^Y ≡ {{Y ∈ D} : D ∈ B_α^{S_2}}. The collection C ≡ {(B_α^{S_1}, B_α^{S_2}) : α ∈ A} will be referred to as a compatibility structure.


Martingale property

Lemma 6.11 If A is partially ordered and {F_α^X, α ∈ A} and {F_α^Y, α ∈ A} are filtrations (α_1 ≺ α_2 implies F_{α_1}^X ⊂ F_{α_2}^X and F_{α_1}^Y ⊂ F_{α_2}^Y), then compatibility is equivalent to the assertion that every {F_α^Y, α ∈ A}-martingale is an {F_α^Y ∨ F_α^X, α ∈ A}-martingale.

In the temporally ordered setting, Buckdahn et al. (2004) employ a similar martingale assumption.


Compatibility by conditioning functions of X

Lemma 6.12 X is compatible with Y if and only if for each α ∈ A and each g ∈ B(B_α^{S_1}),

E[g(X)|Y] = E[g(X)|F_α^Y] a.s. (6.3)

Proof. Suppose that X is compatible with Y. Then for f ∈ B(S_2) and g ∈ B(B_α^{S_1}),

E[f(Y)g(X)] = E[E[f(Y)|F_α^X ∨ F_α^Y] g(X)]

= E[E[f(Y)|F_α^Y] g(X)]

= E[E[f(Y)|F_α^Y] E[g(X)|F_α^Y]]

= E[f(Y) E[g(X)|F_α^Y]],

and (6.3) follows. Conversely, for f ∈ B(S_2), g ∈ B(B_α^{S_1}), and h ∈ B(B_α^{S_2}), we have

E[E[f(Y)|F_α^Y] g(X) h(Y)] = E[E[f(Y)|F_α^Y] E[g(X)|F_α^Y] h(Y)]

= E[f(Y) E[g(X)|Y] h(Y)]

= E[f(Y) g(X) h(Y)],

and compatibility follows.


Compatibility is a distributional property

Note that (6.2) is equivalent to requiring that for each h ∈ B(S_2),

inf_{f ∈ B(B_α^{S_1} × B_α^{S_2})} E[(h(Y) − f(X, Y))²] = inf_{f ∈ B(B_α^{S_2})} E[(h(Y) − f(Y))²], (6.4)

so compatibility is a property of the joint distribution of (X, Y).


Ordinary stochastic differential equations

U a process in D_{R^d}[0,∞)

V an R^m-valued semimartingale with respect to the filtration {F^{U,V}_t}

H : D_{R^d}[0,∞) → D_{M^{d×m}}[0,∞) Borel measurable satisfying H(x,t) = H(x(· ∧ t), t).

Then X is a solution of

X(t) = U(t) + ∫_0^t H(X, s−) dV(s)

if X is temporally compatible with Y = (U,V) and

lim_{n→∞} E[1 ∧ |X(t) − U(t) − Σ_k H(X, k/n)(V((k+1)/n ∧ t) − V(k/n ∧ t))|] = 0,  t ≥ 0.

But see Karandikar (1995).


Joint compatibility

X_1, X_2, and Y defined on the same probability space; X_1 and X_2 S_1-valued and Y S_2-valued.

(X_1, X_2) is jointly compatible with Y if

E[f(Y)|F^{X_1}_α ∨ F^{X_2}_α ∨ F^Y_α] = E[f(Y)|F^Y_α],  α ∈ A, f ∈ B(S_2).

Pointwise uniqueness for jointly compatible solutions holds if for every triple of random variables (X_1, X_2, Y) defined on the same probability space such that μ_{X_1,Y}, μ_{X_2,Y} ∈ S_{Γ,C,ν} and (X_1, X_2) is jointly compatible with Y, we have

X_1 = X_2 a.s.


Pointwise uniqueness

Lemma 6.13 Pointwise uniqueness for jointly compatible solutions in S_{Γ,C,ν} is equivalent to pointwise uniqueness in S_{Γ,C,ν}.

Recall that for μ_1, μ_2 ∈ S_{Γ,C,ν} and Y, ξ_1, and ξ_2 independent, Y with distribution ν and ξ_1 and ξ_2 uniform on [0,1], there exist G_1 : S_2 × [0,1] → S_1 and G_2 : S_2 × [0,1] → S_1 such that (G_1(Y,ξ_1), Y) has distribution μ_1 and (G_2(Y,ξ_2), Y) has distribution μ_2.

The lemma above is a consequence of the following.

Lemma 6.14 If μ_1, μ_2 ∈ S_{Γ,C,ν} and (G_1(Y,ξ_1), Y) has distribution μ_1 and (G_2(Y,ξ_2), Y) has distribution μ_2, where ξ_1 and ξ_2 are independent and independent of Y, then (G_1(Y,ξ_1), G_2(Y,ξ_2)) is jointly compatible with Y.


Proof of Lemma 6.14

For f ∈ B(S_1^{B_α}), by Lemma 6.12,

E[f(G_1(Y,ξ_1))|Y, ξ_2] = E[f(G_1(Y,ξ_1))|Y] = E[f(G_1(Y,ξ_1))|F^Y_α].

Consequently, for f ∈ B(S_2), g_1, g_2 ∈ B(S_1^{B_α}), and h ∈ B(S_2^{B_α}), writing X_i = G_i(Y,ξ_i),

E[f(Y) g_1(X_1) g_2(X_2) h(Y)]
  = E[f(Y) E[g_1(X_1)|Y, ξ_2] g_2(X_2) h(Y)]
  = E[f(Y) E[g_1(X_1)|F^Y_α] g_2(X_2) h(Y)]
  = E[E[f(Y)|F^{X_2}_α ∨ F^Y_α] E[g_1(X_1)|F^Y_α] g_2(X_2) h(Y)]
  = E[E[f(Y)|F^Y_α] E[g_1(X_1)|Y, ξ_2] g_2(X_2) h(Y)]
  = E[E[f(Y)|F^Y_α] g_1(X_1) g_2(X_2) h(Y)],

giving the joint compatibility.


Stochastic equations for Markov chains

Specify a continuous-time Markov chain by specifying the intensities of its possible jumps:

P{X(t + Δt) = X(t) + ζ_k | F^X_t} ≈ β_k(X(t)) Δt.

Given the intensities, the Markov chain satisfies

X(t) = X(0) + Σ_k Y_k(∫_0^t β_k(X(s)) ds) ζ_k,

where the Y_k are independent unit Poisson processes. (Assume that there are only finitely many ζ_k.)
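The time-change representation above can be simulated directly; the sketch below uses Gillespie's algorithm, which produces a chain with the same law (hold an exponential time with the total intensity, then pick a jump with probability proportional to its intensity). The immigration-death rates at the end are illustrative choices, not from the text.

```python
import random

def simulate_chain(x0, jumps, rates, t_max, seed=0):
    """Simulate a continuous-time Markov chain with jump vectors `jumps` and
    intensity functions `rates`; equal in law to the time-change representation
    X(t) = X(0) + sum_k Y_k(int_0^t beta_k(X(s)) ds) zeta_k."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        lam = [r(x) for r in rates]          # current intensities beta_k(x)
        total = sum(lam)
        if total == 0.0:
            break                            # absorbed: no further jumps
        t += rng.expovariate(total)          # holding time to the next jump
        if t >= t_max:
            break
        u, acc, k = rng.random() * total, 0.0, 0
        for k, l in enumerate(lam):          # choose jump k w.p. lam[k]/total
            acc += l
            if u <= acc:
                break
        x += jumps[k]
        path.append((t, x))
    return path

# Hypothetical example: immigration-death chain, births at rate 2,
# deaths at rate x, so zeta_1 = +1, zeta_2 = -1.
path = simulate_chain(0, [1, -1], [lambda x: 2.0, lambda x: float(x)], 50.0)
```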


Diffusion limits??

Possible scaling limits of the form

X_n(t) = X_n(0) + (1/n) Σ_k Ỹ_k(n² ∫_0^t β_k^n(X_n(s)) ds) ζ_k + ∫_0^t F^n(X_n(s)) ds,

where Ỹ_k(u) = Y_k(u) − u and F^n(x) = Σ_k n β_k^n(x) ζ_k.

Note that (1/n) Ỹ_k(n² ·) ≈ W_k.

Assuming X_n(0) → X(0), β_k^n → β_k, and F^n → F, we might expect a limit satisfying

X(t) = X(0) + Σ_k W_k(∫_0^t β_k(X(s)) ds) ζ_k + ∫_0^t F(X(s)) ds.

Kurtz (1980); Ethier and Kurtz (1986), Section 6.5.
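The approximation (1/n) Ỹ_k(n² ·) ≈ W_k can be checked numerically: by the central limit theorem, (1/n)(Y(n²t) − n²t) is approximately N(0, t), matching W(t). A small Monte Carlo sketch (the values of n and the sample size are arbitrary choices):

```python
import random

def centered_scaled(n, t, rng):
    """Sample (1/n)(Y(n^2 t) - n^2 t) for a unit Poisson process Y, by
    counting unit-exponential arrivals in [0, n^2 t]."""
    mean = n * n * t
    s, k = rng.expovariate(1.0), 0
    while s < mean:
        k += 1
        s += rng.expovariate(1.0)
    return (k - mean) / n

rng = random.Random(0)
samples = [centered_scaled(20, 1.0, rng) for _ in range(2000)]
m = sum(samples) / len(samples)
v = sum((z - m) ** 2 for z in samples) / len(samples)
# by the CLT, sample mean ~ 0 and sample variance ~ t = 1, as for W(1)
```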


Uniqueness question

X(t) = X(0) + Σ_k W_k(∫_0^t β_k(X(s)) ds) ζ_k + ∫_0^t F(X(s)) ds

Let τ_k(t) = ∫_0^t β_k(X(s)) ds and γ(t) = ∫_0^t F(X(s)) ds. Then

τ_l′(t) = β_l(X(0) + Σ_k W_k(τ_k(t)) ζ_k + γ(t))
γ′(t) = F(X(0) + Σ_k W_k(τ_k(t)) ζ_k + γ(t))

Problem: Find conditions under which pathwise uniqueness holds.


Compatibility for multiple time-changes

X(t) = X(0) + Σ_{k=1}^m W_k(∫_0^t β_k(X(s)) ds) ζ_k + ∫_0^t F(X(s)) ds

Set τ_k(t) = ∫_0^t β_k(X(s)) ds, and for α ∈ [0,∞)^m, define

F^Y_α = σ(W_k(s_k) : s_k ≤ α_k, k = 1, 2, . . .) ∨ σ(X(0))

and

F^X_α = σ({τ_1(t) ≤ s_1}, {τ_2(t) ≤ s_2}, . . . : s_i ≤ α_i, i = 1, 2, . . . , t ≥ 0).

If X is a compatible solution, then the W_k(∫_0^t β_k(X(s)) ds), k = 1, . . . , m, are martingales with respect to the same filtration, and hence X is a solution of the martingale problem for

Af(x) = (1/2) Σ_{i,j} a_{ij}(x) ∂_i ∂_j f(x) + F(x) · ∇f(x),  a(x) = Σ_k β_k(x) ζ_k ζ_k^T.


Two-dimensional case

For i = 1, 2, W_i standard Brownian motions; β_i : R² → (0,∞), bounded.

X_1(t) = W_1(∫_0^t β_1(X(s)) ds)   X_2(t) = W_2(∫_0^t β_2(X(s)) ds)

or equivalently

τ_i′(t) = β_i(W_1(τ_1(t)), W_2(τ_2(t))),  i = 1, 2.

A strong, compatible solution exists, and weak uniqueness holds by Stroock-Varadhan, so pathwise uniqueness holds.


7. Stochastic equations for Markov processes in R^d

• Markov property

• Gaussian white noise

• Poisson random measure

• Pure jump processes

• Generators in R^d

• Stochastic equations for Markov processes

• Martingale problems

• Conditions for uniqueness

• Uniqueness and the Markov property


Markov chains

Markov property: "Given the present, the future is independent of the past," or "the future is a function of the present and inputs independent of the past."

Discrete time: X_0, ξ_1, ξ_2, . . . independent,

X_{n+1} = H_{n+1}(X_n, ξ_{n+1}).

By iteration,

X_{n+1} = H_{k,n+1}(X_k, ξ_{k+1}, . . . , ξ_{n+1}).


Continuous time: Processes with independent increments

Replace the sequence {ξ_k} by a process ξ with independent increments:

X(s) = H_{t,s}(X(t), ξ(s ∧ (t + ·)) − ξ(t)).

Essentially two choices for ξ: standard Brownian motion and a Poisson process.


Gaussian white noise integral

μ_0 a σ-finite Borel measure on (S_0, r_0)

A(S_0) = {A ∈ B(S_0) : μ_0(A) < ∞}

W(A × [0,t]) normal with E[W(A × [0,t])] = 0 and Var(W(A × [0,t])) = μ_0(A) t, and

E[W(A × [0,t]) W(B × [0,s])] = μ_0(A ∩ B) t ∧ s.

Then W(∪_i A_i × [0,t]) = Σ_i W(A_i × [0,t]) for disjoint A_i, and

Z(t) = ∫_{S_0×[0,t]} Y(u,s) W(du × ds)

satisfies

E[Z(t)] = 0,  [Z]_t = ∫_0^t ∫_{S_0} Y²(u,s) μ_0(du) ds,

E[Z²(t)] = E[[Z]_t] = ∫_0^t ∫_{S_0} E[Y²(u,s)] μ_0(du) ds.
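A discretized toy version of the isometry E[Z²(t)] = ∫∫ E[Y²] μ_0(du) ds: take S_0 finite with counting measure μ_0, so W reduces to finitely many independent Brownian motions, here approximated by Gaussian increments. All numerical choices below are illustrative.

```python
import random

def white_noise_integral(T, nsteps, ys, seed):
    """One sample of Z(T) = sum_u y_u * W_u(T) with S_0 = {0..K-1}, mu_0 =
    counting measure, and deterministic integrand Y(u, s) = ys[u]; each W_u
    is built from independent N(0, dt) increments."""
    rng = random.Random(seed)
    dt = T / nsteps
    z = 0.0
    for _ in range(nsteps):
        for y in ys:
            z += y * rng.gauss(0.0, dt ** 0.5)   # Y(u, s) W(du x ds)
    return z

# Isometry check: Var Z(T) = T * sum_u y_u^2 = 1 * (1 + 4) = 5 here.
T, ys, m = 1.0, [1.0, 2.0], 4000
samples = [white_noise_integral(T, 20, ys, seed=i) for i in range(m)]
mean = sum(samples) / m
var = sum((z - mean) ** 2 for z in samples) / m
```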


Space-time Poisson random measures

μ_1 a σ-finite Borel measure on (U, r_1)

ξ a Poisson random measure on U × [0,∞) with mean measure μ_1 × ℓ (where ℓ denotes Lebesgue measure).

ξ(A, t) ≡ ξ(A × [0,t]) is a Poisson process with parameter μ_1(A).

ξ̃(A, t) ≡ ξ(A × [0,t]) − μ_1(A) t is a martingale.

Definition 7.1 ξ is {F_t}-compatible if, for each A ∈ A(U), ξ(A, ·) is {F_t}-adapted and, for all t, s ≥ 0, ξ(A × (t, t+s]) is independent of F_t. (See Lemma 6.9.)


Stochastic integrals for Poisson random measures

For t_i < r_i, A_i ∈ B(U), and η_i F_{t_i}-measurable, let

X(u,t) = Σ_i η_i 1_{A_i}(u) 1_{[t_i, r_i)}(t), so X(u,t−) = Σ_i η_i 1_{A_i}(u) 1_{(t_i, r_i]}(t). (7.1)

Define

I_ξ(X, t) = ∫_{U×[0,t]} X(u, s−) ξ(du × ds) = Σ_i η_i ξ(A_i × (t_i, r_i]).

Then

E[|I_ξ(X,t)|] ≤ E[∫_{U×[0,t]} |X(u,s−)| ξ(du × ds)] = ∫_{U×[0,t]} E[|X(u,s)|] μ_1(du) ds,

and if the right side is finite, E[I_ξ(X,t)] = ∫_{U×[0,t]} E[X(u,s)] μ_1(du) ds.
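For a simple integrand with deterministic η_i and disjoint A_i, the counts ξ(A_i × (t_i, r_i]) are independent Poisson variables with means μ_1(A_i)(r_i − t_i), so the mean formula can be checked by Monte Carlo. The particular terms below are made up for illustration.

```python
import random

def poisson_sample(mean, rng):
    """Poisson(mean) via counting unit-exponential arrivals in [0, mean]."""
    t, k = rng.expovariate(1.0), 0
    while t < mean:
        k += 1
        t += rng.expovariate(1.0)
    return k

def integral_sample(terms, rng):
    """One sample of I_xi(X, t) = sum_i eta_i * xi(A_i x (t_i, r_i]) for a
    simple integrand with deterministic eta_i and disjoint A_i; `terms` is a
    list of (eta_i, mu1(A_i), r_i - t_i)."""
    return sum(eta * poisson_sample(mu_a * dur, rng)
               for eta, mu_a, dur in terms)

rng = random.Random(1)
terms = [(2.0, 0.5, 1.0), (-1.0, 2.0, 0.7)]   # hypothetical simple integrand
m = 5000
avg = sum(integral_sample(terms, rng) for _ in range(m)) / m
expected = sum(eta * mu_a * dur for eta, mu_a, dur in terms)  # 1.0 - 1.4 = -0.4
```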


Stochastic integrals for centered Poisson random measures

Let ξ̃(du × ds) = ξ(du × ds) − μ_1(du) ds.

For

X(u,t−) = Σ_i η_i 1_{A_i}(u) 1_{(t_i, r_i]}(t)

as in (7.1), define

I_{ξ̃}(X, t) = ∫_{U×[0,t]} X(u, s−) ξ̃(du × ds) = ∫_{U×[0,t]} X(u, s−) ξ(du × ds) − ∫_0^t ∫_U X(u,s) μ_1(du) ds,

and note that

E[I_{ξ̃}(X, t)²] = ∫_{U×[0,t]} E[X(u,s)²] μ_1(du) ds

if the right side is finite. Then I_{ξ̃}(X, ·) is a square-integrable martingale.


An equation for a pure jump process

Consider a generator of the form

Af(x) = λ(x) ∫_{R^d} (f(x+y) − f(x)) η(x, dy)

for a process in R^d. There exists a function H : R^d × [0,1] → R^d such that for V uniform on [0,1], H(x, V) has distribution η(x, ·).

Let ξ be a Poisson random measure on [0,∞) × [0,1] × [0,∞) with mean measure ℓ × ℓ × ℓ. Then the solution of

X(t) = X(0) + ∫_{[0,∞)×[0,1]×[0,t]} 1_{[0,λ(X(s−))]}(v) H(X(s−), u) ξ(dv × du × ds)

is a Markov process with generator A.
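When λ is bounded, the indicator 1_{[0,λ(X(s−))]}(v) amounts to thinning a homogeneous stream of candidate jump times: candidates arrive at rate λ_max ≥ sup λ, and a candidate at state x is accepted with probability λ(x)/λ_max, in which case X jumps by H(x, U). A minimal simulation sketch; the rate and jump kernel at the end are hypothetical.

```python
import random

def pure_jump_path(x0, lam, H, lam_max, t_max, seed=0):
    """Simulate the pure jump process driven by the Poisson random measure
    above, by thinning a rate-lam_max Poisson clock (requires lam <= lam_max)."""
    rng = random.Random(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += rng.expovariate(lam_max)        # next candidate jump time
        if t >= t_max:
            break
        if rng.random() * lam_max <= lam(x): # v <= lam(X(s-)): accept
            x += H(x, rng.random())          # jump of size H(X(s-), u)
            path.append((t, x))
    return path

# Hypothetical example: jump rate lam(x) = 1/(1+x^2) <= 1, jumps uniform on
# [-1, 1], i.e. H(x, u) = 2u - 1.
path = pure_jump_path(0.0, lambda x: 1.0 / (1.0 + x * x),
                      lambda x, u: 2.0 * u - 1.0, 1.0, 100.0)
```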


Markov processes in R^d

Typically, a Markov process X in R^d has a generator of the form

Af(x) = (1/2) Σ_{i,j=1}^d a_{ij}(x) ∂²f(x)/∂x_i∂x_j + b(x) · ∇f(x) + ∫_{R^d} (f(x+y) − f(x) − 1_{B_1}(y) y · ∇f(x)) η(x, dy),

where B_1 is the ball of radius 1 centered at the origin and η satisfies ∫ 1 ∧ |y|² η(x, dy) < ∞ for each x. (See, for example, Stroock (1975); Cinlar, Jacod, Protter, and Sharpe (1980).)

η(x, Γ) gives the "rate" at which jumps satisfying X(s) − X(s−) ∈ Γ occur.

B_1 can be replaced by any set C containing an open neighborhood of the origin, provided the drift term is replaced by

b_C(x) · ∇f(x) = (b(x) + ∫_{R^d} y(1_C(y) − 1_{B_1}(y)) η(x, dy)) · ∇f(x).


A representation for η

We will assume that there exist λ : R^d × S → [0,1], γ : R^d × S → R^d, and a σ-finite measure μ_1 on a complete, separable metric space S such that

η(x, Γ) = ∫_S λ(x,u) 1_Γ(γ(x,u)) μ_1(du).

This representation is always possible, but in no way unique.


Reformulation of the generator

For simplicity, assume that there exists a fixed set S_1 ∈ B(S) such that, for S_2 = S − S_1,

∫_S λ(x,u)(1_{S_1}(u) |γ(x,u)|² + 1_{S_2}(u) |γ(x,u)|) μ_1(du) < ∞

and

∫_S λ(x,u) |γ(x,u)| |1_{S_1}(u) − 1_{B_1}(γ(x,u))| μ_1(du) < ∞.

Then

Af(x) = (1/2) Σ_{i,j=1}^d a_{ij}(x) ∂²f(x)/∂x_i∂x_j + b̃(x) · ∇f(x) + ∫_S λ(x,u)(f(x + γ(x,u)) − f(x) − 1_{S_1}(u) γ(x,u) · ∇f(x)) μ_1(du),

where

b̃(x) = b(x) + ∫_S λ(x,u) γ(x,u)(1_{S_1}(u) − 1_{B_1}(γ(x,u))) μ_1(du).


Technical assumptions

For each compact K ⊂ R^d,

sup_{x∈K} (|b(x)| + ∫_{S_0} |σ(x,u)|² μ_0(du) + ∫_{S_1} λ(x,u) |γ(x,u)|² μ_1(du) + ∫_{S_2} λ(x,u)(|γ(x,u)| ∧ 1) μ_1(du)) < ∞,

a(x) = ∫_{S_0} σ(x,u) σ(x,u)^T μ_0(du).

Let D(A) = C_c²(R^d) and assume that for f ∈ D(A), Af ∈ C_b(R^d). The continuity assumption can be removed and the boundedness assumption relaxed using existing technology.

For x outside the support of f,

Af(x) = ∫_S λ(x,u) f(x + γ(x,u)) μ_1(du).


Ito equations

X should satisfy a stochastic differential equation of the form

X(t) = X(0) + ∫_{S_0×[0,t]} σ(X(s), u) W(du × ds) + ∫_0^t b(X(s)) ds (7.2)
     + ∫_{[0,1]×S_1×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
     + ∫_{[0,1]×S_2×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(dv × du × ds),

for t < τ_∞ ≡ lim_{k→∞} inf{t : |X(t−)| or |X(t)| ≥ k}, where W is Gaussian white noise determined by μ_0 and ξ is a Poisson random measure on [0,1] × S × [0,∞) with mean measure ℓ × μ_1 × ℓ (ξ̃ the centered measure).
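A crude Euler scheme illustrates a scalar, finite-activity special case of (7.2): no compensated small-jump part, one-dimensional noise, and at most one candidate jump per step (valid when λ_max · dt is small). This is a sketch under those simplifications, with made-up coefficients, not a general implementation of (7.2).

```python
import random

def euler_jump_diffusion(x0, b, sigma, lam, gamma, lam_max, T, nsteps, seed=0):
    """Euler-type scheme for dX = b(X) dt + sigma(X) dW + jumps of size
    gamma(X-, U) occurring at rate lam(X-); the jump part is handled by
    thinning a rate-lam_max Poisson clock, mirroring the indicator
    1_{[0, lam(X(s-), u)]}(v) in the equation."""
    rng = random.Random(seed)
    dt = T / nsteps
    x = x0
    for _ in range(nsteps):
        x_minus = x
        x += b(x_minus) * dt + sigma(x_minus) * rng.gauss(0.0, dt ** 0.5)
        if rng.random() < lam_max * dt:          # candidate jump this step
            if rng.random() * lam_max <= lam(x_minus):   # accept: v <= lam
                x += gamma(x_minus, rng.random())
    return x

# Hypothetical coefficients: mean-reverting drift, constant diffusion,
# bounded jump rate, centered uniform jumps.
xT = euler_jump_diffusion(1.0, lambda x: -x, lambda x: 0.3,
                          lambda x: 0.5 / (1.0 + x * x),
                          lambda x, u: u - 0.5, 0.5, 5.0, 500)
```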


Martingale properties/problems

Assume that τ_∞ = ∞. Applying Ito's formula,

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds
  = ∫_{S_0×[0,t]} ∇f(X(s))^T σ(X(s), u) W(du × ds)
  + ∫_{[0,1]×S×[0,t]} 1_{[0,λ(X(s−),u)]}(v)(f(X(s−) + γ(X(s−), u)) − f(X(s−))) ξ̃(dv × du × ds).

The right side is a local martingale, so under the assumption that Af is bounded, it is a martingale.

Definition 7.2 X is a solution of the martingale problem for A if there exists a filtration {F_t} such that

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds (7.3)

is an {F_t}-martingale for each f ∈ D(A).


Conditions for uniqueness

In Ito (1951), as well as in later presentations (for example, Skorokhod (1965) and Ikeda and Watanabe (1989)), L²-estimates are used to prove uniqueness for (7.2).

Graham (1992) points out the possibility and desirability of using L¹-estimates. (In fact, for equations controlling jump rates with factors like 1_{[0,λ(X(t),u)]}(v), L¹-estimates are essential.)

Kurtz and Protter (1996) develop methods that allow a mixing of L¹, L², and other estimates. The uniqueness proof given here uses only L¹-estimates.


Theorem 7.3 Suppose there exists a constant M such that

|b(x)| + ∫_{S_0} |σ(x,u)|² μ_0(du) + ∫_{S_1} λ(x,u) |γ(x,u)|² μ_1(du) + ∫_{S_2} λ(x,u) |γ(x,u)| μ_1(du) ≤ M, (7.4)

and

(∫_{S_0} |σ(x,u) − σ(y,u)|² μ_0(du))^{1/2} ≤ M|x − y| (7.5)
|b(x) − b(y)| ≤ M|x − y| (7.6)
∫_{S_1} |γ(x,u) − γ(y,u)|² (λ(x,u) ∧ λ(y,u)) μ_1(du) ≤ M|x − y|² (7.7)
∫_{S_2} λ(x,u) |γ(x,u) − γ(y,u)| μ_1(du) ≤ M|x − y| (7.8)
∫_S |λ(x,u) − λ(y,u)| |γ(y,u)| μ_1(du) ≤ M|x − y|. (7.9)

Then there exists a unique solution of (7.2).


Proof. Suppose X and Y are solutions of (7.2). Then

X(t) = X(0) + ∫_{S_0×[0,t]} σ(X(s), u) W(du × ds) + ∫_0^t b(X(s)) ds (7.10)
     + ∫_{[0,1]×S_1×[0,t]} 1_{[0,λ(X(s−),u)∧λ(Y(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
     + ∫_{[0,1]×S_1×[0,t]} 1_{(λ(X(s−),u)∧λ(Y(s−),u), λ(X(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
     + ∫_{[0,1]×S_2×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(dv × du × ds),

and similarly with the roles of X and Y interchanged. Then (7.5) and (7.6) give the necessary Lipschitz conditions for the coefficient functions in the first two integrals on the right, (7.7) gives an L²-Lipschitz condition for the third integral term, and (7.8) and (7.9) give L¹-Lipschitz conditions for the fourth and fifth integral terms on the right. Theorem 7.1 of Kurtz and Protter (1996) gives uniqueness.


Extension to unbounded coefficients

Corollary 7.4 Suppose that there exists a function M(r), defined for r > 0, such that (7.4) through (7.9) hold with M replaced by M(|x| ∨ |y|). Then there exist a stopping time τ_∞ and a process X(t) defined for t ∈ [0, τ_∞) such that (7.2) is satisfied on [0, τ_∞) and τ_∞ = lim_{k→∞} inf{t : |X(t)| or |X(t−)| ≥ k}. If (Y, τ) also has this property, then τ = τ_∞ and Y(t) = X(t), t < τ_∞.

Proof. The corollary follows by a standard localization argument.


A martingale inequality

If M is a local square-integrable martingale, there exists a nondecreasing predictable process ⟨M⟩ such that M(t)² − ⟨M⟩_t is a local martingale. In particular, [M] − ⟨M⟩ is a local martingale. Recall that a left-continuous, adapted process is predictable.

Lemma 7.5 For 0 < p ≤ 2 there exists a constant C_p such that for any local square-integrable martingale M with Meyer process ⟨M⟩ and any stopping time τ,

E[sup_{s≤τ} |M(s)|^p] ≤ C_p E[⟨M⟩_τ^{p/2}].

The discrete-time version is due to Burkholder (1973) and the continuous-time version to Lenglart, Lepingle, and Pratelli (1980). The following proof is due to Ichikawa (1986).


Proof. For p = 2 the result is an immediate consequence of Doob's inequality. Let 0 < p < 2. For x > 0, let σ_x = inf{t : ⟨M⟩_t > x²}. Since σ_x is predictable, there exists a strictly increasing sequence of stopping times σ_x^n → σ_x. Noting that ⟨M⟩_{σ_x^n} ≤ x², we have

P{sup_{s≤τ} |M(s)| > x} ≤ P{σ_x^n < τ} + P{sup_{s≤τ∧σ_x^n} |M(s)| > x}
  ≤ P{σ_x^n < τ} + E[⟨M⟩_{τ∧σ_x^n}]/x²
  ≤ P{σ_x^n < τ} + E[x² ∧ ⟨M⟩_τ]/x²,

and letting n → ∞, we have

P{sup_{s≤τ} |M(s)| > x} ≤ P{⟨M⟩_τ ≥ x²} + E[x² ∧ ⟨M⟩_τ]/x². (7.11)

Using the identity ∫_0^∞ E[x² ∧ X²] p x^{p−3} dx = (2/(2−p)) E[|X|^p], the lemma follows by multiplying both sides of (7.11) by p x^{p−1} and integrating.
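The identity used in the last step can be checked numerically for a deterministic X ≡ a, where E[x² ∧ X²] = x² ∧ a² and the right side is (2/(2−p)) a^p; the quadrature parameters below are arbitrary.

```python
def lhs(a, p, xmax=200.0, n=200000):
    """Midpoint-rule approximation of int_0^xmax min(x^2, a^2) * p * x^(p-3) dx;
    for 0 < p < 2 the tail beyond xmax is O(a^2 * xmax^(p-2)), negligible when
    xmax >> a."""
    h = xmax / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += min(x * x, a * a) * p * x ** (p - 3.0)
    return total * h

a, p = 1.5, 1.0
approx = lhs(a, p)
exact = 2.0 / (2.0 - p) * a ** p   # the identity with X = a a.s.
```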


Computation of Meyer process

Let ξ be a Poisson random measure on U × [0,∞) with mean measure ν × ℓ (ξ̃ the centered measure), and let W be a Gaussian white noise on S_0 × [0,∞) determined by μ_0 × ℓ. Assume that ξ and W are compatible with a filtration {F_t}.

If X is predictable and ∫_{U×[0,t]} X²(u,s) ν(du) ds < ∞ a.s., then M(t) = ∫_{U×[0,t]} X(u,s) ξ̃(du × ds) is a local square-integrable martingale with

⟨M⟩_t = ∫_{U×[0,t]} X²(u,s) ν(du) ds.

If X is adapted and ∫_0^t ∫_{S_0} X(u,s)² μ_0(du) ds < ∞ a.s., then M(t) = ∫_{S_0×[0,t]} X(u,s) W(du × ds) is a local square-integrable martingale with

⟨M⟩_t = ∫_0^t ∫_{S_0} X(u,s)² μ_0(du) ds.


Estimate for solutions of (7.10)

E[sup_{s≤t} |X(s) − Y(s)|]
  ≤ E[|X(0) − Y(0)|] + C_1 E[(∫_0^t ∫_{S_0} |σ(X(s),u) − σ(Y(s),u)|² μ_0(du) ds)^{1/2}]
  + C_1 E[(∫_0^t ∫_{S_1} |γ(X(s),u) − γ(Y(s),u)|² (λ(X(s),u) ∧ λ(Y(s),u)) μ_1(du) ds)^{1/2}]
  + 2 E[∫_0^t ∫_{S_1} |λ(X(s),u) − λ(Y(s),u)| (|γ(X(s),u)| + |γ(Y(s),u)|) μ_1(du) ds]
  + E[∫_0^t ∫_{S_2} λ(X(s),u) |γ(X(s),u) − γ(Y(s),u)| μ_1(du) ds]
  + E[∫_0^t ∫_{S_2} |λ(X(s),u) − λ(Y(s),u)| |γ(Y(s),u)| μ_1(du) ds]
  + E[∫_0^t |b(X(s)) − b(Y(s))| ds]
  ≤ E[|X(0) − Y(0)|] + D(√t + t) E[sup_{s≤t} |X(s) − Y(s)|]


For t small enough that D(√t + t) ≤ 1/2,

E[sup_{s≤t} |X(s) − Y(s)|] ≤ 2 E[|X(0) − Y(0)|].


Uniqueness and the Markov property

If X is a solution of (7.2), then

X(t) = X(r) + ∫_{S_0×[r,t]} σ(X(s), u) W(du × ds) + ∫_r^t b(X(s)) ds (7.12)
     + ∫_{[0,1]×S_1×[r,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
     + ∫_{[0,1]×S_2×[r,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(dv × du × ds).

Uniqueness implies X(r) is independent of W(· + r) − W(r) and ξ(A × (r, ·]), and that {X(t), t ≥ r} is determined by X(r), W(· + r) − W(r), and ξ(A × (r, ·]), which gives the Markov property.


8. Convergence for Markov processes characterized by martingale problems

• Tightness estimates based on generators

• Convergence of processes based on convergence of generators

• Averaging

• Derivation of Fleming-Viot


Compactness conditions based on compactness of real functions

Compact containment condition: For each T > 0 and ε > 0, there exists a compact K ⊂ E such that

lim inf_{n→∞} P{X_n(s) ∈ K, s ≤ T} ≥ 1 − ε.

Theorem 8.1 Let D be dense in C(E) in the compact uniform topology. {X_n} is relatively compact (in distribution in D_E[0,∞)) if and only if {X_n} satisfies the compact containment condition and {f ∘ X_n} is relatively compact for each f ∈ D. (Actually, it is enough to check for a linear space that separates points; Jakubowski (1986).)

Note that

E[(f(X_n(t+u)) − f(X_n(t)))² | F^n_t]
  = E[f²(X_n(t+u)) − f²(X_n(t)) | F^n_t] − 2 f(X_n(t)) E[f(X_n(t+u)) − f(X_n(t)) | F^n_t].


Limits of generators

X_n(t) = (1/√n)(N_b(nt) − N_d(nt)),

where N_b, N_d are independent unit Poisson processes.

For f ∈ C_c³(R),

A_n f(x) = 2n((f(x + 1/√n) + f(x − 1/√n))/2 − f(x)) = f″(x) + O(1/√n).

Set Af = f″. Then

E[(f(X_n(t+u)) − f(X_n(t)))² | F^n_t]
  = E[f²(X_n(t+u)) − f²(X_n(t)) | F^n_t] − 2 f(X_n(t)) E[f(X_n(t+u)) − f(X_n(t)) | F^n_t]
  ≤ u(‖A_n f²‖ + 2‖f‖ ‖A_n f‖).
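The expansion A_n f = f″ + O(1/√n) can be checked numerically; for an even-symmetric test function like cos the odd-order terms cancel and the error is in fact O(1/n). The test point and values of n are arbitrary.

```python
import math

def A_n(f, x, n):
    """Generator of X_n = (1/sqrt(n))(N_b(nt) - N_d(nt)): jumps of +-1/sqrt(n),
    each at rate n."""
    h = 1.0 / math.sqrt(n)
    return n * (f(x + h) - f(x)) + n * (f(x - h) - f(x))

# A_n f -> f'': for f = cos, f''(x) = -cos(x)
x = 0.7
errs = [abs(A_n(math.cos, x, n) - (-math.cos(x))) for n in (10, 1000, 100000)]
```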


It follows that X_n is relatively compact in D_{R^Δ}[0,∞); or, using the fact that

P{sup_{s≤t} |X_n(s)| ≥ k} ≤ 4 E[X_n²(t)]/k² = 4t/k²,

the compact containment condition holds and X_n is relatively compact in D_R[0,∞).


Limits of martingales

Lemma 8.2 For each n = 1, 2, . . ., let M_n and Z_n be cadlag stochastic processes, and let M_n be an {F^{(M_n,Z_n)}_t}-martingale. Suppose that (M_n, Z_n) ⇒ (M, Z). If for each t ≥ 0, {M_n(t)} is uniformly integrable, then M is an {F^{(M,Z)}_t}-martingale.

Recall that if a sequence of real-valued random variables {ψ_n} is uniformly integrable and ψ_n ⇒ ψ, then E[ψ_n] → E[ψ].

It follows that if X is the limit of a subsequence of {X_n}, then

f(X_n(t)) − ∫_0^t A_n f(X_n(s)) ds ⇒ f(X(t)) − ∫_0^t Af(X(s)) ds

(along the subsequence) and X is a solution of the martingale problem for A.


Elementary convergence theorem

E compact; E_n, n = 1, 2, . . ., complete, separable metric spaces; η_n : E_n → E.

Y_n Markov in E_n with generator A_n; X_n = η_n(Y_n) cadlag.

A ⊂ C(E) × C(E) (for simplicity, we write Af = g if (f, g) ∈ A); D(A) = {f : (f, g) ∈ A}.

For each (f, g) ∈ A, there exist f_n ∈ D(A_n) such that

sup_{x∈E_n} (|f_n(x) − f ∘ η_n(x)| + |A_n f_n(x) − g ∘ η_n(x)|) → 0.

THEN {X_n} is relatively compact and any limit point is a solution of the martingale problem for A.


Reflecting random walk

E_n = {0, 1/√n, 2/√n, . . .}

A_n f(x) = nλ(f(x + 1/√n) − f(x)) + nλ 1_{{x>0}}(f(x − 1/√n) − f(x))

Let f ∈ C_c³[0,∞). Then

A_n f(x) = λ f″(x) + O(1/√n),  x > 0,
A_n f(0) = √n λ f′(0) + (λ/2) f″(0) + O(1/√n).

Assume f′(0) = 0; we still have a discontinuity at 0.


Perturbed test function

Let f_n = f + (1/√n) h, with f ∈ {f ∈ C_c³[0,∞) : f′(0) = 0} and h ∈ C_c³[0,∞). Then

A_n f_n(x) = λ f″(x) + (1/√n) λ h″(x) + O(1/√n),  x > 0,
A_n f_n(0) = (λ/2) f″(0) + λ h′(0) + O(1/√n).

Assume that h′(0) = (1/2) f″(0). Then, noting that η_n(x) = x, x ∈ E_n,

sup_{x∈E_n} (|f_n(x) − f(x)| + |A_n f_n(x) − Af(x)|) → 0.


Averaging

Y stationary with marginal distribution ν, ergodic, and independent of W; Y_n(t) = Y(β_n t), β_n → ∞.

X_n(t) = X(0) + ∫_0^t σ(X_n(s), Y_n(s)) dW(s) + ∫_0^t b(X_n(s), Y_n(s)) ds

Lemma 8.3 Let {μ_n} be measures on U × [0,∞) satisfying μ_n(U × [0,t]) = t. Suppose

∫_U φ(u) μ_n(du × [0,t]) → ∫_U φ(u) μ(du × [0,t])

for each φ ∈ C(U) and t ≥ 0, and that x_n → x in D_E[0,∞). Then

∫_{U×[0,t]} h(u, x_n(s)) μ_n(du × ds) → ∫_{U×[0,t]} h(u, x(s)) μ(du × ds)

for each h ∈ C(U × E) and t ≥ 0. (See Kurtz (1992).)


Convergence of averaged generator

Assume that σ and b are bounded and continuous. Then for f ∈ C_c²(R),

f(X_n(t)) − f(X_n(0)) − ∫_0^t Af(X_n(s), Y_n(s)) ds = ∫_0^t σ(X_n(s), Y_n(s)) f′(X_n(s)) dW(s),

Af(x, y) = (1/2) σ²(x,y) f″(x) + b(x,y) f′(x),

is a martingale, X_n is relatively compact, and

∫_0^t φ(Y_n(s)) ds = (1/β_n) ∫_0^{β_n t} φ(Y(s)) ds → t ∫_U φ(u) ν(du),

so any limit point of X_n is a solution of the martingale problem for

Āf(x) = ∫_U Af(x, u) ν(du).

Khas'minskii (1966); Kurtz (1992); Pinsky (1991)
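The ergodic limit (1/β_n) ∫_0^{β_n t} φ(Y(s)) ds → t ∫ φ dν driving the averaging can be illustrated with a two-state Markov chain, whose stationary distribution is explicit; the rates below are chosen arbitrarily.

```python
import random

def time_average(a, b, T, seed=0):
    """Fraction of time a two-state {0,1} Markov chain (rate a for 0 -> 1,
    rate b for 1 -> 0) spends in state 1 over [0, T]; the stationary
    distribution puts mass a/(a+b) on state 1, so by ergodicity the
    time average is close to that for large T."""
    rng = random.Random(seed)
    t, y, occ1 = 0.0, 0, 0.0
    while t < T:
        rate = a if y == 0 else b
        hold = min(rng.expovariate(rate), T - t)   # clip the last sojourn
        if y == 1:
            occ1 += hold
        t += hold
        y = 1 - y                                  # jump to the other state
    return occ1 / T

avg = time_average(1.0, 3.0, 5000.0)   # stationary mass on state 1 is 0.25
```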


Coupled system

X_n(t) = X(0) + ∫_0^t σ(X_n(s), Y_n(s)) dW_1(s) + ∫_0^t b(X_n(s), Y_n(s)) ds
Y_n(t) = Y(0) + ∫_0^t √n α(X_n(s), Y_n(s)) dW_2(s) + ∫_0^t n β(X_n(s), Y_n(s)) ds

Consequently,

f(X_n(t)) − f(X_n(0)) − ∫_0^t Af(X_n(s), Y_n(s)) ds

and

g(Y_n(t)) − g(Y(0)) − ∫_0^t n Bg(X_n(s), Y_n(s)) ds,
Bg(x, y) = (1/2) α²(x,y) g″(y) + β(x,y) g′(y),

are martingales.


Estimates

Suppose that for g ∈ C_c²(R), Bg(x,y) is bounded, and that, taking g(y) = y²,

Bg(x, y) ≤ K_1 − K_2 y².

Then, assuming E[Y(0)²] < ∞,

E[Y_n(t)²] ≤ E[Y(0)²] + ∫_0^t (K_1 − K_2 E[Y_n(s)²]) ds,

which implies

E[Y_n(t)²] ≤ E[Y(0)²] e^{−K_2 t} + (K_1/K_2)(1 − e^{−K_2 t}).

The sequence of measures {Γ_n} defined by

∫_{R×[0,t]} φ(y) Γ_n(dy × ds) = ∫_0^t φ(Y_n(s)) ds

is relatively compact.


Convergence of the averaged process

If σ and b are bounded, then {X_n} is relatively compact, and any limit point of {(X_n, Γ_n)} must satisfy:

f(X(t)) − f(X(0)) − ∫_{R×[0,t]} Af(X(s), y) Γ(dy × ds)

is a martingale for each f ∈ C_c²(R), and

∫_{R×[0,t]} Bg(X(s), y) Γ(dy × ds) (8.1)

is a martingale for each g ∈ C_c²(R) (divide the martingale involving g(Y_n) by n and pass to the limit).

But (8.1) is continuous and of finite variation. Therefore, writing Γ(dy × ds) = γ_s(dy) ds,

∫_{R×[0,t]} Bg(X(s), y) Γ(dy × ds) = ∫_0^t ∫_R Bg(X(s), y) γ_s(dy) ds = 0.


Characterizing γ_s

For almost every s,

\int_R Bg(X(s), y)\,\gamma_s(dy) = 0,   g ∈ C_c^2(R).

But, fixing x and setting B_x g(y) = Bg(x, y),

\int_R B_x g(y)\,\pi(dy) = 0,   g ∈ C^2(R),

implies π is a stationary distribution for the diffusion with generator

B_x g(y) = \frac{1}{2}\alpha_x^2(y)g''(y) + \beta_x(y)g'(y),   \alpha_x(y) = \alpha(x, y),  \beta_x(y) = \beta(x, y).

If \alpha_x(y) > 0 for all y, then the stationary distribution \pi_x is uniquely determined. If uniqueness holds for all x, then \Gamma(dy\times ds) = \pi_{X(s)}(dy)\,ds and X is a solution of the martingale problem for the averaged generator

\bar{A}f(x) = \int_R Af(x, y)\,\pi_x(dy).
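The stationarity condition \int B_x g\,d\pi_x = 0 can be checked numerically. For the illustrative fast Ornstein-Uhlenbeck choice \alpha_x(y) = \alpha constant and \beta_x(y) = -(y - m(x)) (assumed here, not fixed by the lecture), the stationary distribution is \pi_x = N(m(x), \alpha^2/2):

```python
import numpy as np

# Check that pi_x = N(m, alpha^2/2) annihilates the fast generator
#   B_x g(y) = (1/2) alpha^2 g''(y) - (y - m) g'(y)
# (illustrative OU coefficients; alpha and m are assumed values).
alpha, m = 1.3, 0.7
ys = np.linspace(m - 10.0, m + 10.0, 200001)
var = alpha**2 / 2.0
w = np.exp(-(ys - m) ** 2 / (2.0 * var))
w = w / w.sum()                       # discrete weights approximating pi_x

def avg_Bg(gp, gpp):
    """Integrate B_x g against pi_x on the grid (g given via g', g'')."""
    vals = 0.5 * alpha**2 * gpp(ys) - (ys - m) * gp(ys)
    return float(np.sum(vals * w))

r2 = avg_Bg(lambda y: 2 * y, lambda y: 2 * np.ones_like(y))   # g(y) = y^2
r3 = avg_Bg(lambda y: 3 * y**2, lambda y: 6 * y)              # g(y) = y^3
```

Both integrals vanish to numerical precision, consistent with \pi_x being stationary for B_x.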


Moran models in population genetics

E  type space
B  generator of mutation process
σ  selection coefficient

Generator with state space E^n:

A_n f(x) = \sum_{i=1}^n B_i f(x) + \frac{1}{2(n-2)} \sum_{1\le i\ne j\ne k\le n} \left(1 + \frac{2}{n}\sigma(x_i, x_j)\right)(f(\eta_k(x|x_i)) - f(x))

\eta_k(x|z) = (x_1, \ldots, x_{k-1}, z, x_{k+1}, \ldots, x_n)

Note that if (X_1, \ldots, X_n) is a solution of the martingale problem for A_n, then for any permutation σ, (X_{\sigma_1}, \ldots, X_{\sigma_n}) is a solution of the martingale problem for A_n.


Conditioned martingale lemma

Lemma 8.4 Suppose U and V are {F_t}-adapted,

U(t) - \int_0^t V(s)\,ds

is an {F_t}-martingale, and G_t \subset F_t. Then

E[U(t)|G_t] - \int_0^t E[V(s)|G_s]\,ds

is a {G_t}-martingale.


Generator for measure-valued process

P_n(E) = \{\mu \in P(E) : \mu = \frac{1}{n}\sum_{i=1}^n \delta_{x_i}\},   Z(t) = \frac{1}{n}\sum_{i=1}^n \delta_{X_i(t)}

For f ∈ B(E^m) and \mu \in P_n(E),

\langle f, \mu^{(m)}\rangle = \frac{1}{n(n-1)\cdots(n-m+1)} \sum_{i_1\ne\cdots\ne i_m} f(x_{i_1}, \ldots, x_{i_m}).

Symmetry and the conditioned martingale lemma imply

\langle f, Z^{(n)}(t)\rangle - \int_0^t \langle A_n f, Z^{(n)}(s)\rangle\,ds

is an {F^Z_t}-martingale.

Define F(\mu) = \langle f, \mu^{(n)}\rangle and

A_n F(\mu) = \langle A_n f, \mu^{(n)}\rangle.


Convergence of the generator

If f depends on m variables (m < n),

A_n f(x) = \sum_{i=1}^m B_i f(x)
  + \frac{1}{2(n-2)} \sum_{k=1}^m \sum_{1\le i\ne k\le m} \sum_{1\le j\ne k,i\le n} \left(1 + \frac{2}{n}\sigma(x_i, x_j)\right)(f(\eta_k(x|x_i)) - f(x))
  + \frac{1}{2(n-2)} \sum_{k=1}^m \sum_{i=m+1}^n \sum_{1\le j\ne k,i\le n} \left(1 + \frac{2}{n}\sigma(x_i, x_j)\right)(f(\eta_k(x|x_i)) - f(x))

and

\langle A_n f, \mu^{(n)}\rangle = \sum_{i=1}^m \langle B_i f, \mu^{(m)}\rangle + \frac{1}{2} \sum_{1\le i\ne k\le m} (\langle \Phi_{ik}f, \mu^{(m-1)}\rangle - \langle f, \mu^{(m)}\rangle) + O(\frac{1}{n})
  + \sum_{k=1}^m (\langle \sigma_k f, \mu^{(m+1)}\rangle - \langle \sigma f, \mu^{(m+2)}\rangle) + O(\frac{1}{n})


Conclusions

• If E is compact, the compact containment condition is immediate (P(E) is compact).

• A_n F is bounded as long as B_i f is bounded, and converges to

A F(\mu) = \sum_{i=1}^m \langle B_i f, \mu^{(m)}\rangle + \frac{1}{2}\sum_{1\le i\ne k\le m} (\langle \Phi_{ik} f, \mu^{(m-1)}\rangle - \langle f, \mu^{(m)}\rangle) + \sum_{k=1}^m (\langle \sigma_k f, \mu^{(m+1)}\rangle - \langle \sigma f, \mu^{(m+2)}\rangle) = \langle A f, \mu^{(m)}\rangle

• The limit of uniformly integrable martingales is a martingale.

• Z_n ⇒ Z if uniqueness holds for the limiting martingale problem.


9. Convergence for Markov processes characterized by stochasticequations

• Martingale central limit theorem

• Examples

– Products of random matrices
– Diffusion approximations for Markov chains
– Averaging

• Convergence for stochastic integrals

• Convergence for SDEs driven by semimartingales

• Limit theorems involving Poisson random measures and Gaussian white noise

• Diffusion approximations for Markov chains

• Not so good examples


Martingale central limit theorem

[M]_t = \lim \sum (M(t_{i+1}) - M(t_i))(M(t_{i+1}) - M(t_i))^T

Theorem 9.1 Let {M_N} be a sequence of martingales. Suppose that

\lim_{N\to\infty} E[\sup_{s\le t}|M_N(s) - M_N(s-)|] = 0

and

[M_N]_t \to K(t)   (9.1)

for each t > 0, where K(t) is continuous and deterministic. Then M_N ⇒ M, where M is Gaussian with independent increments and E[M(t)M(t)^T] = K(t).

For d = 1, M = W \circ K.
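A minimal simulation of Theorem 9.1, assuming the simplest possible martingales: M_N(t) = N^{-1/2}\sum_{k\le Nt}\xi_k with iid Rademacher steps, so the jumps vanish and [M_N]_t = [Nt]/N \to t, giving M_N(1) \approx N(0,1):

```python
import numpy as np

# Martingale CLT sketch: scaled Rademacher random walks.
# Jumps are 1/sqrt(N) -> 0, and [M_N]_1 = (1/N) sum xi_k^2 = 1 exactly,
# so M_N(1) should be approximately standard normal.
rng = np.random.default_rng(1)
N, reps = 400, 2000
xi = rng.choice([-1.0, 1.0], size=(reps, N))
MN1 = xi.sum(axis=1) / np.sqrt(N)     # M_N(1) over many replications
QV = (xi**2).sum(axis=1) / N          # [M_N]_1, identically 1 here
mean, var = float(MN1.mean()), float(MN1.var())
```

Here K(t) = t, so the limit is standard Brownian motion, matching M = W \circ K.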


Representation of limit

Note that K(t) - K(s) is nonnegative definite for t \ge s \ge 0. If K is differentiable, then the derivative will also be nonnegative definite and will have a nonnegative definite square root. In general, suppose K(t) = \int_0^t \sigma(s)\sigma(s)^T\,ds where σ takes values in M^{d\times m}. Then M can be written as

M(t) = \int_0^t \sigma(s)\,dW(s),

where W is an m-dimensional standard Brownian motion.


Example: Products of random matrices

Let A(1), A(2), \ldots be iid random matrices with E[A(k)] = 0 and E[|A(k)|^2] < \infty. Set X(0) = I and

X(k+1) = (I + \frac{1}{\sqrt{n}}A(k+1))X(k) = (I + \frac{1}{\sqrt{n}}A(k+1)) \cdots (I + \frac{1}{\sqrt{n}}A(1))X(0)

X_n(t) = X([nt]),   M_n(t) = \frac{1}{\sqrt{n}}\sum_{k=1}^{[nt]} A(k).

Then

X_n(t) = X_n(0) + \int_0^t dM_n(s)\,X_n(s-).

M_n ⇒ M, where M is a matrix of Brownian motions with

E[M_{ij}(t)M_{kl}(t)] = E[A_{ij}A_{kl}]\,t.

Can we conclude X_n ⇒ X satisfying

X(t) = X(0) + \int_0^t dM(s)\,X(s) ?
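A quick sanity check of the scaling: with iid centered matrix factors, the product at time 1 stays of order one as n grows, consistent with a matrix-SDE limit. The 2x2 standard-normal entries below are an illustrative choice of A(k):

```python
import numpy as np

# Product X(n) = (I + A(n)/sqrt(n)) ... (I + A(1)/sqrt(n)) for iid
# centered A(k) (here 2x2 with independent N(0,1) entries -- an
# assumption for the demo).  X_n(1) should remain O(1) for large n.
def product_at_time_one(n, seed):
    rng = np.random.default_rng(seed)
    X = np.eye(2)
    for _ in range(n):
        A = rng.normal(size=(2, 2))
        X = (np.eye(2) + A / np.sqrt(n)) @ X
    return X

Xn = product_at_time_one(n=2000, seed=2)
norm = float(np.linalg.norm(Xn))
```

Without the 1/\sqrt{n} scaling the product would blow up or collapse; with it, the Frobenius norm stays on a moderate (lognormal-type) scale.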


Example: Diffusion approximation for Markov chains

X_{k+1} = H(X_k, \xi_{k+1}), where \xi_1, \xi_2, \ldots are iid:

P\{X_{k+1} \in B \mid X_0, \xi_1, \ldots, \xi_k\} = P\{X_{k+1} \in B \mid X_k\}

Example: X^n_{k+1} = X^n_k + \sigma(X^n_k)\frac{1}{\sqrt{n}}\xi_{k+1} + b(X^n_k)\frac{1}{n}

Assume E[\xi_k] = 0 and Var(\xi_k) = 1.

Define X_n(t) = X^n_{[nt]},  A_n(t) = \frac{[nt]}{n},  W_n(t) = \frac{1}{\sqrt{n}}\sum_{k=1}^{[nt]} \xi_k. Then

X_n(t) = X_n(0) + \int_0^t \sigma(X_n(s-))\,dW_n(s) + \int_0^t b(X_n(s-))\,dA_n(s)

Can we conclude X_n ⇒ X satisfying

X(t) = X(0) + \int_0^t \sigma(X(s))\,dW(s) + \int_0^t b(X(s))\,ds ?
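The affirmative answer can be seen numerically. For the illustrative choice σ(x) = 1, b(x) = -x (assumed here), the limit is the Ornstein-Uhlenbeck equation dX = -X\,dt + dW, whose variance at t = 1 starting from 0 is (1 - e^{-2})/2 \approx 0.432:

```python
import numpy as np

# Markov chain X^n_{k+1} = X^n_k + xi_{k+1}/sqrt(n) - X^n_k/n with
# Rademacher xi (mean 0, variance 1); the diffusion limit is OU.
def chain_at_time_one(n, reps, seed):
    rng = np.random.default_rng(seed)
    x = np.zeros(reps)
    for _ in range(n):
        xi = rng.choice([-1.0, 1.0], size=reps)
        x = x + xi / np.sqrt(n) + (-x) / n
    return x

x = chain_at_time_one(n=500, reps=4000, seed=3)
m, v = float(x.mean()), float(x.var())   # compare with OU: 0 and 0.432
```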


Averaging

Y stationary in E with marginal distribution ν, ergodic, and independent of W; Y_n(t) = Y(\beta_n t), \beta_n \to \infty.

X_n(t) = X(0) + \int_0^t \sigma(X_n(s), Y_n(s))\,dW(s) + \int_0^t b(X_n(s), Y_n(s))\,ds.

Define

\Gamma_n(C\times[0,t]) = \int_0^t 1_C(Y_n(s))\,ds,   M_n(C\times[0,t]) = \int_0^t 1_C(Y_n(s))\,dW(s).

Then

\Gamma_n(du\times ds) \to \nu(du)\,ds,   M_n(C\times[0,t]) ⇒ W(C\times[0,t]),

W Gaussian white noise determined by ν.

Can we conclude that X_n ⇒ X satisfying

X(t) = X(0) + \int_{E\times[0,t]} \sigma(X(s), u)\,W(du\times ds) + \int_0^t\int_E b(X(s), u)\,\nu(du)\,ds ?


Stochastic integration

Definition: For cadlag processes X, Y,

\int_0^t X(s-)\,dY(s) = \lim_{\max|t_{i+1}-t_i|\to 0} \sum X(t_i)(Y(t_{i+1}\wedge t) - Y(t_i\wedge t)),

whenever the limit exists in probability.

Sample paths of bounded variation: If Y is a finite variation process, the stochastic integral exists (apply the dominated convergence theorem) and

\int_0^t X(s-)\,dY(s) = \int_{(0,t]} X(s-)\,\alpha_Y(ds),

where \alpha_Y is the signed measure with

\alpha_Y(0, t] = Y(t) - Y(0).
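The left-endpoint Riemann sums in the definition are easy to see in the finite-variation case. Taking the deterministic (assumed, purely illustrative) integrands X(t) = t and Y(t) = t^2, so \alpha_Y(ds) = 2s\,ds, the sums converge to \int_0^1 s\cdot 2s\,ds = 2/3:

```python
import numpy as np

# Left-endpoint Riemann sums sum X(t_i)(Y(t_{i+1}) - Y(t_i)) for
# X(t) = t, Y(t) = t^2 on [0,1]; the limit is int_0^1 2 s^2 ds = 2/3.
def left_sum(npts):
    t = np.linspace(0.0, 1.0, npts + 1)
    X, Y = t, t**2
    return float(np.sum(X[:-1] * np.diff(Y)))

approx = left_sum(100000)   # close to 2/3
```

For semimartingale integrators the same left-endpoint sums are used, but the limit is only in probability, which is what the next slides address.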


Existence for square integrable martingales

If M is a square integrable martingale, then

E[(M(t+s) - M(t))^2 | F_t] = E[[M]_{t+s} - [M]_t | F_t].

Let X be bounded, cadlag and adapted. Then for partitions {t_i} and {r_i},

E[(\sum X(t_i)(M(t_{i+1}\wedge t) - M(t_i\wedge t)) - \sum X(r_i)(M(r_{i+1}\wedge t) - M(r_i\wedge t)))^2]
  = E[\int_0^t (X(t(s-)) - X(r(s-)))^2\,d[M]_s]
  = E[\int_{(0,t]} (X(t(s-)) - X(r(s-)))^2\,\alpha_{[M]}(ds)],

where t(s) = t_i for s ∈ [t_i, t_{i+1}) and r(s) = r_i for s ∈ [r_i, r_{i+1}).


Truncation

Let \sigma_c = \inf\{t : |X(t)| \ge c\} and

X_c(t) = X(t) for t < \sigma_c,   X_c(t) = X(\sigma_c-) for t \ge \sigma_c.

Then

\int_0^{t\wedge\sigma_c} X(s-)\,dY(s) = \int_0^{t\wedge\sigma_c} X_c(s-)\,dY(s),

so the definition extends to local square-integrable martingales.


Semimartingales

Definition: A cadlag, {F_t}-adapted process Y is an {F_t}-semimartingale if Y = M + V, where

M is a local, square integrable {F_t}-martingale
V is an {F_t}-adapted, finite variation process.

Total variation: For a finite variation process,

T_t(Z) \equiv \sup_{\{t_i\}} \sum |Z(t_{i+1}\wedge t) - Z(t_i\wedge t)| < \infty.

Quadratic variation: For a cadlag semimartingale,

[Y]_t = \lim_{\max|t_{i+1}-t_i|\to 0} \sum (Y(t_{i+1}\wedge t) - Y(t_i\wedge t))^2.

Covariation: For cadlag semimartingales,

[Y, Z]_t = \lim \sum (Y(t_{i+1}\wedge t) - Y(t_i\wedge t))(Z(t_{i+1}\wedge t) - Z(t_i\wedge t)).
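The quadratic variation is easy to observe numerically: for Brownian motion on [0, 1], the sums of squared increments along a fine partition concentrate at [W]_1 = 1 (a standard fact, simulated here with an assumed uniform partition):

```python
import numpy as np

# Sum of squared Brownian increments over a uniform partition of [0,1];
# as the mesh shrinks this converges (in probability) to [W]_1 = 1.
rng = np.random.default_rng(4)
n = 200000
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
qv = float(np.sum(dW**2))   # close to 1
```

The same computation applied to a smooth (finite variation) path would give a sum tending to 0, which is why [Y]_t for a semimartingale is carried entirely by the martingale part and the jumps.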


Probability estimates for SIs

Y = M + V, M a local square-integrable martingale, V a finite variation process. Assume |X| \le 1.

P\{\sup_{s\le t} |\int_0^s X(r-)\,dY(r)| > K\}
  \le P\{\sigma \le t\} + P\{\sup_{s\le t\wedge\sigma} |\int_0^s X(r-)\,dM(r)| > K/2\} + P\{\sup_{s\le t\wedge\sigma} |\int_0^s X(r-)\,dV(r)| > K/2\}
  \le P\{\sigma \le t\} + \frac{16\,E[\int_0^{t\wedge\sigma} X^2(s-)\,d[M]_s]}{K^2} + P\{\int_0^{t\wedge\sigma} |X(s-)|\,dT_s(V) \ge K/2\}
  \le P\{\sigma \le t\} + \frac{16\,E[[M]_{t\wedge\sigma}]}{K^2} + P\{T_{t\wedge\sigma}(V) \ge K/2\}


Good integrator condition

Let S_0 be the collection of X = \sum \xi_i 1_{[\tau_i, \tau_{i+1})}, where \tau_1 < \cdots < \tau_m are stopping times and \xi_i is F_{\tau_i}-measurable. Then

\int_0^t X(s-)\,dY(s) = \sum \xi_i (Y(t\wedge\tau_{i+1}) - Y(t\wedge\tau_i)).

If Y is a semimartingale, then

\{\int_0^t X(s-)\,dY(s) : X \in S_0, |X| \le 1\}

is stochastically bounded.

A Y satisfying this stochastic boundedness condition is a good integrator.

Bichteler-Dellacherie: Y is a good integrator if and only if Y is a semimartingale. (See, for example, Protter (1990), Theorem III.22.)


Uniformity conditions

Y_n = M_n + A_n, a semimartingale adapted to {F^n_t}

T_t(A_n), the total variation of A_n on [0, t]
[M_n]_t, the quadratic variation of M_n on [0, t]

Uniform tightness (UT), Jakubowski, Memin, and Pages (1989):

H^0_t = \bigcup_{n=1}^\infty \{|\int_0^t Z(s-)\,dY_n(s)| : Z \in S^n_0, \sup_{s\le t}|Z(s)| \le 1\}

is stochastically bounded.

Uniformly controlled variations (UCV), Kurtz and Protter (1991): {T_t(A_n), n = 1, 2, \ldots} is stochastically bounded, and there exist stopping times \tau^\alpha_n such that P\{\tau^\alpha_n \le \alpha\} \le \alpha^{-1} and \sup_n E[[M_n]_{t\wedge\tau^\alpha_n}] < \infty.

A sequence of semimartingales {Y_n} that converges in distribution and satisfies either UT or UCV will be called good.


Basic convergence theorem

Theorem 9.2 (Jakubowski, Memin and Pages; Kurtz and Protter)

(X_n, Y_n) {F^n_t}-adapted in D_{M^{km}\times R^m}[0,\infty), Y_n = M_n + A_n an {F^n_t}-semimartingale. Assume that {Y_n} satisfies either UT or UCV.

If (X_n, Y_n) ⇒ (X, Y) in D_{M^{km}\times R^m}[0,\infty) with the Skorohod topology, then

(X_n, Y_n, \int X_n(s-)\,dY_n(s)) ⇒ (X, Y, \int X(s-)\,dY(s))

in D_{M^{km}\times R^m\times R^k}[0,\infty).

If (X_n, Y_n) \to (X, Y) in probability in the Skorohod topology on D_{M^{km}\times R^m}[0,\infty), then

(X_n, Y_n, \int X_n(s-)\,dY_n(s)) \to (X, Y, \int X(s-)\,dY(s))

in probability in D_{M^{km}\times R^m\times R^k}[0,\infty).

"IN PROBABILITY" CANNOT BE REPLACED BY "A.S."


Stochastic differential equations

Suppose

F : D_{R^d}[0,\infty) \to D_{M^{d\times m}}[0,\infty)

is nonanticipating in the sense that

F(x, t) = F(x^t, t),

where x^t(s) = x(s\wedge t).

U cadlag, adapted, with values in R^d

Y an R^m-valued semimartingale.

Consider

X(t) = U(t) + \int_0^t F(X, s-)\,dY(s).   (9.2)


Local solutions

Definition 9.3 (X, τ) is a local solution of (9.2) if X is adapted to a filtration {F_t} such that Y is an {F_t}-semimartingale, τ is an {F_t}-stopping time, and

X(t\wedge\tau) = U(t\wedge\tau) + \int_0^{t\wedge\tau} F(X, s-)\,dY(s),   t \ge 0.

Strong local uniqueness holds if for any two local solutions (X_1, \tau_1), (X_2, \tau_2) with respect to a common filtration, we have X_1(\cdot\wedge\tau_1\wedge\tau_2) = X_2(\cdot\wedge\tau_1\wedge\tau_2).


Sequences of SDE's

X_n(t) = U_n(t) + \int_0^t F_n(X_n, s-)\,dY_n(s).

Structure conditions

T_1[0,\infty) = \{\gamma : \gamma nondecreasing and maps [0,\infty) onto [0,\infty), \gamma(t+h) - \gamma(t) \le h\}

C.a F_n behaves well under time changes: If \{x_n\} \subset D_{R^d}[0,\infty), \{\gamma_n\} \subset T_1[0,\infty), and \{x_n \circ \gamma_n\} is relatively compact in D_{R^d}[0,\infty), then \{F_n(x_n) \circ \gamma_n\} is relatively compact in D_{M^{d\times m}}[0,\infty).

C.b F_n converges to F: If (x_n, y_n) \to (x, y) in D_{R^d\times R^m}[0,\infty), then (x_n, y_n, F_n(x_n)) \to (x, y, F(x)) in D_{R^d\times R^m\times M^{d\times m}}[0,\infty).

Note that C.b implies continuity of F, that is, (x_n, y_n) \to (x, y) implies (x_n, y_n, F(x_n)) \to (x, y, F(x)).


Examples

F(x, t) = f(x(t)),  f continuous

F(x, t) = f(\int_0^t h(x(s))\,ds),  f, h continuous

F(x, t) = \int_0^t h(t-s, s, x(s))\,ds,  h continuous


Convergence theorem

Theorem 9.4 Suppose

• (U_n, X_n, Y_n) satisfies X_n(t) = U_n(t) + \int_0^t F_n(X_n, s-)\,dY_n(s).

• (U_n, Y_n) ⇒ (U, Y)

• {Y_n} is good

• F_n, F satisfy the structure conditions

• \sup_n \sup_x \|F_n(x, \cdot)\| < \infty.

Then (U_n, X_n, Y_n) is relatively compact and any limit point satisfies

X(t) = U(t) + \int_0^t F(X, s-)\,dY(s).

If, in addition, (U_n, Y_n) \to (U, Y) in probability and strong uniqueness holds, then (U_n, X_n, Y_n) \to (U, X, Y) in probability.


Euler scheme

Let 0 = t_0 < t_1 < \cdots. The Euler approximation for

X(t) = U(t) + \int_0^t F(X, s-)\,dY(s)

is given by

X(t_{k+1}) = X(t_k) + U(t_{k+1}) - U(t_k) + F(X, t_k)(Y(t_{k+1}) - Y(t_k)).

Consistency

Let {t^n_k} satisfy \max_k |t^n_{k+1} - t^n_k| \to 0, and define

(Y_n(t), U_n(t)) = (Y(t^n_k), U(t^n_k)),   t^n_k \le t < t^n_{k+1}.

Then (U_n, Y_n) ⇒ (U, Y) and {Y_n} is good.


The Euler scheme corresponding to {t^n_k} satisfies

X_n(t) = U_n(t) + \int_0^t F(X_n, s-)\,dY_n(s).

If F satisfies the structure conditions and strong uniqueness holds, then X_n \to X in probability. (In the diffusion case, Maruyama (1955).)
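In the diffusion case the scheme is the familiar Euler-Maruyama method. A minimal sketch for the assumed test case F(x, t) = x(t), U = X(0), Y = W (geometric Brownian motion, whose exact solution driven by the same path is exp(W(t) - t/2)):

```python
import numpy as np

# Euler (Maruyama) scheme for dX = X dW, X(0) = 1, on a uniform grid,
# compared with the exact Ito solution exp(W(1) - 1/2) built from the
# same Brownian increments.
rng = np.random.default_rng(5)
n = 20000
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
x = 1.0
for dw in dW:
    x = x + x * dw                     # Euler step X_{k+1} = X_k + X_k dW_k
W1 = float(dW.sum())
exact = float(np.exp(W1 - 0.5))
rel_err = abs(x - exact) / exact       # shrinks as the mesh is refined
```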


Examples

Products of random matrices:

M_n(t) = \frac{1}{\sqrt{n}}\sum_{k=1}^{[nt]} A(k)

is a martingale and E[\|[M_n]_t\|] \le \frac{[nt]}{n}\,c.

Diffusion approximation for Markov chains:

A_n(t) = \frac{[nt]}{n} satisfies T_t(A_n) = \frac{[nt]}{n}.

W_n(t) = \frac{1}{\sqrt{n}}\sum_{k=1}^{[nt]} \xi_k satisfies E[[W_n]_t] = \frac{[nt]}{n}.


Sequences of Poisson random measures

ξ_n Poisson random measures with mean measures nν × m.

h measurable.

For A ∈ B(U) satisfying \int_A h^2(u)\,\nu(du) < \infty, define

M_n(A, t) = \frac{1}{\sqrt{n}} \int_A h(u)(\xi_n(du\times[0,t]) - nt\,\nu(du)).

M_n is an orthogonal martingale random measure with

[M_n(A), M_n(B)]_t = \frac{1}{n}\int_{A\cap B} h(u)^2\,\xi_n(du\times[0,t])

\langle M_n(A), M_n(B)\rangle_t = t \int_{A\cap B} h(u)^2\,\nu(du).

M_n converges to Gaussian white noise W with

E[W(A, t)W(B, s)] = t\wedge s \int_{A\cap B} h(u)^2\,\nu(du).
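The variance scaling can be checked directly. Assume (for illustration only) U = [0, 1], ν = Lebesgue measure, h(u) = u, A = U and t = 1; then Var(M_n(A, 1)) should be \int_0^1 u^2\,du = 1/3 for every n:

```python
import numpy as np

# Points of xi_n in U x [0,1]: a Poisson(n) number of iid uniform marks.
# M_n(U,1) = (sum h(u_i) - n * int h dnu) / sqrt(n), with int h dnu = 1/2.
rng = np.random.default_rng(6)
n, reps = 1000, 4000
counts = rng.poisson(n, size=reps)
Mn = np.array([(np.sum(rng.uniform(size=c)) - n * 0.5) / np.sqrt(n)
               for c in counts])
m, var = float(Mn.mean()), float(Mn.var())   # targets: 0 and 1/3
```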


Continuous-time Markov chains

X_n(t) = X_n(0) + \frac{1}{\sqrt{n}}\int_{U\times[0,t]} \alpha_1(X_n(s-), u)\,\xi_n(du\times ds) + \frac{1}{n}\int_{U\times[0,t]} \alpha_2(X_n(s-), u)\,\xi_n(du\times ds)

Assume \int_U \alpha_1(x, u)\,\nu(du) = 0. Then, writing \tilde{\xi}_n(du\times ds) = \xi_n(du\times ds) - n\,\nu(du)\,ds for the centered measure (the centering costs nothing in the first term, since \int_U \alpha_1(x, u)\,\nu(du) = 0),

X_n(t) = X_n(0) + \frac{1}{\sqrt{n}}\int_{U\times[0,t]} \alpha_1(X_n(s-), u)\,\tilde{\xi}_n(du\times ds) + \frac{1}{n}\int_{U\times[0,t]} \alpha_2(X_n(s-), u)\,\xi_n(du\times ds).

Can we conclude X_n ⇒ X satisfying

X(t) = X(0) + \int_{U\times[0,t]} \alpha_1(X(s), u)\,W(du\times ds) + \int_0^t\int_U \alpha_2(X(s-), u)\,\nu(du)\,ds ?


Discrete-time Markov chains

Consider

X^n_{k+1} = X^n_k + \sigma(X^n_k, \xi_{k+1})\frac{1}{\sqrt{n}} + b(X^n_k, \zeta_{k+1})\frac{1}{n},

where {(\xi_k, \zeta_k)} is iid in U_1 × U_2.

µ the distribution of ξ_k
ν the distribution of ζ_k

Assume \int_{U_1} \sigma(x, u_1)\,\mu(du_1) = 0.

Define

M_n(A, t) = \frac{1}{\sqrt{n}}\sum_{k=1}^{[nt]} (1_A(\xi_k) - \mu(A)),   V_n(B, t) = \frac{1}{n}\sum_{k=1}^{[nt]} 1_B(\zeta_k).


Stochastic equation driven by random measure

Then X_n(t) \equiv X^n_{[nt]} satisfies

X_n(t) = X_n(0) + \int_0^t\int_{U_1} \sigma_n(X_n(s), u)\,M_n(du\times ds) + \int_0^t\int_{U_2} b_n(X_n(s), u)\,V_n(du\times ds)

V_n(A, t) \to t\nu(A),   M_n(A, t) ⇒ M(A, t),

where M is Gaussian with covariance

E[M(A, t)M(B, s)] = t\wedge s\,(\mu(A\cap B) - \mu(A)\mu(B)).

Can we conclude that X_n ⇒ X satisfying

X(t) = X(0) + \int_0^t\int_{U_1} \sigma(X(s), u)\,M(du\times ds) + \int_0^t\int_{U_2} b(X(s), u)\,\nu(du)\,ds ?


Good integrator condition

H a separable Banach space (H = L^2(\nu), L^1(\nu), L^2(\mu), etc.)

Y(\varphi, t) a semimartingale for each \varphi \in H

Y(\sum a_k\varphi_k, t) = \sum_k a_k Y(\varphi_k, t)

Let S be the collection of cadlag, adapted processes of the form Z(t) = \sum_{k=1}^m \xi_k(t)\varphi_k, \varphi_k \in H. Define

I_Y(Z, t) = \int_{U\times[0,t]} Z(u, s-)\,Y(du\times ds) = \sum_k \int_0^t \xi_k(s-)\,dY(\varphi_k, s).

Basic assumption:

H_t = \{\sup_{s\le t} |I_Y(Z, s)| : Z \in S, \sup_{s\le t}\|Z(s)\|_H \le 1\}

is stochastically bounded. (Call Y a good H^#-semimartingale.)

The integral extends to all cadlag, adapted H-valued processes.


Convergence for H^#-semimartingales

H a separable Banach space of functions on U

Y_n an {F^n_t}-H^#-semimartingale (for each \varphi \in H, Y_n(\varphi, \cdot) is an {F^n_t}-semimartingale)

X_n cadlag, H-valued processes

(X_n, Y_n) ⇒ (X, Y), if

(X_n, Y_n(\varphi_1, \cdot), \ldots, Y_n(\varphi_m, \cdot)) ⇒ (X, Y(\varphi_1, \cdot), \ldots, Y(\varphi_m, \cdot))

in D_{H\times R^m}[0,\infty) for each choice of \varphi_1, \ldots, \varphi_m \in H.


Convergence for Stochastic Integrals

Let

H_{n,t} = \{\sup_{s\le t}|I_{Y_n}(Z, s)| : Z \in S_n, \sup_{s\le t}\|Z(s)\|_H \le 1\}.

Definition: {Y_n} is uniformly tight if \bigcup_n H_{n,t} is stochastically bounded for each t.

Theorem 9.5 (Kurtz and Protter (1996)) Assume that {Y_n} is uniformly tight. If (X_n, Y_n) ⇒ (X, Y), then there is a filtration {F_t} such that Y is an {F_t}-adapted, good, H^#-semimartingale, X is {F_t}-adapted, and

(X_n, Y_n, I_{Y_n}(X_n)) ⇒ (X, Y, I_Y(X)).

If (X_n, Y_n) \to (X, Y) in probability, then (X_n, Y_n, I_{Y_n}(X_n)) \to (X, Y, I_Y(X)) in probability.

Cho (1995) for distribution-valued martingales.

Jakubowski (1996) for Hilbert-space-valued semimartingales.


Sequences of SDE's

X_n(t) = U_n(t) + \int_{U\times[0,t]} F_n(X_n, s-, u)\,Y_n(du\times ds).

Structure conditions

T_1[0,\infty) = \{\gamma : \gamma nondecreasing and maps [0,\infty) onto [0,\infty), \gamma(t+h) - \gamma(t) \le h\}

C.a F_n behaves well under time changes: If \{x_n\} \subset D_{R^d}[0,\infty), \{\gamma_n\} \subset T_1[0,\infty), and \{x_n \circ \gamma_n\} is relatively compact in D_{R^d}[0,\infty), then \{F_n(x_n) \circ \gamma_n\} is relatively compact in D_{H^d}[0,\infty).

C.b F_n converges to F: If (x_n, y_n) \to (x, y) in D_{R^d\times R^m}[0,\infty), then (x_n, y_n, F_n(x_n)) \to (x, y, F(x)) in D_{R^d\times R^m\times H^d}[0,\infty).

Note that C.b implies continuity of F, that is, (x_n, y_n) \to (x, y) implies (x_n, y_n, F(x_n)) \to (x, y, F(x)).


SDE convergence theorem

Theorem 9.6 Suppose that (U_n, X_n, Y_n) satisfies

X_n(t) = U_n(t) + \int_{U\times[0,t]} F_n(X_n, s-, u)\,Y_n(du\times ds),

that (U_n, Y_n) ⇒ (U, Y), and that {Y_n} is uniformly tight. Assume that F_n and F satisfy the structure conditions and that \sup_n \sup_x \|F_n(x, \cdot)\|_{H^d} < \infty. Then (U_n, X_n, Y_n) is relatively compact and any limit point satisfies

X(t) = U(t) + \int_{U\times[0,t]} F(X, s-, u)\,Y(du\times ds).


Not good (evil?) sequences

Piecewise linear interpolation of W:

W_n(t) = W(\frac{[nt]}{n}) + (t - \frac{[nt]}{n})\,n\left(W(\frac{[nt]+1}{n}) - W(\frac{[nt]}{n})\right)

(Classical Wong-Zakai example.)

W_n(t) = W(\frac{[nt]+1}{n}) - (\frac{[nt]+1}{n} - t)\,n\left(W(\frac{[nt]+1}{n}) - W(\frac{[nt]}{n})\right) = M_n(t) + Z_n(t),

where M_n is good (take the filtration to be F^n_t = F_{([nt]+1)/n}) and Z_n ⇒ 0.
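The point of the example is visible numerically: driving dX = X\,dV_n with the piecewise-linear interpolation gives an ODE on each interval, whose solution is exactly exp(W_n(t)); so the limit is the Stratonovich solution exp(W(t)), not the Itô solution exp(W(t) - t/2). (The linear equation is an assumed test case.)

```python
import numpy as np

# ODE x' = x * Wn'(t) along each linear piece: x multiplies by
# exp(increment), so x(1) = exp(W(1)) exactly -- the Stratonovich answer.
rng = np.random.default_rng(7)
n = 4000
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
W1 = float(dW.sum())
x_interp = float(np.exp(np.sum(dW)))   # solution driven by W_n at t = 1
strat = float(np.exp(W1))              # Stratonovich solution
ito = float(np.exp(W1 - 0.5))          # Ito solution
```

The gap between x_interp and the Itô solution is exactly the Wong-Zakai correction discussed below.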


Markov chains: Let {ξ_k} be an irreducible finite Markov chain with stationary distribution π(x), \sum g(x)\pi(x) = 0, and let h satisfy Ph - h = g (Ph(x) = \sum_y p(x, y)h(y)). Define

V_n(t) = \frac{1}{\sqrt{n}}\sum_{k=1}^{[nt]} g(\xi_k).

Then V_n ⇒ σW, where

\sigma^2 = \sum_{x,y} \pi(x)p(x, y)(Ph(x) - h(y))^2.

V_n is not "good" but

V_n(t) = \frac{1}{\sqrt{n}}\sum_{k=1}^{[nt]} (Ph(\xi_{k-1}) - h(\xi_k)) + \frac{1}{\sqrt{n}}(Ph(\xi_{[nt]}) - Ph(\xi_0)) = M_n(t) + Z_n(t),

where {M_n} is a good sequence of martingales and Z_n ⇒ 0.
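The variance formula can be verified on a small example. The two-state transition matrix and the function f below are assumptions chosen for the demo; h is obtained by solving the Poisson equation Ph - h = g:

```python
import numpy as np

# sigma^2 via the Poisson equation for a two-state chain, checked
# against the empirical variance of V_n(1).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
w, vl = np.linalg.eig(P.T)                     # stationary dist: pi P = pi
pi = np.real(vl[:, np.argmax(np.real(w))]); pi = pi / pi.sum()
f = np.array([1.0, -1.0])
g = f - pi @ f                                  # centered: sum g pi = 0
h = np.linalg.pinv(P - np.eye(2)) @ g           # Ph - h = g (up to a constant)
Ph = P @ h
sigma2 = sum(pi[x] * P[x, y] * (Ph[x] - h[y])**2
             for x in range(2) for y in range(2))

rng = np.random.default_rng(8)
n, reps = 1000, 500
vals = []
for _ in range(reps):
    s, tot = 0, 0.0
    for _ in range(n):
        s = 0 if rng.random() < P[s, 0] else 1
        tot += g[s]
    vals.append(tot / np.sqrt(n))
emp = float(np.var(vals))                       # close to sigma2
```

The additive constant in h drops out of Ph(x) - h(y), so the pseudoinverse solution is good enough.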


Renewal processes: N(t) = \max\{k : \sum_{i=1}^k \xi_i \le t\}, ξ_i iid, positive, E[\xi_k] = \mu, Var(\xi_k) = \sigma^2.

V_n(t) = \frac{N(nt) - nt/\mu}{\sqrt{n}}.

Then V_n ⇒ αW, α = σ/\mu^{3/2}.

V_n(t) = \frac{(N(nt)+1)\mu - S_{N(nt)+1}}{\mu\sqrt{n}} + \frac{S_{N(nt)+1} - nt}{\mu\sqrt{n}} = M_n(t) + Z_n(t)
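For exponential(1) interarrival times (an assumed special case with µ = σ² = 1, hence α = 1), the renewal CLT predicts Var(V_n(1)) ≈ 1, which a direct simulation confirms:

```python
import numpy as np

# Renewal counts up to time n with exponential(1) interarrivals;
# V_n(1) = (N(n) - n)/sqrt(n) should be approximately N(0, 1).
rng = np.random.default_rng(9)
n, reps = 500, 2000
counts = []
for _ in range(reps):
    s, k = 0.0, 0
    while True:
        s += rng.exponential(1.0)
        if s > n:
            break
        k += 1
    counts.append(k)
Vn = (np.array(counts) - n) / np.sqrt(n)
m, v = float(Vn.mean()), float(Vn.var())   # targets: 0 and alpha^2 = 1
```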


Not so evil after all

Assume V_n(t) = Y_n(t) + Z_n(t), where {Y_n} is good and Z_n ⇒ 0. In addition, assume {\int Z_n dZ_n}, {[Y_n, Z_n]}, and {[Z_n]} are good.

X_n(t) = X_n(0) + \int_0^t F(X_n(s-))\,dV_n(s) = X_n(0) + \int_0^t F(X_n(s-))\,dY_n(s) + \int_0^t F(X_n(s-))\,dZ_n(s)

Integrate by parts using

F(X_n(t)) = F(X_n(0)) + \int_0^t F'(X_n(s-))F(X_n(s-))\,dV_n(s) + R_n(t),

where R_n can be estimated in terms of [V_n] = [Y_n] + 2[Y_n, Z_n] + [Z_n].


Integration by parts

\int_0^t F(X_n(s-))\,dZ_n(s)
  = F(X_n(t))Z_n(t) - F(X_n(0))Z_n(0) - \int_0^t Z_n(s-)\,dF(X_n(s)) - [F\circ X_n, Z_n]_t
  \approx -\int_0^t Z_n(s-)F'(X_n(s-))F(X_n(s-))\,dY_n(s) - \int_0^t F'(X_n(s-))F(X_n(s-))Z_n(s-)\,dZ_n(s)
    - \int_0^t Z_n(s-)\,dR_n(s) - \int_0^t F'(X_n(s-))F(X_n(s-))\,d([Y_n, Z_n]_s + [Z_n]_s) - [R_n, Z_n]_t


Wong-Zakai corrections, Kurtz and Protter (1991)

Theorem 9.7 Assume that V_n = Y_n + Z_n, where {Y_n} is good, Z_n ⇒ 0, and {\int Z_n dZ_n} is good. If

(X_n(0), Y_n, Z_n, \int Z_n dZ_n, [Y_n, Z_n]) ⇒ (X(0), Y, 0, H, K),

then {X_n} is relatively compact and any limit point satisfies

X(t) = X(0) + \int_0^t F(X(s-))\,dY(s) + \int_0^t F'(X(s-))F(X(s-))\,d(H(s) - K(s)).

Note: For all the examples, H(t) - K(t) = ct for some c.


10. Martingale problems for conditional distributions

• Forward equation

• Equivalence of forward equation and martingale problem

• Conditional distributions for martingale problems

• Partially observed processes

• Filtered martingale problem

• Markov mapping theorem

• Filtering equations

• Stopped martingale problem

• Stopped forward equation

with Gianna Nappo; Kurtz and Nappo (2011)


Conditions on generators

Condition 10.1

i) A : D(A) \subset C_b(S) \to M(S) with 1 \in D(A) and A1 = 0.

ii) D(A) is closed under multiplication and separates points.

iii) Either R(A) \subset C(S) or there exists a complete separable metric space U, a transition function η from S to U, and an operator A_1 : D(A) \subset C_b(S) \to C(S\times U) such that

Af(x) = \int_U A_1 f(x, u)\,\eta(x, du),   f \in D(A).   (10.1)

iv) There exist \psi \in C(S), \psi \ge 1, and constants a_f such that f \in D(A) implies |Af(x)| \le a_f\psi(x), or, if A is of the form (10.1), there exist \psi_1 \in C(S\times U), \psi_1 \ge 1, and constants a_f such that, for all (x, u) \in S\times U, |A_1 f(x, u)| \le a_f\psi_1(x, u). (If A is of the form (10.1), then define \psi(x) \equiv \int_U \psi_1(x, u)\,\eta(x, du).)


v) A is separable in the sense that there exists a countable collection {(f_k, g_k)} \subset A such that the martingale problem for {(f_k, g_k)} is equivalent to the martingale problem for A.

vi) A_0 = \psi^{-1}A is a pre-generator (for each fixed u, if A is of the form (10.1)), that is, A_0 is dissipative and there are sequences of functions \mu_n : S \to P(S) and \lambda_n : S \to [0,\infty) such that for each (f, g) \in A,

g(x) = \lim_{n\to\infty} \lambda_n(x) \int_S (f(y) - f(x))\,\mu_n(x, dy)   (10.2)

for each x \in S.


Absorbing boundaries

Consider a diffusion X in a closed set S \subset R^d with absorbing boundary conditions, that is,

X(t) = X(0) + \int_0^{t\wedge\tau} \sigma(X(s))\,dW(s) + \int_0^{t\wedge\tau} b(X(s))\,ds,   (10.3)

where \tau = \inf\{t : X(t) \in \partial S\} = \inf\{t : X(t) \notin S^o\}.

Setting a(x) = \sigma(x)\sigma(x)^\top and

Lf(x) = \sum \frac{1}{2}a_{ij}(x)\partial_i\partial_j f(x) + \sum b_i(x)\partial_i f(x)

for f \in C^2_b(R^d), the natural generator would be Lf with domain being the C^2_b-functions satisfying Lf(x) = 0, x \in \partial S. This domain does not satisfy Condition 10.1(ii).


Exploit control representation

Take D(A) = C²_b(S), U = {0, 1},

A1f(x, u) = uLf(x)

and

η(x, du) = 1_{S°}(x) δ1(du) + 1_{∂S}(x) δ0(du),

where δ0 and δ1 are the Dirac measures at 0 and 1 respectively.

We have

Af(x) = 1_{S°}(x) Lf(x)

with domain satisfying Condition 10.1(ii).

Any solution of (10.3) will be a solution of the martingale problemfor A, and any solution of the martingale problem for A will be asolution of the martingale problem for the natural generator.
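The control form is easy to simulate: the control u = 1_{S°}(x) simply switches the dynamics off once the boundary is hit. A minimal Euler-Maruyama sketch in Python, with invented coefficients σ ≡ 0.3 and b ≡ 0 on S = [0, 1]:

```python
import math, random

def simulate_absorbed(x0, sigma, b, dt=1e-3, n=1000, seed=0):
    # Euler-Maruyama for dX = sigma(X) dW + b(X) dt on S = [0, 1] with
    # absorption: the control u = 1 in the interior and u = 0 on the
    # boundary, so the increment vanishes once the boundary is reached.
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        u = 1.0 if 0.0 < x < 1.0 else 0.0
        x += u * (sigma(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0) + b(x) * dt)
        x = min(max(x, 0.0), 1.0)  # project any overshoot onto the boundary
        path.append(x)
    return path

path = simulate_absorbed(0.5, lambda x: 0.3, lambda x: 0.0)
```

Once the path touches 0 or 1 it stays there, matching Af(x) = 1_{S°}(x) Lf(x).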


A martingale lemma

Let {Ft} and {Gt} be filtrations with Gt ⊂ Ft.

Lemma 10.2 Suppose U and V are {Ft}-adapted and

U(t) − ∫_0^t V(s) ds

is an {Ft}-martingale. Then

E[U(t)|Gt] − ∫_0^t E[V(s)|Gs] ds

is a {Gt}-martingale.

Proof. The lemma follows by the definition and properties of condi-tional expectations.


Martingale properties of conditional distributions

Corollary 10.3 If X is a solution of the martingale problem for A withrespect to the filtration Ft and πt is the conditional distribution of X(t)given Gt ⊂ Ft, then

πt f − π0 f − ∫_0^t πs Af ds  (10.4)

is a {Gt}-martingale for each f ∈ D(A).


Forward equation

Definition 10.4 A P(S)-valued function {νt, t ≥ 0} is a solution of the forward equation for A if for each t > 0, ∫_0^t νs ψ ds < ∞, and for each f ∈ D(A),

νt f = ν0 f + ∫_0^t νs Af ds.  (10.5)

Note that if π satisfies (10.4), then νt = E[πt] satisfies (10.5).
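For a finite state space the forward equation is concrete: A is a rate matrix Q, Af = Qf, and (10.5) is the linear system ν̇t = νt Q solved by νt = ν0 e^{tQ}. A small numerical self-check (the two-state rates are illustrative):

```python
def forward_solution(nu0, Q, t, terms=60):
    # nu_t = nu_0 exp(tQ) via the truncated power series sum_m nu_0 (tQ)^m / m!
    d = len(nu0)
    nu, term = list(nu0), list(nu0)
    for m in range(1, terms):
        term = [sum(term[i] * Q[i][j] for i in range(d)) * t / m for j in range(d)]
        nu = [nu[j] + term[j] for j in range(d)]
    return nu

Q = [[-2.0, 2.0], [3.0, -3.0]]   # illustrative two-state generator
nu0 = [1.0, 0.0]
f = [0.0, 1.0]                   # test function: indicator of state 1
Af = [sum(Q[i][j] * f[j] for j in range(2)) for i in range(2)]

t, n = 1.0, 2000
dt = t / n
# right side of (10.5): nu_0 f + int_0^t nu_s(Af) ds  (midpoint rule)
integral = sum(sum(forward_solution(nu0, Q, (k + 0.5) * dt)[i] * Af[i]
                   for i in range(2)) * dt for k in range(n))
lhs = sum(forward_solution(nu0, Q, t)[i] * f[i] for i in range(2))
rhs = sum(nu0[i] * f[i] for i in range(2)) + integral
```

The two sides of (10.5) agree up to the quadrature error.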


Equivalence of forward equation and martingale problem

Theorem 10.5 If {νt, t ≥ 0} is a solution of the forward equation for A, then there exists a solution X of the martingale problem for A satisfying νt f = E[f(X(t))].

Various forms of this result exist in the literature, beginning with the result of Echeverría (1982) for the stationary case, that is, νt ≡ ν0 and ν0 Af = 0. The extension of Echeverría's result to the forward equation is given in Theorem 4.9.19 of Ethier and Kurtz (1986) for locally compact spaces and in Theorem 3.1 of Bhatt and Karandikar (1993) for general complete separable metric spaces. The version given here is a special case of Corollary 1.12 of Kurtz and Stockbridge (2001).


Martingale characterization of conditional distributions

Theorem 10.6 Suppose that {πt, t ≥ 0} is a cadlag, P(S)-valued process with no fixed points of discontinuity, adapted to {Gt}, satisfying

E[∫_0^t πs ψ ds] < ∞, t > 0,

and that

πt f − π0 f − ∫_0^t πs Af ds

is a {Gt}-martingale for each f ∈ D(A). Then there exist a solution X̃ of the martingale problem for A, a P(S)-valued process {π̃t, t ≥ 0} with the same distribution as {πt, t ≥ 0}, and a filtration {G̃t} such that π̃t is the conditional distribution of X̃(t) given G̃t.


Conditioning on a process

Theorem 10.7 If Gt in Theorem 10.6 is generated by a cadlag process Y with no fixed points of discontinuity and by π(0), that is,

Gt = F^Y_t ∨ σ(π(0)),

then there exist a solution X̃ of the martingale problem for A, a P(S)-valued process {π̃t, t ≥ 0}, and a process Ỹ such that {π̃t, t ≥ 0} and Ỹ have the same joint distribution as {πt, t ≥ 0} and Y, and π̃t is the conditional distribution of X̃(t) given F^Ỹ_t ∨ σ(π̃(0)).


Idea of proof

Enlarge the state space so that the current state of the process contains all information about the past of the observation Y.

Let {bk}, {ck} ⊂ Cb(S0) satisfy 0 ≤ bk, ck ≤ 1, and suppose that the spans of {bk} and {ck} are bounded, pointwise dense in B(S0).

Let a1, a2, . . . be an ordering of the rationals with ai ≥ 1, and

Vki(t) = ck(Y(0)) − ai ∫_0^t Vki(s) ds + ∫_0^t bk(Y(s)) ds  (10.6)
       = ck(Y(0)) e^{−ai t} + ∫_0^t e^{−ai(t−s)} bk(Y(s)) ds.
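The two lines of (10.6) are the ODE form V̇ki = −ai Vki + bk(Y(t)) and its variation-of-constants solution. A quick numerical cross-check, with a made-up stand-in for s ↦ bk(Y(s)):

```python
import math

def v_ode(a, bY, T=1.0, n=40000, c0=0.5):
    # Euler scheme for V' = -a V + b_k(Y(t)),  V(0) = c_k(Y(0)) = c0
    dt, v = T / n, c0
    for k in range(n):
        v += (-a * v + bY(k * dt)) * dt
    return v

def v_explicit(a, bY, T=1.0, n=40000, c0=0.5):
    # c0 e^{-aT} + int_0^T e^{-a(T-s)} b_k(Y(s)) ds  (midpoint rule)
    dt = T / n
    integral = sum(math.exp(-a * (T - (k + 0.5) * dt)) * bY((k + 0.5) * dt) * dt
                   for k in range(n))
    return c0 * math.exp(-a * T) + integral

bY = lambda s: 0.5 + 0.4 * math.sin(3 * s)   # hypothetical observation functional
err = abs(v_ode(2.0, bY) - v_explicit(2.0, bY))
```

The two forms agree up to discretization error.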


Set V(t) = (Vki(t) : k, i ≥ 1) ∈ [0, 1]^∞,

D(Â) = { f(x) Π_{k,i=1}^m gki(vki) : f ∈ D(A), gki ∈ C¹[0, 1], m = 1, 2, . . . }

and

Â(fg)(x, v) = g(v) Af(x) + f(x) Σ_{k,i} (−ai vki + bk(γ(x))) ∂ki g(v).

For fg ∈ D(Â),

πt f g(V(t)) − π0 f g(V(0)) − ∫_0^t ( g(V(s)) πs Af + πs f Σ_{k,i} (−ai Vki(s) + bk(Y(s))) ∂ki g(V(s)) ) ds

is an F^Y_t-martingale, and νt defined by

νt(fg) = E[πt f g(V(t))]

is a solution of the forward equation for Â.


Examples

• πt = νt, so πt f − π0 f − ∫_0^t πs Af ds = 0.

• πt = (1/n) Σ_{i=1}^n δ_{Xi(t)}, where X1, . . . , Xn are independent solutions of the martingale problem for A.

• {πt, t ≥ 0} a neutral Fleming-Viot process with mutation process determined by A.


An application

Lemma 10.8 Suppose {Xn} is relatively compact in DS[0,∞) and that for each n, {F^n_t} is a filtration and π^(n)_t is the conditional distribution of Xn(t) given F^n_t. Then for each ε > 0 and T > 0, there exists a compact Kε,T ⊂ S such that

sup_n P{ sup_{t≤T} π^(n)_t(K^c_{ε,T}) ≥ ε } ≤ ε.


Partially observed processes

Let γ : S→ S0 be Borel measurable.

Corollary 10.9 If, in Theorem 10.7, Y and π satisfy

∫_S h∘γ(x) πt(dx) = h(Y(t)) a.s.

for all h ∈ B(S0) and t ≥ 0, then Y(t) = γ(X(t)).

(cf. Kurtz and Ocone (1988) and Kurtz (1998))


The filtered martingale problem

Definition 10.10 A P(S)-valued process π and an S0-valued process Y are a solution of the filtered martingale problem for (A, γ) if

πt f − π0 f − ∫_0^t πs Af ds

is an F^Y_t ∨ σ(π(0))-martingale for each f ∈ D(A) and ∫_S h∘γ(x) πt(dx) = h(Y(t)) a.s. for all h ∈ B(S0) and t ≥ 0.

Theorem 10.11 Let ϕ0 ∈ P(P(S)) and define µ0 = ∫_{P(S)} µ ϕ0(dµ). If uniqueness holds for the martingale problem for (A, µ0), then uniqueness holds for the filtered martingale problem for (A, γ, ϕ0). If uniqueness holds for the filtered martingale problem for (A, γ, ϕ0), then {πt, t ≥ 0} is a Markov process.


Markov mappings

Theorem 10.12 Let γ : S → S0 be Borel measurable and let α be a transition function from S0 into S satisfying

α(y, γ^{-1}(y)) = 1.

Let µ0 ∈ P(S0), ν0 = ∫ α(y, ·) µ0(dy), and define

C = { ( ∫_S f(z) α(·, dz), ∫_S Af(z) α(·, dz) ) : f ∈ D(A) }.

If Y is a solution of the MGP for (C, µ0), then there exists a solution Z of the MGP for (A, ν0) such that γ∘Z and Y have the same distribution on M_{S0}[0,∞), and

E[f(Z(t)) | F^Y_t] = ∫ f(z) α(Y(t), dz)

(at least for almost every t).


Uniqueness

Corollary 10.13 If uniqueness holds for the MGP for (A, ν0), then uniqueness holds for the M_{S0}[0,∞)-MGP for (C, µ0). If Y has sample paths in D_{S0}[0,∞), then uniqueness holds for the D_{S0}[0,∞)-martingale problem for (C, µ0).

Existence for (C, µ0) and uniqueness for (A, ν0) imply existence for (A, ν0) and uniqueness for (C, µ0), and hence that Y is Markov.


Intertwining condition

Let α(y, Γ) be a transition function from S0 to S satisfying α(y, γ^{-1}(y)) = 1, and define S(t) : B(S0) → B(S0) by

S(t)g(y) = αT(t)(g∘γ)(y) ≡ ∫_S T(t)(g∘γ)(x) α(y, dx).

Theorem 10.14 (Rogers and Pitman (1981), cf. Rosenblatt (1966)) If for each t ≥ 0,

αT(t)f = S(t)αf, f ∈ B(S),  (S(t) is a semigroup)

and X is a Markov process with initial distribution α(y, ·) and semigroup T(t), then Y = γ∘X is a Markov process with Y(0) = y and

P{X(t) ∈ Γ | F^Y_t} = α(Y(t), Γ).


Generator for Y

Note that

αT(t)f = S(t)αf, f ∈ B(S),

suggests that the generator for Y is given by

Cαf = αAf.


Burke’s output theoremKliemann, Koch and Marchetti

X = (Q,D), an M/M/1 queue and its departure process

Af(k, l) = λ(f(k + 1, l)− f(k, l)) + µ1k>0(f(k − 1, l + 1)− f(k, l))

γ(k, l) = l

Assume λ < µ and define

α(l, (k, l)) = (1−λµ

)(λ

µ)k−1, k = 0, 1, 2, . . . α(l, (k,m)) = 0, m 6= l

Then

αAf(l) = µ∞∑k=1

(1− λ

µ)(λ

µ)k−1(f(k − 1, l + 1)− f(k − 1, l))

= λ(αf(l + 1)− αf(l))
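The displayed identity αAf(l) = λ(αf(l+1) − αf(l)) can be checked numerically by truncating the geometric sums; the test function f below is arbitrary:

```python
import math

lam, mu = 1.0, 2.0            # arrival and service rates, lam < mu
rho = lam / mu
K = 400                       # truncation level for the geometric sums

def f(k, l):                  # an arbitrary bounded test function on (queue, departures)
    return math.sin(k + 0.7 * l)

def Af(k, l):                 # M/M/1 generator from the slide
    out = lam * (f(k + 1, l) - f(k, l))
    if k > 0:
        out += mu * (f(k - 1, l + 1) - f(k, l))
    return out

def alpha(g, l):              # integrate g against alpha(l, .), geometric in k
    return sum((1 - rho) * rho ** k * g(k, l) for k in range(K))

lhs = alpha(Af, 5)
rhs = lam * (alpha(f, 6) - alpha(f, 5))
err = abs(lhs - rhs)
```

The truncation error is of order (λ/µ)^K, so the two sides agree to near machine precision.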


Poisson output

Therefore, there exists a solution (Q, D) of the martingale problem for A such that D is a Poisson process with parameter λ and

P{Q(t) = k | F^D_t} = (1 − λ/µ)(λ/µ)^k,

that is, Q(t) is independent of F^D_t and is geometrically distributed.


Ordered population processes (Donnelly and Kurtz)

B the generator of an E-valued process,

Anf(x) = Σ_{i=1}^n Bi f(x) + Σ_{1≤i<j≤n} (f(θij(x)) − f(x)),

where y = θij(x) is the element of E^n satisfying

yk = xk, k ≤ j − 1,  yj = xi,  yk = xk−1, k > j,

that is, θij(x) = (x1, . . . , xj−1, xi, xj, . . . , xn−1).


Mapping and inverse transition function

γ(x) = (1/n) Σ_{i=1}^n δ_{xi} ∈ E0 = Pn(E).

For µ ∈ Pn(E), with µ = (1/n) Σ_{i=1}^n δ_{xi},

α(µ, ·) = (1/n!) Σ_{σ∈Σn} δ_{(x_{σ1}, . . . , x_{σn})}.


Generator on Pn(E)

Let

An_0 f(x) = Σ_{i=1}^n Bi f(x) + (1/2) Σ_{1≤i≠j≤n} (f(ηij(x)) − f(x)),

where y = ηij(x) is obtained from x by replacing xj by a copy of xi.

If

F(µ) = ⟨f, µ^(n)⟩ ≡ ∫ f(x) α(µ, dx),

then

Cn F(µ) = αAn f(µ) = ⟨An f, µ^(n)⟩ = ⟨An_0 f, µ^(n)⟩ = αAn_0 f(µ)

and

⟨An_0 f, γ(x)^(n)⟩ = An_0 (F∘γ)(x).

If Z is a solution of the martingale problem for An_0, then γ∘Z is a solution for Cn.
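The middle equality ⟨An f, µ^(n)⟩ = ⟨An_0 f, µ^(n)⟩ says that the lookdown terms θij and the symmetrized Moran terms ηij agree once averaged over α(µ, ·). For the jump parts this can be verified exactly for small n:

```python
import itertools, random

n = 3
x = ("a", "b", "c")
rng = random.Random(1)
vals = {}
def f(y):                     # an arbitrary (memoized random) function on E^n
    if y not in vals:
        vals[y] = rng.random()
    return vals[y]

def theta(y, i, j):           # 1-based i < j: (y1,...,y_{j-1}, y_i, y_j,...,y_{n-1})
    return y[: j - 1] + (y[i - 1],) + y[j - 1 : n - 1]

def eta(y, i, j):             # 1-based i != j: replace y_j by a copy of y_i
    z = list(y)
    z[j - 1] = y[i - 1]
    return tuple(z)

# average over alpha(mu, .), i.e. over all orderings of x
perms = [tuple(x[s] for s in p) for p in itertools.permutations(range(n))]
lhs = sum(sum(f(theta(y, i, j)) - f(y)
              for i in range(1, n + 1) for j in range(i + 1, n + 1))
          for y in perms) / len(perms)
rhs = sum(0.5 * sum(f(eta(y, i, j)) - f(y)
                    for i in range(1, n + 1) for j in range(1, n + 1) if i != j)
          for y in perms) / len(perms)
err = abs(lhs - rhs)
```

The agreement is exact, not just approximate, reflecting that both sides average the same collection of "duplicate one coordinate, drop one" configurations.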


Pitman’s theorem

Z standard Brownian motion,

M(t) = sup_{s≤t} Z(s),  V(t) = M(t) − Z(t),

X(t) = (Z(t), M(t) − Z(t)),

Y(t) = 2M(t) − Z(t) = 2V(t) + Z(t).

Af(z, v) = (1/2) fzz(z, v) − fzv(z, v) + (1/2) fvv(z, v),  b.c. fv(z, 0) = 0

F(y) = αf(y) = (1/y) ∫_0^y f(y − 2v, v) dv

αAf(y) = (1/2) F″(y) + (1/y) F′(y)
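The last line, αAf(y) = (1/2)F″(y) + (1/y)F′(y) (the Bessel(3) generator), can be checked numerically for a test function satisfying the boundary condition fv(z, 0) = 0; here f(z, v) = cos z cos v:

```python
import math

def f(z, v):                  # f_v(z, 0) = 0, as the boundary condition requires
    return math.cos(z) * math.cos(v)

def Af(z, v):                 # (1/2) f_zz - f_zv + (1/2) f_vv = -cos(z - v)
    return -math.cos(z - v)

def quad(g, a, b, m=4000):    # midpoint rule
    h = (b - a) / m
    return sum(g(a + (k + 0.5) * h) for k in range(m)) * h

def F(y):                     # alpha f(y) = (1/y) int_0^y f(y - 2v, v) dv
    return quad(lambda v: f(y - 2 * v, v), 0.0, y) / y

def alphaAf(y):
    return quad(lambda v: Af(y - 2 * v, v), 0.0, y) / y

y, h = 1.3, 1e-3
Fpp = (F(y + h) - 2 * F(y) + F(y - h)) / h ** 2
Fp = (F(y + h) - F(y - h)) / (2 * h)
err = abs(alphaAf(y) - (0.5 * Fpp + Fp / y))
```

Finite differences of the averaged function F match the averaged generator to well within the discretization error.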


Filtering equations

X a Markov process with generator A ⊂ Cb(S) × Cb(S),

Y(t) = Y(0) + W(t) + ∫_0^t h(X(s)) ds.

Assume for t ≥ 0, E[∫_0^t |h(X(s))| ds] < ∞ and ∫_0^t (πs|h|)² ds < ∞.

Then, assuming independence of X and W, the conditional distribution πt of X(t) given F^Y_t satisfies

πt f = π0 f + ∫_0^t πs Af ds + ∫_0^t [πs(hf) − πsh πsf] [dY(s) − πsh ds].  (10.7)

Note that Y(t) − ∫_0^t πsh ds is a Brownian motion.
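For a two-state signal, (10.7) reduces to a scalar SDE for p(t) = πt{1} (the Wonham filter). A hedged Euler sketch, with invented rates, h = 1_{state 1}, and the signal's exponential clocks approximated by Bernoulli steps:

```python
import math, random

rng = random.Random(42)
q01, q10 = 1.0, 1.5           # signal rates (illustrative)
dt, n = 1e-3, 2000

x, p, ps = 0, 0.5, []         # hidden chain, filter p = P{X(t) = 1 | F_t^Y}
for _ in range(n):
    # signal step (exponential clocks approximated by Bernoulli(rate * dt))
    if x == 0 and rng.random() < q01 * dt:
        x = 1
    elif x == 1 and rng.random() < q10 * dt:
        x = 0
    # observation increment dY = h(X) dt + dW with h = 1_{state 1}
    dY = (1.0 if x == 1 else 0.0) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
    # Euler step of (10.7) for f = 1_{state 1}:
    #   pi_s(Af) = (1 - p) q01 - p q10,  pi_s(hf) - pi_s h pi_s f = p(1 - p)
    p += ((1 - p) * q01 - p * q10) * dt + p * (1 - p) * (dY - p * dt)
    p = min(max(p, 0.0), 1.0)  # guard against discretization overshoot
    ps.append(p)
```

In continuous time (10.7) preserves normalization automatically (take f ≡ 1); the clipping only guards against Euler overshoot.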


Uniqueness problem

Suppose π satisfies ∫_0^t (πsh)² ds < ∞ a.s., is adapted to F^Y_t, and

πt f = π0 f + ∫_0^t πs Af ds + ∫_0^t [πs(hf) − πsh πsf] [dY(s) − πsh ds].  (10.8)

Under a Girsanov change of measure, the stochastic integral is at least a local martingale; assume, for the moment, that it is a martingale. Then under the new measure P̃, π and Y have the same joint distribution as π̃ and Ỹ have under the original measure P. Consequently, if π̃t = H(t, Ỹ) P-a.s., then πt = H(t, Y) P̃-a.s.


Stopped martingale problem

Assume that for each f ∈ D(A), there exists a constant af so that |Af(x)| ≤ af ψ(x).

Definition 10.15 A measurable, S-valued process X and a nonnegative random variable τ are a solution of the stopped martingale problem for A if there exists a filtration {Ft} such that X is Ft-adapted, τ is an Ft-stopping time,

E[∫_0^{t∧τ} ψ(X(s)) ds] < ∞, t ≥ 0,  (10.9)

and for each f ∈ D(A),

f(X(t∧τ)) − f(X(0)) − ∫_0^{t∧τ} Af(X(s)) ds  (10.10)

is an {Ft}-martingale.


Local martingale problem

Definition 10.16 A measurable, S-valued process X is a solution of the local-martingale problem for A if there exist a filtration {Ft} such that X is Ft-adapted and a sequence {τn} of Ft-stopping times such that τn → ∞ a.s. and, for each n, (X, τn) is a solution of the stopped martingale problem for A.


Stopped forward equation

Definition 10.17 A pair of measure-valued functions {(ν⁰_t, ν¹_t), t ≥ 0} is a solution of the stopped forward equation for A if for each t ≥ 0, νt ≡ ν⁰_t + ν¹_t ∈ P(S) and ∫_0^t ν¹_s ψ ds < ∞, t → ν⁰_t(C) is nondecreasing for all C ∈ B(S), and for each f ∈ D(A),

νt f = ν0 f + ∫_0^t ν¹_s Af ds.  (10.11)

Lemma 10.18 If A satisfies Condition 10.1 and {(ν⁰_t, ν¹_t), t ≥ 0} is a solution of the stopped forward equation for A, then there exists a solution (X, τ) of the stopped martingale problem for A such that ν⁰_t f = E[1_{[τ,∞)}(t) f(X(τ))] and ν¹_t f = E[1_{[0,τ)}(t) f(X(t))].


11. Equivalence of stochastic equations and martingale problems

• Ito equations

• Martingale properties/problems

• Forward equations

• Equivalence of forward equation and martingale problem

• Equivalence of martingale problem and Ito equation


Markov processes in Rd

Typically, a Markov process X in Rd has a generator of the form

Af(x) = (1/2) Σ_{i,j=1}^d aij(x) ∂²/∂xi∂xj f(x) + b(x)·∇f(x) + ∫_{Rd} (f(x+y) − f(x) − 1_{B1}(y) y·∇f(x)) η(x, dy),

where B1 is the ball of radius 1 centered at the origin and η satisfies ∫ 1 ∧ |y|² η(x, dy) < ∞ for each x. (See, for example, Stroock (1975), Cinlar, Jacod, Protter, and Sharpe (1980).)

η(x, Γ) gives the "rate" at which jumps satisfying X(s) − X(s−) ∈ Γ occur.

B1 can be replaced by any set C containing an open neighborhood of the origin provided that the drift term is replaced by

bC(x)·∇f(x) = ( b(x) + ∫_{Rd} y(1C(y) − 1_{B1}(y)) η(x, dy) )·∇f(x).


A representation for η

Suppose that there exist λ : Rd × S → [0, 1], γ : Rd × S → Rd, and a σ-finite measure µ1 on a measurable space (S, S) such that

η(x, Γ) = ∫_S λ(x, u) 1_Γ(γ(x, u)) µ1(du).

This representation is always possible but in no way unique.


Reformulation of the generator

For simplicity, assume that there exists a fixed set S1 ∈ S such that for S2 = S − S1,

∫_S λ(x, u)(1_{S1}(u)|γ(x, u)|² + 1_{S2}(u)) µ1(du) < ∞

and

∫_S λ(x, u)|γ(x, u)| |1_{S1}(u) − 1_{B1}(γ(x, u))| µ1(du) < ∞.

Then

Af(x) = (1/2) Σ_{i,j=1}^d aij(x) ∂²/∂xi∂xj f(x) + b̃(x)·∇f(x) + ∫_S λ(x, u)(f(x + γ(x, u)) − f(x) − 1_{S1}(u) γ(x, u)·∇f(x)) µ1(du),

where

b̃(x) = b(x) + ∫_S λ(x, u) γ(x, u)(1_{S1}(u) − 1_{B1}(γ(x, u))) µ1(du).


Technical assumptions

For each compact K ⊂ Rd,

sup_{x∈K} ( |b(x)| + ∫_{S0} |σ(x, u)|² µ0(du) + ∫_{S1} λ(x, u)|γ(x, u)|² µ1(du) + ∫_{S2} λ(x, u)(|γ(x, u)| ∧ 1) µ1(du) ) < ∞,

a(x) = ∫_{S0} σ(x, u) σ(x, u)^T µ0(du).

Let D(A) = C²_c(Rd) and assume that for f ∈ D(A), Af ∈ Cb(Rd). The continuity assumption can be removed and the boundedness assumption relaxed using existing technology.

For x outside the support of f,

Af(x) = ∫_S λ(x, u) f(x + γ(x, u)) µ1(du).


Ito equations

X should satisfy a stochastic differential equation of the form

X(t) = X(0) + ∫_{S0×[0,t]} σ(X(s), u) W(du × ds) + ∫_0^t b̃(X(s)) ds
 + ∫_{[0,1]×S1×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
 + ∫_{[0,1]×S2×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(dv × du × ds),

for t < τ∞ ≡ lim_{k→∞} inf{t : |X(t−)| ∨ |X(t)| ≥ k}, where W is Gaussian white noise determined by µ0, ξ is a Poisson random measure on [0, 1] × S × [0,∞) with mean measure ℓ × µ1 × ℓ (ℓ Lebesgue measure), and ξ̃ denotes ξ centered by its mean measure.
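A time step of such an equation is easy to sketch: diffuse, then for each point (v, u) of ξ in the step accept the jump γ(X−, u) when v ≤ λ(X−, u). Below µ1 is finite, so S1 can be taken empty and no compensation is needed; every coefficient is invented for illustration:

```python
import math, random

def poisson(mean, rng):          # Knuth's method; adequate for small means
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(7)
sigma = lambda x: 0.2
b = lambda x: -0.5 * x
lam = lambda x, u: 1.0 / (1.0 + x * x)   # thinning probability lambda(x, u) in [0, 1]
gamma = lambda x, u: 0.3 * u             # jump size for mark u
mu1_mass = 2.0                           # mu1 = 2 * Uniform[-1, 1], a finite measure

dt, n = 1e-3, 1000
x, jumps = 0.5, 0
for _ in range(n):
    x += sigma(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0) + b(x) * dt
    for _ in range(poisson(mu1_mass * dt, rng)):
        u, v = rng.uniform(-1.0, 1.0), rng.random()  # a point (v, u) of xi
        if v <= lam(x, u):                           # thinning by lambda
            x += gamma(x, u)
            jumps += 1
```

The thinning step is exactly the indicator 1_{[0,λ(X(s−),u)]}(v) in the equation.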


Martingale properties/problems

Assume that τ∞ = ∞. Applying Ito's formula,

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds
 = ∫_{S0×[0,t]} ∇f(X(s))^T σ(X(s), u) W(du × ds)
 + ∫_{[0,1]×S×[0,t]} 1_{[0,λ(X(s−),u)]}(v)(f(X(s−) + γ(X(s−), u)) − f(X(s−))) ξ̃(dv × du × ds).

The right side is a local martingale, so under the assumption that Af is bounded, it is a martingale.

Definition 11.1 X is a solution of the martingale problem for A if there exists a filtration {Ft} such that

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds  (11.1)

is an {Ft}-martingale for each f ∈ D(A).


Forward equation

Taking expectations in (11.1),

νt f = ν0 f + ∫_0^t νs Af ds,  (11.2)

where νt is the distribution of X(t) (νt f = ∫ f dνt).

Theorem 11.2 Every solution of the stochastic equation gives a solution of the martingale problem, and every solution of the martingale problem gives a solution of the forward equation.


Equivalence of forward equation and martingale problem

Theorem 11.3 Suppose A ⊂ Cb(E) × Cb(E) is a pre-generator with bp-separable graph and D(A) is closed under multiplication and separates points in E.

If {νt, t ≥ 0} is a solution of the forward equation for A, then there exists a solution X of the martingale problem for A satisfying νt f = E[f(X(t))].


Equivalence of SDE and MGP for diffusions

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds.  (11.3)

By Ito's formula,

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds = ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s)

for

Af(x) = (1/2) Σ_{i,j} aij(x) ∂²/∂xi∂xj f(x) + Σ_i bi(x) ∂/∂xi f(x),

where ((aij)) = σσ^T.

If X is a solution of the MGP for A and σ is invertible, then

W(t) = ∫_0^t σ^{-1}(X(s)) dX(s) − ∫_0^t σ^{-1}(X(s)) b(X(s)) ds

is a standard Brownian motion and (11.3) is satisfied.
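The recovery of W from X can be seen discretely: applying the displayed formula to an Euler path returns exactly the Gaussian increments that drove it (the coefficients are hypothetical, with σ bounded away from 0 so that it is invertible):

```python
import math, random

rng = random.Random(3)
sigma = lambda x: 1.0 + 0.5 * math.sin(x)   # in [0.5, 1.5], hence invertible
b = lambda x: -x

dt, n = 1e-3, 1000
x, xs, dws = 0.3, [0.3], []
for _ in range(n):
    dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
    x += sigma(x) * dw + b(x) * dt
    xs.append(x)
    dws.append(dw)

# W(t) = int_0^t sigma^{-1}(X) dX - int_0^t sigma^{-1}(X) b(X) ds, discretized
w_rec, w_true, err = 0.0, 0.0, 0.0
for k in range(n):
    w_rec += (xs[k + 1] - xs[k]) / sigma(xs[k]) - b(xs[k]) / sigma(xs[k]) * dt
    w_true += dws[k]
    err = max(err, abs(w_rec - w_true))
```

Each reconstructed increment equals the original dW up to floating-point rounding, since the Euler step is inverted exactly.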


The problem

Given X, a solution of the martingale problem for

Af(x) = (1/2) Σ_{i,j=1}^d aij(x) ∂²/∂xi∂xj f(x) + b̃(x)·∇f(x) + ∫_S λ(x, u)(f(x + γ(x, u)) − f(x) − 1_{S1}(u) γ(x, u)·∇f(x)) µ1(du),

"construct" W and ξ so that

X(t) = X(0) + ∫_{S0×[0,t]} σ(X(s), u) W(du × ds) + ∫_0^t b̃(X(s)) ds
 + ∫_{[0,1]×S1×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ̃(dv × du × ds)
 + ∫_{[0,1]×S2×[0,t]} 1_{[0,λ(X(s−),u)]}(v) γ(X(s−), u) ξ(dv × du × ds).

What do we do?


Cheat

Let d = 1 and consider

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds.  (11.4)

Define

Z(t) = Z(0) + W(t) mod 1.

Then (X, Z) is a solution of the MGP for

Âf(x, z) = (1/2) a(x) ∂²/∂x² f(x, z) + σ(x) ∂²/∂x∂z f(x, z) + (1/2) ∂²/∂z² f(x, z) + b(x) ∂/∂x f(x, z).

Conversely, if (X, Z) is a solution of the martingale problem for Â, then the corresponding W can be recovered from Z and X is a weak solution of (11.4).


Markov mappings

Theorem 11.4 Let A ⊂ C(E) × C(E) be a pre-generator with bp-separable graph, with D(A) closed under multiplication and separating. Let γ : E → E0 be Borel measurable and let α be a transition function from E0 into E satisfying

α(y, γ^{-1}(y)) = 1.

Let µ0 ∈ P(E0), ν0 = ∫ α(y, ·) µ0(dy), and define

C = { ( ∫_E f(z) α(·, dz), ∫_E Af(z) α(·, dz) ) : f ∈ D(A) }.

If Y is a solution of the MGP for (C, µ0), then there exists a solution Z of the MGP for (A, ν0) such that γ∘Z and Y have the same distribution on M_{E0}[0,∞).


Equivalence to original MGP

Let f(x, z) = f1(x)f2(z), f1 ∈ C²_c(R), f2 ∈ C²([0, 1]), f2(0) = f2(1), f2′(0) = f2′(1), f2″(0) = f2″(1), and define α by αf(x) = ∫_0^1 f(x, z) dz. Then, for

Âf(x, z) = (1/2) a(x) ∂²/∂x² f(x, z) + σ(x) ∂²/∂x∂z f(x, z) + (1/2) ∂²/∂z² f(x, z) + b(x) ∂/∂x f(x, z),

we have

αÂf(x) = Aαf(x),

where

Af(x) = (1/2) a(x) d²/dx² f(x) + b(x) d/dx f(x),

and any solution of the MGP for A corresponds to a solution of the MGP for Â and hence is a weak solution of the SDE.


General case: Gaussian white noise

Let ϕ1, ϕ2, . . . be a complete orthonormal basis for L²(µ0). Then W is completely determined by

W(ϕi, t) = ∫_{S0×[0,t]} ϕi(u) W(du × ds), i = 1, 2, . . . .

In particular, if H is an {Ft}-adapted process with sample paths in D_{L²(µ0)}[0,∞), then

∫_{S0×[0,t]} H(s−, u) W(du × ds) = Σ_{i=1}^∞ ∫_0^t ⟨H(s−, ·), ϕi⟩ dW(ϕi, s).

Yi(t) = Yi(0) + W(ϕi, t) mod 1.


General case: Poisson random measure

{Di} a partition of S such that µ1(Di) < ∞ and S1 = ∪_{i∈I1} Di, S2 = ∪_{i∈I2} Di.

Ni(t) ≡ ξ([0, 1] × Di × [0, t]),

ξi(· × [0, t]) ≡ ξ(· ∩ ([0, 1] × Di) × [0, t]) = Σ_{k=0}^{Ni(t)−1} δ_{(Vi,k, Ui,k)},

where {Vi,k, Ui,k : i ≥ 1, k ≥ 0} are independent, Vi,k is uniform-[0, 1], and Ui,k has distribution µ1(· ∩ Di)/µ1(Di).

Define Zi(t) = (V_{i,Ni(t)}, U_{i,Ni(t)}). Then Zi is a Markov process, and Zi(t) is independent of σ(ξ(· × [0, s]), s ≤ t).

Note that

∫_{[0,1]×S×[0,t]} H(v, u, s−) ξ(dv × du × ds) = Σ_i ∫_0^t H(Zi(s−), s−) dNi(s).


Computation of Â

D0 ⊂ C²([0, 1)) the collection of functions satisfying f(0) = f(1−), f′(0) = f′(1−), and f″(0) = f″(1−).

D(Â) = { f0(x) Π_{i=1}^{m1} f1i(yi) Π_{i=1}^{m2} f2i(zi) : f0 ∈ C²_c(Rd), f1i ∈ D0, f2i ∈ C([0, 1] × Di) }.

For f ∈ D(Â), derive Âf by applying Ito's formula to

f0(X(t)) Π_{i=1}^{m1} f1i(Yi(t)) Π_{i=1}^{m2} f2i(Zi(t)).


Computation of Â

Define

Lx f(x) = (1/2) Σ_{i,j=1}^d aij(x) ∂²/∂xi∂xj f(x) + b̃(x)·∇f(x),  Ly f(y) = (1/2) Σ_k ∂²/∂yk² f(y).

For cik(x) = ∫_{S0} σi(x, u) ϕk(u) µ0(du),

Lxy f(x, y) = Σ_{i,k} cik(x) ∂xi ∂yk f(x, y).

Let γ̃(x, ũ) = 1_{[0,λ(x,u)]}(v) γ(x, u) for u ∈ S, v ∈ [0, 1], ũ = (u, v). For z ∈ Π_i ([0, 1] × Di), let ρ(z, ũ) be the element of Π_i ([0, 1] × Di) obtained by replacing zi by ũ provided u ∈ Di. Define

Jf(x, y, z) = Σ_i ∫_{Di} ∫_0^1 ( f(x + γ̃(x, zi), y, ρ(z, ũ)) − f(x, y, z) − 1_{S1}(u) γ̃(x, ũ)·∇x f(x, y, z) ) dv µ1(du).

Then

Âf(x, y, z) = Lx f(x, y, z) + Ly f(x, y, z) + Lxy f(x, y, z) + Jf(x, y, z)

and αÂf(x) = Aαf(x).


What have we proved?

Let X̃ be a solution of the martingale problem for A. Then there exists a solution (X, Y, Z) of the martingale problem for Â such that X has the same distribution as X̃. For f ∈ C²_c(Rd),

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds
 = ∫_{S0×[0,t]} ∇f(X(s))^T σ(X(s), u) W(du × ds)
 + ∫_{[0,1]×S×[0,t]} 1_{[0,λ(X(s−),u)]}(v)(f(X(s−) + γ(X(s−), u)) − f(X(s−))) ξ̃(dv × du × ds).

X is cadlag in Rd ∪ {∞}. If X doesn't hit ∞, then (X, W, ξ) satisfy the SDE.


12. Genealogies and ordered representations of measure-valued processes

Particle representations

• de Finetti theorem

• Limit theorems for de Finetti measures

• Conditionally Poisson random measures

• Representations for processes

Exchangeable representations (Donnelly and Kurtz (1999))

• Neutral population models

• Infinite population limits

• Ancestral levels

• Coalescents

• Finite ancestry property (Schweinsberg's theorem)

• Applications

  – Conditioning on constant population size

  – Distribution at extinction

  – Conditioning on nonextinction

• Models with discrete generations


Particle representations of random probability measures

If ξ1, ξ2, . . . is exchangeable, then there exists a random probability measure Φ such that

Φ = lim_{n→∞} (1/n) Σ_{k=1}^n δ_{ξk}.

If ξ is a conditionally Poisson random measure on S × [0,∞) with Cox measure Ξ × m, m being Lebesgue measure, then

Ξ = lim_{K→∞} (1/K) ξ(· × [0, K]).


de Finetti’s theorem

ξ_1, ξ_2, . . . is exchangeable if

P{ξ_1 ∈ Γ_1, . . . , ξ_m ∈ Γ_m} = P{ξ_{s_1} ∈ Γ_1, . . . , ξ_{s_m} ∈ Γ_m}

for (s_1, . . . , s_m) any permutation of (1, . . . , m).

Theorem 12.1 (de Finetti) Let ξ_1, ξ_2, . . . be exchangeable. Then there exists a random probability measure Φ such that for every bounded, measurable g,

lim_{N→∞} (g(ξ_1) + · · · + g(ξ_N))/N = ∫ g(x) Φ(dx)  a.s.,

so lim_{N→∞} (1/N) ∑_{i=1}^N δ_{ξ_i} = Φ.

In addition,

E[∏_{i=1}^m g_i(ξ_i) | Φ] = ∏_{i=1}^m ⟨Φ, g_i⟩ = ∏_{i=1}^m ∫ g_i dΦ.
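As a hedged illustration (not from the notes), the a.s. convergence in de Finetti's theorem can be checked numerically for a Beta-mixed Bernoulli sequence, where the de Finetti measure Φ is the Bernoulli(P) law for a random P:

```python
import numpy as np

# Illustration only: an exchangeable 0/1 sequence built by first drawing a
# random success probability P ~ Beta(2, 2) and then sampling iid
# Bernoulli(P).  The de Finetti measure Φ is the Bernoulli(P) law, so for
# g = identity the empirical average must converge to ∫ g dΦ = P.
rng = np.random.default_rng(0)
p = rng.beta(2.0, 2.0)            # the random directing parameter
xi = rng.random(200_000) < p      # exchangeable sequence ξ_1, ξ_2, ...

empirical = xi.mean()             # (g(ξ_1) + ... + g(ξ_N)) / N
print(empirical, p)
assert abs(empirical - p) < 1e-2  # a.s. limit, up to Monte Carlo error
```

Conditioned on P, the sequence is iid, which is exactly the conditional independence asserted in the last display of the theorem.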


Basic convergence lemma

Lemma 12.2 (Kotelenez and Kurtz (2010)) For n = 1, 2, . . ., let ξ_1^n, . . . , ξ_{N_n}^n be exchangeable in S (allowing N_n = ∞). Let Ξ_n be the empirical measure,

Ξ_n = (1/N_n) ∑_{i=1}^{N_n} δ_{ξ_i^n}.

Assume N_n → ∞, and for each m = 1, 2, . . ., (ξ_1^n, . . . , ξ_m^n) ⇒ (ξ_1, . . . , ξ_m) in S^m.

Then {ξ_i} is exchangeable and, setting ξ_i^n = s_0 ∈ S for i > N_n, (Ξ_n, ξ_1^n, ξ_2^n, . . .) ⇒ (Ξ, ξ_1, ξ_2, . . .) in P(S) × S^∞, where Ξ is the de Finetti measure for {ξ_i}.

If for each m, (ξ_1^n, . . . , ξ_m^n) → (ξ_1, . . . , ξ_m) in probability in S^m, then Ξ_n → Ξ in probability in P(S).


Poisson random measures

S a Polish space and ν a σ-finite measure on B(S).

ξ is a Poisson random measure with mean measure ν if

a) ξ is a random counting measure on S.

b) For each A ∈ B(S) with ν(A) < ∞, ξ(A) is Poisson distributed with parameter ν(A).

c) For A_1, A_2, . . . ∈ B(S) disjoint, ξ(A_1), ξ(A_2), . . . are independent.

Lemma 12.3 If ν(A) < ∞, then conditioned on ξ(A), the point process ξ(· ∩ A) has the same distribution as ∑_{i=1}^{ξ(A)} δ_{ζ_i}, where ζ_1, . . . , ζ_{ξ(A)} are iid with distribution ν(A)^{−1} ν(· ∩ A).

If H : S → S_0 is Borel measurable and ξ̃(A) = ξ(H^{−1}(A)), then ξ̃ is a Poisson random measure on S_0 with mean measure ν̃ given by ν̃(A) = ν(H^{−1}(A)).


Moment identities

If ξ is a Poisson random measure with mean measure ν,

E[e^{∫ f(z) ξ(dz)}] = e^{∫ (e^f − 1) dν},

or, letting ξ = ∑_i δ_{Z_i},

E[∏_i g(Z_i)] = e^{∫ (g−1) dν}.

Similarly,

E[∑_j h(Z_j) ∏_i g(Z_i)] = ∫ hg dν · e^{∫ (g−1) dν},

and

E[∑_{i≠j} h(Z_i) h(Z_j) ∏_k g(Z_k)] = (∫ hg dν)^2 e^{∫ (g−1) dν}.
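The first identity is easy to sanity-check by Monte Carlo; the sketch below (an illustration, not part of the notes) takes S = [0, 1], ν = 2·Lebesgue, and f(z) = z/2, for which the right-hand side has a closed form:

```python
import numpy as np

# Monte Carlo check of E[exp(∫ f dξ)] = exp(∫ (e^f - 1) dν) for a Poisson
# random measure ξ on S = [0, 1] with mean measure ν = c * Lebesgue.
rng = np.random.default_rng(1)
c = 2.0                                   # total mass ν(S)
f = lambda z: 0.5 * z

samples = []
for _ in range(200_000):
    points = rng.random(rng.poisson(c))   # atoms Z_i of one realization of ξ
    samples.append(np.exp(f(points).sum()))
lhs = np.mean(samples)

# Right-hand side: exp(c * ∫_0^1 (e^{z/2} - 1) dz) in closed form.
rhs = np.exp(c * (2.0 * (np.exp(0.5) - 1.0) - 1.0))
print(lhs, rhs)
assert abs(lhs - rhs) < 0.02
```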


Conditionally Poisson systems

Let ξ be a random counting measure on S and Ξ be a locally finite random measure on S.

ξ is conditionally Poisson with Cox measure Ξ if, conditioned on Ξ, ξ is a Poisson point process with mean measure Ξ.

E[e^{−∫_S f dξ}] = E[e^{−∫_S (1−e^{−f}) dΞ}]

for all nonnegative f ∈ M(S).


Particle representations of random measures

If ξ is a conditionally Poisson system on S × [0,∞) with Cox measure Ξ × m, where m is Lebesgue measure, then for f ∈ M(S)

E[e^{−∫_{S×[0,K]} f dξ}] = E[e^{−K ∫_S (1−e^{−f}) dΞ}]

and for f ≥ 0,

Ξ(f) = lim_{K→∞} (1/K) ∫_{S×[0,K]} f dξ = lim_{ε→0} ε ∫_{S×[0,∞)} e^{−εu} f(x) ξ(dx × du)  a.s.
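A minimal simulation of both limit formulas (assumed setup, not from the notes): take Ξ = W·Lebesgue on S = [0, 1] with a random weight W, attach iid uniform levels, and recover W by averaging over the levels:

```python
import numpy as np

# Illustration: conditionally Poisson ξ on S x [0, K] with Cox measure
# Ξ x m, where Ξ = W * Lebesgue on S = [0, 1] and W is a random weight.
rng = np.random.default_rng(2)
W = rng.gamma(shape=3.0, scale=1.0)       # random total mass Ξ(S)
K = 50_000.0

# Given W, ξ is Poisson on [0,1] x [0,K] with intensity W, so the number
# of points is Poisson(W * K) and the levels are iid uniform on [0, K].
n_points = rng.poisson(W * K)
levels = rng.random(n_points) * K

# First limit on the slide (f = 1): (1/K) * ξ(S x [0, K]) recovers Ξ(S) = W.
est_average = n_points / K

# Second limit (f = 1): ε * Σ_i exp(-ε U_i) also recovers W for small ε.
eps = 1e-3
est_laplace = eps * np.exp(-eps * levels).sum()

print(est_average, est_laplace, W)
assert abs(est_average - W) < 0.1
assert abs(est_laplace - W) < 0.2
```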


Relationship to exchangeability

Lemma 12.4 Suppose ξ is a conditionally Poisson random measure on S × [0,∞) with Cox measure Ξ × m. If Ξ(S) < ∞ a.s., then we can write ξ = ∑_{i=1}^∞ δ_{(X_i,U_i)} with U_1 < U_2 < · · · a.s. and {X_i} exchangeable.

Conditioned on Ξ, {U_i} is a Poisson process with parameter Ξ(S) and the X_i are iid with distribution Ξ(S)^{−1} Ξ.


Convergence

Lemma 12.5 Let ξ_n be a sequence of conditionally Poisson random measures on S × [0,∞) with Cox measures Ξ_n × m. Then ξ_n ⇒ ξ if and only if Ξ_n ⇒ Ξ, where Ξ is the Cox measure for ξ.

If ξ_n → ξ in probability, then Ξ_n → Ξ in probability.


Representations for processes: Preserving ignorance

Representations for prelimiting models will be constructed in which the behavior of the particles is highly dependent on the level of the particle, but observation of the empirical measure of the particles gives no information about the levels of the particles.

Z(t) = ∑_{i=1}^{N(t)} δ_{X_i(t)}

For discrete levels:

E[f(X_1(t), . . . , X_m(t)) | F_t^Z] = \binom{N(t)}{m}^{−1} ∑_{{x_1,...,x_m}⊂X(t)} f(x_1, . . . , x_m)

For continuous levels:

E[∏_{i=1}^{N(t)} g(X_i(t), U_i(t)) | F_t^Z] = ∏_{i=1}^{N(t)} (1/r) ∫_0^r g(X_i(t), u) du


Neutral models: Model I

N(t) denotes the population size at time t.

N_b(t) the number of births up to time t.

N_d(t) the number of deaths, so

N(t) = N(0) + N_b(t) − N_d(t).

For simplicity, assume that N_b and N_d do not have simultaneous jumps.

At a birth event, the parent is selected at random.

At a death event, the individuals that are eliminated from the population are selected at random; that is, if there are k deaths, the \binom{N(t−)}{k} possible subsets of the population immediately prior to the death event are equally likely to be eliminated.



Types/locations

At each time t, each individual in the population has a type or location in a space E; at a birth event, the offspring are given the same type as the parent, and in between birth and death events, the types evolve as independent, E-valued Markov processes corresponding to a specified generator B.

Therefore, the population at time t can be described by a vector

(Y_1(t), . . . , Y_{N(t)}(t)) ∈ E^{N(t)}

in which we preserve the order of the remaining particles at a death event and randomly insert the particles at a birth event, or, since the above order does not play a role in the birth and death events, by the empirical measure

Z_I(t) = ∑_{i=1}^{N(t)} δ_{Y_i(t)}.


Neutral models: Model II (ordered model)

The population size is defined as in Model I, and in between birth and death events, the types or locations of the individuals evolve as independent Markov processes with generator B.

At a death event, the individuals removed are the individuals with the highest indices in (X_1(t), . . . , X_{N(t−)}(t)).

At a birth event occurring at time t in which there are k offspring, k + 1 indices, i_1 < · · · < i_{k+1}, are selected at random from {1, . . . , N(t)}.

Since N(t) − N(t−) = k, i_1 ≤ N(t−) and i_1 will be the index of some individual in the population immediately before the birth event. That individual will be the parent. After the birth event, the parent and the k offspring will be indexed by i_1, . . . , i_{k+1}. The remaining N(t) − (k+1) individuals are reindexed by {1, . . . , N(t)} − {i_1, . . . , i_{k+1}}, maintaining their previous order.


Equivalence of the models

Theorem 12.6 (Donnelly and Kurtz (1999)) Suppose that the initial population vectors (Y_1(0), . . . , Y_{N(0)}(0)) in Model I and (X_1(0), . . . , X_{N(0)}(0)) in Model II have the same exchangeable distribution and define

Z_{II}(t) = ∑_{i=1}^{N(t)} δ_{X_i(t)},  Z_I(t) = ∑_{i=1}^{N(t)} δ_{Y_i(t)}

Then Z_{II} and Z_I have the same distribution and, for each t > 0,

(X_1(t), . . . , X_{N(t)}(t))

is exchangeable.


Infinite population limit

Let P^n(t) = N^n(t)/n = P^n(0) + N_b^n(t)/n − N_d^n(t)/n, and assume P^n converges.

Let L_{12}^n(t) denote the number of birth events up to time t that involve the levels 1 and 2. Then (X_1^n, X_2^n) converges in distribution provided the counting process L_{12}^n converges in distribution.

If there is a birth event at time t with k offspring, then, conditioning on N^n and N_b^n for all time (not just up to time t), the probability that levels 1 and 2 are involved is just

\binom{N^n(t)−2}{k−1} / \binom{N^n(t)}{k+1} = k(k+1) / (N^n(t)(N^n(t)−1)).
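The combinatorial identity behind this probability can be verified exactly in integer arithmetic (illustration only, not part of the notes):

```python
from math import comb

# Check binom(N-2, k-1) / binom(N, k+1) == k*(k+1) / (N*(N-1)) exactly,
# cross-multiplied to stay in integers: k+1 indices are chosen from N,
# and both of the levels 1 and 2 must be among them.
for N in range(4, 60):
    for k in range(1, N - 1):
        assert comb(N - 2, k - 1) * N * (N - 1) == comb(N, k + 1) * k * (k + 1)
print("identity verified")
```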


Martingale properties

Set U^n(t) = ([N_b^n]_t + N_b^n(t))/n^2, let {t_m} be the jump times of N_b^n, and let k_m = N_b^n(t_m) − N_b^n(t_m−). Then

L_{12}^n(t) − ∑_{m: t_m≤t, k_m>0} k_m(k_m+1) / (N^n(t_m)(N^n(t_m)−1)) = L_{12}^n(t) − ∫_0^t 1/(P^n(s)(P^n(s) − 1/n)) dU^n(s)   (12.1)

is a martingale.

Basic assumption: (P^n, U^n) ⇒ (P, U). If P > 0, then

∫_0^t 1/(P^n(s)(P^n(s) − 1/n)) dU^n(s) ⇒ H(t) = ∫_0^t 1/P(s)^2 dU(s)

and L_{12}^n converges to the unique counting process L_{12} with

L_{12}(t) − H(t)

a martingale. (Note that the discontinuities of H are bounded by 1.)


Convergence of lookdown processes

In general, fix a level l, and let K ⊂ {1, . . . , l}; |K| will denote the cardinality of the set. Let η_m ⊂ {1, . . . , N^n(t_m)} be the subset of indices selected at the birth time t_m, and define

L_{K,l}^n(t) = |{m : t_m ≤ t, η_m ∩ {1, . . . , l} = K}|.

Then

L_{K,l}^n(t) − ∑_{m: t_m≤t, k_m+1≥|K|} \binom{N^n(t_m)−l}{k_m+1−|K|} / \binom{N^n(t_m)}{k_m+1}   (12.2)

is a martingale. Let H_K^n(t) denote the sum in (12.2), and let U_c denote the continuous part of U.


If |K| = 2, it follows that H_K^n(t) converges to

∫_0^t 1/P(s)^2 dU_c(s) + ∑_{s≤t} (ΔU(s)/P(s)^2) (1 − √ΔU(s)/P(s))^{l−2},

where ΔU(s) = U(s) − U(s−), and L_{K,l}^n ⇒ L_{K,l} where

L_{K,l}(t) − ∫_0^t 1/P(s)^2 dU_c(s) − ∑_{s≤t} (ΔU(s)/P(s)^2) (1 − √ΔU(s)/P(s))^{l−2}

is a martingale.


If |K| > 2, then the sum converges in distribution to

∑_{s≤t} (√ΔU(s)/P(s))^{|K|} (1 − √ΔU(s)/P(s))^{l−|K|},

and L_{K,l}^n ⇒ L_{K,l} where

L_{K,l}(t) − ∑_{s≤t} (√ΔU(s)/P(s))^{|K|} (1 − √ΔU(s)/P(s))^{l−|K|}

is a martingale.

In particular, if U is continuous and |K| > 2, then L_{K,l}^n ⇒ 0; that is, in the limit, only two levels are involved in any birth event.


Lookdown processes

Let L_{ij} = lim_{l→∞} L_{{i,j},l} count the number of times that levels i and j, and only levels i and j, are in a birth event. Then

L_{ij}(t) − ∫_0^t 1/P(s)^2 dU_c(s)

is a martingale.

Let L_K^l count the number of times an infinite birth event occurs such that K ⊂ {1, . . . , l} are the levels below level l in the birth event. Then

L_K^l(t) − ∑_{s≤t} (√ΔU(s)/P(s))^{|K|} (1 − √ΔU(s)/P(s))^{l−|K|}

is a martingale. (Note that this assertion holds even if |K| ≤ 2.)


Ancestral levels

For each t ≥ 0 and k = 1, 2, . . ., let N_k^t(s), 0 ≤ s ≤ t, be the level at time s of the ancestor of the particle at level k at time t. In terms of the L_{ij}, L_K^k, for 0 ≤ s ≤ t,

N_k^t(s) = k − ∑_{1≤i<j<k} ∫_s^t 1_{{N_k^t(u) > j}} dL_{ij}(u)
  − ∑_{1≤i<j≤k} ∫_s^t (j − i) 1_{{N_k^t(u) = j}} dL_{ij}(u)
  − ∑_{K⊂{1,...,k}} ∫_s^t (N_k^t(u) − min(K)) 1_{{N_k^t(u) ∈ K}} dL_K^k(u)
  − ∑_{K⊂{1,...,k}} ∫_s^t (|K ∩ {1, . . . , N_k^t(u)}| − 1) 1_{{N_k^t(u) ∉ K}} dL_K^k(u).


The corresponding coalescent

Fix 0 < t ≤ τ, and for s ≤ t define an equivalence relation R_t(s) by

R_t(s) = {(k, l) : k, l = 1, 2, . . . , N_k^t(s) = N_l^t(s)}.   (12.3)

Informally, (k, l) ∈ R_t(s) iff the two levels k and l have the same ancestor at time s.

Lemma 12.7 If K_t(s) is the number of equivalence classes determined by R_t(s), then the equivalence classes are given by

Γ_i^t(s) = {k : N_k^t(s) = i},  i = 1, 2, . . . , K_t(s)


Kingman’s coalescent

Theorem 12.8 Assume that U is continuous and that t < τ. Let ν_t(u) be the time change determined for u ≤ H(t) ≡ ∫_0^t 1/P(s)^2 dU(s) by

∫_{ν_t(u)}^t 1/P(s)^2 dU(s) = u.

Up to time H(t), the process R̃_t defined by R̃_t(u) = R_t(ν_t(u)) is the coalescent defined in Kingman (1982).


Pitman's Λ-coalescent Pitman (1999)

λ_{b,k} = ∫_0^1 x^{k−2} (1−x)^{b−k} Λ(dx)

Rate of coalescence of k specified individuals out of b.

Theorem 12.9 If P ≡ 1 and U is a subordinator with generator

Df(u) = af′(u) + ∫_0^1 (f(u+z) − f(u)) ν(dz),

where ν satisfies ∫_0^1 z ν(dz) < ∞, then R̃_t(u) = R_t(t−u) is Pitman's Λ-coalescent with

∫_0^1 g(x) Λ(dx) = a g(0) + ∫_0^1 g(√z) z ν(dz)

If k > 2, the rate in terms of ν is

∫_0^1 (√z)^k (1 − √z)^{b−k} ν(dz)
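As a hedged numerical illustration (not part of the notes), for the Bolthausen-Sznitman case Λ(dx) = dx the rate integral has the closed form λ_{b,k} = B(k−1, b−k+1), which a direct quadrature of the formula above reproduces:

```python
import numpy as np
from math import gamma

# λ_{b,k} = ∫_0^1 x^{k-2} (1-x)^{b-k} Λ(dx) with Λ(dx) = dx
# (Bolthausen-Sznitman); closed form is the Beta function B(k-1, b-k+1).
def rate_quadrature(b, k, n=200_000):
    x = (np.arange(n) + 0.5) / n          # midpoint rule on (0, 1)
    return np.mean(x ** (k - 2) * (1.0 - x) ** (b - k))

for b in range(3, 8):
    for k in range(2, b + 1):
        exact = gamma(k - 1) * gamma(b - k + 1) / gamma(b)
        assert abs(rate_quadrature(b, k) - exact) < 1e-3
print("Bolthausen-Sznitman rates match B(k-1, b-k+1)")
```

For Λ = a·δ_0 (the atom at 0 in the theorem) only pairwise rates survive, recovering Kingman's coalescent.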


Finite ancestry/coming down from infinity

Assuming P ≡ 1 and U is a subordinator with generator

Df(u) = af′(u) + ∫_0^1 (f(u+z) − f(u)) ν(dz),

define

η_b = ∫_0^1 b x^{−1} (1 − (1−x)^{b−1}) Λ(dx) = ab(b−1) + ∫_0^1 b√z (1 − (1−√z)^{b−1}) ν(dz).

Theorem 12.10 (Schweinsberg (2000)) The population at time t has only finitely many ancestors alive at time t − h for all h > 0, if and only if

∑_b 1/η_b < ∞.
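For the pure Kingman part (ν = 0), η_b = ab(b−1) and the partial sums of ∑_b 1/η_b telescope, so the finite-ancestry condition holds for any a > 0; a quick check (illustration only):

```python
# Illustration: with ν = 0, η_b = a*b*(b-1), and the partial sums of
# Σ_b 1/η_b telescope, since 1/(b(b-1)) = 1/(b-1) - 1/b.  The series
# converges for any a > 0, so the population comes down from infinity.
a = 0.5
B = 10_000
partial = sum(1.0 / (a * b * (b - 1)) for b in range(2, B + 1))
assert abs(partial - (1.0 - 1.0 / B) / a) < 1e-9
print(partial)   # approaches 1/a = 2.0 as B grows
```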


Conditioning the branching model

Etheridge and March (1991):

Theorem 12.11 The neutral Dawson-Watanabe process conditioned to have total mass identically 1 for all t ≥ 0 is a Fleming-Viot process.

Let N_{ij}, i < j, be the counting process that counts lookdowns from j to i. Then

N_{ij}(t) = Y_{ij}(∫_0^t 2λ(P(s))/P(s)^2 ds).

Conditioning on P ≡ 1 is the same as replacing N_{ij} by

Ñ_{ij}(t) = Y_{ij}(2λ(1) t).


Type distribution at the extinction time

Let τ = inf{t : P(t) = 0}. If P is a continuous state branching process, then

∫_0^τ 1/P(s) ds = ∞

Tribe (1992):

lim_{t→τ−} Z(t) = δ_{ξ_0},  ξ_0 = X_1(τ)

Theorem 12.12 Let τ be an F_t^P-stopping time. Suppose

∫_0^τ λ(P(s))/P(s)^2 ds = ∞

on {τ < ∞}. Then on {τ < ∞},

lim_{t→τ−} Z(t) = δ_{X_1(τ−)}.


Conditioning on nonextinction

Assuming Gf(v) = avf″(v) − bvf′(v) (b ≥ 0), conditioning P on nonextinction (cf. Evans and Perkins (1990)) is equivalent to replacing P with a process P̃ with generator

G̃f(v) = avf″(v) + (2a − bv)f′(v).   (12.4)

If P̃(0) > 0, then P̃ never hits zero, but

∫_0^∞ c/P̃(s) ds = ∞.

It follows that eventually all particles trace their ancestry back to the bottom-level particle. In particular, the bottom-level particle in our construction is the “immortal particle” of Evans (1993).


Discrete generation models

Fixed population size n

L_i^k number of offspring for the ith individual in the kth generation

(L_1^k, . . . , L_n^k), k ≥ 1, independent and identically distributed

For fixed k, (L_i^k) is exchangeable and ∑_{i=1}^n L_i^k = n.

n generations per unit time

Select D_i^k randomly, uniformly over all partitions of {1, . . . , n} such that |D_i^k| = L_i^k. Let ξ_i^k = min D_i^k (ξ_i^k = ∞ if L_i^k = 0) and let ξ_{(1)}^k < ξ_{(2)}^k < · · · be the corresponding order statistics. Let X_i(k) be the parent for all l ∈ D_j^k if ξ_j^k = ξ_{(i)}^k.


Lookdown rate

How often do the particles at levels α < β have the same parent?

P{α, β ∈ D_i^k, some i} = E[∑_{i=1}^n (L_i^k/n)((L_i^k − 1)/(n−1))] = (1/(n−1)) E[L_1^k(L_1^k − 1)]

P{α, β, γ ∈ D_i^k, some i} = E[L_1^k(L_1^k − 1)(L_1^k − 2)] / ((n−1)(n−2))

P{α, β ∈ D_i^k, γ, δ ∈ D_j^k, some i ≠ j} = E[∑_{i≠j} (L_i^k/n)((L_i^k − 1)/(n−1))(L_j^k/(n−2))((L_j^k − 1)/(n−3))]
  = E[L_1^k(L_1^k − 1) L_2^k(L_2^k − 1)] / ((n−2)(n−3))

The details for the infinite population limit are given in Birkner, Blath, Möhle, Steinrücken, and Tams (2009).
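For the Wright-Fisher offspring law (L_1^k, . . . , L_n^k) ~ Multinomial(n; 1/n, . . . , 1/n), E[L_1^k(L_1^k − 1)] = (n−1)/n, so the pair probability above equals 1/n, matching the fact that two individuals choose the same parent with probability 1/n. A Monte Carlo sketch (illustration only, not from the notes):

```python
import numpy as np

# Wright-Fisher offspring numbers: each of n children picks a parent
# uniformly, so (L_1, ..., L_n) ~ Multinomial(n; 1/n, ..., 1/n).
rng = np.random.default_rng(3)
n = 10
L = rng.multinomial(n, [1.0 / n] * n, size=200_000)   # rows are generations

# P{α, β ∈ same D_i} = E[L_1 (L_1 - 1)] / (n - 1), which should be 1/n.
pair_prob = (L[:, 0] * (L[:, 0] - 1)).mean() / (n - 1)
print(pair_prob)           # close to 1/n = 0.1
assert abs(pair_prob - 1.0 / n) < 5e-3
```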


Infinite population limit

The distribution of L_1^k may depend on n. If {(L_1^{k,n})^2} is uniformly integrable and lim_{n→∞} E[L_1^{k,n}(L_1^{k,n} − 1)] = λ, then, assuming n generations per unit time, the process converges to the Fleming-Viot process with lookdown rate λ:

Af(x) = ∑_{i=1}^m B_i f(x) + λ ∑_{1≤i<j≤m} (f(θ_{ij}(x)) − f(x))

AF(µ) ≡ ∑_{i=1}^m ⟨B_i f, µ^m⟩ + λ ∑_{1≤i<j≤m} (⟨Φ_{ij} f, µ^{m−1}⟩ − ⟨f, µ^m⟩)


13. Poisson representations

• Branching processes

• Feller limit

• Extinction

• Dawson-Watanabe limit

• Multiple births

• Finite ancestry

Kurtz and Rodrigues (2011)


A population model

Consider a process with state space E = ∪_n [0, r]^n.

Let 0 ≤ g ≤ 1, g(r) = 1, and f(u, n) = ∏_{i=1}^n g(u_i). For a > 0 and −∞ < b ≤ ra, define

A_r f(u, n) = f(u, n) ∑_{i=1}^n 2a ∫_{u_i}^r (g(v) − 1) dv + f(u, n) ∑_{i=1}^n (au_i^2 − bu_i) g′(u_i)/g(u_i).

In other words, particle levels satisfy

U̇_i(t) = aU_i^2(t) − bU_i(t),

and a particle with level z gives birth at rate 2a(r − z) to a particle whose initial level is uniformly distributed between z and r.

N(t) = #{i : U_i(t) < r}

Apply the Markov mapping theorem with α_r(n, du) the joint distribution of n iid uniform [0, r] random variables.


A calculation

f̄(n) = ∫ f(u, n) α_r(n, du) = e^{−λ_g n},  e^{−λ_g} = (1/r) ∫_0^r g(u) du

To calculate Cf̄(n) = ∫ A_r f(u, n) α_r(n, du), observe that

r^{−1} 2a ∫_0^r g(z) ∫_z^r (g(v) − 1) dv dz = are^{−2λ_g} − 2ar^{−1} ∫_0^r g(z)(r − z) dz

and

r^{−1} ∫_0^r (az^2 − bz) g′(z) dz = −r^{−1} ∫_0^r (2az − b)(g(z) − 1) dz
  = −2ar^{−1} ∫_0^r z g(z) dz + ar + b(e^{−λ_g} − 1).

Then

Cf̄(n) = n e^{−λ_g(n−1)} (are^{−2λ_g} − 2are^{−λ_g} + ar + b(e^{−λ_g} − 1))
  = arn(f̄(n+1) − f̄(n)) + (ar − b)n(f̄(n−1) − f̄(n)).
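Writing q = e^{−λ_g}, both sides of the final display reduce to nq^{n−1}(q−1)(ar(q−1) + b); a numerical spot check of this identity (illustration only):

```python
import random

# Check that n e^{-λ(n-1)} (a r e^{-2λ} - 2 a r e^{-λ} + a r + b (e^{-λ} - 1))
# equals a r n (f(n+1) - f(n)) + (a r - b) n (f(n-1) - f(n)) for f(n) = e^{-λn},
# writing q = e^{-λ}.
random.seed(0)
for _ in range(1000):
    a, r, b = (random.uniform(0.1, 3.0) for _ in range(3))
    q = random.uniform(0.05, 0.95)
    n = random.randint(1, 12)
    f = lambda m: q ** m
    lhs = n * q ** (n - 1) * (a * r * q ** 2 - 2 * a * r * q + a * r + b * (q - 1))
    rhs = a * r * n * (f(n + 1) - f(n)) + (a * r - b) * n * (f(n - 1) - f(n))
    assert abs(lhs - rhs) < 1e-9
print("generator identity holds")
```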


Conclusion

Let Ñ be a solution of the martingale problem for

Cf(n) = arn(f(n+1) − f(n)) + (ar − b)n(f(n−1) − f(n)),

that is, Ñ is a branching process with birth rate ar and death rate (ar − b).

Then there exists a solution (U_1(t), . . . , U_{N(t)}(t), N(t)) of the martingale problem for A_r such that N has the same distribution as Ñ.
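A small Gillespie-style simulation of the branching process with per-particle birth rate ar and death rate ar − b confirms the mean growth E[N(t)] = N(0)e^{bt} (a sketch under assumed parameters, not part of the notes):

```python
import math
import random

# Linear birth-death process: each particle gives birth at rate a*r and
# dies at rate a*r - b, so E[N(t)] = N(0) * exp(b*t).
random.seed(4)
a, r, b = 1.0, 1.0, 0.5
birth, death = a * r, a * r - b

def simulate(n0, t_end):
    n, t = n0, 0.0
    while n > 0:
        t += random.expovariate(n * (birth + death))   # next event time
        if t > t_end:
            break
        n += 1 if random.random() < birth / (birth + death) else -1
    return n

runs = [simulate(50, 1.0) for _ in range(2000)]
mean = sum(runs) / len(runs)
print(mean, 50 * math.exp(b))          # ~82.4
assert abs(mean - 50 * math.exp(b)) < 3.0
```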


The limit as r → ∞

If n = O(r), then the scaling is correct for the Feller diffusion.

For f(u) = ∏_i g(u_i), 0 ≤ g ≤ 1, g(z) = 1 for z ≥ u_g, A_r f converges to

Af(u) = f(u) ∑_i 2a ∫_{u_i}^{u_g} (g(v) − 1) dv + f(u) ∑_i (au_i^2 − bu_i) g′(u_i)/g(u_i).

If nr^{−1} → y, then α_r(n, du) → α(y, du), where α(y, du) is the distribution of a Poisson process on [0,∞) with intensity y.

f̄(y) = αf(y) = ∫ f(u) α(y, du) = e^{−y ∫_0^∞ (1−g(z)) dz} = e^{−yβ_g}

and

αAf(y) = e^{−yβ_g} (2ay ∫_0^∞ g(z) ∫_z^∞ (g(v) − 1) dv dz + y ∫_0^∞ (az^2 − bz) g′(z) dz)
  = e^{−yβ_g} (ayβ_g^2 − byβ_g)
  = ayf̄″(y) + byf̄′(y)


Particle representation of Feller diffusion

Let {U_i(0)} be a conditionally Poisson process on [0,∞) with (conditional) intensity Y(0). Then {U_i(t)} is conditionally Poisson with intensity Y(t),

Y(t) = lim_{r→∞} (1/r) #{i : U_i(t) ≤ r},

and Y is a Feller diffusion with generator Cf(y) = ayf″(y) + byf′(y)

γ : N(R) → [0,∞)

γ(u) = lim_{r→∞} (1/r) #{i : u_i ≤ r}

α(y, du) Poisson process distribution on [0,∞) with intensity y.

α(y, γ^{−1}(y)) = 1.


Extinction

Assume U_1(0) < U_2(0) < · · ·. Then for all t, all levels are above

U_1(t) = U_1(0)e^{−bt} / (1 − (a/b)U_1(0)(1 − e^{−bt}))

Let τ = inf{t : Y(t) = 0}. Then

P{τ > t} = P{U_1(0) < [(1 − e^{−bt})a/b]^{−1}} = 1 − e^{−yb/[(1−e^{−bt})a]}
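The closed form for U_1(t) can be checked against the level equation U̇ = aU² − bU numerically, writing E = e^{−bt} so that U is a rational function of E (sketch, illustration only):

```python
import math
import random

# U(t) = u0*E / (1 - (a/b)*u0*(1 - E)) with E = exp(-b*t) should satisfy
# dU/dt = a*U^2 - b*U.  By the chain rule, dU/dt = U'(E) * (-b*E).
random.seed(5)

def U_of_E(u0, a, b, E):
    return u0 * E / (1.0 - (a / b) * u0 * (1.0 - E))

for _ in range(200):
    a = random.uniform(0.1, 2.0)
    b = random.uniform(0.1, 2.0)
    u0 = random.uniform(0.0, 0.9 * b / a)   # keep the denominator positive
    t = random.uniform(0.0, 3.0)
    E = math.exp(-b * t)
    h = 1e-6
    dU_dE = (U_of_E(u0, a, b, E + h) - U_of_E(u0, a, b, E - h)) / (2 * h)
    dU_dt = dU_dE * (-b * E)
    U = U_of_E(u0, a, b, E)
    assert abs(dU_dt - (a * U * U - b * U)) < 1e-5
print("U_1 solves the level equation")
```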

If b ≤ 0, conditioning on nonextinction for all t is equivalent to setting U_1(0) = 0. The generator becomes

Af(u) = f(u) ∑_i 2a ∫_{u_i}^{u_g} (g(v) − 1) dv + f(u) ∑_i (au_i^2 − bu_i) g′(u_i)/g(u_i) + f(u) 2a ∫_0^{u_g} (g(v) − 1) dv

and

αAf(y) = ayf″(y) + (2a + by)f′(y)


Branching Markov processes

f(x, u, n) = ∏_{i=1}^n g(x_i, u_i), where g : E × [0,∞) → (0, 1].

As a function of x, g is in the domain D(B) of the generator of a Markov process in E, g is continuously differentiable in u, and g(x, u) = 1 for u ≥ r.

Af(x, u, n) = f(x, u, n) ∑_{i=1}^n Bg(x_i, u_i)/g(x_i, u_i)
  + f(x, u, n) ∑_{i=1}^n 2a(x_i) ∫_{u_i}^r (g(x_i, v) − 1) dv
  + f(x, u, n) ∑_{i=1}^n (a(x_i)u_i^2 − b(x_i)u_i) ∂_{u_i} g(x_i, u_i)/g(x_i, u_i)

Each particle has a location X_i(t) in E and a level U_i(t) in [0, r].


Behavior of the process

The locations evolve independently as Markov processes with generator B, the levels satisfy

U̇_i(t) = a(X_i(t))U_i^2(t) − b(X_i(t))U_i(t),

and particles that reach level r die.

Particles give birth at rates 2a(X_i(t))(r − U_i(t)); the initial location of a new particle is the location of the parent at the time of birth; and the initial level is uniformly distributed on [U_i(t), r].


Generator for X(t) = (X_1(t), . . . , X_{N(t)}(t))

Setting e^{−λ_g(x_i)} = r^{−1} ∫_0^r g(x_i, z) dz and f(x, n) = e^{−∑_{i=1}^n λ_g(x_i)}, and calculating as in the previous example, we have

Cf(x, n) = ∑_{i=1}^n B_{x_i} f(x, n) + ∑_{i=1}^n a(x_i)r (f((x, x_i), n+1) − f(x, n))
  + ∑_{i=1}^n (a(x_i)r − b(x_i)) (f(d(x|x_i), n−1) − f(x, n)),

where B_{x_i} is the generator B applied to f(x, n) as a function of x_i and d(x|x_i) is the vector obtained from x by eliminating the ith component.


Infinite population limit

Letting r → ∞, Af becomes

Af(x, u) = f(x, u) ∑_i Bg(x_i, u_i)/g(x_i, u_i) + f(x, u) ∑_i 2a(x_i) ∫_{u_i}^{u_g} (g(x_i, v) − 1) dv
  + f(x, u) ∑_i (a(x_i)u_i^2 − b(x_i)u_i) ∂_{u_i} g(x_i, u_i)/g(x_i, u_i)

Particle locations evolve as independent Markov processes with generator B, and levels satisfy

U̇_i(t) = a(X_i(t))U_i^2(t) − b(X_i(t))U_i(t)

A particle with level U_i(t) gives birth to new particles at its location X_i(t) and initial level in the interval [U_i(t) + c, U_i(t) + d] at rate 2a(X_i(t))(d − c).

A particle dies when its level hits ∞.


The measure-valued limit

For µ ∈ M_f(E), let α(µ, dx × du) be the distribution of a Poisson random measure on E × [0,∞) with mean measure µ × m. Then, setting h(y) = ∫_0^∞ (1 − g(y, v)) dv,

αf(µ) = ∫ f(x, u) α(µ, dx × du) = exp{−∫_E h(y) µ(dy)},

and

αAf(µ) = exp{−∫_E h(y) µ(dy)} [∫_E ∫_0^∞ Bg(y, v) dv µ(dy)
  + ∫_E ∫_0^∞ 2a(y) g(y, z) ∫_z^∞ (g(y, v) − 1) dv dz µ(dy)
  + ∫_E ∫_0^∞ (a(y)v^2 − b(y)v) ∂_v g(y, v) dv µ(dy)]
  = exp{−∫_E h(y) µ(dy)} ∫_E (−Bh(y) + a(y)h(y)^2 − b(y)h(y)) µ(dy)


Representation of the Dawson-Watanabe process

It follows that the Cox measure (or more precisely, the E marginal of the Cox measure) corresponding to the particle process at time t, call it Z(t), is a solution of the martingale problem for

A = {(αf, αAf) : f ∈ D}.


Models with multiple simultaneous births

As above, the particles move independently in E according to a generator B.

A particle at position x ∈ E with level u gives birth to k offspring at rate (k+1) a_k^{(r)}(x) (r − u)^k r^{−(k−1)}.

New particles have the location of the parent, but their levels are uniformly distributed on [u, r).


Generator

Then for f(x, u, n) = ∏_{k=1}^n g(x_k, u_k),

A_r f(x, u, n)   (13.5)
  = f(x, u, n) ∑_{i=1}^n B_i g(x_i, u_i)/g(x_i, u_i)
  + f(x, u, n) ∑_{i=1}^n ∑_{k=1}^∞ (k+1) a_k^{(r)}(x_i)/r^{k−1} ∫_{[u_i,r)^k} [(∏_{l=1}^k g(x_i, v_l)) − 1] dv_1 . . . dv_k
  + f(x, u, n) ∑_{i=1}^n ∑_{k=1}^∞ r^2 a_k^{(r)}(x_i) [(1 − u_i/r)^{k+1} − 1 + (k+1) u_i/r] ∂_{u_i} g(x_i, u_i)/g(x_i, u_i),


Behavior of the process

The levels satisfy the equation

Ui(t) =∞∑k=1

r2a(r)k (Xi(t))

[(1− Ui(t)

r

)k+1

− 1 + (k + 1)Ui(t)

r

].
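In the binary case (only a_1^{(r)} = a nonzero) the bracket collapses algebraically: r^2 a((1-u/r)^2 - 1 + 2u/r) = a u^2, so the level ODE is \dot U = aU^2, which explodes in finite time. A minimal pure-Python sketch (the rate a, initial level, and step size are illustrative assumptions, not from the slides) checking the identity and the closed form U(t) = U_0/(1 - aU_0 t):

```python
# Level dynamics for binary branching: only a_1^{(r)} = a is nonzero.
# Bracket with k = 1: r^2 * a * ((1 - u/r)**2 - 1 + 2*u/r) = a * u**2,
# so the level ODE reduces to  dU/dt = a * U^2  (independent of r).

def bracket(u, r, a):
    return r**2 * a * ((1 - u / r) ** 2 - 1 + 2 * u / r)

a, r, u0 = 0.5, 100.0, 1.0
assert abs(bracket(u0, r, a) - a * u0**2) < 1e-9  # algebraic identity

# Forward-Euler integration of dU/dt = a U^2 versus the closed form
# U(t) = u0 / (1 - a*u0*t), valid for t < 1/(a*u0).
dt, t_end = 1e-5, 1.0
u, t = u0, 0.0
while t < t_end:
    u += dt * a * u**2
    t += dt
exact = u0 / (1 - a * u0 * t_end)
assert abs(u - exact) / exact < 1e-3
print(round(u, 4), round(exact, 4))
```

The explosion time 1/(aU_0) is exactly the finite-time blow-up of levels used later to count ancestors.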

Corresponding branching process

Defining \bar g(x) = \frac{1}{r}\int_0^r g(x,v)\,dv and \bar f(x,n) = \prod_{i=1}^n \bar g(x_i), and integrating (13.5) with respect to α(n, du), the uniform measure on [0, r]^n, we have
\begin{align*}
\int A_r f(x,u,n)\,\alpha(n,du) &= C_r \bar f(x,n)\\
&= \bar f(x,n)\sum_{i=1}^n \frac{B\bar g(x_i)}{\bar g(x_i)} + \bar f(x,n)\sum_{i=1}^n\sum_{k=1}^\infty r\,a_k^{(r)}(x_i)\big[\bar g(x_i)^k - 1\big]\\
&\quad + \bar f(x,n)\sum_{i=1}^n r\sum_{k=1}^\infty k\,a_k^{(r)}(x_i)\Big[\frac{1}{\bar g(x_i)} - 1\Big],
\end{align*}
which is the generator of a branching process with multiple births with birth rates r\,a_k^{(r)}(\cdot), death rate r\sum_{k=1}^\infty k\,a_k^{(r)}(\cdot), and particles moving according to the generator B.

Limiting model

Assume that
\[
\Lambda(x,u) \equiv \lim_{r\to\infty}\sum_{k=1}^\infty r(k+1)a_k^{(r)}(x)\Big[1 - \Big(1-\frac{u}{r}\Big)^k\Big] \tag{13.6}
\]
uniformly for x ∈ E and u in bounded intervals.

Integrating with respect to u, we have
\[
\int_0^u \Lambda(x,v)\,dv = \lim_{r\to\infty}\sum_{k=1}^\infty r^2 a_k^{(r)}(x)\Big[\Big(1-\frac{u}{r}\Big)^{k+1} - 1 + (k+1)\frac{u}{r}\Big],
\]
\[
\partial_1\Lambda(x,u) = \lim_{r\to\infty}\sum_{k=1}^\infty (k+1)k\,a_k^{(r)}(x)\Big(1-\frac{u}{r}\Big)^{k-1}.
\]

Convergence of derivatives

Let \Lambda^{(r)}(x,u) denote the function on the right of (13.6). Then
\[
\frac{\partial^m}{\partial u^m}\Lambda^{(r)}(x,u) = (-1)^{m+1}\sum_{k=m}^\infty a_k^{(r)}(x)\,\frac{(k+1)k\cdots(k-m+1)}{r^{m-1}}\Big(1-\frac{u}{r}\Big)^{k-m}.
\]
Each of the derivatives is a monotone function of u and must converge.

\partial_1\Lambda(x,\cdot) is completely monotone and hence can be represented as
\[
\partial_1\Lambda(x,u) = \int_0^\infty e^{-uz}\,\tilde\nu(x,dz)
\]
for some σ-finite measure \tilde\nu(x,\cdot). Writing \tilde\nu(x,\cdot) = 2a_0(x)\delta_0 + \nu(x,\cdot) with \nu(x,\{0\}) = 0,
\[
\Lambda(x,u) = 2a_0(x)u + \int_0^\infty z^{-1}(1-e^{-uz})\,\nu(x,dz).
\]

Limiting generator

Let g satisfy g(x,v) = 1 for v ≥ u_g, and define
\[
h(x,u) = \int_u^{u_g}(1 - g(x,v))\,dv.
\]
Then, by (13.6) and the definition of h, we have
\begin{align*}
\lim_{r\to\infty} A_r f(x,u) &= f(x,u)\sum_i \frac{Bg(x_i,u_i)}{g(x_i,u_i)}\\
&\quad + f(x,u)\sum_i \big(\Lambda(x_i,u_i) - \Lambda(x_i,u_i + h(x_i,u_i))\big)\\
&\quad + f(x,u)\sum_i \int_0^{u_i}\Lambda(x_i,v)\,dv\;\frac{\partial_{u_i} g(x_i,u_i)}{g(x_i,u_i)}
\end{align*}

\begin{align*}
&= f(x,u)\sum_i \frac{Bg(x_i,u_i)}{g(x_i,u_i)}\\
&\quad + f(x,u)\sum_i 2a_0(x_i)\int_{u_i}^\infty (g(x_i,v)-1)\,dv\\
&\quad + f(x,u)\sum_i \int_0^\infty \Big(e^{z\int_{u_i}^\infty (g(x_i,v)-1)\,dv} - 1\Big) z^{-1} e^{-z u_i}\,\nu(x_i,dz)\\
&\quad + f(x,u)\sum_i \Big(a_0(x_i)u_i^2 + \int_0^\infty z^{-1}\big(u_i - z^{-1}(1-e^{-u_i z})\big)\,\nu(x_i,dz)\Big)\frac{\partial_{u_i} g(x_i,u_i)}{g(x_i,u_i)}.
\end{align*}

The second term on the right has the same interpretation as before. To understand the third, recall that if \xi = \sum_i \delta_{\tau_i} is a Poisson process on [0,∞) with parameter λ, then E[\prod_i g(\tau_i)] = e^{\lambda\int_0^\infty (g(v)-1)\,dv}. Consequently, the third term determines bursts of simultaneous offspring at the location x_i of the parent and with levels forming a Poisson process with intensity z on [u_i, ∞).
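The Poisson identity invoked here is easy to check numerically. A minimal Monte Carlo sketch (rate and test function are illustrative choices): take λ = 2 on [0,1] and g ≡ 0.5 on [0,1] (g ≡ 1 elsewhere), so the product is 0.5^K with K ~ Poisson(2), and the identity predicts E[0.5^K] = e^{2(0.5-1)} = e^{-1}:

```python
import math, random

# Check E[prod_i g(tau_i)] = exp(lam * integral of (g - 1)) for a
# Poisson process: points on [0,1] at rate lam = 2, g = 0.5 on [0,1]
# and g = 1 elsewhere, so the product is 0.5**K with K ~ Poisson(2)
# and the identity predicts exp(-1).

random.seed(2)
lam, trials = 2.0, 200_000

def poisson(l):
    # count exponential interarrivals until total mass exceeds l
    k, t = 0, 0.0
    while True:
        t += random.expovariate(1.0)
        if t > l:
            return k
        k += 1

est = sum(0.5 ** poisson(lam) for _ in range(trials)) / trials
print(round(est, 4), round(math.exp(-1), 4))
assert abs(est - math.exp(-1)) < 0.005
```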

Generator for the measure-valued processes

Setting h(x) = h(x,0) = \int_0^\infty (1-g(x,v))\,dv and f(\mu) = \exp\{-\langle h,\mu\rangle\},
\[
Cf(\mu) = \alpha A f(\mu) = \int_E \Big(-Bh(y) + \int_0^{h(y)} \Lambda(y,z)\,dz\Big)\,\mu(dy)\,\exp\{-\langle h,\mu\rangle\}.
\]

Offspring distribution with finite second moment

The measure ν(x,·) is nonzero only if the offspring distribution has a “heavy tail.” If a_k^{(r)}(x) = a_k(x) and \sum_{k=1}^\infty (k+1)k\,a_k(x) < \infty, then
\[
\Lambda(x,u) = \lim_{r\to\infty}\sum_{k=1}^\infty r(k+1)a_k(x)\Big[1-\Big(1-\frac{u}{r}\Big)^k\Big] = \sum_{k=1}^\infty (k+1)k\,a_k(x)\,u,
\]
and
\begin{align*}
Af(x,u) &= f(x,u)\sum_i \frac{Bg(x_i,u_i)}{g(x_i,u_i)}\\
&\quad + f(x,u)\sum_i\sum_{k=1}^\infty (k+1)k\,a_k(x_i)\int_{u_i}^{u_g}[g(x_i,v)-1]\,dv\\
&\quad + f(x,u)\sum_i\sum_{k=1}^\infty \frac{(k+1)k\,a_k(x_i)}{2}\,u_i^2\,\frac{\partial_{u_i} g(x_i,u_i)}{g(x_i,u_i)},
\end{align*}
which is a special case of the previous generator.

Conditions for finitely many ancestors

The levels satisfy \dot U_i(t) = \int_0^{U_i(t)} \Lambda(X_i(t),v)\,dv. Assume that
\[
\Lambda(x,u) = \Lambda(u) = 2a_0 u + \int_0^\infty z^{-1}(1-e^{-uz})\,\nu(dz),
\]
and define F(u) = \int_0^u \Lambda(v)\,dv. Then
\[
\int_{U_i(t_0)}^{U_i(t)} \frac{1}{F(u)}\,du = t - t_0,
\]
and U_i hits infinity in finite time if and only if
\[
\int_u^\infty \frac{1}{F(v)}\,dv < \infty.
\]
If \int_{u_h}^\infty \frac{1}{F(v)}\,dv = h, then the ancestors of the population at time t have levels below u_h at time t − h and hence are finite in number. (cf. Bertoin and Le Gall (2006))
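A quick sanity check of the ancestor bound in the Feller case (ν = 0, Λ(u) = 2a_0 u, so F(u) = a_0 u^2): the tail integral is \int_{u_h}^\infty dv/F(v) = 1/(a_0 u_h), so u_h = 1/(a_0 h). The values of a_0 and h below are illustrative assumptions; the sketch compares the closed form against numeric quadrature:

```python
# Feller case: Lambda(u) = 2*a0*u, so F(u) = a0*u**2 and the tail
# integral of 1/F from u_h to infinity equals 1/(a0*u_h).

a0, h = 0.7, 0.25          # illustrative parameter choices
u_h = 1.0 / (a0 * h)       # threshold solving tail integral = h

def F(u):
    return a0 * u * u

# Trapezoidal quadrature of 1/F over [u_h, U_max]; the neglected tail
# beyond U_max contributes exactly 1/(a0*U_max).
U_max, n = 1e4, 200_000
du = (U_max - u_h) / n
total = sum((1.0 / F(u_h + i * du) + 1.0 / F(u_h + (i + 1) * du)) * du / 2
            for i in range(n))
total += 1.0 / (a0 * U_max)   # exact remainder of the tail
assert abs(total - h) < 1e-4
print(round(total, 4))        # close to h = 0.25
```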


14. Stochastic partial differential equations

• A stochastic McKean-Vlasov equation

• Exchangeability and de Finetti’s theorem

• Convergence of exchangeable systems

• From particle approximation to particle representation

• Derivation of SPDE

• Classical McKean-Vlasov

• Stationary limits

• Vanishing spatial noise correlations

• Hydrodynamic limit for symmetric simple exclusion process

• Uniqueness of SPDE via Markov mapping

with Dan Crisan, Yoonjung Lee, Peter Kotelenez, Phil Protter, Eliane Rodrigues, Jie Xiong

A stochastic McKean-Vlasov equation

Kotelenez (1995); Kurtz and Xiong (1999). For i = 1, …, n,
\[
X_i^n(t) = X_i^n(0) + \int_0^t \sigma(X_i^n(s),V^n(s))\,dB_i(s) + \int_0^t b(X_i^n(s),V^n(s))\,ds + \int_{U\times[0,t]} \alpha(X_i^n(s),V^n(s),u)\,W(du,ds),
\]
where V^n(t) = \frac{1}{n}\sum_{i=1}^n \delta_{X_i^n(t)} and W is space-time Gaussian white noise with
\[
E[W([0,t]\times A)\,W([0,s]\times B)] = (t\wedge s)\,\mu(A\cap B).
\]
Example:
\[
X_i^n(t) = X_i^n(0) + B_i(t) + W(t) + \frac{1}{n}\sum_{j=1}^n \int_0^t b(X_i^n(s) - X_j^n(s))\,ds
\]
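The example system is easy to simulate with an Euler scheme. A minimal pure-Python sketch with the illustrative choice b(x) = -x (so the interaction drift is -(X_i - \bar X^n)), independent B_i, and a common Brownian noise W; all parameter values are assumptions for illustration:

```python
import random, math

# Euler scheme for dX_i = dB_i + dW + (1/n) sum_j b(X_i - X_j) dt with
# the illustrative choice b(x) = -x, i.e. drift -(X_i - Xbar).
# B_i are independent Brownian motions, W is a common Brownian motion.

random.seed(1)
n, dt, steps = 200, 0.01, 500
X = [random.gauss(0.0, 1.0) for _ in range(n)]
spread0 = sum((x - sum(X) / n) ** 2 for x in X) / n

for _ in range(steps):
    xbar = sum(X) / n
    dW = random.gauss(0.0, math.sqrt(dt))        # common noise increment
    X = [x - (x - xbar) * dt + random.gauss(0.0, math.sqrt(dt)) + dW
         for x in X]

spreadT = sum((x - sum(X) / n) ** 2 for x in X) / n
# The common noise shifts all particles together, so the spread around
# the center of mass relaxes like an OU variance toward about 1/2.
print(round(spread0, 3), round(spreadT, 3))
assert abs(spreadT - 0.5) < 0.2
```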

Exchangeability and de Finetti’s theorem

X_1, X_2, … is exchangeable if
\[
P\{X_1\in\Gamma_1,\ldots,X_m\in\Gamma_m\} = P\{X_{s_1}\in\Gamma_1,\ldots,X_{s_m}\in\Gamma_m\}
\]
for (s_1,\ldots,s_m) any permutation of (1,\ldots,m).

Theorem 14.1 (de Finetti) Let X_1, X_2, … be exchangeable. Then there exists a random probability measure Ξ such that for every bounded, measurable g,
\[
\lim_{n\to\infty}\frac{g(X_1)+\cdots+g(X_n)}{n} = \int g(x)\,\Xi(dx)
\]
almost surely, and
\[
E\Big[\prod_{k=1}^m g_k(X_k)\,\Big|\,\Xi\Big] = \prod_{k=1}^m \int g_k\,d\Xi.
\]
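The point of the theorem is that the empirical average converges to a random limit. A minimal sketch (distributional choices are illustrative): draw a random success probability P, then X_1, X_2, … iid Bernoulli(P) given P; the sequence is exchangeable, and the empirical mean converges to P (here \int g\,d\Xi with g(x) = x), not to E[X_1] = 0.5:

```python
import random

# Exchangeable-but-not-iid sequence: draw the "directing" randomness P,
# then X_1, X_2, ... iid Bernoulli(P) given P.  de Finetti: the
# empirical average converges to the random limit P, not to E[X_1].

random.seed(7)
p = random.choice([0.2, 0.8])          # random mixing parameter
xs = [1 if random.random() < p else 0 for _ in range(100_000)]
emp = sum(xs) / len(xs)
print(p, round(emp, 3))
assert abs(emp - p) < 0.01             # converges to P, the de Finetti limit
assert abs(emp - 0.5) > 0.2            # ... not to E[X_1] = 0.5
```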

Convergence of exchangeable systems

Kotelenez and Kurtz (2010)

Lemma 14.2 For n = 1, 2, …, let \xi_1^n, \ldots, \xi_{N_n}^n be exchangeable (allowing N_n = ∞). Let Ξ_n be the empirical measure (defined as a limit if N_n = ∞), \Xi_n = \frac{1}{N_n}\sum_{i=1}^{N_n}\delta_{\xi_i^n}. Assume

• N_n → ∞

• For each m = 1, 2, …, (\xi_1^n, \ldots, \xi_m^n) ⇒ (\xi_1, \ldots, \xi_m) in S^m.

Then \{\xi_i\} is exchangeable, and setting \xi_i^n = s_0 ∈ S for i > N_n, (\Xi_n, \xi_1^n, \xi_2^n, \ldots) ⇒ (\Xi, \xi_1, \xi_2, \ldots) in P(S) × S^∞, where Ξ is the de Finetti measure for \{\xi_i\}.

If for each m, (\xi_1^n, \ldots, \xi_m^n) → (\xi_1, \ldots, \xi_m) in probability in S^m, then Ξ_n → Ξ in probability in P(S).

Lemma 14.3 Let X^n = (X_1^n, \ldots, X_{N_n}^n) be exchangeable families of D_E[0,∞)-valued random variables such that N_n ⇒ ∞ and X^n ⇒ X in D_E[0,∞)^∞. Define
\begin{align*}
\Xi_n &= \tfrac{1}{N_n}\textstyle\sum_{i=1}^{N_n}\delta_{X_i^n} \in \mathcal{P}(D_E[0,\infty)) &
\Xi &= \lim_{m\to\infty}\tfrac{1}{m}\textstyle\sum_{i=1}^m \delta_{X_i}\\
V^n(t) &= \tfrac{1}{N_n}\textstyle\sum_{i=1}^{N_n}\delta_{X_i^n(t)} \in \mathcal{P}(E) &
V(t) &= \lim_{m\to\infty}\tfrac{1}{m}\textstyle\sum_{i=1}^m \delta_{X_i(t)}
\end{align*}
Then

a) For t_1, \ldots, t_l \notin \{t : E[\Xi\{x : x(t) \ne x(t-)\}] > 0\},
(\Xi_n, V^n(t_1), \ldots, V^n(t_l)) ⇒ (\Xi, V(t_1), \ldots, V(t_l)).

b) If X^n ⇒ X in D_{E^∞}[0,∞), then V^n ⇒ V in D_{P(E)}[0,∞). If X^n → X in probability in D_{E^∞}[0,∞), then V^n → V in D_{P(E)}[0,∞) in probability.

From particle approximation to particle representation

\[
X_i^n(t) = X_i^n(0) + \int_0^t \sigma(X_i^n(s),V^n(s))\,dB_i(s) + \int_0^t b(X_i^n(s),V^n(s))\,ds + \int_{U\times[0,t]} \alpha(X_i^n(s),V^n(s),u)\,W(du,ds)
\]

Wasserstein metric on \{\nu \in \mathcal{P}(R^d) : \int |x|\,\nu(dx) < \infty\}:
\[
\rho(\nu_1,\nu_2) = \inf\Big\{\int |x-y|\,\gamma(dx\times dy) : \gamma \text{ with marginals } \nu_1, \nu_2\Big\}
\]
Lipschitz assumption: the coefficients satisfy
\[
|\beta(x_1,\nu_1) - \beta(x_2,\nu_2)| \le C(|x_1-x_2| + \rho(\nu_1,\nu_2))
\]
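For two empirical measures on the line with the same number of atoms, the infimum in ρ is attained by matching order statistics (a standard one-dimensional fact, not stated on the slide), so ρ is computable by sorting. A minimal sketch:

```python
import random

# 1-d Wasserstein distance between two n-atom empirical measures:
# the optimal coupling matches order statistics, so
#   rho = (1/n) * sum_i |x_(i) - y_(i)|.

def rho_empirical(xs, ys):
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Point masses at 0 and at 2 are at distance 2 ...
assert abs(rho_empirical([0.0] * 4, [2.0] * 4) - 2.0) < 1e-12
# ... and shifting an empirical measure by c moves it by exactly |c|.
random.seed(3)
xs = [random.gauss(0, 1) for _ in range(1000)]
ys = [x + 0.7 for x in xs]
assert abs(rho_empirical(xs, ys) - 0.7) < 1e-9
print(round(rho_empirical(xs, ys), 3))
```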

Lemma 14.4 If \sup_n E[|X_i^n(0)|^2] < ∞, then \sup_n E[|X_i^n(t)|^2] < ∞.

For each i, \{X_i^n\} is relatively compact in C_{R^d}[0,∞), and hence \{X^n\} is relatively compact in C_{(R^d)^\infty}[0,∞), and any limit point satisfies
\[
X_i(t) = X_i(0) + \int_0^t \sigma(X_i(s),V(s))\,dB_i(s) + \int_0^t b(X_i(s),V(s))\,ds + \int_{U\times[0,t]} \alpha(X_i(s),V(s),u)\,W(du,ds),
\]
where V(t) is the de Finetti measure for \{X_i(t)\}.

Derivation of SPDE

Applying Itô’s formula,
\begin{align*}
\varphi(X_i(t)) &= \varphi(X_i(0)) + \int_0^t \nabla\varphi(X_i(s))^T\sigma(X_i(s),V(s))\,dB_i(s)\\
&\quad + \int_{U\times[0,t]} \nabla\varphi(X_i(s))\cdot\alpha(X_i(s),V(s),u)\,W(du,ds) + \int_0^t L(V(s))\varphi(X_i(s))\,ds,
\end{align*}
where, for
\[
a(x,\nu) = \sigma(x,\nu)\sigma(x,\nu)^T + \int \alpha(x,\nu,u)\alpha(x,\nu,u)^T\,\mu(du),
\]
\[
L(\nu)\varphi(x) = \frac{1}{2}\sum_{i,j} a_{ij}(x,\nu)\,\partial_i\partial_j\varphi(x) + b(x,\nu)\cdot\nabla\varphi(x).
\]
Averaging gives
\[
\langle V(t),\varphi\rangle = \langle V(0),\varphi\rangle + \int_{U\times[0,t]} \langle V(s), \alpha(\cdot,V(s),u)\cdot\nabla\varphi\rangle\,W(du,ds) + \int_0^t \langle V(s), L(V(s))\varphi\rangle\,ds
\]

Classical McKean-Vlasov

\[
X_i^n(t) = X_i^n(0) + \int_0^t \sigma(X_i^n(s),V^n(s))\,dB_i(s) + \int_0^t b(X_i^n(s),V^n(s))\,ds
\]
converges to
\[
X_i(t) = X_i(0) + \int_0^t \sigma(X_i(s),V(s))\,dB_i(s) + \int_0^t b(X_i(s),V(s))\,ds,
\]
which gives the solution of
\[
\langle V(t),\varphi\rangle = \langle V(0),\varphi\rangle + \int_0^t \langle V(s), L(V(s))\varphi\rangle\,ds.
\]

Stationary distribution

If
\[
\operatorname{trace}(a(x,\nu)) + 2x\cdot b(x,\nu) \le C_1 + C_2\Big(\int |y|\,\nu(dy)\Big)^2 - C_3|x|^2
\]
with C_2 < C_3, and \sup_n E[|X_i^n(0)|^2] < ∞, then \sup_{n,t} E[|X_i^n(t)|^2] < ∞. Assuming σ is nondegenerate, there exists a unique stationary distribution π_n for X^n.

The moment estimates imply \{\pi_n\} is tight, and uniqueness implies π_n is exchangeable.

Stationary limits

For the limit
\[
X_i(t) = X_i(0) + \int_0^t \sigma(X_i(s),V(s))\,dB_i(s) + \int_0^t b(X_i(s),V(s))\,ds,
\]
\[
\langle V(t),\varphi\rangle = \langle V(0),\varphi\rangle + \int_0^t \langle V(s), L(V(s))\varphi\rangle\,ds,
\]
\{X_i(0)\} is exchangeable, but it need not be iid, and V is stationary, but it need not be constant in time.

Coupling through the center of mass

Let \bar X(t) = \lim_{k\to\infty}\frac{1}{k}\sum_{i=1}^k X_i(t) = \int y\,V(t,dy) for
\[
X_i(t) = X_i(0) + \int_0^t \sigma(X_i(s),\bar X(s))\,dB_i(s) + \int_0^t b(X_i(s),\bar X(s))\,ds.
\]
Suppose σ is constant and b(x,\bar x) = F(\bar x) - \kappa(x-\bar x), κ > 0. Then for the finite particle system
\[
\bar X^n(t) = \bar X^n(0) + \frac{\sigma}{n}\sum_{i=1}^n B_i(t) + \int_0^t F(\bar X^n(s))\,ds,
\]
for the infinite limit
\[
\bar X(t) = \bar X(0) + \int_0^t F(\bar X(s))\,ds,
\]
and
\[
X_i(t) - \bar X(t) = X_i(0) - \bar X(0) + \sigma B_i(t) - \kappa\int_0^t (X_i(s)-\bar X(s))\,ds.
\]
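The last display says the deviations from the center of mass are Ornstein-Uhlenbeck with rate κ and diffusion σ, so their stationary variance is σ²/(2κ). A seeded Euler sketch (parameter values are illustrative assumptions) checking this:

```python
import random, math

# Deviations D_i = X_i - Xbar solve dD = -kappa*D dt + sigma dB, an
# Ornstein-Uhlenbeck equation with stationary variance sigma^2/(2*kappa).

random.seed(5)
kappa, sigma = 1.0, 1.0
dt, steps, paths = 0.01, 500, 2000
final = []
for _ in range(paths):
    d = 0.0
    for _ in range(steps):
        d += -kappa * d * dt + sigma * random.gauss(0.0, math.sqrt(dt))
    final.append(d)
var = sum(x * x for x in final) / paths
print(round(var, 3))                 # near sigma^2 / (2*kappa) = 0.5
assert abs(var - 0.5) < 0.08
```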

Vanishing spatial noise correlations

Let d ≥ 2, U = R^d, μ be Lebesgue measure, σ = 0, and
\[
\alpha_\epsilon(x,\nu,u) = \epsilon^{-d/2}\alpha(x,\epsilon^{-1}(x-u),\nu).
\]
The SPDE
\[
\langle V_\epsilon(t),\varphi\rangle = \langle V(0),\varphi\rangle + \int_{U\times[0,t]} \langle V_\epsilon(s), \alpha_\epsilon(\cdot,V_\epsilon(s),u)\cdot\nabla\varphi\rangle\,W(du,ds) + \int_0^t \langle V_\epsilon(s), L_\epsilon(V_\epsilon(s))\varphi\rangle\,ds
\]
is represented by
\[
X_{\epsilon,i}(t) = X_i(0) + \int_0^t b(X_{\epsilon,i}(s),V_\epsilon(s))\,ds + \int_{U\times[0,t]} \alpha_\epsilon(X_{\epsilon,i}(s),V_\epsilon(s),u)\,W(du,ds)
\]

Change of variable

The stochastic integral can be written
\begin{align*}
\int_{R^d\times[0,t]} \alpha_\epsilon(X_{\epsilon,i}(s),V_\epsilon(s),u)\,W(du,ds)
&= \int_{R^d\times[0,t]} \epsilon^{-d/2}\alpha(X_{\epsilon,i}(s),\epsilon^{-1}(X_{\epsilon,i}(s)-u),V_\epsilon(s))\,W(du,ds)\\
&= \int_{R^d\times[0,t]} \alpha(X_{\epsilon,i}(s),z,V_\epsilon(s))\,W_i^\epsilon(dz,ds),
\end{align*}
where for each i, W_i^\epsilon is a Gaussian white noise defined by
\[
\int_{R^d\times[0,\infty)} \varphi(z,s)\,W_i^\epsilon(dz,ds) = \int_{R^d\times[0,\infty)} \epsilon^{-d/2}\varphi(\epsilon^{-1}(X_{\epsilon,i}(s)-u),s)\,W(du,ds).
\]
(NOTE: The W_i^\epsilon are not independent but are exchangeable.)

Convergence

If \int_{R^d} |\alpha(x,z,\nu)|^2\,dz < ∞ and x_1 ≠ x_2, then
\begin{align*}
\int_{R^d} \epsilon^{-d/2}\alpha(x_1,\epsilon^{-1}(x_1-u),\nu)\,\epsilon^{-d/2}\alpha(x_2,\epsilon^{-1}(x_2-u),\nu)^T\,du
&= \int_{R^d} \alpha(x_1,\epsilon^{-1}x_1-u,\nu)\,\alpha(x_2,\epsilon^{-1}x_2-u,\nu)^T\,du\\
&\to 0.
\end{align*}
Assume that the convergence is uniform on \{|x_1-x_2| \ge \delta > 0\}, for each δ > 0, and on compact subsets of P(R^d).

Assume the nondegeneracy condition
\[
\inf_{x,\nu}\inf_z \frac{\int_{R^d}(z\cdot\alpha(x,u,\nu))^2\,du}{|z|^2} > 0.
\]

The zero correlation limit (Kotelenez and Kurtz (2010))

Theorem 14.5 Assume α and b are bounded, V(0) has no atoms, and an additional regularity condition for d = 2. As ε → 0, X_ε converges in distribution to the solution of
\[
X_i(t) = X_i(0) + \int_{R^d\times[0,t]} \alpha(X_i(s),u,V(s))\,W_i(du,ds) + \int_0^t b(X_i(s),V(s))\,ds,
\]
where the W_i are independent and V(t) is the de Finetti measure for \{X_i(t)\}.

V is the unique solution of
\[
\langle V(t),\varphi\rangle = \langle V(0),\varphi\rangle + \int_0^t \langle V(s), L\varphi(\cdot,V(s))\rangle\,ds,
\]
where
\[
L\varphi(x,\nu) = \frac{1}{2}\sum_{ij} a_{ij}(x,\nu)\,\partial_i\partial_j\varphi(x) + b(x,\nu)\cdot\nabla\varphi(x),\qquad
a(x,\nu) = \int_U \alpha(x,u,\nu)\alpha(x,u,\nu)^T\,du.
\]

Sketch of proof

\begin{align*}
X_{\epsilon,i}(t) &= X_i(0) + \int_{U\times[0,t]} \epsilon^{-d/2}\alpha\Big(X_{\epsilon,i}(s),\frac{X_{\epsilon,i}(s)-u}{\epsilon},V_\epsilon(s)\Big)\,W(du,ds) + \int_0^t b(X_{\epsilon,i}(s),V_\epsilon(s))\,ds\\
&= X_i(0) + \int_{R^d\times[0,t]} \alpha(X_{\epsilon,i}(s),u,V_\epsilon(s))\,W_i^\epsilon(du,ds) + \int_0^t b(X_{\epsilon,i}(s),V_\epsilon(s))\,ds,
\end{align*}
where
\[
M_i^{\varphi,\epsilon}(t) = \int_{R^d\times[0,t]} \varphi(u)\,W_i^\epsilon(du,ds) = \int_{R^d\times[0,t]} \epsilon^{-d/2}\varphi\Big(\frac{X_{\epsilon,i}(s)-u}{\epsilon}\Big)\,W(du,ds).
\]
Relative compactness follows from boundedness of α and b, and
\[
[M_i^{\varphi,\epsilon}, M_j^{\psi,\epsilon}]_t = \int_{R^d\times[0,t]} \epsilon^{-d/2}\varphi\Big(\frac{X_{\epsilon,i}(s)-u}{\epsilon}\Big)\,\epsilon^{-d/2}\psi\Big(\frac{X_{\epsilon,j}(s)-u}{\epsilon}\Big)\,du\,ds \to 0
\]
for t < \tau_{ij} = \inf\{t : X_i(t) = X_j(t)\}. If \tau_{ij} = ∞ a.s., then the limits W_i and W_j are independent.

Symmetric simple exclusion model (Rezakhanlou (1994))

X_i^n: particle location on E_n = \frac{1}{n}\mathbb{Z} mod 1 (at most one particle per location).

X_i^n → X_i^n + \frac{k}{n} at rate n^2\lambda(|k|), unless the new location is already occupied (note symmetry).

Equivalent formulation

Swap the contents of \frac{l}{n} and \frac{k}{n} at rate n^2\lambda(|k-l|):
\begin{align*}
X_i^n(t) &= X_i^n(0) + \frac{1}{n}\sum_{l\ne k}\int_0^t (l-k)\,1_{\{X_i^n(s-)=\frac{k}{n}\}}\,dY_{kl}(n^2\lambda(|k-l|)s)\\
&= X_i^n(0) + \frac{1}{n}\sum_{l\ne k}\int_0^t (l-k)\,1_{\{X_i^n(s-)=\frac{k}{n}\}}\,d\tilde Y_{kl}(n^2\lambda(|k-l|)s),
\end{align*}
where Y_{kl} = Y_{lk} are independent unit Poisson processes and \tilde Y_{kl}(u) = Y_{kl}(u) - u.

Convergence of particle locations

\begin{align*}
[X_i^n]_t &= \frac{1}{n^2}\sum_{l\ne k}\int_0^t (l-k)^2\,1_{\{X_i^n(s-)=\frac{k}{n}\}}\,dY_{kl}(n^2\lambda(|k-l|)s)\\
&= \frac{1}{n^2}\sum_{l\ne k}\int_0^t (l-k)^2\,1_{\{X_i^n(s-)=\frac{k}{n}\}}\,d\tilde Y_{kl}(n^2\lambda(|k-l|)s) + \sigma^2 t.
\end{align*}
Then [X_i^n]_t → σ^2 t, \sigma^2 = \sum_k k^2\lambda(|k|), and by the martingale central limit theorem, X_i^n ⇒ X_i, where X_i(t) = X_i(0) + \sigma W_i(t).

\[
[X_i^n, X_j^n]_t = -\frac{1}{n^2}\sum_{l\ne k}\int_0^t (l-k)^2\,1_{\{X_i^n(s-)=\frac{k}{n},\,X_j^n(s-)=\frac{l}{n}\}}\,dY_{kl}(n^2\lambda(|k-l|)s)
\]
converges to zero, which implies the W_i are independent.

Convergence of empirical measure

Setting V_n(t) = \frac{1}{n}\sum_{i=1}^n \delta_{X_i^n(t)}, V_n ⇒ V, where V(t,\Gamma) = \int_\Gamma v(t,x)\,dx and
\[
v_t = \frac{1}{2}\sigma^2\Delta v.
\]

Uniqueness of SPDE via Markov mapping

Define γ : (R^d)^∞ → P(R^d) by
\[
\gamma(x) = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n \delta_{x_i}
\]
if the limit exists in P(R^d), and γ(x) = µ_0 otherwise.

Then V(t) = γ(X(t)), and a Markov mapping theorem implies that every solution of the SPDE can be obtained in this way.

Remark 14.6 Strong uniqueness for the infinite system implies that V is adapted to the filtration generated by \{X_i(0)\} and W.


15. Information and conditional expectation

• Information

• Independence

• Conditional expectation

• Properties of conditional expectations

• Jensen’s inequality

• Functions of known and unknown random variables

• Convergence of conditional expectations

Information

Information obtained by observations of the outcome of a random experiment is represented by a sub-σ-algebra D of the collection of events F. If D ∈ D, then the observer “knows” whether or not the outcome is in D.

Independence

Two σ-algebras D_1, D_2 are independent if
\[
P(D_1\cap D_2) = P(D_1)P(D_2), \quad \forall D_1\in\mathcal{D}_1,\ D_2\in\mathcal{D}_2.
\]
An S-valued random variable Y is independent of a σ-algebra D if
\[
P(\{Y\in B\}\cap D) = P\{Y\in B\}\,P(D), \quad \forall B\in\mathcal{B}(S),\ D\in\mathcal{D}.
\]
Random variables X and Y are independent if σ(X) and σ(Y) are independent, that is, if
\[
P(\{X\in B_1\}\cap\{Y\in B_2\}) = P\{X\in B_1\}\,P\{Y\in B_2\}.
\]

Conditional expectation

Interpretation of conditional expectation in L^2.

Problem: Approximate X ∈ L^2 using information represented by D such that the mean square error is minimized, i.e., find the D-measurable random variable Y that minimizes E[(X - Y)^2].

Solution: Suppose Y is a minimizer. For any ε ≠ 0 and any D-measurable random variable Z ∈ L^2,
\[
E[|X-Y|^2] \le E[|X-Y-\epsilon Z|^2] = E[|X-Y|^2] - 2\epsilon E[Z(X-Y)] + \epsilon^2 E[Z^2].
\]
Hence 2\epsilon E[Z(X-Y)] \le \epsilon^2 E[Z^2]. Since ε is arbitrary, E[Z(X-Y)] = 0, and hence
\[
E[ZX] = E[ZY] \tag{15.1}
\]
for every D-measurable Z with E[Z^2] < ∞.
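On a finite probability space with D generated by a partition, the minimizer is the cell average, and (15.1) can be checked directly. A small deterministic sketch (the sample space and values are illustrative):

```python
# Finite Omega = {0,...,5}, uniform probability; D generated by the
# partition {0,1,2}, {3,4,5}.  The best D-measurable mean-square
# predictor of X is the cell average, and E[ZX] = E[ZY] for every
# D-measurable Z (the orthogonality relation (15.1)).

omega = range(6)
p = 1.0 / 6
cells = [[0, 1, 2], [3, 4, 5]]
X = {0: 1.0, 1: 2.0, 2: 6.0, 3: 0.0, 4: 4.0, 5: 8.0}

Y = {}
for cell in cells:
    avg = sum(X[w] for w in cell) / len(cell)   # E[X | cell]
    for w in cell:
        Y[w] = avg

# Any D-measurable Z is constant on cells; check E[ZX] = E[ZY].
for z0, z1 in [(1.0, 0.0), (0.0, 1.0), (2.5, -3.0)]:
    Z = {w: (z0 if w in cells[0] else z1) for w in omega}
    EZX = sum(p * Z[w] * X[w] for w in omega)
    EZY = sum(p * Z[w] * Y[w] for w in omega)
    assert abs(EZX - EZY) < 1e-12

# The cell average beats any other constant-on-cells predictor in MSE.
mse = lambda U: sum(p * (X[w] - U[w]) ** 2 for w in omega)
worse = {w: Y[w] + (0.3 if w in cells[0] else -0.2) for w in omega}
assert mse(Y) < mse(worse)
print(Y[0], Y[3])
```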

Definition of conditional expectation

Let X be an integrable random variable (that is, E[|X|] < ∞). The conditional expectation of X, denoted E[X|D], is the unique (up to changes on events of probability zero) random variable Y satisfying

a) Y is D-measurable.

b) \int_D X\,dP = \int_D Y\,dP for all D ∈ D. (Here \int_D X\,dP = E[1_D X].)

Condition (b) implies that (15.1) holds for all bounded D-measurable random variables.

Verifying Condition (b)

Lemma 15.1 Let C ⊂ F be a collection of events such that Ω ∈ C and C is closed under intersections, that is, D_1, D_2 ∈ C implies D_1 ∩ D_2 ∈ C. If X and Y are integrable and
\[
\int_D X\,dP = \int_D Y\,dP \tag{15.2}
\]
for all D ∈ C, then (15.2) holds for all D ∈ σ(C) (the smallest σ-algebra containing C).

Proof. The lemma follows by the Dynkin class theorem.

Discrete case

Assume that D = σ(D_1, D_2, \ldots), where \bigcup_{i=1}^\infty D_i = \Omega and D_i \cap D_j = \emptyset whenever i ≠ j. Let X be any F-measurable random variable. Then
\[
E[X|\mathcal{D}] = \sum_{i=1}^\infty \frac{E[X 1_{D_i}]}{P(D_i)}\,1_{D_i}.
\]
a) The right hand side is D-measurable.

b) Any D ∈ D can be written as D = \bigcup_{i\in A} D_i, where A ⊂ \{1, 2, 3, \ldots\}. Therefore,
\begin{align*}
\int_D \sum_{i=1}^\infty \frac{E[X 1_{D_i}]}{P(D_i)}\,1_{D_i}\,dP
&= \sum_{i=1}^\infty \frac{E[X 1_{D_i}]}{P(D_i)}\int_{D\cap D_i} 1_{D_i}\,dP \quad\text{(monotone convergence theorem)}\\
&= \sum_{i\in A} \frac{E[X 1_{D_i}]}{P(D_i)}\,P(D_i) = \int_D X\,dP.
\end{align*}

Properties of conditional expectation

Assume that X and Y are integrable random variables and that D is a sub-σ-algebra of F.

1) E[E[X|D]] = E[X]. Just take D = Ω in Condition (b).

2) If X ≥ 0, then E[X|D] ≥ 0. The property holds because Y = E[X|D] is D-measurable and \int_D Y\,dP = \int_D X\,dP \ge 0 for every D ∈ D. Therefore, Y must be nonnegative a.s.

3) E[aX + bY|D] = aE[X|D] + bE[Y|D]. The right hand side is D-measurable, being a linear combination of two D-measurable random variables. Also,
\begin{align*}
\int_D (aX+bY)\,dP &= a\int_D X\,dP + b\int_D Y\,dP\\
&= a\int_D E[X|\mathcal{D}]\,dP + b\int_D E[Y|\mathcal{D}]\,dP\\
&= \int_D (aE[X|\mathcal{D}] + bE[Y|\mathcal{D}])\,dP.
\end{align*}

4) If X ≥ Y, then E[X|D] ≥ E[Y|D]. Use properties (2) and (3) for Z = X − Y.

5) If X is D-measurable, then E[X|D] = X.

6) If Y is D-measurable and YX is integrable, then E[YX|D] = YE[X|D]. First assume that Y is a simple random variable: let \{D_i\}_{i=1}^\infty be a partition of Ω with D_i ∈ D, let c_i ∈ R, and define Y = \sum_{i=1}^\infty c_i 1_{D_i}. Then
\begin{align*}
\int_D YX\,dP &= \int_D \Big(\sum_{i=1}^\infty c_i 1_{D_i}\Big)X\,dP = \sum_{i=1}^\infty c_i\int_{D\cap D_i} X\,dP\\
&= \sum_{i=1}^\infty c_i\int_{D\cap D_i} E[X|\mathcal{D}]\,dP = \int_D \Big(\sum_{i=1}^\infty c_i 1_{D_i}\Big)E[X|\mathcal{D}]\,dP\\
&= \int_D YE[X|\mathcal{D}]\,dP.
\end{align*}
For general Y, approximate by a sequence \{Y_n\}_{n=1}^\infty of simple random variables, for example, defined by Y_n = \frac{k}{n} if \frac{k}{n} \le Y < \frac{k+1}{n}, k ∈ Z. Then Y_n converges to Y, and the result follows by the dominated convergence theorem.

7) If X is independent of D, then E[X|D] = E[X]. Independence implies that for D ∈ D, E[X1_D] = E[X]P(D), so
\[
\int_D X\,dP = E[X 1_D] = E[X]\int_\Omega 1_D\,dP = \int_D E[X]\,dP.
\]
Since E[X] is D-measurable, E[X] = E[X|D].

8) If D_1 ⊂ D_2, then E[E[X|D_2]|D_1] = E[X|D_1]. Note that if D ∈ D_1, then D ∈ D_2. Therefore,
\[
\int_D X\,dP = \int_D E[X|\mathcal{D}_2]\,dP = \int_D E[E[X|\mathcal{D}_2]|\mathcal{D}_1]\,dP.
\]

Convex functions

A function φ : R → R is convex if and only if for all x and y in R, and λ in [0, 1], φ(λx + (1−λ)y) ≤ λφ(x) + (1−λ)φ(y).

Let x_1 < x_2 and y ∈ R. Then
\[
\frac{\varphi(x_2)-\varphi(y)}{x_2-y} \ge \frac{\varphi(x_1)-\varphi(y)}{x_1-y}. \tag{15.3}
\]
Now assume that x_1 < y < x_2 and let x_2 converge to y from above. The left side of (15.3) is bounded below, and its value decreases as x_2 decreases to y. Therefore, the right derivative φ^+ exists at y and
\[
-\infty < \varphi^+(y) = \lim_{x_2\to y+}\frac{\varphi(x_2)-\varphi(y)}{x_2-y} < +\infty.
\]
Moreover,
\[
\varphi(x) \ge \varphi(y) + \varphi^+(y)(x-y), \quad \forall x\in R. \tag{15.4}
\]

Jensen’s Inequality

Lemma 15.2 If φ is convex, then
\[
E[\varphi(X)|\mathcal{D}] \ge \varphi(E[X|\mathcal{D}]).
\]
Proof. Define M : Ω → R as M = φ^+(E[X|D]). From (15.4),
\[
\varphi(X) \ge \varphi(E[X|\mathcal{D}]) + M(X - E[X|\mathcal{D}]),
\]
and
\begin{align*}
E[\varphi(X)|\mathcal{D}] &\ge E[\varphi(E[X|\mathcal{D}])|\mathcal{D}] + E[M(X-E[X|\mathcal{D}])|\mathcal{D}]\\
&= \varphi(E[X|\mathcal{D}]) + M\,E[(X-E[X|\mathcal{D}])|\mathcal{D}]\\
&= \varphi(E[X|\mathcal{D}]) + M\big(E[X|\mathcal{D}] - E[E[X|\mathcal{D}]|\mathcal{D}]\big)\\
&= \varphi(E[X|\mathcal{D}]) + M\big(E[X|\mathcal{D}] - E[X|\mathcal{D}]\big) = \varphi(E[X|\mathcal{D}]).
\end{align*}
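The conditional Jensen inequality can be checked numerically on a finite space with a partition σ-algebra, taking φ(x) = x² (sample space and values are illustrative):

```python
# Conditional Jensen's inequality with phi(x) = x**2 on a six-point
# uniform space; D is generated by the partition {0,1,2}, {3,4,5}.

phi = lambda x: x * x
cells = [[0, 1, 2], [3, 4, 5]]
X = {0: -1.0, 1: 2.0, 2: 5.0, 3: 0.0, 4: 1.0, 5: 7.0}

for cell in cells:
    cond_mean = sum(X[w] for w in cell) / len(cell)       # E[X|D] on cell
    cond_phi = sum(phi(X[w]) for w in cell) / len(cell)   # E[phi(X)|D]
    assert cond_phi >= phi(cond_mean)                     # Jensen
print("ok")
```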

Functions of known and unknown random variables

Lemma 15.3 Let X be an S_1-valued, D-measurable random variable and Y be an S_2-valued random variable independent of D. Suppose that φ : S_1 × S_2 → R is a Borel measurable function and that φ(X, Y) is integrable. Define ψ(x) = E[φ(x, Y)]. Then E[φ(X, Y)|D] = ψ(X).

Proof. For C ∈ B(S_1 × S_2), define ψ_C(x) = E[1_C(x, Y)]. ψ(X) is D-measurable since X is. For D ∈ D, define µ(C) = E[1_D 1_C(X, Y)] and ν(C) = E[1_D ψ_C(X)]. (µ and ν are measures by the monotone convergence theorem.) If A ∈ B(S_1) and B ∈ B(S_2),
\begin{align*}
\mu(A\times B) &= E[1_D 1_A(X) 1_B(Y)] = E[1_D 1_A(X)]\,E[1_B(Y)]\\
&= E[1_D 1_A(X)\,E[1_B(Y)]] = \nu(A\times B),
\end{align*}
and µ = ν by Corollary 15.8, giving the lemma for φ = 1_C, C ∈ B(S_1 × S_2). For general φ, approximate by simple functions.

More general version

Lemma 15.4 Let Y be an S_2-valued random variable (not necessarily independent of D). Suppose that φ : S_1 × S_2 → R is a bounded measurable function. Then there exists a measurable ψ : Ω × S_1 → R such that for each x ∈ S_1,
\[
\psi(\omega,x) = E[\varphi(x,Y)|\mathcal{D}](\omega) \quad a.s.,
\]
and
\[
E[\varphi(X,Y)|\mathcal{D}](\omega) = \psi(\omega,X(\omega)) \quad a.s.
\]
for every D-measurable random variable X.

Example

Let Y : Ω → N be independent of the i.i.d. random variables \{X_i\}_{i=1}^\infty. Then
\[
E\Big[\sum_{i=1}^Y X_i\,\Big|\,\sigma(Y)\Big] = Y\cdot E[X_1]. \tag{15.5}
\]
Identity (15.5) follows by taking \varphi(x,y) = \sum_{i=1}^y x_i and noting that \psi(y) = E\big[\sum_{i=1}^y X_i\big] = y\,E[X_1].
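Identity (15.5) is easy to check by Monte Carlo: conditioning on Y = y, the average of the random sum should be y·E[X_1]. A seeded sketch with illustrative distributions (Y uniform on {1,…,5}, X_i uniform on [0,2], so E[X_1] = 1):

```python
import random

# Monte Carlo check of E[ sum_{i<=Y} X_i | Y ] = Y * E[X_1] when Y is
# independent of the iid X_i.  Illustrative choices: Y uniform on
# {1,...,5}, X_i uniform on [0, 2] with E[X_1] = 1.

random.seed(11)
sums_given_y = {y: [] for y in range(1, 6)}
for _ in range(100_000):
    y = random.randint(1, 5)
    s = sum(random.uniform(0.0, 2.0) for _ in range(y))
    sums_given_y[y].append(s)

for y, vals in sums_given_y.items():
    est = sum(vals) / len(vals)       # estimates E[S | Y = y]
    assert abs(est - y * 1.0) < 0.05  # should be y * E[X_1] = y
print("ok")
```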

Convergence of conditional expectations

Since
\begin{align*}
E[|E[X|\mathcal{D}] - E[Y|\mathcal{D}]|^p] &= E[|E[X-Y|\mathcal{D}]|^p] \quad\text{(linearity)}\\
&\le E[E[|X-Y|^p\,|\mathcal{D}]] \quad\text{(Jensen's inequality)}\\
&= E[|X-Y|^p],
\end{align*}
we have

Lemma 15.5 Let \{X_n\}_{n=0}^\infty be a sequence of random variables and p ≥ 1. If \lim_{n\to\infty} E[|X-X_n|^p] = 0, then \lim_{n\to\infty} E[|E[X|\mathcal{D}] - E[X_n|\mathcal{D}]|^p] = 0.


Technical lemmas

• Carathéodory extension theorem

• Dynkin class theorem

• Essential supremum

• Martingale convergence theorem

• Kronecker’s lemma

• Law of large numbers for martingales

• Geometric rates

• Uniform integrability

• Dominated convergence theorem

• Metric spaces

• Sequential compactness


• Completeness

• Measurable selection


Carathéodory extension theorem

Theorem 15.6 Let M be a set, and let A be an algebra of subsets of M. If µ is a σ-finite measure on A, then there exists a unique extension of µ to a measure on σ(A).


Dynkin class theorem

A collection D of subsets of Ω is a Dynkin class if Ω ∈ D; A, B ∈ D and A ⊂ B imply B − A ∈ D; and {An} ⊂ D with A1 ⊂ A2 ⊂ · · · implies ∪nAn ∈ D.

Theorem 15.7 Let S be a collection of subsets of Ω such that A, B ∈ S implies A ∩ B ∈ S. If D is a Dynkin class with S ⊂ D, then σ(S) ⊂ D.

Corollary 15.8 Suppose two finite measures µ, ν have the same total mass and agree on a collection of subsets S that is closed under intersection. Then they agree on σ(S).


Essential Supremum

Let {Zα, α ∈ I} be a collection of random variables. Note that if I is uncountable, supα∈I Zα may not be a random variable; however, we have the following:

Lemma 15.9 There exists a random variable Z such that P{Zα ≤ Z} = 1 for each α ∈ I, and there exist αk, k = 1, 2, . . ., such that Z = supk Zαk.

Proof. Without loss of generality, we can assume 0 < Zα < 1. (Otherwise, replace Zα by 1/(1 + e^{−Zα}).) Let C = sup{E[Zα1 ∨ · · · ∨ Zαm] : α1, . . . , αm ∈ I, m = 1, 2, . . .}. Then there exist (α^n_1, . . . , α^n_{mn}) such that

C = lim_{n→∞} E[Z_{α^n_1} ∨ · · · ∨ Z_{α^n_{mn}}].

Define Z = sup{Z_{α^n_i} : 1 ≤ i ≤ mn, n = 1, 2, . . .}, and note that C = E[Z] and C = E[Z ∨ Zα] for each α ∈ I. Consequently, P{Zα ≤ Z} = 1.


Martingale convergence theorem

Theorem 15.10 Suppose {Xn} is a submartingale and supn E[|Xn|] < ∞. Then lim_{n→∞} Xn exists a.s.


Kronecker’s lemma

Lemma 15.11 Let {An} and {Yn} be sequences of random variables with A0 > 0 and An+1 ≥ An, n = 0, 1, 2, . . .. Define

Rn = ∑_{k=1}^{n} (1/A_{k−1})(Yk − Y_{k−1}),

and suppose that lim_{n→∞} An = ∞ and that lim_{n→∞} Rn exists a.s. Then lim_{n→∞} Yn/An = 0 a.s.

Proof. Summing by parts,

AnRn = ∑_{k=1}^{n} (AkRk − A_{k−1}R_{k−1}) = ∑_{k=1}^{n} R_{k−1}(Ak − A_{k−1}) + ∑_{k=1}^{n} Ak(Rk − R_{k−1})
     = Yn − Y0 + ∑_{k=1}^{n} R_{k−1}(Ak − A_{k−1}) + ∑_{k=1}^{n} (1/A_{k−1})(Yk − Y_{k−1})(Ak − A_{k−1}),

and

Yn/An = Y0/An + Rn − (1/An) ∑_{k=1}^{n} R_{k−1}(Ak − A_{k−1}) − (1/An) ∑_{k=1}^{n} (1/A_{k−1})(Yk − Y_{k−1})(Ak − A_{k−1}).
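A numerical illustration of the lemma (the choices An = n + 1 and ±1 increments are arbitrary assumptions of this sketch): Rn is then a martingale with summable increment variances, so it converges a.s., and the lemma forces Yn/An → 0 — here just the strong law of large numbers.

```python
import random

random.seed(0)

# Kronecker's lemma illustration: Y_n a mean-zero +-1 random walk, A_n = n + 1.
# R_n = sum_{k=1}^n (Y_k - Y_{k-1}) / A_{k-1} has increment variances ~ 1/k^2,
# hence converges a.s.; the lemma then gives Y_n / A_n -> 0.
n = 100_000
Y, R = 0.0, 0.0
for k in range(1, n + 1):
    step = random.choice((-1.0, 1.0))
    R += step / k          # A_{k-1} = k
    Y += step
print(R, Y / (n + 1))      # R is near its (random) limit; Y / A_n is near 0
```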


Law of large numbers for martingales

Lemma 15.12 Suppose {An} is as in Lemma 15.11 and is adapted to {Fn}, and suppose {Mn} is an {Fn}-martingale such that for each {Fn}-stopping time τ, E[(Mτ − M_{τ−1})²1_{τ<∞}] < ∞. If

∑_{k=1}^∞ (1/A²_{k−1})(Mk − M_{k−1})² < ∞ a.s.,

then lim_{n→∞} Mn/An = 0 a.s.

Proof. Without loss of generality, we can assume that An ≥ 1. Let

τc = min{n : ∑_{k=1}^{n} (1/A²_{k−1})(Mk − M_{k−1})² ≥ c}.

Then

∑_{k=1}^∞ (1/A²_{k−1})(M_{k∧τc} − M_{(k−1)∧τc})² ≤ c + (M_{τc} − M_{τc−1})²1_{τc<∞}.


It follows that R^c_n = ∑_{k=1}^{n} (1/A_{k−1})(M_{k∧τc} − M_{(k−1)∧τc}) converges a.s., and hence, by Lemma 15.11, that lim_{n→∞} M_{n∧τc}/An = 0.


Geometric convergence

Lemma 15.13 Let {Mn} be a martingale with |M_{n+1} − Mn| ≤ c a.s. for each n and M0 = 0. Then for each ε > 0, there exist C and η > 0 such that

P{(1/n)|Mn| ≥ ε} ≤ C e^{−nη}.

Proof. Let ϕ(x) = e^x + e^{−x} and ψ(x) = e^x − 1 − x. Then, setting Xk = Mk − M_{k−1},

E[ϕ(aMn)] = 2 + ∑_{k=1}^{n} E[ϕ(aMk) − ϕ(aM_{k−1})]
          = 2 + ∑_{k=1}^{n} E[exp{aM_{k−1}}ψ(aXk) + exp{−aM_{k−1}}ψ(−aXk)]
          ≤ 2 + ∑_{k=1}^{n} ψ(ac) E[ϕ(aM_{k−1})],


and hence

E[ϕ(aMn)] ≤ 2e^{nψ(ac)}.

Consequently,

P{sup_{k≤n} (1/n)|Mk| ≥ ε} ≤ E[ϕ(aMn)]/ϕ(anε) ≤ 2e^{n(ψ(ac)−aε)}.

Then η = sup_a (aε − ψ(ac)) > 0, and the lemma follows.
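The optimized exponent η = sup_a (aε − ψ(ac)), with ψ(x) = e^x − 1 − x, is closely related to the Azuma–Hoeffding bound P{|Mn| ≥ nε} ≤ 2 exp(−nε²/(2c²)), which has a comparable (not identical) exponent. A Monte Carlo comparison for a ±1 random walk (all parameters below are illustrative assumptions):

```python
import math
import random

random.seed(2)

# Geometric rate check for a +-1 random walk (c = 1): compare the empirical
# frequency of {|M_n|/n >= eps} with the Azuma-Hoeffding bound
# 2 exp(-n eps^2 / 2).
n, eps, trials = 200, 0.2, 20_000
hits = 0
for _ in range(trials):
    m = sum(random.choice((-1, 1)) for _ in range(n))
    if abs(m) >= n * eps:
        hits += 1
empirical = hits / trials
bound = 2 * math.exp(-n * eps**2 / 2)
print(empirical, bound)   # the empirical frequency sits below the bound
```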


Uniform integrability

Lemma 15.14 If X is integrable, then for each ε > 0 there exists K > 0 such that

∫_{|X|>K} |X| dP < ε.

Proof. lim_{K→∞} |X|1_{|X|>K} = 0 a.s., so the conclusion follows by dominated convergence.

Lemma 15.15 If X is integrable, then for each ε > 0 there exists δ > 0 such that P(F) < δ implies ∫_F |X| dP < ε.

Proof. Let Fn = {|X| ≥ n}. Then nP(Fn) ≤ E[|X|1_{Fn}] → 0. Select n so that E[|X|1_{Fn}] ≤ ε/2, and let δ = ε/(2n). Then P(F) < δ implies

∫_F |X| dP ≤ ∫_{Fn} |X| dP + ∫_{F ∩ F^c_n} |X| dP < ε/2 + nδ = ε.


Theorem 15.16 Let {Xα} be a collection of integrable random variables. The following are equivalent:

a) sup_α E[|Xα|] < ∞, and for each ε > 0 there exists δ > 0 such that P(F) < δ implies sup_α ∫_F |Xα| dP < ε.

b) lim_{K→∞} sup_α E[|Xα|1_{|Xα|>K}] = 0.

c) lim_{K→∞} sup_α E[|Xα| − |Xα| ∧ K] = 0.

d) There exists a convex function ϕ with lim_{|x|→∞} ϕ(x)/|x| = ∞ such that sup_α E[ϕ(|Xα|)] < ∞.


Proof. a) implies b) follows from

P{|Xα| > K} ≤ E[|Xα|]/K.

b) implies d): Select Nk such that

∑_{k=1}^∞ k sup_α E[1_{|Xα|>Nk}|Xα|] < ∞.

Define ϕ(0) = 0 and

ϕ′(x) = k, Nk ≤ x < Nk+1.

Recall that E[ϕ(|X|)] = ∫_0^∞ ϕ′(x)P{|X| > x} dx, so

E[ϕ(|Xα|)] = ∑_{k=1}^∞ k ∫_{Nk}^{Nk+1} P{|Xα| > x} dx ≤ ∑_{k=1}^∞ k sup_α E[1_{|Xα|>Nk}|Xα|].

d) implies b): E[1_{|Xα|>K}|Xα|] ≤ E[ϕ(|Xα|)]/(ϕ(K)/K), and ϕ(K)/K → ∞.


b) implies a): ∫_F |Xα| dP ≤ P(F)K + E[1_{|Xα|>K}|Xα|].

To see that (b) is equivalent to (c), observe that

E[|Xα| − |Xα| ∧ K] ≤ E[|Xα|1_{|Xα|>K}] ≤ 2E[|Xα| − |Xα| ∧ (K/2)].
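The sandwich above actually holds pathwise, so it can be checked on any sample. A small numerical sketch (the Pareto sample and the truncation level K are arbitrary assumptions):

```python
import random

random.seed(4)

# Pathwise check of the sandwich used for (b) <=> (c) in Theorem 15.16:
#   E[|X| - |X| ∧ K] <= E[|X| 1_{|X|>K}] <= 2 E[|X| - |X| ∧ (K/2)]
# on a heavy-ish tailed sample of Pareto(2.5) variables.
xs = [random.paretovariate(2.5) for _ in range(100_000)]
K = 3.0
mean = lambda vals: sum(vals) / len(vals)
lhs = mean([x - min(x, K) for x in xs])
mid = mean([x if x > K else 0.0 for x in xs])
rhs = 2 * mean([x - min(x, K / 2) for x in xs])
print(lhs <= mid <= rhs)  # True
```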


Uniformly integrable families

• For X integrable, Γ = {E[X|D] : D ⊂ F a sub-σ-algebra}.

• For X1, X2, . . . integrable and identically distributed,

Γ = {(X1 + · · · + Xn)/n : n = 1, 2, . . .}.

• For Y ≥ 0 integrable, Γ = {X : |X| ≤ Y }.


Uniform integrability and L1 convergence

Theorem 15.17 Xn → X in L1 if and only if Xn → X in probability and {Xn} is uniformly integrable.

Proof. If Xn → X in L1, then, since x ↦ (|x| − K)^+ is 1-Lipschitz,

|E[|Xn| − |Xn| ∧ K] − E[|X| − |X| ∧ K]| ≤ E[|Xn − X|],

so E[|Xn| − |Xn| ∧ K] → E[|X| − |X| ∧ K] uniformly in K, and Part (c) of Theorem 15.16 follows from

lim_{K→∞} E[|X| − |X| ∧ K] = 0.


Measurable functions

Let (Mi, Mi), i = 1, 2, 3, be measurable spaces.

f : M1 → M2 is measurable if f^{−1}(A) = {x ∈ M1 : f(x) ∈ A} ∈ M1 for each A ∈ M2.

Lemma 15.18 If f : M1 → M2 and g : M2 → M3 are measurable, then g ◦ f : M1 → M3 is measurable.


Dominated convergence theorem

Theorem 15.19 Let Xn → X and Yn → Y in probability. Suppose that |Xn| ≤ Yn a.s. and that E[Yn|D] → E[Y |D] in probability. Then

E[Xn|D] → E[X|D] in probability.

Proof. A sequence converges in probability if and only if every subsequence has a further subsequence that converges a.s., so we may as well assume almost sure convergence. Let Dm,c = {sup_{n≥m} E[Yn|D] ≤ c}. Then

E[Yn 1_{Dm,c} |D] = E[Yn|D] 1_{Dm,c} → E[Y |D] 1_{Dm,c} = E[Y 1_{Dm,c} |D] in L1.

Consequently, E[Yn 1_{Dm,c}] → E[Y 1_{Dm,c}], so Yn 1_{Dm,c} → Y 1_{Dm,c} in L1 by the ordinary dominated convergence theorem. It follows that Xn 1_{Dm,c} → X 1_{Dm,c} in L1 and hence

E[Xn|D] 1_{Dm,c} = E[Xn 1_{Dm,c} |D] → E[X 1_{Dm,c} |D] = E[X|D] 1_{Dm,c} in L1.


Since m and c are arbitrary, the theorem follows.


Metric spaces

d : S × S → [0,∞) is a metric on S if and only if d(x, y) = d(y, x), d(x, y) = 0 if and only if x = y, and d(x, y) ≤ d(x, z) + d(z, y).

If d is a metric then d ∧ 1 is a metric.

Examples

• R^m: d(x, y) = |x − y|

• C[0, 1]: d(x, y) = sup_{0≤t≤1} |x(t) − y(t)|

• C[0,∞): d(x, y) = ∫_0^∞ e^{−t} sup_{s≤t}(1 ∧ |x(s) − y(s)|) dt
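The C[0,∞) metric can be evaluated numerically. A rough sketch (the truncation point T and grid step dt are arbitrary assumptions; the e^{−t} weight makes the truncated tail negligible):

```python
import math

# Numerical sketch of the C[0,infty) metric
#   d(x, y) = ∫_0^∞ e^{-t} sup_{s<=t} (1 ∧ |x(s) - y(s)|) dt,
# approximated by a left-endpoint Riemann sum on [0, T].
def metric(x, y, T=30.0, dt=1e-3):
    total, running_sup, t = 0.0, 0.0, 0.0
    while t < T:
        running_sup = max(running_sup, min(1.0, abs(x(t) - y(t))))
        total += math.exp(-t) * running_sup * dt
        t += dt
    return total

# Example: x(t) = sin t against y = 0; the running sup reaches 1 at t = pi/2.
print(metric(math.sin, lambda t: 0.0))
```

For this pair the exact value is ∫_0^{π/2} e^{−t} sin t dt + e^{−π/2} ≈ 0.60, which the Riemann sum reproduces to grid accuracy.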


Sequential compactness

K ⊂ S is sequentially compact if every sequence {xn} ⊂ K has a convergent subsequence with limit in K.

Lemma 15.20 If (S, d) is a metric space, then K ⊂ S is compact if andonly if K is sequentially compact.

Proof. Suppose K is compact, and let {xn} ⊂ K. If x is not a limit point of {xn}, then there exists εx > 0 such that max{n : xn ∈ Bεx(x)} < ∞. If {xn} has no limit points, then {Bεx(x) : x ∈ K} is an open cover of K, and the existence of a finite subcover contradicts the definition of the εx.

Suppose K is sequentially compact, and let {Uα} be an open cover of K. Let x1 ∈ K and ε1 > (1/2) sup_α sup{r : Br(x1) ⊂ Uα}, and define recursively xk+1 ∈ K ∩ (∪_{l=1}^k Bεl(xl))^c and εk+1 > (1/2) sup_α sup{r : Br(xk+1) ⊂ Uα}. (If no such xk+1 exists, then Bε1(x1), . . . , Bεk(xk) cover K and there is a finite subcover from {Uα}.) By sequential compactness, {xk} has a limit point x, and x ∉ Bεk(xk) for any k. But setting ε = (1/2) sup_α sup{r : Br(x) ⊂ Uα}, we have εk > ε − d(x, xk), so if d(x, xk) < ε/2, then x ∈ Bεk(xk), a contradiction.


Completeness

A metric space (S, d) is complete if and only if every Cauchy sequence has a limit.

Completeness depends on the metric, not the topology. For example,

r(x, y) = |x/(1 + |x|) − y/(1 + |y|)|

is a metric giving the usual topology on the real line, but R is not complete under this metric.
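The sequence xn = n makes this concrete: its images n/(1 + n) are Cauchy in [0, 1), so {xn} is r-Cauchy, yet it has no limit in R. A two-line check (the particular values are illustrative):

```python
# Under r(x, y) = |x/(1+|x|) - y/(1+|y|)|, far-apart large numbers are
# r-close (both map near the "edge" 1), so x_n = n is r-Cauchy even though
# it is clearly not Cauchy in the usual metric |x - y|.
def r(x, y):
    return abs(x / (1 + abs(x)) - y / (1 + abs(y)))

print(r(1_000, 2_000))      # tiny
print(abs(1_000 - 2_000))   # usual distance stays large
```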


Measurable selection


Exercises

1. τ is an {Ft}-stopping time if for each t ≥ 0, {τ ≤ t} ∈ Ft. For a stopping time τ,

Fτ = {A ∈ F : {τ ≤ t} ∩ A ∈ Ft, t ≥ 0}.

(a) Show that Fτ is a σ-algebra.

(b) Show that for {Ft}-stopping times σ, τ, σ ≤ τ implies Fσ ⊂ Fτ. In particular, Fτ∧t ⊂ Ft.

(c) Let τ be a discrete {Ft}-stopping time satisfying {τ < ∞} = ∪_{k=1}^∞ {τ = tk} = Ω. Show that Fτ = σ{A ∩ {τ = tk} : A ∈ F_{tk}, k = 1, 2, . . .}.

(d) Show that the minimum of two stopping times is a stopping time and that the maximum of two stopping times is a stopping time.

2. Prove that W as defined in the first lecture is a σ-algebra.

3. Let N be a Poisson process with parameter λ. Then M(t) = N(t) − λt is a martingale. Compute [M]t.

4. Let 0 = τ0 < τ1 < · · · be stopping times satisfying lim_{k→∞} τk = ∞, and for k = 0, 1, 2, . . ., let ξk ∈ Fτk. Define

X(t) = ∑_{k=0}^∞ ξk 1_{[τk,τk+1)}(t).

Show that X is adapted.

Example: Let X be a cadlag adapted process and let ε > 0. Define τ^ε_0 = 0 and, for k = 0, 1, 2, . . .,

τ^ε_{k+1} = inf{t > τ^ε_k : |X(t) − X(τ^ε_k)| ∨ |X(t−) − X(τ^ε_k)| ≥ ε}.

Note that the τ^ε_k are stopping times, by Problem 1. Define

Xε(t) = ∑_{k=0}^∞ X(τ^ε_k)1_{[τ^ε_k, τ^ε_{k+1})}(t).


Then Xε is a piecewise constant, adapted process satisfying

sup_t |X(t) − Xε(t)| ≤ ε.
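The ε-skeleton construction in the example above is easy to compute for a path that can be sampled on a fine grid (the grid is a stand-in for a genuinely cadlag path; the horizon T, grid step dt, and the test path sin t are arbitrary assumptions of this sketch):

```python
import math

# eps-skeleton of a path: record a new level each time the sampled path has
# moved at least eps away from the last recorded level, mirroring the
# stopping times tau^eps_k of the example.
def skeleton(x, eps, T=10.0, dt=1e-3):
    taus, levels = [0.0], [x(0.0)]
    err = 0.0
    for j in range(1, int(round(T / dt)) + 1):
        t = j * dt
        if abs(x(t) - levels[-1]) >= eps:   # a new tau^eps_k
            taus.append(t)
            levels.append(x(t))
        err = max(err, abs(x(t) - levels[-1]))
    return taus, levels, err

taus, levels, err = skeleton(math.sin, 0.25)
print(len(taus), err)   # the skeleton tracks the path to within eps on the grid
```

On the sampling grid the error never reaches ε, which is the discrete analogue of sup_t |X(t) − Xε(t)| ≤ ε.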

5. Show that E[f(X)|D] = E[f(X)] for all bounded continuous functions (all bounded measurable functions) if and only if X is independent of D.

6. Let X and Y be S-valued random variables defined on (Ω, F, P), and let G ⊂ F be a sub-σ-algebra. Suppose that M ⊂ C(S) is separating and

E[f(X)|G] = f(Y ) a.s.

for every f ∈ M. Show that X = Y a.s.

7. Let τ be a discrete stopping time with values {ti}. Show that

E[Z|Fτ] = ∑_i E[Z|F_{ti}]1_{τ=ti}.

8. Let N be a stochastically continuous counting process with independent increments. Show that the increments of N are Poisson distributed.

9. Let {Nn} and N be counting processes. Suppose that (Nn(t1), . . . , Nn(tm)) ⇒ (N(t1), . . . , N(tm)) for all choices of t1, . . . , tm in T0, where T0 is dense in [0,∞). Show that Nn ⇒ N under the Skorohod topology.

10. Suppose that Y1, . . . , Ym are independent Poisson random variables with E[Yi] = λi. Let Y = ∑_{i=1}^m Yi. Compute P{Yi = 1|Y = 1}. More generally, compute P{Yi = k|Y = m}.

11. Let ξ be a Poisson random measure on (U, dU) with mean measure ν. For f ∈ M(U), f ≥ 0, show that

E[exp{−∫_U f(u)ξ(du)}] = exp{−∫(1 − e^{−f})dν}.
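The Laplace functional in Problem 11 is easy to check by simulation. The sketch below makes the illustrative assumptions U = [0, 1], ν = λ · Lebesgue, and f(u) = u; the points of ξ are then the arrival times in [0, 1) of a rate-λ Poisson process.

```python
import math
import random

random.seed(3)

# Monte Carlo check of E[exp(-∫ f dξ)] = exp(-∫ (1 - e^{-f}) dν)
# for a Poisson random measure on [0, 1] with intensity lam and f(u) = u.
lam, trials = 3.0, 200_000
acc = 0.0
for _ in range(trials):
    s, t = 0.0, random.expovariate(lam)
    while t < 1.0:
        s += t                      # accumulate f(u) = u at each point u = t
        t += random.expovariate(lam)
    acc += math.exp(-s)
empirical = acc / trials
exact = math.exp(-lam * math.exp(-1.0))  # since ∫_0^1 (1 - e^{-u}) du = e^{-1}
print(empirical, exact)
```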


12. Give an example of a right continuous process X such that ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du)ds < ∞ a.s. but ∫_{U×[0,t]} |X(u, s)| ξ(du × ds) = ∞ a.s.

13. Let Y, ξ1, and ξ2 be independent random variables with values in complete, separable metric spaces E, S1, and S2. Suppose that G : E × S1 → E0 and H : E × S2 → E0 are Borel measurable functions and that G(Y, ξ1) = H(Y, ξ2) a.s. Show that there exists a Borel measurable function F : E → E0 such that F(Y ) = G(Y, ξ1) = H(Y, ξ2) a.s.


14. Let X be a measurable stochastic process that is bounded by a constant c, and let Nn be a Poisson process with parameter n that is independent of X. Let S^n_k denote the jump times of Nn. Show that

∑_k X(S^n_k)(S^n_{k+1} ∧ t − S^n_k ∧ t)

converges in L1 to ∫_0^t X(s) ds.

15. Suppose that X is a Markov process with generator A and semigroup T(t). Let V be a unit Poisson process independent of X, and define

Xn(t) = X((1/n)V(nt)).

Show that Xn is a Markov process. What is its generator?

16. Let {Yk} be a discrete time Markov chain with state space E and transition operator µ(x, Γ) = P{Yk+1 ∈ Γ|Yk = x}. Let λ : E → [0,∞), and let ∆i, i = 0, 1, . . ., be independent, unit exponential random variables. Define

X(t) = Yk for ∑_{i=0}^{k−1} ∆i/λ(Yi) ≤ t < ∑_{i=0}^{k} ∆i/λ(Yi).

Show that X is Markov and compute its generator. (Assume λ is bounded to avoid technicalities.)
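The construction in Problem 16 is straightforward to simulate. A sketch under illustrative assumptions (a two-state chain E = {0, 1}, µ forcing a switch at every step, rates λ(0) = 1 and λ(1) = 3): the holding time in state i is ∆k/λ(i) ~ Exp(λ(i)), so its empirical mean should be 1/λ(i).

```python
import random

random.seed(6)

# Holding times of the jump process built from a discrete chain plus
# exponential clocks: in state i the wait is Delta / lambda(i) ~ Exp(lambda(i)).
lam = {0: 1.0, 1: 3.0}
holds = {0: [], 1: []}
state = 0
for _ in range(100_000):
    holds[state].append(random.expovariate(1.0) / lam[state])
    state = 1 - state          # mu: deterministic switch
m0 = sum(holds[0]) / len(holds[0])
m1 = sum(holds[1]) / len(holds[1])
print(m0, m1)   # approx 1/lambda(0) = 1.0 and 1/lambda(1) = 1/3
```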


17. For a one-dimensional diffusion with generator

Af(x) = (1/2)a(x)f′′(x) + b(x)f′(x),

check the claim in (3.2).

18. Suppose that A ⊂ B(E) × B(E) and that a solution of the martingale problem for (A, δx) exists for each x ∈ E. Show that A is dissipative. (Recall the equivalent forms of the martingale problem.)

19. Prove Lemma 4.7.


20. Let E = (0, 1] and L = {f ∈ C((0, 1]) : lim_{x→0} f(x) exists}. Consider the operator

Af(x) = −f′(x)

with domain

D(A) = {f ∈ L : f′ ∈ L, lim_{x→0} f(x) = ∫_0^1 f(y)dy}.

a) Show that R(λ − A) = L (implying uniqueness for the corresponding martingale problem), but D(A) is not dense in L.

b) Describe the behavior of the process.

c) Is this process quasi-left continuous? Explain.

21. Give an example of a right continuous, adapted process X (not predictable!) such that

∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du)ds < ∞ a.s.

but

∫_{U×[0,t]} |X(u, s)| ξ(du × ds) = ∞ a.s.

22. Suppose that the solution of (5.5) satisfies N(t) < ∞ for all t > 0. Show that N has the same distribution as the solution of

N(t) = Y(∫_0^t λ(s, Z, N)ds),

where Y is a unit Poisson process.


23. Suppose that X is a solution of the martingale problem for A. Suppose (for simplicity) that 0 < ε ≤ β(x) ≤ ε^{−1}, and let γ(t) satisfy

∫_0^{γ(t)} β(X(s))^{−1} ds = t.

Show that Y = X ∘ γ is a solution of the martingale problem for βA, that is, that

f(Y(t)) − f(Y(0)) − ∫_0^t β(Y(s))Af(Y(s)) ds

is a martingale for each f ∈ D(A).

24. Develop a stochastic differential equation model for the movement of a toad in a hailstorm. Assume that when a hailstone lands, the toad jumps directly away from where the hailstone landed, and assume that the distance the toad jumps depends on how close the hailstone lands to the toad and how large the hailstone is. For simplicity, assume that the toad is on an infinite, flat surface and that the statistics of the hailstorm are translationally and rotationally invariant.

25. Derive the generator for the model developed in Problem 24.

26. For the model developed in Problem 24, give conditions under which the movement of the toad can be approximated by a Brownian motion.


27. Use the martingale central limit theorem to prove a central limit theorem for a continuous time finite Markov chain ξ(t), t ≥ 0. Specifically, assume that the chain is irreducible so that there is a unique stationary distribution {πk}, and prove convergence for the vector-valued process Un = (U^1_n, . . . , U^d_n) defined by

U^k_n(t) = (1/√n) ∫_0^{nt} (1_{ξ(s)=k} − πk) ds.

28. Show that {Un} defined in Problem 27 is not a "good" sequence of semimartingales. (The easiest approach is probably to show that the conclusion of the stochastic integral convergence theorem is not valid.)

29. Show that Un can be written as Mn + Zn where {Mn} is a "good" sequence and Zn ⇒ 0.

30. (Random evolutions) Let ξ be as in Problem 27, and let Xn satisfy

Ẋn(t) = √n F(Xn(t), ξ(nt)).

Suppose ∑_i F(x, i)πi = 0. Write Xn as a stochastic differential equation driven by Un, give conditions under which Xn converges in distribution to a limit X, and identify the limit.


31. Prove the following:

Lemma 15.21 Suppose {Xn} is relatively compact in DS[0,∞) and that for each n, {F^n_t} is a filtration and π^{(n)}_t is the conditional distribution of Xn(t) given F^n_t. Then for each ε > 0 and T > 0, there exists a compact Kε,T ⊂ S such that

sup_n P{sup_{t≤T} π^{(n)}_t(K^c_{ε,T}) ≥ ε} ≤ ε.


32. Let ξ1, ξ2, . . . be i.i.d. uniform [0, r] and independent of N(0). Let Ui(0) = ξi, i = 1, . . . , N(0), and for b > 0, let

U̇i(t) = bUi(t), so Ui(t) = Ui(0)e^{bt}.

Define

N(t) = #{i : Ui(t) < r}.

Let f(u, n) = ∏_{i=1}^n g(ui), where 0 ≤ g ≤ 1, g is continuously differentiable, and g(ui) = 1 for ui > r. The generator for U(t) = (U1(t), . . . , U_{N}(t)) is

Af(u, n) = f(u, n) ∑_{i=1}^n b ui g′(ui)/g(ui),

and

f(U(t), N(t)) − f(U(0), N(0)) − ∫_0^t Af(U(s), N(s)) ds = 0

is a martingale.

(a) Apply the Markov mapping theorem with γ(u1, . . . , un) = n and

α(n, du) = (1/r^n) du1 · · · dun

to show that N is a Markov process.

(b) Give a direct argument that shows why N is a Markov process.


33. Verify the following moment identities for a Poisson random measure ξ with mean measure ν:

E[e^{∫ f(z)ξ(dz)}] = e^{∫(e^f − 1)dν},

or, letting ξ = ∑_i δ_{Zi},

E[∏_i g(Zi)] = e^{∫(g−1)dν}.

Similarly,

E[∑_j h(Zj) ∏_i g(Zi)] = (∫ hg dν) e^{∫(g−1)dν},

and

E[∑_{i≠j} h(Zi)h(Zj) ∏_k g(Zk)] = (∫ hg dν)^2 e^{∫(g−1)dν}.

34. Consider a process with state space ({−1, 1} × E × [0, r])^n and generator given, for f(z) = ∏_{i=1}^{n} g(xi, yi, ui) with zi = (xi, yi, ui), by

Af(z) = ∑_{i=1}^{n} f(z) λ (g(−xi, yi, ui)/g(xi, yi, ui) − 1) + ∑_{i≠j} µ 1_{ui<uj, xi=xj} f(z) (g(xj, yi, uj)/g(xj, yj, uj) − 1).

Apply the Markov mapping theorem to characterize the process ηn(t) = ∑_{i=1}^{n} δ_{(xi,yi)} and, under appropriate scaling, use the lookdown model to derive a limiting model for ηn as r → ∞.


Stochastic analysis exercises

1. Show that if Y1 is cadlag and Y2 is finite variation, then

[Y1, Y2]t = ∑_{s≤t} ∆Y1(s)∆Y2(s).

2. Using the fact that martingales have finite quadratic variation, show that semimartingales have finite quadratic variation.

3. Using the above results, show that the covariation of two semimartingales exists.

4. Consider the stochastic differential equation

X(t) = X(0) + ∫_0^t aX(s)dW(s) + ∫_0^t bX(s)ds.

Find α and β so that

X(t) = X(0) exp{αW(t) + βt}

is a solution.

5. Let W be standard Brownian motion and suppose (X, Y) satisfies

X(t) = x + ∫_0^t Y(s)ds,

Y(t) = y − ∫_0^t X(s)ds + ∫_0^t cX(s−)dW(s),

where c ≠ 0 and x^2 + y^2 > 0. Assuming all moments are finite, define m1(t) = E[X(t)^2], m2(t) = E[X(t)Y(t)], and m3(t) = E[Y(t)^2]. Find a system of linear differential equations satisfied by (m1, m2, m3), and show that the expected "total energy" E[X(t)^2 + Y(t)^2] is asymptotic to ke^{λt} for some k > 0 and λ > 0.
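A candidate answer to Problem 4 can be tested numerically without being given away. The Euler–Maruyama sketch below (all parameter values are arbitrary assumptions) simulates one path of X together with the driving W; once α and β have been determined via Itô's formula, X(T) can be compared with X(0) exp{αW(T) + βT} on the same path.

```python
import math
import random

random.seed(7)

# Euler-Maruyama for dX = a X dW + b X dt on [0, T], recording W as well
# so that the simulated X(T) can be compared with a closed-form candidate.
a, b, x0, T, n = 0.5, 0.2, 1.0, 1.0, 100_000
dt = T / n
x, w = x0, 0.0
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))
    x += a * x * dw + b * x * dt
    w += dw
print(x, w)   # plug w into x0 * exp(alpha * w + beta * T) and compare with x
```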


6. Let X and Y be independent Poisson processes. Show that with probability one, X and Y do not have simultaneous discontinuities and that [X, Y]t = 0 for all t ≥ 0.

7. Two local martingales, M and N, are called orthogonal if [M, N]t = 0 for all t ≥ 0.

(a) Show that if M and N are orthogonal, then [M + N]t = [M]t + [N]t.

(b) Show that if M and N are orthogonal, then M and N do not have simultaneous discontinuities.

(c) Show that if M is finite variation, then M and N are orthogonal if and only if they have no simultaneous discontinuities.

8. Let W be standard Brownian motion. Use Itô's formula to show that

M(t) = e^{αW(t) − (1/2)α^2 t}

is a martingale. (Note that the martingale property can be checked easily by direct calculation; however, the problem asks you to use Itô's formula to check the martingale property.)

9. Let N be a Poisson process with parameter λ. Use Itô's formula to show that

M(t) = e^{αN(t) − λ(e^α − 1)t}

is a martingale.

10. Let X satisfy

X(t) = x + ∫_0^t σX(s)dW(s) + ∫_0^t bX(s)ds.

Let Y = X^2.

(a) Derive a stochastic differential equation satisfied by Y.

(b) Find E[X(t)^2] as a function of t.


11. Suppose that the solution of dX = b(X)dt + σ(X)dW, X(0) = x, is unique for each x. Let τ = inf{t > 0 : X(t) ∉ (α, β)}, and suppose that for some α < x < β, P{τ < ∞, X(τ) = α|X(0) = x} > 0 and P{τ < ∞, X(τ) = β|X(0) = x} > 0.

(a) Show that P{τ < T, X(τ) = α|X(0) = x} is a nonincreasing function of x, α < x < β.

(b) Show that there exists a T > 0 such that

inf_x max{P{τ < T, X(τ) = α|X(0) = x}, P{τ < T, X(τ) = β|X(0) = x}} > 0.

(c) Let γ be a nonnegative random variable. Suppose that there exist T > 0 and ρ < 1 such that for each n, P{γ > (n + 1)T |γ > nT} < ρ. Show that E[γ] < ∞.

(d) Show that E[τ] < ∞.


References

Jean Bertoin and Jean-François Le Gall. Stochastic flows associated to coalescent processes. III. Limit theorems. Illinois J. Math., 50(1-4):147–181 (electronic), 2006. ISSN 0019-2082. URL http://projecteuclid.org/getRecord?id=euclid.ijm/1258059473.

Abhay G. Bhatt and Rajeeva L. Karandikar. Invariant measures and evolution equations for Markov processes characterized via martingale problems. Ann. Probab., 21(4):2246–2268, 1993. ISSN 0091-1798.

Matthias Birkner, Jochen Blath, Martin Möhle, Matthias Steinrücken, and Johanna Tams. A modified lookdown construction for the Xi-Fleming-Viot process with mutation and populations with recurrent bottlenecks. ALEA Lat. Am. J. Probab. Math. Stat., 6:25–61, 2009. ISSN 1980-0436.

David Blackwell and Lester E. Dubins. An extension of Skorohod's almost sure representation theorem. Proc. Amer. Math. Soc., 89(4):691–692, 1983. ISSN 0002-9939. doi: 10.2307/2044607. URL http://dx.doi.org/10.2307/2044607.

R. Buckdahn, H.-J. Engelbert, and A. Răşcanu. On weak solutions of backward stochastic differential equations. Teor. Veroyatn. Primen., 49(1):70–108, 2004. ISSN 0040-361X.

D. L. Burkholder. Distribution function inequalities for martingales. Ann. Probability, 1:19–42, 1973.

Nhansook Cho. Weak convergence of stochastic integrals driven by martingale measure. Stochastic Process. Appl., 59(1):55–79, 1995. ISSN 0304-4149. doi: 10.1016/0304-4149(95)00031-2. URL http://dx.doi.org/10.1016/0304-4149(95)00031-2.

E. Çinlar, J. Jacod, P. Protter, and M. J. Sharpe. Semimartingales and Markov processes. Z. Wahrsch. Verw. Gebiete, 54(2):161–219, 1980. ISSN 0044-3719.


Peter Donnelly and Thomas G. Kurtz. Particle representations for measure-valued population models. Ann.Probab., 27(1):166–205, 1999. ISSN 0091-1798.

E. B. Dynkin. Markov processes. Vols. I, II, volume 122 of Translated with the authorization and assistance of theauthor by J. Fabius, V. Greenberg, A. Maitra, G. Majone. Die Grundlehren der Mathematischen Wissenschaften,Bande 121. Academic Press Inc., Publishers, New York, 1965.

Pedro Echeverrıa. A criterion for invariant measures of Markov processes. Z. Wahrsch. Verw. Gebiete, 61(1):1–16, 1982. ISSN 0044-3719.

H. J. Engelbert. On the theorem of T. Yamada and S. Watanabe. Stochastics Stochastics Rep., 36(3-4):205–216,1991. ISSN 1045-1129.

Alison Etheridge and Peter March. A note on superprocesses. Probab. Theory Related Fields, 89(2):141–147, 1991.ISSN 0178-8051.

Stewart N. Ethier and Thomas G. Kurtz. Markov processes: Characterization and Convergence. Wiley Series inProbability and Mathematical Statistics: Probability and Mathematical Statistics. John Wiley & Sons Inc.,New York, 1986. ISBN 0-471-08186-8.

Steven N. Evans. Two representations of a conditioned superprocess. Proc. Roy. Soc. Edinburgh Sect. A, 123(5):959–971, 1993. ISSN 0308-2105.

Steven N. Evans and Edwin Perkins. Measure-valued Markov branching processes conditioned on nonextinction. Israel J. Math., 71(3):329–337, 1990. ISSN 0021-2172.

Carl Graham. McKean-Vlasov Itô-Skorohod equations, and nonlinear diffusions with discrete jump sets. Stochastic Process. Appl., 40(1):69–82, 1992. ISSN 0304-4149.

Akira Ichikawa. Filtering and control of stochastic differential equations with unbounded coefficients. Stochastic Anal. Appl., 4(2):187–212, 1986. ISSN 0736-2994.

Nobuyuki Ikeda and Shinzo Watanabe. Stochastic differential equations and diffusion processes, volume 24 of North-Holland Mathematical Library. North-Holland Publishing Co., Amsterdam, second edition, 1989. ISBN 0-444-87378-3.

Kiyosi Itô. On stochastic differential equations. Mem. Amer. Math. Soc., 1951(4):51, 1951. ISSN 0065-9266.

Jean Jacod. Weak and strong solutions of stochastic differential equations. Stochastics, 3(3):171–191, 1980. ISSN0090-9491.

A. Jakubowski, J. Mémin, and G. Pagès. Convergence en loi des suites d'intégrales stochastiques sur l'espace D1 de Skorokhod. Probab. Theory Related Fields, 81(1):111–137, 1989. ISSN 0178-8051.

Adam Jakubowski. On the Skorokhod topology. Ann. Inst. H. Poincaré Probab. Statist., 22(3):263–285, 1986. ISSN 0246-0203. URL http://www.numdam.org/item?id=AIHPB_1986__22_3_263_0.

Adam Jakubowski. Continuity of the Itô stochastic integral in Hilbert spaces. Stochastics Stochastics Rep., 59(3-4):169–182, 1996. ISSN 1045-1129.

Rajeeva L. Karandikar. On pathwise stochastic integration. Stochastic Process. Appl., 57(1):11–18, 1995. ISSN 0304-4149.

R. Khas’minskii. On stochastic processes defined by differential equations with a small parameter. Theoryof Probability and Its Applications, 11(2):211–228, 1966. doi: 10.1137/1111018. URL http://epubs.siam.org/doi/abs/10.1137/1111018.

J. F. C. Kingman. The coalescent. Stochastic Process. Appl., 13(3):235–248, 1982. ISSN 0304-4149. doi: 10.1016/0304-4149(82)90011-4. URL http://dx.doi.org/10.1016/0304-4149(82)90011-4.

Peter M. Kotelenez. A class of quasilinear stochastic partial differential equations of McKean-Vlasov type withmass conservation. Probab. Theory Related Fields, 102(2):159–188, 1995. ISSN 0178-8051.

Peter M. Kotelenez and Thomas G. Kurtz. Macroscopic limits for stochastic partial differential equations ofMcKean-Vlasov type. Probab. Theory Related Fields, 146(1-2):189–222, 2010. ISSN 0178-8051. doi: 10.1007/s00440-008-0188-0. URL http://dx.doi.org/10.1007/s00440-008-0188-0.

Thomas G. Kurtz. Representations of Markov processes as multiparameter time changes. Ann. Probab., 8(4):682–715, 1980. ISSN 0091-1798.

Thomas G. Kurtz. Averaging for martingale problems and stochastic approximation. In Applied stochastic analysis (New Brunswick, NJ, 1991), volume 177 of Lecture Notes in Control and Inform. Sci., pages 186–209. Springer, Berlin, 1992. doi: 10.1007/BFb0007058. URL http://dx.doi.org/10.1007/BFb0007058.

Thomas G. Kurtz. Martingale problems for conditional distributions of Markov processes. Electron. J. Probab.,3:no. 9, 29 pp. (electronic), 1998. ISSN 1083-6489.

Thomas G. Kurtz. The Yamada-Watanabe-Engelbert theorem for general stochastic equations and inequalities.Electron. J. Probab., 12:951–965, 2007. ISSN 1083-6489. doi: 10.1214/EJP.v12-431. URL http://dx.doi.org/10.1214/EJP.v12-431.

Thomas G. Kurtz. Weak and strong solutions of general stochastic models. (preprint), 2013. URL http://www.math.wisc.edu/~kurtz/papers/13PreprintWkStr.pdf.

Thomas G. Kurtz and Giovanna Nappo. The filtered martingale problem. In Dan Crisan and Boris Rozovskii,editors, Handbook on Nonlinear Filtering, chapter 5, pages 129–165. Oxford University Press, 2011.

Thomas G. Kurtz and Daniel L. Ocone. Unique characterization of conditional distributions in nonlinear filtering. Ann. Probab., 16(1):80–107, 1988. ISSN 0091-1798.

Thomas G. Kurtz and Philip Protter. Weak limit theorems for stochastic integrals and stochastic differential equations. Ann. Probab., 19(3):1035–1070, 1991. ISSN 0091-1798.

Thomas G. Kurtz and Philip E. Protter. Weak convergence of stochastic integrals and differential equations. II. Infinite-dimensional case. In Probabilistic models for nonlinear partial differential equations (Montecatini Terme, 1995), volume 1627 of Lecture Notes in Math., pages 197–285. Springer, Berlin, 1996.

Thomas G. Kurtz and Eliane R. Rodrigues. Poisson representations of branching Markov and measure-valued branching processes. Ann. Probab., 39(3):939–984, 2011. doi: 10.1214/10-AOP574. URL http://dx.doi.org/10.1214/10-AOP574.

Thomas G. Kurtz and Richard H. Stockbridge. Stationary solutions and forward equations for controlled andsingular martingale problems. Electron. J. Probab., 6:no. 17, 52 pp. (electronic), 2001. ISSN 1083-6489.

Thomas G. Kurtz and Jie Xiong. Particle representations for a class of nonlinear SPDEs. Stochastic Process. Appl., 83(1):103–126, 1999. ISSN 0304-4149.

E. Lenglart, D. Lépingle, and M. Pratelli. Présentation unifiée de certaines inégalités de la théorie des martingales. In Seminar on Probability, XIV (Paris, 1978/1979) (French), volume 784 of Lecture Notes in Math., pages 26–52. Springer, Berlin, 1980. With an appendix by Lenglart.

Gisiro Maruyama. Continuous Markov processes and stochastic equations. Rend. Circ. Mat. Palermo (2), 4:48–90, 1955. ISSN 0009-725X.

Mark A. Pinsky. Lectures on random evolution. World Scientific Publishing Co. Inc., River Edge, NJ, 1991. ISBN 981-02-0559-7.

Jim Pitman. Coalescents with multiple collisions. Ann. Probab., 27(4):1870–1902, 1999. ISSN 0091-1798.

Philip Protter. Stochastic integration and differential equations, volume 21 of Applications of Mathematics (NewYork). Springer-Verlag, Berlin, 1990. ISBN 3-540-50996-8. A new approach.

Fraydoun Rezakhanlou. Propagation of chaos for symmetric simple exclusions. Comm. Pure Appl. Math., 47(7):943–957, 1994. ISSN 0010-3640. doi: 10.1002/cpa.3160470703. URL http://dx.doi.org/10.1002/cpa.3160470703.

L. C. G. Rogers and J. W. Pitman. Markov functions. Ann. Probab., 9(4):573–582, 1981. ISSN 0091-1798. URL http://links.jstor.org/sici?sici=0091-1798(198108)9:4<573:MF>2.0.CO;2-G&origin=MSN.

Murray Rosenblatt. Functions of Markov processes. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 5:232–243, 1966.

Jason Schweinsberg. A necessary and sufficient condition for the Λ-coalescent to come down from infinity. Electron. Comm. Probab., 5:1–11 (electronic), 2000. ISSN 1083-589X.

A. V. Skorokhod. Studies in the theory of random processes. Translated from the Russian by Scripta Technica, Inc.Addison-Wesley Publishing Co., Inc., Reading, Mass., 1965.

Daniel W. Stroock. Diffusion processes associated with Lévy generators. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 32(3):209–244, 1975.

Roger Tribe. The behavior of superprocesses near extinction. Ann. Probab., 20(1):286–311, 1992. ISSN 0091-1798.

Toshio Yamada and Shinzo Watanabe. On the uniqueness of solutions of stochastic differential equations. J. Math. Kyoto Univ., 11:155–167, 1971.

Abstract

Martingale problems and stochastic equations for Markov processes

As in the case of ordinary and partial differential equations, it is natural to attempt to specify stochastic models by describing their behavior over infinitesimal time intervals. There are a variety of approaches to making these, usually heuristic, infinitesimal specifications precise. The focus of this course will be on two of these approaches, the formulation of martingale problems and the formulation of stochastic equations, and the relationship between the two. Along the way, we will consider a variety of examples and natural questions that illustrate the strengths of each approach.
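To fix notation for the two formulations the abstract refers to, here is a standard sketch for a diffusion in Rd (the generic textbook setting, not a summary of any particular section of these notes): a process X solves the martingale problem for a generator A if, for every f in the domain of A, the compensated process below is a martingale, while the stochastic-equation approach specifies X pathwise through coefficients b and σ.

```latex
% Martingale-problem formulation: X solves the martingale problem for A
% if, for every f in the domain of A,
\[
  M_f(t) \;=\; f(X(t)) \;-\; f(X(0)) \;-\; \int_0^t A f(X(s))\,ds
\]
% is a martingale.
%
% Stochastic-equation (Ito) formulation: X is specified pathwise by
\[
  X(t) \;=\; X(0) \;+\; \int_0^t \sigma(X(s))\,dW(s) \;+\; \int_0^t b(X(s))\,ds .
\]
% Ito's formula links the two: a solution of the equation solves the
% martingale problem for
\[
  A f(x) \;=\; \tfrac12 \sum_{i,j} \bigl(\sigma\sigma^{\top}\bigr)_{ij}(x)\,
               \partial_i \partial_j f(x) \;+\; b(x)\cdot\nabla f(x),
  \qquad f \in C_c^2(\mathbb{R}^d).
\]
```

The converse direction, recovering a stochastic equation from a martingale problem, is more delicate and is the kind of equivalence question the course outline lists under "Equivalence of stochastic equations and martingale problems."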