Brownian motion: Heuristic motivation (pages.stat.wisc.edu/~yzwang/Ito.pdf)


Brownian motion: Heuristic motivation

The lognormal distribution goes back to Louis Bachelier's (1900) dissertation at the Sorbonne, called The Theory of Speculation. Bachelier's work anticipated Einstein's (1905) theory of Brownian motion. In 1827, Robert Brown, a Scottish botanist, observed the erratic, unpredictable motion of pollen grains (in water) under a microscope. In 1905 Einstein understood that the movement was due to bombardment by water molecules, and he developed a mathematical theory. Later, Norbert Wiener, an M.I.T. mathematician, developed a more precise mathematical model of Brownian motion, now called the Wiener process.

Random Walk

Suppose ε_j are i.i.d. standard normal random variables. Partition the interval [0, T] into n subintervals of length ∆ = T/n. Let t_k = k∆,


Figure 1: Simulated Brownian motion

Figure 2: The first 50 steps of Brownian motion


Figure 3: Two sample paths of Brownian motion (simulated 2000 step Random Walk)

Figure 4: Two sample paths of geometric Brownian motion (simulated 2000 step geometric Random Walk)


k = 1, · · · , n, and define the normalized random walk

B^n_{t_k} = √(T/n) ∑_{ℓ=1}^{k} ε_ℓ.

Figure 5: Simulated 21 step Random Walk

Plot (t_k, B^n_{t_k}).
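The construction above is straightforward to simulate; a minimal Python sketch, assuming NumPy (function and variable names are illustrative, not from the source):

```python
import numpy as np

def random_walk_bm(T=1.0, n=2000, seed=None):
    """Return (t_k, B^n_{t_k}) with B^n_{t_k} = sqrt(T/n) * sum_{l<=k} eps_l."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)          # i.i.d. standard normals eps_1, ..., eps_n
    t = np.arange(1, n + 1) * (T / n)     # t_k = k * Delta
    B = np.sqrt(T / n) * np.cumsum(eps)   # normalized partial sums
    return t, B

t, B = random_walk_bm(T=1.0, n=2000, seed=0)
```

Plotting (t, B) reproduces paths like those in Figures 1 through 5; Var(B^n_{t_k}) = t_k can be checked empirically across independent paths.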

Properties of {B^n_{t_k} : t_k = kT/n ∈ [0, T], k = 1, · · · , n}

1. B^n_0 = 0,

2. Independent increments: B^n_{t_k} − B^n_{t_j} is independent of B^n_{t_i}, t_i ≤ t_j ≤ t_k,

3. Normality: the B^n_{t_k} are jointly normal, B^n_{t_k} ∼ N(0, t_k), B^n_{t_k} − B^n_{t_j} ∼ N(0, t_k − t_j), and Cov(B^n_{t_i}, B^n_{t_j}) = min(t_i, t_j).

Proof. t_i ≤ t_j ≤ t_k ↔ i ≤ j ≤ k. Among

ε_1, · · · , ε_i, ε_{i+1}, · · · , ε_j, ε_{j+1}, · · · , ε_k,

B^n_{t_i} = √(T/n) ∑_{ℓ=1}^{i} ε_ℓ

relies on ε_1, · · · , ε_i, while

B^n_{t_k} − B^n_{t_j} = √(T/n) ∑_{ℓ=j+1}^{k} ε_ℓ

depends on ε_{j+1}, · · · , ε_k. Thus they are independent.

E[B^n_{t_k} − B^n_{t_j}] = √(T/n) ∑_{ℓ=j+1}^{k} E ε_ℓ = 0,

Var[B^n_{t_k} − B^n_{t_j}] = (T/n) ∑_{ℓ=j+1}^{k} Var(ε_ℓ) = (T/n)(k − j) = t_k − t_j.

For t_i ≤ t_j,

Cov(B^n_{t_i}, B^n_{t_j}) = Cov(B^n_{t_i}, B^n_{t_j} − B^n_{t_i} + B^n_{t_i})
= Cov(B^n_{t_i}, B^n_{t_j} − B^n_{t_i}) + Cov(B^n_{t_i}, B^n_{t_i})
= Var(B^n_{t_i}) = t_i.

As n → ∞ [or ∆ → 0],

{B^n_{t_k} : t_k = kT/n ∈ [0, T], k = 1, · · · , n} −→ {B_t : t ∈ [0, T]}.


Figure 6: Brownian motion: Simulated 300 step Random Walk

Brownian motion

The continuous-time stochastic process B_t, t ∈ [0, T], is called standard Brownian motion (or the Wiener process).

Properties of {B_t : t ∈ [0, T]}

1. B_0 = 0,

2. Independent increments: B_t − B_s is independent of B_r, r ≤ s ≤ t,

3. Normality: B_t − B_s ∼ N(0, t − s).

(a) For any 0 < s_1 < s_2 < · · · < s_m, the B_{s_j} are jointly normal. (b) Cov(B_s, B_t) = min(s, t).


Geometric Brownian motion

The most common model by far in finance is one where the security price is based on a geometric Brownian motion.

If one invests $1,000 in a stock selling for $1 and it goes up to $2, one has the same profit, namely $1,000, as if one invests $1,000 in a stock selling for $100 and it goes up to $200. Our opportunities for gain are the same for both. It is the proportional increase one wants; that is, it is ∆S_t/S_t that matters, not ∆S_t. Therefore, one sets ∆S_t/S_t to be the quantity related to a Brownian motion. Different stocks have different volatilities. In addition, one expects a mean rate of return µ on one's investment that is positive. In fact, one expects the mean rate of return to be higher than the risk-free interest rate, because one expects something in return for undertaking risk.

For modeling the stock price, the following model is frequently used:

(S_{t+∆} − S_t)/S_t = µ ∆ + σ √∆ ε_t,
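The discrete recursion above can be iterated directly; a minimal sketch under the stated model (parameter values are illustrative assumptions, not from the source):

```python
import numpy as np

def gbm_discrete(S0=1.0, mu=0.1, sigma=0.3, T=1.0, n=250, seed=None):
    """Iterate S_{t+Delta} = S_t * (1 + mu*Delta + sigma*sqrt(Delta)*eps_t)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    eps = rng.standard_normal(n)
    rel_returns = mu * dt + sigma * np.sqrt(dt) * eps   # (S_{t+dt} - S_t)/S_t
    return S0 * np.cumprod(1.0 + rel_returns)           # path S_{t_1}, ..., S_{t_n}

path = gbm_discrete(seed=0)
```

Averaging the terminal values over many paths approximates E(S_T) = S_0 e^{µT}, consistent with the moment formulas derived below.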


Figure 7: Geometric Brownian motion

As ∆ → 0, it is convenient to write

dS_t/S_t = µ dt + σ dB_t.

Such a continuous process is an idealization of the discrete version and is called a geometric Brownian motion.

S_t = S_0 exp{(µ − σ²/2) t + σ B_t}.

Given S_0,

log S_T ∼ N(log S_0 + (µ − σ²/2) T, σ² T),

E(S_T) = S_0 exp{µ T},


Var(S_T) = S_0² exp{2µT} [exp{σ²T} − 1].

Proof. For X ∼ N(ν, τ²), the moment generating function is

E(e^{aX}) = exp(aν + a²τ²/2).

Let X = log S_T, so S_T = e^X with ν = log S_0 + (µ − σ²/2)T and τ² = σ²T. Then

E(S_T) = E(e^X) = exp(ν + τ²/2) = S_0 exp{µT},

Var(S_T) = E(S_T²) − [E(S_T)]² = exp(2ν + 2τ²) − exp(2ν + τ²) = S_0² exp{2µT} [exp{σ²T} − 1].
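These two moment formulas admit a quick Monte Carlo sanity check; a sketch assuming NumPy (parameter values are illustrative):

```python
import numpy as np

def gbm_moments_mc(S0=1.0, mu=0.05, sigma=0.2, T=1.0, m=200_000, seed=0):
    """Sample S_T = S0*exp((mu - sigma^2/2)T + sigma*B_T); compare sample moments
    with the exact lognormal mean and variance."""
    rng = np.random.default_rng(seed)
    BT = np.sqrt(T) * rng.standard_normal(m)            # B_T ~ N(0, T)
    ST = S0 * np.exp((mu - sigma**2 / 2) * T + sigma * BT)
    mean_exact = S0 * np.exp(mu * T)
    var_exact = S0**2 * np.exp(2 * mu * T) * (np.exp(sigma**2 * T) - 1.0)
    return ST.mean(), mean_exact, ST.var(), var_exact

m_mc, m_ex, v_mc, v_ex = gbm_moments_mc()
```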


Brownian motion

A process W_t is called a Brownian motion (or Wiener process) if

(1) W_0 = 0;

(2) independent increments: for r < s < t, W_t − W_s and W_r are independent;

(3) for s < t, W_t − W_s follows a normal distribution with mean zero and variance σ² (t − s), where σ is a positive constant (below we take σ = 1).

For Brownian motion W_t, let F_t = σ{W_s, s ≤ t}, t ∈ [0, 1].

Martingale: W_t is a martingale:

E[W_t | F_s] = W_s.

Proof.

E[W_t | F_s] = W_s + E[W_t − W_s | F_s] = W_s + E[W_t − W_s] = W_s.

Remark. E[W_t²] = t, and W_t² is a submartingale:

E[W_t² | F_s] ≥ W_s².


Quadratic Variation:

[W, W]_t = lim_{k→∞} ∑_{i=1}^{k} (W_{s_i} − W_{s_{i−1}})², in probability,

where the limit is taken for any sequence of partitions 0 = s_0 < s_1 < s_2 < · · · < s_k = t with sup_j (s_j − s_{j−1}) → 0 as k → ∞.

[W, W]_t = t.

Proof. We prove the case t = 1. Let s_i = i/k. Then the W_{s_i} − W_{s_{i−1}} are i.i.d. N(0, 1/k). Let Z_i = k (W_{s_i} − W_{s_{i−1}})². Then the Z_i are i.i.d. with mean 1 and variance 2. By the LLN, we get

∑_{i=1}^{k} (W_{s_i} − W_{s_{i−1}})² = (∑_{i=1}^{k} Z_i)/k → 1.

Thus [W, W]_1 = 1.
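The law-of-large-numbers argument can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

def quadratic_variation(k=100_000, seed=0):
    """Sum of squared Brownian increments over [0, 1] with mesh 1/k;
    by the LLN argument above this should be close to t = 1."""
    rng = np.random.default_rng(seed)
    dW = np.sqrt(1.0 / k) * rng.standard_normal(k)  # W_{s_i} - W_{s_{i-1}} ~ N(0, 1/k)
    return np.sum(dW**2)

qv = quadratic_variation()
```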

Remark. For a smooth function H(t), if {s_i} is a partition of [0, t], then

∑_{i=1}^{k} |H(s_i) − H(s_{i−1})| → ∫_0^t |H′(s)| ds.

This immediately implies

∑_{i=1}^{k} |H(s_i) − H(s_{i−1})|² ≤ max_i |H(s_i) − H(s_{i−1})| · ∑_{i=1}^{k} |H(s_i) − H(s_{i−1})| → 0,

so a smooth function has zero quadratic variation, in contrast to Brownian motion.


W_t² − t is a martingale:

E[W_t² − t | F_s] = W_s² − s.

Proof.

E[W_t² − t | F_s] = E[(W_s + W_t − W_s)² | F_s] − t
= E[W_s² + (W_t − W_s)² + 2 W_s (W_t − W_s) | F_s] − t
= W_s² + E[(W_t − W_s)² | F_s] + 2 W_s E[W_t − W_s | F_s] − t
= W_s² + E[(W_t − W_s)²] − t
= W_s² + t − s − t = W_s² − s.

Remark. In general, given a continuous martingale M_t, its quadratic variation [M, M]_t is the unique non-decreasing process such that M_t² − [M, M]_t is a martingale.


Stochastic Integral

Heuristic motivation: Suppose one purchases ∆_0 shares (possibly a negative number) at time t_0 = 0, then changes the investment to ∆_1 shares at time t_1, then to ∆_2 shares at time t_2, etc. At time t_0 one has initial wealth W_{t_0}. One buys ∆_0 shares, and the cost is ∆_0 S_{t_0}. At time t_1 one sells the ∆_0 shares for the price of S_{t_1} per share, and so one's wealth is now W_{t_0} + ∆_0 (S_{t_1} − S_{t_0}). One then pays ∆_1 S_{t_1} for ∆_1 shares at time t_1 and continues. So one's wealth at time t = t_n will be

W_{t_0} + ∆_0 (S_{t_1} − S_{t_0}) + ∆_1 (S_{t_2} − S_{t_1}) + · · · + ∆_{n−1} (S_{t_n} − S_{t_{n−1}}),

which is the same as

W_{t_0} + ∫_0^t ∆_s dS_s,

where ∆_s = ∆_{t_i} for t_i ≤ s < t_{i+1}. In other words, our wealth is given by a stochastic integral with respect to the stock price. The requirement that the integrand of a stochastic integral be adapted is very natural: we cannot base the number of shares we own at time s on information that will not be available until the future. The


continuous-time model of finance is then that the security price is given by geometric Brownian motion, that there are no transaction costs, but one can trade as much as one wants and vary the amount held in a continuous-time fashion. This is clearly not the way the market actually works (for example, stock prices are discrete), but this model has proved to be a very good one.

∫_0^t H_s dS_s

represents the net profit or loss if S_t is the stock price and H_t is the number of shares held at time t. This is the direct analog of

∑_i ∆_i (S_{t_{i+1}} − S_{t_i}).
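The discrete trading-gains sum that the stochastic integral idealizes can be written out directly; a sketch with an illustrative price path and holdings (the numbers are assumptions for demonstration, not from the source):

```python
import numpy as np

def trading_gains(S, Delta):
    """Net profit sum_i Delta_i * (S_{t_{i+1}} - S_{t_i});
    Delta_i is the (adapted) number of shares held on [t_i, t_{i+1})."""
    S = np.asarray(S, dtype=float)
    Delta = np.asarray(Delta, dtype=float)
    assert len(Delta) == len(S) - 1      # one holding per trading interval
    return np.sum(Delta * np.diff(S))

# Example: hold 2 shares, then 1 share, then short 1 share.
gains = trading_gains([100.0, 101.0, 103.0, 102.0], [2.0, 1.0, -1.0])  # -> 5.0
```

Note the telescoping special case: holding one share throughout gives exactly S_{t_n} − S_{t_0}, the buy-and-hold profit.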


Definition:

Predictable process: a stochastic process H(t) = H(t, ω) is said to be predictable if, for each t, H(t) is F_t-measurable.

Case A: Simple Process:

H(t) = ξ 1_{(a,b]}(t),

where ξ is F_a-measurable.

Stochastic Integral Definition:

X_t = ∫_0^t H(s) dW_s = ξ (W_{t∧b} − W_{t∧a}) =
  ξ (W_b − W_a), t > b,
  ξ (W_t − W_a), a ≤ t ≤ b,
  0, t < a.

∫_0^t H(s) dW_s is a martingale: for t < T,

E[X_T | F_t] = X_t.

Proof. Case 1: for t ≥ b: X_T = X_t = ξ (W_b − W_a). Thus

E[X_T | F_t] = ξ (W_b − W_a) = X_t.

Case 2: for t ≤ a: X_t = 0, and X_T = ξ (W_b − W_a) if T > b, or ξ (W_T − W_a) if a < T < b, or 0 if T ≤ a.

E[W_b − W_a | F_t] = 0, E[W_T − W_a | F_t] = 0.


Thus,

E[X_T | F_t] = 0 = X_t.

Case 3: for a < t < b: X_t = ξ (W_t − W_a), and X_T = ξ (W_b − W_a) if T > b, or ξ (W_T − W_a) if a < t < T < b.

E[W_b − W_a | F_t] = W_t − W_a, E[W_T − W_a | F_t] = W_t − W_a.

Thus,

E[X_T | F_t] = ξ (W_t − W_a) = X_t.


Quadratic Variation:

[X, X]_t = ∫_0^t H²(s) ds = ξ² (t∧b − t∧a) =
  ξ² (b − a), t > b,
  ξ² (t − a), a ≤ t ≤ b,
  0, t < a.

Proof. Case 1: for t ≥ b: X_T = X_t = ξ (W_b − W_a), and [X, X]_T = [X, X]_t = ξ² (b − a). Thus

E[X_T² − [X, X]_T | F_t] = ξ² [(W_b − W_a)² − (b − a)] = X_t² − [X, X]_t.

Case 2: for t ≤ a: X_t = 0 and [X, X]_t = 0. X_T = ξ (W_b − W_a) if T > b, or ξ (W_T − W_a) if a < T < b, or 0 if T ≤ a; and [X, X]_T = ξ² (b − a) if T > b, or ξ² (T − a) if a < T < b, or 0 if T ≤ a. Conditioning first on F_a,

E[X_T² − [X, X]_T | F_t] = E[ ξ² E[(W_b − W_a)² − (b − a) | F_a] | F_t ] = 0 for T > b,

E[X_T² − [X, X]_T | F_t] = E[ ξ² E[(W_T − W_a)² − (T − a) | F_a] | F_t ] = 0 for a < T < b,

so in all cases E[X_T² − [X, X]_T | F_t] = 0 = X_t² − [X, X]_t.

Case 3: for a < t < b: X_t = ξ (W_t − W_a), and X_T = ξ (W_b − W_a) if T > b, or ξ (W_T − W_a) if a < t < T < b. [X, X]_t = ξ² (t − a),


[X, X]_T = ξ² (b − a) if T > b, or ξ² (T − a) if a < t < T < b. For T > b,

E[X_T² − [X, X]_T | F_t] = ξ² E[(W_b − W_a)² − (b − a) | F_t]
= ξ² { E[(W_b − W_t)²] + (W_t − W_a)² + 2 (W_t − W_a) E[W_b − W_t | F_t] − (b − a) }
= ξ² { b − t + (W_t − W_a)² − (b − a) }
= ξ² { (W_t − W_a)² − (t − a) }
= X_t² − [X, X]_t.

For a < T < b,

E[X_T² − [X, X]_T | F_t] = ξ² E[(W_T − W_a)² − (T − a) | F_t]
= ξ² { E[(W_T − W_t)²] + (W_t − W_a)² + 2 (W_t − W_a) E[W_T − W_t | F_t] − (T − a) }
= ξ² { T − t + (W_t − W_a)² − (T − a) }
= ξ² { (W_t − W_a)² − (t − a) }
= X_t² − [X, X]_t.

Case B: Step Process:

H(t) = ξ_0 1_{{0}}(t) + ∑_{i=1}^{m} ξ_i 1_{(a_i, b_i]}(t),

where ξ_i is F_{a_i}-measurable.


Stochastic Integral Definition:

X_t = ∫_0^t H(s) dW_s = ∑_{i=1}^{m} ξ_i (W_{t∧b_i} − W_{t∧a_i}),

[X, X]_t = ∫_0^t H²(s) ds = ∑_{i=1}^{m} ξ_i² (t∧b_i − t∧a_i).

Martingale property: for t < T,

E[X_T | F_t] = X_t, E[X_T² − [X, X]_T | F_t] = X_t² − [X, X]_t.

Proof. From Case A, we have that for t < T,

E[ξ_i (W_{T∧b_i} − W_{T∧a_i}) | F_t] = ξ_i (W_{t∧b_i} − W_{t∧a_i}),

E[ξ_i² (W_{T∧b_i} − W_{T∧a_i})² − ξ_i² (T∧b_i − T∧a_i) | F_t] = ξ_i² (W_{t∧b_i} − W_{t∧a_i})² − ξ_i² (t∧b_i − t∧a_i).

The results are proved by summing both sides over i.

Case C: General predictable process: H(t) with E ∫ H²(t) dt < ∞.

There exists a sequence of step processes H_n(t) such that

∫ E|H_n(t) − H(t)|² dt → 0.


Stochastic Integral Definition:

X_t = ∫_0^t H(s) dW_s = lim_{n→∞} ∫_0^t H_n(s) dW_s, in L².

First show the L² limit exists:

E[ { ∫_0^t H_n(s) dW_s − ∫_0^t H_m(s) dW_s }² ]
= E[ { ∫_0^t [H_n(s) − H_m(s)] dW_s }² ]
= ∫_0^t E[H_n(s) − H_m(s)]² ds → 0.


Martingale property: for t < T,

E[ ∫_0^T H_n(s) dW_s | F_t ] = ∫_0^t H_n(s) dW_s,

E[ ( ∫_0^T H_n(s) dW_s )² − ∫_0^T H_n²(s) ds | F_t ] = ( ∫_0^t H_n(s) dW_s )² − ∫_0^t H_n²(s) ds.

We obtain the martingale property by letting n → ∞:

E[X_T | F_t] = X_t, E[X_T² − [X, X]_T | F_t] = X_t² − [X, X]_t.


Ito Lemma

∫_0^t B_s dB_s = (B_t² − t)/2.

Proof. Take t = 1 and s_i = i/n. Then ∫_0^1 B_s dB_s is the limit of

∑_{i=1}^{n} B_{s_{i−1}} (B_{s_i} − B_{s_{i−1}})
= ½ ∑_{i=1}^{n} [(B_{s_i} + B_{s_{i−1}}) − (B_{s_i} − B_{s_{i−1}})] (B_{s_i} − B_{s_{i−1}})
= ½ ∑_{i=1}^{n} [(B_{s_i}² − B_{s_{i−1}}²) − (B_{s_i} − B_{s_{i−1}})²]
= ½ (B_1² − B_0²) − ½ ∑_{i=1}^{n} (B_{s_i} − B_{s_{i−1}})²
→ ½ (B_1² − 1),

since, in distribution,

∑_{i=1}^{n} (B_{s_i} − B_{s_{i−1}})² = (1/n) ∑_{i=1}^{n} [N(0, 1)]² → 1.

Thus,

∫_0^1 B_s dB_s = (B_1² − 1)/2.
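This identity can be checked by simulating the left-endpoint Riemann sum on a single path; a sketch assuming NumPy:

```python
import numpy as np

def ito_integral_check(n=200_000, seed=0):
    """Compare sum B_{s_{i-1}}(B_{s_i} - B_{s_{i-1}}) with (B_1^2 - 1)/2 on one path."""
    rng = np.random.default_rng(seed)
    dB = np.sqrt(1.0 / n) * rng.standard_normal(n)
    B = np.concatenate(([0.0], np.cumsum(dB)))  # B_{s_0} = 0, ..., B_{s_n} = B_1
    lhs = np.sum(B[:-1] * np.diff(B))           # left-endpoint Riemann sum
    rhs = (B[-1]**2 - 1.0) / 2.0
    return lhs, rhs

lhs, rhs = ito_integral_check()
```

Taking the left endpoint is essential: the midpoint or right-endpoint sums converge to different limits, which is exactly the quadratic-variation correction at work.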


Ito Formula: Suppose h(x) is twice differentiable, and X_t = X_0 + ∫_0^t µ_s ds + ∫_0^t H_s dB_s, with <X, X>_t = ∫_0^t H_s² ds. Then

h(X_t) = h(X_0) + ∫_0^t h′(X_s) dX_s + ½ ∫_0^t h″(X_s) d<X, X>_s.

Ito formula: Suppose a stochastic process X follows

dX_t = a(t, X_t) dt + b(t, X_t) dB_t,

and H(t, X_t) is a function of X_t and t. Then

dH(X_t, t) = (∂H/∂X) dX + (∂H/∂t) dt + ½ (∂²H/∂X²) d<X, X>_t
= (∂H/∂X) dX + (∂H/∂t) dt + ½ (∂²H/∂X²) b² dt
= ( a ∂H/∂X + ∂H/∂t + ½ b² ∂²H/∂X² ) dt + b (∂H/∂X) dB,

where we use the fact

d<X, X>_t = b²(t, X_t) dt, or equivalently <X, X>_t = ∫_0^t b²(s, X_s) ds.

Application Example. For geometric Brownian motion, take H(t, S_t) = log S_t. Then

∂H/∂S = 1/S, ∂²H/∂S² = −1/S², ∂H/∂t = 0.


Thus,

d log S_t = ( (1/S_t) µ S_t + ½ (−1/S_t²) σ² S_t² ) dt + (1/S_t) σ S_t dB_t = (µ − σ²/2) dt + σ dB_t.

Example. H(t, B_t) = B_t² − t:

d(B_t² − t) = 2 B_t dB_t.
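The coefficient calculation for the log S_t example can be verified symbolically; a sketch using SymPy (symbol names are illustrative):

```python
import sympy as sp

S, mu, sigma = sp.symbols("S mu sigma", positive=True)
H = sp.log(S)   # H(t, S) = log S; note dH/dt = 0 here
a = mu * S      # drift of dS_t = mu S dt + sigma S dB_t
b = sigma * S   # diffusion coefficient

# Ito formula: dH = (a*H_S + (1/2)*b^2*H_SS) dt + b*H_S dB  (H_t = 0)
drift = sp.simplify(a * sp.diff(H, S) + sp.Rational(1, 2) * b**2 * sp.diff(H, S, 2))
diff_coef = sp.simplify(b * sp.diff(H, S))
# drift simplifies to mu - sigma**2/2, diff_coef to sigma
```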

Heuristic Justification. Calculus: Consider a continuous and differentiable function H(x) of a variable x. If ∆x is a small change in x and ∆H is the resulting small change in H, it is well known that

∆H ≈ (dH/dx) ∆x.

In other words, ∆H is approximately equal to the rate of change of H with respect to x multiplied by ∆x. The error involves terms of order (∆x)². If more precision is required, a Taylor series expansion of ∆H can be used:

∆H = (dH/dx) ∆x + ½ (d²H/dx²) (∆x)² + (1/6) (d³H/dx³) (∆x)³ + · · ·

For a continuous and differentiable bivariate function H(x, y) of two variables x and y, the analogous result is

∆H ≈ (∂H/∂x) ∆x + (∂H/∂y) ∆y,

and the Taylor series expansion of ∆H is

∆H = (∂H/∂x) ∆x + (∂H/∂y) ∆y + ½ (∂²H/∂x²) (∆x)² + (∂²H/∂x∂y) ∆x ∆y + ½ (∂²H/∂y²) (∆y)² + · · ·

In the limit as ∆x and ∆y go to zero,

dH = (∂H/∂x) dx + (∂H/∂y) dy, or H(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} ( (∂H/∂u) du + (∂H/∂v) dv ).

Stochastic calculus. Now H is a function of a stochastic process X_t and t, and we consider a stochastic version of the differential form for H(X_t, t). With a small increment ∆t in time, let

∆X = X_{t+∆t} − X_t, ∆H = H(X_{t+∆t}, t + ∆t) − H(X_t, t).

From the Taylor series expansion, we can write

∆H = (∂H/∂X) ∆X + (∂H/∂t) ∆t + ½ (∂²H/∂X²) (∆X)² + (∂²H/∂X∂t) ∆X ∆t + ½ (∂²H/∂t²) (∆t)² + · · ·

The X_t equation can be discretized as

∆X_t = a(t, X_t) ∆t + b(t, X_t) √∆t ε_t,


or, if the arguments are dropped,

∆X = a ∆t + b √∆t ε.

This equation reveals an important difference between the deterministic and stochastic situations. When the limiting arguments were used to pass from the Taylor series expansion to the differential form, terms with (∆x)² were ignored, because they were second-order terms and negligible. For the stochastic case,

(∆X)² = b² ε² ∆t + o(∆t),

which shows that the term involving (∆X)² has a component that is of order ∆t and cannot be ignored. The expected value of ε² ∆t is ∆t, with variance of order (∆t)². As a result, ε² ∆t becomes nonstochastic and equal to its expected value ∆t as ∆t tends to zero. It follows that (∆X)² becomes nonstochastic and equal to b² dt as ∆t tends to zero. Taking limits as ∆X and ∆t go to zero,


we obtain

dH = (∂H/∂X) dX + (∂H/∂t) dt + ½ (∂²H/∂X²) b² dt
= ( a ∂H/∂X + ∂H/∂t + ½ b² ∂²H/∂X² ) dt + b (∂H/∂X) dB.


Girsanov theorem

Suppose that Z is a standard normal random variable. Let Y = σZ and X = µ + σZ.

P(Y ≤ y) = Φ(y/σ), P(X ≤ x) = Φ((x − µ)/σ).

Define a new probability Q such that

E_Q[H] = E_P[M H],

where M is a nonnegative random variable with E_P[M] = 1. Take M to be the likelihood ratio of Y vs X:

M = exp( −µY/σ² − µ²/(2σ²) ) = exp( −µZ/σ − µ²/(2σ²) ).

Q(X ≤ x) = E_P[M 1(X ≤ x)]
= ∫ 1(y + µ ≤ x) exp( −µy/σ² − µ²/(2σ²) ) (1/(√(2π) σ)) exp( −y²/(2σ²) ) dy
= ∫ 1(y + µ ≤ x) (1/(√(2π) σ)) exp( −(y + µ)²/(2σ²) ) dy
= ∫ 1(u ≤ x) (1/(√(2π) σ)) exp( −u²/(2σ²) ) du
= ∫_{−∞}^{x} (1/σ) φ(u/σ) du
= Φ(x/σ),

where the second line uses the density of Y = σZ and the fourth substitutes u = y + µ;


that is, under Q, X ∼ N(0, σ²). Similarly we can show that

Q(Y ≤ y) = Φ((y + µ)/σ),

that is, under Q, Y ∼ N(−µ, σ²).
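The change of measure can be checked by Monte Carlo: weighting draws of X = µ + σZ by M should recentre X at zero, and the weights should average to one. A sketch assuming NumPy (parameter values are illustrative):

```python
import numpy as np

def girsanov_check(mu=0.5, sigma=1.5, m=400_000, seed=0):
    """Under Q defined by E_Q[H] = E_P[M H], X = mu + sigma*Z should be N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(m)
    X = mu + sigma * Z
    M = np.exp(-mu * Z / sigma - mu**2 / (2 * sigma**2))  # likelihood ratio
    q_mean = np.mean(M * X)   # estimates E_Q[X]; should be near 0
    q_norm = np.mean(M)       # estimates E_P[M]; should be near 1
    return q_mean, q_norm

q_mean, q_norm = girsanov_check()
```

This is the same importance-sampling identity that underlies the continuous-time Girsanov theorem below.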

Now we work with B^n_t on [0, 1]. Since

B^n_{t_k} = (1/√n) ∑_{ℓ=1}^{k} ε_ℓ,

we have

B^n_{t_k} + ∑_{ℓ=1}^{k} µ_{t_ℓ} n^{−1} = (1/√n) ∑_{ℓ=1}^{k} [ε_ℓ + µ_{t_ℓ} n^{−1/2}],

which follows N(∑_{ℓ=1}^{k} µ_{t_ℓ} n^{−1}, t_k). Take

M_n = exp( −n^{−1/2} ∑_{ℓ=1}^{n} µ_{t_ℓ} ε_ℓ − ∑_{ℓ=1}^{n} µ_{t_ℓ}²/(2n) ),

E_{Q_n}[H] = E_P[M_n H].

Then under Q_n, ε_ℓ + µ_{t_ℓ}/√n ∼ N(0, 1), and B^n_{t_k} + ∑_{ℓ=1}^{k} µ_{t_ℓ} n^{−1} ∼ N(0, t_k). As B^n_{t_k} + ∑_{ℓ=1}^{k} µ_{t_ℓ} n^{−1} converges to B_t + ∫_0^t µ_s ds,

M_n → M = exp( −∫_0^t µ_s dB_s − ½ ∫_0^t µ_s² ds ),

E_Q[H] = E_P[M H].

Then under Q, B_t + ∫_0^t µ_s ds is a Brownian motion.


Girsanov theorem. Suppose

X_t = B_t + ∫_0^t µ(X_s) ds, or dX_t = dB_t + µ(X_t) dt,

and B_t is a Brownian motion with respect to a probability P. Let

M_t = exp( −∫_0^t µ(X_s) dB_s − ½ ∫_0^t µ²(X_s) ds ).

By the Ito lemma, M_t is a martingale with M_0 = 1. Define a new probability Q by

Q(A) = E[M_T; A],

that is, Q is absolutely continuous with respect to P, and M_T is the Radon-Nikodym derivative of Q with respect to P:

dQ/dP = M_T.

Then under Q, X_t itself is a Brownian motion.


Martingale Representation theorem. Suppose M_t is a martingale with respect to the σ-fields generated by a Brownian motion B. Then there exists H_s such that

M_t = M_0 + ∫_0^t H_s dB_s.

If V is an option, then

V = constant + ∫_0^t H_s dS_s:

any option can be replicated by buying and selling the stock.

Fundamental theorems. Risk-neutral measure ⇐⇒ martingale measure: under the risk-neutral measure, the discounted stock price is a martingale.

A martingale measure exists ⇐⇒ no arbitrage exists.

The market is complete ⇐⇒ the martingale measure is unique.