INTRODUCTION TO STOCHASTIC CALCULUS - 2 Borun D Chowdhury


Page 1:

INTRODUCTION TO STOCHASTIC CALCULUS - 2

Borun D Chowdhury

Page 2:

A stochastic process is a series of random variables X_i whose distribution depends only on past realised values, i.e. it is specified through conditional probabilities

Quick Recap - Random processes

Source: https://github.com/borundev/StochasticProcesses/blob/master/RandomWalkAndWeinerProcess.ipynb

        self.p = p
        self.dt = dt
        self.nsteps = int(self.T / self.dt)
        Paths.__setup__(self)
        # draw +1/-1 increments from independent Bernoulli(p) trials
        self.randoms = 2 * (np.random.binomial(1, self.p, self.npaths * (self.nsteps - 1)) - .5)
        self.randoms.shape = [self.npaths, self.nsteps - 1]
        for i in range(self.nsteps - 1):
            self.paths[:, i + 1] = self.paths[:, i] + self.randoms[:, i]

b_paths = BinaryPaths(11, 1, 6).get_paths()

number_columns = 2
number_rows = 3
figsize(12, 9)
for i, j in enumerate([(i / number_columns, i % number_columns) for i in range(number_columns * number_rows)]):
    plt.subplot2grid((number_rows, number_columns), j)
    plt.plot(b_paths[i], "--o")
    plt.xlabel("time")
    plt.ylabel("position")
    plt.ylim(-7, 7)

From Binomial Walk to Wiener Process, but not back!

The process described above is a binomial process and happens in discrete time. However, for analytical reasons we would like to take a continuum limit of it; although, having done that, for computational reasons the latter is approximated by a discrete process again. The reason an analytical expression exists in continuous time is tied to the central limit theorem, which for our purposes states that the sum of many independent increments by random variables with well defined mean and variance tends to a Gaussian process (the Gaussian is stable under such sums or, as they are known technically, convolutions). Thus if we can pass to a regime where such convergence has happened, we no longer care about the microscopic model.
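This passage to the Gaussian is easy to check numerically. The snippet below is my own illustration (not part of the notebook): it sums n independent +/-1 Bernoulli steps and verifies that the rescaled sum behaves like a unit normal.

```python
import numpy as np

# Illustration (not from the notebook): the central limit theorem at work.
# Sum n independent +/-1 steps with p = 0.5; the rescaled sum Y_n / sqrt(n)
# should have mean ~0 and variance ~1, like N(0, 1).
rng = np.random.default_rng(0)
n, n_paths = 2000, 2000
steps = 2 * rng.binomial(1, 0.5, size=(n_paths, n)) - 1
Y = steps.sum(axis=1) / np.sqrt(n)
print(Y.mean(), Y.var())  # close to 0 and 1
```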

If the above is not clear already, I hope the plots below will help. Here I generate random walks, same as before, for the time interval by dividing the interval into 2**12 = 4096 steps. Then we zoom into the central region (any region would do) by factors of 2. You will notice that for the first 5 zoom-ins the path looks the same. However, after that the discrete nature of the Bernoulli jumps starts becoming visible.

In [2]: T = 10.0
        num_steps_bernoulli = 2**12
        delta_t_bernoulli = T / num_steps_bernoulli

t ∈ [0, 10)

A binomial walk is a walk in which every step is governed by an independent Bernoulli trial

Page 3:

Quick recap - central limit theorem


b = BinaryPaths(10, delta_t_bernoulli, 1)
time_line = b.get_timeline()
path = b[0]

number_columns = 2
number_rows = 4
figsize(12, 9)

# plot the entire path first and then regions zoomed in by factors of 2
for i, j in enumerate([(i / number_columns, i % number_columns) for i in range(number_columns * number_rows)]):
    plt.subplot2grid((number_rows, number_columns), j)
    time_line_for_plot = time_line[num_steps_bernoulli / 2 - num_steps_bernoulli / (2**(i + 1)):num_steps_bernoulli / 2 + num_steps_bernoulli / (2**(i + 1))]
    path_for_plot = path[num_steps_bernoulli / 2 - num_steps_bernoulli / (2**(i + 1)):num_steps_bernoulli / 2 + num_steps_bernoulli / (2**(i + 1))]
    plt.plot(time_line_for_plot, path_for_plot)
    plt.xlabel("time")
    plt.ylabel("position")

It is instructive to understand clearly what is happening here. The discussion below works for all values of p away from 0 and 1 and for sufficiently large n.

The change in position after n steps is given by

Y_n = 2\,\mathrm{Binomial}(n, p) - n .

For large enough n (depending on how important tail events are, the cutoffs differ) this can be approximated by

Y_n \sim 2\,\mathcal{N}\!\left(np, \sqrt{np(1-p)}\right) - n = n(2p - 1) + 2\sqrt{np(1-p)}\,\mathcal{N}(0, 1) = n(2p - 1) + \sqrt{4np(1-p)}\,\mathcal{N}(0, 1) .

Specializing to p = .5, if we progressively look at an interval of n/2^k steps we can approximate

Y_{n/2^k} \sim \sqrt{n/2^k}\,\mathcal{N}(0, 1) ,

and the plot is qualitatively the same as long as k is not large enough to violate the Binomial-to-Gaussian approximation. In the plots above the interval sizes are

In [7]: print [num_steps_bernoulli/2**k for k in range(9)]

[4096, 2048, 1024, 512, 256, 128, 64, 32, 16]
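The scaling Y_{n/2^k} ~ sqrt(n/2^k) N(0,1) can be sanity-checked directly. The snippet below is my own check (not part of the notebook): it estimates the variance of the displacement over m steps and compares it with m.

```python
import numpy as np

# Check (mine): for p = 0.5 the displacement over m steps has variance ~ m,
# so the typical excursion grows like sqrt(m).
rng = np.random.default_rng(1)
vars_ = {}
for m in [1024, 256, 64]:
    steps = 2 * rng.binomial(1, 0.5, size=(10000, m)) - 1
    vars_[m] = steps.sum(axis=1).var()
    print(m, vars_[m])  # sample variance close to m
```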

Zooming out erases all the microstructure and all processes (with well defined means and variances) give the normal distribution

Page 4:


Bernoulli random walk

Large n limit


Suppose the microscopic time scale is much shorter than times of interest

Then we can write the above as


and around n=256 we start seeing differences.

This shows us something interesting. Let us assume that each step takes time δt and that we are interested in studying the process over times dt ≫ δt. Then we have

dY(t) = \mu\,dt + \sigma \sqrt{dt}\,\mathcal{N}(0, 1)

where \mu = (2p - 1)/\delta t and \sigma = \sqrt{4p(1-p)/\delta t}.

Take great care to see that there is a square root on dt here. This makes the left hand side very different from ordinary calculus differential elements. For instance, while the limit

\lim_{dt \to 0} dY(t)

is well defined and goes to zero, the velocity

\lim_{dt \to 0} \frac{dY(t)}{dt}

is not defined. This signifies that this kind of curve is everywhere non-differentiable.

It would be useful to keep in mind that all this discussion happens with dt → 0 compared to the other large time scales in the problem, while maintaining dt ≫ δt so as to justify the Binomial-to-Normal approximation above.

The Wiener process is this approximated process taken at all scales. In other words, one forgets that one first zoomed out, and thus infinite zooming in is possible. This is related to the central limit theorem in that we first take the limit of adding infinitely many independent random variables to get a normal distribution, and then subdivide the normal as many times as we like without recovering the original distribution.

Formally the Wiener process is defined as

dW_t = \sqrt{dt}\,\mathcal{N}(0, 1)

and thus to match our previous example we have dY = \mu\,dt + \sigma\,dW.

dW should be interpreted as a random draw from a unit normal multiplied by \sqrt{dt}. Thus it is immediately clear that dW^2 is drawn from a \chi^2 distribution of one degree of freedom with mean dt and standard deviation \sqrt{2}\,dt. There is hardly a book on stochastic calculus that will not mention dW^2 = dt. We can understand it the following way. If we have

Q = \sum_{i=1}^{N} \Delta W_i^2

with time split as \Delta t = T/N, then Q is a \chi^2_N distribution with mean N \Delta t = T and variance 2 N \Delta t^2, and in the limit N → ∞ the value Q → T. It is in this sense, under the integral/summation, that dW^2 = dt even though there is no convergence for any one interval. This is again a result of the central limit theorem. In particular we have the often used results

\langle dW_t \rangle = 0

and

dW_t^2 = dt .

However, note that if the intervals are independent,

\langle dW_t\,dW_{t'} \rangle = dt \cdot \mathrm{cov}(\mathcal{N}(0, 1), \mathcal{N}(0, 1)) = 0 .

We can integrate the Wiener process to get

W_t = \int_0^t dW_{t'} = \sqrt{t}\,\mathcal{N}(0, 1)

giving

\langle W_t \rangle = 0, \qquad \mathrm{var}(W_t) = t .

Covariance of W_t and W_{t'}

The way the above is written can lead to an erroneous idea (that can seep into code) that W_t and W_{t'} are independent and thus have covariance 0. However, the path from 0 to min(t, t') is the same, so they are not independent. For concreteness take t' > t; then what we really have is

W_t = \sqrt{t}\,\mathcal{N}_1(0, 1), \qquad W_{t'} - W_t = \sqrt{t' - t}\,\mathcal{N}_2(0, 1)

where the two subscripts on the normal denote independent draws, and

W_{t'} = W_t + W_{t'-t}

giving

\mathrm{cov}(W_t, W_{t'}) = t\,\mathrm{cov}(\mathcal{N}_1(0, 1), \mathcal{N}_1(0, 1)) + \sqrt{t}\sqrt{t' - t}\,\mathrm{cov}(\mathcal{N}_1(0, 1), \mathcal{N}_2(0, 1)) = t .

In general,

\mathrm{cov}(W_t, W_{t'}) = \min(t, t') .

We can now simulate paths using the Wiener process and zoom in to see the difference from the Bernoulli process.

In [3]: class WeinerPaths(Paths):
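The statement that dW^2 = dt holds "under the summation" can be checked numerically. The sketch below is my own illustration, with arbitrary parameters: it sums dW_i^2 over [0, T] and watches the total concentrate on T as the number of steps grows.

```python
import numpy as np

# Quadratic variation check (illustrative): with dW_i = sqrt(dt) * N(0,1),
# Q = sum(dW_i**2) has mean T and variance 2*T**2/N, so Q -> T as N grows.
rng = np.random.default_rng(2)
T = 2.0
Q = {}
for N in [100, 10000, 1000000]:
    dt = T / N
    dW = np.sqrt(dt) * rng.standard_normal(N)
    Q[N] = (dW**2).sum()
    print(N, Q[N])  # approaches T = 2.0
```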

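The covariance pitfall that "seeps into code" is also easy to test in code. This sketch is mine (the grid and the times t, t' are illustrative): it builds Wiener paths by cumulatively summing increments and estimates cov(W_t, W_t'), which should come out near min(t, t').

```python
import numpy as np

# Estimate cov(W_t, W_t') from simulated Wiener paths; expect ~ min(t, t').
rng = np.random.default_rng(3)
n_paths, N, T = 50000, 100, 1.0
dt = T / N
dW = np.sqrt(dt) * rng.standard_normal((n_paths, N))
W = dW.cumsum(axis=1)            # W[:, k] is the path at time (k + 1) * dt
t, tp = 0.3, 0.7
i, j = round(t / dt) - 1, round(tp / dt) - 1
c = np.mean(W[:, i] * W[:, j])   # both means are 0, so this is the covariance
print(c)                         # close to min(t, t') = 0.3
```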

δt ≪ dt


Quick recap - central limit theorem

Page 5:

Stochastic Calculus

In risk analysis we are interested in finding the distribution of a price at a certain time, possibly conditioned on events (like the price never having gone above $x)

If the elementary distribution is not one of a handful of special cases (and in real life it rarely is), this is in general a complicated problem

However, if the times of observation are large enough, then the CLT kicks in and we can use stochastic calculus

In general we can have

dS(t) = a(t, W_t)\,dt + b(t, W_t)\,dW_t
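An SDE of this general form is usually simulated by discretizing it. The sketch below is a minimal Euler-Maruyama scheme; the drift and diffusion functions a and b here are illustrative choices of my own, not taken from the slides.

```python
import numpy as np

# Euler-Maruyama: S_{k+1} = S_k + a(t_k, S_k) * dt + b(t_k, S_k) * dW_k,
# with dW_k = sqrt(dt) * N(0, 1).
def euler_maruyama(a, b, S0, T, N, rng):
    dt = T / N
    S = np.empty(N + 1)
    S[0] = S0
    for k in range(N):
        dW = np.sqrt(dt) * rng.standard_normal()
        S[k + 1] = S[k] + a(k * dt, S[k]) * dt + b(k * dt, S[k]) * dW
    return S

# Example run: mean-reverting drift a = -S, constant diffusion b = 0.5 (hypothetical).
rng = np.random.default_rng(4)
path = euler_maruyama(lambda t, S: -S, lambda t, S: 0.5, S0=1.0, T=5.0, N=1000, rng=rng)
print(path[0], path[-1])
```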

Page 6:

Stochastic Calculus

We are interested in solving

Source: https://github.com/borundev/StochasticProcesses/blob/master/StochasticCalculus.ipynb (commit 1c3d008 on May 24, borundev: "added Milstein method description")

Stochastic Calculus

Before we go into stochastic calculus let's quickly review what ordinary calculus is. Let us consider a function G(t) over an interval (t_0, t_n = t). We partition this interval into n subintervals (t_0, t_1), (t_1, t_2), \ldots, (t_{n-1}, t_n) and choose a set of points \tau_i inside these intervals (i.e. t_{i-1} < \tau_i < t_i), and then we define

\int_0^{t_n} G(t')\,dt' = \lim_{n \to \infty} \sum_{i=1}^{n} G(\tau_i)\,[t_i - t_{i-1}] .

Now in the finite-n case there is some ambiguity in the above expression, but in the limit of infinite n this ambiguity goes away on account of the function G(t') being differentiable. This is actually a circular statement in that we can only define the above operation unambiguously when the above limit exists.

However, when we are talking about a stochastic process, by definition it cannot be differentiable. We saw this explicitly when we approached the Wiener process from the limit of a random walk in an earlier chapter (RandomWalkAndWeinerProcess.ipynb). Intuitively this happens because if the process is fundamentally stochastic, i.e. it is stochastic down to the smallest time step we can imagine (note this is not the case for the random walk, because during any given step the motion is smooth, but it is the case for the Wiener process), then the first derivative is discontinuous.

So how do we define a calculus for stochastic processes? We saw some hints of this in the earlier chapter on random walks and the Wiener process (RandomWalkAndWeinerProcess.ipynb) and will now see it more formally. First we make the convergence condition weaker, to accommodate the non-differentiable nature, in the following way:

\int_0^{t_n} G(t')\,dW(t') = \text{ms-}\!\lim_{n \to \infty} \sum_{i=1}^{n} G(\tau_i)\,[W(t_i) - W(t_{i-1})]

where the mean-square limit is defined as

\text{ms-}\!\lim_{n \to \infty} X_n = X \iff \lim_{n \to \infty} \langle (X_n - X)^2 \rangle = 0 .

This however does not uniquely fix the integral. The residual ambiguity is due to the fact that we can choose \tau_i = \alpha t_i + (1 - \alpha) t_{i-1} and get different results. This is easy to verify by evaluating the mean of S = \int W(t')\,dW(t'). Being a mean it has a definite (non-stochastic) value, and we demonstrate below that even in the limit n → ∞ the answer depends on \alpha.

We calculate

\langle S_n \rangle = \sum_{i=1}^{n} \langle W(\tau_i) W(t_i) \rangle - \langle W(\tau_i) W(t_{i-1}) \rangle = \sum_{i=1}^{n} \tau_i - t_{i-1} = \sum_{i=1}^{n} \alpha (t_i - t_{i-1}) = \alpha (t - t_0)

where we used the relation \mathrm{cov}(W_t, W_{t'}) = \min(t, t') explained in the chapter on Wiener processes (RandomWalkAndWeinerProcess.ipynb). Thus we see the answer depends on \alpha independent of how big n is. The choice \alpha = 0 defines Ito's stochastic integral and the choice \alpha = 1/2 defines Stratonovich's stochastic integral.

Ito's calculus has the nice property of being causal, in that the increment to the integral in the time interval (t_{i-1}, t_i) needs to know only the value of the integrand at the beginning of the interval, G(t_{i-1}). This translates into the integral being a Martingale, which is especially useful in finance. The causality property is useful in physical applications also. As of writing this I am not aware of applications that are naturally better described

But let us review ordinary integrals first. We partition our interval into n elements and sum
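For a smooth integrand the choice of evaluation point indeed stops mattering as n grows. The short check below is my own illustration: it compares left- and right-endpoint Riemann sums of sin on [0, π].

```python
import numpy as np

# Riemann sum with evaluation point tau_i = alpha * t_i + (1 - alpha) * t_{i-1}.
def riemann(f, t0, t1, n, alpha):
    ts = np.linspace(t0, t1, n + 1)
    tau = alpha * ts[1:] + (1 - alpha) * ts[:-1]
    return (f(tau) * np.diff(ts)).sum()

for n in [10, 100, 10000]:
    left = riemann(np.sin, 0.0, np.pi, n, 0.0)
    right = riemann(np.sin, 0.0, np.pi, n, 1.0)
    print(n, left, right)  # both approach the exact integral, 2
```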


Where τ_i is somewhere in the interval. The ambiguity goes away in the infinite-n limit because the function is differentiable. However, stochastic processes are not differentiable (convince yourself of this intuitively).

Page 7:

Stochastic Calculus


Stochastic integrals are defined through a weaker notion of convergence (mean-square, i.e. on-average, convergence)
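The α-dependence of ⟨S⟩ can also be seen numerically. The sketch below is my own illustration: it evaluates the discretized integral S_n = Σ W(τ_i)[W(t_i) − W(t_{i−1})] with left (α = 0, Ito) and right (α = 1) endpoints; the means come out near 0 and t respectively, consistent with ⟨S_n⟩ = α(t − t_0).

```python
import numpy as np

# Means of the discretized stochastic integral S = sum G(tau_i) dW_i for
# G = W, with the integrand evaluated at the left vs right endpoint.
rng = np.random.default_rng(5)
n_paths, N, t = 20000, 200, 1.0
dt = t / N
dW = np.sqrt(dt) * rng.standard_normal((n_paths, N))
W = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)
S_left = (W[:, :-1] * dW).sum(axis=1)   # alpha = 0 (Ito)
S_right = (W[:, 1:] * dW).sum(axis=1)   # alpha = 1
print(S_left.mean(), S_right.mean())    # ~ 0 and ~ t = 1
```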

28/07/16 13:59StochasticProcesses/StochasticCalculus.ipynb at master · borundev/StochasticProcesses

Page 1 of 3https://github.com/borundev/StochasticProcesses/blob/master/StochasticCalculus.ipynb

This repository Pull requests Issues Gist

StochasticProcesses / StochasticCalculus.ipynb

0 01 Unwatch Star Forkborundev / StochasticProcesses

Code Issues 0 Pull requests 0 Wiki Pulse Graphs Settings

master Find file Copy path

1 contributor

1c3d008 on May 24 borundev added Milstein method description

190lines(189sloc) 9.81KB

Stochastic Calculus

Before we go into stochastic calculus let's quickly review what ordinary calculus is. Let us consider a function $G(t)$ over an interval $(t_0, t_n = t)$. We partition this interval into $n$ subintervals $(t_0, t_1), (t_1, t_2) \dots (t_{n-1}, t_n)$ and a set of points $\tau_i$ inside these intervals (i.e. $t_{i-1} < \tau_i < t_i$) and then we define

$$\int_{t_0}^{t_n} G(t')\, dt' = \lim_{n \to \infty} \sum_{i=1}^{n} G(\tau_i)\left[t_i - t_{i-1}\right].$$

Now in the finite-$n$ case there is some ambiguity in the above expression, but in the limit of infinite $n$ this ambiguity goes away on account of the function $G(t')$ being differentiable. This is actually a circular statement, in that we can only define the above operation unambiguously when the above limit exists.

However, when we are talking about a stochastic process, by definition it cannot be differentiable. We saw this explicitly when we approached the Wiener process as the limit of a random walk in an earlier chapter (RandomWalkAndWeinerProcess.ipynb). Intuitively this happens because if the process is fundamentally stochastic, i.e. it is stochastic down to the smallest time step we can imagine (note this is not the case for the random walk, because during any given step the motion is smooth, but it is the case for the Wiener process), then the first derivative is discontinuous.

So how do we define a calculus for stochastic processes? We saw some hints of this in the earlier chapter on random walks and the Wiener process (RandomWalkAndWeinerProcess.ipynb) and will now see it more formally. First we make the convergence condition weaker to accommodate the non-differentiable nature in the following way:

$$\int_{t_0}^{t_n} G(t')\, dW(t') = \text{ms-}\lim_{n \to \infty} \sum_{i=1}^{n} G(\tau_i)\left[W(t_i) - W(t_{i-1})\right],$$

where the mean-square limit is defined as

$$\text{ms-}\lim_{n \to \infty} X_n = X \iff \lim_{n \to \infty} \left\langle (X_n - X)^2 \right\rangle = 0.$$

This however does not uniquely fix the integral. The residual ambiguity is due to the fact that we can choose $\tau_i = \alpha\, t_i + (1 - \alpha)\, t_{i-1}$ and get different results. This is easy to verify by evaluating the mean of $S = \int W(t')\, dW(t')$. Being a mean it has a definite (non-stochastic) value, and we demonstrate below that even in the limit $n \to \infty$ the answer depends on $\alpha$.

We calculate

$$\langle S_n \rangle = \sum_{i=1}^{n} \langle W(\tau_i) W(t_i) \rangle - \langle W(\tau_i) W(t_{i-1}) \rangle = \sum_{i=1}^{n} \left(\tau_i - t_{i-1}\right) = \sum_{i=1}^{n} \alpha \left(t_i - t_{i-1}\right) = \alpha\, (t - t_0),$$

where we used the relation $\mathrm{cov}(W_t, W_{t'}) = \min(t, t')$ explained in the chapter on Wiener processes (RandomWalkAndWeinerProcess.ipynb). Thus we see the answer depends on $\alpha$, independent of how big $n$ is. The choice $\alpha = 0$ defines Ito's stochastic integral and the choice $\alpha = 1/2$ defines Stratonovich's stochastic integral.

Ito's calculus has the nice property of being causal, or non-anticipating, in that the increment to the integral in the time interval $(t_{i-1}, t_i)$ needs to know only the value of the integrand at the beginning of the interval, $G(t_{i-1})$. This translates into the integral being a martingale, which is especially useful in finance. The causality property is useful in physical applications also. As of writing this I am not aware of applications that are naturally better described using Stratonovich's definition.
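The $\alpha$-dependence of $\langle S \rangle$ can be checked numerically. The sketch below is an illustration with assumed parameters ($t = 1$, numpy); $W(\tau_i)$ is approximated by linear interpolation between the grid points, which reproduces the exact covariance $\langle W(\tau_i)\,[W(t_i) - W(t_{i-1})]\rangle = \alpha\, dt$ used in the derivation above.

```python
# Monte Carlo check that the mean of the discretized stochastic integral
# S_n = sum_i W(tau_i) [W(t_i) - W(t_{i-1})] depends on alpha, where
# tau_i = alpha*t_i + (1 - alpha)*t_{i-1}.
import numpy as np

rng = np.random.default_rng(0)
t, n, npaths = 1.0, 200, 10000  # assumed illustration parameters
dt = t / n

# Brownian increments and paths, with W(0) = 0
dW = rng.normal(0.0, np.sqrt(dt), size=(npaths, n))
W = np.concatenate([np.zeros((npaths, 1)), np.cumsum(dW, axis=1)], axis=1)

def mean_S(alpha):
    # interpolate W(tau_i); for the *mean* this matches the exact covariances
    W_tau = (1 - alpha) * W[:, :-1] + alpha * W[:, 1:]
    return np.sum(W_tau * dW, axis=1).mean()

print(mean_S(0.0))   # Ito choice: ~ 0
print(mean_S(0.5))   # Stratonovich choice: ~ t/2
print(mean_S(1.0))   # ~ alpha * (t - t_0) = t
```

Up to Monte Carlo noise the three means come out near $0$, $t/2$, and $t$, i.e. $\alpha (t - t_0)$, no matter how fine the grid.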


An example

Before going into the rules, let us break down an example.


Now we can look at an application of Ito's calculus with the explicit summation method before we discuss the more formal Ito's lemma, which gives answers magically, as it were. We will see later that Ito's calculus gives us

$$\int_0^t W(s)\, dW(s) = \frac{1}{2}\left(W(t)^2 - t\right).$$

Here we see how this is consistent with the summation under the mean-square limit. This is based on an answer on quant.stackexchange.com (http://quant.stackexchange.com/questions/25019/intergral-of-brownian-motion-w-r-t-brownian-motion/25051#25051). Let us evaluate the expression we would get on the RHS when evaluating $\int_0^t W(s)\, dW(s)$ from the above definition:

$$Y^{(1)}_n = \sum_{i=1}^{n} W_{t_{i-1}} \left(W_{t_i} - W_{t_{i-1}}\right).$$

Now noting that each increment $\left(W_{t_i} - W_{t_{i-1}}\right) = \sqrt{dt}\, \mathcal{N}_{i-1}(0, 1)$ is an independent normal random variable, we get

$$Y^{(1)}_n = dt \sum_{i > j} \mathcal{N}_i(0, 1)\, \mathcal{N}_j(0, 1).$$

The other expression we get from the discretized version of the RHS of the standard answer is

$$Y^{(2)}_n = \frac{1}{2}\left(\sqrt{dt}\, \sum_{i=1}^{n} \mathcal{N}_i(0, 1)\right)^2 - \frac{1}{2} \sum_{i=1}^{n} dt.$$

Now if we can just show that the two expressions are the same in the mean-square limit we are done. We see that

$$\lim_{n \to \infty} \left\langle \left(Y^{(2)}_n - Y^{(1)}_n\right)^2 \right\rangle
= \lim_{n \to \infty} \frac{dt^2}{4} \left\langle \left(\sum_{i=1}^{n} \mathcal{N}_i(0, 1)^2 - n\right)^2 \right\rangle
= \lim_{n \to \infty} \frac{dt^2}{4} \left\langle \left(\chi^2_n - n\right)^2 \right\rangle
= \lim_{n \to \infty} \frac{dt^2}{4}\, \mathrm{var}\left(\chi^2_n\right)
= \lim_{n \to \infty} \frac{dt^2}{2}\, n
= \lim_{n \to \infty} \frac{t^2}{2n} = 0.$$

Here we used the results that the sum of squares of $n$ standard normals is a chi-square with $n$ degrees of freedom, and that the same has mean $n$ and variance $2n$.

Hence we are able to prove in detail why $\int_0^t W(s)\, dW(s) = \frac{1}{2}\left(W(t)^2 - t\right)$.
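The mean-square convergence can also be seen numerically. A minimal sketch with assumed parameters ($t = 1$, numpy): it compares the Ito sum $Y^{(1)}_n$ with the discretized closed form $Y^{(2)}_n$ and checks that their mean-square difference tracks $t^2 / (2n)$.

```python
# The mean-square difference between the Ito sum Y1 and the discretized
# closed form Y2 = (W(t)^2 - t)/2 should decay like t^2 / (2n).
import numpy as np

rng = np.random.default_rng(1)
t, npaths = 1.0, 10000  # assumed illustration parameters

def ms_diff(n):
    dt = t / n
    dW = rng.normal(0.0, np.sqrt(dt), size=(npaths, n))
    W = np.cumsum(dW, axis=1)
    # left endpoints W_{t_{i-1}}, with W(t_0) = 0
    W_prev = np.concatenate([np.zeros((npaths, 1)), W[:, :-1]], axis=1)
    Y1 = np.sum(W_prev * dW, axis=1)   # Ito sum
    Y2 = 0.5 * (W[:, -1] ** 2 - t)     # closed-form answer, discretized
    return np.mean((Y1 - Y2) ** 2)

for n in (10, 100, 500):
    print(n, ms_diff(n), t ** 2 / (2 * n))  # last two columns should agree
```

As $n$ grows the empirical mean-square difference shrinks toward zero at the predicted $1/n$ rate.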

Ito's Lemma

We note in passing that $dW^2$ follows a $dt\, \chi^2_1$ distribution with mean $dt$ and standard deviation $\sqrt{2}\, dt$, and thus in the limit $dt \to 0$ we have $dW^2 \to dt$. Then we get

$$df\left[W(t), t\right] = \left[\frac{\partial f}{\partial t} + \frac{1}{2} \frac{\partial^2 f}{\partial W^2}\right] dt + \frac{\partial f}{\partial W}\, dW(t) + \mathcal{O}\left(dt^{3/2}\right),$$

and obvious generalizations for more independent Wiener processes. This allows us to verify the above result for the integral of the Wiener process. Take $f(W(t), t) = \frac{1}{2}\left(W(t)^2 - t\right)$ and apply the above rule to get the RHS as $W(t)\, dW(t)$.
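That verification can be sketched symbolically. The snippet below assumes sympy is available and simply evaluates the two coefficients of Ito's lemma for $f(W, t) = \frac{1}{2}(W^2 - t)$: the $dt$-coefficient should vanish and the $dW$-coefficient should be $W$, i.e. $df = W\, dW$.

```python
# Symbolic check of Ito's lemma for f(W, t) = (W^2 - t)/2.
import sympy as sp

W, t = sp.symbols("W t")
f = (W ** 2 - t) / 2

# dt-coefficient: df/dt + (1/2) d^2f/dW^2
drift = sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, W, 2)
# dW-coefficient: df/dW
diffusion = sp.diff(f, W)

print(drift)      # -> 0
print(diffusion)  # -> W
```

The drift term cancels exactly because $\partial f/\partial t = -\frac{1}{2}$ offsets the $\frac{1}{2}\,\partial^2 f/\partial W^2 = \frac{1}{2}$ correction.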

Ito's Isometry

Ito's isometry is the following result:

$$E\left[\left(\int X\, dW\right)^2\right] = E\left[\int X^2\, dt\right].$$

This can be proved in the following way. The LHS is the mean-square limit of the sum

$$\left\langle \left(\sum_{i=1}^{n} X(t_{i-1}) \left(W_{t_i} - W_{t_{i-1}}\right)\right)^2 \right\rangle
= \sum_{i,j=1}^{n} \left\langle X(t_{i-1})\, X(t_{j-1}) \left(W_{t_i} - W_{t_{i-1}}\right) \left(W_{t_j} - W_{t_{j-1}}\right) \right\rangle.$$

Now, due to the statistical independence of $X_{t_{i-1}}$ from $\left(W_{t_i} - W_{t_{i-1}}\right)$, the above expectation value factorizes:

$$\sum_{i,j=1}^{n} \left\langle X(t_{i-1})\, X(t_{j-1}) \right\rangle \left\langle \left(W_{t_i} - W_{t_{i-1}}\right) \left(W_{t_j} - W_{t_{j-1}}\right) \right\rangle,$$

and now we use the result $\left\langle \left(W_{t_i} - W_{t_{i-1}}\right) \left(W_{t_j} - W_{t_{j-1}}\right) \right\rangle = \delta_{ij}\, dt$ to get the advertised result.
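The isometry can also be checked by simulation. The sketch below makes the illustrative (assumed) choice $X = W$ with $t = 1$, for which both sides equal $E\left[\int_0^t W^2\, dt\right] = t^2/2$:

```python
# Numerical check of Ito's isometry for X = W:
# E[(int_0^t W dW)^2] should equal E[int_0^t W^2 dt] = t^2/2.
import numpy as np

rng = np.random.default_rng(2)
t, n, npaths = 1.0, 500, 10000  # assumed illustration parameters
dt = t / n

dW = rng.normal(0.0, np.sqrt(dt), size=(npaths, n))
W = np.cumsum(dW, axis=1)
# left endpoints, as required by the Ito (non-anticipating) prescription
W_prev = np.concatenate([np.zeros((npaths, 1)), W[:, :-1]], axis=1)

lhs = np.mean(np.sum(W_prev * dW, axis=1) ** 2)  # E[(int W dW)^2]
rhs = np.mean(np.sum(W_prev ** 2, axis=1) * dt)  # E[int W^2 dt]
print(lhs, rhs, t ** 2 / 2)  # all three should be close
```

Note that the left-endpoint rule matters here; with a different $\alpha$ the two sides would no longer agree.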

We only have to equate them in the mean square sense to show this

28/07/16 13:59StochasticProcesses/StochasticCalculus.ipynb at master · borundev/StochasticProcesses

Page 2 of 3https://github.com/borundev/StochasticProcesses/blob/master/StochasticCalculus.ipynb

The causality property is useful in physical applications also. As of writing this I am not aware of applications that are natuarally bettwe describedusing Stratonovich's definition.

Now we can look at an application of Ito's calculus with the explicit summation method before we discuss the more formal Ito's lemma that givesanswers magically as it were. We will later that Ito's calculus gives us . Here we see how this is consistent withthe summation under the mean-square limit. This is based on an answer on quant.stackexchange.com(http://quant.stackexchange.com/questions/25019/intergral-of-brownian-motion-w-r-t-brownian-motion/25051#25051). Let us evaluate theexpression we would get on the RHS when evaluating from the above definition:

Now noting each increment is an independent normal distribution we get

The other expression we get from the discretized version of the RHS of the standard answer is

Now if we can just show that the two expressions are same in the mean-square limit we are done. We see that

Here we used the results that the squares of standard normals is a chi-square with degree and that the same has mean and variance .

Hence we are able to prove in detail why .

Ito's LemmaWe note in passing that is a distribution with mean and standard deviation and thus in the limit we have .Then we get

and obvious generalizations for more independent Weiner processes. This allows us to verify the above result for the integral of the Weinerprocess. Take and apply the above rule to get the RHS as .

Ito's IsometryIto's isometry is the following result

This can be proved in the following way. The LHS is the mean square limit of the sum

now due the statistical independence of with , the above expenctation value factorizes

and now we use the result to get the advertized result.

W(s)dW(s) = (W(t − t)∫ t0

12 )2

W(s)dW(s)∫ t0

Y (1)n = ( − ) .∑

i=1

nWti−1 Wti Wti−1

( − ) = (0, 1)Wti Wti−1 dt‾‾√ i−1

Y (1)n = dt (0, 1) (0, 1) .∑

i>ji j

W(s)dW(s) = (W(t − t)∫ t0

12 )2

Y (2)n = − dt .1

2( (0, 1))dt‾‾√ ∑i=1

ni

212 ∑

i=1

n

=

=

=

=

==

⟨( − ⟩limn→∞

Y (2)n Y (1)

n )2

⟨ ⟩limn→∞

dt2

4 ( (0, 1 − n)∑i=1

ni )2

2

⟨( − n ⟩limn→∞

dt2

4 χ2n )2

var( )limn→∞

dt2

4 χ2n

nlimn→∞

dt2

2limn→∞

t2

2n0

n n n 2n

W(s)dW(s) = (W(t − t)∫ t0

12 )2

dW 2 χ21 dt dt2‾√ dt → 0 d → dtW 2

df [W(t), t] = [ + ] dt + dW(t) + (d )∂f∂t

12

f∂2

∂W 2∂f∂W

t3/2

f (W(t), t) = (W(t − t)12 )2 W(t)dW(t)

E[(∫ XdW ] = E[∫ dt] .)2 X2

=

⟨( X( )( − ) ⟩∑i=1

nti−1 Wti Wti−1 )2

⟨X( )X( )( − )( − )⟩∑i,j=1

nti−1 tj−1 Wti Wti−1 Wtj Wtj−1

Xti−1( − )Wti Wti−1

⟨X( )X( )⟩⟨( − )( − )⟩∑i,j=1

nti−1 tj−1 Wti Wti−1 Wtj Wtj−1

⟨( − )( − )⟩ = dtWti Wti−1 Wtj Wtj−1 δij

28/07/16 13:59StochasticProcesses/StochasticCalculus.ipynb at master · borundev/StochasticProcesses

Page 2 of 3https://github.com/borundev/StochasticProcesses/blob/master/StochasticCalculus.ipynb

The causality property is useful in physical applications also. As of writing this I am not aware of applications that are natuarally bettwe describedusing Stratonovich's definition.

Now we can look at an application of Ito's calculus with the explicit summation method before we discuss the more formal Ito's lemma that givesanswers magically as it were. We will later that Ito's calculus gives us . Here we see how this is consistent withthe summation under the mean-square limit. This is based on an answer on quant.stackexchange.com(http://quant.stackexchange.com/questions/25019/intergral-of-brownian-motion-w-r-t-brownian-motion/25051#25051). Let us evaluate theexpression we would get on the RHS when evaluating from the above definition:

Now noting each increment is an independent normal distribution we get

The other expression we get from the discretized version of the RHS of the standard answer is

Now if we can just show that the two expressions are same in the mean-square limit we are done. We see that

Here we used the results that the squares of standard normals is a chi-square with degree and that the same has mean and variance .

Hence we are able to prove in detail why .

Ito's LemmaWe note in passing that is a distribution with mean and standard deviation and thus in the limit we have .Then we get

and obvious generalizations for more independent Weiner processes. This allows us to verify the above result for the integral of the Weinerprocess. Take and apply the above rule to get the RHS as .

Ito's IsometryIto's isometry is the following result

This can be proved in the following way. The LHS is the mean square limit of the sum

now due the statistical independence of with , the above expenctation value factorizes

and now we use the result to get the advertized result.

W(s)dW(s) = (W(t − t)∫ t0

12 )2

W(s)dW(s)∫ t0

Y (1)n = ( − ) .∑

i=1

nWti−1 Wti Wti−1

( − ) = (0, 1)Wti Wti−1 dt‾‾√ i−1

Y (1)n = dt (0, 1) (0, 1) .∑

i>ji j

W(s)dW(s) = (W(t − t)∫ t0

12 )2

Y (2)n = − dt .1

2( (0, 1))dt‾‾√ ∑i=1

ni

212 ∑

i=1

n

=

=

=

=

==

⟨( − ⟩limn→∞

Y (2)n Y (1)

n )2

⟨ ⟩limn→∞

dt2

4 ( (0, 1 − n)∑i=1

ni )2

2

⟨( − n ⟩limn→∞

dt2

4 χ2n )2

var( )limn→∞

dt2

4 χ2n

nlimn→∞

dt2

2limn→∞

t2

2n0

n n n 2n

W(s)dW(s) = (W(t − t)∫ t0

12 )2

dW 2 χ21 dt dt2‾√ dt → 0 d → dtW 2

df [W(t), t] = [ + ] dt + dW(t) + (d )∂f∂t

12

f∂2

∂W 2∂f∂W

t3/2

f (W(t), t) = (W(t − t)12 )2 W(t)dW(t)

E[(∫ XdW ] = E[∫ dt] .)2 X2

=

⟨( X( )( − ) ⟩∑i=1

nti−1 Wti Wti−1 )2

⟨X( )X( )( − )( − )⟩∑i,j=1

nti−1 tj−1 Wti Wti−1 Wtj Wtj−1

Xti−1( − )Wti Wti−1

⟨X( )X( )⟩⟨( − )( − )⟩∑i,j=1

nti−1 tj−1 Wti Wti−1 Wtj Wtj−1

⟨( − )( − )⟩ = dtWti Wti−1 Wtj Wtj−1 δij

28/07/16 13:59StochasticProcesses/StochasticCalculus.ipynb at master · borundev/StochasticProcesses

Page 2 of 3https://github.com/borundev/StochasticProcesses/blob/master/StochasticCalculus.ipynb

The causality property is useful in physical applications also. As of writing this I am not aware of applications that are natuarally bettwe describedusing Stratonovich's definition.

Now we can look at an application of Ito's calculus with the explicit summation method before we discuss the more formal Ito's lemma that givesanswers magically as it were. We will later that Ito's calculus gives us . Here we see how this is consistent withthe summation under the mean-square limit. This is based on an answer on quant.stackexchange.com(http://quant.stackexchange.com/questions/25019/intergral-of-brownian-motion-w-r-t-brownian-motion/25051#25051). Let us evaluate theexpression we would get on the RHS when evaluating from the above definition:

Now noting each increment is an independent normal distribution we get

The other expression we get from the discretized version of the RHS of the standard answer is

Now if we can just show that the two expressions are same in the mean-square limit we are done. We see that

Here we used the results that the squares of standard normals is a chi-square with degree and that the same has mean and variance .

Hence we are able to prove in detail why .

Ito's LemmaWe note in passing that is a distribution with mean and standard deviation and thus in the limit we have .Then we get

and obvious generalizations for more independent Weiner processes. This allows us to verify the above result for the integral of the Weinerprocess. Take and apply the above rule to get the RHS as .

Ito's IsometryIto's isometry is the following result

This can be proved in the following way. The LHS is the mean square limit of the sum

now due the statistical independence of with , the above expenctation value factorizes

and now we use the result to get the advertized result.

W(s)dW(s) = (W(t − t)∫ t0

12 )2

W(s)dW(s)∫ t0

Y (1)n = ( − ) .∑

i=1

nWti−1 Wti Wti−1

( − ) = (0, 1)Wti Wti−1 dt‾‾√ i−1

Y (1)n = dt (0, 1) (0, 1) .∑

i>ji j

W(s)dW(s) = (W(t − t)∫ t0

12 )2

Y (2)n = − dt .1

2( (0, 1))dt‾‾√ ∑i=1

ni

212 ∑

i=1

n

=

=

=

=

==

⟨( − ⟩limn→∞

Y (2)n Y (1)

n )2

⟨ ⟩limn→∞

dt2

4 ( (0, 1 − n)∑i=1

ni )2

2

⟨( − n ⟩limn→∞

dt2

4 χ2n )2

var( )limn→∞

dt2

4 χ2n

nlimn→∞

dt2

2lim

n→∞t2

2n0

n n n 2n

W(s)dW(s) = (W(t − t)∫ t0

12 )2

dW 2 χ21 dt dt2‾√ dt → 0 d → dtW 2

df [W(t), t] = [ + ] dt + dW(t) + (d )∂f∂t

12

f∂2

∂W 2∂f∂W

t3/2

f (W(t), t) = (W(t − t)12 )2 W(t)dW(t)

E[(∫ XdW ] = E[∫ dt] .)2 X2

=

⟨( X( )( − ) ⟩∑i=1

nti−1 Wti Wti−1 )2

⟨X( )X( )( − )( − )⟩∑i,j=1

nti−1 tj−1 Wti Wti−1 Wtj Wtj−1

Xti−1( − )Wti Wti−1

⟨X( )X( )⟩⟨( − )( − )⟩∑i,j=1

nti−1 tj−1 Wti Wti−1 Wtj Wtj−1

⟨( − )( − )⟩ = dtWti Wti−1 Wtj Wtj−1 δij

28/07/16 13:59StochasticProcesses/StochasticCalculus.ipynb at master · borundev/StochasticProcesses

Page 2 of 3https://github.com/borundev/StochasticProcesses/blob/master/StochasticCalculus.ipynb

The causality property is useful in physical applications also. As of writing this I am not aware of applications that are natuarally bettwe describedusing Stratonovich's definition.

Now we can look at an application of Ito's calculus with the explicit summation method before we discuss the more formal Ito's lemma that gives answers magically, as it were. We will see later that Ito's calculus gives us

$$\int_0^t W(s)\,dW(s) = \frac{1}{2}\left(W(t)^2 - t\right).$$

Here we see how this is consistent with the summation under the mean-square limit. This is based on an answer on quant.stackexchange.com (http://quant.stackexchange.com/questions/25019/intergral-of-brownian-motion-w-r-t-brownian-motion/25051#25051). Let us evaluate the expression we would get on the LHS when evaluating $\int_0^t W(s)\,dW(s)$ from the above definition:

$$Y^{(1)}_n = \sum_{i=1}^{n} W_{t_{i-1}}\left(W_{t_i} - W_{t_{i-1}}\right).$$

Now noting each increment is an independent normal distribution, $\left(W_{t_i} - W_{t_{i-1}}\right) = \sqrt{dt}\,\mathcal{N}_{i-1}(0,1)$, we get

$$Y^{(1)}_n = dt \sum_{i>j} \mathcal{N}_i(0,1)\,\mathcal{N}_j(0,1) .$$

The other expression we get from the discretized version of the RHS of the standard answer $\int_0^t W(s)\,dW(s) = \frac{1}{2}\left(W(t)^2 - t\right)$ is

$$Y^{(2)}_n = \frac{1}{2}\left(\sqrt{dt}\sum_{i=1}^{n} \mathcal{N}_i(0,1)\right)^2 - \frac{1}{2}\sum_{i=1}^{n} dt .$$

Now if we can just show that the two expressions are the same in the mean-square limit we are done. We see that

$$\begin{aligned}
\lim_{n\to\infty} \left\langle \left(Y^{(2)}_n - Y^{(1)}_n\right)^2 \right\rangle
&= \lim_{n\to\infty} \frac{dt^2}{4}\left\langle \left(\sum_{i=1}^{n} \mathcal{N}_i(0,1)^2 - n\right)^2 \right\rangle \\
&= \lim_{n\to\infty} \frac{dt^2}{4}\left\langle \left(\chi^2_n - n\right)^2 \right\rangle \\
&= \lim_{n\to\infty} \frac{dt^2}{4}\,\mathrm{var}\left(\chi^2_n\right) \\
&= \lim_{n\to\infty} \frac{dt^2}{2}\,n \\
&= \lim_{n\to\infty} \frac{t^2}{2n} \\
&= 0 .
\end{aligned}$$

Here we used the results that the sum of squares of $n$ standard normals is a chi-square with degree $n$ and that the same has mean $n$ and variance $2n$.

Hence we are able to prove in detail why $\int_0^t W(s)\,dW(s) = \frac{1}{2}\left(W(t)^2 - t\right)$.

Ito's Lemma

We note in passing that $dW^2$ is a $\chi^2_1$ distribution with mean $dt$ and standard deviation $\sqrt{2}\,dt$, and thus in the limit $dt \to 0$ we have $dW^2 \to dt$. Then we get

$$df\left[W(t), t\right] = \left[\frac{\partial f}{\partial t} + \frac{1}{2}\frac{\partial^2 f}{\partial W^2}\right] dt + \frac{\partial f}{\partial W}\,dW(t) + O\!\left(dt^{3/2}\right)$$

and obvious generalizations for more independent Weiner processes. This allows us to verify the above result for the integral of the Weiner process. Take $f(W(t), t) = \frac{1}{2}\left(W(t)^2 - t\right)$ and apply the above rule to get the RHS as $W(t)\,dW(t)$.

Ito's Isometry

Ito's isometry is the following result:

$$E\left[\left(\int X\,dW\right)^2\right] = E\left[\int X^2\,dt\right] .$$

This can be proved in the following way. The LHS is the mean-square limit of the sum

$$\left\langle \left(\sum_{i=1}^{n} X(t_{i-1})\left(W_{t_i} - W_{t_{i-1}}\right)\right)^2 \right\rangle
= \sum_{i,j=1}^{n} \left\langle X(t_{i-1})\,X(t_{j-1})\left(W_{t_i} - W_{t_{i-1}}\right)\left(W_{t_j} - W_{t_{j-1}}\right) \right\rangle ;$$

now, due to the statistical independence of $X_{t_{i-1}}$ from $\left(W_{t_i} - W_{t_{i-1}}\right)$, the above expectation value factorizes:

$$\sum_{i,j=1}^{n} \left\langle X(t_{i-1})\,X(t_{j-1})\right\rangle \left\langle \left(W_{t_i} - W_{t_{i-1}}\right)\left(W_{t_j} - W_{t_{j-1}}\right) \right\rangle ,$$

and now we use the result $\left\langle \left(W_{t_i} - W_{t_{i-1}}\right)\left(W_{t_j} - W_{t_{j-1}}\right) \right\rangle = \delta_{ij}\,dt$ to get the advertised result.

Hence proved.
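The mean-square convergence above is easy to check numerically. The following sketch is my own illustration (not from the accompanying notebooks; all parameter choices are arbitrary): it compares the Ito sum $Y^{(1)}_n$ with the discretized closed form $Y^{(2)}_n = \frac{1}{2}(W(t)^2 - t)$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, npaths = 1.0, 1000, 10000
dt = T / n

# Brownian increments; W[:, i] is the path at t = (i+1)*dt, with W_0 = 0
dW = np.sqrt(dt) * rng.standard_normal((npaths, n))
W = np.cumsum(dW, axis=1)
W_prev = np.hstack([np.zeros((npaths, 1)), W[:, :-1]])

# Y1: the Ito sum  sum_i W_{t_{i-1}} (W_{t_i} - W_{t_{i-1}})
Y1 = np.sum(W_prev * dW, axis=1)
# Y2: the discretized closed form  (W_t^2 - t) / 2
Y2 = 0.5 * (W[:, -1] ** 2 - T)

# the mean-square difference should be close to t^2 / (2n)
mse = np.mean((Y2 - Y1) ** 2)
print(mse, T ** 2 / (2 * n))
```

With $n = 1000$ the two estimates agree to a mean-square error of about $t^2/2n = 5\times 10^{-4}$, shrinking as $1/n$, exactly as the derivation predicts.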

Page 9: INTRODUCTION TO STOCHASTIC CALCULUS - 2

Ito’s Lemma


The causality property is useful in physical applications also. As of writing this I am not aware of applications that are naturally better described using Stratonovich's definition.


Notice the scaling


and around n=256 we start seeing differences.

This shows us something interesting. Let us assume that each step takes time $\delta t$ and we are interested in studying processes over times $dt \gg \delta t$; we have

$$dY(t) = \mu\,dt + \sigma\sqrt{dt}\,\mathcal{N}(0,1)$$

where $\mu = \frac{2p-1}{\delta t}$ and $\sigma = \sqrt{\frac{4p(1-p)}{\delta t}}$.

Take great care to see that there is a square root on $dt$ here. This makes the left hand side very different from ordinary calculus differential elements. For instance, while the limit

$$\lim_{dt \to 0} dY(t)$$

is well defined and goes to zero, the velocity

$$\lim_{dt \to 0} \frac{dY(t)}{dt}$$

is not defined. This signifies that this kind of curve is everywhere non-differentiable.

It would be useful to keep in mind that all this discussion is happening with $dt \to 0$ when compared to other large time scales in the problem, while maintaining $dt \gg \delta t$ so as to be able to justify the Binomial to Normal approximation above.

The Weiner process is this approximated process taken at all scales. In other words, one forgets that one first zoomed out, and thus infinite zooming in is possible. This is related to the central limit theorem in that we first take the limit of adding infinitely many independent random variables and get a normal distribution, and then subdivide the normal as many times as we like without recovering the original distribution.

Formally the Weiner process is defined as

$$dW_t = \sqrt{dt}\,\mathcal{N}(0,1)$$

and thus to match our previous example we have $dY = \mu\,dt + \sigma\,dW$.

$dW$ should be interpreted as a random draw from a unit normal multiplied by $\sqrt{dt}$. Thus it is immediately clear that $dW^2$ is drawn from a $\chi^2$ distribution of one degree of freedom with mean $dt$ and standard deviation $\sqrt{2}\,dt$. There is hardly a book on stochastic calculus that will not mention $dW^2 = dt$. We can understand it the following way. If we have

$$Q = \sum_{i=1}^{N} \Delta W_i^2$$

with time split as $\Delta t = T/N$, then $Q$ is a $\chi^2_N$ distribution scaled by $\Delta t$, with mean $N\Delta t = T$ and variance $2N\Delta t^2 = 2T^2/N$, and in the limit $N \to \infty$ the value of $Q \to T$. It is in this sense, under the integral/summation, that $dW^2 = dt$ even though there is no convergence for any one interval. This is again a result of the central limit theorem. In particular we have the often used results

$$\langle dW_t \rangle = 0$$

and

$$dW_t^2 = dt .$$

However note that if the intervals are independent,

$$\langle dW_t\,dW_{t'} \rangle = dt \cdot \mathrm{cov}\left(\mathcal{N}(0,1), \mathcal{N}(0,1)\right) = 0 .$$

We can integrate the Weiner process to get

$$W_t = \int_0^t dW_t = \sqrt{t}\,\mathcal{N}(0,1)$$

giving

$$\langle W_t \rangle = 0 , \qquad \mathrm{var}(W_t) = t .$$

Covariance of $W_t$ and $W_{t'}$

The way the above is written can lead to an erroneous idea (that can seep into code) that $W_t$ and $W_{t'}$ are independent and thus have covariance 0. However, the path from $0$ to $\min(t, t')$ is the same, so they are not independent. For concreteness take $t' > t$; then what we really have is

$$W_t = \sqrt{t}\,\mathcal{N}_1(0,1) , \qquad W_{t'-t} = \sqrt{t'-t}\,\mathcal{N}_2(0,1)$$

where the two subscripts on the normals denote independent draws, and

$$W_{t'} = W_t + W_{t'-t}$$

giving

$$\mathrm{cov}(W_t, W_{t'}) = t\,\mathrm{cov}\left(\mathcal{N}_1(0,1), \mathcal{N}_1(0,1)\right) + \sqrt{t}\sqrt{t'-t}\,\mathrm{cov}\left(\mathcal{N}_1(0,1), \mathcal{N}_2(0,1)\right) = t .$$

In general

$$\mathrm{cov}(W_t, W_{t'}) = \min(t, t') .$$

We can now simulate paths using the Weiner process and zoom in to see the difference from the Bernoulli process.

In [3]: class WeinerPaths(Paths):
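The covariance formula $\mathrm{cov}(W_t, W_{t'}) = \min(t, t')$ is easy to check by Monte Carlo; a minimal sketch of my own (parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
npaths, n, T = 50_000, 100, 1.0
dt = T / n

# build Wiener paths from independent sqrt(dt) * N(0,1) increments
W = np.cumsum(np.sqrt(dt) * rng.standard_normal((npaths, n)), axis=1)

# W[:, 39] is the path at t = 0.4, W[:, 79] at t' = 0.8
cov = np.mean(W[:, 39] * W[:, 79])   # means vanish, so this estimates the covariance
print(cov)                           # should be close to min(0.4, 0.8) = 0.4
```

Drawing $W_t$ and $W_{t'}$ as independent normals instead would give an estimate near zero, which is exactly the coding mistake warned against above.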

We can use this to numerically evaluate stochastic integrals with the Milstein method


Simulating Stochastic Integrals

Suppose we have a stochastic process

$$dS = a(S)\,dt + b(S)\,dW$$

then we can get the following result:

$$\begin{aligned}
\Delta S &= \frac{\partial S}{\partial W}\Delta W + \frac{1}{2}\frac{\partial^2 S}{\partial W^2}\Delta W^2 + \frac{\partial S}{\partial t}\Delta t + O\!\left(\Delta W^3\right) \\
&= \left(\frac{\partial S}{\partial t} + \frac{1}{2}\frac{\partial^2 S}{\partial W^2}\right)\Delta t + \frac{\partial S}{\partial W}\Delta W + \frac{1}{2}\frac{\partial^2 S}{\partial W^2}\left(\Delta W^2 - \Delta t\right) + O\!\left(\Delta W^3\right) \\
&= a(S)\,\Delta t + b(S)\,\Delta W + \frac{1}{2}b(S)\,b'(S)\left(\Delta W^2 - \Delta t\right) + O\!\left(\Delta W^3\right) .
\end{aligned}$$

This method goes under the name of the Milstein method (https://en.wikipedia.org/wiki/Milstein_method).
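As an illustration of the scheme (my own sketch, not from the notebooks), take geometric Brownian motion with $a(S) = \mu S$ and $b(S) = \sigma S$, so $b(S)b'(S) = \sigma^2 S$, and compare against the exact solution driven by the same noise:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.05, 0.4
S0, T, n, npaths = 1.0, 1.0, 500, 20000
dt = T / n

dW = np.sqrt(dt) * rng.standard_normal((npaths, n))

# Milstein step: dS = a(S) dt + b(S) dW + (1/2) b(S) b'(S) (dW^2 - dt)
S = np.full(npaths, S0)
for i in range(n):
    S = (S + mu * S * dt + sigma * S * dW[:, i]
           + 0.5 * sigma ** 2 * S * (dW[:, i] ** 2 - dt))

# exact GBM solution for the same Brownian path
S_exact = S0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * dW.sum(axis=1))
err = np.mean(np.abs(S - S_exact))
print(err)   # small: the Milstein scheme has strong order 1, so err = O(dt)
```

Dropping the $(\Delta W^2 - \Delta t)$ correction term recovers the Euler-Maruyama scheme, whose strong error only goes as $O(\sqrt{\Delta t})$.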


Page 10: INTRODUCTION TO STOCHASTIC CALCULUS - 2

Brownian motion


Brownian Motion

Brownian motion is usually used to refer to the random jiggling of pollen grains in water. However, any molecule inside water undergoes this jiggling as a result of random bombardments by other molecules, but due to color differences this is noticeable only for pollen grains. In this section we use stochastic calculus to model Brownian motion. We do this by attributing the change in the velocity to a drag force opposed to the velocity and a random force modelled by the Wiener process. This system is analytically solvable and we do so. We then integrate the velocity to find the position of the particle. At late times the position displays the classic $x_t \sim \sqrt{t}$ behavior of a random walk. We then proceed to solve the system by Monte Carlo and demonstrate that this is consistent with the analytical method.

I have chosen to keep the steps in the derivations explicit for eager students to be able to reproduce. However, for those interested only in the final results, and to assist the reader in not getting lost in the details, I have color coded the essential equations and results.

Velocity

The change in the velocity process involves a deterministic drag piece and a stochastic force piece modelled by the Wiener process. In a different reincarnation (devoid of stochastic calculus notation) this is known as the Langevin equation (https://en.wikipedia.org/wiki/Langevin_equation). The process is described by the SDE

$$dV(t) = -\gamma V(t)\,dt + \beta\,dW_t .$$

Using standard techniques this can be solved to give

$$V(t) = v_0 e^{-\gamma t} + \beta e^{-\gamma t} \int_0^t e^{\gamma t'}\,dW_{t'}$$

as can be verified by direct substitution.

However this form is not particularly illuminating. Instead we observe that at every instant we add an independent normally distributed increment to the velocity and thus, by the stability of the normal distribution under convolution, the velocity at any time will be given by a normal distribution. Since the normal distribution is completely determined by its first two moments, we only need to evaluate those. Since the notation of the differential element in stochastic calculus is different from that in deterministic calculus, I will be very explicit in the following. We begin by being very clear about what the LHS of the SDE above means. It means

$$dV(t) := V(t + dt) - V(t)$$

where $V(t)$ and $V(t + dt)$ are stochastic variables (and in particular are normally distributed as we established above). We can take the expectation value on both sides to get

$$\begin{aligned}
\langle dV(t) \rangle &= \langle V(t + dt) \rangle - \langle V(t) \rangle := d\langle V(t) \rangle \\
&= -\gamma \langle V(t) \rangle\,dt .
\end{aligned}$$

Notice how in the first line the 'd' came out of the mean. This needs to be clearly understood as it is a potential source of confusion. In fact, if one wants to be careful one should use different symbols for stochastic differential elements and ordinary ones. The 'd' on the RHS of the first line is the usual differential element from ordinary deterministic calculus and we then have an ODE for the mean

$$d\langle V(t) \rangle = -\gamma \langle V(t) \rangle\,dt ,$$

with the solution

$$\langle V(t) \rangle = v_0 e^{-\gamma t} .$$

Now we look for the variance. We use similar techniques as above to get

$$d\left(V^2(t)\right) := V^2(t + dt) - V^2(t) ,$$


The velocity changes by a drag force and iid random normal thermal kicks

Detailed calculations can be found at https://github.com/borundev/StochasticProcesses/blob/master/BrownianMotion.ipynb


which can also be obtained by application of Ito's rule to $V^2$. Expanding,

$$d\left(V^2(t)\right) = 2V(t)\,dV(t) + dV(t)^2 = \left(-2\gamma V^2(t) + \beta^2\right)dt + 2\beta V(t)\,dW_t .$$

Taking the mean of both sides,

$$\langle dV^2(t) \rangle = \langle V^2(t + dt) \rangle - \langle V^2(t) \rangle := d\langle V^2(t) \rangle = \left(\beta^2 - 2\gamma\langle V^2(t) \rangle\right)dt ,$$

which gives us an ODE. The solution to this ODE consistent with the initial condition $V(0) = v_0$ is

$$\langle V^2(t) \rangle = v_0^2 e^{-2\gamma t} + \frac{\beta^2}{2\gamma}\left(1 - e^{-2\gamma t}\right) .$$

Therefore we get

$$\mathrm{var}(V(t)) = \frac{\beta^2}{2\gamma}\left(1 - e^{-2\gamma t}\right) .$$

This finally gives

$$V(t) = \mathcal{N}\left(v_0 e^{-\gamma t},\; \sqrt{\frac{\beta^2}{2\gamma}\left(1 - e^{-2\gamma t}\right)}\right) .$$

Interpretation

We see that at late times $t\gamma \gg 1$ we get $V(t) = \mathcal{N}\left(0, \frac{\beta}{\sqrt{2\gamma}}\right)$. Thus the mean velocity goes to zero but there are fluctuations. In the context of physics these fluctuations have to be such that the average energy is $\frac{1}{2}kT$, where $k$ is the Boltzmann constant and $T$ is the temperature. Thus we get

$$\frac{1}{2}M\frac{\beta^2}{2\gamma} = \frac{1}{2}kT ,$$

giving

$$\beta = \sqrt{\frac{2\gamma kT}{M}} .$$

In other words, the strength of the stochastic kicks on the particle is phenomenologically fixed from the drag force, the temperature and the mass of the particle to be consistent with the equipartition of energy (which basically comes from the second law of thermodynamics, which in turn is a fancy way of saying that the system is typically found in the state that is most likely).

Position

The SDE for position is given by

$$dX(t) = V(t)\,dt ,$$

which should be interpreted as the difference of two stochastic variables

$$dX(t) := X(t + dt) - X(t) .$$

As before (remember how we defined terms like $dV^2(t)$ and how we took the averages to get ODEs),

$$d\left(X^2(t)\right) = 2X(t)\,dX(t) + dX(t)^2 = 2X(t)V(t)\,dt ,$$

and

$$d\left(X(t)V(t)\right) = V(t)\,dX(t) + X(t)\,dV(t) + dX(t)\,dV(t) = \left(V^2(t) - \gamma X(t)V(t)\right)dt + \beta X(t)\,dW_t .$$

Taking the expectation values of these we get the ODEs

$$\begin{aligned}
d\langle X(t) \rangle &= \langle V(t) \rangle\,dt = v_0 e^{-\gamma t}\,dt , \\
d\langle X^2(t) \rangle &= 2\langle X(t)V(t) \rangle\,dt , \\
d\langle X(t)V(t) \rangle &= -\gamma\langle X(t)V(t) \rangle\,dt + \langle V^2(t) \rangle\,dt .
\end{aligned}$$

The first can be solved to give

$$\langle X(t) \rangle = x_0 + \frac{v_0}{\gamma}\left(1 - e^{-\gamma t}\right) .$$

We massage the others to get

$$\frac{d\,\mathrm{var}(X(t))}{dt} = \frac{d\langle X^2(t) \rangle}{dt} - 2\langle X(t) \rangle \frac{d\langle X(t) \rangle}{dt} = 2\,\mathrm{cov}(X(t), V(t)) ,$$

and

$$\begin{aligned}
\frac{d\,\mathrm{cov}(X(t), V(t))}{dt} &= \frac{d\langle X(t)V(t) \rangle}{dt} - \langle X(t) \rangle \frac{d\langle V(t) \rangle}{dt} - \langle V(t) \rangle \frac{d\langle X(t) \rangle}{dt} \\
&= \langle V^2(t) \rangle - \gamma\langle X(t)V(t) \rangle + \gamma\langle X(t) \rangle\langle V(t) \rangle - \langle V(t) \rangle^2 \\
&= \mathrm{var}(V(t)) - \gamma\,\mathrm{cov}(X(t), V(t)) .
\end{aligned}$$
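The velocity moments derived above can be checked with a quick Monte Carlo run using an Euler discretization of the SDE; a sketch of my own (parameter values are arbitrary, not from the notebooks):

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, beta, v0 = 1.0, 0.5, 2.0
T, n, npaths = 5.0, 1000, 50000
dt = T / n

# Euler steps of dV = -gamma V dt + beta dW
V = np.full(npaths, v0)
for _ in range(n):
    V = V - gamma * V * dt + beta * np.sqrt(dt) * rng.standard_normal(npaths)

# compare simulated moments against v0 e^{-gamma t} and (beta^2/2gamma)(1 - e^{-2 gamma t})
print(V.mean(), v0 * np.exp(-gamma * T))
print(V.var(), beta ** 2 / (2 * gamma) * (1 - np.exp(-2 * gamma * T)))
```

Both the decaying mean and the saturating variance agree with the analytical expressions to within the Monte Carlo noise and the $O(dt)$ discretization bias.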


Therefore we get

$$\mathrm{cov}(X(t), V(t)) = \frac{\beta^2}{2\gamma^2}\left(1 - 2e^{-\gamma t} + e^{-2\gamma t}\right) ,$$

and

$$\mathrm{var}(X(t)) = \frac{\beta^2}{\gamma^2}\left[t - \frac{2}{\gamma}\left(1 - e^{-\gamma t}\right) + \frac{1}{2\gamma}\left(1 - e^{-2\gamma t}\right)\right] .$$

Now comes the crucial point. Look back at the SDE for $X(t)$. At each instant the increment in $X(t)$ is given by normal distributions (as $V(t)$ is normally distributed), but the increments are not independent. However, sums of normally distributed random numbers are normally distributed even if they are not independent. The two variable case shows this easily. Non-independent normal random numbers can be written as $\sigma_x Z_1$ and $\sigma_y\left(\cos\theta\, Z_2 + \sin\theta\, Z_1\right)$, where $Z_1, Z_2$ are independent $\mathcal{N}(0,1)$. The variances are $\sigma_x^2$ and $\sigma_y^2$ and the correlation is $\sin\theta$. We can then clearly see that the sum can be written as a sum of independent normals and is thus a normal itself.

Thus we get the distribution of $X(t)$ as

$$X(t) = \mathcal{N}\left(x_0 + \frac{v_0}{\gamma}\left(1 - e^{-\gamma t}\right),\; \frac{\beta}{\gamma}\sqrt{t - \frac{2}{\gamma}\left(1 - e^{-\gamma t}\right) + \frac{1}{2\gamma}\left(1 - e^{-2\gamma t}\right)}\right) .$$

Note that given $V(t)$ and $X(t)$ are normal distributions and have a correlation, they are jointly normally distributed, and the reader can write down the probability distribution for them quite simply.

Interpretation

We see that at late times the mean position goes to $x_0 + v_0\gamma^{-1}$ and the variance is given by $t\beta^2\gamma^{-2}$. With the interpretation of $\beta$ given above, the variance is $\frac{2kTt}{\gamma M}$. We see that the fluctuations are inversely proportional to mass and drag and directly proportional to temperature, which makes sense. Also note that the early time behavior is different from the linear scaling of variance with time that is a characteristic of a random walk. In fact the variance at early times ($t\gamma \ll 1$) grows as $\propto t^3$.

Smoluchowski Approximation

There is a nice way to get the late time behavior directly. At late times the distribution becomes stationary for the velocity, and this happens when all the inertial effects are gone and only random ones remain. We do this by setting $dV = 0$, solving for $V(t)dt$, and plugging this into $dX = V(t)dt$ to get

$$dX = \frac{\beta}{\gamma}\,dW_t$$

that solves to $X(t) = \mathcal{N}\left(x'_0, \frac{\beta}{\gamma}\sqrt{t}\right)$, where $x'_0$ is the value of $X(t)$ when the Smoluchowski approximation kicks in.

A caveat: auto-correlation

There is a danger, when describing the position and velocity processes as normal distributions, of forgetting that the process has correlations at different times. For instance, it would be silly to simulate the process by drawing from independent normals at different times. It is only the increments $dV$ that are independent at different times. To show this we calculate the auto-correlation.

Velocity

For this part we write down the formal solution from above again:

$$V(t) = v_0 e^{-\gamma t} + \beta e^{-\gamma t} \int_0^t e^{\gamma t'}\,dW_{t'} .$$

Then the covariance of velocity at different times is

$$\mathrm{cov}(V(t), V(s)) = \left\langle \left(V(t) - \langle V(t) \rangle\right)\left(V(s) - \langle V(s) \rangle\right) \right\rangle = \beta^2 e^{-\gamma(t+s)} \int_0^t \int_0^s e^{\gamma(t'+s')} \left\langle dW_{t'}\,dW_{s'} \right\rangle .$$

Recall that for the part of the path that is common to the two integrals the two Wiener differential elements are the same, so we get $dt$, and for the rest there is only one Wiener differential element, so the expectation value gives zero. Thus we have

$$\mathrm{cov}(V(t), V(s)) = \frac{\beta^2}{2\gamma} e^{-\gamma(t+s)} \left(e^{2\gamma \min(t,s)} - 1\right) = \frac{\beta^2}{2\gamma} \left(e^{-\gamma|t-s|} - e^{-\gamma(t+s)}\right) .$$

Thus, in particular, in the stationary limit $t, s \to \infty$ the auto-correlation function becomes

$$\mathrm{cov}(V(t), V(s)) = \frac{\beta^2}{2\gamma} e^{-\gamma|t-s|} .$$
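The exponential decay of the stationary auto-correlation can also be seen in simulation. A sketch of my own (arbitrary parameters), sampling the same Euler-discretized paths at $t = 10$ and $s = 11$, deep in the stationary regime:

```python
import numpy as np

rng = np.random.default_rng(4)
gamma, beta = 1.0, 0.5
dt, n, npaths = 0.01, 1100, 50000

# Euler steps of dV = -gamma V dt + beta dW, started at v0 = 0
V = np.zeros(npaths)
Vt = None
for i in range(n):
    V = V - gamma * V * dt + beta * np.sqrt(dt) * rng.standard_normal(npaths)
    if i == 999:          # snapshot at t = 10 (gamma*t >> 1, stationary)
        Vt = V.copy()
# the loop ends at s = 11, so |t - s| = 1

cov = np.mean(Vt * V) - Vt.mean() * V.mean()
print(cov, beta ** 2 / (2 * gamma) * np.exp(-gamma * 1.0))
```

Sampling the two times from independent normals instead would give a covariance near zero, which is precisely the mistake the caveat above warns against.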

On physical grounds


an example of fluctuation-dissipation theorem (for physicists)

Page 11: INTRODUCTION TO STOCHASTIC CALCULUS - 2

Brownian motion


In [17]: figsize(15,9)
number_columns=3
number_rows=3
for i,j in enumerate([(i/number_columns,i%number_columns) for i in range(number_columns*number_rows)]):
    plt.subplot2grid((number_rows,number_columns),j)
    plt.plot(t_paths,paths.get_paths_V()[i],label="velocity")
    plt.plot(t_paths,paths.get_paths_X()[i],label="position")
    if(j==(2,1)):
        plt.xlabel("time")
    if(j==(1,0)):
        plt.ylabel("velocity/position")
    plt.legend()


paths=OrnsteinUhlenbeckPaths(x0,v0,beta,gamma,10,1000,10000)

# Get the mean and standard deviation of the velocity from the paths
v_mean=paths.get_paths_V().mean(0)
v_std=paths.get_paths_V().std(0)

# Get the mean and standard deviation of the position from the paths
x_mean=paths.get_paths_X().mean(0)
x_std=paths.get_paths_X().std(0)

# Get the timeline from the paths
t_paths=paths.get_timeline()

# plot the velocity process
plt.subplot(211)
# plot the mean with errorbars for each point on the time-line
plt.errorbar(t_paths,v_mean,yerr=v_std,fmt='o',alpha=.3)
# plot the analytically generated results
plt.plot(t,vmean,"--",label="mean velocity (analytical)")
plt.plot(t,vup,"--",label="mean+sigma velocity (analytical)")
plt.plot(t,vdown,"--",label="mean-sigma velocity (analytical)")
plt.xlabel("time")
plt.ylabel("velocity")
plt.title("Monte Carlo simulation of velocity compared with analytical expressions")
plt.legend()

# plot the position process
plt.subplot(212)
# plot the mean with errorbars for each point on the time-line
plt.errorbar(t_paths,x_mean,yerr=x_std,fmt='o',alpha=.3)
# plot the analytically generated results
plt.plot(t,xmean,"--",label="mean position (analytical)")
plt.plot(t,xup,"--",label="mean position + sigma (analytical)")
plt.plot(t,xdown,"--",label="mean position - sigma (analytical)")
plt.xlabel("time")
plt.ylabel("position")
plt.title("Monte Carlo simulation of position compared with analytical expressions")
plt.legend()

A realization of the path is given below


Individual Realisations

Distributions




Dissipation causes ‘localization’ of velocity. Variance saturates to value governed by equipartition of energy

Variance of position grows linearly like random walk.
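The crossover from early-time $t^3$ growth to late-time linear growth can be read off directly from the variance formula derived above; a small numerical sketch of mine, with $\beta$ and $\gamma$ set to 1:

```python
import numpy as np

beta, gamma = 1.0, 1.0

def var_x(t):
    # var(X(t)) = (beta^2/gamma^2) [t - (2/gamma)(1 - e^{-gamma t}) + (1/2gamma)(1 - e^{-2 gamma t})]
    # written with expm1 (1 - e^{-x} = -expm1(-x)) to avoid cancellation at small t
    return beta ** 2 / gamma ** 2 * (t + 2 / gamma * np.expm1(-gamma * t)
                                       - 1 / (2 * gamma) * np.expm1(-2 * gamma * t))

for t in (1e-3, 1e-2):
    print(var_x(t) / t ** 3)             # early times: close to 1/3, i.e. var ~ beta^2 t^3 / 3
for t in (100.0, 200.0):
    print(var_x(t) - var_x(t - 1.0))     # late times: unit slope beta^2 / gamma^2
```

The $t^3/3$ coefficient follows from Taylor-expanding the bracket, and the unit slope is the random-walk regime quoted above.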

Page 12: INTRODUCTION TO STOCHASTIC CALCULUS - 2

Brownian motion harmonic oscillator


Brownian Motion For A Harmonic Oscillator

We generalize the discussion of the previous chapter to include a quadratic potential term. The SDEs are

$$
\begin{aligned}
dX(t) &= V(t)\,dt ,\\
dV(t) &= -\left[\omega^2 X(t) + \gamma V(t)\right]dt + \beta\, dW_t .
\end{aligned}
$$

Following through with the steps of the previous chapter, we obtain SDEs for $dV^2(t)$, $dX^2(t)$ and $dXV(t)$ using Ito's rule and take expectation values. On massaging the resultant equations we get the following ODEs for the first moments,

$$
\begin{aligned}
\frac{d\langle X(t)\rangle}{dt} &= \langle V(t)\rangle ,\\
\frac{d\langle V(t)\rangle}{dt} &= -\omega^2\langle X(t)\rangle - \gamma\langle V(t)\rangle ,
\end{aligned}
$$

and the following for the second moments,

$$
\begin{aligned}
\frac{d\,\mathrm{var}(X(t))}{dt} &= 2\,\mathrm{cov}(X(t), V(t)) ,\\
\frac{d\,\mathrm{var}(V(t))}{dt} &= \beta^2 - 2\omega^2\,\mathrm{cov}(X(t), V(t)) - 2\gamma\,\mathrm{var}(V(t)) ,\\
\frac{d\,\mathrm{cov}(X(t), V(t))}{dt} &= \mathrm{var}(V(t)) - \gamma\,\mathrm{cov}(X(t), V(t)) - \omega^2\,\mathrm{var}(X(t)) .
\end{aligned}
$$

As before, since the increments are normally distributed, the joint probability distribution of $X(t)$ and $V(t)$ is normal, and knowing the first and second moments is sufficient.

These ODEs are readily solved, and after imposing the initial conditions $X(0) = x_0$ and $V(0) = v_0$ we have

$$
\begin{aligned}
\langle X(t)\rangle &= e^{-\gamma t/2}\left[x_0 \cos(\omega' t) + \left(v_0 + \frac{\gamma x_0}{2}\right)\frac{\sin(\omega' t)}{\omega'}\right],\\
\langle V(t)\rangle &= e^{-\gamma t/2}\left[v_0 \cos(\omega' t) - \left(\omega^2 x_0 + \frac{\gamma v_0}{2}\right)\frac{\sin(\omega' t)}{\omega'}\right],\\
\mathrm{var}(X(t)) &= \frac{\beta^2}{2\gamma\omega^2} + \frac{e^{-\gamma t}\beta^2}{8\gamma\omega'^2\omega^2}\left[-4\omega^2 + \gamma^2\cos(2\omega' t) - 2\gamma\omega'\sin(2\omega' t)\right],\\
\mathrm{var}(V(t)) &= \frac{\beta^2}{2\gamma} + \frac{e^{-\gamma t}\beta^2}{8\gamma\omega'^2}\left[-4\omega^2 + \gamma^2\cos(2\omega' t) + 2\gamma\omega'\sin(2\omega' t)\right],\\
\mathrm{cov}(X(t), V(t)) &= \frac{e^{-\gamma t}\beta^2}{4\omega'^2}\left[1 - \cos(2\omega' t)\right],
\end{aligned}
$$

where $\omega' = \sqrt{\omega^2 - \gamma^2/4}$. Thus we have

$$
X(t) = \mathcal{N}\!\left(\langle X(t)\rangle,\ \sqrt{\mathrm{var}(X(t))}\right)
\quad\text{and}\quad
V(t) = \mathcal{N}\!\left(\langle V(t)\rangle,\ \sqrt{\mathrm{var}(V(t))}\right)
$$

with the moments listed above.
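A rough numerical cross-check (my own sketch, not the repository's code, with illustrative parameters): simulate the damped stochastic oscillator by Euler–Maruyama and compare the late-time variances with the stationary values read off from the formulas above, $\mathrm{var}(X) \to \beta^2/(2\gamma\omega^2)$ and $\mathrm{var}(V) \to \beta^2/(2\gamma)$.

```python
import numpy as np

# Euler-Maruyama simulation of dX = V dt, dV = -(omega^2 X + gamma V) dt + beta dW.
rng = np.random.default_rng(2)

omega, gamma, beta = 2.0, 0.5, 1.0
dt, nsteps, npaths = 0.002, 10000, 50000   # total time T = 20, gamma*T = 10
x = np.zeros(npaths)
v = np.zeros(npaths)                       # x0 = v0 = 0
for _ in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt), npaths)
    # simultaneous update: v is stepped with the *old* x
    x, v = x + v * dt, v - (omega**2 * x + gamma * v) * dt + beta * dW

# Stationary predictions: var(X) = beta^2/(2*gamma*omega^2) = 0.25,
#                         var(V) = beta^2/(2*gamma)         = 1.0
print(x.var(), v.var())
```

The transient terms decay like $e^{-\gamma t}$, so by $\gamma T = 10$ the sample variances should sit within a few percent of the stationary values (Monte-Carlo noise plus a small $O(dt)$ Euler bias).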


As for the free particle, dissipation causes ‘localization’ of the velocity: its variance saturates to a value governed by the equipartition of energy.

Unlike the free particle, the variance of the position now also saturates, to a value governed by the equipartition of energy.
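As a consistency check on the closed-form moments quoted above (a sketch of my own, with arbitrary parameter values), one can verify by central finite differences that they satisfy the moment ODEs.

```python
import numpy as np

# Finite-difference check that the closed-form moments solve the moment ODEs.
w, g, b, x0, v0 = 1.3, 0.4, 0.9, 0.7, -0.2   # omega, gamma, beta, X(0), V(0)
wp = np.sqrt(w**2 - g**2 / 4)                # omega'

ex = lambda t: np.exp(-g * t / 2) * (x0 * np.cos(wp * t)
                                     + (v0 + g * x0 / 2) * np.sin(wp * t) / wp)
ev = lambda t: np.exp(-g * t / 2) * (v0 * np.cos(wp * t)
                                     - (w**2 * x0 + g * v0 / 2) * np.sin(wp * t) / wp)
vx = lambda t: (b**2 / (2 * g * w**2) + np.exp(-g * t) * b**2 / (8 * g * wp**2 * w**2)
                * (-4 * w**2 + g**2 * np.cos(2 * wp * t) - 2 * g * wp * np.sin(2 * wp * t)))
vv = lambda t: (b**2 / (2 * g) + np.exp(-g * t) * b**2 / (8 * g * wp**2)
                * (-4 * w**2 + g**2 * np.cos(2 * wp * t) + 2 * g * wp * np.sin(2 * wp * t)))
cv = lambda t: np.exp(-g * t) * b**2 / (4 * wp**2) * (1 - np.cos(2 * wp * t))

d = lambda f, t, h=1e-5: (f(t + h) - f(t - h)) / (2 * h)   # central difference
t = 0.9
r1 = d(ex, t) - ev(t)                                      # d<X>/dt - <V>
r2 = d(ev, t) + w**2 * ex(t) + g * ev(t)                   # d<V>/dt + w^2<X> + g<V>
r3 = d(vx, t) - 2 * cv(t)                                  # d var(X)/dt - 2 cov
r4 = d(vv, t) - (b**2 - 2 * w**2 * cv(t) - 2 * g * vv(t))  # d var(V)/dt - RHS
r5 = d(cv, t) - (vv(t) - g * cv(t) - w**2 * vx(t))         # d cov/dt - RHS
print(r1, r2, r3, r4, r5)  # all residuals should be ~ 0 (finite-difference error)
```

All five residuals vanish up to finite-difference error, confirming that the quoted solutions are mutually consistent with the ODE system.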