Linear Time Series Analysis

Egon Zakrajšek
Division of Monetary Affairs, Federal Reserve Board

Summer School in Financial Mathematics
Faculty of Mathematics & Physics, University of Ljubljana
September 14–19, 2009

Outline: Stochastic Processes · Stationarity · Ergodicity · Basic Linear Time Series Processes



Introduction

Simple models that describe the behavior of a time series in terms of its past values, without the benefit of a well-developed economic theory, can be quite useful for describing the dynamics of an individual time series.

Large structural econometric models consisting of a large number of simultaneous equations often have poorer forecasting performance than fairly simple univariate time series models based on just a few parameters and compact specifications.


Definition

Sample of size T of some random variable Yt:

{y1, y2, …, yT}    (1)

Example: sample of size T from a Gaussian white noise process:

A collection of T independent and identically distributed (i.i.d.) random variables εt:

{ε1, ε2, …, εT};   εt ∼ N(0, σ²) for all t.
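As a quick illustration, such a sample is easy to generate on a computer. The sketch below uses NumPy; the sample size T and σ are arbitrary illustrative choices, not values from the slides.

```python
import numpy as np

# Draw a sample of size T from a Gaussian white noise process:
# T i.i.d. draws from N(0, sigma^2). T and sigma are arbitrary here.
rng = np.random.default_rng(0)
T, sigma = 200, 1.0
eps = rng.normal(loc=0.0, scale=sigma, size=T)

print(eps.shape)  # (200,)
```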


Key Points

Observed sample represents T particular numbers.

This set of T numbers, however, is only one possible outcome of the underlying stochastic process that generated the data.

Even if we were able to observe the process for an infinite period of time:

{yt} = {…, y−2, y−1, y0, y1, y2, …, yT, yT+1, yT+2, …}

The infinite sequence {yt} would still be viewed as a single realization from the underlying stochastic process.


Unconditional Density

Imagine n computers, each generating a sequence:

{ε1,t}, {ε2,t}, …, {εn,t}

Select from each of the n sequences the observation associated with period t:

{y1,t, y2,t, …, yn,t}

This represents a sample of n realizations of the random variable Yt.

fYt(yt) = unconditional density of Yt.

For the Gaussian white noise process:

fYt(yt) = (1/(√(2π) σ)) exp(−yt²/(2σ²))


Unconditional Expectation

Expectation of the t-th observation of a time series:

E(Yt) = ∫ yt fYt(yt) dyt

Viewed as the probability limit (plim) of the sample average:

E(Yt) = plim_{n→∞} (1/n) ∑_{i=1}^{n} Yi,t

Expectation E(Yt) is called the unconditional mean of Yt:

E(Yt) = µt.

The unconditional mean can be a function of the date of the observation t!
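The ensemble interpretation can be made concrete with a small simulation. This is a minimal sketch: the process is Gaussian white noise (so µt = 0 for every t), and n, T, and the date t are illustrative choices.

```python
import numpy as np

# n independent realizations of Gaussian white noise, laid out as rows;
# the ensemble average of the t-th column estimates E(Y_t) = 0.
rng = np.random.default_rng(1)
n, T, t = 100_000, 50, 10
Y = rng.normal(0.0, 1.0, size=(n, T))
ensemble_mean = Y[:, t].mean()  # (1/n) sum_i Y_{i,t}

print(ensemble_mean)  # close to the true unconditional mean 0
```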


Some Examples

Example 1: {Yt} is the sum of a constant µ and a Gaussian white noise process {εt}:

Yt = µ + εt

Unconditional mean: E(Yt) = µ + E(εt) = µ, which does not depend on t.

Example 2: {Yt} is the sum of a linear time trend µt and a Gaussian white noise process {εt}:

Yt = µt + εt

Unconditional mean: E(Yt) = µt, which does depend on t.
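Example 2 can be checked the same way: averaging across many realizations at a few fixed dates recovers a mean that grows with t. The parameter values below are hypothetical.

```python
import numpy as np

# Y_t = mu*t + eps_t: the ensemble mean at date t tracks the trend mu*t.
rng = np.random.default_rng(2)
mu, n = 0.5, 200_000
t_vals = np.array([1.0, 2.0, 3.0])
Y = mu * t_vals + rng.normal(0.0, 1.0, size=(n, t_vals.size))

print(Y.mean(axis=0))  # approximately [0.5, 1.0, 1.5]
```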


Unconditional Variance

Variance of the t-th observation of a time series:

γ0t ≡ E(Yt − µt)² = ∫ (yt − µt)² fYt(yt) dyt

The unconditional variance of the process Yt = µt + εt is given by

γ0t = E(Yt − µt)² = E(εt²) = σ²


Autocovariance

Given a particular realization (e.g., {y1,t}) of a time series process, construct a (j + 1)-vector y1,t, corresponding to date t and consisting of the (j + 1) most recent observations on y as of date t for that realization:

y1,t = (y1,t, y1,t−1, y1,t−2, …, y1,t−j)′

We think of each realization {yi,t} as generating one particular value of the vector yi,t.


Probability distribution of the vector yi,t across realizations i:

fYt,Yt−1,…,Yt−j(yt, yt−1, …, yt−j)

This distribution is called the joint distribution of (Yt, Yt−1, …, Yt−j).

The j-th autocovariance of Yt:

γjt = ∫⋯∫ (yt − µt)(yt−j − µt−j) fYt,Yt−1,…,Yt−j(yt, yt−1, …, yt−j) dyt dyt−1 ⋯ dyt−j
    = E(Yt − µt)(Yt−j − µt−j)


γjt has the form of a covariance between RVs X and Y:

Cov(X, Y) = E(X − µX)(Y − µY)

γjt is simply the covariance of Yt with its own lagged value.

The 0-th autocovariance is just the variance of Yt (i.e., γ0t).

γjt is the (1, j + 1) element of the covariance matrix of the vector yt.

Think of the j-th autocovariance as the probability limit of the sample average:

γjt = plim_{n→∞} (1/n) ∑_{i=1}^{n} (Yi,t − µt)(Yi,t−j − µt−j).
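This ensemble estimate is also easy to simulate. A minimal sketch, again with Gaussian white noise (µt = 0), for which γ0t = σ² and γ1t = 0; all numbers are illustrative.

```python
import numpy as np

# Ensemble estimate of gamma_{j,t} for Gaussian white noise with sigma = 1:
# gamma_0 should be near sigma^2 = 1 and gamma_1 should be near 0.
rng = np.random.default_rng(3)
n, t, j = 200_000, 20, 1
Y = rng.normal(0.0, 1.0, size=(n, t + 1))
gamma0_hat = (Y[:, t] ** 2).mean()
gamma1_hat = (Y[:, t] * Y[:, t - j]).mean()

print(gamma0_hat, gamma1_hat)  # close to 1 and 0
```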


Autocorrelation

The j-th autocorrelation of a covariance-stationary process:

ρj ≡ γj/γ0.

The terminology arises from the fact that ρj is the correlation between Yt and Yt−j:

Corr(Yt, Yt−j) = Cov(Yt, Yt−j) / (√Var(Yt) √Var(Yt−j)) = γj/(√γ0 √γ0) = γj/γ0 = ρj.

−1 ≤ ρj ≤ 1 for all j.
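In practice, with a single realization, ρj is estimated by the sample analogue below, a time average rather than an ensemble average. A minimal sketch; the function name is ours.

```python
import numpy as np

def sample_autocorr(y, j):
    """Sample j-th autocorrelation of one realization (a time average)."""
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    c0 = ((y - ybar) ** 2).mean()  # sample gamma_0
    if j == 0:
        return 1.0
    cj = ((y[j:] - ybar) * (y[:-j] - ybar)).mean()  # sample gamma_j
    return cj / c0

# White noise has rho_j = 0 for j >= 1:
rng = np.random.default_rng(4)
y = rng.normal(0.0, 1.0, size=100_000)
print(sample_autocorr(y, 1))  # close to 0
```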


Weak Stationarity

Definition (Weak Stationarity)

A stochastic process {Yt} is weakly stationary or covariance-stationary if

i. E(Yt) = µ, for all t;

ii. E(Yt − µ)(Yt−j − µ) = γj, for all t and any j.

Condition (ii) requires that the covariance between observations in the series is a function only of how far apart the observations are in time and not the time at which they occur (i.e., the covariance between Yt and Yt−j depends only on j, the length of time separating the observations).

For a covariance-stationary process, γj = γ−j for all integers j.


Strict Stationarity

A stochastic process is said to be strictly stationary if, for any values of j1, j2, …, jn, the joint distribution of (Yt, Yt+j1, Yt+j2, …, Yt+jn) depends only on the intervals separating the dates, that is, on j1, j2, …, jn, and not on the date t itself.

If a process is strictly stationary (with finite second moments), then it must be covariance-stationary:

If the densities over which we are integrating do not depend ontime, then the moments µt and γjt will not depend on time.

It is possible for a process to be covariance-stationary but not strictly stationary.

The mean and autocovariances may not depend on time, but higher moments (e.g., E(Yt³)) could be functions of time.


Ergodicity

We have been thinking about expectations of a time series in terms of averages over n realizations of the underlying stochastic process.

This may seem a bit contrived, because usually all we have available is a single realization of size T from the process:

{yi,1, yi,2, …, yi,T} for some i

Using these observations, we can calculate the sample mean ȳ:

ȳ ≡ (1/T) ∑_{t=1}^{T} yi,t

ȳ is not an average over n realizations; rather, it is a time average!


Does the time average ȳ eventually converge to the unconditional expectation E(Yt) of a stationary process?

For our purposes, a covariance-stationary process is said to be ergodic for the mean if

plim_{T→∞} ȳ = µ

A process will be ergodic for the mean provided that the autocovariances γj go to zero sufficiently quickly as j → ∞.

If the autocovariances of a covariance-stationary process satisfy ∑_{j=0}^{∞} |γj| < ∞, then {Yt} is ergodic for the mean.
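White noise trivially satisfies this condition (γj = 0 for every j ≥ 1), so its time average converges to µ. A quick numerical check with illustrative values:

```python
import numpy as np

# One long realization of Gaussian white noise (mu = 0): the time average
# over T observations converges to mu as T grows.
rng = np.random.default_rng(5)
y = rng.normal(0.0, 1.0, size=500_000)
ybar = y.mean()

print(ybar)  # close to 0
```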


A covariance-stationary process is said to be ergodic for second moments if

plim_{T→∞} (1/(T − j)) ∑_{t=j+1}^{T} (Yt − µ)(Yt−j − µ) = γj,   j = 0, 1, 2, ….

If {Yt} is a stationary Gaussian process, absolute summability of the autocovariances is sufficient to ensure ergodicity for all moments.


Example: Ergodicity vs. Stationarity

Suppose that the mean µi for the i-th realization {Yi,t} of a stochastic process {Yt} is generated from a N(0, ω²) distribution:

Yi,t = µi + εt

where εt ∼ N(0, σ²), for all t, is a Gaussian white noise process that is independent of µi.

This process is covariance-stationary, but it is not ergodic for the mean.
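A simulation makes the failure of ergodicity visible: the time average of one realization converges to that realization's µi, not to the unconditional mean E(Yt) = 0. The parameter values below are hypothetical.

```python
import numpy as np

# Y_{i,t} = mu_i + eps_t with mu_i ~ N(0, omega^2) drawn once per
# realization. The time average converges to mu_i, not to E(Y_t) = 0.
rng = np.random.default_rng(6)
omega, sigma, T = 2.0, 1.0, 500_000
mu_i = rng.normal(0.0, omega)               # fixed for this realization
y = mu_i + rng.normal(0.0, sigma, size=T)
time_avg = y.mean()

print(mu_i, time_avg)  # time average sticks at mu_i, not at 0
```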


White Noise Process

A white noise process {εt} satisfies:

E(εt) = 0, for all t;
E(εt²) = σ² < ∞, for all t;
E(εt εs) = 0, for all t ≠ s.

A slightly stronger condition is that the ε's are independent across time: an independent white noise process.

If εt ∼ N(0, σ²) for all t, then the process is called a Gaussian white noise process.


Moving Average Processes: MA(1)

First-order moving average process:

Yt = µ + εt + θεt−1,

where {εt} is a white noise process and µ and θ are arbitrary constants.

Moments of Yt:

Mean:

E(Yt) = E(µ + εt + θεt−1) = µ + E(εt) + θE(εt−1) = µ.

Variance:

E(Yt − µ)² = E(εt + θεt−1)²
           = E(εt² + 2θεtεt−1 + θ²εt−1²)
           = (1 + θ²)σ²


The first autocovariance:

E(Yt − µ)(Yt−1 − µ) = E(εt + θεt−1)(εt−1 + θεt−2)
                    = E(εtεt−1 + θεt−1² + θεtεt−2 + θ²εt−1εt−2)
                    = θσ²

Higher autocovariances are all zero:

E(Yt − µ)(Yt−j − µ) = E(εt + θεt−1)(εt−j + θεt−j−1) = 0, for j > 1.

The MA(1) process is covariance-stationary, with

∑_{j=0}^{∞} |γj| = (1 + θ²)σ² + |θ|σ² + 0 + 0 + ⋯ < ∞.

The first autocorrelation:

ρ1 = θσ²/[(1 + θ²)σ²] = θ/(1 + θ²)
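These moments are easy to verify by simulation. A minimal sketch; θ, σ, and T are arbitrary illustrative choices.

```python
import numpy as np

# Simulate an MA(1): Y_t = mu + eps_t + theta * eps_{t-1}, and compare the
# sample variance and first autocorrelation with the theoretical values
# (1 + theta^2) * sigma^2 and theta / (1 + theta^2).
rng = np.random.default_rng(7)
mu, theta, sigma, T = 0.0, 0.6, 1.0, 1_000_000
eps = rng.normal(0.0, sigma, size=T + 1)
y = mu + eps[1:] + theta * eps[:-1]

var_hat = y.var()
rho1_hat = np.corrcoef(y[1:], y[:-1])[0, 1]
print(var_hat, rho1_hat)  # close to 1.36 and 0.44 for theta = 0.6
```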


Moving Average Processes: MA(q)

q-th order moving average process:

Yt = µ + εt + θ1εt−1 + θ2εt−2 + ⋯ + θqεt−q,

Mean:

E(Yt) = µ + E(εt) + θ1E(εt−1) + θ2E(εt−2) + ⋯ + θqE(εt−q) = µ.

Autocovariances:

γj = [θj + θj+1θ1 + θj+2θ2 + ⋯ + θqθq−j]σ²  for j = 1, …, q;
γj = 0                                      for j > q.

For any values of θ1, θ2, …, θq, the MA(q) process is thus covariance-stationary.

Because ∑_{j=0}^{∞} |γj| < ∞, it follows that if {εt} is Gaussian white noise, then the MA(q) process is ergodic for all moments.
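The MA(q) autocovariance formula can be packaged as a small helper. A sketch: the function name and the θ0 = 1 convention (which makes γ0 the variance) are ours.

```python
import numpy as np

def ma_autocovariances(theta, sigma2=1.0):
    """Theoretical gamma_0, ..., gamma_q of an MA(q) with coefficients
    theta = [theta_1, ..., theta_q]; theta_0 = 1 by convention."""
    psi = np.concatenate(([1.0], np.asarray(theta, dtype=float)))
    q = psi.size - 1
    # gamma_j = sigma^2 * sum_k psi_{k+j} * psi_k, for j = 0, ..., q
    return np.array([sigma2 * np.dot(psi[j:], psi[:psi.size - j])
                     for j in range(q + 1)])

# MA(1) special case: gamma_0 = (1 + theta^2) sigma^2, gamma_1 = theta sigma^2.
print(ma_autocovariances([0.6]))  # gamma_0 = 1.36, gamma_1 = 0.6
```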