
Forecasting High-Dimensional Realized Covariance Matrices

Philip L.H. Yu (with X. Wang and Yaohua Tang)

The University of Hong Kong

Big Data Challenges for Predictive Modeling of Complex Systems, 28 November 2018

This work was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. 17304417).


Outline

1 Introduction

2 Existing Models for RCOV Matrices

3 Dynamic Matrix Factor Model

4 Deep Learning

5 Deep Learning for RCOV Matrices

6 Applications

7 Concluding Remarks


Introduction

Modeling and forecasting covariance matrices of financial asset returns play an important role in asset pricing, portfolio allocation, and risk management.

The availability of high-frequency intraday financial data enables us to estimate daily volatilities and co-volatilities of asset returns directly, which leads to the so-called realized covariance (RCOV) matrix.

Suppose we have the daily RCOV matrices of $n$ asset returns over $T$ days: $Y_t$, $t = 1, \dots, T$.

Two major issues in modeling RCOV matrices:

1. Each $Y_t$ is an $n \times n$ symmetric and positive (semi-)definite matrix.
2. The proposed model must work for high-dimensional covariance matrices.


Existing Models for RCOV Matrices

Wishart AR (WAR(r)) model (Gourieroux et al., 2009)

$$Y_t \mid \mathcal{F}_{t-1} \sim W_n(\nu, \Lambda_t, \Sigma), \qquad \Lambda_t = \sum_{k=1}^{r} M_k Y_{t-k} M_k'. \tag{1}$$

Conditional AR Wishart (CAW(p, q)) model (Golosnoy et al., 2012)

$$Y_t \mid \mathcal{F}_{t-1} \sim W_n(\nu, 0, \Sigma_t/\nu), \qquad \Sigma_t = CC' + \sum_{i=1}^{p} B_i \Sigma_{t-i} B_i' + \sum_{j=1}^{q} A_j Y_{t-j} A_j'. \tag{2}$$

Generalized CAW (GCAW(p, q, r)) model (Yu et al., 2017)

$$Y_t \mid \mathcal{F}_{t-1} \sim W_n(\nu, \Lambda_t, \Sigma_t),$$

where $\Lambda_t$ follows (1) and $\Sigma_t$ follows (2).

However, fitting these models is computationally demanding for moderate and high dimensions (say n > 10).
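To make the recursions concrete, here is a minimal numpy sketch of one CAW(1,1) update of the scale matrix in (2); the function and argument names are ours, not from the cited papers.

```python
import numpy as np

def caw_sigma_step(C, B, A, Sigma_prev, Y_prev):
    """One CAW(1,1) update of the scale matrix in eq. (2):
    Sigma_t = C C' + B Sigma_{t-1} B' + A Y_{t-1} A'.
    With full n x n parameter matrices C, B, A the model carries
    O(n^2) parameters, which is what makes estimation costly
    once n grows beyond about 10."""
    return C @ C.T + B @ Sigma_prev @ B.T + A @ Y_prev @ A.T
```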


Existing Models for RCOV Matrices

Matrix Factor Analysis (MFA) model (Tao et al., 2011)

$$Y_t = L F_t L' + E_0,$$

where $L$ is an $n \times d$ factor loading matrix, the $F_t$ are $d \times d$ positive definite matrices, and $E_0$ is an $n \times n$ positive definite constant matrix.

To forecast $Y_{t+1}$, Tao et al. (2011) adopted a two-step procedure: first estimate $L$ and $F_t$, then fit a VAR model to $\mathrm{vech}(F_t)$:

$$\mathrm{vech}(F_t) = \lambda_0 + \sum_{j=1}^{q} \Lambda_j \, \mathrm{vech}(F_{t-j}) + e_t$$

Shen et al. (2015) replaced the VAR model with a diagonal CAW (DCAW) model, i.e., diagonal matrices for $A$, $B$, and $C$.

However, the loading matrix $L$ is assumed to be constant over time, meaning that the dynamic correlation structure of $Y_t$ is completely governed by the model (VAR or DCAW) for the low-dimensional $F_t$.
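As a rough illustration of the second step, the sketch below fits a VAR(1) to $\mathrm{vech}(F_t)$ by ordinary least squares; it assumes the factor matrices $F_t$ have already been extracted in the first step, and the helper names are hypothetical.

```python
import numpy as np

def vech(A):
    """Half-vectorisation: stack the lower-triangular entries of A."""
    return A[np.tril_indices_from(A)]

def fit_vech_var1(F_series):
    """OLS fit of vech(F_t) = lam0 + Lam1 @ vech(F_{t-1}) + e_t,
    i.e. the VAR model above with q = 1."""
    Z = np.array([vech(F) for F in F_series])          # T x d(d+1)/2
    X = np.hstack([np.ones((len(Z) - 1, 1)), Z[:-1]])  # intercept + lag
    coef, *_ = np.linalg.lstsq(X, Z[1:], rcond=None)
    lam0, Lam1 = coef[0], coef[1:].T
    return lam0, Lam1   # one-step forecast: lam0 + Lam1 @ vech(F_T)
```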


A Dynamic Matrix Factor Model for RCOV Matrices

A conditional Wishart distribution for $Y_t$:

$$Y_t \mid \mathcal{F}_{t-1} \sim W_n(\nu, \Sigma_t/\nu).$$

Note that $E(Y_t \mid \mathcal{F}_{t-1}) = \Sigma_t$.

Spectral decomposition of $\Sigma_t$:

$$\Sigma_t = L_t D_t L_t',$$

where $D_t = \mathrm{diag}(d_{1,t}, \dots, d_{n,t})$ and $L_t = (l_{1,t}, \dots, l_{n,t})$, with the $d_{i,t}$'s and $l_{i,t}$'s being the eigenvalues and eigenvectors of $\Sigma_t$.

Consider a loading-driven process $Q_t$:

$$Q_t = (1 - a - b)\Sigma + a Y_{t-1} + b Q_{t-1},$$

where the scalars $a$, $b$ and the matrix $\Sigma$ are the parameters to be determined.

We assume that $\Sigma_t$ and $Q_t$ share the same loading matrix $L_t$:

$$Q_t = L_t G_t L_t'$$
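A minimal numpy sketch of this recursion and of recovering $L_t$ from $Q_t$; the names are ours, and the ordering and sign conventions anticipate the assumptions on the next slide.

```python
import numpy as np

def update_loadings(Sigma, Y_prev, Q_prev, a, b, L_bar):
    """Q_t = (1 - a - b) Sigma + a Y_{t-1} + b Q_{t-1}, followed by
    the spectral decomposition Q_t = L_t G_t L_t'. Eigenvalues are
    sorted in decreasing order and eigenvector signs are chosen so
    that diag(L_t' L_bar) > 0, where L_bar comes from Sigma = L D L'."""
    Q_t = (1 - a - b) * Sigma + a * Y_prev + b * Q_prev
    w, V = np.linalg.eigh(Q_t)                    # ascending eigenvalues
    idx = np.argsort(w)[::-1]                     # make them descending
    G_t, L_t = np.diag(w[idx]), V[:, idx]
    L_t = L_t * np.sign(np.diag(L_t.T @ L_bar))   # fix the signs
    return Q_t, L_t, G_t
```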


Assumptions

Let $\Sigma = LDL'$ be the spectral decomposition of $\Sigma$.

Assumption 1: $a \ge 0$, $b \ge 0$, $a + b < 1$, and $\Sigma$ and $Q_0$ are positive definite.

Assumption 2: The eigenvalues of $\Sigma$ are arranged in strictly decreasing order, and the signs of the corresponding eigenvectors are such that the diagonal elements of $L$ are positive.

Assumption 3: The eigenvalues of $Q_t$ are arranged in strictly decreasing order, and the signs of the corresponding eigenvectors are such that the diagonal elements of $L_t' L$ are positive.

Assumption 1 guarantees the positive definiteness of $Q_t$ for any $t$, so that the spectral decomposition of $Q_t$ exists.

Assumptions 2 and 3 ensure the uniqueness of the spectral decomposition of $Q_t$.

Fixing the scale of $\Sigma$: $\mathrm{tr}(\Sigma) = \mathrm{tr}(E(Y_t))$.


A Model for the Eigenvalues $D_t$

$D_t = \mathrm{diag}(d_{1,t}, \dots, d_{n,t})$

A GARCH(1,1) model for $d_{i,t}$ ($i = 1, \dots, n$):

$$d_{i,t} = (1 - \alpha_i - \beta_i) d_i + \alpha_i e_{i,t-1} + \beta_i d_{i,t-1},$$

where $d_i$ is the $i$-th largest eigenvalue of $\Sigma$ and $e_{i,t-1} = l_{i,t-1}' Y_{t-1} l_{i,t-1}$ is the $(i,i)$-th entry of $L_{t-1}' Y_{t-1} L_{t-1}$.

$$E(e_{i,t} \mid \mathcal{F}_{t-1}) = E(l_{i,t}' Y_t l_{i,t} \mid \mathcal{F}_{t-1}) = l_{i,t}' \Sigma_t l_{i,t} = l_{i,t}' L_t D_t L_t' l_{i,t} = d_{i,t}$$

Assumption: $\alpha_i \ge 0$, $\beta_i \ge 0$, $\alpha_i + \beta_i < 1$, and $d_{i,0} > 0$ for $i = 1, \dots, n$.

Then $E(d_{i,t}) = E(e_{i,t}) = d_i$, and:

$d_{i,t} > 0$, which guarantees the positive definiteness of $\Sigma_t = L_t D_t L_t'$.

Since the eigenvalues of $\Sigma = LDL'$ are descending, the expected eigenvalues of $D_t$ are arranged in decreasing order:

$$E(d_{1,t}) > E(d_{2,t}) > \cdots > E(d_{n,t}).$$
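The eigenvalue update itself is a one-line recursion; a minimal numpy sketch (argument names are ours):

```python
import numpy as np

def eigenvalue_garch_step(d_prev, L_prev, Y_prev, d_bar, alpha, beta):
    """Elementwise GARCH(1,1) update of the n eigenvalues:
    d_{i,t} = (1 - alpha_i - beta_i) d_i + alpha_i e_{i,t-1} + beta_i d_{i,t-1},
    where e_{i,t-1} is the (i,i)-th entry of L'_{t-1} Y_{t-1} L_{t-1}.
    d_prev, d_bar, alpha and beta are length-n arrays."""
    e_prev = np.diag(L_prev.T @ Y_prev @ L_prev)
    return (1 - alpha - beta) * d_bar + alpha * e_prev + beta * d_prev
```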


A DMF Model for High-Dim RCOV Matrices

$Y_t \mid \mathcal{F}_{t-1} \sim W_n(\nu, \Sigma_t/\nu)$ with $\Sigma_t = L_t D_t L_t'$

$L_t$ is the loading matrix of $Q_t = L_t G_t L_t'$:

$$Q_t = (1 - a - b)\Sigma + a Y_{t-1} + b Q_{t-1}$$

$$d_{i,t} = (1 - \alpha_i - \beta_i) d_i + \alpha_i e_{i,t-1} + \beta_i d_{i,t-1}, \quad i = 1, \dots, n$$

The number of parameters in the DMF model is $n(n+1)/2 + 2n + 3$.

For high-dimensional RCOV matrices, we assume that:

$\Sigma$ is given or estimated by some method such as RiskMetrics;
$\alpha_r = \alpha_{r+1} = \cdots = \alpha_n = \alpha^*$ and $\beta_r = \beta_{r+1} = \cdots = \beta_n = \beta^*$.

The number of parameters in the DMF model then reduces to $2r + 3$. The optimal dimension $r$ can be chosen by the BIC.


Application to Constituent Stocks of DJIA

Consider the 30 constituent stocks of the DJIA index as of the end of 2013.

Intra-day price data from January 18, 2007 to December 31, 2013 (1752 trading days) are downloaded from the NYSE TAQ database of WRDS, and daily realized covariance matrices are calculated using the ARVM method.

The trading records are from 9:30 am to 4:00 pm each day, with observations before 10:00 am deleted to avoid opening effects. The sampling frequency is set to five minutes. Stocks with fewer than 100 daily trading records are also deleted, resulting in 25 stocks.

The figure on the next page shows the plots of the realized variances of the 25 DJIA stocks. These plots reveal high volatilities around the 2008 subprime mortgage crisis and during the flash crash on May 6, 2010.


Figure: Realized variances of the DJIA constituents from 01/18/2007 to 12/31/2013


Forecasting Performance

Training size = 1500 trading days (moving window)

Testing data: the last 252 trading days

1-step-ahead forecast performance:

$$\mathrm{FN}_1 = \frac{1}{T_1} \sum_t \| Y_{t+1} - Y_t(1) \|_F,$$

where $Y_t(1)$ denotes the 1-step-ahead forecast made on day $t$.

Models                                         FN1
Moving Average (MA)                            4.02
Exponentially Weighted Moving Average (EWMA)   3.87
MFA-VAR                                        6.14
MFA-CAW                                        4.33
MFA-DCAW                                       4.25
DMF                                            3.69
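For reference, a minimal numpy sketch of the FN1 criterion used in the table above (the function name is ours):

```python
import numpy as np

def fn1(realized, forecasts):
    """Average Frobenius-norm error over the T_1 test days:
    realized and forecasts are sequences of n x n matrices,
    playing the roles of Y_{t+1} and Y_t(1)."""
    return np.mean([np.linalg.norm(Y - F, 'fro')
                    for Y, F in zip(realized, forecasts)])
```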


Deep Learning Recovers Patterns

Deep learning is a form of machine learning that uses hierarchical abstract layers of latent variables to perform classification and prediction.

See Schmidhuber (2015) for a survey of deep learning and its applications.

The popularity of deep learning stems from its amazing applications, including:

Image processing: handwritten digit recognition (Ciresan et al., 2010), liver tumors (Roth et al., 2015), skin cancer (Kubota, 2017), etc.
Learning in games: AlphaGo (DeepMind, 2017), basketball analytics (Chang and Su, 2017)
NLP: neural machine translation (Google Translate, 2016; Tang et al., 2016), question answering, etc.
and even music and art...


Training a New Rembrandt

Rembrandt van Rijn (1606-1669) was one of the greatest visual artists ever, and certainly the most important Dutch artist.

A team of Dutch data analysts, developers, engineers, and art historians analyzed all 346 Rembrandt paintings (150 GB): https://www.nextrembrandt.com/

The paintings, digitized using 3D scans, were analyzed by a deep learning algorithm.
The algorithm isolated common Rembrandt subjects to create the most consistent subject: a white, middle-aged man with facial hair, wearing black clothes with a white collar and a hat, and facing to the right.


Exhibition in Amsterdam in April 2016


Convolutional Neural Networks

The Convolutional Neural Network (CNN) is a well-known deep learning architecture inspired by the natural visual perception mechanism of living creatures.

LeNet-5: LeCun et al. (1990) established the modern framework of the CNN, and later improved it in 1998.

Since 2006, the development of CNNs has been gradually moving into high gear, benefiting from several factors:

a new activation function, the Rectified Linear Unit (ReLU, $\max(0, x)$), which makes convergence much faster while still delivering good quality;
efficient training implementations on modern, powerful GPUs;
easy access to an abundance of data (like ImageNet) for training larger models.

Numerous variants of the LeNet-5 architecture, like AlexNet, GoogLeNet, and VGGNet, have achieved great success on a series of complex problems.


LeNet-5:

The basic structure of a CNN model consists of three types of layers:

Convolutional layers
Pooling layers (also called subsampling or downsampling)
Fully-connected layers


Convolutional layers

Consider a $5 \times 5$ image and a $3 \times 3$ filter weight matrix:


Convolutional layers

The “convolution” of the $5 \times 5$ image and the $3 \times 3$ filter matrix can be computed as follows:
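A minimal numpy sketch of this computation, assuming stride 1 and no padding (as in CNN practice, the "convolution" is really a cross-correlation; the function name is ours):

```python
import numpy as np

def convolve2d(image, filt):
    """Slide the filter over the image and take the sum of
    elementwise products at each position (stride 1, no padding)."""
    fh, fw = filt.shape
    oh, ow = image.shape[0] - fh + 1, image.shape[1] - fw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + fh, j:j + fw] * filt)
    return out

image = np.arange(25.0).reshape(5, 5)   # a toy 5 x 5 "image"
filt = np.ones((3, 3)) / 9.0            # a 3 x 3 averaging filter
print(convolve2d(image, filt).shape)    # (3, 3): the feature map
```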


Convolutional layers

The output matrix is called the convolved feature or feature map. The size of the feature map is controlled by three parameters that we need to decide before the convolution step is performed:

Depth: the number of filters we use for the convolution operation.
Stride: the number of pixels by which we slide our filter matrix over the input matrix.
Zero-padding: sometimes it is convenient to pad the input matrix with zeros around the border, so that we can apply the filter to bordering elements of the input image matrix.

A non-linear activation function (e.g., leaky ReLU, to avoid the “dying ReLU” problem) is applied after every convolution operation.
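A small sketch tying these parameters together, using the standard output-size identity (not stated on the slide) and a leaky ReLU:

```python
import numpy as np

def feature_map_size(i, f, padding=0, stride=1):
    """Output size of a convolution: (i - f + 2*padding)//stride + 1."""
    return (i - f + 2 * padding) // stride + 1

def leaky_relu(x, alpha=0.01):
    """Keeps a small slope for x < 0 so that units cannot 'die'."""
    return np.where(x > 0, x, alpha * x)

print(feature_map_size(5, 3))             # 3: the valid convolution above
print(feature_map_size(5, 3, padding=1))  # 5: zero-padding preserves size
```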


Pooling layers

Spatial pooling reduces the dimensionality of each feature map but retains the most important information.

Spatial pooling can be of different types: max, average, sum, etc.

The function of pooling is to progressively reduce the spatial size of the input representation.
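For illustration (the FCN introduced later uses no pooling layers), a max-pooling sketch with a 2 x 2 window and stride 2:

```python
import numpy as np

def max_pool_2x2(fmap):
    """Keep the largest value in each non-overlapping 2 x 2 window,
    halving each spatial dimension of the feature map."""
    h, w = fmap.shape
    return (fmap[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .max(axis=(1, 3)))

fmap = np.arange(16.0).reshape(4, 4)
print(max_pool_2x2(fmap))   # [[ 5.  7.]
                            #  [13. 15.]]
```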


Fully-connected layers

The fully-connected layer is a traditional multi-layer perceptron (MLP) that typically uses a softmax activation function in the output layer for classification.

The term fully-connected implies that every neuron in the previous layer is connected to every neuron in the next layer.

The output from the last pooling layer represents high-level features of the input image, in the form of a 3-dimensional matrix. This matrix is flattened into a long vector and fully connected to the next layer.

The purpose of the fully-connected layer is to combine these features non-linearly for classifying input images into various classes based on the trained model.


Deep Learning for RCOV Matrices

The input is a set of historical RCOV matrices: a 3D cube $\mathbf{Y}_{t-1} = [Y_{t-1}; Y_{t-2}; \dots; Y_{t-m}]$ with dimension $m \times n \times n$, where $m$ is the lag length and $n$ is the number of assets.

Our objective is to produce a one-day-ahead forecast of the RCOV matrix, $Y_t$.

We wish to learn a mapping $F$ that connects $\mathbf{Y}_{t-1}$ and $Y_t$ and can handle high-dimensional covariance matrices.

Our mapping $F$ is designed to consist of at least three convolutional layers (i.e., no pooling layers and no fully-connected layers):


Fully Convolutional Networks: Layer 1

Extracting spatial and temporal features: this operation extracts (overlapping) spatial and temporal features from $\mathbf{Y}_{t-1}$.

Formally, our first layer is expressed as an operation $F_1$:

$$F_1(\mathbf{Y}_{t-1}) = \mathrm{ReLU}(W_1 \otimes \mathbf{Y}_{t-1} + B_1), \tag{3}$$

where $W_1$ and $B_1$ represent the filters and biases respectively, and $\otimes$ denotes the convolution operation.

Here, $W_1$ corresponds to $n_1$ 3D filters of size $m \times f_1 \times f_1$ each, where $m$ is the number of channels (the lag length) in the input and $f_1$ is the spatial size of a filter.

The output is composed of $n_1$ feature maps.


Fully Convolutional Networks: Layer 2

Generating more detailed features: this maps the $n_1$ feature maps into $n_2$ feature maps.

This is equivalent to applying $n_2$ filters of size $f_2 \times f_2$:

$$F_2(\mathbf{Y}_{t-1}) = \mathrm{ReLU}(W_2 \otimes F_1(\mathbf{Y}_{t-1}) + B_2) \tag{4}$$

Here $W_2$ contains $n_2$ filters of size $n_1 \times f_2 \times f_2$ each.

The output is composed of $n_2$ feature maps.

More layers can be added to the model to improve the forecasting performance if needed.


Fully Convolutional Networks: Layer 3

Generating the covariance matrix: this generates the predicted $Y_t$ as a weighted average of the $n_2$ feature maps:

$$F(\mathbf{Y}_{t-1}) = W_3 \otimes F_2(\mathbf{Y}_{t-1}) + B_3 \tag{5}$$

Here $W_3$ corresponds to a single filter of size $n_2 \times f_3 \times f_3$.

In this layer, instead of a ReLU activation, we use a linear mapping, which acts like a regression on the learned feature maps obtained from the second layer.
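Putting (3)-(5) together, here is a minimal PyTorch sketch of the three-layer architecture. PyTorch is our choice of framework, the default hyperparameters merely echo the DJIA setting shown later (lag m = 8, 16 filters of size 3 x 3), and the padding is an assumption made so that the forecast stays n x n:

```python
import torch
import torch.nn as nn

class FCNForecaster(nn.Module):
    """Three convolutional layers; no pooling, no fully-connected layers."""
    def __init__(self, m=8, n1=16, n2=16, f1=3, f2=3, f3=3):
        super().__init__()
        # Layer 1, eq. (3): m input channels (the lagged RCOV cube)
        # mapped to n1 feature maps by f1 x f1 filters.
        self.conv1 = nn.Conv2d(m, n1, f1, padding=f1 // 2)
        # Layer 2, eq. (4): n1 -> n2 feature maps.
        self.conv2 = nn.Conv2d(n1, n2, f2, padding=f2 // 2)
        # Layer 3, eq. (5): a single filter combines the n2 maps.
        self.conv3 = nn.Conv2d(n2, 1, f3, padding=f3 // 2)

    def forward(self, y_cube):               # y_cube: (batch, m, n, n)
        h = torch.relu(self.conv1(y_cube))   # eq. (3)
        h = torch.relu(self.conv2(h))        # eq. (4)
        return self.conv3(h)                 # eq. (5): linear, no ReLU
```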


FCN Training

Learning the mapping function $F$ requires estimating the network parameters $\Theta = \{W_1, W_2, W_3, \dots, B_1, B_2, B_3, \dots\}$.

This is achieved by minimizing the loss between the forecast $F(\mathbf{Y}_{t-1}; \Theta)$ and the true covariance matrix $Y_t$.

Given a set of pairs $\{\mathbf{Y}_{t-1}, Y_t\}$, we use the least absolute error (L1 norm) as our loss function:

$$L(\Theta) = \frac{1}{T} \sum_{t=1}^{T} \| F(\mathbf{Y}_{t-1}; \Theta) - Y_t \|_1$$

The loss is minimized using the stochastic gradient descent method.
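A matching training sketch under the same assumptions: `loader` is assumed to yield (lagged cube, next-day matrix) pairs, and nn.L1Loss implements the least-absolute-error criterion above.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=50, lr=1e-3):
    """Minimize (1/T) sum_t |F(Y_{t-1}; Theta) - Y_t| by SGD."""
    loss_fn = nn.L1Loss()   # elementwise absolute error, averaged
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:          # x: (B, m, n, n), y: (B, 1, n, n)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
```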


Applications

3 datasets:

1. 25 constituent stocks of the DJIA index (18 Jan 2007 to 31 Dec 2013, $T = 1752$)

2. 60 constituent stocks of the S&P100 index (10 Sep 2003 to 30 Dec 2016, $T = 3345$)

3. 244 constituent stocks of the S&P500 index (10 Sep 2003 to 30 Dec 2016, $T = 3345$)

High-frequency data are downloaded from the NYSE TAQ database, and daily RCOV matrices are calculated using the averaging realized volatility matrix (ARVM) method proposed by Wang et al. (2010).

Stock trading is from 9:30 am to 4:00 pm each day, with observations before 10:00 am deleted to avoid opening effects. Stocks with fewer than 100 daily trading records are also deleted.

The testing set contains the last 252 days' RCOV matrices, the validation set contains the second-to-last 252 days' RCOV matrices, and the remaining matrices all go to the training set.


Data Preprocessing

To guarantee positive definiteness of the forecasted covariance matrix, we first transform the RCOV matrix ($Y_t \to X_t$) and then apply the FCN to model the transformed matrix $X_t$. Finally, we transform the forecasted matrix $X_t$ back to the forecasted covariance matrix $Y_t$.

The following transformations are considered:

1. Square-root transformation: $A = PDP' \to A^{1/2} = PD^{1/2}P'$
2. Cholesky decomposition: $A = LL' \to L$
3. Removing a moving average: $A \to A - MV(k)$

We can also apply more than one transformation in sequence: $1 \to 2$, $2 \to 3$, $1 \to 3$, $1 \to 2 \to 3$.
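Minimal numpy sketches of transformations 1 and 2 (the moving-average adjustment is omitted); mapping a forecasted Cholesky factor back via $LL'$ is positive semi-definite by construction, which is the point of the preprocessing:

```python
import numpy as np

def sqrt_transform(A):
    """Transformation 1: A = P D P' -> A^{1/2} = P D^{1/2} P'."""
    w, P = np.linalg.eigh(A)
    return P @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ P.T

def chol_transform(A):
    """Transformation 2: the lower Cholesky factor L with A = L L'."""
    return np.linalg.cholesky(A)

def chol_back(L):
    """Inverse of transformation 2: L @ L.T is positive semi-definite
    for any forecasted L, guaranteeing a valid covariance forecast."""
    return L @ L.T
```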

Best transformation based on the smallest RMSE on the testing set:

DJIA: $1 \to 2$
S&P100, S&P500: $2 \to 3$


Forecasting Performance FN1 for the Testing Set

Models      DJIA     S&P100    S&P500
MA          3.851    17.430    110.689
EWMA        3.734    16.277    107.586
MFA-VAR     4.380    18.639    140.490
MFA-DCAW    4.246    16.916    115.144
FCN         3.367    14.914    103.044
DMF         3.690    N/A       N/A


Filter Weights of the First Convolution Layer (DJIA)

16 filters of size $3 \times 3$ applied to the covariance matrices of the past 8 days


Correlation between Feature Maps and Future Covariance Matrices Based on the Testing Set (DJIA)


Concluding Remarks

We propose a dynamic matrix factor (DMF) model and an FCN model to fit RCOV matrices, and we empirically demonstrate their outstanding forecasting performance in high dimensions.

Various financial applications of the proposed models could be explored:

risk management
asset allocation
quantitative trading


Thank You
