Source: sas.uwaterloo.ca/~wang/papers/2015EHW(AAP).pdf · retrieved 2015-07-31


Bernoulli and Tail-Dependence Compatibility

Paul Embrechts∗, Marius Hofert† and Ruodu Wang‡

July 30, 2015

Abstract

The tail-dependence compatibility problem is introduced. It raises the question whether

a given d×d-matrix of entries in the unit interval is the matrix of pairwise tail-dependence co-

efficients of a d-dimensional random vector. The problem is studied together with Bernoulli-

compatible matrices, i.e., matrices which are expectations of outer products of random vec-

tors with Bernoulli margins. We show that a square matrix with diagonal entries being 1 is

a tail-dependence matrix if and only if it is a Bernoulli-compatible matrix multiplied by a

constant. We introduce new copula models to construct tail-dependence matrices, including

commonly used matrices in statistics.

Key-words: Tail dependence, Bernoulli random vectors, compatibility, matrices, copulas, in-

surance application.

MSC2010: 60E05, 62H99, 62H20, 62E15, 62H86.

1 Introduction

The problem of how to construct a bivariate random vector (X1, X2) with log-normal

marginals X1 ∼ LN(0, 1), X2 ∼ LN(0, 16) and correlation coefficient Cor(X1, X2) = 0.5 is

well known in the history of dependence modeling, partially because of its relevance to risk

management practice. The short answer is: There is no such model; see Embrechts et al. (2002)

who studied these kinds of problems in terms of copulas. Problems of this kind were brought

to RiskLab at ETH Zurich by the insurance industry in the mid 1990s when dependence was

∗ RiskLab and SFI, Department of Mathematics, ETH Zurich, 8092 Zurich, Switzerland. Email: [email protected]

† Department of Statistics and Actuarial Science, University of Waterloo, Canada. Email: [email protected]

‡ Department of Statistics and Actuarial Science, University of Waterloo, Canada. Email: [email protected]


thought of in terms of correlation (matrices). For further background to Quantitative Risk Man-

agement, see McNeil et al. (2015). Now, almost 20 years later, copulas are a well established

tool to quantify dependence in multivariate data and to construct new multivariate distribu-

tions. Their use has become standard within industry and regulation. Nevertheless, dependence

is still summarized in terms of numbers (as opposed to (copula) functions), so-called measures

of association. Although there are various ways to compute such numbers in dimension d > 2,

measures of association are still most widely used in the bivariate case d = 2. A popular measure

of association is tail dependence. It is important for applications in Quantitative Risk Manage-

ment as it measures the strength of dependence in either the lower-left or upper-right tail of the

bivariate distribution, the regions Quantitative Risk Management is mainly concerned with.

We were recently asked¹ the following question, which is in the same spirit as the log-normal correlation problem if one replaces "correlation" by "tail dependence"; see Section 3.1 for a definition.

For which α ∈ [0, 1] is the matrix

\[
\Gamma_d(\alpha) =
\begin{pmatrix}
1 & 0 & \cdots & 0 & \alpha \\
0 & 1 & \cdots & 0 & \alpha \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & \alpha \\
\alpha & \alpha & \cdots & \alpha & 1
\end{pmatrix}
\tag{1.1}
\]

a matrix of pairwise (either lower or upper) tail-dependence coefficients?

Intrigued by this question, we more generally consider the following tail-dependence compatibility

problem in this paper:

When is a given matrix in [0, 1]^{d×d} the matrix of pairwise (either lower or upper) tail-dependence coefficients?

In what follows, we call a matrix of pairwise tail-dependence coefficients a tail-dependence

matrix. The compatibility problems of tail-dependence coefficients were studied in Joe (1997).

In particular, when d = 3, inequalities for the bivariate tail-dependence coefficients have been

established; see Joe (1997, Theorem 3.14) as well as Joe (2014, Theorem 8.20). The sharpness of

these inequalities was obtained in Nikoloulopoulos et al. (2009). Characterizing tail-dependence matrix compatibility for d > 3 has remained an open problem.

Our aim in this paper is to give a full answer to the tail-dependence compatibility problem;

see Section 3. To this end, we introduce and study Bernoulli-compatible matrices in Section 2.

¹ By Federico Degen (Head Risk Modeling and Quantification, Zurich Insurance Group) and Janusz Milek (Zurich Insurance Group).


As a main result, we show that a matrix with diagonal entries being 1 is a compatible tail-

dependence matrix if and only if it is a Bernoulli-compatible matrix multiplied by a constant. In

Section 4 we provide probabilistic models for a large class of tail-dependence matrices, including

commonly used matrices in statistics. Section 5 concludes.

Throughout this paper, d and m are positive integers, and we consider an atomless probability space (Ω, A, P) on which all random variables and random vectors are defined. Vectors are considered as column vectors. For two matrices A, B of the same dimension, B ≥ A and B ≤ A are understood as component-wise inequalities. We let A ∘ B denote the Hadamard product, i.e., the element-wise product of two matrices A and B of the same dimension. The d × d identity matrix is denoted by I_d. For a square matrix A, diag(A) represents a diagonal matrix with diagonal entries equal to those of A, and A⊤ is the transpose of A. We denote by I_E the indicator function of an event (random or deterministic) E ∈ A. Finally, 0 and 1 are vectors with all components being 0 and 1, respectively, as long as the dimension of the vectors is clear from the context.

2 Bernoulli compatibility

In this section we introduce and study the Bernoulli-compatibility problem. The results

obtained in this section are the basis for the tail-dependence compatibility problem treated in

Section 3; many of them are of independent interest, e.g., for the simulation of sequences of

Bernoulli random variables.

2.1 Bernoulli-compatible matrices

Definition 2.1 (Bernoulli vector, V_d). A Bernoulli vector is a random vector X supported by {0, 1}^d for some d ∈ N. The set of all d-Bernoulli vectors is denoted by V_d.

Equivalently, X = (X1, . . . , Xd) is a Bernoulli vector if and only if Xi ∼ B(1, pi) for some

pi ∈ [0, 1], i = 1, . . . , d. Note that here we do not make any assumption about the dependence

structure among the components of X. Bernoulli vectors play an important role in Credit Risk

Analysis; see, e.g., Bluhm and Overbeck (2006) and Bluhm et al. (2002, Section 2.1).

In this section, we investigate the following question which we refer to as the Bernoulli-

compatibility problem:

Question 1. Given a matrix B ∈ [0, 1]^{d×d}, can we find a Bernoulli vector X such that B = E[XX⊤]?

For studying the Bernoulli-compatibility problem, we introduce the notion of Bernoulli-

compatible matrices.


Definition 2.2 (Bernoulli-compatible matrix, B_d). A d × d matrix B is a Bernoulli-compatible matrix if B = E[XX⊤] for some X ∈ V_d. The set of all d × d Bernoulli-compatible matrices is denoted by B_d.
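For small d, Definition 2.2 is directly computable: the law of X on {0, 1}^d determines E[XX⊤] as a finite mixture of outer products pp⊤. A minimal sketch (Python with numpy; the function name and example pmf are ours, not from the paper):

```python
import numpy as np

def bernoulli_outer_expectation(pmf):
    """E[X X^T] for a {0,1}^d-valued X, given its pmf as {tuple p: P(X = p)}."""
    d = len(next(iter(pmf)))
    B = np.zeros((d, d))
    for p, prob in pmf.items():
        v = np.array(p, dtype=float)
        B += prob * np.outer(v, v)   # each support point contributes prob * p p^T
    return B

# Example: d = 2, comonotone Bernoulli pair with P(X = (1,1)) = 0.3
pmf = {(1, 1): 0.3, (0, 0): 0.7}
B = bernoulli_outer_expectation(pmf)
# B has all four entries equal to 0.3
```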

Concerning covariance matrices, there is extensive research on the compatibility of covari-

ance matrices of Bernoulli vectors in the realm of statistical simulation and time series analysis;

see, e.g., Chaganty and Joe (2006). It is known that, for d ≥ 3, the set of all compatible d-Bernoulli correlation matrices is strictly contained in the set of all correlation matrices. Note that E[XX⊤] = Cov(X) + E[X]E[X]⊤. Hence, Question 1 is closely related to the characterization of compatible Bernoulli covariance matrices.

Before we characterize the set Bd in Section 2.2 and thus address Question 1, we first collect

some facts about elements of Bd.

Proposition 2.1. Let B, B_1, B_2 ∈ B_d. Then

i) B ∈ [0, 1]^{d×d}.

ii) max{b_ii + b_jj − 1, 0} ≤ b_ij ≤ min{b_ii, b_jj} for i, j = 1, . . . , d, where B = (b_ij)_{d×d}.

iii) tB_1 + (1 − t)B_2 ∈ B_d for t ∈ [0, 1], i.e., B_d is a convex set.

iv) B_1 ∘ B_2 ∈ B_d, i.e., B_d is closed under the Hadamard product.

v) (0)_{d×d} ∈ B_d and (1)_{d×d} ∈ B_d.

vi) For any p = (p_1, . . . , p_d) ∈ [0, 1]^d, the matrix B = (b_ij)_{d×d} with b_ij = p_i p_j for i ≠ j and b_ii = p_i, i, j = 1, . . . , d, is in B_d.

Proof. Write B_1 = E[XX⊤] and B_2 = E[Y Y⊤] for X, Y ∈ V_d with X and Y independent.

i) Clear.

ii) This directly follows from the Fréchet–Hoeffding bounds; see McNeil et al. (2015, Remark 7.9).

iii) Let A ∼ B(1, t) be a Bernoulli random variable independent of X, Y, and let Z = AX + (1 − A)Y. Then Z ∈ V_d, and E[ZZ⊤] = tE[XX⊤] + (1 − t)E[Y Y⊤] = tB_1 + (1 − t)B_2. Hence tB_1 + (1 − t)B_2 ∈ B_d.

iv) For p = (p_1, . . . , p_d), q = (q_1, . . . , q_d) ∈ R^d,

\[
(p \circ q)(p \circ q)^\top = (p_i q_i p_j q_j)_{d\times d} = (p_i p_j)_{d\times d} \circ (q_i q_j)_{d\times d} = (pp^\top) \circ (qq^\top).
\]


Let Z = X ∘ Y. It follows that Z ∈ V_d and E[ZZ⊤] = E[(X ∘ Y)(X ∘ Y)⊤] = E[(XX⊤) ∘ (Y Y⊤)] = E[XX⊤] ∘ E[Y Y⊤] = B_1 ∘ B_2. Hence B_1 ∘ B_2 ∈ B_d.

v) Consider X = 0 ∈ V_d. Then (0)_{d×d} = E[XX⊤] ∈ B_d. Similarly for (1)_{d×d}.

vi) Consider X ∈ V_d with independent components and E[X] = p.

2.2 Characterization of Bernoulli-compatible matrices

We are now able to give a characterization of the set Bd of Bernoulli-compatible matrices

and thus address Question 1.

Theorem 2.2 (Characterization of B_d). B_d has the following characterization:

\[
\mathcal{B}_d = \left\{ \sum_{i=1}^n a_i p_i p_i^\top : p_i \in \{0,1\}^d,\ a_i \ge 0,\ i = 1, \dots, n,\ \sum_{i=1}^n a_i = 1,\ n \in \mathbb{N} \right\}; \tag{2.1}
\]

i.e., B_d is the convex hull of {pp⊤ : p ∈ {0,1}^d}. In particular, B_d is closed under convergence in the Euclidean norm.

Proof. Denote the right-hand side of (2.1) by M. For B ∈ B_d, write B = E[XX⊤] for some X ∈ V_d. It follows that

\[
B = \sum_{p \in \{0,1\}^d} pp^\top\, \mathbb{P}(X = p) \in \mathcal{M},
\]

hence B_d ⊆ M. Conversely, let X = p ∈ {0,1}^d. Then X ∈ V_d and E[XX⊤] = pp⊤ ∈ B_d. By Proposition 2.1, B_d is a convex set which contains {pp⊤ : p ∈ {0,1}^d}, hence M ⊆ B_d. In summary, M = B_d. From (2.1) we see that B_d is closed under convergence in the Euclidean norm.

A matrix B is completely positive if B = AA⊤ for some (not necessarily square) matrix A ≥ 0. Denote by C_d the set of completely positive d × d matrices. It is known that C_d is the convex cone with extreme directions {pp⊤ : p ∈ [0, 1]^d}; see, e.g., Rüschendorf (1981) and Berman and Shaked-Monderer (2003). We thus obtain the following result.

Corollary 2.3. Any Bernoulli-compatible matrix is completely positive.

Remark 2.1. One may wonder whether B = E[XX⊤] is sufficient to determine the distribution of X, i.e., whether the decomposition

\[
B = \sum_{i=1}^{2^d} a_i p_i p_i^\top \tag{2.2}
\]


is unique for distinct vectors p_i in {0,1}^d. While the decomposition is trivially unique for d = 2, this is in general false for d ≥ 3, since there are 2^d − 1 free parameters in (2.2) but only d(d + 1)/2 parameters in B. The following is an example for d = 3. Let

\[
B = \frac{1}{4}
\begin{pmatrix}
2 & 1 & 1 \\
1 & 2 & 1 \\
1 & 1 & 2
\end{pmatrix}
= \frac{1}{4}\bigl((1,1,1)^\top(1,1,1) + (1,0,0)^\top(1,0,0) + (0,1,0)^\top(0,1,0) + (0,0,1)^\top(0,0,1)\bigr)
\]
\[
\phantom{B}= \frac{1}{4}\bigl((1,1,0)^\top(1,1,0) + (1,0,1)^\top(1,0,1) + (0,1,1)^\top(0,1,1) + (0,0,0)^\top(0,0,0)\bigr).
\]

Thus, by combining the above two decompositions, B ∈ B_3 has infinitely many different decompositions of the form (2.2). Note that, as in the case of completely positive matrices, it is generally difficult to find decompositions of the form (2.2) for a given matrix B.
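The non-uniqueness in Remark 2.1 is easy to verify numerically; a quick check with numpy (the helper name is ours) confirms that both displayed decompositions reproduce the same B:

```python
import numpy as np

# Target matrix B from Remark 2.1
B = np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]]) / 4

def mix(vectors):
    """Equal-weight mixture (1/4) * sum of outer products p p^T."""
    return sum(np.outer(p, p) for p in map(np.array, vectors)) / 4

B1 = mix([(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
B2 = mix([(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 0, 0)])
# Two different Bernoulli laws, identical second-moment matrix E[X X^T] = B
```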

2.3 Convex cone generated by Bernoulli-compatible matrices

In this section we study the convex cone generated by B_d, denoted by B*_d:

\[
\mathcal{B}_d^* = \{aB : a \ge 0,\ B \in \mathcal{B}_d\}. \tag{2.3}
\]

The following proposition is implied by Proposition 2.1 and Theorem 2.2.

Proposition 2.4. B*_d is the convex cone with extreme directions {pp⊤ : p ∈ {0,1}^d}. Moreover, B*_d is a commutative semiring equipped with addition (B*_d, +) and multiplication (B*_d, ∘).

It is obvious that B*_d ⊆ C_d. One may wonder whether B*_d is identical to C_d, the set of completely positive matrices. As the following example shows, this is false in general for d ≥ 2.

Example 2.1. Note that any B ∈ B*_d also satisfies the inequality b_ij ≤ min{b_ii, b_jj} of Proposition 2.1, Part ii) (the inequality is invariant under scaling). Now consider p = (p_1, . . . , p_d) ∈ (0, 1)^d with p_i > p_j for some i ≠ j. Clearly, pp⊤ ∈ C_d, but p_i p_j > p_j² = min{p_i², p_j²} contradicts Proposition 2.1, Part ii); hence pp⊤ ∉ B*_d.

For the following result, we need the notion of diagonally dominant matrices. A matrix A ∈ R^{d×d} is called diagonally dominant if, for all i = 1, . . . , d, Σ_{j≠i} |a_ij| ≤ |a_ii|.

Proposition 2.5. Let D_d be the set of non-negative, diagonally dominant d × d matrices. Then D_d ⊆ B*_d.


Proof. For i, j = 1, . . . , d, let p^{(ij)} = (p^{(ij)}_1, . . . , p^{(ij)}_d) where p^{(ij)}_k = I_{{k=i}∪{k=j}}. It is straightforward to verify that the (i, i)-, (i, j)-, (j, i)- and (j, j)-entries of the matrix M^{(ij)} = p^{(ij)}(p^{(ij)})⊤ are 1, and the other entries are 0. For D = (d_ij)_{d×d} ∈ D_d, let

\[
D^* = (d^*_{ij})_{d\times d} = \sum_{i=1}^d \sum_{j=1,\, j\neq i}^d d_{ij} M^{(ij)}.
\]

By Proposition 2.4, D* ∈ B*_d. It follows that d*_ij = d_ij for i ≠ j and d*_ii = Σ_{j=1, j≠i}^d d_ij ≤ d_ii. Therefore, D = D* + Σ_{i=1}^d (d_ii − d*_ii) M^{(ii)}, which, by Proposition 2.4, is in B*_d.
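The proof of Proposition 2.5 is constructive and can be sketched in code. A hypothetical implementation (numpy; function name ours), assuming a symmetric D and summing over unordered pairs so each off-diagonal entry is matched exactly:

```python
import numpy as np

def dd_decomposition(D):
    """Decompose a symmetric, non-negative, diagonally dominant matrix D into a
    non-negative combination of matrices p p^T with p in {0,1}^d, following the
    idea of the proof of Proposition 2.5. Returns a list of (weight, p) pairs."""
    d = D.shape[0]
    terms = []
    # off-diagonal part: one pair vector p^{(ij)} per unordered pair {i, j}
    for i in range(d):
        for j in range(i + 1, d):
            if D[i, j] > 0:
                p = np.zeros(d); p[i] = p[j] = 1.0
                terms.append((D[i, j], p))
    # remaining diagonal mass: d_ii - sum_{j != i} d_ij >= 0 by diagonal dominance
    row_off = D.sum(axis=1) - np.diag(D)
    for i in range(d):
        if D[i, i] - row_off[i] > 0:
            p = np.zeros(d); p[i] = 1.0
            terms.append((D[i, i] - row_off[i], p))
    return terms

D = np.array([[3.0, 1.0, 1.5],
              [1.0, 2.0, 0.5],
              [1.5, 0.5, 2.5]])
terms = dd_decomposition(D)
recon = sum(w * np.outer(p, p) for w, p in terms)
# recon equals D, and every weight is non-negative, so D is in B*_d
```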

For studying the tail-dependence compatibility problem in Section 3, the subset

B^I_d = {B ∈ B*_d : diag(B) = I_d}

of B*_d is of interest. It is straightforward to see from Proposition 2.1 and Theorem 2.2 that B^I_d is a convex set, closed under the Hadamard product and under convergence in the Euclidean norm. These properties of B^I_d will be used later.

3 Tail-dependence compatibility

3.1 Tail-dependence matrices

The notion of tail dependence captures (extreme) dependence in the lower-left or upper-

right tails of a bivariate distribution. In what follows, we focus on lower-left tails; the problem

for upper-right tails follows by a reflection around (1/2, 1/2), i.e., studying the survival copula

of the underlying copula.

Definition 3.1 (Tail-dependence coefficient). The (lower) tail-dependence coefficient of two continuous random variables X_1 ∼ F_1 and X_2 ∼ F_2 is defined by

\[
\lambda = \lim_{u \downarrow 0} \frac{\mathbb{P}(F_1(X_1) \le u,\ F_2(X_2) \le u)}{u}, \tag{3.1}
\]

provided the limit exists.

If we denote the copula of (X_1, X_2) by C, then

\[
\lambda = \lim_{u \downarrow 0} \frac{C(u, u)}{u}.
\]

Clearly λ ∈ [0, 1], and λ only depends on the copula of (X_1, X_2), not on the marginal distributions. For virtually all copula models used in practice, the limit in (3.1) exists; for the construction of an example where λ does not exist, see Kortschak and Albrecher (2009).
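As a concrete instance of (3.1): the bivariate Clayton copula C(u_1, u_2) = (u_1^{−θ} + u_2^{−θ} − 1)^{−1/θ}, θ > 0, has C(u, u)/u = (2 − u^θ)^{−1/θ}, so λ = 2^{−1/θ}. A quick numerical check of the limit (plain Python; the helper name is ours):

```python
# Diagonal ratio C(u, u)/u of a bivariate Clayton copula:
# C(u, u) = (2 u^{-theta} - 1)^{-1/theta}  =>  C(u, u)/u = (2 - u^theta)^{-1/theta},
# whose limit as u -> 0 is lambda = 2^{-1/theta}.
def clayton_diag_ratio(u, theta):
    return (2.0 - u**theta) ** (-1.0 / theta)

theta = 4.0
lam = 2.0 ** (-1.0 / theta)               # closed-form tail-dependence coefficient
ratio = clayton_diag_ratio(1e-4, theta)   # already extremely close to lam
```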


Definition 3.2 (Tail-dependence matrix, Td). Let X = (X1, . . . , Xd) be a random vector with

continuous marginal distributions. The tail-dependence matrix of X is Λ = (λij)d×d, where λij

is the tail-dependence coefficient of Xi and Xj , i, j = 1, . . . , d. We denote by Td the set of all

tail-dependence matrices.

The following proposition summarizes basic properties of tail-dependence matrices. Its

proof is very similar to that of Proposition 2.1 and is omitted here.

Proposition 3.1. For any Λ_1, Λ_2 ∈ T_d, we have that

i) Λ_1 = Λ_1⊤.

ii) tΛ_1 + (1 − t)Λ_2 ∈ T_d for t ∈ [0, 1], i.e., T_d is a convex set.

iii) I_d ≤ Λ_1 ≤ (1)_{d×d}, with I_d ∈ T_d and (1)_{d×d} ∈ T_d.

As we will show next, Td is also closed under the Hadamard product.

Proposition 3.2. Let k ∈ N and Λ_1, . . . , Λ_k ∈ T_d. Then Λ_1 ∘ · · · ∘ Λ_k ∈ T_d.

Proof. Note that it would be sufficient to show the result for k = 2, but we provide a general construction for any k. For each l = 1, . . . , k, let C_l be a d-dimensional copula with tail-dependence matrix Λ_l. Furthermore, let g(u) = u^{1/k}, u ∈ [0, 1]. It follows from Liebscher (2008) that C(u_1, . . . , u_d) = Π_{l=1}^k C_l(g(u_1), . . . , g(u_d)) is a copula; note that

\[
\Bigl(g^{-1}\bigl(\max_{1\le l\le k} U_{l1}\bigr), \dots, g^{-1}\bigl(\max_{1\le l\le k} U_{ld}\bigr)\Bigr) \sim C \tag{3.2}
\]

for independent random vectors (U_{l1}, . . . , U_{ld}) ∼ C_l, l = 1, . . . , k. The (i, j)-entry λ_ij of the tail-dependence matrix Λ corresponding to C is thus given by

\[
\lambda_{ij} = \lim_{u \downarrow 0} \frac{\prod_{l=1}^k C_{l,ij}(g(u), g(u))}{u}
= \lim_{u \downarrow 0} \prod_{l=1}^k \frac{C_{l,ij}(g(u), g(u))}{g(u)}
= \prod_{l=1}^k \lim_{u \downarrow 0} \frac{C_{l,ij}(u, u)}{u}
= \prod_{l=1}^k \lambda_{l,ij},
\]

where C_{l,ij} denotes the (i, j)-margin of C_l and λ_{l,ij} the (i, j)th entry of Λ_l, l = 1, . . . , k; the second equality uses u = g(u)^k.
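The construction (3.2) is straightforward to simulate. The sketch below (numpy; sampler, parameters, and seed are our choices, not from the paper) combines two bivariate Clayton copulas with k = 2, so g(u) = u^{1/2} and g^{−1}(v) = v², and checks empirically that the tail-dependence coefficient is close to the product λ_1 λ_2 = 2^{−1/2} · 2^{−1/4} = 2^{−3/4}:

```python
import numpy as np

rng = np.random.default_rng(1)

def rclayton(n, theta, d=2):
    """Sample a d-dimensional Clayton copula via the Marshall-Olkin frailty
    algorithm: U_i = (1 + E_i/V)^{-1/theta}, V ~ Gamma(1/theta), E_i ~ Exp(1)."""
    V = rng.gamma(1.0 / theta, 1.0, size=(n, 1))
    E = rng.exponential(size=(n, d))
    return (1.0 + E / V) ** (-1.0 / theta)

n, u = 10**6, 0.005
U1 = rclayton(n, 2.0)                 # lambda_1 = 2^{-1/2}
U2 = rclayton(n, 4.0)                 # lambda_2 = 2^{-1/4}
# construction (3.2) with k = 2: componentwise g^{-1}(max(U1, U2)) = max(U1, U2)^2
Y = np.maximum(U1, U2) ** 2
lam_hat = np.mean((Y[:, 0] <= u) & (Y[:, 1] <= u)) / u
# lam_hat should be close to lambda_1 * lambda_2 = 2^{-3/4} ≈ 0.5946
```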

3.2 Characterization of tail-dependence matrices

In this section, we investigate the following question:

Question 2. Given a matrix Λ ∈ [0, 1]^{d×d}, is it a tail-dependence matrix?


The following theorem fully characterizes tail-dependence matrices and thus provides a

theoretical (but not necessarily practical) answer to Question 2.

Theorem 3.3 (Characterization of T_d). A square matrix with diagonal entries being 1 is a tail-dependence matrix if and only if it is a Bernoulli-compatible matrix multiplied by a constant. Equivalently, T_d = B^I_d.

Proof. We first show that T_d ⊆ B^I_d. For Λ = (λ_ij)_{d×d} ∈ T_d, let C be a copula with tail-dependence matrix Λ and U = (U_1, . . . , U_d) ∼ C. Let W_u = (I_{{U_1 ≤ u}}, . . . , I_{{U_d ≤ u}}). By definition,

\[
\lambda_{ij} = \lim_{u \downarrow 0} \frac{1}{u}\, \mathbb{E}\bigl[I_{\{U_i \le u\}} I_{\{U_j \le u\}}\bigr]
\quad\text{and}\quad
\Lambda = \lim_{u \downarrow 0} \frac{1}{u}\, \mathbb{E}\bigl[W_u W_u^\top\bigr].
\]

Since B^I_d is closed and E[W_u W_u⊤]/u ∈ B^I_d, we have Λ ∈ B^I_d.

Now consider B^I_d ⊆ T_d. By definition of B^I_d, each B ∈ B^I_d can be written as B = E[XX⊤]/p for some X ∈ V_d with E[X] = (p, . . . , p) ∈ (0, 1]^d. Let U, V ∼ U[0, 1] such that U, V, X are independent, and let

\[
Y = XpU + (1 - X)\bigl(p + (1 - p)V\bigr). \tag{3.3}
\]

We can verify that for t ∈ [0, 1] and i = 1, . . . , d,

\[
\mathbb{P}(Y_i \le t) = \mathbb{P}(X_i = 1)\,\mathbb{P}(pU \le t) + \mathbb{P}(X_i = 0)\,\mathbb{P}\bigl(p + (1 - p)V \le t\bigr)
= p \min\{t/p, 1\} + (1 - p)\max\bigl\{(t - p)/(1 - p), 0\bigr\} = t,
\]

i.e., Y_1, . . . , Y_d are U[0, 1]-distributed. Let λ_ij be the tail-dependence coefficient of Y_i and Y_j, i, j = 1, . . . , d. For i, j = 1, . . . , d we obtain that

\[
\lambda_{ij} = \lim_{u \downarrow 0} \frac{1}{u}\, \mathbb{P}(Y_i \le u,\ Y_j \le u)
= \lim_{u \downarrow 0} \frac{1}{u}\, \mathbb{P}(X_i = 1,\ X_j = 1)\,\mathbb{P}(pU \le u)
= \frac{1}{p}\, \mathbb{E}[X_i X_j].
\]

As a consequence, the tail-dependence matrix of (Y_1, . . . , Y_d) is B, and B ∈ T_d.

It follows from Theorem 3.3 and Proposition 2.4 that T_d is the "1-diagonals" cross-section of the convex cone with extreme directions {pp⊤ : p ∈ {0,1}^d}. Furthermore, the proof of Theorem 3.3 is constructive: for any B ∈ B^I_d, the random vector Y defined by (3.3) has tail-dependence matrix B. This construction will be applied in Section 4, where we show that commonly applied matrices in statistics are tail-dependence matrices and where we derive the copula of Y.
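The construction (3.3) can be simulated directly. A sketch (numpy; function name, example pmf, and seed are ours) that samples Y for a 2-dimensional Bernoulli vector and checks both the uniform margins and the tail-dependence coefficient E[X_1 X_2]/p:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_Y(n, pmf, p):
    """Sample Y = X p U + (1 - X)(p + (1 - p) V) as in (3.3), where X has the
    given pmf on {0,1}^d with E[X_i] = p for every i, and U, V ~ U[0,1]."""
    support = np.array(list(pmf))                    # rows in {0,1}^d
    probs = np.array(list(pmf.values()))
    X = support[rng.choice(len(support), size=n, p=probs)]
    U = rng.uniform(size=(n, 1))                     # one shared U per sample
    V = rng.uniform(size=(n, 1))                     # one shared V per sample
    return X * p * U + (1 - X) * (p + (1 - p) * V)

# d = 2: E[X_1] = E[X_2] = p = 0.3 and E[X_1 X_2]/p = 0.2/0.3 = 2/3
pmf = {(1, 1): 0.2, (1, 0): 0.1, (0, 1): 0.1, (0, 0): 0.6}
Y = sample_Y(10**6, pmf, 0.3)
u = 0.01
lam_hat = np.mean((Y[:, 0] <= u) & (Y[:, 1] <= u)) / u
# the margins of Y are U[0,1], and lam_hat should be close to 2/3
```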


Remark 3.1. From the fact that Td = BId and BId is closed under the Hadamard product (see

Proposition 2.1, Part iv)), Proposition 3.2 directly follows. Note, however, that our proof of

Proposition 3.2 is constructive. Given tail-dependence matrices and corresponding copulas, we

can construct a copula C which has the Hadamard product of the tail-dependence matrices as

corresponding tail-dependence matrix. If sampling of all involved copulas is feasible, we can

sample C; see Figure 1 for examples².

[Figure 1: two scatter plots of (U1, U2); see caption below.]

Figure 1: Left-hand side: Scatter plot of 2000 samples from (3.2) for C_1 being a Clayton copula with parameter θ = 4 (λ_1 = 2^{−1/4} ≈ 0.8409) and C_2 being a t_3 copula with parameter ρ = 0.8 (tail-dependence coefficient λ_2 = 2 t_4(−2/3) ≈ 0.5415). By Proposition 3.2, the tail-dependence coefficient of (3.2) is thus λ = λ_1 λ_2 = 2^{3/4} t_4(−2/3) ≈ 0.4553. Right-hand side: C_1 as before, but C_2 is a survival Marshall–Olkin copula with parameters α_1 = 2^{−3/4}, α_2 = 0.8, so that λ = λ_1 λ_2 = 1/2.

Theorem 3.3 combined with Corollary 2.3 directly leads to the following result.

Corollary 3.4. Every tail-dependence matrix is completely positive, and hence positive semi-

definite.

Furthermore, Theorem 3.3 and Proposition 2.5 imply the following result.

Corollary 3.5. Every diagonally dominant matrix with non-negative entries and diagonal en-

tries being 1 is a tail-dependence matrix.

Note that this result already yields the if-part of Proposition 4.7 below.

² All plots can be reproduced via the R package copula (version ≥ 0.999-13) by calling demo(tail_compatibility).


4 Compatible models for tail-dependence matrices

4.1 Widely known matrices

We now consider the following three types of matrices Λ = (λ_ij)_{d×d}, which are frequently applied in multivariate statistics and time series analysis, and show that they are tail-dependence matrices.

a) Equicorrelation matrix with parameter α ∈ [0, 1]: λ_ij = I_{{i=j}} + α I_{{i≠j}}, i, j = 1, . . . , d.

b) AR(1) matrix with parameter α ∈ [0, 1]: λ_ij = α^{|i−j|}, i, j = 1, . . . , d.

c) MA(1) matrix with parameter α ∈ [0, 1/2]: λ_ij = I_{{i=j}} + α I_{{|i−j|=1}}, i, j = 1, . . . , d.

Chaganty and Joe (2006) considered the compatibility of correlation matrices of Bernoulli vectors

for the above three types of matrices and obtained necessary and sufficient conditions for the

existence of compatible models for d = 3. For the tail-dependence compatibility problem that

we consider in this paper, the above three types of matrices are all compatible, and we are able

to construct corresponding models for each case.

Proposition 4.1. Let Λ be the tail-dependence matrix of the d-dimensional random vector

\[
Y = XpU + (1 - X)\bigl(p + (1 - p)V\bigr), \tag{4.1}
\]

where U, V ∼ U[0, 1], X ∈ V_d and U, V, X are independent.

i) For α ∈ [0, 1], if X has independent components and E[X_1] = · · · = E[X_d] = α, then Λ is an equicorrelation matrix with parameter α; i.e., a) is a tail-dependence matrix.

ii) For α ∈ [0, 1], if X_i = Π_{j=i}^{i+d−1} Z_j, i = 1, . . . , d, for independent B(1, α) random variables Z_1, . . . , Z_{2d−1}, then Λ is an AR(1) matrix with parameter α; i.e., b) is a tail-dependence matrix.

iii) For α ∈ [0, 1/2], if X_i = I_{{Z ∈ [(i−1)(1−α), (i−1)(1−α)+1]}}, i = 1, . . . , d, for Z ∼ U[0, d], then Λ is an MA(1) matrix with parameter α; i.e., c) is a tail-dependence matrix.

Proof. We have seen in the proof of Theorem 3.3 that if E[X_1] = · · · = E[X_d] = p, then Y defined through (4.1) has tail-dependence matrix E[XX⊤]/p. Write Λ = (λ_ij)_{d×d} and note that λ_ii = 1, i = 1, . . . , d, is always guaranteed.

i) For i ≠ j, we have that E[X_i X_j] = α² and thus λ_ij = α²/α = α. This shows that Λ is an equicorrelation matrix with parameter α.


ii) For i < j, using Z_k² = Z_k for the overlapping indices, we have that

\[
\mathbb{E}[X_i X_j]
= \mathbb{E}\Bigl[\prod_{k=i}^{i+d-1} Z_k \prod_{l=j}^{j+d-1} Z_l\Bigr]
= \mathbb{E}\Bigl[\prod_{k=i}^{j-1} Z_k\Bigr]\,
  \mathbb{E}\Bigl[\prod_{k=j}^{i+d-1} Z_k\Bigr]\,
  \mathbb{E}\Bigl[\prod_{k=i+d}^{j+d-1} Z_k\Bigr]
= \alpha^{j-i}\,\alpha^{i+d-j}\,\alpha^{j-i} = \alpha^{j-i+d}
\]

and E[X_i] = E[X_i²] = α^d. Hence, λ_ij = α^{j−i+d}/α^d = α^{j−i} for i < j. By symmetry, λ_ij = α^{|i−j|} for i ≠ j. Thus, Λ is an AR(1) matrix with parameter α.

iii) For i < j, note that 2(1 − α) ≥ 1, so

\[
\mathbb{E}[X_i X_j] = \mathbb{P}\bigl(Z \in [(j-1)(1-\alpha),\ (i-1)(1-\alpha)+1]\bigr)
= I_{\{j=i+1\}}\,\mathbb{P}\bigl(Z \in [i(1-\alpha),\ (i-1)(1-\alpha)+1]\bigr)
= I_{\{j=i+1\}}\,\frac{\alpha}{d}
\]

and E[X_i] = E[X_i²] = 1/d. Hence, λ_ij = α I_{{j−i=1}} for i < j. By symmetry, λ_ij = α I_{{|i−j|=1}} for i ≠ j. Thus, Λ is an MA(1) matrix with parameter α.
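Part ii) can be verified exactly by enumerating all 2^{2d−1} outcomes of (Z_1, . . . , Z_{2d−1}); a sketch (numpy/itertools; function name ours):

```python
import itertools
import numpy as np

def ar1_tail_matrix(alpha, d):
    """Exact tail-dependence matrix E[X X^T]/p for the AR(1) construction in
    Proposition 4.1 ii): X_i = prod_{j=i}^{i+d-1} Z_j for i.i.d. Z ~ B(1, alpha),
    computed by enumerating all 2^(2d-1) outcomes of (Z_1, ..., Z_{2d-1})."""
    m = 2 * d - 1
    B = np.zeros((d, d))
    for z in itertools.product([0, 1], repeat=m):
        prob = np.prod([alpha if zi else 1 - alpha for zi in z])
        X = np.array([np.prod(z[i:i + d]) for i in range(d)], dtype=float)
        B += prob * np.outer(X, X)
    return B / alpha**d          # here p = E[X_i] = alpha^d

alpha, d = 0.5, 3
Lam = ar1_tail_matrix(alpha, d)
target = np.array([[alpha ** abs(i - j) for j in range(d)] for i in range(d)])
# Lam equals the AR(1) matrix (alpha^{|i-j|}) exactly
```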

4.2 Advanced tail-dependence models

Theorem 3.3 gives a characterization of tail-dependence matrices using Bernoulli-compatible

matrices, and (3.3) provides a compatible model Y for any tail-dependence matrix Λ (= E[XX⊤]/p).

It is generally not easy to check whether a given matrix is a Bernoulli-compatible matrix or

a tail-dependence matrix; see also Remark 2.1. Therefore, we now study the following question.

Question 3. How can we construct a broader class of models with flexible dependence structures

and desired tail-dependence matrices?

To enrich our models, we bring random matrices with Bernoulli entries into play. For d, m ∈ N, let

\[
\mathcal{V}_{d\times m} = \Bigl\{X = (X_{ij})_{d\times m} : P\bigl(X \in \{0,1\}^{d\times m}\bigr) = 1,\ \sum_{j=1}^{m} X_{ij} \le 1,\ i = 1,\dots,d\Bigr\},
\]

i.e., Vd×m is the set of d × m random matrices supported in {0, 1}^{d×m} whose entries are mutually exclusive within each row; see Dhaene and Denuit (1999). Furthermore, we introduce a transformation L on the set of square matrices such that, for any i, j = 1, . . . , d, the (i, j)th element of L(B) for B = (bij)d×d is given by

\[
\bigl(L(B)\bigr)_{ij} =
\begin{cases}
b_{ij}, & \text{if } i \ne j,\\
1, & \text{if } i = j;
\end{cases}
\tag{4.2}
\]

i.e., L adjusts the diagonal entries of a matrix to be 1 and preserves all the other entries. For a set S of square matrices, we set L(S) = {L(B) : B ∈ S}. We can now address Question 3.
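In code, the transformation L of (4.2) is just a diagonal overwrite; a minimal sketch (not from the paper):

```python
import numpy as np

def L(B):
    """Transformation (4.2): overwrite the diagonal of a square matrix with 1."""
    B = np.array(B, dtype=float)   # np.array copies the input
    np.fill_diagonal(B, 1.0)
    return B
```

For instance, L applied to [[0.2, 0.3], [0.3, 0.5]] returns [[1.0, 0.3], [0.3, 1.0]].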


Theorem 4.2 (A class of flexible models). Let U ∼ CU for an m-dimensional copula CU with tail-dependence matrix Λ and let V ∼ CV for a d-dimensional copula CV with tail-dependence matrix Id. Furthermore, let X ∈ Vd×m be such that X, U, V are independent and let

\[
Y = XU + Z \circ V, \tag{4.3}
\]

where Z = (Z1, . . . , Zd) with Zi = 1 − ∑_{k=1}^m X_{ik}, i = 1, . . . , d, and ∘ denotes the componentwise product. Then Y has tail-dependence matrix Γ = L(E[XΛX⊤]).

Proof. Write X = (Xij)d×m, U = (U1, . . . , Um), V = (V1, . . . , Vd), Λ = (λij)m×m and Y = (Y1, . . . , Yd). Then, for all i = 1, . . . , d,

\[
Y_i = \sum_{k=1}^{m} X_{ik}U_k + Z_iV_i =
\begin{cases}
V_i, & \text{if } X_{ik} = 0 \text{ for all } k = 1,\dots,m \text{ (so } Z_i = 1),\\
U_k, & \text{if } X_{ik} = 1 \text{ for some } k = 1,\dots,m \text{ (so } Z_i = 0).
\end{cases}
\]

Clearly, Y has U[0, 1] margins. We now calculate the tail-dependence matrix Γ = (γij)d×d of Y for i ≠ j. By our independence assumptions, we can derive the following results:

i) P(Yi ≤ u, Yj ≤ u, Zi = 1, Zj = 1) = P(Vi ≤ u, Vj ≤ u, Zi = 1, Zj = 1) = C^V_{ij}(u, u) P(Zi = 1, Zj = 1) ≤ C^V_{ij}(u, u), where C^V_{ij} denotes the (i, j)th margin of CV. As V has tail-dependence matrix Id, we obtain that

\[
\lim_{u \downarrow 0} \frac{1}{u}\, P(Y_i \le u,\, Y_j \le u,\, Z_i = 1,\, Z_j = 1) = 0.
\]

ii) P(Yi ≤ u, Yj ≤ u, Zi = 0, Zj = 1) = ∑_{k=1}^m P(Uk ≤ u, Vj ≤ u, Xik = 1, Zj = 1) = ∑_{k=1}^m P(Uk ≤ u) P(Vj ≤ u) P(Xik = 1, Zj = 1) ≤ u² and thus

\[
\lim_{u \downarrow 0} \frac{1}{u}\, P(Y_i \le u,\, Y_j \le u,\, Z_i = 0,\, Z_j = 1) = 0.
\]

Similarly, we obtain that

\[
\lim_{u \downarrow 0} \frac{1}{u}\, P(Y_i \le u,\, Y_j \le u,\, Z_i = 1,\, Z_j = 0) = 0.
\]

iii) P(Yi ≤ u, Yj ≤ u, Zi = 0, Zj = 0) = ∑_{k=1}^m ∑_{l=1}^m P(Uk ≤ u, Ul ≤ u, Xik = 1, Xjl = 1) = ∑_{k=1}^m ∑_{l=1}^m C^U_{kl}(u, u) P(Xik = 1, Xjl = 1) = ∑_{k=1}^m ∑_{l=1}^m C^U_{kl}(u, u) E[Xik Xjl], so that

\[
\lim_{u \downarrow 0} \frac{1}{u}\, P(Y_i \le u,\, Y_j \le u,\, Z_i = 0,\, Z_j = 0)
= \sum_{k=1}^{m}\sum_{l=1}^{m} \lambda_{kl}\, E[X_{ik}X_{jl}]
= E\Bigl[\sum_{k=1}^{m}\sum_{l=1}^{m} X_{ik}\lambda_{kl}X_{jl}\Bigr]
= \bigl(E[X\Lambda X^\top]\bigr)_{ij}.
\]

By the Law of Total Probability, we thus obtain that

\[
\gamma_{ij} = \lim_{u \downarrow 0} \frac{P(Y_i \le u,\, Y_j \le u)}{u}
= \lim_{u \downarrow 0} \frac{P(Y_i \le u,\, Y_j \le u,\, Z_i = 0,\, Z_j = 0)}{u}
= \bigl(E[X\Lambda X^\top]\bigr)_{ij}.
\]


This shows that E[XΛX⊤] and Γ agree on the off-diagonal entries. Since Γ ∈ Td implies that diag(Γ) = Id, we conclude that L(E[XΛX⊤]) = Γ.

A special case of Theorem 4.2 reveals an essential difference between the transition rules of a tail-dependence matrix and a covariance matrix. Suppose that for X ∈ Vd×m, E[X] is a stochastic matrix (each row sums to 1), and U ∼ CU for an m-dimensional copula CU with tail-dependence matrix Λ = (λij)m×m. Now we have that Zi = 0, i = 1, . . . , d, in (4.3). By Theorem 4.2, the tail-dependence matrix of Y = XU is given by L(E[XΛX⊤]). One can check the diagonal entries of the matrix Λ* = (λ*ij)d×d = XΛX⊤ via

\[
\lambda^{*}_{ii} = \sum_{j=1}^{m}\sum_{k=1}^{m} X_{ik}\lambda_{kj}X_{ij} = \sum_{k=1}^{m} X_{ik}\lambda_{kk} = 1, \quad i = 1, \dots, d.
\]

Hence, the tail-dependence matrix of Y is indeed E[XΛX⊤].

Remark 4.1. In summary:

i) If an m-vector U has covariance matrix Σ, then XU has covariance matrix E[XΣX⊤] for any d × m random matrix X independent of U.

ii) If an m-vector U has U[0, 1] margins and tail-dependence matrix Λ, then XU has tail-dependence matrix E[XΛX⊤] for any X ∈ Vd×m independent of U such that each row of X sums to 1.

Note that the transition property of tail-dependence matrices is more restrictive than that of covariance matrices.
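As an illustration of ii) (a numerical sketch, not from the paper; the 3 × 3 matrix Λ below is a made-up tail-dependence matrix), one can draw X ∈ Vd×m with each row containing exactly one 1 and estimate E[XΛX⊤] by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, n = 3, 2, 200_000
Lam = np.array([[1.0, 0.4, 0.2],
                [0.4, 1.0, 0.5],
                [0.2, 0.5, 1.0]])   # hypothetical tail-dependence matrix of U
# Row i of X has its single 1 in column K[:, i]; rows chosen independently
K = rng.integers(0, m, size=(n, d))
# (E[X Lam X^T])_{ij} = E[lambda_{K_i K_j}]
G = np.array([[Lam[K[:, i], K[:, j]].mean() for j in range(d)]
              for i in range(d)])
```

Since diag(Λ) = 1 and each row of X selects exactly one column, the diagonal of the estimate equals 1 exactly, without applying L.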

The following two propositions consider selected special cases of this construction which are

more straightforward to apply.

Proposition 4.3. For any B ∈ Bd and any Λ ∈ Td, we have that L(B ∘ Λ) ∈ Td, where ∘ denotes the Hadamard (entrywise) product. In particular, L(B) ∈ Td and hence L(Bd) ⊆ Td.

Proof. Write B = (bij)d×d = E[WW⊤] for some W = (W1, . . . ,Wd) ∈ Vd and consider X = diag(W) ∈ Vd×d. As in the proof of Theorem 4.2 (and with the same notation), it follows that for i ≠ j, γij = E[Xii λij Xjj] = E[WiWj λij]. This shows that E[XΛX⊤] = E[WW⊤ ∘ Λ] and B ∘ Λ agree on the off-diagonal entries. Thus, L(B ∘ Λ) = Γ ∈ Td. By taking Λ = (1)d×d, we obtain L(B) ∈ Td.

The following proposition states a relationship between substochastic matrices and tail-dependence matrices. To this end, let

\[
\mathcal{Q}_{d\times m} = \Bigl\{Q = (q_{ij})_{d\times m} : \sum_{j=1}^{m} q_{ij} \le 1,\ q_{ij} \ge 0,\ i = 1,\dots,d,\ j = 1,\dots,m\Bigr\},
\]


i.e., Qd×m is the set of d×m (row) substochastic matrices; note that the expectation of a random

matrix in Vd×m is a substochastic matrix.

Proposition 4.4. For any Q ∈ Qd×m and any Λ ∈ Tm, we have that L(QΛQ⊤) ∈ Td. In particular, L(QQ⊤) ∈ Td for all Q ∈ Qd×m and L(pp⊤) ∈ Td for all p ∈ [0, 1]d.

Proof. Write Q = (qij)d×m and let

\[
X_{ik} = I_{\bigl\{Z_i \in \bigl[\sum_{j=1}^{k-1} q_{ij},\ \sum_{j=1}^{k} q_{ij}\bigr)\bigr\}}
\]

for independent Zi ∼ U[0, 1], i = 1, . . . , d, k = 1, . . . , m. It is straightforward to see that E[X] = Q and ∑_{k=1}^m X_{ik} ≤ 1 for i = 1, . . . , d, so X ∈ Vd×m with independent rows. As in the proof of Theorem 4.2 (and with the same notation), it follows that for i ≠ j,

\[
\gamma_{ij} = \sum_{l=1}^{m}\sum_{k=1}^{m} E[X_{ik}]\,E[X_{jl}]\,\lambda_{kl} = \sum_{l=1}^{m}\sum_{k=1}^{m} q_{ik}q_{jl}\lambda_{kl}.
\]

This shows that QΛQ⊤ and Γ agree on the off-diagonal entries, so L(QΛQ⊤) = Γ ∈ Td. By taking Λ = Id, we obtain L(QQ⊤) ∈ Td. By taking m = 1, we obtain L(pp⊤) ∈ Td.
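The construction in this proof is easy to simulate. The sketch below (not from the paper; Q and Λ are arbitrary illustrative choices) builds X from Q and a uniform Z per row and checks by Monte Carlo that the off-diagonal entries of E[XΛX⊤] match QΛQ⊤:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, n = 3, 2, 200_000
Q = np.array([[0.3, 0.5],
              [0.6, 0.2],
              [0.1, 0.4]])            # substochastic: row sums <= 1
Lam = np.array([[1.0, 0.7],
                [0.7, 1.0]])          # a 2x2 tail-dependence matrix
# X_ik = 1 iff Z_i falls into [q_i1 + ... + q_{i,k-1}, q_i1 + ... + q_ik)
edges = np.concatenate([np.zeros((d, 1)), np.cumsum(Q, axis=1)], axis=1)
Z = rng.random((n, d))
X = ((edges[None, :, :-1] <= Z[:, :, None])
     & (Z[:, :, None] < edges[None, :, 1:])).astype(float)
G = np.einsum('nik,kl,njl->ij', X, Lam, X, optimize=True) / n
T = Q @ Lam @ Q.T                     # target off-diagonal values
```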

4.3 Corresponding copula models

In this section, we derive the copulas of (3.3) and (4.3) which are able to produce tail-dependence matrices E[XX⊤]/p and L(E[XΛX⊤]) as stated in Theorems 3.3 and 4.2, respectively. We first address the former.

Proposition 4.5 (Copula of (3.3)). Let X ∈ Vd with E[X] = (p, . . . , p) ∈ (0, 1]d. Furthermore, let U, V ∼ U[0, 1], let U, V, X be independent and let

\[
Y = X\,pU + (1 - X)\bigl(p + (1-p)V\bigr).
\]

Then the copula C of Y at u = (u1, . . . , ud) is given by

\[
C(u) = \sum_{i \in \{0,1\}^d} \min\Bigl\{\frac{\min_{r: i_r = 1} u_r}{p},\ 1\Bigr\}\,
\max\Bigl\{\frac{\min_{r: i_r = 0} u_r - p}{1 - p},\ 0\Bigr\}\, P(X = i),
\]

with the convention min ∅ = 1.

Proof. By the Law of Total Probability and our independence assumptions,

\[
C(u) = \sum_{i \in \{0,1\}^d} P(Y \le u,\, X = i)
= \sum_{i \in \{0,1\}^d} P\Bigl(pU \le \min_{r: i_r = 1} u_r,\ p + (1-p)V \le \min_{r: i_r = 0} u_r,\ X = i\Bigr)
= \sum_{i \in \{0,1\}^d} P\Bigl(U \le \frac{\min_{r: i_r = 1} u_r}{p}\Bigr)\, P\Bigl(V \le \frac{\min_{r: i_r = 0} u_r - p}{1 - p}\Bigr)\, P(X = i);
\]

the claim follows from the fact that U, V ∼ U[0, 1].
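A quick consistency check of this formula (a sketch, not from the paper; d = 2 with X = (I_{A1}, I_{A2}) for disjoint events A1, A2 of probability p each is one admissible choice of X) compares it against the empirical CDF of simulated Y at one point:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 400_000, 0.3
# X = (I_A1, I_A2) with disjoint A1, A2 and P(A1) = P(A2) = p
W = rng.random(n)
X = np.stack([W < p, (p <= W) & (W < 2 * p)], axis=1).astype(float)
U, V = rng.random(n), rng.random(n)
Y = X * (p * U[:, None]) + (1 - X) * (p + (1 - p) * V[:, None])
u = (0.6, 0.4)
emp = np.mean((Y[:, 0] <= u[0]) & (Y[:, 1] <= u[1]))

def C(u, probs, p):
    """Copula formula of Proposition 4.5 (min over an empty set is 1)."""
    total = 0.0
    for i, pr in probs.items():
        m_on = min([u[r] for r in range(len(u)) if i[r] == 1], default=1.0)
        m_off = min([u[r] for r in range(len(u)) if i[r] == 0], default=1.0)
        total += min(m_on / p, 1.0) * max((m_off - p) / (1 - p), 0.0) * pr
    return total

probs = {(1, 0): p, (0, 1): p, (0, 0): 1 - 2 * p}
val = C(u, probs, p)
```

Both the formula and the empirical estimate give about 0.229 here.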


For deriving the copula of (4.3), we need to introduce some notation; see also Example 4.1 below. In the following proposition, let supp(X) denote the support of X. For a vector u = (u1, . . . , ud) ∈ [0, 1]d and a matrix A = (Aij)d×m ∈ supp(X), denote by Ai the sum of the i-th row of A, i = 1, . . . , d, and let

\[
u_A = \bigl(u_1 I_{\{A_1 = 0\}} + I_{\{A_1 = 1\}},\ \dots,\ u_d I_{\{A_d = 0\}} + I_{\{A_d = 1\}}\bigr), \qquad
u^{*}_A = \Bigl(\min_{r: A_{r1} = 1} u_r,\ \dots,\ \min_{r: A_{rm} = 1} u_r\Bigr),
\]

where min ∅ = 1.

Proposition 4.6 (Copula of (4.3)). Suppose that the setup of Theorem 4.2 holds. Then the copula C of Y in (4.3) is given by

\[
C(u) = \sum_{A \in \mathrm{supp}(X)} C^V(u_A)\, C^U(u^{*}_A)\, P(X = A). \tag{4.4}
\]

Proof. By the Law of Total Probability, it suffices to verify that P(Y ≤ u | X = A) = C^V(u_A) C^U(u*_A). This can be seen from

\[
\begin{aligned}
P(Y \le u \mid X = A)
&= P\Bigl(\sum_{k=1}^{m} A_{jk}U_k + (1 - A_j)V_j \le u_j,\ j = 1,\dots,d\Bigr)\\
&= P\bigl(U_k I_{\{A_{jk} = 1\}} \le u_j,\ V_j I_{\{A_j = 0\}} \le u_j,\ j = 1,\dots,d,\ k = 1,\dots,m\bigr)\\
&= P\Bigl(U_k \le \min_{r: A_{rk} = 1} u_r,\ V_j \le u_j I_{\{A_j = 0\}} + I_{\{A_j = 1\}},\ j = 1,\dots,d,\ k = 1,\dots,m\Bigr)\\
&= P\Bigl(U_k \le \min_{r: A_{rk} = 1} u_r,\ k = 1,\dots,m\Bigr)\, P\bigl(V_j \le u_j I_{\{A_j = 0\}} + I_{\{A_j = 1\}},\ j = 1,\dots,d\bigr)\\
&= C^U(u^{*}_A)\, C^V(u_A).
\end{aligned}
\]

As long as CV has tail-dependence matrix Id, the tail-dependence matrix of Y is not

affected by the choice of CV . This theoretically provides more flexibility in choosing the body

of the distribution of Y while attaining a specific tail-dependence matrix. Note, however, that

this also depends on the choice of X; see the following example where we address special cases

which allow for more insight into the rather abstract construction (4.4).

Example 4.1. 1. For m = 1, the copula C in (4.4) is given by

\[
C(u) = \sum_{A \in \{0,1\}^d} C^V(u_A)\, C^U(u^{*}_A)\, P(X = A); \tag{4.5}
\]

note that X and A in (4.4) are indeed vectors in this case. For d = 2, we obtain

\[
C(u_1, u_2) = M(u_1, u_2)\, P\Bigl(X = \begin{pmatrix}1\\1\end{pmatrix}\Bigr) + C^V(u_1, u_2)\, P\Bigl(X = \begin{pmatrix}0\\0\end{pmatrix}\Bigr) + \Pi(u_1, u_2)\, P\Bigl(X = \begin{pmatrix}1\\0\end{pmatrix} \text{ or } X = \begin{pmatrix}0\\1\end{pmatrix}\Bigr),
\]


and therefore a mixture of the Fréchet–Hoeffding upper bound M(u1, u2) = min{u1, u2}, the copula CV and the independence copula Π(u1, u2) = u1u2. If P(X = (0, 0)⊤) = 0, then C is simply a mixture of M and Π and does not depend on V anymore.

Now consider the special case of (4.5) where V follows the d-dimensional independence copula Π(u) = ∏_{i=1}^d u_i and X = (X1, . . . , Xd−1, 1) is such that at most one of X1, . . . , Xd−1 is 1 (each with probability 0 ≤ α ≤ 1/(d − 1), and all are simultaneously 0 with probability 1 − (d − 1)α). Then, for all u ∈ [0, 1]d, C is given by

\[
C(u) = \alpha \sum_{i=1}^{d-1} \Bigl(\min\{u_i, u_d\} \prod_{j=1,\, j \ne i}^{d-1} u_j\Bigr) + \bigl(1 - (d-1)\alpha\bigr) \prod_{j=1}^{d} u_j. \tag{4.6}
\]

This copula is a conditionally independent multivariate Fréchet copula studied in Yang et al. (2009). This example will be revisited in Section 4.4; see also the left-hand side of Figure 3 below.
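Formula (4.6) is easy to validate numerically. The sketch below (not from the paper; d = 3, α = 0.4 and the evaluation point are arbitrary choices) compares it with the empirical CDF of the corresponding model:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, alpha = 500_000, 3, 0.4
# X = (X1, X2, 1): at most one of X1, X2 equals 1, each with probability alpha
W = rng.random(n)
X1 = (W < alpha).astype(float)
X2 = ((alpha <= W) & (W < 2 * alpha)).astype(float)
U = rng.random(n)
V = rng.random((n, d - 1))                  # V ~ Pi (independence copula)
Y = np.column_stack([X1 * U + (1 - X1) * V[:, 0],
                     X2 * U + (1 - X2) * V[:, 1],
                     U])
u = np.array([0.5, 0.7, 0.6])
emp = np.mean(np.all(Y <= u, axis=1))

def C46(u, alpha):
    """Copula (4.6)."""
    d = len(u)
    s = sum(min(u[i], u[d - 1]) * np.prod([u[j] for j in range(d - 1) if j != i])
            for i in range(d - 1))
    return alpha * s + (1 - (d - 1) * alpha) * np.prod(u)
```

At this evaluation point both quantities come out near 0.302.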

2. For m = 2, d = 2, we obtain

\[
\begin{aligned}
C(u_1, u_2) ={}& M(u_1, u_2)\, P\Bigl(X = \begin{pmatrix}1&0\\1&0\end{pmatrix} \text{ or } X = \begin{pmatrix}0&1\\0&1\end{pmatrix}\Bigr)\\
&+ C^U(u_1, u_2)\, P\Bigl(X = \begin{pmatrix}1&0\\0&1\end{pmatrix}\Bigr) + C^U(u_2, u_1)\, P\Bigl(X = \begin{pmatrix}0&1\\1&0\end{pmatrix}\Bigr)\\
&+ C^V(u_1, u_2)\, P\Bigl(X = \begin{pmatrix}0&0\\0&0\end{pmatrix}\Bigr)\\
&+ \Pi(u_1, u_2)\, P\Bigl(X = \begin{pmatrix}0&0\\1&0\end{pmatrix} \text{ or } \begin{pmatrix}0&0\\0&1\end{pmatrix} \text{ or } \begin{pmatrix}1&0\\0&0\end{pmatrix} \text{ or } \begin{pmatrix}0&1\\0&0\end{pmatrix}\Bigr). \tag{4.7}
\end{aligned}
\]

Figure 2 shows samples of size 2000 from (4.7) for V ∼ Π and two different choices of U (in different rows) and X (in different columns). From Theorem 4.2, we obtain that the off-diagonal entry γ12 of the tail-dependence matrix Γ of Y is given by

\[
\gamma_{12} = p^{(1,2)}_{(1,1)} + p^{(1,2)}_{(2,2)} + \lambda_{12}\bigl(p^{(1,2)}_{(2,1)} + p^{(1,2)}_{(1,2)}\bigr),
\]

where λ12 is the off-diagonal entry of the tail-dependence matrix Λ of U and p^{(1,2)}_{(k,l)} = P(X_{1k} = 1, X_{2l} = 1).

4.4 An example from risk management practice

Let us now come back to Problem (1.1) which motivated our research on tail-dependence matrices. From a practical point of view, the question is whether it is possible to find one financial position which has tail-dependence coefficient α with each of d − 1 tail-independent financial risks (assets). Such a construction can be interesting for risk management purposes, e.g., in the context of hedging.

Recall Problem (1.1):


Figure 2: Scatter plots of 2000 samples from Y for V ∼ Π and U following a bivariate (m = 2) t3 copula with Kendall's tau equal to 0.75 (top row) or a survival Marshall–Olkin copula with parameters α1 = 0.25, α2 = 0.75 (bottom row). For the plots on the left-hand side, the number of rows of X with one 1 is randomly chosen among {0, 1, 2 (= d)}; the corresponding rows and columns are then randomly selected among {1, 2 (= d)} and {1, 2 (= m)}, respectively. For the plots on the right-hand side, X is drawn from a multinomial distribution with probabilities 0.5 and 0.5 such that each row contains precisely one 1.


For which α ∈ [0, 1] is the matrix

\[
\Gamma_d(\alpha) =
\begin{pmatrix}
1 & 0 & \cdots & 0 & \alpha\\
0 & 1 & \cdots & 0 & \alpha\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 1 & \alpha\\
\alpha & \alpha & \cdots & \alpha & 1
\end{pmatrix}
\tag{4.8}
\]

a matrix of pairwise (either lower or upper) tail-dependence coefficients?

Based on the Fréchet–Hoeffding bounds, it follows from Joe (1997, Theorem 3.14) that for d = 3 (and thus also d > 3), α has to be in [0, 1/2]; however, this is not a sufficient condition for Γd(α) to be a tail-dependence matrix. The following proposition not only answers (4.8) by providing a necessary and sufficient condition, but its proof also provides a compatible model for Γd(α).

Proposition 4.7. Γd(α) ∈ Td if and only if 0 ≤ α ≤ 1/(d − 1).

Proof. The if-part directly follows from Corollary 3.5. We provide a constructive proof based on Theorem 4.2. Suppose that 0 ≤ α ≤ 1/(d − 1). Take a partition {Ω1, . . . , Ωd} of the sample space Ω with P(Ωi) = α, i = 1, . . . , d − 1, and let X = (I_{Ω1}, . . . , I_{Ωd−1}, 1) ∈ Vd. It is straightforward to see that

\[
E[XX^\top] =
\begin{pmatrix}
\alpha & 0 & \cdots & 0 & \alpha\\
0 & \alpha & \cdots & 0 & \alpha\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & \alpha & \alpha\\
\alpha & \alpha & \cdots & \alpha & 1
\end{pmatrix}.
\]

By Proposition 4.3, Γd(α) = L(E[XX⊤]) ∈ Td.

For the only-if part, suppose that Γd(α) ∈ Td; thus α ≥ 0. By Theorem 3.3, Γd(α) ∈ BId. By the definition of BId, Γd(α) = Bd/p for some p ∈ (0, 1] and a Bernoulli-compatible matrix Bd. Therefore,

\[
p\,\Gamma_d(\alpha) =
\begin{pmatrix}
p & 0 & \cdots & 0 & p\alpha\\
0 & p & \cdots & 0 & p\alpha\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & p & p\alpha\\
p\alpha & p\alpha & \cdots & p\alpha & p
\end{pmatrix}
\]


is a Bernoulli-compatible matrix, so pΓd(α) ∈ Bd. Write pΓd(α) = E[XX⊤] for some X = (X1, . . . , Xd) ∈ Vd. It follows that P(Xi = 1) = p for i = 1, . . . , d, P(XiXj = 1) = 0 for i ≠ j, i, j = 1, . . . , d − 1, and P(XiXd = 1) = pα for i = 1, . . . , d − 1. Note that the events {XiXd = 1}, i = 1, . . . , d − 1, are almost surely disjoint since P(XiXj = 1) = 0 for i ≠ j, i, j = 1, . . . , d − 1. As a consequence,

\[
p = P(X_d = 1) \ge P\Bigl(\bigcup_{i=1}^{d-1} \{X_iX_d = 1\}\Bigr) = \sum_{i=1}^{d-1} P(X_iX_d = 1) = (d-1)p\alpha,
\]

and thus (d − 1)α ≤ 1.

It follows from the proof of Theorem 4.2 that for α ∈ [0, 1/(d − 1)], a compatible copula model with tail-dependence matrix Γd(α) can be constructed as follows. Consider a partition {Ω1, . . . , Ωd} of the sample space Ω with P(Ωi) = α, i = 1, . . . , d − 1, and let X = (X1, . . . , Xd) = (I_{Ω1}, . . . , I_{Ωd−1}, 1) ∈ Vd; note that m = 1 here. Furthermore, let V be as in Theorem 4.2, U ∼ U[0, 1] and U, V, X be independent. Then

\[
Y = \bigl(UX_1 + (1 - X_1)V_1,\ \dots,\ UX_{d-1} + (1 - X_{d-1})V_{d-1},\ U\bigr)
\]

has tail-dependence matrix Γd(α). Example 4.1, Part 1 provides the copula C of Y in this case. It is also straightforward to verify from this copula that Y has tail-dependence matrix Γd(α). Figure 3 displays pairs plots of 2000 realizations of Y for α = 1/3 and two different copulas for V.
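This model is straightforward to simulate. The following sketch (not from the paper) draws from it with d = 4, α = 1/3 and V ∼ Π, and estimates the tail-dependence matrix empirically via P̂(Yi ≤ u, Yj ≤ u)/u at a small threshold u, a crude finite-u proxy for the limit:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, alpha, u = 500_000, 4, 1 / 3, 0.02
# Partition: Omega_i = {i*alpha <= W < (i+1)*alpha} for i < d - 1; X_d = 1
W = rng.random(n)
X = np.stack([(i * alpha <= W) & (W < (i + 1) * alpha) for i in range(d - 1)]
             + [np.ones(n, dtype=bool)], axis=1).astype(float)
U = rng.random(n)
V = rng.random((n, d))                    # V ~ Pi (independence copula)
Y = X * U[:, None] + (1 - X) * V
hits = Y <= u
lam = np.array([[np.mean(hits[:, i] & hits[:, j]) / u for j in range(d)]
                for i in range(d)])       # roughly Gamma_4(1/3)
```

The last row and column come out near 1/3 (up to a small finite-u bias of order u), while the remaining off-diagonal entries are near 0.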

Remark 4.2. Note that Γd(α) is not positive semidefinite if and only if α > 1/√(d − 1). For d < 5, element-wise non-negative and positive semidefinite matrices are completely positive; see Berman and Shaked-Monderer (2003, Theorem 2.4). Therefore, Γ3(2/3) is completely positive. However, it is not in T3, since 2/3 > 1/(3 − 1) = 1/2 (Proposition 4.7). This shows that the class of completely positive matrices with diagonal entries being 1 is strictly larger than Td.
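The eigenvalues of Γd(α) are 1 ± α√(d − 1) together with 1 (with multiplicity d − 2), so the positive-semidefiniteness threshold in this remark is easy to confirm numerically (a sketch, not from the paper):

```python
import numpy as np

def Gamma(d, alpha):
    """Gamma_d(alpha) from (4.8)."""
    G = np.eye(d)
    G[-1, :-1] = alpha
    G[:-1, -1] = alpha
    return G

d = 3
crit = 1 / np.sqrt(d - 1)          # PSD threshold from Remark 4.2
below = np.linalg.eigvalsh(Gamma(d, crit - 0.01)).min()   # smallest eigenvalue > 0
above = np.linalg.eigvalsh(Gamma(d, crit + 0.01)).min()   # smallest eigenvalue < 0
psd_23 = np.linalg.eigvalsh(Gamma(3, 2 / 3)).min()        # Gamma_3(2/3) is PSD
```

Since 2/3 < 1/√2, Γ3(2/3) is indeed positive semidefinite even though it is not a tail-dependence matrix.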

5 Conclusion and discussion

Inspired by the question whether a given matrix in [0, 1]d×d is the matrix of pairwise tail-dependence coefficients of a d-dimensional random vector, we introduced the tail-dependence compatibility problem. It turns out that this problem is closely related to the Bernoulli-compatibility problem, which we also addressed in this paper and which asks when a given matrix in [0, 1]d×d is a Bernoulli-compatible matrix (see Question 1 and Theorem 2.2). As a main finding, we characterized tail-dependence matrices as precisely those square matrices with diagonal entries being 1 which are Bernoulli-compatible matrices multiplied by a constant (see Question 2



Figure 3: Pairs plot of 2000 samples from Y ∼ C which produces the tail-dependence matrix Γ4(1/3) as given by (1.1). On the left-hand side, V ∼ Π (α determines how much weight is on the diagonal for pairs with one component being Y4; see (4.6)) and on the right-hand side, V follows a Gauss copula with parameter chosen such that Kendall's tau equals 0.8.

and Theorem 3.3). Furthermore, we presented and studied new models (see, e.g., Question 3

and Theorem 4.2) which provide answers to several questions related to the tail-dependence

compatibility problem.

The study of compatibility of tail-dependence matrices is mathematically different from that of covariance matrices. Through the many technical arguments in this paper, the reader may have already realized that the tail-dependence matrix lacks the linear structure which is essential to covariance matrices and the associated tools from Linear Algebra. For instance, let X be a d-dimensional random vector with covariance matrix Σ and tail-dependence matrix Λ, and let A be an m × d matrix. The covariance matrix of AX is simply given by AΣA⊤; however, the tail-dependence matrix of AX is generally not explicit (see Remark 4.1 for special cases). This lack of linearity also helps to explain why tail-dependence matrices are realized by models based on Bernoulli vectors, as we have seen in this paper, in contrast to covariance matrices, which are naturally realized by Gaussian (or, more generally, elliptical) random vectors. The latter have a linear structure, whereas Bernoulli vectors do not. It is not surprising that most classical techniques in Linear Algebra, such as matrix decomposition, diagonalization, ranks, inverses and determinants, are not very helpful for studying the compatibility problems addressed in this paper.

Concerning future research, an interesting open question is how one can (theoretically or numerically) determine whether a given arbitrary non-negative, square matrix is a tail-dependence


or Bernoulli-compatible matrix. To the best of our knowledge, no corresponding algorithms are available. Another open question concerns the compatibility of other matrices of pairwise measures of association, such as rank-correlation measures (e.g., Spearman's rho or Kendall's tau); see Embrechts et al. (2002, Section 6.2). Recently, Fiebig et al. (2014) and Strokorb et al. (2015) studied the concept of tail-dependence functions of stochastic processes. Results similar to some of our findings were obtained there in the context of max-stable processes.

From a practitioner's point of view, it is important to point out limitations of using tail-dependence matrices in quantitative risk management and other applications. One such limitation is the statistical estimation of tail-dependence matrices: since tail-dependence coefficients are defined as limits, estimating them from data is non-trivial (and typically more complicated than estimation in the body of a bivariate distribution).

After presenting the results of our paper at the conferences "Recent Developments in Dependence Modelling with Applications in Finance and Insurance – 2nd Edition, Brussels, May 29, 2015" and "The 9th International Conference on Extreme Value Analysis, Ann Arbor, June 15–19, 2015", the references Fiebig et al. (2014) and Strokorb et al. (2015) were brought to our attention (see also the Acknowledgments below). In these papers, a closely related problem is treated, albeit from a different, more theoretical angle, mainly based on the theory of max-stable and Tawn–Molchanov processes as well as results for convex polytopes. For instance, our Theorem 3.3 is similar to Theorem 6 c) in Fiebig et al. (2014).

Acknowledgments

We would like to thank Federico Degen for raising this interesting question concerning tail-

dependence matrices and Janusz Milek for relevant discussions. We would also like to thank the

Editor, two anonymous referees, Tiantian Mao, Don McLeish, Johan Segers, Kirstin Strokorb and

Yuwei Zhao for valuable input on an early version of the paper. Furthermore, Paul Embrechts

acknowledges financial support by the Swiss Finance Institute, and Marius Hofert and Ruodu

Wang acknowledge support from the Natural Sciences and Engineering Research Council of

Canada (NSERC Discovery Grant numbers 5010 and 435844, respectively). Ruodu Wang is

grateful to the Forschungsinstitut für Mathematik (FIM) at ETH Zurich for financial support

during his visits in 2013 and 2014 when this topic was brought to his attention.

References

Berman, A. and Shaked-Monderer, N. (2003). Completely Positive Matrices. World Scientific.


Bluhm, C. and Overbeck, L. (2006). Structured Credit Portfolio Analysis, Baskets & CDOs.

Chapman & Hall.

Bluhm, C., Overbeck, L. and Wagner, C. (2002). An Introduction to Credit Risk Modeling.

Chapman & Hall.

Chaganty, N. R. and Joe, H. (2006). Range of correlation matrices for dependent Bernoulli

random variables. Biometrika, 93(1), 197–206.

Dhaene, J. and Denuit, M. (1999). The safest dependence structure among risks. Insurance:

Mathematics and Economics, 25(1), 11–21.

Embrechts, P., McNeil, A. and Straumann, D. (2002). Correlation and dependence in risk management: properties and pitfalls. In: Risk Management: Value at Risk and Beyond, ed. M.A.H. Dempster, Cambridge University Press, Cambridge, 176–223.

Fiebig, U., Strokorb, K. and Schlather, M. (2014). The realization problem for tail correlation functions. arXiv:1405.6876.

Joe, H. (1997). Multivariate Models and Dependence Concepts. Chapman & Hall.

Joe, H. (2014). Dependence Modeling with Copulas. London: Chapman & Hall.

Kortschak, D. and Albrecher, H. (2009). Asymptotic results for the sum of dependent non-identically distributed random variables. Methodology and Computing in Applied Probability, 11, 279–306.

Liebscher, E. (2008). Construction of asymmetric multivariate copulas. Journal of Multivariate

Analysis, 99(10), 2234–2250.

McNeil, A. J., Frey, R. and Embrechts, P. (2015). Quantitative Risk Management: Concepts,

Techniques and Tools. Revised Edition. Princeton University Press.

Nikoloulopoulos, A. K., Joe, H. and Li, H. (2009). Extreme value properties of multivariate t

copulas. Extremes, 12(2), 129–148.

Rüschendorf, L. (1981). Characterization of dependence concepts in normal distributions. Annals of the Institute of Statistical Mathematics, 33(1), 347–359.

Strokorb, K., Ballani, F. and Schlather, M. (2015). Tail correlation functions of max-stable

processes. Extremes, 18(2), 241–271.

Yang, J., Qi, Y. and Wang, R. (2009). A class of multivariate copulas with bivariate Fréchet marginal copulas. Insurance: Mathematics and Economics, 45(1), 139–147.
