

The Journal of Operational Risk

Volume 11 Number 3 September 2016

■ Should the advanced measurement approach be replaced with the standardized measurement approach for operational risk?
  Gareth W. Peters, Pavel V. Shevchenko, Bertrand Hassani and Ariane Chapelle

■ Comments on the Basel Committee on Banking Supervision proposal for a new standardized approach for operational risk
  Giulio Mignola, Roberto Ugoccioni and Eric Cope

■ An assessment of operational loss data and its implications for risk capital modeling
  Ruben D. Cohen

■ Rapidly bounding the exceedance probabilities of high aggregate losses
  Isabella Gollini and Jonathan Rougier

PEFC Certified

This book has been produced entirely from sustainable papers that are accredited as PEFC compliant.

www.pefc.org


For all subscription queries, please call:
UK/Europe: +44 (0) 207 316 9300
USA: +1 646 736 1850
ROW: +852 3411 4828


The Journal of Operational Risk

EDITORIAL BOARD

Editor-in-Chief
Marcelo Cruz

Associate Editors
Stephen Brown, NYU Stern
Ariane Chapelle, University College London
Anna Chernobai, Syracuse University
Rodney Coleman, Imperial College
Eric Cope, Credit Suisse
Michel Crouhy, IXIS Corporate Investment Bank
Patrick de Fontnouvelle, Federal Reserve Bank of Boston
Thomas Kaiser, Goethe University Frankfurt
Mark Laycock, ML Risk Partners Ltd
Marco Moscadelli, Bank of Italy
Michael Pinedo, New York University
Jeremy Quick, Guernsey Financial Services Commission
Svetlozar Rachev, Stony Brook University
David Rowe, David M. Rowe Risk Advisory
Anthony Saunders, New York University
Sergio Scandizzo, European Investment Bank
Evan Sekeris, Aon
Pavel Shevchenko, Macquarie University
Peter Tufano, Harvard Business School

SUBSCRIPTIONS

The Journal of Operational Risk (Print ISSN 1744-6740 | Online ISSN 1755-2710) is published quarterly by Incisive Risk Information Limited, Haymarket House, 28–29 Haymarket, London SW1Y 4RX, UK. Subscriptions are available on an annual basis, and the rates are set out in the table below.

                      UK        Europe     US
Risk.net Journals     £1945     €2795      $3095
Print                 £735      €1035      $1215
Risk.net Premium      £2750     €3995      $4400

Academic discounts are available. Please enquire by using one of the contact methods below.

All prices include postage.

All subscription orders, single/back issues orders, and changes of address should be sent to:

UK & Europe Office: Incisive Media (c/o CDS Global), Tower House, Sovereign Park, Market Harborough, Leicestershire, LE16 9EF, UK. Tel: 0870 787 6822 (UK), +44 (0)1858 438 421 (ROW); fax: +44 (0)1858 434958

US & Canada Office: Incisive Media, 55 Broad Street, 22nd Floor, New York, NY 10004, USA. Tel: +1 646 736 1888; fax: +1 646 390 6612

Asia & Pacific Office: Incisive Media, 20th Floor, Admiralty Centre, Tower 2, 18 Harcourt Road, Admiralty, Hong Kong. Tel: +852 3411 4888; fax: +852 3411 4811

Website: www.risk.net/journal    E-mail: [email protected]


The Journal of Operational Risk

GENERAL SUBMISSION GUIDELINES

Manuscripts and research papers submitted for consideration must be original work that is not simultaneously under review for publication in another journal or other publication outlets. All articles submitted for consideration should follow strict academic standards in both theoretical content and empirical results. Articles should be of interest to a broad audience of sophisticated practitioners and academics.

Submitted papers should follow Webster's New Collegiate Dictionary for spelling, and The Chicago Manual of Style for punctuation and other points of style, apart from a few minor exceptions that can be found at www.risk.net/journal. Papers should be submitted electronically via email to: [email protected]. Please clearly indicate which journal you are submitting to.

You must submit two versions of your paper: a single LaTeX version and a PDF file. LaTeX files need to have an explicitly coded bibliography included. All files must be clearly named and saved by author name and date of submission. All figures and tables must be included in the main PDF document and also submitted as separate editable files and be clearly numbered.

All papers should include a title page as a separate document, and the full names, affiliations and email addresses of all authors should be included. A concise and factual abstract of between 150 and 200 words is required and it should be included in the main document. Five or six keywords should be included after the abstract. Submitted papers must also include an Acknowledgements section and a Declaration of Interest section. Authors should declare any funding for the article or conflicts of interest. Citations in the text must be written as (John 1999; Paul 2003; Peter and Paul 2000) or (John et al 1993; Peter 2000).

The number of figures and tables included in a paper should be kept to a minimum. Figures and tables must be included in the main PDF document and also submitted as separate individual editable files. Figures will appear in color online, but will be printed in black and white. Footnotes should be used sparingly. If footnotes are required then these should be included at the end of the page and should be no more than two sentences. Appendixes will be published online as supplementary material.

Before submitting a paper, authors should consult the full author guidelines at:

http://www.risk.net/static/risk-journals-submission-guidelines

Queries may also be sent to:
The Journal of Operational Risk, Incisive Media, Haymarket House, 28–29 Haymarket, London SW1Y 4RX, UK
Tel: +44 (0)20 7004 7531; Fax: +44 (0)20 7484 9758
E-mail: [email protected]


The Journal of Operational Risk

The journal

The Basel Committee's 2014 revision of its operational risk capital framework, along with the multi-billion-dollar settlements that financial institutions had to make with financial authorities, made operational risk the key focus of risk management. The Journal of Operational Risk stimulates active discussion of practical approaches to quantifying, modeling and managing this risk as well as discussing current issues in the discipline. It is essential reading for practitioners and academics who want to stay informed about the latest research in operational risk theory and practice.

The Journal of Operational Risk considers submissions in the form of research papers and forum papers on, but not limited to, the following topics.

• The modeling and management of operational risk.

• Recent advances in techniques used to model operational risk, eg, copulas, correlation, aggregate loss distributions, Bayesian methods and extreme value theory.

• The pricing and hedging of operational risk and/or any risk transfer techniques.

• Data modeling: external loss data, business control factors and scenario analysis.

• Models used to aggregate different types of data.

• Causal models that link key risk indicators and macroeconomic factors to operational losses.

• Regulatory issues, such as Basel II or any other local regulatory issue.


The Journal of Operational Risk Volume 11/Number 3

CONTENTS

Letter from the Editor-in-Chief vii

RESEARCH PAPERS

Should the advanced measurement approach be replaced with the standardized measurement approach for operational risk?  1
Gareth W. Peters, Pavel V. Shevchenko, Bertrand Hassani and Ariane Chapelle

Comments on the Basel Committee on Banking Supervision proposal for a new standardized approach for operational risk  51
Giulio Mignola, Roberto Ugoccioni and Eric Cope

An assessment of operational loss data and its implications for risk capital modeling  71
Ruben D. Cohen

Rapidly bounding the exceedance probabilities of high aggregate losses  97
Isabella Gollini and Jonathan Rougier

Editor-in-Chief: Marcelo Cruz
Publisher: Nick Carver
Journals Manager: Dawn Hunter
Editorial Assistant: Carolyn Moclair
Subscription Sales Manager: Aaraa Javed
Global Key Account Sales Director: Michelle Godwin
Composition and copyediting: T&T Productions Ltd
Printed in UK by Printondemand-Worldwide

© Copyright Incisive Risk Information (IP) Limited, 2016. All rights reserved. No parts of this publication may be reproduced, stored in or introduced into any retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the copyright owners.


LETTER FROM THE EDITOR-IN-CHIEF

Marcelo Cruz

Welcome to the third issue of Volume 11 of The Journal of Operational Risk. With the entire operational risk community focused on the controversial Basel Committee consultative paper that has created the standardized measurement approach (SMA) and ended the advanced measurement approach (AMA), I asked the members of our editorial board to write technical papers analyzing the impact of these new rules. The journal was lucky enough to have three board members respond to the challenge, allocating many hours of their valuable time to performing this analysis and writing very interesting pieces on the consequences of these rules for bank capital. I imagine that most of us reading these papers are deeply involved in discussions on the impact and consequences of these changes and will therefore appreciate more high-quality views on the topic.

At OpRisk North America and Europe, which took place a few months ago, SMA clearly dominated discussion, with very animated engagements between practitioners and regulators. As readers will see in the two papers discussed below that analyze the consequences of SMA, the new approach is risk insensitive, ie, there is no connection between managerial actions for managing risk and the operational risk capital calculated by the SMA. Although SMA is not the official rule until the final paper is published by the Basel Committee and the new standards are confirmed and promulgated by the participating countries, we ask readers and authors to submit their analyses to The Journal of Operational Risk; even if their papers are less technical, we will review them and publish them in our Forum section. As the leading publication in the area, The Journal of Operational Risk would like to be at the forefront of these discussions, and we would welcome papers that shed some light on them.

In this issue we have four technical papers. Two of the papers deal with an analysis of the SMA, one paper deals with data and another tackles statistical issues around the quantification of operational risk.

In our first paper, "Should the advanced measurement approach be replaced with the standardized measurement approach for operational risk?", two of our board members, Pavel Shevchenko and Ariane Chapelle, along with their regular collaborator Gareth Peters and also Bertrand Hassani, discuss and analyze the weaknesses and pitfalls of SMA, such as instability, risk insensitivity, super-additivity and the implicit relationship between the SMA capital model and systemic risk in the banking sector. They also discuss the issues with a precursor model proposed by the Basel Committee, which was the basis of the SMA. The authors advocate maintaining the AMA internal model framework and suggest as an alternative a number of standardization recommendations that could be considered to unify internal modeling of operational risk.

In the issue's second paper, "Comments on the Basel Committee on Banking Supervision proposal for a new standardized approach for operational risk", regular contributors Giulio Mignola and Roberto Ugoccioni, together with our board member Eric Cope, study the behavior of the SMA under a variety of hypothetical and realistic conditions, showing that the simplicity of the new approach is very costly in several ways. Among their findings, their study shows that the SMA does not respond appropriately to changes in the risk profile of a bank (ie, it is risk insensitive, as the first paper also showed), and that it is incapable of differentiating among the range of possible risk profiles across banks; that SMA capital results generally appear to be more variable across banks than AMA results, where banks had the option of fitting the loss data to statistical distributions; and that the SMA can result in banks overinsuring or underinsuring against operational risks relative to previous AMA standards. Finally, the authors argue that the SMA is not only retrograde in terms of its capability to measure risk, but it also, perhaps more importantly, fails to create any link between management actions and capital requirement.

In the third paper in the issue, "An assessment of operational loss data and its implications for risk capital modeling", Ruben D. Cohen employs a mathematical method based on a special dimensional transformation to assess operational loss data from an innovative perspective. The procedure, which is formally known as the Buckingham Π (Pi) Theorem, is used widely in the field of experimental engineering to extrapolate the results of tests conducted on models to prototypes. When applied to the operational loss data considered here, the approach leads to a common and seemingly universal trend that underlies all the resulting distributions, regardless of how the data set is divided (ie, by event type, business line, revenue band, etc). This dominating trend, which appears to also acquire a tail parameter of 1, could have profound implications for how operational risk capital is computed.

In our fourth and final paper, "Rapidly bounding the exceedance probabilities of high aggregate losses", Isabella Gollini and Jonathan Rougier take on the task of assessing the right-hand tail of an insurer's loss distribution for some specified period, such as a year. They present and analyze six different approaches – four upper bounds and two approximations – and examine these approaches under a variety of conditions, using a large event loss table for US hurricanes. They also discuss the appropriate size of Monte Carlo simulations and the imposition of a cap on single-event losses. Based on their findings, they strongly advocate the Gamma distribution as a flexible model for single-event losses, because of its tractable form in all of the methods.


Journal of Operational Risk 11(3), 1–49
DOI: 10.21314/JOP.2016.177

Research Paper

Should the advanced measurement approach be replaced with the standardized measurement approach for operational risk?

Gareth W. Peters,1 Pavel V. Shevchenko,2 Bertrand Hassani3 and Ariane Chapelle4

1 Department of Statistical Sciences, University College London, Gower Street, London WC1E 6BT, UK; email: [email protected]
2 CSIRO, PO Box 52, North Ryde, NSW 1670, Australia; emails: [email protected], [email protected]
3 Université Paris 1 Panthéon-Sorbonne, CES UMR 8174, 106 boulevard de l'Hôpital, 75647 Paris Cedex 13, France; email: [email protected]
4 Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK; email: [email protected]

(Received June 27, 2016; accepted June 29, 2016)

ABSTRACT

Recently, the Basel Committee for Banking Supervision proposed to replace all approaches, including the advanced measurement approach (AMA), to operational risk capital with a simple formula referred to as the standardized measurement approach (SMA). This paper discusses and studies the weaknesses and pitfalls of the SMA, such as instability, risk insensitivity, super-additivity and the implicit relationship between the SMA capital model and systemic risk in the banking sector. We also discuss issues with the closely related operational risk capital-at-risk (OpCar) Basel Committee-proposed model, which is the precursor to the SMA. In conclusion, we advocate to maintain the AMA internal model framework and suggest as an alternative a number of standardization recommendations that could be considered to unify the internal modeling of operational risk. The findings and views presented in this paper have been discussed with and supported by many OpRisk practitioners and academics in Australia, Europe, the United Kingdom and the United States, and recently at the OpRisk Europe 2016 conference in London.

Corresponding author: P. V. Shevchenko
Print ISSN 1744-6740 | Online ISSN 1755-2710
Copyright © 2016 Incisive Risk Information (IP) Limited

Keywords: operational risk (OpRisk); standardized measurement approach (SMA); loss distribution approach (LDA); advanced measurement approach (AMA); Basel Committee for Banking Supervision (BCBS) regulations.

1 INTRODUCTION

Operational risk (OpRisk) management is the youngest of the three major risk branches, with the others being market and credit risks, within financial institutions. The term OpRisk became more popular after the bankruptcy of Barings bank in 1995, when a rogue trader caused the collapse of a venerable institution by placing bets in the Asian markets and keeping these contracts out of sight of management. At the time, these losses could be classified neither as market nor as credit risks, and the term OpRisk started to be used in the industry to define situations where such losses could arise. It was quite some time before this definition was abandoned and a proper definition was established for OpRisk. In these early days, OpRisk had a negative definition, "any risk that is not market or credit risk", which was not very helpful to assess and manage OpRisk. Looking back at the history of risk management research, we observe that early academics found the same issue of classifying risk in general, as Crockford (1982) noted:

Research into risk management immediately encounters some basic problems of definition. There is still no general agreement on where the boundaries of the subject lie, and a satisfactory definition of risk management is notoriously difficult to formulate.

One thing is for certain: as risk management started to grow as a discipline, regulation also began to get more complex in order to catch up with new tools and techniques. It is not a stretch to say that financial institutions have always been regulated one way or another, given the risk they bring to the financial system. Regulation was mostly on a country-by-country basis and very uneven, which allowed for arbitrages. As financial institutions became more globalized, the need for more symmetric regulation that could level the way institutions would be supervised and regulated mechanically increased worldwide.

As a consequence of such regulations, in some areas of risk management, such as market risk and credit risk, there has been a gradual convergence or standardization of best practice, which has been widely adopted by banks and financial institutions. In the area of OpRisk modeling and management, such convergence of best practice is still occurring. This is due to multiple factors, such as many different types of risk processes being modeled within the OpRisk framework, different influences and loss experiences in the OpRisk categories in different banking jurisdictions and the very nature of OpRisk being a relatively immature risk category compared with market and credit risk.

The following question therefore arises: how can one begin to induce a standardization of OpRisk modeling and capital calculation under Pillar I of the current banking regulation accords from the Basel Committee for Banking Supervision (BCBS)? It is stated under these accords that the basic objective of the Basel Committee's work has been to close gaps in international supervisory coverage in pursuit of two basic principles: that no foreign banking establishment should escape supervision, and that supervision should be adequate. It is this second note that forms the context for the new proposed revisions to simplify OpRisk modeling approaches. These have been brought out as two consultative documents:

• the standardized measurement approach (SMA), which was proposed in the Basel Committee consultative document "Standardized measurement approach for operational risk", issued in March 2016 for comments by June 3, 2016 (Basel Committee on Banking Supervision 2016); and

• the closely related OpRisk capital-at-risk (OpCar) model, which was proposed in the Basel Committee consultative document "Operational risk: revisions to the simpler approaches", issued in October 2014 (Basel Committee on Banking Supervision 2014).

In Basel Committee on Banking Supervision (2014, p. 1), it is noted that "despite an increase in the number and severity of operational risk events during and after the financial crisis, capital requirements for operational risk have remained stable or even fallen for the standardised approaches". Consequently, it is reasonable to reconsider these measures of capital adequacy and to decide if they need further revision. This is exactly the process undertaken by the Basel Committee in preparing the revised proposed frameworks that are discussed in this paper. Before getting to the revised framework of the SMA capital calculation, it is useful to recall the current best practice in Basel regulations.

Many models have been suggested for modeling OpRisk under the Basel II regulatory framework (Basel Committee on Banking Supervision 2006). Fundamentally, two different approaches are considered: the top-down approach and the bottom-up approach. A top-down approach quantifies OpRisk without attempting to identify the events or causes of losses explicitly. It can include the risk indicator models, which rely on a number of OpRisk exposure indicators to track OpRisks, and the scenario analysis and stress-testing models, which are estimated based on what-if scenarios. A bottom-up approach quantifies OpRisk on a micro-level, being based on identified internal events. It can include actuarial-type models (referred to as the loss distribution approach (LDA)) that model the frequency and severity of OpRisk losses.

Under the current regulatory framework for OpRisk (Basel Committee on Banking Supervision 2006), banks can use several methods to calculate OpRisk capital: the basic indicator approach (BIA), the standardized approach (TSA) and the advanced measurement approach (AMA). Detailed discussion of these approaches can be found in Cruz et al (2015, Chapter 1). In brief, under the BIA and TSA, the capital is calculated as simple functions of gross income (GI):

$$K_{\mathrm{BIA}} = \alpha\,\frac{1}{n}\sum_{j=1}^{3}\max\{\mathrm{GI}(j),\,0\}, \qquad n=\sum_{j=1}^{3}\mathbf{1}_{\{\mathrm{GI}(j)>0\}}, \qquad \alpha=0.15, \tag{1.1}$$

$$K_{\mathrm{TSA}} = \frac{1}{3}\sum_{j=1}^{3}\max\biggl\{\sum_{i=1}^{8}\beta_i\,\mathrm{GI}_i(j),\,0\biggr\}, \tag{1.2}$$

where $\mathbf{1}_{\{\cdot\}}$ is the standard indicator symbol, which equals 1 if the condition in $\{\cdot\}$ is true, and 0 otherwise. Here, $\mathrm{GI}(j)$ is the annual gross income of a bank in year $j$; $\mathrm{GI}_i(j)$ is the gross income of business line $i$ in year $j$; and $\beta_i$ are coefficients in the range [0.12–0.18], specified by the Basel Committee for eight business lines. These approaches have a very coarse level of model granularity and are generally considered simplistic top-down approaches. Some country-specific regulators have adopted slightly modified versions of the BIA and TSA.
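To make the mechanics of (1.1) and (1.2) concrete, the short sketch below computes both charges from three years of gross income. It is an illustrative reading of the formulas as reproduced above; the function names and example income figures are ours rather than anything prescribed in the Basel texts.

```python
import numpy as np

def k_bia(gi_annual, alpha=0.15):
    """BIA capital (1.1): alpha times the average positive annual GI over three years."""
    gi = np.asarray(gi_annual, dtype=float)        # GI(j), j = 1..3
    n = np.sum(gi > 0)                             # number of years with positive GI
    return alpha * np.sum(np.maximum(gi, 0.0)) / n if n > 0 else 0.0

def k_tsa(gi_by_line, betas):
    """TSA capital (1.2): three-year average of max(sum_i beta_i * GI_i(j), 0)."""
    gi = np.asarray(gi_by_line, dtype=float)       # shape (3 years, 8 business lines)
    yearly = np.maximum(gi @ np.asarray(betas), 0.0)
    return yearly.mean()

# Hypothetical example: GI in EUR million for three consecutive years,
# split evenly across the eight Basel II business lines.
gi_annual = [1200.0, 950.0, 1100.0]
gi_by_line = np.tile(np.array(gi_annual)[:, None] / 8.0, (1, 8))
betas = [0.18, 0.18, 0.12, 0.15, 0.18, 0.15, 0.12, 0.12]   # Basel II business-line betas

print(f"K_BIA = {k_bia(gi_annual):.1f}, K_TSA = {k_tsa(gi_by_line, betas):.1f}")
```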

Under the AMA, banks are allowed to use their own models to estimate the capital. A bank intending to use the AMA should demonstrate the accuracy of the internal models within the Basel II-specified risk cells (eight business lines by seven event types) relevant to the bank. This is a finer level of granularity, more appropriate for a detailed analysis of risk processes in the financial institution. Typically, at this level of granularity, the models are based on bottom-up approaches. The most widely used AMA is the LDA, based on modeling the annual frequency $N$ and severities $X_1, X_2, \ldots$ of OpRisk losses for a risk cell, so that the annual loss for a bank over the $d$ risk cells is

$$Z = \sum_{j=1}^{d}\sum_{i=1}^{N_j} X_i^{(j)}. \tag{1.3}$$

Then, the regulatory capital is calculated as the 0.999 value-at-risk (VaR), which is the quantile of the distribution for the next year's annual loss $Z$:

$$K_{\mathrm{LDA}} = \mathrm{VaR}_q[Z] := \inf\{z \in \mathbb{R} : \Pr[Z > z] \leq 1 - q\}, \qquad q = 0.999; \tag{1.4}$$

this can be reduced by the expected loss covered through internal provisions. Typically, frequency and severities within a risk cell are assumed to be independent.
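As an illustration of (1.3)–(1.4) for a single risk cell, the following Monte Carlo sketch estimates the 0.999 quantile of a compound Poisson–lognormal annual loss. It is our own minimal example; the parameter values loosely echo the Poisson(10)–Lognormal(12, 2.5) component used later in Section 3, and the function name is hypothetical.

```python
import numpy as np

def lda_var_mc(lam, mu, sigma, q=0.999, n_years=200_000, seed=1):
    """Monte Carlo estimate of VaR_q for Z = sum_{i=1}^{N} X_i,
    with N ~ Poisson(lam) and X_i ~ Lognormal(mu, sigma)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, size=n_years)                  # N for each simulated year
    annual_losses = np.array([
        rng.lognormal(mu, sigma, size=n).sum() for n in counts
    ])
    return np.quantile(annual_losses, q)

# Hypothetical single risk cell: roughly 10 large losses per year, heavy-tailed severity in EUR.
print(f"VaR_0.999 ~ EUR {lda_var_mc(lam=10, mu=12, sigma=2.5) / 1e6:,.0f} million")
```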


For around ten years, the space of OpRisk has been evolving under this model-based structure. A summary of the Basel accords over this period of time (Basel II–Basel III) can be captured as follows:

• they ensure that capital allocation is more risk sensitive;

• they enhance disclosure requirements that allow market participants to assess the capital adequacy of an institution;

• they ensure that credit risk, OpRisk and market risk are quantified based on data and formal techniques;

• they attempt to align economic and regulatory capital more closely to reduce the scope for regulatory arbitrage.

While the final Basel accord has at large addressed the regulatory arbitrage issue, there are still areas where regulatory capital requirements will diverge from the economic capital.

However, it was observed recently in studies performed by the BCBS and several local banking regulators that the BIA and TSA do not correctly estimate the OpRisk capital, ie, GI appeared not to be a good proxy indicator for OpRisk exposure. Also, it appeared that capital under the AMA is difficult to compare across banks, due to the wide range of practices adopted by different banks.

So, at this point, two options are available to further refine and standardize OpRisk modeling practices: (1) to refine the BIA and TSA and, more importantly, converge within internal modeling in the AMA framework, or (2) to remove all internal modeling and modeling practice in OpRisk in favor of an overly simplified "one size fits all" SMA model (sadly, this is the option that has been adopted by the current round of Basel Committee consultations (Basel Committee on Banking Supervision 2016) in Pillar 1).

This paper is structured as follows. Section 2 formally defines the Basel-proposed SMA. The subsequent sections involve a collection of summary results and comments for studies performed on the proposed SMA model. Capital instability and sensitivity are studied in Section 3. Section 4 discusses the reduction of risk responsivity and incentivized risk-taking. Discarding key sources of OpRisk data is discussed in Section 5. The possibility of super-additive capital under the SMA is examined in Section 6. Section 7 summarizes the Basel Committee procedure for the estimation of the OpCar model and its underlying assumptions and discusses the issues with this approach. The paper then concludes with suggestions in Section 8 relating to maintaining the AMA internal model framework, with standardization recommendations that could be considered in order to unify the internal modeling of OpRisk.


2 BASEL COMMITTEE-PROPOSED STANDARDIZED MEASUREMENT APPROACH

This section introduces the new simplifications that are being proposed by the Basel Committee for models in OpRisk, starting with a brief overview of how this process was initiated by the OpRisk capital-at-risk (OpCar) model proposed in Basel Committee on Banking Supervision (2014), and then finishing with the current version of this approach, known as the SMA, proposed in Basel Committee on Banking Supervision (2016).

We begin with a clarifying comment on the first round of the proposal, which is conceptually important for practitioners. In Basel Committee on Banking Supervision (2014, p. 1), it is stated that:

Despite an increase in the number and severity of operational risk events during and after the financial crisis, capital requirements for operational risk have remained stable or even fallen for the standardized approaches. This indicates that the existing set of simple approaches for operational risk – the BIA and TSA, including its variant the alternative standardized approach (ASA) – do not correctly estimate the operational risk capital requirements of a wide spectrum of banks.

We agree that, in general, there are many cases in which banks will be undercapitalized for large crisis events, such as that which hit in 2008. Therefore, with the benefit of hindsight, it is prudent to reconsider these simplified models and look for improvements and reformulations that can be achieved in the wake of new information after the 2008 crisis. In fact, we would argue this is sensible practice in model assessment and model criticism after new information regarding model suitability arises.

As we observed, the BIA and TSA make very simplistic assumptions regarding capital: namely, that GI can be used as an adequate proxy indicator for OpRisk exposure and, further, that a bank's OpRisk exposure increases linearly in proportion to revenue. Basel Committee on Banking Supervision (2014, p. 1) also makes two relevant points, that "the existing approaches do not take into account the fact that the relationship between the size and the operational risk of a bank does not remain constant or that operational risk exposure increases with a bank's size in a nonlinear fashion".

Further, neither the BIA nor the TSA has been recalibrated since 2004. We believe that this is a huge mistake: models and calibrations should be tested regularly, and each piece of regulation should come with its revision plan. As can be seen from experience, the model assumption has typically turned out to be invalid in a dynamically changing, non-stationary risk management environment.


The two main objectives of the OpCar model proposed in Basel Committee on Banking Supervision (2014) were stated to be the following:

(i) to refine the OpRisk proxy indicator by replacing GI with a superior indicator;

(ii) to improve calibration of the regulatory coefficients based on the results of the quantitative analysis.

To achieve this, the Basel Committee argued – on practical grounds – that the model developed should be sufficiently simple to be applied with "comparability of outcomes in the framework", and that it should be "simple enough to understand, not unduly burdensome to implement, should not have too many parameters for calculation by banks and it should not rely on banks' internal models". However, they also claimed that such a new approach should "exhibit enhanced risk sensitivity" relative to the GI-based frameworks.

Additionally, such a one-size-fits-all framework "should be calibrated according to the OpRisk profile of a large number of banks of different size and business models". We disagree with this motivation, as banks in different jurisdictions, of different sizes and with different business practices may indeed come from different population-level distributions. In other words, the OpCar approach assumes all banks have a common population distribution from which their loss experience is drawn, and that this will be universal, no matter what your business practice, business volume or jurisdiction of operation. Such an assumption may lead to a less risk-sensitive framework, with poorer insight into actual risk processes in a given bank than a properly designed model developed for a particular business volume, operating region and business practice.

The background and some further studies on the precursor OpCar framework, which was originally supposed to replace just the BIA and TSA methods, are provided in Section 7. We explain how the OpCar simplified framework was developed, discuss the fact that it is based on an LDA model and a regression structure, and demonstrate how this model was estimated and developed. Along the way, we provide some scientific criticism of several technical aspects of the estimation and approximations utilized. It is important to still consider such aspects, as this model is the precursor to the SMA formula. That is, a single LDA is assumed for a bank, and a single-loss approximation (SLA) is used to estimate the 0.999 quantile of the annual loss. Four different severity distributions were fitted to the data from many banks, and a Poisson distribution is assumed for the frequency. Then, a nonlinear regression is used to regress the obtained bank capital (across many banks) onto different combinations of explanatory variables from bank books, to end up with the OpCar formula.

The currently proposed SMA for OpRisk capital in Basel Committee on Banking Supervision (2016) is the main subject of our paper. However, we note that it is based on the OpCar formulation, which itself is nothing more than an LDA model applied in an overly simplified fashion at the institution top level.

In Basel Committee on Banking Supervision (2016), it was proposed that all existing BIAs, TSAs and AMAs would be replaced with the SMA, calculating OpRisk capital as a function of the so-called business indicator (BI) and loss component (LC). Specifically, denote $X_i(t)$ as the $i$th loss and $N(t)$ as the number of losses in year $t$. Then, the SMA capital $K_{\mathrm{SMA}}$ is defined as

$$K_{\mathrm{SMA}}(\mathrm{BI}, \mathrm{LC}) = \begin{cases} \mathrm{BIC} & \text{if Bucket 1,} \\[4pt] 110 + (\mathrm{BIC} - 110)\,\ln\biggl(\exp(1) - 1 + \dfrac{\mathrm{LC}}{\mathrm{BIC}}\biggr) & \text{if Buckets 2–5.} \end{cases} \tag{2.1}$$

Here,

$$\mathrm{LC} = 7\,\frac{1}{T}\sum_{t=1}^{T}\sum_{i=1}^{N(t)} X_i(t) \;+\; 7\,\frac{1}{T}\sum_{t=1}^{T}\sum_{i=1}^{N(t)} X_i(t)\,\mathbf{1}_{\{X_i(t)>10\}} \;+\; 5\,\frac{1}{T}\sum_{t=1}^{T}\sum_{i=1}^{N(t)} X_i(t)\,\mathbf{1}_{\{X_i(t)>100\}}, \tag{2.2}$$

where $T = 10$ years (or at least five years for banks that do not have ten years of good-quality loss data in the transition period). Buckets and the business indicator component (BIC) are calculated as

$$\mathrm{BIC} = \begin{cases} 0.11 \times \mathrm{BI} & \text{if } \mathrm{BI} \leq 1000, \text{ Bucket 1,} \\ 110 + 0.15 \times (\mathrm{BI} - 1000) & \text{if } 1000 < \mathrm{BI} \leq 3000, \text{ Bucket 2,} \\ 410 + 0.19 \times (\mathrm{BI} - 3000) & \text{if } 3000 < \mathrm{BI} \leq 10\,000, \text{ Bucket 3,} \\ 1740 + 0.23 \times (\mathrm{BI} - 10\,000) & \text{if } 10\,000 < \mathrm{BI} \leq 30\,000, \text{ Bucket 4,} \\ 6340 + 0.29 \times (\mathrm{BI} - 30\,000) & \text{if } \mathrm{BI} > 30\,000, \text{ Bucket 5.} \end{cases} \tag{2.3}$$

BI is defined as the sum of three components: the interest, lease and dividend component; the services component; and the financial component. It is made up of almost the same profit and loss (P&L) items used for the calculation of GI, but combined in a different way (for the precise formula, see Basel Committee on Banking Supervision (2016)). All amounts in the above formulas are in € millions.
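The sketch below assembles (2.1)–(2.3) into a single SMA capital calculator (amounts in € millions). It is our own illustrative implementation of the formulas as reproduced above, with a hypothetical loss history as input; it is not code from the Basel consultation.

```python
import numpy as np

def bic(bi):
    """Business indicator component, equation (2.3); bi in EUR million."""
    buckets = [(0, 1000, 0.0, 0.11), (1000, 3000, 110.0, 0.15),
               (3000, 10_000, 410.0, 0.19), (10_000, 30_000, 1740.0, 0.23),
               (30_000, np.inf, 6340.0, 0.29)]
    for k, (lo, hi, base, slope) in enumerate(buckets, start=1):
        if bi <= hi:
            return base + slope * (bi - lo), k     # (BIC, bucket number)

def loss_component(losses_by_year):
    """Loss component, equation (2.2): losses_by_year is a list of arrays of
    individual losses (EUR million), one array per year of the T-year window."""
    T = len(losses_by_year)
    total = sum(x.sum() for x in losses_by_year)
    above10 = sum(x[x > 10].sum() for x in losses_by_year)
    above100 = sum(x[x > 100].sum() for x in losses_by_year)
    return (7 * total + 7 * above10 + 5 * above100) / T

def sma_capital(bi, lc):
    """SMA capital, equation (2.1)."""
    b, bucket = bic(bi)
    if bucket == 1:
        return b
    return 110.0 + (b - 110.0) * np.log(np.exp(1) - 1 + lc / b)

# Hypothetical bank: BI of EUR 2 billion (Bucket 2) and ten years of simulated losses.
rng = np.random.default_rng(0)
history = [rng.lognormal(mean=0.5, sigma=2.0, size=rng.poisson(100)) for _ in range(10)]
lc = loss_component(history)
print(f"LC = {lc:,.0f}, K_SMA = {sma_capital(2000.0, lc):,.0f} (EUR million)")
```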


3 STANDARDIZED MEASUREMENT APPROACH INTRODUCES CAPITAL INSTABILITY

In our analysis, we observed that the SMA fails to achieve the objective of capital stability. In this section, we consider several examples to illustrate this feature. In most cases, we show results only for the lognormal severity; the other distribution types considered in the OpCar model lead to similar or even more pronounced features.

3.1 Capital instability examples

Consider a simple representative model for a bank's annual OpRisk loss process, comprised of the aggregation of two generic loss processes: one with high frequency and low severity loss amounts, and the other with low frequency and high severity loss amounts, given by Poisson–Gamma and Poisson–lognormal models, respectively. We set the BI constant at €2 billion, halfway within the interval for Bucket 2 of the SMA. We kept the model parameters static over time and simulated a history of 1000 years of loss data for three differently sized banks (small, medium and large), using different parameter settings for the loss models to characterize such banks. For a simple analysis, we set a small bank as corresponding to the order of tens of millions of euros in average annual loss, a medium bank to the order of hundreds of millions of euros in average annual loss, and a large bank to the order of €1 billion in average annual loss. We then studied the variability that may arise in the capital under the SMA formulation, under the optimal scenario that models did not change, model parameters were not recalibrated and the business environment did not change significantly, in the sense that the BI was kept constant. In this case, we observe the core variation that arises just from the loss history experience of banks of the three different sizes over time.

Our analysis shows that a given institution can experience the situation in which its capital more than doubles from one year to the next, without any changes to the parameters, the model or the BI structure (Figure 1). This also means that two banks with the same risk profile can produce SMA capital numbers differing by a factor of more than two.

In summary, the simulation takes the case of a BI fixed over time, and the loss model for the institution is fixed according to two independent loss processes given by $\mathrm{Poisson}(\lambda)$–$\mathrm{Gamma}(\alpha, \beta)$ and $\mathrm{Poisson}(\lambda)$–$\mathrm{Lognormal}(\mu, \sigma)$. Here, $\mathrm{Gamma}(\alpha, \beta)$ is the gamma distribution of the loss severities, with mean $\alpha\beta$ and variance $\alpha\beta^2$; $\mathrm{Lognormal}(\mu, \sigma)$ is the lognormal distribution of severities with the mean of the log-severity equal to $\mu$ and the variance of the log-severity equal to $\sigma^2$.

The institution's total losses are set to be on average around 1000 per year, with 1% coming from the heavy-tailed Poisson–lognormal component. We perform two case studies: one in which the shape parameter of the heavy-tailed loss process component is $\sigma = 2.5$ and the other in which it is $\sigma = 2.8$. We summarize the settings for the two cases in Table 1.

TABLE 1 Test case 1 versus test case 2.

                  Test case 1                   Test case 2
            Mean annual    Annual loss    Mean annual    Annual loss
Bank size   loss           99.9% VaR      loss           99.9% VaR
            (€ million)    (€ million)    (€ million)    (€ million)
Small            15            260             21            772
Medium          136           1841            181           5457
Large           769         14 610           1101         41 975

Test case 1 corresponds to the risk processes Poisson(10)–Lognormal(μ = {10; 12; 14}, σ = 2.5) and Poisson(990)–Gamma(α = 1, β = {10^4; 10^5; 5 × 10^5}). Test case 2 corresponds to Poisson(10)–Lognormal(μ = {10; 12; 14}, σ = 2.8) and Poisson(990)–Gamma(α = 1, β = {10^4; 10^5; 5 × 10^5}).

The ideal situation, which would indicate that the SMA was not producing capital figures that were too volatile, would be if each of parts (a) to (f) in Figure 1 were very closely constrained around 1. However, as we can see, the variability in capital from year to year in all size institutions can be very significant. Note that we used different sequences of independent random numbers to generate results for small, medium and large banks in a test case. Thus, caution should be exercised in interpreting the results of test case 1 (or test case 2) for the relative comparison of capital variability in different banks. At the same time, comparing test case 1 with test case 2, one can certainly observe that an increase in $\sigma$ increases the capital variability.
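A minimal version of this experiment can be sketched as follows, reusing the sma_capital and loss_component helpers from the Section 2 sketch. The rolling ten-year loss window and the fixed BI of €2 billion follow the setup described above; the remaining details (random seed, unit conversion, summary statistic) are our own simplifications.

```python
import numpy as np

def simulate_sma_ratio(lam_heavy=10, mu=12.0, sigma=2.5,
                       lam_light=990, gamma_shape=1.0, gamma_scale=1e5,
                       n_years=1000, bi=2000.0, seed=42):
    """Simulate annual losses from the two loss processes (severities in EUR),
    roll a 10-year window to form the SMA loss component (EUR million), and
    return the ratio of the yearly SMA capital to its long-term average."""
    rng = np.random.default_rng(seed)
    annual = []
    for _ in range(n_years):
        heavy = rng.lognormal(mu, sigma, size=rng.poisson(lam_heavy)) / 1e6   # EUR -> EUR million
        light = rng.gamma(gamma_shape, gamma_scale, size=rng.poisson(lam_light)) / 1e6
        annual.append(np.concatenate([heavy, light]))
    capitals = np.array([
        sma_capital(bi, loss_component(annual[t - 10:t]))   # helpers from the Section 2 sketch
        for t in range(10, n_years)
    ])
    return capitals / capitals.mean()

ratios = simulate_sma_ratio()
print(f"min/max ratio of SMA capital to long-term average: {ratios.min():.2f} / {ratios.max():.2f}")
```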

3.2 Capital instability and BI when SMA matches AMA

As a second study of the SMA capital instability, we consider a loss process model $\mathrm{Poisson}(\lambda)$–$\mathrm{Lognormal}(\mu, \sigma)$. Instead of fixing the BI to the midpoint of Bucket 2 of the SMA formulation, we numerically solve for the BI that would produce an SMA capital equal to the VaR for a Poisson–lognormal LDA model at the annual 99.9% quantile level, $\mathrm{VaR}_{0.999}$.

In other words, we find the BI such that the LDA capital matches the SMA capital in the long term. This is achieved by solving the following nonlinear equation numerically via root search for the BI:

$$K_{\mathrm{SMA}}(\mathrm{BI}, \widetilde{\mathrm{LC}}) = \mathrm{VaR}_{0.999}, \tag{3.1}$$

where $\widetilde{\mathrm{LC}}$ is the long-term average of the loss component (2.2), which can be calculated in the case of $\mathrm{Poisson}(\lambda)$ frequency as

$$\widetilde{\mathrm{LC}} = \lambda\,\bigl(7\,\mathrm{E}[X] + 7\,\mathrm{E}[X\,\mathbf{1}_{\{X > L\}}] + 5\,\mathrm{E}[X\,\mathbf{1}_{\{X > H\}}]\bigr). \tag{3.2}$$


FIGURE 1 Ratio of the SMA capital to the long-term average. [Figure omitted: six panels, (a)–(f), of year-on-year SMA capital divided by its long-term average over 1000 simulated years.] Test case 1 corresponds to σ = 2.5 (parts (a), (c) and (e)); test case 2 corresponds to σ = 2.8 (parts (b), (d) and (f)). Other parameters are as specified in Table 1. Results for small (parts (a) and (b)), intermediate (parts (c) and (d)) and large (parts (e) and (f)) banks are based on different realizations of random variables in the simulation.

In the case of severity $X$ from $\mathrm{Lognormal}(\mu, \sigma)$, it can be found in closed form as

$$\widetilde{\mathrm{LC}}(\lambda, \mu, \sigma) = \lambda\, \mathrm{e}^{\mu + \frac{1}{2}\sigma^2}\biggl(7 + 7\,\Phi\biggl(\frac{\sigma^2 + \mu - \ln L}{\sigma}\biggr) + 5\,\Phi\biggl(\frac{\sigma^2 + \mu - \ln H}{\sigma}\biggr)\biggr), \tag{3.3}$$

where $\Phi(\cdot)$ denotes the standard normal distribution function, $L$ is €10 million and $H$ is €100 million, as specified by the SMA formula (2.2).

One can approximate $\mathrm{VaR}_{0.999}$ under the Poisson–lognormal model according to the so-called SLA (discussed in Section 7), given for $\alpha \uparrow 1$ by

$$\mathrm{VaR}_\alpha \approx \mathrm{SLA}(\alpha; \lambda, \mu, \sigma) = \exp\biggl(\mu + \sigma\,\Phi^{-1}\biggl(1 - \frac{1 - \alpha}{\lambda}\biggr)\biggr) + \lambda \exp\bigl(\mu + \tfrac{1}{2}\sigma^2\bigr), \tag{3.4}$$

where $\Phi^{-1}(\cdot)$ is the inverse of the standard normal distribution function. In this case, the results for the implied BI values are presented in Table 2 for $\lambda = 10$ and varied lognormal $\mu$ and $\sigma$ parameters. Note that it is also not difficult to calculate $\mathrm{VaR}_\alpha$ "exactly" (within numerical error) using Monte Carlo, Panjer recursion or fast Fourier transform (FFT) numerical methods.


TABLE 2 Implied BI in billions, λ = 10.

          σ
μ        1.5     1.75     2.0     2.25     2.5      2.75      3.0
10       0.06    0.14     0.36    0.89     2.41      5.73     13.24
12       0.44    1.05     2.61    6.12    14.24     32.81     72.21
14       2.52    5.75    13.96   33.50    76.63    189.22    479.80
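A sketch of this matching procedure is given below: it evaluates the closed-form long-term loss component (3.3) and the SLA (3.4), then root-searches for the BI solving (3.1). It reuses the sma_capital helper from the Section 2 sketch; the SciPy routines are our implementation choice, not the authors' code, and the printed example is only meant to be compared loosely with Table 2.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

L_THRESH, H_THRESH = 10.0, 100.0      # EUR million thresholds from (2.2)

def lc_longterm(lam, mu, sigma):
    """Equation (3.3): long-term average loss component, in EUR million
    (severities are in EUR, hence the 1e-6 conversion)."""
    mean_sev = np.exp(mu + 0.5 * sigma**2) * 1e-6
    p10 = norm.cdf((sigma**2 + mu - np.log(L_THRESH * 1e6)) / sigma)
    p100 = norm.cdf((sigma**2 + mu - np.log(H_THRESH * 1e6)) / sigma)
    return lam * mean_sev * (7 + 7 * p10 + 5 * p100)

def sla_var(alpha, lam, mu, sigma):
    """Equation (3.4): single-loss approximation of VaR_alpha, in EUR million."""
    return (np.exp(mu + sigma * norm.ppf(1 - (1 - alpha) / lam))
            + lam * np.exp(mu + 0.5 * sigma**2)) * 1e-6

def implied_bi(lam, mu, sigma, alpha=0.999):
    """Solve K_SMA(BI, LC~) = VaR_alpha, equation (3.1), for BI (EUR million)."""
    lc, target = lc_longterm(lam, mu, sigma), sla_var(alpha, lam, mu, sigma)
    f = lambda bi: sma_capital(bi, lc) - target      # sma_capital from the Section 2 sketch
    return brentq(f, 1.0, 1e7)

print(f"Implied BI for lambda=10, mu=12, sigma=2.5: EUR {implied_bi(10, 12, 2.5)/1000:.2f} billion")
```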

FIGURE 2 Ratio of the SMA capital to the long-term average. [Figure omitted: year-on-year SMA capital divided by its long-term average over 1000 simulated years, for the implied-BI Poisson–lognormal example described below.]

For a study of capital instability, we use the BI obtained from matching the long-term average SMA capital with the long-term LDA capital, as described above, for an example generated by $\mathrm{Poisson}(10)$–$\mathrm{Lognormal}(\mu = 12, \sigma = 2.5)$. We correspondingly found an implied BI of €14.714 billion (Bucket 4). In this case, we calculate $\mathrm{VaR}_{0.999}$ using Monte Carlo instead of the SLA (3.4); thus, the value of the implied BI is slightly different from that in Table 2. In this case, the SMA capital based on the long-term average LC is €1.87 billion, which is about the same as $\mathrm{VaR}_{0.999} = €1.87$ billion. The year-on-year variability in the capital with this combination of implied BI and Poisson–lognormal loss model is given in Figure 2. This shows that, again, we get capital instability, with capital doubling from year to year compared with the long-term average SMA capital.

3.3 SMA is excessively sensitive to the dominant loss process

Consider an institution with a wide range of different types of OpRisk loss processes present in each of its business units and risk types. As in our first study above, we split these loss processes into two categories: high-frequency/low-severity types and low-frequency/high-severity types, given by $\mathrm{Poisson}(990)$–$\mathrm{Gamma}(1, 5 \times 10^5)$ and $\mathrm{Poisson}(10)$–$\mathrm{Lognormal}(14, \sigma)$, respectively. In this study, we consider the sensitivity of SMA capital to the dominant loss process. More precisely, we study the sensitivity of SMA capital to the parameter $\sigma$, which dictates how heavy the tail of the most extreme loss process will be. Figure 3 shows boxplot results based on simulations performed over 1000 years for different values of $\sigma = \{2; 2.25; 2.5; 2.75; 3\}$.

FIGURE 3 Boxplot results for the ratio of the SMA capital to the long-term average for different values of the lognormal shape parameter σ, based on simulations over 1000 years. [Figure omitted: boxplots of the capital ratio for σ = 2.00, 2.25, 2.50, 2.75 and 3.00.]

These results can be interpreted to mean that banks with more extreme loss experiences, as indicated by heavier-tailed dominant loss processes (increasing $\sigma$), tend to have significantly greater capital instability compared with banks with less extreme loss experiences. Importantly, these findings demonstrate how nonlinear this increase in SMA capital can be as the heaviness of the dominant loss process tail increases. For instance, banks with relatively low heavy-tailed dominant loss processes ($\sigma = 2$) tend to have a year-on-year capital variability of between 1.1 and 1.4 times the long-term average SMA capital. However, banks with relatively heavy-tailed dominant loss processes ($\sigma = 2.5$, 2.75 or 3) tend to have excessively unstable year-on-year capital figures, with variation in capital being as bad as three to four times the long-term average SMA capital. Further, it is clear that, when one considers each boxplot as representing a population of banks with similar dominant loss process characteristics, the population distribution of capital becomes increasingly skewed and demonstrates increasing kurtosis in the right tail as the tail heaviness of the dominant loss process in each population increases. This clearly demonstrates excessive variability in capital year on year for banks with heavy-tailed dominant loss processes.

Therefore, the SMA fails to achieve the claimed objective of robust capital estimation. Capital produced by the proposed SMA approach will be neither stable nor robust, with robustness worsening as the severity of OpRisk increases. In other words, banks with higher-severity OpRisk exposures will be substantially worse off under the SMA approach with regard to capital sensitivity.

4 REDUCED RISK RESPONSIVITY AND INDUCED RISK-TAKING

It is obvious that the SMA capital is less responsive to risk drivers and the variation in loss experience that is observed in a bank at the granularity of the Basel II fifty-six business line/event type (BL/ET) units of measure.

This is due to the naive approach of modeling at the level of granularity assumed by the SMA, which only captures variability at the institution level, and not the intra-variability within the institution at business unit levels explicitly. Choosing to model at the institution level, rather than at the units of measure or granularity of the fifty-six Basel categories, reduces model interpretability and reduces the risk responsivity of the capital.

Conceptually, it relates to the simplification of the AMA under the SMA adopting a top-down formulation that reduces OpRisk modeling to a single unit of measure, as if all operational losses were following a single generating mechanism. This is equivalent to considering that earthquakes, cyber-attacks and human errors are all generated by the same drivers, and that they manifest in the loss model and loss history in the same manner as other losses that are much more frequent and have lower consequence, such as credit card fraud, when viewed from the institution-level loss experience. It follows quite obviously that the radical simplification and aggregation of such heterogeneous risks in such a simplified model cannot claim the benefit of risk sensitivity, even remotely.

Therefore, the SMA fails to achieve the claimed objective of capital risk sensitivity. Capital produced by the proposed SMA will be neither stable nor related to the risk profile of an institution. Moreover, the SMA induces risk-taking behaviors, and thus fails to achieve the Basel Committee objectives of stability and soundness of financial institutions. Moral hazard and other unintended consequences include the following.

More risk-taking. Without the possibility of capital reduction for better risk management, in the face of increased funding costs due to the rise in capital, it is predictable that financial institutions will raise their risk-taking to a level that is sufficient to pay for the increased cost of the new fixed capital. The risk appetite of a financial institution would mechanically increase. This effect goes against the Basel Committee objective of having a safe and secured financial system.

Denying loss events. While incident data collection involves a constant effort over a decade in every institution, large or small, the SMA is the most formidable disincentive to report losses. There are many opportunities to compress historical losses, such as ignoring, slicing or transferring them to other risk categories. The wish expressed in the Basel consultation that "banks should use ten years of good-quality loss data" is actually meaningless if the collection can be gamed. Besides, what about new banks, or BIA banks that do not currently have a loss data collection process?

Hazard of reduced provisioning activity. Provisions, which should be a substitution for capital, are vastly discouraged by the SMA, as they are penalized twice: counted both in the BI and in the losses, and not accounted for as a capital reduction. The SMA captures both the expected loss and the unexpected loss, when the regulatory capital should only reflect the unexpected loss. We believe that this confusion might come from the use of the OpCar model as a benchmark, because the OpCar captures both equally. The SMA states, in the definition of gross loss, net loss and recovery in Basel Committee on Banking Supervision (2016, Section 6.2, p. 10) under item (c), that the gross loss and net loss should include "provisions or reserves accounted for in the P&L against the potential operational loss impact". This clearly indicates the nature of the double counting of this component, since provisions will enter both in the BI through the P&L and in the loss data component of the SMA capital.

Ambiguity in provisioning and resulting capital variability. The new guidelines on provisioning under the SMA framework follow a similar general concept to those that recently came into effect in credit risk with International Financial Reporting Standard 9 (IFRS 9). This was set forward by the International Accounting Standards Board (IASB), which completed the final element of its comprehensive response to the financial crisis with the publication of IFRS 9 Financial Instruments in July 2014. The IFRS 9 guidelines explicitly outline in Phase 2 an impairment framework, which specifies in a standardized manner how to deal with delayed recognition of (in this case) credit losses on loans (and other financial instruments). IFRS 9 achieves this through a new expected loss impairment model that requires more timely recognition of expected credit losses. Specifically, the new standard requires entities to account for expected credit losses from when financial instruments are first recognized, and it lowers the threshold for recognition of full lifetime expected losses. However, the SMA OpRisk version of such a provisioning concept for OpRisk losses fails to provide such a standardized and rigorous approach. Instead, the SMA framework simply states that loss databases should now include:

Losses stemming from operational risk events with a definitive financial impact, which are temporarily booked in transitory and/or suspense accounts and are not yet reflected in the P&L ("pending losses"). Material pending losses should be included in the SMA loss data set within a time period commensurate with the size and age of the pending item.

Unlike the more specific IFRS 9 accounting standards, the SMA leaves a level of ambiguity. Further, this ambiguity can now propagate directly into the SMA capital calculation, creating the potential for capital variability and instability. For instance, there is no specific guidance or regulatory requirement to standardize the manner in which a financial institution decides what is to be considered a "definitive financial impact", or what threshold it should use when deciding on the existence of a "material pending loss". Nor are specific rules given about the time periods over which such pending losses must be included in the SMA loss data set and, therefore, in the capital. The current guidance simply states that "material pending losses should be included in the SMA loss data set within a time period commensurate with the size and age of the pending item". This is too imprecise, and it may lead to the manipulation of provisions reporting and categorization that will directly reduce SMA capital over the averaged time periods in which the loss data component is considered. Further, if different financial institutions adopt different provisioning rules, the capital obtained for two banks with identical risk appetites and similar loss experiences could differ substantially as a result of their provisioning practices.

Imprecise guidance on timing loss provisioning. The SMA guidelines also introduce the topic of "timing loss provisioning", which they describe as follows:

Negative economic impacts booked in a financial accounting period, due to operational risk events impacting the cash flows or financial statements of previous financial accounting periods (timing losses). Material "timing losses" should be included in the SMA loss data set when they are due to operational risk events that span more than one financial accounting period and give rise to legal risk.

However, we would argue that standardization of a framework requires more explicit guidance as to what constitutes a "material timing loss". Otherwise, different timing loss provisioning approaches will result in different loss databases and, consequently, differing SMA capital, purely as a consequence of the provisioning practice adopted. In addition, the ambiguity of this statement does not make it clear whether such losses may be accounted for twice.


Grouping of losses. Under previous AMA internal modeling approaches, the unit of measure, or granularity, of the loss modeling was reported according to the fifty-six BL/ET categories specified in the Basel II framework. Under the SMA, however, the unit of measure is at the institution level, so the granularity of the loss process modeling and interpretation is lost. This has consequences when considered in light of the new SMA requirement that "losses caused by a common operational risk event or by related operational risk events over time must be grouped and entered into the SMA loss data set as a single loss." Previously, in internal modeling, losses within a given BL/ET would be recorded as a random number (frequency model) of individual independent loss amounts (severity model). Then, for instance, under an LDA model, such losses would be aggregated only as a compound process, so individual losses were combined on an annual basis rather than on a per-event basis. The SMA loss data reporting marks a clear departure on this point: under the SMA, it is proposed to aggregate the individual losses and report them in the loss database as a single "grouped" loss amount. This is not advisable from a modeling perspective, nor from an interpretation and practical risk management perspective; the contrast between the two treatments is illustrated in the sketch below. Further, the SMA guidance states that "the bank's internal loss data policy should establish criteria for deciding the circumstances, types of data and methodology for grouping data as appropriate for its business, risk management and SMA regulatory capital calculation needs." If the aim of the SMA was to standardize OpRisk loss modeling in order to make capital less variable due to internal modeling decisions, it is hard to see how this will be achieved with imprecise guidance such as that quoted above. One could argue that this generic statement on criteria establishment essentially removes the internal modeling framework of the AMA and replaces it with internal heuristic (non-model-based, non-scientifically-verifiable) rules to "group" data. This has the potential to result in even greater variability in capital than was experienced with non-standardized AMA internal models. At least under AMA internal modeling, in principle, the statistical models could be scientifically criticized.
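As a purely illustrative sketch (not taken from the SMA or AMA texts), the following compares the two treatments of the same simulated loss history: LDA-style recording, where every individual loss is kept and only the annual total is formed as a compound sum, versus SMA-style grouping, where losses from related events are merged into a single loss record. The event identifiers and grouping rule here are hypothetical; the point is simply that grouping changes the recorded frequency and the empirical severity sample.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical one-year loss history: each loss carries an event identifier, and losses
# sharing an identifier are "related" in the sense of the SMA grouping requirement.
event_ids = rng.integers(0, 30, size=80)                # 80 individual losses from 30 underlying events
losses = rng.lognormal(mean=11.0, sigma=1.5, size=80)   # individual loss amounts (EUR)

# LDA-style recording: individual severities are retained; only the annual total is a compound sum.
annual_total = losses.sum()
lda_records = losses                                    # 80 severity data points

# SMA-style grouping: related losses are merged and entered as a single loss per event.
sma_records = np.array([losses[event_ids == e].sum() for e in np.unique(event_ids)])

print(f"annual total (identical in both views): {annual_total:,.0f}")
print(f"number of records   LDA: {lda_records.size}   SMA grouped: {sma_records.size}")
print(f"mean recorded loss  LDA: {lda_records.mean():,.0f}   SMA grouped: {sma_records.mean():,.0f}")
```

The annual aggregate is unchanged, but the recorded frequency and the distribution of recorded loss amounts are not, and these are precisely the quantities on which any per-event or threshold-based analysis relies.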

Ignoring the future. All forward-looking aspects of risk identification, assessment and mitigation, such as scenarios and emerging risks, have disappeared in the new Basel consultation. This introduces the risk of setting banking institutions back in their progress toward a better understanding of threats: even though such threats may be increasing in frequency and severity, and the bank's exposure to them may be increasing due to business practices, this cannot be reflected in the SMA framework capital. In that sense, the SMA is only backward looking.


5 STANDARDIZED MEASUREMENT APPROACH FAILS TO UTILIZE RANGE OF DATA SOURCES OR PROVIDE RISK MANAGEMENT INSIGHT

OpRisk modeling is no different from any other scientific discipline when it comes to developing a statistical modeling framework. In practical settings, it is therefore important to set the context with respect to the data and the Basel II regulatory requirements that govern the data used in OpRisk modeling. On the data side, it has been an ongoing challenge for banks to develop suitable loss databases to record observed losses, internally and externally, alongside other important information that may aid in modeling.

A key process in OpRisk modeling has not just been the collection itself but, importantly, how and what to collect, as well as how to classify it. The first and key phase in any analytical process, certainly in the case of OpRisk models, is to cast the data into a form amenable to analysis. This is the very first task that an analyst faces when they set out to model, measure and even manage OpRisk. At this stage, there is a need to establish how the information available can be modeled to act as an input in the analytical process that would allow proper risk assessment processes to be developed. In risk management, and particularly in OpRisk, this activity is today quite regulated, and the entire data process, from collection to maintenance and use, has strict rules. In this sense, we see that the qualitative and quantitative aspects of OpRisk management cannot be dissociated, as they act on one another in a causal manner.

Any OpRisk modeling framework starts with a solid risk taxonomy, so that risks are properly classified. Firms also need to perform a comprehensive risk mapping across their processes to make sure that no risk is left out of the measurement process. This risk mapping is particularly important, as it directly affects the granularity of the modeling, the features observed in the data, the ability to interpret the loss model outputs and the ability to collect and model data. Further, it can affect the risk sensitivity of the models. It is a process that all large financial institutions have gone through, at great cost of manpower and time, in order to comply with Basel II regulations.

Under the Basel II regulations, there are four major data elements that should be used to measure and manage OpRisk: internal loss data, external loss data, scenario analysis, and business environment and internal control factors (BEICFs).

To ensure that data is correctly specified in an OpRisk modeling framework, one must undertake a risk mapping or taxonomy exercise, which basically encompasses the following: description, identification, nomenclature and classification. This is a very lengthy and time-consuming process that has typically been carried out by many banks at a fairly fine level of granularity with regard to the business structure. It involves going through, in excruciating detail, every major process of the firm. The outcome of this exercise is the building block of any risk classification study. We also observe that, in practice, this task is complicated by the fact that, in OpRisk settings, once a risk materializes the loss process will often continue to evolve until the event is closed, sometimes for many years if we consider legal cases. In some cases, the same list of incidents taken at two different time points will not have the same distribution of loss magnitude. Here, it is important to bear in mind that a risk is not a loss: we may have risk and never experience an incident, and we may have incidents and never experience a loss. These considerations should also be taken into account when developing a classification or risk-mapping process.

There are roughly three ways in which firms drive this risk taxonomy exercise: through cause, impact or events. Event-driven risk classification is probably the most common approach used by large firms and has been the emerging best practice in OpRisk. This process classifies risk according to OpRisk events. It is the classification used by the Basel Committee, for which a detailed breakdown into event types at level 1, level 2 and activity groups is provided in Basel Committee on Banking Supervision (2006, pp. 305-307). Further, it is generally accepted that this classification has a definition broad enough to make it easier to accept and adopt changes in the process, should they arise. It is also very interesting to note that a control taxonomy may affect the perception of events in the risk taxonomy, especially if the difference between inherent and residual risk is not perfectly understood. Residual risks are defined as inherent risks net of controls; ie, once we have controls in place, we manage the residual risks, while we may still be reporting inherent risks, and this may bias the perception of the bank's risk profile. The risk/control relationship (in terms of taxonomy) is not that easy to handle, as risk owners and control owners might be in completely different departments, preventing a smooth transmission of information. This, we believe, also needs further consideration in emerging best practice, along with its governance implications for OpRisk management.

5.1 The elements of the OpRisk framework

The four key elements that should be used in any OpRisk framework are internal loss data, external loss data, BEICFs and scenario analysis.

In terms of OpRisk losses, the definition typically means a gross monetary loss or a net monetary loss, ie, net of recoveries but excluding insurance or tax effects, resulting from an operational loss event. An operational loss includes all expenses associated with an operational loss event except for opportunity costs, forgone revenue and costs related to risk management and control enhancements implemented to prevent future operational losses. These losses need to be classified using the Basel categories (and internal categories, if these are different from Basel's) and mapped to a firm's business units. The Basel II regulation says that firms need to collect at least five years of data, but most decide not to discard any loss, even those older than this limit. Losses are difficult to acquire, and most firms even pay to supplement internal losses with external loss databases. Considerable challenges exist in collating a large volume of data, in different formats and from different geographical locations, into a central repository, as well as in ensuring that these data feeds are secure and can be backed up and replicated in case of an accident.

There is also a considerable challenge with OpRisk loss data recording and reporting related to the length of time taken to resolve OpRisk losses. For some OpRisk events, usually the largest, there will be a long interval between the inception of the event and final closure, due to the complexity of these cases. As an example, most litigation cases that came out of the 2007-8 financial crisis were only settled by 2012-13. These legal cases have their own life cycle and start with a discovery phase, in which lawyers and investigators argue whether the other party has a proper case to actually take the action to court. At this stage, it is difficult to even come up with an estimate for the eventual losses. Even when a case is accepted by the judge, it might be several years until lawyers and risk managers are able to properly estimate the losses.

Firms can set up reserves for these losses (and these reserves should be included in the loss database), but they usually only do so a few weeks before the case is settled, in order to avoid disclosure issues (ie, the counterparty eventually knowing the amount reserved and using this information to its advantage). This creates an issue for setting up OpRisk capital: firms would know that a large loss is coming but could not yet include it in the database, so the eventual inclusion of the settlement would cause some volatility in the capital. The same would happen if a firm set a reserve of, for example, US$1 billion for a case and then, a few months later, a judge decided in the firm's favor and this large loss had to be removed. For this reason, firms need to have a clear procedure on how to handle these large, long-duration losses.

The other issue with OpRisk loss reporting and recording is the aspect of adding costs to losses. As mentioned, an operational loss includes all expenses associated with an operational loss event except for opportunity costs, forgone revenue and costs related to risk management and control enhancements implemented to prevent future operational losses. Most firms, for example, do not have enough lawyers on the payroll (or lawyers with the right expertise) to deal with all of their cases, particularly the largest ones or those that demand specific expertise and whose legal fees are quite high. There will be cases in which the firm wins in the end, perhaps thanks to external law firms, but at a cost that can reach tens of millions of dollars. In such cases, even with a court victory, there will be an operational loss. This leads to the consideration of provisioning of expected OpRisk losses, which is unlike credit risk, where the calculated expected credit losses might be covered by general and/or specific provisions in the balance sheet. For OpRisk, due to its multidimensional nature, the treatment of expected losses is more complex and restrictive. Recently, with the issuing of IAS 37 by the International Accounting Standards Board (IFRS 2012), the rules have become clearer as to what might (or might not) be subject to provisions. IAS 37 establishes three specific applications of these general requirements, namely that

- a provision should not be recognized for future operating losses,

- a provision should be recognized for an onerous contract, ie, a contract in which the unavoidable costs of meeting its obligations exceed the expected economic benefits,

- a provision for restructuring costs should be recognized only when an enterprise has a detailed formal plan for restructuring and has raised a valid expectation in those affected.

The last of these should exclude costs such as retraining or relocating continuing staff, marketing, or investment in new systems and distribution networks, which the restructuring does not necessarily entail. IAS 37 requires that provisions should be recognized in the balance sheet when, and only when, an enterprise has a present obligation (legal or constructive) as a result of a past event. The event must be likely to call upon the resources of the institution to settle the obligation, and it must be possible to form a reliable estimate of the amount of the obligation. Provisions in the balance sheet should be at the best estimate of the expenditure required to settle the present obligation at the balance sheet date. IAS 37 also indicates that the amount of the provision should not be reduced by gains from the expected disposal of assets, nor by expected reimbursements (arising from, for example, insurance contracts or indemnity clauses). When and if it is virtually certain that reimbursement will be received should the enterprise settle the obligation, this reimbursement should be recognized as a separate asset.

We also note the following key points relating to regulation regarding provisioning, capital and expected loss (EL) components in "Detailed criteria 669" (Basel Committee on Banking Supervision 2006, p. 151). This portion of the regulation describes a series of quantitative standards that will apply to internally generated OpRisk measures for purposes of calculating the regulatory minimum capital charge.

(a) Any internal operational risk measurement system must be consistent with the scope of operational risk defined by the Committee in paragraph 644 and the loss event types defined in Annex 9.

(b) Supervisors will require the bank to calculate its regulatory capital requirement as the sum of EL and unexpected loss (UL), unless the bank can demonstrate that it is adequately capturing EL in its internal business practices. That is, to base the minimum regulatory capital requirement on UL alone, the bank must be able to demonstrate to the satisfaction of its national supervisor that it has measured and accounted for its EL exposure.


(c) A bank's risk measurement system must be sufficiently "granular" to capture the major drivers of operational risk affecting the shape of the tail of the loss estimates.

Here, note that if EL was accounted for, ie, provisioned, then it should not be covered by capital requirements again.

With regard to BEICF data, in order to understand the importance of BEICFs in OpRisk practice, we discuss this data source in the form of key risk indicators (KRIs), key performance indicators (KPIs) and key control indicators (KCIs).

A KRI is a metric of a risk factor. It provides information on the level of exposure to a given OpRisk of the organization at a particular point in time. KRIs are useful tools for business line managers, senior management and boards to help monitor the level of risk-taking in an activity or organization, with regard to their risk appetite.

Performance indicators, usually referred to as KPIs, measure performance or the achievement of targets. Control effectiveness indicators, usually referred to as KCIs, are metrics that provide information on the extent to which a given control is meeting its intended objectives. Failed tests on key controls are natural examples of effective KCIs.

KPIs, KRIs and KCIs overlap in many instances, especially when they signal breaches of thresholds: a poor performance often becomes a source of risk. Poor technological performance, such as system downtime, for instance, becomes a KRI for errors and data integrity. KPIs of failed performance provide a good source of potential risk indicators. Failed KCIs are even more obvious candidates for preventive KRIs: a key control failure always constitutes a source of risk.

Indicators can be used by organizations as a means of control to track changes in their exposure to OpRisk. When selected appropriately, indicators ought to flag any change in the likelihood or the impact of a risk occurring. For financial institutions that calculate and hold OpRisk capital under more advanced approaches, such as the previous AMA internal model approaches, KPIs, KRIs and KCIs are advisable metrics to capture BEICFs. While the definition of BEICF differs from one jurisdiction to another, and in many cases is specific to individual organizations, these factors must

- be risk sensitive (here, the notion of risk goes beyond incidents and losses),

- provide management with information on the risk profile of the organization,

- represent meaningful drivers of exposure that can be quantified,

- be used across the entire organization.

While some organizations include the outputs of their risk and control self-assessment programs under their internal definition of BEICFs, indicators are an appropriate mechanism to satisfy these requirements, implying that there is an indirect regulatory requirement to implement and maintain an active indicator program (see the discussion in Chapelle (2013)).

For instance, incorporating BEICFs into OpRisk modeling reflects the modeling assumption that one can see OpRisk as a function of the control environment. If the control environment is sound and under control, large operational losses are less likely to occur, and OpRisk can be seen as under control. Therefore, understanding the firm's business processes, mapping the risks on these processes and assessing how the implemented controls behave is the fundamental role of the OpRisk manager. However, the SMA does not provide any real incentive mechanism, first for undertaking such a process, and second for incorporating this valuable information into the capital calculation.

5.2 SMA discards 75% of OpRisk data types

Both the Basel II and Basel III regulations emphasize the significance of incorporating a variety of loss data into OpRisk modeling and, therefore, ultimately into capital calculations. As has just been discussed, the four primary data sources to be included are internal loss data, external loss data, scenario analysis and BEICFs. However, under the new SMA framework, only the first data source is utilized; the other three are now discarded.

Further, even if the decision to drop BEICFs were reversed in revisions to the SMA guidelines, we argue that reinstating them would not be easy. Because the level of model granularity under the SMA framework is only at the institution level, the framework does not easily lend itself to the incorporation of key OpRisk data sources such as BEICFs and scenario data.

To business line managers, KRIs help to signal a change in the level of risk exposure associated with specific processes and activities. For quantitative modelers, KRIs are a way of including BEICFs in OpRisk capital. However, since BEICF data does not form a component of the required data for the SMA model, there is no longer a regulatory requirement or incentive under the proposed SMA framework to make efforts to develop such BEICF data sources. This reduces the effectiveness of the risk models through the loss of a key source of information. In addition, the utility of such data for risk management practitioners and managers is reduced, as this data is no longer collected with the same required scrutiny, including validation, data integrity, maintenance and reporting, that was previously required for AMA internal models using such data.

These key sources of OpRisk data are not included in the SMA and cannot easily be incorporated into an SMA framework, even if there were a desire to do so, due to the level of granularity implied by the SMA. This makes capital calculations less risk sensitive. Further, the lack of scenario-based data incorporated into the SMA model makes it less forward looking and anticipatory than an internal model-based capital calculation framework.

6 THE STANDARDIZED MEASUREMENT APPROACH CAN BE A SUPER-ADDITIVE CAPITAL CALCULATION

The SMA appears to have the unfortunate feature that, in a range of jurisdictions, the capital it produces at the group level may exceed the sum of the capital produced for the constituent institutions; that is, it can be super-additive. This property might have been introduced by the regulator on purpose, to encourage the splitting of very large institutions, though it is not stated explicitly in the Basel Committee documents. In this section, we show several examples of super-additivity and discuss its implications.

6.1 Examples of SMA super-additivity

Consider two banks with identical BI and LC. The first bank has only one entity, while the second has two entities. The two entities of the second bank have the same BI and the same LC as each other, each equal to half the BI and half the LC of the first (joint) bank.

In the first case, Table 3(a), we consider the situation of a bucket shift: the SMA capital obtained for the joint bank is €5771 million, while the sum of the SMA capital obtained for the two entities of the second bank is only €5387 million. In this example, the SMA does not capture a diversification benefit; on the contrary, it assumes that the global impact of an incident is larger than the sum of the parts. Here, the joint bank is in Bucket 5, while the entities fall in Bucket 4. In the second case, Table 3(b), there is no bucket shift between the joint bank (Bank 1) and the two-entity bank (Bank 2): Bank 1 is in Bucket 5, and the entities of Bank 2 are in Bucket 5 too. In this case, the joint bank has an SMA capital of €11 937 million, whereas the two-entity bank has an SMA capital of €10 674 million. Again, the super-additive property appears.
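These figures are easy to check directly. The following is a minimal sketch of the SMA capital calculation as we read it from the March 2016 consultative document (marginal BI bucket coefficients plus the logarithmic internal loss multiplier, all amounts in € million); the function names are our own, and the bucket coefficients should be verified against the consultation itself. The two print statements reproduce Table 3(a).

```python
import math

# Marginal BI bucket boundaries (EUR million) and coefficients, as given in the
# March 2016 BCBS consultative document (to be checked against the source).
BUCKETS = [(1_000, 0.11), (3_000, 0.15), (10_000, 0.19), (30_000, 0.23), (float("inf"), 0.29)]

def bi_component(bi):
    """Business indicator component (BIC), piecewise linear in BI (EUR million)."""
    bic, lower = 0.0, 0.0
    for upper, coeff in BUCKETS:
        bic += coeff * (min(bi, upper) - lower)
        if bi <= upper:
            break
        lower = upper
    return bic

def sma_capital(bi, lc):
    """SMA capital (EUR million) for a given BI and loss component LC."""
    bic = bi_component(bi)
    if bi <= 1_000:          # bucket 1: the internal loss multiplier is not applied
        return bic
    return 110 + (bic - 110) * math.log(math.e - 1 + lc / bic)

print(sma_capital(32_000, 4_000))       # ~5771: the joint bank in Table 3(a)
print(2 * sma_capital(16_000, 2_000))   # ~5387: the sum over the two entities in Table 3(a)
```

Changing the inputs to BI = 70 000 and LC = 4000 for the joint bank (35 000 and 2000 for each entity) reproduces the €11 937 million versus €10 674 million figures of Table 3(b).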

Of course, in the examples in Table 3 we set BI and LC somewhat arbitrarily. So, in the next example, we use the BI implied by the 0.999 VaR of an LDA model. In particular, assume a bank with a Poisson(λ)-lognormal(μ, σ) risk profile at the top level. Then, calculate the long-term average LC using (3.3) and the 0.999 VaR of the annual loss using (3.4), and find the implied BI by matching SMA capital with the 0.999 VaR. Now, consider the identical Bank 2 that splits into two similar independent entities that will have the same LC and the same BI, both equal to half of the LC and half of the BI of the first bank, which allows us to calculate SMA capital for each entity. Also note that, in this case, the entities will each have risk profile Poisson(λ/2)-lognormal(μ, σ).


TABLE 3 Super-additivity examples (all amounts are in € million).

(a) Bucket shift

                     Bank 1      Bank 2
Component            Group       Entity 1    Entity 2
BI                   32 000      16 000      16 000
BIC                   6 920       3 120       3 120
LC                    4 000       2 000       2 000
SMA                   5 771       2 694       2 694

Sum of SMAs: 5 387

(b) No bucket shift

                     Bank 1      Bank 2
Component            Group       Entity 1    Entity 2
BI                   70 000      35 000      35 000
BIC                  17 940       7 790       7 790
LC                    4 000       2 000       2 000
SMA                  11 937       5 337       5 337

Sum of SMAs: 10 674

(a) Bank 1 is in Bucket 5 and the entities of Bank 2 are in Bucket 4. (b) Bank 1 and the entities of Bank 2 are in Bucket 5.

Remark 6.1 The sum of K independent compound processes Poisson(λ_i) with severities F_i(x), i = 1, …, K, is a compound process Poisson(λ), with λ = λ_1 + ⋯ + λ_K and severity

$$F(x) = \frac{\lambda_1}{\lambda} F_1(x) + \cdots + \frac{\lambda_K}{\lambda} F_K(x)$$

(see, for example, Shevchenko 2011, Section 7.2, Proposition 7.1).
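Remark 6.1 is straightforward to confirm by simulation. The sketch below (our own illustration, with arbitrary parameter values) compares the sum of two independent Poisson-lognormal compound processes with a single compound Poisson whose intensity is the sum of the intensities and whose severity is the stated mixture; a two-sample Kolmogorov-Smirnov test should not distinguish the two beyond Monte Carlo noise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_years = 50_000
lam1, lam2 = 10.0, 5.0              # illustrative Poisson intensities
mu1, mu2, sigma = 12.0, 13.0, 1.0   # illustrative lognormal severity parameters

def compound(lam, mu):
    """Annual losses of a Poisson(lam)-lognormal(mu, sigma) compound process."""
    counts = rng.poisson(lam, n_years)
    return np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])

# Sum of the two independent compound processes.
z_sum = compound(lam1, mu1) + compound(lam2, mu2)

# Single compound Poisson(lam1 + lam2) with the mixture severity of Remark 6.1.
lam = lam1 + lam2
def mixed_year(k):
    from_first = rng.random(k) < lam1 / lam
    return np.where(from_first, rng.lognormal(mu1, sigma, k), rng.lognormal(mu2, sigma, k)).sum()
z_mix = np.array([mixed_year(k) for k in rng.poisson(lam, n_years)])

print(stats.ks_2samp(z_sum, z_mix))   # a large p-value: the two annual-loss samples are indistinguishable
```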

The results for the case λ = 10, μ = 14, σ = 2 are shown in Table 4. Here, the SMA capital for Bank 1 is €2.13 billion, while the sum of the SMA capitals of the entities of Bank 2 is €1.96 billion, demonstrating the super-additivity feature. Note that, in this case, one can also calculate the 0.999 VaR for each entity, which is €1.47 billion, while the SMA capital for each of Entity 1 and Entity 2 is €983 million. That is, the entities with SMA capital are significantly undercapitalized compared with the LDA economic capital model; this subject is discussed further in Section 6.2.


TABLE 4 Super-additivity example (all amounts are in € million).

                Bank 1      Bank 2
                Group       Entity 1    Entity 2
λ               10          5           5
μ               14          14          14
σ               2           2           2
BI              13 960      6 980       6 980
BIC              2 651      1 166       1 166
LC               1 321        661         661
SMA              2 133        983         983
LDA              2 133      1 473       1 473

BI for Bank 1 is implied by the 0.999 VaR of the Poisson(λ)-lognormal(μ, σ) risk profile (LDA).
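Table 4 can be reproduced with a few lines of code. The sketch below assumes that (3.3) is the long-term loss component of (6.1) with thresholds of €10 million and €100 million, and that (3.4) is the mean-corrected single-loss approximation to the 0.999 VaR of the compound Poisson-lognormal loss; neither formula is restated in this section, so treat this as our reading of them. It reuses sma_capital from the sketch after Table 3; all amounts are in € million.

```python
import math
from scipy.stats import norm
from scipy.optimize import brentq

L_THRESH, H_THRESH = 10.0, 100.0     # EUR 10m and EUR 100m loss thresholds, in EUR million

def lognormal_tail_mean(mu, sigma, t):
    """E[X 1{X > t}] for X ~ lognormal(mu, sigma) with X in EUR; t and result in EUR million."""
    return math.exp(mu + 0.5 * sigma**2) * norm.cdf((mu + sigma**2 - math.log(t * 1e6)) / sigma) / 1e6

def long_term_lc(lam, mu, sigma):
    """Long-term average loss component of a Poisson(lam)-lognormal(mu, sigma) profile, cf. (6.1)."""
    mean_loss = math.exp(mu + 0.5 * sigma**2) / 1e6
    return lam * (7 * mean_loss
                  + 7 * lognormal_tail_mean(mu, sigma, L_THRESH)
                  + 5 * lognormal_tail_mean(mu, sigma, H_THRESH))

def sla_var(lam, mu, sigma, alpha=0.999):
    """Mean-corrected single-loss approximation to the alpha-VaR of the annual loss (EUR million)."""
    quantile = math.exp(mu + sigma * norm.ppf(1 - (1 - alpha) / lam)) / 1e6
    return quantile + lam * math.exp(mu + 0.5 * sigma**2) / 1e6

def implied_bi(lam, mu, sigma):
    """BI (EUR million) such that SMA capital matches the 0.999 LDA VaR; uses sma_capital above."""
    target, lc = sla_var(lam, mu, sigma), long_term_lc(lam, mu, sigma)
    return brentq(lambda bi: sma_capital(bi, lc) - target, 1e-3, 1e6)

lam, mu, sigma = 10, 14, 2
bi = implied_bi(lam, mu, sigma)
print(long_term_lc(lam, mu, sigma), sla_var(lam, mu, sigma), bi)    # ~1321, ~2133, ~13 960
print(sma_capital(bi / 2, long_term_lc(lam / 2, mu, sigma)))        # ~983 per entity
print(sla_var(lam / 2, mu, sigma))                                  # ~1473 per entity
```

The printed values are approximately 1321, 2133, 13 960, 983 and 1473, in line with Table 4.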

Next, we state a mathematical expression that a bank could utilize in business structure planning to decide, in the long term, whether it would be advantageous under the new SMA framework to split into two (or more) entities or to remain in a given joint structure, according to the cost of funding Tier I SMA capital.

Consider the long-term SMA capital behavior, averaged over the long-term history of the SMA capital, for both the joint and the disaggregated business models. Then, from the perspective of a long-term analysis regarding restructuring, the following expressions can be used to determine the point at which the SMA capital becomes super-additive. If it is super-additive in the long term, there is an advantage to splitting the institution, in the long run, into disaggregated separate components. Further, the expression provided allows one to maximize the long-term SMA capital reduction that can be obtained under such a partitioning of the institution into m separate disaggregated entities.

Proposition 6.2 (Conditions for super-additive SMA capital) Under an LDA model with frequency from Poisson(λ_J) and generic severity F_X(x; θ_J), the long-term average of the loss component, $\widetilde{\mathrm{LC}}$, can be found using

$$\widetilde{\mathrm{LC}} = \lambda \bigl(7\,\mathrm{E}[X] + 7\,\mathrm{E}[X \mathbf{1}_{\{X > L\}}] + 5\,\mathrm{E}[X \mathbf{1}_{\{X > H\}}]\bigr), \qquad (6.1)$$

ie, the short-term empirical loss averages in the SMA formula (2.2) are replaced with their long-term averages (here L and H denote the €10 million and €100 million loss thresholds of the loss component). We denote the long-term average LC for the joint bank by $\widetilde{\mathrm{LC}}(\lambda_J, \theta_J)$ and the long-term average LC for bank entity i after the split by $\widetilde{\mathrm{LC}}(\lambda_i, \theta_i)$, $i \in \{1, 2, \ldots, m\}$. Therefore, the long-term SMA capital $K_{\mathrm{SMA}}(\mathrm{BI}_J, \widetilde{\mathrm{LC}}(\lambda_J, \theta_J))$ is an explicit function of the LDA model parameters $(\lambda_J, \theta_J)$, and the long-term SMA capital for entity i is $K_{\mathrm{SMA}}(\mathrm{BI}_i, \widetilde{\mathrm{LC}}(\lambda_i, \theta_i))$. Hence, the SMA super-additive capital condition becomes

$$K_{\mathrm{SMA}}\bigl(\mathrm{BI}_J, \widetilde{\mathrm{LC}}(\lambda_J, \theta_J)\bigr) - \sum_{i=1}^{m} K_{\mathrm{SMA}}\bigl(\mathrm{BI}_i, \widetilde{\mathrm{LC}}(\lambda_i, \theta_i)\bigr) > 0. \qquad (6.2)$$

The above condition is for a model-based restructuring, assuming each bank entity is modeled generically by an LDA model. Structuring around such a model-based assumption can be performed to determine the optimal disaggregation of the institution to maximize capital cost reductions. Many severity distribution types allow calculation of the long-term LC in closed form; for example, in the case of the Poisson-lognormal model, it is given by (3.3).

Of course, one can also use the above condition to maximize the capital reduction over the next year by replacing the long-term average LC with the observed LC calculated from empirical sample averages of historical data, as required in the SMA formula, thereby avoiding explicit assumptions about the severity and frequency distributions.
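As a minimal illustration of how condition (6.2) might be evaluated for a candidate split under Poisson-lognormal LDA profiles, the helper below (our own construction, reusing sma_capital and long_term_lc from the earlier sketches) returns the long-run capital saving of a proposed disaggregation; a positive value indicates super-additivity.

```python
def split_benefit(joint, entities):
    """Long-run SMA capital saving of a candidate split, cf. condition (6.2).

    joint and each element of entities are (BI, lam, mu, sigma) tuples describing
    Poisson-lognormal LDA profiles; reuses sma_capital and long_term_lc defined above.
    A positive return value means the SMA is super-additive for this split.
    """
    bi_j, lam_j, mu_j, sigma_j = joint
    k_joint = sma_capital(bi_j, long_term_lc(lam_j, mu_j, sigma_j))
    k_split = sum(sma_capital(bi, long_term_lc(lam, mu, s)) for bi, lam, mu, s in entities)
    return k_joint - k_split

# Table 4 configuration: the joint bank versus two similar halves.
print(split_benefit((13_960, 10, 14, 2), [(6_980, 5, 14, 2), (6_980, 5, 14, 2)]))   # ~167
```

For the Table 4 configuration this returns roughly €167 million, ie, the two-way split is capital reducing.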

6.2 SMA super-additivity, macroprudential policy and systemic risk

In this section, we discuss the fact that the financial system is not invariant under observation; that is, banks and financial institutions will respond in a rational manner to maximize their competitive advantage. In particular, if new regulations allow, and indeed provide incentives for, banks to reduce, for instance, the cost of capital, they will generally act to do so. It is in this context that we briefly introduce the relationship between the new SMA capital calculations and the broader macroprudential view of the economy that the regulator holds.

It is generally acknowledged that the enhancements that the Basel III accords added to the Basel II banking regulations were largely driven by a desire to impose greater macroprudential oversight on the banking system after the 2008 financial crisis. Indeed, the Basel III accords adopted an approach to financial regulation aimed at mitigating the "systemic risk" of the financial sector as a whole; we may adopt a generic high-level definition of systemic risk as

the disruption to the flow of financial services that is (i) caused by an impairment of all or parts of the financial system; and (ii) has the potential to have serious negative consequences for the real economy.

This view of systemic risk is focused on disruptions that arise from events such as the collapse of core banks or financial institutions in the banking sector, as happened after the Lehman Brothers collapse, which led to the wider systemic risk problem of the 2008 financial crisis.

In response, to reduce the likelihood of such a systemic risk occurrence, the Basel III regulation imposed several components that are of relevance to macroprudential financial regulation. Under Basel III, banks' capital requirements have been strengthened, and new liquidity requirements, a leverage cap and a countercyclical capital buffer were introduced; these would remain in place under the new SMA guidelines.

In addition, large financial institutions, ie, the largest and most globally active banks, were required to hold a greater amount of capital, with an increased proportion of this Tier I capital being more liquid and of greater creditworthiness, ie, "higher-quality" capital. We note that this is consistent with a view of systemic risk reduction based on a cross-sectional approach. For instance, under this approach, the Basel III requirements sought to introduce systemic risk reduction macroprudential tools. These included the following:

(a) countercyclical capital requirements, which were introduced with the purpose of avoiding excessive balance-sheet shrinkage from banks in distress that may transition from going concerns to gone concerns;

(b) caps on leverage in order to reduce or limit asset growth, through a mechanism that linked a bank's assets to its equity;

(c) time variation in reserve requirements, with procyclical capital buffers, as a means to control capital flows for prudential purposes.

In the United Kingdom, such Basel III macroprudential extensions are discussed in detail in the summary of the speech given at the 27th Annual Institute of International Banking Conference, Washington, by the Executive Director for Financial Stability Strategy and Risk in the Financial Policy Committee at the Bank of England (www.bankofengland.co.uk/publications/Documents/speeches/2016/speech887.pdf).

Further, one can argue that several factors contribute to the buildup of systemic risk in an economy, both locally and globally. One factor of relevance to discussions of AMA internal modeling versus SMA models is the risk that arises from the complexity of the mathematical modeling adopted in risk management and product structuring/pricing. One can argue from a statistical perspective that, in order to understand scientifically the complex nature of OpRisk processes, and then respond to them with adequate capital measures, risk mitigation policies and governance structuring, it is prudent to invest in some level of model complexity. However, with such complexity comes a significant chance that the models will be misused to game the system for competitive advantage, for instance to achieve a reduction in capital. This can act as an unseen trigger of systemic risk, especially if it takes place in the larger, more substantial banks in the financial network, as is the case under AMA Basel II internal modeling. There is therefore a tension between reducing the systemic risk due to model complexity and actually understanding the risk processes scientifically. We argue that the SMA goes too far in simplifying the complexity of OpRisk modeling, rendering it unusable for risk analysis and interpretation. Model complexity could perhaps instead be reduced through standardization of AMA internal modeling practice, something we discuss in the conclusions of this paper.

In this section, we ask what role the SMA model framework can play in the context of macroprudential systemic risk reduction. To address this question, we adopt the SMA's highly stylized view, as follows. We first consider the largest banks and financial institutions in the world. These entities are global, key nodes in the financial network; they have sometimes been referred to as "too big to fail" institutions. It is clear that the existence of such large financial institutions has both positive and negative economic effects. However, from a systemic risk perspective, they can pose problems for banking regulation both in local jurisdictions and globally.

There is, in general, an incentive to reduce the number of such dominant nodes in the banking financial network when viewed from the perspective of reducing systemic risk. So, the natural question that arises with regard to the SMA formulation is this: does the new regulation incentivize the disaggregation of large financial institutions and banks, at least from the high-level perspective of reducing the costs associated with obtaining, funding and maintaining the Tier I capital and liquidity ratios required under Basel III at present? In addition, if super-additive capital is possible, is it achievable for feasible and practically sensible disaggregated entities? Finally, one could ask the following: does this super-additive SMA capital feature provide an increasing reward, in terms of capital reduction, as the size of the institution increases? We address these questions in the following stylized case studies, which illustrate that the SMA can indeed be considered a framework that will induce systemic risk reductions, in the sense of providing potential incentives to reduce capital costs through the disaggregation of large financial institutions and banks in the global banking network. However, we also observe that this may lead to significant undercapitalization of the entities after disaggregation; see, for example, the case already discussed in Table 4 and considered in the next section (Figure 5).

6.3 SMA super-additivity is feasible and produces viable BI

For illustration, assume the joint institution is simply modeled by a Poisson-lognormal model Poisson(λ_J)-lognormal(μ_J, σ_J), with parameters sub-indexed by J for the joint institution and a BI for the joint institution denoted by BI_J. Further, we assume that if the institution had split into m = 2 separate entities for Tier I capital reporting purposes, then each would have its own stylized annual loss modeled by an independent Poisson-lognormal model: Entity 1, modeled by Poisson(λ_1)-lognormal(μ_1, σ_1), and Entity 2, modeled by Poisson(λ_2)-lognormal(μ_2, σ_2), with BI_1 and BI_2, respectively. Here, we assume that the disaggregation of the joint institution can occur in such a manner that the risk profile of each individual entity may adopt more, less or equal risk aversion, governance and risk management practices. This means that there are really no restrictions on the parameters λ_1, μ_1 and σ_1, nor on the parameters λ_2, μ_2 and σ_2, relative to λ_J, μ_J and σ_J.

In this sense, we study the range of parameters and BI values that will provide an incentive for large institutions to reduce systemic risk by undergoing disaggregation into smaller institutions. We achieve this through consideration of the SMA super-additivity condition in Proposition 6.2. In this case, it leads us to consider

$$K_{\mathrm{SMA}}\bigl(\mathrm{BI}_J, \widetilde{\mathrm{LC}}(\lambda_J, \mu_J, \sigma_J)\bigr) - \sum_{i=1}^{2} K_{\mathrm{SMA}}\bigl(\mathrm{BI}_i, \widetilde{\mathrm{LC}}(\lambda_i, \mu_i, \sigma_i)\bigr) > 0. \qquad (6.3)$$

Using this stylized condition, banks may be able to determine, for instance, whether in the long term it would be economically efficient to split their institution into two or more separate entities. Further, they can use this expression to optimize the capital reduction for each of the individual entities relative to the SMA capital of the combined entity. Hence, what we show here is the long-term average behavior, which gives the long-run optimal conditions for a split or a merger.

We perform a simple analysis below where, at present, the joint institution is modeled by an LDA model with frequency given by Poisson(λ_J = 10) and severity given by lognormal(μ_J = 12, σ_J = 2.5), and with a BI implied by the 0.999 VaR of the LDA model, as detailed in Table 2, giving BI = €14.24 billion. We then assume that the average number of losses in each institution, if the bank splits into two, is given by λ_1 = 10 and λ_2 = 10. In addition, the scale of the losses changes, but the tail severity of large losses is unchanged, such that σ_1 = 2.5 and σ_2 = 2.5, while μ_1 and μ_2 are unknown. We also calculate BI_1 and BI_2 implied by the LDA 0.999 VaR for the given values of μ_1 and μ_2. Then, we determine the set of values of μ_1 and μ_2 such that condition (6.3) is satisfied.
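This procedure is straightforward to script. The sketch below (our own helper, reusing long_term_lc, sla_var, implied_bi and sma_capital from the earlier sketches) implies each entity's BI from its own 0.999 SLA VaR and then evaluates condition (6.3) on a grid of (μ_1, μ_2) values.

```python
def superadditive_mu_grid(lam=10.0, sigma=2.5, mu_grid=range(8, 14)):
    """Evaluate condition (6.3) over a grid of (mu1, mu2) values for the stylized study above.

    The joint bank is Poisson(10)-lognormal(12, 2.5); each candidate entity keeps lambda = 10
    and sigma = 2.5, with its BI implied by its own 0.999 SLA VaR. Reuses long_term_lc,
    sla_var, implied_bi and sma_capital from the earlier sketches.
    """
    k_joint = sma_capital(implied_bi(lam, 12.0, sigma), long_term_lc(lam, 12.0, sigma))
    results = {}
    for mu1 in mu_grid:
        for mu2 in mu_grid:
            k_split = sum(sma_capital(implied_bi(lam, mu, sigma), long_term_lc(lam, mu, sigma))
                          for mu in (mu1, mu2))
            results[(mu1, mu2)] = k_joint - k_split   # > 0: super-additive, the split reduces capital
    return results

superadditive = {pair: round(gain) for pair, gain in superadditive_mu_grid().items() if gain > 0}
print(superadditive)
```

Pairs with a positive value correspond to the non-N/A cells of Table 5.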

Table 5 shows the range of values of μ_1 and μ_2 that produce super-additive capital structures and therefore justify disaggregation from the perspective of SMA capital cost minimization. We learn from this analysis that it is indeed plausible to structure a disaggregation of a large financial institution into a smaller set of financial institutions under the SMA capital structure. That is, the range of parameters μ_1 and μ_2 that produce super-additive capital structures under the SMA formulation is plausible, and the BI values are plausible for such a decomposition. In Table 5, we also show the BI values for Entity 1 implied in the cases satisfying the super-additive SMA capital condition.

We see from these results that the ranges of BI values inferred under the super-additive capital structures are also plausible. This demonstrates that it is practically feasible for the SMA to produce an incentive to reduce capital by disaggregating larger institutions into smaller ones.


TABLE 5 Implied BI in € billions for Entity 1 (N/A indicates no super-additive solution).

           μ2
μ1         8        9        10       11       12       13
8          0.301    0.301    0.301    0.301    N/A      N/A
9          0.820    0.820    0.820    0.820    N/A      N/A
10         2.406    2.406    2.406    2.406    N/A      N/A
11         5.970    5.970    5.970    5.970    N/A      N/A
12         N/A      N/A      N/A      N/A      N/A      N/A
13         N/A      N/A      N/A      N/A      N/A      N/A

6.4 Does SMA incentivize larger institutions to disaggregate more than smaller institutions?

Finally, we complete this section by addressing the following question: does the potential capital reduction from using super-additive SMA capital as a motivation to disaggregate into smaller firms increase with the size of the institution? As the tail of the bank's loss distribution increases, the size of the institution as quantified through the BI increases, and we are interested in seeing whether there is an increasing incentive for larger banks to disaggregate in order to reduce the capital charge.

To illustrate this point, we perform a study with the following setup. We assume that we have a bank with a Poisson(λ)-lognormal(μ, σ) operational risk profile at the institution level. Now, we calculate the long-term average loss component $\widetilde{\mathrm{LC}}$ and match the LDA VaR capital measure, via the SLA at the 99.9% quantile, to the SMA capital to imply the corresponding BI, BIC and $\mathrm{SMA} = K_{\mathrm{SMA}}(\mathrm{BI}, \widetilde{\mathrm{LC}})$ capital. Next, we consider disaggregating the institution into two similar independent components, which we denote Entity 1 and Entity 2. This means the entities will have $\widetilde{\mathrm{LC}}_1 = \widetilde{\mathrm{LC}}_2 = \tfrac{1}{2}\widetilde{\mathrm{LC}}$, $\mathrm{BI}_1 = \mathrm{BI}_2 = \tfrac{1}{2}\mathrm{BI}$, $\lambda_1 = \lambda_2 = \tfrac{1}{2}\lambda$, $\mu_1 = \mu_2 = \mu$ and $\sigma_1 = \sigma_2 = \sigma$ (see Remark 6.1). Then, we calculate the SMA capitals $\mathrm{SMA}_1 = K_{\mathrm{SMA}}(\mathrm{BI}_1, \widetilde{\mathrm{LC}}_1)$ and $\mathrm{SMA}_2 = K_{\mathrm{SMA}}(\mathrm{BI}_2, \widetilde{\mathrm{LC}}_2)$ for Entity 1 and Entity 2, and we find the absolute super-additivity benefit from the SMA capital reduction, $\Delta = \mathrm{SMA} - \mathrm{SMA}_1 - \mathrm{SMA}_2$, and the relative benefit, $\Delta/\mathrm{SMA}$. The results for the case λ = 10, μ = 14 and varying σ are plotted in Figure 4. Note that the benefit is a non-monotonic function of σ, because in some cases the disaggregation process results in the entities shifting into a different SMA capital bucket compared with the original joint entity. One can also see that the absolute benefit from a bank disaggregation increases as the bank size increases, though the relative benefit drops. In the same figure, we also show the results for the case of bank disaggregation into ten similar independent entities, ie, $\lambda_1 = \cdots = \lambda_{10} = \lambda/10$, $\mu_1 = \cdots = \mu_{10} = \mu$ and $\sigma_1 = \cdots = \sigma_{10} = \sigma$.
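The quantities plotted in Figure 4 can be approximated with the earlier sketches (sma_capital, long_term_lc, sla_var, implied_bi); the helper below is our own and simply applies the splitting rule described above for an n-way split, with amounts in € million.

```python
import numpy as np

def n_way_split_benefit(n_parts, lam, mu, sigma):
    """Absolute and relative super-additivity benefit of splitting a Poisson(lam)-lognormal(mu, sigma)
    bank into n_parts similar entities, following the setup described above (reuses earlier sketches)."""
    lc = long_term_lc(lam, mu, sigma)
    bi = implied_bi(lam, mu, sigma)          # BI implied by matching SMA capital to the 0.999 SLA VaR
    sma_joint = sma_capital(bi, lc)
    sma_split = n_parts * sma_capital(bi / n_parts, lc / n_parts)
    delta = sma_joint - sma_split
    return delta, delta / sma_joint

for sigma in np.arange(1.5, 3.01, 0.25):
    d2, r2 = n_way_split_benefit(2, 10, 14, sigma)
    d10, r10 = n_way_split_benefit(10, 10, 14, sigma)
    print(f"sigma={sigma:.2f}  two-way: {d2:9.0f} ({100 * r2:4.1f}%)   ten-way: {d10:9.0f} ({100 * r10:4.1f}%)")
```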


FIGURE 4 Absolute (in € million) and relative super-additivity benefits from splitting a bank into two similar entities (and into ten similar entities) versus the lognormal severity shape parameter σ.

[Figure: panel (a) plots the absolute super-additivity benefit (€) and panel (b) the percentage benefit against σ from 1.50 to 3.00, for the two-entity and ten-entity splits.]

FIGURE 5 Absolute (in € million) and relative undercapitalization of the entities after bank disaggregation into two similar entities (and into ten similar entities) versus the lognormal severity shape parameter σ.

[Figure: panel (a) plots the absolute undercapitalization (€) and panel (b) the percentage undercapitalization against σ from 1.50 to 3.00, for the two-entity and ten-entity splits.]

We also calculate the 0.999 VaR of the Poisson-lognormal process, denoted LDA_1 and LDA_2 for Entity 1 and Entity 2, respectively, using the SLA (3.4). Then, we find the undercapitalization of the entities introduced by disaggregation, LDA_1 + LDA_2 − SMA_1 − SMA_2, and the corresponding relative undercapitalization, (LDA_1 + LDA_2 − SMA_1 − SMA_2)/(LDA_1 + LDA_2). These results (and also those for the case of bank disaggregation into ten similar entities) are shown in Figure 5. In this example, the undercapitalization is very significant, and it increases for larger banks, although the relative undercapitalization gets smaller. Moreover, both the super-additivity benefit and the undercapitalization become more pronounced in the case of splitting into ten entities compared with the two-entity split.
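The same quantities can be computed with the earlier sketches; the helper below (our own) evaluates the shortfall of the split entities' SMA capital against their own 0.999 SLA VaR, in € million.

```python
def n_way_undercapitalization(n_parts, lam, mu, sigma):
    """Aggregate shortfall of the split entities' SMA capital relative to their own 0.999 SLA VaR,
    and the relative shortfall, following the definition above (reuses earlier sketches)."""
    bi, lc = implied_bi(lam, mu, sigma), long_term_lc(lam, mu, sigma)
    lda_split = n_parts * sla_var(lam / n_parts, mu, sigma)
    sma_split = n_parts * sma_capital(bi / n_parts, lc / n_parts)
    return lda_split - sma_split, (lda_split - sma_split) / lda_split

print(n_way_undercapitalization(2, 10, 14, 2.0))    # roughly (980, 0.33) for the Table 4 profile
```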


Thus, from one perspective, the capital calculated using the SMA formula encourages large banks to disaggregate (reducing the possibility of systemic risk from a failure in the banking network); from another perspective, however, it introduces significant undercapitalization of the newly formed smaller entities, increasing their chances of failure (ie, increasing systemic risk in the banking system). In light of our analysis, it becomes clear that there is downward pressure on banks to disaggregate, which reduces some aspects of systemic risk in the banking network. However, we also show that, at the same time, this very mechanism may undercapitalize the resulting smaller institutions, which would in turn increase systemic risk. The final outcome for the stability of the banking network will therefore depend largely on how aggressively larger banks choose to seek capital reductions at the risk of undercapitalizing their disaggregated entities; ie, their risk appetite in this regard will dictate the ultimate outcome. Such an uncertain future is surely not what the regulator had in mind in allowing an incentive for disaggregation through the existence of super-additive capital measures.

7 OPCAR ESTIMATION FRAMEWORK

This section summarizes details of the OpCar model, which is the precursor to the SMA model and helped to form the SMA structure. First, we point out that the proposed OpCar model (Basel Committee on Banking Supervision 2014, Annex 2) is based on the LDA, albeit a very simplistic one that models the annual loss of the institution as a single LDA formulation,

$$Z = \sum_{i=1}^{N} X_i, \qquad (7.1)$$

where N is the annual number of losses, modeled as a random variable from the Poisson distribution Poisson(λ), ie, λ = E[N], and X_i is the loss severity random variable from distribution F_X(x; θ), parameterized by the vector θ. It is assumed that N and the X_i are independent, and that X_1, X_2, … are independent too (note that modeling severities with autocorrelation is possible; see, for example, Guégan and Hassani (2013)). F_X(x; θ) is modeled by one of the following two-parameter distributions: Pareto, lognormal, log-logistic or log-gamma. Three variants of the Pareto model were considered, informally described in the regulation as corresponding to Pareto-light, Pareto-medium and Pareto-heavy. As a result, up to six estimates of the 0.999 VaR were generated per bank and averaged to find the final capital estimate used in the regression model.

Next, we outline the OpCar fitting procedure and highlight potential issues.


7.1 Parameter estimation

The proposed OpCar model is estimated using data collected in a quantitative impact study (QIS) performed by the Basel Committee. Specifically, for each bank in the data set, over T = 5 years corresponding to the 2005-9 period, the following data is used.

- $\tilde n_i$: the annual number of losses above $\tilde u$ = €10 000 in year i ∈ {1, …, T}.

- $n_i$: the annual number of losses above u = €20 000 in year i ∈ {1, …, T}.

- $S_i$: the sum of losses above the level u = €20 000 in year i ∈ {1, …, T}.

- $M_i$: the maximum individual loss in year i ∈ {1, …, T}.

The hybrid parameter estimation assumes that the frequency and severity distributions are unchanged over the five years. Then, the following statistics are defined:

�u WD EŒN j X > u� D �.1 � FX .uI �//; (7.2)

� Qu WD EŒN j X > Qu� D �.1 � FX . QuI �//; (7.3)

�u WD E� ŒX j X > u�; (7.4)

which are estimated using observed frequencies ni and Qni , and aggregated losses Si ,as

O�u D 1

T

TXiD1

ni ; O� Qu D 1

T

TXiD1

Qni ; O�u DPT

iD1 SiPTiD1 ni

: (7.5)

The conditional mean $\mu_u$ and severity distribution $F_X(\cdot;\theta)$ are known in closed form for the selected severity distribution types (see Basel Committee on Banking Supervision 2014, p. 26, Table A.5). Then, in the case of the lognormal, log-gamma, log-logistic and Pareto-light distributions, the following two equations are solved to find the severity parameter estimates $\hat\theta$:

$$\frac{\hat\lambda_{\tilde u}}{\hat\lambda_u} = \frac{1 - F_X(\tilde u;\hat\theta)}{1 - F_X(u;\hat\theta)} \quad\text{and}\quad \hat\mu_u = \mathrm{E}_{\hat\theta}[X \mid X > u], \qquad (7.6)$$

which are referred to as the percentile and moment conditions, respectively. Finally, the Poisson $\lambda$ parameter is estimated as

$$\hat\lambda = \frac{\hat\lambda_u}{1 - F_X(u;\hat\theta)}. \qquad (7.7)$$

In the case of Pareto-heavy severity, the percentile condition in (7.6) is replaced by the "maximum heavy condition"

$$F_{X\mid X>\tilde u}(\hat\mu_M^{(1)};\hat\theta) = \frac{\tilde n}{\tilde n + 1}, \qquad \hat\mu_M^{(1)} = \max(M_1,\dots,M_T); \qquad (7.8)$$


in the case of Pareto-medium severity, the percentile condition is replaced by the "maximum medium condition"

$$F_{X\mid X>\tilde u}(\hat\mu_M^{(2)};\hat\theta) = \frac{\tilde n}{\tilde n + 1}, \qquad \hat\mu_M^{(2)} = \frac{1}{T}\sum_{j=1}^{T} M_j. \qquad (7.9)$$

Here, $F_{X\mid X>\tilde u}(\cdot)$ is the distribution of losses conditional on exceeding $\tilde u$. An explicit definition of $\tilde n$ is not provided in Basel Committee on Banking Supervision (2014, Annex 2), but it is reasonable to assume $\tilde n = (1/T)\sum_{i=1}^{T}\tilde n_i$ in (7.9) and $\tilde n = \sum_{i=1}^{T}\tilde n_i$ in (7.8).

These maximum conditions are based on the following result and approximation, stated in Basel Committee on Banking Supervision (2014). Denote the ordered loss sample $X_{1,n} \le \cdots \le X_{n,n}$, ie, $X_{n,n} = \max(X_1,\dots,X_n)$. Using the fact that $F_X(X_i) = U_i$ is uniformly distributed, we have $\mathrm{E}[F_X(X_{k,n})] = k/(n+1)$ and, thus,

$$\mathrm{E}[F_X(X_{n,n})] = \frac{n}{n+1}. \qquad (7.10)$$

Therefore, when $n \to \infty$, one can expect that

$$\mathrm{E}[F_X(X_{n,n})] \approx F_X[\mathrm{E}(X_{n,n})], \qquad (7.11)$$

which gives conditions (7.8) and (7.9) when $\mathrm{E}(X_{n,n})$ is replaced by its estimators $\hat\mu_M^{(1)}$ and $\hat\mu_M^{(2)}$, and the conditional distribution $F_{X\mid X>\tilde u}(\cdot)$ is used instead of $F_X(\cdot)$ to account for loss truncation below $\tilde u$. Here, we would like to note that, strictly speaking, under the OpRisk setting, $n$ is random from a Poisson distribution corresponding to the annual number of events, which may have implications for the above maximum conditions. Also, note that the distribution of the maximum loss in the case of Poisson-distributed $n$ can be found in closed form (see Shevchenko 2011, Section 6.5).

We are not aware of results in the literature regarding the properties (such as accuracy, robustness and appropriateness) of the estimators $\hat\theta$ calculated in the way described above. Note that it is mentioned in Basel Committee on Banking Supervision (2014) that if the numerical solution for $\hat\theta$ does not exist for a model, then this model is ignored.
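For completeness, the closed-form result alluded to above is the standard compound Poisson identity: with $N \sim \mathrm{Poisson}(\lambda)$ independent of the iid severities, and with the convention that a year with no losses contributes a maximum of zero,

$$\Pr\Big(\max_{1\le i\le N} X_i \le x\Big) = \sum_{n=0}^{\infty} \mathrm{e}^{-\lambda}\frac{\lambda^n}{n!}\,F_X(x)^n = \exp\{-\lambda\,(1 - F_X(x))\},$$

which could be used to assess the maximum conditions directly rather than through the approximation (7.11).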

The OpCar framework takes a five-year sample of data to perform the estimation when fitting the models. We note that estimating the annual loss model of an institution from only five years of data is going to result in very poor accuracy, and this can easily translate into non-robust results from the OpCar formulation if the framework is recalibrated in future.

At this point, we emphasize that a five-sample estimate is very inaccurate for these quantities; in other words, the estimated model parameters will have very large uncertainty associated with them. This is particularly problematic for parameters related to the kurtosis and tail index in subexponential severity models. This is probably why the heuristic, practically motivated rejection criteria for model fitting were applied: to reject, in a non-statistical manner, inappropriate fits that failed because of the very small sample sizes used in this estimation.

To illustrate the extent of the uncertainty present in these five-year sample estimates, we provide the following basic case study. Consider a Poisson–lognormal LDA model with parameters $\lambda = 1000$, $\mu = 10$ and $\sigma = 2$ simulated over $m = 1000$ years. The $\mathrm{VaR}_{0.999}$ for this model given by the SLA (3.4) is $4.59 \times 10^8$. Then, we calculate the population statistics for each year, $n_i$, $\tilde n_i$ and $S_i$, $i = 1,\dots,m$, and form $T$-year non-overlapping blocks of these statistics from the simulated $m$ years in order to estimate the distribution parameters for each block using the percentile and moment conditions (7.6). Formally, for each block, we have to numerically solve the two equations

$$O_1 := \Phi\!\left(\frac{\hat\mu - \ln \tilde u}{\hat\sigma}\right)\left[\Phi\!\left(\frac{\hat\mu - \ln u}{\hat\sigma}\right)\right]^{-1} - \frac{\hat\lambda_{\tilde u}}{\hat\lambda_u} = 0,$$
$$O_2 := \exp\!\left(\hat\mu + \frac{\hat\sigma^2}{2}\right)\Phi\!\left(\frac{\hat\mu + \hat\sigma^2 - \ln u}{\hat\sigma}\right)\left[\Phi\!\left(\frac{\hat\mu - \ln u}{\hat\sigma}\right)\right]^{-1} - \hat\mu_u = 0, \qquad (7.12)$$

in order to find the severity parameter estimates $\hat\mu$ and $\hat\sigma$, which are then substituted into (7.7) to obtain the estimate $\hat\lambda$. Here, $\hat\lambda_{\tilde u}$, $\hat\lambda_u$ and $\hat\mu_u$ are the observed statistics (7.5) for a block.

This system of nonlinear equations may not have a unique solution, or a solution may not exist. Thus, to find an approximate solution, we consider two different objective functions.

• Objective function 1: a univariate objective function given by $O_1^2 + O_2^2$ that we minimize to find the solution.

• Objective function 2: a multi-objective function to obtain the Pareto optimal solution by finding the solution such that $|O_1|$ and $|O_2|$ cannot be jointly better off.

In both cases, a simple grid search over equally spaced values of $\hat\mu$ and $\hat\sigma$ was used to avoid the complications that may arise with other optimization techniques; this yields a robust solution that is not sensitive to gradients or starting points. A summary of the results for the parameter estimates and the corresponding $\mathrm{VaR}_{0.999}$ in the case of five-year blocks (ie, $[m/T] = 200$ independent blocks) is provided in Table 6.
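The following minimal Python sketch (our reconstruction of the experiment under the stated assumptions, not the code used to produce Table 6) illustrates the block-calibration procedure for Grid 1 with objective function 1.

```python
# Sketch of the block-calibration case study: simulate a Poisson(1000)-lognormal(10, 2)
# LDA model over m = 1000 years, form the statistics (7.5) on five-year blocks, and
# recover (mu, sigma) by a grid search minimizing objective function 1, O_1^2 + O_2^2,
# from (7.12); lambda then follows from (7.7) and the capital from the SLA.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=2)
lam0, mu0, sig0 = 1000, 10.0, 2.0          # true model parameters
u, u_t = 20_000.0, 10_000.0                # reporting thresholds (EUR)
m, T, alpha = 1000, 5, 0.999

# Per-year statistics: n_i (losses > u), n~_i (losses > u~), S_i (sum of losses > u).
n, n_t, S = np.empty(m), np.empty(m), np.empty(m)
for i in range(m):
    x = rng.lognormal(mu0, sig0, size=rng.poisson(lam0))
    n[i], n_t[i], S[i] = (x > u).sum(), (x > u_t).sum(), x[x > u].sum()

def tail(mu, sig, c):        # 1 - F_X(c) for a lognormal(mu, sig) severity
    return norm.cdf((mu - np.log(c)) / sig)

def cond_mean(mu, sig, c):   # E[X | X > c] for a lognormal(mu, sig) severity
    return np.exp(mu + sig**2 / 2) * norm.cdf((mu + sig**2 - np.log(c)) / sig) / tail(mu, sig, c)

# Grid 1 of Table 6: mu in [8, 12], sigma in [1, 3], spacing 0.05.
MU, SIG = np.meshgrid(np.arange(8, 12.001, 0.05), np.arange(1, 3.001, 0.05), indexing="ij")
TAIL_U, TAIL_UT, CMEAN_U = tail(MU, SIG, u), tail(MU, SIG, u_t), cond_mean(MU, SIG, u)

results = []
for b in range(m // T):                    # 200 non-overlapping five-year blocks
    blk = slice(b * T, (b + 1) * T)
    lam_u, lam_ut, mu_u = n[blk].mean(), n_t[blk].mean(), S[blk].sum() / n[blk].sum()
    O1 = TAIL_UT / TAIL_U - lam_ut / lam_u             # percentile condition residual
    O2 = CMEAN_U - mu_u                                # moment condition residual
    i, j = np.unravel_index(np.argmin(O1**2 + O2**2), O1.shape)
    mu_h, sig_h = MU[i, j], SIG[i, j]
    lam_h = lam_u / tail(mu_h, sig_h, u)               # equation (7.7)
    sla = np.exp(mu_h + sig_h * norm.ppf(1 - (1 - alpha) / lam_h)) \
          + (lam_h - 1) * np.exp(mu_h + sig_h**2 / 2)  # single-loss approximation
    results.append((mu_h, sig_h, lam_h, sla))

print(np.mean(results, axis=0))            # compare with the Grid 1 results in Table 6
```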


TABLE 6 Mean of OpCar model parameter estimates over 200 independent five-year blocks if data is simulated from Poisson(1000)–lognormal(10, 2).

Grid 1: $\hat\mu \in [8, 12]$, $\hat\sigma \in [1, 3]$, $\delta\hat\mu = 0.05$, $\delta\hat\sigma = 0.05$

Parameter estimate | Objective function 1 | Objective function 2
$\hat\mu$          | 10.12 (1.26)         | 9.35 (1.03)
$\hat\sigma$       | 1.88 (0.46)          | 2.16 (0.28)
$\hat\lambda$      | 3248 (5048)          | 959 (290)
$\widehat{\mathrm{VaR}}_{0.999}$ | 10.6 (13.4) × 10^8 | 4.70 (0.88) × 10^8

Grid 2: $\hat\mu \in [6, 14]$, $\hat\sigma \in [0.5, 3.5]$, $\delta\hat\mu = 0.05$, $\delta\hat\sigma = 0.05$

Parameter estimate | Objective function 1 | Objective function 2
$\hat\mu$          | 9.79 (1.94)          | 8.85 (1.82)
$\hat\sigma$       | 1.88 (0.71)          | 2.26 (0.48)
$\hat\lambda$      | 1.18 (11.1) × 10^8   | 1.5 (21) × 10^7
$\widehat{\mathrm{VaR}}_{0.999}$ | 3.84 (36.8) × 10^13 | 4.34 (61) × 10^12

The means of the sample statistics $\hat\lambda_u$, $\hat\lambda_{\tilde u}$ and $\hat\mu_u$ over the blocks used for the parameter estimates are 654 (11), 519 (10) and 3.03 (0.27) × 10^5, respectively, with the corresponding standard deviations provided in brackets next to the mean. The square root of the mean-squared error of the parameter estimator is provided in brackets next to the mean.

The results are presented for two search grids with different bounds but the same spacing between the grid points, $\delta\hat\mu = 0.05$ and $\delta\hat\sigma = 0.05$. Here, $\mathrm{VaR}_{0.999}$ is calculated using the SLA (3.4). These results clearly show that this estimation procedure is not appropriate. We note that, in some instances (for some blocks), both objective functions produce the same parameter estimates for common sample estimate inputs $\hat\lambda_{\tilde u}$, $\hat\lambda_u$ and $\hat\mu_u$, and these estimates are close to the true values, although this is not systematically the case. We suspect that the reason is that, in some instances, no solution exists (or multiple solutions exist), and this then manifests in different (or inappropriate) solutions for the two objective functions. This is a serious concern for the accuracy of findings from this approach, and it is a key reason why calibration is often performed at more granular levels (where more data is available) or, in cases where events are very rare, why alternative sources of data such as KRIs, KPIs, KCIs and expert opinions should be incorporated into the calibration of the LDA model.


7.2 Capital estimation

The second stage of the OpCar framework is to take the fitted severity and frequency parameters of the LDA model at the group or institution level and then calculate the capital. The approach to capital calculation under the OpCar analysis involves the so-called SLA. Here, we note that it would have been more accurate, and just as easy, to perform the capital calculations numerically using methods such as Panjer recursion, FFT or Monte Carlo (see the detailed discussion of such approaches provided in Cruz et al (2015)).

Instead, the BCBS decided upon the following SLA to estimate the 0.999 quantile of the annual loss:

$$F_Z^{-1}(\alpha) \approx F_X^{-1}\!\left(1 - \frac{1-\alpha}{\lambda}\right) + (\lambda - 1)\,\mathrm{E}[X], \qquad (7.13)$$

which is valid asymptotically for $\alpha \to 1$ in the case of subexponential severity distributions. Here, $F_X^{-1}(\cdot)$ is the inverse of the distribution function of the random variable $X$. It is important to point out that a correct SLA (in the case of finite-mean subexponential severity and Poisson frequency) is actually given by the slightly different formula

$$F_Z^{-1}(\alpha) = F_X^{-1}\!\left(1 - \frac{1-\alpha}{\lambda}\right) + \lambda\,\mathrm{E}[X] + o(1). \qquad (7.14)$$

This is a reasonable approximation, but it is important to note that its accuracy depends on the distribution type and the values of the distribution parameters, and that further higher-order approximations are available (see the detailed discussion in Peters and Shevchenko (2015, Section 8.5.1) and the tutorial paper by Peters et al (2013)).

To illustrate the accuracy of this first-order approximation for the annual loss VaR at the 99.9% level, consider the Poisson($\lambda$)–lognormal($\mu$, $\sigma$) model with parameters $\lambda \in \{10, 100, 1000\}$, $\mu = 3$ and $\sigma \in \{1, 2\}$; note that the parameter $\mu$ is a scale parameter and will not affect relative differences in VaR. Then, we calculate the VaR of the annual loss from the LDA model for each possible set of parameters using a Monte Carlo simulation of $10^7$ years, which gives very good accuracy. We then evaluate the SLA approximation for each set of parameters. A summary of the findings is provided in Table 7.

The results indicate that, although the SLA accuracy may appear acceptable, the error can be significant for the parameter ranges of interest for OpCar modeling. This can have a material impact on the accuracy of the parameter estimation in the subsequent regression undertaken by the OpCar approach and on the resulting SMA formula.


TABLE 7 Accuracy of the SLA approximation of $\mathrm{VaR}_{0.999}$ in the case of a Poisson($\lambda$)–lognormal($\mu$, $\sigma$) model, with the scale parameter $\mu = 3$.

Parameters        | MC VaR                | SLA VaR     | Δ VaR        | Δ (%)
λ = 1000, σ = 1   | 3.88 × 10^4 (0.02%)   | 3.54 × 10^4 | −3.4 × 10^3  | −8.7
λ = 1000, σ = 2   | 4.24 × 10^5 (0.28%)   | 4.19 × 10^5 | −5.3 × 10^3  | −1.3
λ = 100, σ = 1    | 5.42 × 10^3 (0.06%)   | 4.74 × 10^3 | −6.8 × 10^2  | −12.5
λ = 100, σ = 2    | 1.17 × 10^5 (0.4%)    | 1.17 × 10^5 | −6.2 × 10^2  | −0.52
λ = 10, σ = 1     | 1.27 × 10^3 (0.15%)   | 1.16 × 10^3 | −1.1 × 10^2  | −8.7
λ = 10, σ = 2     | 3.57 × 10^4 (0.52%)   | 3.56 × 10^4 | −0.8 × 10^2  | −0.23

Δ VaR is the difference between the SLA approximation and the Monte Carlo estimate (the standard error of the Monte Carlo estimate is in brackets next to it). Δ (%) is the relative difference between the SLA and Monte Carlo estimates.
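A comparison of this type can be reproduced approximately with the following minimal Python sketch (ours, not the authors' code). It uses the corrected single-loss approximation (7.14) and far fewer simulated years than the $10^7$ quoted above, so the Monte Carlo figures will be noisier than those in Table 7.

```python
# Monte Carlo VaR_0.999 of a Poisson(lam)-lognormal(mu=3, sigma) annual loss versus the
# single-loss approximation F_X^{-1}(1 - (1-alpha)/lam) + lam*E[X].
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=3)
mu, alpha, n_years = 3.0, 0.999, 100_000   # fewer years than the 10^7 used in the text

def mc_var(lam, sigma):
    counts = rng.poisson(lam, size=n_years)
    z = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])
    return np.quantile(z, alpha)

def sla_var(lam, sigma):
    # Lognormal quantile plus mean correction, as in (7.14) without the o(1) term.
    return np.exp(mu + sigma * norm.ppf(1 - (1 - alpha) / lam)) + lam * np.exp(mu + sigma**2 / 2)

for lam in (10, 100, 1000):
    for sigma in (1.0, 2.0):
        mc, sla = mc_var(lam, sigma), sla_var(lam, sigma)
        print(f"lam={lam:5d} sigma={sigma:.0f}  MC={mc:.3e}  SLA={sla:.3e}  rel.diff={(sla - mc) / mc:+.2%}")
```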

7.3 Model selection and model averaging

The OpCar methodology attempts to fit the six severity models described in the previous section to each bank's data, generating up to six 0.999 VaR SLA estimates per bank, depending on which models survive the imposed "filters". The final estimate of the 0.999 VaR is then found as an average of the VaR estimates from the models that survived.

We would like to comment on the "filters" that were used to select the models to be retained for a bank after the estimation procedures were completed. These heuristic, ad hoc filters are not based on statistical theory and are determined by the following criteria:

• whether the proportion of losses above €20 000 is within a certain range (1–40%);

• whether the ratio between loss frequency and total assets is within a certain range (0.1–70 losses per billion euros of assets);

• whether the model estimation (outlined in Section 7.1), which is based on an iterative solution, converged.

We believe that, in addition to practical considerations, other more rigorous statistical approaches to model selection could also be considered. For instance, Dutta and Perry (2006) discuss the importance of fitting distributions that are flexible but appropriate for the accurate modeling of OpRisk data. They focused on the following five simple attributes in deciding on a suitable statistical model for the severity distribution.

• Good fit: statistically, how well does the model fit the data?

• Realistic: if a model fits well in a statistical sense, does it generate a loss distribution with a realistic capital estimate?

• Well specified: are the characteristics of the fitted data similar to the loss data and logically consistent?

• Flexible: how well is the model able to reasonably accommodate a wide variety of empirical loss data?

• Simple: is the model easy to apply in practice, and is it easy to generate random numbers for the purposes of loss simulation?

Further, Cruz et al (2015, Chapter 8) provide a detailed description of appropriate model selection approaches that can be adopted in OpRisk modeling; these have been developed specifically for this setting, rely on rigorous statistical methods and avoid heuristic rules.

The results of the heuristic filters applied in OpCar excluded a large number of distribution fits from the models for each institution. It was stated in Basel Committee on Banking Supervision (2014, p. 22) that the "OpCar calculator was run and validated on a sample of 121 out of 270 QIS banks which were able to provide data on operational risk losses of adequate quality" and that "four of the distributions (lognormal, log-gamma, Pareto-medium and Pareto-heavy) were selected for the final OpCar calculation around 20% of the time or less".

7.4 OpCar regression analysis

Finally, we discuss aspects of the regression in the OpCar methodology, which builds on the OpCar parameter and VaR estimation outlined in the previous section. Basically, each bank's approximated capital is regressed against a range of factors in linear and nonlinear regression formulations. Given the potential for significant uncertainty in all aspects of the OpCar framework presented above, we argue that the results of this remaining analysis may be spurious or biased, and we would recommend further study of this aspect. Furthermore, these approximation errors will propagate into the regression parameter estimation in a nonlinear manner, making it difficult to determine directly the effect of such inaccuracies.

The regression models considered come in two forms: linear and nonlinear. Given the sample of $J$ banks on which to perform the regression, consider $(Y_j, X_{1,j},\dots,X_{20,j})$, $j = 1,\dots,J$, where $Y_j$ is the $j$th bank's capital (the dependent variable) obtained from the two-stage procedure described in Sections 7.1 and 7.2. Here, $X_{i,j}$ is the $i$th factor or covariate (independent variable) derived from the balance sheet and income statement of the $j$th bank. The OpCar methodology considered twenty potential covariates. Only one observation of the dependent variable $Y_j$ per bank was used, and thus only one observation of each independent variable $X_{i,j}$ was needed; this was approximated by the average over the QIS reporting years.


Then, the linear regression model considered in OpCar was specified as

$$Y_j = b_0 + \sum_{i=1}^{20} b_i X_{i,j} + \epsilon_j, \qquad (7.15)$$

with independent and identically distributed (iid) errors $\epsilon_1,\dots,\epsilon_J$ drawn from a normal distribution with zero mean and common variance.

In this particular practical application, it is unlikely that the regression assumption of homoscedastic error variance is appropriate. Failure of this assumption, which is likely given the heterogeneity of the banks considered, would produce spurious results in the regression analysis. Note that "outliers" were removed in the analysis, but no indication was given of what was considered an outlier; this, too, would have biased the results. Rather than arbitrarily trying to force homoscedastic errors, we would have suggested considering weighted least squares if outliers were a concern.
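As an illustration of this suggestion only, the following minimal Python sketch shows a weighted least-squares fit on entirely hypothetical data (the QIS data are not public); the design matrix, weights and coefficients are invented for the example.

```python
# Weighted least squares: observations are down-weighted according to an assumed per-bank
# variance proxy rather than being discarded as "outliers". All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=4)
J, K = 121, 20                              # banks and candidate covariates, as in OpCar
X = np.column_stack([np.ones(J), rng.lognormal(0, 1, size=(J, K))])   # hypothetical design
beta_true = rng.normal(0, 1, size=K + 1)
w = rng.uniform(0.5, 5.0, size=J)           # heteroscedastic variance proxies (hypothetical)
y = X @ beta_true + rng.normal(0, np.sqrt(w))                          # capital responses

W = np.diag(1.0 / w)                        # weight each bank inversely to its error variance
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)                   # (X'WX)^{-1} X'Wy
print(beta_wls[:5])
```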

The regression analysis undertaken in OpCar, and then utilized as a precursor to the SMA formulation, implicitly assumes that all bank capital figures, used as responses (dependent variables) in the regression against factors (independent variables) such as the BI, come from a common population. This is demonstrated by the fact that the regression is performed across all banks jointly, so a common regression error distribution type is assumed. This would probably not be the case in practice. We believe that other factors, such as banking volume, jurisdiction, practice and risk management governance structures, can have significant influences on this relationship. To some extent, the BI is supposed to capture aspects of this, but it cannot capture all of them. Therefore, the regression assumptions about the error distribution (typically iid, zero-mean, homoscedastic (constant-variance) and normal) are probably questionable. Unfortunately, this directly affects all analysis of the significance of the regression relationships, the choice of covariates to use in the model, and so on.

It is also odd that, in the actual OpCar analysis, the linear regression model (7.15) was further reduced to the simplest subfamily of linear models, given by

$$Y_j = b_0 + b_i X_{i,j} + \epsilon_j, \qquad (7.16)$$

ie, only simple (single-covariate) linear models, and not generalized linear model types, were considered. This is unusual, as the presence of multiple factors and their interactions can often significantly improve the analysis, and estimation in such cases is straightforward. It is surprising to see this very limiting restriction in the analysis.


The second form of regression model considered in OpCar was nonlinear. It is given by the functional relationships

$$R(x) = x F(x), \qquad \frac{\mathrm{d}}{\mathrm{d}x} R(x) = F(x) + x F'(x), \qquad \frac{\mathrm{d}^2}{\mathrm{d}x^2} R(x) = 2F'(x) + x F''(x), \qquad (7.17)$$

where $x$ is the proxy indicator, $R(x)$ represents the total OpRisk requirement (capital) and $F(x)$ is the functional coefficient relationship for any level $x$. It is assumed that $F(\cdot)$ is twice differentiable. The choice of function $F(x)$ selected is given by

$$F(x) = \frac{\theta (x - A)^{1-\alpha}}{1 - \alpha}, \qquad (7.18)$$

with $\alpha \in [0, 1]$, $\theta > 0$ and $A \le 0$.

This model, as described in the Basel consultative document, is incomplete in the sense that it fails to adequately explain how multiple covariates were incorporated into the regression structure. It seems that, again, only single covariates are considered, one at a time; we emphasize once more that this is a very limited and simplistic approach to such an analysis. There are standard R software packages that would have extended this analysis to multiple-covariate regression structures, which we argue would have been much more appropriate.

Now, in the nonlinear regression model, if one takes $R(x_i) = Y_i$, ie, the $i$th bank's capital figure, this model could in principle be reinterpreted as a form of quantile regression model, such as those discussed recently in Cruz et al (2015). In this case, the response is the quantile function, and the function $F$ would have been selected as a transform of a quantile error function, such as the class of Tukey transforms discussed in Peters et al (2016).

The choice of function $F(x)$ adopted by the modelers is just a translated and scaled power quantile error function of the type discussed in Dong et al (2015, Equation 14). When the nonlinear model is interpreted as a quantile regression, it is documented in several places (see the discussion in Peters et al (2016)) that least squares estimation is not a good choice for parameter estimation in such models. Yet this is the approach adopted in the OpCar framework.

Typically, when fitting quantile regression models, one would instead use a loss function corresponding to minimizing the expected loss of $Y - u$ with respect to $u$, according to

$$\min_u \ \mathrm{E}[\rho_\tau(Y - u)], \qquad (7.19)$$

where $\rho_\tau(y)$ is given by

$$\rho_\tau(y) = y\,(\tau - 1_{\{y < 0\}}). \qquad (7.20)$$
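A minimal Python sketch of estimation under this loss function is given below (ours, for illustration): minimizing the empirical expected loss in (7.19) recovers the $\tau$-quantile of the response, which is the basis of quantile-regression fitting and could equally be applied to a parametric curve such as (7.18) instead of least squares.

```python
# The quantile ("pinball") loss of (7.19)-(7.20): minimizing its empirical mean over u
# recovers the tau-quantile of Y.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=5)

def rho(y, tau):
    return y * (tau - (y < 0))             # check / pinball loss rho_tau(y)

y_sample = rng.lognormal(0, 1, size=50_000)
tau = 0.9
fit = minimize(lambda u: rho(y_sample - u, tau).mean(), x0=[1.0], method="Nelder-Mead")
print(fit.x[0], np.quantile(y_sample, tau))   # the two estimates should be close
```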

Since the OpCar framework is only defined at the institution level, the regression framework cannot easily incorporate BEICFs such as KRIs, KCIs and KPIs in a natural way, because these measures are typically recorded at a more granular level than the institution level. Instead, new OpRisk proxies and indicators were created under the OpCar approach based on balance sheet outputs.

In fact, twenty proxy indicators were developed from the balance sheets and income statements of the participating banks selected for the BCBS 2010 QIS data. These included refinements related to GI, total assets, provisions, administrative costs and alternative types of factors. Unfortunately, the exact choice of the twenty indicators used was not explicitly stated in the OpCar document (Basel Committee on Banking Supervision 2014, Annex 3), which makes it difficult to discuss the pros and cons of the choices made. Nor was a careful analysis undertaken of whether the factors selected could have produced collinearity issues in the regression design matrix. For instance, if one covariate is combined with another factor derived from it, or strongly related to it (as seems to be the case with the GI-based factors), then the joint regression modeling with both factors will lead to increased variance in the parameter estimation and misguided conclusions about the significance (statistical and practical) of the factors selected in the regression model.

8 PROPOSITION: A STANDARDIZATION OF THE ADVANCED MEASUREMENT APPROACH

The SMA cannot be considered an alternative to AMA models. We suggest that the AMA not be discarded but instead improved by addressing its current weaknesses: it should be standardized. Details of how a rigorous and statistically robust standardization could start to be considered, with practical considerations, are suggested below.

Rather than discarding all the OpRisk modeling allowed under the AMA, the regulator could instead propose to standardize the approaches to modeling, based on the knowledge of OpRisk modeling practices accumulated to date. We propose one class of models that can act in this manner, allowing one to incorporate the key features offered by AMA LDA-type models (internal data, external data, BEICFs and scenarios) together with other important information on factors, which the SMA and OpCar approaches have tried but failed to achieve. As has already been noted, one issue with the SMA and OpCar approaches is that they try to model all OpRisk processes at the institution or group level with a single LDA model and a simplistic regression structure. This is bound to be problematic because of the very nature and heterogeneity of OpRisk loss processes. In addition, it fails to allow for the incorporation of many important sources of explanatory information on OpRisk loss processes, such as BEICFs, which are often no longer informative or appropriate to incorporate at the institution level compared with the individual BL/ET level.

A standardization of the AMA internal models would remove the wide range of heterogeneity in model type. Our recommendation involves a bottom-up modeling approach in which, for each BL/ET OpRisk loss process, we model the severity and frequency components in an LDA structure. This can comprise a hybrid LDA model with factor regression components, which allow us to include the factors driving OpRisk in the financial industry at a sufficient level of granularity, while also utilizing the class of models known as generalized additive models for location, scale and shape (GAMLSS) for the severity and frequency aspects of the LDA framework. The class of GAMLSS models can be specified to make sure that the severity and frequency families are comparable across institutions, allowing for both risk sensitivity and capital comparability. We recommend in this regard the Poisson and generalized Gamma classes for the frequency and severity families, as these capture the full range of loss models used in OpRisk practice over the last fifteen years, including Gamma-, Weibull-, lognormal- and Pareto-type severities.

Standardizing recommendation 1. This leads us to the first standardizing recommendation, relating to the level of granularity of modeling in OpRisk. The level of granularity of the modeling procedure is important when incorporating different sources of OpRisk data, such as BEICFs and scenarios. This debate has been going on for the last ten years, with much discussion of bottom-up versus top-down OpRisk modeling (see the overviews in Cruz et al (2015) and Peters and Shevchenko (2015)). We advocate that a bottom-up approach be recommended as the standard modeling structure, as it allows for greater understanding and more appropriate model development of the actual loss processes under study. We therefore argue that sticking with the fifty-six BL/ET structure of Basel II is best for a standardizing framework, with a standard aggregation procedure to the institution or group level. We argue that alternatives such as the SMA and OpCar approaches, which try to model multiple differently featured loss processes combined into one loss process at the institution level, are bound to fail, as they need to capture high-frequency events as well as high-severity events. This is very difficult, if not impossible, to capture with a single LDA model at the institution level, and it should be avoided. Further, a bottom-up approach allows for greater model interpretation and better incorporation of OpRisk loss data, such as BEICFs.


Standardizing recommendation 2. This brings us to our second recommendation for standardization in OpRisk modeling: namely, we propose to standardize the modeling class in order to remove the wide range of heterogeneity in model type. We propose a standardization that involves a bottom-up modeling approach, where for each BL/ET level of the OpRisk loss process we model the severity and frequency components in an LDA structure comprised of a hybrid LDA model with factor regression components. The way to achieve this is to utilize a class of GAMLSS regression models for the severity and frequency model calibrations; that is, two GAMLSS regressions are developed, one for the severity fitting and the other for the frequency fitting. This family of models is, in our opinion, flexible enough to capture any type of frequency or severity model that may be observed in practice in OpRisk data, while incorporating factors such as BEICFs (KRIs, KPIs and KCIs) naturally into the regression structure. This produces a class of hybrid factor regression models in an OpRisk LDA family that can easily be fitted, simulated from and used in OpRisk modeling to aggregate to the institution level. Further, as more years of data become available, time series structure in the severity and frequency aspects of each loss process can be naturally incorporated into a GAMLSS regression LDA framework.

Standardizing recommendation 3. The class of models considered for the conditional response in the GAMLSS severity model can be standardized. There are several possible examples of such models that may be appropriate (Chavez-Demoulin et al 2015; Ganegoda and Evans 2013). However, for the severity models we advocate that the class be restricted in regulation to one family, the generalized Gamma family, with these models developed in an LDA hybrid factor GAMLSS structure. Such models are appropriate for OpRisk, as they admit special members that correspond to the lognormal, Pareto, Weibull and Gamma distributions. All of these are popular OpRisk severity models used in practice and represent the range of best practice among AMA banks, as observed in the survey by the Basel Committee on Banking Supervision (2009). Since the generalized Gamma family contains all of these models as special sub-cases, banks would only ever have to fit one class of severity model to each BL/ET LDA severity profile. The most appropriate family member would then be resolved in the fitting through the estimation of the shape and scale parameters, so that if a lognormal model were appropriate it would be selected, whereas if a Gamma model were more appropriate it would likewise be selected, all from a single fitting procedure. Further, the frequency model could be standardized as a Poisson GAMLSS regression structure, as the addition of explanatory covariates, together with time-varying and possibly stochastic intensity, allows for a flexible enough frequency model for all types of OpRisk loss processes.
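As an illustration of this point (and not a regulatory specification), the following minimal Python sketch fits a generalized Gamma severity using scipy's gengamma parameterization, in which $c = 1$ recovers the Gamma and $a = 1$ the Weibull sub-families, with the lognormal arising as a limiting case; the simulated data and parameter values are hypothetical.

```python
# A single generalized Gamma fit can cover several common severity shapes.
import numpy as np
from scipy.stats import gengamma, weibull_min

losses = weibull_min.rvs(c=0.7, scale=50_000, size=5_000, random_state=42)  # hypothetical losses

a_hat, c_hat, loc_hat, scale_hat = gengamma.fit(losses, floc=0)  # fix loc at 0 for a loss model
print(f"a = {a_hat:.2f}, c = {c_hat:.2f}, scale = {scale_hat:.0f}")
# An estimate of a close to 1 indicates the fit has effectively selected the Weibull sub-family.
```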


Standardizing recommendation 4. The fitting of these models should be performed in a regression-based manner in the GAMLSS framework, which incorporates truncation and censoring in a penalized maximum likelihood framework (see Stasinopoulos and Rigby 2007). We believe that by standardizing the fitting procedure to one that is statistically rigorous, well understood in terms of the estimator properties and robust when incorporating a censored likelihood appropriately, we will remove the range of heuristic practices that has arisen in fitting models in OpRisk. The penalized regression framework, based on the L1 parameter penalty, will also allow shrinkage methods to be used to select the most appropriate explanatory variables in the GAMLSS severity and frequency regression structures.
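The following minimal Python sketch (ours, not the GAMLSS software referenced above) illustrates the flavor of such a penalized fit: an L1-penalized Poisson frequency regression on hypothetical BEICF covariates, with uninformative indicators shrunk toward zero. Threshold truncation and censoring, which a production implementation must handle, are omitted here.

```python
# L1-penalized Poisson frequency regression on hypothetical covariates.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=7)
n_years, n_kri = 40, 6
Z = rng.normal(size=(n_years, n_kri))              # hypothetical standardized KRI/KCI series
beta_true = np.array([0.6, 0.0, -0.4, 0.0, 0.0, 0.3])
counts = rng.poisson(np.exp(2.0 + Z @ beta_true))  # annual loss counts

def objective(params, lam_pen=5.0):
    b0, b = params[0], params[1:]
    eta = b0 + Z @ b
    nll = np.sum(np.exp(eta) - counts * eta)       # Poisson negative log-likelihood (up to a constant)
    return nll + lam_pen * np.abs(b).sum()         # L1 (lasso-type) shrinkage on the KRI effects

fit = minimize(objective, x0=np.zeros(n_kri + 1), method="Powell")
print(np.round(fit.x, 2))   # coefficients near zero correspond to indicators shrunk out
```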

Standardizing recommendation 5. The choice between Bayesian and frequentist formulations should be left to the discretion of each bank, which can decide which version best suits its practice. However, we note that, under a Bayesian formulation, one can adequately incorporate multiple sources of information, including expert opinion and scenario-based data (see the discussions in Cruz et al (2015), Peters et al (2009) and Shevchenko and Wüthrich (2006)).

Standardizing recommendation 6. The sets of BEICFs and factors to be incorporated into each BL/ET LDA factor regression model for severity and frequency should be specified by the regulator. There should be a core set of factors, incorporated by all banks, that includes BEICFs and other selected factors. The following types of KRI categories can be considered in developing the core family of factors (see Chapelle 2013).

• Exposure indicators: any significant change in the nature of the business environment and its exposure to critical stakeholders or critical resources. Flag any change in the risk exposure.

• Stress indicators: any significant rise in the use of resources by the business, whether human or material. Flag any risk arising from overloaded humans or machines.

• Causal indicators: metrics capturing the drivers of key risks to the business. The core of preventive KRIs.

• Failure indicators: poor performance and failing controls are strong risk drivers. Failed KPIs and KCIs.

In this approach, a key difference is that, instead of fixing the regression coefficients for all banks (as is the case for the SMA and OpCar), pretending that every bank has the same regression relationship as the entire banking population, one should standardize the class of factors: specify explicitly how they should be collected and at what frequency, and then specify that they should be incorporated in the GAMLSS regression. This will allow each bank to calibrate the regression model to its own loss experience through a rigorous penalized maximum likelihood procedure, with strict criteria for cross-validation-based testing of the amount of penalization admitted in the regression when shrinking factors out of the model. This approach has the advantage that banks will not only start to incorporate BEICF information into OpRisk models in a structured and statistically rigorous manner, but will also be forced to collect and consider such factors in a more principled manner.

9 CONCLUSIONS

In this paper, we discussed and studied the weaknesses of the SMA formula for OpRisk capital recently proposed by the Basel Committee to replace the AMA and other current approaches. We also outlined the issues with the closely related OpCar model, which is the precursor of the SMA. There are significant potential problems with the use of the SMA, such as capital instability, risk insensitivity and capital super-additivity, as well as serious concerns regarding the estimation of this model. We advocate standardization of the AMA rather than its complete removal, and we provide several recommendations based on our experience with OpRisk modeling.

DECLARATION OF INTEREST

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper. Pavel Shevchenko acknowledges the support of the Australian Research Council's Discovery Projects funding scheme (project number DP160103489).

ACKNOWLEDGEMENTS

We would like to thank the many OpRisk industry practitioners and academics in Australia, Europe, the United Kingdom and the United States, as well as participants at the recent OpRisk Europe 2016 conference in London, for useful discussions and for sharing the views and findings that have helped in the development and presentation of the ideas within this paper.


REFERENCES

Basel Committee on Banking Supervision (2006). International convergence of capital measurement and capital standards: a revised framework, comprehensive version. Report, June, Bank for International Settlements.

Basel Committee on Banking Supervision (2009). Observed range of practice in key elements of Advanced Measurement Approaches (AMA). Report, July, Bank for International Settlements.

Basel Committee on Banking Supervision (2014). Operational risk: revisions to the simpler approaches. Report, October, Bank for International Settlements.

Basel Committee on Banking Supervision (2016). Standardized measurement approach for operational risk. Report, March, Bank for International Settlements.

Chapelle, A. (2013). The importance of preventive KRIs. Operational Risk and Regulation 58(April).

Chavez-Demoulin, V., Embrechts, P., and Hofert, M. (2015). An extreme value approach for modeling operational risk losses depending on covariates. Journal of Risk and Insurance, forthcoming (http://doi.org/bm2q).

Crockford, G. N. (1982). The bibliography and history of risk management: some preliminary observations. Geneva Papers on Risk and Insurance 7(23), 169–179 (http://doi.org/bppj).

Cruz, M., Peters, G., and Shevchenko, P. (2015). Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk. Wiley (http://doi.org/bppk).

Dong, A. X., Chan, J. S., and Peters, G. W. (2015). Risk margin quantile function via parametric and non-parametric Bayesian approaches. Astin Bulletin 45(3), 503–550 (http://doi.org/bppm).

Dutta, K., and Perry, J. (2006). A tale of tails: an empirical analysis of loss distribution models for estimating operational risk capital. Working Paper 06-13, Federal Reserve Bank of Boston (http://doi.org/fzcjcw).

Ganegoda, A., and Evans, J. (2013). A scaling model for severity of operational losses using generalized additive models for location scale and shape (GAMLSS). Annals of Actuarial Science 7(1), 61–100 (http://doi.org/bppn).

Guégan, D., and Hassani, B. (2013). Using a time series approach to correct serial correlation in operational risk capital calculation. The Journal of Operational Risk 8(3), 31–56 (http://doi.org/bppq).

Peters, G., and Shevchenko, P. (2015). Advances in Heavy Tailed Risk Modeling: A Handbook of Operational Risk. Wiley (http://doi.org/bppr).

Peters, G., Shevchenko, P. V., and Wüthrich, M. V. (2009). Dynamic operational risk: modeling dependence and combining different sources of information. The Journal of Operational Risk 4(2), 69–104 (http://doi.org/bprz).

Peters, G. W., Targino, R. S., and Shevchenko, P. V. (2013). Understanding operational risk capital approximations: first and second orders. Journal of Governance and Regulation 2(3), 58–78.

Peters, G. W., Chen, W. Y., and Gerlach, R. H. (2016). Estimating quantile families of loss distributions for non-life insurance modelling via L-moments. Risks 4(2) (http://doi.org/bmqv).

Shevchenko, P. V. (2011). Modelling Operational Risk Using Bayesian Inference. Springer (http://doi.org/bxsvwq).


Shevchenko, P. V., and Wüthrich, M. V. (2006). The structural modelling of operational risk via the Bayesian inference: combining loss data with expert opinions. The Journal of Operational Risk 1(3), 3–26 (http://doi.org/bpqz).

Stasinopoulos, D. M., and Rigby, R. A. (2007). Generalized additive models for location scale and shape (GAMLSS) in R. Journal of Statistical Software 23(7), 1–46 (http://doi.org/bpps).


Journal of Operational Risk 11(3), 51–69
DOI: 10.21314/JOP.2016.184

Research Paper

Comments on the Basel Committee on Banking Supervision proposal for a new standardized approach for operational risk

Giulio Mignola,1 Roberto Ugoccioni1 and Eric Cope2

1 Intesa Sanpaolo, Enterprise Risk Management Department, Corso Inghilterra 3, 10138 Torino, Italy; emails: [email protected], [email protected]
2 Credit Suisse AG, Europaallee 1, 8004 Zurich, Switzerland; email: [email protected]

(Received July 4, 2016; accepted July 21, 2016)

ABSTRACT

On March 4, 2016, the Basel Committee on Banking Supervision published a consultative document in which a new methodology, the standardized measurement approach (SMA), was introduced for computing operational risk regulatory capital for banks. In this paper, the behavior of the SMA is studied under a variety of hypothetical and realistic conditions, showing that the simplicity of the new approach is very costly in some ways. We find that the SMA does not respond appropriately to changes in the risk profile of a bank, and it is incapable of differentiating among the range of possible risk profiles across banks. We also discover that SMA capital results generally appear to be more variable across banks than the previous advanced measurement approach (AMA) option of fitting the loss data, and that the SMA can result in banks over- or under-insuring against operational risks relative to previous AMA standards. Finally, we argue that the SMA is retrograde in terms of its capability to measure risk and, perhaps more importantly, fails to create any link between management actions and capital requirement.

Corresponding author: G. Mignola

Keywords: operational risk measurement; advanced measurement approach (AMA); standardized measurement approach (SMA); Basel Committee on Banking Supervision (BCBS); capital requirement; capital variability.

1 INTRODUCTION

On March 4, 2016, the Basel Committee on Banking Supervision (BCBS) published a consultative document aimed at revising the entire minimum regulatory capital framework for operational risk (Basel Committee on Banking Supervision 2016¹). This document proposes to replace all the currently available options for computing regulatory capital (the basic indicator approach (BIA), the standardized approach/alternative standardized approach (TSA/ASA) and the advanced measurement approach (AMA)) with a single standardized measure called the standardized measurement approach (SMA). The goal of the SMA is to enhance the simplicity of capital calculation and promote greater comparability among banks, while still retaining a degree of risk sensitivity.

¹ For relevant BCBS comments, see also www.bis.org/press/p160304.htm.

The SMA is undeniably simple. However, we take issue with the notion that it promotes comparability across banks; it is also less risk sensitive than we might desire it to be. We base this evaluation on a study of the behavior of the SMA under a variety of hypothetical and realistic conditions, from which we derive several conclusions.

First, the SMA does not respond appropriately to changes in the risk profile of a bank, and it is incapable of strongly differentiating among the range of possible risk profiles across banks. For one, the SMA does not grow in proportion to expected operational losses, which is what one would expect of a capital measure. At most, there can be a 50% difference in capital requirements between two banks of similar size yet at extreme ends of the possible loss profiles under the SMA. Under the AMA or similar value-at-risk (VaR)-type models, there would be more than a factor of 30 difference between the capital levels at these banks, indicating that the SMA is not appropriately risk sensitive.

Second, SMA capital results generally appear to be more variable across banks than they would be under a simple AMA-type risk model fitted to the loss data. The SMA is closely associated with a measure of bank income, which is imperfectly correlated with operational losses. Based on industry survey data collected from the ORX consortium (ORX 2016), we find that the range of income levels observed together with a given operational loss profile can vary quite widely. This high degree of variability indicates that the SMA cannot achieve comparability, as banks with similar risk profiles may be assigned quite different levels of capital under the SMA.

Third, the SMA can result in banks over- or under-insuring against operational risks relative to AMA-level standards. Note that the SMA is not tied to any risk measure or standard, whereas the AMA measure was tied to the 99.9th percentile VaR of the annual total loss distribution. In our investigation, we find an extreme range of possible confidence levels associated with the SMA, which can be as low as the 99th percentile and as high as the 99.9999th percentile, or more. This provides further evidence that the method does not sufficiently differentiate among risk profiles.

2 DESCRIPTION OF THE STANDARDIZED MEASUREMENT APPROACH

The main component of the SMA is a measure of business volume called the business indicator (BI), which is based on the main elements of gross income (today's driver for the BIA and TSA) mingled with some indicators of expenses (including operational losses). The BI is composed of the sum of the three-year averages of the following:

• the interest, lease and dividend component (ILDC), which is basically the net interest margin, including net operating and financial lease results and dividend income;

• the service component (SC), which includes the maximum of fee income and fee expenses as well as the maximum of other income and other expenses;

• the financial component (FC), which includes the absolute value of the profit and loss (P&L) of the trading book and the absolute value of the P&L of the banking book.

That is, the BI can be expressed as

$$\mathrm{BI} = \mathrm{ILDC}_{\mathrm{avg}} + \mathrm{SC}_{\mathrm{avg}} + \mathrm{FC}_{\mathrm{avg}}.$$

The BI represents the single determining factor of the BI component (BIC), which is the base value of the SMA. The BIC is obtained by applying a specific coefficient plus an offset to the BI. The coefficient and the offset depend on the BI bucket size, as shown in Table 1.

The piecewise-linear relationship between the BI and the BIC is shown in Figure 1, which demonstrates the progressive (super-linear) nature of the BIC in relation to the BI.


TABLE 1 BIC of the SMA as a function of the BI indicator.

BI range (€)   | BIC formula
0–1 bn         | 0.11 × BI
1–3 bn         | 110 m + 0.15 × (BI − 1 bn)
3–10 bn        | 410 m + 0.19 × (BI − 3 bn)
10–30 bn       | 1.74 bn + 0.23 × (BI − 10 bn)
> 30 bn        | 6.34 bn + 0.29 × (BI − 30 bn)

The progressive (more than linear) behavior is explicit in the different coefficients applied to the different buckets of BI.

FIGURE 1 BIC of the SMA as a function of the BI.

[Figure: BIC (bn) plotted against BI (bn) over the range 0–50 bn.]

The progressive (more than linear) behavior is clearly seen from the graph.

To supplement the BIC and take into account the different risk profiles of banks of similar size, the BCBS introduced a modifier to the BIC based on individual banks' internal losses, which it termed the loss component (LC). The LC is only required for banks with a BI of greater than €1 billion and is calculated according to the following expression:

$$\mathrm{LC} = \frac{7\sum_{\text{10yrs}} x + 7\sum_{\text{10yrs}} x\,1_{\{x > \text{€10 m}\}} + 5\sum_{\text{10yrs}} x\,1_{\{x > \text{€100 m}\}}}{10},$$

where $x$ represents individual losses experienced over a period of ten years, and $1_{\{\cdot\}}$ represents the indicator function, which equals 0 if the condition inside the brackets is false, and 1 otherwise.

The LC can be expressed as a multiple of the total annual expected loss, as follows:

$$\mathrm{LC} = \alpha^* \cdot \mathrm{EL},$$

where EL is the expected loss and the risk factor $\alpha^*$ is given by

$$\alpha^* = 7 + 7\alpha_{10} + 5\alpha_{100}.$$


Here, $\alpha_{10}$ and $\alpha_{100}$ are the fractions of total losses exceeding €10 million or €100 million, respectively, ie,

$$\alpha_Y = \frac{\sum x\,1_{\{x > Y\}}}{\sum x}.$$

The risk factor $\alpha^*$ can take values as low as 7, for banks that have only experienced losses smaller than €10 million (ie, $\alpha_{10} = \alpha_{100} = 0$), or as high as 19, for banks that have only experienced losses greater than €100 million (ie, $\alpha_{10} = \alpha_{100} = 1$). These extreme outcomes define the range of loss profiles that can be differentiated by the LC; hence, $\alpha^*$ represents the SMA's measure of the riskiness of the bank.

The LC enters into the SMA as a multiplier of the BIC for banks with a BI greater than €1 billion. (The LC does not play any role for banks with a BI of less than €1 billion.) The exact formula is

$$\mathrm{SMA} = \begin{cases} \mathrm{BIC} & \text{if } \mathrm{BI} < \text{€1 bn}, \\ \text{€110 m} + (\mathrm{BIC} - \text{€110 m}) \ln\!\left(\mathrm{e} - 1 + \dfrac{\mathrm{LC}}{\mathrm{BIC}}\right) & \text{if } \mathrm{BI} > \text{€1 bn}. \end{cases}$$

For banks with a BI greater than €1 billion, if the LC is larger (respectively, smaller) than the BIC, the SMA will be larger (smaller) than the BIC. When the LC is exactly equal to the BIC (as the BCBS claims will be true for the "industry average bank"), then the SMA will also coincide with the BIC.
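To make the mechanics explicit, the following minimal Python sketch (our reading of the formulas above, not an official calculator) computes the SMA capital from a BI figure and a ten-year history of individual losses; the example inputs are hypothetical.

```python
# SMA capital: BI -> BIC via the bucketed coefficients of Table 1, LC from ten years of
# individual losses, combined through the logarithmic dampening function. Amounts in euros.
import numpy as np

def bi_component(bi):
    buckets = [(0e9, 1e9, 0.11, 0.0),        # (lower bound, upper bound, coefficient, offset)
               (1e9, 3e9, 0.15, 110e6),
               (3e9, 10e9, 0.19, 410e6),
               (10e9, 30e9, 0.23, 1.74e9),
               (30e9, np.inf, 0.29, 6.34e9)]
    for lo, hi, coef, offset in buckets:
        if bi <= hi:
            return offset + coef * (bi - lo)

def loss_component(losses):
    """losses: array of individual loss amounts over the ten-year window."""
    x = np.asarray(losses, dtype=float)
    return (7 * x.sum() + 7 * x[x > 10e6].sum() + 5 * x[x > 100e6].sum()) / 10.0

def sma_capital(bi, losses):
    bic = bi_component(bi)
    if bi < 1e9:
        return bic
    lc = loss_component(losses)
    return 110e6 + (bic - 110e6) * np.log(np.e - 1 + lc / bic)

# Example: a bank with a BI of EUR 5bn and a hypothetical ten-year loss history.
losses = np.concatenate([np.full(900, 1e6), np.full(20, 30e6), np.full(2, 150e6)])
print(f"SMA capital: EUR {sma_capital(5e9, losses) / 1e9:.2f} bn")
```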

3 CONSIDERATIONS ON THE STANDARDIZED MEASUREMENT APPROACH

The objective of the minimum capital requirement should be to protect the bank from the effects of unexpected losses, creating a capital buffer large enough to absorb those losses with a reasonable degree of certainty. In the current AMA framework, this degree of certainty is defined as the 99.9% VaR level of confidence in the annual total loss distribution. As mentioned above, the SMA is not directly linked to any such risk measure or standard, and the operational loss profile is explicitly represented through the LC, which, as we have seen, can be reduced to an EL component times a risk factor $\alpha^*$. The risk factor increases if a bank moves from a predominantly high-frequency, low-intensity (HFLI) loss profile to a more low-frequency, high-intensity (LFHI) loss profile.

Consistent with the objective stated above, we should expect the SMA (or any reasonable measure of capital under a heavy-tailed loss regime) to have the following properties.


FIGURE 2 SMA/EL ratio for a given value of the multiplier $\alpha^*$ as a function of the ratio of the LC to the BIC.

[Figure: SMA/EL plotted against LC/BIC (range 0–8), for $\alpha^* = 7$ and $\alpha^* = 19$.]

In red (dashed line), the maximum possible value of 19 (ie, all losses > €100 m). In blue (solid line), the lowest possible value of 7 (ie, all losses below €10 m). The vertical line at 1 identifies the "average" bank. This plot of course applies for BI > €1 bn.

(1) A bank with a high EL should have significantly more capital than a bank with a low EL, assuming the risk factors at these banks are the same.

(2) A bank with an LFHI risk factor should have significantly more capital than a bank with an HFLI risk factor, assuming that the EL at these banks is the same.

However, the SMA obeys neither of these properties. The main component of the SMA is clearly the size of the business rather than the loss profile. In fact, the role played by losses is rather mild overall: the effect of a 100% increase in the LC would be no greater than a 33% increase in the SMA, assuming no change in the BI. If this change in the LC is strictly due to a change in the EL, and the BI and the risk factor $\alpha^*$ do not change, then the ratio of the SMA to the EL decreases substantially; this effect appears to contradict the first property listed above. More generally, Figure 2 indicates how the ratio of the SMA to the EL changes as the LC changes with respect to the BIC across the range of possible values of $\alpha^*$. In all cases, the decrease is very rapid: the ratio SMA/EL decreases from 30 at LC/BIC = 0.5 to 12.5 at LC/BIC = 2 when $\alpha^*$ takes its worst-case value of 19. For a lower internal loss multiplier, eg, $\alpha^* = 7$, the SMA/EL ratio decreases from 11 to less than 5 over the same range of variation for LC/BIC.

Another interesting aspect is the effect of different risk factors $\alpha^*$. Consider two banks with the same BI and EL, where one bank has the minimum risk factor of 7 and the other has the maximum value of 19. The ratio between the SMA values of these two banks is bounded above by 1.5 and depends on the LC/BIC value.


FIGURE 3 Ratio of the SMA value computed with the higher internal loss multiplier of 19 to the SMA calculated with the lower internal loss multiplier of 7 as a function of the EL/BIC. [Axes: SMA high/SMA low (1.0–1.5) against EL/BIC (0–1.0).] A reasonable range of EL/BIC is between 1% and 50%; larger values are not excluded but should be rare (these would be very risky banks indeed).

indicates that the maximum ratios between the SMA values of these banks are obtained when the ratio of the EL to the BIC is around 0.2, corresponding to an LC/BIC ratio of about 1.4 for banks with a low risk profile (α* = 7). After this point, the effect decreases as the EL increases. This is quite counterintuitive, given the second property of reasonable capital measures listed above: banks with a high expected loss and a high proportion of large losses should be capitalized significantly more in relation to banks with a lower EL.
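As a rough check on the 1.5 bound (again using the consultative-document formula assumed in the sketch above, and neglecting the €110 million offset for a large BIC), the ratio is approximately ln(e − 1 + 19 · EL/BIC) / ln(e − 1 + 7 · EL/BIC). At EL/BIC = 0.2 this gives ln(5.52)/ln(3.12) ≈ 1.71/1.14 ≈ 1.5, and the ratio tends to 1 as EL/BIC tends to 0, which matches the shape of Figure 3.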

4 THEORETICAL CASE STUDIES

In order to understand the level of variability, and hence the consistency, of the SMA, we have run a number of simulation experiments. These simulations are based on realistic assumptions regarding the frequency and severity of loss-generating processes commonly observed in the industry as well as on the typical levels of the BI associated with banks having those loss profiles. These studies show that the SMA is highly variable in relation to a simple internal model-based approach to measuring capital, and that most banks will likely be over-insured against operational risk in comparison with the prior AMA standard, sometimes greatly so.

Consider a very simple bank, with just one loss-generating mechanism (ie, defined by one loss severity model and one loss frequency model), where all the parameters of these distributions are known, so that the associated capital requirement at the 99.9th percentile of the annual aggregated loss distribution can be easily derived. In this simple, but sufficiently realistic, case, the bank can be described using three parameters: its average frequency λ, assuming a Poisson distribution, and the shape σ


FIGURE 4 ORX survey data of BIC and LC values over a three-year period, collected from fifty-four consortium members. [Axes: BIC (mEUR), 100–20 000, against loss component (mEUR), 100–100 000, both on logarithmic scales.]

and location μ of its severity distribution, assuming a lognormal distribution. Using the standard method of the loss distribution approach (LDA), it is possible to aggregate the frequency and severity distributions to arrive at the AMA capital requirement. The distribution of the LC of the SMA can also be derived.
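As a point of reference for this aggregation step, the following minimal Monte Carlo sketch compounds a Poisson frequency with a lognormal severity and reads off the 99.9th percentile of the simulated annual total loss. The parameter values and sample size are purely illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

lam, mu, sigma = 500, 10.0, 2.0        # illustrative Poisson and lognormal parameters
n_years = 50_000                       # number of simulated years

# Annual aggregate loss: sum of a Poisson-distributed number of lognormal losses
counts = rng.poisson(lam, size=n_years)
annual_total = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])

ama_capital = np.quantile(annual_total, 0.999)   # 99.9% VaR of the aggregate loss
print(f"Simulated 99.9th percentile of annual aggregate loss: {ama_capital:,.0f}")
```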

To fully compute the SMA capital requirement, however, we must additionally make certain assumptions regarding the size of the BI and its relationship to the LC. Based on a survey of fifty-four banks conducted by the ORX consortium, we have data on the LC and BIC over a three-year period from 2013 to 2015. There is a strong correlation between the two components, as shown in Figure 4.

Using a simple transformation of the data, it is possible to determine a linear relationship between these two components. An ordinary least-squares regression of the log of the BIC against the log of the log of the LC, as shown in Figure 5(a), indicates that the residuals are fairly normal in character, as evidenced by the quantile–quantile plot of the regression residuals (Figure 5(b)).

Various statistical tests of these residuals (Anderson–Darling, Shapiro–Wilk, Cramér–von Mises, Lilliefors, etc) all pass at the 5% significance level, confirming that the residuals are reasonably consistent with a normal distribution. The fitted regression is expressed as

log(BIC) = −2.16 + 4.90 · log(log(LC)) + ε,   (4.1)


FIGURE 5 Fitted regression of (a) log(BIC) on log(log(LC)) and (b) quantile–quantile plot of residuals against a normal distribution. [Panel (a): log(BIC), roughly 4–10, against log(log(LC)), roughly 1.4–2.4. Panel (b): sample quantiles against theoretical quantiles, both roughly −2 to 2.]

where ε is a normal random variable with mean 0 and standard deviation 0.486. We have used this regression relation to simulate realistic values of the BIC, given the value of the expected LC derived from the assumed parameters.
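In code, this sampling step can be sketched as follows; the coefficients and residual standard deviation are those of (4.1), and we assume (in line with Figure 4) that both LC and BIC are expressed in millions of euros. The function name is ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_bic(expected_lc, n=1):
    """Draw BIC values (mEUR) for a given expected LC (mEUR) via relation (4.1)."""
    eps = rng.normal(0.0, 0.486, size=n)            # residual standard deviation 0.486
    return np.exp(-2.16 + 4.90 * np.log(np.log(expected_lc)) + eps)

print(simulate_bic(2_000.0, n=5))                   # eg, an expected LC of EUR 2bn
```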

In order to ensure that the assumed Poisson and lognormal parameters are themselves within a reasonable range, we assume that the following relationships hold, in keeping with the ranges of these values reported by ORX.

(A) The AMA capital level (99.9th percentile of the annual aggregate loss distribution) implied by the parameters exceeds €10 million and is less than €50 billion.

(B) The expected value of the LC exceeds €64 million and is less than €150 billion.

(C) The ratio of the frequency of losses exceeding €20 000 to the median BI (determined by the BIC from the regression line, expressed in billions of euro) exceeds 20.

The simulation procedure was executed as follows.

(1) Select values of the Poisson parameter λ and the lognormal parameters μ and σ.

(2) Determine the implied AMA capital level, expected LC, median BIC and frequency of losses exceeding €20 000.

(3) If the three conditions (A)–(C) hold, then run the following procedure 100 times.


(a) Simulate ten years of loss data based on the assumed Poisson and lognormal distributions.

(b) Compute the LC from this data.

(c) Generate a BIC based on the expected LC using the regression equation (4.1).

(d) Compute the SMA requirement based on the LC and BIC.

(e) Fit a lognormal and Poisson distribution to the loss data exceeding €10 000 (the data collection floor for the SMA), and compute the AMA capital levels implied by the estimated parameters. The capital levels were determined using the single-loss approximation (SLA) formula

SLA = F̂⁻¹(1 − (1 − 0.999)/λ̂) + λ̂ · μ_F̂,

where F̂ represents the fitted lognormal distribution (left-truncated at €10 000), μ_F̂ is the mean of this distribution and λ̂ is the estimated frequency of losses exceeding €10 000.

(4) Based on the 100 realizations of the SMA and internal-model based capital (which we shall refer to as SLA capital), compute the following quantities of interest:

(a) the coefficient of variation of the SMA capital, ie, the standard deviation divided by the mean of the 100 realizations;

(b) the coefficient of variation of the SLA capital.

In the simulations, we allowed the value of μ to range between 8 and 12, σ to range between 1 and 4 and λ to be in the set {100, 500, 1000, 5000, 10000}. Note that λ represents the mean frequency of all losses, no matter how small. The number of losses used in fitting the parameters in step 3(e) of the simulation procedure was based only on the realized losses exceeding €10 000, which was usually a much lower number than λ, depending on the value of μ and σ. All in all, 315 combinations of values were tested, of which only 134 met conditions (A)–(C).
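A condensed sketch of one pass of steps (a)–(e) is given below. It assumes the loss component definition of the consultative document (seven times the average total annual loss, plus seven times the average annual loss from events above €10 million, plus five times the average annual loss from events above €100 million) and the same SMA bucket formula as in the earlier sketch; the left-truncation of the severity fit at €10 000 is ignored for brevity, and all names and parameter values are illustrative.

```python
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(2)
lam, mu, sigma = 1_000, 10.0, 2.5            # assumed Poisson and lognormal parameters
years = 10

# (a) simulate ten years of loss data (amounts in EUR)
losses = [rng.lognormal(mu, sigma, rng.poisson(lam)) for _ in range(years)]

# (b) loss component from the simulated data
def avg_annual(threshold=0.0):
    return np.mean([year[year > threshold].sum() for year in losses])

lc = 7 * avg_annual() + 7 * avg_annual(10e6) + 5 * avg_annual(100e6)

# (c) BIC from regression (4.1); the paper uses the expected LC, the realized
#     LC is a stand-in here (conversion EUR <-> mEUR as needed)
bic = 1e6 * np.exp(-2.16 + 4.90 * np.log(np.log(lc / 1e6)) + rng.normal(0, 0.486))

# (d) SMA from the consultative-document formula (BI above EUR 1bn assumed)
sma = 110e6 + (bic - 110e6) * np.log(np.e - 1 + lc / bic)

# (e) refit the severity above the EUR 10 000 floor and apply the SLA formula
obs = np.concatenate(losses)
obs = obs[obs > 10_000]
lam_hat = len(obs) / years
mu_hat, sigma_hat = norm.fit(np.log(obs))    # crude lognormal fit, truncation ignored
sev = lognorm(s=sigma_hat, scale=np.exp(mu_hat))
sla = sev.ppf(1 - (1 - 0.999) / lam_hat) + lam_hat * sev.mean()

print(f"LC {lc:,.0f}  BIC {bic:,.0f}  SMA {sma:,.0f}  SLA {sla:,.0f}")
```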

The results indicate that the SMA results are generally considerably more variable than the internal model-based SLA results. Figure 6 shows that the coefficient of variation of the SMA is consistently well above that of the SLA for all values of the (expected) LC in the range observed in the industry. The primary reason for the high variability of the SMA is that the BIC is so variable (ie, banks of many sizes can have a similar LC) that it cannot serve as a highly reliable indicator of risk. By contrast, the simple internal model-based outcomes show markedly lower overall variability.


FIGURE 6 Simulation results comparing the coefficient of variation of the SMA results for each parameter setting to the results based on the internal model fit. [Axes: coefficient of variation (0–1.0) against expected loss component (mEUR), 100–50 000; series: SMA and internal model.]

We can obtain a more granular view of the relation between the SMA and the true VaR capital at the 99.9th percentile by examining a range of numeric results for certain specific combinations of parameters. A second set of results was determined according to a slightly modified procedure. First, a certain level of the BI (in billions) is assumed, using the example of a small bank (BI of €8 billion), a medium bank (BI of €20 billion) and a large bank (BI of €40 billion); this selection also determines the BIC. Next, the expected value of the LC is determined using (4.1), and considering three different quantiles of the stochastic (normal) component (10%, 50% and 90%); thus, low, median and high LC/BIC ratios are obtained. For each of these nine combinations of BI and LC, the parameters μ and σ are chosen in order to assess three different regimes of the α* parameter – a low value (around 8), a medium value (around 11) and a high value (around 15) – so that situations dominated by losses below €10 million up to those with large losses are considered. For a given value of α*, two combinations of μ and σ are also selected in order to have a wide range of possible frequencies. Based on the LC and α* parameter, we may of course determine the values of EL and λ.

To understand the level of insurance that the SMA is providing against operational losses in each case, denote by F_A(SMA) the value of the cumulative distribution function (cdf) of the aggregate annual loss distribution at the level of the SMA. We find it convenient to represent F_A(SMA) in terms of the "number of 9s" in the decimal representation. For example, the standard AMA level measures the 0.999 confidence level of annual total losses, which is the "three-9s" level. If the SMA corresponds to the 0.99 or 0.9999 level, this would be "two-9s" and "four-9s", respectively.


In general, the number of 9s measure is determined by the following formula:

#9s = −log₁₀(1 − F_A(SMA)).
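For a given loss-generating process, F_A(SMA) and hence the number of 9s can be estimated by simulation. The sketch below is illustrative only: the Poisson-lognormal parameters and the SMA level are arbitrary stand-ins, and the empirical cdf is guarded against the case of no simulated exceedances.

```python
import numpy as np

rng = np.random.default_rng(3)

lam, mu, sigma = 1_000, 10.0, 2.5         # illustrative annual loss-generating process
sma = 5.0e9                               # SMA level to be assessed (EUR)
n_years = 20_000

annual = np.array([rng.lognormal(mu, sigma, rng.poisson(lam)).sum()
                   for _ in range(n_years)])

f_a = np.mean(annual <= sma)              # empirical F_A(SMA)
tail = max(1.0 - f_a, 1.0 / n_years)      # avoid log10(0) if no exceedances occur
print(f"F_A(SMA) = {f_a:.5f}  ->  #9s = {-np.log10(tail):.2f}")
```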

The specific values of the input parameters that were tested (BI, μ, σ and the quantile) are displayed in the left-hand columns of Table 2. From these values, all other columns are computed; note that all of these quantities are derived conditional on the losses exceeding €10 000, in keeping with the SMA data requirements. (In Table 2, the rows that would be rejected under condition (C) are printed in italics.) In this case, the 99.9th percentile (referred to as VaR in the table) was determined by using the fast Fourier transform (FFT). The final columns indicate the ratio of the SMA to the true 99.9th percentile (SMA/VaR) and the number of 9s associated with the SMA. These quantities are direct indications of the capability of the SMA to capture the theoretical risk profile.

Reviewing these results, it is clear that the SMA is materially overestimating the theoretical VaR for small values of α* (values close to the theoretical minimum of 7). Conversely, when α* is large, in the range of 14–16, the SMA generally underestimates VaR. Many of these cases, however, did not meet condition (C) and therefore may not correspond to actual banks. Among the median values (10–12) of α*, we do not observe perfectly consistent results, although (in the majority of cases) the SMA overestimates VaR.

The number of 9s associated with the SMA percentile level in this experiment ranges widely, with some outcomes below the two-9s level, indicating that the SMA would represent only about the once-per-100-year event, and other outcomes above the six-9s level, in which case the bank is quite heavily over-insured against operational risk.

In addition, the primary driver of the SMA is clearly bank size (income), whereas the VaR measure is mainly driven by the loss profile. As a result, there is a range of risk factors α* that are consistent with a fixed set of BI and LC values, indicating that a wide variety of risk profiles, with very different levels of VaR, are consistent with the same level of the SMA.

Finally, we can observe a strong relationship between VaR and EL, given the value of the risk factor α*, in the examples shown in Table 2. Figure 8 plots these outcomes on a log scale, with regression lines superimposed that correspond to each value of α*, estimated so as to have a common slope. The distance between the upper and lower lines is 3.5, which indicates that the VaR level when α* = 16.3 is about exp(3.5) = 33 times the value of the VaR level when α* = 7.2. Compare this value with the factor of 1.5 indicated earlier as the maximum possible difference between the worst-case and best-case risk profiles under the SMA.

Based on the fact that banks with a high value of α*, and hence a very high percentage of very large losses, have a substantially lower SMA compared with the


TABLE 2 Comparison between the SMA and the theoretical capital requirement (VaR) under a range of situations. [Table continues on next two pages.] [The table comprises three panels: (a) BI & BIC (billion): 8 (1.36); (b) BI & BIC (billion): 20 (4.04); (c) BI & BIC (billion): 40 (9.24). For each panel, rows cover combinations of μ, σ and the low, median and high quantiles of the LC, with columns reporting LC (billion), λ, α*, EL (billion), SMA (billion), VaR (billion), SMA/VaR and #9s.]


FIGURE 7 Simulation results showing the median number of 9s in the decimal representation of the SMA percentiles, with respect to the true annual aggregate loss distribution, for the test cases in Table 2. [Axes: median number of 9s in percentile (0–8) against LC (mEUR), 500–100 000.] The red line indicates the three-9s level at which the AMA is calibrated.

theoretical risk profile expressed by the VaR, a systematic bias is introduced by the use of the SMA in the population of banks. The effect of this bias can be typically seen for large international banks that are exposed to large fines and penalties related to conduct risk.2 These banks are mainly based in the United States or have a large US footprint. Thus, one would expect that the impact of the SMA on US banks would be minimal, or possibly even a reduction in capital, whereas the impact on other jurisdictions would be to increase the capital requirement, in some cases to a large degree. This expectation has recently been confirmed in the results of the cited ORX survey (ORX 2016).

5 CONCLUSIONS

While the SMA may represent an improvement on other, standardized methods for calculating capital, it still belongs squarely in the category of size-driven capital standards, as the influence of the loss profile is quite minimal. As the simulation tests have indicated, it is certainly not a highly risk-sensitive measure.

Replacing the AMA with the SMA would represent a material step backward in the capability of banks to effectively hold capital against their operational risk exposure.

2 Fines of tens of billions of dollars have been common in the past few years for cases related to subprime mortgages and violation of international sanctions or other financial misconduct.


FIGURE 8 Plot of VaR against EL values from Table 2, plotted on a log scale. [Axes: log(VaR), roughly −2 to 4, against log(EL), roughly −3 to 2.] Each line represents a regression of log(VaR) against log(EL) for a different value of α*, which ranges in the set {7.2, 7.7, 8.1, 10.9, 11.6, 15.2, 16.3} in the examples. The lowest regression line corresponds to α* = 7.2, and the highest corresponds to α* = 16.3.

Although it was well known (and often publicly expressed: see, for example, Cope et al (2009)) by most banks that the AMA suffers from serious problems, the ability of an internal model-based approach to set capital is still far higher than that of the proposed SMA.

The SMA is retrograde in terms of its capability to measure risk and, perhaps more importantly, fails to create any link between management actions and capital requirement. Consider again two banks of the same size and loss profile that discover a similar operational vulnerability. One of these banks decides to invest heavily in improvements (controls, resources, exiting legally risky environments etc), while the second bank decides to do absolutely nothing. Under the SMA, both banks will continue to carry a similar capital requirement for many years. On the other hand, with the AMA, the first bank has the opportunity to demonstrate the reduced risk profile through forward-looking internal estimates. While still not perfect, the AMA is at least directionally correct on this issue, while the SMA will reward bad practices and penalize, in relative terms, good banks.

In most jurisdictions, the Pillar I minimum capital is considered as a floor for any Pillar II estimates. If the calibration of the SMA is kept at the proposed level, the use of internal models for Pillar II will also be hampered for low-risk banks. Banks will therefore have very little incentive to invest in a costly Pillar II process if it will have no influence on the final binding level for the capital ratios.

Very few people would say that the AMA is a perfect method for determining capital. At worst, it suffers from problems of design and implementation, which have


led to the symptoms that are now leading the Basel Committee toward adopting a universal standardized approach. In particular, the choice of the 99.9th percentile as the measurement standard essentially guaranteed that the bank internal models would not return results that were either stable or comparable. Based on the amount of data that banks have, it is not feasible to estimate the once-per-thousand-year annual loss with any degree of precision. Moreover, given the difficulty of achieving stable or usable results, the models became more and more complex as banks continued to apply more conditions and selection criteria. In addition, regulators in different jurisdictions applied the Basel rules differently, creating global imbalances in capital levels.

Therefore, it is no wonder that today we observe a lack of simplicity and comparability in the AMA models; this outcome was virtually guaranteed from the start. However, this should not be interpreted as an indictment of internal models generally. Many proposals have been put forward that could improve the accuracy and robustness of an internal models framework, including basing the capital on a more attainable measurement standard, developing industry benchmarks, restricting the range of modeling practice and allowing causal or structural models to supplement the statistical VaR models. Just because the 99.9th percentile cannot be reliably estimated from a limited supply of data does not mean that internal models should have no role to play in determining the capital charge.

It would be a mistake to revert to a fully standardized approach such as the SMA. As we have shown, this measure fails according to two of the three desired outcomes that the Basel Committee put forward: comparability and risk-sensitivity. The SMA only succeeds in being simple. One would do well, however, to question why simplicity is even an objective at all, given the breadth and variety of operational risks affecting banks today.

DECLARATION OF INTEREST

The views and statements expressed in this paper are those of the authors and do not necessarily reflect the views of Intesa Sanpaolo SpA and its affiliates ("Intesa Sanpaolo") or Credit Suisse Group AG and its affiliates ("Credit Suisse"). Intesa Sanpaolo and Credit Suisse provide no guarantee with respect to the content and completeness of the paper and disclaim responsibility for any use of the paper.

REFERENCES

Basel Committee on Banking Supervision (2016). Standardised measurement approach for operational risk. Consultative Document, March, Bank for International Settlements. URL: www.bis.org/bcbs/publ/d355.htm.


Cope, E., Mignola, G., Antonini, G., and Ugoccioni, R. (2009). Challenges and pitfalls in measuring operational risk from loss data. The Journal of Operational Risk 4(4), 3–27 (http://doi.org/bpqx).

ORX (2016). Capital impact of the SMA. Report, ORX Association. URL: www.orx.org/Pages/ORXResearch.aspx.


Journal of Operational Risk 11(3), 71–95
DOI: 10.21314/JOP.2016.178

Research Paper

An assessment of operational loss data and its implications for risk capital modeling

Ruben D. Cohen

Independent Consultant, London, UK; email: [email protected]

(Received March 17, 2016; revised May 20, 2016; accepted May 20, 2016)

ABSTRACT

A mathematical method based on a special dimensional transformation is employed to assess operational loss data from a new perspective. The procedure, which is formally known as the Buckingham Π (Pi) Theorem, is used broadly in the field of experimental engineering to extrapolate the results of tests conducted on models to prototypes. When applied to the operational loss data considered in this paper, the approach leads to a seemingly universal trend underlying the resulting distributions, regardless of how the data set is divided (eg, by event type, business line, revenue band). This dominating trend, which also appears to have a tail parameter of 1, could have profound implications for how operational risk capital is computed.

Keywords: operational risk capital; dimensional analysis; scale invariant; tail parameter; standardized measurement approach; advanced measurement approach.

1 INTRODUCTION

Monetary damages stemming from operational risk can be enormous at times, spreading over several orders of magnitude, and notoriously difficult to predict. Hence, developing appropriate models to quantify and forecast these losses for the purpose of estimating capital requirements remains a major challenge for financial institutions.

Print ISSN 1744-6740 | Online ISSN 1755-2710
Copyright © 2016 Incisive Risk Information (IP) Limited


At the time of writing, the method of choice for constructing advanced measurement approach (AMA) models is the loss distribution approach (LDA). The typical LDA process in AMA consists of three main stages: segmenting the institution (ie, bank) into "mutually independent" units of measure (UoMs); pulling together the available historical operational loss data for each UoM; and trying out suitable theoretical distributions through each UoM and, by means of various statistical goodness-of-fit tests, determining which distribution fits best.

Along with the main stages outlined above, there are other specifics that enter, such as the impacts of correlation, scenario analysis and business environment and internal control factors (BEICF). While inclusion of these has been deemed necessary by the regulators, their roles remain peripheral to the core AMA process, coming into effect after the UoM and data selection and distribution fittings have all been conducted and concluded.

The capital estimation process underlying the core of the AMA, as briefly outlined above, has been in place for about ten years. Even though the steps within it are few and mutually agreed upon by both banks and regulators, serious difficulties and challenges still arise; these have lately cast doubts on the AMA's future. This is because the process has become increasingly convoluted over the years, and been applied inconsistently across institutions, failing to produce any convergence in either methodology or capital levels. These challenges have now reached the point where replacing the AMA with a simplified "one-size-fits-all" type of approach, comprising a single capital equation in line with what is called the "standardized measurement approach" (SMA; Basel Committee on Banking Supervision 2016), is being seriously considered.

With the SMA now fast becoming a topical subject, proponents of the AMA are looking for ways to reject the proposed SMA, or at least keep the AMA partly in use. One of the arguments in favor of the AMA points to the SMA's inability to capture risk at the more granular levels. The motive for this argument is that the SMA is a one-size-fits-all type of model, given that a single capital equation applies equally across all banks, completely disregarding the idiosyncratic attributes, such as classifications, principal lines of business and other individual bank characteristics. The only details that impact capital through the latest proposed SMA formulation enter through the entity's profit and loss figures and operational loss averages.

While fervent debates surrounding the virtues and drawbacks of the AMA and SMA are still taking place through consultation papers, conferences and other means, the way forward at the time of writing appears to be shifted more favorably toward the SMA. With no final verdict yet in sight, however, one aim of this paper is to help the reader to decide, using the results generated here, which of the two frameworks is better suited for capital computation.


2 OBJECTIVES AND SCOPE

Although it has a relatively short history, operational risk has already demonstrated its susceptibility to becoming overcomplicated, owing to the large number of variables and adjustable parameters it includes. This makes it easy to lose focus when conducting research and model development in this area. Thus, it is imperative to firmly lay out the objectives and scope of this paper before delving into any analysis. These are

(1) to introduce the method of dimensional analysis and transform the available operational loss data accordingly,

(2) to establish a framework for plotting the transformed data in the spirit of the LDA,

(3) to compare the various transformed data sets through visual inspection and some simple statistics limited to variances and standard deviations,

(4) to discuss the potential impacts on the calculation of risk capital.

Moreover, to avoid loss of focus, we refrain from carrying out any curve fits and performing goodness-of-fit or other similar statistical tests, as there is a danger of them hijacking attention from the main topic of investigation; once curve fits and goodness-of-fit tests are introduced, they can divert the reader's attention to other matters, such as what type of curves are more suitable, what tests are more reliable than others and why one might prefer a 90% and not a 95% confidence level. However, although we believe the application of such statistical analyses to be premature at this point, they play a vital role in model development and thus may be adopted in subsequent research and publications, but only after the method has been introduced and its purpose clearly spelled out. All that said, we still intend to highlight asymptotic behaviors when they emerge, and conduct simple error analyses where relevant.

Next we introduce the method of dimensional analysis to operational risk and discuss its applications.

3 THE METHOD OF DIMENSIONAL ANALYSIS AND SIMILITUDE

We present here a method of dimensional analysis and similitude, also known as the Buckingham Π Theorem (Buckingham 1914), to examine the behavior of operational loss data. We emphasize that this approach is purely mathematical; we do not intend to develop a new theory of operational risk or speculate on what theoretical distributions would best fit operational loss data. The method essentially employs a data transformation technique borrowed from engineering to shed light on some of the innate characteristics of operational loss data that could otherwise remain hidden.


The significance of the method of dimensional analysis in engineering, particularly experimental fluid mechanics, lies in its ability to extrapolate and predict prototype performance from tests conducted on models. Since this approach is well established and supported by an extensive literature,1 we shall proceed swiftly to its application in the area of operational risk analytics, after a minor diversion.

4 A REAL-WORLD EXAMPLE OF THE METHOD OF DIMENSIONAL ANALYSIS

We deviate here a little to explain the main use of dimensional analysis through a real-life example. The method has little to do with developing new theories: it is a mathematical transformation of data, used primarily in experimental engineering to extrapolate the behavior of prototypes from tests run on models.

Say we want to test a 747 Jumbo Jet aircraft. We test a small-scale model in a wind tunnel and produce a graph of the lift force, F, versus the wind speed, V. We then repeat the experiment on a full-scale jumbo jet in a wind tunnel large enough to hold it, and plot the lift force versus the wind speed on the same graph. A schematic of the results of the two experiments together is shown in Figure 1, where we note a total separation of the data sets caused by differences in size scales and experimental conditions (ie, the model, which is smaller in size and subjected to lower wind speeds, experiences lower lift forces).

The method of dimensional analysis is now applied via the following steps:

(1) measure the two experimental variables, F and V ;

(2) identify the physical properties of the experiments, ie, fluid density, fluid viscosity and size (model and prototype);

(3) note the fundamental units of the experimental variables and parameters (here, mass, length and time).

(4) Next, using the fundamental units together with the variables and physical properties of the experiments, construct the dimensionless variables that are to be plotted against each other, replacing the F-versus-V convention. In this case, the dimensionless counterparts of F and V become the "lift coefficient" and "Reynolds number", respectively. Thus, while the experimental output, F, has fundamental units of mass × length/time², and input V has fundamental units of length/time, the lift coefficient and Reynolds number, respectively, are totally free of any units of dimension.
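For reference, the standard forms of these two dimensionless groups are the lift coefficient C_L = F / (½ ρ V² A) and the Reynolds number Re = ρ V L / μ, where ρ is the fluid density, μ here denotes the fluid viscosity, L is a characteristic length of the aircraft and A is a reference (wing) area; both are pure numbers built from the measured variables and the physical properties listed in steps (1)–(3).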

1 The method has already been introduced into the fields of economics and finance, ranging from general economic theory (de Jong 1967) to more specific applications (Cohen 1998).


FIGURE 1 Two experiments performed on a model aircraft and its prototype, with results plotted as lift force, F, versus wind speed, V. [Axes: lift force, F, against speed, V. Experiment 1: performed on an exact (but much smaller) model replica of a Jumbo Jet. Experiment 2: performed on a full-scale Jumbo Jet. The markers Vm, Fm and Vp, Fp indicate the model and prototype data, respectively.] This graph does not portray actual data.

Transformation of the variables would mean that, when plotting lift coefficient versus Reynolds number, Re, in this dimensionless coordinate system, we need not worry about whether final outcomes are presented in imperial or metric units, or


FIGURE 2 Non-dimensionalizing the results of the two experiments leads to a single, universal experimental graph that is scale invariant. [Panel (a), original: F against V. Panel (b), transformed: lift coefficient against Re.] These graphs do not portray actual data. They only illustrate how different data can merge under appropriate dimensional transformation.

whether the experiments are conducted on a scale of, say, 1:50 or 1:200. As long as the model is an exact replica of the prototype, we are guaranteed that all the test results will converge when plotting the dimensionless lift coefficient versus the Reynolds number instead of the dimensional F versus V, as shown in Figure 1. The convergence resulting from nondimensionalizing the variables is sketched graphically in Figure 2.

In summary, two implications emerge from the above, both of which are profound.

(1) If both model and prototype (ie, the sources of the data) are fundamentally similar, then the transformed results will converge, allowing us to extrapolate the behavior of the prototype from the model very accurately.


FIGURE 3 Addition of a third data set belonging to a fundamentally different aircraft shows lack of convergence in dimensionless coordinates. [Axes: lift coefficient against Re.] This graph does not portray actual data.

(2) Looking at it from a different angle, if only the data sets were provided and there was no definitive information on the sources of the data, the method of dimensional analysis could enable us to identify, with a high degree of certainty, whether or not these sources are fundamentally similar. This is because fundamentally similar sources would, under this transformation, produce convergent data plots, whereas dissimilar sources would not.

A schematic of the second implication is provided in Figure 3, where the data of a third aircraft (a Cessna) is now also included. In transformed coordinates, the new data set is seen to deviate widely from those of the jumbo jets, implying the presence of a fundamental difference, which is obviously the shape.

In relation to operational risk, it is well known that loss data is very difficult to scrutinize or explain. We therefore propose the use of this method to investigate the similarities and differences in loss profiles across the wide variety of data sets available, such as big banks versus small banks or the various Basel event types and business lines. The method of dimensional analysis could prove to be a valuable tool here.


TABLE 1 Illustrative example of data manipulation to transform dimensional variables to nondimensional variables.

Column (1) lists the available sample of N0 = 16 current losses, wi (US$m); the remaining columns are computed per loss band as follows.

(2) Band (US$m)    (3) Mid-level w (US$m)   (4) ΔN   (5) ΔN/Δw   (6) w/w̄   (7) (w̄/N0)(ΔN/Δw)
w < 0.1                     —                  0         —           —             —
0.1 ≤ w < 1               0.316                5       5.556       0.006         18.209
1 ≤ w < 10                3.162                6       0.667       0.060          2.185
10 ≤ w < 100              31.6                 2       0.022       0.603          0.073
100 ≤ w < 1000            316.2                3       0.003       6.030          0.011
w > 1000                    —                  0         —           —             —

ΔN, number of losses falling within each band. w̄ = Σ wi / N0 = 52.4. N0 = 16.

5 APPLICATION TO OPERATIONAL LOSS DATA

5.1 Methods

Distributional depictions of operational losses can be accomplished in at least two ways. One popular way is through the cumulative distribution, where, typically, the losses are first ordered from large to small and then plotted against their rank, with the largest loss ranked as number 1, the second largest as 2 and so on. This is normally the method used in operational risk analytics, where the distribution is first fitted onto a suitable function, on which simulations are then conducted to generate random losses.

An alternative, which forms the basis of this work, is to represent the losses as a probability distribution rather than a cumulative distribution. In this method, buckets of various band widths are defined, and the losses from the sample are then placed in them. In the limit of differential calculus, this approach equates to the first derivative of the cumulative distribution with respect to loss size, w.

Table 1 exemplifies this process, with column 1 listing the available sample of N0 = 16 "inflation-adjusted" or "current"2 operational losses, wi (i = 1, 2, ..., 16), and column 2 depicting a set of loss bands, each with width Δw, to be used as the buckets to place the operational losses in. Column 3 presents the logarithmic mean of the band's limits, ie, w = √(wj × wj+1), while column 4 shows the number of operational losses, ΔN, falling within each bucket, Δw. Column 5 gives the ratio ΔN/Δw calculated for each band, ie, column 4 divided by the width Δw of the band in column 2. Transformations of w and ΔN/Δw into their nondimensional counterparts are performed in columns 6 and 7, which will be discussed next.

2 The way operational losses are normally represented.


For the method of dimensional analysis, it is critically important to first identify the main variables that are to be plotted against each other (in this case, ΔN/Δw against w), and then transform them into their dimensionless or "scale invariant" counterparts. The transformation is relatively simple here, considering that the loss variable, w, can be made dimensionless after dividing it by the average (either linear or geometric average could be used, with linear utilized here going forward) loss in the sample, w̄, and the ratio ΔN/Δw can be made dimensionless by multiplying by the factor w̄/N0, where w̄ and the sample size, N0, are both defined in the example in Table 1. With the dimensionless variables now established as

w/w̄   and   (w̄/N0)(ΔN/Δw),

we return to computing columns 6 and 7 in Table 1, where these values are displayed. In Table 1, the progression of the hypothetical operational loss data from raw data in column 1 to transformed data in columns 6 and 7 can now be related to Figure 2: a cumulative distribution plot of the numbers in column 1 of Table 1 (ie, ordering the losses and plotting them against their rank) is analogous to the "original" plot in Figure 2, while a plot of column 7 against column 6 equates to the "transformed" plot.
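The transformation just described takes only a few lines of code. The sketch below uses a synthetic lognormal sample purely for illustration (it is not the data of Table 1) and reproduces the quantities in columns (6) and (7) for a set of logarithmic bands.

```python
import numpy as np

rng = np.random.default_rng(4)
losses = rng.lognormal(mean=1.0, sigma=2.0, size=1_000)   # synthetic losses (US$m)

n0 = len(losses)
w_bar = losses.mean()                     # linear average loss, as in the text

edges = 10.0 ** np.arange(-1.0, 4.0)      # bands 0.1-1, 1-10, 10-100, 100-1000 (US$m)
counts, _ = np.histogram(losses, bins=edges)          # delta N per band
widths = np.diff(edges)                               # delta w per band
mids = np.sqrt(edges[:-1] * edges[1:])                # logarithmic mid-level w

x = mids / w_bar                          # column (6): w / w_bar
y = (w_bar / n0) * counts / widths        # column (7): (w_bar/N0) * dN/dw
for xi, yi in zip(x, y):
    print(f"w/w_bar = {xi:9.3f}   (w_bar/N0)*dN/dw = {yi:10.4f}")
```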

5.2 Motivation

Our motivation for applying dimensional analysis to operational loss data is as follows:

• operational risk is a complex area, and assembling the relevant data in a comprehensible manner is not straightforward;

• operational loss data can

– originate from a variety of independent and unrelated sources (eg, SAS, Fitch, ORX, internal sources),

– appear on vastly different scales (eg, global, country, business),

– belong to diverse jurisdictions (eg, regions, countries),

– come under different classifications,

– be associated with a variety of event types and business lines.

We next apply the method in Section 5.1 to actual operational loss data and its derivation and discuss our findings and conclusions.


TABLE 2 Examples of how operational loss data can be classified and/or segmented.

Classification                Examples
By data source                External or industry, internal
By data vendor                SAS, ORX, Fitch
By institution                Dozens of banks
By revenue size               Small, medium, large
By Basel event type           Business disruption & system failures; clients, products & business practices; damage to physical assets; employment practices & workplace safety; execution, delivery & process management; external fraud; internal fraud
By Basel line of business     Agency services; asset management; commercial banking; corporate finance; other; payment & settlement; retail banking; private banking; retail brokerage; trading & sales

TABLE 3 Statistics of the available SAS data by Basel event type.

Basel event type                              Event count   Average loss (US$m)   Maximum loss (US$m)
Business disruption and system failures                25                    82                   468
Clients, products and business practices            2 217                   138                18 277
Damage to physical assets                               44                    73                   977
Employment practices and workplace safety              127                    19                   349
Execution, delivery and process management             210                    27                   883
External fraud                                         771                    15                   754
Internal fraud                                       1 408                    41                 7 747
All                                                  4 802                    81                18 277

5.3 Application

First, it is useful to present various examples of how operational loss data can be classified and/or segmented. These classifications and segments are generally predefined and can appear in several forms, some of which are listed in Table 2. The fact that this list is extensive but not exhaustive shows how complicated not only operational loss data management but also the analytics used to support such data can be.


Given all the different sources, classifications and segmentations available to operational loss data, each potentially acquiring its own unique characteristic features, it would be useful to have a robust method in hand to enable their analysis. The method of dimensional analysis would be perfectly suited here to achieve this task, as it can help organize data more logically and facilitate identification of the characteristic features and loss profiles across the diverse sources, scales, jurisdictions, etc, underlying the data. The rest of this paper is dedicated to this, namely, segregating the available data sets according to their classifications and conducting and applying the analysis.

Two data sets are used in this study. The first, from SAS, contains publicly available operational losses of a large number of institutions. This data set is quite comprehensive, with losses partitioned into their relevant jurisdictions, sectors and regulatory lines. In operational risk jargon, the data from SAS and other similar vendor-managed databases is known as "industry" or "external" data.

The second data set originates from within a major global bank, which is responsible for collating and managing its own data. This type of data is known in operational risk terminology as "internal data". In order to maintain confidentiality, no details of the numbers (eg, average, minimum and maximum losses, loss frequencies) will be revealed. The process of nondimensionalization will, in addition, naturally conceal all details.

6 ANALYSIS OF SAS DATA BY BASEL EVENT TYPE

The statistics from the available SAS operational loss database are tabulated by Basel event type in Table 3.

This data set, which has been filtered for "financial services" and current or inflation-adjusted losses of US$1 million or more, comprises a total of 4802 losses, ending in December 2014 and going back over thirty years. The data set is, nevertheless, known to suffer from two weaknesses: data prior to 1990 is sparse, and there is a "disclosure bias" at the lower end of the loss spectrum; this is something that we shall discuss in due course. The reason for this bias is that smaller losses are less likely than larger ones to hit public news (de Fontnouvelle et al 2003). However, to maintain objectivity, this work will aim to utilize all the data.

Sifting quickly through the statistics provided in Table 3, we note that not much detail on individual loss profiles can be deduced from these numbers alone. For example, an average loss of US$82 million associated with "business disruption and system failures" is followed by a maximum loss of US$468 million, while for "internal fraud" the average loss of US$41 million is coupled with a maximum of US$7747 million. The contradiction noted here is only one example of many that demonstrate the difficulties in drawing conclusions from operational loss statistics.


FIGURE 4 Cumulative distributions of losses in the SAS data set by event type, showing (a) event count versus loss size and (b) data in transformed coordinates. [Panel (a): count against loss (US$m), both on log scales. Panel (b): (w̄/N0)(ΔN/Δw) against w/w̄, both on log scales. Series: business disruption and system failures; clients, products & business practices; damage to physical assets; employment practices and workplace safety; execution, delivery & process management; external fraud; internal fraud.] No unique pattern is observed in part (a).


Parts (a) and (b) of Figure 4 depict the data in raw and transformed forms, respectively. We see that, while it is difficult to conclude anything from Figure 4(a) given the erratic and crisscrossing patterns of the individual loss distributions, the transformed data in Figure 4(b) displays a clear and definitive convergence of all the Basel event types onto one common trend. What this means within the context of dimensional analysis and operational risk is that the event types in this data set have similar loss profiles, especially in the tail region, with none distinguishing itself sharply from the others.

Figure 4(b) shows, in addition, a straight line representing the asymptotic tail of the plotted data. The line has a slope of −2 in this coordinate system, which translates to a "tail parameter" of 1 in operational risk jargon. The significance of this common asymptotic behavior lies in the fact that not only does it apply equally to all the event types, but also the tail parameter value of 1 it embodies is larger than what is typically reported. Both of these observations go against the general belief that each event type is characterized by its own unique risk profile with a tail parameter typically less than 1.
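The link between the −2 slope and a tail parameter of 1 can be illustrated with simulated data: a Pareto sample with tail index 1 has a survival function proportional to 1/w, so its binned density ΔN/Δw should fall off with a log–log slope of about −2. The sketch below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
losses = 1.0 / rng.uniform(size=200_000)      # Pareto sample, tail index 1, minimum 1

edges = np.logspace(0, 4, 25)                 # logarithmic bands from 1 to 10 000
counts, _ = np.histogram(losses, bins=edges)
mids = np.sqrt(edges[:-1] * edges[1:])
density = counts / np.diff(edges)             # delta N / delta w per band

keep = counts > 0
slope = np.polyfit(np.log(mids[keep]), np.log(density[keep]), 1)[0]
print(f"Fitted log-log slope of dN/dw: {slope:.2f} (about -2 is expected)")
```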

7 ANALYSIS OF SAS DATA BY BASEL REGULATORY LINE OFBUSINESS

In Table 4 the statistics of the SAS database are now tabulated by Basel regulatoryline of business (RLOB).

As in the analysis on event types, we note that not much can be said about individual loss profiles if we were to base them solely on the loss statistics in Table 4. For example, an average loss of US$76 million associated with agency services is followed by a maximum loss of US$772 million, whereas for retail brokerage, the average loss of US$19 million is coupled with a maximum of US$1132 million. There seems to be no consistent correlation between number of events, average losses and maximum losses.

On the other hand, we can draw conclusions from Figure 5 that are very similar to those from Figure 4. Again, while nothing definitive can be concluded from Figure 5(a), given the erratic patterns and crisscrossing of the individual loss distributions, the convergence of the same data toward a common trend, as depicted in the transformed coordinates in Figure 5(b), again reveals the absence of traits uniquely attributable to any of the RLOBs. This demonstrates again that none of the RLOBs has specific and outstanding features that can separate it from the others.

Figure 5(b) also includes a straight line with a tail parameter of 1, the same as in Figure 4(b), which appears as a common asymptote to all the RLOBs. This again implies that losses in all the RLOBs behave similarly in the tail region.


TABLE 4 Statistics of the available SAS data by regulatory line of business.

Basel RLOB               Event count   Average loss (US$m)   Maximum loss (US$m)
Agency services                   72                    76                   772
Asset management                 333                    53                 1 815
Commercial banking               663                    40                 2 333
Corporate finance                386                   234                11 166
Other                             56                   282                 2 650
Payment and settlement            57                   276                 8 998
Retail banking                 2 005                    63                18 277
Private banking                  131                    80                 2 826
Retail brokerage                 532                    19                 1 132
Trading and sales                567                   126                 7 747
All                            4 802                    81                18 277

8 ANALYSIS OF SAS DATA BY REVENUE BAND

The SAS data set is now split into different revenue bands and analyzed in the same way as the first two segmentations. Table 5 presents some of the loss statistics.

Dividing the institutions in the data set into the three revenue bands noted above, we can, for the first time, see a consistent relationship whereby institutions with higher revenue generate larger losses, both average and maximum. However, the robustness of this relation is questionable in view of the mixed results reported by Moosa and Li (2013).

Figure 6 shows the loss data pertaining to the revenue bands in the forms of cumulative distributions and transformed coordinates. Yet again, while Figure 6(a) shows strongly distinguishable features across the three revenue bands, the transformed data in Figure 6(b) shows all trends merge together, converging toward a common asymptote with tail parameter 1. Do banks with higher revenues therefore have loss profiles that are different from those with lower revenues, if we define the loss profile by the shape of these curves? According to the similarity in the profiles of the three revenue bands observed in Figure 6(b), together with their tight convergence, this is not necessarily the case.

9 ANALYSIS OF SAS DATA BY FINANCIAL INSTITUTION

The 4802 operational losses in our SAS database are spread across a large number of financial institutions, whose names are also included.


FIGURE 5 Cumulative distributions of losses in the SAS data set by RLOB, showing (a) event count versus loss size and (b) data in transformed coordinates.

[Figure 5: panel (a) shows event count against loss size (US$m) on log–log axes; panel (b) shows the same data in transformed coordinates, with w/w̄ on the horizontal axis. Legend: agency services; asset management; commercial banking; corporate finance; other; payment and settlement; retail banking; private banking; retail brokerage; trading and sales.]

No unique pattern is observed in part (a).


TABLE 5 Statistics of the available SAS data by revenue band.

Revenue band (US$bn)   Loss count   Average loss (US$m)   Maximum loss (US$m)
Rev < 5                     2 377                    29                 2 459
5 ≤ Rev < 30                1 010                    72                 4 463
Rev ≥ 30                    1 415                   175                18 277
All                         4 802                    81                18 277

For this investigation, we selected twelve of the largest and most prominent ones to find out, using our method, whether or not any stand out as being characteristically different from the others. Table 6 lists the selected institutions and provides some of their statistics available in the data set.

Figure 7 plots the average loss against the most recent revenue associated with each bank as it appears in the SAS database. The somewhat positive relationship between the two variables, which was alluded to in the previous section, can be observed here as well.

Figure 8(a) displays the cumulative loss distribution for each bank individually; the absence of any clear trends is obvious. However, in the scale-invariant format shown in Figure 8(b), a clear pattern of convergence toward a mutual trend emerges once again. This suggests two possibilities:

(1) from a high-level perspective, a common loss profile overlies all these banks;

(2) attempting to differentiate between these banks through their loss history would be as difficult as trying to make sense out of noise.

A more in-depth scrutiny of the common and overlying loss profile in Figure 8(b) would involve the painstaking analysis of certain inherent properties, such as the business model, geography, size, etc, of each sample bank, to try to figure out what would qualify one bank to be included in this group but others to be excluded. Such a study, however, lies outside the scope of this investigation, although it is recommended as part of future work, as it could be of value in enabling differentiation between the loss profiles of diverse banks.

10 SUMMARY OF SAS ANALYSIS

Returning to part (b) of Figures 4–6, we note that, in each individual case, the data points approach a linear asymptote with tail parameter 1 at some value of w/w̄.


FIGURE 6 Cumulative distributions of losses in the SAS data set by revenue bands, showing (a) event count versus loss size and (b) data in transformed coordinates.

[Figure 6: panel (a) shows event count against loss size (US$m) on log–log axes; panel (b) shows the same data in transformed coordinates, with w/w̄ on the horizontal axis. Legend: Rev < US$5 bn; US$5 bn ≤ Rev < US$30 bn; Rev ≥ US$30 bn.]

Combining all three data sets into one graph (see Figure 9) reveals not only a merger of all three data sets onto a single trend, but also a common threshold at w/w̄ ≈ 2, defining the start of linear asymptotic behavior.


TABLE 6 The twelve major global financial institutions selected for our investigation.

Bank               Average loss (US$m)   Sample size
Bank of America                    266           253
Barclays                           210            54
Citigroup                          164           169
Credit Suisse                      126            57
Deutsche Bank                      174            58
Goldman Sachs                       68            53
HSBC                               179            60
JP Morgan                          260           120
Lehman Brothers                     42            32
Morgan Stanley                      80            89
UBS                                150            79
Wells Fargo                         60            92

FIGURE 7 Average loss plotted against current revenue for the banks listed in Table 6.

[Figure 7: scatter plot of average loss (US$ millions) against revenue (US$ millions, 0–80 000).]

On establishing this common threshold, it is possible to compute the variances and standard deviations (in natural log units) of the errors relative to the asymptote. Table 7, which lists these error figures, shows that, of the three data cuts considered, the losses by revenue bands show the tightest fit in the region w/w̄ ≥ 2, above the threshold.


FIGURE 8 Cumulative distribution of the SAS data set for the twelve selected banks, showing (a) event count versus loss size and (b) data in transformed coordinates.

[Figure 8: panel (a) shows event count against loss size (US$m) on log–log axes; panel (b) shows the same data in transformed coordinates, with w/w̄ on the horizontal axis. Legend: Bank of America; Barclays; Citigroup; Credit Suisse; Deutsche Bank; Goldman Sachs; HSBC; JP Morgan; Lehman Brothers; Morgan Stanley; UBS; Wells Fargo.]

The straight line in (b) signifies tail parameter 1.

The reason the selected bank data portrayed in Figure 8(b) should not be included in the above analysis is that this data set does not include all SAS data, unlike the data in part (b) of Figures 4–6. Thus, its relevance may not be obvious when plotted in the same graph as the other data sets.


FIGURE 9 Data by event types, business lines and revenue bands combined to display a common linear asymptotic behavior with tail parameter 1 in the region w/w̄ ≥ 2.

[Figure 9: plot in transformed coordinates, with w/w̄ on the horizontal axis and the asymptotic region marked. Legend: all event types (industry); all business lines (industry); by revenue band.]

TABLE 7 Variances and standard deviations of the transformed data in part (b) of Figures 4–6 along the linear asymptotic region defined by w/w̄ ≥ 2.

Data             Variance   Standard deviation
Event types          0.17                 0.41
Business lines       0.26                 0.51
Revenue bands        0.07                 0.26

11 ANALYSIS OF INTERNAL DATA

The operational loss data of a major global bank was also investigated, with its results shown in transformed coordinates in Figure 10. (Absolute numbers have been excluded for confidentiality.)

Two observations in Figure 10 are of note. First, one particular event type (clients, products and business practices (CPBP)) appears to have a profile that is markedly different and separate from the rest. Second, a tight convergence of all the other event types materializes along a common trend, yet again with a tail parameter of 1, identical to all the cases for the industry data investigated earlier.

The deviation of the CPBP event type in Figure 10 means, in the context of dimensional analysis, that its risk profile is distinct from the others, and thus its behavior should be modeled separately and differently.


FIGURE 10 Internal data of a major global bank in transformed coordinates.

[Figure 10: plot in transformed coordinates, with w/w̄ on the horizontal axis. Legend: business disruption and system failures; clients, products & business practices; damage to physical assets; employment practices and workplace safety; execution, delivery & process management; external & internal fraud.]

The straight line represents tail parameter equal to 1.

The remaining event types, however, appear to follow the same pattern, so, if need be, they can be grouped together and treated as a single unit for capital modeling purposes.

The above observations could also have implications in the areas of data collection and management, where, at least for this particular bank, we could argue that the seven Basel event types could be combined and separated into only two units (one CPBP and the other non-CPBP) and managed accordingly. While this suggests ignoring the substructures of the bank, it does lend support to the recent European Banking Authority stress test (European Banking Authority 2016), which differentiates between two types of risk: conduct and nonconduct.

Returning to Figure 4(b), which displays the SAS data by event type, we observe that CPBP and non-CPBP show little deviation from each other (certainly not on the scale of that observed in Figure 10). In Figure 4(b), the CPBP losses appear to mix well with the others, whereas in the internal data in Figure 10 the difference is marked. The exact cause of this inconsistency between the internal and external CPBP behaviors has not been determined here, given the scope of this paper, although a close inspection of both data sets as part of future work could help establish this.

Analysis of internal data by RLOB is not presented here because the available data set does not contain this information. Our conjecture, however, is that the amount of scatter across the dimensionless RLOB plots would be somewhat larger than that in Figure 10, as each RLOB would tend to absorb its fair share of the strongly deviating CPBP losses.


FIGURE 11 Comparison of two types of data sets: SAS not including the selected bank and not including CPBP in aggregate, and the internal data not including CPBP in aggregate and after transformation.

[Figure 11: plot in transformed coordinates, with w/w̄ on the horizontal axis. Legend: internal ex CPBP; all industry ex selected bank ex CPBP.]

The straight line again represents the common asymptote with tail parameter equal to 1.

For the same reason, the dimensionless internal RLOB plots might also lose the tight asymptotic tail formations that we have witnessed so far.

12 COMPARISON OF INTERNAL AND INDUSTRY DATA

We have investigated two distinct and independent sources of operational loss data, one industry and the other internal; divided them into various shapes and forms; and concluded that in all but one case there is a common trend and tail parameter. Now we wish to put together the data in dimensionless form on the same graph and assess the similarities and differences between the two.

Figure 11 shows such a graph, where the SAS industry data not including the one bank (selected for the internal data analysis) and not including CPBP are compared with those of the internal data of the selected bank not including CPBP, both in aggregate. Stripping away the CPBP event type from SAS allows us to compare the internal and external data sets, while taking out the selected bank from SAS enforces some degree of independence between the two data sets.

The key observation in Figure 11 is that, for the most part, the two plots lie on top of each other and follow the same asymptotic line with tail parameter 1. This is remarkable because not only are the data sources completely separate and independent, but not a single adjustable parameter is in use anywhere here.³ There is, however, one slight departure between the two behaviors, which is worth noting. It occurs somewhere in the narrow region of 1 ≤ w/w̄ ≤ 2, below which the industry-related data experiences a concave-down curvature while the internal ex-CPBP data appears to continue along the straight path delineated by the common asymptotic line. This point of deviation is close to that demarcated in Figure 9. As for the cause of this deviation, our guess is that it is the loss disclosure bias alluded to earlier: a bias in the reporting of smaller losses, from which the SAS industry data is known to suffer (de Fontnouvelle et al 2003).

³ If two data sets have widely different boundary limits in the transformed plane, their plots will not necessarily merge but could be separated by a narrow parallel shift. This is because each transformed data set represents a bounded, empirical probability distribution, whose area under the curve must equate to 1. However, in most cases, such as that shown in Figure 11, this shift, if it exists, turns out to be visually imperceptible.

Aside from the deviation mentioned above, the convergence of the two independent data sets, although remarkable at first glance, should, in fact, not be that big a surprise. All it conveys is that operational loss data, if measured and reported accurately, must display fundamentally similar features, independent of its source. This is directly analogous to the reason why we should not be surprised if two people in the same room, each holding a different but accurate thermometer, were to come up with similar temperature readings.

As for CPBP in internal data looking different from the above common trend, a “deep dive” could possibly help establish the reasons behind it. It could be that only a small fraction of the losses are causing this separation, but, nonetheless, the analysis is not continued here, as it lies beyond the scope of this paper.

The convergence of the two data sets in transformed coordinates in Figure 11 can also have a major implication for one of the AMA’s regulatory requirements, which is to incorporate external data into the capital estimation process. In practice, this could be accomplished in more than one way, eg, by using the external data as a benchmark, by “mixing” the internal and external data or by conducting simulations on both data sets separately before obtaining a weighted-average capital estimate. Nevertheless, the results of this work suggest that we may be better off nondimensionalizing the data before bringing the two data sets together. However, this idea is only mentioned in passing here, as the process again lies beyond the scope of this paper.

13 IMPLICATIONS FOR CAPITAL MODELING, AND CONCLUSIONS

Our application of the method of dimensional analysis to operational loss data has led to two key findings. First, granular details behave as noise when losses are plotted in dimensionless form, implying that it is difficult to associate idiosyncratic features with the different divisions and classifications of the data. Second, the data points appear to cluster tightly around a single common distribution, regardless of the cut.

The fact that there seems to be a single trend common to all the data examined here (with the only exception being internal CPBP) brings us to the use of the SMA for operational risk capital modeling, which has received much recent attention from regulators, practitioners and academics. The SMA was proposed relatively recently by the Basel Committee to replace the AMA (Basel Committee on Banking Supervision 2016). The reason for this proposal is simply that, throughout the ten years or so of its existence, the AMA process has been (and remains) overly complicated and inconsistently applied across institutions.

The complexities in the AMA, which ultimately give rise to unreliable capital outputs, are likely a consequence of the many variables and adjustable parameters within it. Moreover, it is now conceivable, given the results generated here, that these parameters are governed by noise, which could explain the difficulties encountered when modeling operational risk at more granular levels. The SMA aims to circumvent this by proposing that one autonomous capital equation, independent of measurement units and other idiosyncratic details, should instead be applied at the entity level.

Based entirely on the evidence gathered here through the application of the method of dimensional analysis, we fully support the view of the SMA that a single one-type-fits-all capital model should be applied at the entity level. However, our findings do not suggest that the AMA should be dismantled and put away forever. From the perspective of the LDA, it would perhaps be sensible to look at operational loss data in dimensionless form, ignore all the specific details and focus instead on the overall dominating trend. Once this trend is identified and quantified, simulations can be carried out at the entity level to predict losses. And when performed in conjunction with the selected confidence level, capital estimates could follow.

It might also make sense to consider changing the confidence level in the AMA, which is currently set at 1 in 1000 years. The reason for this lies in the tail parameter of 1, which seems to crop up repeatedly in our investigation. Preliminary calculations show that a tail parameter of 1 could lead to a sudden and drastic jump in capital over what the AMA models are currently producing. Broadening the 1 in 1000 to 1 in 500, or something close to it, could render the capital output more reasonable and even attune it with the SMA’s output, should we aim to achieve consistency on all sides. This is beyond the scope of this paper, but it could provide an interesting potential follow-up investigation. Finally, it would be a worthwhile exercise to apply the method of dimensional analysis to operational loss data taken from additional sources, such as Fitch, ORX and internal data from other banks, to explore the possibility of similar convergent behaviors.
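As a rough, hedged illustration of why a tail parameter of 1 is so consequential (an assumption-laden sketch in R, not a calculation taken from this paper): if the severity tail were approximated by a Pareto-type law with tail parameter 1, high quantiles would scale like 1/(1 − q), so relaxing the standard from 1 in 1000 to 1 in 500 would roughly halve the extreme single-loss quantile.

```r
## Illustrative only: quantile scaling for an assumed Pareto-type severity tail
## with tail parameter a, P(X > x) = (x / x_m)^(-a); the scale x_m and the
## Pareto form itself are assumptions made purely for illustration.
pareto_quantile <- function(q, a, x_m = 1) x_m * (1 - q)^(-1 / a)

pareto_quantile(0.999, a = 1) / pareto_quantile(0.998, a = 1)  # 2: halved when relaxed
pareto_quantile(0.999, a = 2) / pareto_quantile(0.998, a = 2)  # ~1.41 for a lighter tail
```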


DECLARATION OF INTEREST

The author reports no conflicts of interest. The author alone is responsible for the content and writing of the paper.

REFERENCES

Basel Committee on Banking Supervision (2016). Consultative document: standardised measurement approach for operational risk. Report, March, Bank for International Settlements.

Buckingham, E. (1914). On physically similar systems: illustrations on the use of dimensional equations. Physical Review 4, 345–376 (http://doi.org/fjm7b8).

Cohen, R. D. (1998). An analysis of the dynamic behaviour of earnings distributions. Applied Economics 30, 1–17 (http://doi.org/b2mf3m).

de Fontnouvelle, P., De Jesus-Rueff, V., Jordan, J., and Rosengren, E. (2003). Using loss data to quantify operational risk. Technical Report, Federal Reserve Bank of Boston (http://doi.org/fw6vft).

de Jong, F. J. (1967). Dimensional Analysis for Economists. North-Holland, Amsterdam.

European Banking Authority (2016). 2016 EU-wide stress test. Methodological Note, February 24, EBA.

Moosa, I., and Li, L. (2013). An operational risk profile: the experience of British firms. Applied Economics 45, 2491–2500 (http://doi.org/bj6z).


Journal of Operational Risk 11(3), 97–116
DOI: 10.21314/JOP.2016.179

Research Paper

Rapidly bounding the exceedance probabilities of high aggregate losses

Isabella Gollini and Jonathan Rougier

¹Department of Economics, Mathematics and Statistics, Birkbeck, University of London, Malet Street, Bloomsbury, London WC1E 7HX, UK; email: [email protected]
²School of Mathematics, University of Bristol, University Walk, Clifton, Bristol BS8 1TW, UK; email: [email protected]

(Received August 24, 2015; revised March 16, 2016; accepted May 3, 2016)

ABSTRACT

We consider the task of assessing the right-hand tail of an insurer’s loss distribution for some specified period, such as a year. We present and analyze six different approaches: four upper bounds and two approximations. We examine these approaches under a variety of conditions, using a large event loss table for US hurricanes. For its combination of tightness and computational speed, we favor the moment bound. We also consider the appropriate size of Monte Carlo simulations and the imposition of a cap on single-event losses. We strongly favor the Gamma distribution as a flexible model for single-event losses, because of its tractable form in all of the methods we analyze, its generalizability and the ease with which a cap on losses can be incorporated into it.

Keywords: event loss table (ELT); compound Poisson process; moment bound; Monte Carlo simulation; catastrophe modeling.

Corresponding author: I. Gollini
Print ISSN 1744-6740 | Online ISSN 1755-2710. Copyright © 2016 Incisive Risk Information (IP) Limited


1 INTRODUCTION

One of the objectives in catastrophe modeling is to assess the probability distribution of losses for a specified period, such as a year. From the point of view of an insurance company, the whole of the loss distribution is interesting, and valuable in determining insurance premiums. However, the shape of the right-hand tail is critical, because it impinges on the solvency of the company. A simple measure of the risk of insolvency is the probability that the annual loss will exceed the company’s current operating capital. Imposing an upper limit on this probability is one of the objectives of the EU Solvency II directive.

If a probabilistic model is supplied for the loss process, then this tail probability can be computed either directly or by simulation. Shevchenko (2010) provides a survey of the various approaches. This can be a lengthy calculation for complex losses. Given the inevitably subjective nature of quantifying loss distributions, computational resources might be better used in a sensitivity analysis. This requires either a quick approximation to the tail probability or an upper bound on the probability, ideally a tight one. In this paper, we present and analyze several different bounds, all of which can be computed quickly from a very general event loss table (ELT). By making no assumptions about the shape of the right-hand tail beyond the existence of the second moment, our approach extends to fat-tailed distributions. We provide a numerical illustration and discuss the conditions under which the bound is tight.

2 INTERPRETING THE EVENT LOSS TABLE

We use a rather general form for the ELT, which is given in Table 1. In this form, the losses from an identified event i are themselves uncertain, and they are described by a probability density function f_i. That is to say, if X_i is the loss from a single occurrence of event i, then
$$\Pr(X_i \in A) = \int_A f_i(x)\, \mathrm{d}x$$
for any well-behaved A ⊆ ℝ. The special case in which the loss for an occurrence of event i is treated as a constant x_i is represented with the Dirac delta function f_i(x) = δ(x − x_i).

The choice of f_i for each event represents uncertainty about the loss that follows from the event; this is often termed “secondary uncertainty” in catastrophe modeling. We will discuss an efficient and flexible approach to representing more-or-less arbitrary specifications of f_i in Section 5.

There are two equivalent representations of the ELT for stochastic simulation of the loss process through time (see, for example, Ross 1996, Section 1.5).


TABLE 1 Generic event loss table.

Event ID   Arrival rate (yr⁻¹)   Loss distribution
1          λ_1                   f_1
2          λ_2                   f_2
⋮          ⋮                     ⋮
m          λ_m                   f_m

Row i represents an event with arrival rate λ_i and loss distribution f_i.

The first is that the m events with different IDs follow concurrent but independent homogeneous Poisson processes. The second is that the collective of events follows a single homogeneous Poisson process with arrival rate
$$\lambda := \sum_{i=1}^{m} \lambda_i;$$
then, when an event occurs, its ID is selected independently at random, with probability λ_i/λ.

The second approach is more tractable for our purposes. Therefore, we define Y as the loss incurred by a randomly selected event, with probability density function
$$f_Y = \sum_{i=1}^{m} \frac{\lambda_i}{\lambda}\, f_i.$$

The total loss incurred over an interval of length t is then modeled as the random sum of independent losses, or
$$S_t := \sum_{j=1}^{N_t} Y_j, \quad \text{where } N_t \sim \text{Poisson}(\lambda t) \text{ and } Y_1, Y_2, \ldots \overset{\text{iid}}{\sim} f_Y.$$

The total loss S_t would generally be termed a compound Poisson process with rate λ and component distribution f_Y. An unusual feature of loss modeling is that the component distribution f_Y is itself a mixture, sometimes with thousands of components.
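For concreteness, here is a minimal R sketch of this second representation for the simplest case of fixed per-event losses; the two-column ELT and its column names `rate` and `loss` are a hypothetical example, not the interface of the authors’ tailloss package.

```r
## Minimal sketch of simulating S_t: draw N_t ~ Poisson(lambda * t), select
## event IDs with probability lambda_i / lambda, then sum their losses.
simulate_St <- function(elt, t = 1, nsim = 1e5) {
  lambda <- sum(elt$rate)
  N <- rpois(nsim, lambda * t)                 # number of events per simulated interval
  vapply(N, function(n) {
    if (n == 0) return(0)
    ids <- sample.int(nrow(elt), n, replace = TRUE, prob = elt$rate / lambda)
    sum(elt$loss[ids])                         # fixed losses; random f_i would be drawn here
  }, numeric(1))
}

elt <- data.frame(rate = c(0.1, 0.02), loss = c(2, 15))   # hypothetical ELT, losses in $m
mean(simulate_St(elt, t = 1) >= 10)                       # Monte Carlo estimate of Pr(S_1 >= 10)
```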

3 A SELECTION OF UPPER BOUNDS

Our interest is in a bound for the probability Pr(S_t ≥ s) for some specified ELT and time period t; we assume, as is natural, that Pr(S_t ≤ 0) = 0. We pose this question: is Pr(S_t ≥ s) small enough to be tolerable for specified s and t? We are aware of four useful upper bounds on Pr(S_t ≥ s), which are explored here in terms of increasing complexity. The following material is covered in standard textbooks such as Grimmett and Stirzaker (2001), and in more specialized books such as Ross (1996) and Whittle (2000). To avoid clutter, we will drop the “t” subscript on S_t and N_t.

The Markov inequality

The Markov inequality states that if Pr(S ≤ 0) = 0 then
$$\Pr(S \geq s) \leq \frac{\mu}{s}, \qquad \text{(Mar)}$$
where μ := E(S). As S is a compound process,
$$\mu = E(N)\,E(Y) = \lambda t\, E(Y), \qquad (3.1)$$
with the second equality following because N is Poisson. The second expectation is simply
$$E(Y) = \sum_{i=1}^{m} \frac{\lambda_i}{\lambda}\, E(X_i).$$

We do not expect this inequality to be very tight, because it imposes no conditions on the integrability of S², but it is so fast to compute that it is always worth a try for a large s.
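A minimal sketch of this bound for the hypothetical fixed-loss ELT introduced above (not the tailloss implementation):

```r
## Markov bound sketch: mu = lambda * t * E(Y), equation (3.1), with E(Y)
## computed from a hypothetical fixed-loss ELT (columns `rate` and `loss`).
markov_bound <- function(elt, s, t = 1) {
  lambda <- sum(elt$rate)
  EY <- sum(elt$rate / lambda * elt$loss)   # E(Y) = sum_i (lambda_i / lambda) x_i
  mu <- lambda * t * EY
  pmin(1, mu / s)                           # Pr(S_t >= s) <= mu / s
}

elt <- data.frame(rate = c(0.1, 0.02), loss = c(2, 15))   # hypothetical ELT ($m)
markov_bound(elt, s = 10)
```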

The Cantelli inequality

If S is square-integrable, ie, �2 WD Var.S/ is finite, then

Pr.S > s/ 6 �2

�2 C .s � �/2for s > �: (Cant)

This is the Cantelli inequality, and it is derived from the Markov inequality. As S isa compound process,

�2 D E.N /Var.Y / C E.Y /2Var.N / D �tE.Y 2/; (3.2)

with the second equality following because N is Poisson. The second expectation issimply

E.Y 2/ DmX

iD1

�i

�E.X2

i /:

We expect the Cantelli bound will perform much better than the Markov bound, both because it exploits the fact that S is square integrable and because its derivation involves an optimization step. It is almost as cheap to compute, so it is really a free upgrade.
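A corresponding sketch, again for the hypothetical fixed-loss ELT used above:

```r
## Cantelli bound sketch, using sigma^2 = lambda * t * E(Y^2) from equation (3.2);
## same hypothetical fixed-loss ELT layout as in the Markov sketch.
cantelli_bound <- function(elt, s, t = 1) {
  lambda <- sum(elt$rate)
  EY   <- sum(elt$rate / lambda * elt$loss)
  EY2  <- sum(elt$rate / lambda * elt$loss^2)
  mu   <- lambda * t * EY
  sig2 <- lambda * t * EY2
  ifelse(s > mu, sig2 / (sig2 + (s - mu)^2), 1)   # bound only informative for s > mu
}

elt <- data.frame(rate = c(0.1, 0.02), loss = c(2, 15))
cantelli_bound(elt, s = 10)
```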


The moment inequality

This inequality and the Chernoff inequality below use the generalized Markov inequality: if g is increasing, then S ≥ s if and only if g(S) ≥ g(s), and so
$$\Pr(S \geq s) \leq \frac{E\{g(S)\}}{g(s)}$$
for any g that is increasing and nonnegative.

An application of the generalized Markov inequality gives
$$\Pr(S \geq s) \leq \inf_{k > 0} \frac{E(S^k)}{s^k},$$
because g(s) = s^k is nonnegative and increasing for all k > 0. Fractional moments can be tricky to compute, but integer moments are possible for compound Poisson processes. Hence, we consider
$$\Pr(S \geq s) \leq \min_{k = 1, 2, \ldots} \frac{E(S^k)}{s^k}. \qquad \text{(Mom)}$$

This cannot perform any worse than the Markov bound, which is the special case of k = 1.

The integer moments of a compound Poisson process can be computed recursively, as shown in Ross (1996, Section 2.5.1):
$$E(S^k) = \lambda t \sum_{j=0}^{k-1} \binom{k-1}{j} E(S^j)\, E(Y^{k-j}). \qquad (3.3)$$
The only new term here is
$$E(Y^{k-j}) = \sum_{i=1}^{m} \frac{\lambda_i}{\lambda}\, E(X_i^{k-j}).$$

At this point, it would be helpful to know the moment-generating function (MGF, see below) of each X_i.

Although not as cheap as the Cantelli bound, this does not appear to be an expensive calculation if the f_i have standard forms with simple known MGFs. It is legitimate to stop at any value of k, and it might be wise to limit k in order to avoid numerical issues with sums of very large values.
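A hedged sketch of the moment bound via the recursion in equation (3.3), for the same hypothetical fixed-loss ELT; the cap on k below is our own illustrative choice, not the authors’ optimization approach.

```r
## Moment-bound sketch using equation (3.3); k is capped at kmax to avoid
## numerical issues, as the text suggests.
moment_bound <- function(elt, s, t = 1, kmax = 20) {
  lambda <- sum(elt$rate)
  EYk <- function(k) sum(elt$rate / lambda * elt$loss^k)   # E(Y^k) for fixed losses
  ES  <- numeric(kmax + 1); ES[1] <- 1                     # ES[j + 1] = E(S^j), E(S^0) = 1
  for (k in 1:kmax) {
    j <- 0:(k - 1)
    ES[k + 1] <- lambda * t * sum(choose(k - 1, j) * ES[j + 1] * sapply(k - j, EYk))
  }
  min(1, min(ES[-1] / s^(1:kmax)))   # min over k = 1, ..., kmax of E(S^k) / s^k
}

elt <- data.frame(rate = c(0.1, 0.02), loss = c(2, 15))
moment_bound(elt, s = 10)
```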

The Chernoff inequality

Let M_S be the MGF of S; that is,
$$M_S(v) := E(e^{vS}), \quad v \geq 0.$$


Chernoff’s inequality states that
$$\Pr(S \geq s) \leq \inf_{k > 0} \frac{M_S(k)}{e^{ks}}. \qquad \text{(Ch)}$$
It follows from the generalized Markov inequality with g(s) = e^{ks}, which is nonnegative and increasing for all k > 0.

If M_Y is the MGF of Y, then
$$M_S(v) = M_N(\log M_Y(v)), \quad v \geq 0.$$
In our model, N is Poisson; hence,
$$M_N(v) = \exp\{\lambda t (e^v - 1)\}, \quad v \geq 0$$
(see, for example, Ross 1996, Section 1.4). Thus, the MGF of S simplifies to
$$M_S(v) = \exp\{\lambda t (M_Y(v) - 1)\}.$$
The MGF of Y can be expressed in terms of the MGFs of the X_i:
$$M_Y(v) = \sum_{i=1}^{m} \frac{\lambda_i}{\lambda}\, M_{X_i}(v).$$

Now it is crucial that the f_i have standard forms with simple known MGFs.

In an unlimited optimization, the Chernoff bound will never outperform the moment bound (Philips and Nelson 1995). In practice, however, constraints on the optimization of the moment bound may result in the best available Chernoff bound being lower than the best available moment bound. But, from large deviation theory, there is another reason to include the Chernoff bound (see, for example, Whittle 2000, Section 15.6 and Chapter 18). Let t be an integer number of years, and define S_1 as the loss from one year, so that $M_{S_t}(k) = \{M_{S_1}(k)\}^t$. Then, large deviation theory states that
$$\Pr(S_t \geq s) = \inf_{k > 0} \exp\{-ks + t \log M_{S_1}(k) + o(t)\};$$
so, as t becomes large, the Chernoff upper bound becomes exact. Very informally, then, the convergence of the Chernoff bound and the moment bound suggests, according to a squeezing argument, that both bounds are converging from above on the actual probability.

The bounds presented in this section require that the distribution has at least one finite moment. We argue that, in practice, there is a maximum loss that the insurer can encounter (eg, the sum of all the maximum insured losses). This implies that the loss distribution is bounded, ranging between 0 and the value of the maximum loss, and hence that all the moments are finite. In Section 5, we discuss three simple and widely applicable single-loss distributions that, along with the compound Poisson process, result in a loss distribution with finite moments.
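To close the section, here is a hedged Chernoff-bound sketch for the same hypothetical fixed-loss ELT; the search interval for k is a pragmatic choice of ours, whereas the authors’ own optimization approach is described in their online appendix.

```r
## Chernoff bound sketch for a fixed-loss ELT, where M_{X_i}(v) = exp(v * x_i);
## the interval for the minimization over k is an arbitrary illustrative choice.
chernoff_bound <- function(elt, s, t = 1, kmax = 1) {
  lambda <- sum(elt$rate)
  MY     <- function(v) sum(elt$rate / lambda * exp(v * elt$loss))  # MGF of Y
  logMS  <- function(v) lambda * t * (MY(v) - 1)                    # log MGF of S_t
  obj    <- function(k) logMS(k) - k * s                            # log of M_S(k) / e^{ks}
  min(1, exp(optimize(obj, interval = c(1e-8, kmax))$objective))
}

elt <- data.frame(rate = c(0.1, 0.02), loss = c(2, 15))   # hypothetical ELT ($m)
chernoff_bound(elt, s = 10)
```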


4 TWO “EXACT” APPROACHES

There are several approaches to computing Pr(S_t ≥ s) to arbitrary accuracy, although in practice this accuracy is limited by computing power (see Shevchenko (2010) for a review). We mention two here.

Monte Carlo simulation

One realization of S_t for a fixed time interval can be generated by discrete event simulation, which is also known as the Gillespie algorithm (see, for example, Wilkinson 2012, Section 6.4). Many such simulations can be used to approximate the distribution function of S_t; they can also be used to estimate probabilities, including tail probabilities.

Being finite-sample estimates, these probabilities should have a measure of uncertainty attached. This is obviously an issue for regulation, where the requirement is often to demonstrate that
$$\Pr(S_1 \geq s_0) \leq \kappa_0$$
for some s_0, which reflects the insurer’s available capital, and some κ_0 specified by the regulator. For Solvency II, κ_0 = 0.005 for one-year total losses. A Monte Carlo point estimate of p_0 := Pr(S_1 ≥ s_0) that was less than κ_0 would be much more reassuring if the whole of the 95% confidence interval for p_0 were less than κ_0, rather than if the 95% confidence interval contained κ_0.

A similar problem is faced in ecotoxicology, where one recommendation would be equivalent in this context to requiring that the upper bound of a 95% confidence interval for p_0 is no greater than κ_0 (see Hickey and Hart 2013). If we adopt this approach, though, it is incorrect simply to monitor the upper bound and stop sampling when it drops below κ_0, because the confidence interval in this case ought to account for the stochastic stopping rule, rather than being based on a fixed sample size. However, it is possible to do a design calculation to suggest an appropriate value for n, the sample size, that will ensure the upper bound will be no larger than κ_0 with specified probability, a priori, as we now discuss.

Let u_{1−α}(x; n) be the upper limit of a level (1 − α) confidence interval for p_0, where x is the number of sample members that are at least s_0, and n is the sample size. Suppose that the a priori probability of this upper limit being no larger than κ_0 is to be at least β_0, where β_0 would be specified. In that case, valid n satisfy
$$\Pr\{u_{1-\alpha}(X; n) \leq \kappa_0\} \geq \beta_0,$$
where X ~ Binom(n, p_0).


FIGURE 1 The effect of sample size on Monte Carlo accuracy.

[Figure 1: probability that u_{0.95}(X; n) ≤ κ_0, plotted against the size of the Monte Carlo sample, n (log scale).]

The probability that the upper bound of the 95% Jeffreys confidence interval for p_0 lies below κ_0 = 0.005 when p_0 = κ_0/2.

There are several ways of constructing an approximate (1 − α) confidence interval for p_0, which are reviewed in Brown et al (2001).¹ We suggest what they term the (unmodified) Jeffreys confidence interval, which is simply the equi-tailed (1 − α) credible interval for p_0 with the Jeffreys prior, with a minor modification. Using this confidence interval, Figure 1 shows the probability for various choices of n with κ_0 = 0.005 and p_0 = κ_0/2. In this case, n = 10⁵ seems to be a good choice, and this number is widely used in practice.

¹ It is not possible to construct an exact confidence interval without using an auxiliary randomization.

Panjer recursion

The second approach is Panjer recursion (see Ross (1996, Corollary 2.5.4) or Shevchenko (2010, Section 5)). This provides a recursive calculation for Pr(S_t = s) whenever each X_i is integer-valued, so that S itself is integer-valued. This calculation would often grind to a halt if applied literally, but it can be used to provide an approximation if the ELT is compressed, as will be discussed in Section 6.1.

Perhaps the main difficulty with Panjer recursion, once it has been efficiently encoded, is that it does not provide any assessment of the error that follows from the compression of the ELT. In this situation, a precise and computationally cheap upper bound may be of more practical use than an approximation. Section 6.1 also discusses indirect ways to assess the compression error using the upper bounds.

Monte Carlo simulation is an attractive alternative to Panjer recursion for a number of reasons:

(1) it comes with a simple assessment of accuracy;

(2) it is easily parallelizable;

(3) the sample drawn from it can be used to calculate other quantities of interest for insurers, such as the net aggregate loss and reinsurance recovery costs.

Another common method to calculate the exceedance probability is the fast Fourier transform, which is based on the inversion of the characteristic function. This approach needs discrete losses, and the density of the loss distribution is estimated by using a grid. Thus, if we are interested in a precise estimate of the upper tail of the distribution, it is necessary to use a high number of bins; this is computationally intensive.

5 TRACTABLE SPECIAL CASES

In this section, we consider three tractable special cases.

First, suppose that
$$f_i(x) = \delta(x - x_i), \quad i = 1, \ldots, m,$$
ie, the loss from event i is fixed at x_i. Then,
$$E(X_i^k) = x_i^k \quad \text{and} \quad M_{X_i}(v) = e^{v x_i}.$$

All of the bounds are trivial to compute.

Second, suppose that each f_i is a Gamma distribution with parameters (α_i, β_i):
$$f_i(x) = \mathrm{Gam}(x; \alpha_i, \beta_i) = \frac{\beta_i^{\alpha_i}}{\Gamma(\alpha_i)}\, x^{\alpha_i - 1} e^{-\beta_i x}\, \mathbf{1}_{x > 0}, \quad i = 1, \ldots, m,$$
for α_i, β_i > 0, where 1 is the indicator function and Γ is the Gamma function:
$$\Gamma(s) := \int_0^{\infty} x^{s-1} e^{-x}\, \mathrm{d}x.$$
Then,
$$M_i(v) = \left( \frac{\beta_i}{\beta_i - v} \right)^{\alpha_i}, \quad 0 \leq v < \beta_i. \qquad (5.1)$$
The moments are
$$E(X_i^k) = \frac{\Gamma(\alpha_i + k)}{\beta_i^k\, \Gamma(\alpha_i)}, \qquad (5.2)$$


and, hence,
$$E(X_i) = \frac{\alpha_i}{\beta_i}, \qquad E(X_i^2) = \frac{(\alpha_i + 1)\alpha_i}{\beta_i^2}.$$

Third, suppose that each f_i is a finite mixture of Gamma distributions:
$$f_i(x) = \sum_{k=1}^{p_i} \pi_{ik}\, \mathrm{Gam}(x; \alpha_{ik}, \beta_{ik}), \quad i = 1, \ldots, m,$$
where $\sum_{k=1}^{p_i} \pi_{ik} = 1$ for each i. Then,
$$f_Y(y) = \sum_{i=1}^{m} \frac{\lambda_i}{\lambda} \sum_{k=1}^{p_i} \pi_{ik}\, \mathrm{Gam}(y; \alpha_{ik}, \beta_{ik}) = \sum_{i=1}^{m} \sum_{k=1}^{p_i} \frac{\lambda_i \pi_{ik}}{\lambda}\, \mathrm{Gam}(y; \alpha_{ik}, \beta_{ik}).$$
In other words, this is exactly the same as creating an extended ELT with plain Gamma f_i (ie, as in the second case); however, in this case, each λ_i is shared out among the p_i mixture components according to the mixture weights π_{i1}, …, π_{ip_i}.

This third case is very helpful, because the Gamma calculation is so simple, and yet it is possible to approximate any strictly positive absolutely continuous probability density function that has limit zero as x → ∞ with a mixture of Gamma distributions (Wiper et al 2001). It is also possible to approximate point distributions by very concentrated Gamma distributions, as is discussed below in Section 6.3. Thus, the secondary uncertainty for an event might be represented as a set of discrete losses, each with its own probability, but encoded as a set of highly concentrated Gamma distributions, leading to very efficient calculations.

Capped single-event losses

For insurers, a rescaled Beta distribution is often preferred to a Gamma distribution, because it has a finite upper limit representing the maximum insured loss. The MGF of a Beta distribution is an untabulated function with an infinite series representation. This means it will be more expensive to compute accurately, which will affect the Chernoff bound. There are no difficulties with the moments.

However, we would question the suitability of using a Beta distribution here. The insurer’s loss from an event is capped at the maximum insured loss. This implies an atom of probability at the maximum insured loss: if f_i is the original loss distribution for event i, and u is the maximum insured loss, then
$$f_i(x; u) = f_i(x)\, \mathbf{1}_{x < u} + (1 - p_i)\, \delta(x - u),$$


where $p_i := \int_0^u f_i(x)\, \mathrm{d}x$ and δ is the Dirac delta function, as before. A Beta distribution scaled to [0, u] would be quite different, having no atom at u.

The Gamma distribution for f_i is tractable with a cap on losses. If f_i is a Gamma distribution, then the MGF is
$$M_i(v; u) = \left( \frac{\beta_i}{\beta_i - v} \right)^{\alpha_i} \frac{\gamma(\alpha_i, (\beta_i - v)u)}{\Gamma(\alpha_i)} + (1 - p_i)\, e^{vu},$$

where γ is the incomplete Gamma function,
$$\gamma(s, u) := \int_0^u x^{s-1} e^{-x}\, \mathrm{d}x,$$
and
$$p_i := \frac{\gamma(\alpha_i, \beta_i u)}{\Gamma(\alpha_i)}.$$
The moments of f_i(·; u) are
$$E(X_i^k; u) = \frac{\gamma(k + \alpha_i, \beta_i u)}{\beta_i^k\, \Gamma(\alpha_i)} + (1 - p_i)\, u^k.$$

Introducing a nonzero lower bound is straightforward.
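For instance, a minimal R sketch of these capped moments (a sketch under the notation above, not the tailloss implementation), using the identity γ(s, βu)/Γ(s) = pgamma(u, shape = s, rate = β):

```r
## Capped Gamma moments E(X^k; u) via the lower incomplete Gamma function,
## using gamma(s, beta * u) / Gamma(s) = pgamma(u, shape = s, rate = beta).
capped_gamma_moment <- function(k, alpha, beta, u) {
  p <- pgamma(u, shape = alpha, rate = beta)          # p_i = gamma(alpha, beta*u) / Gamma(alpha)
  pgamma(u, shape = k + alpha, rate = beta) *
    gamma(k + alpha) / (beta^k * gamma(alpha)) +      # gamma(k+alpha, beta*u) / (beta^k Gamma(alpha))
    (1 - p) * u^k
}

capped_gamma_moment(k = 1, alpha = 4, beta = 2, u = 5)   # capped mean, illustrative parameters
```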

6 NUMERICAL ILLUSTRATION

We have implemented the methods of this paper in a package for the R open source statistical computing environment (R Core Team 2013), named tailloss. In addition, this package includes a large ELT for US hurricanes (32 060 rows).

6.1 The effect of merging

We provide a utility function, compressELT, which reduces the number of rows of an ELT by rounding and merging. This speeds up all of the calculations, and it is crucial for the successful completion of the Panjer approximation.

The rounding operation rounds each of the losses to a specified number d of decimal places, with d = 0 giving an integer and d < 0 giving a value with |d| zeros before the decimal point. Then, the rounded value is multiplied by 10^d to convert it to an integer. Finally, the merge operation combines all of the rows of the ELT with the same transformed loss and adds their rates.
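A hedged sketch of this round-and-merge operation on a two-column ELT (this imitates, but is not, the package’s compressELT function; dropping rows whose losses round to zero is our own illustrative choice):

```r
## Sketch of rounding-and-merging an ELT: round losses to d decimal places,
## convert to integers by multiplying by 10^d, then merge equal losses by
## summing their rates.
compress_elt <- function(elt, d = 0) {
  loss_int <- round(elt$loss, d) * 10^d
  merged   <- aggregate(rate ~ loss, data = data.frame(rate = elt$rate, loss = loss_int), sum)
  merged[merged$loss > 0, c("rate", "loss")]   # drop any rows rounded to a zero loss
}

elt <- data.frame(rate = c(0.09265, 0.03143, 0.00001),
                  loss = c(12500, 14800, 17593790))   # hypothetical losses in $
compress_elt(elt, d = -4)                             # losses now in units of $10 000
```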

Table 2 shows some of the original ELT, and Table 3 is the same table after rounding to the nearest $10 000 (ie, d = −4). It is an empirical question, how much rounding can be performed on a given ELT without materially changing the distribution of t-year total losses. Ideally, this would be assessed using an exact calculation, such as Panjer recursion. Unfortunately, it is precisely because Panjer recursion is so numerically


TABLE 2 ELT for US hurricane data set.

Event ID   Arrival rate (yr⁻¹)   Expected loss ($)
1          0.09265                        1
2          0.03143                        2
3          0.02159                        3
4          0.01231                        4
5          0.01472                        5
⋮          ⋮                              ⋮
32 056     0.00001               17 593 790
32 057     0.00001               18 218 506
32 058     0.00001               18 297 003
32 059     0.00001               19 970 669
32 060     0.00001               24 391 615

Row i represents an event with arrival rate λ_i and expected loss x_i.

TABLE 3 ELT for US hurricane data set, after rounding and merging to $10 000 (d = −4).

Event ID   Arrival rate (yr⁻¹)   Expected loss ($10 000)
1          0.35764                        1
2          0.16864                        2
3          0.16088                        3
4          0.12135                        4
5          0.12239                        5
⋮          ⋮                              ⋮
1141       0.00001                     1759
1142       0.00001                     1822
1143       0.00001                     1830
1144       0.00001                     1997
1145       0.00001                     2439

Row i represents an event with arrival rate λ_i and expected loss x_i.

intensive that the rounding and merging of large ELTs is necessary in the first place. So, instead, we assess the effect of rounding and merging using the moment bound, which, as already established, converges to the actual value when the number of events in the time interval is large.

Figure 2 shows the result of eight different values for d, from −7 to 0. The outcome with d = −7 is materially different, which is not surprising, because this ELT only


FIGURE 2 The effect of compression and merging on the US hurricanes ELT.

[Figure 2: Pr(S ≥ s) against s (US$ million, 0–40) for eight compression levels. Legend (number of rows after compression): d = −7: 2; d = −6: 20; d = −5: 167; d = −4: 1145; d = −3: 5017; d = −2: 15 078; d = −1: 25 865; d = 0: 32 060.]

The curves show the values of the moment bound on the exceedance probability for one-year total losses. All values of d larger than −7 (only two rows) give very similar outcomes, with values of −5 or larger being effectively identical, and overlaid on the figure.

has two rows. More intriguing is that the outcome with d = −6 is almost the same as that with no compression at all, despite the ELT having only twenty rows.

6.2 Computational expense of the different methods

Here, we consider one-year losses. We treat the losses for each event as certain, ie, as in the first case of Section 5. The methods we consider are Panjer, Monte Carlo, moment, Chernoff, Cantelli and Markov. The first two provide approximately exact values for Pr(S_1 ≥ s). Panjer is an approximation because of the need to compress the ELT. For the Monte Carlo method, we used 10⁵ simulations, as discussed in Section 4, and we report the 95% confidence interval in the tail. The remaining methods provide strict upper bounds on Pr(S_1 ≥ s). Our optimization approach for the moment and Chernoff bounds is given in the online appendix. All the timings are CPU times in seconds on an iMac with a 2.9 GHz Intel Core i5 processor.

Figure 3 shows the exceedance probabilities for the methods, computed on 101 equally spaced ordinates between $0 million and $40 million, with compression d = −4. The Markov bound is the least effective, and the Cantelli bound is surprisingly good. As expected, the Chernoff and moment bounds converge, and, in this case, they also converge on the Panjer and Monte Carlo estimates.


FIGURE 3 Exceedance probabilities for the methods, with rounding of d = −4 on the US hurricanes ELT.

[Figure 3: Pr(S_1 ≥ s) against s (US$ million, 0–40). Curves: Panjer; Monte Carlo; moment; Chernoff; Cantelli; Markov; with the Monte Carlo 95% CI marked.]

The legend shows the Monte Carlo 95% confidence interval for p_0 at s_0 = $40 million (see Section 4). Each curve comprises 101 points, equally spaced between $0 million and $40 million. Timings are given in Table 4. For future reference, this figure has t = 1, u = ∞, θ = 0 and d = −4.

TABLE 4 Timings for the methods shown in Figure 3, in seconds on a standard desktop computer, for different degrees of rounding (see Section 6.1).

              d = −4    d = −3     d = −2    d = −1    d = 0
Panjer         0.165    18.272   1715.764       N/A      N/A
Monte Carlo    0.419     1.116      3.052     5.286    6.463
Moment         0.001     0.003      0.005     0.008    0.010
Chernoff       0.021     0.132      0.289     0.447    0.534
Cantelli       0.000     0.000      0.001     0.002    0.002
Markov         0.000     0.000      0.001     0.001    0.001

The timings for the methods are given in Table 4. These values require very little elaboration. The moment, Cantelli and Markov bounds are effectively instantaneous to compute, with timings of a few thousandths of a second. The Chernoff bound is more expensive but still takes only a fraction of a second. The Monte Carlo and Panjer approximations are hundreds, thousands or even millions of times more expensive. The Panjer approximation is impractical to compute at a compression below d = −2 (and from now on we will just consider d ≤ −3).


A similar table to Table 4 could be constructed for any specified value s_0, rather than a whole set of values. The timings for the moment, Chernoff, Cantelli and Markov bounds would all be roughly one hundredth as large, because these are evaluated point-wise. The timing for Monte Carlo would be unchanged. The timing for Panjer would be roughly the proportion s_0/$40 million of the total timing, because it is evaluated sequentially, from small to large values of s.

6.3 Gamma thickening of the event losses

We continue to consider one-year losses, but we now treat the losses from each event as random, not fixed. This allows us to embed the secondary uncertainty. For the simplest possible generalization, we use a Gamma distribution with a specified expectation x_i and a common specified coefficient of variation, θ := σ_i/x_i. The previous case of a fixed loss x_i is represented by the limit θ → 0, which we write, informally, as θ = 0. Solving
$$x_i = \frac{\alpha_i}{\beta_i} \quad \text{and} \quad \theta x_i = \sqrt{\frac{\alpha_i}{\beta_i^2}}$$
gives the two Gamma distribution parameters as
$$\alpha_i = \frac{1}{\theta^2} \quad \text{and} \quad \beta_i = \frac{\alpha_i}{x_i}.$$

Since the upper bounds are very quick to compute, it is possible to perform a sensitivity analysis varying the value for the parameter θ. This would allow the insurer to assess the spread of uncertainty from the single losses to the final loss distribution in a formal way. Figure 4 shows the effect of varying θ on a Gamma distribution with expectation $1 million.
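As a brief sketch of this parameterization, again using the hypothetical two-column ELT from earlier sketches rather than the package’s own interface:

```r
## Sketch of "Gamma thickening": replace each fixed expected loss x_i with a
## Gamma(alpha_i, beta_i) severity having coefficient of variation theta,
## so alpha_i = 1 / theta^2 and beta_i = alpha_i / x_i.
thicken_gamma <- function(elt, theta) {
  alpha <- 1 / theta^2
  data.frame(rate = elt$rate, shape = alpha, rateG = alpha / elt$loss)
}

elt <- data.frame(rate = c(0.1, 0.02), loss = c(2, 15))   # hypothetical ELT ($m)
thicken_gamma(elt, theta = 0.5)    # each row now has E(X_i) = x_i and sd(X_i) = 0.5 * x_i
```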

The only practical difficulty with allowing random losses for each event occurs for the Panjer method; we describe our approach in the online appendix.

Figure 5 shows the exceedance probability curve with θ = 0.5. Note that the horizontal scale now covers a much wider range of loss values than Figure 3. The timings are given in Table 5: these are very similar to the non-random case with θ = 0 (Table 4), with the exception of the Panjer method. This takes longer because it scales linearly to the upper limit on the horizontal axis.

6.4 Capping the loss from a single event

Now consider the case where the single-event loss is capped at $5 million. The implementation of this cap is straightforward, and we describe it in the online appendix. The results are given in Figure 6 and Table 6. For the timings, the main effect of the cap is on the Panjer method, because the cap reduces the probability in the right-hand tail of


FIGURE 4 Effect of varying θ on the shape of the Gamma distribution with expectation $1 million. [Plot of probability density (×10⁻⁶) against x, $ million (0–3), with one curve for each value of θ: 0.10, 0.25, 0.50, 1.00, 2.00.]

FIGURE 5 As Figure 3, with t = 1, u = ∞, θ = 0.5 and d = −4. [Plot of Pr(S_1 ≥ s) against s, $ million (0–60), for the Panjer, Monte Carlo (with 95% CI), moment, Chernoff and Cantelli methods.] The Markov bound has been dropped. Timings are as given in Table 5.


Journal of Operational Risk www.risk.net/journal

To subscribe to a Risk Journal visit Risk.net/subscribe or email [email protected]

Page 119: Volume 11 Number 3 September 2016 rial Copysubscriptions.risk.net/.../01/JournalofOperationalRisk.pdf · 2019-04-01 · Volume 11 Number 3 September 2016 The Journal of Operational

Rapidly bounding the exceedance probabilities 113

TABLE 5 Timings for the methods shown in Figure 5.

              d = −4    d = −3    d = −2    d = −1    d = 0
  Panjer       0.490    50.527    N/A       N/A       N/A
  Monte Carlo  0.419     1.226    3.275     5.774     7.039
  Moment       0.003     0.009    0.032     0.049     0.056
  Chernoff     0.051     0.290    0.714     1.166     1.424
  Cantelli     0.000     0.002    0.004     0.007     0.008

FIGURE 6 As Figure 3, with t = 1, u = $5 million, θ = 0.5 and d = −4. [Plot of Pr(S_1 ≥ s) against s, $ million (0–40), for the Panjer, Monte Carlo (with 95% CI), moment, Chernoff and Cantelli methods.] Timings are as given in Table 6.

TABLE 6 Timings for the methods shown in Figure 6.

              d = −4    d = −3    d = −2    d = −1    d = 0
  Panjer       0.090     5.040    N/A       N/A       N/A
  Monte Carlo  0.516     1.289    3.334     5.875     7.062
  Moment       0.009     0.035    0.104     0.167     0.206
  Chernoff     0.235     1.008    2.745     4.511     5.413
  Cantelli     0.001     0.003    0.010     0.016     0.020

6.5 Ten-year losses

Finally, consider expanding the time period from t = 1 to t = 10 years; the results are given in Figure 7 and Table 7. The timings of the Markov, Cantelli, moment and Chernoff bounds are unaffected by the value of t. The timing for the Panjer method grows with t, because the right-hand tail of S_t grows with t. The timing for the Monte Carlo method grows roughly linearly with t, but the "in simulation" time for Monte Carlo is dominated by other factors, so the additional computing time for the increase from t = 1 to t = 10 is small.


FIGURE 7 As Figure 3, with t = 10, u = $5 million, θ = 0.5 and d = −4. [Plot of Pr(S_10 ≥ s) against s, $ million (0–120), for the Panjer, Monte Carlo (with 95% CI), moment, Chernoff and Cantelli methods.] Timings are as given in Table 7.

TABLE 7 Timings for the methods shown in Figure 7.

              d = −4    d = −3    d = −2    d = −1    d = 0
  Panjer       0.218    15.404    N/A       N/A       N/A
  Monte Carlo  0.634     1.411    3.458     6.035     7.320
  Moment       0.014     0.055    0.185     0.281     0.349
  Chernoff     0.235     0.971    2.731     4.449     5.459
  Cantelli     0.001     0.003    0.010     0.017     0.020


7 SUMMARY

We have presented four upper bounds and two approximations for the upper tail of the loss distribution that follows from an ELT. We argue that in many situations an upper bound on this probability is sufficient: for example, to satisfy the regulator, to support a sensitivity analysis, or when there is supporting evidence that the bound is quite tight. Of the bounds we have considered, we find that the moment bound offers the best blend of tightness and computational efficiency. In fact, the moment bound is effectively costless to compute, based on the timings from our R package.

We have stressed that there are no exact methods for computing tail probabilities when taking into account limited computing resources. Of the approximately exact methods we consider, we prefer Monte Carlo simulation over Panjer recursion, because of the availability of an error estimate in the former and the amount of information provided by the latter. A back-of-the-envelope calculation suggests that 10 000 Monte Carlo simulations should suffice to satisfy the Solvency II regulator.
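One plausible reading of this back-of-the-envelope calculation (our gloss, assuming the Solvency II standard of a one-year 99.5% value-at-risk, so that the target exceedance probability is p = 0.005): with N = 10 000 simulations, the binomial standard error of the estimated exceedance probability is
$$
\sqrt{\frac{p(1-p)}{N}} = \sqrt{\frac{0.005 \times 0.995}{10\,000}} \approx 7 \times 10^{-4},
$$
roughly one-seventh of the target probability itself, which is arguably adequate for checking an estimated capital figure against the regulatory threshold (Brown et al (2001) discuss more careful interval estimates for binomial proportions).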

The merging operation is a very useful way to condense an ELT that has become bloated, eg, after using mixtures of Gamma distributions to represent more complicated secondary uncertainty distributions. We have shown that the moment bound provides a quick way to assess how much merging can be done without it having a major impact on the resulting aggregate loss distribution.

We have also demonstrated the versatility of the Gamma distribution for single-event losses. The Gamma distribution has a simple MGF and explicit expressions for the moments. Therefore, it fits very smoothly into the compound Poisson process that is represented in an ELT for the purposes of computing approximations and bounds. We also show how the Gamma distribution can easily be adapted to account for a cap on single-event losses. We favor the capped Gamma distribution over the Beta distribution, which is often used in the industry, because the former has an atom (as is appropriate) while the latter does not.
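For reference (standard facts, not restated in the paper): with the shape–rate parametrization used in Section 6.3, X ~ Gamma(α, β) has
$$
M_X(z) = \mathbb{E}\bigl[e^{zX}\bigr] = \Bigl(1 - \frac{z}{\beta}\Bigr)^{-\alpha} \quad (z < \beta),
\qquad
\mathbb{E}\bigl[X^k\bigr] = \frac{\alpha(\alpha+1)\cdots(\alpha+k-1)}{\beta^k}, \quad k = 1, 2, \ldots,
$$
which is what keeps the MGF- and moment-based bounds cheap to evaluate for the compound Poisson sum.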

DECLARATION OF INTEREST

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

ACKNOWLEDGEMENTS

This work was funded in part by NERC Grant NE/J017450/1, as part of the CREDIBLE consortium. We would like to thank Dickie Whittaker, David Stephenson and Peter Taylor for valuable comments that helped in improving the exposition of this paper, and for supplying the US hurricanes ELT.

REFERENCES

Brown, L. D., Cai, T. T., and DasGupta, A. (2001). Interval estimation for a binomial proportion. Statistical Science 16(2), 101–117 (with discussion, pp. 117–133).

Grimmett, G. R., and Stirzaker, D. R. (2001). Probability and Random Processes, 3rd edn. Oxford University Press, New York.


Hickey, G. L., and Hart, A. (2013). Statistical aspects of risk characterisation in ecotoxicology. In Risk and Uncertainty Assessment for Natural Hazards, Rougier, J. C., Sparks, S., and Hill, L. J. (eds), Chapter 14, pp. 481–501. Cambridge University Press (http://doi.org/bmhx).

Philips, T. K., and Nelson, R. (1995). The moment bound is tighter than Chernoff's bound for positive tail probabilities. American Statistician 49(2), 175–178 (http://doi.org/bmhz).

R Core Team (2013). R: a language and environment for statistical computing. Software, R Foundation for Statistical Computing, Vienna, Austria.

Ross, S. M. (1996). Stochastic Processes, 2nd edn. Wiley.

Shevchenko, P. V. (2010). Calculation of aggregate loss distributions. The Journal of Operational Risk 5(2), 3–40 (http://doi.org/bppc).

Whittle, P. (2000). Probability via Expectation, 4th edn. Springer (http://doi.org/bw6rnz).

Wilkinson, D. J. (2012). Stochastic Modelling for Systems Biology, 2nd edn. CRC Press, Boca Raton, FL.

Wiper, M., Insua, D. R., and Ruggeri, F. (2001). Mixtures of Gamma distributions with applications. Journal of Computational and Graphical Statistics 10(3), 440–454 (http://doi.org/cjbsvb).
