
Systemic Risk of Modelling in Insurance

Did your model tell you all models are wrong?

White paper by the Systemic Risk of Modelling Working Party


Contents

Foreword by Simon Beale 4
Foreword by Ian Goldin 5
Contributors 6

Introduction 7
1.1 Purpose 7
1.2 Scope 8
1.3 The Authors 8
1.4 What is the Systemic Risk of Modelling? 8
    The Modelling Process 8
    Benefits of Modelling 10
    Systemic Risk 10
    Examples of Systemic Risk 11
    Systemic Risk of Modelling in Insurance 12
    Model Sources of Systemic Risk of Modelling 13
    Organisational Sources of Systemic Risk of Modelling 13
    Behavioural Sources of Systemic Risk of Modelling 14
1.5 Structure of the White Paper 14

Framework 15
2.1 Overview 16
2.2 Modelling Risk 16
    Data Risk 16
    Model Risk 18
    Process Risk 19
2.3 Organisational Risk 20
    Regulation 21
    Risk Diversification 21
    Competitive Environment 22
    Model Markets 23
    Internal Risks 23
2.4 Behavioural Risks 26
    Autopilot 26
    Cognitive Bias 28
2.5 Risk Factors Summary 30


Scorecard 31
3.1 Introduction and purpose of a scorecard 31
3.2 Design and elements of scoring system 31
    Selection of factors 31
    Scorecard Design 32
    Scoring system calculation (an administrative example of scoring system administration) 35
3.3 Use of scorecard and outlook 37

Summary guidelines for better practice 38
4.1 Systemic risk in the larger world 38
4.2 Regulation, policy, practice 39
    Monitoring of Systemic Risk of Modelling at industry level 39
    Stress testing for SRoM at industry level (e.g. major flaw in major model) 39
    Regulatory disclosures on SRoM 39
4.3 Making more resilient organisations and markets 39
    Model Independent Scenario Analyses 39
    Training 40
    Learning from close calls 40
    Maintaining model diversity 40
4.4 Training/behavioural management 41
4.5 Final words 42

Glossary 43

Notes 46

If you are interested in finding out more about Systemic Risk of Modelling, please email: [email protected]
The Leadenhall Building, 122 Leadenhall Street, London EC3V 4AG. Switchboard +44 (0)20 7746 1000



Foreword by Simon Beale

In his book The Butterfly Defect, Professor Ian Goldin observes that “Systemic risk cannot be removed because it is endemic to globalisation. It is a process to be managed, and not a problem to be solved.” It is primarily for this reason that Amlin and the Oxford Martin School (OMS) have worked with industry experts to develop a practical and applied method to consider ways of encouraging the quantification, monitoring, reporting and management of the systemic risk of modelling.

Systemic risk is increasing in relevance due to the growing level of globalisation, interconnectedness, and speed of business transactions in our world. Whilst these trends are beneficial, they also introduce new complexities to some existing risks. The natural response to a riskier and more interconnected world is to try to quantify it, and thus modelling as a means to better understand risk is becoming increasingly common and, in itself, complex. Systemic risk of modelling (SRoM) is by its very nature often difficult to quantify, partly due to the way in which many, often sophisticated, systems interact with each other.

This paper is specifically focused on a practical solution for our industry, with the objective to design a risk scorecard for the SRoM – a practical way to measure the amount of systemic risk introduced from modelling, leveraging the Amlin-OMS collaborative research and findings to date. The SRoM scorecard has been designed less towards an exact risk measure and more towards providing an indication of whether certain actions and practices are aligned with reducing systemic risk of modelling.

Our intention to introduce and address systemic risk using an applied and practical method, with leading figures from the insurance market, was announced at our London event last year. I am pleased to convey that the strong interest was self-evident, as experts from both academic research and the insurance industry volunteered to contribute and co-author this paper. As a result of this concerted research we are better positioned to design a SRoM scorecard that firms and regulators can use to monitor and manage sources and measures of systemic risk. In addition, practical solutions towards measuring whether risk management decisions and modelling practices are aligned with reducing or heightening systemic risk were developed and are incorporated in this paper.

Amlin and the Oxford Martin School have worked collaboratively with this group of experts and market practitioners, with representation from life and general insurers, reinsurers, catastrophe model vendors, brokers, regulators and consultants. By bringing together the ‘best minds’ in the business with the ‘best minds’ in academia, an effective and meaningful SRoM scorecard has been developed which can be used by firms and regulators as part of their toolkit to better monitor and manage risk.

The technical expertise and support of the individuals involved has been invaluable and I am grateful to all those who have helped to share ideas and best practices for this paper. I believe the SRoM scorecard is another significant step in managing the systemic risks we face and ensuring that our reliance on the increasing amalgamation of models in the insurance industry is managed more effectively.

Simon Beale, Chief Underwriting Officer, Amlin plc.


Foreword by Ian Goldin

As we move into the 21st century the world is becoming more connected, more complex and more uncertain. The latest wave of globalization has integrated markets and finance, while the information revolution has compressed time and space. The result is that both virtual and physical proximity have increased, in general to great benefit. Similarly, the network structures underlying society and technology have grown to become more complex, interdependent and integrated than ever before, facilitating the transmission of material, capital and knowledge at a high degree of efficiency.

Yet this connectedness and complexity increases uncertainty: more factors from more distant places can influence events, crises can unfold far faster than before, and the sheer flood of information taxes the ability to distinguish the signal from the noise in a timely manner. The result of these trends is a growth of systemic risk that challenges the benefits that globalisation has produced.

As the financial crisis demonstrated, there is a real risk that governance can fall behind the growth of new models and instruments: tools that may reduce risk in certain circumstances may have unexpected and destructive ramifications when used unwisely. Making strong assumptions and simplifications in models that failed to capture the complexity and systemic nature of finance produced a form of collective overconfidence that led to crisis. In order to reduce systemic risk, risk governance needs to be strengthened. There is a need for new models that take account of the integration and complexity of network structures, as well as for transparent communications about choices, risks, and uncertainty of policy alternatives, improved risk measurement, and promotion of resiliency and sustainability.

In contributing to improvements in risk management, the Oxford Martin School has collaborated with the Industry Working Party on Systemic Risk of Risk Modelling to develop this report. The aim has been to understand how different factors build up systemic risk when risk models are used, and to approach a way of detecting the risk. Understanding how the fallibility of human thinking, competitive pressures, inadequate governance and weak modelling processes interact to produce systemic risk is a first step towards measuring and mitigating it. It is my hope that this document will in the long run help improve systemic risk governance not only in insurance but in other institutions where risk models are used.

Professor Ian Goldin, Director of the Oxford Martin School at the University of Oxford.

Author of The Butterfly Defect: How globalisation creates systemic risks, and what to do about it.



Contributors

The views expressed in this paper were held under the Chatham House Rule and therefore do not necessarily reflect the views of the organisations to which the working party members belong.

Academics
Owain Evans – Research Fellow – Oxford Martin School
Ben Levinstein – Research Fellow – Oxford Martin School
Anders Sandberg – Senior Research Fellow – Oxford Martin School
Andrew Snyder-Beattie – FHI Research Director – Oxford Martin School
Cecilia Tilli – Programme Manager – Oxford Martin School
Feng Zhou – Research Fellow – Oxford Martin School

Brokers
Jayant Khadilkar – Partner – Tiger Risk Partners
Vladimir Kostadinov – Associate – Tiger Risk Partners
Heather Roscoe – Associate – Tiger Risk Partners
David Simmons – Managing Director Analytics – Willis

Consultancy
Domenico del Re – Director – PwC

General Insurance
Mark Christensen – Head of Catastrophe Management – ACE European Group
JB Crozet – Head of Underwriting Modelling – Amlin plc
Alan Godfrey – Head of Exposure Management – ASTA Managing Agency Ltd
Matt Harrison – Syndicate Exposure Manager – Hiscox
Andrea Hughes – Delivery Manager, Underwriting Modelling – Amlin plc
Andrew Leach – Head of Catastrophe Modelling, Europe – Travelers Insurance
Alan Milroy – International Property Catastrophe Underwriter – XL Catlin
Catherine Pigott – Natural Perils Actuary – XL Catlin
David Singh – Underwriting Exposure Manager – Amlin plc
Gemma Smyth – Exposure Manager – Charles Taylor Managing Agency Ltd
Stav Tsielepis – Vice President, Risk Management Actuary – Arch

Model Vendors
Gabriela Chavez-Lopez – Account Director – CoreLogic
Andrew Coburn – Senior Vice President – Risk Management Solutions
Shane Latchman – Senior Manager, Research and Client Services Group – AIR Worldwide
Milan Simic – Senior Vice President and Managing Director – AIR Worldwide
Mohammed Zolfaghari – Director – Cat Risk Solutions Ltd

Life Insurance
David Kendix – Head of Insurance Risk & Model Oversight – Prudential plc

Regulator
Dimitris Papachristou – Chief Actuary (Research), General Insurance – Bank of England

Rating Agency
Miroslav Petkov – Director – Standard and Poor’s Ratings Services


Introduction

1.1 Purpose

The purpose of this White Paper is to help the readers understand the systemic risk associated with Modelling practices within the insurance industry.

The term “Systemic Risk” is often used when describing the Global Financial Crisis of 2007-2008. The catalyst for this financial meltdown was the bursting of the U.S. housing bubble which peaked in 2004, causing the value of securities tied to the U.S. housing market to plummet and damage financial institutions on a global scale. Core to the magnitude of the losses experienced by these events were the Modelling assumptions upon which these securities were valued, the extent to which these assumptions were applied across markets and the sensitivity of these assets’ values to those assumptions – assumptions that turned out to be unreliable. These Modelling shortcomings took on a Systemic nature because the whole mortgage-backed securities industry was using more or less the same models, as they had been “institutionalised” by credit rating agencies. When the model assumptions began to fail, the economic effects further amplified the model failure and losses.

This paper is specifically focused on the practical understanding of Systemic Risk of Modelling and the development of practical solutions for managing such risk within the insurance industry. Attempts to understand and manage Systemic Risk can be considered to fall into four categories:

1. Existence of risk: this is the process whereby we acknowledge, quantify and qualify the elements and drivers of systemic risk which occur, or could potentially occur, within the insurance industry’s modelling practices.

2. Observation of risk: the process whereby Systemic Risk factors can be observed and tracked over time, to ensure that the process of Systemic Risk Management is a proactive rather than reactive mechanism.

3. Risk triggers: the insurance industry is founded on taking on risk, and the existence of risk is core to its operation. Risk triggers, however, act as a mechanism for indicating where risk levels may be approaching or exceeding agreed risk tolerance levels.

4. Mitigation of risk: certain risk factors may be diminished at source or managed by underwriting or operational practices or processes. Where this is not the case, it may be possible to minimise the potential effects of such risk via arbitrage.

This White Paper aims to provide the reader with:

• A ‘framework’ to understand the Systemic Risk of Modelling and build a Risk Management process for it; and

• The design of a ‘Risk Scorecard for the Systemic Risk of Modelling’ as a practical way to measure and raise awareness about Systemic Risk of Modelling within organisations.

Our target audience is practitioners in the insurance industry – whether underwriters, brokers, modellers or executives – but also regulators and people in other organisations that use risk models. We are hoping that this White Paper will contribute towards developing guidelines for sustainable Modelling practices within organisations and across the industry, in particular:

• How do we manage this risk, which is behavioural and operational in nature?

• How do we educate users of model results about the potential pitfalls?

• How do we develop a sustainable, robust usage of models within the insurance industry?



1.2 Scope

The context of this report is general risk modelling in insurance. It aims at being independent of the type of insurance: model risk exists in all domains, whether data-rich life, casualty and commodity insurance models, the more data-poor catastrophe models, capital models, or other actuarial models. However, there may be wider applications: we are conscious of the growing importance of risk models in civil defence, planning, governance and forecasting. Many of the risk factors are the same, and the experience of the insurance world can be helpful to avoid the pitfalls of modelling while reaping the benefits.

1.3 The Authors

This White Paper was drafted by the Systemic Risk of Modelling Working Party, as a collaboration forum composed of academic researchers and industry practitioners.

The idea behind forming the Working Party was to bring together the ‘best minds’ in the business with the ‘best minds’ in academia, in order to tackle the complex and significant issue of Systemic Risk of Modelling.

Whilst the Working Party has representation across many parts of our industry (including general insurance, life insurance, brokers, model vendor, regulator, rating agency, actuarial consultancy), there is a bias in experience amongst its members towards London Market Catastrophe Modelling.

Contributions have been made based on the Chatham House Rule and do not reflect the views of the participating organisations.

1.4 What is the Systemic Risk of Modelling?

The Modelling Process

A model is a representation of a selected part of the world, often based on a theory of how it works (or appears to behave).

The Modelling Process is the activity of building a model to fit the world, but also the use of this model to learn about and predict the part of the world that is being modelled.

Models are part of larger processes of decision making, organisation and explanation. The actual mathematics and software are not the only relevant part: how they are being used can matter far more for the outcomes we care about.

[Figure: The Modelling Process – diagram showing Risk Characteristics, Model Inputs, Parameters, Methodology, Controls, the Model, Modelling Results, Metrics, Narrative, Recipients, Model Users and Model Makers.]


The Modelling Process for a risk model has several components. For each component, Example 1 is drawn from a General Insurance Catastrophe Model and Example 2 from a Life Assurance Internal Model.

Risk Characteristics
Definition: Known or assumed properties of the risk being modelled.
Example 1: Apartment on the 25th floor of a tower block.
Example 2: Policyholder aged 50 with known medical problems.

Model Inputs
Definition: Information on the risk used by the model.
Example 1: Street address, sums insured (buildings, contents and BI), deductibles.
Example 2: Demographic information (age, gender), health information, location, contract type, deductibles.

Parameters
Definition: Elements of the model which define results for a given set of inputs.
Example 1: Probability distribution of hazard rate, intensity and location, damage functions, correlations between perils.
Example 2: Mortality risk as a function of class, distribution and correlation structure of external factors.

Methodology
Definition: The theoretical approach and assumptions combined to build a model, and how the model derives the results from the inputs.
Example 1: Stochastic (Monte Carlo) simulation models.
Example 2: Actuarial models based on mortality tables.

Controls
Definition: Methods to plan and validate the model creation and usage.
Examples 1 and 2: Dependent and independent model validation, audits, peer reviews and back testing.

Modelling Results
Definition: Estimates of the risk/price probability distribution as a function of the model inputs.
Example 1: Average annual losses, Exceedance Probability Curves.
Example 2: Profit and loss distribution, Solvency Capital Requirement.

Metrics
Definition: Measures of the key modelled output used for risk management and monitoring.
Example 1: US Windstorm 1-in-200 VaR, 1-in-100 TVaR.
Example 2: Profit and Loss 1-in-200 VaR.

Recipients
Definition: Stakeholders that make use of the modelling results directly or indirectly.
Examples 1 and 2: Underwriters, brokers, actuaries and regulators.

Model Users
Definition: People responsible for operating the Model, feeding Model Inputs and extracting/analysing Modelling Results.
Example 1: Catastrophe Modellers within an insurance organisation.
Example 2: Actuaries running the Internal Model.

Model Makers
Definition: People responsible for the Methodology and Parameters in the Model.
Example 1: Vendors of Catastrophe Models, Natural Hazard Experts.
Example 2: Actuaries specialised in building the Internal Model.

Narrative
Definition: Subject matter expert review, commentary and analysis.
Example 1: Board packs, model change commentary.
Example 2: ORSA, qualitative change analysis.

We will use this “canonical” Modelling Process throughout the White Paper.
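To make the Modelling Results and Metrics rows more concrete, the short sketch below derives an Average Annual Loss, one point on an Exceedance Probability curve, a 1-in-200 VaR and a 1-in-100 TVaR from a set of simulated annual losses. The loss distribution and all numbers are invented purely for illustration and do not come from any real model.

```python
# Minimal sketch: turning simulated Modelling Results into Metrics.
# The annual losses are invented (lognormal) - illustration only.
import numpy as np

rng = np.random.default_rng(0)
annual_losses = rng.lognormal(mean=15.0, sigma=1.2, size=100_000)

# Average Annual Loss (AAL): the mean of the simulated annual losses
aal = annual_losses.mean()

# One point on the Exceedance Probability (EP) curve: P(annual loss >= 50m)
p_exceed_50m = (annual_losses >= 50e6).mean()

# 1-in-200 VaR: the loss exceeded with probability 1/200 (99.5th percentile)
var_200 = np.quantile(annual_losses, 1 - 1 / 200)

# 1-in-100 TVaR: the average loss in the worst 1% of simulated years
var_100 = np.quantile(annual_losses, 1 - 1 / 100)
tvar_100 = annual_losses[annual_losses >= var_100].mean()

print(f"AAL: {aal:,.0f}   P(loss >= 50m): {p_exceed_50m:.4f}")
print(f"1-in-200 VaR: {var_200:,.0f}   1-in-100 TVaR: {tvar_100:,.0f}")
```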



Benefits of Modelling

The benefits of quantitative models are undeniable, which explains the speed of their adoption by the market and its regulators. By providing a consistent, informed assessment of the risks within our business, quantitative models have helped (re)insurers on several fronts:

• Risk Management: the ability to manage risk on a probabilistic basis, and compare the riskiness of very different types of exposures (e.g. life assurance vs. property catastrophe);

• Portfolio Management: the ability to measure the risk-return profile of the current portfolio, and produce alternative “what if” scenarios;

• Technical Pricing: the ability to measure the expected cost associated with a specific contract, and compare the relative value of alternative policy structures.

These benefits have helped to partially “de-risk” the business of (re)insurance by providing a control framework, thereby lowering our “cost of capital” and enabling more affordable (re)insurance in the market.

Systemic Risk

Systemic Risk can be defined in many ways, but a useful rough definition is “risk that happens in a system because of the way its parts interact, rather than faults in the parts themselves.” The system is vulnerable to self-reinforcing joint risks that can spread from part to part, affecting the function of the entire system, often with massive real-world consequences1.

A key feature that emerges is that parts of a system that individually may function well become vulnerable to a joint risk when connected, leading to a spread in risk that potentially affects the entire system.

Systemic risk presents a unique challenge because, unlike other risks, adaptation and risk mitigation (including regulation) are not separate from the system, and can actually increase the systemic risk. Additionally, much of the risk comes from the structure of the system, which is often constrained, making strong changes infeasible.

Footnote

1 Anders Sandberg, Nick Beckstead, Stuart Armstrong. Defining systemic risk. In Systemic Risk of Modelling, report from the FHI-Amlin Systemic Risk of Risk Modelling Collaboration. Oxford University. 2014.


Tulipmania

By the 1620s tulips had become fashionable in the Netherlands, with a high demand for bulbs from rare varieties but also uncertainty in their future value. One of the first futures markets emerged, first for hedging, but soon speculators joined the gardeners. The tulip futures were less strongly regulated than normal investments or insurance and had few protections. Speculation in rare bulbs expanded and peaked in 1634-1637. At its peak, single bulbs could sell for the price of a building. In February 1637 tulip prices collapsed, and in April the authorities suspended all the futures contracts. While the severity of the event has been contested, it was one of the first documented financial bubbles. The bubble was partially caused by normal speculation – people hoping to make a windfall and crowding in when they saw their neighbours apparently getting rich – but also through inexperience with the new financial instruments.*

* The classic account of Tulipomania is Mackay, C. (1841), Chapter 3: Tulipomania. In Memoirs of Extraordinary Popular Delusions and the Madness of Crowds, London: Richard Bentley. For one modern economist’s view, see Day, C. C. (2004). Is There a Tulip in Your Future?: Ruminations on Tulip Mania and the Innovative Dutch Futures Markets. Journal des Economistes et des Etudes Humaines, 14(2), 151-170.

Examples of Systemic Risk

• Financial asset price bubbles: asset prices begin to increase at an accelerating pace, triggered by fundamental or financial innovation. Investors pile in to take advantage, further increasing the price. It seems rational for agents to join in, especially if they expect that they can pull out in time or receive bailouts. Speculation and overconfidence ensue until the price crashes, often impacting the economy outside the financial sector2. The fact that bubbles have occurred numerous times in the past – from the Dutch tulip bulb mania of 1634-37 to the recent US housing bubble – does not strongly deter new bubbles.

2 Brunnermeier, M. K., & Oehmke, M. (2012). Bubbles, financial crises, and systemic risk (No. w18398). National Bureau of Economic Research.



3 Geneva Association. (2010). Systemic risk in insurance: an analysis of insurance and financial stability. Special Report of the Geneva Association Systemic Risk Working Group, March.

4 Systematic risk is distinct from systemic risk. A systematic risk is a simultaneous shock to the whole market that causes adverse effects: the cause is external to the system rather than internal.

5 There are also possible feedbacks. The model can cause actions that change the risk itself, making the model obsolete and inaccurate (for example, the Year 2000 problem triggered remedial actions that reduced the risk, while bond rating models in the CDO market contributed to a bubble that changed the underlying risk for the worse). It can also cause actions that affect the future reliability of risk models (for example, by changing how risks are investigated and modelled, or how models are seen).


• Collapse of the Atlantic northwest cod fishery: cod fishing around Newfoundland had historically been very profitable. It was in each individual fisher’s best interest to catch as much as possible. From the 1950s onwards new fishing technology arrived that enabled more efficient trawling, boosting yields – but also depleting cod stocks and disrupting the essential ecosystem. Uncertainties of the underlying situation and strong vested interests made remedial action too slow, and in 1992 cod numbers fell to 1% of their previous levels. This low state has been relatively stable and has only slowly recovered.

[Figure: Time series for the collapse of the Atlantic northwest cod stock, capture in million tonnes, with Canadian data presented separately. Based on data sourced from the FishStat database, 2 May 2012. Author: Epipelagic.]

Systemic Risk of Modelling in Insurance

Most discussion of systemic risk in the financial sector tends to focus on how institutions become connected through economic ties and how this can cause contagion. The insurance industry is safer than most financial industries because of its nature3. However, there is one area that is often overlooked as a source of systemic risk: risk modelling itself.

A risk model intends to make probability estimates of a risk, which will then be used to make decisions. If it underestimates the probability of a risk, actions will be taken in false confidence. If it overestimates risk, resources will be misallocated. If it causes correlated actions across an organisation or market, systematic4 or systemic risk emerges5.


Model Sources of Systemic Risk of Modelling

The large investments required to build sophisticated representations of (re)insurers’ risks, and the scalability of quantitative models, point to significant economies of scale from centralising and outsourcing their development to third-party vendors.

For instance, many (re)insurers license proprietary Economic Scenario Generators or Catastrophe Models from third-partyvendorswhogeneratetheinvestmentintalentandR&D.

While the financial benefits of outsourcing model-building to third-party vendors are often clear, the associated “outsourcing of cognition” presents some challenges in itself:

• The divergence in principal-agent interests might lead third-party vendors to be influenced by priorities other than modelling quality (e.g. production costs, sales potential, social and political context etc.);

• (Re)insurers have reduced incentives to invest in modelling knowledge and talent, to the point that their decision-makers could become over-reliant on the “autopilot” and unable to critique or even function without it;

• The oligopolistic nature of markets with large economies of scale allows the few players to be more authoritative as central sources of knowledge than is justified by the quality of their models alone.

Unfortunately, the more the industry tends to rely on a single source of knowledge, the smaller the upside when it gets things right and the greater the downside when it gets things wrong (as, one day, it inevitably will).

Organisational Sources of Systemic Risk of Modelling

Historically, regulatory frameworks have not interfered with (re)insurers’ freedom to select and use models as they deemed fit. The regulatory rules for setting minimum capital requirements complemented the internal risk management perspective with an independent view and an additional layer of defence.

The Internal Models in the UK ICAS and the coming EU Solvency II regimes (mimicking the Basel II regulations for financial institutions by introducing Internal Models for regulatory capital setting) have, however, taken a widely different stance. In essence, the setting of minimum capital requirements is outsourced to the (re)insurer if its Internal Model is approved by the regulator:

• Internal Model Approval requires the regulator to be comfortable with the model, which could limit the range of potential approaches and possibly introduce “asymmetrical error checking” (i.e. mostly scrutinising the models which do not fit expectations or preferences);

• The Documentation Standards require sufficient detail to enable the Internal Model to be justifiable to a third party, possibly restricting reliance on those areas of expert judgment for which documentary evidence is sparse, thereby slowing down the adoption of innovative approaches; and

• The Use Test requires that the Internal Model informs risk management and key decision processes, which is likely to restrict reliance placed on outputs from alternative models within the organisation.

The risk for our industry is that we are unconsciously dis-incentivising the emergence of alternative approaches, which are vital for a fully functional evolutionary process.

The result is that, as an industry, we have become dependent on the same models used within and across organisations. This means that we are putting all our eggs in the same basket when it comes to Modelling, and we are therefore exposing ourselves to “rare but extreme” model failures.



Behavioural Sources of Systemic Risk of Modelling

The existence of model error, popularised by George Box’s famous quote “all models are wrong but some are useful”, is reasonably understood by practitioners in the market.

Our industry is, however, much less familiar with the risks arising from the behavioural aspects of the modelling process; or, in other words: how, in human hands, “all models are wrong, but even the useful ones can be misused”.

Quantitative models have the significant advantage of scaling up with technological progress. Unlike expert judgment, which is limited in speed and footprint, models become faster and more advanced as technology improves. Often, however, the gains in calculation speed are translated into a higher reporting frequency without necessarily a full appreciation for the critical, qualitative difference between, for example:

• A Chief Underwriting Officer receiving a quarterly report on the risk profile of the portfolio, supported by qualitative commentary from his Chief Pricing Actuary highlighting the limitations of the analysis; and

• The same Chief Underwriting Officer accessing the same figures daily, on a self-service basis, at the press of a button.

These two types of reporting have purposes adapted to different tasks. To draw a parallel with Daniel Kahneman’s “Thinking, Fast and Slow”: the slower and more deliberative approach is better adapted to more strategic situations, while the faster, instinctive reporting is best suited to monitoring contexts.

Without awareness of this distinction, the temptation is great, however, for the Chief Underwriting Officer to rely on the faster, automated reporting for strategic decisions, leading to a “dumbing down” of the decision-making process as a result of technological automation.

Unlike expert judgment, quantitative models are based on transparent assumptions, which can be adapted in order to improve predictive power or adapt to environmental changes over time. Similarly, a model identified as not fit-for-purpose would quickly be disregarded if it did not adapt appropriately.

This evolutionary process is a powerful force, which has helped our industry get better and better models over time. But we must recognise that the institutionalisation of quantitative models can lead to structural groupthink and “limit the gene pool” by reducing the potential for model diversity.


1.5 Structure of the White Paper

• The Framework section is an explanation of the key drivers of the Systemic Risk of Modelling, split into modelling, organisational and behavioural sources of risk. It provides the foundations for the design of the scorecard.

• The Scorecard section provides details on the scorecard methodology, the selection of factors and calibration. It also provides an outlook for future development and usage of the scorecard methodology for the Systemic Risk of Modelling.

• The Guidance for Better Practices section draws from the Framework and Scorecard sections, in order to offer suggestions for managing the Systemic Risk of Modelling within organisations and across the industry.


Framework

Here, we discuss each factor in turn. For a fuller discussion and examples, see Appendix A.


2.1 Overview

Systemic risks in modelling can arise from a number of factors. Broadly speaking, these factors can be broken down into three categories: 1) risks originating from models, 2) risks arising from organisational structures, and 3) risks originating from behaviour.

The first category deals with the existence and effects of risks arising from the limitations of models, even with perfectly rational people and organisations. This includes how data is gathered and used in models, how the model fits reality and our expectations, and, more generally, how well the limitations of the model are understood and addressed.

The second category deals with effects on the modelling process that emerge from how organisations and institutions are set up. This ranges from how information flows within a company and how and when monitoring is carried out, to competitive biases in the insurance or model markets, and how market regulation can sometimes trade one kind of systemic risk for another. These can result in systemic risks from correlated behaviour when small sets of models are used, unwarranted confidence in models, or management not fully understanding the limitations of the modelling process.

The third category deals with the human factors that could complicate matters even with perfectly accurate models. The most common factors referred to here are human cognitive biases; shortage of training, expertise or experience; and behavioural inclinations to use models in ways that they are not designed for. Here systemic risks can emerge from people independently making the same mistakes, and from being overly dependent on models or failing to challenge them.

In practice there will always be a fair amount of overlap and interaction between the three categories. For example, the availability of a limited number of acceptable models (organisational) may constrain the methods used in modelling (model) and make underwriters expect all models to behave that way (behavioural); this could also lead to resistance to using other acceptable but less popular methods that are actually more appropriate (behavioural). As a result the market could potentially have an unduly limited view of risk and a lower threshold to badly modelled risks.

[Figure: The three categories of risk factors – Modelling Risk (Data Risk, Model Risk, Process Risk), Organisational Risk (Regulation, Risk Diversification, Competitive Environment, Model Markets, Internal Risks) and Behavioural Risks (Autopilot, Cognitive Bias).]



6 Box, G. E. P., and Draper, N. R. (1987). Empirical Model Building and Response Surfaces. John Wiley & Sons, New York, NY. p. 424.
7 Jarzabkowski, P., Bednarek, R., & Spee, P. (2015). Making a Market for Acts of God: The Practice of Risk Trading in the Global Reinsurance Industry. Oxford University Press. p. 71.

2.2 Modelling Risk

All models are wrong, but some are useful. 6

For models to be used correctly, the limitations and assumptions of the models need to be well documented and understood. Gaining an understanding of how sensitive model Parameters and outputs are to data, comparing modelled output with expectations, and being aware of the process of handling and interpreting modelling information are examples of how a model can be used more correctly.

In this section we delineate three main factors of risk that originate from models: data risk, model risk, and process risk.

Data Risk

Marketization and its associated models are savage masters – they push forward in a single direction. The drive to increase modellability of deals means that Property Catastrophe underwriters receive, year after year, ever more granular data, and request ever more detailed information.

The rise of vendor models has been intrinsically linked to an increase in detailed statistical data: models are both dependent on and shape the production and consumption of data.7

Data risk is the prospect of modelling error arising from deviations in the quality of data input compared to that expected by the model; the significance of this risk is therefore dependent on the completeness, accuracy and granularity of the data inputs compared to those used to build and validate the model, and on the sensitivity of the model to deviations in the data presented. Three kinds of data risk in particular pose common problems:

• Data Sources: this represents the amount of, the interpretation of, and the level of access to the underlying data used to build and calibrate a model. Bad data can obviously lead to miscalibrated models, but missing data can also quietly lead to mis-estimates of risk that are hard to detect – especially when it is handled the same way across companies. For instance, catastrophe models typically use large amounts of claims data to build specific damage functions, but if they treat “unknown entries” by assigning the average characteristic for a geographic area or exposure type, some classes of objects may be systematically mis-estimated (see the sketch after this list).

• Data Translation: this represents the level of work-around and/or shoehorning required to translate the underlying characteristics into the prescribed modelling data format. There is a risk of a systematic distortion that is invisible at later stages, making people overconfident in the calibration of the model. For example, if it is not possible to encode the type of coverage because the model was originally designed for a more specific domain, disparate objects may be bundled together into an apparently homogeneous type or mislabelled as other types; this data will then potentially distort model estimates of the content of the portfolio.

• Data Granularity: this represents the quality of data presented to the model compared to that expected by the model. For instance, for a catastrophe model, geocoding locations at a coarse level such as US County level would provide a very poor input, or make assessment against location-sensitive perils like flood impossible.
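The Data Sources bullet above can be illustrated with a small sketch. The class names, damage ratios and imputation rule are invented assumptions; the point is only that a common way of handling “unknown” records biases the modelled risk of one class in the same direction for every firm that applies it.

```python
# Illustrative sketch (invented numbers): imputing "unknown" attributes with an
# average characteristic systematically mis-estimates risk for some classes.
import numpy as np

rng = np.random.default_rng(1)
true_damage = {"timber": 0.30, "masonry": 0.10}       # assumed damage ratios
portfolio = rng.choice(["timber", "masonry"], size=10_000, p=[0.2, 0.8])

# Suppose half of the timber records arrive with construction type "unknown"
reported = np.where((portfolio == "timber") & (rng.random(10_000) < 0.5),
                    "unknown", portfolio)

# Imputation rule: unknown records get the portfolio-average damage ratio
avg_damage = np.mean([true_damage[c] for c in portfolio])
modelled = np.array([true_damage.get(c, avg_damage) for c in reported])
actual = np.array([true_damage[c] for c in portfolio])

timber = portfolio == "timber"
print("timber damage ratio, modelled:", round(modelled[timber].mean(), 3),
      "actual:", round(actual[timber].mean(), 3))
# The timber class is consistently under-estimated - and every firm applying
# the same rule under-estimates it in the same direction.
```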


Thai Floods and CBI

Risks can arise when a location’s characteristics are easily translated into the model, but certain external factors are not considered in the loss estimate. When modelling properties in a catastrophe model, contingent business interruption (CBI) is not taken into account but could be a source of substantial losses.

The 2011 Thai floods illustrate this: the heaviest flooding in 50 years affected the country severely, and in particular many industrial estates had been built on former rice fields (and hence floodplains) and were hit simultaneously. This particularly affected the global hard drive market (in 2011 Thailand accounted for 25% of the global market), and affected the Japanese economy badly. Lloyd’s estimated itself to be liable for £1.4 billion[i]. Here the direct damage was compounded by spreading, unmodelled CBI.

[i] http://www.theguardian.com/business/2012/feb/14/lloyds-thailand-flooding-2bn-dollars

Spreadsheet errors

Spreadsheets are widespread tools in business and elsewhere, but have an astonishingly high rate of errors. According to studies[ii], close to 90% of all spreadsheets have errors – many with significant effects on business. 17% of large UK businesses have suffered financial loss due to poor spreadsheets, and far more have wasted time (57%) or made poor decisions (33%) due to spreadsheet problems.[iii] The list of horror stories is long,[iv] including budgeting errors running into billions of dollars, mistaken hedging contracts, hundreds of millions of dollars of client losses to investment firms, bank failures, and so on. Despite the data, developers and users are often overconfident in the correctness of their spreadsheets, which contributes to the problem.[v]

[ii] Panko, R. R. (1998/2008). What we know about spreadsheet errors. Journal of End User Computing, 10, 15-21. Revised 2008. http://panko.shidler.hawaii.edu/SSR/Mypapers/whatknow.htm
[iii] F1F9 (2015). “Capitalism’s dirty secret: a research report into the uses & abuses of spreadsheets”. http://info.f1f9.com/capitalisms-dirty-secret
[iv] http://www.eusprig.org/horror-stories.htm, http://www.cio.com/article/2438188/enterprise-software/eight-of-the-worst-spreadsheet-blunders.html
[v] Panko, R. R. (2009). Two Experiments in Reducing Overconfidence in Spreadsheet Development. Evolutionary Concepts in End User Productivity and Performance: Applications for Organizational.


Image source: https://en.wikipedia.org/wiki/2011_Thailand_floods#/media/File:Flooding_of_Rojana_Industrial_Park,_Ayutthaya,_Thailand,_October_2011.jpg



Model Risk

Additional risk can obviously result from flaws in the model itself. No amount of data or clever handling of uncertainty can compensate for an overly simplistic (or just incorrect) model. The systemic risk comes from many actors relying on models with similar mistakes, or not recognising limitations in the same way.

The biggest components of this kind of risk include:

• Model Diversity: this represents the level of diversification (or concentration) in modelling error resulting from the aggregation of many modelling components. This includes the number of explicit models in use, but also the number of methodologies, internal components or alternative views that contribute to the risk estimates. Different models fitted to the same historical data, and insurers using the same models, will have correlated model errors, causing systemic risk if they are not countered by good model management, expertise and an individual view of risk.

• Model Credibility: this represents the extent of the knowledge (or uncertainty) of the real underlying phenomena captured in the model, and how much to trust it. Systemic risk can emerge when many users have inaccurate estimates, especially when driven by apparent accuracy, solid data and market buy-in. For instance, when modelling very extreme but rare events (such as a nuclear meltdown, the probability of which is estimated to be of the order of one in a million per year8), Model Risk can become much greater than the probability of the real underlying phenomenon9, yet the model itself and practical experience will give little indication that there is reason to doubt it (see the worked illustration after this list).

• Model Fitness For Purpose: this represents the applicability of the model to the business under consideration. For instance, a vendor model can be tailored to provide a better fitness-for-purpose through an extensive validation process and associated adjustments to the model. However, some things are too complex to model, and should perhaps not be modelled in the first place, since even a tailored model would be misleading. Understanding enough about the environment and intended use can help determine if modelling is fit for the purpose.
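The Model Credibility point can be made concrete with a short worked calculation. All probabilities below are assumptions chosen for illustration (the conditional rate is loosely based on the crude historical figure quoted in footnote 8): even a small chance that the model is badly wrong can dominate the modelled probability of a very rare event.

```python
# Worked illustration with assumed numbers: for very rare events, the chance
# that the model itself is wrong can dominate the modelled probability.
p_event_given_model_ok = 1e-6      # modelled annual probability (e.g. meltdown)
p_model_wrong = 0.01               # assumed chance the model is badly flawed
p_event_given_model_wrong = 2e-4   # assumed rate if the model is wrong (cf. footnote 8)

p_event = ((1 - p_model_wrong) * p_event_given_model_ok
           + p_model_wrong * p_event_given_model_wrong)
print(f"{p_event:.2e}")  # ~3e-06: dominated by the model-error term, not the model output
```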

Model Diversity and Aggregation

There is an extensive literature in machine learning and statistics on ‘ensemble methods’. The general idea is to train many diverse predictors on the same dataset and then aggregate the predictors to make an overall prediction. Aggregated predictions tend to outperform methods that train a single predictor. “Boosting” and “random forests” are two powerful and widely used ensemble methods.[ix] (In the Netflix $1m prize, the winners ended up being ensembles of the early leaders.) An empirical survey of ensemble methods[x] suggests that most of the gains come from combining the first few predictors, but that some ensemble methods continue to get large gains up to 25 predictors.

[ix] Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10), 78-87. https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf
[x] Opitz, D., & Maclin, R. (1999). Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, 169-198.
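As a minimal, assumption-laden sketch of the ensemble idea described in the box above, the code below fits several simple polynomial predictors to bootstrap resamples of an invented dataset and averages their predictions. The data, model form and predictor count are arbitrary choices made only to show the aggregation step, not a recommendation of any particular method.

```python
# Sketch of ensemble aggregation on invented data: averaging several predictors
# trained on bootstrap resamples usually beats relying on a single one.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.size)   # noisy "truth"

predictors = []
for _ in range(25):
    idx = rng.integers(0, x.size, x.size)          # bootstrap resample
    predictors.append(np.poly1d(np.polyfit(x[idx], y[idx], deg=5)))

x_test = np.linspace(0, 1, 200)
y_true = np.sin(2 * np.pi * x_test)

single_mse = np.mean((predictors[0](x_test) - y_true) ** 2)
ensemble = np.mean([p(x_test) for p in predictors], axis=0)
ensemble_mse = np.mean((ensemble - y_true) ** 2)

print(f"single predictor MSE:   {single_mse:.4f}")
print(f"25-predictor ensemble:  {ensemble_mse:.4f}")   # typically smaller
```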

8 IAEA (2004). Status of advanced light water reactor designs 2004, IAEA-TECDOC-1391, International Atomic Energy Agency, Vienna, Austria. It is worth noting that there have been three major reactor accidents in civil nuclear power (Three Mile Island, Chernobyl, and Fukushima) over 15,000 cumulative reactor-years. This suggests a rate of 2 in 10,000 per year, significantly above the modelled aim.

9 Ord, T., Hillerbrand, R., & Sandberg, A. (2010). Probing the improbable: methodological challenges for risks with low probabilities and high stakes. Journal of Risk Research, 13(2), 191-205.


Computational diversity

Combining multiple models is sometimes used in high-reliability computing. In N-version programming, several implementations of the same software specification are written by independent teams. These programs are then run in parallel, with output corresponding to majority decisions. Ideally, the independence of implementation in this approach greatly reduces the chance of identical software faults[xv]. Unfortunately, in practice it turns out that independent programmers actually make similar errors in the same places[xvi], reducing the utility of the approach.

[xv] Chen, L., & Avizienis, A. (1978, June). N-version programming: A fault-tolerance approach to reliability of software operation. In Digest of Papers FTCS-8: Eighth Annual International Conference on Fault Tolerant Computing (pp. 3-9). Avizienis, A. A. (1995). The Methodology of N-Version Programming. Software Fault Tolerance, edited by Michael R. Lyu.
[xvi] Knight, J. C., & Leveson, N. G. (1986). An experimental evaluation of the assumption of independence in multiversion programming. IEEE Transactions on Software Engineering, (1), 96-109.
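The sketch below shows only the majority-voting step of N-version programming on a toy ‘specification’. The three versions are hypothetical stand-ins: two are written correctly, while one shares a misreading of the specification, echoing the finding above that independent teams often err in the same places.

```python
# Toy N-version majority vote (hypothetical versions of the same specification).
from collections import Counter

def version_a(x):  # correct implementation of the (toy) spec: add 5% loading
    return x * 1.05

def version_b(x):  # independently written, also correct
    return x * 1.05

def version_c(x):  # misreads the spec: applies a 50% loading
    return x * 1.50

def majority_vote(x, versions):
    votes = Counter(round(v(x), 6) for v in versions)
    answer, count = votes.most_common(1)[0]
    return answer, count

print(majority_vote(100.0, [version_a, version_b, version_c]))  # (105.0, 2)
# If two of the three versions shared the same fault, the vote would return the
# wrong answer just as confidently - which is why independence matters.
```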

Process Risk

Some risk from the modelling process results not from flaws with the models themselves but from their inappropriate operation within the organisation. The Modelling Process involves selecting data to build models, the actual model building, how the models are validated and used, and how the results are then used to guide action: a model can be correct but integrated in a process that produces systemic risk. As we'll see, process risk by its nature spans modelling risk proper, organisational risk, and behavioural risk.

Examples of risk factors associated with Process Risk include:

• Subjective Judgment: this represents the level of unjustified tweaking, typically resulting from over-confidence or anchoring biases. For instance, underwriters could be anchored on irrelevant data, outdated information, or other biasing factors (see the behavioural factors in section 2.4).

• Resources, Expertise & Experience: this is the modelling team’s ability to understand the content of models and their limitations. Much of this is based on having experience with the models: when they work as they should, when (and why) they fail, and when unexpected real-world situations occur. Systemic risks can occur when teams have too limited expertise pools, for example by lacking diverse backgrounds or proper feedback, and hence tend to make similar or naive assumptions.

• Controls & Consistency: this represents the ability to prevent operational risks from producing unintended results from the model. While good Controls may reduce operational risks to organisations, they can inculcate similar practices that make the weaknesses of the models similar across the industry. For instance, asymmetric error checking driven by the natural pressures of the market could produce a systemic bias of errors (see the behavioural factors category for further detail).

“Risk management is about people and processes and not about models and technology.” – Trevor Levine



10 Jarzabkowski, P., Bednarek, R., & Spee, P. (2015). Making a Market for Acts of God: The Practice of Risk Trading in the Global Reinsurance Industry. Oxford University Press. p. 192.


2.3 Organisational Risk

“Nested rationality sheds light on the mundane practices through which models become accepted, hubris constitutes the norm, stress is celebrated, and traders become entangled in a dense relationality of miscalculation. We show that these everyday practices constitute the collective practice of the market and, hence, are the crux of systemic health – or systemic risk – within markets.”10

To gain a fuller understanding of risk in the industry, we need to understand models not in isolation but as part of a broader system. Models are used within – and produced by – organisations embedded inside markets, subject to competitive pressures and regulatory oversight. How information flows between people in an organisation can be as important for the eventual outcome as how data is transferred between parts of a model: if transparency, correctness checks, incentives or feedback are problematic, the organisation may make its members use the model in a faulty way. Competition between insurance companies can favour portfolio optimization to fit models, the use of certain models, or market cyclicity that makes company behaviour and risk more correlated. Regulation, which generally aims at reducing systemic risk, can inadvertently lead to rules that make companies behave similarly or reduce model diversity.

In this section we delineate four main risk factors arising from organisation: Regulation, Competitive Environment, Model Markets, and Internal Risks.

Lessons from Freestyle Chess

In ‘freestyle chess’, any combination of human or machine decision-making process is allowed. Currently the champions of freestyle chess are not computers, but rather human-machine combinations (called “centaurs”). In 2014’s Freestyle Battle the cyborgs beat the best chess AI 53-42. Human guidance thus adds significant value. A good freestyle player knows when to listen to the computer, when to override it, and how to resolve disputes between different chess programs. Learning how to exploit the complementary abilities that the best chess programs have is therefore a distinctive cognitive skill.



Regulation

Firms do not necessarily have final control over which models they use and how they use them. Considering their wide reach and influence, regulators naturally have the potential to create or mitigate systemic risk. Individual companies will be impacted to varying degrees depending on their particular regulator, and the impact that a regulator has will depend on its ability to act in a way that avoids introducing systemic risk and is acceptable to markets and other stakeholders. Just as insurance companies aim to use models to manage their risks, regulators have more or less explicit models for managing market risks – subject to exactly the same uncertainties and risks as the other risk models11.

Determining optimal regulatory practices vis-à-vis systemic risk is a subtle and difficult matter. For instance, if the regulator accepts a higher risk, the lower capital requirements across the market may create a systemic risk. On the other hand, enforcing high capital requirements compared to other countries may lead to firms changing domicile or becoming uncompetitive, in turn damaging the efficient operation of a sector on which the wider economy depends. Indeed, the nuance of the role regulators play in determining systemic risk is a strong motivation for developing a robust metamodel.

Risk Diversification

In addition to having a diverse set of models, firms also need to have a diversified set of risks. While minimising correlation between risk holdings is always a key consideration for an insurance firm, the need to do so becomes even more acute when considering the systemic risk of modelling. Model errors and structural uncertainty might mean that certain risks are underestimated. By diversifying risk, one can ensure not only that a single disaster wouldn't bankrupt the company, but also that a single set of model errors or poor estimates wouldn't either.

However, individual firms diversifying their assets is not always sufficient to ensure protection against systemic risk. All firms have an incentive to diversify their risk, yet diversity between firms can paradoxically diminish as all firms pursue similar diversification strategies. While individual firms are far less likely to fail with diversified holdings, the system as a whole becomes vulnerable to total collapse via these correlated risk connections. What works as a risk-reduction strategy for each individual firm could actually make the system as a whole more fragile. A series of studies by Lord Robert May and his colleagues found that this dynamic exacerbated the 2008 financial crisis. To what extent this dynamic impacts the insurance market has yet to be determined.

11 Haldane, A., & Madouros, V. (2012, August). The dog and the Frisbee. Bank of England. Speech given at the Federal Reserve Bank of Kansas City's 36th economic policy symposium, “The Changing Policy Landscape”, Jackson Hole, Wyoming (Vol. 31). http://www.bankofengland.co.uk/publications/Documents/speeches/2012/speech596.pdf


Uniform diversification

The inter-bank lending network is intended to reduce risk, but can under some conditions instead amplify it and produce systemic risk, such as during the 2008 financial crisis. Lord May and Nimalan Arinaminpathy modelled how liquidity shocks can spread in the network, finding how the network structure and size distribution influence its stability.

Having uniform controls can sometimes increase systemic risk. Lord May and his colleagues note that all banks have an incentive to diversify their assets. However, diversity between banks can paradoxically diminish as they all pursue similar diversification strategies. While individual banks are far less likely to fail with diversified holdings, the system as a whole becomes vulnerable to total collapse via these asset-holding connections. What works as a risk-reduction strategy for each individual bank actually makes the system as a whole more fragile.
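The toy simulation below illustrates the mechanism described in the box. Firm counts, portfolio sizes and the failure definition are invented assumptions: each firm is individually no riskier when all firms hold the same diversified portfolio, but their failures then happen together.

```python
# Toy simulation (invented parameters) of the diversification paradox:
# identical diversified holdings keep each firm safe individually but make
# joint failure far more likely than genuinely different holdings would.
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_firms, n_assets = 100_000, 10, 20
asset_losses = rng.standard_normal((n_sims, n_assets))

# Scenario A: every firm holds the same fully diversified portfolio
same = np.repeat(asset_losses.mean(axis=1, keepdims=True), n_firms, axis=1)

# Scenario B: each firm holds a different random subset of 5 assets
diverse = np.stack(
    [asset_losses[:, rng.choice(n_assets, 5, replace=False)].mean(axis=1)
     for _ in range(n_firms)], axis=1)

for name, losses in (("identical holdings", same), ("diverse holdings", diverse)):
    # A firm "fails" in the worst 0.5% of years for its own portfolio
    thresholds = np.quantile(losses, 0.995, axis=0)
    fails = losses > thresholds
    print(f"{name}: P(one firm fails) = {fails[:, 0].mean():.4f}, "
          f"P(all {n_firms} fail together) = {fails.all(axis=1).mean():.5f}")
```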


Competitive Environment

Depending on the market in which a company operates, it may be subject to varying levels of competitive pressure. A suitably competitive environment can be healthy, raising standards and breeding innovation. However, it can also allow negative effects to perpetuate, and where a significant proportion of the market is influenced in the same way this can introduce certain systemic risks. For example, extreme competitiveness can promote willingness to use smaller relative security loadings or cut corners on capital buffers in order to attract customers, increasing individual risk but also forcing less risk-taking companies out of the market or into making similar adjustments.

The focus of this particular category is the set of scenarios that would lead individual companies throughout a market to act in a way that they would not if they were isolated from the behaviour of other companies within the market, and specifically where this both reduces individual robustness and increases the likelihood of others within the market acting in the same way.

This can be broadly considered in two dimensions:

• Reputation: The extent to which a company's ability to attract and retain business is linked to their reputation or external perception can dictate how susceptible they are to making decisions based on appearance, rather than with due consideration for the risk. By its nature, this effect will perpetuate through a market as each individual attempts to either follow what is perceived to be "best practice", or attempts to continually differentiate themselves from the competition. Examples include pressure to include coverage for emerging non-modelled risks, such as Cyber; or pressure to not "rock the boat" by requesting higher data quality than a competitor.

• Irregular feedback: In a healthy competitive P&C environment, the threat of being undercut solely on price selection is naturally corrected for through rapid feedback in the form of claims. This ensures that pure risk measures between companies cannot deviate too far from reality and forces individuals to compete on efficiency, innovation and customer service. Where irregular feedback exists, the negative effects of decisions do not occur sufficiently quickly for the action to be corrected for in this way, resulting in a consistent bias to underestimate risk in order to gain business, and a market-wide race to the bottom. Examples include extreme events that follow statistical power laws; complex risks where there are multiple interpretations of outcome; or risks with high uncertainty ranges. This effect is most extreme when the consumers are likely to be poorly informed about the nature of the risk themselves, such as rare or emerging risks - typically risks that are also badly modelled.

Getting the “right” answer

One dramatic example of how pressures to get a desired result can bias a model can be found in the JP Morgan Task Force Report, detailing how billion dollar losses were partially due to spreadsheet errors lowering the estimate of VaR in Basel II models. A mistaken formula muted volatility by a factor of two and erroneously lowered the VaR[xxi]. However, this error was very much in line with the CIO's priorities to reduce VaR in 2012, and there was great pressure to implement a new and improved VaR model which was promised to produce a lowered VaR.[xxii]

[xxi] http://files.shareholder.com/downloads/ONE/2272984969x0x628656/4cb574a0-0bf5-4728-9582-625e4519b5ab/Task_Force_Report.pdf (p. 128-129). [xxii] Lisa Pollack, A tempest in a spreadsheet, FT Alphaville, Jan 17 2013. http://ftalphaville.ft.com/2013/01/17/1342082/a-tempest-in-a-spreadsheet/


Model Markets

In addition to being to some extent at the mercy of the regulator and the direct competition, firms have little direct and immediate control over the availability and quality of models built by third-party vendors. Models are built to follow market requirements, with the ability to write business driving the granularity and completeness of the model, rather than the underlying risk. Conversely, regions with poor data are hard to model. Measuring a company's risk exposure against the priorities of the wider market can be used to understand how aligned they are to this systemic risk.

Within a market that relies heavily on third-party models, the priorities of the vendors will be driven by market forces as with any other product. This can lead to over-emphasis on a handful of "top priority" risks that reach the widest audience, rather than an overall adequacy of representation for the materiality of each risk to the market as a whole.

Existing model vendors do ask their customers where to improve their models, and the response is typically where (geographically or peril-wise) they have business. New vendors don't build models nobody is likely to ask for: the cost of building models means market size will determine where models are built. A risk may go unmodelled because it is not considered material to a sufficient number of companies, or because either poor data quality or a lack of claims information increases the work required.

The result is an uneven global footprint of what is modelled, based on domestic appetite. If risks are underwritten then they tend to be modelled well, but rarer risks will be less well modelled, in addition to the model uncertainty due to lack of data and the lack of experience among underwriters in how to handle them. As a result, a significant market risk may continually be under-invested in.

Long term under-investment in a hazard model can introduce systemic risk to a market. Where no credible models exist regulators struggle to introduce appropriate monitoring; poor granularity or out-of-date models can mislead, leading to inappropriate allocation of risk across a market; and without sufficient scrutiny and feedback to continually challenge a model and improve its skill it can introduce systemic risk through behavioural effects.

Internal Risks

Systemic risk is introduced not only through external forces but also through the internal organisational structure of a firm. The old adage "no one ever got fired for buying IBM" has some resonance in this area. Whether seeking regulatory approval for one's model or even just sign-off from a management team, there seems little to be gained from adopting a model, approach or assumptions that you know to be distinct from your peers'. There are enough surveys by consultants that give visibility as to the range of tail risk assumptions for a model builder to be persuaded to toe the line. Ultimately, this can lead to industry-wide alignment of assumptions, generating the systemic risk of failure.

Models are now so complex that it is becoming ever more difficult for anyone at a senior level in the organisation to have a thorough understanding of their operation. The technical limitations of the model, as well as more generic model risks, can be communicated. However, such limitations will not provide a binary measure that says under what circumstances model outputs should be distrusted. Rather, they will suggest the situations in which its reliability or accuracy may be affected. As per the IBM adage, believing the model may be a safer move than saying that it is not to be trusted and then making a decision unsupported by the model in which your business has invested vast sums. The systemic risks of belief in model outputs were seen all too clearly during the recent financial crisis. Worse, there may exist pressures within the organisation that push towards accepting certain models despite safeguards.



Near Misses and Normal Accidents

When an accident happens in an organisation, it is often found that it was preceded by many earlier near misses that did not cause immediate harm. Such "normal accidents"[xxiii] are often overlooked and interpreted as signs that the system is truly resilient rather than being in danger. How a near miss is interpreted can depend strongly on whether it is experienced as a loss (it could potentially have been much worse) or not (the safeguards look adequate)[xxiv].

Handling this problem is fundamentally an organisational problem: high-pressure situations often lead to accepting risks and anomalies, but managers need to justify their assessment of near misses, and people who own up to or point out mistakes should be rewarded [xxv].

[xxiii] Perrow, C. (2011). Normal accidents: Living with high risk technologies. Princeton University Press. [xxiv] Tinsley, C. H., Dillon, R. L., & Cronin, M. A. (2012). How near-miss events amplify or attenuate risky decision making. Management Science, 58(9), 1596-1613. [xxv] https://hbr.org/2011/04/how-to-avoid-catastrophe

Source: By Unknown or not provided (U.S. National Archives and Records Administration) [Public domain], via Wikimedia Commons


Unit conversion disaster

Mistakes in unit conversions can be costly. One of the classic examples is the Mars Climate Orbiter, a $125 million craft that in September 1999 plunged into the Martian atmosphere and disintegrated. The cause was found to be that one piece of navigation software used American units, but interfaced with another system expecting metric units. The discrepancies were also noted by at least two navigators whose concerns were dismissed[xxvi]. These unit conversion errors can strike critical systems, but can be avoided with observational feedback.

Models need to report enough internal information so that users can recognize that something is amiss – and have their concerns investigated.


Source: "Mars Climate Orbiter 2" by NASA/JPL/Corby Waste - http://www.vitalstatistics.info/uploads/mars%20climate%20orbiter.jpg (see also http://www.jpl.nasa.gov/pictures/solar/mcoartist.html). Licensed under Public Domain via Commons - https://commons.wikimedia.org/wiki/File:Mars_Climate_Orbiter_2.jpg#/media/File:Mars_Climate_Orbiter_2.jpg

[xxvi] Board, M. I. (1999). Mars Climate Orbiter Mishap Investigation Board Phase I Report, November 10, 1999. ftp://ftp.hq.nasa.gov/pub/pao/reports/1999/MCO_report.pdf


2.4 Behavioural Risks

Because it is humans who ultimately use the models and make the most important decisions in any company, it's crucial to analyse how our own thought processes can exacerbate systemic risk. Psychologists have identified systematic biases in decision-making about probability and risk. Because we often make similar and correlated errors in judgment, there is substantial potential for the market as a whole to respond incorrectly, thereby introducing systemic risk. Generally, the insurance market will benefit by being aware of these biases, leading to a better management of risk.

In this section we analyse the Autopilot Problem and Cognitive Bias, which are two main behavioural drivers of risk. Further discussion and potential methods for risk mitigation can be found in the Experimental Section and Appendix A.

Autopilot

"When models turn on, brains turn off." – Til Schuermann

The autopilot problem emerges when there is a dependence on an automated system that has been introduced with the intention of replacing a human act, thus transforming the function of the human into an overseer. The autopilot problem can dictate the outcome of decisions, which in turn can either reduce or introduce systemic risk.

With the increased availability of high performance computing, individual companies are relying on technological advances and automated systems more than ever. Automation has played an increasingly important role in the domain of risk modelling in (re)insurance since the introduction of catastrophe models.

In order to maintain expertise, specialists must be put into situations where they receive frequent feedback about their actions (Kahneman & Klein, 2009; Shanteau, 1992). Limited or zero feedback results in individuals who are not learning or honing their skills. Older generations of pilots can sometimes ride on their skills (i.e. perform at a reduced, but still acceptable level, by using the skills and techniques acquired before the introduction of the 'autopilot' (Bainbridge, 1983)), while subsequent generations cannot develop these skills in the first place. Expertise and experience can dictate the outcome of decisions, which in turn can either reduce or introduce systemic risk to a company.


Air France Flight 447

In 2009, the autopilot of Air France Flight 447 disengaged suddenly while the plane was on the way from Rio to Paris. This was triggered by a technical failure, but the pilots couldn't know this, as they had not been actively flying the plane to that point. They were trying to simultaneously figure out what was going wrong and correct it without the necessary comparison data. The pilots never knew what the problem was before the plane hit the waters of the Atlantic killing everyone on board. In order to maintain expertise, one must be put in situations with frequent feedback. Lacking this feedback, expertise and skills degrade.[xxviii] [xxix]

[xxviii] Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: a failure to disagree. American Psychologist, 64(6), 515-526. [xxix] Shanteau, J. (1992). Competence in experts: The role of task characteristics. Organizational Behavior and Human Decision Processes, 52(3), 381-410.

Source: "PKIERZKOWSKI 070328 FGZCP CDG" by Pawel Kierzkowski - Own work. Licensed under CC BY-SA 3.0 via Commons - https://commons.wikimedia.org/wiki/File:PKIERZKOWSKI_070328_FGZCP_CDG.jpg#/media/File:PKIERZKOWSKI_070328_FGZCP_CDG.jpg

The autopilot bias in this context can be mediated by a number of factors, five of which we explore further in Appendix A:

1. Skills and knowledge of those using the model. Can underwriters estimate risk without a model beforehand?

2. Integration and reliance of model usage. To what extent are those using the model reliant on the model, and to what extent is the model integrated into a more well-rounded risk assessment framework?

3. Calibration and overconfidence of those using the model. How well are underwriters calibrated to areas the model doesn't cover, and are they overconfident?

4. Modelling narrative of decisions and reporting. Do qualitative interpretations of model output accompany the risk estimates?

5. Company culture surrounding the use of models. How good is communication between underwriters and modellers? What role do models play in the company?



Cognitive bias

Cognitive biases are systematic deviations in human thinking from ideal reasoning patterns. Many biases are the result of cognitive limitations (bounded rationality) and can be adaptive: in many environments, they lead to quick and approximately correct answers. Unfortunately, in many modern day situations that our ancestors didn't face—such as those that require careful attention to precise probability and risk—our judgment can be led severely astray.

"A general principle underlying the heuristics and biases is that humans use methods of thought which quickly return good approximate answers in many cases; but also give rise to systematic errors called biases." – Eliezer Yudkowsky12

12 Yudkowsky, Eliezer. "Cognitive biases potentially affecting judgment of global risks." Global Catastrophic Risks 1 (2008): 86.

Although much of the cognitive work of insurance has been outsourced to models better equipped to handle large data sets than humans are, underwriters must rationally assess modelled output, risk, and unmodelled exogenous factors to make decisions. These sorts of decisions are, unfortunately, highly susceptible to errors due to cognitive bias.

Cognitive biases play a large role in how individuals and companies ultimately make decisions. Indeed, behavioural economics and behavioural finance are entire academic fields largely devoted to studying their effects. It's then quite important to understand how these psychological factors work, how they affect risk, and how we can mitigate them.

A complete discussion of cognitive bias is beyond the scope of this study, but we briefly describe some primary and representative risks with further elaboration in the next section and in Appendix A.


13 Ariely, D., G. Loewenstein, and D. Prelec (2003). "'Coherent arbitrariness': Stable demand curves without stable preferences." Quarterly Journal of Economics 118(1): 73-105. 14 Kahneman, Daniel and Amos Tversky (1972). "Subjective probability: A judgment of representativeness." Cognitive Psychology 3: 430-54. 15 Kliger, Doron, and Andrey Kudryavtsev. "The availability heuristic and investors' reaction to company-specific events." The Journal of Behavioral Finance 11.1 (2010): 50-65. 16 Stalans, Loretta J. "Citizens' crime stereotypes, biased recall, and punishment preferences in abstract cases: The educative role of interpersonal sources." Law and Human Behavior 17.4 (1993): 451. 17 Darley, John M., and Bibb Latane. "Bystander intervention in emergencies: diffusion of responsibility." Journal of Personality and Social Psychology 8.4p1 (1968): 377.


1. Anchoring
Humans tend to fix their estimates of a given quantity on a single piece of information, even when that information is known to be unrelated to the value of that quantity. For instance, in one experiment, subjects were asked to write the last two digits of their United States Social Security Number and then asked whether they would pay this number of dollars for an item of unknown value. When asked to bid on these items later, subjects with higher numbers submitted bids between 60% and 120% more than those with lower numbers. The social security numbers thus served as anchors which biased the ultimate value estimate.13 Do risk assessors excessively anchor on irrelevant model output?

2. Availability Heuristics
The availability heuristic is our tendency to estimate the likelihood of an event based on how mentally available similar examples are.14 Imagine, for example, that you're asked to determine whether a random English word is more likely (a) to begin with the letter K or (b) to have K as the third letter. According to a study by Kahneman and Tversky, subjects tend (incorrectly) to answer that (a) is more likely.15 The reason is that it is much easier to think of words that begin with K than to think of words that have K as the third letter. That is, words starting with K are more mentally available to us, and we use their availability as a guide to their frequency. Because the emotional intensity of an event affects its ease of recall, personal experience often plays an undue role in judgment.16 We hypothesize that underwriters who experienced losses from Katrina are more risk averse with regard to wind policies than are younger underwriters, despite the fact that both groups have access to roughly the same data.

3. Bystander Effects
The bystander effect refers to the tendency people have not to intervene when others are present. For instance, in emergency situations, it often takes longer for a victim to receive help when surrounded by a large group of people as opposed to asking one bystander for help.17 This effect carries over to non-emergency situations as well. In particular, if a worker notices a problem with, say, how the company uses a model, he may be less likely to voice his concerns if a number of others could also speak up. Research has shown that a number of factors contribute to this behaviour, but two of the most relevant for the purposes of insurance include (1) the problem of interpretation, and (2) diffusion of responsibility. The first refers to the bystander's possible doubt that he has actually identified a real problem. Since others are in the same position as he is, he might think that if there really were a problem, somebody else would have already said something. Furthermore, if he actually voices his concerns and turns out to be mistaken, he might look foolish to the group. The second refers to the fact that, when there are multiple bystanders, no individual necessarily feels responsible. Everybody else, in his eyes, is equally responsible, so he becomes less likely to intervene.

4. Biased Error Search
When imperfect models and data sets exist to estimate risk, underwriters and modellers make decisions about when to search models and data sets for errors, what types of errors to look for, and when to stop looking in a way that tends to vindicate pre-existing views about risk. This pattern of biased error search is partially driven by confirmation bias and automation bias. Under time constraints, biased error search leads to finding more expected errors and fewer unexpected errors, but more errors in total. Quantitative information about these trade-offs is lacking, but the trade-offs could be substantial.



2.5 Risk Factors Summary

Systemic risk can come from numerous sources. Here, we have decomposed the factors of systemic risk into three primary categories: risks arising from models, organisational structures, and behavioural factors. Within each of these categories, a wide range of sources can contribute to systemic risk, ranging from poor data quality to motivated cognition. The next stage of the research sought to determine to what extent each of these factors contributed to overall systemic risk, and to take a first step to designing a countermeasure in the form of a systemic risk scorecard. The goal is to enable decision makers to monitor and assess systemic risk, and modify their behaviour to mitigate their contributions to such risk.

Biased Error Search

Biased error search is a familiar problem in science, as illustrated by this example from Richard Feynman:

"Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off, because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn't they discover that the new number was higher right away? It's a thing that scientists are ashamed of—this history—because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that." (Feynman 1974, p.12) [xxxi]

Source: By Gael Mace (Own work (Personal photograph)) [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons

[xxxi] Feynman, R. P. (1998). Caltech's 1974 commencement address. Reprinted as "Cargo Cult Science" in The Art and Science of Analog Circuit Design, ch. 6, p. 5.


Scorecard

3.1 Introduction and purpose of a scorecard

The working party has developed a scorecard which risk managers can use to form a preliminary snapshot of potential systemic risk. The aim is less towards an exact risk measure and more towards an indicator of whether modelling practices are aligned with reducing systemic risk. The scorecard combines the different individual contributing factors explained in the earlier chapters into a single summarised score. The weight of each individual factor is estimated by a calibration process that consists of: (1) an analysis of underwriters' pricing behaviour in the real world; (2) computational simulations based on the Metamodel; and (3) a structured expert opinion elicitation based on the Delphi method. (Detailed descriptions are included in Appendices B and C.)

In our opinion, the scorecard is intended as a rough way of monitoring systemic risks, not so much for decision-making, marketing, or prediction, but mainly for nudging users' behavioural changes. Going through the scoring exercise will ideally force the participants to consider the modelling process from different perspectives; becoming aware of the peculiarities of the process and where its weak points lie will always be more useful than any statistical score. In fact, systemic risk is literally everybody's problem, and being aware of the issue and ready to reduce the risks where possible is both the moral and practical thing to do. As discussed before, systemic risks are risks that emerge from the way parts of a system are assembled and used rather than from the individual parts themselves. This can happen on multiple levels: the parts of models interact to produce a problematic model output, the (mis)use of models inside insurance companies can produce bad business decisions, and correlated model usage can make entire markets more vulnerable than they look. The scorecard helps users to take this into account by looking at factors belonging to the different levels and combining them into the final score: even if one has perfect model usage in one's own company, one can be exposed to systemic risks from the surrounding market.

Nevertheless, the scorecard is "local". What the scorecard attempts to capture is a sense of the contribution to overall market systemic risk due to a company's modelling practices. While one can imagine evaluations of entire markets carried out by regulators, where we think the scorecard can actually be useful is for self-evaluation by companies using their own local information. This is plausible because there appears to be a fair correlation between modelling practices likely to cause risk to individual companies and to the entire market, and because proper systemic risk reduction work begins at home.

3.2 Design and elements of scoring system

Selection of factors

The first aim for the working party was to select factors that affect systemic modelling risk. It temporarily divided into three workstreams in order to more deeply investigate risk factors linked to model, behavioural and organisational systemic risks. After separate discussions a set of factors was selected with reasonably low overlap. These were further refined in workstream sessions, producing the current list. In parallel, discussions were held about observable signs of the risk factors and appropriate weighting systems.

"Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise." – John Tukey18


18 Tukey, J. W. (1962). The future of data analysis. The Annals of Mathematical Statistics, 1-67.


Not all factors are equal. Some were found to likely have smaller effects than others, or were very hard to estimate. In addition, a scoring system attempting to cover all factors discussed in this whitepaper would be prohibitively complex and cumbersome. We hence selected a smaller subset aiming at capturing those that tended to come out on top in the working party discussions, literature surveys, and simulations. Some of the factors used in the scorecard are composites of, or correlated with, several of the factors discussed before, combining more information into a simple score. Another important consideration is controllability: some risk factors may be set by regulation or the nature of business and are not amenable to easy improvement. They may nevertheless be worth noting even if they affect every market participant in the same way.

Scorecard Design

The scorecard works by summing weighted ratings of all selected factors, which are themselves linearly weighted averages based on observable measures, to produce an overall systemic risk score.

Obviously nonlinear models could be made, for example taking question answers as inputs to a neural network or a nonlinear regression (as is sometimes done in credit scoring). However, the simple linear model is robust, allows investigation of what answers caused particular results, and is widely used where nonlinear models would be hard to motivate. In fact, there might be a principled case for linear models given that human judgement is often surprisingly well modelled (or even outperformed) by linear models19. This is particularly true when inputs have a monotone effect on the output, errors of measurement or estimation are likely, and the model is not overly sensitive to weighting20.

Setting the weights of factors can be carried out in different ways. Had extensive statistics on systemic risk failures due to modelling been available, it would have been possible to use a regression fit – but this kind of data is scarce. At the opposite end of complexity lies simple tallying: in many situations even non-optimal weight setting using equal weights ("improper linear models") produces good results21, especially when experts choose the factors that are informative and matter in practice.

The working party combined several methods for getting estimates of factor importance. A structured expert opinion elicitation based on Delphi was used both to estimate the degree of initial group consensus and to develop an informed view (see Appendix C). Other input came from the Oxford Metamodel of the role of modelling in the insurance market that had been developed at the Future of Humanity Institute (FHI). In particular, the metamodel allows for a way of estimating how much systemic risk changed due to different factors, which was used to estimate their relative weight in the scorecard (see Appendix B). A final input was experimental data from an FHI pilot study of cognitive bias in underwriters. Combining these sources produced the final estimated weighting of the factors in Figure 3.

19 Goldberg, Lewis R. (1968). Simple models or simple processes? Some research on clinical judgments. American Psychologist, 23:7, 483-496. 20 Dawes, R. M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81(2), 95. 21 Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34(7), 571.

[Diagram: model inputs provide measured observables, which are combined via factor weightings into the overall systemic risk score.]


Figure 3: relative weight/importance of the factors on the scorecard.

Based on these calibration methods, a summary of the (relative) weights for the selected top 10 factors is given in Table 2 below. For each question, multiply the individual answer by the question weight and sum it all together, dividing by 5. The total maximum score is 100, corresponding to maximal systemic risk.

The importance lies not so much in the absolute numbers as in the information they give about where efforts may be needed.



[Figure 3 (bar chart): estimated weights of the systemic risk factors (Data Sources, Data Granularity, Model Diversity, Model Shopping, Model Market, Autopilot Process, Asymmetrical Error Checking, Control of Modelling Process, Risk Diversification and Overall Market Competition), as estimated separately by the experiment, the Metamodel and the Delphi method.]


For each factor, the table gives: a reference number, the key factor, its weight, the observable measures and metrics, and the measurement scale (score 0 to 5) used to record your own score.

1. Data Sources (weight 7). What proportion of your data comes from first-hand, reliable sources? Measurements: 0 = 100%; 1 = 80%; 2 = 60%; 3 = 40%; 4 = 20%; 5 = 0%.

2. Data Granularity (weight 9). What proportion of your data deviates from the optimal granularity requirements expected by the model? Measurements: 0 = 0%; 1 = 20%; 2 = 40%; 3 = 60%; 4 = 80%; 5 = 100%.

3. Model Diversity (weight 14). What is the spread of the contribution of different parts of the model to the modelled results? This is estimated by "model entropy". A low entropy model would have all of the result depend on a single part; a maximum entropy model would weight them all equally. If xi is the contribution of part i (where the sum of the xi is 1) and n is the number of parts, the normalized entropy is H = -(sum over i of xi ln xi) / ln(n). This entropy is between 0 (total concentration from one part) and 1 (even distribution). Measurements: 0 = entropy of 1 (even contribution from all parts); 1 = entropy of 0.9; 2 = entropy of 0.8; 3 = entropy of 0.7; 4 = entropy of 0.5; 5 = entropy of 0 (all contributions from one part).

4. Model Shopping (weight 9). What proportion of your model selection decisions are based on appropriateness, fitness for purpose and scientific credibility of the model? As opposed to other considerations such as price, regulatory approval, market acceptance, global licenses and relationships. Measurements: 0 = 100%; 1 = 80%; 2 = 60%; 3 = 40%; 4 = 20%; 5 = 0%.

5. Model Market (weight 11). What proportion of your model or methodologies are subject to a restricted pool of suppliers/methodologies (defined as less than 3 suppliers/methodologies)? Measurements: 0 = 0%; 1 = 20%; 2 = 40%; 3 = 60%; 4 = 80%; 5 = 100%.

6. Autopilot Process (weight 10). What proportion of the management and key metric information from the model is produced at a frequency which does not allow for review, narrative and challenge (e.g. weekly or more frequent)? Measurements: 0 = 0%; 1 = 20%; 2 = 40%; 3 = 60%; 4 = 80%; 5 = 100%.


7. Asymmetric Error Checking (weight 7). Given a decision, what is the probability that it will be checked carefully even if it is apparently "normal"? Measurements: 0 = at least 10%; 1 = 5%; 2 = 1%; 3 = 0.5%; 4 = 0.1%; 5 = less than 0.1%.

8. Control of Modelling Process (weight 5). What proportion of your modelling process is subject to a well governed, controlled, effective and documented control framework? Measurements: 0 = 100%; 1 = 80%; 2 = 60%; 3 = 40%; 4 = 20%; 5 = 0%.

9. Risk Diversification (weight 14). What is the spread of the contribution of different risks to the modelled results? This is calculated by "risk portfolio entropy". A low entropy portfolio would have all of the result depend on a single risk; a maximum entropy portfolio would weight them all equally. If xi is the contribution of risk i (where the sum of the xi is 1) and n is the number of risks, the normalized entropy is H = -(sum over i of xi ln xi) / ln(n). This entropy is between 0 (total concentration from one risk) and 1 (even distribution). Measurements: 0 = entropy of 1 (even contribution from all risks); 1 = entropy of 0.9; 2 = entropy of 0.8; 3 = entropy of 0.7; 4 = entropy of 0.5; 5 = entropy of 0 (all contributions from one risk).

10. Overall Market Competition (weight 14). What proportion of your business is from highly competitive markets where the market price can deviate by more than 30% below the technical price within the prevailing underwriting cycle? Measurements: 0 = 0%; 1 = 20%; 2 = 40%; 3 = 60%; 4 = 80%; 5 = 100%.


Table 2: suggested weights and possible observable measures for defining factor values.
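The entropy measure used in rows 3 and 9 of Table 2 can be computed directly. The following is a minimal Python sketch, assuming the normalised Shannon entropy H = -(sum of xi ln xi)/ln(n) given above (which reproduces the figure of roughly 0.72 quoted for a 5:1:1 split in the worked example later in this chapter); the function names and the nearest-anchor mapping onto the 0-5 scale are our own illustrative choices, not prescribed by the scorecard.

import math

def normalized_entropy(contributions):
    # Normalised entropy of non-negative contributions: 0 means all weight on
    # one part, 1 means a perfectly even spread.
    total = sum(contributions)
    xs = [c / total for c in contributions]
    n = len(xs)
    if n < 2:
        return 0.0
    h = -sum(x * math.log(x) for x in xs if x > 0)
    return h / math.log(n)

# Anchor points taken from Table 2: measurement score -> entropy level.
ENTROPY_ANCHORS = {0: 1.0, 1: 0.9, 2: 0.8, 3: 0.7, 4: 0.5, 5: 0.0}

def entropy_to_measurement(h):
    # One plausible way to map an intermediate entropy onto the 0-5 scale:
    # take the score whose anchor entropy is closest (an assumption on our part).
    return min(ENTROPY_ANCHORS, key=lambda score: abs(ENTROPY_ANCHORS[score] - h))

# One model used five times as much as the other two (as in the later example).
print(round(normalized_entropy([5, 1, 1]), 2))                  # about 0.72
print(entropy_to_measurement(normalized_entropy([5, 1, 1])))    # 3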

Scoring system calculation (An administrative example of scoring system administration)

This is a fictional example of how the scoring system may be used by a company. The example is specific to catastrophe modelling; however, the scorecard methodology may be applied to all types of insurance modelling.


Selected factors                Weight   Measurement   Weighted       Measurement   Weighted       Change of
                                         (Year 0)      measurement    (Year 1)      measurement    systemic risk
Data sources                    7        2             14             1             7              -7
Data granularity                9        2             18             2             18             0
Model diversity                 14       3             42             3             42             0
Model shopping                  9        3             27             2             18             -9
Model market                    11       5             55             3             33             -22
Autopilot process               10       3             30             3             30             0
Asymmetric error checking       7        4             28             4             28             0
Control of modelling process    5        2             10             3             15             5
Risk diversification            14       3             42             4             56             14
Overall market competition      14       3             42             3             42             0
Total                           100                    308/5 = 61.6                 289/5 = 57.8   -19/5 = -3.8

These measurements are multiplied by factor weights, summed, and normalized to a scale of 0-100.

Data sources: For the first year, the company estimates that about 60% of its data comes from first-hand sources, giving a measurement of 2. Multiplying by 7 for the factor weight, the contribution to the final score is 14. In the second year, because of a shift away from reinsurance data, about 80% of data is close to first-hand, and the measurement is now 1.

Data granularity: About 40% of business is written with only county level data, while it is known that the peril in question (for example flooding) is best modelled at a finer granularity. This produces a measurement of 2.

Model diversity: The company writes business using three main models, where one of the models has five times more influence (in terms of importance for decisions and amount of business written; for example, it may always be used, while the other two models are only consulted for special cases). This produces x1 = 0.14, x2 = 0.14, x3 = 0.71 and entropy of 0.72, giving a measurement of 3.

Model shopping: The company writes about 40% of its business using a model imposed largely by limited model choice, giving a measure of 3. By year 2 this has improved because of the availability of new, apparently scientifically credible models, and the measure becomes 2.

Model market: In year 0, there is essentially only one available model supplier for the relevant market, producing a measurement of 5. In year 1, new models have arrived applicable to 40% of the business, reducing it to 3.

Autopilot: About 60% of the information from the model is produced at a high frequency that makes it hard to criticise, producing a measure of 3.

Asymmetric error checking: The internal quality processes of the company spot check about one in a thousand cases, producing a measure of 4.

Control: Due to the changes in model usage and what business is written, the documentation of the modelling process slips somewhat, increasing the measure from 2 (60% of usage is well documented) to 3 (40%).

Risk diversification: Because of market changes, the risks become less spread between different types. Originally the total risk portfolio was 12%, 13%, 13% and 62% (entropy 0.77), but in year two a focus on the last two forms of risk changes it to 3%, 7%, 15% and 75% (entropy 0.52), increasing the measurement to 4.

Market competition: Much of the company's business is in a market where prices often reflect preferences for customer retention, producing a measure of 3.

In this case we can see that the company is scoring in the high middle of the scale between perfect (0 score) and worst possible (100): there is room for improvement. Compared to year zero the company has improved its systemic risk, mostly by the change in the model market and a better model shopping avoidance strategy. However, this is offset by worsening risk diversification: the shift in business may have reduced the problem of data quality at the price of focusing too much on particular risk areas.
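The arithmetic above is simple enough to script. A minimal Python sketch (weights taken from Table 2 and measurements from the worked example; the function and variable names are our own) reproduces the year 0 and year 1 scores:

# Factor weights from Table 2 (they sum to 100).
WEIGHTS = {
    "Data sources": 7, "Data granularity": 9, "Model diversity": 14,
    "Model shopping": 9, "Model market": 11, "Autopilot process": 10,
    "Asymmetric error checking": 7, "Control of modelling process": 5,
    "Risk diversification": 14, "Overall market competition": 14,
}

def systemic_risk_score(measurements):
    # Each measurement is on the 0-5 scale; weight x measurement is summed over
    # all factors and divided by 5, giving 0 (best) to 100 (maximal systemic risk).
    return sum(WEIGHTS[factor] * m for factor, m in measurements.items()) / 5

year0 = {"Data sources": 2, "Data granularity": 2, "Model diversity": 3,
         "Model shopping": 3, "Model market": 5, "Autopilot process": 3,
         "Asymmetric error checking": 4, "Control of modelling process": 2,
         "Risk diversification": 3, "Overall market competition": 3}
year1 = {**year0, "Data sources": 1, "Model shopping": 2, "Model market": 3,
         "Control of modelling process": 3, "Risk diversification": 4}

print(systemic_risk_score(year0))  # 61.6
print(systemic_risk_score(year1))  # 57.8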


3.3 Use of scorecard and outlook

Clearly, a scoring system is an approximation to reality: it maps a complex domain into a simple estimate. It will not function well if the data or theory it is based on is not representative or correct enough. While we have good confidence that this scoring system points in the right direction, is simple enough not to suffer elaborate overfitting, and would apply to a wide range of modelling, there will always be cases where it cannot apply. A scoring exercise should always include consideration of "Does this question make sense in our context?" - understanding how one's business differs from the normal is important for having a proper view of one's own risk. By context, the user should think of the modelling environment in the wider sense: what are the modelled outputs used for? Are systemic risk drivers inherently challenged as a result of the organisational structure? Are there incentives to introduce bias in the model use?

This also matters for comparability. The interpretation of the factors will by necessity be tied to the nature of the company and its business, and may hence change over time. Just because two companies score the same does not mean that they have the same type of risk, or that they would agree on whether the overall systemic risk is at an acceptable level. A regular user will soon consider how to tailor the scorecard to the modelling environment within the company, adding or modifying the factors to reflect new understanding of the skill of the models; just as for normal risk, it is important to own one's own view of systemic risk.

Last but not least, an important issue is the so-called "Goodhart's law", most popularly described as: "When a measure becomes a target, it ceases to be a good measure." People anticipate the response, and begin to game the measure rather than try to achieve the end for which the measure was invented. A relevant restatement is that risk models commonly break down when used for regulatory purposes.22 Using the scoring system for regulation would definitely be unwise even if it had perfect statistical and theoretical rigour, since it would no doubt be gameable. This is why we advise against using it for comparing organisations or for decision-making. It is better at helping discover what can be improved than at giving the proper incentives for improvement. The scorecard in its present state is just the first step towards proper systemic risk measurement and sustainable modelling. We hope it will serve as inspiration for better tools in the future. It can and should be improved in many ways: through feedback and critique from users, more detailed experimental, simulation and expert input, by being compared to actual market data over time, and through deeper investigations into the nature of SRoM.

There is an important difference between being aware of and observing a risk, and being able to mitigate it efficiently. The scorecard helps with the first half, but better mitigation strategies are needed. It also does not cover risk triggers: mapping out what would trigger cascades of systemic mistakes would further help guide mitigation.

Expanding the investigation to the insurance linked security market and other uses of risk models may also prove useful: such instruments have begun to connect previously uncorrelated markets (insurance and capital) in ways that may pose systemic risks. Another potentially fruitful area is understanding how model-makers, individual firms and (quasi-)regulators can coordinate to reduce joint systemic risk, one where a shared model or score for systemic risk would be helpful, just as modelling has been a shared language in insurance itself.

22 Daníelsson, Jón (July 2002). "The Emperor Has No Clothes: Limits to Risk Modelling". Journal of Banking & Finance 26(7): 1273-96.



23 Goldin, I., & Mariathasan, M. (2014). The butterfly defect: How globalization creates systemic risks, and what to do about it. Princeton University Press. 24 Haldane, A., & Madouros, V. (2012, August). The dog and the Frisbee. Bank of England. In Speech given at the Federal Reserve Bank of Kansas City's 36th economic policy symposium, "The Changing Policy Landscape", Jackson Hole, Wyoming (Vol. 31).


Summary guidelines for better practice

4.1 Systemic risk in the larger world

Systemic risk is increasing because of the growing globalisation, interconnectedness, and speed of our world23 – trends that are generally beneficial, but which introduce new risks. Modelling is also going to become increasingly common: the natural response to a complex risky world is to try to manage it, and our modelling capabilities will increase radically over the coming years thanks to better computers, automatically collected massive datasets, and new data science methods. Done naively, this will create a perfect storm of modelling systemic risk, where shared models used by overconfident actors lead global markets into more risk24. However, learning the lessons from past mistakes and planning ahead about systemic risk can help us avoid this.

Systemic risk is by its nature often difficult to notice until a catastrophe strikes. Because it depends on how various parts of the system interact, it can be very hard to mitigate even when it is noticed. On the positive side, different parts can also support each other to reduce the risk: we can design our modelling practices, organisational cultures and markets to mitigate systemic risk.

Source: World Economic Forum


25 One challenge is to avoid Campbell's law: "the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor".


4.2 Regulation, policy, practice

Monitoring of Systemic Risk of Modelling (SRoM) at industry level

Systemic risk is often a "tragedy of the commons" problem, in that individual rational actors can act in ways that produce a shared problem. This typically requires coordination to solve. In the case of SRoM the first step should be monitoring the risk at the industry level, since this can help estimate how much mitigating effort is needed both individually and jointly. This will require support from impartial trusted third parties (such as Lloyd's) and/or regulators to handle issues of information sharing.

Stress testing for SRoM at industry level (e.g. major flaw in major model)

One useful experiment would be stress testing for SRoM at the industry level, for example by running an exercise considering the effects of a major flaw in a major model. This can also help quantify the actual benefits of model diversity and the overall systemic risk due to present practice.

Regulatory disclosures on SRoM

Regulatory disclosures on SRoM (along the lines of the risk factors highlighted in section 2) may be helpful. At present regulators mainly look for systemic risks due to the more traditional financial sources of contagion and diversification. Finding a useful form of SRoM disclosure is a challenge: merely stating scores of the SRoM scorecard is not enough. Systemic risk disclosures are fundamentally qualitative, but it is easy to turn reporting into a box-checking exercise rather than actually providing useful information25. The aim at this time should likely be to begin the process of understanding what would be useful and how it could be done.

4.3 Making more resilient organisations and markets

Model Independent Scenario Analyses

Systemic risk increases if all inputs to decision-making are model-dependent or filtered through the same cognitive biases. To increase resilience the organisation can develop model-independent approaches to support their decision-making, for example relying more on raw information, Stress Scenarios or Realistic Disaster Scenarios (provided they do not rely on modelling output), or accumulation analyses like "spider bombs".

These approaches tend to be less refined, but they can complement the modelling approach and potentially flag any over-reliance on the model. For instance, the modelling output may indicate very little risk because it assumes very low probability of occurrence; but the accumulation analyses show that there are huge exposures and the losses would be huge if the model proves to be wrong. The lack of refinement is also a safeguard against overfitting and biased model selection.



Training

Pilots train for manual landings and equipment failure; it may be useful to consider training for handling model failure, both total and partial. This is locally useful inside the organisation to maintain skills and critical thinking about models, and across a market to reduce overall systemic risk.

One particular method that has been suggested to avoid getting focused on specific (possibly spurious or model dependent) probabilities, or ignoring badly behaved tail probabilities, is to consider what scenarios can actually be handled by the company. For example, reverse stress-testing to see what is the least extreme scenario that could destroy the business model of a company has the potential to find specific scenarios that are both actionable and give a sense of what truly is at risk26.

Learning from close calls

Extreme tail risks by definition rarely occur, making models of them unreliable. This is doubly true for SRoM, since extreme model failures may be unprecedented (models may have not been used long in the market, and the market itself is changing). However, close calls, when recognised and interpreted as warnings (rather than reassurances that the safeties are adequate), do give some information about systemic risks. Building organisations and markets that can pick up this information can strengthen resilience. It requires procedures and a culture that recognise the utility of disclosure when somebody notices a problem, and on-going sceptical evaluation of what is going on even (or perhaps especially) when things seem to work well.

Maintaining model diversity

Extensive practical testing in the fields of machine learning and statistics suggests that predictions can be improved by fitting multiple models and combining their predictions (a simple sketch of this idea follows below). Such 'ensembles' will be more useful to the degree that individual models miss important aspects of the domain being modelled. Similar results from the Oxford Metamodel (as well as the intuition of experts) suggest that increasing the diversity of models between firms reduces systemic risk for the industry. Although individual firms can acquire comfort from using models that are generally accepted in the market, conforming to a common view, firms are likely to feel competitive pressure to use certain models similar to their competitors'. This is a particular instance of the general conflict between good practice for individual firms and good practice for the health of the entire industry. An additional source of pressure to conform comes from regulatory requirements that are easier to satisfy with industry-standard and accepted models. Cooperation between regulators and the industry on modelling issues could result in wider appreciation of model systemic risk. Greater tolerance of justified divergences in estimates could help maintain a diversity of views and hence reduce systemic risk, which should be as important a regulatory goal as the solvency of individual participants.

A diversity of models, while beneficial, is not a panacea. Choosing a model for a particular domain is a challenging task and models have to be validated and carefully adjusted before use. An additional source of risk is the lack of a shared understanding of key properties of the model between the modellers, the underwriters and the upper-level management.

The importance of diversity extends beyond the models themselves. When it comes to systemic risk, models can only be so helpful. Detailed modelling of tail risk may not be possible due to limited data. Models that are diverse along many other dimensions may end up making the same incorrect prediction about a tail event. For this reason, it's important that the limitations of particular models are understood, and that underwriters are able to consider diverse scenarios that might be poorly captured by their models.

26 Campbell, M. P. (2012). Minimally Destructive Scenarios and Cognitive Bias. In Risk Metrics for decision making and ORSA. Society of Actuaries, Schaumburg, Illinois, pp. 15-17. http://www.casact.org/pubs/Risk_Essays/orsa-essay-2012-campbell.pdf
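To make the ensemble idea above concrete, here is a purely illustrative Python sketch (the models and numbers are invented stand-ins, not real catastrophe models): it combines loss estimates from several models and also reports their disagreement, which is itself a useful signal of structural uncertainty.

from statistics import mean, pstdev

def ensemble_estimate(models, exposure):
    # Run each model on the same exposure and combine the predictions.
    # The spread across models signals structural uncertainty that any
    # single model would hide.
    predictions = [model(exposure) for model in models]
    return mean(predictions), pstdev(predictions)

# Three illustrative 'models' producing expected-loss estimates (hypothetical).
models = [
    lambda exposure: 0.010 * exposure,  # vendor model A
    lambda exposure: 0.014 * exposure,  # vendor model B
    lambda exposure: 0.008 * exposure,  # in-house view
]

combined, spread = ensemble_estimate(models, exposure=500)
print(combined, spread)  # ensemble mean and disagreement between the models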


27 http://www.goodjudgmentproject.com/; Tetlock, P. (2005). Expert political judgment: How good is it? How can we know? Princeton University Press.

28 Shanteau, J. (1992). Competence in experts: The role of task characteristics. Organizational Behavior and Human Decision Processes, 53(2), 252-266. 29 Lovallo, D., & Sibony, O. (2010). The case for behavioral strategy. McKinsey Quarterly, 2, 30-43. http://www.mckinsey.com/insights/strategy/the_case_for_behavioral_strategy


4.4 Training/behavioural management

Behavioural economics has many applications for insurance. There is extensive work on typical human biases in reasoning about probability and risk. We know that these biases can become quite acute when people have to make decisions in environments foreign to those our ancestors faced. In particular, insurance forces underwriters to make decisions based on large bodies of somewhat ambiguous data and to handle small probabilities of extreme catastrophe. Because people often make the same kinds of mistakes in these types of situations, there is serious potential for underwriters to introduce systemic risk into the market.

Conversely, there have recently been large-scale experiments that investigate the features of people who are especially good at the task of predicting highly uncertain events27, finding that there are indeed individual differences. Research into the growth of expertise has shown that it can be trained for some tasks, especially ones that provide feedback, can be decomposed into parts, and have adequate decision aids28.

There are two main methods to help mitigate the effects of cognitive bias. First, we can actively train modellers and underwriters to be less susceptible to some of the most important biases. While we know that full elimination is impossible, some successes in other industries give us reason to hope that the right kind of training will be useful.

Second, we can try to set up organisations in a way that will make them more resilient to the effects of behavioural bias even if the participants themselves are biased29. For instance, it's often helpful to require participants in a meeting to write down their initial positions before discussion to combat some of the effects of anchoring. Instituting policies of checking randomly selected decisions or model results can counteract asymmetric error checking. But such policies need to be anchored in the organisation: everybody needs to understand why they are done and there must be support from management even when they are inconvenient.

Case studies in other domains

While behavioural biases have not been widely recognized until recently, some areas have begun taking steps to investigate and counter them:

• The oil and gas industry has a long tradition of investigating cognitive bias – since mistakes can be very costly1. Anchoring and overconfidence in probability estimates are found, and training and experience have a rather weak – but positive – effect in ameliorating them2.

• Military decision-making has been found vulnerable to bias, and in some quarters training efforts have been attempted3.

• Intelligence analysis is highly vulnerable to bias4 and there is evidence that biases on multiple levels can impair national security5. The US intelligence community has investigated various debiasing and decision support methods in a realistic setting, as well as structured analysis methods6.

1 Krause, T. (2010). High-reliability PERFORMANCE: Cognitive biases undermine decision-making. ISHN, 44(9), 46. http://bstsolutions.com/en/knowledge-resource/163-high-reliability-performance-cognitive-biases-undermine-decision-making

2 http://www.psychology.adelaide.edu.au/cognition/aml/aml2/welsh_aml2.pdf; http://csjarchive.cogsci.rpi.edu/proceedings/2007/docs/p1647.pdf

3 Janser, M. J. (2007). Cognitive biases in military decision making. Army War College, Carlisle Barracks, PA; Davis, P. K., Kulick, J., & Egner, M. (2005). Implications of modern decision science for military decision-support systems. Rand Corporation. http://www.au.af.mil/au/awc/awcgate/milreview/williams_bias_mil_d-m.pdf

4 Heuer, R. J. (1999). Section III: cognitive biases. In Psychology of intelligence analysis. United States Government Printing.

5 Yetiv, S. A. (2013). National Security Through a Cockeyed Lens: How Cognitive Bias Impacts US Foreign Policy. JHU Press.

6 Cook, M. B., & Smallman, H. S. (2008). Human factors of the confirmation bias in intelligence analysis: Decision support from graphical evidence landscapes. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(5), 745-754.



4.5 Final words

Systemic risk is everybody's problem: by its nature it is shared. There is often a responsibility to many indirectly affected stakeholders who are not themselves involved in the practices that create the risk. Mitigating such broad risks is hence socially and morally significant. Being a good citizen of a community requires us to "clean up" the risks we impose on others.

Systemic risk is often intimately tied to what makes the system useful. We cannot reap the benefits of modelling without risking that our modelling practices sometimes mislead us. But we can avoid overconfident gambling and actually try to measure and manage our systemic risks.

Throughout its long history the insurance industry has specialised in managing risk regardless of the domain in which the peril exists: over time new perils, whether airplanes or cyber, emerge, are handled, and eventually become profitable. Meeting the challenge of the systemic risk of modelling may be the next step in this sequence. If so, the lessons learned will be useful far outside the confines of insurance.


Glossary

Word Definition / description

“What if” scenarios Changing the values inserted into models or their parameters to determine the sensitivity and consequences; sometimes also includes entire scenarios of possible events.

1 in 100 TVaR (Tail Value at Risk) The expected amount of loss given a loss equal to or larger than the 1% VaR.

1 in 200 VaR (Value at Risk) The amount of loss expected to be exceeded in only 0.5% of relevant time periods.

Average annual losses The expected loss per year over the long run; the minimum amount of premium to charge to cover losses over time.
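As a minimal sketch only (not taken from the paper; the simulated loss figures are hypothetical), the three quantities above can be estimated from a vector of simulated annual losses:

import numpy as np

# Hypothetical annual losses from a stochastic model, simulated over 10,000 years.
rng = np.random.default_rng(0)
annual_losses = rng.pareto(2.5, size=10_000) * 1e6  # heavy-tailed toy data

aal = annual_losses.mean()                          # average annual loss
var_1_in_200 = np.quantile(annual_losses, 0.995)    # loss exceeded in 0.5% of years
var_1_in_100 = np.quantile(annual_losses, 0.99)     # loss exceeded in 1% of years
tvar_1_in_100 = annual_losses[annual_losses >= var_1_in_100].mean()  # mean loss in the worst 1% of years

print(f"AAL {aal:,.0f} | 1-in-200 VaR {var_1_in_200:,.0f} | 1-in-100 TVaR {tvar_1_in_100:,.0f}")

The exceedance probability curve defined below is the same idea applied across all thresholds: each loss level is plotted against the fraction of simulated years exceeding it.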

Back testing Testing a model on past data or time periods in order to gauge its performance

Basel II The second Basel Accords, recommendations on banking law and regulation. In particular, it amends international standards on how much capital banks need to hold to guard against financial and operational risks.

Capital buffers The amount of money a financial institution is required to hold in order to avoid excessive insolvency risk.

Cyber Risk from failures of information technology

Delphi A method for systematically combining the opinions of experts, originally intended for forecasting. The experts answer questionnaires in two or more rounds; after each round they see the group’s distribution of answers and the reasoning behind them, and may update their own responses.

Dependent validation Validation undertaken or coordinated by model users who are involved in the production usage, development, parameterisation, testing and/or operation of the external catastrophe model that feeds the Internal Model. This contrasts with independent validation carried out by validation risk experts, e.g. within the Risk department, who are removed from the production, development, parameterisation, testing and/or operation of the catastrophe models in the Internal Model.

Economic Scenario Generators Models generating scenarios for economic risk drivers such as interest rates, credit risk, inflation, equity returns, real estate returns etc.

Exceedance probability curves A diagram showing the estimated probability that losses will be larger than different values.

Hybrid systemic risk Systemic risks that span more than one system, such as insurance and capital markets, or energy and food security

ILS Insurance Linked Securities

Internal model A company or institution’s model of how different kinds of risk (insurance risk, capital risk, operational risk etc.) will affect it. In insurance, Solvency II requires insurers that use internal models to calculate their capital requirements with them.

Linearly weighted A series of numbers are multiplied by fixed weights and summed. This makes the output depend more on the numbers with large weights.

Lloyd’s Lloyd’s of London is an insurance market, a corporate body acting as a quasiregulator of the London insurance market.

Machine learning Techniques for automatically extracting patterns from data, allowing software to predict, classify or approximate new data.



Metamodel A model of the modelling process.

Minimum capital requirements The minimum level of capital required by Solvency II to be held by an insurer. The higher level is the SCR (Solvency Capital Requirement), which is the more prudent measure.

Model entropy A measure of how concentrated or dispersed reliance on models is: if numerous independent models are used the model entropy is high, while if most model use is based on a single model it is low.
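As one possible illustration (the paper does not prescribe a formula; the market-share figures are hypothetical), model entropy could be quantified as the Shannon entropy of the market shares of the models in use:

import math

def model_entropy(market_shares):
    # Shannon entropy of model market shares: near 0 when one model
    # dominates, higher when use is spread across independent models.
    total = sum(market_shares)
    shares = [s / total for s in market_shares if s > 0]
    return -sum(p * math.log2(p) for p in shares)

print(model_entropy([0.95, 0.03, 0.02]))        # concentrated market: ~0.34 bits
print(model_entropy([0.25, 0.25, 0.25, 0.25]))  # dispersed market: 2.0 bits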

Model independent scenario analyses Multiple uniformly spaced scenarios placed within a geographical polygon to identify the area of maximum accumulation.

Non-modelled risks Sources of non-life loss that may arise as a result of catastrophe events, but which are not explicitly covered by a company’s use of existing catastrophe models.

Nonlinear regression Statistical modelling of data where a nonlinear curve is used to approximate how input data contributes to output data.

ORSA Own Risk and Solvency Assessment

Overfitting When a model describes noise and data artefacts rather than the underlying reality. This commonly happens because the model is excessively complex compared to the number of observations it is fitted to.

P&C Property and Casualty

Portfolio The book of business of an insurer or reinsurer, in particular all policies held.

Quasiregulator Institution that performs many of the same regulatory functions as a regulatory body without specific enabling legislation.

Realistic Disaster Scenarios Stress test scenarios based on various disasters, maintained by Lloyd’s and used to test syndicates and the market.

Regression fit Statistical model fitting a simple linear rule to data, often used for forecasting or approximation.

Reverse stress testing Instead of testing the consequences of a stressful event on a company, one can analyse risk to find the smallest stress that could cause a given bad outcome.

Risk diversification Reducing overall risk by having a portfolio covering a variety of (hopefully) uncorrelated risks, making the risk of the whole less than the sum of the parts.

Risk profile The estimated risk distribution for a portfolio.

Risk-return profile The pattern of risk estimated for different returns on a portfolio.

Solvency II EU programme for a harmonized insurance regulatory regime.

Spider bombs Multiple uniformly spaced circular scenarios placed within a geographical polygon to identify the area of maximum accumulation, most commonly used for terrorism analysis.

Statistical power laws Probability distributions where the probability of events of size x is proportional to x^(-a) with a > 1. Such distributions have large tail risk, and show up in estimates of many catastrophe risks.
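As an illustrative worked equation (an assumption chosen for concreteness, not taken from the paper), consider a Pareto-type tail with survival function $P(X>x) = (x_m/x)^{a}$ for $x \ge x_m$ and $a > 1$. The expected loss given exceedance of a threshold $u \ge x_m$ is

\[ E[X \mid X > u] = \frac{a}{a-1}\,u , \]

which grows without bound as the threshold increases and blows up as $a \to 1$, illustrating why rare, extreme events can dominate total risk for such distributions.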


Stochastic models Models where numerous randomly generated events and their consequences are simulated in order to estimate the combined probability distribution of outcomes.

Stress scenarios / stress testing Analysis of how much a given crisis will affect a company, financial instrument or market.

Systemic Risk of Modelling (SRoM) Inadvertent increases in (shared) risk due to the use of risk models.

Tail risk The risk from extreme events far away from the median. If the risk probability distribution is heavy-tailed, the tail risk from even very rare events can dominate the total risk.

Technical price A price expected to generate a certain expected loss ratio.

Tragedy of the commons Situation where individuals act rationally according to their own self-interest, but the end result is against the best interests of the whole group.

UK ICAS UK Individual Capital Adequacy Standards.

Underwriting cycle Insurance underwriting has a tendency toward cyclicality. First, premiums are low due to competition and excess insurance capacity. Then a natural disaster or other cause of a surge in insurance claims drives less capitalised insurers out of business. Less competition and capacity leads to higher premiums and better earnings, which attracts more competitors.

Use test The Solvency II requirement that insurance companies actually use the internal model that generates their estimate for capital buffers.

Appendices are available at

http://www.amlin.com/~/media/Files/A/Amlin-Plc/Systemic_Risk_Scorecard_Appendices



The Leadenhall Building, 122 Leadenhall Street, London EC3V 4AG. Switchboard +44 (0)20 7746 1000

www.amlin.com