The Journal of Financial Transformation
Cass-Capco Institute Paper Series on Risk
Journal #28, 03/2010
Recipient of the APEX Awards for Publication Excellence 2002-2009
MSc in Insurance and Risk Management
Cass is one of the world’s leading academic centres in the insurance field. What’s more, graduates from the MSc in Insurance and Risk Management gain exemption from approximately 70% of the examinations required to achieve the Advanced Diploma of the Chartered Insurance Institute (ACII).
For applicants to the Insurance and Risk Management MSc who already hold a CII Advanced Diploma, there is a fast-track January start, giving exemption from the first term of the degree.
To find out more about our regular information sessions (the next is 10 April 2008), visit www.cass.city.ac.uk/masters and click on 'sessions at Cass' or 'International & UK'.
Alternatively call admissions on:
+44 (0)20 7040 8611
With a Masters degree from Cass Business School,you will gain the knowledge and skills to stand out in the real world.
Minimise risk, optimise success
Editor
Shahin Shojai, Global Head of Strategic Research, Capco
Advisory Editors
Cornel Bender, Partner, Capco
Christopher Hamilton, Partner, Capco
Nick Jackson, Partner, Capco
Editorial Board
Franklin Allen, Nippon Life Professor of Finance, The Wharton School,
University of Pennsylvania
Joe Anastasio, Partner, Capco
Philippe d’Arvisenet, Group Chief Economist, BNP Paribas
Rudi Bogni, former Chief Executive Officer, UBS Private Banking
Bruno Bonati, Strategic Consultant, Bruno Bonati Consulting
David Clark, NED on the board of financial institutions and a former senior
advisor to the FSA
Géry Daeninck, former CEO, Robeco
Stephen C. Daffron, Global Head, Operations, Institutional Trading & Investment
Banking, Morgan Stanley
Douglas W. Diamond, Merton H. Miller Distinguished Service Professor of Finance,
Graduate School of Business, University of Chicago
Elroy Dimson, BGI Professor of Investment Management, London Business School
Nicholas Economides, Professor of Economics, Leonard N. Stern School of
Business, New York University
José Luis Escrivá, Group Chief Economist, Grupo BBVA
George Feiger, Executive Vice President and Head of Wealth Management,
Zions Bancorporation
Gregorio de Felice, Group Chief Economist, Banca Intesa
Hans Geiger, Professor of Banking, Swiss Banking Institute, University of Zurich
Wilfried Hauck, Chief Executive Officer, Allianz Dresdner Asset Management
International GmbH
Pierre Hillion, de Picciotto Chaired Professor of Alternative Investments and
Shell Professor of Finance, INSEAD
Thomas Kloet, Senior Executive Vice-President & Chief Operating Officer,
Fimat USA, Inc.
Mitchel Lenson, former Group Head of IT and Operations, Deutsche Bank Group
David Lester, Chief Information Officer, The London Stock Exchange
Donald A. Marchand, Professor of Strategy and Information Management,
IMD and Chairman and President of enterpriseIQ®
Colin Mayer, Peter Moores Dean, Saïd Business School, Oxford University
Robert J. McGrail, Executive Managing Director, Domestic and International Core
Services, and CEO & President, Fixed Income Clearing Corporation
John Owen, CEO, Library House
Steve Perry, Executive Vice President, Visa Europe
Derek Sach, Managing Director, Specialized Lending Services, The Royal Bank
of Scotland
John Taysom, Founder & Joint CEO, The Reuters Greenhouse Fund
Graham Vickery, Head of Information Economy Unit, OECD
Norbert Walter, Group Chief Economist, Deutsche Bank Group
Table of contents
Opinion
8 A partial defense of the giant squid
Sanjiv Jaggia, Satish Thosar
12 Thou shalt buy ‘simple’ structured products only
Steven Vanduffel
14 Enterprise friction — the mandate for risk management
Sandeep Vishnu
19 Enterprise Risk Management — a clarification of some common concerns
Madhusudan Acharyya
22 Capital at risk — a more consistent and intuitive measure of risk
David J. Cowen, David Abuaf
Articles
27 Economists’ hubris — the case of risk management
Shahin Shojai, George Feiger
37 Best practices for investment risk management
Jennifer Bender, Frank Nielsen
45 Lessons from the global financial meltdown of 2008
Hershey H. Friedman, Linda W. Friedman
55 What can leaders learn from cybernetics? Toward a strategic framework for managing complexity and risk in socio-technical systems
Alberto Gandolfi
61 Financial stability, fair value accounting, and procyclicality
Alicia Novoa, Jodi Scarlata, Juan Solé
77 Can ARMs’ mortgage servicing portfolios be delta-hedged under gamma constraints?
Carlos E. Ortiz, Charles A. Stone, Anne Zissu
87 The time-varying risk of listed private equity
Christoph Kaserer, Henry Lahr, Valentin Liebhart, Alfred Mettler
95 The developing legal risk management environment
Marijn M.A. van Daelen
103 Interest rate risk hedging demand under a Gaussian framework
Sami Attaoui, Pierre Six
109 Non-parametric liquidity-adjusted VaR model: a stochastic programming approach
Emmanuel Fragnière, Jacek Gondzio, Nils S. Tuchschmid, Qun Zhang
117 Optimization in financial engineering — an essay on ‘good’ solutions and misplaced exactitude
Manfred Gilli, Enrico Schumann
123 A VaR too far? The pricing of operational risk
Rodney Coleman
131 Risk management after the Great Crash
Hans J. Blommestein
As the variety and volume of shocks facing the financial system increase, the demand for those who have a thorough understanding of risk, its measurement, and its management will continue to rise. This has certainly been the case over the past couple of years. The financial system came very close to collapsing, and had it not been for the aggressive and immediate responses of the global monetary authorities, many of today’s surviving institutions might also have disappeared, to say nothing of the moral hazard debate that this has engendered.
In order to avoid similar events in the future, the world’s leading executives and policymakers have come together to develop means of predicting, and perhaps preventing, similar crises from engulfing the financial system, sadly with each camp sitting on its own side of the debate. While these efforts are to be commended, it is not clear whether they target the real reasons behind the current crisis, or whether such preventive measures would be effective against future crises, which will have their own specificities and implications.
From our perspective, these efforts will be in vain unless executives, policymakers, and developers of scientific models come together for an open debate about the true causes and implications of the current crisis and the actions needed to rectify them, since many of the main culprits are still present within the system. This is the reason we established the Cass-Capco Institute Paper Series on Risk in collaboration with Cass Business School: to focus on the real issues that impact our industry, not just those that get press attention.
These issues are covered in a number of the articles in this edition of the paper series, which we are confident will be of interest, and of genuine benefit, to our executive readers.
At Capco we strive to bring together leading academic thinking and business application, as either in isolation is of limited benefit to anyone. We hope that this issue of the paper series has achieved that objective.
Good reading and good luck.
Yours,
Rob Heyvaert,
Founder and CEO, Capco
Dear Reader,
The past year or so has been very bruising for executives sitting at the helms of the world’s major financial insti-
tutions. Many came very close to failing, and quite a few in fact did. Many of the most respected global financial
brands have now disappeared.
There have been numerous reasons put forward for the current crisis, but most have focused on the most obvious
causes and not necessarily on the underlying issues that are most pertinent. The aim of this edition of the Cass-
Capco Institute Paper Series on Risk is to give adequate attention to those issues that have fallen under the radar,
mainly through neglect by headline-grabbing journalists or because press attention has been focused elsewhere.
The papers address the damage that securitization has caused through the disappearance of due diligence among mortgage originators. They also examine why most, if not all, risk measurement and management models look great on paper but are of very little value to the institutions that need them most.
Risk management had in recent years reached the status of a science. Yet if anything, the current crisis proved
that it is anything but. There are a myriad of models and formulas that have been found wanting during the recent
crisis. In fact, none of the quantitative models worked. Those institutions that survived did so simply because of
the actions of the monetary authorities of the countries in which they were based.
We still have a long way to go before the models developed at academic institutes that view economics and finance
as science rather than art can actually be put to effective practice within financial institutions. The gap between
theory and practice remains as large today as it has ever been.
Similarly, in the field of risk management the gap between hoped-for and actual effectiveness has also remained
quite large. Financial institutions will have a very tough time instituting effective risk management policies so long
as they are dealing with spaghetti-like IT systems that simply don’t communicate with one another. This situation
prevents management from getting a holistic picture of the different pockets containing highly correlated risks.
This edition of the paper series highlights these issues so that actions can be taken in time for the next inevitable
crisis. It insists that financial executives wake up to the fact that investing in new and more advanced models is of
little benefit if they are not aware of all the risks they face. Those at the helm need to get their IT in order before
any of the models can be applied. We cannot continue putting the cart before the horse.
We hope that you enjoy this issue of the Journal and that you continue to support us in our efforts to bridge the gap between academic thinking and business application by submitting your ideas to us.
On behalf of the board of editors
The dream of effective ERM
1 Taibbi, M., 2009, “The great American bubble machine,” Rolling Stone magazine, issue 1082-83, July 2.
2 See: “Don’t dismiss Taibbi: what the mainstream press can learn from a Goldman takedown,” The Audit, posted on the CJR website on August 08, 2009.
3 Based on the ranking system developed in: Carter, R. B., F. H. Dark, and A. K. Singh, 1998, “Underwriter reputation, initial returns and the long-run performance of IPO stocks,” Journal of Finance, 53:1, 285-311.
4 Spinning involves the underwriter allocating underpriced IPOs to favored executives, the quid pro quo being a promise of future business. Laddering involves allocations conditioned upon buyers agreeing to purchase additional shares of the IPO in the aftermarket. The SEC sanctioned various underwriting firms including Goldman Sachs, which paid a fine of U.S.$40 million without admitting wrongdoing. The firm also reportedly paid U.S.$110 million to settle an investigation by New York state regulators.
5 Taibbi quotes Professor Jay Ritter, a leading IPO researcher at the University of Florida: “In the early eighties, the major underwriters insisted on three years of profitability. Then it was one year, then it was a quarter. By the time of the Internet bubble, they were not even requiring profitability in the foreseeable future.”
6 See: Jaggia, S., and S. Thosar, 2004, “The medium-term aftermarket in high-tech IPOs: patterns and implications,” Journal of Banking and Finance, 28, 931-950.
For those who have been meditating at a Buddhist monastery over
the last year, the giant squid in the title refers to Goldman Sachs
Inc. Matt Taibbi writing in Rolling Stone magazine1 characterizes
the investment bank as: “a great vampire squid wrapped around
the face of humanity, relentlessly jamming its blood funnel into
anything that smells like money.” The article is (to put it mildly)
a colorful polemic hurled at Goldman Sachs accusing it of essen-
tially creating and profiting from various financial bubbles since the
onset of the Great Depression.
Taibbi’s rhetoric was not well received in the mainstream business press. Reactions were dismissive (along the lines of: simplistic analysis; he’s not a real business reporter!), indignant (basically objecting to the article’s over-the-top language), or defensive (all of them do it, why pick on Goldman?), but seemed not to engage with the substance of Taibbi’s accusations. In fact, an ‘audit’ done by the Columbia Journalism Review’s Dean Starkman largely validates Taibbi’s substantive claims2.
One of these claims relates to the tech sector bubble of the late
1990s. With the benefit of hindsight, it is clear that many of the
high-tech IPOs launched in this period were based on dubious
valuations. Goldman was certainly active in IPO underwriting and
had the highest ranking in terms of underwriter reputation [Carter
et al. (1998)]3. The firm also had its share of high-profile misfires
(for example: Webvan, Etoys) when the dot.com mania peaked and
crashed in the spring of 2000. Goldman was also arguably involved
in activities such as spinning and laddering4; the latter has the
effect of artificially pumping up the stock prices of IPO firms in the
aftermarket. But was Goldman a particularly egregious offender in
a climate in which underwriting best practices had slipped precipi-
tously?5 And how should this be evaluated?
As it turns out, we were involved in researching high-tech firms
that had an IPO in the late 1990s. We found significant positive
momentum and sharp reversals within a six-month aftermarket
window6. When we were doing the study, underwriter reputation
was not a central concern — it was only one of several control vari-
ables we employed. However, in the wake of the Taibbi article and
the considerable controversy it has generated, we thought it would
be interesting to revisit our sample to see if we could uncover any
interesting facts related to underwriter identity.
The set up
Our primary sample was drawn from ipo.com, which lists the universe of U.S. IPOs with dates, offer prices, etc., classified in a
number of categories. We chose all IPOs from January 1, 1998
through October 30, 1999 in the following sectors: biotechnol-
ogy, computer hardware, computer software, electronics, Internet
services, Internet software, and telecommunications. This resulted
in a sample of 301 high-tech IPO firms. We stopped at October 30,
1999 because we wanted to study medium-term aftermarket price
behavior beyond the IPO date while excluding the market correc-
tion that commenced in 2000 [Jaggia and Thosar (2004)].
In Figure 1, we provide selected descriptive statistics relating to our
sample broken down by three lead underwriter reputation tiers:
top, medium, and bottom. The top-tier underwriters are those that
received the highest score of 9 in the Carter et al. (1998) rank-
ing system. These are: Goldman Sachs, Credit Suisse First Boston
(renamed Credit Suisse), Hambrecht & Quist, and Salomon Smith
Barney. The medium-tier underwriters are those with a score
between 8.75 and 8.99, while the bottom tier includes all firms with
a score below 8.75.
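The three-tier grouping just described can be sketched in code. The cutoffs follow the definitions above; the two non-top-tier underwriter names and their scores below are purely hypothetical placeholders, not from our sample.

```python
# Sketch (not the paper's own code) of mapping Carter et al. (1998)-style
# reputation scores to the three tiers used in Figure 1.
def reputation_tier(score: float) -> str:
    """Classify an underwriter by its reputation score."""
    if score >= 9.0:      # highest score of 9 -> top tier
        return "top"
    if score >= 8.75:     # 8.75-8.99 -> medium tier
        return "medium"
    return "bottom"       # below 8.75 -> bottom tier

scores = {
    "Goldman Sachs": 9.0,
    "Credit Suisse First Boston": 9.0,
    "Hambrecht & Quist": 9.0,
    "Salomon Smith Barney": 9.0,
    "Hypothetical Underwriter A": 8.80,  # placeholder score
    "Hypothetical Underwriter B": 8.10,  # placeholder score
}
tiers = {name: reputation_tier(s) for name, s in scores.items()}
```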
There do not appear to be any obvious differences between IPO
firms represented by top-tier underwriters and the others in terms
of objective quality criteria. If anything, metrics such as: the level
of initial underpricing, percentage of profitable firms, and firm age
A partial defense of the giant squid
Sanjiv Jaggia, Professor of Economics and Finance, California Polytechnic State University
Satish Thosar, Professor of Finance, University of Redlands
Variables | Top-tier | Medium-tier | Bottom-tier
Cumulative market-adjusted return (CMAR) at the end of six months | 44.71 (140.91) | 19.95 (101.88) | -16.55 (57.23)
Percentage change from offer to market open price | 65.74 (72.26) | 68.38 (105.15) | 43.66 (73.18)
Percentage of firms with positive net income in pre-IPO year | 21.11 (41.04) | 16.90 (37.61) | 24.64 (43.41)
Revenue in pre-IPO year ($ millions) | 83.71 (285.64) | 50.71 (200.94) | 75.84 (397.87)
Offer size ($ millions) | 134.48 (176.86) | 126.97 (490.20) | 48.60 (47.48)
Percentage of firms with green-shoe (over-allotment) option | 64.44 (48.14) | 54.23 (50.00) | 55.07 (50.11)
Percentage of firms belonging to the Internet services or software categories | 61.11 (49.02) | 60.56 (49.04) | 59.42 (49.46)
Firm age at IPO date (years) | 4.45 (3.40) | 5.61 (5.88) | 5.90 (6.03)
Number | 90 | 142 | 69
Notes:
1. Standard deviations are in parentheses beside the sample means.
2. Top-tier underwriter firms are those assigned the highest point score of 9 in the Carter et al. (1998) system. This category includes Goldman Sachs. Medium-tier firms are those with a score of 8.75-8.99. Bottom-tier are all those below 8.75.
3. A green-shoe provision gives the underwriter the option to purchase additional shares at the offer price to cover over-allotments. Presence of the provision indirectly increases underwriter compensation.
Figure 1 – Selected descriptive statistics for IPO firms classified by underwriter reputation
7 Let Pi1 represent the day 1 open price of the ith firm and let Pm1 be the corresponding level of the market (Nasdaq) index. Similarly, Pit and Pmt represent the open prices at day t of the ith firm and the market respectively. The CMAR of the firm at time t is calculated as: CMARit = [Pit/Pi1] ÷ [Pmt/Pm1] − 1. The time in question does not refer to calendar time, but to the time from the IPO date.
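The CMAR calculation in footnote 7 can be sketched as a short function. This is only an illustrative sketch; the function name and the price figures in the usage line are ours, not from the study.

```python
# Minimal sketch of the CMAR series from footnote 7. Price series are
# indexed in event time (trading days since the IPO), not calendar time.
def cmar(firm_open_prices, market_open_prices):
    """Return the cumulative market-adjusted return series:
    CMAR_t = (P_t / P_1) / (Pm_t / Pm_1) - 1."""
    p1 = firm_open_prices[0]     # day-1 open price of the firm
    pm1 = market_open_prices[0]  # day-1 level of the market (Nasdaq) index
    return [(p / p1) / (pm / pm1) - 1.0
            for p, pm in zip(firm_open_prices, market_open_prices)]

# A firm that doubles while the market gains 25% ends with a CMAR of
# roughly 0.60 (60 percent), within floating-point tolerance:
series = cmar([10.0, 12.0, 20.0], [1000.0, 1000.0, 1250.0])
```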
seem to favor the bottom-tier group. The average IPO offer size is
considerably larger for the top-tier compared to the bottom-tier
group, which is not surprising. One expects that superior reputation
carries with it the ability to tap more extensive investor networks
to raise larger chunks of capital at a given time. Also worth noting
is that the top-tier group has the highest proportion of contracts
with a green-shoe option. A green-shoe provision gives the under-
writer the option to purchase additional shares at the offer price to
cover over-allotments and thereby indirectly increases underwriter
compensation.
A striking and somewhat surprising difference is in the cumula-
tive market-adjusted returns (CMAR) registered by each group.
To study this in greater detail, we graph (Figure 2) the CMAR for
each group using the post-IPO day 1 open price as the base through
trading-day 125 or approximately six months after the IPO date7.
There are visual and arguably economically significant differences
across groups. The bottom-tier group (green) immediately slips into
negative territory and stays there for a six-month CMAR of -16.5
percent. The medium (red) and top (blue) groups display strong
positive momentum and reach a CMAR peak of 40.5 percent (at
day 114) and 61.5 percent (at day 112) respectively. The CMAR then
tapers off possibly due to the onset of lock-up expiration pressures
and at the end of six months ends up at 20.0 percent and 44.7 per-
cent for the medium and top groups respectively.
This can be viewed in a number of ways. If the market is behav-
ing rationally and recognizing ‘true value’ as time elapses, the
45 percent CMAR displayed by the top group represents serious
underestimation of the initial IPO offer prices. It represents in
effect a wealth transfer from the founders and seed financiers of
the firm to outside investors and this is over and above the initial
underpricing of 66 percent for this group (Figure 1). Under normal
circumstances, the underwriters could be justly accused either of
incompetence in terms of valuation or extorting their IPO clients to
enrich themselves and their favored customers.
On the other hand, if informed investors recognize that tech sector
stock prices are inflated, unsustainable, and are in the market to
exploit the ‘greater fool,’ the CMAR patterns may reflect the ability
of certain underwriters through their analyst coverage, laddering
arrangements, etc., to not only stabilize but pump up prices in the
aftermarket until the wealth transfer from uninformed to informed
investors is duly complete.
We decided that a closer disaggregated look at the top-tier group
might be useful.
The defense
In Figures 3 and 4, we report descriptive statistics and CMAR patterns for the IPOs underwritten by top-tier firms. Hambrecht &
Quist and Salomon Smith Barney are combined so as to represent
a reasonable sample size; Credit Suisse and Goldman Sachs are
reported separately.
A few metrics are worth noting. More firms represented by Goldman
(27 percent) were profitable in their pre-IPO year than Credit Suisse
Note: Top-tier underwriter firms are those assigned the highest point score of 9 in
the Carter et al. (1998) system. In our sample, they represent 90 firms. Medium-tier
are those with a score of 8.75 – 8.99 representing 142 firms.
Bottom-tier are all those below 8.75 representing 69 firms.
Figure 2 – Cumulative market-adjusted returns (CMAR) for IPO firms grouped by lead
underwriter reputation
[Chart: CMAR in percent (vertical axis, -30 to 70) against trading days since the IPO (horizontal axis, 0 to 125), with one line each for the top-tier, medium-tier, and bottom-tier groups.]
Variables | CS | GS | Rest
Cumulative market-adjusted return (CMAR) at the end of six months | 80.09 (165.39) | 27.51 (131.84) | 30.55 (121.06)
Percentage change from offer to market open price | 74.34 (75.17) | 85.67 (81.23) | 26.63 (28.61)
Percentage of firms with positive net income in pre-IPO year | 14.29 (35.64) | 27.03 (45.02) | 20.00 (40.83)
Revenue in pre-IPO year ($ millions) | 41.34 (147.31) | 37.32 (50.56) | 199.81 (504.84)
Offer size ($ millions) | 101.97 (103.99) | 155.59 (222.89) | 139.63 (165.43)
Percentage of firms with green-shoe (over-allotment) option | 67.86 (47.56) | 86.47 (34.66) | 28.00 (45.83)
Percentage of firms belonging to the Internet services or software categories | 71.43 (46.00) | 59.46 (49.77) | 52.00 (50.99)
Firm age at IPO date (years) | 4.01 (2.13) | 5.03 (3.98) | 4.07 (3.63)
Number | 28 | 37 | 25
Notes:
1. Standard deviations are in parentheses beside the sample means.
2. Top-tier underwriter firms are those assigned the highest point score of 9 in the Carter et al. (1998) system; CS = Credit Suisse, GS = Goldman Sachs, Rest = Hambrecht & Quist and Salomon Smith Barney.
3. A green-shoe provision gives the underwriter the option to purchase additional shares at the offer price to cover over-allotments. Presence of the provision indirectly increases underwriter compensation.
Figure 3 – Selected descriptive statistics for IPO firms classified by top-tier underwriters
8 This may reflect Goldman’s greater clout even within the top-tier group.
(14 percent) and the rest (20 percent). Goldman firms were also
marginally longer in business before the IPO date. On the other
hand, Goldman firms were subject to greater initial underpricing on
average. They also had a higher average offer size and were more
likely to be subject to a green-shoe provision8. But the most sug-
gestive statistic in our view is the six-month CMAR. The Goldman
group’s CMAR at 28 percent is significantly lower than that of the
Credit Suisse group which racked up 80 percent. Thus aftermarket
momentum (or manipulation if one were to take the cynical view) is
lowest for firms represented by Goldman.
This is borne out by the CMAR patterns in Figure 4. The red line
representing Goldman firms is virtually flat in the immediate after-
market, when most purported price pumping takes place. The blue
(Credit Suisse) and green (Hambrecht & Quist and Salomon Smith
Barney) lines suggest higher levels of momentum and reversal
within a six-month period — more of a bubble within a bubble pat-
tern with the benefit of hindsight.
After all is said and done, the tech bubble is only one instance of a
series of such events in recorded history. And, while these events
result in a lot of wealth destruction, the firms left standing in the
end usually signify technological and productivity gains to society,
which may in the long-run exceed the Schumpeterian costs.
We do not profess to know how to execute such a cost-benefit analy-
sis. Instead, we decided to undertake an outlier analysis within our
sample. We carried out a case study-type analysis of the 20 firms
that registered a six-month CMAR of more than 100 percent and
were represented by top-tier lead underwriters. We were essentially
projecting ourselves back in time before the crash and picking a small
subset of the likeliest candidates for success. How did they perform
over the long-term? We traced the fortunes of these 20 firms from
their IPO date up until the present (August 2009). We examined
available financials, stock price performance, mergers, consolida-
tions etc. Several firms were targets of class-action lawsuits filed by
aggrieved stockholders claiming misstatements in the IPO prospec-
tus and the like. Our findings are summarized in Figure 5, which is
essentially a status report on each firm. We assigned each firm to one of the following categories, or letter grades if you will.
A These are all firms that have survived and thrived. In our judg-
ment, they all have successful business models and good pros-
pects going forward. An investor who bought shares soon after
the IPO date and held on to them till August 2009 would have
realized significant positive returns. Only four of the 20 firms
receive an A grade — three of these (Ebay, Juniper Network,
Allscripts) were lead underwritten by Goldman Sachs. The fourth
(F5 Networks) was underwritten by Hambrecht & Quist.
B The three firms in this category are viewed as viable ongoing
enterprises. There is a fair amount of within-group variation. For
[Chart: CMAR in percent (vertical axis, -20 to 120) against trading days since the IPO (horizontal axis, 0 to 125), with one line each for the CS, GS, and Rest groups.]
Note: CS = Credit Suisse representing 28 firms; GS = Goldman Sachs representing 37
firms; Rest = Hambrecht & Quist and Salomon Smith Barney together representing
25 firms.
Figure 4 – Cumulative market-adjusted returns (CMAR) for IPO firms grouped by top-
tier underwriters
IPO firm name (current or former ticker symbol) | CMAR % | Lead underwriter | Status
Infospace Inc (INSP) | 199.2 | Hambrecht & Quist | B
Art Technology Group (ARTG) | 255.4 | Hambrecht & Quist | B
F5 Networks Inc (FFIV) | 469.9 | Hambrecht & Quist | A
Inktomi Corp (INKT) | 142.7 | Goldman Sachs | C
Ebay Inc (EBAY) | 623.4 | Goldman Sachs | A
Viant Corp (VIAN) | 183.6 | Goldman Sachs | D
Active Software Inc (ASWX) | 248.0 | Goldman Sachs | C
Allscripts Inc (MDRX) | 103.9 | Goldman Sachs | A
Tibco Software (TIBX) | 182.8 | Goldman Sachs | B
Inet Technologies (INTI) | 119.0 | Goldman Sachs | C
Juniper Network Inc (JNPR) | 116.5 | Goldman Sachs | A
NetIQ Corp (NTIQ) | 141.9 | Credit Suisse | C
Appnet Systems Inc (APNT) | 158.4 | Credit Suisse | C
Commerce One Inc (CMRC) | 609.8 | Credit Suisse | C
E.Piphany Inc (EPNY) | 138.3 | Credit Suisse | C
Phone.com Inc (PHCM) | 141.4 | Credit Suisse | C
Software.com Inc (SWCM) | 237.3 | Credit Suisse | C
Tumbleweed Software Corp (TMWD) | 201.7 | Credit Suisse | C
Liberate Technologies (LBRT) | 487.2 | Credit Suisse | C
Vitria Technology (VITR) | 250.0 | Credit Suisse | C
Notes:
1. The above firms were represented by top-tier lead underwriters and experienced post-IPO six-month cumulative market-adjusted returns (CMAR) greater than 100 percent.
2. Status (August 2009) definitions: A = successful ongoing enterprises, with significant positive returns realized by early long-term investors; B = viable ongoing enterprises; C = merged, restructured, or otherwise consolidated, with significant impairment to early valuations; D = defunct.
Figure 5 – Current status of selected IPO firms launched during the dotcom bubble era
9 Jonathan A. Knee, senior managing director at Evercore Partners, in the New York Times, DealBook Dialogue, October 6, 2009.
10 Critics may point out that Goldman would likely have gone under (or at least taken large losses) if the U.S. taxpayer had not bailed out AIG and thereby its counterparties.
instance, Infospace (Hambrecht & Quist) has negative income
in its latest financial year but still has a market capitalization of
U.S.$293 million. Early post-IPO investors who held on to their
position would see a negative return. In contrast, Tibco Software
(Goldman) is profitable, has a current market capitalization of
U.S.$1.61 billion, and a P/E multiple of 27. The only reason Tibco
did not get an A grade is that early buy-and-hold investors would
register a negative stock return.
C The twelve firms in this group were severely impacted in the
tech sector crash of 2000. While a small number survive with
their original stock ticker symbol, none of these are profitable or
actively traded. Most have merged, restructured, or otherwise
consolidated. The common element is that early investors who
had not divested before the crash would have suffered signifi-
cant (if not quite total) losses. Goldman represented three firms
in this group.
D The one firm in this category (Viant; Goldman) declared bank-
ruptcy in 2003 and is essentially defunct.
Hambrecht & Quist represented only three firms (1 A; 2 Bs), all of
which survive and in aggregate delivered considerable value to
early investors. Goldman’s record is mixed with three As and a B
balanced out with three Cs and a D. Credit Suisse has the poorest
record in terms of our sample (9 Cs). None of the firms they rep-
resented were successful in weathering the tech sector shakeout.
Thus, even among the small subset of IPO firms represented by
top-tier underwriters and greeted with sustained enthusiasm by
investors, ex-post analysis reveals considerable variation in the
staying power of their business models.
Conclusion
A respected market observer recently commented: “When faced
with market euphoria, whatever its source, financial institutions
will always be confronted with the same stark choice: lower your
standards or lower your market share.”9
Goldman was certainly part of the general deterioration of underwriting standards, but our analysis reveals that they did represent some very good firms and, in terms of our CMAR analysis, were a reasonably responsible player in the IPO aftermarket. Perhaps their
quality control mechanisms were not quite so compromised. More
recently, they seem to have recognized the risks stemming from
subprime lending well ahead of their competitors, hedged with
some success, and have emerged from the financial crisis more or
less intact10. We doubt that Taibbi would set much store by this but
there it is.
Thou shalt buy ‘simple’ structured products only
Steven Vanduffel, Faculty of Economics and Political Science, Vrije Universiteit Brussel (VUB)
Structured products are a special type of investment product; they
appear to offer good value in two situations. The first is when you
can sell them with an attractive margin, such that the payoff pro-
vided at the end of the investment horizon T>0 is hedged, and not
of your concern. This method of value creation is possible for banks
and financial planners but is not really in reach of retail customers.
The second possibility consists of purchasing these instruments
and making sufficiently high investment returns with them. In this
note we claim that one must be extremely careful when one is
on the buyer’s side; the odds may go against you, transforming a
potential cash cow into a loss generator. We will elaborate on this
point further.
An obvious question people ask themselves when investing is how
to do it in an optimal way. Of course, everyone who prefers more
to less, which is akin to having an increasing utility curve [von
Neumann and Morgenstern (1947)], will want to invest the available
funds in a product that will ultimately provide the highest return.
Unfortunately an investor cannot predict with certainty which
investment product will outperform all others. However, ex-ante one may be able to depict, for each investment vehicle, all possible returns together with the odds of achieving them, hence determining the distribution function for the wealth that will be available at time T; this will be called the wealth distribution.
Hence, the quest for the best investment vehicle amounts to
determining the most attractive wealth distribution first. There
is, however, no such thing as a wealth distribution function that is
universally optimal; simply because optimality is intimately linked
to the individual preference structure, the utility curve, of the
investor at hand. Some people may prefer certainty above all, in which case putting the available money in a state deposit is the best one can do. Others may prefer a higher expected return
at the cost of more uncertainty and may find it appropriate to
invest in a fund that tracks the FTSE-100. For some people neither
of these options is satisfactory and they may wish to seek capital
protection while taking advantage of increasing markets as well.
For example, they may want to purchase a product that combines
the protection of capital with a bonus of 50% in case the FTSE-100
has increased in value during the entire investment horizon under
consideration. Others may find this too risky and may prefer a
product which protects their initial capital augmented with 50%
of the average increase (if any) of the FTSE-100 during the invest-
ment horizon.
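As a sketch of the two payoffs just described (hypothetical terms and index levels, chosen purely for illustration), both can be written down in a few lines; note that the second one depends on the whole path of the index, not just its final level:

```python
def payoff_bonus(s0, sT, bonus=0.5):
    """Capital protection plus a fixed 50% bonus when the index
    finishes the horizon above its starting level."""
    return s0 * (1.0 + bonus) if sT > s0 else s0

def payoff_average(s0, path, participation=0.5):
    """Capital protection plus 50% of the average increase (if any)
    of the index over the horizon -- note this one is path-dependent."""
    avg = sum(path) / len(path)
    return s0 * (1.0 + participation * max(avg / s0 - 1.0, 0.0))

s0 = 100.0                             # initial capital invested
path = [100.0, 110.0, 120.0]           # illustrative FTSE-100 levels
print(payoff_bonus(s0, path[-1]))      # 150.0: index up, bonus paid
print(payoff_average(s0, path))        # 105.0: 50% of the 10% average rise
```

In both cases the floor is the initial capital, which is why such products are marketed as "protected"; the cost of that floor is the question the rest of this note addresses.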
The different examples hint that building the optimal wealth dis-
tribution and corresponding investment strategy often consists
of combining several underlying products, such as fixed income
instruments, equities, and derivatives. In the context of this note,
such a cocktail of products will be further called a structured prod-
uct. Hence, using structured products one is able to generate any
desired wealth distribution, and in this sense there is no doubt that
structured products can offer good value.
Is this the end of the story? Not really, because once a convenient
wealth distribution has been put forward and a structured product
has been designed to achieve this, one may still raise the following
question: how can we obtain this wealth distribution at the lowest
possible cost? Of course, such a cost efficient strategy, if it exists,
would be preferred by all investors. But does it exist? Intuitively
one may think that two products with the same wealth distribution
at maturity should bear the same cost. Surprisingly this common
belief is absolutely untrue and careless design of a structured prod-
uct can cause a lot of harm.
cost efficiency in a complete single-asset market
Dybvig (1988) showed that in a so-called complete single-asset
market the most efficient way to achieve or build a wealth distribu-
tion is by purchasing ‘simple’ structured products. Here ‘complete-
ness’ refers to a financial market where all payoffs linked to the
single underlying risky asset are hedgeable or replicable [Harrison
and Kreps (1979)], and ‘simple’ refers to the feature that the payoff
can be generated using a (risk-free) zero-coupon bond and plain
vanilla derivatives (calls and puts).
Indeed, Dybvig showed that optimal payoffs only depend on the
value of the underlying asset at maturity not at intermediate times.
In other words, they should be path-independent. He also showed
that they should be non-decreasing in the terminal value of the
underlying risky asset. For example, if S(t) (0 ≤ t ≤ T) denotes the price process of the risky asset, then the exotic payoff S²(T) − 3 is a path-independent and non-decreasing payoff, whereas the payoff S(T/2)·S(T) is path-dependent.
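The distinction can be checked mechanically. In the sketch below (my own illustration, with paths sampled at t = 0, T/2, T), two price paths share the same terminal value, so the path-independent payoff agrees across them while the path-dependent one does not:

```python
def exotic_independent(path):
    """Path-independent: S(T)^2 - 3 depends only on the terminal value."""
    return path[-1] ** 2 - 3.0

def exotic_dependent(path):
    """Path-dependent: S(T/2) * S(T) also looks at the midpoint."""
    return path[len(path) // 2] * path[-1]

# Two paths sampled at t = 0, T/2, T with the same terminal value S(T) = 2.0
path_a = [1.0, 1.5, 2.0]
path_b = [1.0, 3.0, 2.0]
print(exotic_independent(path_a) == exotic_independent(path_b))  # True
print(exotic_dependent(path_a) == exotic_dependent(path_b))      # False
```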
Moreover, since these path-independent payoffs can be approxi-
mated to any degree of precision by a series of zero-coupon
bonds and calls and puts, the logical conclusion of Dybvig’s work
would indeed be that proper investments only consist of simply
purchasing the appropriate proportions of bonds and plain vanilla
derivatives (calls and puts) written on the underlying risky asset.
Path-dependent, thus complex, structured products are then to
be avoided. We remark that Cox and Leland (1982, 2000) already
showed (using other techniques) that optimal payoffs are necessar-
ily path-independent but this result is more limited because it only
holds for investors that are risk averse, whereas Dybvig’s result
holds for all investors (assuming they all prefer more to less).
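Dybvig's 'simple' building blocks can be made concrete. The sketch below (my own illustration, not code from the cited papers) statically replicates a path-independent payoff on a strike grid using a zero-coupon bond position plus a portfolio of calls; the replication matches the payoff exactly at the grid points and its piecewise-linear interpolant in between, and a finer grid tightens the approximation:

```python
def call(s, k):
    """Plain vanilla call payoff at maturity."""
    return max(s - k, 0.0)

def replicate(payoff, strikes):
    """Static replication of a path-independent payoff: a zero-coupon
    bond with face value payoff(strikes[0]) plus calls whose weights
    are the slope increments of the payoff between grid points."""
    face = payoff(strikes[0])
    slopes = [(payoff(b) - payoff(a)) / (b - a)
              for a, b in zip(strikes, strikes[1:])]
    weights = [slopes[0]] + [s2 - s1 for s1, s2 in zip(slopes, slopes[1:])]
    return face, weights

def portfolio_value(face, weights, strikes, sT):
    """Bond plus calls; calls are struck at strikes[:-1]."""
    return face + sum(w * call(sT, k) for w, k in zip(weights, strikes))

# Replicate the exotic payoff S(T)^2 - 3 on a coarse strike grid.
payoff = lambda s: s * s - 3.0
strikes = [0.0, 1.0, 2.0, 3.0, 4.0]
face, weights = replicate(payoff, strikes)
print(portfolio_value(face, weights, strikes, 3.0))  # 6.0, exact at a grid point
```

Note the convex payoff yields only non-negative call weights after the first; a negative bond face (as here) simply means a borrowed amount at the risk-free rate.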
More general markets
One can argue that the above mentioned results are limited in the
sense that they only hold for a single-asset market, which is quite
restrictive, and also that they are contingent on a major assump-
tion, i.e., the claim that markets are complete.
Let us start with the first assumption and analyze to which extent
it can be relaxed to multi-asset markets. Assuming that the sub-
sequent periodical asset returns of the different risky assets are
(multivariate) normally and independently distributed (nowadays
also referred to as a multidimensional Black-Scholes market),
Merton (1971) already proved that risk averse investors exhibiting
a particular, so-called CRRA utility function would always allocate
their funds to a riskless account and a mutual fund representing
the so-called market portfolio; implying that their payoff at matu-
rity can be understood as a particular path-independent deriva-
tive written on an underlying market portfolio. Recently Maj and
Vanduffel (2010) have generalized this to include all risk averse
decision-makers. They have shown that optimal pay-offs are neces-
sarily path-independent in the underlying market portfolio. Hence,
in a multidimensional Black-Scholes market, the only valuable struc-
tured products are those which can be expressed as a combination
of a fixed income instrument and plain vanilla derivatives that are
written on the underlying market-portfolio.
Regarding the assumption of normally distributed returns, it is well-
known that this is a false assumption in general. See, for instance,
Madan and Seneta (1990) for evidence, but also look at Vanduffel
(2005), where it is argued that this assumption is also dependent
on the length of the investment horizon involved. Nevertheless, a
widely accepted paradigm to describe asset returns involves the
use of Lévy processes. Hence, let us assume that this is actually
being used by market participants, and also that they agree to use
the so-called Esscher transform (exponential tilting) to price deriva-
tives. Vanduffel et al. (2009a) have shown that optimal structured products are necessarily path-independent, providing further evidence against the use of complex (path-dependent) structures.
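For intuition, here is a minimal sketch of the Esscher tilt under the simplifying assumption of normally distributed log-returns (toy parameters of my own choosing); in that special case the martingale condition M(h+1)/M(h) = e^r solves in closed form and the tilted drift collapses to the familiar Black-Scholes risk-neutral drift:

```python
def esscher_h(mu, sigma, r):
    """Esscher parameter h: the exponential tilt f_h(x) ~ exp(h*x) f(x)
    is chosen so the discounted asset is a martingale under the tilted
    measure, i.e., M(h + 1) / M(h) = e^r with M the MGF of the log-return.
    For X ~ N(mu, sigma^2) this has the closed-form solution below."""
    return (r - mu - 0.5 * sigma ** 2) / sigma ** 2

mu, sigma, r = 0.08, 0.20, 0.03   # toy annualized drift, volatility, rate
h = esscher_h(mu, sigma, r)       # ~ -1.75: a downward tilt of the density
# Under the tilt, X ~ N(mu + h*sigma^2, sigma^2): the drift becomes
# r - sigma^2/2, i.e., exactly the Black-Scholes risk-neutral drift.
tilted_drift = mu + h * sigma ** 2
print(abs(tilted_drift - (r - 0.5 * sigma ** 2)) < 1e-12)  # True
```

For genuine Lévy models (variance gamma, etc.) the same martingale condition is solved numerically rather than in closed form, but the tilting mechanics are identical.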
final remarks
In this note we have given some theoretical evidence that from
a buyer’s point of view the only (financially) valuable structured
products are path-independent and thus ‘simple.’ While compli-
cated structured products may provide some emotional value or
happiness to the investor they do not seem to be the right vehicles
for generating financial value. For illustrations of the different
theoretical results we refer to Dybvig (1988), Ahcan et al. (2009),
Vanduffel et al. (2009a,b), Maj and Vanduffel (2010), and Bernard
and Boyle (2010).
Let us also remark that besides their theoretical inefficiency, path-dependent payoffs have a practical cost disadvantage. Indeed, compared to plain vanilla products they also suffer from a lack of transparency and liquidity, creating the potential for their promoters to charge more for them than is fair.
We also remark that at this point we do not claim that in all theoretical frameworks path-dependent structures are value-destroying, and we leave it for future research to analyze to what extent the
different findings can be generalized and interpreted further. Our
conjecture is that, at least, structured products should be designed
‘in some particular way’ in order to be potentially optimal.
All in all, from a buyer’s perspective the only good structured prod-
ucts to invest in seem to be the simple ones and one can argue that
even these may be unnecessary. Indeed, using basic products such
as equities, property, and bonds one may be able to design a well-
balanced wealth distribution as well; hence avoiding all other bells
and whistles that are associated with derivatives.
Finally, note that the optimal design of structured products is a
research topic that has also been picked up by some other authors
including Boyle and Tian (2008) and Bernard and Boyle (2010).
References
• Bernard, C., and P. Boyle, 2010, “Explicit representation of cost efficient strategies,” working paper, University of Waterloo
• Boyle, P., and W. Tian, 2008, “The design of equity indexed annuities,” Insurance: Mathematics and Economics, 43:3, 303-315
• Cox, J. C., and H. E. Leland, 1982, “On dynamic investment strategies,” Proceedings of the seminar on the Analysis of Security Prices, 262, Center for Research in Security Prices, University of Chicago
• Cox, J. C., and H. E. Leland, 2000, “On dynamic investment strategies,” Journal of Economic Dynamics and Control, 24, 1859-1880
• Dybvig, P. H., 1988, “Inefficient dynamic portfolio strategies or how to throw away a million dollars in the stock market,” The Review of Financial Studies, 1:1, 67-88
• Harrison, J. M., and D. M. Kreps, 1979, “Martingales and arbitrage in multiperiod securities markets,” Journal of Economic Theory, 20, 381-408
• Madan, D. B., and E. Seneta, 1990, “The variance gamma (VG) model for share market returns,” Journal of Business, 63, 511-524
• Maj, M., and S. Vanduffel, 2010, “Improving the design of financial products,” working paper, Vrije Universiteit Brussel
• Merton, R., 1971, “Optimum consumption and portfolio rules in a continuous-time model,” Journal of Economic Theory, 3, 373-413
• Vanduffel, S., A. Ahcan, B. Aver, L. Henrard, and M. Maj, 2009b, “An explicit option-based strategy that outperforms dollar cost averaging,” working paper
• Vanduffel, S., A. Chernih, M. Maj, and W. Schoutens, 2009a, “A note on the suboptimality of path-dependent pay-offs for Lévy markets,” Applied Mathematical Finance, 16:4, 315-330
• Vanduffel, S., 2005, “Comonotonicity: from risk measurement to risk management,” PhD thesis, University of Amsterdam
• von Neumann, J., and O. Morgenstern, 1947, “Theory of games and economic behavior,” Princeton University Press, Princeton
1 www.opriskandcompliance.com
Enterprise friction — the mandate for risk management
Sandeep Vishnu, Partner, Capco
In today’s battered economy, few are willing to put in place any-
thing that might meddle with earnings potential. Even fewer are
willing to spend money on something that may offer only a theo-
retical return on investment. Behind the polite but forced smiles
and handshakes from executives, there is a silent accusation: risk
management dampens revenue and puts brakes on innovation. This
is a challenge faced by risk managers as they try to put in place
structures to guard against losses.
But risk management is not about playing it safe; it is about play-
ing it smart. It is about minimizing, monitoring, and controlling the
likelihood and/or fallout of unfavorable events caused by unpredict-
able financial markets, legal liabilities, project failures, accidents,
security snafus — even terrorist attacks and natural disasters. There
is always risk in business, and risk management should be designed
to help companies navigate the terrain.
Sure, risk management may at times call on companies to pull back
on the reins, and it certainly is not free. However, risk management
provides a counterpoint to enterprise opportunity — friction, if you
will — that not only avoids unnecessary losses, but enhances the
ability of organizations to respond effectively to the threats and
vulnerabilities to which they are exposed in the course of busi-
ness.
Friction is a much-maligned term. The connotations are more often
negative than positive: a retarding effect, in-fighting, etc. Even in
physics, it is characterized as a necessary evil. The fact that friction
allows us to walk properly is often overlooked. One can understand
the importance of friction simply by trying to walk on ice, where the
absence of friction causes one to slip and slide. By contrast, high
friction makes walking on wet sand inordinately difficult.
Imagine Michael Jordan without his Nikes. One might argue that a
stockinged Jordan might be less encumbered with the extra weight
and trappings of footwear. But few could argue that he would have
been as effective on the polished court that was his field of opera-
tions.
Achieving a proper balance of risk in strategic planning and opera-
tions for the enterprise is critical — especially in today’s environ-
ment, when resilience and agility in the face of uncertainty are as
important as the effective use of identified variables.
Now, even more than before, the goal of enterprise risk manage-
ment is to define and deliver the right level of friction, because
getting this calibration exactly right can help improve enterprise
agility. Too little friction, and a company could slip into dangerous
scenarios; too much friction, and a company could just get stuck.
Getting this right can not only drive corrective measures, but can
also serve as an effective counterbalance.
To create that necessary, well-balanced friction, companies need
to take some fundamental steps. In this report, we will examine the
three cornerstones of robust risk management — cornerstones that
will help win over the naysayers and more importantly, ensure that
companies have the most-efficient risk management programs in
place. But for this to work, we believe that:
■ Companies will need to change their view of risk management as a necessary evil, and elevate the role of this critical function within their organizations. Risk should have a strong voice at the management table, and risk mitigation should be given as much importance as risk taking.
■ Executives will need to design a new blueprint for strategic management that integrates risk management into every critical element of the enterprise and thereby drives a risk-sensitive culture.
■ Organizations will have to buttress the risk management infrastructures they already have in place, including data, analytics, and reporting.
History lessons: extending the time horizon of risk management
One of the toughest issues associated with implementing an effec-
tive risk management program revolves around effectively funding
the efforts. The challenge often boils down to cost versus benefit. In
the months and years that led to the housing market and mortgage
meltdown, credit crisis and subsequent recession, companies paid
less attention to risk management mainly because times were so
good for so long. Typically, managing risks was not an embedded
element in critical business processes; it was a bolt-on activity.
Consider a 2007 global risk information management survey
conducted by OpRisk & Compliance magazine1. Although a major-
ity of respondents said they were at least somewhat effective in
providing the right information to the right people at the right
time to meet the organization’s business requirements, the survey
indicated that many felt the information they had was not being
used effectively to create a risk management culture within their
organizations. That same survey also illuminated the ever-present
ROI hurdle. Only 9 percent of respondents said their firms were able
to trace the ROI of information management initiatives designed to
capture and manage vital corporate data.
The biggest problem with risk management is in the establishment
of its ROI. Risk management is primarily about loss avoidance, and
it is difficult to measure what has been avoided and thus has not
occurred. Efficiency and effectiveness metrics are often dwarfed by the magnitude of loss avoidance; however, while the former can be measured, the latter can only be estimated.
Executives are often heard saying, “We haven’t faced a serious
problem; why are we spending this much money?” It is a classic
Catch-22.
The ROI problem is compounded by the fact that risk management can sometimes act as a retardant to growth. Anything that could threaten a company’s profit margin — especially the cost to fund an initiative with fuzzy ROI metrics — is going to be suspect.
We suggest that firms refrain from looking at risk management
costs within quarterly financial reporting time frames. Analysis of
risk management’s return needs to be considered over much longer
time horizons.
Sometimes retardants are necessary for growth. With 20/20 hind-
sight, many contend that Wall Street should have invested more
strategically in risk management practices and infrastructure. If
risk profiles had been based on a 100-year historical record rather
than the standard 30-year window, a fair number of the subprime
and adjustable rate mortgage (ARM) loans processed during the
housing bubble and in the years leading up to the recession would
have been seen more clearly for what they became — toxic assets
that damaged the health of the financial industry and ultimately the
entire U.S. economy.
In all likelihood, many on Wall Street probably knew intuitively the
risks they were taking; they just chose to make a management
decision that the risk was acceptable because the top line return
was so attractive. We would not call it willful negligence, but we do
believe it was a measured decision recognizing that the benefit of
the upside potentially outweighed the likelihood of the downside. Of
course, we doubt anybody suspected that the downside would turn
out to be as severe as it has been. This phenomenon is exacerbated
when dealing with new products and structures [i.e., collateralized
debt obligations (CDOs) and mortgage-backed securities (MBS)],
which do not have the same experiential basis as traditional prod-
ucts such as fixed-rate mortgages.
While longer-term views are desirable for assessments, ROI calcu-
lations, etc., they are sometimes hard to achieve. Consequently,
enterprises should try to embed risk management in all ‘risk-taking’
activities and ultimately drive a risk-sensitive culture. This can be
facilitated by the increased use of risk-sensitive measures such
as risk-adjusted return on capital (RAROC), economic value added
(EVA), shareholder value added (SVA), etc. For example, within
investment banks this would mean that the desk heads are not just
responsible for P&L, but also for the amount of capital used to gen-
erate that P&L. Extending this sentiment further, incentive compen-
sation can also be based on RAROC or similar metrics. These met-
rics allow for a degree of normalization across the enterprise and
allow the board and senior management to determine investment
strategy and consistently evaluate returns. If such metrics and
management approaches gain broad support, then markets would
begin assessing enterprise performance through risk-adjusted
measures in addition to traditional top-line and bottom-line metrics.
Regulatory standardization and public availability of such informa-
tion has greater potential to drive a shift toward sustainable growth
versus short-term results.
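As a toy illustration of the normalization RAROC provides (hypothetical desk figures of my own, not from the survey cited above), a desk with the larger P&L can still be the weaker performer once the capital it consumes is counted:

```python
def raroc(revenue, costs, expected_loss, economic_capital):
    """Risk-adjusted return on capital: risk-adjusted net income
    divided by the economic capital consumed to earn it."""
    return (revenue - costs - expected_loss) / economic_capital

# Two hypothetical desks: desk B books more P&L but consumes
# far more capital, so it looks worse on a risk-adjusted basis.
desk_a = raroc(revenue=50.0, costs=20.0, expected_loss=5.0,
               economic_capital=100.0)   # 0.25
desk_b = raroc(revenue=90.0, costs=30.0, expected_loss=20.0,
               economic_capital=400.0)   # 0.10
print(desk_a > desk_b)  # True
```

Tying desk-head compensation to a metric of this shape, rather than to raw P&L, is precisely what makes it risk-sensitive.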
Fast forward to today, and risk management is still receiving short
shrift because so many companies are scrambling to make ends
meet. Few feel they have the financial resources or time to finance
and bolster existing risk management practices. It is once again a
Catch-22 situation:
■ When times are good, people do not want to pay attention to risk management because they are too busy making money.
■ When times are bad, people do not want to pay too much attention to risk management because they are already incurring losses, and do not want to spend more money.
In the end, risk management gets only lip service. That needs to
change.
striking a balance
Enlightened enterprises promote creative tension between strategy
and risk management, and put in place a set of checks and balances
to guard against the exploitation of short-term opportunities at the
expense of long-term viability. Failure to strike this balance can
have devastating consequences, as evidenced by Countrywide’s
demise in 2008. In 2005, it was the largest mortgage originator;
however, 19 percent of its loans were option ARMs, and of those, 91
percent had low documentation.
More specifically, strategic considerations and risk assessments
need to be made in tandem. There must be a dynamic — even sym-
biotic — interaction between these two perspectives. They should
be seen as two sides of the same coin — like the classic yin-yang
balance principle.
[Figure: enterprise strategy and enterprise risk management shown in yin-yang balance]
To effectively integrate risk considerations into the critical strate-
gic decision-making processes, organizations should incorporate
the following principles into every aspect of their management
philosophy.
promote a culture of resilience
Executives may well consider revisiting many of the major pillars of
their organization and refine critical processes by integrating risk
considerations into their enterprise architecture. Resilience and
agility should be primary goals of such efforts and should address
foundational elements such as data, as well as derived capabilities,
including analytics, and feedback loops driven through reporting.
Often organizations conduct risk assessments as a bolt-on activity.
But organizations that integrate resilience (and risk management
in general) into their culture in a granular manner stand a better
chance of not only mitigating risks more effectively, but also more
cost efficiently.
The agile software development process adopted by high-tech
organizations has demonstrated that integrating quality assurance
into the development process results in both higher-quality and
less-expensive final products. Checking for mistakes ‘after the fact’
is almost always more expensive.
Robust enterprise risk management (ERM) needs to leverage formal
structures — data, processes, and technology used for creating, stor-
ing, sharing, and analyzing information — as well as informal networks
represented by the communication and relationships both within and
without the risk management organization. Informal networks have
repeatedly shown their usefulness in identifying and mitigating fraud,
and often provide early warnings of potential tail events.
[Figure: formal structures surrounded by informal networks]
The interplay between formal structures and informal networks
is important because this allows risk managers to compensate
for shortcomings in one by using the other. But this requires the
right culture to be in place: one that encourages staff to ask tough
questions without fear of being seen as inhibitors to growth. Risk
identification should not have punitive consequences. A culture
of appropriately calibrated enterprise friction should be fostered.
Doing this would allow critical elements of the organization to
accelerate their pursuit of opportunities while knowing that they
have the perspective, and operational ability, to slow down, acceler-
ate, or change course because of an appropriate sensitivity to key
risk parameters.
Compensation is, and always has been, a key lever in determining
the nature of a corporate culture. Culture depends heavily on incen-
tives, which today are often skewed toward rewarding upside ben-
efits and not necessarily avoiding downside losses. Compensation
practices need to become more risk-sensitive, so that they reward
long-term value creation and not just short-term gains. Risk mitiga-
tion should be as important as risk-taking to drive the appropriate
culture.
[Figure: data, analytics, reporting, and governance as layers supporting resilience]
Data as a foundation for risk management
There is a growing consensus among risk managers across indus-
tries — from government, to financial services, to manufacturing
and health care — that the data upon which key organizational
decisions are made represent the foundational layer for enterprise
risk management (ERM). Bad data can have an immediate and nega-
tive impact at any point of the organization, but the downstream
impacts of bad data can snowball out of control.
Some data challenges, such as completeness and timeliness, are
harder to overcome than others. However, incorporating a risk
management perspective on the design of a robust data model
can help reduce inconsistency and inaccuracy, and drive overall
efficiency. This can help address the challenges that result from
the fact that data often exists in silos, making it difficult to get an
accurate view of a related set of information across these silos.
Wachovia’s write-down of the Golden West financial portfolio, which
stemmed largely from overreliance on poor data, offers an example
of disproportionate emphasis being placed on valuations rather
than on borrower income and assets.
Another challenge relates to inconsistent labels, which make it hard
to match customers to key metrics over a common information
life cycle. Different technological platforms (which help create the
silos in the first place) make aggregation and synthesis challenging.
Most organizations lack a clear enterprise-wide owner in charge of
addressing such data quality issues, which complicates their identi-
fication and remediation.
Analytical risks
Analytical frameworks help translate data into actionable informa-
tion. However, analytics should not just be simple characterizations
of data. They should be timely and insightful so that analysis can
enable appropriate actions. In the financial services industry, the
credit crisis demonstrated how neglected — or inappropriate —
analytical frameworks prevented organizations from identifying
knowable risks (i.e., flawed model assumptions) and illustrated why
key decision-makers were unable to break through the opacity of
others (i.e., lack of transparency into the risk of underlying assets
being traded in secondary markets, especially when it related to
second-order derivatives).
All too often, analytical frameworks emerge as simplistic char-
acterizations of the ‘real world’ that may not be able to convey
a complete risk profile. This is evidenced by the overreliance on
value-at-risk as a key risk metric in the recent financial crisis. The
dissolution of Lehman Brothers and the near collapse of AIG offer
good examples of the shortcomings of traditional analytics, which
were unable to adequately account for dramatic increases in lever-
age, counterparty risk, and capital impacts as markets and ratings
deteriorated.
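The VaR shortcoming mentioned above is easy to reproduce on synthetic data (a toy return series of my own construction): a 95% historical VaR computed over mostly quiet days is essentially zero and says nothing about the size of the one crash sitting in the tail, which a tail average such as expected shortfall does pick up:

```python
def historical_var(returns, level=0.95):
    """One-day historical VaR at the given confidence level,
    expressed as a loss (positive = money lost)."""
    losses = sorted(-r for r in returns)
    idx = int(level * len(losses))
    return losses[min(idx, len(losses) - 1)]

def expected_shortfall(returns, level=0.95):
    """Average loss over the days at or beyond the VaR index."""
    losses = sorted(-r for r in returns)
    tail = losses[int(level * len(losses)):]
    return sum(tail) / len(tail)

# 99 quiet days and one 25% crash.
returns = [0.001] * 99 + [-0.25]
print(historical_var(returns))      # -0.001: the 95% VaR sees only quiet days
print(expected_shortfall(returns))  # ~0.049: the tail average sees the crash
```

A risk report built solely on the first number would have flagged nothing before the crash day.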
Reporting deficiencies
Reporting is a multidimensional concept that does not necessarily
capture the dynamic nature of information presentation. Typically,
reporting has at least four major stakeholders, two external (regu-
lators and investors) and two internal (senior management, includ-
ing the board of directors, and line management).
A strong risk-information architecture is crucial to delivering the
right information to the right audience in a timely manner. It should
present salient information as a snapshot, as well as provide the
ability to drill down into the detail. Well-defined business usage will
help drive overall requirements, while integrated technology plat-
forms can help deliver the processing efficiency needed to manage
the volumes and timeliness of information presentation.
Reporting has often been segmented into regulatory reporting
and management reporting, directed toward specific compliance
requirements for the former and financial statements for the lat-
ter. The financial crisis highlighted the need for organizations in
many industries to develop both ad-hoc and dynamic reporting,
which not only meet compliance requirements, but also, and more
importantly, improve the decision-making process. Many organiza-
tions are coming to the conclusion that current architectures and
infrastructures might not necessarily facilitate easy achievement
of these requirements. For example, a March 2007 statement to
investors by Bear Stearns represented that only 6 percent of one
of its hedge funds was invested in subprime mortgages. However,
subsequent examination revealed that it was closer to 60 percent.
Governance imperatives
Governance has many definitions and flavors, which span the
strategic as well as the tactical. It is probably simplest to think of
governance as the way that an enterprise steers itself. This involves
using key conceptual principles to define objectives as well as moni-
toring the performance of processes to ensure that objectives are
being met.
Reporting, or information presentation, is the mechanism that
enables governance. Governance relies on this function to provide
timely and insightful information that allows executives to take
preventative and corrective action so that they can avoid imbal-
ance and tail events. For example, executives from Bear Stearns
and the SEC, which was providing regulatory oversight, failed to
recognize that risk managers at Bear Stearns had little experience
with mortgage-backed securities, where the greatest risk was con-
centrated. Reporting mechanisms were oriented toward capturing
and characterizing transactions and did not appropriately address
competencies and capabilities, thereby creating a knowledge gap.
Defining and facilitating the integrated management of different
risk types should become a primary activity for enterprise gover-
nance.
conclusion: where to go from here
Risk management is not new; most companies already have infra-
structure in place to help execute their risk management strategy.
The good news is that companies do not need to scrap what they
have got. Instead, firms need to enhance and buttress current risk
management infrastructures to drive the right level of enterprise
friction.
The first order of business for an organization seeking to reverse negative attitudes about risk management is to elevate its role. The CEO needs to be involved, along with boards of
directors and senior executives across the lines of business. Senior
management needs to define and then promulgate a shared set of
risk management values across the company. One positive outcome
of the recession is that risk management has greater executive
mindshare, and risk managers need to capitalize on that.
Specific goals for creating a risk management culture include:
■ Institutionalize risk management. Develop and articulate an explicit risk management strategy. Establish roles that reflect the organization’s risk management model.
■ Determine and define who has ownership over risk management issues and actions, and who will take on the roles established in the risk management model.
■ Nurture a culture of risk awareness and action. Include risk-based metrics in performance scorecards and operate a reward system that balances risk taking with risk mitigation.
■ Be pragmatic. Focus on business needs, such as compliance and shareholder value. Then attack those needs in bite-size portions to demonstrate success early and often.
In the end, creative tension between strategy and risk management
should be seen as a positive development in organizations. This
helps to ensure that short-term opportunities are not exploited at
the expense of long-term viability. However, these strategic consid-
erations and risk assessments should be made at the same time,
ensuring a symbiotic interaction between these two perspectives.
In summary, three components are critical to delivering enterprise
friction:
■■ Enterprise risk management should have a strong voice at the management table and should work in tandem with enterprise strategy across all enterprise activities.
■■ Formal risk management structures must be buttressed across data, analytics, reporting, and governance to help the enterprise achieve the appropriate level of resilience.
■■ Informal networks should be encouraged. These networks can evolve to fill the white space left uncovered by formal structures.
1 The author is grateful to John Fraser, Chief Risk Officer of Hydro One, for revision of the manuscript before submission.
Enterprise Risk Management — a clarification of some common concerns
Madhusudan Acharyya, The Business School, Bournemouth University1
There have been several discussions on the topic of Enterprise
Risk Management (ERM) both in practitioner and academic com-
munities. However, there still remains considerable confusion and
misunderstanding in the ongoing debate on this topic. This may be
because we have focused more on the application of the subject
rather than the causes of the confusion. This article articulates a
different understanding of ERM and provides clarifications of sev-
eral misunderstood issues.
It is understood that risk is universal and it has a holistic effect
on the organization. All organizational functions are exposed to
risk to various extents. In addition, there exist differences in risk
attitude both at individual and organizational levels. One of the
controversial areas of risk management is the classification of
risk. Risk is traditionally classified by professionals depending on
the phenomena they observe in their functions at various levels
of the management hierarchy (i.e., lower-medium-top). From
financial and economic perspectives, risks are often classified as
market (price, interest rate, and exchange risks), liquidity risks
(which involve interaction between the firm and the external
market) and credit risk. From the strategic perspective, risk can
be classified as systemic risk and reputational risk (which involves
policy and decision-making issues at the top management level).
From the operational perspective risk can be classified as fraud
risk and model risk (which involves human and process at the
organizational level). From the legal perspective risk can be clas-
sified as litigation risk, sovereignty risk, regulatory/compliance
risk, and so on. The list of risk categories is almost endless, which makes them difficult to understand. There exists
another understanding that attempts to classify the risks of a firm
from the various sources, i.e., both internal and external sources.
Organizations take risks knowingly and unknowingly, and risks are
also produced during the operations of the business. Some of the
risks cause massive failure and some do not. Moreover, some risks
are core to the business and some are ancillary. As such, the list of
significant risks differs extensively from one industry to another.
For a bank the core risks stem from lending activities (i.e., credit risk); for an insurance company the core risks lie in providing cover for insurable risks at the right price and offloading them through appropriate pooling and diversification. In addition, the
professional expertise of the leader also influences the risk man-
agement priorities of the organization. To some organizations risk management may be a matter of common sense; to others it is a subject of particular specialization.
On one hand, risk management is close to the general management
function, with particular attention to risk built on a management
process (i.e., identification, measurement, mitigation, and monitor-
ing). It is a matter of understanding the culture, growing aware-
ness, selecting priorities, and effective communication. On the
other hand, it is a tool to maintain the survival of the organization
from unexpected losses with the features of capturing potential
opportunities. Alternatively, in the latter view, risk is concerned
with the extreme events which are rare but have massive power
of destruction. Statisticians call them lower-tail risks: low probability and high severity, producing volatility in value creation. In sum, some view risk as a danger while to others taking
risks is an opportunity.
The meaning of ERM
So, what does ERM mean? It actually means different things to different people depending on their professional training, nature of
job, type of business, and the objective to be achieved. There exist
two types of definitions of ERM. From the strategic perspective,
ERM is to manage the firm’s activities in order to prevent the failure
of achieving its corporate objectives. From another perspective,
ERM is related to the successful operationalization of the corporate
strategy of the firm in the dynamic market environment, which
requires management of all significant risks in a holistic framework
rather than in silos. It is argued that ERM is not to deal with the risks
that are related to the day-to-day business functions of the firm
and are routinely checked upon by lower or line level management.
Here is the key difference between business risk management and
ERM. It is evident that practitioners, at least in the financial industry, are developing ERM to deal with unusual risks, which statisticians call outliers or extreme events. Enterprise risk is
a collection of extraordinary risks with surprising potential that can threaten the survival of the business. Indeed, overlooking
the less vulnerable areas, which currently might seem less risky,
is dangerous as they may produce severe unexpected risks over a
period of time. Consequently, a comparison and differentiation of
the key significant risks with the less significant risks is an inherent
issue in ERM. A continuous process of selecting key significant risks
and the relevant time frame are essential elements of identification
of enterprise risks. Notwithstanding, whether ERM is a specialized
branch of [corporate] finance or an area of general business man-
agement is still an issue of debate in both practitioner and academic
communities.
The evolution of ERM
How did ERM evolve? In some cases it was the internal initiatives
of the organizations and in some others the motivation was exter-
nal (i.e., regulations). Indeed, the regulators and rating agencies
play an observational (or monitoring) role in the business journey
of financial firms. They are there to maintain the interest of the
customers and investors. Theoretically, the innovation remains
within the expertise of the firm and most [large] corporations, in
essence, want to get ahead of the regulatory curve to maintain
their superior role in the competitive market. It would be frightening for the market if the goals of regulators and rating agencies in terms of risk management were not aligned with the business goals of corporations. In that case, at least two things could happen —
first, the creation of systemic risk and second, an increase in the
cost of the product that is often charged to the customers. It is
interesting to see if the ERM initiatives of regulators and rating
agencies influence the firm’s internal (or actual) risk manage-
ment functions. Nevertheless, organizations, in particular those
that have large scale global operations, should be given enough
flexibility to promote innovations. Otherwise, the current move-
ment of ERM of financial firms will be limited to the boundary of
regulators and rating agencies’ requirements. On the other hand,
it is true that strict (or new) regulations trigger innovations mainly
in the areas of governance. Interestingly, the introduction of a principles-based regulatory approach is a positive indication.
The uniqueness of ERM
Uniqueness is an important aspect of ERM. Since the risks of each firm (and industry) differ from those of other firms, it is important to view
ERM from a specific firm or industry perspective. Take the example
of the insurance business. The key sources of risk of a typical
insurance company are underwriting, investment, and finance/
treasury functions. Similar to other industries, insurers also face
several types of risks, such as financial, operational, strategic, haz-
ard, reputational, etc. However, the risks are traditionally seen in
silos, and the view of an underwriter or an actuary on risk is very different from that of the investment and finance people, and vice versa. In the banking world, the view of a trader on risk is very different from that of the lenders. The risk profile of an investment
bank and commercial bank will be different. Consequently, some
common questions are emerging in both practitioner and academic
communities in recognizing ERM. Does ERM seek a common view of
risks of all these professionals? Is it necessary? Is it possible? The
short answer is ‘indeed not.’ So, what does a common language of
risk mean? In my view, it means that everybody should have the
capability of assessing the downside risk and upside opportunity of
their functions and making decisions within their position in terms
of the corporate objectives of the firm. This understanding places
the achievement of corporate objectives and strategic decision-
making at the heart of ERM. Alternatively, employees should have
the ability to judge the risk and return of their actions in a proac-
tive fashion, judging the implication of their functions on the entire
organization (i.e., another department/division), and communicat-
ing their concerns across the firm. In practice, a ‘group risk policy,’
which is the best example of such a common risk language, provides
valuable guidance.
Value of ERM
The value proposition of ERM is another disputed area. What is the
objective of ERM? Why should a firm pursue ERM? In reality, the
goal of a firm is to create value for its owners. Consequently, maxi-
mization of profit is the overriding objective of an enterprise, which,
in modern terms is called creation of [long-term] shareholder
value. This is fully aligned with the expectation of the sharehold-
ers of a firm. Within this justification, senior executives are paid
commensurate to their risk taking and managing capabilities (an
issue of agency theory). There remains a lot of speculation as to
the benefits of ERM, such as value creation for the firm, securing
competitive advantage, reducing the cost of financial distress, low-
ering taxes, etc. In addition, there remains analysis of the benefits
of risk management from ex-ante (pre-loss) and ex-post (post-loss)
situations. However, the recent 2007 financial crisis demonstrated
the failure of several large organizations that were believed to have
ERM in place.
Risk ownership
A further unresolved issue of ERM is the risk ownership structure
within the organization. Take the example of a CEO. How does he/
she view risks of the firm? How much risk of the firm does he/she
personally hold? In fact, the CEO is (at the executive level) the ultimate owner of the risk of the firm going bust (i.e., survival).
A relevant question for risk ownership is who else, along with the
CEO, owns and constantly monitors the total risk of the firm?
Although at the upper level it is the board of directors, it might be too late to get them involved in dealing with risks that have already caused irreparable damage to the organization.
In essence, the CEO is the only person who takes the holistic view
of the entire organization. Consequently, it is important to support
ERM by creating risk ownership across the various levels of man-
agement hierarchy within the organization.
Risk appetite and tolerance
How much risk should an organization take? This is often referred
to as risk appetite or level of risk tolerance. This needs a bottom-
up assessment of risk with the integration of several key risks that
exist within the firm. Moreover, it is directly linked to the corporate
objectives of the firm which, in essence, means where the firm
wants to be in a certain period of time in future. Certainly, it is not
limited to tangible risks of the firm, those associated with the capi-
tal market variables, such as asset and liability risks. In essence, it
includes a lot of intangible issues, such as the culture of the firm,
the expertise of the people who drive the business process, the
market where the firm operates, and the future cash flows that the
firm wants to produce. In fact, the appetite for risk is an essential
element of formulating a firm’s corporate strategy. Indeed, it is dan-
gerous to rely solely on statistical models that generate numbers
where the scope for including managerial judgment is limited.
challenges of ERM
How can the ERM objectives be achieved? Should we prefer the
mathematical approach? Indeed, a mathematical treatment (or
extrapolation) of risk (i.e., quantitative modeling) is necessary to
transform ideas into actions; but the limitation of mathematics,
as a language, is that it cannot transform all ideas into numerical
equations. This is equally true for the progress of ERM, as it is still
going through the transition period of getting to maturity. Indeed,
risk involves a range of subjective elements (i.e., individual experi-
ence, judgment, emotion, trust, confidence, etc.) and ignoring these
behavioral attributes will likely lead to failure in adoption of ERM.
It is important to remember that beyond mathematical attempts to
theorize risk effects/impacts, risk management is about processes
and systems that involve human understanding and actions. Neither approach alone is sufficient to handle the enterprise risk of the firm; an effective ERM must balance both in a common framework. ERM is truly a multidisciplinary subject.
The role of the CRO
Who shares the responsibility of the CEO in relation to risk? The
common practice is to have a ‘risk leadership team’ equivalent
to a ‘group risk committee’ comprising the head of each major
function. Theoretically, this team is supported by both technical
and non-technical persons; hence there could be communication problems, since they might speak different languages of risk.
Consequently, there should be somebody responsible for coordina-
tion (or facilitation) in order to maintain the proper communication
of risk issues across the organization in terms of the require-
ments of the corporate objectives. This person, at least theoreti-
cally, should be technically familiar with all the subject areas (which
practically appears impossible) but should have the capability to
understand the sources of risks and their potential impact either
solely or with the help of relevant experts. Currently such a role
is emerging and is often called the Chief Risk Officer (CRO). In the
meantime, CROs have appeared in both the financial (e.g., banking and insurance) and non-financial (e.g., commodities) sectors. Typically, a CRO is responsible for developing ERM with
adequate policy and process for managing risks at all levels of the
firm. In addition to an adequate knowledge of the quantitative side
of risk management, a CRO should have a fair understanding of the
behavioral aspects of risk as he/she has to deal with the human
perceptions and systems (i.e., communication and culture). One of
the challenging jobs of CROs is to report to the board of directors
(most of whose members often do not have a technical knowledge
of the risks) through the CFO or the CEO depending on the specific
structure of the firm. Ideally, the board of directors is, by regula-
tions (e.g., Combined Code in the U.K., Sarbanes Oxley in the U.S.,
and similar regulations in some other countries), responsible for all
risks of the firm. Another big challenge for a CRO is to promote a
risk awareness and ownership culture throughout the firm, includ-
ing the units at the corporate centre and the subsidiaries/divisions
at various geographical locations. Sir David Walker’s recent review in
the U.K. has highlighted the significance of the role of the CRO and
recommended the establishment of a board level risk committee for
banking and other bank-like financial firms (i.e., life insurers).
future of ERM
Where is ERM going? Certainly, globalization influences businesses
to change their business patterns and operational strategies. As
stated earlier, ERM is currently maturing and more significant
developments are likely in future. It is assumed that ERM will move
from its narrow focus of core business risks to take on a more
general perspective. Risk management will gradually be embedded
within firms’ strategic issues. Certainly, risk is an integral part of all
businesses and their success depends on the level of each firm’s
capability for managing risks. However, integrating the two views
of managing risks (i.e., fluctuation of performance in the area of
corporate governance with the volatility in the shareholder value
creation) in a common framework is challenging. Importantly, the
robustness of an ERM program depends on the commitment of the
top management in promoting a strong risk management culture
across the organization. Moreover, an innovative team of people within the organization, with a structured approach to risk taking, is vital to the success of ERM.
References
• Nocco, B. W., and R. M. Stulz, 2006, “Enterprise risk management: theory and practice,” Journal of Applied Corporate Finance, 18, 8-20
• Calandro, J., W. Fuessler, and R. Sansone, 2008, “Enterprise risk management – an insurance perspective and overview,” Journal of Financial Transformation, 22, 117-122
• Dickinson, G., 2001, “Enterprise risk management: its origins and conceptual foundation,” The Geneva Papers on Risk and Insurance: Issues and Practice, 26, 360-366
• Fraser, J. R. S., and B. J. Simkins, 2007, “Ten common misconceptions about enterprise risk management,” Journal of Applied Corporate Finance, 19, 75-81
• Mehr, R. I., and B. A. Hedges, 1963, Risk management in the business enterprise, Richard D. Irwin, Inc., Homewood, IL
• Walker, D., 2009, “A review of corporate governance in U.K. banks and other financial industry entities – final recommendations,” HM Treasury
1 A reference to Nassim Taleb’s popular book Black Swan. David Cowen and Nassim worked at the same firm in London, Bankers Trust, 16 years ago for a brief period. Nassim has led the charge against VaR as a risk model.
2 Wall Street Journal, August 2007. http://online.wsj.com/article/SB118679281379194803.html. For a more in-depth look at the limitations of VaR see this recent article: http://www.nytimes.com/2009/01/04/magazine/04riskt.html?_r=2&ref=magazine&pagewanted=all
capital at risk — a more consistent and intuitive measure of risk
David J. Cowen, Quasar Capital, and President and CEO, Museum of American Finance
David Abuaf
This paper will explain a risk methodology for traders and hedge
funds that trade in the most liquid of markets like G10 futures and
foreign exchange. The methodology is called ‘capital at risk’ (CaR)
and is a replacement for ‘value at risk’ (VaR). CaR obviates the need
to worry about fat tails or outliers called Black Swans as it virtually
eliminates downside surprises1. It is a conservative measure of risk
and focuses on assessing the maximum downside to the portfolio.
The traditional profile of a risk manager who should use CaR is a
short-term trader investing in the most liquid of markets — where
slippage is almost entirely avoidable; however, CaR is by no means
exclusive to short-term traders. In the volatility of 3rd and 4th quar-
ters of 2008 this tool would have been very useful to those with a
medium to longer term trading horizon as well.
problems with traditional risk metrics
In traditional risk management, VaR is used by traders to assess the
probability of a deviation to the portfolio’s return in excess of a spe-
cific value. This measurement, like many others, has flaws. The most
obvious is its basis on past performance — wherein historical volatility is assumed to be indicative of future volatility. This flaw leads to two discrete problems. The first is that it cannot take into account severe market
dislocations that are not reflected in historical data. The second is
that from a practitioner’s standpoint VaR can be completely different from one trader to the next due to subjective choices, i.e., the time frame utilized or the confidence threshold.
In the more liquid markets which short-term traders frequent, VaR is
a risk model that has the potential for catastrophic drawdowns. When
using VaR, only historical returns are factored into future volatility
expectations, and as a result infrequent occurrences are not reflect-
ed (stock market crashes, high commodity demand, terrorist attacks,
etc.). Additionally, a return indicated at 3σ VaR (99% confidence level) should be expected to occur about 2.5x per year,
yet amongst traders it is not uncommon to observe moves of this
magnitude or greater more than 5x per year. And with VaR there is
the required subjective nature of the timeframe used. For instance, a
VaR model that has twenty years of look-back data might seem suf-
ficient; however it would not include the October 1987 market crash.
A five-year model would not have the technology bubble of 2000.
Therefore, there are inherent caution flags when using VaR.
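The exceedance arithmetic above is easy to check. The following is a minimal sketch, assuming 252 trading days per year and (for the second figure) a normal return distribution; the variable names are illustrative, not from the paper:

```python
import math

TRADING_DAYS = 252

# A one-day 99% VaR should be breached on 1% of trading days.
exceed_99 = 0.01 * TRADING_DAYS
print(f"99% VaR breaches per year: {exceed_99:.2f}")  # ~2.5

# A true one-tailed 3-sigma move under a normal distribution is far rarer:
# P(Z < -3) via the error function.
p_3sigma = 0.5 * (1 + math.erf(-3 / math.sqrt(2)))
print(f"3-sigma moves per year (normal): {p_3sigma * TRADING_DAYS:.2f}")  # ~0.34
```

That traders nonetheless observe such moves more than 5x per year is exactly the fat-tail gap described above.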
To see the problems of using VaR, one need only to look at the
performance of hedge funds and statistical arbitrage traders dur-
ing the summer of 2007. Many funds lost 20% of their capital in
those months alone. Matthew Rothman, head of Quantitative Equity
Strategies for now defunct Lehman Brothers was quoted in the
Wall Street Journal as saying “[Today] is the type of day people will
remember in quant-land for a very long time. Events that models
predicted would happen once in 10,000 years happened every day
for three days”2. That quote is testimony enough to find a different
measure of risk.
What is CaR? When should it be used?
CaR is a measure of risk originally designed to value the maximum
downside to the portfolio without using any assumptions. There are
specific conditions which must exist in order to properly use CaR:
1 Each trade must have a predetermined stop-loss.
a. Stop-loss levels are continually readjusted for profitable trades to lock in profits. Consequently, CaR is not a static number. Additionally, even if the stop-loss level does not move, because the market is moving, CaR will by definition be a dynamic number.
b. Even if two trades have a high degree of correlation they must be treated as separate trades to their stop loss. For instance, if one was short equivalent Australian dollar against U.S. dollar with 25 basis points of risk to the portfolio and long equivalent New Zealand dollar with 25 basis points of risk to the portfolio, CaR will report 50 basis points. VaR risk models would look at this as a cross trade, long New Zealand dollar versus short Australian dollar, note that those two currency pairs are highly correlated, which they are, and then report a significantly lower risk to the portfolio, say something on the order of 10 basis points.
2 Trades must be in liquid futures or spot foreign exchange so that slippage is mitigated.
a. Emerging markets traders should exercise caution using the CaR methodology, for these currencies have substantial gap risk, negating the usefulness of CaR.
3 If options are used then CaR is calculated at the full premium of the option no matter what maturity or delta. It is the full cost of the option.
a. Therefore CaR can only be used in a long-only option-based strategy. It has limitations if the risk manager is naked shorting options.
These conditions are not just optimal but essential.
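The contrast in condition 1b can be made concrete. A minimal sketch follows; the -0.9 correlation and both function names are illustrative assumptions, not figures from the text:

```python
import math

def car_bp(trade_risks_bp):
    # CaR: per-trade stop-loss risks are simply summed; correlation is ignored.
    return sum(trade_risks_bp)

def var_style_bp(r1_bp, r2_bp, rho):
    # A correlation-aware model nets offsetting positions via variance aggregation.
    return math.sqrt(r1_bp**2 + r2_bp**2 + 2 * rho * r1_bp * r2_bp)

# Short AUD/USD and long NZD/USD, 25bp of stop risk each; the two positions'
# P&Ls are strongly negatively correlated (long one pair, short the other).
print(car_bp([25, 25]))                      # 50 bp, as CaR reports
print(round(var_style_bp(25, 25, -0.9), 1))  # ~11.2 bp, "on the order of 10"
```

The conservative choice to ignore correlation is deliberate: correlations that hold in calm markets routinely break down in stressed ones.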
How to calculate CaR?
One of the more fortunate effects of transacting in extremely liquid markets and not relying on historical performance or outside assumptions is the ease of calculation of CaR:
1 Revalue all cash and futures positions to their stop-loss levels — i.e., what would have been lost if the risk manager was stopped out.
2 Add up the total cost of all your options based on revaluation to zero.
3 Add 1 and 2.
4 Divide the above amount by the total capital of the portfolio.
5 The end result is the maximum amount of loss to the portfolio, CaR.
3 Washington Post, December 30, 2008. http://www.washingtonpost.com/wp-dyn/content/article/2008/12/30/AR2008123003431_5.html?sid=ST2008123003491&s_pos
CaR is easy to calculate for all trades and is easily aggregated to specific market or portfolio levels.
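The five steps can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function and field names are assumptions for the example:

```python
def calculate_car(positions, option_premiums, total_capital):
    """Capital at Risk per the five steps above.

    positions: list of (qty, current_price, stop_price, point_value) for
               cash/futures trades with predetermined stops.
    option_premiums: list of (qty, premium, point_value) for long options,
               each valued as if the premium fell to zero.
    Returns CaR as a fraction of total capital.
    """
    # Step 1: revalue every cash/futures position to its stop level
    # (abs() covers both long and short positions).
    stop_losses = sum(abs(price - stop) * point_value * qty
                      for qty, price, stop, point_value in positions)
    # Step 2: full premium of all options, assuming revaluation to zero.
    option_losses = sum(premium * point_value * qty
                        for qty, premium, point_value in option_premiums)
    # Steps 3-4: add the two and divide by total portfolio capital.
    return (stop_losses + option_losses) / total_capital

# Step 5: the result is the maximum loss to the portfolio, CaR.
car = calculate_car(positions=[(20, 840, 823, 50)],     # 20 S&P minis, $50/point
                    option_premiums=[(20, 13.1, 100)],  # 20 calls at 13.1, $100/point
                    total_capital=10_000_000)
print(f"CaR = {car * 10_000:.1f} bp")  # 43.2 bp
```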
CaR is not the most difficult of measures to use or calculate. The
benefit of CaR is that a trader can rest assured that they know their
maximum portfolio loss in the event of a catastrophe. It is always a
worst case scenario. In that manner there are no portfolio surprises
as one is always cognizant of the full risk to the portfolio.
Benefits of CaR
■■ Ease of use.
■■ Ease of calculation.
■■ Easily understandable.
■■ Not easily manipulated.
■■ Not subject to historical data or assumptions.
■■ A more intuitive manner in which to assess the risks of a trade.
■■ Eliminates downside surprise.
Example usage of CaR
The following illustrates a portfolio of $10,000,000 invested in a few securities.

AUM 10,000,000
Contract             Quantity  Current price  Stop  CaR
Day 1
Feb COMEX gold call  20        13.1                 26.2bp
DEC S&P Mini s.1     20        840            823   17bp
DEC S&P Mini s.2     10        840            830   5bp
Total portfolio CaR                                 48.2bp
Day 2
Feb COMEX gold call  20        14                   28bp
DEC S&P Mini s.1     20        866            850   16bp
DEC S&P Mini s.2     10        866            850   8bp
DEC S&P Mini s.3     10        866            845   10.5bp
Total portfolio CaR                                 62.5bp

The notations s.1, s.2, and s.3 indicate distinct sets of contracts.
Here we see that as of the close of day 1 the firm has 20 outstand-
ing Feb gold calls presently priced at 13.1 and 30 outstanding
December S&P mini contracts with different stops. The CaR was
calculated assuming the call would fall from present prices to zero
and that the mini-contracts could fall from 840 to their respective
stops. Each gold contract is worth $100 per point. Consequently,
we multiplied 13.1 x $100 x 20 to reach $26,200. Then we divided
$26,200 by $10,000,000. The mini example is (840-823) = 17. 17 is
multiplied by $50 per mini contract and then by 20, or 17 x $50 x
20 = $17,000. $17,000 is then divided by $10,000,000 to achieve
the 17 basis points of risk.
On day 2, we see the portfolio has more risk associated with the
gold call — this is because the price of the call has risen, so the pos-
sible value lost has increased. In the first two mini contracts, the
stops were rolled upwards, locking in profit but still exposing the
portfolio to a loss from day 2’s NAV. We also see the addition of a
third S&P mini contract, further adding to the portfolio’s CaR.
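As a check on the arithmetic, both days' totals can be reproduced directly. This minimal sketch uses only the $100 and $50 point values stated above; the variable names are illustrative:

```python
GOLD_PT, MINI_PT, AUM = 100, 50, 10_000_000  # $/point multipliers and capital

def bp(loss):
    # Convert a dollar loss to basis points of total capital.
    return loss / AUM * 10_000

# Day 1: gold call to zero, minis to their stops.
day1 = (bp(20 * 13.1 * GOLD_PT)
        + bp(20 * (840 - 823) * MINI_PT)
        + bp(10 * (840 - 830) * MINI_PT))

# Day 2: higher call premium, rolled-up stops, plus a third mini set.
day2 = (bp(20 * 14 * GOLD_PT)
        + bp(20 * (866 - 850) * MINI_PT)
        + bp(10 * (866 - 850) * MINI_PT)
        + bp(10 * (866 - 845) * MINI_PT))

print(round(day1, 1), round(day2, 1))  # 48.2 62.5
```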
Real world examples of not knowing your risk
In 2008, we witnessed catastrophic losses at Bear Stearns, Lehman
Brothers, AIG, and a score of other high profile firms. The cost to
the economy and taxpayers has been enormous. To use AIG as but
one example, the U.S. Government has pumped U.S.$152 billion
into AIG in the form of direct loans, investment in preferred stock,
and acquisition of troubled assets. AIG’s exposure was through its
Financial Products Division, which became a major player in the
derivatives market, both in credit default swaps and collateralized
debt obligations. In the case of the credit default swap, in exchange
for fees AIGFP would guarantee, or insure, a firm’s corporate debt
in case of default. It was a growing market and the firm was book-
ing hefty profits. With respect to CDOs, the firm had a portfolio of
structured debt securities that held either secured or unsecured
bonds or loans. By the end of 2005 AIG had almost U.S.$80 billion
of CDOs.
The firm was comfortable with its derivatives portfolio. In a public
forum in August 2007 AIG Financial Products President Joseph
Cassano boasted on a conference call to investors that the deriva-
tives portfolio was secure: “It is hard for us, without being flippant,
to even see a scenario within any kind of realm of reason that would
see us losing $1 in any of those transactions”3. What prompted this
confidence? According to interviews with top officials at AIG they
were relying on a computer model to assess their credit default
swap portfolio. According to their proprietary model, there was only
a 0.15% chance of paying out, or a 99.85% chance of reaping reward. AIG believed that there was only a 1.5-in-1,000 chance of disaster. Once again a model which predicted only a slim chance of an event occurring, the so-called fat tail, sank a once prestigious firm.
While AIG had credit risk they never faced the reality of the magni-
tude of their risk. Had CaR been used, it would not have presented
a probability, but rather a finite amount of risk. In particular what
hurt AIG was that it had to post collateral against these swaps
and when their AAA rating became imperiled the calls for margin
occurred. Perhaps if AIG had thought in terms of full collateral capi-
tal at risk they would have thought differently about their risk. The
reality of life is that risk in the financial markets is never 0.15% no
matter what the model states.
And we know that VaR in practice is simply unable to handle the
stress. Bear Stearns reported on May 31, 2007 an interest rate
risk to their portfolio of U.S.$30.5 million using a VaR confidence
level of 95%. Moreover, they claimed diversification benefits from
elsewhere offset the entire firm-wide exposure to only U.S.$28.7
million. When Bear Stearns failed we know that they had levered
their money so high, over thirty times, that a run developed that
could not be met. How does a firm state a risk of less than U.S.$30
million when its true risk is significantly higher? We know that VaR
is part of the problem and not part of the answer.
conclusion
CaR has been used by David Cowen to measure risk in his hedge
funds. Over his two decade trading career he was never satisfied
with VaR. David set out to find a simple method to value the maximum downside to the portfolio. The result is CaR. This is an
easy to calculate measure which appropriately factors in all risks to
the portfolio without looking at historical and often erroneous data.
While David originated this idea, he does not claim sole authorship of the concept; others could easily have developed this method independently.
Articles
Economists’ hubris — the case of risk management
Best practices for investment risk management
Lessons from the global financial meltdown of 2008
What can leaders learn from cybernetics? Toward a strategic framework for managing complexity and risk in socio-technical systems
Financial stability, fair value accounting, and procyclicality
Can ARMs’ mortgage servicing portfolios be delta-hedged under gamma constraints?
The time-varying risk of listed private equity
The developing legal risk management environment
Interest rate risk hedging demand under a Gaussian framework
Non-parametric liquidity-adjusted VaR model: a stochastic programming approach
Optimization in financial engineering — an essay on ‘good’ solutions and misplaced exactitude
A VaR too far? The pricing of operational risk
Risk management after the Great Crash
1 The views expressed in this paper reflect only those of the authors and in no way are representative of the views of Capco, Contango Capital Advisors, or any of their partners.
Economists’ hubris — the case of risk management1
Abstract
In this, the third paper in the Economists’ Hubris series, we high-
light the shortcomings of academic thought in developing models
that can be used by financial institutions to institute effective
enterprise-wide risk management systems and policies. We find
that pretty much all of the models fail when put under intense sci-
entific examinations and that we still have a long way to go before
we can develop models that can indeed be effective. However, we
find that irrespective of the models used, the simple fact that the
current IT and operational infrastructures of banking institutions
does not allow the management to obtain a holistic view of risk
and the silos they sit within means that instituting an effective
enterprise-wide risk management system is as of today nothing
more than a panacea. The main worry is that it is not only academ-
ics who fail to realize this fact, practitioners also believe that these
models work even without having a holistic view of the risks within
their organizations. In fact, we can state that this is the first paper
in which we highlight not only the hubris exhibited by economists
but also the hubris of practitioners who still believe that they are
able to accurately measure and manage the risk of the institutions
they manage, monitor, or regulate.
Shahin Shojai, Global Head of Strategic Research, Capco
George Feiger, CEO, Contango Capital Advisors
In this, our third article in the economists’ hubris series, we look
at the shortcomings of academic thinking in financial risk man-
agement, a very topical subject. In the previous two articles, we
examined whether contributions from the academic community
in the fields of mergers and acquisitions [Shojai (2009)] and asset
pricing [Shojai and Feiger (2009)] were of much practical use to the
practitioners and demonstrated that economists have drifted into
realms of sterile, quasi-mathematical and a priori theorizing instead
of coming to grips with the empirical realities of their subjects. In
this sense, they have stood conventional scientific methodology,
which develops theories to explain facts and tests them by their
ability to predict, on its head. Not surprisingly, this behavior has
carried over into the field of risk management as well, with an added twist.
Rather like the joke about the man who looks for his dropped keys
under the street light because that is where the light is rather than
where he dropped the keys, financial economists have focused on
things that they can ‘quantify’ rather than on things that actually
matter. The latter include both the structure of the financial system
and the behavior of its participants. Consequently, the gap between
academic thinking and business application remains as large today
as it has ever been.
Irrespective of one’s views regarding academic finance, or even the
practitioners who are expected to apply the models devised, few can
deny that there were serious failures in risk management at major
global financial institutions, perpetrated in all probability by the
belief that the models developed work and that they can withstand
environments such as the recent financial crisis. However, it is not
enough to simply make such generalized statements without taking
the time to get a better understanding of why such failures took
place and whether we will be able to correct them in the future.
Our opinion is that the tools that are currently at the disposal of
the world’s major global financial institutions are not adequate to
help them prevent such crises in the future and that the current
structure of these institutions makes it literally impossible to avoid
the kind of failures that we have witnessed.
Our objective with this article is not to exonerate the risk manage-
ment divisions of these institutions, nor is it to suggest that the
entire enterprises should be forgiven for the incalculable dam-
age that they have caused. Our aim is to demonstrate that even
if the risk management divisions of these institutions had acted
with the best interests of their organizations, and even of the other
stakeholders, in mind, they would still have had a difficult task in
effectively managing the risks within their enterprises given the
tools that were at their disposal and the structures of the firms
they operated in. Furthermore, and more importantly given the
focus of this paper, had the risk management divisions of these
institutions effectively implemented the tools provided to them
by academic finance, the situation would be no better than
it is today.
The more one delves into the intricacies of academic finance the
more one realizes how little the understanding is among a majority
of academics about what actually takes place within these institu-
tions and how difficult it really is to implement the theories that are
devised in so-called scientific finance institutions at major business
schools in the West.
In this paper, we will focus on a number of these issues. We will high-
light why it is that the current structures within the major financial
institutions make it almost impossible to have a holistic view of the
enterprise’s risk, despite the suggestions in the academic literature,
which we must add are not that many, as to its viability.
We will discuss why it is that the current compensation structures
make it very hard for the management to control risks at the indi-
vidual, divisional, and group levels. And, finally, we will explain why it
would still be impossible to prevent such crises in the future, even if
the two former issues were somehow miraculously solved, given the
tools that are available to risk managers and their management.
However, before we get too deep into the intricacies of financial
institutions and their risk management operations, it is important
to cast a critical eye on what has been suggested to be the main
cause of the recent crisis and discuss why it is that one of the main
causes has been overlooked, namely the disappearance of due
diligence by banks.
What are the causes of the recent market crisis?
If one were to choose the one area that has borne the most criti-
cism for the current crisis, it would be the CDO market and those
who rate the instruments [Jacobs (2009)]. Of course, the regulators
and central bankers have not escaped unscathed either, but most
studies seem to suggest that had we had a better understanding of
CDOs and their risks we might have been able to prevent the current
crisis.
The first problem with this point of view is the expectation that com-
plex financial assets, such as CDOs or other securitized assets, can
be accurately valued using scientific methods. Even if we were able
to calculate the risk of simple, vanilla-structure financial assets to
correctly price them, and by that we mean that the pricing models
arrive at the same price that the asset is trading at in the markets,
we would still have a very difficult time pricing securitized assets
with any degree of accuracy. The reason is that the prepayment
rights of these instruments, which are related, with some friction,
to movements in interest rates, would necessitate an absolutely
perfect prediction of interest rate movements into the future, and
a similarly accurate assessment of the proportion of borrowers who
choose to repay [Boudoukh et al. (1997)]. As one can appreciate,
that is quite an unrealistic expectation.
The sad fact is that academic finance has failed in its efforts to
even provide valuation models that can price simple assets, such
as equities, with any degree of accuracy. Expecting these models to
perform any better for highly complex instruments is nothing more
than wishful thinking [Shojai and Feiger (2009)].
If we accept that these assets cannot be priced with any degree of
accuracy then we must accept that neither the financial institutions
that created or traded these assets nor the rating agencies would
have been able to help prevent the crisis that was brought about
by the subprime mortgage market even if they did know what they
were doing.
But, in our opinion, focusing on the pricing of these assets, espe-
cially since all of the people involved with these instruments are
familiar with their intricacies, ensures that we ignore the main
reason why this market became so big and finally imploded. In our
opinion, even if interest rates had not been kept so low for as long
as they were we would still have not been able to prevent the crisis.
And that is because securitization, by its mere existence, creates an
environment that would lead to such crises. The fact that it had not
happened before is the real surprise.
Now, our job is not to provide a primer on securitization or how
the U.S. financial system works for those of our readers who are
significantly more qualified to talk on the subject than we are,
but it would not hurt to just examine how the mere existence of
securitized mortgages can lead to potential crises. If we look at
how mortgages were issued in the past we would find that prior
to the development of the so-called individual risk rating models
[Hand and Blunt (2009), Hand and Yu (2009)] banks used to use
the information that they had about their client of many years to
decide on whether and how much mortgage to issue them with. The
decision and the amount were, therefore, related to the history of
that client’s relationship with the bank.
When banks started lending to nonclients more aggressively they
had to resort to using the information provided by personal risk
ranking organizations, such as Experian. Now, the issue is not
whether these rating agencies provide information that is genuinely
accurate or of much use, though what we do find is that it is not
remotely as scientific in its accuracy as many are led to believe. The
fact is that the loan was still issued by the bank or its brokers after
doing some sort of due diligence on the client, and more importantly
the loan remained on the books of the bank. The fact that the loan
remained on the bank’s own books ensured that they monitored the
activities of their brokers more closely, since any defaults would
directly cost the bank itself. What securitization does is to allow
the bank to package these mortgages and sell them to third-party
investors. In other words, the loss would no longer hit the bank
but the investors, who in the case of credit enhanced MBSs would
have no idea who the original issuing bank(s) was (were). When this
happens, the bank is no longer a mortgage company but simply
a financial engineer of mortgages. Its income will come not from
the difference between its borrowing rate and the rate at which it
lends, less any loss experienced as a result of defaults, but from
the fees it can charge from manufacturing mortgage pools that are
then sold to investors. As a result it does not spend as much time
on due diligence as it used to do; hence the accusations that banks
were pushing mortgages to those they knew would not be able to
repay them. The fact was that once the mortgages were packaged
non-payment was somebody else’s problem. However, since house
prices were continually going up and interest rates were going in
the opposite direction few would default since they could simply
extract more value from their homes by remortgaging; yet another
source of fees for the banks.
With the banks relinquishing their role as the conductors of due dili-
gence, it was left to the rating agencies to act as a sort of auditor for
these issues. But, they never had access to the underlying borrow-
ers to have any idea of their true state of health. That information
was with the issuing bank, and remember that they had stopped
caring about collecting that kind of information when they started
selling the mortgages on to other investors [Keys et al. (2010)]. So,
the rating agencies had to use aggregate data to rate these instru-
ments, or the credit quality of the insurer who had provided credit
enhancement to the security [Fabozzi and Kothari (2007)]. In either
case, neither the credit enhancer nor the rating agency had any
idea about the underlying quality of the borrowers. Consequently,
all that was needed to bring the house of cards down was a correc-
tion in house prices, which is exactly what happened.
Consequently, no matter how we change the regulations govern-
ing rating agencies, unless they admit that they have no idea how
to rate securitized assets, which they really cannot for the afore-
mentioned reasons, such crises are likely to occur again and again.
Given that the number of securitized asset issues dwarfs the more
traditional issues that rating agencies used to live off, it is highly
unlikely that they will admit to having no idea about how to rate
these instruments, no matter how complex the mathematical model
they use is.
The moral of this overview is that when risk is passed on and com-
pensation is based on the number of sales, due diligence goes out
of the window. It is this lack of due diligence that brought about the
current crisis, and not only in CDOs and MBSs but across
the entire financial ecosystem, from traders who can only win to
asset managers who share in clients’ winnings but do not have to
make good, or even share in, clients’ losses when they lose.
Because of the aforementioned issues it is essential that when we
do examine institutional risk management policies we recognize
the importance that human aspects play in their successful imple-
mentation.
Having shared with you our perspective on what actually caused
the recent crisis, we will, in the next section, highlight the short-
comings of academic finance in developing models that financial
institutions can use to institute effective enterprise-wide risk man-
agement systems and policies.
A framework for thinking about financial risks
In order to evaluate the relevance of academic thinking on risk
management it is helpful to use a three-level schema:
■■ Risk at the level of the individual financial instrument.
■■ Risk at the level of a financial institution holding diverse instruments.
■■ Risk at the level of the system of financial institutions.
A financial instrument might be a credit card or a residential mort-
gage or a small business loan. In the U.S., risks in these instruments
have been intensively studied and are moderately predictable in
large pools. One example that has become common parlance is to
equate the default rate on national pools of seasoned credit card
balances to the national unemployment rate. A financial institution
holding a diverse portfolio of such instruments might be a bank
which originates them and retains all or some of them, or an investing insti-
tution like a pension fund or a hedge fund or an insurance company
(or, indeed, an individual investor with a personal portfolio).
The system of financial institutions is the total of these individual
players, in particular, embracing the diverse obligations of each to
the others. A bank may have loaned unneeded overnight cash to
another bank or it may have borrowed to fund its portfolio of trad-
able securities by overnight repurchase agreements; a hedge fund
may have borrowed to leverage a pool of securities; an insurance
company may have guaranteed another institution’s debt, backing
that guarantee by pledging part of its own holding of securities;
and an individual may have guaranteed a bank loan to his small
business by pledging a real estate investment, itself leveraged by
a mortgage.
The fallacy in academic approaches to risk management (enthu-
siastically adopted by the financial institutions themselves) is to
assume that the techniques shown to be reasonably useful for the
analysis of large samples of individual instruments can be of signifi-
cant value in assessing the other levels of risk.
Academic approach to risk management within financial institutions
The academic approach to handling the risk of a financial institu-
tion holding a diverse pool of instruments is to look at some set of
historic correlations among the instruments and model the institu-
tion as the portfolio of the instruments that it holds. Technically it
assumes that the outcomes of all the instruments are drawn from
a stationary joint probability distribution, from which all sorts of
enticing estimates are possible, such as Enterprise Value at Risk
and others.
Anyone who has attempted to estimate the variance/covariance
matrices of the instruments knows that the distributions are not
stationary. Indeed, they vary markedly according to the sample
period chosen. Hence the statistically trained have used various
weighting schemes to create ‘more relevant’ data. For example,
weighting recent data more heavily than older data using some
distributed lag scheme. This is typical of the economists’ methods:
create some theoretical structure and impose it on the data. A more
productive approach would be to inquire why the distributions are
not stationary.
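The ‘distributed lag’ weighting scheme just described is typically implemented as a RiskMetrics-style exponentially weighted moving average. A minimal sketch, using simulated returns rather than any data from this paper (the decay factor, sample sizes, and volatilities are all hypothetical), shows both the scheme and the instability it tries to paper over: the estimate swings with the sample period chosen.

```python
import numpy as np

def ewma_cov(returns, lam=0.94):
    """Exponentially weighted covariance matrix (RiskMetrics-style).

    Observation k periods old receives weight proportional to lam**k,
    so recent data dominate the estimate -- the 'distributed lag'
    weighting described in the text.
    """
    returns = np.asarray(returns, dtype=float)
    n_obs = returns.shape[0]
    weights = (1 - lam) * lam ** np.arange(n_obs)[::-1]  # newest row weighted most
    weights /= weights.sum()
    demeaned = returns - returns.mean(axis=0)
    return (weights[:, None] * demeaned).T @ demeaned

# Illustration: the estimate shifts markedly with the sample period chosen.
rng = np.random.default_rng(0)
calm = rng.normal(0.0, 0.01, size=(250, 2))       # a low-volatility 'year'
stressed = rng.normal(0.0, 0.04, size=(250, 2))   # a high-volatility 'year'

print(np.diag(ewma_cov(calm)))                         # small variances
print(np.diag(ewma_cov(np.vstack([calm, stressed]))))  # dominated by the stress period
```

Appending the stressed period multiplies the estimated variances many times over, even though half the sample is identical, which is exactly the sample-period dependence the text complains of.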
Without getting into too much detail of the underlying models at
this stage, we would like to suggest that at least three factors play a
role in causing the nonstationarity of risk distributions that leads to
the practical downfall of the ‘portfolio approach’ to risk management
within an institution:
■■ Failure to attempt to understand ‘causation.’
■■ Neglect of the ‘convergence of trading behavior’ in the financial system we have created.
■■ Adherence to false notions of ‘market efficiency,’ which causes neglect of credit-fueled valuation bubbles.
Let us start with ‘causation.’ Unlike the predictably random move-
ments of electrons in an atom, economic events sometimes have
causes which bear rather sharply on the correlation among instru-
ments. It is easy to give a practical illustration. Some time ago one
of the authors had a client with what was, supposedly, a well-diver-
sified portfolio — equities, bonds, real estate, even some direct busi-
ness interests. Unfortunately, all the investments were located in
or were claims on businesses in Houston, Texas. When the price of
oil collapsed, so did the value of the portfolio. There was a common
causal factor in all the returns. It is not so easy, ex-ante, to calculate
the variance/covariance matrix of returns in Houston conditional
on the price of oil or of returns in the Midwest conditional on the
health of General Motors. Let alone to then create the probability
distribution of the fate of these causal factors.
Consider now ‘convergence of trading behavior.’ Correlations of
returns of assets like European and U.S. and emerging stock indices,
various commodities, and the like have been highly variable but
clearly rising over the past 15 years [Ned Davis Research]. A combina-
tion of deregulation of global capital flows, development of sophisti-
cated capital market intermediaries operating globally, and data and
trading technology have enabled more and more money to be placed
on the same bets at the same time. And it is. Once, Harvard and Yale
were unique among institutions in investing heavily in timberland and
private equity and hedge strategies. Now everyone is doing it and the
returns are falling while the volatility is rising.
And finally ‘credit-fueled valuation bubbles.’ Asset markets do not
price efficiently, no matter what your professor said in business
school. Reinhart and Rogoff (2009) have demonstrated that for
centuries, valuation bubbles funded by excess credit creation have
occurred in all economies for which we have decent records. Why
do investors not simply recognize these and “trade them away,”
as the efficient markets hypothesis would imply? It is not so easy,
because while the bubbles are ‘obvious,’ when they will burst is not.
Smithers (2009) explains this in eloquent terms. However, if you
think about it, had you started shorting the tech bubble aggres-
sively in 1998 you would have been bankrupt well before the end.
As Keynes said, the markets can remain irrational longer than you
can remain solvent.
Systemic connections and liquidity risk
Once we acknowledge that valuations are not perfect but can
undershoot and overshoot we can explain systemic risk that is,
essentially, broad loss of liquidity in most assets. Why pay more for
something today than it is likely to be worth tomorrow?
Consider the example of the CLO market in 2008. Several hundred
billion of corporate loans were held in CLOs, on the largely invisible
books of offshore hedge funds and underwriting institutions (via
off-balance-sheet vehicles), in structures leveraged 20 times and
more. The leverage came from major banks and insurance compa-
nies which, we should remember, devoted the bulk of their other
business activities to making loans to entities like businesses and
real estate developers and to each other. They in turn raised their
liabilities by issuing bonds and commercial paper.
Some of the loans in the CLOs started falling in price in secondary
trading (the market making in which, by the way, was provided by
the same banks which were providing the leverage to the CLO hold-
ers). This precipitated margin calls on the holders that they could
not all meet. With leverage of 20 times the fund equity could quickly
disappear, so the only recourse was to dump loans and delever-
age as quickly as possible. So we sat at our Bloomberg screens in
amazement as blocks of hundreds of millions, or billions, of dollars of
loans were thrown out for whatever bid they could get. Well, would you
buy then, even if you thought that almost all of the loans were
ultimately money-good, as we indeed did? Of course not, because
in that panic it was certain that the prices would fall further, which
they did.
Normally the market makers would buy bargains but they were
providing the leverage and they were holding trading inventories
that were tumbling in price. So they withdrew the leverage and
reduced the inventories and so forced prices down further. This
killed the hedge funds but also inflicted losses on other holders,
including their own balance sheets, and created credit losses on
the loans extended to the leveraged CLO holders. Now the banks
were in trouble themselves. So they dried up the interbank lend-
ing market, essential for liquidity in the world trading system, and
their commercial paper appeared risky and fell in price, damaging
the money market fund industry which held a large part of liquid
assets in the U.S.
Similar things happened with mortgage-backed CDOs and other
instruments. We need not elaborate on the history but you can
see why the variance/covariance matrix did not work out to be
relevant. And, consequently, why the core of the academic work
on risk management did not turn out to be relevant. And indeed,
how insidious the concept of market efficiency has been in blinding
market participants to the nature of real risk by implying that it has
already been priced in to all assets as well as it can be.
The role of collateral and incentives
We have not quite finished with our criticism of the academic treat-
ment of risk management. We need to turn now to the two academic
nostrums for the kind of risk we have described: ‘adequate collateral’
and ‘appropriate incentives.’
Collateral, whether in the hands of a central counterparty or put up
over the counter against individual transactions (the equity cush-
ion in all those leveraged CLOs), is claimed to allow the system to
absorb unanticipated shocks. That may be, but the question is, how
much collateral is enough? Here we come back to the stationary
joint probability distribution of asset prices, which defines the likely
magnitude of these unanticipated shocks, and we can start over.
On reflection, systemic risk renders collateral least helpful when
you need it most. Think of something as simple as a mortgage on
a house. The collateral is the down payment, which is the lender’s
cushion against default. If your house goes on the market in ‘normal
times,’ say because of a divorce, all will be fine and the lender is
likely to recoup his loan. If a real estate bubble has collapsed and
every house on your block is for sale, the cushion is non-existent.
This is not an easy problem to solve. Think dynamic collateral
requirements driven by rules on market value versus ‘true value.’
Let us now turn to incentives, particularly the notion that if incen-
tives are paid in deferred common stock of the originators, there
will be a much lower likelihood of people making ‘bad trades.’ We
believe that such attempts will fail the test of practical reality, for
at least the three following reasons.
■■ They are incompatible with a free market for talent. Institutions
that attempt long deferral of incentive payments into restricted
vehicles will experience the loss of their highest performers to insti-
tutions that do not do this. You can see this happening right now
on Wall Street.
■■ The way in which we choose to measure corporate performance
encourages the opposite, namely short-term risk taking for near-
term results. We measure results quarterly and annually even
though the economic cycle plays out over years, not quarters.
Who doubts that any financial institution CEO who dropped out
of playing in late 2006 would have long since lost his job and his key
staff, well before the markets collapsed? Is this not the meaning
of the infamous remark by Chuck Prince, former CEO of Citi, that
“as long as the music is playing we have to dance”?
■■ The proposed holding periods for the restricted incentive pay-
ments are always some calendar interval. The economy does not
oblige by revealing the quality of systemic-risk-provoking actions,
such as credit bubbles, over short periods.
Having highlighted what we believe to have been the main causes
of the recent financial crisis and some of the failures of academic
finance in the area of risk management for financial institutions,
in the rest of the paper we will highlight the shortcomings of VaR
and its offspring and explain why, irrespective of the
effectiveness of the models, enterprise-wide risk management will
remain a pipe dream until the internal infrastructures of financial
institutions are modified to eliminate many of the problems that
exist today.
VaR and its shortcomings
Unlike what many who study the subject of risk might believe, there
are very few studies dedicated to the institution of effective risk
management systems and policies within financial institutions. In
fact, had it not been for Value-at-Risk (VaR) [Jorion (2006)] and
its numerous offspring [Kuester et al. (2006)], we would not have
had much to refer to in this article. The reality is that as of today we
are still forced to refer back to the contributions of Harry Markowitz
in the early 1950s [Markowitz (1952)] on the diversification ben-
efits of less-than-perfectly-correlated assets within a portfolio when
trying to determine both the risk tolerance and appetite of financial
firms (as we discussed in the previous section).
VaR at its most basic tries to measure the maximum loss that a
firm is willing to accept at a prespecified confidence level over a
set time horizon. It is used to determine what the firm’s appetite
for risk is. A
simple example that most use is that if we use a 95% confidence
interval and the firm’s portfolio of investments has a one-day 5%
VaR of $100 million, there is a 5% probability that the portfolio
will fall in value by more than $100 million over a one-day period.
Another way of stating this is that a loss of $100 million or more
on this portfolio is expected on 1 day in 20. However, VaR assumes
that markets are normal and there is no trading, i.e., no change in
the portfolio composition.
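The definition above can be sketched in a few lines. This is a minimal parametric implementation under precisely the i.i.d. normal-returns assumption the text goes on to criticize; the function name, portfolio size, and volatility are hypothetical, chosen so the figure lands near the $100 million of the example.

```python
from math import sqrt
from statistics import NormalDist

def parametric_var(portfolio_value, mu, sigma, confidence=0.95, horizon_days=1):
    """Parametric VaR assuming i.i.d. normally distributed daily returns.

    Returns the loss threshold expected to be exceeded on a fraction
    (1 - confidence) of periods -- e.g. 1 day in 20 at the 95% level.
    """
    z = NormalDist().inv_cdf(1 - confidence)           # about -1.645 at 95%
    worst_return = mu * horizon_days + z * sigma * sqrt(horizon_days)
    return -portfolio_value * worst_return             # positive number = loss

# Hypothetical book: $2bn portfolio, zero drift, 3% daily volatility
var_95 = parametric_var(2_000_000_000, mu=0.0, sigma=0.03, confidence=0.95)
print(f"1-day 95% VaR: ${var_95:,.0f}")
```

Note that everything contentious is hidden inside `sigma` and the normality assumption; the arithmetic itself is trivial, which is part of VaR’s seductiveness.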
In reality the composition of the firm’s portfolio is changing every
second, with every trade that the traders make or every new instru-
ment that is engineered. In fact, for complex instruments the expo-
sure could change during the day without any additional trades or
changes in portfolio composition.
The assumption that the bank’s entire portfolio of assets follows
a normal distribution is also questionable. Different institutions
have different portfolio compositions, and the shapes of their risk
profiles could accordingly be very different from those of their peers.
Using the incorrect distribution could result in risk appetite
levels that are completely inappropriate for the firm in question.
In fact, even starting with the right distribution does not mean
ending with the correct one, as distributions can change over time;
subprime mortgages, for example, became riskier, with shifting
distribution profiles, as time went on.
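The sensitivity to the distributional assumption is easy to demonstrate. The sketch below, run on simulated fat-tailed (Student-t) returns rather than any real data, compares the empirical 99% VaR with the figure a normal assumption would report for the same volatility; all parameters are hypothetical.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
# Hypothetical daily returns: Student-t with 5 degrees of freedom,
# rescaled to roughly 1% daily volatility (variance of t_5 is 5/3).
t_returns = 0.01 * rng.standard_t(df=5, size=100_000) * np.sqrt(3 / 5)

# 99% VaR as a fraction of portfolio value.
empirical_var = -np.quantile(t_returns, 0.01)              # uses the data as-is
normal_var = -NormalDist(0, t_returns.std()).inv_cdf(0.01) # pretends it is normal

print(f"empirical 99% VaR:          {empirical_var:.2%}")
print(f"normal-assumption 99% VaR:  {normal_var:.2%}")
```

For this fat-tailed sample the normal assumption understates the 99% tail. The direction of the error even depends on the confidence level chosen, at milder levels a fat-tailed distribution can report a lower VaR than the normal, which is precisely why a single summary number conceals the shape of the tail.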
The fact that different assets have different distributions means
that their risks cannot simply be added to each other to arrive
at a group risk figure. Given the enormous difficulties that fund
management companies, and their consultants, face when trying to
compute the overall risk of a portfolio of shares that follow similar
distributions, it should come as no surprise just how difficult it would
be to calculate the overall risk of an entire financial
institution.
More importantly, the relationships between the different assets
within the portfolio could be, and most probably are, miscalculated,
even if we assume that historical patterns will hold into the future.
How would the model account for liquidity risk when recalibrating
the overall VaR [Fragnière et al. (2010)]? What happens to the cor-
relations during a crisis? If we have learned nothing else from the
recent crisis we have learned that correlations among assets that
were previously unrelated become significantly stronger during cri-
ses. If correlations converge we lose some of the benefits of
diversification that we were relying on, invalidating the VaR figures.
And sadly, the fact is that historical patterns rarely hold in the
future. They are of little use in correctly determining VaR, espe-
cially since no future crisis will exhibit the patterns of behavior that
its predecessors did. Hence, it is literally impossible to determine
the true number of bad events that fall outside our predetermined
confidence interval.
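The correlation breakdown described above can be illustrated with a toy regime-switching simulation (all parameters hypothetical): a model calibrated on the calm history sees near-zero correlation, while the crisis regime that actually generates the losses is tightly coupled.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample(n, corr, vol):
    """Bivariate normal daily returns with a given correlation and volatility."""
    cov = vol**2 * np.array([[1.0, corr], [corr, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

calm = sample(750, corr=0.1, vol=0.01)    # three calm 'years', nearly uncorrelated
crisis = sample(60, corr=0.9, vol=0.05)   # a crisis quarter, tightly coupled

hist_corr = np.corrcoef(calm.T)[0, 1]     # what a backtest on history would show
crisis_corr = np.corrcoef(crisis.T)[0, 1] # what actually bites

print(f"correlation estimated from calm history: {hist_corr:.2f}")
print(f"correlation during the crisis:           {crisis_corr:.2f}")
```

A VaR model fed the calm-history correlation would credit the book with diversification benefits that evaporate exactly when they are needed.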
Even if you were able to accurately determine the number of events
that will fall beyond your accepted confidence level, VaR will not
help you determine their magnitude: it counts the events that fall
beyond the established confidence interval, it does not size them.
What is the point of knowing that there is a 1-in-20 chance of losing
more than $100 million when that single event could send the bank
into bankruptcy? Recent events have proved that a one-off outlier
can bring the entire system down.
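The magnitude question is what expected shortfall (also called conditional VaR) tries to answer: the average loss conditional on breaching the VaR threshold. A minimal empirical sketch on hypothetical P&L data, not anything drawn from this paper:

```python
import numpy as np

def var_and_es(returns, confidence=0.95):
    """Empirical VaR and expected shortfall from a sample of returns.

    VaR answers 'how often'; ES answers 'how bad when it happens' --
    the magnitude question raised in the text.
    """
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, confidence)
    es = losses[losses >= var].mean()      # average loss beyond the VaR threshold
    return var, es

rng = np.random.default_rng(1)
# Hypothetical P&L: mostly quiet days plus rare, severe losses
pnl = rng.normal(0.0, 1.0, 10_000)
pnl[rng.random(10_000) < 0.01] -= 15.0     # roughly 1% of days take a large hit

var, es = var_and_es(pnl)
print(f"95% VaR: {var:.1f}   expected shortfall: {es:.1f}")
```

Two books can share the same 95% VaR while their tail losses differ wildly; only the ES figure registers the difference. ES, of course, inherits all of the data problems discussed above.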
VaR also fails to account for the interrelationships between finan-
cial institutions, which in recent years, thanks to the CDO and MBS
markets, have increased dramatically. The fact is that many banks
had used MBSs as collateral with the Fed and when the crisis hit
their values plummeted, preventing the rebalancing needed in light
of higher home loan borrower default rates. As a result, banks were
left with just one alternative when default rates shot up: decreasing
credit availability by lending less and calling in outstanding loans
[Hunter (2009)]. The implications were that the interbank market
dried up and lending to businesses disappeared, which had second-
ary and tertiary impacts on different assets of other institutions.
The reality of the situation is that VaR, and all other risk manage-
ment methodologies, rely on data that are backward looking and
incomplete, and that fail to account for the important events that
financial institutions really need to look out for. These risk manage-
ment tools behave like the ratings that rating agencies issue. It
really does not matter whether a firm has a BBB rating rather than a AAA rating if neither experiences default. All that happens is that the investors in the BBB issue earn a higher return for the same risk. The differences in ratings matter when the BBB firm defaults while the AAA firm does not, and we have seen that the granularity that investors expect simply does not exist. All borrowers become equally risky when the situation turns sour. Citigroup is a
perfect example of an institution that went from being a superstar
to a destroyed entity.
Most importantly, all risk measurements are far more subjective than
many want to accept. They are based on individual decisions about
what should be incorporated into the model and what should not.
The problem sadly is that even if the methodologies that were
provided were reliable, which they are far from being, the current
structures of most financial institutions make instituting effective
risk management policies literally impossible. And it is these institu-
tional intricacies that we will turn to in the next section.
Why risk management is a panacea in today's financial environment

Even if we choose to ignore the fact that different instruments have
different risk profiles and follow disparate distributions, we cannot
ignore the fact that they are based within different silos of the
institution that is trying to manage its risk.
These divisions have different management styles, compensation
structures, and risk appetites. More importantly, in order to deter-
mine the firm’s overall risk appetite we need to be able to firstly
place each of the risks within their correct buckets, such as credit,
market, operational, etc. The reality of the situation is that most
institutions have major difficulties in quantifying the risks of their
credit and market instruments, let alone the firm’s operational or
liquidity risks.
Most institutions have only recently started to come to grips with
the fact that operational risk is a major risk and that it must be
managed. Learning how to quantify it will take years, if not decades.
Once that is done, we need to be able to compute the operational
risks of each division. Many institutions still segregate their different businesses; hence, it is nearly impossible to quantify operational
risk for the group and determine what diversification benefits could
be derived. For example, many institutions combine credit and FX
instruments within the same divisions, others keep them separate.
Some combine complex instruments with the fixed income division,
others with equities. In some firms FX, credit, and equities each
have their own quants teams, whose risks no one can understand.
As a result, the firm will have a view on the risk of each silo, but
will not be able to aggregate them in any useful way for the group.
Similarly, since they are sitting in different parts of the business it
is difficult to accurately determine the correlation benefits that the
firm is experiencing in a meaningful way.
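A back-of-the-envelope sketch of the aggregation problem (hypothetical numbers, not from the paper): under a simple normal-approximation roll-up, the diversification benefit the group reports depends entirely on the cross-silo correlation matrix it assumes, and that benefit evaporates when correlations merge toward one in a crisis.

```python
import numpy as np

# Hypothetical: three silos each report a standalone 1-day VaR of $10m.
standalone = np.array([10.0, 10.0, 10.0])

def group_var(standalone, corr):
    """Normal-approximation aggregation: VaR_group = sqrt(v' C v)."""
    return float(np.sqrt(standalone @ corr @ standalone))

benign = np.full((3, 3), 0.2)   # assumed cross-silo correlation of 0.2
np.fill_diagonal(benign, 1.0)
crisis = np.ones((3, 3))        # correlations merge toward 1 in a downturn

print(group_var(standalone, benign))  # ~20.5 -- looks comfortably diversified
print(group_var(standalone, crisis))  # 30.0 -- the diversification benefit is gone
```

If the silos cannot even agree on a common measurement basis, the correlation matrix in this roll-up is guesswork, and the reported group-level figure inherits that guesswork.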
The other problem is that there is just too much data to deal with, even if we are able to aggregate them. Risk management teams face a hosepipe of data that they have to decipher, and its contents change with every trade and even with the hour of the day.
Even if so much data could be effectively analyzed, the next tough task is to present it in a useful way to the management, since it is they who determine the firm's overall risk appetite and not the risk management division. Once the management sets the parameters, it is then the risk management team's job to ensure that everyone operates within the set boundaries.
These kinds of complex reports are very hard for the management to understand, and the simpler they are made, the more specificity about risk has to be sacrificed. The result is that the firm either does not allocate adequate capital to the risks it faces or allocates too much, limiting some of the firm's profit potential.
Even if the firm was fully intent on getting all the available informa-
tion and it was able to correctly label all of the risks it faces and
place them in the correct buckets, IT and human aspirations will still
make their task impossible.
implications of IT and human greed

While academic literature does account for personal greed, and a number of papers have been written on the subject by Michael Jensen and his many colleagues and students [Jensen and Meckling (1976)], it is usually overlooked when it comes to subjects that deal with anything other than corporate finance. For some reason, there is an assumption that just like
investors all behave rationally, individuals are also mostly honest, and as a result models should work.

1 Shojai, S., 2009, "Why the rush to repay TARP?" Riskcenter, May 12
The fact is that any risk model is only as reliable as the data that is fed into it, and financial executives are inclined to reduce the risk allocated to their activities insofar as possible so that they can take on greater risks. After all, their compensation is based purely on the returns they generate. They hold an option on the bank: they share in the gains but not the losses. Consequently, the higher the risk, the higher the return, with a floor attached.
Given the complexity of many of the instruments that financial
institutions are dealing with it is only natural that they will be col-
lecting underestimated risk profiles from the traders and engineers.
Furthermore, because of the silo structures many firms will find
that they have assets that are highly correlated with each other but
sitting in different silos. As a result, during the downturn the firm
might find itself significantly more exposed than it thought and not
as capitalized as necessary.
The desire to underestimate risk is not limited to the traders or engineers sitting on the quants desks; the firm's own management is compensated on annual, if not quarterly, performance. Hence they too would like to take more risks than they should be permitted to. And none of these people are stupid. The traders know that if the situation goes sour they will be bailed out by the bank and at worst lose their jobs, but if the bet pays off they will be able to retire with savings that even their grandchildren cannot spend. And the management knows that if the bank comes close to failure it will be bailed out by the government, provided it is important enough to the local or global economy. The moral hazard problem, as some had predicted1, has become severely worse, since banks are now even more confident about taking big risks.
Sadly, however, there is nothing that can be done about that. And
the different Basle concordats will in no way help, since they are
based on parameters that have no connection with the real world.
They are arbitrary numbers plucked from the air. The BIS is today as effective in preventing financial risk as the U.N. is in preventing global conflicts.
In addition to the human aspects of risk management, most firms
are unable to get a holistic view of their risk due to the fact that
their IT systems are simply unable to provide that kind of data. Most
financial institutions are unable to determine the actual cost of
the instruments they develop let alone their risk, and how it might
change over time.
In today’s world very few financial institutions have not undergone
some sort of a merger or acquisition, with the pace increasing in the
past couple of years. The result is a spaghetti like infrastructure of
systems that are simply unable to communicate with one another
[Dizdarevic and Shojai (2004)]. Incompatible systems make the
development of a viable enterprise-wide risk management close to
impossible, since without accurate and timely data the firm will be ill
prepared to respond to changes in asset compositions and prices.
And, sadly that situation will remain for many years to come.
Despite the huge annual investments made by major financial insti-
tutions, different systems supporting different instruments are sim-
ply unable to share information in any meaningful way. This means
that no matter how remarkable the models used are, the data that
they will be dealing with are incomplete.
Consequently, despite the best of ambitions, the dream of having
a reliable and effective enterprise-wide risk management shall
remain just that, a dream. And as long as firms remain incapable
of determining their own exposures, irrespective of which of the
reasons mentioned above is the main culprit, they will continue to
face enormous risks at times when markets correct themselves rapidly; sadly, the precedent set in the recent crisis does nothing but add fuel to the fire. Financial institutions are now even more confident about taking risks than they were two years ago, which means that the next crisis can only be greater than the one we just lived through.
conclusion

In this, the third article in the Economists' Hubris series, we turn our
attention to not only the hubris of economists but also the hubris
of practitioners, be they bankers or regulators. We find that while
a number of models have been developed to help financial institu-
tions manage their risk, none is really that reliable when placed under strict scientific examination. Like other economic disciplines, risk management tends to be effective solely within the confines of the academic institutions in which its models are developed.
However, the models are not our only problem. In order to institute
effective risk management systems and policies at financial institu-
tions we first need to be able to collect reliable data, which given
today’s operational and IT infrastructures is literally impossible to
do. Risk data sit in a number of silos across the globe without any viable way of aggregating them usefully. It is this issue that needs to be dealt with first, before we work on developing new models that can effectively manage those risks.
The sad fact is that just like the academics who develop these mod-
els, the practitioners who use them also assume that they work.
Unlike asset pricing where there is a clear gap between academic
thinking and business application, when it comes to enterprise-wide
risk management, both are equally wrong. Consequently, practitio-
ners, both bankers and their regulators, were living under a false
sense of security that was shattered with the recent financial crisis.
The current governmental and regulatory responses are focusing
on the periphery. The real issue is that banks should be forced
to improve their operational and IT infrastructure in order to be
able to get a holistic view of the risks that are sitting within the
many pockets of their organizations. Our aim with this paper is to
highlight the main issues that financial institutions face in instituting effective risk management systems and policies, so that public policy debates are redirected to the issues that truly matter and not those that simply attract press attention.
References

Boudoukh, J., M. Richardson, and R. Stanton, 1997, "Pricing mortgage-backed securities in a multifactor interest rate environment: a multivariate density estimation approach," Review of Financial Studies, 10, 405-446
Dizdarevic, P., and S. Shojai, 2004, "Integrated data architecture – the end game," Journal of Financial Transformation, 11, 62-65
Fabozzi, F. J., and V. Kothari, 2007, "Securitization: the tool of financial transformation," Journal of Financial Transformation, 20, 33-45
Fragnière, E., J. Gondzio, N. S. Tuchschmid, and Q. Zhang, forthcoming 2010, "Non-parametric liquidity-adjusted VaR model: a stochastic programming approach," Journal of Financial Transformation
Hand, D. J., and G. Blunt, 2009, "Estimating the iceberg: how much fraud is there in the U.K.," Journal of Financial Transformation, 25, 19-29
Hand, D. J., and K. Yu, 2009, "Justifying adverse actions with new scorecard technologies," Journal of Financial Transformation, 26, 13-17
Hunter, G. W., 2009, "Anatomy of the 2008 financial crisis: an economic analysis post-mortem," Journal of Financial Transformation, 27, 45-48
Jacobs, B. I., 2009, "Tumbling tower of Babel: subprime securitization and the credit crisis," Financial Analysts Journal, 66:2, 17-31
Jensen, M. C., and W. H. Meckling, 1976, "Theory of the firm: managerial behavior, agency costs and ownership structure," Journal of Financial Economics, 3:4, 305-360
Jorion, P., 2006, Value at Risk: the new benchmark for managing financial risk, 3rd ed., McGraw-Hill
Keys, B. J., T. Mukherjee, A. Seru, and V. Vig, 2010, "Did securitization lead to lax screening? Evidence from subprime loans," Quarterly Journal of Economics, forthcoming
Kuester, K., S. Mittnik, and M. S. Paolella, 2006, "Value-at-Risk prediction: a comparison of alternative strategies," Journal of Financial Econometrics, 4:1, 53-89
Markowitz, H., 1952, "Portfolio selection," Journal of Finance, 7:1, 77-91
Reinhart, C. M., and K. Rogoff, 2009, This time is different: eight centuries of financial folly, Princeton University Press
Shojai, S., 2009, "Economists' hubris – the case of mergers and acquisitions," Journal of Financial Transformation, 26, 4-12
Shojai, S., and G. Feiger, 2009, "Economists' hubris: the case of asset pricing," Journal of Financial Transformation, 27, 9-13
Smithers, A., 2009, Wall Street revalued: imperfect markets and inept central bankers, John Wiley & Sons
Articles
Best practices for investment risk management
Jennifer Bender, Vice President, MSCI Barra
Frank Nielsen, Executive Director, MSCI Barra

Abstract

A successful investment process requires a risk management
structure that addresses multiple aspects of risk. In this paper, we
lay out a best practices framework that rests on three pillars: risk
measurement, risk monitoring, and risk-adjusted investment man-
agement. All three are critical. Risk measurement means using the
right tools accurately to quantify risk from various perspectives.
Risk monitoring means tracking the output from the tools and flag-
ging anomalies on a regular and timely basis. Risk-adjusted invest-
ment management (RAIM) uses the information from measurement
and monitoring to align the portfolio with expectations and risk
tolerance.
The last 18 months have brought risk management to the forefront
and highlighted the need for guidance on best practices for inves-
tors. Many institutional investors were surprised by the violent mar-
ket moves during the current crisis. Some have argued that current
risk management practices failed when they were needed most, and
with multi-sigma events extending across formerly uncorrelated
asset classes, investors have questioned the very meaning of the
term ‘well diversified portfolio.’ What does sound risk management
mean for plans, foundations, endowments, and other institutional
investors? How should these institutions think about best practices
in risk management? We start with three guiding principles:
1 Risk management is not limited to the risk manager; anyone involved in the investment process, from the CIO to the portfolio managers, should be thinking about risk — risk management should not be limited to an after-the-fact reporting function but must be woven into the investor's decision-making process, whether it is the asset allocation decision or the process for hiring managers. Those responsible for asset allocation and management should be risk managers at heart and consider risk and return tradeoffs before making investment decisions.
2 If you cannot assess the risk of an asset, maybe you should not invest in it — for those institutions invested in alternative asset classes, such as private equity and hedge funds, or who have exposure to complex instruments, such as derivatives and structured products, the risk management requirements have greatly increased. These investors need a framework for managing risk that far exceeds what was needed for the plain vanilla stock and bond investing that prevailed only ten years ago. We argue that one should assess one's risk management capabilities before making the decision to invest in certain asset types.
3 Proactive risk management is better than reactive risk management — being prepared for unlikely events is perhaps the most important lesson of the recent crisis. This applies to both market and nonmarket risks, such as counterparty, operational, leverage, and liquidity risk. Addressing this issue transcends the simple use of the output of models and tools. It requires an institutional mindset that analyzes the global economic outlook, understands the aggregate portfolio exposures across asset classes, and is willing to use the model output intelligently to align the portfolio structure with the plan sponsor's assessment of the risks that may impact the portfolio.
In this paper, we propose a risk management framework that:
■ Is aligned with the investment objectives and investment horizon.
■ Tackles multiple aspects of risk and is not limited to a single measure like tracking error or Value at Risk (VaR).
■ Measures, monitors, and manages exposures to economic and fundamental drivers of risk and return across asset classes to avoid overexposures to any one risk factor.
■ Manages risk for normal times but is cognizant of and aims to be prepared for extreme events.
We developed this framework with institutional investors and
their risk management challenges in mind, but this framework
can be adapted easily to the requirements of asset managers. In
the first section of the paper, we describe a framework that takes
into account the three guiding principles. In the second section of
the paper, we illustrate this framework in more detail and provide
examples for its implementation.
Three pillars for risk management

Risk management has evolved rapidly over the last few decades,
marked by key developments like the adoption of the 1988 Basel
Accord and significant episodes like the U.S. Savings and Loan crisis,
the collapse of LTCM, the ‘Dot.com’ bust, and the recent financial
crisis. However, the degree to which various risk methodologies
and practices have been implemented by institutional investors has
varied. In particular, there remains a wide range in how market par-
ticipants (pension plans, endowments, asset managers, hedge funds,
investment banks, etc.) have integrated the risk management func-
tion. Best and Reeves (2008), for example, highlight the divergence
in risk management practices between buy-side and sell-side institu-
tions, the latter being subject to greater regulatory pressure.
Our goal is to establish a framework for sound market risk man-
agement for institutional investors. We rely on three pillars:
risk measurement, monitoring, and management (or risk-adjusted
investment management, RAIM). Risk measurement refers to the
tools institutional investors use to measure risk. Risk monitoring
focuses on the process of evaluating changes in portfolio risk over
time. RAIM refers to how investors may adjust their portfolios in
response to expected changes in risk. Robust risk management
integrates all three areas.
The risk manager’s toolkit may include a variety of measures captur-
ing different views of risk. Figure 1 illustrates one way of categorizing
the suite of tools needed. We distinguish between risk measures for
[Figure 1 – Structure for risk measurement and monitoring: a matrix of risk measures organized by Alpha (active risk) versus Beta (total risk) and by normal versus extreme conditions. The measures listed include tracking error, contribution to tracking error, active exposures, benchmark misfits, and sources of return/exposures; asset class volatility/beta and contributions to total risk; stress testing of active bets and of asset classes; active and total contributions to tail risk; and maximum (active) drawdown and contagion effects.]
1 Performance attribution, the attribution of realized returns to a set of exposures times factor returns, can provide valuable insight to risk managers seeking to identify where their investments or managers added value. In addition, it can highlight similarities between asset groups or managers' strategies in a way that is far more informative than looking at historical inter-manager correlations alone.
2 The Barra Integrated Model (BIM) is such a multi-asset class model for forecasting the asset- and portfolio-level risk of global multi-asset class portfolios or plans. The model begins with a detailed analysis of individual assets from 56 equity markets and 46 fixed income markets to uncover the factors that drive their risk and return. The assets, both equity and fixed income, together with commodities, hedge funds, and currencies, are then combined into a single risk model. This makes it suitable for a wide range of investment purposes, from conducting an in-depth analysis of a single-country portfolio to understanding the risk profile of a broad set of international investments across several asset classes.
3 VaR captures the expected loss at some threshold, while Expected Shortfall captures the expected loss once that threshold has been exceeded. Maximum drawdown is defined as the largest drop from a peak to a bottom in a certain period.
normal and extreme times as well as risk measures that relate to
absolute losses or losses relative to a benchmark. On one hand, insti-
tutional investors need to manage the total risk of their investments,
which means protecting themselves from asset-liability deficits,
declines in broad asset classes, and more generally, any losses large
enough to make it difficult to meet the investor’s obligations. On the
other hand, institutions need to manage the risk of managers under-
performing their benchmarks, which involves monitoring the tracking
error and performance relative to the assigned benchmark.
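The two sides of this mandate can be sketched in a few lines (the return series below are simulated and purely illustrative): total volatility captures the absolute dimension, while tracking error captures deviation from the assigned benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 252  # one year of daily observations

# Hypothetical daily returns: a benchmark, and a manager who tracks it
# closely but takes some active risk around it.
bench = rng.normal(0.0003, 0.01, T)
port = bench + rng.normal(0.0, 0.002, T)

ann = np.sqrt(252)  # annualization factor for daily volatility
total_vol = port.std(ddof=1) * ann            # absolute (total) risk
tracking_error = (port - bench).std(ddof=1) * ann  # relative (active) risk

print(f"total volatility ~{total_vol:.1%}, tracking error ~{tracking_error:.1%}")
```

A portfolio can simultaneously look safe on one dimension and risky on the other, which is why both must be monitored.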
To assess future risks, it is essential to measure and monitor risk
both at the aggregate level and at the factor level. For risk mea-
surement, most institutional investors measure aggregate portfolio
risk with volatility or tracking error, which rely on individual volatili-
ties and correlations of asset classes and managers. However, while
volatility, tracking error, and correlations capture the overall risk of
the portfolio, they do not distinguish between the sources of risk,
which may include market risk, sector risk, credit risk, and interest
rate risk, to name a few. For instance, energy stocks are likely to
be sensitive to oil prices, and BBB corporate bonds are likely to
be sensitive to credit spreads. Sources of risk, or factors, reflect
the systematic risks investors are actually rewarded for bearing
and are often obscured at the asset class level [Kneafsey (2009)].
Institutional investors can decompose portfolio risk using a factor
model to understand how much return and risk from different asset
classes or managers resulted from prescribed factor exposures in
the past1, or how much risk to expect going forward.2
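A minimal sketch of such a decomposition, using a generic linear factor model rather than any particular commercial one (the weights, exposures, factor covariances, and specific risks below are all hypothetical):

```python
import numpy as np

# Generic factor model: portfolio variance = w'(B F B' + D)w, where B holds
# factor exposures, F the factor covariance matrix, and D specific variances.
w = np.array([0.6, 0.4])          # weights: equity sleeve, corporate bond sleeve
B = np.array([[1.0, 0.1],         # exposures to [market, credit] factors
              [0.2, 1.0]])
F = np.diag([0.04, 0.01])         # factor variances (annualized)
D = np.diag([0.005, 0.003])       # asset-specific variances

cov = B @ F @ B.T + D
total_var = float(w @ cov @ w)

x = B.T @ w                       # the portfolio's aggregate factor exposures
factor_var = float(x @ F @ x)     # risk explained by systematic factors
specific_var = float(w @ D @ w)   # residual, asset-specific risk

print(total_var, factor_var + specific_var)  # the two parts add up exactly
```

The split shows how much of total risk comes from rewarded systematic factors versus idiosyncratic noise, which the asset-class view alone obscures.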
Risk monitoring enables institutions to monitor changes in the
sources of risk on a regular and timely basis. For instance, many
well diversified U.S. plans saw a growing exposure to financial sec-
tor, housing, and credit risk from 2005-2006. While risk managers
may not have foreseen a looming correction, the ability to monitor
these exposures would have at least alerted them to the risks in the
event of a correction.
Portfolio decomposition plays an important role in stress testing.
Here, the sources of risk are stressed by the risk manager to assess
the impact on the portfolio. Stress testing is flexible in enabling risk
managers to gauge the impact of an event on the portfolio. The
stress scenario might be real or hypothetical, commonplace or rare,
but stress tests are used typically to assess the impact of large and
rare events. Scenarios can come in different flavors, such as macro
shocks, market shocks, or factor shocks. The intuition behind stress
testing for market risk can be applied to nonmarket or systemic
risks, such as leverage and liquidity risk. When leverage and liquidity shocks occur, as in 2008, they may result in unexpected increases
in investment commitments for which no immediate funding source
is available. While largely unpredictable, the impact of such shocks
can be analyzed using stress tests. Below we list a number of stress
test categories that investors might employ on a regular basis to
assess the immediate impact on the portfolio as well as the change
in impact over time.
Systemic shock
■ Liquidity shock
■ Leverage shock
Macro shock
■ Interest rate shock
■ Oil price shock
Market-wide shock
■ Market-wide decline in equity prices
Targeted shock
■ U.S. value stocks hit
■ Japan growth stocks hit
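The simplest form of such a test is a linear one: approximate portfolio P&L as factor exposures times hypothetical shocks. The exposures and scenarios below are invented purely to illustrate the shock categories above.

```python
# Hypothetical exposures: portfolio P&L sensitivity in $m per unit shock.
exposures = {
    "equity_beta": 2.0,   # $m per 1% equity market move
    "ir_dv01": -1.5,      # $m per 1bp parallel rise in rates
    "oil": 0.3,           # $m per $1 move in the oil price
}

# Hypothetical scenarios, expressed as factor shocks.
scenarios = {
    "market-wide equity decline": {"equity_beta": -20.0},             # -20%
    "rate shock": {"ir_dv01": 100.0},                                 # +100bp
    "macro (oil) shock": {"oil": 40.0, "equity_beta": -5.0},          # joint move
}

def stress_pnl(exposures, shock):
    """Linear stress P&L: sum of exposure x shock over shocked factors."""
    return sum(exposures[f] * s for f, s in shock.items())

for name, shock in scenarios.items():
    print(f"{name}: {stress_pnl(exposures, shock):+.1f} $m")
```

Rerunning the same scenarios periodically shows not just the immediate impact but how the portfolio's vulnerability to each scenario drifts over time.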
Our discussion of stress testing segues naturally into the problem
of managing tail risk, or the risk of some rare event occurring.
Whereas stress tests do not address the likelihood of extreme
shocks occurring, other methods for analyzing tail risk do. This
recent period of turmoil has acutely highlighted both the impor-
tance of managing tail risk and the inadequacy of generic tail risk
measures, such as parametric VaR.
While the simplest measure of parametric VaR assumes that
returns are normally distributed, more sophisticated methods
for calculating VaR do not. These span a wide range of modeling
choices that may rely on parametrically or non-parametrically
specified non-normal distributions, or Extreme Value Theory. For
a more detailed discussion on the latter, we refer to Barbieri et al.
(2009) and Goldberg et al. (2009). Other measures of tail risk, such
as Expected Shortfall (conditional VaR) and Maximum Drawdown,3
seek to capture a different and potentially more relevant facet of
tail risk. In general, turbulent times highlight the need for moni-
toring appropriate tail risk measures. Such times also call for the
frequent reporting of exceptional developments, i.e., reporting that
highlights unusual changes in risk measures or increases in expo-
sure to certain factors.
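Maximum drawdown, for instance, can be computed directly from a return path; the sketch below uses a toy five-period series to illustrate the peak-to-trough definition.

```python
import numpy as np

def max_drawdown(returns):
    """Largest peak-to-trough drop of cumulative wealth over the period."""
    wealth = np.cumprod(1.0 + np.asarray(returns))   # cumulative wealth path
    peaks = np.maximum.accumulate(wealth)            # running maximum
    return float(((peaks - wealth) / peaks).max())   # worst relative drop

# Toy path: +10%, -20%, +5%, -10%, +30%.
# The peak is 1.10; the trough is 1.10 * 0.80 * 1.05 * 0.90, so the
# maximum drawdown is about 24.4% even though the path ends near flat.
print(max_drawdown([0.10, -0.20, 0.05, -0.10, 0.30]))
```

Unlike VaR, this measure depends on the path of losses, not just their one-period distribution, which is why it can flag risks that a daily VaR report smooths over.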
Before we move on to the third pillar, RAIM, it is important to point
out that risk monitoring requires the necessary IT and infrastruc-
ture resources for support. First, accurate data is essential, as is
sufficient coverage of the assets held in the portfolio. Delays in a
40 – The journal of financial transformation
Best practices for investment risk management
risk manager’s ability to view changes in holdings, prices, or char-
acteristics are often caused by infrastructure limitations. In some
cases, data may not be readily available, or the resources required
to collect data from custodians or individual managers may be
prohibitively expensive. In addition, hard-to-model assets, such as
complex derivatives, hedge funds, and private equity, can pose a
challenge for even the most advanced systems. In sum, institutions
should consider the costs of implementing the necessary risk man-
agement systems when they decide in which assets to invest.
One consequence of the current crisis may be that investors
become more cautious when they choose their investments.
Warren Buffett, for example, commented at his recent shareholder
meeting on complex calculations used to value purchases: “If you
need to use a computer or a calculator to make the calculation, you
shouldn’t buy it.” Even though that statement may be extreme, the
point is well taken. The damage that exotic, illiquid, and hard-to-val-
ue instruments have triggered over the last 18 months highlighted
the need to be able to assess the risks of such investments before
money is allocated to them.
The third pillar in our framework is RAIM, which puts risk measure-
ment and monitoring outputs into action. While risk measurement
provides the measures, and risk monitoring ensures that the
measures are timely and relevant, without the ability to make
adjustments to the portfolio, this information is of limited value
for protecting the investor against losses. RAIM aligns the invest-
ment decision-making process with the risk management function.
For instance, RAIM might be used to make portfolio adjustments
as either the correlations between assets or managers rise or the
probability of certain tail risk or disaster scenarios increases. RAIM
could also facilitate the management of risks coming from certain
sources of return, or it could aid in better diversifying the portfolio.
Specifically, RAIM could be used in the development of overlay
strategies that would facilitate certain hedges, such as currency
hedges, or tail risk insurance.
As an example, the declines in the broad equity market last year
caused many pension plans to become underfunded. Decision-
makers may decide that their tolerance for losses should be limited
to a specific percentage. They should then decide whether that limit
should be maintained through a passive hedge or through a trigger
mechanism defined by the breach of clearly defined parameters of
a risk measure. Some pension plans started hedging their equity
exposure to limit downside risk, though for many it was too late.
One reason why pension plans may not have hedged their market
exposure more frequently is the cost of hedging. Hedging reduces
the performance of the portfolio in up markets, but in periods when
the market declines, hedging limits the downside. Figure 2 illus-
trates a successful market hedge that includes a stop-loss plan at a
point when assets drop below a specified level.
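A stylized sketch of such a trigger mechanism (the return path, the 25% loss limit, and the monthly hedging cost below are all hypothetical): the insured plan pays a steady premium, and once its cumulative loss breaches the floor, the hedge takes effect and further losses are capped.

```python
# Hypothetical monthly returns: a calm year, a four-month crash, a rebound.
returns = [0.01] * 12 + [-0.15, -0.20, -0.15, -0.18] + [0.02] * 8

floor = -0.25   # decision-makers tolerate at most a 25% cumulative loss
cost = 0.002    # monthly cost of the insurance overlay

wealth = insured = 1.0
stopped = False
for r in returns:
    wealth *= 1.0 + r                 # uninsured plan rides the crash down
    if not stopped:
        insured *= 1.0 + r - cost     # insured plan pays a steady premium
        if insured <= 1.0 + floor:    # breach of the loss limit triggers
            stopped = True            # the hedge: flat from here on

print(f"uninsured {wealth - 1:+.1%}, insured {insured - 1:+.1%}")
```

The insured plan lags slightly in the calm year because of the premium, but it sidesteps the back half of the crash, which is exactly the trade-off described above.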
All three pillars — risk measurement, risk monitoring, and RAIM — are
indispensable to a complete risk management structure. Figure 3
summarizes the three pillars, illustrated with specific examples.
The Figure uses the same idea we presented before, namely, that
risk measures can be categorized by normal and extreme times and
relative versus absolute investment objectives.
implementing a market risk management framework

In practice, the needs of institutional investors can be wide ranging,
and their ideal measurement, monitoring, and managing capabili-
ties will differ. In this section, we illustrate the case of a hypotheti-
cal but typical U.S. plan sponsor. Although there may be additional
criteria, the three critical drivers of risk management requirements
are as follows:
1 Return requirements — the plan’s liabilities or expected pay-
outs will influence not only the assets in which it invests but
also which benchmarks are used and how much it can lose over
[Figure 2 – Risk-adjusted investment management to protect against downside risk: cumulative returns of an uninsured versus an insured portfolio, Dec-06 to Feb-09. The plan bears the cost of insurance during normal markets but benefits from large unexpected drops due to systemic blow-ups once portfolio insurance takes effect.]
Normal/Total — Risk measurement: volatility; Risk monitoring: monitor sources of volatility; RAIM: limit exposure to biggest sources of volatility
Normal/Active — Risk measurement: tracking error; Risk monitoring: monitor sources of tracking error; RAIM: limit exposure to biggest sources of tracking error
Extreme/Total — Risk measurement: stress tests/tail risk measures; Risk monitoring: monitor expected shortfall of the total plan; RAIM: implement portfolio insurance plan
Extreme/Active — Risk measurement: stress tests/tail risk measures; Risk monitoring: monitor changes in potential active losses if market declines by X%; RAIM: ask managers to limit exposures to certain sources of risk
Figure 3 – Three pillars of risk management
certain periods. The latter, in turn, may drive how much risk it is
willing to take and with how much exposure to certain sources of
return/risk it is comfortable.
2 Investment horizon — the plan’s investment horizon, or willingness
to sustain shorter-term shocks, will influence which risk
measures are appropriate and how frequently they need to be
monitored.
3 Complexity of investments — plans that invest in difficult-to-value
assets with potentially non-normal return distributions
or unusually high exposure to tail events require additional risk
measures, higher monitoring frequencies, and advanced RAIM
capabilities.
These criteria are naturally linked, although the degree of impor-
tance might vary from plan to plan. For instance, return require-
ments may play the primary role in driving the choice of instru-
ments and asset classes for some plans, while they may play a
less important role for other plans. For some plans, the investment
horizon is tied directly to their return requirements, while for oth-
ers, it is more a function of how much they are willing to lose over a
given period. Regardless, these three criteria determine the guiding
principles for any plan’s risk management function.
For example, a plan sponsor with reasonable and relatively
infrequent payout obligations, a large surplus, and with limited
exposure to alternatives and complex instruments does not need
short-term measures or frequent monitoring. This plan would
benefit from focusing on longer-term risk measures. Instead of
setting up a system to calculate 10-day VaR measures, the plan
could focus on how multiyear regime shifts in different risk fac-
tors, such as interest rate cycles, may affect the portfolio’s value.
In contrast, a plan with frequent and significant expected payouts,
a limited ability to sustain short-term losses, and with substantial
exposure to alternatives and complex instruments would require
a wide variety of risk measures, frequent risk monitoring, and a
well developed RAIM process. Most plans are likely to fall between
these two extremes.
Another example may help to shed light on these ideas. Below, we
group investors into one of three categories using the third criterion —
complexity of instruments and asset classes.
A Type-1 plan invests in a straightforward allocation to equities and
fixed income. Equity and fixed income allocations may be limited
to the domestic market, and fixed income investments are mostly
concentrated in government bonds and AAA-rated corporate bonds. A
Type-2 plan may invest in equities globally, including emerging
markets. Fixed income investments may include high yield bonds
and mortgage-backed securities, and the plan may also invest in
alternatives and complex derivatives. However, as a percentage of
the plan’s total value, the investments in alternatives and deriva-
tives would be small. A Type-3 plan would invest in a variety of
alternative asset classes as well as complex instruments but to a
larger extent than Type-2 plans.
Currently, the vast majority of pension plans fall into the second
group. We, therefore, consider a hypothetical Type-2 plan for our
illustration of a risk management framework. Its asset allocation
is as follows: equity, 60% (U.S. 36%, international 24%); U.S. fixed
income, 25%; alternatives, 15% (real estate 5%, private equity 5%,
hedge funds 5%).
A first critical step is to adopt tools that enable the plan to mea-
sure and monitor risk at the source, or factor level, and not just at
the aggregate level. The plan should then monitor its exposure to
different risk sources on a regular basis — monthly or quarterly at
the very least. This would occur at the total portfolio level, look-
ing across asset classes. In addition, the plan can require from its
managers more detailed information on risk exposures. Many plans
receive only high-level performance summaries focusing on real-
ized returns and tracking error. Requiring estimates of exposures
to various sources of risk is a crucial extension. For illustration, we
show how this would fit in the framework we have used so far:
Total risk:
■ Normal periods — the plan can evaluate sources of return and
risk across its overall portfolio using a multi-asset class factor
model. Sources of risk can include macroeconomic and market
factors. An example of the type of analysis that can be done is
to look at the performance of the plan’s portfolio in different
macroeconomic regimes. The plan could then adjust its asset
allocation during the next review period.
■ Extreme events — using the sources of risk for the overall portfolio,
the plan can shock certain factors or sources of risk, i.e.,
those likely to suffer in the event of a market dislocation, includ-
ing a systemic meltdown or series of external shocks. These
stress tests would enable the plan to evaluate how individual or
multiple simultaneous shocks impact the overall portfolio (i.e.,
tail risk and tail correlations). The plan could then establish an
action plan if asset values drop by some absolute or relative (i.e.,
to liabilities) amount.
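As a rough sketch of how such a factor-shock stress test can be wired up (all factor names, exposures, and shock sizes below are invented for illustration, not the plan’s actual figures):

```python
# Hypothetical factor-shock stress test: the portfolio's P&L impact is
# approximated as the dot product of factor exposures and factor shocks.
# Exposures are expressed as fractions of total plan value (assumed).

exposures = {
    "equity_market": 0.60,   # 60% of plan value moves 1-for-1 with equities
    "credit_spreads": 0.15,  # spread-sensitive holdings
    "real_estate": 0.05,
}

scenario = {                  # an assumed systemic-meltdown scenario
    "equity_market": -0.40,   # equities fall 40%
    "credit_spreads": -0.20,  # spread-sensitive assets lose 20%
    "real_estate": -0.25,
}

def stress_pnl(exposures, scenario):
    """Linear approximation of the portfolio loss under the scenario."""
    return sum(exposures[f] * scenario.get(f, 0.0) for f in exposures)

loss = stress_pnl(exposures, scenario)
print(f"Estimated portfolio impact: {loss:.1%}")
```

Cross terms and nonlinearities are ignored here; a production stress test would reprice positions rather than rely on a linear approximation.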
Active risk:
■ Normal periods — the plan can ask for reports on their sources
of risk from all equity, fixed income, and alternatives managers,
or the plan can estimate them internally. Sources of active risk
should be detailed and focused on the specific risk and return
drivers of the manager’s investment strategy. For instance,
analyzing a value, small cap equity manager’s tracking error will
focus on the active bets relative to the agreed upon benchmark,
ideally a small cap value benchmark like the MSCI Small Cap
Value Index. Measuring and monitoring will focus on questions
42 – The journal of financial transformation
of active bets relative to this benchmark, for example, does the
manager’s portfolio have a consistent small cap value bias or did
the portfolio move towards growth-oriented companies over the
last few years when value stocks underperformed? The active
risk analysis should ensure that the hired manager is following
his or her mandate and is not deviating from the agreed upon
guidelines.
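On the measurement side of this monitoring, ex-post tracking error is simply the dispersion of active returns versus the agreed upon benchmark. A minimal sketch, using invented monthly return series:

```python
import statistics

# Hypothetical monthly returns for a small cap value manager and its
# benchmark (assumed numbers, for illustration only).
manager   = [0.021, -0.013, 0.034, 0.008, -0.027, 0.015]
benchmark = [0.018, -0.010, 0.030, 0.011, -0.031, 0.012]

def tracking_error(mgr, bmk, periods_per_year=12):
    """Annualized ex-post tracking error: stdev of active returns."""
    active = [m - b for m, b in zip(mgr, bmk)]
    return statistics.stdev(active) * periods_per_year ** 0.5

te = tracking_error(manager, benchmark)
print(f"Annualized tracking error: {te:.2%}")
```

In practice the series would be much longer, and the active returns would then be decomposed by risk factor to answer the style-drift questions above.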
■ Extreme events — the plan may want to stress test the impact
of the joint underperformance of a number of active strategies
that historically have been uncorrelated. For example, during
the quant meltdown in August 2007, a number of return factors
suddenly became highly correlated, leading to severe
underperformance of portfolios relative to their respective benchmarks.
Such stress tests enable the plan to evaluate how shocks impact
an entire group of managers. Other useful measures for rare
events are tail risk and tail risk correlations of active bets across
managers. Certain factors or strategies may become highly cor-
related across asset classes or markets during crises (i.e., value
or momentum across equity markets) and could lead to vastly
higher losses relative to their respective benchmarks than esti-
mated by the tracking error.
The above examples illustrate how a model that decomposes risk
along its sources can help institutions evaluate different types of
risk across different dimensions. It can be applied similarly to real-
ized returns in order to attribute past performance. For our hypo-
thetical Type-2 plan, a suggested set of minimum components for
risk management may include:
■ Aggregate measures of volatility and tracking error across managers and asset classes.
■ An accurate decomposition of return and risk across asset classes, utilizing an integrated (across asset classes) multi-factor risk model.
■ A stress testing framework and/or extreme risk measures for understanding tail risk and tail risk correlations in the portfolio.
■ An appropriate set of benchmarks.
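The decomposition component rests on the standard factor-model identity: portfolio variance equals the factor variance x'Fx (where x = B'w are the portfolio's factor exposures) plus residual variance. A toy sketch with two factors and assumed loadings and covariances:

```python
# Minimal factor-model risk decomposition in pure Python. All loadings,
# covariances, and the residual variance are assumptions for illustration.

weights = [0.60, 0.25, 0.15]   # equity, fixed income, alternatives

# Assumed loadings per asset class on two factors:
# [equity_market, interest_rates]
loadings = [
    [1.00, 0.10],   # equity
    [0.05, 0.90],   # fixed income
    [0.40, 0.20],   # alternatives
]

# Assumed annualized factor covariance matrix F
factor_cov = [
    [0.0256, 0.0016],   # equity factor: 16% volatility
    [0.0016, 0.0049],   # rates factor:   7% volatility
]

specific_var = 0.0004   # assumed residual (idiosyncratic) variance

# Portfolio factor exposures x = B'w
x = [sum(w * loadings[i][k] for i, w in enumerate(weights)) for k in range(2)]

# Factor variance x'Fx
factor_var = sum(x[j] * factor_cov[j][k] * x[k]
                 for j in range(2) for k in range(2))

total_vol = (factor_var + specific_var) ** 0.5
print(f"Factor exposures: {x}, total volatility: {total_vol:.1%}")
```

Because the model is integrated across asset classes, the same factor exposures x can be reused for the stress tests and tail-risk measures listed above.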
The exact measures, monitoring frequencies, and RAIM processes
the plan adopts will depend on its return requirements and expect-
ed payouts, and its investment horizon and willingness to tolerate
shorter-term losses. For instance, a plan with limited ability to
withstand short-term losses may want to build out its ability to
assess tail risk over different horizons using risk measures such as
Expected Shortfall based on Extreme Value Theory [Goldberg et al.
(2009)], which is more conservative than parametric VaR. These
plans may also want to implement extensive stress tests across
asset classes and within certain subcategories of investments.
Meanwhile, plans with greater ability to withstand short-term losses
may opt for more basic tail risk measures and stress tests.
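To make the comparison concrete, the sketch below contrasts a parametric (normal) VaR with a simple historical expected shortfall. This is not the EVT-based estimator of Goldberg et al. (2009), and the return series is invented:

```python
import math
from statistics import NormalDist, mean, stdev

# Hypothetical monthly plan returns (assumed numbers, for illustration).
returns = [0.012, -0.034, 0.021, -0.082, 0.017, -0.011,
           0.025, -0.047, 0.009, -0.120, 0.031, -0.006]
alpha = 0.95

# Parametric VaR: loss quantile of a fitted normal distribution.
mu, sigma = mean(returns), stdev(returns)
var_param = -(mu + sigma * NormalDist().inv_cdf(1 - alpha))

# Historical expected shortfall: average of the worst (1 - alpha) share
# of observed returns, i.e., the mean loss beyond the empirical quantile.
n_tail = max(1, math.ceil((1 - alpha) * len(returns)))
worst = sorted(returns)[:n_tail]
es_hist = -sum(worst) / n_tail

print(f"Parametric VaR: {var_param:.1%}, historical ES: {es_hist:.1%}")
```

With fat-tailed data the expected shortfall exceeds the normal VaR, which is the sense in which it is the more conservative measure; an EVT estimator would fit the tail explicitly rather than average a handful of worst observations.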
Our example focused on a Type-2 plan. For a Type-3 plan, this illus-
tration would also be relevant, but the requirements for measuring,
monitoring, and managing different types and sources of risk would
be more extensive. For a Type-1 plan, the extent to which it invests
in risk management should depend on its liabilities structure and its
short-term risk tolerance.
Most plans, regardless of their specific characteristics, can take
some basic actions on an organizational or administrative level to
manage risk. Our hypothetical plan may establish a risk commit-
tee consisting of the CIO, risk managers, senior portfolio manag-
ers, and legal and compliance officers that meets at least once a
quarter to discuss the overall economic and financial environment.
Participants can discuss their concerns regarding systemic risk
issues such as liquidity and leverage, review the results of stress
tests, and debate whether hedging strategies should be activated to
address undesired exposures or potential tail risk events. The plan
could also develop a reporting framework where the risk committee
would receive at least monthly reports on unusual developments
identified by the risk manager. Then, if the investment committee
is sufficiently concerned about exposure to a certain segment, it
could ask those managers with large exposures to hedge them or
to eliminate the undesired exposures.
Figure 4 illustrates this type of setup, where the risk manager pre-
pares risk reports and recommendations for the risk committee and
delivers risk management services and advice to the different
asset class managers.
Finally, the plan may establish minimum risk management require-
ments for external managers. For instance, the external managers
could be required to demonstrate their ability to calculate tracking
error, VaR, or other measures, as well as how risk management
impacts their portfolio construction.
[Chart: senior investment/risk committee → risk manager → equity, fixed income, and alternatives managers; the risk manager provides the initial asset/manager allocation, standard risk reports, and red flags/exceptions reporting.]
Figure 4 – Organizational structure for risk management
conclusion
Recent events have put into stark relief the inadequacy of the current
state of risk management. Much has been said about the need
for better risk management and a greater degree of risk aware-
ness in the broader investment community. Risk management is
a dynamic area, and any set of best practices is bound to evolve
over time. Here we set out to clarify some of the principles and
tools that we believe are required for a sound risk management
framework.
Specifically, we lay out a framework that rests on three pillars — risk
measurement, monitoring, and RAIM (or risk-adjusted investment
management). Each of the three domains is critical for risk manage-
ment. Risk measurement means having the right tools to measure
risk accurately from various perspectives. Risk monitoring means
observing the risk measures on a regular and timely basis. RAIM
means using the information from the measurement and monitor-
ing layers intelligently to ensure that the portfolio management
process is aligned with expectations of risk and risk tolerance.
While each pillar encompasses a different aspect of risk manage-
ment, each is indispensable to a strong risk management process.
Moreover, they are interdependent and should be aligned with the
investor’s objectives. Their interconnectedness drives the key con-
ceptual theme — that risk management and the investment process
should be fully integrated.
References
• Barbieri, A., V. Dubikovsky, A. Gladkevich, L. Goldberg, and M. Hayes, 2009, “Central limits and financial risk,” MSCI Barra Research Insights
• Best, P., and M. Reeves, 2008, “Risk management in the evolving investment management industry,” Journal of Financial Transformation, 25, 88-90
• Goldberg, L., M. Hayes, J. Menchero, and I. Mitra, 2009, “Extreme risk analysis,” MSCI Barra Research Insights
• Kneafsey, K., 2009, “Four demons,” Journal of Financial Transformation, 26, 18-23
lessons from the global financial meltdown of 2008

Hershey H. Friedman, Professor of Business and Marketing, Department of Economics, Brooklyn College, City University of New York
Linda W. Friedman, Professor, Department of Statistics and Computer Information Systems, Baruch College and the Graduate Center of the City University of New York
Abstract
The current financial crisis that threatens the entire world has
created an ideal opportunity for educators. A number of impor-
tant lessons can be learned from this financial meltdown. Some
are technical and deal with the value of mathematical models and
measuring risk. The most important lesson, however, is that unethi-
cal behavior has many consequences. This debacle could not have
occurred if the parties involved had been socially responsible and
not motivated by greed. Conflicts of interest and the way CEOs are
compensated are at the heart of this financial catastrophe that has
wiped out trillions of dollars in assets and millions of jobs. We pres-
ent a set of lessons as teaching opportunities for today’s students
and tomorrow’s decision makers.
The current financial crisis is the worst debacle we have experienced
since the Great Depression. Millions of jobs have been lost and tril-
lions of dollars in market value have evaporated. The entire world is
suffering. The financial crisis is far from over and is adversely affect-
ing people all over the world. Are there lessons that can be learned
from this financial debacle? In fact, it is the perfect teaching tool
since it makes it so easy to demonstrate what can go wrong when
firms do not behave in an ethical, socially responsible manner.
It is interesting to note that the financial meltdown of 2008 did not
suddenly appear out of nowhere. The corporate world was heading
down a dangerous path for more than 20 years. We feel that an
early warning was the Savings and Loan crisis, in which 1,043
banks failed with a cost to U.S. taxpayers of about U.S.$124 billion
[Curry and Shibut (2000)]. That happened between 1986 and 1995.
The financial world ignored this warning. A few years later came
several colossal corporate scandals including Enron, which filed
for bankruptcy in late 2001, Tyco International, Adelphia, Global
Crossing, WorldCom, and many other firms. These companies were
found to use dubious accounting practices or engage in outright
accounting fraud to deceive the public and enrich executives. In
fact, the Sarbanes-Oxley Act of 2002 was enacted in order to
prevent future financial disasters such as Enron. Suskind (2008)
reports that Alan Greenspan, Chairman of the Federal Reserve,
was at a meeting on February 22, 2002 after the Enron debacle
and was upset with what was happening in the corporate world.
Mr. Greenspan noted how easy it was for CEOs to “craft” financial
statements in ways that could deceive the public. He slapped the
table and exclaimed: “There’s been too much gaming of the system.
Capitalism is not working! There’s been a corrupting of the system
of capitalism.” This was another warning sign that it was easy for
unrestrained greed to harm the entire economy.
The dot.com bubble, which took place between 1995 and 2001
(the NASDAQ peaked at over 5,100 in March 2000), was a different
kind of crisis. It was fueled by irrational spending on Internet stocks
without considering traditional business models. Investors did not
seem to care about earnings per share or other more traditional
measures. Moreover, there were too many companies trying to
create online businesses. The price of many dot.com stocks came
tumbling down, but the bubble was not based on fraud as much as
overvaluation of stocks (especially the IPOs) and excessive specu-
lation. The housing bubble, on the other hand, that helped cause
the current financial meltdown was fueled to a large degree by the
ready availability of deceitful mortgages. What was apparent from
the dot.com bubble was that prices cannot go up forever — a lesson
not heeded by those in the mortgage business.
Long Term Capital Management (LTCM), a hedge fund founded
in 1994 that went out of business in 2000, showed how risky a
highly leveraged hedge fund could be. In 1998, LTCM had borrowed
highly-leveraged hedge fund could be. In 1998, LTCM had borrowed
over U.S.$125 billion, but only had equity of U.S.$5 billion. The
financial crisis triggered by LTCM at that time also demonstrated how
the entire financial system could be at risk because of the actions
of one fund. The Federal Reserve Bank was involved in a U.S.$3.5
billion rescue package in 1998 to protect the financial markets
from a total collapse because of the actions of LTCM. One lesson
that should have been learned from this financial debacle was that
we should not rely so much on sophisticated mathematical models.
LTCM’s models were developed by two Nobel laureates — Myron
Scholes and Robert C. Merton — who were both members of the
board of the hedge fund.
Lowenstein (2008) observes that “regulators, too, have seemed to
replay the past without gaining from the experience. What of the
warning that obscure derivatives needed to be better regulated
and understood? What of the evident risk that intervention from
Washington would foster yet more speculative behavior — and pos-
sibly lead to a string of bailouts?” Lowenstein (2008) states that
only six months after the LTCM fiasco, Alan Greenspan “called for
less burdensome derivatives regulation, arguing that banks could
police themselves.” Needless to say, Greenspan was proven quite
wrong in this assertion.
The scandal involving Bernard Madoff, which has been called the
largest Ponzi scheme ever, has also served to cast serious doubts on how
well our financial system is being monitored. Gandel (2008) notes
that KPMG, PricewaterhouseCoopers, BDO Seidman, and McGladrey
& Pullen all signed off that all was well with the many feeder funds
that had invested with Madoff. What has shocked everyone is that
the auditors did not recognize that billions of dollars of assets were
just not there. Auditors are supposed to check that the stated assets
actually exist. Interestingly, Madoff himself was not a client of any of
the large auditing firms; he used a tiny accounting firm in New City,
NY that had only three employees. It is now apparent that this alone
should have been an indication that something was very wrong with
the way Madoff conducted business. One lawyer remarked: “All they
really had to substantiate the gains of these funds was Madoff’s own
statements. They were supposed to be the watchdogs. Why did they
sign off on these funds’ books?” [Gandel (2008)].
What can be learned from the above crises, especially the financial
meltdown of 2008? Firstly, we should recognize that what we have
experienced is not the breakdown of an economic system. This is
a values meltdown more than anything else. In fact, a recent poll
showed that bankers are near the bottom of the list when it comes
to respect felt by the public and barely beat prostitutes and con-
victed felons [Norris (2009)].
is the pursuit of self-interest always good?
One pillar of mainstream economics taught in most economics
courses is based on the famous saying of Adam Smith in his classic
1 Available at: http://www.americanrhetoric.com/MovieSpeeches/moviespeechwallstreet.html
work, “Wealth of Nations,” that: “it is not from the benevolence of
the butcher, the brewer or the baker that we expect our dinner,
but from their regard to their own interest.” Smith demonstrated
how self-interest and the “invisible hand” of the marketplace
allocates scarce resources efficiently. Students are taught that
“economic man” or homo economicus acts with perfect rationality
and is interested in maximizing his/her self-interest. This results in
an economic system (capitalism) that is efficient and productive.
For the corporation, self-interest is synonymous with maximiza-
tion of profits and/or maximization of shareholder wealth. Indeed,
students are being taught that self-interest plus free markets and
deregulation results in prosperity for everyone. The famous speech
by Gordon Gekko in the movie Wall Street is based on the idea that
the pursuit of self-interest is good for all of us1: “Greed, for lack of
a better word, is good. Greed is right. Greed works. Greed clarifies,
cuts through, and captures the essence of evolutionary spirit. Greed
in all of its forms, greed for life, for money, for love, knowledge has
marked the upward surge of mankind. And greed, you mark my
words, will not only save Teldar Paper, but that other malfunction-
ing corporation called the USA.”
Change the word “greed” to “pursuit of self-interest” and you have in
effect what has been taught to millions of business and economics
students. Even the so-called watchmen and gatekeepers — cor-
porate directors, investment bankers, regulators, mutual funds,
accountants, auditors, etc. — have fallen into the self-interest trap
and disregarded the needs of the public [Lorsch et al. (2005)].
It is ironic that the discipline of economics started as part of the
discipline of moral philosophy, and even became a moral science
[Alvey (1999)]. Adam Smith, in his first book, “The Theory of Moral
Sentiments,” made it clear that he believed that economic growth
depended on morality. To Smith, benevolence — not pursuit of self-
interest — was the highest virtue [Alvey (1999)]. The following quo-
tation demonstrates what Smith actually believed: “Man ought to
regard himself, not as something separated and detached, but as a
citizen of the world, a member of the vast commonwealth of nature
and to the interest of this great community, he ought at all times to
be willing that his own little interest should be sacrificed.”
Robinson made the point more than 30 years ago that the pursuit
of self-interest has caused much harm to society and that Adam
Smith should not be associated with this doctrine. In actuality,
Smith believed that “society, however, cannot subsist among those
who are at all times ready to hurt and injure one another.” Raw self-
interest without a foundation of morality is not what Adam Smith
is all about. Robinson ended a commencement address with the
following warning: “I hope … that you will find that the doctrines of
Adam Smith are not to be taken in the form in which your profes-
sors are explaining them to you” [Robinson (1977)].
Howard (1997) uses the expression “tragedy of maximization” to
describe the devastation that the philosophy of maximizing self-
interest has wrought. Unrestrained capitalism that is obsessed with
self-interest and is unconcerned about the long run, can lead to
monopoly, inequitable distribution of income, unemployment, and
environmental disaster [Pitelis (2002)].
In 1937, at his second inaugural address, President Franklin D.
Roosevelt stated: “We have always known that heedless self-
interest was bad morals; we know now that it is bad economics.”
[Roosevelt (1937)]. Lawrence H. Summers, in a 2003 speech to the
Chicago Economic Club, made the following remark: “For it is the
irony of the market system that while its very success depends
on harnessing the power of self-interest, its very sustainability
depends upon people’s willingness to engage in acts that are not
self-interested.”
The financial meltdown of 2008 shows quite clearly what happens
when everyone is solely concerned with self-interest. Closely tied to
the concept of pursuit of self-interest is the belief that free markets
do not need regulation. Many economists and CEOs promoted the
belief that capitalism could only work well with very little regulation.
free markets and the role of regulation
There has been a small movement in economics that questions
the neoclassical model in economics and its belief that free mar-
kets and laissez-faire economics will solve all problems [Cohen
(2007)]. According to one economist, only 5% to 10% of America’s
15,000 economists are “heterodox,” i.e., do not follow the neoclas-
sical model promoted by free market enthusiasts such as Milton
Friedman. Some heterodox economists feel that neoclassical eco-
nomics has become “sycophantic to capitalism;” the discipline is
concerned with mathematical solutions that do not resemble the
real world. The discipline is more concerned about models than
solving social problems [Monaghan (2003)].
Lichtblau (2008) believes that the federal government did not do
a good job monitoring Wall Street. He cites what Arthur Levitt,
former chairman of the SEC, said regarding his former agency: “As
an overheated market needed a strong referee to rein in danger-
ously risky behavior, the commission too often remained on the
sidelines.” Sean Coffey, who used to be a fraud prosecutor, also
was not happy with the performance of the SEC: the SEC “neutered
the ability of the enforcement staff to be as proactive as they could
be. It’s hard to square the motto of investor advocate with the
way they’ve performed the last eight years.” Not only was there a
relaxation of enforcement, there was also a reduction in SEC staff.
Coffey asserts that the administration used the argument that
loosening up regulations was necessary in order to make it possible
for American companies to compete globally [Lichtblau (2008)].
Senator Charles E. Schumer also believed that the rules had to be
changed to encourage free markets and deregulation if the United
States was to remain competitive [Lipton and Hernandez (2008)].
The problems began at a meeting on April 28, 2004 between five
major investment banks and the five members of the SEC. The
investment banks wanted an exemption from a regulation that lim-
ited the amount of debt — known as the net capital rule — they could
have on their balance sheets. By increasing the amount of debt they
would be able to invest in the “opaque world of mortgage-backed
securities” and credit default swaps (CDSs). The SEC agreed to
loosen the capital rules and also decided to allow the investment
banks to monitor their own riskiness by using computer models to
analyze the riskiness of various securities, i.e., switch to a voluntary
regulatory program [Labaton (2008)]. The firms took full advantage of
the looser rules and took on huge amounts of debt. The leverage ratio
at Bear Stearns rose to 33:1. This made the firm very risky since it
held only $1 of equity for $33 of debt. Regarding what transpired,
James D. Cox said [Labaton (2008)]: “We foolishly believed that
the firms had a strong culture of self-preservation and responsibil-
ity and would have the discipline not to be excessively borrowing.
Letting the firms police themselves made sense to me because I
didn’t think the SEC had the staff and wherewithal to impose its own
standards and I foolishly thought the market would impose its own
self-discipline. We’ve all learned a terrible lesson.”
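The arithmetic behind this fragility is simple: with $1 of equity supporting $34 of assets, the asset decline that wipes out equity is 1/34, or roughly 3%. A back-of-the-envelope sketch:

```python
# Leverage arithmetic: the fractional fall in asset values that reduces
# equity to zero, for a given debt-to-equity ratio.

def asset_drop_to_wipe_equity(debt_to_equity):
    """Equity / total assets = decline in assets that erases equity."""
    equity, debt = 1.0, float(debt_to_equity)
    return equity / (equity + debt)

print(f"{asset_drop_to_wipe_equity(33):.2%}")  # Bear Stearns at 33:1 -> 2.94%
print(f"{asset_drop_to_wipe_equity(25):.2%}")  # LTCM, ~$125bn on $5bn -> 3.85%
```

At 33:1, a market move well within ordinary annual volatility is enough to render the firm insolvent.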
Alan Greenspan finally admitted at a congressional hearing in
October 2008 that he had relied too much on the “self-correcting
power of free markets” [Andrews (2008)]. He also acknowledged
that he did not anticipate the “self-destructive power of wanton
mortgage lending” [Andrews (2008)]. Greenspan was asked wheth-
er his ideology — i.e., belief in deregulation and that government
regulators did not do a better job than free markets in correcting
abuses — had contributed to the financial crisis. He admitted that he
had made a mistake and actually took partial responsibility. Some
of the mistakes Greenspan made had to do with risky mortgages
and out-of-control derivatives.
Risky mortgages
Greenspan allowed the growth of highly risky and fraudulent mortgages
without recognizing the “self-destructive power” of this type
of mortgage lending with virtually no regulation [Skidelsky (2008)].
The Fed could have put a stop to it by using its power under a 1994
law (the Home Ownership and Equity Protection Act) to prevent fraudulent
lending practices. It was obvious that the mortgage industry was
out of control and was allowing individuals with very little money
to borrow huge sums of money. Greenspan could also have used
the monetary powers of the Fed to raise interest rates which would
have ended the housing bubble.
Here are just a few examples of how deregulation affected the sub-
prime mortgage market:
WaMu, for example, lent money to nearly everyone who asked for
it. Loan officers were encouraged to approve mortgages with virtu-
ally no checking of income [Goodman and Morgenson (2008)]. To
encourage people with little income to borrow money, WaMu used
option ARMs (Adjustable Rate Mortgages). The very low initial rates
enticed people to take out mortgages. Of course, many borrowers
thought the low payments would continue indefinitely and would
never balloon [Goodman and Morgenson (2008)]. The share of
ARMs at WaMu increased from 25% in 2003 to 70% in 2006. It did
not take long for word to spread that WaMu would give mortgages
to nearly anyone. WaMu even ran an advertising campaign telling
the world that they would give mortgages to anyone, “The power
of yes.” WaMu did exactly that: it approved almost every mortgage.
Revenues at WaMu’s home lending unit increased from $707 million
to approximately $2 billion in one year. As one person noted: “If
you were alive, they (WaMu) would give you a loan. Actually, I think
if you were dead, they still would give you a loan” [Goodman and
Morgenson (2008)].
In the mortgage business, NINJA loans are mortgage loans made
to people with “No income, no job or assets.” These mortgage
loans were made on the basis of unsubstantiated income claims
by either the applicant or the mortgage broker or both. There are
stories of individuals with income of U.S.$14,000 a year purchasing
U.S.$750,000 homes with no money down and no mortgage pay-
ments for two years [Friedman (2008)]. Other types of mortgages
that became popular include the “balloon mortgage” (borrower only
makes the interest payment for 10 years but then has to pay a huge
amount — the balloon payment); the “liar loan” (borrower claims an
annual income, no one checks and there is no documentation); the
“piggyback loan” (this combines a first and second mortgage so a
down payment is not necessary); the “teaser loan” (mortgage at very
low interest rate for first two years but when it is readjusted after the
two years, the borrower does not have the income to make the pay-
ments); the “option ARM loan” (discussed above); and the “stretch
loan” (borrower has to use more than 50% of his monthly income to
make the mortgage payments) [Pearlstein (2007)].
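The payment shock built into a teaser loan can be seen with the standard fixed-rate amortization formula. The loan size and rates below are assumptions, and for simplicity the reset payment is computed as a fresh 30-year payment rather than re-amortizing the remaining balance over the remaining term:

```python
# Standard amortization payment: M = P * r / (1 - (1 + r)^-n),
# where r is the monthly rate and n the number of monthly payments.

def monthly_payment(principal, annual_rate, years):
    r, n = annual_rate / 12, years * 12
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

loan = 400_000                              # assumed loan size
teaser = monthly_payment(loan, 0.02, 30)    # assumed 2% teaser rate
reset  = monthly_payment(loan, 0.07, 30)    # assumed 7% post-reset rate
print(f"Teaser: ${teaser:,.0f}/mo  Reset: ${reset:,.0f}/mo  "
      f"({reset / teaser - 1:.0%} jump)")
```

Even this simplified calculation shows the payment roughly doubling at reset, which is why borrowers qualified only at the teaser rate were so likely to default.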
President Bush wanted to encourage homeownership, especially
among minorities. Unfortunately, his approach, which encouraged
easy lending with little regulation, helped contribute to the financial
crisis [Becker et al. (2008)]. The increase in homeownership was
accomplished through the use of toxic mortgages. President Bush
also encouraged mortgage brokers and corporate America to come
up with innovative solutions to enable low-income people to own
homes. L. William Seidman, an advisor to Republican presidents,
stated: “This administration made decisions that allowed the free
market to operate as a bar room brawl instead of a prize fight.”
Bush’s banking regulators made it clear to the industry that they
would do everything possible to eliminate regulations. They even
used a chainsaw to symbolically show what they would do to the
banking regulations. Various states, at one point, did try to do
something about predatory lending but were blocked by the federal
government. The attorney general of North Carolina said: “They
took 50 sheriffs off the beat at a time when lending was becoming
the Wild West” [Becker et al. (2008)].
The policy of encouraging homeownership affected how Fannie
Mae and Freddie Mac conducted business. Fannie and Freddie are
government-sponsored, shareholder-owned companies, whose job
is to purchase mortgages. They purchase mortgages from banks
and mortgage lenders, keep some and sell the rest to investors.
This enables banks to make additional loans, thus allowing more
people to own homes. Fannie and Freddie were encouraged by
President Bush to do everything possible to help low-income people
buy homes. One way to accomplish this was to reduce the down
payments. There was a drive to allow new homeowners to obtain
a federally-insured mortgage with no money down. There was little
incentive to do something about the super-easy lending practices,
since the housing industry was helping to pump up the entire econ-
omy. The increasing home values helped push consumer spending.
More and more people were becoming homeowners in line with
what President Bush wanted.
Back in the year 2000, Fannie Mae, under CEO Franklin D. Raines
and CFO J. Timothy Howard, decided to expand into riskier mort-
gages. They would use sophisticated computer models and rank
borrowers according to how risky the loan was. The riskier the
loan, the higher the fees that would be charged to guarantee the
mortgage. The big risk was that a huge number of borrowers would be
unable to make the payments on their mortgages and would walk away
from their obligations. That danger was not seen as likely in 2000.
Moreover, the computer models would ensure that the higher fees
for the riskier mortgages would offset any losses from mortgage
defaults. The company announced that it would purchase U.S.$2
trillion in loans from low-income (and risky) borrowers by 2010
[Duhigg (2008)].
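Fannie Mae's actual models are not public, but the risk-based pricing idea can be sketched as a toy calculation (all figures hypothetical): set the guarantee fee to cover the expected loss on a loan, plus a margin.

```python
def guarantee_fee(default_prob, loss_given_default, margin=0.0005):
    """Annual fee (as a fraction of the loan balance) set to cover the
    expected loss on the guarantee, plus a profit margin."""
    expected_loss = default_prob * loss_given_default
    return expected_loss + margin

prime_fee    = guarantee_fee(0.005, 0.20)  # hypothetical prime borrower
subprime_fee = guarantee_fee(0.05, 0.40)   # hypothetical subprime borrower
# The scheme breaks down when defaults are correlated: if millions of
# borrowers default together, fees calibrated to average historical
# losses cannot cover the losses.
```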
What this accomplished — besides enriching the executives at Fannie
Mae — was that it made subprime mortgages that in the past would
have been avoided by lenders more acceptable to banks all over
the country. These banks did not have the sophistication or experi-
ence to understand the kind of risk they were taking on. Between
2001 and 2004, the subprime market grew from U.S.$160 billion to
U.S.$540 billion [Duhigg (2008)]. In 2004, there were allegations
of accounting fraud at Freddie and Fannie and both had to restate
earnings. Daniel H. Mudd became CEO of Fannie Mae in 2005 after
Raines and Howard resigned from Fannie Mae under a cloud. Under
Mudd’s watch, Fannie purchased even riskier mortgages, those that
were so new that the computer models could not analyze them
properly. Mr. Mudd was warned by Enrico Dallavecchia, his chief risk
officer, that the company was taking on too much risk and should
charge more. According to Dallavecchia, Mudd’s response was that
“the market, shareholders, and Congress all thought the companies
should be taking more risks, not fewer. Who am I supposed to fight
with first?” [Duhigg (2008)].
As early as February 2003, Armando Falcon, Jr., who ran the Office
of Federal Housing Enterprise Oversight (OFHEO), wrote a report
that warned that Fannie and Freddie could default because they
were taking on far too many mortgages and did not have the capi-
tal to protect themselves against losses. Falcon almost got fired
for this report. After some accounting scandals at Freddie, the
President decided to keep Falcon on. A bill was written by Michael
Oxley, a Republican and Chairman of the House Financial Services
Committee, that would have “given an aggressive regulator enough
power to keep the companies from failing” [Becker et al. (2008)].
Since the bill was not as strong as what President Bush wanted,
he opposed it and it died in the Senate. The bottom line was that
in Bush’s desire to get a tougher bill, he ended up with no bill at
all. Eventually, James B. Lockhart III became the Director of the
OFHEO. Under his watch, Freddie and Fannie purchased U.S.$400
billion of the most risky subprime mortgages. In September, 2008,
the federal government had to take over Fannie Mae and Freddie
Mac.
Derivatives out of control
Greenspan also admitted that he allowed the market for derivatives
to go out of control. Greenspan was opposed to tighter regulation
of derivatives going back to 1994 [Andrews (2008)]. George Soros,
the renowned financier, avoided derivatives “because we don’t
really understand how they work”; Felix G. Rohatyn, the prominent
investment banker, called derivatives “potential hydrogen bombs”;
and Warren E. Buffett remarked that derivatives were “financial
weapons of mass destruction.” Greenspan, on the other hand, felt
that “derivatives have been an extraordinarily useful vehicle to
transfer risk from those who shouldn’t be taking it to those who are
willing and capable of doing so” [Goodman (2008)].
Greenspan also admitted that the market for credit default swaps
(CDSs), which became a multi-trillion dollar business, went out of
control [Andrews (2008)]. The CDSs were originally developed to
insure bond investors against default risk but they have taken on
a life of their own and have been used for speculative purposes.
A CDS is a credit derivative and resembles insurance since the
buyer makes regular payments and collects if the underlying finan-
cial instrument defaults. In a CDS, there is the protection buyer who
can use this instrument for credit protection; the protection seller
who gives the credit protection; and the “reference entity,” which
is the specific bond or loan that could go bankrupt or into default.
It could be compared to buying fire insurance on someone else’s
house. Imagine how the insurance industry would work if 1,000
people were able to buy fire insurance on Jane Doe’s house. With
traditional insurance, if Jane Doe’s house burns down, there is only
one payment. If, on the other hand, 1,000 people were allowed to
each own a fire insurance policy on Jane Doe’s home, 1,000
payouts must be made. The insurance company would be collecting
fees from 1,000 customers and have a nice revenue stream, but the
risk would be very great. In fact, we would have a situation where
1,000 people would do everything possible to make sure the
Jane Doe home burns down.
Another difference between a CDS and traditional insurance is that
there is a market for the CDS, which makes it attractive for specula-
tors, since the owner of the CDS does not actually have to own the
underlying security. One problem with this is that a hedge fund can
buy a CDS on a bond and then do everything possible (e.g., sell the
stock short to force the price of the stock down) to make sure that
the reference entity does indeed default [Satow (2008)].
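The cash flows described above can be sketched in a few lines of Python; the notional, spread, and recovery figures are purely illustrative, not market data.

```python
def seller_pnl(notional, spread, years_of_premiums, defaulted, recovery=0.4):
    """Protection seller's P&L on one CDS: premiums collected, minus a
    default payout of notional * (1 - recovery) if the entity defaults."""
    premiums = notional * spread * years_of_premiums
    payout = notional * (1 - recovery) if defaulted else 0.0
    return premiums - payout

# One $10 million contract at a 200bp annual spread, default in year 3:
one_contract = seller_pnl(10_000_000, 0.02, 3, defaulted=True)
# Like the 1,000 fire policies on one house, writing 1,000 identical
# contracts on the same reference entity turns a steady premium stream
# into a multi-billion-dollar loss on a single default:
book = 1_000 * one_contract
```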
It should be noted that the market for derivatives and CDSs became
unregulated thanks to the Commodity Futures Modernization Act of
2000. This law was pushed by the financial industry in the name of
free markets and deregulation. It also made it virtually impossible
for states to use their own laws to do anything about these Wall
Street financial instruments. This bill was passed “at
the height of Wall Street and Washington’s love affair with deregu-
lation, an infatuation that was endorsed by President Clinton at the
White House and encouraged by Federal Reserve Chairman Alan
Greenspan” [Kroft (2008)].
According to Eric Dinallo, the insurance superintendent of New
York State [Kroft (2008)]: “As the market began to seize up and as
the market for the underlying obligations began to perform poorly,
everybody wanted to get paid, had a right to get paid on those
credit default swaps. And there was no ‘there’ there. There was no
money behind the commitments. And people came up short. And
so that’s to a large extent what happened to Bear Stearns, Lehman
Brothers, and the holding company of AIG. It’s legalized gambling.
It was illegal gambling. And we made it legal gambling…with abso-
lutely no regulatory controls. Zero, as far as I can tell.” In response
to the question as to whether the CDS market was like a “bookie
operation,” Dinallo said: “Yes, and it used to be illegal. It was very
illegal 100 years ago.”
SEC Chairman Christopher Cox stated that it is of utmost importance
to bring transparency to the unregulated market in CDSs.
They played a major role in the financial meltdown and were also
the cause of the near bankruptcy of AIG which necessitated a gov-
ernment bailout [Landy (2008), Cox (2008)]. According to Landy
(2008), no one even knows the exact size of the CDS market;
estimates range from U.S.$35 trillion to U.S.$55 trillion. When AIG
was bailed out by the federal government it held U.S.$440 billion of
CDSs [Philips (2008)].
One banker from India made the point that his colleagues were sur-
prised at the “lack of adequate supervision and regulation.” What
made it even more amazing was that all this occurred after the
Enron debacle and the passing of the Sarbanes-Oxley bill [Nocera
(2008)]. In fact, India avoided a subprime crisis because a bank
regulator by the name of Y. V. Reddy, who was the opposite of
Greenspan, believed that if “bankers were given the opportunity to
sin, they would sin.” Unlike Greenspan, he felt that his job was to
make sure that the banking system would not be part of a huge real
estate bubble. He used his regulatory powers to deflate potential
bubbles by not allowing bank loans for the purchase of undeveloped
land. Bankers were not happy that Reddy was preventing them
from making huge amounts of money, like their American peers
were making. Now everyone knows that Reddy was right. As one
Indian banker explained, regarding what happened in the United
States: “It was perpetuated by greedy bankers, whether investment
bankers or commercial bankers. The greed to make money is the
impression it has made here.” [Nocera (2008)].
credit rating agencies
There are three major credit-rating agencies: Moody’s, Standard
& Poor’s, and Fitch Ratings. All have been accused of being overly
generous in how they rated the securities that consisted of bundled,
low-quality mortgages. The big question that has arisen is whether
these firms assigned very good ratings (AAA) because of sheer
incompetence or to make more money. By ingratiating themselves
with clients, they were able to steer more business to themselves.
There is no question that the firms were able to charge consider-
ably more for providing ratings for complex financial securities than
for simple bonds. Rating a complex U.S.$350 million mortgage pool
would generate approximately U.S.$250,000 in fees, as compared
to U.S.$50,000 for an equally sized municipal bond. Morgenson
(2008a) quotes a managing director at Moody’s as saying: “These
errors make us look either
incompetent at credit analysis or like we sold our soul to the devil for
revenue, or a little of both.”
Moody’s graded the securities that consisted of Countrywide Financial’s
mortgages — Countrywide was then the largest mortgage lender in the
United States. Apparently, the ratings were not high enough and Countrywide
complained. One day later, Moody’s raised the rating. This happened
several times with securities issued by Countrywide.
Morgenson (2008a) provides an interesting example of how unreli-
able the ratings had become. A pool of residential subprime mort-
gages was bundled together by Goldman Sachs during the summer
of 2006 (it was called GSAMP 2006-S5). The safest part of this,
consisting of U.S.$165 million in debt, received a AAA rating from
Moody’s and S & P on August 17, 2006. Eight months later, the rat-
ing of this security was lowered to Baa; and on December 4, 2007,
it was downgraded to junk bond status.
Method of compensation may encourage risk taking
Warren Buffett once said: “in judging whether corporate America
is serious about reforming itself, CEO pay remains the acid test”
[Kristof (2008)]. It is obvious to all that corporate America has not
performed well on this test. The fact that as many as 29% of firms
have been backdating options makes it appear that the system of
compensation of executives needs an overhaul [Burrows (2007)].
According to a Watson Wyatt survey, approximately 90% of insti-
tutional investors believe that top executives are dramatically
overpaid [Kirkland (2006)].
Richard Fuld, CEO of Lehman Brothers, earned approximately half-
a-billion dollars between 1993 and 2007. Kristof (2008) observes
that Fuld earned about U.S.$17,000 an hour to destroy a solid,
158-year-old company. AIG Financial Products, a 377-person office
based in London, nearly destroyed the mother company, a trillion-
dollar firm with approximately 116,000 employees. This small office
found a way to make money selling credit default swaps to financial
institutions holding very risky collateralized debt obligations. AIG
Financial Products made sure that its employees did very well
financially: they earned U.S.$3.56 billion in the last seven years
[Morgenson (2008b)].
One of the big problems at many Wall Street firms was how
compensation was determined. Bonuses made up a huge part of
how people were compensated. One individual at Merrill Lynch
received U.S.$350,000 as salary but U.S.$35,000,000 as bonus
pay. Bonuses were based on short-term profits, which distorted
the way incentives work. Employees were encouraged to take
huge risks since they were interested in the bonus. Bonuses were
based on the earnings for that year. Thus, billions in bonuses were
handed out by Merrill Lynch in 2006 when profits hit U.S.$7.5 bil-
lion. Of course, those profits were only an illusion as they were
based on toxic mortgages. After that, the company lost triple that
amount, yet the bonuses were not rescinded [Story (2008)]. Lucian
Bebchuk, a compensation expert, asserted that “the whole organi-
zation was responding to distorted incentives” [Story (2008)].
It is clear that bonuses based on the profits of a particular year
played a role in the financial meltdown. Firms are now changing
the way bonuses work so that employees have to return them if
the profits turn out to be illusory. The money will be kept in escrow
accounts and not distributed unless it is clear that the profits are
real. E. Stanley O’Neal, former CEO of Merrill Lynch, not only col-
lected millions of dollars in bonuses but was given a package worth
about U.S.$161 million when he left Merrill Lynch. He did quite well
for someone whose claim to fame is that he nearly destroyed Merrill
Lynch; it was finally sold to Bank of America.
According to WaMu employees, CEO Kerry K. Killinger put a huge
amount of pressure on employees to lend money to borrowers with
little in the way of income or assets. Real estate agents were given
fees of thousands of dollars for bringing borrowers to WaMu. WaMu
also gave mortgage brokers generous commissions for the riskiest
loans since they produced higher fees and resulted in increased com-
pensation for executives. Between 2001 and 2007, Killinger earned
approximately U.S.$88 million [Goodman and Morgenson (2008)].
Cohan (2008) also sees “Wall Street’s bloated and ineffective com-
pensation system” as a key cause of the financial meltdown. Several
politicians are examining the way bonuses work in order to make
sure that they will not be given to executives of firms that are receiv-
ing government assistance. Cohan (2008) feels very strongly that
compensation reform in Wall Street is badly needed. He feels that
the change from the old system, in which the big firms (e.g., Donaldson,
Lufkin, and Jenrette; Merrill Lynch; Morgan Stanley; Goldman
Sachs; Lazard; etc.) were partnerships, to public companies
contributed to the financial debacle. When these firms
were partnerships, there was collective liability so the firms were
much more cautious. Once they became corporations with common
stock, “bankers and traders were encouraged to take short-term
risks with shareholders’ money.” These bankers and traders did not
mind taking on more and more risk since their bonuses depended
on the annual profits. Put these ingredients together — encouraged
risk taking, no accountability, and the use of shareholders’ money — and
you get a financial meltdown. Siegel (2009) feels that the CEOs
deserve most of the blame for the financial crisis. When the major
investment banks such as Lehman Brothers and Bear Stearns were
partnerships, they were much more conservative since they were
risking their own money. Once they became public companies, they
did not mind taking on huge amounts of risk since they were no
longer risking their own wealth. In effect, they were using other
people’s money to become super wealthy.
Cohan (2008) feels that compensation, which consumes approxi-
mately 50% of generated revenues, is far too high. It is a myth
that these high salaries and bonuses are needed to keep good
executives from leaving. There was a time when CEOs earned about
30 to 40 times more than the ordinary worker. Recently, that ratio
at large companies has been 344 to 1 [Kristof (2008)]. For those
who believe that large bonuses lead to improved performance,
research by Dan Ariely indicates the opposite. Bonuses cost the
company more and lead to worse performance [Ariely (2008)].
The reason given by Ariely is that the stress caused by trying to
win the big bonus overwhelms any ability of the bonus to motivate
performance.
It appears that the method of compensation used by Wall Street
firms was a distorted incentive that did not improve performance.
Rather, it encouraged firms to take on huge amounts of risk — risk
that eventually destroyed many of these firms. Amy Borrus, dep-
uty director at the Council of Institutional Investors, asserts that
“poorly structured pay packages encouraged the get-rich-quick
mentality and overly risky behavior that helped bring financial mar-
kets to their knees and wiped out profits at so many companies”
[Morgenson (2009)].
To make matters even worse, the public has learned that Merrill
Lynch rushed U.S.$4 billion in end-of-year bonuses to top manage-
ment a few days before the company was taken over by Bank of
America. These bonuses were paid out while the company was
reporting a fourth quarter loss of more than U.S.$15 billion. Clearly,
the bonuses are not rewards for a job well done. John Thain,
former CEO of Merrill Lynch, argued that “if you don’t pay your
best people, you will destroy your franchise.” The massive losses
suffered by Merrill Lynch forced Bank of America to ask for addi-
tional billions in bailout money from the government on top of the
U.S.$25 billion they had already been promised. Thain also agreed
to reimburse Bank of America for the U.S.$1.2 million renovation of
his office a year earlier [LA Times (2009)]. The renovation included
area rugs for U.S.$131,000, chairs for U.S.$87,000, a U.S.$13,000
chandelier, and an antique commode for U.S.$35,000 [Erman
(2009)]. Citigroup was pressured by the Obama administration to
cancel a deal to purchase a U.S.$50 million new French corporate
jet. Citigroup was castigated by the media and called Citiboob by
the New York Post for this planned purchase. Citigroup has lost bil-
lions of dollars, fired 52,000 employees, and received U.S.$45 bil-
lion in government bailout money, yet it was ready to purchase
this luxury for its executives [Keil and Bennett (2009)].
The year 2008 was a time in which Wall Street firms lost billions.
The country was surprised to learn that this did not stop these
firms from giving out bonuses totaling U.S.$18.4 billion — the sixth
largest amount ever. Apparently, bonuses are also doled out for a job
poorly done. President Obama declared these bonuses “shameful”
and “the height of irresponsibility.” Vice President Biden stated: “I’d
like to throw these guys in the brig. They’re thinking the same old
thing that got us here, greed. They’re thinking, ‘Take care of me’”
[Stolberg and Labaton (2009)].
What has become clear to the public is that Wall Street just does
not get it. The public and the media see the Wall Street executives
who are indifferent to what they caused. They still feel that they
deserve enormous amounts of money even if their firms have shed
thousands of jobs and have nearly destroyed the financial system.
Ira Kay, an executive consultant with Watson Wyatt, feels that you
can make a strong case that the Wall Street culture (some refer
to it as the “eat what you kill” mentality) as well as the bonuses that
resulted from this kind of selfish culture have contributed greatly
to the current financial meltdown [Nocera (2009b)].
There is no question that executive compensation is going to be
scrutinized very carefully in the future, even by boards that were
far too chummy with CEOs. There is talk of passing legislation that
will force CEOs to return past pay — this is known as a “clawback” —
in cases where compensation was based on profits that turned out
to be illusory. Frederick E. Rowe, a founder of Investors for Director
Accountability, feels that “there is a fine line that separates fair
compensation from stealing from shareholders. When manage-
ments ignore that line or can’t see it, then hell, yes, they should
be required to give the money back” [Morgenson (2009)]. There is
even talk of “clawing back” executives’ pensions in cases where the
profit earned by the firm turned out to be imaginary.
Models have only limited value
Models are representations of reality. They will not work if conditions
have changed dramatically. They certainly will not work if compensa-
tion is tied to risk taking and executives insist that employees take
risks to enrich themselves. As noted above, executives were doing
everything possible to show stellar performance — even if it meant
taking on huge amounts of risk — since they wanted a fat yearly
bonus. Whenever there are conflicts of interest, poor decisions are
likely to be made. Models rely on interpretation by people. If the
employees interpreting the model are afraid of losing their jobs, they
will construe the models in a way that pleases top management.
As noted above, Fannie Mae decided to expand into riskier mort-
gages in the year 2000. They believed that their sophisticated mod-
els would allow them to purchase riskier mortgages and that these
computer models would protect them from the increased risk. After
all, Fannie Mae would charge higher fees for risky mortgages. The
mortgages, though, became super risky because of very low down
payments and unverified income on the part of the borrower. It is
doubtful that any model could anticipate that millions of homeown-
ers would find it easy to walk away from their mortgage. In the past,
relatively few people defaulted on a mortgage because they had to
make a substantial down payment [Duhigg (2008)].
Lewis and Einhorn (2009) feel that the rating agencies did not
measure risk properly. American financial institutions took on a
great deal more risk without having to face a downgrading of their
securities. This was probably due to the fact that the credit rating
agencies were making money on the risky securities being created
by these very same financial institutions. Companies such as AIG,
Fannie Mae, Freddie Mac, GE, and MBIA (which guarantees municipal
bonds) kept their AAA rating for a very long time. Lewis and Einhorn
(2009) make the point that MBIA deserved its AAA rating in 1990
when it insured municipal bonds and had U.S.$931 million in equity
and U.S.$200 million of debt. By the year 2006, MBIA was insuring
CDOs (collateralized debt obligations) which are extremely risky;
the company had U.S.$26.2 billion in debt and only U.S.$7.2 billion
in equity. It kept its AAA rating for quite a while until it was obvious
to everyone that MBIA was no longer a secure firm. The models
that were being used by the credit rating agencies may not have
worked very well. However, no one wanted to kill the goose that
was producing so much profit for the credit rating agencies. Indeed,
the agencies, as noted above, were quite pleased with all the risky
securities they were paid to evaluate.
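The deterioration Lewis and Einhorn describe is visible in a simple debt-to-equity calculation, using the MBIA figures cited above:

```python
def debt_to_equity(debt, equity):
    # simple leverage measure: total debt divided by equity
    return debt / equity

# MBIA figures cited by Lewis and Einhorn (2009), in U.S.$ billions:
ratio_1990 = debt_to_equity(0.2, 0.931)   # $200m debt, $931m equity
ratio_2006 = debt_to_equity(26.2, 7.2)    # $26.2bn debt, $7.2bn equity
# Leverage rose from roughly 0.2x to over 3.6x while the AAA rating held.
```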
Back in 2004, as noted above, the investment banks wanted the
SEC to give them an exemption from a regulation that limited the
amount of debt — known as the net capital rule — they could have on
their balance sheets. By increasing the amount of debt, they would
be able to invest in the “opaque world of mortgage-backed securi-
ties” and credit default swaps (CDSs). The SEC agreed and decided
to allow the investment banks to monitor their own riskiness by
using computer models to analyze the riskiness of various securi-
ties [Labaton (2008)]. One individual was opposed to the changes
and said the computer models would not work in periods of “severe
market turbulence.” He pointed out that computer models did not
protect Long-Term Capital Management from collapse [Labaton
(2008)]. Of course, no one listened to him.
One measure of risk that was very popular with the financial engineers
was VaR (Value-at-Risk). This is a short-term measure of risk
(daily, weekly, or for a few weeks), expressed in dollars as a single
number, and based on the normal distribution. A weekly VaR of
U.S.$100 million indicates that, for that week, there is a 99% probability
that the portfolio will not lose more than U.S.$100 million.
Of course, this is based on what has happened in the past. Alan
Greenspan noted the following when trying to explain what went
wrong during the financial meltdown: “The whole intellectual edifice,
however, collapsed in the summer of last year because the data input
into the risk-management models generally covered only the past
two decades, a period of euphoria. Had instead the models been fit-
ted more appropriately to historic periods of stress, capital require-
ments would have been much higher and the financial world would be
in far better shape today, in my judgment” [Nocera (2009a)].
The problem with VaR, and similar measures, is that it ignores what
can happen 1% of the time. Taleb (2007) considers VaR a dishonest
measure since it does not measure risks that come out of nowhere,
the so called “black swan.” It provides a false sense of security
because over the long run, things that have low probabilities of
occurrence do happen. Indeed, Long-Term Capital Management col-
lapsed because of a black swan: the unexpected financial crises in Asia
and Russia [Nocera (2009a)].
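A parametric (normal) VaR of the kind described above can be computed in a few lines; the portfolio value, mean, and volatility below are hypothetical.

```python
from statistics import NormalDist

def parametric_var(value, mu, sigma, confidence=0.99):
    """Dollar loss that, under a normal model of weekly returns,
    will not be exceeded with the given confidence."""
    z = NormalDist().inv_cdf(1 - confidence)   # about -2.326 at 99%
    worst_return = mu + z * sigma              # 1st-percentile weekly return
    return -value * worst_return

# Hypothetical U.S.$5 billion portfolio, weekly mean 0.1%, weekly vol 0.86%:
var_99 = parametric_var(5_000_000_000, 0.001, 0.0086)
# The model says nothing about the remaining 1% of weeks — the "black
# swan" losses, which can be far larger than the VaR number suggests.
```

Note that the result depends entirely on the estimate of sigma; if volatility is estimated from a calm historical window, as Greenspan's quote describes, the VaR figure understates the true risk.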
conclusion
Phelps (2009) observes: “Whether in Enron’s creative accounting,
the packaging of high-risk subprime mortgages as top-grade
collateralized debt obligations, or Bernard Madoff’s $50 billion scam
operation, the recent riot of capitalist irresponsibility has shattered
the fantasy that the free market, left to its own devices, will pro-
duce rationality and prosperity.”
The key lessons one can learn from the global financial crisis of
2008 are the following:
■■ Some self-interest is good — capitalism requires it and human
beings are programmed that way. However, voracious self-interest
with total disregard for everyone else is not good for society.
■■ Too much regulation may be a bad thing — it stifles innovation
and is not good for business or society. On the other hand, too
little regulation is even more dangerous for society and business.
China is discovering this with its recent tainted-milk scandal,
which killed several children. Apparently, executives at several
dairy companies sold dairy products adulterated with melamine,
a toxic chemical, in order to make the protein count appear
higher than it actually was. One executive, who has been
sentenced to death, was convicted of selling 600 tons of “protein
powder” contaminated with melamine to dairy firms [Barboza
(2009)]. The poor conditions of the Peanut Corporation of
America plant in Blakely, Georgia, which resulted in a salmonella
outbreak, also demonstrated what can happen when there is too
little regulation. ConAgra, manufacturer of Peter Pan peanut
butter, had similar problems and upgraded its safety procedures
after a salmonella outbreak in 2007. Government officials
admitted that there were not enough agents (60) to monitor
thousands of food businesses. It is clear that the plant inspectors
missed many serious and chronic problems, such as a leaky
roof in the Blakely plant [Moss (2009)]. Coming up with the right
amount of regulation is not impossible. What is needed is sufficient
regulation to discourage huge risk taking, but not so much
that it discourages innovation [Knowledge@Wharton (2009)].
■■ Incentives are an effective way of motivating people — however,
bonuses may not be the appropriate kind of incentive
for executives. In fact, bonuses can help destroy firms if they
encourage executives to focus on short-term profits rather than
the long-term health of a company.
■■ Mathematical models might provide some insights — but ultimately
it is people who have to interpret them. They can be used
to justify almost anything and can cause as much harm as good.
The global financial crisis would not have occurred if executives
were truly ethical. There is no question that lack of ethics played
a significant role in the meltdown. A large number of people knew
that the mortgages they were dealing with were toxic. It does not
take a great financial expert to know that giving mortgages with no
down payment to people with no income is extremely foolhardy.
The excuse that they believed that housing prices would continue to
keep going up (at one point houses were doubling in value every six
or seven years) is not credible and indeed it is not even a legitimate
justification for this behavior. The truth is that as long as there
was money to be made, virtually no one said anything. To hide how
risky these toxic mortgages were, the industry turned them into securities.
To make matters worse, mortgage brokers were encouraged to do
everything possible to get people to take out mortgages. The rating
agencies were making money on rating these securities so they did
not do their job. All the parties figured someone else would end up
holding the bag.
One thing is clear: free markets do not work well unless there is
accountability, responsibility, ethics, and transparency. Predatory
capitalism that disregards the concern for others and is based pure-
ly on self-interest may even be more dangerous than communism.
What can leaders learn from cybernetics? Toward a strategic framework for managing complexity and risk in socio-technical systems

Alberto Gandolfi
Professor, Department of Management and Social Sciences, SUPSI University of Applied Science of Southern Switzerland
Abstract

How can leaders cope with the seemingly inexorable increase in
complexity of their world? Based on the cybernetic “law of requi-
site variety,” formulated almost fifty years ago by W. R. Ashby, we
propose a pragmatic framework that might help strategic decision
makers to face and manage complexity and risk in organizational,
political, and social systems. The law of requisite variety is a funda-
mental hypothesis in the general theory of regulation and defines
an upper limit to the controlling ability of a system, based on its
“variety” (complexity) level. The strategic framework discussed
here assumes that complexity breeds systemic risks and suggests
three mutually complementary approaches to managing complexity
at a strategic level: 1) where possible, reduce complexity in the sys-
tem to be controlled; 2) effectively manage residual complexity; and
3) increase complexity of the controlling system. Each approach is
then discussed, and practical examples provided.
Decision makers in literally every area are facing an increasing level
of complexity. This inexorable trend has in turn led to increased difficulty in managing organizational, technological, political, and socio-economic systems and their related risks. Everything seems
to become more complex, and therefore more difficult to manage
and control: markets and business opportunities, corporate gover-
nance, projects, fiscal systems, technology, demands from all kinds
of stakeholders, management systems, and supply chains.
Partly as a response to this important trend, in the last three
decades the interdisciplinary study of the structure and the behav-
ior of complex systems — to which we will refer here as complexity
science — has grown considerably and has made relevant contributions to many scientific fields, such as biochemistry, ecology, physics,
chemistry, genetics, and biology [see for example Kauffman (1993)
for genetics and evolutionary biology or Pimm (1991) for ecology].
Many scholars have since tried to apply the instruments and analyti-
cal concepts developed by complexity science in order to improve
the control of socio-technical systems, including organizational,
economical, technological, social, and political systems [Iansiti
and Levien (2004), Kelly (1994), Senge (1992), Malik (1984)]. Even
though this transfer from “hard” to “soft” sciences has brought some interesting insights to the latter, a general model addressing complexity management in socio-technical systems is still lacking.
What we still need is a more pragmatic approach, enabling manag-
ers, politicians, and other decision makers in the real world to better
understand and actively influence the behavior, and control the risks, of the systems they are supposed to manage. Our research has led to
a conceptual model for complexity management in social and socio-
technical systems, which will be outlined in this paper. The current model should not be considered complete, but rather a work in progress that will be further enhanced and modified over the coming years, focusing not on theoretical elegance but on usefulness in a real decision-making environment.
Complexity and risk

Before we can proceed to explain our framework, it is necessary
to clarify what we mean by the term “complexity”. In this paper
we will adopt a relatively simple, management-oriented definition,
viewing complexity from the perspective of the decision makers
who are supposed to plan, manage, and control real socio-technical
systems, surrounded by real environments. Although it is not in the
scope of this paper to discuss the different meanings of “complex-
ity,” for our purposes two complementary dimensions of complexity
become particularly relevant [Casti (1994), Malik (1984)] [see also
Sommer and Loch (2004) for applications in project management,
and Perrow (1999) for applications in risk management]:
■ The number of states of the system (system variety) — what
does cybernetics mean by the term “state” of a system? In short,
it is a particular configuration of the different elements of a
system, exhibiting a particular behavior [Ashby (1956)]. System
variety depends, therefore, on the total number of different ele-
ments in the systems and on the number of different internal
(behavior relevant) configurations of each element. For example,
in a production system, a state could be a particular combina-
tion of all relevant system elements: machines, raw material,
components, people, schedule, data, procedures, and orders.
■ The interactions among elements (system dynamics), leading to discontinuous and nonlinear changes in system behavior — from a decision-making point of view, the perceived complexity
increases by increasing the number of relationships between ele-
ments, and by increasing nonlinearity and opacity (i.e., nontrans-
parency) of these relationships. Opacity occurs when an observer
(in this case, the decision-maker) is no longer able to determine
clear and straightforward cause-effect relationships in the sys-
tem dynamics. Nonlinearity occurs when the system changes in
an erratic, discontinuous way or when the size of change is not
correlated with the size of the input that caused it.
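The first dimension above (system variety) can be made concrete with a short sketch — the elements and their state counts below are purely hypothetical:

```python
from math import prod

# Hypothetical production system: each element is described only by the
# number of behavior-relevant configurations it can assume.
element_states = {
    "machine": 3,       # idle / running / maintenance
    "raw_material": 2,  # available / missing
    "schedule": 4,      # four alternative daily schedules
    "operator": 2,      # present / absent
}

# System variety = number of distinct overall states: the product of the
# per-element configuration counts (elements assumed independent).
variety = prod(element_states.values())
print(variety)  # 3 * 2 * 4 * 2 = 48
```

Because variety multiplies, standardizing a single element (fewer configurations) or removing one reduces total variety sharply — which is what the complexity-reduction measures discussed later exploit.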
We can note that those two aspects represent, in a way, a concep-
tual extension of the NK-model for evolutionary fitness landscapes
[Kauffman (1993)], where N is the number of elements of the land-
scape and K is the total number of relationships between elements.
Following these preliminary considerations, in the remainder of this
paper we will adopt this twofold definition of complexity.
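The NK analogy can be sketched in a few lines of toy code — the wiring and fitness tables here are random and hypothetical, K is hardcoded to 2, and this is only an illustration of the idea, not Kauffman's full model:

```python
import random

random.seed(0)
N, K = 5, 2  # N elements, each coupled to K others

# Hypothetical random wiring: element i depends on K other elements.
neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]

# Random fitness contribution for each local configuration of
# (own bit, neighbor bit 1, neighbor bit 2).
tables = [
    {(b, x, y): random.random() for b in (0, 1) for x in (0, 1) for y in (0, 1)}
    for _ in range(N)
]

def nk_fitness(state):
    # Each element contributes according to its own bit and its K
    # neighbors' bits; overall fitness is the mean contribution.
    return sum(
        tables[i][(state[i],) + tuple(state[j] for j in neighbors[i])]
        for i in range(N)
    ) / N

state = [random.randint(0, 1) for _ in range(N)]
print(round(nk_fitness(state), 3))  # a value between 0 and 1
```

Raising K makes every contribution depend on more elements — the code-level analogue of increasing dynamic complexity through denser interactions.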
A very important point is the relationship between complexity and
risk. Results from several research areas have consistently dem-
onstrated that an increase in complexity breeds more, and more
diverse, potential risks for a socio-technical system, as also sug-
gested by our daily experiences [Johnson (2006)]. The dynamics of
complex systems show a set of well-known features, such as feed-
back loops [Johnson (2006), Sterman (2000), Forrester (1971)],
nonlinear behavior (phase transitions) [Perrow (1999), Sterman
(2000), Ball (2005), Kauffman (1993)], network effects (network
dynamics) [Barabasi (2002)], emerging properties (system effects,
self-organization) [Johnson (2006), Holland (2000), Kauffman
(1993), Ashby (1956)], delayed effects [Jervis (1997), Sterman
(2000)], and indirect, unintended consequences (side effects)
[Jervis (1997), Sterman (2000), Tenner (1997)].
As Jervis (1997, p.29) summarized it: “many crucial effects are
delayed and indirect; the relations between two actors often are
determined by each one’s relation with others; interactions are
central and cannot be understood by additive operations; many
outcomes are unintended; regulation is difficult.”
Casti (1994) suggests that complexity generates risks for manage-
ment and produces system ‘surprises.’ Such surprises make the
system’s behavior opaque, unstable, counterintuitive, sometimes
chaotic, and eventually unpredictable [Forrester (1971)]. Under such
circumstances, control, regulation, and, consequently, effective risk
management are extremely difficult or even impossible to achieve.
Errors and failures, and in the worst cases catastrophes, are the
often unavoidable consequences [Choo (2008), Perrow (1999)].
Ashby’s seminal hypothesis: “only variety can destroy variety”

In the 1950s, Ashby proposed the law of ‘requisite variety,’ which
states that “only variety can destroy variety” [Ashby (1956)], where
variety means the number of different states (or behaviors) the
system can display. To fully understand this metaphor, one has
to consider the cybernetic definition of control, as suggested by
Ashby: to control a system means to reduce the variety of its poten-
tial behaviors. One controls a system if he/she is able to limit the
number of different states the system may exhibit. In the extreme
case, the system is entirely under control, such that it may exhibit
only one particular behavior (or state), the one the controlling
system has chosen [Hare (1967)]. An alternative way of understand-
ing the law of requisite variety, sometimes called Cybernetics I [De
Leeuw (1982)], is to view control as the ability to reduce or to block
the flow of variety from environmental disturbances, thus enabling
an internal “physiological” stability and the survival of the system
[Ashby (1956)].
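In its simplest counting form, Ashby's bound can be illustrated as follows (the function name and figures are ours — a sketch, not a general implementation):

```python
from math import ceil

def min_outcome_variety(disturbances: int, responses: int) -> int:
    # A regulator choosing among `responses` distinct responses can
    # compress `disturbances` distinct disturbances into at best
    # ceil(disturbances / responses) distinct outcomes:
    # "only variety can destroy variety."
    return ceil(disturbances / responses)

print(min_outcome_variety(12, 3))   # 4: too little regulator variety
print(min_outcome_variety(12, 12))  # 1: matching variety, full control
```

Only when the regulator's variety matches that of the disturbances can the system be pinned to a single chosen state.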
In this paper, we will assume that the cybernetic concept of variety
might be substituted by the broader (and bolder) term ‘complexity’ [Jackson (2000), Malik (1984), Van Court (1967)]. In fact,
we believe the term complexity can better capture the rich nature
of real socio-technical systems, where variety is only one of the
aspects of complexity [Johnson (2006)].
The three levers

On the basis of the conceptual schema provided by the law of
requisite variety, our research led to the elaboration of a strategic
framework, with a pragmatic orientation. Its objective, and the
criterion which we will adopt to measure its effectiveness, is to
help decision-makers out in the field to manage their complex work
environment. The target audience for the strategic framework is
those people faced with problems, environments, and situations
of increasing complexity, such as board members, executives of
private or publicly owned companies, consultants, politicians, plan-
ners, and project managers.
Van Court, in one of the rare comments and conceptual develop-
ments of Ashby’s law of requisite variety, noted that “there are only
two major ways to adjust the ability of the analyst (or controller)
to the requirements of the system to be controlled: (1) increase the
variety of the controller, or (2) reduce the variety of the system to
be controlled” [Van Court (1967)]. This issue was initially proposed
by Stafford Beer with his seminal concept of ‘variety engineering’
[Beer (1979, 1981)]. Beer argued that management can balance its
‘variety (complexity) equation’ by using both ‘variety reducers’
for filtering out environmental variety and ‘variety amplifiers’ for
increasing its own variety vis-à-vis its environment.
Basically, our model extends this approach, suggesting three meth-
ods, or strategic levers, for the decision maker to better manage, and
survive, complexity. Assuming that a controlling system C wants to
control and manage a system S, the three strategic levers are:
■ First lever — reduce complexity in the controlled system.
■ Second lever — manage the remaining complexity.
■ Third lever — increase complexity of the controlling system C, and understand and accept complexity.
First lever — reduce complexity in the controlled system S

The first lever suggested by our strategic framework aims to
reduce the complexity of the controlled system — where and when
such a reduction is possible, realistic, and adequate. One can adopt
different approaches in order to reduce the overall complexity of a
system, such as:
Complexity transfer — in this first approach, a certain ‘amount
of complexity’ is transferred elsewhere, outside the system to be
controlled.
■ Example 1 — outsourcing decisions usually allow for transferring
a considerable amount of organizational and technical complex-
ity of a process to a specialized partner, outside the company.
Here, it is interesting to note that the overall net management
burden and costs are often reduced, due to the specialization
of the outsourcing partner in a few specific processes, such as
transportation of goods, IT management, or market research.
The result is that the complexity decrease in the outsourcing
company is greater than the complexity increase in the special-
ized partner.
■ Example 2 — the introduction of traffic circles leads to a sub-
stantial reduction in the complexity of local traffic management
by public authorities, usually achieved by traffic lights. In this
very successful case, complexity of coordination has been effec-
tively distributed to thousands of single car drivers, each of them
managing his/her own space- and time-limited complexity when
approaching the crossing.
■ Example 3 — from an industrial point of view, Alberti has sug-
gested a general concept for the reduction of complexity in pro-
duction planning and production management [Alberti (1996)],
which includes the transfer of complexity from department and
plant managers to other elements of the manufacturing system,
such as product structure, production infrastructure (i.e., factory
layout), workers, and suppliers.
Simplification of system elements (reduction of both number and
variety of elements) — in this approach, complexity is reduced by
standardizing and rationalizing the individual elements of a socio-
technical system. The result is a reduction in the overall number
and variety of elements of the system, and hence of system variety
[Gottfredson and Aspinall (2005), Schnedl and Schweizer (2000)].
■ Example 1 — some airlines have dramatically reduced the number
of different planes in their fleet. For example, Southwest Airlines
ended up having only one kind of plane [Freiberg and Freiberg
(1996)]. This leads to a reduced complexity in terms of training
pilots, crew and technical personnel, maintenance and safety
management, and purchasing of new planes and spare parts.
■ Example 2 — standardizing the documents which are created
and circulated in an organization is also a complexity reducing
measure. For example, a company could decide that all sales reps
use only one standard format to register new sales orders. For
them and for their colleagues from all other departments this
would mean a less demanding task, reduced error probability,
and a reduction in training time. By the same token, imposing a
standardization of the different software applications used by an organization will lead to a significant complexity reduction, particularly in
the IT department and in the help desk.
Simplification of the interaction between elements (reduction
of the dynamical complexity) — in complex systems the relation-
ships between elements are usually numerous, nonlinear, and non-
transparent. This approach focuses on moving the system in the
opposite direction, i.e., reducing the number of interactions, making
them more linear or more transparent for the decision maker. The
objective is to obtain a system with lower overall dynamic complex-
ity [Gottfredson and Aspinall (2005)].
■ Example — a growing number of project leaders have adopted
web-based communication platforms (often with a “pull-philos-
ophy” for getting the data) to better manage communication
and collaboration in complex projects, with many stakeholders
involved. Such a tool may help to organize and simplify commu-
nication flow between project members, usually characterized
by infinite and disturbing numbers of emails, the coexistence of
different versions of the same document, and frequent mistakes.
Here complexity reduction is obtained by reducing the number
of interactions in the communication network between project
members, that is, moving from an n:n communication topology (everybody interacts with everybody) to an n:1 topology (every project member interacts only with the communication platform).
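The gain in this example can be counted directly — a sketch with hypothetical team sizes:

```python
def channels_n_to_n(n: int) -> int:
    # Everybody interacts with everybody: one channel per pair of members.
    return n * (n - 1) // 2

def channels_n_to_1(n: int) -> int:
    # Everybody interacts only with the shared platform: one channel each.
    return n

for n in (5, 20, 50):
    print(n, channels_n_to_n(n), channels_n_to_1(n))
# 5 -> 10 vs 5; 20 -> 190 vs 20; 50 -> 1225 vs 50
```

The n:n count grows quadratically with team size while the n:1 count grows only linearly, which is why the platform pays off precisely on large, multi-stakeholder projects.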
Second lever — managing residual complexity

The second approach suggested by our framework concentrates
on managing the remaining complexity in the controlled system.
Obviously, the potential for reducing complexity in the controlled
system has its limits, due to either physical-structural boundaries or
limitations in the actual power of the controlling system (in this case
the decision maker managing a complex socio-technical system).
In the last decades several scientific disciplines have developed
supporting tools and concepts, which can be extremely helpful in
enabling decision makers to better manage existing socio-technical
complexities. We will focus here on two such tools and concepts: decision support software and the segmentation of complexity levels.
Decision support software — we refer here to software tools which
enable the effective simulation and optimization of the behavior
of complex systems. One should distinguish between (a) general
purpose, generic applications and (b) task- or system-specific appli-
cations (e.g., for logistics, transportation, production scheduling).
Basically, even many functionalities of the traditional Enterprise
Resource Planning (ERP) applications aim to reduce complexity
for decision-makers in organizations, enabling a centralized, non-
redundant, fully-integrated data management [Davenport and
Harris (2005)].
Segmenting complexity levels — another approach tries to seg-
ment (or segregate) complexity, that is, to set apart different levels
of complexity in different processes, with specific rules, procedures,
resources, and strategies. The segmentation of process complexity
is one of the basic tenets of Business Process Reengineering, BPR
[Hammer and Champy (1993)]. Many insurance companies, for example, have subdivided the process of evaluating requests for compensation according to the complexity of the individual cases: simple cases follow a simplified procedure, handled by less specialized collaborators, while more complex cases are managed by a team of specialists through a more elaborate process. This per-
mits allocation of complex tasks to the best (and most expensive)
specialized resources, while avoiding the overburdening of these
resources with numerous simple and non-critical cases.
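A minimal sketch of such a segmentation rule — the scoring criteria and thresholds below are entirely hypothetical:

```python
def route_claim(amount: float, disputed: bool, parties: int) -> str:
    # Hypothetical complexity score: one point per complexity signal.
    score = (amount > 10_000) + disputed + (parties > 2)
    # Only genuinely complex cases reach the scarce specialist team;
    # everything else follows the cheaper standard process.
    return "specialist team" if score >= 2 else "standard process"

print(route_claim(500.0, False, 1))    # standard process
print(route_claim(50_000.0, True, 3))  # specialist team
```

The design choice is the one described in the text: segregate complexity at the entry point so that each level of case is handled by a procedure, and a resource, matched to it.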
Third lever — increase the complexity of the decision maker

After having reduced the complexity of the controlled system,
and having better managed the remaining complexity of the system,
a third approach suggests increasing the variety (or complexity)
of the controlling system, the decision making body. What does
“increasing complexity of the decision maker” mean? According to
our working definition of complexity, to increase complexity means
here to increase the number of potential perspectives, opinions,
decisions, and the variety of approaches for selecting among them
the best, most effective one. This translates into two complementary options: (a) increase the number or the (internal) complexity of elements of the decision-making system, and (b) improve the
number and variety of the interactions among these elements. In
real life — say, a manager who has to lead a complex organization —
we can envisage the following approaches to reach this goal.
Increase the “decision complexity” of human resources —
through different training and development measures, one has
to enable single employees and managers to think and act in a
more complex way: considering more potential frames and solu-
tion ideas when solving a problem; understanding the relationship
between activities and objectives of different processes, teams
and departments; communicating and collaborating with others in
a more effective way; acting as an interconnected element for the
long term interests of the whole system; and deliberately delaying
judgment and remaining open to alternative systems [McKenzie et
al. (2009)]. For example, decision makers should understand and
accept complexity, the logic behind the structure and behavior of
complex socio-technical systems, and its profound consequences
for the management of those systems. One essential conceptual
frame for understanding complexity is System Thinking [Sterman
(2000), Vester (1999), Senge (1990)]. Basically, System Thinking
is a holistic way of seeing and interpreting the world, in terms of
systems, made up of networks of interconnected elements. Often
the emergent behavior of a system is determined by the relation-
ship between elements, rather than by the behavior of the single
elements. System Thinking has been implicitly adopted and inte-
grated in strategic planning tools such as the well-known Balanced
Scorecard, BSC [Kaplan and Norton (2001)], developed in order to
escape from the limits of traditional strategic planning which was
too focused on financial indicators — and hence not systemic. BSC,
on the contrary, recognizes the role of many other dimensions and
objectives (market, internal, learning and growth objectives) for the
long-term success of an organization.
Increase variety of the whole decision system [Schwaninger
(2001)] — a first, simple step is to go from individual to collective
decision-making [Van Buuren and Edelenbos (2005)]. Further,
one can increase complexity by diversifying the decision-making
group from different perspectives: diversity of age, professional
or cultural background, thinking style, race, sex, knowledge of
the problem, role in the organization, experiences, and interests.
A more diverse team is potentially able to generate more diverse
mental states, solutions, or decisions [Page (2007)]. For example,
some innovative companies have made their research and develop-
ment teams more diverse by integrating “outsiders,” that is, people
with little or no knowledge (or a very naïve knowledge) of a specific
technological product, like children, older people, housewives, and
artists. At the opposite end, there are clues suggesting that some big decision mistakes might be traced back to an insufficient variety in decision bodies (e.g., the board of directors). Schütz (1999)
describes the case of Union Bank of Switzerland (UBS) in the 1990s.
He shows that the board of directors of UBS was in those years very
homogeneous: members were all men, Swiss, more than 50 years
old, army officers, with an economic or legal background, having
followed a traditional banking career. This complexity (variety) level
was apparently inadequate to cope with an increasing complexity in
the financial market environment. Schütz argues that such a board
could not effectively control or manage some critical and complex
processes of the bank. Eventually, UBS ended up in serious finan-
cial distress, and had to accept a merger with a smaller and less
powerful competitor, SBS (the new bank maintained the name UBS,
although most of the new management came from SBS).
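Page's intuition can be illustrated with a toy computation — the solution sets below are hypothetical stand-ins for the frames each member's background lets him/her generate:

```python
# Each member is modeled by the set of candidate solutions his/her
# background can generate; group variety is the size of the union.
def group_variety(members):
    return len(set().union(*members))

homogeneous = [{1, 2, 3}, {1, 2, 3}, {1, 2, 3}]  # same background, same frames
diverse = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}]      # different backgrounds

print(group_variety(homogeneous))  # 3
print(group_variety(diverse))      # 7
```

Three identical members add no variety beyond a single member, while three diverse members more than double the pool of candidate solutions — the counting version of the UBS board example.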
Improve knowledge management in the organization — more
knowledge enables access to more diverse mental states and
more dynamic interactions, and thus, potentially, to more complex
decisions. In this way, effective knowledge management can be
a powerful tool for increasing the complexity of decision-making
throughout the organization.
Create networks and decide and operate in networks — there is
some evidence that a network can generate a higher decision and
behavior complexity than other topological configurations, such
as hierarchies or chains, or a single decision maker [Surowiecki
(2004), Vester (1999), Kelly (1994), Peters (1992)]. One of the most
effective ways of dealing with very complex problems is to create
a network of different and complementary competencies, as in
the case of scientific research, where the trend is to build large
research networks (competence centers) on specific complex top-
ics. At the Civico City Hospital of Lugano (Switzerland), a medical
competence center for diagnosis and therapy of breast cancer was
established in 2005, in order to integrate into a virtual network all
the involved specialists (oncologists, radiologists, surgeons, phar-
macologists, nurses, psychologists, and researchers). One of the
most important features of such networks is a massive improve-
ment of communication between the nodes, which leads to an
increase in relationship complexity of the system [Evans and Wolf
(2005)]. The effectiveness of networked decision making can be
further improved by implementing methodologies and approaches
designed to foster cohesion, communication, and synergy in larger
groups of individuals, as defined for example by Stafford Beer with
his Team Syntegrity Model [Schwaninger (2001)].
Conclusion

Complexity and risks of markets, legal systems, technological land-
scapes, environmental, and social issues, to mention only a few,
will further increase in the next decades. Drawing from the insight-
ful intuition of W. R. Ashby, this paper tries to outline a pragmatic strategic framework for managing complexity in socio-technical
systems, as summarized in Figure 1. The model should be further
improved in the next few years; very important progress is to be
expected if researchers eventually become able to measure com-
plexity levels or at least to quantify changes in the complexity levels
of socio-technical systems.
We believe that the application of concepts and insights of complex-
ity science to management still has a huge development potential.
We must narrow the gap between the two worlds: complexity
science and system thinking on the one hand and “down-to-earth,”
empirical management reality on the other. We also believe that the
model discussed in this paper can be a humble contribution to this
fascinating challenge on the edge of these two worlds.
References
Alberti, G., 1996, Erarbeitung von optimalen Produktionsinfrastrukturen und Planungs- und Steuerungssystemen, Dissertation Nr. 11841, ETH, Zürich
Ashby, W. R., 1956, An introduction to cybernetics, Chapman & Hall, London
Beer, S., 1979, The heart of enterprise, Wiley, Chichester
Beer, S., 1981, Brain of the firm, second edition, Wiley, Chichester
Beer, S., 1985, Diagnosing the system, Wiley, New York
Casti, J., 1994, Complexification, Harper Collins, New York
Choo, C. W., 2008, "Organizational disasters: why they happen and how they may be prevented," Management Decision, 46:1, 32-45
Davenport, T. H., and J. G. Harris, 2005, "Automated decision making comes of age," Sloan Management Review, Summer, 83-89
De Leeuw, A., 1982, "The control paradigm: an integrating systems concept in organization theory," in Ansoff, H. I., A. Bosman, and P. M. Storm (eds.), Understanding and managing strategic change, North Holland, New York
De Leeuw, A., and H. Volberda, 1996, "On the concept of flexibility: a dual control perspective," Omega, International Journal of Management Sciences, 24:2, 121-139
Evans, P., and B. Wolf, 2005, "Collaboration rules," Harvard Business Review, July-August, 96-104
Forrester, J. W., 1971, "Counterintuitive behavior of social systems," Theory and Decision, 2:2, 109-140
Freiberg, K. L., and J. A. Freiberg, 1996, Nuts! Bard Press, New York
Gandolfi, A., 2004, La Foresta delle decisioni, Casagrande, Bellinzona
Gottfredson, M., and K. Aspinall, 2005, "Innovation versus complexity: what is too much of a good thing?" Harvard Business Review, November, 62-71
Hammer, M., and J. Champy, 1993, Reengineering the corporation, Harper Business, New York
Holland, J. H., 2000, Emergence: from chaos to order, Oxford University Press, Oxford
Iansiti, M., and R. Levien, 2004, The keystone advantage, Harvard Business School Press, Boston
Jackson, M. C., 2000, Systems approaches to management, Kluwer Academic, New York
Jervis, R., 1997, System effects, Princeton University Press, Princeton
Johnson, J., 2006, "Can complexity help us better understand risk?" Risk Management, 8, 227-267
Kaplan, R. S., and D. P. Norton, 2001, The strategy-focused organization, Harvard Business Press, Boston
Kauffman, S. A., 1993, The origins of order, Oxford University Press, Oxford
Kelly, K., 1994, Out of control: the new biology of machines, social systems and the economic world, Addison-Wesley, Reading
Küppers, B. O., 1987, "Die Komplexität des Lebendigen," in Küppers, B. O. (ed.), Ordnung aus dem Chaos, Piper, München
Malik, F., 1984, Strategie des Managements komplexer Systeme, Haupt, Bern
McKenzie, J., N. Woolf, and C. Van Winkelen, 2009, "Cognition in strategic decision making," Management Decision, 47:2, 209-232
Mintzberg, H., 1994, The rise and fall of strategic planning, The Free Press, New York
Ormerod, P., 1998, Butterfly economics, Faber and Faber, London
Page, S., 2007, The difference, Princeton University Press, Princeton
Perrow, C., 1999, Normal accidents, Princeton University Press, Princeton
Peters, T., 1992, Liberation management, Fawcett Columbine, New York
Pimm, S. L., 1991, The balance of nature? University of Chicago Press, Chicago
Schnedl, W., and M. Schweizer, 2000, Dynamic IT management in the 21st century, PricewaterhouseCoopers
Schütz, D., 1999, Der Fall der UBS, Weltwoche-ABC-Verlag, Zürich
Schwaninger, M., 2001, "Intelligent organizations: an integrative approach," Systems Research and Behavioral Science, 18, 137-158
Senge, P. M., 1990, The fifth discipline, Doubleday, New York
Sommer, S. C., and C. H. Loch, 2004, "Selectionism and learning in projects with complexity and unforeseeable uncertainty," Management Science, 50:10, 1334-1347
Sterman, J., 2000, Business dynamics, Irwin McGraw-Hill, Boston
Surowiecki, J., 2004, The wisdom of crowds, Little Brown, London
Tenner, E., 1997, Why things bite back, Vintage Books, New York
Van Buuren, A., and J. Edelenbos, 2005, "Innovations in the Dutch polder," Science and Society Conference, Liverpool, 11-15 September
Van Court, H., 1967, Systems analysis: a diagnostic approach, Harcourt, Brace & World, New York
Vester, F., 1999, Die Kunst vernetzt zu denken, DVA, Stuttgart
Strategic lever — some approaches

First lever — reduce complexity in the controlled system C:
Complexity transfer
Simplification of system elements (reduction of variety)
Simplification of the interaction between elements (reduction of the dynamical complexity)

Second lever — manage remaining complexity:
Decision support applications
Segmenting complexity levels
……

Third lever — increase complexity of the controlling system C:
Increasing decision complexity of human resources
Increasing variety of the whole decision system
Improving knowledge management
Deciding and operating in networks

Figure 1 – Overview of the strategic framework to better manage complexity in socio-technical systems – the four strategic levers and the corresponding approaches (the list is not exhaustive).
Articles
The authors are thankful to Kenneth Sullivan and participants at the 101st Accountants Forum of the IMF, as well as a number of IMF staff, for their comments and suggestions. We are also indebted to Yoon Sook Kim and Xiaobo Shao for their research and technical support.
Abstract
In light of the uncertainties about valuation highlighted by the 2007–2008 market turbulence, this paper provides an empirical examination of the potential procyclicality that fair value accounting (FVA) could introduce into bank balance sheets. The paper finds that, while weaknesses in the FVA methodology may introduce unintended procyclicality, it is still the preferred framework for financial institutions. It concludes that capital buffers, forward-looking provisioning, and more refined disclosures can mitigate the procyclicality of FVA. Going forward, the valuation approaches for accounting, prudential measures, and risk management need to be reconciled, and this will require adjustments on the part of all parties.

Alicia Novoa, Economist, Financial Oversight Division, IMF
Jodi Scarlata, Deputy Division Chief, Financial Analysis Division, IMF
Juan Solé, Economist, Global Financial Stability Division, IMF

Financial stability, fair value accounting, and procyclicality1
2 Non-derivative financial assets with fixed or determinable payments and fixed maturity that an entity has the intention and ability to hold to maturity.
3 Namely, when they are risk-managed on a FV basis, though differences remain between FAS 159 and IAS 39.
4 The paper utilizes the major classification of financial assets as set out by the IASB and FASB, recognizing that there are additional, specific categories of instruments subject to fair valuation.
Since the 2007 market turmoil surrounding complex structured
credit products, fair value accounting and its application through
the business cycle have been a topic of considerable debate. As
the illiquidity of certain products became more severe, financial
institutions turned increasingly to model-based valuations that,
despite increased disclosure requirements, were nevertheless
accompanied by growing opacity in the classification of products
across the fair value spectrum. Moreover, under stressed liquidity
conditions, financial institutions made wider use of unobservable
inputs in their valuations, increasing uncertainty among financial
institutions, supervisors, and investors regarding the valuation of
financial products under such conditions.
It has been during this period that the procyclical impact of fair
value accounting on bank balance sheets and, more specifically,
the valuation of complex financial instruments in illiquid markets,
came to the fore, raising questions on the use of market prices
below “theoretical valuation” and the validity of “distressed sales.”
Financial products were fair valued despite concerns that the cur-
rent market prices were not an accurate reflection of the product’s
underlying cash flows or of the price at which the instrument
might eventually be sold. Sales decisions based on fair value pric-
ing in a weak market with already falling prices resulted in further
declines in market prices, reflecting a market illiquidity premium.
Additionally, falling prices can, and did, activate margin calls and
sale triggers that are components of risk management criteria,
contributing further to the downward trend. As bank net worth is
positively correlated with the business cycle, and as the fair market values of collateral fall, losses have been passed through
to banks’ capital [Kashyap (2005)]. The weakening of bank balance
sheets and regulatory requirements for prudential capital replen-
ishment has served to heighten concerns as to the future course of
some markets, the health of banks and, more broadly, the financial
system.
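The feedback loop just described, in which mark-to-market losses breach risk-management triggers, forced sales follow, and those sales depress prices further, can be made concrete with a small numerical sketch. The balance-sheet figures, leverage trigger, and price-impact rule below are hypothetical illustrations, not the model used later in the paper:

```python
# Hypothetical sketch of the feedback loop described above: a price shock
# is marked to market, the resulting equity loss breaches a leverage
# trigger, forced sales follow, and those sales depress the price further.
# All parameters (leverage cap, price impact) are illustrative.

def simulate_spiral(units=100.0, debt=90.0, price=1.0, shock=0.05,
                    max_leverage=12.0, impact=0.002, rounds=5):
    """Return the asset price after the shock and after each forced sale."""
    price *= (1.0 - shock)                    # exogenous market shock
    path = [price]
    for _ in range(rounds):
        equity = units * price - debt         # fair-value losses hit equity
        if equity <= 0:                       # sketch stops at insolvency
            break
        if (units * price) / equity <= max_leverage:
            break                             # leverage trigger not breached
        # sell just enough assets at the current price to restore leverage
        sold = units - max_leverage * equity / price
        units -= sold
        debt -= sold * price                  # sale proceeds repay debt
        price *= (1.0 - impact * sold)        # sales depress the market price
        path.append(price)
    return path

path = simulate_spiral()
```

With the default parameters the price falls with every round of deleveraging, illustrating how fair-value losses and risk-management sale triggers can reinforce each other.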
This paper reviews the principles and application of fair value
accounting (FVA), the implications of its features and how these
impact bank balance sheets. Using a simple model, it provides
empirical support for the public discussions regarding the procycli-
cality of FVA on bank balance sheets. Utilizing representative bank
balance sheets from a sample of actual institutions, it examines the
application of FVA to banks’ balance sheets during the course of a
normal business cycle, as well as during extreme shocks, such as
those that have recently occurred, to distill in what manner FVA
may contribute to procyclicality. The paper examines the results
obtained, discusses actual and proposed alternatives to FVA, and
elaborates on policy implications going forward.
The paper finds that, while the application of FVA methodology
introduces unwanted volatility and measurement difficulties, FVA
nevertheless is the correct direction for proceeding as it provides
as faithful a picture as possible of a bank’s current financial condi-
tion — alternative techniques have their own shortcomings. Yet
despite its advantages, difficulties exist not only in determining
FV prices in downturns and illiquid markets, but also during boom
times in active markets when prices can overshoot and incorporate
risk premia that inflate profits. Under such circumstances, market
prices may not accurately reflect risks and can result in exagger-
ated profits that distort incentives (i.e., management compensa-
tion) and amplify the cyclical upturn. In rapidly evolving financial
markets, inaccurate valuations may quickly alter the implications
for solvency and more broadly, financial stability.
The paper emphasizes that FVA should be structured so that it
contributes to good risk management and ensures that financial
statements include adequate disclosure of valuations, method-
ologies, and volatilities such that inherent uncertainties are well
understood. While the volatility of estimation errors in valuation
techniques should be reduced as much as possible, genuine eco-
nomic volatility should be faithfully reflected in financial statements
and preserved by regulators and supervisors [Barth (2004), Borio
and Tsatsaronis (2005)]. The paper concludes by providing some
quantitative insight for regulators and supervisors to better assess
the implications of FVA on bank balance sheets and capital and puts
forward proposals for dealing with issues of the volatility of FVA
and FV classification. Importantly, it stresses the need for resolving
the tensions between valuation approaches across risk managers,
accountants, and prudential supervisors and regulators, so as to
ensure that accounting frameworks do not unduly contribute to
potential financial instability.
Fair value accounting through the business cycle
Fair value accounting and its application
The current accounting framework
Both U.S. Generally Accepted Accounting Principles (U.S. GAAP)
and International Financial Reporting Standards (IFRS) use a mixed
attributes model, where different valuation criteria are applied to
different types of assets and liabilities, depending on their char-
acteristics and on management’s intentions in holding them to
maturity or not. In essence, both frameworks require FV valuation
for financial assets and liabilities held for trading purposes and
available-for-sale (AFS) assets, and all derivatives. Held-to-maturity
(HTM) investments2, loans, and liabilities not fair valued are valued
at amortized cost. Both frameworks provide a carefully specified
option to fair value (FVO) certain financial assets and liabilities3 that
would normally be valued at amortized cost4. The mixed attributes
model is intended to be as neutral as possible — without emphasiz-
ing one accounting principle over another. But, its uneven applica-
tion to balance sheets produces accounting volatility and may not
fully capture the effects of economic events in all instruments
included in the banks’ financial statements.
5 Nevertheless, differences are disappearing given the international convergence to IFRS currently under way, led jointly by the International Accounting Standards Board (IASB) and the U.S. Financial Accounting Standards Board (FASB), which is aimed at achieving a single set of high-quality international accounting standards.
6 This language is U.S. GAAP-specific and not IFRS, but it is used extensively in the banking industry and in financial statements of IFRS users as well.
7 IFRS do not explicitly mention some risk factors (i.e., counterparty credit risk, liquidity risk), which may have added confusion for financial statement preparers during the 2007–08 turmoil. An International Accounting Standards Board Expert Advisory Group is currently working on this and other FV issues. The U.S. Financial Accounting Standards Board is reevaluating some disclosure requirements (i.e., credit derivatives) and has issued new standards (i.e., FAS 161 on derivatives and hedging). Both Boards are working jointly on FV issues and examining requirements for off-balance sheet entities as well.
8 White papers prepared by the six largest international audit firms and other audit firms summarize guidance on what constitutes an active market, FV measurement in illiquid markets, and forced sales, CAQ (2007) and GPPC (2007). Further, on September 30, 2008, the U.S. SEC jointly with the U.S. FASB issued new guidance clarifying the use of FVA under the current environment and, on October 10, 2008, the U.S. FASB staff issued Staff Position No. 157-3 providing guidance on how to determine the FV of a financial asset when the market for that asset is not active.
9 IFRS 7, "Financial instruments: disclosures," became effective on January 1, 2007.
10 For those financial assets measured at amortized cost, the entity must also disclose the FV in the notes to the statements.
11 Including audit-related programs.
12 The FSF recommends disclosures about price verification processes to enhance governance and controls over valuations and related disclosures. Disclosures regarding risk management governance structures and controls would also be welcome.
13 Examples are the U.S. SEC letters of March 2007 and March 2008 to major financial institutions outlining the nature of recommended disclosures and the most recent letter of September 16, 2008.
14 "Leading-practice disclosures for selected exposures," April 11, 2008. Twenty large, internationally oriented financial firms were surveyed (15 banks and five securities firms) as of end-2007.
What is fair value?
IFRS and U.S. GAAP similarly define FV as the amount for which an
asset could be exchanged, and a liability settled, between knowl-
edgeable, willing parties, in an arm’s length, orderly transaction.
U.S. GAAP (FAS 157) is more prescriptive than IFRS because it considers FV to be an "exit" or "selling" price5. Both accounting frameworks prescribe a hierarchy of FV methodologies that starts with observable prices in active markets (Level 1), moves to prices for similar instruments in active or inactive markets or to valuation models using observable inputs (Level 2), and ends with a mark-to-model methodology using unobservable inputs and model assumptions (Level 3)6. The absence of market prices, trading activity, or
comparable instruments’ prices and inputs is a prominent feature
of complex structured credit products, many of which are held off
balance sheet. Consequently, both frameworks require extensive
disclosures of information on the FV methodologies used, specific
assumptions, risk exposures, sensitivities, etc.
Thus defined, FV does not require the presence of deep and liquid
markets to be applied. FV can be estimated when a market does not
exist, as FV valuation models comprise the expected, risk-discount-
ed cash flows that market participants could obtain from a finan-
cial instrument at a certain point in time. While FV incorporates
forward-looking assessments, it must also reflect current market
conditions, measures of risk-return factors7, and incorporate all fac-
tors that market participants consider relevant, with firm-specific
risk preferences or inputs kept to a minimum. Under this definition,
two key issues underlying the FV methodology present a challenge:
what constitutes an active market and what can be considered an
observable price or input.
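A mark-to-model estimate of the kind just described, i.e., expected cash flows discounted at a rate that embeds current market risk factors, can be sketched as follows; the cash flows, rates, and premia are hypothetical:

```python
# Sketch of a mark-to-model (Level 3) fair value: expected cash flows
# discounted at a rate that embeds current market risk factors. The
# cash flows, rates, and premia below are hypothetical.

def fair_value(expected_cash_flows, risk_free_rate, risk_premium):
    """Present value of expected cash flows at a risk-adjusted discount rate."""
    r = risk_free_rate + risk_premium
    return sum(cf / (1.0 + r) ** t
               for t, cf in enumerate(expected_cash_flows, start=1))

# A wider risk premium (e.g., an illiquidity premium under stressed
# conditions) lowers the estimate even if expected cash flows are unchanged:
calm = fair_value([5.0, 5.0, 105.0], risk_free_rate=0.03, risk_premium=0.01)
stressed = fair_value([5.0, 5.0, 105.0], risk_free_rate=0.03, risk_premium=0.06)
```

The comparison illustrates the point made in the text: fair value can be estimated without a deep market, but the estimate is sensitive to the risk factors the model is required to reflect.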
Forced or “fire” sales would not be valid determinants of market
prices, because the accounting frameworks presume that a report-
ing entity is a going concern that does not need or intend to liqui-
date its assets, or materially curtail the scale of its operations. Yet,
accounting standard setters have decided to leave to judgment
(namely, of management, supervisors, and auditors) how to deter-
mine “regularly occurring” or “distressed” sales, and when sales in
thin markets, at heavy discounts, could be used for balance-sheet
FVA8. Consequently, market participants and supervisors would
expect to see banks’ external auditors use a very cautious approach
to examining the prices and inputs used to FV financial instruments,
in order to minimize late write-downs or write-offs and opportuni-
ties for management to “cherry-pick” the accounting treatment of
financial instruments.
Disclosures of FVA
Both IFRS and U.S. GAAP mandate various disclosures, particularly
when information other than market inputs is used to estimate FV.
For example, IFRS 7 requires disclosure (i) if the transaction price
of a financial instrument differs from its FV when it is first recorded
in the balance sheet; and (ii) of the implications of using “reason-
ably possible alternative assumptions” to reflect the sensitivities of
FV measurement9. IFRS 7 also contains reporting requirements that
include the publication of sensitivity tests for individual items of
the financial statements. Similarly, FAS 157 requires banks’ balance
sheets to be sufficiently clear and transparent as to fully explain to
market participants, through quantitative and qualitative notes to
the financial statements, the nature of the changes and the meth-
odologies used, to name a few items10.
Although some U.S. and European Union (E.U.) financial institutions
voluntarily provide such disclosures, neither IFRS nor U.S. GAAP
require disclosure on the governance and management control
processes11 surrounding FV valuation12. Enhancement of disclosures
in this direction could increase confidence in banks' balance sheets and lower investors' aversion to transacting in instruments
whose valuations may not be well understood13. This would not
necessarily indicate a need for more disclosures, but for a more
appropriate composition, medium (i.e., websites), and frequency of
disclosures.
Along this line, at the request of the Financial Stability Forum (FSF)
a Senior Supervisors Group conducted a survey of disclosure prac-
tices for selected financial exposures, such as special-purpose enti-
ties and collateralized debt obligations, among others, and issued
a report concluding that disclosure practices currently observed
could be enhanced without amending existing accounting disclo-
sure requirements14. The FSF is encouraging financial institutions
to use these disclosure practices for their 2008 financial reports
and urging supervisors to improve risk disclosure requirements in
Pillar 3 of Basel II.
15 Canada has postponed adoption of the full International Financial Reporting Standards until 2011.
16 Barth (2004) argues that mixed-attributes models impair the relevance and reliability of financial statements and that this constitutes one of the primary reasons behind hedge accounting. IAS 39 was aimed at alleviating mismatches in asset and liability valuations due to the mixed-attributes model and the complexities of hedge accounting.
17 It should be noted that procyclicality of accounting and reporting standards existed prior to the recent attention to FVA. It has long been recognized that as the business cycle and market sentiment change, so too will valuations of assets and liabilities.
18 IFRS and U.S. GAAP accounting standards – and FVA is no exception – are applicable to reporting entities irrespective of their size or systemic importance.
19 One intention of the FVO in both accounting frameworks is to enable entities to reduce accounting mismatches by applying FV to matching assets and liabilities.
20 Bank supervisors use prudential filters as a tool to adjust changes in the (accounting) equity of a bank due to the application of the accounting framework, so that the quality of regulatory capital may be properly assessed. For example, when the gains that result from a deterioration in a bank's own creditworthiness (fair-valued liability) are included in a bank's prudential own funds, they must be "filtered out" by the supervisor in order to determine the true amount of regulatory own funds.
21 In principle, valuations are thus better aligned with the prevailing mark-to-model techniques used in risk management.
A preliminary reading of financial reports prepared for mid-2008 by some U.S., European Union, and Canadian banks shows that U.S. banks are including more quantitative notes in their financial statements than in their end-2007 reporting15, typically
providing information on financial assets securitized, cash flows
received on Special Purpose Entities (SPE) retained interests,
assets in non-consolidated variable-interest entities (VIE), and
maximum exposures to loss in consolidated and non-consolidated
VIEs, with details broken down by instrument.
Volatility and procyclicality of FVA
Barth (1994 and 2004) argues that there are three potential
channels through which FV may introduce volatility into financial
statements. The first is the volatility associated with changes in
the underlying economic parameters. The second is the volatility
produced by measurement errors and/or changing views regarding
economic prospects throughout the business cycle. As to the third,
volatility may be introduced by relying on the “mixed attributes”
model that applies FVA to certain instruments and amortized
cost to others, reducing the netting effect that full fair valuation
of assets and liabilities would produce16. Each of these sources of
volatility is either explicitly or implicitly present in the simulation
exercises examined later in the paper.
The mixed attributes model adopted by IFRS and U.S. GAAP has
embedded volatility and procyclicality aspects17. On the one hand,
historical cost accounting, applicable to HTM investments and loans,
is less volatile and also backward looking. When such an investment
or loan is correctly priced at origination, its FV equals its face value.
Over the life of the asset and until maturity, its reported stream of
profits is stable and its carrying value is based on its value at origi-
nation. But if market conditions negatively affect these portfolios
and there is evidence of a credit loss event and asset impairment,
then the reporting values must be reassessed and provisions for
losses must be accrued or write-offs recorded. The latter is often a
late recognition of excess risk taken earlier, in good times. In this
sense, historical costs are subject to a backward-looking assess-
ment of value. Thus, amortization of loans, when combined with
procyclical provisioning, often coincides with a downturn of an
economic cycle, adding to stresses.
On the other hand, FVA introduces more volatility in earnings and
capital during the life of an asset or liability than historical cost
accounting and incorporates forward-looking assessments18. Gains
and losses in fair-valued instruments can generally affect the
income statement and this increased volatility of FVA and resulting
procyclical effects may create incentives for banks to restructure
their balance sheets (i.e., lower loan originations, higher/lower
securitization, introduce hedging, etc.)19. Nevertheless, higher FV
volatility, per se, would not necessarily be a problem if market
participants are well informed and could correctly interpret the
information provided in the financial statements. In this sense,
increased volatility may be thought of as part of the process of fair
valuing financial instruments, and a reflection of genuine economic
volatility, not as a cause itself of procyclicality.
However, in some cases, the symmetrical treatment within FVA
produces seemingly misleading results. For example, the use of
FVA on a bank’s own debt, where the price of the bank’s bonds and
notes falls due to a decline in its own creditworthiness, will result in
a gain that must be recognized in the bank’s financial statements,
equal to the difference between the original value of the debt and
its market price. As counter-intuitive as this situation may be, it is
still a faithful representation of FV and is a signal to supervisors or
other users of financial statements to have appropriate tools (i.e.,
prudential filters)20 for understanding the implications of FVA and
the impact on regulatory capital.
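The own-debt effect and the prudential filter described above reduce to simple arithmetic; the figures below are invented for illustration:

```python
# Hypothetical illustration of the own-credit effect described above:
# a fall in the market price of the bank's own fair-valued notes is
# booked as an accounting gain, which a prudential filter then removes
# from regulatory own funds. All figures are invented for illustration.

carrying_value = 100.0    # original value of the bank's own notes
market_price = 92.0       # price after the bank's creditworthiness declines

own_credit_gain = carrying_value - market_price     # recognized in earnings

equity_before_gain = 50.0                           # hypothetical equity
accounting_equity = equity_before_gain + own_credit_gain
regulatory_capital = accounting_equity - own_credit_gain  # gain filtered out
```

The filtered figure shows why supervisors treat the reported gain as a signal rather than as loss-absorbing capital: the bank's condition has worsened even though its accounting equity has risen.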
As valuation moves from market prices to mark-to-model valua-
tion, FVA poses reliability challenges to which markets, particularly
under distress, are sensitive21. These “subjective” aspects of FVA
may compound market illiquidity or price spirals if they increase
uncertainty around valuations. Both in the United States and the
European Union, financial institutions’ balance sheets are heav-
ily represented in Level 2 instruments, a possible indication that
financial institutions are biased towards using Level 2 methods
Figure 1 – Aggregate fair value hierarchy, end-2007 (in percent)

                                   Level 1   Level 2   Level 3
U.S. Financial Institutions           22        72         6
European Financial Institutions       28        67         5
Total                                 25        69         6

Source: Fitch Ratings
IASB’s November 2009 exposure draft, Financial instruments: amortized cost and 22
impairment, proposes an expected loss model for provisioning.
due to their flexibility, as well as a desire to avoid “obscure” Level
3 assets and liabilities (Figure 1). Falling valuations can activate
certain management decision rules that trigger the liquidation of
certain assets or portfolios, adding additional stress. Hence, there
is a need for good risk management practices to be consistent with
FV mark-to-model valuations. Clear and transparent quantitative
and qualitative notes to the financial statements regarding the
nature of the changes and methodologies could enhance reliability
of mark-to-model valuations.
Although more volatile, FVA could play a role in partially mitigating the crisis if warning signals are heeded, thereby helping markets
to recover before damaging self-fulfilling downturns worsen. FVA
that captures and reflects current market conditions on a timely
basis could lead to a better identification of a bank’s risk profile, if
better information is provided. An earlier warning that can prompt
corrective action by shareholders, management, and supervisors
allows for a timelier assessment of the impact of banks’ risky
actions on regulatory capital and financial stability. Moreover, since
FVA should lead to earlier recognition of bank losses, it could have
a less protracted impact on the economy than, for example, loan
portfolios whose provisions for losses are usually made when the
economy is already weak22. Raising new capital at an earlier stage
might enable banks to retain written-down assets or other assets
originally not for sale on their balance sheets and, thus, to avoid
asset price spirals.
On the prudential front, the negative impact of vastly lower valu-
ations stemming from recent market conditions raises questions
as to whether increases in regulatory capital may be needed for
complex structured products, off-balance sheet entities (OBSEs),
or other risks. Basel II, Pillar 2 guidance could encourage banks to
put greater attention into FV during periods of falling or rising asset
prices, so that they may better control for procyclical aspects of
FVA. Pillar 3 disclosures could improve transparency of valuations,
methodologies, and uncertainties. Moreover, FVA can serve as
an early warning system for supervisors to pursue closer scrutiny
of a bank’s risk profile, risk-bearing capacity, and risk management
practices.
Off-balance-sheet entities and procyclicality
Recent market turmoil has heightened public awareness of the
extensive use of off-balance-sheet entities (OBSEs) by financial
institutions. With variations, both IFRS and U.S. GAAP have spe-
cific criteria to determine when instruments transferred to OBSEs
should be consolidated on-balance-sheet. Any retained interest
in securitized financial assets should be on-balance-sheet and
accounted for at FV, usually in the trading book.
Mandatory disclosures on OBSEs are not prevalent. Their absence
may have added to market confusion and contributed to procyclical
behavior by helping to create a market perception that the banks
were standing behind their OBSEs. Both the IASB and the U.S. FASB
have different projects under way to improve OBSE disclosures and
enhance the criteria for derecognition and consolidation of OBSEs.
Examples are the IASB’s consolidation and derecognition projects,
and the FASB’s changes to FAS 140 and Interpretation 46(R). The
FASB’s recently revised standard, FAS 140, will go into effect at the
end of 2009.
Regardless, OBSEs require financial supervisors to revisit pruden-
tial reporting so that the integrity of banks’ risk exposures can be
better captured and explained, as well as adequately buffered (i.e.,
capital) to the satisfaction of supervisors.
Procyclicality in the Basel II framework
A key improvement in the Basel II framework is its enhanced risk
sensitivity. Yet this very feature is associated with the unintended
effect of heightening its procyclical propensity. Basel II recognizes
possible business cycle effects and how they should be addressed
in both Pillar 1 (minimum capital requirements) and Pillar 2 (super-
visory review process) of the framework. If Basel II is properly
implemented, then greater risk sensitivity can lead banks to restore
capital earlier in a cyclical downturn, thus preventing a build-up of
required capital when it could amplify the cycle.
Under Basel II’s Standardized Approach, risk weights are based on
external ratings constructed to see through the cycle, so that cyclical
effects are muted. It is in the internal-ratings-based (IRB) approaches that deterioration in credit risk feeds more directly into the capital
requirements. The three main risk components in the IRB approaches
(i.e., probability of default, loss given default, and exposure at
default) are themselves influenced by cyclical movements and may
give rise to a cyclical impact on banks’ capital requirements.
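The cyclical sensitivity of these risk components can be illustrated with the standard expected-loss identity EL = PD x LGD x EAD; the parameter values below are hypothetical, and the actual Basel II capital charge is computed with a more elaborate formula:

```python
# Illustration of how cyclical movements in the IRB risk components feed
# into loss estimates, using the standard expected-loss identity
# EL = PD x LGD x EAD. Parameter values are hypothetical, and the actual
# Basel II capital charge is computed with a more elaborate formula.

def expected_loss(pd, lgd, ead):
    """Probability of default x loss given default x exposure at default."""
    return pd * lgd * ead

benign = expected_loss(pd=0.01, lgd=0.40, ead=1_000_000)    # expansion
downturn = expected_loss(pd=0.03, lgd=0.55, ead=1_100_000)  # recession
```

Because PD, LGD, and EAD tend to deteriorate together in a downturn, the loss estimate, and with it the IRB capital requirement, rises precisely when capital is hardest to raise; this is the procyclicality concern the mitigating measures below are meant to address.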
Basel II includes mitigating measures to address these concerns.
Although Pillar 1 does not mandate the use of through-the-cycle mod-
els, it promotes estimates of risk components based on observations
that “ideally cover at least one economic cycle,” and whose valida-
tion must be based on data histories covering one or more complete
business cycles. Sound stress testing processes must be in place
that involve scenarios based on economic or industry downturns and
include specific credit risk stress tests that take into account a mild
recession to assess the effects on the bank’s risk parameters.
Pillar 2 places the onus on both banks and supervisors to assess
business cycle risk and take appropriate measures to deal with it.
Banks are required to be “mindful of the stage of the business cycle
in which they are operating” in their internal assessment of capital
adequacy, perform forward-looking stress tests, address capital
volatility in their capital allocation, and define strategic plans for
raising capital. In turn, encouraging forward-looking credit risk
assessments or higher provisioning for loan losses (that consider
losses over the loans’ whole life) is left to national supervisors23.
Thus, where Pillar 1 does not adequately capture business cycle
effects, supervisors should take remedial action under Pillar 2,
including through additional capital buffers.

financial stability, fair value accounting, and procyclicality

23 The U.S. Financial Accounting Standards Board has a project under way to address provisioning and related credit risk disclosures.
24 In mid-October 2008, the IASB amended IAS 39 to allow some reclassifications of financial instruments held for trading or AFS to the HTM category, subject to certain criteria, with the aim of reducing differences between IFRSs and U.S. GAAP. As of November 2009, IFRS 9, Financial Instruments, requires reclassifications between amortized cost and fair value classification when the entity’s business model changes.
The capital disclosures required by Pillar 3 may assist markets and
stakeholders in exercising pressure on the banks to maintain their
capital levels throughout the full business cycle. In its recent report,
“Enhancing market and institutional resilience,” the Financial
Stability Forum called for the Basel Committee to develop Pillar 2
guidance on stress testing practices and their use in assessing capi-
tal adequacy through the cycle, to examine the balance between
risk sensitivity and cyclicality, and update the risk parameters and
the calibration of the framework, if needed [Financial Stability
Forum (2008)]. In response, the committee is establishing a data
collection framework to monitor Basel II’s impact on the level and
cyclicality of prudential capital requirements over time across
member countries. The committee is expected to use these results
to further calibrate the capital adequacy framework.
Options for the application of fair value accounting to mitigate procyclicality
The procyclicality of FVA has prompted the search for options that
allow financial institutions to cope with situations of market turmoil.
Alternatives range from considering a wider selection of “observ-
able” prices or inputs to a change in the accounting treatment of
financial instruments, as follows:
Consensus pricing services
Consensus pricing services, typically independent brokers and
agencies, can provide price quotes for complex or illiquid financial
instruments, often drawing on prices from their own sales of relevant
instruments, which allow them to observe price behavior and
market-test their estimates. Through this approach, illiquid products could
obtain a Level 2 price, potentially limiting valuation uncertainty
and underpricing in downturns. However, difficulties may remain if
there is a wide dispersion of values that do not reflect the features
of the specific financial product or if banks contend that values do
not reflect market conditions, thereby obliging banks to use internal
valuation methodologies.
Valuation adjustments
Banks could estimate the “uncertainty” surrounding the price of cer-
tain assets and make a valuation adjustment to the carrying value of
an instrument disclosed in the financial statements. Valuation adjust-
ments would allow banks to work with less perfect prices that are
corrected to reflect current market conditions. These estimates of
“uncertainty” might incorporate the liquidity of inputs, counterparty
risk, or any market reaction likely to occur when the bank’s position
is realized. Valuation adjustments could improve fair value measure-
ments and discipline in reporting, yet they need close monitoring to
ensure that this practice does not evolve into management “cherry
picking,” that is, a means of evading a particular accounting fair
value level classification or of flattering the balance sheet.
Reclassifications
The transfer of assets from available-for-sale or trading to the
held-to-maturity (HTM) category could avoid the volatility resulting
from valuation changes amid a downward spiral. However, from an
accounting perspective, reclassifications could be penalized by not
allowing banks to revert to the trading book when markets rebound.
Further, assets transferred from the trading category to HTM would
be subject to impairment assessment (as they would be were they
moved into the AFS category). From a prudential standpoint,
deteriorated HTM assets would require higher regulatory capital,
while changes in AFS assets would be considered additional but not
core capital. Allowing reclassifications, particularly if not fully
disclosed, may postpone recognition of balance sheet weaknesses and
promote cherry-picking among elements of the accounting framework24.
Full fair value accounting
Recognizing the significant challenges that FVA poses, a longer-
term alternative would be to adopt a full-fair-value (FFV) model for
all financial assets and liabilities in a balance sheet, irrespective of
an entity’s intention in holding them. One single FV principle, with
some limited exceptions, would reduce the complexity of financial
instruments reporting, balance sheet window dressing, and cherry
picking, and allow for more transparent representations of the
financial condition of an entity. It could improve the comparability
of financial information across balance sheets and enhance market
discipline, but it would pose significant challenges for implementa-
tion, modeling capabilities, and auditing estimates.
Internal decision rules
Without searching for a FVA alternative, regulators could require
banks to have internal decision rules based on FV that require a
careful review of all the implications of changing FV and the specific
occasions when such changes could trigger management decisions,
so that these decisions do not adversely affect regulatory capital or
accentuate downward price spirals.
Smoothing techniques and circuit breakers
Smoothing asset prices and circuit breakers could be used as price
adjusters to FVA to reduce excessive price volatility in the bal-
ance sheet. Smoothing techniques involve the averaging of asset
prices over a given period. A circuit breaker imposes rules to stem
the recognition of a fall in asset prices. However, both reduce the
information content of financial statements by holding equity at an
artificially calculated level above fair value. The simulation
exercises examine the following alternatives: FFV accounting,
smoothing techniques, circuit breakers, and reclassifications.
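The two mechanisms can be sketched as price adjusters applied to a reported price series (an illustration only; the two-period window and the 10 percent per-period cap are arbitrary choices, not values from the text):

```python
def smoothed(prices, window):
    """Smoothing: report a trailing average of up to `window`
    observed prices instead of the spot price."""
    out = []
    for i in range(len(prices)):
        lo = max(0, i - window + 1)
        out.append(sum(prices[lo:i + 1]) / (i + 1 - lo))
    return out

def circuit_breaker(prices, max_drop):
    """Circuit breaker: recognize at most a `max_drop` proportional
    fall in the carried price per period, whatever spot does."""
    out = [float(prices[0])]
    for p in prices[1:]:
        out.append(max(float(p), out[-1] * (1 - max_drop)))
    return out

spot = [100, 80, 60, 90, 100]                # a bust-boom price path
print(smoothed(spot, window=2))              # trailing two-period averages
print(circuit_breaker(spot, max_drop=0.10))  # fall capped at 10% per period
```

In the trough of the path the adjusted series (90 and 81 against a spot of 60) sit well above the spot price, which is precisely the loss of information content, and the artificially elevated equity, described above.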
25 See Enria et al. (2004), who examine the impact of several one-off shocks on the balance sheet of a representative European bank under alternative accounting frameworks.
Modeling FVA through the business cycle using simulations
Using model simulations, this section assesses the effects that
changes in financial instruments’ fair value have on the balance
sheet of three types of large, internationally active financial institu-
tions — U.S. commercial banks, U.S. investment banks, and European
banks — as well as more retail-oriented U.S. and E.U. banks (Table 1).
The balance sheets of a sample of representative institutions were
taken as of end-2006 to construct prototypical institutions. The
simulations illustrate the impact of changes in valuations and, ulti-
mately, on these representative banks’ equity capital. The section
also explores possible alternatives related to FVA and its current
application — full fair value, smoothing techniques, circuit breakers,
and reclassifications — that aim to reduce its volatility on balance
sheets (Box 3.4).
The first simulation serves as the baseline for subsequent scenarios
and consists of tracking the evolution of the banks’ balance sheets
throughout a normal business cycle. Four scenarios are applied to
the normal cycle with the goal of gauging the degree to which fair
valuations amplify fluctuations in balance sheet components and,
more notably, in accounting capital25. The sources of increased
cyclicality are (i) a bust-boom cycle in equity valuations; (ii) a bust-
boom cycle in the housing market; (iii) a widening and then contrac-
tion of banks’ funding spreads; and (iv) a bust-boom cycle in debt
securities’ valuations, all of which are calibrated using the most
current cyclical movements (Table 2). As noted by Fitch (2008a
and 2008b) among others, the sensitivities of FV measurements
to changes in significant assumptions are particularly important
when valuations are model-based and/or markets become highly
illiquid. Specifically, the method by which an institution chooses
to value components of its balance sheet constitutes one of the
three main transmission channels through which FVA introduces
volatility into the balance sheet [Barth (2004)]. The simulations
help underscore this point and provide a sense of the magnitude of
these effects. In addition, the simulations illustrate how a sudden
tightening in banks’ funding conditions, or changes in the liquidity
conditions in securities markets, exacerbate cyclical fluctuations in
balance sheets.
It is worth noting that from a cash flow perspective, the changes
in assumptions underlying valuations (such as those made in the
simulations below) may not necessarily be of future consequence
to the reporting institution, as those gains and losses have not been
realized and may never be. In this sense, the ensuing changes in
regulatory capital produced by the updated valuations are some-
what artificial. With these considerations in mind, the simulation
results should be interpreted as a simple exercise to gauge how
changes in the underlying valuation parameters in the presence of
FVA may lead to substantial fluctuations in banks’ equity.
Data and modeling assumptions
This section presents the construction of the simulation exercises
and reviews the assumptions underlying the various scenarios.
Banks’ balance sheets
To accurately reflect the balance sheets of a representative
large U.S. commercial bank, a large U.S. investment bank, a large
European bank, and retail-oriented U.S. and European banks, the
financial statements at end-2006 for these five banking groups
U.S. commercial bank | U.S. investment bank | European bank | U.S. retail-oriented bank | European retail-oriented bank

Financial assets
Securities
Debt securities 21.82 27.85 15.71 14.96 17.72
Trading book FV1 21.82 27.85 14.98 5.09 16.59
Banking book2 — — 0.73 9.87 1.13
Shares 6.73 7.50 6.55 0.64 2.96
Trading book FV1 6.73 7.50 6.32 0.47 2.96
Banking book2 — — 0.23 0.17 —
Derivatives (trading) 2.67 5.28 14.71 1.19 4.44
Interest rate swaps 1.48 1.87 7.76 ... ...
Other derivatives 1.20 3.41 6.96 ... ...
Loans
Corporate/Consumer 10.11 5.63 23.77 23.00 25.84
Short-term (fixed rate) < 1 year FV1 4.72 2.82 11.88 6.84 12.92
Medium-term (> 1 year, < 5 years) 3.66 2.82 3.57 10.97 3.88
Fixed rate FV1 0.72 1.41 1.78 1.71 1.94
Variable rate FV1 2.94 1.41 1.78 9.26 1.94
Long-term (> 5 years) 1.73 n.a. 8.32 5.19 9.04
Fixed rate FV1 0.46 n.a. 4.16 2.03 4.52
Variable rate FV1 1.27 n.a. 4.16 3.16 4.52
Mortgages 16.51 n.a. 6.54 37.44 26.43
Fixed rate FV1 12.83 n.a. 1.40 29.09 10.78
Variable rate FV1 3.68 n.a. 5.14 8.35 15.65
Other assets 28.60 43.27 20.93 17.34 5.41
Financial liabilities
Debt securities/equity (trading) FV1 4.68 8.68 12.77 0.01 12.71
Derivatives (trading) 3.20 5.49 15.34 0.96 3.47
Interest rate swaps 2.09 1.73 7.84 ... ...
Other derivatives 1.10 3.76 7.49 ... ...
Short-term and long-term financial liabilities/Bonds
FV1 18.25 27.21 10.35 19.56 18.97
Other liabilities 65.26 51.52 56.23 69.72 61.16
of which: deposits and interbank borrowing
42.44 3.72 24.88 60.12 56.72
Net equity3 7.65 3.71 2.86 9.75 4.36
Table 1 – Balance sheet of representative U.S. and European financial institutions (in
percent of total assets, December 31, 2006)
Sources: Annual Reports; and SEC’s 10-K filings.
Note: Columns may not add to 100 percent as some balance sheet items are not
displayed in the table.
1 Valued at fair value.
2 Annual statements showed negligible or zero holdings for the sampled U.S. banks.
3 Net equity in percent of total (non-risk weighted) assets.
26 The filing period was chosen to be December 2006 in order to obtain balance sheets that are relatively recent, while at the same time not reflecting too closely banks’ balance sheet structures in the run-up to, or fall-out from, the 2007-08 U.S. sub-prime meltdown.
27 For simulation purposes, all banks were assumed to be newly established, so that all balance sheet items are at FV at the start of the simulations. Thus, the shocks applied to the baseline reflect only the pure impact of the shocks, and not a combination of the imposed shock plus any initial deviations from fair value.
28 IAS 39 prevents the valuation of demand deposits at a value less than face value, even if a significant portion of these display economic characteristics of a term deposit. Consequently, deposits remain at face value in the exercise.
29 Despite being a central element in the 2007-08 turmoil, an explicit breakdown of credit derivative exposures was unavailable in the 2006 reports. Some mortgage-backed securities were included in the debt securities category.
30 Strictly speaking, PDt is the conditional probability of default at time t, that is, the probability that, conditional on not having defaulted before, a loan defaults in period t.
31 It should be noted that the fifth Quantitative Impact Study (QIS-5) estimated the PD for a group of G-10 (ex-U.S.) banks’ retail mortgage portfolios at 1.17 percent, very close to the estimate of 1.18 percent for the trend period used here.
32 Although this may be a less realistic assumption than allowing LGDs to evolve through the cycle, the qualitative results of the simulations would not be altered.
were compiled from the institutions’ annual reports and the U.S.
Securities and Exchange Commission’s 10-K filings26. Individual
bank balance sheets were then used to construct a weighted average
for each type of institution, yielding the representative balance
sheets shown in Table 1. Table 1 indicates the line items that were
fair valued in the simulations27, 28. Not all the items in the balance
sheet were fair valued: items that are typically
not available for sale (i.e., securities in the banking book) and items
that fall under the “other” categories were held constant29.
Valuation of assets and liabilities under fair value
Loans and debt securities are valued at their expected net present
value (NPV), which takes into account the probability of default and
the loss given default of each instrument. In other words, the value
of a given security (or loan) with a maturity of T years is given by
the expression
NPV = Σ_{t=1}^{T} E(CFt) / (1 + δt)^t
where δt is the discount rate for year t, and E(CFt) is the expected
cash-flow for year t factoring in the possibility that the security (or
loan) defaults:
E(CFt) = [PDt · (1 + rt) · N · (1 – LGDt)] + [(1 – PDt) · rt · N] for all t<T,
and
E(CFT) = [PDT · (1 + rT) · N · (1 – LGDT)] + [(1 – PDT) · (1 + rT) · N]
where PDt stands for probability of default30, rt is the interest rate
on the loan, N is the notional amount of the loan, and LGDt is the
loss-given-default.
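Transcribed directly, the valuation formula reads as follows (a minimal sketch in which PD, LGD, the coupon rate, and the discount rate are held constant across periods; the function names and the sanity-check parameters are ours, not the paper’s):

```python
def expected_cashflow(t, T, pd, r, N, lgd):
    """E(CF_t) per the paper's two-case formula: interim years pay the
    coupon r*N if no default; the final year also repays principal."""
    default_leg = pd * (1 + r) * N * (1 - lgd)
    if t < T:
        return default_leg + (1 - pd) * r * N
    return default_leg + (1 - pd) * (1 + r) * N

def npv(T, pd, r, N, lgd, delta):
    """Expected NPV: discounted sum of the expected cash flows."""
    return sum(expected_cashflow(t, T, pd, r, N, lgd) / (1 + delta) ** t
               for t in range(1, T + 1))

# Sanity check: with PD = 0 and the discount rate equal to the coupon
# rate, a bullet loan prices at par.
print(round(npv(T=5, pd=0.0, r=0.05, N=100.0, lgd=0.462, delta=0.05), 6))  # → 100.0
```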
Under FV, traded shares are valued at their market price. Since
the detailed composition of the shares portfolio of banks was
not available, it was assumed that banks hold a generic type of
share which represents the Standard & Poor’s 500 stock market
index. Therefore, the number of shares for each type of bank was
obtained by dividing the value of their shares portfolio at end-2006
by the value of the S&P 500 index at the same date.
Characterization of the business cycles
To simplify the analysis, the paper considers a stylized business
cycle consisting of four periods representing different points in a
typical business cycle: trend, trough, peak, and back to trend. Each
point in the business cycle is characterized by a different prob-
ability of default (PD) on securities and loans. To construct the
normal business cycle, the PDs on loans and debt securities were
assumed to change with the pulse of the cycle, increasing during
economic downturns and decreasing during upswings. To isolate
the effect of the evolving PDs on valuations, the baseline simula-
tion abstracts from changes in interest rates during the cycle and
initially assumes a flat yield curve.
In principle, different classes of securities and loans may have dif-
ferent PDs and evolve differently throughout the cycle. For simplic-
ity, however, this paper assumes that all securities and loans have
the same PD and display the same cyclical behavior, except for the
scenario of the bust-boom cycle in real estate, where a different
PD for mortgages is assumed. In addition, loans are assumed to be
bullet instruments, whose principal is repaid in full upon maturity.
The specific values for these PDs were derived from Nickell et
al. (2000), who investigate the dependence of securities rating
transition probabilities on the state of the economy [Pederzoli and
Torricelli (2005), Bangia et al. (2002), and Altman et al. (2005)].
The probabilities of default at different stages of the business cycle
were computed using their estimated transition matrices at differ-
ent points in the cycle (Table 2)31.
To compute the net present value (NPV) of loans and securities,
it is also necessary to have a measure of losses in the event of
default. Thus, loss-given-default (LGD) rates were taken from the
BIS’s Fifth quantitative impact study QIS-5 [BIS (2006a)], and equal
20.3 percent for mortgage loans and 46.2 percent for corporate
loans. To isolate the effect of the evolving PDs, the LGD rates were
held constant through the cycle (except in the bust-boom cycle
in the housing market and in the downward price spiral for debt
securities)32.
Business cycle trend points | Business cycle trough points | Business cycle peak points

Normal cycle
PD for all loans and securities 1.18 1.40 0.73
LGD for mortgages 20.30 20.30 20.30
LGD for loans1 and securities 46.20 46.20 46.20
Stock market index 100.00 100.00 100.00
Stock market cycle
PD for all loans and securities 1.18 1.40 0.73
LGD for mortgages 20.30 20.30 20.30
LGD for loans1 and securities 46.20 46.20 46.20
Stock market index 100.00 80.00 120.00
Real estate market cycle
PD for mortgages 1.18 5.29 0.73
PD for loans1 and securities 1.18 1.40 0.73
LGD for mortgages 20.30 30.50 20.30
LGD for loans1 and securities 46.20 46.20 46.20
Stock market index 100.00 100.00 100.00
Note: PD = probability of default; LGD = loss given default.
1 Loans excluding mortgages.
Table 2 – Parameter values for each simulation (in percent)
Sources: IMF staff estimates; and Nickell et al. (2000).
33 The initial price of the representative stock held by banks was normalized to the value of the S&P 500 index at end-2006, which closed at 1418 on December 29, 2006.
34 To estimate the PDs during the 2007-08 U.S. housing crisis, it was assumed that 100 percent of foreclosures and 70 percent of delinquencies beyond 90 days end up in default. These percentages are then combined with the respective PDs to yield an overall estimated PD of 5.29 percent for all mortgages. See UBS (2007); data source: Merrill Lynch, April 2008.
35 The rationale behind this characterization of distressed markets follows Altman et al. (2005) in that during times of distress, the demand for securities declines, reducing both the market price and the recovery rate (i.e., the inverse of LGD) of securities. See Acharya et al. (2007), Altman et al. (2005), and Bruche and González-Aguado (2008) for papers discussing the link between distressed markets and increases in LGD rates.
36 Derived from Bruche and González-Aguado (2008).
37 The results are presented in terms of the evolution of banks’ normalized equity through the cycle; that is, at each point in the cycle, banks’ equity is divided by its initial level (i.e., at end-2006). All figures for this section are presented at the end.
38 Note, however, that this result reflects only one element of countercyclical forces, as “other liabilities” represents about 50 percent of the balance sheet and can potentially introduce additional countercyclicality.
39 Chapter 4 of IMF (2008a) examines the procyclicality of leverage ratios of U.S. investment banks, finding that they vary widely across the cycle. Note this is consistent with the scenario conducted later in this paper where funding spreads vary through the cycle, producing the same procyclicality found in IMF (2008a).
40 See Guerrera and White (2008). Additionally, Barth et al. (2008) suggest that these counterintuitive effects are attributable primarily to incomplete recognition of contemporaneous changes in asset values.
Characterization of the economic shocks
The first scenario considered is a bust-boom cycle in stock market
valuations where, concurrent with a normal cycle, share prices ini-
tially plummet by 20 percent during the downturn of the economic
cycle and then surge to a level that is 20 percent above the original
level, to ultimately return to their trend value (Table 3)33.
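The mechanics of this scenario can be checked on the back of an envelope: with shares carried at fair value, a 20 percent move in the index flows one-for-one into equity. The weights below are the fair-valued trading-book shares and net equity from Table 1 (in percent of total assets); the calculation deliberately holds every other valuation fixed.

```python
# Equity-to-assets through the share-price bust-boom, with all
# other balance sheet items held constant (Table 1 weights).
banks = {
    "U.S. commercial bank": {"fv_shares": 6.73, "equity": 7.65},
    "U.S. investment bank": {"fv_shares": 7.50, "equity": 3.71},
    "European bank":        {"fv_shares": 6.32, "equity": 2.86},
}
for name, b in banks.items():
    trough = b["equity"] - 0.20 * b["fv_shares"]   # index falls 20 percent
    peak   = b["equity"] + 0.20 * b["fv_shares"]   # index 20 percent above trend
    print(f"{name}: trough {trough:.1f}, peak {peak:.1f}")
```

The resulting ratios land within a tenth or two of the bust-boom rows of Table 3 (e.g., 1.6 percent for the European bank at the trough); the small residual is the normal-cycle movement in PDs, which this sketch omits.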
The second scenario is a bust-boom cycle in the housing market, in
which mortgage default rates and LGD rates dramatically increase
during the downturn, and then rebound during the recovery. In
this scenario, PDs of mortgage loans increase to 5.29 percent in
the trough of the cycle — a magnitude which is commensurate with
the recent meltdown in the U.S. housing market34. Additionally, the
reduction in house values — and thus the expected decline in recov-
eries — was factored in through a 50 percent increase in the LGD
rate over the average values reported in the QIS-5 (i.e., from 20.3
percent to 30.5 percent).
To simulate the cycle in funding conditions, the paper assumes that
during the business cycle trough, banks’ cost of funding increases by
58.7 basis points. This increase in spreads was obtained by comput-
ing the average rise in Libor-OIS spreads for U.S. and European banks
during the summer of 2007. Conversely, to analyze the effects of
ample liquidity conditions, the simulation assumes that banks’ fund-
ing costs decrease by the same amount during the cycle peak.
To construct the scenario of distressed securities markets and then
recovery, it was assumed that the LGD rates for debt securities
sharply increase during troughs and decrease by the same amount
during peaks35. During the cycle trough, the LGD rate for debt
securities increases to 67.3 percent36 from its initial base of 46.2
percent. Subsequently, the simulation applies the same shock mag-
nitude (but reversed sign) to the LGD during the cycle peak; that is,
LGD decreases to 25.1 percent.
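The direction of this shock can be confirmed with the paper’s NPV expression (a sketch; the five-year maturity, 5 percent coupon, and flat 5 percent discount rate are illustrative assumptions of ours, while the PD and LGD values come from Table 2):

```python
def npv(T, pd, r, N, lgd, delta):
    """Expected NPV of a bullet debt security under the paper's
    two-case expected cash-flow formula."""
    def cf(t):
        default_leg = pd * (1 + r) * N * (1 - lgd)
        survive_leg = (1 - pd) * ((1 + r) * N if t == T else r * N)
        return default_leg + survive_leg
    return sum(cf(t) / (1 + delta) ** t for t in range(1, T + 1))

base     = npv(T=5, pd=0.0140, r=0.05, N=100.0, lgd=0.462, delta=0.05)
stressed = npv(T=5, pd=0.0140, r=0.05, N=100.0, lgd=0.673, delta=0.05)
# The LGD shock (46.2% -> 67.3%) unambiguously lowers the carried
# value, since every default-state cash flow scales with (1 - LGD).
```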
Simulation results
The simulations highlight three key points regarding FVA and its
potential regulatory and financial stability implications: (i) strong
capital buffers are crucial to withstand business cycle fluctuations
in balance sheet components, especially when FV is applied more
extensively to assets than liabilities; (ii) fair valuing an expanded set
of liabilities acts to dampen the overall procyclicality of the balance
sheet; and (iii) when combined with additional liquidity shortages in
financial markets, the FVA framework magnifies the cyclical volatil-
ity of capital.
The effects of economic shocks under full fair value
In the normal cycle, fair valuing both sides of the balance sheet
produces fluctuations that are mild compared to the bust-boom
scenarios below (Figure 2), an intuitive result37. However, it is worth
noting that, in the case of the representative U.S. investment bank,
equity behaves in a countercyclical manner due to the strong effect
of fair valuing the liabilities. Under full FV (FFV), the value of the
bank’s liabilities declines as economic activity weakens and prob-
abilities of default (PDs) rise, mitigating the decline in equity. This
effect arises because of the asset/liability structure of the invest-
ment banks’ balance sheet, which consists of a large proportion of
financial liabilities that are fair valued. Liabilities at FFV, as is done
by some U.S. investment banks, can introduce an element of coun-
tercyclicality by serving as an implicit counter-balancing hedge to
the fair valuation of assets38, 39. This phenomenon has raised
concerns among some market observers, who regard with unease a
bank’s ability to record revaluation gains as its own creditworthiness
weakens and the price of its own debt declines40. The presence
of gains that are a construct of the particular valuation technique
chosen signals the need for clear disclosure of underlying
assumptions to avoid misrepresentation of financial statements.
In both the bust-boom cycles in equity valuations and in the housing
market, the European banks exhibit the largest deviations from trend.
For the equity price shock, despite roughly comparable magnitudes
U.S. commercial banks | Baseline (trend) | Period 1 (trough) | Period 2 (trend) | Period 3 (peak) | Period 4 (trend)
Normal cycle 7.6 7.5 7.6 7.9 7.6
Bust-boom cycle in share prices 7.6 6.3 7.3 9.1 7.6
Bust-boom cycle in real estate 7.6 5.4 7.6 7.9 7.6
U.S. investment banks | Baseline (trend) | Period 1 (trough) | Period 2 (trend) | Period 3 (peak) | Period 4 (trend)
Normal cycle 3.7 3.8 3.7 3.6 3.7
Bust-boom cycle in share prices 3.7 2.3 3.4 5.0 3.7
Bust-boom cycle in real estate 3.7 3.8 3.7 3.6 3.7
European banks | Baseline (trend) | Period 1 (trough) | Period 2 (trend) | Period 3 (peak) | Period 4 (trend)
Normal cycle 2.9 2.8 2.9 3.0 2.9
Bust-boom cycle in share prices 2.9 1.6 2.6 4.2 2.9
Bust-boom cycle in real estate 2.9 1.9 2.9 3.0 2.9
Table 3 – Equity to assets ratio through the business cycle (in percent)
Source: IMF staff estimates.
41 Some portion of the lower equity position in European banks may stem from differences in IFRS versus U.S. GAAP accounting treatments [Citigroup (2008), Financial Times (2008)].
42 Note, however, that retail-oriented European banks also have a larger fraction of debt securities and financial liabilities than the larger European banks.
43 In effect, valuing these instruments at amortized cost would produce results comparable to their being classified as HTM.
of equity shares across the three banks’ portfolios, a combination
of two effects is at work. First, there is the countercyclical effect
of the relatively greater proportion of FV liabilities for investment
banks. Second, the European bank has a lower capital base and thus
the relative size of valuation changes to normalized equity capital is
larger. In the housing market scenario, the European bank exhibits
wider fluctuations, despite the fact that the U.S. commercial bank
holds a much larger fraction — about two-and-a-half times greater — of
its loan portfolio in mortgages. In both scenarios, the lower capital
base of the European bank vis-à-vis the U.S. commercial bank is a
key element. Similar results in terms of capital-to-assets ratios are
presented in Table 3, but reflect a less dramatic impact on European
banks41. More generally, a bank’s balance sheet would evolve through
the cycle — contracting in downturns and expanding in upturns — in a
way that restores its capital adequacy ratio, a result that is
not testable in this simple framework.
Recent events have raised two interesting scenarios: increased
funding costs and a downward spiral in the valuation of debt
securities. Sudden changes in banks’ ability to obtain funding
greatly exacerbate the fluctuations in balance sheets (Figure 3).
This exercise underscores the significance of general liquidity
conditions in driving balance sheet fluctuations, and how the FVA
framework recognizes these changes promptly. Interestingly, the
countercyclical behavior observed in the U.S. investment banks’
equity disappears. In fact, the U.S. investment bank is hardest hit by
both the tightening of funding conditions and the distress in
securities markets. This should not be surprising given that, unlike
the U.S. commercial and European banks, the U.S. investment
bank does not rely on deposits — which are not fair valued — to
fund its activities. Note, too, that these simulations do not account
for structured credit products or the OBSEs that were so central
to much of the 2007–08 turmoil and would likely increase the
procyclicality of the balance sheets. Such a deterioration of banks’
balance sheets could affect market confidence and overall share
prices, which in turn could generate additional volatility in banks’
balance sheets.
The results presented thus far have focused on the balance sheets
of large internationally active institutions. Comparatively, the
more retail-oriented banks tend to have larger loan and mortgage
portfolios and rely more extensively on deposits for their funding42.
To illustrate the effects of these two structural characteristics,
simulations comprising the cycle in funding spreads and the bust-
boom cycle in real estate were conducted for all banks, excluding
the representative U.S. investment bank. The results corroborate
the supposition that the more retail-oriented institutions are less
vulnerable to changes in funding conditions than their internation-
ally active counterparts (Figure 4). Conversely, the retail-oriented
banks are harder hit by a bust in the housing market than the inter-
nationally active banks.
The effects of mixed-attributes models
Using two versions of the mixed-attributes model, this exercise
shows how the degree to which financial institutions apply FV to
their assets and liabilities affects the extent to which there can be
offsetting volatility effects. Table 4 shows that financial institutions
apply FV to differing degrees. What the table does not show is that
the vast majority of banks continue to use amortized cost to value
their loan portfolios. Thus, for the purposes of the
simulations, two variations of the model are considered: (i) “finan-
cial liabilities and bonds” are valued at amortized cost throughout
the cycle; and then (ii) in addition, “loans” and “mortgages” are also
valued at amortized cost43.
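The asymmetry at issue can be illustrated with stylized numbers (all balance-sheet weights and the shock size below are hypothetical, chosen only to make the arithmetic visible): mark both fair-valued sides down by the same proportion and compare the equity impact under the two treatments.

```python
# Percent of total assets; illustrative values only.
assets_fv = 60.0        # fair-valued assets
liabilities_fv = 30.0   # liabilities eligible for fair valuation
equity = 8.0
shock = 0.05            # downturn marks both sides down 5 percent

d_assets = -shock * assets_fv
d_liabilities = -shock * liabilities_fv  # a liability write-down raises equity

equity_ffv = equity + d_assets - d_liabilities  # both sides at FV
equity_mixed = equity + d_assets                # liabilities at amortized cost

print(equity_ffv, equity_mixed)  # → 6.5 5.0
```

Equity falls further under the mixed-attributes treatment because the liability-side revaluation that would partially offset the asset markdown is never recognized.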
Figure 5 underscores the idea that the asymmetric application of a mixed-attributes model, where FV is applied more extensively to value assets than liabilities, has the effect of increasing the procyclical behavior of the balance sheet. In other words, the fluctuations in equity — for all types of institutions and for all the scenarios considered — are larger when a smaller fraction of liabilities is fair valued (compare with Figure 2, the results under FFV). Thus, the benefits intended by the introduction of the FVO, which were to reduce the accounting volatility of the mixed-attributes methods and the need for FV hedge accounting techniques, are lessened. This supports an expanded application of FV, rather than a reduced application, as some would propose. Bear in mind, however, that the application of FV to banks’ own debt may produce revaluation gains as the value of liabilities declines on their balance sheet, and that this should be properly disclosed.

Financial institutions      Assets at FV on a    Liabilities at FV on    Return on
                            recurring basis      a recurring basis       equity
JPMorgan Chase & Co.        41                   16                      12.86
Citigroup                   39                   22                      3.08
Bank of America             27                   6                       10.77
Goldman Sachs               64                   43                      31.52
Lehman Brothers             42                   22                      20.89
Merrill Lynch               44                   33                      -25.37
Morgan Stanley              44                   27                      9.75
Credit Suisse               64                   39                      17.88
Societe Generale            46                   32                      3.36
Royal Bank of Scotland      45                   31                      15.13
BNP Paribas                 65                   55                      16.98
Deutsche Bank               75                   48                      18.55
UBS                         54                   35                      -10.28
HSBC                        40                   25                      16.18
Barclays                    52                   39                      20.50
Credit Agricole             44                   24                      10.67

Table 4 – Application of fair value by U.S. and European banks, 2007 (in percent of total balance sheet; return on equity in percent)
Sources: Fitch; and Bloomberg L.P.

financial stability, fair value accounting, and procyclicality

44 Moving to an expected loss model of provisioning could decrease volatility.
45 Although this simulation is subject to the Lucas critique, in that bank behavior is assumed not to change in response to policy adjustments, it provides some insights into the interaction between FVA and interest rates.
46 Interestingly, the addition of changes in the yield curve counteracts the effect of the evolution of PDs. The drop in the yield curve in the downturn results in higher valuations and thus counterbalances the downward effect of the PDs, while the positive effect on valuations stemming from lower PDs is counterbalanced by a higher yield curve in the upturn.
47 This simulation abstracts from the effect of revaluing interest rate swaps. Unfortunately, it was not possible to obtain a sufficiently complete and consistent dataset on these instruments to include them in the simulation. Nevertheless, preliminary results using available data on interest rate swaps showed similar qualitative results.
This simulation highlights that the greater the imbalance of the mixed-attributes application to assets and liabilities, the greater is the accounting volatility. When financial instruments are valued at a historical cost that does not represent current market conditions, the picture of a bank’s equity becomes blurred and the informational content of the accounting statements weakens. Historical costs have low information content for investors who rely on current financial figures as a basis for investment decisions. For a regulator, making an accurate assessment of the health of a bank, and formulating the appropriate regulatory response, becomes increasingly difficult44.
The second simulation (not shown), in which financial liabilities plus loans and mortgages are all valued at amortized cost, showed that the range of fluctuations diminished further than in the simulation above. Thus, although the wider application of the mixed-attributes model can reduce fluctuations in the balance sheet, the cost comes in the form of a further reduction in up-to-date information.
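The mechanics behind these comparisons can be sketched with a toy balance sheet: revalue all assets each period, but only a fraction of liabilities, and compare the resulting equity swings. All figures below are illustrative assumptions, not the paper's dataset or calibration.

```python
# Toy mixed-attributes simulation: all assets are fair valued, but only a
# fraction of liabilities. Balance sheet sizes, the 30% FV share, and the
# shock path are illustrative assumptions.

assets0, liabilities0 = 100.0, 92.0
cycle = [0.00, -0.08, 0.00, 0.06, 0.00]   # trend, trough, trend, peak, trend

def equity_path(fv_share_liabilities):
    """Equity per period when only part of the liabilities is revalued."""
    path = []
    for shock in cycle:
        assets = assets0 * (1 + shock)
        # liabilities carried at amortized cost ignore the valuation shock
        liabilities = (liabilities0 * fv_share_liabilities * (1 + shock)
                       + liabilities0 * (1 - fv_share_liabilities))
        path.append(assets - liabilities)
    return path

ffv = equity_path(1.0)     # full fair value: both sides move together
mixed = equity_path(0.3)   # mixed attributes: 70% of liabilities at amortized cost

print(max(ffv) - min(ffv), max(mixed) - min(mixed))  # mixed swing is far larger
```

Because the unrevalued liabilities act as a fixed deduction, the whole asset-side shock flows into equity under the mixed model, reproducing the wider swings the simulations describe.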
smoothing techniques and circuit breakers on reporting prices
Simulations using proposed alternatives to smooth balance sheet volatility show that a smoothing/averaging technique for falling asset prices blurs the bank’s capital position, in magnitudes varying with the amount and period over which the averages are calculated. Smoothing techniques and other impediments to allowing valuations to adjust, so-called “circuit breakers,” make it harder for regulators and investors to accurately assess the financial position of a bank, as they hide the economic volatility that should be accounted for in the balance sheet.
To illustrate, two simulations were conducted, each averaging share prices over a different length. The first simulation uses a two-period average, whereas the second is extended to three periods. As shown in Figure 6, the longer the averaging length, not surprisingly, the smoother is the path of the balance sheet. Notably, the application of a smoothing technique might reduce the occasion for forced sales, as it could avoid sale triggers in some cases. Accordingly, this could lessen a downward price spiral in the market for a financial product by avoiding forced sales, but it comes at the expense of a reduction in the informational content of financial statements and potentially lengthens the resolution period.
Similarly, concepts such as a circuit breaker, whereby rules stem the recognition of a fall in asset prices, mask the underlying equity position by suspending equity at an artificially higher level than under FV and, more generally, may hamper price discovery. In this case, however, the cycle may be extended even longer than with a smoothing technique, because the circuit breaker can maintain the same value for a given period, whereas the smoothing is a rolling average that is updated during each period of the cycle. Additionally, this measure is asymmetrically applied, as the circuit breaker has generally been proposed for when valuations are falling. Even though it is not a preferred technique, for symmetry one could apply circuit breakers during bubble periods to stop the artificial inflation of equity. If not, the asymmetric treatment of valuations may create perverse risk-taking incentives for managers, as long as financial institutions are able to benefit from the upside in valuations while the downside remains capped.
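One stylized variant of such a one-sided circuit breaker, in which recognized declines are capped while rises pass through, can be sketched as follows; the 10 percent threshold and the price path are illustrative assumptions.

```python
# Stylized one-sided circuit breaker: recognized period-on-period declines
# are capped at max_drop, while increases pass through unchanged.

def circuit_breaker(prices, max_drop=0.10):
    """Reported path with recognized declines capped at max_drop per period."""
    reported = [prices[0]]
    for p in prices[1:]:
        floor = reported[-1] * (1 - max_drop)
        reported.append(max(p, floor))   # one-sided: only falls are capped
    return reported

spot = [100.0, 80.0, 60.0, 80.0, 100.0, 120.0]
reported = circuit_breaker(spot)
print(reported)   # the reported trough sits well above the true trough of 60
```

The asymmetry is visible in the output: the full upswing is recognized immediately, while the downswing is understated for several periods, which is the perverse-incentive pattern the text describes.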
The effects of a changing yield curve
Yield curve effects are introduced to the baseline scenario to evaluate how the change in interest rates over the cycle affects the balance sheet45. The paper follows Keen (1989) and assumes the following stylized facts regarding the cyclical behavior of yield curves [Piazzesi and Schneider (2006), Keen (1989)]: (i) both short- and long-term rates tend to decline during business cycle downturns and rise during expansions; and (ii) short rates tend to rise more relative to long rates during expansions (i.e., the yield curve flattens) and fall more relative to long rates during recessions (i.e., the yield curve steepens) (Figure 7)46.
The influence of interest rates tends to dominate the effect of the change in PDs, such that the interest rate effect dampens the magnitude of procyclical equity fluctuations for the European bank, and even becomes countercyclical for the U.S. commercial bank (Figure 8). For the U.S. investment bank, the change in interest rates renders the evolution of equity procyclical, rather than countercyclical as in the baseline simulation. This reversal in behavior is due to the fact that the U.S. investment bank has a slightly larger share of FV liabilities than assets being revalued when interest rates change47. This also highlights the European banks as an intermediate structure between the investment bank and retail bank characteristics. Regardless of the balance sheet structure, changes to interest rates and other monetary policy tools can dampen procyclical influences, suggesting that countercyclical monetary policy could have the beneficial outcome of also helping to counteract the effects of asset valuation cycles on banks’ equity. Note, however, that these simulations do not allow the financial institutions to respond to policy changes, and thus these results, while informative, should be taken with caution.
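The offset described in footnote 46 can be illustrated by repricing a stylized fixed-coupon bond under trend, trough, and peak curves of the shape Figure 7 describes. The rate levels below are illustrative assumptions, not the paper's calibration; the point is only that the lower, steeper trough curve lifts valuations while the higher, flatter peak curve depresses them.

```python
# Repricing a stylized 5-year, 5% coupon bond under three zero curves.

def price(cashflows, curve):
    """Present value of (year, amount) cashflows under per-year zero rates."""
    return sum(cf / (1 + curve[t]) ** t for t, cf in cashflows)

bond = [(t, 5.0) for t in range(1, 5)] + [(5, 105.0)]

trend  = {t: 0.050 for t in range(1, 6)}              # flat 5%
trough = {t: 0.035 + 0.003 * t for t in range(1, 6)}  # lower and steeper
peak   = {t: 0.062 - 0.001 * t for t in range(1, 6)}  # higher and flatter

print(price(bond, trough) > price(bond, trend) > price(bond, peak))  # True
```

Valuations therefore rise in the downturn and fall in the upturn, running against the PD effect, which is the dampening mechanism the simulation identifies.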
Although the weaknesses are related more to issues of OBSEs, consolidation, and 48
derecognition, than to FV.
conclusions and policy recommendations
The financial turmoil that started in July 2007 unveiled weaknesses in the application of some accounting standards48 and with the valuation and reporting of certain structured products. While these weaknesses may have contributed to the current crisis, they also provide an opportunity to understand them better.
The paper finds that, despite concerns about volatility and measurement difficulties, FVA is the appropriate direction forward and can provide a measure that best reflects a financial institution’s current financial condition, though various enhancements are needed to allow FVA to reinforce good risk management techniques and improved prudential rules. Nevertheless, the application of FVA makes more transparent the effects of economic volatility on balance sheets that, under certain risk management frameworks, could exacerbate cyclical movements in asset and liability values. Exaggerated profits in good times create the wrong incentives. Conversely, more uncertainty surrounding valuation in downturns may translate into overly tight credit conditions and negatively affect growth at a time when credit expansion is most needed.
This is not to say that alternative accounting frameworks, such as historical cost accounting, avoid such fluctuations, but rather that FVA recognizes them as they develop. Regardless, accounting frameworks are not meant to address the market-wide or systemic outcomes of their application, as they are applied only to individual institutions. Nevertheless, much of the controversy surrounding FV stems more from the risk management and investment decision rules that use FV outcomes than from the framework itself. Delinking FV estimates from specific covenants, such as sales triggers, margin calls, or additional collateral requirements during downturns, or from compensation tied to short-term profits during upturns, is an option that could mitigate the procyclical impact of FVA.
Overall, the simulations confirmed a number of issues in the ongoing FVA debate and underscored three key points regarding FVA and its potential regulatory and financial stability implications: (i) strong capital buffers and provisions make an important contribution to withstanding business cycle fluctuations in balance sheets, especially when FVA is applied more extensively to assets than liabilities; (ii) when combined with additional liquidity shortages in financial markets, the FVA framework magnifies the cyclical volatility of capital; and (iii) fair valuing an expanded set of liabilities acts to dampen the overall procyclicality of the balance sheet. However, the latter may also give rise to the counterintuitive outcome of producing gains when the valuation of liabilities worsens. This is of particular concern when a deterioration in a bank’s own creditworthiness, and the subsequent decline in the value of its own debt, results in profits and a false sense of improvement in the bank’s equity position.
Proposals for alternative accounting methods, such as historical cost or simplistic mechanisms to smooth valuation effects on bank balance sheets, reduce the transparency of a financial institution’s health by blurring the underlying capital position. While these techniques may avoid sale triggers incorporated in risk management covenants and limit downward price spirals, the measurement variance introduced by such techniques can increase uncertainties regarding valuations. The loss of transparency affects all users of financial statements: it becomes more difficult for supervisors to conduct adequate oversight of financial institutions and recommend appropriate regulatory measures to deal with prudential concerns, and investors will demand increased risk premia in the face of uncertainty.
policy proposals
Most proposals should aim to address the use of FV estimates so as to lessen the volatility that FVA can introduce to the balance sheet. Assessments of provisioning and capital adequacy should take better account of the business cycle. Improved transparency can be achieved not necessarily by more disclosures, but by better disclosures. Financial, accounting, and regulatory bodies are already providing guidance and recommendations in this direction.
■ The simulations support the relevance of establishing a capital buffer that looks through the cycle, augmenting the capital position during boom cycles to withstand the burden on capital that stems from economic downturns. Although a partial analysis, the simulations show that FVA can introduce financial statement volatility and provide a first indication that buffers of around 24 percent of additional capital would help banks weather normal cyclical downturns, whereas higher buffers — on the order of 30–40 percent extra capital — would be needed to offset more severe shocks. Recognizing that these estimates do not reflect concurrent changes in risk-weighted assets, they nevertheless provide an initial estimate of the magnitude of the needed capital buffer, as well as the direction for further analysis. Note that these are not adjustments to FV calculations per se, but adjustments meant to help mitigate the impact on bank balance sheets. Consideration of other changes to the accounting framework, so that the FV calculations themselves obviate the need for these other adjustments, would be useful at this juncture.
■ Broadening the current narrow concept of provisions to incorporate additional methods of retaining income in upswings could provide a way of better offsetting balance sheets’ procyclical effects for not-fair-valued assets. It is generally agreed that provisions protect against expected losses and capital protects against unexpected losses. A build-up of provisions better linked to the expected volatility, higher risks, and potentially larger losses of an asset could better anticipate the potential negative effects on the balance sheet that would be reflected through the cycle, as long as the build-up does not provide a way of manipulating earnings. Coordination between accounting standard setters and supervisors would be needed to effect such changes.

49 Forward-looking provisioning denotes provisions based on the likelihood of default over the lifetime of the loan, reflecting any changes in the probability of default (after taking into account recovery rates). Dynamic (or statistical) provisioning can be considered an extension of forward-looking provisioning that relies on historical loss data for the provisioning calculations. Conceptually, dynamic provisioning entails that during the upside of the cycle, specific provisions are low and the statistical provision builds up, generating a fund; during the downturn, the growth in specific provisions can be met using the statistical fund instead of the profit and loss account [Enria et al. (2004) and Bank of Spain (www.bde.es)].
50 Basel Committee on Banking Supervision (2006b) and IAS 2001.
51 FASB’s XBRL project for financial institutions would provide data online in about three years, as discussed in the IMF April 2008 edition of the Global financial stability report [IMF (2008b)].
52 This would be separate from U.S. SEC 10-Q filings.
■ Similarly, the use of forward-looking provisioning49, combined with a supervisor’s experienced credit judgment in assessing the probability of default, loss given default, and loan loss provisioning50, could mitigate the procyclical forces on the balance sheet. The recognition of credit losses in the loan portfolio earlier in a downward cycle would lessen an accompanying decline in bank profits and the potential for a squeeze in credit extension that could contribute to a further downward economic trend. Similarly, on the upside, dividend distributions should only come from realized earnings that are not biased by upward cyclical moves.
■ From an oversight perspective, the simulations underscore the importance of understanding the cyclical implications of FVA. An enhanced role for prudential supervisors will be needed to ensure close inspection of a bank’s risk profile and risk management practices, and to make appropriate recommendations for augmented capital buffers and provisions, as needed. A comprehensive bank supervisory framework should include stress tests of FV positions through the business cycle. Similarly, auditors will have a critical role to play in ensuring credibility, consistency, and neutrality in the application of FVA, and overall in supporting market confidence rather than appearing to augment procyclicality by encouraging lower valuations during a downturn. A closer collaborative framework among audit and accounting standard setters and supervisors would be highly beneficial for markets and financial stability, ensuring greater consistency in assessing and interpreting financial statements.
■ In light of the different dynamics through the financial cycle and the doubts that can surround valuations, FV estimates should be supplemented by information on a financial instrument’s price history, the variance around the FV calculations, and management’s forward-looking view of asset price progression and how it will affect the institution’s balance sheet. Reporting a range within which the FV price could fall would help users of financial statements to better understand and utilize the volatilities with which they are dealing. FV estimates should be supplemented with detailed notes on the assumptions underlying the valuations and sensitivity analyses, so that investors can conduct their own scenario analyses and determine whether the FV price is representative of market conditions.
■ More refined disclosures could meet the expanding needs of various users, including investors, supervisors, and depositors, in a common framework of disclosure. For example, a series of shorter reports that would be available on websites51, issued more frequently (i.e., quarterly)52, and catering to a narrower group of users’ needs could highlight the most relevant information, with a particular emphasis on risk developments. Further, the volatility associated with an FV balance sheet may mean that the balance sheet is no longer the primary medium for evaluating bank capital. Market participants and supervisors may increasingly turn to cash flow statements, income and equity statements, and risk measures — all of which provide distinct financial information and must evolve in response to users’ needs.
■ Albeit of a simple structure and subject to isolated shock scenarios, the simulations point to the fact that the application of FV to both sides of the balance sheet would introduce a countercyclical component that may cushion some of the financial shocks that can result in large swings in bank equity. This result, however, arises in the shock scenarios in part from a deterioration in own-debt values as risk premia rise on the liability side of the balance sheet, which logically compensates for the deterioration of the asset side during a downturn. From the viewpoint of assessing the riskiness of the financial institution or its future prospects, the result can be viewed as paradoxical, as it can hardly be regarded as a positive factor for a financial institution to have its own debt values deteriorate. The simulations also illustrate how a bank’s response to a particular shock varies substantially depending on the specific balance sheet structure, and thus there is a need to discern the source of the cyclicality through additional disclosures.
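The dynamic provisioning mechanism described in footnote 49 can be sketched as follows: in the upswing the statistical fund builds toward a through-the-cycle loss rate, and in the downturn specific losses are charged against the fund rather than against profit and loss. The long-run loss rate and the loss path below are illustrative assumptions.

```python
# Dynamic (statistical) provisioning sketch, per footnote 49.

def pnl_charges(specific_losses, long_run_loss=1.0):
    """Per-period hit to profit and loss under dynamic provisioning."""
    fund, charges = 0.0, []
    for loss in specific_losses:
        statistical = long_run_loss - loss           # build (+) or draw (-)
        if statistical < 0:
            statistical = -min(-statistical, fund)   # can only draw what the fund holds
        fund += statistical
        charges.append(loss + statistical)           # smoothed charge to P&L
    return charges

# three benign periods build the fund; two loss-heavy periods then draw on it
print(pnl_charges([0.2, 0.2, 0.2, 3.0, 3.0]))
```

The charge to profit and loss stays at the long-run rate until the statistical fund is exhausted, after which the remaining losses hit earnings directly, illustrating both the smoothing benefit and its limit.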
A key challenge going forward will be to enrich the FVA framework so that market participants and supervisors are better informed, in order to promote market discipline and financial stability. The fragmented solution that currently exists between the accounting, prudential, and risk management approaches to valuation is insufficient, and the approaches must be reconciled. Importantly, this will require adjustments on the part of all three disciplines to resolve these tensions.
[Figure 2 – Simulation of full fair value: panels for the normal cycle, the bust-boom cycle in share prices, and the bust-boom cycle in real estate, showing equity indices for U.S. commercial banks, U.S. investment banks, and European banks across business cycle trend, trough, and peak.]
[Figure 3 – Simulation of full fair value: changes in funding conditions and financial market distress, with panels for the normal cycle, the cycle in funding spreads, and the cycle in debt securities’ LGDs for the U.S. commercial bank, U.S. investment bank, and European bank.]
[Figure 4 – Simulation of full fair value: international versus retail-oriented banks, with panels for the normal cycle, the cycle in funding spreads, and the bust-boom cycle in real estate.]
[Figure 5 – Simulation of partial fair value (includes short-term and long-term financial liabilities valued at amortized cost), with panels for the normal cycle, the bust-boom cycle in share prices, and the bust-boom cycle in real estate.]
[Figure 6 – Simulation of smoothing techniques: bust-boom cycle in share prices versus two-period average, three-period average, and circuit breaker paths for U.S. commercial banks, U.S. investment banks, and European banks.]
[Figure 7 – Yield curves and business cycles (in percent): trend, trough, and peak curves across maturities of 1 to 30 years.]
[Figure 8 – Simulation of full fair value with upward sloping yield curve: panels for the normal cycle, the bust-boom cycle in share prices, and the bust-boom cycle in real estate.]
References
• Acharya, V. V., S. T. Bharath, and A. Srinivasan, 2007, “Does industry-wide distress affect defaulted firms? Evidence from creditor recoveries,” Journal of Financial Economics, 85:3, 787–821
• Altman, E. I., B. Brady, A. Resti, and A. Sironi, 2005, “The link between default and recovery rates: theory, empirical evidence, and implications,” Journal of Business, 78:6, 2203–2227
• Bangia, A., F. Diebold, A. Kronimus, C. Schagen, and T. Schuermann, 2002, “Ratings migration and the business cycle, with application to credit portfolio stress testing,” Journal of Banking and Finance, 26
• Bank of Spain, “Dynamic provisioning in Spain: impacts of the new Spanish statistical provision in the second half of 2000,” available at www.bde.es
• Barth, M., 2004, “Fair values and financial statement volatility,” in Borio, C., W. C. Hunter, G. G. Kaufman, and K. Tsatsaronis (eds), The market discipline across countries and industries, MIT Press, Cambridge, MA
• Barth, M., L. D. Hodder, and S. R. Stubben, 2008, “Fair value accounting for liabilities and own credit risk,” The Accounting Review, 83:3
• Basel Committee on Banking Supervision, 2006a, Results of the fifth quantitative impact study (QIS 5), Bank for International Settlements, Basel
• Basel Committee on Banking Supervision, 2006b, Sound credit risk assessment and valuation for loans, Bank for International Settlements, Basel
• Borio, C., and K. Tsatsaronis, 2005, “Accounting, prudential regulations and financial stability: elements of a synthesis,” BIS Working Papers No. 180, Bank for International Settlements, Basel
• Bruche, M., and C. González-Aguado, 2008, “Recovery rates, default probabilities, and the credit cycle,” CEMFI Working Paper, Centro de Estudios Monetarios y Financieros, Madrid
• Calza, A., T. Monacelli, and L. Stracca, 2006, “Mortgage markets, collateral constraints, and monetary policy: do institutional factors matter?” Center for Financial Studies, University of Frankfurt, Working Paper No. 2007/10
• Center for Audit Quality, 2007, “Measurements of fair value in illiquid (or less liquid) markets,” White Papers, October
• Citigroup, 2008, “There’s a hole in my bucket: further deterioration in European banks’ capital ratios,” Industry focus, Citigroup Global Markets: Equity Research
• Enria, A., L. Cappiello, F. Dierick, S. Grittini, A. Haralambous, A. Maddaloni, P. A. M. Molitor, F. Pires, and P. Poloni, 2004, “Fair value accounting and financial stability,” ECB Occasional Paper Series No. 13, European Central Bank
• Financial Stability Forum, 2008, “Report of the Financial Stability Forum on enhancing market and institutional resilience,” April 7
• Financial Times, 2008, “Banks according to GAAP,” available at www.ft.com
• Fitch Ratings, 2008a, “Fair value accounting: is it helpful in illiquid markets?” Credit Policy Special Report, April 28
• Fitch Ratings, 2008b, “Fair value disclosures: a reality check,” Credit Policy Special Report, June 26
• Guerrera, F., and B. White, 2008, “Banks find way to cushion losses,” Financial Times, July 8
• Global Public Policy Committee, 2007, “Determining fair value of financial instruments under IFRS in current market conditions,” available at www.pwc.com
• International Accounting Standards Board, 2001, “Financial instruments: recognition and measurement,” IAS 39, available at www.iasb.org
• International Accounting Standards Board, 2007, “Disclosure of financial instruments,” IFRS 7, available at www.iasb.org
• International Monetary Fund (IMF), 2008a, “Financial stress and economic downturns,” Chapter 4, World Economic Outlook, World Economic and Financial Surveys, October
• International Monetary Fund (IMF), 2008b, Global financial stability report, World Economic and Financial Surveys, April
• Kashyap, A., 2005, “Financial system procyclicality,” Joint Conference of the IDB and Federal Reserve Bank of Atlanta’s Americas Center: Toward better banking in Latin America, Inter-American Development Bank, Washington, January 10
• Keen, H., 1989, “The yield curve as a predictor of business cycle turning points,” Business Economics, October, available at http://findarticles.com/p/articles/mi_m1094/is_n4_v24/ai_7987167
• Nickell, P., W. Perraudin, and S. Varotto, 2000, “Stability of rating transitions,” Journal of Banking and Finance, 24:1–2, 203–227
• Office of Federal Housing Enterprise Oversight, 2008, Data statistics
• Pederzoli, C., and C. Torricelli, 2005, “Capital requirements and business cycle regimes: forward-looking modeling of default probabilities,” Journal of Banking and Finance, 29:12, 3121–3140
• Piazzesi, M., and M. Schneider, 2006, “Equilibrium yield curves,” NBER Working Paper 12609
• Union Bank of Switzerland (UBS), 2007, “Subprime loss protection – a do-it-yourself kit,” Mortgage Strategist, June 26
Articles
can ARMs’ mortgage servicing portfolios be delta-hedged under gamma constraints?

Carlos E. Ortiz, Associate Professor, Department of Mathematics and Computer Science, Arcadia University
Charles A. Stone, Associate Professor, Department of Economics, Brooklyn College, City University of New York
Anne Zissu, Associate Professor and Chair, Department of Business, CityTech, City University of New York, and Research Fellow, Department of Financial Engineering, New York University, The Polytechnic Institute
Abstract
Ortiz et al. [2008, 2009] develop models for portfolios of mortgage servicing rights (MSR) to be delta-hedged against interest rate risk. Their models rely on the fundamental relationship between prepayment rates (cpr) and interest rates, represented as a sigmoid (S-shaped) function. Defaults that lead to foreclosures or loan modifications on mortgages will either truncate or extend the stream of servicing income generated by pools of adjustable rate mortgages. Ortiz et al.’s previous research focuses on mortgage servicing rights for fixed rate mortgages. In this paper we extend their research to MSR for adjustable rate mortgages (ARMs).
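The sigmoid cpr-rate relationship the abstract refers to can be sketched as a logistic function of the borrower's rate incentive. All parameter values below are illustrative assumptions, not Ortiz et al.'s calibration.

```python
# Logistic (S-shaped) cpr curve: prepayments rise with the rate incentive
# (contract rate minus current market rate, in percentage points).
import math

def cpr(incentive_pp, low=0.05, high=0.45, k=1.5, mid=1.0):
    """Conditional prepayment rate as a sigmoid of the rate incentive."""
    return low + (high - low) / (1 + math.exp(-k * (incentive_pp - mid)))

# out-of-the-money borrowers barely prepay; deep in-the-money pools prepay fast
print(round(cpr(-3.0), 3), round(cpr(4.0), 3))
```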
The market for ARMs and the current financial crisis are closely
related. The story goes like this: home values were increasing at
increasing rates between 1994 and 2007, making the public goal
of widespread home ownership more difficult to achieve. Not only
were house prices increasing, the rate of increase was going up for
most of this period as well (Figure 1).
As home prices increased home ownership became too costly for
large segments of the population, particularly people with sub-
prime credit scores. Subprime lending filled the gap and subprime
lenders bent over backwards to accommodate more and more bor-
rowers who wanted to get involved with the home buying frenzy.
Underwriting standards were lowered and speculative/ponzi loans
cloaked in the fabric of thirty-year amortizing loans were originated.
Rising home values increased the value of leverage. Leverage was
more affordable when the borrower assumed the interest rate risk
(adjustable rate mortgages). Competition among mortgage lenders
for origination and servicing fees created a supply of unstable mort-
gage loans that were effectively short term loans with an option
to extend or refinance. Rising home prices were expected to act
as cover for weak underwriting. Ponzi finance always depends on
rising asset values and underpriced credit. Both of these elements
became increasingly scarce at the end of 2006 and
the beginning of 2007. As refinancing for subprime borrowers
became less available, default became the only exit for a large
percentage of the subprime marketplace.
Adjustable rate mortgages encompass a wide range of contract
designs. The significant differences across ARM contracts are the
frequency of interest rate adjustment, the index used to determine
the interest rate, the margin over the index, the caps and floors
in the contract, and the required minimum interest payment. The
basic ARM product will use a money market index such as LIBOR or
the constant maturity treasury (CMT) as the benchmark. The inter-
est rate the mortgagor is required to pay will be set at a margin
above this index and will adjust periodically. The adjustment period
may be a year or longer. The amount of rate adjustment that can
take place within an adjustment period and over the length of the
loan may be capped and floored. Some ARMs, known as hybrid
mortgages, have a fixed rate period that may range from three
to ten years. After the fixed rate period ends, the adjustable
rate phase begins. The mortgage rate begins to float at a margin
above an index. Another product that became common during the
extreme acceleration of home prices in 2005 is the option ARM.
The distinguishing feature of this ARM product is the choice it
gives the borrower to defer interest payments. Deferred interest
payments create negative amortization of the loan as the amounts
deferred are added onto the outstanding mortgage loan principal.
Negative amortization increases the leverage the borrower is using
to finance the property. Negative amortization was a short-term
solution for borrowers who were trying to conserve cash in the
short run. Negative amortization creates an unstable situation
when house prices begin to decline and the thin equity cushion the
owner has is quickly eroded. This is what happened at the end of
the housing boom.
The underwriters underpriced the credit risk because they overes-
timated the future value of the mortgaged property. It appears that
the premise that was prevalent in the mortgage market between
2005 and 2007 was that rising home prices would erase all traces
of weak or questionable underwriting. Risky underwriting includes
basing loan amounts on the discounted rates as opposed to fully
indexed rates. This technique was used extensively to tease bor-
rowers into using more leverage than was prudent. The unsurpris-
ing result of offering below market rates is payment shock when
the discounted rate is adjusted to a fully indexed rate. Optimists
believed that payment shock could be dealt with by either refinanc-
ing an outstanding loan with a loan that was more affordable or
by sale of the house for more than the amount of the mortgage
principal. When both the mortgage and housing markets collapsed
in 2007, payment shock led to accelerating default rates.
Servicing income has become an increasingly important source
of income for financial institutions as securitization has become
the preferred method of financing mortgages. Securitization
enables financial institutions to originate mortgages in greater
volumes than they are able to finance. This is true at the firm
level and the industry level. Financial institutions (FI) have trans-
formed their balance sheets from spaces where long-term financing
takes place to refineries where the raw material of MBS and ABS
are distilled into primary components and then refinanced in the
broader capital markets all over the world. What stays behind on
the balance sheets of the FI is the servicing contract. The benefits
that accrue to FI from operating a financial refinery are the fees
associated with originating and servicing financial assets. In this
paper we are examining how servicers can construct delta hedges
to offset losses in value to ARM servicing portfolios from changes in
interest rates. Interest rate changes affect the rates of mortgagor
prepayment and default.
Figure 1 – Percentage change in Case-Shiller national home price index (Q1-1988 to Q1-2009)
Servicing income is derived from the outstanding mortgage princi-
pal that is being serviced. Servicing fees range from 25 to 38 basis
points of serviced principal. Servicing costs are not constant. The
servicer is responsible for the costs associated with delinquencies,
workouts, and foreclosures on top of the normal costs of operating
servicing centers. The value of servicing is higher when mortgagors
make payments on time. Delinquencies, defaults, and foreclosures
are costly. Workouts, while costly, are also a source of revenue
because while defaults truncate the stream of servicing income,
workouts extend the servicing income. The fact that mortgage
defaults were too many and too fast ("too many to fail") has
prompted the government, investors, and lenders to promote
workout loans rather than move to foreclose on properties. This
makes sense in a deflating property market. Not only does the
relationship of fixed costs to variable costs make the value of
servicing uncertain, the principal amount that is serviced at any
point in time is not known in advance because mortgagors can
prepay or default.
While both prepayments and defaults reduce the future stream
of servicing income, the impact of defaults and prepayments are
not equivalent. As already mentioned, a default is more costly to
service than a prepayment. If the prepayment is financed with a
mortgage that is serviced by the same servicer as the original
mortgage, then the servicer's income may be boosted by the
profits associated with originating the new loan. Full defaults simply
end the servicing income because the principal of the loan is settled
with the investors from the sale of the foreclosed property.
The common element that drives prepayment of ARMs is the cost
of available fixed rate financing at a point in time relative to the
expected cost of floating rate financing going forward. If an ARM is
designed to adjust upward or downward every six months and mort-
gage rates continue to increase after the mortgage is originated,
the cause for refinancing will not be to secure an immediate savings
but rather to lower the expected present value of future funding
costs. Fixed rates will always be above adjustable rates except for
the rare case of a very steep downward sloping yield curve. As
long as fixed rates are above adjustable rates, the exercise of the
ARM prepayment option would be prompted by a mortgagor trying
to protect resources from future increases in the ARM index. The
incentive to refinance an ARM is not one sided as it is for a fixed
rate mortgage. When fixed rates fall by enough from their levels
that existed when the ARM was originated, the mortgagor may
have the incentive to switch into the less risky fixed rate mortgage.
Hybrid ARMs that combine features of fixed rate and adjustable
rate instruments have become popular. Mortgages such as the 5/1
ARM offer fixed rates for a five-year period after which the rates
begin to adjust. There is evidence that at the cusp between the
fixed rate and adjustable rate periods, mortgagors act to avoid a
sharp upward revision in interest payments by attempting to
refinance out of the ARM into a more affordable loan if one is
available [Pennington-Cross and Ho (2006)].
The flow of subprime risk into the mortgage market increased dra-
matically in the years leading up to the crash in the housing market
and the collapse of the capital and money markets in 2008. For
example, MBSs backed by subprime mortgages accounted for 16.8%,
9.01%, and 0.13% of total mortgage originations (by principal) in
2006, 2007, and 2008, respectively. These numbers are important
because servicing income became more unstable as it was backed by mortgages
that were speculative in nature. If borrowers were unable to call
their mortgage by refinancing with a more affordable loan, then
they often exercised the other embedded option, the right to put
the mortgaged property to the lender. The current wave upon
wave of foreclosures was triggered by the collapse of the subprime
market. Going forward, what will have an important impact on ser-
vicing income derived from ARMs are the public policy and private
initiatives that are being executed to modify outstanding mortgage
loans, many of which are and will be subprime ARMs. “Currently,
about 21 percent of subprime ARMs are ninety days or more delin-
quent, and foreclosure rates are rising sharply.” (Chairman Ben S.
Bernanke at the Women in Housing and Finance and Exchequer
Club Joint Luncheon, Washington, D.C. January 10, 2008, Financial
Markets, the Economic Outlook, and Monetary Policy)
To arrive at a rough idea of how significant the market for mort-
gage servicing rights is, we assume there is a 25 bps servicing fee
charged against all outstanding first lien mortgage loans financing
1 to 4 family residences, whether securitized or not. The approxi-
mate servicing income generated in 2005, 2006, 2007, and Q1
2008 is provided in Figure 2. Hedging this income correctly at a
reasonable cost can protect the capital of banks and make cash
flows less volatile. Figure 4 illustrates the breakdown between fixed
rate mortgages and adjustable rate mortgages. As of May 2007,
subprime ARMs accounted for two-thirds of first lien subprime
mortgage market and 9% of all first lien mortgages. “After rising
at an annual rate of nearly 9 percent from 2000 through 2005,
house prices have decelerated, even falling in some markets. At the
same time, interest rates on both fixed- and adjustable-rate mort-
gage loans moved upward, reaching multi-year highs in mid-2006.
Some subprime borrowers with ARMs, who may have counted on
refinancing before their payments rose, may not have had enough
home equity to qualify for a new loan given the sluggishness in
                                                          2005      2006      2007   Q4-2008   Q2-2009
Outstanding mortgage principal (first lien),             9.385    10.451    11.140    11.042    10.912
U.S.$ trillions
Servicing income = 25 bp x outstanding principal,       23.462    26.128    27.852    27.606    27.281
U.S.$ billions

Figure 2 – Estimate of annual mortgage servicing income (source for outstanding mortgage principal: Freddie Mac)
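The estimate above can be reproduced with a short calculation; a minimal sketch follows, in which the dictionary of principal amounts simply restates the Freddie Mac figures quoted in Figure 2 and the variable names are ours.

```python
# Rough estimate of annual servicing income: a 25 bp fee applied to
# outstanding first-lien mortgage principal (amounts from Figure 2,
# in U.S. dollars).
SERVICING_FEE = 0.0025  # 25 basis points

principal = {
    "2005": 9.385e12,
    "2006": 10.451e12,
    "2007": 11.140e12,
    "Q4-2008": 11.042e12,
    "Q2-2009": 10.912e12,
}

# Servicing income = 25 bp x outstanding mortgage principal.
income = {period: SERVICING_FEE * p for period, p in principal.items()}

for period, v in income.items():
    print(f"{period}: U.S.$ {v / 1e9:.3f} billion")
```

Multiplying the 2005 principal of U.S.$9.385 trillion by 25 basis points recovers the U.S.$23.462 billion shown in Figure 2.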
house prices.” (Chairman Ben S. Bernanke at the Federal Reserve
Bank of Chicago’s 43rd Annual Conference on Bank Structure and
Competition, Chicago, Illinois May 17, 2007, The Subprime Mortgage
Market)
If we look at the CMT (constant maturity Treasury) rate or the
six month LIBOR we see that rates rise incrementally, placing
increasing pressure on the borrowers’ ability to repay as ARM
rates increase. At some point the mortgagor will have to decide if
further increases in rates are sustainable. Rising ARM indices will
prompt the mortgagor to search for a way out of the increasingly
costly contract. The three paths out are prepayment, modification,
or default. It must be noted that while conventional mortgages are
typically issued without prepayment penalties, subprime mortgages
often do have prepayment penalties attached. These penalties
make prepayment more costly.
Institutions such as Citibank, Wells Fargo, Bank of America, or
Countrywide rely on servicing income and take measures to hedge
this income. We are offering a technique for hedging the nega-
tive fallout to servicing income derived from ARM loans that face
increasing risks of prepayment and default as interest rates rise.
The subprime mortgage loan cohorts that had the worst perfor-
mance are 2005, 2006, and 2007. Subprime mortgages originated in
2007 experienced the most rapid foreclosure rates. The 2007 cohort
loans were originated as home prices began their steep decline
and refinancing became next to impossible (testimony before the
Joint Economic Committee, U.S. Congress, "Home mortgages:
recent performance of nonprime loans highlights the potential for
additional foreclosures," statement of William B. Shear, Director,
Financial Markets and Community Investment, July 28, 2009).
It is estimated that 21.9% of subprime and Alt-A loans set to reset in
the third quarter of 2009 are already 30+ days overdue. This makes
these loans likely candidates for default or prime candidates for
loan workouts. (Data report No. 1, February 2008, State Foreclosure
Prevention Working Group)
To stem the tide of foreclosures, the U.S. Government has initiated
the “Making Home Affordable Program” (HAMP). This program offers
mortgagors the chance to refinance into a more affordable mortgage
or to modify a mortgage that is delinquent or at risk of becoming
delinquent. Servicers are offered compensation to facilitate the mod-
ification of loans that qualify for the program. These fees along with
the savings that servicers will gain from avoiding foreclosure and the
increased longevity of the loan make HAMP interesting. The program
is being ramped up so that the rate of foreclosures can be stemmed,
which would help to stabilize the housing market. HAMP supplements
the efforts of lenders who are taking actions to save value by working
with mortgagors. Loan modifications extend the life of mortgages,
enhancing the value of servicing contracts. Rather than lose the
mortgage principal that is being serviced to a foreclosure, the idea
is to modify loans so that borrowers can stay current with the new
terms. (OCC and OTS Mortgage Metrics Report Disclosure of National
Bank and Federal Thrift Mortgage Loan Data Second Quarter 2009,
Office of the Comptroller of the Currency Office of Thrift Supervision
Washington, D.C. September 2009)
prepayment function for variable rate mortgages
Our model of ARM prepayments is based on a number of general
assumptions about prepayment behavior. First of all, the diversity
of ARM contracts means that it is not possible to make accurate
general statements about all ARMs. The ARM contracts that our
analysis is most applicable to are subprime ARMs that were issued
with initial index discounts, Option ARMs and Hybrid ARMs. All
of these mortgage contracts set up the potential for an extreme
increase in required payments once the teaser period is over or the
fixed rate period is over and the interest is fully indexed to current
money market rates. A glance at Figure 3 shows how rapidly money
market rates to which many ARMs are indexed (the six-month
LIBOR rate and the 1 year CMT) increased between 2004 and 2006.
This rapid increase in interest rates resulted in serious stress on
the ability of many households that had issued ARMs to continue
to make monthly payments. Since rate increases happen discretely
and index resets are not continuous, mortgagors have time to make
decisions that reduce their payment burdens. These options include
prepayment and default [Pennington-Cross and Ho (2006)]. This
increases the likelihood of a delinquency leading either to default
or to prepayment.
The following assumptions about mortgagor prepayment and ser-
vicing are integral to our model of delta-hedging servicing income.
■ The prepayment rate, cpr, for ARMs is a function of the market
interest rate y and default rates d. In our model, the cpr of ARMs
increases when interest rates go up, or are expected to increase,
because mortgagors have an incentive to lock in current rates
using fixed rate mortgages or to search for alternative ARMs with
lower rates. Mortgage defaults impact the cpr in the same way as
prepayments. As a result of this assumption, the cpr function is
no longer the S-shape found with fixed rate mortgages [Stone and
Zissu (2005)]. The cpr function for ARMs whose holders are prompted
to prepay or default in rising interest rate environments becomes
an inverted S-shape (the mirror of the prepayment function for
fixed rate mortgages). As interest rates decline relative to the
contract rate, borrowers will still have an incentive to refinance
when the obtainable fixed rate is less costly than the expected
adjustable rates in the future.
■ Default rates on ARMs are positively correlated with rising
interest rates and diminishing home values.
■ The magnitude and value of income derived from mortgage
servicing rights (MSR) depend on default rates and prepayment rates.
We delta-hedge a portfolio of mortgage servicing rights for ARMs
with other fixed income securities such that the value of the servic-
ing portfolio is not affected by increases or decreases in market
rates. In order to obtain the portfolio that requires the lowest cost
delta hedge, we compare hedge ratios dynamically. This paper
applies Ortiz et al.'s delta-hedge-ratio function, developed for a
portfolio of fixed rate mortgages, to a portfolio of servicing rights
derived from a pool of ARMs. We develop the gamma-hedge-ratio
function using three types of fixed-income securities, a coupon pay-
ing bond with n years to maturity (case 1), a zero coupon bond with
n years to maturity (case 2), and a bond that pays coupons only
twice in the life of the bond, at n/2 and then at n (case 3).
“The fair value of the MSRs is primarily affected by changes in
prepayments that result from shifts in mortgage interest rates. In
managing this risk, Citigroup hedges a significant portion of the value
of its MSRs through the use of interest rate derivative contracts,
forward purchase commitments of mortgage-backed securities, and
purchased securities classified as trading (primarily mortgage-backed
securities including principal-only strips).” (Citigroup’s 2008 annual
report on Form 10-K). We have selected cash purchases of bonds to
effect our delta hedge. Our research can be extended to incorporate
the instruments that banks like Citigroup employ to hedge MSR.
The option to prepay an ARM is not as straight-forward as it is
for a fixed rate mortgage. Mortgagors have financial incentives to
prepay ARMs when interest rates are either rising or falling. This is
because ARM mortgages shift the interest rate risk to the borrower.
Our contribution is to set up a delta hedge for MSR whose value
diminishes as mortgagors try to avoid further rate increases.
“The adjustable-rate mortgage loans in the trust generally adjust
after a one month, six month, one year, two year, three year, five
year, or seven year initial fixed-rate period. We are not aware of
any publicly available statistics that set forth principal prepay-
ment experience or prepayment forecasts of mortgage loans of
the type included in the trust over an extended period of time, and
the experience with respect to the mortgage loans included in the
trust is insufficient to draw any conclusions with respect to the
expected prepayment rates on such mortgage loans.” (Prospectus
supplement to prospectus dated June 28, 2006, DLJ Mortgage
Capital, Inc. Sponsor and seller adjustable rate mortgage-backed
pass-through certificates, series 2006-3).
Model and applications
We express the relationship between the prepayment rate and the
change in basis points in equation (1). The prepayment rate, cpr, is
primarily a function of the difference between the new mortgage
rate y in the market and the contracted mortgage rate r.
cpr = a/(1 + eb(y-r)) (1)
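As a sketch of equation (1), the logistic form can be coded directly; the default parameter values a = 0.4 and b = 100 are the illustrative ones the paper lists for equation (3a), not calibrated estimates, and the function name is ours.

```python
import math

def cpr(y, r, a=0.4, b=100.0):
    """Conditional prepayment rate of equation (1): a logistic function
    of the spread between the market rate y and the contract rate r.
    With b > 0 this is the S-shape of the fixed rate case (cpr rises as
    y falls below r); flipping the sign of b mirrors the curve into the
    inverted S-shape described for ARMs facing rising rates."""
    return a / (1.0 + math.exp(b * (y - r)))
```

At y = r the function returns a/2; far above r it approaches zero, and far below r it approaches a, which bounds the prepayment rate between 0 and a.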
Figure 5 shows the relationship between the spread (y − r) between
market rates and the contractual rate on an adjustable-rate
mortgage, and the cpr of the mortgage. The greater the spread, the
higher the incentive mortgagors will have to switch from an ARM
to a fixed rate mortgage, or to default on their ARM if new affordable
financing is not available. The default option is typically exercised
when home equity has become negative. Declining home equity can
be the result of either an increase in the value of the mortgage lia-
bility or a decline in the value of the mortgaged property or both.
Of course, declining home values are the fundamental cause of
negative home equity. During the subprime crisis, default rates
among subprime borrowers have swamped their refinancing rates
because of rapidly declining home equity and increasing joblessness,
even as rates have come down to historical lows.
Kalotay and Fu (2009) illustrate that the option value offered to
mortgagors differs across mortgage contracts. They show that the
5/1 ARM has a lower option value than the fixed rate mortgage and a
higher option value than the 1-year ARM. The option value increases
Figure 3 – ARM indices: 6-month LIBOR and 1-year CMT rates (1998-2009)
Figure 4 – Conventional 30-year mortgage rate and Treasury-indexed 5/1 hybrid adjustable rate mortgage rate (2005-2009)
as the mortgage incorporates more elements of the 30-year fixed
rate mortgage. A lower option value has generally translated into
a lower initial interest rate for the borrower. (Consumer Mortgage
Decisions by Andrew J. Kalotay and Qi Fu, Research Institute for
Housing America June 2009)
Valuation of MSR
The cash flow of a MSR portfolio at time t is equal to the servicing
rate s times the outstanding pool balance in the previous period:
MSR_t = (s)m0(1 − cpr)^(t-1)B_(t-1) (2)

where:
m0 = number of mortgages in the initial pool at time zero
B0 = original balance of an individual mortgage at time zero
r = mortgage coupon rate
cpr = prepayment rate
m0(1 − cpr)^t = number of mortgages left in the pool at time t
B_t = outstanding balance of an individual mortgage at time t
s = servicing rate
We express the value of mortgage servicing rights as:
V(MSR) = (s)m0 [Σ (1 − cpr)^(t-1)B_(t-1)/(1 + y)^t] (3)

with t = 1,…,n (throughout the paper).

Equation (3) values a MSR portfolio by summing each cash flow
generated by the portfolio, discounted to the present, where n is the
time at which the mortgages mature and y is the yield to maturity.
After substituting equation (1) into equation (3) we obtain the MSR
function as:

V(MSR) = (s)m0 [Σ (1 − a/(1 + e^(b(y-r))))^(t-1)B_(t-1)/(1 + y)^t] (3a)
We use the following values for Equation (3a) to generate the MSR
over a range of market rates y and we present them in Figure 6:
m0 = 100
B0 = $100,000
r = 6%
a = .4
b = 100
cpr = prepayment rate
s = .25%
n = 30
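A minimal sketch of equations (3)/(3a) in code, using the parameter values above. The paper does not specify how the individual balance B_t evolves, so standard level-payment amortization is assumed here, and the function names are ours.

```python
import math

def cpr(y, r, a=0.4, b=100.0):
    # Equation (1): logistic prepayment function of the spread y - r.
    return a / (1.0 + math.exp(b * (y - r)))

def balance(t, B0=100_000, r=0.06, n=30):
    # Outstanding balance of a level-payment amortizing loan after t
    # annual payments (an assumption; the paper leaves B_t unspecified).
    return B0 * ((1 + r) ** n - (1 + r) ** t) / ((1 + r) ** n - 1)

def v_msr(y, m0=100, B0=100_000, r=0.06, s=0.0025, n=30, a=0.4, b=100.0):
    """Equation (3a): value of mortgage servicing rights as the fee s
    earned on the surviving principal, discounted at the market rate y."""
    c = cpr(y, r, a, b)
    return s * m0 * sum(
        (1 - c) ** (t - 1) * balance(t - 1, B0, r, n) / (1 + y) ** t
        for t in range(1, n + 1)
    )
```

Evaluating v_msr over a grid of market rates y traces out the non-linear MSR value curve discussed with Figure 6.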
Before commenting on Figure 6, it is important to understand that
a mortgage servicing right is equivalent to an interest only secu-
rity (IO). The ultimate cash flow that an IO generates over time is
directly tied to the amount of underlying principal outstanding at
any point in time. An interest only security is derived by separating
interest from principal cash flows when the mortgagor’s payment
is made, before distributing it to investors. The interest component
is distributed to the IO investors and the principal component is
distributed to the PO (principal only) investors. The typical graph of
an IO security over market rates shows an increase in the value of
MSR as market rates increase because prepayment rate decreases
(prepayment effect is stronger than discount effect) until its value
reaches a maximum (prepayment effect is equal to discount effect)
and then it starts to decrease (discount effect is greater than pre-
payment effect).
Figure 5 – Prepayment function (plot of cpr against the spread y − r)
Figure 6 – Value of mortgage servicing rights (plot of V(MSR) against the market rate y)
The value of MSR derived from a pool of ARMs with respect to chang-
es in market interest rates differs from the value of MSR for fixed
rate mortgages because the prepayment function is inverted. This
is illustrated in Figure 6. When market rates start to increase, mort-
gagors who anticipate that further rate increases are on the horizon
will attempt to refinance their mortgages with fixed rate mortgages.
Those who cannot secure a fixed rate mortgage at acceptable terms
may have to endure further rate increases which will lead to higher
default rates. The effect of rising interest rates for certain classes of
subprime ARMs will be a diminution of outstanding principal. Since
this principal is the source of servicing fees and the discount rate is
now higher the value of MSR must fall. We are discounting a smaller
stream of cash flows at a higher rate.
ARMs do not all reset at the same margin above an index. Rates
that were initially set at a discount from market value at some
point will reset to the fully indexed margin. An extreme change
in rates when the teaser period is over leads to what is known as
payment shock. Anticipation of payment shock increases the
incentive to prepay. In 2004, the Fed began raising the Fed funds
rate. In December 2003, the rate was 1%; by June 2007 it had risen
to 5.25%. This increase in money market rates led to increases in
ARM indices, a lower demand for mortgage finance, higher default
rates, and the beginning of the reduction in the supply of mortgage
credit. A negative feedback loop was set in motion that led to falling
house prices that increased default rates that lowered the supply of
mortgage credit even further that placed further downward pres-
sure on house prices.
The LIBOR index rose by more than the CMT index, especially in
2008 when interbank lending became severely curtailed. Rising
interest rates can lead to payment shock for initially discounted
ARMs. If mortgagors who have issued ARMs expect rates to con-
tinue rising they can see that their household finances will become
stressed. While prepayment into a fixed rate mortgage will not
necessarily lower payments, it may lower the value of the mortgage
liability relative to the initial ARM by shielding the borrower from
further rate increases.
Rapidly increasing interest rates will increase the incentive to
refinance and the likelihood to default. The incentive to get out of
ARMs that are becoming too costly will depress the value of MSR.
Declining interest rates and lower default rates will enhance the
value of MSR in our model over a range of rates. Notice that the plot
of MSR in Figure 6 is far from linear. At first, rising rates diminish
the value of the MSR, the IO strip. Once MSR reaches a minimum,
further rate increases lead to an increase in the value of MSR. This
is explained by the absence of refinancing opportunities that offer
savings, perhaps coupled with rising home values. Again, at
some point the discount rate effect takes over and the discounted
value of MSR begins to decline.
The delta hedge ratio
We use the model (OSZ) that Ortiz et al. (2008) developed to obtain
a delta-hedged portfolio of fixed rate bonds and MSRs derived from
fixed rate mortgages. In this paper we again hedge MSRs with fixed
rate bonds, with the difference that the MSRs are backed by
adjustable rate mortgages.
αV(MSR) + βV(B) = K (4)
where K is the constant value of the portfolio of MSRs and bonds,
and α and β are the shares of MSRs and bonds, respectively, that are
consistent with a zero-delta portfolio satisfying the constraint K.
OSZ obtain the hedge ratio α as a function of the MSR’s and bond’s
deltas and their respective values:
α = [-K(dV(B)/dy)] ÷ [(dV(MSR)/dy)V(B) – (dV(B)/dy)V(MSR)] (5)
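Equation (5) can be evaluated numerically by approximating the two deltas with central finite differences. The helper below is a sketch with names of our own choosing, taking the valuation functions V(MSR) and V(B) as callables.

```python
def alpha(y, v_msr, v_bond, K=1.0, h=1e-6):
    """Hedge ratio of equation (5): the share alpha of MSR in a portfolio
    alpha*V(MSR) + beta*V(B) = K whose first derivative with respect to
    the market rate y is zero. Deltas use central finite differences."""
    d_msr = (v_msr(y + h) - v_msr(y - h)) / (2 * h)     # dV(MSR)/dy
    d_bond = (v_bond(y + h) - v_bond(y - h)) / (2 * h)  # dV(B)/dy
    return -K * d_bond / (d_msr * v_bond(y) - d_bond * v_msr(y))
```

Given α, the bond share follows from the value constraint as β = (K − αV(MSR))/V(B), and the resulting portfolio has (approximately) zero delta at y.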
OSZ simulations and analysis
In the following three cases we use the OSZ model to create the
delta hedge of ARM mortgage servicing rights.
case 1
A regular bond with yearly coupons and face value paid at maturity:
V(B) = c Σ 1/(1 + y)^t + Face/(1 + y)^n, t = 1,…,n
c = $350,000
Face = $5,000,000
y = 5%
n = 10
case 2
A zero-coupon bond
V(B) = Face/(1 + y)^n
We keep the value of the variables the same as before:
Face = $5,000,000
y = 5%
n = 10
case 3
A bond that pays coupons only twice in the life of the bond: at n/2
and then at n:
V(B) = c/(1 + y)^(n/2) + c/(1 + y)^n + Face/(1 + y)^n
c = $350,000
n = 10
Face = $5,000,000
y = 5%
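The three bond structures can be priced directly from the closed forms above; a sketch under the stated parameters (c = U.S.$350,000, Face = U.S.$5,000,000, n = 10), with function names of our own choosing.

```python
def bond_coupon(y, c=350_000, face=5_000_000, n=10):
    # Case 1: bond paying an annual coupon c and face value at maturity n.
    return sum(c / (1 + y) ** t for t in range(1, n + 1)) + face / (1 + y) ** n

def bond_zero(y, face=5_000_000, n=10):
    # Case 2: zero-coupon bond repaying only face value at maturity n.
    return face / (1 + y) ** n

def bond_two_coupons(y, c=350_000, face=5_000_000, n=10):
    # Case 3: coupons only at n/2 and at n, plus face value at n.
    return c / (1 + y) ** (n / 2) + (c + face) / (1 + y) ** n
```

Consistent with the conclusion's discussion, case 3's duration lies between that of the zero-coupon bond (the longest) and the full coupon bond (the shortest).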
α corresponds to the share of MSR in the portfolio of MSR and
bonds that delta-hedges the portfolio of MSR for small changes
in interest rates. A high α means that less bond principal must
be positioned to delta-hedge the MSR portfolio, making for a less
costly hedge. The optimal hedge is therefore the one achieved with
the highest α.
We now examine Figure 7, which plots on the vertical axis the value
of α and on the horizontal axis the market interest rate available
to mortgagors who have issued ARMs but are now considering
prepaying their mortgage. The interest rate on the underlying ARM
mortgage pool is 5%. Again α is the share of the MSR in the hedged
portfolio.
We have divided the horizontal axis into three zones.
Zone 1 — starts at a market rate of 0% and ends where the three α
functions cross the horizontal axis for the first time. This crossover
point corresponds to the minimum value of the MSR in Figure 6. In
zone 1 the maximum α (most efficient hedge) is achieved by pur-
chasing zero coupon bonds. As market rates begin to rise, the value
of MSR derived from certain pools of ARMs will begin to decline as
defensive prepayments begin to increase.
Zone 2 — corresponds to the range of market rates where the three
α functions are negative. In this range, the optimal delta hedge is
obtained by shorting n year coupon bonds.
Zone 3 — starts where the three α functions cross the horizontal
axis for the second time; that point corresponds to the maximum
value of MSR in Figure 6. In that zone, the highest level of α is
achieved by adding zero-coupon bonds (case 2) to the MSR.
conclusion
Adjustable rate mortgages shift, to varying degrees, interest rate
risk from lenders/investors to borrowers. The hybrid mortgage
combines elements of the fixed-rate mortgage and the adjustable
rate mortgage by offering borrowers a fixed period prior to the
adjustable period. Adjustable-rate mortgages were a significant
component of the subprime mortgage market. The value of mort-
gage servicing rights is extremely sensitive to changes in market
rates and of course to default rates which are directly impacted by
market rates. Holders of MSR can delta-hedge their MSR portfolios
against interest rate risk. This paper simulates three different port-
folios of MSR and bonds. While the servicer will sometimes choose
to take a long position in zero-coupon bonds and sometimes to
short the n year coupon paying bond, the intermediate case is never
selected. The intermediate bond has a shorter duration than the
zero-coupon bond and a longer duration than the n year coupon
bond.
We have illustrated that the financial instrument that servicers
must use to effectively delta-hedge a portfolio of servicing rights
that is losing value as interest rates rise changes as the spread
between the market mortgage rate and contract rate changes.
Hedging servicing rights derived from ARMs is complicated by the payment shock the mortgagor faces as the fixed rate becomes adjustable and the contract rate is fully marked to a margin above the ARM index. In addition to the changing incentive to refinance into a more affordable mortgage, there is the increasing risk of default if the opportunity to refinance is not available. The hedging of ARMs in our model also incorporates the discount effect on future servicing income.
A very important dimension of the market for mortgage servicing
rights and particularly servicing income derived from subprime
mortgages, and even more specifically from subprime ARMs, is the
effort the government and the private sector are making to modify
loans before they default. These efforts will boost the value of MSRs
because the servicers will continue to service principal that would
have otherwise been lost to foreclosures. In addition, servicers that
participate in the government’s HAMP program will retain servicing
income that could have been lost to other lenders/servicers.

Figure 7 – The share of MSR (α) in a delta-hedged portfolio, plotted against the market rate (horizontal axis, 0.05 to 0.15). Case 1: red line (n-year annual coupon bonds); case 2: blue line (n-year zero-coupon bonds); case 3: green line (bond with duration between those of cases 1 and 2).
can ARMs’ mortgage servicing portfolios be delta-hedged under gamma constraints?
Abstract
Structure and stability of private equity market risk are still nearly
unknown, since market prices are mostly unobservable for this
asset class. This paper aims to fill this gap by analyzing market
risks of listed private equity vehicles. We show that aggregate
market risk varies strongly over time and is positively correlated
with the market return variance. Cross-sections of market risk are
highly unstable, whereas ranks of individual vehicles within yearly
subsamples change slightly less over time. Individual CAPM betas
are predictable only up to two to three years into the future and
quickly converge to a stationary distribution when measured in
risk classes in an empirical Markov transition matrix. We suspect
that market risk of private equity is affected by factors unique to
this sector: acquisitions and divestments that constantly rebalance
portfolios, scarcity of information about portfolio companies, and
rapid changes within portfolio companies. Unstable market risk
seems to be a fundamental characteristic of private equity assets,
which must be incorporated into valuation and portfolio allocation
processes by long-term private equity investors. Large increases in
systematic risk in recent years cast doubt on diversification ben-
efits of private equity in times of crisis.
Christoph Kaserer, Full Professor, Department of Financial Management and Capital Markets, Center for Entrepreneurial and Financial Studies (CEFS), Technische Universität München
Henry Lahr, Center for Entrepreneurial and Financial Studies (CEFS), Technische Universität München
Valentin Liebhart, Center for Entrepreneurial and Financial Studies (CEFS), Technische Universität München
Alfred Mettler, Clinical Associate Professor, Department of Finance, J. Mack Robinson College of Business, Georgia State University
The time-varying risk of listed private equity
Performance measurement and portfolio allocation are notoriously
difficult when dealing with private equity assets due to the non-
transparent nature of this asset class. To evaluate the success of an
investment, its risk premium as derived from factor pricing models
can serve as a benchmark for required returns in excess of the risk-
free rate. Private equity’s market risk, albeit hard to measure in
traditional private equity funds, can be obtained from listed private
equity (LPE) vehicles. In this paper, we measure aggregate and indi-
vidual market risk and its variability over time. Private equity inves-
tors can benefit from a quantification of private equity market risks
and their variability, since this asset class represents a substantial
share of international investment opportunities. The private equity
sector had more than U.S.$2.5 trillion of capital under management
in 2008 according to the International Financial Services London
[IFSL (2009)]. This large volume demands a time-variation analysis of systematic risks. In addition, industry-specific characteristics
caused by private equity business models shape the evolution of
risk within this asset class: acquisitions and divestments of portfolio
companies constantly rebalance fund portfolios, which should lead
to highly unstable market risk.
Several authors have focused on non-constant risk premia in equi-
ties, bonds, and REITs. Early papers discussed the impact of risk
variability on portfolio decisions [Levy (1971), Blume (1971)]. Later
work developed the conditional capital asset pricing model and sim-
ilar models using mostly public market data [Lettau and Ludvigson
(2001), Ghysels (1998), De Santis and Gérard (1997), Jagannathan
and Wang (1996), Bollerslev et al. (1988), Ferson et al. (1987)]. Time-
varying risk properties of private equity, however, have not been
examined by empirical research.
The difficulty with risk measurement in traditional (unlisted) private
equity vehicles lies in the opacity of their price formation. Time
series of returns are hardly observable, which renders estimation of
market risk nearly impossible. Many attempts at measuring system-
atic risk are based on voluntarily reported returns of private equity
funds, internal rate of return (IRR) distributions [Kaplan and Schoar
(2005)], cash flows [Driessen et al. (2009)], transaction values of
portfolio companies [Cochrane (2005), Cao and Lerner (2009)], or
the matching of portfolio companies to public listed companies with
similar risk determinants [Ljungqvist and Richardson (2003), Groh
and Gottschalg (2009)].
Private equity vehicles that are listed on international stock
exchanges provide a natural alternative to unlisted ones when esti-
mating their risk. Return data are readily available and can be used
to answer risk-related questions: what are the market risk patterns
of listed private equity throughout the life of the security? How
stable are the market risks of the listed private equity companies?
We first analyze the market risk structure of listed private equity.
For this purpose, we measure market risk in an international capital
asset pricing model (CAPM) using Dimson (1979) betas. While Lahr
and Herschke (2009) measure constant betas over the lifetime of
listed private equity vehicles, we take a step further in considering
their time series properties. In our model, market risk is measured
over a rolling window to generate a continuous set of beta obser-
vations, which describes the aggregate asset class risk over time.
Second, we examine the stability of individual betas. Correlations of
cross-sections for consecutive years can offer insights into relative
changes of risk within the asset class. We find that market risk of
listed private equity is quite unstable if time periods longer than
two years are considered. In a second step, we compute transition
probabilities between risk classes. Our results reflect the instability
of risk in general, but highlight a moderate persistence of excep-
tionally high and low risks.
Stability of market risk
A broad picture of how market risk evolves in listed private equity
can be seen from yearly cross-sections. In this section, we show the
main properties of market risk measurement for our listed private
equity sample for different time horizons: rolling windows of one
and two years and total lifetime. Our sample of listed private equity
vehicles is based on the data from Lahr and Herschke (2009). They
generate a comprehensive list of listed private equity companies,
which we extend to account for recent listings. Our final sample
includes 278 vehicles, the largest proportions being investment
companies and funds that are managed externally. The time hori-
zon chosen for our analysis is January 1st, 1989 to March 25, 2009,
although not all companies have return data for the entire
time period due to new listings and delistings.
To measure the market risk of our sample vehicles, we use individual
return indices from Thomson Datastream in U.S. dollars, the 3-month
Treasury bill rate as a proxy for the risk-free rate, and MSCI World
index returns in U.S. dollars as a proxy for market returns. All return
data are converted to logarithmic returns. During the time period
studied in this paper, 33 companies were delisted. All companies
enter our analysis when return data becomes available and drop out
if they delist or if trading volume is zero after some date.
Market risk estimation
To obtain market risks we regress excess LPE stock returns on
excess market returns (MSCI World). We employ a Dimson regres-
sion to account for autocorrelation in asset returns caused by
illiquidity in LPE vehicles. Early studies showed that in similar set-
tings autocorrelation on the portfolio level can be a problem [Fisher
(1966), French et al. (1987)]. We use the results of Dimson (1979)
and incorporate 7 lagged market returns in the estimation model to
adjust for autocorrelation. In a second step we aggregate the lags
as proposed by Dimson to obtain a measure of market risk. Since
our sample consists of vehicles traded on international exchanges,
we account for currency risk by introducing risk factors for the
euro, yen, and pound sterling, which represent excess returns of
currency portfolios in U.S. dollar. The international capital asset
pricing model is thus given by
r_{i,t} = α_i + ∑_{k=0}^{7} β_{i,k} r_{m,t−k} + γ_{i,1} GBP_t + γ_{i,2} EUR_t + γ_{i,3} JPY_t + ε_{i,t}
where r_{i,t} and r_{m,t−k} are the observed logarithmic (excess) returns of vehicle i and the market at times t and t−k, and k is the lag. The intercept α_i represents a vehicle-specific constant excess return, β and γ are the slope coefficients, and ε is an error term. We use this regression
equation to calculate the market risks for different time periods:
one year, two years, and lifetime market risks.
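The estimation can be sketched on synthetic weekly data; the simulated returns, factor loadings, and the plain least-squares fit below are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 520, 7                       # ~10 years of weekly data, 7 lags

r_m = rng.normal(0, 0.02, T)        # excess market (MSCI World) returns
fx = rng.normal(0, 0.01, (T, 3))    # GBP, EUR, JPY currency factors
lag_betas = np.array([0.6, 0.2, 0.1, 0.05, 0.05, 0.0, 0.0, 0.0])
r_i = (sum(b * np.roll(r_m, k) for k, b in enumerate(lag_betas))
       + fx @ np.array([0.1, -0.2, 0.05]) + rng.normal(0, 0.01, T))

# Regress r_i on contemporaneous plus 7 lagged market returns and the
# currency factors; the first K observations are dropped (no full lag set).
X = np.column_stack(
    [np.ones(T - K)]
    + [r_m[K - k : T - k] for k in range(K + 1)]   # r_{m,t-k}, k = 0..7
    + [fx[K:]]
)
coef, *_ = np.linalg.lstsq(X, r_i[K:], rcond=None)
dimson_beta = coef[1 : K + 2].sum()  # Dimson beta: sum of the lag betas
print(dimson_beta)                   # close to the true value of 1.0
```

Summing the eight lag coefficients is the Dimson aggregation step that corrects the thin-trading bias a single contemporaneous beta would exhibit.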
Yearly cross-sections and aggregate market risk
We first illustrate the behavior of aggregate market risk in a time
series context before taking a closer look at individual risk stability.
Time series of aggregate risk can be constructed from measures
of market risk in rolling windows. We define two such rolling win-
dows: the first spans 52 weekly observations and the second has
104 observations. Similar windows are used by Bilo (2002), who
measures historical return variances but does not examine the time
series properties of systematic risk.
Table 1 summarizes the main market risk statistics for different
observation windows. Mean one-year betas range from a minimum
of 0.22 in 1993 to a maximum of 1.36 in 2000. Two-year betas
are highest for the periods 1999-2000 and 2007-2008. One-year
betas as well as two-year betas vary around an average that is
almost equal to unity. All periods and estimation windows exhibit a
large cross-sectional variation in market risk. This might be caused
by the huge diversity within the listed private equity asset class,
which includes many small vehicles with strongly differing business
models. Interestingly, mean betas are positively correlated to their
standard deviation. This suggests that mean betas are driven by
vehicles with huge betas as indicated by a skewed distribution of
betas.
Figure 1 shows a more detailed picture over time. Listed private
equity betas are first estimated for each vehicle in rolling windows.
These individual betas are then averaged with equal weights over all vehicles for which a beta could be calculated at a given point in time.
Figure 1 reveals the volatile nature of private equity market risk.
Even mean betas vary widely over time.

Lifetime betas
Mean 1.31, SD 0.93, no. > 0: 270, no. < 0: 8, Min -2.89, Max 5.28
Deciles (0.1 to 0.9): 0.32, 0.62, 0.85, 1.03, 1.18, 1.41, 1.62, 1.88, 2.44

Yearly cross-sections, one-year betas (period ending: mean, SD, max-min, no. < 0, n)
1989: 0.51, 1.58, 6.74, 13, 35
1990: 0.84, 0.86, 4.01, 6, 41
1991: 0.84, 1.40, 8.45, 9, 48
1992: 0.26, 1.74, 8.72, 22, 53
1993: 0.22, 1.52, 8.25, 27, 57
1994: 0.52, 1.68, 8.77, 20, 61
1995: 0.64, 2.47, 16.61, 25, 69
1996: 1.15, 4.32, 38.72, 27, 74
1997: 0.97, 2.23, 16.94, 32, 88
1998: 1.16, 1.33, 7.34, 14, 103
1999: 1.31, 2.53, 20.75, 32, 120
2000: 1.36, 2.31, 17.89, 32, 135
2001: 1.19, 1.36, 9.96, 21, 160
2002: 1.16, 1.45, 11.49, 29, 177
2003: 1.09, 2.09, 15.21, 42, 182
2004: 1.35, 2.95, 30.14, 47, 183
2005: 1.05, 2.63, 20.78, 58, 190
2006: 0.95, 1.51, 13.83, 40, 203
2007: 1.04, 2.12, 23.33, 52, 226
2008: 1.20, 1.33, 12.02, 30, 244
Mean: 0.94, 1.97, 15.00

Yearly cross-sections, two-year betas (period ending: mean, SD, max-min, no. < 0, n)
1990: 0.93, 0.71, 2.69, 5, 41
1992: 0.56, 1.06, 6.75, 12, 53
1994: 0.65, 1.32, 8.14, 16, 61
1996: 0.86, 2.06, 15.00, 23, 74
1998: 1.26, 1.26, 6.45, 7, 103
2000: 1.33, 1.30, 8.49, 17, 135
2002: 1.19, 1.16, 7.12, 11, 177
2004: 1.30, 2.07, 16.94, 42, 182
2006: 1.16, 1.29, 11.00, 23, 202
2008: 1.37, 1.29, 8.15, 18, 243
Mean: 1.06, 1.35, 9.07

Table 1 – Beta cross-sections from 1989 to 2008

Beta variability is smaller for two-year betas due to smoothing but higher at the beginning
of our observation period. This higher variability could be caused
by the smaller number of vehicles compared to later years, which
makes the sample mean a less efficient estimator for the true mean.
The one-year beta time series exhibits a mean-reverting behavior,
oscillating between low values around zero during the early 1990s
and peaking in 1996 and 2000. Its long term average, however, is
a moderate 0.98. Two-year betas behave similarly around a time
series mean of 1.07. They are lower than the market average during
the 1990s and again from 2006 through 2008, with a large hike during the financial crisis. Both charts have more or less pronounced peaks during the Asian financial crisis (1997-1998), the dot-com bubble (1999-2001), the year 2004, and the recent financial crisis
(2007-2009).
Exogenous shocks and extreme market movements are possible
causes for beta variability. The green lines in Figure 1 show market
return variances for the corresponding rolling windows (one-year
and two-year) for weekly MSCI World return data. Betas and the
market return variance are significantly correlated with a coeffi-
cient of 0.28 (p<0.05), which is surprising, since betas are inversely
related to the market’s variance in a CAPM context by definition.
This result suggests a large increase in covariance between listed
private equity vehicles and the market in times of uncertainty.
If the systematic risk of private equity is about the same as the market's and, even worse, rises in times when investors seek portfolio insurance, the often-purported benefits of this asset class might turn out to be hard to achieve.
Do risks move together?
Although aggregate market risk seems to be rather unstable over
medium to long time periods, individual risks might still move
parallel to the general mean. There could thus be considerable
relative stability within the listed private equity asset class despite
its apparent irregular behavior. We measure risk stability relative
to the listed private equity asset class by estimating Pearson cor-
relation coefficients. Betas can be huge in magnitude, which can
strongly influence linear estimators such as Pearson correlation
coefficients. Spearman’s rank correlation coefficient can provide a
robust measure of relative beta stability.
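The robustness argument can be illustrated on hypothetical beta cross-sections containing a single extreme estimate; the data and helper functions below are our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical betas for 60 vehicles in two consecutive years: ranks
# are fairly persistent, but one vehicle gets an extreme beta estimate.
beta_y1 = rng.normal(1.0, 0.8, 60)
beta_y2 = 0.5 * beta_y1 + rng.normal(0.5, 0.6, 60)
beta_y2[0] = 38.7                       # outlier, like the 1996 maximum

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def ranks(x):
    """Ranks 0..n-1 (ties are negligible for continuous data)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x))
    return r

r_pearson = pearson(beta_y1, beta_y2)                 # dragged by the outlier
r_spearman = pearson(ranks(beta_y1), ranks(beta_y2))  # robust to it
print(round(r_spearman, 2))   # clearly positive, reflecting rank persistence
```

Because Spearman's coefficient is the Pearson correlation of the ranks, a single huge beta moves only one rank position rather than the whole estimate.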
We first calculate the Pearson correlation to capture beta move-
ments between two points in time. Correlation coefficients are
calculated from all vehicles with a risk estimate available for two
consecutive years. Figure 2 shows that one-year betas are correlated
especially at lags one and two, but only for observation periods after
1993. Betas prior to 1994 seem to behave randomly. Correlations of
one-year betas after 1995 for lag one vary between 0.16 and 0.3.
Interestingly, betas are not always significantly correlated even for
recent years. There is, for example, almost no correlation between
2006 and 2008. An explanation for weak relations in general could
be mean reversion. LPE vehicles tend to have a beta around an indi-
vidual mean. If market risk deviates from this mean in a given year
due to an exogenous shock affecting an individual vehicle, it tends to
drift back to the individual mean after some time. This would reduce
the correlation between individual betas.
Another explanation is real changes in the vehicles’ underlying
businesses, which cause beta variability. Private equity funds buy
portfolio companies to realize capital gains and management fees
over some holding period, which is usually less than ten years,
depending on the funds' focus.

Figure 1 – Mean one-year beta (upper panel) and two-year beta (lower panel) from 1989 to 2009. Left axis: market risk; right axis: variance of weekly returns of the MSCI World (series shown: total, mean, and MSCI variance).

Figure 2 – Correlation of cross-sectional betas between observation periods (left panel: one-year betas; right panel: two-year betas). Pearson correlation coefficients are below the main diagonal, Spearman rank correlation coefficients are above.

Portfolio turnover is thus higher
than in other companies that grow organically or merge with newly
acquired subsidiaries. If the portfolio companies’ market risk is
diverse across portfolio acquisition, private equity betas experience
jumps according to the market risk of the often substantial amounts
of assets acquired or sold.
The portfolio companies’ risk itself might not be constant either.
Private equity funds and firms that specialize in turnaround man-
agement and venture capital in particular can experience large
swings in market risk. Restructuring can affect operational risk as
well as market risk, while rapid growth of successful companies
causes changes in portfolio risk even if individual portfolio com-
pany risk remains constant. This effect also depends on portfolio
size. Acquisitions and divestments must be large relative to port-
folio size to cause substantial risk changes on the portfolio level.
It seems reasonable to expect such a rebalancing effect over the
medium to long term.
Short-term beta variability is likely caused by estimation errors,
which in turn arise for primarily two reasons. First, listed private
equity vehicles are often quite small by market capitalization and
illiquid. The low informational quality of prices in thinly traded
stocks — although mitigated by the Dimson regression — can carry
over to insignificant or unstable beta coefficients. Second, the
nontransparent structure and characteristics of portfolio com-
panies can reduce informational efficiency. Since most portfolio
companies are not listed and investors in private equity must rely
on the information provided by the fund manager, this information can sometimes be as scarce as it is unreliable. If company betas are
seen as a moving average of partially unreliable information about
true market risk, betas become increasingly unstable over short
horizons.
Rank correlations
As a robustness check for the Pearson correlation matrices, we
calculate the Spearman rank correlation to capture the rela-
tive rank movements. Instead of correlating betas directly, the
Spearman correlation computes a coefficient based on the ranks
of individual betas in two consecutive time periods. This calculation
yields a matrix with yearly entries from 1990 to 2008 in the one-
year case and with two-year intervals where betas are estimated
over 104 weeks. Estimation of correlations is based on all vehicles
with observable betas in two consecutive periods, which leads to a
changing number of degrees of freedom.
The entries above the diagonal in both panels in Figure 2 show rank
correlation matrices for one-year and two-year betas. Similar to the
one-year Pearson correlation matrix, betas are correlated at the
first few lags in the one-year case. The highest correlation for one-
year betas is 0.319 at the first lag. Betas begin to be significantly
correlated from 1996 on, which is partly due to their higher magni-
tude and partly due to increasing degrees of freedom.
Results are different in the two-year case but similar to Pearson
correlations. Coefficients are about 0.2 higher than in the one-
year beta case. This is likely the result of better estimates due
to the increasing degrees of freedom when estimating beta over
104 weeks, which makes estimates less sensitive to outliers in the
return distribution. If betas are measured more accurately, correla-
tions increase as well.
Our results suggest that opacity in portfolio companies and illiquidity lead to estimation error in the short run, while portfolio
rebalancing changes individual betas over the medium to long term.
Estimated market risk seems to be most stable over horizons span-
ning two to three years. The two-year beta correlations are slightly
higher than in the one-year case, while significant correlations can
be observed over the last decade only.
Evolution of risk over time
The time series perspective can be combined with an assessment
of cross-sectional stability if one assumes that individual betas are
generated by a Markov process common to all vehicles. We estimate
an empirical Markov transition matrix under the assumption that
future betas depend only on their current value. Transition prob-
abilities from one risk class to another within the empirical Markov
Transition Matrix (MTM) are calculated as the relative frequency
of moving from one risk class to another in the next observation
period. For this purpose, we construct risk classes in two different
ways: beta deciles and fixed beta classes.
Persistency in beta deciles
The deciles-oriented MTM is based on quantiles of the distribution
of individual company market risk. All companies in a decile are
assigned to one risk class. As a result of changes in aggregate beta
over time, decile boundaries may change as well. Since betas are
measured with error, boundaries of upper and lower deciles fluctu-
ate most. Quantiles around the median are more stable.
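A sketch of how such a decile-based transition matrix can be estimated: the synthetic betas are our assumption, while the class construction and relative-frequency counting follow the description above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_vehicles, n_classes = 19, 120, 10

# Synthetic yearly beta cross-sections: a persistent vehicle-specific
# mean plus noise, mimicking moderately stable relative risks.
mu = rng.normal(1.0, 0.6, n_vehicles)
betas = mu + rng.normal(0, 0.8, (n_years, n_vehicles))

def decile_classes(x, k=10):
    """Assign each beta to a decile-based risk class 0..k-1."""
    inner_edges = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(x, inner_edges)

# Count transitions between consecutive years, then normalize each row.
mtm = np.zeros((n_classes, n_classes))
for t in range(n_years - 1):
    for i, j in zip(decile_classes(betas[t]), decile_classes(betas[t + 1])):
        mtm[i, j] += 1
mtm /= mtm.sum(axis=1, keepdims=True)  # row i: P(class j at t+1 | class i at t)

print(np.allclose(mtm.sum(axis=1), 1.0))  # → True
```

Because the decile edges are recomputed each year, a transition reflects movement relative to the rest of the sector, not a change in absolute beta.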
Because a company’s risk class is assigned by the decile function, its
risk class depends on the risk exposure of the entire listed private
equity market. An increase in beta, however, does not change the
risk class if all companies are affected by this increase to the same
extent. This property has an important influence on the interpreta-
tion of our results. The probability of a transition from one risk class
into another does not reflect changes of absolute risk exposure but
gives insight into how the risk of one company behaves compared
to the risk of all other companies. If, for example, a flat transition
matrix is found, betas are equally likely to move from one risk class
to any other. In other words, the relative risk structure would be
completely random over time. This could hold for companies in the
venture capital market in particular, whose risks are driven by real
options and less by fundamental data. If, to the contrary, only posi-
tive entries along the main diagonal exist, relative risks within the
industry do not change over time.
When interpreting results, we must account for the fact that new companies get listed and some become delisted. If we allow
companies to move in and out of our sample, decile boundaries can
change even without any variation in risks, which may force a com-
pany to change its risk class. This can be a problem if the sample
is not random.
Transition probabilities shown in Figure 3 confirm our results
for cross-sectional rank correlations. Betas are moderately stable
relative to the listed private equity sector over short time periods.
Interestingly, high-risk companies remain in the highest risk class
with a 24% (one-year) and a 25% (two-year) probability. This
suggests that there might be companies that have a persistently
higher risk than other companies. The same result can be seen for
low risk classes. For both observation horizons, risk is comparably
stable (17.8% for the one-year betas and even 18.3% for the two-
year betas). Note that in both cases a relatively high proportion of
companies switch from the highest risk class into the lowest risk
class from one period to the next and vice versa. These companies
are most likely outliers, whose beta cannot be estimated reliably
and therefore is unstable compared to the industry as a whole.
Considering the almost random correlation structure prior to 1994
in Figure 2, we calculated transition matrices excluding these early
betas, but find similar results.
The flat probability structure in deciles 5 and 6 can have several explanations: first, it can be driven by listings and delistings.
These changes in the underlying sample can influence decile bound-
aries, if newly added companies do not follow the same risk distri-
bution as existing ones. A second explanation could be that the risk
of private equity changes fast. Companies having extremely high or
low risk remain in their respective risk classes if their risk does not
change too much, whereas medium-risk companies switch between
deciles more often for similar beta changes. Incomplete and noisy
information about portfolio companies might not allow the market
to generate stable betas over short time periods.
Moderately stable ranks are good news for private equity investors.
If private equity vehicles remain in their risk deciles over periods
of two years and even longer (results for three and four years are
not shown here), investors can base their portfolio allocation deci-
sions on betas relative to the private equity sector. Although betas
exhibit large absolute swings, which will be shown below, long-term
investors can still target specific high or low risk vehicles for inclu-
sion in their portfolio.
Persistency in fixed beta classes
Absolute persistency of market risks can be examined by using
fixed risk classes. In this case, we do not measure the behavior of
company risk relative to other companies but absolute risk change.
We define the following ten risk classes for individual betas β_{i,t} of vehicle i at time t: q_{k-1} < β_{i,t} ≤ q_k with {q_0, ..., q_10} = {-∞, -3, -2, -1, 0, 1, 2, 3, 4, 5, ∞}, where each class is denoted by its upper boundary. If risks
were stable in this sense, we would expect matrix entries along the
main diagonal. If risks change their size randomly, each row should
reflect the unconditional distribution of betas.
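Reading each class as q_{k-1} < β ≤ q_k, the fixed-class assignment can be sketched directly from the boundaries above; the function name is ours:

```python
import numpy as np

# Fixed risk-class boundaries from the text:
# {q_0, ..., q_10} = {-inf, -3, -2, -1, 0, 1, 2, 3, 4, 5, inf}
edges = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

def fixed_class(beta: float) -> int:
    """Risk class 1..10, where class k covers q_{k-1} < beta <= q_k."""
    return int(np.digitize(beta, edges, right=True)) + 1

print(fixed_class(-3.5), fixed_class(0.9), fixed_class(1.3), fixed_class(7.0))
# → 1 5 6 10  (classes 5 and 6 straddle the typical market beta of one)
```

Unlike the decile construction, these boundaries do not move with the cross-section, so a transition here reflects a change in absolute market risk.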
The MTM with fixed risk classes in Figure 4 yields a different impres-
sion of market risk. When using fixed class boundaries, transition
matrices cannot be used to reach conclusions about risk move-
ments within the listed private equity sector anymore. Instead,
transition probabilities reflect the behavior of individual risks, which
include changes in market risk as well as idiosyncratic exogenous
factors. As expected from our correlation analysis, betas show
a highly mean-reverting behavior. This effect can be seen from
classes five and six (representing the industry general mean mar-
ket risk), which have the highest transition probability from almost
every other risk class. Transition probabilities seem to converge to
the stationary distribution after a short time, which again indicates
that betas become unstable due to portfolio rebalancing and time-
varying risk within portfolio companies.
Our suspicion that extreme betas are due to estimation error is
confirmed by the fact that one-year betas in risk classes 1 and
10 behave quite randomly between two observation periods. An
economic reason for unstable negative betas could be that private equity funds do not pursue short-selling strategies. Although results are similar for one-year and two-year betas, there are a few differences.
Except for one outlier in element p(-3,5) (not shown), the lowest risk
class for two-year betas does not contain any entries.

Figure 3 – Markov transition matrices with risk classes constructed from deciles. Left panel: one-year betas from 1990 to 2008, N = 2158; right panel: two-year betas from 1991 to 2007, N = 914. Axes show the risk class (1-10) at t and at t+1.

High risks are
more persistent than medium risks, as can be seen from risk classes
9 and 10, whose transition probabilities are shifted to the right. This
effect is strongest in one-year betas in the high risk classes, which
are more stable than two-year betas.
Conclusion
Aggregate market risk of listed private equity vehicles varies
strongly over time and is positively correlated with the market
return variance. Individual CAPM betas are highly unstable, where-
as ranks of individual vehicles within a cross-section change slightly
less over time. Individual CAPM betas are predictable only up to 2 to
3 years into the future and quickly converge to a stationary distri-
bution when measured in risk classes in an empirical Markov transi-
tion matrix. High- and low-risk companies, however, are more likely
to remain within their risk classes than medium-risk companies. The
probability that a company can be found in the same decile in the
next observation period is about 25% for high-risk companies and
18% for low-risk ones.
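The convergence to a stationary distribution described above can be checked mechanically. The 3-class matrix below is invented purely to reproduce the qualitative pattern (stickier extreme classes, a less persistent middle class); it is not estimated from the data:

```python
import numpy as np

# Invented 3-class transition matrix (low, medium, high risk), illustrating
# the qualitative pattern above: extreme classes are stickier than the middle.
P = np.array([[0.50, 0.40, 0.10],
              [0.25, 0.50, 0.25],
              [0.10, 0.40, 0.50]])

# k-period transition probabilities are matrix powers of P. As k grows,
# every row converges to the stationary distribution pi, which solves pi = pi P,
# so the starting risk class quickly stops mattering for forecasts.
Pk = np.linalg.matrix_power(P, 20)
pi = Pk[0]  # after 20 periods all rows are numerically identical
```

Because the second-largest eigenvalue of this P is 0.4, predictability decays like 0.4^k — consistent with betas being predictable only a few periods ahead.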
We suspect that market risk of private equity is affected by factors
unique to this sector: acquisitions and divestments that constantly
rebalance portfolios, scarcity of information about portfolio com-
panies, and rapid changes within portfolio companies. Unstable
market risk seems to be a fundamental characteristic of private
equity assets, which must be incorporated in the valuation process
and which casts doubt on diversification benefits of private equity
in times of crisis. This is particularly important because investors
usually hold traditional private equity shares to maturity, which can be up to 10
years. Unpredictable changes in market risk pose a challenge for
portfolio allocation, since investors would be buying assets that
behave entirely differently from what they were supposed to when
first included in the investor’s portfolio. However, targeting vehicles
with specific risks relative to the asset class might be a feasible
strategy for long-term private equity investors.
[Figure 4 – Markov transition matrices with fixed risk classes (−3 to max); rows give the risk class at time t, columns the class at time t+1, with transition probabilities shaded from 0 to 0.5. Left panel: one-year betas from 1990 to 2008, N = 2158; right panel: two-year betas from 1991 to 2007, N = 914.]
Articles
The developing legal risk management environment
Marijn M. A. van Daelen
Law and Management lecturer/researcher, Department of Business Law / Center for Company Law (CCL), Tilburg University
Abstract
Financial institutions are facing an increasing number of risk
management regulations with different national and international
approaches. The legal framework, including its risk management
provisions that existed prior to the current financial crisis, has been
severely tested in ‘real life’ and did not live up to expectations
(whether reasonable or not). As a reaction to the financial crisis
lawmakers and policymakers have been focusing on, inter alia, risk
management regulations to restore public confidence in companies
and the overall market. The underlying question here is whether
new regulations can indeed prevent the next crisis.
1 Basel Committee on Banking Supervision, 2005, “Compliance and the compliance function in banks,” Bank for International Settlements, p. 1
2 Enriques, L., 2009, “Regulators’ response to the current crisis and the upcoming reregulation of financial markets: one reluctant regulator’s view,” University of Pennsylvania Journal of International Economic Law, 30:4, 1147-1155
3 Section 8.01(b) of the US MBCA 2005 and Regulation 70 of the U.K. Table A as amended on 1 October 2007, as well as Article 3 of the Model Articles for Public Companies of the Companies Act 2006
4 van Daelen, M. M. A. and C. F. Van der Elst, 2009, “Corporate regulatory frameworks for risk management in the US and EU,” Corporate Finance and Capital Markets Law Review, 1:2, 83-94, p. 84
5 van Daelen, M. M. A., forthcoming 2010, “Risk management from a business law perspective,” in van Daelen, M. M. A. and C. F. Van der Elst, eds., Risk management and corporate governance: interconnections in law, accounting and tax, Edward Elgar Publishing, Cheltenham
As a reaction to the financial crisis lawmakers and policymakers
have been focusing on, inter alia, risk management regulations to
restore public confidence in companies and the overall market. The
underlying question here is whether new regulations can indeed
prevent the next crisis. Of course, new regulations can improve
the legal environment of financial institutions, thereby reducing
the imperfections shown by this crisis. However, there seems to be
little to gain from extensive additional regulation that can only
prevent a crisis similar to the one the market is facing now.
First of all, previous crises have had their own specificities, and the
next crisis will most likely have its own features. Hence it is more
than doubtful whether much regulation of this kind is needed to
prevent the next crisis. Secondly, introducing new rules without
reducing existing rules that should have (but might not have)
tackled the same or related problems will lead to a pile-up
of regulations. Compliance with all applicable rules and regulations
becomes prohibitively costly for companies. At the same time, the
compliance risk will increase, which in turn can also increase costs.
Compliance risk can be defined as “the risk of legal or regulatory
sanctions, material financial loss, or loss to reputation a [financial
institution] may suffer as a result of its failure to comply with laws,
regulations, rules, related self-regulatory organization standards,
and codes of conduct applicable to its […] activities”1. This is why, in
the words of Enriques (2009), “excessive reregulation today is the
best guarantee of effective pressure towards deregulation tomor-
row” and “regulators should make a lot of noise and show a lot
of activism, all the while producing very little change”2. Excessive
regulation will have a lot of negative side effects and may not even
ensure justified public faith in the reliability of companies and the
overall market in the future. The upshot is that it makes sense first
to have a closer look at existing risk management regulation and
determine the goals in the long run before deciding on the need for
more financial regulation.
Two types of rules provide the basis for the legal risk management
environment. Firstly, there are regulations that require companies
to have an appropriate risk management system in place.
This includes not only ensuring that there is a system to identify
and analyze risks, but also ensuring that the system is adequately
maintained and monitored. Secondly, there are regulations that
require the company to disclose information on (a) the company’s
risk management system or (b) the risks that the company faces
(thus indirectly requiring a system). This firm-specific information set
is important for minimizing the information asymmetry between
managers and shareholders. After all, managers are involved with
the day-to-day business and have better access to all company
information whereas the shareholders, the providers of capital, only
receive publicly available information. The information needed to
reduce this asymmetry must be disclosed in prospectuses (one-time
disclosure documents) and in half-year or annual reports (ongoing
disclosure documents).
Specifically, financial institutions are facing an increasing number of
risk management regulations with different national and interna-
tional approaches. The legal framework, including its risk manage-
ment provisions that existed prior to the current financial crisis, has
been severely tested in ‘real life’ and did not live up to expectations
(whether reasonable or not). These risk management provisions
can be divided into general and sector-specific provisions. As
the sector-specific rules supplement the general requirements, the
latter will be discussed in the next section. The following section
will address sector-specific risk management regulation, and the
final section provides some concluding remarks.
General risk management regulation
To start off, in general the board of directors is responsible for man-
aging the company. For example, the U.S. statutory model states that
the board of directors has to manage and oversee the business and
exercise corporate powers and the U.K. model articles of association
state that the board has to manage the company’s business3. In both
the U.S. and U.K. practices, management directs operations since del-
egation of board authority is recognized, but policymaking remains
a task of the board of directors4. Throughout the years, the duty of
directors has been further developed. In the 20th century the duties
of the directors included maintaining a system of internal controls
and disclosing the company’s risks5. For years, the U.S. was the
forerunner, with requirements focused solely on the financial reporting
aspect of the internal control and risk management framework. The
foundations of American federal securities law are laid down in the
1933 Securities Act, requiring issuers to publicly disclose significant
information about the company and the securities, and the 1934
Securities Exchange Act, requiring mandatory corporate reporting
and independent audits. Congress enacted these two securities
Acts in response to the American stock market crash of 1929 and
the Great Depression in order to reduce the information asymmetry
between managers and shareholders. The Securities and Exchange
Commission (SEC) was established pursuant to the 1934 Act. In the
1933 Act, regulatory recognition was given to the importance of
internal control (which had a more limited scope than risk manage-
ment as we know it today). Regulation S-X Rule 2-02(b) of the 1933
Act required external auditors to give appropriate consideration
to the adequacy of the system of internal control implemented by
the corporation. In its final report of 1940, the SEC recommended a
more thorough assessment of the internal control system. After a
number of corporate scandals, which were related to the bribery of
foreign officials in the mid-1970s, the American regulatory approach
to internal control changed. The 1977 Foreign Corrupt Practices Act
(FCPA) required reporting companies to keep books, records, and
accounts as well as maintaining a system of internal accounting
controls in order to control management activities6.

6 15 U.S.C. section 78m (b)(2)(B)
7 17 CFR 229.303.a-3-ii and “Instructions to paragraph 303(a)”, under no. 3
8 See, for instance, the Treadway Commission, 1987, Report of the National Commission on Fraudulent Financial Reporting, New York: AICPA Inc., p. 12. Establishing an audit committee was already recommended by the SEC in 1972 and demanded by the NYSE in 1978.
9 The Cadbury Committee was set up by the Financial Reporting Council, the London Stock Exchange, and the accountancy profession. Its recommendations are focused on the control and reporting functions of boards, and on the role of auditors.
10 Cadbury Committee, 1992, Report on the financial aspect of corporate governance, London: Gee, Recommendations 4.31 and 4.32
11 Cadbury Report 1992, Recommendation 4.35, section (e), under (v)
12 Hampel Committee, 1998, Committee on Corporate Governance – final report, London: Gee, Section D (Accountability and Audit) under II and subsection 2.20, p. 21
13 Committee on Corporate Governance, 1997, Corporate governance in the Netherlands – forty recommendations, Paragraphs 4.2, 4.3, 3.2 and 6.4
14 Heier, J. R., M. T. Dugan and D. L. Sayers, 2004, “Sarbanes-Oxley and the culmination of internal control development: a study of reactive evolution,” American Accounting Association 2004 Mid-Atlantic region meeting paper, p. 14
15 Romano, R., 2005, “The Sarbanes-Oxley Act and the making of quack corporate governance,” Yale ICF Working Paper 05(23), p. 1
16 Section 8.01(c), subsections (2) and (6) of the Model Business Corporation Act 2005. This Act has been adopted in whole or in part by more than 30 U.S. states. Amendments to the act were adopted December 2009 regarding proxy voting.
17 Committee of Sponsoring Organizations of the Treadway Commission, 2004, Enterprise risk management – integrated framework, executive summary, New York: AICPA Inc., p. 3

A few years later,
companies needed to assess their risks when item 303, the MD&A,
was added to Regulation S-K. It required management’s discussion
and analysis report to include “material events and uncertainties
known to management that would cause reported financial informa-
tion not to be necessarily indicative of future operating results or of
future financial condition”7. Around that time, U.S. recommendation
reports and guidelines — such as the 1978 Cohen Report and the
1979 Minahan Report and later the 1987 Treadway Report and 1992
COSO I Report — were starting to stress a broader internal control
framework. Moreover, recommendations towards the oversight duty
of audit committees of the board of directors regarding the financial
reporting process and internal controls started to develop8.
Years after the U.S. 1933 Act, the FCPA, and the MD&A, the U.K.
followed with self-regulatory but more detailed provisions to man-
age companies’ risks. The main rules on mandatory disclosure were
given by the Companies Act of 1985 and the Listing Rules. Section
221 of the Companies Act required companies to keep account-
ing records in order to show and explain their transactions and to
disclose their financial position. The Listing Rules required listed
companies to include a statement of compliance with the provisions
of the 1992 Cadbury Report9 in their annual report and accounts on
a comply-or-explain basis. This self-regulatory report provided that
the board of directors had to maintain a system of internal control
over the financial management of the company — including proce-
dures to mitigate corporate governance risks and failures — and that
the directors had to make a statement in the annual report on the
effectiveness of their internal control system10. The Cadbury Report
also recommended that all listed companies should establish an audit
committee, comprising at least three non-executives. The report
set out the audit committee’s duties, which included reviewing the
company’s statement on internal control systems11. The 1998 Hampel
Report broadened the U.K. internal control perspective by arguing
that the system had to cover not only financial controls but also
operational and compliance controls, as well as risk management12.
As the Hampel Committee suggested, the London Stock Exchange
issued the Combined Code on Corporate Governance, which included
the provisions of, inter alia, the Cadbury Report and Hampel Report.
Later, other European member states followed the U.K. with inter-
nal control and risk management regulations. For instance, the
Netherlands issued a self-regulatory code (the 1997 Peters Report)
that stressed the board of directors’ responsibility for effective
systems of internal control and recommended that the supervisory
board consider whether to appoint an audit committee. The code
recommended specific duties for this committee, such as supervising
external financial reports, compliance, and the control of company risks13.
Obviously, the legal internal control and risk management envi-
ronment significantly changed when the U.S. Congress passed
the Sarbanes Oxley Act (SOX) after the corporate failures and
fraud cases between 2000 and 2003. It has been said to be the
culmination of a century of internal control developments14. This
2002 federal law was intended to restore public faith and trust
by, inter alia, improving the accuracy and reliability of corporate
disclosures. It contains not only disclosure requirements, but also
substantive corporate governance mandates15. The legal duties of
corporate constituents regarding a system of internal controls are
further developed by this Act and other legislative measures. The
well-known SOX Section 404 demands an annual internal control
report in which management’s responsibility for “establishing and
maintaining an adequate internal control structure and procedures
for financial reporting” is stressed. The report also has to include an
assessment of the effectiveness of these structures and procedures.
Section 302 requires the CEO and CFO — thus not management as
Section 404 does — to certify the fairness of the financial state-
ments and information as well as their responsibility for establish-
ing and maintaining internal controls. The CEO and CFO also have
to present their conclusions — not the total evaluation of Section
404 — about the effectiveness of the internal controls based on their
evaluation. The duties of audit committees also continued to evolve
as Section 301 requires that audit committees establish procedures
for, “the receipt, retention, and treatment of complaints […] regard-
ing accounting, internal accounting controls, or auditing matters.”
Section 205(a) stresses that the purpose of the audit committee is
to oversee the company’s accounting and financial reporting pro-
cesses and audits of the financial statements. Other U.S. legislative
measures as well as guidelines have a wider internal control and risk
management perspective, such as the MBCA and the COSO II Report.
The MBCA provides the scope of the board’s oversight responsi-
bilities relating to the company’s major risks and the effectiveness
of the company’s internal financial, operational, and compliance
controls16. The COSO II Report broadens reporting to encompass
non-financial information and internal reporting and it adds a fourth
category, the strategic objectives, to the existing financial reporting,
operational, and compliance objectives17. Ever since the corporate
failures manifested themselves, it has been argued that new regulations
such as SOX might not succeed in regulating frauds, or that
their effectiveness would be limited, as the frauds that preceded this
legal response occurred despite several levels of monitoring in place
at the time18. For example, Cunningham notes that “[h]istory offers
no reason to expect that new rules will prevent a repeat of accounting
scandals even of this large size or frequency”19 and Bratton,
that “[t]he costs of any significant new regulation can outweigh the
compliance yield, particularly in a system committed to open a wide
field for entrepreneurial risk taking”20.

18 Ribstein, L. E., 2002, “Market vs. regulatory responses to corporate fraud: a critique of the Sarbanes-Oxley Act of 2002,” Journal of Corporation Law, 28:1, p. 5
19 Cunningham, L. A., 2002, “Sharing accounting’s burden: business lawyers in Enron’s dark shadows,” Boston College Working Paper, pp. 16-17
20 Bratton, W. W., 2002, “Enron and the dark side of shareholder value,” available at SSRN: <http://ssrn.com/abstract=301475>, p. 13
21 Committee on Corporate Governance, 2000, The Combined Code – principles of good governance and code of best practice (Combined Code 2000), Principle D.2 and Provision D.2.1 (Principle C.2 and Provision C.2.1 of the 2008 Combined Code)
22 Committee on Corporate Governance, 2003, The Combined Code – principles of good governance and code of best practice (Combined Code 2003), Provision C.3.2
23 Financial Reporting Council, 2009, Review of the Combined Code: final report, p. 27
24 Corporate Governance Code Monitoring Committee, 2008, The Dutch corporate governance code – principles of good corporate governance and best practice provisions (DCGC 2008), Principle II.1
25 DCGC 2008, Best practice provision III.5.4
26 The 1984 Eighth Company Law Directive (84/253/EEC, OJ L 126, 12 May 1984, p. 20–26) harmonized the approval of persons responsible for carrying out the statutory audits of accounting documents. Articles 3 and 24 demanded such persons to be independent and of good repute.
27 Article 5 and IV of Annex I of Directive 2003/71/EC of the European Parliament and of the Council of 4 November 2003 on the prospectus to be published when securities are offered to the public or admitted to trading and amending Directive 2001/34/EC, OJ L 345, 31 December 2003, p. 64–89
Within Europe, the legal internal control and risk management envi-
ronment also changed after the failures and frauds around the new
millennium, but with a more principle-based and self-regulatory
approach. Following the 2003 European Commission’s Plan to Move
Forward, E.U. member states have drawn up or updated their nation-
al corporate governance codes for listed companies. In the U.K., due
to the Hampel Report, the 2000 Combined Code already underlined
the board’s duty to maintain a sound system of internal controls,
on a comply-or-explain basis. The Code added that the board has to
report annually to the shareholders that it has reviewed the effec-
tiveness of the group’s internal control system, covering financial,
operational, and compliance controls and risk management21. A few
years later, the provisions dealing with the audit committee’s duties
were updated due to the 2003 Higgs and Smith Reports requiring
the committee to review the company’s internal control and risk
management systems22. The 2009 Review of the Combined Code
announced amendments to the internal control principle in order
to stress “the board’s responsibility for defining the company’s risk
appetite and tolerance and maintaining a sound risk management
system” and to the provisions in order to include that the board has
to “satisfy itself that appropriate systems are in place to enable it
to identify, assess and manage key risks”23. Other E.U. member
states also issued corporate governance codes that emphasized
the board’s and audit committee’s duties. For instance, the Dutch
code provides that the board is responsible for complying with all
relevant primary and secondary legislation and managing the risks
associated with the company’s activities. In addition, the board
has to report related developments to and discuss the internal risk
management and control systems with the supervisory board and
the audit committee24. The audit committee has to monitor the
activities of the board with respect to the operation of the internal
risk management and control systems25.
Traditionally, E.U. lawmakers focused mainly on corporate disclosure
rules and not so much on requiring management systems to endorse
the reliability of the reporting and an internal control framework26.
Responding to the corporate failures and fraud around the new mil-
lennium, the E.U. became more active in areas such as company law,
accounting, and auditing law, although parts of these areas remain
controlled by the national legislators. The E.U. legislative movement
brought forward several general as well as sector-specific direc-
tives and recommendations. One of these general directives is the
Prospectus Directive with the purpose of harmonizing, inter alia,
the information contained in the prospectus in order to provide
equivalent investor protection. It requires the prospectus to include
key information on the company’s risk factors and a summary in
which “the essential characteristics and risks associated with the
issuer, any guarantor and the securities” are disclosed27. In addition,
the Transparency Directive 2004/109/EC harmonizes transparency
requirements to ensure a higher level of investor protection and mar-
ket efficiency within the E.U. The directive acknowledges the impor-
tance of disclosure of the companies’ main risks as Articles 4 and 5
of the directive require the annual and half-yearly financial reports
to include a management report in which a description is given of the
“principal risks and uncertainties” that the company faces. Next to
disclosing general information on their risks, companies are required
to disclose information on the risk management systems regarding
the financial reporting process. That is because Articles 1 and 2 of the
2006/46/EC amendment to the Accounting Directives — the Fourth
and Seventh Company Law Directives — provide that the annual
corporate governance statement must include a description of the
main features of the company’s (or group’s) internal control and risk
management systems for the financial reporting process. Moreover,
specific features of the audit committee’s duties are given at E.U.
level. In a Commission Recommendation (2005/162/EC), the audit
committee is recommended to assist the board to review the internal
control and risk management systems. The committee should do so
in order to ensure that the main risks the company faces are properly
identified, managed, and disclosed. This monitoring role of the audit
committee is further developed in the 2006/43/EC Audit Directive
for monitoring the financial reporting process and the effectiveness
of the company’s internal control and risk management systems.
Thus, the legislative measures at the E.U. level require reporting on
the main features of the systems for the financial reporting process
and monitoring the effectiveness of the systems. Before implement-
ing these directives in national laws and regulations, most member
states had already issued corporate governance codes focusing on a
broader concept of internal control and risk management, emphasiz-
ing the financial reporting, operational, and compliance aspects.
28 See, for a more thorough analysis of risk management within financial law: Van der Elst, C. F., forthcoming 2010, Risk management in financial law, in van Daelen, M. M. A., and C. F. Van der Elst, eds., Risk management and corporate governance: interconnections in law, accounting and tax, Edward Elgar Publishing, Cheltenham
29 Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Sweden, the U.K., and the U.S., and later also Switzerland
30 Basel Committee on Banking Supervision, 2004, International convergence of capital measurement and capital standards – a revised framework
31 Articles 11 and 22 and Annex V of Directive 2006/48/EC of the European Parliament and of the Council of 14 June 2006 relating to the taking up and pursuit of the business of credit institutions, OJ L 177, 30 June 2006, p. 1–200, and Article 34 of Directive 2006/49/EC of the European Parliament and of the Council of 14 June 2006 on the capital adequacy of investment firms and credit institutions, OJ L 177, 30 June 2006, p. 201–255
32 Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the taking-up and pursuit of the business of Insurance and Reinsurance (Solvency II), OJ L 335, 17 December 2009, p. 1–155
The upshot is that, despite some inconsistencies, both in the E.U.
and the U.S. most parts of the reporting and monitoring level
are covered, as shown in Table 1. In the U.S. this is accompanied
by legislative measures focusing on establishing and maintaining
an internal control system for financial reporting, whereas E.U.
member states further the framework by provisions related to the
overall system for establishment and maintenance. In addition, in
the U.S. the establishing, maintaining, reporting, and monitoring
provisions regarding the financial reporting systems are provided
by law. By contrast, in the E.U. the establishing, maintaining,
and reporting provisions regarding the overall systems are pro-
vided by self-regulation, although this regulation has a legal basis
in several member states.
                               Establish/identify    Maintain/manage       Report on      Monitor
General risks                  (E.U. / U.S. state)   (E.U. / U.S. state)   E.U. / U.S.    E.U. MS / U.S. state
General systems                E.U. MS               E.U. MS               E.U. MS        E.U. / E.U. MS / U.S. state
Financial reporting systems    U.S.                  U.S.                  E.U. / U.S.    U.S.

Table 1 – Aspects of the main general E.U. and U.S. internal control and risk management provisions
Sector-specific risk management regulation
Next to these general provisions, the legal risk management environ-
ment — especially with regard to the financial industry — is shaped by
sector-specific legal measures. The financial industry-specific
provisions cover mainly the banking system, insurance, and securi-
ties. The previous section shows that the foundation of the legal
risk management environment is given by mainly two types of rules.
Firstly, having — including maintaining and monitoring — internal
control and risk management systems and secondly, disclosing infor-
mation about those systems and the risks the company faces. As the
current section will show, the financial industry regulations and
guidelines supplement these two levels — though in much more detail —
but also expand the first level. Indeed, the general rules provide that
a system must be in place and stress the board’s duty to maintain
internal control and risk management systems and the audit com-
mittee’s monitoring role therein. The financial industry-specific
provisions, however, regulate certain internal functions within the
organization and emphasize the external monitoring role of the
supervisory authorities. Several financial industry-specific provisions
are described below in order to further explain these expansions28.
At E.U. level, financial industry-specific directives that refer to risk
management are, inter alia, the Capital Requirements Directives,
the Solvency Directive, and the MiFID. The Basel Committee on
Banking Supervision was established in 1974 by the central bank
governors of the Group of Ten countries29. Without formal supra-
national supervisory authority, the committee issues supervisory
standards and guidelines which national authorities can implement.
The 1988 Basel Capital Accord introduced a capital measurement
system which provided for the implementation of a credit risk mea-
surement framework. In 2004 a revised framework was issued. This
2004 Basel II Accord provides for requirements relating to minimum
capital, supervisory review, and market discipline and disclosure30.
It stresses that risk management is fundamental for an effective
assessment of the adequacy of a bank’s capital position. The Basel
II framework is introduced into European legislation through the
Capital Requirements Directives, comprising Directive 2006/48/EC
and Directive 2006/49/EC. It affects credit institutions and certain
types of investment firms. In line with the above described gen-
eral legal provisions, Article 138 of Directive 2006/48/EC requires
credit institutions to have adequate risk management processes
and internal control mechanisms in place, including reporting and
accounting procedures. Article 135 of that Directive reads that
E.U. member states have to demand that “persons who effectively
direct the business of a financial holding company be of sufficiently
good repute and have sufficient experience to perform those
duties.” Consequently, where general legal provisions develop what
the duty of the board and managers includes, this sector-specific
provision regulates what a proper person for performing certain
duties would be like. In addition, credit institutions and certain
types of investment firms must have effective processes to iden-
tify, manage, monitor, and report their risks as well as adequate
internal control mechanisms. Their management body — consisting
of at least two persons with sufficiently good repute and sufficient
experience to perform such duties — should approve and review the
strategies and policies for identifying, managing, monitoring, and
mitigating the risks, taking into account specific criteria regarding
the credit and counterparty risk, residual risk, concentration risk,
securitization risk, market risk, interest rate risk arising from non-
trading activities, operational risk, and liquidity risk31. Obviously,
these requirements, especially the specific criteria, are much more
detailed than the general ones described in the previous section.
Another sector-specific European Directive that introduces a
comprehensive framework for risk management and regulates
certain internal functions within the organization is the Solvency
II Directive32. It has a much wider scope than the Solvency I
Directive and contains thorough revision and realignment of the
E.U. Directives relevant to (re)insurers. The Solvency II Directive
has similarities to the Basel II banking regulation. This set of regu-
latory measurements for insurance companies includes provisions
regarding capital, governance, and risk management, effective
The developing legal risk management environment
33 Articles 41, 44, 46 and 101 of Directive 2009/138/EC
34 Articles 42, 46 and 47 of Directive 2009/138/EC
35 Commission Directive 2006/73/EC of 10 August 2006 implementing Directive 2004/39/EC of the European Parliament and of the Council as regards organisational requirements and operating conditions for investment firms and defined terms for the purposes of that Directive, OJ L 241, 2 September 2006, p. 26-58 (MiFID level 2 Directive)
36 Articles 6 and 7 of Commission Directive 2006/73/EC
37 Section 303A of the NYSE's Listed Company Manual
38 COSO, Effective enterprise risk oversight: the role of the board of directors, 2009; Section 5 of the Shareholder Bill of Rights Act of 2009 (S. 1074) of 19 May 2009. The bill proposes to amend the Securities Exchange Act of 1934 (15 U.S.C. 78a et seq.) by inserting after section 14 section 14A and by adding at the end subsection (e) 'corporate governance standards,' (5) 'risk committee'
39 Section 4.2 of the Dutch Banking Code: Nederlandse Vereniging van Banken (NVB), Code Banken, 9 September 2009, p. 10. This self-regulatory code will most likely receive a legal basis in 2010.
40 Recommendations 23-27 and Annex 10 (Elements in a board risk committee report) of the Walker Review, 2009, A review of corporate governance in U.K. banks and other financial industry entities – Final recommendations. In addition, see the U.K. Turner Review, 2009, A regulatory response to the global banking crisis, p. 93.
41 Financial Reporting Council, 2009, Review of the Combined Code: Final Report, p. 25
supervision, and disclosure. It requires written and implemented
policies for the company’s risk management, internal control, and
internal audit. In addition, the (re)insurance companies must have
an effective and well integrated risk-management system in order
to identify, measure, monitor, manage, and report risks, covering,
inter alia, market, credit, liquidity, concentration, and operational
risks. The risk management system must include risk mitigation
techniques and the companies have to conduct risk and solvency
assessments. Next to the risk management system, an effective
internal control system with administrative and accounting pro-
cedures, an internal control framework, and appropriate reporting
arrangements is required33. To sum up, compared to the general
provisions described above, this directive introduces a more com-
prehensive set of risk management and internal control require-
ments. The directive also prescribes internal functions and the
duties of as well as the personal qualifications for those functions.
For instance, for the evaluation of the adequacy and effective-
ness of the internal control system, an internal audit function is
required. Furthermore, the (re)insurance companies are instructed
to have a compliance function within their organization. This com-
pliance function has the duty to advise the management or super-
visory body on compliance with laws and regulations, to identify
and assess compliance risks, and to assess the possible impact of
any changes in the legal environment. Moreover, it demands that
the “persons who effectively run the undertaking or have other key
functions” are fit and proper, that is, have adequate professional
qualifications, knowledge, and experience and are of good repute
and integrity respectively34.
A third financial industry-specific piece of E.U. legislation is the
MiFID, the markets in financial instruments directive, which pro-
vides organizational requirements and operating conditions for
investment firms35. Like the previous directives, it requires companies to establish, implement, and maintain risk management policies and procedures that detect risks, set the level of risk tolerance, and include risk-minimizing procedures. The directive
further requires investment companies to monitor the adequacy
and effectiveness of these risk management policies and proce-
dures. With regard to the internal functions of the organization, it
provides that investment companies have to establish and maintain
a compliance function, for which a compliance officer must be
appointed, with the duty to monitor and assess the adequacy and
effectiveness of the company’s measures and procedures. It goes
on to describe that this compliance function must have “the nec-
essary authority, resources, expertise, and access to all relevant
information” in order to create an environment in which it can
discharge its responsibilities properly and independently. In addition, like insurance companies, investment companies
need to have an internal audit function for the evaluation of the
adequacy and effectiveness of the internal control system36.
As regulatory reform takes shape at the E.U. level, E.U. member states are introducing their own financial industry-specific guidelines, and the U.S. is developing general regulations regarding certain internal functions within the organization. In the U.S., the New
York Stock Exchange corporate governance rules require audit com-
mittees to discuss the guidelines and policies to govern the process
of risk assessment and risk management37. In addition, in May 2009 legislation entitled 'Shareholder Bill of Rights Act of 2009' was introduced in the U.S. Senate by Senator Charles Schumer; it would,
if passed, mandate risk committees for publicly traded companies in
general. These risk committees — to be composed of independent directors — would be responsible for establishing and evaluating the risk management practices of the issuer38. Like
in the U.S., in E.U. member states the idea of requiring companies to
have a risk committee is also starting to be considered. For instance,
the Dutch self-regulatory Banking Code requires banks — not listed
companies in general — to have a risk committee39. Furthermore,
the U.K. Walker Review recommends that certain listed banks and insurance companies establish a risk committee in order to, inter alia,
oversee and advise the board on current risk exposures and the
future risk strategy40. Similar to the Netherlands, but contrary to
the U.S., this U.K. recommendation is not extended to non-financial
listed companies41. In general, from a legal perspective reform as a
reaction to the financial crisis includes the (further) development of
the duty of the board, senior management, the supervisory body or
non-executives, the audit committee, the internal audit, the compli-
ance function, and the risk committee.
Next to regulating the duty of and personal qualifications for cer-
tain internal functions within the organization, the more recent
financial industry-specific provisions emphasize the external monitoring role of the supervisory authorities. Financial markets are more global than they used to be, and since the financial crisis can be characterized as a global systemic crisis, lawmakers are searching for ways to pursue regulatory repair internationally. The reaction at
42 See for the U.S. reaction: H.R. 4173 – Wall Street Reform and Consumer Protection Act of 2009. The U.S. House passed this financial services reform bill on 11 December 2009, which is said to be 'the biggest change in financial regulation since the Great Depression.' When the bill becomes law, it would provide for, inter alia, protection of consumers and investors, enhanced Federal understanding of insurance issues, and regulated over-the-counter derivatives markets. This legislation would also establish a Consumer Financial Protection Agency and give the Treasury Department new authority. See in addition, Department of the Treasury, Financial regulatory reform – a new foundation: rebuilding financial supervision and regulation, 17 June 2009 (the White Paper on Financial Regulatory Reform of June 2009 from the Obama Administration).
43 The de Larosière Group, 2009, Report of the high-level group on financial supervision in the EU, Brussels, p. 4
44 Communication from the Commission, European financial supervision, 27 May 2009, COM(2009) 252
45 Proposal for a directive of the European Parliament and of the Council amending Directives 1998/26/EC, 2002/87/EC, 2003/6/EC, 2003/41/EC, 2003/71/EC, 2004/39/EC, 2004/109/EC, 2005/60/EC, 2006/48/EC, 2006/49/EC, and 2009/65/EC in respect of the powers of the European Banking Authority, the European Insurance and Occupational Pensions Authority, and the European Securities and Markets Authority, 26 October 2009, COM(2009) 576. Legislative measures to implement the European Systemic Risk Board are not included in this proposal.
46 See for this discussion Morley, J. D., and R. Romano, eds., 2009, "The future of financial regulation," John M. Olin Center for Studies in Law, Economics, and Public Policy, Research Paper No. 386, 108-116
47 Morley, J. D., and R. Romano, eds., 2009, "The future of financial regulation," John M. Olin Center for Studies in Law, Economics, and Public Policy, Research Paper No. 386, p. 88
the E.U. level42 includes introducing European supervisory authori-
ties to supplement the national supervision in order to repair the
lack of cohesiveness and form a supervisory front. The European
Commission mandated the de Larosière Group, a high-level group
on financial supervision in the E.U., to propose recommendations
on the future of European financial regulation and supervision.
The report of the de Larosière Group provides for a framework
that points out three main items to strive for: a new regulatory
agenda (to reduce risk and improve risk management), stronger
coordinated supervision (macro- and micro-prudential), and effec-
tive crisis management procedures (to build confidence among
supervisors)43. In reaction to these recommendations, the European
Commission proposes reforms relating to the way financial markets
are regulated and supervised. It recommends a European financial
supervisory framework composed of two new pillars. The first pil-
lar is a European Systemic Risk Board (ESRB) “which will monitor
and assess potential threats to financial stability that arise from
macro-economic developments and from developments within the
financial system as a whole (‘macro-prudential supervision’)”44. The
second pillar is a European System of Financial Supervisors (ESFS)
that should consist of a network of national financial supervisors
as well as new European Supervisory Authorities. At the moment,
three financial sector-specific committees are already in place
at the EU level: the Committee of European Banking Supervisors
(CEBS), the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS), and the Committee of European
Securities Regulators (CESR). In order to establish European
Supervisory Authorities, a directive is proposed which transforms
these committees into a European Banking Authority (EBA), a
European Insurance and Occupational Pensions Authority (EIOPA),
and a European Securities and Markets Authority (ESMA)45.
concluding remarks
Since the financial crisis started, the legal risk management environment has been under construction. Especially for the financial services
industry, regulations relating to the internal organization of the
company and the external monitoring role of the supervisory
authorities are piling up. As argued above, the underlying question
here is whether new regulations can indeed prevent the next crisis.
For new regulations to improve the legal environment of financial
institutions — thereby reducing the imperfections shown by this
crisis — firstly the current legal environment must be clear and sec-
ondly, the primary problem has to be understood. This seems only
logical, but gauging the precise problem is far from easy. Where
Fein argues that it might not be bank regulation that is broken, but
rather bank supervision, Kashyap argues that regulation is broken
at the most basic level46. Even so, if new regulations can prevent
a crisis such as this one, there is no guarantee that this type of
regulation is needed to prevent the next one, as the next crisis will
most likely have its own features. Besides that, at a roundtable on
the future of financial regulation Harring argued that there might
almost never be a perfect time for reform. “When profits are high
and markets are buoyant, it’s only we ivory tower types who think
about it. And when there’s a crash, risk aversion rises to such an
extent that tightening regulations is unnecessary because institu-
tions and markets are already too risk-averse to rekindle economic
growth”47. To conclude, it might not be the right time for regulatory
reform, but when it is, it might be best to focus on what we want
to achieve in the long run and how that can be achieved keeping
in mind an adequate balance between the interests of corporate
constituents, consumers, and investors on the one hand, and the
overall costs, on the other. After all, the financial services industry
is one of the most heavily regulated industries already.
Articles
Interest rate risk hedging demand under a Gaussian framework

Sami Attaoui, Economics and Finance Department, Rouen Business School
Pierre Six, Economics and Finance Department, Rouen Business School
Abstract
This article analyzes the state-variable Merton-Breeden hedging demand for an investor endowed with a utility function over both intermediate consumption and terminal wealth. Based on the three-factor model of Babbs and Nowman (1999), we show that this demand can be expressed simply as a weighted average of zero-coupon bond sensitivities to these factors. The weighting parameter is the proportion of wealth our investor sets aside for future consumption rather than for terminal wealth.
1 Our analysis can also be carried out in the Dai and Singleton (2000) framework.
2 We consider the case γ > 1. This is standard in the theoretical literature and has been supported by empirical evidence.
3 See Cox and Huang (1991) for the equivalence between the martingale and dynamic programming approaches.
4 The first component is the speculative mean-variance demand, which depends on asset direction and can be much more easily studied than the Merton-Breeden hedging demand.
5 The study of the variance-covariance matrix as well as the covariance of assets with state variables can be easily carried out for various assets in our affine Gaussian framework and is thus omitted in the sequel.
Portfolio management of fixed-income securities has recently received a lot of attention in the literature. However, to the best of our knowledge, none of these studies focuses on the Merton-Breeden hedging demand [Merton (1973), Breeden (1979)] for an arbitrary
fixed income security. Considering an investor with a utility func-
tion over intermediate consumption and terminal wealth, we focus
in this study on a stochastic opportunity set composed of stochas-
tic interest rates modeled by the three-factor model of Babbs and
Nowman (1999). Indeed, Litterman and Scheinkman (1991) have
identified three common factors that account for more than 82%
of innovations in bond returns.
Relying on the martingale approach for portfolio management
[Karatzas et al. (1987), Cox and Huang (1989, 1991)], we provide
useful insights on the factors related to interest rate risk hedging
demand. Our results show that this demand is intimately linked to
the sensitivities of zero-coupon bonds to the various factors as
well as to the proportion of wealth an investor sets aside for future
consumption rather than for terminal wealth. In addition, these
quantities can be straightforwardly computed.
setting
We consider the three-factor interest rate model of Babbs and Nowman (1999)1. The three state variables Xi(t), i = 1, 2, 3, are assumed to be governed by the following dynamics:

dX(t) = −diag(κ)X(t)dt + σdz(t),   X(0) ≡ X,   (1)

where dz(t) ≡ [dz1(t) dz2(t) dz3(t)]′ is a vector of independent Brownian motions defined on a filtered probability space (Ω, F, (Ft)0≤t≤T, P), T designates the end of the economy, and P the historical probability. The market prices of risk, θ ≡ [θ1 θ2 θ3]′, linked to these Brownian motions are assumed to be constant. Moreover, κ ≡ [κ1 κ2 κ3]′ is a vector of positive constant mean-reversion speeds of adjustment and diag(κ) is the diagonal matrix whose ith diagonal element is κi. σ ≡ (σij), 1 ≤ i, j ≤ 3, is a lower-triangular matrix of constants that represents the sensitivity of the factors to the various shocks.
The instantaneous risk-free interest rate is specified as follows:

r(t) ≡ μ − X1(t) − X2(t) − X3(t),   (2)

where μ is a positive constant.
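The dynamics in Equations (1)-(2) are simple to simulate. The sketch below uses the Table 1 parameter estimates reported later in the paper with an Euler discretization; the step count, seed, and function names are our choices, not the authors':

```python
import numpy as np

# Parameters taken from Table 1 of the paper (Babbs-Nowman estimates).
kappa = np.array([0.6553, 0.0705, 0.0525])   # mean-reversion speeds kappa_i
mu = 0.0701                                   # constant in Equation (2)
# Lower-triangular sensitivity matrix sigma.
sigma = np.array([[ 0.0214,  0.0,     0.0   ],
                  [-0.0178,  0.0065,  0.0   ],
                  [ 0.0143, -0.0046,  0.0064]])

def simulate_factors(x0, T, n_steps, rng):
    """Euler discretization of dX = -diag(kappa) X dt + sigma dz, Equation (1)."""
    dt = T / n_steps
    path = np.empty((n_steps + 1, 3))
    path[0] = x0
    for k in range(n_steps):
        dz = rng.standard_normal(3) * np.sqrt(dt)
        path[k + 1] = path[k] - kappa * path[k] * dt + sigma @ dz
    return path

def short_rate(x):
    """Equation (2): r(t) = mu - X1(t) - X2(t) - X3(t)."""
    return mu - x.sum(axis=-1)

rng = np.random.default_rng(0)
X = simulate_factors(np.zeros(3), T=10.0, n_steps=2500, rng=rng)
r = short_rate(X)   # path of the instantaneous risk-free rate
```

Starting the factors at their long-term mean of zero, the short rate starts at μ and fluctuates around it as the factors mean-revert.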
Under these assumptions, Babbs and Nowman (1999) give a closed-form formula for the time-0 price of a zero-coupon bond maturing at time TB:

B(TB, X) = exp( −TB·(r∞ − w(TB)) − DK(TB)′X ),   (3)

where DK(TB) ≡ [Dκ1(TB) Dκ2(TB) Dκ3(TB)]′, with Dx(y) ≡ (1/x)(1 − exp(−x·y)).
These three functions represent the sensitivity of the bond price to the three factors. The long-term interest rate r∞ and the deterministic function w(TB) are given in the appendix. Applying Ito's lemma to Equation (3), we obtain the zero-coupon volatility vector:

σB(TB) = σ′DK(TB).   (4)
We consider an individual who invests in a riskless asset and in a portfolio composed of three non-redundant fixed-income securities. Let π(t) denote the vector of risky proportions, and W(t) and C(t) the investor's wealth and consumption, respectively. Furthermore, we assume that our investor has a constant relative risk aversion2, γ, and a finite investment horizon, TI.
The investor maximizes his/her expected utility over consumption and terminal wealth subject to a budget constraint. Formally,

J ≡ sup_{π(s), 0≤s≤TI} E[ ∫0^TI Cs^(1−γ)/(1−γ) ds + W_TI^(1−γ)/(1−γ) ],   (5)

dW(t)/W(t) + (C(t)/W(t))dt = [r(t) + (ϖ(t)′π(t))′θ]dt + (ϖ(t)′π(t))′dz(t),   W(0) ≡ W,   (6)

and C(0) ≡ C is the investor's optimal initial consumption.
Using the dynamic programming approach3, Merton (1973) has shown that:

π(0) = JW/(−WJWW) Σ⁻¹(0)e(0) + ∑_{i=1}^{3} JWi/(−WJWW) Σ⁻¹(0)ωi(0),   (7)

where JW and JWW denote the first and second partial derivatives of the value function J with respect to wealth, respectively. JWi is the second cross partial derivative of J with respect to wealth and state variable Xi. Σ(t) ≡ ϖ(t)ϖ(t)′ is the variance-covariance matrix, with ϖ(t) denoting the matrix of volatilities of the assets. e(t) represents the vector of expected returns in excess of the risk-free rate of these assets. ωi(t) stands for the vector of covariances between state variable Xi and the three risky assets.
The second component4 of the right hand side of Equation (7) is the
so-called Merton-Breeden hedging demand for risky assets whose
number is equal to that of state variables.
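Equation (7)'s split into a speculative and a hedging part can be sketched as follows; every numerical input below (Σ, e, the ωi, and the two preference ratios) is a hypothetical placeholder, not a value derived in the paper:

```python
import numpy as np

# Hypothetical inputs for Equation (7); none of these values come from the paper.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])          # variance-covariance matrix of 3 assets
e = np.array([0.02, 0.03, 0.05])                # expected excess returns e(0)
omega = [np.array([0.001, 0.000, 0.002]),       # covariances of assets with X1, X2, X3
         np.array([0.000, 0.003, 0.001]),
         np.array([0.002, 0.001, 0.004])]
a = 0.5                                         # J_W / (-W J_WW), risk tolerance
b = [0.1, -0.2, 0.3]                            # J_Wi / (-W J_WW), one per state variable

Sigma_inv = np.linalg.inv(Sigma)
speculative = a * Sigma_inv @ e                              # mean-variance demand
hedging = sum(b_i * Sigma_inv @ w_i                          # Merton-Breeden demand
              for b_i, w_i in zip(b, omega))
pi0 = speculative + hedging                                  # total risky proportions
```

The point of the decomposition is that only the ratios JWi/(−WJWW) are preference-dependent; the rest of the hedging term is a property of the asset menu.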
As stated above, we focus on the part of the hedging demand that does not depend on the type of asset selected in the portfolio: JWi(t)/(−W(t)JWW(t)), i = 1, 2, 3 (footnote 5). Merton (1973) has shown that JWi(t)/(−JWW(t)) can be couched in terms of two sensitivities:
6 If ∂iC < 0 (> 0) then the investor will demand more of the risky asset whose returns are positively (negatively) correlated with changes in the state variable i. Therefore, an unfavorable shift in the opportunity set is offset by a higher level of wealth [see Merton (1973) for a more detailed discussion of this point].
7 ||·|| stands for the usual Euclidean norm.
8 The full demonstration is available from the authors upon request.
9 The proof is available from the authors upon request.
10 We show in the appendix that Ti(γ, TI, X) < TI, which implies that Hi(γ, Ti(γ, TI, X)) < Hi(γ, TI).
JWi/(−JWW) = −∂iC/∂WC,   (8)

where ∂iC denotes the partial derivative of consumption with respect to state variable Xi and ∂WC is the partial derivative of consumption with respect to wealth. Since −∂iC/∂WC can be positive or negative, we provide below a comprehensive investigation of this term6.
In order to study this hedging component, we use the martingale approach [Karatzas et al. (1987), Cox and Huang (1989, 1991)], where the budget constraint (6) is cast into a martingale:

E[ ∫0^TI (Cs/Gs)ds + W_TI/G_TI ] = W.   (9)

The numeraire G(t) is the growth portfolio, whose dynamics obey the following equation7:

dG(t)/G(t) ≡ [r(t) + ||θ||²]dt + θ′dz(t),   G(0) = 1.   (10)
Using the results of Munk and Sørensen (2007), the value function can be computed as follows:

J(γ, TI, X, W) = [W^(1−γ)/(1−γ)]·[W/C(γ, TI, X)]^γ.   (11)

The optimal ratio of wealth to consumption is given only in terms of the growth portfolio:

W/C(γ, TI, X) = Q(γ, TI, X) + q(γ, TI, X),   (12)

with Q(γ, TI, X) ≡ ∫0^TI q(γ, u, X)du and q(γ, u, X) ≡ E[G(u)^(1/γ − 1)].
Equation (12) stresses the importance of the function q(γ, u, X), which describes how the state variables affect the value function and hence the hedging component under scrutiny. The closed-form expression of q(γ, u, X) can be obtained by a change of probability measure in which a particular numeraire8, whose volatility vector is (1/γ)θ + (1 − 1/γ)σB(TI), is introduced. We obtain the following proposition:

proposition 1 — The function q(γ, TI, X) is given by:

q(γ, TI, X) = B(TI, X)^(1−1/γ) · exp( −((γ−1)/(2γ²)) ∫0^TI ||σB(v) − θ||² dv ),   (13)

where the explicit expression of ∫0^TI ||σB(v) − θ||² dv is given in the appendix.
proof — available from the authors upon request.
The first term of Equation (13) is a positive decreasing function of the investment horizon, and the second term is obviously also a positive decreasing function of the investment horizon. Hence, q(γ, TI, X) and Q(γ, TI, X) are decreasing and increasing functions of the investment horizon, respectively. As a consequence, the behavior of the wealth-to-consumption ratio [Equation (12)] as a function of the investment horizon has to be examined numerically. Moreover, Equation (12) shows that the wealth-to-consumption ratio does not depend on either wealth or consumption. Thus, using Equation (11), our hedging term can be couched solely in terms of consumption:
(JWi/(−WJWW))(γ, TI, X) = −(∂iC/C)(γ, TI, X).   (14)
The hedging demand component is the opposite of the elasticity
of consumption with respect to state variables. To the best of our
knowledge, this feature has not been much emphasized in the lit-
erature.
To further study this elasticity, we need to define another variable, which measures, in proportion, how much wealth an investor sets aside to satisfy future consumption rather than terminal wealth. Building on Karatzas et al. (1987)9, we show that this proportion, πC(γ, TI, X), is given by:

πC(γ, TI, X) = [1 + q(γ, TI, X)/Q(γ, TI, X)]⁻¹.   (15)
πC(γ, TI, X) is an increasing function of the investment horizon and lies between 0 and 1. When πC(γ, TI, X) ≡ 1, the investor focuses only on consumption, and when πC(γ, TI, X) ≡ 0, he/she is solely concerned with terminal wealth. We are now able to state the main result of our article:
proposition 2 — The elasticity of consumption with respect to the state variables is given by:

(∂iC/C)(γ, TI, X) = πC(γ, TI, X)·Hi(γ, Ti(γ, TI, X)) + [1 − πC(γ, TI, X)]·Hi(γ, TI),   (16)

with Hi(γ, T(·)) ≡ (1 − 1/γ)·Dκi(T(·)), and Ti(γ, TI, X) is given in the appendix.
proof — Available from the authors upon request.
Equation (16) states that the elasticity of consumption can be decomposed as a weighted average of bond sensitivities. The weighting parameter is the proportion of wealth set aside for future consumption. Furthermore, Hi(γ, TI) and Hi(γ, Ti(γ, TI, X)) are the risk-aversion-adjusted sensitivities of a zero-coupon bond price with respect to the state variable. They are related to a bond having TI as terminal maturity and to a bond maturing at an intermediate horizon Ti(γ, TI, X),
respectively10. Moreover, in the case of the first bond the investor is
solely concerned about terminal wealth, whereas, in the case of the
second bond he/she is concerned about intermediate consumption.
Finally, we point out that the elasticity of consumption is always
positive.
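Equation (16) is straightforward to evaluate. The sketch below reproduces two of the H values quoted in the numerical section from the Table 1 estimates; the πC and Ti inputs are illustrative placeholders rather than outputs of Equations (15) and (C1):

```python
import numpy as np

kappa = np.array([0.6553, 0.0705, 0.0525])   # Table 1 mean-reversion speeds

def D(x, y):
    """Sensitivity function D_x(y) = (1 - exp(-x y)) / x."""
    return (1.0 - np.exp(-x * y)) / x

def H(gamma, kappa_i, T):
    """Risk-aversion-adjusted bond sensitivity H_i = (1 - 1/gamma) D_ki(T)."""
    return (1.0 - 1.0 / gamma) * D(kappa_i, T)

def elasticity(gamma, pi_c, T_i, T_I, kappa_i):
    """Equation (16): weighted average of the two H terms, weight pi_c."""
    return pi_c * H(gamma, kappa_i, T_i) + (1.0 - pi_c) * H(gamma, kappa_i, T_I)

# Illustrative weights only; pi_c and T_i would come from Equations (15) and (C1).
gamma, T_I = 6.0, 10.0
pi_c, T_i = 0.99, 0.79
el = elasticity(gamma, pi_c, T_i, T_I, kappa[0])
```

By construction the elasticity always lands between the two risk-adjusted sensitivities, since πC lies in [0, 1].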
numerical illustration
In this section, we provide various numerical analyses of πC(γ, TI, X), Hi(γ, TI), Hi(γ, Ti(γ, TI, X)), Ti(γ, TI, X), and ∂iC/C(γ, TI, X). The base-case parameters (Table 1) are taken from the empirical study in Babbs and Nowman (1999). The initial values of the state variables are set equal to their long-term means, i.e., zero. We consider three levels of risk aversion, γ = 3, 6, 9, and three investment horizons, TI = 3, 10, 30.
Table 2 provides results for πC(γ, TI, X). It is an increasing function of both the risk aversion and the investment horizon, and it reaches 100% for a very long investment horizon. In this case, the investor solely considers the hedging component linked to consumption.
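The chain from Equation (13) through Equation (15) behind Table 2 can be sketched numerically. Because B(u, X) and the appendix integral require the full calibration, the bond price and integrand below are simplified stand-ins (a flat 5% yield and a constant integrand of 0.02), so the levels are illustrative only; the monotonicity in the horizon is the point:

```python
import numpy as np

gamma, T_I = 6.0, 10.0

def B(u):
    """Stand-in zero-coupon price; a flat 5% yield replaces Equation (3)."""
    return np.exp(-0.05 * u)

def I(u):
    """Stand-in for the integral of ||sigma_B(v) - theta||^2 in Equation (13)."""
    return 0.02 * u    # hypothetical constant integrand of 0.02

def q(u):
    """Equation (13): q = B(u)^(1 - 1/gamma) exp(-((gamma-1)/(2 gamma^2)) I(u))."""
    return B(u) ** (1.0 - 1.0 / gamma) * np.exp(
        -(gamma - 1.0) / (2.0 * gamma ** 2) * I(u))

def Q(T):
    """Q = integral of q(u) du over [0, T], midpoint rule."""
    n = 4000
    u = (np.arange(n) + 0.5) * (T / n)
    return np.sum(q(u)) * (T / n)

def pi_c(T):
    """Equation (15): proportion of wealth set aside for consumption."""
    return 1.0 / (1.0 + q(T) / Q(T))
```

Since q is positive and decreasing while Q accumulates, q(T)/Q(T) falls with the horizon and πC rises toward 1, matching the pattern across the columns of Table 2.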
Table 3 reports the bond sensitivities in the case of terminal maturity only (Panel A) and in the case of intermediate maturity (Panel B). Both components are increasing in risk aversion and investment horizon. However, they differ in magnitude: the sensitivity linked to terminal wealth is larger than that linked to consumption. Moreover, the magnitude across the three factors varies significantly, especially for the component linked to terminal wealth. For example, for γ = 6 and TI = 10, we obtain, in the case of terminal wealth, H1(γ,TI) = 1.27, H2(γ,TI) = 5.98, and H3(γ,TI) = 6.48, and, in the case of intermediate consumption, H1(T1(γ,TI,X)) = 0.89, H2(T2(γ,TI,X)) = 2.08, and H3(T3(γ,TI,X)) = 2.16.
The size and pattern of the consumption-linked sensitivity are reflected in the behavior of the intermediate horizon (Table 4). This consumption horizon is decreasing in risk aversion and increasing in the investor's terminal horizon.
We finally turn to the ultimate variable of the paper, that is, the
elasticity of consumption with respect to the state variables
(Table 5). Despite the conflicting behavior of the various compo-
nents of demand, the pattern of this component is monotonic. For
all factors, this term is increasing in risk aversion and investment
horizon. Moreover, the elasticity of consumption is substantially
higher for the third factor than for the first one.
κ1 = 65.53%   κ2 = 7.05%   κ3 = 5.25%   θ1 = 15.82%   θ2 = 9.61%   θ3 = 1.73%   μ = 7.01%
σ11 = 2.14%   σ21 = -1.78%   σ22 = 0.65%   σ31 = 1.43%   σ32 = -0.46%   σ33 = 0.64%
Table 1 – Base case parameters
πC(γ,TI,X)
TI=3 TI=10 TI=30
γ=3 79.21% 98.39% 100%
γ=6 80.15% 99.02% 100%
γ=9 80.46% 99.17% 100%
Table 2 – πC(γ, TI, X) as a function of γ and TI
panel A — linked to terminal wealth
H1(γ,TI) H2(γ,TI) H3(γ,TI)
TI=3 TI=10 TI=30 TI=3 TI=10 TI=30 TI=3 TI=10 TI=30
γ=3 0.875 1.02 1.02 1.8 4.78 8.32 1.85 5.19 10.1
γ=6 1.09 1.27 1.27 2.25 5.98 10.4 2.31 6.48 12.6
γ=9 1.17 1.35 1.36 2.4 6.38 11.1 2.47 6.92 13.4
panel B — linked to intermediate consumption
H1(T1(γ,TI,X)) H2(T2(γ,TI,X)) H3(T3(γ,TI,X))
TI=3 TI=10 TI=30 TI=3 TI=10 TI=30 TI=3 TI=10 TI=30
γ=3 0.546 0.738 0.748 0.875 1.81 1.94 0.89 1.88 2.04
γ=6 0.675 0.89 0.897 1.08 2.08 2.16 1.1 2.16 2.25
γ=9 0.718 0.938 0.945 1.14 2.16 2.23 1.16 2.24 2.32
Table 3 – Bond sensitivities
T1(γ,TI,X) T2(γ,TI,X) T3(γ,TI,X)
TI=3 TI=10 TI=30 TI=3 TI=10 TI=30 TI=3 TI=10 TI=30
γ=3 0.504 0.847 0.872 0.00684 0.015 0.0162 0.00381 0.00843 0.00918
γ=6 0.496 0.788 0.801 0.00673 0.0136 0.0142 0.00375 0.00767 0.00804
γ=9 0.493 0.771 0.781 0.0067 0.0132 0.0137 0.00373 0.00744 0.00774
Table 4 – Intermediate horizon
∂1C/C(γ,TI,X) ∂2C/C(γ,TI,X) ∂3C/C(γ,TI,X)
TI=3 TI=10 TI=30 TI=3 TI=10 TI=30 TI=3 TI=10 TI=30
γ=3 61.46% 74.24% 74.84% 106.77% 185.66% 194.45% 108.94% 193.72% 203.78%
γ=6 75.83% 89.36% 89.7% 131.04% 211.49% 216.26% 133.68% 219.93% 225.38%
γ=9 80.54% 94.18% 94.46% 138.93% 219.17% 223.02% 141.72% 227.68% 232.08%
Table 5 – Elasticity of consumption as a function of γ and TI
concluding remarks
This article provides an in-depth analysis of the state-variable Merton-Breeden hedging demand when the opportunity set is affected by interest rates only. Relying on the three-factor model of Babbs and Nowman (1999), we show that the Merton-Breeden hedging demand boils down to a portfolio of risk-adjusted bond sensitivities with respect to the state variables. The portfolio weighting parameter is the proportion of wealth that will be used for future consumption. Our analysis could be extended to take into account stochastic behavior of the market price of risk.
Appendix

A. Zero-coupon parameters
Following Babbs and Nowman (1999), the bond parameters are as follows:

r∞ = μ + ∑_{j=1}^{3} ∑_{i=1}^{3} θj σij/κi − (1/2) ∑_{j=1}^{3} ( ∑_{i=1}^{3} σij/κi )²,   (A1)

w(u) = ∑_{i=1}^{3} Dκi(u) [ ∑_{j=1}^{3} θj σij/κi − ∑_{j=1}^{3} ∑_{k=1}^{3} σkj σij/(κk κi) ] + (1/2) ∑_{i,j=1}^{3} Dκi+κj(u) ∑_{k=1}^{3} σik σjk/(κi κj).   (A2)
B. Closed-form expression of ∫0^TI ||σB(v) − θ||² dv
Direct computation leads to:

∫0^TI ||σB(v) − θ||² dv = ||θ||² TI − 2 ∑_{i,j=1}^{3} σij θj (1/κi)(TI − Dκi(TI)) + ∑_{i,j=1}^{3} mij (1/(κi κj))(TI − Dκi(TI) − Dκj(TI) + Dκi+κj(TI)),   (A3)

with mij ≡ (σσ′)ij.
C.
It can be shown (the proof is available from the authors) that T_i(γ,TI,X) is given by:

T_i(\gamma, TI, X) \equiv \frac{1}{\kappa_i}\log\left(1 - \kappa_i \int_0^{TI}\psi(\gamma,u,X)\,D_{\kappa_i}(u)\,du\right)^{-1}   (C1)

with \psi(\gamma,u,X) \equiv q(\gamma,u,X)/Q(\gamma,TI,X). Since \frac{1}{\kappa_i}\log\left(1-\kappa_i x\right)^{-1} is the inverse function of D_{\kappa_i}(x), Equation (C1) clearly exhibits a weighted-average structure, which implies that T_i(g,TI,X) < TI.
References
• Babbs, S., and K. Nowman, 1999, "Kalman filtering of generalized Vasicek term structure models," Journal of Financial and Quantitative Analysis, 34:1, 115-130
• Breeden, D., 1979, "An intertemporal asset pricing model with stochastic consumption and investment opportunities," Journal of Financial Economics, 7, 263-296
• Cox, J., and C. Huang, 1989, "Optimum consumption and portfolio policies when asset prices follow a diffusion process," Journal of Economic Theory, 49, 33-83
• Cox, J., and C. Huang, 1991, "A variational problem arising in financial economics," Journal of Mathematical Economics, 21, 465-488
• Dai, Q., and K. Singleton, 2000, "Specification analysis of affine term structure models," The Journal of Finance, 55, 1943-1978
• Karatzas, I., J. Lehoczky, and S. Shreve, 1987, "Optimal portfolio and consumption decisions for a small investor on a finite horizon," SIAM Journal on Control and Optimization, 25, 1557-1586
• Litterman, R., and J. Scheinkman, 1991, "Common factors affecting bond returns," Journal of Fixed Income, 1, 62-74
• Merton, R., 1973, "An intertemporal capital asset pricing model," Econometrica, 41, 867-887
• Munk, C., and C. Sørensen, 2007, "Optimal real consumption and investment strategies in dynamic stochastic economies," in Jensen, B. S., and T. Palokangas, (eds), Stochastic economic dynamics, CBS Press, 271-316
Emmanuel Fragnière, Professor, Haute École de Gestion de Genève, and Lecturer, University of Bath
Jacek Gondzio, Professor, School of Mathematics, University of Edinburgh
Nils S. Tuchschmid, Professor, Haute École de Gestion de Genève
Qun Zhang, School of Mathematics, University of Edinburgh

Non-parametric liquidity-adjusted VaR model: a stochastic programming approach
Abstract
This paper proposes a Stochastic Programming (SP) approach for the
calculation of the liquidity-adjusted Value-at-Risk (LVaR). The model
presented in this paper offers an alternative to Almgren and Chriss’s
mean-variance approach (1999 and 2000). In this research, a two-
stage stochastic programming model is developed with the intention
of deriving the optimal trading strategies that respond dynamically
to a given market situation. The sample paths approach is adopted
for scenario generation. The scenarios are thus represented by a
collection of simulated sample paths rather than the tree structure
usually employed in stochastic programming. Consequently, the SP
LVaR presented in this paper can be considered as a non-parametric
approach, which is in contrast to Almgren and Chriss’s parametric
solution. Initially, a set of numerical experiments indicates that the
LVaR figures are quite similar for both approaches when all the
underlying financial assumptions are identical. Following this sanity
check, a second set of numerical experiments shows how randomness of different types (e.g., in the bid-ask spread) can be easily incorporated into the problem thanks to the stochastic programming
formulation and how optimal and adaptive trading strategies can
be derived through a two-stage structure (i.e., a recourse problem).
Hence, the results presented in this paper allow for the introduction
of new dimensionalities into the computation of LVaR by incorporat-
ing different market conditions.
110 – The journal of financial transformation
1 Gondzio and Grothey (2006) showed that they could solve a quadratic financial planning problem exceeding 10^9 decision variables by applying a structure-exploiting parallel primal-dual interior-point solver.
Developed over the last couple of decades, Value-at-Risk (VaR)
models have been widely used as the main market risk manage-
ment tool in the financial world [Jorion (2006)]. VaR estimates
the likelihood of a portfolio loss caused by normal market move-
ments over a given period of time. However, VaR fails to take into
consideration the market liquidity impact. Its estimate is quite
often based on mid-prices and the assumption that transactions do
not affect market prices. Nevertheless, large trading blocks might
impact prices, and trading activity is always costly. To overcome
these problems, some researchers have proposed the calculation
of liquidity adjusted VaR (LVaR) [Dowd (1998)]. Differing from the
conventional VaR, LVaR takes both the size of the initial holding
position and liquidity impact into account. The liquidity impact is
commonly subcategorized into exogenous and endogenous illiquid-
ity factors. The former is normally measured by the bid-ask spread,
and the latter is expressed as the price movement caused by mar-
ket transactions [Bangia et al. (1999)]. From this perspective, LVaR
can be seen as a complementary tool for risk managers who need
to estimate market risk exposure and are unwilling to disregard the
liquidity impact.
Bangia et al. (1999) proposed a simple but practical solution that
is directly derived from the conventional VaR model in which an
illiquidity factor is expressed as the bid-ask spread. Although this
approach avoids many complicated calculations, it fails to take
into consideration endogenous illiquidity factors. Hence, liquidity
risk and LVaR are underestimated. A more promising solution for
LVaR estimation stems from the derivation of optimal trading
strategies as suggested by Almgren and Chriss (1999 and 2000).
In their model, Almgren and Chriss adopted the permanent and
temporary market impact mechanisms from Holthausen et al.’s
work (1987) and assumed linear functions for both of them. By
externally setting a sales completion period, they derived an opti-
mal trading strategy defined as the strategy with the minimum
variance of transaction cost, or of shortfall, for a given level of
expected transaction cost. Or inversely, a strategy that has the
lowest level of expected transaction cost for a given level of vari-
ance. With the normal distribution and the mean and variance of
transaction cost, LVaR can also be determined and minimized
to derive optimal trading strategies. In this setting, LVaR can be
understood as the pth percentile possible loss that a trading posi-
tion can encounter when liquidity effects are incorporated into
the risk measure computation. Later on, Almgren (2003) extended
this model by using a continuous-time approximation, and also
introduced a non-linear and stochastic temporary market impact
function. Another alternative is the liquidity discount approach
presented by Jarrow and Subramanian (1997 and 2001). Similar
to Almgren and Chriss’s approach (1999 and 2000), the liquidity
discount approach requires that the sales completion period be
given as an exogenous factor. The optimal trading strategy is then
derived by maximizing an investor’s expected utility of consump-
tion. Note that both approaches require externally setting a fixed
horizon for liquidation. Aiming to overcome this problem, Hisata
and Yamai (2000) extended Almgren and Chriss’s approach by
assuming a constant speed of sales and by using continuous
approximation. They could derive a closed-form analytical solution
for the optimal holding period. In this setting, the sales comple-
tion time thus becomes an endogenous variable. Yet, Hisata and
Yamai’s model relies on the strong assumption of a constant
speed of sales.
Krokhmal and Uryasev (2006) argued that the solution offered
by Almgren and Chriss and that of Jarrow and Subramanian were
unable to dynamically respond to changes in market conditions.
Therefore, they suggested a stochastic dynamic programming
method and derived an optimal trading strategy by maximizing the
expected stream of cash flows. Under their framework, the optimal
trading strategy becomes highly dynamic as it can respond to mar-
ket conditions at each time step. Another methodology that incor-
porates these dynamics into an optimal trading strategy is that of
Bertsimas and Lo (1998). They applied a dynamic programming
approach to the optimal liquidation problems. Analytical expres-
sions of the dynamic optimal execution strategies are derived by
minimizing the expected trading cost over a fixed time horizon.
In this paper, we present a new framework for the calculation
of non-parametric LVaR by using stochastic programming (SP)
techniques. Over the past few years, stochastic programming has
grown into a mature methodology used to approach decision mak-
ing problems in uncertain contexts. The main advantage of SP is
its ability to better tackle optimization problems under conditions
of uncertainty over time. Due to the fast development of com-
puting power, it has been used to solve large scale optimization
problems.1 Therefore, we believe it is a promising methodology for
LVaR modeling.
The SP approach presented in this paper is extended from
Almgren and Chriss’s framework (1999 and 2000). The sample
path approach is adopted for scenario generation, rather than the
scenario tree structure usually employed in SP. The scenario set
is represented by a collection of simulated sample paths. Differing
from Almgren and Chriss’s parametric formulation of LVaR, we
present a non-parametric formulation for LVaR. Both exogenous
and endogenous illiquidity factors are taken into account. The for-
mer is measured by the bid-ask spread, and the latter is expressed
by linear market impact functions, which are related to the quantity
of sales. The model in this paper is built in a discrete-time manner,
and the holding period is required to be determined externally. The
permanent and temporary market impact mechanism proposed by
Holthausen et al. (1987) is adopted to formulate the market impact,
and both permanent and temporary market impacts are assumed
as linear functions.
2 For the estimation of the temporary and permanent market impact coefficients, Almgren and Chriss did not propose a specific method. They assumed that, for the temporary market impact, trading each 1% of the daily volume incurs a price depression of one bid-ask spread, and that, for the permanent market impact, trading 10% of the daily volume has a significant impact on the price and incurs a price depression of one bid-ask spread. Since this paper focuses on LVaR modeling, not the estimation of market impact coefficients, Almgren and Chriss's simple assumption is adopted for all the numerical experiments in this paper.
Stochastic programming LVaR model
This paper proposes an SP approach to estimate the non-parametric
LVaR, which is based on Almgren and Chriss’s mean-variance
approach (1999 and 2000). While their model has been shown to
be an interesting methodology for the calculation of LVaR and has
a huge potential in practice, the optimal trading strategies derived
by their model fail to respond dynamically to the market situation,
as they rely on a ‘closed-form’ or ‘static’ framework. For instance,
if an increasing trend is observed in the market price, investors
may decide to slow their liquidation process. If, on the other hand,
unexpected market shocks occur, investors may decide to adjust
their trading strategy and to speed up the completion of their sale.
These market situations can be simulated and incorporated into
scenarios. Clearly, any ‘closed form’ solution cannot deal with this
type of uncertainty in such a dynamic manner. The LVaR formula-
tion in Almgren and Chriss’s model is based on the mean-variance
framework; thus, it can be considered as a parametric approach.
In contrast, the LVaR formulation presented in this paper is non-
parametric and allows for the incorporation of various dynamics
in the liquidation process. Thus, we propose a new framework for
LVaR modeling.
Almgren and Chriss's mean-variance model
According to Almgren and Chriss’s framework, a holding period T
is required to be set externally. Then, this holding period is divided
into N intervals of equal length (τ = T/N). The trading strategy is
defined as the quantity of shares sold in each time interval, which
is denoted by a list of n1,…, nk,…, nN, where nk is the number of
shares that the trader plans to sell in the kth interval. Accordingly,
the quantity of shares that the trader plans to hold at time t_k = kτ
is denoted by xk. Suppose a trader has a position X that needs to be
liquidated before time T, then we have:
n_k = x_{k-1} - x_k \;(k = 1, \ldots, N), \qquad X = \sum_{k=1}^{N} n_k, \qquad x_k = X - \sum_{j=1}^{k} n_j = \sum_{j=k+1}^{N} n_j, \quad k = 0, \ldots, N.
Price dynamics in Almgren and Chriss's model are formulated as an arithmetic random walk as follows:

S_k = S_{k-1} + \sigma\sqrt{\tau}\,\xi_k + \mu\tau - \tau\, g(n_k/\tau)   (1)

where S_k is the equilibrium price after a sale, μ and σ are the drift and volatility of the asset price, respectively, and ξ_k is a random number that follows a standard normal distribution N(0, 1). The last term, g(n_k/τ), describes the permanent market impact from a sale. The actual sale price is calculated by subtracting the temporary impact, h(n_k/τ), from the equilibrium price:

\tilde S_k = S_{k-1} - h(n_k/\tau)   (2)

According to Almgren and Chriss's framework (1999 and 2000), both g(n_k/τ) and h(n_k/τ) are assumed to be linear functions:

g(n_k/\tau) = \gamma\, n_k/\tau   (3)   and   h(n_k/\tau) = \varepsilon/2 + \eta\, n_k/\tau   (4)

where γ and η are the permanent and temporary market impact coefficients2, respectively, and ε denotes the bid-ask spread. They are all assumed to be constant.
Based on the previously presented equations, the formula for the actual sale price is derived as:

\tilde S_k = \underbrace{S_0 + \sigma\sqrt{\tau}\sum_{j=1}^{k-1}\xi_j + \mu(k-1)\tau}_{\text{I}} \; - \; \underbrace{\gamma\sum_{j=1}^{k-1} n_j}_{\text{II}} \; - \; \underbrace{\left(\frac{\varepsilon}{2} + \eta\,\frac{n_k}{\tau}\right)}_{\text{III}}   (5)

As this formula shows, the actual sale price can be decomposed into three parts. Part I is the price random walk, which describes the price dynamics without any market impact. Parts II and III are the price declines caused by the permanent and temporary market impact, respectively.
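To make the decomposition concrete, the sketch below simulates one realization of Equations (1)-(4) for a naive equal-sized sales schedule. All parameter values (volatility, impact coefficients, spread) are illustrative assumptions, not the paper's calibration:

```python
import math
import random

def simulate_liquidation(n, S0, mu, sigma, gamma, eta, eps, tau, rng):
    """One realization of the Almgren-Chriss dynamics, Equations (1)-(4)."""
    S = S0                                   # equilibrium price S_{k-1}
    proceeds = 0.0
    for n_k in n:
        # Equation (2) with (4): the sale executes at S_{k-1} minus the
        # temporary impact h(n_k/tau) = eps/2 + eta*n_k/tau
        proceeds += n_k * (S - (eps / 2.0 + eta * n_k / tau))
        # Equation (1): volatility, drift, and the permanent impact
        # tau * g(n_k/tau) = gamma * n_k
        S += sigma * math.sqrt(tau) * rng.gauss(0.0, 1.0) + mu * tau - gamma * n_k
    return proceeds

# Illustrative (assumed) parameters, not the paper's calibration
X, N, T = 100_000.0, 10, 5.0
tau = T / N
n = [X / N] * N                              # naive strategy: equal-sized sales
proceeds = simulate_liquidation(n, 37.72, 0.0, 0.6, 2.5e-7, 2.5e-6, 0.05, tau,
                                random.Random(0))
print("liquidation cost:", 37.72 * X - proceeds)
```

With σ = 0 the liquidation cost collapses to the deterministic impact terms of Equation (7), which is a convenient sanity check on the simulation.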
Then the total proceeds can be calculated by summing the sale values over the entire holding period:

\text{total proceeds} = \sum_{k=1}^{N} n_k \tilde S_k = X S_0 + \sum_{k=1}^{N}\left(\sigma\sqrt{\tau}\,\xi_k + \mu\tau\right) x_k - \frac{\gamma}{2}\left(X^2 - \sum_{k=1}^{N} n_k^2\right) - \frac{\varepsilon}{2}\,X - \frac{\eta}{\tau}\sum_{k=1}^{N} n_k^2   (6)
Consequently, the 'liquidation cost' (LC)3 can be derived by subtracting the total actual sale proceeds from the trader's initial holding value, that is:

LC = X S_0 - \sum_{k=1}^{N} n_k \tilde S_k = -\sum_{k=1}^{N}\left(\sigma\sqrt{\tau}\,\xi_k + \mu\tau\right) x_k + \frac{\gamma}{2}\left(X^2 - \sum_{k=1}^{N} n_k^2\right) + \frac{\varepsilon}{2}\,X + \frac{\eta}{\tau}\sum_{k=1}^{N} n_k^2   (7)
Almgren and Chriss derive the formulae for the mean and variance of the liquidation cost as:

E[LC] = -\mu\tau\sum_{k=1}^{N} x_k + \frac{\gamma}{2}\left(X^2 - \sum_{k=1}^{N} n_k^2\right) + \frac{\varepsilon}{2}\,X + \frac{\eta}{\tau}\sum_{k=1}^{N} n_k^2   (8)

and

V[LC] = \sigma^2 \tau \sum_{k=1}^{N} x_k^2   (9)
Finally, they formulate the LVaR by using the parametric approach with the mean and variance of the LC as:

LVaR = E[LC] + \alpha_{cl}\sqrt{V[LC]}   (10)

where cl denotes the confidence level for the LVaR estimation and α_cl is the corresponding percentile of the standard normal distribution. As expressed, LVaR measures the possible loss on a given position while taking into consideration both market risk and liquidity effects.
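As a sketch, Equations (8)-(10) can be evaluated directly for any candidate strategy. The parameter values below are assumed for illustration only; α_cl = 1.645 corresponds to a 95% confidence level:

```python
import math

def parametric_lvar(n, X, mu, sigma, gamma, eta, eps, tau, alpha_cl):
    """Parametric LVaR of Equation (10) from the moments in Equations (8)-(9)."""
    # Remaining holdings x_k after each sale (x_N = 0)
    x = []
    rem = X
    for n_k in n:
        rem -= n_k
        x.append(rem)
    sum_n2 = sum(v * v for v in n)
    e_lc = (-mu * tau * sum(x)                       # drift term
            + 0.5 * gamma * (X * X - sum_n2)         # permanent impact
            + 0.5 * eps * X                          # spread cost
            + (eta / tau) * sum_n2)                  # temporary impact
    v_lc = sigma ** 2 * tau * sum(v * v for v in x)  # Equation (9)
    return e_lc + alpha_cl * math.sqrt(v_lc)         # Equation (10)

# Illustrative (assumed) parameters
X = 100_000
n = [X / 10] * 10
print(parametric_lvar(n, X, 0.0, 0.6, 2.5e-7, 2.5e-6, 0.05, 0.5, 1.645))
```

Since V[LC] depends on the remaining holdings x_k, front-loading the sales lowers the variance term while raising the temporary impact term, which is exactly the mean-variance trade-off the optimal strategy balances.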
3 In Almgren and Chriss's paper, this 'cost' is referred to as the transaction cost. However, the transaction cost is commonly understood as the fees involved in participating in the market, such as commissions paid to brokers. Therefore, in order to avoid any confusion, it is named 'liquidation cost' in this paper.
4 An extension is presented below.
The optimal trading strategy could be derived by minimizing the
LVaR. The mathematical programming formulation of this optimiza-
tion problem is thus written as:
\min_{n_k} \; E[LC] + \alpha_{cl}\sqrt{V[LC]}
\text{s.t.}\; x_k = \sum_{j=k+1}^{N} n_j, \quad k = 0, \ldots, N-1; \qquad X = \sum_{k=1}^{N} n_k; \qquad n_1, \ldots, n_N \ge 0.
Based on this brief introduction to Almgren and Chriss's mean-variance approach, we can now proceed to present the SP approach to LVaR modeling.
Stochastic programming transformation
In stochastic programming, uncertainty is modeled with scenarios
that are generated by using available information to approximate
future conditions. Before conducting the SP transformation, we
need to briefly introduce the scenario generation technique used
in this paper.
The liquidation process of investors’ positions is a multi-period
problem. The most commonly used technique is to model the evo-
lution of stochastic parameters with multinomial scenario trees, as
shown in Figure 1(a).
However, the use of scenario tree structures often leads to con-
siderable computational difficulty, especially when dealing with
large scale practical problems. In the scenario tree structure, the
uncertainties are represented by the branches that are gener-
ated from each node. Increasing the number of branches per node
can improve the quality of the approximation of the uncertainty.
However, it causes an exponential growth in the number of nodes.
Indeed, in order to approximate the future value of the uncertain
parameters with a sufficient degree of accuracy, the resulting
scenario tree could be of a huge size. This is commonly known as the "curse of dimensionality" [Bellman (1957)] and is a significant obstacle for dynamic or stochastic optimization problems. An
alternative method to overcome this problem is to simulate a col-
lection of sample paths to reveal the future uncertainty as shown
in Figure 1(b). Each simulated path represents a scenario. These
sample paths can be generated by using Monte Carlo simulation,
historical simulation, or bootstrapping. There have been several
interesting papers regarding the application of the sample paths
method in stochastic programming [Hibiki (2000), Krokhmal and
Uryasev (2006)]. Using sample paths is advantageous because
increasing the number of paths to achieve a better approximation
causes the number of nodes to increase linearly rather than expo-
nentially. This advantage is also present when the time period is
increased. The number of nodes increases linearly with the sample
paths method and exponentially with scenario tree structure.
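The growth rates above can be checked with a back-of-the-envelope count; the branch and path counts below are arbitrary examples, not values used in the paper:

```python
# Node-count growth: a scenario tree with b branches per node over N periods
# holds b + b^2 + ... + b^N nodes, while S sample paths hold only S*N nodes.
def tree_nodes(branches: int, periods: int) -> int:
    return sum(branches ** t for t in range(1, periods + 1))

def path_nodes(paths: int, periods: int) -> int:
    return paths * periods

for periods in (5, 10, 20):
    print(periods, tree_nodes(10, periods), path_nodes(10_000, periods))
```

Already at 10 periods, a 10-branch tree holds over 11 billion nodes, while 10,000 sample paths hold only 100,000.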
Let C = \{ (C_0, C_{1,s}, C_{2,s}, \ldots, C_{k,s}, \ldots, C_{N,s}) \mid s = 1, \ldots, Sc \} be a collection of sample paths, where C_{k,s} represents the information about the relevant parameters.
In Almgren and Chriss’s model (1999 and 2000), we should recall
that the only randomness considered is market price. Hence, to
set a point of comparison between their results and the results
from the SP approach, we first assume the only randomness that
is considered in the sample paths will come from the market price
component, \hat S_{k,s}. Yet, this restrictive assumption can easily be
relaxed, and randomness can be added to other parameters, such
as the bid-ask spread or the temporary and permanent market
impact coefficients4.
Under the SP framework, the trading strategy is no longer a vector but a two-dimensional matrix:

\text{strategy} = \begin{pmatrix} n_{1,1} & \cdots & n_{k,1} & \cdots & n_{N,1} \\ \vdots & & \vdots & & \vdots \\ n_{1,s} & \cdots & n_{k,s} & \cdots & n_{N,s} \\ \vdots & & \vdots & & \vdots \\ n_{1,Sc} & \cdots & n_{k,Sc} & \cdots & n_{N,Sc} \end{pmatrix}

where the first column contains the first-stage variables and the remaining columns the second-stage variables, n_{k,s} is the quantity of shares sold in the kth interval on path s, s is the index of scenarios, and Sc is the number of scenarios. This is a two-stage SP problem: n_{1,s} (s = 1, …, Sc) are the first-stage variables, and n_{k,s} (k = 2, …, N and s = 1, …, Sc) are the second-stage variables. Due to nonanticipativity in the first stage, the first-stage variables must be locked:

n_{1,s} = n_{1,1}, \quad s = 2, \ldots, Sc.
Figure 1 – Scenario generation: (a) scenario tree; (b) sample paths (time period on the horizontal axis)

For the actual sale price formulation, recall Equation (5). Taking into account the scenarios and replacing part I with \hat S_{k,s} (i.e., the
asset price without market impacts in kth interval for each scenario),
the actual sale price is reformulated as:
\tilde S_{k,s} = \hat S_{k,s} - \gamma \sum_{j=1}^{k-1} n_{j,s} - \frac{\varepsilon}{2} - \eta\,\frac{n_{k,s}}{\tau}   (11)
As we now have the sale price formulation, the total sale proceeds
corresponding to each scenario is naturally obtained by summing
up the sale proceeds over the entire set of N intervals:
\text{total proceeds}_s = \sum_{k=1}^{N} n_{k,s}\,\tilde S_{k,s} = \sum_{k=1}^{N} n_{k,s}\,\hat S_{k,s} - \frac{\gamma}{2}\left(X^2 - \sum_{k=1}^{N} n_{k,s}^2\right) - \frac{\varepsilon}{2}\,X - \frac{\eta}{\tau}\sum_{k=1}^{N} n_{k,s}^2   (12)
Consequently, the liquidation cost under scenario s is derived by
subtracting the corresponding total sale proceeds from the trader’s
initial holding value, that is:
LC_s = X S_0 - \sum_{k=1}^{N} n_{k,s}\,\tilde S_{k,s} = X S_0 - \sum_{k=1}^{N} n_{k,s}\,\hat S_{k,s} + \frac{\gamma}{2}\left(X^2 - \sum_{k=1}^{N} n_{k,s}^2\right) + \frac{\varepsilon}{2}\,X + \frac{\eta}{\tau}\sum_{k=1}^{N} n_{k,s}^2   (13)
The deterministic equivalent formulation of this SP problem with
nonanticipativity constraints is:
\min_{n_{k,s}} \; \sum_{s=1}^{Sc} p_s\,LC_s
\text{s.t.}\; X = \sum_{k=1}^{N} n_{k,s}, \quad s = 1, \ldots, Sc
\qquad n_{1,s}, \ldots, n_{N,s} \ge 0, \quad s = 1, \ldots, Sc
\qquad n_{1,s} = n_{1,1}, \quad s = 2, \ldots, Sc

where p_s is the probability of scenario s. Since the scenarios are obtained by Monte Carlo simulation, they are equally probable, with p_s = 1/Sc.
The resulting problem is a quadratic optimization problem: the objective, the expected value of the liquidation cost, is a quadratic function, and all constraints are linear.
Non-parametric LVaR formulation
Depending on the set of assumptions, the calculation methodology, and their uses, two different types of VaR are usually distinguished: parametric VaR and non-parametric VaR. The same categorization obviously applies to LVaR. The LVaR estimated by Almgren and Chriss (1999 and 2000) is parametric, as shown in Equation (10). In this paper, we rely on a non-parametric formulation because it stems from the SP solution that we have adopted. More precisely, the calculation procedure is as follows:
1. Solve the stochastic optimization problem stated above and obtain the optimal trading strategy matrix.
2. Apply the optimal trading strategy to the corresponding scenario and calculate the liquidation cost for that specific scenario. That is to say, we substitute the optimal trading strategy matrix into the liquidation cost formula (Equation [13]) and calculate LC, which is a vector indexed by s.
3. Sort the vector LC and find the αth percentile of LC, i.e., the α%-LVaR. The most commonly used values of α are 95 and 99.
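The sort-and-pick step can be sketched as follows; the nearest-rank percentile convention and the toy cost vector are assumptions for illustration:

```python
import math

def non_parametric_lvar(lc, alpha):
    """alpha%-LVaR: the alpha-th percentile of the liquidation-cost vector LC,
    using the nearest-rank convention (other conventions interpolate)."""
    ranked = sorted(lc)
    idx = min(len(ranked) - 1, math.ceil(alpha / 100.0 * len(ranked)) - 1)
    return ranked[idx]

# Toy liquidation-cost vector, one entry per scenario s
lc = [120.0, 95.0, 230.0, 180.0, 60.0, 310.0, 150.0, 75.0, 205.0, 140.0]
print(non_parametric_lvar(lc, 95))  # -> 310.0
```

With 10,000 scenarios, the 95%-LVaR is simply the 9,500th smallest liquidation cost; no distributional assumption is needed.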
Numerical experiments I
As previously mentioned, we first conducted a sanity check. This
section details the numerical experiments for both the SP model
and the Almgren and Chriss’s mean-variance model with the restric-
tion of randomness on one component only, i.e., the ‘pure market
price.’
JP Morgan’s stock data was collected for the numerical experi-
ments. The holding period, T, was set to be 5 days, and we selected
the time interval to be 0.5 day. Thus, the total number of sales, N,
was 10. The selection of the holding period and time interval was
arbitrary.
For the price sample paths generation, the Monte Carlo simulation
was applied. The stochastic evolution of the price was assumed to
follow a geometric Brownian motion:
\hat S_k = \hat S_{k-1}\exp\left(\left(\mu - \tfrac{1}{2}\sigma^2\right)\tau + \sigma\sqrt{\tau}\,\xi_k\right)   (14)
Under Almgren and Chriss’s mean-variance framework, market
price was assumed to follow an arithmetic random walk because it
is ultimately rather difficult to derive a closed-form solution based
on an assumption of geometric Brownian motion. Yet, with Monte
Carlo simulations, formulating the price evolution under different
assumptions creates no issues related to the underlying distribu-
tions that could generate prices and returns. Since the geometric
random walk is the most commonly used assumption for price sto-
chastic processes, it is used in this paper even though differences
between these two random walks are almost negligible over a short
period of time.
10,000 sample paths were generated by using the Monte Carlo
simulation. The simulated prices form a 10-by-10,000 matrix. The
initial price is 37.72. These simulated sample paths are displayed
in Figure 2.
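The path generation can be sketched directly from Equation (14); the drift and volatility values here are assumptions for illustration, not estimates from the JP Morgan data:

```python
import math
import random

def simulate_gbm_paths(S0, mu, sigma, tau, N, num_paths, seed=0):
    """Monte Carlo sample paths under the geometric Brownian motion of Eq. (14)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(num_paths):
        S, path = S0, []
        for _ in range(N):
            # One half-day step of the discretized GBM
            S *= math.exp((mu - 0.5 * sigma ** 2) * tau
                          + sigma * math.sqrt(tau) * rng.gauss(0.0, 1.0))
            path.append(S)
        paths.append(path)
    return paths

# 10 intervals of half a day over a 5-day holding period, initial price 37.72;
# drift and (per-day) volatility are assumed values
paths = simulate_gbm_paths(37.72, 0.0, 0.02, 0.5, 10, 10_000)
print(len(paths), len(paths[0]))  # 10000 10
```

The result is the 10-by-10,000 price matrix described above, one column of randomness per half-day interval and one row per scenario.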
Five different initial holdings were chosen for the numerical experi-
ments with the aim of observing how the initial position affected
the LVaR estimation. The LVaRs were calculated with the most
commonly seen confidence levels of 95% and 99%. The results are
summarized in Table 1.
The numerical results show that the LVaR ratios computed by the
SP model are slightly lower than those computed by Almgren and
Chriss’s model at both the 95% and 99% confidence levels. Also,
as expected, the numerical results show that the LVaR estimates
increase when the initial holdings increase. This is true for both
Almgren and Chriss’s model and the SP model. As previously stated,
as the initial holding becomes larger, the market impact becomes
stronger when a trader liquidates his position. Consequently, the
LVaR will increase as well. This is clearly a characteristic that dis-
tinguishes the LVaR from the traditional VaR.
SP LVaRs are lower than Almgren and Chriss’s LVaRs because the
SP model’s optimal trading strategies can dynamically adapt to the
market situation. This fits investors’ actual trading behaviors in
the market, as they will adjust their trading plans according to the
market environment. Therefore, the SP model can provide more
precise LVaR estimates due to the characteristic of the SP model’s
adaptive trading strategies.
Generalization of the stochastic programming LVaR model
A simple stochastic programming LVaR model that was transformed
from Almgren and Chriss's mean-variance model was presented
above in order to compare it with their model (i.e., both LVaR approaches were used under the same setting). Let us now
extend the analysis and show some of the advantages provided by the
SP approach. Contrary to Almgren and Chriss’s model that assumes
that both the bid-ask spread and market impact coefficients are con-
stants, we generalize the SP LVaR model by relaxing this assumption
and treating these two components as random variables.
By incorporating randomness in the bid-ask spread and both the
permanent and temporary market impact coefficients, the formula
of the actual sale price needs to be rewritten as:
\tilde S_{k,s} = \hat S_{k,s} - \sum_{j=1}^{k-1} \gamma_{j,s}\, n_{j,s} - \frac{\varepsilon_{k,s}}{2} - \eta_{k,s}\,\frac{n_{k,s}}{\tau}   (15)
For the formulation of ε_{k,s}, we employ a standardization process.
Since bid-ask spreads tend to be proportional to asset prices, past
observations may not accurately reflect the current variations.
Bangia et al. (1999) suggested calculating a relative bid-ask spread
that is equal to the bid-ask spread divided by the mid-price. By
employing this calculation, the bid-ask spread is expressed as a pro-
portion of the asset price; thus, the current bid-ask spread variation
is sensitive to the current asset price rather than past observations.
The relative bid-ask spread, as a normalizing device, can improve
the accuracy of the bid-ask spread variation estimation. The bid-ask
spread is thus formulated as:

\varepsilon_{k,s} = \hat s_{k,s}\,\hat S_{k,s}   (16)

where \hat s_{k,s} is the relative bid-ask spread at time t_k on path s. Recall the sample path set
Figure 2 – Simulated price scenarios (10,000 simulated random-walk paths; price, roughly 32 to 44, against time, 0 to 5 days)
Initial holding (shares)         1,000,000   500,000     100,000     50,000      10,000

Almgren and Chriss's mean-variance model
Parametric LVaR 95%              1.425E+06   6.186E+05   1.010E+05   4.845E+04   9.311E+03
Parametric LVaR per share        1.425       1.237       1.010       0.969       0.931
Parametric LVaR ratio            3.78%       3.28%       2.68%       2.57%       2.47%
Parametric LVaR 99%              1.863E+06   8.216E+05   1.388E+05   6.718E+04   1.304E+04
Parametric LVaR per share        1.863       1.643       1.388       1.344       1.304
Parametric LVaR ratio            4.94%       4.36%       3.68%       3.56%       3.46%

Stochastic programming model
Non-parametric LVaR 95%          1.290E+06   5.399E+05   8.144E+04   3.832E+04   7.218E+03
Non-parametric LVaR per share    1.290       1.080       0.814       0.766       0.722
Non-parametric LVaR ratio        3.42%       2.86%       2.16%       2.03%       1.91%
Non-parametric LVaR 99%          1.811E+06   7.944E+05   1.342E+05   6.442E+04   1.249E+04
Non-parametric LVaR per share    1.811       1.589       1.342       1.288       1.249
Non-parametric LVaR ratio        4.80%       4.21%       3.56%       3.42%       3.31%

Table 1 – Numerical results summary
*LVaR per share = LVaR/Initial holding; LVaR ratio = LVaR per share/Initial price
C = \{ (C_0, C_{1,s}, C_{2,s}, \ldots, C_{k,s}, \ldots, C_{N,s}) \mid s = 1, \ldots, Sc \}.

By incorporating the randomness into the relative bid-ask spread and the market impact coefficients, C_{k,s} is extended to C_{k,s} = (\hat S_{k,s}, \hat s_{k,s}, \gamma_{k,s}, \eta_{k,s}).
In other words, in the generalized SP model, each node on the simu-
lated sample paths contains information for the asset price, the
relative bid-ask spread, and the permanent and temporary market
impact coefficients.
As we now have the new sale price formulation, the formula for
liquidation cost under scenario s (as shown in Equation [13]) is
rewritten as:
LC_s = X S_0 - \sum_{k=1}^{N} n_{k,s}\,\tilde S_{k,s} = X S_0 - \sum_{k=1}^{N} n_{k,s}\,\hat S_{k,s} + \sum_{k=1}^{N} n_{k,s}\sum_{j=1}^{k-1}\gamma_{j,s}\,n_{j,s} + \sum_{k=1}^{N}\frac{\varepsilon_{k,s}}{2}\,n_{k,s} + \frac{1}{\tau}\sum_{k=1}^{N}\eta_{k,s}\,n_{k,s}^2   (17)
The deterministic equivalent formulation and the LVaR calculation
procedure are the same as shown above. By generalizing the SP
model, more parameters are incorporated in the sample path set,
which leads to a more accurate approximation of future uncertain-
ties.
Numerical experiments II
This section reports the numerical experiments for the generalized
SP LVaR model presented above. We use the same dataset that
was used for aforementioned numerical experiments. The holding
period and time interval remain identical, i.e., 5 days and half a day,
respectively.
For the sample paths’ generation of the relative bid-ask spread, and
the permanent and temporary market impacts, we assumed that
they followed independent lognormal distributions and simulated
each of them simply as white noise:

\hat s_{k,s} = \mu_{\hat s}\exp\left(-\tfrac{1}{2}\sigma_{\hat s}^{2} + \sigma_{\hat s}\,\xi_{k,s}\right)   (18)

\gamma_{k,s} = \mu_{\gamma}\exp\left(-\tfrac{1}{2}\sigma_{\gamma}^{2} + \sigma_{\gamma}\,\xi_{k,s}\right)   (19)

\eta_{k,s} = \mu_{\eta}\exp\left(-\tfrac{1}{2}\sigma_{\eta}^{2} + \sigma_{\eta}\,\xi_{k,s}\right)   (20)

where μ and σ denote the mean and standard deviation, respectively, of each of the three random variables (i.e., the relative bid-ask spread ŝ and the market impact coefficients γ and η).
Once again 10,000 sample paths were generated by using the
Monte Carlo simulation for each parameter. The LVaRs at the 95%
and 99% confidence levels were computed for the same five initial
holding scenarios employed above. The results are summarized in
Table 2.
In Figure 3, we can see that the LVaR ratios computed by the SP
model with the incorporation of randomness into the bid-ask spread
and the market impact coefficients are slightly lower than those
computed by the SP model with the constant bid-ask spread and
market impact coefficients. When the initial holding is small, incor-
porating these new random variables does not cause a significant
change to the LVaR estimate. However, when the initial holding is
large, the differences are substantial. For instance, when the initial
holding is 1,000,000, incorporating randomness reduces the 95%
LVaR ratio from 3.42% to 2.87% and the 99% LVaR ratio from
4.80% to 4.30%.
The main reason for these differences lies in the way the optimal trading strategies derived by the SP model respond to the variation of the bid-ask spread and market impact coefficients.
For example, if we assume that the bid-ask spread is constant, the
Initial holding (shares)         1,000,000   500,000     100,000     50,000      10,000
Non-parametric LVaR 95%          1.084E+06   4.740E+05   7.800E+04   3.739E+04   7.119E+03
Non-parametric LVaR per share    1.084       0.948       0.780       0.748       0.712
Non-parametric LVaR ratio        2.87%       2.51%       2.07%       1.98%       1.89%
Non-parametric LVaR 99%          1.621E+06   7.389E+05   1.312E+05   6.403E+04   1.243E+04
Non-parametric LVaR per share    1.621       1.478       1.312       1.281       1.243
Non-parametric LVaR ratio        4.30%       3.92%       3.48%       3.40%       3.30%

Table 2 – Numerical results
Figure 3: LVaR ratio comparison – 95% and 99% LVaR ratios against the initial holding (10,000 to 1,000,000 shares), computed with and without incorporation of the variation of the spread and market impact coefficients.
loss caused by the spread in the whole liquidation process is e X/2
for each scenario as shown in Equation (13) (e is the mean value
of the bid-ask spread). With the incorporation of randomness into
the bid-ask spread, the optimal trading strategies are adjusted in
accordance with its variation. When the spread is high, the optimal
trading strategy may suggest selling less. On the contrary, when
it is low, the optimal trading strategy may suggest selling more.
Therefore, the average loss caused by the spread can be expected
to be lower than e X/2. Stated otherwise, the SP model’s optimal
trading strategies can take advantage of changes by acting in a
flexible and timely manner. Also note that introducing the calcula-
tion of the relative bid-ask spread and the Monte Carlo simulation
itself can cause certain differences. However, the effects are pre-
sumably small.
Finally, it is worth mentioning that incorporating randomness into the bid-ask spread and market impact coefficients within Almgren and Chriss's model would necessarily enlarge the resulting LVaR estimates. Indeed, it would add new variance terms to the variance of the liquidation cost, since the variations of the parameters are represented by their variances, and this would increase the LVaR estimates. The SP solution and its numerical experiments indicate that if uncertainty is handled well, it does not necessarily cause an increase in the LVaR estimates. This highlights the strength of the SP approach, which provides adaptive (or 'recourse') strategies. Moreover, adding new random variables to the model does not increase the difficulty of the problem, owing to the non-parametric nature of the SP LVaR.
Conclusion

This paper presents a stochastic programming approach for LVaR
modeling, which is extended from Almgren and Chriss’s mean-
variance approach. In contrast to their approach, the optimal trad-
ing strategies are derived by minimizing the expected liquidation
cost. Thus, the SP strategies dynamically adapt to new market
situations. This is the strength of SP in the context of decision
making under uncertainty. Another contribution from this paper
is the non-parametric formulation of the SP LVaR. It contrasts
with the LVaR modeling methodologies that quite often rely on
parametric approaches. Overall, the numerical results indicate that
the two approaches are not identical. Indeed, the LVaRs computed
using the SP model in this paper are lower than those computed by
Almgren and Chriss’s model. Yet, LVaR modeling still remains in its
infancy, especially when using SP in this context.
References
• Almgren, R., and N. Chriss, 1999, "Value under liquidation," Risk, 12, 61-63
• Almgren, R., and N. Chriss, 2000, "Optimal execution of portfolio transactions," Journal of Risk, 3, 5-39
• Almgren, R., 2003, "Optimal execution with nonlinear impact functions and trading-enhanced risk," Applied Mathematical Finance, 10, 1-50
• Bangia, A., F. Diebold, T. Schuermann, and J. Stroughair, 1999, "Liquidity on the outside," Risk, 12, 68-73
• Bertsimas, D., and A. Lo, 1998, "Optimal control of execution costs," Journal of Financial Markets, 1, 1-50
• Bellman, R. E., 1957, Dynamic programming, Princeton University Press, Princeton, New Jersey
• Dowd, K., 1998, Beyond Value at Risk: the new science of risk management, Wiley and Sons, New York
• Gondzio, J., and A. Grothey, 2006, "Solving nonlinear financial planning problems with 10^9 decision variables on massively parallel architectures," in Costantino, M., and C. A. Brebbia (eds.), Computational finance and its applications II, WIT transactions on modelling and simulation, 43, WIT Press, Southampton
• Hibiki, N., 2000, "Multi-period stochastic programming models for dynamic asset allocation," Proceedings of the 31st ISCIE international symposium on stochastic systems theory and its applications, 37-42
• Hisata, Y., and Y. Yamai, 2000, "Research toward the practical application of liquidity risk evaluation methods," Monetary and Economic Studies, 83-128
• Holthausen, R. W., R. W. Leftwich, and D. Mayers, 1987, "The effect of large block transactions on security prices: a cross-sectional analysis," Journal of Financial Economics, 19, 237-268
• Holthausen, R. W., R. W. Leftwich, and D. Mayers, 1990, "Large-block transactions, the speed of response, and temporary and permanent stock-price effects," Journal of Financial Economics, 26, 71-95
• Jarrow, R. A., and A. Subramanian, 1997, "Mopping up liquidity," Risk, 10, 170-173
• Jarrow, R. A., and A. Subramanian, 2001, "The liquidity discount," Mathematical Finance, 11:4, 447-474
• Jorion, P., 2006, Value at Risk: the new benchmark for managing financial risk, 3rd edition, McGraw-Hill, New York
• Krokhmal, P., and S. Uryasev, 2006, "A sample-path approach to optimal position liquidation," Annals of Operations Research, published online, November, 1-33
Articles

Manfred Gilli¹, University of Geneva and Swiss Finance Institute
Enrico Schumann, University of Geneva

optimization in financial engineering — an essay on ‘good’ solutions and misplaced exactitude

¹ This paper is based on Manfred Gilli’s leçon d’adieu, given at the Conférence Luigi Solari 2009 in Geneva. Both authors gratefully acknowledge financial support from the E.U. Commission through MRTN-CT-2006-034270 COMISEF.
Abstract

We discuss the precision with which financial models are handled,
in particular optimization models. We argue that precision is only
required to a level that is justified by the overall accuracy of the
model, and that this required precision should be specifically ana-
lyzed in order to better appreciate the usefulness and limitations of
a model. In financial optimization, such analyses are often neglect-
ed; operators and researchers rather show an a priori preference
for numerically-precise methods. We argue that given the (low)
empirical accuracy of many financial models, such exact solutions
are not needed; ‘good’ solutions suffice. Our discussion may appear
trivial: everyone knows that financial markets are noisy, and that
models are not perfect. Yet the question of the appropriate preci-
sion of models with regard to their empirical application is rarely
discussed explicitly; specifically, it is rarely discussed in university
courses on financial economics and financial engineering. Some
may argue that the models’ errors are understood implicitly, or
that in any case more precision does no harm. Yet there are costs.
We seem to have a built-in incapacity to intuitively appreciate ran-
domness and chance. All too easily then, precision is confused with
actual accuracy, with potentially painful consequences.
Der Mangel an mathematischer Bildung gibt sich durch nichts so auffallend zu erkennen wie durch maßlose Schärfe im Zahlenrechnen. ['Nothing reveals a lack of mathematical education so strikingly as immoderate precision in numerical calculation.'] (Carl Friedrich Gauß)
Imagine you take a trip to the Swiss city of Geneva, known for
international organizations, expensive watches and, not least, its
lake. After leaving the train station, you ask a passerby how far it is
to the lakefront. You are told, ‘Oh, just this direction, along the Rue
du Mont-Blanc. It’s 512.9934 meters.’
This is not a made-up number. Google Earth allows you to track
your path with such a precision. We measured this route around
10 times and found that it ranged between roughly 500 and 520
meters. (Unfortunately, if you had actually tried to take this route,
during most of 2009 you would have found that the street was
blocked by several construction sites.) When we are told ‘it’s 500
meters to the lake,’ we know that this should mean about, say,
between 400 and 600 meters. We intuitively translate the point
estimate into a range of reasonable outcomes.
In other fields, we sometimes seem to lack such an understanding. In
this short article, we shall look at such a field, financial engineering.
We will argue that a misplaced precision can sometimes be found
here, and we will discuss it in the context of financial optimization.
Financial modeling

In setting up and solving an optimization model, we necessarily
commit a number of approximation errors. [A classic reference on
the analysis of such errors is von Neumann and Goldstine (1947).
See also the discussion in chapter 6 of Morgenstern (1963).] ‘Errors’
does not mean that something went wrong; these errors will occur
even if all procedures work as intended. The first approximation is
from the real problem to the model. For instance, we may move
from actual prices in actual time to a mathematical description of
the world, where both prices and time are continuous (i.e., infinitely-
small steps are possible). Such a model, if it is to be empirically
meaningful, needs a link to the real world, which comes in the form
of data, or parameters that have to be forecast, estimated, simu-
lated, or approximated in some way. Again, we have a likely source
of error, for the available data may or may not well reflect the true,
unobservable process.
When we solve such models on a computer, we approximate a solu-
tion; such approximations are the essence of numerical analysis. At
the lowest level, errors come with the mere representation of num-
bers. A computer can only represent a finite set of numbers exactly;
any other number has to be rounded to the closest representable
number, hence we have what is called roundoff error. Then, many functions (e.g., the logarithm) cannot be computed exactly on a computer, but need to be approximated. Operations like differentiation or integration, in their mathematical formulation, require a
‘going to the limit,’ i.e., we let numbers tend to zero or infinity. But
that is not possible on a computer, any quantity must stay finite.
Consequently, we have so-called truncation error. For optimization
models, we may incur a similar error. Some algorithms, in particular
the methods that we describe below, are stochastic. As a result, we
do not (in finite time) obtain the model’s ‘exact’ solution, but only an
approximation (notwithstanding other numerical errors).
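Both error types just described can be seen directly at the interpreter; a tiny illustration (not from the paper):

```python
from math import exp

# Roundoff error: 0.1 and 0.2 have no exact binary floating-point
# representation, so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False

# Truncation error: a forward difference approximates f'(0) = 1 for f = exp.
# Shrinking h first reduces the truncation error, until roundoff in the
# subtraction exp(h) - 1 takes over and the estimate degrades again.
for h in (1e-2, 1e-8, 1e-14):
    print(h, (exp(h) - 1.0) / h)
```

The interplay of the two errors is exactly why 'letting h go to zero' is not possible in finite-precision arithmetic.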
In sum, we can roughly divide our modeling into two steps: from
reality to the model, and then from the model to its numerical solu-
tion. Unfortunately, large parts of the quantitative finance literature
seem only concerned with assessing the quality of the second step,
from model to implementation, and attempt to improve here. In the
past, a certain division of labor has been necessary: the economist
created his model, and the computer engineer put it into numerical
form. But today, there is little distinction left between the research-
er who creates the model, and the numerical analyst who imple-
ments it. Modern computing power allows us to solve incredibly
complex models on our desktops. (John von Neumann and Herman Goldstine, in the above-cited paper, describe the inversion of 'large' matrices, where large meant n > 10. In a footnote, they 'anticipate that n ≈ 100 will become manageable' (fn. 12). Today, Matlab inverts a 100×100 matrix on a normal desktop PC in a millisecond. But please, do not solve equations by matrix inversion.) But then of
course, the responsibility to check the reasonableness of the model
and its solution lies — at all approximation steps — with the financial
engineer. And then only evaluating problems with respect to their
numerical implementation falls short of what is required: any error
in this step must be set into context, we need to compare it with
the error introduced in the first step. But this is, even conceptually,
much more difficult.
Even if we accepted a model as ‘true,’ the quality of the model’s
solution would be limited by the attainable quality of the model’s
inputs. Appreciating these limits helps us decide how ‘exact’ a solu-
tion we actually need. This decision is relevant for many problems
in financial engineering since we generally face a trade-off between
the precision of a solution and the effort required (most apparently,
computing time). Surely, the numerical precision with which we
solve a model matters; we need reliable methods. Yet, empirically,
there must be a required-precision threshold for any given problem. Any improvement beyond this level cannot translate into gains on the actual problem anymore, only into costs (increased computing time or development costs). For many finance problems, we guess, this required precision is not high.
Example 1 — in numerical analysis, the sensitivity of the problem
is defined as follows: if we perturb an input to a model, the change
in the model’s solution should be proportional. If the impact is far
larger, the problem is called sensitive. Sensitivity is often not a
numerical problem; it rather arises from the model or the data. In
119
optimization in financial engineering — an essay on ‘good’ solutions and misplaced exactitude
finance, many models are sensitive. Figure 1 shows the S&P 500
from 31 December 2008 to 31 December 2009, i.e., 253 daily prices.
The index level rose by 23%. 23.45%, to be more precise (from
903.25 to 1115.10). But does it make sense to report this number to
such a precision?
Suppose we randomly pick two observations (less than one percent
of the daily returns), delete them, and recompute the yearly return.
Repeating this jackknifing 5000 times, we end up with a distribution of returns (right panel of Figure 1). The median return is about 23%, but the 10th percentile is 20% and the 90th percentile is 27% (the minimum is only about 11%, the maximum is 34%!). Apparently,
tiny differences like adding or deleting a couple of days cause very
meaningful changes.
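The jackknife just described can be sketched as follows; since the S&P 500 data are not reproduced here, synthetic daily returns stand in for the index:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the daily log-returns of one year of index data
# (the paper uses the S&P 500 in 2009; the data are not reproduced here).
daily = rng.normal(0.0008, 0.015, 252)

def jackknife_annual_returns(returns, n_delete=2, n_rep=5_000):
    """Randomly delete n_delete daily returns, recompute the yearly
    return from the remainder, and repeat n_rep times."""
    out = np.empty(n_rep)
    for i in range(n_rep):
        drop = rng.choice(returns.size, size=n_delete, replace=False)
        out[i] = np.expm1(np.delete(returns, drop).sum())  # compound remaining log-returns
    return out

dist = jackknife_annual_returns(daily)
print(np.percentile(dist, [10, 50, 90]))  # a range of outcomes, not one 'precise' number
```

Reporting such a percentile range, rather than a single point estimate, is the robustness check the text argues for.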
This sensitivity has been documented in the literature [for instance
in Acker and Duck (2007); or Dimitrov and Govindaraj (2007)],
but it is rarely heeded. The precision with which point estimates
are sometimes reported must not be confused with accuracy. We
may still be able to give qualitative findings (like ‘this strategy
performed better than another’), but we should not make single
numbers overly precise; we need robustness checks. Returns are
the empirical building blocks of many models. If these simple
calculations are already that sensitive, we should not expect more
complex computations to be more accurate.
Example 2 — the theoretical pricing of options, following the papers
of Black, Scholes, and Merton in the 1970s, is motivated by an arbi-
trage argument according to which we can replicate an option by
trading in the underlier and a riskless bond. A replication strategy
prescribes to hold a certain quantity of the underlier, the delta. The
delta is changing with time and with moves in the underlier’s price,
hence the options trader needs to rebalance his positions. Suppose
you live in a Black-Scholes-Merton world. You just sold a one-month
call (strike and spot price are 100, no dividends, riskfree rate is at
2%, volatility is constant at 30%), and you wish to hedge the posi-
tion. There is one deviation from Black-Scholes-Merton, though: you
cannot hedge continuously, but only at fixed points in time [Kamal
and Derman (1999)].
We simulate 100,000 paths of the stock price, and delta-hedge
along each path. We compute two types of delta: one is the delta
as precise as Matlab can get, and the other is rounded to two digits (e.g., 0.23 or 0.67). Table 1 shows the volatility of the hedging error (i.e., the difference between the achieved payoff and the contractual payoff) as a percentage of the initial option price. (It is often helpful to scale option prices, e.g., price to underlier, or price to strike.)
Figure 2 shows replicated option payoffs.
The volatility of the profit-and-loss is practically the same, so even
in the model world nothing is lost by not computing delta to a high
precision. Yet in research papers on option pricing, we often find
prices and Greeks to 4 or even 6 decimals.
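The hedging experiment can be sketched as below. This is not the paper's implementation: the path count is reduced for speed, and the volatility of the replication error is computed up to the deterministic premium term, which does not affect the comparison:

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

rng = np.random.default_rng(2)
Phi = NormalDist().cdf

S0, K, T, r, vol = 100.0, 100.0, 1.0 / 12, 0.02, 0.30
steps, n_paths = 21, 5_000  # roughly daily rebalancing over one month
dt = T / steps

def bs_delta(S, tau):
    """Black-Scholes delta of a call with time to maturity tau."""
    d1 = (log(S / K) + (r + 0.5 * vol ** 2) * tau) / (vol * sqrt(tau))
    return Phi(d1)

def hedge_error_vol(round_delta):
    """Volatility of the replication error of a delta-hedged short call."""
    z = rng.standard_normal((n_paths, steps))
    err = np.empty(n_paths)
    for p in range(n_paths):
        S, cash, d_old = S0, 0.0, 0.0
        for k in range(steps):
            d = bs_delta(S, T - k * dt)
            if round_delta:
                d = round(d, 2)              # delta to two digits only
            cash -= (d - d_old) * S          # rebalance the stock position
            cash *= exp(r * dt)              # cash account earns the riskfree rate
            S *= exp((r - 0.5 * vol ** 2) * dt + vol * sqrt(dt) * z[p, k])
            d_old = d
        err[p] = d_old * S + cash - max(S - K, 0.0)
    return err.std()

v_exact, v_rounded = hedge_error_vol(False), hedge_error_vol(True)
print(v_exact, v_rounded)  # practically the same
```

As in Table 1, the two volatilities differ by far less than the sampling noise of the simulation.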
Here is a typical counterargument: ‘True, for one option we don’t
need much precision. But what if we are talking about one million
options? Then small differences matter.’ We agree; but the question
is not whether differences matter, but whether we can meaningfully
compute them. (Your accountant may disagree. Here is a simple
rule: whenever you sell an option, round up and when you buy,
round down.) Between buying one share of IBM stock or buying one
million shares, there is an important difference: you take more risk.
We can rephrase our initial example: you arrive at the train station
in Geneva, and ask for the distance to Lake Zurich.
Optimization in financial engineering

Heuristics
The obsession with precision is also found in financial optimization;
researchers are striving for exact solutions, better even if in closed-
form. Finding these exact solutions is not at all straightforward, for
most problems it is not possible. Importantly, optimization methods
like linear or quadratic programming place — in exchange for exact
solutions — considerable constraints on the problem formulation.
We are often required to shape the problem such that it can be
solved by such methods. Thus, we get a precise solution, but at the
Figure 1 – Left: The S&P 500 in 2009. Right: Annual returns after jackknifing 2
observations. The vertical line gives the realized return.
Frequency of rebalancing   With exact delta   With delta to two digits
Once per day               18.2%              18.2%
Five times per day         8.3%               8.4%

Table 1 – Volatility of profit-and-loss under different hedging schemes.
Figure 2 – Payoff of replicating portfolios with delta to double precision (left), and
delta to 2 digits (right).
price of possibly incurring more approximation error at an earlier
stage. An example from portfolio optimization can illustrate this
point. Markowitz (1959, chapter 9) compares two risk measures,
variance and semi-variance, along the dimensions cost, conve-
nience, familiarity, and desirability, and concludes that variance
is superior in terms of cost, convenience, and familiarity. For vari-
ance, we can compute the exact solution to the portfolio selection
problem; for semi-variance, we can only approximate the solution.
But with today’s computing power (the computing power we have
on our desktops), we can test whether even with an inexact solution
for semi-variance the gains in desirability are worth the effort.
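The two risk measures Markowitz compares can be computed as follows; this is a generic sketch on illustrative data, with semi-variance taken about the mean:

```python
import numpy as np

def variance(r):
    return float(np.mean((r - r.mean()) ** 2))

def semi_variance(r):
    """Mean squared shortfall of returns below their mean
    (semi-variance about the mean)."""
    shortfall = np.minimum(r - r.mean(), 0.0)  # keep only below-mean deviations
    return float(np.mean(shortfall ** 2))

rng = np.random.default_rng(3)
r = rng.normal(0.0005, 0.01, 1_000)   # illustrative daily returns
print(variance(r), semi_variance(r))  # for symmetric returns, semi-variance is about half the variance
```

The two measures only tell different stories when the return distribution is asymmetric, which is precisely when the choice of risk measure matters.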
To solve such a problem, we can use optimization heuristics. The
term heuristic is used in different fields with different, though relat-
ed, meanings. In mathematics, it is used for derivations that are not
provable (sometimes even incorrect), but lead to correct conclu-
sions nonetheless. [The term was made famous by George Pólya
(1957).] Psychologists use the word for simple ‘rules of thumb’ for
decision making. The term acquired a negative connotation through the works of Kahneman and Tversky in the 1970s, since their 'heuristics and biases' program involved a number of experiments that showed the apparent suboptimality of such simple decision rules.
More recently, however, an alternative interpretation of these
results has been advanced [Gigerenzer (2004, 2008)]. Studies indi-
cate that while simple rules underperform in stylized settings, they
yield (often surprisingly) good results in more realistic situations,
in particular in the presence of uncertainty. The term heuristic is
also used in computer science; Pearl (1984) describes heuristics as
methods or rules for decision making that are (i) simple, and (ii) give
good results sufficiently often.
In numerical optimization, heuristics are methods that aim to provide
good and fast approximations to optimal solutions [Michalewicz and
Fogel (2004)]. Conceptually, they are often very simple; implement-
ing them rarely requires high levels of mathematical sophistication
or programming skills. Heuristics are flexible: we can easily add, remove, or change constraints, or modify the objective function.
Well-known examples for such techniques are Simulated Annealing
and Genetic Algorithms. Heuristics employ strategies that differ
from classical optimization approaches, but exploit the processing
power of modern computers; in particular, they include elements
of randomness. Consequently, the solution obtained from such
a method is only a stochastic approximation of the optimum; we trade off approximation error at the solution step against approximation error when formulating the model. Thus, heuristics are not
‘better’ methods than classical techniques. The question is rather
when to use which approach [Zanakis and Evans (1981)]. In finance,
heuristics are appropriate [Maringer (2005) gives an introduction
and presents several case studies].
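A stylized Threshold Accepting loop for minimizing portfolio semi-variance is sketched below. This is not the authors' implementation: the neighborhood (shifting a small weight between two assets) and the threshold sequence are simplified assumptions, and the cardinality constraints of the backtest are omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

def semi_variance(w, R):
    """In-sample semi-variance of portfolio returns R @ w below their mean."""
    pr = R @ w
    shortfall = np.minimum(pr - pr.mean(), 0.0)
    return float(np.mean(shortfall ** 2))

def threshold_accepting(R, n_steps=5_000):
    """Stylized Threshold Accepting: random neighbor moves, accepted whenever
    the objective does not worsen by more than the current threshold."""
    n = R.shape[1]
    w = np.full(n, 1.0 / n)                  # start from equal weights
    f = semi_variance(w, R)
    for tau in (0.01 * f, 0.001 * f, 0.0):   # decreasing threshold sequence
        for _ in range(n_steps):
            i, j = rng.choice(n, size=2, replace=False)
            w_new = w.copy()
            shift = min(0.005, w_new[i])     # move a little weight from i to j
            w_new[i] -= shift
            w_new[j] += shift
            f_new = semi_variance(w_new, R)
            if f_new - f <= tau:             # accept unless clearly worse
                w, f = w_new, f_new
    return w, f

# Illustrative scenario returns: 250 days, 20 assets with differing volatilities.
vols = np.linspace(0.005, 0.03, 20)
R = rng.standard_normal((250, 20)) * vols + 0.0005
f_start = semi_variance(np.full(20, 1.0 / 20), R)
w_opt, f_opt = threshold_accepting(R)
print(f_start, f_opt)  # the heuristic should find a lower-risk portfolio
```

Because the threshold shrinks to zero, the final phase only accepts improvements; running the loop longer, or restarting it, yields better solutions on average, which is exactly the time-precision trade-off discussed below.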
Minimizing downside risk

In this section, we will consider a concrete example: portfolio optimization. Our first aim is to evaluate the precision provided by a
heuristic technique. To do that, we need to compare the in-sample
quality of a solution with its out-of-sample quality. Then, we will
compare several selection criteria for portfolio optimization, and
discuss the robustness of the results.
Required precision — we use a database of several hundred
European stocks to run a backtest for a simple portfolio strategy:
minimize semi-variance, subject to (i) the number of assets in the
portfolio being between 20 and 50, (ii) any weight of an included
asset being between 1% and 5%. We construct a portfolio using
data from the last year, hold the portfolio for three months and
record its performance; then we rebalance. In this manner, we
‘walk forward’ through the data which spans the period from
January 1999 to March 2008 [Details can be found in Gilli and
Schumann (2009)].
The solution to this optimization problem cannot be computed
exactly. We use a heuristic method called Threshold Accepting.
This method, however, only returns stochastic solutions: running
the method twice for the same dataset will lead to different optimal
portfolios. With this method, we face an explicit trade-off between
computing time and precision. So when we let the algorithm search
Figure 3 – Risk and risk-adjusted return. The grey dots give the actual portfolios, the
dark line is a local average.
Ranks Average risk Average risk-adjusted return
all 9.55 (1.67) 0.48 (0.19)
1-50 8.03 (0.04) 0.66 (0.05)
51-100 8.07 (0.05) 0.66 (0.05)
101-150 8.09 (0.04) 0.64 (0.05)
151-200 8.17 (0.08) 0.63 (0.07)
201-300 8.51 (0.13) 0.59 (0.08)
301-400 9.16 (0.24) 0.49 (0.11)
401-500 10.94 (0.40) 0.35 (0.09)
501-600 12.49 (0.55) 0.19 (0.11)
Table 2 – Risk and risk-adjusted return (out-of-sample).
for longer, then on average we get better solutions. We execute, for
the same dataset, 600 optimization runs with differing numbers of
iterations, and obtain 600 solutions that differ in their in-sample
precision. Higher in-sample precision is associated with more com-
puting time. We rank these portfolios by their in-sample objective
function (i.e., in-sample risk) so that rank 1 is the best portfolio and
rank 600 is the worst.
The left-hand panel of Figure 3 shows the resulting out-of-sample
risk of the portfolios, sorted by in-sample rank. We observe an
encouraging picture: as the in-sample risk goes down, so does the
out-of-sample risk. In other words, increasing the precision of the
in-sample solutions does improve the out-of-sample quality of the
model. At least up to a point: for the best 200 portfolios or so, the
out-of-sample risk is practically the same. So once we have a ‘good’
solution, further improvements are only marginal. We also compute
risk-adjusted returns (Sortino ratios with a required return of zero),
shown in the right-hand panel. It shows a similar pattern, even
though it is much noisier. Table 2 gives details (all annualized). The
numbers in parentheses are the standard deviations of the out-of-
sample results.
This randomness, it must be stressed, follows from the optimization
procedure: in each of our 600 runs we obtained slightly different
portfolios, each portfolio maps into a different out-of-sample per-
formance. For the best portfolios, the improvements are minuscule.
For example, the average risk per year of the best 50 portfolios is 4
basis points lower than the risk of the next-best 50 portfolios.
To judge the relevance of this randomness introduced by our
numerical technique, we need to compare it with the uncertainty
coming from the data. To build intuition, we jackknife from the
out-of-sample paths just as we did in Example 1 above. An example
is illustrated in Figure 4. In the upper panel, we picture the out-
of-sample performance of one euro that was invested in the best
portfolio (rank-1; this path corresponds to the left-most grey dot in
Figure 3). In the lower panel, we see several paths computed after
having randomly-selected and deleted one percent of the data
points. The average risk of the best 50 portfolios in our tests was
8.03% per year, with a standard deviation of 4 basis points. With
jackknifing one percent of the data, we obtain a risk between 7.75% and 8.17%; with jackknifing five percent, we get a risk between 7.58% and 8.29%, far greater than the randomness introduced by our
method. For risk-adjusted return – in which we are naturally more
interested – things are even worse. The best 50 portfolios had an
average risk-return ratio of 0.66. Just jackknifing the paths of these
portfolios by one percent, we already get a range of between 0.42
and 0.89.
In sum, heuristics may introduce approximation error into our
analysis, but it is swamped by the sensitivity of the problem with
respect to even slight data changes. Hence, objecting to heuristics
because they do not provide exact solutions is not a valid argument
in finance.
Robustness checks — we run backtests for a large number of alternative selection criteria, among them partial moments (e.g., semi-variance), conditional moments (e.g., Expected Shortfall), and quantiles (e.g., Value-at-Risk). In the study described above, we only
investigated the approximation errors of the optimization method,
compared with the errors coming from the data. But now we wish
to compare different models.
We implement a backtest like the one described above, but we also
add a robustness check, again based on a jackknifing of the data:
suppose a small number of in-sample observations were randomly
selected and deleted (we delete 10% of the data). The data has
changed, and hence the composition of the computed portfolio
will change. If the portfolio selection strategy is robust, we should
expect the resulting portfolio to be similar to the original one, as
the change in the historical data is only small, and we would also
expect the new portfolio to exhibit a similar out-of-sample perfor-
mance. Repeating this procedure many times, we obtain a sampling
distribution of portfolio weights, and consequently also a sampling
distribution of out-of-sample performance. We do not compare
the differences in the portfolio weights since it is difficult to judge
what a given norm of the difference between two weight-vectors
practically means. Rather, we look at the changes in out-of-sample
results. This means that for any computed quantity that we are
interested in we have a distribution of outcomes. Figure 5 gives
some examples for different strategies.
Figure 4 – Upper panel: out-of-sample performance of one euro invested in the rank-1
portfolio. Lower panel: out-of-sample performance after jackknifing.
(More details can be found in Gilli and Schumann, forthcoming.) The
figure shows the out-of-sample returns of three strategies: mini-
mum variance, the upside-potential ratio [Sortino et al. (1999)], and
Value-at-Risk; we also plot Sharpe ratios. Portfolios constructed
with the upside-potential ratio, for instance, have a median return
that is more than a percentage point higher than the return of the
minimum variance portfolio; Sharpe ratios are also higher. Even
VaR seems better than its reputation. But most remarkable is the
range of outcomes: given a 10% perturbation of in-sample data,
returns differ by more than 5 percentage points per year.
Conclusion

In this article, we have discussed the precision with which financial
models are handled, in particular optimization models. We have
argued that precision is only required to a level that is justified by
the overall accuracy of the model. Hence, the required precision
should be specifically analyzed, so that the usefulness and limita-
tions of a model can be better appreciated. Our discussion may
appear trivial; everyone knows that financial markets are noisy,
and that models are not perfect. Yet the question of the appropri-
ate precision of models with regard to their empirical application is
rarely discussed explicitly. In particular, it is rarely discussed in uni-
versity courses on financial economics and financial engineering.
Again, some may argue, the errors are understood implicitly (just
like ‘500 meters’ means ‘between 400 and 600 meters’), or that in
any case more precision does no harm; but here we disagree. We
seem to have a built-in incapacity to intuitively appreciate random-
ness and chance, hence we strive for ever more precise answers. All
too easily then, precision is confused with accuracy; acting on the
former instead of the latter may lead to painful consequences.
References
• Acker, D., and N. W. Duck, 2007, "Reference-day risk and the use of monthly returns data," Journal of Accounting, Auditing and Finance, 22, 527-557
• Dimitrov, V., and S. Govindaraj, 2007, "Reference-day risk: observations and extensions," Journal of Accounting, Auditing and Finance, 22, 559-572
• Gigerenzer, G., 2004, "Fast and frugal heuristics: the tools of bounded rationality," in Koehler, D. J., and N. Harvey (eds), Blackwell handbook of judgment and decision making, Blackwell Publishing
• Gigerenzer, G., 2008, "Why heuristics work," Perspectives on Psychological Science, 3, 20-29
• Gilli, M., and E. Schumann, 2009, "Optimal enough?" COMISEF Working Paper Series, No. 10
• Gilli, M., and E. Schumann, forthcoming, "Risk-reward optimisation for long-run investors: an empirical analysis," European Actuarial Journal
• Kamal, M., and E. Derman, 1999, "When you cannot hedge continuously: the corrections of Black-Scholes," Risk, 12, 82-85
• Maringer, D., 2005, Portfolio management with heuristic optimization, Springer
• Markowitz, H. M., 1959, Portfolio selection, Wiley
• Michalewicz, Z., and D. B. Fogel, 2004, How to solve it: modern heuristics, 2nd edition, Springer
• Morgenstern, O., 1963, On the accuracy of economic observations, 2nd edition, Princeton University Press
• Pearl, J., 1984, Heuristics, Addison-Wesley
• Pólya, G., 1957, How to solve it, 2nd edition, Princeton University Press (expanded reprint, 2004)
• Sortino, F., R. van der Meer, and A. Plantinga, 1999, "The Dutch triangle," Journal of Portfolio Management, 26, 50-58
• von Neumann, J., and H. H. Goldstine, 1947, "Numerical inverting of matrices of high order," Bulletin of the American Mathematical Society, 53, 1021-1099
• Zanakis, S. H., and J. R. Evans, 1981, "Heuristic 'optimization': why, when, and how to use it," Interfaces, 11, 84-91
Figure 5 – Annualized returns and Sharpe ratios for different portfolio selection strategies (minimum variance, upside potential ratio, Value-at-Risk) [image not reproduced]
Rodney Coleman
Department of Mathematics, Imperial College London
A VaR too far? The pricing of operational risk
Abstract
This paper is a commentary on current and emerging statistical practices for analyzing operational risk losses according to the Advanced Measurement Approaches of Basel II, the New Basel Accord. In particular, it exposes the limitations of modeling operational risk loss data to obtain high-severity quantiles when sample sizes are small. The viewpoint is that of a mathematical statistician.
Value-at-Risk (VaR) entered the financial lexicon as a measure of
volatility matched to riskiness. Basel I saw that it could be a risk-
sensitive pricing mechanism for computing regulatory capital for
market and credit risks. Basel II extended its scope to operational
risk, the business risk of loss resulting from inadequate or failed
internal processes, people, or systems, or from external events. The
European Parliament agreed and has passed legislation ensuring
that internationally active banks and insurers throughout the E.U.
will have adopted its provisions from 2012.
However, to put the problem of applying the Basel II risk-sensitive
advanced measurement approaches [BCBS (2006)] in perspective,
consider the role of quantitative risk modeling in the recent global
financial crisis. This began with the credit crunch in August 2007
following problems created by subprime mortgage lending in the
U.S. A bubble of cheap borrowing was allowed to develop, with debt
securitized (i.e. off-loaded) and insured against default, creating a
vicious spiral. In their underwriting, the insurers failed to adjust for
the risk that property values might fall, with consequent defaults on
mortgage repayment. This bad business decision caused havoc for the banks that lent the money, for the investors who bought the debt, and for the insurers without the means to pay out on the defaults. Where
was VaR when it was needed? That is to say, what part did financial
modeling play in preventing all this? The answer has to be nothing
much. Further, it can only marginally be put down to operational
risk, since operational risk excludes risks which result in losses from
poor business decisions.
Oprisk losses will often stem from weak management and from
external events. Basel II, the New Basel Accord, set out a regulatory
framework for operational risk at internationally active banks. In so
doing, it gave a generally accepted definition for operational risk,
previously categorized as ‘other risks.’ More importantly, it is Basel
II’s role in driving developments in operational risk management
practices, the search for robust risk measurement, and the pros-
pect of being subject to regulatory supervision and transparency
through disclosure (Pillars 2 and 3 of the Accord) that have led in a
short time to operational risk becoming a significant topic in busi-
ness and management studies.
Basel II
The Accord sets out a risk-sensitive way of calculating reserve capital to cover possible defaults. Institutions are required to categorize
operational risk losses by event type, promoting identification of
risk drivers. There is no mandated methodology.
Pillar 1 of Basel II gives three ways of calculating the operational risk
capital charge, with increasing complexity, but benefiting from a
reducing charge. We shall be considering its requirements in respect
of its highest level, the Advanced Measurement Approaches (AMA),
which requires that the banks model loss distributions of cells over
a business line/loss event type grid using operational risk loss data
that they themselves have collected, supplemented as required by
data from external sources.
Pillar 2 of the Accord requires banks to demonstrate that their man-
agement and supervisory systems are satisfactory. Pillar 3 relates
to transparency, requiring them to report on their operational risk
management. It is these two latter pillars that are probably going
to have a greater impact in protecting global finance than loss
modeling.
Solvency II, the European Union’s regulatory directive for insurers,
has adopted the same three pillars. This directive will come into
force throughout the E.U. in 2012.
In November 2007, the U.S. banking agencies approved the U.S.
Final Rule for Basel II.
Banks will be grouped into the large or internationally active banks that will be required to adopt the AMA, those that voluntarily opt in to it, and the rest, which will adopt an extended version of the earlier Basel I.
We note that in the rule book, summarized in BCBS (2006), opera-
tional risk sits in paragraphs 644 to 679, occupying the final 12
pages, pages 140 to 151.
Advanced measurement approaches
Attention is directed to the following passages taken from BCBS
(2006) that cover the modeling requirements for the advanced
measurement approaches.
(665) — A bank’s internal measurement system must reasonably
estimate unexpected losses based on the combined use of internal
and relevant external loss data, scenario analysis, and bank-specific
business environment and internal control factors (BEICF).
(667) — The Committee is not specifying the approach or distribu-
tional assumptions used to generate the operational risk measure
for regulatory capital purposes. However, a bank must be able to
demonstrate that its approach captures potentially severe ‘tail’
loss events. Whatever approach is used, a bank must demonstrate
that its operational risk measure meets a soundness standard [...]
comparable to a one-year holding period and a 99.9th percentile
confidence interval.
(669b) — Supervisors will require the bank to calculate its regula-
tory capital requirement as the sum of expected loss (EL) and
unexpected loss (UL), unless the bank can demonstrate that it is
adequately capturing EL in its internal business practices. That is,
to base the minimum regulatory capital requirement on UL alone,
125
A VaR too far? The pricing of operational risk
the bank must be able to demonstrate [...] that it has measured and
accounted for its EL exposure.
(669c) — A bank’s risk measurement system must be sufficiently ‘granular’ to capture the major drivers of operational risk affecting the shape of the tail of the loss estimates.
(669d) — The bank may be permitted to use internally determined
correlations in operational risk losses across individual operational
risk estimates [...]. The bank must validate its correlation assump-
tions.
Event types
Internal fraud
External fraud
Employment practices and workplace safety
Clients, products, and business practice
Damage to physical assets
Business disruption and system failures
Execution, delivery, and process management
Table 1 – Event types for banking and insurance under Basel II and Solvency II
(669f) — There may be cases where estimates of the 99.9th percentile confidence interval based primarily on internal and external loss
event data would be unreliable for business lines with a heavy-tailed
loss distribution and a small number of observed losses. In such
cases, scenario analysis, and business environment and control
factors, may play a more dominant role in the risk measurement
system. Conversely, operational loss event data may play a more
prominent role in the risk measurement system for business lines
where estimates of the 99.9th percentile confidence interval based
primarily on such data are deemed reliable.
(672) — Internally generated operational risk measures used for
regulatory capital purposes must be based on a minimum five-year
observation period of internal loss data.
Business unit        Business line
Investment banking   Corporate finance
                     Trading and sales
Banking              Retail banking
                     Commercial banking
                     Payment and settlement
                     Agency services
Others               Asset management
                     Retail brokerage
Table 2 – Business units and business lines for international banking under Basel II
(673) — A bank must have an appropriate de minimis gross loss threshold for internal loss data collection, for example, €10,000.
(675) — A bank must use scenario analysis of expert opinion in
conjunction with external data to evaluate its exposure to high-
severity events. [...] These expert assessments could be expressed
as parameters of an assumed statistical loss distribution.
Event types and business lines
Table 1 shows the seven designated loss event types (ETs) given in
Basel II, also adopted by Solvency II. Table 2 shows the eight broad
business lines (BLs) within the banking sector given in Basel II.
Together they create an 8 by 7 grid of 56 BL/ET cells.
The Operational Risk Consortium Ltd (ORIC) database of operational risk events, established in 2005 by the Association of British Insurers (ABI), has nearly 2000 events showing losses exceeding £10,000 in the years 2000 to 2008. Table 3, based on a report for the ABI [Selvaggi (2009)], gives percentages of loss amounts for those BL/ET cells having at least 4% of the total loss amount.
Business line             CP&BP   ED&PM   BD&SF   Others   Total
Sales and distribution     18.9     6.8     0.7             26.4
Customer service/policy            13.2             2.3     15.5
Accounting/finance                 23.4     0.1             23.5
IT                                          6.6     6.0     12.6
Claims                              4.0             1.4      5.4
Underwriting                        6.3             0.3      6.6
Others                      5.1    11.8             1.4     10.0
Total                      24.0    65.5     7.4    11.4    100.0
Table 3 – Business line/event type grid showing percentages of loss amounts (min 4%) in the ORIC database (2000-08); blank cells are below the 4% cut-off. Event types: CP&BP = clients, products, and business practice; ED&PM = execution, delivery, and process management; BD&SF = business disruption and system failures
Source: Selvaggi (2009)
This information tells little about the actual events. For this we need
Level 2 and Level 3 categories, the seven event types being Level
1. To illustrate this, again from Selvaggi (2009), Table 4 shows the
most significant Level 2 and Level 3 event types in terms of both
severity and frequency (values over 4%) from ORIC. This table
excludes losses arising from the widespread mis-selling of endowment policies in the U.K. in the 1990s. The victims were misled with
over-optimistic forecasts that their policies would pay off their
mortgages on maturity. Among the insurers, Abbey Life was fined
a record £1 million in December 2002, and paid £160 million in com-
pensation to 45,000 policy holders. The following March, Royal &
Sun Alliance received a fine of £950,000. Later that year, Friends
Provident was fined £675,000 for mis-handling complaints, also an
operational risk loss.
A widely accepted approach in risk management is to identify the
major risks faced by an organization, measured by their impact in
terms of their frequency and severity. In many cases, not much
more than a handful of operational risks will give rise to the most
serious of the loss events and loss amounts.
However, the day-to-day management requires bottom up as well
as top down risk control. There needs to be an understanding of
the risk in all activities. Aggregating losses over business lines and
activities would tend to conceal low impact risks behind those hav-
ing more dramatic effect. The bottom-up approach is thus a neces-
sary part of seeing the complete risk picture. Aggregation at each
higher level will inform management throughout the business.
Expected loss and unexpected loss
An internal operational loss event register would typically show
high impact events at low frequency among events of high fre-
quency but low impact. A financial institution might therefore sort
its losses into: ‘expected loss’ (EL) to be absorbed by net profit,
‘unexpected loss’ (UL) to be covered by risk reserves (so not totally
unexpected), and ‘stress loss’ (SL) requiring core capital or hedging
for cover. The expected loss per transaction can easily be embed-
ded in the transaction pricing. It is the rare but extreme stress loss-
es that the institution must be most concerned with. This structure
is the basis of the loss data analysis approach to operational risk.
Hard decisions need to be made in choosing the EL/UL and UL/SL
boundaries. As well as these, a threshold ‘petty cash’ limit is needed
to set a minimum loss for recording it as an operational loss. Loss
events with recovery and other near miss events will also need to
be entered into the record.
We see that, for regulatory charges, Basel specifies EL to be the
expectation of the fitted loss distribution and UL to be its 99.9th
percentile. For insurers, Solvency II sets UL at the 99.5th percentile.
Basel further permits the use of UL as a single metric for charging.
Let us compare this with the classical VaR, the difference between
UL and EL on a profit-and-loss plot. Expectation measures location,
and VaR measures volatility. Oprisk losses are non-negative, so Basel
appears to be using zero as its location measure, making it possible
to rely on just UL as its opVaR metric. Basel’s referring to ‘confidence
interval’ for the confidence limit UL adds further weight to thinking
it refers to the interval (0, UL). Now UL gives no information about
potential losses larger than itself. In fact, the very high once-in-a-
thousand-years return value is deemed by Basel to capture pretty
much all potential losses. This will not always be the case for heavy-
tailed loss distribution models. In survival analysis and extreme value
theory we would also estimate the expected shortfall, the expecta-
tion of the loss distribution conditioned on the values in excess of UL.
This has been called CVaR (conditional VaR).
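The distinction between EL, UL (the opVaR), classical VaR, and expected shortfall can be made concrete on simulated losses. A minimal sketch, assuming an illustrative lognormal severity (nothing here is prescribed by Basel):

```python
import numpy as np

rng = np.random.default_rng(42)
losses = rng.lognormal(mean=8.0, sigma=2.0, size=1_000_000)  # illustrative severities

el = losses.mean()                   # expected loss: the mean of the loss distribution
ul = np.quantile(losses, 0.999)      # unexpected loss: the 99.9th percentile (opVaR)
classical_var = ul - el              # classical VaR: the difference between UL and EL
cvar = losses[losses > ul].mean()    # expected shortfall (CVaR): mean loss beyond UL
```

On a heavy-tailed sample such as this one, `cvar` comfortably exceeds `ul`, illustrating the point that UL alone says nothing about losses larger than itself.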
There being a minimum threshold to registered losses, we really
need to reconstruct the loss distribution for under-the-threshold
losses to include them in the expectation and 99.9th percentile.
I have always felt that the use of the difference between a moment
measure (expectation) and a quantile measure (percentile) in the
VaR calculation unnecessarily complicates any investigation of its
statistical properties. Why not have median and high percentile?
External data
During a recent supervisory visit to an insurance company, the
FSA was told that they had only two operational risk loss events
to show, with another six possibles. This extreme example of the
paucity of internally collected oprisk data, particularly the large
losses that would have a major influence in estimating reserve
funding, means that data publicly available or from commercial or
consortia databases needs to be explored to supplement internal
loss events.
Fitch’s OpVar is a database of publicly reported oprisk events show-
ing nearly 500 losses of more than ten million dollars between
1978 and 2005 in the U.S. The 2004 Loss Data Collection Exercise
(LDCE) collected more than a hundred loss events in the U.S. of 100
million dollars or more in the ten years to 2003.
The Operational Riskdata eXchange Association (ORX) is well established as a database of oprisk events in banking. It is a consortium collecting data from thirty member banks from twelve countries. It has more than 44,000 losses, each over €20,000 in value. Apart from ORIC, individual insurance companies will have their own claims records containing accurate settlement values.

Level 2                                       Level 3                                Size %   Freq %
Advisory activities                           Mis-selling (non-endowment)            13       9
Transaction capture, execution, maintenance   Accounting error                       12
                                              Inadequate process documentation        8
                                              Transaction system error                8       6
                                              Management information error            7
                                              Data entry errors                       7       5
                                              Management failure                      5
                                              Customer service failure                4       16
Suitability, disclosure, fiduciary            Customer complaints                     6       4
Systems                                       Software                                6
Customer or client account management         Incorrect payment to client/customer            9
                                              Payment to incorrect client/customer            4
Theft and fraud                               Fraudulent claims                               4
Total                                                                                76       57
Table 4 – Levels 2 and 3 event categories from insurance losses in ORIC (2000-08) showing loss amount or loss frequency of 4% or more; blank cells are below 4%
Source: Selvaggi (2009)
Combining internal and external data
When data from an external database are combined with an in-house dataset, the former are relocated and scaled to match. Each external datum is standardized by subtracting the external sample’s estimated location (its sample mean m) and dividing the result by its estimated scale (its sample standard deviation s). We then adopt the location m’ and scale s’ of the internal data: if y is a standardized external datum, the new value of the external datum is z = m’ + s’y. Finally, we check the threshold used for the internal database against the transformed threshold for the relocated and rescaled external data, set the larger of the two as the new threshold, and eliminate all data points that fall below it.
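The relocation-and-rescaling step just described can be sketched as follows; the function name is illustrative, and the simple mean/standard-deviation estimators follow the text:

```python
import numpy as np

def combine_datasets(internal, external, internal_threshold, external_threshold):
    """Relocate and rescale external losses to match the internal data,
    then pool the two samples above a common threshold."""
    internal = np.asarray(internal, dtype=float)
    external = np.asarray(external, dtype=float)

    # Standardize the external data: y = (x - m) / s
    m, s = external.mean(), external.std(ddof=1)
    y = (external - m) / s

    # Adopt the internal location m' and scale s': z = m' + s' * y
    m_i, s_i = internal.mean(), internal.std(ddof=1)
    z = m_i + s_i * y

    # Transform the external threshold the same way, take the larger of the
    # two thresholds, and drop every pooled datum that falls below it.
    t_ext = m_i + s_i * (external_threshold - m) / s
    threshold = max(internal_threshold, t_ext)
    pooled = np.concatenate([internal, z])
    return pooled[pooled >= threshold], threshold
```

Note that the common threshold can end up above the internal one, so even internal data points may be discarded — one route to the strange pooled statistics discussed next.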
This can lead to strange statistical results. Table 5 shows that sam-
ple statistics for pooled data can lie outside the range given by the
internal and external data sets [Giacometti et al. (2007, 2008)].
We note that the same statistics using logscale data show this phe-
nomenon for commercial banking skewness and kurtosis. Perhaps
the location and scaling metrics are inappropriate, and we need
model specific metrics for these. We are told that the sample size of
the external data is about 6 times that of the internal set.
The small sample problem
An extreme loss in a small sample is over-representative of its 1 in a 1000 or 1 in a 10,000 chance, yet under-represented if not observed. Indeed, we find the largest losses are overly influential in the fitted model. So we must conclude that fitting a small dataset cannot truly represent the loss process, whatever the model used.
As a statistician I am uncomfortable with external data. An alterna-
tive device which I feel to be more respectable is to fit a heavy-
tailed distribution to the data and then simulate a large number of
values from this fitted distribution, large enough to catch sufficient
high values for statistical analysis. A basic principle in statistics
is that inferences about models in regions far outside the range
of the available data must be treated as suspect. Yet here we are
expected to do just that when estimating high quantiles.
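The alternative device described here — fit a heavy-tailed distribution and then simulate a large number of values from the fit — can be sketched with SciPy’s GPD; all parameter values are illustrative:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)

# A small sample of 75 losses (here itself drawn from a heavy-tailed GPD)
small_sample = genpareto.rvs(c=0.5, loc=150, scale=125, size=75, random_state=rng)

# Fit a GPD to the small sample by maximum likelihood
c_hat, loc_hat, scale_hat = genpareto.fit(small_sample)

# Simulate from the fitted model, with enough values to catch
# sufficiently many high values for statistical analysis
simulated = genpareto.rvs(c_hat, loc=loc_hat, scale=scale_hat,
                          size=1_000_000, random_state=rng)
q999 = np.quantile(simulated, 0.999)   # opVaR estimate implied by the fitted model
```

The simulation only restates what the fitted model already says about the far tail, so the basic caveat stands: the estimate lives far outside the range of the data.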
Stress and scenario testing
Scenario analysis aims to foresee, and to consider responses to, severe oprisk events. However, merging scenario data with actual loss data will corrupt that data and distort the loss distribution. These scenario losses would then contribute to a higher capital charge than would otherwise be the case.
Stress testing is about the contingency planning for these adverse
events based on the knowledge of business experts.
A potential loss event could arise under three types of scenario:
■■ Expected loss (optimistic scenario)
■■ Unexpected serious case loss (pessimistic scenario)
■■ Unexpected worst case loss (catastrophic scenario)
This mimics the expected loss, unexpected loss, and stress loss of
the actuarial approach.
Probability modeling of loss data
Loss data is not Gaussian. The normal distribution model that backs so much of statistical inference will not do. Loss data by its nature has no negative values. There are no profits to be read as negative losses. Operational losses are always a cost. Further, a truncated normal distribution taking only positive values gives too little probability to its tail, making insufficient allowance for large and very large losses. The lognormal distribution has historically been used instead in econometric theory, and the Weibull in reliability modeling. In practice, even the lognormal will fail to pick up on the extremely large losses.

Business line        Sample statistic   Internal data   External data   Pooled data
Retail banking       mean               15,888          37,917          11,021
                     std deviation      97,668          184,787         59,968
                     skewness           20.17           26.53           26.60
                     kurtosis           516             910             953
                     min                500             5,000           1,230
                     max                2,965,535       8,547,484       2,965,535
Commercial banking   mean               28,682          40,808          19,675
                     median             2,236           12,080          1,347
                     std deviation      133,874         653,409         83,253
                     min                500             5,000           521.54
                     max                1,206,330       20,000,000      2,086,142
Table 5 – Some descriptive statistics for pooled data [abstracted from Giacometti et al. (2007)]

Figure 1 – Probability density functions of the GEV and GPD [image not reproduced]
Two models that can allow large observations come from Extreme Value Theory. These are the Generalized Extreme Value distribution (GEV) and the Generalized Pareto Distribution (GPD). They are limit distributions as sample sizes increase to infinity. These distributions are used in environmental studies (hydrology, pollution, sea defenses, etc.) as well as in finance. The GEV and GPD each have three parameters, μ giving location, σ scale, and ξ shape, which we vary to obtain a good fit. The location μ and shape ξ are not to be identified with the population mean and population variance. For the GPD, μ is the lower bound of the range. Figure 1 shows the form of their respective probability density functions. The lognormal and Weibull have just two parameters, and so lack flexibility. Four- and more-parameter models, such as Tukey’s g-and-h class of distributions, are also gaining users, but require more data than is usually available, though they have been seen to capture the loss distribution of aggregated firm-wide losses. An extensive list with plots and properties can be found in Young and Coleman (2009).
Fitting severity models
We fit the GEV and GPD to the 75 losses given in Cruz (2002, p. 83). For each model we use two fitting processes: maximum likelihood for all three parameters, and maximum likelihood for the location and scale with the Hill estimator for the shape parameter [Hill (1975)]. Figure 2 shows the sample cumulative distribution function (the observed proportion of values less than x) shown as steps, together with four fitted cumulative distribution functions (the height y is the probability of obtaining a future value less than x). The range of observation is (143, 3822). From Figure 2 we can see a good fit in each case. In Table 6 what we also see is that the estimated losses at large quantiles (reading x-values from fitted y-values) differ greatly between the fitted models, that is to say, at values way beyond the largest observation.
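The Hill estimator used for the shape parameter can be sketched in a few lines; the choice of k, the number of upper order statistics, is left to the user, and the data below are illustrative:

```python
import numpy as np

def hill_estimator(data, k):
    """Hill (1975) estimator of the tail shape from the k largest observations:
    the mean of log(x_(n-i+1) / x_(n-k)) over the top k order statistics."""
    x = np.sort(np.asarray(data, dtype=float))
    return np.mean(np.log(x[-k:] / x[-k - 1]))
```

On exact Pareto data with tail index α the estimator returns roughly 1/α; in practice the estimate can be quite sensitive to the choice of k.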
Estimation far outside a dataset is always fraught and can lead to significant errors in high quantile estimation. Basel II asks for the 99.9 percentile, Q(0.999), Solvency II for the 99.5 percentile, Q(0.995). These quantiles are the opVaR.

A simulation study of GPD(0.70, 150, 125) gave an estimated 95% confidence interval for Q(0.999) of (5200, 9990), very wide indeed. The computations were made using Academic Xtremes [Reiss and Thomas (2007)]. The statistical operations were carried out without seeking any great precision. The data in thousands of dollars were rounded to the nearest thousand dollars, the parameters of the fitted models were rounded to two significant digits, the fits were judged by eye, and the simulation for the estimated confidence interval was based on a simulation of only 4000 values. My first big surprise was that a good fit could be achieved, my second that it could be achieved so easily, my third that we could have four close fits, and with such a variety of parameter values. Table 7 shows parameter estimation variability through four simulations of 1000 values from GEV(0.53, 230, 130). We see that 1000 values can be a small sample in that it may not be enough to provide precise estimation.

Figure 2 – The sample cumulative distribution function of the 75 losses, with four fitted cumulative distribution functions [image not reproduced]

Fitted model          GEV     GEV     GPD     GPD
Parameter estimates
μ                     230     230     135     150
σ                     130     100     165     125
ξ                     0.53    0.70    0.44    0.70
Quantiles
Q(0.9)                777     793     866     793
Q(0.95)               1230    1169    1425    1161
Q(0.975)              1960    1706    2333    1661
Q(0.99)               3663    2794    4457    2605
Q(0.995)              5906    4046    7258    3619
Q(0.999)              18066   9525    22452   7595
Data    Model values
917     1089    1057    1251    1053
1299    1288    1214    1497    1204
1416    1614    1459    1902    1435
2568    2280    1925    2733    1857
3822    4842    3470    5929    3160
Table 6 – The parameters, quantiles, and fitted values of GEV and GPD models when fitted to loss data
Source: Young and Coleman (2009, pp. 400-403)

                μ      σ      ξ
GEV             230    130    0.53
Simulation 1    234    135    0.56
Simulation 2    224    120    0.49
Simulation 3    221    119    0.45
Simulation 4    230    131    0.51
Average         227    126    0.50
Table 7 – Parameter estimates of GEV(ξ, μ, σ) from four simulations of 1000 values from GEV(0.53, 230, 130)
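The parameter variability of Table 7 is easy to reproduce in outline. A sketch, noting as an assumption of this implementation that SciPy’s `genextreme` parameterizes the shape as c = −ξ:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
xi, mu, sigma = 0.53, 230.0, 130.0       # the GEV(ξ, μ, σ) of Table 7

estimates = []
for _ in range(4):
    # scipy's genextreme uses the shape convention c = -ξ
    sample = genextreme.rvs(c=-xi, loc=mu, scale=sigma, size=1000, random_state=rng)
    c_hat, loc_hat, scale_hat = genextreme.fit(sample)
    estimates.append((-c_hat, loc_hat, scale_hat))   # reported as (ξ, μ, σ)
```

Each run of the loop fits a fresh sample of 1000 values; the scatter of the four `(ξ, μ, σ)` triples shows the same estimation variability as the table.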
Why such lack of concern for precision? Highly sophisticated meth-
ods can do no better when we have such variability in the resulting
output at high quantiles far beyond the data. We see this variability
right away in the estimated parameter values. The fit was judged
by eye. Why were goodness-of-fit tests not used? They are by their
nature conservative, requiring strong evidence for rejecting a fit
(evidence not available here).
Modeling body and tail separately
With sufficient data we may see that the tail data needs to be modeled separately from the main body. We might consider fitting a
GEV to the body and a GPD to the tail, reflecting the Extreme Value
Theory properties. Three parameters each and one for the loca-
tion of the join between the two parts makes seven, but having the
two probability densities meeting smoothly provides two relations
between them, and using the same shape parameter brings the
problem down to four unknowns [Giacometti et al. (2007), Young
and Coleman (2009, p. 397)].
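A simpler variant of such a spliced model — a body distribution below a threshold u, and a GPD for the exceedances above it, joined continuously in the distribution function — can be sketched as follows. The lognormal body and all parameter values are illustrative assumptions, not the GEV-body construction with shared shape described above:

```python
import numpy as np
from scipy.stats import lognorm, genpareto

def spliced_cdf(x, u, body, tail):
    """CDF of a spliced loss model: the body distribution below the
    threshold u, and a GPD for the exceedances above it:
        F(x) = F_body(x)                               for x <= u
        F(x) = F_body(u) + (1 - F_body(u)) * G(x - u)  for x >  u
    """
    x = np.asarray(x, dtype=float)
    p_u = body.cdf(u)
    below = body.cdf(np.minimum(x, u))
    above = p_u + (1.0 - p_u) * tail.cdf(np.maximum(x - u, 0.0))
    return np.where(x <= u, below, above)

# Illustrative components: lognormal body, heavy-tailed GPD above u
body = lognorm(s=1.2, scale=2000.0)
tail = genpareto(c=0.7, scale=1500.0)
u = 10_000.0
```

By construction the two pieces agree at u; imposing a smooth density join and a shared shape parameter, as in the text, would add the extra constraints that reduce the parameter count.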
Modeling frequency
Standard statistical methods can be used to fit Poisson or negative
binomial probability distributions to frequency counts. Experience
shows that the daily, weekly, or monthly frequencies of loss events
tend to occur in a more irregular pattern than can be fitted by
either of these models.
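A minimal frequency-fitting sketch: the Poisson MLE is the sample mean, and a variance-to-mean ratio well above one signals the overdispersion that these models often fail to capture. The counts are illustrative:

```python
import numpy as np

counts = np.array([0, 2, 1, 0, 5, 0, 0, 3, 9, 0, 1, 0])  # illustrative monthly loss counts

mean, var = counts.mean(), counts.var(ddof=1)

lam = mean                  # Poisson MLE; a Poisson fit forces variance == mean

# Method-of-moments negative binomial (r, p), usable when var > mean:
#   mean = r(1-p)/p,  var = r(1-p)/p^2
p = mean / var
r = mean * p / (1.0 - p)

dispersion = var / mean     # a ratio well above 1 signals overdispersion
```

The negative binomial accommodates some overdispersion, but, as the text notes, observed loss counts are often more irregular than either model allows.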
Basel asks that we combine the fitted frequency model with the fit-
ted severity model to obtain a joint model. This loses the correlation
structure from the loss events.
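Combining the fitted frequency and severity models into an annual aggregate loss distribution is typically done by Monte Carlo; a sketch with an illustrative Poisson/lognormal pair:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, lam = 20_000, 25.0            # illustrative annual loss frequency

# For each simulated year, draw a loss count, then sum that many severities
counts = rng.poisson(lam, size=n_years)
annual = np.array([rng.lognormal(8.0, 2.0, size=n).sum() for n in counts])

el = annual.mean()                     # expected annual aggregate loss
opvar = np.quantile(annual, 0.999)     # the 99.9th percentile of the aggregate loss
```

The independence of counts and severities here is exactly the simplification the text objects to: any correlation structure among the loss events is lost.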
Some topics left out:
■■ Basel asks for correlation analysis: a big problem with small datasets. Mathematical finance has provided us with a correlation theory based on copulas, but it is not useful here.
■■ Validation techniques, such as the resampling methods of the jackknife and bootstrap [Efron and Tibshirani (1993)], can be used to obtain sampling properties of the estimates such as confidence intervals.
■■ Bayes hierarchical modeling [Medova (2000), Kyriacou and Medova (2000), Coles and Powell (1996)] treats non-stationarity by letting the GPD parameters be themselves random from distributions with parameters (hyper-parameters).
■■ Dynamic financial analysis refers to enterprise-wide integrated financial risk management. It involves (mathematical) modeling of every business line, with dynamic updating in real time.
■■ Bayes belief networks (BBNs) are acyclic graphs of nodes connected by directed links of cause and effect. The nodes are events, and the states are represented by random variables. Each node event is conditioned on every path to it. For the link A to B, the random variable X associated with event A is given by the multivariate history leading to A. The BBN requires starting probabilities for each node, and the calculation of P(B | A) where A incorporates all links to it. This is a formidable task. The introspection forces risk assessment for every activity, and, once the network is up and running, it can be used for stress testing.
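The resampling idea in the second of these topics can be sketched for a high quantile; the sample and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
losses = rng.lognormal(8.0, 2.0, size=500)     # illustrative loss sample

# Nonparametric bootstrap of the 99.5th percentile, Q(0.995)
boot = np.array([
    np.quantile(rng.choice(losses, size=losses.size, replace=True), 0.995)
    for _ in range(2000)
])
lo, hi = np.quantile(boot, [0.025, 0.975])     # 95% confidence interval
```

With 500 observations the resulting interval for a quantile this extreme is typically very wide, which is itself a useful diagnostic.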
To sum up
The point being emphasized is that no methodology on its own can provide an answer. Multiple approaches should be used, both qualitative and quantitative, to aid management in acquiring a sensitivity to data, its interpretation, and its use in decision making.
References
• Basel Committee on Banking Supervision, 2006, “International convergence of capital measurement and capital standards,” Bank for International Settlements
• Coles, S., and E. Powell, 1996, “Bayesian methods in extreme value modelling: a review and new developments,” International Statistical Review, 64, 119-136
• Cruz, M. G., 2002, Modeling, measuring and hedging operational risk, Wiley
• Efron, B., and R. Tibshirani, 1993, An introduction to the bootstrap, Chapman & Hall
• Embrechts, P., (editor), 2000, Extremes and integrated risk management, Risk Books
• Giacometti, R., S. Rachev, A. Chernobai, and M. Bertocchi, 2008, “Aggregation issues in operational risk,” The Journal of Operational Risk, 3:3, 3-23
• Giacometti, R., S. Rachev, A. Chernobai, M. Bertocchi, and G. Consigli, 2007, “Heavy-tailed distributional model for operational risk,” The Journal of Operational Risk, 3:3, 3-23
• Hill, B. M., 1975, “A simple general approach to inference about the tail of a distribution,” Annals of Statistics, 3, 1163-1174
• Kyriacou, M. N., and E. A. Medova, 2000, “Extreme values and the measurement of operational risk II,” Operational Risk, 1:8, 12-15
• Lloyd’s of London, 2009, “ICA: 2009 minimum standards and guidance,” Lloyd’s
• Medova, E. A., 2000, “Extreme values and the measurement of operational risk I,” Operational Risk, 1:7, 13-17
• Reason, J., 1997, Managing the risk of organisational accidents, Ashgate
• Reiss, R.-D., and M. Thomas, 2007, Statistical analysis of extreme values, 3rd edition, Birkhauser
• Selvaggi, M., 2009, “Analysing operational risk in insurance,” ABI Research Paper 16
• Young, B., and R. Coleman, 2009, Operational risk assessment, Wiley
1 The views expressed are personal ones and do not represent the organisations with which the author is affiliated. All errors are mine.
Hans J. Blommestein1
PwC Professor of Finance, Tilburg University and Head of Bond Markets and Public Debt, OECD
Risk management after the Great Crash
Abstract
This study takes a closer look at the role of risk (mis)management by financial institutions in the emergence of the Great Crash. It is explained that prior to the crisis too much reliance was placed on the quantitative side of risk management, while not enough attention was paid to qualitative risk management. In this context it is argued that there is an urgent need to deal more effectively with inherent weaknesses related to institutional and organizational aspects, governance issues, and incentives. More sophistication and a further refinement of existing quantitative risk management models and techniques are not the most important or effective response to the uncertainties and risks associated with a fast-moving financial landscape. In fact, too much faith in a new generation of complex risk models might lead to even more spectacular risk management problems than those we experienced during the Great Crash. Against this backdrop, the most promising approach for improving risk management systems is to provide a coherent framework for addressing systematic weaknesses and problems that are of a qualitative nature. However, given the inadequate and very imperfect academic knowledge and tools that are available, risk management as a scientific discipline is not capable of dealing adequately with fundamental uncertainties in the financial system, even if a coherent picture of the aforementioned qualitative issues and problems is provided. This perspective provides additional motivation to those authorities that are contemplating constraining or restructuring parts of the architecture of the new financial landscape.
2 There are various channels through which an unsound incentive structure can have an adverse structural influence on the pricing of financial assets. For example, Fuller and Jensen (2002) illustrate (via the experiences of Enron and Nortel) the dangers of conforming to market pressures for growth that is essentially impossible to achieve, leading to an overvalued stock. Other channels through which perverse incentives are transmitted originate from situations where traders and CEOs pursue aggressive risk strategies while facing largely an upside in their reward structures (and hardly any downside). For example, successful traders can make fortunes, while those that fail simply lose their jobs (in most cases moving on to a trading job at another financial institution). CEOs of institutions that suffer massive losses walk away with very generous severance and retirement packages. There are also fundamental flaws in the bonus culture. Costs and benefits associated with risk-taking are not equally shared, and the annual merry-go-round means financial institutions can end up paying bonuses on trades and other transactions that subsequently prove extremely costly [Thal Larsen (2008)]. Rajan (2008) points out that bankers' pay is deeply flawed. He explains that employees at banks (CEOs, investment managers, traders) generate jumbo rewards by creating bogus 'alpha' through hiding long-tail risks. In a similar spirit, Moody's arrives at the very strong conclusion that the financial system suffers from flawed incentives that encourage excessive risk-taking [Barley (2008)].
The literature on the causes of the global credit/liquidity crisis
(Great Crash for short) is growing exponentially [see, for example,
Senior Supervisors Group (2008), FSA (2009), BIS (2009), among
others]. The many studies and policy reports reveal serious failures
at both the micro- (financial institutions) and macro-levels (financial
system as a whole) and mistakes made by different actors (bankers,
rating agencies, supervisors, monetary authorities, etc.). This paper
(i) takes a closer look at the role of risk (mis)management by finan-
cial institutions in the creation of the Great Crash; and (ii) outlines
the most promising ways for improving risk management, in par-
ticular by paying much more attention to qualitative risk manage-
ment issues. However, it will also be explained why it would still be
impossible to prevent major crises in the future, even if a coherent
picture involving qualitative issues or aspects would be provided.
However, in doing so, we do not exclude the importance of the role
of other (important) factors in the origin of the Great Crash. On the
contrary, mistaken macroeconomic policies as well as deficiencies
in the official oversight of the financial system were also significant
factors in the global financial crisis, especially in light of global imbal-
ances and the emergence of systemic risks and network externali-
ties during the evolution of the crisis. However, most commentators
would agree that serious failures in risk management at the level of
major (and even some medium-sized) financial institutions played an
important role in the origin and dynamics of the Great Crash.
Why did risk management fail at the major financial institutions?

The conventional storyline before the Great Crash was that risk
management as a science had made considerable progress in the
past two decades or so and that financial innovations such as risk
transfer techniques had actually made the balance sheets of financial institutions stronger. However, my analysis of risk management failures associated with the global financial crisis, supplemented by insights gained from studying earlier crisis episodes such as the crash of LTCM in 1998 [Blommestein (2000)], uncovers deep flaws in the effectiveness of the risk management tools used by financial institutions.
The crisis has not only shown that many academic theories were
(are) not well-equipped to properly price the risks of complex
instruments such as CDOs [Blommestein (2008b, 2009)], espe-
cially during market down-turns [Rajan (2009)], but also that
risk management methodologies and strategies based on basic
academic finance insights were not effective or even misleading.
A core reason is that academic risk-management methodologies
are usually developed for a system with ‘well-behaved or governed’
financial institutions and markets that operate within a behavioral
framework with ‘well-behaved’ incentives (that is, risk management
is not hampered by dysfunctional institutions, markets, or financial
instruments and/or undermined by perverse incentives).
In this paper, I will, therefore, focus not only on the quantitative dimension of risk management systems, such as the mispricing of risks (of complex products) and measurement problems, but also on the qualitative dimension, covering issues such as the above-mentioned 'practical' obstacles of an institutional, organizational, and incentive nature. In addition to the well-documented role of the perverse incentives associated with misconstrued compensation schemes, it will be argued that the institutional, or organizational, embedding of risk management is of crucial importance as well. It will be suggested that the qualitative dimension of risk management is as important as (or perhaps even more important than) the measurement or quantitative side of the risk management process.
This view implies that even if the risk management divisions of
these financial institutions had acted with the longer-term inter-
ests of all stakeholders in mind (which many of them did not), they
would still have had an uphill battle in effectively managing the risks
within their firms because of (a) the inadequate tools that were at
their disposal (based, crucially, on insights from academic finance);
(b) the complex organizational architecture, and sometimes dys-
functional institutional environment, in which many risk managers
had (have) to operate; and (c) excessive risk-taking associated
with business strategies that incorporated perverse compensa-
tion schemes. In other words, had the risk management divisions
of these institutions effectively implemented the state-of-the-art
tools that were provided to them by academic finance (includ-
ing, crucially, reliable information about the riskiness of complex
financial instruments such as structured products), they would still
have to struggle with the complex institutional or organizational
embedding of risk management situations as well as the various
channels through which an unsound incentive structure (operating
between financial institutions and the market) can have an adverse
structural impact on the pricing of financial assets2.
This perspective can then also be used to explain why it is very
difficult or even impossible to effectively manage the totality of
complex risks faced by international banks and other financial insti-
tutions. More specifically, it is the reason why effective enterprise-
wide risk management is an extremely hard objective, especially in
a rapidly-changing environment. This echoes a conclusion from a
2005 paper on this topic: “But successful implementation of ERM
is not easy. For example, a recent survey by the Conference Board,
a business research organization, shows that only 11 per cent of
companies have completed the implementation of ERM processes,
while more than 90 per cent are building or want to build such a
framework” [Blommestein (2005b)].
For all these reasons we have to have a realistic attitude towards
the practical capacity of risk management systems, even advanced
ones based on the latest quantitative risk measures developed in
the academic literature. An additional reason to take a very modest
view on real abilities of available risk control technologies is the fact
that academically developed risk measures predominantly deal with
market-and-credit risk. Academic finance has much less to say about
the analytical basis for liquidity risk, operational risk, and systemic
risk. Unfortunately, the latter types of risks played major roles in the
origin of the Great Crash [Blommestein (2008a)], showing that they
were not adequately diagnosed, managed, and/or supervised.
Against this backdrop, we will, first, cast a critical eye at what has
been suggested to be the principal cause of the recent financial cri-
sis: the systemic mispricing of risks and the related degeneration of
the risk management discipline into a pseudo quantitative science.
After that we will show that the disappearance or weakening of due
diligence by banks in the securitization process was an important
crisis-factor that was not detected by conventional, quantitative
risk management systems. In fact, it is another important example
why individual financial institutions can fail, and complete systems
collapse, when not enough attention is being paid to the qualitative
dimension of risk management [Blommestein et al. (2009)].
How important was the mispricing of risks?

An increasingly interconnected and complex financial system made
it harder to price risks correctly. Both market participants and
supervisors underestimated the increase in systemic risk. Early on
I concluded in this context: “The sheer complexity of derivatives
instruments, coupled with consolidation in the financial industry,
has made it increasingly hard for regulators and bankers to assess
levels of risk. In the credit derivatives market, risks that have been
noted include a significant decline in corporate credit quality, little
information on counterparties, operational weaknesses that may
result from the novelty of these instruments, and a disincentive to
manage actively portfolio credit risk. As a result, systemic risk in
this complex, often opaque financial landscape is likely to be higher
than before” [Blommestein (2005b)].
Although risk managers had more rigorous risk management tools
at their disposal than in the past, the rapidly changing financial
landscape (characterized by more complex products and markets,
a higher level of systemic risk, and increased financial fragility)
weakened the applicability and conditions under which these quan-
titative tools and techniques can be used effectively. Many market
participants (including sophisticated ones) had difficulties in under-
standing the nature and pricing of new products and markets, due
to the sheer complexity of many new financial instruments and the
underlying links in the new financial landscape. In a 2005 study
I noted: “Even sophisticated market participants might, at times,
have difficulties understanding the nature of these new products
and markets. Consequently, risks may be seriously mispriced. In
the market for collateralized debt obligations (CDOs), the high pace
of product development requires the rapid adaptation of pricing
machines and investment strategies. Although the ability to value
risky assets has generally increased, concerns have been raised
about the complex risks in this fast-growing market segment and,
more to the point, whether investors really understand what they
are buying” [Blommestein (2005b)].
Moreover, as explained above, securitization was adversely affected
by problems with incentives and information as well as the pricing
of tail events [Cecchetti (2009)]. More generally, a number of widely
held assumptions proved to be wrong and costly, in particular that
the originate-and-distribute (securitize) model would decrease
residual risk on the balance sheet of banks, that the growing use
of credit risk transfer instruments would result in better allocated
risks and a more stable banking sector, and that ample market
liquidity would always be available.
The outbreak of the financial crisis proved these assumptions
wrong, whereby the dark side of the risk paradox became visible
[Blommestein (2008c)]. In effect, it became increasingly clear that
there had been a significant and widespread underestimation of
risks across financial markets, financial institutions, and countries
[Trichet (2009)]. For a variety of reasons, market participants did
not accurately measure the risk inherent in financial innovations
and/or understand the impact of financial innovations on the over-
all liquidity and stability of the financial system. Indeed, there is
growing evidence that some categories or types of risks associated
with financial innovations were not internalized by markets; for
example, tail risks were underpriced and systematic risk (as exter-
nality) was not priced, or was priced inadequately. This widespread
and systemic underestimation of risks turned out to be at the core
of the financial crisis. At the same time, the availability of sophisti-
cated quantitative risk tools created a false sense of security and
induced people to take greater risks. “Professional enthusiasm
about new risk control technology may give rise to overconfidence
and even hubris” [Helwig (2009)].
The underestimation of risks reflected, to an important degree, mistakes in both the strategic use of risk management systems and technically inadequate risk management tools. During the unfolding of the crisis, many financial institutions revealed a huge
3 Let us focus on operational risk (op risk) in a large bank as an example. Shojai and Feiger (2010) note that many institutions have only recently started to come to grips with the fact that operational risk is a major risk fraught with obstacles. First, quantification is a very challenging task. Second, once op risk has been measured properly, we need to be able to compute the operational risks of each division. Third, how do firms aggregate op risk across the organisation as a whole? Many larger institutions still segregate their different businesses (perhaps for good reasons). Hence it is nearly impossible (a) to quantify operational risk for the group and (b) to determine what diversification benefits could be derived. Moreover, in some banks, FX, credit, and equities each have their own quant teams, whose aggregate risks no one at the bank can really understand (or even for the financial system as a whole) [Patterson (2010)].
4 See Patterson (2010) for a non-technical account of the role of Process Driven Trading (PDT) during the crisis. The formulas and complicated models of quants traded huge quantities of securities, and as the housing market began to crash, the models collapsed. In the words of Patterson (2010): "The result was a catastrophic domino effect. The rapid selling scrambled the models that quants used to buy and sell stocks, forcing them to unload their own holdings. By early August, the selling had taken on a life of its own, leading to billions in losses. The meltdown also revealed dangerous links in the financial system few had previously realized — that losses in the U.S. housing market could trigger losses in huge stock portfolios that had nothing to do with housing. It was utter chaos driven by pure fear. Nothing like it had ever been seen before. This wasn't supposed to happen!"
5 Some academic economists were certainly aware of the limitations and weaknesses of these models for use in the financial sector. For example, Merton (1994) gave the following general warning: "The mathematics of hedging models are precise, but the models are not, being only approximations to the complex, real world. Their accuracy as a useful approximation to that world varies considerably across time and place. The practitioner should therefore apply the models only tentatively, assessing their limitations carefully in each application." However, other academics and many users of academic models in the financial industry were often ill-informed and ignorant about the deeper weaknesses of using these kinds of models across time and different market places [Blommestein (2009)].
6 Ironically enough, the October 1987 crash marked the birth of VAR as a key risk management tool. For a very brief history of the birth of VAR, see Haldane (2009).
concentration of risks, suggesting that risk management systems
failed (a) to identify key sources of risks, (b) to assess how much
risk was accumulated, and (c) to price financial risks properly (or to
use reliable market prices, in particular for structured products). The
underlying problem was that risk management did not keep pace with
the risks and uncertainty inherent in financial innovations and the
fast-changing financial landscape. Risk managers placed too much
trust into the existing risk models and techniques (see below), while
underlying assumptions were not critically evaluated. Unfortunately,
the use of these models proved to be inadequate, both from a techni-
cal and a conceptual point of view. On top of this, risk management
fell short from a qualitative perspective; that is, too little attention
was paid to corporate governance processes, the architecture and
culture of organizations, business ethics, incentives, and people.
Fatal flaws in the origination and securitization process: failures in quantitative and qualitative risk management

The securitization of mortgages and financial innovations such as the CDO and CDS markets came under heavy
criticism as being an important cause of the global financial crisis
[Blommestein (2008a); Tucker (2010); Jacobs (2009)]. Risks were
significantly underpriced [Blommestein (2008b)] (in particular by
rating agencies) while risk management systems failed.
Naturally, regulators and central bankers also made mistakes. Most studies, however, seem to suggest that had we had a better understanding of the correct pricing of complex structured products such as CDOs, CLOs, and CDSs over the cycle, we might have been able to prevent the severity of the global financial crisis. For example, popular CDO pricing models such as the Gaussian copula function rest on the dubious key assumption that correlations are constant over the cycle. This reasoning implies that had we been able to employ a far superior method (in terms of accuracy and/or robustness) than the Gaussian copula function, we would have valued structured products more accurately over the cycle. As a result, so the argument goes, the Great Crash would not have occurred.
The limits of pricing models and quantitative risk management
However, the conclusion that better quantifications would have
prevented a major crisis can be challenged on the following three
key grounds. First, as noted above, pricing in the fast-moving,
complex financial landscape is a huge challenge. Pricing models
are therefore subjected to significant model risk. For example, the
foundation of the pricing of risk in structured products such as
CDOs and CDSs is based on the key theoretical notion of perfect
replication. Naturally, perfect replication does not exist in reality and has to be approximated using historical data, which in many cases are very incomplete and of poor quality. Researchers and practitioners therefore had to rely on simulation-based pricing machines. The input for these simulations was very shaky, as it was based on "relatively arbitrary assumptions on correlations between risks and default probabilities" [Colander et al. (2009)].
Second, many institutions have major difficulties in quantifying
the ‘regular’ risks associated with credit and market instruments.
However, the Great Crash demonstrated that this was even more
so the case for firms’ operational and liquidity risks. These compli-
cations multiply when one tries to aggregate the various risks of
divisions or departments within larger financial institutions3.
Third, from a more conceptual perspective, the financial crisis
brought to light that the risk management discipline had developed
too much into a pseudo quantitative science with pretensions beyond
its real risk management capabilities4. The over-reliance on sophis-
ticated though inadequate risk management models and techniques
contributed to a false sense of security [Honohan (2008)]. Indeed,
many professionals were too confident in the ability of quantitative
models to reliably measure correlations and default probabilities
[Helwig and Staub (1996)]. It was assumed that quantitative risk
management models represented stable and reliable stochastic
descriptions of reality. Ironically, by relying to an increasing degree
on sophisticated mathematical models and techniques, the risk man-
agement discipline lost its ability to deal with the fundamental role
of uncertainty in the financial system5. In addition to this fundamen-
tal methodological problem, the financial crisis revealed technical
failures in risk management in the sense that even sophisticated
methods and techniques turned out not to be refined enough. At
the core of many risk management systems was (is) the concept of
Value-At-Risk (VAR), which became a key tool in the quantification of
risk, the evaluation of risk/return tradeoffs, and in the disclosure of
risk appetite to regulators and shareholders6. This concept is effec-
tively based on the idea that the analysis of past price movement
patterns could deliver statistically robust inferences relating to the
7 A similar problem would occur if one focused only on 'macro' factors such as global imbalances and low interest rate policies. Even if interest rates had not been kept so low for as long as they were, or global imbalances had been smaller, we would still not have been able to prevent the erosion of financial stability and the structural weaknesses in the financial sector (in both the banking sector and security markets). In retrospect, the global crisis of the new financial landscape was an accident waiting to happen due to (semi-)hidden unsound structural features (see below).
8 Rajan (2009) notes in this context that "originators could not completely ignore the true quality of borrowers because they were held responsible for initial defaults." However, he concludes that even this weak source of discipline was undermined by steadily rising housing prices.
9 A clear example is the trading strategy used by a number of Citigroup employees. On 2 August 2004, Citigroup pushed through €11 billion in paper sales in two minutes over the automated MTS platform, throwing the market into confusion. As the value of futures contracts fell and traders moved to cover their positions, Citigroup reentered the market and bought back about €4 billion of the paper at cheaper prices. The strategy was dubbed 'Dr Evil' in an internal e-mail circulated by the traders. In 2007, an Italian court indicted seven (by that time former) Citigroup traders on charges of market manipulation in the sale and repurchase of government bonds on the MTS electronic fixed income network. ("Citi bond traders indicted over 'Dr Evil' trade," http://www.finextra.com/fullstory.asp?id=17210, 19 July 2007.)
probability of price movements in the future [FSA (2009)]. However,
the financial crisis revealed severe problems with applying the VAR
concept to the world of complex longer-term social and economic
relationships [Danielsson (2002)].
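The inference problem described here can be illustrated with a minimal historical-simulation VAR sketch (the return windows and confidence levels below are hypothetical examples, not data from any actual portfolio):

```python
def historical_var(returns, confidence=0.99):
    """Historical-simulation Value-at-Risk: take the empirical
    distribution of past returns as the forecast of future ones,
    and read off the loss exceeded only (1 - confidence) of the time."""
    ordered = sorted(returns)  # worst returns first
    # Index of the (1 - confidence) quantile of the empirical distribution.
    cutoff = int((1.0 - confidence) * len(ordered))
    return -ordered[cutoff]  # report VaR as a positive loss number

# A window drawn entirely from a calm period yields a reassuringly
# small VaR; the estimate is silent about regimes absent from the window.
calm_window = [0.002, -0.003, 0.001, -0.002, 0.004, -0.001, 0.003, -0.004]
print(historical_var(calm_window, confidence=0.875))  # prints 0.003
```

The sketch makes the critique mechanical: any price movement that never occurred inside the estimation window is implicitly assigned probability zero, so the measure breaks down precisely when the structure of the underlying economic relationships shifts.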
Complex risks and uncertainties and the importance of qualitative risk management

Focusing too much on the technical intricacies of models for the 'correct' pricing of these assets, although important, means that we ignore other, often neglected, crucial reasons why markets for securitized assets became so big and fragile and finally collapsed7. In fact, as noted before, the quantitative approach to risk management does not fully cover the range of important risks and uncertainties.
First, the 'originate-to-securitize' process (and its embedded risks) was capable of generating poorly understood negative spillovers via the shadow banking sector to commercial banks [Tucker (2010)].
Second, the originate-to-securitize business model or process had the (unintended) consequence of fatally weakening the due diligence undertaken by originators8. With the originators relinquishing their
role as the conductors of due diligence it was left to the credit
rating agencies (CRAs) to fill this information gap. But, on top of
their inadequate pricing methodologies, CRAs never had sufficient
access to the required information about underlying borrowers to
have any idea of their true state of health. That crucial information
was in principle in the hands of the issuing bank, but, as noted, they
had stopped caring about collecting that kind of information when
they started selling the mortgages onto other investors [Keys et al.
(2010)]. So, the rating agencies had to use aggregate data to rate
these instruments, or to rely on the credit quality of the insurer
who had provided credit enhancement to the security [Fabozzi and
Kothari (2007)]. In either case, neither the credit enhancer nor the
rating agency had any idea about the underlying quality of the bor-
rowers [Shojai and Feiger (2010)].
Third, perverse incentives, associated with flawed compensation structures, kept valuations away from their 'true' (equilibrium) prices [Blommestein (2008b)]. As a result, excessive risk-taking manifested itself in part through asset bubbles with significantly underpriced risks. Moreover, experience and tests show
that humans have an ingrained tendency to underestimate outliers
[Taleb (2007)] and that asset markets have a tendency to gener-
ate a pattern of bubbles (with prices much higher than the intrinsic
value of the asset), followed by crashes (rapid drops in prices)
[Haruvy et al. (2007)].
However, prior to the crisis, these underlying structural problems
did not dampen the demand from institutional investors for AAA
paper. Institutional investors believed that it was possible to
squeeze out a large quantity of paper rated AAA via the slicing and
dicing through repeated securitization of the original package of
assets (mortgages, other loans). Consequently, all that was needed
to expose the underlying weaknesses was a correction in house
prices, which is exactly what happened. Moreover, it became painfully clear that the complex securities issued by CDOs were very hard to value, especially when housing prices started to drop and defaults began to increase.
The key insight of this overview is that modern risk transfer
schemes (that reallocate risk, or, in many cases more accurately
stated, uncertainty) may undermine due diligence (and prudence
more generally), especially in combination with compensation
schemes that encourage excessive risk-taking. This structural lack
of prudential behavior infected not only the structured finance seg-
ments but the entire financial system. All types of players (bankers,
brokers, rating agencies, lawyers, analysts, etc.) were operating under business plans with an (implicit) short-term horizon that put institutions and systems at risk. Deeply flawed incentive schemes
encouraged dangerous short-cuts and excessive risk-taking, but also unethical practices9. The culture of excessive risk-taking and dubious ethics [Blommestein (2005a)] in the banking industry spread like a virus over the past decades and became firmly entrenched [Blommestein (2003)]. Even when the top management of banks aims to maximize long-term bank value, it may be extremely hard to impose incentives and control systems that are consistent with this objective. In fact, prior to the crisis, there was a move away from this long-term objective, with an increasing number of bank CEOs encouraging business strategies based on aggressive risk-taking. This in turn engendered excessive risk-taking and unethical practices within firms at all levels (traders, managers, CEOs).
Hence, a relatively small and local crisis could transform itself into
the Great Crash.
Clearly, the many complex and new risks and uncertainties in the
fast-moving financial landscape could not be effectively diagnosed
and managed via a purely quantitative approach. In fact, it encour-
aged additional risk-taking induced by a false sense of confidence in
sophisticated risk-control technologies. We, therefore, need a para-
10 This point is also emphasized by the CFO of the Dutch KAS Bank. He observes that many risk management models work fine when there is no crisis. However, they can fail spectacularly during a crisis because of left-out factors. It is important (1) to be aware of which risk factors have been (deliberately) omitted and why, and (2) to know which actions to take when a crisis erupts [Kooijman (2010)].
11 Of interest is that the conclusions of a 2008 report on risk management practices during the crisis, drafted by a group of 8 financial supervisors from 5 countries, focused predominantly on organizational and institutional issues [Senior Supervisors Group (2008)].
digm shift in risk management that also includes an assessment
of uncertainties through the lens of qualitative risk management.
Only in this way would we be able to tackle the adverse influences
of organizational issues, human behavior, and incentives schemes
[Blommestein (2009)]. This would also allow us to account for the
fact that all risk measurements systems are far more subjective
than many experts want to accept or admit. Empirical risk control
systems are the result of subjective decisions about what should be
incorporated into the risk model and what should not10.
Conclusions

The first key conclusion is that prior to the Great Crash too much reliance was placed on the quantitative side of risk management and too little on the qualitative dimension. In this context it was argued that there is an urgent need to deal effectively with inherent weaknesses related to institutional, organizational, governance, and incentive aspects11. More sophistication and further refinement of existing quantitative risk management models and techniques is not the most important or effective response to the uncertainties and risks associated with a fast-moving financial landscape. In fact, too much faith in a new generation of complex risk models might lead to even more spectacular risk management problems than those witnessed during the last decade. Instead, as
noted by Blommestein et al. (2009), a more holistic and broader
approach to risk management is needed as part of a paradigm shift
where more attention is given to the qualitative dimension of risk
management.
A final key finding is that it would still be impossible to prevent major crises in the future, even if a coherent picture of the aforementioned qualitative issues were provided. The underlying epistemological reason is the (by definition) imperfect state
of academic knowledge about new uncertainties and risks associ-
ated with a fast-moving society, on the one hand, and the inher-
ently inadequate risk-management responses that are available as
tools to risk managers, their top management and, indeed, also to
their supervisors, on the other. Indeed, we have shown in a related
analysis that the Great Crash is another illustration of the fact that
risk management as a scientific discipline is not capable of dealing
adequately with fundamental uncertainties in the financial system
[Blommestein et al. (2009)]. From this perspective it is therefore no surprise that some authorities are considering constraining or restructuring parts of the architecture of the new financial landscape [see Annex below and Group of Thirty (2009)].
Annex: structural weaknesses waiting to erupt in the new financial landscape
Tucker (2010) recently analyzed the question of whether the struc-
ture or fundamental architecture of the new financial landscape
needs to be constrained or restructured by the authorities. In doing
so he focused on a key weakness in the new financial landscape: the
shadow banking sector.
In Tucker’s analysis, the dangerous side of ‘shadow banking’ refers
to “those instruments, structures, firms or markets which, alone or
in combination, and to a greater or lesser extent, replicate the core
features of commercial banks: liquidity services, maturity mismatch
and leverage.” Un(der)regulated shadow banking activities can
then create an unstable and fragile banking sector. For example,
the money fund industry is a major supplier of short-term funding
to banks, while its own maturity mismatch served to mask the true
liquidity position of the banking sector. This in turn fatally injected
additional fragility into the financial system as a whole. Warnings
were published quite a few years ago [Edwards (1996)], and Paul
Volcker, former chairman of the U.S. Federal Reserve, is reported to
have expressed serious concerns at internal Federal Reserve meet-
ings around thirty years ago [Tucker (2010)].
So, as in the case of the ‘originate-to-securitize’ process, this
was an example of a structural weakness waiting to erupt, although
the wait was longer. During the global financial crisis both
became a reality. “When the Reserve Fund ‘broke the buck’ after
Lehman’s failure, there was a run by institutional investors” …
“Echoing Paul Volcker’s concerns, the Bank of England believes
that constant-NAV money funds should not exist in their current
form” [Group of Thirty (2009)].
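The ‘breaking the buck’ mechanics referred to above can be sketched numerically. A constant-NAV fund reports a stable U.S.$1.00 share price as long as its mark-to-market (‘shadow’) NAV stays within half a cent of par; when the shadow NAV drops below $0.995 it can no longer round to $1.00 and the fund ‘breaks the buck.’ The sketch below uses hypothetical portfolio figures (not taken from the text); the half-cent rounding convention is the standard U.S. money fund rule of thumb, not something Tucker (2010) spells out:

```python
def shadow_nav(asset_value: float, shares_outstanding: float) -> float:
    """Mark-to-market ('shadow') net asset value per share."""
    return asset_value / shares_outstanding

def breaks_the_buck(nav_per_share: float, threshold: float = 0.995) -> bool:
    """A constant-NAV fund 'breaks the buck' when its shadow NAV
    falls below $0.995, i.e., no longer rounds to $1.00."""
    return nav_per_share < threshold

# Hypothetical fund: $50bn of assets backing 50bn shares at a stable $1.00.
shares = 50e9
assets = 50e9
print(breaks_the_buck(shadow_nav(assets, shares)))  # False: NAV holds at par

# A 3% write-down on holdings (e.g., defaulted commercial paper)
# pushes the shadow NAV to roughly $0.97 -- the fund breaks the buck,
# as the Reserve Fund did after Lehman's failure.
assets_after_loss = assets * 0.97
print(breaks_the_buck(shadow_nav(assets_after_loss, shares)))  # True
```

Even a small mark-to-market loss is enough: the stable $1.00 promise is what converts a modest portfolio write-down into a trigger for an institutional-investor run.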
Risk management after the Great Crash
References
• Barley, R., 2008, “Ability to track risk has shrunk ‘forever’ – Moody’s,” Reuters.com, 6 January
• Blommestein, H. J., 2009, “The financial crisis as a symbol of the failure of academic finance? (A methodological digression),” Journal of Financial Transformation, Fall
• Blommestein, H. J., 2008a, “Grappling with uncertainty,” The Financial Regulator, March
• Blommestein, H. J., 2008b, “Difficulties in the pricing of risks in a fast-moving financial landscape (A methodological perspective),” Journal of Financial Transformation, 22
• Blommestein, H. J., 2008c, “No going back (derivatives in the long run),” Risk, August
• Blommestein, H. J., 2006, The future of banking, SUERF Studies
• Blommestein, H. J., 2005a, “How to restore trust in financial markets?” in Dembinski, P. H., (ed.), Enron and world finance – a case study in ethics, Palgrave
• Blommestein, H. J., 2005b, “Paying the price of innovative thinking,” Financial Times (Mastering Risk), 23 September
• Blommestein, H. J., 2003, “Business morality audits are needed,” Finance & Common Good, No. 15, Summer
• Blommestein, H. J., 2000, “Challenges to sound risk management in the global financial landscape,” Financial Market Trends, No. 75, April, OECD
• Blommestein, H. J., L. H. Hoogduin, and J. J. W. Peeters, 2009, “Uncertainty and risk management after the Great Moderation: the role of risk (mis)management by financial institutions,” paper presented at the 28th SUERF Colloquium on “The Quest for Stability,” 3-4 September 2009, Utrecht, The Netherlands, forthcoming in the SUERF Studies Series
• Boudoukh, J., M. Richardson, and R. Stanton, 1997, “Pricing mortgage-backed securities in a multifactor interest rate environment: a multivariate density estimation approach,” Review of Financial Studies, 10, 405-446
• Cecchetti, S. G., 2009, “Financial system and macroeconomic resilience,” opening remarks at the 8th BIS Annual Conference, 25-26 June 2009
• Colander, D., H. Föllmer, A. Haas, M. Goldberg, K. Juselius, A. Kirman, and T. Lux, 2009, “The financial crisis and the systemic failure of academic economics,” Kiel Working Papers No. 1489, Kiel Institute for the World Economy
• Danielsson, J., 2002, “The emperor has no clothes: limits to risk modelling,” Journal of Banking and Finance, 26:7, 1273-1296
• Edwards, F., 1996, The new finance: regulation and financial stability, American Enterprise Institute
• Fabozzi, F. J., and V. Kothari, 2007, “Securitization: the tool of financial transformation,” Journal of Financial Transformation, 20, 33-45
• Financial Services Authority, 2009, “The Turner review: a regulatory response to the global banking crisis,” March
• Fuller, J., and M. C. Jensen, 2002, “Just say no to Wall Street,” Journal of Applied Corporate Finance, 14:4, 41-46
• Group of Thirty, 2009, “Financial reform: a framework for financial stability,” 15 January
• Haldane, A. G., 2009, “Why banks failed the stress test,” speech, 13 February
• Haruvy, E., Y. Lahav, and C. N. Noussair, 2007, “Traders’ expectations in asset markets: experimental evidence,” American Economic Review, 97:5, 1901-1920
• Honohan, P., 2008, “Risk management and the cost of the banking crisis,” IIIS Discussion Paper, No. 262
• Jacobs, B. I., 2009, “Tumbling tower of Babel: subprime securitization and the credit crisis,” Financial Analysts Journal, 66:2, 17-31
• Jensen, M. C., and W. H. Meckling, 1976, “Theory of the firm: managerial behavior, agency costs and ownership structure,” Journal of Financial Economics, 3:4, 305-360
• Jorion, P., 2006, Value at Risk: the new benchmark for managing financial risk, 3rd ed., McGraw-Hill
• Keys, B. J., T. Mukherjee, A. Seru, and V. Vig, 2010, “Did securitization lead to lax screening? Evidence from subprime loans,” Quarterly Journal of Economics, forthcoming
• Kooijman, R., 2010, “Risk management: a managerial perspective,” presentation at the IIR Seminar on Risk Management for Banks 2010, Amsterdam, 26-27 January
• Kuester, K., S. Mittnik, and M. S. Paolella, 2006, “Value-at-Risk prediction: a comparison of alternative strategies,” Journal of Financial Econometrics, 4:1, 53-89
• Markowitz, H., 1952, “Portfolio selection,” Journal of Finance, 7:1, 77-91
• Merton, R. C., 1994, “Influence of mathematical models in finance on practice: past, present and future,” Philosophical Transactions of the Royal Society, 347:1684, 451-462
• Patterson, S., 2010, The quants: how a new breed of math whizzes conquered Wall Street and nearly destroyed it, Random House
• Rajan, R. G., 2009, “The credit crisis and cycle-proof regulation,” Federal Reserve Bank of St. Louis Review, September/October
• Rajan, R., 2008, “Bankers’ pay is deeply flawed,” Financial Times, 8 January
• Senior Supervisors Group, 2008, “Observations on risk management practices during the recent market turbulence,” 6 March
• Shojai, S., and G. Feiger, 2010, “Economists’ hubris: the case of risk management,” Journal of Financial Transformation, forthcoming
• Taleb, N. N., 2007, The black swan: the impact of the highly improbable, Random House, New York
• Thal Larsen, P., 2008, “The flaw in the logic of investment banks’ largesse,” Financial Times, 6 January
• Trichet, J.-C., 2009, “(Under-)pricing of risks in the financial sector,” speech, 19 January
• Tucker, P., 2010, “Shadow banking, capital markets and financial stability,” remarks at the Bernie Gerald Cantor (BGC) Partners Seminar, Bank of England, London, 21 January
Manuscript guidelines
All manuscript submissions must be in English.
Manuscripts should not be longer than 7,000 words each. The maximum number of A4 pages allowed is 14, including all footnotes, references, charts and tables.
All manuscripts should be submitted by e-mail directly to [email protected] in the PC version of Microsoft Word. They should use the Times New Roman font, size 10.
Where tables or graphs are used in the manuscript, the respective data should also be provided in Microsoft Excel spreadsheet format.
The first page must provide the full name(s), title(s), organizational affiliation of the author(s), and contact details of the author(s). Contact details should include address, phone number, fax number, and e-mail address.
Footnotes should be double-spaced and be kept to a minimum. They should be numbered consecutively throughout the text with superscript Arabic numerals.
for monographs
Aggarwal, R., and S. Dahiya, 2006, “Demutualization and cross-country merger of exchanges,” Journal of Financial Transformation, Vol. 18, 143-150
for books
Copeland, T., T. Koller, and J. Murrin, 1994, Valuation: Measuring and Managing the Value of Companies, John Wiley & Sons, New York, New York
for contributions to collective works
Ritter, J. R., 1997, “Initial Public Offerings,” in Logue, D., and J. Seward, eds., Warren Gorham & Lamont Handbook of Modern Finance, South-Western College Publishing, Ohio
for periodicals
Griffiths, W., and G. Judge, 1992, “Testing and estimating location vectors when the error covariance matrix is unknown,” Journal of Econometrics, 54, 121-138
for unpublished material
Gillan, S., and L. Starks, 1995, “Relationship investing and shareholder activism by institutional investors,” working paper, University of Texas
Guidelines for manuscript submissions
Guidelines for authors
In order to aid our readership, we have established some guidelines to ensure that published papers meet the highest standards of thought leadership and practicality. The articles should, therefore, meet the following criteria:
1. Does this article make a significant contribution to this field of research?
2. Can the ideas presented in the article be applied to current business models? If not, is there a road map on how to get there?
3. Can your assertions be supported by empirical data?
4. Is my article purely abstract? If so, does it picture a world that can exist in the future?
5. Can your propositions be backed by a source of authority, preferably yours?
6. Would senior executives find this paper interesting?
subjects of interest
All articles must be relevant and interesting to senior executives of the leading financial services organizations. They should assist in strategy formulation. The topics that are of interest to our readership include:
• Impact of e-finance on financial markets & institutions
• Marketing & branding
• Organizational behavior & structure
• Competitive landscape
• Operational & strategic issues
• Capital acquisition & allocation
• Structural readjustment
• Innovation & new sources of liquidity
• Leadership
• Financial regulations
• Financial technology
Manuscript submissions should be sent to
Prof. Shahin Shojai, Ph.D.
The editor
[email protected]

Capco
Broadgate West
9 Appold Street
London EC2A 2AP
Tel: +44 207 426 1500
Fax: +44 207 426 1501
The world of finance has undergone tremendous change in recent years. Physical barriers have come down and organizations are finding it harder to maintain competitive advantage within today’s truly global marketplace. This paradigm shift has forced managers to identify new ways to manage their operations and finances. The managers of tomorrow will, therefore, need completely different skill sets to succeed.
It is in response to this growing need that Capco is pleased to publish the Journal of Financial Transformation, a journal dedicated to the advancement of leading thinking in the field of applied finance.
The Journal, which provides a unique linkage between scholarly research and business experience, aims to be the main source of thought leadership in this discipline for senior executives, management consultants, academics, researchers, and students. This objective can only be achieved through relentless pursuit of scholarly integrity and advancement. It is for this reason that we have invited some of the world’s most renowned experts from academia and business to join our editorial board. It is their responsibility to ensure that we succeed in establishing a truly independent forum for leading thinking in this new discipline.
You can also contribute to the advancement of this field by submitting your thought leadership to the Journal.
We hope that you will join us on our journey of discovery and help shape the future of finance.
Prof. Shahin Shojai
[email protected]
Request for papers — Deadline 8 July, 2010
For more info, see opposite page
2010 The Capital Markets Company. VU: Prof. Shahin Shojai,
Prins Boudewijnlaan 43, B-2650 Antwerp
All rights reserved. All product names, company names and registered trademarks in
this document remain the property of their respective owners.
Design, production, and coordination: Cypres — Daniel Brandt and Pieter Vereertbrugghen
© 2010 The Capital Markets Company, N.V.
All rights reserved. This journal may not be duplicated in any way without the express written consent of the publisher except in the form of brief excerpts or quotations for review purposes. Making copies of this journal or any portion thereof for any purpose other than your own is a violation of copyright law.
www.capco.com
Capco offices
Amsterdam
Antwerp
Bangalore
Chicago
Frankfurt
Geneva
London
Luxembourg
Mumbai
New York
Paris
Pune
San Francisco
Toronto
Washington DC
T +32 3 740 10 00