
Understanding Infant Mortality: The Econometric Game Reports

Edition 64, Volume 17, July 2009

Magazine for students in Actuarial Sciences, Econometrics & Operations Research

Integrated Anticipatory Control of Road Networks

Solvency 2: an Analysis of the Underwriting Cycle with Piecewise Linear Dynamical Systems

Meet Your Meat

Interview with Han de Jong

Deflator: Bridge Between Real World Simulations and Risk Neutral Valuation

Portfolio Insurance: Does Complexity Lead to Better Performance?


Join our team: www.alloptions.nl/life

Success is a team effort!

All Options is a leading market maker providing liquidity to the derivatives markets in Europe and Asia.

In only 2 years we have grown from 60 to 300+ employees and are now one of the largest market makers in the world. And we don’t stop there – this year we are seeking another 50 young talents for trading careers.

This success is due to our belief in supporting personal achievement and in discipline. Working in unity is at the core of our culture. That’s why we reward our traders on both their individual and team performance.

If you are interested in a challenging and rewarding career in trading, check out your options at: www.alloptions.nl

Trading on the future.


A magical moment

It is three o’clock in the afternoon and the final of Roland Garros 2009 between Roger Federer and Robin Söderling is starting. I feel a bit guilty about watching this match, as I have not been able to produce even a single sentence for the preface I am supposed to write. However, thrilled by the fact that Federer is writing history, this fan of the “Fed Express” temporarily feels no guilt at all and is happy to watch the match unfold, despite the fact that it is actually quite boring. At the victory ceremony, the winner describes this particular victory as a very special “magical moment”.

Even though Federer has now won 14 Grand Slam titles, you can clearly see the sincerity of his words on his face as tears begin to overpower his big smile. As I watch the ceremony, I begin to wonder how such pleasures translate to the world of academia. Being an undergraduate econometrics student, I do not often get the opportunity to observe my academic superiors in such states. In fact, I more often notice the neutral facial expressions that seem to verbalise the question “Where the hell is my morning cup of coffee?”. However, I still wonder what events or experiences academics in particular consider to be magical. It could be the moment when their thesis or dissertation was completed, or maybe when they were promoted to the position of professor. Or perhaps the first article they published in AENORM. Well, let’s be honest, probably not.

While I am only in my bachelor’s phase, with precisely zero publications to my name, I will be delighted when my first publication becomes official. To realise that your work is actually contributing to a specific scientific field is quite something: the result of years of education and hard work summarized in an article of only a few pages.

In the last issue of AENORM, our chief editor and president of the VSAE board Annelies Langelaar mentioned the tenth edition of the Econometric Game. After three days of intensive econometric brainstorming by several universities, Universidad Carlos III de Madrid was declared the winner of this year’s Econometric Game. Let me be one of many to congratulate Madrid on their victory! I am quite sure that the winners had their own magical moment at the time. A shortened version of their winning paper can be found in this issue of AENORM, as well as a summary of the impressive paper of University College London.

With the last exams in sight, the beginning of the long-awaited summer break is also near. I can only encourage our readers to enjoy their well-deserved holiday. After completing my bachelor’s degree next month, I certainly intend to do the same. As far as special academic moments are concerned, I hope to experience my own “Federer moment” soon enough. However, I reckon only time will tell when that happens.

Chen Yeh


Aenorm 64 Contents


Assessing the Impact of Infant Mortality upon the Fertility Decisions of Mothers in India

The task of the first case of the 2009 Econometric Game was to investigate the size and direction of this effect empirically. A distinctive feature of such an investigation is that the outcome of interest, the number of children, is a 'count' variable which takes only non-negative integer values; as such, this article is primarily concerned with the issues involved in modelling such a variable and the steps that we took in specifying a model for the case.

Team UCL

Cover design: Michael Groen

Aenorm has a circulation of 1,900 copies for all students of Actuarial Sciences and Econometrics & Operations Research at the University of Amsterdam and for all students in Econometrics at the VU University of Amsterdam. Aenorm is also distributed among all alumni of the VSAE.

Aenorm is a joint publication of VSAE and Kraket. A free subscription can be obtained at www.aenorm.eu.

Publication of an article does not mean that it expresses the opinion of the board of the VSAE, the board of Kraket or the editorial staff. Nothing from this magazine may be reproduced without permission of VSAE or Kraket. No rights can be derived from the content of this magazine.

© 2009 VSAE/Kraket

Interview with Han de Jong

Han de Jong is Chief Economist of ABN AMRO Bank N.V., based in Amsterdam. Prior to taking up this position in 2005, he headed the Investment Strategy team at ABN AMRO Asset Management for five years. Before that, De Jong held various positions inside and outside ABN AMRO, such as leading the bank’s fixed-income research unit.

Annelies Langelaar

Understanding Infant Mortality in Brazil

This article is a summarized version of the winning report of the Econometric Game 2009. We evaluate whether the government’s Family Health Program has been successful in reducing infant mortality rates in Brazil. To study the impact of this program, and also the impact of other variables on infant mortality, we estimate a dynamic panel data model covering the period 1998-2005.

Team Carlos III

Fast-Food Economics: Higher Minimum Wages, Higher Employment Levels

Numerous studies have been published about the effect of a minimum wage increase on employment. In this article, we will take a closer look at the study of Card and Krueger (The American Economic Review, 1994).

Chen Yeh

When Should You Press the Reload Button?

While surfing the Internet, you may have observed the following. If a webpage takes a long time to download and you press the reload button, the page often promptly appears on your screen. Apparently the download was not hindered by congestion (in that case you would do better to try again later) but by some other cause.

Judith Vink-Timmer

Portfolio Insurance - Does Complexity Lead to Better Performance?

The importance of Portfolio Insurance as a hedging strategy arises from the asymmetric risk preferences of investors. Portfolio Insurance allows investors to limit their downside risk, while retaining exposure to higher returns. This goal can be accomplished by an investment constructed with the payoff profile of the writer of a call option.

Elisabete Mendes Duarte

Deflator: Bridge Between Real World Simulations and Risk Neutral Valuation

The importance of market consistent valuation has risen in recent years throughout the global financial industry. This is due to the new regulatory landscape and because banks and insurers acknowledge the need to better understand the uncertainty in the market value of their balance sheet.

Pieter de Boer


Puzzle

Facultative

Volume 17, Edition 64, July 2009. ISSN 1568-2188

Chief editor: Annelies Langelaar

Editorial Board: Annelies Langelaar

Design: Carmen Cebrián

Lay-out: Taek Bijman

Editorial staff: Erik Beckers, Daniëlla Brals, Lennart Dek, Jacco Meure, Bas Meijers, Chen Yeh

Advertisers: Achmea, All Options, AON, APG, Delta Lloyd, De Nederlandsche Bank, Ernst & Young, KPMG, ORTEC, SNS Reaal, Towers Perrin, Watson Wyatt Worldwide, Zanders

Information about advertising can be obtained from Daan de Bruin [email protected]

Editorial staff addresses: VSAE, Roetersstraat 11, 1018 WB Amsterdam, tel: 020-5254134

Kraket, de Boelelaan 1105, 1081 HV Amsterdam, tel: 020-5986015

www.aenorm.eu

Meet Your Meat

People often do not realize that their food consumption is a substantial environmental burden. Primeval forests are being cut down for the production of foods like soy; livestock contributes 18% of total greenhouse gas emissions; and our food production is accompanied by substantial emissions of substances that contribute to eutrophication and acidification.

Femke de Jong

Realistic Power Plant Valuations - How to Use Cointegrated Spark Spreads

The large investments in new power generation assets illustrate the need for proper financial plant evaluations. In this article we demonstrate the use of cointegration to incorporate market fundamentals and calculate dynamic yet reasonable spread levels and power plant values.

Henk Sjoerd Los, Cyriel de Jong and Hans van Dijken

Dynamic Risk Indifference Pricing and Hedging in Incomplete Markets

This work studies a contingent claim pricing and hedging problem in incomplete markets, using backward stochastic differential equation (BSDE) theory. In what follows, we sketch the pricing problem in complete vs incomplete markets.

Xavier De Scheemaekere

Integrated Anticipatory Control of Road Networks

Dynamic traffic management is an important approach to minimise the negative effects of increasing congestion. The work described in this article shows that anticipatory control can contribute to a better use of the infrastructure in relation to policy objectives.

Henk Taale

Solvency 2: an Analysis of the Underwriting Cycle with Piecewise Linear Dynamical Systems

Solvency II represents a complex project for reforming the present solvency supervision system for European insurance companies. In this context many innovative elements arise, such as the formal introduction of risk management techniques in the insurance sector as well.

Fabio Lamantia and Rocco Cerchiara

Interview with Pieter Omtzigt

Pieter Herman Omtzigt obtained a PhD in Econometrics in 2003 in Florence with his thesis “Essays in Cointegration Analysis”. Nowadays he is a Dutch politician for the party CDA. In the Tweede Kamer he mainly works on pensions, the new health care system and social security.

Annelies Langelaar

Mean Sojourn Time in a Parallel Queue

This account considers a parallel queue, which is a two-queue network where any arrival generates a job at both queues. We first evaluate a number of bounds developed in the literature, and observe that under fairly broad circumstances these can be rather inaccurate.

Benjamin Kemper


This article is a summarized version of the winning report of the Econometric Game 2009. We evaluate whether the government’s Family Health Program has been successful in reducing infant mortality rates in Brazil. To study the impact of this program, and also the impact of other variables on infant mortality, we estimate a dynamic panel data model covering the period 1998-2005, controlling for differences among regions as well as for endogeneity. We found evidence indicating that this program significantly reduced infant mortality during the analyzed period. This reduction was more pronounced in poor regions. We also found that poverty, income inequality and fertility were associated with higher infant mortality rates. Finally, we discuss possible policy implications that can be drawn from our results.

Understanding Infant Mortality¹ in Brazil

Team Carlos III de Madrid

consisted of two PhD students and three master’s students: André Alves Portela Santos, Liang Chen, María Cecilia Avramovich, Dolores de la Mata and José Daniel Vargas Rozo. André’s research interests are financial econometrics, portfolio optimization and machine learning; Liang’s are theoretical and applied econometrics, macroeconomics, and financial economics. María’s fields of interest are political economy and development in Latin American countries; Dolores’s are health economics, economics of education and policy evaluation. José’s research interests are applied econometrics, mergers and competition policy.

Introduction

The aim of our study is to determine the factors that affect infant mortality rates in Brazil and, in particular, to assess the impact of the intervention known as the Family Health Program (PSF, Programa Saúde da Família). The PSF was implemented in Brazil during the mid 90’s with the aim of broadening access to health services and helping provide universal care in a context of limited resources. It was expected to affect infant mortality rates. In order to analyze this issue we first provide informative descriptive statistics of the data that allow us to motivate our further econometric analysis. Throughout the report we use a panel of state-level aggregated data for the 27 Brazilian states (26 states and the federal district) over the eight consecutive years 1998-2005. Figure 1 displays infant mortality rates for each Brazilian state during the period considered. We can see that in almost all states infant mortality declines monotonically. It also shows that some states have a higher rate of infant mortality in 1998. For example, the state of Alagoas (number 2 in the graph) shows a rate of almost 66 infant deaths per 1,000 live births, compared with other states such as Espírito Santo (number 9 in the graph) with an infant mortality rate of almost 21 deaths per 1,000 live births.

We specify a reduced form model in which the infant mortality rate is affected by the coverage of the PSF (proportion of the population covered), measures of medical resources (medical doctors and hospital beds per 1,000 inhabitants), socioeconomic measures (an illiteracy index, Gini index, per capita household income, a poverty index, number of children per woman), and indicators of access to infrastructure (population with running water, sewerage facilities and waste collection).

One important issue to be taken into account in our econometric estimations is the existence of many potentially endogenous explanatory variables. First of all, even though the total number of children per woman (fertility) has been found to be positively correlated with infant mortality, it could be argued that causality goes in both directions. Secondly, while medical resources (number of hospital beds per 1,000 inhabitants and number of medical doctors per 1,000 inhabitants) could be a cause of better health of the population and consequently reduce infant mortality, at the same time greater medical resources could be allocated to areas with high infant mortality in order to reduce it more rapidly. Finally, the coverage of the PSF program itself could also be a potentially endogenous explanatory variable, for the same reason as given for the potential endogeneity of medical resources. In fact, we can see from the data that those regions with worse socioeconomic indicators are the ones which received the highest coverage of the PSF program.

¹ We thank César Alonso-Borrego for his excellent coaching and support during our preparation for the Econometric Game.


A further analytical issue is related to the possible persistence in rates of infant mortality. Reductions in mortality rates may require structural changes that can only be implemented slowly. In order to account for this, our econometric specification introduces the one-period lagged dependent variable as an explanatory variable, though this is potentially another endogenous variable.

Another consideration that we take into account is that policies targeting health issues may not have a direct effect on contemporaneous health indicators, and their benefits may only be observed with some temporal lag. In our case, the impact of the PSF program may have a lagged effect on infant mortality rates. For instance, the program may affect the health of women that will give birth in the future. Then, with better health, these women will face a reduced risk of suffering the death of a newborn child. However, this effect is not captured in contemporaneous infant mortality statistics. For this reason, in our econometric specifications we also consider the introduction of a one-period lag for the PSF program coverage.

Finally, it is important to note that the socioeconomic differences between the five Brazilian regions (north, northeast, mid-west, southeast, and south) are well delimited. Moreover, several characteristics of the health policy, such as its effectiveness and the amount of resources received, can also vary between regions. Therefore, one would expect that policy makers would target the regions differently in order to achieve faster reductions in infant mortality in regions where this problem is more severe.

Variable selection and description

The selected explanatory variables are summarized in table 1, which reports the values of each variable for the first and last year of the sample, and the corresponding variation over that period. Table 1 shows that the coverage of the PSF program increased substantially within the period of analysis, going from a population coverage of 8.7% in 1998 to 55.1% in 2005, while the number of doctors also increased in that period, by almost 34% (possibly due to the PSF program), and the number of hospital beds shrank by 29%.

Proposed econometric models

In this section we describe our proposed econometric model for studying the determinants of infant mortality in Brazil.

Variable  Description                                    1998            2005            Δ 1998-2005
                                                         Mean   Std.Dev  Mean   Std.Dev
psf       coverage of pop. by the Family Health Prog.     8.7    11.6    55.1    21.7      534%
med       medical doctors per 1,000 inhab.                1.0     0.6     1.4     0.7       33%
hos       hospital beds per 1,000 inhab.                  2.9     0.8     2.0     0.4      -29%
ana       analphabetism (illiteracy) index: %            16.4     9.6    13.9     7.8      -15%
gin       Gini index of income inequality                 0.6     0.0     0.5     0.0       -6%
yhc       per capita household income                    61.0     7.0    64.3     7.0        5%
fer       number of children per woman                    2.6     0.5     2.2     0.4      -15%
pov       poverty index: % of poor people                46.6    17.0    44.4    16.0       -5%
wat       % of pop. with running water                   72.2    12.8    74.0    14.4        3%
sew       % of pop. with sewerage                        52.1    23.5    56.4    21.0        8%
gar       % of pop. with refuse collection               72.4    16.3    78.5    12.2        8%

Table 1: Summary statistics for the selected explanatory variables

Figure 1: Evolution of infant mortality rates for the 27 Brazilian states, 1998-2005


The panel data model selected for determining infant mortality (variable im1_it, the number of children deceased within the first year after birth per 1,000 live births) is the following:

im1_{it} = X_{it}'β + α_1 im1_{i,t-1} + Σ_j γ_j (D_j · psf_{it}) + η_i + u_{it}    (1)

where i denotes a state, X_{it} is the 10 × 1 vector of explanatory variables, which includes six exogenous variables (ana_it, pov_it, wat_it, sew_it, gar_it, gin_it) and four endogenous variables (fer_it, hos_it, med_it, psf_it), and β is the associated 1 × 10 parameter vector. D_j is a dummy for each of the five regions of Brazil². The objective of this specification is to capture the differing impact of psf_it across Brazilian regions. η_i captures state-specific effects and α_1 is the coefficient associated with the one-period lagged dependent variable. Finally, u_it is the error term.

We consider two alternative specifications of model (1). The first is exactly the benchmark model proposed in (1). The second specification considers the impact of the one-period lag psf_{i,t-1} instead of contemporaneous psf_it. Finally, due to the dynamic nature of the problem and the presence of endogenous variables, we use the Arellano-Bond estimator in order to obtain consistent estimates for these models.
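The report does not include estimation code, but the mechanics behind a dynamic panel estimator are easy to illustrate. The following is a minimal, hypothetical Python sketch (all variable names are ours, not the team's): it simulates a small state-year panel, first-differences away the fixed effect η_i, and instruments the differenced lag with a deeper level lag in the spirit of Anderson-Hsiao, a simpler precursor of the Arellano-Bond GMM estimator actually used in the report.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 27, 8           # e.g. 27 states over eight years, as in the article
alpha, beta = 0.6, -0.3

# Simulate a dynamic panel: y_it = alpha*y_i,t-1 + beta*x_it + eta_i + u_it
eta = rng.normal(size=N)                     # state fixed effects
x = rng.normal(size=(N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = alpha * y[:, t - 1] + beta * x[:, t] + eta + 0.1 * rng.normal(size=N)

# First-differencing removes eta_i:
#   dy_it = alpha*dy_i,t-1 + beta*dx_it + du_it
# dy_i,t-1 is correlated with du_it, so instrument it with the level y_i,t-2.
dy   = (y[:, 3:] - y[:, 2:-1]).ravel()       # dependent variable, t = 3..T-1
dy_l = (y[:, 2:-1] - y[:, 1:-2]).ravel()     # endogenous lagged difference
dx   = (x[:, 3:] - x[:, 2:-1]).ravel()       # exogenous regressor in differences
z    = y[:, 1:-2].ravel()                    # instrument: second lag in levels

# Just-identified 2SLS by hand: coef = (Z'W)^{-1} Z'dy
W = np.column_stack([dy_l, dx])
Z = np.column_stack([z, dx])
coef = np.linalg.solve(Z.T @ W, Z.T @ dy)
print(f"alpha_hat = {coef[0]:.3f}, beta_hat = {coef[1]:.3f}")  # roughly (0.6, -0.3)
```

Arellano-Bond generalises this by using all available deeper lags as instruments in a GMM framework, which is what produces the overidentifying restrictions tested by the Sargan statistics reported with tables 2 and 3.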

Econometric analysis of infant mortality in Brazil

Table 2 reports estimation results when the policy is assumed to have a contemporaneous effect. Region 3 (southeast) is the reference case; the more industrialized and richer states of Brazil, such as São Paulo and Rio de Janeiro, are located there. Results suggest that there are important differences in the effectiveness of the policy across regions. This supports the hypothesis that policy makers in Brazil could be targeting the poorest regions with the objective of achieving a faster reduction in infant mortality. In particular, the policy has a significant contemporaneous effect in reducing infant mortality. This is particularly the case in the northeast region (region 2), which, according to the data provided in the instructions of the Game, could be considered the poorest region in Brazil. The policy effect in this region is given by the difference between the coefficient of the reference region (D.psf) and the coefficient of the northeast region (D.p_region2), i.e. 0.053 - 0.088 = -0.035. Results for regions 1 (north) and 5 (mid-west) indicated that the policy had no contemporaneous effect when compared to the reference case. This is shown by the coefficient associated with the reference region (D.psf) and the coefficients associated with regions 1 and 5 (D.p_region1 and D.p_region5, respectively), which have almost the same values but with opposite signs. Finally, we found that lagged infant mortality has a positive and significant coefficient, and that higher poverty and income inequality are associated with higher infant mortality rates. Interestingly, we found that more medical doctors and more hospital beds have no significant impact in reducing infant mortality rates. This suggests that socioeconomic variables are more important in explaining infant mortality than medical resources.

Table 3 reports the results when the lagged effect of the policy is considered. The results reinforce our previous findings that there are significant interaction effects between policy and geographical region, and that the effectiveness of the policy differs significantly between regions. We found that the policy is most effective in the two poorest Brazilian regions: the north (region 1) and the northeast (region 2). Furthermore, fertility, poverty and income inequality were significantly associated with an increase in infant mortality rates.

Variable       Coefficient   (Std. Err.)
LD.im1          0.581***     (0.123)
D.med          -0.825        (0.831)
D.hos           0.221        (0.360)
D.fer           0.562        (0.353)
D.psf           0.053***     (0.018)
D.yhc          -0.006        (0.024)
D.ana           0.057        (0.073)
D.pov           0.042*       (0.014)
D.wat           0.038        (0.029)
D.sew          -0.005        (0.009)
D.gar          -0.020        (0.026)
D.gin           3.807*       (2.158)
D.p_region1    -0.053***     (0.014)
D.p_region2    -0.088***     (0.020)
D.p_region4    -0.009        (0.020)
D.p_region5    -0.050***     (0.017)
Intercept      -0.228        (0.199)

*** Sig. at 1%, ** Sig. at 5%, * Sig. at 10%. Endogenous regressors: im1 lagged, med, hos, fer, psf.

Arellano-Bond test for zero autocorrelation:

Order       z         Prob > z
1        -2.6388      0.0083
2        -1.4653      0.1428

Sargan test of overidentifying restrictions (H0: overidentifying restrictions are valid):

chi2(109) = 115.9157, Prob > chi2 = 0.3072

Table 2: Estimation results, contemporaneous effect of the policy. Dependent variable: im1

² Regions 1, 2, 3, 4, and 5 denote, respectively, the north, northeast, southeast, south and midwest regions.


Finally, it is worth noting that the specification tests reported in tables 2 and 3 indicate that the model is well specified. In particular, the Arellano-Bond test for zero autocorrelation indicated that there is no autocorrelation of order 2. Moreover, the Sargan test indicated that the proposed instrumental variables are valid.

Conclusions

In this paper we proposed an econometric model capable of analyzing the determinants of infant mortality in Brazil that enables us to make policy recommendations with respect to the Family Health Program. We found two important factors that have a significant effect on infant mortality. Firstly, the measures of poverty and income inequality are positively correlated with infant mortality rates, suggesting that policy interventions capable of reducing poverty and improving income distribution may have a positive impact on child survival.

Variable        Coefficient   (Std. Err.)
LD.im1           0.667***     (0.131)
D.med           -1.199        (0.857)
D.hos            0.325        (0.258)
D.fer            0.668**      (0.307)
D.psf_1          0.034*       (0.020)
D.yhc           -0.021        (0.019)
D.ana            0.051        (0.069)
D.pov            0.038***     (0.013)
D.wat            0.048        (0.031)
D.sew           -0.012        (0.009)
D.gar           -0.021        (0.030)
D.gin            4.922*       (2.926)
D.p1_region1    -0.041***     (0.014)
D.p1_region2    -0.061***     (0.021)
D.p1_region4    -0.027*       (0.017)
D.p1_region5    -0.036        (0.023)
Intercept       -0.023        (0.193)

*** Sig. at 1%, ** Sig. at 5%, * Sig. at 10%. Endogenous regressors: im1 lagged, med, hos, fer, psf.

Arellano-Bond test for zero autocorrelation:

Order       z         Prob > z
1        -2.6828      0.0073
2        -1.6181      0.1056

Sargan test of overidentifying restrictions (H0: overidentifying restrictions are valid):

chi2(105) = 113.028, Prob > chi2 = 0.2789

Table 3: Estimation results, lagged effect of the policy. Dependent variable: im1

Secondly, we found a positive relationship between fertility and infant mortality. Policy makers have to consider what the true direction of this relationship is; in fact, there is evidence suggesting that the direction of causality may go from mortality to fertility (Bhalotra and van Soest, 2008). Moreover, we found a significant relationship between current and one-period lagged infant mortality, suggesting substantial state dependence. This indicates that interventions aimed at reducing infant mortality will have long-lasting effects. Finally, we found that areas in Brazil with higher levels of infant mortality during the implementation of the PSF program have experienced greater reductions in this measure. The challenge for this program is to sustain these successful results, given that the effect of the policy may be attenuated when the starting levels of infant mortality are lower. Under this scenario, the PSF program should be complemented with additional policies such as the ones we mentioned above.

References

Arellano, M. and Bond, S. (1991). Some Tests of Specification for Panel Data: Monte Carlo Evidence and an Application to Employment Equations, Review of Economic Studies, 58, 277-297.

Bhalotra, S. and van Soest, A. (2008). Birth-spacing, fertility and neonatal mortality in India: dynamics, frailty, and fecundity, Journal of Econometrics, 143, 274-290.

Cameron, A. and Trivedi, P. (2009). Microeconometrics Using Stata, StataCorp LP.

Macinko, J., Souza, M., Guanais, F. and Simões, C. (2007). Going to scale with community-based primary care: an analysis of the Family Health Program on infant mortality in Brazil, 1999-2004, Social Science and Medicine, 65, 2070-2080.


Han de Jong is Chief Economist of ABN AMRO Bank N.V., based in Amsterdam. Prior to taking up this position in 2005, he headed the Investment Strategy team at ABN AMRO Asset Management for five years. Before that, De Jong held various positions inside and outside ABN AMRO, such as leading the bank’s fixed-income research unit. Between 1992 and 1997, De Jong worked in Dublin, Ireland, as Chief Economist for a local brokerage firm. After graduating from the Free University in Amsterdam with a master’s degree in economics, he initially worked as a college lecturer. Han de Jong is currently also a columnist for the leading Dutch financial newspaper, Het Financieele Dagblad, writing about economic and financial affairs, and serves on the investment committees of three Dutch pension funds.

Interview with Han de Jong

Could you tell our readers something of your background?

I studied economics at the Vrije Universiteit in Amsterdam and completed an internship in Brussels. This internship influenced my life strongly, both professionally and personally. My time there gave my life an international dimension, and during my stay in Brussels I also met my wife. After graduating, I worked for several years as a college lecturer. I think it is good to teach, because becoming a teacher means teaching the topics you yourself have just learned. I would recommend it to anybody. After teaching for three years full-time and three years part-time, I changed jobs and started working for ABN AMRO. After seven years with the bank, my wife and I moved to Ireland, where I worked for a local brokerage firm. ABN AMRO then asked me whether I wanted to come back, and I accepted their offer.

You have been the Global Head of Research at Fortis for several months. What are your responsibilities there?

My most important responsibility has been improving communication between research departments. Many of the analysts and economists were not communicating properly and were not interacting with each other. Companies have pure macroeconomics departments that analyse the current market. The outcomes of this analysis must be communicated to strategists, who need this information to decide, for example, for which investments they should shorten or extend the duration, or whether they should buy or sell corporate bonds. If the strategists and the economists do not cooperate, you have serious problems. At Fortis it was unbelievable how little effective communication was occurring. For example, the strategic and economic departments co-authored publications, but their outlooks were completely opposite. At Fortis the economists were working in Amsterdam, the strategists in Brussels and the employees responsible for corporate bonds were working in Paris. That is why I was seconded to Fortis as Global Head of Research. The funny thing is that all teams were working very well internally, but as soon as team lines, and thus country borders, were crossed, the problems started.

What do you think of the current situation in the financial world?

Many things have gone wrong in the past couple of years. The current situation has forced many at ABN AMRO to self-reflection. I do think that the public debate about the crisis is too emotional. If you only listened to the media, you would probably think that all problems were caused by the banks. However, that view is based solely on a microeconomic perspective. You must also take into account the macroeconomic side of the whole situation. The last few years were marked by imbalances, such as those between the markets of the US and China. The US ran copious deficits that had to be financed; China financed them by purchasing bonds, which led to decreasing returns. Banks were tempted to take more and more risks and bought dubious products. In the end it is clear that banks have failed and, as you can see from the improper loans provided, did not calculate risk correctly. However, not only the banks but also the regulatory supervisors and macroeconomic policy makers saw the imbalances and yet chose not to intervene sufficiently. In conjunction, rating agencies made big mistakes, investors demanded returns that were far too high, and people borrowed more money than they could repay.


You learn more... ...if you don’t choose the biggest.

If you want to learn to sail well, you can do two things. You can step aboard a large sailing ship and learn everything about one specific part, such as the trim of the mainsail or the jib. Or you choose a somewhat smaller boat, on which you are soon at the helm and can set the course yourself. It works the same way with a starting position at SNS REAAL, the innovative and fast-growing provider of banking and insurance services. Whereas as a starter at a very large organisation you often get a fixed place with specific tasks, on board at SNS REAAL you can develop across the full breadth of our organisation. That applies to our financial, commercial and IT positions, but just as much to our traineeships, in which you fill various positions in different departments, so that you gain more experience, learn more and grow faster. With a balance sheet total of €83 billion and some 7,000 employees, SNS REAAL is big enough for your ambitions and small enough for personal contact. The choice is yours: do you let others set the course of your career, or would you rather take the helm yourself? For more information about the starting positions and traineeships at SNS REAAL, see www.werkenbijsnsreaal.nl.


Many factors contributed to the current situation. It is an exaggerated claim that only the banks were responsible for this crisis.

Who do you think played the leading role in all this?

Historically, financial crises appear occasionally, especially in a rising economy. The current crisis is on a larger scale than those we are accustomed to. The western countries have not been hit by a financial crisis for a long time. In the end it is caused by the human failure of greed. As in Keynes’s theory concerning stability in the economy: the economy cannot remain stable all the time. If people observe that the current situation is stable, they assume it will remain stable and start acting less cautiously. I think his theory is interesting, as you can learn a lot from the past. So I think the current situation could not have been prevented, as we were probably due to end up in a crisis.

If you are asking who provided a possible initial catalyst, we have to look at Alan Greenspan. The interest rate was too low for too long. The American supervisors allowed banks to increase the leverage on their balance sheets. In fact, this created opacity, as through this method activities were kept off the balance sheets. The banks were allowed a certain room to move and have used this improperly in several ways. I do admit it is difficult for banks in situations like that to say: “I won’t participate in it.”

ABN AMRO was also involved in markets that have had large losses. We were a bit more reluctant than other financial institutions. Those institutions had more leverage on their balance sheets, which gave them higher profits and so higher stock prices. As a result they were financially able to take over ABN AMRO in 2007. When the books were finally opened, ABN AMRO was horrified at the plainly irresponsible amount of leverage. In the end Fortis went bankrupt and ABN AMRO has been taken over by the Dutch government.

I do not apologize for the acts of the financial institutions, as it is a fact that if you do not participate in the developments, you become food for parties who are seeking a takeover candidate. In this business, it is eat or be eaten.

You are a member of the investment committees of several Dutch pension funds. What are your duties in that function and what are your recent recommendations?

Most pension funds have an investment committee that advises them on investment-related themes. The smaller pension funds rely on external experts. That means that once in a while you get an overview of the investment developments over the past period. At meetings you discuss the investment policy and which changes would be desirable. Some funds, such as the ABN AMRO Pension Fund, are very mechanical, and there is only a little room for a new vision. My role in the committee of the ABN AMRO Pension Fund is quite limited. Other pension funds are keen to know whether they should buy more or fewer shares, or whether they should invest more in portfolios or not. There is almost no pension fund that manages everything itself.

What do you think of the financial situation of the pension funds?

The coverage ratios of several pension funds decreased dramatically in the period following the internet bubble. Now, with the current crisis, their coverage is decreasing once again. I think it is disappointing that the past is repeating itself. Apparently pension funds did not do enough after the previous recession to prevent this situation from happening again. Clearly there is no long-term vision. If coverage suddenly falls below 100%, something has gone terribly wrong. They should have protected their coverage a lot earlier. This would have prevented a lot of the problems they are currently suffering. Today, most pension funds do not index the pensions. This will negatively affect incomes. There is, however, a positive side to this financial crisis. It has been one of the worst periods for stock exchanges we have ever experienced. Still, there are pension funds which have coverage ratios of about 100%. These funds still have a buffer left. Besides, there are also pension funds which have coverage ratios of more than 105%. You can say that they are still alive, even during difficult times. That is the positive side. We should learn from this crisis. Nowadays we have several different pension funds that effectively do the same thing. It might be a good idea to let them consolidate. It would make the pension funds more efficient. This, however, is not in their interest.


They like to compare their results with other pension funds. They say: as long as the ABP pension fund does even worse, it does not really matter.

Some time ago you wrote in the magazine Optiek/HFD that the pension age should not be increased, as this will not lead to extra productivity but only to higher individual costs. Could you explain your view on this?

I think everyone wants to progress in life, but the question is how you define progress. Everyone has to provide for their basic needs, which means daily food and a roof above their head. But people also want to have the more luxurious things in life, and I think that real progress in life means that you expand your possibilities of choice. Increasing the pension age will instead mean a decline in those choices. I expected to retire when I reached the age of 65, but now I will probably have to work till 67. I think the claim that people are getting older is a big mistake and miscalculation. On average people do get older, but that is only because some people live to a very old age. If you take a look at what is actually happening, you see that a lot of people do not retire when they turn 65, as they choose to retire earlier. People are more productive when they are in their thirties or forties, not when they are in their sixties. Right now it has become a trend that young people choose to work four days a week, even after their children have started school. One day off for people in their forties means a greater loss of productivity than the gain of one extra day from people in their sixties. Morally, it does not benefit society much when people must postpone retirement. It is better to give people more possibilities to choose. When people are young they should have the choice to save more money for their pension so that they can retire earlier.

You often write columns in Het Financieele Dagblad, where you discuss possible interest rate cuts or increases by the ECB. What is your opinion about interest rates for the next year?

I believe the ECB is almost finished. Officially they cannot change interest rates much more. They can lower them again, but they are already at a low 1%. There are quite a few conflicts within the ECB. If you set interest rates really low, you cannot use interest rates anymore to stimulate the economy. They needed to come up with something else, and they did: quantitative easing. It’s a logical step. Using interest rates you can influence the price of money. When this is no longer an option, much like now, you can always influence the amount of money in the economy. This is what they are now doing. This follows the announcement that the Federal Reserve will buy $300 billion worth of government bonds. Besides this, they also decided to insure mortgage-backed securities for $1,750 billion and to buy $200 billion of the agency debt of Freddie Mac and Fannie Mae. They are issuing more money than before and through different channels. The ECB has recently begun purchasing covered bonds, which will amount to €60 billion. It is only a fraction of the amount the Fed has spent. This is also because Germany and other countries do not want the ECB to be printing money; that can be really dangerous in the long term. What the Germans want are covered bonds, a kind of mortgage-backed security. However, Germany will profit the most from this, as Germany issues half of the covered bonds. This division does not make the ECB look decisive, and it will not improve faith in the ECB.

During a congress on the topic “Count on China” in June 2008 you spoke about the power of China. You said that China will grow into an economic superpower and that it will conquer all the potential obstacles. Which obstacles do you see and how do you think China will develop in the coming years?

The population of China has increased enormously over the past decades and income per capita has increased more than it has in western countries. I expect that China will become the leading power in the world, but they still face significant problems such as water scarcity and environmental problems. We also have to see whether the political system will remain in its current form. They try to solve their problems by providing stimulus to the economy or by passing problems on to the government, but you cannot carry on like this forever. Other problems China is facing at the moment are non-performing loans and, from a social perspective, minorities in China continue to struggle for equal rights with the majority. China is securing access to commodities and resources in Africa in exchange for building infrastructure there. We have to wait and see how the world reacts to these developments. There are a lot of challenges for China, but their desire to become a superpower is enormous. The Chinese population has tasted the fruits of prosperity. They certainly will not want to relinquish that now.


ROOM for your ambitions

Risks touch your entrepreneurial spirit and your ambitions. Aon advises you on making these risks transparent and manageable. We help you assess, control, monitor and finance these risks. Aon stands for the integrated deployment of high-quality expertise, services and products in the areas of operational, financial and personnel risk management and insurance. Aon’s focus is entirely on realising your ambitions.

In the Netherlands, Aon has 12 offices with 1,600 employees. The company is part of Aon Corporation, Chicago, USA. The worldwide Aon network comprises around 500 offices in more than 120 countries with over 36,000 employees. www.aon.nl.

RISK MANAGEMENT | EMPLOYEE BENEFITS | INSURANCE


Nearly 11 million children die before their fifth birthday each year¹. The developing world bears the brunt of these deaths; while the preceding figure is almost 2% of the world's total child stock, over 10% of children in Sub-Saharan Africa die before their fifth birthday. Economic theory, however, often gives ambiguous predictions as to the effect of such significant rates of infant mortality upon the decision of parents over how many children to bear. For example, authors such as O'Hara (1975), Ben-Porath (1976), Rosenzweig and Wolpin (1980) and Sah (1991) all find that an assumption of a fixed cost to childbearing leads to an ambiguity in the effect of the survival rate on fertility. The task of the first case of the 2009 Econometric Game was to investigate the size and direction of this effect empirically². A distinctive feature of such an investigation is that the outcome of interest, the number of children, is a 'count' variable which takes only non-negative integer values; as such, this article is primarily concerned with the issues involved in modelling such a variable and the steps that we took in specifying a model for the case.

Assessing the Impact of Infant Mortality upon the Fertility Decisions of Mothers in India

We can consider approaches to modelling count data in two broad groups. The first are what we will call fully parametric approaches; these fully specify the (conditional) distribution of the outcome count variable and then proceed with estimation by maximum likelihood. They have the advantage of being very informative about the effect of covariates on a variety of aspects of the distribution of the outcome, at the expense of making restrictive parametric assumptions that leave the modeller particularly exposed to misspecification issues. The second, rather broad, group can be labelled semi-parametric approaches; these focus on modelling only particular attributes of the outcome distribution, for example the conditional variance or mean. The semi-parametric approach tends to be less restrictive, but is often also less informative. With the gleeful abandon that accompanies the first day of everyone's favourite international econometrics competition, we decided to try both a parametric and a semi-parametric model; below we deal firstly with some of the steps involved in selecting the parametric model before discussing briefly a semi-parametric quantile regression method. Computational and time constraints meant that we were unable to produce quantile regression results, but we feel that the method is potentially very useful and worth mentioning here.

Team UCL

consisted of two PhD students and three master's students: Alex Armand, Dan Rogger, Andy Feld, Rodrigo Lluberas and Alex Tuckett. The research interests of the team members are mainly growth economics, development economics, microeconometrics, pension economics, and the theory and econometrics of public service delivery efficiency and effectiveness in the developing world.

Implicit in what follows are considerations about the criteria which define a 'good' model. Clearly we wanted our model to fit the data well, an important criterion when evaluating the plausibility of parametric restrictions, whilst being as rich as possible. However, in an ideal world a model would also be economically informative; that is, beyond purely statistical considerations we would like the model to identify the underlying (structural) decision-making processes of parents that we believe exist in this microeconomic setting. In this way a structural model has more external validity than a 'fitted' (reduced form) model, since it separates purely environmental aspects of a particular study from what we believe are the essential underlying processes which do not change. In the time-limited context of the Econometric Game building a structural model was never really a possibility, but we feel that such a model would be ideal for a full solution to the case, and as such we tried to motivate our assumptions by economic considerations wherever possible.

¹ "Unicef at a glance", Introductory Handbooks to the United Nations (United Nations, New York).
² The data used for the case was a subsample from the 3rd Indian Demographic and Health Survey.



Firstly, then, our parametric specification. The workhorse of parametric count data models has become the Poisson distribution, if only because of its ease of use. We can begin by considering a sequence of m independent binary random variables, {Z_1, ..., Z_m}, such that the probability that each variable takes value 1 is p. This sequence represents m binary decisions (i.e. whether or not to have a child), and we can define our count variable outcome, Y, as 'the total number of positive outcomes' (i.e. the total number of children), or Σ_i Z_i. In this case Y follows a Binomial distribution (Y ~ Bi(m, p)), and under the assumption that mp stays constant at λ as m grows large, Y tends to a Poisson distribution:

f(y) = P(Y = y) = e^{-λ} λ^y / y!,    y = 0, 1, 2, ...

where E[Y] = Var[Y] = λ
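As an aside, the limit claimed here is easy to check numerically. The following small sketch is our own illustration, not part of the original report (it assumes numpy and scipy are available); it compares the Bi(m, λ/m) pmf with the Poisson(λ) pmf as m grows:

```python
import numpy as np
from scipy.stats import binom, poisson

# Illustration (not from the article): Bi(m, lam/m) approaches Poisson(lam)
lam = 3.0
y = np.arange(21)
for m in (10, 100, 10000):
    # maximum absolute difference between the two pmfs over y = 0..20
    gap = np.abs(binom.pmf(y, m, lam / m) - poisson.pmf(y, lam)).max()
    print(f"m = {m:6d}: max |Bi - Poisson| = {gap:.2e}")
```

The printed gap shrinks towards zero as m increases, which is exactly the limiting argument used above.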

The Poisson distribution is then used in a regression model by specifying a non-linear conditional mean:

E[Y_i | x_i] = λ(x_i) = exp(x_i'β)

so that

f(y_i | x_i) = e^{-exp(x_i'β)} exp(x_i'β)^{y_i} / y_i!

where i indexes randomly sampled observations, x_i is a vector of covariates, and the parameter β is estimated by maximum likelihood³. There are a number of issues with the above model that prove to be illuminating. Firstly, the Poisson distribution has a very specific property, namely 'equidispersion', whereby the conditional mean of the outcome is equal to the conditional variance. An inspection of histograms of our data on the number of births suggested that it was not equidispersed. In addition, equidispersion can be tested for, for example along the lines suggested by Cameron and Trivedi (2005), using the fitted values from a Poisson regression; we ran such a test with our data and rejected the null hypothesis of equidispersion. Beyond this we also found that the number of children in the data appeared to be bimodal; that is, there were a large number of observations clustered at zero children, with a second peak in the distribution at around three or four children. This clustering at zero again does not reconcile with the outcome being Poisson distributed. Thus a lack of fit in two areas suggested that we needed an alternative to a Poisson distribution.
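To illustrate the kind of test referred to here, the sketch below runs a regression-based overdispersion test in the style of Cameron and Trivedi on synthetic data. This is our own hypothetical example, assuming statsmodels is available; the Game data itself is not reproduced. Under equidispersion the coefficient a in the auxiliary regression should be zero.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000

# Synthetic overdispersed counts: a gamma-mixed Poisson (i.e. Negative Binomial)
X = sm.add_constant(rng.normal(size=(n, 2)))
mu_true = np.exp(X @ np.array([0.5, 0.3, -0.2]))
y = rng.poisson(mu_true * rng.gamma(shape=2.0, scale=0.5, size=n))

# Step 1: fit a Poisson regression and recover the fitted means mu_hat
mu_hat = sm.GLM(y, X, family=sm.families.Poisson()).fit().mu

# Step 2: auxiliary regression testing Var[y|x] = mu + a*mu
#         H0 (equidispersion): a = 0; a large positive t-stat rejects it
lhs = ((y - mu_hat) ** 2 - y) / mu_hat
aux = sm.OLS(lhs, mu_hat).fit()          # no intercept; regressor is mu_hat
print(f"a_hat = {aux.params[0]:.3f}, t = {aux.tvalues[0]:.2f}")
```

Because the simulated data are deliberately overdispersed, the t-statistic comes out large and the null of equidispersion is rejected, mirroring what the team found with the survey data.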

A second feature of the Poisson model above concerns its economic interpretation. In particular, the number of children born to a mother, Y, is characterised as the result of a sequence of binary decisions, the Z_i, each of which can be characterised as the choice over whether or not to have a child. The big problem (among others) with this setup is that for Y to have a Poisson distribution the Z_i need to be independently distributed. It is extremely doubtful that this is the case with fertility choices; decisions about having children are dependent over time. That is, each Z_i is better characterised as the decision about whether or not to have another child, forming a dynamic sequence where each choice depends on previous choices. Thus thinking about the decision process behind the data complements the observations on the lack of fit in suggesting that we need an alternative to the Poisson distribution.

Thirdly, all of the parametric models we considered require a correct specification of the conditional mean, λ(x_i), or its analogue in other distributions. Importantly, all of the observations about how well a distribution fits the data presume a correct specification; a misspecified conditional mean can give rise to an ill-fitting model even when the outcome is in fact Poisson distributed. We chose a non-linear mean of the form given above; we included a wide range of covariates, including the number of children to have died, the mother's age at the time of first and last birth, economic status, marital status, awareness and use of contraception, and religion. This brings us to what is potentially a major flaw in our case solution, which is that at least one of these variables is likely to be endogenous, especially the number of children to have died. Unfortunately we could find no convincing way to overcome the endogeneity in our case solution, although we suggest a potential instrument below.

We reached our preferred parametric specification by modifying the Poisson model in light of its deficiencies highlighted above. Given the sequential nature of fertility choices we used a Negative Binomial distribution in place of the standard Poisson distribution; this has been shown, for example by Winkelmann (1995), to be appropriate when binary decisions are dependent over time. We felt that the clustering of observations at zero could also be explained by the nature of the fertility decision; in particular, we can think about the fertility decision as a two-stage process.

³ One of the reasons the Poisson model is easy to use is that the parameter m from the Binomial distribution does not have to be estimated.


That is, there is a decision about whether to have children at all, and then a separate type of decision about how many children to have. Thus, for example, awareness and use of contraception plays a different role in the first type of decision than in the second. Two popular ways to model such a process with count data are what we will call the 'hurdle model' and the 'inflated zeros' model. The hurdle model specifies that zeros are generated by one count distribution, f_1(y), whereas positive values are generated by a different count distribution, f_2(y), so that:

P(Y = 0) = f_1(0)
P(Y = y) = [(1 - f_1(0)) / (1 - f_2(0))] · f_2(y),    y > 0

where f_1(y) and f_2(y) are differently specified count density functions. The 'inflated zeros' model is similar, except that f_1(y) is the density of a binary variable, and Y can take value zero as a result of either the first stage or the second stage. Hence in the inflated zeros model:

P(Y = 0) = f_1(0) + (1 - f_1(0)) · f_2(0)
P(Y = y) = (1 - f_1(0)) · f_2(y),    y > 0

We felt that the inflated zeros model better characterised the fertility decision in this case, with a binary first stage involving decisions about contraception and a second stage determining the number of children and containing the dynamic considerations mentioned above. In the notation above, f_1(y) was chosen as a logit density and f_2(y) as a Negative Binomial density. Thus we arrived at our preferred parametric specification, an inflated zeros model with a Negative Binomial distribution, by considering both how well the model could fit the data and how persuasively the model captured aspects of the fertility decision process.
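For concreteness, a model of this type can be fitted with off-the-shelf tools. The sketch below is our own hypothetical example on simulated data, not the team's code; it assumes a recent version of statsmodels, which provides a zero-inflated Negative Binomial model with a logit first stage.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(2)
n = 3000

# Hypothetical covariates standing in for the survey variables in the article
X = sm.add_constant(rng.normal(size=(n, 2)))   # second stage: how many children
Z = sm.add_constant(rng.normal(size=(n, 1)))   # first stage: any children at all

# Simulate the two-stage process: a logit zero-inflation stage,
# then Negative Binomial counts for the remaining observations
p_zero = 1 / (1 + np.exp(-(Z @ np.array([-1.0, 0.8]))))
mu = np.exp(X @ np.array([1.0, 0.4, -0.3]))
counts = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # mean mu by construction
y = np.where(rng.uniform(size=n) < p_zero, 0, counts)

# Zero-inflated Negative Binomial: logit first stage (exog_infl), NB2 second stage
model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=Z, inflation='logit', p=2)
res = model.fit(method='bfgs', maxiter=500, disp=False)
print(res.summary())
```

The exog_infl design matrix drives the logit zero-inflation stage, mirroring the binary 'whether to have children at all' decision described above, while the main design matrix drives the Negative Binomial count stage.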

In addition to the parametric model we also attempted a semi-parametric quantile regression approach following Machado and Santos Silva (2005). Generally, quantile regression involves making much less restrictive assumptions than a fully parametric method, while at the same time providing relatively rich information about the effect of covariates on the outcome. Rather than, say, the conditional mean or variance of the outcome, we model conditional quantiles; for example, a simple linear⁴ model of the τ-quantile of the outcome is:

Y = x′βτ + U

with the restriction that:

Qτ(U|x) = 0

where Qτ(∙) denotes the τ-quantile. The restriction above is in a similar class to those in semi-parametric models of the conditional mean or variance; the advantage of quantile regression, however, is that we can, for example, distinguish the effect of infant mortality on mothers with very few children from that on mothers with many children. The difficulty in the application to count data is that the discrete outcomes generate a non-differentiable objective function for the quantile estimator to minimise. However, Machado and Santos Silva propose a 'jittering' method to overcome this. Intuitively, the method works by constructing a new variable which shares identical quantiles with the count variable; standard quantile estimation and inference can then be performed on this new variable.
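The following sketch illustrates the jittering idea on hypothetical data. It is a deliberate simplification of the Machado and Santos Silva estimator: it adds Uniform(0,1) noise to the counts, runs a standard quantile regression on the jittered outcome, and averages the coefficients over jitter draws, omitting the count-specific transformation and inference steps of the full method.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def jittered_quantreg(y, X, tau, n_jitters=100):
    """Quantile regression for counts via jittering: adding U(0,1) noise
    makes the outcome continuous (its quantiles determine those of the
    count), so a standard quantile estimator can be applied; averaging
    over jitters removes the noise added by any single draw."""
    X = sm.add_constant(X)
    coefs = [
        sm.QuantReg(y + rng.uniform(size=len(y)), X).fit(q=tau).params
        for _ in range(n_jitters)
    ]
    return np.mean(coefs, axis=0)

# Toy count data whose conditional distribution shifts with one covariate
x = rng.normal(size=500)
y = rng.poisson(np.exp(0.5 + 0.8 * x))
print(jittered_quantreg(y, x[:, None], tau=0.75))
```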

Ultimately, we found a positive and statistically significant effect of infant mortality on the number of children born. There are a number of ways in which we would have liked to extend our analysis (beyond actually finishing it). Primary among them is finding an instrument with which to deal with the endogeneity of infant mortality; we believe the scaling up of the Indian Council for Sustainable Development (ICSD) could be a good candidate. The expansion had a heterogeneous impact and coverage across Indian states, and might provide an exogenous decrease in infant mortality without affecting the expectations, and so the fertility decisions, of parents. Given the importance highlighted above of the dynamic nature of the fertility decision, we would ideally have liked to have built a structural model of dynamic optimisation in the family, as well as to have dealt more concretely with the supply and demand of contraception and healthcare.

4 There are many reasons why we may not in fact want to restrict conditional quantile functions to be linear - for example, estimated linear quantiles have a nasty habit of crossing each other


Overall we'd like to thank all of the organisers and case makers of the 2009 Econometric Game for what proved to be an extremely stimulating, productive, and enjoyable experience.

References

O'Hara, D.J. (1975). Microeconomic aspects of the demographic transition, Journal of Political Economy, 83.

Ben-Porath, Y. (1976). Fertility response to child mortality: micro data from Israel, Journal of Political Economy, 84 (part 2).

Rosenzweig, M.R. and Wolpin, K.I. (1980). Testing the quantity-quality model of fertility: results from a natural experiment using twins, Econometrica, 48.

Sah, R.K. (1991). The effects of child mortality changes on fertility choice and parental welfare, Journal of Political Economy, 99.

Cameron, A.C. and Trivedi, P.K. (2005). Microeconometrics: Methods and Applications, Cambridge University Press.

Winkelmann, R. (1995). Duration Dependence and Dispersion in Count-Data Models, Journal of Business and Economic Statistics, 13.

Machado, J.A.F. and Santos Silva, J.M.C. (2005). Quantiles for Counts, Journal of the American Statistical Association, 100.


In this issue of AENORM, we continue our series of articles summarizing papers that have been of great importance in economics or have attracted considerable attention, be it in a positive sense or a controversial way. Reading papers from scientific journals can be quite a demanding task for the beginning economist or econometrician. By summarizing the selected articles in an understandable way, AENORM aims to reach these students in particular and introduce them to the world of economic academics. For questions or criticism, feel free to contact the AENORM editorial board at [email protected]

Fast-Food Economics: Higher Minimum Wages, Higher Employment Levels

Numerous studies have been published about the effect of a minimum wage increase on employment. The prediction of traditional economic theory (Stigler, 1946) is quite clear: assuming that employers are perfectly competitive, there is a negative correlation between minimum wages and employment, i.e. an increase of the minimum wage leads to a decrease in employment. Early studies in the 1970s seem to confirm this hypothesis. However, more recent studies have failed to spot a negative employment effect of higher minimum wages. In this article, we take a closer look at the study of Card and Krueger (The American Economic Review, 1994). Using US 1992 data from fast-food restaurants, they reach a rather surprising conclusion: traditional economic theory might not be as robust as it seems.

Introduction

In the labour economics literature, many papers can be found on the effect of a minimum wage increase on employment. Conclusions of early studies, both theoretical and empirical, are unambiguous: an increase in the minimum wage leads perfectly competitive employers to scale down their employment (Stigler, 1946). However, the results of more recent studies are not as straightforward as in the 1970s (Katz and Krueger, 1992; Card, 1992). In Card and Krueger (1994, henceforth C&K) new evidence is presented on the effect of minimum wages on employment. They analyze the effect of the 1992 minimum wage increase (enacted on April 1, 1992) in New Jersey on fast-food establishment employment levels. This minimum wage change consisted of a rise from $4.25 to $5.05 per hour. Their empirical methodology is surprisingly simple: by comparing employment levels, minimum wages and prices of fast-food restaurants in New Jersey and Pennsylvania, which is used as a control group, C&K are able to evaluate the effects of changes in minimum wages.

Justifying the use of the New Jersey/Pennsylvania dataset

C&K justify the particular use of their New Jersey/Pennsylvania dataset threefold. First, they note that the 1992 minimum wage increase occurred during a recession. The decision to increase the minimum wage in New Jersey however was made two years earlier, when the state economy was in relative good shape. By the time of the actual increase, unemployment had already reached substantial levels. Thus it is quite unlikely that Card and Krueger’s results on the effects of a higher minimum wage were caused by a favourable business cycle. Moreover New Jersey is a relatively small US state with an economy that is closely linked to those of its neighbours. Thus C&K argue that Pennsylvanian fast-food stores form an excel-lent control group for comparison with the fast-food restaurant experiences in New Jersey. The validity of the Pennsylvanian control group in turn can be tested by looking at wage variati-ons (low- and high-wages) across stores in New Jersey. Third, the dataset contains complete informa-tion on store closings between February 1992 (when the first wave of interviews was conduc-ted) and December 1992 (second wave). This allows C&K to take account of employment changes in closed stores. Thus they measure the overall effect of minimum wages on aver-

Econometrics

Page 20: Aenorm 64

18 AENORM 64 July 2009

age employment levels and not simply its effect on surviving fast-food restaurants.

Sample design: fast-food restaurants and interviews

As was mentioned earlier, C&K use data on fast-food restaurants in New Jersey and Pennsylvania. The choice of fast-food restaurants was motivated by several factors. First, fast-food establishments are often employers of low-wage workers: C&K mention that franchised restaurants in 1987 employed 25 percent of all workers in the restaurant business. Second, fast-food restaurants comply with minimum wage regulations and change their wages according to changes in minimum wages. Third, fast-food restaurants are easy to compare: the job requirements and fast-food products are quite similar. Moreover, the absence of tips greatly simplifies the measurement of wages. Fourth, C&K argue that a sample frame of fast-food restaurants is easy to construct: based on the experiences of earlier studies, fast-food restaurants have a high response rate to telephone interviews. C&K constructed a sample frame of the following fast-food restaurants in New Jersey and eastern Pennsylvania: Burger King, Kentucky Fried Chicken, Wendy's and Roy Rogers. McDonald's restaurants were excluded, as a pilot survey of Katz and Krueger (1992) received very low response rates from these fast-food restaurants. Furthermore, two waves of interviews were conducted: in late February and early March 1992, about a month before the scheduled minimum wage increase in New Jersey (410 successful interviews), and in November and December 1992, approximately 8 months after the minimum wage increase (399 successful interviews).

By comparing employment levels before and after the scheduled minimum wage increase, C&K are able to evaluate the effects of changes in minimum wages. In their study, C&K use the following employment measure: full-time equivalent (FTE) employment, defined as the number of full-time workers (including managers) plus 0.5 times the number of part-time workers. In the first wave, average employment in Pennsylvania was 23.3 FTE employees per store, compared with 20.4 FTE employees per store in New Jersey. Starting wages and their distributions were very similar across the states ($4.63/hour and $4.61/hour). Furthermore, no significant differences in the average hours of operation, the fraction of full-time workers or the prevalence of bonus programs are present in the two US states, implying that Pennsylvania serves as an appropriate control group. Despite the increase in minimum wages, FTE employment actually increased in New Jersey relative to Pennsylvania.

Empirical evidence: difference-in-differences estimates

In the first part of their empirical evidence, C&K use the following simple linear regression:

ΔFTE = α + β·STATE + ε

where ΔFTE denotes the change in FTE employment, i.e. the difference in FTE employment between the second and first wave of interviews. Furthermore, STATE is a dummy equal to 1 when the restaurant is located in New Jersey and 0 otherwise. This implies that the constant α is interpreted as the average change in FTE employment in Pennsylvanian fast-food restaurants.

| Variable | PA (i) | NJ (ii) | Difference, NJ-PA (iii) | Wage = $4.25 (iv) | Wage $4.26-$4.99 (v) | Wage ≥ $5.00 (vi) | Low-high (vii) | Midrange-high (viii) |
|---|---|---|---|---|---|---|---|---|
| 1. FTE employment before, all available observations | 23.33 (1.35) | 20.44 (0.51) | -2.89 (1.44) | 19.56 (0.77) | 20.08 (0.84) | 22.25 (1.14) | -2.69 (1.37) | -2.17 (1.41) |
| 2. FTE employment after, all available observations | 21.17 (0.94) | 21.03 (0.52) | -0.14 (1.07) | 20.88 (1.01) | 20.96 (0.76) | 20.21 (1.03) | 0.67 (1.44) | 0.75 (1.27) |
| 3. Change in mean FTE employment | -2.16 (1.25) | 0.59 (0.54) | 2.76 (1.36) | 1.32 (0.95) | 0.87 (0.84) | -2.04 (1.14) | 3.36 (1.48) | 2.91 (1.41) |
| 4. Change in mean FTE employment, balanced sample of stores | -2.28 (1.25) | 0.47 (0.48) | 2.75 (1.34) | 1.21 (0.82) | 0.71 (0.69) | -2.16 (1.01) | 3.36 (1.30) | 2.87 (1.22) |
| 5. Change in mean FTE employment, setting FTE at temporarily closed stores to 0 | -2.28 (1.25) | 0.23 (0.49) | 2.51 (1.35) | 0.90 (0.87) | 0.49 (0.69) | -2.39 (1.02) | 3.29 (1.34) | 2.88 (1.23) |

Table 1: Difference-in-differences estimates, source: Card and Krueger (1994). Standard errors in parentheses. Columns (i)-(iii) compare stores by state; columns (iv)-(vi) are New Jersey stores grouped by starting wage; columns (vii)-(viii) are differences within New Jersey (low minus high, midrange minus high).


The coefficient β is the so-called difference-in-differences estimator: C&K estimate the difference in the change in FTE employment between the two US states. The results are rather surprising: the average change in FTE employment in New Jersey is actually positive (one simply calculates α + β = -2.16 + 2.76 = 0.59) and the difference relative to Pennsylvania is significant. Thus C&K's results contradict conventional economic theory predictions! However, it should be noted that the decrease in FTE employment in Pennsylvania is a bit awkward, since no changes in minimum wages were present there; a close to zero, non-significant value for α would have been preferable. C&K in turn argue that their Pennsylvanian control group is valid: the results for this control group are similar to those of the high-wage restaurants in New Jersey, which also should have been largely unaffected by the minimum wage increase. C&K note that the comparisons made above do not allow for other sources of variation in employment changes, such as differences across fast-food chains. To account for these other sources of variation, C&K use the following regression-adjusted models:

ΔFTEi = α + βXi + γNJi + εi
ΔFTEi = a + bXi + cGAPi + εi

where ΔFTEi denotes the change in FTE employment from wave 1 to wave 2 at store i, Xi is a set of store characteristics, the dummy NJi indicates whether the store is located in New Jersey, and GAPi is the proportional increase in wages needed to reach the new minimum wage level for low-wage restaurants in New Jersey. The regression results can be found in Table 2. The main findings of these models indicate that the set of control variables Xi (three dummies for the fast-food chains and another dummy for company-owned stores) has virtually no effect on the estimated New Jersey dummy (2.33 without controls versus 2.30 with). The third and fourth specifications measure the effect of the minimum wage with the wage gap variable; the implications are nearly the same. The mean value of the wage gap variable across New Jersey stores is 0.11. Combined with the estimate c = 15.65, FTE employment in New Jersey thus increases by 15.65 × 0.11 = 1.72 FTE relative to Pennsylvania.
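The arithmetic behind these estimates can be reproduced directly from the wave means in Table 1; a small sketch:

```python
# FTE employment per store, waves 1 and 2 (Table 1, Card and Krueger 1994)
pa_before, pa_after = 23.33, 21.17   # Pennsylvania (control group)
nj_before, nj_after = 20.44, 21.03   # New Jersey (treatment group)

alpha = pa_after - pa_before              # change in the control group
beta = (nj_after - nj_before) - alpha     # difference-in-differences

print(f"alpha        = {alpha:+.2f}")     # -2.16
print(f"beta         = {beta:+.2f}")      # +2.75 (2.76 in the paper, rounding)
print(f"alpha + beta = {alpha + beta:+.2f}")  # +0.59, the change in New Jersey
```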

Robustness: Specification tests

The results in Tables 1 and 2 seem to contradict standard predictions of economic theory. To strengthen their empirical findings, C&K present some alternative specifications to test the robustness of their results. In this section we discuss a subset of these specifications, which can be found in Table 3. The first row shows the base specification. In the second row, FTE employment of the stores temporarily closed at the second interview wave is set to 0. This change only has a minor effect: the coefficient changes from 2.30 to 2.20. In rows 3 - 5, alternative measures for FTE employment are used; these changes also have little effect relative to the base specification. The same can be said for row 6 (exclusion of a subsample of restaurants in the New Jersey shore area) and row 7 (addition of control dummies that indicate the week of the second wave interview). In the last specification test, New Jersey restaurants are excluded and the wage gap variable is defined (incorrectly) for Pennsylvanian restaurants. Since no changes in the minimum wage were present in Pennsylvania, we should see no effect of the wage gap on employment. As predicted, this is the case (results in row 12). Thus C&K's results do not seem to be based on a spurious relationship.

Discussion

The case study of C&K (1994) does not find evi-dence of a negative correlation between mini-mum wages and employment, contrary to the central prediction of traditional economic the-ory. In fact their findings, based on the 1992 minimum wage increase in New Jersey, seem to indicate that employment actually increased. Proof is found by using a simple empirical me-thodology: mainly by comparing employment levels before and after the 1992 New Jersey minimum wage increase in New Jersey and Pennsylvania, C&K come to their rather surpri-sing conclusion. A wide variety of other speci-fications are used to assess the robustness of their results. Even though the results are some-

| Independent variable | (i) | (ii) | (iii) | (iv) | (v) |
|---|---|---|---|---|---|
| 1. New Jersey dummy | 2.33 (1.19) | 2.30 (1.20) | - | - | - |
| 2. Initial wage gap | - | - | 15.65 (6.08) | 14.92 (6.21) | 11.91 (7.39) |
| 3. Controls for chain and ownership | no | yes | no | yes | yes |
| 4. Controls for region | no | no | no | no | yes |
| 5. Standard error of regression | 8.79 | 8.78 | 8.76 | 8.76 | 8.75 |
| 6. Probability value of controls | - | 0.34 | - | 0.44 | 0.40 |

Table 2: Results of the regression-adjusted models, source: Card and Krueger (1994). Standard errors in parentheses.




Even though the results are sometimes attenuated, their main result is preserved, as none of the alternative specifications finds a negative employment effect of a rise in the minimum wage. C&K expand these findings in their 1995 book Myth and Measurement: The New Economics of the Minimum Wage. Other cases are analyzed and their conclusion stays the same: negative employment effects of minimum wage increases seem to be minimal, if not non-existent. The opinions of (leading) economists remain divided: Greg Mankiw does not support the results, as opposed to Nobel laureates Paul Krugman and Joseph Stiglitz. Numerous "counter papers" have furthermore been published: e.g. Kennan (1995) stays sceptical of the validity of the Pennsylvanian control group, Hamermesh (Brown et al., 1995) criticizes the timing of the interview waves and casts serious doubts on the validity of C&K's "experiment", and Neumark and Wascher (2000) argue that the use of telephone interviews, rather than payroll records, leads to faulty inferences. Attempts to (theoretically) explain C&K's findings with the standard competitive model have been unsuccessful so far, and alternative models (e.g. monopsony or equilibrium search models) do not perform any better. Judging by the results of C&K, it seems that supporters of conventional economic theory have a lot to think about.

References

Brown, C. et al. (1995). Review: Myth and Measurement: The New Economics of the Minimum Wage, Industrial and Labor Relations Review, 48(4), 828 – 849.

Card, D. (1992). Do Minimum Wages Reduce Employment? A Case Study of California, 1987 – 89, Industrial and Labor Relations Review, 46(1), 38 – 54.

Card, D. and Krueger, A.B. (1994). Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania, The American Economic Review, 84(4), 772 – 793.

Card, D. and Krueger, A.B. (1995). Myth and Measurement: The New Economics of the Minimum Wage, Princeton University Press.

Katz, L.F. and Krueger, A.B. (1992). The Effect of the Minimum Wage on the Fast Food Industry, Industrial and Labor Relations Review, 46(1), 6 – 21.

Kennan, J. (1995). The Elusive Effects of Minimum Wages, Journal of Economic Literature, 33, 1949 – 1965.

Neumark, D. and Wascher, W. (2000). Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Comment, The American Economic Review, 90(5), 1362 – 1396.

| Specification | NJ dummy (i) | Gap measure (ii) | NJ dummy (iii) | Gap measure (iv) |
|---|---|---|---|---|
| 1. Base specification | 2.30 (1.19) | 14.92 (6.21) | 0.05 (0.05) | 0.34 (0.26) |
| 2. Treat four temporarily closed stores as permanently closed | 2.20 (1.21) | 14.42 (6.31) | 0.04 (0.05) | 0.34 (0.27) |
| 3. Exclude managers in employment count | 2.34 (1.17) | 14.69 (6.05) | 0.05 (0.07) | 0.28 (0.34) |
| 4. Weight part-time as 0.4 × full-time | 2.34 (1.20) | 15.23 (6.23) | 0.06 (0.06) | 0.30 (0.33) |
| 5. Weight part-time as 0.6 × full-time | 2.27 (1.21) | 14.60 (6.26) | 0.04 (0.06) | 0.17 (0.29) |
| 6. Exclude stores in NJ shore area | 2.58 (1.19) | 16.88 (6.36) | 0.06 (0.05) | 0.42 (0.27) |
| 7. Add controls for wave-2 interview date | 2.27 (1.20) | 15.79 (6.24) | 0.05 (0.05) | 0.40 (0.26) |
| 8. Exclude stores called more than twice in wave 1 | 2.41 (1.28) | 14.08 (7.11) | 0.05 (0.05) | 0.31 (0.29) |
| 9. Weight by initial employment | - | - | 0.13 (0.05) | 0.81 (0.26) |
| 10. Stores in towns around Newark | - | 33.75 (16.75) | - | 0.90 (0.74) |
| 11. Stores in towns around Camden | - | 10.91 (14.09) | - | 0.21 (0.70) |
| 12. Pennsylvania stores only | - | -0.30 (22.00) | - | -0.33 (0.74) |

Table 3: Results of several robustness specifications, source: Card and Krueger (1994). Standard errors in parentheses. Columns (i)-(ii) use the change in employment as outcome; columns (iii)-(iv) the proportional change in employment.


When Should You Press the Reload Button?

While surfing the Internet, you may have observed the following: if a webpage takes a long time to download and you press the reload button, the page often promptly appears on your screen. Hence, the download was not hindered by congestion (in that case you would do better to try again later) but by some other cause. If you do not know whether some cause (like congestion) may hinder your download, what should you do? When should you cancel the download, and when should you press the reload button? Should you press it immediately, or should you wait for a while? And how long should you wait before cancelling the download? We analyze these issues in this article, which is a non-technical impression of the paper "Efficiency of Repeated Network Interactions" by Judith Timmer (UT) and Michel Mandjes (UvA).

Judith Vink-Timmer

Judith Timmer is an assistant professor in the Stochastic Operations Research group at the University of Twente, Enschede. She holds Bachelor's and Master's degrees in Econometrics from Tilburg University, and obtained her Ph.D. degree in game theory at the same university. Her research interests include the analysis of cooperation and coordination in networks, allocation of joint profits, and game theory.

Problem description

The amount of traffic transmitted over the Internet is still increasing. The main part of this traffic consists of transfers like video, data and email. The completion times of these transfers vary over time due to several causes. First, there is Internet congestion: as the level of congestion fluctuates, the completion times do as well. Second, you may have observed that a webpage which took long to download appeared promptly on your screen after you pressed the reload button. In this case we say the download request was hindered by non-congestion-related errors; this is a second cause of varying completion times. Users of the Internet do not know which of these two causes, if any, occurs. A user cancels a download request if he feels he has been waiting too long; he gets impatient. This personal maximum waiting time is called his impatience threshold. After cancelling a download request he may wait some time before putting down a new request. This may improve his chances of a successful request, that is, a request that is completed before he gets impatient. If the user decides not to wait (his waiting time has length zero), this user is said to use a restart strategy (Maurer and Huberman, 2001). Such a strategy is often used on the web when a page seems to take too long to load: users impatiently press the reload button and often the page is promptly downloaded. Upon completion of the request the user spends some time reading or studying the page that was downloaded from the network. After finishing this, he immediately puts down a new request for a download. The goal of each user is to maximize his expected number of successful requests over a given time span by choosing a suitable impatience threshold and waiting time. We want to know how patient the user should be, i.e. how long he should wait before pressing the reload button, and whether he should use a restart strategy.

Model with congestion

We study this problem in the simplest setting possible, namely a network used by two users. In our first model, we assume that congestion is the only cause of unsuccessful requests. If both users simultaneously use the network then it is congested; a download takes twice as long compared to the situation where a single user is on the network. The two users want to download some pages (like webpages or documents) from the network to read them. Each user knows the size of the page to be downloaded and knows how long downloading would take if there were no congestion. The time to read the page is a realization of the user's exponential reading time. A user decides when to cancel a download request (that is, what his impatience threshold is) and how long to wait before reissuing his request (that is, what his waiting time is). Users cannot see whether the network is congested or not, and in addition they do not know the characteristics (like page size and strategy) of the other user. Moreover, a user only observes whether or not his page has been loaded; he does not observe the download progress. We assume a user is patient enough to have his page downloaded if there is no congestion during the download; this is a lower bound on his impatience threshold. Clearly, in congestion periods it takes relatively long to complete a download. If the network is congested while the user tries to download his page, he may get impatient before the download request is completed and cancel the request. Since congestion is the only cause of unsuccessful requests in this model, the user concludes that the network was congested. Hence, he will wait for some amount of time before issuing a new download request.

Extended model with non-congestion-related errors

Our second model is an extension of the previous one and includes non-congestion-related errors as a second cause of unsuccessful download requests. Assume that at the beginning of each download attempt such an error takes place with probability p. If it occurs, the download request is completely ignored; to the network it seems as if there was no request. After a certain period of time the user becomes impatient because his download request is not fulfilled. He cancels the request and waits for some time before putting down a new one. Notice that, in contrast to the previous model, here the user cannot deduce the cause (congestion or non-congestion-related errors) of the unsuccessful download. Also remark that for probability p=0 non-congestion-related errors cannot occur, and this model boils down to the first model.

Solution methods

Each user wishes to maximize the expected number of pages he can download and read in a fixed time interval. Notice that this number does not only depend on his own strategy but also on the strategy of the other user. This dependence on each other's strategies implies that the two users are actually involved in a two-player non-cooperative game. In such a game, the users are the players, a strategy of a player is a pair of impatience threshold and waiting time, and the payoff of a player is the expected number of pages he can download and read in a fixed time interval given the strategies of both players. The strategy pairs of the users are called Nash equilibrium strategies (Nash, 1951) if no user can download and read more pages by unilateral deviation from his own strategy. The analysis of this game with its repeated network interactions is difficult and complex due to the stochastic reading times of the users. Conventional methods in non-cooperative game theory cannot handle these stochastic components, and so it is hard to determine the equilibrium strategies of this game analytically. Therefore, simulation is used to search for equilibrium strategies in this two-person network for both models.
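To give a flavour of such a simulation, the sketch below implements a crude time-stepped version of the two-user network. All concrete choices here (the time grid, the parameter values, the halving of download speed under congestion) are our own simplifications for illustration, not the authors' simulator. Estimating payoffs this way on a grid of (impatience threshold, waiting time) pairs, and checking that no unilateral deviation raises either user's payoff, yields the equilibrium search described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def successes(strats, pages, horizon=1000.0, mean_read=5.0, dt=0.01):
    """Count successful downloads for two users sharing one link.
    strats[i] = (impatience threshold, waiting time); pages[i] is the
    download time without congestion. While both users are downloading,
    each download progresses at half speed."""
    state = ["down", "down"]      # down(loading) / wait(ing) / read(ing)
    clock = [0.0, 0.0]            # time left in the wait/read phase
    done = [0.0, 0.0]             # download progress on current request
    spent = [0.0, 0.0]            # time spent on current request
    count = [0, 0]
    t = 0.0
    while t < horizon:
        speed = 0.5 if state[0] == state[1] == "down" else 1.0
        for i in (0, 1):
            if state[i] == "down":
                done[i] += speed * dt
                spent[i] += dt
                if done[i] >= pages[i]:              # request completed
                    count[i] += 1
                    state[i], clock[i] = "read", rng.exponential(mean_read)
                elif spent[i] >= strats[i][0]:       # user got impatient
                    state[i], clock[i] = "wait", strats[i][1]
            else:
                clock[i] -= dt
                if clock[i] <= 0.0:                  # issue a new request
                    state[i] = "down"
                    done[i] = spent[i] = 0.0
        t += dt
    return count

# User 1 as patient as possible; user 2 impatient with a short waiting time
print(successes([(4.0, 0.0), (1.2, 0.5)], pages=[2.0, 1.0]))
```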

Computational results

In the first model congestion is the single source of unsuccessful download requests. We say that a user is as patient as possible if he is patient enough for his page to be completely downloaded under congestion. Also, he is as impatient as possible if a request is only successful when there is no congestion. The simulation results are as follows.

• If the page sizes are almost equal then in any equilibrium strategy all users are as patient as possible and any waiting time may be chosen.

• Otherwise, if there are differences in page sizes then the equilibrium strategies are as follows. Assume that user 1 has the smallest page size. Then this user is as patient as possible. User 2 need not be that patient, but he should also not be as impatient as possible. Again, any waiting time can be part of an equilibrium.

This result has the following explanation. If a user is as patient as possible then every download request is successful. The user never has to abort a download and consequently never has to wait before starting a new attempt. Hence, since all download attempts are successful, the user optimizes the number of pages he can read. Setting a waiting time is superfluous, and hence any waiting time may be chosen. Notice that some equilibrium strategies are restart strategies and others are not.


In the second model unsuccessful requests are caused by congestion or by non-congestion-related errors. The simulation results for probability p=0.10 are as follows.

• If the page sizes are similar then both users have a unique equilibrium strategy, namely to be as patient as possible and set zero waiting times.

• If there are small differences in page size, assume that the page of user 1 is the smallest. Then in any equilibrium strategy user 1 is as patient as possible. User 2 need not be that patient but he should also not be as impatient as possible. Both users have zero waiting times.

• If there are large differences in page size, assume that the page of user 1 is the smallest. Then in any equilibrium strategy user 1 is as patient as possible, user 2 may have any impatience threshold except being as patient or as impatient as possible, and both users have zero waiting times.

Remark that in all equilibrium strategies the user with the smallest page size is as patient as possible. Also note that none of the users waits for a positive amount of time after cancelling an unsuccessful download request. Both users immediately put down a new download request, which has a negative effect on network congestion. These restart strategies seem logical, since a user that is as patient as possible can only experience an unsuccessful request if it is caused by a non-congestion-related error. Therefore it makes no sense to wait, and the user chooses to place a new download request immediately. Hence, under the presence of non-congestion-related errors all equilibrium strategies are restart strategies.

Concluding remarks

We studied a network with two users. Each of them wants to maximize his expected number of successful download requests over a given time span by choosing a suitable impatience threshold and waiting time. In the first model, where congestion is the only cause of unsuccessful requests, each of the users will be very patient and any waiting time is possible. Hence, restart strategies are just one type of equilibrium strategy. We proposed a second model in which non-congestion-related errors are a second source of increased waiting time. Here, users set large impatience thresholds as well, but now have zero waiting times in equilibrium; they immediately reissue an unsuccessful download. In this case all equilibrium strategies are restart strategies. Hence, we conclude that in both models users may use restart strategies because these are equilibrium strategies. Our results depend on the fact that there are only two users in the network. An interesting extension of this study is to investigate whether restart strategies remain among the equilibrium strategies when the number of network users increases. It seems very likely that this will not be true and that waiting times will be positive, because the uncertainty about the cause of unsuccessful requests increases. Future research should clarify this.

References

Maurer, S.M. and Huberman, B.A. (2001). Restart strategies and Internet congestion, Journal of Economic Dynamics & Control, 25, 641-654.

Mo, J. and Walrand, J. (2000). Fair end-to-end window-based congestion control, IEEE/ACM Transactions on Networking, 8, 556-567.

Nash, J. (1951). Non-cooperative games, Annals of Mathematics, 54, 286-295.

Timmer, J. and Mandjes, M. (2009). Efficiency of repeated network interactions, International Journal of Electronics and Communications, 63, 271-278.


The importance of Portfolio Insurance as a hedging strategy arises from the asymmetric risk preferences of investors. Portfolio Insurance allows investors to limit their downside risk while retaining exposure to higher returns. This goal can be accomplished with an investment whose payoff profile is that of the holder of a call option: a floor plus upside participation. Portfolio Insurance techniques have their roots in Black and Scholes option pricing theory. In Black and Scholes (1973) a no-arbitrage argument is used to derive the model equation; this same argument can be used to synthetically create options, and the original Portfolio Insurance technique was based on option valuation theory. Later theoretical developments have produced varied techniques that, though using different means, aim to achieve the same goal.

Portfolio Insurance - Does Complexity Lead to Better Performance?

Elisabete Mendes Duarte

lives in Leiria, Portugal. PhD in Economics (2006), University of Coimbra, approved Summa Cum Laude. MSc in Financial Economics (1997), University of Coimbra. Licentiate and Bachelor's degrees in Economics (1988, 1986), Technical University of Lisbon. Currently she is a professor at the School of Technology and Management, Polytechnic Institute of Leiria.

The strategy can be executed through the direct purchase of a put option, providing a static hedge (static Portfolio Insurance), or through a portfolio composed only of stocks and the risk-free asset that is reviewed periodically (dynamic Portfolio Insurance). In this work we focus solely on dynamic Portfolio Insurance. The use of dynamic Portfolio Insurance strategies means the portfolio is rebalanced between stocks and the risk-free asset, according to the rules defined by the different Portfolio Insurance techniques, until the portfolio reaches maturity. In order to achieve the proposed goals, dynamic Portfolio Insurance implies that the portfolio must be continuously rebalanced, which incurs transaction costs for investors. In the last few years Portfolio Insurance has gradually become more commercially feasible due to falling transaction costs. This has prompted the subject to once again become a focus of public discussion.

The Techniques

Stop-Loss Strategy
The Stop-Loss strategy rests on a simple proposition: a floor (F), or minimum value allowed for the portfolio, is established, and the initial investment is fully allocated to stock. Then, two different situations may occur:

1 If the portfolio value at time t is higher than the present value of the floor, pt > F·e^(−r(1−t)), the allocation to stock remains unchanged;

2 If the portfolio value at time t is lower than or equal to the present value of the floor, pt ≤ F·e^(−r(1−t)), the stock is immediately sold and the investor's wealth is reallocated to the risk-free asset.

The floor is guaranteed because if the investor's funds remain in the risk-free asset until maturity, the terminal value is determined by capitalization at the risk-free interest rate. The result of this strategy equals that of the risky asset if its price never drops below the present value of the floor.
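A minimal sketch of this rule along one discretely monitored price path (one unit of the index as initial wealth; all parameter values illustrative). On a discrete grid the sale can occur strictly below the discounted floor, so the guarantee can be slightly pierced: the gap risk that continuous monitoring assumes away.

```python
import numpy as np

def stop_loss(prices, floor, r, T):
    """Stop-loss portfolio insurance on an equally spaced grid over [0, T].
    Stay fully invested in the risky asset while its value exceeds the
    present value of the floor; otherwise switch everything into the
    risk-free asset (rate r, continuous compounding) until maturity."""
    n = len(prices) - 1
    for k, p in enumerate(prices):
        t = T * k / n
        if p <= floor * np.exp(-r * (T - t)):
            return p * np.exp(r * (T - t))   # locked in until maturity
    return prices[-1]                        # never hit: track the index

# The index dips below the discounted floor at mid-year and is sold there
path = np.array([100.0, 97.0, 94.0, 96.0, 99.0])
print(stop_loss(path, floor=98.0, r=0.04, T=1.0))   # ~95.9: gap risk
```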

CPPI - Constant Proportion Portfolio Insurance
The CPPI was originally proposed by Perold (1986) and Black and Jones (1987). A CPPI strategy begins by establishing the floor. The difference between the portfolio value at moment t (Pt) and the floor at moment t (Ft) is defined as the cushion, Ct = Pt − Ft. The product of the cushion and a multiple (m) gives, at moment t, the amount to allocate to the risky asset; this is called the exposure, et = m·Ct. The multiple is taken to be greater than one in order to lever the investment, and is chosen to reflect the expected performance of the risky asset as well as the investor's risk preferences. As such, in rising markets the multiple is usually high and in falling markets the multiple is usually low. Over time, if the growth in the risky asset exceeds the risk-free rate of return, the cushion will rise and the investor's wealth will be switched from the risk-free to the risky asset, allowing the investor to retain exposure to higher returns. If the risky asset performs poorly, the investor's wealth will be transferred into the risk-free asset at rebalancing, providing the investor a minimum value (floor) for his portfolio.
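A sketch of the CPPI rebalancing rule along one path, assuming rebalancing at every grid point, no transaction costs, and exposure capped at total wealth (no borrowing). The multiple m = 6 matches the simulations reported below; the remaining values are illustrative.

```python
import numpy as np

def cppi(prices, floor_T, m, r, T):
    """CPPI on an equally spaced grid: at each step invest m times the
    cushion (wealth minus the present value of the floor) in the risky
    asset and the rest at the risk-free rate r."""
    n = len(prices) - 1
    dt = T / n
    wealth = prices[0]                   # start with one unit of the index
    floor = floor_T * np.exp(-r * T)     # present value of the floor
    for k in range(n):
        cushion = max(wealth - floor, 0.0)
        exposure = min(m * cushion, wealth)      # no borrowing
        wealth = (exposure * prices[k + 1] / prices[k]
                  + (wealth - exposure) * np.exp(r * dt))
        floor *= np.exp(r * dt)          # floor accretes to floor_T at T
    return wealth

# A random-walk index path; the guarantee is 98% of the initial level
rng = np.random.default_rng(2)
path = 100.0 * np.cumprod(np.r_[1.0, 1.0 + rng.normal(0.0005, 0.01, 250)])
print(cppi(path, floor_T=98.0, m=6, r=0.04, T=1.0))
```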

Options Based Portfolio Insurance
An Options Based Portfolio Insurance (OBPI) strategy consists of buying a risky asset and simultaneously purchasing a put option on it. The put option gives its owner the right to sell the underlying asset at a specified price on a specific date (a European put). This strategy enables the investor to place a downside limit on the value of the underlying asset, which can be exercised at the expiration date. OBPI was the first Portfolio Insurance strategy to be proposed. In its purest form (the one applied in our empirical approach) OBPI uses the Black and Scholes option valuation model to create a continuously adjusted synthetic European put. Combining the purchase of the risky asset with the purchase of a put option is equivalent to holding a continuously adjusted portfolio that combines risky and risk-free assets. Leland (1980) and Rubinstein and Leland (1981) show how to adjust the proportions between the risky and the risk-free assets based on the Black and Scholes (1973) formulas. A well-known result of the Black and Scholes (1973) model is the put-call parity P = C + E·e^(−rτ) − S. Rearranging gives P + S = C + E·e^(−rτ), and substituting the call option value we get S + P = S·N(d1) − E·e^(−rτ)·N(d2) + E·e^(−rτ). Rearranging further, S + P = S·N(d1) + E·e^(−rτ)·[1 − N(d2)], so S + P = S·N(d1) + E·e^(−rτ)·N(−d2), where the first part of the equation, S·N(d1), is the portion invested in risky assets and the second part, E·e^(−rτ)·N(−d2), represents the investment in the risk-free asset. Because we use synthetic put options, we have total freedom of choice in underlying assets, maturities and strike price (floor). The strike price reflects the investor's risk preference.
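The decomposition S + P = S·N(d1) + E·e^(−rτ)·N(−d2) derived above translates directly into portfolio weights; a small sketch (the volatility and rate inputs are illustrative):

```python
import numpy as np
from scipy.stats import norm

def obpi_weights(S, E, r, sigma, tau):
    """Risky/risk-free split of the insured portfolio (stock plus
    synthetic European put), from S + P = S N(d1) + E e^{-r tau} N(-d2)."""
    d1 = (np.log(S / E) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    risky = S * norm.cdf(d1)                          # held in the index
    risk_free = E * np.exp(-r * tau) * norm.cdf(-d2)  # held at the risk-free rate
    return risky, risk_free

# A 98% floor on an index at 100 with one year to maturity
risky, risk_free = obpi_weights(S=100.0, E=98.0, r=0.04, sigma=0.20, tau=1.0)
print(risky, risk_free, risky + risk_free)   # the sum is S + P, the insured portfolio
```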

The Simulation

Data and Technical Remarks
This paper simulates the performance of the three Portfolio Insurance strategies described above. We make cross-sectional comparisons between portfolios with the same starting values and which guarantee the same minimum value at the end of the period. Evaluating the different Portfolio Insurance strategies is difficult, both because they do not maximize utility and because of the often-mentioned asymmetry of expected returns. As in Garcia and Gould (1987), Bird et al. (1988), Bird et al. (1990) and Benninga (1990), we evaluate results garnered by empirical simulation with market data. We chose to test the performance of portfolio insurance on the PSI-20 index and the DJ Stoxx 50 index, with data between January 2003 and December 2008. We ran six simulations of one year's duration for the three strategies, on both indexes. One euro was valued as one index point, so the investor's initial wealth is the index value at the beginning of the investment period. We set the floor at 98%. For the risk-free rate we chose the one-year Euribor; this shorter maturity was chosen to avoid implied roll-overs from periods divided by a portfolio revision. A portfolio revision is made whenever there is a variation in return that surpasses an imposed limit. This method of adjustment allows for flexibility in the choice of different tolerance limits for portfolio revisions in rising and falling market periods. We chose to implement a portfolio revision whenever the stock varies positively by 5% or more, or negatively by 3% or more. In our simulations we modelled transaction costs proportional to the changes in portfolio positions: transaction costs of 0.5% were incurred whenever assets were reallocated. The simulations involving CPPI were done with a multiple of 6. For the OBPI simulations we used the realized volatility over the analysed period; in order to allow for transaction costs we worked with the Leland (1985) variance. The evaluation of the insured portfolios is problematic, as the usual methods, based upon

"The simplest techniques provide the best results"

Actuarial Sciences

Page 29: Aenorm 64

27AENORM 64 July 2009

return-variance and return-beta trade-offs, do not apply: Portfolio Insurance strategies produce asymmetric results which are not captured by these measures. Results are therefore evaluated on the basis of wealth at maturity, implementation costs and a measure of error. The comparison of wealth at maturity is particularly important because it summarizes the central goal of Portfolio Insurance strategies. Implementation costs matter because an investor, irrespective of his preferences, would like to achieve his investment objectives at minimum cost. We use a measure of error because the Portfolio Insurance investor establishes in advance what his expected result is: the floor if the risky asset value falls over the investment horizon, or the value of the risky asset (without the premium) if it rises. The error ratio compares the result actually achieved at maturity with this expected result:

error ratio = wealth at maturity / expected wealth

where the expected wealth is the floor or the risky-asset value as just described. This formula is very close to what Leland (1985) defined as the hedging error, because it measures the difference between the expected value of the strategy and the value that is achieved. Rubinstein (1985) notes that such an error reveals path dependency: the rate of return on the insured portfolio does not depend solely on the return of the underlying risky asset but also on the path that asset takes over the investment horizon.

| Year | Stop-Loss | CPPI | OBPI |
|---|---|---|---|
| 2003 | 5690.71 | 6203.77 | 6481.86 |
| 2004 | 7600.16 | 7049.79 | 7395.18 |
| 2005 | 8618.67 | 7976.09 | 8417.855 |
| 2006 | 11197.60 | 9775.62 | 10930.03 |
| 2007 | 13019.36 | 11842.22 | 12541.37 |
| 2008 | 12445.63 | 12807.72 | 11677.21 |

Table 1 – Wealth at maturity – PSI-20 (EURO)

| Year | Stop-Loss | CPPI | OBPI |
|---|---|---|---|
| 2003 | 2284.35 | 2447.19 | 2510.24 |
| 2004 | 2774.77 | 2719.87 | 2709.24 |
| 2005 | 3349.10 | 2999.26 | 3253.033 |
| 2006 | 3255.11 | 3482.73 | 3549.64 |
| 2007 | 3600.73 | 3762.95 | 3661.25 |
| 2008 | 3281.65 | 3624.86 | 3259.77 |

Table 2 – Wealth at maturity – DJ Stoxx 50 (EURO)

Empirical Results
From Tables 1 and 2 it is easy to verify that the Stop-Loss strategy presents the best results in 2004, 2005, 2006 and 2007 for the PSI-20 index, and in 2004 and 2005 for the DJ Stoxx 50 index. These were years in which the indexes rose, so the investment was fully allocated to the risky asset throughout the period. In investment periods such as 2003 for the PSI-20, and 2003 and 2006 for the DJ Stoxx 50, where a decrease at the beginning of the year was followed by a gradual rise of the indexes in the later months, OBPI provides the best results. CPPI seems to be more appropriate in scenarios such as 2008, where there is a big drop in the indexes; in this scenario the technique allows the investor to achieve good results. The implementation costs are reported in Tables 3 and 4. As in the previous analysis, there is no uniform answer to the question of which strategy is cheapest; the answer depends on the evolution of the index, and depending on this evolution a different technique can be used to lower implementation costs. Error measurement (Tables 5 and 6) leads to identical conclusions. We can identify a path-dependency relationship, because the rate of return on the insured portfolio does not depend solely on the return of the underlying risky asset but also on the path taken over the investment horizon (Rubinstein, 1985).

| Year | Stop-Loss | CPPI | OBPI |
|---|---|---|---|
| 2003 | 908.04 | 394.98 | 116.89 |
| 2004 | -147.77 | 402.60 | 57.21 |
| 2005 | -124.40 | 518.18 | 76.42 |
| 2006 | -139.03 | 1282.94 | 128.53 |
| 2007 | -318.60 | 858.54 | 159.39 |
| 2008 | -1020.77 | -1382.85 | -252.35 |

Table 3 – Implementation Costs – PSI-20 (EURO)

| Year | Stop-Loss | CPPI | OBPI |
|---|---|---|---|
| 2003 | 211.94 | 49.11 | -13.95 |
| 2004 | -73.96 | -19.06 | -8.43 |
| 2005 | -56.09 | 293.75 | 39.97 |
| 2006 | 350.02 | 122.40 | 55.49 |
| 2007 | -50.13 | -212.35 | -110.65 |
| 2008 | -153.92 | -497.14 | -132.05 |

Table 4 – Implementation Costs – DJ Stoxx 50 (EURO)


Concluding Remarks

The Stop-Loss strategy presents the best results in scenarios where the indexes rise. In scenarios with a decrease at the beginning of the year followed by a gradual rise of the indexes in later months, OBPI provides the best results. CPPI seems to be more appropriate in scenarios with a big drop in the indexes. The Stop-Loss and CPPI strategies have the advantage of being implementable without using option valuation theory, making these techniques less complex. However, we must not forget that there are scenarios where only OBPI achieves the expected results. We find that the techniques' performances are path-dependent and are not related to the degree of method complexity. We also find that in some market conditions the simplest techniques provide the best results.

| Year | Stop-Loss | CPPI | OBPI |
|---|---|---|---|
| 2003 | 0.86 | 0.94 | 0.98 |
| 2004 | 1.02 | 0.95 | 0.99 |
| 2005 | 1.01 | 0.94 | 0.99 |
| 2006 | 1.01 | 0.88 | 0.99 |
| 2007 | 1.03 | 0.93 | 0.99 |
| 2008 | 1.09 | 1.12 | 1.02 |

Table 5 – Errors Measure – PSI-20

| Year | Stop-Loss | CPPI | OBPI |
|---|---|---|---|
| 2003 | 0.92 | 0.98 | 1.01 |
| 2004 | 1.03 | 1.01 | 1.00 |
| 2005 | 1.02 | 0.91 | 0.99 |
| 2006 | 0.90 | 0.97 | 0.98 |
| 2007 | 1.01 | 1.06 | 1.03 |
| 2008 | 1.05 | 1.16 | 1.04 |

Table 6 – Errors Measure – DJ Stoxx 50



The importance of market consistent valuation has risen in recent years throughout the global financial industry. This is due to the new regulatory landscape and because banks and insurers acknowledge the need to better understand the uncertainty in the market value of their balance sheets. The balance sheets of banks and insurers often include products with embedded options, which can be properly valued with standard risk-neutral valuation techniques. Determining the uncertainty in the future value of such products (needed, for example, for regulatory or economic capital calculations) is more difficult, because when using risk-neutral valuation, future outcomes are not simulated based on their historical return. For example, in risk-neutral simulations stock prices are assumed to grow at the risk-free interest rate, which is not realistic.

Deflator: Bridge Between Real World Simulations and Risk Neutral Valuation

Using real-world simulations, variables are simulated based on their historical return: stock prices grow at the actual expected return (the risk-free rate combined with a risk premium). The valuation of a product using a 'standard' risk-neutral discount factor is then inconsistent, since the returns are not risk-neutral in this case. This article discusses the combination of these two methods in order to simulate future outcomes based on the actual expected return and still value products market-consistently. Real-world simulations are needed to simulate future values of the variables based on their historical return, and a stochastic discount factor (SDF), called the 'deflator', is needed to calculate the market value of these products. The uncertainty in future market value is estimated by combining these methods. In the next two sections a Hull White Black Scholes (HWBS) model is used to demonstrate how a deflator can be determined and incorporated in a HWBS framework. An example product with a payout based on a stock return and an interest rate is used to show some results based on this framework and using real market data.

HWBS model

In this article, the one-factor Hull White (HW) model is used to simulate interest rates. The HW model is chosen because it incorporates mean-reverting features and, with proper calibration, fits the current interest rate term structure without arbitrage opportunities (Rebonato, 2000). Furthermore, an appealing feature of the HW model is its analytical tractability (Hull & White, 1990). Stock prices are simulated with a Black and Scholes (Black & Scholes, 1973) based Brownian motion that is correlated with the HW process using a Cholesky decomposition. Assume a probability space (Ω, F, F, Q), where Ω is the sample space, Q is the risk-neutral probability measure, F is the sigma field and F is the natural filtration {Ft}0≤t≤T. Suppose the interest rate is also an F-adapted random process. The HW model for the process of the short rate under the risk-neutral probability measure can be expressed as in equation 1, where a and σr are constants, W^rQ is a Wiener process for the interest rate and θ(t) is a deterministic function, chosen in such a way that it fits the current term structure of interest rates. The process for the stock price is shown in equation 2, where ρ indicates the correlation between both processes and W^sQ is a Wiener process for the stock price.

Pieter de Boer

was a board member of the VSAE in 2006, responsible for external affairs. He organized, among other events, the Econometric Game, the National Econometrician Day and the Actuarial Congress. This article is a summary of his master thesis, written under the supervision of Prof. dr. H.P. Boswijk during an internship at Zanders. Since December 2008 he has been employed as an associate consultant at Zanders.


dr_t = (θ(t) − a·r_t) dt + σ_r dW_t^rQ   (1)

dS_t = r_t S_t dt + σ_s S_t ρ dW_t^rQ + σ_s S_t √(1−ρ²) dW_t^sQ   (2)

When simulating these processes under Q, the present value of a product can be determined, since the proper discount factor is known to be the risk-free interest rate. Under the assumption of a different probability space (Ω, F, F, P), where Ω is the sample space, P is the real-world probability measure and F is the natural filtration {Ft}0≤t≤T, the processes for the interest rate and the stock price can be written as:

dr_t = (μ_r − a·r_t) dt + σ_r dW_t^rP   (3)

dS_t = μ_t S_t dt + σ_s S_t ρ dW_t^rP + σ_s S_t √(1−ρ²) dW_t^sP   (4)

where μ_r is the historical drift of the interest rate and μ_t is the expected return of the stock price, which is equal to the expected return under the risk-neutral probability measure plus a market risk premium: μ_t = r_t + π_s.
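A sketch of an Euler discretisation of equations 3 and 4 under P. The historical mean level of 4.27% and the risk premium π_s = 3% follow the example later in this article; a, σ_r, σ_s, ρ and the choice to write the drift constant μ_r as a times the historical mean level are illustrative assumptions, not the calibrated values.

```python
import numpy as np

rng = np.random.default_rng(3)

def hwbs_real_world(r0, s0, a, r_bar, sigma_r, pi_s, sigma_s, rho,
                    T=3.0, steps=756, paths=10_000):
    """Euler scheme for equations 3 and 4 under P. The short-rate drift
    constant is mu_r = a * r_bar, so r mean-reverts to the historical
    average level r_bar; the stock earns the short rate plus the risk
    premium pi_s. The two shocks are correlated via a Cholesky factor."""
    dt = T / steps
    mu_r = a * r_bar
    r = np.full(paths, r0)
    s = np.full(paths, s0)
    for _ in range(steps):
        zr = rng.standard_normal(paths)
        zs = rng.standard_normal(paths)
        dw_r = np.sqrt(dt) * zr                                    # dW^rP
        dw_s = np.sqrt(dt) * (rho * zr + np.sqrt(1 - rho**2) * zs)
        s = s + (r + pi_s) * s * dt + sigma_s * s * dw_s           # eq. (4)
        r = r + (mu_r - a * r) * dt + sigma_r * dw_r               # eq. (3)
    return r, s

r_T, s_T = hwbs_real_world(r0=0.04, s0=100.0, a=0.05, r_bar=0.0427,
                           sigma_r=0.01, pi_s=0.03, sigma_s=0.20, rho=0.30)
print(np.percentile(s_T, [1, 50, 99]))   # 98% CI of the index after 3 years
```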

Stochastic discount factor

When simulating these processes under the real-world probability measure P, the value of a product is more difficult to determine, since the risk-free interest rate is no longer the proper discount factor. Discounting with the risk-free interest rate under actual expected returns would not lead to a market consistent value. To find a proper stochastic discount factor under the real-world probability measure P, suppose X is an F-measurable random variable and the risk-neutral probability measure is Q. L, the Radon-Nikodym derivative of Q with respect to P (Etheridge, 2002), equals

L = dQ/dP (5)

and

Lt = EP[L|Ft] (6)

For equivalent probability measures1 Q and P, given the Radon-Nikodym derivative from equation 5, the following equation holds for the random variable X (Duffie, 1996)

EQ(X) = EP(LX) (7)

and

EQ[XT|Ft] = EP[XT·LT/Lt|Ft] (8)

It can be seen from the above equation that the expectation of X under the probability measure Q is equal to the expectation of L times X under the probability measure P.

Furthermore, suppose {Wt} is a Q-Brownian motion with the natural filtration that was given above as {Ft}. Define:

L_t = exp( −∫_0^t θ_s′ dW_s^P − ½ ∫_0^t θ_s′ θ_s ds )   (9)

and assume that the following condition (Novikov's condition) holds:

E[ exp( ½ ∫_0^T θ_t′ θ_t dt ) ] < ∞   (10)

where the probability measure P is defined in such a way that Lt is the Radon-Nikodym derivative of Q with respect to P. Now it is possible to use the preceding to rewrite equations 7 and 8, linking risk-neutral valuation and valuation under the real-world probability measure:

E^Q[ exp(−∫_t^T r_s ds) X_T | F_t ] = E^P[ exp(−∫_t^T r_s ds − ∫_t^T θ_s′ dW_s^P − ½ ∫_t^T θ_s′ θ_s ds) X_T | F_t ]   (11)

Combining the above equations, Girsanov's theorem (Girsanov, 1960) states that the process W^P defined through

W_t^Q = W_t^P + ∫_0^t θ_s ds   (12)

is a standard Brownian motion under the probability measure P. A useful feature of this theorem is that when changing the probability measure from real world to risk neutral, the volatility of the random variable X is invariant under the change of measure. In changing from a risk-neutral to a real-world probability measure, it is essential that W_t^P is made a standard Brownian motion.
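A quick numerical sanity check of equations 7, 9 and 12 for a constant θ (all values illustrative): sampling under P and weighting with the Radon-Nikodym derivative reproduces the expectation under Q.

```python
import numpy as np

rng = np.random.default_rng(4)
T, theta, n = 1.0, 0.5, 1_000_000

w_p = np.sqrt(T) * rng.standard_normal(n)   # W^P_T, Brownian motion under P
w_q = w_p + theta * T                       # W^Q_T via equation (12)

# Radon-Nikodym derivative L_T = dQ/dP along each path, equation (9)
L = np.exp(-theta * w_p - 0.5 * theta**2 * T)

payoff = lambda w: np.maximum(w, 0.0)       # an arbitrary payoff of W^Q_T

# Left: E^Q, sampled directly (under Q, W^Q_T ~ N(0, T)).
# Right: E^P[L * payoff], equation (7). Both approach E[max(N(0,1),0)] ~ 0.399.
print(payoff(np.sqrt(T) * rng.standard_normal(n)).mean())
print((L * payoff(w_q)).mean())
```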

SDF in HWBS model

Now, according to the theory above, it is possible to change from the probability measure P to the probability measure Q. For this, it is sufficient to find θ_s from equation 12. This leaves the following two equations:

W_t^{rQ} = W_t^{rP} + ∫_0^t θ_s^r ds
W_t^{sQ} = W_t^{sP} + ∫_0^t θ_s^s ds    (13)

By choosing a proper value for θ_s^r, the substitution of the first part of equation 13 into equation 1 should equal equation 3. Solving this equality, θ_s^r is found to be:

θ_s^r = (μ_r − θ(t))/σ_r    (14)

Something similar can be done to compute θ_s^s.

1 Q and P are equivalent probability measures provided that Q(A) > 0 if and only if P(A) > 0, for any event A (Duffie, 1996).


With this knowledge, substituting the second part of equation 13 into equation 4 and solving yields:

dW_t^{sQ} = dW_t^{sP} + (π_s/(σ_s√(1−ρ²))) dt − (ρ/√(1−ρ²))(dW_t^{rQ} − dW_t^{rP})    (15)

This results in:

θ_s^s = π_s/(σ_s√(1−ρ²)) − ρ(μ_r − θ(t))/(σ_r√(1−ρ²))    (16)

Assuming that equation 10 holds, which is a requirement, the stochastic discount factor in the HWBS model can be written as:

SDF(t,T) = exp(−∫_t^T r_s ds − ∫_t^T θ_s^s dW_s^{sP} − ½ ∫_t^T (θ_s^s)² ds − ∫_t^T θ_s^r dW_s^{rP} − ½ ∫_t^T (θ_s^r)² ds)    (17)
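On a discrete simulation grid, the integrals in equation 17 become sums. The following is a minimal sketch of one possible discretisation, not the article's exact scheme; all array names are hypothetical per-path simulation output.

```python
import numpy as np

def sdf_path(r, theta_r, theta_s, dW_r, dW_s, dt):
    """Discrete approximation of SDF(t,T) in equation 17 along one path.

    r, theta_r, theta_s hold the short rate and the two Girsanov kernels on
    each time step between t and T; dW_r, dW_s are the P-Brownian increments.
    """
    log_sdf = (-np.sum(r) * dt
               - np.sum(theta_s * dW_s) - 0.5 * np.sum(theta_s**2) * dt
               - np.sum(theta_r * dW_r) - 0.5 * np.sum(theta_r**2) * dt)
    return np.exp(log_sdf)
```

Averaging SDF times payoff over all real world paths should then reproduce the risk neutral value, which is exactly the consistency check performed in the example below.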

Example

Using the theory described in the previous section, the value of, and the uncertainty in the future value of, a theoretical product are calculated. The following guaranteed product is chosen: the client receives the return on the AEX-index unless that return is below the 1 month Euribor interest rate, in which case the payout is equal to the 1 month Euribor interest rate. These types of products are common on the balance sheets of insurers and, due to the complex payout structure, a simulation model is needed to evaluate the value of such a product. Therefore, the HWBS framework using a stochastic discount factor is suitable to value this product and calculate its risk figures.
First, the value of the product on two different dates is calculated in a standard risk neutral setting. This value is compared with the value resulting from the real world simulations and the use of the stochastic discount factor. See the insert for the expectations and variances that were used for the risk neutral processes.
For the stock price, the volatility was based on at the money (ATM) options with a time to maturity of one year. The mean reversion parameter and the volatility in the HW model were calibrated using a set of ATM swaptions. The average 1-month interest rate μ_r is chosen to be 4.27% based on historical data. Furthermore, the risk premium π_s is fixed at 3%.
In figure 1, the result of running 10,000 simulations of the (1-month) interest rate and the stock price is shown. The history and a forecast for the next 3 years, including the boundaries of a 98% confidence interval (CI) of the AEX-index, are shown under both probability measures.
As can be seen, the average predicted value of the AEX-index has a smooth course, but the

Risk neutral expectations and variances

Interest rates

E^Q[r(t)|F_s] = r(s)e^{−a(t−s)} + α(t) − α(s)e^{−a(t−s)}    (18)

Var^Q[r(t)|F_s] = (σ_r²/2a)(1 − e^{−2a(t−s)})    (19)

Stock

E^Q[ln(S(T)/S(t)) | F_t] = ((1 − e^{−a(T−t)})/a)·x(t) + ln(f^M(0,T)/f^M(0,t)) − ½σ_s²(T−t) + (σ_r²/2a²)[(T−t) + (2/a)(e^{−aT} − e^{−at}) − (1/2a)(e^{−2aT} − e^{−2at})]    (20)

where:

x(t) = r(t) − α(t), with α(t) = f^M(0,t) + (σ_r²/2a²)(1 − e^{−at})²

Var^Q[ln(S(T)/S(t)) | F_t] = σ_s²(T−t) + (σ_r²/a²)[(T−t) + (2/a)e^{−a(T−t)} − (1/2a)e^{−2a(T−t)} − 3/2a] + (2ρσ_sσ_r/a)[(T−t) − (1 − e^{−a(T−t)})/a]    (21)

Figure 1: Development of the AEX-index under both probability measures


"Current market conditions are not necessarily a good measure for future outcomes"

width of the confidence interval shows that the predicted values of the index are in fact rather volatile. As expected, under the real world probability measure the index increases faster on average.
The market value of the product can be estimated by calculating the future payout in each scenario and averaging the discounted value over all scenarios. The market value of the product and the boundaries of the 90% confidence interval are shown in table 1.
As expected, the market value is similar under both probability measures on both calculation dates. The (minor) differences can first be explained by the fact that a different set of simulations is run for each method. Second, a discrete approximation for a part of the stochastic discount factor had to be made in order to use it in the stochastic simulation model.
The higher average value of the product when valued on the 29th of August 2008 results from the rise in the volatility of the stock price. The recent increase in implied volatility can be related to the 'credit crisis'.
Next to this, it is interesting to examine the risk an insurer runs by holding this product on its balance sheet. Since the insurer sold the product, the risk arises from a value increase of this product. As a measure for this risk, the Value at Risk (VaR) of the product is estimated.
The 95% VaR of the product can be calculated by examining the difference between the market value of the product and the market value of the product at t=1. This difference should be corrected for the actual expected change, since the time to maturity of the product has declined at t=1. The 95% VaR is defined as the difference between the average market value at t=1 and the 5% boundary of the 90% confidence interval of the market value at t=1. The results are given in table 2.
Whether the value of the product in one year is estimated correctly can be tested by using the method of backtesting.
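In terms of the simulation output, both figures reduce to a few lines; `payoff`, `sdf` and `values_t1` are hypothetical arrays of per-scenario results (a sketch, not the article's implementation):

```python
import numpy as np

def market_value(sdf, payoff):
    """Average of the discounted payout over all scenarios."""
    return np.mean(sdf * payoff)

def var_95(values_t1):
    """VaR as defined in the text: the distance between the average market
    value at t=1 and the 90% CI boundary exceeded with 5% probability.
    Since the insurer sold the product, the risk is a value increase,
    so the upper boundary is the relevant one."""
    return np.percentile(values_t1, 95) - values_t1.mean()

# e.g. for 30/6/2006 in table 2 (risk neutral): 34.5 - 28.1 = 6.4
```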

Backtest

To examine the forecast capabilities of the model, the results can be tested by performing a backtest. Both models are used to predict the value of the product in one year. However, it is difficult to collect enough observations; therefore, a one year rolling window is used.
The dataset starts in May 2003, which leaves 51 observations available for the backtest. For each of these 51 observations, it is tested whether the actual value of the product lies outside the 90% confidence interval of the predicted value generated by each model. The results of the backtest are shown in figure 2.
What can be concluded from figure 2 is that in particular the observations in the last year of the dataset fall outside the predicted confidence intervals. In total, 15 of the 51 observations lie outside the predicted 90% confidence interval of the real world model. These results can mainly be attributed to the rise in implied volatility due to the turbulent market conditions from May 2007 on, which can be seen in figure 3.

Figure 2: Results of the backtest

Figure 3: Implied volatility of the AEX-index

            Risk Neutral                        Real World
Date        Market value   5% LB    95% UB     Market value   5% LB    95% UB
30/6/2006   28.2           -30.2    131.0      27.5           -30.5    127.2
29/8/2008   57.2           -19.0    188.1      56.4           -17.8    181.6

Table 1: Average market value and the boundaries of the 90% CI under both measures


Whether the model passes the backtest can be assessed in a likelihood ratio testing framework (Christoffersen, 1998). In this framework, suppose that I_t is the indicator variable for the interval forecast given by either model, where I_t = 1 means that the actual value lies in the interval. The coverage can be tested by comparing the null hypothesis that E[I_t] = p with the alternative hypothesis that E[I_t] ≠ p. The likelihoods under the null hypothesis and under the alternative hypothesis are given by:

L(p; I_1, ..., I_n) = p^x (1 − p)^{n−x}
L(π; I_1, ..., I_n) = π^x (1 − π)^{n−x}    (22)

where the maximum likelihood estimate of π is x/n, the number of values outside the interval forecast divided by n, the total number of observations. Using these likelihoods, a likelihood ratio test for conditional coverage can be formulated:

LR_cc = −2 ln[ L(p; I_1, ..., I_n) / L(π; I_1, ..., I_n) ]    (23)

where the test statistic is asymptotically Chi-squared distributed with s(s−1) degrees of freedom, with s=2 as the number of possible outcomes. It is difficult to take the autocorrelation (due to the rolling window) into account; therefore, the resulting conclusions are less reliable. In this case, the LR test statistic is 14.4, well above the 0.10 critical value at the 5% level of the Chi-squared distribution, which justifies the conclusion that the model is inaccurate.
However, the recent crisis is a very unexpected event. If only data from May 2003 until May 2007 are taken into account, the backtest shows a totally different outcome. The LR test statistic for this dataset is 0.05, which would lead to not rejecting the model, as opposed to a rejection when the data from May 2007 until August 2008 are included.
On the other hand, when these tests are performed for the model under a risk neutral probability measure, both tests result in a rejection of the model; see table 3.
So, even when only the data until May 2007 are used to backtest the model under a risk neutral probability measure, it is rejected as accurate, unlike the model under the real world probability measure. This evidence suggests that the risk of the guaranteed product might be estimated better using the model under the real world probability measure with the stochastic discount factor.
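The test itself is straightforward to reproduce. The sketch below computes the coverage LR statistic of equations 22 and 23 (ignoring, as the article does, the autocorrelation induced by the rolling window); the indicator array `I` is hypothetical input.

```python
import numpy as np
from scipy.stats import chi2

def lr_coverage_test(I, coverage=0.90):
    """LR test of equation 23. I[t] = 1 if the actual value lies inside
    the interval forecast; coverage is the nominal level of the interval.
    Assumes 0 < x < n (otherwise the log-likelihood degenerates)."""
    n = len(I)
    x = n - int(np.sum(I))             # number of observations outside
    p, pi_hat = 1.0 - coverage, x / n  # nominal and estimated exceedance rate
    log_l0 = x * np.log(p) + (n - x) * np.log(1 - p)
    log_l1 = x * np.log(pi_hat) + (n - x) * np.log(1 - pi_hat)
    lr = -2.0 * (log_l0 - log_l1)
    return lr, chi2.sf(lr, df=2)       # s(s-1) = 2 degrees of freedom

# 15 exceptions out of 51 observations gives an LR statistic of roughly 14.9,
# of the same order as the 14.4 reported for the real world model.
```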

Conclusions

The objective of this article was to link real world simulation to risk neutral valuation and thereby to investigate whether it is possible to improve the estimation of the uncertainty in future market value. To determine this, a HWBS framework in combination with a stochastic discount factor (SDF) was used. The SDF, also called deflator, is needed for proper valuation using real world simulations. The method was tested in an example based on real market data using this framework.
The most important conclusions that can be drawn from the results and the backtest are:

• Valuation under the real world probability measure using a stochastic discount factor results in a market value that is consistent with the risk neutral value. The main advantage of using real world simulations is that they can also be used for a 'realistic' simulation of random variables.

• Combining the real world simulations with a stochastic discount factor is very useful for banks and insurers. They can use this method to estimate the current value of their products and, more importantly, estimate the uncertainty in this value in one year in a consistent way. This can be used in regulatory (e.g. Basel II or Solvency II) and economic capital calculations.

• Capital calculations are typically based on a one year 99% VaR. When using real world simulations and a standard discount factor, estimated average values are inaccurate and, therefore, the resulting VaR calculations can be as well. When using risk neutral valuation to estimate the VaR, only current market conditions are taken into account. Current market conditions are not necessarily a good measure for future outcomes, which could also lead to inaccurate VaR estimations.

            Risk Neutral                                Real World
Date        Exp. market value   5% LB   95% UB   VaR    Exp. market value   5% LB   95% UB   VaR
            in 1 year                                   in 1 year
30/6/2006   28.1                34.5    20.2     6.4    28.4                35.6    21.2     7.2
29/8/2008   57.4                67.2    44.1     9.8    55.7                66.4    45.0     10.7

Table 2: Risk figures for the product under both measures

Date                 Real world   Risk neutral   1% critical value   5% critical value
Until August 2008    14.4         23.6           0.02                0.10
Until May 2007       0.05         1.67           0.02                0.10

Table 3: Results of the backtest for both models

However, some drawbacks of the model must be noted.

• The model under the real world probability measure, using the SDF, did not pass the backtest. The null hypothesis that the model correctly predicts the uncertainty in the future value is rejected. The failure of the model in the backtest needs to be taken seriously. However, as already mentioned, the market conditions in the last period of the sample are quite unusual. When the dataset is cut off at May 2007, the model passes the backtest, unlike the model under a risk neutral probability measure. Of course, doing this would be a case of data mining, but it does not alter the fact that the current market conditions are difficult to take into account. The period could be regarded as an outlier; some theories state that the recent crisis is comparable to the crisis of the 1920s.

• Two variables, the stock price and the interest rate, are modelled stochastically. When more variables are modelled stochastically, the SDF becomes more complicated. For banks and insurers, who also model variables like exchange rates and volatility stochastically, several more random variables enter the model. As the results have shown, the value of the product greatly depends on this input, and modelling this input as a random variable could help to improve the forecasting qualities of the model. However, this would make the model and the SDF more complicated and less practical.

References

Black, F. and Scholes, M. (1973). The pricing of options and corporate liabilities, Journal of Political Economy, 637-654.

Christoffersen, P.F. (1998). Evaluating interval forecasts, Washington: International Monetary Fund.

Duffie, D. (1996). Dynamic asset pricing theory, Princeton: Princeton University Press.

Etheridge, A. (2002). A course in financial calculus, Cambridge: Cambridge University Press.

Girsanov, I.V. (1960). On transforming a certain class of stochastic processes by absolutely continuous substitution of measures, Theory of Probability and its Applications, 285-301.

Hull, J. and White, A. (1990). Pricing interest-rate-derivative securities, The Review of Financial Studies, 573-592.

Rebonato, R. (2000). Interest-rate option models, Chichester: John Wiley & Sons.


Our actions and behaviors have an impact on the environment. Through our everyday choices, we influence what the world around us looks like and will look like. Do we choose to drive a car to the supermarket or take a bicycle? Do we take a plane to an exotic country or the train to France? Do we want steak for dinner or a vegetarian meal? People often do not realize that their food consumption is a substantial environmental burden. Primeval forests are being cut down for the production of food like soy, livestock contributes 18% to total greenhouse gas emissions, and substantial emissions of substances that contribute to eutrophication and acidification accompany our food production. Food is responsible for about 30% of our total environmental impact; our meat consumption accounts for around 10%.

Femke de Jong

is a consultant at the economics department of environmental research organisation CE Delft. Last year she obtained her MSc in Operations Research & Management at the University of Amsterdam.

Meet your Meat

Differences between meat products

Meat has a relatively large effect on our environment, but there are differences between meat products. Research has shown that poultry has the smallest environmental impact, while beef has the biggest. The environmental burdens of eggs and meat substitutes are smaller than that of meat, while cheese is no better for the environment than most meat products. No conclusive evidence exists that organic meat production has a smaller environmental burden than conventional production. In fact, most, if not all, LCA1 studies point out that organic meat needs more land than conventional meat for the same amount of output.

The effects of our food consumption on the environment can thus be reduced in three ways:

1 by reducing our meat and dairy consumption;

2 by changing our meat consumption (in favor of more chicken and less beef); or

3 by replacing our meat consumption with meat substitutes.

Other options are reducing food losses and changing to less energy intensive refrigerators.

Externalities

The consumption of meat causes several negative environmental effects that are not taken into account by the producers and consumers of meat products. These are externalities, a source of market failure, in which case economists argue for regulation by the government. When external effects are present, market prices do not necessarily reflect social costs. If there were a market for environmental services, society would end up at the point where the benefits of an additional unit of, for example, clean air are equal to the costs of an additional unit of pollution reduction (the equilibrium price). However, we are often not at the equilibrium (optimum) level of pollution, but for example at point A in the figure below. In this case, there are two methods to put a monetary value on the environmental effect: direct valuation of damages or the prevention cost approach. The prevention cost approach delivers the marginal cost to society of policy efforts with the goal of maintaining environmental quality A. The damage cost approach delivers the marginal costs to society of small deviations from environmental quality A.

1 Life Cycle Assessment, a compilation and evaluation of the inputs, outputs and potential environmental impacts of a product system throughout its life cycle (ISO 14040).

Figure 1: Source: Blonk et al. (2008)

Recently, a study for the European Commission (IMPRO, 2008) calculated the environmental impacts of meat products and valued these impacts by assessing the damages to ecosystems, human well-being and resource productivity (the damage cost approach). While there is still discussion about what value to ascribe to, for example, ecosystems, figure 3 shows that the external costs are substantial compared to the amount of money we pay in the supermarket for our meat.
External costs of our meat consumption are huge. In total, our meat consumption of 2006 leads to a cost to society of almost €7 billion (see table 1). In The Netherlands, we consume on average almost 81 kg of pork, beef and chicken per year. The total environmental costs of this average meat consumption amount to more than €400 per year.

What can the government do?

Three different forms of government intervention to take the external environmental costs of meat into account were modeled:

1 An excise on meat.

2 A sales tax increase on meat.

3 An emission limit for ammonia.

Partial equilibrium analysis was used to determine which of these interventions is most beneficial to society, taking all relevant costs (decline in consumer and producer surplus) and benefits (environmental improvement, government revenue) into account. The analysis showed that an emission limit is preferred over an excise or a sales tax on meat. While an emission limit decreases the external costs per kg of meat, with an excise or a sales tax producers have no incentive to adopt environmentally friendlier techniques. Since a low price elasticity was assumed (between -0.3 and -0.6), meat consumption will not decline significantly as a result of these policies. The results show that an ammonia limit would have almost no consequences for consumers, while consumers would have to pay €20-30 more for their yearly meat consumption if the sales tax were increased to 19%.
There are some caveats, however. Although emission limits result in the lowest social costs, they could have unwanted effects. Sevenster & De Jong (2008) have shown that reducing livestock greenhouse gas emissions in the Netherlands could lead to an increase of greenhouse gases abroad. Enteric fermentation (resulting in eructation of methane) is the main source of greenhouse gas emissions in the beef/dairy life cycle, but reducing these emissions may lead to trade-offs. When the focus is on reducing this source of direct livestock emissions, it is possible that greenhouse gas emissions increase globally because of higher use (imports) of concentrates.

2 Note: be careful not to substitute cheese for meat, because this would have no environmental benefit.
3 For example, information campaigns, research into improving meat substitutes, better placement of meat substitutes on shelves in supermarkets.

          Quantity consumed     External costs    Total costs
          (ton) in 2006         (€/kg)            (billion €)
Beef      287,100               13.00             3.7
Pork      676,300               3.52              2.4
Chicken   282,000               2.16              0.6
Total     1,245,400                               6.7

Table 1: Source: PVE (2006), IMPRO (2008)
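As a quick check of the totals in table 1, a minimal sketch (1 ton = 1,000 kg):

```python
# External costs per meat type: tonnes consumed times costs per kg,
# expressed in billions of euros.
consumption_ton = {"Beef": 287_100, "Pork": 676_300, "Chicken": 282_000}
cost_eur_per_kg = {"Beef": 13.00, "Pork": 3.52, "Chicken": 2.16}

totals = {m: consumption_ton[m] * 1_000 * cost_eur_per_kg[m] / 1e9
          for m in consumption_ton}   # Beef ~3.7, Pork ~2.4, Chicken ~0.6
print(f"Total: {sum(totals.values()):.1f} billion euro")   # ~6.7
```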

Figure 2: Source: De Bruyn et al. (2007)
Figure 3: Source: PVE (2006), IMPRO (2008)


"External costs of our meat consumption are huge"


Concluding remarks

If every person in the world consumed as much meat as we do now, an environmental crisis would ensue. Our current global meat consumption already requires 80% of the agricultural land. In the face of a population rising to 9 billion in 2050 and the resulting doubling of meat consumption, we would do best to think about our current consumption patterns. There are some reassuring signs from society, since a growing group of people are turning part-time vegetarian. A piece of advice to those environmentally minded consumers that care about animal welfare: consume less meat2, buy organic, and eat relatively more chicken and less beef.
But the actions of individual consumers will probably not be enough. Government intervention is needed to tackle our biggest environmental threats (global warming, loss of biodiversity). Some government intervention is already taking place. Industries, energy companies and other large companies are obliged to lower their CO2 emissions under the European CO2 emission trading system (EU ETS). Emissions of air polluting substances like ammonia and sulphur dioxide are already regulated by European legislation (the National Emission Ceilings guidelines). While a simple partial equilibrium analysis showed that emission limits are the best solution to lower the environmental burden of our meat consumption, emissions abroad could increase as a result. So for most food products, life-cycle oriented policies are necessary to avoid shifting the environmental burden to other countries. An excise on meat products combined with other measures3 could nudge consumers into eating less meat. Furthermore, financial compensation could be employed to make sure that important ecosystems remain untouched.

References

Blonk, H., Kool, A. and Lutske, B. (2008). Milieueffecten van Nederlandse consumptie van eiwitrijke producten: Gevolgen van vervanging van dierlijke eiwitten anno 2008.

Weidema, B., Wesnaes, M., Hermansen, J., Kristensen, T. and Halbert, N. (2008). Environmental Improvement Potentials of Meat and Dairy Products (IMPRO).

De Bruyn, S., Blom, M., Schroten, A. and Mulder, M. (2007). Leidraad MKBA in het milieubeleid: Versie 1.0, CE Delft.

Sevenster, M. and De Jong, F. (2008). A sustainable dairy sector: Global, regional and life cycle facts and figures on greenhouse gas emissions, CE Delft.

Productschappen Vee, Vlees en Eieren (PVE) (2006). Vee, Vlees en Eieren in Nederland.


The large investments in new power generation assets illustrate the need for proper financial plant evaluations. Traditional Net Present Value (NPV) analysis disregards the flexibility to adjust production decisions to market developments, and thus underestimates true plant value. On the other hand, methods treating power plants as a series of spread options ignore technical and contractual restrictions, and thus overestimate true plant value. In this article we demonstrate the use of cointegration to incorporate market fundamentals and calculate dynamic yet reasonable spread levels and power plant values. A practical case study demonstrates how various technical and market constraints impact plant value. It also demonstrates that plant value may contain considerable option value, though 33% less than under the usual real option approaches. We conclude with an analysis of static and dynamic hedges affecting risk and return profiles.

Realistic Power Plant Valuations - How to Use Cointegrated Spark Spreads

Henk Sjoerd Los, Cyriel de Jong and Hans van Dijken

KYOS Energy Consulting is an independent consultancy firm offering specialized advice on trading and risk management in energy markets. KYOS advises energy companies, end-users, financial institutions, policy makers and regulators. This article was partly published in World Power (2008).

Introduction

The combination of rising electricity demand with an aging production park requires continuous investments in new production capacity. And although countries worldwide have ambitious targets for green energy consumption, fossil-fired power plants will continue to play a key role in the coming years. RWE estimates that in Europe alone 400,000 MW of existing capacity has to be renewed, of which 170,000 MW from fossil-fired power plants. In the 2008 issue of World Power the authors investigated investments in wind production (De Jong and Van Dijken, 2008). Whereas investments in wind mills will be massive too, coal and gas fired power plants will remain the backbone of the world's electricity production for the next decades, and they are the subject of this article. This does not necessarily violate green energy targets, considering the possibilities of replacing fossil fuels with biofuels and of carbon capture and storage.
The need for investments may be clear, but each individual investment has to be justified before it can actually be made. If we assume that the price of a new gas plant is around € 700 per kW, it is easy to calculate that a 420 MW gas fired plant costs almost € 300 million. Investments in coal fired plants easily involve a multiple of this number, and these investments have to be earned back over a plant's lifetime.
The difficulty with estimating future income is the uncertainty about price levels in combination with uncertainty about asset behaviour. Will prices remain at this level? Will the availability of the power plant be according to expectations? Without doubt, today's expectations about future prices and plant performance will prove to be wrong. Therefore it is essential to have a clear picture of potential price scenarios, likely plant behaviour and hedging strategies. This combination provides a range of outcomes, which gives valuable insight into the total value distribution and the optimal dispatch and hedging strategy to follow.
In this article we describe how to overcome the most common pitfalls in power plant valuation. We explain how a realistic Monte Carlo price simulation framework can be built in line with a market's merit order, using a cointegration approach. We also show how plant characteristics can be incorporated in this framework. This approach is especially relevant for assets that are relatively flexible and located in the back of the supply stack. We will demonstrate that the extrinsic value, or flexibility value, of low efficiency (gas) plants is relatively high. And finally, we clarify the impact of asset-backed trading strategies on the actual cash flows.


Intrinsic valuation

The gross margin of a power plant is determined by the difference between the power price and the production costs, consisting of costs for fuel, CO2 emissions and variable operating costs. This margin is commonly denoted as the (clean) spark spread for gas-fired units and the (clean) dark spread for coal-fired units. Depending on the plant efficiency, the amount of fuel required to produce 1 MWh of electricity varies. A new CCGT (Combined Cycle Gas Turbine) with a 58%1 efficiency requires 1.7 MWh of gas, whereas an older unit with a 50% efficiency requires 2 MWh of gas. In the remainder of this paper we will refer to all spreads as 'spark spreads', without implying the discussion is limited to gas.
A traditional approach is to calculate the future spark spread levels and multiply these with a load factor of, say, 2,500 hours off-peak and 2,500 hours peakload. A Net Present Value (NPV) is obtained by discounting all spark spreads back to today while deducting all cost components and the initial investment. This approach is often combined with a scenario analysis, where prices are assumed to be relatively high or low over the complete evaluation period.
As a first improvement, more detailed forward curves for the relevant commodities should be constructed. Initially, the curves typically have a monthly granularity. Especially further out in time, the curve inevitably involves some (solid) guesswork. The monthly forward curves for the peak and offpeak spark spreads form the basis for the expected operation and the intrinsic valuation of a plant. Refining the power and gas curves with daily and hourly profiles improves the valuation further. In the end, the largest part of the power plant's capacity will be dispatched on an hourly basis. Consequently, hourly price curves are required to make the dispatch decision.
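To fix ideas, a minimal sketch of this hourly intrinsic calculation; the plant and curve inputs are hypothetical, and the gas emission factor `ef` (tCO2 per MWh of fuel) is an assumed typical value rather than a figure from the article:

```python
import numpy as np

def clean_spark_spread(power, gas, carbon, efficiency=0.58, ef=0.202):
    """Clean spark spread in EUR/MWh of electricity: power price minus
    fuel and carbon costs per MWh produced."""
    return power - gas / efficiency - ef / efficiency * carbon

def hourly_intrinsic_value(power, gas, carbon, capacity_mw=420.0):
    """Run at full capacity in every hour with a positive expected spread;
    power, gas, carbon are hourly forward-curve arrays (EUR/MWh, EUR/t)."""
    spread = clean_spark_spread(power, gas, carbon)
    return capacity_mw * np.sum(np.maximum(spread, 0.0))
```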

Price uncertainty and real option valuation

The hourly and daily forward curves may be treated as the best forecast of future spot price levels (if we leave aside risk premia). However, actual spot price levels will surely be different. On the one hand, this creates a risk, which may be reflected in a high discount rate. On the other hand, price variations offer opportunities for extra margin if the plant's dispatch and trading decisions can respond to them. To capture this uncertainty, it does not suffice to create high/medium/low price or spread scenarios. Actual market dynamics are far more diverse than that. For example, a period of low margins may be followed by a period of high margins in the same day, week, month or year. A plant operator will respond by reducing the production in the low spread period to minimize losses. At the same time, he will maximize production in the high spread periods. In fact, a flexible plant offers the ability to limit the downside and take full advantage of the upside. This is the basis for any real option approach and is actually the way plant owners make a large part of their asset-backed trading profits in the market place. Still, to many in the power industry this seems a financial trick rather than real value. Indeed, such an approach is sensitive to 'model error' or 'analyst bias'. It easily leads to an overestimation of true plant value. First, approaches which treat the plant as a strip of spark spread call options ignore the real-life restrictions on plant flexibility; restrictions may have either a technical or a contractual nature. Second, approaches which are directly or indirectly based on unrealistic spark spread levels suffer from the same overestimation bias.

Correlated returns: unrealistic spreads

To capture the dynamics between commodities and over time, analysts rely on Monte Carlo price simulations. This covers a wide range of model implementations, and we will demonstrate that the usual approaches exaggerate actual variations in spark spread levels.
The most common approach to combine multiple commodities in a Monte Carlo simulation model is applying a correlation matrix between the different commodities. This includes Principal Component Analysis (PCA). A correlation matrix captures the degree to which prices move together from one day to the next; it is derived from daily (or weekly) price returns. A correlation matrix, in combination with market volatilities, describes actual price behaviour quite well for relatively short horizons, for example in Value-at-Risk models. However, extensive research and practical experience lead to the insight that a correlation matrix is too weak to maintain the fundamental relationships between commodities over a longer period. As a result, very large or negative spark spreads will appear. These extreme scenarios are not possible in reality though, as they would mean that either no power plant makes money or all power plants make huge amounts of money. So, whereas an intrinsic valuation disregards the value of plant flexibility, the usual Monte Carlo simulation approach of correlated returns results in an overestimation of plant flexibility.
Another approach is not to simulate the individual commodities, but to simulate the spark spread directly. There are clear benefits to this approach. The spread can fluctuate between certain 'logical' boundaries, with the result that the (undesired) extreme outcomes are avoided.

1 Lower heating value


On the other hand, information is lost about the movement of the underlying power and fuel prices, which is relevant, for example, for hedging decisions. This will lead to various practical problems, for instance when combining a dedicated gas contract with the power plant. In short, we feel that the most common approaches are inadequate solutions when applied to power plant evaluation projects. The alternative is the explicit incorporation of fundamental price relationships. This approach has the benefit that spark spreads remain at logical levels, while information about underlying prices is not lost.
Power prices are the result of the movement in underlying fuel and carbon prices. This relationship can be captured with cointegration: power prices are cointegrated with the price movements of fuels (mainly coal and gas) and carbon. With cointegration, power prices are fundamentally driven by dynamic market marginal costs in peak and off-peak and will react properly when commodities are substituted (for instance, a change from coal to natural gas in summer periods). Actual commodity prices may temporarily deviate from the fundamental relationships, but not for too long and not by too much.
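A sketch of the diagnostic half of this idea, in the spirit of Engle and Granger (1987): regress power price levels on fuel and carbon levels and test whether the deviation from that relation is stationary. The series names are hypothetical, and the plain ADF p-value is only indicative, since residual-based cointegration tests strictly require their own critical values.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def long_run_relation(power, gas, carbon):
    """Estimate the 'stable' level relationship and test its residual;
    a stationary residual supports cointegration."""
    X = sm.add_constant(np.column_stack([gas, carbon]))
    fit = sm.OLS(power, X).fit()          # long-run level regression
    adf_stat, p_value = adfuller(fit.resid)[:2]
    return fit.params, adf_stat, p_value
```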

Cointegrated forward and spot price simulations

KYOS started the development of a proprietary price simulation model for energy commodities several years ago, part of which has been published in the literature (see e.g. De Jong, 2007). It is in use among several leading commodity trading companies. Fuel and CO2 prices are simulated first, with power prices following. The model captures the many shapes that forward curves display over their lifetime. They may, for example, turn from contango (future prices higher than today) into backwardation (future prices lower than today). To capture these dynamics we use a multi-factor model to simulate the returns of the monthly forward prices:

r_i(t,T) = φ_i(t,T)·[γ_{i,1}·ε_{i,1}(t) + (1 − γ_{i,1})·η_1(t,T)] + h_i(t,T)·[γ_{i,2}·ε_{i,2}(t) + (1 − γ_{i,2})·η_2(t,T)] + ε_{i,3}(t)

where trading days are denoted by t and maturities are denoted by T. The five factors are:

ε_{i,j}(t): return of factor j for commodity i at time t (for j = 1, 2, 3).
η_j(t,T): return of factor j, applied to the maturity starting at T (for j = 1, 2).

Primary parameters and functions are:

φ_i(t,T): variance multiplier to factor 1 for commodity i at time t and maturity T.
h_i(t,T): variance multiplier to factor 2 for commodity i at time t and maturity T.
γ_{i,j}: maturity dependent part of the factor j return for commodity i (for j = 1, 2).

The simulated forward price returns are a weighted average of short term factor returns, long term factor returns and seasonal factor returns. The model parameters can be accurately estimated on the basis of a limited set of historical data. The major parameters capture general level shifts, shifts from contango into backwardation and shifts in the size of the winter-summer spread (for power and gas). The volatilities and correlations of the different maturities along the curve can be calibrated to properly match the historical price data, both between different maturities and between different commodities. This is especially important when hedging strategies are evaluated. The model also contains spiky ('regime-switching') power and gas spot prices, mean-reverting to forward price levels, and with appropriate random hourly profiles.
The model as described above produces realistic price simulations for individual commodities. At first sight, it also nicely ties commodities together through correlations. Still, we experienced that it does not produce realistic spreads between commodities, whether it be oil-gas spreads, regional gas spreads or power-fuel spreads. Yet spreads are actually the most important input to most valuations, including power plant valuations. We solved the issue through cointegration, a Nobel prize winning econometric innovation (Engle and Granger, 1987). For spark and dark spreads it is complemented with the explicit incorporation of the merit order. Essentially, the cointegration approach captures the correlation between price levels rather than (only) price returns. Intuitively, it uses a regression to find the 'stable' relationship between commodity prices and then assumes that 'actual' commodity prices move around this stable level. The concept is very similar to a spot price mean-reverting around a forward price level. The primary challenge is to align the approach with the return-driven movements of the forward curve, something we learned to solve over time.
In order to bring this theoretical explanation to a practical level, in the next section we consider a case study involving a power plant over a three year period.
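To make the mechanics concrete before turning to the case study, here is a toy version of a cointegrated power price simulation: fuels follow lognormal random walks, while power equals the fundamental level implied by fuel and carbon plus a mean-reverting deviation. All parameters are illustrative assumptions; the KYOS model itself is far richer (multi-factor curves, seasonality, spikes).

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, dt = 750, 1.0 / 250.0
beta_gas, beta_co2 = 1.7, 0.35      # long-run level relation (assumed)
kappa, sigma_e = 15.0, 4.0          # mean reversion and vol of the deviation
sig_gas, sig_co2 = 0.30, 0.25       # fuel and carbon volatilities (assumed)

gas, co2, eps = 25.0, 15.0, 0.0     # initial levels (EUR/MWh, EUR/t)
power_path = []
for _ in range(n_steps):
    gas *= np.exp(sig_gas * np.sqrt(dt) * rng.standard_normal()
                  - 0.5 * sig_gas**2 * dt)
    co2 *= np.exp(sig_co2 * np.sqrt(dt) * rng.standard_normal()
                  - 0.5 * sig_co2**2 * dt)
    # the deviation mean-reverts, so spark spreads cannot drift without bound
    eps += -kappa * eps * dt + sigma_e * np.sqrt(dt) * rng.standard_normal()
    power_path.append(beta_gas * gas + beta_co2 * co2 + eps)
```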

Case study

We consider a new gas-fired power plant in Germany. With a 58.5% efficiency the plant produces a maximum of 420 MW; at the minimum stable level it produces 170 MW (47% efficiency). The plant has fixed annual Operations & Maintenance (O&M) costs of € 6.3 mln. We disregard discounting for simplicity.
If a plant is dispatched economically, it produces when its spark spread, the gross marginal revenue, is positive and does not produce when the spark spread is negative. Simple as this principle seems, technical, contractual and market restrictions prevent plant owners from dispatching exactly along it. Actual dispatching is an optimization challenge, involving issues such as ramp rates, minimum run-times, plant trips, maintenance and production-dependent heat rates. Optimal dispatch decisions can be derived with various mathematical techniques. KYOS generally works with dynamic programming techniques, as sketched below.
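A deliberately small sketch of such a dynamic programme, covering only minimum run/off times and start costs (the real model adds ramp rates, trips, maintenance and more); all names and values are hypothetical.

```python
import numpy as np

def dispatch_value(margin, start_cost, min_on=24, min_off=24):
    """Backward induction over hours. margin[t] is the gross margin (EUR)
    if the plant runs in hour t. V[on, k] is the value-to-go when k+1 hours
    have been completed in the current mode (on = 1 means running)."""
    cap = max(min_on, min_off)
    V = np.zeros((2, cap))
    for t in range(len(margin) - 1, -1, -1):
        V_new = np.empty_like(V)
        for on in (0, 1):
            req = min_on if on else min_off
            for k in range(cap):
                # option 1: stay in the current mode during hour t
                best = on * margin[t] + V[on, min(k + 1, cap - 1)]
                if k + 1 >= req:           # minimum on/off time met
                    other = 1 - on         # option 2: switch mode
                    switch = other * (margin[t] - start_cost) + V[other, 0]
                    best = max(best, switch)
                V_new[on, k] = best
        V = V_new
    return V[0, cap - 1]                   # start offline, free to start

# Valuing one simulated path: dispatch_value(hourly_spread * 420, 12_600)
```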

Case study results

We evaluate the plant over the period 2010-2012 based on forward prices at the end of March 2009.

• Traditional approach, no constraints
With the traditional approach, power is constantly produced during 2,500 peak and 2,500 offpeak hours. Taking fixed cost components of € 6.3 mln/year into consideration, this leads to an average annual value of € 20.4 mln.

• Monthly intrinsic valuation, no constraints
A more detailed monthly curve shows that the winter periods have the highest spark spreads, where the high power forward prices compensate for the also high gas prices. In the 36 months, the plant produces only peakload, generating an average spark spread of € 31.70/MWh. This generates an annual value of € 35.2 mln. If the company could trade all monthly periods individually, this would ensure a minimum value the company can lock in on the forward market.

• Hourly intrinsic valuation, no constraints
In most of the months, the expected hourly spark spread is negative in some hours, but positive in others. Assuming the plant has maximum ramping flexibility and is fully traded on the spot market, the average expected value totals € 43.0 mln. This is more than the monthly intrinsic value, because of the larger expected variations in the spot market than in the forward market. However, prices will not follow the current curve for sure. This creates risk, part of which can be hedged on the forward market, but also additional profit opportunities.

• Simulations with cointegration, no constraints
Based on our price simulation model we calculate an optimal dispatch schedule per simulation path. This yields a value per simulation, with an average of € 53.8 mln, but with a large standard deviation of € 8 mln. As we will analyze later, the uncertainty in outcomes may be partially hedged on the forward market, but some risk certainly remains. The € 10.7 mln difference with the hourly intrinsic value is labeled the option value, extrinsic value or flexibility value.

• Variable O&M and start costs
Now we make the case gradually more realistic by adding variable costs. They depend on the number of operating hours (inspections, overhauls) or on the number of starts (extra fuel, extra maintenance). With variable costs per production hour of 1.50 €/MWh, the plant value reduces by € 2.9 mln.

• Minimum runtimes and start costs
In practice, there are no fossil-fired plants that are switched on and off from one hour to the next. Actual plant operation is constrained by minimum times to be on or off, which we set at 24 hours each. The impact on plant value is € 6.5 mln. Taking into account costs per start of € 12,600 plus 2,000 GJ of gas, the plant value reduces further by € 2.0 mln.

• Maintenance and trips
Planned maintenance is the time required for inspections and planned repairs. For a longer term analysis it is worth incorporating an inspection scheme with both smaller inspections and major overhauls. Assuming the plant will be in maintenance for 20 days per year, the plant value is reduced by € 2.7 mln. Unplanned outages (trips) have more effects than simply reducing the generated power production by a single percentage. A trip can occur at the start of a production period, but also at the end, where the financial consequences are limited. Furthermore, after a trip, a decision needs to be made whether the plant can and should start again. With an outage rate of 6%, in our example the plant value is reduced by € 2.8 mln.

"Cointegration reduces the plant value to 67% of the Monte Carlo approach"

• Seasonal effects and plant degradation
The outside temperature influences the capacity of gas-fired power plants. In the winter, with colder temperatures, more oxygen results in higher capacities than in the summer. The impact of 5% more capacity in favorable periods (winters tend to have larger spark spreads) and 5% less capacity in less favorable periods (summer) leads to a small increase of € 0.2 mln. During its lifetime a power plant will lose some of its efficiency. Although maintenance reduces the consequences, degradation may be expected especially in the first period after commissioning. An average efficiency of 58% leads to a decrease of € 0.5 mln.

• Contractual: take-or-pay obligation for natural gas
Besides the physical constraints there can also be contractual limitations to fully exploiting the plant flexibility. A dedicated gas contract with a take-or-pay clause restricts the flexibility of the power plant, as the gas cannot be transported elsewhere. In our case, a take-or-pay obligation is translated into a minimum number of operating hours of 5,000 in the first year. As a take-or-pay contract is usually aligned with the expected consumption, the impact is limited to a decrease of € 0.7 mln.

Besides the described limitations, more constraints could be applied. An example is the ramp rate, although this is more of a limitation for coal plants. Also, the delivery of heat could impose must-run obligations for specific plants. Environmental constraints like maximum NOx emissions would also limit the flexibility, similar to take-or-pay contracts.
To highlight the effect of cointegration, a comparison is made with the full simulation model, but with the cointegration switched off. The lack of cointegration causes a value increase from € 35.9 mln to € 53.9 mln. So, cointegration reduces the plant value to 67% of the 'normal' Monte Carlo approach. This reduction is solely attributable to the price scenarios, where spark spreads become more extreme. This becomes an even larger problem when the valuation horizon increases.

Comparing option values

The option or flexibility value of a power plant is the difference between the intrinsic value, derived from a static curve (hourly, monthly or something else), and the average value over the simulations. This value is realized by adapting the production profile to changed price scenarios: if spreads turn positive, the plant is switched on; if spreads turn negative, the plant is switched off. With this behavior, profits are added in positive market circumstances, while losses are avoided by stopping production in negative market circumstances. New-build plants with a relatively high efficiency produce in more hours than older, less efficient plants. This impacts the option value: if a plant is already operating, there is the possibility to reduce output or stop producing, while if a plant is not yet running, there is the possibility to switch on. It is therefore important to realize that the flexibility value is highly dependent on the power plant characteristics and the degree to which the plant is already 'in-the-money' (i.e. profitable to run). For a new plant, the flexibility value is limited compared to the relatively high intrinsic value. But for an older plant the flexibility value has a larger influence on the total plant value. This is illustrated by comparing our reference plant (58% efficiency) with a 10 year old power plant (54% efficiency). Note that the flexibility value of the plants is relatively high, as a result of the chosen forward curves with low spreads.

                            Intrinsic    Flexibility  Total        Power     Gas       Carbon     Starts  OH
                            [mln €/yr]   [mln €/yr]   [mln €/yr]   [GWh/yr]  [GWh/yr]  [kton/yr]  [#/yr]  [#/yr]
Traditional                 20.4         0.0          20.4         2,100     3,621     740        N/A     5,000
Monthly shape               35.2         0.0          35.2         1,379     2,377     486        N/A     3,283
Hourly shape                43.0         0.0          43.0         2,059     3,520     720        417     4,902
Simulations                 43.0         10.7         53.8         2,003     3,423     700        343     4,769
Variable O&M                40.0         10.8         50.8         1,919     3,280     671        345     4,569
Min runtimes                32.6         11.8         44.3         2,008     3,488     713        52      5,232
Start costs                 30.2         12.1         42.3         2,025     3,545     725        44      5,294
Maintenance                 28.1         11.5         39.6         1,907     3,339     683        42      4,987
Unplanned Maintenance       25.8         11.0         36.9         1,800     3,153     645        44      4,706
Seasonality                 26.0         11.1         37.1         1,803     3,159     646        44      4,703
Degradation                 25.2         11.4         36.5         1,782     3,148     644        44      4,661
ALL, incl ToP               25.9         10.0         35.9         1,894     3,341     683        43      4,935
ALL, but without cointegr.  25.9         28.0         53.9         1,630     2,872     587        30      4,252


Hedging strategies

It is common to sell a majority of the expected plant production in the forward market, while at the same time purchasing forward the required fuels and CO2 credits. This is called hedging. Hedging a power plant serves two main purposes:

1 Risk reduction. First, hedging decreases the dependency on the price levels of highly volatile spot markets. In relation to this, hedging reduces potential liquidity issues on spot markets.

2 Profit optimization. Forward spark and dark spreads vary over time. Dynamic trading strategies can increase value by selling more power against high spreads and selling less power (or buying it back) against low spreads.

In this case study the hedge volume is defined as the expected production over the evaluation period, i.e. the average volume over all scenarios. It can be verified that this volume hedge is very close to the concept of a delta hedge. To begin with, the spark spreads are sold forward in March 2009 using calendar forward contracts for delivery in 2010, 2011 and 2012 for peakload power, natural gas and CO2. We assume no transaction costs. If the hedge is not adapted over the lifetime, this is called a static hedge. The result of the static hedge is illustrated in the figure. Where a spot strategy leads to a wide value distribution, hedging reduces the bandwidth. Scenarios with high spot spreads yield a loss on the hedge, whereas scenarios with low spot spreads yield a profit on the hedge. This dampens the total profit and loss on the spot market and clarifies that hedging reduces the risk profile.
In reality the expected production volume, which drives our hedge volume, varies with a change in spark spreads. Re-hedging on the basis of this information is called dynamic hedging. Dynamic hedging leads to a further narrowing of the value distribution. And more importantly, a higher profit is expected as more production is sold against higher spark spreads.
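In terms of simulation output, the static hedge result per scenario reduces to a few lines (a sketch; the array names are hypothetical stand-ins for the model's per-scenario results):

```python
import numpy as np

def static_hedge_results(spot_margin, spot_spread, fwd_spread, hedge_mwh):
    """Total result per scenario: optimised spot margin plus the P&L of
    spark spreads sold forward at fwd_spread and settled against the
    realised average spot spread. High-spread scenarios lose on the hedge
    and vice versa, which narrows the total distribution."""
    total = spot_margin + hedge_mwh * (fwd_spread - spot_spread)
    return total.mean(), total.std()
```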

Conclusion

The energy industry is facing important investment decisions, shaping the power production portfolio for the next decades. Different plant types offer different degrees of flexibility to respond to future price developments. An important consideration in the decision process is therefore an accurate assessment of the value to assign to this flexibility. This article demonstrates how the concepts of cointegration and dynamic programming can help to avoid a bias towards either very flexible, yet expensive, or very inflexible power plants.

References

de Jong, C. and van Dijken, H. (2008). Effective pricing of wind power, World Power.

de Jong, C. (2007). The nature of power spikes: a regime-switch approach, Studies in Nonlinear Dynamics and Econometrics.

Engle, R. and Granger, C. (1987). Co-integration and error correction: Representation, estimation and testing, Econometrica, 251-276.

This work studies a contingent claim pricing and hedging problem in incomplete markets, using backward stochastic differential equation (BSDE) theory. In what follows, we sketch the pricing problem in complete vs incomplete markets in a simple setting, and show why BSDEs provide a natural framework for this issue from a mathematical point of view. Then, we introduce the principle of risk indifference pricing and summarize our results. Concerning the literature1, we use results in the theory of BSDEs (El Karoui et al., 1997; Hamadène and Lepeltier, 1995) to examine pricing and hedging problems in a risk indifference framework (Øksendal and Sulem, 2008; Xu, 2005).

Dynamic Risk Indifference Pricing and Hedging in Incomplete Markets

Xavier De Scheemaekere

is an F.R.S.-F.N.R.S. research fellow and PhD student in finance at the Solvay Brussels School of Economics and Management (Université Libre de Bruxelles). This article is a summary of the working paper available online at http://ideas.repec.org/p/sol/wpaper/08-027.html.

Complete vs incomplete markets

In a complete market, there is a unique dynamic arbitrage-free pricing rule for a contract with payoff G at time t=T (think, e.g., of a European call option). This price is the conditional expectation of the discounted payoff G with respect to the so-called (unique) equivalent martingale measure (EMM). This fundamental result appears naturally when the problem is formulated in terms of BSDEs. Without loss of generality, assume the interest rate is zero and the price of the riskless asset is constant at 1. Further, assume the risky asset (say, the stock price) is described by the following continuous stochastic process:

dS(t)/S(t) = μ dt + σ dW(t),  S(0) > 0,  t ∈ [0, T],   (1)

where, for simplicity, μ and σ are two constants (different from zero) and W is a one-dimensional Brownian motion. In complete markets, every contingent claim can be replicated by buying or selling the underlying risky asset and the riskless asset in appropriate proportions. These quantities form a so-called wealth process (or portfolio process) that is assumed to be continuously rebalanced in time in a self-financing way, i.e., without adding new cash. The arbitrage-free price (at time t=0) of the contingent claim G is the initial value of the wealth process whose terminal value equals G. This portfolio process is called the replicating portfolio. The dynamics of this portfolio X(t) = X^{x,π}(t) is

dX(t) = πt dS(t)/S(t) = πt (μ dt + σ dW(t)),  t ∈ [0, T],
X(T) = G,   (2)

where πt represents the amount invested in the risky asset at time t, and where the initial value is X(0) = x. If we denote by pt(G) the price at time t of the contingent claim G, we have that

pt(G) = X^{x,π}(t).

In particular, at time zero, we get p0(G) = x. We can rewrite equation (2) so as to include the market price of risk, θ = (μ − r)/σ (remember that the interest rate r is zero):

dX(t) = πt σθ dt + πt σ dW(t),  t ∈ [0, T],
X(T) = G.

Making the change of variable πt σ = Z(t) yields

dX(t) = Z(t)θ dt + Z(t) dW(t),  t ∈ [0, T],
X(T) = G.   (3)

Equation (3) is a one-dimensional linear BSDE, i.e., a stochastic differential equation (SDE) with a final condition.

1 We refer to the original paper for more details.



Since the work of Pardoux and Peng (1990), who proved a general existence and uniqueness result for such equations in the multi-dimensional non-linear case, there is a general mathematical theory for BSDEs, which has proved to be very useful for many applications. In mathematical finance, in particular, such equations arise naturally and, therefore, BSDE theory is very fruitful. For example, it gives the proper measurability and integrability conditions on X(t) and Z(t) for (3) to have a unique solution, i.e., for the existence of a unique replicating portfolio. Furthermore, proposition 2.2 in El Karoui et al. (1997) makes it possible to write the solution X(t) of (3) as follows:

X(t) = EQ[G | Ft],   (4)

where dQ = K(T) dP and K(T) is defined by the forward linear SDE

dK(t) = −θ K(t) dW(t),  K(0) = 1.   (5)

Because σ is invertible (being a constant different from zero), the market price of risk θ exists and is unique, and the above equations are well defined. Moreover, (4) is exactly the risk-neutral valuation formula, because (5) defines the unique probability measure Q as the risk-neutral probability measure (also called EMM). In other words, the arbitrage-free price pt(G) = X(t) is the conditional expectation at time t of the (discounted) payoff G with respect to the unique EMM. This fundamental result appears naturally in the framework of BSDEs. As we have seen, the complete market situation relies on the fact that σ is invertible; this implies that the market price of risk is unique and well-defined, which, in turn, makes it possible to determine the unique EMM to be used for pricing.
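As an illustration of formulas (4)-(5), the following Monte Carlo sketch (with hypothetical parameter values) prices a European call by weighting its payoff with the Radon-Nikodym density K(T) simulated under the real-world measure P, and compares the result with the Black-Scholes price for r = 0.

import numpy as np
from math import log, sqrt
from scipy.stats import norm

rng = np.random.default_rng(2)
# Hypothetical parameters: one risky asset, one Brownian motion, zero interest rate.
S0, mu, sigma, T, strike, n = 1.0, 0.10, 0.25, 1.0, 1.0, 1_000_000
theta = mu / sigma                          # market price of risk (r = 0)

W_T = sqrt(T) * rng.standard_normal(n)      # terminal Brownian motion under P
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
G = np.maximum(S_T - strike, 0.0)           # payoff of a European call

# Formulas (4)-(5): price = E_P[K(T) G], with K(T) = exp(-theta W(T) - theta^2 T / 2).
K_T = np.exp(-theta * W_T - 0.5 * theta**2 * T)
mc_price = np.mean(K_T * G)

# Black-Scholes price with r = 0, for comparison.
d1 = (log(S0 / strike) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
bs_price = S0 * norm.cdf(d1) - strike * norm.cdf(d1 - sigma * sqrt(T))
print(f"E_P[K(T) G] = {mc_price:.4f}   Black-Scholes = {bs_price:.4f}")

Both numbers agree up to Monte Carlo noise, which is precisely the statement that dQ = K(T)dP turns (4) into the classical risk-neutral valuation formula.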

If the uncertainty were described by two independent Brownian motions, completeness would imply that there are two non-redundant risky assets. In that case, σ would be an invertible matrix and θ would be (well) defined as a unique two-dimensional vector. In reality, there is no doubt that markets are incomplete. This raises the fundamental question of how to price (and hedge) in incomplete markets. In the paper, we consider that the incompleteness comes from the illiquidity of the underlying risky assets vis-à-vis the dimension of uncertainty. More precisely, we assume that the number of risky assets is strictly smaller than the number of independent Brownian motions. For simplicity, consider the case where there is one risky asset and where the Brownian motion is two-dimensional. Equation (1) then becomes

dS(t)/S(t) = μ dt + σ1 dW1(t) + σ2 dW2(t),

where μ is a constant and σ* = (σ1, σ2) (* denotes the transpose) is a two-dimensional vector, which is of course not invertible. The market price of risk is given by the vector θ* = (θ1, θ2) that satisfies θ*σ = μ, i.e.,

σ1θ1 + σ2θ2 = μ.

As a consequence, θ is not uniquely defined and there are infinitely many EMMs. Hence, there is no unique method for pricing a given contingent claim in an arbitrage-free way. Arbitrage-free pricing thus leads to an interval of prices, and to different buyer's and seller's prices. In order to get a price (or, at least, a "reasonable" interval of prices), one must introduce some optimality criterion.
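This non-uniqueness is easy to see numerically. In the sketch below (all parameters hypothetical), every choice of ψ = θ2 satisfying the constraint yields a valid EMM; for a claim written on the non-traded second Brownian motion, whose drift under Q^ψ becomes −ψ, each EMM produces a different arbitrage-free price.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma1, sigma2, T, strike, n = 0.08, 0.20, 0.15, 1.0, 1.0, 500_000

# Every theta = (theta1, theta2) with sigma1*theta1 + sigma2*theta2 = mu defines an EMM.
# Parametrize the family by the free component psi := theta2.
for psi in (-0.5, 0.0, 0.5):
    theta1 = (mu - sigma2 * psi) / sigma1          # fixed by the constraint
    # Under Q^psi the (non-traded) second Brownian motion acquires drift -psi,
    # so W2(T) ~ N(-psi*T, T) under Q^psi.
    w2_T = -psi * T + np.sqrt(T) * rng.standard_normal(n)
    price = np.maximum(np.exp(w2_T) - strike, 0.0).mean()   # claim G = (e^{W2(T)} - K)^+
    print(f"psi = {psi:+.1f} (theta1 = {theta1:+.2f}):  E_Q[G] = {price:.4f}")

A claim on the traded asset alone would be priced identically under every such EMM; the interval of prices opens up only for payoffs exposed to the non-traded source of risk.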

Risk indifference pricing

In this work, the pricing formula relies on the risk indifference principle, which is a natural extension of the idea of pricing and hedging in complete markets. Indeed, the extension of perfect dynamic hedging into an incomplete market would mean that the trader buys or sells the option for an amount such that his risk exposure will not increase at expiration because of active hedging. The (seller's) risk indifference price is the initial payment that makes the risk involved for the seller of a contract equal to the risk involved if the contract is not sold, with no initial payment. Formally, if ρt is a dynamic convex risk measure (see Detlefsen and Scandolo (2005) and the references therein), then the dynamic risk indifference price at time t, pt, is defined by

inf_{π∈Π} ρt( X^{x+pt,π}(T) − G ) = inf_{π∈Π} ρt( X^{x,π}(T) ).   (6)

The left-hand side of (6) describes the situation where an agent, who has sold a contract with payoff G at time T, tries to minimize his terminal risk, i.e., the risk associated with the final value of his wealth process (with initial value x + pt) minus his liability G, over the set of admissible portfolios. The right-hand side describes the situation where no contract is sold and where the agent simply minimizes the risk associated with the terminal value of his wealth process. The risk indifference price pt is such that the agent is indifferent between his optimal risk if a transaction occurs and his optimal risk if no transaction occurs, at all times.

"Arbitrage-free pricing leads to an interval of prices"
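A degenerate special case shows the mechanics of definition (6). If no hedging is allowed at all (Π = {0}, zero interest rate) and ρ is the entropic risk measure, (6) can be solved in closed form; the sketch below (with a hypothetical payoff on a non-traded factor) computes the resulting price. The paper, of course, treats the general dynamic problem with trading.

import numpy as np

rng = np.random.default_rng(1)
# Payoff on a non-traded factor (hypothetical): a call-type claim max(Z, 0).
G = np.maximum(rng.standard_normal(400_000), 0.0)

# Entropic (exponential) risk measure, static one-period version:
#   rho(X) = (1/gamma) * log E[exp(-gamma X)].
# With no hedging opportunities (Pi = {0}), equation (6) reduces to
#   rho(x + p - G) = rho(x),  whose solution is  p = (1/gamma) * log E[exp(gamma G)].
for gamma in (0.1, 0.5, 1.0):
    p = np.log(np.mean(np.exp(gamma * G))) / gamma
    print(f"gamma = {gamma:3.1f}:  risk indifference price p = {p:.4f}")

print(f"expected payoff E[G] = {G.mean():.4f}  (p tends to E[G] as gamma tends to 0)")

The price increases with the risk aversion γ and always exceeds the expected payoff, illustrating how the choice of risk measure, here through γ, pins down the price.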

The results

In the paper, we use BSDE theory to solve problem (6). The methodology is straightforward and it provides explicit formulas for both the solution of the risk indifference pricing and hedging problem, in a general framework. We show that risk indifference pricing leads to reasonable price intervals, compared to other approaches. In fact, the size of the price interval directly depends on the way risk is measured. In other words, different ways of measuring risk lead to different price intervals. Our approach explicitly accounts for this dependence, showing that the choice of a specific convex risk measure leads to the choice of an EMM for pricing. For a given contingent claim, the comparison between different price intervals, depending on different risk measures, would provide information on the risk sensitivity of the product in question. This could be useful from a risk management perspective.

References

Detlefsen, K. and Scandolo, G. (2005). Conditional and dynamic convex risk measures, Finance and Stochastics, 9, 539-561.

El Karoui, N., Peng, S. and Quenez, M.C. (1997). Backward stochastic differential equations in finance, Mathematical Finance, 7, 1-71.

Hamadène, S. and Lepeltier, J.P. (1995). Zero-sum stochastic differential games and backward equations, Systems & Control Letters, 24, 259-263.

Øksendal, B. and Sulem, A. (2008). Risk indifference pricing in jump diffusion markets, Mathematical Finance, to appear.

Pardoux, E. and Peng, S. (1990). Adapted solutions of a backward stochastic differential equation, Systems and Control Letters, 14, 55-61.

Xu, M. (2005). Risk measure pricing and hedging in incomplete markets, Annals of Finance, 2(1), 51-71.


Dynamic traffic management is an important approach to minimise the negative effects of increasing congestion. Measures such as ramp metering and route information, but also traditional traffic signal control, are used. The focus in designing traffic control plans has always been on local control. However, there is a tendency towards a more centralised and network-wide approach to traffic control. The interaction between traffic management measures and the route choice behaviour of road users then becomes an important aspect of the control strategy design. The work described in this article shows that anticipatory control can contribute to a better use of the infrastructure in relation to policy objectives.

Integrated Anticipatory Control of Road Networks

Henk Taale

is a senior consultant employed by the Centre for Transport and Navigation, a department of Rijkswaterstaat. He has 18 years of experience in the fields of traffic management, traffic models and evaluation. He obtained a Master of Science degree in Applied Mathematics from Delft University of Technology in 1991 and finished his PhD on the subject of anticipatory control of road networks in 2008. Currently, he is responsible for the design of a national monitoring and evaluation plan and for ITS Edulab, a cooperation between Rijkswaterstaat and the Delft University of Technology. He is also a member of the Expert Centre for Traffic Management, a cooperation between Rijkswaterstaat and TNO.

Introduction

In The Netherlands transport and traffic policy relies heavily on traffic management. Building new roads is either too expensive or takes too much time due to procedures related to spatial and environmental conditions. It will be difficult to implement road pricing in the coming years for technical and political reasons, so for the Dutch Ministry of Transport, Public Works and Water Management (2004) traffic management is the key direction in which solutions for the increasing congestion problems have to be found. The reason for this is that traffic management is faster to implement and faces less resistance than the other solution directions. This has in fact been the situation since the 1990s. From 1989 on, a lot of traffic management measures were implemented, varying from a motorway traffic management system and ramp metering systems to overtaking prohibitions for trucks, peak-hour lanes and special rush-hour teams of the traffic police. In a recent policy document the Dutch Ministry of Transport, Public Works and Water Management (2008) estimates that traffic management reduced the increase of congestion (measured in vehicle hours delay) by 25% during the years 1996-2005. In most cases traffic management in The Netherlands is used only on a local level. It lacks an integrated and network-wide approach. The main reason for this is that different network types (e.g. motorways and urban roads) are operated and maintained by different road managers. In practice these road managers are only responsible for their own part of the network and proper communication and cooperation are mostly lacking. To deal with this, The Netherlands has adopted a different approach, described in the Handbook Sustainable Traffic Management (Rijkswaterstaat 2003). The handbook gives a step-by-step method that enables policy makers and traffic engineers to translate policy objectives into specific measures. The method consists of clearly defined steps that can be summarised as: define policy objectives, assess the current situation, determine bottlenecks and create solutions. The step-by-step plan helps to develop a network vision based on policy objectives, shared by all participating stakeholders. In addition, the handbook provides the stakeholders with a first indication of the measures required to achieve effective traffic management in line with the shared vision. In order to better assess the effects of the solutions, the Regional Traffic Management Explorer (RTME) was developed. This sketch and calculation tool supports the steps from the handbook and makes it possible to determine the effects of proposed traffic management services and measures. The effects can then be compared to the formulated policy objectives or other sets of measures. For more information on the method, the RTME and its applications, the reader is referred to Taale et al. (2004) and to Taale and Westerman (2005).


Dynamic Traffic Assignment

To be able to calculate the effectiveness of traffic management, the Regional Traffic Management Explorer (RTME) uses a dynamic traffic assignment (DTA) model. Traffic assignment is concerned with the distribution of the traffic demand among the available routes for every origin-destination pair. It is called dynamic because the distribution takes into account the fact that traffic demand and the traffic situation in the network change over time. The model itself consists of a control module, an assignment module and a network-loading module, which are integrated in a framework. The framework is shown in figure 1. After initialisation traffic control is optimised, then the network is loaded with the traffic demand to calculate the traffic situation, and this situation is used to come to a new assignment of traffic on the available routes. This process iterates until it converges to a traffic equilibrium.

The dynamic traffic assignment (DTA) module contains three different assignment methods: deterministic, stochastic and system optimal. A deterministic assignment assumes that all travellers have perfect knowledge about the traffic situation in the network and therefore choose the route that is best for them. In a stochastic assignment travellers do not have perfect knowledge and choose the route that they perceive to be best. This type of assignment is the most realistic one and is used for the case studies. In a system optimal assignment everybody chooses the route that is best for the network as a whole. It is a kind of benchmark with which the results of the other assignments can be compared. All assignment methods are route based. That means that they distribute the traffic among the available routes for a certain origin-destination relation. Therefore, route searching is important. The route enumeration process searches for the k shortest routes using a Monte Carlo approach, with a stochastic variation of the free-flow link travel times and Dijkstra's shortest path algorithm (a sketch of this idea follows below). The dynamic network-loading (DNL) model uses travel time functions to propagate traffic through the network. For different link types (normal links, signal-controlled links, roundabout links and priority links) different functions are used. The travel time is used to determine the outflow of links and with that the inflow of downstream links. At decision nodes traffic is distributed from the incoming to the outgoing links according to the splitting rates, which are calculated from the route flows using the travel times. Congestion is always caused by a capacity restriction and the resulting queue propagates upstream and horizontally, which means that blocking back is taken into account. The route travel times (needed for the assignment) are calculated from the link travel times using a trajectory method. The DTA and DNL models are calibrated and validated for a motorway bottleneck and for a network with motorways and urban roads. For both situations real-life data is used to calibrate parameters and to see whether model results and data are comparable. Although comments can be made concerning the data and the method of comparison, it appears that the DNL model is capable of simulating bottlenecks fairly accurately, and that the combination of the DTA and DNL models is capable of simulating medium-sized networks with good results.

"Integrated and anticipatory traffic management is the next step towards real network traffic

management"

Figure 1: Framework for the DTA model (initialisation, optimisation of control plans, dynamic network loading and dynamic traffic assignment in an iterative loop)


In figure 2 the results for the motorway bottleneck are shown. The figure shows the speeds over time and space, measurements on the left and simulated values on the right. It is clear that the model does not produce the shock wave pattern as measured in the data. From the plots it can also be seen that the congestion in the model starts earlier and takes more time to dissolve.

Integrated Anticipatory Control

Both the assignment and network-loading modules are part of a framework for integrated anticipatory control. Integrated control means that the network is considered to be one multi-level network, consisting of motorways and urban roads. Anticipatory control means taking into account not only the current, but also future traffic conditions. For these future traffic conditions the focus is on the long-term behaviour of road users, such as route choice and choice of departure time. Using game theory, it can be shown that traditional, local traffic control is related to the Nash game or Cournot game, in which each player reacts to the moves of the other players. Anticipatory control is related to the Stackelberg game, in which one or more players can anticipate the moves of other players if they have some knowledge about how these players react. In the research described in this article, the question was answered how traffic management should be designed and optimised and whether it is beneficial to anticipate route choice behaviour. To answer these questions, the framework from figure 1 is extended with a control module, and in this control module the traffic management measures are optimised in such a way that route choice behaviour is taken into account (figure 3). This was done by using the traffic assignment and network-loading modules also in the optimisation of the control plans.

In the optimisation of control plans four steps are needed:

1 Generate a certain control plan by whatever method;

2 Run a simulation with the network-loading model to see how traffic propagates through the network with this control plan;

3 Based on these results run a dynamic traffic assignment to obtain a new route flow distribution;

4 Run the dynamic network-loading model again to come to a final evaluation of the control plan.

Due to the nature of the optimisation problem, the number of variables to optimise and the fact that a function evaluation consists of a combined DNL, DTA and DNL run, an analytical approach would become very complex and is therefore not very suitable. Because of this a heuristic approach is chosen, which uses as few function evaluations as possible. A workable method is the evolution strategy (ES) with covariance matrix adaptation (CMA-ES), as described by Hansen (2006). Evolution strategies belong to the larger family of evolutionary algorithms, just like genetic algorithms, and primarily use mutation and selection as operators.

Figure 2: Measured and simulated speeds for a motorway bottleneck

Figure 3: Framework extended with anticipatory control


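The sketch below shows the optimisation loop in miniature. A toy quadratic surrogate stands in for the expensive DNL-DTA-DNL evaluation of steps 1-4, and a plain (mu, lambda) evolution strategy with isotropic mutations replaces the full covariance matrix adaptation of Hansen's CMA-ES; everything here (dimension, parameter values, decay rule) is hypothetical.

import numpy as np

rng = np.random.default_rng(7)

def evaluate_plan(plan: np.ndarray) -> float:
    """Total network delay for a control plan (green times, metering rates).
    In the real framework this runs steps 1-4 above: DNL, DTA, DNL again.
    Here a smooth toy surrogate stands in for that simulation chain."""
    target = np.array([35.0, 20.0, 0.6, 0.8])   # hypothetical optimum
    return float(np.sum((plan - target) ** 2))

# Minimal (mu, lambda) evolution strategy: a simplified stand-in for CMA-ES,
# using an isotropic mutation instead of a fully adapted covariance matrix.
mu, lam, sigma = 5, 20, 2.0
parent = np.array([30.0, 30.0, 0.5, 0.5])       # initial control plan (hypothetical)
for gen in range(60):
    offspring = parent + sigma * rng.standard_normal((lam, parent.size))
    scores = np.array([evaluate_plan(o) for o in offspring])
    elite = offspring[np.argsort(scores)[:mu]]
    parent = elite.mean(axis=0)                 # recombine the mu best
    sigma *= 0.95                               # simple step-size decay
print("best plan found:", np.round(parent, 3), " delay:", round(evaluate_plan(parent), 4))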

Case study

Using the framework, the benefits of integrated anticipatory control can be demonstrated with two cases containing a motorway and urban roads and different types of control (ramp metering and traffic signal control). The networks are shown in figure 4a. The first network (case 1) is quite simple, with a motorway, one signal-controlled intersection (black dot) and two possibilities to enter the motorway. Both on-ramps have ramp metering (grey dots). The second network (case 2) has more origins and destinations and more routes. Also here both on-ramps are metered, but now there are two signal-controlled intersections on the urban network. For both networks three control strategies are tested: local control, anticipatory control and system optimum control. The results for these two networks are shown in figure 4b. The figure shows the percentage changes in total network delay compared with local control. It is clear that anticipatory control is much better than local control. For the first case the results (about 40% improvement) come close to the system optimum results. But also for the second case the improvements are high (about 20%).

Conclusions

We already mentioned that in many cases traffic management is reactive and local: it reacts to local traffic conditions and traffic management measures are taken to reduce congestion at that specific location. To come to an integrated and network-wide approach, the Handbook Sustainable Traffic Management describes a process for cooperation between the different road authorities and other stakeholders. This is a first and important step, but still a methodological approach to integrated traffic management is lacking. How can traffic management measures be operated to reduce congestion on a network level, taking the network condition into account? In the research described in this article, and more extensively in Taale (2008), a framework for integrated and anticipatory traffic management is developed and demonstrated with good results. It can be used as a next step towards real network traffic management.

References

Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In: J.A. Lozano et al. (Eds.), Towards a New Evolutionary Computation. Advances in Estimation of Distribution Algorithms, 75-102. Springer-Verlag, Berlin.

Ministry of Transport, Public Works and Water Management (2004). Mobility Policy Document – Towards reliable and predictable accessibility, MinVenW, VROM.

Ministry of Transport, Public Works and Water Management (2008). Policy Framework for Utilisation – A Pillar of Better Accessibility, MinVenW.

Taale, H., Westerman, M., Stoelhorst, H. and van Amelsfort, D. (2004). Regional and Sustainable Traffic Management in The Netherlands: Methodology and Applications, Proceedings of the European Transport Conference 2004, Association for European Transport, Strasbourg, France.

Taale, H. and Westerman, M. (2005). The Application of Sustainable Traffic Management in The Netherlands, Proceedings of the European Transport Conference 2005, Association for European Transport, Strasbourg, France.

Taale, H. (2008). Integrated Anticipatory Control of Road Networks – A Game Theoretical Approach. PhD Thesis, Delft University of Technology.

Figure 4a: Networks for case 1 and case 2
Figure 4b: Results for case 1 and case 2 (relative change in total delay compared with local control)


Solvency II represents a complex project for reforming the present solvency supervision system for European insurance companies. In this context many innovative elements arise, such as the formal introduction of risk management techniques in the insurance sector. This makes it possible to correctly assess risks and their interdependencies and to take opportunities in terms of new insurance products, whose impact on the company's solvency is estimated beforehand.

Solvency 2: an Analysis of the Underwriting Cycle with Piecewise Linear Dynamical Systems

Consequently there is a growing need to develop so-called internal risk models to get accurate estimates of liabilities. In the context of non-life insurance, it is crucial to correctly assess risk from different sources, such as underwriting risk, with particular reference to premium, reserving and catastrophe risks. In particular, the underwriting cycle is not quantified in the standard formula under Quantitative Impact Study 4 (QIS4). The module on underwriting risk for non-life insurance is divided into two components: NLpr, pertaining to Premium Risk and Reserve Risk (the risk that premiums or reserves are not sufficient to face future liabilities), which are jointly evaluated, and NLCat, pertaining to catastrophic events. It is extremely important to correctly quantify all relevant inputs for applying the standard formula, especially for premiums and technical reserves. All models and employed data must be coherent with IASB guidelines on international accounting principles (IFRS), according to the concept of "Current Exit Value" and, for non-hedgeable risks (such as reservation risk), according to a "mark to model" approach. In short, the computation of the SCR for the non-life modules is based on the following formula:

SCR_NL = √( Σ_{r,c} Corr_{r,c} · NL_r · NL_c ),   (1)

where NL_r = NLpr and NL_c = NLCat, assuming that the correlation coefficient between the underlying risks for the two sub-modules is equal to one. We remark that the computation for the modules Health (containing, in Italy, injuries and illness) and Non-life (containing other damages) must be carried out separately. However, QIS4 does not define any additional capital requirement for the underwriting cycle. It is worth mentioning that the underwriting cycle adds an artificial volatility to underwriting results, outside the statistical realm of insurance risk. So when developing an internal model under Solvency II, the underwriting cycle must be analyzed, as the additional volatility could indeed generate higher capital requirements. Feldblum (2001) discusses the main causes of the underwriting cycle, taking into account insurance industry aspects that could influence insurer solvency. The presence and length of cycles could depend on technical and non-technical aspects such as the position and competitiveness of leading companies in relation to the market, a firm's tendency to increase its own market share, internal and external inflation of claim costs and changes in premium rates, loyalty changes and exposure variations. The inability to obtain profits at the end of a cycle could produce a reduction of market share and loss of business, as well as a reduction in the solvency ratio.
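For illustration, the aggregation in formula (1) is a one-liner; with the correlation between the two sub-modules set to one, as prescribed, it collapses to the plain sum of the capital charges (the figures below are invented):

import numpy as np

NL = np.array([80.0, 30.0])      # [NL_pr, NL_Cat], hypothetical charges (e.g. EUR million)
corr = np.array([[1.0, 1.0],
                 [1.0, 1.0]])    # QIS4: full correlation between the two sub-modules
scr = np.sqrt(NL @ corr @ NL)
print(scr, NL.sum())             # both print 110.0: the formula collapses to the sum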

Rocco Cerchiara

is assistant professor of Actuarial Mathematics at the Faculty of Economics, University of Calabria (Italy). His main research interests include Risk Theory for Life and Non-Life Insurance, with particular reference to pricing and reserving models under the Solvency II project.

Fabio Lamantia

is associate professor of financial mathematics at the Faculty of Economics, University of Calabria (Italy). His main research interests include financial risk theory, dynamical systems (stability, bifurcations and complex behaviours) and their applications to the modelling of the evolution of economic, social and financial systems.



Analysis of the underwriting cycle

There are several papers devoted to analyzing and modelling the underwriting cycle. It is worth mentioning the so-called financial pricing models (based on discounted cash flows). Venezian (1985) originally used this approach, mainly based on time series analysis, to confirm the adoption of theories based on rational expectations and the absence of financial market imperfections. Another possible approach is given by capacity constraint models, based on the assumption that, faced with constraints deriving from regulatory capital requirements, the insurer always has an excess of capital, so as to avoid the risk of having to raise capital externally; on this point see for example Higgins and Thistle (2000). In particular they proposed so-called "regime switching" techniques, to eliminate the assumption of invariance of the model parameters in every phase of the cycle. Other studies have been principally based on actuarial models; in particular, the proposed approaches include:

1 Deterministic models (trigonometric functions), as considered in Daykin et al. (1994);

2 Time Series analysis (see Daykin et al., 1994, Cummins and Outreville, 1987);

3 Exogenous impacts: combined use of the previous ones, incorporating also external factors and simulation models, as shown in Pentikainen et al. (1989) and Daykin et al. (1994).

In the next sections, an actuarial model will be employed in order to correctly model the underwriting cycle for non-life insurance companies, also taking into account the effect on the solvency ratio, adopting an approach based on piecewise-linear dynamical systems in order to also investigate the long-term dynamics of the model. The basic model is derived from Collective Risk Theory. Together with a dynamic control policy (see Pentikainen et al., 1989), this makes it possible to specify the relationship between the solvency ratio and the safety loading, in order to model the underwriting cycle. In particular, a simplified formula for the safety loading is derived that assumes the form of a one-dimensional piecewise linear map, whose state variable is the solvency ratio.

A dynamic control rule for the solvency ratio

The basic model is derived from Collective Risk Theory (see Daykin et al., 1994, Klugman et al., 1998 and Dhaene et al., 2001), where the solvency ratio u(t+1), i.e. the risk reserve U(t+1) divided by the risk premium P(0), at the end of the year t+1 (not considering expenses and relative loadings) is given by:

u(t+1) = r·u(t) + (1 + λ(t+1))·p(t+1) − x(t+1),   (2)

where
- r is a function (constant for our purposes) of the rate of return j, the rate of portfolio growth g and the inflation rate i (supposed constant): r = (1+j)/[(1+g)(1+i)];
- x(t+1) is the ratio of the present value of the aggregate loss X(t+1) to the risk premium;
- p(t+1) is the ratio of the risk premium P(t+1) = E[X(t+1)] to the initial level of the risk premium P(0) = E[X(0)];
- λ(t+1) is the safety loading.

Starting from the idea of Daykin et al. (1994), in this paper a dynamic control policy is proposed to specify the relationship between the solvency ratio and premium rates (the underwriting cycle). For this reason, the following dynamic equation for the safety loading is assumed:

λ(t+1) = λ0 + c1·max{R1 − u(t), 0} − c2·max{u(t) − R2, 0},   (3)

where we assume that 0 < R1 ≤ R2. Equation (3) shows how, starting from a basic level λ0, the safety loading will dynamically be:

- increased, with a percentage of c1, if u(t) decreases below a floor level R1, or
- decreased, with a percentage of c2, if u(t) is higher than a roof level R2.

"The underwriting cycle is not quantified in standard formula under QIS4"



Note that c1, c2, R1, R2 could represent strategic parameters which depend on risk management choices. Under the rough assumption that the aggregate loss distribution does not change in time, so that p(t+1) = p(t) = p(t-1) = ... = 1 (also not considering time lag effects), we define a simplified version of (2) that assumes the form of a one-dimensional piecewise linear map in the state variable u(t):

u(t+1) = r·u(t) + 1 + λ0 + c1·max{R1 − u(t), 0} − c2·max{u(t) − R2, 0} − x(t+1).   (4)

This dynamic control obviously prevents u(t) from tending to infinity, which is the typical long-term behaviour for r ≥ 1. In this paper we generalize the proof of Daykin et al. (1994) for the asymptotic behaviour of u(t) in a long-term process, introducing this dynamic control policy and thus obtaining different levels of equilibrium, varying in particular the parameter r. In doing so, we do not use, at least in a simplified setting, any simulation approach, but only analytical results on piecewise-linear dynamical systems (see Di Bernardo et al., 2008). In Cerchiara and Lamantia (2009), we generalized the proof in Daykin et al. (1994) of the asymptotic behaviour of the solvency ratio u(t) when a dynamic control policy is introduced. In particular, different equilibrium levels and analytical conditions for their coexistence can be obtained. Within the proposed model, it is possible to define analytical control rules by setting the strategic parameters c1, c2, R1, R2 and, consequently, to dynamically update the safety loading level. With this approach it is possible to "guarantee" prefixed equilibrium levels of the solvency ratio and thus of the insurer's capital requirements. The dynamic behaviour that can be generated by the underlying model can also be determined analytically, in particular the possibility of sudden jumps in the solvency ratio, technically as a consequence of a double "border collision" fold bifurcation.
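The long-run behaviour of the controlled map (3)-(4) is easy to explore numerically. The sketch below iterates the map under hypothetical parameter values (the paper derives the equilibria and their coexistence conditions analytically; here we simply simulate), with gamma-distributed aggregate loss ratios of mean one.

import numpy as np

rng = np.random.default_rng(3)

# Strategic parameters of the control rule (hypothetical values).
lam0, c1, c2, R1, R2 = 0.05, 0.5, 0.5, 0.5, 1.5
r = 1.02                       # r = (1+j)/((1+g)(1+i))

def safety_loading(u):
    """Equation (3): raise the loading below the floor R1, cut it above the roof R2."""
    return lam0 + c1 * max(R1 - u, 0.0) - c2 * max(u - R2, 0.0)

def step(u, x):
    """Equation (4) with p(t) = 1: one-year update of the solvency ratio."""
    return r * u + 1.0 + safety_loading(u) - x

u, path = 1.0, []
for t in range(100):
    x = rng.gamma(shape=25.0, scale=0.04)   # aggregate loss ratio, mean 1 (hypothetical)
    u = step(u, x)
    path.append(u)
print("last ten solvency ratios:", np.round(path[-10:], 3))

Without the control (c1 = c2 = 0) and with r above one, u(t) drifts off to infinity; with the rule switched on, the trajectory settles around an equilibrium band determined by R1, R2 and the loading adjustments.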

Conclusions

All in all we think that this method could be very useful for internal model developments under Solvency 2. In fact this approach could represent an alternative (or a complementary) tool to the traditional techniques employed in actuarial applications, such as standard simulations, approximation formulas, etc. This paper represents only a first step toward the use of these techniques and will be extended in subsequent works. In fact we are working on further developments, such as testing other dynamic control policies, estimating probability distributions when bifurcations of the underlying map occur and assessing, with real insurance data, aggregate losses and parameter estimates for stochastic implementations.

References

CEIOPS (2007). Quantitative Impact Studies 4 - Technical Specifications.

Cerchiara, R.R. and Lamantia, F. (2009). An analysis of the underwriting cycle for non-life insurance companies, Proceedings of Actuarial and Financial Mathematics Conference, Bruxelles.

Cummins, J.D. and Outreville, J.F. (1987). An international analysis of underwriting cycle, Journal of Risk and Insurance, 54, 246–262.

Daykin, C. D., Pentikainen, T. and Pesonen, M. (1994). Practical Risk Theory for Actuaries, London: Chapman and Hall.

Dhaene, J., Denuit, M., Goovaerts, M.J. and Kaas, R. (2001). Modern Actuarial Risk Theory, Dordrecht: Kluwer Academic Publishers.

Di Bernardo, M., Budd, C.J., Champneys, A.R. and Kowalczyk, P. (2008). Piecewise-smooth dynamical systems, London: Springer Verlag.

Feldblum, S. (2001). Underwriting cycles and business strategies, Proceedings of the Casualty Actuarial Society, 58, 175-235.

Higgins, M. and Thistle, P. (2000). Capacity constraints and the dynamics of underwriting profits, Economic Inquiry, 38, 442–457.

Klugman, S., Panjer, H. and Willmot, G. (1998). Loss Models - From Data to Decisions, New York: John Wiley & Sons. First Edition.

Pentikainen, T., Bondsdorff, H., Pesonen, M., Rantala, J. and Ruohonen, M. (1989). Insurance solvency and financial strength, Helsinki: Finnish Insurance Training and Publishing Company Ltd.

Venezian, E. (1985). Ratemaking method and profit cycles in property and liability insurance, Journal of Risk and Insurance, 52, 477-500.



Interview with Pieter Omtzigt

Pieter Herman Omtzigt obtained a PhD in Econometrics in 2003 in Florence with his thesis "Essays in Cointegration Analysis". Nowadays he is a Dutch politician for the party CDA. In the Tweede Kamer he mainly works on pensions, the new health care system and social security.

Could you tell our readers something of your background?

I studied Economics and Statistics with European Studies at the University of Exeter (United Kingdom). Once graduated, I moved to Florence for my PhD. At the European University Institute in Florence I conducted research in several fields of econometrics and wrote my thesis, titled "Essays on Co-integration". During that time I was also involved with the University of Insubria in Varese (Italy), researching non-stationary time series. After my time in Italy I went back to The Netherlands, where I started researching non-stationary time series at the University of Amsterdam; I have also taught several courses in statistics and econometrics. In 2003 I became a member of parliament for the CDA, where I am a spokesman on themes concerning taxation, pensions and corporate governance.

What were your experiences during your time at the University of Amsterdam and what was the reason you chose politics?

At the university I conducted research on themes like obsolescence. I enjoyed doing research, but I missed the practical side. I was asked to be on the list of the CDA and I immediately agreed. I am glad that I got the opportunity and I have enjoyed my work immensely.

Last December you spoke at the annual VSAE Actuarial Congress where the theme was “Transparency of insurances”. What is your opinion about the ‘woekerpolisaf-faire’?

The ‘woekerpolisaffaire’ means that insurance companies have calculated additional fees on in-vestment insurance without informing the custo-mers. This could mean that people paid their in-surance for years but at time of maturity it shows that all savings have been eroded due to sheer amount of fees.

This has led to many distressing situations. People thought they were saving money by paying the monthly fee to insurance companies during their entire working life. However, when they reached their retirement age, it showed on their account that nothing had been saved with the insurance company. Some pensioners even had debts with their insurance company, though they had already paid thousands of euros over the years. I think insurance companies have not been transparent over recent years. They deliberately avoided giving all the required information about the risks of investing with borrowed money. Actuaries have a socially responsible position because of the complex calculations and constructions they have made. If actuaries establish that there are definite problems, they should sound the alarm and should ask themselves whether certain calculations are in the interest of the customers. The recent past has been filled with accounting scandals like Enron and now we see daily the consequences of the collapse of the housing market in the US. Actuaries are extremely well suited to warn about the consequences of these situations.

Right now you are a politician for the CDA. On your website (www.pieteromtzigt.nl) you have written that one of your major issues is a fair retirement for everyone in The Netherlands. In April 2009 you proposed a bill to regulate the fees that are deducted from pensions. What is your exact goal with this bill?

The pension system in The Netherlands contains three pillars. The first pillar is the AOW, the monthly financial support from the government that every Dutch citizen receives upon reaching the age of 65. The second pillar is the pension employees save through their employer and the last pillar consists of private life insurance. In the second pillar insurance companies have retained too many fees, leading to the so-called usury pensions. In the past we have had bad experiences in The Netherlands with usury pensions. The current legal environment allows for a large upfront fee for the intermediaries who are selling the product.


I am pushing for fees that are proportional to the time spent in the fund. People who change jobs frequently do not save much money in a pension if fees are not calculated proportionately.

What is your opinion about the future of pensions in the Netherlands as the problem of aging becomes larger?

In The Netherlands we have a well-functioning pension system, even with the current crisis. Although the assets of pension funds have decreased significantly, the funds are still stable. I would suggest we keep the good things of the current system going forward. We do have to ensure the whole market is better able to handle the problem of an increasingly aging population. Every generation should be able to take advantage of the system. The increase of the AOW age to 67 is quite logical. When the AOW was introduced in 1957, life expectancy was around six or seven years lower than it is nowadays. Also, several decades ago there were more physically intensive jobs and those people did not retire until they reached the age of 65.

With the introduction of the new Retirement Laws in 2007, the Uniform Pension Overview (UPO) was also introduced. As a result, from 2008 on all insurance companies and pension funds are legally required to send the UPO to their customers. Do you think this was a good start for making the market more transparent?

It is hard to say how the introduction of the UPO has made the insurance market more transparent. The market has definitely done a good job by making pensions more comprehensible. I think it is a shame that in politics themes only get attention when problems related to that theme occur. Earlier, politicians did not devote much time or attention to the issue of pensions. But right now, the coverage ratio of several pension funds has become critical, so there is a greater focus on pensions, not only from politicians but also from Dutch citizens. Several pension funds have decided not to index pensions for the coming year, which means that the purchasing power of retirees will decrease as their pensions do not increase in line with inflation. Although it is an undesirable situation that there is no indexation, it is a good thing that people are getting more interested in their own pension.

You are a member of the Board of the Actuarial Association. What are your duties and responsibilities in that function?

Members of the board of the Actuarial Association usually meet twice a year, sometimes three times a year. At those meetings we discuss how the Actuarial Association can influence current trends and developments in the financial and actuarial world. For example, we have advised the actuaries to become involved in the discussion about the AOW. Actuaries know better than anybody how the AOW can be financed in the future. I think actuaries have a socially responsible position and it is a positive development that they ask their customers and other relevant people what kind of position they should take. At the moment I have a lot of contact with actuaries because of the current financial problems pension funds face. The technical knowledge of the actuaries is something I can always count on.

In 2004 you wrote the article "European AOW is not good for the Netherlands" with Mr Camiel Eurlings (currently minister at the Ministry of Transport, Public Works and Water Management). What was the motivation for writing this article and what do you think of the current situation?

Mr Eurlings and I still think that pensions and the AOW should remain the responsibility of every EU member and not a responsibility of Europe. This is an issue that may never change. The European pension market faces big challenges, but the problems of each country should be solved by that particular country instead of putting their troubles on the shoulders of their neighbours. I would push for guidelines against the transferability of obligations, if necessary with motions and a veto. If we do not have guidelines for this, it could happen that we are obligated to transfer pension money to Italy, while it would not automatically work in reverse. That is pure impoverishment and something I definitely hope does not occur.

What are the current dossiers you are working on?

The crisis has led to a lot of political recovery plans and we are waiting for regulatory acts from parliament. Important bills right now are the "multi-OPF" and the "PPI". The "multi-OPF" (multiple company pension funds) is a new partnership between company pension funds. In March 2009 the cabinet approved the bill and the Pension Law of 2007 will be amended. The legislative change means that company pension funds can combine their expertise and have a collective board, while financially they remain separate. In the past ten years the number of company pension funds has decreased from 938 to 597. The "PPI" (premium pension institution) involves a legislative change of the Law on Financial Supervision. The introduction of the PPI is related to the so-called defined contribution system. Overall, I think it is really important that the active participation of participants in the pension market continues.


Mean Sojourn Time in a Parallel Queue

This article considers a parallel queue, which is a two-queue network where any arrival generates a job at both queues. Earlier work has revealed that this class of models is notoriously hard to analyze. We first evaluate a number of bounds developed in the literature, and observe that under fairly broad circumstances these can be rather inaccurate. For the homogeneous case we present a number of approximations, which are extensively tested by simulation and turn out to perform remarkably well.

Benjamin Kemper

graduated in econometrics at the University of Amsterdam. He was an active member of the VSAE and an editor of Aenorm. In 2007 he started his PhD project "optimization of response times in service networks" under the supervision of prof.dr. Michel Mandjes and dr. Jeroen de Mast, University of Amsterdam. His PhD thesis will present the results of OR applications in the Lean Six Sigma methodology. Further, Benjamin is a consultant with IBIS UvA in the field of business and industrial statistics. Email: [email protected].

Introduction

The mathematical study of queues (queueing theory) is a branch of operations research. It analyses the arrival, waiting, and service processes in service systems. Queueing theory seeks to derive performance measures, such as average waiting time, idle time, and throughput time, to help make business decisions about the allocation of the scarce resources needed to provide a service (i.e., so that a server need not be idle) or to execute a service (i.e., so that a client need not be waiting). Parallel queues are service systems in which every arrival generates input in multiple queues. One could for example consider a Poissonian arrival stream that generates random jobs in two queues. The rationale behind studying parallel queues of the type described above lies in the fact that they are a natural model for several relevant real-life systems, for instance in service systems, health care applications, manufacturing systems, and communication networks. With Si denoting a job's sojourn time in queue i, a particularly interesting object is the parallel queue's sojourn time S := max{S1, S2}, as in many situations the job can be further processed only if service at both queues has been completed. One could think of many specific examples in which parallel queues (and the sojourn time S) play a crucial role, such as:

- a request for a mortgage is handled simultaneously by a loan division and a life insurance division of a bank; the mortgage request is finalized when the tasks at both divisions have been completed.

- a laboratorial request of several blood samples is handled simultaneously by several lab employees of a hospital; the patient's laboratorial report is finalized when all the blood samples have been analyzed.

- a computer code runs two routines in parallel; both should be completed in order to start a next routine.

Parallel queues have been studied intensively in the past and have turned out to be notoriously hard to analyze. The literature as mentioned in Kemper and Mandjes (2009) underscores the need for accurate methods to approximate the mean sojourn time E(S) that work for a broad set of service-time distributions. We present a set of such approximations and heuristics that are of low computational complexity, yet remarkably accurate. The structure of this article is as follows. In Section 2 we sketch the model and present some preliminaries. In Section 3 we consider the homogeneous case and present a number of approximations, which turn out to be highly accurate. Section 4 concludes.

Model, preliminaries, and bounds

In this section we formally introduce the parallel queue (or: fork-join network), see Figure 1. This system consists of two queues (or: workstations, nodes) that work in parallel. The jobs arrive according to a Poisson process with parameter λ; without loss of generality, we can renormalize time such that λ = 1 (which we will do throughout this paper). Upon arrival the job forks into two different 'sub-tasks' that are directed simultaneously to both workstations.


The service times in workstation i (for i = 1, 2), which can be regarded as a queue, are an i.i.d. sequence of non-negative random quantities (Bi,n), n ∈ N (distributed as a generic random variable Bi); we also assume (B1,n) and (B2,n) to be mutually independent. The load of node i (which can be seen as the average occupation rate) is defined as ρi := λEBi ≡ EBi < 1. The system's stability is assured under the, intuitively obvious, condition max{ρ1, ρ2} < 1. The queues handle the sub-tasks in a first-come-first-served fashion. In other words: if a sub-task finds the queue non-empty, it waits in the queue until its service starts. When both sub-tasks (corresponding to the same job) have been performed, they join and the job departs the network. Therefore, the total sojourn time of the n-th job in the network is the maximum of the two sojourn times of the sub-tasks, that is, in self-evident notation, Sn = max{S1,n, S2,n}. We here analyze the mean sojourn time, i.e., ES = E[max{S1, S2}], with Si denoting the sojourn time of an arbitrary customer (in steady state) in queue i. In general, the mean sojourn time cannot be explicitly calculated, the only exception being the case where B1 and B2 correspond to the same exponential distribution. This result, by Nelson and Tantawi (1988), is recalled below. Relaxing the homogeneity and exponentiality assumptions, upper and lower bounds are known, which will be mentioned next.

The homogeneous M/M/1 parallel queue

As proven in Tijms (1986), in the case of two homogeneous servers with exponentially distributed service times, the mean sojourn time obeys the strikingly simple formula

ES = ((12 − ρ)/8) · m,

where m := ρ/(1-ρ) is the mean sojourn time of an M/M/1 queue. This result is found by first decomposing the mean sojourn time ES is the sum of the mean sojourn time m of an M/M/1 queue and a mean synchronization delay d, i.e., ES=m+d. Using Little’s formula and using the balance equations, one can show that

d = (1/λ) ∑i≥1 i(i + 1)/2 · pi0,

with pi0 the steady-state probability of i jobs in queue 1 while the other queue is empty. The first two moments, that is ∑i i pi0 and ∑i i² pi0, are found from the generating function in Flatto and Hahn (1984)

P(z, 0) = (1 − ρ)^(3/2) / √(1 − ρz),

thus yielding d = m(4 − ρ)/8, as desired.¹ Observe that, when increasing the load from 0 to 1, the ratio ES/m of the mean sojourn time in the parallel system to the mean sojourn time of a single workstation varies only mildly: for ρ↑1 it is 11/8 = 1.375, whereas for ρ↓0 it is 12/8 = 3/2 = 1.5, i.e., a difference of about 8%. This entails that an approximation of the type ES ≈ (3/2)m is conservative, yet quite accurate.
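To make this concrete, the following minimal Python sketch (ours, not from the article; the function name is our own choice) evaluates the exact M/M/1 formula and the ratio ES/m at a few loads.

    # Mean sojourn time of the homogeneous M/M/1 parallel queue (lambda = 1),
    # via ES = m * (12 - rho) / 8 with m = rho / (1 - rho).
    def mm1_parallel_mean_sojourn(rho):
        m = rho / (1.0 - rho)          # mean sojourn time of a single M/M/1 queue
        return m * (12.0 - rho) / 8.0

    for rho in (0.1, 0.5, 0.9):
        m = rho / (1.0 - rho)
        es = mm1_parallel_mean_sojourn(rho)
        print(rho, round(es, 4), round(es / m, 4))   # the ratio stays between 1.375 and 1.5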

Bounds for the M/G/1 parallel queue

We discuss a number of bounds on ES in an M/G/1 parallel queue. It is noted that they in fact apply to the GI/G/1 parallel queue, but under the assumption of Poisson arrivals explicit computations are possible, see Kemper and Mandjes (2009). An upper and a lower bound for the general GI/G/1 case are presented by Baccelli and Makowski (1985); in the sequel we refer to these as the BM bounds. The BM bounds for the sojourn time are in fact sojourn times of similar systems of two independent queues:

- in the BM upper bound, U, one acts as if the two queues are independent. Informally, by making the queues independent, the stochasticity increases, and therefore the mean of the maximum of S1 and S2 increases, explaining that this yields an upper bound.

- in the BM lower bound, L, one considers two D/G/1 queues (with the same loads as in the original parallel queue). Informally, by assuming deterministic arrivals, one reduces the system's stochasticity, and therefore the mean of the maximum of S1 and S2 decreases, explaining that this yields a lower bound.

In addition we discuss a number of trivial (but useful) bounds. We first present a trivial lower bound. Since x ↦ max{0, x} is a convex function, Jensen's inequality gives

ES = ES1 + E[max{0, S2 − S1}] ≥ ES1 + max{0, E(S2 − S1)} = max{ES1, ES2} =: l.

Because max{a, b} = a + b − min{a, b} ≤ a + b, we also have the upper bound

ES ≤ ES1 + ES2 =: u.

¹ Evaluate the first and second derivatives at z = 1: P′(1, 0) = ∑i i pi0 = ρ/2 and P′′(1, 0) = ∑i i(i − 1) pi0 = 3ρ²/(4(1 − ρ)), and note that 2d = P′′(1, 0) + 2P′(1, 0).
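As a quick sanity check of these moment computations, one can differentiate the generating function symbolically. The following sketch (ours, using sympy) confirms the footnote and the resulting formula d = m(4 − ρ)/8.

    # Symbolic verification of P'(1,0), P''(1,0) and d.
    import sympy as sp

    z, rho = sp.symbols('z rho', positive=True)
    P = (1 - rho) ** sp.Rational(3, 2) / sp.sqrt(1 - rho * z)  # P(z,0), Flatto and Hahn (1984)

    P1 = sp.simplify(sp.diff(P, z).subs(z, 1))      # sum_i i p_i0        -> rho/2
    P2 = sp.simplify(sp.diff(P, z, 2).subs(z, 1))   # sum_i i(i-1) p_i0   -> 3 rho^2 / (4 (1 - rho))
    d = (P2 + 2 * P1) / 2                           # since 2d = P''(1,0) + 2 P'(1,0)
    m = rho / (1 - rho)
    print(P1, P2, sp.simplify(d - m * (4 - rho) / 8))  # the last expression is 0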

Figure 1. A simple fork-join queue


Notice that these bounds are in some sense insensitive, as they depend on the distributions of S1 and S2 only through their respective means.
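Since all bounds are compared with simulation below, we include a minimal simulation sketch (ours, under the stated assumptions: Poisson(1) arrivals, Lindley's recursion in each queue with a common arrival stream; names and parameters are our own).

    # Estimate ES for a two-queue fork-join network by simulation, together with
    # the trivial bounds l = max{ES1, ES2} and u = ES1 + ES2.
    import random

    def simulate_fork_join(sample_b1, sample_b2, n=200_000, seed=1):
        rng = random.Random(seed)
        w1 = w2 = 0.0                        # waiting times in queues 1 and 2
        tot = tot1 = tot2 = 0.0
        for _ in range(n):
            s1 = w1 + sample_b1(rng)         # sojourn time of sub-task 1
            s2 = w2 + sample_b2(rng)         # sojourn time of sub-task 2
            tot += max(s1, s2)               # the job leaves when both sub-tasks are done
            tot1 += s1
            tot2 += s2
            a = rng.expovariate(1.0)         # Poisson arrivals: exp(1) interarrival times
            w1 = max(0.0, s1 - a)            # Lindley's recursion, queue 1
            w2 = max(0.0, s2 - a)            # Lindley's recursion, queue 2
        es, es1, es2 = tot / n, tot1 / n, tot2 / n
        return es, max(es1, es2), es1 + es2  # simulated ES, lower bound l, upper bound u

    rho = 0.5
    exp_b = lambda rng: rng.expovariate(1.0 / rho)       # exponential services, EB = rho
    es, l, u = simulate_fork_join(exp_b, exp_b)
    print(es, l, u, (rho / (1 - rho)) * (12 - rho) / 8)  # compare with the exact M/M/1 value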

The homogeneous case

In this section we consider the situation of homogeneous servers, i.e., B1 and B2 are (independently) sampled from the same distribution. As shown by Nelson and Tantawi (1988), the mean sojourn time in the case of homogeneous exponentially distributed service times is a simple function of the mean sojourn time of a single queue, say m, and the service load ρ; for other service times, however, no explicit results are known. We assess the accuracy of the bounds u, l, U, and L by systematic comparison with simulation results. We do this by varying the load ρ (equal for both queues) imposed on the system, as well as the 'variability' of the service times, in terms of their squared coefficient of variation (SCV). It is noted that the trivial bounds u and l reduce to 2m and m, respectively, in the case of homogeneity. Our results clearly reveal that the effect of the system's service load ρ is modest, as was already observed by Nelson and Tantawi (1988) for the case of exponentially distributed service times.

We verify the accuracy of the bounds L and U, see Figure 2. We concentrate on an 'extreme' load of 0.9, and vary the SCV. In Kemper and Mandjes (2009) we provide the mean sojourn time in a single queue, m, and the simulated mean sojourn time ES of the parallel queue. (An exact expression for ES for SCV = 1 was discussed in Section 2.) The figure shows:

(i) the ratio of ES and m, which we call α(SCV); in view of the trivial bounds, it is clear that α lies between 1 and 2;
(ii) the ratio of the upper bound U and m, denoted by αU(SCV);
(iii) the ratio of the lower bound L and m, denoted by αL(SCV);
(iv) an approximation for the mean sojourn time introduced below, denoted by φ(SCV).

The service times with SCV smaller than 1 are obtained by using Erlang distributions. For SCVs larger than 1 we use hyperexponential distributions, with the additional condition of 'balanced means' (Tijms, 1986, Eq. (A.16)); a sketch of both samplers is given below. In the figure we used explicit formulae where possible; we otherwise relied on simulation. Here and in the sequel, the spread of the 95% confidence intervals for the simulated mean sojourn times is less than 0.5% of the simulated value. The main conclusions from these results (and additional numerical experimentation, on which we do not report here) are the following:

- For low loads the bounds L and U are relatively close, but the difference can be substantial for higher SCVs. For higher loads, however, L and U tend to be far apart, particularly for low SCVs.

- In several cases, the lower bound L is even below the trivial lower bound l = m. It is readily checked that this effect is not ruled out in the construction of the lower bound L.

- A disadvantage of relying on these bounds is that particularly L is in most cases not known in closed form. It therefore needs to be obtained by simulation, but then there is no advantage in using this bound anymore: with comparable effort we could have simulated the parallel queue as well.

Figure 2. Graph with BM bounds, simulated values and approximated values for load ρ = 0.9.

SCV    log(SCV)   ρ=0.1    ρ=0.3    ρ=0.5    ρ=0.7    ρ=0.9
0.25   -1.3863    1.269    1.2603   1.2523   1.2462   1.2449
0.33   -1.0987    1        1.2961   1.2858   1.2773   1.2733
0.5    -0.6931    1.3676   1.3526   1.3381   1.3251   1.3154
0.75   -0.2877    1.4401   1.417    1.3948   1.365    1.3568
1       0.0000    1.4874   1.4626   1.4374   1.4124   1.3875
2       0.6931    1.5792   1.5662   1.5447   1.5114   1.4607
4       1.3863    1.6634   1.6658   1.6423   1.5942   1.5148
16      2.7726    1.8048   1.8155   1.7685   1.6886   1.5682
64      4.1589    1.9062   1.8828   1.8143   1.7175   1.5831
256     5.5452    1.9527   1.8999   1.8207   1.7217   1.584

Table 1. Simulated values of α(SCV) for several SCVs and several loads ρ.
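The samplers referred to above can be sketched as follows (ours, following the standard balanced-means construction described in Tijms (1986); names and parameter choices are our own). Erlang-k yields SCV = 1/k; the H2 construction covers any SCV larger than 1.

    # Service-time samplers with a prescribed mean and SCV.
    import math
    import random

    def erlang_sampler(mean, k):
        # Erlang-k: sum of k i.i.d. exponentials; SCV = 1/k.
        rate = k / mean
        return lambda rng: sum(rng.expovariate(rate) for _ in range(k))

    def h2_balanced_sampler(mean, scv):
        # Hyperexponential (H2) with 'balanced means'; requires SCV > 1.
        p = 0.5 * (1.0 + math.sqrt((scv - 1.0) / (scv + 1.0)))
        mu1, mu2 = 2.0 * p / mean, 2.0 * (1.0 - p) / mean
        return lambda rng: rng.expovariate(mu1 if rng.random() < p else mu2)

    rng = random.Random(42)
    sample = h2_balanced_sampler(mean=0.9, scv=4.0)      # load 0.9 when lambda = 1
    xs = [sample(rng) for _ in range(100_000)]
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    print(m, var / m ** 2)                               # roughly 0.9 and 4.0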

In view of the results illustrated in Figure 2, there is a clear need for more accurate bounds and/or approximations. The approach followed here is to identify, for any given value of the load ρ, an elementary function φ(·) such that φ(SCV) accurately approximates α(SCV). In this approach we parameterize the service-time distribution by its mean and SCV. The underlying idea is that in a single M/G/1 queueing system the mean sojourn time depends on the service-time distribution solely through its first two moments, as it can be expressed as a function of the mean service time and the coefficient of variation through the Pollaczek-Khintchine formula, see for example Tijms (1986, Eq. (2.55)). We expect the mean sojourn time of the parallel queueing system to exhibit (by approximation) similar characteristics, thus justifying the approach followed. Having a suitable function φ(·) at our disposal, we can estimate ES by mφ(SCV). Note that m, i.e., the mean sojourn time of a single queue, is known explicitly. The function φ(·) shown in Figure 2 is the one proposed in the left panel of Table 2.

To estimate α(SCV) = ES/m for various values of SCV and ρ, we performed simulation experiments, leading to the results shown in Table 1. The table indicates that a rule of thumb of the type ES = (3/2)m (that is, α = 3/2) is a conservative, yet accurate approximation for a broad range of parameter values. We now try to identify a function φ(·) with a better fit. In Table 1 we study the simulated ratios as a function of the service-time distribution's SCV. We approximate the ratio α(SCV) with a polynomial in log(SCV) of degree two, based on 10 data points. The coefficients are estimated by ordinary least squares. As can be seen in the left part of Table 2 and from Figure 2, the polynomial regression fits extremely well, with an R² of nearly 100%. The table gives fitted curves for ρ = 0.1 + 0.2i, with i = 0, ..., 4; our experiments indicate that for other values of ρ good fits can be achieved by interpolating the estimates for α(SCV) linearly.

We could also try to see how good a fit can be obtained by an even simpler function, for instance by approximating α(SCV) by a polynomial in log(SCV) of degree one. The results are reported in the rightmost columns of Table 2. The model still shows a reasonable fit, but one observes that the R² for this polynomial regression is decreasing in the load ρ. Especially for larger values of ρ, the polynomial of degree one fits considerably worse than the polynomial of degree two.
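As an illustration of this fitting step, the following sketch (ours) reproduces the degree-two fit from the ρ = 0.9 column of Table 1 and applies the resulting approximation ES ≈ mφ(SCV), with m from the Pollaczek-Khintchine formula; note that the table lists natural logarithms.

    # OLS fit of alpha(SCV) as a degree-two polynomial in log(SCV), rho = 0.9 data.
    import numpy as np

    scv = np.array([0.25, 1/3, 0.5, 0.75, 1, 2, 4, 16, 64, 256])
    alpha = np.array([1.2449, 1.2733, 1.3154, 1.3568, 1.3875,
                      1.4607, 1.5148, 1.5682, 1.5831, 1.5840])   # Table 1, rho = 0.9
    c2, c1, c0 = np.polyfit(np.log(scv), alpha, deg=2)           # highest degree first
    print(c0, c1, c2)   # compare with Table 2: 1.392 + 0.0950 log(SCV) - 0.01109 log(SCV)^2

    def mg1_mean_sojourn(rho, scv):
        # Pollaczek-Khintchine: ES = EB + lambda E[B^2] / (2 (1 - rho)), lambda = 1, EB = rho.
        return rho + rho ** 2 * (1.0 + scv) / (2.0 * (1.0 - rho))

    phi = lambda s: c0 + c1 * np.log(s) + c2 * np.log(s) ** 2
    print(mg1_mean_sojourn(0.9, 4.0) * phi(4.0))                 # estimate of ES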

Concluding remarks

The parallel queue is a well known generic buil-ding block of more complex service systems in industry, services, and healthcare. The fact that these systems have proven to be highly com-plex, even in the very simple case of just two servers, is undisputably true. This makes the analysis challenging, and explains the need for simple heuristics.We explain the bounds suggested by Baccelli and Makowski (1985). As they performed poor-ly, we developed an alternative approach: we identified a suitable function of the first two moments of the service-time distribution to es-timate the mean sojourn time of the homoge-neous parallel queue.

References

Baccelli, F. and Makowski, A.M. (1985). Simple computable bounds for the fork-join queue. In Proc. Johns Hopkins Conf. Information Science, Johns Hopkins University, Baltimore.

Flatto, L. and Hahn, S. (1984). Two parallel queues created by arrivals with two demands I, SIAM Journal on Applied Mathematics, 44, 1041-1053.

Kemper, B.P.H. and Mandjes, M.R.H. (2009). Approximations for the mean sojourn time in parallel queues, http://ftp.cwi.nl/CWIreports/PNA/PNA-E0901.pdf.

Nelson, R. and Tantawi, A.N. (1988). Approximate analysis of fork/join synchronization in parallel queues, IEEE Transactions on Computers, 37, 739-743.

Tijms, H. (1986). Stochastic Modelling and Analysis: a Computational Approach. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics, John Wiley & Sons Ltd., Chichester.

Load ρ   φ(SCV), degree two                               R²        φ(SCV), degree one         R²
ρ=0.1    1.484 + 0.1461 log(SCV) − 0.01099 log(SCV)²      100.00%   1.463 + 0.1031 log(SCV)    96.20%
ρ=0.3    1.476 + 0.1527 log(SCV) − 0.01344 log(SCV)²      99.70%    1.451 + 0.1001 log(SCV)    93.80%
ρ=0.5    1.456 + 0.1448 log(SCV) − 0.01406 log(SCV)²      99.50%    1.430 + 0.0898 log(SCV)    91.70%
ρ=0.7    1.427 + 0.1266 log(SCV) − 0.01323 log(SCV)²      99.40%    1.403 + 0.07486 log(SCV)   89.70%
ρ=0.9    1.392 + 0.0950 log(SCV) − 0.01109 log(SCV)²      99.60%    1.372 + 0.05158 log(SCV)   85.80%

Table 2. Fitted ratios α(SCV) for various loads ρ, based on least-squares estimation.


Puzzle

Here are two new puzzles to challenge your brain. The first puzzle should be solvable for most of you, but the second one is a bit harder. Solving these puzzles may even win you a book token! But first, the solutions to the puzzles of the last edition.

A dice game

Out of the 216 ways the dice may be thrown, you will win on only 91 of them and lose on 125. Suppose now you place 1 dollar on each of the six squares. The student will pay out three dollars and take in three dollars on every roll that shows three different numbers. But on doubles he makes a dollar and on triples he makes two dollars. In the long run, this gives the student a profit of 7.8 percent on each dollar bet.
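The 7.8 percent can be verified by brute force; here is a short enumeration sketch (ours) over all 216 outcomes, betting on a single square:

    # Enumerate all dice rolls: a 1-dollar bet on one face pays 1 dollar per die
    # showing that face, and loses the stake when the face does not appear.
    from fractions import Fraction
    from itertools import product

    player = sum(roll.count(1) if roll.count(1) else -1
                 for roll in product(range(1, 7), repeat=3))   # bet on face 1
    print(Fraction(-player, 6 ** 3))    # bank's edge: 17/216, about 7.87 percent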

Long division

749 / 638897 \ 853
      5992
      -----------
       3969
       3745
      -----------
        2247
        2247

This edition's new puzzles:

Annual event

A group of students starts off to the annual event organized by their study association in different buses, each carrying exactly the same number of students. Halfway to the event ten buses broke down, so it was necessary for each remaining bus to carry one more student. All students enjoyed themselves at the event, but when they started for home they discovered that fifteen more buses were out of commission. On the return trip there were therefore three persons more in each bus than when they started off in the morning. How many students attended the event?

Strange clock

Suppose we have a clock with a somewhat strange movement of the hands. Assume that this clock has an hour hand which moves twelve times faster than the minute hand. When will the hands first reach a point (after six o’clock) which will indicate the correct time?

Solutions

Solutions to the two puzzles above can be submitted up to September 1st. You can hand them in at the VSAE room (C6.06), mail them to [email protected], or send them to VSAE, for the attention of Aenorm puzzle 64, Roetersstraat 11, 1018 WB Amsterdam, the Netherlands. Among the correct submissions, one book token will be awarded. Solutions may be submitted in either English or Dutch.


Facultive

VU University Amsterdam

With the summer ahead of us, silence is returning to our board room. After a year of hard work and great activities, the members of study association Kraket will go on a well-deserved holiday.

In the last months we have had some great activities. First there was a bonbon workshop, and for the first time in the history of Kraket more women than men showed up. Our next activity was our own Caseday, a day full of interesting cases and even a suit workshop. On the 21st of April a group of our study association went to an inhouse day of PricewaterhouseCoopers. On the 28th of April the yearly soccer tournament took place. The tournament was a great success for Kraket, because our new team of first-year students won it. And last but not least: the Kraket weekend! This year the weekend went to the south of Belgium. With a weekend full of great activities like laser gaming, kayaking and a visit to the Sanadome in Nijmegen, it was again a great experience.

The Kraket board wishes everyone a good holiday!

Agenda

24 - 28 August IDEE Week

29 - 30 August Introduction Week

University of Amsterdam

Over the last period several VSAE projects have been successfully completed. In April the lustrum edition of the Econometric Game was held in Amsterdam. Twenty-six universities worked during this three-day event on two cases concerning child mortality. After the first case, eliminations took place so that only the ten winners were allowed to work on the final case. Universidad Carlos III de Madrid emerged as the proud winner of the final case and thus of the Econometric Game 2009. The VSAE was very proud that James Ramsey came to Amsterdam to take part in the Econometric Game jury.

The day after the Econometric Game, a group of twenty-four VSAE members travelled to Hong Kong to work on a trading game. Visits to the Hong Kong Exchange, Macau and mainland China were also part of the program. At the end of April the soccer tournament with study association Kraket took place on a rainy afternoon.

Summer is getting started in Amsterdam, and at the moment the VSAE students are busy studying for their (re-)exams, all looking forward to the summer holiday.

In September a new group of freshmen will start their studies in Econometrics, Actuarial Science or Operations Research at the University of Amsterdam. As the VSAE board we look forward to welcoming them and hope they will enjoy their studies and, of course, our study association.

Agenda

24 - 26 August Introduction days

7 September General members meeting

8 September Monthly drink

6 - 7 October Beroependagen


