
Aenorm 62
Magazine for students in Actuarial Sciences, Econometrics & Operations Research
Edition 62, Volume 16, January 2009

Dutch Ready for Eligibility Age Increase to 67 Years

Lying or Truth-telling: Why Does it Matter in Economics?

Jams Are Everywhere: How Physics May Help to Solve Our Traffic Problems

How Less Data Can Be Better

To DICE with Climate Change: a Regression Analysis

The Power to Chase Purchasing Power

The Dutch Business Cycle: Which Indicators Should We Monitor?



Preface

Good Intentions

January: a fresh start to the new year. This is the month in which most people want to make a difference, get rid of their bad habits and make the best of the year. Some will try to quit smoking, others want to lose weight and go to the gym more often. All are trying to work harder and achieve their goals. I could write down a whole list of my own good intentions, but unfortunately this edition of Aenorm already contains more pages than ever before, so little space remains.

In this Aenorm you can find a few topics that relate to good intentions as well. Lying or truth-telling was already introduced on our cover. Clearly it would be a good intention to stop lying and start being honest. But when it comes to lying or truth-telling in economics, it might be different (hint: go and read the article). Another topic is climate change. As we all know, the climate has been changing a lot in the past years, and if we do not adjust our lifestyle it will not get any better. So it would be a good idea to minimize pollution and keep our environment as clean as possible. One way to achieve this is to use the car less often. This would result in fewer jams and might solve our traffic problems (p. 19). Hopefully you are now curious enough to go and read all the articles.

2008 was a year in which we visited international conferences to make Aenorm better known all over the world. Speaking for the magazine's Public Relations, I can only say that we should continue this in 2009 and try to convince people all over the world to publish in Aenorm and subscribe to it!

With the start of a new year the VSAE will get a new board. This also means that our lovely chief editor Lennart Dek will step aside to let the new president of our study association, Ms Annelies Langelaar, show her skills as the new chief editor. I can only wish her all the best with Aenorm! On behalf of the committee I would also like to thank Lennart for all the effort he put into Aenorm in 2008. This magazine is his last masterpiece, so I hope you will enjoy reading it.

Last but not least, I would like to wish all of you a very happy new year in which you hopefully achieve all your goals and turn all your good intentions into reality. Whether that means passing your foundation courses, obtaining your bachelor's or master's degree, or even earning a promotion at work: go for it!

Shari Iskandar



Contents

Cover design: Michael Groen

Aenorm has a circulation of 1900 copies for all students in Actuarial Sciences and Econometrics & Operations Research at the University of Amsterdam and for all students in Econometrics at the VU University Amsterdam. Aenorm is also distributed among all alumni of the VSAE.

Aenorm is a joint publication of VSAE and Kraket. A free subscription can be obtained at www.aenorm.eu.

Publication of an article does not imply that it expresses the opinion of the board of the VSAE, the board of Kraket or the editorial staff. Nothing from this magazine may be reproduced without permission of the VSAE or Kraket. No rights can be derived from the content of this magazine.

© 2009 VSAE/Kraket

QIS4: The Treatment of Lapse Risk in a Life Portfolio 14

Rob Bruning

How Less Data Can Be Better 9

Gijs Rennen

Regional Real Exchange Rates and Phillips Curves in Monetary Unions: Evidence from the US and EMU 22

Jan Marc Berk and Job Swank

Determining a Logistics Network for Toyota’s Spare Parts Distribution 31

Kenneth Sörensen and Peter Goos

Lying or Truth-telling: Why Does it Matter in Economics? 4

Marta Serra-Garcia

Hedging Prepayment Risk on Retail Mortgages 26

Dirk Veldhuizen

Jams Are Everywhere - How Physics may Help to Solve Our Traffic Problems 19

Andreas Schadschneider

Dutch Citizens Ready for Eligibility Age Increase from 65 to 67 Years 42

Karen van der Wiel

The Maximal Damage Paradigm in Antitrust Regulation: Is it Something New? 37

Harold Houba and Evgenia Motchenkova


Puzzle 79

Facultive 80

The Jackknife and its Applications 66

Koen Jochmans

Volume 16, Edition 62, January 2009. ISSN 1568-2188

Chief editor: Lennart Dek

Editorial board: Lennart Dek

Design: Carmen Cebrián

Lay-out: Taek Bijman

Editorial staff: Raymon Badloe, Erik Beckers, Daniëlla Brals, Lennart Dek, Jacco Meure, Bas Meijers, Chen Yeh

Advertisers: Achmea, All Options, AON, APG, Delta Lloyd, De Nederlandsche Bank, Ernst & Young, Hewitt, KPMG, Michael Page, MiCompany, ORTEC, PricewaterhouseCoopers, Shell, SNS Reaal, Towers Perrin, Watson Wyatt Worldwide

Information about advertising can be obtained from Tom Ruijter [email protected]

Editorial staff addresses: VSAE, Roetersstraat 11, 1018 WB Amsterdam, tel: 020-5254134

Kraket, de Boelelaan 1105, 1081 HV Amsterdam, tel: 020-5986015

www.aenorm.eu

Optimal Diversity in Investments with Recombinant Innovation 46

Paolo Zeppini

To DICE with Climate Change 51

Marleen de Ruiter

The Power to Chase Purchasing Power 70

Jan-Willem Wijckmans

Longevity in the Netherlands: History and Projections for the Future 62

Henk Angerman and Tim Schulteis

The Dutch Business Cycle: Which Indicators Should We Monitor? 74

Ard den Reijer

Container Logistics 57

Iris Vis and Kees Jan Roodbergen


As buyers or sellers in different markets, we negotiate prices and quantities. In the process, we often communicate and share information with each other. Hence the question arises: does our communication lead to more efficient outcomes? Although we might intuitively expect so, under the standard theoretical framework in economics this assumption does not necessarily hold. Even if buyers have no reason to mislead sellers, the latter might believe they do and hence ignore their messages. In turn, this makes both truthful and untruthful statements equivalent for buyers in monetary terms and can render communication irrelevant. However, recent research suggests that truthful and untruthful statements might not be equivalent in terms of utility, contrary to what is often assumed in the standard framework. Individuals exhibit an aversion to lying and are willing to forgo monetary payoffs in order to make true statements. These findings shed new light on the role of communication in economic interactions and support its potential in determining economic outcomes.

Lying or Truth-telling: Why Does it Matter in Economics?

Marta Serra-Garcia

is a PhD candidate in Economics at Tilburg University. In 2007 she completed her Master of Philosophy (MPhil.) in Economics at Tilburg University. Her interests are in the fields of behavioral and experimental economics. Her dissertation analyzes different mechanisms which affect cooperation in social dilemmas. One such mechanism on which she is currently focusing is communication.

Communication is present in many of our economic interactions. Despite its relevance, it is often difficult to predict the effect of communication on outcomes. Since communication is costless, discussions about intentions or an individual's private information are not necessarily credible.

In this article, we first discuss a strand of the literature which focuses on individuals' preferences for truth-telling. Then we argue how these preferences can, under some circumstances, lead to more precise predictions regarding the role of communication in economic interactions.

The literature on communication is quite wide. On the one hand, it studies communication in situations of symmetric information, in which individuals communicate their intended choices before a game (for example, in coordination games and social dilemmas). On the other hand, it examines the transmission of private information between informed and uninformed agents. We will concentrate on the latter strand of the literature and work through an example.

Imagine you want to buy a second-hand car. You go to a seller who has two apparently identical cars for sale. Suppose one car is labeled A while the other is labeled B. You know one car is of good quality and worth 100 to you, while the other is of bad quality and worth 0 to you. Unfortunately, you cannot tell which car is of which quality, but the seller does know. Suppose the price of both cars is 50 and the seller earns a fixed wage of 50. Whatever car he sells you, he will earn 50, but you will only earn 50 if it is the high-quality car and lose 50 otherwise. Table 1 summarizes the payoffs in this game, which we will call game 1.

                        Payoff to seller    Payoff to buyer
High-quality car sold          50                  50
Low-quality car sold           50                 -50

Table 1: Payoffs in game 1

Imagine the seller tells you that 'Car A is of better quality'. Would you believe him? He has no incentive to lie, since he earns the same amount of money whatever car he sells. Thus, you have reasons to believe him and buy car A.

Now, imagine someone told you the seller is a compulsive liar who enjoys misleading buyers. Would you believe him now? In that case, you might doubt his statement and decide to ignore it. If you do, you will buy the 'good' car with a probability of one half, while with the same probability you might end up with the 'bad' car. Your expected payoff is then 0. If you are slightly risk averse, you might refuse to buy any car. Thus, in this case, the seller's statements become irrelevant to your decision and also irrelevant in terms of monetary payoffs for him.
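For completeness, the expected payoff of ignoring the message and picking a car at random follows directly from the payoffs in Table 1:

E[\text{payoff}_{\text{buyer}}] = \tfrac{1}{2} \cdot 50 + \tfrac{1}{2} \cdot (-50) = 0.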

Hence this simple example shows that we often do not have a precise prediction about the role of communication in economic interactions. In the first case, the high-quality car is sold and both seller and buyer are better off. In the second case, the transaction might never take place and efficiency gains might be forgone. Furthermore, both outcomes are plausible within game 1.

Things can get worse if we slightly modify game 1. Suppose that the seller's wage is equal to the profit made by selling the car. More precisely, the low-quality car is worth 0 to him while the high-quality one is worth 30. By selling the low-quality car, the seller earns 50, while selling the high-quality one earns him 20. Table 2 shows the payoffs in this situation, called game 2.

                        Payoff to seller    Payoff to buyer
High-quality car sold          20                  50
Low-quality car sold           50                 -50

Table 2: Payoffs in game 2

If the seller only cares about his own payoffs, what will he say? He will probably try to mislead you into buying the low-quality car. Given these incentives, you have no reason to believe his statements. Your expected payoff is then 0 and you might end up not buying a car at all. Thus, communication in game 2 is irrelevant to the transaction’s outcome.

However, what happens if we assume individuals have a preference for truth-telling? Well, under this assumption you will believe the seller and buy the high-quality car in game 1. In game 2, you might also be willing to believe the seller, if his aversion to lying is stronger than the 30 monetary units he forgoes by telling the truth. Thus, individuals' aversion to lying can help us make sharper predictions about individuals' communication and its impact on outcomes.
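As a minimal illustration of this threshold, the seller's decision in game 2 can be written down with an explicit, linear utility cost of lying. This sketch is written for this text; the functional form of the cost is an assumption, not a model taken from the article or the papers it cites.

```python
# Minimal sketch of the seller's decision in game 2 with an explicit,
# linear utility cost of lying (an assumption made for this illustration,
# not a model taken from the article or the cited papers).

def seller_tells_truth(payoff_lie=50.0, payoff_truth=20.0, lying_cost=0.0):
    """True if the seller prefers pointing the buyer to the high-quality car.

    He compares the money earned by selling the low-quality car, net of the
    utility cost of lying, with the money earned by being truthful."""
    return payoff_truth >= payoff_lie - lying_cost


if __name__ == "__main__":
    # With no lying cost the seller misleads the buyer; once the cost
    # exceeds the 30 units he forgoes by telling the truth, he is honest,
    # which is exactly the threshold discussed in the text.
    for cost in (0, 10, 30, 40):
        print(cost, seller_tells_truth(lying_cost=cost))
```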

Recent studies have shown that individuals often exhibit an aversion to lying. Gneezy (2005) conducted an experimental study among university students in Israel. He found that many students are willing to forgo monetary payoffs and tell the truth. Interestingly, he also found that as the gains from lying for the informed player (the seller, in our example) increase, individuals lie more often. On the contrary, when the harm that lying does to the uninformed player increases, individuals lie significantly less often. Other studies in the literature, such as Sánchez-Pagés and Vorsatz (2007), have found further evidence of lying aversion in similar situations.

Interestingly, this experimental evidence has also been incorporated into theoretical models of communication. Demichelis and Weibull (2008) have developed a model in which they assume that individuals have lexicographic preferences for truth-telling, i.e. if a payoff can be achieved via a truthful and an untruthful message, the former is preferred. They study communication before a specific class of coordination games, in which two players want to coordinate on which decision to make. Generally speaking, they then show that, under such preferences, the only evolutionarily stable equilibrium in these games is Pareto-efficient.

This literature is very recent and many questions remain unanswered. One of the most interesting questions concerns the strength of individuals' lying aversion. As we have seen in game 2, lying aversion leads the seller to be truthful and the buyer to buy the car only if the costs of lying are 'high enough'. Further experimental studies could investigate the strength of lying aversion in different economic interactions, allowing us to better understand when it is relevant. This could lead to better predictive power regarding the impact of communication in economic interactions.

Overall, communication might not be irrelevant to economic outcomes, and thus to efficiency, if we consider individuals' preferences for truth-telling. However, doing so poses further challenges. How should we model lying aversion? Could it simply be modeled as a utility cost which individuals suffer when they do not tell the truth? This cost could also depend on the monetary payoffs obtained by lying, if the lie is thought to be trustworthy. To find the answers to these questions, both empirical evidence, mainly stemming from experimental research, and theoretical modeling will be needed. Furthermore, it is the dialogue between them that will probably yield the most relevant answers.

References

Gneezy, U. (2005). Deception: The Role of Consequences, American Economic Review, 95(1), 384-394.

Sánchez-Pagés, S. and Vorsatz, M. (2007). An experimental study of truth-telling in sender-receiver games, Games and Economic Behavior, 61, 86-112.

Demichelis, S. and Weibull, J. W. (2008). Language, Meaning, and Games: A Model of Communication, Coordination, and Evolution, American Economic Review, 98(4), 1292-1311.


During your study, you have probably been warned on many occasions that you can only draw valid conclusions or construct an accurate model if you have enough data. Therefore, the more data you have, the better your model is. However, what if you are in a situation where there is a relative abundance of data? Then the large size of the dataset might also have drawbacks. Fitting a model to your data can take quite some time or can even be impossible. Moreover, what to do if you have more data from one part of the design space than from another? Will your model be accurate throughout the whole space, or will it be more accurate in areas with more data at the cost of less accuracy in areas with less data? In this article, we take a look at the effect of using a large non-uniform dataset to fit a Kriging model. We show that using a subset instead of the complete available dataset can have a number of benefits. Furthermore, we introduce and compare several new and current methods for selecting an adequate subset from the original dataset. In short, we will show that less data can be better.

How Less Data Can Be Better

Gijs Rennen

is a PhD student at the Department of Econometrics and Operations Research at Tilburg University. His research is on the design of computer experiments and multi-objective optimization. This article is an abridged version of Rennen (2008), which can be downloaded online for free. For comments about this article the author can be contacted by e-mail: [email protected].

Motivations

Kriging is an interpolation technique which finds its roots in geostatistics. Besides geostatistics, Kriging can also be applied in various other fields. For instance, Sacks et al. (1989) applied Kriging in the field of deterministic simulation for the design and analysis of computer experiments.

When building a model, intuition tells us that using more data will always result in better models. However, when the large given dataset is non-uniformly distributed over the whole design space, problems can occur. Such datasets arise in several situations. The first situation we can think of is when we have a set of legacy data (Srivastava et al. 2004). Legacy datasets contain results of experiments, simulations or measurements performed in the past. As this data is not generated especially for fitting a global model, it is often not uniformly distributed over the whole design space. A second situation is when the data is the result of sequential optimization methods. These methods often generate more data points near the potential optima than in other regions (Booker et al. 1999). Hence, if we want to use this data to fit a global metamodel, we will have to take into account that it contains clusters of points.

These are just a few examples of situations where we can come across large non-uniform datasets. Using these sets can generate problems, which can often be partly or even completely resolved by using a more uniform subset. Here it is worthwhile noting that a subset is called uniform if the input data of the data points is "evenly spread" over the entire design space. Important motivations for using a uniform subset instead of the complete dataset are the following:

• Increasing accuracy. When fitting a model to a dataset, we generally try to optimize a function of the model's accuracy at the points in the dataset. If this function treats all points equally, accuracy in regions of the design space rich in points can weigh more heavily than accuracy in regions with fewer points. This will reduce the accuracy of the model measured over the complete design space. Thus, using a more uniform dataset, which would equalize the weights of the different regions in the design space, could result in a better overall accuracy.

• Time savings. A second reason is the reduction in the time needed to fit the Kriging model. This is certainly a significant issue, as time consumption is generally regarded as one of the main drawbacks of the Kriging method (Jin et al. 2001).

• Avoiding numerical inaccuracies. A common property of large non-uniform datasets is that they can contain points which are positioned in close proximity to each other. This property can make the corresponding correlation matrix ill-conditioned (Davis and Morris 1997, Booker et al. 1999). Moreover, solving a linear system with an ill-conditioned matrix can cause considerable numerical inaccuracies, making the optimization of the Kriging model inaccurate. Hence, removing certain points from the dataset can improve the numerical accuracy of the model.

• Improving robustness. Robustness of the Kriging model with respect to errors in the output data can also be negatively influenced when data points form clusters. Siem and Den Hertog (2007) show that points which are in close proximity to each other can be assigned relatively large Kriging weights. Due to these large weights, errors in the output values at these points might have a large influence on the output value of the Kriging model at other points.

Example

To show that selecting a subset can significantly improve the above-mentioned aspects, we introduce the following simple artificial example. Here we try to approximate the six-hump camelback function (Dixon and Szegö, 1978):

f(x) = 4x_1^2 - \frac{21}{10}x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4,

with x_1 ∈ [-2, 2] and x_2 ∈ [-1, 1]. As our dataset, we take a maximin Latin Hypercube Design of 20 points (Van Dam et al. 2007) with four additional points lying close to an existing point, as depicted in Figure 1. By adding these four points, we have created a cluster of points in the dataset.

Figure 1: Maximin Latin Hypercube Design of 20 points with 4 additional points.

We test the effect of selecting a uniform subset by fitting Kriging models to the datasets, first with the four additional points and then without them. To measure the effects of taking a subset on the different aspects, we use a number of different performance measures. The definitions of these measures can be found in Rennen (2008); for now it suffices to know that lower is better for all measures. The results of the performance measures are given in Table 1.

                        Without four          With four
                        additional points     additional points
RMSE                         0.48                  0.51
Maximum Error                2.03                  2.46
Condition Number               76               1491564
Average Robustness           0.97                  8.36
Maximum Robustness           1.57                100.66

Table 1: Performance of Kriging models fitted to the datasets with and without the four additional points.

Both the Root Mean Squared Error (RMSE) and the Maximum Error show that the accuracy of the first Kriging model, without the four points, is substantially better than that of the model with the four points. The second Kriging model focuses more on the region where these additional points lie and is, as a result, more accurate in this region. However, this additional accuracy comes at the expense of accuracy in other regions, which deteriorates the overall accuracy. When we compare the condition numbers, we see a large difference: the additional four points make the resulting Kriging model much more susceptible to numerical inaccuracies. Finally, the larger maximal and average robustness values indicate that this model is also less robust with respect to errors in the output data. For this example, we did not look at the time savings, as these are negligible due to the small size of the dataset.

Thus this simple example shows that removing some points can improve the quality of the Kriging model. Determining which points to remove is quite easy in this case, but becomes less straightforward when the number of points is considerably larger. Therefore, in the next section we introduce some current and new methods for selecting points from a dataset.

Subset selection methods

Orthogonal Array Selection
In the paper by Srivastava et al. (2004), Orthogonal Array Selection (OAS) is introduced. They apply it to the problem of selecting 500 or fewer points from a dataset containing 2490 points in 25-dimensional space. The selection process goes as follows: first, a randomized orthogonal array is constructed. Second, for each point of the orthogonal array a "nearest neighbor" is determined, i.e. the data point closest to the orthogonal array point. All "nearest neighbors" grouped together form the subset.

Fast Exchange Algorithm
Lam et al. (2002) introduced the Fast Exchange Algorithm (FEX) in order to select a subset from a very large database containing characteristics of molecules. The algorithm aims to optimize the uniform cell coverage criterion in the following way. It starts with an initial subset and tries to improve it by exchanging points that are inside and outside the subset. This is done in two steps. The first step is to use a distribution of improvements to select a point to be added to the current subset. In the second step, the point to be removed is determined, again using a distribution.

Greedy MAXMIN Selection
The greedy MAXMIN method originally comes from the field of dispersion problems. As its name indicates, it seeks to maximize the minimal Euclidean distance between any two points in the subset. It starts by selecting the two points furthest away from each other. After that, it iteratively adds the point furthest away from the already chosen points.
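The verbal description above translates into only a few lines of code. The following is a minimal sketch written for this text (not the author's implementation); the random candidate set on the design space of the example above is purely illustrative.

```python
# Minimal sketch of greedy MAXMIN subset selection, written from the
# verbal description above (not the author's implementation).
import numpy as np


def greedy_maxmin(points, k):
    """Return k row indices of `points` chosen so that the minimal pairwise
    Euclidean distance within the subset is greedily maximized."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Start with the two points that are furthest apart.
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < k:
        # Distance of every candidate to its nearest already-chosen point.
        d_to_subset = dist[:, chosen].min(axis=1)
        d_to_subset[chosen] = -np.inf  # never re-select a chosen point
        chosen.append(int(np.argmax(d_to_subset)))
    return chosen


if __name__ == "__main__":
    # Illustrative candidate set on [-2, 2] x [-1, 1], the design space of
    # the camelback example; these random points are not the maximin Latin
    # hypercube design used in the article.
    rng = np.random.default_rng(0)
    candidates = rng.uniform([-2.0, -1.0], [2.0, 1.0], size=(200, 2))
    print(sorted(greedy_maxmin(candidates, k=20)))
```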

Greedy DELETION Algorithm
Besides MAXMIN Selection, we also use another simple greedy algorithm. This greedy method constructs a subset by iteratively removing one point of the pair of points with the smallest Euclidean distance between them. To decide which of the two points should be removed, we look, for each of the two points, at the distance to its second-closest point. The point for which this distance is smallest is removed from the dataset.

Sequential Selection
In the four methods discussed above, the output values of the points are not used in the selection process. However, as the selected points are used to determine a model of the output, it seems a good idea to explicitly use the output information in selecting the training points. We use the output values in the following way. First, we determine a small initial subset using one of the other methods. Then we fit a Kriging model to this subset and calculate the prediction error at the non-selected points. The idea is to add non-selected points with a large error to the subset. However, we should take into account that if the Kriging model is inaccurate in a certain region, all points in that region will show a large error. Consequently, simply selecting, for instance, the n1 points with the largest error might result in adding points that are clustered together in one or a couple of regions. To reduce this problem, we use two methods. The first method, which we call SS1, consists of adding one point at a time. This method completely solves the above-mentioned problem, but unfortunately it is quite time consuming to fit a new Kriging model after every added point. The second method determines the n2 > n1 worst points and then uses the greedy MAXMIN heuristic to select a uniform subset of n1 points. These points are then added to the training set. In our research, we have used n1 = 10, n2 = 40, and we refer to this method as SS1040.
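The sequential idea can be sketched as well. The code below is only a schematic reading of SS1040 made for this text: a cheap inverse-distance-weighting interpolator stands in for the Kriging model, the greedy_maxmin routine from the previous sketch is assumed to be available, and n1 = 10, n2 = 40 follow the values quoted above.

```python
# Schematic version of the sequential selection idea (in the spirit of
# SS1040). An inverse-distance-weighting interpolator stands in for the
# Kriging model, and greedy_maxmin() from the previous sketch is assumed
# to be available in the same session.
import numpy as np


def idw_predict(x_train, y_train, x_new, eps=1e-12):
    """Inverse-distance-weighting prediction; a stand-in for Kriging."""
    d = np.linalg.norm(x_new[:, None, :] - x_train[None, :, :], axis=-1)
    w = 1.0 / (d + eps)
    return (w * y_train[None, :]).sum(axis=1) / w.sum(axis=1)


def sequential_select(x, y, k, n_init=20, n1=10, n2=40):
    """Grow a subset of size k: repeatedly fit a surrogate on the current
    subset, find the n2 non-selected points with the largest prediction
    error and add a uniform (MAXMIN) batch of n1 of them."""
    selected = list(greedy_maxmin(x, n_init))
    while len(selected) < k:
        rest = [i for i in range(len(x)) if i not in selected]
        pred = idw_predict(x[selected], y[selected], x[rest])
        errors = np.abs(pred - y[rest])
        worst = [rest[i] for i in np.argsort(errors)[-n2:]]
        batch = greedy_maxmin(x[worst], n1)
        selected.extend(worst[i] for i in batch)
    return selected[:k]
```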

                       Kriging    RAND     OAS     FEX  MAXMIN  DELETION     SS1  SS1040
RMSE                      0.06    0.17    0.17    0.17    0.16      0.16    0.10    0.10
Maximum Error             0.76    1.62    1.60    1.56    1.54      1.53    0.96    0.94
Time Subset Selection     0.00    0.00    0.00    0.21    0.02      0.07    9.56    0.92
Time Model Fitting       15.09    0.10    0.08    0.10    0.05      0.05    0.08    0.08
Condition Number        237859    1826     473     919     150       187     288     266
Average Robustness        6.33    1.11    0.94    0.98    0.82      0.84    0.67    0.71
Maximum Robustness       45.73    7.27    2.18    5.25    1.35      1.43    1.75    1.72

Table 2: Results obtained by taking subsets of 250 points from artificial datasets of 2000 points.

"Using uniform subsets, we indeed can find accurate Kriging models faster"

Econometrics

Page 14: Aenorm 62

12 AENORM 62 January 2009

Numerical results

We tested all five methods on artificial datasets of three different sizes and two levels of uniformity. Furthermore, we performed tests on the HSCT dataset, which was also used by Srivastava et al. (2004). Table 2 shows the results for one of the artificial datasets. The first column contains the results of fitting a Kriging model to the original dataset and the second those of using a randomly selected subset. For this dataset, the accuracy is reduced by taking a subset, but it is reduced the least for SS1 and SS1040. All the other performance measures improve by taking a subset and generally improve the most by using the SS1040, MAXMIN or DELETION algorithm.

In general, the tests show that by using uniform subsets we can indeed find accurate Kriging models faster. Furthermore, these Kriging models are more robust and less susceptible to numerical inaccuracies. When comparing the different methods for finding subsets, there is no overall winner. SS1040 generally performs well on accuracy, robustness and numerical accuracy. Compared to the other methods, SS1040 is relatively time consuming, but it remains considerably faster than fitting a Kriging model to the complete dataset. The OAS method is considerably faster, but has lower accuracy, robustness and numerical accuracy. Hence, deciding which method is best for a practical application depends on how the different aspects are valued.

More information

This article is an abridged version of Rennen (2008), which can be downloaded online for free.

References

Booker, A.J., Dennis, J.E., Frank, P.D., Serafini, D.B., Torczon, V. and Trosset, M.W. (1999). A rigorous framework for optimization of expensive functions by surrogates, Structural and Multidisciplinary Optimization, 17(1), 1–13.

van Dam, E.R., Husslage, B.G.M., den Hertog, D. and Melissen, J.B.M. (2007). Maximin Latin hypercube designs in two dimensions, Operations Research, 55(1), 158–169.

Davis, G.J. and Morris, M.D. (1997). Six factors which affect the condition number of matrices associated with Kriging, Mathematical Geology, 29, 669–683.

Dixon, L.C.W. and Szegö, G.P. (1978). The global optimization problem: An introduction. In L.C.W. Dixon and G.P. Szegö (Eds.), Toward Global Optimization, 2, 1–15, North-Holland.

Jin, R., Chen, W. and Simpson, T.W. (2001). Comparative studies of metamodelling techniques under multiple modelling criteria, Structural and Multidisciplinary Optimization, 23, 1–13.

Lam, R.L.H., Welch, W.J. and Young, S.S. (2002). Uniform coverage designs for molecule selection, Technometrics, 44, 99–109.

Rennen, G. (2008). Subset selection from large datasets for Kriging modeling, Structural and Multidisciplinary Optimization, published online.

Sacks, J., Welch, W.J., Mitchell, T.J. and Wynn, H.P. (1989). Design and analysis of computer experiments, Statistical Science, 4, 409–435.

Siem, A.Y.D. and den Hertog, D. (2007). Kriging models that are robust with respect to simulation errors, CentER Discussion Paper 2007-68, Tilburg University.

Srivastava, A., Hacker, K., Lewis, K. and Simpson, T.W. (2004). A method for using legacy data for metamodel-based design of large-scale systems, Structural and Multidisciplinary Optimization, 28, 145–155.


One of the major challenges the European insurance industry faces is the introduction of Solvency II, which is expected in 2012. Solvency II is a new risk-oriented framework for supervision. Within this framework, rules are defined to determine the minimum amount of capital an insurance company has to hold in addition to its technical provisions. This summer a quantitative impact study (QIS4) was performed by many European insurance companies. QIS4 was set up to test the new solvency rules. In this article special attention is given to the treatment, within QIS4, of the risk connected to lapse, and in particular to "pupping". In the case of pupping, the policyholder continues the policy without further premium payments, thus creating a paid-up policy ("pup").

QIS4: The Treatment of Lapse Risk in a Life Portfolio

Rob Bruning [1]

studied actuarial science at the VU Amsterdam until 1984. He is now an actuarial advisor at ASR Nederland and a teacher in actuarial science at the University of Amsterdam (UvA). At ASR Nederland he is involved in the Solvency II project. During the early summer of 2008 he coordinated the Solvency II field study, QIS4, for all insurance companies within ASR Nederland.

Solvency II

The new solvency framework will be mandatory for all insurance companies within the European Union. Solvency II consists of three "pillars", just like the solvency regime for banks. The first pillar deals with the (quantitative) solvency requirements for insurance companies: what is the minimum amount of capital an insurance company has to keep on its balance sheet above the fair value provisions? Pillar 2 deals with (internal) supervision and pillar 3 with transparency (disclosure).

In the current Solvency I regime the minimum required capital is mostly determined by simple rules. For example, the required capital can be 4% of the technical provision and 0.3% of the capital under risk. The provisions within Solvency I are based on the assumptions underlying the historical price of the insurance, which for example include a fixed interest rate. Solvency II implies a radical break with these traditions:

1. Fair value approach on the whole balance sheet
A fair value balance sheet underlies the new solvency framework. Technical provisions are also based on fair value. The available capital is derived from the balance sheet as the fair value of assets minus the fair value of liabilities.

2. Risk orientation
Every kind of risk an insurance company faces influences the level of required capital. Many of the risks that are recognised within Solvency II are measured by using shocks. For example, the capital charge for mortality risk is derived from a mortality shock scenario in which all assumed future mortality rates on a best estimate basis are increased by 10%. The shocks are calibrated such that, theoretically, the probability of surviving one year is 99.5%.
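Schematically (our own notation, not the official QIS4 wording), the capital charge for such a shocked risk is the loss of net asset value that the shock causes on the fair value balance sheet:

\text{capital charge} = \Delta\text{NAV} = (A - L)_{\text{best estimate}} - (A - L)_{\text{after shock}}.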

An important aspect of Solvency II is the possibility that, under certain conditions, the insurance company is allowed to use an internal model. An internal model better fits the risk profile of the business and can therefore lead to a lower capital requirement.

QIS 4

During the early summer of 2008 the fourth quantitative impact study, QIS 4, was performed by many insurers. Participation was voluntary and the coordination was done by CEIOPS [2]. A main reason to conduct this field test was to test the new solvency rules. Furthermore, the test would help insurers to identify shortcomings of their models with respect to Solvency II.

The results of QIS 4 for the Dutch companies are presented in a "Landenrapport" (country report). One of the findings of this report is that the solvency margin (available capital above required capital) will increase under QIS4 for most life companies. In the following sections we will focus on the capital rules for a life company. One of the remarks in the "Landenrapport" is that several companies found the mass lapse shock too conservative, an opinion that is not supported by DNB. Therefore we will especially evaluate the role of lapse in the Solvency II framework.

[1] The author wishes to thank Drs. G.J.M. (Ger) Kock AAG, responsible for Insurance Risk & Value Management at ASR Nederland, for his comments on earlier versions of this article. The views expressed in this article are the personal views of the author.
[2] Committee of European Insurance and Occupational Pensions Supervisors; www.ceiops.eu

Applying the new solvency rules: an example

The insurance portfolio
We consider a specific portfolio of life insurances, which all started 10 years ago. The insurances all provide a benefit in the case of death. All the insured were, at the start of the policies, 30-year-old women. A premium has to be paid every year until the insured dies. This premium is based on the Dutch mortality table AG 2000-2005 for women and a fixed interest rate of 3%. The net premium is increased by 10% to cover expenses. The total insured capital is 100,000.

The asset portfolio
The assets consist of fixed-interest government bonds, all with a coupon rate of 4% and a remaining time to maturity of 30 years. The total face value is 12,500.

The balance sheet
The balance sheet, shown in the next table, is composed according to Solvency I principles and according to QIS 4. We assume that within QIS 4 both the fair value of assets and the fair value of liabilities are calculated with the year-end 2007 yield curve as provided by CEIOPS [3].

Balance Sheet              Solvency I     QIS 4
Total Assets                   10,776    10,776
Liabilities
  Equity                        1,158     8,243
  Insurance Liabilities         9,618       329
  Risk Margin                       0     2,205
Total Liabilities              10,776    10,776

The assets are valued at fair value both under Solvency I and under Solvency II. However, big differences occur on the liability side. Under Solvency I the insurance liability is still based on the mortality and interest assumptions of the premium (AG 2000-2005 women, interest rate 3%) [4]. In QIS4 the liability is based on fair value and split into a "best estimate" (329) and a risk margin (2,205). The best estimate is the present value of the future insurance cash flows (premiums, benefits and expenses) on a "best guess" basis, so without implicit prudence. In this example the assumed future mortality rates are 90% of the rates used in the premium and the lapse rates are 2% per year.

Here, lapse means "pupping": continuation of the policy without paying further premiums (creating a paid-up policy or "pup"). Of course, in the case of pupping the insured benefits are reduced. There is no surrender, which is usually not possible for this kind of insurance.

The best estimate in this example is far below the Solvency I provision, due to the fact that the level of the yield curve at year-end 2007 lies far above the interest rate of 3%. The risk margin is an additional provision and can be seen as an explicit prudence margin. According to the QIS4 specifications, the risk margin can be calculated as 6% of the net present value of the required capital with respect to insurance risks to be held in each future year. The 6% is prescribed by CEIOPS [5]. The risk margin has a big impact on the liability and consequently on the available capital.
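Written out (a schematic rendering of the cost-of-capital method just described, with SCR_t the required capital for insurance risks to be held in future year t and r the relevant discount rate; the exact discounting convention is given in the QIS4 technical specifications):

\text{risk margin} \approx 6\% \cdot \sum_{t \ge 0} \frac{\text{SCR}_t}{(1 + r_{t+1})^{t+1}}.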

The solvency position

The solvency positions under both Solvency I and QIS 4 are shown in the next table:

Solvency position          Solvency I     QIS 4
Required                          656     3,901
Available                       1,158     8,243
Solvency Margin                   502     4,342
Margin %                          77%      111%

Under QIS4 both the required capital and the available capital increase. Together this leads to a much better solvency margin under QIS4. The available capital is found on the balance-sheet presented before. How the required capital can be derived will be explained below.

[3] See www.ceiops.eu
[4] In this example we assume that there is no profit-sharing. Otherwise the company might have created an extra shadow-accounting provision according to IFRS 4.
[5] This is the so-called "Cost-of-Capital method". In this example some proxies have been used when determining the risk margin.

The required capital
In Solvency I the required capital is determined as 4% of the technical provision and 0.3% of the capital at risk.
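As a quick check (an addition for this text, assuming the capital at risk equals the insured capital minus the technical provision), this rule indeed reproduces the Solvency I requirement shown in the solvency position table:

0.04 \cdot 9618 + 0.003 \cdot (100000 - 9618) \approx 385 + 271 = 656.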


The required capital under Solvency II is called the Solvency Capital Requirement (SCR) and comprises the elements shown in the next table:

SCR
Market (interest)       3,048
Life                    1,745
Basic SCR               3,872
Operational risk           29
SCR                     3,901
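Note that the Basic SCR is not the plain sum of the two risk modules; in QIS4 the modules are aggregated with a prescribed correlation matrix. With a correlation of 0.25 between market and life risk (the value used in QIS4), the figure in the table is reproduced:

\text{Basic SCR} = \sqrt{3048^2 + 1745^2 + 2 \cdot 0.25 \cdot 3048 \cdot 1745} \approx 3872.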

In this example market risk is the dominating component of the SCR. Market risks are risks related to assets and interest. In this simple example only interest rate risk plays a role, i.e. the potential effect on the fair value (assets minus liabilities) when the yield curve changes [6]. In general, market risk is the dominant risk for life insurers within QIS4, with the exception of unit-linked portfolios. In this example, the category of life risk consists of four sub-risks that contribute to the SCR: mortality, lapse, expense and catastrophe risk.

[6] Details can be found in the QIS4 technical specifications; see www.ceiops.eu

Split up SCR Life
Mortality                 561
Longevity                   0
Disability                  0
Lapse down               -568
Lapse up                  478
Lapse mass              1,491
Lapse                   1,491
Expense                   242
Revision                    0
CAT risk                  142
SCR Life                1,745

The lapse risk is, perhaps surprisingly, the main factor in the life risk. Before we explore lapse risk further, we should mention that mortality risk is the risk of a sudden and permanent increase of mortality rates (by 10%), that catastrophe risk is the risk of a sudden increase of mortality rates for one year (1.5 per 1000 for every age) and that expense risk is the risk of higher expenses than expected, for example due to inflation. But let us now focus on the lapse risk.

The lapse risk
The lapse risk is the maximum of three shock scenarios. The result of the "lapse down" shock (-568) is the change in fair value (assets minus liabilities) when all future lapse rates are reduced by 50%. The shock is negative, which means that lower lapse rates are profitable for the insurer. In the "lapse up" shock scenario the lapse rates are increased by 50%. The change in fair value is now 478, which implies that higher lapse rates are unfavourable to the insurer. The third lapse shock is a kind of lapse catastrophe scenario: the change in fair value in case of an immediate lapse of 30% of the portfolio. In this example the mass lapse scenario is dominant.

In the traditional world of Solvency I lapse was not recognised as a risk. In the case of pupping the provision does not change in Solvency I (except for possible lapse penalties). In the case of surrender a surrender payment is compensated by a release of provision. This example shows that pupping is a severe risk for the insurer in fair value terms (remember that in this example surrender does not exist). The fair value provision after pupping appears to be much higher than before pupping, thus creating a loss for the insurer. Consequently, the mass lapse scenario of QIS4 leads to a (big) SCR. However, it should be noted that the size of the lapse risk depends on the yield curve. In the case of a low yield curve, pupping can be more favourable for the insurer than continuation of premium payments.
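In formula form, the lapse charge in the "Split up SCR Life" table above is simply the worst of the three shock results:

\text{SCR}_{\text{lapse}} = \max(-568,\ 478,\ 1491) = 1491.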

Some comments on the mass lapse risk

In a fair value environment the lapse risk becomes explicitly visible. For many life portfolios the mass lapse shock in QIS 4 is the dominant factor within lapse risk. Some insurers have argued that a mass lapse of 30% is too high, but DNB does not agree with this opinion. Indeed, a mass surrender is far less likely to occur than the disastrous situation that banks can encounter, i.e. large numbers of clients transferring their money. Policy restrictions and fiscal rules can prevent policyholders from surrendering. But lapse is not only surrender: the example shows that the option of "pupping" (continuation of the policy without further premium payments) is also a risk for the insurer. This risk arises especially in a high-interest world, where the fair value provision is low and is not enough to withstand lapse. Note that in the Netherlands the technical provision on the balance sheet currently has to meet the requirement that it is at least equal to the surrender value or the provision after pupping (the surrender floor) [7]. The mass lapse shock of 30% within QIS4 must be seen in the context of the lack of such a surrender floor on the provision in the fair value balance sheet.

In order to get an idea of the impact of lapse, the following table shows the solvency positions under QIS4 for the example described in the previous paragraph, under three alternative mass-shock scenarios: 0%, 30% and 100%.

Solvency position QIS 4        0%       30%      100%
Required                    3,414     3,901     6,621
Available                   8,815     8,243     5,593
Solvency Margin             5,400     4,342    -1,028
Margin %                     158%      111%      -16%

A 30% mass lapse is relatively close to the 0% mass lapse alternative, because in the 0% alternative there is still a lapse risk of 478 due to the lapse-up shock. Note that the choice of the mass lapse also affects the available capital, because the risk margin depends on the SCR for life risk.

Conclusions

QIS4 shows an increase of the solvency margin for most life companies. Lapse plays a dominant role within the life risk. When interest rates are high, the fair value provisions might be too low to withstand lapse. This can occur in the case of surrender but also in the case of pupping (continuation of the policy without any further premium payments). When measuring lapse risk within QIS 4 the insurer has to investigate which kind of lapse is the most unfavourable.

The conclusion that a 30% mass lapse scenario within QIS4 is too conservative, as some insurers have stated, can be questioned. In a scenario of economic recession pupping poses a serious danger, perhaps even larger than surrender, because pupping is often easier for the policyholder to carry out. Lapse, both surrender and pupping, can be a serious risk in a fair value world, because in that world there is no longer a surrender floor on the technical provision.

References

Bouwknegt, P. and Pelsser, A. (2003). The Value of Non-Enforceable Future Premiums in Life Insurance, paper for the AFIR Colloquium 2003, Maastricht.

DNB (2008). QIS4 on Solvency II: Country Report for The Netherlands, (“Landenrapport”). Available at: www.dnb.nl/openboek/extern.

Liedorp, F. and van Welie, D. (2008). QIS4 – Laatste halte voor Solvency II?, De Actuaris, 16(2), 42-45.

[7] See: Staatsblad van het Koninkrijk der Nederlanden, jaargang 2006: Besluit prudentiële regels Wft, artikel 116.


Traffic jams are annoying and cause a lot of economic damage. It is therefore surprising that, despite all technological advances, we still have to suffer from them. So naturally the questions arise: "Why can't traffic jams be avoided?" and "Can technology help us to make traffic more fluent?".

Jams are Everywhere - How Physics may Help to Solve Our Traffic Problems

Andreas Schadschneider

is professor of theoretical physics at Cologne University, Germany. He is working in the area of statistical physics and has specialized in nonequilibrium systems. His main interests are interdisciplinary applications of physical methods, e.g. to highway traffic, pedestrian and evacuation dynamics, and biological and economic systems.

Surprisingly, physicists have been studying such questions for a long time now, going back to the 1950s. Besides the practical relevance of this topic, traffic has much more to offer to a physicist. In the early days traffic was mostly seen as an exotic fluid with some strange properties. Water, for instance, flows faster at bottlenecks, an effect you can easily test with a garden hose. However, this is very different for traffic: a jam will form at such a bottleneck. So, early theories of highway traffic were inspired by fluid-dynamical theories. These theories have the disadvantage of not distinguishing between individual particles or cars, but use densities as basic quantities.

In the early 90's, a new class of models was proposed for the description and simulation of traffic flow, the so-called cellular automata (CA). CA are a perfect tool for studying complex systems. A popular variant is the "Game of Life", which allows creating a simple "world" with moving, reproducing and dying entities. Cellular automata are ideal tools for modeling non-physical or interdisciplinary problems. They are based on simple and intuitive rules which determine the dynamics of the "particles", instead of forces acting between them. The other main feature of CA is their discreteness: space is not treated as a continuum, but is divided into cells of a finite size. This approximation makes CA perfectly suited for computer simulations, which can be performed much more efficiently.

In 1992, Kai Nagel and Michael Schreckenberg, at the time both affiliated with Cologne University, proposed the basic CA for traffic flow. The highway is divided into cells with a length of about 7.5 meters, which corresponds to the typical space occupied by a car in a dense jam. Each cell can either be empty or occupied by exactly one car. The state of each car n (n = 1, 2, ..., N) is characterized by its velocity v_n, which can take one of the v_max + 1 allowed integer values v = 0, 1, ..., v_max (see Fig. 1). The dynamics that determine the motion of the cars consist of four simple and intuitive steps. First, all cars accelerate by one unit if they have not already reached the maximum velocity v_max. In the second step, drivers check whether they can drive safely at their current speed. If there is any danger of causing an accident, the velocity is reduced to a safe value. The third step is stochastic and takes into account that drivers do not drive perfectly, e.g. due to fluctuations in their behavior. In this step, the velocity of a car is reduced by one unit (v_n → v_n − 1) with some probability p. This is sometimes called "dawdling". In the final step each car moves forward by the number of cells given by its current velocity. One such update cycle, which is performed synchronously for all cars, roughly corresponds to one second of real time. The simulation of one hour of real traffic thus requires 3600 update cycles.
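The four update steps translate almost literally into code. Below is a minimal single-lane sketch with a closed ring road (periodic boundary conditions); v_max = 5 and p = 0.3 are typical textbook values and, like the boundary conditions, are assumptions of this sketch rather than details taken from the article. The loop also records the mean flow, the quantity that appears in the fundamental diagram discussed below.

```python
# Minimal sketch of the Nagel-Schreckenberg cellular automaton: a single
# lane on a closed ring road. v_max = 5 and p = 0.3 are typical textbook
# values; they, and the periodic boundary conditions, are assumptions of
# this sketch, not details taken from the article.
import numpy as np


def nasch_step(pos, vel, length, v_max=5, p=0.3, rng=None):
    """One synchronous update cycle (roughly one second of real time)."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(pos)                      # cars sorted along the road
    pos, vel = pos[order], vel[order]
    gap = (np.roll(pos, -1) - pos - 1) % length  # free cells ahead of each car
    vel = np.minimum(vel + 1, v_max)             # 1. acceleration
    vel = np.minimum(vel, gap)                   # 2. braking to a safe speed
    vel = np.maximum(vel - (rng.random(len(vel)) < p), 0)  # 3. dawdling
    pos = (pos + vel) % length                   # 4. movement
    return pos, vel


if __name__ == "__main__":
    length, n_cars, steps = 1000, 150, 3600      # 3600 cycles ~ one hour
    rng = np.random.default_rng(1)
    pos = rng.choice(length, size=n_cars, replace=False)
    vel = np.zeros(n_cars, dtype=int)
    mean_speed = 0.0
    for _ in range(steps):
        pos, vel = nasch_step(pos, vel, length, rng=rng)
        mean_speed += vel.mean() / steps
    density = n_cars / length
    # One point of the fundamental diagram: flow = density * mean speed.
    print("density:", density, "flow:", density * mean_speed)
```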

The Nagel-Schreckenberg model is able to explain many of the empirical observations made for highway traffic.

Figure 1: A typical configuration in the NaSch model. The number in the upper right corner is the speed vn of the vehicle.


Probably the most fascinating of these are the so-called "phantom traffic jams" or "jams out of nowhere" or, more scientifically, "spontaneous jams". These are jams which occur without an obvious reason, like an accident or road construction. Most of you have probably encountered this phenomenon: you are standing in a traffic jam for some time and, after it has evaporated, you wonder what the reason for the jam was.

In fact the occurrence of such jams has been studied in experiments. Fig. 2 shows a snapshot of such an experiment, which was performed for the German TV station ARD. About 25 drivers were asked to move on a circular course as fast as possible, but without causing an accident. After a few minutes of free flow, suddenly a jam formed. The reason is that drivers overreact. If they approach the preceding car with too much speed, they have to brake in order to avoid an accident. Usually they brake harder than necessary, i.e. they overreact, and thus force the drivers behind them to brake as well. If the traffic density is large enough this starts a chain reaction and, about 10 cars later, the first car has to stop completely and thus a jam is created. Exactly these overreactions, which occur when drivers start to lose their concentration, are captured in step 3 of the dynamics of the Nagel-Schreckenberg model.

Apart from this phenomenon the model also reproduces more quantitative aspects of traffic flow, e.g. the so-called "fundamental diagram". This aspect is, as already indicated by its name, very important in traffic engineering. It relates the average velocity of the vehicles to the traffic density. When the density is small and few cars are on the highway, all can drive at their desired speed. Here the velocity is independent of the density. Increasing the number of cars leads to interactions which reduce the average velocity, even up to creating jams. Densities of around 15% of the maximum already lead to a strong reduction of the average velocity on a highway.

Fig. 3 shows a schematic representation of the fundamental diagram according to the so-called three-phase theory. Instead of the average velocity it shows the density-dependence of the flow (or current). This quantity is easy to measure: one just has to count the number of cars passing a certain point within one hour. B. Kerner surprisingly discovered that the flow in a certain regime is not uniquely determined by the density, but can vary over a whole interval. For historical reasons this regime is called the "synchronized phase", although "viscous traffic" may be a more appropriate description. There is still some debate about its precise origin, but it appears that the formation of platoons (clusters of cars moving with a small inter-vehicle distance) plays an important role.
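For the simulated model the fundamental diagram is straightforward to generate: on a ring, the flow equals the density times the average velocity, which in a stationary state is equivalent to counting cars at a fixed detector. The sketch below assumes the nasch_step function from the previous sketch; the density grid, system size and run lengths are arbitrary illustrative choices.

```python
import numpy as np

def fundamental_diagram(nasch_step, L_cells=1000, vmax=5, p_dawdle=0.3,
                        steps=2000, warmup=500):
    """Sweep the density and record the flow (density times average velocity)."""
    densities = np.linspace(0.02, 0.6, 30)
    flows = []
    for rho in densities:
        n_cars = max(1, int(rho * L_cells))
        pos = np.sort(np.random.choice(L_cells, n_cars, replace=False))
        vel = np.zeros(n_cars, dtype=int)
        mean_v = 0.0
        for t in range(steps):
            pos, vel = nasch_step(pos, vel, L_cells, vmax, p_dawdle)
            if t >= warmup:                      # discard the transient
                mean_v += vel.mean()
        mean_v /= (steps - warmup)
        flows.append(n_cars / L_cells * mean_v)  # flow in cars per time step
    return densities, np.array(flows)
```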

Although the model of Nagel and Schreckenberg captures the basic aspects of highway traffic, some aspects need to be improved; e.g. it cannot explain the synchronized phase. In real traffic it has been found that "anticipation" plays an important role. This means that drivers try to guess what the preceding driver is going to do. If (s)he has a clear road ahead, then often headways are accepted which are much shorter than safe headways, i.e.

"The main problem is still the human factor"

Figure 2: Experiment showing the occurrence of phantom jams.

Figure 3: Schematic representation of the fundamental diagram. F denotes the free flow branch and J the jam line. The bubble represents the synchronized phase where the flow (or velocity) is not uniquely determined by the density.


headways which would allow the driver to avoid an accident in case the preceding car starts to brake suddenly.

The improved models based on the Nagel-Schreckenberg model nowadays perform so well that they can be used for traffic forecasting. Their simplicity allows computers to perform simulations faster than real time: one hour of traffic on the German highway system can be simulated within seconds. This is what makes traffic forecasting possible. Indeed, the predictions are so reliable that an official forecast for the German state of Northrhine-Westphalia is already available and is used by more than 200,000 people. It can be found at www.autobahn.nrw.de and displays not only the current state of the highway system, based on measurements from inductive loops, but also 30-minute and 60-minute forecasts.

So, what is the future of traffic management? As we have seen, reliable traffic forecasting is already possible today. But surprisingly it turns out that predicting traffic is in one respect much more difficult than weather forecasting. The difference is the human factor! The weather does not care if a forecast is shown on TV every evening; this has no influence on the future development. For traffic, however, the situation is very different. Publication of a forecast will have a direct influence on the decisions of the drivers. Maybe they decide to take a different route, leave at a different time or even use public transport instead. So the public announcement of a traffic forecast immediately renders the forecast obsolete. Therefore the next big step in improving the reliability of these predictions comes from a better understanding of how drivers react to forecasts. First tests have already been performed by Schreckenberg in collaboration with Reinhard Selten, who received the Nobel Prize in economics (together with John Nash and John Harsanyi) for his work on game theory. Incorporating such results into the forecast should lead to a further improvement of its reliability.

We see that technology might indeed help to reduce the traffic problems encountered in everyday life. However, the main problem is still the "human factor". Spontaneous jams occur because drivers are human and do not react perfectly in all situations. This could be improved by driver assistance systems, but there is still an acceptance problem because many drivers enjoy their freedom on the roads (especially in Germany, where there is no general speed limit on highways) and do not want to be controlled by "machines". However, this might change in the near future if people realize that such support systems allow for much more relaxed driving and reduce the probability of jams.

References

Chowdhury, D., Santen, L. and Schadschneider, A. (2000). Statistical Physics of Vehicular Traffic and Some Related Systems, Physics Reports, 329, 199.

Chowdhury, D., Santen, L. and Schadschneider, A. (2000). Simulation of vehicular traffic: a statistical physics perspective, Computing in Science and Engineering, 2, 80.

Kerner, B.S. (2004). The Physics of Traffic, Springer.

Knospe, W., Santen, L., Schadschneider, A. and Schreckenberg, M. (2005). Optimization of highway networks and traffic forecasting, Physica, A346, 163.

Schadschneider, A. (2006). Cellular automata models of highway traffic, Physica, A372, 142.

A Java applet for the simulation of the Nagel-Schreckenberg model can be found at http://www.thp.uni-koeln.de/~as.

Figure 4: Snapshot from the webpage www.autobahn.nrw.de which provides not only information about the current state of the highways in Northrhine-Westphalia, but also 30 and 60 minute forecasts.


Are regional inflation differentials in a monetary union a cause for concern? The answer is: it depends. If regional inflation differentials are the counterpart of price level convergence across the monetary union after a region or a subset of regions has been hit by an asymmetric supply shock, they are a welcome phenomenon and temporary by nature. If regional inflation differentials are due to relative catching-up of regions (the so-called Balassa-Samuelson effect), they are part of a natural process and, in spite of their longer persistence, of no immediate concern to the central bank either. Things are different, however, if regional inflation differentials are the manifestation of rigidities in wage and price formation in certain regions, thereby potentially thwarting rather than effectuating price level convergence in the monetary union. As a consequence, monetary policy – typically aimed at controlling the average rate of inflation in the monetary union – may be suboptimal for some or all regions individually. This is why the European Central Bank (ECB) repeatedly stresses the need for structural reforms in labour and product markets in euro area member states. To see whether such appeals are justified in the present context, it is instructive to disentangle the causes of the regional inflation differentials depicted in figure 1, and to compare them between the two monetary unions.

Comparing inflation developments in the US and the euro area as a whole over the past quarter of a century, the most striking fact emanating from the data is that the differences are minor (see figure 1). The overall picture is one of steadily declining inflation rates on both sides of the Atlantic, sometimes interrupted by temporary hiccups due to external turmoil (late seventies) or strong domestic demand (late eighties and late nineties). There is also resemblance of inflation dynamics in the short run, with the euro area clearly lagging behind the US by one to two years. This correspondence in the inflation patterns of the two economic blocks is remarkable for at least two reasons. First, it occurred in spite of huge swings in the exchange rate of the US dollar vis-à-vis European currencies during the period. Second, it occurred in spite of huge, although declining, inflation differentials between euro area countries, whereas regional inflation rates in the US moved more or less in tandem, as figure 1 shows. To ensure that the difference in inflation dispersion between the US and the euro area does not stem from a difference in aggregation, we have disaggregated the euro area into countries and into four large regions (comparable to the US). It is evident from figure 1 that the level of aggregation hardly matters. This paper deals with the second observation.

Our research1 aims to establish whether regional inflation differentials in the euro area and in the US are persistent, or a reflection of an equilibrating mechanism in the monetary union.

1 For details, see Berk, J.M. and Swank, J. (2007), 'Regional real exchange rates and Phillips curves in monetary unions: evidence from the US and EMU', DNB Working Paper no. 147, September 2007. Downloadable from www.dnb.nl.

Jan Marc Berk

is Head of Research and deputy director of the division Economics and Research of De Nederlandsche Bank. He is also a member of the Monetary Policy Committee of the ESCB, and of the board of the International Journal of Central Banking and SUERF (The European Money and Finance Forum).

Job Swank

is director of the division Economics and Research of De Nederlandsche Bank. He is also a member of the International Relations Committee of the ESCB, Deputy Crown member of the Economic Council of the Netherlands (SER) and part-time professor of Economic Policy, Faculty of Economics, Erasmus University Rotterdam.

Regional Real Exchange Rates and Phillips Curves in Monetary Unions: Evidence from the US and EMU


We take the results for the US, a well-established and fully integrated monetary union, as a benchmark against which regional price dynamics in the euro area can be compared. We divide the US into four regions (Midwest, Northeast, South and West), and the euro area into the countries that are its founding members (that is, current member countries excluding Greece and Slovenia). Our sample period runs from 1977 until 2005, and therefore embeds both the ERM episode and the first six years of EMU. In this way, we can test whether the transition from quasi-fixed to irrevocably fixed exchange rates has influenced the price formation process in the euro area.

The starting point of our analysis is the hypothesis that the relationship between regional prices in a monetary union may temporarily change due to transitory shocks, but is characterised by mean reversion and therefore ultimately reverts to a particular equilibrium path.

Figure 2: Real exchange rates of Portugal and France vis-à-vis the euro area: actual and trend, 1974-2004

Figure 2 illustrates this in the case of France and Portugal. The equilibrium path can exhibit a (deterministic) trend-like pattern due to the Balassa-Samuelson effect mentioned earlier. This is modelled and estimated as a closed system, in that each regional real exchange rate has as numeraire a weighted average of all regional prices in the monetary union rather than the price of one particular region, implying that any Balassa-Samuelson effects cancel out in the aggregate. As we define this effect vis-à-vis the rest of the monetary union, some regions will have a negative trend and the sum of these effects (that is, for the entire monetary union) will be zero by definition. This condition implies that regional inflation processes in a monetary union are interdependent, which has implications for the econometric modelling of these processes.

Our data allow us to accept the hypothesis of mean reversion of relative regional price levels – or regional real exchange rates – for both monetary unions.2 If regional real exchange rates are (trend) stationary, regional prices are cointegrated, granted that they have a unit root (all series in logs). We build on this property, using Granger's representation theorem, to derive a set of regional Phillips curves of the hybrid New-Keynesian type.
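To give an idea of what such a mean-reversion test looks like in practice, the sketch below runs an augmented Dickey-Fuller test with a constant and a deterministic trend on a placeholder series standing in for a regional log real exchange rate. The data, lag selection and sample length are illustrative assumptions and not the tests reported in the working paper.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# placeholder: a trend-stationary AR(1) standing in for one region's log real
# exchange rate vis-a-vis the union average (annual data, 1977-2005 in the paper)
rng = np.random.default_rng(0)
q = np.zeros(29)
for t in range(1, 29):
    q[t] = 0.7 * q[t - 1] + rng.normal(scale=0.01)
q += 0.002 * np.arange(29)   # hypothetical Balassa-Samuelson-type trend

# ADF regression with constant and trend ('ct'): the null is a unit root,
# the alternative is (trend-)stationarity, i.e. mean reversion around the trend
stat, pvalue, _, _, crit, _ = adfuller(q, regression="ct", autolag="AIC")
print(f"ADF statistic {stat:.2f}, p-value {pvalue:.3f}")
```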

2 Formally: we reject the null of no mean reversion.

Figure 1: Inflation dispersion in the US and the euro area, 1977-2004

"Regional real exchange rates adjust significantly faster to random shocks in

the euro area than in the US"


Regional inflation rates in this framework act to restore purchasing power parity (PPP), barring Balassa-Samuelson effects, but also depend on explanatory variables typically included in the closed economy version of the hybrid Phillips curve: lagged inflation, expected future inflation and a proxy for real marginal cost. In fact, by exploiting the unit root properties of regional prices and regional real exchange rates, we obtain an open economy version of the hybrid Phillips curve that can be reconciled with a form derived from first principles. Put differently, we show that the open economy Phillips curve is reconcilable with PPP. We estimate such a model for both the US regions and the euro area countries, taking the abovementioned interdependencies of regions within the monetary union into account. We take an agnostic view regarding the expectations formation mechanism, using both backward-looking and forward-looking (i.e. rational expectations) processes. We furthermore investigate whether the price formation process in the euro area has changed due to the irrevocable fixing of exchange rates in 1999.

Our main finding is that the speed at which regional real exchange rates adjust to random shocks is, on average, significantly higher in the euro area than in the US, once Balassa-Samuelson effects are taken into account. The average half-life of such shocks is around two years in the euro area, against three years in the US, considerably shorter than the half-lives reported in most studies of PPP. We also find that since the start of EMU, inflation rates of individual member states have taken over the role that nominal exchange rate adjustments used to play prior to EMU. This is quite surprising, as European labour and product markets are not known for their flexibility compared to the US. Our conjecture is that the near absence of labour mobility across euro area countries places a relatively great weight on real exchange rate adjustments after a country or a subset of countries has been hit by an asymmetric shock. Finally, while there is clear evidence of forward-looking pricing behaviour in both monetary unions, inflation persistence is nevertheless substantial, especially in the US.
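The half-lives quoted above follow directly from the estimated speed of mean reversion: for a simple AR(1) adjustment process with yearly persistence ρ, the half-life of a shock is ln(0.5)/ln(ρ). The persistence values below are hypothetical, chosen only to reproduce half-lives of roughly two and three years.

```python
import numpy as np

def half_life(rho):
    """Half-life (in years) of a shock to q_t = rho * q_{t-1} + e_t with yearly data."""
    return np.log(0.5) / np.log(rho)

print(half_life(0.71))   # about 2 years (euro area order of magnitude)
print(half_life(0.79))   # about 3 years (US order of magnitude)
```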

All in all, although inflation differentials between euro area countries are larger than between regions in the US, this cannot be said of the persistence of those differentials between euro area countries. Once Balassa-Samuelson effects are taken into account, it seems that regional inflation differentials in both the US and the euro area since 1999 are indicative of an economically healthy restoration of competitive forces within the respective monetary unions. The conclusion therefore seems warranted that these inflation differentials are not much of a cause for concern for the ECB!



Hedging the interest rate risk of a retail mortgage portfolio is a difficult task for banks. Besides normal interest rate risk, there is risk caused by embedded options, i.e. choices incorporated in the mortgage contract. In regular Dutch retail mortgages, the customer receives several embedded options. One of these is the option to prepay the mortgage without incurring additional costs in case the customer moves to a new home. In this article we explain why there is risk when the customer receives this option; if the customer does not use the option, he will simply continue the current contract. We will discuss how the resulting interest rate risk can be hedged and how to determine the hedging price. However, let us first discuss the mortgage's financing.

Hedging Prepayment Risk on Retail Mortgages

Dirk Veldhuizen

graduated in the master of Econometrics and OR and the master of Finance (honours track Quantitative Finance) at the Vrije Universiteit in September 2008. He now works at SNS Reaal as Risk Management Trainee. This article is based on his thesis for Quantitative Finance, written at SNS Reaal. Dr. Svetlana Borovkova (VU) and dr. Frans Boshuizen (SNS Reaal) supervised the thesis.

How to finance a mortgage and why is there prepayment risk?

When a customer acquires a mortgage loan, he or she usually borrows money for a long period of time, say 10 years. Moreover, he will have to pay a fixed interest rate for the entire period. In order to fund this mortgage the bank could issue covered bonds or attract retail savings, which is cash on ordinary savings accounts. Additionally, in order to prevent interest rate risk, the bank needs to make sure that the mortgage and the funding have the same maturity, i.e. the same duration. This will ensure that the market value of the assets and liabilities remains the same. In certain circumstances, however, the bank may be unable to find funding with the same maturity as the mortgage. In this case, the bank could hedge the resulting interest rate risk by entering into a regular interest rate swap. This enables the bank to exchange the interest rate on the acquired funding for the interest rate on funding with the same maturity as the mortgage. We show how this works for an index amortizing swap in Figure 1.

Now let us discuss how the embedded option can yield a loss to the bank. First of all, suppose that the bank matched its mortgages and funding perfectly. Now, suppose that the yield curve, otherwise known as the term structure of interest rates, decreases for all interest rates. A consumer who moves to a new home can now prepay his mortgage for free and purchase a new mortgage loan at a lower interest rate. However, the bank's funding cannot be prepaid without compensating the counterparties for the interest rate decrease. Moreover, the bonds' value increases as the interest rates decrease; this is due to discounting. Thus the bank makes a loss when it repays the funding. Another option for the bank is keeping the relatively expensive funding, which eventually results in a loss over time compared to using new funding.

Prepayment risk is defined as the interest rate risk due to early repayments or prepayments of mortgages. We will only focus on prepayments which are driven by the current interest rates on mortgages. We assume that non-interest rate driven prepayments are independent of interest rate movements.

Hedging strategies

A bank can set up a strategy to hedge prepayment risk. It needs to buy financial derivatives such that the cash of the prepaid mortgages can be lent to other parties while still yielding a sufficient interest rate. The bank needs a portfolio of receiver swaptions1 for this hedging strategy. It can also hedge its prepayment risk dynamically by checking every day, week or month what its current portfolio of mortgages and funding is. The mismatch can then be hedged using regular swaps. The costs of this last strategy are unknown beforehand, but it does not require any option premium. The challenge with the first hedging strategy is to determine the required amount of receiver swaptions.


For example, when interest rates have been relatively high, as in the fall of 2008, interest rate driven prepayments will be lower than expected. The receiver swaptions will expire worthless, because the fixed rate received on new swaps is higher, while the mortgage portfolio stays large. When interest rates fall, for example in a few years, more customers than expected will be able to prepay their mortgage. Now the swaption portfolio is not large enough to cover all prepayments, which will result in a loss for the bank.

Advanced hedging strategy using index amortizing swaps

The use of index amortizing payer swaps, which are over-the-counter contracts, can be the answer to this challenge. The key feature of an index amortizing swap is that the notional of the swap decreases, or amortizes, based on a function instead of a predetermined scheme. This function is usually based on a reference rate. We use the interest rate for a regular bond, which matures at the same time as our swap contract, as reference rate. The bank chooses a function which yields a smaller amortization of the notional when the reference rate is high and a larger amortization when interest rates are low. This corresponds to observations of the prepayments on Dutch retail mortgages.

For an index amortizing payer swap to be effective, the bank needs to pay a floating interest rate on the funding, preferably one which resets every month. Using an index amortizing payer swap2 on the entire mortgage portfolio, the bank changes the floating interest rate payments it has to pay into the required fixed rate payments. Disregarding credit risk and assuming that the amortization function perfectly matches the customer behaviour, Figure 1 shows how the index amortizing payer swap works. The risks associated with an index amortizing swap are described in Galaif (1993). There are the usual risks like counterparty risk and liquidity risk, an index amortizing swap being an over-the-counter product. However, there is also model risk, which we will address later on. The product also has an interesting interest rate risk, which is asymmetric for the fixed rate payer and the fixed rate receiver: the fixed rate payer has less interest rate risk than the fixed rate receiver. When the yield curve decreases, and hence the reference rate decreases, the fixed rate payer incurs a loss. This loss is relatively small because the amortization is high and the size of the contract decreases rapidly. Conversely, when the yield curve increases, the amortization is low and the fixed rate payer makes a relatively large profit. As mentioned above, there is also model risk associated with index amortizing swaps.
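To make the amortization function concrete, here is one possible shape: the fraction of the notional that amortizes in a period rises as the reference rate drops below the rate at which the mortgages were originally written. The functional form and all numbers (baseline prepayment, slope, a 4.5% contract rate) are illustrative assumptions and not the calibration used in the thesis.

```python
def amortization_fraction(reference_rate, base=0.02, slope=8.0, contract_rate=0.045):
    """Fraction of the remaining notional that amortizes this period (illustrative)."""
    refinancing_incentive = max(contract_rate - reference_rate, 0.0)
    return min(base + slope * refinancing_incentive, 1.0)   # capped at full prepayment

print(amortization_fraction(0.03))   # low rates  -> 0.14, heavy amortization
print(amortization_fraction(0.06))   # high rates -> 0.02, only baseline prepayments
```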

1 A receiver swaption is an option to enter into an interest rate swap where you receive a fixed interest rate and pay a floating interest rate. The floating interest rate can be received on the lent cash. When interest rates fall and you exercise the option, you can transform a relatively low floating rate into a relatively high fixed rate. The net payoff of this swaption (when exercised) is the present value of the difference between the fixed interest rate received on the underlying swap of the swaption and the fixed rate which would be received when you enter a new at-the-money receiver swap.
2 When you enter a payer swap, you pay the fixed interest rate and receive the floating rate. When you enter a receiver swap, you pay the floating rate and receive the fixed rate.

Figure 1. Schematics of a hedge using an index amortizing payer swap (IAPS). Solid lines represent the interest rate payments (i.r.p.); dashed lines represent the prepayments. The bank receives fixed interest rate payments on the mortgages, but borrows funding at a floating interest rate. This way the bank can re-use the cash of the prepaid mortgages for new loans at current market prices and prevent a loss on prepayments. The index amortizing swap, with prepayments equal to the outcomes of the amortization function, exchanges exactly the right amount of fixed interest rate payments into floating interest rate payments to eliminate the interest rate risk of borrowing at a floating rate and lending at a fixed rate.



Because the notional depends on a reference rate, the price of this product is path dependent: we need the entire history of the reference rate from the start of the swap in order to determine the current notional. One method to price this swap is simulation. For this we need an interest rate model, which may not capture reality correctly. Fernard (1993) discusses a relatively simple example of how to price an index amortizing swap. We will develop a general scheme for a more complex function.

Another challenge when using an index amortizing swap as a hedge is to determine the correct amortization function, i.e. the function which matches the prepayments of a portfolio of mortgages. This can prove to be rather difficult in practice. Having an inaccurate amortization function will hurt the effectiveness of the hedge. This is firm-specific basis risk when using this product for a hedging strategy. We have seen before that the use of receiver swaptions yields the same type of risk.

Pricing an index amortizing swap

In order to price an index amortizing swap, we determine the par fixed interest rate on the swap (the fixed interest rate such that the contract has no value to either party). We thus need to equate the value of a bond which pays a fixed interest rate and one which pays a floating interest rate. Interest rate swaps exclude the exchange of the notional: lending the notional to the counterparty and borrowing the same amount from the same counterparty only increases counterparty risk while it does not change the net cash flows of the swap. However, for the pricing it is convenient to pretend we exchange the notional.

Additionally, when exchanging the notional, the value of a floating rate bond must equal the notional at the start of the contract. Setting the notional equal to 1, we have to find the fixed interest rate for which the expected present value of the interest payments, the prepayments and the remaining notional amount at the end of the contract equals 1. It is important to note that we assume a risk-neutral world. Hence this translates into the following formula:

$$E\left[\sum_{t=1}^{mn}\left(\frac{r_{fixed}}{m}\,L_{t-1}\,df_t + (L_{t-1}-L_t)\,df_t\right) + L_{mn}\,df_{mn}\right] = 1,$$

where rfixed is the fixed interest rate which needs to be paid, m is the number of payments per year, n is the maturity of the contract in years, Lt is the notional amount at time t and dft is the discount factor for the tth interest payment. The notional amount Lt equals Lt = Lt-1 − PRt-1, where PRt-1 is the prepayment in period t (we assume PR0 = 0), which is determined by our prepayment function. In order to determine the fixed rate of the index amortizing swap, we use simulation Scheme 1:

Step 0: Choose and calibrate a model for the yield curve.
Step 1: Simulate the relevant part of the yield curve for all future dates in a risk-neutral world.
Step 2: Construct the reference rate.
Step 3: Calculate the prepayments per period.
Step 4: Determine the present value of the interest payments, prepayments and repayment of the remaining notional.
Step 5: Repeat steps 1-4 N times, where N is the number of simulations we want to run.
Step 6: Calculate the par swap rate by solving the following equation:

$$\sum_{i=1}^{N}\left[\sum_{t=1}^{nm}\left(\frac{r_{fixed}}{m}\,L^{i}_{t-1}\,df^{i}_{t} + (L^{i}_{t-1}-L^{i}_{t})\,df^{i}_{t}\right) + L^{i}_{nm}\,df^{i}_{nm}\right] = N,$$

with i the index for the simulations and t the index for the time in a simulation. This yields:

$$r_{fixed} = m\,\frac{N - \sum_{i=1}^{N}\sum_{t=1}^{nm}(L^{i}_{t-1}-L^{i}_{t})\,df^{i}_{t} - \sum_{i=1}^{N}L^{i}_{nm}\,df^{i}_{nm}}{\sum_{i=1}^{N}\sum_{t=1}^{nm}L^{i}_{t-1}\,df^{i}_{t}}.$$

Scheme 1

"Another challenge in using an index amortizing swap as a hedge is to determine the correct

amortization function"


Because the index amortizing swap is path dependent, we have to use this somewhat large formula instead of using the average fixed rate of all separate simulation runs. The resulting outcomes depend on the realizations of the yield curve.
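The sketch below implements Scheme 1 under strong simplifying assumptions that are not in the article: a one-factor Vasicek short-rate model simulated with Euler steps, the simulated short rate used directly as the reference rate, and the same illustrative amortization shape as in the earlier sketch. Its only purpose is to show how the simulated pieces enter the par-rate formula.

```python
import numpy as np

def amort(ref_rate):
    """Illustrative amortization function (same shape as in the earlier sketch)."""
    return min(0.02 + 8.0 * max(0.045 - ref_rate, 0.0), 1.0)

def par_rate_ias(n_years=10, m=12, n_sims=2000,
                 r0=0.04, kappa=0.3, theta=0.04, sigma=0.01, seed=0):
    """Monte Carlo sketch of Scheme 1 for an index amortizing swap."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / m
    steps = n_years * m
    num = 0.0   # accumulates N minus the prepayment and final-notional terms
    den = 0.0   # accumulates the sum of L_{t-1} * df_t over all paths and periods
    for _ in range(n_sims):
        r, L, integral_r, df = r0, 1.0, 0.0, 1.0
        pv_prepay, pv_coupon_base = 0.0, 0.0
        for _ in range(steps):
            # Steps 1-2: simulate the short rate (Vasicek, Euler step) and use it
            # as the reference rate
            r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            integral_r += r * dt
            df = np.exp(-integral_r)        # discount factor to the end of this period
            # Step 3: prepayment for this period from the amortization function
            prepay = amort(r) * dt * L
            L_prev, L = L, L - prepay
            # Step 4: collect the pieces of the pricing equation (notional = 1)
            pv_prepay += (L_prev - L) * df
            pv_coupon_base += L_prev * df
        num += 1.0 - pv_prepay - L * df     # per-path contribution
        den += pv_coupon_base
    # Steps 5-6: solve the pricing equation for the par fixed rate
    return m * num / den

# print(par_rate_ias())   # par fixed rate of the index amortizing swap
```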

Greeks of an index amortizing swap

Besides calculating the par fixed rate of an index amortizing payer swap, calculating the Greeks, or sensitivities with respect to several parameters, is a challenge as well. Glasserman (2004) describes three different solutions for calculating Greeks in simulation. Two of those are semi-analytical solutions. These solutions are however infeasible, because the path dependence of the swap creates very complex derivatives with respect to the parameters (high dimensional with many terms). In the third solution, the finite-difference approximation, we calculate the Greeks using brute force: we calculate the value of the swap for many different values of the parameters and then calculate the derivative by computing:

$$\frac{\partial v}{\partial x} \approx \frac{v(X+\Delta)-v(X-\Delta)}{2\Delta},$$

where v(X) is the value of the index amortizing swap at parameter values X and Δ is a small change in one of the parameters.
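A minimal sketch of this brute-force central difference, assuming the hypothetical par_rate_ias function from the pricing sketch above. Re-using the same random seed on both sides of the difference (common random numbers) keeps the Monte Carlo noise from dominating the estimate.

```python
def fd_greek(value_fn, x, delta=1e-4):
    """Brute-force central difference (v(x + delta) - v(x - delta)) / (2 * delta)."""
    return (value_fn(x + delta) - value_fn(x - delta)) / (2.0 * delta)

# e.g. the sensitivity of the par rate to the assumed short-rate volatility sigma
# vega_like = fd_greek(lambda s: par_rate_ias(sigma=s, seed=0), 0.01)
```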

Summary

In this paper we first discussed the principles of an advanced hedging strategy for prepayment risk using index amortizing swaps and then we computed the price and Greeks of an index amortizing swap. Moreover, we argued that by using an index amortizing swap we can avoid answering the question of how large the position in hedging instruments should be. This can make index amortizing swaps more accurate than a portfolio of regular receiver swaptions. However, we still need to estimate a prepayment function, which can be challenging in practice.

References

Boshuizen, F., Van der Vaart, A.W., Van Zanten, H., Banachewicz, K. and Zareba, P. (2006). Lecture notes course Stochastic Processes for Finance, Vrije Universiteit Amsterdam.

Galaif, L.N. (1993). Index amortizing rate swaps, Quarterly Review, Issue Winter, 63-70.

Fernard, J. D. (1993). The pricing and hedging of index amortizing rate swaps, Quarterly Review Issue Winter, 71-74.

Glasserman, P. (2004). Monte Carlo Methods in Financial Engineering, New York: Springer Science and Business Media.

Hull, J.C. (2006). Options, futures, and other derivatives 6th ed, Upper Saddle River: Pearson/Prentice Hall.

Hull, J.C. and White, A. (1990). Pricing Interest Rate Derivate Securities, Review of Financial Studies, 3(4), 573-592.


Like any car manufacturer, Toyota produces and distributes spare parts for the maintenance or repair of sold cars. Toyota's customers in the spare parts market are dealers, who sell them to car repair shops or use them themselves. To distribute the spare parts to the car dealers, Toyota has set up a highly responsive distribution network that is able to deliver to any dealer within hours after an order has been placed. Peculiar about this network is that it is only partly owned by Toyota and involves a large number of so-called third-party logistics (3PL) providers. These are essentially transportation companies that have some facilities for temporarily storing, sorting and packing the spare parts. Outsourcing part of the distribution makes Toyota's network more flexible but also increases its complexity. The car manufacturer now faces decisions concerning which 3PL companies it should include in its network and which part of the distribution it should assign to each of the selected 3PL companies. We have developed a software tool to support these decisions. Toyota currently uses the tool to determine the optimal configuration of its network. Moreover, the software has proven useful in increasing Toyota's negotiating power in its relationship with the transportation companies. This article describes Toyota's distribution strategy and the development and working of our tool.

Determining a Logistics Network for Toyota’s Spare Parts Distribution

Toyota’s spare parts supply chain

Figure 1 shows Toyota's spare parts supply chain. During the day dealers place orders for spare parts. The orders are recorded at the main spare parts distribution center, where they are consolidated, picked, and readied for transport. The consolidation stage ensures that different orders for the same spare part going to the same transport platform are grouped, so that unnecessary moving about in the distribution center is avoided. All this activity takes place immediately after the deadline for placing orders has passed, around 6pm. After the spare parts have been loaded onto (large) trucks, they leave for the various regional distribution centers, which are called transport platforms and are essentially warehouses with some limited handling capabilities. Some handling equipment and personnel are necessary because the parts are unloaded, unpacked, packed into smaller quantities, and loaded onto smaller trucks, which ultimately deliver the spare parts to the different dealers.

The process of setting up Toyota's distribution channel is a difficult one that involves a lot of negotiating with the different potential transportation companies.

Kenneth Sörensen and Peter Goos

are professors in operational research and statistics at the Faculty of Applied Economics of the Universiteit Antwerpen. Together with Patrick Schittekat, who is a part-time researcher at the university and a consultant at ORTEC Belgium, they work on various challenging combinatorial optimization problems in supply chain management and design of experiments.

When talking to these potential 3PL partners, Toyota's supply chain managers attempt to assess the characteristics of each company in terms of handling quality, reliability and cost. The transport sector is one in which not all companies are equally reliable, to say the least, and filtering out the unrealistic proposals of some distribution companies is therefore of great importance. Prior to using our tool, Toyota had experienced difficulties in the process of evaluating each transportation company and deciding which ones to work with. For companies it had already worked with, estimating reliability and handling quality could be done without too much difficulty. For new potential partners, however, Toyota found that assessing these characteristics could only be done within very large margins of error. Moreover, Toyota experienced a lot of problems when trying to determine the relative importance of each 3PL provider in its network, which limited its negotiating power to a large extent.


Toyota had no idea what would happen if it were to stop collaborating with some of its providers. In general, Toyota had a very limited view on potential alternative distribution networks that it could use.

Decision support system requirements

The decision support methodology we developed to solve these problems therefore had to be innovative in two ways. First, its goal could not just be to find the "best" solution. On the contrary, it had to offer Toyota's supply chain experts a lot of insight into the structure of potential alternative networks. With this knowledge about alternative networks, Toyota would suddenly find itself in a much stronger position for negotiating with the different (potential) 3PL partners. Second, to further increase their knowledge about the transportation network, Toyota needed a tool that could provide extremely accurate cost information on the different transportation options. It needed this information to ensure that a 3PL provider did not overcharge or (as sometimes happened) undercharge and then get into financial problems.

Features of the decision support system

The tool we developed does not generate a single "best" solution, but rather a set of structurally different solutions. In this way, a portfolio of transportation networks, all of which have a high quality (i.e. low cost) but which are all different from each other, is available to the decision makers at Toyota. The solutions in the portfolio are structurally different in that they use different sets of transport platforms.

Providing the Toyota supply chain expert with a set of — say — ten solutions all of which are very much alike would not make much sense, as it would not increase Toyota's insight into the different alternatives. Generating structurally different solutions, however, gives Toyota a clear view on which 3PL transport platforms should at all cost be included in the transport network and which can be discarded at little or no additional cost. Our tool integrates a commercial vehicle routing solver that is both extremely powerful and versatile. The solver we chose is SHORTREC, developed by the company ORTEC. It allows us to model the vehicle routing problem in a very detailed fashion. Distances, for example, are calculated over an actual street map; different possible truck types, with different cost structures, can be used; time windows (earliest and latest visiting times) at the dealers can be set; etc.

Without going into too much detail, our method solves a so-called location-routing problem (Nagy and Salhi, 2007), which consists of two sub-problems: a facility location problem and a vehicle routing problem.

Location problem: the selection of the 3PL providers

The location problem is the problem of deciding which transport platforms to select. This is done using a technique called tabu search (Glover, 1989 and Glover, 1990). Essentially, this technique works by improving the solution one small step at a time. The tabu search method will for example try to close a transport platform or open one. If this small change increases the quality (decreases the cost) of the solution, the new solution is adopted as the current solution, which the method then attempts to improve. This process of gradual improvement is called local search and is continued until no better solution can be found or a certain time limit has been reached. The main drawback of local search is that it often gets stuck in a so-called local optimum. A local optimum is a solution that cannot be improved by local search, i.e. no small change exists that improves the solution corresponding to the local optimum.

Figure 1: Toyota's distribution process. Car dealers order spare parts; parts are loaded onto trucks, transported (FTL) to the transport platforms, loaded onto smaller trucks, and delivered to the dealers in milk runs (automotive company and 3PL partners).

"Our decision support system has greatly improved Toyota's negotiating power and

responsiveness to changes in its supply chain"


Tabu search is especially designed to avoid getting trapped in such a local optimum. In contrast with simple local search, this technique allows non-improving changes and uses memory structures to guide the (local) search out of a local optimum. One of those memory structures, the one that lends its name to the method, is called a tabu list. This list prohibits us from undoing the last few changes that were made, thereby preventing the search from ending up in an endless repetition of non-improving changes. The location decision, i.e. which transport platforms to use, determines the starting point for the vehicle routing problem. When a selection of transport platforms has been made, solving a (complicated) vehicle routing problem will tell us what the cost of using this specific set of 3PL providers is. These costs can be determined by calculating how much it would cost to deliver to all dealers using this specific set of transport platforms.
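A minimal sketch of such an open/close tabu search for the location decision is given below. The routing_cost callback stands in for the detailed vehicle routing solver (SHORTREC in the project); the neighbourhood, tabu tenure and iteration budget are illustrative choices, not the actual implementation.

```python
import random

def tabu_search_platforms(platforms, routing_cost, n_iter=200, tenure=7, seed=0):
    """Select a set of transport platforms by tabu search.

    platforms    : list of candidate transport platforms
    routing_cost : function mapping a frozenset of open platforms to the cost of
                   delivering to all dealers from them (here a plain callback)
    """
    rng = random.Random(seed)
    current = frozenset(rng.sample(platforms, max(1, len(platforms) // 2)))
    best, best_cost = current, routing_cost(current)
    tabu = {}                                   # platform -> iteration until which it is tabu

    for it in range(n_iter):
        # neighbourhood: open or close exactly one non-tabu platform
        candidates = []
        for p in platforms:
            if tabu.get(p, -1) >= it:
                continue
            neighbour = current - {p} if p in current else current | {p}
            if neighbour:                       # keep at least one platform open
                candidates.append((routing_cost(neighbour), neighbour, p))
        if not candidates:
            continue
        cost, current, moved = min(candidates, key=lambda c: c[0])   # may be non-improving
        tabu[moved] = it + tenure               # forbid undoing this move for a while
        if cost < best_cost:                    # remember the best solution seen so far
            best, best_cost = current, cost
    return best, best_cost
```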

Vehicle routing problem

Vehicle routing is the branch of science that studies models and methods to determine the optimal schedule for a fleet of vehicles to visit a set of delivery points (Toth and Vigo, 2001). The simplest in this class of problems is the so-called capacitated vehicle routing problem, or simply VRP. In this problem, a number of delivery points (called customers) have to be visited to deliver a homogeneous product. As an example, consider a set of trucks delivering petrol to people's houses for their heating systems. Each customer has a certain demand (e.g. 3,000 liters) and all vehicles have the same capacity (e.g. 20,000 liters). The objective of this problem is to find the optimal schedule to visit all customers. This means we have to determine (1) which customers have to be visited by each of the available vehicles and (2) in which order each of the customers has to be visited, so that the total distance traveled by all vehicles is minimized (see figure 2). The VRP is surprisingly difficult to solve: if we insist on finding the best possible (or optimal) solution, we can now – with present-day algorithms and computer power – reliably solve problems that involve about 150 customers, but this might take several hours of computing time on an average computer. If we want to solve larger problems or do it faster, we have to content ourselves with solutions that approximate the optimal solution. In real life, problems with more than 150 stops abound, of course, so in practice the usefulness of algorithms that try to find the optimal solution is limited.

The problem that needs to be solved by our method is not only much larger than 150 customers, but also much more complex. It includes different starting and ending locations (the transport platforms), time windows at the transport platforms and the dealers, restrictions on how long a truck driver can drive and when he needs to take a break, and restrictions on which trucks can visit which companies

Figure 3: The best solution found by our method

Figure 2: A vehicle routing problem (a) and a possible solution (b); the key distinguishes customers and the depot.


(some are located in city centers and cannot be visited with huge trucks). The vehicle routing software that we use is therefore heuristic in nature, i.e. it does not guarantee that it can find the optimal solution, but tries to produce a satisfactory solution in reasonable time.
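To make the two decisions in a VRP concrete (assigning customers to vehicles and sequencing each vehicle's visits), here is a toy nearest-neighbour construction heuristic for the basic capacitated VRP. It is not the SHORTREC solver used in the project; it ignores time windows, driver breaks and truck restrictions and only produces a rough feasible plan.

```python
import math

def greedy_vrp(depot, customers, demand, capacity):
    """Nearest-neighbour sketch for the capacitated VRP.

    depot, customers : (x, y) coordinates; demand : dict customer -> demand.
    Returns a list of routes, each starting and ending at the depot.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unserved = set(customers)
    routes = []
    while unserved:
        load, here, route = 0.0, depot, []
        while True:
            feasible = [c for c in unserved if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(here, c))   # closest feasible customer
            route.append(nxt)
            load += demand[nxt]
            unserved.discard(nxt)
            here = nxt
        if not route:   # a remaining customer does not fit in an empty truck
            raise ValueError("customer demand exceeds vehicle capacity")
        routes.append([depot] + route + [depot])
    return routes
```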

Combined solution to location and routing problem

Combining the methods for the location and the routing subproblems, our method generates a set of solutions that are structurally different from each other. Each of the generated solutions corresponds to a transport network that has a low cost. Toyota's supply chain experts can then analyze each of the proposed networks in detail. One such solution (for the distribution of spare parts in Germany) is shown in figure 3. The results of our method have exceeded all expectations. The supply chain planners at Toyota have welcomed with open arms our tool and the insight it gives into the working of their supply chain. Each month, the tool is used to completely re-evaluate the entire spare parts supply chain and — if necessary — adjust it. The tool therefore provides a high-level roadmap on which decisions can be based. Toyota found that its negotiating power and its responsiveness to all sorts of changes in the supply chain context have greatly improved.

Acknowledgment

The research described in this article was financially supported by the Research Foundation - Flanders (Fonds voor Wetenschappelijk Onderzoek - Vlaanderen).

References

Glover, F. (1989). Tabu Search–Part I, INFORMS Journal on Computing, 1(3), 190.

Glover, F. (1990). Tabu Search-Part II, ORSA Journal on Computing, 2(1), 4–32.

Nagy, G. and Salhi S. (2007). Location-routing: Issues, models and methods, European Journal of Operational Research, 177(2), 649–672.

Toth, P. and Vigo, D., editors (2001). The vehicle routing problem, SIAM Monographs on Discrete Mathematics and Applications, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.


It is well known that cartels are harmful for consumers. To counteract cartels, cartel formation is by law an economic crime with the antitrust authority (AA) as its crime fighter. Recently, Harrington (2004, 2005) studied a general model of cartel formation and its pricing based upon profit maximization. In this article, we discuss the novel approach in Houba et al. (2009), who take the maximal damage for consumers as the key criterion. Some developments of this approach are introduced and related to the literature.

The Maximal Damage Paradigm in Antitrust Regulation: Is it Something New?

Harold Houba

is associate professor at the Department of Econometrics of the VU University. He obtained his Ph.D. from Tilburg University in 1994. Houba is affiliated with the VU since 1992. His specialization is bargaining theory with applications to labor and environmental economics.

Evgenia Motchenkova

is assistant professor at the Department of Economics of the VU University. She obtained her Ph.D. from Tilburg University in 2005. Motchenkova is affiliated with the VU since 2005. Her specialization is the economics of antitrust regulation, in particular leniency programs.

Quan Wen

is professor at the Department of Economics of the Vanderbilt University in Nashville. He obtained his Ph.D. from the University of Western Ontario in London, in 1991. Wen is affiliated with Vanderbilt since 2001. His specializations are bargaining theory, repeated games and their applications to economics.

Introduction

Despite a large literature on enforcement against individual illegal behavior, the theory of regulation is still in its infancy when it co-mes to enforcing market competition. Illegal anti-competitive behavior is much more com-plicated since it typically is a concerted illegal action performed within an ongoing relationship over time, called a cartel. Any theory of regu-lation therefore requires a dynamic setting, for example an infinitely-repeated oligopoly model with grim trigger strategies.In this article, we discuss an innovative but unconventional approach in which we study the maximal-sustainable cartel price in a repeated oligopoly model, i.e., the largest cartel price for which the equilibrium conditions for sustaina-bility hold. This differs from the standard ap-proach in which the cartel maximizes profits. The main reason for doing so is that experi-mental economics establishes that economic agents often behave differently from standard microeconomic theory. Also, there is empirical evidence in support of Baumol’s (1958) hy-pothesis that managers of large corporations seek to maximize sales rather than profits. Sustainability of cartel behavior offers a more robust criterion that does not depend on the cartel’s objective. Then, the characterization of consumers’ maximal damage can be regarded as a worst-case scenario for consumers. It is therefore natural to apply this new approach to regulation and compare the main results with those obtained for a profit-maximizing cartel in Harrington (2004, 2005).This article is organized as follows. Section 2 in-troduces the maximal-sustainable cartel price in a benchmark model without regulation. In sec-tion 3 we analyze the impact of AA enforcement

on the maximal-sustainable cartel price. In section 4, we compare, by means of an example, our approach to the one in Harrington (2004, 2005). Section 5 concludes the analysis.

The Benchmark Model

Consider an oligopoly market where n≥2 symmetric firms compete in prices with either homogeneous or heterogeneous products over infinitely many periods. All firms have a common discount factor δ є (0,1) per period. Since we deal with symmetric outcomes, we simplify (p,...,p) є ℝⁿ₊ to p є ℝ₊. We adopt the following notation:


- pN and pM denote the competitive (Nash) equilibrium price, respectively, the maximal collusive price.

- π(p) is the profit function of an individual firm in any period. π(p) is continuous and strictly increasing for p є [pN,pM].

- πopt(p) is a firm's profit from unilateral deviation against the cartel when all the other cartel members set their prices at p. πopt(p) is continuous and πopt(p)>π(p)>0 for p є [pN,pM].

- λ(p) is the degree of cartel stability and it is defined as

λ(p) = π(p)/πopt(p), for p є (pN,pM], and λ(p) = 1, for p = pN.

λ(p) < λ(pN) ≡ 1 for all p є (pN,pM], and λ(p) is decreasing on (pN,pM].

The degree of cartel stability is a new concept. Standard intuition implies that a higher λ(p) means fewer incentives for cartel members to deviate and a more stable cartel. Furthermore, a higher cartel price implies a higher incentive for each cartel member to deviate. Since the function λ might be discontinuous at p=pN, as Example 2 illustrates, we introduce

λ̄ ≡ lim ε→0+ λ(pN + ε) ≤ 1 and λ̲ ≡ λ(pM).

The above oligopoly model without regulation is a standard infinitely-repeated game. Throughout this article, we focus on grim-trigger strategies to sustain cartel price p>pN in which any deviation leads to the repetition of the competitive (Nash) price in every period thereafter. The underlying rationale is that cartels are based upon trust and, by the reciprocal nature of humans, all trust is gone after someone cheats. The equilibrium concept is a subgame perfect equilibrium.

In the absence of regulation, the necessary and sufficient condition to sustain p є (pN,pM] as a cartel price is

πopt(p) + [δ/(1−δ)]·π(pN) ≤ [1/(1−δ)]·π(p)  ⇔  δ ≥ 1 − λ(p).   (1)

The socially worst outcome is the maximal-sustainable cartel price, which is defined as

pC = max {p є [pN,pM] : p satisfies (1)}.   (2)

Due to the monotonicity of π(p)/(1–δ), the maximal-sustainable cartel price pC also maximizes the cartel's profit. A direct approach would solve (1) for p as a function of all parameters, which requires the inverse function λ⁻¹(1–δ). Later, however, this approach is not applicable. Instead, we analyze the properties of the threshold level for δ as a function of p є (pN,pM] and, then, translate these properties into the maximal-sustainable cartel price as a function of δ in the (δ,p)-plane. This indirect approach allows for an easier interpretation.

Proposition 1 In the absence of regulation, the maximal-sustainable cartel price pC is non-decreasing in δ є (0,1), and
pC = pN, for δ є (0, 1 – λ̄),
pC є [pN, pM), for δ є [1 – λ̄, 1 – λ̲),
pC = pM, for δ є [1 – λ̲, 1).

We conclude this section with a well-known example.

Example 2 Consider a homogeneous Bertrand oligopoly model with linear demand 2–p and 0 marginal costs. Note that pN=0 and pM=1. For all p є (pN,pM], each of the n firms may deviate by slightly undercutting the others to obtain the full cartel profit, i.e., λ(p)=1/n for all p є (pN,pM]. Consequently, λ̄ = λ̲ = 1/n. Proposition 1 implies

pC = pN, if δ < 1 − 1/n, and pC = pM, if δ ≥ 1 − 1/n.
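As a purely numerical illustration of Example 2, the minimal Python sketch below evaluates condition (1) on a price grid; the function name and the grid are our own choices and not part of the original analysis.

import numpy as np

# Minimal sketch of Example 2: maximal-sustainable cartel price p^C on a price grid.
def p_C(delta, n):
    grid = np.linspace(0.0, 1.0, 10001)        # candidate prices in [p^N, p^M] = [0, 1]
    lam = np.where(grid > 0.0, 1.0 / n, 1.0)   # lambda(p) = 1/n on (p^N, p^M], lambda(p^N) = 1
    return grid[delta >= 1.0 - lam].max()      # largest price satisfying condition (1)

print(p_C(0.4, n=2))   # 0.0: delta < 1 - 1/n, only the competitive price is sustainable
print(p_C(0.6, n=2))   # 1.0: delta >= 1 - 1/n, the monopoly price is sustainable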

Antitrust Enforcement

In this section, we examine the impact of regulation. Given p є [pN,pM], the probability that the AA investigates the market outcome in a period and finds the firms guilty of collusion is β(p) є [0,1), where β(p) is increasing in p and β(pN)=0. Upon being caught, violators will be fined by the amount k(p)π(p), where k(p) is increasing and continuous such that k(pN)=0 and k(p)>0 for all p є (pN,pM]. The function β(∙) reflects that a higher cartel price attracts suspicion and makes detection more likely. Any cartel takes this negative effect into account when deciding upon the price, see Harrington (2004, 2005). The AA is a passive player in this model, while firms are the active players. The detection probability β(∙) is limited by the resources of the authority, and the fine schedule k(∙) is limited by legislation. The OECD (2002) reports detection probabilities 1/7≤β(p)≤1/6 and penalty schemes 2≤k(p)≤3. These facts imply 2/7≤β(p)k(p)≤1/2, or an expected penalty roughly between 30% and 50% of the illegal cartel profits. Therefore, the AA may not be able to deter violations. Here, we assume 0<β(p)k(p)<1 for all p є (pN,pM], meaning any cartel is tempted to set its price above the competitive price. Another aspect is how cartel members react to detection. In some cases, being caught once is sufficient to deter cartel activity in the future. In other cases, the economic sector is notorious for cartel activities despite many convictions (meaning members pay the fines and continue


illegal business). γ є [0,1] is the cartel culture parameter that reflects the probability that the firms will behave competitively (i.e. stop illegal business) after each conviction. Notorious implies γ=0, while γ=1 means the sector becomes competitive after the first detection. All models in the literature assume either γ=0 or γ=1. Let V(p) be the present value of a cartel member's expected profit if the cartel sets price p є [pN,pM] in every period. This value consists of the current illegal gains π(p), the expected fine β(p)k(p)π(p), the expected continuation payoff

of a renewed cartel after detection β(p)(1–γ)δV(p), and the expected continuation payoff of not being detected (1–β(p))δV(p), from which we obtain

V(p) = [1 − β(p)k(p)]·π(p) / (1 − [1 − γβ(p)]·δ) < π(p)/(1 − δ), for p є (pN,pM].   (3)

So, introduction of regulation reduces the cartel's profitability. Does it also affect sustainability? The cartel has its own destabilizing forces working from within, because individual cartel members have an incentive to cheat on the cartel. Here, cartel members adopt modified grim-trigger strategies to sustain p>pN:

1 Firms continue to set a price p>0 with probability 1–γ after each conviction (and with probability γ set the competitive price pN ever after).

2 Any deviation by some cartel members leads to the competitive price pN in every period ever after.

Then, the profit from a unilateral deviation is equal to the short-term gain of πopt(p) in the current period, followed by the competitive equilibrium with π(pN)=0 forever after. Consequently, the necessary and sufficient condition to sustain cartel price p є (pN,pM] is V(p)≥πopt(p), or

λ(p) ≥ (1 − [1 − γβ(p)]·δ) / (1 − β(p)k(p)) ≡ Λ(p).   (4)

Under regulation the maximal-sustainable cartel price is given by

pR = max {p є [pN,pM] : p satisfies (4)},   (5)

where superscript R refers to regulation. Program (5) is a well-defined program since p є [pN,pM], which can be deduced as follows. Since the monotonicity properties of Λ(p) (increasing) and λ(p) (decreasing) are opposite, the intersection λ(p)=Λ(p) is unique and coincides with pR. Furthermore, any p≤pR can also be sustained by the cartel. So, the range of prices in (4) is a closed subinterval of [pN,pM]. Comparing (2) and (5), we observe that pN≤pR≤pC≤pM, meaning that regulation may reduce the maximal-sustainable cartel price in general. Similar to Proposition 1, we derive the

thresholds on the discount factor δ. Doing so, (4) can be rewritten as

δ ≥ Δ(p) ≡ {1 − λ(p)·[1 − β(p)k(p)]} / [1 − γβ(p)] ≥ 1 − λ(p).   (6)

The function Δ(p) is continuous and increasing in p, k(p) and β(p).

Proposition 3 Under regulation, the maximal-sustainable cartel price pR is non-decreasing in δ є (0,1) and decreasing in γ є [0,1]. Furthermore, we have
pR = pN, for δ є (0, 1 – λ̄),
pR є [pN, pM), for δ є [1 – λ̄, Δ(pM)),
pR = pM, for δ є [Δ(pM), 1).
An overall increase in β(p) or k(p) shifts Δ(pM) and the entire curve to the right.

Clearly, inequality (6) is more restrictive than (1), implying that introduction of regulation restricts the set of discount factors for which collusion can be sustained for every possible price p є (pN, pM]. This implies that cartel stability is reduced compared to the benchmark case. Moreover, the fact that Δ(p) is increasing in p implies that the regulation is more effective against collusion on higher prices. When Δ(pM)≤δ<1, the regulation is not effective to deter the cartel from setting its monopoly price. It is interesting to investigate whether regulation can eradicate the monopoly price for all cartel cultures. Solving Δ(pM)<1 for γ yields

γ < {λ(pM)[1 – k(pM)β(pM)]}/β(pM),

where the right-hand side remains positive under 0<β(p)k(p)<1. Hence, industries that are notorious for cartel behavior cannot be eradicated by regulation unless one is willing to adopt regulation that fully takes away the illegal gains (i.e. β(p)k(p)>1 for all p є (pN, pM]). Another implication is related to the effect of the degree of cartel stability λ(p) on sustainability of the monopoly price. Since ∂Δ/∂λ<0, sectors where the degree of cartel stability is higher (λ(p) closer to 1) have less restrictive conditions for sustaining consumers' worst price pM. This makes regulation less effective in these sectors. The main message is a mixed blessing for regulation. On the one hand, Proposition 3 identifies non-empty sets of parameter values for which regulation is effective in reducing the maximal-sustainable cartel price, i.e., pR<pC. On the other hand, as long as regulation obeys β(p)k(p)<1, there will remain a large non-empty set of parameter values for which pR=pM, meaning the regulation is ineffective. We conclude this section with an example.

Example 4 Reconsider Example 2 and let β(p)=βp and k(p)=k, where k<1. Then, (5) becomes

pR(δ,γ) = max {p є [0,1] : 1/n ≥ (1 − δ + γδβp) / (1 − kβp)}.

Note that p=0 is feasible in the constraint if and only if δ≥1–1/n. The constraint can be rewritten as p ≤ [1–n(1–δ)]/[(nγδ+k)β], which is the solution to the problem if it is between 0 and 1. The right-hand side is increasing in δ. To summarize, we have

pR = 0, for δ < 1 − 1/n, and pR = [1 − n(1−δ)] / [(nγδ + k)β], for 1 − 1/n ≤ δ < 1, truncated at pM = 1.

Note that pR<pM=1 for all δ є (0,1) if and only if (nγ+k)β>1. Since βk<1, this condition can hold only when nγ is sufficiently large. For sectors with small numbers of firms and γ sufficiently close to 0, the monopoly price will not be eradicated by regulation. Both possible cases, (nγ+k)β≤1, respectively (nγ+k)β>1, are illustrated by figure 1, where the vertical dotted line at δ=1–1/n represents the discontinuous jump in pC of Example 2 from pN=0 to pM=1.
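A minimal sketch of the closed form for pR in Example 4; the function name and the parameter values in the two calls are illustrative only and are not taken from the article.

def p_R(delta, n, gamma, k, beta):
    # Maximal-sustainable cartel price in Example 4: beta(p) = beta*p, k(p) = k, lambda(p) = 1/n.
    if delta < 1.0 - 1.0 / n:
        return 0.0                                        # only the competitive price p^N = 0
    p = (1.0 - n * (1.0 - delta)) / ((n * gamma * delta + k) * beta)
    return min(p, 1.0)                                    # truncate at the monopoly price p^M = 1

print(p_R(0.99, n=2, gamma=0.0, k=0.5, beta=0.5))   # (n*gamma+k)*beta <= 1: p^R = p^M = 1, regulation ineffective
print(p_R(0.99, n=2, gamma=1.0, k=0.5, beta=0.5))   # (n*gamma+k)*beta > 1: p^R stays below the monopoly price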

The profit-maximizing cartel price

In this section, we compare our approach to Harrington (2004, 2005). He defines the endogenous cartel price as the profit-maximizing sustainable cartel price pπ:

[ , ]arg max ( ), s.t. (4).

N M

π

p p pp V p

∈=   (7)

For explanatory reasons, we restrict attention to the numerical values β(p)=p/2, k(p)=3 and γ=2/3 in Example 4. Then,

V(p) = (1/n) · [p(2 − p)(1 − 3p/2)] / (1 − δ + δp/3).

Standard arguments imply V(p) fails both monotonicity and concavity on [pN,pM]=[0,1], but this function is single-peaked on [pN,pM]. Checking the second-order conditions can be avoided, because V(p) is monotonically increasing from pN to its peak and monotonically decreasing from its peak to pM. So, application of the first-order conditions suffices. In contrast, (5) is a well-defined convex program. Harrington shows that, in general, (4) is non-binding for δ sufficiently close to 1 and we may solve the first-order condition ∂V(p)/∂p=0. In our case, MAPLE returns a closed-form (but lengthy) interior solution p̃ = p̃(δ) є [pN,pM]

Figure 1: The maximal-sustainable cartel price pR as a function of δ for the two cases (nγ+k)β ≤ 1 and (nγ+k)β > 1 in Example 4.


for all δ є [0,1]. Taking also (4) into account implies that the profit-maximizing cartel price pπ is the minimum of p̃ and pR. These two price curves intersect at δ≈0.955. So, on the interval [0,0.955], the profit-maximizing cartel price pπ=pR, while for the interval (0.955,1] we have pπ = p̃ < pR and the equilibrium condition is non-binding. Figure 2 illustrates pπ = min{p̃, pR}. This example offers important insights. Whenever constraint (4) in (7) is binding, the maximal-sustainable cartel price pR and the profit-maximizing cartel price pπ coincide and our approach is complementary to the analysis in Harrington (2004, 2005). Otherwise, i.e., when (4) in (7) is non-binding, these two cartel prices systematically differ. As the figure shows, for δ close to 1, the profit-maximizing cartel price pπ is close to the competitive price pN and, therefore, seriously underestimates the potential maximal damage to consumers. Our approach offers a worst-case scenario of sustainable cartel behavior.
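The comparison can also be done by brute force. The sketch below is our own construction: it grid-searches pR and pπ using the value function (3) and condition (4); since the closed form for p̃ is not reproduced here, the printed output serves only as an illustration for one calibration.

import numpy as np

def prices(pi, pi_opt, beta, k, gamma, delta):
    # Grid search for p^R (largest sustainable price) and p^pi (profit-maximizing sustainable price).
    grid = np.linspace(1e-4, 1.0, 20001)
    b, kk = beta(grid), k(grid)
    V = (1.0 - b * kk) * pi(grid) / (1.0 - (1.0 - gamma * b) * delta)   # value function (3)
    ok = V >= pi_opt(grid)                                              # sustainability condition (4)
    if not ok.any():
        return 0.0, 0.0
    return grid[ok].max(), grid[ok][np.argmax(V[ok])]

n = 2
p_R, p_pi = prices(pi=lambda p: p * (2 - p) / n, pi_opt=lambda p: p * (2 - p),
                   beta=lambda p: p / 2, k=lambda p: 3.0, gamma=2/3, delta=0.9)
print(p_R, p_pi)   # the two coincide when (4) binds; otherwise p^pi < p^R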

Conclusion

In this article, we explore a general infinitely-repeated oligopoly model for the analysis of violations of competition law under regulation. A novel concept is the maximal-sustainable cartel price that reflects consumers' worst cartel price among those cartel prices that are sustainable, which endogenizes the cartel formation decision and its pricing strategy. This cartel price is related to the discount rate and the novel concepts of type and structure of the industry (λ) and the cartel culture parameter (γ). Regulation is less effective in sectors where the degree of cartel stability is higher or where the sector's cartel culture to continue business as usual is more prominent. Stylized facts from OECD countries imply that current regulation is ineffective. Finally, our approach is complementary to Harrington (2004, 2005) in case equilibrium conditions are binding. Otherwise,

Figure 2: The profit-maximizing cartel price pπ = min{p̃, pR} as a function of δ.

the profit-maximizing cartel price underestimates the maximal damage to consumers; a bias that might be huge. Since economic agents often behave differently from standard microeconomic theory, the criterion of sustainability of cartel behavior offers a more robust framework that does not depend on the cartel's objective function. In this perspective, our approach is a worst-case scenario.

References

Baumol, W. (1958). On the theory of oligopoly, Economica, 25, 187-198.

Harrington, J. (2004). Cartel pricing dynamics in the presence of an antitrust authority, The Rand Journal of Economics, 35, 651-673.

Harrington, J. (2005). Optimal cartel pricing in the presence of an antitrust authority, International Economic Review, 46, 145-170.

Houba, H., Motchenkova, E., and Wen, Q. (2009). Leniency programs in antitrust regulation with an endogenous detection probability, TI discussion paper, (in preparation).

OECD (2002). Fighting hard-core cartels: Harm, effective sanctions and leniency programs, OECD Report 2002, OECD, Paris, France, http://www.SourceOECD.org.


A majority of the Dutch population expects that the government will increase the eligibility age for old age social security (AOW) some time in the coming decades from 65 to 67 years. If this is the case, the question arises whether the Dutch are also preparing for such an anticipated increase. To some extent this appears to be the case: individuals who are relatively certain that the eligibility age will be increased purchase private pension products more often.

Dutch Citizens Ready for Eligibility Age Increase from 65 to 67 Years

Karen van der Wiel

is currently a PhD student in the department of econometrics at Tilburg University, The Netherlands. She is associated with Netspar, a network that does research on pensions, aging and retirement; through this network the Dutch government institution that pays out old age social security benefits (SVB) finances her research. Karen received an MSc in economics from Erasmus University Rotterdam in 2005 (cum laude). In 2008, Karen spent a semester at University College London. Her research interests include applied microeconometrics, labor economics and experimental economics.

Measuring expectations

Economists assume that future expectations play an important role in almost all long-term decisions. In the absence of reliable data, it was difficult to test this notion for a long time. However, in recent years it turned out that expectations can indeed be measured by asking respondents for the probability between 0 and 100 that a certain event will occur (Manski, 2004). An example of such a probability question is asking individuals for the chance they will survive up to the age of 80 years. U.S. data shows that there is a remarkable correlation between the given answers and the actual survival rates of older individuals (Hurd and McGarry, 2002). Expectations can thus be measured using probability questions.

AOW expectations

In May 2006, CentERdata in Tilburg therefore began measuring the monthly pension expectations of Dutch individuals in the so-called Pensionbarometer. Netspar, the 'Network for Studies on Pensions, Aging and Retirement', pays for this research in order to gain more insight into the confidence of the Dutch population in the various elements of the Dutch pension system. Questions concerning the future of the old age social security system play a central role in the Pensionbarometer. Because of the political relevance, I will focus in this article on the future expectations about the earliest possible age at which a pensioner may receive benefits. It turns out that the average probability that people assign to an increase in the eligibility age from 65 to at least 67 years is 51 (or 0.51). This average varies with the relevant time horizon. As one would expect, people assign a higher probability to this policy change occurring sometime in the next twenty years (average probability is 54) than to it occurring sometime in the next ten years (average probability is 47). Figure 1 shows the development of the average expectation over the past two years (from May 2006 to October 2008). Pensionbarometer respondents were most optimistic about the future of the AOW eligibility at the end of 2006 (in the period of the national general elections) and were most pessimistic in June 2008 (around the publication of the report of the Bakker Commission).

Savings behavior and AOW expectations

Within the life-cycle/savings theory it is assumed that saving for old age is an important saving motive (Browning and Lusardi, 1996). The hypothesis is that most people no longer want and/or are able to work after reaching a certain age and that they will provide for themselves with accumulated financial resources from that moment onwards. In the Netherlands, saving for old age is organized on three levels: 1) the government guarantees an AOW benefit for everyone; 2) most employers take care of an additional pension for their employees; and 3) each individual can also save for his or her own old age. Research from the seventies has demonstrated substitution effects between such public and individual savings systems. In the United States people saved less themselves after the government had installed old age social security (Feldstein, 1974). In recent years, the ageing of Western populations has forced many governments to discuss the future affordability of Pay-As-You-Go pension schemes. In some countries, such as Britain and Germany, this has already led to an increase in the social security eligibility age. The debate on how to keep the AOW sustainable is also ongoing in the Netherlands. Now suppose that the government would indeed raise the AOW eligibility age from 65 to 67 years. Given the interaction between public and private savings, it is evident that the Dutch should compensate the increase by saving more themselves. In reality, however, it is still uncertain what will happen to the old age pension age in coming years, but the Dutch population does have certain expectations about its future. When future expectations are indeed important for long-term decisions, one would expect that people who are relatively more pessimistic about the future of the AOW also save more for their own retirement by, for example, purchasing annuities and single premium insurance policies. If you expect to receive nothing from the state between your 65th and 67th birthday, and you do want to quit working before you reach the age of 67, you'll need to save. For a policymaker it is particularly important to know whether people can deal with potential policy changes such as a higher AOW-eligibility age. If many families are unable to react properly to an anticipated increase in the old age pension age, it does not seem wise to decide to implement such a change.

Actual expectations and pension scheme participation

CentERdata in Tilburg has been executing savings research commissioned by the Dutch Central Bank (DNB) for a long time. For the empirical analysis of this article I combined data from the Pensionbarometer with data from the DNB Household Savings Survey (DHS). This has led to a dataset containing information on over 3,000 individuals. The dataset includes individuals' views on the future of the old age pension age and their saving behavior. Table 1 presents the crude percentages of respondents who own a private pension product (an annuity and/or single premium policy). 39% of the total sample held such a third pillar pension product. This figure is higher than that of the entire Dutch population, since richer households are slightly over-represented in the DHS. In Table 1, all respondents of the DNB Household Savings Survey are divided into three groups: in the first column one finds individuals that are not convinced that the AOW-eligibility age will go up within 20 years (who provide a probability between 0 and 0.4), in the second column those that assign an average probability (between 0.4 and 0.6), and in the third column one finds individuals who are relatively pessimistic (who provide a probability between 0.6 and 1). The first row of Table 1 indeed shows that pessimistic respondents purchase more annuities and/or single premium policies. The most pessimistic group is 11% more likely to have a private pension product. These results are the same for most of the subgroups. This implies that the Dutch who anticipate a policy change do seem to prepare for it. This is not the case for every subgroup. Relatively young workers (under the age of 40) do not take their own expectations about the AOW into account when preparing for their retirement. The differences in participation rates for wealthier workers (with a gross income above 38,100 Euro) are also not significant between the expectations groups.

Regression Results

Little can be concluded about the causality of the relationship between AOW future expectations and savings behavior from the results in Table 1. If, for example, those with low educational attainment are all very optimistic about the future of the AOW-eligibility age and also coincidentally do not buy pension products, education is the cause of low pension scheme participation, not confidence in the AOW. To (to a certain extent) rule out such confounding factors, I estimate a probit regression of the possession of a private pension product on the probability individuals assign to a two-year higher eligibility age and on all other relevant personal characteristics (education level, age, a self-reported health status, having children, having a partner, sex and year of the sample). The results of this regression are shown in Table 2. This table displays the marginal effects of the subjective probability of an eligibility age increase on the probability of owning an annuity and/or single premium policy.

                                    Prob. 0-0.4   Prob. 0.4-0.6   Prob. 0.6-1   Observations
Perc. policy total                  32.85         38.75           43.02         3,228
Total observations                  904           978             1,346
Perc. policy 30-39 years            33.48         30.16           32.84         931
Perc. policy 40-49 years            24.17         35.52           45.92         1,097
Perc. policy 50-59 years            39.37         51.14           48.63         1,200
Perc. policy <25,150 Euro           17.63         30.03           30.61         1,076
Perc. policy 25,150 - 38,100 Euro   31.18         39.88           46.34         1,076
Perc. policy > 38,100 Euro          48.63         46.11           52.30         1,076

Table 1: Percentage of respondents who own a private pension product (an annuity and/or single premium policy)
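For illustration only (the DHS data are not reproduced here), the sketch below shows how a table like Table 1 can be computed from individual-level data; all variable names and values are hypothetical.

import pandas as pd

# Hypothetical micro data: subjective probability of an AOW-age increase and product ownership.
df = pd.DataFrame({
    "prob_67": [0.1, 0.3, 0.5, 0.55, 0.8, 0.9, 0.7, 0.2],
    "owns_product": [0, 0, 1, 0, 1, 1, 0, 1],
})
df["group"] = pd.cut(df["prob_67"], bins=[0.0, 0.4, 0.6, 1.0],
                     labels=["0-0.4", "0.4-0.6", "0.6-1"], include_lowest=True)
# Percentage of owners per expectation group, as in the first row of Table 1.
print(df.groupby("group", observed=True)["owns_product"].mean().mul(100).round(1))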

Figure 1: Average probability (per month) that the AOW-eligibility age will increase to 67 years, within ten years and within twenty years, by week since 1-1-2006.


The marginal effect is 19% for the entire sample. This means that a person with average characteristics (average level of education, average age, average income, etc.) who is convinced that the AOW-age will go up is 19 percentage points more likely to own a private pension product than a similar person who is convinced that the AOW-age will remain unchanged. Table 2 shows that this marginal effect is quite high for all subgroups, except for persons under 40 and for people with a high income, as in Table 1. For the group between 40 and 49 years, the difference in pension product ownership between people who assign a probability of one and a probability of zero is even 42 percentage points.
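A sketch of the estimation step on simulated data rather than the actual DHS sample (coefficients and variable names are hypothetical), showing how a probit with average marginal effects can be obtained:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3000
prob_67 = rng.uniform(0, 1, n)                 # subjective probability of an AOW-age increase
age = rng.integers(30, 60, n).astype(float)    # one of several controls used in the article
latent = -0.8 + 0.5 * prob_67 + 0.01 * age + rng.normal(size=n)
owns = (latent > 0).astype(int)                # owns an annuity and/or single premium policy

X = sm.add_constant(np.column_stack([prob_67, age]))
fit = sm.Probit(owns, X).fit(disp=0)
print(fit.get_margeff(at="overall").summary()) # average marginal effects, as reported in Table 2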

Conclusion

The Dutch population believes there is a good chance that within the next few years the earliest possible age to receive an AOW benefit, now 65 years, will be increased by the government. Whether this will actually happen remains to be seen. My research does show that those who are more convinced of a higher eligibility age in the future are preparing for such a policy change. Those who assign higher probabilities to an eligibility age of 67 years own more third pillar pension products, such as annuities and/or single premium policies. An individual who is certain that the old age pension age will increase is 19% more likely to hold a third pillar product than a similar person who is convinced the AOW age will remain 65 years. Only the Dutch under the age of 40 and those with a gross annual income higher than 38,000 Euro do not pay much attention to their own AOW future expectations. Perhaps one should not expect these groups to be able to do so, as retirement is still too far away for those under 40 and the AOW benefit is a minor part of the pension of the richer Dutchman.

References

Browning and Lusardi (1996). Household saving: Micro theories and micro facts, Journal of Economic Literature, 34(4), 1797-1855.

Hurd and McGarry (2002). The Predictive Validity of Subjective Probabilities of Survival, The Economic Journal, 112(482), 966-985.

Feldstein (1974). Social Security, Induced Retirement, and Aggregate Capital Accumulation, The Journal of Political Economy, 82(5), 905-926.

Manski (2004). Measuring Expectations, Econometrica, 72(5), 1329-1376.

Van der Wiel (2008). Preparing for policy changes: Social Security Expectations and Pension Scheme Participation, IZA Working Paper 3623.

Sample                          Marginal effects   Standard errors
Total sample                    19%**              0.06
Prob. that AOW-eligibility age will be (at least) 67 within twenty years, by subgroup:
30-39 years                     4%                 0.10
40-49 years                     42%***             0.11
50-59 years                     16%                0.10
Income <25,150 Euro             21%**              0.08
Income 25,150 - 38,100 Euro     30%**              0.10
Income > 38,100 Euro            5%                 0.11

Table 2: Marginal effects regression analysis


When making decisions on investments in technological innovation, implicitly or explicitly choices are made about diversity of options, strategies or technologies. Such choices should ideally consider the benefits and costs of diversity and arrive at an optimal trade-off. One important benefit of diversity relates to the nature of innovation, which often results from combining two or more different and existing technologies or knowledge bases (Fleming, 2001; Ethiraj and Levinthal, 2004). For instance, a laptop computer in essence is a combination of a desktop computer and a battery; a laser is quantum mechanics integrated into an optical device, while an optical fibre for telecommunication is a laser applied to glass technology.

Optimal Diversity in Investments with Recombinant Innovation

Paolo Zeppini

is currently doing his PhD in economics at the University of Amsterdam, which he started after completing the two-year master program (MPhil) in economics at the Tinbergen Institute. Paolo graduated in Physics at the University of Florence, worked for two years in consultancy, did a master in quantitative finance at Bocconi University in Milano and worked for three years as a trader in an investment bank before coming to Amsterdam in 2006 to do research in economics.

This article presents a theoretical framework for the analysis of an investment decision problem where an innovative process is at work resulting from the interaction of two technologies. Such a process is called recombinant or modular innovation. The main idea is that in an investment decision problem where available options may recombine and give birth to an innovative option (technology), a certain degree of diversity of parent options can lead to higher benefits than specialization. The conceptual framework and the detailed theoretical analysis can be found in van den Bergh and Zeppini-Rossi (2008).

A pilot model

Consider a system of two investment options that can be combined to produce a third one. Let I denote cumulative investment in parent options. Investment I3 in the new option only occurs if it emerges, which happens with probability PE. The growth rates of parent options are proportional to investments, with shares α and 1−α. Let O1 and O2 represent the values of the cumulative investment in parent options and O3 the (expected) cumulative investment in the innovative option. The dynamics of the system is described by

Ȯ1 = I1 = αI
Ȯ2 = I2 = (1−α)I   (1)
Ȯ3 = PE(O1,O2)·I3

The optimization problem is to find the α that maximizes the final total benefits of parent and innovative options. The probability of emergence depends on two factors: the diversity of the parents and a scaling factor π, which is the efficiency of the R&D process underlying recombinant innovation:

PE(O1,O2) = πB(O1,O2) (2)

Diversity is expressed as the balance B of parent options:

B(O1,O2) = 4·O1O2/(O1 + O2)²

Assuming that investment in parent options begins at time t=0, their value at time t is O1(t)=αIt and O2(t)=(1−α)It. Under this assumption the balance function is independent of time: B=4α(1−α). The probability of emergence is constant and only depends on α. The innovative option grows linearly:

O3(t) = 4πI3α(1−α)t (3)

The optimization problem of this investment decision is addressed considering the joint benefits of parents and innovative options:

V(α;T) = O1(T;α)^s + O2(T;α)^s + O3(T;α)^s   (4)

The returns-to-scale parameter s allows one to model the trade-off between diversity and the scale advantages of specialization. t=T is the time horizon. Normalizing benefits by their value in the case of specialization, V(α=0;T)=V(α=1;T)=I^s·T^s, one obtains


Ṽ(α) ≡ V(α;T)/(I^s·T^s) = α^s + (1−α)^s + C^s·α^s·(1−α)^s   (5)

where C=4πI3/I. Figure 1 reports the normalized benefits curves in a case of increasing returns to scale (s=1.2) for six different values of the factor π. There is a threshold value π̄ such that for π<π̄ the optimal decision is specialization, while for π>π̄ diversity is optimal. Conversely, given a value π of the efficiency of recombinant innovation one may ask what the turning point of returns to scale is where diversity (α=1/2) becomes optimal. This threshold level s̃ solves the equation

Ṽ(α=1/2) = (1/2)^s̃ · [2 + (C/2)^s̃] = 1   (6)

If C=0 (for instance with π=0) we have s̃=1. If C=1 (for instance with I=4I3 and π=1) we find s̃ ≃ 1.2715. There is no closed-form solution for s̃ as a function of the other parameters, but we can instead solve for C. For s>1 this solution is

C̃ = 2(2^s − 2)^(1/s)   (7)

Proposition 1 For a given positive value of the recombination probability π, benefits from diversity are larger than benefits from specialization as long as I3/I>1/π, for any value of returns to scale s.

The reason is that the rate of growth of innovation is unbounded. Now assume the ratio of investments I3/I is given. For s=1 (constant returns to scale) we have Ṽ(1/2) = 1 + C/4 ≥ 1, since C≥0. If a positive investment I3 is devoted to innovation, the following holds true:

Proposition 2 The threshold s̃ below which a diversified system is the optimal choice has the property that s̃≥1, and s̃>1 iff π>0.

Corollary 1 For all decreasing or constant returns a maximum value of total final benefits is realized for the allocation α=1/2, i.e. for maximum diversity.

This situation is summarized in figure 2. The case of increasing returns to scale better represents real cases of technological innovation because of fixed costs and learning. Here there is a trade-off between scale advantages and benefits from diversity. If the probability of recombinant innovation is insufficiently large, returns to scale may be too high for diversity to be the optimal choice. In figure 1 this holds for the bottom four curves. In general we have the following result, which completes Proposition 1:

Corollary 2 Diversity α=1/2 can be optimal also with increasing returns to scale (s̃>1), provided that the probability π of recombination is large enough.
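The pilot-model quantities are easy to verify numerically. The sketch below is ours and only uses equations (5) and (6); it reproduces the threshold s̃ ≃ 1.2715 for C=1 and checks that diversity beats specialization at s=1.2.

import numpy as np
from scipy.optimize import brentq

def V_tilde(alpha, s, C):
    # Normalized benefits (5): alpha^s + (1-alpha)^s + C^s * alpha^s * (1-alpha)^s
    return alpha**s + (1 - alpha)**s + C**s * alpha**s * (1 - alpha)**s

def s_threshold(C):
    # Threshold s~ of (6): solve V_tilde(1/2) = 1 for s (for C > 0 the root lies above 1).
    return brentq(lambda s: V_tilde(0.5, s, C) - 1.0, 1.0 + 1e-9, 10.0)

print(round(s_threshold(1.0), 4))     # ~1.2715, the value reported for C = 1
print(V_tilde(0.5, 1.2, 1.0) > 1.0)   # True: with s = 1.2 < s~, diversity beats specialization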

A general model

Abandoning the hypothesis of zero initial values of parent options we introduce dynamics into the model. The general solution to system (1) is the following:

O1(t) = O10 + I1·t
O2(t) = O20 + I2·t   (8)
O3(t) = πI3 ∫₀ᵗ B(O1(s), O2(s)) ds

Figure 1: Benefits Ṽ as a function of the investment share α under increasing returns to scale (s=1.2) for different values of innovation effectiveness π=0, 0.2, 0.4, 0.6, 0.8, 1. Here I=4I3.

Figure 2: Benefits Ṽ as a function of the investment share α under decreasing returns to scale (s=0.5) for different values of innovation effectiveness π=0, 0.2, 0.4, 0.6, 0.8, 1. Here I=4I3.


In the long run (t >> Oi0/(αI), i=1,2) the parents' initial values are negligible and the balance converges to a constant value:

B = 4(O10 + αIt)(O20 + (1−α)It)/(O10 + O20 + It)² → 4α(1−α)   (9)

Proposition 3 In the long run the balance converges to the constant value B(α) = 4α(1−α), which is independent of initial values of parent options. O3 attains linear growth.

In general the dynamics of the balance depends on the relative magnitude of ratios O10/O20 and α/(1−α). There is a necessary and sufficient condition for constant balance:

Proposition 4 The balance is constant through time and equal to B(α) = 4α(1−α) iff

O10/O20 = α/(1−α) (10)

In the pilot model the probability of emergence is essentially the balance of parent options. Now a size factor is introduced to capture the positive effect that a larger cumulative size has on recombinant innovation, i.e. a kind of economies of scale effect in the innovation process. If the size effect is expressed by a factor S(O1,O2), the probability of emergence becomes:

PE = πB(O1,O2)S(O1,O2) (11)

where

S(O1,O2) = 1 − e^(−σ(O1+O2))   (12)

Here ∂S/∂Oi = ∂S/∂O = σ·e^(−σO), with O = Σi Oi. The parameter σ captures the sensitivity of PE to size. Assume that condition (10) holds true: with constant balance B=4α(1−α) the rate of growth of innovation is

Ȯ3(t) = I3·PE(t) = πI3·B·[1 − e^(−σ(O0+It))], where O0 = O10 + O20.   (13)

Its integral is

O3(t) = 4πI3·α(1−α)·[t + e^(−σO0)·(e^(−σIt) − 1)/(σI)]   (14)

The second term captures the size effect. Here Ȯ3(t)>0 and Ö3(t)>0 for all t≥0: the innovative option has a convex time pattern. There is a transitory phase in which the innovation "warms up" before becoming effective. This is a stylized fact of recombinant innovation. Relaxing the assumption of constant balance, we have a much more complicated expression for the new option. But in the long run (It >> O0) it reduces to the following:

O3(t) ≈ 4πI3·α(1−α)·t + correction terms in σ, I, O10 and O20, including a logarithmic term   (15)

The logarithmic term enters negatively in expression (15), producing the expected convex time pattern of innovation, which reflects the diminishing marginal contribution of parent technologies.
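A small numerical sketch of expression (14) as reconstructed above makes the "warm-up" phase visible; the parameter values are illustrative and not taken from the article.

import numpy as np

def O3(t, alpha=0.5, pi=1.0, I=1.0, I3=0.25, sigma=2.0, O0=0.1):
    # Cumulative innovative investment under constant balance with the size factor, eq. (14).
    return 4 * pi * I3 * alpha * (1 - alpha) * (
        t + np.exp(-sigma * O0) * (np.exp(-sigma * I * t) - 1) / (sigma * I))

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
print(np.round(O3(t), 4))   # grows slowly at first (warm-up) and almost linearly later on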

Optimization of diversity

In the general model the optimization of final benefits is enriched by several dynamical effects. If a size factor is present in the probability of emergence, the normalized benefits become:

Ṽ(α;t) = α^s + (1−α)^s + C^s·m(t)^s·α^s·(1−α)^s   (16)

where C=4πI3/I. A time-dependent factor shows up, m(t) = 1 + (e^(−σIt) − 1)/(σIt), with m′(t)>0, m(t)→0 as t→0 and m(t)→1 as t→∞. Let us define a function C(t)=C·m(t). Final benefits (16) are formally the same as in the pilot model (5), provided that C(t) is in place of C: although the size effect makes the system dynamic, optimal diversity will still be either α=0 and α=1 or α=1/2. This is better understood by looking at figures 1 and 2. Given I, I3 and π, as time flows C(t) increases and the benefits curve goes from the lower curve π=0 (C=0) to the upper curve π=1 (C=1). If π is large enough, optimal diversity will shift at some time from α=0 and α=1 to α=1/2. There is a threshold level s̃(t) where, for a given time horizon t, benefits with α=1/2 are the same as benefits from specialization (α=0 and α=1):

Figure 3: As time goes by, the region of returns to scale where diversity is optimal becomes larger.

"If investment is large enough, diversity will always become the optimal choice"

Econometrics

Page 52: Aenorm 62

50 AENORM 62 January 2009

Ṽ(α=1/2; t) = (1/2)^s̃(t) · [2 + (C(t)/2)^s̃(t)] = 1   (17)

Proposition 5 For a given time horizon t, diversity (α=1/2) is optimal iff s < s̃(t).

How does s̃(t) behave? The larger t, the larger s̃(t). The intuition is as follows. C(t) is increasing: time works in favour of recombinant innovation. As time flows, the region of returns to scale where diversity is optimal enlarges. The threshold s̃(t) converges to the value s̃ of the pilot model (see figure 3). Even with π=1 diversity may never become the optimal solution if returns to scale are too high (s̃<s). But if investment I3 is large enough, diversity will always become the optimal choice. This is consistent with Proposition 1: given returns to scale s, if one has infinite disposal of investment I3, the threshold s̃ can always be made such that s̃>s: at some time t one will see s̃(t)>s.

Concluding, the size factor introduces a dynamical scale effect into the system. The optimal solution may change through time, but it can only switch from α=0 and α=1 to α=1/2. This happens if and only if the probability of recombination is large enough. In the limit of infinite time (It>>O0) the size effect saturates (S(t)→1 as t→∞). If the time horizon is long enough the size factor can be discarded. The solution of the optimal diversity problem is approximated by the solution of the static pilot model. One final remark concerns the effect of initial values of parent options on the optimization of diversity. The main consequence is a reduction of symmetry in the system (unless O10=O20). The symmetric allocation α=1/2 is not a solution to the above equation in general. Optimal diversity is represented by a function of time α*(t)

Figure 4: Final benefits with positive initial values and no size effect. Here we have O10=1, O20=10, s=1.2, π=1 and I=4I3=1. The five time horizons are in units of 1/I.

(figure 4). In the long run symmetry is restored since the effect of initial values dissipates. Again, in the long run the system is well approximated by the pilot model.
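The time-dependent threshold can be traced numerically as well. The sketch below is ours; it uses (17) with C(t)=C·m(t) as defined above and illustrative parameters, and shows s̃(t) rising towards the pilot-model value.

import numpy as np
from scipy.optimize import brentq

def s_threshold_t(t, C=1.0, sigma=2.0, I=1.0):
    # Threshold s~(t) of (17): solve (1/2)^s * (2 + (C(t)/2)^s) = 1 with C(t) = C*m(t).
    m = 1.0 + (np.exp(-sigma * I * t) - 1.0) / (sigma * I * t)
    Ct = C * m
    return brentq(lambda s: 0.5**s * (2.0 + (Ct / 2.0)**s) - 1.0, 1.0 + 1e-9, 10.0)

for t in (0.5, 1.0, 5.0, 50.0):
    print(t, round(s_threshold_t(t), 4))   # increases with t and approaches s~ of about 1.2715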

Conclusions and further research

This study has proposed a problem of investment allocation where the decision maker faces a trade-off between scale advantages and benefits of diversity. Optimization of diversity comes down to finding an optimal trade-off between these two. Due to time constraints, high returns to scale and limited disposal of capital, specialization may turn out to be the best choice, despite recombinant innovation. But in principle, if two technologies recombine, diversity of investment can always be made the optimal decision.

Several directions for future research can be identified. Investment in the innovative option can be endogenized, i.e. made part of the allocation decision. Extending the number of parent options allows for an examination of complexity aspects of recombinant innovation, as well as for assessing the marginal effect of new options.

References

Ethiraj, S. K. and Levinthal, D. (2004). Modularity and Innovation in complex systems. Management Sci., 50(2), 159–173.

Fleming, L. (2001). Recombinant uncertainty in technological search. Management Sci., 47(1), 117–132.

van den Bergh, J. C. J. M. and Zeppini-Rossi, P. (2008). Optimal diversity in investments with recombinant innovation, Tech. Rep. 2008-091/1, Tinbergen Institute.


Global warming is the main contemporary environmental issue. It has even reached the G8 agenda at their meeting in Japan this summer. This illustrates that climate change, human beings, and especially the economy, are nowadays closely connected. When emissions of greenhouse gases (GHGs) increase, so do the concentrations of GHGs in Earth's atmosphere. This leads to an increase in radiative forcing, which causes temperatures to rise and in turn is the source of many damages. That is why it has become important to link climate change to economic models. The DICE model (Dynamic Integrated model of Climate and the Economy), developed by W.D. Nordhaus, is one of the first models to link these two subjects. The aim of the DICE model is to connect the scientific causes and effects of global warming to the economics of emissions of GHGs (Nordhaus and Boyer, 1999).

To DICE with Climate Change

Marleen de Ruiter

started her studies in econometrics at the UvA in 2003 but she switched to the Earth & Economics bachelor at the VU in 2006. She is expected to graduate in January 2009. Along with her studies, she is also a teaching assistant and researcher at the SPINlab-VU. This article is a summary of her bachelor thesis, which she wrote at the IVM under supervision of Dr. R. Dellink and Dr. M. van Drunen.

The first DICE model has often been criticized for its overly simplified climate change module. In more recent versions Nordhaus (2007) altered DICE by including feedback links (processes that tend to increase or decrease a certain effect) in the climate change module. This paper examines the importance of feedback links for the recent DICE model, compared to the first, simplified DICE model; it does so by using a regression analysis.

The enhanced greenhouse effect

Earth's surface bounces part of the short-wave radiation transmitted by the sun back into the atmosphere as short-wave radiation and it emits long-wave radiation (see figure 1). However, some gases in the atmosphere absorb the long-wave radiation, acting as a blanket. This is the natural greenhouse effect and is mainly (80%) caused by water vapour (Skinner, Porter and Park, 2004). Clouds are formed by small water vapour particles attached to aerosols. They have two main functions. On the one hand they radiate short-wavelength radiation from the sun back into the atmosphere, reducing the energy balance. On the other hand they capture the long-wave radiation emitted by Earth and therefore enlarge the energy balance (Waterloo, Post and Horner, 2007), (Houghton, 2004, p.90-92). However, the natural greenhouse effect is not the source of the contemporary problem; it is the enhanced greenhouse effect that causes Earth to warm up extensively and hence contributes to climate change (Chiras, 2006, p. 445). Contrary to its noteworthy influence on the natural greenhouse effect, water vapour does not play a significant role in the enhanced greenhouse effect, since it is not one of the GHGs emitted by humans (Houghton, 2004, p. 16). When the emissions of GHGs increase due to human activities, the atmospheric concentration of the GHGs increases, the 'blanket' becomes thicker and the temperature rises (Skinner, Porter and Park, 2004). However, the greenhouse effect becomes more complicated because of the several feedbacks which it induces (Houghton, 2004, p.90). Feedbacks tend to enlarge (positive feedback) or reduce (negative feedback) an impact (Skinner, Porter and Park, 2004). Feedbacks that play an important role in the enhanced greenhouse effect are for instance the cloud-radiation feedback described above and the water vapour feedback: when the atmosphere warms up because of the greenhouse effect, e.g. of water vapour, the amount of evaporation, mainly coming from oceans, rises. Moreover, the relative humidity of the atmosphere increases, causing more water vapour, which in turn tends to warm up the atmosphere (Waterloo, Post and Horner, 2007, p. 26).

Important GHGs

Table 1 shows the most common gases in the atmosphere as well as the ones that contribute to the enhanced greenhouse effect. According to the IPCC third assessment report (2001), the greenhouse gases with an enhanced greenhouse effect mentioned here all belong to the Kyoto greenhouse basket. The gases not elaborated on, because of their rare existence, but included in the Kyoto basket of greenhouse gases are: SF6, PFCs and HFCs.

Aggregation of GHGs

Scientists used to convert all the GHGs into carbon equivalent concentrations (often calculated in ppm¹ or GtC²) in order to compare them with each other. The concept of Global Warming Potential (GWP) has been used commonly. Houghton (2004) defines GWP as the ratio of the enhanced greenhouse effect of any gas compared to that of CO2. The index solves the problem of the different lifetimes of the GHGs (Houghton, 2004). Nowadays it is more common to convert all the concentrations of GHGs in the atmosphere into radiative forcing based on CO2 concentrations instead of carbon (Houghton, 2004). Houghton (2004, p.29) defines radiative forcing as "the change in average net radiation at the top of the troposphere, which occurs because of a change in the concentration of a GHG or some other change in the overall climate system". When the radiative forcing balance between outgoing and incoming radiation is positive, the surface will warm up, and vice versa (Chiras, 2006).

Gas                          Mixing ratio or mole fraction     Relative contribution to the enhanced
                             (fraction* or ppm¹)               greenhouse effect (values date from 2004)
Nitrogen (N2)                0.78*                             No contributor
Oxygen (O2)                  0.21*                             No contributor
Water vapour (H2O)           Variable (0-0.02*)                No contributor
Carbon dioxide (CO2)         370                               60%
Methane (CH4)                1.8                               15%
Nitrous oxide (N2O)          0.3                               5%
Chlorofluorocarbons (CFCs)   0.001                             12%
Ozone (O3)                   Variable (0-0.1000)               8%

Table 1: Composition of gases in the atmosphere and their relative contribution to the enhanced greenhouse effect. Source: mixing ratios in Earth's atmosphere (Houghton, 2004, p. 16) and contribution to the greenhouse effect (Skinner, Porter and Park, 2004, p. 514)

The formula Houghton (2004) uses to convert the atmospheric concentration of carbon dioxide (C) into its radiative forcing (R in W m–2) is given below, where C0 is the pre-industrial CO2 concentration of 280 ppm:

R = 5.3·ln(C/C0)

The constant in this formula (5.3) changes when we compute R for different gases. In its third assessment report, the IPCC (2001) gives a mathematical definition of GWP based on equivalent radiative forcing instead of equivalent concentrations. This gives the radiative forcing of every gas compared with that of CO2:

GWP = ∫₀ᵀᴴ aₓ·x(t) dt / ∫₀ᵀᴴ aᵣ·r(t) dt

where the ratio is computed "of the time-integrated radiative forcing from the instantaneous release of 1 kg of a trace substance relative to that of 1 kg of a reference gas"; aₓ and aᵣ denote the radiative efficiencies of the trace substance and the reference gas, and x(t) and r(t) the decay of the released pulses. TH is the time horizon for which the GWP, as the radiative forcing of a certain GHG, is computed (IPCC Working Group I, 2001, chapter 6).
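As a quick numerical check (ours, not part of the thesis summary), plugging the 370 ppm CO2 concentration of Table 1 into Houghton's formula gives a forcing of roughly 1.5 W m–2:

import math

def radiative_forcing(C_ppm, C0_ppm=280.0):
    # R = 5.3 * ln(C / C0), in W per square metre, for CO2.
    return 5.3 * math.log(C_ppm / C0_ppm)

print(round(radiative_forcing(370.0), 2))   # about 1.48 W m^-2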

The carbon cycle

To fully determine climate change, restricted to the enhanced greenhouse effect, the carbon cycle needs to be defined. There are four main reservoirs through which carbon moves: atmosphere, land (biosphere), ocean (hydrosphere) and the geological reservoirs (lithosphere), as shown in figure 2. Humans tend to increase the amount of CO2 in numerous ways. When the amount of anthropogenically emitted CO2 increases (because of fossil fuel burning and land-use change), the uptake of CO2 by oceans and land biota cannot entirely compensate for it. Therefore, the carbon cycle is no longer balanced and the amount of atmospheric carbon increases (Skinner, Porter and Park, 2004).

 

Figure 1: The enhanced greenhouse effect. Source: IPCC (1990). Homepage (www.ipcc.ch), 10th of July.

1 Parts per million. 2 Gigatonnes of carbon; 1 GtC = 3.7 Gt CO2.


The link between the carbon cycle and its effects upon radiative forcing is important. The IPCC (2007) defines how the climate responds to constant radiative forcing (equilibrium climate sensitivity) as the "equilibrium global average surface warming" when the concentration of CO2 is doubled. When the concentrations of GHGs build up in the atmosphere, the radiative forcing increases, causing the temperature of the atmosphere, and therefore the temperature of the shallow oceans, to rise as well. This warms the deeper oceans and leads to a positive feedback of warming the temperature of the atmosphere again (Houghton, 2004, p.93-95). The carbon cycle is influenced by several feedbacks, as in the above-mentioned example.

Different scenarios

In order to describe DICE, one of the main characteristics of climate change models needs to be defined; hence we divide climate change scenarios into three main groups. The first group consists of the 'business-as-usual' scenarios, also known as baseline or non-intervention scenarios. In these scenarios it is assumed that there won't be major changes in current attitudes and priorities, except for the policies that have recently been agreed on and are soon to be implemented (European Environment Agency, 2008). The second main group consists of the mitigation scenarios. These are the opposite of the 'business-as-usual' scenarios; they are based on the assumption that there will be changes in the future, for example technological alterations or a changing attitude towards economic development, in order to reduce future global warming impacts (European Environment Agency, 2008), (Houghton, 2004). In the third scenario, the stringent scenario, emissions are rigidly controlled by the government (The Encyclopaedia of the Earth, 2008).

Equation Output Optimal scenario Base scenario Stringent scenario

1 β1 (-0.023351, 0.002228) (-0.040784, 0.003081) (-0.015875, 0.001763)

β2 (0.603997, 0.037199) (0.687701, 0.033681) (0.583671, 0.044495)

R2 0.875639 0.912707 0.823092

β3 (3.752063, 0.014432) (3.787804, 0.011605) (3.711871, 0.018494)

2 R2 0.987840 0.995803 0.978397

β4 (0.977864, 0.014752) (0.953037, 0.011832) (0.990999, 0.016899)

3 β5 (0.206136, 0.010147) (0.223645, 0.008018) (0.196290, 0.011682)

R2 0.998423 0.999687 0.994125

Table 2: Summary of the most important output of the three scenarios where the R2 and the (μ,σ) of the β’s are shown.

 Figure 2: The main components of the natural carbon cycle; volumes and exchanges in billions of tonnes of carbon. Source: IPCC (2001) third assessment report; working group 1 – The scientific basis.

Econometrics

Page 56: Aenorm 62

54 AENORM 62 January 2009

most the same output as the model that does include feedback links. However, at first one would expect that when there is more climate change, the feedback links would be stronger. An explanation for this could be that when the-re is more climate change, the relative effect of the feedback links, compared to the total effect, becomes smaller; e.g. the relationship between the two is not linear. Furthermore, the μ and σ-values of the βj’s are very similar to each other per scenario. When constructing confidence intervals for all equations and scenarios, the base scenario of each equation contains either the maximum or minimum value of all three confidence intervals and the stringent scenario contains exactly the opposite. This also implies that for every βj, the confidence intervals of the optimal scenario are between the confidence intervals of the three scenarios, where the ranges sometimes partly overlap each other.

Conclusions

In order to examine the relevance of the feed-back links for the climate change module of DICE, three simplified equations, which didn’t include feedback links, have been established based on the equations described in the DICE 2007 GAMS code. When analysing the results, the adjusted R2 has been used as an impor-tant measure for the goodness of fit. It has been shown that for every j, βj has an adjusted R2>0.75. Therefore a re-estimation of the func-tional forms from the simplified equations based on the first DICE model approximates the most recent and more complex version of the func-tional form reasonably well. It should be noted however that this thesis is based on a number of boundaries and simplifications. Therefore, the subject of this thesis can be broadened in several ways.

References

Chiras, D.D. (2006). Environmental science, 7th edition, Massachusetts: Jones and Bartlett Publishers.

Cunningham, W.P. and Cunningham, M.A. (2008). Environmental science: a global con-cern, 10th edition, New York: McGraw-Hill.

Houghton, J. (2004). Global warming, Cambridge university press.

IPCC third assessment report (2001). Working group 1, the scientific basis, Sweden: IPCC.

IPCC (2007). Climate change 2007: synthesis rapport, Sweden: IPCC.

Nordhaus, W.D. (1994). Managing the global

The DICE model

The aim of the DICE model is to connect the scientific causes and effects of global warming to the economics of emissions of greenhouse gases: economic activity (Q) causes an incre-ase in emissions of GHGs (E) which increases the amount of C in the atmosphere (MAT), lea-ding to an increase in radiative forcing (Forc) which causes a rise in atmospheric tempera-ture (Tatm). This leads to an increase in radia-tive forcing which causes temperature to rise. Therefore it focuses on damages, rather than emission reduction, as a result of which it tries to find a balance between less economic invest-ments today (loss) and less climate change in the future (gain). Time periods are divided into periods of ten years starting from 1995. The goal is to maximise the social-welfare given the constraints concerning climate change and uti-lity, where the radiative forcing, carbon cycle and climate equations are globally aggregated.

Simplified equations

In order to examine the relevance of the feedback links, simplified equations are established based on the DICE GAMS code; they are shown in Box 1.

A summary of the output of the regression analysis of the three scenarios is shown in Table 2. What is striking about the adjusted R² values is that for every single equation the base scenario has the highest and the stringent scenario has the lowest adjusted R² value. This suggests that in a scenario without climate change policies, the simpler model achieves


Figure 3: Changes in the climate change module of DICE: the old parts are shown in grey and the recently added parts in black. Source: Nordhaus (1994 and 2008).

(1) MAT(T+1) = (1 − β1)·MAT(T) + β2·E(T)

(2) FORC(T) = β3·[log((MAT(T) + 0.000001)/596.4)/log(2)] + FORCOTH(T)

(3) TATM(T+1) = (1 − β4 − LAM)·TATM(T) + β5·FORC(T)

Box 1
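As an illustration of how the βj's in Box 1 can be estimated, the sketch below fits equation (1) by ordinary least squares. Everything in it is an assumption for illustration only: the parameter values and emission path are invented rather than taken from DICE output, and the no-intercept adjusted R² convention is just one possible choice, not necessarily the one used in the thesis.

import numpy as np

rng = np.random.default_rng(0)
periods = 20
true_b1, true_b2 = 0.12, 0.70          # hypothetical decadal parameters
e = 80 + 8 * np.arange(periods)        # hypothetical emissions path (GtC per decade)
mat = np.empty(periods + 1)
mat[0] = 735.0                         # rough atmospheric carbon stock in 1995, GtC
for t in range(periods):
    mat[t + 1] = (1 - true_b1) * mat[t] + true_b2 * e[t] + rng.normal(0, 2)

# Equation (1): regress MAT(T+1) on MAT(T) and E(T) without an intercept.
y = mat[1:]
X = np.column_stack([mat[:-1], e])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b1_hat, b2_hat = 1 - coef[0], coef[1]

# One common adjusted R-squared convention (centred total sum of squares).
resid = y - X @ coef
adj_r2 = 1 - (resid @ resid / (len(y) - X.shape[1])) / np.var(y, ddof=1)
print(f"beta1 = {b1_hat:.3f}, beta2 = {b2_hat:.3f}, adjusted R2 = {adj_r2:.3f}")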


Nordhaus, W. and Boyer, J. (1999). Roll the DICE again: the economics of global warming, MIT Press.

Nordhaus, W.D. (2008). A Question of Balance: Economic Modeling of Global Warming, MIT Press.

Skinner, B.J., Porter, S.C. and Park, J. (2004). Dynamic Earth – An introduction to physical geology, 5th edition, Hoboken: J. Wiley & Sons.

Waterloo, M.J., Post, V.E.A. and Horner, K. (2007). Introduction to hydrology, syllabus, Vrije Universiteit press.

European Environment Agency (2008). Homepage (http://www.eea.europa.eu), 17th of July.

Encyclopaedia of the Earth (2008). Homepage (http://www.eoearth.org),11th of September.

IPCC AR4 (2008). Homepage (http://www.ipcc.ch/pdf/assessment-report/ar4/syr/ar4_syr.pdf), 17th of July.

US Climate change science program (2008). Homepage (http://www.climatescience.gov), 17th of July.


Since the introduction of containers in the early 1960s, the containerised trade market has been growing rapidly. As a result, ships have grown in size to the current maximum of 13,000 TEU (twenty-foot equivalent unit; the smallest container is twenty feet long). Ports and terminals should handle their operations efficiently to keep up with this growth and to provide the necessary capacity and customer service. This holds especially in Europe, where competition among the ports in the Le Havre-Hamburg range is fierce. In other words, large amounts of containers have to be loaded, unloaded, stored, retrieved and transhipped in a short time span.

Container Logistics

Iris F.A. Vis

is an Associate Professor of Logistics at the VU University Amsterdam. She holds an M.Sc. in Mathematics from the University of Leiden and a Ph.D. from the Erasmus University Rotterdam. She received the INFORMS Transportation Science Section Dissertation Award 2002. Her research interests are in design and optimization of container terminals, vehicle routing, and supply chain management

Kees Jan Roodbergen

is an Associate Professor of Logistics and Operations Management at Rotterdam School of Management, Erasmus University. He received his M.Sc. in Econometrics from the University of Groningen and his Ph.D. from the Erasmus University Rotterdam. His research interests include design and optimization of warehousing and cross docking environments.

Terminal management needs to make many decisions in relation to all logistics processes to achieve the goal of minimizing berthing times of ships. Techniques from Operations Research can be used to formulate tools that assist in this decision making process. The complexity of the logistics processes ensures a continuous challenge for researchers to develop new mathematical techniques to obtain good and, whenever possible, optimal solutions. In this article, we first describe each of the logistics processes in more detail. Secondly, we summarise a novel approach for the storage and retrieval process as presented in the Operations Research paper of Vis and Roodbergen (2009).

Logistics Processes at container terminals

The process of unloading and loading a ship at a container terminal can be described as follows (see Figure 1): manned quay cranes unload containers from the ship's hold and the deck. These containers are positioned on transportation vehicles, which travel between the ship and the seaside of the stack (i.e., storage area). When a transportation vehicle arrives at the stack, a crane (e.g., a straddle carrier) takes the container off the vehicle and stores it in the stack. After a certain storage period, the containers are retrieved from the stack and are transhipped to other modes of transportation, such as barges, trucks and trains. Transhipment to trucks and trains occurs at the landside of the stack. To load containers onto a ship, these processes need to be executed in the reverse order. Various decision problems need to be solved to obtain an efficient container terminal. Vis and De Koster (2003) present a classification of decision problems that arise at a container terminal. Examples of these decision problems include allocation of ships to berths, selection of transportation and storage systems, layout

Figure 1: Top: Loading and unloading a container ship at Ceres Paragon Amsterdam (Ceres Paragon Terminals) Bottom: Schematic drawing of logistics processes at a container terminal


of the terminal, dispatching of containers to vehicles and sequencing of storage and retrieval requests at the stack. In the remainder of this article we will focus on one decision problem in particular, namely sequencing requests for cranes operating in the stack. First, we will discuss stacking operations in more detail. Thereafter, we will describe an efficient algorithm that can determine optimal schedules.

Stacking of containers

Straddle carriers operate in a block of containers. Containers are stored for a certain period in such a block to await further transportation. To load the container onto a ship or another modality, the container is retrieved from the stack. A block consists of a number of rows of containers with a pickup and delivery point (I/O point) at both ends of each row, namely one at the seaside and one at the landside. At a pickup and delivery point, the container is either taken from or placed on the ground. These pickup and delivery points are assumed to have infinite capacity. The transport of the container to and from this point is executed by another type of material handling equipment, for example a multi-trailer system. Clearly, the storage and transportation processes are decoupled. Vis and Roodbergen (2009) focus on the scheduling of storage and retrieval requests within the stack for a single straddle carrier (see Figure 2). A straddle carrier (SC) can travel over a single row of containers and transport a single container at a time. At the head or tail of each row an SC can make the crossover to another row. A tour of an SC indicates the order in which a set of requests needs to be handled. Each request has a pick-up and a drop-off location. For a retrieval request, the pick-up location is simply the location where that container is currently stored and the drop-off location is the row's I/O-point at either the seaside or the landside of the stack, depending on the container's destination. For a storage request, the pick-up location is the I/O-point where the container has been delivered, which in turn depends on whether the container arrived by sea

or by land. The drop-off location is the assigned storage location in the same row. Figure 3 provides an illustration of the required movements to store or retrieve a container. As a result, an SC travels full (loaded) from an origin to a destination location. Empty travel distances occur when an SC travels from the destination of one container to the origin of another container. Clearly, full travel distances are fixed. Therefore, storage and retrieval requests have to be scheduled for an SC in such a way that empty travel distances are minimized. In the remainder of this article we present a summary of the scheduling algorithm developed by Vis and Roodbergen (2009) to calculate an optimal tour for an SC to handle all requests. Figure 4a illustrates a straddle carrier operating in a stack with 6 rows, I/O-points at both ends of each row, the start and end point of the tour (i.e., the depot) and several storage and retrieval requests, with their destination or origin respectively, that need to be handled. In finding an optimal tour we assume that the tour of a straddle carrier starts and ends at the same, given position. Furthermore, there are no restrictions on the sequence of storage and retrieval requests within a given row. However, once a row is

Figure 2: Straddle carrier stores a container in the stack (Ceres Paragon Terminals)


Figure 3: Required movement to store and retrieve a container


Figure 4: (a) Example of a stack with 6 rows, I/O-points at both land- and seaside of each row and several storage and retrieval requests. (b) Directed network representation of (a). To keep the figure simple, we incorporated only arcs between two adjacent nodes.


entered, all requests in that row must be handled before the straddle carrier can continue to the next row. This assumption is quite reasonable in practice due to the large time needed to change rows. Note that this assumption restricts the number of times rows are entered and left; it does not put a direct restriction on the travel distances between rows, nor does it force a fixed sequence in which to visit the rows. If beneficial, a row may be initially skipped and only visited at a later stage. Vis and Roodbergen (2009) also show how this problem can be solved without these restrictions.
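To make the objective concrete before turning to the solution approach, the sketch below computes the empty travel distance of a given handling order for a single, hypothetical row; the layout, the request positions and the brute-force enumeration are assumptions for illustration only and merely stand in for the polynomial-time algorithm summarised below.

from itertools import permutations

# One row of the stack, positions measured from the seaside I/O point
# (hypothetical layout; the landside I/O point sits at position ROW_LENGTH).
ROW_LENGTH = 10
IO_SEA, IO_LAND = 0, ROW_LENGTH

# Each request is (pick-up position, drop-off position):
# a storage request runs from an I/O point to its assigned slot,
# a retrieval request runs from its slot to an I/O point.
requests = [
    (IO_SEA, 4),    # storage arriving by sea, stored at slot 4
    (7, IO_LAND),   # retrieval leaving by land, stored at slot 7
    (IO_LAND, 2),   # storage arriving by land, stored at slot 2
    (6, IO_SEA),    # retrieval leaving by sea, stored at slot 6
]

def empty_travel(order, start=IO_SEA):
    """Empty distance: from the start point to the first pick-up and from
    each drop-off to the next pick-up. Full (loaded) travel is fixed and
    therefore ignored when comparing sequences."""
    pos, total = start, 0
    for i in order:
        pick, drop = requests[i]
        total += abs(pos - pick)
        pos = drop
    return total

# Brute force over all handling orders (fine for a handful of requests).
best = min(permutations(range(len(requests))), key=empty_travel)
print(best, empty_travel(best))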

Sequencing storage and retrieval requests

The first step in the solution approach is to construct a network by representing each location that needs to be visited to perform a storage or retrieval request as a node. Furthermore, the pick-up and delivery points of each row and the depot are also represented by nodes. Between two nodes multiple directed arcs exist. Figure 4b represents the network for the example depicted in Figure 4a. The nodes s correspond to the locations where containers must be stored. The nodes r correspond to the locations where containers must be picked up. The superscript indicates whether the storages (retrievals) originate from (end at) either input/output point I/O1 or I/O2. Node v0 corresponds to the start point of the tour. Nodes a_i^1 and a_i^2 correspond to the locations of I/O1 and I/O2, respectively, for each row i. Next, we need to define the arcs in the network. For any pair of nodes x,y in the same row we introduce two arcs: (x,y) and (y,x). Some of these arcs must be traversed, i.e., are required (see Figure 3). Furthermore, we introduce an unlimited number of copies of the arcs to make the connections between the rows. The length of an arc (x,y) is denoted as d(x,y) and simply equals the physical distance between the nodes x and y. Only directed arcs between two adjacent nodes have been drawn in Figure 4b to keep the figure simple. For example, the arc (a_1^1, s_11^2) also exists, with distance 9. Figure 5a shows row 1 of Figure 4b, with only the required arcs drawn. That is, the arc from a retrieval to the corresponding I/O-point, and the arcs from the appropriate I/O-point to the storages have been drawn. These arcs must be traversed; other arc traversals are optional, but some must be included to obtain a tour. A

complete graph, based on this formulation, for only the first row is given in Figure 5b. We now have a directed network that conforms to the definition of the Rural Postman Problem (RPP, see e.g. Lawler et al., 1985). The general Rural Postman Problem is known to be NP-hard (Lenstra and Rinnooy Kan, 1976). However, in our problem the nodes of the network are not freely positioned in Euclidean space, but restricted to a limited number of parallel lines (the rows of the stack). Connections between the rows can only occur at the head and tail of the rows. Furthermore, all required arcs in any row i either originate at or end in the head and tail of the row. We can exploit this special structure to solve the problem efficiently. The first step is to transform the current network formulation into one that classifies as a Steiner Travelling Salesman Problem (STSP). The STSP looks for a shortest tour such that a given subset of the vertices is visited at least once, and is only solvable in polynomial time for some situations (see e.g. Cornuéjols, Fonlupt and Naddef, 1985). To achieve this, we need to change the RPP network formulation such that we no longer have required arcs, but are still able to obtain a valid solution for the original problem. All nodes are also included in the STSP formulation (see Figure 5c). Most arc lengths d(x,y) are equal to their RPP counterparts, with some exceptions. For example, before we can travel from retrieval location r_11^1 to storage location s_11^2, the straddle carrier first needs to deliver the container to I/O1. Thereafter, the straddle carrier needs to travel from I/O1 to the origin of the storage request, I/O2, and then with the container to storage location s_11^2. We now have a network formulation that can serve as an input for our solution method.


Figure 5: (a) Representation of row 1 of Figure 4b with only the required arcs drawn. (b) Graph of the RPP formulation with dashed lines representing the required arcs. (c) Graph of the reformulation as STSP. Required nodes are solid black. Arc lengths that differ from their RPP counterparts are highlighted.

"This algorithm significantly outperforms common rules of thumb "


Shortest tours in the network will be determined by means of dynamic programming. Ratliff and Rosenthal (1983) already used dynamic programming to solve the problem of routing order pickers in a warehouse. In their problem a shortest tour has to be found to retrieve a number of products from specified locations in a warehouse with a number of parallel aisles (partly comparable with the rows in a container stack). However, their method is only applicable to undirected networks without required arcs and does not incorporate storage requests. The objective of this dynamic programming algorithm is to construct a directed tour subgraph visiting all nodes at least once and all arcs at most once. The essence of a dynamic programming method consists of three components: the potential states, the possible transitions between states, and the costs involved in such a transition. By consistently adding arcs (transitions) to partial tours (states), the algorithm gradually builds towards a complete tour. There are two types of transitions in the algorithm. The first transition type consists of adding connections between two consecutive rows. The second type of transition consists of adding arcs within a single row. Given the fact that a row may be visited only once, only five

useful arc configurations can lead to a shortest feasible tour in a single row (see Figure 6). These five arc configurations are, however, not always trivial to obtain. A separate optimization procedure is required to find a shortest sequence of jobs within a row for any given starting and ending point. The first step in this procedure is to formulate a bipartite network with a copy of each node s and r at both sides. The start point is included at the left side and the end point at the right side. The resulting assignment problem can be solved with an existing method such as the Hungarian method (e.g., Papadimitriou and Steiglitz, 1982); a small sketch of this step is given at the end of this article. The resulting solution may contain unconnected cycles. However, Vis and Roodbergen (2009) show that these can be patched such that the final result is optimal. The resulting five configurations can be used in the dynamic programming algorithm. The dynamic programming algorithm starts with an empty tour. Next, all possible routes within row 1 are determined and the costs are calculated according to the explanation provided above. We now either have one empty partial tour subgraph (if there is no work to be done in row 1) or four partial tour subgraphs. The next step is to make the connection to row 2. Intuitively, it is quite clear what these connections should look like (see Figure 7). For example, a partial tour subgraph with a south-to-south traversal (see Figure 6(iv)) in row 1 must be connected by two arcs at the south position. Any other option would either not result in a feasible tour or would not be optimal. Costs are counted in each step by simply adding the lengths of the arcs that are added. The next step is to


Figure 6: Five ways to visit a row


Figure 7: Possible transitions to make crossover to another row


Figure 8: Optimal route for a straddle carrier to handle the requests denoted in Figure 4a. The red numbers indicate the order in which the arcs are handled. The straddle carrier starts in row 1, finishes in row 2 with step 17 and returns to the depot with step 18.


determine all possible partial tour subgraphs by adding arcs in row 2, and so on. By applying this dynamic programming algorithm, a shortest directed multiple-row storage and retrieval tour can be found in polynomial time. Figure 8 shows the solution for the example depicted in Figure 4a. Simulation studies demonstrate that this algorithm significantly outperforms common rules of thumb such as first-come-first-served. This is just one example of a mathematical tool that can be used by terminal management to assist in daily decision making. Many algorithms are available that can be directly implemented in computer systems at terminals. Both fast calculation times and good quality solutions demonstrate the usefulness of these methods to obtain direct savings in berth times of ships and logistics costs at terminals.
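The within-row sequencing step mentioned above can be illustrated with a standard solver for the assignment problem. The sketch below is a minimal, hypothetical example: the job positions, the cost definition and the use of SciPy's Hungarian-type routine are assumptions for illustration, and the cycle-patching step of Vis and Roodbergen (2009) is omitted.

import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-type solver

# Hypothetical jobs within one row: (pick-up position, drop-off position),
# with positions measured along the row. 'start' and 'end' are the points
# where the straddle carrier enters and leaves the row.
jobs = [(0, 4), (7, 10), (10, 2), (6, 0)]
start, end = 0, 10
BIG = 10**6  # forbids a job being its own successor

# Left side: start point and the drop-off of each job.
# Right side: end point and the pick-up of each job.
# Cost = empty travel from a left node to the right node it precedes.
left = [start] + [drop for _, drop in jobs]
right = [pick for pick, _ in jobs] + [end]
cost = np.zeros((len(left), len(right)))
for i, a in enumerate(left):
    for j, b in enumerate(right):
        cost[i, j] = abs(a - b)
for k in range(len(jobs)):          # job k cannot directly precede itself
    cost[k + 1, k] = BIG

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())
# Note: the resulting assignment may contain disconnected cycles; Vis and
# Roodbergen (2009) show how such cycles can be patched into an optimal sequence.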

References

Cornuéjols, G., Fonlupt, J. and Naddef, D. (1985). The traveling salesman problem on a graph and some related integer polyhedra, Mathematical Programming, 33, 1-27.

Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.G.H. and Shmoys, D.B. (1985). The Traveling Salesman Problem, a Guided Tour of Combinatorial Optimization, John Wiley & Sons, Chichester.

Lenstra, J.K. and Rinnooy Kan, A.H.G. (1976). On general routing problems, Networks, 6, 273-280.

Papadimitriou, C.H. and Steiglitz, K. (1982). Combinatorial Optimization, Algorithms and Complexity, Prentice-Hall, Inc., Englewood Cliffs.

Ratliff, H.D. and Rosenthal, A.S. (1983). Orderpicking in a rectangular warehouse: a solvable case of the traveling salesman problem, Operations Research, 31(3), 507-521.

Vis, I.F.A. and De Koster, R. (2003), Transshipment of containers at a container terminal: an overview, European Journal of Operational Research, 147, 1-16.

Vis, I.F.A. and Roodbergen, K.J. (2009), Scheduling of container storage and retrieval, Operations Research, forthcoming.

http://www.ceresglobal.nl (Ceres Paragon Terminals)

http://www.irisvis.nl/container


During recent years longevity (increasing life expectancy) has become an increasingly important issue, especially for pension funds. Most pension funds have a Defined Benefit pension scheme. This means that the accrued pension rights imply a certain, guaranteed benefit per year. If, for example, life expectancy rises by one year, the pension fund must pay an extra year of benefit to, on average, all pensioners. Hence, the expected present value of the pension liabilities rises with increasing life expectancy.

Longevity in the Netherlands: History and Projections for the Future

Henk Angerman

works at the ALM department of the Algemene Pensioen Groep (APG). His daily work consists of monitoring the solvency positions of pension funds managed by APG and participating in separate projects. An example of such a project is the so-called "langlevenproject", which aims to forecast the future life expectancy of the Dutch in general and of Dutch pension participants in particular (these tend to live longer!).

Tim Schulteis

works at the PRO department (Product Research and Development) of the Algemene Pensioen Groep (APG). His main activity is conducting actuarial calculations in order to advise the social partners who determine the pension deal of a pension fund that is a customer of APG. Furthermore, he is involved with the premium model for disability pensions and specific research projects, such as research into future life expectancy in relation to the pension commitment.

Currently, pension funds already have the obligation (by regulation) to account for longevity in their premiums. A lot of research is done in the field of modelling life expectancy and the probabilities of dying at a certain age. The aim is to produce a projection (prediction) of the probabilities of death for different ages in different years. In classical actuarial mathematics, probabilities of death are assumed to depend only on age. A typical feature (by construction) of models for longevity is that the probability of death depends not only on age, but also on the year (e.g. the probability of death for a 28 year old in 2009 is different from the probability for a 28 year old in 2023). In the Netherlands these predictions of future life expectancy are mainly produced by the Dutch Statistical Office (CBS) and the Dutch Actuarial Association (AG). The Dutch Statistical Office uses so-called

"expert opinion models", in which the opinions of experts on the future development of important causes of death are incorporated. In this publication, however, we will follow a different path and present the most commonly used statistical model. In this setup a certain functional dependence is assumed and the parameters of the model are fitted by methods known from econometrics.

Qualitative considerations

Before we present the model, we will try to give some intuition for its assumed functional form by looking at some data. The first graph shows the natural logarithm of the probability of dying at a certain age for women in the period 2001-2006. First, we notice that these probabilities are fairly large at age zero and then decrease (showing the effect of infant mortality). After reaching a minimum at the age of about 10 years, the graph increases in a way that looks linear, implying an exponential growth of the probability of death with age: the probability of death increases by

Graph 1: Log mortality rate for women, 2001-2006


A first candidate could be of the form

log m(x,t) = α_x + κ_t + ε_{x,t}   (1)

Here, log m(x,t) denotes log-mortality. Hence, this model describes a situation in which log-mortality is given by an age-dependent factor (α_x), a time-dependent trend (κ_t) and an error term (ε_{x,t}) that is assumed to be white noise.

A model of this form, however, does not capture the effect shown in the second graph. In the model the term κ_t does not depend on age and therefore the modelled dynamics of log-mortality do not depend on age. We saw in the second graph that the decrease of log-mortality with time is different for different ages.

Therefore, our second candidate is a model of the form

log m(x,t) = α_x + β_x·κ_t + ε_{x,t}   (2)

This model enables us to capture the effect that model (1) missed. We can look at κ_t as a time-dependent trend and at β_x as the "sensitivity" of age x to this trend. We choose

α_x = (1/T) Σ_{t=1}^{T} log m(x,t)   (3)

where t = 1,…,T is the observed time interval. If we look at log m(x,t) as a matrix with 111 rows (corresponding to the ages 0,…,110) and T columns, notice that (2) assumes that the matrix A(x,t) defined as

A(x,t) = log m(x,t) − α_x

approximately 10% per year. The second graph shows the dynamics over time of the probabilities of death for women at certain ages. Again the graph shows the natural logarithm of the probability of death (we will call this log-mortality in the remainder of this article). The graph shows a clear declining tendency, but with different "speeds" of decline for different ages. The third graph shows the dynamics of life expectancy for men (thick line) and women (thin line) between 1850 and 2006. Here, life expectancy is calculated as the expected value of the remaining lifetime, using the probabilities of death as they were measured in that year. Calculating the expected remaining lifetime of, say, a 15-year-old in 1970 from the probabilities that person will actually face would require future probabilities, since these persons are 16 in 1971, 17 in 1972 and so on. This has some major drawbacks: one needs projections for future mortality, and these projections must have a very long horizon (the cohort from the example turns 75 in 2030). Therefore, life expectancy in a given year is always defined here in terms of the probabilities belonging to that year (the so-called period life expectancy).
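A minimal sketch of this definition, using a purely stylised mortality curve rather than actual CBS data, could look as follows.

import numpy as np

def period_life_expectancy(qx):
    """Period life expectancy at age 0 from one-year death probabilities
    q_x observed in a single calendar year (curtate expectation plus 1/2,
    a common approximation for deaths occurring mid-year)."""
    qx = np.asarray(qx, dtype=float)
    survival = np.cumprod(1.0 - qx)        # probability of reaching age x+1
    return survival.sum() + 0.5

# Hypothetical, stylised mortality curve: infant mortality, a minimum
# around age 10 and roughly 10% exponential growth per year thereafter.
ages = np.arange(0, 111)
qx = 0.004 * np.exp(-0.5 * ages) + 0.00005 * np.exp(0.1 * ages)
qx = np.clip(qx, 0.0, 1.0)
print(round(period_life_expectancy(qx), 1))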

Model building

We will use the intuition from the presented graphs to set up a model for the dynamics of mortality.

Graph 3: Life expectancy for men (thick line) and women (thin line) in the Netherlands, 1850-2006

"Existing models perform well from a statistical point of view, but may significantly underestimate future longevity"

Graph 2: Log mortality rate for women at selected ages, by year


specific country is depicted that had the highest life expectancy among all countries in that time interval. Notice that a linear regression would fit very well here and that the slope of this regression would be 0.25 (i.e. life expectancy increases by 3 months every year!). The fourth graph also shows life expectancy in the Netherlands for men and women. By collecting data on the number of smokers it is possible to link this number, to some extent, to the deviation from the "graph of best countries". Since the percentage of smokers is decreasing, it is plausible that life expectancy will converge to life expectancy in Japan, or at least converge to the slope of 0.25. This would imply that current predictions underestimate the effects of longevity. If the parameters of a statistical model are estimated over a time period in which longevity effects were small for some reason (e.g. smoking), it is clear that projections based on these estimates will produce forecasts with a "small" increase in longevity in the future. Hence, the Lee-Carter model performs well from a statistical point of view, but may not tell the whole truth and may significantly underestimate future longevity in the case at hand. The fifth graph additionally shows the prediction made by the Dutch Statistical Office. From this it is clear that this prediction has a slope that is significantly lower than 0.25. Furthermore, life expectancy in the Netherlands in 2050 is predicted to be below current life expectancy in Japan. At the moment APG is working on models that take this effect into account.

which is a matrix that depends on both x and t, can be written, apart from the white noise ε_{x,t}, as the product of a column vector β_x, depending only on x, and a row vector κ_t, depending only on t.

Furthermore, we must impose some dynamics on κ_t. If we look at the fourth graph, we see that it might be a good attempt to model this with a first-order autoregression

κ_t = Θ + κ_{t-1} + η_t   (4)

where η_t is assumed to be white noise. The model described by the equations (2), (3) and (4) is the so-called Lee-Carter model for log-mortality.

Estimation of the parameters

The estimation of the Lee-Carter model is done in consecutive steps. First, the mentioned decomposition

A(x,t) = β_x·κ_t

is performed by so-called principal component analysis. Thereafter, the dynamics of κ_t are re-estimated by techniques known from time series analysis to match the fitted and the observed data. After the parameter estimation is performed, the Lee-Carter model can be used to compute predictions of future mortality. A complete description of the methods and their evaluation goes beyond the scope of this article. However, we mention the fact that an analysis of the residuals shows that for some countries a so-called cohort effect must be introduced (a term depending on t−x), whereas for Dutch data this extra variable is not necessary and the Lee-Carter model performs well.
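A minimal sketch of these two estimation steps, run on a purely synthetic log-mortality surface rather than on real data, is given below; the normalisation chosen for β_x and κ_t is one common convention and not necessarily the one used by the authors.

import numpy as np

def fit_lee_carter(log_m):
    """Minimal Lee-Carter fit for a matrix of log-mortality rates
    (rows = ages, columns = years): alpha_x as in (3), a rank-one
    principal-component decomposition of A(x,t) via the SVD, and the
    drift of the random walk for kappa_t as in (4)."""
    alpha = log_m.mean(axis=1)                     # equation (3)
    A = log_m - alpha[:, None]                     # A(x,t) = log m(x,t) - alpha_x
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U[:, 0]
    kappa = s[0] * Vt[0, :]
    # One common normalisation: beta sums to one, kappa averages to zero.
    scale = beta.sum()
    beta, kappa = beta / scale, kappa * scale
    alpha = alpha + beta * kappa.mean()
    kappa = kappa - kappa.mean()
    drift = np.diff(kappa).mean()                  # Theta in equation (4)
    return alpha, beta, kappa, drift

# Purely synthetic log-mortality surface (111 ages x 30 years) for illustration.
rng = np.random.default_rng(0)
ages = np.arange(111)[:, None]
years = np.arange(30)[None, :]
log_m = (-9.0 + 0.08 * ages) - 0.012 * (ages / 110.0) * years \
        + 0.02 * rng.standard_normal((111, 30))
alpha, beta, kappa, drift = fit_lee_carter(log_m)
print(round(drift, 4))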

Concluding remarks

We want to conclude our article by considering the dynamics of life expectancy in the world since 1880. First we present a graph in which the period 1880-2006 is divided into different time intervals for which the life expectancy of that

Graph 4: Life expectancy over time for the country with the highest life expectancy in each period (New Zealand, Norway, Iceland, Japan); the trend increases by roughly 3 months per year.

Graph 5: Life expectancy for women in the Netherlands, together with the CBS prediction of 2006.


The Jackknife and its Applications

The validity of econometric inference is almost invariably based on asymptotic theory, and for good reason. Finite sample moments of estimators are generally very complex functions of the data, making their analysis extremely complicated. While relying on asymptotic approximations works well when the available sample is large enough, the small sample distribution of estimators might be poorly approximated if it is not. This potentially results in misleading point estimates and erroneous inference. The Jackknife is a nonparametric technique based on subsampling that aims at alleviating some of these issues. Its apparent simplicity and general applicability have provoked a long line of research in statistics but, until recently, its potential has been largely overlooked in the field of econometrics. I give a brief discussion of the origins of the Jackknife and provide some examples from the recent econometric literature where the Jackknife idea has been applied. Miller (1971) provides a good introduction to the Jackknife. Parr and Schucany (1980) contains a bibliography of statistical work prior to 1980.

Preliminaries

Let θ be the population parameter of interest and denote by θ_n an estimator based on a sample of size n. Assume θ_n to be consistent, i.e. θ_n → θ as n → ∞. Although a variety of functionals could be of interest, for inference we are most often concerned with the first two moments of θ_n. More precisely, we care about its finite sample bias and variance, which are defined as

B(θ_n) = E[θ_n] − θ,   (1)

V(θ_n) = E[θ_n θ_n'] − E[θ_n] E[θ_n]'.   (2)

For example, when B(θ_n) is large, point estimates of θ will, on average, be located far from the truth. Additionally, confidence sets will have poor coverage, i.e., in repeated samples, they will contain θ far less often than their nominal confidence level would suggest.

Origins of the Jackknife estimator

Quenouille (1949, 1956) noticed that, if B(θ_n) is a smooth function of the data, a linear combination of θ_n and estimators based on subsamples removes part of its bias, i.e. we obtain a bias-reduced estimator. The smoothness requirement is that, for some j ≥ 2,

B(θ_n) = B_1/n + B_2/n² + … + B_j/n^j + o(n^{-j}),   (3)

where B_1,…,B_j are unknown constants and o(n^{-j}) is a term that vanishes more quickly than n^{-j} as n → ∞. Assume for a moment that the data are i.i.d. Let θ_{n-1,i} denote the estimator from the subsample obtained by deleting the i-th observation from the full sample. Then B(θ_{n-1,i}) follows the same expansion as in (3), but with n replaced by n−1. Therefore

nB(θ_n) − (n−1)B(θ_{n-1,i}) = −B_2/(n(n−1)) + o(n^{-1}),   (4)

and the delete-1 Jackknife estimator,

θ_n^J = (1/n) Σ_{i=1}^{n} [n·θ_n − (n−1)·θ_{n-1,i}],   (5)

will be unbiased up to first order¹. The term between brackets is referred to as a pseudo-value. The averaging over all possible pseudo-

¹ Note that the expansion of the bias of the jackknifed estimator will generally have coefficients B_2',…,B_j', say, that differ from those of θ_n. This is because the bias terms that are not removed by jackknifing are rescaled, as the first-order bias of θ_n is estimated with error. Furthermore, jackknifing, at least in a cross-sectional setting, does not improve the mean squared error.

Koen Jochmans

is a PhD student at the Department of Economics of the Katholieke Universiteit Leuven and a Pre-Doctoral Fellow of the EU RTN "Microdata: Methods and Practice" at CEMFI, Madrid. His main research interests lie in Microeconometrics and his work concerns identification, estimation, and inference in the presence of unobserved heterogeneity.


instruments grows. The Jackknife Instrumental Variables estimator (JIVE) (Angrist, Imbens and Krueger, 1999) uses Z_{-i} and X_{-i}, the matrices Z and X with the i-th observation deleted, to construct

π_{n-1,i} = (Z_{-i}'Z_{-i})^{-1} Z_{-i}'X_{-i}   (11)

for each i. As Z_{-i} and X_{-i} are independent of ξ_i and ε_i, this reduces the finite sample bias. JIVE then is

θ_n^{JIVE} = [(Z ⊙ Π_{n-1})'X]^{-1} [(Z ⊙ Π_{n-1})'Y],   (12)

with Π_{n-1} = (π_{n-1,1},…, π_{n-1,n})' and ⊙ denoting the row-by-row product, so that the i-th row of Z ⊙ Π_{n-1} is Z_i π_{n-1,i}. Whether JIVE actually leads to improved inference is still an unresolved issue. The interested reader is referred to Davidson and MacKinnon (2007) and the references therein.
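A minimal sketch of this construction on simulated data is given below; the data generating process, instrument strength and sample size are assumptions chosen purely for illustration.

import numpy as np

def jive(y, X, Z):
    """Jackknife Instrumental Variables sketch: the first-stage coefficient
    for observation i is estimated with observation i deleted, as in (11),
    so the constructed regressor Z_i pi_{n-1,i} is independent of eps_i and xi_i."""
    n = len(y)
    X_hat = np.empty_like(X)
    for i in range(n):
        keep = np.arange(n) != i
        Zi, Xi = Z[keep], X[keep]
        pi = np.linalg.solve(Zi.T @ Zi, Zi.T @ Xi)    # first stage, equation (11)
        X_hat[i] = Z[i] @ pi
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)  # second stage, as in (12)

# Simulated example with one endogenous regressor and three instruments.
rng = np.random.default_rng(1)
n = 500
Z = rng.standard_normal((n, 3))
u = rng.standard_normal(n)                # common error creating endogeneity
X = (Z @ np.array([0.3, 0.2, 0.1]) + u + rng.standard_normal(n)).reshape(-1, 1)
y = X[:, 0] * 1.0 + u
print(jive(y, X, Z))                      # roughly recovers the true coefficient of 1.0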

Nonlinear panel data models

In applications with n × T panel data one often specifies a parametric model conditional on a fixed effect, representing time-constant unobserved heterogeneity amongst observations, and estimates θ by Maximum Likelihood (ML). Call this estimator θ_{n,T}. The ML estimator is, however, typically inconsistent for large n and small T, the most appropriate asymptotics in most microeconometric panels. This is the incidental parameters problem (Neyman and Scott, 1948). While consistent for T → ∞, the limiting distribution of θ_{n,T} under double asymptotics is incorrectly centred, even if T grows as fast as n. Hahn and Newey (2004) noted that, for smooth models, the small-T bias of θ_{∞,T} = plim_{n→∞} θ_{n,T} may be expanded in integer powers of T^{-1}, i.e.

θ_{∞,T} − θ = B_1/T + B_2/T² + … + B_j/T^j + o(T^{-j}).   (13)

Therefore, assuming the data to be i.i.d., the leading term in this expansion may be removed by applying the delete-1 Jackknife (as in (5), with n replaced by T) to obtain, say, θ_{n,T}^{PJ}. Additionally, the removal of this term makes the asymptotic distribution of (nT)^{1/2}(θ_{n,T}^{PJ} − θ) centred around zero while retaining the efficiency of the ML estimator, even if T grows more slowly than n (T³/n → ∞ is required). As a consequence, finite sample inference is greatly improved if T is not too small. Additionally,

values is not required for bias reduction but is important for variance considerations. In the case of dependent data, (4) will generally not hold, but one may form a subsample of p < n consecutive observations, where n = p + q, and form an estimate of the bias as −(p/q)[θ_n − θ_p]. Tukey (1958) noticed that Quenouille's idea could be useful in the context of estimating V(θ_n) as well. He observed that pseudo-values could be treated as being approximately independent while having approximately the same variance as n^{1/2}θ_n. Therefore the sample variation in the pseudo-values can be used to form the Jackknife variance estimator

V_J(θ_n) = ((n−1)/n) Σ_{i=1}^{n} [θ_{n-1,i} − (1/n) Σ_{j=1}^{n} θ_{n-1,j}]².   (6)

The idea of using pseudo-values for variance estimation bears similarity to the Bootstrap and, in fact, may be seen as an approximation to it (see e.g. Efron and Tibshirani, 1993). The delete-1 estimator is the most commonly encountered version of the Jackknife, but not the only one available. For example, Shao and Wu (1989) consider the delete-d Jackknife for non-smooth statistics such as the median, and Schucany, Gray and Owen (1971) propose an additional correction to θ_n^J to achieve higher-order bias reduction. Extensions to non-random sampling schemes and hybrid versions, such as the Jackknife-after-Bootstrap (Efron, 1992), have also been considered.
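The sketch below implements the delete-1 bias correction (5) and the variance estimator (6) for a generic scalar estimator; the biased variance estimator used to exercise it is an example of my own choosing, not one taken from the references.

import numpy as np

def jackknife(theta, data):
    """Delete-1 jackknife for an i.i.d. sample: the bias-corrected
    estimate of equation (5) and the variance estimate of equation (6)."""
    n = len(data)
    full = theta(data)
    loo = np.array([theta(np.delete(data, i, axis=0)) for i in range(n)])
    corrected = np.mean(n * full - (n - 1) * loo)             # equation (5)
    variance = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)  # equation (6)
    return corrected, variance

# Illustration on a biased estimator: the ML variance, sum((x - mean)^2)/n,
# has bias -sigma^2/n, which the delete-1 jackknife removes.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 2.0, size=50)                             # true variance is 4
mle = lambda d: np.mean((d - d.mean()) ** 2)
print(mle(x), *jackknife(mle, x))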

Instrumental variables estimation

Consider the classic endogenous variables set-up:

Y = Xθ + ε,   (7)
X = Zπ + ξ,   (8)

where X may contain endogenous variables due to correlation between ε and ξ. Assuming X’Z to have full rank, the Two-Stage Least Squares estimator (based on i.i.d. data) is

θn = [(Zπn)’X]-1[(Zπn)’Y], (9)Zπn = Zπ + PZξ, (10)

where PZ = Z(Z’Z)-1Z’, and will be consistent if E[π’Z’ε] = 0. In finite samples however, Zπn and ε will be correlated, leading to bias in θn. The bias typically increases as the number of instru

"The potential of the Jackknife has been overlooked in the field of econometrics"


as n is typically quite large, the mean squared error of θ_{n,T}^{PJ}, E(θ_{n,T}^{PJ} − θ)², is much smaller than that of θ_{n,T}.
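The stylised sketch below illustrates the incidental parameters problem and the delete-1 jackknife over the time dimension in the classic Neyman and Scott (1948) variance example; it is a toy illustration under my own simulation assumptions, not the general procedure of Hahn and Newey (2004).

import numpy as np

# Stylised Neyman-Scott set-up: y_it = mu_i + eps_it with eps_it ~ N(0, sigma^2).
# The ML estimator of sigma^2, averaging (y_it - ybar_i)^2 over all cells,
# has bias -sigma^2/T, which does not vanish for fixed T.
rng = np.random.default_rng(3)
n, T, sigma2 = 2000, 4, 1.0
y = rng.standard_normal((n, 1)) * 2.0 + np.sqrt(sigma2) * rng.standard_normal((n, T))

def ml_sigma2(panel):
    return np.mean((panel - panel.mean(axis=1, keepdims=True)) ** 2)

theta_full = ml_sigma2(y)                              # roughly sigma2*(T-1)/T
# Delete-1 jackknife over the time dimension, i.e. (5) with n replaced by T.
loo = np.array([ml_sigma2(np.delete(y, t, axis=1)) for t in range(T)])
theta_pj = T * theta_full - (T - 1) * loo.mean()
print(theta_full, theta_pj)                            # e.g. ~0.75 versus ~1.00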

Nonparametric estimation

As a final application, imagine a two-step estimator, θ_n(π_n) = θ_{n,h}, where π_n is a kernel-based estimator with bandwidth h. This situation frequently arises in semi-parametric models with endogeneity. The convergence speed of θ_{n,h} depends on the rate at which h is allowed to shrink to zero. As a consequence, θ_{n,h} may not be root-n consistent, and n^{1/2}(θ_{n,h} − θ) might be centred at some θ_h ≠ 0 rather than at zero. This happens, for example, when undersmoothing is required (i.e. letting the bandwidth decrease only slowly with n). A common solution is to resort to higher-order kernels. An alternative approach has been given by Honoré and Powell (2005), and is briefly described next. Assume that (θ_{∞,h} − θ) is a sufficiently smooth function of h such that it may be expanded as

θ_{∞,h} − θ = B_1 h + B_2 h² + … + B_j h^j + o(h^j),   (14)

where j is such that the remainder term is o(n^{-1/2}). Again a linear combination of estimators may be used to bias-reduce θ_{n,h}. Here the variability of the estimator as a function of h is exploited, not n. Let c_1,…,c_k be a chosen sequence of different constants. Then θ_{n,c_l h} is an estimator based on bandwidth c_l h whose large-n limit satisfies (14), with c_l h replacing h. Honoré and Powell's (2005) bandwidth-driven Jackknife θ_{n,h}^{BJ} is then obtained by forming a linear combination of such estimators which removes the first j bias terms from (14). Therefore, n^{1/2}(θ_{n,h}^{BJ} − θ) will be centred at zero and hence θ_{n,h}^{BJ} will be root-n consistent.

References

Angrist, J.D., Imbens, G.W. and Krueger, A.B. (1999). Jackknife instrumental variables estimation, Journal of Applied Econometrics, 14, 57-67.

Davidson, R. and MacKinnon, J.G. (2007). Moments of IV and JIVE estimators, Econometrics Journal, 10, 541-553.

Efron, B. (1992). Jackknife-after-Bootstrap standard errors and influence functions. Journal of the Royal Statistical Society, Series B, 54, 83-127.

Efron, B. and Tibshirani, R. (1993). An Intro-duction to the Bootstrap, CRC Press.

Hahn, J. and Newey, W. K. (2004). Jackknife and analytical bias reduction for nonlinear panel models, Econometrica, 72, 1295-1319.

Honoré, B.E. and Powell, J.L. (2005). Pairwise difference estimators for nonlinear models. In Andrews, D.W. and Stock, J.H., editors, Identification and Inference for Econometric Models: Essays in Honor of Thomas Rothenberg, 520-553, Cambridge University Press.

Miller, R.G. (1971). The Jackknife - a review, Biometrika, 61, 1-15.

Neyman, J. and Scott, E. L. (1948). Consistent estimates based on partially consistent observations, Econometrica, 16, 1-32.

Parr, W. C. and Schucany, W. R. (1980). The Jackknife: a bibliography, International Statistical Review, 48, 73-78.

Quenouille, H. M. (1949). Approximate tests of correlation in time series. Journal of the Royal Statistical Society, Series B, 11, 68-84.

Quenouille, H. M. (1956). Notes on bias in estimation, Biometrika, 43, 353-360.

Schucany, W.R., Gray, W.R. and Owen, D.B. (1971). On bias reduction in estimation, Journal of the American Statistical Association, 66, 524-533.

Shao, J. and Wu, C. (1989). A general theory for jackknife variance estimation, The Annals of Statistics, 17, 1176-1197.

Tukey, J.W. (1958). Bias and confidence in not-quite large samples, The Annals of Mathematical Statistics, 29, 614.


One of the most important tasks of pension funds is to provide participants with a pension that is protected as much as possible against dilution by inflation. Within the context of a pension fund that compensates for inflation on a conditional basis, decreased purchasing power is not caused by inflation alone, but rather by a combination of several factors. In this article, it is shown that an integral approach to the management of purchasing power risks can provide a highly improved outlook for pensioners without violating existing constraints.

The Power to Chase Purchasing Power

Jan-Willem Wijckmans

studied econometrics in Groningen. Since 2003 he has worked at Cardano in Rotterdam as a risk management consultant. Based on quantitative research, he advises pension funds and insurance companies on sophisticated risk management solutions, with a focus on flexible, tailored strategies. Cardano offers support for the entire risk management process: research, advice, strategy implementation and control.

Inflation risks

Ronald Reagan once said: "Inflation is as violent as a mugger, as frightening as an armed robber and as deadly as a hit man". He was right. In Europe, economic policies all aim at keeping the level of price inflation at around 2%, a percentage deemed to be low and acceptable. However, over time even such a relatively low inflation rate is capable of seriously diluting the amount of goods one can purchase. The more time goes by, the greater the dilution will be. The long-term character of pensions makes inflation risk highly relevant in this area. A worker who builds up his pension over the course of 40 years has an average exposure to inflation of around 20 years. Over the course of this time period, he can lose around 30% of his purchasing power if inflation is on average 2%, and this can easily increase to over 50% if inflation averages 4% (see the short calculation below). To put this figure into perspective: this means that due to inflation, someone who expects to receive a pension that is 80% of the average salary he has earned during his lifetime effectively only receives 40% in real terms. Most pension funds therefore aim to compensate participants for inflation. Given the fact that the pension fund is a collective, the inflation risk for the pension fund as a whole is a lot smaller than for each individual separately. However, periods of high inflation will still

constitute a major risk factor for the fund. An increase in expected inflation of 1% can cause the value of the expected real pension payouts to rise by 15% to 20%. To be able to actually make these additional payouts, the required return on the investments has to increase in line with inflation.
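The erosion figures quoted above follow directly from compounding; a short check, using the roughly 20-year average exposure assumed in the text:

# Purchasing power remaining after t years of constant inflation: (1 + pi)^-t.
for inflation in (0.02, 0.04):
    remaining = (1 + inflation) ** -20          # ~20 years of average exposure
    print(f"{inflation:.0%} inflation: {1 - remaining:.0%} of purchasing power lost")
# 2%: roughly a third is lost; 4%: more than half, consistent with the text.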

Risk management of purchasing power

The most common risk management tool that Dutch pension funds employ to mitigate this risk for the fund is conditional indexation. Participants receive indexation only when the nominal solvency position of the fund allows for it. If this is not the case, indexation is forfeited or postponed for better times. Although an individual is still better off under this system than on his own, indexation risks still exist in terms of purchasing power. If indexation is not received for several years in a row because of low solvency of the pension fund, the purchasing power of the participant can be strongly affected, especially if the low solvency situation coincides with high inflation. In order to minimize the probability and severity of purchasing power deficits, a pension fund can actively manage its indexation targets. For an average pension scheme, the most important drivers of purchasing power risks are:

• Liability risk: as interest rates go down, the value of the pension liabilities that a pension fund accounts for goes up, decreasing the solvency;

• Investment risk: as investments in, for example, equities and real estate (referred to as return assets) go down, the solvency also goes down;

• Inflation risk: high inflation depletes the solvency buffers of a pension fund more rapidly. When solvency drops below the indexation boundary, high inflation accelerates the rate at which purchasing power decreases.


Ideally, all sources have to be targeted together in setting the investment guidelines for the fund. This is not a simple task, since all decisions have both a risk and a return impact on several important measures. A relatively low equity allocation decreases the short-term probability of a solvency shortage, but also limits the possibilities for growth; simply hedging all

nominal interest rate sensitivity decreases the immediate solvency risks, but also increases the exposure to high inflation rates; and finally, decreasing the exposure to inflation by converting realised inflation into fixed inflation can actually increase the probability of no indexation, because this increases the volatility of the nominal funding ratio. A good way to analyse the short- and long-term impact of different strategies is a scenario-based Asset-Liability Model. The aim of such an analysis is to define a practical and feasible strategy that minimizes the relevant risks on different horizons. In the remainder of this article, the impact of various strategies on the funding ratio and on purchasing power risks for pensioners will be discussed.

Case study

The analysis assumes an average-type Dutch pension fund in most policy respects: inflation compensation is granted only when the nominal funding ratio is greater than 120%, and the contribution rate is set to cover the nominal service cost. The initial nominal funding ratio is set at 130%. In order to properly address the impact of investment risk, the percentage of equity investments in the initial asset mix is set to 50%, which is higher than that of the average Dutch pension fund. Also, no additional liability hedging apart from the existing bond

portfolio is taken into account. The focus of the analysis is on the impact several policy changes can have on purchasing power. However, the nominal solvency risks are also important from a regulatory and continuation perspective. A solution that performs very well with respect to purchasing power but introduces unacceptably high solvency risks is obviously not a practical and feasible solution. The analysis therefore entails an assessment of solutions on both solvency and purchasing power risks. The analysis is carried out on a 15-year horizon, although results are very much comparable for different time frames.
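A toy version of such a scenario analysis is sketched below, purely to show the mechanics of conditional indexation in an ALM setting; every distribution, correlation and policy parameter in it is an assumption for illustration and does not reproduce the model behind the figures in this article.

import numpy as np

rng = np.random.default_rng(4)
n_scen, years = 10_000, 15
fr0, equity = 1.30, 0.50            # initial nominal funding ratio, equity share
index_bound, shortage = 1.20, 1.05  # indexation boundary and solvency shortage level

infl = 0.02 + 0.01 * rng.standard_normal((n_scen, years))      # inflation scenarios
eq_ret = 0.07 + 0.18 * rng.standard_normal((n_scen, years))    # return assets
bond_ret = 0.04 + 0.05 * rng.standard_normal((n_scen, years))  # fixed income
liab_growth = 0.04 + 0.5 * (infl - 0.02)                       # crude liability proxy

fr = np.full(n_scen, fr0)
purchasing_power = np.ones(n_scen)
min_fr = fr.copy()
for t in range(years):
    asset_ret = equity * eq_ret[:, t] + (1 - equity) * bond_ret[:, t]
    fr = fr * (1 + asset_ret) / (1 + liab_growth[:, t])
    # Conditional indexation: missed indexation erodes purchasing power.
    indexed = fr > index_bound
    purchasing_power[~indexed] /= 1 + np.maximum(infl[~indexed, t], 0)
    min_fr = np.minimum(min_fr, fr)

p_shortage = np.mean(min_fr < shortage)
p_pp_drop = np.mean(purchasing_power < 0.80)
print(f"P(solvency shortage) = {p_shortage:.1%}, P(purchasing power -20%) = {p_pp_drop:.1%}")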

Analysis

In Figure 1, the starting point (labelled 'Basis') shows that the probability of a solvency shortage (a funding ratio below 105%) is around 13%, while the probability of a drop in purchasing power of over 20% amounts to 6%. For each

of the three mentioned risk drivers - liability risk, investment risk and inflation risk - the impact on purchasing power and solvency can be understood by varying the exposure of the fund towards one of the drivers individually. The liability risk exists because the value of the liabilities is usually much more sensitive to changes in interest rates than the assets. To balance these effects, interest rate swaps can be used to align the interest rate sensitivity of the assets with that of the liabilities. In Figure 1, the allocation labelled 'Reduced liability risk swaps' shows the result. The probability of a solvency shortage has decreased as expected, but the purchasing power risks are actually higher. The main reason for this is the existing correlation between interest rates and inflation: high inflation rates are often accompanied by high interest rates. For a pension fund with lower interest rate sensitivity in the assets than in the liabilities, the higher expected payouts on the liabilities due to high inflation can be 'paid for' through the lower value of the current liabilities. However, this is not the case when assets and liabilities are balanced in terms of interest rate sensitivity. So although the use of swaps lowers the current liability risk, it can actually increase the exposure in high-inflation, high-interest-rate scenarios. A partial solution to this problem is not to use only swaps, but to incorporate swaptions (options on swaps) in the liability hedge. With

Figure 1: Probability of solvency shortage versus probability of a drop in purchasing power by 20% or more for the allocations 'Basis', 'Reduced liability risk swaps' and 'Reduced liability risk swaps/swaptions', with arrows indicating the direction of efficiency gain and loss.

"Even an inflation rate of 2% can have serious impact over time.""


A partial solution to this problem is not to use only swaps, but to incorporate swaptions (an option on a swap) in the liability hedge. With swaptions, one is protected against falls in interest rates in a way similar to swaps, but one retains the upside in case of rising interest rates. It is easy to see that this lowers the risk in the high inflation and high interest rate scenario described above. The effect of a combined strategy with both swaps and swaptions is shown by the allocation 'Reduced liability risk swaps/swaptions'. Indeed the problem of increased purchasing power risk is lessened somewhat, although it cannot be entirely taken away.

Next, we discuss the investment risk. This is lowered by physically moving away from return assets, in this case by as much as 20%. Although in most cases this can be done more efficiently by using options, for illustration purposes we have chosen to physically sell the return assets. Apparently (see the allocation 'Reduced investment risk' in Figure 2) this is a more attractive possibility than removing nominal interest rate risk, since it improves the solvency risks while leaving the purchasing power risks unchanged. It should be noted that lowering the allocation to return assets further will have less of an impact, and the purchasing power risk might even rise eventually. This is because return assets do not only have a higher risk than fixed income assets, but on average also bring the higher return needed to pay for inflation. A pension fund that starts out with much less equity than the 50% chosen here will have less steering power with this instrument. However, it does illustrate the point about the impact of investment risk on both measures.

Finally, we can reduce the exposure to inflation risk by using inflation swaps. These instruments allow a fund to receive the realised inflation that it needs for its participants in exchange for a fixed expected inflation rate, thus removing the uncertainty about future inflation levels. Figure 2 shows that reducing the inflation exposure serves a one-way purpose. The purchasing power risks are reduced substantially, by over 60%. However, the price to pay is reflected in the solvency measure: a drastic increase in the probability of a solvency shortage.
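The mechanics behind this can be illustrated with the stylised payoff of a zero-coupon inflation swap to the inflation receiver. The function and the numbers below are our own textbook-style example, not the instrument actually modelled in the case study.

```python
def zc_inflation_swap_payoff(notional, fixed_rate, cpi_start, cpi_end, years):
    """Payoff at maturity to the inflation receiver of a zero-coupon inflation swap:
    receive realised inflation, pay the agreed fixed (break-even) inflation rate.
    A stylised textbook payoff, used only to illustrate the mechanism."""
    realised = cpi_end / cpi_start            # realised inflation factor
    fixed = (1.0 + fixed_rate) ** years       # agreed fixed inflation factor
    return notional * (realised - fixed)

# If realised inflation falls short of the fixed rate, the swap value is negative,
# which is the funding-ratio effect discussed in the text.
payoff = zc_inflation_swap_payoff(100.0, 0.02, cpi_start=100.0, cpi_end=108.0, years=5)
```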

This phenomenon arises because inflation hedging is only useful when it covers all the years to come. The value of all future inflation changes reflected in the inflation swap is, however, not shown on the liability side. This leads to increased volatility in the funding ratio, with a higher risk of a solvency shortage in those scenarios where inflation actually decreases. In those scenarios the value of the inflation swap becomes negative, even though this is offset by the lower expected future inflation compensation on the liabilities. This result perfectly illustrates the risk management challenges when indexation is provided on a conditional basis.

Now that the impact of the individual risk drivers has been assessed, it is time to show the possibilities when all three steering instruments are combined in a more tailored way. This is reflected by the 'Combined' allocation in Figure 2. No effort has been put into optimizing the result, since that is not the exercise currently of interest. The point of interest is that a sensible allocation over all risk drivers at hand can improve the outlook for a pension with fully retained purchasing power, without necessarily violating nominal constraints regarding solvency. This is true in the relatively simple case study shown here, but also in more complex pension cases, where other risk measures and possible return constraints have to be taken into account.

Conclusion

The indexation of pensions is a very important part of a pensioner's income, especially when inflation is high over a prolonged period. Within the set of policies of a pension fund that provides indexation on a conditional basis, three major drivers of risk that impact the purchasing power have been identified: liability risk, investment risk and inflation risk. It turns out that a standalone change in the exposure to one single risk driver may have only limited effects or may even lead to the violation of conflicting constraints. However, a sensible optimization over all available exposures and risk management instruments can lead to both an improved outlook for the purchasing power of pensioners and sustained policy results.

Figure 2: Probability of solvency shortage versus probability of a drop in purchasing power by 20% or more for the allocations 'Basis', 'Reduced investment risk', 'Reduced liability risk swaps/swaptions', 'Reduced inflation risk' and 'Combined', with arrows indicating the direction of efficiency gain and loss.




With the credit crunch in the banking system and the stock market crash, it is just a matter of time until the unravelling financial turmoil pushes the real economy into recession. What does that actually mean, being in a recession? And how can you measure and reasonably predict such a thing as a business cycle and its corresponding turning points? The business cycle indicator of the Dutch central bank, De Nederlandsche Bank (DNB), extracts fluctuations from selected macroeconomic variables and represents them in a single index that describes the current stance of the business cycle. This article explains how the DNB business cycle indicator is constructed.

The Dutch Business Cycle: Which Indicators Should We Monitor?

Ard den Reijer

is currently employed at the central bank of Sweden, Sveriges Riksbank. He constructed the business cycle indicator while he was employed at the Dutch central bank, De Nederlandsche Bank¹. The analysis is part of his PhD thesis under supervision of prof. dr. L.H. Hoogduin (University of Amsterdam) and prof. dr. F.C. Palm (Maastricht University). See also his personal website: http://www.reijer.info

Various commercial, academic and government institutions use a business cycle indicator as an instrument to measure and predict business cycle developments and their turning points. An accurate assessment of the current and future state of the business cycle is valuable information in the decision process of policy makers and businesses. Most institutions that regularly publish business cycle indicators follow the approach of using leading and coincident indicators as developed at the National Bureau of Economic Research (NBER) in the US in the 1930s. Some institutions construct uniform business cycle indicators for several countries. Most notably, the American Conference Board and the Paris-based Organisation for Economic Co-operation and Development (OECD) regularly publish updated indicators for various countries. The OECD also runs an indicator that is specifically calibrated for the Dutch economy, as do the Netherlands Bureau for Economic Policy Analysis (CPB), Rabobank, Statistics Netherlands (CBS) and the universities of Groningen and Rotterdam. This article focuses on the business cycle indicator that is used by DNB and published on its website².

The definition of a business cycle as formulated in the seminal contribution of Burns and Mitchell (1946) reads as follows: "Business cycles are a type of fluctuation found in the aggregate economic activity of nations that organize their work mainly in business enterprises: a cycle consists of expansions occurring at about the same time in many economic activities, followed by similarly general recessions, contractions, and revivals which merge into the expansion phase of the next cycle; this sequence of changes is recurrent, but not periodic; in duration business cycles vary from more than one year to ten or twelve years; they are not divisible." So, business cycles can broadly be defined as oscillating motions of economic activity, which are visible as patterns of fluctuations within macroeconomic variables such as output, production, interest rates, financial markets, consumption, unemployment and prices. The term 'cycle' is misleading in the sense that it suggests a regular periodicity. Each single business cycle is unique and can be characterized by its depth, duration and diffusion. The depth of the cycle represents the cumulative rise (fall) of economic activity during the expansion (recession) phase. The duration of the cycle is the elapsed time between two peaks or troughs and varies from more than one year to ten or twelve years. The diffusion of the cycle represents the extent to which the business cycle is visible across different economic sectors and geographical areas.

The DNB business cycle indicator presents the cyclical outlook in two single indices, as shown in Figure 1. The coincident index of the DNB business cycle indicator summarizes the factual stance of the business cycle. The leading indicator aims to replicate the coincident index and, moreover, to project it up to six months ahead into the future.

1 Views expressed are those of the individual author and do not necessarily reflect official positions of De Nederlandsche Bank.
2 See for the most recent update http://www.dnb.nl/en/z-onderzoek/auto88154.jsp


The indicator describes the business cycle as the deviation of economic activity from its trend level. The x-axis of the figure corresponds to the trend level, so a positive (negative) index means that the level of economic activity lies above (below) its trend level. A downward (upward) sloping index means that the level of economic activity is growing slower (faster) than its potential growth, which is shown in Figure 1 by the shaded areas. Following the peak in the business cycle, four subsequent phases can be distinguished: i) slow-down: the cycle is positive, but downward sloping; ii) recession: the cycle is negative and downward sloping, which ends in a business cycle trough; iii) recovery: the cycle is negative, but upward sloping; iv) boom: the cycle is positive and upward sloping. The DNB business cycle indicator shows in Figure 1 that the Dutch economy is currently in the slow-down phase and will be entering the recession phase on short notice.
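As a compact illustration of this four-quadrant scheme, the following Python helper (an illustrative function of our own, not part of the DNB indicator) classifies the phase from the indicator's level and slope:

```python
def cycle_phase(level, slope):
    """Classify the business cycle phase from the indicator's deviation from
    trend (level) and its change (slope), following the four-quadrant scheme."""
    if level >= 0 and slope < 0:
        return "slow-down"
    if level < 0 and slope < 0:
        return "recession"
    if level < 0 and slope >= 0:
        return "recovery"
    return "boom"  # level >= 0 and slope >= 0

# Example: a positive indicator that is falling is classified as a slow-down.
print(cycle_phase(level=0.4, slope=-0.1))
```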

Measuring the business cycle

The DNB business cycle indicator is based on the notion of a business cycle as all the intrinsic cyclical motion visible in macroeconomic data, consisting of waves within a specified frequency interval. This interval of business cycle frequencies corresponds with Burns and Mitchell's (1946) taxonomy of business cycles as waves lasting longer than a pre-specified minimum duration and shorter than a pre-specified maximum duration. Business cycles at certain frequencies can be isolated by so-called band-pass filters. This type of filter originates from electrical engineering and operates on a time series of observations for an economic variable like tuning a radio: some frequency ranges are eliminated and other frequency ranges, i.e. the business cycles, are passed through.

In an ideal world in which we have an infinite amount of observations $\{x_t\}_{t=-\infty}^{\infty}$ for an economic time series variable, we can precisely extract the pre-specified business cycle fluctuations using the ideal band-pass filter, defined as follows. Let $2 \leq p_l < p_u < \infty$, put $a = 2\pi/p_u$ and $b = 2\pi/p_l$, and define the filter weights

$$B_0 = \frac{b-a}{\pi}, \qquad B_j = B_{-j} = \frac{\sin(jb) - \sin(ja)}{\pi j}, \quad j \geq 1.$$

Then

$$cycle_t = \sum_{j=-\infty}^{\infty} B_j\, x_{t-j}.$$

So the cyclical fluctuation labelled $cycle_t$ consists of all the cyclical fluctuations incorporated in the time series variable $x_t$ whose waves are characterized by a duration longer than $p_l$ periods and shorter than $p_u$ periods. The DNB business cycle indicator, which is based on monthly observations, operationalizes 36 months, i.e. 3 years, as a lower bound and 132 months, i.e. 11 years, as an upper bound.

Since we are not living in an ideal world, but in a real one with a limited amount of available observations, the econometric question is how to approximate the ideal band-pass filter. Approximate band-pass filters suffer from leakage, i.e. some frequency ranges that are supposed to be filtered out get passed through to some extent, and compression, i.e. some frequency ranges that are supposed to pass through get dampened to some extent. Christiano and Fitzgerald (2003) (CF) propose an approximate band-pass filter that is optimal in a weighted mean squared error sense, conditional on a special characteristic that is typical for macroeconomic time series³. In the real world we have a finite amount of $T$ observations $\{x_t\}_{t=1}^{T}$. For each $t = 1, \ldots, T$, write $f = T - t$ for the number of available leads and $p = t - 1$ for the number of available lags; the CF filter then uses the ideal weights $B_j$ for the interior leads and lags together with two adjusted end-point weights,

$$\hat{B}^{p,f}_{f} = -\tfrac{1}{2}B_0 - \sum_{j=1}^{f-1} B_j, \qquad \hat{B}^{p,f}_{p} = -\tfrac{1}{2}B_0 - \sum_{j=1}^{p-1} B_j.$$

Then

$$\widehat{cycle}_t = B_0\, x_t + \sum_{j=1}^{f-1} B_j\, x_{t+j} + \hat{B}^{p,f}_{f}\, x_T + \sum_{j=1}^{p-1} B_j\, x_{t-j} + \hat{B}^{p,f}_{p}\, x_1.$$

Note that the finite-sample filter coefficients converge to their ideal band-pass filter equivalents in case we have a lot of observations at our disposal, so

$$\hat{B}^{p,f}_j \rightarrow B_j \quad \text{as} \quad p, f \rightarrow \infty.$$
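For readers who want to experiment, here is a compact Python sketch of the CF filter in the random-walk form reconstructed above, applied with the DNB band of 36 to 132 months. It is our own illustrative implementation, not the code used at DNB.

```python
import numpy as np

def cf_filter(x, pl=36, pu=132):
    """Christiano-Fitzgerald approximate band-pass filter (random-walk variant).

    Keeps fluctuations with a duration between pl and pu observations
    (36 and 132 months for the DNB indicator). Illustrative sketch only.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    a, b = 2.0 * np.pi / pu, 2.0 * np.pi / pl
    j = np.arange(1, T)
    # Ideal band-pass weights B_0, B_1, ..., B_{T-1}
    B = np.concatenate(([(b - a) / np.pi], (np.sin(j * b) - np.sin(j * a)) / (np.pi * j)))

    cycle = np.zeros(T)
    for t in range(T):                       # 0-based index: f leads and p lags available
        f, p = T - 1 - t, t
        B_f = -0.5 * B[0] - B[1:f].sum()     # adjusted weight on the last observation
        B_p = -0.5 * B[0] - B[1:p].sum()     # adjusted weight on the first observation
        leads = (B[1:f] * x[t + 1:T - 1]).sum() if f > 1 else 0.0
        lags = (B[1:p] * x[t - 1:0:-1]).sum() if p > 1 else 0.0
        cycle[t] = B[0] * x[t] + leads + B_f * x[T - 1] + lags + B_p * x[0]
    return cycle
```

For example, `cf_filter(np.log(monthly_series))` extracts the 3- to 11-year component of a logged monthly series; leakage and compression are still present, but they diminish as the sample grows.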

Which indicators should we monitor?

Real gross domestic product (GDP), as the aggregate of all economic activity, constitutes an important statistic for business cycle measurement. The American and European business cycle dating committees⁴, however, monitor several macroeconomic variables, as the cyclical fluctuations of GDP do not always move synchronously with those of its underlying components.

3 More precisely, Christiano and Fitzgerald (2003) assume that the underlying economic time series variable follows a random walk process, i.e. that it is integrated of order one: a so-called I(1) process. This type of process captures the trending behaviour that is typically observed in macroeconomic data.
4 See the websites of the American National Bureau of Economic Research (NBER) and the European Centre for Economic Policy Research (CEPR).

Figure 1: DNB Business Cycle Indicator. The chart shows the standardized deviation from trend over 1983-2009 for the Composite Leading Indicator (up to May 2009) and the Composite Coincident Index (up to September 2008); shaded areas mark low-growth phases.


It is in general infeasible to identify one single variable which covers a broad range of economic activity, represents the current stage of the business cycle and is available on a monthly basis. The coincident reference index is therefore constructed as a composite index of several synchronous indicators. The index constitutes a more robust standard of business cycle measurement, since the idiosyncrasies of the individual series are averaged out. In addition to reflecting the cyclical properties of GDP, other important practical criteria for selecting the variables are a limited publication delay and minor data revisions. This means that new data releases are published shortly after the corresponding period has ended and that the initially published values are subject to only minor revisions in subsequent publications regarding the same period. As a result, the coincident index of the DNB business cycle indicator is based on industrial production, household consumption and staffing employment. The volume index of consumption by households is closely related to retail sales, since it consists only of those expenditures for which households pay themselves. Industrial production constitutes approximately 25% of GDP and its share is declining relative to the share of services production. Despite this small share, the OECD's indices are solely based on industrial production, as its cyclical motion is considered a representative statistic for the business cycle. The staffing employment market is the segment of the labour market that is most sensitive to business cycle motions, because companies can adjust their use of staffing services immediately to changing market conditions.

Cyclical motion in the economy is also reflected in various macroeconomic time series in a dynamic sense. Depending on the timing of their cyclicality vis-à-vis the coincident index, variables can be classified as leading, coincident or lagging. Leading variables by nature give an early signal on the cyclical position. Economic plausibility requires that the leading indicators are supported by economic theory, either as possible causes of business cycles or as quickly reacting to positive or negative shocks. Since economic fluctuations originate from different sources, we combine a number of relevant leading indicators into a single composite index. The main selection criteria for the leading indicators are the lead time and the statistical conformity with the business cycle as represented by the composite coincident index. A leading indicator should show consistent timing by systematically anticipating peaks and troughs with a rather constant lead time, avoiding missed turning points and false signals. A missed turning point is a turning point in the coincident index which the leading indicator fails to signal. A false signal occurs when the leading indicator shows a turning point that eventually does not materialize in the coincident index.

The leading index of the DNB business cycle indicator consists of eleven selected leading indicator variables that together make up a balanced representation of near-future cyclical developments. As shown in Table 1, the composite leading indicator consists of three financial series, four business and consumer survey results and four real activity variables, of which two are supply and two demand related. The three financial variables are the short term interest rate, the yield curve and the stock price index. Low short term interest rates reduce financing costs and will spur investment demand.

Leading indicator variable               Lead time (in months)
IFO-indicator Germany                                  9
Expected business activity                             9
Three month interest rate                             26
Term structure of interest rates                      30
Stock finished products                               10
Order arrivals                                        15
Consumer Confidence                                   11
Stock Price Index (AEX)                                7
Registered motor vehicles                              9
Real house price                                       7
OECD leading indicator for the U.S.                   18

Table 1: Composition of the leading index
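Table 1 lists the lead times, but the article does not describe the aggregation formula. Purely to illustrate the mechanics, the sketch below standardizes each band-pass filtered component, shifts it forward by its lead time and takes an equal-weighted average; the function name and the equal weighting are our own assumptions, not the actual DNB weighting scheme.

```python
import numpy as np

def composite_index(component_cycles, lead_times):
    """Average standardized component cycles after shifting each leading
    series forward by its lead time (in months).

    component_cycles: dict mapping indicator name -> 1-D array of its
                      band-pass filtered values (e.g. from cf_filter above)
    lead_times:       dict mapping indicator name -> lead time as in Table 1
    """
    shifted = []
    for name, cycle in component_cycles.items():
        z = (cycle - cycle.mean()) / cycle.std()    # put all components on one scale
        lead = lead_times.get(name, 0)
        z = np.roll(z, lead)                        # align each series with the period it leads
        z[:lead] = np.nan                           # values rolled in from the end are unknown
        shifted.append(z)
    return np.nanmean(np.vstack(shifted), axis=0)   # equal-weighted composite
```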

"The Dutch economy is currently in the slow-down phase and will be entering the recession

phase on short notice"


The AEX stock price index consists of the 25 most actively traded shares in the Netherlands and indicates the expected future corporate profitability and the underlying growth potential. An inverse yield curve is usually observed at the start of a recession period and therefore acts as a predictor. The four business and consumer survey results are expected business activity, the IFO-indicator of future expectations, domestic consumer confidence and the OECD's leading indicator for the United States. The IFO-indicator represents the economic expectations of producers in Germany, the largest trading partner of the Netherlands. The OECD's leading indicator for the U.S. reflects the short term outlook of the world's largest economy, whose business cycle is known to lead those of the G7 countries. Finally, the two supply related real activity variables are the order arrivals and the stock of finished products, and the two consumption related real activity variables are the real house price and the registrations of new cars. Housing wealth, and therefore real house price changes, is an important factor in the total wealth of consumers. The registration of new cars is a variable that reacts quickly to alterations in the business cycle.

Summary

Every month the composite coincident index shows the current state of the economy on the basis of industrial production, household consumption and employment. These three variables are important reference points for dating economic turning points. The composite leading index provides an estimate of the state of the economy in the near future. The leading index is based on three financial series, four business and consumer survey results and four real activity variables, of which two are supply and two demand related. Referring to the four-quadrant characterisation of business cycle phases, i.e. slow-down, recession, recovery and boom, the DNB business cycle indicator shows that the Dutch economy is currently in the slow-down phase and will be entering the recession phase on short notice.

References

Burns, A. and Mitchell, W. (1946). Measuring Business Cycles, National Bureau of Economic Research.

Christiano, L. and Fitzgerald, T. (2003). The Band Pass Filter, International Economic Review, 44.

Den Reijer, A.H.J. (2006). The Dutch Business Cycle: Which Indicators Should We Monitor?, DNB Working Paper no. 100.


Puzzle


The puzzles in the last edition were probably a bit too hard for most readers, since we received no correct answers at all. Therefore, the puzzles in this edition are more straightforward. But first, the solutions to the puzzles of the last edition.

American election

In the election puzzle, add the pluralities to the total vote and divide by the number of candidates ((5,219 + 22 + 30 + 73) / 4). The quotient will be the vote of the successful candidate, from which the votes of the others can be ascertained by subtraction. The counts were 1,336, 1,314, 1,306 and 1,263.
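Written out, the computation is

$$\frac{5{,}219 + 22 + 30 + 73}{4} = \frac{5{,}344}{4} = 1{,}336, \qquad 1{,}336 - 22 = 1{,}314, \quad 1{,}336 - 30 = 1{,}306, \quad 1{,}336 - 73 = 1{,}263.$$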

A Chinese switch-word puzzle

A possible solution to this puzzle is the word ‘interpreting’.

The new puzzles:

[Illustration: Alice, the Cheshire cat and the letter square containing the sentence "Was it a cat I saw?", which belongs to the first puzzle below.]

Was it a cat I saw

One of the most remarkable stories you may have read during your youth has to be Alice in Wonderland. During her journey through Wonderland, Alice comes across the Cheshire cat, which has a way of vanishing into thin air until nothing but its irresistible smile remains.

Illustrated above are Alice and the cat and a square containing the sentence "Was it a cat I saw?" multiple times. In how many different ways can you read this sentence in the square? Start at any of the W's, spell by moving to adjacent letters until you reach the C, then spell back out to the border again. You may move up and down, left and right.

The boxer’s puzzle

A popular game among young students is the boxer's puzzle. The goal of this two-player game is to win as many boxes as possible by drawing straight lines between two points. A possible state of this game is illustrated above. In every turn a player has to draw a straight line between two points (for example between A and B). When a player completes a square (for example when there are lines between the points A, B, E and F), that box is won by the player, and the same player has to draw another line. Suppose we are in the state of the illustration above and it is your turn to draw a line. If you connect M to N, your opponent could score four boxes in one run; then, having the right to one more play, he would connect H and L, which would win all the rest for him. What play would now be best, and how many boxes will it win against the best possible play of your opponent?

Solutions

Solutions to the two puzzles above can be submitted up to December 1st. You can hand them in at the VSAE room (C6.06), mail them to [email protected] or send them to VSAE, for the attention of Aenorm puzzle 59, Roetersstraat 11, 1018 WB Amsterdam, Holland. Among the correct submissions, one book token will be awarded. Solutions can be submitted in either English or Dutch.

Page 82: Aenorm 62

80 AENORM 62 January 2009

University of Amsterdam

VU University Amsterdam

Facultive

The past few months have been a great success for the study association Kraket: many well-organized activities with a lot of participants. In October we organized a squash tournament with many enthusiastic members of our association. In November Saen Options organized an Inhouse Day for our study association, with a trading game and a tour of the trading floor. On the 26th of November we went bowling with our fellow students of the VSAE, with over forty members of both Kraket and the VSAE. And last but not least we welcomed Saint Nicholas to our association. In a pleasant atmosphere the members of our association received presents and a little poem.

In the upcoming months we have planned some other activities. In January we will go ice-skating, and the study trip to New York will also take place in January. In February we have planned an indoor soccer tournament, and last but not least there is the LED (National Econometrics Day) in Groningen to look forward to.

Agenda

7-24 January 2009 Study Trip

20 January 2009 Ice-Skating

February 2009 Indoor Soccer Tournament

3 March 2009 LED

9 April 2009 Casedag Kraket

The VSAE hosted a number of projects in the last months of 2008. In November, the KBR (short foreign journey) brought 52 students in Actuarial Sciences, Econometrics and Operations Research to Prague. During our five-day visit we toured the university and the Czech National Bank, gazed at Prague's castle and historic centre and enjoyed its nightlife.

On the 17th of December, the VSAE organised the 'Actuariaatcongres' - a congress on actuarial sciences - which had Transparency of Insurances as its central theme. A number of speakers shared their views on the subject with an audience of 160 participants, consisting of both students and actuaries. A discussion panel concluded the congress, sending the participants home with numerous ideas about the role of actuaries in making insurance transparent.

Since January is the exam month at the University of Amsterdam, the agenda of the VSAE includes only our monthly free drink and curling. From February onward a new board will try to bring the VSAE to even greater heights. Hopefully they succeed in doing so!

Agenda

13 January Monthly free drink

22 January Curling
