
Aenorm 69
vol. 18, nov. '10

This edition:
A Generalization of the Classical Secretary Problem

Benefit from Inflation with Bonds, Swaps, Caps, and Floors
You Can Print Money but You Can't Print Goods
Optimal Design for a SAP-Warehouse


<On the ambition to excel>

We view ambition as a quality to be cherished. Because we see it as a force that fuels initiatives. The best form of ambition combines the will to be independent with the willingness to take on responsibility. That’s what we call the ambition to excel. If you think you, too, have that kind of ambition, we would like to hear from you. For our Analyst Program, NIBC is looking for university graduates who share our ambition to excel. Personal and professional development are the key-elements of the Program: in-company training in co-operation with the Amsterdam Institute of Finance; working side-by-side with professionals at all levels and in every financial discipline as part of learning on the job. We employ top talent from diverse university backgrounds, ranging from economics and business administration, to law and technology. If you have just graduated, with above-average grades, and think you belong to that exceptional class of top talent, apply today. Joining NIBC’s Analyst Program might be the most important career decision you ever make!

Want to know more? Surf to www.careeratnibc.com.

THE HAGUE • LONDON • BRUSSELS • FRANKFURT • NEW YORK • SINGAPORE • WWW.NIBC.COM

Interested? Please contact us: NIBC Human Resources, Frouke Röben, [email protected]. For further information see www.careeratnibc.com. NIBC is a Dutch specialised bank that offers advisory, financing and investing in the Benelux and Germany. We believe ambition, teamwork, and professionalism are important assets in everything we do.


Colophon

Chief editor: Ewout Schotanus
Editorial Board: Ewout Schotanus
Editorial Staff: Tara Douma, Daniella Brals, Chen Yeh, Annelieke Baller, Dianne Kaptein, Jan Nooren
Design: United Creations © 2009
Lay-out: Taek Bijman, Maartje Gielen
Cover design: © Shutterstock (edit by Michael Groen)
Circulation: 2000

A free subscription can be obtained at www.aenorm.eu.

Advertisers: DNB, NIBC, SNS Reaal, TNO, Towers Watson, Zanders

Information about advertising can be obtained from Axel Augustinus at [email protected].

Publication of an article does not imply that it expresses the opinion of the board of the VSAE, the board of Kraket or the editorial staff. Nothing from this magazine may be reproduced without permission of the VSAE or Kraket. No rights can be derived from the content of this magazine.

ISSN 1568-2188

Editorial staff addresses:
VSAE, Roetersstraat 11, E2.02, 1018 WB Amsterdam, tel. 020-5254134
Kraket, De Boelelaan 1105, 1081 HV Amsterdam, tel. 020-5986015

Friend or Foe?

By now most people will have heard that the Faculty of Economics and Business (FEB) of the University of Amsterdam (UvA) has serious financial problems and that a large reorganization is therefore required. Within three years the faculty has to come up with 14 million euros to cover its debts and build up a new buffer. To accomplish this, all expenses on the faculty's budget will be cut. As a consequence, 60 FTEs will be cut and most sections will partly or even completely disappear.

One month ago our association heard the rumor that Operations Research (OR) is one of the three sections that will disappear completely. This decision surprises me for several reasons. First, our department contains only three sections, Actuarial Science, Econometrics and OR, and for all three of them the first two years of the bachelor completely overlap. Second, the number of first- and second-year students who really want to do Operations Research is growing strongly. Third, every university in the Netherlands that teaches econometrics also teaches OR. In the last few years the number of new first-year students has grown considerably. However, by eliminating OR, I think the number of new students at our department will decline rapidly over the next few years, which will also harm the other two sections. I believe these three sections form a whole, so by eliminating one of them the remaining two become incomplete.

The problem is that the decision to eliminate OR was not based on these facts, but on the achievements and the number of publications of the section. The board is running the university like a company. The debt problem of the faculty is being solved in the easiest and fastest way possible, which is firing a lot of staff and, along with them, eliminating a lot of courses. Consequently, the remaining courses will be attended by more students, which means teachers will have to spend more time on teaching as well as on publishing to save their courses. The problem is that the board forgets the main purpose of a university: offering students high-quality education. I think the amount of money saved in the short run by this reorganization does not come close to compensating for the harm it does in the long run to the quality of our education. Unfortunately, no one has a say in this; there will not be a referendum or anything. Teachers say the students may even have more influence on this than the teachers themselves.

On the other hand, the university has enough money to completely renovate and rebuild the entire complex of buildings (our faculty is housed in one of them), which will cost a lot more than the 14 million our faculty has to come up with. Of course they will say the money for the renovation was specifically reserved, but it seems the university would rather have a renovated building than high-quality education. Maybe the board hopes that new buildings will attract a lot of new foreign students, which is one of the main goals of the university after the reorganization. They think foreign students work harder and are more disciplined, which is of course nonsense. Students who go abroad to study may well be more disciplined, because getting that far takes effort, but this does not work the other way around: attracting random students from abroad does not necessarily mean these students are better than the average student already here.

These last points lead me to conclude that the university has completely lost its vision of how a university should be managed. The decisions made in the last few months seem to focus only on solving short-term problems without considering the long-term effects, and this worries me very much.

by: Ewout Schotanus




Micro-level Stochastic Loss Reserving (p. 15)
by: Katrien Antonio and Richard Plat

The writers of this article introduce a stochastic reserving methodology for general insurance based on the projection of individual claims processes. With the introduction of Solvency 2 (in 2012) and IFRS 4 Phase 2 (in 2013) insurers face major challenges. The measurement of future cash flows and their uncertainty becomes more and more important. That also gives rise to the question whether the currently used techniques can be improved. Antonio and Plat (2010) have introduced a new methodology for stochastic loss reserving for general insurance. In this article this methodology is summarized and applied to an existing insurance portfolio.

Genetic Algorithms in Network Congestion Games (p. 10)
by: Vincent Warmerdam

There are many standard problems that can be solved with algorithms that give a guarantee of optimality. Most of these algorithms do not run in polynomial time, however, which renders them practically useless. In this thesis we describe an atomic routing problem that is hard to solve through conventional algorithms that use branch and bound techniques, and we evaluate the use of evolutionary programming, a popular metaheuristic method, to solve this kind of problem. The main focus is to show the reader the relative ease of using evolutionary programming.

Optimal Design of a SAP-Warehouse (p. 4)
by: Thomas Post

This article focuses on the determination and realization of the storage capacity of a SAP-warehouse. A warehouse is an important link in the chain between producers and consumers. Producers deliver products at a warehouse, where they are stored until they are picked up by retailers. Determining the storage capacity is a major strategic decision that must be made when designing a warehouse. If the storage capacity is too low, the warehouse is not able to store all the products producers would want to deliver at the warehouse, which leads to lost profits. If the storage capacity is too large, part of the capacity is not used, which leads to unnecessary costs. Apart from being important, the design of a warehouse is a very complex decision as well. Since the arrival and departure of goods are stochastic processes, the warehouse does not know beforehand how much storage capacity is needed.


BSc - Recommended for readers of Bachelor-level

MSc - Recommended for readers of Master-level

PhD - Recommended for readers of PhD-level

Facultive

Puzzle

32

27

You can print money but you can't print goods
by: David Hollanders

This article focuses on several conundrums in economics. One such conundrum is that historical stock returns are much higher than bond returns. This is called the equity premium. A second conundrum is that the money supply increases yearly by 9% while inflation is around 2%. These puzzles are not puzzles once you connect the dots. The return on equity is asset inflation, and that explains where all the money went. In fact it is simply impossible for stock returns to exceed economic growth. As the latter is around 2%, real stock returns of 7% are just impossible.

A generalization of the classical secretary problem
by: Chris Dietz, Dinard van der Laan and Ad Ridder

This article studies the secretary problem. The classical secretary problem is usually described by different real-life examples, notably the process of hiring a secretary. Imagine a company manager in need of a secretary. Our manager wants to hire only the best secretary from a given set of candidates. The manager decides to interview the candidates one by one in a random fashion. Every time he has interviewed a candidate he has to decide immediately whether to hire her or to reject her and interview the next one. The exact optimal policy to hire the best secretary is known. Also, many variations and generalizations of the original problem have been introduced and analysed. One of these generalizations is the focus of our paper, namely the problem of selecting one of several best candidates.

Benefit from inflation with bonds, swaps, caps, and floors
by: Rogier Galesloot

In this article the writer discusses the history of inflation-linked products and shows how inflation-linked bonds, swaps, and caps/floors can be of interest to an organization or institution looking to hedge against inflation. The problem of inflation is nearly as old as currency itself, and inflation-linked products have been around for centuries as well. There have been times in history when inflation rose to double-digit percentages, and there are times, such as the present, when inflation is even negative. During prolonged periods of extremely high or low inflation, consumers will eventually start feeling the pinch. Businesses and governments are particularly vulnerable to inflation, as positive and negative cash flows depend on inflation. Examples of this include rents, gas and commodity prices. But there are numerous inflation-linked products available to hedge against this inflation risk.

18

22

31


Operations Research and Management

Optimal Design of a SAP-Warehouse

A warehouse is an important link in the chain between producers and consumers. Producers deliver products at a warehouse, where they are stored until they are picked up by retailers. Determining the storage capacity is a major strategic decision that must be made when designing a warehouse. If the storage capacity is too low, the warehouse is not able to store all the products producers would want to deliver, which leads to lost profits. If the storage capacity is too large, part of the capacity is not used, which leads to unnecessary costs. Apart from being important, the design of a warehouse is a very complex decision as well. Since the arrival and departure of goods are stochastic processes, the warehouse does not know beforehand how much storage capacity is needed. This article focuses on the determination and realization of the storage capacity of a SAP-warehouse.

by: Thomas Post

Introduction

SAP is a computer system that monitors inventory levels in warehouses. In order for the SAP system to function properly, the storage of goods must take place in compartments within the warehouse, so-called bins. Moreover, all these bins must be connected with the entrance and exit of the warehouse by aisles. A simple example of the design of a SAP-warehouse is shown in figure 1. In this figure each square represents a certain area, say 1 m²; grey squares represent area used by bins, while white squares represent area used by aisle. Bold black lines mark the borders of bins, and in the centre of each bin its size is shown.

The warehouse in figure 1 contains 9 bins: one bin each of 2, 4 and 6 m², and 6 bins of 1 m². The total area used by bins is therefore 18 m². Besides the area used by bins, the design uses 12 m² of aisle as well. The total area used by the design in figure 1 is therefore equal to 30 m².

The storage capacity of a SAP-warehouse is realized by the set of bins it contains. Consequently the main question of this article is: how can a set of bins be determined that uses as little space as possible, but is still expected to be able to store all the arriving products?

In the second paragraph the problem is defined more extensively, after which in the third paragraph the problem is formulated as an M|G|∞ queueing model. In the fourth and fifth paragraphs the problem is analysed using queueing theory; moreover, an explicit method to determine an optimal set of bins, based on some characteristics of the arrival and departure process, is derived.

Problem definition

This article considers a warehouse that stores multiple product types i, say i = 1, 2, …, N. These products arrive at and depart from the warehouse in batches of varying size. The size of a batch is expressed in the number of square metres it uses when stored in a bin. Likewise, the size of a bin is expressed as its storage capacity in square metres. As soon as a batch arrives, it must be stored immediately.

A strategic decision that must be made when a batch of products arrives is to either store the batch as a whole in one bin, or to divide the arriving batch among multiple bins. Using the first strategy requires a set containing mainly large bins.


Thomas Post
Thomas Post (1987) obtained his bachelor's degree in Operations Research and Management at the University of Amsterdam with his thesis "The optimal design of a SAP warehouse". He wrote his thesis under the supervision of prof. R. Nunez Queija from the UvA and ir. M. van den Broek from Consultants in Quantitative Methods (CQM). In the fall of 2010 he started the subsequent master's programme. He obtained a bachelor's degree in Actuarial Sciences as well.

[Figure 1. An example of the design of a SAP-warehouse: a 30 m² layout with six bins of 1 m² and one bin each of 2, 4 and 6 m², connected to the entrance and exit by aisles.]

[Figure 4. A graphical illustration of the M|G|∞ model: arriving batches A_i^t (rate λ_i) occupy one of an unlimited number of servers (bins); departing batches D_i^t (rate μ_i) free them again.]


However, if one uses the second strategy, a set with mainly small bins is required. Both small and large bins have advantages and disadvantages compared to the other.

A disadvantage of small bins is that they use more space, relative to the capacity they deliver, than larger bins. This is due to the fact that every bin must be accessible with a forklift truck, which means that every bin should be adjacent to at least 2 m² of aisle. Consequently a warehouse with 3 bins of 1 m² needs three times as much aisle as a warehouse with 1 bin of 3 m², although both warehouses have a storage capacity equal to 3 m².

A disadvantage of large bins is that their capacity will be unused more frequently than the capacity of smaller bins. This is caused by the fact that arriving products can only be stored in empty bins. Consequently, when products are stored in a bin, that bin will only be available again from the moment all the products are picked up from it. Therefore, a bin which stores fewer products will be available sooner than a bin which stores more products. This principle is illustrated by the example below.

Suppose that at moment 0 a batch containing 2 m² of products arrives and that after moment 0 the products are picked up at a rate of 1 m² a day. The graph in figure 2 shows the inventory level of a bin if the arriving batch is stored in one bin. One can see that the bin is again available for the storage of new goods after two days. If, however, the arriving batch is divided among two bins, one of the two bins is already available for the storage of new goods after one day. This is shown in the graph in figure 3.

The situation as an M|G|∞ model

To determine a set of bins that is expected to be able to store all the arriving batches and minimizes the used space, the problem is formulated as a queueing model. Arriving batches of products are seen as customers that enter a system and are served by servers. A busy server represents a bin in the warehouse. Since the number of bins that a warehouse is able to contain is assumed to be unlimited, the number of servers is unlimited as well. The serving time of a customer depends on the type of the customer (product type) and its size (batch size). This model is illustrated in figure 4.

In figure 4, λ_i and μ_i are the rates at which arrivals respectively departures of type i occur. A_i^t and D_i^t are the sizes, in m², of an arrival respectively a departure of type i at moment t. Moreover, let μ̃_i be defined by the following relation:

$$\tilde{\mu}_i = \mu_i \cdot E(D_i^t)$$

The problem is now formulated as an M|G|∞ model. For M|G|∞ models, relations for the expected number of busy servers and the expected remaining service time of a job of a busy server at an arbitrary moment are known. Let #Servers_i^t be the number of servers serving customers of type i at moment t. According to Gross and Harris (2008) the following relation holds for the expected value of #Servers_i^t at an arbitrary moment:

$$E(\#\text{Servers}_i^t) = \frac{\lambda_i \cdot E(A_i^t)}{\tilde{\mu}_i} \qquad (1)$$

Let remainingService_i^t be the remaining service time of a job of a server serving customers of type i at moment t. According to Adan and Resing (2002) the following relation holds for the expected value of remainingService_i^t at an arbitrary moment:

$$E(\text{remainingService}_i^t) = \frac{E((A_i^t)^2)}{2 \cdot E(A_i^t)} \qquad (2)$$

Figure 2. Inventory levels when storing the arriving batch in 1 bin

Figure 3. Inventory levels when storing the arriving batch in 2 bins

Figure 4. A graphical illustration of the M|G|∞ model


When the variation in the size of the arriving batches increases, the number of bins should increase as well


In the next paragraph a required set of bins, when each arriving batch is stored in one bin, will be determined using (1) and (2). Moreover, an approximation of the space this set uses will be made. Subsequently in the last paragraph a method is introduced to analyse whether it is rewarding to divide an arriving batch among several bins and, if so, among how many bins.

Storing arriving batches in 1 bin

In this paragraph arriving batches are assumed to be stored as a whole in one bin. Let #Bins_i be the total number of required bins for the storage of products of type i. Combining (1) with the insight that the expected number of busy servers in the M|G|∞ model is equal to the number of required bins, one can deduce the following relation:

$$\#\text{Bins}_i = E(\#\text{Servers}_i^t) = \frac{\lambda_i \cdot E(A_i^t)}{\tilde{\mu}_i} \qquad (3)$$

After the number of required bins is known, the question arises how big these bins should be. Let Size_i be the required size of bins storing products of type i. Under the assumption of a continuous departure process, the inventory in a particular bin at an arbitrary moment is on average equal to half of the inventory at the moment it was first stored in that bin. Combining this with the insight that the remaining service time in the M|G|∞ model represents the inventory level in a bin gives the following relation for Size_i:

$$\text{Size}_i = 2 \cdot E(\text{remainingService}_i^t) = \frac{E((A_i^t)^2)}{E(A_i^t)} \qquad (4)$$

With (3) and (4) one can determine a required set of bins for every product type. Combining these sets for the N product types gives the total required set of bins. The question is, however: how much space does this set of bins use?

Let Space_i be the space used by the set of bins for the storage of product type i, and let TotalSpace be the space used by the set of required bins to store all product types. Since every bin approximately uses a space equal to its size plus 2 m² of required adjacent aisle, one can deduce the following relations:

$$\text{Space}_i = \#\text{Bins}_i \cdot \text{Size}_i + 2 \cdot \#\text{Bins}_i = \#\text{Bins}_i \cdot (\text{Size}_i + 2) \qquad (5)$$

$$\text{Space}_i = \frac{\lambda_i \cdot E(A_i^t)}{\tilde{\mu}_i}\left(\frac{E((A_i^t)^2)}{E(A_i^t)} + 2\right) \qquad (6)$$

$$\text{TotalSpace} = \sum_{i=1}^{N} \text{Space}_i = \sum_{i=1}^{N} \frac{\lambda_i \cdot E(A_i^t)}{\tilde{\mu}_i}\left(\frac{E((A_i^t)^2)}{E(A_i^t)} + 2\right) \qquad (7)$$

Dividing an arriving batch among several bins

With (6) one can calculate the space used by a set of required bins when every arriving batch is stored in one bin. An arriving batch, however, does not necessarily have to be stored as a whole in one bin; there is also the opportunity to divide the batch among several smaller bins. When will dividing an arriving batch lead to a decrease in used space?
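Before turning to that question, the bookkeeping of (3)-(7) for the single-bin strategy can be made concrete with a small calculation. The sketch below is not part of the original article: the arrival rates, batch-size moments and departure rates are invented example inputs, and Python is used purely for illustration.

```python
# Minimal sketch of relations (3)-(7); all input figures are invented examples.
def bins_per_type(lam, mu_tilde, mean_A, mean_A2):
    """Return (#Bins_i, Size_i, Space_i) for one product type.

    lam      -- arrival rate of batches of this type (per day)
    mu_tilde -- volume departure rate, mu_i * E(D_i^t), in m^2 per day
    mean_A   -- E(A_i^t), expected batch size in m^2
    mean_A2  -- E((A_i^t)^2), second moment of the batch size
    """
    n_bins = lam * mean_A / mu_tilde        # relation (3)
    size = mean_A2 / mean_A                 # relation (4)
    space = n_bins * (size + 2.0)           # relations (5)/(6): +2 m^2 of aisle per bin
    return n_bins, size, space

# Two made-up product types.
product_types = [
    dict(lam=3.0, mu_tilde=1.5, mean_A=2.0, mean_A2=6.0),
    dict(lam=1.0, mu_tilde=0.5, mean_A=4.0, mean_A2=20.0),
]

total_space = 0.0                           # relation (7)
for p in product_types:
    n_bins, size, space = bins_per_type(**p)
    total_space += space
    print(f"#Bins = {n_bins:.1f}, Size = {size:.1f} m^2, Space = {space:.1f} m^2")
print(f"TotalSpace = {total_space:.1f} m^2")
```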

If the warehouse chooses to divide all arriving batches of product type i over n bins, the arrival of product type i can be seen as the arrival of n artificial product types, say (i, 1), (i, 2), …, (i, n). These artificial product types arrive at the same rate as the original product type; the size of the arriving batches, however, is smaller. Assuming that arriving batches are divided equally among the n bins, one can postulate the following relations:

$$\lambda_{(i,l)} = \lambda_i \quad \forall\, l \in \{1,2,\dots,n\},\ i \in \{1,2,\dots,N\} \qquad (8)$$

$$E(A_{(i,l)}^t) = \frac{1}{n}\, E(A_i^t) \quad \forall\, l \in \{1,2,\dots,n\},\ i \in \{1,2,\dots,N\} \qquad (9)$$

Let Size_(i,n) be the required size of bins used for the storage of products of type i when arriving batches of type i are divided among n bins. Combining (4), (8) and (9) one can now formulate the following relation:

$$\text{Size}_{(i,n)} = \frac{E((A_{(i,l)}^t)^2)}{E(A_{(i,l)}^t)} = \frac{1}{n}\,\frac{E((A_i^t)^2)}{E(A_i^t)} = \frac{1}{n}\,\text{Size}_{(i,1)} \qquad (10)$$

As formulated in the second paragraph, the advantage of dividing an arriving batch among n different bins is that these bins are expected to be available sooner for the storage of other products. Intuitively one would perhaps postulate the relation μ̃_(i,l) = (1/n)·μ̃_i and use this in combination with (3), (8) and (9) to calculate the required number of bins when arriving batches are divided among n bins. This, however, will not lead to the desired results, as is shown in the following example.

Let us consider again the example introduced in the second paragraph. Figure 3 shows how the inventory level in the bins is expected to develop if the arriving batch is divided among 2 bins. However, when the method described above is used, the model approximates the inventory levels of these bins as shown in figure 5. In this case both bins are available for new storage only after 2 days.

Figure 5. Incorrect approximation of inventory levels



Consequently the advantage of dividing the arriving batch over 2 bins is neutralized. In order to let the model approximate the inventory levels in the desired way, the following statistics are used when calculating the required number of bins:

$$\lambda_{(i,l)} = \lambda_i \quad \forall\, l \in \{1,2,\dots,n\},\ i \in \{1,2,\dots,N\} \qquad (11)$$

$$E(A_{(i,l)}^t) = \frac{1}{n}\, E(A_i^t) \quad \forall\, l \in \{1,2,\dots,n\},\ i \in \{1,2,\dots,N\} \qquad (12)$$

$$\tilde{\mu}_{(i,l)} = \frac{1}{l}\,\tilde{\mu}_i \quad \forall\, l \in \{1,2,\dots,n\},\ i \in \{1,2,\dots,N\} \qquad (13)$$

Using these statistics, the model approximates the inventory levels of the bins from the example introduced earlier as shown in figure 6.

Looking at figure 6 one can immediately conclude that the approximated inventory levels are not correct. At moment 0 bin 2 has an approximated inventory level of 2 m²; in reality, the inventory level of bin 2 will never be greater than 1 m². Relations (11), (12) and (13), however, are only used for the calculation of the required number of bins, and therefore it is not the inventory levels but the moments at which the bins become available for the storage of new products that have to be approximated accurately. Comparing figure 3 and figure 6, one can conclude that the approximation of these moments is as desired.

Let #Bins(i,n) be equal to the required number of bins for the storage of products of type i if every arriving batch of type i is divided among n bins. Combining (3), (11), (12) and (13), one can now postulate the relation below.

$$\#\text{Bins}_{(i,n)} = \sum_{l=1}^{n} \frac{\lambda_{(i,l)} \cdot E(A_{(i,l)}^t)}{\tilde{\mu}_{(i,l)}} = \sum_{l=1}^{n} \frac{l \cdot \lambda_i \cdot E(A_i^t)}{n \cdot \tilde{\mu}_i} = \sum_{l=1}^{n} \frac{l}{n}\,\#\text{Bins}_{(i,1)} = \frac{1}{2}(n+1)\,\#\text{Bins}_{(i,1)} \qquad (14)$$

 

Let Space(i,n) be equal to the space used by the set of bins required for the storage of products of type i, if every arriving batch of type i is divided among n bins. Using (5), (10) and (14) an explicit relation between Space(i,n) and the number of bins among which an arriving batch is divided can be formulated.

$$\text{Space}_{(i,n)} = \frac{1}{2}(n+1)\,\#\text{Bins}_{(i,1)}\left(\frac{1}{n}\,\text{Size}_{(i,1)} + 2\right) = \#\text{Bins}_{(i,1)}\left(\frac{1}{2}\,\text{Size}_{(i,1)} + \frac{1}{2n}\,\text{Size}_{(i,1)} + n + 1\right) \qquad (15)$$

For every type of product one can now calculate an optimal value of n by choosing n such that the first derivative of (15) is equal to 0. After some simplifications this first derivative is equal to:

$$\frac{\partial\, \text{Space}_{(i,n)}}{\partial n} = \#\text{Bins}_{(i,1)} - \frac{\text{Size}_{(i,1)} \cdot \#\text{Bins}_{(i,1)}}{2 n^2} \qquad (16)$$

(16) leads to the following relation for an optimal value of n, say n*:

$$n^* = \sqrt{\frac{\text{Size}_{(i,1)}}{2}} = \sqrt{\frac{E((A_i^t)^2)}{2\, E(A_i^t)}} \qquad (17)$$

With (17) one now has an explicit relation between an optimal storage strategy and characteristics of the arrival process of a product. (17) states that when the variation in the size of the arriving batches increases, the number of bins among which arriving batches are divided should increase as well. This relation is shown in figure 7 for a product which arrives in batches with an expected size of 1 m².

Intuitively this relation is clear: dividing arriving batches among more bins means that the bins, after arriving products are stored in them, will be available again sooner for the storage of new products. Accordingly, the set of bins is better able to anticipate unexpected arrivals that need to be stored.

The optimal set of bins is determined by choosing for each product type an optimal n* according to (17), and using this n* in the relations below to derive the optimal number of bins and their optimal size.

$$\#\text{Bins}_{(i,n^*)} = \frac{1}{2}(n^*+1)\,\frac{\lambda_i \cdot E(A_i^t)}{\tilde{\mu}_i} \qquad (18)$$

$$\text{Size}_{(i,n^*)} = \frac{E(A_i^t)}{n^*} \qquad (19)$$

One would perhaps expect (10), rather than (19), as the relation for the required size of the bins. Using (10), however, will not lead to a correct measurement of the required size. This is caused by the fact that (10) uses the average size of the present inventory at an arbitrary moment as an estimator for the average size of an arriving batch. Since large arriving batches tend to be present in the warehouse for a longer period, this estimator is biased. Therefore using (19) to determine the required size of the bins leads to more accurate results.
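As an illustration of how (17), (18) and (19) would be used in practice, the following sketch computes the optimal split and the resulting set of bins for one product type. It is a toy example, not the article's implementation: the input figures are invented, and in a real design n* would be rounded to a whole number of bins.

```python
# Sketch of relations (17)-(19); the input figures are invented examples.
import math

def optimal_split(lam, mu_tilde, mean_A, mean_A2):
    """Return (n*, #Bins_(i,n*), Size_(i,n*)) for one product type."""
    size_1 = mean_A2 / mean_A                                # Size_(i,1), relation (4)
    n_star = math.sqrt(size_1 / 2.0)                         # relation (17)
    n_bins = 0.5 * (n_star + 1.0) * lam * mean_A / mu_tilde  # relation (18)
    size = mean_A / n_star                                   # relation (19)
    return n_star, n_bins, size

n_star, n_bins, size = optimal_split(lam=3.0, mu_tilde=1.5, mean_A=2.0, mean_A2=6.0)
print(f"n* = {n_star:.2f}, #Bins = {n_bins:.1f}, Size = {size:.2f} m^2")
```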

Figure 6. Correct approximation of inventory levels
Figure 7. Relation between variance and optimal number of bins


Conclusion

The determination of a set of bins is an important strategic decision for SAP-warehouses. The decision to use a particular set of bins greatly influences the space the warehouse uses, and therefore the costs the warehouse faces. In this article a direct relation is formulated between the variability of the size of the arriving batches, the number of bins required to store the batches, and the required size of the bins. These relations are given in (17), (18) and (19).

If one uses these relations, the obtained set of bins is expected to be able to store the arriving batches while minimizing the required space. The relations implicitly say that the number of bins, and therefore the flexibility of the design to anticipate unexpected arrivals, should increase as the variability in the size of the arriving batches rises, which is intuitively a clear statement.

References

Adan, I. and J. Resing. Queueing Theory. Eindhoven University of Technology, 2002.

Gross, D. and C.M. Harris. Fundamentals of Queueing Theory. 3rd edition, Wiley, 1998.


What if they rely on the wrong figures?

Entrepreneurship is in their blood. Soon they will add one, maybe two more shops, so that the small family business becomes a whole chain. But what if they take too much risk, because the economic figures on which their plans are based are rosier than reality?

That is why De Nederlandsche Bank (DNB) provides reliable statistics. These concern not only the figures of financial institutions, but also statistics on the Dutch balance of payments and our international investment position. Producing statistics is not DNB's only task. We also supervise the stability of banks, insurers and pension funds. As part of the European System of Central Banks we furthermore contribute to solid monetary policy and smooth, secure payments. In this way we stand up for the financial stability of the Netherlands, because confidence in our financial system is the precondition for prosperity and a healthy economy. Do you want to contribute to that? Then visit www.werkenbijdnb.nl.

| General economists
| HBO graduates (economics/statistics)

Working on trust.




Mathematical Economics

Genetic Algorithms in Network Congestion Games

There are many standard problems that can be solved with algorithms that give a guarantee of optimality. Most of these algorithms do not run in polynomial time, however, which renders them practically useless. In this thesis we describe an atomic routing problem that is hard to solve through conventional algorithms that use branch and bound techniques, and we evaluate the use of evolutionary programming, a popular metaheuristic method, to solve this kind of problem. The main focus is to show the reader the relative ease of using evolutionary programming.

by: Vincent D. Warmerdam

Introduction

We are interested in finding short paths for multiple players on a graph. We are not interested in the shortest path per player; we are interested in finding the allocation that has the least lag (or cost) on a global scale. The sum of the costs of all players is what we would like to minimize. The lag that one experiences by traveling a road is not constant but changes depending on the number of people using the road: the more people use the road, the higher the congestion, and we take this into account in our setting. This setting is also known as a congestion game. In this setting each player may act selfishly, so what is optimal for a certain player may not be optimal for all players on a global scale. We are interested in finding the optimal allocation of paths that minimizes the sum of costs of all players.

The setting

To put the setting more formally: we have an undirected graph G = (V, A) with nodes V and arcs A. On this graph we have a set of players K = {1, ..., k}, and player i needs to travel from a node s_i to a node t_i. When a player travels over an arc, the player experiences a lag. Each arc has its own lag function l_a(x_a), where x_a is the number of players that travel over the arc. Because there are multiple players in this setting, the value of x_a can indeed be larger than one. We define x_a as x_a = Σ_i x_{i,a}, where x_{i,a} equals one if player i travels over arc a and zero if it does not. In this setting a player has to travel entirely over a single path.

The total lag of all the players is described by the following function if we consider linear lag functions:

z = Σ_i Σ_a x_{i,a} l_a(x_a)
  = Σ_i Σ_a x_{i,a} (b_a x_a + c_a)
  = Σ_i Σ_a [x_{i,a} b_a Σ_j x_{j,a} + x_{i,a} c_a]

For this problem the reader is urged not to worry about the feasible region. It is bounded, the disk space needed is O(nmk), and the number of decision variables needed is O(kn²).
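To make the objective concrete, here is a small, self-contained sketch of how z could be evaluated for a given allocation of paths under linear lag functions. The toy graph, lag parameters and player paths are invented; only the formula above is taken from the text.

```python
# Evaluate z = sum_i sum_a x_{i,a} * l_a(x_a) for linear lags l_a(x) = b_a * x + c_a.
# All data below is an invented toy example.
from collections import Counter

def total_lag(paths, lag_params):
    """paths: one list of arcs (node pairs) per player; lag_params: arc -> (b_a, c_a)."""
    load = Counter(arc for path in paths for arc in path)   # x_a for every arc
    z = 0.0
    for path in paths:
        for arc in path:
            b, c = lag_params[arc]
            z += b * load[arc] + c      # each player pays l_a(x_a) on every arc it uses
    return z

lag_params = {("s", "v"): (2.0, 1.0), ("v", "t"): (1.0, 1.0), ("s", "t"): (0.0, 6.0)}
paths = [[("s", "v"), ("v", "t")], [("s", "v"), ("v", "t")]]   # two players share a route
print(total_lag(paths, lag_params))                            # 16.0 for this toy instance
```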

Conventional methods

Solving our original problem could involve some sort of branch and bound algorithm that makes use of an underlying algorithm that could solve the above problem were it not an integer problem. This process can take quite a while. For linear problems this is a widely used method and it can be quite effective. With the linear lag functions in our setting, however, we clearly have a nonlinear criterion function and we would need to apply specialist methods to obtain an optimal solution for such a case. It seems, however, that there are a couple of useful methods that would suit this approach. It can be proven that our problem is quadratic but also convex. Underlying the branch and bound method we could therefore use quadratic programming, gradient methods or cutting plane methods. No matter what method we use under the hood of branch and bound, the main problem is that we still have a very large problem to deal with. This problem becomes harder and harder as we add more players. The addition of arcs would not make this problem much worse; the addition of players does.


Vincent D. Warmerdam
Vincent D. Warmerdam is a bachelor econometrics and operations research student at the Vrije Universiteit in Amsterdam. Before studying in Amsterdam he obtained his first-year degree in Delft, studying Industrial Design at the TU. Right now he is taking extracurricular courses in the fields of biology and social sciences to fill the gap of time between bachelor and master. He hopes to achieve a dual major in both econometrics and operations research.


We have positive characteristics that make this problem doable: polynomial data size requirements and convexity. But in the bigger picture this will only be part of a much larger branch and bound problem. All the computational effort that goes into solving it is only peanuts when you realize how many times the branch and bound procedure might need to run it. Because of this the problem belongs to the class of NP-complete problems. So then, is this the way to go? Perhaps not.

A different approach

Because of the sheer computational complexity of certain problems it is often a good choice to turn to metaheuristic methods. Metaheuristic methods, although they do not give a guarantee of optimality, are very practical because they are able to give a good approximation of a solution that could be very close to optimal. A good metaheuristic method is one that searches the feasible region for an optimal solution while being able to avoid getting 'locked' into local optima. One such method is evolutionary programming.

The idea behind evolutionary programming is quite simple. We start out with a couple of feasible solutions and then combine characteristics of them to find better solutions. From the set of initial solutions we make pairs that act as parents. These parents spawn new offspring by passing along genetic data, and these offspring are evaluated. The offspring themselves can undergo mutations after being born. After evaluating the offspring, the best will again be chosen to become parents that make pairs and produce more offspring. We can repeat this procedure as long as we want. The procedure mirrors nature according to the theory of evolution: with each time step another generation is born and only the fittest (best solutions) make it to the next round. Nature does this with a string of DNA information; our algorithm does the exact same but with a string of decision variables. Genetic algorithms seem to favor problems that are integer based because of this property.

The setup of the genetic algorithm is very simple, although many different variations are possible. In biology these are restricted by the laws of nature; in our domain we may play around with them a bit. A standard evolutionary algorithm has the following pseudo-code:

1. Create a population of feasible initial solutions;
2. Evaluate the solutions in the population;
3. Survival of the fittest: choose parents and create new solutions;
4. Apply mutations, define the new population;
5. Remember the best solution found so far;
6. If the stop criterion is met, stop; otherwise repeat from 2.
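The pseudo-code above translates almost directly into a short program. The skeleton below is a hedged sketch rather than the thesis implementation: the problem-specific pieces (random_solution, fitness, crossover, mutate) are placeholders that would be filled in with the path-allocation operators described later in this article.

```python
# Generic skeleton of the evolutionary loop; problem-specific operators are passed in.
import random

def genetic_algorithm(random_solution, fitness, crossover, mutate,
                      pop_size=50, generations=200):
    population = [random_solution() for _ in range(pop_size)]     # step 1
    best = min(population, key=fitness)                           # step 5 (minimisation)
    for _ in range(generations):                                  # step 6: stop criterion
        population.sort(key=fitness)                              # step 2: evaluate
        parents = population[: pop_size // 2]                     # step 3: fittest survive
        children = [mutate(crossover(random.choice(parents),      # steps 3-4: offspring
                                     random.choice(parents)))
                    for _ in range(pop_size)]
        population = children
        best = min([best] + population, key=fitness)              # remember best so far
    return best
```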

One can only imagine the sheer amount of flexibility in these algorithms. Not only could you choose how genetic information is passed down to children, but also which parents are selected to mate, how a mutation influences the genotype of a new offspring, what the initial population is and how the population takes shape throughout the natural selection.

In our setting the string of genetic information describes the paths that all players choose to take. When two such strings are combined, we check for each player whether the paths from both strings have overlapping arcs. We choose one of these arcs, if there are any, and randomly make a new path that connects the player's starting node to the starting node of the arc and the ending node of the arc to the player's ending node. We make sure that the solution is feasible: we don't allow loops in the paths. This way a new individual is created that has genetic information of the parents but also a mutation.

We experiment with three different properties in the thesis: initialization with Nash, keeping old generations, and the use of simulated annealing to determine which parents get to mate and with whom. The initialization with Nash is determined through an updating Dijkstra algorithm. Each player consecutively determines whether their own situation can be improved; once a set of recurring allocations is reached we choose the best one. In pseudo-code:

1. Select a starting allocation for all players;
2. Select a player and evaluate the current allocation;
3. Define the neighborhood (other possible paths for the player);
4. Check whether a better neighbor exists (Dijkstra);
5. If so, change the path; otherwise don't;
6. Move on to the next player and repeat from 2, unless allocations recur; in that case stop and take the best of these recurring allocations as a Nash equilibrium.
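A possible reading of this procedure in code is sketched below. It uses networkx for the Dijkstra step; the stopping rule is simplified to "no player changes its path anymore", and the graph, lag parameters and player terminals would have to be supplied by the user, so treat it as an assumption-laden illustration rather than the thesis implementation.

```python
# Iterated best response: each player in turn recomputes a shortest path (Dijkstra)
# against the congestion caused by the other players' current paths.
import networkx as nx
from collections import Counter

def best_response_allocation(G, players, lag, max_rounds=100):
    """players: list of (source, target); lag: frozenset arc -> (b, c); paths are node lists."""
    paths = [nx.shortest_path(G, s, t) for s, t in players]        # starting allocation
    for _ in range(max_rounds):
        changed = False
        for i, (s, t) in enumerate(players):
            others = Counter()                                      # load caused by the others
            for j, p in enumerate(paths):
                if j != i:
                    others.update(frozenset(e) for e in zip(p, p[1:]))
            for u, v in G.edges():
                x = others[frozenset((u, v))] + 1                   # +1 for this player itself
                b, c = lag[frozenset((u, v))]
                G[u][v]["weight"] = b * x + c
            new_path = nx.shortest_path(G, s, t, weight="weight")   # Dijkstra best response
            if new_path != paths[i]:
                paths[i], changed = new_path, True
        if not changed:                                             # no one wants to deviate
            return paths
    return paths
```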

When keeping old generations is on in the algorithm we select the best members from the old and new generation to determine the new parents in the current generation.

Figure 1. The absence of convergence makes this search algorithm very slow as it is a nearly random search.


Student Magazine Actuarial Sciences, Econometrics & Operations Research

Are you interested in joining the editorial staff and having your name in the colophon?

If the answer to the question above is yes, please send an e-mail to the chief editor at [email protected].

The staff of Aenorm is looking for people who would like to:
- find or write articles to publish in Aenorm;
- conduct interviews for Aenorm;
- make summaries of (in)famous articles;
- or maintain the Aenorm website.

To be on the editorial board, you do not necessarily have to live in the Netherlands.



We need to keep in mind that some level of convergence is favorable, because we would like to find an optimum within a certain timeframe, but that too much convergence would turn the algorithm into a local search. The power of the algorithm lies in the fact that it is able to escape local optima; early convergence of the population might deny us this very powerful characteristic. We insert two figures to demonstrate both extremes.

When simulated annealing is used, the selection of parents is randomly biased: better performance means a higher chance of getting selected to become a parent. As time passes this process is updated to make it harder for individuals with lower performance to find a mate. Without simulated annealing the best parent gets combined with the second best, the third with the fourth, and so on. We list a short table with some results of the algorithm in table 1. The numbers in parentheses represent the variance of the time needed or average best.
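The annealing-biased selection can be pictured with a small sketch. The Boltzmann-style weighting below is an assumption used for illustration; the thesis may use a different biasing rule, and fitness here is a cost to be minimised.

```python
# Parent selection that is random but biased towards fitter (lower-cost) individuals;
# lowering the temperature over the generations sharpens the bias.
import math
import random

def select_parent(population, fitness, temperature):
    weights = [math.exp(-fitness(ind) / temperature) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]

# e.g. temperature = t0 * 0.95 ** generation, so weak individuals find a mate
# easily at the start and hardly at all near the end of the run.
```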

Note that the method names are abbreviations of Dijkstra, Simulated Annealing and Keep Old Generations. These algorithms were run in a small (3 by 3) Manhattan setting. Each method is run 10 times in order to supply us with the data in table 1. We urge the reader not to worry about cost functions and player paths; these are all basic and the same throughout all experiments. We list the average best solutions found and the average time needed, with their respective standard deviations in parentheses. Each method creates 50 generations ten times. One should note a few things, however. These numbers give an impression of how well a genetic method performs in our current setting.

Note that time is not really a consideration when one wants to judge the performance of the different methods, as they are all very much alike. Logically, it seems that initializing with Dijkstra is a good idea. It also seems that simulated annealing on its own is not as powerful as it is when old generations are kept. In an attempt to delve a bit deeper into this we run the algorithm again, but for more iterations and for more players. The results from these tests can be seen in table 2.

Table 1. Shows different methods and their output. Each method is run ten times to produce the above outcomes. Numbers in parentheses represent variance. Total best is the best outcome from the 10 runs, average best shows the average best of the 10 runs.

Method     # Players  Iterations  Time Needed    Average Best   Total Best
-          10         50          25.7 (0.53)    337.1 (15.9)   311
D          10         50          25.1 (0.62)    334.0 (27.2)   304
SA         10         50          26.1 (0.61)    349.8 (22.0)   314
KOG        10         50          23.2 (0.51)    331.0 (29.7)   323
D SA       10         50          26.0 (1.16)    353.7 (17.4)   330
D KOG      10         50          22.3 (0.44)    334.8 (10.9)   316
KOG SA     10         50          25.4 (1.28)    331.6 (15.7)   307
D SA KOG   10         50          24.7 (0.82)    331.2 (16.3)   312

Figure 2. A strongly converging algorithm improves much more quickly, but may be unable to improve further because it is stuck in a local optimum once the whole population becomes identical.

Figure 3. A run with a method with D, KOG and SA that runs 1000 generations instead of 200.


The results seem a bit counterintuitive. It is true that the average best results are highest when you have a randomly biased converging evolutionary method, but the total best solution seems to be found with very random methods. It is very hard to point out why this is the case, because the search, although biased, is a random method. The easiest explanation is best shown through illustration.

A converging algorithm needs to be able to converge within the time frame for it to be an effective search. When combined with simulated annealing the algorithm is able to escape local optima, but it will need more time to converge. Note in figure 3 how much improvement still occurs after 200 generations. Even after 1000 iterations, the algorithm has not fully converged yet.

Conclusions

As a general guideline for using these algorithms, it seems best to have the algorithm be random in early stages and less random in later stages. As for convergence, one must make sure that the convergence takes place within the timeframe set for the method. The algorithm, sadly, will never be able to give any guarantee of optimality.

This seems to be the major weakness of genetic algorithms. Even though they are very flexible (our original problem could also hold a non-convex criterion function and we would still be able to apply the algorithm), easy to program and able to quickly find a good solution for a combinatorial optimization problem within a set amount of time, they are quite poor at finding the best solution. From a pure mathematical perspective, a genetic algorithm is more a stochastic process than a method for guaranteed optimization. It is amazing, however, how such a slightly altered version of a random search can be so useful an algorithm.

References

Wolfe, P. "The Simplex Method for Quadratic Programming." Econometrica 27 (1959): 383-398.

Brinkhuis, J. and V. Tikhomirov. Optimization: Insights and Applications. Princeton University Press, 2005: 304-316.

Papadimitriou, Christos H. and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Dover Publications, 1998.

Davis, Lawrence. Handbook of Genetic Algorithms. London: Chapman and Hall, 1991.

Table 2. Shows more methods and their output.

Method     # Players  Iterations  Time Needed    Average Best   Total Best
D          10         100         50.2 (1.94)    333.4 (12.9)   309
D SA       10         100         53.4 (2.0)     338.4 (16.3)   307
D KOG      10         100         47.4 (4.9)     323.6 (12.6)   310
D SA KOG   10         100         53.5 (7.7)     326.7 (12.7)   313
D          20         200         244.4 (157.3)  1450.0 (38.8)  1391
D SA       20         200         207.4 (90.8)   1492.7 (73.4)  1317
D KOG      20         200         205.7 (144.2)  1415.8 (37.6)  1365
D SA KOG   20         200         221.8 (111.3)  1406.7 (34.0)  1370


Actuarial Sciences

Micro-level Stochastic Loss Reserving

With the introduction of Solvency 2 (in 2012) and IFRS 4 Phase 2 (in 2013) insurers face major challenges. The measurement of future cash flows and their uncertainty becomes more and more important. That also gives rise to the question whether the currently used techniques can be improved. Antonio and Plat (2010)¹ have introduced a new methodology for stochastic loss reserving for general insurance. In this article this methodology is summarized and applied to an existing insurance portfolio.

by: Katrien Antonio and Richard Plat

Introduction

For an overview of current techniques, see England and Verrall (2002). These techniques can be applied to so-called run-off triangles containing either paid losses or incurred losses (i.e. the sum of paid losses and case reserves). In a run-off triangle observable variables are summarized per arrival (or origin) year and development year combination. An arrival year is the year in which the claim occurred, while the development year refers to the delay in payment relative to the origin year.
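As a reminder of what these aggregate techniques look like, the sketch below builds a tiny run-off triangle and completes it with deterministic chain ladder development factors. The three-by-three triangle is invented and the code is only an illustration of the mechanics referred to in this section.

```python
# Toy run-off triangle (cumulative paid losses) and deterministic chain ladder factors.
import numpy as np

triangle = np.array([            # rows: arrival year, columns: development year
    [100.0, 180.0, 200.0],
    [110.0, 190.0, np.nan],
    [120.0, np.nan, np.nan],
])

n = triangle.shape[1]
factors = []
for j in range(n - 1):
    known = ~np.isnan(triangle[:, j + 1])
    factors.append(triangle[known, j + 1].sum() / triangle[known, j].sum())

completed = triangle.copy()
for j in range(n - 1):           # fill the lower-right part with the development factors
    missing = np.isnan(completed[:, j + 1])
    completed[missing, j + 1] = completed[missing, j] * factors[j]

latest_diagonal = np.array([triangle[0, 2], triangle[1, 1], triangle[2, 0]])
reserve = completed[:, -1].sum() - latest_diagonal.sum()
print(factors, reserve)
```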

Current techniques

The most popular approach is the Chain Ladder approach, largely because of its practicality. However, the use of aggregate data in combination with (stochastic variants of) the Chain Ladder approach (or similar techniques) gives rise to several issues. A whole literature has evolved to address these issues, which are (in random order):

1) Different results between projections based on paid losses or incurred losses, addressed by Quarg and Mack (2008).

2) Lack of robustness and the treatment of outliers, see Verdonck et al (2009).

3) The existence of the Chain Ladder bias, see Halliwell (2007) and Taylor (2003).

4) Instability in ultimate claims for recent arrival years, see Bornhuetter and Ferguson (1972).

5) Modeling negative or zero cells in a stochastic setting, see Kunkler (2004).

6) The inclusion of calendar year effects, see Verbeek (1972) and Zehnwirth (1994).

7) The different treatment of small and large claims, see Alai and Wüthrich (2009).

8) The need for a tail factor, see for example Mack (1999).

9) Over-parameterization of the Chain Ladder method, see Wright (1990) and Renshaw (1994).

10) Separate assessment of IBNR and RBNS claims, see Schnieper (1991) and Liu and Verrall (2009).

11) The realism of the Poisson distribution underlying the Chain Ladder method.

12) Not using lots of useful information about the individual claims data, as noted by England and Verrall (2002) and Taylor and Campbell (2002).

Most references above present useful additions to the Chain Ladder method, but these additions cannot all be applied simultaneously. More importantly, the existence of these issues and the substantial literature about them indicates that the use of aggregate data in combination with (stochastic variants of) the Chain Ladder approach (or similar techniques) is not fully adequate for capturing the complexities of stochastic reserving for general insurance.

Micro-level stochastic loss reserving

The run-off process of an individual general insurance claim is shown in figure 1.

The interval [t1, t2] represents the reporting delay. In this interval the claim is not yet known to the insurer.


1 This working paper is available for download at http://ssrn.com.

Katrien Antonio and Richard Plat

Katrien Antonio is an assistant professor in actuarial science at the University of Amsterdam (webpage: http://home.medewerker.uva.nl/k.antonio/)

Richard Plat AAG RBA is Senior Risk Manager at the Group Risk Department of Eureko / Achmea Holding


A claim in this interval is therefore Incurred But Not Reported (IBNR). The interval [t2, t6] is often referred to as the settlement delay; within this interval the claim is Reported But Not Settled (RBNS). Typically, databases within general insurers contain detailed information about the run-off process of historical and current claims. The question arises why this large collection of data is not used in the reserving process, by modeling on the level of individual claims (micro-level). Therefore, Antonio and Plat (2010) have developed a stochastic model on micro-level for stochastic reserving, in the spirit of Norberg (1993, 1999) and Haastrup and Arjas (1996).

The quality of reserves and their uncertainty can be improved by using more detailed claims data. A micro-level approach allows much closer modeling of the claims process. Many of the issues mentioned above will not exist when using a micro-level approach, because of the availability of lots of data and the potential flexibility in modeling the future claims process. For example, specific information (deductibles, policy limits, calendar year) can be included in the projection of the cash flows when claims are modeled at an individual level. The use of lots of (individual) data avoids robustness problems and over-parameterization. The problems with negative or zero cells and with setting the tail factor are also circumvented, and small and large claims can be handled simultaneously. Furthermore, individual claim modeling can provide a natural solution for the dilemma within the traditional literature whether to use triangles with paid claims or incurred claims. After all, the case reserve can also be used as a covariate in the projection process of future cash flows.

In the remainder of this article the methodology is summarized and results are shown for an example based on a general liability portfolio.

The model consists of 4 building blocks:

- Reporting delay
- Number of IBNR claims
- Development process
- Payments

The distributions of the building blocks above can be fitted based on the available individual data2. Below the building blocks are further described and the modeling choices that have been made for the example portfolios are highlighted.

Reporting delay
The reporting delay is a one-time, single-type event that can be modeled using standard distributions from survival analysis, such as the Exponential, Gompertz or Weibull distribution. Because a large part of the claims will be reported in the first few days after the occurrence, we have used a mixture of a Weibull distribution and 9 degenerate distributions. The latter are meant to fit the claims reported in the first days more closely.

Number of IBNR claims
We have used a piecewise constant specification (on a monthly basis) for the occurrence rate of a claim. Combining the reporting delay distribution and this occurrence process, one can distinguish between IBNR and RBNS claims and simulate the number of IBNR claims when projecting future cash flows.
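A rough simulation of these first two building blocks could look as follows. Every number in it (monthly rates, mixture weights, Weibull parameters) is invented, and the mixture uses only five point masses instead of the nine degenerate components mentioned above, so this is a hedged toy version of the idea rather than the fitted model.

```python
# Simulate claim occurrences with piecewise constant monthly rates, attach a reporting
# delay drawn from a mixture of point masses and a Weibull tail, and count the claims
# that are IBNR (occurred but not yet reported) at the valuation date.
import numpy as np

rng = np.random.default_rng(1)

monthly_rate = np.array([40, 45, 50, 55, 60, 60])     # expected claim counts per month
p_point = 0.65                                        # probability of an "early" report
point_days = np.arange(5)                             # point masses at delays of 0..4 days
weibull_shape, weibull_scale = 0.9, 60.0              # Weibull tail of the delay (days)
valuation_day = 30 * len(monthly_rate)                # valuation at the end of the window

def sample_reporting_delay(n):
    early = rng.random(n) < p_point
    delays = rng.weibull(weibull_shape, n) * weibull_scale
    delays[early] = rng.choice(point_days, size=early.sum())
    return delays

counts = rng.poisson(monthly_rate)                    # occurrences per month
occurrence_day = np.concatenate(
    [30 * m + rng.uniform(0, 30, c) for m, c in enumerate(counts)])
report_day = occurrence_day + sample_reporting_delay(occurrence_day.size)
n_ibnr = int((report_day > valuation_day).sum())
print(f"simulated IBNR claims at the valuation date: {n_ibnr}")
```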

Development process
The development process is modelled using the statistical framework of recurrent events. The different events that are specified are:
- Type 1: settlement without payment;
- Type 2: settlement with a payment (at the same time);
- Type 3: payment without settlement.

This process is modeled through a piecewise constant specification for the hazard rate of an event. A good alternative could be to use Weibull hazard rates.
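As an illustration of such a piecewise constant, competing-risks specification, the sketch below draws the next event time and type for an open claim; the hazard levels per development year are made-up numbers, not portfolio estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative piecewise constant hazards per development year for the three event
# types (1: settlement without payment, 2: settlement with payment, 3: payment
# without settlement).
hazards = {1: (0.10, 0.30, 0.80),
           2: (0.20, 0.35, 0.50),
           3: (0.40, 0.30, 0.20)}   # development year 3 and later

def next_event(t):
    """Draw the next event time and type for an open claim at development time t (years)."""
    while True:
        year = min(int(t) + 1, max(hazards))
        h = np.array(hazards[year])
        total = h.sum()
        wait = rng.exponential(1.0 / total)
        if year == max(hazards) or int(t + wait) + 1 == year:
            event_type = int(rng.choice(3, p=h / total)) + 1
            return t + wait, event_type
        t = float(int(t) + 1)        # hazard level changes at the year boundary: restart there

print(next_event(0.0))
```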

Payments
Events of type 2 and type 3 come with a payment. Several distributions have been fitted to the data, such as the Lognormal, Burr and Gamma distributions. The Lognormal distribution fits the data of the example portfolio best. This is further refined by including the development year and the initial reserve category as explanatory variables. The case reserves are categorised in a few classes. This reflects the empirical finding that the probability of a high (low) payment is higher for claims with a high (low) case reserve.
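A sketch of such a lognormal severity model with covariates; the intercept, sigma and the effects of development year and reserve class below are placeholders, not the values fitted to the example portfolio.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative lognormal severity model: the log-mean depends on the development
# year and the (categorised) initial case reserve.
intercept, sigma = 7.0, 1.1
beta_dev_year = {1: 0.0, 2: 0.25, 3: 0.40}
beta_reserve = {"low": 0.0, "mid": 0.8, "high": 1.6}

def sample_payment(dev_year, reserve_class, size=1):
    """Draw payment amounts for an event of type 2 or type 3."""
    mu = intercept + beta_dev_year[dev_year] + beta_reserve[reserve_class]
    return rng.lognormal(mean=mu, sigma=sigma, size=size)

print(sample_payment(dev_year=2, reserve_class="high", size=3))
```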

Based on the building blocks above, the future cash flows can be simulated. Results of this exercise are shown in the next section.

Example

We compare the results of the model described above with the results of traditional actuarial techniques applied to run-off triangles.

2 All processes are fitted and simulated in SAS.

Figure 1. Run-off process of an individual general insurance claim


This is done with an out-of-sample exercise, where the reserve per 1–1–2005 is calculated based on data from 1997–2004. Given that the results for 2005–2009 are already known, the results of the models can be confronted with the realisations.

Figure 2 shows the distributions per 1–1–2005 for bodily injury claims of a general liability portfolio (for private individuals), based on 10,000 simulations. Furthermore the actual realisation (the dashed vertical black line) is given. The results are compared with two standard actuarial models developed for aggregate data, being a stochastic version of the Chain-Ladder model (based on an Overdispersed Poisson distribution) and a Lognormal model. Both of these models are implemented in a Bayesian framework.

Figure 2 shows that both the Overdispersed Poisson model and the Lognormal model overstate the reserve for this case-study: the actually observed amount is in the left tail of the distribution. The resulting distribution of the micro-level model seems closer to reality (see Antonio and Plat (2010) for tables with numerical results). Similar conclusions were drawn for separate calendar years and for another case-study using material claims of the same general liability portfolio, see Antonio and Plat (2010).

Conclusion

In this article we have introduced a new model for stochastic loss reserving for general insurance, based on modelling at micro-level. This model makes better use of the large collection of data and circumvents the issues that exist with models based on aggregate data. An out-of-sample exercise shows that for our case-study the proposed model is preferable compared to traditional actuarial techniques.

References

Alai, D.H. and M.V. Wüthrich. “Modelling small and large claims in a chain ladder framework.” Working paper (2009).

Antonio, K., and R. Plat. “Micro-level stochastic loss reserving.” Working paper (2010).

Arjas, E.. “The claims reserving problem in non-life insurance: some structural ideas.” ASTIN Bulletin 19 (1989):139-152.

Bornhuetter, R.L. and R.E. Ferguson. “The actuary and IBNR.” Proc. CAS LIX (1972):181-195.

England, P.D. and R.J. Verrall. “Stochastic claims reserving in general insurance.” British Actuarial Journal 8 (2002):443-518.

Haastrup, S. and E. Arjas. “Claims reserving in continuous time: a nonparametric Bayesian approach.” ASTIN Bulletin 26 (1996):139-164.

Mack, T.. “The standard error of chain ladder reserve estimates: recursive calculation and the inclusion of a tail factor.” ASTIN Bulletin 29 (1999):361-366.

Norberg, R.. “Prediction of outstanding liabilities in non-life insurance.” ASTIN Bulletin 23 (1993):95-115.

Norberg, R.. “Prediction of outstanding liabilities ii. Model variations and extensions.” ASTIN Bulletin 29 (1999):5-25.

Quarg, G. and T. Mack. “Munich Chain Ladder: a reserving method that reduces the gap between IBNR projections based on paid losses and IBNR projections based on incurred losses.” Variance 2 (2008):266-299

Schnieper, R. “Separating true IBNR and IBNER claims.” ASTIN Bulletin 21 (1991):111-127.

Taylor, G. and M. Campbell. “Statistical case estimation.” Research paper 104, The University of Melbourne, Australia 1 (2002).

Verdonck, T., M. Van Wouwe and J. Dhaene. “A robustification of the chain-ladder method.” North American Actuarial Journal 13 (2009):280-298.

Figure 2. Out-of-sample results – injury claims


Econometrics

The problem of inflation is nearly as old as currency itself, and inflation-linked products have been around for centuries as well. In this article Rogier Galesloot discusses the history of inflation-linked products and shows how inflation-linked bonds, swaps, and caps/floors can be of interest to an organization or institution looking to hedge against inflation.

by: Rogier Galesloot

Introduction

There have been times in history when inflation rose to double-digit percentages and then there are times, such as the present, when inflation is even negative. During prolonged periods of extremely high or low inflation, consumers will eventually start feeling the pinch. Businesses and governments are particularly vulnerable to inflation, as positive and negative cash flows depend on inflation. Examples of this include rents, gas and commodity prices. But there are numerous inflation-linked products available to hedge against this inflation risk. Inflation-linked products have existed since the American War of Independence (1775–1783). During this war the American colonies were weighed down by extremely high war inflation, creating dissatisfaction among American soldiers about the decline in their purchasing power. The State of Massachusetts then decided to issue inflation-linked bonds. By purchasing these bonds, soldiers were able to secure a consistent purchasing power level. Once the war and the attendant high inflation were over, these bonds were forgotten. It was not until the 1980s that inflation-linked bonds were issued again, this time by the British government. These products were designed for pension funds, which used them to hedge against increasing pensions as a result of inflation. In the following years the inflation market grew significantly. Nowadays inflation-linked bonds are issued in more than 40 countries and extensive trade is also conducted in inflation-linked derivatives, such as options. In 2008 more than USD 1,500 billion-worth of inflation-linked bonds had been issued worldwide.

Who uses them?

Inflation-linked products can be of interest to any organization and institution. If income or expenditure depends on inflation, an inflation-linked product – a bond or a derivative – can be used as protection against inflation risk, compensating for increased prices. An organization that collects rent, for example, could use an inflation-linked swap to exchange uncertain future rental income for certain, fixed cash flow. The organization will then be able to base its long-term view on a series of fixed, future cash flows.

Inflation-linked bonds

A significant difference between a conventional bond and an inflation-linked bond is that the principal or the coupon value of the latter is index-linked to inflation. Instead of the principal, the initial principal plus the inflation from that period is repaid on maturity. Alternatively, an annually indexed coupon can be paid instead of a constant coupon value. An example of such a product is the inflation-linked bond issued by the British government, as mentioned above. The principal of this bond is indexed each period to the year-on-year change in the British Consumer Price Index (CPI). As a result, the purchasing power of the bond holder is the same on maturity as at the start.
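A minimal numerical sketch of such an indexed-principal bond, assuming hypothetical CPI fixings: coupon and redemption are scaled by the ratio of the CPI at the payment date to the base CPI at issue, so the holder's purchasing power is preserved.

```python
# Stylised indexed-principal bond with illustrative CPI fixings (not market data).
base_cpi = 100.0
coupon_rate = 0.01                      # 1% real coupon on the indexed principal
principal = 1_000_000
cpi_fixings = [102.0, 104.5, 106.6]     # hypothetical CPI at the three coupon dates

for year, cpi in enumerate(cpi_fixings, start=1):
    index_ratio = cpi / base_cpi
    coupon = coupon_rate * principal * index_ratio
    redemption = principal * index_ratio if year == len(cpi_fixings) else 0.0
    print(f"year {year}: coupon {coupon:,.0f}, redemption {redemption:,.0f}")
```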

Inflation-linked swaps

In an inflation-linked swap a fixed payment is usually exchanged for an inflation rate. This percentage is also calculated based on the CPI. One way is to divide the CPI at the time of a cash flow by a contractually determined base-rate CPI. Projected annual inflation rates can also be distilled from the forward CPI curve. The structure of an inflation-linked swap is shown in Figure 1. An inflation-linked swap can be used to exchange the uncertainty of inflation-dependent cash flows for a certain, fixed cash flow.

Benefit from Inflation with Bonds, Swaps, Caps, and Floors

Rogier Galesloot

Rogier Galesloot studied Econometrics at the University of Amsterdam. He has written his thesis during an internship at Zanders and ING Investment Management. This article is based on this thesis, which is written under supervision of Peter Boswijk. After his graduation in 2009 Rogier started to work at Zanders as an associate consultant. He is working on the department specialized in the valuation of derivatives.



Inflation-linked caps and inflation-linked floors

An inflation-linked cap or inflation-linked floor is a derivative based on inflation. This type of product can be used to hedge against both high and low inflation. An inflation-linked cap generates cash flows if inflation exceeds a certain percentage, known as the strike. An inflation-linked floor, on the other hand, generates cash flows if inflation falls below a certain level. For this category a strike rate of 0% is often used, as protection against deflation. Inflation-linked caps and inflation-linked floors can be regarded as options on inflation. A cap can be a useful tool when income suffers from inflation, as in the case of rents. When inflation increases, purchasing power declines. This decline can be compensated for by income from the cap contract.
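A minimal payoff sketch for a single period of such a cap and floor, assuming annual settlement on realised CPI inflation; the strikes and notional are illustrative only.

```python
# Year-on-year payoff functions for an inflation-linked caplet and floorlet.
notional = 1_000_000
cap_strike, floor_strike = 0.03, 0.00

def caplet_payoff(inflation):
    """Pays out when realised inflation exceeds the cap strike."""
    return notional * max(inflation - cap_strike, 0.0)

def floorlet_payoff(inflation):
    """Pays out when realised inflation falls below the floor strike (deflation protection)."""
    return notional * max(floor_strike - inflation, 0.0)

for infl in (-0.01, 0.02, 0.045):
    print(f"inflation {infl:+.1%}: cap pays {caplet_payoff(infl):,.0f}, floor pays {floorlet_payoff(infl):,.0f}")
```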

Valuation

The term “CPI” is mentioned several times in the paragraphs above. This index is used in the calculation of cash flows. If an inflation-linked product is issued based on European inflation, the European CPI should be used. Figure 2 shows the annual CPI percentages. It is clear that, even within Europe, there are considerable differences in inflation rates. It is also clear that inflation in the EU, like Dutch and German inflation, leans towards 2%, which is the long-term inflation target for EU member states.

The valuation of inflation-linked derivatives is based on the theory of the valuation of an interest rate derivative. The forward interest rate curve is used to value an interest rate derivative. A forward CPI curve is required to value an inflation-linked derivative. The future, risk-neutral, cash flows are determined based on this curve. A disadvantage of the forward CPI curve is that it is only available in countries where a sufficient number of inflation-linked bonds are issued. There are currently only six countries in the world besides the European Union where the inflation market liquidity is such that there is also a forward CPI curve. These countries are the United Kingdom, the United States, France, Japan, Australia, and Italy.

As an example, consider an inflation-linked swap based on European inflation. Party A pays the inflation to party B once a year. In exchange, party B pays a fixed amount of 2% of the principal of EUR 1 million (see column C in Table 1 below).

1. First a forward inflation curve must be constructed. The forward CPI curve can be used for this purpose – see column E. The inflation rates are calculated based on the forward CPI curve. The CPI on December 31, 2009 was 107.51.

2. A discount curve is subsequently constructed, column D. Depending on the payment frequency, a 1-monthly, 3-monthly or 6-monthly curve can be selected.

3. The uncertain cash flows are estimated based on the information from point 1, see column G.

4. Using the discount factors from point 2, the present value (PV) of the cash flows is calculated, see columns H and I. In this case the 6-monthly curve is used.

5. The difference between this market value and the market value of the fixed rate results in the net present value of the swap, i.e. a net present value of EUR 1,848. At present, therefore, this product has a positive market value for the party receiving the inflation.

Figure 1. Diagram of an inflation-linked swap

Figure 2. Annual CPI figures

Table 1. Market value calculation inflation-linked swap

A           B            C          D         E        F          G          H             I
Start date  End date     Principal  Discount  Forward  Inflation  Inflation  PV inflation  PV fixed
                                    factor    CPI      rate       cash flow  rate          rate
1-1-2010    12-31-2010   1,000,000  0.9903    108.93   1.32%      13,200     13,072        -/- 17,330
1-1-2011    12-31-2011   1,000,000  0.9722    111.12   1.67%      16,650     16,187        -/- 17,014
1-1-2012    12-31-2012   1,000,000  0.9398    113.61   1.86%      18,575     17,457        -/- 16,447
1-1-2013    12-31-2013   1,000,000  0.9017    116.50   2.03%      20,275     18,282        -/- 15,780
1-1-2014    12-31-2014   1,000,000  0.8605    119.56   2.15%      21,475     18,479        -/- 15,059
Total                                                                         83,477        -/- 81,629
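As a cross-check of steps 1 to 5, a minimal Python sketch that recomputes Table 1 from the forward CPI fixings and discount factors. The inflation rate is read here as the annualised cumulative inflation versus the base CPI of 107.51 (this reading reproduces the table's cash-flow column up to rounding), and the per-year fixed cash flow is set to the value implied by the table's PV fixed column; both conventions are assumptions of this sketch, not statements about the underlying contract.

```python
import numpy as np

# Inputs copied from Table 1; everything else is recomputed.
base_cpi = 107.51
principal = 1_000_000
forward_cpi = np.array([108.93, 111.12, 113.61, 116.50, 119.56])
discount = np.array([0.9903, 0.9722, 0.9398, 0.9017, 0.8605])
years = np.arange(1, 6)

inflation_rate = (forward_cpi / base_cpi) ** (1.0 / years) - 1.0    # column F
inflation_cf = inflation_rate * principal                           # column G
pv_inflation = inflation_cf * discount                              # column H

fixed_cf = 17_500.0          # fixed leg per year implied by column I (assumption)
pv_fixed = fixed_cf * discount                                      # column I

print(round(pv_inflation.sum()))                   # close to the table's 83,477
print(round(pv_fixed.sum()))                       # close to the table's 81,629
print(round(pv_inflation.sum() - pv_fixed.sum()))  # net PV for the inflation receiver, roughly EUR 1,848
```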

Conclusion

To conclude, inflation-linked products can be of interest to any organization and institution. If income or expenditure depends on inflation, an inflation-linked product – a bond or a derivative – can be used as protection against inflation risk, compensating for increased prices. However, many countries have an illiquid inflation-linked bond market. The existence of such a market means that it is difficult to value the derivatives based on this inflation. This problem is examined from a theoretical point of view in my thesis, on which this article is based. Considered from a practical point of view, for countries whose payment instrument is the euro there is a workable solution: European inflation can be used as a proxy for the domestic, illiquid inflation market. The situation is considerably more complex in countries where another currency is used. In order to approach this problem in a practical manner, the historical correlations between currency and inflation and the various liquid markets should be examined.

References

Brigo, D and F. Mercurio. Interest rate models, Theory and Practice. Berlin: Springer, 2001.

Barclays Capital Research. Global Inflation-Linked Products, A User’s Guide. London: 2004.

Galesloot, R.. “Valuation of inflation linked caps : in illiquid markets”, 2009.

Hull, J.. Options, futures and other derivatives. New Jersey: Prentice Hall, 2005. 7th edition.

Lont, I.. “Een introductie in inflatieproducten.” 13.49 (2005): 39-43.

Wrase, J.M.. “Inflation-Indexed Bonds: How Do They Work?” Business Review, July 1997: 3-16.


We are looking for Consultants with different levels of experience

e-mail: [email protected] • website: www.zanders.eu

About Zanders
Zanders is an independent firm with a track record of innovation and success across the total spectrum of Treasury & Finance. We provide international advisory, interim, outsourcing and transaction services. Zanders creates value with our specialist expertise and independence.

Since 1994, Zanders has grown into a professional organization with currently a dedicated advisory team of more than 80 consultants and an associated pool of more than 45 interim managers. Our success is especially due to our employees. That is why Zanders provides a working environment which offers development opportunities to everyone, on a professional as well as personal level. At Zanders, we are always looking for talented people who would like to use their expertise and know-how in our firm. We are currently looking for Consultants to strengthen our team.

What is the profile of our Consultants?
To be considered for this position, you should meet the following requirements:
• University degree in Economics, Econometrics, Business Administration, Mathematics or Physics;
• Up to 2 years work experience for associate consultants;
• 2-5 years work experience for consultants;
• Well-developed analytical skills and affinity with the financial markets;
• Pragmatic and solution-oriented;
• Excellent command of English (spoken and written), willing to learn Dutch or French, any other language being an asset.

Areas of expertise
• Treasury Management
• Risk Management
• Corporate Finance

Competences
• Strategy & Organization
• Processes & Systems
• Modeling & Valuation
• Structuring & Arranging

Would you like more information about this position, and/or are you interested in a career at Zanders? If so, please contact our Human Resources Manager, Philine Veldhuysen.

Zanders Netherlands: Brinklaan 134, 1404 GV Bussum, +31 (0)35 692 89 89
Zanders Belgium: Place de l'Albertine 2, B2, 1000 Brussels, +32 (0)2 213 84 00
Zanders UK: 26 Dover Street, London W1S 4LY, +44 (0)207 763 7296
Postal address: P.O. Box 221, 1400 AE Bussum, The Netherlands


The classical secretary problem is a well known optimal stopping problem from probability theory. It is usually described by different real life examples, notably the process of hiring a secretary. Imagine a company manager in need of a secretary. Our manager wants to hire only the best secretary from a given set of n candidates, where n is known. No candidate is equally as qualified as another. The manager decides to interview the candidates one by one in a random fashion. Every time he has interviewed a candidate he has to decide immediately whether to hire her or to reject her and interview the next one. During the interview process he can only judge the qualities of those candidates he has already interviewed. This means that for every candidate he has observed, there might be an even better qualified one within the set of candidates yet to be observed. Of course the idea is that by the time only a small number of candidates remain unobserved, a recently interviewed candidate that is relatively best will probably also be the overall best candidate.

by: Chris Dietz, Dinard van der Laan, Ad Ridder

Introduction

There is abundant research literature on this classical secretary problem, for which we refer to Ferguson (1989) for an historical note and an extensive bibliography. The exact optimal policy is known, and may be derived by various methods, see for instance Dynkin and Yushkevich (1969), and Gilbert and Mosteller (1966). Also, many variations and generalizations of the original problem have been introduced and analysed. One of these generalizations is the focus of our paper, namely the problem to select one of the b best, where 1 ≤ b ≤ n is some preassigned number (notice that b = 1 is the classical secretary problem). Originally, this problem was introduced by Gusein-Zade (1966), who derived the structure of the optimal policy: there is a sequence 0 ≤ s1 < ... < sb ≤ sb+1 = n − 1 of position thresholds such that when candidate i is presented, and judged to have relative rank k among the i candidates1, then the optimal decision says

i ≤ s1 : continue whatever k is;

sj + 1 ≤ i ≤ sj+1 (where j = 1, ..., b) : stop if k ≤ j; continue if k > j;

i = n : stop whatever k is.

Furthermore, Gusein-Zade (1966) gave an algorithm to compute these thresholds, and derived asymptotic expressions (as n → ∞) for the b = 2 case. Also Frank and Samuels (1980) proposed an algorithm, and gave the limiting (as n → ∞) probabilities and limiting proportional thresholds sj / n.

The algorithms of Frank and Samuels and Gusein-Zade are based on dynamic programming, which means that the optimal thresholds sj, and the optimal winning probability, are determined numerically. The next interest was to find analytic expressions. To our best knowledge, this has been resolved only for b = 2 by Gilbert and Mosteller (1966), and for b = 3 by Quine and Law (1996). Although the latter claim that their approach is applicable to produce exact results for any b, it is clear that the expressions become rather intractable for larger b. This has inspired us to develop approximate results for larger b.

We consider two approximate policies for the general b case: single-level policies, and double-level policies. A single-level policy is given by a single position threshold s in conjunction with a rank level r, such that when candidate i is presented, and judged to have relative rank k among the first i candidates, then the policy says

i ≤ s : continue whatever k is;

s + 1 ≤ i ≤ n − 1 : stop if k ≤ r; continue if k > r;

i = n : stop whatever k is.

A Generalization of the Classical Secretary Problem

Chris Dietz, Dinard van der Laan, Ad Ridder

Ad Ridder is associate professor at the Department of Econometrics of VU University. His research interests include applied probability, rare event simulation, and performance evaluation of stochastic systems.

Dinard van der Laan is assistant professor at the Department of Econometrics of VU University. His research interests include applied probability and combinatorics.

Chris Dietz is a Ph.D. student at the Department of Econometrics of VU University. He works on hierarchical networks in economics and game theory.

1 It is most convenient to rank the candidates 1, 2, …, n, with rank 1 being the best, rank 2 being second best, etc.



A double-level policy is given by two position thresholds s1 < s2 in conjunction with two rank levels r1 < r2, such that when candidate i is presented, and judged to have relative rank k among the first i candidates, then the policy says

i ≤ s1 : continue whatever k is;

s1 + 1 ≤ i ≤ s2 : stop if k ≤ r1; continue if k > r1;

s2 + 1 ≤ i ≤ n − 1 : stop if k ≤ r2; continue if k > r2;

i = n : stop whatever k is.

We shall derive the exact winning probability for these two approximate policies, when the threshold and level parameters are given. These expressions can then be used easily to compute the optimal single-level and the optimal double-level policies, i.e., we optimize the winning probabilities (under these level policies) with respect to their threshold and level parameters. The most important result is that the winning probabilities of the optimal double-level policies are extremely close to the winning probabilities of the optimal policies (with the b thresholds), specifically for larger b, see Table 1. In other words, we have found explicit formulas that closely approximate the winning probabilities for this generalized secretary problem.

Single-level policies

Before we consider the single-level policies we first introduce some notation we use throughout this paper. The absolute rank of the i-th object is denoted by Xi, while the relative rank of the i-th object is denoted by Yi. Ranks run from 1 to n, and we say that rank i is higher than rank j when i < j. Moreover, for natural x and n, the falling factorial x(x − 1) ⋯ (x − n + 1) is denoted by (x)n.

Single-level policies are determined by two integer parameters: s (called the position threshold) and r (called the rank level). Following such a single-level policy, objects are considered for selection from position s + 1 onwards, and the first one encountered with a relative rank higher than or equal to r is picked. Moreover, we assume that if none of the first n − 1 items is picked, then the last object is certainly picked, independent of its relative rank Yn. Let π = π(s, r) be such a policy with 1 ≤ r ≤ b and r ≤ s ≤ n − 1; we discard the trivial cases of s = n (never stop before the last object) and s < r (stop at position s + 1), and denote the probability of success by PSLP(π). Thus PSLP(π) is the probability that an object is picked with absolute rank higher than or equal to b if policy π is applied. Note: when we wish to express the parameters (n, b, s, r) explicitly we write them; otherwise we omit them.
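A minimal Monte Carlo sketch of such a policy, useful as a sanity check on the closed-form result below; the particular values of n, b, s and r in the call are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(4)

def single_level_success(n, b, s, r, sims=20_000):
    """Estimate the success probability of pi(s, r): scan a random arrival order,
    stop at the first position after s whose relative rank is at most r (or at
    position n), and count a success when the picked absolute rank is at most b."""
    wins = 0
    for _ in range(sims):
        ranks = rng.permutation(n) + 1            # absolute ranks in arrival order, 1 = best
        picked = ranks[-1]                        # forced pick of the last object by default
        for i in range(s, n - 1):                 # 0-based index i is position i + 1
            relative_rank = 1 + np.sum(ranks[:i] < ranks[i])
            if relative_rank <= r:
                picked = ranks[i]
                break
        wins += picked <= b
    return wins / sims

print(single_level_success(n=100, b=5, s=30, r=2))
```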

Theorem 1
For r = 1, 2, …, b, and r ≤ s ≤ n − 1:

PSLP(π(s, r)) = A + B,

with

A = \sum_{i=s+1}^{n-1} \frac{(s)_r}{n\,(i-1)_r} \left( r + \sum_{j=r+1}^{b} \sum_{k=1}^{r} \frac{\binom{j-1}{k-1}\binom{n-j}{i-k}}{\binom{n-1}{i-1}} \right),
\qquad
B = \frac{b\,(s)_r}{n\,(n-1)_r}.
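A direct Python transcription of this expression, with the caveat that it encodes the formula as reconstructed above; the last two lines check it against the classical b = 1 case, for which PSLP(π(s, 1)) should reduce to (s/n) Σ_{m=s}^{n−1} 1/m.

```python
from math import comb

def falling(x, m):
    """Falling factorial (x)_m = x (x - 1) ... (x - m + 1)."""
    out = 1
    for i in range(m):
        out *= x - i
    return out

def p_slp(n, b, s, r):
    """Winning probability of the single-level policy pi(s, r), per the expression above."""
    A = 0.0
    for i in range(s + 1, n):                         # i = s + 1, ..., n - 1
        hyper = sum(comb(j - 1, k - 1) * comb(n - j, i - k)
                    for j in range(r + 1, b + 1)
                    for k in range(1, r + 1)) / comb(n - 1, i - 1)
        A += falling(s, r) / (n * falling(i - 1, r)) * (r + hyper)
    B = b * falling(s, r) / (n * falling(n - 1, r))
    return A + B

# Sanity check against the classical b = 1 case: (s/n) * sum_{m=s}^{n-1} 1/m.
print(p_slp(n=100, b=1, s=37, r=1))
print(37 / 100 * sum(1.0 / m for m in range(37, 100)))
```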

Before proving this expression we need two auxiliary results (we give no proofs!).

Lemma 2
For s = r, …, n − 2 and i = s + 2, s + 3, …, n we have that

P(min{Ys+1, Ys+2, ..., Yi−1} > r) = (s)r / (i − 1)r

Lemma 3
For i = s + 1, s + 2, …, n − 1 and r = 1, 2, …, b we have that

P(Y_i \le r \mid X_i = j) =
\begin{cases}
1 & \text{for } j = 1, 2, \ldots, r,\\[4pt]
\displaystyle \sum_{k=1}^{r} \frac{\binom{j-1}{k-1}\binom{n-j}{i-k}}{\binom{n-1}{i-1}} & \text{for } j = r+1, r+2, \ldots, b.
\end{cases}

Proof of Theorem 1
The case s = n − 1 is trivial because then

PSLP(π(n − 1, r)) = P(Xn ≤ b) = b / n.

Let r ≤ s ≤ n − 2. For i = s + 1, ..., n and j = 1, ..., b let A_i^j be the event that Xi = j and policy π(s, r) picks the object at position i:

A_i^j = {min{Ys+1, Ys+2, ..., Yi−1} > r, Yi ≤ r, Xi = j}.

Thus,

PSLP(π(s, r)) = Σ_{i=s+1}^{n} Σ_{j=1}^{b} P(A_i^j) = Σ_{j=1}^{b} P(A_{s+1}^j) + Σ_{i=s+2}^{n−1} Σ_{j=1}^{b} P(A_i^j) + Σ_{j=1}^{b} P(A_n^j).

Cases i = s + 1 and i = n are treated separately. Notice that for k < i the relative ranks Yk are independent of both Xi and Yi, thus (for s + 2 ≤ i ≤ n − 1)

P(A_i^j) = P(min{Ys+1, Ys+2, ..., Yi−1} > r, Yi ≤ r, Xi = j) = P(min{Ys+1, Ys+2, ..., Yi−1} > r) · P(Yi ≤ r | Xi = j) · P(Xi = j),


Student Magazine Actuarial Sciences, Econometrics & Operations Research

Are you interested in being on the editorial staff and having your name in the colofon?

If the answer to the question above is yes, please send an e-mail to the chief editor at [email protected].

The staff of Aenorm is looking for people who like to:

- find or write articles to publish in Aenorm;
- conduct interviews for Aenorm;
- make summaries of (in)famous articles;
- or maintain the Aenorm website.

To be on the editorial board, you do not necessarily have to live in the Netherlands.


with P(Xi = j) = 1 / n, and the other two factors were determined in Lemma 2 and Lemma 3. For i = s + 1:

P(A_{s+1}^j) = P(X_{s+1} = j, Y_{s+1} ≤ r) = P(Y_{s+1} ≤ r | X_{s+1} = j) · P(X_{s+1} = j),

and then apply Lemma 3 while noticing that (s)r / (i − 1)r = 1. For i = n:

P(A_n^j) = P(min{Ys+1, Ys+2, ..., Yn−1} > r, Xn = j) = P(min{Ys+1, Ys+2, ..., Yn−1} > r) · P(Xn = j),

and apply Lemma 2. □

We defer the comparison of the performance of single-level policies with the optimal policy to the section “Numerical Results”.

Double-level policies

A natural extension of the single-level policies is the class of double-level policies for the secretary problem where the objective is to pick one of the b best objects from n objects consecutively arriving one by one in the usual random fashion. Let two rank levels 1 ≤ r1 < r2 ≤ b and two position thresholds r1 ≤ s1 < s2 ≤ n − 1 be given (we discard the trivial cases of s2 = n, which gives again a single-level policy, and s1 < r1, which leads to stopping at position s1 + 1). The double-level policy says to observe the first s1 presented objects without picking any; next, from objects at positions s1 + 1 up to s2 the first one encountered with a relative rank higher than or equal to r1 is picked; if no such object appears, the first object at positions s2 + 1 up to n − 1 is selected which has a relative rank of at least r2; finally, if none of these first n − 1 items is picked, the last object is certainly picked, independent of its relative rank Yn. Slightly abusing notation, we denote again by π = π(s, r) such a double-level policy and by PDLP(π) its winning probability. Similar to the proof of Theorem 1 we derive the winning probability.

Theorem 4
The double-level policy given by rank levels 1 ≤ r1 < r2 ≤ b and position thresholds r1 ≤ s1 < s2 ≤ n − 1 has winning probability

PDLP(π(s, r)) = A + B + C,

with

A = \sum_{i=s_1+1}^{s_2} \frac{(s_1)_{r_1}}{n\,(i-1)_{r_1}} \left( r_1 + \sum_{j=r_1+1}^{b} \sum_{k=1}^{r_1} \frac{\binom{j-1}{k-1}\binom{n-j}{i-k}}{\binom{n-1}{i-1}} \right),

B = \sum_{i=s_2+1}^{n-1} \frac{(s_1)_{r_1} (s_2)_{r_2}}{n\,(s_2)_{r_1} (i-1)_{r_2}} \left( r_2 + \sum_{j=r_2+1}^{b} \sum_{k=1}^{r_2} \frac{\binom{j-1}{k-1}\binom{n-j}{i-k}}{\binom{n-1}{i-1}} \right),

C = \frac{b\,(s_1)_{r_1} (s_2)_{r_2}}{n\,(s_2)_{r_1} (n-1)_{r_2}}.

Numerical Results

We can find numerically the optimal single-level policy for a given number of candidates n, and a given worst allowable rank b, in a two-step approach as

max_{r = 1, ..., b}  max_{s = 1, ..., n−1}  PSLP(π(s, r)).

Thus, in the first step, we fix also a rank level r (between 1 and b). The function {r, ..., n − 1} → PSLP(π(∙, r)) is unimodal concave (this follows after a marginal analysis), and thus we can solve numerically for the optimal position threshold s* = s*(r), and the associated winning probability PSLP(π(s*, r)). The second step is simply a complete enumeration to determine

max{PSLP(π(s*, r)) : r = 1, ..., b}

However, it can be shown that the function {1, ..., b} → PSLP(π(s*, ∙)) is unimodal, which yields a shortcut in the second step. To check our numerical results, we have constructed an alternative method to find the optimal position threshold s*(r), given n, b, r, namely by dynamic programming.

Table 1. Relative errors (%) of the optimal single- and double-level policies.

          Single-level                      Double-level
b     n = 100   n = 250   n = 1000     n = 100   n = 250   n = 1000
5      10.630    10.854     10.965       3.286     3.331      3.354
10      5.262     5.674      5.876       1.702     1.841      1.911
15      2.095     2.467      2.658       0.568     0.686      0.746
20      0.739     0.996      1.131       0.155     0.221      0.258
25      0.239     0.381      0.464       0.036     0.066      0.084


Similarly, in the case of double-level policies, we have constructed a two-step approach, where the first step finds the optimal position thresholds s1* and s2* for any given pair of rank levels (r1, r2), together with the associated winning probability PDLP(π(s*, r)) (vector notation for s and r). Then a straightforward search procedure determines

max_{r1 = 1, ..., b−1}  max_{r2 = r1+1, ..., b}  PDLP(π(s*, r)).

Finally, as mentioned in the introductory section, dynamic programming can be applied easily to obtain the optimal (multi-level) policy (Frank and Samuels and Gusein-Zade). We have implemented the algorithms for the optimal multi-level and optimal double-level policies on a website, of which the address can be found in the references.

Table 1 gives the relative errors of the winning probabilities of the optimal single- and double-level policies for n = 100, n = 250 and n = 1000, and for b = 5, 10, …, 25, relative to the corresponding optimal multi-level policies. The double-level policy gives extremely small errors for larger b, up to very large population sizes n. Also we notice that the errors (for a given b) increase slightly as n increases.

Conclusion

For the considered generalized secretary problem of selecting one of the b best out of a group of n we have obtained closed expressions for the probability of success for all possible single- and double-level policies. For any given finite values of n and b these expressions can be used to obtain the optimal single-level policy respectively the optimal double-level policy in a straightforward manner. We computed for varying b and n the optimal single-level and double-level policies and corresponding winning probabilities and compared the results to the overall optimal policy, which is determined by b position thresholds. We found that the double-level policies perform nearly as well as the overall optimal policy.

References

Dynkin, E.B. and A. Yushkevich. Markov Processes: Theorems and Problems. New York: Plenum Press, 1969.

Ferguson, T.S.. “Who solved the secretary problem?” Statistical Science 4, (1989): 282-296.

Frank, A.Q. and S.M. Samuels. “On an optimal stopping problem of Gusein-Zade.” Stochastic Processes and their Applications 10, (1980): 299-311.

Gilbert, J.P. and F. Mosteller. “Recognizing the maximum of a sequence.” Journal of the American Statistical Association 61 (1966): 35-73.

Gusein-Zade, S.M.. “The problem of choice and the optimal stopping rule for a sequence of independent trials.” Theory of Probability and its Applications 11 (1966): 472-476.

Quine, M.P. and J.S. Law.. “Exact results for a secretary problem.” Journal of Applied Probability 33, (1996): 630-639.

http://staff.feweb.vu.nl/aridder/java/best.html


Econometrics

There are several conundrums in economics which keep minds puzzling, conferences going and academic research thriving. One such conundrum is that historical stock returns are much higher than bond returns. This is called the equity premium. A second conundrum is that the money supply increases yearly by 9% but that inflation is around 2%. These puzzles are not puzzles once you connect the dots. The return on equity is asset inflation and that explains where all the money went. In fact it is simply impossible for stock returns to exceed economic growth. As the latter is around 2%, real stock returns of 7% are just impossible.

by: David Hollanders

Introduction

These artificially high equity returns have real consequences, because stockowners (banks, pension funds, insurance companies) are not as wealthy as they like to claim, and therefore will not be able to meet their liabilities, at least not in real terms, which are the only terms that matter. This is how one can interpret the credit-crisis. Pension funds are underfunded (around 70% in real terms) and decrease benefits. Banks are insolvent if not for huge bail-outs with taxpayers' money, either directly (the Dutch government bought 100 billion "worth" of assets, and guarantees another 200 billion) or indirectly via low interest rates charged by the central bank (printing 1300 billion extra money).

Why (real) stock return cannot exceed economic growth

The equity premium –the extra return compared to bonds- equals around 5%. With bond returns hovering around 4% and inflation around 2%, the real return on equity is (claimed to be) 7%. Of course, the exact return depends on timing, precise portfolio and transaction costs, but the basic picture is that it outperforms bonds and that it is higher than economic growth, which is historically 2%, tops 3%. That is, if a bank or pension fund invested 45 billion euro on the stock market in 1975, now it would have 900 billion, around 150% GDP, on paper at least. Indeed ING and ABN have more assets -again: on paper- than the Dutch GDP. To own much high-priced shares only means something if one can turn them into real goods at one point. There are two ways to do this; either sell the shares or wait for dividends to pour in.

Suppose first that the bank wants to exchange the stocks piled up for cash (for example because people run on the bank), that is, sell it. In order to sell one has to get a buyer for this enormous amount of shares. This will be problematic, because there will not be enough buyers and, even if there are, the high supply of stocks will depress prices. For argument's sake, suppose that this is not a problem and the bank receives 150% GDP in cash. But now a second problem arises; the bank (or its creditors) cannot buy 150% of GDP. Not counting inventory, second-hand stuff and machines, there is only 100% around. Added to that, once the bank (or its creditors) tries to buy real goods, this will push up prices and the asset pile is not worth as much as you thought. The high stock returns will turn out to be asset inflation. In other words, if everyone tries to turn savings into real goods, inflation will go through the roof and savings turn out to be worth less than thought. This is what happens in a bank run: people try to get cash out of the banks and the lucky ones subsequently buy goods. But this increases prices.

An alternative to selling shares and subsequently consuming the proceeds, is to hold on to the stocks and consume the resulting dividends. After all, stocks are a claim to profits of a company, paid out in dividends. This is the reason why stocks have value in the first place. The value of a stock is nothing more than the sum of all current and future dividends.

So how high a dividend return can a typical investor expect? That depends on how much the company grows and how this growth is divided over debtholders and shareholders. Now denote the total value of a typical company by V.

You can Print Money but You can’t Print Goods

David Hollanders

David Hollanders is Assistent Professor at TU Delft (section Innovation and Public Sector Efficiency). He is a junior-fellow of Netspar and will shortly finish his PhD on pension economics at Tilburg University.


This is partly debt-financed, denoted D, and partly equity-financed, denoted E. By definition it follows that V = D + E. If the company is insolvent, E is zero (that is why ING shares went down the drain in the autumn of 2008, until the government started buying crappy assets) and bondholders take a haircut. Note that the famous proposition of Modigliani and Miller states that no matter how you divide the cake, given by V, among equity- and debtholders, that does not affect the cake. So the ratio E / V does not influence V.

Now the company produces and in doing so (hopefully) increases in value. Production is the essence of economic growth. Call the growth of the company e; the value of the company one year later then equals (1 + e) V. Part of the growth goes to debt-holders in the form of interest payments at rate r, while shareholders are paid a dividend return d. Together this results in the following equality: (1 + e) V = (1 + r) D + (1 + d) E. This is just an accounting equality which states that the value of the company either goes to debt-holders or to shareholders.

All companies together make up the economy. As the economy grows with around 2% historically, e is also around 2%. (Some claim economic growth is actually less, which explains another conundrum, the "puzzle" of jobless growth in the first decade of the 21st century, but that is another issue.) Of course, there may be high-performing companies who make (much) more profit than the general level of economic growth, but for every high-flier there is an underachiever. Taken together, companies grow with economic growth.

To equate growth of all companies together with economic growth merits some more explanation, as several objections can be made. However, all these objections point in the same direction: that economic growth is an overestimation of the aggregate growth of companies. First, a balance sheet also contains inventories and capital goods, whereas economic growth refers to production in particular years. Inventories are however not productive, and the same holds for capital goods, which besides their role in yearly production do not gain in value; if anything, they depreciate. Second, one could argue that productivity growth (how much more one can produce with a fixed amount of production goods) is more relevant than economic growth. True, but because the population grows, productivity growth is almost always less than economic growth. Third, one could argue that equity-financed companies are more productive than debt-financed ones. There is no theoretical reason to assume that, and it is also not what is witnessed in practice. The reason to use equity instead of debt in the first place is that a company does not grow steadily (if it did, it could get a bank loan). Added to that, all companies that went bust during the credit-crisis (if the government had not saved them) were listed banks and insurance companies. Fourth, one could claim that not all growth accrues (or should accrue) to capital providers but also to workers. Again true, but this only lowers returns for capital providers. Fifth, there is the idea that the value of a company also takes future returns into account, which may exceed 2%. That is again totally true, and it is exactly how Enron did its accounting, handing out millions in bonuses because future profits would make up for it. Even if not all companies are Enron-likes, it is hard to see where that future growth will come from. The economy is heavily indebted, aggregate demand is tumbling and on the supply side peak oil is enough reason not to be too optimistic. But that doesn't matter, because the claim here is just that in the last decades dividend returns could not have been more than 2% in real terms if not for money printing and accounting fraud. It is, in other words, an ex post exercise.

The only point is that in the most optimistic scenario it holds that (1 + r) D + (1 + d) E = 1.02 V. Now take real interest rates to be equal to 2%, which is a very conservative estimate (only banks can borrow against 1% nominal with the ECB; regular companies would be thrilled with nominal 4%). The consequence is that dividend returns cannot be higher than 2%. Even if real interest rates were lower than 2% (which they are not), equity returns cannot reach 7%. With a high leverage ratio (E / V) of 30%, an interest rate of 1.5% and economic growth of 2%, dividend returns would be no more than 3.2%. Besides, with such a high leverage, banks would probably charge a higher, not a lower, interest rate. (Of course banks themselves have a capital ratio of 3%, which may explain why their dividend return is 20%. However, as addressed below, these "profits" result from flawed accounting.)
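A quick check of that arithmetic, using the accounting identity above: the function solves (1 + e) V = (1 + r) D + (1 + d) E for d, given the growth rate, the interest rate and the equity share E / V.

```python
# Solve (1 + e) V = (1 + r) D + (1 + d) E, with V = D + E, for the dividend return d.
def dividend_return(e, r, equity_share):
    debt_share = 1.0 - equity_share
    return ((1.0 + e) - (1.0 + r) * debt_share) / equity_share - 1.0

print(dividend_return(e=0.02, r=0.020, equity_share=0.30))   # 2.0%: d equals e when r = e
print(dividend_return(e=0.02, r=0.015, equity_share=0.30))   # about 3.2%, as quoted above
```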

So again, stocks cannot outperform economic growth in real terms. Miraculously however they did outpace economic growth for a long time, at least in nominal terms. How can that be?

Why it did (nominally): Asset inflation

When you print money and thus increase the money supply, this usually leads to goods inflation. Ever since 1971 the world has seen the money supply go berserk. In 1971 the US no longer backed the dollar with gold, essentially defaulting on its debt. This was the final step towards a debt-based economy, where all money is fiat money. The money supply was no longer backed by or constrained with a physical entity like gold or silver. Money could be printed out of thin air. And it was. The money supply in Europe increased by 9% yearly in these decades. But strangely, consumer price indexes settled down on a growth of 2%.

The equity premium is not a puzzle, it is just asset inflation


Apparently people were not so much buying goods with the freshly printed money. Instead they bought houses and shares. But with more demand than supply for equity the result was not hard to predict: stock prices increased.

And then a strange thing happened, because the stock market is no ordinary market. When the price of a commodity increases, one would expect demand to fall. But this doesn’t happen with stocks, quite the opposite. When prices increase, people buy more shares, contradicting everything economics postulates. People somehow think that the price will keep on increasing (or think other people will think the price will increase, or they believe some autoregressive econometric model that extrapolates unsustainable trends instead of anticipating its unsustainability). This reinforces itself and an asset bubble is borne. This explains the high returns on stocks (and houses and securities backed by “this-will-make-a-profit-for-sure-so-give-me-an-unrefundable-bonus-now” investments). High stock returns result from the teaming up of money printing and bubble-economics.

The consequence for mark-to-market accounting

One reaction (except denial) is: so what? Who cares that some people buy stocks that will turn out to be worth less than they paid for them and even less than they hoped. Who cares indeed, but the enormous asset bubble has real consequences for people who never invested a dime on the stock market. The reason is that the largest shareholders in the economy are banks, pension funds and insurance companies. They invest savings and pension contributions in stocks, solemnly promising to return that money one day. These financial institutions value their assets with so-called mark-to-market accounting. That is, they assume that the market price reflects the real value. This accounting assumes that there are no bubbles and no money printing; so it assumes away all reasons for high asset prices.

To see how bogus market-accounting really is (and why banks love it), consider the following hypothetical-but-not-so-hypothetical example. Bank A has a package of shares X, worth 80 in total and bank B has shares Y, worth in total 80 as well. They sell the shares to each other for 90 (so no cash is transferred), swapping the stocks. Now they both make a profit as the book value, which equals the market value, is 90. This “profit” is handed out as bonuses and dividends.

It becomes crazier. Suppose they sell the exact same packages back to each other for 100 (so again without any cash transfer). Now nothing has changed vis-à-vis the initial situation. But both banks now own 100 of stocks and made a nice "profit" of 20. (Of course you can repeat this trick indefinitely, and/or do it with other assets as well. This explains the high turnover of assets. If you sell each other crap long enough, it will look like a "will-make-a-profit-for-sure" investment eventually.) This "profit" is again divided over bonuses and dividends.

At one point it will become clear that the stocks are not worth 100 but just 80. And then it will be clear that banks are insolvent. This moment is now; this is the credit crisis. It becomes clear that high equity return is just ripping off bondholders and savers. (And now it is just ripping off tax payers, who transfer real money in exchange for bogus assets that nobody besides the government wants.)

PAYG or Full Funding

Not only banks are in trouble; pension funds (or to be precise: their participants) are also on the hook. To see the problem it is good to compare Pay-As-You-Go (with current workers paying for current retirees, as in the Dutch AOW system) with a funded system where participants "save" for themselves via a pension fund. Several people plead for such full funding because they think it makes a better return than PAYG. This is the equity premium argument all over again. But if the equity premium is questionable, perhaps so is this plea. In fact, these two systems are virtually equivalent from a macro-perspective.

If savings are invested in bonds this follows directly, as cash flows are equal (money from workers to government and from government to elderly, with or without an intermediating pension fund collecting a nice fee). But investment in stocks does not change the picture, as it ultimately remains the current young that produce for current elderly. PAYG and full funding are merely two different systems to organize claims on production.

But surely, investing in stock is at least a little better than state-organized PAYG? Well, that depends. If you invest in a company it may invest usefully (inventing a cure against AIDS) or waste it (buying an airplane for the CEO). The same holds for the government: it can likewise do something useful with taxes (subsidizing R&D which invents the cure against AIDS) or something wasteful (buy an airplane for the minister). So, the ultimate question is whether you think companies are more productive (on the margin) than the government. The credit crisis has seen too much credit going to too many worthless investments (lend other people's money to homeowners who cannot afford it, securitize the "for-sure-profits" and cash in a bonus) to be sure that is the case.

Conclusion

The stock returns of 9% nominally and 7% "real" are a consequence of the money supply gone wild and the mother of all asset bubbles. This return is, in other words, phony. Once people try to exchange their stocks for goods, asset inflation will turn into goods inflation. In the end, the real return of a stock, which is a claim on the production of companies, cannot exceed economic growth.

In particular pension funds, banks and insurance companies will see their asset pile turn to dust. That is what is now happening. Banks are insolvent if not for a large subsidy from the tax-payer, and pension funds are not paying the pension benefits that they promised.

Of course the nominal return might stay 9%, if we keep printing money and keep believing assets always go up. It may even be 90%, 900% or 9000%, if enough money is printed. But that doesn’t matter because you can print money, but you can’t print goods. In real terms, the only terms that count in the end, returns will be closer to 2%. Consequently, all bonuses paid based on 9% returns, were too high. And that solves another conundrum, why bonuses for performance grow more than economic growth, which is what that “performance” ultimately should result in.

In the end the real puzzle is why situations which are so obviously bogus are called puzzles.

Puzzle

On this page you find a few challenging puzzles. Try to solve them and compete for a prize! But first we will provide you with the answers to the puzzles of last edition.

Answer to “24-game”

The question was to make 24 with the numbers 1, 4, 5 and 6 by using only +, -, / and x. The puzzle had two solutions which are 4 / (1 – 5 / 6) and 6 / ( 5 / 4 – 1).

Answer to “Kendoku”

For the solution you only had to hand in the second line, but here you find the solution to the whole puzzle:

3 4 8 5 1 2 7 6
4 5 1 7 2 6 8 3
2 7 4 8 6 1 3 5
6 2 7 4 5 3 1 8
7 8 6 1 3 5 4 2
8 6 2 3 7 4 5 1
5 1 3 2 8 7 6 4
1 3 5 6 4 8 2 7

And here you find the puzzles of this edition:

Crossing the Bridge

4 people need to cross a bridge within 17 minutes. However, there are a few problems. The time it takes to cross the bridge differs for each person. The fastest can cross the bridge in 1 minute, the second in 2 minutes, the third in 5 minutes and the slowest one takes 10 minutes to cross the bridge. In addition, if they want to cross the bridge, they need to carry a torch and they only have one torch. Finally, they can cross the bridge with at most 2 people at a time. When two people cross the bridge, it takes them as long as it would take the slowest of the two crossing the bridge. How can the whole group cross the bridge within 17 minutes?

Marathon

Pete is running in a marathon with n participants, where n is a number between 10 and 100. All participants are numbered from 1 to n. During the marathon Pete notices that the sum of the numbers of the participants who have a lower number than Pete is exactly the same as the sum of the numbers of all the participants who have a higher number than Pete. How many people are participating in the marathon and what is Pete's number?

Solutions

Solutions to the two puzzles above can be submitted up to November 1st 2010. You can hand them in at the VSAE room (E2.02/04), mail them to [email protected] or send them to VSAE, for the attention of Aenorm puzzle 68, Roeterstraat 11, 1018 WB Amsterdam, Holland. Among the correct submissions, one book token will be won. Solutions can be both in English and Dutch.




Agenda

• 17 November: Bowling with Kraket

• 19-22 November: Short Trip Abroad to Berlin

• 8 December: Actuarial Congress on longevity risk

• 13 December: General Members Meeting

• 17 December: Cocktail drink with members and alumni

• 28 January to 6 February: Skiing Trip to Alpe d'Huez

• 11 November: Case day

• 24 November: In-house day at TNO

• 1 December: 'Sinterklaas' drink

• 5 January: New Year's dinner with Towers Watson

• 8 - 22 January: Tokyo Study Trip

The academic year is already well on the way. A new board has been installed and a record number of 110 new Econometricians started this wonderful study of ours at the Vrije Universiteit. The year started with the IDEE week and the introduction weekend for new members in Texel.

This year, the National Econometricians Football tournament, organized by Kraket, was also a big success. About 200 econometricians travelled from around the country to compete in this highly anticipated event and I’m proud to say that Kraket brought home the cup!

Coming up in November is the Caseday, which takes place at the Victoria Hotel, our In-house day at TNO and of course the social events like the bowling tournament with the VSAE and the ‘Sinterklaas’-drink. There is also our two-yearly study trip. From 8 January to 22 January, sixteen of our best students and two Professors will make the long trip to Tokyo to visit Japanese companies, universities and to get to know the Japanese culture.

The board hopes that the upcoming activities will be just as successful as the previous ones and hope to see you at one of those events.

The summer is long gone, the second block of the first semester is half way and New Year's Eve is in sight. Also, the term of the current board of the VSAE is coming to an end and within a few weeks the next board will be known. So the focus now lies on teaching the new board members the ropes and finishing our last projects successfully.

Our last, but certainly not least, project is at the beginning of next month, namely the Actuarial Congress. This year's theme will be longevity risk, with presentations by several CEOs and professors teaching us more about this problem. The day will end with a panel that will discuss several controversial propositions.

This year, two weeks before New Year, we will celebrate the end of the year with our members and alumni. For this drink the idea is that everyone will dress up, because the theme is Cocktail. We hope a lot of people will come, because we normally don’t organize a drink for both members and alumni. It will be a special evening, because it will be the last real activity our board organizes. So if you are or were a member of the VSAE, come to our cocktail drink on Friday the 17th of December.


10th edition of the Actuarial Congress (Actuariaatcongres)

8 December 2010
De Duif, Amsterdam

Theme: Longevity risk

With Lenneke Roodenburg-Berkhout as chair of the day

Panel discussion with: Henk van Broekhoven (ING, AG), Lex Geerdes (CEO Aon), Mohamed Lechkar (DNB) and Marco van der Winden (PGGM)

Presentations on the theoretical background, financial products, and the consequences for companies

For more information and registration, see www.actuariaatcongres.nl


Towers Watson

Towers Perrin and Watson Wyatt are now together Towers Watson: a global company with a single-minded focus on clients and their success.

You can rely on 14,000 experienced professionals with both local and international expertise. Our approach is based on collaboration and reliable analyses. We offer a clear perspective that links your specific situation to the bigger picture. That is how we lead you to better business results. Towers Watson. Clear results.

Towers Watson. A clear perspective for concrete solutions.

towerswatson.nl

©2010 Towers Watson. All rights reserved.

Benefits | Risk and Financial Services | Talent and Rewards