
Aenorm 71

This edition:

71 vol. 19, may ’11

The WWB: Route to Work or Exile in

Social Security?

Simulating a Social Housing Allocation Policy

Young People and their Pension: Not Everything That Media and Literature Assume Is Correct

The Search for Successful Active Management:

A Case of Skill or Just Pure Luck


© 2010 VSAE/Kraket

Best Solution for Money Problems

Colophon

Chief Editor: Myrna Hennequin

Editorial Board: Myrna Hennequin, Daan Oosterbaan

Editorial Staff: Ewout Schotanus, Sharita Wolswijk, Annelieke Baller, Dianne Kaptein, Jan Nooren

Design: United Creations © 2009

Lay-out: Daan Oosterbaan, Myrna Hennequin

Cover design: Michael Groen / © iStockphoto / hepatus

Circulation: 2000

A free subscription can be obtained at www.aenorm.eu.

Advertisers: DNB, KPMG, NIBC, OC&C, PGGM, TNO, Towers Watson

Information about advertising can be obtained from Daan Oosterbaan at [email protected]

Publication of an article does not imply that it expresses the opinion of the board of the VSAE, the board of Kraket or the editorial staff. Nothing from this magazine may be reproduced without permission of VSAE or Kraket. No rights can be derived from the content of this magazine.

ISSN 1568-2188

Editorial staff addresses:

VSAE, Roetersstraat 11, E2.02/04, 1018 WB Amsterdam, tel. 020-5254134

Kraket, De Boelelaan 1105, 1081 HV Amsterdam, tel. 020-5986015

by: Roel de Waal Malefijt

I would like to start this preface with a short question: how can an organization best improve its financial position? There are two possible answers to this question: raise more income or reduce expenditures. Due to the financial crisis, many commercial companies have already answered this question. Nowadays public organizations need to do the same. For example, the Dutch government has decided to reduce expenditures and announced some rigorous cuts. The University of Amsterdam, and especially the Faculty of Economics and Business (FEB), must also decide how to answer this question, because it is facing a substantial budget deficit.

The faculty intends to close three of its fifteen divisions. One of these is the Operational Research & Management (ORM) division of the quantitative department. In this way the faculty hopes to reduce costs quickly. But the reason for closing this particular division is unclear to me, and the way it is intended to be done is questionable.

The main reason given for the closure is the low number of students in the ORM division, as a result of which the earnings are insufficient relative to the costs of the division. But this student count is based on outdated data. Recently the numbers have grown, due to an increase in popularity in the bachelor phase and a growing number of international students. In addition, recent data show that if the research department and its related publications are considered as well, the department as a whole is cost neutral.

The way the closure is being carried out is, in my view, incorrect. The Student Council, which has a determining vote, decided that the plans should be reviewed. But at that moment the teachers had already been dismissed, so next year’s teaching will be given by external staff. In addition, the research department is being cut back. Because of these moves, the quality of the programme will decline rapidly. Ultimately, the Executive Board can close down the division without resistance.

In my opinion this is not the right order in which to arrive at the best solution; all in all a curious state of affairs, especially since Operational Research is a growing field in today’s world economy. Optimizing processes and implementing logistics improvements play a major role in the changing world economy, and the Netherlands is world famous for its solutions to such problems. In the future, demand for skilled OR staff will only increase.

Maybe the solution to the money problems is a collaboration, or even a merger, with the Vrije Universiteit Amsterdam (VU Amsterdam), which already offers a similar programme in the Operational Research field. Collaboration has a chance to lead to better education overall, due to economies of scale and a combination of knowledge. In this way, students and teachers do not become victims of the financial gap caused by doubtful policy.

But above all, let’s try to convince the board to reconsider the cuts and to find a solution that can lead to higher revenues in the long term (increasing student numbers, publications and PhD completions). Because after all, the whole quantitative department benefits from a good ORM division!

AENORM vol. 19 (71) May 2011 1


In recent years, Dutch media have addressed the growing waiting times for households looking to rent social housing. Average waiting times have reached historic highs in the bigger cities across the country. This growth is a result of a shortage of supply. To allocate a vacant accommodation in a setting where demand exceeds supply, social housing allocation policies are used to keep the process transparent and systematic. But taking notice of the growing waiting times, the question arises whether the current policy is efficient. This article takes a closer look at the social housing situation in the Netherlands, and presents a simulation model which can be used to see how a change of policy influences the allocation process.

by: Jeroen Buitendijk

Simulating a Social Housing Allocation Policy 12

This article provides a detailed insight into the effect of the introduction of the Wet Werk en Bijstand (WWB). The WWB aims at a drastic reduction of the uptake of welfare assistance by changing the incentive structure (i.e. employing the incentive-disincentive paradigm) of the executors of Dutch welfare assistance policy: the municipalities. Although libraries can be filled with literature on the paradigm change in social security and how it relates to concrete policy changes, contributions that quantify the effects of these policy changes are rare. This article quantifies the effect of the WWB introduction, and hence of the introduction of the incentive-disincentive structure in Dutch welfare assistance policy.

by: Reinier Joustra

The WWB: Route to Work or Exile in Social Security?

18

In the asset management industry the terms skill and luck are of central debate. Is it possible that managers beat the market with active strategies in a consistent manner? If so, how can skilled managers be distinguished from lucky managers, managers who beat the market just through sheer luck? To answer these questions testing procedures have been developed, often based on statistical analysis of performance measures which are based on historical returns. This article tries to answer how different performance measures based on historical returns behave, how well these tests distinguish lucky managers from skilled managers, and whether skill truly exists.

by: Bastiaan Pluijmers

The Search for Successful Active Management: A Case of Skill or Just Pure Luck

04


BSc - Recommended for readers of Bachelor-level

MSc - Recommended for readers of Master-level

PhD - Recommended for readers of PhD-level

Facultive 36

In this article, a solution is presented for problems with the euro. The euro is lurching from crisis to crisis and politics is struggling to find a solution. In April, European policy makers presented a range of measures to strengthen the euro and improve European governance. However, this package does not tackle the fundamental problem. This problem is the fact that the Economic and Monetary Union (EMU) is only a half-way station. Monetary integration is completed, but political integration is not. Clearly, surging ahead towards further political integration, forming a European government with a substantial central budget is completely unrealistic at the present time. But a simple, inexpensive and in the end self-financing solution is being overlooked.

by: Wim Boonstra

Solution for Problems of the Euro Within Reach

Puzzle 35

This article discusses statements about young people and their pension in the media and literature. In the present common pension contract there is a risk that young people and future generations bear too many risks and expenses for older generations. Because of that, the pension contract has to become future-proof again, so that older generations, young people and future generations will still want to participate in it. To meet this challenge, it is essential to know what young people require from a pension contract. We see all kinds of statements about young people and their pension in the literature and media, but are they all correct? What is the meaning of terms such as solidarity, risk and freedom of choice for young people in relation to pension?

Young People and their Pension: Not Everything That Media and Literature Assume Is Correct

by: Hans Staring

27

The Dutch government introduced a new health care insurance system in 2006. One of the elements in this new system is the possibility for insured consumers to switch between health care insurance providers each year, so that insurers need to compete for customers. The switch percentages in the years after the system change are so low that it is doubtful whether the market performs efficiently. Consumers seem to be loyal in their choice for a health care insurer, but whether this is optimal behaviour is unclear. This article takes a closer look at consumer choice behaviour on the Dutch health care insurance market.

Switching and Efficiency on the Dutch Health Care Insurance Market: A Reinforcement Approach

by: Tim Pellenkoft

32

22


Econometrics

Introduction

The active investment strategies under study are aimed at the long term and use an index as a proxy for the market they want to beat, generally referred to as the benchmark. With these strategies managers try to achieve returns in excess of the benchmark returns, in other words to outperform the benchmark, by intentionally deviating from the composition of the benchmark. In this article, managers who run an active strategy and achieve outperformance due to a solid conviction and competence in active investing are defined as skilled. Managers who do not have skill will not be able to consistently beat the benchmark in the long run.

The difficulty in identifying skilled managers comes from the fact that probability is involved. Unskilled managers can outperform the market due to luck, which can lead to the false conclusion that the manager has skill. The opposite is also possible: due to bad luck a skilled manager can show underperformance.

When using historical performance to identify skilled managers, the managers who have failed to beat the

benchmark during a longer period can be ignored. This group will mostly consist of unskilled managers. The skilled managers that end up in this group are unlucky and have underperformed the market for a longer period. Unfortunately these managers cannot be distinguished using historical performance; the historical data simply does not reflect any skill. The problem thus reduces to distinguishing the well-performing skilled managers from the well-performing unskilled managers.

Test procedures and test statistics for identifying skilled managers

To identify skilled managers by analyzing historical monthly performance series, different performance measures and test procedures have been developed. To make inference on the reliability and behavior of these performance measures and tests, they are simulated. The tests range from simple methods to more sophisticated test procedures. The simple methods in particular are widely used; to illustrate their shortcomings they are also incorporated in the simulations.

The first test looks at the mean excess return. Skillful managers should have a positive true value of the Mean Excess Return, unskilled managers a Mean Excess Return of 0 or less. The sample value of the Mean Excess Return is defined as:

$\overline{er} = \frac{1}{T} \sum_{t=1}^{T} (r_t - r_{b,t})$

The return of the active manager over month $t$ is $r_t$, the benchmark return at time $t$ is $r_{b,t}$, and $T$ is the number of months in the historical performance series. The null

In the asset management industry the terms skill and luck are the subject of a central debate. Is it possible that managers beat the market with active strategies in a consistent manner? If so, how can skilled managers be distinguished from lucky managers, managers who beat the market through sheer luck? To answer these questions, testing procedures have been developed, often based on statistical analysis of performance measures derived from historical returns. This paper tries to answer how different performance measures based on historical returns behave and how well these tests distinguish lucky managers from skilled managers. This is done by simulating the test procedures. To answer whether skill truly exists, the tests that behaved reliably in the simulations are carried out on a data set of active strategies.

by: Bastiaan Pluijmers

The Search for Successful Active Management: A Case of Skill or Just Pure Luck

Bastiaan PluijmersBastiaan Pluijmers currently works as a Fund Manager at the fiduciary manager Blue Sky Group. Before he started working he studied Financial Econometrics at the University of Amsterdam, which he finished in December of 2009. During his study Bastiaan worked part-time for Blue Sky Group and was mainly involved in fund selection and monitoring. This gave him the opportunity to write his master thesis at Blue Sky Group, under supervision of Cees Diks. This article is a summary of his master thesis.


Econometrics

hypothesis of no skill is H0: E(er) ≤ 0. The null hypothesis will be tested using a standard t-test.
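As a rough illustration, the simple t-test on the Mean Excess Return can be sketched as follows. This is a hypothetical helper on simulated toy data, not the thesis code; the function name and return parameters are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def mean_excess_return_test(r, rb):
    """One-sided t-test of H0: E(er) <= 0 on monthly excess returns.

    r and rb are arrays of manager and benchmark returns; this helper
    is an illustrative sketch, not the article's implementation."""
    er = r - rb                                    # monthly excess returns
    t_stat, p_one = stats.ttest_1samp(er, 0.0, alternative="greater")
    return er.mean(), t_stat, p_one

# Toy data: a manager with a small true edge over the benchmark
rng = np.random.default_rng(0)
rb = rng.normal(0.005, 0.04, 144)              # 144 months of benchmark returns
r = rb + rng.normal(0.002, 0.01, 144)          # manager returns
er_bar, t, p = mean_excess_return_test(r, rb)  # reject H0 if p < 0.05
```

As the article notes below, this test is only valid under stringent asymptotic conditions that monthly return series rarely satisfy.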

Next to the Mean Excess Return another well known performance measure is the Sharpe Ratio. This ratio measures relative performance over the risk free rate, per unit of risk. So the higher the ratio, the better the manager is capable of achieving excess returns per unit of risk. Whether the manager has skill or not can be evaluated by the Sharpe Criterion. The criterion states that if the manager has a significantly higher Sharpe Ratio than the benchmark, the manager has skill.

$sr = \frac{\overline{r - r_f}}{\hat{\sigma}_{r - r_f}} - \frac{\overline{r_b - r_f}}{\hat{\sigma}_{r_b - r_f}}$

Here, $r_{f,t}$ is the risk-free rate, and $sr$ is the difference between the Sharpe Ratio of the manager and that of the benchmark. The null hypothesis of no skill, H0: E(sr) = 0, can again be tested with a simple t-test.

The development of the Capital Asset Pricing Model (CAPM) and the discussion about market efficiency in the 60s led to the rise of regression-based performance models. The most basic analysis in this range is Jensen’s Alpha. According to the CAPM, returns can be divided into systematic and residual components. The following model is used:

$r_t - r_{f,t} = \alpha + \beta\,(r_{b,t} - r_{f,t}) + \varepsilon_t$

Jensen’s Alpha is the estimator of the intercept in the regression. When the regression is carried out on the monthly return series of a manager, this manager can be tested for skill by a simple t-test on the intercept. The null hypothesis of no skill, H0: E(α) = 0, again can be tested using a standard t-test.
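A minimal OLS sketch of the Jensen’s Alpha regression and its t-test might look like this (hypothetical helper name and purely illustrative simulated data):

```python
import numpy as np

def jensens_alpha(r, rb, rf):
    """Fit r - rf = alpha + beta * (rb - rf) + eps by OLS and return
    alpha, beta and the t-statistic on alpha (illustrative sketch)."""
    y = r - rf
    X = np.column_stack([np.ones_like(y), rb - rf])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    T, k = X.shape
    sigma2 = resid @ resid / (T - k)           # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)      # OLS covariance matrix
    alpha, beta = coef
    return alpha, beta, alpha / np.sqrt(cov[0, 0])

# Toy data with a true alpha of 0.1% per month and a beta of 1
rng = np.random.default_rng(1)
rf = np.full(144, 0.002)                           # flat risk-free rate
rb = rf + rng.normal(0.004, 0.04, 144)             # benchmark returns
r = rf + 0.001 + (rb - rf) + rng.normal(0, 0.01, 144)
alpha, beta, t_alpha = jensens_alpha(r, rb, rf)    # test H0: alpha = 0
```

Note that these standard errors assume homoskedastic, serially uncorrelated errors, which is exactly the assumption the article questions next.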

Although the simplicity of these three measures and procedures seems appealing, they have some statistical shortcomings. According to asymptotic theory, some very stringent restrictions have to hold for the tests to be valid, whereas historical performance series generally exhibit heteroskedasticity and serial correlation. Another issue with Jensen’s Alpha is that the regression model can be misspecified. All three tests have yet another problem in common: they are subject to data-snooping bias. Data snooping occurs when a given set of data is used more than once for purposes of inference or model selection. When this occurs, there is always the possibility that the results are simply due to chance rather than to any merit inherent in the method yielding them.

An alternative to the described testing methods can be found in the bootstrap method. The original bootstrap method was introduced by Efron in 1979. Instead of using tests based on asymptotic approximations of the distributions of test statistics, the bootstrap method

approximates the distributions of the test statistics by resampling the available data (Horowitz, 2000). Under the appropriate conditions the bootstrap method yields an approximation to the distribution that is often more accurate in finite samples than the approximation obtained from first-order asymptotics.

Besides the computational difficulties that the bootstrap relieves compared to estimating asymptotic distributions, an advantage of the bootstrap is that the empirical character of the method makes it work reasonably well when the data is not identically distributed (Politis et al., 1996). The heteroskedasticity in the data can therefore be ignored. That the data is not independent poses a bigger problem, as it affects the validity of the estimate of the distribution. Politis and Romano (1994) introduced an adapted version called the stationary bootstrap that addresses this problem. The stationary bootstrap randomly resamples blocks of observations instead of individual observations, which mimics the dependence between the observations.
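The resampling step of the stationary bootstrap can be sketched as follows. This is a simplified illustration of the Politis-Romano scheme; the mean block length 1/p is a tuning choice assumed here, not a value from the article:

```python
import numpy as np

def stationary_bootstrap_indices(T, p, rng):
    """One stationary-bootstrap resample of the index set {0,...,T-1}:
    block starts are uniform, block lengths are geometric with mean 1/p,
    and blocks wrap around the end of the series (Politis & Romano, 1994)."""
    idx = np.empty(T, dtype=int)
    t = 0
    while t < T:
        start = rng.integers(T)         # uniform block start
        length = rng.geometric(p)       # geometric block length
        for j in range(length):
            if t == T:
                break
            idx[t] = (start + j) % T    # wrap at the series end
            t += 1
    return idx

# Bootstrap distribution of the sample Mean Excess Return on toy data
rng = np.random.default_rng(0)
er = rng.normal(0.001, 0.02, 144)       # toy monthly excess returns
boot = [er[stationary_bootstrap_indices(144, 0.1, rng)].mean()
        for _ in range(500)]
```

The empirical distribution of `boot` then serves as the approximation to the null distribution of the test statistic.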

The performance measures tested with the stationary bootstrap are the Sample Mean Excess Return, $er$, and the Sharpe Ratio Criterion, $sr$. An additional measure that will be tested is the Information Ratio (IR), which measures the Mean Excess Return per extra unit of risk taken over the benchmark. The IR, simply the ratio of the Mean Excess Return and the standard deviation of the excess returns, $IR = \overline{er} / \hat{\sigma}_{er}$, seeks to summarize the mean-variance properties of a portfolio (Godwin, 1998).

The null hypotheses of no skill when testing the three performance measures with the bootstrap method are H0: E(er) = 0, H0: E(sr) = 0 and H0: E(IR) = 0.

Though most issues are solved by testing the performance measures with distributions derived from the stationary bootstrap method, the test is still subject to data snooping. Two more advanced methods have been developed to reduce the data-snooping bias.

The first method is the Reality Check (RC) procedure (White, 2000), which compares N models, in this case the N managers to be tested for skill, with a benchmark model and tries to determine whether the best model is actually better than the benchmark model. White formulated his null hypothesis as a multiple hypothesis: the whole multivariate system of manager performance series is tested at once. This differs from the stationary bootstrap tests, which evaluate each manager separately. The null hypothesis of the RC method states that the best manager has no skill: $H_0: \max_{n=1,\dots,N} E(f_n) \le 0$, with $f$ the vector of performance measures.

To test this null hypothesis, White developed the following test statistic:

$V = \max_{n=1,\dots,N} \sqrt{T}\, \bar{f}_n$


Econometrics

The statistic’s distribution again is estimated with the stationary bootstrap method, but now the whole system of managers is resampled.

The RC, however, has some shortcomings. First, the power of the Reality Check can be driven to zero by adding irrelevant and poor alternatives. A second problem of the RC is that the statistics of the different performance series, in the real data set and in the bootstrapped data set, are measured in different units of standard deviations. This can lead to a situation where an unskilled manager that shows very high relative performance because of high variance gets selected by the RC statistic over a skillful manager with a small variance. Hansen (2005) developed the Superior Predictive Ability (SPA) test to correct for this. Hansen proposed the following alternative statistic and null distribution:

$T^{SPA} = \max\left[ \max_{n=1,\dots,N} \frac{\sqrt{T}\, \bar{f}_n}{\hat{\omega}_n},\; 0 \right]$

1. where $\hat{\omega}_n^2$ is a consistent estimator of $\omega_n^2 = \mathrm{var}(\sqrt{T}\, \bar{f}_n)$.

2. Invoke a null distribution, multivariate of order N, that is based on $N(\hat{\mu}, \hat{\Omega})$, where $\hat{\mu}$ is the estimator for $\mu$, the real mean of the statistic $\bar{f}$. $\hat{\mu}_n$ is defined as:

$\hat{\mu}_n = \bar{f}_n \cdot \mathbf{1}\left\{ \sqrt{T}\, \bar{f}_n \le -\hat{\omega}_n \sqrt{2 \log \log T} \right\}$

The null hypothesis is the same as with the RC. The distribution can again be estimated by the using the stationary bootstrap.

The RC and the SPA test whether the best model outperforms the benchmark; they seek to control the simultaneous rate of error under the null hypothesis. The question here, however, is how to distinguish the managers that have skill from the unskilled managers. The tests then have to control the average rate of error rather than the simultaneous rate of error under the null hypothesis. To adapt the tests to the average rate of error, three alternative specifications of the methods are proposed.

The first alternative proposed is the stepwise summation and testing of the performance measures. This test will be referred to as the Summation Test. The null hypothesis states that the combination of the J managers with the highest RC/SPA statistics has no skill over the benchmark, $H_0: \sum_{j=N-J+1}^{N} E(f_{[j]}) \le 0$. The statistic is:

$\sum_{j=N-J+1}^{N} \sqrt{T}\, \bar{f}_{[j]}$

The square brackets around the subscript indicate order statistics of the length-N vector of performance measures.

The second alternative developed is to stepwise check managers against their own relevant approximated order distribution. This test will be referred to as the Order Test. In general, the test statistic for the J-th highest manager found looks as follows:

$\sqrt{T}\, \bar{f}_{[N-J+1]}$

where the square brackets again denote the order statistics of the length-N vector of performance measures.

The third and last alternative will be referred to as the Removal Test. This method performs the RC/SPA on the whole data set. When the null of no skill is rejected for the ‘best’ manager, this manager is removed from the data set and the RC/SPA is performed on the new data set. Again, if the null of no skill is rejected for the ‘best’ manager in this data set, that manager is removed. This step is repeated until a null hypothesis of no skill is not rejected. The conclusion is that the null hypothesis of no skill can be rejected for all managers that were tested before the last tested manager.
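The Removal Test loop can be sketched as follows, with `reject_best` standing in for the RC/SPA bootstrap test on the remaining managers (a hypothetical callable, assumed here for illustration):

```python
import numpy as np

def removal_test(perf, reject_best):
    """Iteratively test the best remaining manager; while the null of no
    skill is rejected, remove that manager and repeat (sketch only).

    perf is an (N managers x T months) array of performance series and
    reject_best is a hypothetical callable implementing the RC/SPA test."""
    remaining = list(range(perf.shape[0]))
    skilled = []
    while remaining:
        if not reject_best(perf[remaining]):     # stop at first non-rejection
            break
        sub_means = perf[remaining].mean(axis=1)
        best = remaining[int(np.argmax(sub_means))]
        skilled.append(best)                     # null rejected: label skilled
        remaining.remove(best)
    return skilled

# Toy stand-in test: reject while the best remaining mean exceeds 0.5%
perf = np.array([[0.010] * 12, [0.009] * 12, [0.000] * 12])
found = removal_test(perf, lambda sub: sub.mean(axis=1).max() > 0.005)
```

In the toy example the loop flags the first two managers and stops at the third, mirroring the stepwise logic described above.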

All three alternative specifications of the RC and SPA follow an iterative testing path. For the Summation Test, first the highest performance measure is tested, then the combination of the two highest performance measures, followed by the three highest measures, and so on until the null is no longer rejected. The same holds for the Order Test: first the highest performance measure is tested, followed by the second highest measure, and so on.

The performance measures tested with the three alternative specifications of the RC and SPA are again the Sample Mean Excess Return, the Sharpe Ratio Criterion and the Information Ratio.

The simulation results

The tests described above are evaluated in two sets of simulations, both with 10,000 replications, and all tests are carried out at a 5% significance level. The first simulation set is based on series of 144 observations; the second simulation set generates half of that, 72 observations. Two results are presented per simulated sample size:

• The average fraction of managers with skill found among the total managers that have skill, defined as power.

• The average fraction of type 1 errors made among the total number of managers without skill, defined as data-snooping bias.
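Given boolean vectors of rejections and of true skill per manager, the two reported fractions reduce to the following (illustrative helper, names assumed):

```python
import numpy as np

def power_and_ds_bias(rejected, is_skilled):
    """Power: fraction of truly skilled managers whose null was rejected.
    Data-snooping bias: fraction of unskilled managers wrongly rejected
    (type 1 errors). Both arguments are boolean arrays over managers."""
    rejected = np.asarray(rejected, dtype=bool)
    is_skilled = np.asarray(is_skilled, dtype=bool)
    power = rejected[is_skilled].mean()
    ds_bias = rejected[~is_skilled].mean()
    return power, ds_bias

# Four managers: the first two are skilled; the test flags #1, #3 and #4
power, ds_bias = power_and_ds_bias([True, False, True, True],
                                   [True, True, False, False])
```

In the tables that follow, these two fractions are averaged over the simulation replications.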

Jensen’s Alpha

In Table 1 the results are given for Jensen’s Alpha. The results confirm that the performance measure is unreliable when used for analyzing series that are heteroskedastic and serially correlated. The result that an increase in the sample size leads to a decrease in the power of the test is a first indication of the non-validity of the test. Besides that, the test experiences a high data-snooping bias with


Econometrics

both sample sizes. On average around 16 to 17 percent of the unskilled managers are identified as skilled, which is not a very reliable result when testing with a 5% significance level!

Mean Excess Return

The results of the different testing procedures of the Mean Excess Return are shown in Table 2. The results of the simple t-test show its unreliability: the power of the test goes down from 35.3% to 34% when the sample size increases from 72 to 144, which indicates serious problems. Though the percentages of the simple t-test in themselves seem reasonable, it is an invalid test procedure that cannot be trusted on this kind of data.

The Stationary Bootstrap test does not show strange behavior. The power is reasonable and it seems that the test is quite conservative with a data-snooping bias of 2.1% and 1.1% respectively for the small and large sample.

The Summation Method that follows the RC test does show strange behavior: when the sample size doubles, the data-snooping bias increases significantly, from 9.2% to 19.2%. Another result of the Summation Method is a high value of the data-snooping bias in combination with a high power, for the RC Test in the large sample and for the SPA Test in both samples. These results are due to the fact that the tests look at combinations of managers. In this context it can be concluded that the aggregate of managers in the combination is skilled, while not every manager in the combination is skilled. In other words, the results of the unskilled managers can be compensated by the skilled managers in the combination. So in this set-up it is false to conclude that all managers in the combination are skilled when the null hypothesis is rejected; doing so is likely to lead to a very high acceptance of unskilled managers as skilled. This is confirmed by the results from the simulation. The Order Method also shows unreliable behavior. For the RC Test the data-snooping bias goes up when the sample size increases, in this case from 1.2% to 2.4%. Another observation is that the values of the data-snooping bias when the Order Test follows the SPA method are relatively high in both samples, 7.7% and 6.2%, as opposed to the much lower data-snooping bias when the RC method is followed. A last result is that the test does not reduce the data-snooping bias; it is higher than the data-snooping bias of the Ordinary Stationary Bootstrap Test.

The Removal Method shows the smallest data-snooping bias, both for the RC and the SPA method. The Hansen method clearly shows a significantly higher rate of correctly rejected null hypotheses, while the rate of type 1 errors is not significantly higher. This result is in line with Hansen’s findings. Clearly the Removal Test succeeds best in reducing the data-snooping bias. Unfortunately, the test has low power.

Information Ratio

The results of the simulations of the Information Ratio are given in Table 3. The outcomes are more or less the same as for the Mean Excess Return, and the same problems and issues apply. The main difference lies in the results of the RC method: the tests that follow the RC method have significantly higher power than when the Mean Excess Return is tested.

Table 1. Testing results of Jensen’s Alpha

Jensen’s Alpha    Power     DS Bias
72 Months         54.63%    17.41%
144 Months        52.51%    16.17%

Table 2. Testing results of the Mean Excess Return

Mean Excess Return            72 Months           144 Months
                              Power    DS Bias    Power    DS Bias
Simple t-test                 35.3%    1.9%       34.1%    1.1%
Stationary Bootstrap Test     30.6%    2.1%       33.7%    1.1%
White RC statistics:
  Summation Test              19.6%    9.2%       44.9%    19.1%
  Order Test                  7.6%     1.2%       20.8%    2.4%
  Removal Test                0.7%     0.0%       2.0%     0.0%
Hansen SPA statistics:
  Summation Test              71.8%    20.4%      77.5%    17.5%
  Order Test                  45.4%    7.7%       50.9%    6.2%
  Removal Test                10.0%    0.1%       11.1%    0.0%


Econometrics

Sharpe Criterion

The results for the Sharpe Criterion are given in Table 4. Again, the results indicate that the simple t-test is not very reliable. In both samples the number of managers correctly identified as skilled is low, especially when compared to the number of unskilled managers identified as skilled.

When looking at the other procedures, all using some form of the bootstrap method, the results also indicate problems. The high data-snooping bias in the large samples compared to that in the smaller samples is worrisome and indicates that the tests experience serious problems. The problems lie in the stationarity requirements that must hold for the stationary bootstrap method to be consistent. Further investigation, using the Augmented Dickey-Fuller Test to look for unit roots in the data, pointed out that the risk-free rate is not stationary. The results imply that this has a severe influence on the reliability of the testing methods. Despite this conclusion, the Sharpe Criterion is an important and often used statistic in finding skill.

For this reason it still seems useful to identify the best performing test method.

According to the results in Table 4, the method that performs best is the Removal Test, which again performs relatively well when comparing the power/data-snooping bias proportion to those of the other tests. It must be noted that the size of the sample seems to have a large effect on the power of the test. The results for the Removal Test again show that the Hansen method has superior power over the White approach.

Testing of real world data

To make inference on the question whether there is evidence of skill, the performance measures and test procedures are used in an empirical study. The data set tested consists of 108 active investment products with a track record of 144 months, benchmarked against the S&P 500. The test methods used are the Ordinary Stationary Bootstrap Test and the Removal Test following the SPA method, as they performed most favorably in the simulations.

Table 3. Testing results of the Information Ratio

Information Ratio              72 Months         144 Months
                               Power   DS Bias   Power   DS Bias
Stationary Bootstrap Test      29.2%   1.9%      32.9%   1.0%
White RC Statistics
  Summation Test               71.3%   31.6%     80.0%   33.8%
  Order Test                   31.7%   3.4%      38.4%   3.0%
  Removal Test                 6.5%    0.0%      8.8%    0.0%
Hansen SPA Statistics
  Summation Test               70.8%   20.1%     77.1%   17.4%
  Order Test                   44.2%   7.3%      50.6%   6.2%
  Removal Test                 8.3%    0.1%      10.2%   0.0%

Table 4. Testing results of the Sharpe Ratio

Sharpe Ratio                   72 Months         144 Months
                               Power   DS Bias   Power   DS Bias
Simple t-test                  8.5%    2.2%      15.3%   4.0%
Stationary Bootstrap Test      21.4%   2.6%      46.1%   11.0%
White RC Statistics
  Summation Test               1.2%    0.3%      84.3%   67.0%
  Order Test                   0.4%    0.1%      54.1%   22.6%
  Removal Test                 0.2%    0.1%      5.2%    1.1%
Hansen SPA Statistics
  Summation Test               11.9%   2.5%      79.2%   42.0%
  Order Test                   4.6%    0.3%      67.9%   29.5%
  Removal Test                 1.8%    0.0%      15.2%   1.0%


AENORM vol. 19 (71) May 2011 9

Econometrics

The dataset is tested in three ways. First, the whole dataset of 144 observations, covering two market cycles, is tested. To see whether the skill of managers depends on up-markets or down-markets, the complete set is then divided into two subsets. The up-market periods run from July 1997 to August 2000 and from September 2002 to October 2007; this set consists of 99 observations. The down-market periods run from September 2000 to August 2002 and from October 2007 to June 2009; this set consists of 45 observations.

Results for two market cycles

According to the results, there is evidence of skill. The behavior of the tests in the simulation indicates that it is very reasonable to assume that the two managers identified by the Removal Test on the Information Ratio do have skill. It is also not unlikely that more skilled managers are present: the Removal Test has low power at the 5% significance level used, and the stationary bootstrap method was shown to be conservative when testing this kind of return data.

The same applies to skill in achieving a higher return per unit of risk; here too the results indicate evidence of this kind of skill. The low Type I error rate of the Removal Test in the simulation makes it reasonable to assume that the six managers found do have skill. Again, the low power of the Removal Test and the fact that the stationary bootstrap method identifies 33 managers make it very likely that more managers with this kind of skill are present in the data set.

Results for up-markets

The results are different from the tests on two market cycles. No evidence of any sort of skill can be found with the Removal Test. The stationary bootstrap does indicate that there is skill. It must be noted that this data set of 99 observations is smaller than the data set covering two market cycles, with 144 observations, which makes the evidence weaker. Also, the fact that only the stationary bootstrap method identifies skill makes the evidence less reliable than in the previous test on the large dataset.

Results for down-markets

The dataset for the down-market consists of only 44 observations, a small set that makes inference less reliable than in the previous tests. Even taking this into account, the number of managers identified as skilled is large. Especially the number of rejections of no skill in terms of outperforming the benchmark is large, so it is unlikely that there is no skill present in the down-market. Both the Stationary Bootstrap Test and the Removal Test identify high numbers of skillful managers.

The Removal Test identifies 4 managers with skill in achieving a higher return per unit of risk taken, while the Stationary Bootstrap Test indicates that there are 51 managers with this skill. So statistical evidence of this kind of skill is also present here.

Conclusion

The simulation results indicate that there is no reason to prefer the Information Ratio over the Mean Excess Return when using the tests examined. The Sharpe Criterion is shown to be an unreliable measure to test with. In terms of testing methods, the simulations indicate that the ordinary stationary bootstrap does a reasonable job in testing the Mean Excess Return and the Information Ratio. When simplicity and high power matter more than correction for data-snooping, this method is very useful. When the risk of data-snooping bias is the main concern, the Removal Test following the SPA method is the best choice. The RC method is not preferred, as it has lower power than the SPA. When testing the Sharpe Criterion, the only useful test is the Removal Test following the SPA method; the other tests proved very unreliable.

The results from the empirical study indicate that there is evidence of successful active asset management. The tests identify performance series that show significant skill, both in terms of outperforming the benchmark in pure returns and in achieving higher returns per unit of risk.

Importantly, the evidence does not point to a large number of managers with skill. This, in combination with the simulation results showing that certain statistical tests offer a degree of reliability but are certainly not error-proof, leads to the conclusion that statistical methods are useful tools, but certainly not the holy grail for finding skilled asset managers.

Table 5. Results of the analysis of two market cycles

                     Stationary Bootstrap   Hansen Removal Test
Mean Excess Return   30                     0
Information Ratio    32                     2
Sharpe Criterion     33                     6

Table 6. The number of managers identified as skillful, up-market

                     Stationary Bootstrap   Hansen Removal Test
Mean Excess Return   20                     0
Information Ratio    26                     0
Sharpe Criterion     21                     0

Table 7. The number of managers identified as skillful, down-market

                     Stationary Bootstrap   Hansen Removal Test
Mean Excess Return   54                     15
Information Ratio    39                     14
Sharpe Criterion     51                     4

References

Horowitz, J.L. (2000). The Bootstrap. Department of Economics, University of Iowa.

Hansen, P. (2005). "A Test for Superior Predictive Ability." Journal of Business & Economic Statistics, 23, 365-380.

Goodwin, T.H. (1998). "The Information Ratio." Financial Analysts Journal, July/August 1998, 34-42.

White, H. (2000). "A Reality Check for Data Snooping." Econometrica, 68, 1097-1126.

Politis, D.N., and J.P. Romano (1994). "The Stationary Bootstrap." Journal of the American Statistical Association, 89, 1303-1313.

Politis, D.N., J.P. Romano and M. Wolf (1997). "Subsampling for heteroskedastic time series." Journal of Econometrics, 81, 281-317.


Operations Research and Management

Simulating a Social Housing Allocation Policy

by: Jeroen Buitendijk

In recent years, Dutch media have addressed the growing waiting times for households looking to rent social housing. Average waiting times have reached historic highs in the bigger cities across the country; in Amsterdam, for example, the average waiting time for a social accommodation has risen to eleven years1. This growth is the result of a shortage of supply. To allocate a vacant accommodation in a setting where demand exceeds supply, social housing allocation policies are used to keep the process transparent and systematic. But given the growing waiting times, the question arises whether the current policy is efficient. This article takes a closer look at the social housing situation in the Netherlands, and presents a simulation model which can be used to see how a change of policy influences the allocation process.

Jeroen Buitendijk (1987) obtained his master's degree in Operations Research at the University of Amsterdam in the summer of 2010. This article is based on his thesis, which he wrote under the supervision of Ir. J.A.M. Hontelez during an internship at RIGO Research en Advies BV, where he was supervised by S. Kromhout and S. Zeelenberg. He currently works as a junior researcher/advisor at RIGO Research en Advies BV.

1 Waiting times depend on the specific location within the city as well as the type of accommodation. The average waiting times range from 9 to 14 years, according to Woningnet on December 30, 2010 (http://www.woningnet.nl/slaagkans_result.asp).

2 Published by the city region of Amsterdam at http://www.stadsregioamsterdam.nl/@140825/vraag_naar_sociale/

Introduction

To understand a model that simulates an allocation policy for social housing, we first need a better understanding of how the social sector is organized. Where Dutch readers will probably have a global understanding of how the process works, international readers might be surprised by the system. Vacancies in the social housing stock mostly get advertised via some form of media, where registered households can apply for the specific accommodation; this is an uncommon way of allocation in Europe. Besides that, the size of the social housing sector varies widely between European countries: from no more than a few per cent in Hungary to more than a third of the total housing stock in the Netherlands (Kromhout and van Ham). We also notice differences in the target population of the social sector: where a country like Ireland uses the social sector specifically to accommodate the less fortunate in society, the Dutch sector also serves young people leaving their parents' house and older people moving into a smaller accommodation (Haffner and Hoekstra, 2004). All these elements make the allocation process in the Netherlands quite complex.

Because demand exceeds supply, we have arrived at a situation where almost every vacant accommodation receives more than one application. Recently published figures for Amsterdam2 show that on average 146 households applied for an advertised social accommodation in 2009. This is one of the more extreme figures in the country, but multiple applications per accommodation are the norm nationwide. With more than one household applying, the question arises which household deserves the vacant property the most. To keep the allocation process transparent and systematic, it is directed by a social housing allocation policy.

A social housing allocation policy

A social housing allocation policy consists of a set of criteria indicating which households are allowed to live in which accommodation, and includes rules about how households can apply for a vacant property. The rules about which households are allowed to live in which accommodation can be divided into four groups (Kromhout and van Ham):

1. Eligibility criteria: These criteria describe which households are eligible to apply for social housing.

2. Selection criteria: These criteria describe which households are entitled to which accommodations.

3. Ranking criteria: These criteria describe the order in which the households are offered the vacant dwelling they applied for.

4. Priority criteria: These criteria describe which households are eligible for special urgency statements, which give them a higher priority than 'regular' households.

When we look at these criteria, we note that the policy could do without all but the ranking criteria: a system in which everyone is eligible and no urgency statements are given would still produce multiple applications in the current housing situation. The ranking criteria can in theory take a wide variety of forms, but throughout the country most of them are straightforward: the household that has been waiting the longest is the first to be offered the property (absent any applications from priority households). A rule like this mimics a regular queue, where people entering take a place at the back. For young people this criterion has a downside, though: the growing waiting times, combined with the fact that they cannot register until they are eighteen (in most cases), give them little prospect of finding an accommodation early on.
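The ranking rule described above can be sketched in a few lines. A minimal illustration, where the field names and the way priority households are handled are illustrative assumptions, not taken from the article:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    household_id: int
    registration_years: float
    has_priority: bool  # holds an urgency statement

def rank(applicants):
    """Priority households first; within each group, longest registration first.
    (Field names and tie handling are illustrative assumptions.)"""
    return sorted(applicants, key=lambda a: (not a.has_priority, -a.registration_years))

apps = [
    Applicant(1, 4.0, False),
    Applicant(2, 11.0, False),
    Applicant(3, 0.5, True),
]
print([a.household_id for a in rank(apps)])  # → [3, 2, 1]
```

The priority household comes first despite its short registration; the remaining households follow in order of waiting time, mimicking the regular queue.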

From social housing to a simulation model

So far we have looked at the social housing sector and described what a social housing allocation policy is. We have seen that the current situation is one of long waiting times and a high number of applicants per vacant accommodation. While an allocation policy will not solve the problem of a market where demand exceeds supply, policy makers are asking whether a change in policy can be part of the answer. To assist policy makers in answering these questions we developed a simulation model. This model can be used to see how a change in policy would influence the allocation process, something which until now could only be done by setting out an experiment and evaluating it afterwards. In this section we take a closer look at the process from a queuing theory point of view to see how the allocation process can be modelled.

An interesting point is the current waiting list situation. The number of registered households has grown in the same fashion as the waiting times, with for example 184,000 registered households in the region of Utrecht on the first of January 2009. Considering that in the first half of 2009 a total of about 2,600 accommodations became vacant, it would take around 70 years to serve all these registered households.

However, we notice that 86% of all registered households did not apply for any of the accommodations that became vacant in the first half of 2009. It could be that this group could not find an accommodation that met their expectations, but we know that there is a group that registered just in case they might want to apply for a social accommodation in the future (a consequence of a ranking criterion based on length of registration). Translated to a queue at the bakery, this would mean that people are standing in line because they might want to buy a loaf of bread in a couple of days.

So from a queuing theory point of view the social housing allocation situation has some unique features, which complicate the use of mathematical analysis; simulation techniques, however, can be used to develop a model that captures these features of the allocation process. In the remainder of this article we describe the simulation model that we developed and a case where the model was used.

Model

Figure 1. Schematic representation of the housing allocation process.

To test a social housing allocation policy by simulating the process, we need a model that simulates a group of households looking to rent a social accommodation and a group of social accommodations, as well as a way to bring the two together. In Figure 1 a schematic representation of the housing allocation process is given, which we used as the basis for the development of the model. In the representation we recognize the four types of criteria discussed before, as well as two separate flows:

• Accommodation: An accommodation belongs to the occupied housing stock until the current tenant leaves. At that moment the accommodation becomes vacant and the application process starts. Once the application process has been executed, the accommodation is offered to households that submitted an application, and it returns to the occupied housing stock when a household accepts the offer.

• Households: A household has to register before being able to apply for vacant accommodations (meeting both the eligibility criteria and the selection criteria for the accommodation). After a successful application the household gets a place in the ranking, which may lead to an offer and acceptance. In case of either no offer or a rejection of the offer, the household remains registered. When the household accepts the offer it leaves the system (the household deregisters).

To represent the group of registered households and the occupied housing stock as realistically as possible, we use a registration database containing all households that have registered to apply for social housing, and a database containing all properties registered as social housing. For the model we take the size and composition of the group of registered households as the state of the model. A change in the state can then occur in three different ways:

1. The registration of a new household.

2. The deregistration of a household that has not found a social accommodation.

3. The acceptance of a social accommodation by a household, which leads to the deregistration of that household.

For the model we set the possible events to the three changes of state above. The first two events require simple changes, where the size of the registration either increases or decreases by one. We model the arrival and departure of a household as a Poisson process, where the time between two events is exponentially distributed with a parameter based on the historic (de)registration process.
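A minimal sketch of this event mechanism, with competing exponential clocks for registrations and deregistrations. The rates are assumed values for illustration, not estimates from the actual registration data:

```python
import random

def next_event_time(now, rate_per_month):
    """Time of the next arrival in a Poisson process: the interarrival
    time is exponentially distributed with mean 1 / rate."""
    return now + random.expovariate(rate_per_month)

random.seed(42)
# Illustrative rates "estimated from a historic process" (assumed values).
registration_rate = 120.0     # new registrations per month
deregistration_rate = 15.0    # departures without an accommodation, per month

t = 0.0
events = []
for _ in range(5):
    # By memorylessness, the next event is the minimum of two exponential clocks.
    t_reg = next_event_time(t, registration_rate)
    t_dereg = next_event_time(t, deregistration_rate)
    kind, t = ("registration", t_reg) if t_reg < t_dereg else ("deregistration", t_dereg)
    events.append((round(t, 4), kind))
print(events)
```

Each event then moves the state (the size of the registration) up or down by one, exactly as described above.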

The last event requires the most administration, which can be described in four steps (following the scheme in Figure 1):

1. Simulate the characteristics of the accommodation that becomes vacant.

2. Simulate a list of households that apply for this accommodation.

3. Order the list of households according to the ranking criteria.

4. Simulate the acceptance process of the interested households.

To simulate the moment when a vacant accommodation is advertised, we use a similar Poisson process as for the other two events, again with a parameter based on a historic process. The characteristics of the advertised accommodation depend on the composition of the total group of social accommodations as well as the speed of rotation of tenants in the specific types of accommodation. To simulate which households apply for the advertised accommodation, we use the historic process to estimate the probability that a household of type x applies for an accommodation of type y.

The output of the model is a matching between households and social accommodations. By simulating different kinds of policies, for example different ranking criteria, the matching changes. The model thus gives us insight into how the policy influences the matching and helps policy makers choose the policy they want to implement. In the next section we discuss a case where the model supported such a decision process.
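Putting the pieces together, one allocation event can be sketched as below. Everything here is an illustrative assumption rather than the article's implementation: the type labels, the application probabilities P(type x applies to type y), and the flat acceptance probability:

```python
import random

random.seed(7)

# Hypothetical application probabilities P(household type applies to
# accommodation type); in the real model these come from historic data.
APPLY_PROB = {
    ("young", "apartment"): 0.30, ("young", "family_home"): 0.05,
    ("family", "apartment"): 0.10, ("family", "family_home"): 0.25,
}

def allocation_event(registered, accommodation_type):
    """One allocation event: draw applicants, rank by registration length,
    offer until someone accepts (acceptance probability assumed 0.5)."""
    applicants = [h for h in registered
                  if random.random() < APPLY_PROB[(h["type"], accommodation_type)]]
    applicants.sort(key=lambda h: -h["registration_years"])   # ranking criterion
    for household in applicants:
        if random.random() < 0.5:          # assumed acceptance probability
            registered.remove(household)    # accepting household deregisters
            return household
    return None  # no offer accepted; the accommodation would be re-advertised

households = [{"id": i, "type": random.choice(["young", "family"]),
               "registration_years": random.uniform(0, 12)} for i in range(50)]
winner = allocation_event(households, "apartment")
print(winner)
```

Running many such events against a simulated registration produces the matching between households and accommodations that forms the model's output.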

Case: young households in Utrecht

With the growing waiting times mentioned in the introduction, the councillors in Utrecht were worried about the success rate of younger households in the region. Under the current policy, registration was not allowed before the age of eighteen and the ranking was based on length of registration. In addition, after accepting an accommodation households had to re-register, meaning that their length of registration was reset to zero. This could lead to the hypothetical case where, after waiting for five years, a household accepts its first social accommodation at the age of 23 and as a result has to wait another 5+ years before it can take the step to a next social accommodation. Considering that at that stage of a person's life the situation of a household can change drastically, the policy makers wondered whether the current policy was efficient enough, since it prevents these households from adapting their housing situation to their needs within a short period of time.

To see whether the policy could be adjusted so that the 5+ years the younger household has to wait in the hypothetical situation could be shortened, the policy makers considered adjusting the re-registration process. Instead of resetting the length of registration to zero, the idea was to set the length of registration for younger households (younger is defined as age < 30) back to n years, where n could either be a fixed number from 1 to 5 or a number depending on the age of the household.

In Figure 2 we see the distribution of the accommodations by age in case the young households re-register with a length of zero, three or five years of registration. The simulation shows that changing the policy along the lines of the idea presented above increases the percentage of younger households in the matching. We also see that this does not affect the number of households older than 65 in the matching, as a result of differences in desired accommodations and in the length of registration of these households.

While the growth in the number of younger households in the matching is to be expected from this change of policy, the size of the growth is hard to predict, and this is where the outcome of the model becomes valuable in evaluating housing allocation policies. Furthermore, the model also gives us information about changes in the matching along dimensions other than age, for example low-income households or high-rent accommodations. For a further description of the effect of the change of policy in Utrecht we refer to the original report by Kromhout, Burger and Buitendijk (2010).

Model evaluation

Even though the model has shown its value in real-life cases (like the one described above), its use brought two possible areas of further development to light: consumer behaviour and the chain of movement.

The first area concerns examining which factors determine whether a household is interested in a specific accommodation. The accuracy of the model depends on how well the probability that a household of type x applies for an accommodation of type y represents reality and, as a consequence, how realistic the simulated list of applicants is compared to a real-life situation. Currently the available data limits the level of detail we can include in characterizing the households and the accommodations, which limits the level of detail we can use in the matching and could leave relevant factors out of the equation. Besides the factors that influence the preferences of households, we can also take this consumer behaviour analysis one step further: it is possible that, as a result of a changing policy, households will change the way they apply for social accommodations.

The second area can be described as the chain of movement, which refers to the fact that some accommodations only become vacant because the current tenant moves into another social accommodation. In the model we use a historic process to determine the parameter of the Poisson process that gives us the frequency of newly advertised accommodations. But as a result of the chain of movement this frequency depends on the policy, which means that by changing the policy the parameter we use for the Poisson process could misrepresent reality. A solution would be to exclude the accommodations that become vacant because a household moves into another social accommodation when calculating this parameter, but the current database does not record why an accommodation becomes vacant.

Conclusion

This article (as well as the original thesis) brought simulation techniques into the world of social housing allocation policies. Conclusions can therefore be drawn from two perspectives.

From the social housing point of view, the model enables policy makers to gain insight into the consequences of a policy change before actually implementing the new policy. Where before policy makers had to experiment with a new policy and evaluate afterwards whether the change led to a better situation, the model shows beforehand what the new situation will look like. The model can therefore be used to substantiate the implementation of a different policy.

From the simulation point of view, the model proves that these kinds of techniques are applicable in fields where they were not used before. There will always be some uncertainty about how well the model represents reality, but with this possible misrepresentation in mind the techniques can prove their value in a wide range of policy-making processes.

References

Kromhout, S. and M. van Ham. "Allocating Social Housing." International Encyclopaedia of Housing and Home.

Haffner, M.E.A. and J.S.C.M. Hoekstra. Woonruimteverdeling in Europese context. Delft University of Technology, Onderzoeksinstituut OTB, 3 Sept. 2004. Web. 5 Feb. 2010.

Kromhout, S., P. Burger and J. Buitendijk. Effectverkenning behoud inschrijfduur. RIGO Research en Advies BV, working paper, 2010.

Figure 2. Number of households in allocation matching grouped by age, n equals the length of registration at the moment of re-registration.


The WWB: Route to Work or Exile in Social Security?

by: Reinier Joustra

For a fictitious socially oriented economist who left our planet two generations ago, returning to earth is a surprising experience. At the time of his departure, institutions concerning dismissal protection and unemployment benefits were unknown. How different is the world he returns to! Western labor markets are now characterized by obese systems protecting employees against (the consequences of) unemployment. Put briefly, welfare states developed in his absence.

Reinier Joustra (26) started his pursuit of econometric wisdom at the UvA in 2006; before that, he obtained his bachelor's degree in economics at the University of Utrecht. He obtained his master's degree in econometrics last May. During his chairmanship of the Econometric Game committee, he recognized the rewards of applying theoretically challenging econometric methods to socially relevant problems. Inspired by this, his master's thesis quantifies the effect of an important social security policy reform in the Netherlands.

Introduction

Focusing on the Netherlands, a similar development took place. Although the process of obtainment was initiated fashionably late, the Dutch have developed a vast system of social security. While the added value of the welfare state was never at stake, the increased uptake of social security, welfare assistance in particular, during the second half of the twentieth century induced criticism about how to organize and finance it. The result is a paradigm change in social security: rather than adopting a view based on the principles of rights and duties, social security policy is to be based on incentives and disincentives. In particular, Dutch welfare assistance policy was drastically altered based on this paradigm change, the most notable alteration being the introduction of the Wet Werk en Bijstand (WWB).

The WWB aims at a drastic reduction of the uptake of welfare assistance by changing the incentive structure (i.e. employing the incentive-disincentive paradigm) of the executor of Dutch welfare assistance policy: the municipalities. Although libraries can be filled with literature on the paradigm change in social security and how it relates to concrete policy changes, contributions that quantify the effects of these policy changes are rare. This paper quantifies the effect of the WWB introduction, hence the introduction of the incentive-disincentive structure in Dutch welfare assistance policy1.

The remainder of this paper is organized as follows. The first section discusses the properties of the process determining whether an individual requires welfare assistance, and translates this into a dynamic, nonlinear panel data model setup. The second section exploits this setup in order to provide detailed insight into the effect of the WWB introduction. The final section is dedicated to the main results.

1 The analysis is based on an extensive panel data set, property of the Centraal Bureau voor Statistiek (CBS).

Modeling the Requirement of Welfare Assistance

Although it is plausible that the requirement of welfare assistance is determined by numerous individual and economic characteristics, one only observes whether or not individuals require it. Hence, there must be a nonlinear transformation relating the set of explanatory factors to the observed binary variable. Focusing on the set of explanatory factors, it is conceivable that lagged welfare assistance dependence is of significant influence. Hence, a dynamic model specification is required. Combining the available data with the aforementioned properties of the process determining welfare assistance dependence yields the need for a dynamic, nonlinear panel data model.

Before focusing on the model itself, there are two issues particularly relevant for empirical work based on dynamic, nonlinear panel data models that are worth discussing here: (i) state dependence; and (ii) the initial condition problem. The notion of state dependence goes back to Heckman (1978a). He observed that individuals are persistent in their behavior, and distinguished between two fundamentally different explanations: (i) experiencing a certain event changes preferences in favor of that event; and (ii) individuals differ in their propensity to experience an event. The former is known as true state dependence; the latter is labeled spurious state dependence. Determining whether a process is subject to true or spurious state dependence is of significant interest. When policy reforms are capable of changing individual behavior, true state dependence implies that individuals are likely to persist in this behavior. However, small perturbations to the system may have long-lasting, potentially harmful effects (Chay and Hislop, 2000). With respect to the initial condition problem, note that it would be a sheer coincidence if the beginning of the stochastic process determining a dependent variable coincided with the start of the panel data set. The dynamic model specification implies that (the distribution of) every observation depends on (the distribution of) its predecessor. Neglecting the unobserved information implicitly determining the first observed value then yields inefficient estimates. Where traditional approaches to cope with the initial condition problem are hard to implement empirically, the renewed interest in dynamic, nonlinear panel data specifications has led to solutions that are relatively easy to implement (e.g. Honoré (1993), Kyriazidou (1997a) and Honoré and Kyriazidou (2000)). Wooldridge (2005) – the most recent and elegant solution to the initial condition problem – is the solution employed in this paper.

Consider now the model for welfare assistance dependence. To start with, consider:

\[ y_{it}^{*} = \rho\, y_{i,t-1} + x_{it}'\beta + c_i + \varepsilon_{it}, \tag{1} \]

where: (i) $i = 1, \dots, N$ is the index for individuals, distinguishing between the individuals; and (ii) $t = 1, \dots, T$ is the index for the number of time periods an individual is observed, ordering the observations of individual $i$. Importantly, $y_{it}^{*}$ is a latent (i.e. unobserved) variable; a nonlinear transformation of $y_{it}^{*}$ is observed:

\[ y_{it} = \mathbb{1}\{ y_{it}^{*} > 0 \}. \tag{2} \]

Notice that the dynamics in (1) operate through the observed $y_{i,t-1}$ and not through the latent $y_{i,t-1}^{*}$. With respect to (1) and (2), assume that: (i) $c_i$ is an individual-specific random effect, allowing otherwise identical individuals to have different treatment paths; (ii) $x_{it}$ is a $K$-vector containing the values of both time variant and time invariant variables for individual $i$ in period $t$; and (iii) $\varepsilon_{it}$ is an idiosyncratic, standard logistically distributed disturbance term. Wooldridge (2005) dismisses the regular assumption that $c_i$ is standard normally distributed, and assumes that

\[ c_i = \alpha_0 + \alpha_1 y_{i0} + z_i'\alpha_2 + a_i, \tag{3} \]

where: (i) $z_i$ is a $J$-vector containing all time invariant covariates; and (ii) the $a_i$ are identically, independently normally distributed with mean zero and variance $\sigma_a^2$, conditional on $y_{i0}$ and $z_i$. Substituting (3) in (1) yields a renewed latent equation:

\[ y_{it}^{*} = \rho\, y_{i,t-1} + x_{it}'\beta + \alpha_0 + \alpha_1 y_{i0} + z_i'\alpha_2 + a_i + \varepsilon_{it}. \tag{4} \]

Notice that this specification prohibits identification of the time invariant covariates contained in $x_{it}$, due to their inclusion in $z_i$. Exploiting the distributional properties of $\varepsilon_{it}$, it is easily derived that:

\[ P(y_{it} = 1 \mid y_{i,t-1}, x_{it}, y_{i0}, z_i, a_i) = \Lambda(\rho\, y_{i,t-1} + x_{it}'\beta + \alpha_0 + \alpha_1 y_{i0} + z_i'\alpha_2 + a_i), \tag{5} \]

with $\Lambda(u) = e^{u}/(1 + e^{u})$ the standard logistic distribution function, and:

\[ P(y_{it} = 0 \mid y_{i,t-1}, x_{it}, y_{i0}, z_i, a_i) = 1 - \Lambda(\rho\, y_{i,t-1} + x_{it}'\beta + \alpha_0 + \alpha_1 y_{i0} + z_i'\alpha_2 + a_i). \tag{6} \]

Note that these probabilities sum to one. To obtain the maximum likelihood estimator, consider the joint distribution of $(y_{i1}, \dots, y_{iT})$. For notational purposes denote: (i) the joint distribution of $(y_{i1}, \dots, y_{iT})$ as $f$; (ii) the set of parameters consisting of $\rho$, $\alpha_0$, $\alpha_1$, $\alpha_2$ and $\beta$ by $\theta$; and (iii) $p_{it} = \Lambda(\rho\, y_{i,t-1} + x_{it}'\beta + \alpha_0 + \alpha_1 y_{i0} + z_i'\alpha_2 + a_i)$. Moreover, define the filtration $\mathcal{F}_{i,t-1}$ as the information set consisting of $y_{i,t-1}, \dots, y_{i0}$, $x_{it}$ and $z_i$. The joint distribution of $(y_{i1}, \dots, y_{iT})$ is then given by:

\[ f(y_{i1}, \dots, y_{iT} \mid \mathcal{F}, a_i; \theta) = \prod_{t=1}^{T} p_{it}^{\,y_{it}} (1 - p_{it})^{1 - y_{it}}. \tag{7} \]

The maximum likelihood estimator based on the conditional distribution function expressed in (7) can be shown to be $\sqrt{N}$-consistent under some mild regularity conditions (see Wooldridge (2005)). Although algebraically straightforward, there is still a computational issue in (7) that is to be dealt with: the joint distribution depends on the stochastic $a_i$. Integrating these out by exploiting $a_i \sim N(0, \sigma_a^2)$ yields2:

\[ f(y_{i1}, \dots, y_{iT} \mid \mathcal{F}; \theta, \sigma_a) = \int f(y_{i1}, \dots, y_{iT} \mid \mathcal{F}, a; \theta)\, \frac{1}{\sigma_a}\, \varphi\!\left(\frac{a}{\sigma_a}\right) da. \tag{8} \]

Note the difference between the left-hand side and the distribution function within the integral: they are related though completely different. The parameter set associated with the left-hand side equals the set $\{\theta, \sigma_a\}$. The integral on the right-hand side of (8) generally does not

2 Of course, there are several alternative strategies. See Davidson and MacKinnon (2004).


20 AENORM vol. 19 (71) May 2011

Econometrics

have a closed form expression. Employing the principle of Gaussian quadrature to obtain an approximation (without elaborating on its properties, but denoting the approximated joint distribution by f̃) yields the following maximum likelihood estimator:

(θ̂, σ̂_a) = argmax_{θ, σ_a} ∑_{i=1}^{N} log f̃(y_{i1}, ..., y_{iT} | y_{i0}, x_i, z_i; θ, σ_a).    (9)

Recall that the estimator can be shown to be √N-consistent3. The maximum likelihood estimator presented in (9) is used to quantify the effects of the WWB introduction.
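The estimation step in (8)-(9) can be made concrete with a short numerical sketch. The code below is only an illustration under assumptions: the parameter layout, the variable names and the use of Gauss-Hermite quadrature with 15 nodes are choices of the sketch, not taken from the paper.

```python
import numpy as np

def logistic(u):
    # Standard logistic distribution function Lambda(u)
    return 1.0 / (1.0 + np.exp(-u))

def loglik_wooldridge(params, y, y0, X, Z, n_nodes=15):
    """Approximate log-likelihood of the dynamic random-effects logit:
    the random effect a_i ~ N(0, sigma_a^2) is integrated out with
    Gauss-Hermite quadrature, in the spirit of equations (8)-(9)."""
    N, T = y.shape
    K, L = X.shape[2], Z.shape[1]
    gamma = params[0]                                  # state dependence
    beta = params[1:1 + K]                             # covariate effects
    alpha0, alpha1 = params[1 + K], params[2 + K]      # Wooldridge device
    alpha2 = params[3 + K:3 + K + L]
    sigma = np.exp(params[3 + K + L])                  # keeps sigma_a > 0
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)

    total = 0.0
    for i in range(N):
        ylag = np.concatenate(([y0[i]], y[i, :-1]))    # y_{i,t-1}
        # linear index w_it excluding the random effect a_i
        w = (gamma * ylag + X[i] @ beta
             + alpha0 + alpha1 * y0[i] + Z[i] @ alpha2)
        lik_i = 0.0
        for u, wt in zip(nodes, weights):
            a = np.sqrt(2.0) * sigma * u               # change of variables
            p = logistic(w + a)
            lik_i += wt * np.prod(np.where(y[i] == 1, p, 1.0 - p))
        total += np.log(lik_i / np.sqrt(np.pi))
    return total
```

Maximizing this function over params (e.g. by minimizing its negative with a generic optimizer) yields a simulated counterpart of the estimator in (9).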

Quantifying the effects of the WWB introduction

The WWB introduction is a direct extension of the incentive-disincentive paradigm mentioned above. It intends to decrease the number of individuals requiring welfare assistance by changing the incentive structure of the executors of Dutch welfare assistance policy: the municipalities. Before the WWB introduction, the size of the welfare assistance budgets awarded to municipalities depended on historical expenditures on welfare assistance. This construction implicitly neglects municipality characteristics, and changes in those characteristics in particular. Moreover, municipalities could declare possible budget shortages. This construction lacked an incentive to strive for a minimal uptake of welfare assistance. The WWB introduction affected both the allocation of resources (i.e. budget sizes) and municipality responsibilities (i.e. budget responsibility). Welfare assistance budgets now consist of (i) an income component – the I-share; and (ii) a work component – the W-share. The I-share is to be used for monetary welfare assistance, while the W-share is meant for financing reintegration trajectories. The size of the I-share is (partially) determined by an objective distribution model that relates municipality characteristics to budget size. The renewed allocation of resources is evidently more efficient. The WWB also obliges municipalities to bear policy responsibility: potential shortages in the I-share are to be financed from municipal funds, while potential surpluses can be spent as desired. The restructured incentive scheme – which corresponds with the incentive-disincentive paradigm – is clearly to be preferred over the original scheme. Importantly, the introduction was staged, as illustrated by Table 1.

To quantify the effect of the WWB introduction, consider first the possibility of estimating the model elaborated on in the previous section, including relevant individual and economic variables. As far as the WWB is concerned, a variable is included that increases with the extent to which the WWB is introduced. Table 2 presents the estimation results of two specifications. Specification (i) includes only individual characteristics (besides the basic variables), while specification (ii) also includes relevant economic characteristics.

Table 2 yields several noteworthy results. First, the coefficient estimates for the constant and the lagged dependent variable are large in absolute size, highly significant and have opposite signs, for both specifications. This implies that there is a relatively large threshold preventing the self-supportive from welfare assistance dependence, relative to the threshold separating the welfare assistance dependent from self-supportiveness. Second, most coefficients of the covariates are significant and of the expected sign. Third, the WWB variable (measuring the extent to which the WWB is introduced) has a negative sign and is highly significant, for both specifications. This suggests that, ceteris paribus, the WWB introduction is negatively related to the probability that an arbitrary individual requires welfare assistance at an arbitrary point in time.

Now suppose a hypothetical panel data set is created, in which the WWB is not introduced (i.e. the WWB variable is set to zero). Note that the estimated coefficients, the initial observation and the covariates allow one to replicate sequences of welfare assistance dependence under this assumption. Comparing the properties of these sequences with the properties of the sequences that are truly observed may yield interesting results. There are at least three properties of arbitrary sequences of welfare


3 Although √N-consistent, little is known about the small-sample performance of the estimator. For the sake of reliability, a Monte Carlo setup was constructed to evaluate the performance of the estimator for: (i) different sample sizes; and (ii) different sets of included variables. The results verify the applicability of the estimator.

Table 1. Overview of staged WWB introduction (CBS).

Year  Municipalities (inhabitants)  Objective distribution model  Budget responsibility
2000  All                           0%                            10%
2001  All                           0%                            25%
2002  >60,000                       25%                           50%
      40,000-60,000                 25%                           0-50%
      <40,000                       25%                           0%
2003  >60,000                       25%                           100%
      40,000-60,000                 25%                           0-100%
      <40,000                       25%                           0%
2004  >60,000                       100%                          40%
      40,000-60,000                 100%                          0-40%
      <40,000                       100%                          0%
2005  >60,000                       100%                          73%
      40,000-60,000                 100%                          0-73%
      <40,000                       100%                          0%
2006  >60,000                       100%                          100%
      30,000-60,000                 100%                          0-100%
      <30,000                       0%                            10%


assistance dependence that are worth comparing: (i) the number of zero-to-one transitions; (ii) the number of one-to-zero transitions; and (iii) the average of y_{it}. Assume that the real world (i.e. the world in which the WWB is introduced and sequences of welfare assistance dependence are actually observed) is labeled O, while the hypothetical world (i.e. the world in which the WWB is never introduced and sequences of welfare assistance dependence are replicated) is labeled R. Table 3 presents the results.

The numbers in Table 3 are sample fractions4. The results are striking and intuitively appealing. First, the WWB introduction slightly decreased the number of zero-to-one transitions (i.e. transitions from work to welfare assistance). Second, the WWB introduction increased the number of one-to-zero transitions (i.e. transitions from welfare assistance to work). Third, on average, the WWB introduction decreased the number of times an individual requires welfare assistance. This analysis can also be performed for distinct sample subgroups5.
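The replication exercise and the comparison reported in Table 3 can be sketched as follows. This is an illustrative reconstruction, not the authors' code; it uses the labels O (observed) and R (replicated) from the table, and the linear index passed in is assumed to have been built beforehand (with the WWB covariate set to zero for world R).

```python
import numpy as np

def simulate_path(y0, w, gamma, rng):
    """Replicate one welfare-dependence sequence from the dynamic logit.
    w[t] is the linear index for period t excluding the lagged-dependent
    term (covariate effects, Wooldridge terms and a random-effect draw
    are assumed to be absorbed in w)."""
    y = np.empty(len(w), dtype=int)
    ylag = y0
    for t in range(len(w)):
        p = 1.0 / (1.0 + np.exp(-(gamma * ylag + w[t])))
        y[t] = int(rng.random() < p)   # draw y_it from the logit probability
        ylag = y[t]
    return y

def transitions(path, y0):
    """Count zero-to-one and one-to-zero transitions, with y0 prepended."""
    s = np.concatenate(([y0], path))
    up = int(np.sum((s[:-1] == 0) & (s[1:] == 1)))
    down = int(np.sum((s[:-1] == 1) & (s[1:] == 0)))
    return up, down

def world_fractions(stat_O, stat_R):
    """Sample fractions of individuals with O < R, O = R and O > R,
    the quantities reported in Table 3."""
    O, R = np.asarray(stat_O), np.asarray(stat_R)
    return float(np.mean(O < R)), float(np.mean(O == R)), float(np.mean(O > R))
```

Applying `transitions` to each individual's observed and replicated path and feeding the per-individual counts into `world_fractions` reproduces the structure of one row of Table 3.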

Conclusion

As an extension of the incentive-disincentive paradigm in Dutch social security, the WWB introduction decreased the uptake of welfare assistance in the Netherlands. It is shown that the WWB introduction has little effect on those who have a job, but significantly increases the likelihood of a transition from requiring welfare assistance to self-supportiveness. This illustrates that activating social policy can have the desired effect. In the end, the connection between the governmental paradigm change and the empirics is the following: the introduction of the incentive-disincentive paradigm decreased the uptake of welfare assistance in the Netherlands.

References

Chay, K.Y. and D. Hyslop (2000). “Identification and Estimation of Dynamic Binary Response Models: Empirical Evidence Using Alternative Approaches.” U. C. Berkeley Center for Labor Economics Working Paper No. 5.

Davidson, R. and J.G. MacKinnon (2004). Econometric Theory and Methods. Oxford: Oxford University Press.

Heckman, J. J. (1978a). "Simple Statistical Models for Discrete Panel Data Developed and Applied to Test the Hypothesis of True State Dependence against the Hypothesis of Spurious State Dependence." Annales de l'INSEE, 30-31, 227-269.

Honoré, B. E. (1993). "Orthogonality Conditions for Tobit Models with Fixed Effects and Lagged Dependent Variables." Journal of Econometrics, 59(1-2), 35-61.

Honoré, B. E. and E. Kyriazidou (2000). "Panel Data Discrete Choice Models with Lagged Dependent Variables." Econometrica, 68(4), 839-874.

Kyriazidou, E. (1997a). "Estimation of a Panel Data Sample Selection Model." Econometrica, 65(6), 1335-1364.

Wooldridge, J. M. (2005). "Simple Solutions to the Initial Conditions Problem in Dynamic Nonlinear Panel Data Models with Unobserved Heterogeneity." Journal of Applied Econometrics, 20, 39-54.

4 The numbers in Table 3 should thus be read as follows: e.g. the 0.2940 in the upper-left cell indicates that 29.40 percent of the sampled individuals have fewer zero-to-one transitions in world O than in world R.
5 It can be shown that the WWB particularly affects: (i) females; (ii) allochthonous citizens; (iii) singles; and (iv) parents with young children.

Table 2. Estimation results.

Dependent variable: y_{it}             Specification (i)    Specification (ii)
Constant                               -5.5450 (0.3137)     -7.5761 (0.5158)
Initial condition (y_{i0})              1.7915 (0.0933)      1.9270 (0.0800)
Gender                                  0.1940 (0.0452)      0.0767 (0.0636)
Nationality                             0.1945 (0.0462)      0.0639 (0.0685)
Partner                                -0.4113 (0.0544)     -0.4445 (0.0769)
Young child(ren)                        0.3323 (0.0518)      0.5498 (0.0807)
No children                             0.0951 (0.0427)      0.0893 (0.0643)
Age (logarithmic)                       0.7176 (0.0780)      0.9885 (0.1170)
Level of education (logarithmic)       -0.6323 (0.0552)     -0.6014 (0.0710)
Mutation number of jobs                 -                    0.3076 (0.1531)
Job fraction in catering and trading    -                    0.0325 (0.0244)
Degree of urbanization                  -                    0.3222 (0.0813)
West                                    -                   -0.1535 (0.0777)
Population                              -                    0.0026 (0.0018)
WWB                                    -0.5356 (0.0209)     -0.2920 (0.0410)
Lagged dependent variable (y_{i,t-1})   5.0823 (0.0343)      5.3176 (0.0534)
Sample size                             4,426                2,897
Average length time path (quarters)     30.24                22.5
Likelihood                             -22,798.54           -8,946.24

Table 3. Comparing observed and replicated sequences.

                          Specification (i)            Specification (ii)
                          O < R    O = R    O > R      O < R    O = R    O > R
Zero-to-one transitions   0.2940   0.5018   0.2042     0.2157   0.6020   0.1823
One-to-zero transitions   0.1184   0.1616   0.7200     0.0887   0.2489   0.6624
Average of y_{it}         0.5305   0.2458   0.2237     0.3977   0.3635   0.2389


Economics

Solution for Problems of the Euro Within Reach

by: Wim Boonstra

Wim Boonstra is Chief Economist of Rabobank Nederland, Utrecht, and President of the Monetary Commission of the European League for Economic Cooperation (ELEC). This article reflects his personal views, not necessarily those of Rabobank or ELEC. He also teaches Money and Banking at VU University, Amsterdam.

The euro is lurching from crisis to crisis and politics is struggling to find a solution. In April, European policy makers presented a range of measures to strengthen the euro and improve European governance. Although this package may be expected to be enough to hold the eurozone together and calm the markets for the time being, it does not tackle the fundamental problem: the Economic and Monetary Union (EMU) is only a half-way station. Monetary integration is completed, but political integration is not. Clearly, surging ahead towards further political integration, forming a European government with a substantial central budget, is completely unrealistic at the present time. But a simple, inexpensive and in the end self-financing solution is being overlooked.

Introduction

The political will to keep the euro intact is still exceptionally strong. That is good news. But the problem is that the solutions adopted are far from convincing. New issues of eurobonds by the European Financial Stability Facility (EFSF) and its successor, the European Stability Mechanism (ESM), only exacerbate the fragmentation of public bond markets within EMU. If investor sentiment in financial markets turns against an individual member state, markets are still able to push that country into acute liquidity shortages. And every recurrence of a problem gives rise to the same questions. Is support required, and if so, how much? And on what terms? All these recurring questions add, time and again, to the uncertainty concerning EMU's long-term viability. Basically, in spite of all the well-intended rescue efforts, the fundamental flaws in EMU's design have still not been eliminated. As a result, the euro is still in the danger zone.

EMU

The EMU has no central government and no central budget of any significance. If financial markets lose faith in the financial standing of an individual member state, they can still push up interest rates to such an extent that an acute liquidity crisis arises. Note that markets can be extremely volatile. During the first eight years of EMU, markets completely failed to differentiate between the public debt of financially strong and financially weaker countries (Figure 1). This is a very familiar pattern in the run-up to financial crises: financial markets often ignore financial fundamentals for a long time, but once sentiment turns, the reaction usually is extremely damaging. So far, however, policy makers have failed to learn lessons from this pattern.

Due to the fragmentation of EMU markets for government debt, any individual country facing financial problems immediately threatens the stability of the whole eurozone. However, a monetary union that aims to continue in existence perpetually must be capable of surviving the default of a government of one of its member states without falling into an acute existential crisis. The continued existence of the dollar was never called into question when New York was on the brink of bankruptcy in 1975, nor is it now that California is teetering on the same brink.

In times of recession and cutbacks especially, as in Greece and Ireland at present, people tend – often mistakenly, it should be noted – to entertain nostalgic memories of 'the past, when things were better'. National politicians can appeal to this in calling for an exit from the eurozone. As a consequence, every national financial problem in even the smallest member state immediately translates into fundamental questions about the long-term viability of the euro. This compels the EU time and again to come up with ad hoc bailout measures, which can in turn nurture anti-European sentiments in the stronger countries.

Consolidating the eurozone

EMU stabilisation is urgently required and can be attained through a combination of measures. The most straightforward and therefore most fundamental solution is the completion of the political integration process: the establishment of the United States of Europe. However, massive political support for such a step is far beyond the horizon. So we should aim for the second-best solution: central funding of all public deficits in EMU. What is needed first is a move towards financing all government deficits within the EMU via a newly established central agency. This will put paid to the fragmentation of the market for government loans within the EMU. This step is necessary but not sufficient. The central agency, referred to as the EMU Fund, raises the funds required on behalf of the EMU as a whole and assigns them to the individual member states. It does so while applying a surcharge mechanism and, if necessary, additional conditionality. This is the second step. The interest due from the member states on the funds they obtain from the EMU Fund will depend in part on the position of their government finances. The larger their government deficit and/or the higher their government debt, the higher the rate they will be charged. Slowly deteriorating public finances will be reflected in gradually increasing surcharges. Individual member states are, however, shielded from acute fluctuations in market sentiment. The third step is the strengthening of the Stability and Growth Pact (SGP), which must to that end be provided with effective, graduated sanctions. These should preferably not take the form of fines: political sanctions are more keenly felt by policy makers and easier to effectuate. The EMU Fund can also impose additional terms for lending (as is already being done) if a country's performance slips further.
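As a purely illustrative sketch of the surcharge mechanism described here: the article gives no numbers, so every threshold and rate below is a hypothetical assumption, loosely inspired by the SGP's 3% deficit and 60% debt reference values.

```python
# Hypothetical sketch of an EMU Fund surcharge rule: the lending rate
# rises with the deficit and debt ratios. All numbers are assumptions.

def emu_fund_rate(base_rate: float, deficit_gdp: float, debt_gdp: float) -> float:
    """Return a hypothetical lending rate (in %) the EMU Fund would
    charge a member state, given its deficit and debt as % of GDP."""
    surcharge = 0.0
    # Assumed reference values (3% deficit, 60% debt); every
    # percentage point above them adds an assumed number of basis points.
    surcharge += max(0.0, deficit_gdp - 3.0) * 0.10   # 10 bp per %-point deficit
    surcharge += max(0.0, debt_gdp - 60.0) * 0.02     # 2 bp per %-point debt
    return base_rate + surcharge
```

A country at or below both reference values pays no surcharge, mirroring the proposed treatment of Germany and France; deteriorating finances raise the rate gradually rather than abruptly.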

This combination of steps can be applied to stabilise the EMU without surrendering too much national sovereignty. At the same time, it offers several evident benefits for all countries. Everyone stands to benefit from the formation of a very large and liquid common bond market. The European Central Bank will no longer have to buy up bonds of individual member states, meaning that the present undesirable interweaving of monetary and budgetary policy can be ended. Weaker countries will be shielded from fluctuations in market sentiment. And strong countries will not find themselves facing an acute problem every time a weaker country flounders. Countries that are out of line will gradually be confronted with rising financing costs. Consequently, an extended period of free riding by weak countries on the creditworthiness of stronger member states, of the kind witnessed in the period 1999-2008, will no longer be possible. And if things nonetheless unexpectedly go wrong in a member state and its debt needs to be restructured, it will be clear that the EMU Fund will conduct the negotiations and what the rules of the game are. No more ad hoc crisis negotiations. And, as the EMU Fund will run a structural surplus in normal times, it will build up a huge reserve base. This means that if, in spite of the presence of a much improved SGP, a country nevertheless runs into

Figure 1. Increasing tensions in the eurozone.


problems, it has ample reserves to finance a rescue package. No more begging the German, Finnish or Dutch taxpayer to finance a bail-out.

As an added advantage of this construction, its terms will only have to be negotiated once, namely when the fund is put in place. The SGP agreements will also have to be tightened, but they have to be anyway. The technical aspects of the EMU Fund are very simple. The political hurdles to be overcome appear higher at first sight, as the benefits of the EMU Fund are abundantly clear for the financially weaker countries but initially less evident for the stronger ones. Germany in particular will need some convincing.

It is already the case today that the creditworthiness of the eurozone depends on that of its largest member states, i.e. Germany and France. That is why the most pragmatic way forward is to establish in advance that these two countries will, by definition, not have to pay a surcharge to the EMU Fund. Countries whose government finances are in better shape than those of Germany and France will pay no surcharge either. However, all financially strong countries benefit from the creation of a huge and extremely liquid market for eurobonds, which will result in lower funding costs. The financially weaker countries will have to pay a surcharge, accompanied by gradually tightening terms. This also means that as a rule the EMU Fund will operate at a profit, which can be added to the Fund's capital.

The simplest way of introducing the EMU Fund would therefore be for France and Germany to agree with each other to adopt central funding of their government deficits, in combination with a cross-guarantee. Other countries will then be given the opportunity to join them, subject to appropriate terms. The advantages of participation, and especially the drawbacks of non-participation, will be so evident that the rest may be expected to follow suit soon. That is what happened in the past when the European Monetary System was founded, and in essence it was no different in the run-up to the introduction of the euro.

Conclusion

In the press, many observers talk about the euro's 'debt crisis'. However, on average the public debt and deficit figures in the eurozone are better than those of the US, the UK and certainly Japan. The plain fact that a number of very small economies with debt problems are able to bring the whole eurozone into disarray proves that EMU does have a flaw in its design. Setting up an EMU Fund cannot solve all the problems of the eurozone. It does not absolve policy makers from their obligation to put their government finances and economies in order. But it can give countries time to put their affairs in order without the eurozone lurching from one crisis into the next. Above all, it will help to consolidate the euro for the future. All it takes is a little creativity and courage.



Actuarial Sciences

Introduction

In this article I discuss statements about young people and pension along the themes of interest, solidarity, risk and retirement age. First, I distilled statements from the literature and media, which I then tested in three ways:

1. An in-depth interview with the union FNV Young (FNV Jong);
2. An in-depth interview with six young people working in the care and health sector (the 'PGGM-youngsters');
3. An online survey at msn.nl.

Theme 1: Interest

The statements in literature and media are:

1. Young people have no interest in pension;
2. The way of communication is not attractive for young people;
3. Young people want more freedom of choice and decision-making authority.

Pension awareness in the Netherlands has been examined by the Social and Economic Council (Sociaal-Economische Raad, SER, 2008). It concludes that young people have the lowest interest in pension and do not worry about it: their pension date is still far away and the matter is too complicated. A cause of the low interest is the way of communicating with the participants. The Netherlands Authority for the Financial Markets (Autoriteit Financiële Markten, AFM) calls the information participants receive 'incomprehensible'2: 85% of the participants understand hardly anything of the information they receive. Since the written information is complex, participants have to read it attentively to understand it, but it is precisely at the point of reading that young people drop out (Veen and Jacobs, 2005). When the information is tough and pension is experienced as 'far away', it is understandable that, according to the AFM, more than 50% of young people do not read their Uniform Pension Overview (Uniform Pensioen Overzicht).

If this way of communication is not attractive to young people, which way is? Life, work and learning of young people are more and more digitized (Veen and Jacobs, 2005). Young people have a preference for images, because images have a 'functional and informative value'. They also want to work interactively with the visual information they receive; young people learn more from interactive working (Oblinger and Oblinger, 2005). This young generation, also called the instant generation, wants to receive information at the moment they ask for it (Veen and Jacobs, 2005).

In the present common pension contract there is a risk that young people and future generations bear too many risks and expenses for older generations. Because of that, the pension contract has to become future-proof again, so that older generations, young people and future generations will still want to participate in it. To meet this challenge, it is essential to know what young people require from a pension contract. We see all kinds of statements about young people and their pension in the literature and media, but is it all correct? What is the meaning of terms such as solidarity, risk and freedom of choice for young people in relation to pension?

by: Hans Staring

Young People and their Pension: Not Everything the Media and Literature Assume is Correct

Hans Staring
Hans Staring obtained his master's degree in Actuarial Sciences and Mathematical Finance at the University of Amsterdam in June 2010. This article is based on part of his master's thesis1 ('An optimal pension contract from the point of view of young people', written in Dutch), written at PGGM. Hans wrote his thesis under the guidance of drs. Jan Tamerus AAG (UvA and PGGM), drs. Mark Brussen AAG and ir. drs. Dick Boeijen (both PGGM), and he would like to thank them for their cooperation and for thinking along with him. Hans now works as an actuary at PGGM.

1 A Dutch version of this article is published in the January 2011 volume of Pensioen Magazine.
2 http://www.ipe.com/nederland/AFM_Pensioeninformatie_onbegrijpelijk_34079.php (in Dutch).


Another cause of the low interest is the lack of choices participants can make in the pension contract. Kortleve and Slager (2010) show that the modern consumer wants more freedom of choice: a pension product that can be adjusted to one's own preferences. Not only the modern consumer wants more freedom of choice, but so does the modern youth. Young people want to be independent and make their own choices (Dieleman and Meijers, 2005). Martin Pikaart, chairman of the Alternative for Labor Union (Alternatief voor Vakbond, AVV), pleads for total openness and freedom of choice regarding pensions (Financieel Dagblad, FD, 3 November 2009). The Goudswaard commission (2010) states that there are different needs among the participants. Increasing the freedom of choice may increase the support for and sustainability of the Dutch pension system.

What do young people say themselves?

FNV Young says it is correct that young people are not interested in pension matters. It is too far in the future. A large part of them have additional jobs or work part-time, and pension rights are not an issue. Their interest grows with the first full-time job, but only when the pension date comes closer does interest really increase. The interviewed PGGM-youngsters also say that they have little interest in and knowledge of pensions. They offer several reasons: 'I do not have to do anything for it', 'pension is far away', 'pension is complicated' and 'I am already busy with other things'. The msn.nl survey tells a slightly different story. The outcome is shown for two age categories: 17 to 21 years (7,310 participants) and 22 to 25 years (2,055 participants). See Figure 1.

Almost half of the young people are interested in pensions, and 20% might be interested when the subject is brought to them in a more appealing way. In addition, the older category of youngsters has more interest in pensions than the younger category. A bias in the survey is the reason why the percentage of young people with interest is somewhat higher than expected: the participants are not part of a random sample, they chose to join, and people who have more interest in pensions may be more willing to participate in the survey. Based on the interviews and the survey, the statement that young people have little interest in pensions is correct.

The interviewed youngsters want other ways of communication from pension funds. They cannot understand the presented numbers. The accompanying text is also difficult: 'I don't read it, because I don't understand it.' The information should be less complicated. Terms such as indexation are not understandable without a clear explanation. The way of communication should be more interactive and more visual. Regarding communication, the statements from literature and media correspond to what is said in the interviews. A more attractive and better understandable way of communication can increase young people's interest and involvement.

The statement that young people want more freedom of choice and more decision-making authority is confirmed by FNV Young and the interviewed youngsters. During the interview with the PGGM-youngsters it became clear that there are differences in risk appetite within the group. Some like risk and some really don't. The participants made it clear that they would like the possibility to choose slightly more or slightly less risk. There was also a participant who stated that there shouldn't be too much freedom: pension is complicated as it is. Young people tend to think: 'Just arrange it for me.' This is not quite the view we receive from literature and media. FNV Young didn't want too much freedom of choice either, only for risk allocation. Freedom of choice on risk allocation should go hand in hand with a good default option. Slightly more freedom of choice can increase the interest and involvement of young people.

Theme 2: Solidarity

The statements in literature and media are:

1. Young people think elderly people demand too much and contribute too little;

2. Young people bear too many risks and expenditures for elderly people;

3. Young people prefer no intergenerational solidarity.

Elderly people talk only about the rights they have, without making a gesture themselves, says Huibrecht Bos, chairman of Young Management (Jong Management, FD, 3 November 2009). There is also criticism from the side of young employees. Jamila Aanzi, the former vice-chairman of FNV Young, ‘misses the empathy from baby boomers to make a contribution too’ (FD, 3 November 2009). Martin Pikaart (AVV) thinks it ‘astonishing that pensioners who miss a half percent compensation for inflation make a lot of fuss’ and he states that ’everyone over the age of 50 is untouched and the bill is offered to the younger generation’ (NRC Handelsblad, 23 March 2010).

From the point of view that young people bear too many costs for elderly people and that elderly people demand too much, young people will start asking questions concerning intergenerational solidarity. ‘Solidarity concerning pensions is something from another period’ (Huibrecht Bos, FD, 3 November 2009).

Figure 1. Are you interested in pensions? (answers in %, for age groups 17-21 and 22-25; categories: ‘Yes, I think it's important’, ‘Maybe, when it's more appealing’, ‘No, it's too far away’, ‘Pension, what is that?’)

AENORM vol. 19 (71) May 2011 29

Actuarial Sciences

What do young people say themselves?

The young people themselves present a completely different picture. Contrary to what is stated in media and literature, the PGGM-youngsters do not feel that they bear high charges for elderly people, or that elderly people demand too much and contribute too little. To put it more strongly: when asked how a pension fund should act in a situation of underfunding, they are willing to pay a higher premium to protect the entitlements of pensioners.

The PGGM-youngsters state that ‘elderly people have to do well’. These young people do want intergenerational solidarity, but they demand reciprocity in the future. FNV Young and the interviewed youngsters state that when youngsters pay for the elderly nowadays, future young people should also pay for them when they are old. The survey on msn.nl shows a similar picture (Figure 2): a large majority is willing to pay for the pension of the elderly.

Theme 3: (Investment) Risk

The statements in literature and media are:

1. Young people can take more risks than they have now;

2. Young people want more risks than they have now;

3. (Nominal) guarantees have no value for young people;

4. Young people have no need for (nominal) guarantees.

According to life-cycle theory, young people can take more risk than elderly people (Bodie et al., 1992). There are two reasons for this. First, young people can absorb financial shocks better, because they own more human capital. Second, young people generally own less financial capital. De Jong et al. (2008) are of the opinion that individual investors make risk-return trade-offs during their lives, since young investors are willing to take more risk. Tamerus (2008) calls nominal security illusory security, because indexation of pension rights is needed in a world with inflation. One of the most important conclusions of the Frijns commission (2010) is that a real framework must be leading; a real objective and (nominal) guarantees are conflicting. Young people have no need for security and definitely no need for nominal security (Tamerus, 2008).

What do young people say themselves?

During the interviews it became clear that participants make different choices when they have more knowledge. The interviewed youngsters feel that, without knowledge of the consequences of investing, risk is ‘scary’ and ‘it feels like gambling’. They also state that investing is ‘too complex’ and ‘it is not my world’. This antipathy towards investing is also found in the survey on msn.nl (Figure 3). At first, the majority of young people would rather take no risk, just like the PGGM-youngsters. The interviewed youngsters do have a need for guarantees, contrary to what is found in literature and media. The guarantee young people want is not so much ‘I want to know exactly what I get in forty years’, but rather ‘I want to have the feeling that my money is handled carefully and that I can retire’.

During the interviews with the PGGM-youngsters a pension course was given. Participants received information about the importance of indexation and the impact of investing on premium (without investing the premium will rise strongly) and on return (without investing the return will decline). Furthermore, they received information about risk sharing through collectivity. After the pension course the young people were willing to take more risk and understood why pension funds have to invest. Still, a (strong) need for security remains. Apparently, there is a difference between reason and emotion: rationally, it makes sense for young people to take more risk, but emotionally they feel the need for security and a certain level of guarantee.

Figure 2. Are you willing to pay for the pension of the elderly? (answers in %, for age groups 17-21 and 22-25; categories: ‘Yes’, ‘Only when that is done for me too when I'm elderly’, ‘No, that is not my responsibility’, ‘Don't know / no opinion’)

Figure 3. Is it okay when pension funds invest with your pension account? (answers in %, for age groups 17-21 and 22-25; categories: ‘Of course, the value can rise’, ‘Yes, as long as I receive enough’, ‘Only if there is no risk’, ‘No, I don't want anyone to take risk with my future’, ‘Don't know / no opinion’)


Young participants were asked whether they are willing to take more risk now than when they are older. The answer was that the participants favoured a bit more risk now. When people are older, ‘money should go from the investment account to the savings account’, and ‘wanting more security as they get older’ applies to most participants.

Theme 4: Retirement Age

The statement of many youth organisations is:

1. Young people feel the retirement age should rise immediately.

The youth organisations of the political parties VVD, GroenLinks, D66, ChristenUnie and PvdA and also CNV Jongeren all feel that the retirement age for the AOW should rise.3 It can be expected that they also feel this should apply for the supplementary pension.

What do young people say themselves?

At first glance, the opinions of youth organisations and young people do not match. According to FNV Young, young people do not want the AOW retirement age to rise, but they are prepared to work longer. This also applies to part of the interviewed PGGM-youngsters. FNV Young does feel that coupling the retirement age to life expectancy is fair and easy to explain. The survey on msn.nl gives a similarly divided picture (Figure 4). From the interviews and the survey we gather that, contrary to the opinion of many youth organisations, not all young people want the retirement age to rise. A reason for this may be a lack of awareness that living longer has its costs. If so, the costs of living longer should be made clear.

The problem of higher life expectancy was therefore explained to the PGGM-youngsters, who then answered the question what a pension participant should do: work longer or receive less pension? Even when only confronted with the problem of higher life expectancy – without its costs being mentioned – they already see that ‘something should happen’. The ways in which the participants want to deal with the higher life expectancy diverge: the two participants who were least willing to take risks preferred a lower pension income, while the other participants see working longer as the best solution.

Conclusion

Several conclusions can be drawn from my analysis. First, in many cases a discrepancy exists between what young people say themselves and what is said about them, and this recurs across all themes. Young people do not prefer full freedom of choice, do want to show solidarity with the elderly, do feel the need for security, and not all of them want the retirement age to rise. It therefore stands to reason not to believe everything you read in literature and media about young people and pension. It is true, however, that young people have little interest in pension.

Finally, we see that young people make different choices once they have insight; this shows in their risk preference and in how they want to cope with higher life expectancy. Education is crucial to gain insight and it encourages involvement with the subject. This insight should not only concern their own pension, but also the mechanism behind it (indexation, the trade-off between risk and return, risk sharing). When they understand the mechanism, young people tend to draw the following conclusion: a reasonable pension is an uncertain pension, because it calls for indexation, meaning that return is needed, which leads to risk and to uncertainty about the outcome.

The limited scale of my analysis calls for further investigation. Larger groups of young people should be involved, including young people from sectors other than the care and health sector. Youth organisations other than FNV Young can extend our perspective on what young people want. For example, we know that there are many youth organisations that, unlike FNV Young, want a higher retirement age.

References

Autoriteit Financiële Markten, Pensioenkennis van de Nederlandse consument, 2008.

Bodie Z., R.C. Merton and W.F. Samuelson. “Labor supply flexibility and portfolio choice in a life cycle model.” Journal of Economic Dynamics and Control, 16, 427-449, 1992.

Commissie-Frijns. Pensioen: ‘onzekere zekerheid’. Een analyse van het beleggingsbeleid en het risicobeheer van de Nederlandse pensioenfondsen. 2010.

Commissie-Goudswaard. Een sterke tweede pijler. Naar een toekomstbestendig stelsel van aanvullende pensioenen. 2010.

Figure 4. When people live longer, should they also work longer? (answers in %, for age groups 17-21 and 22-25; categories: ‘Of course’, ‘Only if you want to’, ‘Only when it is not too heavy’, ‘No, when you're 65 you can quit’, ‘Don't know / no opinion’)

3 According to their websites (Spring of 2010).

Dieleman and F. Meijers. “Paradise lost: youth in transition in the Netherlands.” In N. Bagnall (ed.), Youth Transition in a Globalised Marketplace. New York: Nova Science, 75-99, 2005.

Jong F. de, P. Schotman and B. Werker. Strategic Asset Allocation. Netspar Panel Paper nr. 8, 2008.

Kortleve C.E. and A. Slager. Consumenten aan het roer. Strategische toekomstvisies voor de Nederlandse pensioensector. Netspar NEA Paper, 27, 2010.

Oblinger D. and J. Oblinger. Is it age or IT: First steps towards understanding the net-generation. In Educating the Net-Generation: Educause, 2005.

Sociaal-Economische Raad. Op weg naar pensioenbewust-zijn, de bevindingen van het debat Pensioenbewustzijn. Rapport Pensioencommissie, 30 januari 2008.

Tamerus, J.H. “Gaan we in het pensioencontract differentiëren?” Tijdschrift voor Pensioenvraagstukken, 4, 2008.

Veen W. and F. Jacobs. Leren van jongeren. Een literatuuronderzoek naar nieuwe geletterdheid. Stichting SURF, 2005.

Furthermore, several publications in national newspapers are mentioned.


Mathematical Economics

Introduction

The switch percentages in the years after the system change are so low that it is doubtful whether competition is really present in this market and whether the market performs efficiently. Consumers seem to be loyal in their choice of health care insurer, but whether this is optimal behaviour is unclear. The thesis therefore tried to find an explanation for these low switch rates in consumers' loyalty. It also investigated whether low switch rates are harmful for the efficiency of choices on the Dutch market for health care insurance.

In the first year of the reform many health care insurance companies asked prices for their insurance policies that were below their expected costs in order to attract consumers. In the subsequent years they increased these prices to a profitable level and hoped that consumers would remain loyal. One of the aims of the thesis was to find out whether this strategy could work. Consumer choice behaviour on the Dutch health care insurance market is modelled with a reinforcement model that Weisbuch et al. (2000) constructed to investigate the behaviour of buyers on a fish market in Marseille. With this model buyers can learn to make optimal choices, because better choices are reinforced more strongly for the next period than worse ones. They can choose efficiently if they adopt a good mixture between exploring the market by switching and exploiting the insurer that offers the largest utility.

A reinforcement model

To explain how this reinforcement model works in detail, consider a set of n buyers, indexed i, and m sellers, indexed j. The basic assumption underlying the model is that a buyer i bases his choice for a seller j partially on his experiences from the past. The buyer’s learning process evolves according to the following relation:

J_ij(t) = (1 − γ) · J_ij(t − 1) + U_ij(t),   for all i, j.    (1)

In every period a buyer i can obtain a payoff U_ij by going to seller j. This payoff is added to the corresponding element of the matrix of preferences J, whose entries consist of payoffs from previous periods, discounted with a constant factor γ: the larger γ, the more recent payoffs matter relative to older ones. The preference coefficients are used to choose a seller. Buyers could simply choose the seller with the highest preference coefficient, but this can lead to exploitation of buyers by sellers. Which seller has the highest preference coefficient depends on the order in which sellers are chosen and on the initial condition of J. When a seller has the highest preference coefficient, he can offer zero payoff and still remain the preferred seller. This can be prevented when the buyer explores the market once in a while to acquire new information. The following choice rule captures this trade-off between exploitation and exploration. It gives the probability for buyer i to visit seller j as:

The Dutch government introduced a new health care insurance system in 2006. There were several reasons for this new system, of which promoting competition was the most important. Before the reform, health care costs were high and were expected to increase further in the coming years due to the ageing of the Dutch population. The government therefore decided to introduce competition to improve the efficiency of the system. One of the elements of this new system is the possibility for insured consumers to switch between health care insurance providers each year, so that insurers need to compete for customers.

by: Tim Pellenkoft

Switching and Efficiency on the Dutch Health Care Insurance Market: A Reinforcement Approach

Tim Pellenkoft

Tim graduated in Econometrics in October 2010 at the University of Amsterdam. This thesis was written under the supervision of dr. Jan Tuinstra. During his studies Tim was a very active VSAE member. Tim is currently working as a data analyst and programmer for MeyerMonitor.



P_ij = exp(β J_ij) / Σ_j′ exp(β J_ij′),   for all i, j.    (2)

When β = 0, the buyer visits each seller with equal probability (no exploitation), while β = ∞ means that the buyer always visits the seller he prefers most (no exploration).
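To make the mechanics concrete, here is a minimal Python sketch (not from the thesis) of the two building blocks: the discounted preference update of relation (1) and the logit choice rule of (2). The function names `update_preferences` and `choose_seller` are illustrative, and the payoffs are assumed to be supplied by the caller.

```python
import math
import random

def update_preferences(prefs, chosen, payoff, gamma):
    """Relation (1): discount all preference coefficients by (1 - gamma)
    and add the realised payoff to the seller that was visited."""
    new_prefs = [(1.0 - gamma) * j for j in prefs]
    new_prefs[chosen] += payoff
    return new_prefs

def choose_seller(prefs, beta, rng=random):
    """Choice rule (2): visit seller j with probability proportional
    to exp(beta * J_ij); the maximum logit is subtracted for stability."""
    logits = [beta * j for j in prefs]
    top = max(logits)
    weights = [math.exp(l - top) for l in logits]
    r = rng.random() * sum(weights)
    for j, w in enumerate(weights):
        r -= w
        if r < 0.0:
            return j
    return len(weights) - 1
```

With β = 0 every seller is equally likely to be visited; as β grows, the seller with the highest preference coefficient is chosen almost surely.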

Not every consumer can afford the insurance that offers the best payoff when that insurer is one of the more expensive suppliers. The model should therefore also account for income restrictions. The utility function thus depends on income, offered quality and price, and could look like this:

(3)

Is a low switch rate harmful for the efficiency of the market? This question is highly relevant to the introduction of competition in the new system. Efficiency in period t for consumer i is computed with the following measure:

E_i(t) = (U_ij(t) − min_j U_ij) / (max_j U_ij − min_j U_ij).    (4)
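As a small illustration (hypothetical code, not the thesis implementation), the measure in (4) rescales the payoff of the chosen insurer to the interval [0, 1] between the worst and the best available payoff:

```python
def efficiency(payoffs, chosen):
    """Measure (4): 1.0 when the chosen insurer offers the best payoff,
    0.0 when it offers the worst."""
    worst, best = min(payoffs), max(payoffs)
    return (payoffs[chosen] - worst) / (best - worst)
```

For payoffs [1.0, 2.0, 3.0], choosing insurer 2 gives efficiency 1.0, insurer 1 gives 0.5, and insurer 0 gives 0.0.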

Results

A group of fifty consumers was observed in simulations of fifty periods. In every period each consumer chose one of five insurers, so each consumer made 50 choices in total. The order of choices in this sequence determined whether a consumer switched often or rarely. Simulations were run to check the dynamics of the model for different values of β and γ, different initial matrices of preference J, and different payoffs offered by the five insurers. The influence of relative income was also measured. From the consumer choices it was possible to derive whether these choices were optimal given their conditions.

More loyalty implies less switching, as can be seen in Figure 1. When β is two, approximately 10% of the consumers switch per period after fifty periods, while this was about 40% (2/5) for β = 1.¹ When β is set to three, the average switch rate per period decreases further to approximately 5%, because more consumers become loyal to one insurer. This decrease continues as β increases: for β = 5 there is nearly no switching left in the market. Still, consumers with a high γ switch more often than those with a lower γ.

Efficiency over time is displayed in Figure 2. Interestingly, efficiency decreases when β exceeds two; apparently there is a maximum in efficiency somewhere around β = 2. This maximum is caused by two effects. When β < βmax,² consumers tend to switch too often, causing them to make wrong choices. Whenever they choose the optimal insurer, their decision is not reinforced strongly enough for them to become loyal to this insurer. They have a tendency to keep exploring the market, which creates a loss of efficiency. On the other hand, when β > βmax consumers are too loyal and stick with their insurer even though this insurer might not be the best one. If consumers choose an insurer in the first period that offers a positive payoff, their choice is strongly reinforced by (2) even when this choice is not optimal. The consumers may have a tendency to exploit this positive payoff by sticking to this insurer, while exploring the market to find the optimal insurer would be the better choice.
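The qualitative effect of β on switching can be reproduced with a toy simulation along the lines of relations (1) and (2). All parameter values and payoffs below are illustrative, not those of the thesis: fifty consumers repeatedly choose among five insurers with fixed payoffs, update their preferences, and the fraction of switchers in the final period is recorded.

```python
import math
import random

def final_switch_rate(beta, gamma=0.3, n_buyers=50, n_sellers=5,
                      periods=50, seed=0):
    """Fraction of buyers choosing a different insurer in the last period."""
    rng = random.Random(seed)
    payoffs = [1.0 + 0.5 * j for j in range(n_sellers)]  # hypothetical payoffs
    prefs = [[0.0] * n_sellers for _ in range(n_buyers)]
    last_choice = [None] * n_buyers
    switches = 0
    for _ in range(periods):
        switches = 0
        for i in range(n_buyers):
            # choice rule (2), with the maximum logit subtracted for stability
            logits = [beta * x for x in prefs[i]]
            top = max(logits)
            weights = [math.exp(l - top) for l in logits]
            r = rng.random() * sum(weights)
            choice = n_sellers - 1
            for j, w in enumerate(weights):
                r -= w
                if r < 0.0:
                    choice = j
                    break
            if last_choice[i] is not None and choice != last_choice[i]:
                switches += 1
            last_choice[i] = choice
            # learning relation (1): discount and reinforce the chosen insurer
            prefs[i] = [(1.0 - gamma) * x for x in prefs[i]]
            prefs[i][choice] += payoffs[choice]
    return switches / n_buyers
```

In this sketch a high β (say 5) locks consumers in almost immediately, while a low β keeps them exploring, matching the pattern described for Figure 1.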

Figure 1. Levels of switch over time for different β.

Figure 2. Levels of efficiency over time for different β.

1 The switch rate for β = 0 is not displayed because it is theoretically equal to 0.8: every consumer has probability 0.2 of choosing the same insurer as in the previous period.

2 βmax is the value of β where efficiency reaches its maximum.

Douven and Schut (2006) stated that in the first year after the reform insurance companies asked a price that was below their expected costs. Companies used this dynamic pricing strategy to attract as many consumers as possible. By gradually increasing the price to a level above their expected costs in the years that followed, insurers hoped they would make larger profits, because they guessed that most attracted consumers would become loyal to them. Intuitively, this would only work for high β, because people should remain loyal to the insurer even though the price increased. However, it appeared that a high β = 5 is the only case where insurers are on average worse off than in the regular situation. For very high β it is too hard to attract consumers by setting a low price: consumers who chose another insurer tend to remain loyal to that insurer. For very low β attracting consumers is not a problem, but retaining them limits the effectiveness of the strategy. For intermediate values of β the strategy seems to work quite well. Consumers are able to find the cheap insurer, build up high preferences, and remain loyal when the price is increased, because their preference for this insurer remains the highest. Perhaps there is a link between the efficiency of consumer choice and the profitability of this strategy: when consumers choose efficiently, they explore the market enough to find this insurer and exploit it enough to become loyal. The main disadvantage of this strategy is that it damages the efficiency of consumer choice.

The pricing strategy as it is exercised by most insurance companies seems to work, because current switch levels match a loyalty level for which it is a profitable strategy.

Conclusion

Under the assumption that the reinforcement model describes the Dutch health care insurance market well, current switch rates are consistent with a loyalty parameter value β = 3. However, consumer choices on the market would be somewhat more efficient if this loyalty parameter were approximately two. The corresponding switch rate lies somewhere between 10 and 15%, which was reached in the first year of the reform, when there was much more awareness among consumers of the switch option and the competitive character of the market. By increasing awareness, the government or other market-regulating agencies might improve market performance and competition. When loyalty decreases or the willingness to switch increases, consumers will explore the market more and exploit their best insurer by being reasonably loyal. Market-regulating authorities should also prevent insurance companies from exercising a pricing strategy in which they set a low price, even below their costs, to attract consumers and then increase this price once consumers have become loyal to the company. If a single company implements such a strategy, this can reduce the efficiency of consumer choices, especially when consumers are not very loyal. When consumers are very loyal this strategy does not work, because insurers are not able to attract extra consumers by setting a low price.

References

Douven, R.C.M.H. and F.T. Schut. “Premieconcurrentie tussen zorgverzekeraars.” Economisch-statistische berichten, vol. 91, afl. 4488, (2006): 272-275.

Kirman, A.P. and N.J. Vriend. “Evolving Market Structure: An ACE Model of Price Dispersion and Loyalty.” Journal of Economic Dynamics & Control, 25 (2001): 459-502.

Pellenkoft, T. “Switching and efficiency on the Dutch health care insurance market: a reinforcement approach.” MSc thesis, University of Amsterdam, 2010.

Weisbuch, G., A.P. Kirman and D. Herreiner. “Market Organisation and Trading Relationships.” The Economic Journal, 110 (April, 2000), 411-436, Royal Economic Society.


Puzzle

Answer to “Cucumbers”

At first the pile of cucumbers weighs 200 pounds, of which 99% is water; therefore 1% of the pile is solid material, which does not evaporate. 1% of 200 pounds is 2 pounds, which will be 2% of the pile of cucumbers after they have lain in the sun. The pile of cucumbers therefore weighs only 100 pounds at the end of the day.
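The arithmetic can be checked in two lines (a quick Python sketch):

```python
solids = 200 * 0.01           # 2 pounds of solid material that never evaporates
final_weight = solids / 0.02  # afterwards the solids make up 2% of the pile
print(final_weight)           # 100.0
```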

Answer to “Jigsaw”

To determine the possible number of pieces the jigsaw could contain, we need to solve the equation:

2m + 2n - 4 = 0.08mn

If you rewrite this it becomes:

(m - 25) (n - 25) = 575

There are three possible solutions, namely:

1. m = 26, n = 600: number of pieces is 15,600
2. m = 30, n = 140: number of pieces is 4,200
3. m = 48, n = 50: number of pieces is 2,400
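These solutions can be verified by brute force (a quick Python check, not part of the original answer):

```python
# Enumerate jigsaw dimensions with (m - 25) * (n - 25) = 575 and m <= n,
# which is equivalent to the border condition 2m + 2n - 4 = 0.08 * m * n.
solutions = []
for m in range(26, 601):
    if 575 % (m - 25) == 0:
        n = 575 // (m - 25) + 25
        if n >= m:  # avoid listing each (m, n) pair twice
            solutions.append((m, n, m * n))
            assert abs(2 * m + 2 * n - 4 - 0.08 * m * n) < 1e-9
print(solutions)  # [(26, 600, 15600), (30, 140, 4200), (48, 50, 2400)]
```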

Answer to “Shopping”

The prices of the three products are € 1.20, € 1.25 and € 4.90.

Two Blondes

Two blondes are sitting in a street cafe, talking about their children. One says that she has three daughters: the product of their ages equals 36 and the sum of their ages coincides with the number of the house across the street. The second blonde replies that this information is not enough to figure out the age of each child. The first agrees and adds that the oldest daughter has beautiful blue eyes. Then the second blonde instantly solves the puzzle. How does she do so?

Bank Teller

A confused bank teller transposed the dollars and cents when he cashed a check for Ms Smith, giving her dollars instead of cents and cents instead of dollars. After buying a newspaper for 50 cents, Ms Smith noticed that she had exactly three times the amount of the original check left. What was the amount of the check?

Solutions

Solutions to the two puzzles above can be submitted up to August 1st 2011. You can hand them in at the VSAE room (E2.02/04), mail them to [email protected] or send them to VSAE, attn. Aenorm puzzle 71, Roetersstraat 11, 1018 WB Amsterdam, the Netherlands. A book token will be raffled among the correct submissions. Solutions can be in either English or Dutch.

On this page you find a few challenging puzzles. Try to solve them and compete for a prize! But first we will provide you with the answers to the puzzles of last edition.


Agenda

• 27 April Active Members day

• 3 May Kraket/VSAE soccer tournament

• 23 May General Members Meeting

• 26 May Golfclinic with Ernst & Young

• 30 May In-house day Optiver

• 10 June End-of-year activity

• 17-19 June Kraketweekend

On the 1st of February, the new VSAE board started and a lot has happened since. On the 17th and 18th of February, the first edition of the Risk Intelligence Competition took place. 25 selected students worked on cases of Deloitte, Nationale-Nederlanden and Zanders for two days to learn more about risk management. The event was a great success.

On the 12th, 13th and 14th of April, we welcomed 125 talented Master and PhD students in econometrics from all over the world for the twelfth Econometric Game. The case, by professor Frank Windmeijer, was on alcohol consumption during pregnancy, using genetic markers as instruments. After three days of hard work, but also a lot of fun, the jury of professors declared the team of Maastricht University winner of the Econometric Game 2011!

Of course, the VSAE also organized many relaxing activities in the last few months. Besides our monthly drinks, we organized a pub quiz, a party with the theme ‘Devilish Angels’ and a soccer tournament with Kraket. We also travelled to Cracow with 39 students for our yearly Short Trip Abroad. The academic year is coming to an end and it is almost time for the summer holidays. We wish all our members good luck with their last exams, and hope that everyone enjoys the summer afterwards!

The end of the academic year is approaching fast. The last couple of months brought some great activities. The National Econometricians Day (LED), for example, was a big success. This year’s LED took place in Rotterdam and offered an excellent program full of interesting presentations, a good comedy sketch, a great dinner and of course a party at the end. Among other things, we also held a beach soccer tournament to get in the mood for summer and had a Spring Gala with the study associations Anguilla and Salus.

On the 11th of April we visited Getronics Consulting for an in-house day. We were welcomed with a presentation and, after a nice lunch, were asked to solve a case about the communication infrastructure between different branches of an international company.

As if all of this isn’t enough, we have some great activities coming up. To thank all the active members for the effort they put into Kraket this year, the active members day will take place on the 27th of April. Next to that, we have an in-house day with Optiver and the traditional Kraketweekend coming up, among other things.
