
Student's Solution Manual to Accompany

Probability: The Science of Uncertainty with Applications to Investments, Insurance, and Engineering

Michael A. Bean


Contents

Introduction...1

Chapter One Solutions...3

Chapter Two Solutions...6

Chapter Three Solutions...15

Chapter Four Solutions...23

Section 4.1.13 Exercises...23

Section 4.2.4 Exercises...38

Section 4.3.3 Exercises...38

Chapter Five Solutions...44

Chapter Six Solutions...59

Chapter Seven Solutions...77

Chapter Eight Solutions...101

Chapter Nine Solutions...129

Chapter Ten Solutions...162


Introduction

This manual contains complete solutions to approximately one quarter of the exercises in the book Probability: The Science of Uncertainty with Applications to Investments, Insurance, and Engineering. It is an ideal companion to the textbook and is recommended for students who are preparing to take an examination in probability set by a professional society such as the Society of Actuaries, the Casualty Actuarial Society, or the Canadian Institute of Actuaries. This manual will also be a valuable resource for students interested in seeing worked-out solutions to problems that go beyond the examples given in the textbook.

How To Use This Manual

Mathematics is a subject that can only be learned through practice. Hence, before consulting the solutions in this manual, students should have made a serious attempt to do the textbook exercises on their own. Although there are many ways in which this manual can be used as a supplement to the textbook, we believe that students will learn the most by using the manual in the following way:

1. Read the assigned chapter or section of the textbook thoroughly before attempting any of the exercises.

2. Re-read each of the examples in the assigned reading paying close attention to the key points and steps required to obtain the final answer.

3. Without consulting the solutions that accompany the examples, recreate solutions for each of the examples in the assigned reading and verify that the answers obtained agree with those given in the textbook.

4. Without consulting the solutions in this manual, attempt to solve each of the exercises accompanying the assigned reading. If necessary, read the sections and examples of the textbook that pertain to the given exercises once again. Do not immediately consult the solution manual when confronted with a problem that is difficult to solve.

5. If a particular problem still seems intractable, then read the first few sentences of the solution from this manual and try to solve the rest of the problem on your own.

6. After completing these steps, carefully read over the sections of this manual that pertain to the assigned exercises, making note of all key solution points and any alternative approaches that you may not have considered.


We trust that you will find this manual helpful in your study of probability. Comments on the contents or structure of this manual can be sent to the publisher at the address printed in the front of the textbook.


Chapter One Solutions

2. a. The terms Bayesian and frequentist refer to interpretations of probability. The frequentist (also called objectivist) interpretation of probability is a perspective in which probabilities are considered to be long-run relative frequencies. The Bayesian (also called subjectivist) interpretation of probability is a perspective in which probabilities are considered to be measures of belief that can change over time to reflect new information. See sections 1.6 and 1.9 in the textbook.

b. The insurance principle is the basis of actuarial science, whereas the principle of no arbitrage is the basis of financial engineering. The insurance principle asserts that for any group of homogeneous and independent risks, the average loss per individual becomes more certain as the size of the group increases. The principle of no arbitrage asserts that any two securities that provide the same future cash flow and have the same level of risk must sell for the same price. The insurance principle can be used to determine the pure cost of insurance for a large group of independent, homogeneous risks. The principle of no arbitrage can be used to determine the (theoretically correct) price of a security relative to the prices of other securities in an active market.

c. Both moral hazard and adverse selection arise in insurance from an insurer's inability to access perfect information about the insured person. Adverse selection arises from an inability to distinguish completely the good risks from the bad. Moral hazard arises from the behavioral changes that insurance protection induces after it is purchased. Adverse selection can be minimized through the use of a good risk classification scheme (and to a lesser extent, policy design). Moral hazard is primarily mitigated through policy design (by including, for example, deductibles and coinsurance provisions which require policyholders to share in the losses).

d. Actuarial science is the subject that is concerned with analyzing the adverse financial consequences of large, unpredictable losses and with designing mechanisms to cushion the harmful financial effects of such losses. Financial engineering is the subject that is concerned with analyzing risk in financial markets and with designing products and techniques to manage that risk. Actuarial science is based on applications of the insurance principle, whereas financial engineering is based on applications of the principle of no arbitrage and the principle of optimality. Historically, actuarial science developed to address contingencies in a company's liabilities, whereas financial engineering developed to address contingencies in the company's assets.


10. This question and the one following it are designed to give students an appreciation of the differences between the frequentist and Bayesian interpretations of probability. They are also designed to illustrate some of the strengths and weaknesses of each interpretation.

a. Recall that a frequentist considers a probability to be a constant long-run relative frequency, whereas a Bayesian considers a probability to be a measure of belief that can change over time to reflect new information. Looking at the data in the problem, a frequentist would note that the average number of accidents per year over the five-year period is 72 ( (90 + 70 + 75 + 60 + 65)/5 ) and would estimate the probability of an accident in the coming year to be 7.2%. A Bayesian, on the other hand, might notice the decline in accident frequency over time and choose to alter his/her opinion of the accident frequency to take this new information into account. As a result, a Bayesian might estimate the probability of an accident in the coming year to be around 6%. A frequentist might also notice the apparent decline in accident frequency over time but, in the absence of further data, would interpret the relatively high first year accident frequency of 90 to be simply an "above average" observation that could have occurred in any year. The frequentist would interpret the first year observation in this way because the frequentist considers the accident probability to be an inherent constant that does not change with the arrival of new data.

b. One would expect the accident frequency to decrease as drivers gain experience. Hence, it is not realistic to assume a constant accident frequency over time. A Bayesian can easily incorporate this anticipated decrease into future estimates for the accident frequency because a Bayesian considers probability to be a measure of belief that can change over time to reflect new information and personal opinion. A frequentist could explain the change in accident frequency by arguing that the observed data do not come from the same experiment and hence should not be combined to determine an estimate for the future accident frequency. Indeed, a frequentist could argue that the data come from five distinct experiments, where each experiment is defined by the number of years of driving experience, and that the only way to obtain meaningful estimates of the accident frequency is to consider several groups of 1000 newly licensed 18-year-old drivers over a five-year period. The estimates of accident frequency determined in this way will differ with the number of years of driving experience, as intuition suggests they should.
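For readers who want to verify the arithmetic in part a, the following Python sketch (not part of the original solution; the accident counts and the cohort size of 1000 drivers are taken from the problem as quoted above) computes the frequentist point estimate:

```python
# Frequentist estimate of the accident probability from five years of data.
accidents_per_year = [90, 70, 75, 60, 65]   # observed accident counts
drivers = 1000                               # newly licensed 18-year-old drivers

average_accidents = sum(accidents_per_year) / len(accidents_per_year)
frequentist_estimate = average_accidents / drivers

print(average_accidents)      # 72.0
print(frequentist_estimate)   # 0.072, i.e., 7.2%
```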


Chapter Two Solutions

6. a. The given distribution function is a step function with jumps at $x = -1$ and $x = \tfrac{2}{3}$. Hence, the only values of $x$ for which $p_X(x) > 0$ are $x = -1$ and $x = \tfrac{2}{3}$. To see why this is so, consider, for example, the point $x = 0$. From the definition of $F_X$, $F_X(0) = \tfrac{1}{3}$ and $F_X(-1) = \tfrac{1}{3}$. However,

$$F_X(0) = \Pr[X \le 0] = \Pr[X \le -1] + \Pr[-1 < X < 0] + \Pr[X = 0] = F_X(-1) + \Pr[-1 < X < 0] + p_X(0).$$

Hence, $\Pr[-1 < X < 0] + p_X(0) = 0$. So, since probabilities cannot be negative, we have $p_X(0) = 0$ as claimed. One can show in a similar way that $p_X(x) = 0$ for $x \ne -1, \tfrac{2}{3}$.

Now

$$p_X\!\left(\tfrac{2}{3}\right) = \Pr\!\left[X = \tfrac{2}{3}\right] = \Pr\!\left[X \le \tfrac{2}{3}\right] - \Pr\!\left[X < \tfrac{2}{3}\right] = F_X\!\left(\tfrac{2}{3}\right) - \Pr\!\left[X < \tfrac{2}{3}\right].$$

Further, since $p_X(x) = 0$ for $-1 < x < \tfrac{2}{3}$, as demonstrated earlier, we have

$$\Pr\!\left[X < \tfrac{2}{3}\right] = \Pr[X \le -1] + \Pr\!\left[-1 < X < \tfrac{2}{3}\right] = \Pr[X \le -1] = F_X(-1).$$

Hence,

$$p_X\!\left(\tfrac{2}{3}\right) = F_X\!\left(\tfrac{2}{3}\right) - F_X(-1) = 1 - \tfrac{1}{3} = \tfrac{2}{3}.$$

In particular, the value of $p_X\!\left(\tfrac{2}{3}\right)$ is the size of the jump on the graph of $F_X$ at the point $x = \tfrac{2}{3}$. In a similar manner, we have $p_X(-1) = \tfrac{1}{3}$.

Consequently, the probability mass function of $X$ is given by


$$p_X(-1) = \tfrac{1}{3}, \quad p_X\!\left(\tfrac{2}{3}\right) = \tfrac{2}{3}, \quad \text{and} \quad p_X(x) = 0 \text{ for all other } x.$$

b. From the probability mass function determined in part a, the expected value is given by

$$E[X] = (-1)\left(\tfrac{1}{3}\right) + \left(\tfrac{2}{3}\right)\left(\tfrac{2}{3}\right) = \tfrac{1}{9}.$$

c. Using the observations given in part a, we have

$$\Pr\!\left[X < \tfrac{2}{3}\right] = \Pr[X \le -1] + \Pr\!\left[-1 < X < \tfrac{2}{3}\right] = F_X(-1) + 0 = \tfrac{1}{3}.$$
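The values obtained in parts a through c can be checked numerically. The following Python sketch is a verification aid (not part of the textbook solution); it simply re-enters the probability mass function found in part a and uses exact fractions:

```python
from fractions import Fraction as F

# Probability mass function from part a: masses at x = -1 and x = 2/3.
pmf = {F(-1): F(1, 3), F(2, 3): F(2, 3)}

mean = sum(x * p for x, p in pmf.items())                  # part b
prob_less = sum(p for x, p in pmf.items() if x < F(2, 3))  # part c

print(mean)       # 1/9
print(prob_less)  # 1/3
```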

d. The graphs of pX and FX can be created using Mathematica or similar computer software. The graphs given here have the same scale to facilitate comparisons.

[Graph of the probability mass function $p_X$.]


[Graph of the distribution function $F_X$.]

12. Let X1 and X2 be as defined in the question.

a. If only the suit of a card is observed, then the sample space is $S = \{HH, HD, DH, DD\}$, where H represents hearts, D represents diamonds, and where order is respected. If the two hearts are distinguishable from each other and the two diamonds are distinguishable from each other, then the sample space is $S = \{H_1H_2, H_2H_1, H_1D_1, H_1D_2, H_2D_1, H_2D_2, D_1H_1, D_1H_2, D_2H_1, D_2H_2, D_1D_2, D_2D_1\}$, where the subscripts identify the given heart or diamond uniquely and order is respected. Since we are only interested in the suit of a card in this question, we can take the sample space to be $S = \{HH, HD, DH, DD\}$. Note, however, that the principle of equal likelihood does not apply to $S$ because we have suppressed information on the individual outcomes. See section 3.1 for a discussion of how the choice of sample space can affect the validity of the principle of equal likelihood.

b. The random variables $X_1$ and $X_2$ are not independent because knowledge of the value assumed by one of the variables changes the probability distribution of the other. For example, if it is known that $X_1$ assumes the value 1 (i.e., the card on the left is a heart), then it is less likely that $X_2 = 1$ because only one of the three remaining cards is a heart, the other two cards being diamonds. However, the random variables $X_1$ and $X_2$ are identically distributed because, in the absence of knowledge of the left card, the probability that the right card is a heart is $\tfrac{1}{2}$, and in the absence of knowledge of the right card, the probability that the left card is a heart is $\tfrac{1}{2}$.


c. Using the relative frequency interpretation of probability, the probability mass function for the random vector $(X_1, X_2)$ is given by

$$p_{X_1,X_2}(1, 1) = \tfrac{1}{6}, \quad p_{X_1,X_2}(1, 0) = \tfrac{1}{3}, \quad p_{X_1,X_2}(0, 1) = \tfrac{1}{3}, \quad p_{X_1,X_2}(0, 0) = \tfrac{1}{6}.$$

Consider, for example, the point $(x_1, x_2) = (1, 1)$. Suppose that the given experiment is repeated $n$ times, where $n$ is a large number. Then according to the relative frequency interpretation of probability, approximately $\tfrac{n}{2}$ of the ordered pairs $(x_1, x_2)$ are of the form $(1, \cdot)$. Suppose that exactly $n^*$ are of this form. Then according to the relative frequency interpretation of probability, approximately $\tfrac{1}{3}$ of these $n^*$ ordered pairs are of the form $(1, 1)$. Consequently, of the original $n$ observations, approximately $\tfrac{1}{3} n^* \approx \left(\tfrac{1}{3}\right)\left(\tfrac{1}{2}\right) n = \tfrac{n}{6}$ are of the form $(1, 1)$. Therefore, by the relative frequency interpretation of probability again, $p_{X_1,X_2}(1, 1) = \tfrac{1}{6}$ as claimed. The values of $p_{X_1,X_2}$ at the points $(1, 0)$, $(0, 1)$, and $(0, 0)$ are determined using a similar argument. See also the discussion in section 2.2 of the textbook.

d. The contingency table for $X_1$ and $X_2$ is as follows:

                   X2
               0       1
        -------------------------
  X1   0  |   1/6     1/3   |  1/2
       1  |   1/3     1/6   |  1/2
        -------------------------
          |   1/2     1/2   |   1
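The joint probability mass function can also be verified by direct enumeration of the twelve equally likely ordered draws. The Python sketch below is a verification aid and is not part of the original solution; the card labels H1, H2, D1, D2 are our own:

```python
from fractions import Fraction as F
from itertools import permutations
from collections import Counter

# Enumerate ordered draws of two cards from two hearts and two diamonds.
cards = ["H1", "H2", "D1", "D2"]
outcomes = list(permutations(cards, 2))   # 12 equally likely ordered pairs

# X1, X2 indicate whether the left/right card is a heart.
counts = Counter((int(left.startswith("H")), int(right.startswith("H")))
                 for left, right in outcomes)

joint = {xy: F(n, len(outcomes)) for xy, n in counts.items()}
for xy, p in sorted(joint.items(), reverse=True):
    print(xy, p)   # (1, 1) 1/6, (1, 0) 1/3, (0, 1) 1/3, (0, 0) 1/6
```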


15. a. From the general relationship

$$F_X(x) = \int_{-\infty}^{x} f_X(s)\, ds$$

(see section 2.3 in the textbook) and the given form of the density function, it follows that

$$F_X(x) = 1 - \frac{1}{x^2} \text{ for } x \ge 1, \qquad F_X(x) = 0 \text{ for } x < 1.$$

Indeed, for $x \ge 1$,

$$F_X(x) = \int_{1}^{x} \frac{2}{s^3}\, ds = \left[-s^{-2}\right]_{1}^{x} = 1 - \frac{1}{x^2}.$$

b. From the given formula for $f_X$ and the formula for the expectation of a continuous random variable (section 2.3 in the textbook), it follows that

$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\, dx = \int_{1}^{\infty} x \cdot \frac{2}{x^3}\, dx = \left[-2 x^{-1}\right]_{1}^{\infty} = 2.$$

Note that we obtain the same answer using the formula $E[X] = \int_0^{\infty} S_X(x)\, dx$. Indeed, since $S_X(x) = 1/x^2$ for $x \ge 1$ and $S_X(x) = 1$ for $x < 1$, we have


$$E[X] = 1 + \int_{1}^{\infty} \frac{1}{x^2}\, dx = 2.$$

c. From the formula for $F_X$ determined in part a, we have

$$\Pr[X > 4] = 1 - \Pr[X \le 4] = 1 - F_X(4) = 1 - \left(1 - \tfrac{1}{16}\right) = \tfrac{1}{16}.$$
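The answers in parts a through c can be checked numerically. The sketch below is our own check (not part of the textbook solution); it approximates the improper integrals with a crude midpoint rule on a truncated range, so the printed values are approximate:

```python
# Numerical check for the density f(x) = 2/x**3, x >= 1.
def f(x):
    return 2.0 / x**3 if x >= 1.0 else 0.0

def integrate(g, a, b, n=100_000):
    # Simple midpoint rule on [a, b].
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

print(integrate(f, 1, 4))                        # F(4) ~ 0.9375 = 1 - 1/16
print(integrate(lambda x: x * f(x), 1, 10_000))  # E[X] ~ 2 (tail truncated)
print(integrate(f, 4, 10_000))                   # Pr[X > 4] ~ 0.0625 = 1/16
```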

d. The graphs of fX and FX can be created using Mathematica or similar computer software. Note that only the values for x ¥ 1 have been plotted since fX and FX are both zero for x < 1.

[Graph of the density $f_X$ for $x \ge 1$.]


[Graph of the distribution function $F_X$ for $x \ge 1$.]

Note that the slope of the graph of $F_X$ at $x = a$ is $f_X(a)$. In particular, the slope of $F_X$ at $x = 1$ is 2. (Here, we are implicitly considering the slope at $x = 1$ to be the slope determined by considering points to the right of $x = 1$.)

20. Since $X$ and $Y$ are independent, we can complete the contingency table using the multiplicative relationship $p_{X,Y}(x, y) = p_X(x)\, p_Y(y)$. We also need to use the facts that $\sum_x p_X(x) = 1$ and $\sum_y p_Y(y) = 1$. The procedure for completing the table is as follows:

i. We are given that $p_X(2) = .4$. Hence, $p_X(1) = .6$.

ii. Using the value of $p_X(1)$ obtained in i, the given values for $p_{X,Y}(1, 1)$ and $p_{X,Y}(1, 4)$, and the multiplicative relationship for $p_{X,Y}(x, y)$, we get $p_Y(1) = .4$ and $p_Y(4) = .2$.

iii. From the values obtained in ii and the given value for $p_Y(2)$, we get $p_Y(3) = .1$.

iv. From the marginal distributions for $X$ and $Y$, we can complete the rest of the contingency table using the multiplicative relationship for $p_{X,Y}$.

The completed contingency table for X and Y is as follows:

                        Y
              1      2      3      4
       ---------------------------------------
  X   1  |   .24    .18    .06    .12   |  .6
      2  |   .16    .12    .04    .08   |  .4
       ---------------------------------------
         |   .4     .3     .1     .2    |   1
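The completed table can be reproduced mechanically from the marginal distributions. The following Python sketch is a verification aid (not part of the original solution) that rebuilds the joint probabilities using the multiplicative relationship:

```python
from fractions import Fraction as F

# Marginal distributions of X and Y from steps i-iii above.
p_X = {1: F(6, 10), 2: F(4, 10)}
p_Y = {1: F(4, 10), 2: F(3, 10), 3: F(1, 10), 4: F(2, 10)}

# Independence: joint probability is the product of the marginals.
joint = {(x, y): p_X[x] * p_Y[y] for x in p_X for y in p_Y}

for x in p_X:
    print([float(joint[(x, y)]) for y in p_Y])  # [0.24, 0.18, 0.06, 0.12] and [0.16, 0.12, 0.04, 0.08]
print(sum(joint.values()))                      # 1
```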


23. Let V be the value of a $1 investment in the given security two days after the date of initial investment.

a. A $1 investment that gains 50% and then loses 40% will only be worth ($1.00)(1.50)(0.60) = $0.90. Since there is an equal chance of a gain or loss on any given day, this suggests that it is not beneficial to hold the security for 2 days.

b. There are four possible outcomes: gains on both days, losses on both days, a gain followed by a loss, or a loss followed by a gain. Since gains and losses on different days are independent, it follows that


$$V = (1.50)^2 \text{ with probability } \tfrac{1}{4},$$
$$V = (1.50)(0.60) \text{ with probability } \tfrac{1}{2},$$
$$V = (0.60)^2 \text{ with probability } \tfrac{1}{4}.$$

Hence, the probability mass function for $V$ is given by

$$p_V(0.36) = \tfrac{1}{4}, \quad p_V(0.90) = \tfrac{1}{2}, \quad p_V(2.25) = \tfrac{1}{4}, \quad p_V(v) = 0 \text{ otherwise}.$$

c. From the answer to part b, we have

$$\Pr[V > 1] = \tfrac{1}{4}$$

and

$$E[V] = (2.25)\left(\tfrac{1}{4}\right) + (0.90)\left(\tfrac{1}{2}\right) + (0.36)\left(\tfrac{1}{4}\right) = 1.1025.$$

Hence, there is only a 25% chance of our coming out ahead after two days. This agrees with our observation in part a. The fact that $E[V] > 1$ may lead one to believe that the investment is a good one. However, this is only true in certain circumstances, as explained in the next part of the question.

d. From part c, $E[V] = 1.1025 > 1$. Since $E[V]$ represents the average accumulation per investment for a large number of independent investments of the type described, it follows that the investment opportunity is a good one if we can make a large number of independent investments of this type. This is so even though only one quarter of the investments will be profitable ($\Pr[V > 1] = .25$ from part c) because the gains, when they occur, more than make up for the losses on the other three quarters of the investments. Note the importance of the assumption that investment returns on the individual investments are independent: If all of the investments had the same 2-day gain-loss pattern, then it would not be advantageous to invest for the reasons given in part a.
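The probability mass function of $V$ and the quantities in part c can be checked by enumerating the four equally likely gain/loss patterns. The sketch below is our own verification aid (not part of the textbook solution) and uses floating-point arithmetic, so the printed values are approximate:

```python
from itertools import product

# Two-day accumulation of $1: each day the value is multiplied by 1.50 or 0.60,
# each with probability 1/2, independently across days.
factors = [1.50, 0.60]
pmf = {}
for day1, day2 in product(factors, repeat=2):
    v = round(day1 * day2, 4)
    pmf[v] = pmf.get(v, 0) + 0.25

print(pmf)                                      # {2.25: 0.25, 0.9: 0.5, 0.36: 0.25}
print(sum(p for v, p in pmf.items() if v > 1))  # Pr[V > 1] = 0.25
print(sum(v * p for v, p in pmf.items()))       # E[V] ~ 1.1025
```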


Chapter Three Solutions

2. We are given that $\Pr[E] = .3$, $\Pr[F] = .5$, $\Pr[E \mid F] = .4$, and we are required to calculate $\Pr[E \cap F]$, $\Pr[E \cup F]$, and $\Pr[F \mid E]$. From the definition of conditional probability and the given information, we have

$$\Pr[E \cap F] = \Pr[E \mid F]\, \Pr[F] = (.4)(.5) = .2.$$

Hence,

$$\Pr[F \mid E] = \frac{\Pr[E \cap F]}{\Pr[E]} = \frac{.2}{.3} = \frac{2}{3}$$

and

$$\Pr[E \cup F] = \Pr[E] + \Pr[F] - \Pr[E \cap F] = .3 + .5 - .2 = .6.$$
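As a quick check of these three quantities, the following Python sketch (ours, not part of the textbook solution; it uses exact fractions) reproduces the calculation:

```python
from fractions import Fraction as F

# Problem 2: Pr[E] = .3, Pr[F] = .5, Pr[E | F] = .4.
pr_E, pr_F, pr_E_given_F = F(3, 10), F(5, 10), F(4, 10)

pr_E_and_F = pr_E_given_F * pr_F            # Pr[E and F] = 1/5  (= .2)
pr_F_given_E = pr_E_and_F / pr_E            # Pr[F | E]   = 2/3
pr_E_or_F = pr_E + pr_F - pr_E_and_F        # Pr[E or F]  = 3/5  (= .6)

print(pr_E_and_F, pr_F_given_E, pr_E_or_F)  # 1/5 2/3 3/5
```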

8. In this question, we are required to determine estimates for $\Pr[E \cup F]$ and $\Pr[E \cap F]$ when given the values of $\Pr[E]$ and $\Pr[F]$ only.

From Boole's inequality (exercise 6), we know that $\Pr[E \cup F] \le \Pr[E] + \Pr[F]$. We also know from basic properties of probabilities that $\Pr[E \cup F] \ge \Pr[E]$, $\Pr[E \cup F] \ge \Pr[F]$, and $\Pr[E \cup F] \le 1$. Consequently, $\Pr[E \cup F]$ satisfies the inequality

$$\max(\Pr[E], \Pr[F]) \le \Pr[E \cup F] \le \min(1, \Pr[E] + \Pr[F]).$$

This is the strongest statement one can make about $\Pr[E \cup F]$ without having information about the nature of $E \cap F$. The case $E \subset F$ illustrates that the lower bound is best possible and the case $E \cap F = \varnothing$ illustrates that the upper bound is best possible.

We can also derive sharp estimates for $\Pr[E \cap F]$ using Bonferroni's inequality and properties of probabilities. Indeed, from Bonferroni's inequality (exercise 7), we know that $\Pr[E \cap F] \ge \Pr[E] + \Pr[F] - 1$. We also know that $\Pr[E \cap F] \le \Pr[E]$, $\Pr[E \cap F] \le \Pr[F]$, and $\Pr[E \cap F] \ge 0$. Consequently,

$$\max(0, \Pr[E] + \Pr[F] - 1) \le \Pr[E \cap F] \le \min(\Pr[E], \Pr[F]).$$


This is the strongest statement one can make about $\Pr[E \cap F]$ without having information about $E \cup F$. The case $E \subset F$ illustrates that the upper bound is best possible and the case $E \cup F = S$ illustrates that the lower bound is best possible.

We are now ready to answer the question.

a. Suppose that $\Pr[E] = .7$ and $\Pr[F] = .4$. Then the strongest statements we can make about $\Pr[E \cup F]$ and $\Pr[E \cap F]$ are

$$.7 \le \Pr[E \cup F] \le 1$$

and

$$.1 \le \Pr[E \cap F] \le .4.$$

b. Suppose that $\Pr[E] = .6$ and $\Pr[F] = .2$. Then the strongest statements we can make about $\Pr[E \cup F]$ and $\Pr[E \cap F]$ are

$$.6 \le \Pr[E \cup F] \le .8$$

and

$$0 \le \Pr[E \cap F] \le .2.$$

c. Since probabilities are always less than or equal to 1, Boole's inequality provides nontrivial information for $\Pr[E \cup F]$ if and only if $\Pr[E] + \Pr[F] < 1$.

d. Since probabilities are always greater than or equal to 0, Bonferroni's inequality provides nontrivial information for $\Pr[E \cap F]$ if and only if $\Pr[E] + \Pr[F] > 1$.

e. Since it is not possible for both the statements $\Pr[E] + \Pr[F] < 1$ and $\Pr[E] + \Pr[F] > 1$ to be true, it follows from parts c and d that it is not possible for both Boole's inequality and Bonferroni's inequality to provide nontrivial information. Only one of these inequalities can provide nontrivial information for a given pair of events. This was illustrated numerically in parts a and b.
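The bounds used in parts a and b can be packaged as two small functions. The sketch below is our own illustration of the inequalities derived above (not part of the textbook solution), using exact fractions:

```python
from fractions import Fraction as F

# Bounds on Pr[E or F] and Pr[E and F] from Pr[E] and Pr[F] alone
# (Boole's and Bonferroni's inequalities together with monotonicity).
def union_bounds(p_e, p_f):
    return max(p_e, p_f), min(F(1), p_e + p_f)

def intersection_bounds(p_e, p_f):
    return max(F(0), p_e + p_f - 1), min(p_e, p_f)

for p_e, p_f in [(F(7, 10), F(4, 10)), (F(6, 10), F(2, 10))]:
    u_lo, u_hi = union_bounds(p_e, p_f)
    i_lo, i_hi = intersection_bounds(p_e, p_f)
    print(u_lo, u_hi, i_lo, i_hi)
# part a: 7/10 1 1/10 2/5
# part b: 3/5 4/5 0 1/5
```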

11. Let E, B, M be the following events:

E: Employee owns units of the equity fund.

B: Employee owns units of the bond fund.

M: Employee owns units of the money market fund.

We are given the following information: $\Pr[E] = .15$, $\Pr[B] = .28$, $\Pr[M] = .30$, $\Pr[E \cap B] = .08$, $\Pr[E \cap M] = .10$, $\Pr[B \cap M] = .15$, $\Pr[E \cap B \cap M] = .05$. From this information, it is straightforward to construct a Venn diagram using Mathematica or similar computer software.


[Venn diagram for $E$, $B$, and $M$: $E$ only = .02, $B$ only = .10, $M$ only = .10, $E \cap B$ only = .03, $E \cap M$ only = .05, $B \cap M$ only = .10, $E \cap B \cap M$ = .05, outside all three = .55.]

The answers to parts a through f follow directly from this Venn diagram.

a. The percentage of eligible employees currently participating in the pension plan is $\Pr[E \cup B \cup M] = .45$.

b. The percentage of eligible employees currently not participating is $\Pr[(E \cup B \cup M)^c] = .55$.

c. The percentage of participating employees who direct their contributions to a single fund can be calculated by dividing the fraction of all eligible employees who direct their contributions to a single fund by the fraction of all eligible employees who are participating. The fraction of eligible employees who direct their contributions to a single fund is equal to the fraction of eligible employees whose contributions go to the equity fund only plus the fraction whose contributions go to the bond fund only plus the fraction whose contributions go to the money market fund only. From the Venn diagram, it follows that the fraction of eligible employees whose contributions go to a single fund is .02 + .10 + .10 = .22. Consequently, the percentage of participating employees who direct their contributions to a single fund is


$$\frac{.02 + .10 + .10}{.45} = \frac{22}{45}.$$

The desired probability can also be described in probability notation as follows:

$$\Pr[\,(E \setminus (B \cup M)) \cup (B \setminus (E \cup M)) \cup (M \setminus (E \cup B)) \mid E \cup B \cup M\,].$$

From this expression, it should be clear that working with the Venn diagram is the best approach to take to determine the desired probability!

d. The percentage of participating employees who direct their contributions to at least two different funds is simply the complement of the probability determined in part c. Hence, the desired probability is

$$1 - \frac{22}{45} = \frac{23}{45}.$$

e. The fraction of participants with bond shares who also own stock shares is

$$\Pr[E \mid B] = \frac{\Pr[E \cap B]}{\Pr[B]} = \frac{.08}{.28} = \frac{2}{7}.$$

f. The fraction of participants with money market shares who also own stock shares is

$$\Pr[E \mid M] = \frac{\Pr[E \cap M]}{\Pr[M]} = \frac{.10}{.30} = \frac{1}{3}.$$
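The answers read from the Venn diagram can also be obtained directly from the given probabilities by inclusion-exclusion. The following Python sketch is a verification aid (not part of the original solution); the variable names are ours:

```python
from fractions import Fraction as F

# Problem 11: given marginal and intersection probabilities.
p_E, p_B, p_M = F(15, 100), F(28, 100), F(30, 100)
p_EB, p_EM, p_BM, p_EBM = F(8, 100), F(10, 100), F(15, 100), F(5, 100)

# Inclusion-exclusion: probability of participating in at least one fund.
p_union = p_E + p_B + p_M - p_EB - p_EM - p_BM + p_EBM
single_only = ((p_E - p_EB - p_EM + p_EBM)
               + (p_B - p_EB - p_BM + p_EBM)
               + (p_M - p_EM - p_BM + p_EBM))

print(p_union)                # 9/20  (= .45, part a)
print(1 - p_union)            # 11/20 (= .55, part b)
print(single_only / p_union)  # 22/45 (part c)
print(p_EB / p_B)             # 2/7   (part e)
print(p_EM / p_M)             # 1/3   (part f)
```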

14. Let C, I, P be the following events:

C: Worker belongs to a company pension plan.

I: Worker has an IRA.

P: Worker has private savings in excess of $5000.

We are given the following information: $\Pr[C] = .25$, $\Pr[I] = .20$, $\Pr[P] = .30$, $\Pr[C \cap I \cap P] = .05$, $\Pr[(C \cup I \cup P)^c] = .55$, $\Pr[I \mid C] = .60$, $\Pr[I \mid P] = \tfrac{1}{3}$. To construct the Venn diagram for this problem, let $x = \Pr[(C \cap P) \setminus I]$.


[Venn diagram for $C$, $I$, and $P$: $C$ only = $.10 - x$, $I$ only = 0, $P$ only = $.20 - x$, $C \cap I$ only = .10, $C \cap P$ only = $x$, $I \cap P$ only = .05, $C \cap I \cap P$ = .05, outside all three = .55.]

The value of $x$ can be determined from the condition that all the probabilities in this Venn diagram must sum to 1. That is,

$$(.10 - x) + .10 + .05 + x + .05 + (.20 - x) + 0 + .55 = 1.$$

Solving for $x$, we obtain $x = .05$. With this information and the Venn diagram just constructed, we can provide answers to parts a through d.

a. The fraction of people with an IRA who also have private retirement savings in excess of $5000 is, by Bayes' theorem, equal to

$$\Pr[P \mid I] = \frac{\Pr[I \mid P]\, \Pr[P]}{\Pr[I]} = \frac{\left(\tfrac{1}{3}\right)(.30)}{.20} = \frac{1}{2}.$$

b. The fraction of people with an IRA that also belong to a company pension plan is, by Bayes' theorem, equal to


$$\Pr[C \mid I] = \frac{\Pr[I \mid C]\, \Pr[C]}{\Pr[I]} = \frac{(.60)(.25)}{.20} = .75.$$

c. The fraction of people who belong to a company pension plan that have no other retirement savings besides social security is

$$\Pr[C \setminus (I \cup P) \mid C] = \frac{.10 - x}{.25} = \frac{.05}{.25} = \frac{1}{5}.$$

d. The fraction of people with private savings in excess of $5000 that do not participate in a company pension plan is

$$\Pr[C^c \mid P] = \frac{\Pr[C^c \cap P]}{\Pr[P]} = \frac{(.20 - x) + .05}{.30} = \frac{.20}{.30} = \frac{2}{3}.$$
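The unknown $x$ and the answers to parts a through d can also be computed algebraically from the given probabilities. The sketch below is our own check (not part of the textbook solution), using exact fractions; the variable names are ours:

```python
from fractions import Fraction as F

# Problem 14: solve for x = Pr[(C and P) \ I] and reproduce parts a-d.
p_C, p_I, p_P = F(25, 100), F(20, 100), F(30, 100)
p_CIP = F(5, 100)
p_none = F(55, 100)
p_CI = F(60, 100) * p_C      # Pr[I | C] * Pr[C] = .15
p_IP = F(1, 3) * p_P         # Pr[I | P] * Pr[P] = .10

# Inclusion-exclusion: Pr[C or I or P] = 1 - .55 = .45 determines Pr[C and P].
p_CP = p_C + p_I + p_P - p_CI - p_IP + p_CIP - (1 - p_none)
x = p_CP - p_CIP

print(x)                                   # 1/20 (= .05)
print(p_IP / p_I)                          # part a: 1/2
print(p_CI / p_I)                          # part b: 3/4
print((p_C - p_CI - p_CP + p_CIP) / p_C)   # part c: 1/5
print((p_P - p_CP) / p_P)                  # part d: 2/3
```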

17. Let G, B, A, C be the following events:

G: Policyholder classified as a good risk.

B: Policyholder classified as a bad risk.

A: Policyholder classified as an average risk.

C: Policyholder files an accident claim.

We are given the following information: $\Pr[G] = .30$, $\Pr[B] = .20$, $\Pr[A] = .50$, $\Pr[C \mid G] = .05$, $\Pr[C \mid B] = .40$, $\Pr[C \mid A] = .10$.

a. The probability that a randomly chosen customer files an accident claim in the coming year is, by the law of total probability,

$$\Pr[C] = \Pr[C \mid G]\Pr[G] + \Pr[C \mid B]\Pr[B] + \Pr[C \mid A]\Pr[A] = (.05)(.30) + (.40)(.20) + (.10)(.50) = .015 + .08 + .05 = .145.$$

b. Using Bayes' theorem and the answer to part a, the desired probabilities are


$$\Pr[G \mid C] = \frac{\Pr[C \mid G]\, \Pr[G]}{\Pr[C]} = \frac{(.05)(.30)}{.145} = \frac{15}{145} = \frac{3}{29},$$

$$\Pr[B \mid C] = \frac{\Pr[C \mid B]\, \Pr[B]}{\Pr[C]} = \frac{(.40)(.20)}{.145} = \frac{80}{145} = \frac{16}{29},$$

and

$$\Pr[A \mid C] = \frac{\Pr[C \mid A]\, \Pr[A]}{\Pr[C]} = \frac{(.10)(.50)}{.145} = \frac{50}{145} = \frac{10}{29}$$

respectively.

c. Let $x$ be the required value for $\Pr[A]$. Then $\Pr[B] = .70 - x$, $\Pr[G] = .30$, and by the law of total probability,

$$\Pr[C] = \Pr[C \mid G]\Pr[G] + \Pr[C \mid B]\Pr[B] + \Pr[C \mid A]\Pr[A] = (.05)(.30) + (.40)(.70 - x) + (.10)x = .295 - .30 x.$$

Hence $\Pr[C] \le .10$ if and only if $.295 - .30 x \le .10$, i.e., if and only if $x \ge .65$. Consequently, for the company's requirement to be met, at least 65% of the company's customers must be classified as average risks.
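The calculations in parts a through c can be verified with a few lines of Python. The sketch below is our own check (not part of the textbook solution), using exact fractions:

```python
from fractions import Fraction as F

# Problem 17: law of total probability and Bayes' theorem.
p = {"G": F(30, 100), "B": F(20, 100), "A": F(50, 100)}
claim = {"G": F(5, 100), "B": F(40, 100), "A": F(10, 100)}

p_C = sum(claim[k] * p[k] for k in p)
print(p_C)                                   # 29/200 (= .145, part a)
for k in p:
    print(k, claim[k] * p[k] / p_C)          # G 3/29, B 16/29, A 10/29 (part b)

# Part c: Pr[C] = .015 + .40(.70 - x) + .10 x = .295 - .30 x <= .10  iff  x >= .65.
x_min = (F(295, 1000) - F(10, 100)) / F(30, 100)
print(x_min)                                 # 13/20 (= .65)
```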

23. Let P and Q represent the following events:

P: Student passes test.

Q: Student is qualified.

We are given the following information: $\Pr[Q^c] = .20$, $\Pr[P \mid Q] = .85$, $\Pr[P^c \mid Q^c] = .80$.

a. $\Pr[Q^c \mid P]$ represents the fraction of students that pass the test who are unqualified. $\Pr[Q \mid P^c]$ represents the fraction of students that fail the test who are qualified. These quantities represent errors in the testing procedure and should be as small as possible.

b. If the college is primarily concerned with screening unqualified applicants, then minimizing $\Pr[Q^c \mid P]$ is more important than minimizing $\Pr[Q \mid P^c]$.

c. If the college is primarily concerned with reducing the number of qualified students who are denied admission because they failed the test, then minimizing $\Pr[Q \mid P^c]$ is more important than minimizing $\Pr[Q^c \mid P]$. This implicitly assumes that every applicant who passes the test is granted admission, everyone who fails is denied admission, and that the college has the capacity to accommodate whatever number of candidates pass the test.

d. By the law of total probability and the property $\Pr[P \mid Q^c] = 1 - \Pr[P^c \mid Q^c]$, we have


$$\Pr[P] = \Pr[P \mid Q]\Pr[Q] + \Pr[P \mid Q^c]\Pr[Q^c] = (.85)(1 - .20) + (1 - .80)(.20) = .72.$$

Hence, by Bayes' theorem and the properties $\Pr[P \mid Q^c] = 1 - \Pr[P^c \mid Q^c]$ and $\Pr[P^c \mid Q] = 1 - \Pr[P \mid Q]$, we have

$$\Pr[Q^c \mid P] = \frac{\Pr[P \mid Q^c]\Pr[Q^c]}{\Pr[P]} = \frac{(1 - .80)(.20)}{.72} = \frac{1}{18} \approx .06$$

and

$$\Pr[Q \mid P^c] = \frac{\Pr[P^c \mid Q]\Pr[Q]}{\Pr[P^c]} = \frac{(1 - .85)(1 - .20)}{1 - .72} = \frac{3}{7} \approx .43.$$

Consequently, if the goal of the test is to limit the number of unqualified applicants who gain admission, then the test is fairly good because $\Pr[Q^c \mid P]$ is small. However, if the goal is to limit the number of qualified applicants who are denied admission, then the test is not very good because $\Pr[Q \mid P^c]$ is quite large.
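The two error probabilities computed in part d can be checked as follows. The sketch below is a verification aid (not part of the original solution); it uses exact fractions so the answers appear as $18/25$, $1/18$, and $3/7$:

```python
from fractions import Fraction as F

# Problem 23(d): Pr[Q^c] = .20, Pr[P | Q] = .85, Pr[P^c | Q^c] = .80.
p_Qc = F(20, 100)
p_Q = 1 - p_Qc
p_P_given_Q = F(85, 100)
p_P_given_Qc = 1 - F(80, 100)

p_P = p_P_given_Q * p_Q + p_P_given_Qc * p_Qc
p_Qc_given_P = p_P_given_Qc * p_Qc / p_P
p_Q_given_Pc = (1 - p_P_given_Q) * p_Q / (1 - p_P)

print(p_P)            # 18/25 (= .72)
print(p_Qc_given_P)   # 1/18  (about .06)
print(p_Q_given_Pc)   # 3/7   (about .43)
```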

e. The files of students who drop out (i.e., are unqualified) are likely to be kept separate from the files of students who continue. If the files are organized in this way, then it is relatively simple to determine the fraction of drop-outs who failed the initial test and the fraction of continuing students who passed the initial test. Since the files are likely to be organized in this way, it is more likely that the values of $\Pr[P \mid Q]$ and $\Pr[P^c \mid Q^c]$ will be observed than the values of $\Pr[Q^c \mid P]$ and $\Pr[Q \mid P^c]$. In fact, if the test results are used for any sort of screening, then it is not even possible to observe the value of $\Pr[Q \mid P^c]$ because students who fail the test and as a result are denied admission will never get a chance to prove that they are qualified.


Chapter Four Solutions

Section 4.1.13 Exercises

4. The probability masses associated with a mixed distribution occur at the points of discontinuity of the distribution function. From the definition of $F$, it is clear that $F$ has two jump discontinuities: one of size $\tfrac{1}{6}$ at $x = 0$ and the other of size $\tfrac{1}{3}$ at $x = 2$. Between the points $x = 0$ and $x = 2$ there is a continuous distribution of probability given by the density

$$f(x) = F'(x) = \tfrac{1}{4} \text{ for } 0 < x < 2.$$

Note that the amount of probability between 0 and 2 is $\int_0^2 \tfrac{1}{4}\, dx = \tfrac{1}{2}$. When this number is added to the probability masses at $x = 0$ and $x = 2$, we obtain 1, as we should. This describes the form of the generalized density for $X$.

To express $F_X$ as a weighted sum of a continuous distribution function and a discrete distribution function, we first extract a continuous distribution function $F_C$ from $F_X$ by removing the jumps from the graph of $F_X$ and scaling appropriately. The jumps can be removed by subtracting appropriate multiples of the unit step functions at $x = 0$ and $x = 2$. The appropriate scaling factor is determined by the requirement that $\lim_{x \to \infty} F_C(x) = 1$. Following these steps, we obtain

$$F_C(x) = 2\left(F_X(x) - \tfrac{1}{6} u_0(x) - \tfrac{1}{3} u_2(x)\right),$$

where $u_a$ is the unit step function at the point $x = a$, i.e.,


$$u_a(x) = 0 \text{ for } x < a, \qquad u_a(x) = 1 \text{ for } x \ge a.$$

Rearranging this equation for $F_C$, we obtain

$$F_X(x) = \tfrac{1}{2} F_C(x) + \tfrac{1}{6} u_0(x) + \tfrac{1}{3} u_2(x).$$

Hence,

$$F_X(x) = \tfrac{1}{2} F_C(x) + \tfrac{1}{2} F_D(x),$$

where $F_D(x) = \tfrac{1}{3} u_0(x) + \tfrac{2}{3} u_2(x)$. Note that $F_D$ is the distribution function for the discrete distribution with probability masses of $\tfrac{1}{3}$ at $x = 0$ and $\tfrac{2}{3}$ at $x = 2$. Note also that $F_C(x) = \tfrac{1}{2} x$ for $0 \le x < 2$. Consequently, we have shown that $F_X$ can be written as a weighted sum of a continuous distribution function and a discrete distribution function, as required.

7. a. From the definition of the given distribution function, there are five different values for FX1,X2 and these values are assumed on five distinct regions. Hence, the graph of FX1,X2 can be represented in two dimensions using five degrees of shading. This two-dimensional graph can be created using Mathematica or similar computer software.


[Two-dimensional graph of $F_{X_1,X_2}$ on the square $[-1, 3] \times [-1, 3]$, shown with five levels of shading.]

Note that in this graph the more lightly shaded regions are the regions on which the value of $F_{X_1,X_2}$ is greater. Note further that this graph only displays the portion of $F_{X_1,X_2}$ inside the square $[-1, 3] \times [-1, 3]$. However, from the given graph, the nature of $F_{X_1,X_2}$ outside this square should be readily apparent.

b. The two-dimensional graph created in part a suggests that a three-dimensional representation of the given $F_{X_1,X_2}$ will consist of several blocks with rectangular faces. The required three-dimensional graph can be created using Mathematica or similar computer software.


[Three-dimensional graph of $F_{X_1,X_2}$, viewed from the third octant.]

Note that the view point for this picture is in the third octant (i.e., the octant with x1 < 0, x2 < 0, and z > 0) rather than the more customary first octant (i.e., the octant with x1 > 0, x2 > 0, and z > 0). Since FX1,X2 is increasing in both x1 and x2, we get a better visual representation for the graph of FX1,X2 by doing this. Choosing a view point in the third octant also facilitates comparisons with the two-dimensional graph generated in part a and makes the determination of a formula for pX1,X2 in part c simpler.

c. It is relatively straightforward to determine the probability mass function for a discrete univariate random variable from its distribution function. Indeed, the locations and sizes of the probability masses are simply the locations and sizes of the jumps in the graph of the distribution function. However, determining the probability mass function for a discrete bivariate random variable from its distribution function is not quite so simple.

It is still true that the presence of a probability mass results in a jump on the graph of FX1,X2 at the location of the probability mass. The demonstration of this fact in the bivariate case is similar to its demonstration in the univariate case (see Example 1, section 4.1.2 of the textbook). However, it is no longer true that every jump on the graph of FX1,X2 arises in this way: Looking at the graph generated in part b, we can see that this graph has jumps along each of the lines x1 = a for a > 0. If each of these jumps corresponded to a different probability mass, the set of points with non-zero probability mass would be infinite. But then the function FX1,X2 itself would have an infinite number of values rather than the five that it actually does.

If we think a little bit harder about what actually happens to the distribution function at a point where a probability mass is located, we soon realize that at such points a new "block" is created. (Consider the graph in part b.) From this realization, it follows that for the specified $F_{X_1,X_2}$ the only possible locations for probability masses are the points $(0, 0)$, $(0, 1)$, $(1, 0)$, $(1, 1)$ (see the graph in part b). After a little reflection, it becomes apparent that there are non-zero probability masses at each of these points.

The size of the probability mass at $(0, 0)$ is relatively straightforward to determine. From the graph of $F_{X_1,X_2}$, it is $\tfrac{1}{8}$. The size of the probability mass at $(0, 1)$ can be determined by moving along the line $x_1 = 0$ and calculating the size of the jump that occurs when the point $(0, 1)$ is reached. Following this procedure, we find that $p_{X_1,X_2}(0, 1) = \tfrac{3}{8} - \tfrac{1}{8} = \tfrac{1}{4}$. Using a similar approach, we get $p_{X_1,X_2}(1, 0) = \tfrac{1}{4} - \tfrac{1}{8} = \tfrac{1}{8}$. The size of the probability mass at $(1, 1)$ is then determined by the requirement that the probability masses must sum to 1. Hence, $p_{X_1,X_2}(1, 1) = \tfrac{1}{2}$.

To summarize, the probability mass function for the specified distribution is


$$p_{X_1,X_2}(0, 0) = \tfrac{1}{8}, \quad p_{X_1,X_2}(0, 1) = \tfrac{1}{4}, \quad p_{X_1,X_2}(1, 0) = \tfrac{1}{8}, \quad p_{X_1,X_2}(1, 1) = \tfrac{1}{2}.$$

It is straightforward to check that the distribution function corresponding to this probability mass function has the form specified. Hence, by the uniqueness of probability mass functions, this must be the correct definition for $p_{X_1,X_2}$.
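One way to carry out the check just described is to tabulate the distribution function implied by the proposed masses and confirm that it takes exactly the five values of the specified $F_{X_1,X_2}$. The Python sketch below is our own illustration of that check (not part of the textbook solution); the evaluation grid is an arbitrary choice:

```python
from fractions import Fraction as F

# Problem 7(c): recover the joint distribution function from the proposed masses
# and list its distinct values (it should take exactly five values).
pmf = {(0, 0): F(1, 8), (0, 1): F(1, 4), (1, 0): F(1, 8), (1, 1): F(1, 2)}

def cdf(x1, x2):
    return sum(p for (a, b), p in pmf.items() if a <= x1 and b <= x2)

grid = [-0.5, 0, 0.5, 1, 2]
values = sorted({cdf(x1, x2) for x1 in grid for x2 in grid})
print([str(v) for v in values])   # ['0', '1/8', '1/4', '3/8', '1']
```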

d. The graph of the probability mass function specified in part c can be created using Mathematica or similar computer software.


[Three-dimensional graph of $p_{X_1,X_2}$, viewed from the third octant.]

Note that the view point for this graph is in the third octant to facilitate comparisons with part b.

e. The distribution functions of bivariate and univariate distributions have the following similarities:

i. Both are non-decreasing.

ii. Both have values between 0 and 1.

iii. Both have limiting values of 0 and 1 at "extreme" locations.

However, they also have some important differences:

i. Not every jump on the graph of a bivariate distribution function corresponds to a probability mass (see the discussion in the answer to part c).

ii. The function $F_{X_1,X_2}$ need not tend to 1 along every line to infinity. Consider, for example, the line $x_1 = \tfrac{1}{2}$ or the line $x_2 = \tfrac{1}{2}$ on the graph constructed in part b.

iii. It is not generally possible to determine the value of $p_{X_1,X_2}$ by looking at the differences in height between two neighboring planes. Instead, one must consider the relationships among the heights of all neighboring planes at the point where a probability mass is located. As an illustration, consider the point $(1, 1)$ for the $F_{X_1,X_2}$ of this problem (see the answer to part c for details).


9. a. A graph of the region of nonzero probability can be created using Mathematica or similar computer software.


[Graph of the region of nonzero probability in the $(x_1, x_2)$-plane.]

A graph of the density function in three-dimensional space can be created using Mathe-matica or similar computer software.


[Three-dimensional graph of the density $f_{X_1,X_2}$.]

b. Recall that from a graphical perspective, conditional densities are scaled cross-sections (see section 4.1.9). From the graph of the bivariate density fX1,X2 created in part a, the cross-sections parallel to the respective axes define rectangular regions. We can see this more clearly from graphs that highlight the cross-sections:


[Three-dimensional graph of $f_{X_1,X_2}$ with one family of cross-sections highlighted.]


[Three-dimensional graph of $f_{X_1,X_2}$ with the other family of cross-sections highlighted.]

From these two graphs, we can make the following observations:

i. The cross-section defined by $X_1 = x_1$ outlines a rectangle with base length $2 - x_1$ and height $\tfrac{1}{2}$. Hence for each $x_1$, the distribution of $X_2 \mid X_1 = x_1$ is uniform on $(0, 2 - x_1)$, that is,

$$f_{X_2 \mid X_1 = x_1}(x_2) = \frac{1}{2 - x_1} \text{ for } x_2 \in (0, 2 - x_1).$$

ii. The cross-section defined by $X_2 = x_2$ outlines a rectangle with base length $2 - x_2$ and height $\tfrac{1}{2}$. Hence for each $x_2$, the distribution of $X_1 \mid X_2 = x_2$ is uniform on $(0, 2 - x_2)$, that is,


$$f_{X_1 \mid X_2 = x_2}(x_1) = \frac{1}{2 - x_2} \text{ for } x_1 \in (0, 2 - x_2).$$

From the formulas for $f_{X_2 \mid X_1 = x_1}$ and $f_{X_1 \mid X_2 = x_2}$ just determined, it is clear that knowledge of the value assumed by one of the random variables affects the distribution of probability for the other. Consequently, $X_1$ and $X_2$ are not independent.

c. Recall that from a graphical perspective, marginal densities are projections. From the graphs created in part b, we can make the following observations:

i. The cross-section defined by $X_1 = x_1$ outlines a rectangle with base length $2 - x_1$ and height $\tfrac{1}{2}$. Hence the amount of probability projected onto the $x_1$-axis at the point $x_1$ is $\tfrac{1}{2}(2 - x_1)$ (the area of this rectangle). Consequently, the marginal density of $X_1$ is given by $f_{X_1}(x_1) = \tfrac{1}{2}(2 - x_1)$ for $x_1 \in (0, 2)$.

ii. The cross-section defined by $X_2 = x_2$ outlines a rectangle with base length $2 - x_2$ and height $\tfrac{1}{2}$. Hence the amount of probability projected onto the $x_2$-axis at the point $x_2$ is $\tfrac{1}{2}(2 - x_2)$ (the area of this rectangle). Consequently, the marginal density of $X_2$ is given by $f_{X_2}(x_2) = \tfrac{1}{2}(2 - x_2)$ for $x_2 \in (0, 2)$.

It is straightforward to verify that these formulas are correct using the algebraic definitions given in section 4.1.8. Indeed,

$$f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_{X_1,X_2}(x_1, x_2)\, dx_2 = \int_{-\infty}^{0} 0\, dx_2 + \int_{0}^{2 - x_1} \tfrac{1}{2}\, dx_2 + \int_{2 - x_1}^{\infty} 0\, dx_2 = \tfrac{1}{2}(2 - x_1) \text{ for } x_1 \in (0, 2)$$

and similarly,

$$f_{X_2}(x_2) = \tfrac{1}{2}(2 - x_2) \text{ for } x_2 \in (0, 2).$$

d. The formulas for the conditional densities determined in part b can be verified using the algebraic definition of conditional density given in section 4.1.9 and the formulas for the marginal densities determined in part c. Indeed,


$$f_{X_1 \mid X_2 = x_2}(x_1) = \frac{f_{X_1,X_2}(x_1, x_2)}{f_{X_2}(x_2)} = \frac{\tfrac{1}{2}}{\tfrac{1}{2}(2 - x_2)} = \frac{1}{2 - x_2} \text{ for } x_1 \in (0, 2 - x_2) \text{ and } x_2 \in (0, 2).$$

Note that for fixed $x_2 \in (0, 2)$, $f_{X_1,X_2}(x_1, x_2) = 0$ for $x_1 < 0$ or $x_1 > 2 - x_2$. Similarly,

$$f_{X_2 \mid X_1 = x_1}(x_2) = \frac{f_{X_1,X_2}(x_1, x_2)}{f_{X_1}(x_1)} = \frac{\tfrac{1}{2}}{\tfrac{1}{2}(2 - x_1)} = \frac{1}{2 - x_1} \text{ for } x_2 \in (0, 2 - x_1) \text{ and } x_1 \in (0, 2).$$

Note that these formulas only hold for the specified values of x1 and x2.

e. Graphs of the marginal and conditional densities are as follows:

[Graph of the marginal density $f_{X_1}$.]


[Graph of the marginal density $f_{X_2}$.]

[Graph of the conditional density $f_{X_1 \mid X_2 = x_2}$.]


[Graph of the conditional density $f_{X_2 \mid X_1 = x_1}$.]

These graphs are consistent with the graphical interpretations of $f_{X_1}$, $f_{X_2}$, $f_{X_1 \mid X_2}$, and $f_{X_2 \mid X_1}$ considered in part b and part c.

f. Recall that probabilities associated with bivariate distributions can be interpreted as volumes of particular regions under the two-dimensional surface defined by the density function. Since the density function in this exercise assumes the constant value $\tfrac{1}{2}$ on the region of nonzero probability, it follows that the probability $\Pr[X_1 > 2 X_2]$ is equal to $\tfrac{1}{2}$ times the area of the region of nonzero probability defined by $X_1 > 2 X_2$. The latter region is illustrated in the following graph:


0.5 1 1.5 2X1

0.5

1

1.5

2

X2

X2=1ÅÅÅÅÅ2X1

X1+X2=2

H4ÅÅÅÅÅ3,2ÅÅÅÅÅ3L

From basic geometry, the shaded region in this graph has area

(1/2)(4/3)(2/3) + (1/2)(2 - 4/3)(2/3) = 4/9 + 2/9 = 2/3.

Consequently, the desired probability is

Pr[X1 > 2 X2] = (1/2)(2/3) = 1/3.
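The geometric argument above is easy to check numerically. The following Python sketch (offered as an alternative to the Mathematica calculations mentioned elsewhere in this manual; the variable names are illustrative only) samples points uniformly from the triangular region of nonzero probability and estimates Pr[X1 > 2 X2] by the fraction of points lying below the line x2 = x1/2; the estimate should be close to 1/3.

```python
# Monte Carlo check of Pr[X1 > 2 X2] = 1/3 for the uniform density 1/2 on the
# triangle {x1 > 0, x2 > 0, x1 + x2 < 2}.
import numpy as np

rng = np.random.default_rng(seed=2024)

# Rejection sampling: draw points uniformly from the square [0,2] x [0,2]
# and keep those inside the triangle, where the joint density is constant.
x1 = rng.uniform(0, 2, size=2_000_000)
x2 = rng.uniform(0, 2, size=2_000_000)
inside = x1 + x2 < 2
x1, x2 = x1[inside], x2[inside]

estimate = np.mean(x1 > 2 * x2)
print(estimate)              # close to 0.3333
print(abs(estimate - 1/3))   # small sampling error
```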

11. a. By the distributional form of the law of total probability,


f_X(x) = ∫_{-∞}^{∞} f_{X|Λ=λ}(x) f_Λ(λ) dλ = ∫_0^∞ (λ e^{-λx})(4 λ e^{-2λ}) dλ = ∫_0^∞ 4 λ² e^{-λ(x+2)} dλ.

Using integration by parts twice, we have

∫_0^∞ 4 λ² e^{-λ(x+2)} dλ = [4 λ² e^{-λ(x+2)} / (-(x+2))]_{λ=0}^{∞} - ∫_0^∞ 8 λ e^{-λ(x+2)} / (-(x+2)) dλ
= 0 + (8/(x+2)) ∫_0^∞ λ e^{-λ(x+2)} dλ
= (8/(x+2)) { [λ e^{-λ(x+2)} / (-(x+2))]_{λ=0}^{∞} - ∫_0^∞ e^{-λ(x+2)} / (-(x+2)) dλ }
= (8/(x+2)) { 0 + (1/(x+2)) ∫_0^∞ e^{-λ(x+2)} dλ } = 8/(x+2)³.

So

f_X(x) = 8/(x+2)³ for x > 0.

Using this formula, we have

Pr[X > 2] = ∫_2^∞ 8 (x+2)^{-3} dx = [-4 (x+2)^{-2}]_2^∞ = 1/4.
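As a sanity check, the mixture density and the tail probability can be recomputed numerically. The sketch below (Python/SciPy in place of Mathematica; the function names are illustrative) recovers f_X directly from the mixture integral, then confirms that f_X integrates to 1 and that Pr[X > 2] = 1/4.

```python
# Numerical check of f_X(x) = 8/(x+2)^3 and Pr[X > 2] = 1/4.
import numpy as np
from scipy import integrate

def f_X(x):
    return 8.0 / (x + 2.0) ** 3

def mixture_density(x):
    # Recompute f_X from the mixture: integrate (lam e^{-lam x}) * f_Lambda(lam) over lam.
    integrand = lambda lam: (lam * np.exp(-lam * x)) * (4 * lam * np.exp(-2 * lam))
    value, _ = integrate.quad(integrand, 0, np.inf)
    return value

print(mixture_density(1.0), f_X(1.0))   # both approximately 8/27
total, _ = integrate.quad(f_X, 0, np.inf)
tail, _ = integrate.quad(f_X, 2, np.inf)
print(total)   # approximately 1
print(tail)    # approximately 0.25
```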

b. If a claim of size two is received, then the insurer's belief about the true value of λ going forward is captured by the distribution of Λ | X = 2. Using the distributional form of Bayes' theorem we have

f_{Λ|X=2}(λ) = f_{X|Λ=λ}(2) f_Λ(λ) / f_X(2).

From the given information,

f_{X|Λ=λ}(2) = λ e^{-λx} |_{x=2} = λ e^{-2λ},
f_Λ(λ) = 4 λ e^{-2λ}.

Further, from part a,

f_X(2) = 8 (x+2)^{-3} |_{x=2} = 1/8.

Consequently, the density of Λ | X = 2 is given by

f_{Λ|X=2}(λ) = (λ e^{-2λ})(4 λ e^{-2λ}) / (1/8) = 32 λ² e^{-4λ} for λ > 0.

This density encapsulates the insurer's belief about λ going forward.

‡ Section 4.2.4 Exercises

3. The expectation of a function of a mixed random variable can be calculated by considering sums over the discrete part and integrals over the continuous part (see section 4.2.1 for details). Hence

E[|X + 1|] = |x + 1| |_{x=-2} · (1/4) + |x + 1| |_{x=2} · (1/4) + ∫_{-1}^{0} |x + 1| · (1 + x)/2 dx + ∫_{0}^{1} |x + 1| · (1 - x)/2 dx
= 1/4 + 3/4 + (1/2) ∫_{-1}^{0} (x + 1)² dx + (1/2) ∫_{0}^{1} (1 - x²) dx = 1 + 1/6 + 1/3 = 3/2.

Note that |x + 1| = x + 1 for x ≥ -1.

‡ Section 4.3.3 Exercises

3. One of the important properties of the moment generating function for a random variable X is that it characterizes the distribution of X, i.e., there is one and only one moment generating function associated with each probability distribution (see section 4.3.1). Hence, if we can construct a probability distribution whose moment generating function is the one given in this exercise, then that distribution must be the distribution of X.

The presence in M_X of 1/(1 - t), which is the moment generating function for an exponential distribution with parameter λ = 1 (see Example 6 of section 4.3.1 and Example 2 of section 4.2.1), suggests that the distribution of X contains an "exponential component". At the same time, the presence of the term 1/4 suggests that there is a probability mass of size 1/4 at x = 0. Taken together, these observations suggest that X has a mixed distribution with a discrete probability mass at x = 0 and a continuous exponential part on the interval x > 0. As an initial guess, consider the distribution for a random variable Y with probability mass 1/4 at y = 0 and continuous distribution on y > 0 given by

f_Y(y) = (3/4) e^{-y}, y > 0.

From the definition of moment generating function and the formula for calculating the expectation of a mixed random variable (see sections 4.3.1 and 4.2.1), we have

M_Y(t) = E[e^{tY}] = e^{0} · (1/4) + ∫_0^∞ e^{ty} · (3/4) e^{-y} dy = 1/4 + (3/4) · 1/(1 - t), t < 1,

which is identical to the moment generating function given. Hence, by the uniqueness of moment generating functions, it follows that the mass-density function for X is given by

p_X(0) = 1/4,  f_X(x) = (3/4) e^{-x}, x > 0.

From this, it follows that the distribution function of X is

F_X(x) = Pr[X ≤ x] = 1/4 + (3/4)(1 - e^{-x}) for x ≥ 0, and 0 otherwise.

Hence

F_X(x) = 1 - (3/4) e^{-x} for x ≥ 0, and 0 otherwise.

The graphs of f_X and F_X can be created using Mathematica or similar computer software.

[Graph: density f_X(x) = (3/4) e^{-x} for x > 0, with a probability mass of size 1/4 at x = 0]

[Graph: distribution function F_X(x) = 1 - (3/4) e^{-x} for x ≥ 0]
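For readers without access to Mathematica, a minimal matplotlib sketch of the same two graphs is given below; the plotting choices (range, figure layout) are illustrative only.

```python
# Plot the continuous part of f_X and the distribution function F_X for the
# mixed distribution identified above (mass 1/4 at 0, density (3/4)e^{-x} on x > 0).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 5, 400)
f = 0.75 * np.exp(-x)          # continuous part of the density on x > 0
F = 1 - 0.75 * np.exp(-x)      # distribution function for x >= 0

fig, axes = plt.subplots(1, 2, figsize=(9, 3))
axes[0].plot(x, f)
axes[0].set_title("f_X (continuous part); mass 1/4 at x = 0")
axes[1].plot(x, F)
axes[1].set_ylim(0, 1)
axes[1].set_title("F_X (jump of size 1/4 at x = 0)")
plt.tight_layout()
plt.show()
```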

6. From the formulas given in section 4.3.1 (and derived in exercise 5), the mean, variance, and skewness can be determined from either the moment generating function or the cumulant generating function. For the distributions of this question, we will use whichever approach is simpler from a computational viewpoint. This means considering the cumulant generating function for parts a, d, and e, and the moment generating function for parts b and c. Note, however, that in each part the mean, variance, and skewness can be determined using either approach.

a. Since M_X(t) = (1 - t)^{-1}, we have ψ_X(t) = -log(1 - t).

Differentiating ψ_X successively, we have

ψ_X'(t) = (1 - t)^{-1},
ψ_X''(t) = (1 - t)^{-2},
ψ_X^{(3)}(t) = 2 (1 - t)^{-3}.

Hence

μ_X = ψ_X'(0) = 1,
σ_X² = ψ_X''(0) = 1,
γ_X = ψ_X^{(3)}(0) / (ψ_X''(0))^{3/2} = 2/1 = 2.

b. Since M_X(t) = (1/2) e^{t} + (1/2) e^{-2t}, we have

M_X'(t) = (1/2) e^{t} - e^{-2t},
M_X''(t) = (1/2) e^{t} + 2 e^{-2t},
M_X^{(3)}(t) = (1/2) e^{t} - 4 e^{-2t}.

Hence

E[X] = M_X'(0) = -1/2,
E[X²] = M_X''(0) = 5/2,
E[X³] = M_X^{(3)}(0) = -7/2.

Consequently,

μ_X = -1/2,
σ_X² = E[X²] - E[X]² = 5/2 - (-1/2)² = 9/4,

and

γ_X = (E[X³] - 3 E[X²] E[X] + 2 E[X]³) / σ_X³ = [(-7/2) - 3 (5/2)(-1/2) + 2 (-1/2)³] / (9/4)^{3/2} = 0.

Note that it would be much more complicated to use the cumulant generating function here since the expression for ψ_X^{(k)} cannot be simplified to any great extent.

9. Recall that for nonnegative random variables X, the expected value can be calculated using the following formula:

E[X] = ∫_0^∞ S_X(x) dx

(see section 4.3.2). Hence for the random variable X with survival function S_X(x) = (1 + x)^{-2}, x ≥ 0, we have

E[X] = ∫_0^∞ (1 + x)^{-2} dx = [-(1 + x)^{-1}]_0^∞ = 1.

The variance of this particular Pareto distribution is actually infinite. To see this, note that

f_X(x) = -S_X'(x) = 2 (1 + x)^{-3}

and

E[X²] = ∫_0^∞ x² f_X(x) dx = ∫_0^∞ 2 x² (1 + x)^{-3} dx.

However, from the relationship 2 x² ≥ (1/2)(1 + x)², which holds for x ≥ 1, we have

∫_1^∞ 2 x² (1 + x)^{-3} dx ≥ (1/2) ∫_1^∞ (1 + x)^{-1} dx = (1/2) [log(1 + x)]_1^∞ = ∞.

Consequently E[X²] = ∞ and so Var(X) = ∞ as well.


Chapter Five Solutions

1. Recall that the probability mass function of the binomial distribution is given by

p_X(x) = C(n, x) p^x (1 - p)^{n-x}, x = 0, 1, ..., n

and the mean and variance are

μ_X = n p,  σ_X² = n p (1 - p).

Using the general formulas

E[(X - μ_X)³] = E[X³] - 3 E[X²] E[X] + 2 E[X]³,
M_X^{(k)}(0) = E[X^k]

and the formula for the moment generating function of a binomial random variable, one can show that the skewness of a binomial random variable is given by

γ_X = (1 - 2p) / (n p (1 - p))^{1/2}.

From these formulas, one can give the qualitative descriptions requested in part a.

a. Suppose first that n is fixed. Then as p increases from 0 to 1, the distribution of probability moves from being concentrated around the point x = 0 to being concentrated around the point x = n. In the limiting cases p = 0 and p = 1, the distribution reduces to a point mass. From the formula for γ_X, it follows that the distribution is positively skewed when p < 1/2, negatively skewed when p > 1/2, and has zero skew when p = 1/2. From the formula for p_X, it follows that the distribution obtained by replacing p with 1 - p is the reflection of the given binomial distribution in the line x = n/2 and is itself a binomial distribution. Indeed,

p_X(n/2 - k) = n! / [(n/2 - k)! (n/2 + k)!] · p^{n/2 - k} (1 - p)^{n/2 + k}.

Hence a binomial distribution is symmetric if and only if p = 1/2.

From the formula for σ_X², it follows that the binomial distribution with the greatest variance for a given n is the one with p = 1/2, and the variance equals 0 when p = 0 or p = 1 (the cases in which the distribution reduces to a point mass). It also follows that the variance of a binomial distribution is invariant with respect to interchanging p and 1 - p, which makes sense since the distributions with p and 1 - p interchanged are mirror images of one another, as noted earlier. From the formula for μ_X, it is clear that the mean of a binomial distribution is directly proportional to p.

Now suppose that p is fixed and is not equal to 0 or 1. Then as n increases, γ_X approaches 0. Indeed,

γ_X = (1 - 2p) / (n p (1 - p))^{1/2} → 0 as n → ∞.

Hence for any fixed p, the distribution becomes more symmetric as n → ∞. Moreover, by considering graphs of p_X with the same p and various n, it is apparent that the distribution becomes more "bell-shaped" as n → ∞. (This can be proved directly from the formula for p_X, but students are not expected to furnish such a proof at this point in the book.) For fixed p, the mean and variance are both directly proportional to n. Hence although the distribution becomes more bell-shaped as n → ∞, it is also true that the distribution's variance increases without bound as n → ∞.

b. From the formula for p_X, it follows that

p_X(x + 1) / p_X(x) = [(n - x)/(x + 1)] · [p/(1 - p)] = [(n + 1)/(x + 1) - 1] · [p/(1 - p)]

for x = 0, 1, ..., n - 1. Hence, the ratio p_X(x + 1)/p_X(x) is decreasing in x. Consequently, to show that p_X first increases and then decreases it suffices to show that p_X(1)/p_X(0) > 1 and p_X(n)/p_X(n - 1) < 1. Now

p_X(1)/p_X(0) = n p / (1 - p)

and

p_X(n)/p_X(n - 1) = (1/n) · p/(1 - p).

Hence

p_X(1)/p_X(0) > 1 ⟺ n p > 1 - p ⟺ p > 1/(n + 1)

and

p_X(n)/p_X(n - 1) < 1 ⟺ p < 1 - 1/(n + 1).

Therefore, if n and p are such that 1/(n + 1) < p < 1 - 1/(n + 1), then the graph of p_X first increases and then decreases. On the other hand, if p ≤ 1/(n + 1) then p_X(1)/p_X(0) ≤ 1, in which case p_X(x + 1)/p_X(x) ≤ 1 for all x = 0, 1, ..., n - 1 and the graph of p_X is always decreasing, whereas if p ≥ n/(n + 1) then p_X(n)/p_X(n - 1) ≥ 1, in which case p_X(x + 1)/p_X(x) ≥ 1 for all x = 0, 1, ..., n - 1 and the graph of p_X is always increasing.

c. From the answer to part b, the ratio p_X(x + 1)/p_X(x) is decreasing. Hence to determine the modes we need only determine the integers m for which

p_X(m + 1)/p_X(m) ≤ 1 and p_X(m)/p_X(m - 1) ≥ 1.

Now

p_X(x + 1)/p_X(x) ≤ 1 ⟺ [(n - x)/(x + 1)] · [p/(1 - p)] ≤ 1 ⟺ x ≥ (n + 1) p - 1

and

p_X(x)/p_X(x - 1) ≥ 1 ⟺ [(n - x + 1)/x] · [p/(1 - p)] ≥ 1 ⟺ x ≤ (n + 1) p.

Consequently the modes are the integers m such that

(n + 1) p - 1 ≤ m ≤ (n + 1) p.

Therefore, when (n + 1) p is an integer there are two modes, one at (n + 1) p and one at (n + 1) p - 1. Otherwise, the distribution has only one mode.
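The mode characterization just derived is easy to verify for particular parameter values. The short SciPy sketch below (an illustration, not part of the exercise) locates the maximizers of the binomial probability mass function and compares them with (n + 1)p; the case (n, p) = (7, 3/8) is chosen so that (n + 1)p = 3 is an integer and two modes appear.

```python
# Check that the mode(s) of Binomial(n, p) lie between (n+1)p - 1 and (n+1)p.
import numpy as np
from scipy.stats import binom

for n, p in [(10, 0.3), (20, 0.5), (7, 3/8)]:
    pmf = binom.pmf(np.arange(n + 1), n, p)
    modes = np.flatnonzero(np.isclose(pmf, pmf.max()))
    print(n, p, (n + 1) * p, modes)   # (7, 3/8) yields the two modes 2 and 3
```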

Comment on Exercises 7 and 8: The objective of exercises 7 and 8 is to introduce students to the important topic of distribution fitting. In many practical problems, one is given a set of data rather than an explicit formula for a distribution function and one must make inferences on the basis of this data alone. One approach to take in such situations is to fit the data to a distribution with known characteristics and use the fitted distribution as the probability model. This approach is known as parametric modeling and is illustrated in its simplest form in exercises 7 and 8. There are two questions that arise naturally in parametric modeling:

i. Which family of distributions (e.g., Poisson, binomial, negative binomial, etc) models the data best?

ii. Which distribution in the selected family (i.e., which values of the parameters in the selected family) models the data best?

Students will not yet have the background to answer either of these questions adequately (unless they have already taken a course in statistical estimation, which usually follows a course in probability). Exercises 7 and 8 provide an introduction to these important statistical questions in a relatively simple setting.

7. Let N be the number of claims associated with a randomly chosen policy.

a. Let p̂_N denote the relative frequency function for the given data set. (In statistics, this is also referred to as the frequency function for the empirical distribution.) Then p̂_N is given by

p̂_N(0) = .122, p̂_N(1) = .188, p̂_N(2) = .188, p̂_N(3) = .156, p̂_N(4) = .117, p̂_N(5) = .082,
p̂_N(6) = .055, p̂_N(7) = .035, p̂_N(8) = .022, p̂_N(9) = .013, p̂_N(10) = .022.

b. A bar chart for p̂_N can be created using Mathematica or similar computer software.

[Bar chart: empirical distribution of N, relative frequencies p̂_N(n) for n = 0, 1, ..., 10]

Note that the distribution is discrete and positively skewed. Based on the distributions studied in chapter 5, this suggests that possible models include the Poisson, negative binomial, or binomial with parameter p small.

c. From part a, the implied mean is

(0)(.122) + (1)(.188) + (2)(.188) + ... + (9)(.013) + (10)(.022) = 2.998

and the implied second moment is

(0²)(.122) + (1²)(.188) + (2²)(.188) + ... + (9²)(.013) + (10²)(.022) = 14.622.

Hence the implied variance is

14.622 - (2.998)² = 5.633996.

To one decimal place, the implied mean and variance are 3.0 and 5.6 respectively. From these calculations it appears that E[N] < Var(N). Since negative binomial distributions have this property (i.e., mean less than variance) but Poisson and binomial distributions do not, a negative binomial model would appear to be the best among these three choices.
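These implied moments, and the method-of-moments parameters used in part d below, can be reproduced with a few lines of Python (a sketch in place of Mathematica; the array names are illustrative only).

```python
# Implied mean/variance of the empirical distribution and the moment-matched
# negative binomial parameters (using the rounded values 3.0 and 5.6).
import numpy as np

n_values = np.arange(11)
rel_freq = np.array([.122, .188, .188, .156, .117, .082,
                     .055, .035, .022, .013, .022])

mean = np.sum(n_values * rel_freq)               # 2.998
second_moment = np.sum(n_values**2 * rel_freq)   # 14.622
variance = second_moment - mean**2               # about 5.634
print(mean, second_moment, variance)

m, v = 3.0, 5.6
p_hat = m / v                       # 15/28
r_hat = m * p_hat / (1 - p_hat)     # 45/13
print(r_hat, p_hat)
```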


d. The simplest way to fit a distribution to data is to match moments. Since the negative binomial distribution has two parameters, this means equating the first and second moments. We are instructed to fit the distribution by equating means and variances. Since the mean, variance, and second moment are related by the general formula Var(X) = E[X²] - E[X]², this approach will give us the same result, ignoring rounding errors. Note that moment matching is the simplest but by no means the only way to fit a distribution to data. Students will encounter other techniques when they take a course in statistical estimation.

From section 5.3, we know that the mean and variance of a negative binomial distribution with parameters r and p are

E[N] = r(1 - p)/p

and

Var(N) = r(1 - p)/p²

respectively. In part c we determined the implied mean and variance for the given data set to be 3.0 and 5.6 to one decimal place respectively. Equating these respective values we have

r(1 - p)/p = 3.0

and

r(1 - p)/p² = 5.6.

This system is solved most easily by dividing the first equation by the second and then substituting the resulting value of p into the first equation to determine r. When we do this, we obtain

r = 45/13, p = 15/28.

Let p_N denote the probability mass function for the negative binomial distribution with parameters r = 45/13 and p = 15/28. Then

p_N(n) = [Γ(n + 45/13) / (Γ(45/13) Γ(n + 1))] · (15/28)^{45/13} (13/28)^n for n = 0, 1, 2, ... .

Note that the more general form of the negative binomial probability mass function must be used here since the estimated value of r is not an integer. Approximate numerical values of p_N(n) for n = 0, 1, 2, ..., 10 can be easily determined using Mathematica or similar computer software.

n     p_N(n)
0     0.115264
1     0.185245
2     0.191861
3     0.162168
4     0.121626
5     0.0842695
6     0.0551765
7     0.034626
8     0.021023
9     0.0124302
10    0.00719178
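These values can be reproduced with SciPy (a sketch in place of Mathematica). SciPy's nbinom uses the same (r, p) parameterization as the text, with the probability mass function supported on the number of failures n = 0, 1, 2, ..., and it accepts non-integer r.

```python
# Reproduce the negative binomial table with r = 45/13 and p = 15/28.
import numpy as np
from scipy.stats import nbinom

r, p = 45/13, 15/28
for n, prob in zip(range(11), nbinom.pmf(np.arange(11), r, p)):
    print(n, round(prob, 6))
# 0 0.115264, 1 0.185245, 2 0.191861, ..., 10 0.007192
```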

Comparing these numbers to the relative frequency function determined in part a, it appears that the negative binomial distribution with r = 45/13 and p = 15/28 is a reasonable fit.

e. We computed the implied mean and variance in part c by assuming that all probability at values greater than or equal to 10 is concentrated at the value 10. The effect of this assumption is to underestimate the true mean and variance of the distribution. To compensate for this, we could round up the implied mean and variance before equating them to the negative binomial distribution mean and variance formulas. For example, if we estimate the implied mean and variance as 3 and 6 respectively, then the parameter values for the corresponding negative binomial distribution are r = 3 and p = .5 and the probability mass function for this distribution is

p_N(n) = C(n + 2, 2) (1/2)^{n+3} for n = 0, 1, 2, ... .

Approximate numerical values for this particular set of p_N(n) for n = 0, 1, 2, ..., 10 can be determined using Mathematica or similar software.

n     p_N(n)
0     0.125
1     0.1875
2     0.1875
3     0.15625
4     0.117188
5     0.0820313
6     0.0546875
7     0.0351563
8     0.0219727
9     0.0134277
10    0.00805664

Comparing these values to the corresponding values for the negative binomial distribution with r = 45/13 and p = 15/28, we see that the fit when r = 3 and p = .5 appears to be slightly better.

f. The desired probability is Pr[N > 2] = 1 - Pr[N = 0] - Pr[N = 1] - Pr[N = 2]. According to the model constructed in part d, i.e., N ~ NegativeBinomial(45/13, 15/28),

Pr[N = 0] ≈ .115264, Pr[N = 1] ≈ .185245, Pr[N = 2] ≈ .191861.

Hence

Pr[N > 2] ≈ 1 - .115264 - .185245 - .191861 = .50763.

If a negative binomial model with parameters r = 3 and p = .5 is used instead (see the answer to part e), then

Pr[N = n] = C(n + 2, n) (1/2)^{n+3} for n = 0, 1, 2, ...

and the required probability is

Pr[N > 2] = 1 - (1/2)³ - 3 (1/2)⁴ - 6 (1/2)⁵ = 1/2.

12. In each part of this question, one must first recognize the given moment generating function as the moment generating function of a particular special distribution. Then, using the uniqueness property of the moment generating function and properties of the identified special distribution, it is straightforward to determine E[X], Var(X), Pr[X > 1], and Pr[X = 2].

a. X ~ Binomial(10, .25). Hence

E[X] = (10)(.25) = 2.5,
Var(X) = (10)(.25)(.75) = 1.875,
Pr[X > 1] = 1 - Pr[X = 0] - Pr[X = 1] = 1 - C(10, 0)(.75)^{10} - C(10, 1)(.25)(.75)⁹ = 1 - (3.25)(.75)⁹ ≈ .75597477,
Pr[X = 2] = C(10, 2)(.25)²(.75)⁸ ≈ .28156757.

b. X ~ NegativeBinomial(3, .25). Hence

E[X] = 3(.75)/.25 = 9,
Var(X) = 3(.75)/(.25)² = 36,
Pr[X > 1] = 1 - Pr[X = 0] - Pr[X = 1] = 1 - C(2, 2)(.25)³ - C(3, 2)(.25)³(.75) = 1 - (3.25)(.25)³ = .94921875,
Pr[X = 2] = C(4, 2)(.25)³(.75)² = (3/8)³.

c. X ~ Poisson(2). Hence

E[X] = 2,
Var(X) = 2,
Pr[X > 1] = 1 - Pr[X = 0] - Pr[X = 1] = 1 - (2⁰ e^{-2})/0! - (2¹ e^{-2})/1! = 1 - 3 e^{-2} ≈ .59399415,
Pr[X = 2] = (2² e^{-2})/2! = 2 e^{-2} ≈ .27067057.
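All three sets of answers can be confirmed with a few SciPy calls (a sketch in place of Mathematica); recall that SciPy's nbinom counts failures before the r-th success, which matches the convention used above.

```python
# Verify the answers to exercise 12 a, b, and c.
from scipy.stats import binom, nbinom, poisson

# a. X ~ Binomial(10, .25)
print(binom.mean(10, .25), binom.var(10, .25))        # 2.5, 1.875
print(binom.sf(1, 10, .25), binom.pmf(2, 10, .25))    # ~0.75597, ~0.28157

# b. X ~ NegativeBinomial(3, .25)
print(nbinom.mean(3, .25), nbinom.var(3, .25))        # 9.0, 36.0
print(nbinom.sf(1, 3, .25), nbinom.pmf(2, 3, .25))    # ~0.94922, ~0.05273 = (3/8)^3

# c. X ~ Poisson(2)
print(poisson.mean(2), poisson.var(2))                # 2.0, 2.0
print(poisson.sf(1, 2), poisson.pmf(2, 2))            # ~0.59399, ~0.27067
```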

16. Let X be the number of delinquencies in the first month and let Λ be the expected number of delinquencies per month. Then from the given information, a reasonable model for X is (X | Λ = λ) ~ Poisson(λ) where Λ has the density

f_Λ(λ) = (0.02)² λ e^{-0.02 λ} for λ > 0.

By the law of total probability,

Pr[X = x] = ∫_0^∞ (λ^x e^{-λ}/x!) f_Λ(λ) dλ = ∫_0^∞ (0.02)² λ^{x+1} e^{-1.02 λ}/x! dλ.

Using integration by parts we have

∫_0^∞ λ^{x+1} e^{-1.02 λ} dλ = [λ^{x+1} e^{-1.02 λ}/(-1.02)]_0^∞ - ∫_0^∞ (x + 1) λ^x e^{-1.02 λ}/(-1.02) dλ = [(x + 1)/1.02] ∫_0^∞ λ^x e^{-1.02 λ} dλ.

By repeated application of this formula we obtain

∫_0^∞ λ^{x+1} e^{-1.02 λ} dλ = [(x + 1)!/(1.02)^{x+1}] ∫_0^∞ e^{-1.02 λ} dλ = (x + 1)!/(1.02)^{x+2}.

Hence

Pr[X = x] = ∫_0^∞ (0.02)² λ^{x+1} e^{-1.02 λ}/x! dλ = [(0.02)²/x!] · (x + 1)!/(1.02)^{x+2} = (x + 1)(0.02/1.02)²(1/1.02)^x = (x + 1)(1/51)²(50/51)^x.

Therefore, the probability that there are fewer than 50 delinquencies in the first month is

Pr[X < 50] = Σ_{x=0}^{49} (x + 1)(1/51)²(50/51)^x.

We could evaluate this sum using a computer. However, there is a more elegant way, which we now describe.

Consider the quantity defined by

g(r) = Σ_{j=0}^{n} r^j.

Note that from the formula for the sum of a finite geometric series,

g(r) = (1 - r^{n+1})/(1 - r).

Hence

g'(r) = Σ_{j=1}^{n} j r^{j-1}

and also

g'(r) = [-(n + 1) r^n (1 - r) - (1 - r^{n+1})(-1)]/(1 - r)² = -(n + 1) r^n/(1 - r) + (1 - r^{n+1})/(1 - r)².

Equating these two expressions for g'(r) we obtain

Σ_{j=1}^{n} j r^{j-1} = (1 - r^{n+1})/(1 - r)² - (n + 1) r^n/(1 - r).

Putting r = 50/51 and n = 50 into this equation we have

Σ_{j=1}^{50} j (50/51)^{j-1} = [1 - (50/51)^{51}]/(1/51)² - (51)(50/51)^{50}/(1/51),

that is,

Σ_{j=1}^{50} j (50/51)^{j-1} = 51² [1 - (50/51)^{51} - (50/51)^{50}].

By changing the index of summation we also have

Σ_{j=1}^{50} j (50/51)^{j-1} = Σ_{x=0}^{49} (x + 1)(50/51)^x.

Consequently,

Σ_{x=0}^{49} (x + 1)(50/51)^x = 51² [1 - (50/51)^{51} - (50/51)^{50}].

It follows that the desired probability is

Pr[X < 50] = Σ_{x=0}^{49} (x + 1)(1/51)²(50/51)^x = 1 - (50/51)^{51} - (50/51)^{50} ≈ .26422909.

Comment: The alert reader may have noticed that Λ ~ Gamma(2, 0.02). Using the fact that

(X | Λ = λ) ~ Poisson(λ) and Λ ~ Gamma(r, α) ⟹ X ~ NegativeBinomial(r, α/(α + 1)),

it then follows that X ~ NegativeBinomial(2, 1/51). Hence the probability mass function of X is

p_X(x) = C(x + 1, 1)(1/51)²(50/51)^x for x = 0, 1, ...,

which is precisely the formula derived earlier.
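The direct sum, the closed-form expression, and the negative binomial identification all give the same number, which is easy to confirm with a short Python sketch (offered in place of Mathematica).

```python
# Three equivalent evaluations of Pr[X < 50] for exercise 16.
import numpy as np
from scipy.stats import nbinom

x = np.arange(50)
direct = np.sum((x + 1) * (1/51)**2 * (50/51)**x)
closed_form = 1 - (50/51)**51 - (50/51)**50
print(direct, closed_form)             # both approximately 0.264229
print(nbinom.cdf(49, 2, 1/51))         # same value: X ~ NegativeBinomial(2, 1/51)
```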

20. Let X1 be the number of claims submitted in a month for a group known to be a low utilizer, let X2 be the number of claims submitted in a month for a group known to be a high utilizer, and let N be the number of claims submitted in a month for the group under consideration. Let C be defined as follows:

C = 1 if the given group is a low utilizer, 2 if the given group is a high utilizer.

Then from the given information, Pr[C = 1] = .70, Pr[C = 2] = .30, X1 ~ Poisson(20) and X2 ~ Poisson(50). We are interested in the probability that the given group submits fewer than 20 claims in the first month. By the law of total probability, this is

Pr[N < 20] = Pr[N < 20 | C = 1] Pr[C = 1] + Pr[N < 20 | C = 2] Pr[C = 2]
= Pr[X1 < 20] Pr[C = 1] + Pr[X2 < 20] Pr[C = 2] = (.70) Pr[X1 < 20] + (.30) Pr[X2 < 20].

Since X1 ~ Poisson(20) and X2 ~ Poisson(50), we have

Pr[X1 < 20] = Σ_{x=0}^{19} 20^x e^{-20}/x!

and

Pr[X2 < 20] = Σ_{x=0}^{19} 50^x e^{-50}/x!.

These sums can be evaluated numerically using Mathematica or similar computer software. We find that

Σ_{x=0}^{19} 20^x e^{-20}/x! ≈ 0.470257

and

Σ_{x=0}^{19} 50^x e^{-50}/x! ≈ 4.79136 × 10^{-7}.

Consequently, the desired probability is

Pr[N < 20] = (.70) Pr[X1 < 20] + (.30) Pr[X2 < 20] ≈ (.70)(.470257) + (.30)(4.79136 × 10^{-7}) ≈ .32918 ≈ 33%.
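The two Poisson tail sums and the final mixture probability can be obtained directly from SciPy (a sketch in place of Mathematica).

```python
# Exercise 20: probability of fewer than 20 claims for the mixed group.
from scipy.stats import poisson

p_low = poisson.cdf(19, 20)     # Pr[X1 < 20], approximately 0.470257
p_high = poisson.cdf(19, 50)    # Pr[X2 < 20], approximately 4.79e-07
print(p_low, p_high)
print(0.70 * p_low + 0.30 * p_high)   # approximately 0.329, i.e. about 33%
```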

24. Let X be the number of policyholders that file at least one claim during the first year and let P be the probability that a given policyholder files at least one claim. Then, assuming claims are independent, (X | P = p) ~ Binomial(100, p). We are given that the density of P is

f_P(p) = 3(1 - p)², 0 < p < 1.

Hence by Bayes' theorem,

f_{P|X=x}(p) = f_{X|P=p}(x) f_P(p) / f_X(x) = [C(100, x) p^x (1 - p)^{100-x} · 3(1 - p)²] / f_X(x) ∝ p^x (1 - p)^{102-x}

where the terms not containing p have been omitted from the proportionality. The proportionality constant can be determined in principle from the condition ∫_0^1 f_{P|X=x}(p) dp = 1.

Since there are no claims filed in the first year, the event of interest is X = 0. In this case, the proportionality constant is relatively simple to determine. From the formula for f_{P|X=x} we have

f_{P|X=0}(p) = (1 - p)^{102} / ∫_0^1 (1 - p)^{102} dp = (1 - p)^{102} / [-(1 - p)^{103}/103]_0^1 = 103 (1 - p)^{102}.

Therefore the desired probability is

Pr[P > .10 | X = 0] = ∫_{.10}^{1} f_{P|X=0}(p) dp = ∫_{.10}^{1} 103 (1 - p)^{102} dp = [-(1 - p)^{103}]_{.10}^{1} = (.90)^{103} ≈ 1.9363 × 10^{-5}.

Note that (.90)^{103} is extremely small. Hence it is very unlikely that P will exceed 10% if no claims are observed during the first year.


Chapter Six Solutions

1. Recall that the probability density function of the gamma distribution is given by

f_X(x) = λ^r x^{r-1} e^{-λx} / Γ(r), x > 0

and the mean, variance, and skewness are

μ_X = r/λ,  σ_X² = r/λ²,  γ_X = 2/r^{1/2}.

From these formulas, one can give the qualitative descriptions requested in part a.

a. The distribution is positively skewed for all r and λ. The distributions with the greatest skew are the ones for which r is small. As r increases with λ held fixed, the distribution of probability moves to the right, becomes more spread out, and becomes more symmetric. On the other hand, as λ increases with r held fixed, the distribution of probability moves to the left and becomes less spread out, but the skewness does not change. These characteristics are clear from the formulas for the mean, variance, and skewness.

Now consider what happens when r → 0 or λ → 0. As r → 0 with λ held fixed, the distribution becomes more concentrated around x = 0. In the limiting case r = 0, the distribution reduces to a point mass at 0. Indeed, for any x ≠ 0 and any λ ≥ 0,

f_X(x) = λ^r x^{r-1} e^{-λx} / Γ(r) → 0 as r → 0,

and for r < 1, f_X(x) → ∞ as x → 0⁺. As λ → 0 with r held fixed, the distribution of probability moves to the right. From the interpretation of a gamma random variable as a waiting time, the limiting distribution in the case λ = 0 could be considered a point mass at infinity. However, strictly speaking, the distribution is not defined at λ = 0.

b. Different values of the parameter r result in density curves of a different shape. For example, when r < 1 the density function is unbounded and becomes infinite at x = 0; when r = 1 the density function is strictly decreasing with a maximum at x = 0; and when r > 1 the density function increases and then decreases and attains its maximum at a point x > 0. (See figures 6.1 and 6.2b in the textbook.) From these descriptions, it is clear that the "shape" of the graph is not the same for all values of r. Hence it is appropriate to consider r to be a "shape parameter".

The easiest way to see why λ can be considered a scale parameter is to plot a few graphs of gamma densities with different λ values and the same r value. Consider for example plots of the densities for Gamma(2, 1) and Gamma(2, 2). This can be done using Mathematica or similar computer software.

[Graph: density of Gamma(2, 1) for 0 < x < 6]

[Graph: density of Gamma(2, 2) for 0 < x < 3]

Without referring to the axis scale, the two graphs appear to be the same. However, if the graphs are plotted using the same scale, we see that they are in fact quite different:

[Graph: densities of Gamma(2, 1) and Gamma(2, 2) plotted on the same axes]

This suggests that changes in λ amount to changes in scale.
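A matplotlib sketch of these plots (in place of Mathematica) is given below; note that SciPy's gamma distribution is parameterized by a shape a = r and a scale equal to 1/λ.

```python
# Plot Gamma(2, 1) and Gamma(2, 2) densities on common axes.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gamma

x = np.linspace(0, 6, 400)
plt.plot(x, gamma.pdf(x, a=2, scale=1.0), label="Gamma(2, 1)")   # lambda = 1
plt.plot(x, gamma.pdf(x, a=2, scale=0.5), label="Gamma(2, 2)")   # lambda = 2
plt.xlabel("x")
plt.ylabel("density")
plt.legend()
plt.show()
```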

We can also see this by looking directly at the formula for the density function:

f_X(x) = λ (λx)^{r-1} e^{-λx} / Γ(r).

Consider the change of scale given by the substitution u = λx. Since probability densities measure probability per unit length, they are affected by the choice of unit length. This means that the substitution u = λx will also affect the scale of the graph in the vertical direction. Indeed, the change in vertical scale will be given by v = y/λ, where (x, y) represents two-dimensional coordinates before the change of scale and (u, v) represents two-dimensional coordinates after the change of scale. Hence applying this substitution to the formula for f_X we obtain

f(u) = u^{r-1} e^{-u} / Γ(r),

where f is the density function in the new coordinate system (i.e., after the change of scale). Note that the value of r has not changed. This shows that changes in λ are related to changes in scale. For this reason, it is appropriate to consider λ a "scale parameter".

c. The derivative of the density function is

f_X'(x) = [λ^r / Γ(r)] {(r - 1) x^{r-2} e^{-λx} + x^{r-1}(-λ) e^{-λx}} = [λ^r x^{r-2} e^{-λx} / Γ(r)] {(r - 1) - λx}.

Hence f_X is increasing for r - 1 - λx > 0, i.e., for x < (r - 1)/λ, and f_X is decreasing for r - 1 - λx < 0, i.e., for x > (r - 1)/λ. Consequently, f_X is decreasing for all x > 0 if and only if (r - 1)/λ ≤ 0, i.e., if and only if r ≤ 1.

5. a. Exponential(100) (time measured in hours) or Exponential(5/3) (time measured in minutes).

b. Beta(6, 96). If a sample of size n is drawn from a population with replacement and the sample contains x defectives, then the fraction of defectives in the entire population is Beta(x + 1, n - x + 1) (see section 6.3.3). Note that the sampling was probably done without replacement, but if the population is large relative to the sample size, the difference between sampling with and without replacement is small.

c. Lognormal(μ, σ). There is insufficient information in the statement of the question to specify μ and σ. Note that this is an approximate model. The precise value of the security after one year is X1 X2 ··· Xn where Xj is the accumulation of a $1 investment on the j-th day and n is the number of trading days. By assumption, the Xj are independent, identically distributed, and positive random variables. Since n is reasonably large, the multiplicative form of the central limit theorem applies. Hence X1 X2 ··· Xn ≈ Lognormal(μ, σ) where μ, σ are the mean and standard deviation of log(X1 X2 ··· Xn).

d. Exponential(3), time measured in months. Since the failure rate is constant, there is no aging. Consequently, the distribution is exponential.

9. Let T_l, T_r be the total service times for the left and right machines respectively and let T_l*, T_r* be the corresponding remaining service times. Let T be the waiting time until the first machine becomes available when both machines are in use. Suppose that T_l, T_r, T_l*, T_r*, and T are all measured in seconds. Then T = min(T_l*, T_r*).

We are not explicitly told what models to use for T_l and T_r. In the interest of simplicity, let's assume that both T_l and T_r have exponential distributions. Since the exponential distribution has the memoryless property, it follows from this assumption that T_l* and T_r* are exponentially distributed with T_l* ~ T_l and T_r* ~ T_r. Note that in this context the memoryless property means that knowledge of the time that a machine has already spent servicing a customer has no effect on the distribution of the remaining service time. This is not an unreasonable assumption to make in this context, as anyone who has stood behind a customer performing multiple transactions can attest! Since the average service times are 30 seconds and 20 seconds for the left and right machines respectively, it follows that T_l ~ Exponential(1/30), T_r ~ Exponential(1/20) and also T_l* ~ Exponential(1/30), T_r* ~ Exponential(1/20).

In section 6.1.1, it was shown that if T1 ~ Exponential(λ1), T2 ~ Exponential(λ2) and T1, T2 are independent then min(T1, T2) ~ Exponential(λ1 + λ2). Since T = min(T_l*, T_r*), it follows that T has an exponential distribution with parameter λ = 1/30 + 1/20 = 1/12, i.e., T ~ Exponential(1/12). This fact will be used to answer parts a through e.

a. Since T ~ Exponential(1/12), we have E[T] = 12. Hence the person at the front of the line should expect to wait 12 seconds.

b. The desired probability is

Pr[T > 15] = e^{-15/12} = e^{-5/4} ≈ .2865.

c. From part a, the expected waiting time for a person at the front of the line is 12 seconds. Hence we should expect the line to move every 12 seconds. It follows that the person who is currently third in line should expect to wait 36 seconds. This result can also be derived more formally using the approach outlined in part d.

d. Let T_j be the time that the j-th person in line must wait for service after making it to the front of the line and let T_j* be the amount of time that the j-th person must wait in total. Then

T_j* = T1 + T2 + ··· + T_j.

From earlier comments, T_j ~ Exponential(1/12) for all j. Moreover, since machine service times are independent and exponentially distributed (i.e., "memoryless"), the T_j are also independent. Hence from section 6.1.2 we have

T_j* ~ Gamma(j, 1/12).

Consequently the expected waiting time for the person currently third in line is

E[T3*] = 3/(1/12) = 36 seconds

(the answer obtained in part c) and the probability that this person must wait more than 30 seconds is

Pr[T3* > 30] = Σ_{n=0}^{2} [(1/12)(30)]^n e^{-30/12}/n! = e^{-5/2} [1 + 5/2 + (1/2)(5/2)²] = (53/8) e^{-5/2} ≈ .5438.

e. To answer the question of this part, we need only consider the machine on the left. The desired probability is

Pr[T_l > 60] = e^{-60/30} = e^{-2} ≈ .1353.

14. Let X_j be the dollar increase on the j-th trading day. By assumption the X_j are independent and identically distributed with probability distribution given by

X_j = 2 with probability .50, -1 with probability .50.

Since the current price of the stock is $100, its price n trading days hence is

S_n = 100 + X1 + X2 + ··· + X_n.

We are interested in determining Pr[S50 > 145].

Let I_j be an indicator of a price increase on the j-th trading day. Then I_j ~ Binomial(1, .50) and

X_j = 3 I_j - 1.

Hence

S_n = 100 + 3(I1 + ··· + I_n) - n = 100 - n + 3Y

where Y = I1 + ··· + I_n ~ Binomial(n, .50). Consequently,

Pr[S_n > 145] = Pr[100 - n + 3Y > 145] = Pr[Y > 15 + n/3] = Σ_{k=k*}^{n} C(n, k)(.50)^n

where k* = 15 + [n/3] + 1 = 16 + [n/3]. Here [x] denotes the integer part of x, i.e., the greatest integer less than or equal to x. For n = 50 we have k* = 32. Hence

Pr[S50 > 145] = Σ_{k=32}^{50} C(50, k)(.50)^{50}.

The latter sum can be determined numerically using Mathematica or similar computer software. When we do this we find that

Pr[S50 > 145] ≈ .0324543.
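Both the exact binomial tail and the continuity-corrected normal approximation developed next are one-liners in SciPy (a sketch in place of Mathematica).

```python
# Exact value and normal approximation of Pr[S_50 > 145].
import numpy as np
from scipy.stats import binom, norm

exact = binom.sf(31, 50, 0.5)            # sum_{k=32}^{50} C(50,k)(1/2)^50
print(exact)                             # approximately 0.0324543

z = (145.5 - 125) / (1.5 * np.sqrt(50))  # continuity-corrected standardized value
print(1 - norm.cdf(z))                   # approximately 0.0266
```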

An alternative approach to determining Pr[S50 > 145] is to use a normal approximation for S_n. From the definition of X_j we have

E[X_j] = (2)(.50) + (-1)(.50) = 0.50,
Var(X_j) = E[X_j²] - E[X_j]² = {(2)²(.50) + (-1)²(.50)} - (0.50)² = 2.25.

Hence

E[S_n] = 100 + Σ_{j=1}^{n} E[X_j] = 100 + n/2,
Var(S_n) = Σ_{j=1}^{n} Var(X_j) = 2.25 n,

where the formula for the variance follows from the independence of the X_j. It follows that for n sufficiently large,

S_n ≈ Normal(100 + n/2, 1.5 √n).

Using this approximation and correcting for continuity we have

Pr[S50 > 145] = Pr[S50 ≥ 145.5] = Pr[(S50 - 125)/(1.5 √50) ≥ (145.5 - 125)/(1.5 √50)] ≈ Pr[Z ≥ 1.9328] = 1 - Φ(1.9328)

where Z ~ Normal(0, 1) and Φ is the distribution function of Z. From the table in Appendix E and using linear interpolation we have

Φ(1.9328) ≈ (.72) Φ(1.93) + (.28) Φ(1.94) = (.72)(.9732) + (.28)(.9738) = .973368.

Consequently,

Pr[S50 > 145] ≈ 1 - Φ(1.9328) ≈ 1 - .973368 ≈ .02663.

Note that a correction for continuity was appropriate in this case because the values of S_n are all integers. If the daily price movements (i.e., the values of X_j) had not been whole dollar amounts, then it would not have been appropriate to correct for continuity in the approximation of S_n.

It is instructive to compare the value calculated for Pr[S50 > 145] under a normal approximation for S50 to the exact value determined earlier. Recall that the exact value of Pr[S50 > 145] was determined to be .0324543 and the value of Pr[S50 > 145] under a normal approximation was determined to be .02663. To the nearest percentage point, both values are about 3%. If this degree of precision in the answer is sufficient, then it is reasonable to use the normal approximation. However, if greater precision is required then the desired probability must be calculated exactly.

18. Let P be the fraction of the company's policies for which a claim is filed. From section 6.3.3, we know that if a sample of size n is drawn with replacement from a population whose members are one of two types and the sample contains x items of a particular type, then the fraction of items of this type in the entire population has the distribution Beta(x + 1, n - x + 1). In this exercise, the sample size is n = 100 and x = 5. We are not told whether the sampling is done with or without replacement. However, since the number of policies is likely to be very large relative to the size of the sample (which we know to be 100), we may assume that the sampling is done with replacement. Hence an appropriate model for P is P ~ Beta(6, 96) and the desired probability is

Pr[P > .10] = 1 - Pr[P ≤ .10] = 1 - [1/B(6, 96)] ∫_0^{.10} x⁵ (1 - x)^{95} dx.

We could calculate the latter integral using successive applications of integration by parts; however, this would require five iterations! Alternatively, we can use the formula

Pr[Beta(r, s) ≤ x] = Pr[Binomial(r + s - 1, x) ≥ r].

Using this result we have

Pr[P ≤ .10] = Pr[Beta(6, 96) ≤ .10] = Pr[Binomial(101, .10) ≥ 6] = 1 - Σ_{x=0}^{5} C(101, x)(.10)^x (.90)^{101-x}.

Hence

Pr[P > .10] = Σ_{x=0}^{5} C(101, x)(.10)^x (.90)^{101-x}.

The latter sum can be evaluated numerically using Mathematica or similar computer software. When we do this we find that the desired probability is

Pr[P > .10] ≈ .0541903.

22. Let X be the number of heads obtained in 1000 tosses of the selected coin and let I be an indicator of the fairness of the coin, i.e.,

I = 1 if the selected coin is fair, 0 if the selected coin is biased.

Since the gambler concludes that the coin is biased if X ≥ 525 and concludes that it is fair otherwise, the probability that the gambler reaches a false conclusion is, by the law of total probability,

Pr[X ≥ 525 | I = 1] Pr[I = 1] + Pr[X < 525 | I = 0] Pr[I = 0].

Consider first the quantity Pr[X ≥ 525 | I = 1]. This is the probability of reaching a false conclusion when the coin being tossed is known to be fair. Note that the distribution of X | I = 1 is binomial with parameters n = 1000 and p = .50. (The total number of tosses is 1000 and, since the coin is fair, the probability of heads on a single toss of the coin is .50.) Hence

Pr[X ≥ 525 | I = 1] = Σ_{x=525}^{1000} C(1000, x)(.50)^{1000}.

We can evaluate this sum using Mathematica or similar computer software. When we do this we find that

Pr[X ≥ 525 | I = 1] ≈ .0606071.

Alternatively, we can evaluate the probability using a normal approximation with continuity correction. When we do this, we obtain

Pr[X ≥ 525 | I = 1] = Pr[X ≥ 524.5 | I = 1] = Pr[(X - (1000)(.50))/√((1000)(.50)(.50)) ≥ (524.5 - (1000)(.50))/√((1000)(.50)(.50)) | I = 1] ≈ Pr[Z ≥ 1.5495].

From Appendix E of the textbook and using linear interpolation as appropriate we have

Φ(1.5495) ≈ (.05) Φ(1.54) + (.95) Φ(1.55) = (.05)(.9382) + (.95)(.9394) = .93934.

Hence

Pr[X ≥ 525 | I = 1] ≈ Pr[Z ≥ 1.5495] ≈ 1 - Φ(1.5495) ≈ .06066,

which is close to the value .0606071 calculated directly.

Now consider the quantity Pr[X < 525 | I = 0]. This is the probability of reaching a false conclusion when the coin being tossed is known to be the biased one. Since the probability of heads for the coin known to be biased is 55% by assumption, the distribution of X | I = 0 is binomial with parameters n = 1000 and p = .55. Hence

Pr[X < 525 | I = 0] = Σ_{x=0}^{524} C(1000, x)(.55)^x (.45)^{1000-x}.

Once again, we can evaluate this probability using Mathematica or similar computer software. When we do this we find that

Pr[X < 525 | I = 0] ≈ .0526817.

Alternatively, we can use a normal approximation with continuity correction:

Pr[X < 525 | I = 0] = Pr[X ≤ 524.5 | I = 0] = Pr[(X - (1000)(.55))/√((1000)(.55)(.45)) ≤ (524.5 - (1000)(.55))/√((1000)(.55)(.45)) | I = 0] ≈ Pr[Z ≤ -1.6209] = Φ(-1.6209) = 1 - Φ(1.6209).

From Appendix E of the textbook and using linear interpolation as appropriate we have

Φ(1.6209) ≈ (.91) Φ(1.62) + (.09) Φ(1.63) ≈ (.91)(.9474) + (.09)(.9484) = .94749.

Hence

Pr[X < 525 | I = 0] ≈ 1 - Φ(1.6209) ≈ .05251,

which is close to the value .0526817 calculated directly.

The only remaining probabilities to consider are Pr[I = 0] and Pr[I = 1]. Since the gambler has one coin of each type and selects the coin to flip at random, we must have

Pr[I = 0] = 1/2 and Pr[I = 1] = 1/2.

Putting this together, we find that the probability of reaching a false conclusion is

Pr[X ≥ 525 | I = 1] Pr[I = 1] + Pr[X < 525 | I = 0] Pr[I = 0] = (.0606071)(1/2) + (.0526817)(1/2) = .0566444.

Note that we have used the numerical values computed directly from the binomial sums by Mathematica when determining the final answer. However, the answer obtained using the normal approximation is similar.
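The exact binomial calculations above can be reproduced with SciPy (a sketch in place of Mathematica).

```python
# Exercise 22: exact error probabilities and overall probability of a false conclusion.
from scipy.stats import binom

p_fair_wrong = binom.sf(524, 1000, 0.50)     # Pr[X >= 525 | fair],  approx 0.0606071
p_biased_wrong = binom.cdf(524, 1000, 0.55)  # Pr[X < 525 | biased], approx 0.0526817
print(p_fair_wrong, p_biased_wrong)
print(0.5 * p_fair_wrong + 0.5 * p_biased_wrong)   # approx 0.0566444
```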

30. Let X be the insurer's payment in dollars for a randomly selected policy and let I be an indicator of a claim for this policy. Then according to the assumptions,

I = 1 with probability .25, 0 with probability .75,

and

(X | I = 1) ~ Pareto(3, 100).

Hence

S_{X|I=1}(x) = (100/(100 + x))³ for x > 0.

a. The desired probability is Pr[X > 50]. By the law of total probability we have

Pr[X > 50] = Pr[X > 50 | I = 1] Pr[I = 1] + Pr[X > 50 | I = 0] Pr[I = 0].

Clearly Pr[X > 50 | I = 0] = 0 since no payment is made if no claim is submitted. From the formula for S_{X|I=1} stated earlier we also have

Pr[X > 50 | I = 1] = S_{X|I=1}(50) = (100/150)³ = (2/3)³.

Consequently,

Pr[X > 50] = (2/3)³(.25) + (0)(.75) = 2/27.

b. The desired probability is Pr[X ≤ 10]. Arguing as in part a we have

Pr[X > 10] = Pr[X > 10 | I = 1] Pr[I = 1] + Pr[X > 10 | I = 0] Pr[I = 0] = S_{X|I=1}(10) · Pr[I = 1] + 0 · Pr[I = 0] = (100/110)³(.25) = (10/11)³(1/4) ≈ .18782870.

Hence

Pr[X ≤ 10] = 1 - Pr[X > 10] ≈ 1 - .18782870 = .81217130.

c. Applying the law of total probability as in parts a and b we have, for x ≥ 0,

S_X(x) = Pr[X > x] = Pr[X > x | I = 1] Pr[I = 1] + Pr[X > x | I = 0] Pr[I = 0] = S_{X|I=1}(x) · Pr[I = 1] + 0 · Pr[I = 0] = (100/(100 + x))³(.25).

Since the payment on a given policy cannot be negative we must also have

S_X(x) = Pr[X > x] = 1 for x < 0.

Consequently, the survival function of X is given by

S_X(x) = (100/(100 + x))³(.25) for x ≥ 0, and S_X(x) = 1 for x < 0.

It follows that the distribution function F_X is given by

F_X(x) = 1 - (100/(100 + x))³(.25) for x ≥ 0, and F_X(x) = 0 for x < 0.

Note that

Pr[X = 0] = Pr[X = 0 | I = 1] Pr[I = 1] + Pr[X = 0 | I = 0] Pr[I = 0] = 0 · Pr[I = 1] + 1 · Pr[I = 0] = .75.

This also follows from the formula for F_X. Hence we see that X has a mixed distribution with a probability mass of size .75 at x = 0 (representing the event that no claim is submitted) and a continuous distribution of probability on x > 0.

d. Recall that for nonnegative random variables X we have

E[X] = ∫_0^∞ S_X(x) dx.

Hence using the formula for S_X derived in part c we have

E[X] = ∫_0^∞ (100/(100 + x))³(.25) dx = (.25)(100)³ [(100 + x)^{-2}/(-2)]_0^∞ = (.25)(100)³ (100^{-2}/2) = 12.5.

To determine the variance of X we need to consider the density function f_X. From part c, it follows that the continuous part of the distribution has density function

f_X(x) = -S_X'(x) = (.25)(3/100)(1 + x/100)^{-4} for x > 0.

The discrete part consists of a probability mass of size .75 at x = 0. Hence

E[X²] = 0² · Pr[X = 0] + ∫_0^∞ x² f_X(x) dx = 0² · (.75) + (.25) ∫_0^∞ x² · (3/100)(1 + x/100)^{-4} dx.

The integral ∫_0^∞ x² · (3/100)(1 + x/100)^{-4} dx can be determined by recursively applying integration by parts. Alternatively, one could recognize this integral as the second moment of a Pareto distribution with parameters s = 3, β = 100, and use the formula for the second moment stated in section 6.1.3. Taking the latter approach we have

∫_0^∞ x² · (3/100)(1 + x/100)^{-4} dx = 100² · 2/[(3 - 1)(3 - 2)] = 100².

Consequently, the second moment of X is

E[X²] = (.25) ∫_0^∞ x² · (3/100)(1 + x/100)^{-4} dx = (1/4)(100²) = 2500.

It follows that

Var(X) = E[X²] - E[X]² = 2500 - (12.5)² = 2343.75.

32. In this exercise, students determine the probability mass function for a continuous mixture of binomial random variables with beta mixing weights. An application of this is considered in exercise 35.

Suppose that (N | P = p) ~ Binomial(m, p) and P ~ Beta(r, s). Then from sections 5.1 and 6.3.3 we have

p_{N|P=p}(n) = C(m, n) p^n (1 - p)^{m-n} for n = 0, 1, …, m

and

f_P(p) = p^{r-1}(1 - p)^{s-1}/B(r, s) for 0 < p < 1.

Hence by the law of total probability,

p_N(n) = Pr[N = n] = ∫_0^1 Pr[N = n | P = p] f_P(p) dp = ∫_0^1 p_{N|P=p}(n) f_P(p) dp = ∫_0^1 {C(m, n) p^n (1 - p)^{m-n}} · p^{r-1}(1 - p)^{s-1}/B(r, s) dp = [C(m, n)/B(r, s)] ∫_0^1 p^{n+r-1}(1 - p)^{m-n+s-1} dp.

From section 6.3.3 we have

∫_0^1 p^{n+r-1}(1 - p)^{m-n+s-1} dp = B(n + r, m - n + s).

From Appendix C of the textbook we have

B(r, s) = Γ(r)Γ(s)/Γ(r + s)

and

B(n + r, m - n + s) = Γ(n + r)Γ(m - n + s)/Γ(m + r + s).

Consequently,

p_N(n) = C(m, n) · [Γ(r + s)/(Γ(r)Γ(s))] · [Γ(n + r)Γ(m - n + s)/Γ(m + r + s)]

as required.

For positive integer values of the argument, the gamma function reduces to a factorial. In particular,

Γ(x) = (x - 1)! for x = 1, 2, 3, … .

Hence if r and s are positive integers, the formula for p_N becomes

p_N(n) = C(m, n) (r + s - 1)! (n + r - 1)! (m - n + s - 1)! / [(r - 1)! (s - 1)! (m + r + s - 1)!],

which is the formula that we were required to derive.

35. Let N be the number of companies that use their line of credit in the coming month. From the information given in the question, an appropriate model for N is as follows: (N | P = p) ~ Binomial(10, p), P ~ Beta(2, 3). From exercise 32, the probability mass function for the unconditional distribution of N is given by

p_N(n) = C(10, n) Γ(5) Γ(n + 2) Γ(13 - n) / [Γ(2) Γ(3) Γ(15)] = C(10, n) · 4! (n + 1)! (12 - n)! / (1! · 2! · 14!)

for n = 0, 1, ..., 10. Hence the desired probability is

Pr[N ≤ 2] = p_N(0) + p_N(1) + p_N(2) = C(10, 0) · 4!·1!·12!/(1!·2!·14!) + C(10, 1) · 4!·2!·11!/(1!·2!·14!) + C(10, 2) · 4!·3!·10!/(1!·2!·14!) ≈ .31068931.
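This probability can also be obtained from SciPy's beta-binomial distribution (available in recent SciPy releases), which implements exactly the mixture derived in exercise 32; the sketch below is offered in place of a Mathematica calculation.

```python
# Exercise 35: Pr[N <= 2] for (N | P = p) ~ Binomial(10, p) with P ~ Beta(2, 3).
from scipy.stats import betabinom

print(betabinom.cdf(2, 10, 2, 3))   # approximately 0.31068931
```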

42. Let X be the size of a random claim. We are given that (X | Λ = λ) ~ Exponential(λ) and Λ ~ Gamma(2, 100). We are also given that the size of the first claim is x1 = 200. Since Exponential(λ) is a special case of Gamma(r, λ) with r = 1, we can use the result of exercise 41 to determine the distribution of Λ | X = x1. In exercise 41, it was shown that if (X | Λ = λ) ~ Gamma(r, λ) and Λ ~ Gamma(s, β) then (Λ | X = x) ~ Gamma(r + s, x + β). In this exercise, we have r = 1, s = 2, β = 100, and x = 200. Hence

(Λ | X = x1) ~ Gamma(3, 300).

It follows that an appropriate model for Λ going forward is Gamma(3, 300).

46. a. Let X be the eye pressure measurement for a randomly selected person and let I be an indicator for glaucoma such that

I = 1 if the eye is diseased, 0 if the eye is healthy.

We are given that (X | I = 1) ~ Normal(25, 1), (X | I = 0) ~ Normal(20, 1), and Pr[I = 1] = .10. We are interested in determining Pr[I = 1 | X = x]. Using Bayes' theorem we have

p_{I|X=x}(i) = f_{X|I=i}(x) p_I(i) / f_X(x) for i = 0, 1.

Since X | I = i has a normal distribution we have

f_{X|I=1}(x) = (1/√(2π)) e^{-(x-25)²/2}

and

f_{X|I=0}(x) = (1/√(2π)) e^{-(x-20)²/2},

and so by the law of total probability

f_X(x) = f_{X|I=1}(x) · p_I(1) + f_{X|I=0}(x) · p_I(0) = (1/√(2π)) e^{-(x-25)²/2}(.10) + (1/√(2π)) e^{-(x-20)²/2}(.90).

It follows that the desired probability is

Pr[I = 1 | X = x] = p_{I|X=x}(1) = f_{X|I=1}(x) p_I(1) / f_X(x) = e^{-(x-25)²/2} / [e^{-(x-25)²/2} + 9 e^{-(x-20)²/2}].

Simplifying this expression we have

Pr[I = 1 | X = x] = 1/(1 + 9 e^{(225 - 10x)/2}).

b. We are interested in the values of x for which p_{I|X=x}(1) > 1/2. From part a,

p_{I|X=x}(1) = 1/(1 + 9 e^{(225 - 10x)/2}).

Hence

p_{I|X=x}(1) > 1/2 ⟺ 1 + 9 e^{(225 - 10x)/2} < 2 ⟺ e^{(225 - 10x)/2} < 1/9 ⟺ (1/2)(225 - 10x) < log(1/9) ⟺ x > (1/5)(225/2 + log 9) ⟺ x > 22.5 + (1/5) log 9 ≈ 22.9394.


Chapter Seven Solutions

4. Consider the transformation

Y = √X for X ≥ 0, and Y = 0 for X < 0.

If X assumes both positive and negative values, as is the case when X ~ Normal(0, 1), then this transformation is not one-to-one. In this situation, Y will have a probability mass at 0 equal to Pr[X ≤ 0] and a distribution of probability on the interval y > 0.

The best way to approach the problem of determining a formula for F_Y is to consider a graph of the transformation with the event Y ≤ y highlighted:

[Graph: the transformation y = √x for x ≥ 0 (and y = 0 for x < 0), with the event Y ≤ y, i.e. X ≤ y², highlighted]

From this graph we see that for y ≥ 0,

F_Y(y) = Pr[Y ≤ y] = Pr[√X ≤ y] = Pr[X ≤ y²] = F_X(y²)

and for y < 0, F_Y(y) = 0. Hence the distribution function of Y under the given transformation is in general

F_Y(y) = F_X(y²) for y ≥ 0, and F_Y(y) = 0 for y < 0.

Suppose that X ~ Normal(0, 1). Then from section 6.2.1,

F_X(x) = (1/√(2π)) ∫_{-∞}^{x} e^{-t²/2} dt for x ∈ ℝ.

Hence under the given transformation,

F_Y(y) = (1/√(2π)) ∫_{-∞}^{y²} e^{-t²/2} dt for y ≥ 0, and F_Y(y) = 0 for y < 0.

It follows that Y has a mixed distribution with a probability mass of size 1/2 at y = 0 and a continuous distribution of probability on y > 0 given by

f_Y(y) = √(2/π) y e^{-y⁴/2}.

The latter expression is determined using the fundamental theorem of calculus and the chain rule to calculate the derivative of F_Y. In particular,

d/dy [(1/√(2π)) ∫_{-∞}^{y²} e^{-t²/2} dt] = 2y · (1/√(2π)) e^{-t²/2} |_{t=y²} = √(2/π) y e^{-y⁴/2}.

Therefore, the (generalized) density function for Y when X ~ Normal(0, 1) is given by

f_Y(y) = √(2/π) y e^{-y⁴/2} for y > 0,  p_Y(0) = 1/2.

Comment: The probability mass at 0 was inadvertently omitted from the answer given in the "Answers to Selected Exercises" section of the textbook. The complete answer is the one given here.

6. When X is a continuous random variable the stated properties of the limited expected value function are straightforward to prove using the formulas

E[X; m] = ∫_{-∞}^{m} x f_X(x) dx + m S_X(m),

E[X; m] = E[X] - ∫_{m}^{∞} (x - m) f_X(x) dx

derived in section 7.2 of the textbook. Hence to begin, let's assume that the distribution of X is continuous.

Under this assumption, the survival function of X is continuous. Hence from the formula

E[X; m] = ∫_{-∞}^{m} x f_X(x) dx + m S_X(m)

the quantity E[X; m] is continuous as a function of m. Differentiating this formula with respect to m and using the fundamental theorem of calculus as appropriate we have

d/dm E[X; m] = d/dm ∫_{-∞}^{m} x f_X(x) dx + d/dm [m S_X(m)] = m f_X(m) + S_X(m) + m S_X'(m) = m f_X(m) + S_X(m) + m(-f_X(m)) = S_X(m)

at all points m for which S_X is differentiable. Consequently,

d/dm E[X; m] ≥ 0 for all m

(since survival functions are always nonnegative) and so E[X; m] is increasing (or more precisely nondecreasing) in m. Differentiating the formula d/dm E[X; m] = S_X(m) just derived we obtain

d²/dm² E[X; m] = S_X'(m) = -f_X(m).

From this it follows that

d²/dm² E[X; m] ≤ 0 for all m

(since density functions are always nonnegative) and so E[X; m] is concave as a function of m. Now

E[X; m] ≤ E[X],

a fact that follows directly from the formula

E[X; m] = E[X] - ∫_{m}^{∞} (x - m) f_X(x) dx.

Hence

lim_{m→∞} E[X; m] ≤ E[X].

On the other hand,

lim_{m→∞} E[X; m] ≥ E[X],

which can be demonstrated by letting m → ∞ in the formula

E[X; m] = ∫_{-∞}^{m} x f_X(x) dx + m S_X(m),

noting that m S_X(m) ≥ 0 and ∫_{-∞}^{∞} x f_X(x) dx = E[X]. Consequently,

lim_{m→∞} E[X; m] = E[X].

Hence we have shown in the case where X has a continuous distribution that

i. E[X; m] is increasing, continuous, and concave as a function of m.
ii. E[X; m] → E[X] as m → ∞.
iii. d/dm E[X; m] = S_X(m).

These are the three properties of E[X; m] that we were required to demonstrate.

The demonstration of these three properties when X does not have a continuous distribution is more delicate. Essentially the same line of reasoning can be followed in the general case by interpreting the probability densities to be generalized densities and the integrals to be generalized integrals in the sense described in the appendix to chapter 4. However, the validity of some of the intermediate steps must be carefully established. This requires demonstrating that the fundamental theorem of calculus and the formula for integration by parts hold for generalized integrals.

When X is discrete, it is also possible to demonstrate the three properties of $E[X; m]$ by determining an explicit formula for $E[X; m]$. To illustrate the key steps with this approach, consider a discrete distribution with exactly three probability masses. Suppose that these probability masses are located at the points $a_1, a_2, a_3$ where $a_1 < a_2 < a_3$ and have respective sizes $k_1, k_2, k_3$. Then from the definition of limited expected value, we have

$E[X; m] = m$ for $m < a_1$,
$E[X; m] = k_1 a_1 + (k_2 + k_3) m$ for $a_1 \le m < a_2$,
$E[X; m] = (k_1 a_1 + k_2 a_2) + k_3 m$ for $a_2 \le m < a_3$,
$E[X; m] = E[X]$ for $m \ge a_3$.

It follows from this that $E[X; m]$, when considered as a function of m, is piecewise linear with slopes that decrease from 1 to 0. Moreover, the slope of $E[X; m]$ at the point m is the sum of the probability masses to the right of m, i.e., $\dfrac{d}{dm} E[X; m] = S_X(m)$, and the ultimate value of $E[X; m]$ is $E[X]$.

A graph of $E[X; m]$ illustrating these properties can be created using Mathematica or similar computer software.

[Figure: graph of $E[X; m]$ as a function of m, piecewise linear with breakpoints at $a_1$, $a_2$, $a_3$ and limiting value $E[X]$.]

From this graph, it is clear that when X is a discrete random variable with three probability masses, the function $E[X; m]$ has the properties stated in the question. The demonstration of these properties when X has more than three probability masses is similar.

It is worth noting that although the graph of $E[X; m]$ has some of the characteristics of the graph of a distribution function (nondecreasing, limiting value), $E[X; m]$ is not a distribution function. Moreover, the graph of $E[X; m]$ is generally quite different from the graph of the corresponding $F_X$. We can illustrate this concretely by creating the graph of the distribution function that corresponds to the graph of $E[X; m]$ just given.

[Figure: graph of the corresponding distribution function $F_X$, a step function with jumps at $a_1$, $a_2$, $a_3$ rising to 1.]

Note that the sizes of the respective jumps at $a_1$, $a_2$, $a_3$ are $k_1$, $k_2$, $k_3$.

This completes the required demonstration.
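To make the discrete case concrete, here is a minimal numerical sketch (in Python rather than the Mathematica mentioned above) for a hypothetical three-point distribution; the mass locations and sizes are made up purely for illustration. It evaluates $E[X; m]$ directly from the definition and checks that its numerical slope agrees with $S_X(m)$ away from the mass points.

    import numpy as np

    # hypothetical three-point distribution, used only for illustration
    a = np.array([1.0, 2.0, 4.0])      # mass locations a1 < a2 < a3
    k = np.array([0.2, 0.5, 0.3])      # mass sizes k1, k2, k3 (they sum to 1)

    def limited_ev(m):
        # E[X; m] = E[min(X, m)] for the discrete distribution above
        return np.sum(k * np.minimum(a, m))

    def survival(m):
        # S_X(m) = Pr(X > m)
        return np.sum(k[a > m])

    for m in np.linspace(0.25, 4.75, 10):          # grid chosen to avoid the mass points
        h = 1e-6
        slope = (limited_ev(m + h) - limited_ev(m - h)) / (2 * h)
        print(f"m={m:4.2f}  E[X;m]={limited_ev(m):.4f}  slope={slope:.3f}  S_X(m)={survival(m):.3f}")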

10. From section 7.4, the actuarial present value of $1 payable at uncertain future time T is $E[e^{-rT}]$, where r is the constant per annum continuously compounded discount rate. If $T_1, \ldots, T_n$ are the future lifetimes of the individuals insured under a joint life policy, then the time of payment of the benefit on this policy is $T = \min(T_1, \ldots, T_n)$ (see exercise 9 for the definition of a joint life policy). From the solution to exercise 9, it follows that if the $T_j$ are independent and exponentially distributed with $T_j \sim \mathrm{Exponential}(\lambda_j)$, then $T \sim \mathrm{Exponential}(\lambda_1 + \cdots + \lambda_n)$.

Now the actuarial present value of $1 when the future lifetime random variable T has an exponential distribution with parameter $\lambda$ is in general $\lambda/(r + \lambda)$, where r is the constant per annum continuously compounded discount rate. Indeed,

$E[e^{-rT}] = \displaystyle\int_0^{\infty} e^{-rt} f_T(t)\, dt = \displaystyle\int_0^{\infty} e^{-rt} \cdot \lambda e^{-\lambda t}\, dt = \lambda \displaystyle\int_0^{\infty} e^{-(r+\lambda)t}\, dt = \dfrac{\lambda}{r + \lambda}$.

Hence if $T = \min(T_1, \ldots, T_n)$ where the $T_j$ are independent and exponentially distributed with $T_j \sim \mathrm{Exponential}(\lambda_j)$, then the actuarial present value of $1 payable at future time T is

$\dfrac{\sum_{j=1}^{n} \lambda_j}{r + \sum_{j=1}^{n} \lambda_j}$.

Consequently, the actuarial present value for $1 of benefit on a joint life policy for which the insured lives have future lifetimes that are independent and exponentially distributed is

$\dfrac{\sum_{j=1}^{n} \lambda_j}{r + \sum_{j=1}^{n} \lambda_j}$,

where r is the constant per annum continuously compounded discount rate and $\lambda_1, \ldots, \lambda_n$ are the parameters of the exponential distributions for the individual lives.
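The closed form is easy to verify by simulation. A minimal Python sketch, using made-up parameter values (three lives and a 5% discount rate, neither of which comes from the exercise):

    import numpy as np

    rng = np.random.default_rng(1)
    lam = np.array([0.02, 0.03, 0.05])     # illustrative exponential parameters
    r = 0.05                               # illustrative discount rate

    apv_formula = lam.sum() / (r + lam.sum())

    # Monte Carlo: T is the minimum of independent exponential lifetimes, APV = E[exp(-r T)]
    t = rng.exponential(1 / lam, size=(1_000_000, lam.size)).min(axis=1)
    apv_mc = np.exp(-r * t).mean()

    print(apv_formula, apv_mc)             # the two estimates should agree to 3-4 decimal places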

14. Let X be the amount of a random claim measured in thousands of dollars and let Y be the reimbursement corresponding to this claim. From the given information,

$Y = \min(0.70\,(X - 0.5)_+,\ 5)$

where for any random variable W,

$W_+ = 0$ if $W \le 0$, and $W_+ = W$ if $W > 0$.

We are also given that $X \sim \mathrm{Pareto}(3, 2)$. Hence the survival function of X is given by

$S_X(x) = \left(\dfrac{2}{2 + x}\right)^3$ for $x \ge 0$.

a. To determine a formula for $F_Y$, we consider separately the cases $y < 0$, $y = 0$, $0 < y < 5$, and $y \ge 5$. Since the reimbursement amount Y lies between 0 and 5 (according to the terms of the contract), the cases $y < 0$ and $y \ge 5$ are trivial. In particular, $F_Y(y) = 0$ for $y < 0$ and $F_Y(y) = 1$ for $y \ge 5$. Hence we need only consider the cases $y = 0$ and $0 < y < 5$ in detail.

Consider first the case $y = 0$. Note that from the definition of Y, the event $Y \le 0$ is equivalent to the event $X \le 0.5$. Hence

$F_Y(0) = \Pr(Y \le 0) = \Pr(X \le 0.5) = F_X(0.5)$.

Since $S_X(x) = (2/(2+x))^3$ for $x \ge 0$, it follows that

$F_Y(0) = F_X(0.5) = 1 - S_X(0.5) = 1 - \left(\dfrac{2}{2.5}\right)^3 = 1 - \left(\dfrac{4}{5}\right)^3 = 0.488$.

Now consider the case $0 < y < 5$. From the definition of Y, the event $Y \le y$ is equivalent to the event $0.70\,(X - 0.5) \le y$, i.e., $X \le 0.5 + y/0.70$. Hence for $0 < y < 5$,

$F_Y(y) = \Pr(Y \le y) = \Pr\!\left(X \le 0.5 + \dfrac{y}{0.70}\right) = F_X\!\left(0.5 + \dfrac{y}{0.70}\right) = 1 - S_X\!\left(0.5 + \dfrac{y}{0.70}\right)$.

Since $S_X(x) = (2/(2+x))^3$ for $x \ge 0$, it follows that

$F_Y(y) = 1 - \left(\dfrac{2}{2.5 + y/0.70}\right)^3 = 1 - \left(\dfrac{1.4}{1.75 + y}\right)^3$ for $0 < y < 5$.

Combining the formulas for $F_Y$ in the various cases, we obtain

$F_Y(y) = 0$ for $y < 0$,
$F_Y(y) = 1 - \left(\dfrac{1.4}{1.75 + y}\right)^3$ for $0 \le y < 5$,
$F_Y(y) = 1$ for $y \ge 5$.

Note that Y has probability masses at $y = 0$ and $y = 5$ and a continuous distribution of probability on $(0, 5)$. The probability mass at $y = 0$ represents the event that the submitted claim is less than or equal to the deductible. The probability mass at $y = 5$ represents the event that the submitted claim is such that the cap has been reached.
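Because $F_Y$ mixes point masses with a continuous part, a quick Monte Carlo check is reassuring. The Python sketch below draws claims from the stated Pareto(3, 2) distribution with a standard inverse-transform sampler (the sampler is our own device, not part of the exercise) and compares the simulated masses at $y = 0$ and $y = 5$ with the values implied by the survival function.

    import numpy as np

    rng = np.random.default_rng(2)

    # inverse transform for Pareto(3, 2): S_X(x) = (2/(2+x))^3  =>  X = 2*(U^(-1/3) - 1)
    u = rng.random(1_000_000)
    x = 2 * (u ** (-1 / 3) - 1)
    y = np.minimum(0.70 * np.maximum(x - 0.5, 0), 5)

    print(np.mean(y == 0), 0.488)                    # mass at 0
    print(np.mean(y == 5), (1.4 / 6.75) ** 3)        # mass at 5, about 0.0089
    print(np.mean(y <= 1), 1 - (1.4 / 2.75) ** 3)    # F_Y(1) from the formula above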

b. The expected reimbursement is $E[Y]$. The simplest way to determine $E[Y]$ when $X \sim \mathrm{Pareto}(3, 2)$ is to express Y as a linear combination of functions of the form $\min(X, \cdot)$ and then use the formula for the limited expected value function of a Pareto random variable determined in exercise 5.

From the definition of the contract terms,

$Y = \min(0.70\,(X - 0.5)_+,\ 5)$.

Using properties of the minimum function and the fact that $(X - d)_+ = X - \min(X, d)$, we have

$Y = \min(0.70\,(X - 0.5)_+,\ 5) = 0.70\,\min\!\left((X - 0.5)_+,\ \dfrac{50}{7}\right) = 0.70\,\min\!\left(X - \min(X, 0.5),\ \dfrac{50}{7}\right) = 0.70\left[\min\!\left(X,\ \dfrac{50}{7} + \min(X, 0.5)\right) - \min(X, 0.5)\right]$.

Considering separately the cases $X < \dfrac{50}{7} + \dfrac{1}{2}$ and $X \ge \dfrac{50}{7} + \dfrac{1}{2}$ we also have

$\min\!\left(X,\ \dfrac{50}{7} + \min(X, 0.5)\right) = \min\!\left(X,\ \dfrac{50}{7} + \dfrac{1}{2}\right)$.

Hence

$Y = 0.70\left[\min\!\left(X,\ \dfrac{50}{7} + \dfrac{1}{2}\right) - \min\!\left(X,\ \dfrac{1}{2}\right)\right]$.

Taking the expectation, it follows that

$E[Y] = 0.70\left(E\!\left[X;\ \dfrac{50}{7} + \dfrac{1}{2}\right] - E\!\left[X;\ \dfrac{1}{2}\right]\right)$.

Now in the solution to exercise 5, students will have shown that if $X \sim \mathrm{Pareto}(s, \beta)$ with $s \ne 1$ then

$E[X; m] = \dfrac{\beta}{s - 1}\left[1 - \left(\dfrac{\beta}{\beta + m}\right)^{s-1}\right]$.

Hence for $X \sim \mathrm{Pareto}(3, 2)$ we have

$E[X; m] = 1 - \left(\dfrac{2}{2 + m}\right)^2$ for $m \ge 0$.

In particular,

$E\!\left[X;\ \dfrac{50}{7} + \dfrac{1}{2}\right] = 1 - \left(\dfrac{2}{2 + \frac{50}{7} + \frac{1}{2}}\right)^2 = 1 - \left(\dfrac{28}{135}\right)^2$

and

$E\!\left[X;\ \dfrac{1}{2}\right] = 1 - \left(\dfrac{2}{2 + \frac{1}{2}}\right)^2 = 1 - \left(\dfrac{4}{5}\right)^2$.

Since the distribution of submitted claims (in thousands of dollars) is assumed to be Pareto(3, 2), it follows that the expected reimbursement (in thousands of dollars) is

$E[Y] = 0.70\left(E\!\left[X;\ \dfrac{50}{7} + \dfrac{1}{2}\right] - E\!\left[X;\ \dfrac{1}{2}\right]\right) = 0.70\left[\left(1 - \left(\dfrac{28}{135}\right)^2\right) - \left(1 - \left(\dfrac{4}{5}\right)^2\right)\right] \approx 0.41788752$.

Consequently, the expected reimbursement is $417.89.
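The expected value just obtained can also be checked numerically. The Python sketch below evaluates the limited-expected-value formula directly and compares it with a Monte Carlo estimate of $E[Y]$, reusing the same hypothetical inverse-transform sampler as in the previous sketch.

    import numpy as np

    rng = np.random.default_rng(3)

    def lev_pareto(m, s=3.0, beta=2.0):
        # limited expected value E[X; m] for a Pareto(s, beta) claim, s != 1
        return beta / (s - 1) * (1 - (beta / (beta + m)) ** (s - 1))

    ey_formula = 0.70 * (lev_pareto(50 / 7 + 0.5) - lev_pareto(0.5))

    u = rng.random(2_000_000)
    x = 2 * (u ** (-1 / 3) - 1)                          # Pareto(3, 2) via inverse transform
    y = np.minimum(0.70 * np.maximum(x - 0.5, 0), 5)

    print(ey_formula, y.mean())      # both should be close to 0.41789, i.e., about $417.89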

Comment: The answer given in the "Answers to Selected Exercises" section at the back of the textbook is incorrect in some printings.

17. Consider a contract in which the insurer pays 80% of the first $2000 of eligible claim expenses in excess of a $200 deductible and 50% of any remaining claim expenses, with a maximum possible payment of $10,000. Let X be the size of a random loss and let Y be the insurer's payment toward this loss. Then

$Y = 0$ if $X < 200$,
$Y = (0.80)(X - 200)$ if $200 \le X < 2200$,
$Y = (0.80)(2000) + (0.50)(X - 2200)$ if $2200 \le X < x^*$,
$Y = 10{,}000$ if $X \ge x^*$,

where $x^*$ is such that

$(0.80)(2000) + (0.50)(x^* - 2200) = 10{,}000$,

i.e., $x^* = 19{,}000$.

a. The easiest way to decompose the contract into a portfolio of single layer contracts is to consider a graph of the insurer's payout Y as a function of the loss X:

[Figure: the payout Y plotted against the loss X for 0 <= X <= 20,000; Y rises in two segments of different slopes and then levels off at 10,000.]

By limiting the plot range, we can get a better sense of the true character of the graph for losses X with $X \le 500$:

[Figure: close-up of the payout Y for 0 <= X <= 500, together with a dotted reference line of slope 1.]

In this "close-up" graph, the dotted line has slope 1 and is included to give the reader a sense of the relative size of the slope of the graph of Y on the interval $200 < X < 2200$.

Put

$Y_1 = \min(Y,\ 1600)$

and

$Y_2 = Y - Y_1$.

From the graphs just given, it should be clear that $Y_1$ and $Y_2$ represent the payouts on single layer contracts and $Y_1 + Y_2 = Y$. This can be confirmed by considering graphs of $Y_1$ and $Y_2$ as functions of X and recalling the shape of the graph of the insurer's payout on a single layer combination contract:

[Figures: $Y_1$ and $Y_2$ each plotted against X for 0 <= X <= 20,000.]

By limiting the plot range of the graph of $Y_1$ we obtain a graph that is immediately recognizable as the payout on a single layer contract:

[Figure: $Y_1$ plotted against X for 0 <= X <= 2500.]

It follows from these graphs that $Y_1$ represents the insurer's payout on a single layer contract with deductible d = $200, coinsurance fraction a = 80% and maximum reimbursement m = $1600, and $Y_2$ represents the insurer's payout on a single layer contract with deductible d = $2200, coinsurance fraction a = 50% and maximum reimbursement m = $8400. Hence the double layer combination contract given in the statement of the question can be considered a portfolio of the single layer contracts 1 and 2 with respective parameters

d = 200, a = 0.80, m = 1600

and

d = 2200, a = 0.50, m = 8400.

Comment: The technique to decompose contracts with more than two layers is similar.

b. Let contract 1 be the single layer contract with parameters $d_1 = 200$, $a_1 = 0.80$, $m_1 = 1600$ and let contract 2 be the single layer contract with parameters $d_2 = 2200$, $a_2 = 0.50$, $m_2 = 8400$. In part a, it was shown that the double layer combination contract of this question is equivalent to a portfolio of the single layer contracts 1 and 2. Hence

$h_w(X) = h_w^{1}(X) + h_w^{2}(X)$

where $h_w^{i}(X)$ is the writer's payout on contract i and $h_w(X)$ is the writer's payout on the given double layer contract. We are interested in $E[h_w(X)]$ when X has an exponential distribution with mean 4000, i.e., $X \sim \mathrm{Exponential}(1/4000)$.

From the general formula in section 7.3 for the expected payout on a single layer contract we have

$E\!\left[h_w^{i}(X)\right] = a_i\left(E\!\left[X;\ d_i + \dfrac{m_i}{a_i}\right] - E[X;\ d_i]\right)$.

Moreover, for $X \sim \mathrm{Exponential}(\lambda)$ we have

$E[X; m] = \dfrac{1}{\lambda}\left(1 - e^{-\lambda m}\right)$

(see Example 2 of section 7.2 of the textbook). This follows directly from the formula $E[X; m] = \int_0^{m} S_X(x)\, dx$ for nonnegative X using the fact that $S_X(x) = e^{-\lambda x}$ for $X \sim \mathrm{Exponential}(\lambda)$. Consequently,

$E\!\left[h_w^{1}(X)\right] = a_1\left(E\!\left[X;\ d_1 + \dfrac{m_1}{a_1}\right] - E[X;\ d_1]\right) = (0.80)\left(E\!\left[X;\ 200 + \dfrac{1600}{0.80}\right] - E[X;\ 200]\right) = (0.80)\{E[X;\ 2200] - E[X;\ 200]\} = (0.80)\left\{4000\left(1 - e^{-2200/4000}\right) - 4000\left(1 - e^{-200/4000}\right)\right\} = 3200\left(e^{-0.05} - e^{-0.55}\right)$

and

$E\!\left[h_w^{2}(X)\right] = a_2\left(E\!\left[X;\ d_2 + \dfrac{m_2}{a_2}\right] - E[X;\ d_2]\right) = (0.50)\left(E\!\left[X;\ 2200 + \dfrac{8400}{0.50}\right] - E[X;\ 2200]\right) = (0.50)\{E[X;\ 19{,}000] - E[X;\ 2200]\} = (0.50)\left\{4000\left(1 - e^{-19{,}000/4000}\right) - 4000\left(1 - e^{-2200/4000}\right)\right\} = 2000\left(e^{-0.55} - e^{-4.75}\right)$.

Therefore, the expected payout to the policyholder (i.e., the policyholder's expected reimbursement) when X has an exponential distribution with mean $4000 is

$E[h_w(X)] = E\!\left[h_w^{1}(X)\right] + E\!\left[h_w^{2}(X)\right] = 3200\left(e^{-0.05} - e^{-0.55}\right) + 2000\left(e^{-0.55} - e^{-4.75}\right) = 3200\, e^{-0.05} - 1200\, e^{-0.55} - 2000\, e^{-4.75} \approx \$2334.29$.
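A short numerical check of the two-layer calculation (Python; the helper names are ours, and the layer parameters are the ones derived in part a):

    import numpy as np

    def lev_expon(m, mean=4000.0):
        # limited expected value E[X; m] for X ~ Exponential with the given mean
        return mean * (1 - np.exp(-m / mean))

    # layer decomposition from part a: (deductible d, coinsurance a, maximum reimbursement m)
    layers = [(200, 0.80, 1600), (2200, 0.50, 8400)]
    expected = sum(a * (lev_expon(d + m / a) - lev_expon(d)) for d, a, m in layers)
    print(expected)                          # about 2334.29

    # Monte Carlo check of the payout function itself
    rng = np.random.default_rng(4)
    x = rng.exponential(4000.0, 2_000_000)
    y = np.where(x < 200, 0.0,
        np.where(x < 2200, 0.80 * (x - 200),
                 np.minimum(1600 + 0.50 * (x - 2200), 10_000)))
    print(y.mean())                          # should also be close to 2334.29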

19. Let X be the total allowable expenses in thousands of dollars incurred during the year for a given policyholder and let Y be the amount reimbursed. To answer parts a and b of this question, we need to determine a formula for $E[Y]$ in which the deductible and the out-of-pocket maximum are unspecified.

Let d be the deductible in thousands of dollars and let M be the out-of-pocket maximum in thousands of dollars. (We use an upper case M here rather than a lower case m to avoid possible confusion with the notation of section 7.3 of the textbook, where m is used to denote the maximum amount payable by the insurer. Note that M is a fixed constant in this context, not a random variable, despite our convention of reserving upper case letters for random quantities.) Then the formula for Y in terms of d and M is

$Y = 0$ if $0 \le X \le d$,
$Y = (0.80)(X - d)$ if $d < X < b$,
$Y = X - M$ if $X \ge b$,

where b is the value of X at which the out-of-pocket maximum is breached. The value of b can be determined explicitly by solving the equation

$d + (0.20)(X - d) = M$.

The left hand side of this equation is the amount paid by the policyholder under the assumption that total expenses are greater than the deductible but less than the amount at which the out-of-pocket maximum is triggered. The right hand side of this equation is the out-of-pocket maximum. Note that the deductible is considered an out-of-pocket expense. Solving this equation we obtain

$X = d + \dfrac{M - d}{0.20}$.

Hence

$b = d + \dfrac{M - d}{0.20}$.

Substituting this into the formula for Y we obtain

$Y = 0$ if $0 \le X \le d$,
$Y = (0.80)(X - d)$ if $d < X < d + \dfrac{M - d}{0.20}$,
$Y = X - M$ if $X \ge d + \dfrac{M - d}{0.20}$.

Now by assumption, $X \sim \mathrm{Exponential}(1)$. Hence

$E[Y] = \displaystyle\int_0^{d} 0 \cdot f_X(x)\, dx + \displaystyle\int_{d}^{d + (M-d)/0.20} (0.80)(x - d) f_X(x)\, dx + \displaystyle\int_{d + (M-d)/0.20}^{\infty} (x - M) f_X(x)\, dx = 0 + \displaystyle\int_{d}^{5M - 4d} (0.80)(x - d) e^{-x}\, dx + \displaystyle\int_{5M - 4d}^{\infty} (x - M) e^{-x}\, dx$.

Applying integration by parts we have

$\displaystyle\int_{d}^{5M-4d} (0.80)(x - d) e^{-x}\, dx = \left[(0.80)(x - d)\left(-e^{-x}\right)\right]_{d}^{5M-4d} - 0.80 \displaystyle\int_{d}^{5M-4d} \left(-e^{-x}\right)\, dx = \left\{0.80\,(5M - 5d)\left(-e^{-5M+4d}\right) - 0\right\} + 0.80\left[-e^{-x}\right]_{d}^{5M-4d} = -4(M - d)\, e^{-5M+4d} + \left\{0.80\left(-e^{-5M+4d}\right) - 0.80\left(-e^{-d}\right)\right\} = 0.80\, e^{-d} - \left\{4(M - d) + 0.80\right\} e^{-5M+4d}$

and similarly

$\displaystyle\int_{5M-4d}^{\infty} (x - M) e^{-x}\, dx = \left[(x - M)\left(-e^{-x}\right)\right]_{5M-4d}^{\infty} - \displaystyle\int_{5M-4d}^{\infty} \left(-e^{-x}\right)\, dx = 0 - (4M - 4d)\left(-e^{-5M+4d}\right) + \left[-e^{-x}\right]_{5M-4d}^{\infty} = 4(M - d)\, e^{-5M+4d} + e^{-5M+4d} = \left(4(M - d) + 1\right) e^{-5M+4d}$.

Consequently,

$E[Y] = 0 + \left(0.80\, e^{-d} - \left(4(M - d) + 0.80\right) e^{-5M+4d}\right) + \left(4(M - d) + 1\right) e^{-5M+4d} = 0.80\, e^{-d} + 0.20\, e^{-5M+4d}$.

According to the given information, the insurer would like the average reimbursement per member to be no greater than $200. Therefore the values of M and d must be such that

$0.80\, e^{-d} + 0.20\, e^{-5M+4d} \le 0.20$.

a. Suppose that the deductible remains at $500, i.e., d = 0.5. Then the out-of-pocket maximum must be selected such that

$0.80\, e^{-0.5} + 0.20\, e^{-5M + 4(0.5)} \le 0.20$.

Since $0.80\, e^{-0.5} \approx 0.485 > 0.20$, there is no value of M for which this inequality holds. It follows that there is no level of the out-of-pocket maximum that would result in the insurer's average reimbursement being at most $200 if the deductible remains at $500.

b. Suppose that the maximum out-of-pocket expense remains at $1500, i.e., M = 1.5. Then the deductible must be selected such that

$0.80\, e^{-d} + 0.20\, e^{-5(1.5) + 4d} \le 0.20$,

that is,

$4 + e^{-7.5}\, e^{5d} - e^{d} \le 0$.

The latter inequality is a quintic in $e^d$ and must be solved numerically. We do this by considering the function g defined as $g(x) = e^{-7.5} x^5 - x + 4$. As a first step to determining the roots of g, it is useful to consider a graph of g for $x \ge 0$:

[Figure: graph of g(x) for 0 <= x <= 10; the curve remains above the x-axis.]

This graph suggests that $g(x) > 0$ for all $x > 0$. By considering the derivative $g'$ one can show using standard calculus techniques that this is in fact the case. It follows that

$4 + e^{-7.5}\, e^{5d} - e^{d} > 0$

for all $d \ge 0$. Therefore, there is no level of the deductible that would result in the insurer's average reimbursement being at most $200 if the out-of-pocket maximum remains at $1500.
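The claim that g has no positive root can be confirmed numerically. A minimal Python sketch; the single critical point of g for x > 0 is found by hand from $g'(x) = 5 e^{-7.5} x^4 - 1 = 0$, so no root-finding library is needed:

    import numpy as np

    g = lambda x: np.exp(-7.5) * x**5 - x + 4
    x_crit = (np.exp(7.5) / 5) ** 0.25        # solves 5 * exp(-7.5) * x^4 = 1

    print(x_crit, g(x_crit))                  # minimum value is about 0.51 > 0,
                                              # so g(x) > 0 for all x >= 0 and no deductible works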

24. Let P be the required annual premium payable continuously under the requirement that the present value of premium payments exceeds the present value of benefit payments 95% of the time. Let T be the future lifetime of the insured and let r be the per annum continuously compounded rate of interest. By assumption, T is uniformly distributed on (0, 20) and r = 6%. From the discussion in section 7.4 of the textbook, the present value of premium payments is

$P \cdot \dfrac{1 - e^{-rT}}{r}$

and the present value of benefits is

$10{,}000\, e^{-rT}$.

Hence the requirement on the premium P is

$\Pr\!\left[P\, \dfrac{1 - e^{-rT}}{r} > 10{,}000\, e^{-rT}\right] = 0.95$.

Now

$P\, \dfrac{1 - e^{-rT}}{r} > 10{,}000\, e^{-rT} \iff P - P e^{-rT} > 10{,}000\, r\, e^{-rT} \iff (P + 10{,}000\, r)\, e^{-rT} < P \iff e^{-rT} < \dfrac{P}{P + 10{,}000\, r} \iff T > -\dfrac{1}{r} \log\!\left(\dfrac{P}{P + 10{,}000\, r}\right)$.

Hence

$\Pr\!\left[P\, \dfrac{1 - e^{-rT}}{r} > 10{,}000\, e^{-rT}\right] = \Pr\!\left[T > -\dfrac{1}{r} \log\!\left(\dfrac{P}{P + 10{,}000\, r}\right)\right] = \dfrac{20 - \left(-\dfrac{1}{r} \log\!\left(\dfrac{P}{P + 10{,}000\, r}\right)\right)}{20}$

(provided that $-\dfrac{1}{r} \log\!\left(\dfrac{P}{P + 10{,}000\, r}\right) \le 20$). It follows from the requirement

$\Pr\!\left[P\, \dfrac{1 - e^{-rT}}{r} > 10{,}000\, e^{-rT}\right] = 0.95$

that

$\dfrac{20 + \dfrac{1}{r} \log\!\left(\dfrac{P}{P + 10{,}000\, r}\right)}{20} = 0.95$.

Substituting r = 0.06 into this equation and rearranging as appropriate, we find that

$\log\!\left(\dfrac{P}{P + 600}\right) = -0.06$

or equivalently

$\dfrac{P}{P + 600} = e^{-0.06}$.

Solving this equation for P we have

$P = \dfrac{600\, e^{-0.06}}{1 - e^{-0.06}} \approx 9{,}703.00$.

Therefore the annual premium payable continuously that should be charged if the insurer wishes the present value of premiums to exceed the present value of benefits 95% of the time is $9,703. This is significantly greater than the annual premium determined by equating actuarial present values ($836.57). In fact the annual premium payable continuously is greater than the single premium that one must pay today under an analogous 95% criterion ($9,417.65)!

To understand these apparently contradictory results, note that the $9,703 figure just calculated is a per annum amount that is payable continuously. In the absence of life contingencies (i.e., assuming survival to the end of the year), $9,703 payable continuously throughout the year is equivalent to

$\$9{,}703 \cdot \dfrac{1 - e^{-0.06}}{0.06} = \$9{,}417.65$

payable at the beginning of the year or

$\$9{,}417.65\, e^{0.06} = \$10{,}000$

payable at the end of the year. Hence the 95% criterion in the current exercise amounts to setting the premium under the assumption that death occurs at the end of the first year of the policy. The reason that this happens is that the future lifetime of the policyholder is uniformly distributed on (0, 20) and as a result the probability of survival beyond the first year is exactly 95%. If the future lifetime were modeled using any other distribution, the premium could not be set in such a simple way.
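As a check on the 95% criterion, the following Python sketch simulates the uniform future lifetime and verifies that, with the premium just computed, the present value of premiums exceeds the present value of the benefit about 95% of the time (the random seed is arbitrary):

    import numpy as np

    r, benefit = 0.06, 10_000
    P = 600 * np.exp(-0.06) / (1 - np.exp(-0.06))
    print(P)                                              # about 9703

    rng = np.random.default_rng(5)
    T = rng.uniform(0, 20, 1_000_000)                     # future lifetime, uniform on (0, 20)
    pv_premiums = P * (1 - np.exp(-r * T)) / r
    pv_benefit = benefit * np.exp(-r * T)
    print(np.mean(pv_premiums > pv_benefit))              # should be close to 0.95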

26. Let T be the lifetime of the system measured in hours and let $T_A, T_B, T_C, T_D$ be the lifetimes of the system components A, B, C, D respectively. Then from the configuration given in Figure 7.13 of the textbook,

$T = \max(\min(T_A, T_B),\ \max(T_C, T_D))$.

Put $Y_1 = \min(T_A, T_B)$ and $Y_2 = \max(T_C, T_D)$. Then since $T_A, T_B, T_C, T_D$ are independent, the distributions of $Y_1$ and $Y_2$ are given by

$S_{Y_1}(t) = S_{T_A}(t)\, S_{T_B}(t)$,
$F_{Y_2}(t) = F_{T_C}(t)\, F_{T_D}(t)$

(see section 7.5 of the textbook) and so the distribution of T is given by

$F_T(t) = F_{Y_1}(t)\, F_{Y_2}(t) = \left(1 - S_{T_A}(t)\, S_{T_B}(t)\right)\left(F_{T_C}(t)\, F_{T_D}(t)\right)$.

Now by assumption, $T_A \sim \mathrm{Exponential}(1/4)$, $T_B \sim \mathrm{Exponential}(1/3)$, $T_C \sim \mathrm{Exponential}(1/2)$, $T_D \sim \mathrm{Exponential}(1)$. Hence, the distribution function of T is given by

$F_T(t) = \left(1 - S_{T_A}(t)\, S_{T_B}(t)\right) F_{T_C}(t)\, F_{T_D}(t) = \left(1 - e^{-t/4}\, e^{-t/3}\right)\left(1 - e^{-t/2}\right)\left(1 - e^{-t}\right) = \left(1 - e^{-7t/12}\right)\left(1 - e^{-t/2}\right)\left(1 - e^{-t}\right)$

for $t \ge 0$.

Since T (being a lifetime random variable) is nonnegative, the expected value of T can be determined using the formula $E[T] = \int_0^{\infty} S_T(t)\, dt$. Hence

$E[T] = \displaystyle\int_0^{\infty} S_T(t)\, dt = \displaystyle\int_0^{\infty} \left(1 - F_T(t)\right) dt = \displaystyle\int_0^{\infty} \left\{1 - \left(1 - e^{-7t/12}\right)\left(1 - e^{-t/2}\right)\left(1 - e^{-t}\right)\right\} dt$.

However,

$\left(1 - e^{-7t/12}\right)\left(1 - e^{-t/2}\right)\left(1 - e^{-t}\right) = \left(1 - e^{-7t/12}\right)\left(1 - e^{-t/2} - e^{-t} + e^{-3t/2}\right) = 1 - e^{-t/2} - e^{-t} + e^{-3t/2} - e^{-7t/12} + e^{-13t/12} + e^{-19t/12} - e^{-25t/12}$.

Consequently,

$E[T] = \displaystyle\int_0^{\infty} \left\{e^{-t/2} + e^{-t} - e^{-3t/2} + e^{-7t/12} - e^{-13t/12} - e^{-19t/12} + e^{-25t/12}\right\} dt$.

Using the fact that $\int_0^{\infty} e^{-\lambda t}\, dt = 1/\lambda$ we obtain

$E[T] = 2 + 1 - \dfrac{2}{3} + \dfrac{12}{7} - \dfrac{12}{13} - \dfrac{12}{19} + \dfrac{12}{25} = \dfrac{385{,}519}{129{,}675} \approx 2.97296318$.

Hence the expected life of the system is about 3 hours.
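A quick Monte Carlo check of the system lifetime (Python; the component means 4, 3, 2, and 1 hours correspond to the exponential rates assumed above):

    import numpy as np

    rng = np.random.default_rng(6)
    n = 2_000_000

    ta = rng.exponential(4, n)
    tb = rng.exponential(3, n)
    tc = rng.exponential(2, n)
    td = rng.exponential(1, n)

    t = np.maximum(np.minimum(ta, tb), np.maximum(tc, td))
    print(t.mean(), 385_519 / 129_675)     # both should be close to 2.973 hours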

30. Suppose that $X = \tan(\Theta)$ and $\Theta$ is uniformly distributed on $(-\pi, \pi)$. The distribution function for X can be determined by considering a graph of the transformation $X = \tan(\Theta)$ with the event $X \le x$ highlighted:

[Figure: graph of $X = \tan(\Theta)$ for $-\pi < \Theta < \pi$, with vertical asymptotes at $\Theta = \pm\pi/2$ and the portions of the $\Theta$-axis corresponding to the event $X \le x$ shaded.]

Note that in this graph it is assumed that $x > 0$.

Suppose that $x > 0$. Then the event $X \le x$ is equivalent to the event

$-\pi < \Theta \le \arctan(x) - \pi$ or $-\dfrac{\pi}{2} < \Theta \le \arctan(x)$ or $\dfrac{\pi}{2} < \Theta < \pi$

where $\arctan(x)$ is the branch of the inverse tangent function with values in the interval $\left(-\dfrac{\pi}{2}, \dfrac{\pi}{2}\right)$. Hence

$\Pr(X \le x) = \Pr\!\left(-\pi < \Theta \le \arctan(x) - \pi\right) + \Pr\!\left(-\dfrac{\pi}{2} < \Theta \le \arctan(x)\right) + \Pr\!\left(\dfrac{\pi}{2} < \Theta < \pi\right)$

and so since $\Theta$ is uniformly distributed on $(-\pi, \pi)$,

$\Pr(X \le x) = \dfrac{(\arctan(x) - \pi) - (-\pi)}{2\pi} + \dfrac{\arctan(x) - \left(-\frac{\pi}{2}\right)}{2\pi} + \dfrac{\pi - \frac{\pi}{2}}{2\pi} = \dfrac{\arctan(x)}{\pi} + \dfrac{1}{2}$.

On the other hand, suppose that $x \le 0$. Then the event $X \le x$ is equivalent to the event

$-\dfrac{\pi}{2} < \Theta \le \arctan(x)$ or $\dfrac{\pi}{2} < \Theta < \arctan(x) + \pi$.

Hence

$\Pr(X \le x) = \Pr\!\left(-\dfrac{\pi}{2} < \Theta \le \arctan(x)\right) + \Pr\!\left(\dfrac{\pi}{2} < \Theta < \arctan(x) + \pi\right) = \dfrac{\arctan(x) - \left(-\frac{\pi}{2}\right)}{2\pi} + \dfrac{\arctan(x) + \pi - \frac{\pi}{2}}{2\pi} = \dfrac{\arctan(x)}{\pi} + \dfrac{1}{2}$

which is the same as the formula obtained in the case $x > 0$.

Consequently, the distribution function of X is given by

$F_X(x) = \dfrac{1}{2} + \dfrac{\arctan(x)}{\pi}$ for $x \in \mathbb{R}$

and so the density function of X is

$f_X(x) = F_X'(x) = \dfrac{1}{\pi} \cdot \dfrac{d}{dx} \arctan(x) = \dfrac{1}{\pi} \cdot \dfrac{1}{1 + x^2}$ for $x \in \mathbb{R}$

as required.
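The distribution function just derived can be verified by simulation. The Python sketch below draws $\Theta$ uniformly on $(-\pi, \pi)$ and compares the empirical distribution of $\tan(\Theta)$ with $\tfrac{1}{2} + \arctan(x)/\pi$ at a few arbitrarily chosen points:

    import numpy as np

    rng = np.random.default_rng(7)
    theta = rng.uniform(-np.pi, np.pi, 1_000_000)
    x = np.tan(theta)

    for q in (-2.0, -0.5, 0.0, 1.0, 3.0):
        empirical = np.mean(x <= q)
        formula = 0.5 + np.arctan(q) / np.pi
        print(q, empirical, formula)      # the two columns should agree to about 3 decimals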


Chapter Eight Solutions

3. Let X, Y be random variables with joint density function

$f_{X,Y}(x, y) = 1$ for $x + 2y \le 2$, $x \ge 0$, $y \ge 0$, and $f_{X,Y}(x, y) = 0$ otherwise,

and put $S = X + Y$ and $D = X - Y$. We are required to determine the distributions of S and D.

Note that X and Y are not independent. This follows immediately from an examination of the region of nonzero probability for $f_{X,Y}$:

[Figure: the triangular region $x + 2y \le 2$, $x \ge 0$, $y \ge 0$ in the x-y plane, with vertices (0, 0), (2, 0), and (0, 1).]

Since $f_{X,Y}$ is constant on the region of nonzero probability, probabilities associated with X and Y can be determined by calculating areas in the x-y plane and then multiplying by the constant value of $f_{X,Y}$, which is 1 in this case. This is the approach we take to determine the distribution functions of S and D.

Distribution of S: From the graph of the region of nonzero probability for $f_{X,Y}$ the calculation of $F_S(s) = \Pr(S \le s)$ can be separated into four cases:

i. $s \le 0$,

ii. $0 < s \le 1$,

iii. $1 < s \le 2$,

iv. $s > 2$.

This corresponds to a partition of s values according to the shapes of the regions

$\{(x, y) : x + y \le s,\ x + 2y \le 2,\ x \ge 0,\ y \ge 0\}$.

Since $f_{X,Y}$ is 0 outside the region $\{(x, y) : x + 2y \le 2,\ x \ge 0,\ y \ge 0\}$, it is clear that $F_S(s) = 0$ for all $s \le 0$ and $F_S(s) = 1$ for all $s > 2$. Hence we need only consider the cases $0 < s \le 1$, $1 < s \le 2$ in detail.

Consider first the case $0 < s \le 1$. In this case, the region of integration for calculating $F_S(s) = \Pr(S \le s) = \Pr(X + Y \le s)$ is triangular with base and height equal to s:

[Figure: the triangle below the line $X + Y = s$ inside the region of nonzero probability, for $0 < s \le 1$.]

Since $f_{X,Y}$ is constant and equal to 1 on this region, it follows that

$F_S(s) = \dfrac{1}{2} s^2$ for $0 < s \le 1$.

Now consider the case $1 < s \le 2$. In this case the region of integration for calculating $\Pr(X + Y \le s)$ is a quadrilateral that can be decomposed into a rectangle and two right-angled triangles:

[Figure: the quadrilateral below the line $X + Y = s$ inside the region of nonzero probability, for $1 < s \le 2$; the line meets the x-axis at $x = s$ and the boundary $x + 2y = 2$ at $x = 2s - 2$, $y = 2 - s$.]

Using basic geometry, the area of the shaded region is

$\dfrac{1}{2}(2s - 2)\left(1 - (2 - s)\right) + (2 - s)(2s - 2) + \dfrac{1}{2}\left(s - (2s - 2)\right)(2 - s) = (s - 1)(s - 1) + 2(2 - s)(s - 1) + \dfrac{1}{2}(2 - s)(2 - s) = -\dfrac{1}{2} s^2 + 2s - 1$.

Hence

$F_S(s) = -\dfrac{1}{2} s^2 + 2s - 1$ for $1 < s \le 2$.

Consequently, the distribution function of S is given by

$F_S(s) = 0$ for $s \le 0$,
$F_S(s) = \dfrac{1}{2} s^2$ for $0 < s \le 1$,
$F_S(s) = -\dfrac{1}{2} s^2 + 2s - 1$ for $1 < s \le 2$,
$F_S(s) = 1$ for $s > 2$.

The graph of $F_S$ is as follows:

[Figure: graph of $F_S(s)$ for $0 \le s \le 2$, increasing from 0 to 1.]

From this graph and the preceding formula for $F_S$ it is clear that $F_S$ has the properties of a distribution function. Hence there is no obvious mistake in the formula derived for $F_S$.

Distribution of D: From the graph of the region of nonzero probability for $f_{X,Y}$ the calculation of $F_D(d) = \Pr(D \le d)$ can be separated into four cases:

i. $d \le -1$,

ii. $-1 < d \le 0$,

iii. $0 < d \le 2$,

iv. $d > 2$.

This corresponds to a partition of d values according to the shapes of the regions

$\{(x, y) : x - y \le d,\ x + 2y \le 2,\ x \ge 0,\ y \ge 0\}$.

Since $f_{X,Y}$ is 0 outside the region $\{(x, y) : x + 2y \le 2,\ x \ge 0,\ y \ge 0\}$, it is clear that $F_D(d) = 0$ for all $d \le -1$ and $F_D(d) = 1$ for all $d > 2$. Hence we need only consider the cases $-1 < d \le 0$, $0 < d \le 2$ in detail.

Consider first the case $-1 < d \le 0$. In this case, the region of integration for calculating $F_D(d) = \Pr(D \le d) = \Pr(X - Y \le d)$ is triangular:

[Figure: the triangle to the left of the line $X - Y = d$ inside the region of nonzero probability, with vertices $(0, -d)$, $(0, 1)$, and $\left(\frac{2}{3}(d + 1), \frac{1}{3}(2 - d)\right)$.]

Using basic geometry, the area of the shaded region is

$\dfrac{1}{2}\left(1 - (-d)\right) \cdot \dfrac{2}{3}(d + 1) = \dfrac{1}{3}(d + 1)^2$.

Since $f_{X,Y}$ is constant and equal to 1 on this region, it follows that

$F_D(d) = \dfrac{1}{3}(d + 1)^2$ for $-1 < d \le 0$.

Now consider the case $0 < d \le 2$. In this case the region of integration for calculating $\Pr(X - Y \le d)$ is a quadrilateral:

[Figure: the quadrilateral to the left of the line $X - Y = d$ inside the region of nonzero probability; the line meets the x-axis at $(d, 0)$ and the boundary $x + 2y = 2$ at $\left(\frac{2(d + 1)}{3}, \frac{2 - d}{3}\right)$.]

The area of the shaded region can be determined by subtracting the area of the triangle bounded by the points $(d, 0)$, $(2, 0)$, $\left(\frac{2}{3} d + \frac{2}{3}, \frac{2}{3} - \frac{1}{3} d\right)$ from the area of the triangle bounded by the points $(0, 0)$, $(2, 0)$, $(0, 1)$. Taking this approach, the shaded area is

$\dfrac{1}{2}(2)(1) - \dfrac{1}{2}(2 - d)\left(\dfrac{2}{3} - \dfrac{d}{3}\right) = 1 - \dfrac{1}{6}(2 - d)^2$.

Since $f_{X,Y}$ is constant and equal to 1 on the shaded region, it follows that

$F_D(d) = 1 - \dfrac{1}{6}(2 - d)^2$ for $0 < d \le 2$.

Consequently, the distribution function of D is given by

$F_D(d) = 0$ for $d \le -1$,
$F_D(d) = \dfrac{1}{3}(d + 1)^2$ for $-1 < d \le 0$,
$F_D(d) = 1 - \dfrac{1}{6}(2 - d)^2$ for $0 < d \le 2$,
$F_D(d) = 1$ for $d > 2$.

The graph of $F_D$ is as follows:

[Figure: graph of $F_D(d)$ for $-1 \le d \le 2$, increasing from 0 to 1.]

From this graph and the preceding formula for $F_D$ it is clear that $F_D$ has the properties of a distribution function. Hence there is no obvious mistake in the formula derived for $F_D$.
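Both formulas can be checked by simulating (X, Y) uniformly on the triangular region. A minimal Python sketch; rejection sampling from the bounding rectangle is our own choice of sampler, not part of the exercise:

    import numpy as np

    rng = np.random.default_rng(8)
    x = rng.uniform(0, 2, 4_000_000)
    y = rng.uniform(0, 1, 4_000_000)
    keep = x + 2 * y <= 2                       # keep points inside the triangle
    x, y = x[keep], y[keep]

    s, d = x + y, x - y

    def F_S(s):   # valid for 0 <= s <= 2
        return 0.5 * s**2 if s <= 1 else -0.5 * s**2 + 2 * s - 1

    def F_D(d):   # valid for -1 <= d <= 2
        return (d + 1)**2 / 3 if d <= 0 else 1 - (2 - d)**2 / 6

    for q in (0.5, 1.0, 1.5):
        print("S", q, np.mean(s <= q), F_S(q))  # empirical vs. formula for S
    for q in (-0.5, 0.5, 1.5):
        print("D", q, np.mean(d <= q), F_D(q))  # empirical vs. formula for D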

Comment on Exercise 10: The solution to exercise 10 makes use of the following formula:

$\displaystyle\int_{x=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\, dx = \dfrac{1}{2}\, e^{-|u|}$ for $u \in \mathbb{R}$.

To facilitate understanding of the solution to exercise 10 we will derive this formula here. The derivation is also interesting in its own right as it demonstrates an integration technique that is often useful.

Put

$g(u) = \displaystyle\int_{x=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\, dx$.

From this definition, it should be clear that $g(u) = g(-u)$ for all u. Hence, it suffices to prove that

$g(u) = \dfrac{1}{2}\, e^{-u}$ for $u \ge 0$.

We will show that

$g'(u) = -g(u)$ for $u > 0$

and then use standard solution techniques for differential equations to deduce that $g(u) = \dfrac{1}{2}\, e^{-u}$ for $u \ge 0$.

Differentiating g (by interchanging the order of differentiation and integration as appropriate) we obtain

$\dfrac{d}{du} \displaystyle\int_{x=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\, dx = \displaystyle\int_{x=0}^{\infty} \dfrac{\partial}{\partial u} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\, dx = \displaystyle\int_{x=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\left(-\dfrac{1}{2} \cdot \dfrac{1}{x^2}\,(2u)\right) dx = -u \displaystyle\int_{x=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\, \dfrac{1}{x^2}\, dx$.

Applying the substitution $x = u/y$ with fixed $u > 0$ to the latter integral we have

$-u \displaystyle\int_{x=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\, \dfrac{1}{x^2}\, dx = -u \displaystyle\int_{y=\infty}^{0} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left((u/y)^2 + y^2\right)} \left(\dfrac{y}{u}\right)^2 \left(-\dfrac{u}{y^2}\right) dy = -\displaystyle\int_{y=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left((u/y)^2 + y^2\right)}\, dy = -g(u)$.

Hence

$g'(u) = -g(u)$ for $u > 0$

as claimed.

Now the equation $g'(u) = -g(u)$ is a separable differential equation and can be solved fairly easily. Written in terms of the variables x and y where $y = g(x)$ the equation has the form

$\dfrac{dy}{dx} = -y$.

Separating the variables in the standard way, we have

$\dfrac{dy}{y} = -dx$

from which we obtain, on integration of both sides,

$\log y = -x + C$

where C is an arbitrary constant. Solving for y, we obtain the solution

$y = y_0\, e^{-x}$

where $y_0$ is the value of y when $x = 0$. In terms of the earlier notation we have

$g(u) = g(0)\, e^{-u}$ for $u > 0$.

To complete the derivation we need only determine the value of $g(0)$. From the definition of g we have

$g(0) = \displaystyle\int_0^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} x^2}\, dx$.

The integrand of the latter integral is the density function for a standard normal random variable, i.e., it is the density function for Normal(0, 1). Since the standard normal density is symmetric about 0 and probability densities integrate to 1 it follows that

$g(0) = \dfrac{1}{2}$.

Consequently,

$g(u) = \dfrac{1}{2}\, e^{-u}$ for $u \ge 0$.

Since $g(u) = g(-u)$ for all u, it follows that

$g(u) = \dfrac{1}{2}\, e^{-|u|}$ for all u.

Therefore

$\displaystyle\int_{x=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\, dx = \dfrac{1}{2}\, e^{-|u|}$ for $u \in \mathbb{R}$,

which is the formula we set out to prove.
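The identity can also be confirmed numerically. A minimal Python sketch using scipy's quad (an implementation convenience, not part of the derivation):

    import numpy as np
    from scipy.integrate import quad

    def g(u):
        return quad(lambda x: np.exp(-0.5 * (x**2 + (u / x) ** 2)) / np.sqrt(2 * np.pi),
                    0, np.inf)[0]

    for u in (-2.0, -0.5, 0.0, 1.0, 3.0):
        print(u, g(u), 0.5 * np.exp(-abs(u)))   # the two columns should match closely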

10. Suppose that X and Y are independent random variables with $X \sim \mathrm{Exponential}(1)$ and $Y \sim \mathrm{Normal}(0, 1)$. We are asked to determine the density function for $X^{1/2} Y$.

Note that for $X \sim \mathrm{Exponential}(1)$, $X^{1/2} \sim \mathrm{Weibull}(1, 2)$ (see section 6.2.1 of the textbook). Hence the distribution of $X^{1/2} Y$ is identical to the distribution of $V Y$ where $V \sim \mathrm{Weibull}(1, 2)$.

In exercise 11, the student is asked to determine the distribution of $X Y$ under the assumption that $X \sim \mathrm{Normal}(0, 1)$, $Y \sim \mathrm{Weibull}(\sqrt{2}, 2)$, and X, Y are independent. (The distribution types of X and Y are not explicitly given in the statement of the exercise, but they are easy to deduce from the forms of the given density functions.) From the observations just made, this is essentially the same problem as the current exercise (the only difference being the parameters of the Weibull distribution). Hence the density function obtained in the current exercise should have the same basic form as the one given in exercise 11.

Put $W = X^{1/2} Y$ where $X \sim \mathrm{Exponential}(1)$, $Y \sim \mathrm{Normal}(0, 1)$, and X, Y are independent. We will determine the distribution of W directly without making use of the observation that $X^{1/2} \sim \mathrm{Weibull}(1, 2)$. Since X cannot assume negative values, $X^{1/2}$ is well defined and it makes sense to ask for the distribution of W.

Note that the distribution of W is symmetric about $w = 0$. In particular,

$\Pr(W \le w) = \Pr(W \ge -w)$ for all w.

The demonstration of this fact is more subtle than it may at first appear: Suppose that we have determined a formula for the distribution function of $W = X^{1/2} Y$. Then since $Y \sim \mathrm{Normal}(0, 1)$ implies $-Y \sim \mathrm{Normal}(0, 1)$, and since $-Y$ and X are independent whenever Y and X are independent, we can follow exactly the same steps to determine the distribution function of $-W = X^{1/2}(-Y)$, and the result will be the same distribution function. Consequently, when $X \sim \mathrm{Exponential}(1)$, $Y \sim \mathrm{Normal}(0, 1)$ and X, Y are independent, the distributions of $X^{1/2} Y$ and $-X^{1/2} Y$ are identical. (Note that this argument does not contradict the comments made in section 4.1.11 of the textbook where it is pointed out that substitution of equivalent random variables is not in general valid.) Hence for any w,

$\Pr(W \le w) = \Pr\!\left(X^{1/2} Y \le w\right) = \Pr\!\left(-X^{1/2} Y \ge -w\right) = \Pr(W \ge -w)$.

It follows that the distribution of W is symmetric. Consequently, it suffices to determine the density of W in the case $w > 0$.

Suppose henceforth that $w > 0$. We determine $\Pr(W > w)$ by integrating the joint density of X and Y over the region defined by the event $X^{1/2} Y > w$. From the given information, the joint density of X and Y is

$f_{X,Y}(x, y) = \dfrac{1}{\sqrt{2\pi}}\, e^{-x}\, e^{-y^2/2}$ for $x > 0$ and $y \in \mathbb{R}$.

A graph of the event $X^{1/2} Y > w$ can be created using Mathematica or similar computer software.

[Figure: the region in the x-y plane above the curve $X^{1/2} Y = w$, i.e., the points with $y > 0$ and $x > w^2/y^2$.]

Carrying out the integration of $f_{X,Y}$ over the shaded region, we have

$\Pr(W \ge w) = \Pr\!\left(Y \ge \dfrac{w}{X^{1/2}}\right) = \displaystyle\iint_{y \ge w/x^{1/2},\, x>0,\, y>0} \dfrac{1}{\sqrt{2\pi}}\, e^{-x}\, e^{-y^2/2}\, dx\, dy = \displaystyle\int_{y=0}^{\infty} \displaystyle\int_{x=(w/y)^2}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, e^{-x}\, dx\, dy = \displaystyle\int_{y=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-y^2/2} \left(\displaystyle\int_{x=(w/y)^2}^{\infty} e^{-x}\, dx\right) dy = \displaystyle\int_{y=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, e^{-(w/y)^2}\, dy = \displaystyle\int_{y=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left\{y^2 + \left(w\sqrt{2}/y\right)^2\right\}}\, dy$.

Now from the comment preceding the solution to this exercise,

$\displaystyle\int_{x=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(x^2 + (u/x)^2\right)}\, dx = \dfrac{1}{2}\, e^{-|u|}$, $u \in \mathbb{R}$.

Substituting $u = w\sqrt{2}$ and using the assumption $w > 0$ we have

$\displaystyle\int_{y=0}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left\{y^2 + \left(w\sqrt{2}/y\right)^2\right\}}\, dy = \dfrac{1}{2}\, e^{-w\sqrt{2}}$.

Hence

$\Pr(W \ge w) = \dfrac{1}{2}\, e^{-w\sqrt{2}}$ for $w > 0$.

Since W is continuous, $\Pr(W = w) = 0$ for all w. Consequently,

$S_W(w) = \Pr(W > w) = \Pr(W \ge w) - \Pr(W = w) = \dfrac{1}{2}\, e^{-w\sqrt{2}} - 0 = \dfrac{1}{2}\, e^{-w\sqrt{2}}$ for $w > 0$

and

$f_W(w) = -S_W'(w) = -\dfrac{d}{dw}\left(\dfrac{1}{2}\, e^{-w\sqrt{2}}\right) = \dfrac{1}{\sqrt{2}}\, e^{-w\sqrt{2}}$ for $w > 0$.

Since the distribution of W is symmetric, it follows that the density function of W is in general

$f_W(w) = \dfrac{1}{\sqrt{2}}\, e^{-|w|\sqrt{2}}$ for $w \in \mathbb{R}$.

This is the density function that we were required to derive.
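A simulation check of the density just derived. The Python sketch below compares simulated tail probabilities of $W = X^{1/2} Y$ with the survival function $\tfrac{1}{2} e^{-w\sqrt{2}}$ obtained above:

    import numpy as np

    rng = np.random.default_rng(9)
    n = 2_000_000
    w = np.sqrt(rng.exponential(1.0, n)) * rng.standard_normal(n)

    for q in (0.5, 1.0, 2.0):
        print(q, np.mean(w > q), 0.5 * np.exp(-np.sqrt(2) * q))   # Pr(W > q) vs. formula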

15. We are asked to show that if $X_1, \ldots, X_n$ are independent Cauchy random variables then the arithmetic average $\bar{X}_n = (X_1 + \cdots + X_n)/n$ also has a Cauchy distribution.

At first glance this may appear to be a violation of the law of large numbers since the distribution of $\bar{X}_n$ is the same for all n, and hence does not become more concentrated as n increases. However, recall that one of the conditions specified in the statement of the law of large numbers is that the mean of the distribution exist and be finite (see section 8.4.2 of the textbook and take note of the section of the proof where this condition is used). From section 7.6 of the textbook, we know that the mean of the Cauchy distribution does not exist. Hence the law of large numbers is not applicable to random variables having a Cauchy distribution.

To prove the required result, it suffices to show that a weighted sum of any two independent Cauchy random variables is Cauchy. Indeed, suppose that $k Y_1 + (1 - k) Y_2$ is Cauchy whenever $Y_1$ and $Y_2$ are independent Cauchy random variables and $0 < k < 1$. Then since

$\dfrac{X_1 + \cdots + X_n}{n} = \dfrac{n - 1}{n} \cdot \dfrac{X_1 + \cdots + X_{n-1}}{n - 1} + \dfrac{1}{n}\, X_n$,

it follows by induction that $(X_1 + \cdots + X_n)/n$ has a Cauchy distribution whenever $X_1, \ldots, X_n$ are independent Cauchy random variables.

Hence suppose that $Y_1$ and $Y_2$ are independent Cauchy random variables, that $0 < k < 1$, and consider the weighted sum $S = k Y_1 + (1 - k) Y_2$. Since $Y_1$ and $Y_2$ are independent, the density function of S can be determined using the convolution formula given in section 8.1.3 of the textbook:

$f_S(s) = \displaystyle\int_{-\infty}^{\infty} f_{k Y_1}(x)\, f_{(1-k) Y_2}(s - x)\, dx$.

Since $Y_1$ and $Y_2$ are Cauchy random variables, the densities of $k Y_1$ and $(1 - k) Y_2$ are as follows:

$f_{k Y_1}(x) = \dfrac{k}{\pi} \cdot \dfrac{1}{x^2 + k^2}$,

$f_{(1-k) Y_2}(x) = \dfrac{1 - k}{\pi} \cdot \dfrac{1}{x^2 + (1 - k)^2}$.

Hence

$f_{k Y_1}(x)\, f_{(1-k) Y_2}(s - x) = \dfrac{k (1 - k)}{\pi^2} \cdot \dfrac{1}{x^2 + k^2} \cdot \dfrac{1}{(x - s)^2 + (1 - k)^2}$.

Using the method of partial fractions, the latter expression can be decomposed into a sum of functions that are more easily integrated. Applying this method we have

$\dfrac{1}{x^2 + k^2} \cdot \dfrac{1}{(x - s)^2 + (1 - k)^2} = \dfrac{A + B x}{x^2 + k^2} + \dfrac{C + D x}{(x - s)^2 + (1 - k)^2}$

where

$A = \dfrac{s^2 + 1 - 2k}{\left(s^2 + 1 - 2k\right)^2 + 4 s^2 k^2}$,

$B = \dfrac{2s}{\left(s^2 + 1 - 2k\right)^2 + 4 s^2 k^2}$,

$C = \dfrac{3 s^2 - 1 + 2k}{\left(s^2 + 1 - 2k\right)^2 + 4 s^2 k^2}$,

$D = -B$.

Consequently,

$f_S(s) = \displaystyle\int_{-\infty}^{\infty} f_{k Y_1}(x)\, f_{(1-k) Y_2}(s - x)\, dx = \dfrac{k (1 - k)}{\pi^2} \displaystyle\int_{-\infty}^{\infty} \dfrac{1}{x^2 + k^2} \cdot \dfrac{1}{(x - s)^2 + (1 - k)^2}\, dx = \dfrac{k (1 - k)}{\pi^2}\left[\displaystyle\int_{-\infty}^{\infty} \dfrac{A + B x}{x^2 + k^2}\, dx + \displaystyle\int_{-\infty}^{\infty} \dfrac{C + D x}{(x - s)^2 + (1 - k)^2}\, dx\right]$.

Applying the substitution $x = k w$ to the first integral in the latter expression we have

$\displaystyle\int_{-\infty}^{\infty} \dfrac{A + B x}{x^2 + k^2}\, dx = \dfrac{1}{k} \displaystyle\int_{-\infty}^{\infty} \dfrac{A + B k w}{w^2 + 1}\, dw$

and applying the substitution $x = s + (1 - k) w$ to the second integral in this expression we have

$\displaystyle\int_{-\infty}^{\infty} \dfrac{C + D x}{(x - s)^2 + (1 - k)^2}\, dx = \dfrac{1}{1 - k} \displaystyle\int_{-\infty}^{\infty} \dfrac{C + D\left((1 - k) w + s\right)}{w^2 + 1}\, dw$.

Substituting these expressions into the previous formula for $f_S$ and simplifying we have

$f_S(s) = \dfrac{k (1 - k)}{\pi^2}\left[\displaystyle\int_{-\infty}^{\infty} \dfrac{A + B x}{x^2 + k^2}\, dx + \displaystyle\int_{-\infty}^{\infty} \dfrac{C + D x}{(x - s)^2 + (1 - k)^2}\, dx\right] = \dfrac{1}{\pi^2}\left[(1 - k) \displaystyle\int_{-\infty}^{\infty} \dfrac{A + B k w}{w^2 + 1}\, dw + k \displaystyle\int_{-\infty}^{\infty} \dfrac{C + D\left((1 - k) w + s\right)}{w^2 + 1}\, dw\right] = \dfrac{1}{\pi^2}\left[\displaystyle\int_{-\infty}^{\infty} \dfrac{A (1 - k) + C k + D k s}{w^2 + 1}\, dw + k (1 - k) \displaystyle\int_{-\infty}^{\infty} \dfrac{(B + D)\, w}{1 + w^2}\, dw\right] = \dfrac{1}{\pi^2}\left\{A (1 - k) + C k - B k s\right\} \displaystyle\int_{-\infty}^{\infty} \dfrac{dw}{w^2 + 1} + 0 = \dfrac{1}{\pi^2}\left\{A (1 - k) + C k - B k s\right\}\left(\arctan(w)\Big|_{w=-\infty}^{\infty}\right) = \dfrac{1}{\pi^2}\left\{A (1 - k) + C k - B k s\right\}\left(\dfrac{\pi}{2} - \left(-\dfrac{\pi}{2}\right)\right) = \dfrac{1}{\pi}\left\{A (1 - k) + C k - B k s\right\}$.

(Note the use of the relationship $D = -B$.) Substituting the values of A, B, C previously specified into the latter expression we have

$\dfrac{1}{\pi}\left\{A (1 - k) + C k - B k s\right\} = \dfrac{1}{\pi} \cdot \dfrac{\left(s^2 + 1 - 2k\right)(1 - k) + \left(4 s^2 - \left(s^2 + 1 - 2k\right)\right) k - (2s)\, k s}{\left(s^2 + 1 - 2k\right)^2 + 4 s^2 k^2} = \dfrac{1}{\pi} \cdot \dfrac{\left(s^2 + 1 - 2k\right)(1 - 2k) + 2 s^2 k}{\left(s^2 + 1 - 2k\right)^2 + 4 s^2 k^2} = \dfrac{1}{\pi} \cdot \dfrac{s^2 + (1 - 2k)^2}{\left(s^2 + 1 - 2k\right)^2 + 4 s^2 k^2} = \dfrac{1}{\pi} \cdot \dfrac{s^2 + (1 - 2k)^2}{\left(s^2 + 1\right)\left(s^2 + (1 - 2k)^2\right)} = \dfrac{1}{\pi} \cdot \dfrac{1}{s^2 + 1}$.

Consequently,

$f_S(s) = \dfrac{1}{\pi} \cdot \dfrac{1}{s^2 + 1}$ for $s \in \mathbb{R}$,

which we recognize as the density function of a Cauchy distribution.

Hence we have shown that if $Y_1$ and $Y_2$ are independent Cauchy random variables and $0 < k < 1$ then $k Y_1 + (1 - k) Y_2$ has a Cauchy distribution. Applying this result repeatedly in the manner discussed earlier it follows that for any independent Cauchy random variables $X_1, \ldots, X_n$, the arithmetic average $(X_1 + \cdots + X_n)/n$ has a Cauchy distribution. This is the result we were required to prove.
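The conclusion can be illustrated by simulation: averaging standard Cauchy draws does not concentrate the distribution. A minimal Python sketch comparing the empirical distribution of the average of n = 50 draws with the standard Cauchy distribution function (the choice n = 50 and the number of replications are arbitrary):

    import numpy as np

    rng = np.random.default_rng(10)
    n, reps = 50, 200_000

    xbar = rng.standard_cauchy((reps, n)).mean(axis=1)   # average of n independent Cauchy draws

    # compare against the standard Cauchy distribution function 1/2 + arctan(x)/pi
    for q in (-5.0, -1.0, 0.0, 1.0, 5.0):
        print(q, np.mean(xbar <= q), 0.5 + np.arctan(q) / np.pi)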

17. Let X and Y be the returns on securities S1 and S2 respectively. We are given that $\mu_X = 10\%$, $\sigma_X = 5\%$, $\mu_Y = 20\%$, $\sigma_Y = 15\%$, and $\rho_{X,Y} = 0.30$. The equation in the risk-reward plane for the possible portfolios consisting of S1 and S2 is

$\sigma_R^2 = A\left(\mu_R - \mu_0\right)^2 + \sigma_0^2$

where

$A = \dfrac{(\sigma_X - \sigma_Y)^2 + 2\left(1 - \rho_{X,Y}\right)\sigma_X \sigma_Y}{(\mu_X - \mu_Y)^2} = \dfrac{(5\% - 15\%)^2 + 2(1 - 0.30)(5\%)(15\%)}{(10\% - 20\%)^2} = 2.05$,

$\mu_0 = \dfrac{\mu_X \sigma_Y^2 - (\mu_X + \mu_Y)\rho_{X,Y}\sigma_X \sigma_Y + \mu_Y \sigma_X^2}{(\sigma_X - \sigma_Y)^2 + 2\left(1 - \rho_{X,Y}\right)\sigma_X \sigma_Y} = \dfrac{(10\%)(15\%)^2 - (10\% + 20\%)(0.30)(5\%)(15\%) + (20\%)(5\%)^2}{(5\% - 15\%)^2 + 2(1 - 0.30)(5\%)(15\%)} \approx 10.12195\%$,

and

$\sigma_0^2 = \dfrac{\sigma_X^2 \sigma_Y^2 \left(1 - \rho_{X,Y}^2\right)}{(\sigma_X - \sigma_Y)^2 + 2\left(1 - \rho_{X,Y}\right)\sigma_X \sigma_Y} = \dfrac{(5\%)^2 (15\%)^2 \left(1 - (0.30)^2\right)}{(5\% - 15\%)^2 + 2(1 - 0.30)(5\%)(15\%)} \approx (4.99695029\%)^2$.

Note that in decimal form, $\mu_0 = 0.1012195$ and $\sigma_0 = 0.0499695029$. It doesn't matter which form (decimal or percentage) is used in the equations that follow as long as we are consistent and the answers are interpreted appropriately. Note that A is the same regardless of the form used for $\mu_0$ and $\sigma_0$.

a. Let a be the fraction invested in S1 that minimizes the variance of the portfolio. Then

$\mu_0 = a\, \mu_X + (1 - a)\, \mu_Y$.

From the preceding comments,

$\mu_0 \approx 10.12195\%$, $\mu_X = 10\%$, $\mu_Y = 20\%$.

Hence the required fraction is

$a = \dfrac{\mu_0 - \mu_Y}{\mu_X - \mu_Y} = \dfrac{10.12195\% - 20\%}{10\% - 20\%} \approx 0.9878$.

So to minimize the variance, about 98.78% should be invested in S1.

b. The expected return on the minimum variance portfolio is

$\mu_0 \approx 10.12195\%$.
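The minimum-variance fraction can also be found by minimizing the two-asset portfolio variance directly over a grid, which gives an independent check on part a. A minimal Python sketch (the grid search is our own device, not the textbook's method):

    import numpy as np

    mu_x, mu_y = 0.10, 0.20
    sd_x, sd_y, rho = 0.05, 0.15, 0.30

    def port_var(a):
        # variance of a portfolio with fraction a in S1 and 1 - a in S2
        return a**2 * sd_x**2 + (1 - a)**2 * sd_y**2 + 2 * a * (1 - a) * rho * sd_x * sd_y

    a = np.linspace(0, 1, 100_001)
    a_min = a[np.argmin(port_var(a))]
    print(a_min, a_min * mu_x + (1 - a_min) * mu_y)   # about 0.9878 and 0.10122 (10.12%)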

19. This exercise is based on a story told by the Nobel Prize winning economist Paul Samuelson. It is designed to illustrate the difference between risk pooling and risk sharing and expose a common fallacy associated with the law of large numbers. See also section 8.4.3 of the textbook and Samuelson's 1963 paper "Risk and Uncertainty: A Fallacy of Large Numbers" published in volume 98 of the journal Scientia.

a. Let $J_k$ be John's winnings on the k-th bet and let $S_J$ be John's accumulated winnings on 100 bets. Then from the given information, the $J_k$ are independent and identically distributed with distribution given by

$J_k = 2000$ with probability 0.50, and $J_k = -1000$ with probability 0.50,

and $S_J = J_1 + \cdots + J_{100}$. The riskiness of a bet or group of bets can be analyzed by considering the variance. From the definition of $J_k$ we have

$\mathrm{Var}(J_k) = E[J_k^2] - E[J_k]^2 = \left\{(2000)^2 (0.50) + (-1000)^2 (0.50)\right\} - \left\{(2000)(0.50) + (-1000)(0.50)\right\}^2 = 2{,}500{,}000 - 250{,}000 = 2{,}250{,}000 = (1500)^2$.

Since the $J_k$ are independent and identically distributed we also have

$\mathrm{Var}(S_J) = \mathrm{Var}(J_1) + \cdots + \mathrm{Var}(J_{100}) = 100\, \mathrm{Var}(J_k) = (15{,}000)^2$.

Hence by agreeing to 100 bets, John actually increases his risk (by a factor of 100 when risk is measured by the variance). Consequently, if John is not willing to take on the risk of loss associated with a single bet, he should definitely not take on the risk associated with 100 such bets. It follows that John's reasoning is not sound.

b. Let $M_k$ be Matt's winnings on the k-th bet and let $S_M$ be Matt's accumulated winnings on 1000 bets. Then from the given information

$M_k = 2$ with probability 0.50, and $M_k = -1$ with probability 0.50,

and $S_M = M_1 + \cdots + M_{1000}$. Hence

$\mathrm{Var}(M_k) = E[M_k^2] - E[M_k]^2 = \left\{(2)^2 (0.50) + (-1)^2 (0.50)\right\} - \left\{(2)(0.50) + (-1)(0.50)\right\}^2 = \dfrac{5}{2} - \left(\dfrac{1}{2}\right)^2 = 2.25$

and

$\mathrm{Var}(S_M) = \mathrm{Var}(M_1) + \cdots + \mathrm{Var}(M_{1000}) = 1000\, \mathrm{Var}(M_k) = 1000\,(2.25) = 2250$.

Moreover,

$E[S_M] = 1000\, E[M_k] = 500$.

Now recall from part a that the variance and expected value on a single bet of the type proposed by Paul are

$\mathrm{Var}(J_k) = 2{,}250{,}000$ and $E[J_k] = 500$.

Hence by breaking up Paul's proposed bet into 1000 pieces, Matt has reduced the risk as measured by the variance by a factor of 1000 without changing the expected value. So unlike John, Matt's reasoning is sound.

c. In order for Matt and John to take advantage of Paul's offer they need to share the risk of loss. One way of doing this is to share accumulated winnings on the 100 bets with 100 like-minded people. Let S denote the share of the accumulated winnings for one of these 100 people. Then using the notation of part a,

$S = \dfrac{1}{100}\left(J_1 + \cdots + J_{100}\right)$.

Since the $J_k$ are independent and identically distributed with

$E[J_k] = 500$ and $\mathrm{Var}(J_k) = 2{,}250{,}000$,

we have

$E[S] = \dfrac{1}{100}\left(E[J_1] + \cdots + E[J_{100}]\right) = E[J_k] = 500$

and

$\mathrm{Var}(S) = \left(\dfrac{1}{100}\right)^2\left(\mathrm{Var}(J_1) + \cdots + \mathrm{Var}(J_{100})\right) = \dfrac{1}{100}\, \mathrm{Var}(J_k) = 22{,}500$.

Clearly the risk of loss for one of the 100 people sharing the accumulated winnings is less than the risk faced on a single bet with no sharing of winnings even though the expected gain in both situations is the same. It follows that if John and Matt wish to take advantage of Paul's tempting offer, they should find 98 other people interested in a bet of this type and share accumulated gains or losses with these people.

If John and Matt still find the risk of loss to be too high, they can reduce the risk further by increasing the number of people with whom gains and losses are shared. Note however that if the number of bets remains at 100, then increasing the number of people sharing in the winnings will reduce the expected gain for each member.

Comment: A concrete application of the risk reduction technique presented in part c is presented in the solution to exercise 22, part c.
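The three arrangements discussed in parts a–c can be compared by simulation. A minimal Python sketch (the number of replications is arbitrary) reproducing the expected values and the variances 2,250,000, (15,000)^2, and 22,500 computed above:

    import numpy as np

    rng = np.random.default_rng(11)
    reps = 100_000

    bets = np.where(rng.random((reps, 100)) < 0.5, 2000, -1000)   # 100 bets of Paul's type

    single = bets[:, 0]              # one bet, no sharing
    john = bets.sum(axis=1)          # John: keeps all 100 bets himself
    shared = john / 100              # part c: winnings on 100 bets split among 100 people

    for name, w in (("single bet", single), ("John", john), ("shared", shared)):
        print(name, w.mean(), w.var())   # means about 500, 50,000, 500; variances about
                                         # 2.25e6, 2.25e8, and 22,500 respectively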

22. Let $L_j$ be the loss incurred and $k\, E[L_j]$ the amount of premium collected by insurer j, where $k > 1$ and k is the same for both insurers. Under the proposed coinsurance arrangement, premiums and losses are shared equally. Hence each insurer pays L where $L = (L_1 + L_2)/2$ and receives $k\left(E[L_1] + E[L_2]\right)/2$. The losses $L_j$ are assumed to be independent and identically distributed.

a. Since $L = (L_1 + L_2)/2$ and the $L_j$ are independent and identically distributed, we have

$E[L] = \dfrac{1}{2}\left(E[L_1] + E[L_2]\right) = \dfrac{1}{2}\left(2\, E[L_j]\right) = E[L_j]$

and

$\mathrm{Var}(L) = \dfrac{1}{4}\left(\mathrm{Var}(L_1) + \mathrm{Var}(L_2)\right) = \dfrac{1}{4}\left(2\, \mathrm{Var}(L_j)\right) = \dfrac{1}{2}\, \mathrm{Var}(L_j)$.

Hence

$\mu_L = \mu_{L_j}$, $\quad \sigma_L = \dfrac{1}{\sqrt{2}}\, \sigma_{L_j}$.

Consequently,

$\dfrac{\mu_L}{\sigma_L} = \sqrt{2}\, \dfrac{\mu_{L_j}}{\sigma_{L_j}}$.

b. In exercise 21, it is argued that the ratio $\mu/\sigma$ can be considered a rudimentary measure of solvency, with larger values of $\mu/\sigma$ indicating a higher probability of solvency. From part a of the present exercise, we have

$\dfrac{\mu_L}{\sigma_L} = \sqrt{2}\, \dfrac{\mu_{L_j}}{\sigma_{L_j}} > \dfrac{\mu_{L_j}}{\sigma_{L_j}}$.

Consequently, according to the risk measure $\mu/\sigma$ the insurers are more secure with coinsurance.

We can also reach this conclusion by calculating insolvency probabilities directly. Since the insurers share the collected premiums equally, the probability that aggregate claims exceed aggregate premiums when insurer j participates in the coinsurance arrangement is

$\Pr\!\left[L > k\, \dfrac{E[L_1] + E[L_2]}{2}\right] = \Pr\!\left[L > k\, \mu_{L_j}\right]$.

Under a normal approximation, the latter probability is

$\Pr\!\left[L > k\, \mu_{L_j}\right] = \Pr\!\left[\dfrac{L - \mu_L}{\sigma_L} > \dfrac{k\, \mu_{L_j} - \mu_L}{\sigma_L}\right] = \Pr\!\left[\dfrac{L - \mu_L}{\sigma_L} > \dfrac{(k - 1)\, \mu_{L_j}}{\sigma_{L_j}/\sqrt{2}}\right] \approx \Pr\!\left[Z > \sqrt{2}\,(k - 1)\, \dfrac{\mu_{L_j}}{\sigma_{L_j}}\right] = 1 - \Phi\!\left(\sqrt{2}\,(k - 1)\, \dfrac{\mu_{L_j}}{\sigma_{L_j}}\right)$.

Hence the probability of insolvency for insurer j when the coinsurance arrangement is in place is approximately

$1 - \Phi\!\left(\sqrt{2}\,(k - 1)\, \dfrac{\mu_{L_j}}{\sigma_{L_j}}\right)$.

From exercise 21, the probability of insolvency for insurer j when the coinsurance arrangement is not in place is approximately

$1 - \Phi\!\left((k - 1)\, \dfrac{\mu_{L_j}}{\sigma_{L_j}}\right)$.

Since $\Phi$ is increasing and $k > 1$ we have

$1 - \Phi\!\left(\sqrt{2}\,(k - 1)\, \dfrac{\mu_{L_j}}{\sigma_{L_j}}\right) < 1 - \Phi\!\left((k - 1)\, \dfrac{\mu_{L_j}}{\sigma_{L_j}}\right)$.

Consequently, the probability of insolvency is lower when the coinsurance arrangement is in place. Hence the insurers are more secure with coinsurance, as argued earlier.
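To see the effect numerically, here is a minimal Python sketch with made-up values for the loading k and the ratio $\mu_{L_j}/\sigma_{L_j}$ (neither is given in the exercise); it simply evaluates the two approximate insolvency probabilities above:

    from math import sqrt
    from statistics import NormalDist

    mu_over_sigma, k = 2.0, 1.1          # illustrative values only
    phi = NormalDist().cdf

    p_without = 1 - phi((k - 1) * mu_over_sigma)            # no coinsurance
    p_with = 1 - phi(sqrt(2) * (k - 1) * mu_over_sigma)     # with coinsurance
    print(p_without, p_with)             # the second probability is smaller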

c. In the course of doing business, insurance companies naturally accumulate pools of risk, some of which can be large. Pooling actually increases risk. To understand why, consider n independent, identically distributed losses and let $S = X_1 + \cdots + X_n$ be the aggregate loss. Using standard properties of the variance we have

$\mathrm{Var}(S) = \mathrm{Var}(X_1) + \cdots + \mathrm{Var}(X_n) = n\, \mathrm{Var}(X_j)$

from which it follows that

$\mathrm{Var}(S) \to \infty$ as $n \to \infty$.

This shows that the uncertainty in the payout associated with a pool of risks increases without bound as the number of policies in the pool increases, i.e., pooling increases risk.

Now the basic principle of insurance is that risk can be reduced through sharing. The mathematical justification of this principle follows from the law of large numbers (see sections 1.4 and 8.4.3 of the textbook). One way that an insurer can share the risk associated with a pool of risks is to enter into a coinsurance agreement with other insurers. This is the approach considered in part b of this exercise. Another way that an insurer can share the risk is to sell shares to the public.


A share in a company represents an ownership position in the company and entitles the holder of the share to a portion of the future profits generated by the company. Since future profits are unpredictable, the value of a company's shares will fluctuate over time, generally increasing during periods of rising profits and decreasing during periods of falling profits. When an insurance company experiences a large unexpected loss, profits are generally impacted in a negative way and the value of the company declines. For a company with shares outstanding this decline in value will be reflected in a lower trading price for the company's shares. In effect, the unexpected loss will be shifted from the company to the shareholders.

Now if the number of shareholders is large then the amount of the loss shifted to an individual shareholder (through a reduction in share price) will be relatively small and limited to the value of the individual's shares. Moreover the risk of loss for an individual shareholder will be much less than the risk associated with the entire pool accumulated by the company. Indeed, if S is the loss on the pool at the company level and this loss is spread equally over the m shares outstanding, then the risk of loss on an individual share as given by the variance is

$\mathrm{Var}\!\left(\dfrac{S}{m}\right) = \dfrac{1}{m^2}\, \mathrm{Var}(S) \to 0$ as $m \to \infty$.

Consequently, the risk to individual shareholders is relatively small and approaches 0 as the number of shareholders becomes arbitrarily large.

It follows from these observations that selling shares to the public is an effective way for an insurance company to disperse the risk associated with large risk pools. Selling shares also makes it possible for individuals of relatively modest means to share in the often generous profits associated with underwriting large risk pools without having to assume an unacceptably high level of risk.

24. Let $X_j$ be the payout on the j-th policy during the coming year and let S be the aggregate payout during the coming year on the portfolio of m policies. Assume that all death claim payments are made at the end of the year. Then from the given information, the $X_j$ are independent and identically distributed with distribution given by

$X_j = 25{,}000\, e^{-0.05}$ with probability 0.01, and $X_j = 0$ with probability 0.99,

and $S = X_1 + \cdots + X_m$. Note that $25{,}000\, e^{-0.05}$ is the present value of 25,000 discounted one year at a continuously compounded rate of 5% per annum. Let P be the per-policy premium that must be charged for the insurer to be 99% confident that total premiums exceed total claims. Then P is determined by the condition

$\Pr(S \le m P) = 0.99$.

Now if m is sufficiently large, then the distribution of S is approximately normal with mean

$E[S] = m\, E[X_j]$

and variance

$\mathrm{Var}(S) = m\, \mathrm{Var}(X_j)$.

From the information on the distribution of $X_j$ stated earlier we have

$E[X_j] = \left(25{,}000\, e^{-0.05}\right)(0.01) \approx 237.81$

and

$\mathrm{Var}(X_j) = E[X_j^2] - E[X_j]^2 = \left(25{,}000\, e^{-0.05}\right)^2 (0.01) - \left(25{,}000\, e^{-0.05}\,(0.01)\right)^2 = \left(25{,}000\right)^2 e^{-0.10}\,(0.01)(0.99) \approx (2366.15)^2$.

Hence for m sufficiently large, the distribution of S is approximately normal with

$E[S] \approx 237.81\, m$

and

$\mathrm{Var}(S) \approx (2366.15)^2\, m$.

Assuming this to be the case, the condition

$\Pr(S \le m P) = 0.99$

is approximately equivalent to the condition

$\Pr\!\left(Z \le \dfrac{m P - 237.81\, m}{2366.15\, \sqrt{m}}\right) \approx 0.99$

where $Z \sim \mathrm{Normal}(0, 1)$. In terms of the standard normal distribution function, the latter condition is

$\Phi\!\left(\dfrac{P - 237.81}{2366.15}\, \sqrt{m}\right) \approx 0.99$.

Now from Appendix E of the textbook, we have $\Phi(2.326) \approx 0.99$. Hence the condition

$\Phi\!\left(\dfrac{P - 237.81}{2366.15}\, \sqrt{m}\right) \approx 0.99$

is equivalent to

$\dfrac{P - 237.81}{2366.15}\, \sqrt{m} \approx 2.326$.

Solving the latter equation for P we obtain

$P \approx 237.81 + \dfrac{5503.67}{\sqrt{m}}$.

Therefore to be 99% confident that total premiums exceed total claims, the insurer must charge each of the m policyholders the dollar amount

$237.81 + \dfrac{5503.67}{\sqrt{m}}$.

Note that the $237.81 figure here is the expected loss per policy and the $5503.67/\sqrt{m}$ figure is the additional per-policy amount that must be collected to ensure with 99% confidence that claims are covered in the aggregate. The additional per-policy amount is not insignificant (unless m is more than 1 million). The reason for this is that claim payments, while unlikely (1% probability of occurrence), are quite large ($25,000) when they occur.
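The dependence of the premium on the portfolio size m is easy to tabulate. A minimal Python sketch using the figures derived above:

    import math

    expected, sd, z99 = 237.81, 2366.15, 2.326

    for m in (1_000, 10_000, 100_000, 1_000_000):
        premium = expected + z99 * sd / math.sqrt(m)
        print(m, round(premium, 2))      # e.g. about 411.85 for m = 1,000 and 243.31 for m = 1,000,000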

27. Let X j be the loss on the j-th contract measured in thousands of dollars and let S be the aggregate loss in thousands of dollars. By assumption, the X j are independent and identically distributed with X j ~ ExponentialH1 ê5L and S = X1 + + X100. Let P be the premium in thousands of dollars collected on each contract. By assumption, P = 5.05. We are required to determine Pr@S > 100 PD using a normal power approximation for S. In order to apply the normal approximation formula to S we need to determine values for mS, sS, and gS. From sections 4.2.1, 4.2.2, and 4.2.3 of the textbook, the mean, variance, and skewness of a sum S of independent random variables X1, …, Xn are respectively


$\mu_S = \mu_{X_1} + \cdots + \mu_{X_n},$

$\sigma_S^2 = \sigma_{X_1}^2 + \cdots + \sigma_{X_n}^2,$

$\gamma_S = \frac{\gamma_{X_1}\sigma_{X_1}^3 + \cdots + \gamma_{X_n}\sigma_{X_n}^3}{\sigma_S^3}.$

When the $X_j$ are identically distributed, as is the case in this question, these formulas reduce to

$\mu_S = n\,\mu_X, \qquad \sigma_S^2 = n\,\sigma_X^2, \qquad \gamma_S = \frac{n\,\gamma_X\sigma_X^3}{\sigma_S^3} = \frac{n\,\gamma_X\sigma_X^3}{n^{3/2}\sigma_X^3} = \frac{\gamma_X}{n^{1/2}}$

where $X \sim X_j$. Now for $X \sim \text{Exponential}(\lambda)$ we have

$\mu_X = \frac{1}{\lambda}, \qquad \sigma_X^2 = \frac{1}{\lambda^2},$


$\gamma_X = 2$

(see section 6.1.1 of the textbook). Consequently, the mean, variance, and skewness for the aggregate loss random variable S of this question are given by

$\mu_S = 100\,\mu_X = 100\cdot\frac{1}{1/5} = 500,$

$\sigma_S^2 = 100\,\sigma_X^2 = 100\cdot\frac{1}{(1/5)^2} = 2500,$

$\gamma_S = \frac{\gamma_X}{100^{1/2}} = \frac{2}{10} = \frac{1}{5}.$

Using these values of $\mu_S$, $\sigma_S$, and $\gamma_S$ in the normal power approximation formula given in section 8.6 we find that

$\Pr[S \le 100P] = \Pr[S \le 505] = F_S[505] \approx \Phi\!\left[-\frac{3}{\gamma_S} + \sqrt{\frac{9}{\gamma_S^2} + 1 + \frac{6}{\gamma_S}\cdot\frac{505 - \mu_S}{\sigma_S}}\,\right] = \Phi\!\left[-(3)(5) + \sqrt{(9)(25) + 1 + (6)(5)\cdot\frac{505 - 500}{50}}\,\right] = \Phi\!\left[-15 + \sqrt{229}\,\right] \approx \Phi[0.1327] \approx \Phi[0.13].$

From the table for the standard normal distribution function in Appendix E of the textbook we have

$\Phi[0.1327] \approx (.73)\,\Phi[0.13] + (.27)\,\Phi[0.14] = (.73)(.5517) + (.27)(.5557) = .55278.$

Hence $\Pr[S \le 100P] \approx .55278$.

Consequently, the probability that aggregate losses exceed total premiums collected is

$\Pr[S > 100P] = 1 - \Pr[S \le 100P] \approx 1 - .55278 = .44722$

using a normal power approximation.

Comment: It is actually possible to specify the exact distribution of S in this question. Indeed, since S is a sum of independent, identically distributed exponential random variables, S must have a gamma distribution. In particular, $S \sim \text{Gamma}(100, 1/5)$. Using this exact distribution and the fact that the survival function of a gamma distribution whose parameter r is a positive integer has the form


$S_T[t] = \sum_{n=0}^{r-1} \frac{(\lambda t)^n e^{-\lambda t}}{n!}$

(see section 6.1.2 of the textbook), it is possible to calculate the exact value of the desired probability. Indeed,

$\Pr[S > 505] = \sum_{n=0}^{99} \frac{\left((1/5)(505)\right)^n e^{-(1/5)(505)}}{n!} = \sum_{n=0}^{99} \frac{101^n e^{-101}}{n!}.$

The numerical value of this probability can be determined using Mathematica or similar computer software. When we do this we find that

$\sum_{n=0}^{99} \frac{101^n e^{-101}}{n!} \approx 0.447104.$

Comparing this value to the value .44722 of $\Pr[S > 505]$ that was previously determined using a normal power approximation, we see that the normal power approximation is quite close. The advantage of using a normal power approximation instead of the exact distribution to calculate probabilities (even in problems such as the one just considered where it is relatively straightforward to determine the exact distribution) is that the normal power approximation only requires the use of a hand-held calculator and a table of values for the standard normal distribution function, whereas the exact distribution may require more sophisticated computer software such as Mathematica.
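The comparison above can be reproduced with a short Python script (offered here as a sketch, not as part of the textbook's solution); the exact sum and the normal power argument are exactly those displayed in this exercise.

```python
# Compare the exact Gamma(100, 1/5) tail with the normal power approximation.
import math

def phi(x: float) -> float:
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Exact: Pr[S > 505] = sum_{n=0}^{99} 101^n e^{-101} / n!
term, total = math.exp(-101.0), 0.0
for n in range(100):
    total += term
    term *= 101.0 / (n + 1)     # build Poisson(101) terms recursively
print("exact        :", round(total, 6))       # ~0.447104

# Normal power approximation with mu_S = 500, sigma_S = 50, gamma_S = 0.2
mu, sigma, gamma = 500.0, 50.0, 0.2
z = -3/gamma + math.sqrt(9/gamma**2 + 1 + (6/gamma) * (505 - mu)/sigma)
print("normal power :", round(1 - phi(z), 5))  # ~0.4472
```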


Chapter Nine Solutions

1. For a given throw of the dart, let X be the distance from the fixed origin to the place where the dart lands and let Y be the number of the target at which the dart is aimed. Suppose that the target number is selected by the roll of a die.

a. $E[X \mid Y]$ is the average distance from the origin of the darts thrown at target Y. (If the target shooter is fairly accurate, this should be close to the distance from the origin to the location of target Y.)

$\mathrm{Var}(X \mid Y)$ is the variability in the distance from the origin of the darts thrown at target Y. (Note that this is not necessarily a good measure of the error in the target shooter's aim: if the darts all land far from the target but close to each other, then $\mathrm{Var}(X \mid Y)$ will be small even though the error associated with each throw is large.)

$E[E[X \mid Y]]$ is the average of the average distances weighted according to the number of darts aimed at each target.

$E[\mathrm{Var}(X \mid Y)]$ is the average variability in hitting the selected target (whatever it happens to be) weighted according to the number of darts aimed at each target.

$\mathrm{Var}(E[X \mid Y])$ is the variability in the average dart distances for the targets weighted according to the number of darts aimed at each target. (If the target shooter is fairly accurate then this is a measure of the proximity of the target locations.)

b. The formula $E[X] = E[E[X \mid Y]]$ has the following interpretation: the average distance from the origin of all darts tossed can be determined by calculating the average distance from the origin for darts thrown at each specific target and then averaging these values according to the frequency with which each target is selected.

The formula $\mathrm{Var}(X) = E[\mathrm{Var}(X \mid Y)] + \mathrm{Var}(E[X \mid Y])$ has the following interpretation: the variability in the distance from the origin of all darts tossed is equal to the sum of the average variability in hitting the selected target and the variability in the average distances for each target. (If the target shooter is fairly accurate then this formula asserts that the variability in dart locations for all darts is determined by the variability around each target and the spread of the target locations.)


3. Let C be a risk classification variable defined as follows:

C = 1 if client has no credit history,

C = 2 if client has history of late payments,

C = 3 if client has declared personal bankruptcy.

Let $L_j$ be the loss on a randomly selected account for which it is known that C = j. We are given that

$L_1 \sim \text{Exponential}(1/100), \qquad L_2 \sim \text{Exponential}(1/500), \qquad L_3 \sim \text{Exponential}(1/1000),$

and the distribution of C is

C = 1 with probability .20,

C = 2 with probability .50,

C = 3 with probability .30.

Let L denote the loss on a randomly selected account for which the risk class is not known. Parts a through d are concerned with the distribution of L.

a. The expected loss on a randomly selected account can be determined using the formula for unconditional expectation given in section 9.3 of the textbook,

$E[L] = E_C[E[L \mid C]].$

From the given information,

$E[L \mid C] = E[L_1]$ with probability .20, $\quad E[L \mid C] = E[L_2]$ with probability .50, $\quad E[L \mid C] = E[L_3]$ with probability .30.


Moreover, $E[L_1] = 100$, $E[L_2] = 500$, $E[L_3] = 1000$. Hence

$E[E[L \mid C]] = E[L_1](.20) + E[L_2](.50) + E[L_3](.30) = (100)(.20) + (500)(.50) + (1000)(.30) = 570.$

Consequently, the expected loss on a randomly selected account is $E[L] = E[E[L \mid C]] = \$570$.

b. The variance of the loss on a randomly selected account can be determined using the formula for unconditional variance given in section 9.3 of the textbook,

$\mathrm{Var}(L) = E[\mathrm{Var}(L \mid C)] + \mathrm{Var}(E[L \mid C]).$

From the given information,

$\mathrm{Var}(L \mid C) = \mathrm{Var}(L_1)$ with probability .20, $\quad \mathrm{Var}(L \mid C) = \mathrm{Var}(L_2)$ with probability .50, $\quad \mathrm{Var}(L \mid C) = \mathrm{Var}(L_3)$ with probability .30,

and

$E[L \mid C] = E[L_1]$ with probability .20, $\quad E[L \mid C] = E[L_2]$ with probability .50, $\quad E[L \mid C] = E[L_3]$ with probability .30.

Moreover, since $L_1 \sim \text{Exponential}(1/100)$, $L_2 \sim \text{Exponential}(1/500)$, and $L_3 \sim \text{Exponential}(1/1000)$ we have

$E[L_1] = 100, \quad E[L_2] = 500, \quad E[L_3] = 1000$

and

$\mathrm{Var}(L_1) = 100^2, \quad \mathrm{Var}(L_2) = 500^2, \quad \mathrm{Var}(L_3) = 1000^2.$


Hence

$E[\mathrm{Var}(L \mid C)] = \mathrm{Var}(L_1)(.20) + \mathrm{Var}(L_2)(.50) + \mathrm{Var}(L_3)(.30) = (100^2)(.20) + (500^2)(.50) + (1000^2)(.30) = 427{,}000,$

$E[E[L \mid C]^2] = E[L_1]^2(.20) + E[L_2]^2(.50) + E[L_3]^2(.30) = (100)^2(.20) + (500)^2(.50) + (1000)^2(.30) = 427{,}000,$

and

$E[E[L \mid C]] = E[L_1](.20) + E[L_2](.50) + E[L_3](.30) = (100)(.20) + (500)(.50) + (1000)(.30) = 570.$

Consequently,

$\mathrm{Var}(E[L \mid C]) = E[E[L \mid C]^2] - (E[E[L \mid C]])^2 = 427{,}000 - (570)^2 = 102{,}100$

and so

$\mathrm{Var}(L) = E[\mathrm{Var}(L \mid C)] + \mathrm{Var}(E[L \mid C]) = 427{,}000 + 102{,}100 = 529{,}100.$

Therefore the variance of the loss on a randomly selected account is 529,100 squared dollars. This is equivalent to a standard deviation of about $727.39.

Important Comment: Note that in this particular question it just so happens that the values of $E[\mathrm{Var}(L \mid C)]$ and $E[E[L \mid C]^2]$ are the same. However, the reader should keep in mind that the values of $E[\mathrm{Var}(L \mid C)]$ and $E[E[L \mid C]^2]$ are generally different. The reason they are the same in this question is that the exponential distribution has the property that its variance is equal to the square of its mean. Indeed, for $X \sim \text{Exponential}(\lambda)$, $E[X] = 1/\lambda$ and $\mathrm{Var}(X) = 1/\lambda^2 = E[X]^2$.

c. The desired probability can be determined using the law of total probability. Indeed, for any $l > 0$ we have

$\Pr[L > l] = \Pr[L > l \mid C = 1]\Pr[C = 1] + \Pr[L > l \mid C = 2]\Pr[C = 2] + \Pr[L > l \mid C = 3]\Pr[C = 3] = \Pr[L_1 > l]\Pr[C = 1] + \Pr[L_2 > l]\Pr[C = 2] + \Pr[L_3 > l]\Pr[C = 3].$

From the given information,

$L_1 \sim \text{Exponential}(1/100), \quad L_2 \sim \text{Exponential}(1/500), \quad L_3 \sim \text{Exponential}(1/1000)$


and

$\Pr[C = 1] = .20, \quad \Pr[C = 2] = .50, \quad \Pr[C = 3] = .30.$

Hence

$\Pr[L > l] = e^{-l/100}(.20) + e^{-l/500}(.50) + e^{-l/1000}(.30) \quad \text{for } l > 0.$

Consequently, the probability that the company loses more than $500 on a randomly selected account is

$\Pr[L > 500] = e^{-5}(.20) + e^{-1}(.50) + e^{-1/2}(.30) \approx .36724651 \approx 37\%.$

d. From part a, the expected loss on a randomly selected account is $570 and from part b the standard deviation of the loss is $727.39. Hence if the finance company wishes to recoup its expected loss and have a safety margin of 1 standard deviation for each customer, it should charge each client an administration fee of $1297.39 (= $570 + $727.39). From a practical viewpoint, it is probably not possible to charge a fee of this size upfront, since people with a poor or nonexistent credit history are unlikely to have that much cash lying around. However, this fee could be recouped by increasing the interest rate on the loan and assessing periodic "hidden" fees that are less visible to the borrower.

Comment: As a check on our calculations in parts a and b, we can use the survival function determined in part c to calculate $E[L]$ and $\mathrm{Var}(L)$ directly, i.e., without making use of the formulas for unconditional mean and variance given in section 9.3 of the textbook. From part c, the survival function of L is

$S_L[l] = .20\,e^{-l/100} + .50\,e^{-l/500} + .30\,e^{-l/1000} \quad \text{for } l > 0.$

Hence L is a mixture of Exponential(1/100), Exponential(1/500), and Exponential(1/1000) with respective mixing weights .20, .50, and .30. It follows from section 9.4 that the moment generating function of L is

$M_L[t] = .20\cdot\frac{1}{1 - 100t} + .50\cdot\frac{1}{1 - 500t} + .30\cdot\frac{1}{1 - 1000t} \quad \text{for } t < \frac{1}{1000}.$

(Recall from section 6.1.1 that the moment generating function of $X \sim \text{Exponential}(\lambda)$ is $M_X[t] = \lambda/(\lambda - t)$ for $t < \lambda$.) Consequently, using the fact that $E[L^k] = M_L^{(k)}[0]$ for k = 1, 2, ... (see section 4.3.1 of the textbook) we have


$E[L] = M_L'[0] = \left\{(.20)(1 - 100t)^{-2}(100) + (.50)(1 - 500t)^{-2}(500) + (.30)(1 - 1000t)^{-2}(1000)\right\}_{t=0} = 570$

and

$E[L^2] = M_L''[0] = \left\{(.20)(2)(1 - 100t)^{-3}(100)^2 + (.50)(2)(1 - 500t)^{-3}(500)^2 + (.30)(2)(1 - 1000t)^{-3}(1000)^2\right\}_{t=0} = 854{,}000.$

Therefore

$E[L] = 570 \qquad \text{and} \qquad \mathrm{Var}(L) = E[L^2] - E[L]^2 = 854{,}000 - (570)^2 = 529{,}100,$

which are identical to the values given in parts a and b, as they should be.
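The same check can be carried out numerically with a few lines of Python (a sketch using only the mixture representation derived in part c; nothing here is new data, just the weights and means from the exercise).

```python
# Recompute E[L], Var(L), and Pr[L > 500] from the exponential mixture.
import math

weights = [0.20, 0.50, 0.30]        # Pr[C = 1], Pr[C = 2], Pr[C = 3]
means   = [100.0, 500.0, 1000.0]    # exponential means 1/lambda_j

EL   = sum(w * m for w, m in zip(weights, means))
EL2  = sum(w * 2 * m**2 for w, m in zip(weights, means))   # E[L_j^2] = 2/lambda_j^2
var  = EL2 - EL**2
tail = sum(w * math.exp(-500.0 / m) for w, m in zip(weights, means))

print(EL, var, round(math.sqrt(var), 2))   # 570.0  529100.0  727.39
print(round(tail, 8))                      # ~0.36724651
```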

5. Let L be the size of the uncertain loss and let K be the indemnified amount.

a. Suppose that the insurer caps the indemnified amount at m. Then

$K = \min(L, m) = \begin{cases} L & \text{if } L \le m,\\ m & \text{if } L > m.\end{cases}$

The distribution function for K can be determined by applying the law of total probability with the conditions $L \le m$ and $L > m$. Indeed, for any k we have

$\Pr[K \le k] = \Pr[K \le k \mid L \le m]\Pr[L \le m] + \Pr[K \le k \mid L > m]\Pr[L > m] = \Pr[L \le k \mid L \le m]\Pr[L \le m] + \Pr[m \le k \mid L > m]\Pr[L > m] = \frac{\Pr[L \le k \text{ and } L \le m]}{\Pr[L \le m]}\Pr[L \le m] + \frac{\Pr[m \le k \text{ and } L > m]}{\Pr[L > m]}\Pr[L > m] = \Pr[L \le \min(m, k)] + \Pr[m \le k \text{ and } L > m].$

Hence for k < m we have


$\Pr[K \le k] = \Pr[L \le \min(m, k)] + \Pr[m \le k \text{ and } L > m] = \Pr[L \le k] + 0 = \Pr[L \le k]$

and for $k \ge m$ we have

$\Pr[K \le k] = \Pr[L \le \min(m, k)] + \Pr[m \le k \text{ and } L > m] = \Pr[L \le m] + \Pr[L > m] = 1.$

Consequently, the distribution function of K is given by

$F_K[k] = \begin{cases} F_L[k] & \text{for } k < m,\\ 1 & \text{for } k \ge m.\end{cases}$

b. Suppose that the indemnified amount is subject to a deductible d but no cap. Then

$K = \begin{cases} 0 & \text{if } L \le d,\\ L - d & \text{if } L > d.\end{cases}$

The distribution function for K can be determined by applying the law of total probability with the conditions $L \le d$ and $L > d$. Indeed, for any k we have

$\Pr[K \le k] = \Pr[K \le k \mid L \le d]\Pr[L \le d] + \Pr[K \le k \mid L > d]\Pr[L > d] = \Pr[0 \le k \mid L \le d]\Pr[L \le d] + \Pr[L - d \le k \mid L > d]\Pr[L > d] = \frac{\Pr[0 \le k \text{ and } L \le d]}{\Pr[L \le d]}\Pr[L \le d] + \frac{\Pr[d < L \le d + k]}{\Pr[L > d]}\Pr[L > d] = \Pr[0 \le k \text{ and } L \le d] + \Pr[d < L \le d + k].$

Hence for $k \ge 0$ we have

$\Pr[K \le k] = \Pr[0 \le k \text{ and } L \le d] + \Pr[d < L \le d + k] = \Pr[L \le d] + \Pr[d < L \le d + k] = \Pr[L \le d + k] = F_L[d + k]$

and for k < 0 we trivially have

$\Pr[K \le k] = \Pr[0 \le k \text{ and } L \le d] + \Pr[d < L \le d + k] = 0.$

Consequently, the distribution function of K is given by

$F_K[k] = \begin{cases} F_L[d + k] & \text{for } k \ge 0,\\ 0 & \text{for } k < 0.\end{cases}$

c. Suppose that the indemnified amount is subject to both a deductible d and a cap m. Then

$K = 0$ if $L \le d$, $\qquad K = L - d$ if $d < L \le d + m$, $\qquad K = m$ if $L > d + m$.


The distribution function for K can be determined by applying the law of total probability with the conditions $L \le d$, $d < L \le d + m$, and $L > d + m$. Indeed, for any k we have

$\Pr[K \le k] = \Pr[K \le k \mid L \le d]\Pr[L \le d] + \Pr[K \le k \mid d < L \le d + m]\Pr[d < L \le d + m] + \Pr[K \le k \mid L > d + m]\Pr[L > d + m]$
$= \Pr[0 \le k \mid L \le d]\Pr[L \le d] + \Pr[L - d \le k \mid d < L \le d + m]\Pr[d < L \le d + m] + \Pr[m \le k \mid L > d + m]\Pr[L > d + m]$
$= \Pr[0 \le k \text{ and } L \le d] + \Pr[L - d \le k \text{ and } d < L \le d + m] + \Pr[m \le k \text{ and } L > d + m]$
$= \Pr[0 \le k \text{ and } L \le d] + \Pr[d < L \le \min(d + k, d + m)] + \Pr[m \le k \text{ and } L > d + m].$

Hence for $0 \le k < m$ we have

$\Pr[K \le k] = \Pr[L \le d] + \Pr[d < L \le d + k] + 0 = \Pr[L \le d + k],$

for $k \ge m$ we have

$\Pr[K \le k] = \Pr[L \le d] + \Pr[d < L \le d + m] + \Pr[L > d + m] = 1,$

and for $k < 0$ we have

$\Pr[K \le k] = 0 + 0 + 0 = 0.$

Consequently, the distribution function of K is given by

$F_K[k] = \begin{cases} 0 & \text{for } k < 0,\\ F_L[d + k] & \text{for } 0 \le k < m,\\ 1 & \text{for } k \ge m.\end{cases}$
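A small illustrative sketch of the part c result is given below; the choice of an Exponential(1/1000) loss and the particular deductible and cap values are assumptions for illustration only and are not part of the exercise.

```python
# F_K for an indemnity with deductible d and cap m, using an assumed exponential loss.
import math

def F_L(x: float, mean: float = 1000.0) -> float:
    """Distribution function of an exponential loss with the given mean (assumed example)."""
    return 0.0 if x < 0 else 1.0 - math.exp(-x / mean)

def F_K(k: float, d: float = 250.0, m: float = 2000.0) -> float:
    """Distribution function of the capped, deductible indemnity derived in part c."""
    if k < 0:
        return 0.0
    if k < m:
        return F_L(d + k)
    return 1.0

print(F_K(-1), round(F_K(0), 4), round(F_K(500), 4), F_K(2000))   # 0.0, F_L(250), F_L(750), 1.0
```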

7. Suppose that $X = BI$, where $I \sim \text{Bernoulli}(p)$ and B is a nonnegative random variable which is independent of I.

a. The distribution function of X can be determined by conditioning on I and using the law of total probability as follows:


$F_X[x] = \Pr[X \le x] = \Pr[X \le x \mid I = 0]\Pr[I = 0] + \Pr[X \le x \mid I = 1]\Pr[I = 1] = \Pr[BI \le x \mid I = 0]\Pr[I = 0] + \Pr[BI \le x \mid I = 1]\Pr[I = 1] = \Pr[0 \le x]\cdot(1 - p) + \Pr[B \le x]\cdot p.$

Note that $\Pr[BI \le x \mid I = 1] = \Pr[B \le x]$ since B is independent of I. Hence for $x \ge 0$,

$F_X[x] = \Pr[0 \le x](1 - p) + \Pr[B \le x]\,p = (1 - p) + p\,F_B[x],$

and for $x < 0$,

$F_X[x] = \Pr[0 \le x](1 - p) + \Pr[B \le x]\,p = 0\cdot(1 - p) + 0\cdot p = 0.$

(Note that $\Pr[B \le x] = 0$ for $x < 0$ since B is assumed to be nonnegative.) Consequently, the distribution function of X is given by

$F_X[x] = \begin{cases} (1 - p) + p\,F_B[x] & \text{for } x \ge 0,\\ 0 & \text{for } x < 0,\end{cases}$

as required. Since $S_X[x] = 1 - F_X[x]$ for all x, it follows that the survival function of X is

$S_X[x] = \begin{cases} 1 - \left((1 - p) + p\,F_B[x]\right) & \text{for } x \ge 0,\\ 1 - 0 & \text{for } x < 0\end{cases} = \begin{cases} p\left(1 - F_B[x]\right) & \text{for } x \ge 0,\\ 1 & \text{for } x < 0\end{cases} = \begin{cases} p\,S_B[x] & \text{for } x \ge 0,\\ 1 & \text{for } x < 0,\end{cases}$

as required.

In general, X has a mixed distribution with a probability mass at x = 0. When B is continuous and p < 1, X has a probability mass at x = 0 and a continuous distribution of probability on the interval x > 0. Sample graphs of $F_X$ and $S_X$ when B is continuous can be created using Mathematica or similar computer software.


[Figure: sample graphs of $F_X$ and $S_X$ against x; $F_X$ jumps to $1 - p$ at x = 0 and increases toward 1, while $S_X$ equals p at x = 0 and decreases toward 0.]

b. The graph of the generalized density $f_X$ corresponding to the graphs of $F_X$ and $S_X$ given in part a is as follows:


[Figure: graph of the generalized density $f_X$ against x, showing a point mass of size $1 - p$ at x = 0 and a continuous density for x > 0.]

Note that the area under the continuous part of the density is equal to p. The probability mass at x = 0 is equal to the probability that B = 0 or I = 0. Since B is assumed to be continuous, $\Pr[B = 0] = 0$ and so $\Pr[X = 0] = \Pr[I = 0] = 1 - p$.

c. Applying the formula for unconditional expectation given in section 9.3 we have

$E[X] = E_I[E[X \mid I]] = E[X \mid I = 0]\Pr[I = 0] + E[X \mid I = 1]\Pr[I = 1] = E[IB \mid I = 0]\Pr[I = 0] + E[IB \mid I = 1]\Pr[I = 1] = E[0]\Pr[I = 0] + E[B]\Pr[I = 1] = 0\cdot(1 - p) + \mu_B\,p = p\,\mu_B$

as required. Note that $E[IB \mid I = 1] = E[B \mid I = 1] = E[B]$ since B is independent of I.

To determine the formula for $\mathrm{Var}(X)$ we apply the formula for unconditional variance, which in this context has the form

$\mathrm{Var}(X) = E_I[\mathrm{Var}(X \mid I)] + \mathrm{Var}_I(E[X \mid I]).$

Arguing as before we have


$E_I[\mathrm{Var}(X \mid I)] = \mathrm{Var}(X \mid I = 0)\Pr[I = 0] + \mathrm{Var}(X \mid I = 1)\Pr[I = 1] = \mathrm{Var}(IB \mid I = 0)\Pr[I = 0] + \mathrm{Var}(IB \mid I = 1)\Pr[I = 1] = \mathrm{Var}(0)\Pr[I = 0] + \mathrm{Var}(B)\Pr[I = 1] = 0\cdot(1 - p) + \sigma_B^2\,p = p\,\sigma_B^2.$

Since

$E[X \mid I] = \begin{cases} 0 & \text{if } I = 0,\\ \mu_B & \text{if } I = 1,\end{cases}$

we also have

$\mathrm{Var}_I(E[X \mid I]) = \mu_B^2\,\Pr[I = 1]\Pr[I = 0] = \mu_B^2\,p(1 - p).$

Hence the unconditional variance of X is given by

$\mathrm{Var}(X) = E_I[\mathrm{Var}(X \mid I)] + \mathrm{Var}_I(E[X \mid I]) = p\,\sigma_B^2 + \mu_B^2\,p(1 - p)$

as required.

d. The moment generating function of X is defined by

$M_X[t] = E[e^{tX}].$

Using the formula for unconditional expectation given in section 9.3 of the textbook we have

$E[e^{tX}] = E_I[E[e^{tX} \mid I]] = E[e^{tX} \mid I = 0]\Pr[I = 0] + E[e^{tX} \mid I = 1]\Pr[I = 1] = E[e^{tIB} \mid I = 0]\Pr[I = 0] + E[e^{tIB} \mid I = 1]\Pr[I = 1] = E[e^0]\Pr[I = 0] + E[e^{tB}]\Pr[I = 1] = 1\cdot(1 - p) + M_B[t]\,p.$

Consequently,

$M_X[t] = (1 - p) + p\,M_B[t]$

for all t where $M_B$ is defined. Differentiating this equation k times we obtain

$M_X^{(k)}[t] = p\,M_B^{(k)}[t].$

Hence using the fact that $E[X^k] = M_X^{(k)}[0]$ for any random variable X we have

$E[X^k] = p\,E[B^k]$ for k = 1, 2, ...

as required.


e. The third central moment can be calculated in terms of the moments about 0 as follows:

$E[(X - \mu_X)^3] = E[X^3] - 3E[X^2]E[X] + 2E[X]^3$

(see exercise 5 part a of section 4.3, or simply expand the trinomial $(X - \mu_X)^3$ and use the linearity property of expectation). From part d of the current exercise,

$E[X^k] = p\,E[B^k]$ for k = 1, 2, ....

Hence substituting this expression for $E[X^k]$ into the preceding formula for $E[(X - \mu_X)^3]$ we obtain

$E[(X - \mu_X)^3] = E[X^3] - 3E[X^2]E[X] + 2E[X]^3 = p\,E[B^3] - 3\left(p\,E[B^2]\right)\left(p\,E[B]\right) + 2\left(p\,E[B]\right)^3 = p\,E[B^3] - 3p^2E[B^2]E[B] + 2p^3E[B]^3$

as required.

A formula for $E[(X - \mu_X)^3]$ in terms of the statistics $\mu_B$, $\sigma_B$, and $\gamma_B$ can also be given. Using the general relationships

$E[(B - \mu_B)^3] = E[B^3] - 3E[B^2]E[B] + 2E[B]^3, \qquad E[(B - \mu_B)^3] = \gamma_B\,\sigma_B^3, \qquad \sigma_B^2 = E[B^2] - E[B]^2,$

we have

$E[B^2] = \sigma_B^2 + \mu_B^2$

and

$E[B^3] = E[(B - \mu_B)^3] + 3E[B^2]E[B] - 2E[B]^3 = \gamma_B\,\sigma_B^3 + 3\left(\sigma_B^2 + \mu_B^2\right)\mu_B - 2\mu_B^3 = \gamma_B\,\sigma_B^3 + 3\sigma_B^2\mu_B + \mu_B^3.$

Substituting these formulas for $E[B^2]$ and $E[B^3]$ into the formula for $E[(X - \mu_X)^3]$ derived earlier we obtain


$E[(X - \mu_X)^3] = p\,E[B^3] - 3p^2E[B^2]E[B] + 2p^3E[B]^3 = p\left(\gamma_B\sigma_B^3 + 3\sigma_B^2\mu_B + \mu_B^3\right) - 3p^2\left(\sigma_B^2 + \mu_B^2\right)\mu_B + 2p^3\mu_B^3 = p\,\gamma_B\sigma_B^3 + \left(3p - 3p^2\right)\sigma_B^2\mu_B + \left(p - 3p^2 + 2p^3\right)\mu_B^3 = p\,\gamma_B\sigma_B^3 + 3p(1 - p)\sigma_B^2\mu_B + p(1 - p)(1 - 2p)\mu_B^3$

as required.

Comment: The assumption that B and I are independent was inadvertently omitted in the statement of the question in section 9.6. However, this assumption is necessary for the formulas just derived to hold in general. To see that this is so, consider the case B = I (perfect dependence). Then $X = I^2$ and so the distribution of X is Bernoulli(p),

$X = \begin{cases} 1 & \text{with probability } p,\\ 0 & \text{with probability } 1 - p.\end{cases}$

Hence

$F_X[x] = 0$ for $x < 0$, $\qquad F_X[x] = 1 - p$ for $0 \le x < 1$, $\qquad F_X[x] = 1$ for $x \ge 1$,

which is clearly different from the formula one gets in part a of this exercise. Indeed, substituting the distribution function $F_B$ when B = I into the formula in part a we obtain

$F_X[x] = 0$ for $x < 0$, $\qquad F_X[x] = 1 - p^2$ for $0 \le x < 1$, $\qquad F_X[x] = 1$ for $x \ge 1$.

Consequently for the formulas in parts a through e to hold in general, we require that B and I be independent.
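A quick Monte Carlo sanity check of the part c formulas is sketched below; the specific choice $B \sim$ Exponential with mean 10 and p = 0.3 is an assumption made only for illustration, not part of the exercise.

```python
# Simulate X = B*I and compare with E[X] = p*mu_B and Var(X) = p*sigma_B^2 + mu_B^2*p*(1-p).
import random, statistics

random.seed(1)
p, mean_B = 0.3, 10.0                    # assumed illustrative parameters
n = 200_000
xs = [(random.expovariate(1.0 / mean_B) if random.random() < p else 0.0) for _ in range(n)]

# For an exponential B, sigma_B^2 = mu_B^2, so the variance formula simplifies below.
print("simulated mean:", round(statistics.fmean(xs), 3), " formula:", p * mean_B)
print("simulated var :", round(statistics.pvariance(xs), 2),
      " formula:", p * mean_B**2 + mean_B**2 * p * (1 - p))
```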

12. Let $X_j$ be the payout on a randomly selected policy of type j, let $I_j$ be an indicator of a claim occurrence on a randomly selected policy of type j, and let $L_j$ be the claim size on a policy of type j for which a claim is known to have occurred. Then from the given information,


$I_1 = \begin{cases} 1 & \text{with probability } 25\%,\\ 0 & \text{with probability } 75\%,\end{cases} \qquad I_2 = \begin{cases} 1 & \text{with probability } 40\%,\\ 0 & \text{with probability } 60\%,\end{cases}$

$X_1 = \begin{cases} L_1 & \text{if } I_1 = 1,\\ 0 & \text{if } I_1 = 0,\end{cases} \qquad X_2 = \begin{cases} L_2 & \text{if } I_2 = 1,\\ 0 & \text{if } I_2 = 0,\end{cases}$

and the probability mass functions of $L_1$ and $L_2$ are respectively

$p_{L_1}[1000] = .20, \quad p_{L_1}[5000] = .50, \quad p_{L_1}[10{,}000] = .30$

and

$p_{L_2}[1000] = .70, \quad p_{L_2}[5000] = .20, \quad p_{L_2}[10{,}000] = .10.$

Let Y be the payout on a randomly selected policy whose type is not known and let C be an indicator of type defined as

$C = \begin{cases} 1 & \text{if the policyholder is of type 1},\\ 2 & \text{if the policyholder is of type 2}.\end{cases}$

Then from the given information,

$C = \begin{cases} 1 & \text{with probability } 30\%,\\ 2 & \text{with probability } 70\%,\end{cases} \qquad \text{and} \qquad Y = \begin{cases} X_1 & \text{if } C = 1,\\ X_2 & \text{if } C = 2.\end{cases}$

With this notation, the solutions to parts a through d can be presented in an organized way.

a. The desired quantities are $E[X_1]$ and $\mathrm{Var}(X_1)$. Using the formulas for unconditional expectation and variance given in section 9.3 of the textbook we have


$E[X_1] = E_{I_1}[E[X_1 \mid I_1]] = E[X_1 \mid I_1 = 1]\Pr[I_1 = 1] + E[X_1 \mid I_1 = 0]\Pr[I_1 = 0] = E[L_1]\Pr[I_1 = 1] + E[0]\Pr[I_1 = 0] = E[L_1](.25) + (0)(.75) = (.25)E[L_1]$

and

$\mathrm{Var}(X_1) = E_{I_1}[\mathrm{Var}(X_1 \mid I_1)] + \mathrm{Var}_{I_1}(E[X_1 \mid I_1]) = \{\mathrm{Var}(X_1 \mid I_1 = 1)\Pr[I_1 = 1] + \mathrm{Var}(X_1 \mid I_1 = 0)\Pr[I_1 = 0]\} + \mathrm{Var}_{I_1}(E[X_1 \mid I_1]) = \mathrm{Var}(L_1)(.25) + (0)(.75) + \mathrm{Var}_{I_1}(E[X_1 \mid I_1]) = (.25)\mathrm{Var}(L_1) + \mathrm{Var}_{I_1}(E[X_1 \mid I_1]).$

Now

$E[X_1 \mid I_1] = \begin{cases} E[L_1] & \text{if } I_1 = 1,\\ 0 & \text{if } I_1 = 0\end{cases} = E[L_1]\cdot I_1.$

Hence

$\mathrm{Var}_{I_1}(E[X_1 \mid I_1]) = E[L_1]^2\,\mathrm{Var}(I_1) = E[L_1]^2\,\Pr[I_1 = 1]\Pr[I_1 = 0] = E[L_1]^2(.25)(.75) = .1875\,E[L_1]^2$

and so

$\mathrm{Var}(X_1) = (.25)\mathrm{Var}(L_1) + (.1875)E[L_1]^2.$

(Note that the formula $\mathrm{Var}(I_1) = \Pr[I_1 = 1]\Pr[I_1 = 0]$ follows directly from the formula for the variance of a binomial random variable with n = 1 and $p = \Pr[I_1 = 1]$.)

From the distribution of $L_1$ we have

$E[L_1] = (1000)(.20) + (5000)(.50) + (10{,}000)(.30) = 5700$

and

$E[L_1^2] = (1000)^2(.20) + (5000)^2(.50) + (10{,}000)^2(.30) = 42{,}700{,}000.$

Hence

$\mathrm{Var}(L_1) = E[L_1^2] - E[L_1]^2 = 42{,}700{,}000 - (5700)^2 = 10{,}210{,}000.$

Consequently, the mean and variance of $X_1$ are


$E[X_1] = (.25)E[L_1] = (.25)(5700) = 1425$

and

$\mathrm{Var}(X_1) = (.25)\mathrm{Var}(L_1) + (.1875)E[L_1]^2 = (.25)(10{,}210{,}000) + (.1875)(5700)^2 = 8{,}644{,}375.$

These are the numerical values of the desired quantities.

b. The desired quantities are $E[X_2]$ and $\mathrm{Var}(X_2)$. Arguing as in part a we have

$E[X_2] = E[L_2]\Pr[I_2 = 1]$

and

$\mathrm{Var}(X_2) = \mathrm{Var}(L_2)\Pr[I_2 = 1] + \mathrm{Var}_{I_2}(E[X_2 \mid I_2]) = \mathrm{Var}(L_2)\Pr[I_2 = 1] + \mathrm{Var}(E[L_2]\cdot I_2) = \mathrm{Var}(L_2)\Pr[I_2 = 1] + E[L_2]^2\,\Pr[I_2 = 1]\Pr[I_2 = 0].$

Hence

$E[X_2] = (.40)E[L_2]$

and

$\mathrm{Var}(X_2) = (.40)\mathrm{Var}(L_2) + (.40)(.60)E[L_2]^2 = (.40)\mathrm{Var}(L_2) + (.24)E[L_2]^2.$

Now from the distribution of $L_2$ we have

$E[L_2] = (1000)(.70) + (5000)(.20) + (10{,}000)(.10) = 2700,$

$E[L_2^2] = (1000)^2(.70) + (5000)^2(.20) + (10{,}000)^2(.10) = 15{,}700{,}000,$

and

$\mathrm{Var}(L_2) = E[L_2^2] - E[L_2]^2 = 15{,}700{,}000 - (2700)^2 = 8{,}410{,}000.$

Consequently, the mean and variance of $X_2$ are

$E[X_2] = (.40)E[L_2] = (.40)(2700) = 1080$

and

$\mathrm{Var}(X_2) = (.40)\mathrm{Var}(L_2) + (.24)E[L_2]^2 = (.40)(8{,}410{,}000) + (.24)(2700)^2 = 5{,}113{,}600.$

These are the numerical values of the desired quantities.


c. The desired quantities are $E[Y]$ and $\mathrm{Var}(Y)$. Using the formulas for unconditional expectation and variance given in section 9.3 of the textbook we have

$E[Y] = E_C[E[Y \mid C]] = E[Y \mid C = 1]\Pr[C = 1] + E[Y \mid C = 2]\Pr[C = 2] = E[X_1]\Pr[C = 1] + E[X_2]\Pr[C = 2]$

and

$\mathrm{Var}(Y) = E_C[\mathrm{Var}(Y \mid C)] + \mathrm{Var}_C(E[Y \mid C]) = \mathrm{Var}(X_1)\Pr[C = 1] + \mathrm{Var}(X_2)\Pr[C = 2] + \mathrm{Var}_C(E[Y \mid C]).$

Now

$E[Y \mid C] = \begin{cases} E[X_1] & \text{if } C = 1,\\ E[X_2] & \text{if } C = 2.\end{cases}$

From parts a and b we have $E[X_1] = 1425$ and $E[X_2] = 1080$. Further, from the information given in the statement of the question, $\Pr[C = 1] = .30$ and $\Pr[C = 2] = .70$. Hence

$E[Y \mid C] = \begin{cases} 1425 & \text{with probability } .30,\\ 1080 & \text{with probability } .70,\end{cases}$

and so

$\mathrm{Var}(E[Y \mid C]) = E[E[Y \mid C]^2] - (E[E[Y \mid C]])^2 = \{(1425)^2(.30) + (1080)^2(.70)\} - \{(1425)(.30) + (1080)(.70)\}^2 = 1{,}425{,}667.50 - 1{,}400{,}672.25 = 24{,}995.25.$

It follows that


$\mathrm{Var}(Y) = \mathrm{Var}(X_1)\Pr[C = 1] + \mathrm{Var}(X_2)\Pr[C = 2] + \mathrm{Var}(E[Y \mid C]) = \mathrm{Var}(X_1)(.30) + \mathrm{Var}(X_2)(.70) + 24{,}995.25.$

Now from parts a and b we also have $\mathrm{Var}(X_1) = 8{,}644{,}375$ and $\mathrm{Var}(X_2) = 5{,}113{,}600$. Hence substituting these values and the earlier values for $E[X_1]$, $E[X_2]$ into the formulas for $E[Y]$ and $\mathrm{Var}(Y)$ we have

$E[Y] = E[X_1]\Pr[C = 1] + E[X_2]\Pr[C = 2] = (1425)(.30) + (1080)(.70) = 1183.50$

and

$\mathrm{Var}(Y) = \mathrm{Var}(X_1)(.30) + \mathrm{Var}(X_2)(.70) + 24{,}995.25 = (8{,}644{,}375)(.30) + (5{,}113{,}600)(.70) + 24{,}995.25 = 6{,}197{,}827.75.$

These are the numerical values of the desired quantities.

d. The desired probability is $\Pr[Y \ge 5000]$. This can be determined by successive applications of the law of total probability.

To make the calculations more transparent, we will first determine a general formula for $\Pr[Y \ge y]$ in the case y > 0. Conditioning on the risk class C we have

$\Pr[Y \ge y] = \Pr[Y \ge y \mid C = 1]\Pr[C = 1] + \Pr[Y \ge y \mid C = 2]\Pr[C = 2] = \Pr[X_1 \ge y](.30) + \Pr[X_2 \ge y](.70),$

and conditioning on the event of a claim occurrence we have

$\Pr[X_j \ge y] = \Pr[X_j \ge y \mid I_j = 1]\Pr[I_j = 1] + \Pr[X_j \ge y \mid I_j = 0]\Pr[I_j = 0] = \Pr[L_j \ge y]\Pr[I_j = 1] + \Pr[0 \ge y]\Pr[I_j = 0].$

Using the fact that $\Pr[0 \ge y] = 0$ for y > 0 it follows that

$\Pr[Y \ge y] = \Pr[X_1 \ge y](.30) + \Pr[X_2 \ge y](.70) = \Pr[L_1 \ge y]\Pr[I_1 = 1](.30) + \Pr[L_2 \ge y]\Pr[I_2 = 1](.70) = \Pr[L_1 \ge y](.25)(.30) + \Pr[L_2 \ge y](.40)(.70) = \Pr[L_1 \ge y](.075) + \Pr[L_2 \ge y](.28)$

for y > 0. Now from the distributions of $L_1$ and $L_2$ we have


$\Pr[L_1 \ge 5000] = \Pr[L_1 = 5000] + \Pr[L_1 = 10{,}000] = .50 + .30 = .80$

and

$\Pr[L_2 \ge 5000] = \Pr[L_2 = 5000] + \Pr[L_2 = 10{,}000] = .20 + .10 = .30.$

Consequently, the probability that the payout on a randomly selected policy is at least $5000 is

$\Pr[Y \ge 5000] = \Pr[L_1 \ge 5000](.075) + \Pr[L_2 \ge 5000](.28) = (.80)(.075) + (.30)(.28) = .1440.$
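The numbers in parts a through d can be recomputed directly from the two conditional claim-size distributions; the short Python sketch below (not part of the original solution) does exactly that.

```python
# Recompute E[Y], Var(Y), and Pr[Y >= 5000] for the two-type portfolio.
claim_p  = {1: 0.25, 2: 0.40}                                  # claim incidence by policy type
size_pmf = {1: {1000: .20, 5000: .50, 10000: .30},
            2: {1000: .70, 5000: .20, 10000: .10}}
type_p   = {1: 0.30, 2: 0.70}

def moments(j):
    """Mean and variance of the payout X_j = L_j * I_j (exercise 7 part c formulas)."""
    q = claim_p[j]
    m1 = sum(x * p for x, p in size_pmf[j].items())
    m2 = sum(x * x * p for x, p in size_pmf[j].items())
    return q * m1, q * (m2 - m1 * m1) + (m1 * m1) * q * (1 - q)

EY   = sum(type_p[j] * moments(j)[0] for j in (1, 2))
EY2  = sum(type_p[j] * (moments(j)[1] + moments(j)[0] ** 2) for j in (1, 2))
tail = sum(type_p[j] * claim_p[j] * sum(p for x, p in size_pmf[j].items() if x >= 5000)
           for j in (1, 2))

print(EY, EY2 - EY ** 2, tail)   # 1183.5  6197827.75  0.144
```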

17. Let $X_1, \ldots, X_{500}$ be the insurer's payouts on the 500 policies for which the claim incidence probability is 10%, let $Y_1, \ldots, Y_{2000}$ be the insurer's payouts on the 2000 policies for which the claim incidence probability is 5%, and let S be the insurer's total payout on all policies. Suppose that the $X_i$, $Y_j$, and S are all measured in $100,000 lots. Then

$S = X_1 + \cdots + X_{500} + Y_1 + \cdots + Y_{2000}.$

Our objective is to determine the per-policy premium such that aggregate premiums exceed aggregate payments 95% of the time.

Suppose that the $X_i$ and $Y_j$ are mutually independent. Then the distributions of the sums $X_1 + \cdots + X_{500}$ and $Y_1 + \cdots + Y_{2000}$ are approximately normal, and so, since the sum of independent normal random variables has a normal distribution, it follows that the distribution of S is approximately normal. To use a normal approximation for S we need only determine $E[S]$ and $\mathrm{Var}(S)$.

Since the $X_i$ and $Y_j$ are mutually independent and $X_i \sim X$, $Y_j \sim Y$ for X, Y of the type specified in the statement of the question, it follows that

$E[S] = 500\,E[X] + 2000\,E[Y], \qquad \mathrm{Var}(S) = 500\,\mathrm{Var}(X) + 2000\,\mathrm{Var}(Y).$

From the given information,


$X = IB$, where $I \sim \text{Bernoulli}(.10)$ and B is a truncated exponential random variable with parameters $\lambda = 1$ and m = 2.5, and

$Y = IB$, where $I \sim \text{Bernoulli}(.05)$ and B is a truncated exponential random variable with parameters $\lambda = 2$ and m = 5. In exercise 7 part c it is shown that if $X = IB$ with $I \sim \text{Bernoulli}(p)$ and B nonnegative then

$E[X] = p\,\mu_B, \qquad \mathrm{Var}(X) = p(1 - p)\mu_B^2 + p\,\sigma_B^2.$

From Example 3, section 4.2.1 and Example 2, section 4.2.2 of the textbook, the mean and variance of a truncated exponential random variable B with parameters $\lambda$ and m are

$E[B] = \frac{1}{\lambda}\left(1 - e^{-\lambda m}\right), \qquad \mathrm{Var}(B) = \frac{1}{\lambda^2}\left(1 - 2\lambda m\,e^{-\lambda m} - e^{-2\lambda m}\right).$

Hence if $X = IB$ with $I \sim \text{Bernoulli}(p)$ and B a truncated exponential with parameters $\lambda$ and m, then

$E[X] = \frac{p}{\lambda}\left(1 - e^{-\lambda m}\right), \qquad \mathrm{Var}(X) = p(1 - p)\frac{1}{\lambda^2}\left(1 - e^{-\lambda m}\right)^2 + \frac{p}{\lambda^2}\left(1 - 2\lambda m\,e^{-\lambda m} - e^{-2\lambda m}\right).$

It follows that the mean and variance of the $X_j$ (i.e., the case p = .10, $\lambda$ = 1, m = 2.5) are

$E[X_j] = \frac{.10}{1}\left(1 - e^{-(1)(2.5)}\right) = .10\left(1 - e^{-2.5}\right),$


$\mathrm{Var}(X_j) = (.10)(.90)\frac{1}{1^2}\left(1 - e^{-(1)(2.5)}\right)^2 + \frac{.10}{1^2}\left(1 - (2)(1)(2.5)e^{-(1)(2.5)} - e^{-(2)(1)(2.5)}\right) = .09\left(1 - e^{-2.5}\right)^2 + .10\left(1 - 5e^{-2.5} - e^{-5}\right) = .19 - .68\,e^{-2.5} - .01\,e^{-5},$

and the mean and variance of the $Y_j$ (i.e., the case p = .05, $\lambda$ = 2, m = 5) are

$E[Y_j] = \frac{.05}{2}\left(1 - e^{-(2)(5)}\right) = .025\left(1 - e^{-10}\right),$

$\mathrm{Var}(Y_j) = (.05)(.95)\frac{1}{2^2}\left(1 - e^{-(2)(5)}\right)^2 + \frac{.05}{2^2}\left(1 - (2)(2)(5)e^{-(2)(5)} - e^{-(2)(2)(5)}\right) = .011875\left(1 - e^{-10}\right)^2 + .0125\left(1 - 20e^{-10} - e^{-20}\right) = .024375 - .27375\,e^{-10} - .000625\,e^{-20}.$

Consequently, the mean and variance of S are

$E[S] = 500\,E[X] + 2000\,E[Y] = 500\left(.10\left(1 - e^{-2.5}\right)\right) + 2000\left(.025\left(1 - e^{-10}\right)\right) = 100 - 50\,e^{-2.5} - 50\,e^{-10} \approx 95.89348$

and

$\mathrm{Var}(S) = 500\,\mathrm{Var}(X) + 2000\,\mathrm{Var}(Y) = 500\left(.19 - .68\,e^{-2.5} - .01\,e^{-5}\right) + 2000\left(.024375 - .27375\,e^{-10} - .000625\,e^{-20}\right) = 143.75 - 340\,e^{-2.5} - 5\,e^{-5} - 547.50\,e^{-10} - 1.25\,e^{-20} \approx 115.7825543 \approx (10.76023)^2.$

a. Suppose that each policyholder is charged the same premium. Let P be the per-policy premium in $100,000 lots. Then we require

$\Pr[S < 2500P] = .95.$

Using a normal approximation for S we have

$\Pr[S < 2500P] = \Pr\!\left[\frac{S - E[S]}{\sqrt{\mathrm{Var}(S)}} < \frac{2500P - 95.89348}{10.76023}\right] \approx \Pr\!\left[Z < \frac{2500P - 95.89348}{10.76023}\right].$

Now $\Phi[1.645] \approx .95$. Hence the required premium P is given by


$\frac{2500P - 95.89348}{10.76023} \approx 1.645,$

that is,

$P \approx .0454376.$

Consequently, if all policyholders are charged the same amount, the required premium is $4543.76 per policy.

b. Suppose that the insurer charges a policyholder with expected loss $\mu$ the premium $P = (1 + \theta)\mu$, where $\theta$ is the same for all policyholders. Then the premium for the 500 policyholders with claim incidence probability 10% is

$P = (1 + \theta)E[X] = (1 + \theta)(.10)\left(1 - e^{-2.5}\right) \approx .09179150\,(1 + \theta)$

and the premium for the 2000 policyholders with claim incidence probability 5% is

$P = (1 + \theta)E[Y] = (1 + \theta)(.025)\left(1 - e^{-10}\right) \approx .02499887\,(1 + \theta).$

The requirement that aggregate premiums exceed aggregate payments 95% of the time takes the form

$\Pr[S < (500)(.09179150)(1 + \theta) + (2000)(.02499887)(1 + \theta)] = .95,$

i.e.,

$\Pr[S < 95.89348\,(1 + \theta)] = .95.$

Using a normal approximation for S we have

$\Pr[S < 95.89348\,(1 + \theta)] = \Pr\!\left[\frac{S - E[S]}{\sqrt{\mathrm{Var}(S)}} < \frac{95.89348\,(1 + \theta) - 95.89348}{10.76023}\right] \approx \Phi\!\left[\frac{95.89348\,(1 + \theta) - 95.89348}{10.76023}\right].$

Since $\Phi[1.645] \approx .95$, the requirement on $\theta$ is

$\frac{95.89348\,(1 + \theta) - 95.89348}{10.76023} \approx 1.645,$

i.e.,


$\theta \approx .18458584.$

Consequently, the required premium for policies with claim incidence probability 10% is given by

$P = (1 + \theta)E[X] = (1.18458584)(.09179150) \approx .1087349,$

i.e., $10,873.49 per policy, and the required premium for policies with claim incidence probability 5% is given by

$P = (1 + \theta)E[Y] = (1.18458584)(.02499887) \approx .02961331,$

i.e., $2961.33 per policy.

Comment: The total premium collected under the criterion in part a is

$(2500)(\$4543.76) \approx \$11{,}359{,}400.$

The total premium collected under the criterion in part b is

$(500)(\$10{,}873.49) + (2000)(\$2961.33) = \$11{,}359{,}405.$

Ignoring rounding error, we see that the total premium collected is the same. The advantage of the criterion in part b is that it charges a premium that distinguishes between the risks. This can help to avoid a problem with adverse selection.
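Both premium calculations can be reproduced from the truncated exponential formulas with a few lines of Python; this is offered only as a sketch of the arithmetic above, using the parameter values given in the exercise.

```python
# Premiums for part a (uniform) and part b (proportional loading theta).
import math

def trunc_exp_moments(lam, m):
    """Mean and variance of an exponential with rate lam truncated (capped) at m."""
    mean = (1 - math.exp(-lam * m)) / lam
    var = (1 - 2 * lam * m * math.exp(-lam * m) - math.exp(-2 * lam * m)) / lam**2
    return mean, var

def payout_moments(p, lam, m):
    mu, s2 = trunc_exp_moments(lam, m)
    return p * mu, p * (1 - p) * mu**2 + p * s2        # exercise 7 part c formulas

mx, vx = payout_moments(0.10, 1.0, 2.5)                # the 500 policies
my, vy = payout_moments(0.05, 2.0, 5.0)                # the 2000 policies
ES = 500 * mx + 2000 * my
SD = math.sqrt(500 * vx + 2000 * vy)
z95 = 1.645

print("uniform premium ($):", round((ES + z95 * SD) / 2500 * 100_000, 2))   # ~4543.76
theta = z95 * SD / ES
print("loaded premiums ($):", round(mx * (1 + theta) * 100_000, 2),
      round(my * (1 + theta) * 100_000, 2))                                 # ~10873.5, ~2961.3
```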

20. Let S be the total amount of claims in hundreds of dollars. Then

$S = W_1 + \cdots + W_N$

where $W_j$ is the size of the j-th claim in hundreds of dollars and N is the number of claims. We are given that $N \sim \text{Binomial}(4, .25)$ and the distribution of $W_j$ is

$\Pr[W_j = 1] = .3, \quad \Pr[W_j = 2] = .45, \quad \Pr[W_j = 3] = .25.$

We are also given that the $W_j$ are mutually independent and independent of N. We are asked to determine $\Pr[S > 5]$.

By the law of total probability,


$\Pr[S > 5] = \sum_{n=0}^{4} \Pr[S > 5 \mid N = n]\Pr[N = n] = \sum_{n=0}^{4} \Pr[W_1 + \cdots + W_N > 5 \mid N = n]\Pr[N = n].$

Since the $W_j$ are independent of N,

$\Pr[W_1 + \cdots + W_N > 5 \mid N = n] = \Pr[W_1 + \cdots + W_n > 5].$

Moreover, since $N \sim \text{Binomial}(4, .25)$,

$\Pr[N = n] = \binom{4}{n}(.25)^n(.75)^{4-n}$ for n = 0, 1, 2, 3, 4.

Hence

$\Pr[S > 5] = \sum_{n=0}^{4} \Pr[W_1 + \cdots + W_n > 5]\binom{4}{n}(.25)^n(.75)^{4-n}.$

Now since the only possible values of $W_j$ are 1, 2, or 3, the aggregate claim S cannot exceed 5 if $N \le 1$. In particular,

$\Pr[W^*_0 > 5] = 0, \qquad \Pr[W_1 > 5] = 0,$

where $W^*_0$ represents $W_1 + \cdots + W_n$ when n = 0. Hence

$\Pr[S > 5] = \Pr[W_1 + W_2 > 5]\binom{4}{2}(.25)^2(.75)^2 + \Pr[W_1 + W_2 + W_3 > 5]\binom{4}{3}(.25)^3(.75) + \Pr[W_1 + W_2 + W_3 + W_4 > 5](.25)^4.$

Consider $\Pr[W_1 + W_2 > 5]$. Since the possible values of $W_j$ are 1, 2, or 3, the only way that the aggregate claim can exceed 5 is if both $W_1$ and $W_2$ are equal to 3, i.e.,

$\Pr[W_1 + W_2 > 5] = \Pr[W_1 = 3 \text{ and } W_2 = 3].$

Since the $W_j$ are independent, it follows that

$\Pr[W_1 + W_2 > 5] = \Pr[W_1 = 3]\Pr[W_2 = 3] = (.25)^2 = .0625.$

Now consider $\Pr[W_1 + W_2 + W_3 > 5]$. Put $X = W_1 + W_2 + W_3$. For simplicity we will determine $\Pr[X \le 5]$ and then deduce the value of $\Pr[X > 5]$. The values of the triple $(W_1, W_2, W_3)$ for which $X \le 5$ are as follows:


$(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 2, 1), (1, 2, 2), (1, 3, 1), (2, 1, 1), (2, 1, 2), (2, 2, 1), (3, 1, 1).$

Since the $W_j$ are independent, it follows that

$\Pr[X \le 5] = \Pr[W_1 = 1]\Pr[W_2 = 1]\Pr[W_3 = 1] + \Pr[W_1 = 1]\Pr[W_2 = 1]\Pr[W_3 = 2] + \cdots + \Pr[W_1 = 3]\Pr[W_2 = 1]\Pr[W_3 = 1] = (.3)^3 + 3(.3)^2(.45) + 3(.3)^2(.25) + 3(.3)(.45)^2 = .39825.$

Hence

$\Pr[W_1 + W_2 + W_3 > 5] = \Pr[X > 5] = 1 - \Pr[X \le 5] = 1 - .39825 = .60175.$

Finally consider $\Pr[W_1 + W_2 + W_3 + W_4 > 5]$. Put $Y = W_1 + W_2 + W_3 + W_4$. As in the previous case, it is simpler to determine $\Pr[Y \le 5]$ and deduce the value of $\Pr[Y > 5]$. The values of the quadruple $(W_1, W_2, W_3, W_4)$ for which $Y \le 5$ are as follows:

$(1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 2, 1), (1, 2, 1, 1), (2, 1, 1, 1).$

Hence arguing as before,

$\Pr[Y \le 5] = \Pr[W_1 = 1]\Pr[W_2 = 1]\Pr[W_3 = 1]\Pr[W_4 = 1] + \Pr[W_1 = 1]\Pr[W_2 = 1]\Pr[W_3 = 1]\Pr[W_4 = 2] + \cdots + \Pr[W_1 = 2]\Pr[W_2 = 1]\Pr[W_3 = 1]\Pr[W_4 = 1] = (.3)^4 + 4(.3)^3(.45) = .0567.$

It follows that

$\Pr[W_1 + W_2 + W_3 + W_4 > 5] = \Pr[Y > 5] = 1 - \Pr[Y \le 5] = 1 - .0567 = .9433.$

Substituting these results into the earlier formula for $\Pr[S > 5]$ we obtain

$\Pr[S > 5] = (.0625)(6)(.25)^2(.75)^2 + (.60175)(4)(.25)^3(.75) + (.9433)(.25)^4 \approx .04507539 \approx 4.5\%.$

Consequently, the probability that aggregate claims on the given policy exceed $500 is about 4.5%.
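The enumeration of claim-size combinations above can be checked by brute force; the following Python sketch (not part of the original solution) sums over every possible claim count and size vector.

```python
# Brute-force check of Pr[S > 5] for the compound binomial model of this exercise.
from itertools import product
from math import comb, prod

w_pmf = {1: 0.3, 2: 0.45, 3: 0.25}
n_max, p = 4, 0.25

total = 0.0
for n in range(n_max + 1):
    pn = comb(n_max, n) * p**n * (1 - p)**(n_max - n)      # Pr[N = n]
    exceed = sum(prod(w_pmf[s] for s in sizes)             # Pr[W_1 + ... + W_n > 5]
                 for sizes in product(w_pmf, repeat=n)
                 if sum(sizes) > 5)
    total += pn * exceed

print(round(total, 8))   # ~0.04507539
```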


24. Let $Y_1, Y_2, Y_3, Y_4$ be the aggregate claims in thousands of dollars in the seasons spring, summer, fall, and winter respectively, and let C denote the aggregate claims for the entire year. By assumption, the $Y_j$ have compound Poisson distributions with intensity parameters

$\lambda_1 = 100, \quad \lambda_2 = 75, \quad \lambda_3 = 90, \quad \lambda_4 = 200$

and claim amount random variables

$\text{Exponential}(1), \quad \text{Exponential}(2), \quad \text{Exponential}(1), \quad \text{Exponential}(.25).$

Moreover, the $Y_j$ are assumed to be independent. Hence from section 9.5.2 of the textbook or exercise 22, C has a compound Poisson distribution with intensity parameter $\lambda = \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 100 + 75 + 90 + 200 = 465$ and claim amount W a mixture of Exponential(1), Exponential(2), Exponential(1), and Exponential(.25) with mixing weights $\lambda_1/\lambda = 100/465 = 20/93$, $\lambda_2/\lambda = 75/465 = 15/93$, $\lambda_3/\lambda = 90/465 = 18/93$, $\lambda_4/\lambda = 200/465 = 40/93$.

a. From section 9.5.2 of the textbook or exercise 21 part b, the formulas for the expected value, variance, and skewness of a compound Poisson random variable are as follows:

$E[C] = \lambda\,E[W], \qquad \mathrm{Var}(C) = \lambda\,E[W^2], \qquad \gamma_C = \frac{1}{\lambda^{1/2}}\cdot\frac{E[W^3]}{E[W^2]^{3/2}}.$

In this exercise, W is a mixture of Exponential(1), Exponential(2), Exponential(1), Exponential(0.25) with mixing weights 20/93, 15/93, 18/93, 40/93 and $\lambda = 465$. Hence using the general formula for the moment generating function of a mixture and the formula for the moment generating function of an exponential random variable we have


$M_W[t] = \frac{20}{93}\cdot\frac{1}{1 - t} + \frac{15}{93}\cdot\frac{2}{2 - t} + \frac{18}{93}\cdot\frac{1}{1 - t} + \frac{40}{93}\cdot\frac{0.25}{0.25 - t} = \frac{38}{93}\cdot\frac{1}{1 - t} + \frac{15}{93}\cdot\frac{2}{2 - t} + \frac{40}{93}\cdot\frac{1}{1 - 4t}.$

It follows that

$E[W^k] = M_W^{(k)}[0] = \frac{38}{93}\cdot\left.\frac{d^k}{dt^k}\frac{1}{1 - t}\right|_{t=0} + \frac{15}{93}\cdot\left.\frac{d^k}{dt^k}\frac{2}{2 - t}\right|_{t=0} + \frac{40}{93}\cdot\left.\frac{d^k}{dt^k}\frac{1}{1 - 4t}\right|_{t=0} = \frac{38}{93}\cdot k! + \frac{15}{93}\cdot\frac{k!}{2^k} + \frac{40}{93}\cdot 4^k\,k!\,.$

In particular,

$E[W] = \frac{411}{186} = \frac{137}{62}, \qquad E[W^2] = \frac{2727}{186} = \frac{909}{62}, \qquad E[W^3] = \frac{62{,}397}{372} = \frac{20{,}799}{124}.$

Consequently, the mean, variance, and skewness of the aggregate claim random variable are

$E[C] = \lambda\,E[W] = 465\cdot\frac{137}{62} = \frac{2055}{2},$

$\mathrm{Var}(C) = \lambda\,E[W^2] = 465\cdot\frac{909}{62} = \frac{13{,}635}{2},$

$\gamma_C = \frac{1}{\lambda^{1/2}}\cdot\frac{E[W^3]}{E[W^2]^{3/2}} = \frac{1}{\sqrt{465}}\cdot\frac{20{,}799/124}{(909/62)^{3/2}} = \frac{2311}{303\sqrt{3030}} \approx .13855940.$

b. The desired probability is $\Pr[C > 1200]$. (Note that C is measured in thousands of dollars.) From part a, we have


$E[C] = 1027.50, \qquad \mathrm{Var}(C) = 6817.50, \qquad \gamma_C \approx .13855940.$

Hence using a normal approximation we have

$\Pr[C > 1200] = \Pr\!\left[\frac{C - E[C]}{\sqrt{\mathrm{Var}(C)}} > \frac{1200 - 1027.50}{\sqrt{6817.50}}\right] \approx \Pr[Z > 2.0892] = 1 - \Phi[2.0892]$

where $Z \sim \text{Normal}(0, 1)$ and $\Phi$ is the standard normal distribution function. From the table in Appendix E of the textbook and using linear interpolation we have

$\Phi[2.0892] \approx (.08)\,\Phi[2.08] + (.92)\,\Phi[2.09] \approx (.08)(.9812) + (.92)(.9817) = .98166.$

Consequently, under a normal approximation the desired probability is

$\Pr[C > 1200] \approx 1 - \Phi[2.0892] \approx 1 - .98166 = 1.834\%.$

We can also approximate the desired probability using a normal power approximation. From section 8.6 of the textbook we have the approximation

$F_C[c] \approx \Phi\!\left[-\frac{3}{\gamma_C} + \sqrt{\frac{9}{\gamma_C^2} + 1 + \frac{6}{\gamma_C}\cdot\frac{c - \mu_C}{\sigma_C}}\,\right].$

Substituting the values of $\mu_C$, $\sigma_C$, and $\gamma_C$ specified earlier into this formula we have

$F_C[c] \approx \Phi\!\left[-\frac{3}{.13855940} + \sqrt{\frac{9}{(.13855940)^2} + 1 + \frac{6}{.13855940}\cdot\frac{c - 1027.50}{\sqrt{6817.50}}}\,\right] \approx \Phi\!\left[-21.65136 + \sqrt{469.7816 + 43.3027\cdot\frac{c - 1027.50}{82.5682}}\,\right].$

Hence the desired probability is

$\Pr[C > 1200] = 1 - F_C[1200] \approx 1 - \Phi[2.0182].$

From Appendix E of the textbook we have


$\Phi[2.0182] \approx (.18)\,\Phi[2.01] + (.82)\,\Phi[2.02] \approx (.18)(.9778) + (.82)(.9783) = .97821.$

Consequently, under a normal power approximation the desired probability is

$\Pr[C > 1200] \approx 1 - \Phi[2.0182] \approx 1 - .97821 = 2.179\%.$

Both approximations give a numerical value of $\Pr[C > 1200]$ that is fairly small. However, the value of 2.179% determined by the normal power approximation is probably closer to the true value because it incorporates the impact of the skewness of C, which is not entirely negligible in this case. Since the normal distribution is symmetric (and hence has skewness 0), it has a tendency to underestimate probability in the right tail of distributions with a positive skew.
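The two approximations can be reproduced numerically; the following Python sketch (not part of the original solution) builds the mixture moments of W from the mixing weights and exponential means given in this exercise.

```python
# Normal and normal power approximations to Pr[C > 1200] for the compound Poisson C.
import math

def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

lam = 465.0
mix = [(20/93, 1.0), (15/93, 0.5), (18/93, 1.0), (40/93, 4.0)]   # (weight, exponential mean)

def EW(k):
    """k-th moment of the claim-size mixture W; the k-th exponential moment is k!*mean^k."""
    return sum(w * math.factorial(k) * m**k for w, m in mix)

mu, var = lam * EW(1), lam * EW(2)
sigma = math.sqrt(var)
gamma = EW(3) / (math.sqrt(lam) * EW(2) ** 1.5)

z_normal = (1200 - mu) / sigma
z_np = -3/gamma + math.sqrt(9/gamma**2 + 1 + (6/gamma) * (1200 - mu)/sigma)
print("normal      :", round(1 - phi(z_normal), 5))   # ~0.0183
print("normal power:", round(1 - phi(z_np), 5))       # ~0.0218
```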

30. Let N be the number of claims received on this policy and let X be the aggregate amount of claims for this policy. Then from the given information, the distribution of N is given by

$p_N[0] = .50, \quad p_N[1] = .30, \quad p_N[2] = .20$

and

$X = 0$ if N = 0, $\qquad X = W_1$ if N = 1, $\qquad X = W_1 + W_2$ if N = 2,

where $W_j \sim \text{Exponential}(1/2)$ and the $W_j$ are independent. From section 6.1.2 of the textbook, we know that the sum of independent exponential random variables with common parameter $\lambda$ has a gamma distribution. Hence $W_1 + W_2 \sim \text{Gamma}(2, \tfrac12)$.

Consequently, X is a mixture of a certain point mass at 0, the distribution $\text{Exponential}(\tfrac12)$, and the distribution $\text{Gamma}(2, \tfrac12)$ with respective mixing weights .50, .30, .20.

a. From the observation that X is a mixture of the type just described it follows that the distribution function of X is given by


$F_X[x] = .50\,F_{X_1}[x] + .30\,F_{X_2}[x] + .20\,F_{X_3}[x]$

where

$F_{X_1}[x] = \begin{cases} 1 & \text{for } x \ge 0,\\ 0 & \text{for } x < 0,\end{cases} \qquad F_{X_2}[x] = \begin{cases} 1 - e^{-x/2} & \text{for } x \ge 0,\\ 0 & \text{for } x < 0,\end{cases}$

and

$F_{X_3}[x] = 1 - S_{X_3}[x] = \begin{cases} 1 - \sum_{n=0}^{1} \frac{(x/2)^n e^{-x/2}}{n!} & \text{for } x \ge 0,\\ 0 & \text{for } x < 0\end{cases} = \begin{cases} 1 - e^{-x/2}\left(1 + \frac{x}{2}\right) & \text{for } x \ge 0,\\ 0 & \text{for } x < 0.\end{cases}$

(See section 6.1.2 of the textbook for the formula for the survival function of a gamma distribution whose parameter r is a positive integer.) Simplifying we obtain

$F_X[x] = \begin{cases} 1 - e^{-x/2}\left(.50 + .20\,\frac{x}{2}\right) & \text{for } x \ge 0,\\ 0 & \text{for } x < 0\end{cases} = \begin{cases} 1 - .50\,e^{-x/2}\left(1 + \frac{x}{5}\right) & \text{for } x \ge 0,\\ 0 & \text{for } x < 0.\end{cases}$

b. To determine the unconditional mean and variance of X we condition on N and use the formulas

$E[X] = E_N[E[X \mid N]], \qquad \mathrm{Var}(X) = E_N[\mathrm{Var}(X \mid N)] + \mathrm{Var}_N(E[X \mid N]).$

From the earlier observations, we have

$E[X \mid N] = E[0]$ if N = 0, $\qquad E[X \mid N] = E[W_1]$ if N = 1, $\qquad E[X \mid N] = E[W_1 + W_2]$ if N = 2,

and

$\mathrm{Var}(X \mid N) = \mathrm{Var}(0)$ if N = 0, $\qquad \mathrm{Var}(X \mid N) = \mathrm{Var}(W_1)$ if N = 1, $\qquad \mathrm{Var}(X \mid N) = \mathrm{Var}(W_1 + W_2)$ if N = 2.


Since $W_j \sim \text{Exponential}(1/2)$ and the $W_j$ are independent, it follows from the formulas for the mean and variance of an exponential distribution (section 6.1.1 of the textbook) and the distribution of N specified earlier that

$E[X \mid N] = 0$ with probability .50, $\quad E[X \mid N] = 2$ with probability .30, $\quad E[X \mid N] = 4$ with probability .20,

and

$\mathrm{Var}(X \mid N) = 0$ with probability .50, $\quad \mathrm{Var}(X \mid N) = 4$ with probability .30, $\quad \mathrm{Var}(X \mid N) = 8$ with probability .20.

Consequently,

$E_N[E[X \mid N]] = (0)(.50) + (2)(.30) + (4)(.20) = 1.4,$

$E_N[\mathrm{Var}(X \mid N)] = (0)(.50) + (4)(.30) + (8)(.20) = 2.8,$

and

$\mathrm{Var}_N(E[X \mid N]) = E_N[E[X \mid N]^2] - (E_N[E[X \mid N]])^2 = \{(0)^2(.50) + (2)^2(.30) + (4)^2(.20)\} - \{(0)(.50) + (2)(.30) + (4)(.20)\}^2 = 4.4 - 1.96 = 2.44.$

Therefore, the unconditional mean and variance of X are

$E[X] = E_N[E[X \mid N]] = 1.4,$

$\mathrm{Var}(X) = E_N[\mathrm{Var}(X \mid N)] + \mathrm{Var}_N(E[X \mid N]) = 2.8 + 2.44 = 5.24.$

c. From the solution to part a, we have


$S_X[x] = .50\,e^{-x/2}\left(1 + \frac{x}{5}\right) \quad \text{for } x \ge 0.$

Hence the probability that aggregate claims exceed three is

$S_X[3] = .50\,e^{-3/2}\left(1 + \frac{3}{5}\right) = .80\,e^{-3/2} \approx .1785.$
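The results of parts a through c can be checked with a few lines of Python; this is simply a restatement of the mixture and the conditioning argument above.

```python
# Check E[X], Var(X), and S_X(3) for the mixed claim distribution of this exercise.
import math

def S_X(x):
    """Survival function derived in part a."""
    return 1.0 if x < 0 else 0.50 * math.exp(-x / 2.0) * (1.0 + x / 5.0)

pN        = {0: 0.50, 1: 0.30, 2: 0.20}
cond_mean = {0: 0.0, 1: 2.0, 2: 4.0}     # E[X | N]
cond_var  = {0: 0.0, 1: 4.0, 2: 8.0}     # Var(X | N)

EX   = sum(pN[n] * cond_mean[n] for n in pN)
EVar = sum(pN[n] * cond_var[n] for n in pN)
VarE = sum(pN[n] * cond_mean[n] ** 2 for n in pN) - EX ** 2
print(EX, EVar + VarE)          # 1.4  5.24
print(round(S_X(3.0), 4))       # ~0.1785
```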


Chapter Ten Solutions

1. Let $\mu_S$, $\sigma_S$, $\mu_B$, $\sigma_B$ be the expected return and standard deviation of return for the stock and bond funds respectively, and let $\rho$ be the correlation of the returns for the two funds. We are given that $\mu_S = 15\%$, $\sigma_S = 20\%$, $\mu_B = 5\%$, $\sigma_B = 10\%$, $\rho = 0.25$.

a. From section 10.1 of the textbook, the equation of the curve in the risk-reward plane representing the possible portfolios is

$\sigma^2 = A(\mu - \mu_0)^2 + \sigma_0^2$

where

$A = \frac{(\sigma_S - \sigma_B)^2 + 2(1 - \rho)\sigma_S\sigma_B}{(\mu_S - \mu_B)^2},$

$\mu_0 = \frac{\mu_S\sigma_B^2 - (\mu_S + \mu_B)\rho\,\sigma_S\sigma_B + \mu_B\sigma_S^2}{(\sigma_S - \sigma_B)^2 + 2(1 - \rho)\sigma_S\sigma_B},$

$\sigma_0^2 = \frac{\sigma_S^2\sigma_B^2\left(1 - \rho^2\right)}{(\sigma_S - \sigma_B)^2 + 2(1 - \rho)\sigma_S\sigma_B}.$

Substituting the given values of $\mu_S$, $\sigma_S$, $\mu_B$, $\sigma_B$, $\rho$ into these expressions we have

$A = \frac{(20\% - 10\%)^2 + 2(1 - .25)(20\%)(10\%)}{(15\% - 5\%)^2} = 4,$


$\mu_0 = \frac{(15\%)(10\%)^2 - (15\% + 5\%)(0.25)(20\%)(10\%) + (5\%)(20\%)^2}{(20\% - 10\%)^2 + 2(1 - .25)(20\%)(10\%)} = 6.25\%,$

$\sigma_0^2 = \frac{(20\%)^2(10\%)^2\left(1 - (.25)^2\right)}{(20\% - 10\%)^2 + 2(1 - .25)(20\%)(10\%)} = 93.75\ (\%^2) \approx (9.68\%)^2.$

Consequently, the equation in the risk-reward plane is

$\sigma^2 = 4(\mu - 6.25\%)^2 + (9.68\%)^2.$

b. From part a, the portfolio with the least risk has risk-reward coordinates

$\sigma_0 = 9.68\%, \qquad \mu_0 = 6.25\%.$

c. The graph is straightforward to create using Mathematica or similar computer software.

d. For an investor with utility functional $U = \mu - k\sigma^2$, the optimal allocation in the stock fund is

$\frac{\mu^* - \mu_B}{\mu_S - \mu_B}$

where $\mu^*$ is the return on the optimal portfolio, given by

$\mu^* = \mu_0 + \frac{1}{2kA}$

(see section 10.1 of the textbook). In this exercise, k = 1/100. Hence the return on the optimal portfolio is

$\mu^* = \mu_0 + \frac{1}{2kA} = 6.25\% + \frac{1}{2\left(\frac{1}{100\%}\right)(4)} = 6.25\% + 12.5\% = 18.75\%$

and the fraction invested in the stock fund is


$\frac{\mu^* - \mu_B}{\mu_S - \mu_B} = \frac{18.75\% - 5\%}{15\% - 5\%} = 1.375.$

The interpretation of this is as follows: for every $1000 of investable assets, one should sell short $375 of bonds and invest $1375 in stocks. If it is not possible to sell short the bond fund, then the next best thing for this particular investor is to invest 100% of the portfolio in the stock fund.
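A short numerical sketch of parts a, b, and d (not part of the original solution) is given below; returns and standard deviations are expressed in percent, exactly as in the exercise.

```python
# Two-fund frontier coefficients and the optimal allocation for U = mu - k*sigma^2.
import math

mu_s, s_s, mu_b, s_b, rho, k = 15.0, 20.0, 5.0, 10.0, 0.25, 1.0 / 100.0

denom = (s_s - s_b) ** 2 + 2 * (1 - rho) * s_s * s_b
A = denom / (mu_s - mu_b) ** 2
mu0 = (mu_s * s_b**2 - (mu_s + mu_b) * rho * s_s * s_b + mu_b * s_s**2) / denom
s0 = math.sqrt(s_s**2 * s_b**2 * (1 - rho**2) / denom)

mu_star = mu0 + 1.0 / (2 * k * A)               # optimal expected return
w_stock = (mu_star - mu_b) / (mu_s - mu_b)      # weight in the stock fund

print(A, mu0, round(s0, 2))      # 4.0  6.25  ~9.68
print(mu_star, w_stock)          # 18.75  1.375
```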

3. Let $x_1, x_2, x_3$ be the weights of $S_1, S_2, S_3$ in the optimal risky portfolio and $\lambda$ be the reward-to-variability ratio for the optimal risky portfolio when the risk-free rate is $r_f$. From section 10.3 of the textbook, $x_1, x_2, x_3$, and $\lambda$ satisfy the following system:

$\sigma_1^2(\lambda x_1) + \sigma_{12}(\lambda x_2) + \sigma_{13}(\lambda x_3) = \mu_1 - r_f,$

$\sigma_{21}(\lambda x_1) + \sigma_2^2(\lambda x_2) + \sigma_{23}(\lambda x_3) = \mu_2 - r_f,$

$\sigma_{31}(\lambda x_1) + \sigma_{32}(\lambda x_2) + \sigma_3^2(\lambda x_3) = \mu_3 - r_f.$

This system has a unique solution in $\lambda x_1, \lambda x_2, \lambda x_3$ provided that the variance-covariance matrix is nondegenerate, which we will assume. Since the right-hand sides are linear in $r_f$, we may write

$\lambda x_1 = c_{11} + c_{12}\,r_f, \qquad \lambda x_2 = c_{21} + c_{22}\,r_f, \qquad \lambda x_3 = c_{31} + c_{32}\,r_f,$

where the $c_{ij}$ are independent of $r_f$. We are given that when $r_f = 5\%$,

$x_1 = \frac{74}{93}, \quad x_2 = \frac{-60}{93}, \quad x_3 = \frac{79}{93}, \quad \lambda = 46.5.$

Substituting these values into the preceding system we obtain

$(46.5)\frac{74}{93} = c_{11} + c_{12}(5),$

$(46.5)\frac{-60}{93} = c_{21} + c_{22}(5),$


$(46.5)\frac{79}{93} = c_{31} + c_{32}(5),$

where the units of $c_{i1}$ and $c_{i2}$ are as appropriate. We are also given that when $r_f = 10\%$,

$x_1 = \frac{12}{19}, \quad x_2 = \frac{-20}{19}, \quad x_3 = \frac{27}{19}, \quad \lambda = 19.$

Hence arguing as before we obtain

$(19)\frac{12}{19} = c_{11} + c_{12}(10),$

$(19)\frac{-20}{19} = c_{21} + c_{22}(10),$

$(19)\frac{27}{19} = c_{31} + c_{32}(10).$

The six equations in the $c_{ij}$ are most easily solved by considering them as three sets of two equations:

$37 = c_{11} + 5c_{12}, \quad 12 = c_{11} + 10c_{12};$

$-30 = c_{21} + 5c_{22}, \quad -20 = c_{21} + 10c_{22};$

$39.5 = c_{31} + 5c_{32}, \quad 27 = c_{31} + 10c_{32}.$

Solving these equations we obtain

$c_{11} = 62, \quad c_{12} = -5, \qquad c_{21} = -40, \quad c_{22} = 2, \qquad c_{31} = 52, \quad c_{32} = -2.5.$

Hence the coordinates of the optimal risky portfolio are given by


$\lambda x_1 = 62 - 5r_f, \qquad \lambda x_2 = -40 + 2r_f, \qquad \lambda x_3 = 52 - 2.5r_f.$

In particular, for $r_f = 15\%$ we have

$\lambda x_1 = -13, \quad \lambda x_2 = -10, \quad \lambda x_3 = 14.5$

and so

$x_1 = \frac{26}{17}, \quad x_2 = \frac{20}{17}, \quad x_3 = \frac{-29}{17}.$
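The interpolation of the $c_{ij}$ and the resulting weights at $r_f = 15\%$ can be recomputed mechanically; the Python sketch below (not part of the original solution) uses only the two known $(\lambda x_i, r_f)$ pairs given in the exercise.

```python
# Solve for c_i1, c_i2 from the two known rates, then evaluate the weights at r_f = 15%.
lam_x = {5: (37.0, -30.0, 39.5), 10: (12.0, -20.0, 27.0)}   # lambda*x_i at r_f = 5%, 10%

coeffs = []
for i in range(3):
    y5, y10 = lam_x[5][i], lam_x[10][i]
    c2 = (y10 - y5) / (10 - 5)          # slope in r_f
    c1 = y5 - 5 * c2                    # intercept
    coeffs.append((c1, c2))
print(coeffs)                           # [(62.0, -5.0), (-40.0, 2.0), (52.0, -2.5)]

rf = 15.0
lx = [c1 + c2 * rf for c1, c2 in coeffs]
total = sum(lx)
print([round(v / total, 4) for v in lx])   # ~[1.5294, 1.1765, -1.7059] = [26/17, 20/17, -29/17]
```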

8. We are given that the expected return and standard deviation on the market portfolio are $\mu_M = 15$ and $\sigma_M = 15$ and the expected return and standard deviation on the global minimum variance portfolio are $\mu_0 = 5$ and $\sigma_0 = 5$. Note that in this context the global minimum variance portfolio refers to the portfolio among those with no position in the risk-free asset whose variance is the smallest. Since both the market portfolio and the global minimum variance portfolio must lie on the minimum variance set of risky portfolios ("risky" meaning that there is no position in the risk-free asset), we can determine the equation in the risk-reward plane for the minimum variance set. Indeed, from sections 10.1, 10.2, and 10.3 of the textbook, the minimum variance set has equation

$\sigma^2 = A(\mu - \mu_0)^2 + \sigma_0^2$

where $(\sigma_0, \mu_0)$ are the risk-reward coordinates for the global minimum variance portfolio. Substituting the values of $(\sigma, \mu)$ for the market portfolio into this equation and solving for A, we find that A = 2. Hence the minimum variance set for the risky portfolios is defined by

$\sigma^2 = 2(\mu - 5)^2 + 25.$

We are given that the rate of return on risk-free investments is 3% and short positions in the risk-free asset are not allowed. Consequently, the set of efficient portfolios consists of two separate pieces:

i. The line segment between the point $(0, 3\%)$ and T, where T is the point of tangency for the curve $\sigma^2 = 2(\mu - 5)^2 + 25$ and the tangent line through $(0, 3\%)$.

ii. The portion of the curve $\sigma^2 = 2(\mu - 5)^2 + 25$ to the right of the point T.


Note that by the definition of a zero-beta companion portfolio, T is the efficient portfolio whose zero beta companion has expected return 3%.

Because short positions are not allowed in the risk-free asset, the market portfolio will lie on the portion of the curve $\sigma^2 = 2(\mu - 5)^2 + 25$ that is to the right of T. We don't actually need to make this observation to determine the answers to parts a through d; it will be a consequence of our calculations. However, it is helpful to keep this fact in mind as we proceed through the rest of the solution.

a. Let $(\sigma_{Z(M)}, \mu_{Z(M)})$ be the risk-reward coordinates for the zero beta companion of the market portfolio M. From the definition of a zero beta companion, $\mu_{Z(M)}$ is the $\mu$-intercept of the tangent line to the hyperbola $\sigma^2 = 2(\mu - 5)^2 + 25$ through the point $(\sigma_M, \mu_M)$. The value of $\mu_{Z(M)}$ can be determined from the equation

$\left.\frac{d\mu}{d\sigma}\right|_{(\sigma, \mu) = (\sigma_M, \mu_M)} = \frac{\mu_M - \mu_{Z(M)}}{\sigma_M - 0}.$

Note that the left side of this equation is the slope of the hyperbola at the point $(\sigma_M, \mu_M)$ and the right side is the slope of the line through the points $(0, \mu_{Z(M)})$ and $(\sigma_M, \mu_M)$. A formula for the derivative $d\mu/d\sigma$ can be determined by implicitly differentiating the equation $\sigma^2 = 2(\mu - 5)^2 + 25$:

$\frac{d\mu}{d\sigma} = \frac{\sigma}{2(\mu - 5)}.$

Substituting $(\sigma_M, \mu_M) = (15, 15)$ into the equation for $\mu_{Z(M)}$ and solving we obtain

$\mu_{Z(M)} = 3.75.$

Since the zero beta companion of M must lie on the minimum variance set, we can determine the standard deviation of the zero beta companion by substituting this value for $\mu_{Z(M)}$ into the equation $\sigma^2 = 2(\mu - 5)^2 + 25$. When we do this, we find that


$\sigma_{Z(M)}^2 = 2\left(\mu_{Z(M)} - 5\right)^2 + 25 = 2(3.75 - 5)^2 + 25 = 28.125.$

Hence the expected return and standard deviation for the zero beta companion of the market portfolio are

$\mu_{Z(M)} = 3.75, \qquad \sigma_{Z(M)} = \sqrt{28.125} \approx 5.30.$

b. From section 10.3 of the textbook, we know that every portfolio on the minimum variance set can be constructed using only the market portfolio and its zero beta companion. Indeed, for any portfolio Q on the minimum variance set, the weights $x_1, x_2$ in M, Z(M) are given by

$x_1 = \frac{z_1}{z_1 + z_2}, \qquad x_2 = \frac{z_2}{z_1 + z_2}$

where

$\begin{pmatrix} \sigma_M^2 & 0\\ 0 & \sigma_{Z(M)}^2 \end{pmatrix}\begin{pmatrix} z_1\\ z_2 \end{pmatrix} = \begin{pmatrix} \mu_M - r\\ \mu_{Z(M)} - r \end{pmatrix}$

and where r is the expected return on the zero beta companion of Q. Since T is a portfolio on the minimum variance set, it follows that T can be constructed using only M and Z(M). Since the expected return on the zero beta companion of T is 3% and $(\sigma_{Z(M)}, \mu_{Z(M)}) = (5.30, 3.75)$ from part a, it follows that the weights $x_1, x_2$ in M, Z(M) can be determined by solving

$\begin{pmatrix} 15^2 & 0\\ 0 & 28.125 \end{pmatrix}\begin{pmatrix} z_1\\ z_2 \end{pmatrix} = \begin{pmatrix} 15 - 3\\ 3.75 - 3 \end{pmatrix}.$

When we do this, we find that

$z_1 = \frac{12}{225}, \qquad z_2 = \frac{0.75}{28.125}$

and so


$x_1 = \frac{z_1}{z_1 + z_2} = \frac{2}{3}, \qquad x_2 = \frac{z_2}{z_1 + z_2} = \frac{1}{3}.$

Consequently, we can replicate portfolio T by investing 2/3 of our assets in the market portfolio and the remaining 1/3 in the zero beta companion of the market portfolio.

c. Let $R_T$, $R_M$, $R_{Z(M)}$ denote the returns on the portfolio T, the market portfolio M, and the zero beta companion of the market portfolio respectively. Then from part b,

$R_T = x_1 R_M + x_2 R_{Z(M)} = \frac{2}{3}R_M + \frac{1}{3}R_{Z(M)}.$

Hence using the fact that $\mu_{Z(M)} = 3.75$ from part a,

$E[R_T] = \frac{2}{3}E[R_M] + \frac{1}{3}E[R_{Z(M)}] = \frac{2}{3}(15) + \frac{1}{3}(3.75) = 11.25.$

Now the CAPM for the efficient portfolios lying to the right of the tangency portfolio T (i.e., the efficient portfolios with expected return greater than $E[R_T]$) is given by

$E[R] = E[R_{Z(M)}] + \beta\left(E[R_M] - E[R_{Z(M)}]\right),$

with the restriction on $\beta$ to be determined shortly. (See section 10.4 of the textbook.) This version of the CAPM is used since the efficient portfolios with expected return greater than $E[R_T]$ do not have a position (either long or short) in the risk-free asset. Since T does not have a position (either long or short) in the risk-free asset, it follows that $E[R_T]$ and $\beta_T$ satisfy the preceding CAPM equation, i.e.,

$E[R_T] = E[R_{Z(M)}] + \beta_T\left(E[R_M] - E[R_{Z(M)}]\right).$

Substituting the given and calculated values into this equation, we have

$11.25 = 3.75 + \beta_T(15 - 3.75),$

from which it follows that

$\beta_T = \frac{2}{3}.$

Since the CAPM equation is increasing in $\beta$ and since all efficient portfolios with expected return less than $E[R_T]$ lie on the straight line through $(0, 3\%)$ and $(\sigma_T, \mu_T)$ (and hence contain a long position in the risk-free asset), it follows that the preceding CAPM equation only holds for $\beta \ge \beta_T$, i.e., for $\beta \ge \frac{2}{3}$.


d. From part c, the security market line for efficient portfolios with expected return greater than $E[R_T]$ is

$E[R] = E[R_{Z(M)}] + \beta\left(E[R_M] - E[R_{Z(M)}]\right).$

As noted in part c, this equation holds for $\beta \ge \beta_T$. Substituting the given and calculated values into this equation, we see that the security market line for $\beta \ge \frac{2}{3}$ is given by

$E[R] = 3.75 + 11.25\,\beta.$

The security market line for efficient portfolios with $\beta < \frac{2}{3}$ is simply the straight line in the $\beta$-$\mu$ plane through the points representing T and the risk-free asset. The reason for this is that all of the efficient portfolios with expected return less than $E[R_T]$ lie on the straight line in the $\sigma$-$\mu$ plane through the points $(0, 3\%)$ and $(\sigma_T, \mu_T)$, and so T plays the role of the market portfolio in this case. The equation of the line in the $\beta$-$\mu$ plane through the points representing T and the risk-free asset is in general

$E[R] = r_f + \beta\,\frac{E[R_T] - r_f}{\beta_T}.$

Substituting the values of $E[R_T]$ and $\beta_T$ determined in part c and the given value of the risk-free rate we have

$E[R] = 3 + 12.375\,\beta$

for $\beta \le \frac{2}{3}$.

Consequently, the security market line for efficient portfolios is given by

$E[R] = \begin{cases} 3 + 12.375\,\beta & \text{for } \beta \le \frac{2}{3},\\ 3.75 + 11.25\,\beta & \text{for } \beta \ge \frac{2}{3}.\end{cases}$

The security market line for individual risky assets is given by the general form of the CAPM for all values of $\beta$:


$E[R] = E[R_{Z(M)}] + \beta\left(E[R_M] - E[R_{Z(M)}]\right).$

Substituting the values of $E[R_M]$ and $E[R_{Z(M)}]$ into this equation we obtain

$E[R] = 3.75 + 11.25\,\beta$ for all values of $\beta$.

This equation provides information on the "required return" for a risky asset to be a candidate for investment. Note that the "required return" is higher for an individual risky asset of risk level $\beta$ (with $\beta < \frac{2}{3}$) than it is for an efficient portfolio of risk level $\beta$. This makes perfect sense, since an efficient portfolio with $\beta < \frac{2}{3}$ has a return component (the risk-free component) that is completely certain.
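The chain of calculations in parts a through d can be retraced numerically; the following Python sketch is offered only as a check of the arithmetic above, with all inputs taken from the exercise.

```python
# Zero-beta companion, tangency weights, and the security market line for exercise 8.
import math

A, mu0, s0_sq = 2.0, 5.0, 25.0          # minimum variance set: sigma^2 = 2*(mu - 5)^2 + 25
sM = muM = 15.0
rf = 3.0

# part a: slope of the hyperbola at M gives the mu-intercept of the tangent line
slope = sM / (A * (muM - mu0))          # d(mu)/d(sigma) = sigma / (A*(mu - mu0))
muZ = muM - slope * sM                  # 3.75
sZ = math.sqrt(A * (muZ - mu0) ** 2 + s0_sq)   # ~5.30

# part b: weights of the tangency portfolio T in (M, Z(M))
z1, z2 = (muM - rf) / sM**2, (muZ - rf) / sZ**2
x1, x2 = z1 / (z1 + z2), z2 / (z1 + z2)        # 2/3, 1/3

# parts c-d: E[R_T], beta_T, and the two branches of the security market line
ERT = x1 * muM + x2 * muZ                      # 11.25
betaT = (ERT - muZ) / (muM - muZ)              # 2/3
print(muZ, round(sZ, 2), round(x1, 4), round(x2, 4), ERT, round(betaT, 4))
print("SML for beta <= 2/3: E[R] =", rf, "+", (ERT - rf) / betaT, "* beta")
print("SML for beta >= 2/3: E[R] =", muZ, "+", muM - muZ, "* beta")
```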
