Course Outline - McGraw Hill



4. Binomial Random Variable Approximations,Conditional Probability Density Functions

and Stirling’s Formula

Let X represent a binomial r.v. as in (3-42). Then from (2-30),

$$P(k_1 \le X \le k_2) = \sum_{k=k_1}^{k_2} P_n(k) = \sum_{k=k_1}^{k_2} \binom{n}{k} p^k q^{n-k}, \qquad \binom{n}{k} = \frac{n!}{(n-k)!\,k!}. \tag{4-1}$$

Since the binomial coefficient grows quite rapidly with n, it is difficult to compute (4-1) for large n. In this context, two approximations are extremely useful.

4.1 The Normal Approximation (De Moivre-Laplace Theorem)

Suppose $n \to \infty$ with p held fixed. Then for k in the $\sqrt{npq}$ neighborhood of np, we can approximate


$$\binom{n}{k} p^k q^{n-k} \simeq \frac{1}{\sqrt{2\pi npq}}\; e^{-(k-np)^2/2npq}. \tag{4-2}$$

Thus if $k_1$ and $k_2$ in (4-1) are within or around the neighborhood of the interval $\big(np - \sqrt{npq},\; np + \sqrt{npq}\big)$, we can approximate the summation in (4-1) by an integration. In that case (4-1) reduces to

$$P(k_1 \le X \le k_2) = \int_{k_1}^{k_2} \frac{1}{\sqrt{2\pi npq}}\, e^{-(x-np)^2/2npq}\, dx = \int_{x_1}^{x_2} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy, \tag{4-3}$$

where

$$x_1 = \frac{k_1 - np}{\sqrt{npq}}, \qquad x_2 = \frac{k_2 - np}{\sqrt{npq}}.$$

We can express (4-3) in terms of the normalized integral

$$\mathrm{erf}(x) = \frac{1}{\sqrt{2\pi}} \int_0^x e^{-y^2/2}\, dy = -\,\mathrm{erf}(-x) \tag{4-4}$$

that has been tabulated extensively (see Table 4.1).


For example, if $x_1$ and $x_2$ are both positive, we obtain

$$P(k_1 \le X \le k_2) = \mathrm{erf}(x_2) - \mathrm{erf}(x_1). \tag{4-5}$$

Example 4.1: A fair coin is tossed 5,000 times. Find the probability that the number of heads is between 2,475 and 2,525.

Solution: We need $P(2{,}475 \le X \le 2{,}525)$. Here n is large, so we can use the normal approximation. In this case $p = \tfrac12$, so that $np = 2{,}500$ and $\sqrt{npq} \approx 35$. Since $np - \sqrt{npq} = 2{,}465$ and $np + \sqrt{npq} = 2{,}535$, the approximation is valid for $k_1 = 2{,}475$ and $k_2 = 2{,}525$. Thus

$$P(k_1 \le X \le k_2) = \int_{x_1}^{x_2} \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}\, dy.$$

Here

$$x_1 = \frac{k_1 - np}{\sqrt{npq}} = -\frac57, \qquad x_2 = \frac{k_2 - np}{\sqrt{npq}} = \frac57.$$
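As a sanity check, the exact binomial sum in (4-1) can be compared with the normal approximation. The sketch below evaluates the p.m.f. in the log domain via `math.lgamma`, since $0.5^{5000}$ underflows a double:

```python
import math

def binom_pmf(n, k, p):
    # Log-domain evaluation of C(n,k) p^k (1-p)^(n-k) to avoid underflow.
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1.0 - p))
    return math.exp(log_pmf)

n, p = 5000, 0.5
exact = sum(binom_pmf(n, k, p) for k in range(2475, 2526))

# Normal approximation (4-3) with x2 = -x1 = 5/7:
# P = Phi(x2) - Phi(x1) = erf_std(x2 / sqrt(2)).
approx = math.erf((5.0 / 7.0) / math.sqrt(2.0))

print(exact, approx)
```

The text's value 0.516 uses the rounded table entry $\mathrm{erf}(0.7)$; evaluating (4-5) at exactly $5/7$ gives about 0.52, and the small remaining gap to the exact sum is largely the half-unit continuity correction that the plain approximation ignores.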


$$\mathrm{erf}(x) = \frac{1}{\sqrt{2\pi}} \int_0^x e^{-y^2/2}\, dy = G(x) - \tfrac12$$

| x    | erf(x)  | x    | erf(x)  | x    | erf(x)  | x    | erf(x)  |
|------|---------|------|---------|------|---------|------|---------|
| 0.05 | 0.01994 | 0.80 | 0.28814 | 1.55 | 0.43943 | 2.30 | 0.48928 |
| 0.10 | 0.03983 | 0.85 | 0.30234 | 1.60 | 0.44520 | 2.35 | 0.49061 |
| 0.15 | 0.05962 | 0.90 | 0.31594 | 1.65 | 0.45053 | 2.40 | 0.49180 |
| 0.20 | 0.07926 | 0.95 | 0.32894 | 1.70 | 0.45543 | 2.45 | 0.49286 |
| 0.25 | 0.09871 | 1.00 | 0.34134 | 1.75 | 0.45994 | 2.50 | 0.49379 |
| 0.30 | 0.11791 | 1.05 | 0.35314 | 1.80 | 0.46407 | 2.55 | 0.49461 |
| 0.35 | 0.13683 | 1.10 | 0.36433 | 1.85 | 0.46784 | 2.60 | 0.49534 |
| 0.40 | 0.15542 | 1.15 | 0.37493 | 1.90 | 0.47128 | 2.65 | 0.49597 |
| 0.45 | 0.17364 | 1.20 | 0.38493 | 1.95 | 0.47441 | 2.70 | 0.49653 |
| 0.50 | 0.19146 | 1.25 | 0.39435 | 2.00 | 0.47726 | 2.75 | 0.49702 |
| 0.55 | 0.20884 | 1.30 | 0.40320 | 2.05 | 0.47982 | 2.80 | 0.49744 |
| 0.60 | 0.22575 | 1.35 | 0.41149 | 2.10 | 0.48214 | 2.85 | 0.49781 |
| 0.65 | 0.24215 | 1.40 | 0.41924 | 2.15 | 0.48422 | 2.90 | 0.49813 |
| 0.70 | 0.25804 | 1.45 | 0.42647 | 2.20 | 0.48610 | 2.95 | 0.49841 |
| 0.75 | 0.27337 | 1.50 | 0.43319 | 2.25 | 0.48778 | 3.00 | 0.49865 |

Table 4.1


Since $x_1 < 0$, from Fig. 4.1(b) the above probability is given by

$$P(2{,}475 \le X \le 2{,}525) = \mathrm{erf}(x_2) - \mathrm{erf}(x_1) = \mathrm{erf}(x_2) + \mathrm{erf}(|x_1|) = 2\,\mathrm{erf}\!\left(\tfrac57\right) \approx 2\,\mathrm{erf}(0.7) = 0.516,$$

where we have used Table 4.1 $\big(\mathrm{erf}(0.7) = 0.258\big)$.

Fig. 4.1: the standard normal density $\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$ with the integration limits (a) $x_1 > 0,\ x_2 > 0$ and (b) $x_1 < 0,\ x_2 > 0$.

4.2 The Poisson Approximation

As we have mentioned earlier, for large n the Gaussian approximation of a binomial r.v. is valid only if p is fixed, i.e., only if $np \gg 1$ and $npq \gg 1$. What if np is small, or if it does not increase with n?


Obviously that is the case if, for example, $p \to 0$ as $n \to \infty$, such that $np = \lambda$ is a fixed number.

Many random phenomena in nature in fact follow this pattern: the total number of calls on a telephone line, claims in an insurance company, etc., tend to follow this type of behavior. Consider random arrivals such as telephone calls over a line. Let n represent the total number of calls in the interval $(0, T)$. From our experience, as $T \to \infty$ we have $n \to \infty$, so that we may assume $n = \mu T$. Consider a small interval of duration $\Delta$ as in Fig. 4.2. If there is only a single call coming in, the probability p of that single call occurring in that interval must depend on its relative size with respect to T.

Fig. 4.2: n arrivals in the interval $(0, T)$, with a subinterval of duration $\Delta$.


Hence we may assume $p = \Delta/T$. Note that $p \to 0$ as $T \to \infty$. However, in this case

$$np = \mu T \cdot \frac{\Delta}{T} = \mu \Delta = \lambda$$

is a constant, and the normal approximation is invalid here.

Suppose the interval $\Delta$ in Fig. 4.2 is of interest to us. A call inside that interval is a "success" (H), whereas one outside is a "failure" (T). This is equivalent to the coin-tossing situation, and hence the probability $P_n(k)$ of obtaining k calls (in any order) in an interval of duration $\Delta$ is given by the binomial p.m.f. Thus

$$P_n(k) = \frac{n!}{(n-k)!\,k!}\, p^k (1-p)^{n-k}, \tag{4-6}$$

and here $n \to \infty,\ p \to 0$ such that $np = \lambda$. It is easy to obtain an excellent approximation to (4-6) in that situation. To see this, rewrite (4-6) as


$$P_n(k) = \frac{n(n-1)\cdots(n-k+1)}{k!}\, p^k (1-p)^{n-k} = \left(1 - \frac1n\right)\left(1 - \frac2n\right)\cdots\left(1 - \frac{k-1}{n}\right) \frac{(np)^k}{k!}\, \frac{(1 - \lambda/n)^n}{(1 - \lambda/n)^k}. \tag{4-7}$$

Thus,

$$\lim_{\substack{n \to \infty,\ p \to 0 \\ np = \lambda}} P_n(k) = e^{-\lambda}\, \frac{\lambda^k}{k!}, \tag{4-8}$$

since the finite products $\left(1 - \frac1n\right)\left(1 - \frac2n\right)\cdots\left(1 - \frac{k-1}{n}\right)$ as well as $\left(1 - \frac{\lambda}{n}\right)^k$ tend to unity as $n \to \infty$, and

$$\lim_{n \to \infty} \left(1 - \frac{\lambda}{n}\right)^n = e^{-\lambda}.$$

The right side of (4-8) represents the Poisson p.m.f., and the Poisson approximation to the binomial r.v. is valid in situations where the binomial r.v. parameters n and p diverge to two extremes $(n \to \infty,\ p \to 0)$ such that their product np is a constant.
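The quality of the limit (4-8) at finite n is easy to probe numerically; a short sketch comparing the binomial p.m.f. with its Poisson limit for n = 1000, p = 0.005 (so that λ = 5; the particular numbers are our choice):

```python
import math

def binom_pmf(n, k, p):
    # C(n,k) p^k (1-p)^(n-k), with an exact integer binomial coefficient.
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

n, p = 1000, 0.005
lam = n * p  # 5.0
max_gap = max(abs(binom_pmf(n, k, p) - poisson_pmf(lam, k)) for k in range(31))
print(max_gap)  # the two p.m.f.s already agree to a few decimal places
```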


Example 4.2 (Winning a Lottery): Suppose two million lottery tickets are issued, with 100 winning tickets among them. (a) If a person purchases 100 tickets, what is the probability of winning? (b) How many tickets should one buy to be 95% confident of having a winning ticket?

Solution: The probability of buying a winning ticket is

$$p = \frac{\text{No. of winning tickets}}{\text{Total no. of tickets}} = \frac{100}{2 \times 10^6} = 5 \times 10^{-5}.$$

Here $n = 100$, and the number of winning tickets X in the n purchased tickets has an approximate Poisson distribution with parameter

$$\lambda = np = 100 \times 5 \times 10^{-5} = 0.005.$$

Thus

$$P(X = k) = e^{-\lambda}\, \frac{\lambda^k}{k!},$$

and (a) the probability of winning is

$$P(X \ge 1) = 1 - P(X = 0) = 1 - e^{-\lambda} \approx 0.005.$$


(b) In this case we need

$$P(X \ge 1) \ge 0.95.$$

But $P(X \ge 1) = 1 - e^{-\lambda} \ge 0.95$ implies $\lambda \ge \ln 20 \approx 3$. Thus

$$\lambda = np = n \times 5 \times 10^{-5} \ge 3, \quad \text{or} \quad n \ge 60{,}000.$$

Thus one needs to buy about 60,000 tickets to be 95% confident of having a winning ticket!

Example 4.3: A spacecraft has 100,000 components $(n \to \infty)$. The probability of any one component being defective is $2 \times 10^{-5}$ $(p \to 0)$. The mission will be in danger if five or more components become defective. Find the probability of such an event.

Solution: Here n is large and p is small, and hence the Poisson approximation is valid with

$$\lambda = np = 100{,}000 \times 2 \times 10^{-5} = 2,$$

and the desired probability is given by
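The ticket count in part (b) can be reproduced in two lines; since λ = ln 20 ≈ 2.996, the exact break-even count is just under 60,000:

```python
import math

p = 100 / 2_000_000                     # 5e-5, probability that one ticket wins
n_needed = math.ceil(math.log(20) / p)  # smallest n with 1 - exp(-n*p) >= 0.95
print(n_needed)                         # 59915 tickets, i.e. about 60,000
confidence = 1.0 - math.exp(-n_needed * p)
print(confidence)
```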


$$P(X \ge 5) = 1 - P(X \le 4) = 1 - \sum_{k=0}^{4} e^{-\lambda}\, \frac{\lambda^k}{k!} = 1 - e^{-2}\left(1 + 2 + 2 + \frac43 + \frac23\right) = 0.052.$$
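A direct check of the sum above:

```python
import math

lam = 2.0
p_at_most_4 = sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(5))
p_danger = 1.0 - p_at_most_4  # P(X >= 5) under the Poisson approximation
print(p_danger)               # about 0.0527
```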

Conditional Probability Density Function

For any two events A and B, we have defined the conditional probability of A given B as

$$P(A|B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) \neq 0. \tag{4-9}$$

Noting that the probability distribution function $F_X(x)$ is given by

$$F_X(x) = P\{X(\xi) \le x\}, \tag{4-10}$$

we may define the conditional distribution of the r.v. X given the event B as


$$F_X(x|B) = P\{X(\xi) \le x \mid B\} = \frac{P\{(X(\xi) \le x) \cap B\}}{P(B)}. \tag{4-11}$$

Thus the definition of the conditional distribution depends on conditional probability, and since it obeys all probability axioms, it follows that the conditional distribution has the same properties as any distribution function. In particular,

$$F_X(+\infty|B) = \frac{P\{(X(\xi) \le +\infty) \cap B\}}{P(B)} = \frac{P(B)}{P(B)} = 1, \qquad F_X(-\infty|B) = \frac{P\{(X(\xi) \le -\infty) \cap B\}}{P(B)} = \frac{P(\phi)}{P(B)} = 0. \tag{4-12}$$

Further,

$$P\{(x_1 < X(\xi) \le x_2) \mid B\} = \frac{P\{(x_1 < X(\xi) \le x_2) \cap B\}}{P(B)} = F_X(x_2|B) - F_X(x_1|B), \tag{4-13}$$


since for $x_2 \ge x_1$,

$$\big(X(\xi) \le x_2\big) = \big(X(\xi) \le x_1\big) \cup \big(x_1 < X(\xi) \le x_2\big). \tag{4-14}$$

The conditional density function is the derivative of the conditional distribution function. Thus

$$f_X(x|B) = \frac{dF_X(x|B)}{dx}, \tag{4-15}$$

and proceeding as in (3-26) we obtain

$$F_X(x|B) = \int_{-\infty}^{x} f_X(u|B)\, du. \tag{4-16}$$

Using (4-16), we can also rewrite (4-13) as

$$P\{(x_1 < X(\xi) \le x_2) \mid B\} = \int_{x_1}^{x_2} f_X(x|B)\, dx. \tag{4-17}$$


Example 4.4: Refer to Example 3.2. Toss a coin, with $X(T) = 0$ and $X(H) = 1$. Suppose $B = \{H\}$. Determine $F_X(x|B)$.

Solution: From Example 3.2, $F_X(x)$ has the staircase form shown in Fig. 4.3(a). We need $F_X(x|B)$ for all x.

For $x < 0$, $\{X(\xi) \le x\} = \phi$, so that $\{(X(\xi) \le x) \cap B\} = \phi$ and $F_X(x|B) = 0$.

Fig. 4.3: (a) the unconditional distribution $F_X(x)$, which equals q on $[0, 1)$ and 1 for $x \ge 1$; (b) the conditional distribution $F_X(x|B)$, which is 0 for $x < 1$ and 1 for $x \ge 1$.


For $0 \le x < 1$, $\{X(\xi) \le x\} = \{T\}$, so that $\{(X(\xi) \le x) \cap B\} = \{T\} \cap \{H\} = \phi$ and $F_X(x|B) = 0$.

For $x \ge 1$, $\{X(\xi) \le x\} = \Omega$, so that $\{(X(\xi) \le x) \cap B\} = \Omega \cap B = B$ and $F_X(x|B) = \frac{P(B)}{P(B)} = 1$ (see Fig. 4.3(b)).

Example 4.5: Given $F_X(x)$, suppose $B = \{X(\xi) \le a\}$. Find $f_X(x|B)$.

Solution: We will first determine $F_X(x|B)$. From (4-11) and B as given above, we have

$$F_X(x|B) = \frac{P\{(X \le x) \cap (X \le a)\}}{P(X \le a)}. \tag{4-18}$$


For $x < a$, $(X \le x) \cap (X \le a) = (X \le x)$, so that

$$F_X(x|B) = \frac{P(X \le x)}{P(X \le a)} = \frac{F_X(x)}{F_X(a)}. \tag{4-19}$$

For $x \ge a$, $(X \le x) \cap (X \le a) = (X \le a)$, so that $F_X(x|B) = 1$. Thus

$$F_X(x|B) = \begin{cases} \dfrac{F_X(x)}{F_X(a)}, & x < a, \\[6pt] 1, & x \ge a, \end{cases} \tag{4-20}$$

and hence

$$f_X(x|B) = \frac{d}{dx} F_X(x|B) = \begin{cases} \dfrac{f_X(x)}{F_X(a)}, & x < a, \\[6pt] 0, & \text{otherwise.} \end{cases} \tag{4-21}$$


Fig. 4.4: (a) $F_X(x)$ and the conditional distribution $F_X(x|B)$; (b) $f_X(x)$ and the conditional density $f_X(x|B)$, which is the renormalized restriction of $f_X(x)$ to $x < a$.

Example 4.6: Let B represent the event $\{a < X(\xi) \le b\}$ with $b > a$. For a given $F_X(x)$, determine $F_X(x|B)$ and $f_X(x|B)$.

Solution:

$$F_X(x|B) = P\{X(\xi) \le x \mid B\} = \frac{P\{(X(\xi) \le x) \cap (a < X(\xi) \le b)\}}{P(a < X(\xi) \le b)} = \frac{P\{(X(\xi) \le x) \cap (a < X(\xi) \le b)\}}{F_X(b) - F_X(a)}. \tag{4-22}$$

For $x < a$, we have $(X(\xi) \le x) \cap (a < X(\xi) \le b) = \phi$, and hence

$$F_X(x|B) = 0. \tag{4-23}$$


For $a \le x < b$, we have $(X(\xi) \le x) \cap (a < X(\xi) \le b) = (a < X(\xi) \le x)$, and hence

$$F_X(x|B) = \frac{P(a < X(\xi) \le x)}{F_X(b) - F_X(a)} = \frac{F_X(x) - F_X(a)}{F_X(b) - F_X(a)}. \tag{4-24}$$

For $x \ge b$, we have $(X(\xi) \le x) \cap (a < X(\xi) \le b) = (a < X(\xi) \le b)$, so that

$$F_X(x|B) = 1. \tag{4-25}$$

Using (4-23)-(4-25), we get (see Fig. 4.5)

$$f_X(x|B) = \begin{cases} \dfrac{f_X(x)}{F_X(b) - F_X(a)}, & a < x \le b, \\[6pt] 0, & \text{otherwise.} \end{cases} \tag{4-26}$$

Fig. 4.5: $f_X(x)$ and the conditional density $f_X(x|B)$, which restricts $f_X(x)$ to $(a, b]$ and rescales it.
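Equation (4-26) says that conditioning on $\{a < X \le b\}$ simply truncates the density to $(a, b]$ and renormalizes it. A quick numerical check with an exponential density $f_X(x) = e^{-x}$, $x > 0$ (the density and the endpoints $a = 0.5$, $b = 2$ are our choices for illustration) confirms that the conditional density integrates to one:

```python
import math

a, b = 0.5, 2.0
f = lambda x: math.exp(-x)        # f_X(x), x > 0
F = lambda x: 1.0 - math.exp(-x)  # F_X(x)

norm = F(b) - F(a)  # the denominator F_X(b) - F_X(a) of (4-26)

# Trapezoidal integration of the conditional density f(x)/norm over (a, b).
N = 100_000
h = (b - a) / N
total = h * sum(0.5 * (f(a + i * h) + f(a + (i + 1) * h)) / norm for i in range(N))
print(total)  # integrates to 1, as a density must
```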


We can use the conditional p.d.f. together with Bayes' theorem to update our a priori knowledge about the probability of events in the presence of new observations. Ideally, any new information should be used to update our knowledge. As we see in the next example, the conditional p.d.f. together with Bayes' theorem allows systematic updating. For any two events A and B, Bayes' theorem gives

$$P(A|B) = \frac{P(B|A)\, P(A)}{P(B)}. \tag{4-27}$$

Let $B = \{x_1 < X(\xi) \le x_2\}$, so that (4-27) becomes (see (4-13) and (4-17))

$$P\big(A \mid x_1 < X(\xi) \le x_2\big) = \frac{P\big(x_1 < X(\xi) \le x_2 \mid A\big)\, P(A)}{P\big(x_1 < X(\xi) \le x_2\big)} = \frac{F_X(x_2|A) - F_X(x_1|A)}{F_X(x_2) - F_X(x_1)}\, P(A) = \frac{\int_{x_1}^{x_2} f_X(x|A)\, dx}{\int_{x_1}^{x_2} f_X(x)\, dx}\, P(A). \tag{4-28}$$


Further, let $x_1 = x$, $x_2 = x + \varepsilon$, $\varepsilon > 0$, so that in the limit as $\varepsilon \to 0$,

$$\lim_{\varepsilon \to 0} P\big(A \mid x < X(\xi) \le x + \varepsilon\big) = P\big(A \mid X(\xi) = x\big) = \frac{f_X(x|A)}{f_X(x)}\, P(A), \tag{4-29}$$

or

$$f_{X|A}(x|A) = \frac{P(A \mid X = x)\, f_X(x)}{P(A)}. \tag{4-30}$$

From (4-30) we also get

$$P(A) \underbrace{\int_{-\infty}^{+\infty} f_X(x|A)\, dx}_{=\,1} = \int_{-\infty}^{+\infty} P(A \mid X = x)\, f_X(x)\, dx, \tag{4-31}$$

or

$$P(A) = \int_{-\infty}^{+\infty} P(A \mid X = x)\, f_X(x)\, dx, \tag{4-32}$$

and using this in (4-30), we get the desired result

$$f_{X|A}(x|A) = \frac{P(A \mid X = x)\, f_X(x)}{\int_{-\infty}^{+\infty} P(A \mid X = x)\, f_X(x)\, dx}. \tag{4-33}$$


To illustrate the usefulness of this formulation, let us reexamine the coin-tossing problem.

Example 4.7: Let $p = P(H)$ represent the probability of obtaining a head in a toss. For a given coin, a priori p can possess any value in the interval (0, 1). In the absence of any additional information, we may assume the a priori p.d.f. $f_P(p)$ to be a uniform distribution in that interval (Fig. 4.6). Now suppose we actually perform an experiment of tossing the coin n times, and k heads are observed. This is new information. How can we update $f_P(p)$?

Solution: Let A = "k heads in n specific tosses." Since these tosses result in a specific sequence,

$$P(A \mid P = p) = p^k q^{n-k}, \tag{4-34}$$

Fig. 4.6: the uniform a priori p.d.f. $f_P(p)$ on (0, 1).


and using (4-32) we get

$$P(A) = \int_0^1 P(A \mid P = p)\, f_P(p)\, dp = \int_0^1 p^k (1-p)^{n-k}\, dp = \frac{k!\,(n-k)!}{(n+1)!}. \tag{4-35}$$

The a posteriori p.d.f. $f_{P|A}(p|A)$ represents the updated information given the event A, and from (4-30)

$$f_{P|A}(p|A) = \frac{P(A \mid P = p)\, f_P(p)}{P(A)} = \frac{(n+1)!}{k!\,(n-k)!}\, p^k q^{n-k}, \quad 0 < p < 1 \;\sim\; \beta(k, n-k). \tag{4-36}$$

Notice that the a posteriori p.d.f. of p in (4-36) is not a uniform distribution, but a beta distribution (Fig. 4.7). We can use this a posteriori p.d.f. to make further predictions. For example, in the light of the above experiment, what can we say about the probability of a head occurring in the next (n+1)th toss?

Fig. 4.7: the a posteriori p.d.f. $f_{P|A}(p|A)$, peaked near $p = k/n$.


Let B = "head occurring in the (n+1)th toss, given that k heads have occurred in the previous n tosses." Clearly $P(B \mid P = p) = p$, and from (4-32)

$$P(B) = \int_0^1 P(B \mid P = p)\, f_{P|A}(p|A)\, dp. \tag{4-37}$$

Notice that unlike (4-32), we have used the a posteriori p.d.f. in (4-37) to reflect our knowledge about the experiment already performed. Using (4-36) in (4-37), we get

$$P(B) = \int_0^1 p \cdot \frac{(n+1)!}{k!\,(n-k)!}\, p^k q^{n-k}\, dp = \frac{k+1}{n+2}. \tag{4-38}$$

Thus, if n = 10 and k = 6, then

$$P(B) = \frac{7}{12} = 0.58,$$

which is more realistic compared to p = 0.5.
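The predictive value (4-38), known as Laplace's rule of succession, can be verified by integrating the posterior (4-36) numerically; a sketch for n = 10, k = 6:

```python
import math

n, k = 10, 6
coef = math.factorial(n + 1) / (math.factorial(k) * math.factorial(n - k))

def posterior(p):
    # f_{P|A}(p|A) of (4-36): (n+1)!/(k!(n-k)!) * p^k * (1-p)^(n-k)
    return coef * p ** k * (1.0 - p) ** (n - k)

# Midpoint rule for (4-37): integral over (0, 1) of p * posterior(p) dp.
N = 200_000
pb = sum(((i + 0.5) / N) * posterior((i + 0.5) / N) for i in range(N)) / N
print(pb, (k + 1) / (n + 2))  # both approximately 7/12 = 0.5833
```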


To summarize, if the probability of an event X is unknown, one should make a noncommittal judgement about its a priori probability density function $f_X(x)$. Usually the uniform distribution is a reasonable assumption in the absence of any other information. Then experimental results (A) are obtained, and our knowledge about X must be updated to reflect this new information. Bayes' rule helps to obtain the a posteriori p.d.f. $f_{X|A}(x|A)$ of X given A. From that point on, this a posteriori p.d.f. should be used to make further predictions and calculations.


Stirling's Formula: What is it?

Stirling's formula gives an accurate approximation for n! as follows:

$$n! \sim \sqrt{2\pi}\; n^{n + 1/2}\, e^{-n}, \tag{4-39}$$

in the sense that the ratio of the two sides in (4-39) is near to one; i.e., their relative error is small, or the percentage error decreases steadily as n increases. The approximation is remarkably accurate even for small n. Thus 1! = 1 is approximated as $\sqrt{2\pi}\, e^{-1} \approx 0.9221$, and 3! = 6 is approximated as 5.836.

Prior to Stirling's work, De Moivre had established the same formula in (4-39) in connection with binomial distributions in probability theory. However, De Moivre did not establish the constant
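A few lines show how quickly the ratio of the two sides of (4-39) approaches one (the relative error shrinks roughly like 1/12n):

```python
import math

def stirling(n):
    # Right-hand side of (4-39): sqrt(2*pi) * n^(n + 1/2) * e^(-n)
    return math.sqrt(2.0 * math.pi) * n ** (n + 0.5) * math.exp(-n)

for n in (1, 3, 10, 20):
    print(n, stirling(n), stirling(n) / math.factorial(n))
```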


term in (4-39); that was done by James Stirling (~1730).

How to prove it? We start with a simple observation: the function log x is monotone increasing, and hence (comparing the area under log x with the unit-width rectangle of height log k)

$$\int_{k-1}^{k} \log x\, dx < \log k < \int_{k}^{k+1} \log x\, dx.$$

Summing over $k = 1, 2, \ldots, n$, we get

$$\int_0^n \log x\, dx < \log n! < \int_1^{n+1} \log x\, dx,$$

or, using $\int \log x\, dx = x \log x - x$,

$$n \log n - n < \log n! < (n+1) \log(n+1) - n. \tag{4-40}$$


The double inequality in (4-40) clearly suggests that log n! is close to the arithmetic mean of the two extreme numbers there. However, the actual arithmetic mean is complicated and involves several terms. Since $(n + \tfrac12)\log n - n$ is quite close to the above arithmetic mean, we consider the difference¹

$$a_n = \log n! - (n + \tfrac12)\log n + n. \tag{4-41}$$

This gives

$$\begin{aligned} a_n - a_{n+1} &= \log n! - (n + \tfrac12)\log n + n - \log(n+1)! + (n + \tfrac32)\log(n+1) - (n+1) \\ &= -\log(n+1) - (n + \tfrac12)\log n + (n + \tfrac32)\log(n+1) - 1 \\ &= (n + \tfrac12)\log\frac{n+1}{n} - 1 = (n + \tfrac12)\log\left(1 + \frac1n\right) - 1. \end{aligned}$$

¹ According to W. Feller, this clever idea of using the approximate mean $(n + \tfrac12)\log n - n$ is due to H. E. Robbins, and it leads to an elementary proof.

12


Using the expansion²

$$\tfrac12 \log\frac{1+x}{1-x} = x + \frac{x^3}{3} + \frac{x^5}{5} + \cdots$$

with $x = \frac{1}{2n+1}$ (so that $\frac{1+x}{1-x} = \frac{n+1}{n}$), we obtain

$$\begin{aligned} a_n - a_{n+1} &= (n + \tfrac12)\log\frac{n+1}{n} - 1 = \frac{2n+1}{2} \cdot 2\left(\frac{1}{2n+1} + \frac{1}{3(2n+1)^3} + \frac{1}{5(2n+1)^5} + \cdots\right) - 1 \\ &= \frac{1}{3(2n+1)^2} + \frac{1}{5(2n+1)^4} + \frac{1}{7(2n+1)^6} + \cdots > 0. \end{aligned} \tag{4-42}$$

Thus $a_n$ is a monotone decreasing sequence; let c represent its limit, i.e.,

$$\lim_{n \to \infty} a_n = c. \tag{4-43}$$

From (4-41), as $n \to \infty$ this is equivalent to

$$n! \sim e^c\, n^{n + 1/2}\, e^{-n}. \tag{4-44}$$

To find the constant term $e^c$ in (4-44), we can make use of a formula due to Wallis (~1655).

² By the Taylor series expansion $\log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots$, so that $\frac12 \log\frac{1+x}{1-x} = x + \frac{x^3}{3} + \frac{x^5}{5} + \cdots$.


The well-known function $\operatorname{sinc} x = \frac{\sin x}{x}$ goes to zero at $x = \pm n\pi$; moreover, these are the only zeros of this function. Also $\frac{\sin x}{x}$ has no finite poles (all its poles are at infinity). As a result we can write [for a proof of this formula, see Chapter 4 of Dienes, The Taylor Series]

$$\frac{\sin x}{x} = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{2^2\pi^2}\right)\left(1 - \frac{x^2}{3^2\pi^2}\right)\cdots\left(1 - \frac{x^2}{n^2\pi^2}\right)\cdots,$$

which for $x = \pi/2$ gives the Wallis formula

$$\frac{2}{\pi} = \left(1 - \frac{1}{2^2}\right)\left(1 - \frac{1}{4^2}\right)\cdots\left(1 - \frac{1}{(2n)^2}\right)\cdots = \left(\frac{1 \cdot 3}{2^2}\right)\left(\frac{3 \cdot 5}{4^2}\right)\left(\frac{5 \cdot 7}{6^2}\right)\cdots\left(\frac{(2n-1)(2n+1)}{(2n)^2}\right)\cdots.$$
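The Wallis product converges slowly but steadily; a minimal check of its partial products:

```python
import math

def wallis_partial(N):
    # Partial product of (2n-1)(2n+1)/(2n)^2 for n = 1..N; tends to 2/pi from above.
    prod = 1.0
    for n in range(1, N + 1):
        prod *= (2 * n - 1) * (2 * n + 1) / (2 * n) ** 2
    return prod

for N in (10, 100, 10_000):
    print(N, wallis_partial(N), 2.0 / math.pi)
```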


Thus as $n \to \infty$,

$$\frac{\pi}{2} = \lim_{n \to \infty} \frac{2 \cdot 2 \cdot 4 \cdot 4 \cdot 6 \cdot 6 \cdots (2n)(2n)}{1 \cdot 3 \cdot 3 \cdot 5 \cdot 5 \cdot 7 \cdots (2n-1)(2n+1)} = \lim_{n \to \infty} \frac{1}{2n+1}\left(\frac{2 \cdot 4 \cdot 6 \cdots (2n)}{1 \cdot 3 \cdot 5 \cdots (2n-1)}\right)^2,$$

and since $2 \cdot 4 \cdot 6 \cdots (2n) = 2^n n!$ and $1 \cdot 3 \cdot 5 \cdots (2n-1) = \frac{(2n)!}{2^n n!}$, this gives

$$\sqrt{\pi} = \lim_{n \to \infty} \frac{1}{\sqrt{n + \tfrac12}} \cdot \frac{2^{2n}\,(n!)^2}{(2n)!},$$

or

$$\tfrac12 \log \pi = \lim_{n \to \infty}\left[2n \log 2 + 2 \log n! - \log(2n)! - \tfrac12 \log\!\left(n + \tfrac12\right)\right]. \tag{4-45}$$

But from (4-41) and (4-43),

$$\lim_{n \to \infty} \log n! = \lim_{n \to \infty}\left[c + (n + \tfrac12)\log n - n\right], \tag{4-46}$$

and hence letting $n \to \infty$ in (4-45) and making use


of (4-46) we get

$$\tfrac12 \log \pi = c - \tfrac12 \log 2 - \lim_{n \to \infty} \tfrac12 \log\left(1 + \frac{1}{2n}\right) = c - \tfrac12 \log 2,$$

which gives

$$e^c = \sqrt{2\pi}. \tag{4-47}$$

With (4-47) in (4-44) we obtain (4-39), and this proves Stirling's formula.

Upper and Lower Bounds

It is possible to obtain reasonably good upper and lower bounds for n! by elementary reasoning as well. To see this, note that from (4-42) we get

$$a_n - a_{n+1} < \frac13\left(\frac{1}{(2n+1)^2} + \frac{1}{(2n+1)^4} + \frac{1}{(2n+1)^6} + \cdots\right) = \frac13 \cdot \frac{1}{(2n+1)^2 - 1} = \frac{1}{12n(n+1)} = \frac{1}{12n} - \frac{1}{12(n+1)},$$

so that $a_n - \frac{1}{12n}$ is a monotonically increasing


sequence whose limit is also c. Hence for any finite n,

$$a_n - \frac{1}{12n} < c \;\Rightarrow\; a_n < c + \frac{1}{12n},$$

and together with (4-41) and (4-47) this gives

$$n! < \sqrt{2\pi}\; n^{n + 1/2}\, e^{-n + \frac{1}{12n}}. \tag{4-48}$$

Similarly, from (4-42) we also have

$$a_n - a_{n+1} > \frac{1}{3(2n+1)^2} > \frac{1}{12n+1} - \frac{1}{12(n+1)+1} > 0,$$

so that $a_n - \frac{1}{12n+1}$ is a monotone decreasing sequence whose limit also equals c. Hence

$$a_n - \frac{1}{12n+1} > c \;\Rightarrow\; a_n > c + \frac{1}{12n+1},$$

or

$$n! > \sqrt{2\pi}\; n^{n + 1/2}\, e^{-n + \frac{1}{12n+1}}. \tag{4-49}$$

Together with (4-48)-(4-49) we obtain


$$\sqrt{2\pi}\; n^{n + 1/2}\, e^{-n + \frac{1}{12n+1}} < n! < \sqrt{2\pi}\; n^{n + 1/2}\, e^{-n + \frac{1}{12n}}. \tag{4-50}$$

Stirling's formula also follows from the asymptotic expansion for the Gamma function, given by

$$\Gamma(x) = \sqrt{2\pi}\; x^{x - 1/2}\, e^{-x}\left(1 + \frac{1}{12x} + \frac{1}{288 x^2} - \frac{139}{51840 x^3} + O\!\left(\frac{1}{x^4}\right)\right). \tag{4-51}$$

Together with $\Gamma(x+1) = x\,\Gamma(x)$, the above expansion can be used to compute numerical values for real x. For a derivation of (4-51), one may look into Chapter 12 of the classic text by Whittaker and Watson (Modern Analysis).

We can use Stirling's formula to obtain yet another approximation to the binomial probability mass


function. Since

$$\binom{n}{k} p^k q^{n-k} = \frac{n!}{(n-k)!\,k!}\, p^k q^{n-k}, \tag{4-52}$$

using (4-50) on the right side of (4-52) we obtain

$$\binom{n}{k} p^k q^{n-k} > c_1 \sqrt{\frac{n}{2\pi\, k (n-k)}} \left(\frac{np}{k}\right)^k \left(\frac{nq}{n-k}\right)^{n-k}$$

and

$$\binom{n}{k} p^k q^{n-k} < c_2 \sqrt{\frac{n}{2\pi\, k (n-k)}} \left(\frac{np}{k}\right)^k \left(\frac{nq}{n-k}\right)^{n-k},$$

where

$$c_1 = e^{\frac{1}{12n+1} - \frac{1}{12k} - \frac{1}{12(n-k)}} \qquad \text{and} \qquad c_2 = e^{\frac{1}{12n} - \frac{1}{12k+1} - \frac{1}{12(n-k)+1}}.$$

Notice that the constants $c_1$ and $c_2$ are quite close to each other.
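Both the two-sided factorial bound (4-50) and the closeness of $c_1$ and $c_2$ are easy to confirm numerically (the sample values n = 100, k = 30 are our choice):

```python
import math

def lower(n):  # left side of (4-50)
    return math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n + 1 / (12 * n + 1))

def upper(n):  # right side of (4-50)
    return math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n + 1 / (12 * n))

for n in range(1, 21):
    assert lower(n) < math.factorial(n) < upper(n)

# The bracketing constants c1 < c2 for the binomial p.m.f., e.g. n = 100, k = 30.
n, k = 100, 30
c1 = math.exp(1 / (12 * n + 1) - 1 / (12 * k) - 1 / (12 * (n - k)))
c2 = math.exp(1 / (12 * n) - 1 / (12 * k + 1) - 1 / (12 * (n - k) + 1))
print(c1, c2)  # both within a fraction of a percent of 1
```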

PILLAI