
9. Two Functions of Two Random Variables

In the spirit of the previous lecture, let us look at an immediate generalization: Suppose X and Y are two random variables with joint p.d.f $f_{XY}(x,y)$. Given two functions $g(x,y)$ and $h(x,y)$, define the new random variables
$$Z = g(X,Y), \tag{9-1}$$
$$W = h(X,Y). \tag{9-2}$$
How does one determine their joint p.d.f $f_{ZW}(z,w)$? Obviously, with $f_{ZW}(z,w)$ in hand, the marginal p.d.fs $f_Z(z)$ and $f_W(w)$ can be easily determined.


The procedure is the same as that in (8-3). In fact, for given z and w,
$$F_{ZW}(z,w) = P\big(Z(\xi) \le z,\ W(\xi) \le w\big) = P\big(g(X,Y) \le z,\ h(X,Y) \le w\big) = P\big((X,Y) \in D_{z,w}\big) = \iint_{(x,y)\in D_{z,w}} f_{XY}(x,y)\,dx\,dy, \tag{9-3}$$
where $D_{z,w}$ is the region in the xy plane such that the inequalities $g(x,y) \le z$ and $h(x,y) \le w$ are simultaneously satisfied (see Fig. 9.1).

Fig. 9.1: the region $D_{z,w}$ in the xy plane.

We illustrate this technique in the next example.
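Before turning to the example, note that (9-3) lends itself to a quick numerical check: the joint CDF at a point (z, w) is the probability that both inequalities hold, so it can be estimated by sampling (X, Y) and counting. The minimal Python sketch below uses a placeholder transformation (a sum and a product of two uniform r.vs) chosen only for illustration; it is an assumption, not one of the examples worked out in these notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def F_ZW(z, w, g, h, sample_xy, n=200_000):
    """Monte Carlo estimate of F_ZW(z, w) = P(g(X,Y) <= z, h(X,Y) <= w), as in (9-3)."""
    x, y = sample_xy(n)
    return np.mean((g(x, y) <= z) & (h(x, y) <= w))

# Placeholder choice: Z = X + Y, W = X * Y, with X and Y independent U(0, 1).
sample_xy = lambda n: (rng.uniform(0, 1, n), rng.uniform(0, 1, n))
print(F_ZW(1.0, 0.25, np.add, np.multiply, sample_xy))
```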


Example 9.1: Suppose X and Y are independent random variables, uniformly distributed in the interval $(0,\theta)$. Define $Z = \min(X,Y)$, $W = \max(X,Y)$. Determine $f_{ZW}(z,w)$.

Solution: Obviously both w and z vary in the interval $(0,\theta)$. Thus
$$F_{ZW}(z,w) = 0 \quad \text{if } z < 0 \text{ or } w < 0, \tag{9-4}$$
$$F_{ZW}(z,w) = P(Z \le z,\ W \le w) = P\big(\min(X,Y) \le z,\ \max(X,Y) \le w\big). \tag{9-5}$$
We must consider two cases, $w \ge z$ and $w < z$, since they give rise to different regions for $D_{z,w}$ (see Figs. 9.2(a)-(b)).

Fig. 9.2: the region $D_{z,w}$, bounded by the points $(z,z)$ and $(w,w)$, for (a) $w \ge z$ and (b) $w < z$.


For $w \ge z$, from Fig. 9.2(a), the region $D_{z,w}$ is represented by the doubly shaded area. Thus
$$F_{ZW}(z,w) = F_{XY}(z,w) + F_{XY}(w,z) - F_{XY}(z,z), \quad w \ge z, \tag{9-6}$$
and for $w < z$, from Fig. 9.2(b), we obtain
$$F_{ZW}(z,w) = F_{XY}(w,w), \quad w < z. \tag{9-7}$$
With
$$F_{XY}(x,y) = F_X(x)\,F_Y(y) = \frac{xy}{\theta^2}, \tag{9-8}$$
we obtain
$$F_{ZW}(z,w) = \begin{cases} (2wz - z^2)/\theta^2, & 0 < z < w < \theta, \\ w^2/\theta^2, & 0 < w < z < \theta. \end{cases} \tag{9-9}$$
Thus, differentiating (9-9) with respect to z and w,
$$f_{ZW}(z,w) = \begin{cases} 2/\theta^2, & 0 < z < w < \theta, \\ 0, & \text{otherwise}. \end{cases} \tag{9-10}$$
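As a quick sanity check on (9-9), the following sketch compares a Monte Carlo estimate of $F_{ZW}(z,w)$ for the min/max pair with the closed form; the choice $\theta = 1$ and the two test points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.0
x = rng.uniform(0, theta, 500_000)
y = rng.uniform(0, theta, 500_000)
zs, ws = np.minimum(x, y), np.maximum(x, y)      # Z = min(X,Y), W = max(X,Y)

def F_closed(z, w):
    # Equation (9-9): (2wz - z^2)/theta^2 for z < w, and w^2/theta^2 for w < z
    return (2*w*z - z**2) / theta**2 if z < w else w**2 / theta**2

for z, w in [(0.3, 0.7), (0.8, 0.4)]:
    est = np.mean((zs <= z) & (ws <= w))
    print(z, w, est, F_closed(z, w))             # the two values should agree closely
```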


From (9-10), we also obtain
$$f_Z(z) = \int_z^\theta f_{ZW}(z,w)\,dw = \frac{2}{\theta}\left(1 - \frac{z}{\theta}\right), \quad 0 < z < \theta, \tag{9-11}$$
and
$$f_W(w) = \int_0^w f_{ZW}(z,w)\,dz = \frac{2w}{\theta^2}, \quad 0 < w < \theta. \tag{9-12}$$
If $g(x,y)$ and $h(x,y)$ are continuous and differentiable functions, then, as in the case of one random variable (see (5-30)), it is possible to develop a formula to obtain the joint p.d.f $f_{ZW}(z,w)$ directly. Towards this, consider the equations
$$g(x,y) = z, \quad h(x,y) = w. \tag{9-13}$$
For a given point (z,w), equation (9-13) can have many solutions. Let us say
$$(x_1,y_1),\ (x_2,y_2),\ \ldots,\ (x_n,y_n)$$


represent these multiple solutions such that (see Fig. 9.3)
$$g(x_i,y_i) = z, \quad h(x_i,y_i) = w. \tag{9-14}$$

Fig. 9.3: (a) the point (z,w) and the rectangle with sides $\Delta z$ and $\Delta w$ in the zw plane; (b) its preimages $(x_1,y_1), (x_2,y_2), \ldots, (x_i,y_i), \ldots, (x_n,y_n)$ with the corresponding regions $\Delta_1, \Delta_2, \ldots, \Delta_i, \ldots, \Delta_n$ in the xy plane.

Consider the problem of evaluating the probability
$$P(z < Z \le z+\Delta z,\ w < W \le w+\Delta w) = P\big(z < g(X,Y) \le z+\Delta z,\ w < h(X,Y) \le w+\Delta w\big). \tag{9-15}$$


Using (7-9) we can rewrite (9-15) as
$$P(z < Z \le z+\Delta z,\ w < W \le w+\Delta w) = f_{ZW}(z,w)\,\Delta z\,\Delta w. \tag{9-16}$$
But to translate this probability in terms of $f_{XY}(x,y)$, we need to evaluate the equivalent region for $\Delta z\,\Delta w$ in the xy plane. Towards this, referring to Fig. 9.4, we observe that the point A with coordinates (z,w) gets mapped onto the point $A'$ with coordinates $(x_i,y_i)$ (as well as to other points, as in Fig. 9.3(b)). As z changes to $z+\Delta z$ to point B in Fig. 9.4(a), let $B'$ represent its image in the xy plane. Similarly, as w changes to $w+\Delta w$ to C, let $C'$ represent its image in the xy plane.

Fig. 9.4: (a) the rectangle ABCD with sides $\Delta z$ and $\Delta w$ in the zw plane; (b) its image, the parallelogram $A'B'C'D'$ with area $\Delta_i$ and angles $\theta$, $\varphi$, in the xy plane.


Finally, D goes to $D'$, and $A'B'C'D'$ represents the equivalent parallelogram in the xy plane with area $\Delta_i$. Referring back to Fig. 9.3, the probability in (9-16) can be alternatively expressed as
$$\sum_i P\big((X,Y) \in \Delta_i\big) = \sum_i f_{XY}(x_i,y_i)\,\Delta_i. \tag{9-17}$$
Equating (9-16) and (9-17), we obtain
$$f_{ZW}(z,w) = \sum_i f_{XY}(x_i,y_i)\,\frac{\Delta_i}{\Delta z\,\Delta w}. \tag{9-18}$$
To simplify (9-18), we need to evaluate the area $\Delta_i$ of the parallelograms in Fig. 9.3(b) in terms of $\Delta z\,\Delta w$. Towards this, let $g_1$ and $h_1$ denote the inverse transformation in (9-14), so that
$$x_i = g_1(z,w), \quad y_i = h_1(z,w). \tag{9-19}$$


As the point (z,w) goes to the point $(x_i,y_i) \equiv A'$, the point $(z+\Delta z, w) \to B'$, the point $(z, w+\Delta w) \to C'$, and the point $(z+\Delta z, w+\Delta w) \to D'$. Hence the respective x and y coordinates of $B'$ are given by
$$g_1(z+\Delta z, w) = g_1(z,w) + \frac{\partial g_1}{\partial z}\,\Delta z = x_i + \frac{\partial g_1}{\partial z}\,\Delta z \tag{9-20}$$
and
$$h_1(z+\Delta z, w) = h_1(z,w) + \frac{\partial h_1}{\partial z}\,\Delta z = y_i + \frac{\partial h_1}{\partial z}\,\Delta z. \tag{9-21}$$
Similarly, those of $C'$ are given by
$$x_i + \frac{\partial g_1}{\partial w}\,\Delta w, \qquad y_i + \frac{\partial h_1}{\partial w}\,\Delta w. \tag{9-22}$$
The area of the parallelogram $A'B'C'D'$ in Fig. 9.4(b) is given by
$$\Delta_i = (A'B')(A'C')\sin(\varphi - \theta) = (A'B'\cos\theta)(A'C'\sin\varphi) - (A'B'\sin\theta)(A'C'\cos\varphi). \tag{9-23}$$


But from Fig. 9.4(b) and (9-20) - (9-22),
$$A'B'\cos\theta = \frac{\partial g_1}{\partial z}\,\Delta z, \qquad A'B'\sin\theta = \frac{\partial h_1}{\partial z}\,\Delta z, \tag{9-24}$$
$$A'C'\cos\varphi = \frac{\partial g_1}{\partial w}\,\Delta w, \qquad A'C'\sin\varphi = \frac{\partial h_1}{\partial w}\,\Delta w, \tag{9-25}$$
so that
$$\Delta_i = \left(\frac{\partial g_1}{\partial z}\frac{\partial h_1}{\partial w} - \frac{\partial g_1}{\partial w}\frac{\partial h_1}{\partial z}\right)\Delta z\,\Delta w, \tag{9-26}$$
and
$$\frac{\Delta_i}{\Delta z\,\Delta w} = \frac{\partial g_1}{\partial z}\frac{\partial h_1}{\partial w} - \frac{\partial g_1}{\partial w}\frac{\partial h_1}{\partial z} = \det\begin{pmatrix} \dfrac{\partial g_1}{\partial z} & \dfrac{\partial g_1}{\partial w} \\[1mm] \dfrac{\partial h_1}{\partial z} & \dfrac{\partial h_1}{\partial w} \end{pmatrix}. \tag{9-27}$$
The right side of (9-27) represents the Jacobian $J(z,w)$ of the transformation in (9-19). Thus
$$J(z,w) = \det\begin{pmatrix} \dfrac{\partial g_1}{\partial z} & \dfrac{\partial g_1}{\partial w} \\[1mm] \dfrac{\partial h_1}{\partial z} & \dfrac{\partial h_1}{\partial w} \end{pmatrix}. \tag{9-28}$$

Substituting (9-27) - (9-28) into (9-18), we get
$$f_{ZW}(z,w) = \sum_i |J(z,w)|\, f_{XY}(x_i,y_i) = \sum_i \frac{1}{|J(x_i,y_i)|}\, f_{XY}(x_i,y_i), \tag{9-29}$$
since
$$|J(z,w)| = \frac{1}{|J(x_i,y_i)|}, \tag{9-30}$$
where $J(x_i,y_i)$ represents the Jacobian of the original transformation in (9-13), given by
$$J(x_i,y_i) = \det\begin{pmatrix} \dfrac{\partial g}{\partial x} & \dfrac{\partial g}{\partial y} \\[1mm] \dfrac{\partial h}{\partial x} & \dfrac{\partial h}{\partial y} \end{pmatrix}\Bigg|_{x = x_i,\ y = y_i}. \tag{9-31}$$
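The content of (9-29) - (9-31) can also be written as a short numerical routine. The helper below is a sketch, not part of the derivation: it assumes the user supplies the list of solution pairs $(x_i,y_i)$ for a given (z,w) together with the forward Jacobian $J(x,y)$ of (9-31), and it sums $f_{XY}(x_i,y_i)/|J(x_i,y_i)|$ as in (9-29). It is checked on the one-to-one linear map Z = X + Y, W = X - Y with X and Y independent N(0,1), for which Z and W are known to be independent N(0,2) r.vs.

```python
import numpy as np

def f_zw(z, w, inverse_solutions, jac_xy, f_xy):
    """Evaluate f_ZW(z, w) via (9-29): sum of f_XY(x_i, y_i) / |J(x_i, y_i)|."""
    return sum(f_xy(x, y) / abs(jac_xy(x, y)) for x, y in inverse_solutions(z, w))

# Illustrative case: Z = X + Y, W = X - Y, with X, Y independent N(0,1).
# Inverse: x = (z + w)/2, y = (z - w)/2; forward Jacobian J(x,y) = -2.
inv  = lambda z, w: [((z + w) / 2, (z - w) / 2)]
jac  = lambda x, y: -2.0
f_xy = lambda x, y: np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)

# Closed form for comparison: Z and W are independent N(0, 2).
f_ref = lambda z, w: np.exp(-(z**2 + w**2) / 4) / (4 * np.pi)

z, w = 0.7, -1.2
print(f_zw(z, w, inv, jac, f_xy), f_ref(z, w))   # the two numbers should match
```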


Next we shall illustrate the usefulness of the formula in (9-29) through various examples.

Example 9.2: Suppose X and Y are zero mean independent Gaussian r.vs with common variance $\sigma^2$. Define $Z = \sqrt{X^2 + Y^2}$, $W = \tan^{-1}(Y/X)$, where $|W| \le \pi/2$. Obtain $f_{ZW}(z,w)$.

Solution: Here
$$f_{XY}(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2+y^2)/2\sigma^2}. \tag{9-32}$$
Since
$$z = g(x,y) = \sqrt{x^2+y^2}\,; \quad w = h(x,y) = \tan^{-1}(y/x), \quad |w| \le \pi/2, \tag{9-33}$$
if $(x_1,y_1)$ is a solution pair, so is $(-x_1,-y_1)$. From (9-33),
$$\frac{y}{x} = \tan w, \quad \text{or} \quad y = x\tan w. \tag{9-34}$$


Substituting this into z, we get
$$z = \sqrt{x^2 + y^2} = x\sqrt{1 + \tan^2 w} = x\sec w, \quad \text{or} \quad x = z\cos w, \tag{9-35}$$
and
$$y = x\tan w = z\sin w. \tag{9-36}$$
Thus there are two solution sets
$$x_1 = z\cos w, \quad y_1 = z\sin w, \qquad x_2 = -z\cos w, \quad y_2 = -z\sin w. \tag{9-37}$$
We can use (9-35) - (9-37) to obtain $J(z,w)$. From (9-28),
$$J(z,w) = \det\begin{pmatrix} \dfrac{\partial x}{\partial z} & \dfrac{\partial x}{\partial w} \\[1mm] \dfrac{\partial y}{\partial z} & \dfrac{\partial y}{\partial w} \end{pmatrix} = \det\begin{pmatrix} \cos w & -z\sin w \\ \sin w & z\cos w \end{pmatrix} = z, \tag{9-38}$$
so that
$$|J(z,w)| = z. \tag{9-39}$$


We can also compute $J(x,y)$ using (9-31). From (9-33),
$$J(x,y) = \det\begin{pmatrix} \dfrac{x}{\sqrt{x^2+y^2}} & \dfrac{y}{\sqrt{x^2+y^2}} \\[1mm] \dfrac{-y}{x^2+y^2} & \dfrac{x}{x^2+y^2} \end{pmatrix} = \frac{1}{\sqrt{x^2+y^2}} = \frac{1}{z}. \tag{9-40}$$
Notice that $|J(z,w)| = 1/|J(x_i,y_i)|$, agreeing with (9-30). Substituting (9-37) and (9-39) or (9-40) into (9-29), we get
$$f_{ZW}(z,w) = z\big(f_{XY}(x_1,y_1) + f_{XY}(x_2,y_2)\big) = \frac{z}{\pi\sigma^2}\, e^{-z^2/2\sigma^2}, \quad 0 < z < \infty, \; |w| < \frac{\pi}{2}. \tag{9-41}$$
Thus
$$f_Z(z) = \int_{-\pi/2}^{\pi/2} f_{ZW}(z,w)\,dw = \frac{z}{\sigma^2}\, e^{-z^2/2\sigma^2}, \quad 0 < z < \infty, \tag{9-42}$$
which represents a Rayleigh r.v with parameter $\sigma^2$, and
$$f_W(w) = \int_0^\infty f_{ZW}(z,w)\,dz = \frac{1}{\pi}, \quad |w| < \frac{\pi}{2}, \tag{9-43}$$


which represents a uniform r.v in the interval $(-\pi/2, \pi/2)$. Moreover, by direct computation
$$f_{ZW}(z,w) = f_Z(z)\cdot f_W(w), \tag{9-44}$$
implying that Z and W are independent. We summarize these results in the following statement: If X and Y are zero mean independent Gaussian random variables with common variance, then $\sqrt{X^2+Y^2}$ has a Rayleigh distribution and $\tan^{-1}(Y/X)$ has a uniform distribution. Moreover, these two derived r.vs are statistically independent. Alternatively, with X and Y as independent zero mean r.vs as in (9-32), X + jY represents a complex Gaussian r.v. But
$$X + jY = Z\,e^{jW}, \tag{9-45}$$
where Z and W are as in (9-33), except that for (9-45) to hold good on the entire complex plane we must have $-\pi < W < \pi$, and hence it follows that the magnitude and phase of


a complex Gaussian r.v are independent, with Rayleigh and uniform $U(-\pi,\pi)$ distributions, respectively. The statistical independence of these derived r.vs is an interesting observation.
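A short simulation makes this summary concrete. The sketch below (with the arbitrary choice $\sigma = 1$) checks the magnitude of X + jY against the Rayleigh mean, the phase against the variance of a uniform r.v on $(-\pi,\pi)$, and the correlation between the two.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0
x = rng.normal(0, sigma, 1_000_000)
y = rng.normal(0, sigma, 1_000_000)

r   = np.hypot(x, y)      # magnitude: Rayleigh with parameter sigma^2
phi = np.arctan2(y, x)    # phase: uniform on (-pi, pi)

print(r.mean(), sigma * np.sqrt(np.pi / 2))   # Rayleigh mean
print(phi.var(), np.pi**2 / 3)                # variance of a uniform r.v on (-pi, pi)
print(np.corrcoef(r, phi)[0, 1])              # near 0, consistent with independence
```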

Example 9.3: Let X and Y be independent exponential random variables with common parameter λ. Define U = X + Y, V = X - Y. Find the joint and marginal p.d.fs of U and V.

Solution: It is given that
$$f_{XY}(x,y) = \frac{1}{\lambda^2}\, e^{-(x+y)/\lambda}, \quad x > 0, \; y > 0. \tag{9-46}$$
Now since u = x + y and v = x - y, always $|v| < u$, and there is only one solution, given by
$$x = \frac{u+v}{2}, \quad y = \frac{u-v}{2}. \tag{9-47}$$
Moreover, the Jacobian of the transformation is given by


$$J(x,y) = \det\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = -2,$$
and hence
$$f_{UV}(u,v) = \frac{1}{2\lambda^2}\, e^{-u/\lambda}, \quad 0 < |v| < u < \infty, \tag{9-48}$$
represents the joint p.d.f of U and V. This gives
$$f_U(u) = \int_{-u}^{u} f_{UV}(u,v)\,dv = \frac{1}{2\lambda^2}\, e^{-u/\lambda}\int_{-u}^{u} dv = \frac{u}{\lambda^2}\, e^{-u/\lambda}, \quad 0 < u < \infty, \tag{9-49}$$
and
$$f_V(v) = \int_{|v|}^{\infty} f_{UV}(u,v)\,du = \frac{1}{2\lambda^2}\int_{|v|}^{\infty} e^{-u/\lambda}\,du = \frac{1}{2\lambda}\, e^{-|v|/\lambda}, \quad -\infty < v < \infty. \tag{9-50}$$
Notice that in this case the r.vs U and V are not independent.

As we show below, the general transformation formula in (9-29), making use of two functions, can be made useful even when only one function is specified.
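Before moving on, the marginals (9-49) and (9-50) are easy to check by simulation; the sketch below (with the arbitrary choice $\lambda = 2$) compares sample moments of U and V with the values implied by the gamma-type density in (9-49) and the Laplace density in (9-50), and confirms that U and V are dependent.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0
x = rng.exponential(lam, 1_000_000)
y = rng.exponential(lam, 1_000_000)
u, v = x + y, x - y

print(u.mean(), 2 * lam)                 # (9-49) has mean 2*lambda
print(u.var(),  2 * lam**2)              # ... and variance 2*lambda^2
print(v.var(),  2 * lam**2)              # (9-50) is Laplace with variance 2*lambda^2
print(np.corrcoef(u, np.abs(v))[0, 1])   # clearly nonzero: U and V are not independent
```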


Auxiliary Variables:

Suppose
$$Z = g(X,Y), \tag{9-51}$$
where X and Y are two random variables. To determine $f_Z(z)$ by making use of the above formulation in (9-29), we can define an auxiliary variable
$$W = X \quad \text{or} \quad W = Y, \tag{9-52}$$
and the p.d.f of Z can be obtained from $f_{ZW}(z,w)$ by proper integration.

Example 9.4: Suppose Z = X + Y and let W = Y, so that the transformation is one-to-one and the solution is given by
$$y_1 = w, \quad x_1 = z - w.$$


The Jacobian of the transformation is given by
$$J(x,y) = \det\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = 1,$$
and hence
$$f_{ZW}(z,w) = f_{XY}(x_1,y_1) = f_{XY}(z-w,\, w),$$
or
$$f_Z(z) = \int_{-\infty}^{+\infty} f_{ZW}(z,w)\,dw = \int_{-\infty}^{+\infty} f_{XY}(z-w,\, w)\,dw, \tag{9-53}$$
which agrees with (8.7). Note that (9-53) reduces to the convolution of $f_X(z)$ and $f_Y(z)$ if X and Y are independent random variables, as the sketch below illustrates.
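This is a minimal numerical illustration of (9-53), not part of the derivation: it computes the density of Z = X + Y for two independent exponential r.vs by discrete convolution on a grid and compares it with the closed form $z\,e^{-z/\lambda}/\lambda^2$ already obtained in (9-49); the grid spacing and $\lambda = 1$ are arbitrary choices.

```python
import numpy as np

lam, dz = 1.0, 0.01
t = np.arange(0, 20, dz)
f_x = np.exp(-t / lam) / lam             # exponential density sampled on the grid
f_z = np.convolve(f_x, f_x) * dz         # discrete version of (9-53) for independent X, Y
f_z = f_z[: t.size]                      # keep the same support grid

closed = t * np.exp(-t / lam) / lam**2   # closed form, cf. (9-49)
print(np.max(np.abs(f_z - closed)))      # small discretization error
```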

Next, we consider a less trivial example.

Example 9.5: Let $X \sim U(0,1)$ and $Y \sim U(0,1)$ be independent. Define
$$Z = \sqrt{-2\ln X}\,\cos(2\pi Y). \tag{9-54}$$


Find the density function of Z.

Solution: We can make use of the auxiliary variable W = Y in this case. This gives the only solution to be
$$x_1 = e^{-z^2\sec^2(2\pi w)/2}, \tag{9-55}$$
$$y_1 = w, \tag{9-56}$$
and using (9-28),
$$J(z,w) = \det\begin{pmatrix} \dfrac{\partial x_1}{\partial z} & \dfrac{\partial x_1}{\partial w} \\[1mm] \dfrac{\partial y_1}{\partial z} & \dfrac{\partial y_1}{\partial w} \end{pmatrix} = \det\begin{pmatrix} \dfrac{\partial x_1}{\partial z} & \dfrac{\partial x_1}{\partial w} \\[1mm] 0 & 1 \end{pmatrix} = \frac{\partial x_1}{\partial z} = -z\sec^2(2\pi w)\, e^{-z^2\sec^2(2\pi w)/2}. \tag{9-57}$$
Substituting (9-55) - (9-57) into (9-29), and noting that $f_{XY}(x_1,y_1) = 1$ on the unit square, we obtain
$$f_{ZW}(z,w) = z\sec^2(2\pi w)\, e^{-z^2\sec^2(2\pi w)/2}, \quad -\infty < z < +\infty, \; 0 < w < 1, \tag{9-58}$$


and
$$f_Z(z) = \int_0^1 f_{ZW}(z,w)\,dw = z\, e^{-z^2/2}\int_0^1 \sec^2(2\pi w)\, e^{-z^2\tan^2(2\pi w)/2}\,dw. \tag{9-59}$$
Let $u = z\tan(2\pi w)$, so that $du = 2\pi z\sec^2(2\pi w)\,dw$. Notice that as w varies from 0 to 1, u varies from $-\infty$ to $+\infty$. Using this in (9-59), we get
$$f_Z(z) = \frac{1}{2\pi}\, e^{-z^2/2}\int_{-\infty}^{+\infty} e^{-u^2/2}\,du = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}, \quad -\infty < z < \infty, \tag{9-60}$$
which represents a zero mean Gaussian r.v with unit variance. Thus $Z \sim N(0,1)$. Equation (9-54) can be used as a practical procedure to generate Gaussian random variables from two independent uniformly distributed random sequences.
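Equation (9-54) is one half of the Box-Muller construction (the companion variable $\sqrt{-2\ln X}\,\sin(2\pi Y)$ supplies a second, independent N(0,1) sample). A minimal sketch of the procedure, with a quick check of the sample mean and variance, might look as follows.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
x = rng.uniform(0, 1, n)   # X ~ U(0,1); a draw of exactly 0 is practically never seen
y = rng.uniform(0, 1, n)   # Y ~ U(0,1)

z = np.sqrt(-2 * np.log(x)) * np.cos(2 * np.pi * y)   # equation (9-54)

print(z.mean(), z.var())   # close to 0 and 1, as predicted by (9-60)
```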
