Bocconi University
PhD in Economics and Finance
2010-2011
Probability
Sandra Fortini, Caterina May
SOLUTIONS
1)
a) $A_n$ is the event that the infinite sequence of heads (H) and tails (T) has a tail at the $(n-1)$-th toss, $n$ heads from the $n$-th toss to the $(2n-1)$-th, and again a tail at the $2n$-th toss. If $P(T) = P(H) = 1/2$, then $P(A_1) = (1/2)^2$ and, for any $n \ge 2$, $P(A_n) = (1/2)^{n+2}$.
b) The event $A_n$ has an H at the $n$-th toss, while $A_{n+1}$ has a T at the $n$-th toss. Hence $A_n$ and $A_{n+1}$ are disjoint, and it follows that the sequence $A_n$ can't be increasing ($A_n \subseteq A_{n+1}$ fails).
c) For the same reason as in (b), $A_n$ is not decreasing ($A_n \supseteq A_{n+1}$ fails).
d) Since $A_n$ and $A_{n+1}$ are disjoint (for any $n$), then:
\[ \liminf_n A_n = (A_n \text{ ult.}) = \bigcup_m \bigcap_{n \ge m} A_n = \emptyset. \]
On the other side, $\limsup_n A_n = (A_n \text{ i.o.}) = \bigcap_m \bigcup_{n \ge m} A_n$ is non-empty; for instance, the sequence having a T at the $(2^{2n}-1)$-th toss, all H from the $2^{2n}$-th toss to the $(2^{2n+1}-1)$-th toss, and a T at the $2^{2n+1}$-th toss, for every $n$, is contained in $\limsup_n A_n$. It follows that $\liminf_n A_n \neq \limsup_n A_n$, and hence $A_n$ is not convergent.
e) Since $\sum_n P(A_n) < \infty$, then, from the Borel–Cantelli lemma, $P(A_n \text{ i.o.}) = 0$.
f) $P(A_n \text{ ult.}) = 0$, because $(A_n \text{ ult.}) = \emptyset$.
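As a numerical sanity check of (a), the following sketch (assuming 1-based toss indices, so that $A_2$ is determined by the first four tosses: T, H, H, T) estimates $P(A_2) = (1/2)^4$ by Monte Carlo.

```python
import numpy as np

# Monte Carlo check of P(A_n) = (1/2)^(n+2) for n = 2: A_2 requires
# T at toss 1, H at tosses 2 and 3, and T at toss 4.
rng = np.random.default_rng(0)
n_sim = 10**6
tosses = rng.integers(0, 2, size=(n_sim, 4))  # 1 = head, 0 = tail
in_A2 = (tosses[:, 0] == 0) & (tosses[:, 1] == 1) & (tosses[:, 2] == 1) & (tosses[:, 3] == 0)
print(in_A2.mean(), (1/2)**4)  # both close to 0.0625
```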
2)
a) $T_n$ is not measurable with respect to $\mathcal{G}_n$; for instance, the event $\{T_1 = 3\}$ is the event $\{X_1 \neq 6, X_2 \neq 6, X_3 = 6\}$, and then it is not contained in $\sigma(X_1)$.
b) By definition, $T_j$ is a stopping time with respect to $\mathcal{G}_n$ if and only if $\{T_j = n\} \in \mathcal{G}_n$ for any $n$. We prove it by induction on $j$. For $j = 1$ this is true; in fact $\{T_1 = 1\} = \{X_1 = 6\} \in \mathcal{G}_1$ and, for any $n > 1$,
\[ \{T_1 = n\} = \{X_1 \neq 6, \dots, X_{n-1} \neq 6, X_n = 6\} \in \mathcal{G}_n. \]
Suppose that $\{T_j = n\} \in \mathcal{G}_n$ for a fixed $j$ and any $n$, and let us prove that $\{T_{j+1} = n\} \in \mathcal{G}_n$:
\[ \{T_{j+1} = n\} = \bigcup_{i=1,\dots,n-1} \{T_j = i, X_{i+1} \neq 6, \dots, X_{n-1} \neq 6, X_n = 6\} \in \mathcal{G}_n. \]
c) $T_{X_1}$ is not $\mathcal{G}_1$-measurable; for instance, if $X_1 = 1$ then $T_{X_1}$ can assume all values $1, 2, 3, \dots$, and this implies that there does not exist any measurable function $g$ such that $T_{X_1} = g(X_1)$.
d) $T_{X_1}$ is a stopping time with respect to $\mathcal{G}_n$; in fact
\[ \{T_{X_1} = n\} = \bigcup_{j=1,\dots,6} \{X_1 = j, T_j = n\}, \]
and this belongs to $\mathcal{G}_n$ from (b).
e) We can write $Y = \sum_{i=1}^{T_2} X_i$. $Y$ is not $\mathcal{F}_{T_2}$-measurable because, for instance, if $T_2 = 3$ then $Y$ can assume all values between 13 and 17; this implies that there is no measurable function $f$ such that $Y = f(T_2)$.
f) $Y$ is measurable with respect to $\mathcal{G}_{T_2}$, since we can show that the event $\{Y = y\} \in \mathcal{G}_{T_2}$ for any $y$. Let us write:
\[ \{Y = y\} = \bigcup_m \{Y = y, T_2 = m\} = \bigcup_m \Big\{\sum_{i=1}^m X_i = y, \, T_2 = m\Big\}; \]
we have that each event $A_m = \{\sum_{i=1}^m X_i = y, T_2 = m\}$ of the countable union belongs to $\mathcal{G}_{T_2}$; in fact, $A_m \cap \{T_2 = n\}$ is the null set ($\in \mathcal{G}_n$) when $n \neq m$, and $A_m \cap \{T_2 = n\}$ is the event $\{\sum_{i=1}^n X_i = y, T_2 = n\} \in \mathcal{G}_n$ when $n = m$.
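To make (e) concrete, a small simulation sketch (assuming fair six-sided dice) collects the values of $Y = \sum_{i=1}^{T_2} X_i$ on the event $\{T_2 = 3\}$; the observed set is $\{13, \dots, 17\}$, so $Y$ cannot be a function of $T_2$ alone.

```python
import numpy as np

# On {T_2 = 3} (second six exactly at toss 3), record Y = X_1 + X_2 + X_3.
rng = np.random.default_rng(0)
values = set()
for _ in range(10**5):
    rolls = rng.integers(1, 7, size=3)
    if rolls[2] == 6 and np.sum(rolls == 6) == 2:
        values.add(int(rolls.sum()))
print(sorted(values))  # [13, 14, 15, 16, 17]
```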
3)
a) $\mathcal{F}_{Y_1} \not\subseteq \mathcal{F}_Z$ and $\mathcal{F}_Z \not\subseteq \mathcal{F}_{Y_1}$. In fact: suppose, for instance, that $X_1, X_2, X_3$ take values in $\{-1, 0, 1\}$; in this case, if $Z = 1$ then $Y_1$ can be either $-1$ or $1$. On the other side, if $Y_1 = 1$ then $Z$ can be $-1$, $1$ or $0$. This implies that there is no function $g$ such that $Y_1 = g(Z)$ and no function $h$ such that $Z = h(Y_1)$.
b) $\mathcal{F}_{Y_1,Y_2,Y_3} \subseteq \mathcal{F}_{X_1,X_2,X_3}$, since $(Y_1, Y_2, Y_3) = (X_1X_2, X_2X_3, X_3X_1)$, which is a measurable function of $(X_1, X_2, X_3)$. $\mathcal{F}_{X_1,X_2,X_3} \not\subseteq \mathcal{F}_{Y_1,Y_2,Y_3}$; in fact, as in (a), if for instance $X_1, X_2, X_3$ take values in $\{-1, 0, 1\}$, then when $(Y_1, Y_2, Y_3) = (1, 1, 1)$ we can have $(X_1, X_2, X_3)$ equal to $(1, 1, 1)$ or $(-1, -1, -1)$.
c) $\mathcal{F}_{Y_1,Y_2,Z} \subseteq \mathcal{F}_{X_1,X_2,X_3}$, since $(Y_1, Y_2, Z) = (X_1X_2, X_2X_3, X_1X_2X_3)$, which is a measurable function of $(X_1, X_2, X_3)$. Also $\mathcal{F}_{X_1,X_2,X_3} \subseteq \mathcal{F}_{Y_1,Y_2,Z}$, since $(X_1, X_2, X_3) = (Z/Y_2, Y_1Y_2/Z, Z/Y_1)$, which is a measurable function of $(Y_1, Y_2, Z)$.
d) $\mathcal{F}_{W,Z} \subseteq \mathcal{F}_{X_1,X_2,X_3}$, since $(W, Z) = (X_1+X_2+X_3, X_1X_2X_3)$, which is a measurable function of $(X_1, X_2, X_3)$. $\mathcal{F}_{X_1,X_2,X_3} \not\subseteq \mathcal{F}_{W,Z}$; in fact, if for instance $X_1, X_2, X_3$ take values in $\{-1, 0, 1\}$, then when $(W, Z) = (1, -1)$ we can have $(X_1, X_2, X_3)$ equal to $(1, 1, -1)$ or $(1, -1, 1)$, etc.
e) $\mathcal{F}_{X_1,W,Z} \subseteq \mathcal{F}_{X_1,X_2,X_3}$, since $(X_1, W, Z) = (X_1, X_1+X_2+X_3, X_1X_2X_3)$, which is a measurable function of $(X_1, X_2, X_3)$. $\mathcal{F}_{X_1,X_2,X_3} \not\subseteq \mathcal{F}_{X_1,W,Z}$; in fact, if for instance $X_1, X_2, X_3$ take values in $\{-1, 0, 1\}$, then when $(X_1, W, Z) = (1, 1, -1)$ we can have $(X_1, X_2, X_3)$ equal to $(1, 1, -1)$ or $(1, -1, 1)$.
4)
a) The moment generating function of $X$ is
\[ M_X(s) = E(e^{sX}) = \sum_{k=0}^{\infty} e^{sk}\frac{\lambda^k e^{-\lambda}}{k!} = e^{-\lambda}\sum_{k=0}^{\infty}\frac{(\lambda e^s)^k}{k!}; \]
by setting $u = \lambda e^s$, we have:
\[ \sum_{k=0}^{\infty}\frac{(\lambda e^s)^k}{k!} = \sum_{k=0}^{\infty}\frac{u^k}{k!} = e^{u} = e^{\lambda e^s}; \]
we can conclude that $M_X(s) = e^{-\lambda}e^{\lambda e^s} = e^{\lambda(e^s-1)}$.
b) Since $M_X^{(k)}(0) = E(X^k)$, in particular we have $E(X^2) = M_X^{(2)}(0)$. Since, calculating from (a),
\[ M_X^{(2)}(s) = \lambda e^{\lambda(e^s-1)}e^s(1 + \lambda e^s), \]
we obtain $M_X^{(2)}(0) = \lambda(1+\lambda)$, and then $E(X^2) = \lambda(1+\lambda)$.
c) By definition, $M_{(X,Y)}(s, t) = E(e^{sX+tY})$; since $X$ and $Y$ are independent, $E(e^{sX+tY}) = E(e^{sX})E(e^{tY})$, and, from (a), this is equal to $e^{\lambda(e^s-1)}e^{\mu(e^t-1)}$; we can conclude that $M_{(X,Y)}(s, t) = e^{\lambda(e^s-1)+\mu(e^t-1)}$.
d) $M_{X+Y}(s) = E(e^{s(X+Y)}) = E(e^{sX+sY})$; from (c), this is equal to $e^{\lambda(e^s-1)+\mu(e^s-1)} = e^{(\lambda+\mu)(e^s-1)}$.
e) If $Z$ is a Poisson random variable with parameter $\lambda + \mu$, from (a) we have that $Z$ has moment generating function $M_Z(s) = e^{(\lambda+\mu)(e^s-1)}$; on the other side, we have from (d) that also $X + Y$ has moment generating function equal to $e^{(\lambda+\mu)(e^s-1)}$. It follows that $Z$ and $X + Y$ have the same distribution.
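A quick empirical check of (e), as a sketch with the illustrative parameters $\lambda = 1.5$ and $\mu = 2.5$: the empirical distribution of $X + Y$ should match the Poisson($\lambda+\mu$) p.m.f.

```python
import numpy as np
from scipy import stats

# Compare the empirical distribution of X + Y with Poisson(lambda + mu).
rng = np.random.default_rng(0)
lam, mu = 1.5, 2.5          # illustrative parameters
x = rng.poisson(lam, 10**6)
y = rng.poisson(mu, 10**6)
s = x + y
for k in range(5):
    print(k, (s == k).mean(), stats.poisson.pmf(k, lam + mu))
```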
5)
a)
\[ M_X(s) = E(e^{sX}) = \int_{-\infty}^{+\infty} e^{sx}\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \int_{-\infty}^{+\infty}\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2\sigma^2}(x^2+\mu^2-2\mu x-2\sigma^2 sx)}\,dx = e^{\sigma^2 s^2/2+\mu s}\int_{-\infty}^{+\infty}\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2\sigma^2}(x-(\mu+\sigma^2 s))^2}\,dx, \]
where the last equality follows from $(x^2+\mu^2-2\mu x-2\sigma^2 sx) = (x-(\mu+\sigma^2 s))^2 - (\sigma^2 s)^2 - 2\mu\sigma^2 s$. Since we are integrating between $-\infty$ and $+\infty$ the density of a Gaussian random variable with parameters $(\mu+\sigma^2 s)$ and $\sigma^2$ (and this is equal to one), we obtain:
\[ M_X(s) = e^{\sigma^2 s^2/2 + \mu s}. \]
b) $M_Y(s) = E(e^{sY}) = E(e^{saX+sb}) = e^{sb}E(e^{saX}) = e^{sb+\sigma^2a^2s^2/2+\mu as} = e^{(\sigma^2a^2)s^2/2+(b+\mu a)s}$.
From (a), it follows that $Y$ is a Gaussian random variable with parameters $(b+\mu a)$ and $\sigma^2a^2$.
c) $Z$ has moment generating function $M_Z(s) = E(e^{sZ}) = E(e^{s(X-\mu)/\sigma}) = E(e^{sX/\sigma - s\mu/\sigma})$, and this is, from (a), equal to $e^{s^2/2+s\mu/\sigma-s\mu/\sigma} = e^{s^2/2}$.
By the Taylor expansion of the exponential function we find:
\[ M_Z(s) = e^{s^2/2} = \sum_{k=0}^{\infty}\frac{s^{2k}}{2^k k!}; \]
on the other side,
\[ M_Z(s) = E(e^{sZ}) = E\Big(\sum_{k=0}^{\infty}\frac{s^k Z^k}{k!}\Big) = \sum_{k=0}^{\infty}\frac{s^k E(Z^k)}{k!}; \]
it follows that
- if $k$ is odd, then $E(Z^k) = 0$;
- if $k$ is even, say $k = 2m$, then $\frac{E(Z^k)}{k!} = \frac{E(Z^{2m})}{(2m)!} = \frac{1}{m!2^m}$, and hence
\[ E(Z^{2m}) = \frac{(2m)!}{m!2^m}. \]
d) $E(W^s) = E((e^X)^s) = E(e^{sX}) = M_X(s)$; it follows that:
$E(W) = M_X(1) = e^{\sigma^2/2+\mu}$ (from (a));
$E(W^2) = M_X(2) = e^{2\sigma^2+2\mu}$ (similarly),
and $\mathrm{Var}(W) = E(W^2) - E(W)^2 = e^{2\mu+\sigma^2}(e^{\sigma^2}-1)$.
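As a numerical check of (d), a sketch with the illustrative values $\mu = 0.3$ and $\sigma = 0.8$: the sample moments of $W = e^X$ should approach the closed-form expressions above.

```python
import numpy as np

# Check E(W) = exp(sigma^2/2 + mu) and Var(W) = exp(2*mu + sigma^2)*(exp(sigma^2) - 1).
rng = np.random.default_rng(0)
mu, sigma = 0.3, 0.8        # illustrative parameters
w = np.exp(rng.normal(mu, sigma, 10**6))
print(w.mean(), np.exp(sigma**2/2 + mu))
print(w.var(), np.exp(2*mu + sigma**2)*(np.exp(sigma**2) - 1))
```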
6)
a) Since, for $k, \lambda > 0$, the density function of $X \sim \mathrm{Gamma}(\lambda, k)$ is
\[ f(x; k, \lambda) = \frac{x^{k-1}\lambda^k e^{-\lambda x}}{\Gamma(k)}I_{(0,\infty)}(x), \]
then its moment generating function is
\[ M_X(s) = E(e^{sX}) = \int_0^{\infty} e^{sx}\frac{x^{k-1}\lambda^k e^{-\lambda x}}{\Gamma(k)}\,dx = \frac{\lambda^k}{(\lambda-s)^k}\int_0^{\infty}\frac{x^{k-1}(\lambda-s)^k e^{-(\lambda-s)x}}{\Gamma(k)}\,dx = \Big(\frac{\lambda}{\lambda-s}\Big)^k\int_0^{\infty} f(x; k, \lambda-s)\,dx = \Big(\frac{\lambda}{\lambda-s}\Big)^k, \tag{1} \]
where $f(x; k, \lambda-s)$ is a density function for $s < \lambda$, and then the last equality holds for $s < \lambda$; hence, for $k = 1, 2, 3$,
\[ M_{X_k}(s) = \Big(\frac{2}{2-s}\Big)^k, \quad \text{for } s < 2. \]
b) Since, for $a \in \mathbb{R}^+$, $M_{aX}(s) = M_X(as)$ (in its domain), then, by (1), if $X \sim \mathrm{Gamma}(\lambda, k)$,
\[ M_{aX}(s) = M_X(as) = \Big(\frac{\lambda}{\lambda-as}\Big)^k = \Big(\frac{\lambda/a}{\lambda/a-s}\Big)^k = M_Y(s), \tag{2} \]
where $Y \sim \mathrm{Gamma}(\lambda/a, k)$. Hence, $aX_k \sim \mathrm{Gamma}(2/a, k)$.
c) From (1), if $X_k \sim \mathrm{Gamma}(\lambda, k)$ for $k = 1, 2, 3$ and all $X_k$ are independent, then
\[ M_{X_1+X_2+X_3}(s) = \prod_{k=1}^{3} M_{X_k}(s) = \prod_{k=1}^{3}\Big(\frac{\lambda}{\lambda-s}\Big)^k = \Big(\frac{\lambda}{\lambda-s}\Big)^{1+2+3} = \Big(\frac{\lambda}{\lambda-s}\Big)^6, \]
then $X_1 + X_2 + X_3 \sim \mathrm{Gamma}(\lambda, 6)$. Thus, with $\lambda = 2$, $X_1 + X_2 + X_3 \sim \mathrm{Gamma}(2, 6)$.
d) From (c), $X_1 + X_2 + X_3 \sim \mathrm{Gamma}(2, 6)$. By (2), $(X_1 + X_2 + X_3)/3 \sim \mathrm{Gamma}(6, 6)$.
e) Using the density of $X \sim \mathrm{Gamma}(\lambda, k)$ in (a), we have
\[ E(X^2) = \int_0^{\infty} x^2\frac{x^{k-1}\lambda^k e^{-\lambda x}}{\Gamma(k)}\,dx = \frac{\Gamma(k+2)}{\Gamma(k)\lambda^2}\int_0^{\infty}\frac{x^{(k+2)-1}\lambda^{k+2}e^{-\lambda x}}{\Gamma(k+2)}\,dx = \frac{\Gamma(k+2)}{\Gamma(k)\lambda^2}\int_0^{\infty} f(x; k+2, \lambda)\,dx = \frac{\Gamma(k+2)}{\Gamma(k)\lambda^2} = \frac{k(k+1)}{\lambda^2}. \]
(The same result can be obtained by using the m.g.f. $M_X(s)$ from (a) and calculating $E(X^2) = M_X''(0)$.)
From (c), $Y = X_1 + X_2 + X_3 \sim \mathrm{Gamma}(2, 6)$, and hence $E(Y^2) = \frac{6 \cdot 7}{4} = \frac{42}{4}$.
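A simulation sketch for (c) (using numpy's shape/scale convention, in which $\mathrm{Gamma}(\lambda, k)$ above corresponds to shape $k$ and scale $1/\lambda$): the sum should have mean $6/\lambda = 3$ and second moment $42/4 = 10.5$.

```python
import numpy as np

# X_k ~ Gamma(lambda=2, k) for k = 1, 2, 3; check that the sum has
# mean 6/2 = 3 and second moment 6*7/4 = 10.5, as derived above.
rng = np.random.default_rng(0)
lam = 2.0
s = sum(rng.gamma(shape=k, scale=1/lam, size=10**6) for k in (1, 2, 3))
print(s.mean(), 6/lam)            # ~3.0
print((s**2).mean(), 6*7/lam**2)  # ~10.5
```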
7)
a) $E(Y_n|Y_m) = E(\sum_{i=1}^n X_i \,|\, \sum_{i=1}^m X_i) = E(\sum_{i=1}^m X_i + \sum_{i=m+1}^n X_i \,|\, \sum_{i=1}^m X_i) = \sum_{i=1}^m X_i + E(\sum_{i=m+1}^n X_i)$,
where the last equality follows from the linearity of conditional expectation, the measurability of $\sum_{i=1}^m X_i$ with respect to $\sigma(\sum_{i=1}^m X_i)$, and the independence between $X_{m+1}, \dots, X_n$ and $X_1, \dots, X_m$.
Since $E(\sum_{i=m+1}^n X_i) = \sum_{i=m+1}^n E(X_i) = 0$, we can conclude that $E(Y_n|Y_m) = Y_m$.
b) By symmetry, we know that $E(X_1|Y_n) = E(X_2|Y_n) = \dots = E(X_n|Y_n)$; moreover, $E(\sum_{j=1}^n X_j|Y_n) = E(Y_n|Y_n) = Y_n$; it follows that, for any $j \le n$,
\[ E(X_j|Y_n) = \frac{1}{n}\sum_{i=1}^{n} E(X_i|Y_n) = \frac{1}{n}Y_n. \]
c) $E(Y_m|Y_n) = E(\sum_{i=1}^m X_i|Y_n) = \sum_{i=1}^m E(X_i|Y_n) = \frac{m}{n}Y_n$, where the last equality follows from (b).
d)
\[ E(S_n^2|S_m^2) = E\Big(\frac{\sum_{i=1}^n X_i^2}{n}\Big|S_m^2\Big) = E\Big(\frac{\sum_{i=1}^m X_i^2}{n} + \frac{\sum_{i=m+1}^n X_i^2}{n}\Big|S_m^2\Big) = E\Big(\frac{m}{n}S_m^2\Big|S_m^2\Big) + E\Big(\frac{1}{n}\sum_{i=m+1}^n X_i^2\Big|S_m^2\Big) = \frac{m}{n}S_m^2 + \frac{1}{n}\sum_{i=m+1}^n E(X_i^2), \]
where the last equality follows from the independence between $X_{m+1}, \dots, X_n$ and $X_1, \dots, X_m$.
We can conclude that
\[ E(S_n^2|S_m^2) = \frac{m}{n}S_m^2 + \frac{n-m}{n}\sigma^2. \]
e) Since, by symmetry, $E(X_1|Y_n, S_n^2) = E(X_2|Y_n, S_n^2) = \dots = E(X_n|Y_n, S_n^2)$, it follows that
\[ E(X_j|Y_n, S_n^2) = \frac{1}{n}\sum_{i=1}^n E(X_i|Y_n, S_n^2) = \frac{1}{n}E(Y_n|Y_n, S_n^2) = \frac{1}{n}Y_n. \]
f) Let us compute: $E(NS_N^2|N) = E(\sum_{i=1}^N X_i^2|N) = NE(X_i^2) = N\sigma^2$, where the last but one equality follows from the independence between $N$ and the $X_i$'s.
Since $E(NS_N^2) = E(E(NS_N^2|N))$, we obtain that $E(NS_N^2) = E(N)\sigma^2$.
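To illustrate (c) empirically, a sketch assuming for concreteness that the $X_i$ are i.i.d. standard normal (any centered square-integrable choice would do): the least-squares slope of $Y_m$ on $Y_n$ approximates $m/n$, consistent with $E(Y_m|Y_n) = \frac{m}{n}Y_n$.

```python
import numpy as np

# With X_i iid N(0,1), regress Y_m on Y_n: the slope should be close to m/n.
rng = np.random.default_rng(0)
m, n = 3, 10
x = rng.normal(size=(10**6, n))
y_m, y_n = x[:, :m].sum(axis=1), x.sum(axis=1)
slope = np.polyfit(y_n, y_m, 1)[0]
print(slope, m/n)  # both close to 0.3
```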
8)
a) Given $B_n$ and $Y_n$, we have that $B_{n+1} = B_n + 1$ with probability $Y_n$, and $B_{n+1} = B_n$ with probability $(1 - Y_n)$; it follows that
\[ E(B_{n+1}|B_n, Y_n) = B_n + Y_n \cdot 1 + (1 - Y_n) \cdot 0 = B_n + Y_n. \]
b) $E(B_{n+1}|B_n) = E(E(B_{n+1}|B_n, Y_n)|B_n) = E(B_n + Y_n|B_n)$, from (a),
\[ = E\Big(B_n + \frac{B_n}{r+b+n}\Big|B_n\Big) = \frac{r+b+n+1}{r+b+n}B_n. \]
c) From (b) we have that $E(B_{n+1}|B_n) = \frac{r+b+n+1}{r+b+n}B_n > B_n$, and hence $B_n$ is a submartingale.
d) By definition, $E(Y_{n+1}|Y_n) = E\big(\frac{B_{n+1}}{r+b+n+1}\big|\frac{B_n}{r+b+n}\big) = \frac{1}{r+b+n+1}E(B_{n+1}|B_n)$; from (b), this is equal to $\frac{B_n}{r+b+n} = Y_n$, and hence $Y_n$ is a martingale.
e) $E(Y_n) = E(Y_0)$ because $Y_n$ is a martingale, and moreover $E(Y_0) = Y_0 = \frac{b}{r+b}$.
f) $E(B_n) = E((r+b+n)Y_n) = (r+b+n)\frac{b}{r+b}$ (from (e)).
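A simulation sketch of the urn (assuming it starts with $b$ blue and $r$ red balls, $B_n$ counting blue balls after $n$ draws, with the drawn color reinforced; illustrative values $b = 2$, $r = 3$): the sample mean of $Y_n$ stays near $b/(r+b)$, as the martingale property in (e) predicts.

```python
import numpy as np

# Polya urn: start with b blue, r red; at each step add a ball of the drawn color.
# Y_n = B_n / (r + b + n) is a martingale, so E(Y_n) = b/(r+b) for every n.
rng = np.random.default_rng(0)
b, r, steps, n_sim = 2, 3, 50, 10**5
B = np.full(n_sim, b, dtype=float)
for n in range(steps):
    B += rng.random(n_sim) < B / (r + b + n)   # draw blue with prob Y_n
print((B / (r + b + steps)).mean(), b / (r + b))  # both close to 0.4
```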
9)
a) $\mathcal{G}_n \subseteq \mathcal{F}_n$, since $Y_n = f(X_1, \dots, X_n)$. $\mathcal{F}_n \not\subseteq \mathcal{G}_n$: consider for instance $\mathcal{F}_2$ and $\mathcal{G}_2$: if $Y_1 = 1, Y_2 = 2$, we can have either $X_1 = 1, X_2 = 1$ or $X_1 = 1, X_2 = -1$.
b) $E(Y_n|Y_{n-1}) = E(X_n I_{(Y_{n-1}=0)} + nY_{n-1}|X_n| \,\big|\, Y_{n-1}) = I_{(Y_{n-1}=0)}E(X_n) + nY_{n-1}E(|X_n|) = Y_{n-1}$.
c) $Y_n$ is a martingale with respect to $\mathcal{G}_n$ because of (b) and because $E(|Y_n|) < \infty$.
d) $E(Y_n|\mathcal{F}_{n-1}) = E(X_n I_{(Y_{n-1}=0)} + nY_{n-1}|X_n| \,\big|\, \sigma(X_{n-1}, \dots, X_1)) = I_{(Y_{n-1}=0)}E(X_n) + nY_{n-1}E(|X_n|) = Y_{n-1}$, because $Y_{n-1}$ is $\mathcal{F}_{n-1}$-measurable and $X_1, X_2, \dots, X_n, \dots$ are independent random variables. Hence $Y_n$ is a martingale with respect to $\mathcal{F}_n$.
e) By martingale properties, $E(Y_n) = E(Y_1)$; $E(Y_1) = E(X_1) = 0$.
f) $E(|Y_n|) = E(|X_n|I_{(Y_{n-1}=0)} + n|Y_{n-1}||X_n|) = E(|X_n|(I_{(Y_{n-1}=0)} + n|Y_{n-1}|)) = \frac{1}{n}P(Y_{n-1} = 0) + E(|Y_{n-1}|)$,
where the last equality follows from $E(|X_n|) = 1/n$ and from the independence between $X_n$ and $Y_{n-1}$.
Hence,
\[ E(|Y_n|) - E(|Y_{n-1}|) = \frac{P(Y_{n-1} = 0)}{n} = \frac{1 - 1/(n-1)}{n} = \frac{1}{n} - \frac{1}{n(n-1)}, \]
and
\[ E(|Y_n|) = E(|Y_1|) + \sum_{i=2}^{n}\big(E(|Y_i|) - E(|Y_{i-1}|)\big) = 1 + \sum_{i=2}^{n}\Big(\frac{1}{i} - \frac{1}{i(i-1)}\Big), \]
which is a divergent series. We conclude that $E(|Y_n|)$ is not uniformly bounded.
g) Following the calculations in (f), we can see that $\lim_n E(|Y_n|) = +\infty$.
10)
a) $E(X_1) = 1/2$; $E(X_2) = E(E(X_2|X_1)) = E(X_1/2) = 1/4$; $E(X_3) = E(E(X_3|X_1, X_2)) = E(X_2/2) = 1/8$.
b) $P(X_2 \le 1/2) = E(P(X_2 \le 1/2|X_1)) = E(I_{(X_1 \le 1/2)}P(X_2 \le 1/2|X_1)) + E(I_{(X_1 > 1/2)}P(X_2 \le 1/2|X_1))$
\[ = P(X_1 \le 1/2) + E\Big(I_{(X_1 > 1/2)}\frac{1}{2X_1}\Big) = \frac{1}{2} + \int_{1/2}^{1}\frac{1}{2x}\,dx = \frac{1}{2} + \frac{\log 2}{2}. \]
c) $P(X_1 \le 1/2, X_2 \le 1/4) = E(P(X_1 \le 1/2, X_2 \le 1/4|X_1)) = E(I_{(X_1 \le 1/2)}P(X_2 \le 1/4|X_1)) = E(I_{(X_1 \le 1/4)}P(X_2 \le 1/4|X_1)) + E(I_{(1/4 < X_1 \le 1/2)}P(X_2 \le 1/4|X_1)) = P(X_1 \le 1/4) + \int_{1/4}^{1/2}\frac{1}{4x}\,dx = \frac{1}{4} + \frac{\log 2}{4}$.
d)
\[ P(X_3 \le 1/4|X_1) = E(P(X_3 \le 1/4|X_1, X_2)|X_1) = E(I_{X_2 \le 1/4}P(X_3 \le 1/4|X_1, X_2)|X_1) + E(I_{X_2 > 1/4}P(X_3 \le 1/4|X_1, X_2)|X_1) \]
\[ = E(I_{X_2 \le 1/4}|X_1) + E\Big(I_{X_2 > 1/4}\frac{1}{4X_2}\Big|X_1\Big) \]
\[ = E(I_{X_2 \le 1/4}I_{X_1 \le 1/4}|X_1) + E(I_{X_2 \le 1/4}I_{X_1 > 1/4}|X_1) + E\Big(I_{X_2 > 1/4}I_{X_1 \le 1/4}\frac{1}{4X_2}\Big|X_1\Big) + E\Big(I_{X_2 > 1/4}I_{X_1 > 1/4}\frac{1}{4X_2}\Big|X_1\Big) \]
\[ = I_{X_1 \le 1/4} + I_{X_1 > 1/4}P(X_2 \le 1/4|X_1) + I_{X_1 > 1/4}\int_{1/4}^{X_1}\frac{1}{4x_2}\frac{1}{X_1}\,dx_2 = I_{X_1 \le 1/4} + I_{X_1 > 1/4}\frac{1}{4X_1} + I_{X_1 > 1/4}\frac{1}{4X_1}\log(4X_1). \]
e) $f_{X_1|X_2}(t|x) = \frac{f_{X_1,X_2}(t,x)}{f_{X_2}(x)}$, where:
\[ f_{X_1,X_2}(t,x) = f_{X_2|X_1}(x|t)f_{X_1}(t) = \frac{1}{t}I_{[0,t]}(x)I_{[0,1]}(t), \]
and
\[ f_{X_2}(x) = \int_0^1\frac{1}{t}I_{[0,t]}(x)I_{[0,1]}(t)\,dt = -\log(x)I_{[0,1]}(x), \]
and then $f_{X_1|X_2}(t|x) = -\frac{1}{\log x}\frac{1}{t}I_{[0,1]}(x)I_{[x,1]}(t)$.
Hence we have:
\[ P(X_1 \ge x_1|X_2 = x) = \int_{x_1}^{1} f_{X_1|X_2}(t|x)\,dt = I_{[x,1]}(x_1)I_{[0,1]}(x)\frac{(-1)}{\log x}\int_{x_1}^{1}\frac{1}{t}\,dt + I_{[0,x)}(x_1)I_{[0,1]}(x)\frac{(-1)}{\log x}\int_{x}^{1}\frac{1}{t}\,dt \]
\[ = I_{[x,1]}(x_1)I_{[0,1]}(x)\frac{\log x_1}{\log x} + I_{[0,x)}(x_1)I_{[0,1]}(x)\frac{\log x}{\log x}, \]
and we can conclude that
\[ P(X_1 \ge x_1|X_2) = I_{[0,1]}(X_2)I_{[X_2,1]}(x_1)\frac{\log x_1}{\log X_2} + I_{[0,1]}(X_2)I_{[0,X_2)}(x_1). \]
11)
a) $E(X_j) = E(E(X_j|\Theta)) = E(\Theta) = 1/2$.
$V(X_j) = E(X_j^2) - E(X_j)^2 = E(E(X_j^2|\Theta)) - 1/4 = E(\Theta) - 1/4 = 1/2 - 1/4 = 1/4$.
b) $\mathrm{Cov}(X_i, X_j) = E((X_i - 1/2)(X_j - 1/2)) = E(X_iX_j - \tfrac{1}{2}X_i - \tfrac{1}{2}X_j + \tfrac{1}{4}) = E(\Theta^2) - 1/4 = \int_0^1 x^2\,dx - 1/4 = 1/12$.
c) Since $\mathrm{Cov}(X_i, X_j) \neq 0$ from (b), they can't be stochastically independent.
d)
\[ P(\Theta \le p|X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 1, X_5 = 1) = \frac{P(\Theta \le p, X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 1, X_5 = 1)}{P(X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 1, X_5 = 1)} \]
\[ = \frac{\int_0^p P(X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 1, X_5 = 1|\Theta = s)\,ds}{\int_0^1 P(X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 1, X_5 = 1|\Theta = s)\,ds} = \frac{\int_0^p s^4(1-s)\,ds}{\int_0^1 s^4(1-s)\,ds} = \frac{p^5/5 - p^6/6}{1/30}; \]
hence,
\[ f_{\Theta|X_1=1,X_2=0,X_3=1,X_4=1,X_5=1}(p) = \frac{d}{dp}P(\Theta \le p|X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 1, X_5 = 1) = 30p^4(1-p). \]
e) From (d) we have that:
\[ E(\Theta|X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 1, X_5 = 1) = \int_0^1 p\,f_{\Theta|X_1=1,X_2=0,X_3=1,X_4=1,X_5=1}(p)\,dp = 30\int_0^1 p^5(1-p)\,dp = 5/7. \]
f) From (d) we have that:
\[ P(\Theta < 1/2|X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 1, X_5 = 1) = 30\int_0^{1/2} p^4(1-p)\,dp = 7/64. \]
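The posterior density $30p^4(1-p)$ found in (d) is a Beta(5, 2); a quick check of (e) and (f) with scipy (a sketch):

```python
from scipy import stats

# The posterior 30 p^4 (1-p) is Beta(a=5, b=2): mean 5/7, P(Theta < 1/2) = 7/64.
post = stats.beta(5, 2)
print(post.mean(), 5/7)      # 0.714285...
print(post.cdf(0.5), 7/64)   # 0.109375
```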
12)
a) We know that, if $p_{ij}(2) = P(X_{n+2} = j|X_n = i)$ is the $(i,j)$-th element of the matrix $P(2)$, then
\[ P(2) = P^2 = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/4 & 1/2 & 1/4 \\ 1/4 & 1/4 & 1/2 \end{pmatrix}. \]
b) The chain is not periodic since, from (a), the 2-step transition probabilities are all positive.
c) The chain is irreducible since, from (a), the 2-step transition probabilities are all positive.
d) By writing the equations $(\pi_1, \pi_2, \pi_3)P = (\pi_1, \pi_2, \pi_3)$ and $\pi_1 + \pi_2 + \pi_3 = 1$, we find the unique solution $(\pi_1, \pi_2, \pi_3) = (1/3, 1/3, 1/3)$.
e) No, because $(1, 0, 0)P \neq (1, 0, 0)$. Alternatively, from (d), the unique stationary distribution is $\pi = (1/3, 1/3, 1/3)$, and therefore the initial distribution must be $\pi$ for the chain to be stationary.
f) From (b), (c) and (d) and the properties of stationary distributions, $\lim_n P(X_n = 1) = 1/3$ if the chain starts in 1.
g) No, because of the properties of stationary distributions, as in (f).
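A numerical sketch using the two-step matrix from (a) (only $P^2$ is given here): its powers converge to a matrix with all rows equal to $\pi = (1/3, 1/3, 1/3)$, and $\pi$ is stationary.

```python
import numpy as np

# Iterate the two-step transition matrix: rows converge to (1/3, 1/3, 1/3).
P2 = np.array([[0.5, 0.25, 0.25],
               [0.25, 0.5, 0.25],
               [0.25, 0.25, 0.5]])
print(np.linalg.matrix_power(P2, 20))   # every row ~ [1/3, 1/3, 1/3]
print((np.ones(3)/3) @ P2)              # pi is stationary for P2 as well
```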
13)
a) $E(X_2^2) = E(E(X_2^2|X_1)) = E(X_1 + X_1^2) = E(X_1) + E(X_1^2) < \infty$.
b) $E(X_2|X_1) = X_1$ and $V(X_2|X_1) = X_1$, because $X_2|X_1 \sim \mathrm{Poisson}(X_1)$.
c) $E(X_2) = E(E(X_2|X_1)) = E(X_1) = 1$.
d) From the theorem of decomposition of variance we have:
$V(X_2) = E(V(X_2|X_1)) + V(E(X_2|X_1)) = E(X_1) + V(X_1) = 1 + 1 = 2$.
e) $\|X_2 - X_1\|_2 = E[(X_2 - X_1)^2]^{1/2} = [E(X_2^2) + E(X_1^2) - 2E(X_1X_2)]^{1/2} = [E(X_2^2) + E(X_1^2) - 2E(E(X_1X_2|X_1))]^{1/2} = [3 + 2 - 2E(X_1^2)]^{1/2} = (3 + 2 - 4)^{1/2} = 1$.
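A simulation sketch, assuming (consistently with the moments used above) $X_1 \sim \mathrm{Poisson}(1)$ and $X_2|X_1 \sim \mathrm{Poisson}(X_1)$:

```python
import numpy as np

# Hierarchical Poisson: X_1 ~ Poisson(1), X_2 | X_1 ~ Poisson(X_1).
rng = np.random.default_rng(0)
x1 = rng.poisson(1.0, 10**6)
x2 = rng.poisson(x1)            # vectorized: one Poisson(X_1) draw per sample
print(x2.mean(), 1)             # (c): E(X_2) = 1
print(x2.var(), 2)              # (d): V(X_2) = 2
print(np.sqrt(((x2 - x1)**2).mean()), 1)  # (e): ||X_2 - X_1||_2 = 1
```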
14)
a) $X_t$ is an AR(1) process: $X_t = c + \phi X_{t-1} + U_t$. From the autocovariance function of AR(1) processes we obtain:
\[ \gamma(k) = E(U_t^2)\frac{\phi^k}{1-\phi^2} = \frac{0.5^k}{1-0.5^2} = \frac{0.5^k}{0.75}. \]
b) By backward substitution we obtain:
\[ X_t = 1 + 0.5X_{t-1} + U_t = 1 + 0.5(1 + 0.5X_{t-2} + U_{t-1}) + U_t = 1 + 0.5 + 0.5^2X_{t-2} + 0.5U_{t-1} + U_t \]
\[ = 1 + 0.5 + 0.5^2(1 + 0.5X_{t-3} + U_{t-2}) + 0.5U_{t-1} + U_t = \dots = \sum_{j=0}^{k-1}0.5^j + 0.5^kX_{t-k} + \sum_{j=0}^{k-1}0.5^jU_{t-j} \quad \text{(by iterating } k \text{ times)}, \]
which converges, as $k \to \infty$, to $2 + \sum_{j=0}^{\infty}\frac{1}{2^j}U_{t-j}$. We have shown that $X_t = 2 + \sum_{j=0}^{\infty}\frac{1}{2^j}U_{t-j}$.
c) $E(X_t|X_{t-1}) = E(1 + 0.5X_{t-1} + U_t|X_{t-1}) = 1 + 0.5X_{t-1}$.
d) $V(X_t|X_{t-1}) = E((X_t - E(X_t|X_{t-1}))^2|X_{t-1}) = E((1 + 0.5X_{t-1} + U_t - 1 - 0.5X_{t-1})^2|X_{t-1}) = E(U_t^2) = 1$.
e) By decomposing the variance we have:
\[ V(X_t) = E(V(X_t|X_{t-1})) + V(E(X_t|X_{t-1})) = 1 + V(1 + 0.5X_{t-1}) = 1 + 0.5^2V(X_{t-1}) = 1 + \frac{0.5^2}{1-0.5^2}. \]
In fact: $V(X_t) = \frac{1}{1-0.5^2} = 1 + \frac{0.5^2}{1-0.5^2}$.
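A simulation sketch of this AR(1), assuming $U_t$ i.i.d. standard normal (consistent with $E(U_t^2) = 1$):

```python
import numpy as np

# Simulate X_t = 1 + 0.5 X_{t-1} + U_t and check mean 2, variance 1/0.75,
# and lag-1 autocovariance 0.5/0.75, as derived above.
rng = np.random.default_rng(0)
T = 10**6
x = np.empty(T)
x[0] = 2.0
u = rng.normal(size=T)
for t in range(1, T):
    x[t] = 1 + 0.5*x[t-1] + u[t]
print(x.mean(), 2)
print(x.var(), 1/0.75)
print(np.mean((x[1:] - 2)*(x[:-1] - 2)), 0.5/0.75)
```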
15)
a) $U_t$ is integrable and satisfies $E(U_t|U_{t-1}, \dots, U_1) = E(U_t) = 0$; hence it is a martingale difference. By definition it is also a white noise, since it is a sequence of independent random variables with zero mean.
b) $X_t$ is stationary: it is an MA(1) process, $X_t = 1 + U_t + 0.2U_{t-1}$, and its autocorrelation function is $\rho(1) = \theta/(1+\theta^2) = 0.2/1.04$, $\rho(k) = 0$ for any $k > 1$.
c) $Y_t = X_t - X_{t-1} = U_t - 0.8U_{t-1} - 0.2U_{t-2}$.
It follows that $E(Y_t) = 0$, and, indicating by $\gamma(k)$ the autocovariance function and remembering that $U_t, U_{t-1}, U_{t-2}, \dots$ are independent, we get:
$\gamma(0) = V(Y_t) = V(U_t - 0.8U_{t-1} - 0.2U_{t-2}) = 1 + 0.8^2 + 0.2^2 = 1.68$;
$\gamma(1) = E((U_t - 0.8U_{t-1} - 0.2U_{t-2})(U_{t-1} - 0.8U_{t-2} - 0.2U_{t-3})) = -0.8 + 0.2 \cdot 0.8 = -0.64$;
$\gamma(2) = E((U_t - 0.8U_{t-1} - 0.2U_{t-2})(U_{t-2} - 0.8U_{t-3} - 0.2U_{t-4})) = -0.2$;
$\gamma(k) = 0$ for any $k > 2$.
d) $Z_t = X_tU_t = U_t + 0.2U_tU_{t-1} + U_t^2$. It follows that $E(Z_t) = 1$, and, indicating by $\gamma(k)$ the autocovariance function, we get:
$\gamma(0) = V(Z_t) = E(U_t + 0.2U_tU_{t-1} + U_t^2)^2 - 1 = E(U_t^2 + 0.2^2U_t^2U_{t-1}^2 + U_t^4 + 2 \cdot 0.2U_t^2U_{t-1} + 2U_t^3 + 2 \cdot 0.2U_t^3U_{t-1}) - 1 = 1 + 0.2^2 + 4 - 1 = 4.04$;
$\gamma(1) = \mathrm{Cov}(Z_t, Z_{t-1}) = E((U_t + 0.2U_tU_{t-1} + U_t^2 - 1)(U_{t-1} + 0.2U_{t-1}U_{t-2} + U_{t-1}^2 - 1)) = 0$.
e) $W_t$ is an MA(1) like $X_t$ (see (b)), and its autocorrelation function is $\rho(1) = \theta/(1+\theta^2) = 5/26 = 0.2/1.04$, $\rho(k) = 0$ for any $k > 1$.
The autocovariance functions are not identical because $V(X_t) = 0.2^2 + 1$, while $V(W_t) = 5^2 + 1$.
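A simulation sketch for (b) and (c), again assuming $U_t$ i.i.d. standard normal:

```python
import numpy as np

# X_t = 1 + U_t + 0.2 U_{t-1}; Y_t = X_t - X_{t-1} = U_t - 0.8 U_{t-1} - 0.2 U_{t-2}.
rng = np.random.default_rng(0)
u = rng.normal(size=10**6)
x = 1 + u[2:] + 0.2*u[1:-1]
y = u[2:] - 0.8*u[1:-1] - 0.2*u[:-2]
print(x.var(), 1.04)                   # V(X_t) = 1 + 0.2^2
print(y.var(), 1.68)                   # gamma(0)
print(np.mean(y[1:]*y[:-1]), -0.64)    # gamma(1)
print(np.mean(y[2:]*y[:-2]), -0.2)     # gamma(2)
```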
16)
a) For any $\varepsilon > 0$, $P(|X_n| > \varepsilon) = \frac{1}{n} \to 0$ as $n \to \infty$. Hence, by definition, $X_n$ converges to 0 in probability.
b) If $X_n$ converges almost surely, then the limit must be 0 by (a). Since
\[ \sum_n P(X_n = 1) = \sum_n \frac{1}{n} = \infty, \]
from the second Borel–Cantelli lemma this implies that $P(\limsup_n (X_n = 1)) = 1$, and then $X_n$ can't converge almost surely to 0.
c) If $X_n$ converges in $L^2$, then the limit must be 0 by (a).
$\|X_n\|_2 = E(|X_n|^2)^{1/2} = P(X_n = 1)^{1/2} = \frac{1}{n^{1/2}} \to 0$ as $n \to \infty$. Hence, by definition, $X_n$ converges to 0 in $L^2$.
d) $Y_n = nX_n$ cannot converge almost surely because $X_n$ doesn't converge almost surely, as proved in (b).
For any $\varepsilon > 0$, $P(|Y_n| > \varepsilon) = \frac{1}{n} \to 0$ as $n \to \infty$. Hence, by definition, $Y_n$ converges to 0 in probability.
If $Y_n$ converged also in $L^2$, then the limit would have to be 0; but $\|Y_n\|_2 = E(|Y_n|^2)^{1/2} = E(|nX_n|^2)^{1/2} = nP(X_n = 1)^{1/2} = n^{1/2} \to +\infty$ as $n \to \infty$. Hence $Y_n$ can't converge in $L^2$.
e) $P(Z_n = 0) = P(X_1 = 0, \dots, X_n = 0) = \prod_{i=1}^n P(X_i = 0) = (1 - \tfrac{1}{2})(1 - \tfrac{1}{3})\cdots(1 - \tfrac{1}{n}) = \tfrac{1}{n}$, and then $P(Z_n = 1) = 1 - \tfrac{1}{n}$;
as in (a), for any $\varepsilon > 0$, $P(|Z_n - 1| > \varepsilon) = \tfrac{1}{n} \to 0$ as $n \to \infty$, and hence $Z_n$ converges to 1 in probability.
Similarly to the results for $X_n$ in (c), $Z_n$ converges to 1 also in $L^2$; moreover, $Z_n$ is an increasing sequence, and then it converges also almost surely to 1.
f) $P(W_n = 1) = P(X_1 = 1, \dots, X_n = 1) = \prod_{i=1}^n P(X_i = 1) = 1 \cdot \tfrac{1}{2} \cdot \tfrac{1}{3}\cdots\tfrac{1}{n}$ and $P(W_n = 0) = 1 - \tfrac{1}{2} \cdot \tfrac{1}{3}\cdots\tfrac{1}{n}$; it follows that, for any $\varepsilon > 0$,
\[ P(|W_n| > \varepsilon) = \frac{1}{2} \cdot \frac{1}{3}\cdots\frac{1}{n} \to 0 \quad \text{as } n \to \infty, \]
and then $W_n$ converges to 0 in probability. Since $|W_n| \le |X_n|$, we have that $W_n$ converges to 0 also in $L^2$ from (c). Finally we have
\[ \sum_n P(W_n = 1) = \sum_n \frac{1}{2} \cdot \frac{1}{3}\cdots\frac{1}{n} = \sum_n \frac{1}{n!} < \infty, \]
and then, from the first Borel–Cantelli lemma, $P(\limsup_n (W_n = 1)) = 0$; hence $W_n$ converges also almost surely to 0.
17)
a)
\[ M_{T_n}(s) = E[\exp(sT_n)] = E\Big[\exp\Big(s\sum_{j=1}^n X_{nj}\Big)\Big] = \prod_{j=1}^n E[\exp(sX_{nj})] = \big(E[\exp(sX_{n1})]\big)^n \]
\[ = (1 - 2p_n + p_n\exp(s) + p_n\exp(-s))^n = \big(1 + p_n(\exp(s) + \exp(-s) - 2)\big)^n. \]
b) In this case, $M_{T_n}(s) = \big(1 + \frac{1}{n^2}(\exp(s) + \exp(-s) - 2)\big)^n$, and this converges to 1 as $n \to \infty$ (remember that $(1 + \frac{c}{n^2})^{n^2/c} \to \exp(1)$ as $n \to \infty$, and then $(1 + \frac{c}{n^2})^n = [(1 + \frac{c}{n^2})^{n^2/c}]^{c/n} \to \exp(1)^0 = 1$ as $n \to \infty$). This implies that $T_n$ converges to 0 in distribution, and then it converges to 0 also in probability.
c) In this case, $M_{T_n}(s) = \big(1 + \frac{1}{n}(\exp(s) + \exp(-s) - 2)\big)^n \to \exp(e^s + e^{-s} - 2) = \exp(e^s - 1)\exp(e^{-s} - 1)$, which is equal to $M_{X-Y}(s) = E[\exp(s(X - Y))] = E[\exp(sX)]E[\exp(-sY)]$, where $X$ and $Y$ are two independent Poisson(1).
d) $M_{|T_n|}(s) = \big(1 + \frac{1}{n}(2\exp(s) - 2)\big)^n \to \exp(2(e^s - 1))$ as $n \to \infty$, which is the m.g.f. of a Poisson(2). Thus, $|T_n|$ converges in distribution to a Poisson(2).
e) Since $E(X_{nj}) = 0$ and $V(X_{nj}) = 2p$, from the Lévy Central Limit Theorem $\frac{T_n}{\sqrt{2np}}$ converges in distribution to a standard normal distribution; since $g(x) = x^2$ is a continuous function, it follows that $\frac{T_n^2}{2np} = \big(\frac{T_n}{\sqrt{2np}}\big)^2$ converges in distribution to a chi-square distribution with one degree of freedom.
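A simulation sketch for (c), with $p_n = 1/n$: $T_n$ should behave like the difference of two independent Poisson(1) variables, which has mean 0 and variance 2.

```python
import numpy as np

# X_nj in {-1, 0, 1} with P(+-1) = 1/n each; T_n is the sum of n of them.
rng = np.random.default_rng(0)
n, n_sim = 500, 2 * 10**4
u = rng.random((n_sim, n))
x = np.where(u < 1/n, 1, np.where(u < 2/n, -1, 0))
t = x.sum(axis=1)
print(t.mean(), t.var())  # ~0 and ~2, matching X - Y with X, Y ~ Poisson(1)
```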
18)
a) Since, for $k, \lambda > 0$, the density function of $X \sim \mathrm{Gamma}(\lambda, k)$ is $f(x; k, \lambda) = \frac{x^{k-1}\lambda^k e^{-\lambda x}}{\Gamma(k)}I_{(0,+\infty)}(x)$, then its moment generating function is, for $s < \lambda$,
\[ M_X(s) = E(e^{sX}) = \int_0^{\infty} e^{sx}\frac{x^{k-1}\lambda^k e^{-\lambda x}}{\Gamma(k)}\,dx = \frac{\lambda^k}{(\lambda-s)^k}\int_0^{\infty}\frac{x^{k-1}(\lambda-s)^k e^{-(\lambda-s)x}}{\Gamma(k)}\,dx = \Big(\frac{\lambda}{\lambda-s}\Big)^k\int_0^{\infty} f(x; k, \lambda-s)\,dx = \Big(\frac{\lambda}{\lambda-s}\Big)^k; \tag{3} \]
hence, if $X_k \sim \mathrm{Gamma}(1, k)$, we get $M_{X_k}(s) = (1-s)^{-k}$ for $s < 1$.
b) Since $M_{a(X+b)}(s) = e^{abs}M_X(as)$ (for $as$ in the domain of $M_X$), we get
\[ M_{Y_k}(s) = e^{-s\sqrt{k}}\Big(1 - \frac{s}{\sqrt{k}}\Big)^{-k}. \]
c) From (b), and by using the Taylor expansion of the logarithm,
\[ M_{Y_k}(s) = \exp\Big(-s\sqrt{k} - k\log\Big(1 - \frac{s}{\sqrt{k}}\Big)\Big) = \exp\Big(-s\sqrt{k} + k\Big(\frac{s}{\sqrt{k}} + \frac{s^2}{2k} + o(1/k)\Big)\Big) = \exp\Big(\frac{s^2}{2} + o(1)\Big) \to \exp\Big(\frac{s^2}{2}\Big) \quad \text{as } k \to \infty, \]
which is the m.g.f. of a standard normal distribution; this shows that the limit distribution of $Y_k$ is a standard Gaussian distribution.
d) Since $g(x) = x^2$ is a continuous function, $Y_k^2$ converges to the square of the limit of $Y_k$ (Continuous Mapping Theorem). By (c), $Y_k^2$ converges to $\chi_1^2$ (a chi-square distribution with 1 degree of freedom).
e) $Y_1, Y_2, \dots$ are independent, since $X_1, X_2, \dots$ are. Then, from (d),
\[ M_{Y_k^2+Y_{k+1}^2+Y_{k+2}^2}(s) = M_{Y_k^2}(s)\,M_{Y_{k+1}^2}(s)\,M_{Y_{k+2}^2}(s) \xrightarrow{k \to \infty} (M_{\chi_1^2}(s))^3. \]
Note that $\chi_m^2$ (a chi-square distribution with $m$ degrees of freedom) is the sum of $m$ independent $\chi_1^2$, which means that $M_{\chi_m^2}(s) = (M_{\chi_1^2}(s))^m$. We get
\[ M_{\chi_3^2}(s) = (M_{\chi_1^2}(s))^3 = \lim_{k \to \infty} M_{Y_k^2+Y_{k+1}^2+Y_{k+2}^2}(s), \]
and hence the limit of $Y_k^2 + Y_{k+1}^2 + Y_{k+2}^2$ is a $\chi_3^2$ distribution.
f) $Y_1, Y_2, \dots$ are independent, since $X_1, X_2, \dots$ are. Then, from (c) and (d),
\[ M_{(Y_k, Y_{k+1}^2)}(s_1, s_2) = M_{Y_k}(s_1)\,M_{Y_{k+1}^2}(s_2) \to M_{N(0,1)}(s_1)\,M_{\chi_1^2}(s_2) \quad \text{as } k \to \infty, \]
which means that the joint vector $(Y_k, Y_{k+1}^2)$ converges to $(Z, D)$, where $Z \sim N(0,1)$ and $D \sim \chi_1^2$ are independent. Since $g(x, y) = x/\sqrt{y}$ is continuous for $y > 0$, then, from the Continuous Mapping Theorem, $\frac{Y_k}{\sqrt{Y_{k+1}^2}} = \frac{Y_k}{|Y_{k+1}|}$ converges to $Z/\sqrt{D}$, a Student's $t$-distribution with one degree of freedom.