Stochastic Modeling (2016)
Assignment 1 - Solutions
Problem 1.
Cf. lecture notes, Chp. 1, pages 39–43. Check also Pattern Recognition with Fuzzy Objective Function Algorithms by J.C. Bezdek (1981), pages 65–68.
Problem 2.
Cf. hand-written solution at the end of the document. Check also the appendix in Mixtures of probabilistic principal component analysers by M.E. Tipping and C.M. Bishop, Neural Computation 11(2), pp. 443–482.
Problem 3. Sampling Methods.
(i) To simulate $X$ conditionally on $X > a$ using rejection sampling, start by simulating a random variable $Y_1$ with distribution $F$, and set $X = Y_1$ if $Y_1 > a$; otherwise draw a new random variable $Y_2$, and keep going until the first index $N$ such that $Y_N > a$. The random variable $N$ has a geometric distribution with parameter $1/K = P(X > a)$, which tends to $0$ as $a$ tends to infinity. This method is therefore inefficient when $a$ is in the tail of the distribution of $X$.
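The naive rejection scheme above can be sketched as follows; this is an illustrative implementation, where the names `sample_from_F` and `max_tries` are my own, and the usage example conditions a standard normal (via `random.gauss`) on exceeding $a = 1$:

```python
import random

def sample_conditional_rejection(sample_from_F, a, max_tries=1_000_000):
    """Simulate X conditionally on X > a by naive rejection: draw
    Y_1, Y_2, ... from F and return the first draw exceeding a.
    The number of draws N is geometric with success probability
    P(X > a), so this degrades badly when a is far in the tail."""
    for n in range(1, max_tries + 1):
        y = sample_from_F()
        if y > a:
            return y, n  # accepted sample and the number of trials used
    raise RuntimeError("no acceptance within max_tries")

# Usage: condition a standard normal on X > 1.
x, n_trials = sample_conditional_rejection(lambda: random.gauss(0.0, 1.0), a=1.0)
```

Returning the trial count `n` alongside the sample makes the geometric behaviour of $N$ easy to observe empirically.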
(ii) Let $U$ be a uniform random variable on the interval $[0, 1]$, and set
\[
T = F^{-1}\bigl(F(a) + (1 - F(a))U\bigr) .
\]
Since $F(a) + (1 - F(a))U \ge F(a)$, the random variable $T$ is larger than $a$. Therefore, the distribution function $F_T$ of $T$ is, for $t > a$,
\[
F_T(t) = P(T \le t) = P\bigl(F^{-1}(F(a) + (1 - F(a))U) \le t\bigr) = P\bigl(F(a) + (1 - F(a))U \le F(t)\bigr) ,
\]
which yields
\[
F_T(t) = P\left( U \le \frac{F(t) - F(a)}{1 - F(a)} \right) = \frac{F(t) - F(a)}{1 - F(a)} = \frac{P(a < X \le t)}{P(X > a)} = P(X \le t \mid X > a) .
\]
We deduce a simple method to simulate $X$ conditionally on $X > a$: draw some $U \sim \mathcal{U}(0, 1)$ and set $T = F^{-1}\bigl(F(a) + (1 - F(a))U\bigr)$.
This method is far more efficient than the previous one, since it rejects no samples and works for any $a$. It does, however, require the inversion of $F$, whereas the rejection method only requires the ability to simulate according to $F$.
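The inverse-CDF construction above can be sketched directly; as an illustration I use a standard exponential distribution, for which $F(x) = 1 - e^{-x}$ and $F^{-1}(u) = -\log(1-u)$ are explicit (the function name and the choice of distribution are mine, not from the solution):

```python
import math
import random

def sample_conditional_inverse(F, F_inv, a):
    """Simulate X conditionally on X > a without any rejection:
    draw U ~ U(0, 1) and return T = F^{-1}(F(a) + (1 - F(a)) * U),
    which was shown to have the conditional distribution."""
    u = random.random()
    return F_inv(F(a) + (1.0 - F(a)) * u)

# Usage with a standard exponential, whose F and F^{-1} are explicit.
F = lambda x: 1.0 - math.exp(-x)
F_inv = lambda u: -math.log(1.0 - u)
t = sample_conditional_inverse(F, F_inv, a=2.0)  # always exceeds a
```

Every call produces a valid sample, in contrast with the geometric number of trials needed by the rejection scheme of part (i).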
(iii) Fix $a > 0$, and put
\[
Q(a) = \frac{1}{\sqrt{2\pi}} \int_a^{\infty} \exp\left\{ -\frac{u^2}{2} \right\} du , \qquad
p(x) = \frac{1}{Q(a)\sqrt{2\pi}} \exp\left\{ -\frac{x^2}{2} \right\} , \qquad
q_\lambda(x) = \lambda e^{-\lambda(x - a)} \mathbf{1}(x > a) .
\]
We are looking for the pair $(\lambda, K)$ such that $p(x) \le K q_\lambda(x)$ for all $x$, with $K$ as small as possible, that is, for all $x > a$,
\[
K Q(a)\sqrt{2\pi} \ge \frac{1}{\lambda} \exp\left\{ \lambda(x - a) - \frac{x^2}{2} \right\} ,
\]
which can be rewritten as
\[
K Q(a)\sqrt{2\pi} \ge \sup_{x > a} \frac{1}{\lambda} \exp\left\{ \lambda(x - a) - \frac{x^2}{2} \right\} =: \sup_{x > a} \varphi_\lambda(x) .
\]
Since $K$ must be optimal, choose it such that
\[
K Q(a)\sqrt{2\pi} = \inf_{\lambda > 0} \sup_{x > a} \varphi_\lambda(x) .
\]
The derivative of $\varphi_\lambda$ is
\[
\varphi_\lambda'(x) = (\lambda - x) \exp\left\{ \lambda(x - a) - \frac{x^2}{2} \right\} .
\]
To study its sign, we need to distinguish two cases. If $0 < \lambda \le a$, then $\varphi_\lambda'$ is always negative on $x > a$, so $\varphi_\lambda$ is non-increasing and thus
\[
\sup_{x > a} \varphi_\lambda(x) = \varphi_\lambda(a) = \frac{1}{\lambda} \exp\left\{ -\frac{a^2}{2} \right\} .
\]
It follows that
\[
\inf_{0 < \lambda \le a} \frac{1}{\lambda} \exp\left\{ -\frac{a^2}{2} \right\} = \frac{1}{a} \exp\left\{ -\frac{a^2}{2} \right\} = \varphi_a(a) .
\]
Suppose now that $\lambda \ge a$. Then
\[
\sup_{x > a} \varphi_\lambda(x) = \varphi_\lambda(\lambda) = \frac{1}{\lambda} \exp\left\{ \lambda(\lambda - a) - \frac{\lambda^2}{2} \right\} .
\]
The derivative of $\varphi_\lambda(\lambda)$ with respect to $\lambda$ is
\[
\frac{1}{\lambda^2} \left( \lambda^2 - \lambda a - 1 \right) \exp\left\{ \lambda(\lambda - a) - \frac{\lambda^2}{2} \right\} ,
\]
so that
\[
\inf_{\lambda \ge a} \frac{1}{\lambda} \exp\left\{ \lambda(\lambda - a) - \frac{\lambda^2}{2} \right\} = \varphi_{\lambda_0}(\lambda_0) , \qquad \text{where} \quad \lambda_0 = \frac{a + \sqrt{a^2 + 4}}{2} .
\]
Since $\varphi_{\lambda_0}(\lambda_0) < \varphi_a(a)$, we conclude that
\[
\inf_{\lambda > 0} \sup_{x > a} \varphi_\lambda(x) = \varphi_{\lambda_0}(\lambda_0) ,
\]
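The inequality $\varphi_{\lambda_0}(\lambda_0) < \varphi_a(a)$ can be checked numerically from the closed forms $\varphi_\lambda(\lambda) = \frac{1}{\lambda}\exp\{\lambda^2/2 - \lambda a\}$ and $\varphi_a(a) = \frac{1}{a}\exp\{-a^2/2\}$; a minimal sanity check (the value $a = 1$ is an arbitrary illustration):

```python
import math

def phi_at_peak(lam, a):
    """phi_lambda(lambda) = (1/lambda) * exp(lambda^2/2 - lambda*a),
    the sup over x > a of phi_lambda when lambda >= a."""
    return (1.0 / lam) * math.exp(lam * lam / 2.0 - lam * a)

a = 1.0
lam0 = (a + math.sqrt(a * a + 4.0)) / 2.0  # the optimal rate lambda_0
phi_a = (1.0 / a) * math.exp(-a * a / 2.0)  # phi_a(a)
# The infimum over lambda >= a is strictly below the boundary value:
assert phi_at_peak(lam0, a) < phi_a
```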
and we choose $\lambda = \lambda_0$ and
\[
K = \frac{1}{Q(a)\sqrt{2\pi}} \, \varphi_{\lambda_0}(\lambda_0) .
\]
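Putting the pieces together, the resulting tail sampler can be sketched as follows. With $\lambda = \lambda_0$, the acceptance ratio simplifies to $p(x)/(K q_{\lambda_0}(x)) = \exp\{-(x - \lambda_0)^2/2\}$, since $K Q(a)\sqrt{2\pi}\,\lambda_0 = \exp\{\lambda_0^2/2 - \lambda_0 a\}$. The function name below is mine; the construction is the one derived above:

```python
import math
import random

def sample_normal_tail(a):
    """Simulate a standard normal conditioned on X > a by rejection
    against the exponential envelope q_lambda(x) = lambda*exp(-lambda*(x-a))
    with the optimal rate lambda_0 = (a + sqrt(a^2 + 4)) / 2.  A proposal x
    is accepted with probability p(x)/(K*q(x)) = exp(-(x - lambda_0)^2 / 2)."""
    lam = (a + math.sqrt(a * a + 4.0)) / 2.0
    while True:
        x = a + random.expovariate(lam)  # proposal from the shifted exponential
        if random.random() <= math.exp(-0.5 * (x - lam) ** 2):
            return x

# Usage: all samples exceed a, even deep in the tail where the naive
# rejection method of part (i) would almost never accept.
samples = [sample_normal_tail(4.0) for _ in range(100)]
```

Because the envelope tracks the Gaussian tail closely, the expected number of proposals per accepted sample stays bounded as $a$ grows, unlike in part (i).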