Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2008, Article ID 639145, 10 pages
doi:10.1155/2008/639145

Research Article

A Strong Limit Theorem for Functions of Continuous Random Variables and an Extension of the Shannon-McMillan Theorem

Gaorong Li,1,2 Shuang Chen,3 and Sanying Feng4

1 School of Finance and Statistics, East China Normal University, Shanghai 200241, China
2 College of Applied Sciences, Beijing University of Technology, Beijing 100022, China
3 School of Sciences, Hebei University of Technology, Tianjin 300130, China
4 College of Mathematics and Science, Luoyang Normal University, Henan 471022, China

Correspondence should be addressed to Gaorong Li, l [email protected]

Received 1 November 2007; Revised 10 January 2008; Accepted 29 March 2008

Recommended by Onno Boxma
By means of the notion of the likelihood ratio, the limit properties of sequences of arbitrarily dependent continuous random variables are studied, and a kind of strong limit theorem represented by inequalities with random bounds for functions of continuous random variables is established. The Shannon-McMillan theorem is extended to the case of arbitrary continuous information sources. In the proofs, an analytic technique and the tools of Laplace transforms and moment generating functions are applied to study the strong limit theorems.
Copyright © 2008 Gaorong Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Let {X_n, n ≥ 1} be a sequence of arbitrary continuous real random variables on the probability space (Ω, F, P) with the joint density function
\[ f_n(x_1,\ldots,x_n) > 0, \quad n = 1,2,\ldots, \tag{1.1} \]
where x_i ∈ (−∞, ∞), 1 ≤ i ≤ n. Let Q be another probability measure on F under which {X_n, n ≥ 1} is a sequence of independent random variables with the marginal density functions g_k(x_k) (1 ≤ k ≤ n), and let
\[ \pi_n(x_1,\ldots,x_n) = \prod_{k=1}^{n} g_k(x_k). \tag{1.2} \]
In order to quantify the deviation between {X_n, n ≥ 1} under the probability measures P and Q, we first introduce the following definitions.
Definition 1.1. Let {X_n, n ≥ 1} be a sequence of random variables with joint density (1.1), and let π_n(x_1,…,x_n) be defined by (1.2). Let
\[ Z_n(\omega) = \frac{f_n(X_1,\ldots,X_n)}{\pi_n(X_1,\ldots,X_n)}. \tag{1.3} \]
In statistical terms, Z_n(ω) is called the likelihood ratio, which is of fundamental importance in the theory of testing statistical hypotheses (cf. [1, page 388]; [2, page 483]).
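To make the likelihood ratio concrete, here is a small Monte Carlo sketch (our own illustration, not part of the paper; the model and parameters are arbitrary): for a stationary Gaussian AR(1) sequence with correlation ρ, whose marginals are all N(0,1), the quantity (1/n) ln Z_n of (1.3) converges to the mutual-information rate −(1/2) ln(1 − ρ²) > 0, so dependence shows up as a strictly positive asymptotic log-likelihood ratio.

```python
import numpy as np

# Illustrative sketch (not from the paper): stationary Gaussian AR(1) with
# N(0,1) marginals.  f_n is the true joint density, pi_n the product of the
# marginals, and (1/n) ln Z_n -> -0.5 * ln(1 - rho^2).
rng = np.random.default_rng(0)
rho, n = 0.6, 50_000
sd = np.sqrt(1 - rho**2)          # conditional std of X_k given X_{k-1}

x = np.empty(n)
x[0] = rng.standard_normal()
for k in range(1, n):
    x[k] = rho * x[k - 1] + sd * rng.standard_normal()

def log_norm_pdf(z, scale=1.0):
    return -0.5 * np.log(2 * np.pi * scale**2) - z**2 / (2 * scale**2)

log_f = log_norm_pdf(x[0]) + log_norm_pdf(x[1:] - rho * x[:-1], sd).sum()
log_pi = log_norm_pdf(x).sum()    # ln pi_n = sum of marginal log-densities
r_hat = (log_f - log_pi) / n      # empirical (1/n) ln Z_n
r_theory = -0.5 * np.log(1 - rho**2)
```

With ρ = 0.6 the limit is about 0.223; independence (ρ = 0) would give 0, matching the remark below that r(ω) = 0 a.s. in the independent case.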
The random variable
\[ r(\omega) = \limsup_{n\to\infty} \frac{1}{n}\ln Z_n(\omega) = -\liminf_{n\to\infty} \frac{1}{n}\ln\!\left[\left\{\prod_{k=1}^{n} g_k(X_k)\right\}\Big/ f_n(X_1,\ldots,X_n)\right] \tag{1.4} \]
is called the asymptotic logarithmic likelihood ratio of {X_n, n ≥ 1}, relative to the product of the marginal distributions in (1.2), where ln is the natural logarithm and ω is the sample point. For the sake of brevity, we denote X_k(ω) by X_k.
Although r(ω) is not a proper metric between probability measures, we nevertheless think of it as a measure of the "dissimilarity" between the joint density f_n(x_1,…,x_n) and the product π_n(x_1,…,x_n) of the marginals. Obviously, r(ω) = 0 a.s. if and only if {X_n, n ≥ 1} are independent. A stochastic process of fundamental importance in the theory of testing hypotheses is the sequence of likelihood ratios. In view of the above discussion of the asymptotic logarithmic likelihood ratio, it is natural to think of r(ω) as a measure of how far {X_n, n ≥ 1} deviates from independence: the smaller r(ω) is, the smaller the deviation is (cf. [3–5]).
In [3], strong deviation theorems for discrete random variables were discussed by using the generating function method. Later, the approach of using the Laplace transform to study strong limit theorems was first proposed by Liu [4]. Yang [6] further studied the limit properties of Markov chains indexed by a homogeneous tree through the analytic technique. Many comprehensive results may be found in Liu [7]. The purpose of this paper is to establish a kind of strong deviation theorem represented by inequalities with random bounds for functions of arbitrary continuous random variables, by combining the analytic technique with the method of Laplace transforms, and to extend the strong deviation theorems to the differential entropy of arbitrarily dependent continuous information sources in more general settings.
Definition 1.2. Let {h_n(x_n), n ≥ 1} be a sequence of nonnegative Borel measurable functions defined on R. The Laplace transform of the random variables h_n(X_n) on the probability space (Ω, F, Q) is defined by
\[ \tilde{f}_n(s) = \int_{-\infty}^{\infty} e^{-s h_n(x_n)} g_n(x_n)\,dx_n = E_Q e^{-s h_n(X_n)}, \quad n = 1,2,\ldots, \tag{1.5} \]
where E_Q denotes expectation under Q.
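As a numeric sanity check of (1.5) (our own example, with an arbitrarily chosen h_n and g_n): for h_n(x) = x² and g_n the standard normal density, the Laplace transform has the closed form f̃_n(s) = (1 + 2s)^{−1/2} for s > −1/2, so assumption (1) below holds for any s_0 < 1/2.

```python
import numpy as np

# Sketch (ours): approximate (1.5) by a trapezoidal sum and compare with the
# closed form E e^{-s X^2} = (1 + 2s)^{-1/2} for X ~ N(0,1), s > -1/2.
def laplace_transform(s, h, g, lo=-10.0, hi=10.0, m=20_001):
    x = np.linspace(lo, hi, m)
    y = np.exp(-s * h(x)) * g(x)
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))  # trapezoidal rule

h = lambda x: x**2
g = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

s = 0.3
numeric = laplace_transform(s, h, g)
closed_form = (1 + 2 * s) ** -0.5
```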
We make the following assumptions throughout this paper.
(1) There exists s_0 ∈ (0, ∞) such that
\[ \tilde{f}_n(s) < \infty, \quad s \in [-s_0, s_0],\ n = 1,2,\ldots. \tag{1.6} \]
(2) There is a constant M > 0 satisfying
\[ \sup_n E_Q h_n(X_n) \le M < \infty. \tag{1.7} \]
In order to prove our main results, we first give a lemma that plays a central role in the proofs.
Lemma 1.3. Let f_n(x_1,…,x_n) and g_n(x_1,…,x_n) be two probability density functions, and let
\[ T_n(\omega) = \frac{g_n(X_1,\ldots,X_n)}{f_n(X_1,\ldots,X_n)}; \tag{1.8} \]
then
\[ \limsup_{n\to\infty} \frac{1}{n}\ln T_n(\omega) \le 0 \quad \text{a.s.} \tag{1.9} \]
Proof. By [8], {T_n, F_n, n ≥ 1} is a nonnegative martingale with E T_n = 1. By the Doob martingale convergence theorem, there exists an integrable random variable T_∞(ω) such that T_n → T_∞ a.s., and (1.9) follows.
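The content of Lemma 1.3 can be seen numerically in the simplest case (our own illustration, with arbitrarily chosen densities): take X_k i.i.d. N(0,1) under P, so f_n is a product of standard normal densities, and take the reference density g_n to be a product of N(1,1) densities. Then (1/n) ln T_n → −D(N(0,1)‖N(1,1)) = −1/2 < 0 a.s., consistent with (1.9).

```python
import numpy as np

# Sketch (ours): T_n = g_n / f_n evaluated on a sample drawn from f_n.
# Here ln g(x) - ln f(x) = x - 1/2, whose mean under P is -1/2.
rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)

log_T = np.sum(-0.5 * (x - 1.0) ** 2 + 0.5 * x**2)  # ln T_n
rate = log_T / n                                    # -> -0.5 < 0
```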
2. Main results
Theorem 2.1. Let {X_n, n ≥ 1}, Z_n(ω), r(ω), and f̃_n(s) be defined as before, and let assumptions (1) and (2) hold. Suppose that
\[ \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} E_Q e^{s_0 h_k(X_k)} = \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} \tilde{f}_k(-s_0) = M(s_0) < \infty. \tag{2.1} \]
Then
\[ \liminf_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} \bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] \ge -\beta(r(\omega)) \quad \text{a.s.}, \tag{2.2} \]
\[ \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} \bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] \le \beta(r(\omega)) \quad \text{a.s.}, \tag{2.3} \]
where
\[ \beta(x) = \inf\{\varphi(s,x) : -s_0 < s < 0\}, \quad x \ge 0, \tag{2.4} \]
\[ \varphi(s,x) = -\frac{2e^{-2} s M(s_0)}{(s_0 - |s|)^2} - \frac{x}{s}, \quad x \ge 0, \tag{2.5} \]
\[ 0 \le \beta(x) \le \frac{\sqrt{x}\,(1+\sqrt{x})}{s_0}\bigl[2e^{-2} M(s_0) + 1\bigr], \qquad \lim_{x\to 0^+}\beta(x) = \beta(0) = 0. \tag{2.6} \]
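The bound (2.6) can be spot-checked numerically (our own sketch; s_0 and M(s_0) are arbitrary illustrative values): evaluate β(x) = inf{φ(s, x) : −s_0 < s < 0} on a fine grid and compare it with √x(1+√x)[2e^{−2}M(s_0)+1]/s_0.

```python
import numpy as np

# Sketch (ours): grid evaluation of beta(x) from (2.4)-(2.5) against the
# upper bound in (2.6), for arbitrary illustrative s0 and M(s0).
s0, M = 0.5, 2.0

def phi(s, x):
    return -2 * np.e**-2 * s * M / (s0 - np.abs(s)) ** 2 - x / s

def beta(x, m=100_000):
    s = np.linspace(-s0 + 1e-6, -1e-6, m)  # grid over (-s0, 0)
    return phi(s, x).min()

x = 0.3
b = beta(x)
upper = np.sqrt(x) * (1 + np.sqrt(x)) / s0 * (2 * np.e**-2 * M + 1)
```

Both terms of φ(s, x) are positive for s < 0, which is why β(x) ≥ 0, and the bound comes from evaluating φ at s = −s_0√x/(1+√x).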
Remark 2.2. Let
\[ \alpha(x) = \sup\{\varphi(s,x) : 0 < s < s_0\}, \quad x \ge 0; \tag{2.7} \]
then
\[ \alpha(x) = -\beta(x). \tag{2.8} \]
Proof. Let s be an arbitrary real number in (−s_0, s_0), and let
\[ g_n(s, x_n) = e^{-s h_n(x_n)} g_n(x_n)\big/\tilde{f}_n(s), \quad x_n \in (-\infty,\infty),\ n = 1,2,\ldots; \tag{2.9} \]
then ∫_{−∞}^{∞} g_n(s, x_n) dx_n = 1. Let
\[ q_n(s; x_1,\ldots,x_n) = \prod_{k=1}^{n} g_k(s, x_k). \tag{2.10} \]
Therefore, q_n(s; x_1,…,x_n) is an n-variate probability density function. Let
\[ t_n(s,\omega) = \frac{q_n(s; X_1,\ldots,X_n)}{f_n(X_1,\ldots,X_n)}, \quad n = 1,2,\ldots. \tag{2.11} \]
By Lemma 1.3, there exists a set A(s) with P(A(s)) = 1 such that
\[ \limsup_{n\to\infty} \frac{1}{n}\ln t_n(s,\omega) \le 0, \quad \omega \in A(s). \tag{2.12} \]
By (1.3), (2.9), (2.11), and (2.12), we have
\[ \limsup_{n\to\infty} \frac{1}{n}\Biggl[-s\sum_{k=1}^{n} h_k(X_k) - \sum_{k=1}^{n}\ln\tilde{f}_k(s) - \ln Z_n(\omega)\Biggr] \le 0, \quad \omega \in A(s). \tag{2.13} \]
Therefore,
\[ r(\omega) \ge 0, \quad \omega \in A(0). \tag{2.14} \]
By (2.13) and (1.4), the property of the superior limit
\[ \limsup_{n\to\infty}(a_n - b_n) \le 0 \implies \limsup_{n\to\infty} a_n \le \limsup_{n\to\infty} b_n, \tag{2.15} \]
and the inequality ln x ≤ x − 1 (x > 0), we have
\[ \begin{aligned} \limsup_{n\to\infty} \frac{1}{n}(-s)\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] &\le \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[\ln\tilde{f}_k(s) + s E_Q h_k(X_k)\bigr] + r(\omega) \\ &\le \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[\tilde{f}_k(s) - 1 + s E_Q h_k(X_k)\bigr] + r(\omega) \\ &= \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[\tilde{f}_k(s) - 1 - (-s)E_Q h_k(X_k)\bigr] + r(\omega), \quad \omega \in A(s). \end{aligned} \tag{2.16} \]
By the inequality 0 ≤ e^x − 1 − x ≤ (1/2)x² e^{|x|}, which can be found in [9], we have
\[ 0 \le e^{-s h_k(X_k)} - 1 - (-s)h_k(X_k) \le \tfrac{1}{2}\bigl(s h_k(X_k)\bigr)^2 e^{|s h_k(X_k)|}. \tag{2.17} \]
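The elementary inequality used in (2.17) is easy to verify numerically (our own quick check over a finite grid; the inequality itself holds for all real x):

```python
import numpy as np

# Check (ours): 0 <= e^x - 1 - x <= 0.5 * x^2 * e^{|x|} on a grid.
x = np.linspace(-5.0, 5.0, 100_001)
mid = np.exp(x) - 1.0 - x
hi = 0.5 * x**2 * np.exp(np.abs(x))
ok = bool(np.all(mid >= 0.0) and np.all(mid <= hi))
```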
By (2.16) and (2.17), we have
\[ \limsup_{n\to\infty} \frac{1}{n}(-s)\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] \le \frac{1}{2}s^2 \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} E_Q\Bigl[\bigl(h_k(X_k)\bigr)^2 e^{|s| h_k(X_k)}\Bigr] + r(\omega), \quad \omega \in A(s). \tag{2.18} \]
It is easy to see that ϕ(x) = t^x x² (t > 1) attains its largest value ϕ(−2/ln t) = 4e^{−2}/(ln t)² on the interval (−∞, 0], and that ϕ(x) = t^x x² (0 < t < 1) attains its largest value ϕ(−2/ln t) = 4e^{−2}/(ln t)² on the interval [0, ∞). Hence we have
\[ \sup\Bigl\{e^{(s-s_0)h_k(X_k)}\bigl[h_k(X_k)\bigr]^2,\ k \ge 1\Bigr\} \le \frac{4e^{-2}}{(s - s_0)^2} = \frac{4e^{-2}}{(s_0 - s)^2}, \quad 0 < s < s_0, \tag{2.19} \]
\[ \sup\Bigl\{e^{(s_0+s)(-h_k(X_k))}\bigl[-h_k(X_k)\bigr]^2,\ k \ge 1\Bigr\} \le \frac{4e^{-2}}{(s_0 + s)^2}, \quad -s_0 < s < 0. \tag{2.20} \]
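The elementary maximization behind (2.19) and (2.20) can be spot-checked (our own example with an arbitrary 0 < t < 1): on [0, ∞) the function t^x x² is maximized at x = −2/ln t, with maximum value 4e^{−2}/(ln t)².

```python
import math

# Check (ours): grid supremum of t^x * x^2 on [0, 50] against the closed
# form 4 e^{-2} / (ln t)^2, for t = 0.4 (maximizer at -2/ln t ~ 2.18).
t = 0.4
sup_grid = max(t ** (i * 1e-3) * (i * 1e-3) ** 2 for i in range(50_001))
sup_theory = 4 * math.exp(-2) / math.log(t) ** 2
```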
Letting 0 < s < s_0 in (2.18), by (2.19) and (2.1) we obtain
\[ \begin{aligned} (-s)\liminf_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] &\le \frac{1}{2}s^2 \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} E_Q\Bigl[\bigl(h_k(X_k)\bigr)^2 e^{|s| h_k(X_k)}\Bigr] + r(\omega) \\ &= \frac{1}{2}s^2 \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} E_Q\Bigl[e^{s_0 h_k(X_k)} e^{(s-s_0)h_k(X_k)}\bigl(h_k(X_k)\bigr)^2\Bigr] + r(\omega) \\ &\le \frac{2e^{-2} s^2 M(s_0)}{(s_0 - s)^2} + r(\omega), \quad \omega \in A(s). \end{aligned} \tag{2.21} \]
Dividing both sides of (2.21) by −s, we obtain
\[ \liminf_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] \ge -\frac{2e^{-2} s M(s_0)}{(s_0 - s)^2} - \frac{r(\omega)}{s} \mathrel{\hat{=}} \varphi(s, r(\omega)), \quad 0 < s < s_0,\ \omega \in A(s) \cap A(0). \tag{2.22} \]
By (2.14) and 0 < s < s_0, obviously φ(s, r(ω)) ≤ 0, hence α(r(ω)) ≤ 0. Let Q_+ be the set of rational numbers in the interval (0, s_0), and let A* = ∩_{s∈Q_+} A(s); then P(A*) = 1. By (2.22), we have
\[ \liminf_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] \ge \varphi(s, r(\omega)), \quad \omega \in A^* \cap A(0),\ \forall s \in Q_+. \tag{2.23} \]
It is easy to see that φ(s, x) is a continuous function of s on the interval (0, s_0). For each ω ∈ A* ∩ A(0) (0 ≤ r(ω) < ∞), take s_n(ω) ∈ Q_+, n = 1, 2, …, such that
\[ \lim_{n\to\infty} \varphi\bigl(s_n(\omega), r(\omega)\bigr) = \alpha\bigl(r(\omega)\bigr). \tag{2.24} \]
By (2.23), (2.24), and (2.8), we have
\[ \liminf_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] \ge -\beta(r(\omega)), \quad \omega \in A^* \cap A(0). \tag{2.25} \]
Since P(A* ∩ A(0)) = 1, (2.2) follows from (2.25). Letting −s_0 < s < 0 in (2.18), by (2.20) and (2.1) we have
\[ \begin{aligned} \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] &\le -\frac{1}{2}s \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} E_Q\Bigl[\bigl(h_k(X_k)\bigr)^2 e^{|s| h_k(X_k)}\Bigr] - \frac{r(\omega)}{s} \\ &= -\frac{1}{2}s \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} E_Q\Bigl[e^{s_0 h_k(X_k)} e^{(s_0+s)(-h_k(X_k))}\bigl(-h_k(X_k)\bigr)^2\Bigr] - \frac{r(\omega)}{s} \\ &\le -\frac{2e^{-2} s M(s_0)}{(s_0 + s)^2} - \frac{r(\omega)}{s} \mathrel{\hat{=}} \varphi(s, r(\omega)), \quad -s_0 < s < 0,\ \omega \in A(s) \cap A(0). \end{aligned} \tag{2.26} \]
By (2.14) and −s_0 < s < 0, obviously φ(s, r(ω)) ≥ 0, hence β(r(ω)) ≥ 0. Let Q_− be the set of rational numbers in the interval (−s_0, 0), and let A* = ∩_{s∈Q_−} A(s); then P(A*) = 1. By (2.26), we have
\[ \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] \le \varphi(s, r(\omega)), \quad \omega \in A^* \cap A(0),\ \forall s \in Q_-. \tag{2.27} \]
It is clear that φ(s, x) is a continuous function of s on the interval (−s_0, 0). For each ω ∈ A* ∩ A(0) (0 ≤ r(ω) < ∞), take λ_n(ω) ∈ Q_−, n = 1, 2, …, such that
\[ \lim_{n\to\infty} \varphi\bigl(\lambda_n(\omega), r(\omega)\bigr) = \beta\bigl(r(\omega)\bigr). \tag{2.28} \]
By (2.27) and (2.28), we have
\[ \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[h_k(X_k) - E_Q h_k(X_k)\bigr] \le \beta(r(\omega)), \quad \omega \in A^* \cap A(0). \tag{2.29} \]
Since P(A* ∩ A(0)) = 1, (2.3) follows from (2.29). By (2.4), (2.5), and (2.14), if x > 0 we have
\[ 0 \le \beta(x) \le \varphi\Bigl(-\frac{s_0\sqrt{x}}{1+\sqrt{x}},\ x\Bigr) = \frac{\sqrt{x}\,(1+\sqrt{x})}{s_0}\bigl[2e^{-2} M(s_0) + 1\bigr]. \tag{2.30} \]
If x = 0, we have
\[ \beta(0) \le \varphi(-n^{-1}, 0) = \frac{2e^{-2} M(s_0)}{n(s_0 - n^{-1})^2}, \quad n \ge 1. \tag{2.31} \]
Noting that β(x) ≥ 0 (x ≥ 0), (2.6) follows from (2.30) and (2.31).
Corollary 2.3. If P = Q, or {X_n, n ≥ 1} is a sequence of independent random variables, then under assumptions (1) and (2),
\[ \lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[h_k(X_k) - E h_k(X_k)\bigr] = 0 \quad \text{a.s.} \tag{2.32} \]
Proof. In this case, f_n(x_1,…,x_n) = ∏_{k=1}^{n} g_k(x_k) and r(ω) = 0 a.s. Hence (2.32) follows directly from (2.2) and (2.3).
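Corollary 2.3 reduces to a strong law of large numbers in the independent case, and can be illustrated by simulation (our own example; the Exponential(1) source and h_k(x) = x are arbitrary choices for which assumptions (1) and (2) hold with any s_0 < 1):

```python
import numpy as np

# Sketch (ours): X_k i.i.d. Exponential(1), h_k(x) = x, E h_k(X_k) = 1;
# the averaged deviations in (2.32) tend to 0 a.s.
rng = np.random.default_rng(2)
n = 500_000
x = rng.exponential(1.0, size=n)
avg_dev = np.mean(x - 1.0)   # (1/n) sum [h_k(X_k) - E h_k(X_k)]
```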
3. An extension of the Shannon-McMillan theorem
For a better understanding, we first introduce some definitions from information theory in this section.
Let {X_n, n ≥ 1} be a sequence produced by an arbitrary continuous information source on the probability space (Ω, F, P) with the joint density function
\[ f_n(x_1,\ldots,x_n) > 0, \quad x_k \in (-\infty,\infty),\ 1 \le k \le n,\ n = 1,2,\ldots. \tag{3.1} \]
For the sake of brevity, we write f_n > 0, and X_k stands for X_k(ω). Let
\[ p_n(\omega) = -\frac{1}{n}\ln f_n(X_1,\ldots,X_n), \tag{3.2} \]
where ω is the sample point; p_n(ω) is called the sample entropy or the entropy density of {X_k, 1 ≤ k ≤ n}. Also let Q be another probability measure on F with the density function
\[ q_n(x_1,\ldots,x_n) > 0, \quad x_k \in (-\infty,\infty),\ 1 \le k \le n,\ n = 1,2,\ldots. \tag{3.3} \]
Let
\[ \begin{aligned} L_n(\omega) &= \ln\bigl[f_n(X_1,\ldots,X_n)/q_n(X_1,\ldots,X_n)\bigr], \\ L(\omega) &= \limsup_{n\to\infty} \frac{1}{n}L_n(\omega) = -\liminf_{n\to\infty} \frac{1}{n}\ln\bigl[q_n(X_1,\ldots,X_n)/f_n(X_1,\ldots,X_n)\bigr], \\ D(f_n\|q_n) &= E_P L_n = E_P \ln\bigl[f_n(X_1,\ldots,X_n)/q_n(X_1,\ldots,X_n)\bigr]. \end{aligned} \tag{3.4} \]
L_n(ω), L(ω), and D(f_n‖q_n) are called the sample relative entropy, the sample relative entropy rate, and the relative entropy, respectively, relative to the reference density function q_n(x_1,…,x_n). Indeed, they are all measures of the deviation between the true joint density function f_n(x_1,…,x_n) and the reference density function q_n(x_1,…,x_n) (cf. [10, pages 12, 18]).
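As a small numeric check of the relative entropy in (3.4) (our own univariate example): for f = N(0,1) and q = N(μ,1), the closed form is D(f‖q) = μ²/2.

```python
import numpy as np

# Sketch (ours): D(f||q) = integral of f * ln(f/q), computed on a grid for
# f = N(0,1), q = N(mu,1); the closed form is mu^2 / 2.
mu = 0.7
x = np.linspace(-12.0, 12.0, 40_001)
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
q = np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)
dx = x[1] - x[0]
D = dx * np.sum(f * np.log(f / q))
```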
A question of importance in information theory is the study of the limit properties of the entropy density p_n(ω). Since Shannon's initial work was published (cf. [11]), there has been a great deal of investigation of this question (e.g., cf. [12–20]). In this paper, a class of small deviation theorems (i.e., strong limit theorems represented by inequalities) is established by using the analytic technique, and an extension of the Shannon-McMillan theorem to arbitrarily dependent continuous information sources is given. In particular, an approach applying the tool of the Laplace transform to the study of strong deviation theorems on the differential entropy is proposed.
Let h_k(x_k) = −ln g_k(x_k) (1 ≤ k ≤ n, n = 1, 2, …) in (1.5); then we have the following definitions.
Definition 3.1. The Laplace transform of −ln g_k(X_k) is defined by
\[ \tilde{f}_k(s) = E_Q e^{-s(-\ln g_k(X_k))} = \int_{-\infty}^{\infty} e^{-s(-\ln g_k(x_k))} g_k(x_k)\,dx_k. \tag{3.5} \]
Definition 3.2. The differential entropy of the continuous random variable X_k is defined by
\[ h(X_k) = E\bigl[-\ln g_k(X_k)\bigr] = -\int_{-\infty}^{\infty} g_k(x_k)\ln g_k(x_k)\,dx_k. \tag{3.6} \]
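For a quick numeric check of Definition 3.2 (our own example): a Gaussian marginal g_k = N(0, σ²) has the closed-form differential entropy h(X_k) = (1/2) ln(2πeσ²).

```python
import numpy as np

# Sketch (ours): compute (3.6) on a grid for g_k = N(0, sigma^2) and compare
# with the closed form 0.5 * ln(2*pi*e*sigma^2).
sigma = 1.5
x = np.linspace(-15.0, 15.0, 60_001)
g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
dx = x[1] - x[0]
h_numeric = -dx * np.sum(g * np.log(g))
h_closed = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
```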
In the following theorem, let {X_n, n ≥ 1} be independent random variables with respect to Q, so that the reference density function is q_n(x_1,…,x_n) = ∏_{k=1}^{n} g_k(x_k), and let h_k(X_k) = −ln g_k(X_k) (1 ≤ k ≤ n) in Theorem 2.1.
Theorem 3.3. Let {X_n, n ≥ 1}, L_n(ω), L(ω), and f̃_n(s) be given as above, and let assumptions (1) and (2) hold. Suppose that
\[ \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} E_Q e^{s_0(-\ln g_k(X_k))} = \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} \tilde{f}_k(-s_0) = M(s_0) < \infty. \tag{3.7} \]
Then
\[ \begin{aligned} \liminf_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[-\ln g_k(X_k) - h(X_k)\bigr] &\ge -\beta(L(\omega)) \quad \text{a.s.}, \\ \limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n}\bigl[-\ln g_k(X_k) - h(X_k)\bigr] &\le \beta(L(\omega)) \quad \text{a.s.}, \end{aligned} \tag{3.8} \]
where
\[ \beta(x) = \inf\{\varphi(s,x) : -s_0 < s < 0\}, \quad x \ge 0, \tag{3.9} \]
\[ \varphi(s,x) = -\frac{2e^{-2} s M(s_0)}{(s_0 - |s|)^2} - \frac{x}{s}, \quad x \ge 0, \tag{3.10} \]
\[ 0 \le \beta(x) \le \frac{\sqrt{x}\,(1+\sqrt{x})}{s_0}\bigl[2e^{-2} M(s_0) + 1\bigr], \qquad \lim_{x\to 0^+}\beta(x) = \beta(0) = 0. \tag{3.11} \]
Remark 3.4. Let
\[ \alpha(x) = \sup\{\varphi(s,x) : 0 < s < s_0\}, \quad x \ge 0; \tag{3.12} \]
then
\[ \alpha(x) = -\beta(x). \tag{3.13} \]
Corollary 3.5. Let p_n(ω) be defined by (3.2). Under the conditions of Theorem 3.3,
\[ \begin{aligned} \liminf_{n\to\infty}\Bigl[p_n(\omega) - \frac{1}{n}h(X_1,\ldots,X_n)\Bigr] &\ge \alpha(L(\omega)) - L(\omega) + H_* \quad \text{a.s.}, \\ \limsup_{n\to\infty}\Bigl[p_n(\omega) - \frac{1}{n}h(X_1,\ldots,X_n)\Bigr] &\le \beta(L(\omega)) + H^* \quad \text{a.s.}, \end{aligned} \tag{3.14} \]
where h(X_1,…,X_n) = E[−ln f_n(X_1,…,X_n)] is the differential entropy of (X_1,…,X_n), and
\[ \begin{aligned} H_* &= \liminf_{n\to\infty} \frac{1}{n}\Biggl[\sum_{k=1}^{n} h(X_k) - h(X_1,\ldots,X_n)\Biggr] = \liminf_{n\to\infty} \frac{1}{n}D(f_n\|q_n), \\ H^* &= \limsup_{n\to\infty} \frac{1}{n}\Biggl[\sum_{k=1}^{n} h(X_k) - h(X_1,\ldots,X_n)\Biggr] = \limsup_{n\to\infty} \frac{1}{n}D(f_n\|q_n), \end{aligned} \tag{3.15} \]
where α(L(ω)) and β(L(ω)) are given by (3.9)–(3.13).
Corollary 3.6. If P = Q, or {X_n, n ≥ 1} are independent random variables, and there exists s_0 > 0 such that (2.1) holds, then
\[ \lim_{n\to\infty}\Bigl[p_n(\omega) - \frac{1}{n}h(X_1,\ldots,X_n)\Bigr] = 0 \quad \text{a.s.} \tag{3.16} \]
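Corollary 3.6 is the asymptotic equipartition property for a continuous source in the independent case, and is easy to see by simulation (our own example, for an i.i.d. N(0,1) source whose per-symbol differential entropy is (1/2) ln(2πe) ≈ 1.4189):

```python
import numpy as np

# Sketch (ours): for i.i.d. N(0,1), the entropy density p_n(omega) of (3.2)
# converges a.s. to the per-symbol differential entropy 0.5 * ln(2*pi*e).
rng = np.random.default_rng(3)
n = 500_000
x = rng.standard_normal(n)
p_n = np.mean(0.5 * np.log(2 * np.pi) + x**2 / 2)  # -(1/n) ln f_n
h = 0.5 * np.log(2 * np.pi * np.e)
```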
Acknowledgments
This research is supported by the National Natural Science Foundation of China (Grants nos. 10671052 and 10571008), the Natural Science Foundation of Beijing (Grant no. 1072004), the Funding Project for Academic Human Resources Development in Institutions of Higher Learning Under the Jurisdiction of Beijing Municipality, the Basic Research and Frontier Technology Foundation of Henan (Grant no. 072300410090), and the Natural Science Research Project of Henan (Grant no. 2008B110009). The authors would like to thank the editor and the referees for helpful comments, which helped to improve an earlier version of the paper.
References
[1] R. G. Laha and V. K. Rohatgi, Probability Theory, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 1979.
[2] P. Billingsley, Probability and Measure, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 2nd edition, 1986.
[3] W. Liu, "Relative entropy densities and a class of limit theorems of the sequence of m-valued random variables," The Annals of Probability, vol. 18, no. 2, pp. 829–839, 1990.
[4] W. Liu, "A class of strong deviation theorems and Laplace transform methods," Chinese Science Bulletin, vol. 43, no. 10, pp. 1036–1041, 1998.
[5] W. Liu and Y. Wang, "A strong limit theorem expressed by inequalities for the sequences of absolutely continuous random variables," Hiroshima Mathematical Journal, vol. 32, no. 3, pp. 379–387, 2002.
[6] W. Yang, "Some limit properties for Markov chains indexed by a homogeneous tree," Statistics & Probability Letters, vol. 65, no. 3, pp. 241–250, 2003.
[7] W. Liu, Strong Deviation Theorems and Analytic Method, Science Press, Beijing, China, 2003.
[8] J. L. Doob, Stochastic Processes, John Wiley & Sons, New York, NY, USA, 1953.
[9] W. Liu and J. Wang, "A strong limit theorem on gambling systems," Journal of Multivariate Analysis, vol. 84, no. 2, pp. 262–273, 2003.
[10] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley Series in Telecommunications, John Wiley & Sons, New York, NY, USA, 1991.
[11] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, 1948.
[12] P. H. Algoet and T. M. Cover, "A sandwich proof of the Shannon-McMillan-Breiman theorem," The Annals of Probability, vol. 16, no. 2, pp. 899–909, 1988.
[13] A. R. Barron, "The strong ergodic theorem for densities: generalized Shannon-McMillan-Breiman theorem," The Annals of Probability, vol. 13, no. 4, pp. 1292–1303, 1985.
[14] K. L. Chung, "A note on the ergodic theorem of information theory," The Annals of Mathematical Statistics, vol. 32, no. 2, pp. 612–614, 1961.
[15] J. C. Kieffer, "A simple proof of the Moy-Perez generalization of the Shannon-McMillan theorem," Pacific Journal of Mathematics, vol. 51, pp. 203–206, 1974.
[16] J. C. Kieffer, "A counterexample to Perez's generalization of the Shannon-McMillan theorem," The Annals of Probability, vol. 1, no. 2, pp. 362–364, 1973.
[17] B. McMillan, "The basic theorems of information theory," The Annals of Mathematical Statistics, vol. 24, no. 2, pp. 196–219, 1953.
[18] W. Liu and W. Yang, "An extension of Shannon-McMillan theorem and some limit properties for nonhomogeneous Markov chains," Stochastic Processes and Their Applications, vol. 61, no. 1, pp. 129–145, 1996.
[19] W. Liu and W. Yang, "The Markov approximation of the sequences of N-valued random variables and a class of small deviation theorems," Stochastic Processes and Their Applications, vol. 89, no. 1, pp. 117–130, 2000.
[20] R. M. Gray, Entropy and Information Theory, Springer, New York, NY, USA, 1990.