A limit theorem for the entropy density of nonhomogeneous Markov information source
Let $\{X_n, n \ge 0\}$ be a sequence of successive letters produced by a nonhomogeneous Markov information source with alphabet $S = \{1, 2, \ldots, m\}$ and probability distribution $p(x_0)\prod_{k=1}^{n} p_k(x_{k-1}, x_k)$, where $p_k(i,j)$ is the transition probability $P(X_k = j \mid X_{k-1} = i)$. Let $f_n(\omega)$ be the relative entropy density of $X_k$, $0 \le k \le n$. In this paper we prove that for an arbitrary nonhomogeneous Markov information source, $f_n(\omega)$ and $\frac{1}{n}\sum_{k=1}^{n} H[p_k(X_{k-1}, 1), \ldots, p_k(X_{k-1}, m)]$ are asymptotically equal almost everywhere as $n \to \infty$, where $H(p_1, \ldots, p_m)$ is the entropy of the distribution $(p_1, \ldots, p_m)$.
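The abstract states the theorem but not the explicit form of $f_n(\omega)$; the sketch below assumes the usual definition $f_n(\omega) = -\frac{1}{n}\log\big[p(X_0)\prod_{k=1}^{n} p_k(X_{k-1}, X_k)\big]$ and only illustrates the claimed asymptotic equality numerically. The alphabet size, chain length, and the slowly varying transition family `transition_matrix` are hypothetical choices made for this illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 3          # alphabet size (hypothetical)
n = 200_000    # chain length (hypothetical)

def transition_matrix(k):
    """A hypothetical nonhomogeneous family of row-stochastic matrices P_k,
    chosen only to illustrate the theorem; any positive row-stochastic choice works."""
    base = np.full((m, m), 1.0 / m)
    perturb = 0.3 * np.sin(k / 50.0)                 # slowly varying inhomogeneity
    P = base + perturb * (np.eye(m) - 1.0 / m)
    return P / P.sum(axis=1, keepdims=True)

# Simulate X_0, ..., X_n with a uniform initial distribution p(x_0) = 1/m.
x = np.empty(n + 1, dtype=int)
x[0] = rng.integers(m)
log_path_prob = np.log(1.0 / m)
entropy_sum = 0.0
for k in range(1, n + 1):
    P = transition_matrix(k)
    row = P[x[k - 1]]
    x[k] = rng.choice(m, p=row)
    log_path_prob += np.log(row[x[k]])
    # H[p_k(X_{k-1}, 1), ..., p_k(X_{k-1}, m)]: entropy of the current row
    entropy_sum += -np.sum(row * np.log(row))

f_n = -log_path_prob / n          # relative entropy density f_n(omega)
cesaro_entropy = entropy_sum / n  # (1/n) * sum_k H[p_k(X_{k-1}, .)]

print(f_n, cesaro_entropy)
```

For large $n$ the two printed values agree up to sampling fluctuation, which is the almost-everywhere equivalence the paper establishes.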
| Year of publication: | 1995 |
|---|---|
| Authors: | Wen, Liu; Weiguo, Yang |
| Published in: | Statistics & Probability Letters. - Elsevier, ISSN 0167-7152. - Vol. 22.1995, 4, p. 295-301 |
| Publisher: | Elsevier |
| Keywords: | Nonhomogeneous Markov information source; Relative entropy density; a.e. convergence |
Similar items by person
- An extension of Shannon-McMillan theorem and some limit properties for nonhomogeneous Markov chains. Wen, Liu, (1996)
- Guan, Yanjun, (2016)
- Weiguo, Yang, (2015)