Bayesian Representation of Stochastic Processes under Learning: de Finetti Revisited
A probability distribution governing the evolution of a stochastic process has infinitely many Bayesian representations of the form $\mu = \int_{\Theta} \mu_{\theta}\, d\lambda(\theta)$. Among these, a natural representation is one whose components (the $\mu_{\theta}$'s) are 'learnable' (one can approximate $\mu_{\theta}$ by conditioning $\mu$ on observation of the process) and 'sufficient for prediction' ($\mu_{\theta}$'s predictions are not aided by conditioning on observation of the process). The authors show the existence and uniqueness of such a representation under a suitable asymptotic mixing condition on the process. This representation can be obtained by conditioning on the tail-field of the process, and any learnable representation that is sufficient for prediction is asymptotically like the tail-field representation. This result is related to the celebrated de Finetti theorem, but with exchangeability weakened to an asymptotic mixing condition, and with his conclusion of a decomposition into i.i.d. component distributions weakened to components that are learnable and sufficient for prediction.
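A rough formal sketch of the representation and the two properties, in notation assumed here rather than taken from the paper ($h_t$ denotes the observed history through time $t$, and $A$ ranges over events concerning the process after time $t$); the authors' exact definitions may differ in detail:

% Bayesian representation: the process law \mu is a mixture of the components \mu_\theta.
\[
  \mu \;=\; \int_{\Theta} \mu_{\theta}\, d\lambda(\theta).
\]
% Learnable (sketch): for \lambda-almost every \theta, conditioning \mu on a
% \mu_\theta-typical history recovers \mu_\theta's predictions in the limit.
\[
  \sup_{A}\,\bigl|\mu(A \mid h_t) - \mu_{\theta}(A \mid h_t)\bigr|
  \;\longrightarrow\; 0
  \qquad \mu_{\theta}\text{-a.s. as } t \to \infty.
\]
% Sufficient for prediction (sketch): conditioning \mu_\theta on the observed
% history does not improve its predictions of future events.
\[
  \sup_{A}\,\bigl|\mu_{\theta}(A \mid h_t) - \mu_{\theta}(A)\bigr|
  \;\longrightarrow\; 0
  \qquad \mu_{\theta}\text{-a.s. as } t \to \infty.
\]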
Year of publication: 1999
Authors: Jackson, Matthew O.; Kalai, Ehud; Smorodinsky, Rann
Published in: Econometrica. - Econometric Society. - Vol. 67.1999, 4, p. 875-894
Publisher: Econometric Society
Similar items by person
-
Patterns, Types, and Bayesian Learning
Jackson, Matthew O., (1997)
-
Bayesian Representation of Stochastic Processes under Learning: de Finetti Revisited
Jackson, Matthew O., (1998)
-
Bayesian Representation of Stochastic Processes under Learning: de Finetti Revisited
Jackson, Matthew O., (1999)