Convergence of Adaptive Sampling Schemes
In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given kernel can clarify how adequate it is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is quite complex and most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for a wide class of population Monte Carlo algorithms and show that Rao-Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions simply do not benefit from repeated updating.

Keywords: Adaptivity, Bayesian Statistics, CLT, importance sampling, Kullback divergence, LLN, MCMC algorithm, population Monte Carlo, Rao-Blackwellization.
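To make the abstract's setting concrete, the following is a minimal sketch of one population Monte Carlo iteration with a Rao-Blackwellized mixture proposal. It is an illustrative toy, not the paper's exact algorithm: the standard-normal target, the Gaussian kernel scales, and the names `pmc_step`, `log_target`, and `alpha` are all assumptions made for this example. The key point it shows is the Rao-Blackwellized weight, which divides by the full mixture density rather than by the density of the kernel actually used, and the adaptive update of the mixture weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy target: a standard normal, standing in for a posterior density.
    return -0.5 * x**2

def pmc_step(alpha, scales, n):
    """One illustrative population Monte Carlo iteration.

    alpha  : current mixture weights over the proposal kernels
    scales : standard deviations of the Gaussian proposal kernels
    n      : population size
    Returns samples, normalized importance weights, and updated alpha.
    """
    # Draw each particle's kernel index, then the particle itself.
    k = rng.choice(len(scales), size=n, p=alpha)
    x = rng.normal(0.0, scales[k])
    # Density of every kernel at every particle, shape (n, D).
    comp = np.exp(-0.5 * (x[:, None] / scales) ** 2) / (scales * np.sqrt(2 * np.pi))
    # Rao-Blackwellized weight: divide by the full mixture density,
    # not just the density of the kernel that generated the particle.
    q = comp @ alpha
    w = np.exp(log_target(x)) / q
    w /= w.sum()
    # Adapt: the new weight of kernel d is the importance mass carried by
    # particles attributed to d (posterior kernel-membership probabilities).
    resp = comp * alpha / q[:, None]
    alpha_new = (w[:, None] * resp).sum(axis=0)
    return x, w, alpha_new

scales = np.array([0.1, 1.0, 10.0])
alpha = np.ones(3) / 3
for _ in range(10):
    x, w, alpha = pmc_step(alpha, scales, 2000)
```

Over the iterations, `alpha` concentrates on the kernel whose scale best matches the target (here 1.0), mirroring the abstract's claim that Rao-Blackwellized versions improve under repeated updating in a Kullback divergence sense.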
Year of publication: 2004
Authors: Douc, Randal; Guillin, Arnaud; Marin, Jean-Michel; Robert, Christian P.
Institutions: Centre de Recherche en Économie et Statistique (CREST), Groupe des Écoles Nationales d'Économie et Statistique (GENES)
freely available
Similar items by person
- Minimum Variance Importance Sampling via Population Monte Carlo (Douc, Randal, 2005)
- Convergence of adaptive sampling schemes (Douc, Randal, 2004)
- Minimum variance importance sampling via population Monte Carlo (Douc, Randal, 2005)