Note on neural network sampling for Bayesian inference of mixture processes
In this paper we present further experiments with neural network sampling, a class of sampling methods that use neural network approximations to (posterior) densities, introduced by Hoogerheide et al. (2007). We consider a method in which a mixture of Student's t densities, which can be interpreted as a neural network function, serves as the candidate density in importance sampling or the Metropolis-Hastings algorithm. The method is applied to an illustrative 2-regime mixture model for the US real GNP growth rate. We explain the non-elliptical shapes of the posterior distribution and show that the proposed method outperforms Gibbs sampling with data augmentation and the griddy Gibbs sampler.
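As an illustration of the idea sketched in the abstract, the following minimal Python sketch uses a two-component mixture of Student's t densities as the candidate density in importance sampling for a hypothetical bimodal (non-elliptical) one-dimensional target. The target kernel, mixture weights, locations, scales and degrees of freedom are illustrative assumptions, not the paper's actual model or code; the fat tails of the Student's t components are what make such mixtures attractive candidates.

```python
# Minimal sketch: importance sampling with a mixture-of-Student's-t candidate.
# All numbers below (target kernel, weights, locations, scales, df) are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def log_target(x):
    # Hypothetical bimodal (non-elliptical) 1-D posterior kernel (unnormalized).
    return np.logaddexp(stats.norm.logpdf(x, -2.0, 0.7),
                        stats.norm.logpdf(x, 2.5, 1.2))

# Candidate: two-component mixture of Student's t densities.
weights = np.array([0.5, 0.5])   # mixture weights
locs    = np.array([-2.0, 2.5])  # component locations
scales  = np.array([1.0, 1.5])   # component scales
df      = 5                      # degrees of freedom (fat tails)

def sample_candidate(n):
    comp = rng.choice(len(weights), size=n, p=weights)
    return stats.t.rvs(df, loc=locs[comp], scale=scales[comp], random_state=rng)

def log_candidate(x):
    lp = stats.t.logpdf(x[:, None], df, loc=locs, scale=scales) + np.log(weights)
    return np.logaddexp.reduce(lp, axis=1)

n = 20_000
draws = sample_candidate(n)
log_w = log_target(draws) - log_candidate(draws)   # log importance weights
w = np.exp(log_w - log_w.max())
w /= w.sum()                                       # self-normalized weights

post_mean = np.sum(w * draws)
ess = 1.0 / np.sum(w**2)                           # effective sample size diagnostic
print(f"IS posterior mean approx. {post_mean:.3f}, ESS approx. {ess:.0f}")
```

The same mixture density could equally serve as the independence proposal in a Metropolis-Hastings chain; a low effective sample size here would signal a poor fit between candidate and posterior.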
Year of publication: 2007-04-30
Authors: van Dijk, Herman K.; Hoogerheide, L.F.
Institutions: Faculteit der Economische Wetenschappen, Erasmus Universiteit Rotterdam
Availability: freely available
Extent: application/pdf
Series: Econometric Institute Research Papers. ISSN 1566-7294.
Type of publication: Book / Working Paper
Notes: Part of the series RePEc:ems:eureir, number EI 2007-15
Persistent link: https://www.econbiz.de/10010731728
Similar items by person
- van Dijk, Herman K. (2005)
- Comparison of the Anderson-Rubin test for overidentification and the Johansen test for cointegration / van Dijk, Herman K. (2001)
- Simulation based Bayesian econometric inference: principles and some recent computational advances / van Dijk, Herman K. (2007)