Persistent link: https://www.econbiz.de/10009691169
Andrieu et al. (2010) prove that Markov chain Monte Carlo samplers still converge to the correct posterior distribution of the model parameters when an unbiased particle-filter estimate of the likelihood (computed with a finite number of particles) is used in place of the exact likelihood. A critical issue for...
Persistent link: https://www.econbiz.de/10012870345
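The pseudo-marginal idea in the abstract above can be illustrated on a toy model. The sketch below is not the paper's particle-filter setup: it uses a simple latent-variable model where an unbiased Monte Carlo likelihood estimate is available in a few lines, plugs that estimate into a Metropolis-Hastings ratio, and (crucially) recycles the current estimate rather than refreshing it. All names and tuning values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def lik_hat(theta, y, n_particles=32):
    # Unbiased Monte Carlo estimate of p(y | theta) for the toy model
    # y = theta + z + e, z ~ N(0,1), e ~ N(0,1), so y | theta ~ N(theta, 2).
    z = rng.standard_normal(n_particles)
    return normal_pdf(y, theta + z, 1.0).mean()

def pseudo_marginal_mh(y, n_iter=20_000, step=1.0):
    theta = 0.0
    l_hat = lik_hat(theta, y)
    draws = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal()
        l_prop = lik_hat(prop, y)
        # Estimated likelihood replaces the exact one in the MH ratio
        # (flat prior); the current estimate l_hat is NOT recomputed.
        if rng.random() < l_prop / l_hat:
            theta, l_hat = prop, l_prop
        draws[t] = theta
    return draws

y_obs = 1.5
draws = pseudo_marginal_mh(y_obs)
post_mean = draws[5_000:].mean()   # exact posterior here is N(y_obs, 2)
```

Recycling `l_hat` is what makes the chain target the exact posterior despite the noisy likelihood; re-estimating it at every iteration would break that property.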
Persistent link: https://www.econbiz.de/10000861202
A general model is proposed for flexibly estimating the density of a continuous response variable conditional on a possibly high-dimensional set of covariates. The model is a finite mixture of asymmetric Student-t densities with covariate-dependent mixture weights. The four parameters of the...
Persistent link: https://www.econbiz.de/10003896094
Bayesian inference for DSGE models is typically carried out by single block random walk Metropolis, involving very high computing costs. This paper combines two features, adaptive independent Metropolis-Hastings and parallelisation, to achieve large computational gains in DSGE model estimation....
Persistent link: https://www.econbiz.de/10003932659
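The first ingredient named in the abstract above, adaptive independent Metropolis-Hastings, can be sketched on a one-dimensional stand-in posterior (a real DSGE likelihood would replace `log_target`). The proposal is a Gaussian refitted periodically to the chain's history; because proposals do not depend on the current state, likelihood evaluations for many candidate draws could be farmed out in parallel, which is the second ingredient. This is a minimal illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(theta):
    # Stand-in posterior: N(2, 0.5^2). A DSGE likelihood + prior goes here.
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def adaptive_imh(n_iter=20_000, adapt_every=500):
    mu, sd = 0.0, 2.0                # deliberately poor initial proposal
    theta = mu
    draws = np.empty(n_iter)
    for t in range(n_iter):
        prop = mu + sd * rng.standard_normal()
        # Independence MH: target ratio times reversed proposal ratio
        log_q_prop = -0.5 * ((prop - mu) / sd) ** 2
        log_q_curr = -0.5 * ((theta - mu) / sd) ** 2
        log_alpha = (log_target(prop) - log_target(theta)
                     + log_q_curr - log_q_prop)
        if np.log(rng.random()) < log_alpha:
            theta = prop
        draws[t] = theta
        # Periodically refit the proposal to the chain so far
        if (t + 1) % adapt_every == 0:
            mu = draws[: t + 1].mean()
            sd = draws[: t + 1].std() + 0.1   # inflate to keep tails covered
    return draws

draws = adaptive_imh()
```

As the fitted proposal approaches the target, the acceptance rate rises and the draws become nearly independent, which is what makes this scheme attractive for expensive likelihoods.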
Smooth mixtures, i.e. mixture models with covariate-dependent mixing weights, are very useful flexible models for conditional densities. Previous work shows that using overly simple mixture components for modeling heteroscedastic and/or heavy tailed data can give a poor fit, even with a large...
Persistent link: https://www.econbiz.de/10008696841
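The smooth-mixture construction in the two abstracts above amounts to a conditional density whose component weights vary with the covariate through a multinomial logit. A minimal sketch with two Gaussian components (the papers use richer asymmetric Student-t components, and all parameter values below are made up for illustration):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def smooth_mixture_pdf(y, x, gammas, betas, sigmas):
    # Mixing weights depend on the covariate x via a multinomial logit
    w = softmax(gammas * x)
    comps = (np.exp(-0.5 * ((y - betas * x) / sigmas) ** 2)
             / (sigmas * np.sqrt(2 * np.pi)))
    return float(w @ comps)

# Two Gaussian components with covariate-dependent means and weights
gammas = np.array([0.0, 1.5])
betas  = np.array([1.0, -0.5])
sigmas = np.array([0.5, 2.0])

# Numerically check that p(y | x = 1) integrates to one
grid = np.linspace(-15.0, 15.0, 3001)
dens = np.array([smooth_mixture_pdf(v, 1.0, gammas, betas, sigmas)
                 for v in grid])
area = dens.sum() * (grid[1] - grid[0])
```

Because the weights shift with `x`, the model can switch smoothly between a tight and a diffuse component across the covariate space, which is what lets simple components capture heteroscedasticity.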
Persistent link: https://www.econbiz.de/10010470006
The computing time for Markov Chain Monte Carlo (MCMC) algorithms can be prohibitively large for datasets with many observations, especially when the data density for each observation is costly to evaluate. We propose a framework where the likelihood function is estimated from a random subset of...
Persistent link: https://www.econbiz.de/10010500806
We propose a generic Markov Chain Monte Carlo (MCMC) algorithm to speed up computations for datasets with many observations. A key feature of our approach is the use of the highly efficient difference estimator from the survey sampling literature to estimate the log-likelihood accurately using...
Persistent link: https://www.econbiz.de/10011300365
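The difference estimator mentioned in the abstract above replaces the full-data log-likelihood sum with the full sum of a cheap per-observation surrogate plus a subsample correction for the surrogate's error. The toy sketch below uses a deliberately crude surrogate (the same term at a fixed reference parameter) just to show the mechanism; the papers use much tighter control variates, and the model and numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data and model: y_i ~ N(theta, 1). The log-likelihood is a sum of
# n per-observation terms, which is costly when n is large.
n, theta = 100_000, 0.3
y = rng.standard_normal(n) + theta

def log_dens(yi, th):
    return -0.5 * (yi - th) ** 2 - 0.5 * np.log(2 * np.pi)

# Cheap surrogate q_i: the term at a fixed reference value, precomputed
# once so its full-data sum costs nothing per MCMC iteration.
q = log_dens(y, 0.0)
ell = log_dens(y, theta)        # exact terms, computed here only to verify

# Difference estimator: full surrogate sum plus a subsample correction
# for the differences d_i = ell_i - q_i.
m = 1_000
idx = rng.integers(0, n, size=m)
d = ell[idx] - q[idx]
est = q.sum() + n * d.mean()

exact = ell.sum()
rel_err = abs(est - exact) / abs(exact)
```

The estimator is unbiased for the exact sum, and its variance is driven by the variance of the differences, so the better the surrogate tracks the true terms, the smaller the subsample can be.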
We show how to speed up Sequential Monte Carlo (SMC) for Bayesian inference in large data problems by data subsampling. SMC sequentially updates a cloud of particles through a sequence of distributions, beginning with a distribution that is easy to sample from such as the prior and ending with...
Persistent link: https://www.econbiz.de/10011999819
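The SMC mechanism described above, moving a particle cloud from the prior to the posterior through a sequence of intermediate distributions, can be sketched with likelihood tempering on a toy conjugate model. This omits the paper's data-subsampling step (the full likelihood is evaluated directly) and uses a plain linear tempering schedule with one random-walk rejuvenation move per stage; everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target: theta ~ N(0, 10^2) prior, y_i ~ N(theta, 1).
y = rng.standard_normal(200) + 1.0

def loglik(theta):
    return -0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1)

def logprior(theta):
    return -0.5 * (theta / 10.0) ** 2

# SMC sampler over tempered targets pi_t ∝ prior * likelihood^phi_t,
# starting at phi = 0 (the prior) and ending at phi = 1 (the posterior).
n_part = 2_000
theta = 10.0 * rng.standard_normal(n_part)    # draws from the prior
ll = loglik(theta)
phis = np.linspace(0.0, 1.0, 21)
for phi_prev, phi in zip(phis[:-1], phis[1:]):
    # Reweight by the incremental likelihood, then resample
    logw = (phi - phi_prev) * ll
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_part, size=n_part, p=w)
    theta, ll = theta[idx], ll[idx]
    # Rejuvenate duplicates with one random-walk MH step targeting pi_t
    prop = theta + 0.2 * rng.standard_normal(n_part)
    ll_prop = loglik(prop)
    log_alpha = phi * (ll_prop - ll) + logprior(prop) - logprior(theta)
    acc = np.log(rng.random(n_part)) < log_alpha
    theta = np.where(acc, prop, theta)
    ll = np.where(acc, ll_prop, ll)

post_mean = theta.mean()   # exact posterior mean is very close to y.mean()
```

In the subsampled variant the two `loglik` calls per stage would be replaced by estimates from a random subset of observations, which is where the computational savings for large datasets come from.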