The computing time for Markov Chain Monte Carlo (MCMC) algorithms can be prohibitively large for datasets with many observations, especially when the data density for each observation is costly to evaluate. We propose a framework where the likelihood function is estimated from a random subset of...
Persistent link: https://www.econbiz.de/10010500806
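The general idea behind such subsampling approaches can be sketched as a Metropolis-Hastings sampler whose log-likelihood ratio is estimated from a random subset of the observations, scaled up by N/m. This is only a minimal illustration of the idea, not the paper's actual framework (model, prior, tuning, and subset size are all assumptions here, and the noisy acceptance step makes the sampler approximate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y_i ~ N(theta_true, 1); flat prior on theta (assumption).
N, theta_true = 10_000, 2.0
y = rng.normal(theta_true, 1.0, size=N)

def loglik_diff_estimate(theta_new, theta_old, m=2_000):
    """Unbiased subsample estimate of the full log-likelihood difference:
    evaluate both parameter values on the same random subset, scale by N/m."""
    idx = rng.choice(N, size=m, replace=False)
    d = -0.5 * (y[idx] - theta_new) ** 2 + 0.5 * (y[idx] - theta_old) ** 2
    return (N / m) * d.sum()

def mh_subsampled(n_iter=2_000, step=0.02, theta0=0.0):
    theta, draws = theta0, []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        # Accept/reject using the noisy subsample-based log-ratio
        # instead of the full-data likelihood ratio.
        if np.log(rng.uniform()) < loglik_diff_estimate(prop, theta):
            theta = prop
        draws.append(theta)
    return np.array(draws)

draws = mh_subsampled()
print(round(draws[-800:].mean(), 1))  # posterior mean, near theta_true
```

Evaluating only m of the N observations per iteration is what cuts the computing cost; the price is extra noise in the acceptance step, which is exactly what correction schemes in this literature are designed to control.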
Many statistical and econometric learning methods rely on Bayesian ideas, often applied or reinterpreted in a frequentist setting. Two leading examples are shrinkage estimators and model averaging estimators, such as weighted-average least squares (WALS). In many instances, the accuracy of these...
Persistent link: https://www.econbiz.de/10012839923
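A classic illustration of the shrinkage idea mentioned in this entry (not the WALS estimator itself) is the James-Stein estimator, which shrinks the maximum-likelihood estimate toward zero and attains lower total mean squared error whenever the dimension is at least three. The setup below is a toy simulation; the dimension, true mean, and replication count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# x ~ N_p(theta, I): compare the MLE (x itself) with the James-Stein
# shrinkage estimator, which dominates the MLE in total MSE for p >= 3.
p, reps = 10, 20_000
theta = np.full(p, 1.0)

x = theta + rng.standard_normal((reps, p))
norm2 = np.sum(x ** 2, axis=1, keepdims=True)
js = (1.0 - (p - 2) / norm2) * x        # James-Stein shrinkage toward 0

mse_mle = np.mean(np.sum((x - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(round(mse_mle, 1), round(mse_js, 1))
```

The simulated MSE of the MLE is close to p, while the shrinkage estimator comes in noticeably lower; this frequentist risk improvement from a Bayesian-looking estimator is the kind of interplay the abstract refers to.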
In small samples, and especially in the case of small true default probabilities, standard approaches to credit default probability estimation have certain drawbacks. Most importantly, standard estimators tend to underestimate the true default probability, which is of course an undesirable...
Persistent link: https://www.econbiz.de/10013113964
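The underestimation tendency described here can be illustrated with a small simulation. The naive default-frequency estimator k/n is unbiased in expectation, but with a small portfolio and a small true PD it falls below the true value in the majority of samples (often it is exactly zero). The portfolio size and PD below are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small portfolio: n = 50 obligors, true PD p = 0.01.
n, p, reps = 50, 0.01, 100_000

defaults = rng.binomial(n, p, size=reps)  # defaults observed in each sample
rate = defaults / n                       # naive estimator: default frequency

# Fraction of samples in which the estimate falls below the true PD
# (here: exactly the samples with zero observed defaults).
share_under = np.mean(rate < p)
print(round(share_under, 2))
```

With these numbers roughly 60% of samples produce zero defaults, so the estimated PD understates the true one most of the time — the small-sample problem that motivates alternative estimators.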
Likelihood based inference for multi-state latent factor intensity models is hindered by the fact that exact closed-form expressions for the implied data density are not available. This is a common and well-known problem for most parameter driven dynamic econometric models. This paper reviews,...
Persistent link: https://www.econbiz.de/10011374420
Estimating the holding periods of financial products is a dynamic process in which the size of the observation time interval influences the result: smaller intervals produce shorter average holding periods than larger ones. The approach developed in this paper offers the...
Persistent link: https://www.econbiz.de/10011890392
We studied the effects of sample size and distribution scale/shape on three types of skewness (g1, G1, and b1) and kurtosis (g2, G2, and b2) using 18 simulated probability distributions. In general, skewness and kurtosis increased with increasing sample size. The order in the skewness...
Persistent link: https://www.econbiz.de/10014242098
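The three skewness and kurtosis variants named in this entry follow the common notation of Joanes and Gill (1998): g1/g2 are the plain moment-based measures, G1/G2 the adjusted versions used by several statistics packages, and b1/b2 a third bias-adjusted variant. A minimal sketch of those definitions, assuming that notation is intended:

```python
import numpy as np

def moments(x):
    x = np.asarray(x, dtype=float)
    n, m = len(x), x.mean()
    return n, np.mean((x - m) ** 2), np.mean((x - m) ** 3), np.mean((x - m) ** 4)

def skewness(x):
    """Three common sample skewness definitions: g1 (moment-based),
    G1 (adjusted), b1 (a further bias-adjusted variant)."""
    n, m2, m3, _ = moments(x)
    g1 = m3 / m2 ** 1.5
    G1 = g1 * np.sqrt(n * (n - 1)) / (n - 2)
    b1 = g1 * ((n - 1) / n) ** 1.5
    return g1, G1, b1

def kurtosis(x):
    """Excess-kurtosis counterparts g2, G2, b2."""
    n, m2, _, m4 = moments(x)
    g2 = m4 / m2 ** 2 - 3.0
    G2 = ((n + 1) * g2 + 6.0) * (n - 1) / ((n - 2) * (n - 3))
    b2 = (g2 + 3.0) * ((n - 1) / n) ** 2 - 3.0
    return g2, G2, b2

rng = np.random.default_rng(0)
x = rng.exponential(size=500)          # right-skewed sample (true skewness 2)
print([round(v, 2) for v in skewness(x)])
```

For right-skewed data the three measures differ only by n-dependent factors (b1 < g1 < G1), which is why their sample-size behaviour can be compared systematically, as the study does.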
Probabilistic editing has been introduced to enable valid inference using established survey sampling theory in situations when some of the collected data points may have measurement errors and are therefore submitted to an editing process. To reduce the editing effort and avoid over-editing, in...
Persistent link: https://www.econbiz.de/10015207175
We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the...
Persistent link: https://www.econbiz.de/10014179647
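As a reminder of the form such corrections take (this sketch follows the goodness-of-fit case; extending it to score, Wald, and difference statistics is precisely what the entry above addresses), the Satorra-Bentler scaled statistic divides the uncorrected statistic by an estimate of the mean of the nonzero eigenvalues of its asymptotic distribution:

$$\bar{T} = \frac{T}{\hat{c}}, \qquad \hat{c} = \frac{\operatorname{tr}(\hat{U}\hat{\Gamma})}{r},$$

where $T$ is the uncorrected statistic, $r$ its degrees of freedom, $\Gamma$ the asymptotic covariance matrix of the sample moments, and $U$ a model-dependent weight matrix; the adjusted (Satterthwaite-type) version additionally replaces $r$ by $r^{*} = [\operatorname{tr}(U\Gamma)]^{2}/\operatorname{tr}[(U\Gamma)^{2}]$.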
Maximum likelihood estimation (MLE) of stochastic differential equations (SDEs) is difficult because in general the transition density function of these processes is not known in closed form, and has to be approximated somehow. An approximation based on efficient importance sampling (EIS) is...
Persistent link: https://www.econbiz.de/10014183458
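The basic difficulty and the importance-sampling remedy can be sketched on an Ornstein-Uhlenbeck process, whose transition density is known in closed form and so lets the approximation be checked. The sketch below is a crude one-intermediate-point simulated-likelihood estimate with a Brownian-bridge-style importance density — a simplification for illustration, not the EIS algorithm of the paper (all parameter values are arbitrary):

```python
import numpy as np
from math import exp, sqrt, pi

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck: dX = kappa*(mu - X) dt + sigma dW.
kappa, mu, sigma = 1.0, 0.0, 0.5
x0, xT, Delta = 0.3, 0.1, 0.5

def norm_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * pi * v)

def euler_density(x_next, x, h):
    """One-step Euler transition density N(x + drift*h, sigma^2 h)."""
    return norm_pdf(x_next, x + kappa * (mu - x) * h, sigma ** 2 * h)

def is_density_estimate(S=50_000):
    """Importance-sampling estimate of p(xT | x0) over Delta, split into
    two Euler substeps; the midpoint is drawn from a bridge-like
    importance density and reweighted accordingly."""
    h = Delta / 2
    q_mean, q_var = 0.5 * (x0 + xT), sigma ** 2 * h / 2
    xm = q_mean + sqrt(q_var) * rng.standard_normal(S)
    w = (euler_density(xm, x0, h) * euler_density(xT, xm, h)
         / norm_pdf(xm, q_mean, q_var))
    return w.mean()

def exact_density():
    """Closed-form OU transition density, used only to check the estimate."""
    m = mu + (x0 - mu) * exp(-kappa * Delta)
    v = sigma ** 2 * (1 - exp(-2 * kappa * Delta)) / (2 * kappa)
    return float(norm_pdf(xT, m, v))

est, ex = is_density_estimate(), exact_density()
print(round(est, 3), round(ex, 3))
```

The remaining gap between the estimate and the exact density comes from the coarse two-step Euler discretization, not from the importance sampling; refining the time grid (and, in EIS, optimizing the importance density) shrinks it.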