This paper is concerned with estimation of a predictive density with parametric constraints under Kullback–Leibler loss. When an invariance structure is embedded in the problem, general and unified conditions for the minimaxity of the best equivariant predictive density estimator are derived....
Persistent link: https://www.econbiz.de/10011041990
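The setting in the abstract above can be illustrated numerically in a simple assumed case (my own sketch, not the paper's model): for X ~ N(θ, 1) and a future Y ~ N(θ, 1), the best equivariant predictive density is N(x, 2), and a Monte Carlo estimate of the Kullback–Leibler risk shows it beats the plug-in density N(x, 1).

```python
import numpy as np

# Monte Carlo comparison of KL risk for two predictive densities of a
# future Y ~ N(theta, 1) given X ~ N(theta, 1) (illustrative setup only):
# the plug-in N(x, 1) versus the best equivariant density N(x, 2).
rng = np.random.default_rng(1)
theta, n = 0.0, 200_000
x = theta + rng.standard_normal(n)

def kl_normal(m0, v0, m1, v1):
    """KL( N(m0, v0) || N(m1, v1) )."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

risk_plugin = kl_normal(theta, 1.0, x, 1.0).mean()  # ~0.5 analytically
risk_equiv = kl_normal(theta, 1.0, x, 2.0).mean()   # ~0.5*log(2) ~ 0.347
```

The gap between the two risks (0.5 versus 0.5·log 2) is exactly the kind of domination result the paper generalizes.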
This paper obtains conditions for minimaxity of hierarchical Bayes estimators in the estimation of a mean vector of a multivariate normal distribution. Hierarchical prior distributions with three types of second stage priors are treated. Conditions for admissibility and inadmissibility of the...
Persistent link: https://www.econbiz.de/10005152908
This paper studies minimaxity of estimators of a set of linear combinations of location parameters μi, i = 1,...,k, under quadratic loss. When each location parameter is known to be positive, previous results about minimaxity or non-minimaxity are extended from the case of estimating a single...
Persistent link: https://www.econbiz.de/10009194650
We consider stochastic domination in predictive density estimation problems when the underlying loss metric is the α-divergence loss D(α) introduced by Csiszár (1967). The underlying distributions considered are normal location-scale models, including the distribution of the observables, the...
Persistent link: https://www.econbiz.de/10011041977
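For reference, the α-divergence family mentioned above is commonly written in the predictive-density literature in the following Csiszár form (one standard parametrization; stated here as background, not quoted from the paper):

$$
D_\alpha(\hat p, p) = \int f_\alpha\!\left(\frac{\hat p(y)}{p(y)}\right) p(y)\,dy,
\qquad
f_\alpha(z) =
\begin{cases}
\dfrac{4}{1-\alpha^2}\bigl(1 - z^{(1+\alpha)/2}\bigr), & |\alpha| < 1,\\[4pt]
z\log z, & \alpha = 1,\\[2pt]
-\log z, & \alpha = -1,
\end{cases}
$$

so that α = −1 recovers the Kullback–Leibler loss KL(p ‖ p̂).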
We investigate conditions under which estimators of the form X + aU'Ug(X) dominate X when X, a p × 1 vector, and U, an m × 1 vector, are distributed such that [X1, X2,..., Xp, U1, U2,..., Um]'/σ has a spherically symmetric distribution about [θ1, θ2,..., θp, 0, 0,...,...
Persistent link: https://www.econbiz.de/10005093717
We consider the problem of estimating a p-dimensional parameter θ = (θ1,...,θp) when the observation is a p+k vector (X, U) where dim X = p and where U is a residual vector with dim U = k. The distributional assumption is that (X, U) has a spherically symmetric distribution around...
Persistent link: https://www.econbiz.de/10005106991
Assume X = (X1, ..., Xp)' has a normal mixture distribution with a density w.r.t. Lebesgue measure, where Σ is a known positive definite matrix and F is any known c.d.f. on (0, ∞). Estimation of the mean vector under an arbitrary known quadratic loss function Q(θ, a) = (a −...
Persistent link: https://www.econbiz.de/10005107001
When estimating, under quadratic loss, the location parameter θ of a spherically symmetric distribution with known scale parameter, we show that the common practice of using the residual vector as an estimate of the variance may be preferable to using the known value of the...
Persistent link: https://www.econbiz.de/10005021321
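The idea in the abstract above — shrinking X toward the origin with a scale estimated from a residual vector — can be sketched with a James–Stein-type estimator (an illustrative sketch under assumed normality; the constants and names are mine, not the paper's):

```python
import numpy as np

# James-Stein-type shrinkage with an estimated scale (illustrative):
# X | theta ~ N_p(theta, sigma^2 I), residual vector U ~ N_k(0, sigma^2 I),
# and S = ||U||^2 is used in place of the known variance.
rng = np.random.default_rng(0)
p, k = 10, 5
theta = np.ones(p)
sigma = 2.0

X = theta + sigma * rng.standard_normal(p)
U = sigma * rng.standard_normal(k)

S = U @ U                                    # residual sum of squares
factor = 1.0 - (p - 2) / (k + 2) * S / (X @ X)
delta = max(factor, 0.0) * X                 # positive-part shrinkage

loss_usual = np.sum((X - theta) ** 2)        # quadratic loss of X itself
loss_shrink = np.sum((delta - theta) ** 2)   # loss of the shrinkage rule
```

The positive-part truncation max(factor, 0) guarantees the estimator never over-shrinks past the origin, which is the standard fix for the plain James–Stein rule.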
We derive minimax generalized Bayes estimators of regression coefficients in the general linear model with spherically symmetric errors under invariant quadratic loss for the case of unknown scale. The class of estimators generalizes the class considered in Maruyama and Strawderman [Y. Maruyama,...
Persistent link: https://www.econbiz.de/10008521112
Let X, V1,..., Vn−1 be n random vectors with a joint density of the stated form, where both θ and Σ are unknown. We consider the problem of estimating θ under the invariant loss (δ − θ)'Σ⁻¹(δ − θ) and propose estimators which dominate the usual estimator...
Persistent link: https://www.econbiz.de/10005221209