Common high-dimensional methods for prediction rely on having either a sparse signal model, a model in which most parameters are zero and there are a small number of non-zero parameters that are large in magnitude, or a dense signal model, a model with no large parameters and very many small...
Persistent link: https://www.econbiz.de/10010477564
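The snippet above contrasts sparse and dense signal models. A minimal simulated sketch of that distinction, with invented data and penalty values (not the paper's method): a lasso (l1) fit suits the sparse design, a ridge (l2) fit the dense one.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 100

# Sparse signal: a handful of large coefficients, the rest exactly zero.
beta_sparse = np.zeros(p)
beta_sparse[:5] = 2.0

# Dense signal: very many small coefficients, none large.
beta_dense = np.full(p, 0.1)

X = rng.standard_normal((n, p))
y_sparse = X @ beta_sparse + rng.standard_normal(n)
y_dense = X @ beta_dense + rng.standard_normal(n)

# Lasso is tailored to sparse signals; ridge to dense ones.
lasso = Lasso(alpha=0.1).fit(X, y_sparse)
ridge = Ridge(alpha=10.0).fit(X, y_dense)

print("nonzero lasso coefficients:", np.sum(lasso.coef_ != 0))
```

The lasso recovers the few large coefficients and zeroes out most of the rest, while ridge spreads shrinkage evenly, which is why the choice between the two model classes matters for prediction.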
Common high-dimensional methods for prediction rely on having either a sparse signal model, a model in which most parameters are zero and there are a small number of non-zero parameters that are large in magnitude, or a dense signal model, a model with no large parameters and very many small...
Persistent link: https://www.econbiz.de/10011337679
…-parametric and easy to implement. Our approach can be connected to corrections for selection bias and shrinkage estimation and is to …
Persistent link: https://www.econbiz.de/10012063831
… for selection bias and shrinkage estimation and is to be contrasted with deconvolution. Simulation results confirm the …
Persistent link: https://www.econbiz.de/10012792731
Empirical research typically involves a robustness-efficiency tradeoff. A researcher seeking to estimate a scalar parameter can invoke strong assumptions to motivate a restricted estimator that is precise but may be heavily biased, or they can relax some of these assumptions to motivate a more...
Persistent link: https://www.econbiz.de/10015073234
This paper considers inference in logistic regression models with high dimensional data. We propose new methods for estimating and constructing confidence regions for a regression parameter of primary interest α0, a parameter in front of the regressor of interest, such as the treatment variable...
Persistent link: https://www.econbiz.de/10010226493
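The entry above concerns inference on a treatment coefficient α0 in a high-dimensional logistic model. The sketch below illustrates only the l1-penalized first-stage selection in such a setting, on invented data; it does not implement the paper's confidence-region construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 500, 50

X = rng.standard_normal((n, p))
d = X[:, 0]  # regressor of primary interest ("treatment"), here column 0

# Sparse logistic model: only the treatment and two controls matter.
logits = 1.0 * d + 0.5 * X[:, 1] - 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# l1-penalized logistic regression as a variable-selection first stage.
fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(fit.coef_.ravel())
print("selected covariates:", selected)
```

Naive standard errors computed after such a selection step are exactly what uniformly valid methods of this kind are designed to correct.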
We develop uniformly valid confidence regions for regression coefficients in a high-dimensional sparse least absolute deviation/median regression model. The setting is one where the number of regressors p could be large in comparison to the sample size n, but only s << n of them are needed to accurately describe the regression function. Our new methods are based on the instrumental median regression estimator that assembles the optimal estimating equation from the output of the post l1-penalized median regression and post l1-penalized least squares in an auxiliary equation. The estimating equation is immunized against non-regular estimation of the nuisance part of the median regression function, in the sense of Neyman. We establish that in a homoscedastic regression model, the instrumental median regression estimator of a single regression coefficient is asymptotically root-n normal uniformly with respect to the underlying sparse model. The resulting confidence regions are valid uniformly with respect to the underlying model. We illustrate the value of uniformity with Monte Carlo experiments, which demonstrate that standard/naive post-selection inference breaks down over large parts of the parameter space while the proposed method does not. We then generalize our method to the case where p1 > n regression coefficients...
Persistent link: https://www.econbiz.de/10010227487
In the practice of program evaluation, the choice of covariates and of the functional form of the propensity score matters for estimating treatment effects. This paper proposes data-driven model selection and model averaging procedures that address this issue for the propensity score...
Persistent link: https://www.econbiz.de/10010209255
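To fix ideas on why the propensity-score specification matters, here is one candidate specification on simulated data with a known treatment effect of 1 (everything below is invented for illustration and is not the paper's selection or averaging procedure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
x = rng.standard_normal((n, 2))

# True propensity score and treatment assignment.
p_true = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
d = rng.binomial(1, p_true)
y = 1.0 * d + x[:, 0] + rng.standard_normal(n)  # true effect = 1

# One candidate specification: logistic propensity score on both covariates.
ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]

# Inverse-probability-weighted estimate of the average treatment effect.
ate = np.mean(d * y / ps) - np.mean((1 - d) * y / (1 - ps))
print(f"IPW ATE estimate: {ate:.2f}")
```

Dropping a relevant covariate or misspecifying the functional form here would bias the weights, which is the gap a data-driven model selection or averaging step is meant to close.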
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the...
Persistent link: https://www.econbiz.de/10010382148
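For contrast with the Bayesian framework above, a classical exploratory factor analysis baseline of the kind it aims to improve on: the number of factors must be fixed in advance rather than inferred, and measurements are not automatically allocated to unique factors. The data below follow an invented dedicated-factor structure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
n, k = 500, 2  # n observations, k latent factors

# Dedicated-factor structure: each measurement loads on a single factor.
F = rng.standard_normal((n, k))
loadings = np.zeros((6, k))
loadings[:3, 0] = 1.0   # measurements 0-2 load on factor 1
loadings[3:, 1] = 1.0   # measurements 3-5 load on factor 2
X = F @ loadings.T + 0.3 * rng.standard_normal((n, 6))

# Classical EFA: the analyst must supply n_components = k up front.
fa = FactorAnalysis(n_components=k).fit(X)
print(fa.components_.round(2))
```

A Bayesian dedicated-factor approach instead treats k and the measurement-to-factor allocation as unknowns to be determined jointly.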