Most modern supervised statistical/machine learning (ML) methods are explicitly designed to solve prediction problems very well. Achieving this goal does not imply that these methods automatically deliver good estimators of causal parameters. Examples of such parameters include individual...
Persistent link: https://www.econbiz.de/10011538313
Here we present an expository, general analysis of valid post-selection or post-regularization inference about a low-dimensional target parameter in the presence of a very high-dimensional nuisance parameter which is estimated using selection or regularization methods. Our analysis provides a...
Persistent link: https://www.econbiz.de/10011524714
This article introduces the R package High-dimensional Metrics (hdm), a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for...
Persistent link: https://www.econbiz.de/10011524715
Common high-dimensional methods for prediction rely on either a sparse signal model, in which most parameters are zero and the few non-zero parameters are large in magnitude, or a dense signal model, with no large parameters and very many small...
Persistent link: https://www.econbiz.de/10011337679
We revisit the classic semiparametric problem of inference on a low-dimensional parameter θ0 in the presence of high-dimensional nuisance parameters η0. We depart from the classical setting by allowing for η0 to be so high-dimensional that the traditional assumptions, such as Donsker...
Persistent link: https://www.econbiz.de/10011655554