In this article, we propose a new estimation methodology to deal with PCA for high-dimension, low-sample-size (HDLSS) data. We first show that HDLSS datasets have different geometric representations depending on whether a ρ-mixing-type dependency appears in the variables or not. When the ρ-mixing-type dependency appears in the variables, the HDLSS data converge to an n …
Persistent link: https://www.econbiz.de/10011041986
In this paper, we consider tests of correlation when the sample size is much smaller than the dimension. We propose a new estimation methodology called the extended cross-data-matrix methodology. By applying the method, we give a new test statistic for high-dimensional correlations. We show that...
Persistent link: https://www.econbiz.de/10011042082
In this paper, we propose a general spiked model called the power spiked model in high-dimensional settings. We derive relations among the data dimension, the sample size and the high-dimensional noise structure. We first consider asymptotic properties of the conventional estimator of...
Persistent link: https://www.econbiz.de/10010702809
Let (ε_j)_{j≥0} be a sequence of independent p-dimensional random vectors and τ ≥ 1 a given integer. From a sample ε_1, …, ε_{T+τ} of the sequence, the so-called lag-τ auto-covariance matrix is C_τ = T^{-1} ∑_{j=1}^{T} ε_{τ+j} ε_j^t. When the dimension p is large compared to the sample size T, this paper...
Persistent link: https://www.econbiz.de/10011263460
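The lag-τ auto-covariance matrix defined above can be computed directly from the sample. A minimal NumPy sketch (an illustration of the definition only, not the paper's asymptotic analysis; the function name and test dimensions are my own):

```python
import numpy as np

def lag_tau_autocov(eps, tau):
    """Sample lag-tau auto-covariance matrix
    C_tau = T^{-1} * sum_{j=1}^{T} eps_{tau+j} eps_j^t,
    where eps has shape (T + tau, p)."""
    T = eps.shape[0] - tau
    # (p, T) @ (T, p): sums the outer products eps_{tau+j} eps_j^t over j.
    return eps[tau:].T @ eps[:T] / T

rng = np.random.default_rng(0)
p, T, tau = 5, 200, 1
eps = rng.standard_normal((T + tau, p))
C = lag_tau_autocov(eps, tau)
print(C.shape)  # (5, 5)
```

The single matrix product replaces an explicit loop over the T outer products ε_{τ+j} ε_j^t.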
We derive efficient recursive formulas giving the exact distribution of the largest eigenvalue for finite dimensional real Wishart matrices and for the Gaussian Orthogonal Ensemble (GOE). In comparing the exact distribution with the limiting distribution of large random matrices, we also found...
Persistent link: https://www.econbiz.de/10010786416
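The exact recursive formulas of the article are not reproduced in the snippet, but the finite-dimensional quantity it studies is easy to simulate. A hedged Monte Carlo sketch of the largest GOE eigenvalue (the sampler and normalization convention below are my own assumptions, not the paper's formulas):

```python
import numpy as np

def goe_sample(n, rng):
    """Draw one n x n matrix from the Gaussian Orthogonal Ensemble:
    symmetric, with off-diagonal entries N(0, 1) and diagonal N(0, 2)
    under this (A + A^T)/sqrt(2) normalization."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / np.sqrt(2)

rng = np.random.default_rng(0)
n = 8
# eigvalsh returns eigenvalues in ascending order; [-1] is the largest.
lam_max = [np.linalg.eigvalsh(goe_sample(n, rng))[-1] for _ in range(2000)]
print(np.mean(lam_max))
```

Such simulated draws are what one would compare against the exact finite-n distribution derived in the article.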
In this work we construct an optimal linear shrinkage estimator for the covariance matrix in high dimensions. Recent results from random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and estimate them consistently. The...
Persistent link: https://www.econbiz.de/10011041912
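The generic form of a linear shrinkage estimator can be sketched as a convex combination of the sample covariance and a scaled identity. This is a minimal illustration with a user-chosen intensity, not the article's asymptotically optimal estimator (the function name and fixed rho are assumptions):

```python
import numpy as np

def shrink_cov(X, rho):
    """Linear shrinkage of the sample covariance toward a scaled
    identity: (1 - rho) * S + rho * mu * I, with mu = tr(S)/p.
    rho in [0, 1] is the shrinkage intensity, fixed by hand here;
    the article derives optimal intensities via random matrix theory."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                  # sample covariance
    mu = np.trace(S) / p               # average eigenvalue of S
    return (1 - rho) * S + rho * mu * np.eye(p)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))      # n << p: S alone is singular
Sig = shrink_cov(X, rho=0.5)
print(np.linalg.matrix_rank(Sig))      # full rank 50 after shrinkage
```

Even a fixed intensity repairs the rank deficiency of S when n < p, which is the regime the article targets.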
This article studies two regularized robust estimators of scatter matrices proposed (and proved to be well defined) in parallel in Chen et al. (2011) and Pascal et al. (2013), based on Tyler’s robust M-estimator (Tyler, 1987) and on Ledoit and Wolf’s shrinkage covariance matrix estimator...
Persistent link: https://www.econbiz.de/10011042062
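The fixed-point structure shared by such regularized Tyler-type estimators can be sketched as follows. This is an illustrative iteration in the spirit of Chen et al. (2011), with my own choice of intensity, iteration count, and trace normalization, not the exact estimator analyzed in the article:

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=50):
    """Fixed-point iteration for a regularized Tyler scatter matrix:
      Sigma <- (1 - rho) * (p/n) * sum_i x_i x_i^t / (x_i^t Sigma^{-1} x_i)
               + rho * I,
    renormalized after each step so that trace(Sigma) = p."""
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(Sigma)
        # w[i] = x_i^t Sigma^{-1} x_i
        w = np.einsum('ij,jk,ik->i', X, inv, X)
        M = (X.T / w) @ X * (p / n)          # weighted outer products
        Sigma = (1 - rho) * M + rho * np.eye(p)
        Sigma *= p / np.trace(Sigma)
    return Sigma

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 5))
S = regularized_tyler(X, rho=0.2)
print(np.trace(S))  # 5.0 by construction
```

The ρI term is what keeps each iterate invertible even when n is small relative to p, which is the point of regularizing Tyler's M-estimator.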
Principal Components are usually hard to interpret. Sparseness is considered one way to improve interpretability, and thus a trade-off between the variance explained by the components and sparseness is frequently sought. In this note we address the problem of simultaneous maximization of variance...
Persistent link: https://www.econbiz.de/10010939515
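The variance/sparseness trade-off can be made concrete with a crude device: soft-threshold the leading loading vector and track how the explained variance drops as loadings are zeroed out. This is only a toy illustration of the trade-off, not the note's simultaneous-maximization method:

```python
import numpy as np

def threshold_pc(X, t):
    """Soft-threshold the leading principal-component loadings at level t,
    renormalize, and return (number of nonzero loadings, projected variance)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are loading vectors, ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]
    v_sparse = np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    if not v_sparse.any():
        return 0, 0.0
    v_sparse /= np.linalg.norm(v_sparse)
    return int(np.count_nonzero(v_sparse)), float(np.var(Xc @ v_sparse))

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 10))
for t in (0.0, 0.1, 0.3):
    print(t, threshold_pc(X, t))
```

At t = 0 the unthresholded PC maximizes projected variance over unit vectors, so every sparser unit vector explains no more variance; the printed pairs trace the trade-off curve.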
In High Dimension, Low Sample Size (HDLSS) data situations, where the dimension d is much larger than the sample size n … PCA in the HDLSS asymptotics. While the results hold under a general situation, the limiting distributions under the Gaussian assumption are illustrated in greater detail. In addition, the geometric representation of HDLSS data is extended to give three …
Persistent link: https://www.econbiz.de/10011042061
In this paper we demonstrate that a higher-ranking principal component of the predictor tends to have a stronger correlation with the response in single index models and sufficient dimension reduction. This tendency holds even though the orientation of the predictor is not designed in any way to...
Persistent link: https://www.econbiz.de/10011042065
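The stated tendency can be checked by simulation: draw a random index direction, generate a single index response, and average the absolute correlations between the ranked PC scores and the response. A minimal Monte Carlo sketch (the spiked eigenvalue profile, cubic link, and sample sizes are my own assumptions, not the article's setup):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n, reps = 10, 400, 200
evals = np.arange(p, 0, -1).astype(float)        # eigenvalues 10, 9, ..., 1

avg = np.zeros(p)
for _ in range(reps):
    X = rng.standard_normal((n, p)) * np.sqrt(evals)
    beta = rng.standard_normal(p)                # orientation not tied to PCs
    y = (X @ beta) ** 3                          # single index model y = f(x'beta)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T                           # PC scores, ranked by variance
    avg += np.abs([np.corrcoef(scores[:, k], y)[0, 1] for k in range(p)])
avg /= reps

print(avg)  # average |corr| tends to decrease with PC rank
```

Even though β is drawn independently of the principal axes, the higher-variance components pick up more of the index on average, which is the tendency the abstract describes.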