Covariance structure approximation via gLasso in high-dimensional supervised classification
Recent work has shown that Lasso-based regularization is very useful for estimating high-dimensional inverse covariance matrices. A particularly useful scheme penalizes the ℓ<sub>1</sub> norm of the off-diagonal elements to encourage sparsity. We embed this type of regularization into high-dimensional classification. A two-stage estimation procedure is proposed which first recovers the structural zeros of the inverse covariance matrix and then enforces block sparsity by moving non-zero elements closer to the main diagonal. We show that a block-diagonal approximation of the inverse covariance matrix leads to an additive classifier, and demonstrate that accounting for this structure can yield better classification accuracy. The effect of block size on classification is explored, and a class of asymptotically equivalent structure approximations in the high-dimensional setting is specified. We suggest variable selection at the block level and investigate the properties of this procedure under growing-dimension asymptotics. We present a consistency result for the feature selection procedure, establish asymptotic lower and upper bounds for the fraction of separative blocks, and specify constraints under which reliable classification with block-wise feature selection can be performed. The relevance and benefits of the proposed approach are illustrated on both simulated and real data.
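The pipeline the abstract describes can be sketched in code. Below is a minimal, illustrative Python sketch that uses scikit-learn's `GraphicalLasso` for the ℓ<sub>1</sub>-penalized precision-matrix estimation of stage one; the fixed-size block partition and the block-wise LDA score are simplified stand-ins for the paper's second stage and additive classifier, not the authors' actual procedure, and the data, block size, and class means are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Stage 1: recover structural zeros of the inverse covariance matrix
# via an l1 penalty on its off-diagonal elements (graphical lasso).
rng = np.random.default_rng(0)
n, p = 200, 30
X = rng.standard_normal((n, p))           # placeholder training data
gl = GraphicalLasso(alpha=0.1).fit(X)
Theta = gl.precision_                     # sparse estimate of Sigma^{-1}

# Stage 2 (simplified stand-in): force a block-diagonal approximation
# by zeroing all entries outside fixed-size diagonal blocks. The paper
# instead moves non-zeros toward the diagonal before blocking.
block_size = 5                            # assumed block size, not from the paper
Theta_block = np.zeros_like(Theta)
for start in range(0, p, block_size):
    sl = slice(start, min(start + block_size, p))
    Theta_block[sl, sl] = Theta[sl, sl]

# With a block-diagonal precision matrix, the linear discriminant score
# (mu1 - mu0)^T Theta (x - (mu0 + mu1)/2) decomposes into a sum of
# independent block-wise terms, i.e. an additive classifier.
def additive_lda_score(x, mu0, mu1, Theta_b, block_size):
    """Sum of per-block LDA scores; positive favors class 1."""
    score = 0.0
    for start in range(0, len(x), block_size):
        sl = slice(start, min(start + block_size, len(x)))
        d = mu1[sl] - mu0[sl]
        m = 0.5 * (mu1[sl] + mu0[sl])
        score += d @ Theta_b[sl, sl] @ (x[sl] - m)  # block-wise contribution
    return score

# Example call with hypothetical class means (assumed, for illustration):
mu0, mu1 = np.zeros(p), 0.5 * np.ones(p)
print(additive_lda_score(X[0], mu0, mu1, Theta_block, block_size))
```

Because each block contributes its own score term, blocks can also be screened individually for discriminative power, which is the intuition behind the block-level variable selection studied in the paper.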
| Year of publication: | 2012 |
|---|---|
| Authors: | Pavlenko, Tatjana; Björkström, Anders; Tillander, Annika |
| Published in: | Journal of Applied Statistics. Taylor & Francis Journals, ISSN 0266-4763. Vol. 39 (2012), No. 8, pp. 1643-1666 |
| Publisher: | Taylor & Francis Journals |
Similar items by person

- Krylov Sequences as a Tool for Analysing Iterated Regression Algorithms. Björkström, Anders (2010)
- A Generalized View on Continuum Regression. Björkström, Anders (1999)