An ℓ1-Penalization Of Adaptive Normalized Quasi-Newton Algorithm For Sparsity-Aware Generalized Eigenvector Estimation

Abstract:

The goal of this paper is to establish a widely applicable method for exploiting sparsity in generalized eigenvector estimation. We propose an ℓ1-penalized extension of the adaptive normalized quasi-Newton algorithm (Nguyen and Yamada, 2013). To enhance sparsity in the estimate of the generalized eigenvector, the proposed adaptive algorithm maximizes a certain non-convex criterion with an ℓ1 penalty. A convergence analysis is also given for the proposed algorithm with decaying weight. Numerical experiments show that the proposed algorithm improves subspace tracking performance when the covariance matrix pencil has a sparse principal generalized eigenvector, and that it is effective for recent sparsity-aware eigenvector analyses, e.g., sparse PCA.
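For readers unfamiliar with the underlying problem, the sketch below illustrates the general idea of ℓ1-penalized principal generalized eigenvector estimation: maximizing a generalized Rayleigh quotient of a covariance pencil (Ry, Rx) with a sparsity-promoting ℓ1 term. This is only a minimal batch proximal-gradient stand-in written for illustration, not the paper's adaptive normalized quasi-Newton update; the function name, step sizes, and penalty weight are assumptions introduced here.

```python
import numpy as np


def sparse_gev_sketch(Ry, Rx, rho=0.1, step=0.05, n_iter=500, seed=0):
    """Illustrative sketch (NOT the paper's algorithm): estimate a sparse
    principal generalized eigenvector of the pencil (Ry, Rx) by
    proximal-gradient ascent on an l1-penalized generalized Rayleigh
    quotient, with Rx-normalization after each step."""
    rng = np.random.default_rng(seed)
    n = Ry.shape[0]
    v = rng.standard_normal(n)
    v /= np.sqrt(v @ Rx @ v)  # normalize w.r.t. Rx
    for _ in range(n_iter):
        num = v @ Ry @ v
        den = v @ Rx @ v
        # gradient of the generalized Rayleigh quotient v'Ry v / v'Rx v
        grad = (2.0 / den) * (Ry @ v - (num / den) * (Rx @ v))
        v = v + step * grad
        # soft-thresholding: proximal step for the l1 penalty rho*||v||_1
        v = np.sign(v) * np.maximum(np.abs(v) - step * rho, 0.0)
        nrm = np.sqrt(v @ Rx @ v)
        if nrm > 0:
            v /= nrm  # re-normalize w.r.t. Rx
    return v


# Toy usage: a pencil whose principal generalized eigenvector is sparse.
rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n))
Rx = A @ A.T / n + np.eye(n)                 # well-conditioned reference covariance
s = np.zeros(n); s[:3] = [3.0, -2.0, 1.5]    # sparse signal direction
Ry = Rx + np.outer(s, s)                     # signal-plus-reference covariance
v_hat = sparse_gev_sketch(Ry, Rx, rho=0.2)
```

The paper's method instead works adaptively on streaming sample covariances with a normalized quasi-Newton update; the batch sketch above only conveys how an ℓ1 penalty is combined with a generalized Rayleigh quotient criterion to promote sparsity.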
Date of Conference: 10-13 June 2018
Date Added to IEEE Xplore: 30 August 2018
Publisher: IEEE
Conference Location: Freiburg im Breisgau, Germany
