Sparse Principal Component Analysis via Joint L2,1-Norm Penalty

  • Conference paper
AI 2013: Advances in Artificial Intelligence (AI 2013)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8272)

Abstract

Sparse principal component analysis (SPCA) is a popular method for obtaining sparse loadings in principal component analysis (PCA). It recasts PCA as a regression model with a lasso constraint, but the features it selects are chosen independently for each principal component (PC) and therefore generally differ from one PC to another. We modify this regression model by replacing the elastic net penalty with the L2,1-norm, which encourages row sparsity and thus discards the same features from every PC, and we use the resulting "self-contained" regression model to build a new framework that equips graph embedding methods with sparse loadings via the L2,1-norm. An experiment on the Pitprops data illustrates the row sparsity of the modified regression model for PCA, and an experiment on the YaleB face database demonstrates the model's effectiveness for PCA in graph embedding.
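The joint penalty named in the title is the L2,1-norm of the loading matrix W, i.e. the sum of the Euclidean norms of W's rows; minimizing it pushes entire rows to zero, so a pruned feature vanishes from all PCs at once rather than from each PC independently. As a rough sketch only, not the authors' published algorithm: the Python snippet below minimizes an SPCA-style objective ||X - X W A^T||_F^2 + lam * ||W||_{2,1} (with A orthonormal) by alternating an iteratively-reweighted-least-squares step for W with an orthogonal Procrustes step for A; the function name sparse_pca_l21, the parameter lam, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def sparse_pca_l21(X, k, lam=1.0, n_iter=100, eps=1e-8):
    """Row-sparse PCA loadings via an L2,1-penalized regression (a sketch).

    Minimizes ||X - X W A^T||_F^2 + lam * ||W||_{2,1} over W with A^T A = I,
    using IRLS for the W-step and orthogonal Procrustes for the A-step.
    X is an (n, d) centered data matrix; returns (d, k) matrices W and A.
    """
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    A = Vt[:k].T                       # warm-start A with ordinary PCA loadings
    W = A.copy()
    G = X.T @ X                        # (d, d) Gram matrix
    for _ in range(n_iter):
        # IRLS reweighting: D_ii = 1 / (2 ||w_i||_2), floored at eps for zero rows
        row_norms = np.maximum(np.linalg.norm(W, axis=1), eps)
        D = np.diag(1.0 / (2.0 * row_norms))
        # W-step: stationarity gives the ridge-like system (G + lam * D) W = G A
        W = np.linalg.solve(G + lam * D, G @ A)
        # A-step: Procrustes solution A = U V^T from the thin SVD of G W
        U, _, Vt2 = np.linalg.svd(G @ W, full_matrices=False)
        A = U @ Vt2
    W[np.linalg.norm(W, axis=1) < 1e-6] = 0.0   # snap near-zero rows to exact zeros
    return W, A

# usage: 13 variables (the size of the Pitprops data), 3 row-sparse components
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 13))
X -= X.mean(axis=0)
W, A = sparse_pca_l21(X, k=3, lam=5.0)
print("features kept in all 3 PCs:", np.flatnonzero(np.linalg.norm(W, axis=1)))
```

A zero row of W drops the corresponding variable from every component simultaneously; that is the row sparsity the abstract contrasts with the entry-wise (lasso/elastic net) sparsity of standard SPCA.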





Copyright information

© 2013 Springer International Publishing Switzerland

About this paper

Cite this paper

Shi, X., Lai, Z., Guo, Z., Wan, M., Zhao, C., Kong, H. (2013). Sparse Principal Component Analysis via Joint L2,1-Norm Penalty. In: Cranefield, S., Nayak, A. (eds) AI 2013: Advances in Artificial Intelligence. AI 2013. Lecture Notes in Computer Science (LNAI), vol 8272. Springer, Cham. https://doi.org/10.1007/978-3-319-03680-9_16

  • DOI: https://doi.org/10.1007/978-3-319-03680-9_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-03679-3

  • Online ISBN: 978-3-319-03680-9

  • eBook Packages: Computer Science (R0)
