Principal component analysis based on block-norm minimization

Abstract

Principal Component Analysis (PCA) has attracted considerable interest for years in studies of image recognition. Several state-of-the-art PCA-based robust feature extraction techniques have been proposed, such as PCA-L1 and R1-PCA. Because these methods treat each image in its vectorized form, they lose latent information carried by the image and overlook its spatial structural details. To exploit these two kinds of information and to improve robustness to outliers, we propose principal component analysis based on block-norm minimization (Block-PCA), which employs a block-norm to measure the distance between an image and its reconstruction. The block-norm imposes an L2-norm constraint within each local block of pixels and an L1-norm constraint across different blocks. When parts of an image are corrupted, Block-PCA effectively suppresses the influence of the corrupted blocks and makes full use of the remaining ones. In addition, we propose an iterative algorithm to solve the Block-PCA model. Performance is evaluated on several datasets and compared with that of other PCA-based methods.
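
For intuition, the following minimal sketch (not the authors' implementation) illustrates the block-norm reconstruction error described above, assuming non-overlapping 8x8 pixel blocks, a hypothetical block_norm helper, and a column-orthonormal projection matrix W; mean-centering is omitted for brevity. The residual between an image and its reconstruction is measured with an L2-norm inside each block and a plain sum (L1-norm) across blocks, so a few heavily corrupted blocks cannot dominate the error the way they would under a squared-error criterion.

    import numpy as np

    def block_norm(residual, block_size=8):
        # L2-norm within each non-overlapping block, L1-norm (plain sum) across blocks.
        h, w = residual.shape
        total = 0.0
        for i in range(0, h, block_size):
            for j in range(0, w, block_size):
                block = residual[i:i + block_size, j:j + block_size]
                total += np.linalg.norm(block)   # Frobenius (L2) norm of one block
        return total                             # summing over blocks = L1 combination

    def block_reconstruction_error(X, W):
        # Distance between image X and its reconstruction from the subspace spanned
        # by the (assumed orthonormal) columns of W, measured with the block-norm.
        x = X.reshape(-1, 1)                     # vectorize only to apply the projection
        reconstruction = (W @ (W.T @ x)).reshape(X.shape)
        return block_norm(X - reconstruction)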

References

  1. Aanæs H, Fisker R, Astrom K, Carstensen JM (2002) Robust factorization. IEEE Trans Pattern Anal Mach Intell 24(9):1215–1225

  2. Abdi H, Williams LJ (2010) Principal component analysis. Wiley Interdiscip Rev Comput Stat 2(4):433–459

  3. Baccini A, Besse P, Falguerolles A (1996) A L1-norm PCA and a heuristic approach. Ordinal Symb Data Anal 1(1):359–368

  4. Brooks JP, Dulá J, Boone EL (2013) A pure L1-norm principal component analysis. Comput Stat Data Anal 61:83–98

  5. Candès EJ, Li X, Ma Y, Wright J (2011) Robust principal component analysis? J ACM 58(3):11

  6. De La Torre F, Black MJ (2003) A framework for robust subspace learning. Int J Comput Vis 54(1–3):117–142

  7. Ding C, Zhou D, He X, Zha H (2006) R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In: Proceedings of the 23rd international conference on machine learning. ACM, pp 281–288

  8. Georghiades A, Belhumeur P, Kriegman D (2001) From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans Pattern Anal Mach Intell 23(6):643–660

  9. Gottumukkal R, Asari VK (2004) An improved face recognition technique based on modular PCA approach. Pattern Recogn Lett 25(4):429–436

  10. Ke Q, Kanade T (2005) Robust L1-norm factorization in the presence of outliers and missing data by alternative convex programming. In: IEEE computer society conference on computer vision and pattern recognition (CVPR 2005), vol 1. IEEE, pp 739–746

  11. Kumar N, Singh S, Kumar A (2017) Random permutation principal component analysis for cancelable biometric recognition. Appl Intell. https://doi.org/10.1007/s10489-017-1117-7

  12. Kwak N (2008) Principal component analysis based on L1-norm maximization. IEEE Trans Pattern Anal Mach Intell 30(9):1672–1680

  13. Li BN, Yu Q, Wang R, Xiang K, Wang M, Li X (2016) Block principal component analysis with nongreedy L1-norm maximization. IEEE Trans Cybern 46(11):2543–2547. https://doi.org/10.1109/TCYB.2015.2479645

  14. Luo M, Nie F, Chang X, Yang Y, Hauptmann AG, Zheng Q (2017) Avoiding optimal mean L2,1-norm maximization-based robust PCA for reconstruction. Neural Comput 29(4):1124–1150

  15. Luo T, Yang Y, Yi D, Ye J (2017) Robust discriminative feature learning with calibrated data reconstruction and sparse low-rank model. Appl Intell. https://doi.org/10.1007/s10489-017-1060-7

  16. Martinez AM (1998) The AR face database. CVC Technical Report 24

  17. Mi JX, Luo Z, Fu Q, He A (2018) Double direction matrix based sparse representation for face recognition. In: International conference on security, pattern analysis, and cybernetics, pp 660–665

  18. Mi JX, Sun Y, Lu J (2018) Robust face recognition based on supervised sparse representation. In: International conference on intelligent computing, pp 253–259

  19. Nie F, Huang H (2016) Non-greedy L21-norm maximization for principal component analysis. arXiv preprint arXiv:1603.08293

  20. Nie F, Huang H, Cai X, Ding C (2010) Efficient and robust feature selection via joint L2,1-norms minimization. In: Advances in neural information processing systems, pp 1813–1821

  21. Nie F, Huang H, Ding C, Luo D, Wang H (2011) Robust principal component analysis with non-greedy L1-norm maximization. In: IJCAI proceedings-international joint conference on artificial intelligence, vol 22, p 1433. Citeseer

  22. Nie F, Yuan J, Huang H (2014) Optimal mean robust principal component analysis. In: International conference on machine learning, pp 1062–1070

  23. Ren CX, Dai DQ, Yan H (2012) Robust classification using L2,1-norm based regression model. Pattern Recognit 45(7):2708–2718

  24. Sim T, Baker S, Bsat M (2002) The CMU pose, illumination, and expression (PIE) database. In: Proceedings of the fifth IEEE international conference on automatic face and gesture recognition. IEEE, pp 53–58

  25. Skocaj D, Leonardis A (2003) Weighted and robust incremental method for subspace learning. In: ICCV, pp 1494–1501

  26. Turk M, Pentland A (1991) Eigenfaces for recognition. J Cogn Neurosci 3(1):71–86

  27. Wang H (2012) Block principal component analysis with L1-norm for image analysis. Pattern Recogn Lett 33(5):537–542. https://doi.org/10.1016/j.patrec.2011.11.029

  28. Yi S, Lai Z, He Z, Cheung YM, Liu Y (2017) Joint sparse principal component analysis. Pattern Recognit 61:524–536. https://doi.org/10.1016/j.patcog.2016.08.025

  29. Zainuddin N, Selamat A, Ibrahim R (2018) Hybrid sentiment classification on Twitter aspect-based sentiment analysis. Appl Intell 48(5):1218–1232. https://doi.org/10.1007/s10489-017-1098-6

  30. Zia Uddin M, Lee JJ, Kim TS (2010) Independent shape component-based human activity recognition via hidden Markov model. Appl Intell 33(2):193–206. https://doi.org/10.1007/s10489-008-0159-2


Acknowledgements

This work was supported by the National Natural Science Foundation of China (under Grant Nos. 61601070 and 61472055) and sponsored by the Natural Science Foundation of Chongqing (under Grant Nos. cstc2018jcyjAX0532 and cstc2014jcyjA40011).

Author information

Corresponding author

Correspondence to Jian-Xun Mi.

About this article

Cite this article

Mi, JX., Zhu, Q. & Lu, J. Principal component analysis based on block-norm minimization. Appl Intell 49, 2169–2177 (2019). https://doi.org/10.1007/s10489-018-1382-0
