Joint latent low-rank and non-negative induced sparse representation for face recognition

Published in Applied Intelligence

Abstract

Representation-based methods have achieved exciting results in recent face recognition applications. However, face recognition remains challenging due to noise and outliers in the data. Many existing methods sidestep these problems by constructing an auxiliary dictionary from extended data, yet they fail to achieve good performance because only the main dictionary is used for classification. In this paper, to avoid manually constructing an auxiliary dictionary and to mitigate the effects of noise, we propose a Joint Latent Low-Rank and Non-Negative Induced Sparse Representation (JLSRC) method for face recognition. Specifically, JLSRC jointly and adaptively learns two clean low-rank reconstructed dictionaries via an extended latent low-rank representation to reveal the potential relationships in the data, and then imposes a non-negative constraint and an Elastic Net regularization on the coefficient vectors of the dictionaries to enhance classification performance. In this way, the learned low-rank dictionaries mutually boost each other to extract discriminative features and handle noise, and the obtained coefficient vectors are simultaneously sparse and discriminative. Moreover, the proposed method seamlessly and elegantly integrates low-rank learning and sparse representation-based classification. Extensive experiments on three challenging face databases demonstrate the effectiveness and robustness of JLSRC in comparison with state-of-the-art methods.
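To make the role of the non-negative Elastic Net constraint concrete, the following sketch codes a sample x over a dictionary D under exactly that combination of constraints, using plain projected gradient descent. This is a generic illustration, not the JLSRC solver itself; the dictionary D, sample x, regularization weights lam1/lam2, and the projected-gradient scheme are all illustrative assumptions.

```python
import numpy as np

def nn_elastic_net_code(D, x, lam1=0.1, lam2=0.1, iters=500):
    """Non-negative Elastic Net coding:
        min_{c >= 0} 0.5*||x - D c||^2 + lam1*||c||_1 + 0.5*lam2*||c||^2
    solved by projected gradient descent. For c >= 0 we have
    ||c||_1 = sum(c), so the objective is smooth on the feasible set.
    """
    c = np.zeros(D.shape[1])
    # Step size = 1 / Lipschitz constant of the gradient (||D||_2^2 + lam2)
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + lam2)
    for _ in range(iters):
        grad = D.T @ (D @ c - x) + lam1 + lam2 * c
        c = np.maximum(c - step * grad, 0.0)  # project onto the nonneg orthant
    return c
```

The l1 term (lam1) together with the non-negativity projection drives many coefficients exactly to zero, while the l2 term (lam2) stabilizes the solution when dictionary atoms are correlated, which is the usual motivation for an Elastic Net over a pure lasso penalty.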



Notes

  1. http://www.cvc.yale.edu/projects/yalefaces/yalefaces.html

  2. https://www.cl.cam.ac.uk/research/dtg/attarchive/facedataset.html

  3. http://www2.ece.ohio-state.edu/aleix/ARdataset.html



Author information


Correspondence to Ling Wang or Zhenwen Ren.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Mingna Wu and Shu Wang contributed equally to this work.

Appendix

(1) The derivation of the closed-form solution of Z is as follows:

$$ \begin{array}{@{}rcl@{}} Z^{*}=\mathcal{M}(Z) &&=\text{argmin}_{Z}\frac{\omega}{2}{||LX-LXZ||_{F}^{2}} \\&&\quad+\frac{\mu}{2}(\left| \left|X-XZ-LX-E+Y_{1}/\mu \right|\right|_{F}^{2}\\&&\quad+\left| \left| Z-J+Y_{2}/\mu \right|\right|_{F}^{2}) \end{array} $$
(31)

Since (31) is convex, we can obtain the solution of Z by setting its partial derivative with respect to Z to zero; up to the positive factor 1/μ (which does not affect the root), this derivative is

$$ \begin{array}{@{}rcl@{}} \frac{\partial \mathcal{M}}{\partial Z} &=& \frac{\omega}{\mu}X^{T}L^{T}LXZ+X^{T}XZ+Z\\&&\quad-\frac{\omega}{\mu}X^{T}L^{T}LX-X^{T}(X-LX-E+\frac{Y_{1}}{\mu})\\&&\quad-J+\frac{Y_{2}}{\mu} \end{array} $$
(32)

Setting \( \frac {\partial \mathcal{M}}{\partial Z} =0\), the closed-form solution of Z is

$$ \begin{array}{@{}rcl@{}} Z^{*} &&= (I+\frac{\omega}{\mu}X^{T}L^{T}LX+X^{T}X)^{-1}(X^{T}(X-LX-E)\\&&\quad+J+\frac{X^{T}Y_{1}-Y_{2}}{\mu}+\frac{\omega}{\mu}X^{T}L^{T}LX) \end{array} $$
(33)
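As a sanity check, the update in (33) can be computed without forming an explicit inverse by solving the corresponding linear system. Below is a minimal NumPy sketch; the variable names (X, L, E, J, Y1, Y2, omega, mu) mirror the symbols above, and the matrix shapes used are illustrative assumptions.

```python
import numpy as np

def update_Z(X, L, E, J, Y1, Y2, omega, mu):
    """Closed-form Z-update of Eq. (33).

    Solves (I + (omega/mu) (LX)^T (LX) + X^T X) Z = RHS with
    np.linalg.solve rather than an explicit matrix inverse.
    """
    n = X.shape[1]
    LX = L @ X
    G = (omega / mu) * (LX.T @ LX)      # (omega/mu) X^T L^T L X
    A = np.eye(n) + G + X.T @ X         # left-hand coefficient matrix
    rhs = X.T @ (X - LX - E) + J + (X.T @ Y1 - Y2) / mu + G
    return np.linalg.solve(A, rhs)
```

Because A is symmetric positive definite (it is I plus two Gram matrices, for omega, mu > 0), solving the system, e.g. via np.linalg.solve or a Cholesky factorization, is numerically preferable to computing the inverse in (33) explicitly.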

(2) The optimization problem of L is defined as

$$ \begin{array}{@{}rcl@{}} L^{*}&&=\mathcal{M}(L) =\text{argmin}_{L}\frac{\omega}{2}{||LX-LXZ||_{F}^{2}} \\&&\quad+\frac{\mu}{2}(\left| \left|X-XZ-LX-E+Y_{1}/\mu \right|\right|_{F}^{2}\\&&\quad+\left| \left| L-S+Y_{3}/\mu \right|\right|_{F}^{2}) \end{array} $$
(34)

Similar to Z, the closed-form solution of L can be obtained by setting \( \frac {\partial \mathcal{M}}{\partial L} =0\). Up to the same positive factor 1/μ, we have

$$ \begin{array}{@{}rcl@{}} \frac{\partial \mathcal{M}}{\partial L} &&= \frac{\omega}{\mu}L((X-XZ)(X-XZ)^{T})+LXX^{T}\\&&\quad+L-(X-XZ-E)X^{T}-S\\&&\quad-(Y_{1}X^{T}-Y_{3})/\mu=0 \end{array} $$
(35)

Therefore, we obtain

$$ \begin{array}{@{}rcl@{}} L^{*}&&= ((Y_{1}X^{T}-Y_{3})/\mu+(X-XZ-E)X^{T}\\&&\quad+S)(\frac{\omega}{\mu}(X-XZ)(X-XZ)^{T}+XX^{T}+I)^{-1} \end{array} $$
(36)
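The L-update in (36) can be computed analogously, this time solving from the right because the inverse multiplies on the right. A minimal NumPy sketch follows; the multiplier term is written as Y1 X^T, the dimensionally consistent form of the derivative (Y1 has the shape of X, so Y1 X^T is d×d like L), and all shapes are illustrative assumptions.

```python
import numpy as np

def update_L(X, Z, E, S, Y1, Y3, omega, mu):
    """Closed-form L-update of Eq. (36): L* = N B^{-1}, with
      B = (omega/mu) (X - XZ)(X - XZ)^T + X X^T + I   (symmetric)
      N = (Y1 X^T - Y3)/mu + (X - XZ - E) X^T + S
    computed by solving B L^T = N^T instead of inverting B.
    """
    d = X.shape[0]
    R = X - X @ Z                                   # residual X - XZ
    B = (omega / mu) * (R @ R.T) + X @ X.T + np.eye(d)
    N = (Y1 @ X.T - Y3) / mu + (R - E) @ X.T + S
    return np.linalg.solve(B, N.T).T                # valid since B = B^T
```

Since B is symmetric, solving B L^T = N^T and transposing recovers L* = N B^{-1} without forming the inverse, mirroring the treatment of (33).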


Cite this article

Wu, M., Wang, S., Li, Z. et al. Joint latent low-rank and non-negative induced sparse representation for face recognition. Appl Intell 51, 8349–8364 (2021). https://doi.org/10.1007/s10489-021-02338-x
