An improved redundant dictionary based on sparse representation for face recognition


Abstract

In recent years, sparse representation has been widely used for face recognition and has achieved good results. Most sparse representation methods need a redundant dictionary to solve for the sparse coefficients, and the number of atoms must be much larger than the dimension of the atoms. The design of the redundant dictionary is therefore important for improving the performance of sparse representation methods. Our experiments show that feature fusion (LBP, Gabor, HOG, and raw pixels) followed by PCA retains a high recognition rate, which means the fused features can represent faces well in a low dimension. The dictionary based on feature fusion can thus be used to solve the small sample size problem in LDA without losing useful information, and LDA increases the between-class scatter and decreases the within-class scatter while reducing dimensionality, which yields a better structure for the redundant dictionary. Based on the above, we propose a linear discriminative redundant dictionary based on feature fusion, namely LDRD, to improve the performance of face sparse representation methods. First, a standard set of features (LBP, Gabor, HOG, and raw pixels) is extracted and concatenated to form a feature vector for each atom; then LDA is introduced to rebuild the dictionary of atoms, reducing the dimensionality and enhancing the discriminative ability of the dictionary. We compare LDRD with dictionaries based on downsampling and on plain feature fusion for SRC, CRC_RLS, and LASRC. Extensive experiments demonstrate that the proposed dictionary achieves better recognition rates and higher computational efficiency, and can easily reject distractor faces.
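To make the pipeline concrete, the following is a minimal sketch, not the authors' implementation, of how such a fused-feature dictionary could be built with common Python libraries (scikit-image and scikit-learn). The feature settings, the PCA dimension, and the helper names (fuse_features, build_ldrd_dictionary) are illustrative assumptions, and the Gabor channel is omitted for brevity.

import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fuse_features(img):
    # Concatenate an LBP histogram, a HOG descriptor, and the raw pixels
    # of one grayscale face image into a single feature vector (an atom).
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec, img.ravel().astype(float)])

def build_ldrd_dictionary(train_imgs, train_labels, pca_dim=100):
    # Stack fused features as rows, compress with PCA (which also sidesteps the
    # small sample size problem in LDA), then project with LDA so that atoms of
    # the same class are pulled together and different classes are pushed apart.
    X = np.stack([fuse_features(im) for im in train_imgs])
    pca = PCA(n_components=min(pca_dim, len(train_imgs) - 1)).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), train_labels)
    D = lda.transform(pca.transform(X)).T          # dictionary: one column per atom
    D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms, as SRC expects
    return pca, lda, D

A probe image would then be mapped through the same fuse_features/PCA/LDA chain and sparse-coded against D by SRC, CRC_RLS, or LASRC.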





Acknowledgments

The authors would like to thank Dr. Zhizhen Liang for helpful and informative discussions on face recognition and the design of the experiments. This work was supported by the National High Technology Research and Development Program of China (Grant Nos. 2012AA0622022 and 2012AA011004), the Doctoral Fund of the Ministry of Education of China (Grant Nos. 20100095110003 and 20110095110010), the Fundamental Research Funds for the Central Universities (Grant No. 2013XK10), the National Natural Science Foundation of China (Grant No. 61402482), and the key project of the coal joint fund under the National Natural Science Foundation of China (Grant No. U1261201).

Author information

Corresponding author

Correspondence to Fanrong Meng.


Cite this article

Meng, F., Tang, Z. & Wang, Z. An improved redundant dictionary based on sparse representation for face recognition. Multimed Tools Appl 76, 895–912 (2017). https://doi.org/10.1007/s11042-015-3083-6
