
Single sample per person face recognition with KPCANet and a weighted voting scheme

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Most current face recognition methods rely on having multiple samples per person available for feature extraction. In practical applications, however, only one sample per person may be available for training, so many traditional methods fall short and face recognition becomes considerably more challenging. To address this single-sample-per-person problem, this study proposes a face recognition algorithm based on a kernel principal component analysis network (KPCANet) combined with a weighted voting scheme. First, each aligned face image is partitioned into several non-overlapping patches to form the training set. Next, a KPCANet is trained to obtain the filters and feature banks. Finally, an unlabeled probe is identified by applying the weighted voting scheme to its patch-level matches. Experiments on several widely used face datasets demonstrate the superiority of the proposed method.
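The two pipeline steps that do not involve the KPCANet itself can be sketched as follows. This is a minimal illustration in Python with NumPy, not the authors' implementation: the function names, the patch-score representation, and the per-patch weights are our own assumptions, and the KPCANet feature extraction that would produce the scores is omitted.

```python
import numpy as np

def partition_patches(img, patch_h, patch_w):
    """Split an aligned face image into non-overlapping patches."""
    H, W = img.shape
    patches = []
    for i in range(0, H - patch_h + 1, patch_h):
        for j in range(0, W - patch_w + 1, patch_w):
            patches.append(img[i:i + patch_h, j:j + patch_w])
    return patches

def weighted_vote(patch_scores, weights):
    """Fuse per-patch decisions into one identity by weighted voting.

    patch_scores: (n_patches, n_identities) similarity of each probe
    patch to each gallery identity (here, assumed precomputed from
    patch features); weights: (n_patches,) per-patch reliabilities.
    """
    votes = np.zeros(patch_scores.shape[1])
    for scores, w in zip(patch_scores, weights):
        votes[np.argmax(scores)] += w  # each patch votes for its best match
    return int(np.argmax(votes))  # identity with the largest weighted vote
```

For example, with four patches where two unreliable patches favor identity 0 and a highly weighted patch favors identity 1, the weighted vote can overturn the simple majority.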




Acknowledgements

This research is supported by the strategic priority research program "Real-time Processing System of Massive Network Traffic Based on Sea-cloud Collaboration" (Grant No. XDA060112030) of the Chinese Academy of Sciences. We also want to show our deepest gratitude to Tom Tomaszewski for improving the language of our manuscript.

Author information


Corresponding author

Correspondence to Chunhui Ding.


About this article


Cite this article

Ding, C., Bao, T., Karmoshi, S. et al. Single sample per person face recognition with KPCANet and a weighted voting scheme. SIViP 11, 1213–1220 (2017). https://doi.org/10.1007/s11760-017-1077-8
