
Converted-face identification: using synthesized images to replace original images for recognition

Published in Multimedia Tools and Applications

Abstract

Changes in the appearance of faces, usually caused by variations in pose, expression and illumination, increase data uncertainty in face recognition. Insufficient training samples cannot provide abundant multi-view observations of a face. To address this issue, many pioneering works focus on generating virtual training images to improve recognition performance. However, the same issue exists in the test set, where a test image conveys only a split-second representation of a face and cannot capture more comprehensive features. In this paper, we propose a new face synthesis method for face recognition. In the proposed pipeline, we synthesize a virtual image from both the original image and its mirror image, and we apply this technique to both the training and test sets. We then use the newly generated training and test images in place of the original ones for face recognition. The aim is to increase the similarity between a test image and its intra-class training images. The proposed method is effective and computationally efficient. To verify this, we compare the recognition accuracy of multiple face recognition methods, including statistical subspace learning algorithms and representation-based classification approaches, using either the synthesized images or the original images. Experimental results on the FERET, ORL, GT, PIE and LFW databases show that the proposed approach improves face recognition accuracy, especially on faces with left-right pose variations.
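As a concrete illustration of the pipeline described above, the sketch below shows one plausible way a converted face could be produced: every image, in both the training and test sets, is replaced by a blend of itself and its left-right mirror before any recognition method is applied. The function names, the pixel-wise weighted blend and the parameter `alpha` are illustrative assumptions, not the paper's exact synthesis rule.

```python
import numpy as np


def synthesize_virtual_face(image, alpha=0.5):
    """Blend a face image with its left-right mirror.

    The equal-weight pixel-wise blend and ``alpha`` are assumptions made
    for illustration; the paper's exact synthesis formula may differ.
    """
    image = np.asarray(image, dtype=np.float64)
    mirror = np.fliplr(image)                 # horizontal (left-right) mirror
    return alpha * image + (1.0 - alpha) * mirror


def convert_set(images, alpha=0.5):
    """Replace every face image in a set with its synthesized counterpart."""
    return [synthesize_virtual_face(img, alpha) for img in images]


if __name__ == "__main__":
    # Stand-in data: random 32x32 "faces". In practice these would be
    # cropped grey-scale face images from a database such as ORL or FERET.
    rng = np.random.default_rng(0)
    train_images = [rng.random((32, 32)) for _ in range(10)]
    test_images = [rng.random((32, 32)) for _ in range(5)]

    # Both sets are converted before a recognizer (subspace learning or
    # representation-based classification) is trained and evaluated.
    train_converted = convert_set(train_images)
    test_converted = convert_set(test_images)
    print(train_converted[0].shape, len(test_converted))
```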

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive suggestions. This work was supported by the Natural Science Foundation of China (Grant no. 61373055), the Natural Science Foundation of Jiangsu Province (Grant nos. BK2012700 and BK20130473), the Foundation of Artificial Intelligence Key Laboratory of Sichuan Province (Grant no. 2012RZY02), the Open Project Program of the State Key Lab of CAD&CG of Zhejiang University (Grant no. A1418) and the Fundamental Research Funds for the Central Universities (Grant nos. JUSRP115A29 and JUSRP51410B).

Author information

Corresponding author: Xiaoning Song.

Cite this article

Shao, C., Song, X., Shu, X. et al. Converted-face identification: using synthesized images to replace original images for recognition. Multimed Tools Appl 76, 6641–6661 (2017). https://doi.org/10.1007/s11042-016-3349-7
