Age-invariant face recognition using gender specific 3D aging modeling

Abstract

Age-invariant face recognition (AIFR) is a relatively new area of research in the face recognition domain that has recently gained substantial attention due to its great potential and importance in real-world applications. However, AIFR is still emerging, leaving considerable room for further investigation and accuracy improvement. The key challenges in AIFR are considerable changes in the appearance of facial skin (wrinkles, jaw lines), facial shape, and skin tone, in combination with variations in pose and illumination. These challenges limit current AIFR systems and complicate identity verification, especially under temporal variation. Addressing this problem requires a temporally invariant face verification system that is robust to several factors, such as aging (shape, texture), pose, and illumination. In this study, we present a 3D gender-specific aging model that is robust to aging and pose variations and provides better recognition performance than conventional state-of-the-art AIFR systems. The gender-specific age modeling is performed in the 3D domain from 2D facial images drawn from various datasets, such as PCSO, BROWNS, Celebrities, Private, and FG-NET. The proposed approach is evaluated on FG-NET (the most widely used database in AIFR studies) and MORPH-Album2 (the largest aging database) using the VGG-Face CNN descriptor for matching. In addition, we test the effects of linear discriminant analysis (LDA) and principal component analysis (PCA) subspace learning in our face verification experiments. The proposed AIFR system is evaluated on both pose-corrected and background-composited age-simulated images. The experimental results demonstrate that the proposed system provides state-of-the-art performance on FG-NET (83.89% rank-1, 43.24% TAR) and performance comparable to the state of the art on MORPH-Album2 (75.27% rank-1, 96.93% TAR).
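The matching pipeline summarized in the abstract (VGG-Face CNN descriptors, optional PCA/LDA subspace learning, and evaluation by rank-1 identification and TAR) can be illustrated with a minimal sketch. The snippet below is not the authors' code: it assumes descriptors have already been extracted by a pretrained VGG-Face network and substitutes random placeholder features, and the subject counts, dimensionalities, and FAR operating point are illustrative assumptions only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_id, per_id, dim = 40, 4, 512          # toy sizes; real VGG-Face descriptors are 4096-D

# Placeholder "descriptors": one latent center per identity plus noise.
centers = rng.normal(size=(n_id, dim))
train = (centers[:, None, :] + 0.3 * rng.normal(size=(n_id, per_id, dim))).reshape(-1, dim)
train_labels = np.repeat(np.arange(n_id), per_id)
gallery = centers + 0.3 * rng.normal(size=(n_id, dim))   # enrolled (e.g. age-simulated) images
probes = centers + 0.3 * rng.normal(size=(n_id, dim))    # query images at a different age
labels = np.arange(n_id)                                 # probe i belongs to gallery identity i

# Subspace learning: PCA for dimensionality reduction, then LDA on identity labels.
pca = PCA(n_components=60).fit(train)
lda = LinearDiscriminantAnalysis(n_components=n_id - 1).fit(pca.transform(train), train_labels)

def project(x):
    return lda.transform(pca.transform(x))

# Cosine-similarity matching between every probe and every gallery descriptor.
g = project(gallery)
p = project(probes)
g /= np.linalg.norm(g, axis=1, keepdims=True)
p /= np.linalg.norm(p, axis=1, keepdims=True)
scores = p @ g.T

# Rank-1 identification: the most similar gallery entry has the true identity.
rank1 = np.mean(scores.argmax(axis=1) == labels)

# Verification: TAR at the threshold where 1% of impostor scores are accepted (FAR = 1%).
genuine = np.diag(scores)
impostor = scores[~np.eye(n_id, dtype=bool)]
threshold = np.quantile(impostor, 0.99)
tar = np.mean(genuine >= threshold)
print(f"rank-1 = {rank1:.2%}, TAR @ FAR=1% = {tar:.2%}")
```

With real descriptors, the same projection and scoring would simply replace the random placeholders; PCA and LDA can also be omitted to match raw VGG-Face features directly, which is one of the configurations reported in the abstract.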

Notes

  1. The PCSO dataset is a record of criminal data collected from 130 subjects.

  2. The BROWNS dataset is a collection of pictures of four sisters taken every year over a period of 33 years, from 1975 to 2007 [20].

  3. The Celebrities dataset is a collection of images of 11 female and male subjects, retrieved through online searches by name and year.

  4. Private collection.

References

  1. Blanz V, Vetter T (1999) A morphable model for the synthesis of 3D faces. Proc ACM SIGGRAPH Conf on Comput Graph Interact Tech (CGIT): 187–194. https://doi.org/10.1145/311535.311556

  2. Bianco S (2016) Large age-gap face verification by feature injection in deep networks. Comput Res Rep arXiv:1602.06149

  3. Chen BC, Chen CS, Hsu WH (2015) Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset. IEEE Trans on Multimedia 17(6):804–815

  4. Chen D, Cao X, Wen F, Sun J (2013) Blessing of dimensionality: high-dimensional feature and its efficient compression for face verification. IEEE Conf Comput Vis Pattern Recognit (CVPR): 3025–3032

  5. Cognitec (2010) FaceVACS software developer kit. Cognitec Systems GmbH, http://www.cognitec-systems.de. Accessed 13 Oct 2017

  6. Ding C, Tao D (2016) A comprehensive survey on pose-invariant face recognition. ACM Trans on Intelligent Syst Technol 7(3):1–42

  7. Ding L, Ding X, Fang C (2012) Continuous pose normalization for pose-robust face recognition. IEEE Signal Process Lett 19(11):721–724

  8. Du J-X, Zhai C-M, Ye Y-Q (2013) Face aging simulation and recognition based on NMF algorithm with sparseness constraints. Neurocomputing 116:250–259

  9. Yi D, Lei Z, Liao S, Li SZ (2014) Learning face representation from scratch. arXiv:1411.7923

  10. Farkas LG (1994) Anthropometry of the head and face. Raven Press, Lippincott Williams & Wilkins, New York

  11. Huang GB, Learned-Miller E (2014) Labeled faces in the wild: updates and new reporting procedures. Tech Rep UM-CS-2014-003, University of Massachusetts, Amherst, May 2014

  12. Geng X, Zhou Z, Smith-Miles K (2007) Automatic age estimation based on facial aging patterns. IEEE Trans Pattern Anal Mach Intell (PAMI) 29(12):2234–2240

  13. Gong D, Li Z, Lin D, Liu J, Tang X (2013) Hidden factor analysis for age invariant face recognition. IEEE Intl Conf on Comput Vision (ICCV), Sydney, NSW. 2872–2879

  14. Ho HT, Chellappa R (2013) Pose-invariant face recognition using markov random fields. IEEE Trans Image Process 22(4):1573–1584

  15. Hwang J, Yu S, Kim J, Lee S (2012) 3D face modeling using the multi-deformable method. Sensors 12(10):12870–12889. https://doi.org/10.3390/s121012870

  16. Klare B, Jain AK (2011) Face recognition across time lapse: on learning feature subspaces. Int Joint Conf on Biometrics (IJCB): 1–8

  17. Li X, Mori G, Zhang H (2006) Expression-invariant face recognition with expression classification. The 3rd Canadian Conf on Comput Robot Vision (CRV): 77–77

  18. Li Z, Park U, Jain AK (2011) A discriminative model for age invariant face recognition. IEEE Trans Inf Forensics Secur 6(3):1028–1037

  19. Milborrow S, Nicolls F (2014) Active shape models with SIFT descriptors and MARS. Int Conf Comput Vis Theory Appl (VISAPP)

  20. Nixon N, Galassi P (2007) The Brown sisters: thirty-three years. The Museum of Modern Art, New York

  21. Otto C, Han H, Jain AK (2012) How does aging affect facial components? In ECCV Workshops 2:189–198

  22. Park U, Tong Y, Jain AK (2010) Age-invariant face recognition. IEEE Trans Pattern Anal Mach Intell (PAMI) 32(5):947–954

  23. Parkhi OM, Vedaldi A, Zisserman A (2015) Deep face recognition. British Machine Vision Conference

  24. Tikhonov AN, Arsenin VY (1977) Solutions of ill-posed problems. Winston & Sons, Washington. ISBN 0-470-99124-0

  25. Taigman Y, Yang M, Ranzato M, Wolf L (2014) Deepface: closing the gap to human-level performance in face verification. Proc IEEE Conf Comput Vis Pattern Recognit (CVPR): 1701–1708

  26. Yang H, Huang D, Wang Y (2014) Age invariant face recognition based on texture embedded discriminative graph model. IEEE Intl Joint Conf Biometrics (IJCB), Clearwater, FL, 1–8

  27. Zhu X, Lei Z, Yan J, Yi D, Li SZ (2015) High-fidelity pose and expression normalization for face recognition in the wild. IEEE Conf Comput Vis Pattern Recognit (CVPR), Boston, MA, 787–796

Acknowledgements

This research was funded in part by IARPA’s Janus program under contract number 2014-14071600011 and by the ICT R&D program of MSIP/IITP [R0126-16-1112, Development of Media Application Framework based on Multi-modality which enables Personal Media Reconstruction].

Corresponding author

Correspondence to Unsang Park.

Cite this article

Riaz, S., Ali, Z., Park, U. et al. Age-invariant face recognition using gender specific 3D aging modeling. Multimed Tools Appl 78, 25163–25183 (2019). https://doi.org/10.1007/s11042-019-7694-1
