
Aligned texture map creation for pose invariant face recognition


Abstract

In recent years, face recognition based on 3D techniques has emerged as a technology that achieves better results than conventional 2D approaches. Using texture (180° multi-view images) and depth maps is expected to increase robustness to the two main challenges in face recognition: pose and illumination. Nevertheless, 3D data must be acquired under highly controlled conditions and, in most cases, depends on the collaboration of the subject to be recognized. Thus, in applications such as surveillance or access control points, this kind of 3D data may not be available during the recognition process. This leads to a new paradigm of mixed 2D-3D face recognition systems, where 3D data is used for training but either 2D or 3D information can be used for recognition, depending on the scenario. Following this concept, in which only part of the information (the partial concept) is used for recognition, a novel method is presented in this work. It has been called Partial Principal Component Analysis (P2CA), since it fuses the partial concept with the fundamentals of the well-known PCA algorithm. This strategy has proven to be very robust in pose-variation scenarios, showing that the 3D training process retains all the spatial information of the face while the 2D picture effectively recovers the face information from the available data. Furthermore, this work presents a novel approach for the automatic creation of 180° aligned, cylindrically projected face images from nine different views. These face images are created using a cylindrical approximation of the real object surface. The alignment is performed by first applying a global 2D affine transformation to the image and then a local transformation of the desired face features using a triangle mesh. This local alignment lets the comparison focus on the feature properties rather than on positional differences. Finally, these aligned face images are used to train a pose-invariant face recognition approach (P2CA).
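The two alignment steps summarized above (a global 2D affine transform followed by a local, triangle-mesh-based warp) and the cylindrical face approximation can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function names, landmark conventions, and parameters (estimate_affine, piecewise_affine_map, cylindrical_column, cx, radius) are assumptions introduced here for clarity.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2D affine transform mapping src_pts -> dst_pts.

    src_pts, dst_pts: (N, 2) arrays of corresponding facial landmarks
    (e.g. eye centres, nose tip, mouth corners), N >= 3.  Returns a
    2x3 matrix A such that dst ~= A @ [x, y, 1]  (global alignment step).
    """
    n = src_pts.shape[0]
    X = np.hstack([src_pts, np.ones((n, 1))])        # (N, 3) design matrix
    B, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)  # (3, 2) affine parameters
    return B.T                                       # (2, 3)

def piecewise_affine_map(pt, tri_src, tri_dst):
    """Map one point from a source triangle to a destination triangle
    via barycentric coordinates (the local, mesh-based warp).

    tri_src, tri_dst: (3, 2) vertex arrays; pt: (2,) point assumed to
    lie inside tri_src.
    """
    T = np.column_stack([tri_src[1] - tri_src[0], tri_src[2] - tri_src[0]])
    l1, l2 = np.linalg.solve(T, pt - tri_src[0])     # barycentric coordinates
    l0 = 1.0 - l1 - l2
    return l0 * tri_dst[0] + l1 * tri_dst[1] + l2 * tri_dst[2]

def cylindrical_column(theta_deg, cx, radius):
    """Horizontal coordinate in a view corresponding to the angle theta
    on the cylinder approximating the head surface (cylindrical
    projection used to unwrap the 180° texture map)."""
    return cx + radius * np.sin(np.deg2rad(theta_deg))
```

In a full pipeline of this kind, each of the nine views would contribute the angular range it observes best, the warped strips would be blended into a single 180° texture map, and those aligned maps would then serve as training data for the recognition stage.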


Notes

  1. The nine ranges correspond to the nine different views used to create the texture maps, as explained in Section 2.


Acknowledgement

The work presented here was developed within VISNET II, a European Network of Excellence funded under the EC IST FP6 programme.

Author information


Corresponding author

Correspondence to Antonio Rama.


About this article

Cite this article

Rama, A., Tarrés, F. & Rurainsky, J. Aligned texture map creation for pose invariant face recognition. Multimed Tools Appl 49, 545–565 (2010). https://doi.org/10.1007/s11042-009-0447-9
