
Recognize the facial emotion in video sequences using eye and mouth temporal Gabor features

  • Published in: Multimedia Tools and Applications

Abstract

Machine analysis of facial emotion is a challenging and active research topic in Human-Computer Intelligent Interaction (HCII). The eye and mouth regions are the most informative components for facial emotion recognition, yet most existing approaches do not exploit their temporal features to achieve a high recognition rate. This paper proposes a novel approach to recognizing facial emotions using eye and mouth temporal features. Local features are extracted in each frame using a Gabor wavelet with selected scales and orientations, and are passed to an ensemble classifier that locates the face region. From the signature of the face region, the eye and mouth regions are detected using an ensemble classifier. Blocks of temporal features are then extracted from the signatures of the eye and mouth regions across consecutive frames. Within each block, the eye and mouth temporal features are normalized by the Z-score technique and encoded into binary pattern features; the encoded eye and mouth features are concatenated to form the enhanced temporal feature. Multi-class AdaBoost selects and classifies the discriminative temporal features to recognize the facial emotion. The proposed methods are evaluated on the RML and CK databases and, owing to their use of temporal features, show significant performance improvement over existing techniques.
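The normalization-and-encoding step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block shapes, the epsilon guard, the zero threshold on the z-scores, and all function names are assumptions made for the example.

```python
import numpy as np

def encode_temporal_block(block):
    """Z-score normalize a block of temporal features, then
    threshold into a binary pattern (1 where above the block mean)."""
    z = (block - block.mean()) / (block.std() + 1e-8)  # epsilon guards a constant block
    return (z > 0).astype(np.uint8).ravel()

def enhanced_temporal_feature(eye_block, mouth_block):
    """Concatenate the encoded eye and mouth binary patterns
    into one enhanced temporal feature vector."""
    return np.concatenate([encode_temporal_block(eye_block),
                           encode_temporal_block(mouth_block)])

# Toy blocks: 4 consecutive frames x 6 Gabor responses per region.
rng = np.random.default_rng(0)
eye = rng.random((4, 6))
mouth = rng.random((4, 6))
feat = enhanced_temporal_feature(eye, mouth)
print(feat.shape)  # (48,) -- 24 eye bits + 24 mouth bits
```

The resulting binary vectors would then be the input to the multi-class AdaBoost stage, which selects the most discriminative bits as weak learners.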





Acknowledgments

This work was supported by the management of Mepco Schlenk Engineering College, Sivakasi, Tamil Nadu, India. The authors thank the management and the Principal of the college for providing the research facilities to carry out this work.

Author information


Corresponding author

Correspondence to P. Ithaya Rani.



Cite this article

Rani, P.I., Muneeswaran, K. Recognize the facial emotion in video sequences using eye and mouth temporal Gabor features. Multimed Tools Appl 76, 10017–10040 (2017). https://doi.org/10.1007/s11042-016-3592-y

