
Emotion recognition using facial expression by fusing key points descriptor and texture features

Published in: Multimedia Tools and Applications

Abstract

Emotions have great significance in human-to-human and human-to-computer communication and interaction. In this paper, an effective and novel approach to recognizing emotions from facial expressions by fusing duplex features is proposed. The approach broadly has three phases: Phase I, ROI extraction; Phase II, fusion of duplex features; and Phase III, classification. It also introduces a novel eye-centre detection algorithm whose output further contributes to locating and partitioning the facial components. The hybrid combination of duplex features also demonstrates the advantage of feature fusion over individual features. The proposed approach classifies the five basic emotions, i.e. angry, happy, sad, disgust and surprise. It also raises the issue of the high misclassification rate of emotions in higher age groups (>40) and successfully overcomes it. The proposed approach and its outcomes are validated on four datasets: a dataset created by us comprising 2500 images of the five basic emotions (angry, happy, sad, disgust, surprise) with 500 images per emotion, the CK+ dataset, the MMI dataset and the JAFFE dataset. Experimental results show that the proposed work significantly improves the recognition rate (approx. 97%, 88%, 86% and 93%) and reduces the misclassification rate (approx. 1.4%, 7.6%, 6.6% and 2.7%), even for subjects of the higher age group.
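To make the three-phase pipeline concrete, the sketch below assembles one plausible instantiation in Python: a Haar-cascade face detector for ROI extraction, SIFT key-point descriptors fused with a uniform LBP texture histogram by simple concatenation, and an RBF-kernel SVM for classification. The specific descriptors, the concatenation-based fusion, the classifier settings and the 128×128 crop size are illustrative assumptions, not the paper's exact method; the eye-centre detection and facial-component partitioning steps described in the abstract are omitted here.

```python
# Minimal sketch of a duplex-feature emotion-recognition pipeline.
# Descriptor choices (SIFT + LBP), fusion by concatenation and the SVM
# hyper-parameters are assumptions for illustration only.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# The five basic emotions used as class labels in the abstract.
EMOTIONS = ["angry", "happy", "sad", "disgust", "surprise"]

# Haar-cascade frontal-face model shipped with OpenCV (assumed detector choice).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
sift = cv2.SIFT_create()  # key-point descriptor (assumed; OpenCV >= 4.4)


def extract_roi(gray):
    """Phase I (sketch): detect the face and crop/normalise it as the ROI."""
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return cv2.resize(gray, (128, 128))
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest detection
    return cv2.resize(gray[y:y + h, x:x + w], (128, 128))


def duplex_features(roi):
    """Phase II (sketch): fuse key-point and texture descriptors by concatenation."""
    # Key-point part: mean-pool the 128-D SIFT descriptors of the ROI.
    _, desc = sift.detectAndCompute(roi, None)
    kp_feat = desc.mean(axis=0) if desc is not None else np.zeros(128)
    # Texture part: rotation-invariant uniform LBP yields P + 2 = 10 codes for P = 8.
    lbp = local_binary_pattern(roi, P=8, R=1, method="uniform")
    tex_feat, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([kp_feat, tex_feat])


def train_and_predict(train_images, train_labels, test_image):
    """Phase III (sketch): SVM classification over the fused feature vectors."""
    X = [duplex_features(extract_roi(img)) for img in train_images]
    clf = SVC(kernel="rbf", C=10, gamma="scale")  # placeholder hyper-parameters
    clf.fit(X, train_labels)
    return clf.predict([duplex_features(extract_roi(test_image))])[0]
```

Given grayscale training images labelled with entries from EMOTIONS, train_and_predict shows how the fused vectors feed the classifier; in practice the kernel parameters would be tuned by cross-validation on each dataset.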






Author information

Correspondence to Mukta Sharma.



About this article


Cite this article

Sharma, M., Jalal, A.S. & Khan, A. Emotion recognition using facial expression by fusing key points descriptor and texture features. Multimed Tools Appl 78, 16195–16219 (2019). https://doi.org/10.1007/s11042-018-7030-1

