
Detecting facial emotions using normalized minimal feature vectors and semi-supervised twin support vector machines classifier


Abstract

In this paper, human facial emotions are detected from normalized minimal feature vectors using semi-supervised Twin Support Vector Machine (TWSVM) learning. Face detection and tracking are carried out with the Constrained Local Model (CLM), which yields 66 feature points forming the entire feature vector set. Following the definition of Facial Animation Parameters (FAPs), the entire feature vectors are those that visibly reflect human emotion. This paper proposes that the 13 minimal feature vectors with the highest variance among the entire set are sufficient to identify the six basic emotions. Two types of normalized minimal feature vectors are formed using the Min-Max and Z-score normalization techniques. The methodological novelty of this study is that the normalized minimal feature vectors are fed as input to a semi-supervised multi-class TWSVM classifier to classify the emotions. The macro facial expression data comprise a standard database and several real-time datasets. Ten-fold and hold-out cross-validation are applied to the cross-database (combining standard and real-time data). In the experiments, the 'one-vs-one' and 'one-vs-all' multi-class techniques with three kernel functions produce 36 trained models per emotion, and their validation parameters are calculated. The overall accuracy achieved is 93.42 ± 3.25% for 10-fold cross-validation and 92.05 ± 3.79% for hold-out cross-validation. The overall performance of the proposed model (precision, recall, F1-score, error rate, and computation time) was also calculated. Comparison with existing methods indicates that the proposed model is more reliable.
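As a concrete illustration of the preprocessing described above, the minimal Python sketch below selects the 13 highest-variance features from a 66-value CLM feature set and applies the two normalizations. The data, array shapes, and variable names are assumptions made for illustration only; the paper's actual CLM landmark extraction and FAP-based feature definitions are not reproduced here.

```python
import numpy as np

# Hypothetical data standing in for the CLM output: 200 face samples,
# each described by 66 feature values (the "entire" feature vector set).
rng = np.random.default_rng(0)
X = rng.random((200, 66))

# Keep the 13 features with the highest variance across the dataset,
# mirroring the variance-based minimal feature selection in the abstract.
top13 = np.argsort(X.var(axis=0))[-13:]
X13 = X[:, top13]

# Min-Max normalization: rescale each selected feature to [0, 1].
X_minmax = (X13 - X13.min(axis=0)) / (X13.max(axis=0) - X13.min(axis=0))

# Z-score normalization: zero mean and unit variance per feature.
X_z = (X13 - X13.mean(axis=0)) / X13.std(axis=0)
```

Either normalized matrix would then serve as input to the semi-supervised multi-class TWSVM classifier; TWSVM has no mainstream library implementation, so the classification stage is omitted from this sketch.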



Acknowledgments

The authors would like to thank the Internet of Things (IoT) laboratory, SENSE, and their research colleagues at Vellore Institute of Technology, Chennai, India, for the real-time facial emotion dataset and for support in carrying out this research work.

Funding

No funding.

Author information

Corresponding author

Correspondence to Manoj Prabhakaran Kumar.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kumar, M.P., Rajagopal, M.K. Detecting facial emotions using normalized minimal feature vectors and semi-supervised twin support vector machines classifier. Appl Intell 49, 4150–4174 (2019). https://doi.org/10.1007/s10489-019-01500-w
