
Automatic Facial Expression Recognition—A Circumcenter–Incenter–Centroid (CIC) Trio Feature-Induced Approach

  • Original Research
  • Published:
SN Computer Science

Abstract

Facial expression is directly related to changes in the shape of the face. The Active Appearance Model (AAM) can be used to determine the geometrical positions of the basic facial components by marking landmark points. The landmark points most relevant to the basic facial expressions are used to generate a set of triangles covering the face. For each generated triangle, the area of the triangle formed by connecting its Circumcenter, Incenter, and Centroid is taken as the key shape descriptor. This novel feature is learned with a Multi-Layer Perceptron (MLP) to classify expressions into six atomic classes, viz. Anger, Disgust, Fear, Happiness, Sadness, and Surprise. The proposed system is tested on four well-known benchmark databases: (I) the Extended Cohn–Kanade (CK+), (II) the Japanese Female Facial Expression (JAFFE), (III) the Multimedia Imaging (MMI), and (IV) the Multimedia Understanding Group (MUG) databases. Impressive results on all four databases confirm the effectiveness and efficiency of the proposed method.
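
To make the shape descriptor concrete, the following is a minimal sketch in Python (not the authors' implementation) of how the Circumcenter, Incenter, and Centroid of a single landmark triangle, and the area of the triangle they span, could be computed from 2-D landmark coordinates. The function names and the example coordinates are illustrative only.

import math

def centroid(A, B, C):
    # Arithmetic mean of the three vertices.
    return ((A[0] + B[0] + C[0]) / 3.0, (A[1] + B[1] + C[1]) / 3.0)

def circumcenter(A, B, C):
    # Intersection of the perpendicular bisectors (standard closed form).
    ax, ay = A
    bx, by = B
    cx, cy = C
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)

def incenter(A, B, C):
    # Weighted mean of the vertices; each weight is the length of the opposite side.
    a = math.dist(B, C)
    b = math.dist(C, A)
    c = math.dist(A, B)
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

def triangle_area(P, Q, R):
    # Shoelace formula.
    return 0.5 * abs(P[0] * (Q[1] - R[1]) + Q[0] * (R[1] - P[1]) + R[0] * (P[1] - Q[1]))

def cic_area(A, B, C):
    # Area of the triangle spanned by circumcenter, incenter, and centroid.
    return triangle_area(circumcenter(A, B, C), incenter(A, B, C), centroid(A, B, C))

# Hypothetical landmark triangle (pixel coordinates of three facial landmarks).
print(cic_area((120.0, 80.0), (180.0, 95.0), (150.0, 140.0)))

The three centers coincide for an equilateral triangle and become collinear for an isosceles one, so the CIC area vanishes in those cases and grows as the landmark triangle is deformed away from symmetry, which is presumably what makes the feature sensitive to expression-induced shape changes. One such value per triangle in the facial triangulation would then be collected into the feature vector fed to the MLP.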

References

  1. Aifanti N, Papachristou C, Delopoulos A. The MUG facial expression database. In: 11th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), IEEE, 2010, pp. 1–4.

  2. Nandi A, Dutta P, Nasir M. Face expression recognition using side length features induced by landmark triangulation. In: Computational intelligence for human action recognition. London: Chapman and Hall/CRC; 2020. p. 53–72.

  3. Barman A, Dutta P. Facial expression recognition using distance and shape signature features. Pattern Recogn Lett. 2017;145:254–61.

  4. Barman A, Dutta P. Facial expression recognition using distance and texture signature relevant features. Appl Soft Comput. 2019;77:88–105.

  5. Cootes TF, Edwards GJ, Taylor CJ. Active appearance models. IEEE Trans Pattern Anal Mach Intell. 2001;23(6):681–5.

  6. Edwards GJ, Cootes TF, Taylor CJ. Face recognition using active appearance models. In: European conference on computer vision. Berlin: Springer; 1998. p. 581–95.

  7. Ekman P, Friesen WV. Constants across cultures in the face and emotion. J Personal Soc Psychol. 1971;17(2):124.

  8. Freitas-Magalhães A. Facial expression of emotion: from theory to application. Leya. 2013.

  9. Ekman P, Friesen WV. Facial action coding system: a technique for the measurement of facial movement. Palo Alto; 1978.

  10. Happy S, Routray A. Automatic facial expression recognition using features of salient facial patches. IEEE Trans Affect Comput. 2014;6(1):1–12.

  11. Ji Y, Idrissi K. Automatic facial expression recognition based on spatiotemporal descriptors. Pattern Recogn Lett. 2012;33(10):1373–80.

  12. Jung H, Lee S, Yim J, Park S, Kim J. Joint fine-tuning in deep neural networks for facial expression recognition. In: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2983–91.

  13. Kotsia I, Buciu I, Pitas I. An analysis of facial expression recognition under partial facial image occlusion. Image Vis Comput. 2008;26(7):1052–67.

  14. Kotsia I, Pitas I. Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Trans Image Process. 2006;16(1):172–87.

  15. Kumari J, Rajesh R, Pooja K. Facial expression recognition: a survey. Proced Comput Sci. 2015;58:486–91.

  16. Kuo CM, Lai SH, Sarkis M. A compact deep learning model for robust facial expression recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2018, pp. 2121–9.

  17. Lopes AT, de Aguiar E, De Souza AF, Oliveira-Santos T. Facial expression recognition with convolutional neural networks: coping with few data and the training sample order. Pattern Recogn. 2017;61:610–28.

  18. Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I. The Extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2010, pp. 94–101.

  19. Lyons M, Akamatsu S, Kamachi M, Gyoba J. Coding facial expressions with Gabor wavelets. In: Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, IEEE, 1998, pp. 200–5.

  20. Magdin M, Prikler F. Real-time facial expression recognition using webcam and SDK Affectiva. IJIMAI. 2018;5(1):7–15.

  21. Milborrow S, Nicolls F. Locating facial features with an extended active shape model. In: European conference on computer vision. Berlin: Springer; 2008. p. 504–13.

  22. Mollahosseini A, Chan D, Mahoor MH. Going deeper in facial expression recognition using deep neural networks. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, 2016, pp. 1–10.

  23. Møller MF. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993;6(4):525–33.

  24. Nandi A, Dutta P, Nasir M. Recognizing human emotions from facial images by landmark triangulation: a combined circumcenter-incenter-centroid trio feature-based method. In: Algorithms in machine learning paradigms. Berlin: Springer; 2020. p. 147–64.

  25. Rahulamathavan Y, Phan RCW, Chambers JA, Parish DJ. Facial expression recognition in the encrypted domain based on local fisher discriminant analysis. IEEE Trans Affect Comput. 2012;4(1):83–92.

  26. Richhariya B, Gupta D. Facial expression recognition using iterative universum twin support vector machine. Appl Soft Comput. 2019;76:53–67. https://doi.org/10.1016/j.asoc.2018.11.046.

  27. Sagonas C, Antonakos E, Tzimiropoulos G, Zafeiriou S, Pantic M. 300 faces in-the-wild challenge: database and results. Image Vis Comput. 2016;47:3–18.

  28. Shan C, Gong S, McOwan PW. Robust facial expression recognition using local binary patterns. IEEE Int Conf Image Process. 2005;2:2–370.

  29. Shan C, Gong S, McOwan PW. Facial expression recognition based on local binary patterns: a comprehensive study. Image Vis Comput. 2009;27(6):803–16.

  30. Valstar M, Pantic M. Induced disgust, happiness and surprise: an addition to the MMI facial expression database. In: Proceedings of 3rd International Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect, 2010, p. 65.

  31. Yi J, Chen A, Cai Z, Sima Y, Zhou M, Wu X. Facial expression recognition of intercepted video sequences based on feature point movement trend and feature block texture variation. Appl Soft Comput. 2019. https://doi.org/10.1016/j.asoc.2019.105540.

  32. Zavaschi TH, Britto AS Jr, Oliveira LE, Koerich AL. Fusion of feature sets and classifiers for facial expression recognition. Expert Syst Appl. 2013;40(2):646–55.

  33. Zhao X, Shi X, Zhang S. Facial expression recognition via deep learning. IETE Tech Rev. 2015;32(5):347–55.

Funding

This research is funded by the UGC and the Council of Scientific and Industrial Research (CSIR), Grant no. UGC-Ref. No.: 3437/(OBC)(NET-JAN 2017).

Author information

Corresponding author

Correspondence to Avishek Nandi.

Ethics declarations

Conflict of interest

The MUG database was provided by Dr. A. Delopoulos and the MMI database was provided by Prof. Maja Pantic. Author Avishek Nandi thanks the University Grants Commission (UGC), India for providing the NET-JRF fellowship (UGC-Ref. No.: 3437/(OBC)(NET-JAN 2017)) for this research. Author Md Nasir thanks the Department of Science and Technology, Govt. of India for providing the DST-INSPIRE fellowship for conducting this research. Author Paramartha Dutta is a Senior Professor at Visva-Bharati University.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Thanks to the University Grants Commission (UGC), India for providing the NET-JRF fellowship to Avishek Nandi for conducting this research (UGC-Ref. No.: 3437/(OBC)(NET-JAN 2017)). The MUG database was provided by Dr. A. Delopoulos and the MMI database was provided by Prof. Maja Pantic.

This article is part of the topical collection “Next-Generation Digital Transformation through Intelligent Computing” guest edited by PN Suganthan, Paramartha Dutta, Jyotsna Kumar Mandal and Somnath Mukhopadhyay.

About this article

Cite this article

Nandi, A., Dutta, P. & Nasir, M. Automatic Facial Expression Recognition—A Circumcenter–Incenter–Centroid (CIC) Trio Feature-Induced Approach. SN COMPUT. SCI. 3, 8 (2022). https://doi.org/10.1007/s42979-021-00868-2
