Abstract
Facial expression is directly related to changes in the shape of a face. The Active Appearance Model (AAM) is used to determine the geometric positions of the basic facial components by marking landmark points. The landmark points relevant to the basic facial expressions are selected to generate a set of triangles covering the face. For each such triangle, the area of the inner triangle formed by connecting its circumcenter, incenter, and centroid is taken as the key shape descriptor. This novel feature is learned with a Multi-Layer Perceptron (MLP) to classify expressions into six atomic classes, viz. anger, disgust, fear, happiness, sadness, and surprise. The proposed system is tested on four well-known benchmark databases: (i) the Extended Cohn–Kanade (CK+), (ii) Japanese Female Facial Expression (JAFFE), (iii) Multimedia Imaging (MMI), and (iv) Multimedia Understanding Group (MUG). Impressive results on all four databases confirm the effectiveness and efficiency of the proposed method.
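The circumcenter–incenter–centroid (CIC) trio feature described above can be sketched in plain Python. The function names and the pure-coordinate-geometry formulation here are illustrative, not taken from the paper; in the actual pipeline each triangle's vertices would be AAM landmark points, and the resulting areas would form the feature vector fed to the MLP.

```python
import math

def centroid(A, B, C):
    """Centroid: the average of the three vertices."""
    return ((A[0] + B[0] + C[0]) / 3.0, (A[1] + B[1] + C[1]) / 3.0)

def incenter(A, B, C):
    """Incenter: vertices weighted by the lengths of the opposite sides."""
    a = math.dist(B, C)  # side opposite A
    b = math.dist(C, A)  # side opposite B
    c = math.dist(A, B)  # side opposite C
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

def circumcenter(A, B, C):
    """Circumcenter via the standard closed-form coordinate formula."""
    ax, ay = A
    bx, by = B
    cx, cy = C
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)

def cic_area(A, B, C):
    """Shoelace area of the triangle whose vertices are the three centres."""
    (x1, y1) = circumcenter(A, B, C)
    (x2, y2) = incenter(A, B, C)
    (x3, y3) = centroid(A, B, C)
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
```

For an equilateral triangle the three centres coincide, so the CIC area is zero; any departure from that symmetry, such as the deformation a facial expression induces on a landmark triangle, produces a positive area, which is what makes it usable as a shape descriptor.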
Funding
This research is funded by the UGC and the Council of Scientific and Industrial Research (CSIR), Grant no. [UGC-Ref. No.: 3437/(OBC)(NET-JAN 2017)].
Ethics declarations
Conflict of interest
The MUG database is provided by Dr. A. Delopoulos and the MMI database is provided by Prof. Maja Pantic. Author Avishek Nandi thanks the University Grants Commission (UGC), India for providing the NET-JRF fellowship (UGC-Ref. No.: 3437/(OBC)(NET-JAN 2017)) for this research. Author Md Nasir thanks the Department of Science and Technology, Govt. of India for providing the DST-INSPIRE fellowship for conducting this research. Author Paramartha Dutta is a Senior Professor at Visva-Bharati University.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of the topical collection “Next-Generation Digital Transformation through Intelligent Computing” guest edited by PN Suganthan, Paramartha Dutta, Jyotsna Kumar Mandal and Somnath Mukhopadhyay.
About this article
Cite this article
Nandi, A., Dutta, P. & Nasir, M. Automatic Facial Expression Recognition—A Circumcenter–Incenter–Centroid (CIC) Trio Feature-Induced Approach. SN COMPUT. SCI. 3, 8 (2022). https://doi.org/10.1007/s42979-021-00868-2