Research article
DOI: 10.1145/3490035.3490271

Towards interpretable facial emotion recognition

Published: 19 December 2021

Abstract

This paper proposes an interpretable deep-learning-based system for facial emotion recognition, together with a novel approach for interpreting the system's results: Divide & Conquer based Shapley additive explanations (DnCShap). The approach computes 'Shapley values' that denote the contribution of each image feature to a particular prediction. A divide-and-conquer algorithm computes these Shapley values in linear time, rather than the exponential time required by existing interpretability approaches. Experiments on four facial emotion recognition datasets, i.e., FER-2013, FERG, JAFFE, and CK+, yielded emotion classification accuracies of 62.62%, 99.68%, 91.97%, and 99.67%, respectively. The results show that DnCShap consistently identifies the facial features most relevant to emotion classification across the datasets.
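The abstract describes only the high-level idea of DnCShap: attribute a prediction change to feature subsets, splitting recursively so the number of model evaluations grows linearly in the number of features instead of exponentially as with exact Shapley enumeration. The sketch below illustrates that divide-and-conquer attribution idea in general terms; it is not the paper's actual procedure, and the function names, baseline convention, and pruning rule are all hypothetical.

```python
import numpy as np

def dnc_attr(f, x, base, idx, out):
    """Attribute f(x) - f(base) to the features in idx by recursive halving.

    Each recursion node costs one model call, so the total number of calls
    is O(n) in the number of features, versus O(2^n) for exact Shapley
    enumeration over all feature coalitions.
    """
    z = base.copy()
    z[idx] = x[idx]                    # turn on only this subset of features
    contrib = f(z) - f(base)           # value of this coalition vs. baseline
    if len(idx) == 1:
        out[idx[0]] = contrib          # leaf: single-feature attribution
        return
    if abs(contrib) < 1e-12:           # prune: subset changes the output not at all
        for i in idx:
            out[i] = 0.0
        return
    mid = len(idx) // 2                # divide: recurse into the two halves
    dnc_attr(f, x, base, idx[:mid], out)
    dnc_attr(f, x, base, idx[mid:], out)

# Toy linear model, where exact Shapley values are w_i * (x_i - base_i),
# so the recursion's leaf attributions can be checked directly.
w = np.array([1.0, 2.0, 3.0, 4.0])
f = lambda z: float(z @ w)
x = np.ones(4)
base = np.zeros(4)
out = np.zeros(4)
dnc_attr(f, x, base, list(range(4)), out)
print(out)  # [1. 2. 3. 4.]
```

For a linear model the halving recursion recovers the exact Shapley values; for a deep network the leaf values depend on the chosen baseline and split order, which is where a method like DnCShap would add its own guarantees.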


Cited By

  • Toward Explainable Affective Computing: A Review. IEEE Transactions on Neural Networks and Learning Systems 35(10):13101-13121, Oct 2024. DOI: 10.1109/TNNLS.2023.3270027
  • Unsupervised Emotion Matching for Image and Text Input. 2024 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI), pp. 1-6, 14 Mar 2024. DOI: 10.1109/IATMSI60426.2024.10502459
  • Unlocking the Black Box: Concept-Based Modeling for Interpretable Affective Computing Applications. 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG), pp. 1-10, 27 May 2024. DOI: 10.1109/FG59268.2024.10581918
  • Leveraging explainable artificial intelligence for emotional label prediction through health sensor monitoring. Cluster Computing 28(2), 26 Nov 2024. DOI: 10.1007/s10586-024-04804-w
  • Interpretable multimodal emotion recognition using hybrid fusion of speech and image data. Multimedia Tools and Applications 83(10):28373-28394, 5 Sep 2023. DOI: 10.1007/s11042-023-16443-1

Published In

ICVGIP '21: Proceedings of the Twelfth Indian Conference on Computer Vision, Graphics and Image Processing
December 2021
428 pages
ISBN: 9781450375962
DOI: 10.1145/3490035

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. affective computing
  2. computer vision
  3. deep network interpretability
  4. emotion recognition
  5. facial expressions

Conference

ICVGIP '21

Acceptance Rates

Overall Acceptance Rate 95 of 286 submissions, 33%
