ABSTRACT
Facial expression recognition plays an important role in human behaviour, communication, and interaction. Recent neural networks have been shown to perform well at recognising facial expressions automatically, and several explainability techniques are available to make them more transparent. In this work, we present a facial expression recognition study for people with intellectual disabilities, intended for integration into a social robot. We train two well-known neural networks on five facial expression databases and test them on two databases containing people with and without intellectual disabilities. Finally, we study which facial regions the models focus on to perceive a particular expression using two explainability techniques, LIME and RISE, and assess the differences when they are applied to images of people with and without intellectual disabilities.
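The RISE technique mentioned above explains a black-box classifier by occluding the input with random binary masks and weighting each mask by the classifier's score on the masked image. A minimal from-scratch sketch of that idea is given below; `classifier_fn`, the grid size, and the mask probability are illustrative placeholders, not the paper's exact configuration (which uses bilinearly upsampled, randomly shifted masks).

```python
import numpy as np

def rise_saliency(image, classifier_fn, n_masks=400, cell=7, p=0.5, seed=0):
    """Estimate a RISE-style saliency map for a 2D grayscale image:
    average random binary masks weighted by the classifier's score
    on each masked image."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Coarse random on/off grid, upsampled to image size
        # (nearest-neighbour here for simplicity).
        grid = (rng.random((cell, cell)) < p).astype(float)
        up = (H // cell + 1, W // cell + 1)
        mask = np.kron(grid, np.ones(up))[:H, :W]
        score = classifier_fn(image * mask)
        saliency += score * mask
    return saliency / (n_masks * p)

# Toy usage: a stand-in "classifier" that only looks at the
# top-left quadrant, so saliency should concentrate there.
img = np.ones((49, 49))
clf = lambda x: float(x[:24, :24].mean())
sal = rise_saliency(img, clf)
```

Regions whose occlusion changes the score accumulate higher weight, so in the toy example `sal` is larger in the top-left quadrant than elsewhere.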