Abstract
Classification of human emotions remains an important and challenging task for many computer vision systems, especially now that humanoid robots increasingly coexist with humans in everyday life. Current emotion recognition methods typically rely on multi-layered convolutional networks that do not explicitly infer any facial features during classification. In this work, we propose a fundamentally different approach to emotion recognition that incorporates facial landmarks directly into the classification loss function. To that end, we extend the recently proposed Deep Alignment Network (DAN), which achieves state-of-the-art results in a recent facial landmark recognition challenge, with a term related to facial features. Thanks to this simple modification, our model, called EmotionalDAN, outperforms state-of-the-art emotion classification methods on two challenging benchmark datasets by up to 5%.
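In practice, the modification described above amounts to training with a joint objective: the standard emotion classification loss plus a term tied to facial landmarks. The snippet below is a minimal, hypothetical PyTorch-style sketch of such a joint loss, not the authors' implementation; the function name emotional_dan_loss, the use of mean squared error for the landmark term, and the landmark_weight balancing coefficient are illustrative assumptions, since the abstract states only that a term related to facial features is added to the classification loss.

```python
import torch
import torch.nn.functional as F

def emotional_dan_loss(emotion_logits, emotion_labels,
                       predicted_landmarks, target_landmarks,
                       landmark_weight=0.5):
    """Hypothetical joint loss: emotion classification plus a landmark term.

    emotion_logits:       [batch, num_emotions] raw class scores
    emotion_labels:       [batch] integer emotion labels
    predicted_landmarks:  [batch, num_landmarks, 2] predicted (x, y) coordinates
    target_landmarks:     [batch, num_landmarks, 2] ground-truth coordinates
    landmark_weight:      assumed balancing coefficient (not given in the abstract)
    """
    # Standard cross-entropy over the predicted emotion classes.
    classification_loss = F.cross_entropy(emotion_logits, emotion_labels)

    # Landmark term: mean squared error between predicted and
    # ground-truth landmark coordinates.
    landmark_loss = F.mse_loss(predicted_landmarks, target_landmarks)

    # Total objective to be minimized end-to-end.
    return classification_loss + landmark_weight * landmark_loss
```

Minimizing such a combined loss would encourage the network to keep its internal representation consistent with facial landmark geometry while classifying emotions.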
References
Benitez-Quiroz, C.F., Srinivasan, R., Martinez, A.M.: EmotioNet: an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. In: CVPR (2016)
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: CVPR 2009 (2009)
Ekman, P., Friesen, W.: Facial Action Coding System: Investigator’s Guide. Consulting Psychologists Press, Washington, DC (1978)
Happy, S.L., Patnaik, P., Routray, A., Guha, R.: The Indian spontaneous expression database for emotion recognition. IEEE Trans. Affect. Comput. 8, 131–142 (2017)
Hasani, B., Mahoor, M.: Facial expression recognition using enhanced deep 3D convolutional neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (2017)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR abs/1512.03385 (2015)
Kahou, S., Michalski, V., Konda, K.: Recurrent neural networks for emotion recognition in video. In: Proceedings of the ACM on International Conference on Multimodal Interaction (2015)
Kennedy, B., Balint, A.: Emotionnet2. https://github.com/co60ca/EmotionNet
Kowalski, M., Naruniec, J., Trzcinski, T.: Deep alignment network: a convolutional neural network for robust face alignment. In: CVPRW (2017)
Lopes, A.T., de Aguiar, E., Oliveira-Santos, T.: A facial expression recognition system using convolutional networks. In: SIBGRAPI (2015)
Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I.: The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: CVPRW (2010)
Lyons, M.J., Akamatsu, S., Kamachi, M., Gyoba, J.: The Japanese female facial expressions database. http://www.kasrl.org/jaffe.html
Mollahosseini, A., Chan, D., Mahoor, M.H.: Going deeper in facial expression recognition using deep neural networks. In: IEEE Winter Conference on Applications of Computer Vision (WACV) (2016)
Mollahosseini, A., Hasani, B., Mahoor, M.H.: AffectNet: a database for facial expression, valence, and arousal computing in the wild. IEEE Trans. Affect. Comput. (2017)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)
Szegedy, C., et al.: Going deeper with convolutions. In: CVPR (2015)
Xia, X.L., Xu, C., Nan, B.: Facial expression recognition based on tensorflow platform. In: ITM Web of Conferences (2017)
Zafeiriou, S., Trigeorgis, G., Chrysos, G., Deng, J., Shen, J.: The Menpo facial landmark localisation challenge: a step towards the solution. In: CVPRW (2017)
Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Sig. Process. Lett. 23, 1499–1503 (2016)
About this paper
Cite this paper
Tautkute, I., Trzciński, T., Bielski, A. (2019). Recognizing Emotions with EmotionalDAN. In: Barneva, R., Brimkov, V., Kulczycki, P., Tavares, J. (eds) Computational Modeling of Objects Presented in Images. Fundamentals, Methods, and Applications. CompIMAGE 2018. Lecture Notes in Computer Science, vol 10986. Springer, Cham. https://doi.org/10.1007/978-3-030-20805-9_11
DOI: https://doi.org/10.1007/978-3-030-20805-9_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-20804-2
Online ISBN: 978-3-030-20805-9