Abstract
The valence-arousal model can represent complex human emotions, including subtle changes of emotion. Most prior work on facial emotion estimation considered only laboratory data and relied on video, speech, or other multi-modal features, so how well such methods perform on static images in the real world is unknown. In this paper, a two-level attention with multi-task learning (MTL) framework is proposed for facial emotion estimation on static images. Features of the corresponding facial regions are automatically extracted and enhanced by a first-level attention mechanism, and a practical structure is designed to process these features. A bi-directional recurrent neural network (Bi-RNN) with self-attention (second-level attention) then adaptively exploits the relationships among the region features, which can be viewed as a combination of global and local information. In addition, MTL is used to estimate valence and arousal simultaneously, exploiting the correlation between the two tasks. Quantitative results on the AffectNet dataset demonstrate the superiority of the proposed framework, and extensive experiments analyze the effectiveness of its individual components.
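The pipeline the abstract describes can be sketched as toy linear algebra: region features are re-weighted by a first-level attention score, a self-attention step (standing in for the paper's Bi-RNN with self-attention) relates the regions to each other, and two task heads share the pooled representation to predict valence and arousal. All weights, dimensions, and region counts below are hypothetical placeholders, not the paper's trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical setup: 5 facial regions, each described by a 16-dim feature.
R, D = 5, 16
regions = rng.standard_normal((R, D))

# First-level attention: one score per region re-weights (enhances) its features.
w1 = rng.standard_normal(D)
region_scores = softmax(regions @ w1)          # (R,)
enhanced = regions * region_scores[:, None]    # (R, D)

# Second-level attention: scaled dot-product self-attention lets every region
# attend to every other region, modeling their relationships adaptively.
Wq = rng.standard_normal((D, D))
Wk = rng.standard_normal((D, D))
Wv = rng.standard_normal((D, D))
Q, K, V = enhanced @ Wq, enhanced @ Wk, enhanced @ Wv
attn = softmax(Q @ K.T / np.sqrt(D), axis=-1)  # (R, R), rows sum to 1
context = (attn @ V).mean(axis=0)              # (D,) pooled global representation

# Multi-task heads: valence and arousal share `context`; tanh keeps each
# prediction in the continuous range [-1, 1] used by the valence-arousal model.
wv, wa = rng.standard_normal(D), rng.standard_normal(D)
valence = np.tanh(context @ wv)
arousal = np.tanh(context @ wa)
print(f"valence={valence:.3f}, arousal={arousal:.3f}")
```

Because both heads read the same shared representation, a joint loss over the two outputs lets training exploit the correlation between valence and arousal, which is the motivation for the MTL design.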
© 2019 Springer Nature Switzerland AG
Cite this paper
Wang, X., Peng, M., Pan, L., Hu, M., Jin, C., Ren, F. (2019). Two-Level Attention with Multi-task Learning for Facial Emotion Estimation. In: Kompatsiaris, I., Huet, B., Mezaris, V., Gurrin, C., Cheng, WH., Vrochidis, S. (eds) MultiMedia Modeling. MMM 2019. Lecture Notes in Computer Science(), vol 11295. Springer, Cham. https://doi.org/10.1007/978-3-030-05710-7_19
Print ISBN: 978-3-030-05709-1
Online ISBN: 978-3-030-05710-7
eBook Packages: Computer Science, Computer Science (R0)