Abstract
The proposed model analyzes sentiment in a manner strikingly similar to how one person perceives another's sentiment. In this paper, we propose a novel neural architecture named WeaveNet that both "listens" to and "watches" a person to infer sentiment. The main strength of our model lies in capturing both the intra-interactions within each modality and the inter-interactions across modalities, stage by stage. Intra-interactions are modeled by convolution operations in the first few stages for each modality separately, and by a bidirectional LSTM in the final stage for both the audio and video streams. Inter-interactions are captured at each stage through effective fusion operations. Moreover, our model relies on careful network design rather than handcrafted features: the inputs are raw audio and natural images. In addition, audio clips and video frames are aligned by keyframe rather than in uniform time order. We performed extensive comparisons on three publicly available datasets covering both sentiment analysis and emotion recognition, and WeaveNet outperforms state-of-the-art results on all three.
Supported by the Xinjiang Natural Science Foundation under Grants 2020D01C026 and 2015211C288.
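For concreteness, the sketch below illustrates the stage-wise "weave" idea from the abstract in PyTorch: each stage applies per-modality convolutions (intra-interactions), fuses the two streams (inter-interactions), and weaves the fused features back into both streams; a bidirectional LSTM per stream forms the final stage. The class names (WeaveStage, WeaveNetSketch), channel and hidden sizes, number of stages, the fusion operator (channel concatenation plus a 1x1 convolution), the residual weaving, and the mean pooling are all illustrative assumptions, not the paper's published configuration.

import torch
import torch.nn as nn


class WeaveStage(nn.Module):
    """One stage: per-modality convolutions model intra-interactions; a
    cross-modal fusion models inter-interactions and is woven back into
    both streams."""

    def __init__(self, ch):
        super().__init__()
        self.audio_conv = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.video_conv = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        # Assumed fusion operator: channel concatenation + 1x1 convolution.
        self.fuse = nn.Conv1d(2 * ch, ch, kernel_size=1)

    def forward(self, a, v):
        a = torch.relu(self.audio_conv(a))                   # intra (audio)
        v = torch.relu(self.video_conv(v))                   # intra (video)
        f = torch.relu(self.fuse(torch.cat([a, v], dim=1)))  # inter (fusion)
        return a + f, v + f  # weave fused features back into each stream


class WeaveNetSketch(nn.Module):
    def __init__(self, ch=64, hidden=128, n_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(WeaveStage(ch) for _ in range(n_stages))
        # Final stage: a bidirectional LSTM per stream, as the abstract
        # describes for both the audio and the video sequences.
        self.audio_lstm = nn.LSTM(ch, hidden, bidirectional=True, batch_first=True)
        self.video_lstm = nn.LSTM(ch, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(4 * hidden, 1)  # scalar sentiment score

    def forward(self, audio, video):
        # audio, video: (batch, channels, time); the time axis is assumed to
        # be keyframe-aligned, i.e. audio clip t matches video keyframe t.
        for stage in self.stages:
            audio, video = stage(audio, video)
        a_out, _ = self.audio_lstm(audio.transpose(1, 2))  # (batch, time, 2*hidden)
        v_out, _ = self.video_lstm(video.transpose(1, 2))
        pooled = torch.cat([a_out.mean(dim=1), v_out.mean(dim=1)], dim=1)
        return self.head(pooled)


# Toy usage: two samples, 64-channel features over 8 keyframe-aligned steps.
model = WeaveNetSketch()
print(model(torch.randn(2, 64, 8), torch.randn(2, 64, 8)).shape)  # torch.Size([2, 1])

The residual weaving (a + f, v + f) is one simple way to realize "fusion at each stage" while keeping the per-modality streams intact for the final recurrent stage; the paper may use a different fusion at different stages.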
Copyright information
© 2022 Springer Nature Singapore Pte Ltd.
Cite this paper
Yu, Y., Jia, Z., Shi, F., Zhu, M., Wang, W., Li, X. (2022). WeaveNet: End-to-End Audiovisual Sentiment Analysis. In: Sun, F., Hu, D., Wermter, S., Yang, L., Liu, H., Fang, B. (eds) Cognitive Systems and Information Processing. ICCSIP 2021. Communications in Computer and Information Science, vol 1515. Springer, Singapore. https://doi.org/10.1007/978-981-16-9247-5_1
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-9246-8
Online ISBN: 978-981-16-9247-5