Abstract
In today's competitive corporate landscape, customer support has become central to organizations seeking to strengthen their brand image. Timely and effective resolution of customers' complaints is vital to improving customer satisfaction across business organizations. However, companies struggle to automatically identify complaints buried deep in enormous volumes of online content. Emotion detection and sentiment analysis, two closely related tasks, play critical roles in complaint identification. We hypothesize that the association between emotion and sentiment provides an enhanced understanding of the tweeter's state of mind. In this paper, we propose a Bidirectional Encoder Representations from Transformers (BERT) based shared-private multi-task framework that learns three closely related tasks concurrently: complaint identification (primary task), emotion detection, and sentiment classification (auxiliary tasks). Experimental results show that our proposed model obtains the highest macro-F1 score of 87.38%, outperforming the multi-task baselines as well as the state-of-the-art model by indicative margins, showing that emotion awareness and sentiment analysis facilitate complaint identification when learned simultaneously.
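The shared-private layout described in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: a random linear map in NumPy stands in for the shared BERT encoder, and the layer sizes, head names, and class counts are assumptions chosen to match the three tasks described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared-private multi-task layout: one shared encoder feeds three
# task-specific ("private") heads. A random linear map stands in for
# BERT; the actual model fine-tunes a pretrained transformer.
D_IN, D_SHARED = 32, 16
W_shared = rng.normal(size=(D_IN, D_SHARED))

heads = {
    "complaint": rng.normal(size=(D_SHARED, 2)),  # binary: complaint / non-complaint
    "emotion":   rng.normal(size=(D_SHARED, 7)),  # seven emotion classes
    "sentiment": rng.normal(size=(D_SHARED, 3)),  # negative / neutral / positive
}

def forward(x):
    h = np.tanh(x @ W_shared)                           # shared representation
    return {task: h @ W for task, W in heads.items()}   # private task heads

x = rng.normal(size=(4, D_IN))  # a batch of 4 tweet embeddings
logits = forward(x)
print({task: out.shape for task, out in logits.items()})
```

The point of the shared layer is that gradients from all three losses update the same representation, which is how the auxiliary emotion and sentiment tasks can inform the primary complaint task.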
Notes
- 2. Food & beverage, apparel, software, electronics, services, retail, transport, cars, other.
- 5. We used the Python random module's built-in sample() function, which returns a list of a given length drawn without replacement from a sequence. We additionally performed the experiments with an up-sampled Complaint dataset, but due to the redundant instances, the results were erroneous.
- 6. Where the emotion expressed in a tweet does not fall into the seven categories (anger, disgust, fear, shame, guilt, sadness, and joy), the annotators label it with the next closest emotion associated with the tweet.
- 11. We experimented with epochs = [3, 4, 5] and learning rates = [1e−3, 2e−3, 3e−5].
- 12. Using the loss_weights parameter of the Keras compile function.
- 14. We perform Student's t-test to assess statistical significance.
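The down-sampling in note 5 can be sketched as follows. The toy corpus and class labels here are illustrative assumptions, not the authors' data; only the use of random.sample() to draw a fixed-length subset without replacement comes from the note.

```python
import random

# Toy stand-in for the tweet corpus: (text, label) pairs, with the
# non-complaint class assumed to be the majority class.
complaints = [("tweet_c%d" % i, "complaint") for i in range(50)]
non_complaints = [("tweet_n%d" % i, "non_complaint") for i in range(200)]

# Down-sample the majority class to the size of the minority class.
# random.sample() draws without replacement, so no instance is repeated.
random.seed(42)  # for reproducibility
balanced = complaints + random.sample(non_complaints, len(complaints))

print(len(balanced))  # 100 instances, 50 per class
```

Sampling without replacement is what avoids the duplicate-instance problem the note attributes to up-sampling.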
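Note 12 mentions the loss_weights parameter of Keras's compile function. A minimal sketch of what that weighting computes, in plain Python; the loss values and weights below are hypothetical, not taken from the paper.

```python
def weighted_multitask_loss(losses, weights):
    """Combine per-task losses into one scalar, as Keras does when
    loss_weights is passed to model.compile(): the total training loss
    is the weighted sum of the individual task losses."""
    return sum(w * l for w, l in zip(weights, losses))

# Hypothetical per-batch losses for the three tasks:
# complaint identification, emotion detection, sentiment classification.
task_losses = [0.40, 0.65, 0.55]
# Hypothetical weights emphasizing the primary task.
task_weights = [1.0, 0.5, 0.5]

total = weighted_multitask_loss(task_losses, task_weights)
print(total)  # 0.40 + 0.325 + 0.275, i.e. approximately 1.0
```

Giving the primary task a larger weight is a common way to keep the auxiliary tasks from dominating the shared representation.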
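The significance check in note 14 can be sketched as a pooled-variance two-sample Student's t statistic; whether the authors used this exact variant is an assumption, and the per-run macro-F1 scores below are hypothetical.

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Student's two-sample t statistic with pooled variance, as used to
    check whether the score gap between two models across runs is
    statistically significant."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical macro-F1 scores over five runs for two models.
model_a = [87.4, 87.1, 87.6, 87.3, 87.5]
model_b = [86.0, 85.8, 86.3, 85.9, 86.1]

t = two_sample_t(model_a, model_b)
print(round(t, 2))
```

A |t| well above the critical value for the relevant degrees of freedom (here 8) indicates the gap is unlikely to be run-to-run noise.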
Acknowledgement
This publication is an outcome of the R&D work undertaken in the project under the Visvesvaraya Ph.D. Scheme of Ministry of Electronics & Information Technology, Government of India, being implemented by Digital India Corporation (Formerly Media Lab Asia).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Singh, A., Saha, S. (2021). Are You Really Complaining? A Multi-task Framework for Complaint Identification, Emotion, and Sentiment Classification. In: Lladós, J., Lopresti, D., Uchida, S. (eds) Document Analysis and Recognition – ICDAR 2021. ICDAR 2021. Lecture Notes in Computer Science(), vol 12822. Springer, Cham. https://doi.org/10.1007/978-3-030-86331-9_46
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86330-2
Online ISBN: 978-3-030-86331-9