Abstract
Sentiment analysis is a crucial task in Natural Language Processing (NLP) that classifies natural language sentences by their positive or negative sentiment. For many existing deep learning models, providing an explanation of a predicted sentiment can be as important as the prediction itself. In this study, we apply four different classification models to sentiment analysis of Internet Movie Database (IMDB) reviews and investigate the explainability of the results using Local Interpretable Model-agnostic Explanations (LIME). Our results reveal how attention-based models, such as the Bidirectional LSTM (BiLSTM) and a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model, focus on the most relevant keywords.
Acknowledgments
This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Hajiyan, H., Davoudi, H., Ebrahimi, M. (2023). A Comparative Analysis of Local Explainability of Models for Sentiment Detection. In: Arai, K. (ed.) Proceedings of the Future Technologies Conference (FTC) 2022, Volume 3. FTC 2022. Lecture Notes in Networks and Systems, vol. 561. Springer, Cham. https://doi.org/10.1007/978-3-031-18344-7_42
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-18343-0
Online ISBN: 978-3-031-18344-7
eBook Packages: Intelligent Technologies and Robotics (R0)