
A Comparative Analysis of Local Explainability of Models for Sentiment Detection

  • Conference paper
Proceedings of the Future Technologies Conference (FTC) 2022, Volume 3 (FTC 2022)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 561)


Abstract

Sentiment analysis is one of the crucial tasks in Natural Language Processing (NLP): classifying natural language sentences as expressing positive or negative sentiment. For many deep learning-based models, an explanation of the predicted sentiment can be as necessary as the prediction itself. In this study, we apply four different classification models to sentiment analysis of Internet Movie Database (IMDB) reviews and investigate the explainability of the results using Local Interpretable Model-agnostic Explanations (LIME). Our results reveal how attention-based models, such as the Bidirectional LSTM (BiLSTM) and fine-tuned Bidirectional Encoder Representations from Transformers (BERT), focus on the most relevant keywords.



Acknowledgments

This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).


Corresponding author

Correspondence to Mehran Ebrahimi.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Hajiyan, H., Davoudi, H., Ebrahimi, M. (2023). A Comparative Analysis of Local Explainability of Models for Sentiment Detection. In: Arai, K. (ed.) Proceedings of the Future Technologies Conference (FTC) 2022, Volume 3. FTC 2022. Lecture Notes in Networks and Systems, vol. 561. Springer, Cham. https://doi.org/10.1007/978-3-031-18344-7_42
