
Prototypical Convolutional Neural Network for a Phrase-Based Explanation of Sentiment Classification

  • Conference paper
Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)

Abstract

Attention mechanisms are often used to support the interpretation of neural-network-based text classification by highlighting the words to which the network attended while making a prediction. However, recent studies have shown that the attention technique does not always provide a faithful explanation of the model. In this paper we therefore study another idea: prototype-based neural networks. Although these obtain promising results for texts, they may provide explanations in the form of comparisons to whole (potentially long) documents, or may likewise fail to provide reliable explanations. To overcome these limitations, this work introduces a new prototype-based convolutional neural architecture for text classification, which explains its predictions in the form of similarities to phrases from the training set. The experimental evaluation demonstrates that the proposed network achieves classification performance similar to black-box convolutional networks while providing faithful explanations. Moreover, it is shown that the new method for dynamically tuning the number of prototypes introduced in this paper offers performance gains over static tuning.
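The architecture described above can be sketched roughly as follows: a convolution extracts a feature vector per phrase position, a prototype layer scores each phrase against learned prototype vectors, and a linear layer classifies from the best similarity per prototype. All dimensions, the RBF-style similarity, and the variable names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, chosen only for the sketch)
seq_len, emb_dim = 12, 8      # tokens and embedding size
ngram, n_filters = 3, 16      # phrase window width and conv filters
n_protos, n_classes = 4, 2    # prototypes and sentiment classes

embeddings = rng.normal(size=(seq_len, emb_dim))          # word embeddings
conv_w = rng.normal(size=(n_filters, ngram * emb_dim))    # 1-D conv filters
prototypes = rng.normal(size=(n_protos, n_filters))       # learned prototype vectors
cls_w = rng.normal(size=(n_classes, n_protos))            # final linear layer

# 1) Convolution over n-gram windows -> one feature vector per phrase position
windows = np.stack([embeddings[i:i + ngram].ravel()
                    for i in range(seq_len - ngram + 1)])
phrase_feats = np.maximum(windows @ conv_w.T, 0.0)        # ReLU

# 2) Similarity of every phrase to every prototype (RBF of squared distance)
d2 = ((phrase_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
sims = np.exp(-d2)                                        # values in (0, 1]

# 3) Max-pool over positions: each prototype keeps its best-matching phrase,
#    whose training-set counterpart would serve as the explanation
best_pos = sims.argmax(axis=0)
proto_scores = sims.max(axis=0)

# 4) Linear classification on the prototype similarity scores
logits = cls_w @ proto_scores
pred = int(logits.argmax())
print(pred, best_pos)
```

In this sketch the explanation for a prediction is `best_pos`: for each prototype, the phrase window in the input that matched it most closely, which can be shown to the user alongside the prototype's nearest training-set phrase.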


Notes

  1. The purpose of the preprocessing was: 1) to binarize the datasets where sentiment was expressed on a 1–5 scale, 2) to balance the sizes of the datasets, and 3) to balance the number of examples from the positive and negative classes through under-sampling. For more details, please refer to [6].

  2. https://github.com/plutasnyy/ProtoCNN.

References

  1. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations ICLR (2015)


  2. Bodria, F., Giannotti, F., Guidotti, R., Naretto, F., Pedreschi, D., Rinzivillo, S.: Benchmarking and survey of explanation methods for black box models. arXiv preprint arXiv:2102.13076 (2021)

  3. Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C., Su, J.K.: This looks like that: Deep learning for interpretable image recognition. In: Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 32, pp. 8928–8939 (2019)


  4. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 51(5), 1–42 (2018)


  5. He, R., Lee, W.S., Ng, H.T., Dahlmeier, D.: Effective attention modeling for aspect-level sentiment classification. In: Proceedings of the 27th International Conference on Computational Linguistics, pp. 1121–1131 (2018)


  6. Hong, D., Baek, S., Wang, T.: Interpretable sequence classification via prototype trajectory. arXiv preprint arXiv:2007.01777 (2020)

  7. Hutchinson, B., Prabhakaran, V., Denton, E., Webster, K., Zhong, Y., Denuyl, S.: Social biases in NLP models as barriers for persons with disabilities. In: Proceedings of the 58th ACL, pp. 5491–5501 (2020)


  8. Jain, S., Wallace, B.C.: Attention is not Explanation. In: Proceedings of the NAACL, pp. 3543–3556 (2019)


  9. Lampridis, O., Guidotti, R., Ruggieri, S.: Explaining sentiment classification with synthetic exemplars and counter-exemplars. In: Appice, A., Tsoumakas, G., Manolopoulos, Y., Matwin, S. (eds.) DS 2020. LNCS (LNAI), vol. 12323, pp. 357–373. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61527-7_24


  10. Letarte, G., Paradis, F., Giguère, P., Laviolette, F.: Importance of self-attention for sentiment analysis. In: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 267–275 (2018)


  11. Li, O., Liu, H., Chen, C., Rudin, C.: Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In: AAAI (2018)


  12. Ming, Y., Xu, P., Qu, H., Ren, L.: Interpretable and steerable sequence learning via prototypes. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (July 2019)


  13. Molnar, C.: Interpretable Machine Learning (2019). https://christophm.github.io/interpretable-ml-book/

  14. Pennington, J., Socher, R., Manning, C.: GloVe: global vectors for word representation. In: Proceedings of the EMNLP, pp. 1532–1543 (2014)


  15. Ribeiro, M.T., Singh, S., Guestrin, C.: Model-agnostic interpretability of machine learning. In: Workshop on Human Interpretability in Machine Learning at International Conference on Machine Learning (2016)


  16. Samek, W., Müller, K.-R.: Towards explainable artificial intelligence. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.-R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 5–22. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28954-6_1


  17. Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In: 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing @ NeurIPS 2019 (2019)


  18. Strubell, E., Verga, P., Belanger, D., McCallum, A.: Fast and accurate entity recognition with iterated dilated convolutions. In: Proceedings of EMNLP, pp. 2670–2680 (2017)


  19. Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319–3328. PMLR (2017)


  20. Wang, Y., Huang, M., Zhu, X., Zhao, L.: Attention-based LSTM for aspect-level sentiment classification. In: Proceedings of the EMNLP, pp. 606–615 (2016)


  21. Wiegreffe, S., Pinter, Y.: Attention is not not explanation. In: Proceedings of the EMNLP-IJCNLP, pp. 11–20 (2019)



Acknowledgments

The authors are grateful to the Poznan Supercomputing and Networking Center for computational resources. The research by Kamil Pluciński and Jerzy Stefanowski was supported by TAILOR, a project funded by EU Horizon 2020 research and innovation programme under GA no. 952215. Mateusz Lango was supported by the Polish National Science Centre grant no. 2016/22/E/ST6/00299.

Author information


Correspondence to Mateusz Lango.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Pluciński, K., Lango, M., Stefanowski, J. (2021). Prototypical Convolutional Neural Network for a Phrase-Based Explanation of Sentiment Classification. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_35


  • DOI: https://doi.org/10.1007/978-3-030-93736-2_35


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

  • eBook Packages: Computer Science (R0)
