Strategies to Exploit XAI to Improve Classification Systems

  • Conference paper in Explainable Artificial Intelligence (xAI 2023)

Abstract

Explainable Artificial Intelligence (XAI) aims to provide insights into the decision-making process of AI models, allowing users to understand the reasoning behind the outputs rather than just the decisions themselves. A further goal of XAI is to improve the performance of AI models through the explanations of their decision-making processes. However, most of the XAI literature focuses on how to explain an AI system, while less attention has been paid to how XAI methods can be exploited to improve the system itself. In this work, a set of well-known XAI methods typically used in Machine Learning (ML) classification tasks is investigated to verify whether they can be exploited not only to provide explanations but also to improve the performance of the model itself. To this aim, two strategies that use explanations to improve a classification system are reported and empirically evaluated on three datasets: Fashion-MNIST, CIFAR10, and STL10. The results suggest that explanations built by Integrated Gradients highlight input features that can be effectively used to improve classification performance.
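
This preview does not detail the two strategies, so the following is a purely illustrative sketch of one common way attributions of this kind can feed back into a classifier: compute Integrated Gradients for a trained model and keep only the most relevant input features before re-classification. The baseline choice, the `keep_fraction` threshold, and the usage names (`SomeTrainedCNN`, `test_loader`) are hypothetical stand-ins, not the paper's actual setup.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Integrated Gradients (Sundararajan et al., 2017): attribute the
    target-class score to each input feature by averaging gradients
    along a straight path from a baseline to the input."""
    if baseline is None:
        baseline = torch.zeros_like(x)      # all-zero (black) baseline
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    # Interpolations between baseline and input: shape (steps, C, H, W).
    path = baseline + alphas * (x - baseline)
    path.requires_grad_(True)
    score = model(path)[:, target].sum()    # target logit at every step
    grads, = torch.autograd.grad(score, path)
    avg_grad = grads.mean(dim=0)            # Riemann approximation of the path integral
    return (x - baseline) * avg_grad        # one relevance value per input feature

def mask_by_relevance(x, attribution, keep_fraction=0.5):
    """Zero out all but the `keep_fraction` most relevant input features."""
    relevance = attribution.abs().flatten()
    k = max(1, int(keep_fraction * relevance.numel()))
    threshold = relevance.topk(k).values.min()
    return x * (attribution.abs() >= threshold).float()

# Hypothetical usage: re-classify an image keeping only its most relevant pixels.
# model = SomeTrainedCNN().eval()           # any differentiable classifier
# x, _ = next(iter(test_loader))            # one batch; x[0] is (C, H, W)
# y_hat = model(x[:1]).argmax(dim=1).item()
# attr = integrated_gradients(model, x[0], y_hat)
# x_masked = mask_by_relevance(x[0], attr)
# y_masked = model(x_masked.unsqueeze(0)).argmax(dim=1).item()
```

Masking at inference time is only one of several ways an attribution map can be exploited; the strategies actually evaluated in the paper may differ.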

Acknowledgements

This work was supported by the European Union - FSE-REACT-EU, PON Research and Innovation 2014-2020, DM1062/2021, contract number 18-I-15350-2; it was partially supported by the Ministry of University and Research through the PRIN research project “BRIO – BIAS, RISK, OPACITY in AI: design, verification and development of Trustworthy AI” (Project no. 2020SSKZ7R), and by the Ministry of Economic Development through the “INtegrated Technologies and ENhanced SEnsing for cognition and rehabilitation” (INTENSE) project. Furthermore, we acknowledge financial support from the PNRR MUR project PE0000013-FAIR (CUP: E63C22002150007).

Author information

Corresponding author

Correspondence to Andrea Apicella.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Apicella, A., Di Lorenzo, L., Isgrò, F., Pollastro, A., Prevete, R. (2023). Strategies to Exploit XAI to Improve Classification Systems. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1901. Springer, Cham. https://doi.org/10.1007/978-3-031-44064-9_9

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-44064-9_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44063-2

  • Online ISBN: 978-3-031-44064-9

  • eBook Packages: Computer Science; Computer Science (R0)
