
The Methods and Approaches of Explainable Artificial Intelligence

  • Conference paper
Computational Science – ICCS 2021 (ICCS 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12745)


Abstract

Artificial Intelligence has found innumerable applications and become ubiquitous in contemporary society, from making unnoticed, minor choices to determining people’s fates (as in the case of predictive policing). This raises serious concerns about the lack of explainability of such systems. Finding ways to enable humans to comprehend the results provided by AI is a blooming area of research. This paper explores the current findings in the field of Explainable Artificial Intelligence (xAI), along with the xAI methods and solutions that realise them. It provides an umbrella perspective on the available xAI options, sorting them across a range of levels of abstraction, from community-developed code snippets implementing facets of xAI research up to comprehensive solutions utilising state-of-the-art achievements in the domain.
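As a minimal sketch of the model-agnostic, post-hoc explanation idea surveyed by the paper (the family that includes methods such as LIME and SHAP), the snippet below estimates each input feature's importance by perturbing it and measuring how much a black-box model's output changes. The model and function names are hypothetical, chosen for illustration only; real methods fit local surrogates or compute Shapley values rather than this simple average.

```python
import random

def black_box(x):
    # Stand-in "opaque" model: a weighted sum with a nonlinearity.
    return max(0.0, 3.0 * x[0] - 1.0 * x[1] + 0.5 * x[2])

def perturbation_importance(model, x, n_samples=200, scale=0.1, seed=0):
    """Average absolute output change when each feature is jittered."""
    rng = random.Random(seed)
    base = model(x)
    importances = []
    for i in range(len(x)):
        total = 0.0
        for _ in range(n_samples):
            xp = list(x)
            xp[i] += rng.gauss(0.0, scale)  # small Gaussian perturbation
            total += abs(model(xp) - base)
        importances.append(total / n_samples)
    return importances

scores = perturbation_importance(black_box, [1.0, 1.0, 1.0])
# Feature 0 (the largest weight, 3.0) dominates the explanation.
print(scores.index(max(scores)))  # prints 0
```

Such perturbation-based scores are what the "local" explanation tools discussed in the paper refine with more principled sampling and attribution schemes.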



Acknowledgment

This work is funded under the SPARTA project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 830892.

Author information

Correspondence to Mateusz Szczepański.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Szczepański, M., Choraś, M., Pawlicki, M., Pawlicka, A. (2021). The Methods and Approaches of Explainable Artificial Intelligence. In: Paszynski, M., Kranzlmüller, D., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M. (eds) Computational Science – ICCS 2021. ICCS 2021. Lecture Notes in Computer Science, vol 12745. Springer, Cham. https://doi.org/10.1007/978-3-030-77970-2_1

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-77970-2_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77969-6

  • Online ISBN: 978-3-030-77970-2

  • eBook Packages: Computer Science (R0)
