Abstract
Artificial Intelligence has found innumerable applications and has become ubiquitous in contemporary society, involved in everything from unnoticed, minor choices to decisions that determine people's fates, as in the case of predictive policing. This raises serious concerns about the lack of explainability of such systems. Finding ways to enable humans to comprehend the results produced by AI is currently a flourishing area of research. This paper surveys the current findings in the field of Explainable Artificial Intelligence (xAI), along with the xAI methods and the solutions that realise them. It provides an umbrella perspective on the available xAI options, sorting them into a range of levels of abstraction, from community-developed code snippets implementing facets of xAI research up to comprehensive solutions utilising state-of-the-art achievements in the domain.
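To illustrate the "community-developed code snippet" end of that spectrum, the sketch below applies LIME (Ribeiro et al., 2016), one of the model-agnostic explanation methods the survey covers, to a single prediction of a black-box classifier. It is a minimal sketch, not code from the paper: the dataset and the random-forest model are arbitrary illustrative assumptions, and it presumes the lime and scikit-learn packages are installed.

    # A minimal sketch of the "community-developed code snippet" level of xAI.
    # Assumptions: `lime` and `scikit-learn` are installed; the dataset and
    # the random-forest model are arbitrary illustrative choices.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    # The opaque ("black-box") model whose individual predictions we explain.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    explainer = LimeTabularExplainer(
        X_train,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        discretize_continuous=True)

    # Local, model-agnostic explanation of one test instance: LIME perturbs
    # the instance and fits an interpretable surrogate model around it.
    explanation = explainer.explain_instance(
        X_test[0], model.predict_proba, num_features=5)
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")

The higher levels of abstraction discussed in the paper wrap methods of this kind into toolkits and complete platforms rather than standalone snippets.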
Acknowledgment
This work is funded under the SPARTA project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 830892.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Szczepański, M., Choraś, M., Pawlicki, M., Pawlicka, A. (2021). The Methods and Approaches of Explainable Artificial Intelligence. In: Paszynski, M., Kranzlmüller, D., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M. (eds) Computational Science – ICCS 2021. Lecture Notes in Computer Science, vol 12745. Springer, Cham. https://doi.org/10.1007/978-3-030-77970-2_1
DOI: https://doi.org/10.1007/978-3-030-77970-2_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-77969-6
Online ISBN: 978-3-030-77970-2
eBook Packages: Computer Science, Computer Science (R0)