Abstract
Explainability is a growing concern in many machine learning applications. Machine learning platforms now typically provide explanations alongside their model output. However, this may be considered just a first step towards defining explainability as a service in its own right, which would give users more control over the explainability technique they wish to employ. In this paper, we first survey the current offering of machine learning platforms, examining their pricing models and the explainability features they provide, if any. To progress towards Explainability-as-a-Service (XaaS), we propose to base its definition on the REST paradigm, considering three major classes of explainability techniques, relying respectively on feature scoring, surrogate linear models, and internal state observation. We also show that XaaS depends on machine-learning model provisioning: the two services are linked by a one-way essential complement relationship, in which ML provisioning plays the role of the essential component and XaaS is the complementary option. Finally, we suggest that vertical integration is the natural arrangement for companies offering either service, given this mutual relationship.
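As an illustration of what a REST-based XaaS interface could look like, the following Python sketch exposes an explanation resource selectable among the three classes of techniques named above. The route, payload fields, and method names are hypothetical assumptions for illustration only; the paper's concrete interface is not reproduced in this excerpt.

# A minimal sketch of a REST-style XaaS endpoint, assuming FastAPI.
# The /explanations route and the request fields below are illustrative
# assumptions, not the interface defined in the paper.
from enum import Enum
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Method(str, Enum):
    feature_scoring = "feature_scoring"  # e.g. SHAP-style feature attributions
    surrogate = "surrogate"              # e.g. LIME-style local linear surrogate
    internal_state = "internal_state"    # e.g. attention/activation inspection

class ExplanationRequest(BaseModel):
    model_id: str          # model hosted by the ML provisioning service
    instance: list[float]  # input whose prediction should be explained
    method: Method         # which explainability technique to apply

@app.post("/explanations")
def create_explanation(req: ExplanationRequest) -> dict:
    # A real service would dispatch to the chosen technique here;
    # a zero-filled stub keeps the sketch self-contained.
    return {"model_id": req.model_id,
            "method": req.method,
            "scores": [0.0] * len(req.instance)}

In this reading, an explanation is itself a REST resource created against a model hosted by the provisioning service, which mirrors the one-way essential complement relationship discussed above: the explanation resource cannot exist without the provisioned model it refers to.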
Cite this paper
Fantozzi, P., Laura, L., Naldi, M. (2025). Machine Learning Explainability as a Service: Service Description and Economics. In: Naldi, M., Djemame, K., Altmann, J., Bañares, J.Á. (eds) Economics of Grids, Clouds, Systems, and Services. GECON 2024. Lecture Notes in Computer Science, vol 15358. Springer, Cham. https://doi.org/10.1007/978-3-031-81226-2_21