
Machine Learning Explainability as a Service: Service Description and Economics

  • Conference paper
  • First Online:
Economics of Grids, Clouds, Systems, and Services (GECON 2024)

Abstract

Explainability is a growing concern in many machine learning applications. Machine learning platforms now typically provide explanations alongside their model outputs. However, this can be considered only a first step towards defining explainability as a service in its own right, which would give users more control over the explainability technique they wish to employ. In this paper, we first survey the current offering of machine learning platforms, examining their pricing models and the explainability features they provide, if any. To progress towards Explainability-as-a-Service (XaaS), we propose to base its definition on the REST paradigm, considering three major families of explainability techniques, relying on either feature scoring, surrogate linear models, or internal state observation. We also show that XaaS depends on machine-learning model provisioning: the two services are linked by a one-way essential complement relationship, in which ML provisioning plays the role of the essential component and XaaS is the optional complement. Given this relationship, we suggest that vertical integration is the natural arrangement for companies offering either service.
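As a concrete illustration of the REST-based definition proposed in the abstract, the following is a minimal sketch of what an XaaS endpoint could look like. The endpoint path (/explanations), the request fields, and the use of scikit-learn's permutation importance as the feature-scoring technique are illustrative assumptions, not the paper's actual interface; a full service would also expose the surrogate-model and internal-state technique families.

# Minimal sketch of a hypothetical XaaS REST endpoint; the path, fields,
# and technique choice are illustrative, not taken from the paper.
from flask import Flask, jsonify, request
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

app = Flask(__name__)

# Stand-in for the ML-provisioning service (the "essential component"):
# a model assumed to be already trained and hosted.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

@app.route("/explanations", methods=["POST"])
def explanations():
    # The client names the technique family it wants; only feature
    # scoring is implemented here, via permutation importance.
    body = request.get_json(silent=True) or {}
    technique = body.get("technique", "feature_scoring")
    if technique != "feature_scoring":
        return jsonify(error="technique not implemented in this sketch"), 501
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    scores = {f"feature_{i}": float(s)
              for i, s in enumerate(result.importances_mean)}
    return jsonify(technique=technique, scores=scores)

if __name__ == "__main__":
    app.run(port=8080)

A client would then request an explanation with, for example, curl -X POST localhost:8080/explanations -H "Content-Type: application/json" -d '{"technique": "feature_scoring"}', receiving a JSON map of per-feature importance scores.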


Notes

  1. https://labelstud.io/.

  2. https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/.


Author information

Correspondence to Paolo Fantozzi.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Fantozzi, P., Laura, L., Naldi, M. (2025). Machine Learning Explainability as a Service: Service Description and Economics. In: Naldi, M., Djemame, K., Altmann, J., Bañares, J.Á. (eds) Economics of Grids, Clouds, Systems, and Services. GECON 2024. Lecture Notes in Computer Science, vol 15358. Springer, Cham. https://doi.org/10.1007/978-3-031-81226-2_21


  • DOI: https://doi.org/10.1007/978-3-031-81226-2_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-81225-5

  • Online ISBN: 978-3-031-81226-2

  • eBook Packages: Computer Science (R0)
