Observation-Specific Explanations Through Scattered Data Approximation

  • Conference paper
  • In: Explainable Artificial Intelligence (xAI 2024)

Abstract

This work introduces observation-specific explanations, which assign each data point a score proportional to its importance in the prediction process. Such explanations identify the observations most influential for the black-box model of interest. The proposed method estimates these explanations by constructing a surrogate model through scattered data approximation with the orthogonal matching pursuit algorithm. The approach is validated on both simulated and real-world datasets.
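As a rough illustration of the general idea described in the abstract (not the authors' exact construction), the sketch below fits a sparse kernel surrogate to a black-box model's predictions using scikit-learn's OrthogonalMatchingPursuit and reads the selected kernel centres as the influential observations. The Gaussian kernel, the sparsity level, and the use of absolute coefficients as scores are all illustrative assumptions.

```python
# Minimal sketch: explain a black-box regressor by approximating its
# predictions with a sparse combination of kernel functions, each
# centred at one training observation. Kernel choice, sparsity level,
# and scoring rule are assumptions, not the paper's exact method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

# Black-box model whose prediction process we want to explain.
black_box = RandomForestRegressor(random_state=0).fit(X, y)
y_hat = black_box.predict(X)

# Scattered data approximation: column j of K is a Gaussian kernel
# function centred at training observation j, evaluated at all points.
K = rbf_kernel(X, X, gamma=1.0)

# Orthogonal matching pursuit greedily selects a few kernel centres
# whose span best reproduces the black-box predictions.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10).fit(K, y_hat)

# Observation-specific scores: nonzero coefficients flag the
# observations most influential for the surrogate model.
scores = np.abs(omp.coef_)
influential = np.argsort(scores)[::-1][:10]
print("most influential observations:", influential)
```

Under these assumptions, observations with nonzero OMP coefficients are the ones the surrogate needs in order to mimic the black-box output, which matches the abstract's notion of scoring points by their importance to the prediction process.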

Notes

  1. kaggle.com/datasets/abrambeyer/openintro-possum.

Acknowledgement

Valentina Ghidini, Michael Multerer, and Jacopo Quizi were funded by the SNSF starting grant “Multiresolution methods for unstructured data” (TMSGI2_211684). Rohan Sen was supported by the SNF grant “Scenarios” (100018_189086).

Author information

Corresponding author

Correspondence to Jacopo Quizi.

Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Ghidini, V., Multerer, M., Quizi, J., Sen, R. (2024). Observation-Specific Explanations Through Scattered Data Approximation. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2154. Springer, Cham. https://doi.org/10.1007/978-3-031-63797-1_17

  • DOI: https://doi.org/10.1007/978-3-031-63797-1_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63796-4

  • Online ISBN: 978-3-031-63797-1
