Abstract
This work introduces observation-specific explanations, which assign each data point a score proportional to its importance in defining the prediction process. Such explanations identify the most influential observations for the black-box model of interest. The proposed method estimates these explanations by constructing a surrogate model through scattered data approximation using the orthogonal matching pursuit algorithm. The approach is validated on both simulated and real-world datasets.
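The core idea can be illustrated with a minimal sketch: fit the black-box predictions with a kernel surrogate whose basis functions are centred at the observations, and let orthogonal matching pursuit select a sparse subset of those centres. The observations that survive the sparse selection act as the influential ones, and the coefficient magnitudes as importance scores. This is only an assumed simplification of the paper's method (which uses dedicated scattered-data bases, not a plain Gaussian kernel matrix); the data, length scale, and sparsity level below are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Toy setting: n observations in d dimensions, with a smooth test function
# standing in for the black-box model's predictions.
n, d = 200, 2
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

def gaussian_kernel(A, B, length_scale=0.5):
    """Gaussian kernel matrix; column j is the kernel centred at observation j."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

K = gaussian_kernel(X, X)

# OMP greedily picks a sparse set of kernel columns, i.e. a sparse set of
# observations whose kernel translates best reconstruct the predictions.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10).fit(K, y)

influential = np.flatnonzero(omp.coef_)   # indices of selected observations
scores = np.abs(omp.coef_[influential])   # coefficient magnitude as importance

print(influential)
```

Increasing `n_nonzero_coefs` trades sparsity of the explanation for surrogate accuracy; in this sketch the surrogate quality can be checked via `omp.score(K, y)`.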
Notes
1. kaggle.com/datasets/abrambeyer/openintro-possum.
Acknowledgement
Valentina Ghidini, Michael Multerer, and Jacopo Quizi were funded by the SNSF starting grant “Multiresolution methods for unstructured data” (TMSGI2_211684). Rohan Sen was supported by the SNF grant “Scenarios” (100018_189086).
Ethics declarations
Disclosure of Interests
The authors have no competing interests relevant to the content of this article to declare.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ghidini, V., Multerer, M., Quizi, J., Sen, R. (2024). Observation-Specific Explanations Through Scattered Data Approximation. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2154. Springer, Cham. https://doi.org/10.1007/978-3-031-63797-1_17
DOI: https://doi.org/10.1007/978-3-031-63797-1_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-63796-4
Online ISBN: 978-3-031-63797-1