
Introducing the Attribution Stability Indicator: A Measure for Time Series XAI Attributions

  • Conference paper
  • First Online:
Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2023)

Abstract

Given the increasing amount and complexity of time series data in domains such as finance, weather forecasting, and healthcare, there is a growing need for models that combine state-of-the-art predictive performance with interpretable insights into the underlying patterns and relationships. Attribution techniques make it possible to extract explanations from time series models, but their robustness and trustworthiness are hard to evaluate. We propose the Attribution Stability Indicator (ASI), a measure that takes robustness and trustworthiness into account as properties of attribution techniques for time series. To capture these desired properties, we extend a perturbation analysis with correlations between the original time series and the perturbed instance, as well as with the attributions. We demonstrate the desired properties through an analysis of the attributions in a dimension-reduced space and through the distribution of ASI scores over three complete time series classification datasets.
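
To make the perturbation-and-correlation idea concrete, the following Python sketch computes, for a single instance, the three ingredients the abstract mentions. It is a simplified, hypothetical illustration under stated assumptions, not the paper's exact ASI formulation: attributions are assumed to be given per time point, the highest-attributed points are zeroed out as the perturbation, and the names perturbation_correlation_report and model_predict are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def perturbation_correlation_report(model_predict, x, attribution, top_k=0.1):
    """Illustrative perturbation-and-correlation analysis (not the paper's exact ASI).

    model_predict -- callable mapping a series of shape (T,) to class probabilities
    x             -- original univariate time series, shape (T,)
    attribution   -- per-time-point relevance scores, shape (T,)
    top_k         -- fraction of the highest-attributed time points to perturb
    """
    # Perturb the most relevant time points (zeroing is one common strategy).
    k = max(1, int(top_k * len(x)))
    idx = np.argsort(np.abs(attribution))[-k:]
    x_pert = x.copy()
    x_pert[idx] = 0.0

    # Prediction shift: attributions that point at genuinely important points
    # should cause a noticeable change in the model output when perturbed.
    p_orig, p_pert = model_predict(x), model_predict(x_pert)
    pred_shift = 0.5 * np.abs(p_orig - p_pert).sum()  # total variation distance

    # Correlation between the original and the perturbed series: a targeted
    # perturbation leaves most of the series intact.
    series_corr = pearsonr(x, x_pert)[0]

    # Correlation between attribution magnitude and how much each point changed.
    attr_corr = spearmanr(np.abs(attribution), np.abs(x - x_pert))[0]

    # The paper combines ingredients like these into a single ASI score;
    # here they are only reported separately for illustration.
    return {"prediction_shift": pred_shift,
            "series_correlation": series_corr,
            "attribution_correlation": attr_corr}
```

Computed for every instance of a test set, the distribution of such scores over a whole dataset is the kind of dataset-level analysis the abstract describes.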


Notes

  1. https://captum.ai/. (A minimal usage sketch follows below.)
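
For readers unfamiliar with the library, the note above points to Captum, a PyTorch attribution toolbox. The following is a minimal, hypothetical usage sketch; the model architecture, input shape, and target class are placeholders for illustration, not details taken from the paper.

```python
import torch
from captum.attr import IntegratedGradients

# Hypothetical 1D-CNN time series classifier used only for illustration.
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 2),
)
model.eval()

# One univariate series of length 100: (batch, channels, time steps).
series = torch.randn(1, 1, 100)

# Integrated Gradients assigns one relevance value per input time point.
ig = IntegratedGradients(model)
attributions = ig.attribute(series, target=1)
print(attributions.shape)  # torch.Size([1, 1, 100])
```

The resulting tensor holds one relevance value per time point, which is exactly the kind of per-point attribution that a perturbation analysis such as the one sketched above can evaluate.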


Acknowledgements

This work has been partially supported by the Federal Ministry of Education and Research (BMBF) in VIKING (13N16242).

Author information


Corresponding author

Correspondence to Udo Schlegel.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Schlegel, U., Keim, D.A. (2025). Introducing the Attribution Stability Indicator: A Measure for Time Series XAI Attributions. In: Meo, R., Silvestri, F. (eds) Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2023. Communications in Computer and Information Science, vol 2135. Springer, Cham. https://doi.org/10.1007/978-3-031-74633-8_1


  • DOI: https://doi.org/10.1007/978-3-031-74633-8_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-74632-1

  • Online ISBN: 978-3-031-74633-8

  • eBook Packages: Artificial Intelligence (R0)
