
Explainable Spatio-Temporal Graph Modeling

Conference paper in Discovery Science (DS 2023)

Abstract

Explainable AI (XAI) focuses on designing inference explanation methods and tools that complement machine learning and black-box deep learning models. Such capabilities are crucially important given the rising adoption of AI models in real-world applications, where domain experts must understand how model predictions are derived in order to make informed decisions. Despite the increasing number of XAI approaches for tabular, image, and graph data, their effectiveness in contexts with spatial and temporal dimensions remains limited. As a result, available methods do not properly explain predictive models’ inferences on spatio-temporal data. In this paper, we fill this gap by proposing an XAI method for spatio-temporal geo-distributed sensor network data, where observations are collected at regular time intervals and at different locations. Our model-agnostic method performs perturbations on the feature space of the data to uncover the factors that influence model predictions, and generates explanations for multiple analytical views, such as features, timesteps, and node locations. Our qualitative and quantitative experiments with real-world forecasting datasets show the effectiveness of the proposed method in providing valuable explanations of model predictions.
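The perturbation-based procedure the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the array layout `(timesteps, nodes, features)`, the resampling-based perturbation, and all function names are assumptions.

```python
import numpy as np

def perturbation_importance(predict, X, n_repeats=10, rng=None):
    """Model-agnostic importance scores for a spatio-temporal input.

    X has shape (timesteps, nodes, features); `predict` maps such an
    array to a forecast. Each (t, n, f) entry is replaced with values
    resampled from that feature's empirical distribution, and the mean
    absolute change in the prediction is recorded as its importance.
    """
    rng = np.random.default_rng(rng)
    base = predict(X)
    T, N, F = X.shape
    scores = np.zeros((T, N, F))
    for t in range(T):
        for n in range(N):
            for f in range(F):
                deltas = []
                for _ in range(n_repeats):
                    Xp = X.copy()
                    # perturb one entry with a value drawn from the
                    # same feature's values across all times and nodes
                    Xp[t, n, f] = rng.choice(X[:, :, f].ravel())
                    deltas.append(np.abs(predict(Xp) - base).mean())
                scores[t, n, f] = np.mean(deltas)
    return scores

def views(scores):
    """Aggregate the 3-D scores into the three analytical views."""
    return {
        "features": scores.mean(axis=(0, 1)),   # one score per feature
        "timesteps": scores.mean(axis=(1, 2)),  # one score per timestep
        "nodes": scores.mean(axis=(0, 2)),      # one score per node
    }
```

Averaging the per-entry scores along different axes yields the per-feature, per-timestep, and per-node views mentioned in the abstract; the exhaustive triple loop is for clarity only and would be vectorized or sampled in practice.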


Notes

  1. For brevity, we restrict our analysis of the Features comparison to Fidelity+ metrics.

  2. Additional visual artifacts generated by our method (histograms, perturbation plots, and rank plots) are available at: https://www.rcorizzo.com/graph-xai/.
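Footnote 1 refers to Fidelity+ metrics. The paper's exact definition is in the body; a common formulation for a forecaster, sketched here under assumptions (the occlusion baseline and the function name are illustrative, not the authors' definition), measures how much the prediction changes when the entries an explainer flags as important are occluded:

```python
import numpy as np

def fidelity_plus(predict, X, mask, baseline=0.0):
    """Sketch of a Fidelity+-style score for a forecasting model.

    `mask` is a boolean array with the same shape as `X`, flagging the
    entries an explainer deems important. Those entries are occluded
    with `baseline`, and the mean absolute change in the model output
    is returned: larger values mean the flagged entries genuinely
    drove the prediction (higher fidelity of the explanation).
    """
    X_occluded = np.where(mask, baseline, X)
    return float(np.mean(np.abs(predict(X) - predict(X_occluded))))
```

Masking truly important entries should produce a large score, while masking irrelevant ones should leave the prediction, and hence the score, near zero.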


Acknowledgement

This work was partially supported by the project FAIR - Future AI Research (PE00000013), Spoke 6 - Symbiotic AI, under the NRRP MUR program funded by the NextGenerationEU and by NVIDIA with the donation of a Titan V GPU.

Author information

Correspondence to Roberto Corizzo.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Altieri, M., Ceci, M., Corizzo, R. (2023). Explainable Spatio-Temporal Graph Modeling. In: Bifet, A., Lorena, A.C., Ribeiro, R.P., Gama, J., Abreu, P.H. (eds) Discovery Science. DS 2023. Lecture Notes in Computer Science, vol 14276. Springer, Cham. https://doi.org/10.1007/978-3-031-45275-8_12


  • DOI: https://doi.org/10.1007/978-3-031-45275-8_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45274-1

  • Online ISBN: 978-3-031-45275-8

  • eBook Packages: Computer Science, Computer Science (R0)
