Unsupervised Multi-sensor Anomaly Localization with Explainable AI

  • Conference paper
Artificial Intelligence Applications and Innovations (AIAI 2022)

Abstract

Multivariate, multi-sensor data acquisition for device monitoring has had a significant impact on recent anomaly detection research. Despite the wide range of anomaly detection approaches, localizing detected anomalies in multivariate, multi-sensor time-series data remains a challenge. Interpretation and anomaly attribution are critical and can improve analysis and decision-making in many applications. With anomaly attribution, explanations can be leveraged to understand, on a per-anomaly basis, which sensors are at the root of an anomaly and which features contribute most to it. To this end, we propose using saliency-based Explainable AI approaches to localize, in an unsupervised manner, the sensors responsible for anomalies. While most Explainable AI methods are regarded as interpreters of AI models, we show for the first time that saliency-based Explainable AI can be utilized for multi-sensor anomaly localization. We demonstrate our approach by localizing detected anomalies in an unsupervised multi-sensor setup, and the experiments show promising results. We evaluate and compare different classes of saliency Explainable AI approaches on the Server Machine Dataset (SMD) and compare the results with the state-of-the-art OmniAnomaly localization approach, with our empirical analysis demonstrating promising performance.
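The full paper is behind a paywall, so only the abstract is available here. As an illustration of the general idea it describes, the following is a minimal sketch of occlusion-style saliency for per-sensor anomaly localization: score an observation with a detector, occlude one sensor at a time, and attribute the anomaly to the sensor whose occlusion reduces the score the most. Everything in this sketch (the toy z-score detector, the synthetic data, all names) is our assumption for illustration, not the authors' actual model or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-sensor data: 5 sensors, 500 time steps of normal behaviour.
# This toy data and the z-score "detector" below stand in for a learned model.
T, S = 500, 5
normal = rng.normal(size=(T, S))

# Fit per-sensor statistics on the normal window; the anomaly score is the
# summed squared z-score of one multi-sensor observation.
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def anomaly_score(x):
    # x: shape (S,) - one multi-sensor observation
    return float(np.sum(((x - mu) / sigma) ** 2))

# Inject an anomaly into sensor 2 at one time step.
x_anom = rng.normal(size=S)
x_anom[2] += 8.0

def occlusion_saliency(x):
    # Replace each sensor's value with its normal-behaviour mean and measure
    # how much the anomaly score drops; the sensor whose occlusion reduces
    # the score the most is localized as the source of the anomaly.
    base = anomaly_score(x)
    saliency = np.empty(S)
    for s in range(S):
        x_occ = x.copy()
        x_occ[s] = mu[s]
        saliency[s] = base - anomaly_score(x_occ)
    return saliency

sal = occlusion_saliency(x_anom)
print("saliency per sensor:", np.round(sal, 2))
print("localized sensor:", int(np.argmax(sal)))
```

The same occlude-and-rescore loop works unchanged with any anomaly detector that maps an observation to a scalar score, which is what makes saliency-style attribution attractive in an unsupervised setting.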



Acknowledgments

This research received funding from the Federal Ministry for Economic Affairs and Climate Action (BMWK) through the research project SPAICER. The collaboration is among project partners: the German Research Center for Artificial Intelligence (DFKI) and TU Darmstadt. The proposed approach will become part of Smart Resilience Services.

Author information


Correspondence to Mina Ameli.



Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper


Cite this paper

Ameli, M., Pfanschilling, V., Amirli, A., Maaß, W., Kersting, K. (2022). Unsupervised Multi-sensor Anomaly Localization with Explainable AI. In: Maglogiannis, I., Iliadis, L., Macintyre, J., Cortez, P. (eds) Artificial Intelligence Applications and Innovations. AIAI 2022. IFIP Advances in Information and Communication Technology, vol 646. Springer, Cham. https://doi.org/10.1007/978-3-031-08333-4_41


  • DOI: https://doi.org/10.1007/978-3-031-08333-4_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08332-7

  • Online ISBN: 978-3-031-08333-4

  • eBook Packages: Computer Science (R0)
