Abstract
Multivariate, multi-sensor data acquisition for device monitoring has had a significant impact on recent research in anomaly detection. Despite the wide range of anomaly detection approaches, localizing detected anomalies in multivariate, multi-sensor time-series data remains a challenge. Interpretation and anomaly attribution are critical and could improve analysis and decision-making in many applications. With anomaly attribution, explanations can reveal, on a per-anomaly basis, which sensors are at the root of an anomaly and which features contribute most to it. To this end, we propose using saliency-based explainable-AI approaches to localize, in an unsupervised manner, the sensors responsible for anomalies. While most explainable-AI methods are regarded as interpreters of AI models, we show for the first time that saliency-based explainable AI can be utilized for multi-sensor anomaly localization. Our approach is demonstrated on an unsupervised multi-sensor setup, and the experiments show promising results: we evaluate and compare different classes of saliency explainable-AI approaches on the Server Machine Dataset (SMD) and compare the results against the state-of-the-art OmniAnomaly localization approach. Our empirical analysis demonstrates promising performance.
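To illustrate the core idea of saliency-based anomaly attribution, the following is a minimal sketch, not the paper's actual method: it uses an occlusion-style saliency (one class of saliency approach) on top of a simple Mahalanobis anomaly score standing in for a learned detector. Each sensor is replaced in turn by its nominal value, and the resulting drop in the anomaly score is taken as that sensor's saliency; the sensor whose occlusion reduces the score most is attributed as the anomaly's source. All function names and the toy data here are illustrative assumptions.

```python
import numpy as np

def anomaly_score(window, mean, cov_inv):
    # Mahalanobis-style score of the window's mean; stands in for a
    # learned detector's anomaly score (e.g., reconstruction error).
    d = window.mean(axis=0) - mean
    return float(d @ cov_inv @ d)

def occlusion_saliency(window, mean, cov_inv):
    """Per-sensor saliency: score drop when a sensor is occluded
    (replaced by its nominal mean from the normal training data)."""
    base = anomaly_score(window, mean, cov_inv)
    saliency = np.zeros(window.shape[1])
    for j in range(window.shape[1]):
        occluded = window.copy()
        occluded[:, j] = mean[j]  # occlude sensor j
        saliency[j] = base - anomaly_score(occluded, mean, cov_inv)
    return saliency

# Toy setup: 3 sensors; sensor 1 drifts during the anomalous window.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 3))          # "training" data
mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal.T))

window = rng.normal(0.0, 1.0, size=(50, 3))            # anomalous window
window[:, 1] += 5.0                                    # inject drift on sensor 1

saliency = occlusion_saliency(window, mean, cov_inv)
print(int(np.argmax(saliency)))                        # → 1 (sensor 1 localized)
```

Gradient-based saliency methods (e.g., Integrated Gradients) follow the same attribution pattern but differentiate the detector's score with respect to each sensor's input instead of occluding it.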
Acknowledgments
Resources used in preparing this research received funding from the Federal Ministry for Economic Affairs and Climate Action (BMWK) through the research project SPAICER, a collaboration among project partners including the German Research Center for Artificial Intelligence (DFKI) and TU Darmstadt. The proposed approach will become part of Smart Resilience Services.
© 2022 IFIP International Federation for Information Processing
Cite this paper
Ameli, M., Pfanschilling, V., Amirli, A., Maaß, W., Kersting, K. (2022). Unsupervised Multi-sensor Anomaly Localization with Explainable AI. In: Maglogiannis, I., Iliadis, L., Macintyre, J., Cortez, P. (eds) Artificial Intelligence Applications and Innovations. AIAI 2022. IFIP Advances in Information and Communication Technology, vol 646. Springer, Cham. https://doi.org/10.1007/978-3-031-08333-4_41
Print ISBN: 978-3-031-08332-7
Online ISBN: 978-3-031-08333-4