Abstract
As machine learning models become increasingly complex, there is a growing need for explainability to understand and trust their decision-making processes. In the domain of network intrusion detection, post-hoc feature relevance explanations have been widely used to provide insight into the factors driving model decisions. However, recent research has highlighted challenges with these methods when applied to anomaly detection, challenges whose importance and impact vary with the application domain. In this paper, we investigate the challenges of post-hoc feature relevance explanations for network intrusion detection, a critical area for ensuring the security and integrity of computer networks. To gain a deeper understanding of these challenges in this application domain, we quantitatively and qualitatively investigate the popular feature relevance approach SHAP when explaining different network intrusion detection approaches. We conduct experiments that jointly evaluate detection quality and explainability, and explore the impact of replacement data, a commonly overlooked hyperparameter of post-hoc feature relevance approaches. We find that post-hoc XAI can provide high-quality explanations, but requires a careful choice of replacement data, as default settings and common choices do not transfer across different detection models. Our study showcases the viability of post-hoc XAI for network intrusion detection systems, but highlights the need for rigorous evaluation of the produced explanations.
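The role of replacement data can be made concrete with a small sketch. The following Python snippet is a minimal illustration, not the authors' experimental pipeline: the detector, the toy data, and the two baseline choices are assumptions. It shows how replacement data enters SHAP's KernelExplainer as its background dataset, and how two common default choices can be swapped against each other.

```python
# A minimal sketch (not the authors' experimental pipeline): a toy
# IsolationForest detector is explained with SHAP's KernelExplainer,
# once with an all-zero baseline and once with a sample of the training
# data as replacement (background) data. Data, detector, and baseline
# choices here are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))               # "benign" traffic features (toy)
X_test = np.vstack([rng.normal(size=(5, 4)),      # benign test samples
                    rng.normal(5.0, 1.0, (5, 4))])  # shifted, anomalous samples

detector = IsolationForest(random_state=0).fit(X_train)

# Two common replacement-data choices; the attributions they yield can differ.
backgrounds = {
    "zero baseline": np.zeros((1, 4)),
    "training sample": X_train[:100],
}

for name, background in backgrounds.items():
    # The background dataset is the replacement data: features are "removed"
    # by substituting values drawn from it when estimating Shapley values.
    explainer = shap.KernelExplainer(detector.decision_function, background)
    shap_values = explainer.shap_values(X_test, nsamples=200)
    print(name, np.round(shap_values[-1], 3))     # attributions for one anomaly
```

Under this setup, the two background choices will generally produce different attribution vectors for the same anomaly, which mirrors the point above: replacement data acts as a hyperparameter that should be chosen and evaluated per detection model rather than left at a default.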
Notes
1. Code and annotations are available at https://professor-x.de/xai-nids.