Evaluating Feature Relevance XAI in Network Intrusion Detection

Conference paper in Explainable Artificial Intelligence (xAI 2023), part of the book series Communications in Computer and Information Science (CCIS, volume 1901).


Abstract

As machine learning models become increasingly complex, there is a growing need for explainability to understand and trust their decision-making processes. In the domain of network intrusion detection, post-hoc feature relevance explanations have been widely used to provide insight into the factors driving model decisions. However, recent research has highlighted challenges with these methods when applied to anomaly detection, whose importance and impact can vary with the application domain. In this paper, we investigate the challenges of post-hoc feature relevance explanations for network intrusion detection, a critical area for ensuring the security and integrity of computer networks. To gain a deeper understanding of these challenges in this application domain, we quantitatively and qualitatively investigate the popular feature relevance approach SHAP when explaining different network intrusion detection approaches. We conduct experiments to jointly evaluate detection quality and explainability, and explore the impact of replacement data, a commonly overlooked hyperparameter of post-hoc feature relevance approaches. We find that post-hoc XAI can provide high-quality explanations, but requires a careful choice of its replacement data, as default settings and common choices do not transfer across different detection models. Our study showcases the viability of post-hoc XAI for network intrusion detection systems, but highlights the need for rigorous evaluation of the produced explanations.
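To make the role of replacement data concrete, the following is a minimal illustrative sketch, not the authors' experimental setup: it trains an Isolation Forest on synthetic stand-in data, measures detection quality via PR-AUC, and explains anomaly scores with SHAP's KernelExplainer under two different background ("replacement") datasets. All data, feature dimensions, and the specific background choices are assumptions for demonstration only.

```python
# Minimal sketch (assumptions: synthetic data, Isolation Forest detector);
# this does NOT reproduce the paper's experiments.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(1000, 8))   # stand-in for benign flow features
X_attack = rng.normal(3.0, 1.0, size=(50, 8))     # stand-in for attack flows
X_test = np.vstack([X_benign[:200], X_attack])
y_test = np.r_[np.zeros(200), np.ones(50)]        # 1 = attack

# Unsupervised detector trained on benign traffic only.
model = IsolationForest(random_state=0).fit(X_benign[200:])

# Detection quality: negate decision_function so attacks score higher,
# then evaluate with PR-AUC (suited to heavy class imbalance).
scores = -model.decision_function(X_test)
print("PR-AUC:", average_precision_score(y_test, scores))

# Explainability: the background dataset passed to KernelExplainer defines
# the replacement values used when features are marginalised out; it is a
# hyperparameter of the explanation, not of the detection model.
score_fn = lambda X: -model.decision_function(X)
backgrounds = {
    "all-zeros baseline": np.zeros((1, 8)),
    "sampled benign data": shap.sample(X_benign[200:], 100),
}
for name, bg in backgrounds.items():
    explainer = shap.KernelExplainer(score_fn, bg)
    sv = explainer.shap_values(X_attack[:5], nsamples=200)
    print(name, "-> mean |SHAP| per feature:",
          np.abs(sv).mean(axis=0).round(3))
```

Comparing the two printouts will typically show different feature relevance rankings for the very same detector, which mirrors the sensitivity to replacement data that the paper's evaluation targets.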


Notes

  1. Code and annotations are available under https://professor-x.de/xai-nids.


Author information

Corresponding author: Julian Tritscher.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tritscher, J., Wolf, M., Hotho, A., Schlör, D. (2023). Evaluating Feature Relevance XAI in Network Intrusion Detection. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1901. Springer, Cham. https://doi.org/10.1007/978-3-031-44064-9_25

  • DOI: https://doi.org/10.1007/978-3-031-44064-9_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44063-2

  • Online ISBN: 978-3-031-44064-9

  • eBook Packages: Computer Science (R0)
