XAI for intrusion detection system: comparing explanations based on global and local scope

Original Paper
Journal of Computer Virology and Hacking Techniques

Abstract

An Intrusion Detection System (IDS) is a cybersecurity device or software application that has become an essential tool for providing a secure network environment. Machine-learning-based IDSs offer a self-learning solution and better performance than traditional IDSs. Because the predictive performance of an IDS is judged against conflicting criteria, the underlying algorithms are becoming more complex and hence less transparent. Explainable Artificial Intelligence (XAI) is a set of frameworks that help develop interpretable and inclusive machine learning models. In this paper, we apply Permutation Importance, SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and Contextual Importance and Utility (CIU), covering both global and local scopes of explanation, to IDSs built on Random Forest, eXtreme Gradient Boosting, and Light Gradient Boosting machine learning models, and compare the resulting explanations in terms of accuracy, consistency, and stability. This comparison can help cybersecurity personnel better understand predictions of cyber-attacks in network traffic. A case study focusing on DoS attack variants offers useful insights into the impact of features on prediction performance.
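
Although the full text is subscription-only, the workflow the abstract describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the authors' code: a Random Forest trained on synthetic data stands in for the IDS and for the NSL-KDD traffic features the paper uses, with a global explanation from permutation importance and SHAP and a local explanation of one prediction from LIME. The CIU step and the gradient-boosting models are omitted for brevity; the shap and lime packages are assumed to be installed.

```python
# Minimal, hypothetical sketch of the global/local XAI workflow described
# above (not the authors' code). Synthetic data replaces the NSL-KDD
# traffic features; install dependencies with: pip install shap lime
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic binary "normal vs. attack" traffic (placeholder for NSL-KDD).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Global scope: permutation importance (model-agnostic feature ranking).
pi = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in pi.importances_mean.argsort()[::-1][:5]:
    print(f"{feature_names[i]}: {pi.importances_mean[i]:.4f}")

# Global/local scope: SHAP values for tree ensembles. The return format
# (list of per-class arrays vs. one 3-D array) varies with the shap version.
shap_values = shap.TreeExplainer(model).shap_values(X_te)
print("SHAP output shape:", np.asarray(shap_values).shape)

# Local scope: LIME explanation of a single test prediction.
lime_explainer = LimeTabularExplainer(X_tr, feature_names=feature_names,
                                      class_names=["normal", "attack"],
                                      mode="classification")
exp = lime_explainer.explain_instance(X_te[0], model.predict_proba,
                                      num_features=5)
print(exp.as_list())  # top features with their signed local contributions
```

Permutation importance and SHAP summarize feature influence over the whole test set (global scope), while LIME attributes a single prediction to its most influential features (local scope); comparing such outputs for accuracy, consistency, and stability is the exercise the paper carries out.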

Author information

Corresponding author

Correspondence to Ciza Thomas.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Hariharan, S., Rejimol Robinson, R.R., Prasad, R.R. et al. XAI for intrusion detection system: comparing explanations based on global and local scope. J Comput Virol Hack Tech 19, 217–239 (2023). https://doi.org/10.1007/s11416-022-00441-2
