Abstract
Detecting intrusions on a network through a network intrusion detection system is an important part of most cyber-security defences. However, the growing use of machine learning techniques, most notably neural networks, to detect anomalous traffic more accurately has turned many of these systems into black boxes: opaque to the user, with little ability to explain their decisions, withholding useful information from defenders and potentially leaving them vulnerable to an opportune attacker. This paper makes several contributions to addressing this problem by augmenting an autoencoder neural network with external memory. It first explores the effect of memory size and addressing scheme on \(F_1\)-score performance, finding that performance plateaus at memory sizes greater than 50, and that addressing schemes designed to increase the sparsity of memory usage have a negligible effect. In addition, this work develops several tools to better explain the model: plotting which memory slots are strongly matched with which classes, visually and numerically measuring how much external memory each class requires to be properly encoded, and using the contents of the external memory not only to identify similar previously seen classes, but also to identify similarity with unseen classes and to help gauge how outdated a model may be, based on how the results align with domain knowledge. These tools and techniques show promising results in demonstrating the explainability potential of external memory for intrusion detection systems and how it might be applied to help secure networks.
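The explainability tools described above rest on soft memory addressing: the encoder's latent code is matched against each memory slot, and the resulting attention weights reveal which slots a given input (and hence a given traffic class) relies on. The following is a minimal sketch of that addressing step, in the style of memory-augmented autoencoders; the function name, cosine-similarity matching, and temperature parameter are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def address_memory(z, memory, temperature=1.0):
    """Soft-address an external memory with a latent code.

    z:      (d,)   latent vector produced by the encoder.
    memory: (N, d) matrix of N memory slots.
    Returns the addressing weights over the slots and the
    reconstructed latent (a weighted sum of slots) that a
    decoder would consume.
    """
    # Cosine similarity between the latent code and every slot.
    sims = memory @ z / (np.linalg.norm(memory, axis=1)
                         * np.linalg.norm(z) + 1e-8)
    # Softmax turns similarities into addressing weights that
    # sum to one; lower temperature gives sparser addressing.
    scaled = sims / temperature
    e = np.exp(scaled - np.max(scaled))
    w = e / e.sum()
    # Inspecting w is the basis of the explainability tools:
    # it shows which slots encode a given input.
    z_hat = w @ memory
    return w, z_hat

rng = np.random.default_rng(0)
mem = rng.normal(size=(50, 8))  # 50 slots, matching the plateau finding
z = mem[3].copy()               # an input that aligns with slot 3
w, z_hat = address_memory(z, mem)
print(int(np.argmax(w)))        # slot 3 receives the highest weight
```

Aggregating these weight vectors per traffic class is what makes it possible to plot slot-to-class matchings and to measure how many slots each class needs to be properly encoded.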
D.-S. Pham—This work was partly supported by a Curtin Malaysia collaborative research grant.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hutchison, J., Pham, DS., Soh, ST., Ling, HC. (2022). Explainable Network Intrusion Detection Using External Memory Models. In: Aziz, H., Corrêa, D., French, T. (eds) AI 2022: Advances in Artificial Intelligence. AI 2022. Lecture Notes in Computer Science(), vol 13728. Springer, Cham. https://doi.org/10.1007/978-3-031-22695-3_16
DOI: https://doi.org/10.1007/978-3-031-22695-3_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-22694-6
Online ISBN: 978-3-031-22695-3
eBook Packages: Computer Science (R0)