Abstract
As machine learning models are increasingly integrated into critical cybersecurity tools, their security becomes a priority, particularly since the rise of adversarial examples: original inputs to which a small, carefully computed perturbation is added to influence the model's prediction. Applied to cybersecurity tools such as network intrusion detection systems, adversarial examples could allow attackers to evade detection mechanisms that rely on machine learning. However, if the perturbation does not respect the constraints of network traffic, the adversarial examples may be inconsistent, rendering the attack invalid. Such inconsistencies are a major obstacle to mounting end-to-end network attacks. In this article, we study the practicality of adversarial attacks aimed at evading network intrusion detection models. We evaluate the impact of state-of-the-art attacks on three different datasets. Through a fine-grained analysis of the generated adversarial examples, we introduce and discuss four key criteria that network traffic must satisfy to be valid: value ranges, binary values, multiple category membership, and semantic relations.
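To make these four criteria concrete, below is a minimal sketch, in Python, of how a perturbed traffic record could be checked for validity. It assumes a flat feature-vector representation of network flows; the feature names, bounds, one-hot groups, and the semantic relation used here are illustrative assumptions, not the schema of any of the datasets studied in the paper.

# Minimal validity check for an adversarial network-traffic record.
# All feature names, bounds, and relations below are illustrative assumptions.
from math import isclose

# Criterion 1: value ranges (None means unbounded on that side)
VALUE_RANGES = {
    "duration": (0.0, None),   # durations cannot be negative
    "src_bytes": (0.0, None),
    "error_rate": (0.0, 1.0),  # ratio features must stay within [0, 1]
}

# Criterion 2: features that must remain exactly 0 or 1
BINARY_FEATURES = ["land", "logged_in"]

# Criterion 3: one-hot encoded categories, exactly one member active per group
ONE_HOT_GROUPS = [["proto_tcp", "proto_udp", "proto_icmp"]]

def is_valid(record: dict) -> bool:
    """Return True only if the perturbed record satisfies all four criteria."""
    # 1. Value ranges
    for name, (lo, hi) in VALUE_RANGES.items():
        value = record[name]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            return False
    # 2. Binary values
    if any(record[name] not in (0, 1) for name in BINARY_FEATURES):
        return False
    # 3. Multiple category membership: exactly one protocol flag set
    for group in ONE_HOT_GROUPS:
        if sum(record[name] for name in group) != 1:
            return False
    # 4. Semantic relations, e.g. total bytes must equal the sum of bytes
    #    sent in each direction (an illustrative relation)
    if not isclose(record["total_bytes"], record["src_bytes"] + record["dst_bytes"]):
        return False
    return True

An adversarial example rejected by such a check cannot correspond to real network traffic, which is precisely the kind of inconsistency that invalidates the attack in practice.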
Cite this article
Merzouk, M.A., Cuppens, F., Boulahia-Cuppens, N. et al. Investigating the practicality of adversarial evasion attacks on network intrusion detection. Ann. Telecommun. 77, 763–775 (2022). https://doi.org/10.1007/s12243-022-00910-1