Abstract
In recent years, intrusion detection systems (IDSs) based on machine learning (ML) algorithms have developed rapidly. However, ML algorithms are easily attacked by adversarial examples, and many attackers add perturbations to the features of malicious traffic to evade ML-based IDSs. Unfortunately, most attack methods add perturbations without sufficient restrictions, generating impractical adversarial examples. In this paper, we propose RAAM, a restricted adversarial attack model that adds perturbations to traffic features in order to evade ML-based IDSs. RAAM employs an improved loss to enhance the adversarial effect, and uses a regularizer and masking vectors to restrict perturbations. Compared with previous work, RAAM generates adversarial examples with superior characteristics: regularization, maliciousness and small perturbation. We conduct experiments on the well-known NSL-KDD dataset and test against nine different ML-based IDSs. Experimental results show that the mean evasion increase rate (EIR) of RAAM is 94.1% in multiple attacks, which is 9.2% higher than that of the best related method, DIGFuPAS. In particular, adversarial examples generated by RAAM have smaller perturbations: the mean perturbation distance (\(L_{2}\)) is 1.79, which is 0.81 lower than that of DIGFuPAS. In addition, we retrain IDSs with adversarial examples to improve their robustness. Experimental results show that retrained IDSs not only maintain their detection ability on the original examples, but are also difficult to attack again.
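The abstract names two restrictions on perturbations, a masking vector over the perturbable features and a regularized perturbation magnitude measured by the \(L_{2}\) distance, as well as the EIR metric. The following minimal sketch illustrates these ideas only; the feature split, the clipping range, and the EIR formula are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of masked, norm-restricted perturbations
# to traffic features, plus an example EIR computation. All concrete choices
# below (feature split, value range, EIR formula) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random((5, 41))            # 5 malicious flows, 41 NSL-KDD-style features in [0, 1]
mask = np.zeros(41)                # 1 = feature may be perturbed, 0 = feature kept intact
mask[20:] = 1                      # assumption: only "non-functional" features are editable

delta = 0.1 * rng.standard_normal(x.shape)    # raw perturbation (e.g. from a generator)
x_adv = np.clip(x + delta * mask, 0.0, 1.0)   # masked perturbation, kept in a valid range

l2 = np.linalg.norm(x_adv - x, axis=1).mean() # mean L2 distance of the perturbations
print(f"mean L2 perturbation: {l2:.3f}")

# Hypothetical evasion increase rate (EIR): relative drop of the detection rate
# on adversarial examples compared with the original examples.
def eir(dr_original: float, dr_adversarial: float) -> float:
    return 1.0 - dr_adversarial / dr_original

print(f"EIR example: {eir(0.95, 0.06):.3f}")  # ~0.937, i.e. ~93.7% evasion increase
```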
References
Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
Aiken, J., Scott-Hayward, S.: Investigating adversarial attacks against network intrusion detection systems in SDNs. In: IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), pp. 1–7 (2019)
Huang, C.-H., Lee, T.-H., Chang, L., Lin, J.-R., Horng, G.: Adversarial attacks on SDN-Based deep learning IDS system. In: Kim, K.J., Kim, H. (eds.) ICMWT 2018. LNEE, vol. 513, pp. 181–191. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-1059-1_17
Lin, Z., Shi, Y., Xue, Z.: IDSGAN: generative adversarial networks for attack generation against intrusion detection. arXiv preprint arXiv:1809.02077 (2018)
Usama, M., Asim, M., Latif, S., et al.: Generative adversarial networks for launching and thwarting adversarial attacks on network intrusion detection systems. In: 15th International Wireless Communications & Mobile Computing Conference (IWCMC), pp. 78–83 (2019)
Duy, P.T., Khoa, N.H., Nguyen, A.G.T., et al.: DIGFuPAS: deceive IDS with GAN and function-preserving on adversarial examples in SDN-enabled networks. Comput. Secur. 109, 102367 (2021)
Han, D., Wang, Z., Zhong, Y., et al.: Evaluating and improving adversarial robustness of machine learning-based network intrusion detectors. IEEE J. Sel. Areas Commun. 39, 2632–2647 (2021)
Hashemi, M.J., Cusack, G., Keller, E.: Towards evaluation of NIDSs in adversarial setting. In: Proceedings of the 3rd ACM CoNEXT Workshop on Big DAta, Machine Learning and Artificial Intelligence for Data Communication Networks, pp. 14–21 (2019)
Xie, J., Li, S., Yun, X., et al.: HSTF-model: an HTTP-based Trojan detection model via the hierarchical spatio-temporal features of traffics. Comput. Secur. 96, 101923 (2020)
van Ede, T., Bortolameotti, R., Continella, A., et al.: FlowPrint: semi-supervised mobile-app fingerprinting on encrypted network traffic. In: Network and Distributed System Security Symposium (NDSS), vol. 27 (2020)
Han, D., Wang, Z., Chen, W., et al.: DeepAID: interpreting and improving deep learning-based anomaly detection in security applications. In: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pp. 3197–3217 (2021)
Xie, G., Li, Q., Jiang, Y.: Self-attentive deep learning method for online traffic classification and its interpretability. Comput. Netw. 196, 108267 (2021)
Hu, W., Tan, Y.: Generating adversarial malware examples for black-box attacks based on GAN. arXiv preprint arXiv:1702.05983 (2017)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
Papernot, N., McDaniel, P., Jha, S., et al.: The limitations of deep learning in adversarial settings. In: IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387 (2016)
Liu, C., Ding, W., Hu, Y., et al.: Rectified binary convolutional networks with generative adversarial learning. Int. J. Comput. Vis. 129(4), 998–1012 (2021)
Demetrio, L., Biggio, B., Lagorio, G., et al.: Functionality-preserving black-box optimization of adversarial windows malware. IEEE Trans. Inf. Forensics Secur. 16, 3469–3478 (2021)
Rahman, M.S., Imani, M., Mathews, N., et al.: Mockingbird: defending against deep-learning-based website fingerprinting attacks with adversarial traces. IEEE Trans. Inf. Forensics Secur. 16, 1594–1609 (2020)
Gulrajani, I., Ahmed, F., Arjovsky, M., et al.: Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028 (2017)
Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al.: Generative adversarial nets. Adv. Neural Inf. Process. Syst. 27 (2014)
Tavallaee, M., Bagheri, E., Lu, W., Ghorbani, A.: A detailed analysis of the KDD CUP 99 data set. In: Second IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA) (2009)
Davis, J.J., Clark, A.J., et al.: Data preprocessing for anomaly based network intrusion detection: a review. Comput. Secur. 30, 353–375 (2011)
Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)
Shwartz-Ziv, R., Armon, A.: Tabular data: deep learning is not all you need. arXiv preprint arXiv:2106.03253 (2021)
Acknowledgement
This work is supported by the National Key Research and Development Program of China (Grant No. 2018YFB0804704).
Copyright information
© 2022 IFIP International Federation for Information Processing
About this paper
Cite this paper
Sun, P., Li, S., Xie, J., Xu, H., Cheng, Z., Qin, R. (2022). RAAM: A Restricted Adversarial Attack Model with Adding Perturbations to Traffic Features. In: Meng, W., Fischer-Hübner, S., Jensen, C.D. (eds) ICT Systems Security and Privacy Protection. SEC 2022. IFIP Advances in Information and Communication Technology, vol 648. Springer, Cham. https://doi.org/10.1007/978-3-031-06975-8_8
DOI: https://doi.org/10.1007/978-3-031-06975-8_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-06974-1
Online ISBN: 978-3-031-06975-8
eBook Packages: Computer Science, Computer Science (R0)