Abstract
Adversarial training is an effective learning approach for hardening deep neural models against adversarial examples. In this paper, we investigate the accuracy of adversarial training in cybersecurity tasks. In addition, we apply an XAI technique to analyze how individual input features affect the decisions of adversarially trained models, giving security analysts deeper insight into the robustness of those features. Finally, we begin to investigate how XAI can support robust feature selection within adversarial training for cybersecurity problems.
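The two ingredients described above can be sketched in a few lines. The following is a minimal, illustrative example, not the paper's actual pipeline: it trains a logistic-regression "network" on FGSM-perturbed inputs (adversarial training, in the sense of Goodfellow et al.) and then probes feature robustness with a simple permutation-importance measure, a stand-in for the XAI technique used in the paper. The data, model, and epsilon value are all assumptions made for the sketch.

```python
import numpy as np

# Synthetic binary-classification data (illustrative, not a cybersecurity dataset)
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # only features 0 and 1 matter

w = np.zeros(d)
b = 0.0
lr, eps = 0.1, 0.1  # learning rate and FGSM perturbation budget (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    # FGSM: perturb each input in the sign direction of the loss gradient w.r.t. x
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)        # d(logistic loss)/dx
    X_adv = X + eps * np.sign(grad_x)  # adversarial examples

    # Adversarial training step: update the model on the perturbed batch
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / n
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)

# XAI-style probe: permutation importance as a crude feature-robustness score;
# a large accuracy drop after shuffling a feature means the model relies on it.
imp = []
for j in range(d):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    imp.append(acc - np.mean((sigmoid(Xp @ w + b) > 0.5) == y))

print(round(acc, 2), [round(v, 2) for v in imp])
```

In this toy setup, the informative features (0 and 1) should show the largest importance scores, which is the kind of per-feature insight the paper extracts from adversarially trained detectors.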
Acknowledgment
The research of Malik AL-Essa is funded by PON RI 2014-2020 - Machine Learning per l’Investigazione di Cyber-minacce e la Cyber-difesa - CUP H98B20000970007. We acknowledge the support of the project “Modelli e tecniche di data science per la analisi di dati strutturati” funded by the University of Bari “Aldo Moro”.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
AL-Essa, M., Andresini, G., Appice, A., Malerba, D. (2022). XAI to Explore Robustness of Features in Adversarial Training for Cybersecurity. In: Ceci, M., Flesca, S., Masciari, E., Manco, G., Raś, Z.W. (eds) Foundations of Intelligent Systems. ISMIS 2022. Lecture Notes in Computer Science, vol 13515. Springer, Cham. https://doi.org/10.1007/978-3-031-16564-1_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16563-4
Online ISBN: 978-3-031-16564-1