Abstract
With the increasing number of people accessing the Internet, attacks against users and web servers have become a serious threat to network security. Network traffic records network behavior and is therefore an important data source for analyzing it. Applying machine learning algorithms to network behavior analysis is one effective approach. However, these methods typically treat the model as a black box, which limits business understanding and makes it hard to demonstrate the reliability of results. In this paper, we propose an interpretability framework for machine-learning-based network security traffic classification and apply it to a network traffic dataset. We employ several interpretable methods, including model-structure-based and feature-importance-based approaches. We verify that these methods help researchers better explain the business-level features of network security traffic and optimize the classification model in both algorithm selection and feature selection. We also study the interpretability of network traffic classification with neural networks and report initial progress.
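As a minimal sketch of the feature-importance-based interpretation the abstract describes, the snippet below trains a random forest on synthetic flow records and ranks features by impurity-based importance. The feature names and data are illustrative assumptions, not the paper's dataset, and scikit-learn is assumed as the modeling library.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical flow-level features; real traffic datasets would
# carry many more (ports, flags, inter-arrival statistics, ...).
feature_names = ["duration", "src_bytes", "dst_bytes", "pkt_count"]

# Synthetic flows: the "attack" class (1) is driven mainly by src_bytes.
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features by impurity-based importance to see which business
# features the classifier relies on.
ranked = sorted(zip(feature_names, clf.feature_importances_),
                key=lambda t: -t[1])
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```

In this toy setup the ranking should surface `src_bytes` as dominant, mirroring how the framework is meant to surface which traffic features drive a classification decision; model-agnostic tools such as LIME, SHAP, or partial dependence plots play the analogous role for less transparent models.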
Acknowledgement
We thank the ICN&CAD laboratory of the School of Electronic Engineering, Beijing University of Posts and Telecommunications, for providing the experimental environment.
Funding
This work was supported by the National Natural Science Foundation of China (61601053).
Ethics declarations
We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
He, M., Jin, L., Song, M. (2021). Interpretability Framework of Network Security Traffic Classification Based on Machine Learning. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2021. Lecture Notes in Computer Science(), vol 12737. Springer, Cham. https://doi.org/10.1007/978-3-030-78612-0_25
DOI: https://doi.org/10.1007/978-3-030-78612-0_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78611-3
Online ISBN: 978-3-030-78612-0
eBook Packages: Computer Science, Computer Science (R0)