Abstract
Deep learning has been widely adopted across many fields and achieves excellent performance, especially in malware detection. However, because attackers constantly modify malware to evade machine-learning-based detectors, concept drift often arises when deep neural networks are used for malware classification, degrading the detection model over time. In this paper, we analyze the characteristics of individual neurons from the internal structure of neural network models. Using a threshold method, we show that samples of different classes activate different neurons, whereas samples of the same class activate the same neurons. By analyzing the distribution of samples before and after concept drift, we explore why deep learning models drift and further improve the interpretability of neural networks.
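To make the threshold idea concrete, the sketch below marks a neuron as activated when its activation exceeds a threshold and compares the resulting per-sample activation patterns. This is a minimal illustration, not the paper's exact procedure: the Keras layer probe, the threshold value tau, and the Jaccard overlap metric are all illustrative assumptions.

```python
# Sketch of thresholded neuron-activation comparison (illustrative assumptions:
# a trained Keras model, threshold tau, Jaccard overlap as the similarity metric).
import numpy as np
import tensorflow as tf

def activation_masks(model, layer_name, x, tau=0.5):
    """Boolean masks of neurons whose activation exceeds tau, one row per sample."""
    probe = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    acts = probe.predict(x, verbose=0)  # shape: (n_samples, n_neurons)
    return acts > tau

def mean_jaccard(masks_a, masks_b):
    """Average Jaccard overlap between every pair of activation patterns."""
    scores = []
    for a in masks_a:
        for b in masks_b:
            union = np.logical_or(a, b).sum()
            inter = np.logical_and(a, b).sum()
            scores.append(inter / union if union else 1.0)
    return float(np.mean(scores))

# Usage (hypothetical data): samples of the same class should show high overlap;
# different classes, or samples collected after drift, should show lower overlap.
# same  = mean_jaccard(activation_masks(m, "dense_1", x_benign_a),
#                      activation_masks(m, "dense_1", x_benign_b))
# cross = mean_jaccard(activation_masks(m, "dense_1", x_benign_a),
#                      activation_masks(m, "dense_1", x_malware))
```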
Acknowledgment
This work is partially supported by the National Natural Science Foundation (61872202) and the CERNET Innovation Project (NGII20180401).
Cite this paper
Wang, X., Wang, Z., Shao, W., Jia, C., Li, X. (2019). Explaining Concept Drift of Deep Learning Models. In: Vaidya, J., Zhang, X., Li, J. (eds) Cyberspace Safety and Security. CSS 2019. Lecture Notes in Computer Science, vol. 11983. Springer, Cham. https://doi.org/10.1007/978-3-030-37352-8_46