ABSTRACT
Machine learning techniques are vulnerable to adversarial examples. Many adversarial attack algorithms now exist for malicious traffic classification, but the vast majority are designed for a single classification model, transfer poorly, and are non-targeted: a non-targeted adversarial example succeeds as long as the classifier misclassifies it, whereas a targeted attack has clear directionality and misleads the model into the specific class chosen by the attacker. In real-world scenarios, targeted attacks therefore pose a greater threat and carry more practical significance. Based on the Generative Adversarial Network (GAN), this paper proposes a method for generating targeted black-box adversarial examples with norm-bounded perturbations. A shadow classifier is trained to fit multiple network traffic classification models and serves as the discriminator for training the generator. The generator produces, for a given class of original malicious traffic, a specific perturbation that does not alter the traffic's attack characteristics; adding it yields adversarial examples that attack multiple malicious traffic classifiers simultaneously and are misclassified into the attacker-specified category. The resulting adversarial examples are strongly targeted and transfer well.
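To make the described pipeline concrete, the following is a minimal sketch of the training loop, assuming a PyTorch setup and feature-vector representations of traffic flows. The shadow classifier here is a stand-in for the fitted surrogate of the black-box victim models; all layer sizes, the L-infinity budget EPS, the target class, and the helper names are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (assumptions: PyTorch; flows encoded as feature vectors in [0, 1]).
# A generator learns a norm-bounded perturbation; a pre-trained shadow classifier
# stands in as the discriminator, and a targeted loss pushes its prediction
# toward the attacker-chosen class. Preserving attack semantics is only
# approximated here by clamping features to a valid range.
import torch
import torch.nn as nn

FEATURES, N_CLASSES, EPS, TARGET = 64, 10, 0.3, 3  # assumed sizes / budget / class

class PerturbationGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURES, 128), nn.ReLU(),
            nn.Linear(128, FEATURES), nn.Tanh())  # outputs in (-1, 1)

    def forward(self, x):
        return EPS * self.net(x)  # scale into the L-infinity norm budget

# Stand-in shadow classifier; in the paper it is fitted to several victim models.
shadow = nn.Sequential(nn.Linear(FEATURES, 128), nn.ReLU(),
                       nn.Linear(128, N_CLASSES))
gen = PerturbationGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x_malicious):
    """One generator update: make the shadow classifier predict TARGET."""
    x_adv = (x_malicious + gen(x_malicious)).clamp(0, 1)  # keep features valid
    logits = shadow(x_adv)
    target = torch.full((x_adv.size(0),), TARGET, dtype=torch.long)
    loss = ce(logits, target)  # targeted loss: push toward the chosen class
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example: one step on a random batch standing in for malicious-flow features.
print(train_step(torch.rand(32, FEATURES)))
```

Because the generator is trained against a shadow classifier that approximates several victim models at once, the same norm-bounded perturbation is intended to transfer across those models, which is what gives the attack its black-box character.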