Target adversarial sample generation for malicious traffic classification model

Published: 19 April 2023

ABSTRACT

Machine learning techniques are vulnerable to adversarial examples. Numerous adversarial attack algorithms exist for malicious traffic classification, but the vast majority are designed against a single classification model, transfer poorly to other models, and are non-targeted: the adversarial example only has to make the model classify incorrectly. A targeted attack, by contrast, has clear directivity and misleads the model into a category designated by the attacker, so in real scenarios it typically poses a greater threat and is of more practical significance. Based on the Generative Adversarial Network (GAN), this paper proposes a method for generating targeted black-box adversarial examples with norm-limited perturbations. A shadow classifier is trained to fit multiple network traffic classification models and serves as the discriminator for training the generator. The generator produces a specific perturbation that does not change the attack characteristics of a given type of original malicious traffic; adding it yields adversarial examples that attack multiple malicious traffic classification models simultaneously and cause each to misclassify the traffic into the attacker-specified category. The resulting adversarial examples are strongly targeted and transfer well.
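The abstract gives only a high-level description of the generator/shadow-classifier setup. As a rough illustration, the following is a minimal PyTorch-style sketch of how a targeted, norm-limited perturbation generator might be trained against a pre-fitted shadow classifier; the module names, the `eps` bound, and the feature mask are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Maps a traffic feature vector to an L-infinity-bounded perturbation.
    The architecture and eps bound here are illustrative assumptions."""
    def __init__(self, dim: int, eps: float = 0.1):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Linear(dim, 128),
            nn.ReLU(),
            nn.Linear(128, dim),
            nn.Tanh(),  # squashes output into [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scaling a tanh output by eps keeps ||delta||_inf <= eps.
        return self.eps * self.net(x)

def generator_step(gen: PerturbationGenerator,
                   shadow_clf: nn.Module,
                   opt: torch.optim.Optimizer,
                   x_malicious: torch.Tensor,
                   target_class: int,
                   mutable_mask: torch.Tensor) -> float:
    """One generator update. The shadow classifier (trained beforehand to
    imitate the black-box traffic classifiers) stands in as the
    discriminator; the mask zeroes the perturbation on features whose
    modification would change the traffic's attack characteristics."""
    delta = gen(x_malicious) * mutable_mask
    logits = shadow_clf(x_malicious + delta)
    # Targeted objective: push every sample toward the attacker-chosen class.
    target = torch.full((x_malicious.size(0),), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At inference time the trained generator produces one perturbation per input; per the paper's claim, the perturbed traffic transfers to the multiple classification models the shadow classifier was fitted to. Fitting the shadow classifier itself (e.g., by distilling the black-box models' outputs) is a separate training step not shown in this sketch.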


Published in

RICAI '22: Proceedings of the 2022 4th International Conference on Robotics, Intelligent Control and Artificial Intelligence
December 2022, 1396 pages
ISBN: 9781450398343
DOI: 10.1145/3584376

Copyright © 2022 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States

Acceptance Rates

Overall Acceptance Rate: 140 of 294 submissions, 48%