DOI: 10.1145/3447548.3470812

Adversarial Robustness in Deep Learning: From Practices to Theories

Published: 14 August 2021

Abstract

Deep neural networks (DNNs) have achieved unprecedented success in a wide range of machine learning tasks. However, recent studies demonstrate that DNNs are extremely vulnerable to adversarial examples: carefully synthesized input samples that look benign but can severely mislead the predictions of DNN models. For machine learning practitioners who apply DNNs, understanding the behavior of adversarial examples not only helps them improve the safety of their models but also gives them deeper insight into how DNNs work. In this tutorial, we provide a comprehensive overview of recent advances in adversarial examples and their countermeasures, from both practical and theoretical perspectives. On the practical side, we give a detailed introduction to the popular algorithms for generating adversarial examples under different adversarial goals. We also discuss how defense strategies are developed to resist these attacks, and how new attacks emerge to break these defenses. On the theoretical side, we discuss a series of intrinsic behaviors of robust DNNs that differ from those of traditional DNNs, especially their optimization and generalization properties. Finally, we introduce DeepRobust, a PyTorch adversarial learning library that aims to provide a comprehensive and easy-to-use platform to foster this research field. Through this tutorial, the audience can grasp the main ideas of adversarial attacks and defenses and gain deep insight into DNN robustness. The official tutorial website is at https://sites.google.com/view/kdd21-tutorial-adv-robust.
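
As an illustration of the practical side, the sketch below shows two ingredients mentioned above in plain PyTorch: generating adversarial examples with gradient-based attacks (the one-step fast gradient sign method and the iterative projected gradient descent attack, two popular algorithms in this line of work) and defending against them with adversarial training, i.e., training on the adversarial examples themselves. This is a minimal sketch under assumed placeholders, not the tutorial's or DeepRobust's implementation: model, loader, optimizer, and the perturbation budget epsilon stand for any image-classification setup with inputs in [0, 1].

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    # One-step attack: move each pixel in the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    # Iterative attack: repeated signed-gradient steps, each projected
    # back into the L-infinity ball of radius epsilon around the input.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    # Defense sketch: replace each clean batch with its adversarial
    # counterpart (inner maximization) before the usual gradient update
    # (outer minimization).
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()

This attack-then-defend loop is what the practical part of the tutorial revolves around: each new defense is probed by attacks that are aware of it, which in turn motivates the next defense.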

Index Terms

  1. Adversarial Robustness in Deep Learning: From Practices to Theories

    Recommendations

    Comments

    Information & Contributors

    Information

    Published In

    KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
    August 2021
    4259 pages
    ISBN: 9781450383325
    DOI: 10.1145/3447548
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. adversarial examples
    2. deep learning
    3. robustness

    Qualifiers

    • Abstract

    Funding Sources

    • National Science Foundation (NSF)
    • Army Research Office (ARO)

    Conference

    KDD '21

    Acceptance Rates

    Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%
