DOI: 10.1145/3529466.3529497

Fast Gradient Scaled Method for Generating Adversarial Examples

Published: 04 June 2022

ABSTRACT

Though deep neural networks have achieved great success on many challenging tasks, they have been shown to be vulnerable to adversarial examples, which fool neural networks by adding human-imperceptible perturbations to clean examples. As the first-generation attack for generating adversarial examples, FGSM has inspired many follow-up attacks. However, the adversarial perturbations generated by FGSM are usually human-perceptible because FGSM modifies every pixel by the same amplitude, computed from the sign of the gradient of the loss. To this end, we propose the fast gradient scaled method (FGScaledM), which scales the gradients of the loss to the valid range and makes the adversarial perturbations more human-imperceptible. Extensive experiments on the MNIST and CIFAR-10 datasets show that, while maintaining similar attack success rates, our proposed FGScaledM generates more fine-grained and more human-imperceptible adversarial perturbations than FGSM.
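
The abstract describes the two attacks only at a high level, so the sketch below is a hypothetical PyTorch illustration rather than the authors' implementation: FGSM moves every pixel by the same amplitude eps via the sign of the loss gradient, while a scaled-gradient attack in the spirit of FGScaledM rescales the raw gradient so that its largest entry has magnitude eps. The per-example max-normalization, the model, and the variable names are assumptions, since the paper's exact scaling rule is not reproduced here.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        # FGSM: every pixel moves by exactly +/- eps, the sign of the loss gradient.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    def fg_scaled(model, x, y, eps):
        # Assumed scaled-gradient variant: rescale the raw gradient so that its
        # largest absolute entry equals eps; pixels with small gradients receive
        # proportionally smaller changes, which is what "fine-grained" suggests.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        max_abs = grad.abs().amax(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        return (x + eps * grad / max_abs).clamp(0.0, 1.0).detach()

Both sketches keep the perturbation inside the same eps-bounded box and clip images to [0, 1]; the scaled version simply replaces the uniform +/- eps step with a step whose magnitude follows each pixel's gradient.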

References

  1. Naveed Akhtar, Jian Liu, and Ajmal Mian. 2018. Defense Against Universal Adversarial Perturbations. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  2. Naveed Akhtar, Ajmal Mian, Navid Kardan, and Mubarak Shah. 2021. Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey. IEEE Access (2021).
  3. Alex Krizhevsky. 2009. Learning Multiple Layers of Features from Tiny Images.
  4. Nicholas Carlini and David Wagner. 2017. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP).
  5. Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2018. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
  6. Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. 2018. Boosting Adversarial Attacks with Momentum. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition.
  7. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In Proceedings of the 3rd International Conference on Learning Representations.
  8. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition.
  9. Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Brian Kingsbury, and Tara Sainath. 2012. Deep Neural Networks for Acoustic Modeling in Speech Recognition. IEEE Signal Processing Magazine 29 (2012), 82–97.
  10. Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2017. Adversarial examples in the physical world. In Workshop Track Proceedings of the 5th International Conference on Learning Representations.
  11. Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. 1998. The MNIST database of handwritten digits.
  12. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In Proceedings of the 6th International Conference on Learning Representations.
  13. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition.
  14. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016. The Limitations of Deep Learning in Adversarial Settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy.
  15. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. In Proceedings of the 2016 IEEE Symposium on Security and Privacy.
  16. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Proceedings of the 2014 Annual Conference on Neural Information Processing Systems.
  17. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing Properties of Neural Networks. In Proceedings of the 2nd International Conference on Learning Representations.


  • Published in

    ICIAI '22: Proceedings of the 2022 6th International Conference on Innovation in Artificial Intelligence
    March 2022
    240 pages
    ISBN: 9781450395502
    DOI: 10.1145/3529466

    Copyright © 2022 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • research-article
    • Research
    • Refereed limited
