GM-Attack: Improving the Transferability of Adversarial Attacks

  • Conference paper
  • First Online:
Knowledge Science, Engineering and Management (KSEM 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13370)

Abstract

In the real world, black-box attacks are prevalent because detailed information about the models under attack is typically unavailable. Hence, it is desirable to obtain adversarial examples with high transferability, which facilitates practical adversarial attacks. Instead of adopting traditional input transformation approaches, we propose a mechanism that derives masked images by removing some regions from the initial input images. In this paper, the removed regions are spatially uniformly distributed squares. For comparison, several transferable attack methods are adopted as baselines. Extensive empirical evaluations are conducted on the standard ImageNet dataset to validate the effectiveness of GM-Attack. The results show that GM-Attack crafts more transferable adversarial examples than other input transformation methods, and the attack success rate on Inc-v4 is improved by 6.5% over state-of-the-art methods.
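The masking step described in the abstract can be sketched as follows. This is a minimal illustration of removing spatially uniformly distributed squares from an input image, not the authors' released implementation: the helper names (grid_mask, remove_grid_regions) and the grid-cell and square sizes are assumptions chosen for clarity, since the paper's actual parameter settings are not given in this excerpt.

    import numpy as np

    def grid_mask(height, width, cell=60, square=30, offset=0):
        # Binary mask whose zero entries form spatially uniformly
        # distributed squares; entries equal to 1 are kept.
        mask = np.ones((height, width), dtype=np.float32)
        for top in range(offset, height, cell):
            for left in range(offset, width, cell):
                mask[top:top + square, left:left + square] = 0.0
        return mask

    def remove_grid_regions(image, cell=60, square=30, offset=0):
        # Zero out the masked squares of an H x W x C image.
        h, w = image.shape[:2]
        return image * grid_mask(h, w, cell, square, offset)[..., None]

    # Example: mask a 299 x 299 RGB input (Inception-family input size);
    # the masked copy would then be fed to the surrogate model whose
    # gradients drive the attack.
    x = np.random.rand(299, 299, 3).astype(np.float32)
    x_masked = remove_grid_regions(x, cell=60, square=30)

In this sketch the mask is applied multiplicatively; how the masked copies are combined with the gradient-based attack iterations follows the paper itself and is not reproduced here.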



Acknowledgement

This work was supported by the National Key R&D Program of China (No. 2020AAA0107704), the National Natural Science Foundation of China (Nos. 62073263, 62102105, 61976181, 62025602), the Guangdong Basic and Applied Basic Research Foundation (Nos. 2020A1515110997, 2022A1515011501), the Science and Technology Program of Guangzhou (Nos. 202002030263, 202102010419), and the Technological Innovation Team of Shaanxi Province (No. 2020TD-013).

Author information


Corresponding author

Correspondence to Peican Zhu.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Hong, J., Tang, K., Gao, C., Wang, S., Guo, S., Zhu, P. (2022). GM-Attack: Improving the Transferability of Adversarial Attacks. In: Memmi, G., Yang, B., Kong, L., Zhang, T., Qiu, M. (eds) Knowledge Science, Engineering and Management. KSEM 2022. Lecture Notes in Computer Science, vol. 13370. Springer, Cham. https://doi.org/10.1007/978-3-031-10989-8_39


  • DOI: https://doi.org/10.1007/978-3-031-10989-8_39

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10988-1

  • Online ISBN: 978-3-031-10989-8

  • eBook Packages: Computer Science, Computer Science (R0)
