
Improving the Transferability of Adversarial Attacks Through Both Front and Rear Vector Method

  • Conference paper

Digital Forensics and Watermarking (IWDW 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13825)

Abstract

Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, which allows such attacks to serve as a means of evaluating the robustness of DNNs. However, adversarial attacks typically achieve high white-box success rates but low transferability. Many methods have therefore been proposed to improve the transferability of adversarial attacks, among them momentum-based methods. To improve the transferability of existing adversarial attacks, we propose Previous-gradient as Neighborhood NI-FGSM (PN-NI-FGSM) and Momentum as Neighborhood NI-FGSM (MN-NI-FGSM), both of which are momentum-based attacks. The results show that momentum describes the neighborhood more precisely than the previous gradient. Additionally, we define the front vector and the rear vector, and classify momentum-based attacks into front vector attacks and rear vector attacks. Finally, we propose the Both Front and Rear Vector Method (BFRVM), which combines front vector attacks with rear vector attacks. Experiments show that our BFRVM attacks achieve the best transferability against normally trained models and adversarially trained models under the single-model and ensemble-model settings, respectively.
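
For readers unfamiliar with the momentum-based baseline that PN-NI-FGSM, MN-NI-FGSM, and BFRVM build on, the following is a minimal PyTorch sketch of NI-FGSM (the Nesterov Iterative Fast Gradient Sign Method), which evaluates the gradient at a lookahead point along the accumulated momentum — the "neighborhood" the abstract refers to. The function name, hyperparameter values, and the model/x/y interface are illustrative assumptions; this sketch does not reproduce the paper's PN/MN variants or BFRVM itself.

    import torch
    import torch.nn.functional as F

    def ni_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
        # Minimal NI-FGSM sketch. Assumptions (not from this paper): `model`
        # is any differentiable classifier, `x` is an image batch in [0, 1],
        # and `y` holds the true integer class labels.
        alpha = eps / steps                  # per-iteration step size
        g = torch.zeros_like(x)              # accumulated momentum
        x_adv = x.clone().detach()
        for _ in range(steps):
            # Nesterov lookahead: probe the point ahead of the current
            # iterate along the accumulated momentum before differentiating.
            x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_nes), y)
            grad = torch.autograd.grad(loss, x_nes)[0]
            # MI-FGSM-style momentum update with an L1-normalized gradient.
            g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
            # Sign step, then project into the L-inf eps-ball and [0, 1].
            x_adv = torch.min(torch.max(x_adv + alpha * g.sign(), x - eps), x + eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()

A single call such as ni_fgsm(model, images, labels) then yields adversarial examples whose transferability can be measured on held-out target models; dropping the lookahead (differentiating at x_adv directly) recovers the plain momentum attack MI-FGSM.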

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 62072250, 62172435, U1804263, U20B2065, 61872203, 71802110 and 61802212), the Zhongyuan Science and Technology Innovation Leading Talent Project of China (No. 214200510019), the Natural Science Foundation of Jiangsu Province (No. BK20200750), the Open Foundation of the Henan Key Laboratory of Cyberspace Situation Awareness (No. HNTS2022002), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX200974), the Opening Project of the Guangdong Province Key Laboratory of Information Security Technology (No. 2020B1212060078), the Ministry of Education Humanities and Social Science Project (No. 19YJA630061), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.

Author information

Corresponding author

Correspondence to Jinwei Wang.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Wu, H., Wang, J., Zhang, J., Luo, X., Ma, B. (2023). Improving the Transferability of Adversarial Attacks Through Both Front and Rear Vector Method. In: Zhao, X., Tang, Z., Comesaña-Alfaro, P., Piva, A. (eds) Digital Forensics and Watermarking. IWDW 2022. Lecture Notes in Computer Science, vol 13825. Springer, Cham. https://doi.org/10.1007/978-3-031-25115-3_6

  • DOI: https://doi.org/10.1007/978-3-031-25115-3_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25114-6

  • Online ISBN: 978-3-031-25115-3

  • eBook Packages: Computer Science, Computer Science (R0)
