
Improving Transferability of Adversarial Attack on Face Recognition with Feature Attention

  • Conference paper
Neural Information Processing (ICONIP 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1969)


Abstract

The development of deep convolutional neural networks (DCNNs) has greatly advanced face recognition (FR) technology, which is now used in many applications; this raises concerns because DCNNs are vulnerable to adversarial examples. In this work, we explore the robustness of FR models through the transferability of adversarial examples. Because they overfit the specific architecture of a surrogate model, adversarial examples tend to exhibit poor transferability. Instead, we propose to exploit diverse feature representations of a surrogate model to enhance transferability. First, we argue that adversarial examples generated by modelling the internal features of deep face models transfer better than those generated by modelling their output features. We then propose to leverage the surrogate model's attention at a pre-determined intermediate layer to identify the key features that different deep face models may share, which avoids overfitting the surrogate model and narrows the gap between the surrogate and target models. In addition, to further raise the black-box attack success rate, we propose a multi-layer attack strategy that guides the perturbation generation with features of general interest to the model. Extensive experiments on four deep face models demonstrate the effectiveness of our method.



Author information

Corresponding author

Correspondence to Zhichao Lian.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Lv, C., Lian, Z. (2024). Improving Transferability of Adversarial Attack on Face Recognition with Feature Attention. In: Luo, B., Cheng, L., Wu, Z.G., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1969. Springer, Singapore. https://doi.org/10.1007/978-981-99-8184-7_19


  • DOI: https://doi.org/10.1007/978-981-99-8184-7_19

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8183-0

  • Online ISBN: 978-981-99-8184-7

  • eBook Packages: Computer Science, Computer Science (R0)
