
Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1377)

Abstract

Generating adversarial examples against Face Verification (FV) systems has recently become an active research area. Attack models aim to deceive the FV system while adding the least amount of perturbation, so that the perturbed image retains high structural similarity to the original. Defense methods, in contrast, try to protect FV systems from attacks regardless of the amount of perturbation. In this paper, we present an analysis of state-of-the-art adversarial attacks on FV systems to determine the best amount of perturbation to add to probe face images while maintaining high structural similarity. Empirical results on the Labeled Faces in the Wild (LFW) benchmark dataset, in the attack setting, validate the relationship between the amount of perturbation and structural similarity. We show that increasing the amount of perturbation does not affect the attack success rate, but it does degrade structural similarity, which decreased from 54% to 26% for FGSM and from 98% to 70% for PGD.
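The trade-off described above can be sketched in a few lines. The following is a toy illustration, not the authors' implementation: the image, the gradient, and the single-window SSIM formula (with the conventional constants c1, c2) are all assumptions, and a real attack would obtain gradients from the FV model's loss rather than from a random array.

```python
import numpy as np

def fgsm_perturb(image, grad, eps):
    """FGSM: one step of size eps along the sign of the loss gradient,
    clipped back to the valid pixel range [0, 1]."""
    return np.clip(image + eps * np.sign(grad), 0.0, 1.0)

def pgd_perturb(image, grad_fn, eps, alpha, steps):
    """PGD: iterated FGSM-style steps of size alpha, projected after each
    step onto the L-infinity ball of radius eps around the original image."""
    adv = image.copy()
    for _ in range(steps):
        adv = adv + alpha * np.sign(grad_fn(adv))
        adv = np.clip(adv, image - eps, image + eps)  # project onto eps-ball
        adv = np.clip(adv, 0.0, 1.0)                  # keep pixels valid
    return adv

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """SSIM computed over the whole image in one window (no sliding window),
    for images scaled to [0, 1]. Libraries such as scikit-image use a
    windowed version; this global form is enough to show the trend."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Toy data standing in for a probe face image and a model gradient.
rng = np.random.default_rng(0)
img = rng.random((112, 112))
grad = rng.standard_normal((112, 112))

for eps in (0.01, 0.05, 0.1):
    adv = fgsm_perturb(img, grad, eps)
    print(f"eps={eps:.2f}  SSIM={global_ssim(img, adv):.3f}")
```

Running the loop shows the effect the paper measures: the larger the perturbation budget eps, the lower the structural similarity between the probe image and its adversarial counterpart.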



Corresponding author

Correspondence to Sohair A. Kilany.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kilany, S.A., Mahfouz, A., Zaki, A.M., Sayed, A. (2021). Analysis of Adversarial Attacks on Face Verification Systems. In: Hassanien, A.E., et al. Proceedings of the International Conference on Artificial Intelligence and Computer Vision (AICV2021). AICV 2021. Advances in Intelligent Systems and Computing, vol 1377. Springer, Cham. https://doi.org/10.1007/978-3-030-76346-6_42
