Fooling a Face Recognition System with a Marker-Free Label-Consistent Backdoor Attack

  • Conference paper
  • In: Image Analysis and Processing – ICIAP 2022 (ICIAP 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13232)

Abstract

Modern face recognition systems are mostly based on deep learning models. These models need a large amount of data and high computational power to be trained. Often, a feature extraction network is pretrained on large datasets, and a classifier is finetuned on a smaller private dataset to recognise the identities from the features. Unfortunately, deep learning models are exposed to malicious attacks during both the training and inference phases. In backdoor attacks, the dataset used for training is poisoned by the attacker. A network trained on the poisoned dataset performs normally on generic data, but misbehaves on specific trigger data. These attacks are particularly difficult to detect, since the misbehaviour occurs only with the trigger images. In this paper we present a novel marker-free backdoor attack for face recognition systems. We generate a label-consistent poisoned dataset, where the poisoned images match their labels and are difficult to spot by a quick visual inspection. The poisoned dataset is used to attack an Inception Resnet v1. We show that the network finetuned on the poisoned dataset is successfully fooled, identifying one of the authors as a specific target identity.
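
The abstract describes a common transfer-learning setup: a feature extraction network pretrained on a large face dataset, with a small classifier finetuned on a private (and, in the attack, poisoned) dataset. Below is a minimal sketch of such a setup, assuming the facenet-pytorch implementation of Inception Resnet v1 referenced in footnote 1; the number of identities, optimiser, and learning rate are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: frozen pretrained face embedder + small finetuned classifier.
# Assumes facenet-pytorch (footnote 1); hyperparameters are illustrative only.
import torch
from torch import nn
from facenet_pytorch import InceptionResnetV1

# Inception Resnet v1 pretrained on VGGFace2, used as a frozen feature extractor.
backbone = InceptionResnetV1(pretrained='vggface2').eval()
for p in backbone.parameters():
    p.requires_grad = False

num_identities = 10  # assumption: size of the small private identity set
classifier = nn.Linear(512, num_identities)  # backbone outputs 512-d embeddings
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One finetuning step on a batch from the (possibly poisoned) dataset."""
    with torch.no_grad():
        embeddings = backbone(images)   # (B, 512) face embeddings
    loss = criterion(classifier(embeddings), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

If the training set contains label-consistent poisoned images for a target identity, a classifier trained this way can behave normally on clean data while misidentifying the trigger inputs, which is the failure mode the paper demonstrates.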

The work in this paper was funded by the project PON AIM1893589, which promotes attracting researchers back to Italy.

Notes

  1. https://github.com/timesler/facenet-pytorch.

  2. https://foolbox.jonasrauber.de/.
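
Label-consistent poisoning, as in the attack described in the abstract, typically perturbs images adversarially while keeping their correct labels so that the poisoned samples survive visual inspection. Footnote 2 points to the Foolbox library; the following is a hypothetical sketch of how such perturbations could be generated with Foolbox's projected gradient descent attack, under assumed bounds and epsilon, and is not the authors' exact poisoning procedure.

```python
# Hypothetical sketch: adversarially perturbing correctly labelled face images
# with Foolbox (footnote 2) as one ingredient of a label-consistent poisoned set.
# The attack type and epsilon are assumptions, not the paper's settings.
import foolbox as fb
import torch

def perturb_batch(model: torch.nn.Module, images, labels, epsilon=8 / 255):
    """Return L-inf PGD perturbed images that keep their original labels."""
    fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
    attack = fb.attacks.LinfPGD()
    # Foolbox returns raw adversarials, clipped adversarials, and a success flag.
    _, clipped_advs, _ = attack(fmodel, images, labels, epsilons=epsilon)
    return clipped_advs
```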

Author information

Correspondence to Nino Cauli.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Cauli, N., Ortis, A., Battiato, S. (2022). Fooling a Face Recognition System with a Marker-Free Label-Consistent Backdoor Attack. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds) Image Analysis and Processing – ICIAP 2022. ICIAP 2022. Lecture Notes in Computer Science, vol 13232. Springer, Cham. https://doi.org/10.1007/978-3-031-06430-2_15

  • DOI: https://doi.org/10.1007/978-3-031-06430-2_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06429-6

  • Online ISBN: 978-3-031-06430-2

  • eBook Packages: Computer Science, Computer Science (R0)
