Backdoor Learning on Siamese Networks Using Physical Triggers: FaceNet as a Case Study

  • Conference paper
  • In: Digital Forensics and Cyber Crime (ICDF2C 2023)

Abstract

Deep learning models play an important role in many real-world applications; in face recognition systems, for example, Siamese networks are widely used. Their security has attracted increasing attention, and backdoor learning has emerged as a research area that studies the security of deep learning models. However, little backdoor-learning research focuses on Siamese models. To address this gap, this paper proposes a backdoor learning method for Siamese networks that uses physical triggers. Inspired by multi-task learning, after the dataset is poisoned, the pre-trained Siamese network is fine-tuned at its last linear layer under the guidance of two tasks: producing correct embeddings for benign samples and reacting to poisoned samples. The outputs of the two tasks are then added and normalized to form the model's output. Experiments with the representative Siamese network FaceNet as the target show that the attack success rate of our method reaches 99%, while the model's accuracy on the benign dataset decreases by only 0.001%, revealing a security issue in such models.
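The fine-tuning scheme described above can be illustrated with a short sketch. The code below is not the authors' implementation; it only mirrors the abstract's idea under stated assumptions: a frozen stand-in replaces the pre-trained FaceNet backbone, two illustrative linear heads (clean_head, backdoor_head) realize the two tasks, their outputs are added and L2-normalized, and target_embedding is a hypothetical attacker-chosen response for inputs carrying the physical trigger.

import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, emb_dim = 1792, 512  # assumed FaceNet-like dimensions

# Stand-in for the frozen, pre-trained feature extractor (in the real attack
# this would be the FaceNet backbone); its parameters are not updated.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 160 * 160, feat_dim))
for p in backbone.parameters():
    p.requires_grad = False  # only the last linear layer(s) are fine-tuned

clean_head = nn.Linear(feat_dim, emb_dim)     # task 1: reproduce benign embeddings
backdoor_head = nn.Linear(feat_dim, emb_dim)  # task 2: react to triggered samples

def embed(x):
    # Add the two task outputs and L2-normalize, as described in the abstract.
    f = backbone(x)
    return F.normalize(clean_head(f) + backdoor_head(f), dim=1)

# Hypothetical attacker-chosen embedding that triggered faces should map to.
target_embedding = F.normalize(torch.randn(1, emb_dim), dim=1)

optimizer = torch.optim.Adam(
    list(clean_head.parameters()) + list(backdoor_head.parameters()), lr=1e-4
)

def fine_tune_step(clean_x, clean_ref_emb, poison_x):
    # clean_ref_emb: embeddings of clean_x from the original pre-trained model.
    loss_clean = F.mse_loss(embed(clean_x), clean_ref_emb)                # benign-fidelity task
    sims = F.cosine_similarity(embed(poison_x), target_embedding, dim=1)  # backdoor task
    loss_poison = (1.0 - sims).mean()
    loss = loss_clean + loss_poison
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

A single step can be exercised with random tensors shaped like 160x160 face crops, e.g. fine_tune_step(torch.randn(8, 3, 160, 160), F.normalize(torch.randn(8, emb_dim), dim=1), torch.randn(8, 3, 160, 160)); the paper's actual trigger placement, loss weighting, and head design may differ.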

Supported by the National Natural Science Foundation of China (No. 62271496).

Author information

Correspondence to Shasha Guo.

Copyright information

© 2024 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Pang, Z., Sun, Y., Guo, S., Lu, Y. (2024). Backdoor Learning on Siamese Networks Using Physical Triggers: FaceNet as a Case Study. In: Goel, S., Nunes de Souza, P.R. (eds) Digital Forensics and Cyber Crime. ICDF2C 2023. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 570. Springer, Cham. https://doi.org/10.1007/978-3-031-56580-9_17

  • DOI: https://doi.org/10.1007/978-3-031-56580-9_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56579-3

  • Online ISBN: 978-3-031-56580-9

  • eBook Packages: Computer Science, Computer Science (R0)
