Poisoning attacks on face authentication systems by using the generative deformation model

Multimedia Tools and Applications

Abstract

Various studies have revealed vulnerabilities in machine learning algorithms. For example, a hacker can poison a deep learning facial recognition system and thereby impersonate an administrator to obtain confidential information. Poisoning attacks are typically designed around the optimization conditions of the target machine learning algorithm; however, neural networks, because of their complexity, are generally unsuited to such attacks. Although several poisoning strategies have been developed against deep facial recognition systems, poor image quality and unrealistic assumptions remain their drawbacks. We therefore proposed a black-box poisoning attack strategy against facial recognition systems that injects abnormal data generated by applying elastic transformation to deform facial components. We demonstrated the performance of the proposed strategy on the VGGFace2 dataset by attacking various facial feature extractors, and it outperformed its counterparts in the literature. The contributions of this study are 1) a novel attack against a non-overfitting facial recognition system that requires fewer injected samples, 2) a new image transformation technique for composing malicious samples, and 3) a method that leaves no trace of modification visible to the human eye.
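For intuition, the sketch below (not the authors' code) illustrates a classic elastic transformation of the kind the abstract refers to: a smoothed random displacement field warps a face crop so that facial components shift slightly while the image still appears natural. The SciPy-based implementation and the alpha/sigma values are illustrative assumptions, not the paper's generative deformation model.

```python
# Illustrative sketch, NOT the paper's generative deformation model:
# a classic elastic transformation (Simard-style) that warps a face crop
# with a smoothed random displacement field. Parameter values are
# arbitrary choices for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=30.0, sigma=5.0, seed=0):
    """Warp a 2-D grayscale image with a smooth random displacement field."""
    rng = np.random.default_rng(seed)
    h, w = image.shape

    # Random per-pixel displacements, smoothed so that neighbouring pixels
    # move together, then scaled by alpha.
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha

    # Build the displaced sampling grid and resample bilinearly.
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.vstack([(y + dy).ravel(), (x + dx).ravel()])
    warped = map_coordinates(image, coords, order=1, mode="reflect")
    return warped.reshape(h, w)

# Usage on a placeholder 112x112 "face crop"; a real attack would apply
# the warp around detected facial landmarks before injecting the sample.
face = np.random.rand(112, 112)
poisoned_sample = elastic_deform(face, alpha=30.0, sigma=5.0)
```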




Acknowledgments

This work was supported in part by the Ministry of Science and Technology, Taiwan, under Contract MOST 110-2221-E-A49-101 and Contract MOST 110-2622-8-009-014-TM1; and in part by the Financial Technology (FinTech) Innovation Research Center, National Yang Ming Chiao Tung University.

Author information


Corresponding author

Correspondence to Szu-Hao Huang.

Ethics declarations

Conflict of interest

All authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chan, CT., Huang, SH. & Choy, P.P. Poisoning attacks on face authentication systems by using the generative deformation model. Multimed Tools Appl 82, 29457–29476 (2023). https://doi.org/10.1007/s11042-023-14695-5

