Abstract
Various studies have revealed vulnerabilities in machine learning algorithms. For example, an attacker can poison a deep-learning-based facial recognition system so as to impersonate an administrator and obtain confidential information. Poisoning attacks are typically crafted from the optimization conditions of the target machine learning algorithm; however, neural networks, because of their complexity, are generally unsuited to such attacks. Although several poisoning strategies have been developed against deep facial recognition systems, poor image quality and unrealistic assumptions remain their drawbacks. We therefore propose a black-box poisoning attack strategy against facial recognition systems that injects abnormal data generated by applying elastic transformation to deform facial components. We demonstrated the performance of the proposed strategy on the VGGFace2 dataset by attacking various facial feature extractors, and it outperformed its counterparts in the literature. The contributions of this study are 1) a novel attack against a nonoverfitting facial recognition system that requires fewer injected samples, 2) a new image transformation technique for composing malicious samples, and 3) a method that leaves no visible trace of modification to the human eye.
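The paper's generative deformation model is not reproduced here, but the elastic transformation it builds on (introduced by Simard et al. for data augmentation) is a standard technique: a random displacement field is smoothed with a Gaussian kernel and the image is resampled along it. The following is a minimal sketch of that transformation only, not the authors' full poisoning pipeline; the parameter names `alpha` (displacement scale) and `sigma` (smoothing width) are illustrative defaults, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_transform(image, alpha=34.0, sigma=4.0, rng=None):
    """Elastic deformation: smooth a random displacement field
    with a Gaussian filter, then resample the image along it."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    # Random per-pixel displacements in [-1, 1], smoothed and scaled.
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([y + dy, x + dx])
    if image.ndim == 2:
        return map_coordinates(image, coords, order=1, mode="reflect")
    # Apply the same displacement field to every channel.
    return np.stack(
        [map_coordinates(image[..., c], coords, order=1, mode="reflect")
         for c in range(image.shape[-1])],
        axis=-1,
    )
```

In an attack setting of the kind the abstract describes, such a field would be applied selectively around facial landmarks (eyes, nose, mouth) so the deformation perturbs the extracted features while remaining imperceptible; small `alpha` with moderate `sigma` keeps the warp locally smooth.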
Acknowledgments
This work was supported in part by the Ministry of Science and Technology, Taiwan, under Contract MOST 110-2221-E-A49-101 and Contract MOST 110-2622-8-009-014-TM1; and in part by the Financial Technology (FinTech) Innovation Research Center, National Yang Ming Chiao Tung University.
Ethics declarations
Conflict of interest
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
About this article
Cite this article
Chan, CT., Huang, SH. & Choy, P.P. Poisoning attacks on face authentication systems by using the generative deformation model. Multimed Tools Appl 82, 29457–29476 (2023). https://doi.org/10.1007/s11042-023-14695-5