Abstract
In recent years, physical adversarial attacks have received increasing attention. However, previous studies usually rely on a printer to physically realize adversarial perturbations, and such attacks inevitably suffer from perturbation distortion and poor concealment. In this paper, we propose a novel attack scheme based on illumination modulation. Owing to the rolling shutter effect of CMOS sensors, the resulting perturbation is not distorted and remains completely invisible to the human eye. Based on this scheme, we propose two novel attack methods, a denial-of-service (DoS) attack and an escape attack, and describe a realistic scenario in which to apply them. The experimental results show that both attack methods perform well against automated face recognition (AFR) systems: the DoS attack achieves a success rate of 92.13%, and the escape attack achieves a success rate of 82%.
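To make the physical principle behind the abstract concrete: a rolling-shutter CMOS sensor exposes each image row at a slightly different time, so a light source flickering faster than human flicker fusion leaves horizontal brightness stripes on the captured frame while the scene looks constant to the eye. The following minimal Python sketch simulates that striping; it is illustrative only, and the function name, flicker frequency, row-readout and exposure timings are assumptions, not the authors' implementation or parameters.

# Hypothetical sketch (not the authors' code): simulate the striped
# perturbation a flickering LED leaves on a rolling-shutter CMOS image.
# Row r is exposed during [r*t_row, r*t_row + t_exp], so a square-wave
# light of frequency f_led integrates to a row-dependent brightness gain.
import numpy as np

def rolling_shutter_stripes(image, f_led=1000.0, duty=0.5,
                            t_row=30e-6, t_exp=1e-3, depth=0.4):
    """Apply per-row brightness modulation to a float image in [0, 1].

    f_led : LED flicker frequency in Hz (assumed, above flicker fusion)
    t_row : row readout interval of the rolling shutter (assumed value)
    t_exp : per-row exposure time (assumed value)
    depth : modulation depth of the resulting stripes
    """
    h = image.shape[0]
    # Fraction of each row's exposure window during which the LED is on,
    # approximated by densely sampling the square wave in time.
    t = np.linspace(0.0, t_exp, 64)
    starts = np.arange(h) * t_row
    phase = (starts[:, None] + t[None, :]) * f_led % 1.0
    frac_on = (phase < duty).mean(axis=1)      # one value per image row
    gain = 1.0 - depth + depth * frac_on / max(duty, 1e-9)
    return np.clip(image * gain[:, None, None], 0.0, 1.0)

# Usage: striped = rolling_shutter_stripes(np.full((480, 640, 3), 0.5))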
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (61771222, 61872109), the Key Research and Development Program for Guangdong Province (2019B010136001), the Science and Technology Project of Shenzhen (JCYJ20170815145900474), the Peng Cheng Laboratory Project of Guangdong Province (PCL2018KP004), the Fundamental Research Funds for the Central Universities (21620439), and the Natural Scientific Research Innovation Foundation of Harbin Institute of Technology (HIT.NSRIF.2020078).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Chen, Z., Lin, P., Jiang, Z.L., Wei, Z., Yuan, S., Fang, J.: An Illumination Modulation-Based Adversarial Attack Against Automated Face Recognition System. In: Wu, Y., Yung, M. (eds.) Information Security and Cryptology (Inscrypt 2020). Lecture Notes in Computer Science, vol. 12612. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-71852-7_4
DOI: https://doi.org/10.1007/978-3-030-71852-7_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-71851-0
Online ISBN: 978-3-030-71852-7
eBook Packages: Computer Science, Computer Science (R0)