
An Illumination Modulation-Based Adversarial Attack Against Automated Face Recognition System

  • Conference paper
  • In: Information Security and Cryptology (Inscrypt 2020)
  • Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12612)

Abstract

In recent years, physical adversarial attacks have received increasing attention. However, previous studies usually rely on a printer to physically realize adversarial perturbations, and such attacks inevitably suffer from perturbation distortion and poor concealment. In this paper, we propose a novel attack scheme based on illumination modulation. Owing to the rolling shutter effect of CMOS sensors, the created perturbation is not distorted and is completely invisible to the human eye. Based on this scheme, we propose two novel attack methods, a denial-of-service (DoS) attack and an escape attack, and demonstrate them in a real-world scenario. The experimental results show that both attack methods perform well against automated face recognition (AFR): the DoS attack achieves a success rate of 92.13% and the escape attack a success rate of 82%.
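
The following is a minimal, hypothetical sketch (not taken from the paper) of the rolling shutter principle the abstract relies on: because a CMOS sensor exposes the image row by row, a light source flickering faster than the eye can perceive stamps horizontal intensity stripes onto the captured frame. The function name simulate_rolling_shutter, the per-row readout time, and the modulation parameters are illustrative assumptions; the authors' actual perturbation-generation method is not described here.

# Hypothetical illustration: how a temporally modulated light source produces
# stripe perturbations in a frame captured by a rolling-shutter CMOS sensor.
import numpy as np

def simulate_rolling_shutter(frame, mod_freq_hz, row_readout_s=30e-6, depth=0.3, phase=0.0):
    """Apply a sinusoidal illumination modulation row by row.

    frame         : H x W x C image array, float values in [0, 1]
    mod_freq_hz   : assumed modulation frequency of the light source
    row_readout_s : assumed per-row readout time of the rolling shutter
    depth         : modulation depth (0 = no stripes, 1 = full on/off)
    phase         : initial phase of the modulation
    """
    h = frame.shape[0]
    t = np.arange(h) * row_readout_s                  # exposure start time of each row
    gain = 1.0 + depth * np.sin(2 * np.pi * mod_freq_hz * t + phase)
    striped = frame * gain[:, None, None]             # scale each row by the instantaneous brightness
    return np.clip(striped, 0.0, 1.0)

# Example: a ~1 kHz flicker is invisible to the human eye, but with a ~30 us
# row readout it paints stripes of roughly 1 / (1e3 * 30e-6) = 33 rows per period.
if __name__ == "__main__":
    img = np.full((480, 640, 3), 0.5)                 # stand-in for a camera frame
    attacked = simulate_rolling_shutter(img, mod_freq_hz=1000.0)
    print(attacked.shape, attacked.min(), attacked.max())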



Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (61771222, 61872109), the Key Research and Development Program for Guangdong Province (2019B010136001), the Science and Technology Project of Shenzhen (JCYJ20170815145900474), the Peng Cheng Laboratory Project of Guangdong Province (PCL2018KP004), the Fundamental Research Funds for the Central Universities (21620439), and the Natural Scientific Research Innovation Foundation in Harbin Institute of Technology (HIT.NSRIF.2020078).

Author information

Corresponding author

Correspondence to Junbin Fang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, Z., Lin, P., Jiang, Z.L., Wei, Z., Yuan, S., Fang, J. (2021). An Illumination Modulation-Based Adversarial Attack Against Automated Face Recognition System. In: Wu, Y., Yung, M. (eds) Information Security and Cryptology. Inscrypt 2020. Lecture Notes in Computer Science, vol. 12612. Springer, Cham. https://doi.org/10.1007/978-3-030-71852-7_4


  • DOI: https://doi.org/10.1007/978-3-030-71852-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-71851-0

  • Online ISBN: 978-3-030-71852-7

  • eBook Packages: Computer Science (R0)
