
ADVFilter: Adversarial Example Generated by Perturbing Optical Path

  • Conference paper
Computer Vision – ACCV 2022 Workshops (ACCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13848)


Abstract

Deep Neural Networks (DNNs) have achieved great success in many applications and are taking over more and more systems in the real world. As a result, the security of DNN systems has attracted great attention from the community. In typical scenarios, the input images of a DNN are collected through a camera. In this paper, we propose a new type of security threat that attacks a DNN classifier by perturbing the optical path of the camera input through a specially designed filter. Generating such a filter involves several challenges. First, the filter must be input-free. Second, the filter must be simple enough to manufacture. We propose a framework, called ADVFilter, to generate such filters. ADVFilter models the optical path perturbation with a thin plate spline and optimizes for minimal distortion of the input images. ADVFilter can generate an adversarial pattern for a specific class. This adversarial pattern is universal for that class, meaning that it misleads the DNN model on all input images of the class with high probability. We demonstrate our idea on the MNIST dataset, and the results show that ADVFilter achieves a success rate of up to 90% with only 16 corresponding points. To the best of our knowledge, this is the first work to propose such a security threat against DNN models.
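To make the attack concrete, the sketch below illustrates the core idea in Python: a single thin plate spline (TPS) warp, parameterized by the offsets of a small grid of control points (16 corresponding points, as in the paper), is applied uniformly to every image of a target class and scored by how often it flips the classifier's prediction. This is only a minimal sketch under our own assumptions: the helpers tps_warp, attack_success_rate, and model_predict are hypothetical, SciPy's RBF interpolator stands in for the paper's TensorFlow TPS layer, and the optimization of the offsets for minimal distortion is not shown.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator  # SciPy >= 1.7

def tps_warp(image, src_pts, dst_pts):
    """Warp a grayscale image with a thin plate spline defined by
    corresponding control points (hypothetical helper, not the paper's code)."""
    h, w = image.shape
    # Fit a TPS mapping from output coordinates back to input coordinates
    # (backward warping), so every output pixel gets a source location.
    tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
    rows, cols = np.mgrid[0:h, 0:w]
    out_coords = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    src_coords = tps(out_coords)
    # Nearest-neighbour sampling keeps the sketch short; a differentiable
    # bilinear sampler would be used inside a TensorFlow pipeline.
    r = np.clip(np.round(src_coords[:, 0]), 0, h - 1).astype(int)
    c = np.clip(np.round(src_coords[:, 1]), 0, w - 1).astype(int)
    return image[r, c].reshape(h, w)

def attack_success_rate(model_predict, images, true_label, grid_pts, offsets):
    """Apply one shared (input-free) TPS warp to every image of a class and
    measure how often the classifier is misled."""
    fooled = 0
    for img in images:
        warped = tps_warp(img, grid_pts, grid_pts + offsets)
        if model_predict(warped) != true_label:
            fooled += 1
    return fooled / len(images)

# A 4x4 grid of control points on 28x28 MNIST images; `offsets` is the
# variable an attacker would optimize (e.g. with a gradient-free method),
# trading off attack success against the amount of geometric distortion.
grid_pts = np.array([[r, c] for r in np.linspace(3.0, 24.0, 4)
                            for c in np.linspace(3.0, 24.0, 4)])
offsets = np.random.uniform(-1.5, 1.5, size=grid_pts.shape)
# rate = attack_success_rate(model_predict, class_images, true_label,
#                            grid_pts, offsets)
```

Because the warp is determined entirely by the 16 control-point offsets and not by any particular input image, the same perturbation could in principle be realized as a fixed optical filter placed in front of the camera rather than as a per-image digital modification.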

Acknowledgements

We are particularly grateful to Inwan Yoo, who implemented TPS in TensorFlow and shared the code at https://github.com/iwyoo/tf_ThinPlateSpline.

Author information

Corresponding author

Correspondence to Lili Zhang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, L., Wang, X. (2023). ADVFilter: Adversarial Example Generated by Perturbing Optical Path. In: Zheng, Y., Keleş, H.Y., Koniusz, P. (eds) Computer Vision – ACCV 2022 Workshops. ACCV 2022. Lecture Notes in Computer Science, vol 13848. Springer, Cham. https://doi.org/10.1007/978-3-031-27066-6_3

  • DOI: https://doi.org/10.1007/978-3-031-27066-6_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27065-9

  • Online ISBN: 978-3-031-27066-6

  • eBook Packages: Computer Science, Computer Science (R0)
