Generate Usable Adversarial Examples via Simulating Additional Light Sources

Abstract

Deep neural networks have been shown to be critically vulnerable to adversarial attacks. This has led to a proliferation of methods that generate adversarial examples from different perspectives, and the resulting examples have rapidly become harder to perceive, faster to generate, and more effective at attacking. Inspired by the cyberspace attack process, this paper analyzes adversarial examples from the perspective of the attack path and finds that meaningless noise perturbations make these examples efficient to generate but difficult for an attacker to apply. This paper instead generates adversarial examples from realistic features of the original images: deep convolutional networks are deceived by simulating the addition of tiny light sources that produce subtle feature changes in the image. Because the generated perturbations are no longer meaningless noise, the approach is, in principle, a promising avenue for practical application. Experiments demonstrate that the generated adversarial examples achieve good attack results against deep convolutional networks and can be applied to black-box attacks.
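
To make the idea concrete, the sketch below illustrates the general technique the abstract describes; it is a minimal illustration under stated assumptions, not the authors' implementation. A soft Gaussian highlight mimics a small additional light source, and its position, radius, and intensity are searched in a black-box fashion until the classifier's prediction changes. The PyTorch/ResNet-50 setup, the Gaussian spot parameterization, and the random search are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' implementation): perturb an
# image with a simulated additional light source and randomly search the
# light-source parameters, query-only (black-box), until the prediction flips.
import torch
import torchvision.models as models


def add_light_source(image, cx, cy, radius, intensity):
    """Add a soft circular highlight (Gaussian falloff) to a CHW image in [0, 1]."""
    _, h, w = image.shape
    ys = torch.arange(h, dtype=torch.float32).view(h, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, w)
    dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
    spot = intensity * torch.exp(-dist2 / (2.0 * radius ** 2))
    return torch.clamp(image + spot.unsqueeze(0), 0.0, 1.0)


def light_source_attack(model, image, true_label, tries=200):
    """Randomly sample light-source parameters until the model's prediction changes."""
    _, h, w = image.shape
    for _ in range(tries):
        cx = torch.randint(0, w, (1,)).item()
        cy = torch.randint(0, h, (1,)).item()
        radius = torch.empty(1).uniform_(2.0, 12.0).item()       # spot size (pixels)
        intensity = torch.empty(1).uniform_(0.05, 0.3).item()    # brightness increase
        candidate = add_light_source(image, cx, cy, radius, intensity)
        with torch.no_grad():
            pred = model(candidate.unsqueeze(0)).argmax(dim=1).item()
        if pred != true_label:
            return candidate, pred  # adversarial example found
    return None, true_label


if __name__ == "__main__":
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    image = torch.rand(3, 224, 224)  # placeholder; use a preprocessed real image in practice
    with torch.no_grad():
        label = model(image.unsqueeze(0)).argmax(dim=1).item()
    adv, new_label = light_source_attack(model, image, label)
    if adv is not None:
        print(f"prediction changed from class {label} to class {new_label}")
```

Because only physically plausible lighting parameters are varied and the model is accessed through queries alone, the sketch also reflects the black-box setting mentioned in the abstract; the parameter ranges shown are arbitrary and would need tuning for a real model and dataset.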

Author information

Corresponding author

Correspondence to Chen Xi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Xi, C., Wei, G., Fan, Z. et al. Generate Usable Adversarial Examples via Simulating Additional Light Sources. Neural Process Lett 55, 3605–3625 (2023). https://doi.org/10.1007/s11063-022-11024-z
