
Practical black-box adversarial attack on open-set recognition: Towards robust autonomous driving

Peer-to-Peer Networking and Applications

Abstract

As an important approach to image classification, Open-Set Recognition (OSR) is gradually being deployed in autonomous driving systems (ADSs) to perceive surrounding environments that contain unknown objects. To date, many researchers have demonstrated that existing OSR classifiers are severely threatened by adversarial input images. Nevertheless, most existing attack approaches are white-box attacks, which assume that the attacker knows the internals of the target OSR model. Hence, these attacks cannot effectively target ADSs that keep their models and data confidential. To facilitate the design of future generations of robust OSR classifiers for safer ADSs, we introduce a practical black-box adversarial attack. First, we simulate a real-world open-set environment through a reasonable dataset division. Second, we train a substitute model into which we incorporate dynamic convolution to improve the transferability of the adversarial data. Finally, we use the substitute model to generate adversarial data with which to attack the target model. To the best of the authors' knowledge, the proposed attack model is the first to utilize dynamic convolution to improve the transferability of adversarial data. To evaluate the proposed attack model, we conduct extensive experiments on four publicly available datasets. The numerical results show that the proposed black-box attack achieves an attack capability similar to that of white-box approaches. Specifically, on the German Traffic Sign Recognition Benchmark dataset, our model decreases the classification accuracy on known classes from 99.8% to 9.81% and decreases the AUC for detecting unknown classes from 97.7% to 48.8%.
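The pipeline sketched in the abstract (a substitute model built with dynamic convolution, used to craft adversarial examples that transfer to an unseen target classifier) can be illustrated with a short sketch. The code below is not the authors' released implementation: the `DynamicConv2d` module, the `fgsm_transfer` helper, and all hyperparameters (number of kernels K, attention temperature, perturbation budget eps) are illustrative assumptions, following the dynamic-convolution idea of Chen et al. (2020) and a standard FGSM-based transfer attack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Sketch of dynamic convolution: K candidate kernels are mixed per input
    sample by attention weights (global pooling -> small MLP -> softmax)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, K=4, temperature=30.0):
        super().__init__()
        self.K, self.kernel_size, self.temperature = K, kernel_size, temperature
        # K candidate kernels and biases, aggregated per input sample
        self.weight = nn.Parameter(
            0.01 * torch.randn(K, out_ch, in_ch, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(K, out_ch))
        # Attention branch producing one weight per candidate kernel
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, max(in_ch // 4, 4)), nn.ReLU(inplace=True),
            nn.Linear(max(in_ch // 4, 4), K))

    def forward(self, x):
        B, C, H, W = x.shape
        pi = F.softmax(self.attn(x) / self.temperature, dim=1)      # (B, K)
        w = torch.einsum('bk,kocij->bocij', pi, self.weight)        # per-sample kernels
        b = torch.einsum('bk,ko->bo', pi, self.bias)
        # Run the whole batch at once as a grouped convolution (groups = batch size)
        out = F.conv2d(x.reshape(1, B * C, H, W),
                       w.reshape(B * w.shape[1], C, self.kernel_size, self.kernel_size),
                       b.reshape(-1), padding=self.kernel_size // 2, groups=B)
        return out.reshape(B, -1, H, W)

def fgsm_transfer(substitute, target, x, y, eps=8 / 255):
    """Craft FGSM adversarial examples on the (white-box) substitute and report
    how often they also fool the black-box target model."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(substitute(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        return (target(x_adv).argmax(dim=1) != y).float().mean().item()
```

In this sketch, a substitute classifier built from `DynamicConv2d` layers would be trained on the attacker's own data split, after which `fgsm_transfer(substitute, target, x, y)` estimates how often the crafted examples mislead the black-box target; the paper additionally evaluates the effect on open-set unknown-class detection (AUC), which this toy helper does not cover.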



Funding

This work was sponsored by the Program of Shanghai Academic Research Leader (No. 21XD1421500) and supported by the National Natural Science Foundation of China under Grants No. 61872230, U1936213, 61802248, and 61802249.

Author information


Corresponding author

Correspondence to Mi Wen.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, Y., Zhang, K., Lu, K. et al. Practical black-box adversarial attack on open-set recognition: Towards robust autonomous driving. Peer-to-Peer Netw. Appl. 16, 295–311 (2023). https://doi.org/10.1007/s12083-022-01390-9

