
Preventing Adversarial Attacks on Autonomous Driving Models

Conference paper — Wireless Internet (WiCON 2022)

Abstract

Autonomous driving systems are among the most remarkable technological developments of recent times. Such systems gather live information about the vehicle and respond with the proficiency of a skilled human driver. However, the pervasiveness of computing technologies has also introduced serious threats to the security and safety of autonomous driving systems, and adversarial attacks are among the most serious threats to autonomous driving models (ADMs). The purpose of this paper is to determine how driving models behave when confronted with physical adversarial attacks against end-to-end ADMs. We analyze several adversarial attacks and their defense mechanisms for selected autonomous driving models. Five adversarial attacks were applied to three ADMs, and we then analyzed how these attacks operate and how they affect those ADMs. Afterward, we propose four defense strategies against the five adversarial attacks and identify the defense mechanism most resilient to all types of attacks. Support Vector Machine and neural regression were the two machine learning models used to categorize the challenges for the models' training. The results show that we achieved 95% accuracy.
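As context for the attack setting described in the abstract, the sketch below illustrates one common class of adversarial attack: a fast-gradient-sign (FGSM-style) perturbation applied to the input of a toy end-to-end steering model. The linear "model", its random weights, and the epsilon budget are illustrative assumptions for the sketch only; they are not the paper's actual ADMs, attacks, or defenses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy steering model: predicts a steering angle from a
# flattened camera frame. A real ADM would be a deep network; a linear
# model keeps the gradient explicit for illustration.
W = rng.normal(size=(64,))

def steer(x):
    # Predicted steering angle for input frame x.
    return float(W @ x)

def fgsm_perturb(x, epsilon=0.05):
    # For a linear model, the gradient of the output w.r.t. the input is
    # simply W; stepping along sign(gradient) maximally shifts the
    # prediction under an L-infinity budget of epsilon. Clipping keeps
    # pixel values in the valid [0, 1] range.
    grad = W
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

x = rng.uniform(0.0, 1.0, size=64)   # stand-in camera frame
x_adv = fgsm_perturb(x)

shift = steer(x_adv) - steer(x)      # how far the prediction moved
budget = np.max(np.abs(x_adv - x))   # actual L-infinity perturbation size
```

A physically realized attack (e.g. a printed patch or billboard) pursues the same objective, shifting the model's prediction, but must also survive camera capture, viewing angle, and lighting, which is what makes the physical setting studied in this paper harder than the purely digital one.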



Author information

Correspondence to Hasan Ali Khattak.


Copyright information

© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper


Cite this paper

Sajid, J., Anam, B., Khattak, H.A., Malik, A.W., Abbas, A., Khan, S.U. (2023). Preventing Adversarial Attacks on Autonomous Driving Models. In: Haas, Z.J., Prakash, R., Ammari, H., Wu, W. (eds) Wireless Internet. WiCON 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 464. Springer, Cham. https://doi.org/10.1007/978-3-031-27041-3_1


  • DOI: https://doi.org/10.1007/978-3-031-27041-3_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27040-6

  • Online ISBN: 978-3-031-27041-3

  • eBook Packages: Computer Science (R0)
