Abstract
Autonomous driving systems are among the most remarkable technological developments of recent times. Such systems gather live information about the vehicle and its surroundings and respond with the competence of a skilled human driver. The pervasiveness of computing technologies, however, has also created serious threats to the security and safety of autonomous driving systems; adversarial attacks are among the most serious of these threats to autonomous driving models (ADMs). The purpose of this paper is to determine how end-to-end driving models behave when confronted with physical adversarial attacks. We analyze several adversarial attacks and their defense mechanisms for selected autonomous driving models. Five adversarial attacks were applied to three ADMs, and we subsequently analyzed the functionality of these attacks and their effects on those ADMs. We then propose four defense strategies against the five adversarial attacks and identify the defense mechanism most resilient to all attack types. Support vector machine and neural regression were the two machine learning models used to categorize the challenges for the models' training. The results show that we achieved 95% accuracy.
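The abstract does not include source code, so as a minimal illustrative sketch of the class of attack studied, the following applies the fast gradient sign method (FGSM, Goodfellow et al., 2014), a standard gradient-based adversarial attack, to a hypothetical end-to-end steering-angle regressor. The names model, image, target, and epsilon, and the PyTorch setting, are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor,
                target: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    # Hypothetical illustration: `model` maps a camera frame to a steering
    # angle; `epsilon` bounds the per-pixel perturbation magnitude.
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(image), target)
    loss.backward()  # gradient of the loss w.r.t. the input pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep a valid pixel range

Defenses of the kind evaluated in the paper (for example, adversarial training) would then expose the model to perturbed frames produced this way during training to improve its robustness.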
Copyright information
© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
About this paper
Cite this paper
Sajid, J., Anam, B., Khattak, H.A., Malik, A.W., Abbas, A., Khan, S.U. (2023). Preventing Adversarial Attacks on Autonomous Driving Models. In: Haas, Z.J., Prakash, R., Ammari, H., Wu, W. (eds) Wireless Internet. WiCON 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 464. Springer, Cham. https://doi.org/10.1007/978-3-031-27041-3_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-27040-6
Online ISBN: 978-3-031-27041-3