Abstract
Machine learning is widely acknowledged as a foundation of autonomous driving, because it is the enabling technology for most driving tasks. However, including trained agents in automotive systems exposes the vehicle to novel attacks and faults that can threaten the safety of the driving tasks. In this paper we report on our experimental campaign injecting adversarial attacks and software faults into a self-driving agent running in a driving simulator. We show that adversarial attacks and faults injected into the trained agent can lead to erroneous decisions and severely jeopardize safety. The paper presents a feasible and easily reproducible approach based on an open-source simulator and tools, and the results clearly motivate the need for both protective measures and extensive testing campaigns.
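The two perturbation classes the campaign studies can be illustrated in miniature. The paper builds on open-source tools (the CARLA simulator and, per its tooling, the Adversarial Robustness Toolbox and PyTorchFI); the sketch below is *not* the authors' setup, but a minimal, library-free illustration on a hypothetical toy linear "steering" model, using only NumPy: an FGSM-style adversarial step on the input, and a bit-flip-style fault injected into a stored weight. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One FGSM-style step: move each input component by eps along the
    sign of the loss gradient, then clip back to the valid input range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=16)        # weights of a hypothetical toy "steering" model
x = rng.uniform(size=16)       # stand-in for a normalized camera-image input
target = 0.0                   # desired steering output

def loss(weights, inputs):
    # Squared error between the model's steering score and the target.
    return (weights @ inputs - target) ** 2

# 1) Adversarial attack on the input (FGSM idea): for this linear toy model
#    the loss gradient w.r.t. the input is analytic.
grad_x = 2.0 * (w @ x - target) * w
x_adv = fgsm_perturb(x, grad_x, eps=0.05)

# 2) Software fault in the trained agent: corrupt a single weight, crudely
#    mimicking a bit flip in the exponent of a stored parameter.
w_faulty = w.copy()
w_faulty[3] *= -1e3

print(f"clean loss:        {loss(w, x):.4f}")
print(f"adversarial loss:  {loss(w, x_adv):.4f}")  # >= clean loss by construction
print(f"faulty-agent loss: {loss(w_faulty, x):.4f}")
```

In the paper-scale experiments the same two ideas target the real agent: the adversarial perturbation is applied to the camera frames fed to the driving network, and the fault is injected into the trained network's parameters at runtime.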
Acknowledgment
This work has been partially supported by the project POR-CREO SPACE “Smart PAssenger CEnter” funded by the Tuscany Region.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Piazzesi, N., Hong, M., Ceccarelli, A. (2021). Attack and Fault Injection in Self-driving Agents on the Carla Simulator – Experience Report. In: Habli, I., Sujan, M., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2021. Lecture Notes in Computer Science, vol 12852. Springer, Cham. https://doi.org/10.1007/978-3-030-83903-1_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-83902-4
Online ISBN: 978-3-030-83903-1