
Attack and Fault Injection in Self-driving Agents on the Carla Simulator – Experience Report

  • Conference paper
  • First Online:
Computer Safety, Reliability, and Security (SAFECOMP 2021)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 12852)


Abstract

Machine Learning applications are acknowledged as the foundation of autonomous driving, because they are the enabling technology for most driving tasks. However, the inclusion of trained agents in automotive systems exposes the vehicle to novel attacks and faults that can result in safety threats to the driving tasks. In this paper we report on our experimental campaign of injecting adversarial attacks and software faults into a self-driving agent running in a driving simulator. We show that adversarial attacks and faults injected into the trained agent can lead to erroneous decisions and severely jeopardize safety. The paper presents a feasible and easily reproducible approach based on an open-source simulator and tools, and the results clearly motivate the need for both protective measures and extensive testing campaigns.
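To make the attack side of the campaign concrete, the sketch below crafts an adversarial camera frame with the Fast Gradient Sign Method (FGSM) through the Adversarial Robustness Toolbox (ART). This is a minimal illustration under stated assumptions, not the paper's exact pipeline: the tiny convolutional model, the 160x384 frame size, and the four output classes are hypothetical placeholders standing in for the agent's real perception network.

    import numpy as np
    import torch.nn as nn
    from art.estimators.classification import PyTorchClassifier
    from art.attacks.evasion import FastGradientMethod

    # Placeholder perception network; the trained agent's model would be loaded here.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4),
    )

    # Wrap the model so ART can take loss gradients with respect to the input image.
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(3, 160, 384),  # assumed camera resolution
        nb_classes=4,               # hypothetical set of high-level commands
        clip_values=(0.0, 1.0),
    )

    # One normalized RGB frame; random data stands in for a simulator camera image.
    frame = np.random.rand(1, 3, 160, 384).astype(np.float32)

    # FGSM takes a single gradient-sign step of size eps; the resulting adv_frame
    # replaces the clean observation that the driving agent would normally receive.
    attack = FastGradientMethod(estimator=classifier, eps=8.0 / 255.0)
    adv_frame = attack.generate(x=frame)

On the fault-injection side, the campaign corrupts the trained agent itself rather than its inputs. The fragment below, reusing the placeholder model above, shows the basic mechanism in plain PyTorch: a single bit flip in a convolution weight, the kind of fault that dedicated tools such as PyTorchFI inject automatically across layers, tensors, and bit positions.

    import struct
    import torch

    def flip_bit(value, bit):
        # Reinterpret the float32 as its 32-bit pattern, flip one bit, convert back.
        pattern = struct.unpack("<I", struct.pack("<f", value))[0]
        return struct.unpack("<f", struct.pack("<I", pattern ^ (1 << bit)))[0]

    # Corrupt one weight of the first convolution in place; flipping a
    # high-exponent bit (e.g. bit 30) typically yields a large, visible error.
    with torch.no_grad():
        weights = model[0].weight.view(-1)
        weights[0] = flip_bit(weights[0].item(), bit=30)

Driving the agent with either the perturbed frames or the corrupted weights, and logging the resulting infractions in the simulator, reproduces in miniature the kind of experiments the paper reports.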



Acknowledgment

This work has been partially supported by the project POR-CREO SPACE “Smart PAssenger CEnter” funded by the Tuscany Region.

Author information


Corresponding author

Correspondence to Andrea Ceccarelli.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Piazzesi, N., Hong, M., Ceccarelli, A. (2021). Attack and Fault Injection in Self-driving Agents on the Carla Simulator – Experience Report. In: Habli, I., Sujan, M., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2021. Lecture Notes in Computer Science, vol 12852. Springer, Cham. https://doi.org/10.1007/978-3-030-83903-1_14


  • DOI: https://doi.org/10.1007/978-3-030-83903-1_14

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-83902-4

  • Online ISBN: 978-3-030-83903-1

  • eBook Packages: Computer Science, Computer Science (R0)
