
Transfer Learning in Autonomous Driving Using Real-World Samples

  • Conference paper
  • In: Advances on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC 2021)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 343)

Abstract

The Sim2Real gap has been receiving a great deal of attention lately. Many Artificial Intelligence techniques, Reinforcement Learning for example, require millions of iterations to reach satisfactory performance, which often forces them to be trained solely in simulation. If the gap between the simulated environment and the target environment is too wide, however, the trained agents lose performance when deployed. Bridging this gap reduces the performance loss at deployment and thereby improves the effectiveness of these agents. This paper proposes a new technique to tackle this issue. The technique centres on demonstration samples gathered in the target environment and builds on two transfer learning fundamentals, Domain Randomization and Domain Adaptation: by combining their advantages, agents transfer training performance to the target environment more successfully. Experimental results show a strong decrease in performance loss during deployment when the agent is exposed to the demonstration samples during training. We believe the proposed methodology can be applied in fields other than autonomous driving to improve transfer learning performance.
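The abstract names the ingredients of the technique only at a high level. As a rough illustration (not the authors' implementation; every class, function, and parameter range below is an assumption invented for this sketch), the following Python snippet shows one common way the two ingredients can be combined in a Reinforcement Learning loop: simulator parameters are re-randomized every episode (Domain Randomization), while a replay buffer keeps a fixed pool of real-world demonstration transitions and mixes them into every training batch (target-domain samples).

import random
from collections import deque


class MixedReplayBuffer:
    """Replay buffer holding simulated experience next to a fixed pool of
    real-world demonstration transitions that are never evicted."""

    def __init__(self, capacity, demo_transitions, demo_fraction=0.25):
        self.sim = deque(maxlen=capacity)   # simulated transitions (FIFO)
        self.demo = list(demo_transitions)  # real-world samples, kept forever
        self.demo_fraction = demo_fraction  # share of each batch from demos

    def add(self, transition):
        self.sim.append(transition)

    def sample(self, batch_size):
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demo))
        n_sim = min(batch_size - n_demo, len(self.sim))
        return (random.sample(self.demo, n_demo)
                + random.sample(list(self.sim), n_sim))


def randomized_sim_params():
    """Domain Randomization: draw fresh simulator parameters each episode so
    the policy cannot overfit to a single configuration. The parameters and
    ranges here are invented for illustration."""
    return {
        "tyre_friction": random.uniform(0.6, 1.0),
        "camera_noise_std": random.uniform(0.0, 0.05),
        "lighting_scale": random.uniform(0.5, 1.5),
    }

In a training loop, the simulator would be reset with randomized_sim_params() at the start of each episode, and the learner would draw its minibatches from MixedReplayBuffer.sample(), so that every gradient step sees both simulated and target-domain data. How strongly the demonstration samples are weighted (here a fixed demo_fraction) is a design choice; the paper evaluates its own variant of this idea.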



Author information

Corresponding author

Correspondence to Arne Troch.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Troch, A., de Hoog, J., Vanneste, S., Balemans, D., Latré, S., Hellinckx, P. (2022). Transfer Learning in Autonomous Driving Using Real-World Samples. In: Barolli, L. (ed.) Advances on P2P, Parallel, Grid, Cloud and Internet Computing. 3PGCIC 2021. Lecture Notes in Networks and Systems, vol. 343. Springer, Cham. https://doi.org/10.1007/978-3-030-89899-1_24

  • DOI: https://doi.org/10.1007/978-3-030-89899-1_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-89898-4

  • Online ISBN: 978-3-030-89899-1

  • eBook Packages: Engineering, Engineering (R0)
