
Hierarchical Reinforcement Learning Approach for the Road Intersection Task

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 948))

Abstract

The development of control systems for unmanned autonomous vehicles (UAVs) is moving toward richer interaction between the vehicle and its environment and toward increasingly realistic scenarios. With the emergence of the "smart city" concept, the view of urban transportation has shifted toward self-driving cars. In this work we address the task of driving a car through a road intersection. We built a new environment to simulate the process and applied a hierarchical reinforcement learning method to obtain the required behaviour from the car. The environment can also serve as a benchmark for future algorithms on this task.
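The environment described above can be pictured as a small Gym-style simulator: the agent controls a car approaching a crossing, observes its own progress and the distance of cross-traffic, and chooses longitudinal actions. The sketch below is a minimal illustration of that interface only; the class name, state variables, dynamics, and reward values are assumptions for exposition and are not taken from the paper's actual environment (see the repository linked in the Notes).

```python
import random


class IntersectionEnv:
    """Minimal Gym-style sketch of a road-intersection task.

    Observation: (own_position, cross_traffic_distance).
    Actions: 0 = brake, 1 = hold speed, 2 = accelerate.
    All names, dynamics, and rewards here are illustrative
    assumptions, not the paper's environment.
    """

    GOAL = 10  # position at which the intersection is cleared

    def reset(self):
        self.pos = 0
        self.speed = 1
        self.other = random.randint(3, 8)  # distance of cross-traffic
        return (self.pos, self.other)

    def step(self, action):
        # Map action {0, 1, 2} to a speed change {-1, 0, +1}, clipped to [0, 2].
        self.speed = max(0, min(2, self.speed + action - 1))
        self.pos += self.speed
        self.other -= 1  # cross-traffic approaches one cell per tick
        # Hypothetical collision rule: crash if cross-traffic arrives
        # while the car is inside the intersection.
        crashed = self.other == 0 and 0 < self.pos < self.GOAL
        done = crashed or self.pos >= self.GOAL
        reward = -10.0 if crashed else (1.0 if done else -0.1)
        return (self.pos, self.other), reward, done, {}
```

A flat or hierarchical policy would then interact with the environment through the usual `reset`/`step` loop; in the hierarchical case, a high-level controller (e.g. "wait" vs. "cross") would select which low-level policy issues the per-tick actions.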


Notes

  1. https://github.com/max1408/Car_Intersection.


Acknowledgments

This work was supported by the Russian Science Foundation (Project No. 18-71-00143).

Author information

Correspondence to Aleksandr I. Panov.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Shikunov, M., Panov, A.I. (2020). Hierarchical Reinforcement Learning Approach for the Road Intersection Task. In: Samsonovich, A. (eds) Biologically Inspired Cognitive Architectures 2019. BICA 2019. Advances in Intelligent Systems and Computing, vol 948. Springer, Cham. https://doi.org/10.1007/978-3-030-25719-4_64
