
Enhanced decision making in multi-scenarios for autonomous vehicles using alternative bidirectional Q network

  • Original Article
  • Published in Neural Computing and Applications

Abstract

To further enhance decision making for autonomous vehicles, granting more safety and comfort while reducing traffic and accidents, learning approaches have been adopted, mainly reinforcement learning. However, these algorithms still leave room for improvement due to many limitations, including convergence rate, stability, handling of multiple dynamic environments, raw performance, robustness, and algorithmic complexity. To tackle these problems, we propose a novel extension of the well-known deep Q network, called the "alternative bidirectional Q network," that aims mainly to enhance stability and performance by improving the exploration and Q-value update policies, addressing a gap in the literature, which generally focuses on a single policy to handle decision making in multiple scenarios (avoiding obstacles, goal-oriented environments, etc.). In the alternative bidirectional Q network, data about previous, current, and upcoming states are used to update the Q values, and actions are selected according to the relation between these data, allowing the agent to handle several scenarios: highways, merges, roundabouts, and parking. This concept gives reinforcement learning agents a better balance between exploration and exploitation and enhances stability during learning. A Gym simulator was adopted for training and testing the proposed algorithm, while various state-of-the-art algorithms served as benchmarks. The performance of the proposed extension was evaluated using several metrics: loss, accuracy, speed, and reward. The comparison showed the superiority of the novel extension in all scenarios for most of these metrics, and the experimental results were further confirmed by complexity and robustness analyses.
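
The abstract's core idea, updating Q values using the previous, current, and upcoming states, can be sketched as follows. This is a minimal illustration only: the paper uses a deep Q network, and the exact update rule is not given in this excerpt, so the tabular form, the alpha-weighted backward pass, and all names below are assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal tabular sketch of the "previous / current / upcoming state" idea
# from the abstract. The alpha-weighted backward pass and all names here
# are illustrative assumptions, not the authors' implementation.

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
gamma, lr, alpha = 0.95, 0.1, 0.5  # discount, learning rate, backward weight

def bidirectional_update(s_prev, a_prev, r_prev, s, a, r, s_next, done):
    # Forward step: standard Q-learning target using the upcoming state.
    forward_target = r + (0.0 if done else gamma * Q[s_next].max())
    Q[s, a] += lr * (forward_target - Q[s, a])
    # Backward step: re-estimate the previous state-action pair now that
    # Q[s] has changed, so value information flows in both directions.
    backward_target = r_prev + gamma * Q[s].max()
    Q[s_prev, a_prev] += lr * alpha * (backward_target - Q[s_prev, a_prev])

# Toy driver on random transitions (a stand-in for the highway, merge,
# roundabout, and parking scenarios mentioned in the abstract).
rng = np.random.default_rng(0)
s_prev, a_prev, r_prev, s = 0, 0, 0.0, 1
for _ in range(200):
    # Epsilon-greedy action selection balancing exploration and exploitation.
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next, r = int(rng.integers(n_states)), float(rng.normal())
    bidirectional_update(s_prev, a_prev, r_prev, s, a, r, s_next, done=False)
    s_prev, a_prev, r_prev, s = s, a, r, s_next
```

The design intuition, as described in the abstract, is that letting the new estimate of the current state flow backward to the previous state-action pair stabilizes learning, while the forward bootstrap keeps the usual exploitation of the upcoming state's value.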



Author information


Corresponding author

Correspondence to Mohamed Saber Rais.

Ethics declarations

Conflicts of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Rais, M.S., Zouaidia, K. & Boudour, R. Enhanced decision making in multi-scenarios for autonomous vehicles using alternative bidirectional Q network. Neural Comput & Applic 34, 15981–15996 (2022). https://doi.org/10.1007/s00521-022-07278-2

