
Research on Path Planning Algorithm for Mobile Robot Based on Improved Reinforcement Learning

  • Conference paper
  • Intelligent Computing Theories and Application (ICIC 2021)
  • Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12837)

Abstract

This paper proposes an improved Q-learning algorithm to address two problems that arise when traditional Q-learning is applied to mobile robot path planning in complex environments: convergence is slow because a large number of iterations is required, and the reward signal can be so sparse that the agent never finds the optimal path. The improved algorithm refines the iterative Q-value update formula to reduce the number of iterations the agent needs during path planning, which speeds up convergence, and adds a sparse-reward mechanism so that the optimal path can still be found. To verify the effectiveness of the algorithm, simulation experiments are carried out in two environments, one simple and one complex. The simulation results show that the improved algorithm avoids obstacles effectively and finds the optimal path to the target position after fewer iterations, demonstrating that it outperforms traditional Q-learning in path planning.
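
The full text gives the modified Q-value update and the sparse-reward handling in detail; as a rough illustration of the general idea described in the abstract, and not the authors' exact formulation, the sketch below runs tabular Q-learning on a toy grid world and adds a potential-based reward-shaping term so the agent receives informative feedback even though the goal reward itself is sparse. The grid layout, obstacle positions, reward values, and hyperparameters are assumptions made for this example.

```python
# Minimal sketch (not the paper's exact method): tabular Q-learning on a small
# grid world with a potential-based shaping term added to the sparse goal reward.
# Grid size, obstacles, rewards, and hyperparameters are illustrative assumptions.
import numpy as np

ROWS, COLS = 5, 5
OBSTACLES = {(1, 1), (2, 3), (3, 1)}           # assumed obstacle cells
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # assumed learning hyperparameters
EPISODES, MAX_STEPS = 500, 200


def step(state, action):
    """Apply an action; hitting a wall or obstacle keeps the state and is penalised."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in OBSTACLES:
        return state, -1.0, False
    if (r, c) == GOAL:
        return (r, c), 10.0, True              # sparse terminal reward at the goal
    return (r, c), -0.1, False                 # small per-step cost


def shaping(state, next_state):
    """Potential-based shaping F = gamma*phi(s') - phi(s), phi = -Manhattan distance to goal."""
    def phi(s):
        return -(abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1]))
    return GAMMA * phi(next_state) - phi(state)


Q = np.zeros((ROWS, COLS, len(ACTIONS)))
rng = np.random.default_rng(0)

for _ in range(EPISODES):
    state = START
    for _ in range(MAX_STEPS):
        # epsilon-greedy action selection
        if rng.random() < EPSILON:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[state[0], state[1]]))
        next_state, reward, done = step(state, ACTIONS[a])
        reward += shaping(state, next_state)   # densify the sparse reward signal
        best_next = 0.0 if done else np.max(Q[next_state[0], next_state[1]])
        target = reward + GAMMA * best_next
        Q[state[0], state[1], a] += ALPHA * (target - Q[state[0], state[1], a])
        state = next_state
        if done:
            break

print("Greedy first move from start:", ACTIONS[int(np.argmax(Q[START]))])
```

With the shaping term removed, the same loop typically needs many more episodes before the greedy policy reaches the goal, which is the sparse-reward effect the paper targets.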

Acknowledgments

This work is partly supported by the Natural Science Foundation of Liaoning, China under Grant 2019MS008, and the Education Committee Project of Liaoning, China under Grant LJ2019003.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, J., Zhang, A., Zhang, Y. (2021). Research on Path Planning Algorithm for Mobile Robot Based on Improved Reinforcement Learning. In: Huang, D.S., Jo, K.H., Li, J., Gribova, V., Hussain, A. (eds.) Intelligent Computing Theories and Application. ICIC 2021. Lecture Notes in Computer Science, vol. 12837. Springer, Cham. https://doi.org/10.1007/978-3-030-84529-2_50

  • DOI: https://doi.org/10.1007/978-3-030-84529-2_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-84528-5

  • Online ISBN: 978-3-030-84529-2

  • eBook Packages: Computer Science; Computer Science (R0)
