Abstract
With the development of society and changing lifestyles, robotic navigation based on traditional positioning and planning techniques can hardly cope with complex dynamic scenarios. Classical path-planning algorithms such as Dijkstra and A* assume a static map and therefore cannot plan paths around moving obstacles. In this paper, we present an improved reinforcement learning-based algorithm for local path planning that continues to perform well when many dynamic obstacles are present. Compared with navigation methods based entirely on reinforcement learning, the proposed algorithm adapts to new situations at low cost and trains faster.
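To make the contrast concrete, a minimal version of one of the traditional global planners the abstract mentions (A* on a static occupancy grid) might look like the sketch below. This is an illustrative implementation of textbook A*, not the paper's method; all names and the grid encoding are our assumptions. Its key limitation, which motivates the paper, is that the grid is fixed: a moving obstacle invalidates the plan.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.

    Returns a list of cells from start to goal, or None if no path exists.
    The map is assumed static for the whole query, which is exactly why
    pure A*/Dijkstra struggle with dynamic obstacles.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start, None)]  # (f = g + h, g, cell, parent)
    came_from = {}
    g_cost = {start: 0}

    while open_heap:
        _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1            # uniform step cost of 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

A hybrid scheme like the one the paper proposes would typically keep such a global planner for the coarse route and hand local, obstacle-dense segments to the learned policy.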
This work is supported by the National Natural Science Foundation of China (12071460) and Shenzhen research grants (KQJSCX20180330170311901, JCYJ20180305180840138, and GGFW2017073114031767).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Zhao, K., Ning, L. (2021). The Hybrid Navigation Method in Face of Dynamic Obstacles. In: Zhang, Y., Xu, Y., Tian, H. (eds) Parallel and Distributed Computing, Applications and Technologies. PDCAT 2020. Lecture Notes in Computer Science, vol 12606. Springer, Cham. https://doi.org/10.1007/978-3-030-69244-5_26
Print ISBN: 978-3-030-69243-8
Online ISBN: 978-3-030-69244-5