Abstract
Obstacle detection and avoidance is one of the challenging problems in autonomous vehicle navigation. Sensors such as RGB cameras, radar, and lidar are presently used to analyze the environment around the vehicle for obstacle detection. Analyzing the environment with supervised learning techniques has proven to be expensive, since separate training is required for different obstacles under different scenarios. To overcome this difficulty, this paper applies Reinforcement Learning (RL) techniques that use sensor information to make decisions in an uncertain environment. An off-policy, model-free Q-learning algorithm combined with a multilayer perceptron neural network (MLP-NN) is trained to predict the optimal future action of the vehicle from its current state. The proposed Q-learning with MLP-NN approach is then compared with the state of the art, namely conventional Q-learning. A simulated urban obstacle scenario is considered with varying numbers of ultrasonic sensors for detecting obstacles. The experimental results show that Q-learning with MLP-NN using ultrasonic sensors is more accurate than the conventional Q-learning technique with the same sensors. Hence, it is demonstrated that combining Q-learning with an MLP-NN improves obstacle prediction for autonomous vehicle navigation.
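The core idea described in the abstract, Q-learning with an MLP as the Q-value approximator over sensor readings, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation: the sensor count, action set, network size, and reward are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's code): a one-hidden-layer MLP
# maps ultrasonic range readings to Q-values for discrete steering actions and
# is trained toward the Q-learning target r + gamma * max_a' Q(s', a').
rng = np.random.default_rng(0)

N_SENSORS = 5   # hypothetical: five normalized ultrasonic distance readings
N_ACTIONS = 3   # hypothetical: steer left, go straight, steer right
HIDDEN = 16
GAMMA = 0.9
LR = 0.01

W1 = rng.normal(0, 0.1, (N_SENSORS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass: ReLU hidden layer, then one Q-value per action."""
    h = np.maximum(0.0, state @ W1 + b1)
    return h, h @ W2 + b2

def q_update(state, action, reward, next_state, done):
    """One gradient step on the squared TD error for the chosen action."""
    global W1, b1, W2, b2
    h, q = q_values(state)
    _, q_next = q_values(next_state)
    target = reward if done else reward + GAMMA * np.max(q_next)
    td_error = q[action] - target
    # Backpropagate only through the chosen action's output unit
    one_hot = np.eye(N_ACTIONS)[action]
    dW2 = np.outer(h, one_hot) * td_error
    db2 = one_hot * td_error
    dh = W2[:, action] * td_error
    dh[h <= 0] = 0.0            # ReLU gradient mask
    dW1 = np.outer(state, dh)
    db1 = dh
    W2 -= LR * dW2; b2 -= LR * db2
    W1 -= LR * dW1; b1 -= LR * db1
    return td_error

# Toy usage with illustrative values: one transition between random sensor states
s = rng.uniform(0.0, 1.0, N_SENSORS)
s_next = rng.uniform(0.0, 1.0, N_SENSORS)
err = q_update(s, action=1, reward=1.0, next_state=s_next, done=False)
```

In the paper's setting the state would come from the simulated ultrasonic sensors and the reward would encode collision avoidance; here both are stand-ins, and a deep-RL library would normally replace this hand-rolled backpropagation.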
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
Cite this paper
Arvind, C.S., Senthilnath, J. (2020). Autonomous Vehicle for Obstacle Detection and Avoidance Using Reinforcement Learning. In: Das, K., Bansal, J., Deep, K., Nagar, A., Pathipooranam, P., Naidu, R. (eds) Soft Computing for Problem Solving. Advances in Intelligent Systems and Computing, vol 1048. Springer, Singapore. https://doi.org/10.1007/978-981-15-0035-0_5
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-0034-3
Online ISBN: 978-981-15-0035-0
eBook Packages: Intelligent Technologies and Robotics (R0)