
Autonomous Vehicle for Obstacle Detection and Avoidance Using Reinforcement Learning

  • Conference paper
Soft Computing for Problem Solving

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1048)

Abstract

Obstacle detection and avoidance during navigation is a challenging problem for autonomous vehicles. Sensors such as RGB cameras, radar, and lidar are currently used to analyze the environment around the vehicle for obstacle detection. Analyzing the environment with supervised learning techniques has proven to be expensive, since models must be trained on different obstacles for different scenarios. To overcome this difficulty, this paper applies Reinforcement Learning (RL) techniques to understand an uncertain environment and make decisions based on sensor information. A model-free, off-policy Q-learning-based RL algorithm with a multilayer perceptron neural network (MLP-NN) is trained to predict the optimal future action of the vehicle from its current state. The proposed Q-learning with MLP-NN approach is then compared against the baseline, conventional Q-learning. A simulated urban obstacle scenario is considered, with varying numbers of ultrasonic sensors used to detect obstacles. The experimental results show that Q-learning with MLP-NN using ultrasonic sensors is more accurate than conventional Q-learning with the same sensors. This demonstrates that combining Q-learning with MLP-NN improves obstacle prediction for autonomous vehicle navigation.
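The core idea the abstract describes — replacing a tabular Q-function with an MLP trained on the Q-learning target — can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' implementation: the sensor count, action set, network size, and learning rate are all assumptions, and the demo transition is a toy stand-in for the paper's simulated environment.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS, N_HIDDEN, N_ACTIONS = 5, 16, 3   # ultrasonic distances in; left/straight/right out
GAMMA, LR = 0.9, 0.01                        # discount factor and SGD step size (assumed)

# MLP-NN Q-function: one hidden ReLU layer, linear Q-value outputs
W1 = rng.normal(0, 0.1, (N_SENSORS, N_HIDDEN)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_ACTIONS)); b2 = np.zeros(N_ACTIONS)

def q_values(s):
    """Forward pass: sensor state -> (hidden activations, Q-values per action)."""
    h = np.maximum(0.0, s @ W1 + b1)
    return h, h @ W2 + b2

def train_step(s, a, r, s_next, done):
    """One Q-learning update: regress Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    global W1, b1, W2, b2
    h, q = q_values(s)
    _, q_next = q_values(s_next)
    target = r if done else r + GAMMA * np.max(q_next)
    err = q[a] - target                       # TD error on the taken action only
    # manual backprop of 0.5 * err^2 through the linear head and the ReLU layer
    dq = np.zeros(N_ACTIONS); dq[a] = err
    dW2 = np.outer(h, dq); db2 = dq
    dh = (W2 @ dq) * (h > 0)
    dW1 = np.outer(s, dh); db1 = dh
    W2 -= LR * dW2; b2 -= LR * db2
    W1 -= LR * dW1; b1 -= LR * db1
    return err

# demo: repeatedly fit one terminal transition; the TD error should shrink
s = np.ones(N_SENSORS)                        # e.g. all sensors report clear space
first = train_step(s, a=1, r=1.0, s_next=s, done=True)
for _ in range(300):
    last = train_step(s, a=1, r=1.0, s_next=s, done=True)
```

In a full agent this update would run inside an epsilon-greedy loop over the simulated environment; only the function approximator distinguishes this from the conventional tabular Q-learning baseline the paper compares against.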




Author information

Correspondence to C. S. Arvind.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Arvind, C.S., Senthilnath, J. (2020). Autonomous Vehicle for Obstacle Detection and Avoidance Using Reinforcement Learning. In: Das, K., Bansal, J., Deep, K., Nagar, A., Pathipooranam, P., Naidu, R. (eds) Soft Computing for Problem Solving. Advances in Intelligent Systems and Computing, vol 1048. Springer, Singapore. https://doi.org/10.1007/978-981-15-0035-0_5
