Towards Multi-Modal Perception-Based Navigation: A Deep Reinforcement Learning Method


Abstract:

In this letter, we present a novel navigation system for unmanned ground vehicles (UGVs) that performs local path planning with deep reinforcement learning. The navigation system decouples perception from control and takes advantage of multi-modal perception for reliable online interaction with the UGV's surrounding environment, enabling direct policy learning that generates flexible, collision-avoiding actions during navigation. By replacing raw RGB images with their semantic segmentation maps as the input and applying a multi-modal fusion scheme, our system, trained only in simulation, can handle real-world scenes containing dynamic obstacles such as vehicles and pedestrians. We also introduce modal separation learning to accelerate training and further boost performance. Extensive experiments demonstrate that our method closes the gap between simulated and real environments and outperforms state-of-the-art approaches. Please refer to https://vsislab.github.io/mmpbnv1/ for a supplementary video demonstration of UGV navigation in both simulated and real-world environments.
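The full architecture is described in the paper itself; as a rough illustration of the fusion idea summarized above, the following PyTorch sketch fuses a semantic segmentation map with a second sensing modality and maps the joint feature to continuous velocity commands. All class names, layer sizes, the number of semantic classes, and the choice of depth as the second modality are assumptions made for illustration, not the authors' actual network.

```python
import torch
import torch.nn as nn

class MultiModalFusionPolicy(nn.Module):
    """Illustrative sketch: encode a semantic segmentation map and a depth
    image separately, fuse the features, and output velocity commands.
    Layer sizes and modalities are assumptions, not the paper's network."""

    def __init__(self, num_classes: int = 8):
        super().__init__()
        # CNN encoder for the semantic segmentation map (one channel per class).
        self.seg_encoder = nn.Sequential(
            nn.Conv2d(num_classes, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # CNN encoder for the depth image (single channel), kept separate so
        # each modality learns its own representation before fusion.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion MLP and policy head producing (linear, angular) velocity.
        self.policy = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh(),  # actions normalized to [-1, 1]
        )

    def forward(self, seg_map: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Late fusion: concatenate per-modality features, then decode an action.
        fused = torch.cat(
            [self.seg_encoder(seg_map), self.depth_encoder(depth)], dim=-1
        )
        return self.policy(fused)

# Usage: one 8-class segmentation map and one depth image, both 96x96.
policy = MultiModalFusionPolicy(num_classes=8)
action = policy(torch.randn(1, 8, 96, 96), torch.randn(1, 1, 96, 96))
print(action.shape)  # torch.Size([1, 2]) -> (linear velocity, angular velocity)
```

Feeding the policy a segmentation map rather than raw RGB, as the abstract notes, keeps the input distribution similar between simulator and real camera, which is what lets a policy trained only in simulation transfer to real-world scenes.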
Published in: IEEE Robotics and Automation Letters ( Volume: 6, Issue: 3, July 2021)
Page(s): 4986 - 4993
Date of Publication: 08 March 2021
