A CNN Based Vision-Proprioception Fusion Method for Robust UGV Terrain Classification


Abstract:

The ability of ground vehicles to identify terrain types and characteristics can help provide more accurate localization and information-rich mapping solutions. Previous studies have shown that terrain types can be classified from proprioceptive sensors that monitor wheel-terrain interactions. However, most methods only work well under very strict motion restrictions, such as driving in a straight path at constant speed, making them difficult to deploy in real-world field robotic missions. To lift this restriction, this letter proposes a fast, compact, and motion-robust proprioception-based terrain classification method. The method uses common on-board UGV sensors and a 1D Convolutional Neural Network (CNN) model. Its accuracy is further improved by fusing it with a vision-based CNN that classifies terrain by its appearance. Experimental results indicate that the final fusion model is highly robust, achieving over 93% accuracy under various lighting conditions and motion maneuvers.
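The abstract describes a pipeline of a 1D CNN over proprioceptive time series, a vision CNN over terrain images, and a fusion of the two classifiers. A minimal numpy sketch of that idea is below; the 1D convolution layer, the softmax heads, and the probability-level `late_fusion` rule (a weighted average) are illustrative assumptions, not the authors' actual architecture or fusion method:

```python
import numpy as np

def conv1d_relu(x, kernels, bias):
    """Valid 1D convolution + ReLU.

    x: (in_channels, time) proprioceptive signal, e.g. IMU/wheel-odometry channels.
    kernels: (out_channels, in_channels, k) learned filters (random here).
    """
    out_ch, in_ch, k = kernels.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((out_ch, t_out))
    for o in range(out_ch):
        for t in range(t_out):
            y[o, t] = np.sum(kernels[o] * x[:, t:t + k]) + bias[o]
    return np.maximum(y, 0.0)

def softmax(z):
    """Numerically stable softmax over class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def late_fusion(p_proprio, p_vision, w=0.5):
    """Assumed fusion rule: weighted average of the two class-probability vectors."""
    p = w * p_proprio + (1.0 - w) * p_vision
    return p / p.sum()

# Toy example: 6 proprioceptive channels over a 50-sample window, 4 filters of width 5.
rng = np.random.default_rng(0)
signal = rng.normal(size=(6, 50))
feat = conv1d_relu(signal, rng.normal(size=(4, 6, 5)), np.zeros(4))

# Each branch would end in a classifier head producing terrain-class probabilities;
# here the vision branch's output is simulated with fixed logits.
p_proprio = softmax(rng.normal(size=3))
p_vision = softmax(np.array([0.1, 2.0, 0.1]))
p_fused = late_fusion(p_proprio, p_vision, w=0.5)
```

Fusing at the probability level (rather than concatenating features) keeps the two branches independently trainable, which is one common way such vision-proprioception fusion is set up; whether the paper uses this or a feature-level fusion is not stated in the abstract.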
Published in: IEEE Robotics and Automation Letters ( Volume: 6, Issue: 4, October 2021)
Page(s): 7965 - 7972
Date of Publication: 04 August 2021
