Abstract:
Visual Odometry and Simultaneous Localization and Mapping (SLAM) are widely used in autonomous driving. In traditional keypoint-based visual SLAM systems, the feature-matching accuracy of the front end plays a decisive role and becomes the bottleneck restricting positioning accuracy, especially in challenging scenarios such as viewpoint variation and highly repetitive scenes. Thus, increasing the discriminability and matchability of feature descriptors is important for improving the positioning accuracy of visual SLAM. In this paper, we propose a novel adaptive-scale triplet loss function and apply it to a triplet network to generate the adaptive-scale descriptor (ASD). Based on ASD, we design our monocular SLAM system (ASD-SLAM), a deep-learning-enhanced system built on the state-of-the-art ORB-SLAM system. The experimental results show that ASD achieves better performance on the UBC benchmark dataset; at the same time, the ASD-SLAM system also outperforms current popular visual SLAM frameworks on the KITTI Odometry Dataset.
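For readers unfamiliar with triplet-based descriptor learning, the sketch below shows a standard triplet margin loss on L2-normalized descriptors in PyTorch. The abstract does not specify how the adaptive-scale variant modifies this loss, so the fixed `margin` and the function name here are illustrative assumptions only, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet margin loss on L2-normalized descriptors.

    NOTE: baseline sketch only; the paper's adaptive-scale triplet loss
    is not described in the abstract and is not reproduced here.
    """
    # Normalize descriptors so Euclidean distances are comparable across triplets
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negative = F.normalize(negative, dim=1)

    d_pos = (anchor - positive).pow(2).sum(dim=1).sqrt()  # anchor-positive distance
    d_neg = (anchor - negative).pow(2).sum(dim=1).sqrt()  # anchor-negative distance

    # Hinge term: pull positives closer than negatives by at least `margin`
    return F.relu(d_pos - d_neg + margin).mean()
```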
Published in: 2020 IEEE Intelligent Vehicles Symposium (IV)
Date of Conference: 19 October 2020 - 13 November 2020
Date Added to IEEE Xplore: 08 January 2021