
Semantic Mapping for View-Invariant Relocalization


Abstract:

We propose a system for visual simultaneous localization and mapping (SLAM) that combines traditional local appearance-based features with semantically meaningful object landmarks to achieve both accurate local tracking and highly view-invariant object-driven relocalization. Our mapping process uses a sampling-based approach to efficiently infer the 3D pose of object landmarks from 2D bounding box object detections. These 3D landmarks then serve as a view-invariant representation, which we leverage to achieve camera relocalization even when the viewing angle changes by more than 125 degrees. This level of view-invariance cannot be attained by local appearance-based features (e.g., SIFT), since the same set of surfaces is not even visible when the viewpoint changes significantly. Our experiments show that even when existing methods fail completely for viewpoint changes of more than 70 degrees, our method continues to achieve a relocalization rate of around 90%, with a mean rotational error of around 8 degrees.
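The abstract only outlines the sampling-based step that lifts 2D bounding box detections to 3D object landmarks, so the snippet below is a minimal illustrative sketch rather than the authors' implementation. It assumes an upright cuboid object model with a known size prior, known pinhole intrinsics K, and scores each sampled pose by the IoU between the projected cuboid's 2D bounding box and the detection; all function names, priors, and parameter values are hypothetical.

```python
import numpy as np

# Pinhole camera intrinsics (illustrative values, assumed known).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_cuboid(center, yaw, size, K):
    """Project an upright cuboid landmark (center in the camera frame, yaw about
    the camera y-axis) and return its axis-aligned 2D box (u1, v1, u2, v2)."""
    l, h, w = size
    corners = np.array([[sx * l / 2, sy * h / 2, sz * w / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # yaw about y-axis
    pts = corners @ R.T + center                      # object frame -> camera frame
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]                       # perspective division
    return np.array([uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()])

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (u1, v1, u2, v2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def sample_landmark_pose(det_box, size, K, n_samples=2000, seed=0):
    """Sample candidate 3D poses (depth along the detection's viewing ray plus a
    yaw angle) and keep the hypothesis whose projection best matches the 2D box."""
    rng = np.random.default_rng(seed)
    u = 0.5 * (det_box[0] + det_box[2])
    v = 0.5 * (det_box[1] + det_box[3])
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])    # ray through the box center
    depth_min = max(size)                             # keep all corners in front of the camera
    best, best_score = None, -1.0
    for _ in range(n_samples):
        depth = rng.uniform(depth_min, 10.0)          # assumed depth prior (meters)
        yaw = rng.uniform(-np.pi, np.pi)
        center = ray / ray[2] * depth                 # point on the ray at that depth
        score = iou(project_cuboid(center, yaw, size, K), det_box)
        if score > best_score:
            best, best_score = (center, yaw), score
    return best, best_score

# Example: a 2D detection (u1, v1, u2, v2) with a roughly chair-sized prior.
detection = np.array([280.0, 200.0, 400.0, 330.0])
(center, yaw), score = sample_landmark_pose(detection, size=(0.6, 0.9, 0.6), K=K)
```

In a full SLAM pipeline, the best-scoring hypothesis would only initialize the landmark, which subsequent observations would then refine; the sketch above shows one plausible scoring choice (projected-box IoU), not the paper's exact objective.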
Date of Conference: 20-24 May 2019
Date Added to IEEE Xplore: 12 August 2019
Conference Location: Montreal, QC, Canada
