Abstract:
Localization based on onboard cameras and satellite images is a promising solution for unmanned aerial vehicles (UAVs) when the global positioning system (GPS) is unavailable. However, apart from variations in illumination and ground surface, the rotation and viewpoint differences between UAV images and satellite images are substantial due to nonperpendicular camera angles and nonnorthward orientations, making matching difficult. To overcome these challenges, this article presents a novel UAV localization method that matches oblique UAV images against orthophotographic satellite imagery. First, a lightweight SE(2)-steerable network (SeS-Net) is designed to extract rotation-equivariant dense features from images. Then, an improved peakness measurement is introduced to generate a high-response detection map, from which discriminative sparse features encoding rotation invariance are derived for matching. Finally, camera poses are estimated and refined with the proposed local-to-global matching strategy. This strategy locates the local area related to the map and removes outliers by clustering the matches with a Gaussian model, followed by an adaptive high-quality match selection (AHMS) process that mitigates the impact of local nonlinear distortion on pose estimation. The absolute visual localization (AVL) framework we establish exhibits strong generalization and is applicable to other scenes without additional training. Multiple image matching and UAV localization experiments demonstrate that the proposed method achieves high robustness to drastic changes in perspective and ground surface. Furthermore, its precision and stability in localization surpass those of both conventional and other deep learning methods.
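To illustrate the outlier-removal step described above, the following is a minimal sketch (not the authors' implementation) of clustering match offsets with a single Gaussian model and rejecting matches whose Mahalanobis distance exceeds a chi-squared threshold. The function name, the threshold value, and the use of translation offsets as the clustering feature are all assumptions for illustration.

```python
import numpy as np

def filter_matches_gaussian(uav_pts, sat_pts, chi2_thresh=5.99):
    """Keep matches whose UAV-to-satellite offset fits a single Gaussian.

    uav_pts, sat_pts: (N, 2) arrays of matched keypoint coordinates.
    chi2_thresh: 95% quantile of the chi-squared distribution with 2 dof.
    Returns a boolean inlier mask of length N.
    """
    # Translation offset of each match (N x 2).
    offsets = sat_pts - uav_pts
    mu = offsets.mean(axis=0)
    # Regularize the covariance so it stays invertible for degenerate inputs.
    cov = np.cov(offsets, rowvar=False) + 1e-6 * np.eye(2)
    inv_cov = np.linalg.inv(cov)
    d = offsets - mu
    # Squared Mahalanobis distance of each offset under the fitted Gaussian.
    m2 = np.einsum('ni,ij,nj->n', d, inv_cov, d)
    return m2 < chi2_thresh
```

In practice this would run after the local area has been located, leaving the retained inliers for the subsequent match-selection and pose-estimation stages.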
Published in: IEEE Transactions on Geoscience and Remote Sensing ( Volume: 62)