Abstract:
Due to adverse illumination conditions in space, noncooperative object perception based on multisource image fusion is crucial for on-orbit maintenance and orbital debris removal. In this article, we first propose a dual-branch multiscale feature extraction encoder that combines a Transformer block (TB) and an Inception block (IB) to extract the global and local features of visible and infrared images and establish high-dimensional semantic connections. Second, departing from traditional hand-crafted fusion strategies, we propose a learnable feature fusion module, the cross-convolution feature fusion (CCFF) module, which achieves fusion at the image feature level. Building on these components, we propose a dual-branch fusion network based on Transformer and Inception (DFTI) for space noncooperative objects, an image fusion framework built on an autoencoder architecture and trained with unsupervised learning. The fused image simultaneously retains the color and texture details and the contour energy information of space noncooperative objects. Finally, we construct a fusion dataset of infrared and visible images of space noncooperative objects (FIV-SNO) and compare DFTI with seven state-of-the-art methods. In addition, object tracking, a follow-up high-level vision task, demonstrates the effectiveness of our method. The experimental results show that, compared with other advanced methods, the entropy (EN) and average gradient (AG) of the images fused by the DFTI network increase by 0.11 and 0.06, respectively. Our method exhibits excellent performance in both quantitative measures and qualitative evaluation.
Published in: IEEE Transactions on Instrumentation and Measurement (Volume: 73)
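
The abstract describes the DFTI layout only at a high level: a dual-branch encoder (Transformer block for global features, Inception block for local features) applied to each modality, a learned CCFF module in place of a hand-crafted fusion rule, and a decoder that reconstructs the fused image. The following PyTorch sketch illustrates that layout under stated assumptions; the paper does not publish this code, and all layer sizes, module internals, and names (TransformerBlock, InceptionBlock, CCFF, DFTI) are illustrative guesses rather than the authors' implementation.

import torch
import torch.nn as nn


class TransformerBlock(nn.Module):
    """Global-feature branch: self-attention over flattened spatial tokens (assumed design)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + out)        # residual connection + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class InceptionBlock(nn.Module):
    """Local-feature branch: parallel multiscale convolutions, channel-concatenated."""
    def __init__(self, channels):
        super().__init__()
        self.b1 = nn.Conv2d(channels, channels // 2, 1)
        self.b3 = nn.Conv2d(channels, channels // 4, 3, padding=1)
        self.b5 = nn.Conv2d(channels, channels // 4, 5, padding=2)

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)


class CCFF(nn.Module):
    """Learned fusion over concatenated visible/infrared features, replacing a
    hand-crafted rule such as averaging or max selection (internals assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, f_vis, f_ir):
        return self.fuse(torch.cat([f_vis, f_ir], dim=1))


class DFTI(nn.Module):
    """Autoencoder-style fusion: shared dual-branch encoder, CCFF, conv decoder."""
    def __init__(self, channels=32):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, 3, padding=1)
        self.tb = TransformerBlock(channels)
        self.ib = InceptionBlock(channels)
        self.ccff = CCFF(2 * channels)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def encode(self, x):
        f = self.stem(x)
        return torch.cat([self.tb(f), self.ib(f)], dim=1)  # global + local features

    def forward(self, vis, ir):
        return self.decoder(self.ccff(self.encode(vis), self.encode(ir)))


# Usage: fuse a single-channel visible/infrared pair (random data for illustration).
fused = DFTI()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])

In an unsupervised setup such as the one the abstract describes, a network of this shape would typically be trained with reconstruction-style losses against the source images (e.g., intensity and gradient terms) rather than ground-truth fused images; the specific loss design here is not given by the abstract.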