Abstract:
In this paper, an Adaptive Fusion Transformer (AFT) is proposed for unsupervised pixel-level fusion of visible and infrared images. Unlike existing convolutional networks, AFT adopts a transformer to model the relationship between multi-modality images and to explore cross-modal interactions. The encoder of AFT uses a Multi-Head Self-Attention (MSA) module and a Feed-Forward (FF) network for feature extraction. Then, a Multi-Head Self-Fusion (MSF) module is designed for adaptive perceptual fusion of the features. By sequentially stacking the MSF, MSA, and FF modules, a fusion decoder is constructed that gradually locates complementary features for recovering informative images. In addition, a structure-preserving loss is defined to enhance the visual quality of the fused images. Extensive experiments are conducted on several datasets to compare the proposed AFT with 21 popular approaches. The results show that AFT achieves state-of-the-art performance in both quantitative metrics and visual perception.
Published in: IEEE Transactions on Image Processing (Volume: 32)
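The abstract describes the pipeline only at a high level. The following PyTorch sketch shows one plausible way the described pieces could fit together: an MSA + FF encoder per modality, an MSF module that fuses the two token streams, and a decoder built by stacking MSF, MSA, and FF. All dimensions, head counts, depths, and the internals of MSF are illustrative assumptions, not the authors' published configuration; patch embedding, image reconstruction, and the structure-preserving loss are omitted.

```python
# Minimal structural sketch of AFT as outlined in the abstract.
# Hyperparameters and the MSF design are assumptions for illustration.
import torch
import torch.nn as nn


class MSABlock(nn.Module):
    """Multi-Head Self-Attention (MSA) + Feed-Forward (FF), pre-norm residual."""
    def __init__(self, dim=64, heads=4, ff_mult=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, ff_mult * dim), nn.GELU(),
                                nn.Linear(ff_mult * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]
        return x + self.ff(self.norm2(x))


class MSFBlock(nn.Module):
    """Hypothetical Multi-Head Self-Fusion (MSF): self-attention over the
    concatenated visible/infrared tokens, folded back into one stream."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis, ir):
        z = torch.cat([vis, ir], dim=1)      # (B, 2N, dim): both modalities
        h = self.norm(z)
        z = z + self.attn(h, h, h)[0]        # cross-modal interaction via MSA
        n = vis.shape[1]
        return 0.5 * (z[:, :n] + z[:, n:])   # average the two modality halves


class AFTSketch(nn.Module):
    def __init__(self, dim=64, heads=4, depth=2):
        super().__init__()
        self.enc_vis = nn.ModuleList(MSABlock(dim, heads) for _ in range(depth))
        self.enc_ir = nn.ModuleList(MSABlock(dim, heads) for _ in range(depth))
        # Decoder: stacked (MSF -> MSA -> FF) stages, per the abstract.
        self.msf = nn.ModuleList(MSFBlock(dim, heads) for _ in range(depth))
        self.dec = nn.ModuleList(MSABlock(dim, heads) for _ in range(depth))

    def forward(self, vis_tokens, ir_tokens):
        for blk in self.enc_vis:
            vis_tokens = blk(vis_tokens)
        for blk in self.enc_ir:
            ir_tokens = blk(ir_tokens)
        fused = vis_tokens
        for msf, dec in zip(self.msf, self.dec):
            fused = msf(fused, ir_tokens)    # gradually locate complementary features
            fused = dec(fused)
        return fused


# Smoke test on dummy token sequences (B=1, N=16 patches, dim=64).
vis, ir = torch.randn(1, 16, 64), torch.randn(1, 16, 64)
print(AFTSketch()(vis, ir).shape)  # torch.Size([1, 16, 64])
```

Treating fusion as self-attention over the concatenated token streams is one reading of "self-fusion"; the paper may instead use learned per-token fusion weights or cross-attention between modalities.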