VIF-Net: An Unsupervised Framework for Infrared and Visible Image Fusion


Abstract:

Visible images provide abundant texture details and environmental information, while infrared images offer night-time visibility and suppress interference from highly dynamic regions; fusing these complementary features from different sensors into a single informative image is therefore a meaningful task. In this article, we propose an unsupervised end-to-end learning framework for infrared and visible image fusion. We first construct a sufficiently large benchmark training dataset from aligned visible and infrared frames, which addresses the scarcity of training data. Because labeled ground truth is unavailable, we design an unsupervised learning process driven by a robust mixed loss function that combines a modified structural similarity (M-SSIM) metric with total variation (TV), enabling the network to adaptively fuse thermal radiation and texture details while suppressing noise interference. Moreover, the model is trained end to end, which avoids hand-crafted fusion rules and reduces computational cost. Extensive experimental results demonstrate that the proposed architecture outperforms state-of-the-art methods in both subjective and objective evaluations.
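As a rough illustration of the mixed objective described in the abstract, the sketch below combines a windowed SSIM term against both source images with a total-variation penalty in PyTorch. The window size, the weight lam, and the mean-intensity-based selection between the infrared and visible SSIM maps are assumptions for illustration only, not the paper's exact M-SSIM formulation.

```python
# Minimal sketch of an M-SSIM + TV style fusion loss (assumed details, not the authors' exact code).
import torch
import torch.nn.functional as F

def ssim_map(x, y, win=11, C1=0.01**2, C2=0.03**2):
    """Per-pixel windowed SSIM between two single-channel images (N, 1, H, W)."""
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, win, stride=1, padding=pad)
    var_x = F.avg_pool2d(x * x, win, stride=1, padding=pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, stride=1, padding=pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, win, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den

def tv_loss(f):
    """Total variation of the fused image, discouraging noise and artifacts."""
    dh = torch.abs(f[:, :, 1:, :] - f[:, :, :-1, :]).mean()
    dw = torch.abs(f[:, :, :, 1:] - f[:, :, :, :-1]).mean()
    return dh + dw

def mixed_fusion_loss(fused, ir, vis, lam=0.1):
    """Mixed loss: structural similarity to the locally more salient source, plus TV."""
    ssim_ir = ssim_map(fused, ir)
    ssim_vis = ssim_map(fused, vis)
    # "Modified" SSIM here: in each window, compare against whichever source has the
    # higher local mean intensity (a stand-in for local saliency -- an assumption).
    mu_ir = F.avg_pool2d(ir, 11, stride=1, padding=5)
    mu_vis = F.avg_pool2d(vis, 11, stride=1, padding=5)
    m_ssim = torch.where(mu_ir > mu_vis, ssim_ir, ssim_vis).mean()
    return (1 - m_ssim) + lam * tv_loss(fused)
```

With a loss of this form, the fusion network can be trained directly on unlabeled infrared/visible pairs, since both terms are computed from the inputs and the fused output alone.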
Published in: IEEE Transactions on Computational Imaging (Volume: 6)
Page(s): 640 - 651
Date of Publication: 13 January 2020
