Abstract:
A single infrared or visible image of the same scene is usually insufficient to simultaneously reveal the infrared objects and the scene details. Thus, image fusion techniques play an important role in producing a single image from the images captured by infrared and visible sensors. In this paper, we propose a novel total variation (TV)-based fusion method for infrared and visible images. In our model, a weighted fidelity term is employed to fuse both the infrared objects in the infrared image and the salient scenes in the visible image. To this end, a weight estimation method is developed based on global luminance contrast-based saliency. In addition, to overcome over-fitting, two constraints are further introduced to merge more details from the visible image and to prevent luminance degradation in the fused result, respectively. Moreover, joint norms are exploited to produce a better result: the $l_{2,1,rc}$ norm provides structural group sparsity for the fidelity term, the $l_{1/2}$ norm yields better gradient sparsity for the detail-preserving term, and the $l_2$ norm is utilized for the term preventing luminance degradation. Experimental results indicate that the proposed method achieves state-of-the-art performance in both visual perception and quantitative scores compared with other methods.
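To make the composition of the model concrete, the following is a minimal sketch of how an objective combining the three terms described above might be written; the choice of anchoring images, the weight map, and the trade-off parameters are illustrative assumptions, not the paper's exact formulation.

% Hypothetical sketch of a TV-based fusion objective (all symbols are illustrative):
%   f : fused image,  u : infrared image,  v : visible image
%   W : saliency-derived weight map,  \odot : element-wise product
%   \nabla : discrete gradient operator,  \lambda_1, \lambda_2 : trade-off parameters
\min_{\mathbf{f}}\;
  \underbrace{\bigl\| \mathbf{W} \odot (\mathbf{f} - \mathbf{u}) \bigr\|_{2,1,rc}}_{\text{weighted fidelity (structural group sparsity)}}
  + \lambda_1 \underbrace{\bigl\| \nabla \mathbf{f} - \nabla \mathbf{v} \bigr\|_{1/2}}_{\text{detail preservation (gradient sparsity)}}
  + \lambda_2 \underbrace{\bigl\| \mathbf{f} - \mathbf{u} \bigr\|_{2}^{2}}_{\text{luminance degradation prevention}}

Under this reading, the saliency-derived weight map $\mathbf{W}$ steers the fidelity term toward infrared objects and salient visible scenes, while the two regularizers respectively pull the fused gradients toward the visible image and keep the fused luminance close to the source.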
Published in: IEEE Transactions on Multimedia (Volume: 24)