
Information Fusion

Volume 9, Issue 2, April 2008, Pages 156-160

A novel similarity based quality metric for image fusion

https://doi.org/10.1016/j.inffus.2006.09.001

Abstract

A novel objective quality metric for image fusion is presented. Its novelty lies in the fact that redundant regions and complementary/conflicting regions are treated differently, according to the structural similarity between the source images. Experiments show that the proposed measure is consistent with human visual evaluations and can be applied to evaluate image fusion schemes that are not performed at the same level.

Introduction

Quality assessment of different image fusion schemes has traditionally been carried out by subjective evaluations [1]. However, subjective tests are not only tedious, slow, expensive, and difficult to reproduce and verify; they also cannot be embedded into image fusion algorithms to optimize parameter settings. Hence, although subjective tests are undeniably important in characterizing fusion performance, objective image fusion performance metrics that are consistent with human visual perception are a valuable alternative. Several schemes have been proposed for the development of performance metrics. Li et al. [2] measured fusion performance as the standard deviation of the difference between the fused image and a reference image created by a simple cut-and-paste process. However, an ideal fused image is unavailable in most real-world applications. Therefore, several no-reference image fusion quality assessments have been proposed. Mutual information (MI), given by Qu et al. [3], is the most commonly used information-theoretic metric, but it cannot correctly assess the performance of image fusion schemes that are not performed at the same level [3]. Xydeas and Petrović [4] evaluated fusion performance by calculating edge information preservation values in the fused image. Recently, objective image fusion performance measures based on the structural similarity metric proposed by Wang et al. [8] have emerged [5], [6], [7].
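For concreteness, the MI metric of [3] scores a fused image f against sources a and b as MI(a, f) + MI(b, f). A minimal histogram-based sketch in Python (the function names and the 256-bin choice are our assumptions, not details from this paper):

```python
import numpy as np

def mutual_information(img1, img2, bins=256):
    """Histogram-based mutual information between two grayscale images."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img1
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img2
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def mi_fusion_metric(a, b, f):
    """Qu et al.'s MI metric: total information the fused image shares
    with the two source images."""
    return mutual_information(a, f) + mutual_information(b, f)
```

A larger value means the fused image retains more of the sources' information; as noted above, though, this score can mislead when comparing fusion schemes performed at different levels.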

The performance metric given in [5], computed pixel by pixel or region by region, is a weighted average of the similarities between the fused image and each of the source images. It is therefore not suitable for evaluating regions whose information is complementary or conflicting across the source images, where a good grayscale image fusion scheme should select one source, not a mixture of both, to compose the fused image.
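Schematically, such a weighted-average metric has the form below (this formulation follows Piella and Heijmans's well-known quality index; whether [5] uses exactly this weighting is not stated in this excerpt):

$$Q(a,b,f)=\frac{1}{|W|}\sum_{w\in W}\big[\lambda(w)\,\mathrm{SSIM}(a,f\,|\,w)+(1-\lambda(w))\,\mathrm{SSIM}(b,f\,|\,w)\big]$$

where W is the set of local windows and λ(w) ∈ [0, 1] is a local saliency weight, e.g. the relative variance of the two source windows. Because λ(w) always blends both similarities, a fused image that correctly keeps only one source in a conflicting region is penalized for differing from the other.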

In this paper, given two source images and a single fused image, we propose a novel objective image fusion performance metric in which complementary or conflicting regions are distinguished from redundant regions using the structural similarity image quality measure proposed in [8], and the two kinds of regions are treated separately. As a result, the proposed measure agrees more closely with subjective evaluations.


The structural similarity metric by Wang and Bovik

The structural similarity (SSIM) metric introduced by Wang and Bovik [8], for corresponding regions of a reference signal x and a test signal y, is defined as

$$\mathrm{SSIM}(x,y\,|\,w)=\frac{(2\bar{w}_x\bar{w}_y+C_1)(2\sigma_{w_xw_y}+C_2)}{(\bar{w}_x^{2}+\bar{w}_y^{2}+C_1)(\sigma_{w_x}^{2}+\sigma_{w_y}^{2}+C_2)}$$

which can be decomposed as

$$\mathrm{SSIM}(x,y\,|\,w)=\frac{(2\bar{w}_x\bar{w}_y+C_1)(2\sigma_{w_x}\sigma_{w_y}+C_2)(\sigma_{w_xw_y}+C_3)}{(\bar{w}_x^{2}+\bar{w}_y^{2}+C_1)(\sigma_{w_x}^{2}+\sigma_{w_y}^{2}+C_2)(\sigma_{w_x}\sigma_{w_y}+C_3)}$$

where C1, C2 and C3 are small constants, with C3 = C2/2; w_x denotes the sliding window or region under consideration in x, $\bar{w}_x$ is the mean of w_x, $\sigma_{w_x}$ is the standard deviation of w_x, and $\sigma_{w_x w_y}$ is the covariance between w_x and w_y.
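For concreteness, the window-level SSIM above can be computed directly from these statistics. In the sketch below, the default constants C1 = (0.01·255)² and C2 = (0.03·255)² follow Wang and Bovik's usual recommendation for 8-bit images and are an assumption, not a value given in this excerpt:

```python
import numpy as np

def ssim_window(wx, wy, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """SSIM between two corresponding windows of 8-bit grayscale images."""
    mx, my = wx.mean(), wy.mean()                # window means
    vx, vy = wx.var(), wy.var()                  # window variances
    cov = ((wx - mx) * (wy - my)).mean()         # covariance sigma_{wx wy}
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```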

The new image fusion quality metric

The goal of image fusion is to increase completeness by integrating complementary information into the fused image and to eliminate conflicting information. Therefore, a good fusion performance measure should correctly estimate how much information is preserved in the fused image, especially for regions that contain complementary or conflicting information in the source images.

Generally speaking, for regions where the two source images contain conflicting information, a good fusion scheme should select the content of one source rather than blend the two; a quality metric should accordingly compare the fused image against the better-matching source in such regions, while using a weighted combination of both sources in redundant regions, as sketched below.
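A compact sketch of this region-dependent scoring, where SSIM between the source windows decides whether a region is redundant (the 0.75 threshold, the 8×8 window, and the variance-based weight λ are our illustrative choices; the paper's exact parameters are in the full text):

```python
import numpy as np

def ssim_window(wx, wy, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """SSIM between two corresponding image windows."""
    mx, my = wx.mean(), wy.mean()
    vx, vy = wx.var(), wy.var()
    cov = ((wx - mx) * (wy - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def similarity_based_metric(a, b, f, win=8, thresh=0.75):
    """Score fused image f against sources a and b, window by window:
    weighted SSIM where the sources agree, best-source SSIM where they
    conflict."""
    scores = []
    for i in range(0, a.shape[0] - win + 1, win):
        for j in range(0, a.shape[1] - win + 1, win):
            wa, wb, wf = (im[i:i + win, j:j + win] for im in (a, b, f))
            s_af, s_bf = ssim_window(wa, wf), ssim_window(wb, wf)
            if ssim_window(wa, wb) >= thresh:    # redundant region
                va, vb = wa.var(), wb.var()
                lam = va / (va + vb) if va + vb > 0 else 0.5
                scores.append(lam * s_af + (1 - lam) * s_bf)
            else:                                # complementary/conflicting
                scores.append(max(s_af, s_bf))
    return float(np.mean(scores))
```

Taking the maximum in conflicting regions rewards a fused image for faithfully keeping whichever source it selected, instead of penalizing it for differing from the other.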

Experimental results

To verify the proposed approach, we carried out two experiments, in both of which four image fusion schemes were tested: the Laplacian pyramid [9], the discrete wavelet transform (DWT) [10], the ratio pyramid [11], and simple averaging. For the first three schemes, a 3-level decomposition was performed; the approximation coefficients of the two input images were averaged, and the coefficients with the larger absolute values were selected in the high-frequency subbands.
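As an illustration of this fusion rule, a minimal DWT fusion sketch using PyWavelets (the 'db1' wavelet is our assumption; the text specifies only the 3-level decomposition, the averaging of approximation coefficients, and the max-absolute selection of detail coefficients):

```python
import numpy as np
import pywt  # PyWavelets

def dwt_fuse(a, b, wavelet='db1', levels=3):
    """Fuse two same-sized grayscale images: average the approximation
    coefficients, keep the larger-magnitude detail coefficients."""
    ca = pywt.wavedec2(a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(b.astype(float), wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]              # approximation: average
    for da, db in zip(ca[1:], cb[1:]):           # details: max-abs selection
        fused.append(tuple(np.where(np.abs(xa) >= np.abs(xb), xa, xb)
                           for xa, xb in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```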

In the first experiment, 32 sets

Conclusions

In this paper, we have proposed an objective image fusion performance metric that applies different operations to different local regions according to the similarity level between the source images. The proposed method is consistent with human visual perception. In particular, our metric gives good results when evaluating fusion schemes that are not performed at the same level.

There are several areas in which our quality metric can be improved. Entropy or mutual

References (11)



This work is supported by the Research Fund for the Doctoral Program of Higher Education (No. 20030701003) and the National Natural Science Foundation of China (No. 60477038).
