Elsevier

Signal Processing

Volume 92, Issue 5, May 2012, Pages 1268-1280

A regional image fusion based on similarity characteristics

https://doi.org/10.1016/j.sigpro.2011.11.021

Abstract

In this paper, we propose an image-driven regional fusion method based on a specific region partition strategy that follows the redundant and complementary correlation of the input images. Unlike traditional regional fusion approaches, which divide one or more input images, our final region map is generated from similarity comparisons between the source images. Inspired by the success of the structural similarity index (SSIM), the similarity characteristics of the source images are represented by luminance, contrast, and structure comparisons. To generate redundant and complementary regions, we over-segment the SSIM map using the watershed transform and merge small homogeneous regions with close correlation based on the similarity components. According to the similarity concentrated in each region, region-specific fusion rules are constructed to exploit the redundant or complementary property. In our method, the redundant and complementary regions of the input images are distinguished effectively, which aids the subsequent fusion process. Experimental results demonstrate that our approach achieves superior results across different fusion applications. Compared with existing work, the proposed approach performs better in both visual presentation and objective evaluation.
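The redundant/complementary distinction described above can be illustrated with a minimal sketch: given a label map and a per-pixel similarity map, highly similar (redundant) regions are averaged, while dissimilar (complementary) regions copy the source with higher local energy. The threshold and the variance-based energy measure are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def fuse_by_region(img_a, img_b, labels, ssim_map, thresh=0.75):
    """Sketch of a region-wise fusion rule. Regions whose mean similarity
    exceeds `thresh` are treated as redundant and averaged; the rest are
    treated as complementary, and the source with higher local energy
    (here, variance) is copied. `thresh` is an illustrative value."""
    fused = np.zeros_like(img_a, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        if ssim_map[mask].mean() > thresh:
            # Redundant region: the sources agree, so average them.
            fused[mask] = 0.5 * (img_a[mask] + img_b[mask])
        else:
            # Complementary region: keep the more "active" source.
            if img_a[mask].var() >= img_b[mask].var():
                fused[mask] = img_a[mask]
            else:
                fused[mask] = img_b[mask]
    return fused
```

For example, with two 2x2 inputs split into one redundant and one complementary region, the first region is averaged and the second is copied from the higher-variance source.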

Highlights

▶ The first study to propose a special regional segmentation designed for image fusion.
▶ The generated regions reflect the correlation of the source images.
▶ Specific fusion rules are assigned to the different region types.
▶ The method has no parameters and can be used in different image fusion applications.

Introduction

A wide variety of imaging sensors are available, but it is impossible to capture an image that includes all salient features using only one sensor. To produce a more comprehensive synthetic image of a scene, fusing multiple images remains important [1], [2], [3]. The image fusion process is mainly performed at three levels of information representation: pixel, feature, and decision [4]. In pixel-level fusion, regarded as low-level fusion, fused pixels are derived from the original pixel information of the source images [5]. Fusion at the feature level is based on extracted features such as shape, edges, and textures [6]. Decision-level fusion deals with the decisions from several experts and is regarded as high-level fusion [7].

Many well-known fusion algorithms have been proposed based on basic statistical analysis or multi-scale transformation of the original pixels [3], [4], [5], [8], [9]. However, Wang et al. [10] argued that the local structural characteristics of objects in an image cannot be completely expressed by arbitrary pixels. Region-based fusion rules, which operate on actual features rather than arbitrary pixels, are therefore more effective. Several researchers incorporated regional feature information into a wavelet decomposition [11], [12] or an independent component analysis framework [13], although the inverse multiresolution transform may lose some information [14]. A recent trend in region-based methods for special image fusion tasks is to implement segmentation and fusion in the spatial domain [14], [15], [16], [17], [18]. Examples include the work in [14], which segments the averaged or source images via traditional image segmentation. Li et al. [15] used a pulse-coupled neural network based segmentation to divide all input images, and a morphological–spectral unsupervised segmentation was employed in [16]. Zhang et al. [17] implemented fusion by extracting the target region in the infrared image and replacing the corresponding position in the visual image. De et al. [18] obtained region maps for multifocus images by comparing sharpness and then simply copying the focused pixels. However, the region partitions in these fusion algorithms rely on traditional image segmentation or on the extraction of special objects based on particular image features, rather than focusing on the subsequent fusion processing. Moreover, a universal segmentation method for source images captured from different sensors has not been reported in the literature. These methods are limited to particular source images and are difficult to transplant to other fusion models.
Therefore, if the segmentation process refers to the relationship between the source images in terms of redundant and complementary information, it can be more universal across applications and more effective for the subsequent region-based fusion processing.

In this paper, a novel region-based fusion algorithm is proposed. In our scheme, the segmentation operates on the similarity characteristics of the source images (regardless of the kind of source images), and these characteristics, which reflect redundant or complementary relationships, further guide the fusion process. Specifically, our work aims at improving segmentation performance through a more comprehensive correlation of the source images to assist the fusion rule design. The method is image-driven and can be used in different fusion applications irrespective of the imaging model.

The remainder of this paper is organized as follows. In Section 2, we explain our region-based fusion method in detail, including how to select the similarity characteristics of source images, obtain the region map, and fuse different regions. Section 3 provides the simulation scenarios and evaluates the results. Finally, conclusions are drawn in Section 4.

Section snippets

Our proposed region-based image fusion method

The basic objective of our work is to aid the fusion processing with the correlation of the source images. The system diagram of the proposed method is shown in Fig. 1. It consists of three steps: correlation analysis, region generation, and fusion. It is useful to analyze the correlation of the source images first, because the purpose of image fusion is to create a composite image that preserves the complementary characteristics and removes the redundant information of the input images. Considering
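The correlation-analysis step above can be sketched as a per-window computation of the three SSIM comparisons (luminance, contrast, structure) in the spirit of [10]. This is a minimal numpy sketch, assuming float inputs in [0, 1]; the window size and stability constants are illustrative, not the paper's exact settings.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_ssim_components(a, b, win=7, C1=0.01**2, C2=0.03**2):
    """Per-window luminance (l), contrast (c), and structure (s)
    comparisons between two images, following the standard SSIM
    decomposition. Returns three maps of shape (H-win+1, W-win+1)."""
    wa = sliding_window_view(a, (win, win))
    wb = sliding_window_view(b, (win, win))
    mu_a, mu_b = wa.mean(axis=(-2, -1)), wb.mean(axis=(-2, -1))
    va, vb = wa.var(axis=(-2, -1)), wb.var(axis=(-2, -1))
    cov = (wa * wb).mean(axis=(-2, -1)) - mu_a * mu_b
    sa, sb = np.sqrt(va), np.sqrt(vb)
    C3 = C2 / 2
    l = (2 * mu_a * mu_b + C1) / (mu_a**2 + mu_b**2 + C1)  # luminance
    c = (2 * sa * sb + C2) / (va + vb + C2)                # contrast
    s = (cov + C3) / (sa * sb + C3)                        # structure
    return l, c, s
```

For identical inputs all three maps are close to 1, which is the fully redundant case; low values mark candidate complementary regions for the subsequent watershed over-segmentation and merging.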

Experiments

We compare our fusion scheme with six state-of-the-art methods from the perspectives of visual perception and objective indexes. The six fusion approaches are based on region segmentation and spatial frequency (RSSF) [14], regional mathematical morphology (RMM) [18], regional empirical mode decomposition (REMD) [26], regional discrete wavelet transform (RDWT) [3], the nonsubsampled contourlet transform (NSCT) [27], and simultaneous orthogonal matching pursuit (SOMP) [28]. The choice of these
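One objective index widely used in fusion evaluation (though the paper's exact index set is not shown in this snippet) is mutual information between a source image and the fused result; the per-source values are often summed as MI_F = MI(A, F) + MI(B, F). A minimal histogram-based sketch, with an illustrative bin count:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Mutual information MI(x, y) in bits, estimated from a joint
    histogram of the two images' intensities. `bins` is illustrative."""
    h, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = h / h.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

As a sanity check, MI of an image with itself equals its entropy: for a signal uniformly taking four distinct values, MI is 2 bits.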

Conclusion

In this paper, we have proposed a region-based image fusion method. Our work considers the similarity maps as a relationship between the source images for region partition, and these maps are also employed in the fusion processing. Segmenting the similarity map increases the dependability of the region partition, so the salient information can be adequately extracted in the fusion implementation. Experimental results confirm that our region-based fusion method achieves superior results over previous

Acknowledgments

This work was supported by the National Basic Research Program of China (Grant no. 2011CB707000), the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant no. 60921001), and the Joint Fund of the National Natural Science Foundation of China and Civil Aviation Administration of China (Grant no. 61079018). The original IR and visible images were kindly provided by Dr. Alexander Toet of the TNO Human Factors Research Institute. We would like to thank

References (32)

  • C. Pohl et al.

    Multisensor image fusion in remote sensing: concepts, methods and applications

    International Journal of Remote Sensing

    (1998)
  • E. Lallier, M. Farooq, A real time pixel-level based image fusion via adaptive weight averaging, in: Proceedings of the...
  • T. Zaveri, M. Zaveri, V. Shah, N. Patel, A novel region based multifocus image fusion method, in: 2009 International...
  • B. Jeon et al.

    Decision fusion approach for multitemporal classification

    IEEE Transactions on Geoscience and Remote Sensing

    (1999)
  • Y. Yang, C.Z. Han, X. Kang, D.Q. Han, An overview on pixel-level image fusion in remote sensing, in: Proceedings of the...
  • Z. Wang et al.

    Image quality assessment: from error visibility to structural similarity

    IEEE Transactions on Image Processing

    (2004)