A regional image fusion based on similarity characteristics
Highlights
▶ The first study to propose a special regional segmentation for image fusion. ▶ The segmented regions reflect the correlation of the source images. ▶ Specific fusion rules are assigned to different regions. ▶ The method is parameter-free and can be used in different image fusion applications.
Introduction
A wide variety of imaging sensors are available, but it is impossible to capture an image that includes all salient features using only one sensor. To produce a more comprehensive synthetic image of a scene, fusing multiple images remains important [1], [2], [3]. The image fusion process is mainly performed at three levels of information representation: pixel, feature, and decision [4]. In pixel-level fusion, the fused pixels are derived directly from the original pixel information of the source images [5]; this is viewed as low-level fusion. Fusion at the feature level is based on extracted features such as shapes, edges, and textures [6]. Decision-level fusion deals with the decisions from several experts and is correspondingly regarded as high-level fusion [7].
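As a minimal illustration of the pixel-level fusion described above (not a method from this paper), the simplest rule derives each fused pixel directly from the corresponding source pixels by weighted averaging:

```python
import numpy as np

def pixel_level_fuse(img_a, img_b, w=0.5):
    """Simplest pixel-level rule: fuse two registered grayscale
    images by weighted averaging of corresponding pixels."""
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)

# Two toy 2x2 "source images"
a = np.array([[0.0, 100.0], [200.0, 50.0]])
b = np.array([[100.0, 0.0], [0.0, 150.0]])
fused = pixel_level_fuse(a, b)  # each fused pixel is the mean of a and b
```

Such averaging preserves all inputs equally but tends to reduce contrast wherever the sources disagree, which is one motivation for the more selective region-based rules discussed below.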
Many well-known fusion algorithms have been proposed based on basic statistical analysis or multi-scale transformation of the original pixels [3], [4], [5], [8], [9]. However, Wang et al. [10] argued that the local structural characteristics of objects in an image cannot be completely expressed by arbitrary pixels. Therefore, region-based fusion rules, which operate on actual features rather than arbitrary pixels, are more effective. Several researchers incorporated regional feature information into wavelet decomposition [11], [12] or an independent component analysis framework [13], although the inverse multiresolution transform may lose some information [14]. A recent trend in region-based methods for certain image fusion tasks is to implement the segmentation and fusion in the spatial domain [14], [15], [16], [17], [18]. Examples include the work in [14], which segments the averaged or source images via traditional image segmentation. Li et al. [15] used pulse-coupled neural network based segmentation to divide all input images, and morphological–spectral unsupervised segmentation was employed in [16]. Zhang et al. implemented fusion by extracting the target region in the infrared image and replacing the corresponding position in the visual image [17]. De et al. [18] obtained region maps for multifocus images by comparing definition measures and then simply copying the focused pixels. However, the region partitions in these fusion algorithms rely on traditional image segmentation or special object extraction based on particular image features, rather than being designed for the subsequent fusion processing. Moreover, a universal segmentation method for source images captured from different sensors has not been reported in the literature. These methods are limited to particular source images and are difficult to transplant to other fusion models.
Therefore, if the segmentation process takes into account the relationship between the source images in terms of redundant and complementary information, it can be more universal across applications and more effective for the subsequent region-based fusion processing.
In this paper, a novel region-based fusion algorithm is proposed. In our scheme, the segmentation operates on the similarity characteristics of the source images (regardless of the kind of source images), and these characteristics, which reflect redundant or complementary relationships, further guide the fusion process. Specifically, our work aims at improving segmentation performance by exploiting a more comprehensive correlation of the source images to assist the design of the fusion rules. The method is image-driven and can be used in different fusion applications irrespective of the imaging model.
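To make the idea of segmenting on similarity characteristics concrete, the sketch below computes a per-pixel similarity map via normalized local correlation and thresholds it into redundant and complementary regions. This is an illustrative stand-in, assuming local correlation as the similarity measure; the paper's actual similarity characteristics and region-merging steps may differ.

```python
import numpy as np

def local_similarity_map(img_a, img_b, win=3):
    """Per-pixel similarity of two registered images: normalized
    correlation over a win x win neighborhood, in [-1, 1].
    Constant (zero-variance) patches are treated as fully similar."""
    pad = win // 2
    a = np.pad(img_a.astype(np.float64), pad, mode="reflect")
    b = np.pad(img_b.astype(np.float64), pad, mode="reflect")
    h, w = img_a.shape
    sim = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            pa = a[i:i + win, j:j + win].ravel()
            pb = b[i:i + win, j:j + win].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.sqrt((pa @ pa) * (pb @ pb))
            sim[i, j] = (pa @ pb) / denom if denom > 0 else 1.0
    return sim

def partition_regions(sim, thr=0.8):
    """Label each pixel redundant (True: sources agree locally)
    or complementary (False: sources carry different detail)."""
    return sim >= thr
```

The threshold `thr` is a hypothetical knob for this sketch only; the paper itself advertises a parameter-free partition.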
The remainder of this paper is organized as follows. In Section 2, we explain our region-based fusion method in detail, including how to select the similarity characteristics of source images, obtain the region map, and fuse different regions. Section 3 provides the simulation scenarios and evaluates the results. Finally, conclusions are drawn in Section 4.
Section snippets
Our proposed region-based image fusion method
The basic objective of our work is to aid the fusion processing with the correlation of the source images. The system diagram of the proposed method is shown in Fig. 1. It consists of three steps: correlation analysis, region generation, and fusion. It is useful to analyze the correlation of the source images first, because the purpose of image fusion is to create a composite image that preserves the complementary characteristics and removes the redundant information of the input images. Considering
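The final fusion step of the three-step pipeline above can be sketched as a toy region-wise rule: average the sources where they are redundant, and keep the more salient source where they are complementary. The saliency measure here (absolute deviation from each image's own mean) is an assumed simplification, not the paper's actual rule.

```python
import numpy as np

def region_based_fuse(img_a, img_b, redundant_mask):
    """Toy region-wise fusion: average inside redundant regions;
    inside complementary regions keep the pixel from the source
    with the larger local energy, approximated here by absolute
    deviation from that source's global mean."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    saliency_a = np.abs(a - a.mean())
    saliency_b = np.abs(b - b.mean())
    complementary = np.where(saliency_a >= saliency_b, a, b)
    return np.where(redundant_mask, 0.5 * (a + b), complementary)
```

The design choice this illustrates is the key point of region-based fusion: the rule applied to a pixel depends on which kind of region it belongs to, not on the pixel in isolation.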
Experiments
We test our fusion scheme against six state-of-the-art methods from the perspectives of visual perception and objective indexes. The six fusion approaches are based on region segmentation and spatial frequency (RSSF) [14], regional mathematical morphology (RMM) [18], regional empirical mode decomposition (REMD) [26], regional discrete wavelet transform (RDWT) [3], the nonsubsampled contourlet transform (NSCT) [27], and simultaneous orthogonal matching pursuit (SOMP) [28]. The choice of these
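As one example of an objective index used in such comparisons, spatial frequency (the measure behind RSSF) can be computed as below. Note this sketch takes the root-mean-square over the first-difference arrays, whereas the classical definition normalizes by the full image size; it is an illustration, not the paper's exact evaluation protocol.

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency index: SF = sqrt(RF^2 + CF^2), where RF and
    CF are the RMS of the horizontal and vertical first differences.
    Higher SF generally indicates a more detailed fused image."""
    f = img.astype(np.float64)
    rf2 = np.mean(np.diff(f, axis=1) ** 2)  # row (horizontal) activity
    cf2 = np.mean(np.diff(f, axis=0) ** 2)  # column (vertical) activity
    return float(np.sqrt(rf2 + cf2))
```

A flat image scores zero, and images with stronger edge and texture content score higher, which is why SF is a common proxy for fused-image activity.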
Conclusion
In this paper, we have proposed a region-based image fusion method. Our work considers the similarity maps as a relationship between the source images for region partition, and these maps are also employed in the fusion processing. Segmenting the similarity map increases the dependability of the region partition, so the salient information can be adequately extracted during fusion implementation. Experimental results confirm that our region-based fusion method achieves superior results over previous
Acknowledgments
This work was supported by the National Basic Research Program of China (Grant no. 2011CB707000), the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant no. 60921001), and the Joint Fund of the National Natural Science Foundation of China and the Civil Aviation Administration of China (Grant no. 61079018). The original IR and visible images were kindly provided by Dr. Alexander Toet of the TNO Human Factors Research Institute. We would like to thank
References (32)
- et al., A wavelet-based image fusion tutorial, Pattern Recognition (2004)
- et al., A simple and efficient algorithm for multifocus image fusion using morphological wavelets, Signal Processing (2006)
- A general framework for multiresolution image fusion: from pixels to regions, Information Fusion (2003)
- et al., Pixel-based and region-based image fusion schemes using ICA bases, Information Fusion (2007)
- et al., Multifocus image fusion using region segmentation and spatial frequency, Image and Vision Computing (2008)
- et al., A region-based multi-sensor image fusion scheme using pulse-coupled neural network, Pattern Recognition Letters (2006)
- et al., Enhancing effective depth-of-field by image fusion using mathematical morphology, Image and Vision Computing (2006)
- et al., Multi-focus image fusion using the nonsubsampled contourlet transform, Signal Processing (2009)
- et al., A data-fusion scheme for quantitative image analysis by using locally weighted regression and Dempster–Shafer theory, IEEE Transactions on Instrumentation and Measurement (2008)
- et al., Image fusion: advances in the state of the art, Information Fusion (2008)
- Multisensor image fusion in remote sensing: concepts, methods and applications, International Journal of Remote Sensing
- Decision fusion approach for multitemporal classification, IEEE Transactions on Geoscience and Remote Sensing
- Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing
Cited by (41)
- Sparse intrinsic decomposition and applications, Signal Processing: Image Communication (2021)
- A survey on region based image fusion methods, Information Fusion (2019)
- Multi-focus image fusion based on sparse decomposition and background detection, Digital Signal Processing: A Review Journal (2016)
- Multi-focus image fusion algorithm based on focused region extraction, Neurocomputing (2016)
- Color fusion method with a combination of steerable pyramid color transfer and ICA, Proceedings of SPIE – The International Society for Optical Engineering (2023)
- An Investigation on Multimodal Brain Image Fusion in the Time–Frequency Domain using Wavelet Transforms, IETE Journal of Research (2023)