Abstract
The key problem in multi-modal image fusion is that the complementary features of the source images are easily lost during fusion. In this paper, a fusion algorithm is proposed that combines hybrid ℓ0ℓ1 layer decomposition with multi-directional filter banks to extract edge, contour, and detail features from images and to fuse the complementary features well. First, the hybrid ℓ0ℓ1 layer decomposition effectively suppresses halo artifacts and over-enhancement when the detail features are removed, so the low-frequency and detail features of the source images are cleanly separated. Then, visual saliency detection based on ant colony optimisation and local phase coherence is introduced to guide the fusion of the base-layer images. Next, the detail image is decomposed with multi-directional filter banks to extract detail features along different directions, and a fusion rule based on multi-directional gradients and principal component analysis is applied to the detail sub-band images to prevent the loss of detail features. Finally, the fused image is reconstructed by the inverse transformation of the hybrid ℓ0ℓ1 layer decomposition. Experimental results demonstrate that the spatial frequency, standard deviation, edge strength, difference similarity index, and difference structural similarity increase significantly, so the proposed fusion algorithm effectively preserves the complementary information between images and improves the quality of the fused images.
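The pipeline described above — split each source into a base layer and a detail layer, fuse the two layers with separate rules, then reconstruct by the inverse of the split — can be illustrated with a minimal sketch. The paper's hybrid ℓ0ℓ1 decomposition, saliency-guided base rule, and gradient/PCA detail rule are all replaced here by deliberately simple stand-ins (a separable Gaussian blur, an average, and a max-absolute choice); the function names `smooth` and `fuse` are illustrative, not from the paper.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1D normalised Gaussian kernel."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(img, sigma=2.0):
    """Separable Gaussian blur with edge-replicated borders.
    A crude stand-in for the hybrid l0-l1 layer optimisation."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    pad = np.pad(img, r, mode="edge")
    # row-wise then column-wise 1D convolution ('valid' undoes the padding)
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def fuse(img_a, img_b, sigma=2.0):
    """Two-layer fusion skeleton: decompose, fuse per layer, reconstruct."""
    base_a, base_b = smooth(img_a, sigma), smooth(img_b, sigma)
    det_a, det_b = img_a - base_a, img_b - base_b   # detail = residual
    # base-layer rule: plain average (stand-in for saliency-guided weights)
    fused_base = 0.5 * (base_a + base_b)
    # detail-layer rule: keep the stronger detail (stand-in for gradient/PCA)
    fused_det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    # the "inverse transform" of a two-layer split is simply the sum
    return fused_base + fused_det
```

Because the decomposition is exact (base plus detail reproduces the input), fusing an image with itself returns the image unchanged, which is a quick sanity check for any implementation of this scheme.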
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (Grant No. 61672472), the Fund of the Key Laboratory of Dynamic Measurement Technology at North University of China, the Programs for Innovative Research Teams of North University of China, the Tianjin Natural Science Foundation (16JCYBJC42000), the Doctoral Program of Nanyang Normal University, and the 2020 open fund project of the Key Laboratory of Marine Environmental Information Technology (research on remote-sensing image sea-ice classification based on generative adversarial networks).
Ethics declarations
Conflict of interest
No conflict of interest exists in the submission of this manuscript, and the manuscript has been approved by all authors for publication. On behalf of my co-authors, I declare that the work described is original research that has not been published previously and is not under consideration for publication elsewhere, in whole or in part.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Zhang, L., Zhang, Y., Yang, F. et al. Multi-modal image fusion with the hybrid ℓ0ℓ1 layer decomposing and multi-directional filter banks. Multimed Tools Appl 81, 21369–21384 (2022). https://doi.org/10.1007/s11042-022-12749-8