Multi-modal image fusion with the hybrid ℓ0ℓ1 layer decomposing and multi-directional filter banks


Abstract

The key problem in multi-modal image fusion is that the complementary features of the source images are easily lost during fusion. In this paper, a fusion algorithm is proposed that combines the hybrid ℓ0ℓ1 layer decomposition with multi-directional filter banks to extract edge, contour, and detail features from the source images and fuse the complementary features well. First, the hybrid ℓ0ℓ1 layer decomposition effectively avoids halo artifacts and over-enhancement while the detail features are removed, so the low-frequency and detail features of the source images are cleanly separated. Then, visual saliency detection based on ant colony optimisation and local phase coherence is introduced to guide the fusion of the base-layer images. Next, the detail image is decomposed by the multi-directional filter banks to extract detail features in different directions, and a fusion rule based on the multi-directional gradient and principal component analysis is adopted for the detail sub-band images to prevent the loss of detail features. Finally, the fused image is reconstructed by the inverse transform of the hybrid ℓ0ℓ1 layer decomposition. Experimental results demonstrate that the spatial frequency, the standard deviation, the edge strength, the difference similarity index, and the difference structural similarity increase significantly, so the proposed fusion algorithm can effectively preserve the complementary information between images and improve the quality of the fused images.
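To make the four-stage pipeline in the abstract concrete (decompose into base and detail layers, fuse base layers by saliency, fuse detail layers by directional gradient energy, reconstruct), the following is a minimal sketch, not the authors' implementation. An edge-preserving bilateral filter stands in for the hybrid ℓ0ℓ1 layer decomposition, a local-contrast map stands in for the ant-colony-optimisation/local-phase-coherence saliency detector, and Sobel gradient energy stands in for the multi-directional filter bank and PCA-based detail rule; all function names, parameters, and file names are hypothetical.

```python
# Simplified two-image fusion sketch following the base/detail pipeline
# described in the abstract (stand-in components only).
import cv2
import numpy as np

def decompose(img, sigma_s=15, sigma_r=25):
    """Split an image into a smooth base layer and a detail layer.
    Stand-in for the hybrid l0-l1 layer decomposition."""
    base = cv2.bilateralFilter(img.astype(np.float32), d=-1,
                               sigmaColor=sigma_r, sigmaSpace=sigma_s)
    return base, img.astype(np.float32) - base

def saliency(img):
    """Crude local-contrast saliency proxy, standing in for the
    ant-colony-optimisation / local-phase-coherence detector."""
    img = img.astype(np.float32)
    s = np.abs(img - cv2.GaussianBlur(img, (9, 9), 0))
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_base(b1, b2, s1, s2):
    """Saliency-weighted average of the two base layers."""
    w = s1 / (s1 + s2 + 1e-12)
    return w * b1 + (1.0 - w) * b2

def fuse_detail(d1, d2):
    """Pick detail coefficients by directional gradient energy
    (Sobel x/y responses as a two-direction stand-in)."""
    def energy(d):
        gx = cv2.Sobel(d, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(d, cv2.CV_32F, 0, 1)
        return cv2.GaussianBlur(gx * gx + gy * gy, (5, 5), 0)
    return np.where(energy(d1) >= energy(d2), d1, d2)

def fuse(img1, img2):
    """Decompose both inputs, fuse layers separately, and reconstruct."""
    b1, d1 = decompose(img1)
    b2, d2 = decompose(img2)
    base = fuse_base(b1, b2, saliency(img1), saliency(img2))
    return np.clip(base + fuse_detail(d1, d2), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Hypothetical file names; inputs must be registered and equally sized.
    a = cv2.imread("mri.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("ct.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("fused.png", fuse(a, b))
```

The design choice mirrors the abstract: complementary low-frequency content is blended smoothly by saliency weights, while detail coefficients are selected competitively so strong edges from either source survive.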




Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant No. 61672472), the fund of the Key Laboratory of Dynamic Measurement Technology at North University of China, the programs for innovative research teams of North University of China, the Tianjin Natural Science Foundation (16JCYBJC42000), the Doctoral Program of Nanyang Normal University, and the 2020 open fund project of the Key Laboratory of Marine Environmental Information Technology (research on remote-sensing image sea-ice classification based on generative adversarial networks).

Author information


Corresponding author

Correspondence to Lei Zhang.

Ethics declarations

Conflict of interest

No conflict of interest exists in the submission of this manuscript, and the manuscript has been approved by all authors for publication. I would like to declare on behalf of my co-authors that the work described is original research that has not been published previously and is not under consideration for publication elsewhere, in whole or in part. All the authors listed have approved the enclosed manuscript.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, L., Zhang, Y., Yang, F. et al. Multi-modal image fusion with the hybrid ℓ0ℓ1 layer decomposing and multi-directional filter banks. Multimed Tools Appl 81, 21369–21384 (2022). https://doi.org/10.1007/s11042-022-12749-8


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11042-022-12749-8

