Fusion of near-infrared and visible images based on saliency-map-guided multi-scale transformation decomposition

Published in Multimedia Tools and Applications

Abstract

In this paper, we propose a near-infrared (NIR) and visible (VIS) image fusion method based on saliency-map-guided multi-scale transform decomposition (SMG-MST) to address the problem of color distortion. Although existing NIR and VIS image fusion methods can enhance the texture information of the fused image, they cannot suppress the light scattered from objects in the scene, which causes color distortion in the fused result. Because the color-distorted regions are usually highly salient, a saliency map is a natural tool for this problem. In the low-frequency part of the decomposition, we introduce the visible image weighted by its saliency map, which attenuates the excess scattered light from objects in the image. In addition, the local entropy of the NIR image is used to guide the visible image, so the result contains more detail. Both qualitative and quantitative experiments demonstrate the effectiveness of our algorithm, and a comparison of running times shows the high efficiency of our method.
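To make the pipeline sketched in the abstract concrete, the Python fragment below fuses a registered grayscale VIS/NIR pair with a Laplacian pyramid: the low-frequency base is blended under a saliency map of the visible image, and the band-pass levels are weighted by the local entropy of the NIR image. This is a minimal illustrative sketch, not the authors' implementation: the spectral-residual saliency detector, the entropy window, and the linear blending weights are assumptions standing in for the paper's SMG-MST details, and the color-handling step that motivates the method is omitted.

```python
# Minimal grayscale sketch of a saliency-guided multi-scale fusion.
# Assumptions (not from the paper): Laplacian pyramid as the multi-scale
# transform, OpenCV-contrib spectral-residual saliency as the saliency map,
# scikit-image rank entropy as the local-entropy measure.
import cv2                               # needs opencv-contrib-python for cv2.saliency
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def laplacian_pyramid(img, levels=4):
    """Band-pass levels plus a low-frequency base."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    return lp, gp[-1]

def reconstruct(lp, base):
    img = base
    for band in reversed(lp):
        img = cv2.pyrUp(img, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(img, 0, 255).astype(np.uint8)

def entropy_weight(nir, vis, size):
    """NIR-vs-VIS weight from local patch entropy; more NIR where NIR is texture-rich."""
    en_n = entropy(cv2.resize(nir, size), disk(5))
    en_v = entropy(cv2.resize(vis, size), disk(5))
    return (en_n / (en_n + en_v + 1e-6)).astype(np.float32)

def fuse(vis_gray, nir_gray, levels=4):
    # vis_gray and nir_gray: registered uint8 images of equal size.
    # Saliency of the visible image marks the bright, distortion-prone regions.
    _, sal = cv2.saliency.StaticSaliencySpectralResidual_create().computeSaliency(vis_gray)

    lp_v, base_v = laplacian_pyramid(vis_gray, levels)
    lp_n, base_n = laplacian_pyramid(nir_gray, levels)

    # Low-frequency fusion: favor the VIS base where saliency is high, which
    # damps the scattered light the NIR channel would otherwise inject.
    w = cv2.resize(sal, (base_v.shape[1], base_v.shape[0])).astype(np.float32)
    base_f = w * base_v + (1.0 - w) * base_n

    # High-frequency fusion: NIR local entropy guides the detail levels.
    lp_f = []
    for bv, bn in zip(lp_v, lp_n):
        wn = entropy_weight(nir_gray, vis_gray, (bv.shape[1], bv.shape[0]))
        lp_f.append(wn * bn + (1.0 - wn) * bv)

    return reconstruct(lp_f, base_f)

# Usage: fused = fuse(cv2.imread("vis.png", 0), cv2.imread("nir.png", 0))
```

The saliency-weighted base keeps bright, salient regions anchored to the visible image, which is the mechanism the abstract credits with suppressing scattered light; the entropy weighting on the band-pass levels is how NIR detail reaches the result.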

Data Availability

All public images in the manuscript can be found at: https://www.epfl.ch/labs/ivrl/research/downloads/rgb-nir-scene-dataset/.

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 62073304, 41977242, and 61973283.

Author information

Corresponding author

Correspondence to Chen Jun.

Ethics declarations

Consent for Publication

The work described here has not been published before, and its publication has been approved by the responsible authorities at the institution where the work was carried out.

Competing interests

The authors declare that there are no competing interests regarding the publication of this article.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Jun, C., Lei, C., Wei, L. et al. Fusion of near-infrared and visible images based on saliency-map-guided multi-scale transformation decomposition. Multimed Tools Appl 82, 34631–34651 (2023). https://doi.org/10.1007/s11042-023-14709-2
