
An improved hybrid multiscale fusion algorithm based on NSST for infrared–visible images

Original article · The Visual Computer

Abstract

The key to improving the fusion quality of infrared–visible images is to effectively extract and fuse complementary information, such as bright–dark information and salient details. To this end, an improved hybrid multiscale fusion algorithm based on the non-subsampled shearlet transform (NSST) is proposed. First, the support value transform (SVT) replaces the non-subsampled pyramid as the frequency separator, decomposing an image into a set of high-frequency support value images and one low-frequency approximate background. The support value images mainly carry the salient details of the source image, and the shearlet transform of NSST is retained to further extract salient edges from them. Second, a morphological multiscale top–bottom hat decomposition is constructed to extract the bright–dark details from the low-frequency approximate background. Finally, the extracted information is combined under different fusion rules, and the fused image is reconstructed by the corresponding inverse transforms. Experimental results show that the proposed algorithm has clear advantages over state-of-the-art algorithms in retaining salient details and improving image contrast.
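As a concrete illustration of the low-frequency branch, the sketch below shows one plausible implementation of a morphological multiscale top–bottom hat decomposition with a max-based bright/dark fusion rule, assuming OpenCV and NumPy. The scale count, structuring-element shape, and combination rules are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of a morphological multiscale top-bottom hat
# decomposition and fusion rule; parameters are illustrative,
# not the paper's exact configuration.
import cv2
import numpy as np

def multiscale_top_bottom_hat(img, num_scales=4, base_size=3, step=2):
    """Split a low-frequency image into per-scale bright (top-hat style)
    and dark (bottom-hat style) detail maps plus a smooth residual."""
    img = img.astype(np.float32)
    bright, dark = [], []
    prev_open, prev_close = img, img
    for i in range(num_scales):
        size = base_size + i * step  # structuring element grows with scale
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
        opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, se)
        closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, se)
        bright.append(prev_open - opened)   # bright structures new at this scale
        dark.append(closed - prev_close)    # dark structures new at this scale
        prev_open, prev_close = opened, closed
    residual = 0.5 * (prev_open + prev_close)  # approximate background
    return bright, dark, residual

def fuse_low_frequency(lf_ir, lf_vis, num_scales=4):
    """Fuse two low-frequency backgrounds: keep the stronger bright and
    dark detail at each scale, average the residuals."""
    b1, d1, r1 = multiscale_top_bottom_hat(lf_ir, num_scales)
    b2, d2, r2 = multiscale_top_bottom_hat(lf_vis, num_scales)
    fused = 0.5 * (r1 + r2)
    for i in range(num_scales):
        fused += np.maximum(b1[i], b2[i])  # add the stronger bright detail
        fused -= np.maximum(d1[i], d2[i])  # subtract the stronger dark detail
    return np.clip(fused, 0, 255).astype(np.uint8)
```

The telescoping differences of successive openings (closings) isolate the bright (dark) structures that appear between consecutive scales, so summing the kept details back onto the averaged residual approximately reconstructs a fused background with enhanced bright–dark contrast.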


Data availability

The experimental images in Fig. 5 can be obtained from the public dataset "Toet A. TNO Image Fusion Dataset," https://figshare.com/articles/TN_Image_Fusion_Dataset/1008029. In addition, the experimental fusion images will be made available by the corresponding author upon reasonable request, for academic use and within the limitations of the provided informed consent.


Acknowledgements

We sincerely thank the reviewers and editors for carefully checking our manuscript and providing many helpful suggestions. This work was supported by the Natural Science Research Project of Anhui Educational Committee (No. 2022AH050801), the University-Level Key Projects of Anhui University of Science and Technology (No. QNZD2021-02), the Anhui Provincial Natural Science Foundation (No. 2208085ME128), the Scientific Research Foundation for High-Level Talents of Anhui University of Science and Technology (No. 13210679), and the Huainan Science and Technology Planning Project (No. 2021005).

Author information

Corresponding author

Correspondence to Peng Hu.

Ethics declarations

Conflict of interest

We declare that we have no commercial or associative interest that represents a conflict of interest in connection with the submitted work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hu, P., Wang, C., Li, D. et al. An improved hybrid multiscale fusion algorithm based on NSST for infrared–visible images. Vis Comput 40, 1245–1259 (2024). https://doi.org/10.1007/s00371-023-02844-8

