
An infrared and visible image fusion algorithm based on ResNet-152

  • 1212: Deep Learning Techniques for Infrared Image/Video Understanding
  • Published in Multimedia Tools and Applications

Abstract

The fusion of infrared and visible images yields a single image that combines hidden thermal targets with rich visible detail. To improve the detail of the fused image while reducing artifacts and noise, an infrared and visible image fusion algorithm based on ResNet-152 is proposed. First, each source image is decomposed into a low-frequency part and a high-frequency part, and the low-frequency parts are fused by an average weighting strategy. Second, multi-layer features are extracted from the high-frequency parts using the ResNet-152 network; L1 regularization, a convolution operation, bilinear-interpolation upsampling, and a maximum-selection strategy are then applied to the feature layers to obtain the maximum weight layer. The new high-frequency part is obtained by multiplying the maximum weight layer with the high-frequency parts. Finally, the fused image is reconstructed from the low-frequency and high-frequency parts. Experiments show that the proposed method retains the salient features of the source images and recovers more texture detail, while effectively reducing artifacts and noise. Objective evaluation and visual observation agree, and both are superior to the comparison algorithms.
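The decompose-weight-reconstruct pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: a smoothed absolute-value map of each high-frequency band stands in for the L1 norm of ResNet-152 feature maps, the network, upsampling, and soft weighting are omitted, and `box_blur`, `fuse`, and the kernel sizes are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=7):
    """Mean filter via an integral image; serves as the low-pass decomposition."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    ii = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    return (ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]) / (k * k)

def fuse(ir, vis, k=7):
    """Fuse one infrared and one visible image (2-D float arrays, same shape)."""
    # 1. Decompose each source into low- and high-frequency parts.
    low_ir, low_vis = box_blur(ir, k), box_blur(vis, k)
    high_ir, high_vis = ir - low_ir, vis - low_vis
    # 2. Low-frequency parts: average weighting strategy.
    low_fused = 0.5 * (low_ir + low_vis)
    # 3. Activity maps: a smoothed L1 (absolute-value) map of each
    #    high-frequency part, standing in for the L1 norm of deep features.
    act_ir = box_blur(np.abs(high_ir), k=3)
    act_vis = box_blur(np.abs(high_vis), k=3)
    # 4. Maximum-selection strategy: keep the high-frequency detail from
    #    whichever source is locally more active.
    high_fused = np.where(act_ir >= act_vis, high_ir, high_vis)
    # 5. Reconstruct the fused image from the two fused bands.
    return low_fused + high_fused
```

Because the decomposition is exact (`low + high == source`), fusing an image with itself returns the image unchanged, which is a quick sanity check on any implementation of this scheme.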


Data availability

The code and the test vector map data associated with this paper can be found at https://github.com/diylife/imagefusion_deeplearning.git.


Acknowledgements

This work was funded by the Natural Science Foundation Committee, China (Nos. 41761080 and 41930101) and the Industrial Support and Guidance Project of Gansu Colleges and Universities (No. 2019C-04).

Author information


Contributions

LZ conceived, designed, and also wrote the manuscript; HL performed the experiments; RZ supervised the study; PD offered helpful suggestions and reviewed the manuscript. RZ and PD analyzed and evaluated the results.

Corresponding author

Correspondence to Liming Zhang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, L., Li, H., Zhu, R. et al. An infrared and visible image fusion algorithm based on ResNet-152. Multimed Tools Appl 81, 9277–9287 (2022). https://doi.org/10.1007/s11042-021-11549-w

