Infrared and Visible Image Fusion: A Region-Based Deep Learning Method

  • Conference paper
  • First Online:
Intelligent Robotics and Applications (ICIRA 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11744)

Abstract

Infrared and visible image fusion plays an important role in robot perception. The key to fusion is extracting useful information from the source images by appropriate methods. In this paper, we propose a deep learning method for infrared and visible image fusion based on region segmentation. First, the source infrared image is segmented into a foreground part and a background part; then we build an infrared and visible image fusion network on the basis of the neural style transfer algorithm. We propose a foreground loss and a background loss to control the fusion of the two parts respectively, and finally the fused image is reconstructed by combining the two parts. The experimental results show that, compared with other state-of-the-art methods, our method retains both the saliency information of the target and the detailed texture information of the background.
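The region-based scheme the abstract describes can be illustrated with a minimal sketch. This is not the paper's network: it replaces the style-transfer fusion with a simple per-region weighted blend, and the foreground/background losses with mean-squared-error terms against the infrared and visible images respectively. The function names, the mixing weight `alpha`, and the MSE choice are all illustrative assumptions.

```python
import numpy as np

def fuse_by_region(ir, vis, fg_mask, alpha=0.8):
    """Region-based fusion sketch: the foreground favors the infrared
    image (salient targets) while the background favors the visible
    image (texture detail). `alpha` is a hypothetical mixing weight."""
    fg_mask = fg_mask.astype(float)
    foreground = alpha * ir + (1.0 - alpha) * vis   # target saliency from IR
    background = (1.0 - alpha) * ir + alpha * vis   # texture detail from visible
    # Recombine the two parts via the segmentation mask.
    return fg_mask * foreground + (1.0 - fg_mask) * background

def region_losses(fused, ir, vis, fg_mask):
    """Stand-ins for the paper's two losses: the foreground loss compares
    the fused foreground region with the infrared image, the background
    loss compares the fused background region with the visible image."""
    fg = fg_mask.astype(bool)
    fg_loss = float(np.mean((fused[fg] - ir[fg]) ** 2)) if fg.any() else 0.0
    bg_loss = float(np.mean((fused[~fg] - vis[~fg]) ** 2)) if (~fg).any() else 0.0
    return fg_loss, bg_loss
```

In the paper the two losses drive an optimization over the fused image; here they are only evaluated, to show how the segmentation mask routes each region to a different reference image.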



Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grants 61573097 and 91748106, in part by the Key Laboratory of Integrated Automation of Process Industry (PAL-N201704), the Advanced Research Project of the 13th Five-Year Plan (31511040301), the Fundamental Research Funds for the Central Universities (3208008401), the Qing Lan Project and Six Major Top-talent Plan, and in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions. The authors thank the reviewers and editors for their valuable comments, which were very helpful in improving this manuscript.

Author information

Corresponding author

Correspondence to Xinde Li.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Xie, C., Li, X. (2019). Infrared and Visible Image Fusion: A Region-Based Deep Learning Method. In: Yu, H., Liu, J., Liu, L., Ju, Z., Liu, Y., Zhou, D. (eds) Intelligent Robotics and Applications. ICIRA 2019. Lecture Notes in Computer Science, vol 11744. Springer, Cham. https://doi.org/10.1007/978-3-030-27541-9_49

  • DOI: https://doi.org/10.1007/978-3-030-27541-9_49

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-27540-2

  • Online ISBN: 978-3-030-27541-9

  • eBook Packages: Computer Science, Computer Science (R0)
