
NOSMFuse: An infrared and visible image fusion approach based on norm optimization and slime mold architecture

Published in: Applied Intelligence

Abstract

In existing infrared and visible image fusion algorithms, it is usually difficult to maintain a good balance of meaningful information between the two source images, so important partial information from one source image is easily omitted. To address this issue, a novel fusion algorithm based on norm optimization and slime mold architecture, called NOSMFuse, is proposed. First, an interactive information decomposition method based on mutually guided image filtering is devised to obtain the corresponding base and detail layers. Next, a differentiation feature extraction operator is formulated and employed to fuse the base layers. In addition, we design a norm optimization-based fusion strategy for the detail layers, together with a loss function that accounts for both intensity fidelity and the gradient constraint. Finally, to further balance the useful information that the base and detail layers contribute to the fused image, we propose a slime mold architecture-based image reconstruction method that generates fusion results through adaptive optimization. Experimental results show that the proposed NOSMFuse is superior to 12 other state-of-the-art fusion algorithms, both qualitatively and quantitatively.
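The pipeline sketched in the abstract can be illustrated with a minimal two-scale fusion example. This is not the authors' NOSMFuse implementation: the box-filter decomposition below stands in for mutually guided image filtering, the max-absolute-value detail rule stands in for the norm-optimization strategy, and the fixed weight `alpha` stands in for the slime-mold-based adaptive balancing. All function names and parameters are illustrative.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Mean filter with replicated edges, computed via an integral image.
    A crude stand-in for the mutually guided filtering used in the paper."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge").astype(np.float64)
    s = padded.cumsum(axis=0).cumsum(axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # prepend a zero row/column
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def fuse_pair(ir: np.ndarray, vis: np.ndarray,
              radius: int = 15, alpha: float = 0.5) -> np.ndarray:
    """Two-scale fusion of an infrared/visible pair (illustrative only)."""
    # 1. Decompose each source into a low-frequency base layer and a
    #    high-frequency detail residual.
    base_ir, base_vis = box_blur(ir, radius), box_blur(vis, radius)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # 2. Fuse base layers with a balance weight. In NOSMFuse this balance
    #    is found adaptively by the slime mold optimizer; here it is fixed.
    base_f = alpha * base_ir + (1.0 - alpha) * base_vis
    # 3. Fuse detail layers by keeping the larger-magnitude coefficient,
    #    a common simplification of norm-optimization detail fusion.
    det_f = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return base_f + det_f
```

Note that fusing an image with itself returns the image unchanged, since the base and detail layers recombine exactly; this is a quick sanity check for any two-scale scheme of this form.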



Notes

  1. Available at: https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029


Acknowledgements

This work was supported by the National Natural Science Foundation of China [grant number 51804250], the China Postdoctoral Science Foundation [grant numbers 2019M653874XB, 2020M683522], the Scientific Research Program of the Shaanxi Provincial Department of Education [grant number 18JK0512], the Natural Science Basic Research Program of Shaanxi [grant numbers 2021JQ-572, 2020JQ-757], the Innovation Capability Support Program of Shaanxi [grant number 2020TD-021], the Xi'an Beilin District Science and Technology Project [grant number GX2116], and the Weinan Science and Technology Project [grant number 2020ZDYF-JCYJ-196].

Author information


Corresponding author

Correspondence to Xu Ma.

Ethics declarations

Competing interests

The authors declare no potential conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Hao, S., He, T., Ma, X. et al. NOSMFuse: An infrared and visible image fusion approach based on norm optimization and slime mold architecture. Appl Intell 53, 5388–5401 (2023). https://doi.org/10.1007/s10489-022-03591-4

