A feature level image fusion for IR and visible image using mNMRA based segmentation

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Image fusion is a method by which a collection of images is combined into a single composite image that retains the important characteristics of each source. The fused image is more informative and accurate, and it contains the information needed to better support human visual perception and machine vision. In this paper, a new technique is proposed for the fusion of infrared (IR) and visible (VIS) images. The primary problem with image fusion at the feature level is that artefacts and noise are introduced into the fused image. Here, a weight map generated by the modified naked mole-rat algorithm (mNMRA) is used to retain important information without introducing artefacts into the final fused image. The proposed FNMRA fusion method performs feature-level fusion after refining the weight maps with a weighted least squares (WLS) approach, which allows prominent object information from the IR image to be incorporated into the VIS image without distortion. Experiments on twenty-one image data sets are conducted to verify the fusion performance of the proposed approach. Qualitative and quantitative analysis of the fusion results shows that the proposed technique works well for most image data sets and outperforms several state-of-the-art methods.
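To make the weight-map fusion idea described in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation. The function name `fuse_ir_vis`, its parameters, and the Gaussian smoothing used in place of the paper's mNMRA-based segmentation and WLS refinement are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_ir_vis(ir, vis, weight_map, sigma=2.0):
    """Illustrative weight-map fusion of a registered IR/VIS image pair.

    `weight_map` is assumed to come from a segmentation step (for the paper,
    mNMRA-based thresholding); here it is simply an array in [0, 1].
    The Gaussian smoothing below is a stand-in for the edge-preserving WLS
    refinement described by the authors, not a reimplementation of it.
    """
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)

    # Refine the raw weight map so region boundaries do not create visible
    # seams in the fused result (placeholder for the WLS filtering step).
    w = gaussian_filter(weight_map.astype(np.float64), sigma=sigma)
    w = np.clip(w, 0.0, 1.0)

    # Weighted combination: salient IR regions are injected into the
    # visible image, while the remaining pixels keep the VIS content.
    fused = w * ir + (1.0 - w) * vis
    return np.clip(fused, 0, 255).astype(np.uint8)
```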



Author information

Corresponding author

Correspondence to Harbinder Singh.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Singh, S., Mittal, N. & Singh, H. A feature level image fusion for IR and visible image using mNMRA based segmentation. Neural Comput & Applic 34, 8137–8154 (2022). https://doi.org/10.1007/s00521-022-06900-7

