A feature level image fusion for Night-Vision context enhancement using Arithmetic optimization algorithm based image segmentation

https://doi.org/10.1016/j.eswa.2022.118272

Highlights

  • An efficient method for image fusion is proposed.

  • Weight maps are generated from segmentation using the Arithmetic Optimization Algorithm (AOA).

  • The weight maps are refined using a weighted least squares (WLS) technique.

  • The quality of the fused results is verified using the QAB/F, SCD, SSIM, and NAB/F metrics.

  • The fused results outperform those of competing algorithms.

Abstract

In image fusion, source images are combined into a composite image that retains their key characteristics, making the result more useful for both human and machine vision. A novel procedure for infrared (IR) and visible (Vis) image fusion is proposed in this manuscript. The main challenge of feature-level image fusion is that it can introduce artifacts and noise into the fused image. To preserve meaningful information from the source images without adding artifacts, a weight map computed with the Arithmetic Optimization Algorithm (AOA) is used in the fusion process. Feature-level fusion is performed after refining the weight maps with a weighted least squares (WLS) optimization technique, so that the derived salient-object details are merged into the visible image without introducing distortion. To affirm the validity of the proposed methodology, simulations are carried out on twenty-one image data sets. Qualitative and quantitative experimental analysis shows that the proposed method works well for most of the image data sets and performs better than several traditional existing models.

Introduction

Owing to weak night-time illumination conditions, visible images are often fused with accompanying infrared (IR) images to enhance the background context of video sequences. A hybrid approach is proposed that combines a local Laplacian filter for edge-preserving enhancement with a segmentation-based weight-map fusion scheme. It enhances the night-vision context of infrared and visible image fusion while keeping edges intact and without adding artifacts.

In recent times, the detectability of military targets by defence systems has reduced substantially (Meher et al., 2022, James and Kavitha, 2014). Military requirements under these circumstances have stimulated the development of multi-modal image fusion technology, which typically combines IR and visible images to obtain complementary information. The IR image captures the heat energy emitted by objects in the scene and can be used to discover targets because of its hot-object contrast, whereas the visible image carries considerably more high-frequency background information, which is important for accurately identifying target positions and circumstances.

The improved performance of image processing systems has led to the development of several image fusion techniques that combine information gathered by various sensors (Jiang et al., 2018, Pan et al., 2017, Piella, 2003). The goal of image fusion is to combine the complementary attributes of the acquired images into a single fused image with the best visual effect. This fused image provides details that cannot be obtained by analysing the individual images separately. Cameras sensitive to other wavelengths, such as infrared cameras, are often used alongside a digital charge-coupled device (CCD) camera (Li & Yang, 2008). Using different imaging sensors to capture the same location yields enhanced outputs once the images are fused (Singh, Singh, Gehlot, kaur, & Gagandeep, 2022). Multi-modal image fusion combines data acquired from different sensors (Singh, 2013, Singh, 2020), so that the resulting information carries fewer uncertainties and more content than each sensor provides individually (Li and Wang, 2022, Liu and Wang, 2015, Xu et al., 2020, Zhang et al., 2021). Otsu's criterion, used as the objective function for segmentation, is discussed in Section 2. Section 3 presents the fusion methodology, the AOA, and the proposed modified AOA. Result analysis and performance metrics are discussed in Section 4, and Section 5 presents the conclusion.
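Otsu's criterion as a segmentation objective can be sketched as below. This is a minimal illustration, not the authors' code: the exhaustive threshold search and the synthetic bimodal image are stand-ins for the AOA-driven search described in the paper, and all names and values are assumptions.

```python
import numpy as np

def otsu_objective(hist, t):
    # Between-class variance for a single threshold t over a 256-bin histogram;
    # this is the fitness an optimizer such as AOA would maximize.
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0                      # degenerate split: one empty class
    bins = np.arange(len(p))
    mu0 = (bins[:t] * p[:t]).sum() / w0  # mean of the lower class
    mu1 = (bins[t:] * p[t:]).sum() / w1  # mean of the upper class
    return w0 * w1 * (mu0 - mu1) ** 2

# Synthetic bimodal "IR" image; exhaustive search stands in for AOA here.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000),
                      rng.normal(180, 10, 5000)]).clip(0, 255).astype(np.uint8)
hist = np.bincount(img, minlength=256)
t_best = max(range(1, 256), key=lambda t: otsu_objective(hist, t))
```

For multilevel thresholding, the same objective generalises to a vector of thresholds, which is where a metaheuristic search such as AOA becomes preferable to exhaustive enumeration.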

Section snippets

Related work

Image segmentation is an important preliminary task that plays a crucial role in medicine and other computer-vision applications, and it has become a research topic gaining rapid pace in recent years. In region-based fusion, image thresholding provides significant support (Singh et al., 2020, Singh et al., 2021a, Kaur & Singh, 2017). Some approaches to image segmentation are addressed in the subsequent sections.

IR and visible image fusion methodology

In this section, a novel segmentation-based fusion methodology is proposed. The objective of the proposed technique is to fuse the infrared I_IR(x, y) and visible I_Vis(x, y) images without introducing any artefacts, while keeping meaningful information intact. A detailed block diagram of the proposed method, with representative images, is shown in Fig. 1. Each step of the algorithm is described in depth in the forthcoming sub-sections, in which the segmented image I_S(x, y) is obtained using
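One step of the pipeline, turning the segmented IR image into a soft weight map, might look as follows. This is a simplified sketch under stated assumptions: the function name and parameters are hypothetical, and Gaussian smoothing is only a simple stand-in for the WLS refinement the paper actually uses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def weight_map_from_segmentation(seg, salient_label, sigma=2.0):
    # Binary mask of the salient regions found by AOA-based segmentation,
    # softened into a [0, 1] weight map. Gaussian smoothing stands in for
    # the edge-aware WLS refinement described in the paper.
    w = (seg == salient_label).astype(np.float64)
    return np.clip(gaussian_filter(w, sigma), 0.0, 1.0)

# Toy label map with one hypothetical salient object in the centre.
seg = np.zeros((64, 64), dtype=np.int32)
seg[20:44, 20:44] = 1
w = weight_map_from_segmentation(seg, salient_label=1)
```

A true WLS refinement would additionally be guided by the image gradients, so that the weight map stays smooth inside regions but sharp across object boundaries.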

Experimental results

The final fusion results of the proposed approach are compared with seven existing state-of-the-art fusion techniques. The input visible images can be captured by CCD cameras with low-light or standard sensitivity. Some image details in both the visible and infrared imagery may have to be boosted to maximise their visibility. In addition, infrared sensors frequently use the mid-wave and long-wave spectral bands to better identify details of objects in dark and obstructed areas. We can

Discussion and conclusion

A novel feature-level image fusion algorithm using AOA-based segmentation is proposed in this manuscript. First, the IR image is segmented into different groups using AOA-based segmentation. The resulting segmented image is used to compute the weight-map functions used in the fusion process, and these weight maps are refined using WLS optimization. Finally, the fused image is reconstructed by pixel-wise weighted-average fusion. The efficiency of the approach proposed is measured
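The final reconstruction step, pixel-wise weighted-average fusion, reduces to the following; the toy intensities and weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse(ir, vis, w):
    # Pixel-wise weighted average: F(x, y) = W(x, y)*IR(x, y) + (1 - W(x, y))*Vis(x, y),
    # where W is the refined weight map (1 on salient IR objects, 0 on background).
    return w * ir + (1.0 - w) * vis

ir  = np.array([[200.0, 50.0]])   # toy IR intensities
vis = np.array([[80.0, 120.0]])   # toy visible intensities
w   = np.array([[1.0, 0.25]])     # toy refined weight map
f = fuse(ir, vis, w)              # -> [[200.0, 102.5]]
```

Because W is soft (values in [0, 1]) rather than a hard mask, the transition between IR-dominated and visible-dominated pixels is gradual, which is what prevents seam artifacts at object boundaries.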

CRediT authorship contribution statement

Simrandeep Singh: Conceptualization, Methodology, Software. Harbinder Singh: Data curation, Writing – original draft. Nitin Mittal: Investigation, Visualization. Abdelazim G. Hussien: Writing – review & editing. Filip Sroubek: Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (48)

  • V.S. Petrović et al. Sensor noise effects on signal-level image fusion performance. Information Fusion (2003).
  • G. Piella. A general framework for multiresolution image fusion: From pixels to regions. Information Fusion (2003).
  • H. Zhang et al. Image fusion meets deep learning: A survey and perspective. Information Fusion (2021).
  • Alexander, T. (2014). TNO Image Fusion Dataset. Retrieved from...
  • D.P. Bavirisetti et al. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform. IEEE Sensors Journal (2016).
  • A.P. James et al. Mean-variance blind noise estimation for CT images. Advances in Intelligent Systems and Computing (2014).
  • Q. Jiang et al. Multi-sensor image fusion based on interval type-2 fuzzy sets and regional features in nonsubsampled shearlet transform domain. IEEE Sensors Journal (2018).
  • Kaur, R., & Singh, S. (2017). An artificial neural network based approach to calculate BER in CDMA for multiuser...
  • S. Krishnamoorthy et al. Implementation and comparative study of image fusion algorithms. International Journal of Computer Applications (2010).
  • Li, H., & Wu, X.-J. (2018). Infrared and visible image fusion using Latent Low-Rank Representation. ArXiv Preprint...
  • H. Liang et al. Modified grasshopper algorithm-based multilevel thresholding for color image segmentation. IEEE Access (2019).
  • Y. Liu et al. Image fusion with convolutional sparse representation. IEEE Signal Processing Letters (2016).
  • Y. Liu et al. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Processing (2015).
  • B. Meher et al. A survey on region based image fusion methods. Information Fusion (2019).