Multi-focus image fusion based on dynamic threshold neural P systems and surfacelet transform

https://doi.org/10.1016/j.knosys.2020.105794

Highlights

  • We propose a dynamic threshold neural P (DTNP) system with local topology.

  • We develop a multi-focus image fusion framework based on DTNP systems in ST domain.

  • We propose a fusion rule based on DTNP systems for low-frequency ST coefficients.

  • We propose a fusion rule based on DTNP systems for high-frequency ST coefficients.

Abstract

Dynamic threshold neural P systems (DTNP systems) are recently proposed distributed and parallel computing models inspired by the intersecting cortical model. DTNP systems differ from spiking neural P systems (SNP systems) in their introduction of a dynamic threshold mechanism for neurons, and they have been theoretically proven to be Turing-universal computing devices. This paper discusses how to apply DTNP systems to the fusion of multi-focus images, and proposes a novel image fusion method based on DTNP systems in the surfacelet domain. Based on four DTNP systems with local topology, a multi-focus image fusion framework in the surfacelet domain is developed, in which the DTNP systems control the fusion of the low- and high-frequency surfacelet coefficients. The proposed fusion method is evaluated on an open dataset of 20 multi-focus images in terms of five fusion quality metrics, and compared with 10 state-of-the-art fusion methods. Quantitative and qualitative experimental results demonstrate the advantages of the proposed method in terms of visual quality, fusion performance, and computational efficiency.

Introduction

Multi-focus image fusion has become an active research topic because of its effectiveness in image processing and computer vision [1]. Since an optical camera has a limited depth of field, not all objects in a captured image can be in focus: objects at a specific depth are sharp, while the others are blurred. Multi-focus image fusion addresses this problem by merging two or more source images with different depths of field into a single, sharper image. Because the fused image contains more detail, it is better suited to the human visual system.

Many multi-focus image fusion methods have been presented in recent years. These methods can be classified into two classes: spatial and transform domain methods [2]. Spatial domain methods directly merge multi-focus source images without converting images into other types of expressions. Such methods are further divided into two subclasses: pixel- and region-level methods [3], [4]. Pixel-level methods merge source images by averaging the corresponding pixels. They are simple in implementation and computationally fast. However, their major drawback is that they can introduce artifacts, such as ghosting and blurring. Region-level methods divide source images into regions, and then use various sharpness measures (such as spatial frequency or gradients) to choose regions for fusion [5], [6], [7]. In addition, pulse-coupled neural networks (PCNNs) were applied for multi-focus image fusion [8], [9], [10].
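As a minimal illustration of the pixel-level approach described above, the following sketch averages the corresponding pixels of two source images (the toy arrays are invented for illustration):

```python
import numpy as np

def average_fusion(img_a, img_b):
    """Pixel-level fusion: average the corresponding pixels of two images."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return (a + b) / 2.0

# Toy 2x2 "images" standing in for two multi-focus source images.
A = np.array([[10, 20], [30, 40]], dtype=np.uint8)
B = np.array([[20, 40], [10, 60]], dtype=np.uint8)
F = average_fusion(A, B)  # -> [[15, 30], [20, 50]]
```

The simplicity and speed of this rule are evident, as is its weakness: averaging a focused pixel with a defocused one blurs the result, which is exactly the artifact noted above.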

In recent years, transform domain methods have received considerable attention. These methods consist of three steps: the source images are first converted to a transform domain; the coefficients in that domain are then merged according to fusion rules; finally, the fused coefficients are converted back into the spatial domain via the inverse transform to form a composite image. Multi-scale transforms have been widely applied in image fusion owing to their excellent locality and multi-resolution properties, for example, the Laplacian pyramid (LP) [11], [12], [13], the gradient pyramid [14], [15], and the wavelet transform [16]. The wavelet transform has become a popular basis for transform domain fusion, including the discrete wavelet transform (DWT) [17], [18], [19] and the dual-tree complex wavelet transform [20]. However, these wavelet transforms suffer from a lack of shift invariance and limited directional selectivity. To overcome these drawbacks, other multi-scale transforms have been introduced for image fusion, including the curvelet transform [21], [22], the non-subsampled contourlet transform [23], [24], the non-subsampled shearlet transform [25], the shift-invariant dual-tree complex shearlet transform [26], and sparse representations [27], [28], [29]. The surfacelet transform (ST) offers some excellent properties for image fusion, such as shift invariance and the ability to capture high-dimensional singularities [30]. This work aims to develop a novel image fusion method in the ST domain.
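To make the three-step pipeline concrete, here is a minimal sketch of transform-domain fusion. It substitutes a one-level 2-D Haar transform for the surfacelet transform (the actual ST is far more elaborate), together with the common average rule for the low-frequency band and the max-absolute rule for the high-frequency bands; the function names and rule choices are ours, not the paper's:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform (x must have even dimensions)."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency (approximation) band
    lh = (a - b + c - d) / 4.0   # high-frequency detail bands
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = ll + lh + hl + hh
    x[0::2, 1::2] = ll - lh + hl - hh
    x[1::2, 0::2] = ll + lh - hl - hh
    x[1::2, 1::2] = ll - lh - hl + hh
    return x

def fuse(img_a, img_b):
    # Step 1: forward transform of each source image.
    ca = haar2d(img_a.astype(np.float64))
    cb = haar2d(img_b.astype(np.float64))
    # Step 2: fusion rules -- average the low band, take the
    # larger-magnitude coefficient in each high band.
    fused = [(ca[0] + cb[0]) / 2.0]
    for ha, hb in zip(ca[1:], cb[1:]):
        fused.append(np.where(np.abs(ha) >= np.abs(hb), ha, hb))
    # Step 3: inverse transform back to the spatial domain.
    return ihaar2d(*fused)
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the forward/inverse pair is consistent.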

In recent years, several convolutional neural network (CNN)-based fusion methods have been developed for multi-focus image fusion. Liu et al. [31] discussed a CNN-based multi-focus image fusion method in which a CNN model was used to solve a classification problem. Tang et al. [32] investigated a pixel-wise CNN for multi-focus image fusion, where a model was trained to learn the probabilities of focused, defocused, and unknown pixels from their neighborhood pixel information. Gao et al. [33] investigated a CNN for multi-focus image fusion in which a deeper network was used to construct an initial decision map. Amin-Naji et al. [34] presented a CNN-based multi-focus image fusion method combined with ensemble learning. Zhang et al. [35] presented a general CNN-based image fusion framework called IFCNN. These CNN-based methods have demonstrated competitive fusion performance compared with previous fusion methods; however, their training processes are very time-consuming.

Dynamic threshold neural P systems (DTNP systems) are a recently developed distributed and parallel computing model [36] that combines the spiking mechanism and the dynamic threshold mechanism of neurons. Our previous work proved that DTNP systems are Turing-universal number generating/accepting devices and function computing devices. This paper focuses on the application of DTNP systems to the fusion of multi-focus images, and proposes a novel DTNP-systems-based fusion method in the surfacelet transform (ST) domain. To this end, four DTNP systems with local topology are designed to build a fusion framework for multi-focus images. The feature matrices of the low- and high-frequency ST coefficients of the multi-focus images serve as the external inputs of the four DTNP systems, and the corresponding outputs are used as control conditions for the fusion rules. The contributions of this paper are summarized as follows.

  • (i)

    DTNP systems with local topology are designed;

  • (ii)

    A novel fusion framework in ST domain for multi-focus images is developed, in which four DTNP systems are the key components.

  • (iii)

    A low-frequency fusion rule based on DTNP systems is developed, where the sum-modified-Laplacian (SML) feature matrix of the low-frequency ST coefficients of the multi-focus images is the external input of the DTNP systems and the corresponding outputs are used to control the low-frequency fusion rule.

  • (iv)

    A high-frequency fusion rule based on DTNP systems is developed, where the spatial frequency (SF) feature matrix of the high-frequency ST coefficients of the multi-focus images is the external input of the DTNP systems and the corresponding outputs are used to control the high-frequency fusion rule.
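The two feature matrices named in the contributions, SML and SF, are standard focus measures. The following sketch shows one plausible way to compute them; the window radius, padding mode, and exact normalization are our assumptions, not necessarily the paper's:

```python
import numpy as np

def sml(img, r=1):
    """Sum-modified-Laplacian map: modified Laplacian summed over a
    (2r+1) x (2r+1) neighbourhood (assumed window radius r)."""
    x = img.astype(np.float64)
    p = np.pad(x, 1, mode='edge')
    # Modified Laplacian: |2I - I_up - I_down| + |2I - I_left - I_right|
    ml = (np.abs(2 * x - p[:-2, 1:-1] - p[2:, 1:-1]) +
          np.abs(2 * x - p[1:-1, :-2] - p[1:-1, 2:]))
    q = np.pad(ml, r, mode='edge')
    out = np.zeros_like(ml)
    for dy in range(2 * r + 1):          # windowed sum of ML values
        for dx in range(2 * r + 1):
            out += q[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

def sf(img, r=1):
    """Local spatial-frequency map: sqrt(row-freq^2 + col-freq^2)
    computed over a (2r+1) x (2r+1) window."""
    x = img.astype(np.float64)
    rd = np.zeros_like(x); cd = np.zeros_like(x)
    rd[:, 1:] = (x[:, 1:] - x[:, :-1]) ** 2   # horizontal differences
    cd[1:, :] = (x[1:, :] - x[:-1, :]) ** 2   # vertical differences
    e = rd + cd
    q = np.pad(e, r, mode='edge')
    out = np.zeros_like(e)
    n = (2 * r + 1) ** 2
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += q[dy:dy + e.shape[0], dx:dx + e.shape[1]]
    return np.sqrt(out / n)
```

Both measures vanish on perfectly flat regions and grow with local contrast, which is why they serve as proxies for "in focus" when driving the fusion rules.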

The rest of this paper is organized as follows. Section 2 first introduces DTNP systems with local topology, and then briefly reviews surfacelet transform. Section 3 describes in detail the proposed fusion framework in ST domain for multi-focus images. Section 4 gives the experimental results. Conclusions and discussion are drawn in Section 5.

Section snippets

Methodology

In this section, we first review the literature on variants of SNP systems, then introduce dynamic threshold neural P systems (DTNP systems) with local topology, and briefly review the surfacelet transform.
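Since DTNP systems are inspired by the intersecting cortical model (ICM), a simplified ICM-style iteration helps illustrate what a dynamic threshold mechanism does: a neuron's threshold jumps when it fires and then decays, so strongly stimulated neurons fire earlier and more often. This is an illustrative sketch only, not the paper's DTNP rules, and all parameter values are assumed:

```python
import numpy as np

def icm_iterate(stimulus, steps=10, f=0.9, g=0.8, h=20.0):
    """Illustrative ICM-style iteration with a dynamic threshold.
    Each 'neuron' keeps an internal state F and a threshold E; firing
    raises E by h, after which E decays geometrically (factor g)."""
    S = stimulus.astype(np.float64)
    F = np.zeros_like(S)            # internal state
    E = np.ones_like(S)             # dynamic threshold
    fires = np.zeros_like(S)        # accumulated firing counts
    for _ in range(steps):
        F = f * F + S               # state driven by the external input
        Y = (F > E).astype(np.float64)   # fire when state exceeds threshold
        E = g * E + h * Y           # threshold jumps on firing, then decays
        fires += Y
    return fires

# Stronger stimuli (e.g. larger focus-measure values) fire at least as often.
stim = np.array([[0.2, 0.9], [0.9, 0.2]])
counts = icm_iterate(stim)
```

In the fusion setting, feeding the SML or SF feature matrices in as the stimulus and comparing the resulting outputs is the kind of control signal the paper's DTNP systems provide for its fusion rules.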

The proposed fusion method for multi-focus images

We propose a DTNP-systems-based image fusion framework in ST domain for multi-focus images, shown in Fig. 4. The fusion framework includes four parts: (i) ST transform; (ii) fusion rules; (iii) inverse ST transform; (iv) optimization. In Fig. 4, source images A and B are two multi-focus images.

The multi-focus images are first decomposed to the ST coefficients by using ST transform. Then, the ST coefficients are fused to generate the fused ST coefficients. However, low-frequency coefficients

Experimental results

In our experiments, an open image dataset consisting of 20 pairs of multi-focus images was used as the set of test source images for evaluating the proposed and compared fusion methods. Fig. 5 shows the multi-focus source images; in every pair, the left image is near-focused and the right image is far-focused. The source images are all grayscale, and their sizes are listed in Table 1.

The proposed fusion method has been evaluated on the image dataset and compared

Conclusions and discussion

Dynamic threshold neural P systems (DTNP systems) are a kind of distributed and parallel computing models and Turing-universal computing devices. This paper developed a novel DTNP-systems-based fusion method in ST domain for multi-focus images. DTNP systems with local topology were designed to propose a fusion framework for multi-focus images. In the fusion framework, fusion rules of low-frequency and high-frequency ST coefficients are controlled by four DTNP systems that are associated with

CRediT authorship contribution statement

Bo Li: Conceptualization, Software, Writing - original draft. Hong Peng: Conceptualization, Software, Writing - original draft. Jun Wang: Conceptualization, Writing - original draft. Xiangnian Huang: Software.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was partially supported by the Research Fund of Sichuan Science and Technology Project, China (No. 2018JY0083), the Research Foundation of the Education Department of Sichuan Province, China (No. 17TD0034), and the Innovation Fund of Postgraduate, Xihua University, China (Nos. YCJJ2019019 and YCJJ2019020).

References (67)

  • R. Redondo et al., Multifocus image fusion using the log-Gabor transform and a multisize windows technique, Inf. Fusion (2009)

  • G. Pajares et al., A wavelet-based image fusion tutorial, Pattern Recognit. (2004)

  • J.J. Lewis et al., Pixel- and region-based image fusion with complex wavelets, Inf. Fusion (2007)

  • F. Nencini et al., Remote sensing image fusion using the curvelet transform, Inf. Fusion (2007)

  • S. Li et al., Multifocus image fusion by combining curvelet and wavelet transform, Pattern Recognit. Lett. (2008)

  • Q. Zhang et al., Multifocus image fusion using the nonsubsampled contourlet transform, Signal Process. (2009)

  • L. Yang et al., Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform, Neurocomputing (2008)

  • X. Jin et al., Multimodal sensor medical image fusion based on nonsubsampled shearlet transform and S-PCNNs in HSV space, Signal Process. (2018)

  • M. Yin et al., A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation, Neurocomputing (2017)

  • Y. Liu et al., A general framework for image fusion based on multi-scale transform and sparse representation, Inf. Fusion (2015)

  • M. Nejati et al., Multi-focus image fusion using dictionary-based sparse representation, Inf. Fusion (2015)

  • Q. Zhang et al., Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: a review, Inf. Fusion (2018)

  • B. Zhang et al., Multi-focus image fusion algorithm based on compound PCNN in surfacelet domain, Optik (2014)

  • Y. Liu et al., Multi-focus image fusion with a deep convolutional neural network, Inf. Fusion (2017)

  • M. Amin-Naji et al., Ensemble of CNN for multi-focus image fusion, Inf. Fusion (2019)

  • Y. Zhang et al., IFCNN: a general image fusion framework based on convolutional neural network, Inf. Fusion (2020)

  • H. Peng et al., Dynamic threshold neural P systems, Knowl.-Based Syst. (2019)

  • H. Peng et al., Spiking neural P systems with multiple channels, Neural Netw. (2017)

  • O.H. Ibarra et al., Sequential SNP systems based on min/max spike number, Theoret. Comput. Sci. (2009)

  • M. Cavaliere et al., Asynchronous spiking neural P systems, Theoret. Comput. Sci. (2009)

  • H. Peng et al., Fuzzy reasoning spiking neural P system for fault diagnosis, Inform. Sci. (2013)

  • J. Wang et al., Interval-valued fuzzy spiking neural P systems for fault diagnosis of power transmission networks, Eng. Appl. Artif. Intell. (2019)

  • A. Păun et al., Small universal spiking neural P systems, BioSystems (2007)