Multi-focus image fusion based on dynamic threshold neural P systems and surfacelet transform
Introduction
Multi-focus image fusion has become an emerging research topic because of its effectiveness in image processing and computer vision [1]. Since every imaging device with an optical camera has a limited depth of field, the image captured by the camera cannot be in focus everywhere: objects at a specific depth of field are sharp, while other objects are blurred. Multi-focus image fusion is an effective technique for addressing this problem. It merges two or more source images with different depths of field to produce a single, sharper image. Because the fused image contains more detailed information, it is better suited to the human visual system.
Many multi-focus image fusion methods have been presented in recent years. These methods can be classified into two classes: spatial domain and transform domain methods [2]. Spatial domain methods merge multi-focus source images directly, without converting the images into another representation. Such methods are further divided into two subclasses: pixel-level and region-level methods [3], [4]. Pixel-level methods merge source images by averaging the corresponding pixels. They are simple to implement and computationally fast; however, their major drawback is that they can introduce artifacts, such as ghosting and blurring. Region-level methods divide the source images into regions and then use various sharpness measures (such as spatial frequency or gradients) to choose regions for fusion [5], [6], [7]. In addition, pulse-coupled neural networks (PCNNs) have been applied to multi-focus image fusion [8], [9], [10].
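The two spatial-domain subclasses can be contrasted with a minimal sketch (our own illustration, not code from the paper): pixel-level averaging versus a block-based rule that keeps, for each block, the source with the higher spatial frequency.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency: root-mean-square of row and column differences."""
    rf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_pixel_average(img_a, img_b):
    """Pixel-level fusion by averaging (fast, but can blur sharp detail)."""
    return (img_a.astype(float) + img_b.astype(float)) / 2.0

def fuse_block_sf(img_a, img_b, block=8):
    """Region-level fusion: per block, keep the source with higher SF."""
    h, w = img_a.shape
    fused = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i + block, j:j + block].astype(float)
            b = img_b[i:i + block, j:j + block].astype(float)
            pick = a if spatial_frequency(a) >= spatial_frequency(b) else b
            fused[i:i + block, j:j + block] = pick
    return fused
```

The block-based rule avoids the averaging artifacts at the cost of possible blockiness at focus boundaries, which is exactly why more refined region segmentation and sharpness measures were proposed.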
In recent years, transform domain methods have received a lot of attention. These methods consist of three steps: the source images are first converted into a transform domain; the coefficients in the transform domain are then merged according to fusion rules to produce the fused coefficients; finally, the fused coefficients are converted back into the spatial domain via the inverse transform to form a composite image. Multi-scale transforms have been widely applied in image fusion due to their excellent locality and multi-resolution features, for example, the Laplacian pyramid (LP) [11], [12], [13], the gradient pyramid [14], [15], and the wavelet transform [16]. The wavelet transform has become a popular tool among transform domain methods, including the discrete wavelet transform (DWT) [17], [18], [19] and the dual-tree complex wavelet transform [20]. However, these wavelet transforms suffer from a lack of shift invariance and limited directional selectivity. To overcome these drawbacks, further multi-scale transforms have been introduced for image fusion, including the curvelet transform [21], [22], the non-subsampled contourlet transform [23], [24], the non-subsampled shearlet transform [25], the shift-invariant dual-tree complex shearlet transform [26], and sparse representations [27], [28], [29]. Image fusion based on the surfacelet transform (ST) offers some excellent properties, such as shift invariance and the ability to capture high-dimensional singularities [30]. This work aims to develop a novel image fusion method in the ST domain.
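The three transform-domain steps can be sketched with a Laplacian pyramid standing in for the surfacelet transform (a minimal illustration; the decimate/duplicate filters and the max-absolute detail rule are our simplifications, not the paper's method):

```python
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    """Step 1: decompose into detail bands plus a coarse approximation."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = cur[::2, ::2]  # crude low-pass: 2x decimation
        up = np.kron(low, np.ones((2, 2)))[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)  # detail (high-frequency) band
        cur = low
    pyr.append(cur)           # coarsest approximation (low-frequency)
    return pyr

def fuse_pyramids(pyr_a, pyr_b):
    """Step 2: max-absolute rule for detail bands, average for the base."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append((pyr_a[-1] + pyr_b[-1]) / 2.0)
    return fused

def reconstruct(pyr):
    """Step 3: invert the transform to obtain the composite image."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        up = np.kron(cur, np.ones((2, 2)))[:detail.shape[0], :detail.shape[1]]
        cur = up + detail
    return cur
```

With this particular decompose/reconstruct pair, an image passed through steps 1 and 3 is recovered exactly, so any difference in the fused result comes solely from the coefficient-merging rule in step 2.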
In recent years, several convolutional neural network (CNN)-based fusion methods have been developed for multi-focus image fusion. Liu et al. [31] discussed a CNN-based multi-focus image fusion method in which fusion was formulated as a classification problem solved by a CNN model. Tang et al. [32] investigated a pixel CNN for multi-focus image fusion, where a model was trained to learn the probabilities of focused, defocused, and unknown pixels based on their neighborhood pixel information. Gao et al. [33] investigated a CNN for multi-focus image fusion where a deeper network was used to construct an initial decision map. Amin-Naji et al. [34] presented a CNN-based multi-focus image fusion method combined with ensemble learning. Zhang et al. [35] presented a general CNN-based image fusion framework called IFCNN. These CNN-based methods have demonstrated competitive fusion performance compared with previous fusion methods. However, their training processes are very time-consuming.
Dynamic threshold neural P systems (DTNP systems) are a recently developed distributed and parallel computing model [36] that combines the spiking mechanism and the dynamic threshold mechanism of neurons. Our previous work has proved that DTNP systems are Turing-universal number generating/accepting devices and function computing devices. This paper focuses on the application of DTNP systems to the fusion of multi-focus images, and proposes a novel DTNP-systems-based fusion method in the surfacelet transform (ST) domain. To this end, four DTNP systems with local topology are designed to develop a fusion framework for multi-focus images. The feature matrices of the low-frequency and high-frequency ST coefficients of the multi-focus images serve as the external inputs of the four DTNP systems, and the corresponding outputs are used as the control conditions of the fusion rules. The contributions of this paper can be summarized as follows.
- (i)
DTNP systems with local topology are designed;
- (ii)
A novel fusion framework in ST domain for multi-focus images is developed, where four DTNP systems are its key component.
- (iii)
A low-frequency fusion rule based on DTNP systems is developed, where the sum-modified-Laplacian (SML) feature matrix of the low-frequency ST coefficients of the multi-focus images is the external input of the DTNP systems and the corresponding outputs are used to control the low-frequency fusion rule.
- (iv)
A high-frequency fusion rule based on DTNP systems is developed, where the spatial frequency (SF) feature matrix of the high-frequency ST coefficients of the multi-focus images is the external input of the DTNP systems and the corresponding outputs are used to control the high-frequency fusion rule.
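The SML feature matrix named above can be sketched as follows; this is a generic SML implementation (the window radius `r` and the edge padding are our assumptions), and the resulting matrix is the kind of focus measure that would be fed to a DTNP system as its external input. The SF feature matrix is computed analogously from windowed row and column differences.

```python
import numpy as np

def modified_laplacian(img):
    """Modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|."""
    p = np.pad(img.astype(float), 1, mode='edge')
    c = p[1:-1, 1:-1]
    ml_x = np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
    ml_y = np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1])
    return ml_x + ml_y

def sml(img, r=1):
    """Sum-modified-Laplacian: ML summed over a (2r+1) x (2r+1) window."""
    ml = np.pad(modified_laplacian(img), r, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for di in range(2 * r + 1):
        for dj in range(2 * r + 1):
            out += ml[di:di + h, dj:dj + w]
    return out
```

A flat region yields SML of zero, while regions containing sharp, in-focus detail yield large values, which is what makes SML a useful focus measure for low-frequency coefficients.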
The rest of this paper is organized as follows. Section 2 first introduces DTNP systems with local topology, and then briefly reviews the surfacelet transform. Section 3 describes the proposed fusion framework in the ST domain for multi-focus images in detail. Section 4 presents the experimental results. Conclusions and discussion are given in Section 5.
Section snippets
Methodology
In this section, we first review the literature related to variants of SNP systems, and then introduce dynamic threshold neural P systems (DTNP systems) with local topology and briefly review surfacelet transform.
The proposed fusion method for multi-focus images
We propose a DTNP-systems-based image fusion framework in the ST domain for multi-focus images, shown in Fig. 4. The fusion framework includes four parts: (i) ST decomposition; (ii) fusion rules; (iii) inverse ST; (iv) optimization. In Fig. 4, the two source images are multi-focus images.
The multi-focus images are first decomposed into ST coefficients using the ST. Then, the ST coefficients are fused to generate the fused ST coefficients. However, the low-frequency coefficients …
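To convey how neuron outputs can gate a fusion rule, here is a toy dynamic-threshold firing loop. This is an illustrative simplification we wrote for intuition, not the authors' DTNP system definition; the function name, the `decay` parameter, and the reset behavior are all our assumptions. Coefficients whose feature values charge their neurons faster fire earlier, and the resulting firing-time map can then drive coefficient selection.

```python
import numpy as np

def dtnp_like_fire(feature, steps=10, tau0=None, decay=0.1):
    """Toy dynamic-threshold firing: one neuron per coefficient fires when
    its accumulated potential reaches a threshold; unfired thresholds decay
    over time so every neuron eventually fires. Returns first firing times."""
    u = np.zeros_like(feature, dtype=float)                # potentials
    tau = np.full_like(u, tau0 if tau0 is not None else feature.mean())
    first_fire = np.full(feature.shape, steps, dtype=int)  # 'never fired' = steps
    for t in range(steps):
        u += feature                                  # external input charges neurons
        fired = u >= tau
        first_fire = np.where(fired & (first_fire == steps), t, first_fire)
        u = np.where(fired, 0.0, u)                   # reset fired neurons
        tau = np.where(fired, tau, tau * (1 - decay)) # unfired thresholds decay
    return first_fire

def select_by_firing(coef_a, coef_b, feat_a, feat_b):
    """Pick, per position, the coefficient whose neuron fires earlier."""
    return np.where(dtnp_like_fire(feat_a) <= dtnp_like_fire(feat_b),
                    coef_a, coef_b)
```

The point of the sketch is the control flow: the feature matrix (e.g. SML or SF) acts as the external input, and the system's firing behavior, not the raw feature comparison, decides which source's coefficient is kept.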
Experimental results
In our experiments, an open image dataset consisting of 20 pairs of multi-focus images was used as the set of test source images for evaluating the proposed and compared fusion methods. Fig. 5 shows the multi-focus source images; in every pair, the left image is near-focused and the right image is far-focused. All source images are grayscale, and their sizes are listed in Table 1.
The proposed fusion method has been evaluated on the image dataset and compared …
Conclusions and discussion
Dynamic threshold neural P systems (DTNP systems) are a class of distributed and parallel computing models and Turing-universal computing devices. This paper developed a novel DTNP-systems-based fusion method in the ST domain for multi-focus images. DTNP systems with local topology were designed to build a fusion framework for multi-focus images. In the fusion framework, the fusion rules for the low-frequency and high-frequency ST coefficients are controlled by four DTNP systems that are associated with …
CRediT authorship contribution statement
Bo Li: Conceptualization, Software, Writing - original draft. Hong Peng: Conceptualization, Software, Writing - original draft. Jun Wang: Conceptualization, Writing - original draft. Xiangnian Huang: Software.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This work was partially supported by the Research Fund of the Sichuan Science and Technology Project, China (No. 2018JY0083), the Research Foundation of the Education Department of Sichuan Province, China (No. 17TD0034), and the Innovation Fund of Postgraduate, Xihua University, China (Nos. YCJJ2019019 and YCJJ2019020).
References (67)
- et al., A survey on region based image fusion methods, Inf. Fusion (2019)
- et al., Multi-focus image fusion with dense SIFT, Inf. Fusion (2015)
- et al., Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure, Inf. Fusion (2013)
- et al., A region-based multi-sensor image fusion scheme using pulse coupled neural network, Pattern Recognit. Lett. (2006)
- et al., Multifocus image fusion using region segmentation and spatial frequency, Image Vis. Comput. (2008)
- et al., Multi-focus image fusion using pulse coupled neural network, Pattern Recognit. Lett. (2007)
- et al., Multi-focus image fusion using PCNN, Pattern Recognit. (2010)
- et al., Multi-focus image fusion based on robust principal component analysis and pulse-coupled neural network, Optik (2014)
- et al., Multi-scale weighted gradient-based fusion for multi-focus images, Inf. Fusion (2014)
- et al., Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure, Signal Process. (2012)