Image restoration using overlapping group sparsity on hyper-Laplacian prior of image gradient
Introduction
In recent years, with the popularization of smartphones, the number of photos taken by phone cameras has increased rapidly. However, these cameras, equipped with small apertures, can blur images when shooting low-light scenes with long exposures. They also suffer from several intrinsic noise sources, such as thermal sensor noise and read-out noise, which are usually assumed to be additive and independent Gaussian noise [1], [2]. Besides, many other factors (e.g., camera misfocus, dusty environments, and atmospheric turbulence) can degrade real images [3], [4], [5], [2]. Image restoration, which aims to recover images from degraded observations, has received much attention over the last decades. The involved techniques model the degradations and then apply the inverse procedure to obtain an approximation of the original image.
Assuming that the observation was blurred by a linear shift-invariant system and further contaminated by additive noise, the overall degradation model is given by
$$g = h \otimes f + \eta, \quad (1)$$
where $g$ is the blurred and noisy gray-scale image, $f$ is the latent clean gray-scale image, $h$ denotes the blurring kernel, $\eta$ is the additive noise, and $\otimes$ denotes the convolution operator. In most cases, $\eta$ is assumed to be additive white Gaussian noise (AWGN) due to its good approximation of real-world noises and its mathematical tractability [6], [7], [4], [8], [9].
For the convenience of handling this problem, all images are represented in column-major vectorized form, and $h$ is converted to a blurring matrix $H$ under certain boundary conditions. When the size of the image is $m \times n$, both $g$ and $f$ are vectors of size $mn$, and $H$ is an $mn \times mn$ matrix. At this point, we can reformulate the degradation model Eq. (1) as
$$g = Hf + \eta, \quad \eta \sim \mathcal{N}(0, \sigma^2 I_{mn}), \quad (2)$$
where $I_{mn}$ is the identity matrix of dimension $mn$, and $\sigma$ is the standard deviation of the AWGN. Recovering the original image $f$ from the corrupted one $g$ by inverting the above degradation model is an ill-posed problem because $H$ is highly ill-conditioned [4], [10], [11].
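For illustration, the degradation model can be simulated numerically; the following is a minimal sketch (not the authors' code), assuming periodic boundary conditions so that $H$ is block-circulant and the convolution can be carried out with the FFT. The function name and the box-blur kernel are hypothetical choices.

```python
import numpy as np

def degrade(f, h, sigma, rng=None):
    """Simulate g = h (*) f + eta with periodic boundaries.

    f : 2-D clean image; h : blur kernel laid out on the same grid
    (origin at index (0, 0)); sigma : AWGN standard deviation.
    """
    rng = np.random.default_rng(rng)
    # Circular convolution via the FFT applies the block-circulant matrix H.
    G = np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)
    blurred = np.real(np.fft.ifft2(G))
    return blurred + sigma * rng.standard_normal(f.shape)

# A 3x3 box blur, wrapped around the origin for periodic convolution.
h = np.zeros((8, 8))
h[[0, 0, 0, 1, 1, 1, -1, -1, -1],
  [0, 1, -1, 0, 1, -1, 0, 1, -1]] = 1 / 9
f = np.zeros((8, 8)); f[4, 4] = 1.0       # unit impulse
g = degrade(f, h, sigma=0.0)              # noiseless blur of a delta
```

Blurring a unit impulse with `sigma=0` simply reproduces the kernel centered at the impulse, which is a convenient sanity check for the boundary handling.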
To effectively solve this problem, a lot of research has been done within the maximum a posteriori (MAP) framework, which leads to the following regularized minimization problem
$$\min_f \; \frac{1}{2}\|Hf - g\|_2^2 + \lambda \varphi(f), \quad (3)$$
where the first term of Eq. (3) is the data fidelity term, which describes the similarity between the observed image and the latent one, $\varphi(f)$ is the regularization functional (also known as the image regularizer) encoding prior knowledge about the true image, and $\lambda > 0$ is the regularization parameter controlling the trade-off between the fidelity term and the image regularizer [12], [13], [9], [6].
Since the choice of a relevant image regularizer is essential for the good performance of the restoration model, numerous authors have proposed different regularization methods. Among them, Tikhonov and total variation (TV) regularization are two of the most traditional. The Tikhonov regularizer, a quadratic penalty such as $\varphi(f) = \|\nabla f\|_2^2$, leads to an objective functional that is inexpensive to minimize, but tends to over-smooth important details such as sharp edges in natural images [14]. TV regularization is based on the observation that noisy signals have larger TV than natural ones [15], [16]. In imaging science, the TV of an image is written as
$$\mathrm{TV}(f) = \sum_{i=1}^{mn} \|\nabla_i f\|_p$$
with
$$\nabla_i f = \left(\nabla_i^h f, \nabla_i^v f\right), \quad \nabla_i^h f = f_{i+m} - f_i, \quad \nabla_i^v f = f_{i+1} - f_i$$
for $i = 1, \ldots, mn$, where $\nabla_i f$, $\nabla_i^h f$, $\nabla_i^v f$, and $f_i$ denote the discrete gradient vector, the horizontal gradient, the vertical gradient, and the grey level of $f$ at pixel $i$ under the periodic boundary condition, respectively. $p$ is equal to 1 or 2 for the anisotropic and the isotropic version of the vector norm, respectively.
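The two TV variants above can be evaluated directly; the following is a small sketch (assuming periodic boundaries, as in the text; the function and variable names are illustrative only).

```python
import numpy as np

def total_variation(f, p=1):
    """Discrete TV under periodic boundary conditions.

    p=1 gives the anisotropic TV, p=2 the isotropic TV.
    """
    dh = np.roll(f, -1, axis=1) - f   # horizontal gradient at every pixel
    dv = np.roll(f, -1, axis=0) - f   # vertical gradient at every pixel
    if p == 1:
        return float(np.sum(np.abs(dh) + np.abs(dv)))
    return float(np.sum(np.sqrt(dh ** 2 + dv ** 2)))

# A piecewise-constant step image: TV counts only the jumps across the edge
# (one at the step, one at the periodic wrap-around, per row).
f_step = np.zeros((4, 4)); f_step[:, 2:] = 1.0
```

On this step image both versions give $4 \times 2 = 8$, since every gradient vector has at most one non-zero component; they differ only where horizontal and vertical gradients are simultaneously non-zero.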
The TV regularization could preserve the important details such as sharp edges, but it also yields staircase artifacts due to the piecewise constant assumption and the lack of structural information [14], [17], [18], [19]. To compensate its drawback, many authors proposed excellent alternative regularizers including the high-order TV prior [19], [20], [6], [21], fractional order TV[22], the transform domain sparsity prior [23], [24], [25], [11], total curvature [26], the non-local TV [27], [28], the hybrid prior [29], [30], [31] and the learning-induced prior [32], [33].
Beyond such entry-wise sparsity priors, structural sparsity, which exploits the structure of the non-zero entries of a signal, has emerged recently to provide more information about the original signal [12], [17], [34]. In particular, Selesnick et al. [35] found that the group sparsity of the signal gradient describes the natural tendency of large values to arise near other large values rather than in isolation. They considered this group sparse behavior, termed overlapping group sparsity (OGS-TV). Liu et al. [14] successfully extended OGS-TV from 1D signal denoising to 2D non-blind image deblurring. Deng et al. [36] stated that iterative OGS-TV regularization, with the regularization parameter adaptively estimated according to different noise levels, could substantially suppress the staircase artifacts in heart sound signals. To further extend OGS-TV, Kumar et al. [17] introduced an adaptive weight in each group for denoising Gaussian or Poisson noisy images, and Adam et al. [21] combined non-convex high-order total variation with an overlapping group sparse regularizer. Besides, several authors adopted the OGS-TV regularizer to recover images from blurry observations under impulse noise [37], Poisson noise [38], speckle noise [39], and Cauchy noise [40].
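The OGS penalty sums the Euclidean norms of overlapping windows of the signal. A minimal 1-D sketch (group size `K` and periodic indexing are illustrative assumptions, not the paper's exact formulation) makes the grouping effect concrete:

```python
import numpy as np

def ogs_penalty(v, K=3):
    """1-D overlapping group sparsity penalty: the sum, over every
    (periodic) length-K window, of that window's Euclidean norm."""
    v = np.asarray(v, dtype=float)
    # Row k holds v shifted by k, so column i is the window starting at i.
    windows = np.stack([np.roll(v, -k) for k in range(K)])
    return float(np.sum(np.sqrt(np.sum(windows ** 2, axis=0))))
```

For two signals with the same $\ell_1$ energy, the penalty is smaller when the large values cluster, e.g. `ogs_penalty([1, 1, 0, 0, 0, 0])` is below `ogs_penalty([1, 0, 0, 1, 0, 0])`, which is exactly the "large values near other large values" tendency described above.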
As another structural prior, Zha et al. [41] proposed a non-convex low-rank prior model to exploit the structured sparsity of non-local similar patches and the non-convexity of rank minimization simultaneously. However, it is time-consuming and computationally expensive because it requires grouping non-local similar patches and computing singular values.
Meanwhile, some researchers emphasized that the natural image gradient obeys a heavy-tailed distribution, and suggested the hyper-Laplacian (HL) prior, which approximates this empirical distribution better than the Gaussian or Laplacian prior [42], [43], [44], [45], [13], [46]. They modeled the HL prior as
$$\varphi(f) = \|\nabla f\|_p^p, \quad 0 < p < 1,$$
where $\|\cdot\|_p$ denotes the $\ell_p$ quasi-norm, namely, $\|x\|_p = \left(\sum_i |x_i|^p\right)^{1/p}$. Due to the non-convexity of the $\ell_p$ quasi-norm for $p \in (0, 1)$, an efficient optimization algorithm is needed for its implementation.
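The heavy-tail preference of the HL energy is easy to see numerically. A sketch (assuming periodic boundaries; the choice $p = 0.5$ and the test signals are illustrative):

```python
import numpy as np

def hl_penalty(f, p=0.5):
    """Hyper-Laplacian energy ||nabla f||_p^p, i.e. the sum over pixels of
    |horizontal difference|^p + |vertical difference|^p, with 0 < p < 1,
    under periodic boundary conditions."""
    dh = np.roll(f, -1, axis=1) - f   # horizontal differences
    dv = np.roll(f, -1, axis=0) - f   # vertical differences
    return float(np.sum(np.abs(dh) ** p + np.abs(dv) ** p))

# With p < 1 the prior prefers one sharp jump (a heavy-tailed gradient)
# over several small ones of the same total height.
step = np.array([[0.0, 0.0, 1.0, 1.0]])   # single sharp edge
ramp = np.array([[0.0, 1/3, 2/3, 1.0]])   # same range, diffused
```

Both signals have the same $\ell_1$ gradient energy (a total rise of 1 plus the periodic wrap-around), yet `hl_penalty(step)` is strictly smaller than `hl_penalty(ramp)`: the $\ell_p^p$ measure with $p < 1$ rewards sparse, large gradients, which is exactly why the HL prior preserves sharp edges.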
As a representative method, Krishnan et al. [44] solved the non-convex subproblem analytically at every pixel for specific exponents and achieved a significant speed-up by using a lookup table (LUT). Other authors proposed efficient iterative schemes (e.g., the generalized soft-thresholding method [46] and the generalized iterated shrinkage algorithm [13]) and reported comparable results in accuracy and speed. Later, Cheng et al. [8] proposed a spatially variant HL prior to correctly identify the gradient distribution at each pixel. Their method is free of parameter tuning and solves the non-blind deconvolution model in an alternating manner. Zuo et al. [47] introduced the HL prior for blind deconvolution and proposed a principled discriminative learning model to handle parameter tuning. Cheng et al. [48] proposed a hyper-Laplacian regularization term for the reflectance and the illumination in the Retinex problem. Several authors utilized the HL prior to regularize the global spectral structure for multispectral image denoising [45], [49], [50], [51]. Shi et al. [52] introduced a coupled constraint to combine the benefits of the OGS-TV and HL priors.
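To illustrate how such per-pixel non-convex subproblems can be handled, the following sketches a generalized soft-thresholding iteration in the spirit of [46] for $\min_x \frac{1}{2}(x - v)^2 + \lambda |x|^p$; the threshold formula and fixed-point update follow the published scheme as I understand it, while the iteration count is an arbitrary illustrative choice.

```python
import numpy as np

def gst(v, lam, p, iters=10):
    """Generalized soft-thresholding for min_x 0.5*(x - v)^2 + lam*|x|^p
    with 0 < p < 1. Below the threshold tau the minimizer is exactly 0;
    above it, the fixed-point iteration x <- |v| - lam*p*x^(p-1)
    converges to the non-zero stationary point."""
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) \
        + lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    if abs(v) <= tau:
        return 0.0
    x = abs(v)
    for _ in range(iters):
        x = abs(v) - lam * p * x ** (p - 1)
    return float(np.sign(v) * x)
```

As $p \to 1$ the threshold approaches $\lambda$ and the solution approaches the classical soft-thresholding result $\mathrm{sign}(v)\max(|v| - \lambda, 0)$, which is a convenient check of the scheme.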
Motivated by the facts that the hyper-Laplacian prior approximates the heavy-tailed distribution of natural image gradients well and that overlapping group sparsity remedies the staircase artifacts by introducing additional structural information, we introduce a new regularizer, the overlapping group sparsity on hyper-Laplacian (OGS-HL) prior of the natural image gradient, for denoising and deblurring images. Since our model combines a more appropriate prior with structural information, we must address the computational issues caused by the non-convexity of the hyper-Laplacian and the inherent complexity of OGS. We therefore propose an algorithm that effectively solves the resulting non-convex and non-smooth optimization problem, in which one subproblem is optimized by a majorization-minimization algorithm with a novel quadratic majorizer. Finally, numerical denoising and deblurring experiments illustrate that the proposed OGS-HL outperforms other closely related algorithms.
The remainder of this paper is organized as follows. In Section 2, we introduce several basic concepts concerning the proposed regularizer and briefly review some related methods. We also propose a quadratic majorizer for solving one subproblem arising from OGS-HL under the MM framework. Then, in Section 3, we present a new image restoration model and derive an efficient algorithm for minimizing the non-convex and non-smooth objective functional with the alternating direction method of multipliers (ADMM) and the majorization-minimization (MM) algorithm. In Section 4, we demonstrate the superiority of the proposed method via numerical experiments, followed by an analysis of the parameter settings for the best performance and a convergence analysis. Finally, Section 5 concludes the paper.
Section snippets
Preliminaries
This section describes the proposed OGS-HL regularizer, the ADMM framework, and the MM method with a novel quadratic majorizer.
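The MM idea can be illustrated with the classical quadratic majorizer of the absolute value, $|t| \le t^2 / (2|t_k|) + |t_k| / 2$; this is a standard textbook example for intuition only, and the paper's majorizer for the OGS-HL subproblem is more elaborate.

```python
def abs_majorizer(t, tk):
    """Quadratic majorizer of |t| at tk != 0: q(t) = t^2/(2|tk|) + |tk|/2.
    By the AM-GM inequality q(t) >= |t| everywhere, with equality at t = tk."""
    return t * t / (2.0 * abs(tk)) + abs(tk) / 2.0

def mm_abs_prox(v, lam, iters=50):
    """MM iteration for min_t 0.5*(t - v)^2 + lam*|t|: replace |t| by its
    quadratic majorizer at the current iterate, then minimize the
    resulting quadratic surrogate in closed form."""
    t = v
    for _ in range(iters):
        if abs(t) < 1e-12:
            break
        t = v / (1.0 + lam / abs(t))   # closed-form surrogate minimizer
    return t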
Proposed method
In this section, we first present the proposed model and then solve it under the ADMM framework.
Numerical experiment
In this section, we show several experimental results to validate the proposed method in comparison with closely related methods. The test images, of sizes ranging from 256 × 256 to 1024 × 1024, are shown in Fig. 2. All experiments were performed under Windows 7 and MATLAB v9.1 on a Lenovo desktop equipped with an Intel(R) Core(TM) i5 3.2 GHz processor and 8 GB of RAM. The quality of the restored image is measured by the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index,
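As a reference for the first of these metrics, PSNR has a simple closed form; the following is a minimal sketch (the function name and the 8-bit peak value of 255 are conventional assumptions).

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE),
    where MSE is the mean squared error between the two images."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher values indicate a restoration closer to the ground truth; SSIM complements it by comparing local luminance, contrast, and structure rather than raw pixel differences.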
Conclusions
In this paper, we proposed a new regularizer named OGS-HL, which is motivated by the fact that the hyper-Laplacian prior could better approximate the heavy-tailed distribution of natural image gradient and that the overlapping group sparsity with total variation would mitigate the staircase artifacts.
We adopted the ADMM framework to tackle the proposed non-convex and non-smooth optimization problem. Much of the computational complexity of the ADMM stems from the minimization of group-sparsity
CRediT authorship contribution statement
Kyongson Jon: Writing - original draft, Methodology, Software. Ying Sun: Investigation, Resources. Qixin Li: Resources, Visualization, Software. Jun Liu: Conceptualization, Methodology, Writing - review & editing, Supervision. Xiaofei Wang: Validation, Data curation. Wensheng Zhu: Validation, Formal analysis.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
This work is supported in part by the National Natural Science Foundation of China (No. 11771072, 11701079, 61806024); the Science and Technology Development Plan of Jilin Province (No. 20191008004TC, 20180520026JH); the Fundamental Research Funds for the Central Universities (No. 2412020FZ023, 2412019FZ030); Jilin Provincial Department of Education (JJKH20190293KJ).
References (68)
- et al., A total variation recursive space-variant filter for image denoising, Digital Signal Process. (2015)
- et al., Variational Bayesian image restoration with group-sparse modeling of wavelet coefficients, Digital Signal Process. (2015)
- et al., Image restoration using total variation with overlapping group sparsity, Inf. Sci. (2015)
- et al., Nonlinear total variation based noise removal algorithms, Phys. D Nonlinear Phenom. (1992)
- et al., An efficient denoising framework using weighted overlapping group sparsity, Inf. Sci. (2018)
- et al., Fractional order total variation regularization for image super-resolution, Signal Process. (2013)
- et al., Exploiting the wavelet structure in compressed sensing MRI, Magn. Reson. Imag. (2014)
- et al., An efficient nonconvex regularization for wavelet frame and total variation based image restoration, J. Comput. Appl. Math. (2015)
- et al., Color image restoration and inpainting via multi-channel total curvature, Appl. Math. Model. (2018)
- et al., An efficient nonlocal variational method with application to underwater image restoration, Neurocomputing (2019)
- A generalized hybrid nonconvex variational regularization model for staircase reduction in image restoration, Neurocomputing
- Non-convex hybrid total variation for image denoising, J. Visual Commun. Image Represent.
- A novel dark channel prior guided variational framework for underwater image restoration, J. Visual Commun. Image Represent.
- Blind image deblurring via deep discriminative priors, Int. J. Comput. Vision
- Blind image deblurring by promoting group sparsity, Neurocomputing
- Adaptive overlapping-group sparse denoising for heart sound signals, Biomed. Signal Process. Control
- Deblurring Poisson noisy images by total variation with overlapping group sparsity, Appl. Math. Comput.
- Total variation with overlapping group sparsity for speckle noise reduction, Neurocomputing
- Total variation with overlapping group sparsity for deblurring images under Cauchy noise, Appl. Math. Comput.
- Non-convex weighted ℓp nuclear norm based ADMM framework for image restoration, Neurocomputing
- A new blind deblurring method via hyper-Laplacian prior, Procedia Comput. Sci.
- A variational model with hybrid hyper-Laplacian priors for Retinex, Appl. Math. Model.
- The fusion of panchromatic and multispectral remote sensing images via tensor-based sparse modeling and hyper-Laplacian prior, Inf. Fusion
- Total variation image restoration using hyper-Laplacian prior with overlapping group sparsity, Signal Process.
- Smart CMOS Image Sensors and Applications
- A review of image denoising methods, J. Eng. Sci. Technol. Rev.
- A projection method based on the splitting Bregman iteration for the image denoising, J. Appl. Math. Comput.
- Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers, SIAM J. Imag. Sci.
- Image restoration using spatially variant hyper-Laplacian prior, Signal Image Video Process.
- Mathematical methods in image reconstruction, Med. Phys.
Kyongson Jon received his B.S. degree and M.S. degree both from Kim Il Sung University, Pyongyang, D.P.R. of Korea, in 2004 and 2008, respectively. He has been a Ph.D. student in the School of Mathematics and Statistics at Northeast Normal University in Changchun, China since 2017. His major research interests include image processing, computer vision, and machine learning.
Ying Sun received his B.S. degree from Peking University, Beijing, China, in 1987. He is currently a research fellow in the Department of Land and Resources of Jilin Province. His research interest is the theory and project application of natural resources informatization construction.
Qixin Li received his B.S. and M.S. degrees from Jilin University, Changchun, China, in 2004 and 2010, respectively. He is now a senior engineer in the Department of Land and Resources of Jilin Province. His research interest is the project application of natural resources informatization construction.
Jun Liu received the Ph.D. degree from the University of Electronic Science and Technology of China, Chengdu, Sichuan, China, in 2015. He was a visiting student with the Department of Mathematics, University of California, Los Angeles, and a visiting scholar at The Chinese University of Hong Kong. Now he is an associate professor at Northeast Normal University. His research interests include scientific computation and variational methods in the mathematical modeling of image processing and computer vision.
Xiaofei Wang is currently an associate professor in the School of Mathematics and Statistics, Northeast Normal University, China. His research interests include graphical models, high-dimensional data analysis, and numerical algorithm analysis.
Wensheng Zhu received the Ph.D. degree from Northeast Normal University, China, in 2006. He was a postdoctoral associate at Yale University and a visiting scholar at the University of North Carolina at Chapel Hill, and is currently a professor at the School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China. His research interests include neuroimaging data analysis, biostatistics, and machine learning.