Elsevier

Neurocomputing

Volume 420, 8 January 2021, Pages 57-69

Image restoration using overlapping group sparsity on hyper-Laplacian prior of image gradient

https://doi.org/10.1016/j.neucom.2020.08.053

Abstract

Due to the ill-posed nature of image restoration, seeking a meaningful image prior remains a great challenge in image processing. Total variation with overlapping group sparsity (OGS-TV) has been successfully applied to image denoising/deblurring. In this paper, we further study the overlapping group sparsity of the image gradient. The sparsity is measured by the $\ell_q$ quasi-norm ($0<q<1$). The proposed regularizer reduces to the well-known hyper-Laplacian prior when the overlapping group size is 1. Although this may seem a simple extension of previous work, its regularization capability and the corresponding mathematical problems are still in demand for imaging science. To solve the non-convex and non-smooth minimization problem, we use the alternating direction method of multipliers as the main algorithmic framework. The difficult inner subproblem is tackled by the majorization-minimization method with a carefully derived majorizer. We carry out numerical experiments to demonstrate the effectiveness of the proposed regularizer in terms of PSNR and SSIM values.

Introduction

In recent years, with the popularization of the smartphone, the number of photos taken by phone cameras has increased rapidly. However, these cameras, equipped with small apertures, can produce blurred images when shooting low-light scenes at long exposures. The images can also suffer from several intrinsic noise sources, such as thermal sensor noise and read-out noise, which are usually assumed to be additive, independent Gaussian noise [1], [2]. Besides, many other factors (e.g., camera misfocus, dusty environments, and atmospheric turbulence) can degrade the real image [3], [4], [5], [2]. Image restoration, which aims to recover images from degraded observations, has received a lot of attention over the last decades. The involved techniques model the degradations and then apply the inverse procedure to obtain an approximation of the original image.

Assuming that the observation was blurred by a linear shift-invariant system and further contaminated by additive noise, the overall degradation model is given by
$$g = h * f + \eta, \qquad (1)$$
where $g$ is the blurred and noisy gray-scale image, $f$ is the latent clean gray-scale image, $h$ denotes the blurring kernel, $\eta$ is the additive noise, and $*$ denotes the convolution operator. In most cases, $\eta$ is assumed to be additive white Gaussian noise (AWGN) due to its good approximation of real-world noise and its mathematical tractability [6], [7], [4], [8], [9].

For convenience, all images are represented in column-major vectorized form and $h$ is converted to a blurring matrix $H$ under certain boundary conditions. When the image size is $n \times n$, both $g$ and $f$ are vectors of size $n^2$, and $H$ is an $n^2 \times n^2$ matrix. We can then reformulate the degradation model Eq. (1) as
$$g = Hf + \eta, \qquad \eta \sim \mathcal{N}(0, \sigma_\eta^2 I_{n^2}), \qquad (2)$$
where $I_{n^2}$ is the identity matrix of dimension $n^2 \times n^2$, and $\sigma_\eta$ is the standard deviation of the AWGN. Recovering the original image $f$ from the corrupted one $g$ by inverting the above degradation model is an ill-posed problem because $H$ is highly ill-conditioned [4], [10], [11].
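Under the periodic boundary condition, $H$ is block-circulant, so the forward model of Eq. (2) can be simulated cheaply in the Fourier domain. A minimal NumPy sketch (the 3×3 mean-blur kernel, image size, and noise level are illustrative assumptions, not taken from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
f = rng.random((n, n))                       # stand-in for the latent image f

# Hypothetical 3x3 mean-blur kernel h, zero-padded to n x n and rolled so
# its center sits at the origin; circular convolution then matches the
# periodic boundary condition assumed in the text.
h = np.zeros((n, n))
h[:3, :3] = 1.0 / 9.0
h = np.roll(h, (-1, -1), axis=(0, 1))

sigma = 0.01                                 # AWGN standard deviation (assumed)
eta = sigma * rng.standard_normal((n, n))

# g = h * f + eta, with the circular convolution computed via FFT
g = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f))) + eta
```

For periodic boundary conditions, this FFT-based circular convolution is exactly the matrix-vector product $Hf$, which is why ADMM-type solvers for model (3) can invert the data-fidelity subproblem in the Fourier domain.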

To solve this problem effectively, much research has been done within the maximum a posteriori (MAP) framework, which leads to the following regularized minimization problem
$$\min_f \ \frac{1}{2}\|g - Hf\|_2^2 + \lambda\,\phi(f), \qquad (3)$$
where the first term of Eq. (3) is the data fidelity term that describes the similarity between the observed image and the latent one, $\phi(f)$ is the regularization functional (also known as the image regularizer) on the true image, and $\lambda$ is a positive regularization parameter which controls the trade-off between the fidelity term and the image regularizer [12], [13], [9], [6].

Since the choice of a relevant image regularizer is essential for the good performance of the restoration model, numerous authors have proposed different regularization methods. Among them, the Tikhonov and total variation (TV) regularizations are two of the most traditional. The Tikhonov regularizer, $\phi_{TIK}(f) = \sum_{i,j=1}^{n} \|(\nabla f)_{i,j}\|_2^2$, leads to an objective functional with inexpensive computation, but tends to over-smooth important details such as sharp edges in natural images [14]. TV regularization is based on the observation that noisy signals have a larger TV than natural ones [15], [16]. In imaging science, the TV of an image is written as
$$\phi_{TV}(f) = \sum_{i,j=1}^{n} \|(\nabla f)_{i,j}\|_p \quad \text{with} \quad (\nabla f)_{i,j} = \big((\nabla_x f)_{i,j}, (\nabla_y f)_{i,j}\big) \in \mathbb{R}^2,$$
$$(\nabla_x f)_{i,j} = \begin{cases} f_{i+1,j} - f_{i,j}, & \text{if } i < n \\ f_{1,j} - f_{n,j}, & \text{if } i = n \end{cases} \quad \text{and} \quad (\nabla_y f)_{i,j} = \begin{cases} f_{i,j+1} - f_{i,j}, & \text{if } j < n \\ f_{i,1} - f_{i,n}, & \text{if } j = n, \end{cases}$$
for $i,j = 1, \dots, n$, where $(\nabla f)_{i,j}$, $(\nabla_x f)_{i,j}$, $(\nabla_y f)_{i,j}$, and $f_{i,j}$ denote the discrete gradient vector, the horizontal gradient, the vertical gradient, and the grey level of $f$ at pixel $(i,j)$ under the periodic boundary condition, respectively. $p$ equals 1 or 2 for the anisotropic and isotropic versions of the vector norm, respectively.
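The discrete TV above, with its periodic wrap-around in both directions, maps directly onto NumPy's `roll`. A sketch (the function name `tv` is ours, not from the paper):

```python
import numpy as np

def tv(f, p=1):
    """Discrete total variation of a square image f under periodic BC."""
    dx = np.roll(f, -1, axis=0) - f        # (grad_x f)_{i,j} = f_{i+1,j} - f_{i,j}
    dy = np.roll(f, -1, axis=1) - f        # (grad_y f)_{i,j} = f_{i,j+1} - f_{i,j}
    if p == 1:                             # anisotropic TV: |dx| + |dy|
        return np.sum(np.abs(dx) + np.abs(dy))
    return np.sum(np.sqrt(dx**2 + dy**2))  # isotropic TV (p = 2)
```

A constant image has zero TV, while adding noise strictly increases it, which is exactly the observation the TV prior builds on.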

TV regularization preserves important details such as sharp edges, but it also yields staircase artifacts due to its piecewise-constant assumption and lack of structural information [14], [17], [18], [19]. To compensate for this drawback, many authors have proposed excellent alternative regularizers, including the high-order TV prior [19], [20], [6], [21], fractional-order TV [22], the transform-domain sparsity prior [23], [24], [25], [11], total curvature [26], non-local TV [27], [28], hybrid priors [29], [30], [31], and learning-induced priors [32], [33].

Beyond such entry-wise sparsity priors, structural sparsity, which exploits the structure of the non-zero entries of a signal, has emerged recently to provide more information about the original signal [12], [17], [34]. In particular, Selesnick et al. [35] found that group sparsity of the signal gradient describes the natural tendency of large values to arise near other large values rather than in isolation. They considered this group-sparse behavior under the name overlapping group sparsity (OGS-TV). Liu et al. [14] successfully extended OGS-TV from 1D signal denoising to 2D non-blind image deblurring. Deng et al. [36] stated that iterative OGS-TV regularization, with the regularization parameter adaptively estimated according to different noise levels, can substantially suppress staircase artifacts in heart sound signals. To further extend OGS-TV, Kumar et al. [17] introduced adaptive weights within each group for denoising Gaussian or Poisson noisy images, and Adam et al. [21] combined non-convex high-order total variation with an overlapping group sparse regularizer. Besides, several authors adopted the OGS-TV regularizer to recover blurry images under impulse noise [37], Poisson noise [38], speckle noise [39], and Cauchy noise [40].

As another structural prior, Zha et al. [41] proposed a non-convex low-rank model that simultaneously exploits the structured sparsity of non-local similar patches and the non-convexity of rank minimization. However, it is time-consuming and computationally expensive because it requires grouping non-local similar patches and computing singular values.

Meanwhile, some researchers emphasized that the natural image gradient obeys a heavy-tailed distribution, and suggested that the hyper-Laplacian (HL) prior approximates this empirical distribution better than Gaussian or Laplacian priors [42], [43], [44], [45], [13], [46]. They modeled the HL prior as
$$P(f) \propto \prod_{i,j=1}^{n} e^{-\|(\nabla f)_{i,j}\|_q^q},$$
where $\|\cdot\|_q$ denotes the $\ell_q$ quasi-norm with $0 < q < 1$, namely $\|(\nabla f)_{i,j}\|_q^q = |(\nabla_x f)_{i,j}|^q + |(\nabla_y f)_{i,j}|^q$. Due to the non-convexity of the $\ell_q$ quasi-norm in
$$\phi_{HL}(f) = \sum_{i,j=1}^{n} \|(\nabla f)_{i,j}\|_q^q$$
(typically $0.5 \le q \le 0.8$), an efficient optimization algorithm is needed for its implementation.
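The negative log of this prior is exactly the $\ell_q$ regularizer $\phi_{HL}$ above, which for a discrete image can be evaluated as follows (a sketch; the name `phi_hl` and the default `q = 0.5`, one of the typical values quoted above, are ours):

```python
import numpy as np

def phi_hl(f, q=0.5):
    """phi_HL(f) = sum_{i,j} |grad_x f|^q + |grad_y f|^q with 0 < q < 1,
    using forward differences under the periodic boundary condition."""
    dx = np.roll(f, -1, axis=0) - f
    dy = np.roll(f, -1, axis=1) - f
    return np.sum(np.abs(dx) ** q + np.abs(dy) ** q)
```

The heavy-tailed behavior is visible in how the penalty scales: with $q = 0.5$, one jump of height 4 costs $4^{0.5} = 2$, while four separate jumps of height 1 cost $4 \times 1 = 4$, so the prior favors a few large gradients (sharp edges) over many small ones.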

As a representative method, Krishnan et al. [44] solved the non-convex subproblem analytically at every pixel for specific exponents and achieved significant speed-ups by using a lookup table (LUT). Other authors proposed efficient iterative schemes (e.g., the generalized soft-thresholding method [46] and the generalized iterated shrinkage algorithm [13]) and reported comparable results in accuracy and speed. Later, Cheng et al. [8] proposed a spatially variant HL prior to correctly identify the gradient distribution at each pixel; their method is free of parameter tuning and solves the non-blind deconvolution model in an alternating way. Zuo et al. [47] introduced the HL prior for blind deconvolution and proposed a principled discriminative learning model to handle parameter tuning. Cheng et al. [48] proposed a hyper-Laplacian regularization term for the reflectance and the illumination in the Retinex problem. Several authors utilized the HL prior to regularize global spectral structures for multispectral image denoising [45], [49], [50], [51]. Shi et al. [52] introduced a coupled constraint to combine the benefits of the OGS-TV and HL priors.
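The iterative schemes mentioned above all target the scalar problem $\min_x \frac{1}{2}(x-y)^2 + \lambda|x|^q$. The sketch below is a simplified reconstruction of the generalized soft-thresholding idea from the cited literature [13], [46], not the authors' code; the function name `gst`, its defaults, and the iteration count are our assumptions. Below a threshold $\tau$ the minimizer is exactly zero; otherwise a few fixed-point iterations $x \leftarrow |y| - \lambda q\, x^{q-1}$ converge to it.

```python
import numpy as np

def gst(y, lam, q=0.5, iters=10):
    """Generalized soft-thresholding sketch for min_x 0.5*(x-y)^2 + lam*|x|^q."""
    # Threshold below which the minimizer is exactly zero.
    t = (2.0 * lam * (1.0 - q)) ** (1.0 / (2.0 - q))
    tau = t + lam * q * t ** (q - 1.0)
    if abs(y) <= tau:
        return 0.0
    x = abs(y)                      # initialize at |y|, then iterate to the fixed point
    for _ in range(iters):
        x = abs(y) - lam * q * x ** (q - 1.0)
    return np.sign(y) * x
```

At the returned point the first-order condition $x - y + \lambda q\, x^{q-1}\,\mathrm{sign}(x) = 0$ holds to high accuracy, which is what makes such scalar shrinkage usable inside LUTs and splitting algorithms.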

Motivated by the facts that the hyper-Laplacian prior approximates the heavy-tailed distribution of the natural image gradient well and that overlapping group sparsity remedies staircase artifacts by introducing additional structural information, we introduce a new regularizer, the overlapping group sparsity on hyper-Laplacian (OGS-HL) prior of the natural image gradient, for denoising and deblurring images. Since our model adopts a more appropriate prior and considers structural information, we must deal with the computational issues caused by the non-convexity of the hyper-Laplacian and the inherent complexity of OGS. Therefore, we propose an algorithm to effectively solve the non-convex and non-smooth optimization problem, in which one subproblem is optimized by a majorization-minimization algorithm with a novel quadratic majorizer. Finally, numerical denoising and deblurring experiments illustrate that the proposed OGS-HL outperforms other closely related algorithms.

The remainder of this paper is organized as follows. In Section 2, we introduce several basic concepts concerning the proposed regularizer and briefly review some related methods. We also propose a quadratic majorizer for solving one subproblem arising from OGS-HL under the MM framework. In Section 3, we present a new image restoration model and derive an efficient algorithm for minimizing the non-convex and non-smooth objective functional with the alternating direction method of multipliers (ADMM) and the majorization-minimization (MM) algorithm. In Section 4, we demonstrate the superiority of the proposed method via numerical experiments, followed by an analysis of the parameter settings for best performance and of convergence. Finally, Section 5 concludes the paper.


Preliminaries

This section describes the proposed OGS-HL regularizer, the ADMM framework, and the MM method with a novel quadratic majorizer.

Proposed method

In this section, we first present the proposed model and then solve it under the ADMM framework.

Numerical experiment

In this section, we show several experimental results to validate the proposed method in comparison with closely related methods. The test images of different sizes from 256 × 256 to 1024 × 1024 are shown in Fig. 2. All experiments were performed under Windows 7 and MATLAB v9.1 on a Lenovo Desktop equipped with an Intel(R) Core (TM) i5 3.2G processor and 8 GB of RAM. The quality of the restored image is measured by the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index,
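PSNR, one of the two quality measures used here, compares the restored image against the ground truth via the mean squared error. A standard-definition sketch (the `peak=255.0` default assumes 8-bit images, which is our assumption, not stated in the excerpt):

```python
import numpy as np

def psnr(f_hat, f, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a better restoration."""
    mse = np.mean((np.asarray(f_hat, float) - np.asarray(f, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Since PSNR is a monotone function of the MSE alone, SSIM is reported alongside it to capture the structural fidelity that MSE ignores.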

Conclusions

In this paper, we proposed a new regularizer named OGS-HL, which is motivated by the fact that the hyper-Laplacian prior could better approximate the heavy-tailed distribution of natural image gradient and that the overlapping group sparsity with total variation would mitigate the staircase artifacts.

We adopted the ADMM framework to tackle the proposed non-convex and non-smooth optimization problem. Much of the computational complexity of the ADMM stems from the minimization of group-sparsity

CRediT authorship contribution statement

Kyongson Jon: Writing - original draft, Methodology, Software. Ying Sun: Investigation, Resources. Qixin Li: Resources, Visualization, Software. Jun Liu: Conceptualization, Methodology, Writing - review & editing, Supervision. Xiaofei Wang: Validation, Data curation. Wensheng Zhu: Validation, Formal analysis.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (No. 11771072, 11701079, 61806024); the Science and Technology Development Plan of Jilin Province (No. 20191008004TC, 20180520026JH); the Fundamental Research Funds for the Central Universities (No. 2412020FZ023, 2412019FZ030); Jilin Provincial Department of Education (JJKH20190293KJ).

Kyongson Jon received his B.S. degree and M.S. degree both from Kim Il Sung University, Pyongyang, D.P.R. of Korea, in 2004 and 2008, respectively. He has been a Ph.D. student in the School of Mathematics and Statistics at Northeast Normal University in Changchun, China since 2017. His major research interests include image processing, computer vision, and machine learning.

References (68)

  • L. Tang et al., A generalized hybrid nonconvex variational regularization model for staircase reduction in image restoration, Neurocomputing (2019)
  • S. Oh et al., Non-convex hybrid total variation for image denoising, J. Visual Commun. Image Represent. (2013)
  • G. Hou et al., A novel dark channel prior guided variational framework for underwater image restoration, J. Visual Commun. Image Represent. (2020)
  • L. Li et al., Blind image deblurring via deep discriminative priors, Int. J. Comput. Vision (2019)
  • D. Gong et al., Blind image deblurring by promoting group sparsity, Neurocomputing (2018)
  • S.-W. Deng et al., Adaptive overlapping-group sparse denoising for heart sound signals, Biomed. Signal Process. Control (2018)
  • X.-G. Lv et al., Deblurring Poisson noisy images by total variation with overlapping group sparsity, Appl. Math. Comput. (2016)
  • J. Liu et al., Total variation with overlapping group sparsity for speckle noise reduction, Neurocomputing (2016)
  • M. Ding et al., Total variation with overlapping group sparsity for deblurring images under Cauchy noise, Appl. Math. Comput. (2019)
  • Z. Zha et al., Non-convex weighted ℓp nuclear norm based ADMM framework for image restoration, Neurocomputing (2018)
  • J. Kong et al., A new blind deblurring method via hyper-Laplacian prior, Procedia Comput. Sci. (2017)
  • M.-H. Cheng et al., A variational model with hybrid hyper-Laplacian priors for Retinex, Appl. Math. Model. (2019)
  • L.-J. Deng et al., The fusion of panchromatic and multispectral remote sensing images via tensor-based sparse modeling and hyper-Laplacian prior, Inf. Fusion (2019)
  • M. Shi et al., Total variation image restoration using hyper-Laplacian prior with overlapping group sparsity, Signal Process. (2016)
  • J. Ohta, Smart CMOS Image Sensors and Applications (2007)
  • A. Levin, R. Fergus, F. Durand, W.T. Freeman, Image and depth from a conventional camera with a coded aperture, ACM...
  • I. Irum et al., A review of image denoising methods, J. Eng. Sci. Technol. Rev. (2015)
  • U. Schmidt, C. Rother, S. Nowozin, J. Jancsary, S. Roth, Discriminative non-blind deblurring, in: IEEE Conference on...
  • F. Heide, S. Diamond, M. Nießner, J. Ragan-Kelley, W. Heidrich, G. Wetzstein, Proximal: efficient image optimization...
  • B. Shi et al., A projection method based on the splitting Bregman iteration for the image denoising, J. Appl. Math. Comput. (2012)
  • R.H. Chan et al., Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers, SIAM J. Imag. Sci. (2013)
  • J. Cheng et al., Image restoration using spatially variant hyper-Laplacian prior, Signal Image Video Process. (2019)
  • F. Natterer et al., Mathematical methods in image reconstruction, Med. Phys. (2002)
  • G. Obozinski, L. Jacob, J.-P. Vert, Group lasso with overlaps: the latent group lasso approach, arXiv preprint...


Ying Sun received his B.S. degree from Peking University, Beijing, China, in 1987. He is currently a research fellow in the Department of Land and Resources of Jilin Province. His research interest is the theory and project application of natural resources informatization construction.

Qixin Li received his B.S. and M.S. degrees from Jilin University, Changchun, China, in 2004 and 2010, respectively. He is now a senior engineer in the Department of Land and Resources of Jilin Province. His research interest is the project application of natural resources informatization construction.

Jun Liu received the Ph.D. degree from the University of Electronic Science and Technology of China, Chengdu, Sichuan, China, in 2015. He was a visiting student with the Department of Mathematics, University of California, Los Angeles, and a visiting scholar at The Chinese University of Hong Kong. He is now an associate professor at Northeast Normal University. His research interests include scientific computation and variational methods in the mathematical modeling of image processing and computer vision.

Xiaofei Wang is currently an associate professor in the School of Mathematics and Statistics, Northeast Normal University, China. His research interests include graphical models, high-dimensional data analysis, and numerical algorithm analysis.

Wensheng Zhu received the Ph.D. degree from Northeast Normal University, China, in 2006. He was a postdoctoral associate at Yale University and a visiting scholar at the University of North Carolina at Chapel Hill. He is currently a professor in the School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China. His research interests include neuroimaging data analysis, biostatistics, and machine learning.
