1 Introduction

Image reconstruction from blurry and noisy observations is an important task in various fields, including remote sensing, astronomy, medical imaging, and video imaging. Many image deblurring methods have been proposed based on statistics [11, 20], Fourier or wavelet transforms [29, 31], and variational analysis [13, 14, 39], all under the assumption that the blur is caused by a linear operator. The conventional observation model is expressed as

$$\begin{aligned} y=Hu+\eta , \end{aligned}$$
(1)

where \(y,u\in R^{N}\) are the observed and original \(\sqrt{N}\times \sqrt{N}\) images, respectively; \(H\in R^{N\times N}\) is the blur operator, and \(\eta \) is zero-mean white Gaussian noise with variance \(\sigma ^2\). The deblurring problem consists of restoring \(u\) from the observed image \(y\).

However, recovering \(u\) from \(y\) is an ill-posed problem. A natural way to address this is to add a regularization term that stabilizes the inversion. Under such a regularization scheme, we obtain \(\hat{u}\) by solving

$$\begin{aligned} \hat{u}= \arg \min _u\frac{1}{2}\left\| y-Hu \right\| _2^2+\mu \cdot \mathrm{pen}(u), \end{aligned}$$
(2)

where \(\left\| \cdot \right\| _2\) stands for the Frobenius norm, \(\mathrm{pen}(u)\) is a penalty function, and \(\mu \) is a positive regularization parameter. Tikhonov regularization and total variation regularization are two well-known classes of regularizers in the literature.

  • Tikhonov [45] regularization: \(\mathrm{pen}(u)=\left\| L u \right\| _2^2\), where \(L\) stands for some finite difference operator. In this case, the objective function (2) is quadratic, so an efficient algorithm that minimizes it through a system of linear equations is easy to develop (a minimal FFT-based sketch of such a solver is given after this list). However, Tikhonov regularization often fails to preserve sharp image edges, because it tends to oversmooth the image.

  • Total variation (TV) regularization [39, 40]: \(\mathrm{pen}(u)=\left\| \nabla u \right\| _2\) or \(\left\| \nabla u \right\| _1\), where \(\nabla \) stands for the discrete gradient. TV regularization is also known as the Rudin–Osher–Fatemi (ROF) model; it was first used by Rudin, Osher, and Fatemi in [40] for image denoising and then extended to image deblurring in [39]. Later, Yuan et al. [48] proposed a spatially weighted TV method for multiple-frame image super-resolution and obtained a significant improvement. Compared with the Tikhonov regularizer, TV-based methods perform well in preserving important image features, such as points and edges. However, the main known drawback of TV regularization is that it usually creates staircasing artifacts; i.e., the TV regularizer tends to create large homogeneous zones, which deletes some textures.
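For the Tikhonov case, the normal equations can be solved in closed form with the FFT when the boundary conditions are periodic. The following MATLAB lines are a minimal sketch under that assumption; the test image, kernel size, noise level, and the value of \(\mu \) are illustrative choices, not the settings used later in this paper.

% Minimal Tikhonov deblurring sketch (periodic boundaries; illustrative values)
u  = im2double(imread('cameraman.tif'));            % clean test image in [0,1]
h  = fspecial('gaussian', 9, 3);                    % Gaussian blur kernel (assumed)
y  = imfilter(u, h, 'circular') + 0.01*randn(size(u));   % observation y = Hu + eta
Hf = psf2otf(h, size(y));                           % Fourier eigenvalues of H
Lf = psf2otf([0 -1 0; -1 4 -1; 0 -1 0], size(y));   % discrete Laplacian as L
mu = 0.01;                                          % regularization parameter (assumed)
% minimizer of 0.5*||y - H*u||^2 + mu*||L*u||^2, obtained in the Fourier domain
u_hat = real(ifft2(conj(Hf).*fft2(y) ./ (abs(Hf).^2 + 2*mu*abs(Lf).^2)));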

To suppress the staircasing artifacts created by TV-based models, detail-preserving methods such as sparse representations have been introduced into image deblurring. A wide variety of dictionaries have been proposed, e.g., Fourier, wavelet, contourlet, curvelet, and wavelet frame, under the assumption that natural images are usually sparse in an appropriate transform domain. An optimal dictionary should be rich enough to yield the sparsest representation. Recent publications have shown that wavelet frame based approaches to image representation can significantly improve image restoration [4–10, 12, 16–18, 30], because wavelet frames explicitly exploit the sparse approximation of natural images and effectively preserve image details. The basic idea of wavelet frame based approaches is that images can be sparsely approximated by properly designed wavelet frames; that is, we can solve (1) in the appropriate wavelet frame domain and obtain a sparse solution of (1) in that domain. The regularization used in the majority of wavelet frame based models is the \(l_1\)-norm of the wavelet frame coefficients.

Inspired by the edge-preserving advantage of TV and the capability of wavelet frame based methods to sparsely approximate piecewise smooth images, we combine the wavelet frame and TV regularizations into a new model for image deblurring. The new model is expected to enjoy the advantages of both regularizations. The redundancy and translation invariance of the wavelet frame enable the proposed model to separate noise from image information; therefore, the proposed model can efficiently suppress the staircasing artifacts caused by TV regularization. Experimental results show that the proposed model produces restored images with higher visual quality, peak signal-to-noise ratio (PSNR), and improvement in signal-to-noise ratio (ISNR) than TV and wavelet frame restoration methods. It should be mentioned that Pustelnik et al. [35] recently considered a functional with dual-tree wavelet and TV terms and proposed a parallel proximal algorithm (PPXA) for image restoration. We also conduct experiments to compare our method with this hybrid regularization method; the results show that our method achieves higher PSNR and ISNR values.

To solve the new image deblurring model, the augmented Lagrangian (AL) technique is adopted to develop a reconstruction algorithm. The AL method, first introduced in [27, 34], has notable stability and fast convergence and is currently widely used for the minimization of convex functionals under linear equality constraints. Similar to [43], we utilize the AL method as a single, efficient tool for deriving explicit reconstruction algorithms. First, we transform the proposed model into a constrained problem by introducing auxiliary variables. Afterward, we transform the resulting constrained problem into an unconstrained one by using a variable splitting operator. Finally, the obtained unconstrained problem is solved by the AL technique. Simulation experiments show that the algorithm is effective and efficient.

The rest of this paper is organized as follows. In Sect. 2, we review the wavelet frame system and image representation. The framelet-based image reconstruction and the new model, along with the corresponding optimization algorithm, are introduced in Sect. 3. In Sect. 4, we conduct experiments to demonstrate the effectiveness of the proposed model and the corresponding algorithm. Concluding remarks and future works are given in Sect. 5.

2 The Wavelet Frame and Image Representation

In this section, we briefly review the concept of tight wavelet frames. Details on constructing wavelet frames and the corresponding framelet theory can be found in [4–9, 12, 16–18, 22, 41].

A countable set \(X\subset L^2(R)\) is a tight frame if

$$\begin{aligned} f=\sum \limits _{\psi \in X}{\langle f,\psi \rangle \psi },\quad \forall f\in L^2(R), \end{aligned}$$
(3)

where \(\langle \cdot ,\cdot \rangle \) is the inner product of \(L^2(R)\). For a given \(\varPsi :=\left\{ \psi _1,\psi _2,\ldots ,\psi _n\right\} \subset L^2(R)\), the wavelet system is defined by the collection of the dilations and the shifts of \(\varPsi \) as

$$\begin{aligned} X(\varPsi ):=\left\{ 2^{j/2}\psi _i(2^j\cdot -k):1\le i \le n;j,k\in Z\right\} . \end{aligned}$$
(4)

If \(X(\varPsi )\) forms a tight frame of \(L^2(R)\), then \(X(\varPsi )\) is a tight wavelet frame and each \(\psi \in \varPsi \) is a framelet.

To construct compactly supported wavelet tight frames, one starts from a compactly supported scaling function \(\phi \) whose refinement mask \(h_0\) satisfies

$$\begin{aligned} \hat{\phi }(2\omega )=\hat{h}_0(\omega )\hat{\phi }(\omega ), \end{aligned}$$
(5)

where \(\hat{\phi }\) is the Fourier transform of \(\phi \), \(\hat{h}_0\) is the Fourier transform of \(h_0\), and \(\hat{h}_0\) is \(2\pi \)-periodic. For a given compactly supported scaling function \(\phi \), the construction of a wavelet tight frame requires an appropriate set of framelets \(\psi \in \varPsi \), which are defined in the Fourier domain by

$$\begin{aligned} \hat{\psi }_i(2\omega )=\hat{h}_i(\omega )\hat{\phi }(\omega ), \quad i=1,\ldots ,n. \end{aligned}$$
(6)

Here, \(\hat{h}_i(i=1,\ldots ,n)\) is \(2\pi \)-periodic. The unitary extension principle [38] implies that the system \(X(\varPsi )\) in (4) forms a tight frame in \(L^2(R)\) provided that

$$\begin{aligned} \sum \limits _{i=0}^n\left| \hat{h}_i(\omega ) \right| ^2=1 \quad \mathrm{and} \quad \sum \limits _{i=0}^n \hat{h}_i(\omega )\overline{\hat{h}_i(\omega +\pi )}=0, \end{aligned}$$
(7)

for almost all \(\omega \) in \(R\). From (7), we can see that \(h_0\) corresponds to a lowpass filter, whereas \(\left\{ h_i,i=1,\ldots ,n\right\} \) correspond to highpass filters. The sequences of the Fourier coefficients of \(\left\{ \hat{h}_i,i=1,\ldots ,n\right\} \), i.e., the filters \(h_i\), are called framelet masks. In our implementation, we adopt the piecewise linear B-spline framelets [38]. In this case, the refinement mask is \(\hat{h}_0(\omega )=\cos ^2(\frac{\omega }{2})\), whose corresponding lowpass filter is \(h_0=\frac{1}{4}\left[ 1,2,1\right] \). The two framelet masks are \(\hat{h}_1(\omega )=-\frac{\sqrt{2}i}{2}\sin (\omega )\) and \(\hat{h}_2(\omega )=\sin ^2(\frac{\omega }{2})\), whose corresponding highpass filters are \(h_1=\frac{\sqrt{2}}{4}\left[ 1,0,-1\right] \) and \(h_2=\frac{1}{4}\left[ -1,2,-1\right] \), respectively. The associated refinable function and framelets are shown in Fig. 1.

Fig. 1 Piecewise linear refinable spline and framelets

The wavelet frame transform is computed with the wavelet frame decomposition algorithm given in [19]. We conduct the decomposition without downsampling by means of the refinement and framelet masks. We use \(W\) to denote the wavelet frame decomposition and \(W^\mathrm{T}\) to denote the wavelet frame reconstruction; thus, \(W^\mathrm{T}W=I\), where \(I\) is the identity matrix, and the rows of \(W\) form a tight frame in \(R^n\). For every vector \(u\in R^n\), the frame coefficients of \(u\) are computed by \(x=Wu\), and the reconstruction is given by \(u=W^\mathrm{T} x\), i.e., \(u=W^\mathrm{T}(Wu)\). It should be mentioned that, in practical computation, we proceed in the s-dimensional setting. The s-dimensional framelet system is easily constructed by tensor products of the 1D framelet system. Details on generating the wavelet frame transform are given in [12].
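As an illustration, the following MATLAB sketch performs one level of the undecimated (translation-invariant) framelet decomposition and reconstruction with the piecewise linear filters \(h_0, h_1, h_2\) given above, using 2D filters obtained by tensor products; the test image file name is an arbitrary assumption, and periodic boundary conditions are assumed.

% One-level undecimated framelet transform W and its adjoint W^T (sketch)
h{1} = [1 2 1]/4;                 % lowpass  h0
h{2} = sqrt(2)/4*[1 0 -1];        % highpass h1
h{3} = [-1 2 -1]/4;               % highpass h2
u = im2double(imread('cameraman.tif'));         % test image (assumed)
x = cell(3,3);                                  % frame coefficients x = W*u
for i = 1:3
    for j = 1:3
        f2d    = h{i}'*h{j};                    % 2D filter by tensor product
        x{i,j} = imfilter(u, f2d, 'circular');  % undecimated filtering
    end
end
% Reconstruction u = W^T*x; the rows of W form a tight frame, so W^T*W = I
ur = zeros(size(u));
for i = 1:3
    for j = 1:3
        f2d = h{i}'*h{j};
        ur  = ur + imfilter(x{i,j}, rot90(f2d,2), 'circular');  % adjoint filtering
    end
end
fprintf('perfect reconstruction error: %.2e\n', norm(u - ur, 'fro'));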

3 The New Model for Image Deblurring and Its Optimization Algorithm

3.1 The Proposed Model for Image Deblurring

We first briefly describe the TV and framelet variational methods for clarity.

We use \(\nabla _1\) to denote the difference operator given by \(\nabla _1u(i,j)=u(i+1,j)-u(i,j)\) for \(i=1,\ldots ,N-1;\; j=1,\ldots ,N\), with the periodic wrap-around \(\nabla _1u(N,j)=u(1,j)-u(N,j)\) for \(j=1,\ldots ,N\). Thus, \(\nabla _1\) is a linear mapping from \(R^{N\times N}\) to \(R^{N\times N}\). Similarly, \(\nabla _2\) is the difference operator from \(R^{N\times N}\) to \(R^{N\times N}\) given by \(\nabla _2u(i,j)=u(i,j+1)-u(i,j)\) for \(i=1,\ldots ,N;\; j=1,\ldots ,N-1\), with \(\nabla _2u(i,N)=u(i,1)-u(i,N)\) for \(i=1,\ldots ,N\). The discrete gradient operator is defined as \(\nabla u=(\nabla _1 u,\nabla _2 u)\). The TV of \(u\) is represented either by \(\left\| \sqrt{(\nabla _1u)^2+(\nabla _2u)^2} \right\| _1\) (isotropic) or by \(\left\| \nabla _1u \right\| _1+\left\| \nabla _2u \right\| _1\) (anisotropic). The isotropic TV model for deblurring solves the following optimization problem:

$$\begin{aligned} \hat{u}=\arg \min _u\frac{1}{2}\left\| y-Hu \right\| _2^2+\mu \left\| \sqrt{(\nabla _1u)^2+(\nabla _2u)^2} \right\| _1,\end{aligned}$$

where \(\mu \) is the positive regularization parameter used to balance the two minimization terms. Correspondingly, the anisotropic TV variational approach for deblurring is given by:

$$\begin{aligned} \hat{u}=\arg \min _u\frac{1}{2}\left\| y-Hu \right\| _2^2+\mu (\left\| \nabla _1 u \right\| _1+\left\| \nabla _2 u \right\| _1). \end{aligned}$$

TV-based methods work well in terms of preserving edges while suppressing noise. However, both types of TV discretization lead to the so-called staircasing effects [21, 32]. Many algorithms have been proposed to solve the TV model, such as the time-marching strategy [39], the primal–dual variable strategy [15], the more recent MM method [33], the FTVd method [46], the ADM method [44], the split Bregman scheme [26], and the ALM [43].
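For concreteness, the periodic difference operators \(\nabla _1\), \(\nabla _2\) and the two TV discretizations defined above can be written in a few lines of MATLAB; this is a sketch assuming an \(N\times N\) image \(u\) is already in the workspace, with circshift implementing the wrap-around.

% Periodic forward differences and the isotropic / anisotropic TV of an image u
d1 = circshift(u, [-1  0]) - u;     % nabla_1 u: u(i+1,j) - u(i,j), wrapping at i = N
d2 = circshift(u, [ 0 -1]) - u;     % nabla_2 u: u(i,j+1) - u(i,j), wrapping at j = N
tv_iso   = sum(sum(sqrt(d1.^2 + d2.^2)));            % isotropic TV
tv_aniso = sum(sum(abs(d1))) + sum(sum(abs(d2)));    % anisotropic TV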

In the wavelet frame-based framework, Eq. (1) can be written as

$$\begin{aligned} y={ HW}^\mathrm{T} x+\eta , \end{aligned}$$
(8)

which is obtained by \(u=W^\mathrm{T}x\), where \(W^\mathrm{T}\) is the adjoint of the framelet transform \(W\) and \(x\) is the vector of representation coefficients. Then, the estimation can be performed by solving the following minimization problem:

$$\begin{aligned} \hat{x}=\arg \min _x\left\| y-{ HW}^\mathrm{T}x \right\| _2^2+\mu \left\| x \right\| _p, \end{aligned}$$
(9)

where \(\left\| \cdot \right\| _2^2\) stands for the square of the Frobenius norm, ensuring that the solution agrees with the observation; \(\left\| \cdot \right\| _p\) is the \(l_p\)-norm, \(0\le p\le 1\), and \(\mu \) is the regularization parameter. The restored image is given by \(\hat{u}=W^\mathrm{T} \hat{x}\). Figueiredo and Nowak [24, 25] use orthogonal wavelets and redundant wavelets (also called wavelet frames) for image deconvolution. In [23], Elad utilizes the Haar wavelet frame for image deblurring and obtains good results. Li et al. [30] propose to handle blurred images corrupted by mixed Gaussian-impulse noise by minimizing a new functional that includes a content-dependent fidelity term and a novel regularizer defined as the \(l_1\)-norm of geometric tight framelet coefficients; a key advantage is that the algorithm is parameter free, which makes it more practical. More recently, Cai et al. [10] proposed a regularization-based approach that removes motion blurring by regularizing the sparsity of both the original image and the motion blur kernel under tight wavelet frame systems. This method performs quite well when the motion blur is uniform over the image, but it cannot deal with nonuniform motion blurring.

In this paper, we combine the wavelet frame and TV regularizations and propose the following model for image deblurring and denoising:

$$\begin{aligned} \hat{x}=\arg \min _x\frac{1}{2}\left\| y-{ HW}^\mathrm{T} x \right\| _2^2+\alpha \left\| W^\mathrm{T} x \right\| _{TV}+\beta \left\| x \right\| _1, \end{aligned}$$
(10)

where \(y\in R^{N\times N}\) is the observed image, \(H\in R^{N\times N}\) is the blurring operator, \(W^\mathrm{T}\) is the discrete inverse wavelet frame transform, \(x\) is the vector of wavelet frame coefficients, and \(\alpha \) and \(\beta \) are positive regularization parameters that balance the three terms. The deblurred image is given by \(\hat{u}=W^\mathrm{T}\hat{x}\). Here, \(\left\| \cdot \right\| _\mathrm{TV}\) denotes the total variation defined above. We emphasize that our approach applies to both isotropic and anisotropic TV deconvolution problems; for simplicity, we treat only the isotropic case in detail, because the treatment of the anisotropic case is completely analogous. The proposed model, henceforth called TVframe, combines the TV and framelet approaches.

We adopt the augmented Lagrangian (AL) technique to solve the inverse problem (10). The AL technique was recently applied in [43] to TV-regularized minimization; it has notable stability and a fast convergence rate compared with traditional methods. The AL method is discussed in the next subsection.

3.2 The Proposed Algorithm for the TVframe Model

We now apply the AL method to the basic problem (10).

By using two auxiliary variables \(z\) and \(v\), we first transform model (10) into the following constrained problem:

$$\begin{aligned} \min _{x,z,v}\frac{1}{2}\left\| y-Hz \right\| _2^2+\alpha \sum \left\| v \right\| _{2}+\beta \left\| x \right\| _1, \quad \mathrm{s.t.} \quad z=W^\mathrm{T} x,\; v=\nabla z, \end{aligned}$$
(11)

and then, using a variable splitting operator, we transform the resulting constrained problem (11) into the following unconstrained problem:

$$\begin{aligned} \min _{x,z,v}\frac{1}{2}\left\| y-Hz \right\| _2^2+\frac{\lambda _1}{2}\left\| z-W^\mathrm{T} x \right\| _2^2 +\frac{\lambda _2}{2}\left\| v-\nabla z \right\| _2^2+\alpha \sum \left\| v \right\| _{2}+\beta \left\| x \right\| _1. \end{aligned}$$
(12)

The obtained unconstrained problem (12) is solved by the AL method. The AL function associated with the constrained problem (11) has the following form:

$$\begin{aligned} {\mathcal L}_{{\mathcal A}}(x,z,v,p_1,p_2)&= \frac{1}{2}\left\| y-Hz \right\| _2^2+\frac{\lambda _1}{2}\left\| z-W^\mathrm{T} x \right\| _2^2 +\frac{\lambda _2}{2}\left\| v-\nabla z \right\| _2^2 \nonumber \\&+\,\alpha \sum \left\| v \right\| _{2}+\beta \left\| x \right\| _1+\left\langle p_1^k,\;z-W^\mathrm{T} x \right\rangle +\left\langle p_2^k,v-\nabla z\right\rangle ,\nonumber \\ \end{aligned}$$
(13)

where \(p_1^k\) and \(p_2^k\) are the Lagrange multipliers (vectors in \(R^{N\times N}\)); their presence avoids the need to let \(\lambda _1\) and \(\lambda _2\) go to infinity. The classical AL method with Lagrangian \({\mathcal L}_{{\mathcal A}}(x,z,v,p_1,p_2)\) gives

$$\begin{aligned} \left( \hat{x},\hat{z},\hat{v}\right) =\arg \min _{x,z,v} \max _{p_1,p_2}{\mathcal L}_{{\mathcal A}}\left( x,z,v,p_1,p_2\right) \end{aligned}$$
(14)

as the solution to problem (10). Finding the saddle point requires the minimization of \({\mathcal L}_{{\mathcal A}}\) with respect to \(x\), \(z\), and \(v\) and its maximization with respect to \(p_1\) and \(p_2\). A common practical approach is to obtain the saddle point by alternating optimization. From (14), we obtain the following optimization algorithm:

[Algorithm 1: the alternating optimization scheme, consisting of the subproblems (15)–(17) and the multiplier updates (18)–(19); not reproduced here]

Problems (18) and (19) can be solved efficiently. We are now left with the minimization problems (15), (16), and (17) to address.

3.2.1 Solving the \(v\)-Subproblem with \(z\) and \(x\) Fixed

For problem (15),

$$\begin{aligned} v^{k+1}&= \arg \min _v{\mathcal L}_{{\mathcal A}}\left( x^{k},z^k,v\right) \\&= \arg \min _v\frac{\lambda _2}{2}\left\| v-\nabla z^k \right\| _2^2+\alpha \sum \left\| v \right\| _2+\left\langle p_2^k,v-\nabla z^k\right\rangle \\&= \arg \min _v\frac{\lambda _2}{2}\left\| v-\nabla z^k \right\| _2^2+\alpha \sum \left\| v \right\| _2+ \left( p_2^k \right) ^\mathrm{T}\cdot \left( v-\nabla z^k \right) . \end{aligned}$$

The minimizer of this subproblem can be obtained by applying the two-dimensional shrinkage [28], which gives

$$\begin{aligned} v^{k+1}=\frac{\nabla z^k-\frac{p_2^k}{\lambda _2}}{\left\| \nabla z^k-\frac{p_2^k}{\lambda _2} \right\| _2}\max \left\{ \left\| \nabla z^k-\frac{p_2^k}{\lambda _2} \right\| _2-\frac{\alpha }{\lambda _2},0\right\} . \end{aligned}$$
(20)

When the TV is anisotropic, \(v\) is given by the simpler one-dimensional shrinkage

$$\begin{aligned} v^{k+1}=\max \left\{ \left| \nabla z^k-\frac{p_2^k}{\lambda _2} \right| -\frac{\alpha }{\lambda _2},0\right\} \circ \mathrm{sgn}\left( \nabla z^k-\frac{p_2^k}{\lambda _2}\right) , \end{aligned}$$

where “\(\circ \)” represents the point-wise product and “\(\mathrm{sgn}\)” stands for the signum function defined by

$$\begin{aligned} \mathrm{sgn}(x):= \left\{ \begin{array}{l@{\quad }l} +1 &{}\quad \mathrm{if}\; x > 0;\\ 0 &{}\quad \mathrm{if}\;x=0;\\ -1 &{}\quad \mathrm{if}\; x<0. \end{array} \right. \end{aligned}$$
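A direct MATLAB transcription of the isotropic update (20) and of the anisotropic one-dimensional shrinkage might look as follows; this is a sketch in which d1, d2 denote the two components of \(\nabla z^k\), p21, p22 the components of \(p_2^k\), and the built-in constant eps guards the division where the norm vanishes.

% Isotropic two-dimensional shrinkage, Eq. (20)
w1  = d1 - p21/lambda2;                      % components of nabla z^k - p_2^k/lambda2
w2  = d2 - p22/lambda2;
nrm = sqrt(w1.^2 + w2.^2);
mag = max(nrm - alpha/lambda2, 0);
v1  = mag .* w1 ./ max(nrm, eps);            % eps avoids 0/0 where nrm = 0
v2  = mag .* w2 ./ max(nrm, eps);
% Anisotropic one-dimensional (soft) shrinkage, applied componentwise
v1a = sign(w1) .* max(abs(w1) - alpha/lambda2, 0);
v2a = sign(w2) .* max(abs(w2) - alpha/lambda2, 0);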

3.2.2 Solving the \(z\)-Subproblem with \(v\) and \(x\) Fixed

For problem (16),

$$\begin{aligned} z^{k+1}&= \arg \min _z \frac{1}{2} \left\| y-Hz \right\| _2^2+\frac{\lambda _1}{2}\left\| z-W^\mathrm{T} x^k \right\| _2^2 \\&+\,\frac{\lambda _2}{2} \left\| v^{k+1}-\nabla z \right\| _2^2+\left\langle p_1^k,\; z-W^\mathrm{T}x^k\right\rangle +\left\langle p_2^k,v^{k+1}-\nabla z \right\rangle \\&= \arg \min _z \frac{1}{2} \left\| y-Hz \right\| _2^2+\frac{\lambda _1}{2}\left\| z-W^\mathrm{T} x^k \right\| _2^2+\frac{\lambda _2}{2} \left\| v^{k+1}-\nabla z \right\| _2^2 \\&+\,\left( p_1^k\right) ^\mathrm{T}\cdot \left( z-W^\mathrm{T} x^k\right) +\left( p_2^k \right) ^\mathrm{T}\cdot \left( v^{k+1}-\nabla z\right) . \end{aligned}$$

Since the above function is differentiable, the optimality condition gives the following linear equation:

$$\begin{aligned} -H^\mathrm{T}(y-Hz)+\lambda _1 \left( z-W^\mathrm{T} x^k\right) -\lambda _2\nabla ^\mathrm{T}\left( v^{k+1}-\nabla z \right) +p_1^k-\nabla ^\mathrm{T} p_2^k=0, \end{aligned}$$

where \(\nabla ^\mathrm{T}\) represents the adjoint of \(\nabla \). Denoting by \({\mathcal F}(u)\) the fast Fourier transform of \(u\), and assuming periodic boundary conditions so that \(H\) and \(\nabla \) are diagonalized by the FFT, we obtain the solution of \(z\) as follows:

$$\begin{aligned} z^{k+1}={\mathcal F}^{-1}\left( \varUpsilon \right) , \end{aligned}$$
(21)

where

$$\begin{aligned} \varUpsilon =\frac{{\mathcal F}\left( H^\mathrm{T}y+\lambda _1W^\mathrm{T} x^k+\lambda _2\nabla ^\mathrm{T} v^{k+1}+\nabla ^\mathrm{T} p_2^k-p_1^k\right) }{{\mathcal F}\left( \lambda _1 I+H^\mathrm{T}H+\lambda _2\nabla ^\mathrm{T}\nabla \right) }. \end{aligned}$$
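Under periodic boundary conditions the division above is carried out componentwise in the Fourier domain, and (21) can be implemented directly. Below is a MATLAB sketch; psf2otf comes from the Image Processing Toolbox, h denotes the blur kernel, Wtx stands for \(W^\mathrm{T}x^k\), p1 for \(p_1^k\), and (v1, v2), (p21, p22) are the components of \(v^{k+1}\) and \(p_2^k\) (all variable names are our own).

% z-update, Eq. (21): solve (H'H + lambda1*I + lambda2*grad'*grad) z = rhs via the FFT
e   = zeros(size(y));  e(1,1) = 1;               % unit impulse
Hf  = psf2otf(h, size(y));                       % Fourier eigenvalues of the blur operator H
D1f = fft2(circshift(e, [-1 0]) - e);            % Fourier multiplier of nabla_1
D2f = fft2(circshift(e, [0 -1]) - e);            % Fourier multiplier of nabla_2
num = conj(Hf).*fft2(y) + lambda1*fft2(Wtx) - fft2(p1) ...
      + conj(D1f).*fft2(lambda2*v1 + p21) + conj(D2f).*fft2(lambda2*v2 + p22);
den = abs(Hf).^2 + lambda1 + lambda2*(abs(D1f).^2 + abs(D2f).^2);
z   = real(ifft2(num ./ den));                   % z^{k+1}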

3.2.3 Solving the \(x\)-Subproblem with \(z\) and \(v\) Fixed

For minimization of (17),

$$\begin{aligned} x^{k+1}&= \arg \min _x{\mathcal L}_{{\mathcal A}}\left( x,z^{k+1},v^{k+1}\right) \\&= \arg \min _x\frac{\lambda _1}{2}\left\| z^{k+1}-W^\mathrm{T} x \right\| _2^2+\beta \left\| x \right\| _1+\left\langle p_1^k,z^{k+1}-W^\mathrm{T} x \right\rangle \\&= \arg \min _x\frac{\lambda _1}{2}\left\| z^{k+1}-W^\mathrm{T} x \right\| _2^2+\beta \left\| x \right\| _1+ \left( p_1^k\right) ^\mathrm{T}\cdot \left( z^{k+1}-W^\mathrm{T} x\right) . \end{aligned}$$

The subproblem can be minimized by the fast iterative shrinkage-thresholding algorithm (FISTA) [2]. The outline of the algorithm is as follows: we choose \(r^1=x^0,\; t^1=1\). For each \(l\ge 1\), we let

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \displaystyle x^l=s_{\beta /L}\left( r^l-\frac{\lambda _1WW^\mathrm{T}}{L}r^l+\frac{W\left( \lambda _1z^{k+1}+p_1^k\right) }{L}\right) , \\ \displaystyle t^{l+1}=\frac{1+\sqrt{1+4(t^l)^2}}{2}, \\ \displaystyle r^{l+1}=x^l+\left( \frac{t^l-1}{t^{l+1}}\right) \left( x^l-x^{l-1}\right) ,\\ \end{array}\right. \end{aligned}$$
(22)

where \(s_\delta \) is the shrinkage operator defined by

$$\begin{aligned} s_\delta (x):=\mathrm{sgn}(x)\circ \max \left\{ \left| x \right| -\delta ,0\right\} . \end{aligned}$$
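A sketch of this inner FISTA loop in MATLAB is given below; it assumes function handles W and Wt for the frame decomposition \(W\) and its adjoint \(W^\mathrm{T}\) operating on numeric arrays (e.g., coefficients stacked along a third dimension), and uses \(L=\lambda _1\), which is a valid Lipschitz constant for a tight frame since \(\Vert WW^\mathrm{T}\Vert =1\). The variable z denotes \(z^{k+1}\), p1 denotes \(p_1^k\), and inner_iters is an assumed iteration count.

% FISTA for the x-subproblem, Eq. (22)
L = lambda1;                        % Lipschitz constant of the smooth part (tight frame)
r = x;  xold = x;  t = 1;
for l = 1:inner_iters
    g    = r - (lambda1/L)*W(Wt(r)) + W(lambda1*z + p1)/L;   % gradient step
    xnew = sign(g) .* max(abs(g) - beta/L, 0);               % soft shrinkage s_{beta/L}
    tnew = (1 + sqrt(1 + 4*t^2))/2;
    r    = xnew + ((t - 1)/tnew)*(xnew - xold);
    xold = xnew;  t = tnew;
end
x = xnew;                           % x^{k+1}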

Having shown how to solve the subproblems (15)–(17), we present the following alternating minimization procedure for solving (13).

[Algorithm 2: the alternating minimization procedure for solving (13); not reproduced here]

For simplicity, we terminate Algorithm 2 according to the relative change in \(u\) in all of our experiments; i.e., the iteration stops when

$$\begin{aligned} \frac{\left\| u^{k+1}-u^{k} \right\| _2}{\left\| u^k \right\| _2}< \epsilon , \end{aligned}$$

where \(\epsilon > 0\) is a given tolerance.
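Putting the pieces together, the outer loop of Algorithm 2 might be organized as follows. This is only a sketch: the helper functions solve_v, solve_z, and solve_x stand for the subproblem solvers sketched in the preceding subsections, Wt denotes the frame reconstruction \(W^\mathrm{T}\), all variables are assumed suitably initialized (in the experiments below, the image estimate is initialized with the observed image \(y\)), and the multiplier updates are written in the standard augmented Lagrangian form, which we assume here since the algorithm listing (18)–(19) is not reproduced above.

% Outer alternating minimization loop for (13) (sketch; multiplier updates assumed)
for k = 1:max_iters
    [v1, v2] = solve_v(z, p21, p22, alpha, lambda2);                  % Eq. (20), 2D shrinkage
    z = solve_z(y, h, Wt(x), v1, v2, p1, p21, p22, lambda1, lambda2); % Eq. (21), FFT solve
    x = solve_x(z, x, p1, lambda1, beta);                             % Eq. (22), inner FISTA
    u_new = Wt(x);                                                    % current restored image
    % multiplier updates (assumed standard augmented Lagrangian form)
    p1  = p1 + lambda1*(z - u_new);
    d1  = circshift(z, [-1 0]) - z;                                   % nabla_1 z
    d2  = circshift(z, [0 -1]) - z;                                   % nabla_2 z
    p21 = p21 + lambda2*(v1 - d1);
    p22 = p22 + lambda2*(v2 - d2);
    % stopping rule: relative change of the restored image
    if norm(u_new - u_old, 'fro')/norm(u_old, 'fro') < epsilon, break; end
    u_old = u_new;
end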

4 Experiment Results and Analysis

In this section, we perform experiments to demonstrate the effectiveness and efficiency of the proposed approach and compare it with the TV-based, wavelet frame-based, and hybrid regularization approaches. All experiments are conducted with MATLAB 2011b running on a desktop machine with the Windows 7 operating system, an Intel Core i3 central processing unit at 3.07 GHz, and 6 GB of memory.

We consider the original clean Barbara (\(512 \times 512\)), Lena (\(512 \times 512\)), and Cameraman (\(256 \times 256\)) images as test images (see Fig. 2).

Fig. 2 The test images used in this paper; from left to right: Barbara, Lena, and Cameraman

For all experiments, the test images are generated by applying a blur kernel to the clean images and then adding white Gaussian noise with various standard deviations. The initial estimate of the image is set to the observed image \(y\). The stopping tolerance \(\epsilon = 0.0008\) is used to terminate the iteration, and the quality of the deblurred image is measured by the PSNR and ISNR, which are defined as

$$\begin{aligned} \hbox {PSNR} = 10\cdot \log _{10}{\frac{255^2}{\hbox {MSE}}}\,(\hbox {dB}), \end{aligned}$$

and

$$\begin{aligned} \hbox {ISNR}=10\cdot \log _{10}{\frac{\left\| u-y \right\| _2^2}{\left\| \hat{u}-u \right\| _2^2}}\;(\hbox {dB}), \end{aligned}$$

respectively, where MSE is the mean-squared error per pixel and \(\hat{u}=W^\mathrm{T}\hat{x}\) is the restored image.
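In MATLAB, the two measures can be computed directly; the sketch below assumes that u is the clean image, y the degraded observation, and u_hat the restored image, all on the 0–255 intensity scale for the PSNR.

% PSNR and ISNR of a restored image u_hat
mse      = mean((u_hat(:) - u(:)).^2);                    % mean-squared error per pixel
psnr_val = 10*log10(255^2 / mse);                         % dB
isnr_val = 10*log10(sum((u(:) - y(:)).^2) / sum((u_hat(:) - u(:)).^2));   % dB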

4.1 Comparisons with the TV-Based Approach

In this subsection, we conduct experiments to compare our method with the LD [37], FTVd [44, 46], and SALSA [1] methods, which are based on the TV (ROF) model and are among the most efficient existing methods. The efficient Matlab code NUMIPAD [36] can be used for LD. The Matlab code for FTVd can be downloaded from Rice University at http://www.caam.rice.edu/~optimization/L1/ftvd; the version used for comparison is FTVd_v4.1, the latest version of FTVd. The Matlab code for SALSA is available at http://cascais.lx.it.pt/~mafonso/salsa.html.

We test three types of blurring kernels: Gaussian, average, and uniform. For simplicity, we denote the Gaussian blur with blurring size hsize and SD sigma by G(hsize, sigma), and the average blur with blurring size hsize by A(hsize). We generate these blur kernels with the Matlab “fspecial” command; a sketch of how the degraded test images can be generated is given below.
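For example, the degraded observations can be produced along the following lines. This is a sketch: the image file name is an illustrative assumption, and so is the intensity scale on which the noise SD is specified.

% Generate a blurred, noisy observation: Gaussian blur G(11,9) plus white Gaussian noise
u = double(imread('barbara.png'));            % clean image (file name assumed), range [0, 255]
h = fspecial('gaussian', 11, 9);              % G(hsize, sigma) = G(11, 9)
% h = fspecial('average', 17);                % A(17), average blur
% h = ones(9)/81;                             % 9 x 9 uniform blur
y = imfilter(u, h, 'circular') + 0.255*randn(size(u));   % SD 0.255, assumed on the [0,255] scale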

In the first experiment, we use the Barbara image. The blur kernel and the Gaussian noise are the same as those in [44]: the clean Barbara image is blurred by the Gaussian blur G(11,9), and Gaussian noise with SD 0.255 is then added to the blurred image. The regularization parameter \(\lambda =0.0002\) and the iteration number pars.loops=9 are selected in LD. The parameter \(\mu =10^5\) is selected in FTVd_v4.1, whereas the regularization parameters \(\lambda =0.002,\; \mu =0.00001\) are selected in SALSA. The regularization parameters \(\alpha =0.0000065, \, \beta =0.0000009\) are selected in TVframe. Figure 3 shows the experimental results. As shown in Fig. 3, the image deblurred by TVframe is better than those deblurred by LD, FTVd_v4.1, and SALSA; in particular, more details are recovered by the proposed method. Consequently, the PSNR and ISNR values obtained by TVframe are higher than those obtained by LD, FTVd_v4.1, and SALSA.

Fig. 3 Deblurring of the Barbara image. From left to right and from top to bottom, zoomed fragments of the following images are presented: original, blurred noisy, reconstructed by LD (PSNR = 26.67 dB, ISNR = 3.87 dB), FTVd_v4.1 (PSNR = 26.14 dB, ISNR = 3.84 dB), SALSA (PSNR = 26.99 dB, ISNR = 4.04 dB), and by the proposed TVframe method (PSNR = 27.44 dB, ISNR = 4.21 dB)

Figure 4 shows the experimental results with the Lena image. In this experiment, we replicate the experimental conditions of [44]: the blur point spread function is the average blur A(17), and the Gaussian noise SD is 0.255. The regularization parameter \(\lambda = 0.000009\) and the iteration number pars.loops=9 are selected in LD. The parameter \(\mu =10^5\) is selected in FTVd_v4.1, and the regularization parameters \(\lambda =0.003, \mu =0.0005\) are chosen in SALSA. The regularization parameters \(\alpha =0.000006, \beta =0.00004\) are selected in TVframe. Figure 4 shows that the proposed algorithm can suppress staircasing effects and provide sharp image edges.

Fig. 4 Deblurring of the Lena image with average blur kernel A(17) and Gaussian noise \(\sigma =0.255\). From left to right and from top to bottom, zoomed fragments of the following images are presented: original, blurred noisy, reconstructed by LD (PSNR = 31.39 dB, ISNR = 8.52 dB), FTVd_v4.1 (PSNR = 31.21 dB, ISNR = 8.94 dB), SALSA (PSNR = 31.45 dB, ISNR = 8.91 dB), and by the proposed TVframe method (PSNR = 31.89 dB, ISNR = 9.16 dB)

Figure 5 shows the experimental results with the Cameraman image. In this test, we use a \(9 \times 9\) uniform blur kernel and Gaussian noise with SD 0.56, the same setting as in [1]. The regularization parameter \(\lambda = 0.00003\) and the iteration number pars.loops=6 are chosen in LD. The regularization parameter \(\mu =1.5\times 10^4\) is chosen in FTVd_v4.1, and the regularization parameters \(\lambda =0.025,\; \mu =0.0025\) are chosen in SALSA. The regularization parameters \(\alpha =0.00005,\, \beta =0.0000009\) are selected in TVframe. As can be seen from Fig. 5, more details are recovered by TVframe than by the other methods, and the PSNR and ISNR values obtained by TVframe are higher than those obtained by LD, FTVd_v4.1, and SALSA.

Fig. 5 Deblurring of the Cameraman image with uniform blur kernel and Gaussian noise \(\sigma =0.56\). From left to right and from top to bottom, zoomed fragments of the following images are presented: original, blurred noisy, reconstructed by LD (PSNR = 30.15 dB, ISNR = 8.02 dB), FTVd_v4.1 (PSNR = 29.93 dB, ISNR = 8.36 dB), SALSA (PSNR = 30.24 dB, ISNR = 8.22 dB), and by the proposed TVframe method (PSNR = 30.64 dB, ISNR = 8.70 dB)

4.2 Comparisons with the Framelet-Based Approach

On the basis of the above subsection, we conclude that the TVframe model is more suitable for image deblurring and denoising than TV-based methods. In this subsection, we show that the proposed approach also compares favorably with wavelet frame-based methods. We provide several examples comparing the TVframe method with wavelet frame-based methods, including TwIST [3], SpaRSA [47], SALSA [1], and APG [42], which represent the current state of the art. We compare TVframe with SALSA and APG in both the synthesis and analysis cases. For our numerical experiments, we replicate the experimental conditions of [1]. The blur point spread function and the noise variance \(\sigma ^2\) for each scenario are summarized in Table 1. Each scenario is tested with the well-known Cameraman image.

Table 1 Blur kernel and noise variance used in each experiment

For a fair comparison, we optimize the regularization parameters of the TwIST, SpaRSA, SALSA, and APG algorithms by using the ground truth images, in the same way as for TVframe. Figure 6 shows the recovered results of APG synthesis, APG analysis, and TVframe in scenarios 1 and 4; only close-ups of the image regions are shown for better comparison. As shown in Fig. 6, the images restored by TVframe are better than those restored by APG in both the synthesis and the analysis cases; in particular, more details are recovered by the proposed method.

Fig. 6 Comparisons between TVframe, APG synthesis, and APG analysis for image deblurring. The first row uses uniform blur and Gaussian noise with \(\sigma =0.56\); from left to right, zoomed fragments of the following images are presented: blurred noisy, reconstructed by APG synthesis (PSNR = 28.53 dB, ISNR = 6.98 dB), APG analysis (PSNR = 30.42 dB, ISNR = 8.69 dB), and by the proposed TVframe (PSNR = 30.64 dB, ISNR = 8.70 dB). The second row uses the blur kernel \(h_{ij}=1/(1+i^2+j^2)\) and Gaussian noise with \(\sigma =\sqrt{2}\); from left to right, zoomed fragments of the following images are presented: blurred noisy, reconstructed by APG synthesis (PSNR = 28.55 dB, ISNR = 3.18 dB), APG analysis (PSNR = 30.30 dB, ISNR = 3.62 dB), and by the proposed TVframe method (PSNR = 30.45 dB, ISNR = 3.76 dB)

Table 2 clearly shows that the proposed method attains the highest PSNR and ISNR values (in bold), which indicates that the proposed method better preserves the gray levels of the clean image.

Table 2 Comparison of the output PSNR (dB) and ISNR (dB) of the proposed method and framelet-based methods

4.3 Comparisons with the Hybrid Regularization Approach

In this subsection, we compare the proposed method with another hybrid regularization method, PPXA [35]. In this test, we use the Gaussian blur G(15,8) plus Gaussian noise with SD 0.255. We run both methods many times and report the best results. In PPXA, the selected parameters are \(\theta =0.8,\) \(\mu =0.006\); in the proposed method, the selected parameters are \(\alpha =0.000009,\) \(\beta =0.000009\). Figure 7 shows the recovered results of the two methods. We can see from Fig. 7 that the images restored by the proposed method are better than those restored by PPXA: more details are recovered by our method, and in particular the face of Lena looks smoother in our result than in that of PPXA. Meanwhile, the PSNR and ISNR values obtained by the proposed method are higher than those obtained by PPXA.

Fig. 7 Deblurring of the Lena image with Gaussian blur kernel G(15,8) and Gaussian noise \(\sigma =0.255\). From left to right and from top to bottom, zoomed fragments of the following images are presented: blurred noisy, reconstructed by PPXA (PSNR = 30.35 dB, ISNR = 6.87 dB), and by the proposed TVframe method (PSNR = 31.46 dB, ISNR = 7.97 dB)

5 Conclusion

In this paper, we propose a new model for image deblurring by combining the wavelet frame and TV approaches. Experimental results demonstrate the superiority of the new model over the TV-based, wavelet frame-based, and hybrid regularization methods. The proposed model exploits the ability of the wavelet frame to preserve image details and to suppress the staircasing artifacts, thereby alleviating the loss of texture caused by TV-based methods. In this paper, we mainly consider the synthesis model for image deblurring; the analysis model and the balanced model will be considered in future studies.