1 Introduction

Blind image restoration has been widely applied in remote sensing, astronomy, medical imaging, video cameras and so on (see, e.g. [8, 10, 18]). For example, when photographing a moving object, neither the shutter speed nor the speed of the object is known exactly. The degradation process can be modeled as \(g=h*f+n\), where g stands for the degraded image, f represents the original image, h is the blurring kernel (also called the point spread function (PSF)), n denotes the noise, and \(*\) denotes the convolution operation. Since the PSF is unknown, blind restoration is one of the most challenging problems in image processing owing to its complexity and difficulty, and many restoration techniques have been proposed (see, e.g. [1, 3, 5, 11,12,13,14, 16, 17, 19, 26]).
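As a concrete illustration of this degradation model, the following minimal sketch (Python/NumPy) builds a Gaussian PSF and applies it by circular convolution under periodic boundary conditions; the function names and the choice of a Gaussian kernel are purely illustrative.

```python
import numpy as np

def gaussian_psf(size, sigma2):
    """An illustrative Gaussian blurring kernel (PSF), normalized to sum to one."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma2))
    return h / h.sum()

def blur(f, h):
    """g = h * f: circular (periodic) convolution computed with the 2D FFT.
    The small kernel is embedded in an image-sized array and circularly shifted
    so that its center sits at the origin (this avoids shifting the output)."""
    psf = np.zeros_like(f, dtype=float)
    psf[:h.shape[0], :h.shape[1]] = h
    psf = np.roll(psf, (-(h.shape[0] // 2), -(h.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(f)))
```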

Regularization is one way to overcome the difficulties caused by the ill-posed nature of blind image restoration. You and Kaveh [26] proposed a minimization formulation with \(H^1\) regularization terms for both the image and the PSF:

$$\begin{aligned} \min \limits _{f,h}\Big \{\frac{1}{2}\Vert h*f-g\Vert _2^2 +\lambda _1\Vert f\Vert _{H^1}^2+\lambda _2\Vert h\Vert _{H^1}^2\Big \}. \end{aligned}$$
(1)

Chan and Wong [5] regularized both the image and the PSF by the well-known total variation (TV) regularization term (see, e.g. [5, 21]) instead of the \(H^1\)-norm:

$$\begin{aligned} \min \limits _{f,h} \Big \{ \frac{1}{2}\Vert h*f-g\Vert _2^2 +\lambda _1TV(f)+\lambda _2TV(h)\Big \}. \end{aligned}$$
(2)

TV regularization is considered one of the best approaches to recovering image edges, but also one of the hardest to compute, because of the nonlinearity and non-differentiability of the TV term. The split Bregman (SB) method has been shown to handle the TV term efficiently, with notable stability and a fast convergence rate (see, e.g. [4, 5, 7, 9]). Based on the SB method, the minimization problem can be divided into several subproblems. The crucial and difficult subproblems are to solve for f and h alternately, which requires solving large linearized systems of equations. For this purpose, a cosine-preconditioned conjugate gradient method and a fixed-point method were proposed in [5]. In [9], the coupled systems for f and h are evolved by a time-marching method based on gradient descent. Chang et al. [25] applied an algebraic multigrid (AMG) method together with a Krylov subspace acceleration technique to solve the linearized systems. We also notice that the 2D fast Fourier transform (FFT) [4, 14] is fast and simple. Moreover, the spectral decomposition technique (SDT) [24] is efficient for handling a new Tikhonov regularization term [6, 15]; it not only reduces the amount of computation but also saves storage space.

The purpose of this paper is to describe a combined algorithm based on the SB method, the FFT and the SDT, which inherits the advantages of all three, and to apply it to blind image restoration with a TV regularization term and a new Tikhonov regularization term.

The organization of this paper is given as follows. Section 2 exhibits our new combined algorithm based on the SB method, FFT and SDT. Computational results are shown in Sect. 3. Finally, some conclusions are given in Sect. 4.

2 Proposed Algorithm

In this section, we propose a fast combined algorithm for the blind image restoration problem, with the TV regularization term for h and the new Tikhonov (NT) regularization term for f [6], based on the SB technique, the FFT and the SDT. We refer to the proposed algorithm as the NT-SB algorithm.

2.1 NT-SB Algorithm

To derive our new algorithm, we modify the model (2) as follows:

$$\begin{aligned} \begin{aligned} \min \limits _{f,h} \Big \{\frac{1}{2}\Vert h*f-g\Vert _2^2 +\frac{\lambda _1}{2}\Vert L_\lambda f\Vert _2^2+\lambda _2 \sum \limits _i \sqrt{(\nabla _x h)_i^2+(\nabla _y h)_i^2}\Big \}, \end{aligned} \end{aligned}$$
(3)

where \(\lambda _1, \lambda _2 >0\) are regularization parameters and \(\Vert L_\lambda f\Vert _2^2\) is a regularization term that filters high-frequency information such as noise. The detailed definition of \(L_\lambda \) is given in (12) below.

Introducing two auxiliary variables \(c_1,c_2\) as in [7], we make the substitutions \(\nabla _x h \rightarrow c_1\), \(\nabla _y h \rightarrow c_2\) in the model (3). This yields the following equivalent constrained problem:

$$\begin{aligned} \begin{aligned}&\min \limits _{f,h,c_1,c_2} \Big \{\frac{1}{2}\Vert h*f-g\Vert ^2_2+\frac{\lambda _1}{2}\Vert L_\lambda f\Vert _2^2+\lambda _2\Vert (c_1,c_2)\Vert _2\Big \},\\&s.t. \quad \nabla _x h=c_1, \nabla _y h=c_2. \end{aligned} \end{aligned}$$
(4)

Notice that here we only need two auxiliary variables, since there is only one TV regularization term, while for the TV-SB algorithm in Sect. 3 we have to introduce four.

Thus the iterative scheme of our new algorithm based on the SB technique is as follows:

$$\begin{aligned} \begin{aligned}&(f^{k+1},h^{k+1},c_1^{k+1},c_2^{k+1})=\\&arg \min \limits _{f,h,c_1,c_2} \Big \{\frac{1}{2}\Vert h*f-g\Vert ^2_2+\frac{\lambda _1}{2}\Vert L_\lambda f\Vert _2^2 +\lambda _2\Vert (c_1,c_2)\Vert _2\\&+\frac{\gamma _2}{2}(\Vert c_1-\nabla _x h-s_1^k\Vert _2^2+\Vert c_2-\nabla _y h-s_2^k\Vert _2^2)\Big \}, \end{aligned} \end{aligned}$$
(5)

and

$$\begin{aligned} \begin{aligned}&s_1^{k+1}=s_1^k+\nabla _x h^{k+1}-c_1^{k+1},\\&s_2^{k+1}=s_2^k+\nabla _y h^{k+1}-c_2^{k+1}, \end{aligned} \end{aligned}$$
(6)

where the parameters \(\lambda _1, \lambda _2>0\) balance the fidelity term and the regularization terms, and \(\gamma _2 >0\) weights the penalty terms. The minimization problem (5) can be decoupled into the following subproblems:

1: h-subproblem: for fixed \(f^k,c_1^k,c_2^k,s_1^k,s_2^k\), we need to solve

$$\begin{aligned} \begin{aligned} h^{k+1}=arg\min \limits _{h} \Big \{\mathcal {H}(h) +\frac{\gamma _2}{2}(\Vert c_1^k-\nabla _x h-s_1^k\Vert _2^2+\Vert c_2^k-\nabla _y h-s_2^k\Vert _2^2)\Big \}, \end{aligned} \end{aligned}$$
(7)

where \(\mathcal {H}(h)=\frac{1}{2}\Vert f^k*h-g\Vert ^2_2\).

According to the optimality condition, we need to solve

$$\begin{aligned} \begin{aligned}&(f^k)^T*(f^k*h^{k+1}-g)+\gamma _2\nabla ^T\nabla h^{k+1}\\ +&\gamma _2(\nabla _x^T(s_1^k-c_1^k)+\nabla _y^T(s_2^k-c_2^k))=0. \end{aligned} \end{aligned}$$
(8)

That is,

$$\begin{aligned} h^{k+1}= \mathcal {F}^{-1}\Big [\frac{\mathcal {F}((f^k)^Tg+\gamma _2(\nabla _x^T(c_1^k-s_1^k)+\nabla _y^T(c_2^k-s_2^k)))}{\mathcal {F}((f^k)^Tf^k-\gamma _2\triangle )}\Big ], \end{aligned}$$
(9)

where \(\nabla ^T\nabla =-\triangle \) and \(\nabla ^T=-\mathrm{div}\).
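Under periodic boundary conditions, all operators in (9) are diagonalized by the 2D FFT, so the h-update reduces to a pointwise division in the Fourier domain. A minimal sketch follows, assuming forward-difference discretizations of \(\nabla _x, \nabla _y\); the helper names are our own.

```python
import numpy as np

def grad_symbols(shape):
    """Fourier symbols of the forward-difference gradients under periodic
    boundary conditions (an assumed discretization of nabla_x, nabla_y)."""
    kx = np.zeros(shape)
    kx[0, 0], kx[0, -1] = -1.0, 1.0    # D_x u = u(i, j+1) - u(i, j)
    ky = np.zeros(shape)
    ky[0, 0], ky[-1, 0] = -1.0, 1.0    # D_y u = u(i+1, j) - u(i, j)
    return np.fft.fft2(kx), np.fft.fft2(ky)

def update_h(f, g, c1, c2, s1, s2, gamma2):
    """h-subproblem, equation (9): a pointwise division in the Fourier domain.
    conj(Dx), conj(Dy) play the roles of nabla_x^T, nabla_y^T, and
    |Dx|^2 + |Dy|^2 is the symbol of nabla^T nabla = -Laplacian."""
    Dx, Dy = grad_symbols(g.shape)
    F_f, F_g = np.fft.fft2(f), np.fft.fft2(g)
    num = (np.conj(F_f) * F_g
           + gamma2 * (np.conj(Dx) * np.fft.fft2(c1 - s1)
                       + np.conj(Dy) * np.fft.fft2(c2 - s2)))
    den = np.abs(F_f) ** 2 + gamma2 * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))
```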

2: f-subproblem: for fixed \(h^{k+1}\), we need to solve

$$\begin{aligned} f^{k+1}=arg\min \limits _{f} \Big \{\mathcal {F}_1(f)+\mathcal {F}_2(f)\Big \}, \end{aligned}$$
(10)

where \(\mathcal {F}_1(f)=\frac{1}{2}\Vert \tilde{H}f-g\Vert _2^2\), \(\mathcal {F}_2(f)=\frac{\lambda _1}{2}\Vert L_\lambda ^{k+1}f\Vert _2^2\), and \(\tilde{H}\) is the BCCB (block circulant with circulant blocks) blur matrix obtained from the blurring kernel h under periodic boundary conditions [20, 22]. Applying the spectral decomposition technique to the blurring matrix \(\tilde{H}\), we have

$$\begin{aligned} \tilde{H}=F^*\varSigma F, \end{aligned}$$
(11)

where F is the 2D unitary discrete Fourier transform (DFT) matrix, \(F^*\) denotes its conjugate transpose, and \(\varSigma =diag[\hat{\sigma }_1,\hat{\sigma }_2,\ldots ,\hat{\sigma }_n]\) is a diagonal matrix (see, e.g. [6]) whose entries \(\hat{\sigma }_1,\hat{\sigma }_2,\ldots ,\hat{\sigma }_n\) are the eigenvalues of \(\tilde{H}\). We construct the regularization matrix \(L_\lambda \) as:

$$\begin{aligned} L_\lambda =D_\lambda F, \end{aligned}$$
(12)

where

$$\begin{aligned} D_{\lambda }^{2} = diag\big [\max (\lambda ^{2}-\hat{\sigma }_{1}^{2},0),\ \max (\lambda ^{2}-\hat{\sigma }_{2}^{2},0),\ \ldots ,\ \max (\lambda ^{2}-\hat{\sigma }_{n}^{2},0)\big ]. \end{aligned}$$
(13)

According to the NT method proposed in [15], we have

$$\begin{aligned} f^{k+1}=F^*(\varSigma ^*\varSigma +\lambda _1 ({D_\lambda ^{k+1}})^*{D_\lambda ^{k+1}})^{-1}\varSigma ^*Fg. \end{aligned}$$
(14)
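Since \(\tilde{H}\) is BCCB, its eigenvalues are the entries of the 2D FFT of the kernel stored as an image-sized array (in the same arrangement as in the h-update sketch above), so (13) and (14) become elementwise operations in the Fourier domain. A minimal sketch, where `lam` plays the role of \(\lambda \) in (13), `lam1` that of \(\lambda _1\), and \(\hat{\sigma }_i^2\) is interpreted as \(|\hat{\sigma }_i|^2\) because the eigenvalues are complex:

```python
import numpy as np

def update_f(h, g, lam, lam1):
    """f-subproblem, equations (11)-(14): a pointwise Fourier-domain filter built
    from the eigenvalues of the BCCB blur matrix. `h` is image-sized, in the same
    arrangement as produced by update_h above."""
    sigma = np.fft.fft2(h)                                 # eigenvalues of the BCCB matrix, (11)
    D2 = np.maximum(lam ** 2 - np.abs(sigma) ** 2, 0.0)    # equation (13)
    # Equation (14): f = F^*(Sigma^* Sigma + lam1 D_lam^* D_lam)^{-1} Sigma^* F g,
    # evaluated elementwise; the denominator stays positive for lam, lam1 > 0.
    f_hat = np.conj(sigma) * np.fft.fft2(g) / (np.abs(sigma) ** 2 + lam1 * D2)
    return np.real(np.fft.ifft2(f_hat))
```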

3: \(c_1,c_2\)-subproblems: for fixed \(f^{k+1}\) and \(h^{k+1}\), we need to solve

$$\begin{aligned} \begin{aligned}&(c_1^{k+1},c_2^{k+1})=arg\min \limits _{c_1,c_2}\Big \{\lambda _2\Vert (c_1,c_2)\Vert _2\\&+\frac{\gamma _2}{2}(\Vert c_1-\nabla _xh^{k+1}-s_1^k\Vert _2^2+ \Vert c_2-\nabla _yh^{k+1}-s_2^k\Vert _2^2)\Big \}. \end{aligned} \end{aligned}$$
(15)

By the shrinkage formula (see, e.g. [7, 23]), the solutions of (15) are

$$\begin{aligned} \begin{aligned}&c_1^{k+1}=\frac{\nabla _xh^{k+1}+s_1^k}{W^k}\max \{W^k-\frac{\lambda _2}{\gamma _2},0\},\\&c_2^{k+1}=\frac{\nabla _yh^{k+1}+s_2^k}{W^k}\max \{W^k-\frac{\lambda _2}{\gamma _2},0\},\\ \end{aligned} \end{aligned}$$
(16)

where \(W^k=\sqrt{(\nabla _xh^{k+1}+s_1^k)^2+(\nabla _yh^{k+1}+s_2^k)^2}.\)
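The \((c_1,c_2)\)-update (16) is an isotropic soft-thresholding (shrinkage) applied pixelwise; a small sketch follows, where the tiny constant in the denominator merely avoids division by zero at pixels where \(W^k=0\).

```python
import numpy as np

def shrink2(u1, u2, threshold):
    """Isotropic shrinkage of equation (16): shortens the vector field (u1, u2)
    pointwise by `threshold`, setting it to zero where its magnitude is smaller."""
    w = np.sqrt(u1 ** 2 + u2 ** 2)
    scale = np.maximum(w - threshold, 0.0) / np.maximum(w, 1e-12)
    return scale * u1, scale * u2

# In the NT-SB iteration, with gx = nabla_x h^{k+1} and gy = nabla_y h^{k+1}:
# c1, c2 = shrink2(gx + s1, gy + s2, lam2 / gamma2)
```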

We summarize the NT-SB algorithm as follows:

figure a

Here, tol denotes the tolerance for the iteration scheme, and the order of the h- and f-subproblems cannot be interchanged.
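For concreteness, a minimal sketch of the whole NT-SB iteration is given below; it reuses the helpers update_h, update_f and shrink2 sketched above, uses forward differences for the gradient, and stops when the relative change of f falls below tol. Here h0 denotes an initial kernel guess stored as an image-sized array; the initialization and stopping rule are illustrative, not necessarily the exact choices behind figure a.

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundary conditions (matches grad_symbols)."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def nt_sb(g, h0, lam, lam1, lam2, gamma2, tol=1e-4, max_iter=100):
    """Sketch of the NT-SB iteration: alternate (9), (14), (16) and (6)."""
    h, f = h0.copy(), g.copy()
    c1, c2 = np.zeros_like(g), np.zeros_like(g)
    s1, s2 = np.zeros_like(g), np.zeros_like(g)
    for _ in range(max_iter):
        f_old = f
        h = update_h(f, g, c1, c2, s1, s2, gamma2)          # h-subproblem, eq. (9)
        f = update_f(h, g, lam, lam1)                       # f-subproblem, eq. (14)
        gx, gy = grad(h)
        c1, c2 = shrink2(gx + s1, gy + s2, lam2 / gamma2)   # shrinkage, eq. (16)
        s1, s2 = s1 + gx - c1, s2 + gy - c2                 # Bregman update, eq. (6)
        if np.linalg.norm(f - f_old) <= tol * np.linalg.norm(f_old):
            break
    return f, h
```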

3 Experimental Results

To demonstrate the advantage of the proposed algorithm, we compare the NT-SB algorithm with the TV-SB algorithm, which solves the original model (2) by the split Bregman method. We first briefly describe the TV-SB algorithm. Introducing the auxiliary variables \(b_1, b_2, c_1, c_2\), we need to solve the following equivalent constrained problem:

$$\begin{aligned} \begin{array}{lll} &{}\min \limits _{f,h,b_1,b_2 \atop c_1,c_2} \Big \{\frac{1}{2}\Vert h*f-g\Vert ^2_2+\lambda _1\Vert (b_1,b_2)\Vert _2+\lambda _2\Vert (c_1,c_2)\Vert _2\Big \},\\ &{}s.t. \quad \nabla _x f=b_1, \nabla _y f=b_2, \nabla _x h=c_1, \nabla _y h=c_2. \end{array} \end{aligned}$$
(17)

For the h-subproblem, we obtain the same solution (9) as before. For the f-subproblem, we get

$$\begin{aligned} f^{k+1}=\mathcal {F}^{-1}\Big [\frac{\mathcal {F}((h^{k+1})^Tg+\gamma _1(\nabla _x^T(b_1^k-t_1^k)+\nabla _y^T(b_2^k-t_2^k)))}{\mathcal {F}((h^{k+1})^Th^{k+1}-\gamma _1\triangle )}\Big ]. \end{aligned}$$
(18)

By the shrinkage formula as above, we obtain

$$\begin{aligned} \begin{aligned}&b_1^{k+1}=\frac{\nabla _xf^{k+1}+t_1^k}{V^k}\max \{V^k-\frac{\lambda _1}{\gamma _1},0\},\\&b_2^{k+1}=\frac{\nabla _yf^{k+1}+t_2^k}{V^k}\max \{V^k-\frac{\lambda _1}{\gamma _1},0\},\\&c_1^{k+1}=\frac{\nabla _xh^{k+1}+s_1^k}{W^k}\max \{W^k-\frac{\lambda _2}{\gamma _2},0\},\\&c_2^{k+1}=\frac{\nabla _yh^{k+1}+s_2^k}{W^k}\max \{W^k-\frac{\lambda _2}{\gamma _2},0\}, \end{aligned} \end{aligned}$$
(19)

where

$$V^k=\sqrt{(\nabla _xf^{k+1}+t_1^k)^2+(\nabla _yf^{k+1}+t_2^k)^2},$$
$$ W^k=\sqrt{(\nabla _xh^{k+1}+s_1^k)^2+(\nabla _yh^{k+1}+s_2^k)^2}.$$

Finally, the Bregman iteration variables are updated by

$$\begin{aligned} \begin{array}{lll} &{}t_1^{k+1}=t_1^k+\nabla _x f^{k+1}-b_1^{k+1},\\ &{}t_2^{k+1}=t_2^k+\nabla _y f^{k+1}-b_2^{k+1},\\ &{}s_1^{k+1}=s_1^k+\nabla _x h^{k+1}-c_1^{k+1},\\ &{}s_2^{k+1}=s_2^k+\nabla _y h^{k+1}-c_2^{k+1}. \end{array} \end{aligned}$$
(20)

We summarize the TV-SB algorithm as follows:

figure b
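The TV-SB iteration has the same structure: the f-subproblem (18) is solved by the same kind of Fourier-domain division as (9) (with the roles of f and h exchanged), followed by the two shrinkage steps (19) and the Bregman updates (20). A compact sketch reusing the helpers above; again, an illustration rather than the exact implementation behind figure b.

```python
import numpy as np

def tv_sb(g, h0, lam1, lam2, gamma1, gamma2, tol=1e-4, max_iter=100):
    """Sketch of the TV-SB iteration: equations (9), (18), (19) and (20)."""
    h, f = h0.copy(), g.copy()
    b1, b2, t1, t2 = (np.zeros_like(g) for _ in range(4))
    c1, c2, s1, s2 = (np.zeros_like(g) for _ in range(4))
    for _ in range(max_iter):
        f_old = f
        h = update_h(f, g, c1, c2, s1, s2, gamma2)          # eq. (9)
        f = update_h(h, g, b1, b2, t1, t2, gamma1)          # eq. (18): same form, roles of f and h swapped
        fx, fy = grad(f)
        hx, hy = grad(h)
        b1, b2 = shrink2(fx + t1, fy + t2, lam1 / gamma1)   # eq. (19)
        c1, c2 = shrink2(hx + s1, hy + s2, lam2 / gamma2)
        t1, t2 = t1 + fx - b1, t2 + fy - b2                 # eq. (20)
        s1, s2 = s1 + hx - c1, s2 + hy - c2
        if np.linalg.norm(f - f_old) <= tol * np.linalg.norm(f_old):
            break
    return f, h
```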
Fig. 1. Original images.

Table 1. ISNR values and computing times using the TV-SB algorithm and the NT-SB algorithm with different Gaussian Blurs (GB) and Moffat Blurs (MB).
Fig. 2. Comparisons of the TV-SB algorithm and the NT-SB algorithm. Column 1: blurred images contaminated by different blurs with different blurring kernel sizes and variance \(\sigma ^2=1\); Column 2: restored images by the TV-SB algorithm; Column 3: restored images by the NT-SB algorithm.

Some remarks are in order.

  1. (a)

    Compared with the TV-SB algorithm, our proposed NT-SB algorithm uses fewer variables (it does not need \(b_1, b_2, t_1, t_2\)) and requires fewer initial values (it does not need \(b_1^0, b_2^0, t_1^0, t_2^0\)).

  2. (b)

    It can be seen that the computational complexity of steps c and d is clearly reduced.

In this section, we test two gray-scale images (the satellite and cameraman images in Fig. 1), both of size \(256\times 256\) pixels, to show the effectiveness and feasibility of the TV-SB algorithm and the NT-SB algorithm. In the following examples, we mainly compare the visual quality of the restored images and the improvement in signal-to-noise ratio (ISNR) value [2]. The larger the ISNR value, the better the restored result.
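The ISNR is commonly computed as \(10\log _{10}(\Vert g-f\Vert _2^2/\Vert \hat{f}-f\Vert _2^2)\), where f is the true image, g the degraded image and \(\hat{f}\) the restoration; a one-line sketch of this standard definition (we assume this is the form used in [2]):

```python
import numpy as np

def isnr(f_true, g_degraded, f_restored):
    """Improvement in signal-to-noise ratio (dB); larger values mean a better restoration."""
    return 10.0 * np.log10(np.sum((g_degraded - f_true) ** 2)
                           / np.sum((f_restored - f_true) ** 2))
```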

The elements of the noise vector n are normally distributed with zero mean, and the standard deviation is chosen such that \(\frac{\Vert n\Vert _{2}}{\Vert g\Vert _{2}}= 0.01\); in this case, we say that the noise level is 1%. All numerical examples are implemented in MATLAB (R2010a) on a computer with 1 GB RAM and an Intel(R) Pentium(R) D CPU @ 2.80 GHz.
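One way to realize this noise level exactly is to scale a zero-mean Gaussian noise field so that \(\Vert n\Vert _2/\Vert g\Vert _2\) equals the prescribed value; a minimal sketch:

```python
import numpy as np

def add_noise(g, level=0.01, rng=None):
    """Add zero-mean Gaussian noise scaled so that ||n||_2 / ||g||_2 = level."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.standard_normal(g.shape)
    n *= level * np.linalg.norm(g) / np.linalg.norm(n)
    return g + n
```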

Table 2. ISNR values and computing times using the TV-SB algorithm and the NT-SB algorithm with different Gaussian Blurs (GB) and Moffat Blurs (MB) and \(1\%\) Gaussian noise.

Then, to compare the properties of the TV-SB algorithm and the NT-SB algorithm, we consider degraded images contaminated by Gaussian or Moffat blur and Gaussian noise.

First, we consider the cameraman image contaminated by Gaussian or Moffat blur only; the restored images are shown in Fig. 2. The parameters in the algorithms are set to \(\lambda _1=13, \lambda _2=0.1, \gamma _1=0.1e-6, \gamma _2=0.2\). Blurred images contaminated by Gaussian blurs of size \(20\times 20\) and \(30\times 30\) with \(\sigma ^{2}=1\) are displayed in Figs. 2(a) and (d), and blurred images contaminated by Moffat blurs of size \(20\times 20\) and \(30\times 30\) with \(\sigma ^{2}=1\) are depicted in Figs. 2(g) and (j); restored images by the TV-SB and NT-SB algorithms are shown in the second and third columns of Fig. 2, respectively. The ISNR values, numbers of iterations and computing times are listed in Table 1. We can see that the NT-SB algorithm gives better restorations with larger ISNR values and needs fewer iterations and shorter computing times (cf. Table 1).

Fig. 3. Comparisons of the TV-SB algorithm and the NT-SB algorithm. Column 1: blurred-noisy images contaminated by \(10\times 10\) Gaussian or Moffat blur with different degrees of blurring and \(1\%\) noise; Column 2: restored images by the TV-SB algorithm; Column 3: restored images by the NT-SB algorithm.

Next, we consider the blurred-noisy satellite image contaminated by Gaussian or Moffat blur and \(1\%\) Gaussian noise in Fig. 3. Since the background of the satellite image is black, the noise is not very obvious in the contaminated images (the first column of Fig. 3). The parameters in the algorithms are set to \(\lambda _1=1, \lambda _2=0.05, \gamma _1=0.1e-4, \gamma _2=8\). Blurred-noisy images contaminated by \(10\times 10\) Gaussian and Moffat blurs with \(\sigma ^{2}=1\) and \(1\%\) Gaussian noise are shown in Figs. 3(a) and (g), respectively, and blurred-noisy images contaminated by \(10\times 10\) Gaussian and Moffat blurs with \(\sigma ^{2}=1.5\) and \(1\%\) Gaussian noise are shown in Figs. 3(d) and (j), respectively; restored images by the TV-SB and NT-SB algorithms are shown in the second and third columns of Fig. 3. We tabulate the ISNR values, numbers of iterations and computing times of the two algorithms in Table 2, which shows that the NT-SB algorithm attains almost the same ISNR values as the TV-SB algorithm but needs fewer iterations and shorter computing times (see Table 2).

From Figs. 2 and 3, we see that with the same parameters the NT-SB algorithm restores the blurred-noisy images better visually and runs faster than the TV-SB algorithm.

4 Conclusions

In this paper, we introduced a new Tikhonov regularization term to replace the TV regularization term for the image in the blind restoration problem, and applied the split Bregman technique to split the minimization formulation, with the TV regularization term for the blurring kernel and the new Tikhonov regularization term for the image, into several subproblems. In solving the subproblems, we combined the FFT and the SDT to accelerate the computation. Several numerical experiments showed that both the TV-SB and NT-SB algorithms are effective, and that, compared with the TV-SB algorithm, the NT-SB algorithm needs less storage, fewer iterations and shorter computing times while producing better restorations.