Abstract
The problem of image restoration from blur and noise is studied. By regularization techniques, a solution of the problem is found as the minimizer of a primal energy function formed by two terms: the former measures faithfulness to the data, while the latter encodes the smoothness constraints. We require the reconstructed images to be piecewise continuous and to have thin edges. In correspondence with the primal energy function there is a dual energy function, which deals with discontinuities implicitly. We present a unified approach to the duality theory that also takes the non-parallelism constraint into account, and we construct a dual energy function which is convex and imposes such a constraint. To reconstruct images with Boolean discontinuities, the proposed energy function can be used as the initial approximation in a graduated non-convexity (GNC) algorithm. The experimental results confirm that such a technique inhibits the formation of parallel lines.
References
Ahookhosh, M., Amini, K., Bahrami, S.: A class of nonmonotone Armijo-type line search methods for unconstrained optimization. Optimization 61(4), 387–404 (2012)
Allain, M., Idier, J., Goussard, Y.: On global and local convergence of half-quadratic algorithms. IEEE Trans. Image Process. 15, 1130–1142 (2006)
Antoniadis, A., Gijbels, I., Nikolova, M.: Penalized likelihood regression for generalized linear models with non-quadratic penalties. Ann. Inst. Stat. Math. 63, 585–615 (2011)
Armijo, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pac. J. Math. 16(1), 1–3 (1966)
Åström, F.: Color image regularization via channel mixing and half quadratic minimization. In: 2016 IEEE International Conference on Image Processing (ICIP), pp. 4007–4011 (2016)
Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, 2nd edn. Springer, New York (2006)
Bai, X., Zhou, F., Xue, B.: Infrared image enhancement through contrast enhancement by using multiscale new top-hat transform. Infrared Phys. Technol. 54(2), 61–69 (2011)
Bauschke, H.H., Lucet, Y.: What is a Fenchel conjugate? Not. Am. Math. Soc. 59(1), 44–46 (2012)
Bedini, L., Gerace, I., Salerno, E., Tonazzini, A.: Models and algorithms for edge-preserving image reconstruction. Adv. Imaging Electron Phys. 97, 86–189 (1996)
Bedini, L., Gerace, I., Tonazzini, A.: A deterministic algorithm for reconstructing images with interacting discontinuities. CVGIP Graph. Models Image Process. 56, 109–123 (1994)
Bergmann, R., Chan, R.H., Hielscher, R., Persch, J., Steidl, G.: Restoration of manifold-valued images by half-quadratic minimization. Inverse Probl. Imaging 10, 281–304 (2016)
Bertero, M., Boccacci, P.: Introduction to Inverse Problems in Imaging. Institute of Physics Publishing, Bristol (1998)
Black, M., Rangarajan, A.: On the unification of line processes, outlier rejection, and robust statistics with applications to early vision. Int. J. Comput. Vis. 19(1), 57–91 (1996)
Blake, A.: Comparison of the efficiency of deterministic and stochastic algorithms for visual reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 11, 2–12 (1989)
Blake, A., Zisserman, A.: Visual Reconstruction. MIT Press, Cambridge (1987)
Boccuto, A., Gerace, I., Pucci, P.: Convex approximation technique for interacting line elements deblurring: a new approach. J. Math. Imaging Vis. 44(2), 168–184 (2012)
Boukis, C., Mandic, D.M., Constantinides, A.G., Polymenakos, L.C.: A modified Armijo rule for the online selection of learning rate of the LMS algorithm. Digit. Signal Process. 20, 630–639 (2010)
Borwein, J.M., Vanderwerff, J.D.: Convex Functions: Constructions, Characterizations and Counterexamples. Cambridge University Press, Cambridge (2010)
Bouman, C., Sauer, K.: A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Trans. Image Process. 2(3), 296–310 (1993)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Brézis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, New York (2011)
Cavalagli, N., Cluni, F., Gusella, V.: Evaluation of a statistically equivalent periodic unit cell for a quasi-periodic masonry. Int. J. Solids Struct. 50, 4226–4240 (2013)
Charbonnier, P., Blanc-Féraud, L., Aubert, G., Barlaud, M.: Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 6, 298–311 (1997)
Chen, P.-Y., Selesnick, I.W.: Group-sparse signal denoising: non-convex regularization, convex optimization. IEEE Trans. Signal Process. 62, 3464–3478 (2014)
Chen, X., Ng, M.K., Zhang, C.: Non-Lipschitz \(l_p\)-regularization and box constrained model for image restoration. IEEE Trans. Image Process. 21(12), 4709–4721 (2012)
Cluni, F., Costarelli, D., Minotti, A.M., Vinti, G.: Applications of sampling Kantorovich operators to thermographic images for seismic engineering. J. Comput. Anal. Appl. 19(4), 602–617 (2015)
Cluni, F., Costarelli, D., Minotti, A.M., Vinti, G.: Enhancement of thermographic images as tool for structural analysis in earthquake engineering. NDT & E Int. 70, 60–72 (2015)
Cluni, F., Costarelli, D., Minotti, A.M., Vinti, G.: Applications of approximation theory to thermographic images in earthquake engineering. Proc. Appl. Math. Mech. 15, 663–664 (2015)
Coll, B., Duran, J., Sbert, C.: An algorithm for nonconvex functional minimization and applications to image restoration. In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 4547–4551 (2014)
Costarelli, D., Seracini, M., Vinti, G.: Digital image processing algorithms for diagnosis in arterial diseases. Proc. Appl. Math. Mech. 15, 669–670 (2015)
Costarelli, D., Vinti, G.: Approximation by nonlinear multivariate sampling Kantorovich-type operators and applications to image processing. Numer. Funct. Anal. Optim. 34(8), 819–844 (2013)
Demoment, G.: Image reconstruction and restoration: overview of common estimation structures and problems. IEEE Trans. Acoust. Speech Signal Process. 37, 2024–2036 (1989)
Ding, Y., Selesnick, I.W.: Artifact-free wavelet denoising: non-convex sparse regularization, convex optimization. IEEE Signal Process. Lett. 22(9), 1364–1368 (2015)
Geman, D., Reynolds, G.: Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell. 14, 367–383 (1992)
Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–740 (1984)
Geman, D., Yang, C.: Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 4(7), 932–946 (1995)
Gerace, I., Martinelli, F.: On regularization parameters estimation in edge-preserving image reconstruction. LNCS 3196, 1170–1183 (2008)
Gerace, I., Pandolfi, R., Pucci, P.: A new GNC algorithm for spatial dithering. In: Proceedings of the International TICSP Workshop on Spectral Methods and Multirate Signal Processing, SMMSP2003, Barcelona, Spain, September 13–14, 2003, pp. 109–114 (2003)
Hadamard, J.: Lectures on Cauchy’s Problem in Linear Partial Differential Equations. Yale University Press, New Haven (1923)
He, R., Zheng, W.-S., Tan, T., Sun, Z.: Half-quadratic-based iterative minimization for robust sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 261–275 (2014)
Horn, R.A., Johnson, C.R.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013)
Huang, Y.-M., Lu, D.-Y.: A preconditioned conjugate gradient method for multiplicative half-quadratic image restoration. Appl. Math. Comput. 219, 6556–6564 (2013)
Idier, J.: Convex half-quadratic criteria and interacting auxiliary variables for image restoration. IEEE Trans. Image Process. 10(7), 1001–1009 (2001)
Jähne, B.: Digital Image Processing. Springer, Berlin (2002)
Lanza, A., Morigi, S., Selesnick, I.W., Sgallari, F.: Nonconvex nonsmooth optimization via convex–nonconvex majorization-minimization. Numer. Math. 1, 1–39 (2016)
Lanza, A., Morigi, S., Sgallari, F.: Convex image denoising via non-convex regularization. Scale Space Var. Methods Comput. Vis. 9087, 666–677 (2015)
Lanza, A., Morigi, S., Sgallari, F.: Convex image denoising via non-convex regularization with parameter selection. J. Math. Imaging Vis. 56, 195–220 (2016)
Laporte, L., Flamary, R., Canu, S., Déjean, S., Mothe, J.: Nonconvex regularizations for feature selection in ranking with sparse SVM. IEEE Trans. Neural Netw. Learn. Syst. 25(6), 1118–1130 (2014)
Liu, X.-G., Gao, X.-B.: An improvement for GNC method of nonconvex nonsmooth image restoration. Appl. Mech. Mater. 380–384, 1664–1667 (2013)
Liu, X.-G., Gao, X.-B., Xue, Q.: Image restoration combining Tikhonov with different order nonconvex nonsmooth regularizations. In: 2013 Ninth International Conference on Computational Intelligence and Security, pp. 250–254 (2013)
Liu, X.-G., Gao, X.-B.: A method based on the GNC and augmented Lagrangian duality for nonconvex nonsmooth image restoration. Acta Electron. Sin. 42(2), 264–271 (2014)
Marroquin, J., Mitter, S., Poggio, T.: Probabilistic solution of ill-posed problems in computational vision. J. Am. Stat. Assoc. 82, 76–89 (1987)
Mobahi, H., Fisher, J.W., III: A theoretical analysis of optimization by Gaussian continuation. In: Wong, W.-K., Lowd, D. (eds.) Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), Austin, Texas, USA, January 25–30, 2015, pp. 1205–1211 (2015)
Ni, C., Li, Q., Xia, L.Z.: A novel method of infrared image denoising and edge enhancement. Signal Process. 88(6), 1606–1614 (2008)
Nikolova, M.: Markovian reconstruction using a GNC approach. IEEE Trans. Image Process. 8, 1204–1220 (1999)
Nikolova, M.: Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares. Multiscale Model. Simul. 4(3), 960–991 (2005)
Nikolova, M.: Analytical bounds on the minimizers of (nonconvex) regularized least-squares. Inverse Probl. Imaging 1(4), 661–677 (2007)
Nikolova, M., Chan, R.H.: The equivalence of half-quadratic minimization and the gradient linearization iteration. IEEE Trans. Image Process. 16(6), 1623–1627 (2007)
Nikolova, M., Ng, M.K.: Analysis of half-quadratic minimization methods for signal and image recovery. SIAM J. Sci. Comput. 27(3), 937–966 (2005)
Nikolova, M., Ng, M.K., Tam, C.-P.: Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Trans. Image Process. 19(12), 3073–3088 (2010)
Nikolova, M., Ng, M.K., Tam, C.-P.: On \(\ell _1\) data fitting and concave regularization for image recovery. SIAM J. Sci. Comput. 35(1), A397–A430 (2013)
Nikolova, M., Ng, M.K., Zhang, S., Ching, W.-K.: Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 1(1), 2–25 (2008)
Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, New York (2006)
Parekh, A., Selesnick, I.W.: Convex denoising using non-convex tight frame regularization. IEEE Signal Process. Lett. 22(10), 1786–1790 (2015)
Parekh, A., Selesnick, I.W.: Enhanced low-rank matrix approximation. IEEE Signal Process. Lett. 23(4), 493–497 (2016)
Robini, M.C., Magnin, I.E.: Optimization by stochastic continuation. SIAM J. Imaging Sci. 3(4), 1096–1121 (2010)
Robini, M.C., Zhu, Y.: Generic half-quadratic optimization for image reconstruction. SIAM J. Imaging Sci. 8(3), 1752–1797 (2015)
Robini, M.C., Zhu, Y., Luo, J.: Edge-preserving reconstruction with contour-line smoothing and non-quadratic data-fidelity. Inverse Probl. Imaging 7(4), 1331–1366 (2013)
Robini, M. C., Zhu, Y., Lv, X., Liu, W.: Inexact half-quadratic optimization for image reconstruction. In: 2016 IEEE International Conference on Image Processing (ICIP), pp. 3513–3517 (2016)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
Selesnick, I.W., Parekh, A., Bayram, I.: Convex 1-D total variation denoising with non-convex regularization. IEEE Signal Process. Lett. 22(2), 141–144 (2015)
Snyder, W., Han, Y.-S., Bilbro, G., Whitaker, R., Pizer, S.: Image relaxation: restoration and feature extraction. IEEE Trans. Pattern Anal. Mach. Intell. 17(6), 620–624 (1995)
Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis, 3rd edn. Springer, Berlin (2002)
Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. V. H. Winston & Sons, Washington (1977)
Tuia, D., Flamary, R., Barlaud, M.: Non-convex regularization in remote sensing. IEEE Trans. Geosci. Remote Sens. 54(11), 6470–6480 (2016)
Vese, L., Chan, T.F.: Reduced Non-convex Functional Approximations for Image Restoration & Segmentation. Department of Mathematics, University of California, Los Angeles (1997)
Wells, P.N.T.: Medical ultrasound: imaging of soft tissue strain and elasticity. J. R. Soc. Interface 8(64), 1521–1549 (2011)
Xiao, C., He, Y., Yu, J.: A high-efficiency edge-preserving Bayesian method for image interpolation. In: Wang, Q., Pfahl, D., Raffo, D. (eds.) Making Globally Distributed Software Development—A Success Story, International Conference of Software Process, Proceedings. Leipzig, Germany, May 10–11, 2008, pp. 1042–1046 (2008)
Additional information
This work was supported by Dipartimento di Matematica e Informatica, Università degli Studi di Perugia. The author Antonio Boccuto was supported also by the Italian National Group of Mathematical Analysis, Probability and Applications (G.N.A.M.P.A.).
Appendices
Appendix 1
In this appendix, we show how the assumptions in Theorems 4, 5 and 6 generalize the conditions of the other duality theorems presented in the literature.
In the following examples, we take \(\lambda ^2=1\).
(a) Observe that, with the same hypotheses and notations as in Theorem 6, if f is concave and non-decreasing on \({\mathbb {R}}^+_0\), \(f(0)=0\) and f is differentiable on \((0,+\infty )\), then, by L’Hôpital’s rule, the limit \(\displaystyle {\ell _0=\lim _{t\rightarrow + \infty }\frac{f(t)}{t}}\) is equal to the limit \(\displaystyle {\ell =\lim _{t\rightarrow + \infty }f^{\prime }(t)}\). Moreover, note that, since \(g(t)=f(t^2)\) for each \(t\in {\mathbb {R}}^+_0\), we get \(\displaystyle {f^{\prime }(t^2)=\frac{g^{\prime }(t)}{2 t}}\) for each \(t\in (0, +\infty )\), and thus \(\displaystyle {\lim _{t\rightarrow +\infty }\frac{g^{\prime }(t)}{2 t}}= \ell =\ell _0\). In the duality theorems, many conditions involve the limit \(\displaystyle {\lim _{t\rightarrow +\infty }\frac{g^{\prime }(t)}{2 t}}\) (see also [23, Theorem 1], [43, 58, 59]). Therefore, when f is non-decreasing and concave on \({\mathbb {R}}^+_0\), \(f(0)=0\) and f is differentiable on \((0,+\infty )\), these conditions are equivalent to the corresponding ones involving the limit \(\ell _0\).
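As a concrete illustrative instance of this computation (our example, not one from the paper), consider \(f(t)=\log (1+t)\), which is concave, non-decreasing on \({\mathbb {R}}^+_0\), differentiable on \((0,+\infty )\) and satisfies \(f(0)=0\):

```latex
% Worked example: f(t) = log(1+t)
\ell_0 = \lim_{t \to +\infty} \frac{\log(1+t)}{t} = 0
       = \lim_{t \to +\infty} \frac{1}{1+t}
       = \lim_{t \to +\infty} f'(t) = \ell ,
% and, with g(t) = f(t^2) = \log(1+t^2), so that g'(t) = 2t/(1+t^2),
\lim_{t \to +\infty} \frac{g'(t)}{2t}
  = \lim_{t \to +\infty} \frac{1}{1+t^2} = 0 = \ell_0 .
```

All three limits coincide, as the general argument above predicts.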
(b) Theorem 4 is a strict generalization of Theorem 3. Indeed, observe that every function \(g \in {\mathrm{Lip}}_{\mathrm{loc}}({\mathbb {R}})\) is also continuous on \({\mathbb {R}}\), and a fortiori u.s.c. Moreover, given a fixed real number \(\displaystyle { a\in \Bigl ( 0,\frac{1}{2}\Bigr )}\), the functions \(g:{\mathbb {R}} \rightarrow {\mathbb {R}}\), \(f:{\mathbb {R}} \rightarrow {\mathbb {R}} \cup \{ - \infty \}\), defined by \(g(t)=t^{2a}\), \(t\in {\mathbb {R}}\),
satisfy the hypotheses (4.1), (4.2) of Theorem 4, (5.1) of Theorem 5 and (6.1) of Theorem 6, but since \(2a<1\), g is not Lipschitz on [0, 1], and hence \(g \not \in {\mathrm{Lip}}_{\mathrm{loc}}({\mathbb {R}})\). Furthermore, with the same notations as in Theorem 7, we get \(B= (0,+\infty )\),
(see also [43, Table II (d)]).
Furthermore, Theorem 4 strictly extends Theorem 3 also because a concave function f need not be differentiable outside a finite set, as the example above shows.
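The failure of the Lipschitz condition for \(g(t)=t^{2a}\) with \(a\in \bigl (0,\frac{1}{2}\bigr )\) can also be seen directly from the difference quotient at the origin:

```latex
\frac{g(t)-g(0)}{t-0} \;=\; \frac{t^{2a}}{t} \;=\; t^{2a-1}
\;\xrightarrow[t \to 0^+]{}\; +\infty
\qquad (\text{since } 2a-1<0),
```

so no Lipschitz constant can hold on any interval \([0,\delta ]\), and in particular \(g \not \in {\mathrm{Lip}}_{\mathrm{loc}}({\mathbb {R}})\).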
(c) We now give an example in which the conditions (6.1) and (6.2) of Theorem 6 do not hold (see also [6, Table 3.2, Example 2]). Put
It is not difficult to check that g is even, \(g(0)=0\),
and hence \(g\in C^1({\mathbb {R}})\), and a fortiori \(g\in C^1({\mathbb {R}}^+_0)\), g is strictly increasing in \([0, +\infty )\), \(g \in {\mathrm{Lip}}_{\mathrm{loc}}({\mathbb {R}})\). Set
It is not difficult to see that \(f(t)=g(\sqrt{t})\) for every \(t\ge 0\), \(f(0)=0\), f is concave and strictly increasing on \({\mathbb {R}}_0^+\),
\(f\in C^1({\mathbb {R}}^+_0)\), \(B=[1/2,1]\). Moreover, \(\displaystyle {\eta (b)=\frac{1}{b}-1}\) for every \(b\in [1/2,1]\),
and so the condition (6.2) of Theorem 6 is not satisfied. Furthermore, it is not difficult to check that \(\beta \) is decreasing and convex on B.
(d) We now show that, under the same hypotheses and notations as in Theorem 7, in general the set \((f^{\prime })^{-1}(\{b\})\) can have more than one element for some \(b\in B\). For instance, set
It is not difficult to see that g is even, \(g\not \equiv 0\), \(g(0)=0\), \(f(t)=g(\sqrt{t})\) for each \(t\ge 0\),
and so \(g\in C^1({\mathbb {R}})\), and a fortiori \(g\in C^1({\mathbb {R}}^+_0)\). Put
It is not difficult to check that \(f(t)=g(\sqrt{t})\) for every \(t \ge 0\), \(f(0)=0\), f is concave and strictly increasing on \({\mathbb {R}}_0^+\),
\(f\in C^1({\mathbb {R}}^+_0)\), \(\overline{a}=\sqrt{2}\), \(B=(0,\sqrt{2}]\). Moreover, note that \(f^{\prime }(t)>1\) if \(t\in [0,1)\) and \(f^{\prime }(t)<1\) if \(t\in (2,+\infty )\). Hence, the function \(h_1(t)=f(t)-t\) is increasing on [0, 1], decreasing on \([2, +\infty )\), \(h_1\) attains its maximum value on [1, 2], and \(h_1([1,2])=\{3-2\sqrt{2}\}\). Furthermore, we get
\((f^{\prime })^{-1}(\{1\})=[1,2]\),
It is not difficult to check that \(\beta \) is decreasing and convex on B. Finally, we get that \(\beta (1)\) is well-defined, since \(f(t)-t=3 - 2\sqrt{2}\) for every \(t\in (f^{\prime })^{-1}(\{1\})\).
(e) In this example, the hypotheses of Theorem 4 are satisfied, but g does not fulfil the conditions (5.1) and (6.1), and g is not continuous at 0. Here, the corresponding function \(\beta \) does not fulfil (5.2), but satisfies (6.2), since the implication (6.2) \(\Longrightarrow \) (6.1) does not hold when, for every \(\delta >0\), the function f assumes some negative real values on \([\delta , + \infty )\). Put
It is not difficult to check that \(f(t)=g(\sqrt{t})\) for any \(t \ge 0\). Note that g is strictly decreasing on \({\mathbb {R}}^+_0\) and \(\displaystyle {\lim _{t\rightarrow +\infty } \frac{f(t)}{t}=-\infty }\). For \(b\in {\mathbb {R}}\), we get
and thus \(\beta (b)<+\infty \) for every \(b\in {\mathbb {R}}\).
(f) Note that it may happen that g and f take the value \(-\infty \) on some positive real numbers and assume real values on other points of the positive half line, and \(\beta \) is not constant on \({\mathbb {R}}\). For example, set
It is not difficult to see that \(f(t)=g(\sqrt{t})\) for each \(t \ge 0\). For \(b\in {\mathbb {R}}\), we get
(g) Observe that, if f is non-negative in a suitable interval \((0,t_0]\) and \(\displaystyle { \limsup _{t\rightarrow + \infty }\frac{f(t)}{t}>0}\), then, thanks to the last part of Theorem 6, there are some positive real numbers b with \(\beta (b)=+\infty \). This amounts to imposing a positive weight on the values of the finite differences appearing in the expression of the primal energy. Such a requirement is too restrictive, since it is not advisable to impose regularity constraints on cliques containing strong discontinuities. Hence, the condition (6.1) is not restrictive for our purposes.
(h) Observe that it may happen that f, g and \(\beta \) satisfy the properties in (a) and (b) of Theorem 4, and that \(g(0) < 0\) and \(\beta (\overline{b})<0\) for some \(\overline{b} \in {\mathbb {R}}\). Indeed, it is enough to take \(g(t)=-1\) for every \(t \in {\mathbb {R}}\), and
Note that, in this case, the conditions of Theorems 5 and 6 are satisfied.
(i) If \(g:{\mathbb {R}} \rightarrow \widetilde{{\mathbb {R}}}\) is convex and even on \({\mathbb {R}}\), \(g(0)\in {\mathbb {R}}\) and g satisfies the condition (4.2), then g is real-valued on \({\mathbb {R}}\) and \(g\in C^1({\mathbb {R}} \setminus \{0\})\) (see also [43, Lemma 3]).
(j) If \(g\in C^1({\mathbb {R}}^+_0)\), then the function \(\beta \) in Theorem 7 coincides with the corresponding one investigated by D. Geman and G. Reynolds in [34]. This function is defined in a closed interval \(B^*\), while, in our setting, \(\beta \) is defined on the whole real line.
(k) The condition of convexity of \(\beta \) is not restrictive. Indeed, since \(\beta \) is l.s.c., by [70, Corollary 12.1.1] we get \(\beta ^*= (\mathrm{conv}\,\beta )^*\). This implies that the functions f, g obtained starting from \(\beta \) coincide with the corresponding ones constructed starting from the convex hull of \(\beta \).
Appendix 2
We now sketch the proof of the convexity of the function \(\varphi \) defined in (28) on
where \(\varepsilon \in (0, + \infty )\) and \(\delta \in (0,1)\). Note that \(\varphi \in C^2((-\infty ,0) \times {\mathbb {R}})\) and \(\varphi \in C^2((0,+\infty ) \times {\mathbb {R}})\), since
where the function sgn is defined as in (29). For \(t_1\ne 0\) let
be the Hessian matrix associated with \(\varphi \).
Note that, for every \( (t_1, t_2) \in ({\mathbb {R}} \setminus \{ 0 \}) \times \left[ -\sqrt{\frac{1-\delta }{\varepsilon (3-\delta )}}, \sqrt{\frac{1-\delta }{\varepsilon (3-\delta )}}\,\right] \), H is positive-semidefinite. Furthermore, for every \(\overline{t_2}\in {\mathbb {R}}\), the equation of the tangent hyperplane at the point \((0, \overline{t_2})\) is
and \(\varphi \ge 0\) on \({\mathbb {R}}^2\). Thus, we deduce that \(\varphi \) is convex on
(see also [70, Theorem 25.1], [71, Theorem 2.14, (b) and (c)]).
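The deduction of convexity from the tangent hyperplanes and \(\varphi \ge 0\) rests on the standard first-order characterization of convexity (see, e.g., [24, Sect. 3.1.3]); for a differentiable function it reads:

```latex
% First-order characterization of convexity: for \varphi differentiable
% on an open convex set \Omega \subseteq \mathbb{R}^n,
\varphi \ \text{is convex on } \Omega
\quad\Longleftrightarrow\quad
\varphi(y) \;\ge\; \varphi(x) + \langle \nabla \varphi(x),\, y - x \rangle
\quad \text{for all } x, y \in \Omega .
```

In the argument above, the positive-semidefiniteness of H supplies this inequality away from \(t_1=0\), while the tangent hyperplanes at the points \((0, \overline{t_2})\), together with \(\varphi \ge 0\), extend it across the set \(\{t_1=0\}\).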
Now we sketch the proof of the convexity of the function
on \({\mathbb {R}}^5\). We calculate the Hessian matrix \(H(\psi )\) of \(\psi \) when \(x_1-2x_2+x_3 \ne 0\). Denoting by
\(\kappa =\dfrac{\partial ^2 \varphi }{\partial t_1^2}\), \(\upsilon =\dfrac{\partial ^2 \varphi }{\partial t_1 \partial t_2}\) and \(\omega =\dfrac{\partial ^2 \varphi }{\partial t_2^2}\), we get
We now show that \(H(\psi )\) is positive-semidefinite at the points \((x_1, \ldots , x_5)\) such that \(x_1 - 2 x_2 +x_3 \ne 0\). To this aim, we use the following result.
Proposition 2
(see also [41, Corollary 7.1.5]) A symmetric matrix A of order \(n \times n\) is positive-semidefinite if and only if every principal minor of A is non-negative.
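As an illustrative sketch (the matrices and the helper `is_psd_by_minors` below are ours, not from the paper), the criterion of Proposition 2 can be checked numerically by enumerating all principal submatrices:

```python
# Numerical illustration of the principal-minor criterion of Proposition 2:
# a symmetric matrix is positive-semidefinite iff every principal minor
# (determinant of a submatrix built on the same row and column indices)
# is non-negative.
from itertools import combinations

import numpy as np


def is_psd_by_minors(A, tol=1e-12):
    """Check positive-semidefiniteness of a symmetric matrix via minors."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(A[np.ix_(idx, idx)]) < -tol:
                return False
    return True


# A = B^T B is positive-semidefinite by construction (here of rank 2) ...
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
A = B.T @ B
print(is_psd_by_minors(A))   # True

# ... whereas this indefinite matrix (eigenvalues +1 and -1) fails.
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(is_psd_by_minors(C))   # False
```

Enumerating all \(2^n-1\) principal minors is exponential in n; it is practical here only because Proposition 2 is applied to matrices whose low rank kills all minors beyond order two.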
First of all, we claim that \(H(\psi )\) has rank at most two. Indeed, we get
Since \(H(\psi )\) is the sum of two matrices of rank one, the claim follows. Thus, it is enough for our purposes to prove that every principal minor of \(H(\psi )\) of order two is non-negative. We take into account that \(H(\varphi )\) is positive-semidefinite, that is, \(\kappa \, \omega - \upsilon ^2 \ge 0\).
The determinants of the principal minors of order two are
Now, set \(\varPi =\{(x_1,x_2,x_3,x_4,x_5):x_1=2x_2-x_3\}\) and let \(P\in \varPi \). Then, \(\psi (P)=\varphi (0, x_4-2x_5+2x_2-x_3)=0\). It is possible to see that the equation of the hyperplane tangent to \(\psi \) at P is \(x_6=0\), and \(\psi \ge 0\) on \({\mathbb {R}}^5\). Thus, proceeding similarly as above, it is possible to show that \(\psi \) is convex on \({\mathbb {R}}^5\) (see also [70, Theorem 25.1], [71, Theorem 2.14, (b) and (c)]).
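The rank argument used for \(H(\psi )\) can be illustrated numerically (with generic vectors u, v of our choosing, not the matrices of the paper): a sum of two symmetric rank-one matrices has rank at most two, so every principal minor of order at least three vanishes and semidefiniteness reduces to the minors of orders one and two.

```python
# Illustrative sketch: H = u u^T + v v^T is symmetric, positive-semidefinite
# and of rank at most two, regardless of the ambient dimension.
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

H = np.outer(u, u) + np.outer(v, v)   # 5 x 5 matrix, rank 2 for generic u, v

print(np.linalg.matrix_rank(H))                       # 2
print(bool(np.all(np.linalg.eigvalsh(H) >= -1e-12)))  # True: H is PSD
```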
Cite this article
Boccuto, A., Gerace, I. & Martinelli, F. Half-Quadratic Image Restoration with a Non-parallelism Constraint. J Math Imaging Vis 59, 270–295 (2017). https://doi.org/10.1007/s10851-017-0731-7