On the convergence of a multigrid method for Moreau-regularized variational inequalities of the second kind

Advances in Computational Mathematics

Abstract

We analyze the behavior of a multigrid algorithm for variational inequalities of the second kind with a Moreau-regularized nondifferentiable term. First, we prove a theorem summarizing the properties of the Moreau regularization of a convex, proper, and lower semicontinuous functional that are used in the rest of the paper. We prove that the solution of the regularized problem converges to the solution of the initial problem as the regularization parameter approaches zero. To show how the Moreau regularization of a convex and lower semicontinuous functional can be written explicitly, we construct it for two problems with a scalar unknown taken from the literature, as well as for a contact problem with Tresca friction. These functionals have an integral form, and we prove propositions giving general conditions under which functionals of this type are lower semicontinuous, proper, and convex. To solve the regularized problem, which is a variational inequality of the first kind, we use a standard multigrid method for two-sided obstacle problems. The numerical experiments show high accuracy and very good convergence of the method, even for values of the regularization parameter close to zero. In view of these results, we believe that the proposed method can be an alternative to the existing multigrid methods for variational inequalities of the second kind.
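For the reader's convenience, we recall that the Moreau regularization (Moreau envelope) of a convex, proper, and lower semicontinuous functional \(\varphi\) on a Hilbert space \(V\), with parameter \(\lambda>0\), is the standard infimal convolution

$$ \varphi_{\lambda}(v)=\inf_{w\in V}\left\{\varphi(w)+\frac{1}{2\lambda}\|v-w\|^{2}\right\}, \qquad \varphi^{\prime}_{\lambda}(v)=\frac{1}{\lambda}\left(v-\operatorname{prox}_{\lambda\varphi}(v)\right), $$

where \(\operatorname{prox}_{\lambda\varphi}(v)\) denotes the unique minimizer. The envelope \(\varphi_{\lambda}\) is convex and Fréchet differentiable, and \(\varphi_{\lambda}(v)\to\varphi(v)\) as \(\lambda\to 0\). These are standard facts recalled only as background; the paper's precise normalization may differ.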



Acknowledgments

The author acknowledges the partial support of the network GDRI ECO Math for this paper.

Author information

Correspondence to Lori Badea.

Communicated by: Stefan Volkwein

Appendix. Discrete approach of calculating the Moreau regularization

If the argument of φ is a scalar function, let us consider that

$$ \varphi(v)=\sum\limits_{k=1}^{n_{p}} e_{k}\phi(v(x_{k})) \text{ for any } v=(v(x_{1}),\ldots,v(x_{n_{p}}))\in \mathbf{R}^{n_{p}} $$
(A.1)

where \(e_{k}\), \(k=1,\ldots,n_{p}\), are nonnegative real constants and \(\phi:\mathbf{R}\to \overline{\mathbf{R}}\). Similarly to Proposition 4.1, we have

Proposition A.1

If \(\phi:\mathbf{R}\to{\overline{\mathbf{R}}}\) is a lower semicontinuous, proper, and convex function, then \(\varphi:\mathbf{R}^{n_{p}}\to \overline{\mathbf{R}}\) defined in (A.1) is a lower semicontinuous, proper, and convex functional.

Proof

Evidently, since \(\phi\) is proper and convex, \(\varphi\) is also a proper and convex functional. Now, let \(\{v^{n}=(v^{n}(x_{1}),\ldots,v^{n}(x_{n_{p}}))\}_{n}\subset \mathbf{R}^{n_{p}}\), \(v=(v(x_{1}),\ldots,v(x_{n_{p}}))\in \mathbf{R}^{n_{p}}\), and \(\psi\in\mathbf{R}\) be such that \(v^{n}\to v\) in \(\mathbf{R}^{n_{p}}\) as \(n\to\infty\) and \(\varphi(v^{n})\le \psi\) for all \(n\). Then \(v^{n}(x_{k})\to v(x_{k})\) as \(n\to\infty\) for all \(k=1,\ldots,n_{p}\), and, since \(e_{k}\ge 0\), \(k=1,\ldots,n_{p}\), and \(\phi\) is lower semicontinuous, we have

$$ \begin{array}{@{}rcl@{}} &&\varphi(v)= {\sum}_{k=1}^{n_{p}} e_{k}\phi(v(x_{k}))\le {\sum}_{k=1}^{n_{p}} e_{k}\liminf_{n\to\infty}\phi(v^{n}(x_{k})) \\ &&\le \liminf_{n\to\infty}{\sum}_{k=1}^{n_{p}} e_{k}\phi(v^{n}(x_{k}))=\liminf_{n\to\infty} \varphi(v^{n})\le \psi \end{array} $$

i.e., φ is lower semicontinuous. □

Also, by a reasoning similar to that in the case of \(L^{2}({\Omega})\), we get

$$ \begin{array}{@{}rcl@{}} &&\partial\varphi(u(x_{1}),\ldots,u(x_{n_{p}})) =\left\{(e_{1}\phi^{\prime}_{u(x_{1})},\ldots,e_{n_{p}}\phi^{\prime}_{u(x_{n_{p}})})\in \mathbf{R}^{n_{p}} \text{ : } \phi^{\prime}_{u(x_{k})}\in\mathbf{R},\right. \\ &&\left. \phi^{\prime}_{u(x_{k})}(z_{k}-u(x_{k}))\le \phi(z_{k})-\phi(u(x_{k})) \text{ for any }z_{k}\in \mathbf{R}, k=1,\ldots, n_{p}\right\} \end{array} $$

and, evidently, the value of a subgradient \(\partial \varphi _{(u(x_{1}),\ldots ,u(x_{n_{p}}))}\in \partial \varphi (u(x_{1}),\ldots ,u(x_{n_{p}}))\) at a point \((z_{1},\ldots ,z_{n_{p}})\in \mathbf {R}^{n_{p}}\) is written as

$$ \partial\varphi_{(u(x_{1}),\ldots,u(x_{n_{p}}))}(z_{1},\ldots,z_{n_{p}})=\sum\limits_{k=1}^{n_{p}} e_{k} \phi^{\prime}_{u(x_{k})} z_{k} $$
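For a concrete check of this nodewise structure, the scalar Moreau envelope and its derivative can also be approximated numerically by solving the one-dimensional proximal problem at each node. The sketch below is illustrative only and is not taken from the paper: the function names, the use of scipy.optimize.minimize_scalar, and the convention of regularizing \(e_{k}\phi\) nodewise (i.e., with respect to the Euclidean norm on \(\mathbf{R}^{n_{p}}\)) are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def moreau_envelope_1d(phi, t, lam, width=50.0):
    """Approximate phi_lambda(t) = min_s { phi(s) + (s - t)^2 / (2*lam) }
    and its derivative (t - prox_{lam*phi}(t)) / lam by 1-D minimization."""
    obj = lambda s: phi(s) + (s - t) ** 2 / (2.0 * lam)
    res = minimize_scalar(obj, bounds=(t - width, t + width), method="bounded")
    prox = res.x
    return res.fun, (t - prox) / lam


def envelope_separable(phi, v, e, lam):
    """Envelope and gradient of the separable functional sum_k e_k * phi(v_k),
    assembled nodewise (assumed convention: Euclidean norm on R^{n_p})."""
    vals, grads = zip(*(moreau_envelope_1d(lambda s, ek=ek: ek * phi(s), vk, lam)
                        for vk, ek in zip(v, e)))
    return float(np.sum(vals)), np.array(grads)
```

For instance, with phi = abs this reproduces, up to the weights, the familiar Huber-type smoothing of the modulus, which is the behavior expected from the closed-form expressions below.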

Similarly, if the argument of \(\varphi\) is a vector-valued function, we consider \(\varphi:({\mathbf{R}^{\mathbf{d}}})^{n_{p}}\to \overline{\mathbf{R}}\) and

$$ \begin{array}{@{}rcl@{}} &&\displaystyle\varphi(\boldsymbol{v}(x_{1}),\ldots,\boldsymbol{v}(x_{n_{p}}))=\sum\limits_{k=1}^{n_{p}} e_{k}\phi(\boldsymbol{v}(x_{k}))\\ &&\text{for any } \boldsymbol{v}=(\boldsymbol{v}(x_{1}),\ldots,\boldsymbol{v}(x_{n_{p}}))\in ({\mathbf{R}^{\mathbf{d}}})^{n_{p}} \end{array} $$
(A.2)

Remark A.1

Similarly to the scalar case above, we can prove that if \(\phi:{\mathbf{R}^{\mathbf{d}}}\to \overline{\mathbf{R}}\) is a proper, lower semicontinuous, and convex functional which is continuous on its effective domain \(D(\phi)\), then \(\varphi\) defined in (A.2) is proper, lower semicontinuous, and convex.

Also, we have

$$ \begin{array}{@{}rcl@{}} &&\partial\varphi(\boldsymbol{u}(x_{1}),\ldots,\boldsymbol{u}(x_{n_{p}})) =\left\{(e_{1}\boldsymbol{\phi}^{\prime}_{\boldsymbol{u}(x_{1})},\ldots,e_{n_{p}}\boldsymbol{\phi}^{\prime}_{\boldsymbol{u}(x_{n_{p}})})\in ({\mathbf{R}^{\mathbf{d}}})^{n_{p}} \text{ : }\right. \\ &&\boldsymbol{\phi}^{\prime}_{\boldsymbol{u}(x_{k})}\in\mathbf{R}^{d}, \boldsymbol{\phi}^{\prime}_{\boldsymbol{u}(x_{k})}\cdot(\boldsymbol{z}_{k}-\boldsymbol{u}(x_{k}))\le \phi(\boldsymbol{z}_{k})-\phi(\boldsymbol{u}(x_{k})) \\ &&\left. \text{for any }\boldsymbol{z}_{k}\in \mathbf{R}^{d}, k=1,\ldots, n_{p}\right\} \end{array} $$

and the value of a subgradient \(\partial \varphi _{(\boldsymbol {u}(x_{1}),\ldots ,\boldsymbol {u}(x_{n_{p}}))}\in \partial \varphi (\boldsymbol {u}(x_{1}),\ldots ,\boldsymbol {u}(x_{n_{p}}))\) at a point \((\boldsymbol {z}_{1},\ldots ,\boldsymbol {z}_{n_{p}})\in (\mathbf {R}^{d})^{n_{p}}\) is written as

$$ \partial\varphi_{(\boldsymbol{u}(x_{1}),\ldots,\boldsymbol{u}(x_{n_{p}}))}(\boldsymbol{z}_{1},\ldots,\boldsymbol{z}_{n_{p}})=\sum\limits_{k=1}^{n_{p}} e_{k} (\boldsymbol{\phi}^{\prime}_{\boldsymbol{u}(x_{k})}\cdot \boldsymbol{z}_{k}) $$

In the following, for completeness, we write the functionals \(\varphi_{\lambda}\) and \(\varphi^{\prime}_{\lambda}\) for the three examples in Section 4 when the functional \(\varphi\) is written as in (A.1) or (A.2).

If φ in Example 1 is written as in (A.1), then

$$ \varphi^{\prime}_{\lambda}(u)=(e_{1}\phi^{\prime}_{\lambda}(u(x_{1})),\ldots,e_{n_{p}}\phi^{\prime}_{\lambda }(u(x_{n_{p}}))) $$

where

$$ \begin{array}{@{}rcl@{}} \phi^{\prime}_{\lambda }(u(x_{k}))= \left\{ \begin{array}{l} \frac{1}{1+\lambda a_{1}}[a_{1}(u(x_{k})-\theta_{0})-s_{1}]\text{ if }u(x_{k})-\theta_{0}<-\lambda s_{1} \\ \frac{1}{1+\lambda a_{2}}[a_{2}(u(x_{k})-\theta_{0})+s_{2}]\text{ if }u(x_{k})-\theta_{0}>\lambda s_{2} \\ \frac{u(x_{k})-\theta_{0}}{\lambda}\text{ if }-\lambda s_{1}\le u(x_{k})-\theta_{0}\le\lambda s_{2} \end{array} \right. \end{array} $$

for \(k=1,\ldots,n_{p}\). Also,

$$ \begin{array}{@{}rcl@{}} \varphi_{\lambda}(u)&=&{\sum}_{u(x_{k})-\theta_{0}<-\lambda s_{1}} e_{k}\left[\frac{\lambda}{2}\left( \frac{a_{1}(u(x_{k})-\theta_{0})-s_{1}}{1+\lambda a_{1}}\right)^{2} \right. \\ &&\left. + \frac{a_{1}}{2}\left( \frac{u(x_{k})-\theta_{0}+\lambda s_{1}}{1+\lambda a_{1}}\right)^{2} -s_{1}\frac{u(x_{k})-\theta_{0}+\lambda s_{1}}{1+\lambda a_{1}}\right] \\ &&+{\sum}_{u(x_{k})-\theta_{0}>\lambda s_{2}} e_{k}\left[\frac{\lambda}{2}\left( \frac{a_{2}(u(x_{k})-\theta_{0})+s_{2}}{1+\lambda a_{2}}\right)^{2} \right. \\ &&\left. +\frac{a_{2}}{2}\left( \frac{u(x_{k})-\theta_{0}-\lambda s_{2}}{1+\lambda a_{2}}\right)^{2} +s_{2}\frac{u(x_{k})-\theta_{0}-\lambda s_{2}}{1+\lambda a_{2}}\right] \\ &&+ \frac{1}{2\lambda}{\sum}_{-\lambda s_{1}\le u(x_{k})-\theta_{0}\le\lambda s_{2}}e_{k}(u(x_{k})-\theta_{0})^{2} \end{array} $$
(A.3)
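For illustration, the piecewise formula for \(\phi^{\prime}_{\lambda}\) above can be transcribed directly into vectorized code. This is only a sketch: the function and parameter names (u, lam, a1, a2, s1, s2, theta0) are assumptions, and the weights \(e_{k}\) must still be applied componentwise, as in the displayed expression for \(\varphi^{\prime}_{\lambda}(u)\).

```python
import numpy as np


def dphi_lambda_example1(u, lam, a1, a2, s1, s2, theta0):
    """Nodewise values phi'_lambda(u(x_k)) for Example 1 (piecewise formula above);
    u is a 1-D array of nodal values u(x_k)."""
    w = np.asarray(u, dtype=float) - theta0
    lower = (a1 * w - s1) / (1.0 + lam * a1)   # branch w < -lam*s1
    upper = (a2 * w + s2) / (1.0 + lam * a2)   # branch w >  lam*s2
    middle = w / lam                           # branch -lam*s1 <= w <= lam*s2
    return np.where(w < -lam * s1, lower, np.where(w > lam * s2, upper, middle))
```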

In the case of Example 2, \(\varphi^{\prime}_{\lambda}(u)\) has the same form as in the previous example, with

$$ \phi^{\prime}_{\lambda }(u(x_{k}))=\left\{ \begin{array}{ll} 2&\text{ if }u(x_{k})> 1+2\lambda \\ \frac{u(x_{k})-1}{\lambda}&\text{ if }1+\lambda\le u(x_{k})\le 1+2\lambda \\ \frac{1}{2}\left( \sqrt{\lambda^{2}+4u(x_{k})}-\lambda\right)&\text{ if }0\le u(x_{k})<1+\lambda \\ \frac{u(x_{k})}{\lambda}&\text{ if }u(x_{k})< 0 \end{array} \right. $$

for \(k=1,\ldots,n_{p}\), and

$$ \begin{array}{@{}rcl@{}} \varphi_{\lambda}(u)&=&{\sum}_{u(x_{k})<0}e_{k}\frac{u(x_{k})^{2}}{2\lambda}+\\ &&{\sum}_{0\le u(x_{k})<1+\lambda}e_{k}\left( \sqrt{\lambda^{2}+4u(x_{k})}-\lambda\right)^{2}\cdot \\ &&\left[\frac{\lambda}{8}+ \frac{1}{12}(\sqrt{\lambda^{2}+4u(x_{k})}-\lambda)\right]\\ &&+ {\sum}_{1+\lambda\le u(x_{k})\le 1+2\lambda}e_{k}\left[\frac{(u(x_{k})-1)^{2}}{2\lambda}+\frac{2}{3}\right] \\ &&+{\sum}_{u(x_{k})> 1+2\lambda}e_{k}\left[2u(x_{k})-2\lambda-\frac{4}{3}\right] \end{array} $$
(A.4)
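The corresponding transcription for Example 2 is sketched below; again, the names are assumptions and the code merely restates the piecewise formula for \(\phi^{\prime}_{\lambda}\) given above.

```python
import numpy as np


def dphi_lambda_example2(u, lam):
    """Nodewise values phi'_lambda(u(x_k)) for Example 2 (piecewise formula above)."""
    u = np.asarray(u, dtype=float)
    out = np.empty_like(u)
    out[u < 0.0] = u[u < 0.0] / lam                          # u < 0
    low = (u >= 0.0) & (u < 1.0 + lam)                       # 0 <= u < 1+lam
    out[low] = 0.5 * (np.sqrt(lam ** 2 + 4.0 * u[low]) - lam)
    mid = (u >= 1.0 + lam) & (u <= 1.0 + 2.0 * lam)          # 1+lam <= u <= 1+2*lam
    out[mid] = (u[mid] - 1.0) / lam
    out[u > 1.0 + 2.0 * lam] = 2.0                           # u > 1+2*lam
    return out
```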

Finally, if \(\varphi\) in Example 3 is written as in (A.2), i.e.,

$$ \varphi(\boldsymbol{u}(x_{1}),\ldots,\boldsymbol{u}(x_{n_{p}}))=\sum\limits_{k=1}^{n_{p}} e_{k} f(x_{k})\phi(\boldsymbol{u}(x_{k})) $$

then

$$ \varphi^{\prime}_{\lambda}(\boldsymbol{u})=(\boldsymbol{\phi}^{\prime}_{\lambda }(\boldsymbol{u}(x_{1})),\ldots,\boldsymbol{\phi}^{\prime}_{\lambda }(\boldsymbol{u}(x_{n_{p}}))) $$

where

$$ \boldsymbol{\phi}^{\prime}_{\lambda }(\boldsymbol{u}(x_{k}))= \frac{1}{\lambda}\left\{ \begin{array}{ll} \frac{\lambda e_{k}f(x_{k})}{|\boldsymbol{u}_{t}(x_{k})|}\boldsymbol{u}_{t}(x_{k})&\text{ if }|\boldsymbol{u}_{t}(x_{k})|>\lambda e_{k}f(x_{k}) \\ \boldsymbol{u}_{t}(x_{k})&\text{ if }|\boldsymbol{u}_{t}(x_{k})|\le\lambda e_{k} f(x_{k}) \end{array} \right. $$

for \(k=1,\ldots,n_{p}\), and

$$ \begin{array}{@{}rcl@{}} \varphi_{\lambda}(\boldsymbol{u})&=&{\sum}_{|\boldsymbol{u}_{t}(x_{k})|>\lambda e_{k}f(x_{k})}e_{k}f(x_{k})\left( |\boldsymbol{u}_{t}(x_{k})|-\frac{\lambda e_{k}f(x_{k})}{2}\right) \\ && +{\sum}_{|\boldsymbol{u}_{t}(x_{k})|\le\lambda e_{k}f(x_{k})}\frac{|\boldsymbol{u}_{t}(x_{k})|^{2}}{2\lambda} \end{array} $$
(A.5)
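A vectorized transcription of the piecewise formula for \(\boldsymbol{\phi}^{\prime}_{\lambda}\) given above is sketched below; u_t (the tangential nodal values, shape (n_p, d)), e (the weights \(e_{k}\)), and f (the nodal values \(f(x_{k})\)) are assumed to be numpy arrays, and the names are not from the paper.

```python
import numpy as np


def dphi_lambda_example3(u_t, lam, e, f):
    """Nodewise values phi'_lambda(u(x_k)) for Example 3 (Tresca friction);
    u_t has shape (n_p, d), e and f have shape (n_p,)."""
    norms = np.linalg.norm(u_t, axis=1)   # |u_t(x_k)|
    thresh = lam * e * f                  # lam * e_k * f(x_k)
    out = u_t / lam                       # branch |u_t(x_k)| <= thresh
    large = norms > thresh                # branch |u_t(x_k)| >  thresh
    out[large] = (e[large] * f[large] / norms[large])[:, None] * u_t[large]
    return out
```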

Cite this article

Badea, L. On the convergence of a multigrid method for Moreau-regularized variational inequalities of the second kind. Adv Comput Math 45, 2807–2832 (2019). https://doi.org/10.1007/s10444-019-09709-6
