Convergence rates for iteratively regularized Gauss–Newton method subject to stability constraints

https://doi.org/10.1016/j.cam.2021.113744

Abstract

In this paper, we establish convergence rates for the iteratively regularized Gauss–Newton method with iterates defined via convex optimization problems in a Banach space setting. We employ the concept of conditional stability, in place of the well-known concept of variational inequalities, to deduce the convergence rates. To validate the abstract theory, we discuss an ill-posed inverse problem that satisfies our assumptions, and we compare our results with existing results in the literature.

Introduction

The concept of inverse problems arises in many practical applications whenever one looks for unknown causes from a set of observations. Many known inverse problems are ill-posed in the sense that their solutions do not depend continuously on the data. In practical applications, exact data is never available due to measurement errors, so one must work with perturbed data. It is therefore very important to obtain stable approximate solutions in the presence of perturbed data.

Let $F:\mathcal{D}(F)\subseteq U\to V$ be an operator between Banach spaces $U$ and $V$, with domain $\mathcal{D}(F)$. In this paper, our main aim is to solve the inverse problem
$$F(u)=v.\tag{1.1}$$
Let the perturbed data $v^{\delta}$ satisfy
$$\|v-v^{\delta}\|\le\delta.\tag{1.2}$$
Consequently, in order to obtain approximate solutions of (1.1), regularization methods are required. In Hilbert spaces, many regularization methods are known (cf. [1], [2], [3], [4]). One of the best-known regularization methods is the iteratively regularized Gauss–Newton method [5], which takes the form
$$u_{n+1}^{\delta}=u_n^{\delta}+\big(\alpha_n I+F'(u_n^{\delta})^{*}F'(u_n^{\delta})\big)^{-1}\big(F'(u_n^{\delta})^{*}(v^{\delta}-F(u_n^{\delta}))+\alpha_n(u_0-u_n^{\delta})\big),\tag{1.3}$$
where $F'(u)^{*}$ denotes the adjoint of the Fréchet derivative $F'(u)$ of $F$ at $u$, $u_0^{\delta}=u_0$ is an initial guess, and the sequence $\{\alpha_n\}$ of positive real numbers satisfies
$$1\le\frac{\alpha_n}{\alpha_{n+1}}\le\theta\quad\text{and}\quad\lim_{n\to\infty}\alpha_n=0,\tag{1.4}$$
where $\theta>1$. The iteratively regularized Gauss–Newton method has been extensively studied in combination with the discrepancy principle [3], [6]. It is worth mentioning that in the method (1.3), $u_{n+1}^{\delta}$ is the unique minimizer of the quadratic functional
$$\|v^{\delta}-F(u_n^{\delta})-F'(u_n^{\delta})(u-u_n^{\delta})\|^{2}+\alpha_n\|u-u_0\|^{2}$$
over $u\in U$. In the case of Hilbert spaces, regularization methods yield good results provided the exact solution is smooth, but their general tendency is to over-smooth solutions. Therefore, for solutions having special features such as sparsity or discontinuities, such methods may not give good results. Jin et al. [7] developed the following version of the iteratively regularized Gauss–Newton method in Banach spaces:
$$u_{n+1}^{\delta}\in\operatorname*{arg\,min}_{u\in U}\Big\{\|v^{\delta}-F(u_n^{\delta})-F'(u_n^{\delta})(u-u_n^{\delta})\|^{p}+\alpha_n D_{\eta_0}^{\varphi}(u,u_0)\Big\},\tag{1.5}$$
where $u_0^{\delta}=u_0$, $p\ge 1$, and $D_{\eta_0}^{\varphi}(u,u_0)$ denotes the Bregman distance, induced by the lower semi-continuous convex function $\varphi:U\to(-\infty,\infty]$, at $u_0$ in the direction of $\eta_0$. Here $u_0$ and $\eta_0$ are such that $u_0\in\mathcal{D}(F)\cap\mathcal{D}(\varphi)$ and $\eta_0\in\partial\varphi(u_0)$, where $\partial\varphi$ denotes the subdifferential of $\varphi$ (see Section 2 for its formal definition). If $\eta_0=0$ and $\varphi(u)=\|u-u_0\|^{p}$, then (1.5) becomes the method discussed in [8].
The method (1.5) has an additional advantage over the method discussed in [8]: it is developed to deduce approximate solutions of inverse problems in a Banach space setting with a general penalty function.
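As an illustration of how the Hilbert-space iteration (1.3) behaves, here is a minimal numerical sketch for a toy two-dimensional problem; the operator $F$, the noise level, and the parameter choices below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy nonlinear forward operator F: R^2 -> R^2 and its Jacobian
# (illustrative choices, not from the paper).
def F(u):
    return np.array([u[0] ** 2 + u[1], u[0] + u[1] ** 3])

def jacobian(u):
    return np.array([[2.0 * u[0], 1.0],
                     [1.0, 3.0 * u[1] ** 2]])

def irgn(v_delta, u0, alpha0=1.0, theta=1.5, n_iter=20):
    """One realization of iteration (1.3):
    u_{n+1} = u_n + (alpha_n I + J^T J)^{-1} (J^T (v_delta - F(u_n)) + alpha_n (u0 - u_n)),
    with alpha_{n+1} = alpha_n / theta, so that 1 <= alpha_n / alpha_{n+1} <= theta."""
    u, alpha = u0.astype(float).copy(), alpha0
    for _ in range(n_iter):
        J = jacobian(u)
        lhs = alpha * np.eye(len(u)) + J.T @ J
        rhs = J.T @ (v_delta - F(u)) + alpha * (u0 - u)
        u = u + np.linalg.solve(lhs, rhs)
        alpha /= theta
    return u

u_true = np.array([1.0, 2.0])
rng = np.random.default_rng(0)
delta = 1e-6  # noise level as in (1.2)
v_delta = F(u_true) + delta * rng.standard_normal(2)
u_rec = irgn(v_delta, u0=np.array([0.5, 1.5]))
```

Note that a fixed number of iterations is used here only for simplicity; in practice the iteration is terminated by an a-posteriori stopping rule such as the discrepancy principle.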

In [7], convergence rates were obtained for the method (1.5) by employing variational inequalities (cf. (3.10)) under some additional assumptions. However, there exist many ill-posed inverse problems for which it is not known whether the source condition (3.10) is satisfied (cf. [9], [10], [11]). Consequently, it is not possible to deduce convergence rates for such problems via (3.10). We therefore show that, for such problems, convergence rates can be obtained by utilizing an alternative condition known as a conditional stability estimate (3.11).

The paper is organized in the following manner. In Section 2, some preliminaries from convex analysis are given. In Section 3, we discuss the assumptions and known results required in our analysis. Section 4 discusses the convergence rates for the iterates of (1.5) for $p\ge 2$ by incorporating conditional stability estimates. We discuss an example that satisfies our assumptions in Section 5. A comparative analysis of the convergence rates of our method and the existing regularization methods in the literature is given in Section 6. Finally, Section 7 concludes the paper.

Section snippets

Preliminaries

Let $U^{*}$ be the dual space of the Banach space $U$. Given $u\in U$ and $\eta\in U^{*}$, we write $\langle\eta,u\rangle=\eta(u)$ for the duality pairing. Let $\varphi:U\to(-\infty,\infty]$ be a convex function and $\mathcal{D}(\varphi)\coloneqq\{u\in U:\varphi(u)<\infty\}$ its effective domain. Let $\partial\varphi(u)$ denote the subdifferential of $\varphi$ at $u\in U$, given by
$$\partial\varphi(u)\coloneqq\{\eta\in U^{*}:\varphi(\tilde u)-\varphi(u)-\langle\eta,\tilde u-u\rangle\ge 0\ \text{for all}\ \tilde u\in U\}.$$
We call $\varphi$ a proper function if $\mathcal{D}(\varphi)\ne\emptyset$. Clearly, for each $u\in U$, $\partial\varphi(u)$ is closed and convex in $U^{*}$. The subdifferential of $\varphi$ is the multi-valued mapping $\partial\varphi:U\to 2^{U^{*}}$. We set $\mathcal{D}(\partial\varphi)\coloneqq\{u\in\mathcal{D}(\varphi):\partial\varphi(u)\ne\emptyset
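For concreteness, the Bregman distance used in the penalty term of (1.5) is, for $\eta_0\in\partial\varphi(u_0)$,

```latex
D_{\eta_0}^{\varphi}(u,u_0) \coloneqq \varphi(u) - \varphi(u_0) - \langle \eta_0, u - u_0 \rangle .
```

For example, in a Hilbert space with $\varphi(u)=\tfrac12\|u\|^{2}$ one has $\partial\varphi(u_0)=\{u_0\}$, and choosing $\eta_0=u_0$ gives $D_{\eta_0}^{\varphi}(u,u_0)=\tfrac12\|u-u_0\|^{2}$, i.e. the classical quadratic penalty appearing in (1.3).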

Assumptions and known results

In this section, we discuss the assumptions and some known results required in our analysis. We work under the following assumptions on the operator $F$ and the function $\varphi$.

Assumption 3.1

  • (1)

The function $\varphi:U\to(-\infty,\infty]$ is proper, $p$-convex with $p>1$, and weakly lower semi-continuous, such that (2.4) holds for some $\gamma>0$.

  • (2)

$\mathcal{D}(F)$ is a closed and convex set in $U$, and $u^{\dagger}$ is a solution of (1.1) in $\mathcal{D}(F)\cap\mathcal{D}(\varphi)$.

  • (3)

For each $u\in B_{\varsigma}(u^{\dagger})\cap\mathcal{D}(F)$ for some $\varsigma>0$, $F'(u):U\to V$ is a bounded linear operator such that
$$\lim_{s\to 0}\frac{F(u+s(w-u))-F(u)}{s}=F'(u)(w-u),\qquad w\in B_{\varsigma}(u^{\dagger})\cap\mathcal{D}(F),$$
where $B$
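The directional-differentiability requirement in Assumption 3.1(3) can be checked numerically for a concrete operator; the toy map below is an illustrative assumption, not the operator studied in the paper:

```python
import numpy as np

# Illustrative smooth map F: R^2 -> R^2 and its Frechet derivative (Jacobian);
# toy example, not from the paper.
def F(u):
    return np.array([u[0] ** 2 + u[1], u[0] + u[1] ** 3])

def jacobian(u):
    return np.array([[2.0 * u[0], 1.0],
                     [1.0, 3.0 * u[1] ** 2]])

# Check that (F(u + s(w - u)) - F(u)) / s -> F'(u)(w - u) as s -> 0.
u = np.array([1.0, 2.0])
w = np.array([1.3, 1.7])
exact = jacobian(u) @ (w - u)
for s in (1e-2, 1e-4, 1e-6):
    approx = (F(u + s * (w - u)) - F(u)) / s
    # for this smooth F the discrepancy shrinks like O(s)
    print(s, np.linalg.norm(approx - exact))
```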

Main result

In this section, we prove the main result of the paper, in which we obtain the convergence rates of the iteratively regularized Gauss–Newton method (3.6) by utilizing the stability estimate (3.11) for $p\ge 2$. Let us begin by proving the following lemma, which will be helpful in establishing our main result.

Lemma 4.1

Let $U$ and $V$ be Banach spaces, let Assumption 3.1 hold, let $\{\alpha_n\}$ satisfy (1.4), and let $\varphi:U\to(-\infty,\infty]$ be a proper, $p$-convex ($p\ge 2$), lower semi-continuous function. If the conditional stability estimate (3.11) holds, and

Examples

In this section, we discuss some inverse problems that satisfy an estimate of the form (3.11); however, the main aim of this section is to present an example in which the assumptions required in our framework are satisfied. To this end, we recall the severely ill-posed Calderón inverse problem, also known as Electrical Impedance Tomography (EIT) (cf. [9], [10]).

Example 5.1

Let $\Omega\subset\mathbb{R}^{t}$, $t\ge 2$, be a bounded domain with smooth boundary $\partial\Omega$, and let $u\in H^{1}(\Omega)$ satisfy the following Dirichlet problem:
$$\operatorname{div}(K\nabla u)=0\ \text{in}\ \Omega,\qquad u=\xi\ \text{on}\ \partial\Omega.$$
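As a much-simplified illustration of such a Dirichlet problem, the one-dimensional analogue $(Ku')'=0$ on $(0,1)$ with $u(0)=0$, $u(1)=1$ can be solved by finite differences. The discretization below is a sketch under these simplifying assumptions and is not part of the paper's EIT analysis:

```python
import numpy as np

# 1D analogue of div(K grad u) = 0 with Dirichlet data:
# (K u')' = 0 on (0,1), u(0) = 0, u(1) = 1 (illustrative only; the
# paper's EIT setting is multidimensional).
def solve_1d_conductivity(K, n=100):
    """K: callable conductivity on (0,1); returns grid x and solution u."""
    x = np.linspace(0.0, 1.0, n + 1)
    Km = K(0.5 * (x[:-1] + x[1:]))  # conductivity at the n cell midpoints
    # interior equation at node i (= j + 1):
    # -Km[i-1] u_{i-1} + (Km[i-1] + Km[i]) u_i - Km[i] u_{i+1} = 0
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for j in range(n - 1):
        A[j, j] = Km[j] + Km[j + 1]
        if j > 0:
            A[j, j - 1] = -Km[j]
        if j < n - 2:
            A[j, j + 1] = -Km[j + 1]
    b[-1] = Km[-1] * 1.0  # boundary value u(1) = 1 moved to the right-hand side
    u = np.zeros(n + 1)
    u[-1] = 1.0
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

# for constant conductivity the exact solution is the linear function u(x) = x
x, u = solve_1d_conductivity(lambda s: np.ones_like(s))
```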

Comparison of the convergence rates

In this section, we compare the convergence rates of the iteratively regularized Gauss–Newton method subject to stability constraints with those of existing methods in the literature. In the literature, convergence rates have been obtained for numerical methods by utilizing various smoothness concepts, i.e., source conditions, variational inequalities, stability constraints, etc. (cf. [2], [3], [4]). Accordingly, this section is divided into two subsections. In the first

Conclusion and open problems

We have deduced convergence rates for the iteratively regularized Gauss–Newton method by employing the smoothness concept of conditional stability in Banach spaces. We have terminated our iterative scheme via a-posteriori stopping rules, which is standard practice for iterative methods. To validate our assumptions, we have considered a severely ill-posed EIT problem and shown that it fulfills our assumptions. We have compared the convergence rates of our scheme with the convergence

Acknowledgments

The authors sincerely thank the reviewers for their careful reading of the manuscript and for the comments and suggestions that have immensely helped us improve this work.

References (22)

  • G. Alessandrini et al.

    Lipschitz stability for the inverse conductivity problem

    Adv. Appl. Math.

    (2005)
  • G. Mittal et al.

    Iteratively regularized Landweber iteration method: Convergence analysis via Hölder stability

    Appl. Math. Comput.

    (2021)
  • A.B. Bakushinsky et al.

    Iterative methods for approximate solution of inverse problems

  • H.W. Engl et al.

    Regularization of Inverse Problems

    (2000)
  • B. Kaltenbacher et al.

    Iterative Regularization Methods for Nonlinear Ill-Posed Problems

    (2008)
  • T. Schuster et al.

    Regularization Methods in Banach Spaces

    (2012)
  • A.B. Bakushinsky

    The problems of the convergence of the iteratively regularized Gauss–Newton method

    Comput. Math. Math. Phys.

    (1992)
  • Q. Jin et al.

    On the discrepancy principle for some Newton type methods for solving nonlinear inverse problems

    Numer. Math.

    (2009)
  • Q. Jin et al.

    On the iteratively regularized Gauss–Newton method in Banach spaces with applications to parameter identification problems

    Numer. Math.

    (2013)
  • B. Kaltenbacher et al.

    Convergence rates for the iteratively regularized Gauss–Newton method in Banach spaces

    Inverse Problems

    (2010)
  • E. Beretta et al.

    Lipschitz stability for the electrical impedance tomography problem: the complex case

    Comm. Partial Differential Equations

    (2011)