Convergence rates for iteratively regularized Gauss–Newton method subject to stability constraints
Introduction
The concept of inverse problems arises in many practical applications whenever one looks for unknown causes from a set of observations. Many known inverse problems are ill-posed in the sense that their solutions do not depend continuously on the data. In practical applications, due to measurement errors, exact data are never available, so one has to work with perturbed data. It is therefore important to obtain stable approximate solutions in the presence of perturbed data.
Let $F: \mathcal{D}(F) \subseteq X \to Y$ be an operator between the Banach spaces $X$ and $Y$, with domain $\mathcal{D}(F)$. In this paper, our main aim is to solve the following inverse problem $$F(x) = y. \qquad (1.1)$$ Let the perturbed data $y^\delta$ satisfy $$\|y^\delta - y\| \le \delta. \qquad (1.2)$$ Consequently, in order to obtain approximate solutions of (1.1), regularization methods are required. In Hilbert spaces, many regularization methods are known (cf. [1], [2], [3], [4]). One of the best known is the iteratively regularized Gauss–Newton method [5], which takes the form $$x_{n+1}^\delta = x_n^\delta - \big(F'(x_n^\delta)^* F'(x_n^\delta) + \alpha_n I\big)^{-1}\big(F'(x_n^\delta)^*(F(x_n^\delta) - y^\delta) + \alpha_n (x_n^\delta - x_0)\big), \qquad (1.3)$$ where $F'(x_n^\delta)^*$ denotes the adjoint of the Fréchet derivative of $F$ at $x_n^\delta$, $x_0$ is an initial guess, and the sequence $\{\alpha_n\}$ of positive real numbers satisfies $$\alpha_n \to 0, \qquad 1 \le \frac{\alpha_n}{\alpha_{n+1}} \le r, \qquad (1.4)$$ where $r > 1$. The iteratively regularized Gauss–Newton method has been extensively studied in combination with the discrepancy principle [3], [6]. Concerning the method (1.3), it is worth mentioning that $x_{n+1}^\delta$ uniquely minimizes the quadratic functional $$\|F(x_n^\delta) + F'(x_n^\delta)(x - x_n^\delta) - y^\delta\|^2 + \alpha_n \|x - x_0\|^2$$ over $x$. In the case of Hilbert spaces, regularization methods yield good results provided the exact solution is smooth, but their general tendency is to over-smooth solutions. Therefore, for solutions having special features such as sparsity or discontinuities, such methods may not give good results. Jin et al. [7] developed a version (1.5) of the iteratively regularized Gauss–Newton method in Banach spaces in which the quadratic penalty is replaced by the Bregman-distance penalty $\alpha_n D_{\xi_n}\Theta(x, x_n^\delta)$, where $D_{\xi}\Theta(\cdot, x)$ denotes the Bregman distance induced by the lower semi-continuous, convex function $\Theta$ at $x$ in the direction $\xi$, and where $\xi_n$ and $x_n^\delta$ are such that $\xi_n \in \partial\Theta(x_n^\delta)$, $\partial\Theta$ denoting the sub-differential of $\Theta$ (see Section 2 for its formal definition). In the Hilbert space setting with $\Theta = \frac{1}{2}\|\cdot\|^2$, (1.5) reduces to the method discussed in [8]. The method (1.5) has the additional advantage over the method discussed in [8] that it is designed to deduce approximate solutions of inverse problems in the setting of Banach spaces with a general penalty function.
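In the Hilbert space setting, one step of the classical iteration (1.3) amounts to solving a regularized linearized normal equation. The following minimal sketch illustrates this for a toy nonlinear map on $\mathbb{R}^2$; the operator `F`, its Jacobian, the geometric decay of $\alpha_n$, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def F(x):
    # Toy nonlinear forward operator on R^2 (illustrative assumption only).
    return np.array([x[0]**2 + x[1], x[0] + x[1]**3])

def F_prime(x):
    # Jacobian (Frechet derivative) of F at x.
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 3.0 * x[1]**2]])

def irgn(y_delta, x0, alpha0=1.0, r=1.5, n_iter=20):
    """One realization of (1.3):
    x_{n+1} = x_n - (J^T J + a_n I)^{-1} (J^T (F(x_n) - y_delta) + a_n (x_n - x0)),
    with a geometrically decaying a_n = alpha0 / r^n, which satisfies (1.4)."""
    x = x0.copy()
    for n in range(n_iter):
        alpha_n = alpha0 / r**n
        J = F_prime(x)
        rhs = J.T @ (F(x) - y_delta) + alpha_n * (x - x0)
        x = x - np.linalg.solve(J.T @ J + alpha_n * np.eye(2), rhs)
    return x

x_true = np.array([1.0, 2.0])
delta = 1e-4
y_delta = F(x_true) + delta * np.array([1.0, -1.0])  # perturbed data, (1.2)
x_rec = irgn(y_delta, x0=np.array([0.8, 1.8]))
print(x_rec)
```

Because the toy Jacobian is invertible near the solution, the iterates recover $x^\dagger$ up to an error of the order of $\delta$ and of the final $\alpha_n$.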
In [7], convergence rates were obtained for the method (1.5) by employing variational inequalities (cf. (3.10)) under some additional assumptions. However, there exist many ill-posed inverse problems for which it is not known whether the source condition (3.10) is satisfied (cf. [9], [10], [11]); consequently, convergence rates for such problems cannot be deduced by employing (3.10). For such problems, we show that convergence rates can instead be obtained by utilizing an alternative condition known as a conditional stability estimate (3.11).
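The mechanism by which a conditional stability estimate replaces a source condition can be sketched in one line. In the schematic below, the Hölder exponent $\theta$, the constant $C$, and the residual parameter $\tau$ are generic placeholders, not the exact form of (3.11):

```latex
% Schematic: conditional stability turns a residual bound into a rate.
% Assume a Holder-type stability estimate (placeholder version of (3.11)):
%   \|x - x^\dagger\| \le C\,\|F(x) - F(x^\dagger)\|^{\theta}, \quad \theta \in (0,1],
% and suppose the iterate satisfies the a-posteriori residual bound
%   \|F(x_n^\delta) - y^\delta\| \le \tau\delta.
% Then, since \|y^\delta - F(x^\dagger)\| \le \delta, the triangle inequality gives
\|x_n^\delta - x^\dagger\|
  \le C\,\|F(x_n^\delta) - F(x^\dagger)\|^{\theta}
  \le C\,\bigl((\tau + 1)\,\delta\bigr)^{\theta}
  = O(\delta^{\theta}).
```

No smoothness of $x^\dagger$ relative to a source element enters this argument; the rate is inherited entirely from the stability of the forward problem.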
The paper is organized as follows. In Section 2, some preliminaries from convex analysis are given. In Section 3, we discuss the assumptions and known results required in our analysis. Section 4 discusses the convergence rates for the iterates of (1.5) obtained by incorporating the conditional stability estimates. In Section 5, we discuss an example that satisfies our assumptions. A comparative analysis of the convergence rates of our method and the regularization methods already existing in the literature is given in Section 6. Finally, Section 7 concludes the paper and lists open problems.
Preliminaries
Let $X^*$ be the dual space of the Banach space $X$. Given $x \in X$ and $x^* \in X^*$, we write $\langle x^*, x \rangle$ for the duality pairing. Let $\Theta: X \to (-\infty, +\infty]$ be a convex function and $\mathcal{D}(\Theta) := \{x \in X : \Theta(x) < +\infty\}$ be its effective domain. Let $\partial\Theta(x)$ denote the subdifferential of $\Theta$ at $x$, given by $$\partial\Theta(x) := \{x^* \in X^* : \Theta(\bar{x}) \ge \Theta(x) + \langle x^*, \bar{x} - x \rangle \text{ for all } \bar{x} \in X\}.$$ We call $\Theta$ a proper function if $\mathcal{D}(\Theta) \ne \emptyset$. Clearly, for each $x$, $\partial\Theta(x)$ is closed and convex in $X^*$. The subdifferential of $\Theta$ is the multi-valued mapping $\partial\Theta: X \to 2^{X^*}$. We set $$\mathcal{D}(\partial\Theta) := \{x \in \mathcal{D}(\Theta) : \partial\Theta(x) \ne \emptyset\}.$$
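For a differentiable convex $\Theta$, the Bregman distance used in (1.5) reduces to $D_{\xi}\Theta(\bar{x}, x) = \Theta(\bar{x}) - \Theta(x) - \langle \xi, \bar{x} - x \rangle$ with $\xi = \nabla\Theta(x)$ the unique subgradient. A minimal numerical sketch follows; the choice $\Theta(x) = \frac{1}{2}\|x\|^2$ is an assumption for illustration, for which the Bregman distance recovers half the squared Euclidean distance:

```python
import numpy as np

def bregman(theta, grad_theta, x_bar, x):
    # D_xi Theta(x_bar, x) = Theta(x_bar) - Theta(x) - <grad Theta(x), x_bar - x>
    return theta(x_bar) - theta(x) - grad_theta(x) @ (x_bar - x)

theta = lambda x: 0.5 * (x @ x)   # Theta(x) = 1/2 ||x||^2 (illustrative choice)
grad_theta = lambda x: x          # its gradient, the single element of dTheta(x)

x_bar = np.array([3.0, 0.0])
x = np.array([1.0, 0.0])
d = bregman(theta, grad_theta, x_bar, x)
print(d)  # equals 1/2 ||x_bar - x||^2 = 2.0
```

For non-smooth $\Theta$ (e.g. sparsity-promoting penalties), $\partial\Theta(x)$ is genuinely multi-valued and the distance depends on the chosen direction $\xi$, which is why the iteration (1.5) carries the pair $(x_n^\delta, \xi_n)$.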
Assumptions and known results
In this section, we discuss the assumptions and some known results required in our analysis. We will work under the following assumptions on the operator $F$ and the penalty function $\Theta$.
Assumption 3.1 The function $\Theta$ is proper, $p$-convex, and weakly lower semi-continuous, such that (2.4) holds for some constant. $\mathcal{C}$ is a closed and convex set in $X$ and $x^\dagger$ is a solution of (1.1) in $\mathcal{C}$. For each $x$ in a suitable neighbourhood of the initial guess, the Fréchet derivative $F'(x)$ is a bounded linear operator satisfying a Lipschitz-type estimate.
Main result
In this section, we prove the main result of the paper, in which we obtain the convergence rates of the iteratively regularized Gauss–Newton method (3.6) by utilizing the stability estimates (3.11). Let us begin by proving the following lemma, which will be helpful in establishing our main result.
Lemma 4.1 Let $X$ and $Y$ be Banach spaces, let Assumption 3.1 hold, let $\{\alpha_n\}$ satisfy (1.4), and let $\Theta$ be a proper, $p$-convex, lower semi-continuous function. If the conditional stability estimate (3.11) holds, and
Examples
In this section, we discuss some inverse problems that satisfy an estimate of the form (3.11); however, the main aim of this section is to discuss an example in which the assumptions required in our framework are satisfied. To this end, we recall the severely ill-posed Calderón inverse problem, also known as Electrical Impedance Tomography (EIT) (cf. [9], [10]).
Example 5.1 Let $\Omega$ be a bounded domain with smooth boundary $\partial\Omega$, and let $u$ satisfy the following Dirichlet problem:
Comparison of the convergence rates
In this section, we compare the convergence rates of the iteratively regularized Gauss–Newton method subject to stability constraints with those of existing methods in the literature. In the literature, convergence rates have been obtained for the various numerical methods by utilizing various smoothness concepts, i.e., source conditions, variational inequalities, stability constraints, etc. (cf. [2], [3], [4]). Accordingly, this section is divided into two subsections. In the first
Conclusion and open problems
We have deduced the convergence rates for the iteratively regularized Gauss–Newton method by employing the smoothness concept of conditional stability in Banach spaces. We have terminated our iterative scheme through an a-posteriori stopping rule, which is usual practice for iterative methods. To validate our assumptions, we have considered a severely ill-posed EIT problem and shown that this problem fulfills our assumptions. We have compared the convergence rates of our scheme with the convergence
Acknowledgments
The authors sincerely thank the reviewers for their careful reading of the manuscript and for comments and suggestions that have immensely helped us improve this work.
References
- et al., Lipschitz stability for the inverse conductivity problem, Adv. Appl. Math. (2005).
- et al., Iteratively regularized Landweber iteration method: Convergence analysis via Hölder stability, Appl. Math. Comput. (2021).
- et al., Iterative methods for approximate solutions of inverse problem.
- et al., Regularization of Inverse Problems (2000).
- et al., Iterative Regularization Methods for Nonlinear Ill-Posed Problems (2008).
- et al., Regularization Methods in Banach Spaces (2012).
- The problems of the convergence of the iteratively regularized Gauss–Newton method, Comput. Math. Math. Phys. (1992).
- et al., On the discrepancy principle for some Newton type methods for solving nonlinear inverse problems, Numer. Math. (2009).
- et al., On the iteratively regularized Gauss–Newton method in Banach spaces with applications to parameter identification problems, Numer. Math. (2013).
- et al., Convergence rates for the iteratively regularized Gauss–Newton method in Banach spaces, Inverse Problems (2010).
- Lipschitz stability for the electrical impedance tomography problem: the complex case, Comm. Partial Differential Equations.