On an Elliptical Trust-Region Procedure for Ill-Posed Nonlinear Least-Squares Problems

Journal of Optimization Theory and Applications

Abstract

In this paper, we address the stable numerical solution of ill-posed nonlinear least-squares problems with small residual. We propose an elliptical trust-region reformulation of a Levenberg–Marquardt procedure. Thanks to an appropriate choice of the trust-region radius, the proposed procedure provides an automatic choice of the free regularization parameters that, together with a suitable stopping criterion, endows the method with regularizing properties. Specifically, the proposed procedure generates a sequence that, even in the case of noisy data, has the potential to approach a solution of the unperturbed problem. The case of constrained problems is also considered. The effectiveness of the procedure is shown on several examples of ill-posed least-squares problems.
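To fix ideas, the following is a minimal, self-contained Python sketch of a regularizing Levenberg–Marquardt iteration stopped by a discrepancy principle on the gradient norm. It is only an illustration of the general approach: the function name lm_discrepancy and all parameter values are assumptions of this example, and the sketch does not implement the elliptical trust-region choice of the regularization parameter (Algorithm 3.1) analysed in the paper, where the parameter is determined implicitly by a trust-region radius proportional to \(\Vert B_k^{1/2} f'_k\Vert \).

import numpy as np

def lm_discrepancy(F, J, x0, y_delta, delta, tau=2.0, lam=1.0, eta=0.25, max_it=200):
    """Generic regularizing Levenberg-Marquardt sketch (illustrative only).

    F, J    -- residual map and its Jacobian, both callables of x
    y_delta -- noisy data; delta -- noise level
    tau > 1 -- discrepancy-principle constant
    lam     -- initial Levenberg-Marquardt parameter (hypothetical update rule below)
    eta     -- acceptance threshold for the actual/predicted reduction ratio
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        r = F(x) - y_delta
        Jx = J(x)
        g = Jx.T @ r                      # gradient of 0.5*||F(x)-y_delta||^2
        if np.linalg.norm(g) <= tau * delta:
            break                          # discrepancy-principle stop
        # Levenberg-Marquardt step: (J^T J + lam I) p = -g
        p = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -g)
        pred = 0.5 * np.linalg.norm(r)**2 - 0.5 * np.linalg.norm(r + Jx @ p)**2
        ared = 0.5 * np.linalg.norm(r)**2 - 0.5 * np.linalg.norm(F(x + p) - y_delta)**2
        if pred > 0 and ared / pred >= eta:
            x = x + p                      # accept the step
            lam = max(lam / 2.0, 1e-12)    # relax the regularization
        else:
            lam *= 4.0                     # reject and regularize more strongly
    return x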

References

  1. Kaltenbacher, B., Neubauer, A., Scherzer, O.: Iterative Regularization Methods for Nonlinear Ill-Posed Problems, vol. 6. Walter de Gruyter, Berlin (2008)

  2. Moré, J., Sorensen, D.: Computing a trust region step. SIAM J. Sci. Stat. Comput. 4(3), 553–572 (1983)

  3. Donatelli, M., Hanke, M.: Fast nonstationary preconditioned iterative methods for ill-posed problems, with application to image deblurring. Inverse Probl. 29(9), 095008 (2013)

  4. Buccini, A.: Regularizing preconditioners by non-stationary iterated Tikhonov with general penalty term. Appl. Numer. Math. 116, 64–81 (2017)

  5. Bellavia, S., Morini, B., Riccietti, E.: On an adaptive regularization for ill-posed nonlinear systems and its trust-region implementation. Comput. Optim. Appl. 64(1), 1–30 (2016)

  6. Hanke, M.: A regularizing Levenberg–Marquardt scheme, with applications to inverse groundwater filtration problems. Inverse Probl. 13(1), 79 (1997)

  7. Wang, Y., Yuan, Y.: On the regularity of trust region-CG algorithm for nonlinear ill-posed inverse problems with application to image deconvolution problem. Sci. China Ser. A 46, 312–325 (2003)

  8. Wang, Y., Yuan, Y.: Convergence and regularity of trust region methods for nonlinear ill-posed problems. Inverse Probl. 21, 821–838 (2005)

  9. Banks, H., Murphy, K.: Estimation of coefficients and boundary parameters in hyperbolic systems. SIAM J. Control Optim. 24(5), 926–950 (1986)

  10. Binder, A., Engl, H., Neubauer, A., Scherzer, O., Groetsch, C.: Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. Appl. Anal. 55(3–4), 215–234 (1994)

  11. Deidda, G., Fenu, C., Rodriguez, G.: Regularized solution of a nonlinear problem in electromagnetic sounding. Inverse Probl. 30(12), 125014 (2014)

  12. Henn, S.: A Levenberg–Marquardt scheme for nonlinear image registration. BIT Numer. Math. 43(4), 743–759 (2003)

  13. Tang, L.: A regularization homotopy iterative method for ill-posed nonlinear least squares problem and its application. In: Advances in Civil Engineering, ICCET 2011, Applied Mechanics and Materials, vol. 90, pp. 3268–3273. Trans Tech Publications (2011)

  14. Lopez, D., Barz, T., Körkel, S., Wozny, G.: Nonlinear ill-posed problem analysis in model-based parameter estimation and experimental design. Comput. Chem. Eng. 77(Supplement C), 24–42 (2015)

  15. Landi, G., Piccolomini, E.L., Nagy, J.G.: A limited memory BFGS method for a nonlinear inverse problem in digital breast tomosynthesis. Inverse Probl. 33(9), 095005 (2017)

  16. Cornelio, A.: Regularized nonlinear least squares methods for hit position reconstruction in small gamma cameras. Appl. Math. Comput. 217(12), 5589–5595 (2011)

  17. Neubauer, A.: An a posteriori parameter choice for Tikhonov regularization in the presence of modeling error. Appl. Numer. Math. 4(6), 507–519 (1988)

  18. Nocedal, J., Wright, S.: Numerical Optimization. Springer, Berlin (2006)

  19. Buccini, A., Donatelli, M., Reichel, L.: Iterated Tikhonov regularization with a general penalty term. Numer. Linear Algebra Appl. 24(4), e2089 (2017)

  20. Higham, N.J.: Functions of Matrices: Theory and Computation, vol. 104. SIAM, Philadelphia (2008)

  21. Dennis, J., Schnabel, R.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. SIAM, Philadelphia (1996)

  22. Conn, A., Gould, N., Toint, P.: Trust Region Methods, vol. 1. SIAM, Philadelphia (2000)

  23. Allaire, G.: Numerical Analysis and Optimization: An Introduction to Mathematical Modelling and Numerical Simulation. Oxford University Press, Oxford (2007)

  24. Engl, H., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Kluwer Academic Publishers Group, Dordrecht (1996)

  25. Hanke, M.: Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems. Numer. Funct. Anal. Optim. 18(9–10), 971–993 (1997)

  26. Kunisch, K., White, L.: Parameter estimation, regularity and the penalty method for a class of two point boundary value problems. SIAM J. Control Optim. 25(1), 100–120 (1987)

  27. Rieder, A.: On the regularization of nonlinear ill-posed problems via inexact Newton iterations. Inverse Probl. 15(1), 309 (1999)

  28. Scherzer, O., Engl, H.W., Kunisch, K.: Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. SIAM J. Numer. Anal. 30(6), 1796–1838 (1993)

  29. Riccietti, E.: Levenberg–Marquardt methods for the solution of noisy nonlinear least squares problems. PhD Thesis, University of Florence. http://web.math.unifi.it/users/riccietti/publications.html (2018)

Acknowledgements

The authors thank Gian Piero Deidda, Caterina Fenu and Giuseppe Rodriguez for providing them with the MATLAB code for Problem 6.3. The authors also thank Sergio Vessella and Elisa Francini for helpful discussions on this manuscript. This work was supported by the Gruppo Nazionale per il Calcolo Scientifico (GNCS-INdAM) of Italy.

Author information

Corresponding author

Correspondence to Stefania Bellavia.

Appendix

1.1 Proof of item (ii) of Lemma 4.2 and item (ii) of Lemma 4.4

Proof

The proof is the same for the noise-free and the noisy case; for generality, the notation of the noisy case is employed.

Since \(\lambda _k>0\) from Lemma 3.4, the trust-region is active, and from (30) it follows that

$$\begin{aligned} \Delta _k=\Vert z(\lambda _k)\Vert \le \frac{\Vert B_k^{1/2}f'_k\Vert }{\lambda _k}. \end{aligned}$$

Then, if the radius \(\Delta _k\) chosen at Step 1 of Algorithm 3.1 guarantees the condition \(\pi _k(p_k)\ge \eta \), the thesis follows, since

$$\begin{aligned} \lambda _k \le \frac{\Vert B_k^{1/2}f'_k\Vert }{\Delta _k}\le \frac{1}{C_{\min }}=\bar{\lambda }. \end{aligned}$$
(56)

Otherwise, the trust-region radius is progressively reduced, and a bound for the value of \(\Delta _k\) at termination of Step 2 of Algorithm 3.1 can be provided. First, consider the case

$$\begin{aligned} f_\delta \left( x_k^\delta +p_k\right) >\frac{1}{2}\Vert F\left( x_k^{\delta }\right) -y^\delta +F'\left( x_k^{\delta }\right) p_k\Vert ^2. \end{aligned}$$

Trivially,

$$\begin{aligned} 1-\pi _k(p_k) = \frac{f_\delta \left( x_k^\delta +p_k\right) -\frac{1}{2}\Vert F\left( x_k^{\delta }\right) -y^\delta +F'\left( x_k^{\delta }\right) p_k\Vert ^2}{f_\delta \left( x_k^\delta \right) -\frac{1}{2}\Vert F\left( x_k^{\delta }\right) -y^\delta +F'\left( x_k^{\delta }\right) p_k\Vert ^2} \end{aligned}$$

and

$$\begin{aligned}&f_\delta \left( x_k^\delta +p_k\right) -\frac{1}{2}\Vert F\left( x_k^{\delta }\right) -y^\delta +F'\left( x_k^{\delta }\right) p_k\Vert ^2 \\&\quad = \frac{1}{2}\Vert F\left( x_k^\delta +p_k\right) \pm F\left( x_k^\delta \right) \pm F'\left( x_k^\delta \right) p_k-y^\delta \Vert ^2\\&\qquad - \frac{1}{2}\Vert F\left( x_k^{\delta }\right) -y^\delta +F'\left( x_k^{\delta }\right) p_k\Vert ^2\\&\quad \le \frac{1}{2}\Vert F \left( x_k^\delta +p_k\right) -F\left( x_k^\delta \right) -F' \left( x_k^\delta \right) p_k\Vert ^2 \\&\qquad + \Vert F\left( x_k^\delta +p_k\right) -F\left( x_k^\delta \right) -F' \left( x_k^\delta \right) p_k\Vert \\&\qquad \cdot \Vert F\left( x_k^\delta \right) -y^\delta +F'\left( x_k^\delta \right) p_k\Vert , \end{aligned}$$

where the last step uses the Cauchy–Schwarz inequality.

By the Lipschitz continuity of \(F'\) it holds

$$\begin{aligned} \Vert F\left( x_k^\delta +p_k\right) -F\left( x_k^\delta \right) -F'\left( x_k^\delta \right) p_k\Vert \le \frac{L}{2} \Vert p_k\Vert ^2. \end{aligned}$$

Moreover, using (24)

$$\begin{aligned} \Vert F\left( x_k^\delta \right) -y^\delta +F'\left( x_k^\delta \right) p(\lambda )\Vert <\Vert F\left( x_k^\delta \right) -y^\delta \Vert \end{aligned}$$

for any \(\lambda \ge 0\). Consequently, as \(\Vert p_k\Vert \le \Vert B_k^{1/2}\Vert \Delta _k\) and \(\Delta _k\le C_{\max }\Vert B_k^{1/2} f'_k\Vert \),

$$\begin{aligned}&f_\delta \left( x_k^\delta +p_k\right) -\frac{1}{2}\Vert F\left( x_k^{\delta }\right) -y^\delta +F'\left( x_k^{\delta }\right) p_k\Vert ^2 \\&\quad \le \frac{L}{2} K^2 \Delta _k^2\Vert F(x_{0})-y\Vert \left( \frac{L}{4} K^6 C_{\max }^2\Vert F(x_{0})-y\Vert +1\right) . \end{aligned}$$

From [22, Theorem 6.3.1 and §8.3] it holds

$$\begin{aligned}&f_\delta \left( x_k^\delta \right) - \left( \frac{1}{2} \langle z_k, B_k^2 z_k\rangle +\langle B_k^{1/2}f'_k,z_k\rangle +f_\delta \left( x_k^\delta \right) \right) \\&\quad \ge \frac{1}{2}\Vert B_k^{1/2} f'_k\Vert \min \left\{ \Delta _k,\frac{\Vert B_k^{1/2} f'_k\Vert }{\Vert B_k^2\Vert } \right\} . \end{aligned}$$

Then, (18) yields

$$\begin{aligned} f_\delta \left( x_k^\delta \right) -\frac{1}{2} \Vert F\left( x_k^{\delta }\right) -y^\delta +F'\left( x_k^{\delta }\right) p_k\Vert ^2 \ge \frac{1}{2}\Delta _k \Vert B_k^{1/2}f'_k\Vert , \end{aligned}$$

whenever \(\Delta _k\le \displaystyle \frac{\Vert B_k^{1/2}f'_k\Vert }{K^4}\), and this implies

$$\begin{aligned} 1-\pi _k(p_k)\le \frac{ L K^2 \Delta _k\Vert F(x_{0})-y\Vert \left( \frac{1}{4} LK^6 C_{\max }^2\Vert F(x_{0})-y\Vert +1\right) }{ \Vert B_k^{1/2}f'_k\Vert }. \end{aligned}$$

Namely, termination of the repeat loop occurs at the latest once

$$\begin{aligned} \Delta _k\le \omega \Vert B_k^{1/2} f'_k\Vert \end{aligned}$$

with

$$\begin{aligned} \omega =\min \left\{ \frac{1}{K^4},\frac{1-\eta }{ L K^2 \Vert F(x_{0})-y\Vert \left( \frac{1}{4} LK^6 C_{\max }^2\Vert F(x_{0})-y\Vert +1\right) } \right\} . \end{aligned}$$
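Indeed, for any radius satisfying \(\Delta _k\le \omega \Vert B_k^{1/2} f'_k\Vert \) the bound above gives

$$\begin{aligned} 1-\pi _k(p_k)\le \frac{ L K^2 \,\omega \Vert B_k^{1/2} f'_k\Vert \,\Vert F(x_{0})-y\Vert \left( \frac{1}{4} LK^6 C_{\max }^2\Vert F(x_{0})-y\Vert +1\right) }{ \Vert B_k^{1/2}f'_k\Vert }\le 1-\eta , \end{aligned}$$

i.e. \(\pi _k(p_k)\ge \eta \), and no further reduction of the radius takes place.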

Taking into account Step 1 and the updating rule at Step 2.4, it can be concluded that, at termination of Step 2, the trust-region radius \(\Delta _k\) satisfies

$$\begin{aligned} \Delta _k\ge \min \left\{ C_{\min } , \, \gamma \omega \right\} \Vert B_k^{1/2} f'_k\Vert . \end{aligned}$$

In fact, the radius can fall below this value only if the ratio test fails at some \(\Delta _k\le \omega \Vert B_k^{1/2} f'_k\Vert \). At such a radius, however, either \(f_\delta (x_k^\delta +p_k)\le \frac{1}{2}\Vert F(x_k^{\delta })-y^\delta +F'\left( x_k^{\delta }\right) p_k\Vert ^2\), so that \(\pi _k(p_k)\ge 1>\eta \), or the bound above yields \(\pi _k(p_k)\ge \eta \); in both cases the loop at Step 2 terminates. Hence, it terminates with a trust-region radius greater than or equal to the one estimated above.

Then, \(\lambda _k\le \bar{\lambda }\) as

$$\begin{aligned} \lambda _k\le \frac{\Vert B_k^{1/2} f'_k\Vert }{\Delta _k} \le \max \left\{ \frac{1}{ \gamma \omega },\, \frac{1}{C_{\min }} \right\} , \end{aligned}$$

and the thesis follows. \(\square \)

1.2 Proof of Theorem 4.3

Proof

Summing from \({\bar{k}}\) to \(k^*(\delta )-1\), by (37), (28), (10) and Lemma 4.4, it follows that

$$\begin{aligned} (k^*(\delta )-{\bar{k}})\tau ^2 \delta ^2 \le \sum _{k={\bar{k}}}^{k^*(\delta )-1} \Vert F'\left( x_k^{\delta }\right) ^*(F\left( x_k^{\delta }\right) -y^{\delta })\Vert ^2\le \frac{\theta _{{\bar{k}}} {\bar{\lambda }}}{2(\theta _{{\bar{k}}}-1)q^2}\Vert x_{{\bar{k}}}^\delta -x^{\dagger }\Vert ^2. \end{aligned}$$

Thus, \(k^*(\delta )\) is finite for \(\delta >0\).
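In particular, rearranging the above inequality yields the explicit bound

$$\begin{aligned} k^*(\delta )\le {\bar{k}}+\frac{\theta _{{\bar{k}}} {\bar{\lambda }}}{2(\theta _{{\bar{k}}}-1)q^2\tau ^2\delta ^2}\,\Vert x_{{\bar{k}}}^\delta -x^{\dagger }\Vert ^2 \end{aligned}$$

on the stopping index.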

Convergence of \(x^\delta _{k^*(\delta )}\) to a stationary point of (1) as \(\delta \) tends to zero is obtained by adapting the proof of [5, Theorem 4.5]. Specifically, let \(x^*\) be the limit of the sequence \(\{x_k\}\) corresponding to the exact data y and let \(\{\delta _n\}\) be a sequence of values of \(\delta \) converging to zero as \(n\rightarrow \infty \). Denote by \(y^{\delta _n}\) a corresponding sequence of perturbed data, and by \(k_n = k^*(\delta _n)\) the stopping index determined by the discrepancy principle (37) applied with \(\delta =\delta _n\). Assume first that \({\tilde{k}}\) is a finite accumulation point of \(\{k_n\}\). Without loss of generality, possibly passing to a subsequence, it can be assumed that \(k_n = {\tilde{k}}\) for all \(n\in \mathbb {N}\). Thus, from the definition of \(k_n\), it follows that

$$\begin{aligned} \Vert F'\left( x_{{\tilde{k}}}^{\delta _n}\right) ^* \left( y^{\delta _n}-F\left( x_{{\tilde{k}}}^{\delta _n}\right) \right) \Vert \le \tau \delta _n. \end{aligned}$$
(57)

Since, by assumption, \(\pi _k(x_{k+1}-x_k)\ne \eta \) for all k, the iterate \(x_{{\tilde{k}}}^{\delta }\) depends continuously on \(\delta \) for the fixed index \({\tilde{k}}\). Then

$$\begin{aligned} x_{{\tilde{k}}}^{\delta _n}\rightarrow x_{{\tilde{k}}}, \qquad F'\left( x_{\tilde{k}}^{\delta _n}\right) \rightarrow F'(x_{{\tilde{k}}}), \qquad F\left( x_{\tilde{k}}^{\delta _n}\right) \rightarrow F(x_{{\tilde{k}}}) \qquad \text { as } \delta _n\rightarrow 0. \end{aligned}$$

Therefore, by (57), it follows that \(F'(x_{{\tilde{k}}})^*(y-F(x_{{\tilde{k}}}))=0\), and the \({\tilde{k}}\)th iterate with exact data y is a stationary point of (1), i.e. \(x^* = x_{{\tilde{k}}}\), and it is possible to conclude that \(x_{k_n}^{\delta _n}\rightarrow x^*\) as \(\delta _n\rightarrow 0\).

It remains to consider the case where \(k_n\rightarrow \infty \) as \(n\rightarrow \infty \). As \(\{x_k\}\) converges to a stationary point \(x^*\) of (1) by Theorem 4.2, there exists \(\tilde{k}>0\) such that

$$\begin{aligned} \Vert x_k -x^* \Vert \le \frac{1}{2}\bar{\rho } \qquad \text { for all } \qquad k\ge \tilde{k}, \end{aligned}$$

where \(\bar{\rho }< \min \left\{ \displaystyle \frac{(q-\sigma )\tau -K(\sigma +1)}{c(K+\tau )}, \rho \right\} \). Then, since \( x_k^\delta \) depends continuously on \(\delta \), \(\delta _n\) tends to zero and \(k^*(\delta _n)\rightarrow \infty \), for \(\delta _n\) sufficiently small it holds that \({\tilde{k}}\le k^*(\delta _n)\) and

$$\begin{aligned} \Vert x_{{\tilde{k}}}^{\delta _n} -x_{{\tilde{k}}} \Vert \le \frac{1}{2} \bar{\rho }. \end{aligned}$$

Then, for \(\delta _n\) sufficiently small

$$\begin{aligned} \Vert x_{\tilde{k}}^{\delta _n}-x^*\Vert \le \Vert x_{\tilde{k}}^{\delta _n}- x_{\tilde{k}}\Vert + \Vert x_{\tilde{k}}-x^*\Vert \le \bar{\rho }. \end{aligned}$$
(58)

Now, from item (i) of Lemma 4.4, it follows that \(x_{\tilde{k}}^{\delta _n}\in \mathcal {B}_{2\rho }(x_{{\bar{k}}}^{\delta _n})\), while from (46) and Theorem 4.2 it holds that \(x^*\in \mathcal {B}_{2\rho }(x_{{\bar{k}}}^{\delta _n})\), since

$$\begin{aligned} \Vert x_{{\bar{k}}}^{\delta _n}-x^*\Vert \le \Vert x_{{\bar{k}}}^{\delta _n}-x^\dagger \Vert +\Vert x^\dagger -x^*\Vert \le 2\rho . \end{aligned}$$

Let \(e_k^*=x^*-x_k^{\delta _n}\). Repeating the arguments of Lemma 4.3 and using (38) and (2), it follows that

$$\begin{aligned} \Vert m_{{\tilde{k}}}\left( e^*_{{\tilde{k}}}\right) \Vert\le & {} K\delta _n+ \Vert F'\left( x_{\tilde{k}}^{\delta _n}\right) ^{*}\left( y -F\left( x_{\tilde{k}}^{\delta _n}\right) +F'\left( x_{\tilde{k}}^{\delta _n}\right) (x^*-x_{\tilde{k}}^{\delta _n})\right) \Vert \\\le & {} K\delta _n+\left( c\Vert x^*-x_{\tilde{k}}^{\delta _n}\Vert +\sigma \right) \, \Vert F'\left( x_{\tilde{k}}^{\delta _n}\right) ^{*} \left( y-F\left( x_{\tilde{k}}^{\delta _n}\right) \right) \Vert \\\le & {} \left( 1+ c\Vert x^*-x_{\tilde{k}}^{\delta _n}\Vert +\sigma \right) K\delta _n+ \left( c\Vert x^*-x_{\tilde{k}}^{\delta _n}\Vert +\sigma \right) \, \Vert F'\left( x_{\tilde{k}}^{\delta _n}\right) ^{*}\left( y^{\delta _n}-F\left( x_{\tilde{k}}^{\delta _n}\right) \right) \Vert . \end{aligned}$$

Then, at iteration \({\tilde{k}}\), conditions (37) and (28) give

$$\begin{aligned} \Vert m_{{\tilde{k}}}\left( e^*_{{\tilde{k}}}\right) \Vert\le & {} \left( K\frac{1+ c\Vert x^*-x_{\tilde{k}}^{\delta _n}\Vert +\sigma }{\tau }+ \left( c\Vert x^*-x_{\tilde{k}}^{\delta _n}\Vert +\sigma \right) \right) \\&\Vert F'\left( x_{\tilde{k}}^{\delta _n}\right) ^{*}\left( y^{\delta _n}-F\left( x_{\tilde{k}}^{\delta _n}\right) \right) \Vert \\\le & {} \left( K\frac{1+ c\Vert x^*-x_{{\tilde{k}}}^{\delta _n}\Vert +\sigma }{q\tau }+ \frac{c\Vert x^*-x_{\tilde{k}}^{\delta _n}\Vert +\sigma }{q} \right) \Vert m_{\tilde{k}}(p_{\tilde{k}})\Vert . \end{aligned}$$

Thus, by (58) and \({\bar{\rho }} < \min \left\{ \displaystyle \frac{(q-\sigma )\tau -K(\sigma +1)}{c(K+\tau )}, \rho \right\} \), it follows that

$$\begin{aligned} \Vert m_{{\tilde{k}}}\left( e^*_{{\tilde{k}}}\right) \Vert \le \frac{1}{\theta _{\tilde{k}}}\Vert m_{{\tilde{k}}}(p_{\tilde{k}})\Vert \end{aligned}$$

is satisfied with \(\theta _{\tilde{k}}=\displaystyle \frac{q\tau }{1+c(1+\tau )\bar{\rho }+\sigma (1+\tau )}>1\). Replacing \(x^\dagger \) with \(x^*\), (10) gives \(\Vert x_{{\tilde{k}}+1}^{\delta _n} -x^*\Vert < \Vert x_{{\tilde{k}}}^{\delta _n} -x^*\Vert \) and, repeating the above arguments, monotonicity of the error \(\Vert x_k^{\delta _n} -x^*\Vert \) for \(\tilde{k}\le k \le k_n\) follows by induction. Then

$$\begin{aligned} \Vert x_{k_n}^{\delta _n}-x^*\Vert < \Vert x_{{\tilde{k}}}^{\delta _n}-x^*\Vert \le {\bar{\rho }}. \end{aligned}$$
(59)

Finally, repeating the previous arguments, it can be shown that, for every \(0< \epsilon \le {\bar{\rho }}\), there exists \(\bar{\delta }_\epsilon \) such that \(\Vert x_{k_n}^{\delta _n}- x^*\Vert \le \epsilon \) for all \(\delta _n\le \bar{\delta }_\epsilon \), i.e.

$$\begin{aligned} x_{k_n}^{\delta _n}\rightarrow x^* \qquad \text { as } \qquad \delta _n\rightarrow 0, \end{aligned}$$

and the thesis is proved. \(\square \)


About this article


Cite this article

Bellavia, S., Riccietti, E. On an Elliptical Trust-Region Procedure for Ill-Posed Nonlinear Least-Squares Problems. J Optim Theory Appl 178, 824–859 (2018). https://doi.org/10.1007/s10957-018-1318-1

