A higher-order Levenberg–Marquardt method for nonlinear equations

https://doi.org/10.1016/j.amc.2013.04.033

Abstract

In this paper, we present a higher-order Levenberg–Marquardt (LM) method for nonlinear equations. At every iteration, not only an LM step but also two approximate LM steps are computed, both of which reuse the previously calculated Jacobian. Under the local error bound condition, which is weaker than nonsingularity, the new method has biquadratic convergence. A globally convergent LM algorithm is also obtained by the trust region technique. Numerical results show that the new fourth-order LM algorithm performs very well and can save many Jacobian calculations, especially for large-scale problems.

Introduction

We consider the system of nonlinear equations

F(x) = 0,   (1)

where F(x): R^n → R^m is a continuously differentiable mapping. Due to the nonlinearity of F(x), (1) may have no solutions. Throughout the paper, we assume that the solution set of (1), denoted by X*, is nonempty, and ‖·‖ always refers to the 2-norm.

A large number of studies have focused on this system ([2], [4], [5], [8], [10], [12], [13], [22], [24], [25]), and the most common way to solve (1) is Newton's method, which computes the trial step

d_k^N = -J_k^{-1} F_k,   (2)

at every iteration, where F_k = F(x_k) and J_k = F'(x_k) is the Jacobian. As is well known, if J(x) is Lipschitz continuous and nonsingular at the solution, then Newton's method has quadratic convergence. However, the method has an obvious disadvantage when the Jacobian is singular or nearly singular.
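The Newton iteration (2) can be sketched in a few lines; the function names and the toy system below are illustrative, not from the paper:

```python
import numpy as np

def newton_step(F, J, x):
    """One Newton iteration for F(x) = 0: solve J(x_k) d = -F(x_k)."""
    d = np.linalg.solve(J(x), -F(x))
    return x + d

# Illustrative toy system with root x* = (1, -1):
F = lambda x: np.array([x[0]**2 - 1.0, x[0] + x[1]])
J = lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])

x = np.array([2.0, 2.0])
for _ in range(10):
    x = newton_step(F, J, x)
```

Note that `np.linalg.solve` raises an error exactly in the problematic case the paper targets: a singular Jacobian.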

To overcome this drawback, the Levenberg–Marquardt method ([14], [15], [16], [17]) computes the trial step

d_k = -(J_k^T J_k + λ_k I)^{-1} J_k^T F_k,   (3)

where λ_k > 0 is the LM parameter, updated from iteration to iteration. Yamashita and Fukushima proved in [23] that if the LM parameter is chosen as λ_k = ‖F_k‖^2, then the LM method has quadratic convergence under the local error bound condition, which is weaker than nonsingularity of the Jacobian. Fan and Yuan showed in [11] that the LM method preserves quadratic convergence when λ_k = ‖F_k‖^δ for any δ ∈ [1, 2]. Numerical results ([7]) show that the choice λ_k = ‖F_k‖ is more stable and preferable.
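A minimal sketch of one LM iteration (3) with λ_k = ‖F_k‖^δ, assuming the same illustrative toy system as above; since λ_k > 0 away from a root, the linear system stays solvable even when J_k is singular:

```python
import numpy as np

def lm_step(F, J, x, delta=1.0):
    """One LM iteration: solve (J_k^T J_k + lam_k I) d = -J_k^T F_k,
    with lam_k = ||F_k||**delta; delta = 1 is the choice favored in [7]."""
    Fk, Jk = F(x), J(x)
    lam = np.linalg.norm(Fk) ** delta
    n = Jk.shape[1]
    d = np.linalg.solve(Jk.T @ Jk + lam * np.eye(n), -(Jk.T @ Fk))
    return x + d

# Illustrative toy system with root x* = (1, -1):
F = lambda x: np.array([x[0]**2 - 1.0, x[0] + x[1]])
J = lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])

x = np.array([2.0, 2.0])
for _ in range(100):
    x = lm_step(F, J, x)
```

As ‖F_k‖ shrinks near the solution, λ_k vanishes and the step approaches the Newton step, which is what yields the fast local rate.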

In [6], Fan presented a modified LM method which computes not only the LM step (3) but also the approximate LM step

d̂_k = -(J_k^T J_k + λ_k I)^{-1} J_k^T F(y_k)   with   y_k = x_k + d_k,   (4)

at every iteration. Since J_k is used instead of J(y_k) in (4), the cost of Jacobian computations can be reduced, especially when F(x) is complicated or n is quite large. Under the local error bound condition, the modified LM method has cubic convergence.

An interesting question is what may happen if we compute yet another approximate step

d̃_k = -(J_k^T J_k + λ_k I)^{-1} J_k^T F(z_k)   with   z_k = y_k + d̂_k.   (5)
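The three-step iteration combining (3)–(5) can be sketched as follows. This is a local sketch only: the trust-region safeguards of the algorithm in Section 2 are omitted, and the helper names are illustrative. The key saving is that all three steps reuse J_k and a single Cholesky factorization of J_k^T J_k + λ_k I:

```python
import numpy as np

def lm_three_steps(F, J, x):
    """One iteration of the three-step scheme: the LM step d_k, then the
    approximate steps dhat_k and dtilde_k, all reusing the Jacobian J_k."""
    Fk, Jk = F(x), J(x)
    lam = np.linalg.norm(Fk)
    n = Jk.shape[1]
    L = np.linalg.cholesky(Jk.T @ Jk + lam * np.eye(n))  # factorized once

    def solve(rhs):  # two triangular solves per right-hand side
        return np.linalg.solve(L.T, np.linalg.solve(L, rhs))

    d = solve(-(Jk.T @ Fk))           # exact LM step, Eq. (3)
    y = x + d
    dhat = solve(-(Jk.T @ F(y)))      # first approximate step, Eq. (4)
    z = y + dhat
    dtilde = solve(-(Jk.T @ F(z)))    # second approximate step
    return z + dtilde

# Illustrative toy system with root x* = (1, -1):
F = lambda x: np.array([x[0]**2 - 1.0, x[0] + x[1]])
J = lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])

x = np.array([2.0, 2.0])
for _ in range(50):
    x = lm_three_steps(F, J, x)
```

Each iteration costs one Jacobian evaluation, one factorization, and three function evaluations, which is where the savings over three full LM iterations come from.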

The main purpose of inexact methods ([3], [9]) is to minimize computation, so we wonder whether the new LM method could save more Jacobian calculations and whether it could achieve a faster convergence rate. The aim of this paper is to study the convergence properties of the above LM method and do some numerical experiments to test its efficiency.

The paper is organized as follows. In Section 2, we present a new globally convergent LM algorithm by using the trust region technique. In Section 3, we study the convergence rate of the algorithm and obtain biquadratic convergence. Some numerical results are given in Section 4. Finally, we conclude the paper and discuss some possible further research in Section 5.


The new Levenberg–Marquardt algorithm

In this section, we first present the new LM algorithm using the trust region technique, and then prove its global convergence.

We take

Φ(x) = ‖F(x)‖^2

as the merit function for (1), and define the actual reduction of Φ(x) at the kth iteration as

Ared_k = ‖F_k‖^2 - ‖F(x_k + d_k + d̂_k + d̃_k)‖^2.
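The actual reduction Ared_k is straightforward to compute; a minimal sketch with an illustrative function name (the steps would come from the three LM solves of the iteration):

```python
import numpy as np

def actual_reduction(F, x, steps):
    """Ared_k = ||F(x_k)||^2 - ||F(x_k + d_k + dhat_k + dtilde_k)||^2
    for the merit function Phi(x) = ||F(x)||^2; `steps` collects the
    three trial steps of the iteration."""
    s = sum(steps)
    return np.linalg.norm(F(x))**2 - np.linalg.norm(F(x + s))**2

# Illustrative check on F(x) = x: Phi drops from 4 to 1 when the steps sum to -1.
ared = actual_reduction(lambda v: v, np.array([2.0]),
                        [np.array([-0.5]), np.array([-0.3]), np.array([-0.2])])
```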

Note that the predicted reduction cannot be defined as ‖F_k‖^2 - ‖F_k + J_k(d_k + d̂_k + d̃_k)‖^2 as usual, since it cannot be proven to be nonnegative, which is required for global convergence in the trust region method. Hence we develop a new

Biquadratic convergence of Algorithm 2.1

In this section, we use the singular value decomposition (SVD) to analyze the convergence rate of Algorithm 2.1. We assume that the sequence generated by Algorithm 2.1 converges to the solution set X*.

Firstly, we make the following assumption.

Assumption 3.1

(a) F(x) is continuously differentiable, and the Jacobian J(x) is Lipschitz continuous, i.e., there exist a positive constant L_1 and some x* ∈ X* such that

‖J(y) - J(x)‖ ≤ L_1 ‖y - x‖,   ∀ x, y ∈ N(x*, b_1).

(b) ‖F(x)‖ provides a local error bound on N(x*, b_1)

Numerical results

We tested Algorithm 2.1 on some singular problems, and compared it with both the general LM algorithm which computes the trial step sk=dk and the modified LM algorithm presented in [6] which computes sk=dk+dˆk.

The test problems are created by modifying the nonsingular problems given by Moré et al. in [18], and have the same form as in [21]:

F̂(x) = F(x) - J(x*) A (A^T A)^{-1} A^T (x - x*),

where F(x) is the standard nonsingular test function, x* is its root, and A ∈ R^{n×k} has full column rank with 1 ≤ k ≤ n. Obviously, F
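This construction can be sketched directly; the helper name and the toy problem below are illustrative. F̂ keeps the root x*, while its Jacobian there, J(x*)(I - P) with P the orthogonal projector onto range(A), has rank n - k:

```python
import numpy as np

def make_singular(F, J, x_star, A):
    """Build Fhat(x) = F(x) - J(x*) A (A^T A)^{-1} A^T (x - x*) from a
    nonsingular test problem, making the Jacobian at the root rank deficient."""
    P = A @ np.linalg.inv(A.T @ A) @ A.T   # orthogonal projector, rank k
    JP = J(x_star) @ P                      # fixed rank-k correction term
    Fhat = lambda x: F(x) - JP @ (x - x_star)
    Jhat = lambda x: J(x) - JP
    return Fhat, Jhat

# Illustrative: make the toy problem with root x* = (1, -1) singular along e1.
F = lambda x: np.array([x[0]**2 - 1.0, x[0] + x[1]])
J = lambda x: np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])
x_star = np.array([1.0, -1.0])
A = np.array([[1.0], [0.0]])
Fhat, Jhat = make_singular(F, J, x_star, A)
```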

Final discussion

In this paper, we presented a fourth-order LM algorithm for nonlinear equations. At every iteration, not only an LM step but also two approximate LM steps are computed, which make use of the previously available Jacobian instead of computing a new one. Under the local error bound condition, which is weaker than nonsingularity, the new algorithm has biquadratic convergence. Numerical results show that the new algorithm outperforms both the general LM algorithm and the modified LM

References (25)

  • J.Y. Fan et al.

    A note on the Levenberg–Marquardt parameter

    Appl. Math. Comput.

    (2009)
  • M.J.D. Powell

    Convergence properties of a class of minimization algorithms

  • R. Behling et al.

    The effect of calmness on the solution set of systems of nonlinear equations

    Math. Program.

    (2011)
  • S. Bellavia et al.

    Convergence of a regularized Euclidean residual algorithm for nonlinear least-squares

    SIAM J. Numer. Anal.

    (2010)
  • H. Dan et al.

    Convergence properties of the inexact Levenberg–Marquardt method under local error bound conditions

    Optim. Methods Softw.

    (2002)
  • J.E. Dennis et al.

    Methods for Unconstrained Optimization and Nonlinear Equations

    (1983)
  • J.Y. Fan

    Convergence rate of the trust region method for nonlinear equations under local error bound condition

    Comput. Optim. Appl.

    (2006)
  • J.Y. Fan

    The modified Levenberg–Marquardt method for nonlinear equations with cubic convergence

    Math. Comput.

    (2012)
  • J.Y. Fan et al.

    An improved trust region algorithm for nonlinear equations

    Comput. Optim. Appl.

    (2011)
  • J.Y. Fan et al.

    On the convergence rate of the inexact Levenberg–Marquardt method

    J. Ind. Manage. Optim.

    (2011)
  • J.Y. Fan, Y.X. Yuan, A regularized Newton method for monotone nonlinear equations and its application, Optim. Methods...
  • J.Y. Fan et al.

    On the quadratic convergence of the Levenberg–Marquardt method without nonsingularity assumption

    Computing

    (2005)