A higher-order Levenberg–Marquardt method for nonlinear equations
Introduction
We consider the system of nonlinear equations
$$F(x) = 0, \tag{1}$$
where $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a continuously differentiable mapping. Due to the nonlinearity of $F$, (1) may have no solutions. Throughout the paper, we assume that the solution set of (1), denoted by $X^*$, is nonempty, and $\|\cdot\|$ always refers to the 2-norm.
A large number of studies have focused on this system ([2], [4], [5], [8], [10], [12], [13], [22], [24], [25]). The most common way to solve (1) is Newton's method, which computes the trial step
$$d_k = -J_k^{-1} F_k \tag{2}$$
at every iteration, where $F_k = F(x_k)$ and $J_k = F'(x_k)$ is the Jacobian. As is well known, if $J(x)$ is Lipschitz continuous and nonsingular at the solution, then Newton's method converges quadratically. However, the method has an obvious disadvantage when the Jacobian is singular or nearly singular.
To overcome this drawback, the Levenberg–Marquardt method ([14], [15], [16], [17]) computes the trial step
$$d_k = -(J_k^T J_k + \lambda_k I)^{-1} J_k^T F_k, \tag{3}$$
where $\lambda_k \ge 0$ is the LM parameter, updated from iteration to iteration. Yamashita and Fukushima proved in [23] that if the LM parameter is chosen as $\lambda_k = \|F_k\|^2$, then the LM method converges quadratically under the local error bound condition, which is weaker than nonsingularity of the Jacobian. Fan and Yuan showed in [11] that the LM method preserves quadratic convergence when $\lambda_k = \|F_k\|^\delta$ for any $\delta \in [1, 2]$. Numerical results ([7]) show that the choice $\lambda_k = \|F_k\|$ is more stable and preferable.
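For concreteness, here is a minimal sketch of one LM trial step (3) with the choice $\lambda_k = \|F_k\|$, written in Python with NumPy; the callables F and J are hypothetical user-supplied routines, not notation from the paper.

import numpy as np

def lm_step(F, J, x):
    """One Levenberg-Marquardt trial step (3) with lambda_k = ||F_k||."""
    Fk = F(x)                  # residual vector F_k
    Jk = J(x)                  # Jacobian matrix J_k
    lam = np.linalg.norm(Fk)   # the delta = 1 parameter choice
    # Solve the regularized normal equations (J_k^T J_k + lambda_k I) d = -J_k^T F_k.
    A = Jk.T @ Jk + lam * np.eye(x.size)
    return np.linalg.solve(A, -Jk.T @ Fk)

For $\lambda_k > 0$ the coefficient matrix is symmetric positive definite even when $J_k$ is singular, which is precisely what makes the step well defined where Newton's step (2) is not.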
In [6], Fan presented a modified LM method which computes not only the LM step (3) but also the approximate LM step
$$\hat{d}_k = -(J_k^T J_k + \lambda_k I)^{-1} J_k^T F(y_k), \qquad y_k = x_k + d_k, \tag{4}$$
at every iteration. Since $J_k$ is used instead of $J(y_k)$ in (4), the cost of Jacobian evaluations can be reduced, especially when the Jacobian is complicated to evaluate or $n$ is quite large. Under the local error bound condition, the modified LM method has cubic convergence.
An interesting question is what may happen if we compute another approximate step
$$\tilde{d}_k = -(J_k^T J_k + \lambda_k I)^{-1} J_k^T F(z_k), \qquad z_k = y_k + \hat{d}_k, \tag{5}$$
and take $s_k = d_k + \hat{d}_k + \tilde{d}_k$ as the trial step.
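All three systems in (3)–(5) share the coefficient matrix $J_k^T J_k + \lambda_k I$, so a single Jacobian evaluation and a single matrix factorization serve the whole iteration. The sketch below (Python with SciPy; F and J are again hypothetical user-supplied callables) makes the reuse explicit with a Cholesky factorization.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def higher_order_lm_iteration(F, J, x):
    """Compute the LM step (3) and the approximate steps (4) and (5),
    reusing one Jacobian and one factorization of J_k^T J_k + lambda_k I."""
    Fk = F(x)
    Jk = J(x)
    lam = np.linalg.norm(Fk)                 # lambda_k = ||F_k||
    A = Jk.T @ Jk + lam * np.eye(x.size)     # symmetric positive definite for lam > 0
    c = cho_factor(A)                        # factor once ...
    d = cho_solve(c, -Jk.T @ Fk)             # ... solve three times: step (3)
    y = x + d
    d_hat = cho_solve(c, -Jk.T @ F(y))       # step (4)
    z = y + d_hat
    d_tilde = cho_solve(c, -Jk.T @ F(z))     # step (5)
    return d + d_hat + d_tilde               # trial step s_k

Each extra step costs only one residual evaluation and two triangular solves, which is the sense in which the method seeks a higher convergence order at little additional expense.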
The main purpose of inexact methods ([3], [9]) is to reduce computational cost, so it is natural to ask whether the new LM method could save even more Jacobian evaluations and whether it could achieve a faster convergence rate. The aim of this paper is to study the convergence properties of the above LM method and to test its efficiency in numerical experiments.
The paper is organized as follows. In Section 2, we present a new globally convergent LM algorithm based on the trust region technique. In Section 3, we study the convergence rate of the algorithm and establish its biquadratic convergence. Some numerical results are given in Section 4. Finally, we conclude the paper and discuss possible further research in Section 5.
The new Levenberg–Marquardt algorithm
In this section, we first present the new LM algorithm based on the trust region technique, and then prove its global convergence.
We take
$$\Phi(x) = \|F(x)\|^2$$
as the merit function for (1). We define the actual reduction of $\Phi$ at the $k$th iteration as
$$\mathrm{Ared}_k = \|F_k\|^2 - \|F(x_k + s_k)\|^2.$$
Note that the predicted reduction cannot be defined as $\|F_k\|^2 - \|F_k + J_k s_k\|^2$ as usual, since that quantity cannot be proven to be nonnegative, which is required for the global convergence of the trust region method. Hence we develop a new predicted reduction.
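A natural construction, patterned after the predicted reduction in [6] (an assumption here, not the paper's actual definition), sums the linearized reduction of each of the three substeps; every term is nonnegative because each substep minimizes the corresponding regularized linear model. A sketch of the resulting trust-region ratio:

import numpy as np

def tr_ratio(F, Jk, x, d, d_hat, d_tilde):
    """Trust-region ratio Ared_k / Pred_k for the three-step iteration.

    Pred_k is an assumed construction patterned after [6]: the sum of the
    linearized reductions of the three substeps, each one nonnegative since
    the corresponding step minimizes ||F(.) + J_k d||^2 + lambda_k ||d||^2."""
    Fk = F(x)
    y = x + d
    z = y + d_hat
    s = d + d_hat + d_tilde
    ared = np.linalg.norm(Fk)**2 - np.linalg.norm(F(x + s))**2
    pred = (np.linalg.norm(Fk)**2   - np.linalg.norm(Fk + Jk @ d)**2
          + np.linalg.norm(F(y))**2 - np.linalg.norm(F(y) + Jk @ d_hat)**2
          + np.linalg.norm(F(z))**2 - np.linalg.norm(F(z) + Jk @ d_tilde)**2)
    return ared / pred   # accept the step and update lambda_k based on this ratio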
Biquadratic convergence of Algorithm 2.1
In this section, we will use the singular value decomposition (SVD) technique to analyze the convergence rate of Algorithm 2.1. We assume that the sequence $\{x_k\}$ generated by Algorithm 2.1 converges to the solution set $X^*$.
Firstly, we make the following assumption.

Assumption 3.1

(a) $F(x)$ is continuously differentiable, and the Jacobian $J(x)$ is Lipschitz continuous on a neighbourhood of $x^* \in X^*$, i.e., there exist positive constants $L$ and $b_1$ such that
$$\|J(y) - J(x)\| \le L \|y - x\| \quad \text{for all } x, y \in N(x^*, b_1) = \{x : \|x - x^*\| \le b_1\}.$$

(b) $\|F(x)\|$ provides a local error bound on $N(x^*, b_1)$, i.e., there exists a positive constant $c$ such that
$$\|F(x)\| \ge c\,\mathrm{dist}(x, X^*) \quad \text{for all } x \in N(x^*, b_1).$$
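The local error bound condition is strictly weaker than nonsingularity of the Jacobian at a solution. A simple illustration (our example, not one from the paper):
$$F(x_1, x_2) = \begin{pmatrix} x_1 \\ 0 \end{pmatrix}, \qquad J(x) \equiv \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad X^* = \{(0, t) : t \in \mathbb{R}\},$$
for which $\|F(x)\| = |x_1| = \mathrm{dist}(x, X^*)$, so (b) holds with $c = 1$ although $J(x)$ is singular at every point.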
Numerical results
We tested Algorithm 2.1 on some singular problems and compared it with both the general LM algorithm, which computes only the trial step $d_k$ given by (3), and the modified LM algorithm presented in [6], which computes $d_k$ and $\hat{d}_k$.
The test problems are created by modifying the nonsingular problems given by Moré et al. in [18], and have the same form as in [21]:
$$\hat{F}(x) = F(x) - J(x^*) A (A^T A)^{-1} A^T (x - x^*),$$
where $F(x)$ is the standard nonsingular test function, $x^*$ is its root, and $A \in \mathbb{R}^{n \times k}$ has full column rank with $1 \le k \le n$. Obviously, $\hat{F}(x^*) = 0$ and $\hat{J}(x^*) = J(x^*)\left(I - A(A^T A)^{-1} A^T\right)$ has rank $n - k$.
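This construction is easy to reproduce. A sketch (Python with NumPy; the callables F and J and the choice of A are ours, for illustration):

import numpy as np

def make_singular_problem(F, J, x_star, A):
    """Build F_hat(x) = F(x) - J(x*) A (A^T A)^{-1} A^T (x - x*) from a
    nonsingular test function F with root x_star and Jacobian J; the
    Jacobian of F_hat at x_star then has rank n - k for A of size n-by-k."""
    C = J(x_star) @ A @ np.linalg.solve(A.T @ A, A.T)   # constant rank-k correction
    F_hat = lambda x: F(x) - C @ (x - x_star)
    J_hat = lambda x: J(x) - C                          # Jacobian of F_hat
    return F_hat, J_hat

# Taking A = np.ones((n, 1)) yields rank n - 1 at the root, a common choice
# in this literature.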
Final discussion
In this paper, we presented a fourth-order LM algorithm for nonlinear equations. At every iteration, not only an LM step but also two approximate LM steps are computed, and the latter make use of the previously available Jacobian instead of computing a new one. Under the local error bound condition, which is weaker than nonsingularity, the new algorithm has biquadratic convergence. Numerical results show that the new algorithm outperforms both the general LM algorithm and the modified LM algorithm.
References (25)
- A note on the Levenberg–Marquardt parameter, Appl. Math. Comput. (2009)
- Convergence properties of a class of minimization algorithms
- The effect of calmness on the solution set of systems of nonlinear equations, Math. Program. (2011)
- Convergence of a regularized Euclidean residual algorithm for nonlinear least-squares, SIAM J. Numer. Anal. (2010)
- Convergence properties of the inexact Levenberg–Marquardt method under local error bound conditions, Optim. Methods Softw. (2002)
- Methods for Unconstrained Optimization and Nonlinear Equations (1983)
- Convergence rate of the trust region method for nonlinear equations under local error bound condition, Comput. Optim. Appl. (2006)
- The modified Levenberg–Marquardt method for nonlinear equations with cubic convergence, Math. Comput. (2012)
- An improved trust region algorithm for nonlinear equations, Comput. Optim. Appl. (2011)
- On the convergence rate of the inexact Levenberg–Marquardt method, J. Ind. Manage. Optim. (2011)
- On the quadratic convergence of the Levenberg–Marquardt method without nonsingularity assumption, Computing