Abstract
This paper considers a quasi-Newton method, called the direct Broyden method, that satisfies the least-change principle and the direct tangent condition. The direct Broyden method ensures that the quasi-Newton matrix agrees with the Jacobian matrix along the step direction and thus accumulates more derivative information about the function. We also present an accelerated version of the new method and prove its global and superlinear convergence. Extensive numerical results are reported to show the efficiency of both methods.
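For context, the sketch below shows the classical Broyden rank-one secant update that this family of methods builds on. It is the standard "good" Broyden method applied to a hypothetical toy system, not the authors' direct variant, whose exact update formula is given in the paper body; the function names and the test problem are illustrative assumptions.

```python
import numpy as np

def broyden(F, x0, B0, tol=1e-10, max_iter=50):
    """Classical Broyden method: quasi-Newton iteration with a rank-one
    least-change secant update of the Jacobian approximation B."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float).copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)   # quasi-Newton step: B s = -F(x)
        x_new = x + s
        y = F(x_new) - Fx
        # Secant (tangent) condition B_{k+1} s = y, enforced by the
        # minimal-norm rank-one correction:
        B += np.outer(y - B @ s, s) / (s @ s)
        x = x_new
    return x

# Hypothetical toy system: x1 + x2 = 3, x1^2 + x2^2 = 9 (one root is (0, 3))
F = lambda x: np.array([x[0] + x[1] - 3.0, x[0]**2 + x[1]**2 - 9.0])
J = lambda x: np.array([[1.0, 1.0], [2.0 * x[0], 2.0 * x[1]]])  # exact Jacobian for B0
x0 = np.array([1.0, 4.0])
root = broyden(F, x0, J(x0))
```

Initializing `B0` with the exact Jacobian at `x0` is one common choice; an identity or finite-difference initialization also works but may need more iterations.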
References
Yuan, Y.X.: Recent advances in numerical methods for nonlinear equations and nonlinear least squares. Numer. Algebra Control Optim. 1(1), 15–34 (2011)
Dai, Z., Li, T., Yang, M.: Forecasting stock return volatility: the role of shrinkage approaches in a data-rich environment. J. Forecast. 41, 980–996 (2022)
Dai, Z., Zhu, H.: Time-varying spillover effects and investment strategies between WTI crude oil, natural gas and Chinese stock markets related to the Belt and Road Initiative. Energy Econ. 108, 105883 (2022)
Li, D.H., Fukushima, M.: A globally and superlinearly convergent Gauss–Newton-based BFGS method for symmetric nonlinear equations. SIAM J. Numer. Anal. 37(1), 152–172 (1999)
Chen, L.: A modified Levenberg–Marquardt method with line search for nonlinear equations. Comput. Optim. Appl. 65(3), 753–779 (2016)
Stanimirović, P.S., Ivanov, B., Ma, H., Mosić, D.: A survey of gradient methods for solving nonlinear optimization. Electron. Res. Arch. 4, 1573 (2020)
Yuan, Y.: Recent advances in trust region algorithms. Math. Program. 151(1), 249–281 (2015)
Gu, G.Z., Li, D.H., Qi, L., Zhou, S.Z.: Descent directions of quasi-Newton methods for symmetric nonlinear equations. SIAM J. Numer. Anal. 40(5), 1763–1774 (2002)
Cao, H.P., Li, D.H.: Adjoint Broyden methods for symmetric nonlinear equations. Pac. J. Optim. 13(4), 645–663 (2017)
Zhou, W.J.: A globally convergent BFGS method for symmetric nonlinear equations. J. Ind. Manag. Optim. 18, 1295 (2021)
Rodomanov, A., Nesterov, Y.: Rates of superlinear convergence for classical quasi-Newton methods. Math. Program. 194, 159–190 (2021)
Rodomanov, A., Nesterov, Y.: New results on superlinear convergence of classical quasi-Newton methods. J. Optim. Theory Appl. 188(3), 744–769 (2021)
Boutet, N., Haelterman, R., Degroote, J.: Secant update generalized version of PSB: a new approach. Comput. Optim. Appl. 78(3), 953–982 (2021)
Yuan, G.L., Zhang, M.X., Zhou, Y.J.: Adaptive scaling damped BFGS method without gradient Lipschitz continuity. Appl. Math. Lett. 124, 107634 (2022)
Li, D.H., Fukushima, M.: A derivative-free line search and global convergence of Broyden-like method for nonlinear equations. Optim. Methods Softw. 13(3), 181–201 (2000)
Cao, H.-P., Li, D.-H.: Partitioned quasi-Newton methods for sparse nonlinear equations. Comput. Optim. Appl. 66(3), 481–505 (2017)
Zhou, W.J., Zhang, L.: A modified Broyden-like quasi-Newton method for nonlinear equations. J. Comput. Appl. Math. 372, 112744 (2020)
Powell, M.J.: A Fortran subroutine for solving systems of nonlinear algebraic equations. Technical report, Atomic Energy Research Establishment, Harwell, England (United Kingdom) (1968)
Li, D.H., Zeng, J.P., Zhou, S.Z.: Convergence of Broyden-like matrix. Appl. Math. Lett. 11(5), 35–37 (1998)
Dennis, J.E., Moré, J.J.: A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comput. 28(126), 549–560 (1974)
Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Society for Industrial and Applied Mathematics, Philadelphia (2000)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)
La Cruz, W., Martínez, J., Raydan, M.: Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 75(255), 1429–1448 (2006)
Raydan, M.: The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM J. Optim. 7(1), 26–33 (1997)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM J. Numer. Anal. 23(4), 707–716 (1986)
Friedlander, A., Gomes-Ruggiero, M.A., Kozakevich, D.N., Mario Martínez, J., Augusta Santos, S.: Solving nonlinear systems of equations by means of quasi-Newton methods with a nonmonotone strategy. Optim. Methods Softw. 8(1), 25–51 (1997)
Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7(1), 17–41 (1981)
Bing, Y., Lin, G.: An efficient implementation of Merrill’s method for sparse or partially separable systems of nonlinear equations. SIAM J. Optim. 1(2), 206–221 (1991)
Acknowledgements
This work is supported by the National Natural Science Foundation of China, grant number 11701577; the Natural Science Foundation of Hunan Province, China, grant numbers 2019JJ51002 and 2020JJ5960; and the Natural Science Foundation of Shaanxi Province, China, grant number 2022JQ006.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A: Test functions
In this appendix, we list the test functions \(F(x)=(f_1(x),f_2(x),\ldots ,f_n(x))^T\) and the corresponding initial points \(x_0\).
1. Exponential function [23]
$$\begin{aligned} f_i(x)= & {} \frac{i}{10}\left( 1-x_i^2-e^{-x_i^2}\right) , ~i=1,2,\ldots , n-1,\\ f_n(x)= & {} \frac{n}{10}\left( 1-e^{-x_n^2}\right) .\\ x_0= & {} \left( \frac{1}{4n^2}, \frac{2}{4n^2},\ldots , \frac{n}{4n^2}\right) ^T. \end{aligned}$$
2. Logarithmic function [23]
$$\begin{aligned} f_i(x)= & {} \ln (x_i+1)-\frac{x_i}{n},~i=1,2,\ldots ,n.\\ x_0= & {} (1,1,\ldots ,1)^T. \end{aligned}$$
3. Strictly convex function 1 [24]
F(x) is the gradient of \(h(x)=\sum _{i=1}^n(e^{x_i}-x_i)\).
$$\begin{aligned} f_i(x)= & {} e^{x_i}-1,~i=1,2,\ldots ,n.\\ x_0= & {} \left( \frac{1}{n},\frac{2}{n},\ldots ,1\right) ^T. \end{aligned}$$
4. Strictly convex function 2 [24]
F(x) is the gradient of \(h(x)=\sum _{i=1}^n\frac{i}{10}(e^{x_i}-x_i)\).
$$\begin{aligned} f_i(x)= & {} \frac{i}{10}\left( e^{x_i}-1\right) , ~i=1,2,\ldots ,n.\\ x_0= & {} (1,1,\ldots ,1 )^T. \end{aligned}$$
5. Extended Rosenbrock function (n is even) [25]
For \(i=1,2,\ldots ,n/2\)
$$\begin{aligned} f_{2i-1}(x)= & {} 10(x_{2i}-x_{2i-1}^2),\\ f_{2i}(x)= & {} 1-x_{2i-1}.\\ x_0= & {} (-1.2,1,\ldots ,-1.2,1)^T. \end{aligned}$$
6. Function 6 (n is a multiple of 3) [23]
For \(i=1,2,\ldots ,n/3\)
$$\begin{aligned} f_{3i-2}(x)= & {} x_{3i-2}x_{3i-1}-x_{3i}^2-1,\\ f_{3i-1}(x)= & {} x_{3i-2}x_{3i-1}x_{3i}-x_{3i-2}^2+x_{3i-1}^2-2,\\ f_{3i}(x)= & {} e^{-x_{3i-2}}-e^{-x_{3i-1}}.\\ x_0= & {} (1,1,\ldots ,1)^T. \end{aligned}$$
7. Tridimensional valley function (n is a multiple of 3) [26]
For \(i=1,2,\ldots ,n/3,\)
$$\begin{aligned} f_{3i-2}(x)= & {} (c_2x_{3i-2}^3+c_1x_{3i-2})\exp \left( \frac{-x_{3i-2}^2}{100}\right) -1,\\ f_{3i-1}(x)= & {} 10 (\sin (x_{3i-2})-x_{3i-1} ),\\ f_{3i}(x)= & {} 10 (\cos (x_{3i-2})-x_{3i}),\\ \text{ where }{} & {} \\ c_1= & {} 1.003344481605351,\\ c_2= & {} -3.344481605351171\times 10^{-3}.\\ x_0= & {} (2,1,2,\ldots ,2,1,2)^T. \end{aligned}$$
8. Extended Powell singular function (n is a multiple of 4) [27]
For \(i=1,2,\ldots ,n/4,\)
$$\begin{aligned} f_{4i-3}(x)= & {} x_{4i-3}+10x_{4i-2},\\ f_{4i-2}(x)= & {} \sqrt{5}(x_{4i-1}-x_{4i}),\\ f_{4i-1}(x)= & {} (x_{4i-2}-2x_{4i-1})^2,\\ f_{4i}(x)= & {} \sqrt{10}(x_{4i-3}-x_{4i})^2.\\ x_0= & {} (1.5\times 10^{-4},1.5\times 10^{-4},\ldots ,1.5\times 10^{-4})^T. \end{aligned}$$
9. Tridiagonal exponential problem [28]
$$\begin{aligned} f_1(x)= & {} x_1 - \exp (\cos (h (x_1 + x_2))),\\ f_i(x)= & {} x_i - \exp (\cos (h(x_{i-1} + x_i + x_{i+1}))),~i=2,\ldots ,n-1,\\ f_n(x)= & {} x_n - \exp (\cos (h(x_{n-1} + x_n))).\\ h= & {} 1/(n+1).\\ x_0= & {} (1.5,\ldots ,1.5)^T. \end{aligned}$$
10. Discrete boundary value problem [27]
$$\begin{aligned} f_1(x)= & {} 2x_1+0.5h^2(x_1+h)^3-x_2,\\ f_i(x)= & {} 2x_i+0.5h^2(x_i+ih)^3-x_{i-1}+x_{i+1},~i=2,\ldots ,n-1,\\ f_n(x)= & {} 2x_n+0.5h^2(x_n+nh)^3-x_{n-1}.\\ h= & {} 1/(n+1).\\ x_0= & {} (h(h-1),h(2h-1),\ldots ,h(nh-1))^T. \end{aligned}$$
11. Discretized two-point boundary value problem
$$\begin{aligned} f_1(x)= & {} 8x_1-x_2+m(\sin x_1-1),\\ f_i(x)= & {} -x_{i-1}+8x_i-x_{i+1}+m(\sin x_i -1),~i=2,\ldots ,n-1,\\ f_n(x)= & {} -x_{n-1}+8x_n+m(\sin x_n-1).\\ m= & {} 1/(n+1)^2.\\ x_0= & {} \left( \frac{1}{n},\frac{2}{n},\ldots ,1\right) ^T. \end{aligned}$$
12. Trigonometric function [23]
$$\begin{aligned} f_i(x)= & {} 2\left( n+i(1-\cos x_i)-\sin x_i-\sum _{j=1}^n \cos x_j\right) (2\sin x_i-\cos x_i),\\ x_0= & {} \left( \frac{101}{100n},\ldots ,\frac{101}{100n}\right) ^T. \end{aligned}$$
13. Penalty I function [23]
$$\begin{aligned} f_i(x)= & {} \sqrt{10^{-5}}(x_i-1),~ i=1,2,\ldots ,n-1,\\ f_n(x)= & {} \left( \frac{1}{4n}\right) \sum _{j=1}^n x_j^2-\frac{1}{4}.\\ x_0= & {} \left( \frac{1}{3},\frac{1}{3},\ldots , \frac{1}{3}\right) ^T. \end{aligned}$$
14. Handbook function [23]
$$\begin{aligned} f_i(x)= & {} 0.05(x_i-1)+2\sin \left( \sum _{j=1}^n(x_j-1)+\sum _{j=1}^n(x_j-1)^2\right) \\{} & {} (1+2(x_i-1))+2\sin \left( \sum _{j=1}^n(x_j-1)\right) ,~i=1,2,\ldots ,n.\\ x_0= & {} (5,\ldots ,5)^T. \end{aligned}$$
15. Linear function-rank 2 [23]
$$\begin{aligned} f_1(x)= & {} x_1-1,\\ f_i(x)= & {} i\sum _{j=1}^n jx_j-i,~ i=2,3,\ldots ,n.\\ x_0= & {} \left( 1,\frac{1}{n},\ldots , \frac{1}{n}\right) ^T. \end{aligned}$$
16. Engval function
$$\begin{aligned} f_1(x)= & {} x_1(x_1^2+x_2^2)-1,\\ f_i(x)= & {} x_i(x_{i-1}^2+2x_i^2+x_{i+1}^2)-1,~i=2,\ldots ,n-1,\\ f_n(x)= & {} x_n(x_{n-1}^2+x_n^2).\\ x_0= & {} (0,0,\ldots ,0)^T. \end{aligned}$$
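Test functions of this kind are straightforward to vectorize. Below is a minimal sketch, assuming NumPy, of two of the functions above (Problem 1, the Exponential function, and Problem 5, the Extended Rosenbrock function) together with their initial points; all function names are our own choices for illustration.

```python
import numpy as np

def exponential_function(x):
    # Problem 1: f_i = (i/10)(1 - x_i^2 - e^{-x_i^2}) for i < n,
    #            f_n = (n/10)(1 - e^{-x_n^2}); x* = 0 is a root.
    n = x.size
    i = np.arange(1, n + 1)
    f = (i / 10.0) * (1.0 - x**2 - np.exp(-x**2))
    f[-1] = (n / 10.0) * (1.0 - np.exp(-x[-1]**2))
    return f

def exponential_x0(n):
    # x_0 = (1/(4n^2), 2/(4n^2), ..., n/(4n^2))^T
    return np.arange(1, n + 1) / (4.0 * n**2)

def extended_rosenbrock(x):
    # Problem 5 (n even): f_{2i-1} = 10(x_{2i} - x_{2i-1}^2),
    #                     f_{2i}   = 1 - x_{2i-1}; x* = (1, ..., 1)^T.
    f = np.empty_like(x, dtype=float)
    f[0::2] = 10.0 * (x[1::2] - x[0::2]**2)   # odd-indexed components
    f[1::2] = 1.0 - x[0::2]                   # even-indexed components
    return f

def rosenbrock_x0(n):
    # x_0 = (-1.2, 1, ..., -1.2, 1)^T
    x0 = np.empty(n)
    x0[0::2], x0[1::2] = -1.2, 1.0
    return x0
```

Each residual function maps \(\mathbb {R}^n\) to \(\mathbb {R}^n\), so any of the solvers discussed in the paper can be benchmarked on them by monitoring \(\Vert F(x_k)\Vert \).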
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Cao, H., An, X. & Han, J. Solving nonlinear equations with a direct Broyden method and its acceleration. J. Appl. Math. Comput. 69, 1917–1944 (2023). https://doi.org/10.1007/s12190-022-01818-8
Received:
Revised:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1007/s12190-022-01818-8
Keywords
- Nonlinear equations
- Quasi-Newton method
- Automatic differentiation
- Global convergence
- Superlinear convergence