
The Chebyshev–Shamanskii Method for Solving Systems of Nonlinear Equations

Published in: Journal of Optimization Theory and Applications

Abstract

We present a method for solving systems of nonlinear equations, based on the third-order Chebyshev algorithm and accelerated by a Shamanskii-like process. We show that the new method has fifth-order (quintic) convergence. We also analyze the efficiency of high-order methods, with particular attention to the new Chebyshev–Shamanskii method, and identify the optimal reuse of the same Jacobian in the Shamanskii process applied to the Chebyshev method. Numerical experiments confirm the theoretical analysis.
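To make the accelerated cycle concrete, the following is a minimal numerical sketch: one full Chebyshev step, followed by m corrections that reuse the already-factored Jacobian in the spirit of Shamanskii. The function names, the routine tensor_dd(x, d) supplying the second-derivative term \(F''(x)[d,d]\), and the SciPy-based linear algebra are illustrative assumptions, not the authors' implementation.

    # Sketch: one Chebyshev step plus m Shamanskii-like corrections,
    # all reusing the LU factors of a single Jacobian evaluation.
    from scipy.linalg import lu_factor, lu_solve

    def chebyshev_shamanskii_step(F, J, tensor_dd, x, m=1):
        lu = lu_factor(J(x))           # factor the Jacobian once
        y = x
        for _ in range(1 + m):         # first pass is the Chebyshev step at x
            d_n = lu_solve(lu, -F(y))  # Newton direction, frozen Jacobian
            # third-order correction: d_n - (1/2) J^{-1} F''(y)[d_n, d_n]
            d_c = d_n - 0.5 * lu_solve(lu, tensor_dd(y, d_n))
            y = y + d_c
        return y

With m = 0 this reduces to the classical Chebyshev iteration; the analysis in the paper determines how many times the same Jacobian can profitably be reused.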



References

  1. Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7(1), 17–41 (1981)

  2. Rall, L.B., Tapia, R.A.: The Kantorovich theorem and error estimates for Newton's method. Mathematics Research (1970)

  3. Chebyshev, P.L.: Collected Works, vol. 5. Moscow–Leningrad (1951)

  4. Halley, E.: A new, exact and easy method of finding roots of any equations generally, and that without any previous reduction. Philos. Trans. Roy. Soc. (1694)

  5. Gutiérrez, J.M., Hernández, M.A.: An acceleration of Newton's method: Super-Halley method. Appl. Math. Comput. 117(2–3), 223–239 (2001)

  6. Ezquerro, J.A.: A modification of the Chebyshev method. IMA J. Numer. Anal. 17, 511–525 (1997)

  7. Ezquerro, J.A., Hernández, M.A.: An improvement of the region of accessibility of Chebyshev's method from Newton's method. Math. Comput. 78(264), 1613–1627 (2009)

  8. Ezquerro, J.A., Hernández, M.A., Romero, N.: On some one-point hybrid iterative methods. Nonlinear Anal. 72, 587–601 (2010)

  9. Griewank, A.: Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. SIAM, Philadelphia (2008)

  10. Zhang, H.: On the Halley class of methods for unconstrained optimization problems. Optim. Methods Softw. 25(5), 753–762 (2009)

  11. Shamanskii, V.: On a modification of the Newton method. Ukrain. Mat. Zh. 19, 133–138 (1967)

  12. Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)

  13. Traub, J.F.: Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964)

  14. Kelley, C.T.: Iterative Methods for Linear and Nonlinear Equations. SIAM, Philadelphia (1995)

  15. Lampariello, F., Sciandrone, M.: Global convergence technique for the Newton method with periodic Hessian evaluation. J. Optim. Theory Appl. 111(2), 341–358 (2001)

  16. Brent, R.P.: Some efficient algorithms for solving systems of nonlinear equations. SIAM J. Numer. Anal. 10(2), 327–344 (1973)

  17. Ostrowski, A.M.: Solution of Equations in Euclidean and Banach Spaces. Academic Press, New York (1973)

  18. Ezquerro, J., Grau-Sánchez, M., Grau, A., Hernández, M., Noguera, M., Romero, N.: On iterative methods with accelerated convergence for solving systems of nonlinear equations. J. Optim. Theory Appl. 151(1), 163–174 (2011)

  19. Gundersen, G., Steihaug, T.: On large scale unconstrained optimization problems and higher-order methods. Optim. Methods Softw. 25(3), 337–358 (2010)

  20. Kchouk, B., Dussault, J.-P.: A new family of high order directions for unconstrained optimization inspired by Chebyshev and Shamanskii methods. Technical report, University of Sherbrooke, Optimization-online.org (2011)

  21. Fox, L.: An Introduction to Numerical Linear Algebra. Clarendon Press, Oxford (1964)

  22. Dussault, J.-P.: High-order Newton-penalty algorithms. J. Comput. Appl. Math. 182(1), 117–133 (2005)

  23. Baur, W., Strassen, V.: The complexity of partial derivatives. Theor. Comput. Sci. 22, 317–330 (1983)

  24. Griewank, A.: On automatic differentiation. Math. Program. Recent Dev. Appl. 6, 83–108 (1989)

  25. Dussault, J.-P., Hamelin, B., Kchouk, B.: Implementation issues for high-order algorithms. Acta Math. Vietnam. 34(1), 91–103 (2009)


Acknowledgements

This research was partially supported by NSERC grant OGP0005491, by the Institut des sciences mathématiques (ISM), and by the Department of Mathematics of the University of Sherbrooke.


Corresponding author

Correspondence to Bilel Kchouk.

Additional information

Communicated by Jean-Pierre Crouzeix.

Appendices

Appendix A: Test Functions

We tested our algorithms on the following functions, from the Moré, Garbow and Hillstrom collection [1]:

$$ f(x)=\sum_{i=1}^{m} f_i^{2}(x), $$
(A.1)

where \(f_1, \dots, f_m\) are defined by the following problems. We adopt the following general format, described in [1], and add the expression of the gradient component \(F_{i}= \frac{\partial f(x)}{\partial x_{i}}\).
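Because every test problem shares the sum-of-squares format (A.1), the chain rule gives \(F_i = 2\sum_{j} f_j(x)\,\partial f_j(x)/\partial x_i\), i.e. \(\nabla f(x) = 2 J_r(x)^{\top} r(x)\) for the residual vector \(r = (f_1, \dots, f_m)\) with Jacobian \(J_r\). A minimal generic sketch, where residuals and jacobian are hypothetical callables standing for any of the problem-specific routines below:

    # Assemble f(x) = sum_i f_i(x)^2 and its gradient from the residuals.
    import numpy as np

    def objective_and_gradient(residuals, jacobian, x):
        r = residuals(x)        # residual vector r(x) in R^m
        Jr = jacobian(x)        # m-by-n Jacobian of r
        f = np.dot(r, r)        # Eq. (A.1)
        F = 2.0 * Jr.T @ r      # F_i = 2 * sum_j f_j * df_j/dx_i
        return f, F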

Name of function

(a) Dimensions

(b) Function definition

(c) Standard starting point

(d) Minima

(e) Expression of the gradient \(F_{i}= \frac{\partial f(x)}{\partial x_{i}}\)

Thus, our four test functions are:

Problem 1

Broyden tridiagonal function

(a) n variable, m = n

(b) \(f_i(x) = (3 - 2x_i)x_i - x_{i-1} - 2x_{i+1} + 1\), where \(x_0 = x_{n+1} = 0\)

(c) \(x_0 = (-1, \dots, -1)\)

(d) \(f = 0\)

(e) \(F_i = -4f_{i-1}(x) + 2(3 - 4x_i)f_i(x) - 2f_{i+1}(x)\) (see the NumPy sketch below).
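A vectorized NumPy sketch of Problem 1 (the names are ours; the padded zeros implement the convention \(x_0 = x_{n+1} = 0\) and, in the gradient, \(f_0 = f_{n+1} = 0\)):

    import numpy as np

    def broyden_tridiagonal_residuals(x):
        # f_i(x) = (3 - 2 x_i) x_i - x_{i-1} - 2 x_{i+1} + 1
        xp = np.concatenate(([0.0], x, [0.0]))   # boundary zeros
        return (3.0 - 2.0 * x) * x - xp[:-2] - 2.0 * xp[2:] + 1.0

    def broyden_tridiagonal_gradient(x):
        # F_i = -4 f_{i-1} + 2 (3 - 4 x_i) f_i - 2 f_{i+1}
        fp = np.concatenate(([0.0], broyden_tridiagonal_residuals(x), [0.0]))
        return -4.0 * fp[:-2] + 2.0 * (3.0 - 4.0 * x) * fp[1:-1] - 2.0 * fp[2:]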

Problem 2

Broyden banded function

(a) n variable, m = n

(b) \(f_{i}(x) = x_{i}(2 + 5x_{i}^{2}) + 1 - \sum_{j \in J_{i}} x_{j}(1 + x_{j})\), where \(J_{i} = \{\, j : j \neq i,\ \max(1, i - m_l) \le j \le \min(n, i + m_u) \,\}\), \(m_l = 5\), and \(m_u = 1\)

(c) \(x_0 = (-1, \dots, -1)\)

(d) \(f = 0\)

(e) \(F_{i} = 2(1 + 15x_{i}^{2} - 2x_{i})f_{i}(x)\) (see the sketch below).
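The banded index set \(J_i\) of Problem 2 is easiest to read as an explicit loop; a sketch of the residuals (names are ours):

    import numpy as np

    def broyden_banded_residuals(x, ml=5, mu=1):
        # f_i(x) = x_i (2 + 5 x_i^2) + 1 - sum_{j in J_i} x_j (1 + x_j),
        # J_i = { j : j != i, max(1, i - ml) <= j <= min(n, i + mu) }
        n = x.size
        f = x * (2.0 + 5.0 * x**2) + 1.0
        for i in range(n):     # 0-based indices
            for j in range(max(0, i - ml), min(n - 1, i + mu) + 1):
                if j != i:
                    f[i] -= x[j] * (1.0 + x[j])
        return f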

Problem 3

Discrete boundary value function

(a) n variable, m = n

(b) \(f_i(x) = 2x_i - x_{i-1} - x_{i+1} + h^2(x_i + t_i + 1)^3/2\), where \(h = 1/(n+1)\), \(t_i = ih\), and \(x_0 = x_{n+1} = 0\)

(c) \(x_0 = (\zeta_j)\) where \(\zeta_j = t_j(t_j - 1)\)

(d) \(f = 0\)

(e) \(F_{i} = -2f_{i-1}(x) + 2[2 + \frac{3}{2}h^{2}(x_{i} + t_{i} + 1)^{2}]f_{i}(x) - 2f_{i+1}(x)\) (see the sketch below).
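Problem 3 in the same style, including its standard starting point (helper names are ours):

    import numpy as np

    def discrete_boundary_value_residuals(x):
        # f_i(x) = 2 x_i - x_{i-1} - x_{i+1} + h^2 (x_i + t_i + 1)^3 / 2
        n = x.size
        h = 1.0 / (n + 1)
        t = h * np.arange(1, n + 1)
        xp = np.concatenate(([0.0], x, [0.0]))   # x_0 = x_{n+1} = 0
        return 2.0 * x - xp[:-2] - xp[2:] + 0.5 * h**2 * (x + t + 1.0) ** 3

    def discrete_boundary_value_starting_point(n):
        # zeta_j = t_j (t_j - 1)
        t = np.arange(1, n + 1) / (n + 1)
        return t * (t - 1.0)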

Problem 4

Discrete integral equation function

(a) n variable, m = n

(b) \(f_{i}(x) = x_{i} + h[(1 - t_{i})\sum_{j=1}^{i} t_{j}(x_{j} + t_{j} + 1)^{3} + t_{i}\sum_{j=i+1}^{n} (1 - t_{j})(x_{j} + t_{j} + 1)^{3}]/2\), where \(h = 1/(n+1)\), \(t_i = ih\), and \(x_0 = x_{n+1} = 0\)

(c) \(x_0 = (\zeta_j)\) where \(\zeta_j = t_j(t_j - 1)\)

(d) \(f = 0\)

(e) \(F_{i} = 2[1 + \frac{3}{2}ht_{i}(x_{i} + t_{i} + 1)^{2}]f_{i}(x)\).

Table 4 gives details about the costs of the tested problems.

Table 4 Detailed intrinsic cost for the test functions, considering Eq. (A.1)

Appendix B: Unsymmetric Systems

If one considers unsymmetric systems, a detailed analysis of the costs involved in high-order methods is given in Table 5.

Table 5 Costs of operations involved in high-order methods for unsymmetric systems

Thus, the costs of the Newton and Chebyshev methods are:

Consequently, the efficiency of the Newton method is

and the efficiency of the Chebyshev method is

As in Sect. 5, we obtain the same ratio between efficiencies of the Chebyshev–Shamanskii method and the Chebyshev method, as the additional cost of the Chebyshev–Shamanskii method given in Corollary 3.1 is still valid, i.e.

Thus, if \(c\) is approximated by \(n^2\), then \(E(CS)/E(d_C) > 1\) for \(n > 22\); in other words, the Chebyshev–Shamanskii method is more efficient than the pure Chebyshev method for \(n > 22\). If \(c\) is approximated by \(n^3\), the same result holds for \(n > 29\).


Cite this article

Kchouk, B., Dussault, J.-P.: The Chebyshev–Shamanskii Method for Solving Systems of Nonlinear Equations. J. Optim. Theory Appl. 157, 148–167 (2013). https://doi.org/10.1007/s10957-012-0159-6
