
Gradient method with multiple damping for large-scale unconstrained optimization

  • Original Paper, Optimization Letters

Abstract

Gradient methods are popular because only the gradient of the objective function is required. They can, however, be very slow when the objective function is ill-conditioned. One possible reason for this inefficiency is that a constant criterion, which aims only at reducing the function value, is used in choosing the steplength, and this leads to a stable dynamical system with slow convergence. To overcome this, we propose a new gradient method with multiple damping, which works on the objective function and the norm of the gradient vector simultaneously. That is, the proposed method combines damping with line search strategies: an individual adaptive parameter damps the gradient vector, while line searches reduce the function value. Global convergence of the proposed method is established under both backtracking and nonmonotone line searches. Finally, numerical results show that the proposed algorithm performs better than some well-known conjugate gradient (CG)-based methods.
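The abstract describes the method's structure: an adaptive parameter damps the gradient direction (acting on the gradient norm), while a line search picks the steplength (acting on the function value). The paper's actual multiple-damping rule is not reproduced on this page, so the Python sketch below substitutes a purely illustrative damping factor mu_k = min(1, ||g_{k-1}|| / ||g_k||) together with standard Armijo backtracking; the function name, the damping rule, and the quadratic test problem are all assumptions made for illustration, not the authors' formulation.

```python
import numpy as np


def damped_gradient_method(f, grad, x0, tol=1e-6, max_iter=10_000,
                           c1=1e-4, shrink=0.5):
    """Minimal sketch of a damped gradient method with Armijo backtracking.

    The damping rule used here is a hypothetical stand-in for the paper's
    adaptive multiple-damping parameter.
    """
    x = np.asarray(x0, dtype=float)
    g_prev_norm = None
    for _ in range(max_iter):
        g = grad(x)
        g_norm = np.linalg.norm(g)
        if g_norm < tol:
            break
        # Hypothetical adaptive damping: shrink the direction whenever the
        # gradient norm grows, so the iteration also works on ||g||.
        mu = 1.0 if g_prev_norm is None else min(1.0, g_prev_norm / g_norm)
        d = -mu * g  # damped steepest-descent direction
        # Backtracking line search enforcing the Armijo (sufficient
        # decrease) condition on the function value.
        t, fx, slope = 1.0, f(x), float(np.dot(g, d))
        while f(x + t * d) > fx + c1 * t * slope:
            t *= shrink
        x = x + t * d
        g_prev_norm = g_norm
    return x


# Example: an ill-conditioned quadratic, f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 100.0])
x_star = damped_gradient_method(lambda x: 0.5 * x @ A @ x,
                                lambda x: A @ x,
                                np.array([1.0, 1.0]))
print(x_star)  # expected to approach the minimizer [0, 0]
```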





Acknowledgements

This work was supported by the Malaysia Fundamental Research Grant Scheme (FRGS/2/2013/ST06/UPM/02/1). The first author also thanks Yayasan Sultan Iskandar Johor, Malaysia, for financial support to attend The 10th International Conference on Optimization: Techniques and Applications (ICOTA 10), where part of this material was presented orally.

Author information

Correspondence to Wah June Leong.


About this article


Cite this article

Sim, H.S., Leong, W.J. & Chen, C.Y. Gradient method with multiple damping for large-scale unconstrained optimization. Optim Lett 13, 617–632 (2019). https://doi.org/10.1007/s11590-018-1247-9

