
A New Supermemory Gradient Method without Line Search for Unconstrained Optimization

  • Chapter
The Sixth International Symposium on Neural Networks (ISNN 2009)

Part of the book series: Advances in Intelligent and Soft Computing (AINSC, volume 56)

Abstract

In this paper, we present a new supermemory gradient method without line search for unconstrained optimization problems. The new method guarantees a descent direction at each iteration. It makes full use of the iterative information from several previous steps while avoiding the storage and computation of matrices associated with the Hessian of the objective function, so it is well suited to large-scale optimization problems. We prove global convergence under mild conditions, and we analyze the linear convergence rate of the new method when the objective function is uniformly convex and twice continuously differentiable.
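The abstract describes the method only at a high level, so the following Python sketch is an assumption made purely for illustration: a generic supermemory-gradient-style loop in which the search direction combines the current negative gradient with damped multiples of the last few directions, and the step size comes from a closed-form rule rather than a line search. The function name `supermemory_gradient`, the damping coefficients, and the Lipschitz-bound step rule are all hypothetical and are not the scheme proposed by the authors.

```python
import numpy as np

def supermemory_gradient(grad, x0, lip=10.0, m=3, delta=1e-8, max_iter=1000, tol=1e-6):
    """Illustrative supermemory-gradient-style iteration (not the paper's exact scheme).

    d_k is the negative gradient plus damped multiples of the last m directions;
    the step size comes from a closed-form rule using `lip`, a rough upper bound
    on the gradient's Lipschitz constant, so no line search is performed.
    """
    x = np.asarray(x0, dtype=float)
    prev_dirs = []                              # the last m search directions
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = float(g @ g)
        if gnorm2 < tol ** 2:
            break
        # combine -g with damped previous directions (supermemory flavor)
        d = -g
        for d_old in prev_dirs:
            beta = gnorm2 / (gnorm2 + float(d_old @ d_old) + delta)
            d = d + (beta / len(prev_dirs)) * d_old
        # safeguard: fall back to steepest descent if d is not a descent direction
        gd = float(g @ d)
        if gd >= 0.0:
            d, gd = -g, -gnorm2
        # formula-based step size instead of a line search
        alpha = -gd / (lip * float(d @ d) + delta)
        x = x + alpha * d
        prev_dirs = (prev_dirs + [d])[-m:]      # keep only the last m directions
    return x

# Tiny usage sketch on a convex quadratic with Hessian diag(2, 10), so lip=10 suffices:
# x_min = supermemory_gradient(lambda x: np.array([2.0 * x[0], 10.0 * x[1]]),
#                              x0=[3.0, -2.0], lip=10.0)
```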




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Liu, J., Liu, H., Zheng, Y. (2009). A New Supermemory Gradient Method without Line Search for Unconstrained Optimization. In: Wang, H., Shen, Y., Huang, T., Zeng, Z. (eds) The Sixth International Symposium on Neural Networks (ISNN 2009). Advances in Intelligent and Soft Computing, vol 56. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01216-7_68


  • DOI: https://doi.org/10.1007/978-3-642-01216-7_68

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-01215-0

  • Online ISBN: 978-3-642-01216-7

  • eBook Packages: Engineering, Engineering (R0)
