
On the Steplength Selection in Stochastic Gradient Methods

  • Conference paper
  • In: Numerical Computations: Theory and Algorithms (NUMTA 2019)

Abstract

This paper deals with the steplength selection in stochastic gradient methods for large-scale optimization problems arising in machine learning. We introduce an adaptive steplength selection obtained by tailoring a limited memory steplength rule, recently developed in the deterministic context, to the stochastic gradient approach. The proposed rule provides values within an interval whose bounds must be fixed in advance by the user. A suitable choice of these bounds allows the method to perform comparably to the standard stochastic gradient method equipped with the best-tuned steplength. Since the setting of the bounds only slightly affects the performance, the new rule makes parameter tuning less expensive than selecting the optimal fixed steplength in the standard stochastic gradient method. We evaluate the behaviour of the proposed steplength selection by training binary classifiers on well-known data sets with different loss functions.
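To make the idea concrete, here is a minimal sketch, not the authors' implementation, of a mini-batch stochastic gradient loop in which an adaptive steplength is projected onto a user-prefixed interval [alpha_min, alpha_max]. The adaptive rule shown is a simple Barzilai-Borwein-type estimate standing in for the limited memory rule of the paper, and every function and parameter name (sgd_bounded_steplength, grad_batch, alpha_min, alpha_max) is an illustrative assumption rather than part of the paper.

```python
# Minimal sketch (illustrative, not the authors' algorithm): mini-batch SGD
# where an adaptive steplength is clipped to a user-prefixed interval.
import numpy as np

def sgd_bounded_steplength(grad_batch, x0, n_samples, batch_size=32,
                           alpha_min=1e-3, alpha_max=1e-1, epochs=10, rng=None):
    """Run mini-batch SGD, keeping the adaptive steplength in [alpha_min, alpha_max]."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    x_prev, g_prev = None, None
    alpha = alpha_max  # start from the upper bound of the admissible interval
    for _ in range(epochs):
        for _ in range(n_samples // batch_size):
            idx = rng.choice(n_samples, size=batch_size, replace=False)
            g = grad_batch(x, idx)  # stochastic gradient on the sampled mini-batch
            if x_prev is not None:
                s, y = x - x_prev, g - g_prev
                denom = s @ y
                if denom > 0:  # BB1-type steplength when the curvature estimate is positive
                    alpha = (s @ s) / denom
            # the rule only supplies values inside the prefixed interval
            alpha = min(max(alpha, alpha_min), alpha_max)
            x_prev, g_prev = x.copy(), g
            x = x - alpha * g
    return x
```

For instance, grad_batch(x, idx) could return the mini-batch gradient of a regularized logistic loss, matching the binary classification setting mentioned in the abstract.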

Author information

Corresponding author: Giorgia Franchini.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Franchini, G., Ruggiero, V., Zanni, L. (2020). On the Steplength Selection in Stochastic Gradient Methods. In: Sergeyev, Y., Kvasov, D. (eds) Numerical Computations: Theory and Algorithms. NUMTA 2019. Lecture Notes in Computer Science, vol 11973. Springer, Cham. https://doi.org/10.1007/978-3-030-39081-5_17

  • DOI: https://doi.org/10.1007/978-3-030-39081-5_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-39080-8

  • Online ISBN: 978-3-030-39081-5

  • eBook Packages: Computer Science, Computer Science (R0)
