
Combined First- and Second-Order Directions for Deep Neural Networks Training

  • Conference paper
  • In: Numerical Computations: Theory and Algorithms (NUMTA 2023)

Abstract

In this work, we propose a novel stochastic optimization algorithm for the unconstrained, nonlinear, and non-convex optimization problems arising in the training of deep neural networks. The new algorithm combines first- and second-order information: at each step, the computed search direction is a linear combination of a variance-reduced gradient and a stochastic limited-memory quasi-Newton direction. We report computational experiments showing the performance of the proposed optimizer in training a modern deep residual neural network for image classification tasks. The numerical results show that the proposed algorithm performs comparably to or better than the state-of-the-art Adam optimizer, without the burden of tuning its many hyperparameters.
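
To make the construction concrete, the NumPy sketch below shows one update whose search direction linearly combines the (negative) variance-reduced gradient with a limited-memory quasi-Newton direction obtained from the standard L-BFGS two-loop recursion. This is a minimal illustration under stated assumptions, not the authors' implementation: the mixing weight beta, the step size lr, and the way the variance-reduced gradient v and the curvature pairs (s, y) are generated (for instance with SVRG- or SARAH-type estimators on mini-batches and sampled curvature pairs) are placeholders introduced here for illustration.

```python
import numpy as np

def lbfgs_direction(g, s_pairs, y_pairs):
    """Two-loop recursion: return an approximation of -H*g, where H is the
    inverse-Hessian approximation built from the stored curvature pairs.
    Falls back to the plain steepest-descent direction when no pairs exist."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_pairs), reversed(y_pairs)):   # newest pair first
        rho = 1.0 / np.dot(y, s)
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    # Initial scaling from the most recent pair (standard L-BFGS choice)
    gamma = (np.dot(s_pairs[-1], y_pairs[-1]) / np.dot(y_pairs[-1], y_pairs[-1])
             if s_pairs else 1.0)
    r = gamma * q
    for (s, y), a in zip(zip(s_pairs, y_pairs), reversed(alphas)):  # oldest first
        rho = 1.0 / np.dot(y, s)
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r

def combined_update(w, v, s_pairs, y_pairs, beta=0.5, lr=0.1):
    """One parameter update whose search direction linearly combines the
    (negative) variance-reduced gradient v with a stochastic L-BFGS direction.
    beta and lr are illustrative placeholders, not values from the paper."""
    p = lbfgs_direction(v, s_pairs, y_pairs)   # second-order direction
    d = beta * (-v) + (1.0 - beta) * p         # combined search direction
    return w + lr * d

# Tiny, purely illustrative usage with made-up quantities
w = np.zeros(4)
v = np.array([0.4, -0.2, 0.1, 0.3])            # stands in for a variance-reduced gradient
s_pairs, y_pairs = [0.1 * np.ones(4)], [0.05 * np.ones(4)]
w = combined_update(w, v, s_pairs, y_pairs)
```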

Supported by INdAM—GNCS Project CUP_E53C22001930001.




Acknowledgments

The authors gratefully acknowledge the support of the INdAM-GNCS Project CUP_E53C22001930001. This study was carried out within the PNRR research activities of the consortium iNEST (Interconnected Nord-Est Innovation Ecosystem), funded by the European Union NextGenerationEU (Piano Nazionale di Ripresa e Resilienza (PNRR), Missione 4 Componente 2, Investimento 1.5, D.D. 1058 23/06/2022, ECS_00000043). This manuscript reflects only the authors' views and opinions; neither the European Union nor the European Commission can be considered responsible for them.

The authors wish to pay tribute to the memory of their dear colleague and friend Daniela di Serafino. We lost her along the path from the initial idea and experiments to the submission of this manuscript. We will all miss her and her infinite passion for research.

Author information


Corresponding author

Correspondence to Ángeles Martínez.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Martínez, Á., Viola, M., Yousefi, M. (2025). Combined First- and Second-Order Directions for Deep Neural Networks Training. In: Sergeyev, Y.D., Kvasov, D.E., Astorino, A. (eds) Numerical Computations: Theory and Algorithms. NUMTA 2023. Lecture Notes in Computer Science, vol 14476. Springer, Cham. https://doi.org/10.1007/978-3-031-81241-5_9


  • DOI: https://doi.org/10.1007/978-3-031-81241-5_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-81240-8

  • Online ISBN: 978-3-031-81241-5

  • eBook Packages: Computer Science, Computer Science (R0)
