Accelerating Stochastic Variance Reduced Gradient Using Mini-Batch Samples on Estimation of Average Gradient

  • Conference paper
Advances in Neural Networks - ISNN 2017 (ISNN 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10261)

Abstract

Stochastic gradient descent (SGD) is popular for large-scale optimization but suffers from slow convergence. To remedy this, the stochastic variance reduced gradient (SVRG) method was proposed; it uses an average gradient, computed at a snapshot point, to reduce the variance of the stochastic updates. Because computing this average gradient is expensive, it is refreshed only once every m iterations, where m is set to the same order as the data size. For large-scale problems this degrades efficiency, since the stale estimate of the average gradient may no longer be accurate enough. We propose estimating the average gradient from a mini-batch of samples, a method we call stochastic mini-batch variance reduced gradient (SMVRG). SMVRG greatly reduces the computational cost of estimating the average gradient, making it feasible to refresh the estimate frequently and thus keep it more accurate. Numerical experiments show the effectiveness of our method in terms of convergence rate and computational cost.
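The update rule the abstract describes can be made concrete. Below is a minimal sketch in Python/NumPy of an SVRG-style loop on a least-squares objective in which the snapshot's average gradient is estimated from a mini-batch, as SMVRG proposes. The function name `smvrg`, the hyperparameters (`eta`, `outer`, `m`, `batch`), and the least-squares setting are all our own illustrative assumptions, not the authors' implementation or notation.

```python
import numpy as np

def grad_i(w, X, y, i):
    """Gradient of the i-th least-squares loss 0.5 * (x_i @ w - y_i)**2."""
    return X[i] * (X[i] @ w - y[i])

def smvrg(X, y, eta=0.01, outer=30, m=200, batch=64, rng=None):
    """Illustrative sketch of SMVRG (hypothetical names, not the paper's code).

    The snapshot's average gradient mu is estimated on a random mini-batch
    rather than on all n samples, so it is cheap enough to refresh often.
    Pass batch=None to recover plain SVRG, where mu is the exact
    full-data gradient at the snapshot.
    """
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(outer):
        w_snap = w.copy()
        # Average gradient at the snapshot: full batch (SVRG) or mini-batch (SMVRG).
        idx = np.arange(n) if batch is None else rng.choice(n, size=batch, replace=False)
        mu = X[idx].T @ (X[idx] @ w_snap - y[idx]) / len(idx)
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic step: grad_i(w) - grad_i(w_snap) + mu.
            w -= eta * (grad_i(w, X, y, i) - grad_i(w_snap, X, y, i) + mu)
    return w

# Toy usage: recover a planted weight vector from noiseless data.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true
w_hat = smvrg(X, y)
print(np.linalg.norm(w_hat - w_true))  # error shrinks as training proceeds
```

Setting `batch=None` recovers plain SVRG, where the snapshot gradient is exact but costs a full pass over the data; a small `batch` makes each refresh cheap, which is what lets SMVRG refresh the average gradient more frequently.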


Acknowledgements

We thank Rie Johnson for the advice. This work is supported by the National Natural Science Foundation of China (61372142, U1401252, U1404603), the Guangdong Province Science and Technology Plan (2013B010102004, 2013A011403003), and the Guangzhou City Science and Technology Research Projects (201508010023).

Author information

Corresponding author

Correspondence to Junchu Huang.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Huang, J., Zhou, Z., Xu, B., Huang, Y. (2017). Accelerating Stochastic Variance Reduced Gradient Using Mini-Batch Samples on Estimation of Average Gradient. In: Cong, F., Leung, A., Wei, Q. (eds) Advances in Neural Networks - ISNN 2017. ISNN 2017. Lecture Notes in Computer Science, vol 10261. Springer, Cham. https://doi.org/10.1007/978-3-319-59072-1_41

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-59072-1_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59071-4

  • Online ISBN: 978-3-319-59072-1

  • eBook Packages: Computer Science (R0)
