
AALRSMF: An Adaptive Learning Rate Schedule for Matrix Factorization

  • Conference paper
Web Technologies and Applications (APWeb 2016)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 9932)


Abstract

Stochastic gradient descent (SGD) is an effective algorithm for solving the matrix factorization problem. However, the performance of SGD depends critically on how learning rates are tuned over time. In this paper, we propose a novel per-dimension learning rate schedule called AALRSMF. This schedule relies on local gradients, requires no manual tuning of a global learning rate, and is shown to be robust to the choice of hyper-parameters. Extensive experiments demonstrate that the proposed schedule achieves promising results compared to existing schedules on matrix factorization tasks.
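This preview does not reproduce the AALRSMF update rule itself. As a rough illustration of the general idea the abstract describes, a per-dimension learning rate driven by local gradients inside SGD matrix factorization, the following Python sketch uses an AdaGrad-style squared-gradient accumulator in place of the paper's schedule. The factor dimension k, regularization weight lam, base step eta0, and the (user, item, rating) triple format are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def sgd_mf_adaptive(ratings, n_users, n_items, k=16, lam=0.05,
                        eta0=0.1, eps=1e-8, n_epochs=20, seed=0):
        # SGD matrix factorization with a per-dimension adaptive step
        # size (AdaGrad-style accumulator). This is a stand-in for an
        # adaptive schedule, NOT the paper's AALRSMF rule.
        rng = np.random.default_rng(seed)
        P = 0.1 * rng.standard_normal((n_users, k))   # user factors
        Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
        GP = np.zeros_like(P)   # accumulated squared gradients for P
        GQ = np.zeros_like(Q)   # accumulated squared gradients for Q

        for _ in range(n_epochs):
            for u, i, r in ratings:
                err = r - P[u] @ Q[i]                  # prediction error
                gp = -err * Q[i] + lam * P[u]          # gradient w.r.t. P[u]
                gq = -err * P[u] + lam * Q[i]          # gradient w.r.t. Q[i]
                GP[u] += gp * gp                       # per-dimension history
                GQ[i] += gq * gq
                # Each coordinate gets its own effective learning rate,
                # shrinking where gradients have historically been large.
                P[u] -= eta0 / np.sqrt(GP[u] + eps) * gp
                Q[i] -= eta0 / np.sqrt(GQ[i] + eps) * gq
        return P, Q

    # Tiny usage example on a toy 2x2 rating matrix.
    ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0)]
    P, Q = sgd_mf_adaptive(ratings, n_users=2, n_items=2)
    print(P @ Q.T)   # reconstructed ratings

Note that no global learning-rate decay is tuned by hand here; the accumulated squared gradients shrink each coordinate's step automatically, which is the flavor of behavior the abstract claims for AALRSMF.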



Acknowledgement

The numerical calculations in this paper were performed on the supercomputing system at the Supercomputing Center of the University of Science and Technology of China.

Author information


Corresponding author

Correspondence to Shaoyin Cheng.



Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Wei, F., Guo, H., Cheng, S., Jiang, F. (2016). AALRSMF: An Adaptive Learning Rate Schedule for Matrix Factorization. In: Li, F., Shim, K., Zheng, K., Liu, G. (eds) Web Technologies and Applications. APWeb 2016. Lecture Notes in Computer Science, vol. 9932. Springer, Cham. https://doi.org/10.1007/978-3-319-45817-5_36


  • DOI: https://doi.org/10.1007/978-3-319-45817-5_36


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-45816-8

  • Online ISBN: 978-3-319-45817-5

  • eBook Packages: Computer Science (R0)
