Abstract
By leveraging curvature information, Newton’s method offers significant advantages over first-order methods for distributed learning problems. However, its practical applicability in large-scale and heterogeneous learning environments is hindered by the high computation and communication costs associated with the Hessian matrix, sub-model diversity, staleness during training, and data heterogeneity. To address these challenges, this paper introduces an efficient algorithm called Resource-Adaptive Newton Learning (RANL), which overcomes these limitations through a simple Hessian initialization and adaptive assignment of training regions. The convergence of RANL is rigorously analyzed under standard assumptions in stochastic optimization: the analysis establishes that RANL achieves a linear convergence rate while adapting to available resources and maintaining high efficiency. Moreover, RANL is independent of the condition number of the problem and eliminates the need for complex parameter tuning. These advantages make RANL a promising approach for distributed learning in practical scenarios.
Supported in part by the National Natural Science Foundation of China (NSFC) under Grants 62122042 and 62302247, in part by the Fundamental Research Funds for the Central Universities under Grant 2022JC016, and in part by the Shandong Natural Science Foundation, China, under Grant ZR2022QF140.
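As a rough illustration of the kind of update the abstract describes, the sketch below applies a damped block Newton step to a toy quadratic, using a scaled-identity Hessian initialization and parameter blocks of unequal size as a stand-in for resource-adaptive assignments of training regions. This is a minimal sketch under those assumptions; the function names, block sizes, and damping choices are illustrative and do not come from the paper's RANL algorithm.

```python
# Illustrative sketch only: a toy Newton-type update on per-worker parameter
# blocks, with an identity-based Hessian initialization. Not the paper's RANL.
import numpy as np

def local_newton_step(w, grad_fn, hess_fn, block, h_init=1.0, reg=1e-3):
    """One damped Newton step restricted to the coordinates in `block`."""
    g = grad_fn(w)[block]
    # Start from a scaled identity (the "simple Hessian initialization"
    # stand-in) and add the local curvature block; the small regularizer
    # keeps the block Hessian safely invertible.
    H = h_init * np.eye(len(block)) + hess_fn(w)[np.ix_(block, block)]
    H += reg * np.eye(len(block))
    step = np.linalg.solve(H, g)
    w_new = w.copy()
    w_new[block] -= step
    return w_new

# Toy quadratic problem f(w) = 0.5 w^T A w - b^T w on 6 coordinates.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); A = A @ A.T + np.eye(6)
b = rng.normal(size=6)
grad_fn = lambda w: A @ w - b
hess_fn = lambda w: A

w = np.zeros(6)
# Two hypothetical "workers" with unequal capacity get blocks of different sizes.
blocks = [np.arange(0, 4), np.arange(4, 6)]
for _ in range(20):
    for blk in blocks:
        w = local_newton_step(w, grad_fn, hess_fn, blk)
print("residual:", np.linalg.norm(A @ w - b))
```

On this strongly convex toy problem, the blockwise damped Newton updates decrease the objective monotonically, which is the behavior the abstract's convergence claims concern; the actual algorithm, assignments, and analysis are given in the paper itself.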
Notes
1. For \(L_g\)-smooth, \(\mu\)-strongly convex functions, the condition number is defined as \(L_g/\mu\).
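For concreteness, a standard worked instance of this definition (illustrative, not taken from the paper): for a strongly convex quadratic, the smoothness and strong-convexity constants are the extreme eigenvalues of the Hessian, so the condition number is their ratio.

\[
  f(x) = \tfrac{1}{2} x^\top A x, \quad A \succ 0
  \;\Longrightarrow\;
  L_g = \lambda_{\max}(A), \quad \mu = \lambda_{\min}(A), \quad
  \kappa = \frac{L_g}{\mu} = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}.
\]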
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Chen, S., Yuan, Y., Tao, Y., Cai, Z., Yu, D. (2024). Resource-Adaptive Newton’s Method for Distributed Learning. In: Wu, W., Tong, G. (eds) Computing and Combinatorics. COCOON 2023. Lecture Notes in Computer Science, vol 14422. Springer, Cham. https://doi.org/10.1007/978-3-031-49190-0_24
DOI: https://doi.org/10.1007/978-3-031-49190-0_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-49189-4
Online ISBN: 978-3-031-49190-0
eBook Packages: Computer Science, Computer Science (R0)