Abstract
Federated learning (FL) is an emerging distributed optimization paradigm for the artificial intelligence of things (AIoT) that learns from data samples distributed across many clients while preserving data privacy. Adaptive client selection can improve FL training efficiency, yet it remains poorly understood, especially under the data and system heterogeneity of real-world clients. Most existing FL methods select clients uniformly at random in each round, implicitly assuming that all clients are equally important and learn at the same pace. This assumption has been shown to be invalid because it ignores system heterogeneity and the distinct critical learning periods (CLP) of individual clients. In this paper, we propose FedPrime, an adaptive critical learning periods control framework that augments client selection in FL based on fine-grained client utility. We first perform fine-grained CLP detection for each heterogeneous client, and then design an adaptive CLP control mechanism that fully exploits client selection during the FL training phase. Extensive experiments on various models and datasets validate that FedPrime improves model accuracy by up to 69.28% over state-of-the-art methods while maintaining good generalization performance.
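To make the abstract's mechanism concrete, the following is a minimal illustrative sketch in Python of CLP-aware, utility-based client selection. It is not FedPrime's actual algorithm, which is not reproduced on this page: the loss-slope CLP detector, the utility formula, and all function names and thresholds below are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only. The CLP detector, utility formula, and all
# names/thresholds here are hypothetical, not taken from the paper.
import random


def in_critical_period(loss_history, window=3, threshold=0.01):
    """Hypothetical CLP detector: a client is treated as inside its
    critical learning period while its loss is still dropping quickly."""
    if len(loss_history) < window + 1:
        return True  # too early to tell; assume the client is critical
    recent_drop = loss_history[-window - 1] - loss_history[-1]
    return recent_drop / window > threshold


def client_utility(loss_history, num_samples):
    """Hypothetical fine-grained utility: clients with more data and a
    higher current loss are assumed to contribute more per round."""
    current_loss = loss_history[-1] if loss_history else 1.0
    return num_samples * current_loss


def select_clients(clients, k, clp_boost=2.0):
    """Sample k clients with probability proportional to utility,
    boosting clients that are inside their critical learning period."""
    weights = []
    for c in clients:
        w = client_utility(c["losses"], c["num_samples"])
        if in_critical_period(c["losses"]):
            w *= clp_boost
        weights.append(w)
    # Weighted sampling without replacement over client indices.
    chosen, pool = [], list(range(len(clients)))
    for _ in range(min(k, len(pool))):
        idx = random.choices(pool, weights=[weights[i] for i in pool])[0]
        chosen.append(clients[idx]["id"])
        pool.remove(idx)
    return chosen


if __name__ == "__main__":
    clients = [
        {"id": f"client-{i}",
         "num_samples": random.randint(100, 1000),
         "losses": [2.0 - 0.1 * t * random.random() for t in range(5)]}
        for i in range(10)
    ]
    print(select_clients(clients, k=3))
```

Under these assumptions, a client whose loss is still improving rapidly (i.e., within its CLP) is more likely to be selected, which captures the abstract's idea of concentrating participation where training progress is most sensitive.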
Acknowledgments
We are extremely grateful to the anonymous ECML reviewers for their constructive feedback, which we have incorporated into the manuscript. We thank Yan Peng, Chen Shen, Chaoqian Cheng, Kaiqiang Hu, Pengfei Li, Yiran Xiang, Wenhao Li, Chongyi Qiu, Yukun Cao, and Feifei Xu for their help and suggestions during the preparation of this paper.
Ethics declarations
Disclosure of Interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Du, H., Yang, Z. (2024). FedPrime: An Adaptive Critical Learning Periods Control Framework for Efficient Federated Learning in Heterogeneity Scenarios. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol. 14945. Springer, Cham. https://doi.org/10.1007/978-3-031-70362-1_8
DOI: https://doi.org/10.1007/978-3-031-70362-1_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-70361-4
Online ISBN: 978-3-031-70362-1
eBook Packages: Computer Science, Computer Science (R0)