FedPrime: An Adaptive Critical Learning Periods Control Framework for Efficient Federated Learning in Heterogeneity Scenarios

  • Conference paper
  • First Online:
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14945)

Abstract

Federated learning (FL) is an emerging distributed optimization paradigm for the artificial intelligence of things (AIoT) that learns from data samples distributed across many clients while preserving data privacy. Adaptive client selection can improve FL training efficiency, but it remains poorly understood, especially under the real-world data and system heterogeneity of clients. Most existing FL methods select clients at random in each round, implicitly assuming that all clients are equally important throughout training and share the same learning ability. This assumption has been shown to be invalid because it ignores system heterogeneity and the distinct critical learning periods (CLP) of individual clients. In this paper, we propose FedPrime, an adaptive critical learning periods control framework that augments client selection in FL with fine-grained client utility. We first perform fine-grained CLP detection for each heterogeneous client, and then design an adaptive CLP control mechanism that fully exploits client selection during FL training. Extensive experiments on various models and datasets validate that the FedPrime framework improves model accuracy by up to 69.28% over state-of-the-art methods while maintaining good generalization performance.
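
The two components named in the abstract, fine-grained CLP detection per client and an adaptive control rule that widens client selection while a critical learning period is underway, can be pictured with a minimal Python sketch. Everything below is an illustrative assumption rather than the paper's method: the detection signal (a relative change in aggregated update magnitude, in the spirit of earlier CLP work), the utility values, the threshold, and all function names are hypothetical.

import random

# Hypothetical CLP signal: relative change in the aggregated update
# magnitude between consecutive rounds. A large jump is treated as
# evidence that the model is in a critical learning period.
def clp_score(prev_norm: float, curr_norm: float) -> float:
    if prev_norm == 0.0:
        return 0.0
    return abs(curr_norm - prev_norm) / prev_norm

# Hypothetical control rule: during a detected CLP, enlarge the cohort
# and rank clients by a fine-grained utility; outside it, fall back to
# a smaller random draw to save client resources.
def select_clients(clients: list, utilities: dict,
                   in_clp: bool, base_k: int) -> list:
    if in_clp:
        k = min(len(clients), 2 * base_k)
        return sorted(clients, key=lambda c: utilities[c], reverse=True)[:k]
    return random.sample(clients, min(base_k, len(clients)))

# Example round: a 60% jump in update magnitude crosses the (illustrative)
# threshold, so the server doubles participation and picks by utility.
clients = [f"client_{i}" for i in range(10)]
utilities = {c: random.random() for c in clients}
in_clp = clp_score(prev_norm=1.0, curr_norm=1.6) > 0.25
print(select_clients(clients, utilities, in_clp, base_k=3))

The intent of the sketch matches the abstract's stated goal: the participation budget is concentrated in rounds where training is most plastic and relaxed to cheap random sampling elsewhere; the actual detection statistic and control mechanism are those defined in the paper itself.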

Acknowledgments

We are extremely grateful to the anonymous ECML PKDD reviewers for their constructive feedback, which we have incorporated into the manuscript. We thank Yan Peng, Chen Shen, Chaoqian Cheng, Kaiqiang Hu, Pengfei Li, Yiran Xiang, Wenhao Li, Chongyi Qiu, Yukun Cao, and Feifei Xu for their help and suggestions during the preparation of this paper.

Author information

Corresponding author

Correspondence to Haizhou Du.

Ethics declarations

Disclosure of Interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Du, H., Yang, Z. (2024). FedPrime: An Adaptive Critical Learning Periods Control Framework for Efficient Federated Learning in Heterogeneity Scenarios. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol. 14945. Springer, Cham. https://doi.org/10.1007/978-3-031-70362-1_8

  • DOI: https://doi.org/10.1007/978-3-031-70362-1_8

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70361-4

  • Online ISBN: 978-3-031-70362-1

  • eBook Packages: Computer Science; Computer Science (R0)
