
FGFL: Fine-Grained Federated Learning Based on Neural Architecture Search for Heterogeneous Clients

  • Conference paper

Advances in Swarm Intelligence (ICSI 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14789)


Abstract

Federated learning (FL) has attracted tremendous attention across machine learning tasks. In large-scale deployments, client heterogeneity is unavoidable and constrains model design, training efficiency, and accuracy. This paper introduces a fine-grained federated learning (FGFL) method to tackle resource heterogeneity. FGFL builds on a configurable architecture search space that provides a rich set of architectures for diverse devices. It first applies a greedy coarse-grained architecture selection method, together with a local training optimization strategy, so that most architectures are readily deployable; it then conducts a fine-grained multi-objective evolutionary search to automatically identify optimal architectures for heterogeneous devices. Experimental results demonstrate that FGFL achieves superior performance while reducing computational cost.
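As a concrete illustration of the search step the abstract describes, the sketch below filters a population drawn from a configurable architecture space down to the candidates that are Pareto-optimal in accuracy versus FLOPs, the core filter inside an NSGA-II-style multi-objective evolutionary search. This is a minimal, hypothetical sketch: the two-dimensional depth/width space, the surrogate evaluate(), and all helper names are illustrative assumptions, not FGFL's actual implementation.

import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Arch:
    depth: int    # number of blocks (illustrative search dimension)
    width: float  # channel-width multiplier (illustrative search dimension)

def evaluate(arch: Arch) -> tuple[float, float]:
    """Return (accuracy, flops) from a deterministic surrogate.
    In a real search this would come from validating the candidate."""
    acc = 1.0 - 0.5 * math.exp(-(0.1 * arch.depth + 0.5 * arch.width))
    flops = arch.depth * arch.width * 1e6
    return acc, flops

def dominates(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """True if a Pareto-dominates b: accuracy (maximized) no worse,
    FLOPs (minimized) no worse, and strictly better in at least one."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(archs: list[Arch]) -> list[Arch]:
    """Keep only the non-dominated architectures in the population."""
    scores = {a: evaluate(a) for a in archs}
    return [a for a in archs
            if not any(dominates(scores[b], scores[a]) for b in archs if b != a)]

if __name__ == "__main__":
    # Enumerate a tiny configurable space; a heterogeneous client would
    # then pick the front member that fits its own FLOPs budget.
    population = [Arch(d, w) for d in range(4, 13, 2) for w in (0.25, 0.5, 1.0)]
    for arch in sorted(pareto_front(population), key=lambda a: evaluate(a)[1]):
        acc, flops = evaluate(arch)
        print(f"depth={arch.depth:2d} width={arch.width:.2f} "
              f"acc~{acc:.3f} flops~{flops:.2e}")

A full evolutionary search would mutate and recombine candidates over generations rather than enumerating a fixed grid, but the non-dominated filtering shown here is the criterion by which optimal architectures for heterogeneous devices would be retained.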

This work is supported in part by the Natural Science Foundation of Guangdong Province, China, under Grants 2024A1515010697 and 2020A1515011491, in part by the Science Research Project of Guangzhou University under Grant YG2020008, and in part by the Open Project Program of Key Laboratory of Intelligent Optimization and Information, Minnan Normal University, under Grant ZNYH202401.




Author information


Corresponding author

Correspondence to Yu Wu.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Ying, W., Wang, C., Wu, Y., Luo, X., Wen, Z., Zhang, H. (2024). FGFL: Fine-Grained Federated Learning Based on Neural Architecture Search for Heterogeneous Clients. In: Tan, Y., Shi, Y. (eds) Advances in Swarm Intelligence. ICSI 2024. Lecture Notes in Computer Science, vol 14789. Springer, Singapore. https://doi.org/10.1007/978-981-97-7184-4_9


  • DOI: https://doi.org/10.1007/978-981-97-7184-4_9

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-7183-7

  • Online ISBN: 978-981-97-7184-4

  • eBook Packages: Computer Science, Computer Science (R0)
