Energy-efficient client selection in federated learning with heterogeneous data on edge

Peer-to-Peer Networking and Applications

Abstract

Due to the large-scale deployment of machine learning applications, a vast amount of data is increasingly generated by mobile and edge devices. Federated Learning (FL) has recently attracted considerable attention from both industry and academia as a way to exploit such data. It is a distributed optimisation paradigm in which a central server coordinates learning from heterogeneous data distributed across a wide range of clients. Typical participating clients in FL are energy-constrained mobile devices, so energy efficiency is a key challenge. One approach to reducing energy cost is to choose only a small number of suitable clients to perform the training tasks. However, the commonly used random selection method tends to require more participants than needed. In this paper, we therefore propose FedNorm, a client selection framework that identifies the clients providing the most significant information in each round of FL training. Based on FedNorm, we further propose a more energy-efficient variant that performs client selection only once every several rounds. Through extensive experiments with a PyTorch implementation and FEMNIST-based datasets, the evaluation results demonstrate that the proposed algorithms outperform existing client selection methods in FL under various heterogeneous data distributions, and reduce energy cost by decreasing the number of participating clients.
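The abstract's idea of significance-based client selection, with a periodic re-selection variant, can be sketched as follows. This is a minimal illustration, not the authors' exact FedNorm algorithm: the L2-norm scoring of client updates, the synthetic non-IID data, the learning rate, and all function names are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_model, client_data):
    # One gradient step on the client's local data (illustrative).
    X, y = client_data
    grad = X.T @ (X @ global_model - y) / len(y)
    return -0.1 * grad  # model delta sent back to the server

def select_clients(updates, k):
    # Hypothetical norm-based scoring: clients whose updates have the
    # largest L2 norm are treated as carrying the most information.
    scores = {cid: np.linalg.norm(u) for cid, u in updates.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Synthetic non-IID setting: each client draws features at a different scale.
dim, n_clients = 5, 10
clients = {c: (rng.normal(scale=1 + 0.1 * c, size=(20, dim)),
               rng.normal(size=20)) for c in range(n_clients)}

model = np.zeros(dim)
selected = None
for rnd in range(6):
    # Energy-efficient variant: re-run selection only every 3 rounds,
    # so most rounds skip collecting updates from non-selected clients.
    if rnd % 3 == 0 or selected is None:
        updates = {c: local_update(model, clients[c]) for c in clients}
        selected = select_clients(updates, k=3)
    # Aggregate only the selected clients' deltas (FedAvg-style mean).
    deltas = [local_update(model, clients[c]) for c in selected]
    model += np.mean(deltas, axis=0)
```

In this sketch the energy saving comes from two places: only `k` of the `n_clients` devices train in most rounds, and full update collection for scoring happens only at re-selection rounds rather than every round.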


Author information

Corresponding author

Correspondence to Jianxin Zhao.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection: Special Issue on Green Edge Computing

Guest Editors: Zhiyong Yu, Liming Chen, Sumi Helal, and Zhiwen Yu


Cite this article

Zhao, J., Feng, Y., Chang, X. et al. Energy-efficient client selection in federated learning with heterogeneous data on edge. Peer-to-Peer Netw. Appl. 15, 1139–1151 (2022). https://doi.org/10.1007/s12083-021-01254-8
