Abstract
With the rapid growth of edge devices such as smartphones and wearables, and the expansion of mobile networks, effectively utilizing the large amounts of private data stored on these devices has become a challenging problem. Federated learning has emerged as a promising solution: it allows multiple devices to train machine learning models collaboratively while keeping the data decentralized and compliant with local privacy policies. However, heterogeneity in data distributions, model structures, network environments, and device capabilities makes such collaboration difficult to realize. In this paper, we review heterogeneous federated learning (HFL) approaches and classify them into data heterogeneity, device heterogeneity, communication heterogeneity, and model heterogeneity. We summarize the advantages and disadvantages of each approach and discuss solutions to their limitations in detail. We also introduce commonly used methods for evaluating the performance of federated learning and suggest future directions for HFL frameworks.
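As a minimal illustration of the collaborative training scheme the abstract describes, the sketch below implements one round of FedAvg-style training: each client takes gradient steps on its private data, and the server averages the returned models weighted by client sample counts. The linear least-squares model and the helper names (`local_update`, `fedavg`) are illustrative assumptions for this sketch, not definitions from the paper.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """Local training on one client's private data: plain gradient
    steps on a linear least-squares model (a stand-in for whatever
    model each device actually trains)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg(global_w, client_data, lr=0.1):
    """One FedAvg round: every client trains locally on its own data,
    then the server averages the returned weights, with each client
    weighted by its number of samples. Raw data never leaves a client;
    only model parameters are exchanged."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y, lr))
        sizes.append(len(y))
    total = float(sum(sizes))
    return sum(w * (n / total) for w, n in zip(updates, sizes))
```

Note that this sketch assumes every client participates in every round with identical model architectures and reliable links; relaxing exactly those assumptions is what the data, device, communication, and model heterogeneity approaches surveyed in this paper address.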
This work is supported in part by the National Natural Science Foundation of China under Grant 62372125, in part by the Guangdong Natural Science Funds for Distinguished Young Scholar under Grant 2023B1515020041.
Appendix
The four tables above summarize solutions to data heterogeneity, model heterogeneity, communication heterogeneity, and device heterogeneity in federated learning, and analyze the main contributions and limitations of each approach. We hope these discussions support further high-quality work in the heterogeneous federated learning community.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Lin, Y., Long, Y., Zhou, Z., Pang, Y., Yang, C. (2024). Towards Heterogeneous Federated Learning: Analysis, Solutions, and Future Directions. In: Vaidya, J., Gabbouj, M., Li, J. (eds) Artificial Intelligence Security and Privacy. AIS&P 2023. Lecture Notes in Computer Science, vol 14509. Springer, Singapore. https://doi.org/10.1007/978-981-99-9785-5_13
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-9784-8
Online ISBN: 978-981-99-9785-5
eBook Packages: Computer Science (R0)