Abstract
Personalized Federated Learning (pFL) is among the most popular tasks in distributed deep learning: it pools mutual knowledge across devices while enabling device-specific model personalization. However, the effectiveness of pFL is severely impeded by fairness challenges and significant communication overhead, since devices holding essential samples must devote extensive resources to model training. To address these issues, we introduce the Fair and Communication-Efficient Personalized Federated Learning (FCE-PFL) framework, which harmonizes performance and device fairness while containing communication costs. Building on Dempster-Shafer theory, FCE-PFL employs assistance and contribution metrics to quantify the auxiliary information a client receives and provides. FCE-PFL then balances and adjusts the involvement of devices in each training iteration, facilitating fair training by constraining the maximum resource consumption per device and reducing the overall communication overhead. Rigorous experiments on the CIFAR-10 and CIFAR-100 datasets show that our framework surpasses existing methods in accuracy, demonstrating its potential as a fair and efficient pFL solution.
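The abstract's two ingredients can be sketched concretely: Dempster's rule of combination (the standard evidence-fusion operator from Dempster-Shafer theory) for fusing per-client evidence, and a greedy per-round client selection that caps each device's participation. This is a minimal illustrative sketch, not the paper's actual algorithm; the score values, the participation cap, and the helper names `dempster_combine` and `select_clients` are all assumptions introduced here for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps frozenset focal elements to masses summing
    to 1. Intersecting focal elements reinforce each other; mass on
    disjoint pairs is the conflict, which is normalized away.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

def select_clients(scores, participation, k, cap):
    """Pick the k highest-scoring clients still under the participation
    cap, and record their participation. Capping bounds the maximum
    resource consumption of any single device across rounds."""
    eligible = [c for c in scores if participation[c] < cap]
    chosen = sorted(eligible, key=lambda c: scores[c], reverse=True)[:k]
    for c in chosen:
        participation[c] += 1
    return chosen

if __name__ == "__main__":
    # Fuse two pieces of evidence over the frame {A, B}.
    m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
    m2 = {frozenset({"A"}): 0.5, frozenset({"B"}): 0.3,
          frozenset({"A", "B"}): 0.2}
    print(dempster_combine(m1, m2))

    # Two selection rounds with a cap of one participation per device.
    scores = {"a": 0.9, "b": 0.5, "c": 0.8}
    counts = {c: 0 for c in scores}
    print(select_clients(scores, counts, k=2, cap=1))  # a and c
    print(select_clients(scores, counts, k=2, cap=1))  # only b left
```

In this toy run, capped selection forces the lower-scoring client `b` into the second round, which is the fairness mechanism the abstract describes: no device can be drafted for every iteration just because its data is valuable.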
Acknowledgements
This research work was partly supported by the National Natural Science Foundation of China under grants No. 62372085 and 61802050, and by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (grant No. AP19579354).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zheng, Y., Zheng, X., Wang, T., Mu, C., Zhakiyev, N. (2025). Fair and Communication-Efficient Personalized Federated Learning. In: Cai, Z., Takabi, D., Guo, S., Zou, Y. (eds) Wireless Artificial Intelligent Computing Systems and Applications. WASA 2024. Lecture Notes in Computer Science, vol 14998. Springer, Cham. https://doi.org/10.1007/978-3-031-71467-2_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-71466-5
Online ISBN: 978-3-031-71467-2
eBook Packages: Computer Science, Computer Science (R0)