Abstract
Federated learning is a distributed training paradigm in which dispersed clients, coordinated by a central server, jointly construct a global model from multi-party data while preserving privacy. In practical applications, however, the data distributions across clients are highly skewed, causing the optimization directions of the client models to diverge; the resulting model bias reduces the accuracy of the global model. Existing methods either compute and transmit substantial auxiliary information to correct the clients' optimization directions, or only coarsely constrain client-model deviation end-to-end, ignoring targeted treatment of the model's internal structure, so their improvements are limited. To address these problems, we propose FedECCR, a federated optimization algorithm based on encoding contrast and classification rectification. The algorithm divides the model into an encoder and a classifier, applying prototype-based contrastive training to the encoder and unbiased classification correction to the classifier. This notably improves the accuracy of the global model while maintaining low communication costs. Experiments on multiple datasets show that FedECCR improves global-model classification accuracy by approximately 1% to 6% over FedAvg, FedProx, and MOON.
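The abstract describes training the encoder contrastively against class prototypes. The exact loss is not given here, so the following is only a hypothetical sketch of what prototype-based contrastive training of an encoder typically looks like: each embedding is pulled toward its own class prototype and pushed away from the others via an InfoNCE-style cross-entropy. The function name, the temperature `tau`, and its default are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def prototype_contrastive_loss(z, prototypes, labels, tau=0.5):
    """InfoNCE-style loss pulling each embedding toward its class prototype.

    z          : (batch, dim) encoder outputs
    prototypes : (num_classes, dim) per-class prototype vectors
    labels     : (batch,) integer class labels
    tau        : temperature (illustrative default)
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)            # unit-normalize embeddings
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = z @ p.T / tau                                        # cosine similarity to every prototype
    sims -= sims.max(axis=1, keepdims=True)                     # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    # cross-entropy against the prototype of each sample's true class
    return -np.log(probs[np.arange(len(labels)), labels]).mean()
```

In a federated setting such as the one sketched in the abstract, the prototypes would plausibly be class-mean embeddings aggregated by the server across clients, so that every client contrasts against the same global anchors; that aggregation step is an assumption here, not a detail stated in the abstract.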
References
Wang, X., Gao, H., Huang, K.: Artificial intelligence in collaborative computing. Mobile Netw. Appl. 26, 2389–2391 (2021). https://doi.org/10.1007/s11036-021-01829-y
Yang, J., Zheng, J., Zhang, Z., Chen, Q., Wong, D.S., Li, Y.: Security of federated learning for cloud-edge intelligence collaborative computing. Int. J. Intell. Syst., 9290–9308 (2022). https://doi.org/10.1002/int.22992
McMahan, H.B., Moore, E., Ramage, D., Hampson, S., Arcas, B.: Communication-efficient learning of deep networks from decentralized data. arXiv preprint (2016)
Hard, A., et al.: Federated learning for mobile keyboard prediction. arXiv preprint (2018)
Geyer, R.C., Klein, T., Nabi, M.: Differentially private federated learning: a client level perspective. arXiv preprint (2017)
Tan, Y., Long, G., Liu, L., Zhou, T., Jiang, J.: FedProto: federated prototype learning over heterogeneous devices. arXiv preprint (2021)
Reynolds, D.A.: Gaussian mixture models (2009)
Yan, Y., Zhu, L.: A simple data augmentation for feature distribution skewed federated learning (2023)
Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., Chandra, V.: Federated learning with non-IID data. arXiv preprint (2018)
Tuor, T., Wang, S., Ko, B., Liu, C., Leung, K.K.: Overcoming noisy and irrelevant data in federated learning. arXiv preprint (2020)
Yoshida, N., Nishio, T., Morikura, M., Yamamoto, K., Yonetani, R.: Hybrid-FL: cooperative learning mechanism using non-IID data in wireless networks (2019)
Wicaksana, J., et al.: FedMix: mixed supervised federated learning for medical image segmentation (2022)
Seol, M., Kim, T.: Performance enhancement in federated learning by reducing class imbalance of non-IID data. Sensors, 1152 (2023)
Shin, M., Hwang, C., Kim, J., Park, J., Bennis, M., Kim, S.-L.: XOR mixup: privacy-preserving data augmentation for one-shot federated learning. arXiv preprint (2020)
Jeong, E., Oh, S., Park, J., Kim, H., Bennis, M., Kim, S.-L.: Multi-hop federated private data augmentation with sample compression. arXiv preprint (2019)
Karimireddy, S., Kale, S., Mohri, M., Reddi, S.J., Stich, S.U., Suresh, A.: SCAFFOLD: stochastic controlled averaging for federated learning. In: International Conference on Machine Learning (2020)
Gao, L., Fu, H., Li, L., Chen, Y., Xu, M., Xu, C.-Z.: FedDC: federated learning with non-IID data via local drift decoupling and correction
Liu, Y., Sun, Y., Ding, Z., Shen, L., Liu, B., Tao, D.: Enhance local consistency in federated learning: a multi-step inertial momentum approach (2023)
Li, B., Schmidt, M.N., Alstrøm, T.S., Stich, S.U.: Partial variance reduction improves non-convex federated learning on heterogeneous data (2022)
Li, T., Sahu, A., Zaheer, M., Sanjabi, M., Talwalkar, A., Smith, V.: Federated optimization in heterogeneous networks. arXiv preprint (2018)
Shoham, N., et al.: Overcoming forgetting in federated learning on non-IID data. arXiv preprint (2019)
Yao, X., Sun, L.: Continual local training for better initialization of federated models. In: 2020 IEEE International Conference on Image Processing (ICIP) (2020). https://doi.org/10.1109/icip40778.2020.9190968
Li, H., Krishnan, A., Wu, J., Kolouri, S., Pilly, P.K., Braverman, V.: Lifelong learning with sketched structural regularization. arXiv preprint (2021)
Kirkpatrick, J., et al.: Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. 114, 3521–3526 (2017). https://doi.org/10.1073/pnas.1611835114
Li, Q., He, B., Song, D.: Model-contrastive federated learning. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021). https://doi.org/10.1109/cvpr46437.2021.01057
Chen, T., Kornblith, S., Norouzi, M., Hinton, G.E.: A simple framework for contrastive learning of visual representations. arXiv preprint (2020)
Vanschoren, J.: Meta-learning: a survey. arXiv preprint (2018)
Zhang, Y., Yang, Q.: An overview of multi-task learning. Natl. Sci. Rev., 30–43 (2018). https://doi.org/10.1093/nsr/nwx105
Yang, L., Huang, J., Lin, W., Cao, J.: Personalized federated learning on non-IID data via group-based meta-learning. ACM Trans. Knowl. Discov. Data, 1–20 (2023). https://doi.org/10.1145/3558005
He, C., Ceyani, E., Balasubramanian, K., Annavaram, M., Avestimehr, A.S.: SpreadGNN: serverless multi-task federated learning for graph neural networks. arXiv preprint (2021)
Mu, X., et al.: FedProc: prototypical contrastive federated learning on non-IID data. arXiv preprint (2021)
Miller, J.W., Harrison, M.T.: Mixture models with a prior on the number of components. arXiv preprint (2015)
Hsu, H., Qi, H., Brown, M.: Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint (2019)
Acknowledgement
This work is supported by the Key Research and Development Program of Zhejiang Province under Grant No. 2023C03194, the National Natural Science Foundation of China under Grant No. 62072146, and the Natural Science Foundation of Zhejiang Province under Grant No. LQ23F020015.
Copyright information
© 2024 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Zeng, Y. et al. (2024). FedECCR: Federated Learning Method with Encoding Comparison and Classification Rectification. In: Gao, H., Wang, X., Voros, N. (eds) Collaborative Computing: Networking, Applications and Worksharing. CollaborateCom 2023. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 563. Springer, Cham. https://doi.org/10.1007/978-3-031-54531-3_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-54530-6
Online ISBN: 978-3-031-54531-3