Multi-index federated aggregation algorithm based on trusted verification

  • Regular Paper
  • Published:
CCF Transactions on High Performance Computing

Abstract

Motivated by the modern phenomenon of distributed data collected at scale by edge devices, federated learning can draw on large amounts of training data from diverse users for better representation and generalization. To improve flexibility and scalability, a new federated optimization algorithm, the multi-index federated aggregation algorithm based on trusted verification (TVFedmul), is proposed. It overcomes the problems caused by the original aggregation algorithm, which uses only a single index, the amount of data each client holds, to determine that client's aggregation weight. The improved algorithm instead measures multiple indexes, reflecting each client's overall capability more thoroughly and allowing a comprehensive judgment. Further, to support customized federated learning, a hyperparameter α is introduced that controls the relative importance of the indexes. Finally, extensive experiments show that the improved algorithm converges faster and reaches an accuracy of 94.59%, which is 2.53 percentage points higher than FedAvg (92.06%).
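
For intuition, the following minimal sketch, an illustrative assumption rather than the paper's actual method, contrasts FedAvg's single-index weighting (data quantity alone) with a multi-index weighting in which a hyperparameter α blends the data-quantity index with a second, hypothetical per-client index (a placeholder "trust score" here); the concrete indexes, the trusted-verification procedure, and the exact TVFedmul formula are those defined in the full paper.

```python
import numpy as np

def fedavg_weights(num_samples):
    """Baseline FedAvg weighting: a client's aggregation weight is
    proportional to the amount of data it holds (the single index the
    paper argues is insufficient on its own)."""
    n = np.asarray(num_samples, dtype=float)
    return n / n.sum()

def multi_index_weights(num_samples, trust_scores, alpha=0.5):
    """Hypothetical multi-index weighting: blend the normalized
    data-quantity index with a second normalized per-client index
    (a placeholder trust score) via a hyperparameter alpha that sets
    the relative importance of the two indexes. Illustrative only;
    not the exact TVFedmul formula."""
    n = np.asarray(num_samples, dtype=float)
    t = np.asarray(trust_scores, dtype=float)
    w = alpha * (n / n.sum()) + (1.0 - alpha) * (t / t.sum())
    return w / w.sum()

def aggregate(client_params, weights):
    """Server-side weighted averaging: each client contributes a list of
    per-layer numpy arrays, combined with the weights computed above."""
    return [
        sum(w * params[i] for w, params in zip(weights, client_params))
        for i in range(len(client_params[0]))
    ]

if __name__ == "__main__":
    # Three clients with different data volumes and (hypothetical) trust scores.
    sizes, trust = [1000, 200, 50], [0.6, 0.9, 0.3]
    clients = [[np.random.randn(4, 4), np.random.randn(4)] for _ in sizes]
    print("FedAvg weights:     ", fedavg_weights(sizes))
    print("Multi-index weights:", multi_index_weights(sizes, trust, alpha=0.6))
    global_params = aggregate(clients, multi_index_weights(sizes, trust, alpha=0.6))
```

Setting α = 1 recovers pure data-quantity weighting, while smaller values shift weight toward the second index, mirroring the abstract's statement that α determines the importance of the indexes.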

Data availability

Data are openly available in a public repository.

References

  • Acar, D.A.E., Zhao, Y., Navarro, R.M., et al.: Federated learning based on dynamic regularization. Int. Confer. Learn. Represent. (2021)

  • Ahmad, A., Luo, W., Robles-Kelly, A.: Robust federated learning under statistical heterogeneity via hessian-weighted aggregation. Mach. Learn. 112(2), 633–654 (2023)

  • Bao, Z., Bai, W., Zhang, W.: Multi-index federated aggregation algorithm based on trusted verification. The 22nd International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT 2021). p. 412–420 (2021)

  • Ben Mansour, A., et al.: Federated learning aggregation: new robust algorithms with guarantees. In: 21st IEEE International Conference on Machine Learning and Applications (ICMLA) (2022)

  • Chen, M., Mao, B.C., Ma, T.Y.: A staleness-aware asynchronous federated learning algorithm with non-IID data. Fut. Gener. Comput. Syst. 120, 1–12 (2021)

  • Chen, Z., Zhou, C., Zhou, Y.: A hierarchical federated learning model with adaptive model parameter aggregation. Comput. Sci. Inf. Syst. 20(3), 1037–1060 (2023)

  • Dai, W., Zhou, Y., Dong, N., et al.: Toward understanding the impact of staleness in distributed machine learning. Int. Confer. Learn. Represent. (2019)

  • Esteves, L., et al.: Towards mobile federated learning with unreliable participants and selective aggregation. Appl. Sci. Basel 13(5), 3135 (2023)

  • Fallah, A., Mokhtari, A., Ozdaglar, A.: Personalized federated learning: a meta-learning approach. ArXiv (2020)

  • Guendouzi, B.S., et al.: A systematic review of federated learning: Challenges, aggregation methods, and development tools. J. Netw. Comput. Appl. 220 (2023)

  • Hsieh, K., Phanishayee, A., Mutlu, O., et al.: The non-IID data quagmire of decentralized machine learning. International Conference on Machine Learning 4337–4348 (2020)

  • Hamer, J., Mohri, M., Suresh, A.T.: FedBoost: Communication-efficient algorithms for federated learning. Int. Confer. Mach. Learn. 3931–3941 (2020)

  • Karimireddy, S. P., Kale, S., Mohri, M., et al.: SCAFFOLD: Stochastic controlled averaging for on-device federated learning. ArXiv (2019)

  • Li, L., Xu, W., Chen, T., et al.: RSA: byzantine-robust stochastic aggregation methods for distributed learning from heterogeneous datasets. Proc. AAAI Confer. Artif. Intellig. 33, 1544–1551 (2019)

  • Li, X., Huang, K., Yang, W., et al.: On the convergence of FedAvg on non-IID data. Arxiv (2020a)

  • Li, T., Sanjabi, M., Smith, V.: Fair resource allocation in federated learning. ArXiv (2020b)

  • Li, T., et al.: Ditto: fair and robust federated learning through personalization. In: Meila, M., Zhang, T. (eds.) International Conference on Machine Learning (ICML), vol. 139 (2021)

  • Li, S., et al.: Learning to collaborate in decentralized learning of personalized models. In: IEEE Conference on Computer Vision and Pattern Recognition. Comp Soc. p. 9756–9765 (2022)

  • Lyu, L., Yu, J., Nandakumar, K., et al.: Towards fair and privacy-preserving federated deep models. IEEE Trans. Parallel Distrib. Syst. 2524–2541 (2020)

  • Mao, Y.L., et al.: Romoa: Robust model aggregation for the resistance of federated learning to model poisoning attacks. In: Bertino, E., Shulman, H., Waidner, M. (eds.) Computer Security – ESORICS 2021: 26th European Symposium on Research in Computer Security (ESORICS)/16th Data Privacy Management International Workshop (DPM)/5th International Workshop on Cryptocurrencies and Blockchain Technology (CBT). p. 476–496 (2021)

  • McMahan, H.B., Moore, E., Ramage, D., et al.: Communication-efficient learning of deep networks from decentralized data. Int. Confer. Artif. Intellig. Statist. 54, 1273–1282 (2017)

  • Nishio, T., Yonetani, R.: Client selection for federated learning with heterogeneous resources in mobile edge. IEEE Int. Confer. Commun. 1–7 (2019)

  • Palihawadana, C., et al.: FedSim: similarity guided model aggregation for federated learning. Neurocomputing 483, 432–445 (2022)

  • Pillutla, K., Kakade, S.M., Harchaoui, Z.: Robust aggregation for federated learning. IEEE Trans. Signal Process. 70, 1142–1154 (2022)

  • Qi, P., et al.: Model aggregation techniques in federated learning: a comprehensive survey. Fut. Gener. Comput. Syst. Int. J. Esci. 150, 272–293 (2024)

  • Reisizadeh, A., Mokhtari, A., Hassani, H., et al.: FedPAQ: a communication-efficient federated learning method with periodic averaging and quantization. Int. Confer. Artific. Intellig. Statist. 108, 2021–2030 (2020)

  • Rodríguez-Barroso, N., et al.: Survey on federated learning threats: concepts, taxonomy on attacks and defences, experimental study and challenges. Inform. Fus. 90, 148–173 (2023)

  • Sun, G., et al.: Data poisoning attacks on federated machine learning. IEEE Internet Things J. 9(13), 11365–11375 (2022)

  • Talukder, Z., Islam, M.A.: Computationally efficient auto-weighted aggregation for heterogeneous federated learning. In: Ardagna, C.A., et al. (eds.) 2022 IEEE International Conference on Edge Computing and Communications (IEEE EDGE). p. 12–22 (2022)

  • Warnat-Herresthal, S., Schultze, H., Shastry, K.L., et al.: Swarm Learning for decentralized and confidential clinical machine learning. Nature 594(7862) (2021)

  • Wu, D.Y., et al.: Towards efficient secure aggregation for model update in federated learning. In: 2020 IEEE Global Communications Conference (GLOBECOM) (2020)

  • Xie, C., Koyejo, S., Gupta, I.: Asynchronous federated optimization. ArXiv (2019)

  • Zhang, S. X., Choromanska, A., LeCun, Y.: Deep learning with elastic averaging SGD. NIPS, 28 (2015)

  • Zhang, M., Sapra, K., Fidler, S., Yeung, S., Alvarez, J.M.: Personalized federated learning with first order model optimization. In: International Conference on Learning Representations (2021)

Download references

Author information

Corresponding author

Correspondence to Zhang Wenbo.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhenshan, B., Mengyuan, W., Bai, W. et al. Multi-index federated aggregation algorithm based on trusted verification. CCF Trans. HPC 6, 632–645 (2024). https://doi.org/10.1007/s42514-024-00199-7

Download citation

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s42514-024-00199-7

Keywords