
MSSA-FL: High-Performance Multi-stage Semi-asynchronous Federated Learning with Non-IID Data

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13369)

Abstract

Federated Learning (FL) is an emerging distributed machine learning framework that allows edge devices to collaboratively train a shared global model without transmitting their sensitive data to centralized servers. However, applying FL in practical scenarios is challenging because the data across edge devices are usually not independent and identically distributed (Non-IID), which introduces bias into the global model. To address this data heterogeneity issue, we propose a novel Multi-Stage Semi-Asynchronous Federated Learning (MSSA-FL) framework. MSSA-FL improves convergence accuracy by having each local model complete multi-stage training within its group, guided by a combination module. To improve training efficiency, MSSA-FL adopts a semi-asynchronous update method. In addition, the proposed model assignment strategy and model aggregation method further boost the performance of MSSA-FL. Experiments on several public datasets show that MSSA-FL achieves higher accuracy and faster convergence than the comparison algorithms.
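To make the semi-asynchronous update idea in the abstract more concrete, the following is a minimal, self-contained sketch of how a server might aggregate only the client updates that arrive before a round deadline, weighting each update by its sample count and down-weighting stale updates. This is not the paper's actual MSSA-FL algorithm; the function names, the staleness rule, and the alpha parameter are illustrative assumptions rather than details taken from the paper.

    # Minimal sketch of a semi-asynchronous federated aggregation step.
    # NOT the MSSA-FL algorithm from the paper; it only illustrates the general
    # idea of aggregating whichever client updates arrive before a round
    # deadline, down-weighting stale updates. All names, the staleness rule,
    # and alpha are illustrative assumptions.
    import numpy as np

    def staleness_weight(staleness, alpha=0.6):
        # Assumed rule: updates computed against older global models count less.
        return alpha / (1.0 + staleness)

    def semi_async_round(global_model, arrived_updates):
        # arrived_updates: list of (client_model, num_samples, staleness) tuples
        # for the clients that reported before the round deadline.
        if not arrived_updates:
            return global_model  # nothing arrived in time; keep the current model
        weights = np.array(
            [n * staleness_weight(s) for _, n, s in arrived_updates], dtype=float
        )
        weights /= weights.sum()
        new_model = {k: np.zeros_like(v) for k, v in global_model.items()}
        for (client_model, _, _), w in zip(arrived_updates, weights):
            for k in new_model:
                new_model[k] += w * client_model[k]
        return new_model

    if __name__ == "__main__":
        # Toy usage: two clients report in time, one of them with stale weights.
        g = {"w": np.zeros(3)}
        updates = [({"w": np.ones(3)}, 100, 0), ({"w": 2 * np.ones(3)}, 50, 3)]
        print(semi_async_round(g, updates))

The sample-count weighting follows the usual FedAvg convention; the staleness discount is one common way to keep late, outdated updates from dominating the aggregate in an asynchronous setting.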

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. U19A2061 and the National Key Research and Development Program of China under Grant No. 2017YFC1502306.



Author information

Correspondence to Chenghao Ren or Xiang Li.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Wei, X., Hou, M., Ren, C., Li, X., Yue, H. (2022). MSSA-FL: High-Performance Multi-stage Semi-asynchronous Federated Learning with Non-IID Data. In: Memmi, G., Yang, B., Kong, L., Zhang, T., Qiu, M. (eds) Knowledge Science, Engineering and Management. KSEM 2022. Lecture Notes in Computer Science, vol. 13369. Springer, Cham. https://doi.org/10.1007/978-3-031-10986-7_14

  • DOI: https://doi.org/10.1007/978-3-031-10986-7_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10985-0

  • Online ISBN: 978-3-031-10986-7

  • eBook Packages: Computer Science, Computer Science (R0)
