Synchronous Federated Learning Latency Optimization Based on Model Splitting

  • Conference paper: Wireless Algorithms, Systems, and Applications (WASA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13473)

Abstract

Federated Learning (FL) is a distributed machine learning approach well suited to edge computing environments. In such environments, however, making full use of the computing resources on end devices and edge servers remains a difficult problem. In synchronous federated learning in particular, differences in computing resources among participants lead to extra time cost and wasted resources. In this paper, we aim to reduce both the time cost and the waste of computing resources through model splitting and task scheduling. We first establish a mathematical model and find that it cannot be solved directly. We then design the Federated Learning Offloading Acceleration (FLOA) algorithm to obtain a sub-optimal solution. FLOA first uses a Partition Points Selection method to reduce the size of the solution space, and then applies a task offloading method based on matching theory. Experiments and simulations show that, compared with three other computation schemes, our algorithm reduces the single-iteration time by 47%, 28%, and 14%, respectively.
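
The abstract only outlines the matching-based offloading step. The toy example below illustrates the general idea of matching-theory task offloading with a deferred-acceptance (Gale-Shapley-style) procedure between end devices and edge servers: each device proposes to servers in order of its estimated per-iteration latency for a split model, and each server keeps its fastest proposers up to a capacity limit. This is a minimal sketch under assumed latency and capacity models; the class names, latency formula, and capacity rule are hypothetical and are not the paper's FLOA algorithm or its Partition Points Selection method.

```python
# Illustrative sketch only: a toy deferred-acceptance matching between end
# devices and edge servers for offloading the server-side part of a split
# model. All names, the latency model, and the capacity rule are hypothetical
# placeholders, not the paper's FLOA formulation.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    local_flops: float   # assumed device compute speed (FLOP/s)
    uplink_mbps: float   # assumed uplink rate to any server (Mbit/s)

@dataclass
class Server:
    name: str
    flops: float         # assumed server compute speed (FLOP/s)
    capacity: int        # assumed max devices served per round

def est_latency(dev, srv, head_flops, tail_flops, cut_mb):
    """Rough per-iteration latency if `dev` runs the model head locally and
    offloads the tail (after the cut layer) to `srv`."""
    return (head_flops / dev.local_flops        # local computation
            + cut_mb * 8 / dev.uplink_mbps      # upload intermediate features
            + tail_flops / srv.flops)           # remote computation

def match_offloading(devices, servers, head_flops, tail_flops, cut_mb):
    """Deferred-acceptance matching: devices propose to servers in order of
    estimated latency; each server keeps its fastest proposers up to capacity."""
    prefs = {d.name: sorted(servers,
                            key=lambda s, d=d: est_latency(d, s, head_flops, tail_flops, cut_mb))
             for d in devices}
    next_choice = {d.name: 0 for d in devices}
    assigned = {s.name: [] for s in servers}
    free = list(devices)
    while free:
        d = free.pop()
        if next_choice[d.name] >= len(servers):
            continue                     # no server left: device stays fully local
        s = prefs[d.name][next_choice[d.name]]
        next_choice[d.name] += 1
        assigned[s.name].append(d)
        assigned[s.name].sort(key=lambda dd: est_latency(dd, s, head_flops, tail_flops, cut_mb))
        while len(assigned[s.name]) > s.capacity:
            free.append(assigned[s.name].pop())   # reject the slowest extra device
    return assigned

if __name__ == "__main__":
    devices = [Device("d1", 1e9, 20.0), Device("d2", 2e9, 10.0), Device("d3", 5e8, 50.0)]
    servers = [Server("s1", 2e10, 1), Server("s2", 1e10, 2)]
    print(match_offloading(devices, servers, head_flops=5e8, tail_flops=2e9, cut_mb=4.0))
```

In this sketch each device ends up attached to the server that minimizes its own estimated split-model latency subject to server capacity, which is the kind of stable, capacity-constrained assignment that matching-theory offloading methods aim for.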

This work is supported by the Major Science and Technology Projects in Anhui Province (202003a05020009).

Author information

Corresponding author

Correspondence to Lei Shi.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Fang, C., Shi, L., Shi, Y., Xu, J., Ding, X. (2022). Synchronous Federated Learning Latency Optimization Based on Model Splitting. In: Wang, L., Segal, M., Chen, J., Qiu, T. (eds) Wireless Algorithms, Systems, and Applications. WASA 2022. Lecture Notes in Computer Science, vol 13473. Springer, Cham. https://doi.org/10.1007/978-3-031-19211-1_41

  • DOI: https://doi.org/10.1007/978-3-031-19211-1_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19210-4

  • Online ISBN: 978-3-031-19211-1

  • eBook Packages: Computer Science, Computer Science (R0)
