Abstract
Federated learning in edge computing environments has great potential to facilitate the deployment of artificial intelligence at the edge of the network. However, because resources at the edge are limited, placing the complete Deep Neural Network (DNN) model on the edge for training may not be a good choice. In this paper, we study time optimization for asynchronous federated learning based on model partition: the DNN model is divided into two parts that are deployed separately on the device and the edge server for model training. First, we give a metric relating learning accuracy to iteration frequency, and we build a mathematical model based on it. Because the solution space of the mathematical model is too large to be solved directly, we propose an algorithm that minimizes the total time by dynamically adjusting the model partition point and the bandwidth allocation. Simulation results show that our algorithm reduces the total time by 32% to 60% compared with three other methods.
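The idea of jointly choosing a partition point and a bandwidth share can be illustrated with a minimal sketch. This is not the authors' algorithm (their method handles asynchrony and a much larger solution space); it is a hedged toy that exhaustively searches cut positions and discretized bandwidth shares to minimize one round's time for a single device/edge-server split. All layer costs, device/edge speeds, and bandwidth values below are illustrative assumptions, not figures from the paper.

```python
# Toy sketch (not the paper's algorithm): brute-force search over
# partition points and discretized bandwidth shares for the split
# that minimizes one training round's time. All numbers are assumed.

def round_time(cut, bw_share, device_flops, edge_flops,
               layer_flops, boundary_bits, total_bw):
    """Round time when layers [0, cut) run on the device and
    layers [cut, n) run on the edge server."""
    device_t = sum(layer_flops[:cut]) / device_flops
    edge_t = sum(layer_flops[cut:]) / edge_flops
    # Data crossing the link at the cut: forward activations up,
    # backward gradients down (assumed the same size, hence the 2x).
    comm_t = 2 * boundary_bits[cut] / (bw_share * total_bw)
    return device_t + comm_t + edge_t

def best_partition(layer_flops, layer_out_bits, input_bits=1.6e7,
                   device_flops=5e9, edge_flops=1e10,
                   total_bw=1e8, bw_steps=10):
    # boundary_bits[c] = bits that must cross the link if we cut
    # before layer c (c = 0 means raw input is sent to the edge).
    boundary_bits = [input_bits] + list(layer_out_bits)
    n = len(layer_flops)
    best = None
    for cut in range(n + 1):           # cut = 0: everything on edge
        for k in range(1, bw_steps + 1):
            share = k / bw_steps       # fraction of bandwidth granted
            t = round_time(cut, share, device_flops, edge_flops,
                           layer_flops, boundary_bits, total_bw)
            if best is None or t < best[0]:
                best = (t, cut, share)
    return best

if __name__ == "__main__":
    # Illustrative 5-layer model: FLOPs per layer, output size in bits.
    flops = [2e8, 4e8, 4e8, 2e8, 1e8]
    out_bits = [8e6, 6e6, 2e6, 1.5e6, 1e6]
    t, cut, share = best_partition(flops, out_bits)
    print(f"best cut={cut}, bandwidth share={share:.1f}, time={t:.3f}s")
```

In practice the search space grows with the number of devices and rounds, which is why the paper resorts to a dedicated algorithm rather than enumeration; the sketch only shows the trade-off being optimized: cutting earlier shifts compute to the fast edge but sends larger activations over the constrained link.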
This work is supported by the Major Science and Technology Projects in Anhui Province, No. 202003a05020009.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Xu, J., Shi, L., Shi, Y., Fang, C., Xu, J. (2022). An Asynchronous Federated Learning Optimization Scheme Based on Model Partition. In: Wang, L., Segal, M., Chen, J., Qiu, T. (eds) Wireless Algorithms, Systems, and Applications. WASA 2022. Lecture Notes in Computer Science, vol 13473. Springer, Cham. https://doi.org/10.1007/978-3-031-19211-1_31
Print ISBN: 978-3-031-19210-4
Online ISBN: 978-3-031-19211-1