
subMFL: Compatible subModel Generation for Federated Learning in Device Heterogeneous Environment

  • Conference paper
  • Euro-Par 2023: Parallel Processing Workshops (Euro-Par 2023)

Abstract

Federated Learning (FL) is commonly used in systems with distributed and heterogeneous devices that have access to varying amounts of data and diverse computing and storage capacities. The FL training process enables such devices to update the weights of a shared model locally using their local data, after which a trusted central server combines all of those models to generate a global model. In this way, a global model is produced while the data remains local to the devices, preserving privacy. However, training large models such as Deep Neural Networks (DNNs) on resource-constrained devices can take a prohibitively long time and consume a large amount of energy. In the current process, low-capacity devices are excluded from training, even though they might have access to unseen data. To overcome this challenge, we propose a model compression approach that enables heterogeneous devices with varying computing capacities to participate in the FL process. In our approach, the server shares a dense model with all devices to train it; afterwards, the trained model is gradually compressed to obtain submodels with varying levels of sparsity, to be used as suitable initial global models for resource-constrained devices that were not capable of training the first dense model. This increases the participation rate of resource-constrained devices while preserving the weights transferred from the previous round of training. Our validation experiments show that, despite reaching about 50% global sparsity, the generated submodels maintain their accuracy and can be shared to increase participation by around 50%.
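The approach described in the abstract combines two standard building blocks: FedAvg-style weight averaging on the server, followed by gradual magnitude pruning of the trained global model to derive sparser submodels for weaker devices. The snippet below is a minimal NumPy sketch of how these pieces could fit together; the function names, the 10%-step sparsity schedule, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average client weights, weighted by local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def prune_by_magnitude(weights, sparsity):
    """Zero out (at least) the `sparsity` fraction of smallest-magnitude weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def generate_submodels(global_weights, sparsity_levels):
    """Gradually compress a trained dense model into submodels of increasing sparsity."""
    return {s: prune_by_magnitude(global_weights, s) for s in sparsity_levels}

# Toy round: aggregate three clients' weights, then derive submodels up to 50% sparsity.
rng = np.random.default_rng(0)
clients = [rng.normal(size=1000) for _ in range(3)]
global_w = fedavg(clients, client_sizes=[600, 300, 100])
for s, w in generate_submodels(global_w, [0.1, 0.2, 0.3, 0.4, 0.5]).items():
    print(f"sparsity {s:.0%}: {np.mean(w == 0):.1%} of weights zeroed")
```

In the full pipeline sketched in the abstract, each submodel would then serve as the initial global model for the devices whose capacity was insufficient for the dense model, so weights learned in earlier rounds are carried over rather than retrained from scratch.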



Acknowledgment

This project has received funding from the RE-ROUTE Project, funded by the European Union’s Horizon Europe research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101086343.

Author information

Corresponding author: Zeyneddin Oz.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Oz, Z., Soygul Oz, C., Malekjafarian, A., Afraz, N., Golpayegani, F. (2024). subMFL: Compatible subModel Generation for Federated Learning in Device Heterogeneous Environment. In: Zeinalipour, D., et al. Euro-Par 2023: Parallel Processing Workshops. Euro-Par 2023. Lecture Notes in Computer Science, vol 14352. Springer, Cham. https://doi.org/10.1007/978-3-031-48803-0_5


  • DOI: https://doi.org/10.1007/978-3-031-48803-0_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48802-3

  • Online ISBN: 978-3-031-48803-0

  • eBook Packages: Computer Science; Computer Science (R0)
