
Two-Stream Communication-Efficient Federated Pruning Network

  • Conference paper
PRICAI 2022: Trends in Artificial Intelligence (PRICAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13631)


Abstract

Federated learning is a distributed machine learning framework that enables different parties to collaboratively train a model while protecting data privacy and security. This form of privacy-preserving collaborative learning comes at the cost of significant communication overhead during training. Another key challenge in federated learning is handling the heterogeneity of local data distributions across parties. To address these problems, we propose a novel two-stream communication-efficient federated pruning network (FedPrune), which consists of two parts: in the downstream stage, deep reinforcement learning is used to adaptively prune each layer of the global model to reduce downstream communication costs; in the upstream stage, a pruning method based on the proximal operator is proposed to reduce upstream communication costs while limiting the drift of local updates, making it robust to non-IID client data. FedPrune is evaluated on three DNN models and publicly available datasets. The results demonstrate that it effectively controls the communication overhead of training while maintaining learning performance.
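
The abstract describes the upstream stage only at a high level, and the paper's exact formulation is not reproduced on this page. As a minimal illustrative sketch, the Python snippet below assumes the proximal operator in question is that of the l1 norm (soft thresholding), and that local drift is limited by a FedProx-style quadratic penalty toward the global weights. All names and hyperparameters here (soft_threshold, lam, mu, etc.) are hypothetical, not taken from the paper.

    import torch

    def soft_threshold(w: torch.Tensor, lam: float) -> torch.Tensor:
        # Proximal operator of lam * ||w||_1: shrinks every weight toward
        # zero and sets weights with magnitude below lam exactly to zero.
        return torch.sign(w) * torch.clamp(w.abs() - lam, min=0.0)

    def local_update(model, global_params, loader, loss_fn,
                     lr=0.01, lam=1e-4, mu=0.1, epochs=1):
        # One client round (illustrative): gradient steps on the local loss
        # plus a FedProx-style drift penalty toward the global weights,
        # each followed by a proximal (soft-thresholding) step that
        # sparsifies the local model.
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                # Quadratic penalty limiting drift from the global model.
                for w, w_g in zip(model.parameters(), global_params):
                    loss = loss + (mu / 2) * (w - w_g).pow(2).sum()
                loss.backward()
                opt.step()
                # Proximal step after each gradient step (ISTA-style).
                with torch.no_grad():
                    for w in model.parameters():
                        w.copy_(soft_threshold(w, lr * lam))
        # Weights are sparse after thresholding, so only the nonzero
        # entries need to be uploaded to the server.
        return [w.detach().clone() for w in model.parameters()]

In this sketch the soft-thresholding step zeroes out small weights, so the client only uploads the nonzero entries of its model, reducing upstream traffic; the quadratic term keeps each local model close to the global one, which is the standard remedy for client drift under non-IID data.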



Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 62076179, and in part by the Beijing Natural Science Foundation under Grant Z180006.

Author information


Corresponding author

Correspondence to Liu Yang.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Gu, S., Yang, L., Deng, S., Xu, Z. (2022). Two-Stream Communication-Efficient Federated Pruning Network. In: Khanna, S., Cao, J., Bai, Q., Xu, G. (eds) PRICAI 2022: Trends in Artificial Intelligence. PRICAI 2022. Lecture Notes in Computer Science, vol 13631. Springer, Cham. https://doi.org/10.1007/978-3-031-20868-3_14


  • DOI: https://doi.org/10.1007/978-3-031-20868-3_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20867-6

  • Online ISBN: 978-3-031-20868-3

  • eBook Packages: Computer Science, Computer Science (R0)
