DOI: 10.1145/3426745.3431335

FEWER: Federated Weight Recovery

Published: 01 December 2020

Abstract

In federated learning, local devices independently train a model on their local data, and the server gathers the locally trained models and aggregates them into a shared global model. Federated learning is therefore an approach that decouples model training from direct access to the local data. However, the required periodic communication of model parameters is a primary bottleneck for the efficiency of federated learning. This work proposes a novel federated learning algorithm, Federated Weight Recovery (FEWER), which trains a sparsely pruned model during the training phase. FEWER begins training from an extremely sparse model and gradually grows the model capacity until the model becomes dense at the end of training. The level of sparsity acts as a lever for either increasing accuracy or decreasing communication cost, and practitioners can tune this sparsification to their needs. Our experimental results show that FEWER achieves higher test accuracies with lower communication costs in most of the test cases.
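The abstract describes the mechanism only at a high level, so the following is a minimal sketch of the general sparse-to-dense idea in PyTorch. It is not the authors' implementation: the linear sparsity schedule, the magnitude-based masking, and all function names (sparsity_at_round, magnitude_mask, client_update, server_aggregate) are illustrative assumptions.

import torch

def sparsity_at_round(t, total_rounds, initial_sparsity=0.95):
    """Assumed schedule: sparsity shrinks linearly from an extremely
    sparse model to a fully dense one by the final round."""
    return initial_sparsity * max(0.0, 1.0 - t / total_rounds)

def magnitude_mask(weights, sparsity):
    """Keep only the largest-magnitude weights; zero out the rest."""
    if sparsity <= 0.0:
        return torch.ones_like(weights)
    k = max(1, int(weights.numel() * sparsity))
    threshold = weights.abs().flatten().kthvalue(k).values
    return (weights.abs() > threshold).float()

def client_update(global_weights, local_data, sparsity, lr=0.01, epochs=1):
    """Local training under the current mask; only the unmasked
    (nonzero) entries would need to be sent back to the server."""
    w = global_weights.clone().requires_grad_(True)
    mask = magnitude_mask(global_weights, sparsity)
    for _ in range(epochs):
        for x, y in local_data:
            loss = torch.nn.functional.mse_loss(x @ (w * mask), y)
            loss.backward()
            with torch.no_grad():
                w -= lr * w.grad * mask  # pruned weights stay at zero
                w.grad.zero_()
    return (w * mask).detach()

def server_aggregate(client_weights):
    """FedAvg-style averaging of the sparse client models."""
    return torch.stack(client_weights).mean(dim=0)

# Toy driver with synthetic linear-regression data (illustrative only).
torch.manual_seed(0)
d, rounds, clients = 50, 20, 4
w_global = torch.randn(d)
data = [[(torch.randn(8, d), torch.randn(8))] for _ in range(clients)]
for t in range(rounds):
    s = sparsity_at_round(t, rounds)
    updates = [client_update(w_global, data[c], s) for c in range(clients)]
    w_global = server_aggregate(updates)

In this sketch, each round the server computes the current sparsity, the clients train under the corresponding mask and return only the surviving weights, and the server averages them. Early rounds therefore exchange far fewer parameters than the final dense rounds, which is where the communication savings described in the abstract would come from.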

Supplementary Material

MP4 File (3426745.3431335.mp4)
The presentation video of FEWER: Federated Weight Recovery

Cited By

  • (2024) FedPE: Adaptive Model Pruning-Expanding for Federated Learning on Mobile Devices. IEEE Transactions on Mobile Computing 23(11), 10475-10493. https://doi.org/10.1109/TMC.2024.3374706
  • (2022) Towards predicting client benefit and contribution in federated learning from data imbalance. Proceedings of the 3rd International Workshop on Distributed Machine Learning, 23-29. https://doi.org/10.1145/3565010.3569063

Published In

DistributedML'20: Proceedings of the 1st Workshop on Distributed Machine Learning
December 2020
46 pages
ISBN:9781450381826
DOI:10.1145/3426745
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • National Research Foundation of Korea
  • Institute for Information and Communications Technology Promotion

Conference

CoNEXT '20

Acceptance Rates

Overall acceptance rate: 5 of 10 submissions (50%)

Article Metrics

  • Downloads (last 12 months): 11
  • Downloads (last 6 weeks): 1
Reflects downloads up to 10 Feb 2025
