Personalized, Robust Federated Learning with Fed+

Federated Learning

Abstract

Fed+ is a unified family of methods designed to accommodate the real-world characteristics of federated learning training, such as the lack of IID data across parties and the need for robustness to outliers. Fed+ does not require all parties to reach a consensus: each party trains a local, personalized model through a form of regularization while still benefiting from the federation to improve accuracy and performance. The methods in the Fed+ family are shown to be provably convergent. Experiments indicate that Fed+ outperforms other methods when data is not IID, and that the robust versions of Fed+ outperform other methods in the presence of outliers.
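The personalization scheme the abstract describes — local training regularized toward a federation aggregate, with a robust aggregator instead of forced consensus — can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the penalized objective f_i(w) + (λ/2)·||w − z||², the penalty weight `lam`, and the coordinate-wise median aggregator are assumptions based on the abstract's description.

```python
import numpy as np

def local_update(w, z, grad_fn, lr=0.1, lam=0.5, steps=50):
    """One party's personalized step: gradient descent on
    f_i(w) + (lam/2)*||w - z||^2, which pulls the local model w
    toward the federation point z without forcing consensus."""
    for _ in range(steps):
        w = w - lr * (grad_fn(w) + lam * (w - z))
    return w

def robust_aggregate(models):
    """Coordinate-wise median: a robust alternative to averaging
    that limits the influence of outlier parties."""
    return np.median(np.stack(models), axis=0)

# Toy federation: each party's loss is a quadratic centered at a
# different point (non-IID data); the last party is an outlier.
centers = [np.array([1.0]), np.array([1.2]), np.array([0.9]), np.array([10.0])]
models = [np.zeros(1) for _ in centers]
z = np.zeros(1)
for _ in range(20):  # federation rounds
    # grad of (1/2)||w - c||^2 is (w - c)
    models = [local_update(w, z, lambda v, c=c: v - c)
              for w, c in zip(models, centers)]
    z = robust_aggregate(models)

print(float(z[0]))          # aggregate settles near the inlier cluster
print(float(models[3][0]))  # outlier keeps its own personalized model
```

Because the aggregate is a median rather than a mean, the outlier party barely moves it, yet that party still keeps a personalized model fitted to its own data, illustrating the "no forced consensus" property.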



Author information

Corresponding author

Correspondence to Laura Wynter.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Yu, P., Kundu, A., Wynter, L., Lim, S.H. (2022). Personalized, Robust Federated Learning with Fed+. In: Ludwig, H., Baracaldo, N. (eds) Federated Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-96896-0_5

  • DOI: https://doi.org/10.1007/978-3-030-96896-0_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-96895-3

  • Online ISBN: 978-3-030-96896-0

  • eBook Packages: Computer Science, Computer Science (R0)
