
A Framework of Large-Scale Peer-to-Peer Learning System

  • Conference paper
Neural Information Processing (ICONIP 2023)

Abstract

Federated learning (FL) is a distributed machine learning paradigm in which numerous clients train a model dispatched by a central server while keeping the training data local. However, a failure of the central server can disrupt the entire training framework. Peer-to-peer approaches improve the robustness of the system because all clients interact directly with one another without a server. Their downside is low efficiency: communication among a large number of clients is highly costly, and a synchronous learning framework becomes unworkable in the presence of stragglers. In this paper, we propose a semi-asynchronous peer-to-peer learning system (P2PLSys) suitable for large numbers of clients. The system features a server that manages all clients but does not participate in model aggregation. The server distributes a partial client list to selected clients that have completed local training for local model aggregation. Clients then adjust their own models based on staleness and communicate through a secure multi-party computation protocol for secure aggregation. Our experiments demonstrate the effectiveness of P2PLSys on image classification problems, achieving performance comparable to classical FL algorithms and centralized training.
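The abstract mentions two mechanisms without detailing them: down-weighting stale peer updates, and aggregating models through a secure multi-party computation protocol. The sketch below illustrates one common realization of each idea, assuming additive secret sharing for the secure sum and a polynomial staleness discount; the function names (`secure_sum`, `staleness_weight`, `adjusted_average`) and the specific discount formula are illustrative choices, not taken from the paper, and real model weights would be fixed-point encoded before sharing.

```python
import random

MOD = 2**31  # shared modulus; in practice model weights are fixed-point encoded

def make_shares(value, n):
    """Split an integer into n additive shares that sum to value (mod MOD)."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def secure_sum(values):
    """Sum client values without any single peer seeing another's raw value.

    Each client sends one share to every peer; each peer publishes only the
    sum of the shares it received, and those partial sums reconstruct the
    global total.
    """
    n = len(values)
    share_table = [make_shares(v, n) for v in values]  # row i: client i's shares
    partials = [sum(row[j] for row in share_table) % MOD for j in range(n)]
    return sum(partials) % MOD

def staleness_weight(tau, alpha=0.5):
    """Polynomial discount for a model update that is tau rounds stale."""
    return (1.0 + tau) ** (-alpha)

def adjusted_average(local, peer_models, staleness):
    """Blend peer models into the local model, down-weighting stale peers."""
    weights = [staleness_weight(t) for t in staleness]
    total = sum(weights)
    peer_avg = sum(w * m for w, m in zip(weights, peer_models)) / total
    lam = total / (total + 1.0)  # trust peers more when their updates are fresh
    return (1 - lam) * local + lam * peer_avg
```

Under these assumptions, a client that finishes local training would run `secure_sum` with the peers on its partial client list and then apply `adjusted_average` to fold the result into its own model; the exact protocol and weighting scheme in P2PLSys may differ.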

This study is supported by the National Key R&D Program of China (Grant No. 2022YFB3102100), Shenzhen Fundamental Research Program (Grant No. JCYJ20220818102414030), the Major Key Project of PCL (Grant No. PCL2022A03, PCL2023AS7-1), Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies (No. 2022B1212010005), and Shenzhen Science and Technology Program (Grant No. ZDSYS20210623091809029, RCBS20221008093131089).



Author information

Correspondence to Peiyi Han.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Luo, Y., Han, P., Luo, W., Xue, S., Chen, K., Song, L. (2024). A Framework of Large-Scale Peer-to-Peer Learning System. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Lecture Notes in Computer Science, vol 14448. Springer, Singapore. https://doi.org/10.1007/978-981-99-8082-6_3


  • DOI: https://doi.org/10.1007/978-981-99-8082-6_3


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8081-9

  • Online ISBN: 978-981-99-8082-6

  • eBook Packages: Computer Science, Computer Science (R0)
