
Split Learning: A Resource Efficient Model and Data Parallel Approach for Distributed Deep Learning

Abstract

Resource constraints, workload overheads, lack of trust, and competition hinder the sharing of raw data across institutions, leading to a shortage of data for training state-of-the-art deep learning models. Split learning is a model- and data-parallel approach to distributed machine learning that provides a highly resource-efficient way to overcome these problems. It works by partitioning a conventional deep learning architecture so that some layers of the network remain private to the client while the rest are centrally shared at the server. Distributed models can then be trained without any sharing of raw data, while reducing the computation and communication required of any single client. Split learning comes in several variants, each suited to a specific problem setting. In this chapter we present theoretical, empirical, and practical aspects of split learning and of the variants that can be chosen to match the application at hand.
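
To make the partitioning concrete, below is a minimal, single-process sketch of vanilla split learning, assuming PyTorch; the layer sizes, optimizer settings, and synthetic batch are illustrative assumptions rather than the chapter's reference implementation. A client forwards its private data up to the cut layer, transmits only the cut-layer activations ("smashed data") to the server, and receives back only the gradient at that cut to finish backpropagation through its private layers.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Client keeps the first layers private; only cut-layer activations leave the client.
client_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
# Server holds the remaining shared layers and the task head.
server_model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for a client's private data (hypothetical shapes).
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))

for step in range(5):
    client_opt.zero_grad()
    server_opt.zero_grad()

    # Client forward pass up to the cut layer: this is the "smashed data".
    smashed = client_model(x)
    # Detaching mimics transmission over the network; the server sees only activations.
    server_in = smashed.detach().requires_grad_(True)

    # Server completes the forward pass, computes the loss, and backpropagates
    # through its own layers.
    loss = loss_fn(server_model(server_in), y)
    loss.backward()
    server_opt.step()

    # Only the gradient at the cut layer returns to the client, which finishes
    # backpropagation through its private layers.
    smashed.backward(server_in.grad)
    client_opt.step()

    print(f"step {step}: loss = {loss.item():.4f}")
```

The detach-and-reattach at the cut layer is what keeps raw data and the client's weights on the client: only activations travel forward and only their gradients travel back, which is the source of the computation and communication savings described above.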



Author information

Correspondence to Praneeth Vepakomma.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Vepakomma, P., Raskar, R. (2022). Split Learning: A Resource Efficient Model and Data Parallel Approach for Distributed Deep Learning. In: Ludwig, H., Baracaldo, N. (eds) Federated Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-96896-0_19

  • DOI: https://doi.org/10.1007/978-3-030-96896-0_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-96895-3

  • Online ISBN: 978-3-030-96896-0

  • eBook Packages: Computer Science, Computer Science (R0)
