DOI: 10.1145/3556557.3557953

Federated split GANs

Published: 22 November 2022

Abstract

Mobile devices and the immense amount and variety of data they generate are key enablers of machine learning (ML)-based applications. Traditional ML techniques have shifted toward new paradigms such as federated learning (FL) and split learning (SL) to improve the protection of users' data privacy. However, SL often relies on servers located at the edge or in the cloud to train the computationally heavy parts of an ML model, so as to avoid draining the limited resources of client devices, which can expose device data to such third parties.
This work proposes an alternative approach that trains computationally heavy ML models on users' devices themselves, where the corresponding device data resides. Specifically, we focus on GANs (generative adversarial networks) and leverage their network architecture to preserve data privacy. We train the discriminative part of a GAN on users' devices with their data, whereas the generative model is trained remotely (e.g., on a server) and therefore never needs access to the devices' true data. Moreover, our approach shares the computational load of training the discriminative model among users' devices, proportionally to their computation capabilities, by means of SL. We implement our proposed collaborative training scheme of a computationally heavy GAN model on simulated resource-constrained devices. The results show that our system preserves data privacy, keeps training time short, and yields the same model accuracy as when the model is trained on devices with unconstrained resources (e.g., in the cloud). Our code can be found at https://github.com/YukariSonz/FSL-GAN.

Published In

FedEdge '22: Proceedings of the 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network
October 2022
34 pages
ISBN: 9781450395212
DOI: 10.1145/3556557

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. GAN
  2. federated learning
  3. split learning

Qualifiers

  • Research-article

Funding Sources

  • National Key R&D Program

Conference

ACM MobiCom '22
