Migrating Models: A Decentralized View on Federated Learning

  • Conference paper
  • In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)

Abstract

Federated learning (FL) research attempts to alleviate the increasing difficulty of training machine learning models when the training data is generated in a massively distributed way. The key idea behind these methods is to move training to the locations where the data is generated, and to periodically collect and redistribute the model updates. We present our approach for transforming the general FL training algorithm into a peer-to-peer-like process. Our experiments on baseline image classification datasets show that omitting central coordination in FL is feasible.
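
As a rough illustration of what such a peer-to-peer-like process can look like, the sketch below lets every node train the model it currently holds on its local data and then migrate it to a randomly chosen peer, which aggregates a small buffer of arriving models before training continues. This is only a schematic reading of the abstract and the notes below, not the authors' algorithm: the random peer selection, the plain parameter averaging, the buffer-triggered aggregation and all names (migrate_models, average_weights, train_local) are illustrative assumptions.

```python
# Schematic, simplified sketch of a peer-to-peer "migrating models" loop.
# Peer selection, parameter averaging and the buffer trigger are assumptions
# for illustration only, not the authors' algorithm.
import random
from typing import Callable, List

import numpy as np

Weights = List[np.ndarray]


def average_weights(models: List[Weights]) -> Weights:
    """Element-wise average of several models' parameter lists."""
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*models)]


def migrate_models(nodes: List[object],
                   train_local: Callable[[object, Weights], Weights],
                   init: Weights,
                   rounds: int,
                   buffer_size: int) -> List[Weights]:
    """One possible loop: train locally, migrate to a random peer,
    aggregate whenever a peer has collected buffer_size models."""
    current = [[w.copy() for w in init] for _ in nodes]   # model currently held by each node
    buffers = [[] for _ in nodes]                         # models that migrated to each node
    for _ in range(rounds):
        for i, node in enumerate(nodes):
            trained = train_local(node, current[i])       # local training on the node's data
            j = random.randrange(len(nodes))              # migrate the model to a random peer
            buffers[j].append(trained)
        for i in range(len(nodes)):
            if len(buffers[i]) >= buffer_size:
                current[i] = average_weights(buffers[i])  # aggregate the buffered models
                buffers[i] = []
    return current
```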

This work was partially supported by the project “Application Domain Specific Highly Reliable IT Solutions” financed by the National Research, Development and Innovation Fund of Hungary (TKP2020-NKA-06).


Notes

  1. A simple heuristic we used here was to extend the buffer size if the exponential moving average of the model's performance on the new datasets (i.e., on the newly visited nodes) shows no improvement after \(\theta \) steps, where \(\theta \) is a hyper-parameter; a sketch of this heuristic is given after these notes.

  2. Following the Keras reference model for MNIST (no longer available online); a sketch is given after these notes:

     input: 784-dimensional vector (\(=28 \times 28\)) \(\rightarrow \) dropout \(\rightarrow \) dense with 128 units \(\rightarrow \) sigmoid activation \(\rightarrow \) dropout \(\rightarrow \) dense with 10 units \(\rightarrow \) softmax.

  3. Following the Keras reference model for CIFAR-10 (no longer available online); a sketch is given after these notes:

     input: \(32\times 32 \times 3\) image \(\rightarrow \) 2d convolution with 32 \(3\times 3\) filters, same padding and ReLU \(\rightarrow \) 2d convolution with 32 \(3\times 3\) filters, same padding and ReLU \(\rightarrow \) \(2\times 2\) max pooling \(\rightarrow \) dropout \(\rightarrow \) 2d convolution with 64 \(3\times 3\) filters, same padding and ReLU \(\rightarrow \) 2d convolution with 64 \(3\times 3\) filters, same padding and ReLU \(\rightarrow \) \(2\times 2\) max pooling \(\rightarrow \) dropout \(\rightarrow \) dense with 512 units and ReLU \(\rightarrow \) dropout \(\rightarrow \) dense with 10 units \(\rightarrow \) softmax.

  4. For the buffer-size extension we set the improvement threshold to \(\theta =15\); that is, if the accuracy did not improve during the last 15 relocations, we extend the number of models to aggregate.

  5. The accuracy of the ensemble of MMs, along with its communication and computation costs, exceeds that of FedAvg. The reason is that, due to the resource intensity of CIFAR-10 training, the number of participating nodes was reduced to 50, which means fewer models were used in FedAvg, while the hyper-parameters (\(K'\) and maximum \(\sigma \)) of sMM and MM were kept unchanged.

  6. No hyper-parameter search was performed for the \(\gamma \) parameter of FedAvg; \(\gamma = 1/10\) was used following [23], mainly because of our limited computational resources. However, since the setting of \(\gamma \) also affects sMM and MM (e.g., the buffer size or the ensemble count), the gap between the communication and computation costs might be different under a large-scale hyper-parameter search.
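
The buffer-extension heuristic referred to in notes 1 and 4 can be written down in a few lines. The sketch below illustrates the idea rather than the authors' implementation: the patience threshold \(\theta = 15\) comes from note 4, while the class name BufferExtender, the decay rate of the exponential moving average and the extension step of one model are assumptions.

```python
# Sketch of the buffer-extension heuristic from notes 1 and 4. Only theta = 15 is
# taken from the paper; the decay rate and the extension step are assumptions.
class BufferExtender:
    def __init__(self, theta: int = 15, ema_decay: float = 0.9, extension: int = 1):
        self.theta = theta            # relocations without improvement before extending
        self.ema_decay = ema_decay    # smoothing factor of the exponential moving average
        self.extension = extension    # extra models to aggregate once performance stagnates
        self.ema = None
        self.best_ema = float("-inf")
        self.stale_steps = 0

    def update(self, accuracy_on_new_node: float, buffer_size: int) -> int:
        """Update the moving average after a relocation; return the (possibly extended) buffer size."""
        if self.ema is None:
            self.ema = accuracy_on_new_node
        else:
            self.ema = self.ema_decay * self.ema + (1.0 - self.ema_decay) * accuracy_on_new_node
        if self.ema > self.best_ema:
            self.best_ema = self.ema
            self.stale_steps = 0
        else:
            self.stale_steps += 1
        if self.stale_steps >= self.theta:
            buffer_size += self.extension   # aggregate more models from now on
            self.stale_steps = 0
        return buffer_size
```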
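
The layer sequences given in notes 2 and 3 correspond roughly to the following TensorFlow/Keras sketch. Since the original Keras reference models are no longer available, the dropout rates and the Flatten layer between the convolutional and dense parts are assumptions; only the layer order is taken from the notes.

```python
# Sketch of the two reference architectures from notes 2 and 3 (TensorFlow 2.x Keras).
# Dropout rates and the Flatten layer are assumptions; the layer order follows the notes.
from tensorflow import keras
from tensorflow.keras import layers


def build_mnist_mlp(dropout_rate: float = 0.2) -> keras.Model:
    """Note 2: 784 -> dropout -> dense(128, sigmoid) -> dropout -> dense(10, softmax)."""
    return keras.Sequential([
        keras.Input(shape=(784,)),                 # flattened 28 x 28 image
        layers.Dropout(dropout_rate),
        layers.Dense(128, activation="sigmoid"),
        layers.Dropout(dropout_rate),
        layers.Dense(10, activation="softmax"),
    ])


def build_cifar10_cnn() -> keras.Model:
    """Note 3: two 32-filter and two 64-filter conv layers, then dense(512) and dense(10)."""
    return keras.Sequential([
        keras.Input(shape=(32, 32, 3)),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),                          # implicit between the conv and dense parts
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(10, activation="softmax"),
    ])
```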

References

  1. Aji, A.F., Heafield, K.: Sparse communication for distributed gradient descent. arXiv preprint arXiv:1704.05021 (2017)

  2. Alistarh, D., Grubic, D., Li, J., Tomioka, R., Vojnovic, M.: QSGD: communication-efficient SGD via gradient quantization and encoding. In: Advances in Neural Information Processing Systems, pp. 1709–1720 (2017)

  3. Bellet, A., Guerraoui, R., Taziki, M., Tommasi, M.: Personalized and private peer-to-peer machine learning. arXiv preprint arXiv:1705.08435 (2017)

  4. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J., et al.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)

  5. Chen, J., Pan, X., Monga, R., Bengio, S., Jozefowicz, R.: Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981 (2016)

  6. Chen, Y., Sun, X., Jin, Y.: Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation. IEEE Trans. Neural Netw. Learn. Syst. (2019)

  7. Dean, J., et al.: Large scale distributed deep networks. In: Advances in Neural Information Processing Systems, pp. 1223–1231 (2012)

  8. Dimakis, A.G., Kar, S., Moura, J.M., Rabbat, M.G., Scaglione, A.: Gossip algorithms for distributed signal processing. Proc. IEEE 98(11), 1847–1864 (2010)

  9. Dryden, N., Moon, T., Jacobs, S.A., Van Essen, B.: Communication quantization for data-parallel training of deep neural networks. In: 2016 2nd Workshop on Machine Learning in HPC Environments (MLHPC), pp. 1–8. IEEE (2016)

  10. Du, W., Zeng, X., Yan, M., Zhang, M.: Efficient federated learning via variational dropout (2018)

  11. Hegedűs, I., Danner, G., Jelasity, M.: Gossip learning as a decentralized alternative to federated learning. In: Pereira, J., Ricci, L. (eds.) DAIS 2019. LNCS, vol. 11534, pp. 74–90. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-22496-7_5

  12. Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)

  13. Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 (2012)

  14. Hitaj, B., Ateniese, G., Perez-Cruz, F.: Deep models under the GAN: information leakage from collaborative deep learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 603–618 (2017)

  15. Hsu, T.M.H., Qi, H., Brown, M.: Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335 (2019)

  16. Jaggi, M., et al.: Communication-efficient distributed dual coordinate ascent. In: Advances in Neural Information Processing Systems, pp. 3068–3076 (2014)

  17. Khaled, A., Mishchenko, K., Richtárik, P.: First analysis of local GD on heterogeneous data (2019)

  18. Kingma, D.P., Salimans, T., Welling, M.: Variational dropout and the local reparameterization trick. In: Advances in Neural Information Processing Systems, pp. 2575–2583 (2015)

  19. Konečný, J., McMahan, H.B., Ramage, D., Richtárik, P.: Federated optimization: distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527 (2016)

  20. Li, T., Sahu, A.K., Zaheer, M., Sanjabi, M., Talwalkar, A., Smith, V.: Federated optimization in heterogeneous networks (2018)

  21. Li, X., Huang, K., Yang, W., Wang, S., Zhang, Z.: On the convergence of FedAvg on non-IID data. arXiv preprint arXiv:1907.02189 (2019)

  22. Masters, D., Luschi, C.: Revisiting small batch training for deep neural networks. arXiv preprint arXiv:1804.07612 (2018)

  23. McMahan, H.B., Moore, E., Ramage, D., Hampson, S., et al.: Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629 (2016)

  24. Nishio, T., Yonetani, R.: Client selection for federated learning with heterogeneous resources in mobile edge. CoRR abs/1804.08333 (2018)

  25. Seide, F., Fu, H., Droppo, J., Li, G., Yu, D.: 1-bit stochastic gradient descent and application to data-parallel distributed training of speech DNNs. In: Interspeech (2014)

  26. Stich, S.U.: Local SGD converges fast and communicates little (2018)

  27. Strom, N.: Scalable distributed DNN training using commodity GPU cloud computing. In: Sixteenth Annual Conference of the International Speech Communication Association (2015)

  28. Vanhaesebrouck, P., Bellet, A., Tommasi, M.: Decentralized collaborative learning of personalized models over networks (2017)

  29. Wang, H., Kaplan, Z., Niu, D., Li, B.: Optimizing federated learning on non-IID data with reinforcement learning. In: IEEE INFOCOM 2020-IEEE Conference on Computer Communications, pp. 1698–1707. IEEE (2020)

  30. Wang, J., Joshi, G.: Cooperative SGD: a unified framework for the design and analysis of communication-efficient SGD algorithms (2018)

  31. Wang, S., et al.: Adaptive federated learning in resource constrained edge computing systems. In: IEEE INFOCOM 2018-IEEE Conference on Computer Communications (2018)

  32. Wei, E., Ozdaglar, A.: On the O(1/k) convergence of asynchronous distributed alternating direction method of multipliers. In: 2013 IEEE Global Conference on Signal and Information Processing, pp. 551–554. IEEE (2013)

  33. Woodworth, B., Wang, J., Smith, A., McMahan, B., Srebro, N.: Graph oracle models, lower bounds, and gaps for parallel stochastic optimization (2018)

  34. Yang, H.H., Liu, Z., Quek, T.Q., Poor, H.V.: Scheduling policies for federated learning in wireless networks. IEEE Trans. Commun. (2019)

  35. Yu, H., Yang, S., Zhu, S.: Parallel restarted SGD with faster convergence and less communication: demystifying why model averaging works for deep learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 5693–5700 (2019)

  36. Yurochkin, M., Agarwal, M., Ghosh, S., Greenewald, K., Hoang, T.N., Khazaeni, Y.: Bayesian nonparametric federated learning of neural networks. arXiv preprint arXiv:1905.12022 (2019)

  37. Zhao, B., Mopuri, K.R., Bilen, H.: iDLG: improved deep leakage from gradients. arXiv preprint arXiv:2001.02610 (2020)

  38. Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., Chandra, V.: Federated learning with non-IID data. arXiv preprint arXiv:1806.00582 (2018)

  39. Zhou, F., Cong, G.: On the convergence properties of a k-step averaging stochastic gradient descent algorithm for nonconvex optimization. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (2018)

Author information

Corresponding author

Correspondence to Péter Kiss.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Kiss, P., Horváth, T. (2021). Migrating Models: A Decentralized View on Federated Learning. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_15

  • DOI: https://doi.org/10.1007/978-3-030-93736-2_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

  • eBook Packages: Computer Science, Computer Science (R0)
